3,052,746
<p>I want to solve <span class="math-container">$2x = \sqrt{x+3}$</span>, which I have tried as below:</p> <p><span class="math-container">$$\begin{equation} 4x^2 - x -3 = 0 \\ x^2 - \frac14 x - \frac34 = 0 \\ x^2 - \frac14x = \frac34 \\ \left(x - \frac12 \right)^2 = 1 \\ x = \frac32 , -\frac12 \end{equation}$$</span></p> <p>This, however, is incorrect.</p> <p>What is wrong with my solution?</p>
fleablood
280,126
<p><span class="math-container">$\require{cancel}$</span>Additional details in <span class="math-container">$\color{blue}{blue}$</span>. Important detail in <span class="math-container">$\color{green}{green}$</span>. Mistakes in <span class="math-container">$\color{red}{\cancel{\text{canceled red}}}$</span>. Corrections in <span class="math-container">$\color{purple}{purple}$</span>.</p> <p><span class="math-container">$\begin{equation} \color{blue}{2x = \sqrt{x+3}}\color{green}{\ge 0}\\ \color{blue}{4x^2 = x+3}\color{green}{\text{!AND!} x\ge 0}\\ 4x^2 - x -3 = 0 \\ x^2 - \frac14 x - \frac34 = 0 \\ x^2 - \frac14x = \frac34 \\ \color{blue}{x^2 -2\cdot\frac 18=\frac 34}\\ \color{blue}{x^2 -2\cdot\frac 18 +(\frac 18)^2=\frac 34+\frac 1{64}}\\ \color{red}{\cancel{(x - \frac12 )^2 = 1}}\color{purple}{(x - \frac18)^2 = \frac {49}{64}} \\ \color{blue}{x-\frac 18=\pm \frac 78}\\ \color{red}{\cancel{x = \frac32 , -\frac12}}\color{purple}{x = 1 , -\frac34}\color{green}{\text{!AND!} x\ge 0}\\ \color{purple}{x = 1}\\ \end{equation}$</span></p>
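The extraneous-root issue fleablood flags with the green $x\ge 0$ conditions is easy to check numerically; a minimal sketch in Python (not part of either post):

```python
import math

# x = 1 satisfies the original equation 2x = sqrt(x + 3)
assert math.isclose(2 * 1, math.sqrt(1 + 3))

# x = -3/4 satisfies the squared equation 4x^2 = x + 3 ...
x = -0.75
assert math.isclose(4 * x**2, x + 3)

# ... but not the original: the left side 2x is negative while a square
# root is nonnegative, so squaring introduced an extraneous root
assert not math.isclose(2 * x, math.sqrt(x + 3))
```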
2,343,027
<p>I am having trouble proving this map to be one-to-one. Nothing is said about the relationship between $K$ and $R$. Or is it not necessary that they be related somehow? Please help.</p>
egreg
62,967
<p>Hint: a ring homomorphism $f$ is 1-1 if and only if $\ker f=\{0\}$; what are the ideals of $K$?</p>
2,343,027
Pierre-Yves Gaillard
660
<p>The statement is false: consider the unique morphism from a field to the zero ring.</p> <p><strong>Edit.</strong> The above lines answer the question </p> <blockquote> <p>Let $f: K \rightarrow R$ be a ring homomorphism, where $K$ is a field and $R$ is a commutative ring with unity. Prove that $f$ is one-to-one,</p> </blockquote> <p>which was the first version. The point to understand is: Why (in commutative algebra) do we require $0\ne1$ for domains (and in particular for fields), but not for rings? The standard answer is:</p> <p>We do <strong>not</strong> require $0\ne1$ for rings because we want any family of rings to have a product. But domains don't have (nontrivial) products anyway, so it is simpler to reject the "anomaly" $0=1$ for them.</p>
1,416,275
<p>I've just begun the study of linear functionals and the dual basis. The book I'm reading says the dual space $V^{*}$ may be identified with the space of row vectors. This notion seems very important, but I'm having trouble understanding it. Here is the text:</p> <blockquote> <p>Let $\sigma$ be an element of the dual space $V^{*}$, i.e. a linear map $\sigma: V \rightarrow K$. Choose a basis for $V$, say the usual basis, then $\sigma$ is represented by a matrix $[\sigma]$. However, such a matrix $[\sigma]$ is a row vector. Also, the map $\sigma \rightarrow [\sigma]$ is a vector space isomorphism.</p> <p>On the other hand, any row vector $\phi = (a_1, \ldots, a_n)$ defines a linear functional $\phi: V \rightarrow K$ by \begin{align*} \phi(x_1, \ldots, x_n) = (a_1, \ldots, a_n) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \end{align*} or simply $\phi(x_1, \ldots, x_n) = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n$.</p> </blockquote> <p>The author speaks of the matrix representation $[\sigma]$, but he doesn't really explain it. Why is this matrix a row vector? Also, about the second part of the text: is this merely a definition? Why does he claim $\phi(x_1, \ldots, x_n) = a_1 x_1 + \ldots + a_n x_n$? Isn't the output of a linear functional supposed to be a scalar, and not a vector? And this is clearly a linear combination of vectors...</p> <p>Maybe some of the advanced mathematicians here could give me some examples, because I can't get my head around this at the moment. </p>
Matematleta
138,929
<p>The elements of $V^*$ operate on $V$ in the obvious way: $(\varphi ,v)\mapsto \varphi (v)$.</p> <p>We ask how we can represent this using the bases for $V$ and $V^*$.</p> <p>Given the basis $\mathcal B=\left \{ v_{1},\cdots ,v_{n} \right \}$ for $V$, there is the natural basis $\mathcal B^*=\left \{ v_{1}^*,\cdots ,v_{n}^* \right \}$ for $V^*$ defined by </p> <p>$v_{i}^*(v_{j})=\delta _{ij}, \quad 1\leq i,j\leq n.$</p> <p>Then, every $v\in V$ is of the form $v=a_{1}v_{1}+\cdots +a_{n}v_{n}$, which in turn can be represented by the column vector </p> <p>$\pmatrix{a_1\\ \vdots\\ a_n}.$</p> <p>Likewise, every $\varphi \in V^*$ is of the form $\varphi =b_{1}v_{1}^*+\cdots +b_{n}v_{n}^*$, which can be represented by the row vector </p> <p>$(b_1,\cdots ,b_n).$</p> <p>Now, using the definition of the $v^*_i$, it is easy to see that </p> <p>$(\varphi ,v)\mapsto b_1a_1+\cdots +b_na_n,$ which is just </p> <p>$(b_1,\cdots ,b_n)\pmatrix{a_1\\ \vdots\\ a_n}.$</p>
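Matematleta's pairing of row and column vectors can be made concrete with a few lines of numpy (the numbers below are illustrative, not from the answer):

```python
import numpy as np

phi = np.array([[2.0, -1.0, 3.0]])   # row vector (b_1, b_2, b_3): a functional
v = np.array([[1.0], [4.0], [2.0]])  # column vector (a_1, a_2, a_3): a vector

# applying the functional is a 1x3 times 3x1 matrix product, i.e. a scalar
result = (phi @ v).item()
assert result == 2*1 + (-1)*4 + 3*2  # b_1 a_1 + b_2 a_2 + b_3 a_3
```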
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and to do all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Jaap Eldering
3,928
<p>I just finished teaching a course on linear algebra to non-math students. I used a combination of latex-beamer slides and blackboard. One advantage of the slides was being able to do examples of Gauss elimination and inversion of matrices quicker than on the blackboard and without making mistakes. On the other hand, I feel that slides can easily make a lecture less interactive.</p> <p>And, I must agree with Thierry Zell: it took quite some time to prepare these slides, even though I could adapt the latex sources from the previous people teaching this course.</p>
80,056
Adrien Hardy
15,517
<p>It also depends on how you think it is best for your students to learn: by listening (hopefully carefully) to the course and then reading notes you provide them, OR by letting them write down the content themselves. </p> <p>I don't like the first option too much, certainly because I am not used to it, and I believe it is a huge advantage to write everything down yourself in the moment, because of obvious memorization advantages (it was important for me to have my own notations, a kind of taming procedure); and, once you read your notes again, you usually remember the parts where the teacher got enthusiastic. </p> <p>Considering then the second option, it is evident to me that the blackboard wins:</p> <ul> <li>you give the students time to write, since you are writing yourself</li> <li>the statements stay up longer (at least if you have enough blackboards, or just keep the main theorem up!) </li> <li>there is more interaction between content, author, and students</li> <li>your eyes are not constantly dried out by that terrible white light</li> <li>it allows improvisation</li> <li>it is classier (personal point of view, I agree) </li> </ul> <p>Against:</p> <ul> <li>it is suicidal (that is, terribly soporific for the students) NOT to prepare your presentation thoroughly, at least as much as the time you would spend on slides</li> <li>it requires good handwriting from the teacher </li> <li>it is not convenient for drawing complex pictures</li> </ul> <p>My conclusion is then the same as André Henriques'!</p>
80,056
GMark
17,764
<p>My solution is to use a tablet PC (the pen-enabled kind, not the modern entertainment tablets like the iPad), hooked up to a data projector. </p> <p>I have "lecture templates" which contain the copying-intensive stuff (statements of theorems, definitions, graphs, complex diagrams) on the page, along with plenty of blank space for annotation. Those go on a website prior to the lecture. The students print them off at home and bring them to class. I then annotate the lecture notes (using a PDF annotator and the tablet pen) and the students take notes as they wish. </p> <p>This, I feel, combines the benefits of having some complex material prepared ahead of time with the benefits of having arguments, calculations, etc. developed in real time, rather than canned in advance. So it avoids the canned slides-whizzing-by problem. </p> <p>The only disadvantages I can see are the limitations of screen size. Sometimes nothing replaces the virtue of a big whiteboard, and having every part of a long development in front of your eyes all at once. In that case, I use a whiteboard.</p>
424,675
<p>Just one simple question:</p> <p>Let $\tau =(56789)(3456)(234)(12)$.</p> <p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p> <p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
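For reference, the computation can be sketched in Python: compose the cycles (below the rightmost cycle acts first; the opposite convention happens to give the same cycle type here), read off the disjoint cycle lengths, and apply the class-size formula $n!/\prod_i l_i^{m_i}\, m_i!$:

```python
from math import factorial, prod
from collections import Counter

def apply_cycles(x, cycles):
    """Apply a product of cycles to x, rightmost cycle acting first."""
    for cycle in reversed(cycles):
        if x in cycle:
            x = cycle[(cycle.index(x) + 1) % len(cycle)]
    return x

cycles = [(5, 6, 7, 8, 9), (3, 4, 5, 6), (2, 3, 4), (1, 2)]
tau = {x: apply_cycles(x, cycles) for x in range(1, 10)}

# read off the disjoint cycle lengths of tau
seen, lengths = set(), []
for start in tau:
    if start not in seen:
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x = tau[x]
            n += 1
        lengths.append(n)

assert sorted(lengths) == [2, 3, 4]   # tau = (142)(36)(5789)

# conjugacy class size in S_9: 9! / prod(l^m * m!) over cycle lengths
mult = Counter(lengths)
size = factorial(9) // prod(l**m * factorial(m) for l, m in mult.items())
assert size == 15120
```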
Dr Anil Kumar
149,989
<p>The finite difference method is the oldest method, applicable to a limited or closed region, while FEM is a structural method for solving partial differential equations. </p>
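As a concrete illustration of the finite difference idea (this sketch is mine, not from the answer): solve $u'' = -2$ on $[0,1]$ with $u(0)=u(1)=0$, whose exact solution is $u(x)=x(1-x)$:

```python
import numpy as np

n = 50                        # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)  # interior nodes

# central second-difference matrix with Dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
f = -2.0 * np.ones(n)         # right-hand side of u'' = -2

u = np.linalg.solve(A, f)
# the central difference is exact for quadratics, so the discrete
# solution matches the exact one up to rounding error
assert np.allclose(u, x * (1 - x), atol=1e-8)
```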
189,293
<p><img src="https://i.stack.imgur.com/Ngfb2.jpg" alt="Vesica Piscis"></p> <p>I have the radius and center $(x,y)$ of both circles, but how do I get the $(x,y)$ of the red circle, or in other words how do I get the $(x,y)$ position of where the circles intersect at the top or bottom?</p>
Sasha
11,069
<p>Let $O_1$ and $O_2$ denote centers of each circle, and $r_1$ and $r_2$ denote their radii. Let $P$ denote the point of intersection you are interested in. We know length of each side of the triangle $\triangle O_1 O_2 P$, hence we can determine its height $h$, i.e. distance from $P$ to the line passing through $O_1$ and $O_2$. It is easiest to do this from the <a href="http://mathworld.wolfram.com/TriangleArea.html" rel="nofollow">triangle area</a> formulas. Let $d$ denote length of $O_1 O_2$, then $$ A(\triangle O_1 O_2 P) = \sqrt{\frac{r_1+r_2+d}{2}\cdot \frac{r_1+r_2-d}{2}\cdot \frac{r_1-r_2+d}{2}\cdot \frac{r_2+d-r_1}{2}} = \frac{1}{2} h d $$ Let $Q$ denote projection of $P$ on $O_1O_2$. Knowing $h$ allows to find length of $O_1Q$ and of $QO_2$ using Pythagorean theorem, allowing to determine coordinates of $Q$, and thus of $P$.</p>
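Sasha's construction translates directly into code; a sketch (the helper name is mine):

```python
import math

def circle_intersections(o1, o2, r1, r2):
    """Both intersection points of circles (center o1, radius r1), (o2, r2)."""
    (x1, y1), (x2, y2) = o1, o2
    d = math.hypot(x2 - x1, y2 - y1)          # length of O1 O2
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                             # circles do not intersect
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # distance from O1 to Q
    h = math.sqrt(r1**2 - a**2)               # height of the triangle, |PQ|
    qx, qy = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    # move from Q perpendicular to O1 O2 by h, in both directions
    return [(qx - h * (y2 - y1) / d, qy + h * (x2 - x1) / d),
            (qx + h * (y2 - y1) / d, qy - h * (x2 - x1) / d)]

# two unit circles whose centers are one radius apart (a vesica piscis):
# they meet at (1/2, +-sqrt(3)/2)
pts = circle_intersections((0, 0), (1, 0), 1, 1)
assert any(math.isclose(py, math.sqrt(3) / 2) for _, py in pts)
```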
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
Mikhail Katz
72,694
<p>The reason the "Klein bottle" is called a bottle has its origin in something of a German pun on Fläche/Flasche; see <a href="http://en.wikipedia.org/wiki/Klein_bottle">here</a></p>
466,576
Pierre-Yves Gaillard
660
<p>Gabriel introduced the notation $\text{Sex}(\mathcal A,\mathcal B)$ to denote the category of left exact functors from $\mathcal A$ to $\mathcal B$. This is because the Latin word for <em>left</em> (which is <em>sinister</em>) starts with an S.</p>
466,576
Jean-Sébastien
31,493
<p>QED comes from the Latin <em>quod erat demonstrandum</em>.</p>
466,576
MJD
25,554
<p>The identity element of a group is $e$ for <em>Einheit</em>, the German word for "unit".</p>
1,354,745
<p>Let the polynomial $p(z)=z^2+az+b$ be such that $a$ and $b$ are complex numbers and $|p(z)|=1$ whenever $|z|=1$. Prove that $a=0$ and $b=0$.</p> <p>I could not make much progress. I let $z=e^{i\theta}$, $a=a_1+ib_1$ and $b=a_2+ib_2$. </p> <p>Using these values in $p(z)$ I got $|p(z)|^2=1=(\cos (2\theta)-a_2\sin (\theta)+a_1\cos (\theta)+b_1)^2+(\sin(2\theta)+a_1\sin (\theta)+a_2\cos (\theta)+b_2)^2$</p> <p>But I don't see how to proceed further, nor can I think of any other approach. So, someone please help. I don't know complex analysis, so it would be more helpful if someone could provide hints/solutions that don't use complex analysis.</p>
Dleep
240,562
<p>Hint: What happens when $ z = 1 $, $ z = i $, $z = -1$, $z = -i$ ?</p>
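A numeric sanity check of what the hint leads to (an illustration of the claim, not a proof): with $a=b=0$ the modulus $|p(z)|$ is identically $1$ on the unit circle, while the sample nonzero coefficients below break that:

```python
import cmath

def max_deviation(a, b, samples=400):
    """Largest value of | |z^2 + a z + b| - 1 | over points of the unit circle."""
    worst = 0.0
    for k in range(samples):
        z = cmath.exp(2j * cmath.pi * k / samples)
        worst = max(worst, abs(abs(z**2 + a * z + b) - 1))
    return worst

assert max_deviation(0, 0) < 1e-12      # p(z) = z^2 has |p(z)| = 1 on |z| = 1
assert max_deviation(0.1, 0) > 0.01     # perturbing a breaks it
assert max_deviation(0, 0.1j) > 0.01    # perturbing b breaks it
```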
4,498,203
<p>We know that each row (and each column) of composition table of a finite group, is a rearrangement (permutation) of the elements of the group.</p> <p>How about the other way round? If we have a composition table where each row and each column is a permutation of the elements of a set, does this composition table necessarily define a group?</p> <p>If not then give a counter example.</p>
Shaun
104,041
<p>No.</p> <p>Consider</p> <p><span class="math-container">$$\begin{array}{c|ccc} \ast &amp; e &amp; a &amp; b\\ \hline e &amp; e &amp; a &amp; b\\ a &amp; b &amp; e &amp; a\\ b &amp; a &amp; b &amp; e \end{array}.$$</span></p> <p>There is no identity, so it cannot be a group.</p>
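Shaun's counterexample can be checked mechanically; a small sketch (the table is transcribed from the answer) confirms that every row and column is a permutation, yet no element is a two-sided identity:

```python
elements = ["e", "a", "b"]
table = {   # table[(x, y)] = x * y, copied from the answer above
    ("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b",
    ("a", "e"): "b", ("a", "a"): "e", ("a", "b"): "a",
    ("b", "e"): "a", ("b", "a"): "b", ("b", "b"): "e",
}

# each row and each column is a rearrangement of the elements
for x in elements:
    assert sorted(table[(x, y)] for y in elements) == sorted(elements)
    assert sorted(table[(y, x)] for y in elements) == sorted(elements)

# ... but no element acts as an identity on both sides
identities = [i for i in elements
              if all(table[(i, x)] == x and table[(x, i)] == x
                     for x in elements)]
assert identities == []
```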
2,906,865
<p>I am trying to find a general formula for the sequence $(x_n)$ defined by $$x_1=1, \quad x_{n+1}=\dfrac{7x_n + 5}{x_n + 3}, \quad \forall n&gt;1.$$ I tried putting $y_n = x_n + 3$; then $y_1=4$ and $$\quad y_{n+1}=\dfrac{7(y_n-3) + 5}{y_n }=7 - \dfrac{16}{y_n}, \quad \forall n&gt;1.$$ From here, I can't solve it. How can I determine a general formula for the above sequence?</p> <p>With <em>Mathematica</em>, I found $x_n = \dfrac{5\cdot 4^n-8}{4^n+8}$. I want to know a method to solve the problem, rather than just being given the formula.</p>
Daniel Schepler
337,888
<p>Since the function being iterated is a projective-linear function, it follows that if you let $$ \begin{bmatrix} a_n \\ b_n \end{bmatrix} := \begin{bmatrix} 7 &amp; 5 \\ 1 &amp; 3 \end{bmatrix} ^{n-1} \begin{bmatrix} 1 \\ 1\end{bmatrix}$$ then $x_n = \frac{a_n}{b_n}$. Now, to find the powers of the matrix $\begin{bmatrix} 7 &amp; 5 \\ 1 &amp; 3 \end{bmatrix}$ all you need to do is to diagonalize it.</p>
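Daniel Schepler's matrix formulation is easy to check against the closed form the asker found with Mathematica; a sketch using exact integer arithmetic:

```python
from fractions import Fraction

def mat_mult(A, B):
    """Multiply two 2x2 integer matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def x_n(n):
    """x_n = a_n / b_n where (a_n, b_n)^T = M^(n-1) (1, 1)^T."""
    M, P = [[7, 5], [1, 3]], [[1, 0], [0, 1]]
    for _ in range(n - 1):
        P = mat_mult(P, M)
    return Fraction(P[0][0] + P[0][1], P[1][0] + P[1][1])

# agrees with the closed form x_n = (5*4^n - 8) / (4^n + 8)
for n in range(1, 12):
    assert x_n(n) == Fraction(5 * 4**n - 8, 4**n + 8)
```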
2,906,865
minhthien_2016
336,417
<p>The characteristic equation of the given sequence is <span class="math-container">$$y=\dfrac{7y+5}{y+3} \Leftrightarrow y_1 = 5 \lor y_2 = -1.$$</span> Let us consider the sequence <span class="math-container">$$b_n = \dfrac{x_n-y_1}{x_n - y_2}=\dfrac{x_n-5}{x_n+1}.$$</span> We note that <span class="math-container">$$b_{n+1}=\dfrac{x_{n+1}-5}{x_{n+1}+1}=\dfrac{\dfrac{7x_n+5}{x_n+3}-5}{\dfrac{7x_n+5}{x_n+3} + 1}=\dfrac{2x_n-10}{8x_n + 8} = \frac{1}{4} \cdot \dfrac{x_n-5}{x_n +1 }=\frac{1}{4} b_n.$$</span> Therefore <span class="math-container">$(b_n)$</span> is a geometric progression, with ratio <span class="math-container">$\dfrac{1}{4}$</span> and the first term <span class="math-container">$$b_1 = \frac{x_1-5}{x_1+1}=\frac{1-5}{1+1}=\frac{-4}{2}=-2.$$</span> Therefore <span class="math-container">$$b_{n+1}=b_1 \cdot \left (\dfrac{1}{4}\right)^n= - 2 \cdot \left (\dfrac{1}{4}\right)^n,$$</span> or equivalently, <span class="math-container">$$\dfrac{x_{n+1}-5}{x_{n+1}+1}=- 2 \cdot \left (\dfrac{1}{4}\right)^n \Leftrightarrow x_{n+1} = \frac{5\cdot 4^n - 2}{4^n +2}\Leftrightarrow x_n = \frac{5\cdot 4^n - 8}{4^n + 8}.$$</span></p>
3,002,668
<p>I have to solve this inequality:</p> <p><span class="math-container">$$5 ≤ 4|x − 1| + |2 − 3x|$$</span></p> <p>and prove its solution with one (or two or three) of these sentences:</p> <p><span class="math-container">$$∀x∀y |xy| = |x||y|$$</span></p> <p><span class="math-container">$$∀x∀y(y ≤ |x| ↔ y ≤ x ∨ y ≤ −x)$$</span></p> <p><span class="math-container">$$∀x∀y(|x| ≤ y ↔ x ≤ y ∧ −x ≤ y)$$</span></p> <p>The solution of the inequality is:</p> <p><span class="math-container">$$\left(-\infty, \tfrac{1}{7}\right] \cup \left[\tfrac{11}{7}, \infty\right)$$</span></p> <p>But I have a hard time proving the solution with the sentences. E.g. if I choose the second one I get this:</p> <p>5 ≤ 4(x-1) + (2-3x) ∨ 5 ≤ - [4(x-1) + (2-3x)] &lt;=> </p> <p>5 ≤ 4x - 4 + 2 - 3x ∨ 5 ≤ -(4x-4+2-3x) &lt;=> </p> <p>5 ≤ x-2 ∨ 5 ≤ -x + 2 &lt;=> 7 ≤ x ∨ x ≤ 3</p> <p>and that is wrong.</p> <p>Can someone help me out, please? Sorry for bad English, that is not my first language.</p> <p>EDIT: The sentences should be applicable. I have another inequality which is already solved, and it was done like this:</p> <p>|3x| ≤ |2x − 1|</p> <p>(x ∈ R | −1 ≤ x ≤ 1/5)</p> <p>Sentences:</p> <p>∀x∀y(y ≤ |x| ↔ y ≤ x ∨ y ≤ −x) (1)</p> <p>∀x∀y(|x| ≤ y ↔ x ≤ y ∧ −x ≤ y) (2)</p> <p>Solution: We make another sentence which we have to prove:</p> <p>∀x( |3x| ≤ |2x − 1| ↔ −1 ≤ x ≤ 1/5) (3)</p> <p>|3x| ≤ |2x − 1| ⇔ |3x| ≤ 2x − 1 ∨ |3x| ≤ −(2x − 1) ⇔</p> <p>3x ≤ 2x − 1 ∧ −3x ≤ 2x − 1 ∨ 3x ≤ −(2x − 1) ∧ −3x ≤ −(2x − 1) ⇔</p> <p>x ≤ −1 ∧ 1 ≤ 5x ∨ 5x ≤ 1 ∧ −1 ≤ x ⇔</p> <p>1/5 ≤ x ≤ −1 ∨ −1 ≤ x ≤ 1/5 ⇔ −1 ≤ x ≤ 1/5</p> <p>So we proved sentence (3)</p>
Will Jagy
10,400
<p>Part (I) fill in this table <span class="math-container">$$ \begin{array}{c|c|c|c|c|c|c|c} x &amp; x-1&amp;|x-1|&amp;4|x-1| &amp;-3x&amp;2-3x&amp;|2-3x|&amp; y = 4|x-1| +|2-3x| \\ \hline \frac{-1}{6} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline 0 &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{1}{6} &amp;&amp;&amp;&amp;&amp;&amp; &amp; \\ \hline \frac{1}{3} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{1}{2} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{2}{3} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{5}{6} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline 1 &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{7}{6} &amp;&amp;&amp;&amp;&amp;&amp; &amp; \\ \hline \frac{4}{3} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{3}{2} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{5}{3} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \frac{11}{6} &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline 2 &amp; &amp;&amp;&amp;&amp;&amp;&amp; \\ \hline \end{array} $$</span></p> <p>Part (II) draw on graph paper, horizontal axis called <span class="math-container">$x,$</span> vertical axis called <span class="math-container">$y$</span> </p>
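Will Jagy's table and graph are meant to expose the piecewise-linear structure; the solution set $(-\infty, 1/7] \cup [11/7, \infty)$ stated in the question can also be verified on a grid of exact rationals (a brute-force check, not a proof):

```python
from fractions import Fraction

def holds(x):
    return 5 <= 4 * abs(x - 1) + abs(2 - 3 * x)

# grid in steps of 1/70, which hits the endpoints 1/7 and 11/7 exactly
for k in range(-300, 400):
    x = Fraction(k, 70)
    in_claimed_set = x <= Fraction(1, 7) or x >= Fraction(11, 7)
    assert holds(x) == in_claimed_set
```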
179,583
<p>I have a fairly large array, a billion or so rows by 500,000 columns. I need to calculate the singular value decomposition of this array. The problem is that my computer's RAM will not be able to handle the whole matrix at once. I need an incremental approach to calculating the SVD. This would mean that I could take one, or a couple, or a few hundred/thousand (not too many, though) rows of data at a time, do what I need to do with those numbers, and then throw them away so that I can direct memory toward getting the rest of the data.</p> <p>People have posted a couple of papers on similar issues, such as <a href="http://www.bradblock.com/Incremental_singular_value_decomposition_of_uncertain_data_with_missing_values.pdf" rel="nofollow">http://www.bradblock.com/Incremental_singular_value_decomposition_of_uncertain_data_with_missing_values.pdf</a> and <a href="http://www.jofcis.com/publishedpapers/2012_8_8_3207_3214.pdf" rel="nofollow">http://www.jofcis.com/publishedpapers/2012_8_8_3207_3214.pdf</a>. </p> <p>I am wondering if anyone has done any previous research or has any suggestions on how I should go about approaching this? I really do need the FASTEST approach, without losing too much accuracy in the data.</p>
Hans Engler
9,787
<p>You could compute the SVD of randomly chosen submatrices of your original matrix, as shown e.g. in <a href="http://www.cc.gatech.edu/fac/vempala/papers/dfkvv.pdf" rel="nofollow">the 2004 paper by Drineas, Frieze, Kannan, Vempala and Vinay</a>, and scale the result to obtain an approximate SVD of the original matrix. There has been quite a bit of additional work on randomized matrix methods since then. The grandfather of all this is the Kaczmarz method of 1939 for solving the problem $Ax = b$, if only one row of $A$ at a time is accessible.</p> <p>It might also be useful to check if maybe a few top singular values are sufficient for your purposes. If so, Lanczos methods (e.g.) will result in additional time savings.</p>
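The randomized approach Hans Engler mentions can be sketched in a few lines of numpy; this is the generic Gaussian range-finder construction from the randomized-SVD literature, not code from the cited papers:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Approximate top-`rank` SVD of A via a random Gaussian sketch."""
    rng = np.random.default_rng(rng)
    # sketch the column space: Y = A @ Omega, then orthonormalize
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)
    # project A onto that subspace and take a small exact SVD
    B = Q.T @ A                        # (rank + oversample) x n -- small
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# check on an exactly rank-5 matrix: the sketch captures its range,
# so the truncated reconstruction recovers it up to rounding
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5, rng=1)
assert np.allclose(U * s @ Vt, A, atol=1e-6)
```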
1,396,067
<p><strong>Question:</strong><br/> The bacteria in a certain culture double every $7.3$ hours. The culture has $7,500$ bacteria at the start. How many bacteria will the culture contain after $3$ hours? <br /> <br /> <strong>Possible Answers:</strong><br/> a. $9,449$ bacteria<br/> b. $9,972$ bacteria<br/> c. $40,510$ bacteria<br/> d. $8,247$ bacteria<br/> <br/> <br/>I got this answer right on my quiz but I want to be sure that I can do it again. Please help me with setting up this problem and getting the correct answer.</p>
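The model is exponential growth $N(t) = N_0 \cdot 2^{t/T}$ with doubling time $T = 7.3$ hours; a quick check in Python picks out option (b):

```python
n0, doubling_time, t = 7500, 7.3, 3
population = n0 * 2 ** (t / doubling_time)  # 7500 * 2^(3/7.3)
assert round(population) == 9972            # option (b)
```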
Community
-1
<p>If $x\in n\mathbb{Z}\cap m\mathbb{Z}$, then $x$ is an integer multiple of both $m$ and $n$, and therefore an integer multiple of $\text{lcm}(n,m)$. That is, if both $n$ and $m$ divide $x$, then $\text{lcm}(n,m)$ divides $x$. Many proofs of this are given <a href="https://math.stackexchange.com/questions/727544/cant-understand-a-proof-let-a-b-c-be-integers-if-a-and-b-divide-c-th">here</a>. This implies that every $x\in n\mathbb{Z}\cap m\mathbb{Z}$ is a multiple of $\text{lcm}(n,m)$, thus: $$n\mathbb{Z}\cap m\mathbb{Z} = \langle \text{lcm}(n,m)\rangle.$$ So, you were not wrong, since by basic number theory: $$nm = \gcd(n,m)\text{lcm}(n,m).$$ In general, the intersection of two finitely generated groups is not necessarily finitely generated.</p> <p>Edit:</p> <blockquote> <p>We want to show that every $x\in n\mathbb{Z}\cap m\mathbb{Z}$ is an integer multiple of $\text{lcm}(n,m)$ (think of how every $y\in n\mathbb{Z}$ is an integer multiple of $n$, and $n$ generates $n\mathbb{Z}$). If $x\in n\mathbb{Z}\cap m\mathbb{Z}$ then for some integers $k_1,k_2$, $$x=k_1n\quad\text{and}\quad x=k_2m.$$ By definition, this means that both $n$ and $m$ divide $x$. By the theorem linked above, this implies that $\text{lcm}(n,m)$ divides $x$, which by definition means that there exists a $k\in\mathbb{Z}$ such that $$x=k\cdot\text{lcm}(n,m).$$ Since $x$ was arbitrary, we can conclude that $$n\mathbb{Z}\cap m\mathbb{Z} = \text{lcm}(n,m)\mathbb{Z}.$$</p> </blockquote> <p>To see why it has to be the least multiple, consider $3\mathbb{Z}\cap 5\mathbb{Z}$. Notice that $15\in 3\mathbb{Z}\cap 5\mathbb{Z}$, but $15\notin 30\mathbb{Z}$. So if we attempt to make a larger multiple a generator, some elements of $3\mathbb{Z}\cap 5\mathbb{Z}$ will be left out. </p>
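The two facts the answer leans on, $n\mathbb{Z}\cap m\mathbb{Z}=\operatorname{lcm}(n,m)\mathbb{Z}$ and $nm=\gcd(n,m)\operatorname{lcm}(n,m)$, are easy to spot-check on a bounded window of integers:

```python
from math import gcd

def lcm(n, m):
    return n * m // gcd(n, m)

for n in range(1, 16):
    for m in range(1, 16):
        assert n * m == gcd(n, m) * lcm(n, m)
        # nZ ∩ mZ and lcm(n, m)Z agree on the window [-300, 300]
        common = {x for x in range(-300, 301) if x % n == 0 and x % m == 0}
        assert common == {x for x in range(-300, 301) if x % lcm(n, m) == 0}

# the answer's example: 15 lies in 3Z ∩ 5Z but not in 30Z
assert 15 % 3 == 0 and 15 % 5 == 0 and 15 % 30 != 0
```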
17,143
<p>My next project I'd like to start working on is Domain Coloring. I am aware of the beautiful discussion at:</p> <p><a href="https://mathematica.stackexchange.com/questions/7275/how-can-i-generate-this-domain-coloring-plot">How can I generate this &quot;domain coloring&quot; plot?</a></p> <p>And I am studying it. However, a lot of the articles on domain coloring refer back to Hans Lundmark's page at:</p> <p><a href="http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html" rel="nofollow noreferrer">http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html</a></p> <p>So, I would like to begin my work by using Mathematica to draw these three images based on Hans' notes. I'd appreciate if anyone can provide some code that will produce these images, as I could use it to start my study of the rest of Hans' page.</p> <p><img src="https://i.stack.imgur.com/FuqMb.jpg" alt="arg"></p> <p><img src="https://i.stack.imgur.com/9S0I6.jpg" alt="abs"></p> <p><img src="https://i.stack.imgur.com/8cqhp.png" alt="blend"></p> <p>A very small adjustment. Still learning.</p> <pre><code>g[{f_, cf_}] := DensityPlot[f, {x, -1, 1}, {y, -1, 1}, PlotPoints -&gt; 51, ColorFunction -&gt; cf, Frame -&gt; False]; g /@ {{Arg[-(x + I y)], "SolarColors"}, {Mod[Log[2, Abs[x + I y]], 1], GrayLevel}} ImageMultiply @@ % </code></pre> <p><img src="https://i.stack.imgur.com/115CH.png" alt="scheme-blend-1"></p> <p>Not sure where to put my current question, so I'll update here. Just came back to visit and discovered some wonderful answers at the bottom of this list. 
I do understand the opening code:</p> <pre><code>f[z_] := (z + 2)^2*(z - 1 - 2 I)*(z + I) paint[z_] := Module[{x = Re[z], y = Im[z]}, color = Blend[{Black, Red, Orange, Yellow}, Rescale[ArcTan[-x, -y], {-Pi, Pi}]]; shade = Mod[Log[2, Abs[x + I y]], 1]; Darker[color, shade/4]] </code></pre> <p>But then I encounter difficulty with the following code:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; 400, PlotRangePadding -&gt; 0, MeshStyle -&gt; None, ImageSize -&gt; 300] </code></pre> <p>I'm good with the first few lines. Looks like ParametricPlot is plotting points, where x and y both range from -3 to 3 (correct me if I am wrong). I also understand the ColorFunctionScaling and the ColorFunction lines. I understand Axes, PlotRangePadding, MeshStyle, and ImageSize. Where I am having trouble is with what PlotPoints->50 and Mesh->400 are doing. </p> <p>First of all, my image size is 300. What does PlotPoints->50 mean? Does that mean it will sample an array of 50x50 points out of 300x300 and scale the results to fit in the domain [-3,3]x[-3,3]? My next question is, do those points then get colored? And if so, how are the remainder of the points in the image colored? For example, I tried:</p> <pre><code>Table[ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], PlotPoints -&gt; n, MeshStyle -&gt; None], {n, 10, 50, 10}] </code></pre> <p>And the images got a little sharper as PlotPoints->n increased. </p> <p>Here's another question. What does Mesh->400 do in this situation? 
For example, I tried lowering the mesh number:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; 100, PlotRangePadding -&gt; 0, MeshStyle -&gt; None, ImageSize -&gt; 300] </code></pre> <p>And was completely surprised that it had an effect on the image, particularly when MeshStyle->None. Here's the image I get:</p> <p><img src="https://i.stack.imgur.com/4jqEj.png" alt="today"></p> <p>Why does setting Mesh->100 decrease the sharpness of the image?</p> <p>One final question I have regards adding the mesh lines. Simon suggested<br> For the mesh you could do something like Mesh->{Range[-5,5],Range[-5,5]}, MeshStyle->Opacity[0.5], MeshFunctions->{(Re@f[#1+I #2]&amp;),(Im@f[#1+I #2]&amp;)} and cormullion added them to produce a beautiful result, but I tried this:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; {Range[-5, 5], Range[-5, 5]}, PlotRangePadding -&gt; 0, MeshStyle -&gt; Opacity[0.5], MeshFunctions -&gt; {(Re@f[#1 + I #2] &amp;), (Im@f[#1 + I #2] &amp;)}, ImageSize -&gt; 300] </code></pre> <p>And got this resulting image.</p> <p><img src="https://i.stack.imgur.com/zamAO.png" alt="today2"></p> <p>So I am clearly missing something. Maybe someone could post the code that gives cormullion's last image?</p> <p>OK, just purchased and installed Presentations package. 
Tried this:</p> <pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)], zmin = -2 - 2 I, zmax = 2 + 2 I, colorFunction = Function[arg, HotColor[Rescale[arg, {-Pi, Pi}]]], imgSize = 400}, Draw2D[{ComplexDensityDraw[Arg[f[z]], {z, zmin, zmax}, ColorFunction -&gt; colorFunction, ColorFunctionScaling -&gt; False, Mesh -&gt; 50, MeshFunctions -&gt; {Function[{x, y}, Abs[f[x + I y]]]}, PlotPoints -&gt; {50, 50}]}, Frame -&gt; True, FrameLabel -&gt; {Re, Im}, PlotLabel -&gt; Row[{"Arg coloring and Abs mesh of ", f[z]}], RotateLabel -&gt; False, BaseStyle -&gt; 12, ImageSize -&gt; imgSize]] </code></pre> <p>But got this colorless image.</p> <p><img src="https://i.stack.imgur.com/xSlX8.png" alt="today3"></p> <p>Any thoughts on how to fix this?</p>
J. M.'s persistent exhaustion
50
<p>I might as well put in my own take. I tried a bit to match the coloring used by Hans in his pics, and here's the result:</p> <pre><code>SetAttributes[colorize, Listable]; colorize[z_] := Darker[Blend[{{0, Black}, {1/2, Red}, {3/4, Orange}, {1, Yellow}}, Mod[Arg[z]/(2 π), 1]], If[z == 0, 0, 1 - Mod[Log2[Abs[z]], 1]]] </code></pre> <p>Now, we can do any of two things with this function. The first possibility I shall present is similar to the approach Mark used in <a href="https://mathematica.stackexchange.com/a/16900">this answer</a> for generating fractals; that is, directly generate an <code>Image[]</code> object using the coloring function:</p> <pre><code>Image[Developer`ToPackedArray[ colorize[N[Table[x + I y, {y, 10, -10, -1/20}, {x, -10, 10, 1/20}]]] /. RGBColor -&gt; List]] </code></pre> <p><img src="https://i.stack.imgur.com/BQOGY.png" alt="some hot discs"></p> <p>This, of course, is non-adaptive, and does not allow for the superimposition of meshlines. We thus present the second bit, which is more or less equivalent to the approach proposed in other answers: the use of <code>ParametricPlot[]</code>:</p> <pre><code>ParametricPlot[{x, y}, {x, -15, 15}, {y, -15, 15}, Axes -&gt; None, ColorFunction -&gt; Function[{x, y, u, v}, colorize[u + I v]], ColorFunctionScaling -&gt; False, MeshStyle -&gt; Opacity[2/5, GrayLevel[2/5]], PlotPoints -&gt; 350] </code></pre> <p><img src="https://i.stack.imgur.com/icBsJ.png" alt="some hot discs and a mesh"></p> <p>Probably the only annoying thing about this procedure is that one needs a relatively large amount of plot points just for the colors to come out right. 
The good thing about this compared to the approach of using <code>DensityPlot[]</code> is that the color function is allowed to depend on the parameters themselves, in contrast to <code>DensityPlot[]</code> where the color function is dependent only on the heights.</p> <p>Here's Hans's polynomial example:</p> <pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)]}, ParametricPlot[{x, y}, {x, -4, 4}, {y, -4, 4}, Axes -&gt; None, ColorFunction -&gt; Function[{x, y, u, v}, colorize[f[u + I v]]], ColorFunctionScaling -&gt; False, MeshFunctions -&gt; {Function[{x, y, u, v}, Re[f[u + I v]]], Function[{x, y, u, v}, Im[f[u + I v]]]}, MeshStyle -&gt; Opacity[2/5, GrayLevel[2/5]], PlotPoints -&gt; 400]] </code></pre> <p><img src="https://i.stack.imgur.com/JBd3o.png" alt="burning up a polynomial"></p> <hr> <p>Just for fun, here's a domain-colored map of Hans's polynomial example to the Riemann sphere, using a slightly modified coloring scheme:</p> <pre><code>SetAttributes[colorize, Listable]; colorize[z_] := Blend[{ Blend[{{0, Black}, {1/2, Red}, {3/4, Orange}, {1, Yellow}}, Mod[Arg[z]/(2 π), 1]], ColorData[{"GrayTones", "Reverse"}][If[z == 0, 1, Mod[Log2[Abs[z]], 1]]]}] With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)], h = π/360}, tex = Developer`ToPackedArray[colorize[ f[Table[Cot[φ/2] Exp[I θ], {φ, h, π - h, h}, {θ, -π, π, h}] // N]] /. RGBColor -&gt; List];] ParametricPlot3D[{Cos[θ] Sin[φ], Sin[θ] Sin[φ], Cos[φ]}, {θ, -π, π}, {φ, 0, π}, Axes -&gt; False, Boxed -&gt; False, Lighting -&gt; "Neutral", Mesh -&gt; None, PlotPoints -&gt; 95, PlotStyle -&gt; Texture[tex]] </code></pre> <p><img src="https://i.stack.imgur.com/JFFaE.png" alt="burn up the Riemann sphere"></p>
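Not part of the answer above — just to make the coloring recipe concrete outside the Wolfram Language, here is a rough Python analogue (the function names, grid size, and the exact HSV mapping are my own choices, not from the post). Hue is taken from the argument of f(z) and darkness from the fractional part of log2 of the modulus, the same two ingredients as the `colorize` scheme:

```python
import math, colorsys

def f(z):
    # the polynomial from the example
    return (z + 2)**2 * (z - 1 - 2j) * (z + 1j)

def colorize(w):
    """Map a complex value to an (r, g, b) triple in [0, 1]^3."""
    if w == 0:
        return (0.0, 0.0, 0.0)
    hue = (math.atan2(w.imag, w.real) / (2 * math.pi)) % 1.0
    shade = math.log2(abs(w)) % 1.0            # fractional part -> contour bands
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0 - shade / 2)

# build a small RGB raster over [-4, 4] x [-4, 4]
n = 50
pixels = [[colorize(f(complex(-4 + 8 * i / (n - 1), -4 + 8 * j / (n - 1))))
           for i in range(n)] for j in range(n)]
```

The raster can then be handed to any image library; the point is only that the two ingredients of the scheme are a few lines in any language.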
2,222,966
<p>Given the three line segments below, of lengths a, b and 1, respectively:<a href="https://i.stack.imgur.com/HWoz8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWoz8.jpg" alt="enter image description here"></a></p> <p>construct the following lengths using a compass and ruler: $$\frac{1}{\sqrt{b+\sqrt{a}}} \ \ \text{and} \ \ \ \sqrt[4]{a} $$</p> <p>Make sure to draw the appropriate diagram(s) and describe your process in words. We are also to use the following axioms and state where they are used:</p> <ol> <li>Any two points can be connected by a line segment, </li> <li>Any line segment can be extended to a line, </li> <li>Any point and a line segment define a circle, </li> <li>Points are born as intersections of lines, circles, and lines and circles</li> </ol> <p>Can someone please guide me or show me how to construct this? I know if we draw a triangle whose base (let's suppose this is $a+1$) is the diameter of a semi-circle, then the line perpendicular to this base leading to the top of the semi-circle will divide the triangle into two smaller triangles whose bases are $a$ and $1$. I don't know how to end up with $\sqrt{a}$ from there. But with it, the process can be repeated to end up with $\sqrt[4]{a}$. Can someone explain or show me? I will then be able to tackle a whole lot of other questions.</p>
Will Jagy
10,400
<p>It is all similar right triangles, along with the theorem that, when a triangle has all three vertices on a circle and two of them are the endpoints of a diameter, then it is a right triangle. </p> <p><a href="https://i.stack.imgur.com/GVY8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GVY8h.jpg" alt="enter image description here"></a></p>
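As a numeric sanity check of the altitude-on-hypotenuse idea (my own sketch; the helper `gm` and the sample lengths are hypothetical, not from the answer): the altitude to the hypotenuse is the geometric mean of the two segments it cuts off, and iterating that one construction already reaches both target lengths (the reciprocal of a constructed length is itself constructible by similar triangles).

```python
import math

def gm(p, q):
    # altitude on the hypotenuse of a right triangle whose foot splits it into p and q
    return math.sqrt(p * q)

a, b = 3.7, 1.9                 # arbitrary positive sample lengths

root_a = gm(1, a)               # sqrt(a), from segments 1 and a on a diameter
fourth_root_a = gm(1, root_a)   # repeat the construction: a**(1/4)
target = 1 / gm(1, b + root_a)  # 1/sqrt(b + sqrt(a)); take the reciprocal at the end
```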
590,219
<p>Disclaimer: this is a homework question. I'm looking for direction, not an answer.</p> <blockquote> <p>Given a field <span class="math-container">$F$</span>, show that <span class="math-container">$F[x,x^{-1}]$</span> is a principal ideal domain.</p> </blockquote> <p>I'm unsure how to proceed. Would it be better to prove this directly? (ie, let <span class="math-container">$I$</span> be an ideal, show that <span class="math-container">$I = (f)$</span> for some <span class="math-container">$f \in F[x,x^{-1}]$</span>). The proof that polynomials are a PID would involve (I imagine) division by remainder and use of degree, both of which don't seem to have obvious parallels for Laurent polynomials. Should I try and devise some parallels and mimic the proof for polynomials? Or is this overkill? (or just wrong?)</p> <p><strong>edit 1</strong>: I guess the approach I mentioned above amounts to showing that Laurent polynomials are a euclidean domain (we know that euclidean domain => principle ideal domain, so this would be sufficient) Are they a euclidean domain though? (it seems like this would have been the question if they were, instead of asking if they were a PID).</p> <p><strong>edit 2</strong>: I've spat this out, think most parts of it are correct, though it seems kind of ugly/cumbersome (but that might just be me trying to spell things out more than is needed):</p> <blockquote> <p>Given a Laurent polynomial <span class="math-container">$f \in F[x,x^{-1}]$</span>, define its "negative degree" <span class="math-container">$\deg^-(f)$</span> to be the largest power of <span class="math-container">$x^{-1}$</span> that appears in <span class="math-container">$f$</span>.</p> <p>Let <span class="math-container">$I$</span> be an ideal of <span class="math-container">$F[x,x^{-1}]$</span>. Note that <span class="math-container">$\{x^{-\deg^-(f)}f \mid f \in I\} \subseteq F[x]$</span>. 
Let <span class="math-container">$J$</span> be the ideal in <span class="math-container">$F[x]$</span> generated by this set. <span class="math-container">$F[x]$</span> is a principal ideal domain, so <span class="math-container">$J$</span> is a principal ideal and we have <span class="math-container">$J = (j)$</span> for some <span class="math-container">$j \in F[x]$</span>.</p> <p>We claim <span class="math-container">$I = (j)$</span> (now meaning an ideal of <span class="math-container">$F[x,x^{-1}]$</span>).</p> <p>Let <span class="math-container">$f \in I$</span>. Then <span class="math-container">$x^{-\deg^-(f)}f \in J$</span>, meaning <span class="math-container">$f = x^{\deg^-(f)}g$</span>, where <span class="math-container">$g = x^{-\deg^-(f)}f$</span> is in <span class="math-container">$J$</span>. Because <span class="math-container">$g \in J = (j)$</span>, there exists <span class="math-container">$g' \in F[x]$</span> such that <span class="math-container">$g = g'j$</span>, and thus <span class="math-container">$f = (x^{\deg^-(f)}g')j$</span> is a multiple of <span class="math-container">$j$</span>, so <span class="math-container">$f \in (j)$</span>.</p> <p>Let <span class="math-container">$f \in (j)$</span>. Then <span class="math-container">$f = gj$</span> for some <span class="math-container">$g \in F[x,x^{-1}]$</span>. But note that <span class="math-container">$j = x^{-\deg^-(f')}f'$</span> for some <span class="math-container">$f' \in I$</span>, so <span class="math-container">$f = gx^{-\deg^-(f')}f'$</span>, so <span class="math-container">$f$</span> is a multiple of an element of an ideal <span class="math-container">$I$</span>, so <span class="math-container">$f$</span> itself is in <span class="math-container">$I$</span>.</p> <p>This shows <span class="math-container">$I = (j)$</span>. 
So an arbitrary ideal of <span class="math-container">$F[x,x^{-1}]$</span> is principal, so <span class="math-container">$F[x,x^{-1}]$</span> is a principal ideal domain.</p> </blockquote> <p>I kind of feel like I still don't "get" the proof (I more-or-less see how each part works with the others but I'm having trouble seeing the bigger picture), though this may be due to a poor handle on ideals in general.</p>
user52045
52,045
<p>Polynomial ring over a field $k$ is a PID. Notice that $S^{-1}k[x]=k[x,x^{-1}]$, where $S=\{x^i:i\in \mathbb N\}$. Now use the fact that <a href="https://math.stackexchange.com/questions/536624/is-the-localization-of-a-pid-a-pid">localization of a PID is a PID</a>.</p>
2,874,763
<p>I know for 3-D $$\nabla^2 \left(\frac1r\right)=-4\pi\, \delta(\vec{r})\,.$$ I would like to know, what is $$\text{Div}\cdot\text{Grad}\left(\frac{1}{r^2}\right)$$ in 4-Dimensions ($r^2=x_1^2+x_2^2+x_3^2+x_4^2$)?</p>
Maxim
491,644
<p>Since the Fourier transform of a radial function is also a radial function and the transform of $r^\lambda$ is a homogeneous function of degree $-\lambda - n$ for $(-\lambda - n)/2 \notin \mathbb N^0$, $$\mathcal F[r^\lambda] = (r^\lambda, e^{i \boldsymbol x \cdot \boldsymbol \xi}) = C_{\lambda, n} \rho^{-\lambda - n}, \\ \mathcal F[r^{2 - n}] = C_n \rho^{-2}, \\ \mathcal F[\nabla^2 r^{2 - n}] = -\rho^2 \mathcal F[r^{2 - n}] = -C_n, \\ \nabla^2 r^{2 - n} = -C_n \delta(\boldsymbol x).$$ The constant $C_n = 4 \pi^{n/2}/\Gamma(n/2 - 1)$ can be found by taking a simple test function and using the identity $(r^\lambda, \mathcal F[\phi]) = (\mathcal F[r^\lambda], \phi)$.</p>
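A quick arithmetic check of the constant for the case asked about, n = 4 (my own addition): C_4 = 4π²/Γ(1) = 4π², and by the divergence theorem the flux of ∇(1/r²) through any 3-sphere centered at the origin should equal −C_4 independently of the radius, since the radial derivative of r^(2−n) is (2−n)r^(1−n) and the surface area of the radius-R 3-sphere is 2π²R³.

```python
import math

n = 4
C_n = 4 * math.pi ** (n / 2) / math.gamma(n / 2 - 1)   # the constant from the answer, at n = 4

# flux of grad(r^(2-n)) through 3-spheres of several radii: should be -C_n each time
fluxes = [(2 - n) * R ** (1 - n) * 2 * math.pi ** 2 * R ** 3 for R in (0.5, 1.0, 3.0)]
```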
929,598
<p>A rectangle is 4 times as long as it is wide. If the length is increased by 4 inches and the width is decreased by 1 inch, the area will be 60 square inches. What were the dimensions of the original rectangle? Explain your answer.</p>
mathlove
78,967
<p>HINT : $$(2k+1)!!=1\cdot 3\cdot 5\cdot \cdots (2k-1)\cdot (2k+1)$$ $$(2k+1)!=1\cdot 2\cdot 3\cdot \cdots (2k)\cdot (2k+1) $$ $$2^k\cdot k!=2^k\times \{1\cdot 2\cdot 3\cdot \cdots (k-1)\cdot k\}=2\cdot 4\cdot 6\cdot\cdots (2k-2)\cdot (2k).$$</p>
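The three products in the hint combine into the identity (2k+1)!! · 2^k · k! = (2k+1)!. A quick brute-force confirmation (my own snippet, not part of the answer):

```python
from math import factorial, prod

def double_factorial_odd(k):
    # (2k+1)!! = 1 * 3 * 5 * ... * (2k+1)
    return prod(range(1, 2 * k + 2, 2))

# check (2k+1)!! * 2^k * k! == (2k+1)! for small k
checks = [double_factorial_odd(k) * 2 ** k * factorial(k) == factorial(2 * k + 1)
          for k in range(10)]
```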
1,413,150
<p>So for a periodic function <span class="math-container">$f$</span> (of period <span class="math-container">$1$</span>, say), I know the Riemann-Lebesgue Lemma which states that if <span class="math-container">$f$</span> is <span class="math-container">$L^1$</span> then the Fourier coefficients <span class="math-container">$F(n)$</span> go to zero as <span class="math-container">$n$</span> goes to infinity. And as far as I know, the converse of this is not true. My question, then, is this:</p> <blockquote> <p>Under what conditions on the Fourier coefficients <span class="math-container">$F(n)$</span> is the function <span class="math-container">$f$</span>, defined pointwise as the Fourier series with <span class="math-container">$F(n)$</span> as coefficients,</p> <ol> <li>integrable,</li> <li>continuous, and</li> <li>differentiable?</li> </ol> </blockquote>
Robert Israel
8,508
<p>Let $$ \eqalign{f(n) = \dfrac{1}{n} + \left( 1 + \dfrac{1}{n}\right)^n &amp;= \dfrac{1}{n} + \exp\left( n \ln\left(1+\dfrac{1}{n}\right)\right) \cr &amp;= \dfrac{1}{n} + \exp\left(1 - \dfrac{1}{2n} + \dfrac{1}{3n^2} + O\left(\dfrac{1}{n^3}\right)\right) \cr &amp;= e - \dfrac{e-2}{2n} + \dfrac{11e}{24 n^2} + O\left(\dfrac{1}{n^3}\right) }$$</p> <p>Then $$\eqalign{f(n+1) &amp;= e - \dfrac{e-2}{2n+2} + \dfrac{11e}{24 (n+1)^2} + O\left(\dfrac{1}{n^3}\right)\cr &amp;= e - \dfrac{e-2}{2n} + \dfrac{23 e - 24}{24 n^2} + O\left(\dfrac{1}{n^3}\right) \cr f(n+1) - f(n) &amp;= \dfrac{12e-24}{24n^2} + O\left(\dfrac{1}{n^3}\right)}$$</p> <p>and since $e &gt; 2$, this is positive for sufficiently large $n$.</p>
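A numeric check of the leading term (my own snippet; the choice n = 1000 is arbitrary): the computed difference f(n+1) − f(n) should be positive and close to (12e − 24)/(24n²).

```python
import math

def f(n):
    return 1 / n + (1 + 1 / n) ** n

n = 1000
diff = f(n + 1) - f(n)
predicted = (12 * math.e - 24) / (24 * n ** 2)   # leading term from the expansion
ratio = diff / predicted                          # should be near 1 for large n
```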
1,074,341
<p>Prove that a Covering map is proper if and only if it is finite-sheeted.</p> <p>First suppose the covering map $q:E\to X$ is proper, i.e. the preimage of any compact subset of $X$ is again compact. Let $y\in X$ be any point, and let $V$ be an evenly covered nbhd of $y$. Then since $q$ is proper, and $\{y\}$ is compact, $q^{-1}( \{ y\})$ is also compact. In particular the sheets $\bigsqcup_{\alpha\in I}U_\alpha$ of V are an open cover of $q^{-1}( \{ y\})$ and must therefore contain a finite subcover $\{U_1,...,U_n\}$. Then the cardinality of the fiber $q^{-1}( \{ y\})$ is $n$, so that $q$ is finite-sheeted.</p> <p>Conversely, we suppose that $q$ is finite-sheeted. Let $C\subset X$ be a compact set, and let $\{U_a\}_{a\in I}$ be an open cover of $q^{-1}(C)$...</p> <p>Now how do I continue?</p>
Georges Elencwajg
3,217
<p>$\Rightarrow$ Each fiber is compact (by properness) and discrete (from definition of covering space) hence is finite.</p> <p>$\Leftarrow$ You have to prove that for $K\subset X$ the inverse image $q^{-1}(K)$ is compact.<br> Since $\operatorname {res} q:q^{-1}(K) \to K$ is a finite covering space in its own right apply <a href="https://math.stackexchange.com/a/1072276/3217">my answer to cocomi</a>. </p>
4,147,126
<p>Let <span class="math-container">$\ p_n\ $</span> be the <span class="math-container">$\ n$</span>-th prime number.</p> <blockquote> <p>Does the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem" rel="nofollow noreferrer">prime number theorem</a> ,</p> <p><span class="math-container">$\Large{\lim_{x\to\infty}\frac{\pi(x)}{\left[ \frac{x}{\log(x)}\right]} = 1},$</span></p> <p>imply that:</p> <p><span class="math-container">$ \displaystyle\lim_{n\to\infty}\ \frac{p_n}{p_{n+1}} = 1\ ?$</span></p> </blockquote> <p>Edit: I totally get where the vote-to-closes come from and I kind of agree with them. Yeah this is not the question I intended to ask actually. I think I've done an X-Y communication thingy. I'll leave the question and accept the answer though. But I have learned something about prime numbers along the way in reading the answers...</p>
Aphelli
556,825
<p>Indeed, yes! It can be shown elementarily that the statement of the PNT that you gave is equivalent to <span class="math-container">$\frac{p_n}{n\ln{n}} \rightarrow 1$</span>. Since <span class="math-container">$\frac{n+1}{n} \rightarrow 1$</span>, <span class="math-container">$\frac{\ln(n+1)}{\ln{n}}\rightarrow 1$</span>, it follows that <span class="math-container">$\frac{p_{n+1}}{p_n} \rightarrow 1$</span>.</p> <p>Edit: here’s the elementary proof. <span class="math-container">$\frac{\pi(p_n)\ln{p_n}}{p_n} \rightarrow 1$</span>, thus <span class="math-container">$p_n+o(p_n)=n\ln{p_n}$</span> (so <span class="math-container">$n=o(p_n)$</span>). Write <span class="math-container">$q_n=\frac{p_n}{n}$</span>. Then <span class="math-container">$p_n+o(p_n)=n\ln{n}+n\ln{q_n}$</span>. Now, <span class="math-container">$\frac{q_n}{\ln{p_n}} \rightarrow 1$</span>, and <span class="math-container">$p_n \rightarrow \infty$</span>, so that <span class="math-container">$\ln{q_n}=\ln{\ln{p_n}}+o(1)$</span>. Thus <span class="math-container">$n\ln{q_n}=o(p_n)$</span>, so that <span class="math-container">$p_n+o(p_n)=n\ln{n}$</span>, QED.</p>
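An empirical illustration (my own snippet, not part of the answer; the sieve bound and the index n = 10000 are arbitrary choices): the ratio p_{n+1}/p_n is already very close to 1, while p_n/(n ln n) approaches 1 only slowly.

```python
import math

def primes_up_to(N):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(200_000)
n = 10_000
pn, pn1 = primes[n - 1], primes[n]        # p_n and p_{n+1}, 1-indexed
ratio = pn1 / pn                          # close to 1
pnt_quotient = pn / (n * math.log(n))     # tends to 1, but slowly
```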
655,981
<p>How to calculate this complex integral? $$\int_0^{2\pi}\cot(t-ia)dt,a&gt;0$$</p> <p>I got that the integral is $2\pi i$ if $|a|&lt;1$ and $0$ if $a&gt;1$ yet, friends of mine got $2\pi i$ regardless the value of $a$. looking for the correct way</p>
Daniel Fischer
83,702
<blockquote> <p>so, I'll use the residue theorem</p> </blockquote> <p>Be careful. You need a closed contour for the residue theorem, but the interval $[0,2\pi]$ isn't a closed contour.</p> <p>There are of course several methods to evaluate the integral. One particularly nice way, since $\cos = \sin'$ and $\sin$ is $2\pi$-periodic, is</p> <p>$$\int_0^{2\pi} \cot (t-ia)\,dt = \int_0^{2\pi} \frac{\sin'(t-ia)}{\sin (t-ia)}\,dt = \int_\gamma \frac{dz}{z} = 2\pi i\cdot n(\gamma,0),$$</p> <p>where $\gamma \colon [0,2\pi] \to \mathbb{C}\setminus \{0\}$ is the closed curve $t\mapsto \sin (t-ia)$.</p> <p>The addition theorem yields</p> <p>$$\gamma(t) = \sin t\cosh a - i\cos t \sinh a,$$</p> <p>so $\gamma$ is an ellipse with centre $0$ and semiaxes $\sinh a$ and $\cosh a$, traversed once in the positive sense. Hence $n(\gamma,0) = 1$ for all $a &gt; 0$, and the integral is $2\pi i$ for all $a &gt; 0$.</p> <p>Another method is to write</p> <p>$$\cot (t-ia) = \frac{\cos t\cosh a + i\sin t \sinh a}{\sin t \cosh a - i \cos t \sinh a},$$</p> <p>and then write $z = e^{it}$, so substitute $\cos t = \frac12(z+z^{-1})$ and $\sin t = \frac{1}{2i}(z-z^{-1})$, and apply the residue theorem to the integral</p> <p>$$\int_{\lvert z\rvert = 1} \frac{(z+z^{-1})\cosh a + (z-z^{-1})\sinh a}{(-i)[(z-z^{-1})\cosh a + (z+z^{-1})\sinh a]}\,\frac{dz}{iz}$$</p> <p>over the unit circle. The integrand simplifies to</p> <p>$$\frac{z^2 e^a + e^{-a}}{z\left(z^2 e^a - e^{-a}\right)} = \frac{z^2 + e^{-2a}}{z(z-e^{-a})(z+e^{-a})},$$</p> <p>and the residues in $\pm e^{-a}$ are easily seen to cancel, leaving only the residue in $0$, which is $1$.</p>
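A numerical cross-check of the conclusion (my own snippet, not part of the answer): the midpoint rule is spectrally accurate for smooth periodic integrands, and for every a > 0 the integrand has no poles on the path, so the quadrature should reproduce 2πi essentially to machine precision for several values of a.

```python
import cmath, math

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

def integral(a, N=2000):
    # midpoint rule on [0, 2*pi]; the integrand is smooth and periodic for a > 0
    h = 2 * math.pi / N
    return sum(cot((k + 0.5) * h - 1j * a) for k in range(N)) * h

results = [integral(a) for a in (0.3, 1.0, 2.5)]
```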
935,506
<p>I'm a bit puzzled by this one.</p> <p>The domain $X = S(0,1)\cup S(3,1)$ (where $S(\alpha, \rho)$ is a circular area with it's center at $\alpha$ and radius $\rho$). So the domain is basically two circles with radius 1 and centers at 0 and 3.</p> <p>I'm supposed to find analytic function $f$ defined on $X$ where the imaginary part of $f$ is a constant but $f$ is not constant.</p> <p>Where do I start?</p>
ncmathsadist
4,154
<p>Yes. You can try one of two things. Integrate by parts with $$u = t$$ and $$dv = {2t\,dt\over \sqrt{1 - t^2}}$$ or a trig sub $t = \sin(\theta)$. The or is inclusive or here.</p>
935,506
<p>I'm a bit puzzled by this one.</p> <p>The domain $X = S(0,1)\cup S(3,1)$ (where $S(\alpha, \rho)$ is a circular area with it's center at $\alpha$ and radius $\rho$). So the domain is basically two circles with radius 1 and centers at 0 and 3.</p> <p>I'm supposed to find analytic function $f$ defined on $X$ where the imaginary part of $f$ is a constant but $f$ is not constant.</p> <p>Where do I start?</p>
Bouzari Abdelkader
727,256
<p><span class="math-container">$$\int4x\sqrt{1-x^{4}}dx\\ x^{2}=\sin y\Rightarrow 2xdx=\cos ydy\\ \int4x\sqrt{1-x^{4}}dx=2\int \cos^{2}ydy=\int(1+\cos 2y)dy=y+\frac{\sin 2y}{2}+C\\ \int4x\sqrt{1-x^{4}}dx=\arcsin x^{2}+x^{2}\sqrt{1-x^{4}}+C\\ $$</span></p>
1,904,903
<p>Taken from Soo T. Tan's Calculus textbook Chapter 9.7 Exercise 27-</p> <p>Define $$a_n=\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}$$ One needs to prove the convergence or divergence of the series $$\sum_{n=1}^{\infty} a_n$$</p> <p>upon finding the radius of convergence for $\sum_{n=1}^{\infty}\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}\cdot x^{2n+1}$ to be $1$ and checking the endpoints. Also, please use tests and methods that are taught in introductory courses.</p> <p>Answers show divergence but no without explanation. </p>
Community
-1
<p>Let</p> <p>$$ a = \frac{2}{3} \cdot \frac{4}{5} \cdots \frac{2n}{2n+1} , \quad b = \frac{1}{2} \cdot \frac{3}{4} \cdots \frac{2n-1}{2n} $$</p> <p>Then $a &gt; b$ and $ab = \dfrac{1}{2n+1}$, so actually $a &gt; \dfrac{1}{\sqrt{2n+1}}$ - stronger than what you needed.</p> <p>In other words: You can use this to prove that even $\sum a_n^2$ diverges, which is stronger than the original question.</p>
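An exact-arithmetic check of the telescoping product and the inequality (my own snippet, using rationals so there is no rounding): a·b collapses to 1/(2n+1), each factor of a exceeds the corresponding factor of b, and hence a² > 1/(2n+1).

```python
from fractions import Fraction

n = 50
a = Fraction(1)   # 2/3 * 4/5 * ... * 2n/(2n+1)
b = Fraction(1)   # 1/2 * 3/4 * ... * (2n-1)/(2n)
for k in range(1, n + 1):
    a *= Fraction(2 * k, 2 * k + 1)
    b *= Fraction(2 * k - 1, 2 * k)

product = a * b   # telescopes to 1/(2n+1)
```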
246,606
<p>I have matrix:</p> <p>$$ A = \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 2 &amp; 3 &amp; 3 &amp; 3 \\ 0 &amp; 1 &amp; 2 &amp; 3 \\ 0 &amp; 0 &amp; 1 &amp; 2 \end{bmatrix} $$</p> <p>And I want to calculate $\det{A}$, so I have written:</p> <p>$$ \begin{array}{|cccc|ccc} 1 &amp; 2 &amp; 3 &amp; 4 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 3 &amp; 3 &amp; 3 &amp; 2 &amp; 3 &amp; 3 \\ 0 &amp; 1 &amp; 2 &amp; 3 &amp; 0 &amp; 1 &amp; 2 \\ 0 &amp; 0 &amp; 1 &amp; 2 &amp; 0 &amp; 0 &amp; 1 \end{array} $$</p> <p>From this I get that:</p> <p>$$ \det{A} = (1 \cdot 3 \cdot 2 \cdot 2 + 2 \cdot 3 \cdot 3 \cdot 0 + 3 \cdot 3 \cdot 0 \cdot 0 + 4 \cdot 2 \cdot 1 \cdot 1) - (3 \cdot 3 \cdot 0 \cdot 2 + 2 \cdot 2 \cdot 3 \cdot 1 + 1 \cdot 3 \cdot 2 \cdot 0 + 4 \cdot 3 \cdot 1 \cdot 0) = (12 + 0 + 0 + 8) - (0 + 12 + 0 + 0) = 8 $$</p> <p>But WolframAlpha is saying that <a href="http://www.wolframalpha.com/input/?i=det+%7B%7B1%2C2%2C3%2C4%7D%2C%7B2%2C3%2C3%2C3%7D%2C%7B0%2C1%2C2%2C3%7D%2C%7B0%2C0%2C1%2C2%7D%7D&amp;dataset=" rel="nofollow">it is equal 0</a>. So my question is where am I wrong?</p>
martini
15,379
<p><a href="http://en.wikipedia.org/wiki/Rule_of_Sarrus">Sarrus's rule</a> works only for $3\times 3$-determinants. So you have to find another way to compute $\det A$, for example you can apply elementary transformations not changing the determinant, that is e. g. adding a multiple of one row to another: \begin{align*} \det \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 2 &amp; 3 &amp; 3 &amp; 3 \\ 0 &amp; 1 &amp; 2 &amp; 3 \\ 0 &amp; 0 &amp; 1 &amp; 2 \end{bmatrix} &amp;= \det \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 0 &amp; -1 &amp; -3 &amp; -5 \\ 0 &amp; 1 &amp; 2 &amp; 3 \\ 0 &amp; 0 &amp; 1 &amp; 2 \end{bmatrix}\\ &amp;= \det \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 0 &amp; -1 &amp; -3 &amp; -5 \\ 0 &amp; 0 &amp; -1 &amp; -2 \\ 0 &amp; 0 &amp; 1 &amp; 2 \end{bmatrix}\\ &amp;= \det \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 0 &amp; -1 &amp; -3 &amp; -5 \\ 0 &amp; 0 &amp; -1 &amp; -2 \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{bmatrix} \end{align*} To compute the determinant of a triangular matrix, we just have to multiply the diagonal elements, so $$ \det A = \det \begin{bmatrix} 1 &amp; 2 &amp; 3 &amp; 4 \\ 0 &amp; -1 &amp; -3 &amp; -5 \\ 0 &amp; 0 &amp; -1 &amp; -2 \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{bmatrix} = 1 \cdot (-1)^2 \cdot 0 = 0. $$</p>
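A brute-force cross-check (my own snippet; the cofactor-expansion helper is mine and only sensible for small matrices):

```python
def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 2, 3, 4],
     [2, 3, 3, 3],
     [0, 1, 2, 3],
     [0, 0, 1, 2]]
```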
2,195,287
<blockquote> <p>Knowing that $p$ is prime and $n$ is a natural number show that $$n^{41}\equiv n\bmod 55$$ using Fermat's little theorem $$n^p\equiv n\bmod p$$</p> </blockquote> <p>If the exercise was to show that $$n^{41}\equiv n\bmod 11$$ I would just rewrite $n^{41}$ as a power of $11$ and would easily prove that the congruence is true in this case but I cannot apply the same logic when I have $\bmod55$ since $n^{41}$ cannot be written as power of $55$.</p> <p>Any hint?</p>
lhf
589
<p>We have</p> <ul> <li><p>mod $\ \ 5:\quad$ $n^{41} \equiv (n^8)^5n \equiv n^8 n \equiv n^9 \equiv n^5 n^4 \equiv n\,n^4 \equiv n^5 \equiv n$</p></li> <li><p>mod $11:\quad$ $n^{41} \equiv (n^3)^{11} n^8 \equiv n^3 n^8 \equiv n^{11} \equiv n$</p></li> </ul> <p>Now apply the Chinese remainder theorem.</p>
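Since 55 is small, the claim can also be confirmed by exhausting one complete residue system (my own snippet, not part of the answer):

```python
# brute-force check of n^41 ≡ n (mod 55); one full residue system suffices
ok_55 = all(pow(n, 41, 55) == n % 55 for n in range(55))

# the two congruences the answer combines via CRT
ok_5 = all(pow(n, 41, 5) == n % 5 for n in range(5))
ok_11 = all(pow(n, 41, 11) == n % 11 for n in range(11))
```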
1,985,427
<p>$$ A= \begin{bmatrix} 2 &amp; 1 &amp; -1 \\ -2 &amp; -2 &amp; 1 \\ 0 &amp; -2 &amp; 1 \\ \end{bmatrix} $$</p> <p>Can someone show me the best way to approach this? Should I use pivoting? I tried using the formula, but I think that only works for 2 x 2 matrices. </p>
Deepak Suwalka
371,592
<p>$\begin {bmatrix} 2&amp;1&amp;-1\\-2&amp;-2&amp;1\\0&amp;-2&amp;1 \end {bmatrix}$</p> <p>We can also solve it using determinants:</p> <p>Calculating the determinant of the matrix</p> <p>$\det A=2[-2+2]-1[-2]-1[4]$</p> <p>$=2-4=-2$</p> <p>Calculating the matrix of minors</p> <p>$\begin {bmatrix} -2+2&amp;&amp;-2-0&amp;&amp;4-0\\1-2&amp;&amp;2-0&amp;&amp;-4-0\\1-2&amp;&amp;2-2&amp;&amp;-4+2 \end {bmatrix}$</p> <p>$\begin {bmatrix} 0&amp;-2&amp;4\\-1&amp;2&amp;-4\\-1&amp;0&amp;-2 \end {bmatrix}$</p> <p>Turning the matrix of minors into the matrix of cofactors.</p> <p>$\begin {bmatrix} 0&amp;2&amp;4\\1&amp;2&amp;4\\-1&amp;0&amp;-2 \end {bmatrix}$</p> <p>Finding the adjoint (the transpose of the cofactor matrix).</p> <p>$\begin {bmatrix} 0&amp;1&amp;-1\\2&amp;2&amp;0\\4&amp;4&amp;-2 \end {bmatrix}$</p> <p>Finding the inverse of the matrix</p> <p>Inverse $=\dfrac{1}{\det A}\;\operatorname{adj} A$</p> <p>$\Rightarrow\dfrac{1}{-2}\begin {bmatrix} 0&amp;1&amp;-1\\2&amp;2&amp;0\\4&amp;4&amp;-2 \end {bmatrix}$</p> <p>$\Rightarrow\begin {bmatrix} 0&amp;&amp;-0.5&amp;&amp;0.5\\-1&amp;&amp;-1&amp;&amp;0\\-2&amp;&amp;-2&amp;&amp;1 \end {bmatrix}$</p>
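A quick verification that the computed inverse is correct (my own snippet, using exact rationals): multiplying A by the claimed inverse must give the identity.

```python
from fractions import Fraction as F

A = [[2, 1, -1],
     [-2, -2, 1],
     [0, -2, 1]]

A_inv = [[F(0), F(-1, 2), F(1, 2)],
         [F(-1), F(-1), F(0)],
         [F(-2), F(-2), F(1)]]

product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
```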
408,601
<p>I am asked to find the derivative of $\left(x^x\right)^x$. So I said let $$y=(x^x)^x \Rightarrow \ln y=x\ln x^x \Rightarrow \ln y = x^2 \ln x.$$Differentiating both sides, $$\frac{dy}{dx}=y(2x\ln x+x)=x^{x^2+1}(2\ln x+1).$$</p> <p>Now I checked this answer with Wolfram Alpha and I get that this is only correct when $x\in\mathbb{R},~x&gt;0$. I see that if $x&lt;0$ then $(x^x)^x\neq x^{x^2}$ but if $x$ is negative $\ln x $ is meaningless anyway (in real analysis). Would my answer above be acceptable in a first year calculus course? </p> <p>So, how do I get the correct general answer, that $$\frac{dy}{dx}=(x^x)^x (x+x \ln(x)+\ln(x^x)).$$</p> <p>Thanks in advance. </p>
iostream007
76,954
<p>In the step $$\dfrac{dy}{dx}=y(2x\ln x+x)$$ split the first term: $$\dfrac{dy}{dx}=y(x\ln x+x\ln x+x).$$ Since in the question $y=(x^x)^x$ and $x\ln x=\ln x^x$, $$\dfrac{dy}{dx}=(x^x)^x(x\ln x+\ln x^x+x).$$ So just rearrange your second-to-last step.</p>
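A numeric spot-check of the closed form (my own snippet; the evaluation point and step size are arbitrary choices): compare it against a central difference of (x^x)^x.

```python
import math

def y(x):
    return (x ** x) ** x

def dy_dx(x):
    # closed form from the answer: (x^x)^x (x ln x + ln x^x + x)
    return y(x) * (x * math.log(x) + math.log(x ** x) + x)

x0 = 1.7
h = 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)   # central difference approximation
```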
2,611,382
<p>Solve the equation,</p> <blockquote> <p>$$ \sin^{-1}x+\sin^{-1}(1-x)=\cos^{-1}x $$</p> </blockquote> <p><strong>My Attempt:</strong> $$ \cos\Big[ \sin^{-1}x+\sin^{-1}(1-x) \Big]=x\\ \cos\big(\sin^{-1}x\big)\cos\big(\sin^{-1}(1-x)\big)-\sin\big(\sin^{-1}x\big)\sin\big(\sin^{-1}(1-x)\big)=x\\ \sqrt{1-x^2}.\sqrt{2x-x^2}-x.(1-x)=x\\ \sqrt{2x-x^2-2x^3+x^4}=2x-x^2\\ \sqrt{x^4-2x^3-x^2+2x}=\sqrt{4x^2-4x^3+x^4}\\ x(2x^2-5x+2)=0\\ \implies x=0\quad or \quad x=2\quad or \quad x=\frac{1}{2} $$ Actual solutions exclude $x=2$.ie, solutions are $x=0$ or $x=\frac{1}{2}$. I think additional solutions are added because of the squaring of the term $2x-x^2$ in the steps. </p> <p>So, how do you solve it avoiding the extra solutions in similar problems ?</p> <p><strong>Note:</strong> I dont want to substitute the solutions to find the wrong ones.</p>
Barry Cipra
86,747
<p>Here's a way to avoid the extraneous solution. Note that $\arcsin u+\arccos u={\pi\over2}$ for all $u\in[-1,1]$. Thus we can rewrite $\arcsin x+\arcsin(1-x)=\arccos x$ as </p> <p>$$\arcsin x+{\pi\over2}-\arccos(1-x)={\pi\over2}-\arcsin x$$</p> <p>which simplifies to</p> <p>$$2\arcsin x=\arccos(1-x)$$</p> <p>Applying $\cos$ to each side and using $\cos(2\theta)=1-2\sin^2\theta$, we get $1-2x^2=1-x$, or</p> <p>$$2x^2-x=0$$</p> <p>which has $x=0$ and $x={1\over2}$ as its only solutions.</p>
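A quick check of the two surviving roots (my own snippet, not part of the answer): both satisfy the original equation, while x = 2 from the squared equation is not even in the domain of arcsin.

```python
import math

def lhs(x):
    return math.asin(x) + math.asin(1 - x)

candidates = [0, 0.5]                       # roots of 2x^2 - x = 0
residuals = [abs(lhs(x) - math.acos(x)) for x in candidates]

in_domain = -1 <= 2 <= 1                    # x = 2 fails the domain check
```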
1,424,124
<p>If $a,b$ be two positive integers , where $b&gt;2 $ , then is it possible that $2^b-1\mid2^a+1$ ? I have figured out that if $2^b-1\mid 2^a+1$, then $2^b-1\mid 2^{2a}-1$ , so $b\mid2a$ and also $a &gt;b$ ; but nothing else. Please help. Thanks in advance</p>
Community
-1
<p>Assume that it is possible. Then obviously $a&gt;b$, so write $a=bx+r$ with $0 \leq r \leq b-1$. Then: $$2^b-1 \mid 2^{bx}-1$$</p> <p>Multiply by $2^r$ to get: $$2^b-1 \mid 2^{bx+r}-2^r=2^a-2^r$$ </p> <p>But we know that $2^b-1 \mid 2^a+1$, so subtracting them we get:</p> <p>$$2^b-1 \mid 2^r+1$$ </p> <p>This means that: $$2^b-1 \leq 2^r+1 \leq 2^{b-1}+1$$ $$2^{b-1} \leq 2$$ so $$b \leq 2,$$ a contradiction.</p>
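A small brute-force confirmation (my own snippet; the search bounds are arbitrary): no pair with b > 2 shows up, while b = 2 does divide some 2^a + 1, which is why the hypothesis b > 2 is needed.

```python
# search for counterexamples 2^b - 1 | 2^a + 1 with b > 2
found = [(a, b) for b in range(3, 13) for a in range(1, 200)
         if (2 ** a + 1) % (2 ** b - 1) == 0]
```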
3,030,753
<p>Let <span class="math-container">$f:\mathbb R \rightarrow \mathbb R$</span> be a continuous function and <span class="math-container">$x_0 \in \mathbb R$</span> such that f is differentiable on both intervals <span class="math-container">$(-\infty, x_0]$</span> and <span class="math-container">$[x_0, +\infty)$</span>. Prove or disprove that there exist two functions <span class="math-container">$g, h : \mathbb R \rightarrow \mathbb R$</span> differentiable everywhere such that</p> <p><span class="math-container">$$ f(x) = g(x) + h(x)|x - x_0|\ \ \forall x \in \mathbb R. $$</span></p> <p>This feels like it characterizes every non-differentiable point of a continuous function in terms of absolute values but I couldn't come up with a function to disprove nor I was able to construct <span class="math-container">$g$</span> and <span class="math-container">$h$</span>.</p> <p>Help and directions appreciated.</p>
Robert Z
299,698
<p>Hint (to be read after copper.hat hint). </p> <p>Let us consider the following two differentiable extensions of <span class="math-container">$f$</span>: <span class="math-container">$$F_+(x) = \begin{cases} f(x), &amp; x \ge x_0, \\ f(x_0)+f'_+(x_0)(x-x_0), &amp; x \le x_0, \end{cases}$$</span> and <span class="math-container">$$F_-(x) = \begin{cases} f(x), &amp; x \le x_0, \\ f(x_0)+f'_-(x_0)(x-x_0), &amp; x \ge x_0. \end{cases}$$</span> Then <span class="math-container">$F:=F_+ + F_-$</span> is differentiable in <span class="math-container">$\mathbb{R}$</span> and <span class="math-container">$$F(x)-f(x)=f(x_0)+\begin{cases} f'_+(x_0)(x-x_0), &amp; x \le x_0, \\ f'_-(x_0)(x-x_0), &amp; x \ge x_0, \end{cases}$$</span> that is <span class="math-container">$$F(x)-f(x)=f(x_0)+f'_+(x_0)\cdot \frac{x-x_0 -|x-x_0|}{2} +f'_-(x_0)\cdot \frac{x-x_0 +|x-x_0|}{2}.$$</span> Can you take it from here?</p>
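To see the hint in action on a concrete example (my own snippet; the kinked test function f and the sample grid are hypothetical choices, with x_0 = 0): building F_+ and F_- as in the hint, and then setting g = F_+ + F_- − f(x_0) − ((f'_+ + f'_-)/2)(x − x_0) and h = (f'_+ − f'_-)/2 (a constant suffices here), reproduces f exactly.

```python
import math

x0 = 0.0

def f(x):                        # continuous, kinked at x0 = 0
    return x * x if x <= 0 else math.sin(x)

fp_plus, fp_minus = 1.0, 0.0     # one-sided derivatives of f at x0

def F_plus(x):
    return f(x) if x >= 0 else f(x0) + fp_plus * (x - x0)

def F_minus(x):
    return f(x) if x <= 0 else f(x0) + fp_minus * (x - x0)

def g(x):                        # differentiable everywhere
    return F_plus(x) + F_minus(x) - f(x0) - (fp_plus + fp_minus) / 2 * (x - x0)

h = (fp_plus - fp_minus) / 2

errors = [abs(f(x) - (g(x) + h * abs(x - x0)))
          for x in [i / 10 - 2 for i in range(41)]]
```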
410,013
<p><strong>Short question:</strong> Is there a standard term for a set <span class="math-container">$F$</span> such that there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span> (in the context of ZF)?</p> <p><strong>More detailed version:</strong> Consider the following four notions of “finiteness” in ZF, the third of which is the one I am asking about and will be arbitrarily named “P-finite” here:</p> <ul> <li><p>“<span class="math-container">$F$</span> is <strong>finite</strong>” means any of the following equivalent statements:</p> <ul> <li><p>there exists <span class="math-container">$n\in\omega$</span> and a bijection <span class="math-container">$n \xrightarrow{\sim} F$</span>,</p> </li> <li><p>there exists a bijection <span class="math-container">$E \xrightarrow{\sim} F$</span> with <span class="math-container">$E\subseteq\omega$</span> and no bijection <span class="math-container">$\omega \xrightarrow{\sim} F$</span>,</p> </li> <li><p>every nonempty subset of <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</p> </li> </ul> </li> <li><p>“<span class="math-container">$F$</span> is <strong>T-finite</strong>” means:</p> <ul> <li>every chain in <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</li> </ul> </li> <li><p>“<span class="math-container">$F$</span> is <strong>P-finite</strong>” [nonstandard terminology which I'd like a standard term form] means any of the following equivalent statements:</p> <ul> <li><p><span class="math-container">$\mathscr{P}(F)$</span> is Noetherian under inclusion (i.e., any increasing sequence <span class="math-container">$A_0 \subseteq A_1 \subseteq A_2 \subseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p> </li> <li><p><span class="math-container">$\mathscr{P}(F)$</span> is Artinian under inclusion (i.e., any decreasing sequence <span class="math-container">$A_0 \supseteq A_1 \supseteq A_2 
\supseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p> </li> <li><p>there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span>.</p> </li> </ul> </li> <li><p>“<span class="math-container">$F$</span> is <strong>D-finite</strong>” (i.e., Dedekind-finite) means any of the following equivalent statements:</p> <ul> <li><p>there is no bijection of <span class="math-container">$F$</span> with a proper subset of it,</p> </li> <li><p>there is no injection <span class="math-container">$\omega \hookrightarrow F$</span>.</p> </li> </ul> </li> </ul> <p>(I gave several equivalent conditions to emphasize the parallel between these four notions.)</p> <p>We have finite <span class="math-container">$\Rightarrow$</span> T-finite <span class="math-container">$\Rightarrow$</span> P-finite <span class="math-container">$\Rightarrow$</span> D-finite, and none of the implications I just wrote is reversible. (To construct a permutation model with a P-finite set that is not T-finite, start with a set of atoms in bijection with <span class="math-container">$\mathbb{R}$</span> and use the group of permutations given by continuous increasing bijections <span class="math-container">$\mathbb{R} \xrightarrow{\sim} \mathbb{R}$</span> and the normal subgroup given by pointwise stabilizers of finite sets.)</p> <p>Surely these four notions, and the implications and nonimplications I just mentioned must appear somewhere in the literature, as well as possibly others. My question is, what is the standard name for “P-finiteness”, and where are its properties, including what I just wrote, discussed in greater detail?</p>
Guozhen Shen
101,817
<p>I suggest the terminology &quot;power Dedekind finite&quot;, used by Andreas Blass in his paper <em>Power-Dedekind Finiteness</em>, and I use this terminology throughout all my papers. By Kuratowski's celebrated theorem, a set <span class="math-container">$F$</span> does not map onto <span class="math-container">$\omega$</span> if and only if the power set of <span class="math-container">$F$</span> is Dedekind finite. So this terminology does make sense. I do not like the terminology &quot;weakly Dedekind finite&quot;, since the notion is in fact stronger than Dedekind finiteness. If we use &quot;strongly Dedekind finite&quot;, then it contains an adverb &quot;strongly&quot; which is different from the adverb &quot;weakly&quot; used in its dual notion &quot;weakly Dedekind infinite&quot;.</p>
97,340
<p>Fellow Puny Humans, </p> <p>A <em>geometric net</em> is a system of points and lines that obeys three axioms:</p> <ol> <li>Each line is a set of points.</li> <li>Distinct lines have at most one point in common.</li> <li>If $p$ is a point and $L$ is a line with $p \notin L$, then there is exactly one line $M$ such that $p \in M$ and $L \cap M = \phi $. </li> </ol> <p>And whenever $L \cap M = \phi$ we say that $L$ is parallel to $M$, i.e. $L || M$.</p> <p>So far so good.</p> <p>I want to partition the lines of a geometric net into <em>equivalence classes</em>, with two lines in the same class if they are <em>equal or parallel</em>. One can easily show that the binary relation <em>equal or parallel</em> is an <em>equivalence relation</em>.</p> <p>Let's say there are $m$ such classes; then how many points does a line in each class have? For a given line $l$ in any class, if a point $p \in l$, then how many lines pass through $p$?</p> <p>For example, if I partition them into two classes $CL_1$ and $CL_2$ of parallel or equal lines, then the number of points on any line in $CL_1$ is equal to the number of lines in $CL_2$. This implies that each point belongs to two lines. Can this be extended to the case when the number of classes is $m$, i.e. each point belongs to $m$ lines? I am confused because I cannot show it for the case when more than two lines pass through the same point.</p> <p>This problem is Problem 21 from TAOCP Vol. 4A: <em>Combinatorial Searching</em> (Addison-Wesley).</p>
Community
-1
<p>It is exactly the number of lines you have in a class, simply because <em>equal or parallel</em> is an equivalence relation. Let me clarify:</p> <p>Say $C_1, C_2,\dots, C_m$ are your equivalence classes. Say $L\in C_i$ is a line in $C_i$, for some $i\in\{1,\dots,m\}$. Say $1\leq j \leq m$, $j\neq i$, and $M\in C_j$. If $L\cap M = \emptyset$, then $L$ is parallel to $M$, and so $L$ and $M$ must belong to the same class. In other words, $C_i\cap C_j\neq \emptyset$, or $C_i = C_j$ by equivalence. But by assumption, $i\neq j$, so $C_i\neq C_j$, and so $L$ must intersect $M$. By one of your axioms, it must intersect $M$ in only one point. By arbitrariness of $M$, $L$ must intersect every line in $C_j$ in only one point.</p>
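To make the counting concrete, here is a small Python sketch (my own illustration, not part of the answer above) that checks these facts on the simplest example: the 3x3 grid net, whose lines split into m = 2 parallel classes (rows and columns).

```python
from itertools import product

# The 3x3 grid as a tiny geometric net: points are pairs (i, j),
# lines are the rows and the columns, giving m = 2 parallel classes.
points = set(product(range(3), range(3)))
rows = [frozenset((i, j) for j in range(3)) for i in range(3)]
cols = [frozenset((i, j) for i in range(3)) for j in range(3)]
classes = [rows, cols]
m = len(classes)

# Lines taken from different classes meet in exactly one point.
cross_meets = all(len(L & M) == 1 for L in rows for M in cols)

# Lines within a single class are pairwise disjoint (parallel).
parallel_ok = all(
    len(L & M) == 0
    for cl in classes for L in cl for M in cl if L != M
)

# Consequently each point lies on exactly m lines, one per class.
lines_through = {p: sum(p in L for cl in classes for L in cl) for p in points}
incidence_ok = all(v == m for v in lines_through.values())
```

Here every line of one class meets every line of the other class exactly once, so each point lies on exactly $m$ lines, matching the argument above.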
966,570
<p>My attempt: Let the characteristic be $n$. </p> <p>Then, $n \cdot (1_6, 1_{15}) = (0_6, 0_{15})$,</p> <p>i.e. $n \cdot 1_6=0_6$ and $n \cdot 1_{15}=0_{15}$</p> <p>The least $n$ for which both are true is $30$, so $30$ is the characteristic.</p> <p>Is my method correct? If so, is my writing ok?</p>
Harry Wilson
67,863
<p>So, you are asking for a metric that takes two matrices, $A$ and $B$, and outputs a real number $d(A,B)$ obeying the principles one expects from a distance function: symmetry, reflexivity, and the triangle inequality. One way to view this is to regard the space of $n \times n$ real matrices as the vector space $\mathbb{R}^{n^2}$. This comes with the standard Euclidean norm/metric, which lets you tell how close or far two matrices are from each other in some sense, but I am not sure it is of any use.</p> <p>It would basically treat each entry in the matrix as a coordinate, and see how far they are from each other. A matrix of all zeroes would be close to a matrix with nearly all zeroes and a single entry equal to one.</p> <p>You could always use the discrete metric: two matrices have a distance of 1 if they are different, and 0 if they are equal.</p>
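For what it's worth, here is a short Python sketch of the Euclidean (Frobenius) distance described above; representing matrices as nested lists is just an assumption made for illustration.

```python
import math

def frobenius_distance(A, B):
    """Euclidean distance between two same-sized matrices, viewing an
    n x n matrix as a point of R^(n^2) (the Frobenius norm of A - B)."""
    return math.sqrt(sum(
        (x - y) ** 2
        for row_a, row_b in zip(A, B)
        for x, y in zip(row_a, row_b)
    ))

# The zero matrix vs. a matrix with a single entry equal to one:
# these are close in this metric, as remarked above.
Z = [[0.0, 0.0], [0.0, 0.0]]
E = [[1.0, 0.0], [0.0, 0.0]]
d = frobenius_distance(Z, E)
```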
966,570
<p>My attempt: Let the characteristic be $n$. </p> <p>Then, $n \cdot (1_6, 1_{15}) = (0_6, 0_{15})$,</p> <p>i.e. $n \cdot 1_6=0_6$ and $n \cdot 1_{15}=0_{15}$</p> <p>The least $n$ for which both are true is $30$, so $30$ is the characteristic.</p> <p>Is my method correct? If so, is my writing ok?</p>
Pieter21
170,149
<p>It depends on what you would like the distance between, for instance, $I$ and $-I$ to be. Or even the distance between $I$ and the null/zero matrix $0$.</p> <p>But I expect that in most cases you should also take into account the vectors you want to multiply with.</p> <p>In the general case, I don't expect such a distance would exist as a single (real) value; however, there will be plenty of special cases that you'd have to handle separately.</p>
2,788,015
<p>I'm trying to solve an exercise that says</p> <blockquote> <p>Show that a locally compact space is $\sigma$-compact if and only if it is separable.</p> </blockquote> <p>Here locally compact means that it is also Hausdorff. I have shown that separability implies $\sigma$-compactness, but I'm stuck on the other direction.</p> <p>Assuming that $X$ is $\sigma$-compact, it seems enough to show that a compact Hausdorff space is separable. However, I don't have a clue about how to do it.</p> <p>My first thought was to try to show that a compact Hausdorff space is first countable, which would imply that it is second countable, and from there the proof is almost done. However, it seems that my assumption is not true, so I'm back at the starting point.</p> <p>Some hint will be appreciated, thank you.</p> <hr> <p>EDIT: it seems that the exercise is wrong. Searching the web I found <a href="http://at.yorku.ca/cgi-bin/bbqa?forum=ask_a_topologist_2003&amp;task=show_msg&amp;msg=0014.0001" rel="nofollow noreferrer">a "sketch" of a proof</a> that a compact Hausdorff space need not be separable:</p> <blockquote> <p>Another natural example: take more than |R| copies of the unit interval and take their product. This is compact Hausdorff (Tychonov theorem) but not separable (proof not too hard, but omitted).</p> <p>Hope this helped,</p> <p>Henno Brandsma</p> </blockquote> <p>My knowledge of topology is limited, and the exercise appears in a book of analysis (it is part of exercise 18 on page 57 of <em>Analysis III</em> by Amann and Escher.)</p> <p>My hope is that @HennoBrandsma (a user of this site) will appear and clarify the question :)</p>
Mirko
188,367
<p>Take $\omega_1+1$ with the order topology. This is compact Hausdorff, but not separable. (That is, take the space of all countable ordinals, together with the first uncountable ordinal, with the order topology. This is not first countable either. As a comment suggests, perhaps the author meant that only metrizable spaces are considered?) </p>
2,098,693
<p>Full Question: Five balls are randomly chosen, without replacement, from an urn that contains $5$ red, $6$ white, and $7$ blue balls. What is the probability of getting at least one ball of each colour?</p> <p>I have been trying to answer this by taking the complement of the event but it is getting quite complex. Any help?</p>
callculus42
144,421
<p>The idea of taking the complementary probability sounds good to me. Let $r,w,b$ be the events that <strong>no</strong> red, <strong>no</strong> white and <strong>no</strong> blue balls are drawn.</p> <p>Then it is asked for</p> <p>$1-P(r\cup w\cup b)=1-\left[P(r)+P(w)+P(b)-P(r,w)-P(r,b)-P(w,b)+P(r,w,b)\right]$</p> <p>For $P(r\cup w\cup b)$ the <em>inclusion-exclusion principle</em> is applied.</p> <p>$1-\left({5 \choose 0 }\cdot {13 \choose 5}+{6 \choose 0}\cdot {12 \choose 5}+{7 \choose 0}\cdot {11 \choose 5}-{7 \choose 5}\cdot {11 \choose 0}-{6 \choose 5}\cdot {12 \choose 0}-{5 \choose 5}\cdot {13 \choose 0}+0\right)/{18\choose 5}$</p> <p>$\approx 70.67\%$</p> <p>The number of binomial coefficients which has to be calculated is more or less the same as in <a href="https://math.stackexchange.com/users/131263/barak-manos">barak manos</a>'s answer. Both ways lead to the same result.</p>
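As a sanity check, the result can also be confirmed by brute-force enumeration of all $\binom{18}{5}=8568$ draws; a quick Python sketch (not needed for the argument above):

```python
from itertools import combinations
from fractions import Fraction

# Urn: 5 red, 6 white, 7 blue; draw 5 balls without replacement.
balls = ['R'] * 5 + ['W'] * 6 + ['B'] * 7

total = 0
favorable = 0
for draw in combinations(range(len(balls)), 5):
    total += 1
    # Favorable: the draw contains all three colours.
    if {balls[i] for i in draw} == {'R', 'W', 'B'}:
        favorable += 1

p = Fraction(favorable, total)  # exact probability of at least one of each colour
```

The exact value comes out to $6055/8568 \approx 70.67\%$, agreeing with the inclusion-exclusion computation.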
4,635,416
<p>Let <span class="math-container">$X$</span> be a symmetric random variable, that is <span class="math-container">$X$</span> and <span class="math-container">$-X$</span> have the same distribution function <span class="math-container">$F$</span>. Suppose that <span class="math-container">$F$</span> is continuous and strictly increasing in a neighborhood of <span class="math-container">$0$</span>. Then prove that the median <span class="math-container">$m$</span> of <span class="math-container">$F$</span> is equal to <span class="math-container">$0$</span>, where we define <span class="math-container">$m:=\inf\{x\in \mathbb{R}|F(x)\ge \frac{1}{2}\}$</span>.</p> <p>This definition of the median kind of annoys me. I could easily show that <span class="math-container">$\mathbb{P}(X\le 0)\ge \frac{1}{2}$</span> and <span class="math-container">$\mathbb{P}(X\ge 0)\ge \frac{1}{2}$</span> and by the usual definition of the median I would be done, but I don't know how to deal with that <span class="math-container">$\inf$</span>. I could only observe that my first inequality implies that <span class="math-container">$m\le 0$</span>. I think that the point of the question is to use that <span class="math-container">$F$</span> is invertible on that neighborhood, but I can't make any progress.</p>
grand_chat
215,011
<p>Define <span class="math-container">$M:=\{x\in {\mathbb R}\mid F(x)\ge\frac12\}$</span>. You've shown that <span class="math-container">$0\in M$</span>. It remains to show that no number less than zero is a member of <span class="math-container">$M$</span>. To do this:</p> <ol start="0"> <li><p>You've already shown <span class="math-container">$P(X\le0)\ge\frac12$</span>.</p> </li> <li><p>Since <span class="math-container">$F$</span> is continuous at <span class="math-container">$x=0$</span>, you know <span class="math-container">$P(X=0)=0$</span>; otherwise there would be a jump discontinuity in <span class="math-container">$F$</span> at <span class="math-container">$x=0$</span>.</p> </li> <li><p>Since <span class="math-container">$F$</span> is strictly increasing in a neighborhood of <span class="math-container">$0$</span>, there exists <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$F(t_1)&lt;F(t_2)$</span> whenever <span class="math-container">$-\delta&lt;t_1&lt;t_2&lt;\delta$</span>.</p> </li> <li><p>If <span class="math-container">$t\in(-\delta, 0)$</span> then <span class="math-container">$-\delta&lt;t&lt;0&lt;\delta$</span> so <span class="math-container">$$F(t)\stackrel{(2)}&lt;F(0)=P(X\le 0)\stackrel{(1)}=P(X&lt;0)=1-P(X\ge0)\stackrel{(0)}\le \frac12.$$</span> Hence <span class="math-container">$F(t)&lt;\frac12$</span> and <span class="math-container">$t$</span> cannot be a member of <span class="math-container">$M$</span>.</p> </li> <li><p>If <span class="math-container">$t\in (-\infty, -\delta]$</span>, then <span class="math-container">$t\le -\delta&lt;-\delta/2&lt;0$</span> so <span class="math-container">$F(t)\le F(-\delta/2)\stackrel{(3)}&lt;\frac12$</span> and again <span class="math-container">$t\not\in M$</span>.</p> </li> </ol> <hr /> <p>Note that <strong>you need to use the condition</strong> that <span class="math-container">$F$</span> is strictly increasing in a neighborhood of zero. 
If <span class="math-container">$F$</span> is flat in a neighborhood of zero, then <span class="math-container">$\inf M$</span> will be negative.</p>
215,333
<p>There are many symbols for understanding internet-related properties: <code>$NetworkConnected</code>, <code>PingTime</code>, <code>NetworkPacketTrace</code>, <code>NetworkPacketRecording</code>, etc.</p> <p>But is there any convenient way of testing your network's upload speed from within Mathematica?</p>
Rohit Namjoshi
58,370
<p>Here is another way. Install the speedtest cli application for your OS from <a href="https://www.speedtest.net/apps/cli" rel="nofollow noreferrer">here</a>.</p> <pre><code>SetEnvironment["PATH" -&gt; Environment["PATH"] &lt;&gt; "path to install dir"] output = RunProcess[{"speedtest", "-fcsv", "--output-header"}, "StandardOutput"]; output // ImportString[#, "CSV"] &amp; // Part[#, All, 3 ;; -2] &amp; (* Remove personal info *) // Dataset </code></pre> <p><a href="https://i.stack.imgur.com/tqxBy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqxBy.png" alt="enter image description here"></a></p>
1,779,965
<p>Given the numbers $x = 123$ and $y = 100$ how to apply the Karatsuba algorithm to multiply these numbers ?</p> <p>The formula is </p> <pre><code>xy=10^n(ac)+10^n/2(ad+bc)+bd </code></pre> <p>As I understand $n = 3$ (number of digits) and I tried writing the numbers as </p> <pre><code>x = 10*12+3 , y = 10*10 +0 thus a = 12 , b = 3 , c = 10 , d = 0 </code></pre> <p>or</p> <pre><code>x = 100*1+23 , y = 100*1 +0 thus a = 1 , b = 23 , c = 1 , d = 0 </code></pre> <p>I looked at some explanations of the algorithm and tried it successfully for other numbers , but I don't know how to solve it in this particular case.</p> <p>Is this example part of a more general case of the algorithm (like 3-digit numbers)? I found a question that may be related (<a href="https://math.stackexchange.com/questions/220419/karatsuba-multiplication-with-integers-of-size-3">Karatsuba multiplication with integers of size 3</a>) and from the answer I gather that it's impossible , so is it that Karatsuba can't multiply $2$ numbers of $3$-digits or is there a way to do this ?</p>
Haseeb Saeed
648,375
<p><span class="math-container">$xy=123\cdot 100$</span></p> <p><span class="math-container">$(12\cdot 10+3)(10\cdot 10+0)$</span></p> <p><span class="math-container">$xy=10^n(ac)+10^{n/2}(ad+bc)+bd$</span>, where <span class="math-container">$n=2,\ a=12,\ b=3,\ c=10,\ d=0$</span>.</p> <p>Subproducts for the top level:</p> <p><span class="math-container">$ac=12\cdot 10 = 120$</span>, <span class="math-container">$bd=3\cdot 0 = 0$</span>, <span class="math-container">$ad+bc=(a+b)(c+d)-ac-bd=(12+3)(10+0)-120-0=150-120-0=30$</span>.</p> <p><span class="math-container">$xy=120\cdot 10^2 + 30\cdot 10^1 + 0 = 12300$</span></p> <p>The subproduct <span class="math-container">$12\cdot 10$</span> is itself computed by the same method, with <span class="math-container">$a=1,\ b=2,\ c=1,\ d=0$</span>:</p> <p><span class="math-container">$ac=1$</span>, <span class="math-container">$bd=0$</span>, <span class="math-container">$ad+bc=(a+b)(c+d)-ac-bd=(1+2)(1+0)-1-0=2$</span>,</p> <p><span class="math-container">$ac\cdot 10^2 + (ad+bc)\cdot 10 + bd = 120$</span></p>
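The same recursion is easy to check in code. Below is a generic Karatsuba sketch in Python (my own illustration, not from the answer above); the split point is simply half the digit length, so odd-length inputs like $123\cdot 100$ are handled with uneven halves, exactly as in the computation above.

```python
def karatsuba(x, y):
    """Karatsuba multiplication: split each factor at half the digit
    length, so only three recursive products are needed."""
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, c = x // p, y // p   # high parts: x = a*p + b, y = c*p + d
    b, d = x % p, y % p     # low parts
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, computed with one multiplication
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + ad_plus_bc * p + bd

result = karatsuba(123, 100)  # 12300
```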
2,286,749
<p>My question is about the general solution for the following differential equation: $$ \frac{dx}{dt} = x^a(1-x)^b,\quad a,b\gt 0~~~~~~~~~~~~~~~(1)~. $$</p> <p>Obviously, if $a=b=1$ then (1) reduces to $$ \frac{dx}{dt} = x(1-x) $$ which has as solution $$ x(t) = \frac{1}{1 + A e^{-t}}\,,$$ for some constant, $A$. In fact, for $a,b$ positive integers, a solution can be obtained by using method of separation of variables and partial fractions. I want to be able to find a solution that considers all cases and which would obviously include the special cases when $a$ and $b$ are positive integers.</p>
projectilemotion
323,432
<p>Well, your differential equation is separable:</p> <p><span class="math-container">$$\int \frac{1}{x^a(1-x)^b}~dx=\int dt \tag{1}$$</span></p> <p>The left hand side cannot be integrated in terms of elementary functions.</p> <blockquote> <p>However, one can directly apply the definition of the <a href="https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function" rel="nofollow noreferrer">incomplete beta function</a>:</p> <p><span class="math-container">$$\operatorname*{B}(x;\,m,n) = \int_0^x t^{m-1}\,(1-t)^{n-1}\,dt \tag{2}$$</span></p> </blockquote> <p>If we put <span class="math-container">$(1)$</span> into a similar form:</p> <p><span class="math-container">$$\int x^{-a}(1-x)^{-b}~dx=t+c_1 \tag{3}$$</span></p> <p>Hence, after applying <span class="math-container">$(2)$</span>, the integral on the left hand side is evaluated as: <span class="math-container">$$\int x^{-a}(1-x)^{-b}~dx=\operatorname*{B}(x,1-a,1-b)+c_2$$</span> Therefore, the general solution in implicit form is simply: <span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\operatorname*{B}(x;1-a,1-b)=t+C}$$</span></p>
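A quick numerical sanity check of the antiderivative claim, as a Python sketch (the exponents $a=1/2$, $b=3/2$ are arbitrary sample values, and the quadrature anchor $0.1$ is chosen only to stay clear of the endpoint singularities):

```python
import math

a, b = 0.5, 1.5   # sample exponents, chosen for illustration only

def f(t):
    # the integrand x^(-a) * (1-x)^(-b) from the separated equation
    return t ** (-a) * (1 - t) ** (-b)

def simpson(g, lo, hi, n=2000):
    """Composite Simpson's rule on [lo, hi] with n (even) subintervals."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3

def B(x):
    # An antiderivative of f, anchored at 0.1; the constant of
    # integration is irrelevant for the derivative check below.
    return simpson(f, 0.1, x)

# The central difference of B should reproduce the integrand f,
# confirming that B solves (3) up to a constant.
x0, h = 0.4, 1e-5
deriv = (B(x0 + h) - B(x0 - h)) / (2 * h)
```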
1,030,335
<blockquote> <p>Let <span class="math-container">$n$</span> and <span class="math-container">$r$</span> be positive integers with <span class="math-container">$n \ge r$</span>. Prove that:</p> <p><span class="math-container">$$\binom{r}{r} + \binom{r+1}{r} + \cdots + \binom{n}{r} = \binom{n+1}{r+1}.$$</span></p> </blockquote> <p>Tried proving it by induction but got stuck. Any help with proving it by induction or any other proof technique is appreciated.</p>
David
119,775
<p>As mentioned above, this has been answered before (actually very recently). But for something different, here is a pictorial proof. $$\def\r{\color{red}{\bullet}}\def\b{\color{blue}{\bullet}}\def\u{\circ}\def\w{\bullet}\def\s{\ \ \ \ } \eqalignno{ \matrix{\w\cr \w\s\w\cr \w\s\w\s\w\cr \w\s\w\s\w\s\w\cr \w\s\w\s\w\s\w\s\w\cr \w\s\w\s\b\s\w\s\w\s\w\cr}\cr =\matrix{\w\cr \w\s\w\cr \w\s\w\s\w\cr \w\s\w\s\w\s\w\cr \w\s\r\s\b\s\w\s\w\cr \w\s\w\s\u\s\w\s\w\s\w\cr}\cr =\matrix{\w\cr \w\s\w\cr \w\s\w\s\w\cr \w\s\r\s\b\s\w\cr \w\s\r\s\w\s\w\s\w\cr \w\s\w\s\u\s\w\s\w\s\w\cr}\cr =\matrix{\w\cr \w\s\w\cr \w\s\r\s\b\cr \w\s\r\s\w\s\w\cr \w\s\r\s\w\s\w\s\w\cr \w\s\w\s\u\s\w\s\w\s\w\cr}\cr =\matrix{\w\cr \w\s\r\cr \w\s\r\s\w\cr \w\s\r\s\w\s\w\cr \w\s\r\s\w\s\w\s\w\cr \w\s\w\s\u\s\w\s\w\s\w\cr}\cr}$$</p>
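Not a proof, but for anyone who wants to check the identity numerically, it is a one-liner in Python:

```python
from math import comb

def hockey_stick_holds(n, r):
    # C(r,r) + C(r+1,r) + ... + C(n,r) == C(n+1, r+1)
    return sum(comb(k, r) for k in range(r, n + 1)) == comb(n + 1, r + 1)

# Verify the identity for all 1 <= r <= n < 30.
all_ok = all(
    hockey_stick_holds(n, r)
    for n in range(1, 30)
    for r in range(1, n + 1)
)
```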
1,334,527
<p>The integral in hand is $$ I(n) = \frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx $$ I don't know whether it has a closed form or not, but currently I only want to know its asymptotic behavior. Setting $x=\cos\theta$, we get $$ I(n) = \frac{1}{\pi}\int_{0}^{\pi/2} \Big[(1+2\cos\theta)^{2n}+(1-2\cos\theta)^{2n}\Big]\, d\theta $$ The second term can be neglected, therefore $$ I(n) \sim \frac{1}{\pi}\int_{0}^{\pi/2}(1+2\cos\theta)^{2n}\, d\theta $$ How can I move on?</p>
achille hui
59,379
<p>To compute the asymptotic expansion of the integral, we split it into two pieces</p> <p>$$\frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}dx = \frac{1}{\pi}\left(\int_{-1}^0 + \int_0^1\right)\frac{(1+2x)^{2n}}{\sqrt{1-x^2}}dx $$ Over the interval $[-1,0]$, we have $|1+2x|\le 1$, so the contribution there is bounded.</p> <p>$$\mathcal{I}_1 \stackrel{def}{=} \left|\frac{1}{\pi}\int_{-1}^0 \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}dx\right| \le \frac{1}{\pi}\int_{-1}^0 \frac{dx}{\sqrt{1-x^2}} = \frac12 $$ Over the interval $[0,1]$, introduce the variable </p> <p>$$1 + 2x = 3 e^{-t} \quad\iff\quad x = \frac{3e^{-t}-1}{2}$$ We have</p> <p>$$\mathcal{I}_2 \stackrel{def}{=} \frac{1}{\pi}\int_{0}^1 \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}dx = \frac{3^{2n+1}}{2\pi}\int_{0}^{\log 3} e^{-(2n+1)t} \left[1 - \left(\frac{3e^{-t}-1}{2} \right)^2 \right]^{-1/2} dt$$ Notice that near $t = 0$, the complicated mess in the square bracket has the following Taylor series expansion: $$\left[1 - \left(\frac{3e^{-t}-1}{2} \right)^2 \right]^{-1/2} = \frac{1}{\sqrt{3t}}\left( 1+ \frac{5}{8}t+ \frac{49}{384}t^2-\frac{29}{3072}t^3 + \cdots \right)\tag{*1}$$ The whole expression is now in a form to which we can apply <a href="https://en.wikipedia.org/wiki/Watson%27s_lemma">Watson's Lemma</a> and read off the asymptotic expansion:</p> <p>$$\begin{align} \mathcal{I}_2 \;\approx &amp;\; \frac{3^{2n+1}}{2\sqrt{3}\pi} \left( \frac{\Gamma\left(\frac12\right)}{\sqrt{2n+1}} + \frac{5}{8}\frac{\Gamma\left(\frac32\right)}{\sqrt{2n+1}^3} + \frac{49}{384}\frac{\Gamma\left(\frac52\right)}{\sqrt{2n+1}^5} - \frac{29}{3072}\frac{\Gamma\left(\frac72\right)}{\sqrt{2n+1}^7} + \cdots \right)\\ \;\approx &amp;\; \frac{3^{2n}\sqrt{3}}{2\sqrt{\pi(2n+1)}} \left( 1 + \frac{5}{16(2n+1)} + \frac{49}{512(2n+1)^2} - \frac{145}{8192(2n+1)^3} + \cdots\right) \end{align} $$ Since $\mathcal{I}_1$ is always $O(1)$, the above asymptotic expansion for $\mathcal{I}_2$ is also the one for $\mathcal{I}_1 + \mathcal{I}_2$, i.e. 
the one you are looking for.</p> <p>If one wants more terms of the asymptotic expansion, one just needs to throw the LHS of $(*1)$ into a CAS, crank out more terms of the Taylor expansion, and repeat the above process.</p>
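For anyone who wants to double-check numerically, here is a rough Python sketch (my own verification, with an arbitrarily chosen $n=15$): it evaluates the integral via the substitution $x=\cos\theta$ and compares it with the three-term expansion above.

```python
import math

def I(n, steps=200_000):
    """(1/pi) * integral_{-1}^{1} (1+2x)^(2n) / sqrt(1-x^2) dx,
    evaluated via x = cos(theta), i.e. (1/pi) * int_0^pi (1+2cos t)^(2n) dt
    (midpoint rule; the substitution removes the endpoint singularities)."""
    h = math.pi / steps
    total = sum((1 + 2 * math.cos((k + 0.5) * h)) ** (2 * n)
                for k in range(steps))
    return total * h / math.pi

def asymptotic(n):
    """Three-term expansion 3^(2n) sqrt(3) / (2 sqrt(pi(2n+1))) * (...)."""
    m = 2 * n + 1
    return (3 ** (2 * n) * math.sqrt(3) / (2 * math.sqrt(math.pi * m))
            * (1 + 5 / (16 * m) + 49 / (512 * m ** 2)))

n = 15
ratio = I(n) / asymptotic(n)   # should be very close to 1
```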
1,658,577
<p>I'm an electrical/computer engineering student and have taken a fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectively), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p> <p>I'm really interested in learning rigorous math; however, the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs.</p> <p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. At the same time, I don't think skipping the topics I already know is a good idea, because it would be next to impossible to learn higher-level math without knowing lower-level math from a proof-based point of view.</p>
USA
626,318
<p>Your goal slants toward applied math.</p> <p>Try “Real Variables with ..metric space and topology” by Robert Ash. The book will also teach you proof writing. Prof Ash is an electrical engineer who later became a Math prof. The book has answers to all problems, which helps you gain confidence quickly. Prof Ash also has books on Modern Algebra and Complex Analysis.</p> <p>All are good.</p> <p>Rudin or Artin you can read easily and work through.</p>
1,658,577
<p>I'm an electrical/computer engineering student and have taken a fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectively), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p> <p>I'm really interested in learning rigorous math; however, the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs.</p> <p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. At the same time, I don't think skipping the topics I already know is a good idea, because it would be next to impossible to learn higher-level math without knowing lower-level math from a proof-based point of view.</p>
Ziqi Fan
851,072
<p>There is no &quot;engineering math&quot; separate from &quot;math&quot;. To learn good math as an engineer, you have no other way but to understand math from its nature.</p> <p>The first course you learn is probably real analysis. In my opinion, no one masters real analysis when the topic is met the first time. Don't worry if you feel you have done the homework and passed the exam but don't have a clear clue of what it is for. Real analysis is the topic that opens the door to modern math, as compared to what you learned at high school. The core technique you need to master is formal reasoning, or proof construction. Learning first-order logic systematically would definitely provide you with a clearer understanding of this technique. Once you have mastered the tool for formal reasoning, you are able to appreciate proofs written in English in a more fundamental way. Without the knowledge of first order logic, long proofs cannot be truly understood, especially when there are many levels of existential and universal propositions. After understanding first-order logic and probably a little set theory, it is the moment you review real analysis and tell why learning the material the first time is not really helpful.</p> <p>You will definitely learn linear algebra. This is the course you learn linear spaces and transformations between linear spaces. It is the first time you understand that modern math is concerned with the structures of specific types of sets with operations defined on them. Some universities teach this topic with an emphasis on matrices and vectors. I don't quite agree with this type of education for linear algebra. I would recommend learning this topic from the bottom up. That is, understanding clearly the concepts of linear transformations, and then, associate them with matrices and vectors. 
With the proof capability you learned in real analysis, you should be able to understand and construct proofs more easily in this course.</p> <p>Then you will need to know how to model the world of uncertainty. Probability theory and statistics are what you need to learn. Again, pay more attention to theory and do not focus on data at the beginning. You need to rigorously construct your understanding of events, probability, random variables, expectation, convergence, etc. Do note that the concept of convergence from real analysis plays a significant role in the analysis of estimators.</p> <p>With all these basic courses learned and understood, you should learn optimization. Optimization tells you how to make the best choice with a definition of cost in mind. It is widely used in signal processing, control, machine learning, computer graphics, etc. In fact, many rigorous engineering papers published at good conferences or in journals are optimization in nature.</p> <p>With these topics learned and practiced in everyday applications, you should be able to deal with most engineering problems in electrical engineering or computer science in a very rigorous way. With these descriptive languages in mind, you should solve real problems with coding. Now, coding is a technique that connects knowledge and capability. With a good understanding of math, you should be able to write high-quality code.</p>
2,659,448
<p>The following question is an exercise from Munkres' Analysis on Manifolds (Chapter 4 - Section 20):</p> <p>Consider the vectors $a_i$ in $R^3$ such that:</p> <p>$[a_1\ a_2\ a_3\ a_4] = \begin{bmatrix} 1 &amp; 0 &amp; 1 &amp; 1 \\ 1 &amp; 0 &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 2 &amp; 0 \end{bmatrix}$</p> <p>Let $V$ be the subspace of $R^3$ spanned by $a_1$ and $a_2$. Show that $a_3$ and $a_4$ also span $V$, and that the frames $(a_1,a_2)$ and $(a_3,a_4)$ belong to opposite orientations of $V$.</p> <p>My initial approach was to span $a_1$ and $a_2$ to determine V. Since two vectors can span at most $R^2$, the third term of any general vector within span must be 0. So, $(x,x,x+y)=0\Longrightarrow x=0,y=0$ And so, V is simply the origin $(0,0)$. Spanning $a_3$ and $a_4$ also yields this, showing that they both span V. This seems like a very odd answer for the first part in my opinion.</p> <p>However, in trying to answer the second part I am completely lost. In Munkres, we see that the orientation of some $n$-tuple $(x_1,\ldots,x_n)$ is determined by the sign of the determinant of the matrix they form. But surely the determinant of a $3\times2$ matrix is not defined? One thought I had was to simply take the determinant of the upper $\frac{2}{3}$ of the matrix (so determinant is defined), but the determinant of this is 0, which isn't covered by Munkres' definition, so the frame has no orientation?</p> <p>Any help / clarification / solutions would be massively appreciated.</p>
Tashi Walde
509,746
<p>Recall that two frames $(a_1,\dots,a_n)$ and $(v_1,\dots,v_n)$ have the same orientation if the linear isomorphism which sends $a_i\mapsto v_i$ (which exists uniquely) has positive determinant. </p> <p>The subspace $V$ has two bases $a=(1,1,1),b=(0,0,1)$ and $v=(1,1,2),w=(1,1,0)$. If you change from one basis to the other, you use the following formulas: $v=a+b$ and $w=a-b$. This means that the linear isomorphism $a\mapsto v$ and $b\mapsto w$ is described (in the basis $\{a,b\}$) by the matrix $$\begin{bmatrix}1&amp;1\\1&amp;-1\end{bmatrix}$$ which has determinant $-2$ which is negative. This means that the two bases (frames) have opposite orientations.</p>
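The arithmetic here is small enough to verify mechanically; a Python sketch (purely illustrative, using exact arithmetic via fractions):

```python
from fractions import Fraction

a = (1, 1, 1)   # a_1
b = (0, 0, 1)   # a_2
v = (1, 1, 2)   # a_3
w = (1, 1, 0)   # a_4

def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def sub(p, q):
    return tuple(x - y for x, y in zip(p, q))

# v = a + b and w = a - b, so in the basis {a, b} the linear map
# sending (a, b) to (v, w) has matrix [[1, 1], [1, -1]].
same_span = v == add(a, b) and w == sub(a, b)
det = Fraction(1) * Fraction(-1) - Fraction(1) * Fraction(1)   # = -2
opposite_orientation = det < 0
```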
19,261
<p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p> <ol> <li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li> <li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li> <li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li> </ol> <blockquote> <p>Then $v_i$, $v_j$ are adjacent iff $N_i$ and $N_j$ are not coprime,</p> </blockquote> <p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p> <p>Being free in assigning the numbers $n_i$ and $p_j$ gives rise to a lot of possibilities, but also to the following question:</p> <blockquote> <p><strong>QUESTION</strong></p> <p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$ is minimal (among all that do the job) &#x2014; and if so: how?</p> </blockquote> <p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
Benoît Kloeckner
4,961
<p>This topic being quite large, I cannot recommend enough that you take a look at Marcel Berger's <em>Panoramic View of Riemannian Geometry</em>. The Bonnet-Myers theorem and the sphere theorems (for the recent developments on the latter, I think the web page of Simon Brendle contains a survey) are two celebrated examples of the topological consequences of geometric properties in the setting of Riemannian geometry.</p>
3,261,846
<blockquote> <p>What is the solution to the IVP <span class="math-container">$$y'+y=|x|, \ x \in \mathbb{R}, \ y(-1)=0$$</span></p> </blockquote> <p>The general solution of the above problem is <span class="math-container">$y_{g}(x)=ce^{-x}$</span>.</p> <p>How to find the particular solution? As <span class="math-container">$|x|$</span> is not differentiable at origin. Is there any alternate way to get the solution?</p>
A.Γ.
253,273
<p>Hint: using the <a href="https://en.wikipedia.org/wiki/Integrating_factor" rel="nofollow noreferrer">integrating factor</a> the equation can be rewritten as <span class="math-container">$$ (e^{x}y(x))'=e^{x}|x|. $$</span> Thus, you are left with writing down the solution to <span class="math-container">$w'(x)=f(x)$</span>, <span class="math-container">$w(x_0)=0$</span> (<span class="math-container">$f$</span> continuous).</p> <blockquote class="spoiler"> <p> <span class="math-container">$$w(x)=\int_{x_0}^x f(t)\,dt.$$</span></p> </blockquote>
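For a concrete sanity check: integrating $e^t|t|$ separately on $[-1,0]$ and on $[0,x]$ gives an explicit piecewise solution. The sketch below is my own evaluation of that integral (not part of the hint) and verifies it numerically against the ODE:

```python
from math import exp

def y(x):
    """Solution of y' + y = |x|, y(-1) = 0, obtained from (e^x y)' = e^x |x|
    by integrating e^t |t| separately over [-1, 0] and [0, x]."""
    if x < 0:
        return (1 - x) - 2 * exp(-x - 1)
    return (2 - 2 / exp(1)) * exp(-x) + (x - 1)

# sanity checks: initial condition and the ODE via a central difference
assert abs(y(-1)) < 1e-12
for x in (-0.7, -0.2, 0.4, 1.5):
    h = 1e-6
    dy = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dy + y(x) - abs(x)) < 1e-6
```

Note that the two branches agree at $x=0$ (both give $1 - 2/e$), so the solution is $C^1$ there even though $|x|$ has a kink.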
2,352,811
<p>Why is it not enough for the partial derivatives to exist to imply differentiability of the function? Why is the continuity of the partial derivatives needed?</p>
krirkrirk
221,594
<p><strong>Hint</strong> : </p> <p>consider $f(x,y) = \frac{xy}{x^2+y^2}$, $f(0,0) = 0$</p>
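A quick numerical sketch of why this hint works: at the origin both partial derivatives of $f$ exist (they equal $0$, since $f$ vanishes on both axes), yet $f$ is not even continuous there, because along $y=x$ it is identically $1/2$:

```python
def f(x, y):
    return x * y / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

# Both partial derivatives at the origin exist and equal 0,
# because f vanishes identically on both coordinate axes:
h = 1e-8
fx = (f(h, 0) - f(0, 0)) / h
fy = (f(0, h) - f(0, 0)) / h
assert fx == 0.0 and fy == 0.0

# Yet f is not even continuous at the origin: along y = x the
# value is exactly 1/2 no matter how close to (0,0) we get.
for t in (1e-1, 1e-4, 1e-8):
    assert abs(f(t, t) - 0.5) < 1e-12
```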
452,306
<p>I am trying to be able to find the radius of a cone combined with a cylinder. See my other question (Solving for radius of a combined shape of a cone and a cylinder where the cone is base is concentric with the cylinder? part2 )</p> <p>I have a volume calculation that has been reduced as far as I know how to.</p> <p>Known values:</p> <p>$$v=65712.4$$ $$x=3$$ $$y=2$$ $$\theta=30$$ $$r=\text{unknown}$$</p> <p>$$v=\pi r^3\left(2y-\frac{2}{3}\tan\theta-\frac{x}{r}\right)$$</p> <p>Since I haven't solved a quadratic equation in a while, I would appreciate it explained in steps.</p> <p>Thank you for your time.</p>
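Expanding the right-hand side shows the equation is actually cubic in $r$ (the $x/r$ term contributes $-\pi x r^2$), so rather than a quadratic formula one can use a numerical root-finder. A bisection sketch with the given values, assuming $\theta$ is measured in degrees:

```python
from math import pi, tan, radians

v, x, y, theta = 65712.4, 3, 2, radians(30)    # theta assumed to be 30 degrees

def g(r):
    # v = pi*r^3*(2y - (2/3)tan(theta) - x/r) rearranged to g(r) = 0;
    # expanding shows this is cubic in r, not quadratic.
    return pi * (2 * y - (2 / 3) * tan(theta)) * r**3 - pi * x * r**2 - v

lo, hi = 1.0, 100.0       # g changes sign on this bracket
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2         # approximately 18.2 for these inputs
```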
Daniel Fischer
83,702
<p>It is sufficient to show that $g^{-1} \in \overline{A}$, then $\langle g\rangle \subset \overline{A}$, and $\overline{A}$ is the closure of a subgroup, hence a subgroup.</p> <p>If $g$ has finite order, it is trivial that $g^{-1} \in A = \overline{A}$, so let's suppose that $g^n \neq 1$ for $n \neq 0$.</p> <p>If $g^{-1} \notin \overline{A}$, then there is a symmetric open neighbourhood $U$ of $1$ with $g^{-1}U \cap A = \varnothing$.</p> <p>Then, for all $n \in \mathbb{N}$, we have $g^nU \cap A = \{g^n\}$. Otherwise, if $g^k \in g^nU$ for $k \neq n$, by symmetry of $U$ we can assume that $k &gt; n$, then $g^{k-n-1} \in g^{-1}U\cap A$, which contradicts the choice of $U$.</p> <p>Now, $A$ is an infinite set, hence has an accumulation point $p$, since $G$ is compact. Let $V$ a symmetric open neighbourhood of $1$ with $V\cdot V \subset U$. Since $p$ is an accumulation point of $A$, $pV$ contains at least two points $g^k$ and $g^n$, $n \neq k$ of $A$. But then $g^k \in g^nU \cap A$. That contradicts the above, hence $g^{-1} \in \overline{A}$.</p>
452,306
<p>I am trying to be able to find the radius of a cone combined with a cylinder. See my other question (Solving for radius of a combined shape of a cone and a cylinder where the cone is base is concentric with the cylinder? part2 )</p> <p>I have a volume calculation that has been reduced as far as I know how to.</p> <p>Known values:</p> <p>$$v=65712.4$$ $$x=3$$ $$y=2$$ $$\theta=30$$ $$r=\text{unknown}$$</p> <p>$$v=\pi r^3\left(2y-\frac{2}{3}\tan\theta-\frac{x}{r}\right)$$</p> <p>Since I haven't solved a quadratic equation in a while, I would appreciate it explained in steps.</p> <p>Thank you for your time.</p>
zudumathics
265,160
<p>I really like Fischer's solution. I would like to post an answer based on Fischer's answer in the "filling the blank" spirit - certainly helpful for beginners like me.</p> <p>We need to get help from the two theorems below (refer to section 1.15 in Bredon's <em>Topology and Geometry</em>):</p> <ul> <li>In a topological group $G$ with unity element $1$, the symmetric neighborhoods of $1$ form a neighborhood basis at $1$.</li> <li>If $G$ is a topological group and $U$ is any neighborhood of $1$ and $n$ is any positive integer, then there exists a symmetric neighborhood $V$ of $1$ such that $V^n\subset U$.</li> </ul> <p>Since $A.A\subset A$, by the same argument using the continuity of the multiplication map to prove the closure of a subgroup is a subgroup, we have $\bar{A}.\bar{A}\subset\bar{A}$. Therefore, if $g^{-1}\in\bar{A}$, then $g^{-n}\in\bar{A}$ for any $n\in\textbf{N}$, and so the group $\langle g\rangle\subset\bar{A}$. Taking closures in $A\subset\langle g\rangle\subset\bar{A}$ yields $\bar{A}=\bar{\langle g\rangle}$, which means $\bar{A}$ is a subgroup due to being the closure of a subgroup.</p> <p>We now show that $g^{-1}\in \bar{A}$. The case $g$ has finite order is trivial, so let $g$ have infinite order. Suppose $g^{-1}\notin\bar{A}$. Then by definition, there is a neighborhood $U'$ of $1$ such that, by the homeomorphic translation, $g^{-1}U'\cap A=\emptyset$. Since the symmetric neighborhoods form a neighborhood basis of $1$, we can choose a symmetric neighborhood $U$ of $1$ such that $g^{-1}U\cap A=\emptyset$.</p> <p>Since $A$ is an infinite set in a compact space, $A$ has a limit point $p$. We can choose a symmetric neighborhood $V$ of $1$ such that $V^2\subset U$. Then, $pV$, being a neighborhood of a limit point of $A$, must contain at least two distinct points $g^m$, $g^n$ ($m, n\in \textbf{N}$), i.e. $g^m=pa$, $g^n=pb$ for some $a,b\in V$. 
Due to $V$ being symmetric and $V^2\subset U$, $b^{-1}a=u$ for some $u\in U$, and we have $g^m=pa=pb.u\in g^n U\cap A$</p> <p>Now, if $m&gt;n$, then $g^{m-n-1}\in g^{-1}U\cap A$ (contradict with our definition of $U$). If $n&gt;m$, then since $U^{-1}=U$, $g^m\in g^n U^{-1}$, and so $g^n\in g^m U\cap A$ (also contradict with our definition of $U$). Therefore $g^{-1}\in \bar{A}$.</p>
1,187,713
<p>How would I go about proving that if $a_n$ is a real sequence such that $\lim_{n\to\infty}|a_n|=0$, then there exists a subsequence of $a_n$, which we call $a_{n_k}$, such that $\sum_{k=1}^\infty a_{n_k}$ is convergent.</p> <p>I think that I can choose terms $a_{n_k}$ such that they are terms of a geometric series, so that means that it will converge, but I don't know how to formally state this.</p>
Dimitris
37,229
<p>Your idea is good. You can pick $a_{k_1}$ such that $|a_{k_1}|&lt;1/2$. Then pick $k_2&gt;k_1$ so that $|a_{k_2}|&lt;1/4$, and inductively pick ${k_n}&gt;{k_{n-1}}$ such that $|a_{k_n}|&lt;1/2^n$. </p>
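The construction above can be run mechanically; here is a sketch using $a_n = 1/n$ (which tends to $0$ but is not summable) as a hypothetical example:

```python
# Greedily pick k_1 < k_2 < ... with |a_{k_n}| < 1/2^n, as in the answer.
def a(n):
    return 1.0 / n

indices = []
k = 0
for n in range(1, 21):
    k += 1
    while not a(k) < 2.0 ** (-n):   # first admissible index past the previous one
        k += 1
    indices.append(k)               # here this finds k_n = 2^n + 1

# The indices are strictly increasing, and the picked subseries is
# dominated by the geometric series sum 1/2^n = 1, hence convergent.
assert all(i < j for i, j in zip(indices, indices[1:]))
assert sum(a(k) for k in indices) < 1.0
```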
127,086
<p>I am struggling with an integral pretty similar to one already resolved in MO (link: <a href="https://mathoverflow.net/questions/101469/integration-of-the-product-of-pdf-cdf-of-normal-distribution">Integration of the product of pdf &amp; cdf of normal distribution </a>). I will reproduce the calculus bellow for the sake of clarity, but I want to stress the fact that my computatons are essentially a reproduction of the discussion of the previous thread.</p> <p>In essence, I need to solve: <span class="math-container">$$\int_{-\infty}^\infty\Phi\left(\frac{f-\mathbb{A}}{\mathbb{B}}\right)\phi(f)\,df,$$</span> where <span class="math-container">$\Phi$</span> is cdf of a standard normal, and <span class="math-container">$\phi$</span> its density. <span class="math-container">$\mathbb{B}$</span> is a negative constant.</p> <p>As done in the aforementioned link, the idea here is to compute the derivative of the integral with respect to <span class="math-container">$\mathbb{A}$</span> (thanks to Dominated Convergence Theorem, integral and derivative can switch positions). 
With this, <span class="math-container">\begin{align*} \partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]&amp;=\int_{-\infty}^\infty\partial_A\left[\Phi\left(\frac{f-A}{B}\right)\phi(f)\right]\,df=\int_{-\infty}^\infty-\frac{1}{B}\phi\left(\frac{f-A}{B}\right)\phi(f)\,df \end{align*}</span> We note now that </p> <p><span class="math-container">$$\phi\left(\frac{f-A}{B}\right)\phi(f)=\frac{1}{2\pi}\exp\left(-\frac{1}{2}\left[\frac{(f-A)^2}{B^2}+f^2\right]\right)=\frac{1}{2\pi}\exp\left(-\frac{1}{2B^2}\left[f^2(1+B^2)+A^2-2Af\right]\right)$$</span> <span class="math-container">$$=\frac{1}{2\pi}\exp\left(-\frac{1}{2B^2}\left[\left(f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right)^2+\frac{B^2}{1+B^2}A^2\right]\right)$$</span></p> <p>Finally, then,</p> <p><span class="math-container">$$\partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]$$</span></p> <p><span class="math-container">$$\ \ \ \ \ =-\frac{1}{\sqrt{2\pi}B}\exp\left(-\frac{A^2}{2(1+B^2)}\right)\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty\exp\left(-\frac{1}{2B^2}\left[f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right]^2\right)\,df$$</span></p> <p>and with the change of variable <span class="math-container">\begin{align} \left[y\longmapsto f\frac{\sqrt{1+B^2}}{B}-\frac{A}{B\sqrt{1+B^2}}\Longrightarrow df=\frac{B}{\sqrt{1+B^2}}\,dy\right] \end{align}</span> we get <span class="math-container">\begin{align} \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty\exp\left(-\frac{1}{2B^2}\left[f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right]^2\right)\,df=\frac{B}{\sqrt{1+B^2}}\int_{-\infty}^{\infty}\phi(y)\,dy=\frac{B}{\sqrt{1+B^2}} \end{align}</span> This means that <span class="math-container">\begin{align} \partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]&amp;=-\frac{1}{\sqrt{2\pi}B}\exp\left(-\frac{A^2}{2(1+B^2)}\right)\frac{B}{\sqrt{1+B^2}}=-\frac{1}{\sqrt{1+B^2}}\phi\left(\frac{A}{\sqrt{1+B^2}}\right) \end{align}</span> At this point, given that (as <span
class="math-container">$\mathbb{B}$</span> is negative) <span class="math-container">$$\Phi\left(\frac{f-A}{\mathbb{B}}\right)\phi(f)=0$$</span> when <span class="math-container">$\mathbb{A}\rightarrow-\infty$</span>, the integral we are looking for is equal to <span class="math-container">\begin{align} \int_{-\infty}^{\mathbb{A}}-\frac{1}{\sqrt{1+\mathbb{B}^2}}\phi\left(\frac{A}{\sqrt{1+\mathbb{B}^2}}\right)\,dA \end{align}</span> Again with the obvious change of variables <span class="math-container">$$\left[y\longmapsto\frac{A}{\sqrt{1+\mathbb{B}^2}}\Longrightarrow\sqrt{1+\mathbb{B}^2}\,dy=dA\right]$$</span> one gets <span class="math-container">\begin{align} \int_{-\infty}^{\mathbb{A}}-\frac{1}{\sqrt{1+\mathbb{B}^2}}\phi\left(\frac{A}{\sqrt{1+\mathbb{B}^2}}\right)\,dA=-\frac{1}{\sqrt{1+\mathbb{B}^2}}\sqrt{1+\mathbb{B}^2}\int_{-\infty}^{\mathbb{A}/\sqrt{1+\mathbb{B}^2}}\phi(y)\,dy=-\Phi({\mathbb{A}/\sqrt{1+\mathbb{B}^2}}). \end{align}</span> The problem here is that this number should obviously be positive, so at some point I am missing a sign. As the computations seem sound to me, I would like to see if anyone could help me to find my mistake. </p> <p>Many thanks to you all.</p>
Did
4,661
<p><a href="http://www.youtube.com/watch?v=aNUr__-VZeQ">The horror, the horror</a>... :-)</p> <p>Recall that $\Phi(x)=P[X\leqslant x]$ for every $x$, where the random variable $X$ is standard normal, and that, for every suitable function $u$, $$ \int_{-\infty}^{+\infty}u(x)\phi(x)\mathrm dx=E[u(Y)], $$ where the random variable $Y$ is standard normal. Using this for $u=\Phi$, one sees that the integral $I$ you are interested in is $$ I=P[X\leqslant B^{-1}(Y-A)]=P[BX\geqslant Y-A]=P[Z\leqslant A], $$ where $Z=Y-BX$, where $X$ and $Y$ are i.i.d. standard normal, and where we used the fact that $B\lt0$ to reverse the inequality sign. Now, the random variable $Z$ is centered gaussian with variance $\sigma^2=1+B^2$, hence $Z=\sigma U$ with $U$ standard normal, and $$ I=P[U\leqslant A/\sigma]=\Phi(A/\sqrt{1+B^2}). $$</p>
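A numerical spot-check of this identity (midpoint-rule quadrature against the closed form, with arbitrary test values $A$ and $B<0$):

```python
from math import erf, exp, pi, sqrt

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):  # standard normal pdf
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

A, B = 0.7, -1.3   # arbitrary test values, with B < 0 as in the question

# midpoint rule on [-12, 12]; the Gaussian tails beyond that are negligible
n, lo, hi = 100_000, -12.0, 12.0
h = (hi - lo) / n
integral = sum(Phi((lo + (i + 0.5) * h - A) / B) * phi(lo + (i + 0.5) * h)
               for i in range(n)) * h

closed_form = Phi(A / sqrt(1.0 + B * B))
assert abs(integral - closed_form) < 1e-6
```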
117,024
<p>The trivial approach of counting the number of triangles in a simple graph $G$ of order $n$ is to check for every triple $(x,y,z) \in {V(G)\choose 3}$ if $x,y,z$ forms a triangle. </p> <p>This procedure gives us the algorithmic complexity of $O(n^3)$.</p> <p>It is well known that if $A$ is the adjacency matrix of $G$ then the number of triangles in $G$ is $tr(A^3)/6.$</p> <p>Since matrix multiplication can be computed in time $O(n^{2.37})$ it is natural to ask:</p> <p>Is there any (known) faster method for computing the number of triangles of a graph?</p>
27rabbit
978,635
<p>In Competitive Programming, we have a particularly popular algorithm for a rather sparse graph running in <span class="math-container">$O(m\sqrt{m})$</span> time complexity where <span class="math-container">$m$</span> is the number of the edges in the graph. Although it is certainly not the fastest, as Listing mentioned, it's still efficient in most cases.</p> <p>Let <span class="math-container">$G$</span> denote the original graph.</p> <p>The algorithm goes as follows:</p> <ol> <li><p>Add directions to the edges in the original undirected graph according to the degree of the vertices in the original graph <span class="math-container">$G$</span>, say the original edge <span class="math-container">$(u,v)$</span> is now <span class="math-container">$(u\rightarrow v)$</span> iff <span class="math-container">$d(u)&lt;d(v)$</span>, where <span class="math-container">$d(\cdot)$</span> denotes the degree of a vertex in <span class="math-container">$G$</span> (ties <span class="math-container">$d(u)=d(v)$</span> can be broken by vertex index). In this way we get a new directed graph <span class="math-container">$G^\prime$</span>. A triangle in <span class="math-container">$G$</span> is transformed into this triple-edge-tuple: <span class="math-container">$((u\rightarrow v), (u\rightarrow w), (v\rightarrow w))$</span>.</p> </li> <li><p>First, enumerate each vertex <span class="math-container">$u$</span>, and then enumerate all its out-neighbours in the descending order of degree, i.e. we enumerate the neighbour <span class="math-container">$v_2$</span> ahead of the neighbour <span class="math-container">$v_1$</span> if <span class="math-container">$d(v_2) &gt; d(v_1)$</span>. And when enumerating, we mark the neighbour with the unique timestamp of <span class="math-container">$u$</span> so that we can know whether we have gone through a vertex before. <strong>NOTE</strong> that here out-neighbour means the neighbour in the directed graph <span class="math-container">$G^\prime$</span> instead of original graph <span class="math-container">$G$</span>, i.e. 
<span class="math-container">$v$</span> is the out-neighbour of <span class="math-container">$u$</span> iff there is an edge <span class="math-container">$(u\rightarrow v)$</span> in <span class="math-container">$G^\prime$</span>.</p> </li> <li><p>Now that we have two vertices <span class="math-container">$u$</span> and <span class="math-container">$v$</span>, we enumerate the out-neighbours of <span class="math-container">$v$</span>, each denoted <span class="math-container">$w$</span>, and we can easily check if there is an edge <span class="math-container">$(u\rightarrow w)$</span> because of the timestamp we made in step 2. The number of <span class="math-container">$w$</span> for which there is an edge <span class="math-container">$(u\rightarrow w)$</span> is the number of triangles that <span class="math-container">$u$</span> and <span class="math-container">$v$</span> can form.</p> </li> </ol> <p>In this way, we can count all the triangles. And here is the proof of the time complexity.</p> <p>The key point is to show that the out-degrees in the new graph <span class="math-container">$G^\prime$</span> are rather small. Consider an arbitrary vertex <span class="math-container">$u$</span>: if its degree in the original graph is <span class="math-container">$d(u)\le\sqrt{m}$</span>, then its out-degree <span class="math-container">$d_{+}^\prime(u)$</span> in the new graph <span class="math-container">$G^\prime$</span> is certainly also less than or equal to <span class="math-container">$\sqrt{m}$</span>. 
If <span class="math-container">$d(u)&gt;\sqrt{m}$</span>, then <span class="math-container">$d_{+}^\prime(u)$</span> will not be greater than <span class="math-container">$2\sqrt{m}$</span> because the directed edge in new graph is from a smaller-degree vertex to a larger-degree vertex, and there won't be more than <span class="math-container">$2\sqrt{m}$</span> vertices with degree larger than <span class="math-container">$\sqrt{m}$</span>.</p> <p>This is a pretty algorithm, hope it could inspire the following readers here.</p>
4,637,565
<p>I am thinking of positive sequences whose sum is infinite but whose sum of squares is not.</p> <p>One representative sequence is <span class="math-container">$$x[n] = \frac{a}{n+b},$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are given real numbers such that <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b\ge0$</span>.</p> <p>I know that there will be infinitely many more sequences <span class="math-container">$x[n]$</span> such that <span class="math-container">$x[n]\ge0, ~n=1, 2, \ldots$</span>, <span class="math-container">$\sum x[n] = \infty$</span>, and <span class="math-container">$\sum (x[n])^2 \le M$</span> for a sufficiently large constant value <span class="math-container">$M$</span>.</p> <p>Can you give me some examples? If possible, I would really appreciate it if you could tell me how to find these sequences (i.e., the methodology of how to find them).</p>
Gareth Ma
948,125
<p>Essentially copying off <a href="https://en.wikipedia.org/wiki/Sequence_space" rel="nofollow noreferrer">wikipedia</a>, the property you are asking for is related to something called the <span class="math-container">$\ell^p$</span> sequence space. Specifically, for some base field, say the reals, for <span class="math-container">$0 &lt; p &lt; \infty$</span>, the <span class="math-container">$\ell^p$</span> space consists of all sequences <span class="math-container">$(x_n)$</span> satisfying <span class="math-container">$\sum_n |x_n|^p &lt; \infty$</span>. You are looking for real sequences that lie in the <span class="math-container">$\ell^2(\mathbb{R})$</span> space but not in <span class="math-container">$\ell^1(\mathbb{R})$</span> space. As far as I know, if <span class="math-container">$\mathbb{R}$</span> is replaced by a closed interval <span class="math-container">$[a, b]$</span>, then we have <span class="math-container">$\ell^1([a, b]) \subseteq \ell^2([a, b])$</span>, see <a href="https://math.stackexchange.com/a/3425597">here</a>. There are various related questions on mathSE as well. Hope this helps your googling journey.</p>
4,637,565
<p>I am thinking of positive sequences whose sum is infinite but whose sum of squares is not.</p> <p>One representative sequence is <span class="math-container">$$x[n] = \frac{a}{n+b},$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are given real numbers such that <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b\ge0$</span>.</p> <p>I know that there will be infinitely many more sequences <span class="math-container">$x[n]$</span> such that <span class="math-container">$x[n]\ge0, ~n=1, 2, \ldots$</span>, <span class="math-container">$\sum x[n] = \infty$</span>, and <span class="math-container">$\sum (x[n])^2 \le M$</span> for a sufficiently large constant value <span class="math-container">$M$</span>.</p> <p>Can you give me some examples? If possible, I would really appreciate it if you could tell me how to find these sequences (i.e., the methodology of how to find them).</p>
Adam Latosiński
653,715
<p>Let's first notice, that if <span class="math-container">$$\limsup_{n\to\infty} \frac{x_{n+1}}{x_n} &lt; 1$$</span> then both series <span class="math-container">$\sum_{n=1}^\infty x_n$</span> and <span class="math-container">$\sum_{n=1}^\infty x_n^2$</span> converge. On the other hand, if <span class="math-container">$$\liminf_{n\to\infty} \frac{x_{n+1}}{x_n} &gt; 1$$</span> then both series <span class="math-container">$\sum_{n=1}^\infty x_n$</span> and <span class="math-container">$\sum_{n=1}^\infty x_n^2$</span> diverge. Therefore you need as a necessary (but not sufficient) condition <span class="math-container">$$ \liminf_{n\to\infty} \frac{x_{n+1}}{x_n} \le 1 \le \limsup_{n\to\infty} \frac{x_{n+1}}{x_n}.$$</span></p> <p>To find the convergence of such series, you can often use the <a href="https://en.wikipedia.org/wiki/Ratio_test#2._Raabe%27s_test" rel="nofollow noreferrer">Raabe's test</a>. We define <span class="math-container">$$ y_n = n\left(\frac{x_n}{x_{n+1}} -1\right)$$</span> <span class="math-container">$$ z_n = n\left(\frac{x^2_n}{x^2_{n+1}} -1\right) $$</span> The series <span class="math-container">$\sum_{n=1}^\infty x_n$</span> diverges while <span class="math-container">$\sum_{n=1}^\infty x_n^2$</span> converges if <span class="math-container">$$ \limsup_{n\to\infty} y_n \le 1, \qquad \liminf_{n\to\infty} z_n &gt; 1 $$</span> For example, any sequence which has the asymptotic behavior <span class="math-container">$x_n \sim n^{-\alpha}$</span>, <span class="math-container">$\alpha \in(\frac12, 1]$</span> will give you <span class="math-container">$$\lim_{n\to\infty} y_n = \alpha \le 1 $$</span> <span class="math-container">$$\lim_{n\to\infty} z_n = 2\alpha &gt; 1 $$</span> so it will satisfy these conditions.</p>
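Numbers cannot prove convergence, but they illustrate the $\alpha = 3/4$ case nicely: partial sums of $x_n = n^{-3/4}$ keep growing like $4N^{1/4}$, while partial sums of $x_n^2 = n^{-3/2}$ stay below $\zeta(3/2)\approx 2.612$:

```python
# Numerical illustration of the alpha = 3/4 case from the answer above.
N = 100_000
s1 = sum(n ** -0.75 for n in range(1, N + 1))   # grows roughly like 4*N^(1/4)
s2 = sum(n ** -1.5 for n in range(1, N + 1))    # bounded by zeta(3/2) ~ 2.612
assert s1 > 60          # still growing without bound
assert s2 < 2.62        # bounded, so the squared series converges
```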
1,039,563
<p>Whether the graphs G and G' given below are isomorphic?</p> <p><img src="https://i.stack.imgur.com/0evn6.jpg" alt="enter image description here"></p>
Arthur
15,500
<p>$f(t)$ goes to either positive or negative infinity as $t$ grows, assuming it has degree at least $1$. That means that there is some $T$ such that $|f(t)|&gt;10^9$ as long as $t&gt;T$.</p> <p>Choose an $n\in \Bbb N$ such that $2\pi n&gt;T$. Then compare $$ \int_0^{2\pi n} f(t) \sin t\: dt $$ and $$ \int_0^{2\pi (n+1/2)} f(t) \sin t\: dt $$ They must differ by at least $10^9$. The same for the last one and $$ \int_0^{2\pi (n+1)} f(t) \sin t\: dt $$ So the integral cannot converge.</p>
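For the simplest case $f(t)=t$ one can make this concrete: $\sin t - t\cos t$ is an antiderivative of $t\sin t$, and the gap between the integral at $2\pi n$ and at $2\pi(n+1/2)$ grows linearly (it equals $4\pi n + \pi$), so the partial integrals cannot settle to a limit:

```python
from math import sin, cos, pi

def F(X):
    # antiderivative of t*sin(t): d/dX (sin X - X cos X) = X sin X
    return sin(X) - X * cos(X)

# gaps between consecutive half-period endpoints grow without bound
gaps = [F(2 * pi * (n + 0.5)) - F(2 * pi * n) for n in range(1, 50)]
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))
assert gaps[-1] > 100
```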
4,319,590
<blockquote> <p>Let <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span> and <span class="math-container">$x,y \in G$</span>. Show that <span class="math-container">$x(Hy)=(xH)y.$</span></p> </blockquote> <p>I have that <span class="math-container">$Hy=\{hy \mid h \in H\}$</span> so wouldn't <span class="math-container">$x(Hy)=\{x hy \mid h \in H\}$</span>? If so there doesn't seem to be much to be shown since if this holds I suppose that <span class="math-container">$(xH)y=\{x h y \mid h \in H\}$</span> would also hold and these two are clearly the same sets? Am I misinterpreting the set <span class="math-container">$x(Hy)$</span>? Should this be <span class="math-container">$\{xhy \mid h \in H, y \in G\}$</span> for fixed <span class="math-container">$y$</span>?</p>
Servaes
30,382
<p>You are entirely correct; there isn't much to be shown, and you've shown it.</p>
1,699,752
<p>Does $ 1 + 1/2 - 1/3 + 1/4 +1/5 - 1/6 + 1/7 + 1/8 - 1/9 + ...$ converge? </p> <p>I know that $(a_n)= 1/n$ diverges, and $(a_n)= (-1)^n (1/n)$, converges, but given this pattern of a negative number every third element, I am unsure how to determine if this converges. </p> <p>I tried to use the comparison test, but could not find sequences to compare it to, and the alternating series test doesn't seem to work, because every other is not negative. </p>
carmichael561
314,708
<p>Let $s_n$ be the $n$th partial sum of the series. If the series converges, then the sequence $\{s_n\}$ is bounded.</p> <p>However, observe that $s_4&gt;1+\frac{1}{4}$, $s_7&gt; 1+\frac{1}{4}+\frac{1}{7}$, and in general $$ s_{3m+1}&gt;\sum_{k=0}^m\frac{1}{3k+1} $$ Since $\sum_{k=0}^{\infty}\frac{1}{3k+1}$ diverges, this shows that the sequence $\{s_n\}$ is not bounded, so the series diverges.</p>
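A quick numerical illustration of this bound (with the sign pattern $+,+,-$, i.e. the $n$-th term is $-1/n$ exactly when $3\mid n$):

```python
def term(n):
    return -1.0 / n if n % 3 == 0 else 1.0 / n

def s(n):
    return sum(term(k) for k in range(1, n + 1))

# s_{3m+1} dominates the divergent sum of 1/(3k+1), k = 0..m,
# because each block 1/(3k+1) + 1/(3k+2) - 1/(3k+3) exceeds 1/(3k+1)
for m in (1, 10, 100, 1000):
    bound = sum(1.0 / (3 * k + 1) for k in range(m + 1))
    assert s(3 * m + 1) > bound
assert s(3001) > 3   # the partial sums are visibly unbounded
```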
2,030,547
<p>The following expression came up in a proof I was reading, where it is said "It is easily shown: $$\lim_{x\to\infty} x(1-\frac{\ln (x-1)}{\ln x})=0."$$</p> <p>Unfortunately I'm not having an easy time showing it. I guess it should come down to showing that the ratio $\frac{\ln (x-1)}{\ln x}$ converges to 1 superlinearly, which seems intuitive but I don't know how to prove it formally. Any tips?</p> <p>Edit: original question had an implicit typo - I had $\ln x - 1$ rather than the intended $\ln(x-1)$.</p>
Brian M. Scott
12,042
<p>The displayed line <em>is</em> the definition of $v_0^*$. Each vector $v_0$ in $V$ determines a linear functional $v_0^*$ (which I read ‘vee-nought-star’) on $V^*$, i.e., an element $v_0^*$ of $V^{**}$. This $v_0^*$ is therefore a linear function from $V^*$ to $\Bbb R$, and it’s defined by</p> <p>$$v_0^*(f)=f(v_0)\tag{1}$$</p> <p>for each $f\in V^*$. That is, its value at the linear functional $f\in V^*$ is simply the value of $f$ at $v_0$.</p> <p>One does of course have to verify that the function from $V^*$ to $\Bbb R$ defined by $(1)$ actually <em>is</em> linear, but this is quite straightforward.</p>
2,359,621
<p>Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where</p> <p>$$f(x,y):=\begin{cases} \frac{x^3}{x^2+y^2} &amp; \textit{ if } (x,y)\neq (0,0) \\ 0 &amp; \textit{ if } (x,y)= (0,0) \end{cases} $$</p> <p>If one wants to show the continuity of $f$, I mainly want to show that </p> <p>$$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$</p> <p>But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it equal to $\lim\limits_{(x,y)\rightarrow0}=\lim\limits_{||(x,y)||\rightarrow0}$ or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$?</p> <p>If so, how does one show that the above function tends to zero?</p>
Dave
334,366
<p>$\lim\limits_{(x,y)\to 0}$ likely means $\lim\limits_{(x,y)\to(0,0)}$, which means that $x$ and $y$ are both tending to $0$. One could use polar coordinates where $x=r\cos(\theta)$ and $y=r\sin(\theta)$ to obtain: $$\lim_{(x,y)\to(0,0)}\frac{x^3}{x^2+y^2}=\lim_{r\to 0}\frac{r^3\cos^3(\theta)}{r^2}=\lim_{r\to 0} r\cos^3(\theta)$$ Then note that $|\cos^3(\theta)|\leq 1~~~\forall\theta\in\Bbb R$.</p>
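The polar-coordinates bound $|f| = r|\cos^3(\theta)| \le r$ on the circle of radius $r$ can be sampled numerically:

```python
from math import cos, sin, pi

def f(x, y):
    return x ** 3 / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

def max_on_circle(r, samples=720):
    # largest |f| on the circle of radius r; in polar coordinates
    # f = r*cos(theta)^3, so this never exceeds r
    return max(abs(f(r * cos(2 * pi * k / samples), r * sin(2 * pi * k / samples)))
               for k in range(samples))

for r in (1.0, 1e-2, 1e-5):
    assert max_on_circle(r) <= r * (1 + 1e-12)
```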
1,007,533
<p>Prove that if $v$ is an eigenvector of the matrix $A$ with eigenvalue $c$, then $A^2v=c^2v$.</p> <p>Pretty much all I have is:</p> <p>$Av=cv$, where $v$ is a nonzero vector.</p>
The Artist
154,018
<p>$$Av=cv$$</p> <p>$$AAv=Acv$$</p> <p>Since $c$ is a scalar, you can factor it out:</p> <p>$$AAv=c(Av)$$</p> <p>$$A^2v=c(Av)$$</p> <blockquote> <p>But you know that $Av=cv$. So </p> </blockquote> <p>$$A^2v=c(cv)=c^2v$$</p>
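The same computation on a small hypothetical example ($v=(1,1)$ is an eigenvector of the matrix below with eigenvalue $c=3$):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# hypothetical 2x2 example chosen for illustration
A = [[2, 1], [1, 2]]
v = [1, 1]
c = 3

Av = matvec(A, v)
assert Av == [c * x for x in v]            # A v = c v
AAv = matvec(A, Av)
assert AAv == [c * c * x for x in v]       # A^2 v = c^2 v
```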
2,797,709
<p>How is $\; 4 \cos^2 (t/2) \sin(1000t) = 2 \sin(1000t) + 2\sin(1000t)\cos t\,$? This is actually part of a much bigger physics problem, so I need to solve it from the LHS quickly. Is there an easy method by which I can do this?</p>
José Carlos Santos
446,262
<p>Just use the fact that$$\cos(t)=\cos^2\left(\frac t2\right)-\sin^2\left(\frac t2\right)=2\cos^2\left(\frac t2\right)-1.$$</p>
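A quick numerical spot-check of the resulting identity $4\cos^2(t/2)\sin(1000t)=2\sin(1000t)+2\sin(1000t)\cos t$:

```python
from math import cos, sin

# sample a range of t values and compare both sides of the identity
for k in range(200):
    t = -3.0 + 0.03 * k
    lhs = 4 * cos(t / 2) ** 2 * sin(1000 * t)
    rhs = 2 * sin(1000 * t) + 2 * sin(1000 * t) * cos(t)
    assert abs(lhs - rhs) < 1e-9
```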
4,003,948
<p>In the Book that I'm reading (Mathematics for Machine Learning), the following para is given, while listing the properties of a matrix determinant:</p> <blockquote> <p>Similar matrices (Definition 2.22) possess the same determinant. Therefore, for a linear mapping <span class="math-container">$Φ : V → V$</span> all transformation matrices <span class="math-container">$A_Φ$</span> of <span class="math-container">$Φ$</span> have the same determinant. Thus, the determinant is invariant to the choice of basis of a linear mapping.</p> </blockquote> <p>I know that matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar if they satisfy <span class="math-container">$B=C^{-1}AC$</span>. I can prove that determinants of such <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal using other properties of a determinant.</p> <p>But beyond that I don't understand what this paragraph is saying. I can understand all matrices <span class="math-container">$Y$</span> such that <span class="math-container">$Y=X^{-1}AX$</span> have the same determinant as <span class="math-container">$A$</span>, for varying <span class="math-container">$X$</span>s.</p> <p>But how do I connect this to linear mappings of the form <span class="math-container">$Φ : V → V$</span>. What does <span class="math-container">$Φ : V → V$</span> mean here? Maybe someone can give me an example.</p> <p>EDIT: This video is pretty basic, but it helped me understand better <a href="https://www.youtube.com/watch?v=s4c5LQ5a4ek" rel="nofollow noreferrer">https://www.youtube.com/watch?v=s4c5LQ5a4ek</a></p>
Trevor Gunn
437,127
<p>Suppose that <span class="math-container">$V$</span> is the vector space of quadratic polynomials and <span class="math-container">$\Phi$</span> is multiplication by <span class="math-container">$(1 + x)$</span> mod <span class="math-container">$x^3$</span>.</p> <p>Now let us consider two different bases for <span class="math-container">$V$</span>. One will be <span class="math-container">$1, x, x^2$</span> and the other will be <span class="math-container">$1, (1 + x), (1 + x)^2$</span>.</p> <p>In the first basis we have</p> <p><span class="math-container">\begin{align} \phi(1) &amp;= 1 + x \\ \phi(x) &amp;= \phantom{1 + {}}x + x^2 \\ \phi(x^2) &amp;= \phantom{1 + x +{}} x^2. \end{align}</span></p> <p>In the second basis,</p> <p><span class="math-container">\begin{align} \phi(1) &amp;= \phantom{1 - 3} (1 + x) \\ \phi(1 + x) &amp;= \phantom{1 - 3(1 + x) + 3}{}(1 + x)^2 \\ \phi((1 + x)^2) &amp;= (1 + x)^3 \equiv 1 +3x+3x^2 \pmod{x^3} \\ &amp;= 1 - 3(1 + x) + 3(1 + x)^2 \end{align}</span></p> <p>The two matrices we get out of this are <span class="math-container">$$ \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 1 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 1 \end{pmatrix} \text{ and } \begin{pmatrix} 0 &amp; 0 &amp; 1 \\ 1 &amp; 0 &amp; -3 \\ 0 &amp; 1 &amp; 3 \end{pmatrix} $$</span></p> <p>And the statement/fact in question is that these two matrices are similar. If we were to write this as <span class="math-container">$A = PBP^{-1}$</span>, the matrix <span class="math-container">$P$</span> would be the matrix corresponding to writing <span class="math-container">$\{1,(1+x),(1+x)^2\}$</span> in terms of the first basis. I.e.</p> <p><span class="math-container">$$ P = \begin{pmatrix} 1 &amp; 1 &amp; 1 \\ 0 &amp; 1 &amp; 2 \\ 0 &amp; 0 &amp; 1 \end{pmatrix} $$</span></p>
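The similarity $A = PBP^{-1}$ and the equality of determinants can be verified directly; the inverse of $P$ below is written out by hand:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]      # matrix of phi in basis 1, x, x^2
B = [[0, 0, 1], [1, 0, -3], [0, 1, 3]]     # matrix in basis 1, 1+x, (1+x)^2
P = [[1, 1, 1], [0, 1, 2], [0, 0, 1]]      # change-of-basis matrix
P_inv = [[1, -1, 1], [0, 1, -2], [0, 0, 1]]

assert matmul(P, P_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(matmul(P, B), P_inv) == A    # A and B are similar
assert det3(A) == det3(B) == 1             # so their determinants agree
```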
686,361
<p>Given if we know $P(S)$ and $P(C|S)$ and $P(D|S)$, how do you compute $E[C|D=d]$? One way that I thought of is to find the conditional probability of $P(C|D)$ by computing the joint probability $P(C,D,S)$ and marginalizing it over $S$. But, $P(D|S)$ is a binomial distribution with parameter $q$ and $S$. Finding the full joint probability distribution will be too complicated. Does anyone know an easier way to find $E[C|D=d]$? Thanks. </p> <p>Note: I forgot to mention that C and D are independent given S</p>
Did
6,179
<p>$$ E(C\mid D=d)=\frac{E(C;D=d)}{P(D=d)} $$ $$ E(C;D=d)=\sum_{c,s}c\,P(C=c\mid S=s)\,P(D=d\mid S=s)\,P(S=s) $$ $$ P(D=d)=\sum_{s}P(D=d\mid S=s)\,P(S=s) $$</p>
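These formulas can be checked on a small hypothetical discrete model; all the numbers below are made up for illustration, with $D\mid S\sim\text{Binomial}(2, q_S)$ and $C, D$ conditionally independent given $S$, as in the question's note:

```python
from math import comb

P_S = {0: 0.4, 1: 0.6}
P_C_given_S = {0: {0: 0.5, 1: 0.3, 2: 0.2},
               1: {0: 0.1, 1: 0.4, 2: 0.5}}
q = {0: 0.2, 1: 0.7}

def P_D_given_S(d, s):
    return comb(2, d) * q[s] ** d * (1 - q[s]) ** (2 - d)

def E_C_given_D(d):
    # the three displayed formulas, specialised to finite sums
    num = sum(c * P_C_given_S[s][c] * P_D_given_S(d, s) * P_S[s]
              for s in P_S for c in P_C_given_S[s])
    den = sum(P_D_given_S(d, s) * P_S[s] for s in P_S)
    return num / den

# larger d makes S = 1 (the high-q, high-C state) more likely,
# so E[C | D = d] should increase with d in this model
assert E_C_given_D(0) < E_C_given_D(1) < E_C_given_D(2)
```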
4,273,026
<p>Let <span class="math-container">$\Omega\subset\mathbb{R}^n$</span> be a bounded open set, <span class="math-container">$n\geq 2$</span>. For <span class="math-container">$r&gt;0$</span>, denote by <span class="math-container">$B_r(x_0)=\{x\in\mathbb{R}^n:|x-x_0|&lt;r\}$</span> whose closure is a proper subset of <span class="math-container">$\Omega$</span>. Let <span class="math-container">$u\in W^{1,p}(\Omega)$</span> (the standard Sobolev space) for <span class="math-container">$1&lt;p&lt;n$</span> be a nonnegative, bounded function such that for every <span class="math-container">$\frac{1}{2}\leq\sigma^{'}&lt;\sigma\leq 1$</span>, we have <span class="math-container">\begin{equation} \sup_{B_{\sigma^{'}r}(x_0)}\,u\leq \frac{1}{2}\sup_{B_{\sigma r}(x_0)}\,u+\frac{c}{(\sigma-\sigma^{'})^{\frac{n}{q}}}\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}), \end{equation}</span> where <span class="math-container">$c$</span> is some fixed positive constant, independent of <span class="math-container">$x_0,r$</span>, <span class="math-container">$p^{*}=\frac{np}{n-p}$</span> and <span class="math-container">$|B_r(x_0)|$</span> denotes the Lebesgue measure of the ball <span class="math-container">$B_r(x_0)$</span>. Then by the iteration lemma stated below, we have <span class="math-container">\begin{equation} \sup_{B_{\frac{r}{2}}(x_0)}\,u\leq c\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}), \end{equation}</span> where <span class="math-container">$c$</span> is some fixed positive constant, independent of <span class="math-container">$x_0,r$</span>.</p> <p>Iteration lemma: Let <span class="math-container">$f=f(t)$</span> be a nonnegative bounded function defined for <span class="math-container">$0\leq T_0\leq t\leq T_1$</span>. 
Suppose that for <span class="math-container">$T_0\leq t&lt;\tau\leq T_1$</span> we have <span class="math-container">$$ f(t)\leq c_1(\tau-t)^{-\theta}+c_2+\xi f(\tau), $$</span> where <span class="math-container">$c_1,c_2,\theta,\xi$</span> are nonnegative constants and <span class="math-container">$\xi&lt;1$</span>. Then there exists a constant <span class="math-container">$c$</span> depending only on <span class="math-container">$\theta,\xi$</span> such that for every <span class="math-container">$\rho, R$</span>, <span class="math-container">$T_0\leq \rho&lt;R\leq T_1$</span>, we have <span class="math-container">$$ f(\rho)\leq c[c_1(R-\rho)^{-\theta}+c_2]. $$</span> Applying the iteration lemma with <span class="math-container">$f(t)=\sup_{B_t(x_0)}\,u$</span>, <span class="math-container">$\tau=\sigma r$</span>, <span class="math-container">$t=\sigma^{'}r$</span>, <span class="math-container">$\theta=\frac{n}{q}$</span> in the given estimate on <span class="math-container">$u$</span> above, the second estimate on <span class="math-container">$u$</span> above follows. My question is: can we obtain the following estimate <span class="math-container">\begin{equation} \sup_{B_r(x_0)}\,u\leq c\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}), \end{equation}</span> which estimates the supremum of <span class="math-container">$u$</span> over the whole ball <span class="math-container">$B_r(x_0)$</span>? If it is possible, does it follow by a covering argument? Here <span class="math-container">$c$</span> is some fixed positive constant, independent of <span class="math-container">$x_0,r$</span>.</p> <p>Thank you very much.</p>
user378654
970,339
<p>No: let <span class="math-container">$u(x) = |x|^a$</span> and <span class="math-container">$x_0 = 0$</span>. I do this with <span class="math-container">$q = 1$</span> to simplify computation, but you can repeat the argument with any (or all) <span class="math-container">$q$</span>. The first inequality (up to modifying the constant, and assuming say <span class="math-container">$a \geq 1$</span>) reads <span class="math-container">$$ (\sigma' r)^a \leq \frac{1}{2} (\sigma r)^a + \frac{c_1}{a(\sigma - \sigma')^{n}}r^a. $$</span> When <span class="math-container">$(\sigma'/\sigma)^a \leq \frac{1}{2}$</span> this is true just by looking at the first term, and if not then <span class="math-container">$$ \frac{c_1}{a(\sigma - \sigma')^{n}} \geq \frac{c_1}{a\sigma(1 - (1/2)^{1/a})^{n}} \geq \frac{c_1}{a(1 - (1/2)^{1/a})^{n}} \geq \frac{c_1}{A} $$</span> uniformly for large <span class="math-container">$a$</span>, where <span class="math-container">$A&gt;0$</span> is some upper bound on the denominator. Then the inequality is satisfied with <span class="math-container">$c_1 = A$</span> for all <span class="math-container">$a$</span> large, and all <span class="math-container">$\sigma, \sigma', r$</span>.</p> <p>On the other hand, the desired inequality (with <span class="math-container">$r = 1$</span>) reads <span class="math-container">$$ 1 \leq \frac{c_2}{a}. $$</span> This is false for any given <span class="math-container">$c_2$</span> if <span class="math-container">$a = a(c_2)$</span> is taken large enough.</p> <p>Let me make a further comment, based on you tagging this question under partial differential equations and regularity. The fact that this inequality is false for some simple example is accidental and not particularly interesting. More importantly:</p> <ul> <li>You should not expect inequalities like this to be true. You will almost always lose some room between nested balls when you use cutoff functions, local estimates, etc. 
Maybe once in a blue moon you can &quot;recover&quot; the loss by some argument, but only with enormous effort which hinges on much finer analysis of situations unrelated to what you are mostly interested in.</li> <li>The regularity arguments we develop are OK with needing some room in estimates.</li> <li>Covering arguments let you modify the amount of room you need between sets at the expense of larger constants (e.g. replacing <span class="math-container">$\frac{1}{2}$</span> by <span class="math-container">$\frac{999}{1000}$</span> here), not remove all the room entirely.</li> </ul>
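<p>The failure can also be seen at a glance numerically. A quick sketch (my own illustration, with <span class="math-container">$q=1$</span>): for <span class="math-container">$u(x)=|x|^a$</span>, polar coordinates give <span class="math-container">$\frac{1}{|B_1|}\int_{B_1}|x|^a\,dx=\frac{n}{n+a}$</span>, while <span class="math-container">$\sup_{B_1}u=1$</span>, so the ratio sup/average is unbounded in <span class="math-container">$a$</span>:</p>

```python
# sup u = 1 on B_1(0), while the average of |x|^a over B_1(0) in R^n is
# n/(n+a) (integrate r^a * r^(n-1) over [0,1] in polar coordinates).
# The ratio sup/average = (n + a)/n grows linearly in a, so no single
# constant c can bound the sup over the full ball by the average.
n = 3
ratios = [(n + a) / n for a in (1, 10, 100, 1000)]
print(ratios)  # roughly [1.33, 4.33, 34.33, 334.33]
```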
3,553,975
<p>I fear that this is a stupid question, but I want to have a go anyway. </p> <p>Let <span class="math-container">$k$</span> be a field, and let <span class="math-container">$f(x,y)$</span> be an irreducible homogeneous quadratic polynomial in <span class="math-container">$k[x,y]$</span>. </p> <p><em>Question</em>: (when) is <span class="math-container">$k[x,y]/(f(x,y)) \cong (k[x]/f(x,1))[y]$</span> ?</p> <p>Probably I am seeing ghosts, but is there some more general (correct) identity that I am totally missing ? Can the assumptions on <span class="math-container">$f(x,y)$</span> be relaxed ? </p>
Eugaurie
382,410
<p>A function with the property <span class="math-container">$x &lt; y \Rightarrow f(x) &lt; f(y) $</span> can be called <span class="math-container">$\textbf{strictly monotone increasing}$</span>. </p>
1,051,372
<p>If $|z_1|=1,|z_2|=1$, how can one prove $|1+z_1|+|1+z_2|+|1+z_1z_2|\ge2$</p>
Raclette
196,274
<p>You have that $|1+w| \geq |\Re(1+w)| = |1 + \Re(w)|$, and since we only consider $|w| = 1$, we have $|1 + \Re(w)| = 1 + \Re(w)$. Unfortunately this reduction is not enough: writing $z_1 = e^{it_1}$ and $z_2 = e^{it_2}$, it would require $$1 + \cos(t_1) + 1 + \cos(t_2) + 1 + \cos(t_1+t_2) \geq 2,$$ which is false; at $t_1 = t_2 = 2\pi/3$ the left-hand side equals $3/2$. A proof that does work uses only the triangle inequality. First, $$|1+z_2| + |1+z_1z_2| \geq |(1+z_1z_2) - (1+z_2)| = |z_2|\,|z_1 - 1| = |z_1 - 1|,$$ using $|z_2| = 1$. Then $$|1+z_1| + |z_1 - 1| \geq |(1+z_1) + (1-z_1)| = 2.$$ Combining the two, $$|1+z_1| + |1+z_2| + |1+z_1z_2| \geq |1+z_1| + |1-z_1| \geq 2,$$ with equality for instance at $z_1 = -1$, $z_2 = 1$.</p>
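<p>A brute-force scan over a grid of angles confirms the bound numerically (a quick sketch, not part of the proof):</p>

```python
import cmath
import math

# Scan z1 = e^{i s}, z2 = e^{i t} over a 200 x 200 grid of angles and record
# the smallest value of |1+z1| + |1+z2| + |1+z1*z2|.
K = 200
worst = float("inf")
for k1 in range(K):
    for k2 in range(K):
        z1 = cmath.exp(2j * math.pi * k1 / K)
        z2 = cmath.exp(2j * math.pi * k2 / K)
        worst = min(worst, abs(1 + z1) + abs(1 + z2) + abs(1 + z1 * z2))
print(worst)  # 2.0 up to rounding, attained near z1 = -1, z2 = 1
```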
724,900
<p>Assuming $y(x)$ is differentiable, what is the formula for the derivative ${d\over dx}f(x,y(x))$?</p> <p>I examined some examples but got no clue.</p>
Cameron Williams
22,551
<p>There are two pieces to this: $f$ is a function of $x$ and a function of $y$ which suggests use of the chain rule. The multivariate chain rule says</p> <p>$$\frac{d}{dx}f(x,y(x)) = \frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\frac{dy}{dx}.$$</p> <p>More generally, if you have a function $f(x,y,\ldots,z)$ and the variables $x,y,\ldots,z$ depend on $t$, then</p> <p>$$\frac{d}{dt}f(x(t),y(t),\ldots, z(t)) = \frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}+\cdots+\frac{\partial f}{\partial z}\frac{dz}{dt}.$$</p> <p>(Your case is $x(t) = t$.)</p> <hr> <p>For example, consider $f(x,y) = \sqrt{x^2+y^2}$ and $y(x) = x^2$, then $f(x,y(x)) = \sqrt{x^2+(x^2)^2} = \sqrt{x^2+x^4}.$ If we simply compute the derivative with this new expression for $f$, we get </p> <p>$$\frac{df}{dx} = \frac{2x+4x^3}{2\sqrt{x^2+x^4}} = \frac{x+2x^3}{\sqrt{x^2+x^4}}.$$</p> <p>Let's try using the equation I gave up above to see if it matches.</p> <p>$$\frac{df}{dx} = \frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\frac{dy}{dx} = \frac{x}{\sqrt{x^2+y^2}}+\frac{y(x)}{\sqrt{x^2+y^2}}\cdot 2x = \frac{x+2xy(x)}{\sqrt{x^2+y^2}} = \frac{x+2x^3}{\sqrt{x^2+x^4}}.$$</p> <p>They agree as they should.</p>
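<p>A central finite difference reproduces the same value numerically (a quick sketch of the example above):</p>

```python
import math

# f(x, y) = sqrt(x^2 + y^2) with y(x) = x^2, as in the worked example.
def F(x):
    y = x * x
    return math.sqrt(x * x + y * y)

# The chain-rule result: df/dx = (x + 2x^3) / sqrt(x^2 + x^4).
def dF(x):
    return (x + 2 * x ** 3) / math.sqrt(x ** 2 + x ** 4)

x, h = 1.3, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference, O(h^2) error
print(numeric, dF(x))  # the two values agree to roughly 1e-9
```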
1,334,719
<p>Given $F=A^TA$, where $A$ is an $m\times n$ matrix, what is the derivative w.r.t. $A$?</p> <p>I know that when $A$ is an $m\times 1$ vector, the derivative is $$\frac{\partial F}{\partial A} = 2A.$$</p> <p>Does this equation still hold when $A$ is extended to an $m\times n$ matrix?</p>
mathcounterexamples.net
187,663
<p><strong>Hint.</strong></p> <p>$F$ can be written as a composition of the functions:</p> <p>$$R: A \longmapsto A^{T}$$ and $$S: (A,B) \longmapsto A.B$$ The first one is linear, so its derivative is itself. The second one is bilinear and its derivative is $S^\prime(A,B)(h,k)=A.k+h.B$.</p> <p>At the end, you have to use the chain rule to compute the derivative of the composition of both.</p>
1,334,719
<p>Given $F=A^TA$, where $A$ is an $m\times n$ matrix, what is the derivative w.r.t. $A$?</p> <p>I know that when $A$ is an $m\times 1$ vector, the derivative is $$\frac{\partial F}{\partial A} = 2A.$$</p> <p>Does this equation still hold when $A$ is extended to an $m\times n$ matrix?</p>
Mark Joshi
106,024
<p>$F(A+H) = (A+H)^{t}(A+H) = (A^t + H^t)(A+H)=A^t A + A^tH + H^t A + H^t H.$</p> <p>So $$F(A+H) - F(A) = A^t H + H^t A + O(||H||^2).$$ So the derivative is the map $$ H \mapsto A^t H + H^t A. $$</p>
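<p>A difference quotient confirms this linear map numerically (a quick NumPy sketch, not part of the original argument):</p>

```python
import numpy as np

# F(A) = A^T A. From F(A+H) - F(A) = A^T (tH) + (tH)^T A + t^2 H^T H,
# the derivative at A is the linear map H |-> A^T H + H^T A; the
# leftover t * H^T H in the quotient vanishes linearly in t.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
H = rng.standard_normal((4, 3))

t = 1e-6
quotient = ((A + t * H).T @ (A + t * H) - A.T @ A) / t
linear = A.T @ H + H.T @ A
print(np.abs(quotient - linear).max())  # about 1e-6, the O(t) remainder
```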
3,120,729
<p>I came across this exercise:</p> <blockquote> <p>Prove that <span class="math-container">$$\tan x+2\tan2x+4\tan4x+8\cot8x=\cot x$$</span></p> </blockquote> <p>Proving this seems tedious but doable, I think, by exploiting double angle identities several times, and presumably several terms on the left hand side would vanish or otherwise reduce to <span class="math-container">$\cot x$</span>.</p> <p>I started to wonder if the pattern holds, and several plots for the first few powers of <span class="math-container">$2$</span> seem to suggest so. I thought perhaps it would be easier to prove the more general statement:</p> <blockquote> <p>For <span class="math-container">$n\in\{0,1,2,3,\ldots\}$</span>, prove that <span class="math-container">$$2^{n+1}\cot(2^{n+1}x)+\sum_{k=0}^n2^k\tan(2^kx)=\cot x$$</span></p> </blockquote> <p>Presented this way, a proof by induction seems to be the smart way to do it.</p> <p><strong>Base case:</strong> Trivial, we have</p> <p><span class="math-container">$$\tan x+2\cot2x=\frac{\sin x}{\cos x}+\frac{2\cos2x}{\sin2x}=\frac{\cos^2x}{\sin x\cos x}=\cot x$$</span></p> <p><strong>Induction hypothesis:</strong> Assume that</p> <p><span class="math-container">$$2^{N+1}\cot(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)=\cot x$$</span></p> <p><strong>Inductive step:</strong> For <span class="math-container">$n=N+1$</span>, we have</p> <p><span class="math-container">$$\begin{align*} 2^{N+2}\cot(2^{N+2}x)+\sum_{k=0}^{N+1}2^k\tan(2^kx)&amp;=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)\\[1ex] &amp;=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)+\cot x \end{align*}$$</span></p> <p>To complete the proof, we need to show</p> <p><span class="math-container">$$2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)=0$$</span></p> <p>I noticed that if I ignore the common factor of <span class="math-container">$2^{N+1}$</span> and make the substitution <span class="math-container">$y=2^{N+1}x$</span>, this 
reduces to the base case,</p> <p><span class="math-container">$$2^{N+1}\left(2\cot2y+\tan y-\cot y\right)=0$$</span></p> <p>and this appears to complete the proof, and the original statement is true.</p> <p>First question: <strong>Is the substitution a valid step in proving the identity?</strong></p> <p>Second question: <strong>Is there a nifty way to prove the special case for <span class="math-container">$n=2$</span>?</strong></p>
J.G.
56,861
<p>Defining <span class="math-container">$t:=\tan y$</span>, <span class="math-container">$$2\cot 2y+\tan y-\cot y=\frac{1-t^2}{t}+t-\frac{1}{t}=0.$$</span>This will work for any <span class="math-container">$y$</span>, so I guess I've answered your questions in reverse order (but to both I say yes). </p>
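<p>A numerical spot-check of the original four-term identity at a few generic points (just a sanity sketch):</p>

```python
import math

# tan x + 2 tan 2x + 4 tan 4x + 8 cot 8x should equal cot x wherever
# every term is defined; cot t is computed here as 1/tan t.
def lhs(x):
    return (math.tan(x) + 2 * math.tan(2 * x) + 4 * math.tan(4 * x)
            + 8 / math.tan(8 * x))

errors = [abs(lhs(x) - 1 / math.tan(x)) for x in (0.1, 0.3, 0.7, 1.1)]
print(errors)  # all tiny, on the order of machine rounding
```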
1,521,124
<p>What will be the value of $3/1!+5/2!+7/3!+\dots$?</p> <p>I'm trying to express it in terms of $e$. Is it possible?</p> <p>I used the Taylor series for $e$.</p>
the_candyman
51,370
<p>I guess you can write it as follow:</p> <p>$$\sum_{n=1}^{+\infty}\frac{2n+1}{n!} = \sum_{n=1}^{+\infty}\left(\frac{2}{(n-1)!}+\frac{1}{n!}\right) = \\ = \sum_{n=1}^{+\infty}\left(\frac{2}{(n-1)!}\right) +\sum_{n=1}^{+\infty}\left(\frac{1}{n!}\right) = \\ = 2\sum_{n=0}^{+\infty}\left(\frac{1}{n!}\right) +\sum_{n=0}^{+\infty}\left(\frac{1}{n!}\right) - 1 = 3e - 1,$$</p> <p>where of course $$\sum_{n=0}^{+\infty}\left(\frac{1}{n!}\right) = e.$$</p>
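<p>The closed form is easy to confirm numerically; the factorial decay makes a few dozen terms plenty (a quick sketch):</p>

```python
import math

# Partial sum of (2n+1)/n! for n = 1..29 versus the closed form 3e - 1.
partial = sum((2 * n + 1) / math.factorial(n) for n in range(1, 30))
print(partial, 3 * math.e - 1)  # both approximately 7.15484548537713
```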
3,027,925
<p>Just for my own understanding of how exactly integration works, are these steps correct:</p> <p><span class="math-container">$$\begin{align}\int x\,d(x^2) \qquad &amp;\implies x^2 = u \\ &amp; \implies x= \sqrt{u}\end{align}$$</span> </p> <p>Thus, it becomes <span class="math-container">$$\int\sqrt{u}\,du = \frac{2}{3}u^{3/2} \implies {2\over3}x^3$$</span></p>
PrincessEev
597,568
<p>While fundamentally correct, aside from your lack of a <span class="math-container">$+C$</span> constant, notationally your work is a bit of a mess (before someone edited it for clarity's sake). </p> <p>First, we start with</p> <p><span class="math-container">$$\int x d(x^2)$$</span></p> <p>From here, we make the substitution <span class="math-container">$u = x^2$</span>. This gives <span class="math-container">$x = \sqrt{u}$</span>. Thus,</p> <p><span class="math-container">$$x d(x^2) = \sqrt{u} du \;\;\; \Rightarrow \;\;\; \int xd(x^2) = \int u^{1/2}du = \frac{2}{3}u^{3/2} + C$$</span></p> <p>Reutilizing our substitution, we finally get our answer:</p> <p><span class="math-container">$$\frac{2}{3}u^{3/2} + C = \frac{2}{3}x^3 +C$$</span></p> <p>Your errors make it hard to follow your thought process, because <span class="math-container">$\sqrt{u} du \neq \frac{2}{3}u^{3/2}$</span>, when you assert they're equal in your last line. </p> <p>As a note, this question was asked before: you might want to check more thoroughly next time. <a href="https://math.stackexchange.com/questions/1288831/how-to-integrate-with-respect-to-x2">The original question.</a></p>
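<p>One can also check the result against a direct Riemann–Stieltjes sum for <span class="math-container">$\int_0^b x\,d(x^2)$</span> (a rough numerical sketch):</p>

```python
# Approximate int_0^b x d(x^2) by the left-point sum x_i * (x_{i+1}^2 - x_i^2)
# and compare with the antiderivative (2/3) x^3 evaluated at the endpoints.
b, N = 2.0, 100_000
xs = [b * i / N for i in range(N + 1)]
stieltjes = sum(xs[i] * (xs[i + 1] ** 2 - xs[i] ** 2) for i in range(N))
exact = (2 / 3) * b ** 3
print(stieltjes, exact)  # both approximately 5.3333
```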
2,741,832
<p>When one first learns measure theory, it is a small novelty to find out that $$\bigcup_{n=0}^\infty B_{\epsilon/2^n}(r_n)$$ is not all of $\mathbb{R}$, where $\{r_n\}$ is an enumeration of the rationals and $\epsilon$ is an arbitrary positive number (notice this fact is equally impressive if $\epsilon$ is small or large).</p> <p>Of course, by measure arguments, the set above has measure at most $\epsilon$ and can't be all of $\mathbb{R}$. However, there doesn't seem to be another canonical line of reasoning that explains why the union above is not all of $\mathbb{R}$. That makes me wonder, what if we remove that ability to use this argument?</p> <blockquote> <p>Is there a pair of sequences of positive real numbers $\{c_n\}$ and $\{d_n\}$ both tending to $0$ such that $$\sum_{n=0}^\infty c_n=\infty=\sum_{n=0}^\infty d_n$$ where we can demonstrate $$\bigcup_{n=0}^\infty B_{c_n}(r_n)=\mathbb{R}\quad\text{and}\quad\bigcup_{n=0}^\infty B_{d_n}(r_n)\neq\mathbb{R}$$ with a fixed enumeration of the rationals $\{r_n\}$?</p> </blockquote> <p>An existential proof of both questions would be sufficient for me. But an explicit $\{c_n\}$ and $\{d_n\}$ would be interesting to see.</p> <p>I feel like the $\{c_n\}$ construction might be fairly easy in comparison to $\{d_n\}$, and using dependent choice, I even think I have an argument off the top of my head: just let $\{c_n\}$ be fairly constant until you swallow up $[-N,N]$ and then let it decrease. Continue ad infinitum. But what about $\{d_n\}$?</p>
Ross Millikan
1,827
<p>One approach is to let $c_n=\frac 1n$ and let $d_n=\frac 1n$ when the rational is outside $(-1,2)$ and $d_n=2^{-n}$ when the rational is inside. The interval $(0,1)$ then has the measure argument work because none of the balls centered outside $(-1,2)$ can reach there. We just need to make sure enough of the early rationals are outside $(-1,2)$ to make sure the sum of $d_n$ diverges. We can just specify that all the rationals in $(-1,2)$ will be in odd positions in the enumeration. The sum of $\frac 1{2n}$ diverges, so the sum of $d_n$ will diverge. </p> <p>To make the union of the $c_n$ balls cover $\Bbb R$ we start with $r_1=-3,r_2=-3-1-0.9\cdot \frac 12,r_3=-1+1+0.9\cdot \frac 13$ with the idea that we make sure to cover a harmonically expanding interval. Then let $r_4$ pick one of the rationals less than $-3$ in the covered interval and $r_5$ pick up one of the rationals greater than $-3$ in the covered interval. Continuing, $r_{4k+2}$ is outside the growing interval by $0.9\cdot \frac 1{4k+2}$ on the minus side, $r_{4k+3}$ is outside the growing interval by $0.9\cdot \frac 1{4k+3}$ on the positive side, $r_{4k+4}$ is a fill-in less than $-3$ and $r_{4k+5}$ is a fill-in greater than $-3$. The union of these balls will cover all of $\Bbb R$.</p>
1,024,794
<p>I have this equation: $x+7-(\frac{5x}8 + 10) = 3 $</p> <p>I've used step-by-step calculators online but I simply don't understand it. Here is how I've tried to solve the problem: </p> <p>$$x+7-\left(\frac{5x}8+10\right) = x + 7 - \frac{5x}8 - 10 = 3$$</p> <p>$$x + 7 - \frac{5x}8 - 10 + 10 = 3 + 10$$</p> <p>$$x + 7 - 7 - \frac{5x}8 = 13 - 7$$</p> <p>$$x - \frac{5x}8 = 6$$</p> <p>$$x - 8\times\frac{5x}8 = 6\times8$$</p> <p>$$x - 5x = 48$$</p> <p>$$\frac{-4x}{-4} = \frac{48}4$$</p> <p>$$x = -12$$</p> <p>Now obviously, it's wrong. The right answer is $16$, but I don't know how to get to that answer. Therefore, I'm extremely thankful if someone truly can show what I need to do, and why I need to do it, because I'm completely lost right now. Thanks.</p>
mick
39,261
<p>$\xi(s)=\xi(1-s)$</p> <p>You defined $k(s)=\xi(1-s)$.</p> <p>So $k(s)=\xi(s)$.</p> <p>Next you wonder if $\xi(\xi(s)) = 0 =&gt; \xi(s) = 0$ is true iff RH is true.</p> <p>There is no reason to assume those 2 problems are related.</p> <p>Notice that if $\xi(s) = 0$ this implies that $\xi(\xi(s)) = \xi(0) = \frac{1}{2}$.</p> <p>Since $\frac{1}{2}$ is not $0$ your claim fails.</p> <p>Q.E.D.</p>
634,890
<blockquote> <p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p> <ol> <li>The discussion here has turned too chatty and not suitable for the MSE framework. </li> <li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li> <li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li> </ol> </blockquote> <p>Eminent Kazakh mathematician Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p> <p>Is it correct?</p> <p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p> <p>A link to the paper (in Russian): <a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p> <p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p> <p>please confine answers to any actual mathematical error found! thanks</p>
Stephen Montgomery-Smith
22,016
<p>This web page has Theorem 6.1. It is written in Spanish, but actually is rather easy to follow even if (like me) you don't know any Spanish. However it is not made clear on this web site that the statement of Theorem 6.1 is "<strong>If</strong> $\|A^\theta \overset 0u\| \le C_\theta$, <strong>then</strong> $\| \overset0u \| \le C_1(1+\|\overset0f\|+\|\overset0f\|^l)$."</p> <p><a href="http://francis.naukas.com/2014/01/18/la-demostracion-de-otelbaev-del-problema-del-milenio-de-navier-stokes/">http://francis.naukas.com/2014/01/18/la-demostracion-de-otelbaev-del-problema-del-milenio-de-navier-stokes/</a></p> <p>This is the proposed counterexample to Theorem 6.1 given at <a href="http://dxdy.ru/topic80156-60.html">http://dxdy.ru/topic80156-60.html</a>. I used Google Translate, and then cleaned it up. I also added details here and there.</p> <p>Let $\hat H = \ell_2$.</p> <p>Let the operator $A$ be defined by $ Ae_i = e_i $ for $ i &lt; 50$, $ Ae_i = ie_i $ for $ i \ge 50 $.</p> <p>Define the bilinear operator $ L $ to be nonzero only on two-dimensional subspaces $ L (e_ {2n}, e_ {2n +1}) = \frac1n (e_ {2n} + e_ {2n +1}) $, with $ n \ge 25 $.</p> <p>Check conditions:</p> <p>U3. Even with a margin of 50.</p> <p>U2. $ (e_i, L (e_i, e_i)) = 0 $ for $ i \ge 50 $. This is also true for eigenvectors $ u $ with $ \lambda = 1 $, because for them $ L (u, u) = 0 $.</p> <p>U4. $ L (e, u) = 0 $ for the eigenvectors $ e $ with $ \lambda = 1 $ also trivial. (Stephen's note: he also needs to check $L_e^*u = L_u^*e = 0$, but that looks correct to me.)</p> <p>U1. $ (Ax, x) \ge (x, x) $. Also $$ \| L (u, v) \| ^ 2 = \sum_{n \ge 25} u ^ 2_ {2n} v ^ 2_ {2n +1} / n ^ 2 \le C\|(u_n/\sqrt n)\|_4^2 \|(v_n/\sqrt n)\|_4^2 \le C\|(u_n/\sqrt n)\|_2^2 \|(v_n/\sqrt n)\|_2^2 = C \left (\sum u ^ 2_ {n} / n \right) \left (\sum v ^ 2_ {n} / n \right) $$ so we can take $\beta = -1/2$.</p> <p>And now consider the elements $ u_n =-n (e_ {2n} + e_ {2n +1}) $. Their norms are obviously rising. 
Let $ \theta = -1 $. Then the $A^\theta $-norms of all these elements are constant. But, $ f_n = u_n + L (u_n, u_n) = 0 $.</p> <p><strong>Update:</strong> Later on in <a href="http://dxdy.ru/topic80156-90.html">http://dxdy.ru/topic80156-90.html</a> there is a response relayed from Otelbaev in which he asserts he can fix the counterexample by adding another hypothesis to Theorem 6.1, namely the existence of operators $P_N$ converging strongly to the identity such that one has good solvability properties for $u + P_N L(P_N u,P_N u) = f$, in that if $\| f \|$ is small enough then $\| u \|$ is also small.</p> <p>Terry Tao communicated to me that he thinks a small modification of the counterexample also defeats this additional hypothesis.</p> <p><strong>Update 2:</strong> Terry Tao modified his example to correct for the fact that the statement of Theorem 6.1 is that a bound on $u \equiv \overset0u$ implies a lower bound on $f \equiv \overset0f$ rather than the other way around (i.e. we had a translation error for Theorem 6.1 that I point out above).</p> <p>Let $\hat H$ be $N$-dimensional Euclidean space, with $N \ge 50$. Let $\theta = -1$ and $\beta = -1/100$. Take $$ A e_n = \begin{cases} e_n &amp; \text{for $n&lt;50$} \\ 50\ 2^{n-50} e_n &amp; \text{for $50 \le n \le N$.}\end{cases}$$ and $$L(e_n, e_n) = - 2^{-(n-1)/2} e_{n+1} \quad\text{for $50 \le n &lt; N$,}$$ and all other $L(e_i,e_j)$ zero.</p> <p>Axioms (Y.2) and (Y.4) are easily verified. 
For (Y.1), observe that if $u = \sum_n c_n e_n$ and $v = \sum_n d_n e_n$, then for a universal constant $C$, we have $$ \| L(u,v) \|^2 \le C \sum_n 2^{-n} c_n^2 d_n^2, \\ |c_n| \le C 2^{n/100} \| A^\beta u \| ,\\ |d_n| \le C 2^{n/100} \| A^\beta v \| ,$$ and the claim (Y.1) follows from summing geometric series.</p> <p>Finally, set $$ u = \sum_{n=50}^N 2^{n/2} e_n $$ then one calculates that $$ \| A^\theta u \| &lt; C $$ for an absolute constant C, and $$ u + L(u,u) = 2^{50/2} e_{50} $$ so $$ \| u + L(u,u) \| \le C $$ but that $$ \| u \| \ge 2^{N/2}. $$ Since $N$ is arbitrary, this gives a counterexample to Theorem 6.1.</p> <p>By writing the equation $u+L(u,u)=f$ in coordinates we obtain $f_n = u_n$ for $n \le 50$, and $f_n = u_n + 2^{-n/2} u_{n-1}^2$ if $50&lt;n\le N$. Hence we see that $u$ is uniquely determined by $f$. From the inverse function theorem we see that if $\| f \|$ is sufficiently small then $\| u \| &lt; 1/2$, so the additional axiom Otelbaev gives to try to fix Theorem 6.1 is also obeyed (setting $P_N$ to be the identity).</p> <p><strong>Update 3:</strong> on Feb 14, 2014, Professor Otelbaev sent me this message, which I am posting with his permission:</p> <p>Dear Prof. Montgomery-Smith,</p> <p>To my shame, on the page 56 the inequality (6.34) is incorrect therefore the proposition 6.3 (p. 54) isn't proved. I am so sorry.</p> <p>Thanks for goodwill.</p> <p>Defects I hope to correct in English version of the article.</p>
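<p>Update 2's example is finite-dimensional and concrete enough to check by machine. A small sketch (my own, with $N = 60$) of the telescoping that makes $u + L(u,u) = 2^{25} e_{50}$ while $\| u \| \ge 2^{N/2}$:</p>

```python
# Components of u: u_n = 2^(n/2) for 50 <= n <= N (slots 1..N, index 0 unused).
N = 60
u = [0.0] * (N + 1)
for n in range(50, N + 1):
    u[n] = 2.0 ** (n / 2)

# L(e_n, e_n) = -2^(-(n-1)/2) e_{n+1} for 50 <= n < N, all other pairs zero,
# so bilinearity gives L(u, u)_{n+1} = -2^(-(n-1)/2) * u_n^2 = -2^((n+1)/2).
Luu = [0.0] * (N + 1)
for n in range(50, N):
    Luu[n + 1] = -(2.0 ** (-(n - 1) / 2)) * u[n] ** 2

f = [u[n] + Luu[n] for n in range(N + 1)]
print(f[50], max(abs(f[n]) for n in range(N + 1) if n != 50))
# f is 2^25 e_50 up to rounding, while ||u|| >= u_N = 2^(N/2) = 2^30
```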
634,890
<blockquote> <p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p> <ol> <li>The discussion here has turned too chatty and not suitable for the MSE framework. </li> <li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li> <li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li> </ol> </blockquote> <p>Eminent Kazakh mathematician Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p> <p>Is it correct?</p> <p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p> <p>A link to the paper (in Russian): <a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p> <p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p> <p>please confine answers to any actual mathematical error found! thanks</p>
AlsoInAstana
124,205
<p>I read on the Russian side dxdy a comment from a mathematician in Almaty, KZ, that sheds some light on the process. It's the comment on page 13 by MAnvarbek. Otelbaev presented his proof 1 year ago, and immediately they found large errors, and no new ideas. The problem was with all the parameters. Otelbaev worked on correcting the errors, and then published---without again showing his result. Editors in Almaty did not approve publishing the paper. </p> <p>The author of this statement wrote also that there is a committee of the Institute of Mathematics analysing the paper by Otelbaev. They thank the user <strong>sup</strong> at dxdy for proposing the example and Tao for improving it. The example saves a lot of work for the Committee, but they are sure that they would find the mistake anyway.</p>
634,890
<blockquote> <p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p> <ol> <li>The discussion here has turned too chatty and not suitable for the MSE framework. </li> <li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li> <li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li> </ol> </blockquote> <p>Eminent Kazakh mathematician Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p> <p>Is it correct?</p> <p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p> <p>A link to the paper (in Russian): <a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p> <p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p> <p>please confine answers to any actual mathematical error found! thanks</p>
Dr. Betty Rostro
129,304
<p>I favor the Terry Tao version; he uses von Neumann criteria, which is a bit better and more elegant in my opinion.</p> <p><a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="nofollow">http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/</a></p> <p>I see from Otelbaev's <a href="http://enu.kz/repository/repository2013/articlemmf1.pdf" rel="nofollow">http://enu.kz/repository/repository2013/articlemmf1.pdf</a>, that he's made improvements. I see where he comes from Hilbert (Banach) and Cauchy problem and uses Sobolev and Galerkins, still he makes the case for weak solutions, and then reiterates to claim strong solution. However, if he's relying on boundedness and trilinears a,b to find optimal boundedness using strong-weak uniqueness using Serrin criterion this was done by Lemarie already. Paraproduct issues aside Serrin criteria assumes Navier Stokes does not blow up, however, that is based on log inequalities from Wong which obtained them for earlier scholars.</p> <p>I used Navier Stokes (NS) during my MSc thesis at Rice. At NASA-JSC though we applied different corrections to Navier Stokes.</p> <p>So I'd care to see merely a strong solution not just a Strong-Weak Uniqueness as other arguments have been claiming for years.</p> <p>Which by the way Magnetohydrodynamic work with embedded theory, very much published already proves. Littlewood-Paley conjecture and Sobolev Embedding, Young's Inequalities, all these do break down you know! So then the key is has he solved the remaining open problem???? 
If so, where are those answers?</p> <p>What was provided is a repeat of all the MHD and electrolyte theory already in publication since 2004.</p> <p>I mean in that case other groups have obtained prior answers that run around the same lines.</p> <p>Is there an English translation, as there are Chinese and Americans and myself that have similar findings from a Physics perspective?</p> <p>Also, what about the Terry Tao version with the von Neumann criteria, this is a bit more elegant in my opinion.</p> <p>Is Perelman contributing to this answer, I wonder where his comments would be? I'm assuming he would also start discussing the need for saddle criteria, where is that in this paper here.</p> <p>Betty Rostro, PhD</p>
3,104,706
<p>Let <span class="math-container">$T$</span> be the left shift operator on <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>. How to see that von Neumann algebra generated by <span class="math-container">$T$</span> is <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>?</p>
Jens Schwaiger
532,419
<p>It is known that every infinite set contains a countable subset (axiom of choice). So let <span class="math-container">$X$</span> contain the subset <span class="math-container">$A:=\{a_n\mid n\in\mathbb{N}\}$</span> with <span class="math-container">$a_n\not=a_m$</span> for <span class="math-container">$n\not=m$</span>. Then <span class="math-container">$f:A\to A$</span> with <span class="math-container">$f(a_n):=a_{n+1}$</span> is injective and not surjective. Likewise <span class="math-container">$g:A\to A$</span>, <span class="math-container">$g(a_1):=a_1$</span>, <span class="math-container">$g(a_{n}):=a_{n-1}$</span> for <span class="math-container">$n\geq 2$</span> is surjective but not injective. Let <span class="math-container">$X:=A\cup Y$</span> with <span class="math-container">$A\cap Y=\emptyset$</span>. Then <span class="math-container">$F,G:X\to X$</span> defined by <span class="math-container">$F(y):=G(y):=y$</span> for <span class="math-container">$y\in Y$</span> and by <span class="math-container">$F(a):=f(a), G(a):=g(a)$</span> for <span class="math-container">$a\in A$</span> are as desired.</p>
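<p>Identifying <span class="math-container">$a_n$</span> with <span class="math-container">$n$</span>, the two maps are easy to inspect on an initial segment (a finite illustration only, of course):</p>

```python
# f(n) = n + 1 is injective but misses 1; g(1) = 1, g(n) = n - 1 for n >= 2
# is surjective (onto {1, ..., M-1} on this finite window) but not injective.
M = 1000

def f(n):
    return n + 1

def g(n):
    return 1 if n == 1 else n - 1

image_f = {f(n) for n in range(1, M + 1)}
image_g = {g(n) for n in range(1, M + 1)}
print(len(image_f) == M, 1 in image_f)            # True False
print(image_g == set(range(1, M)), g(1) == g(2))  # True True
```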
3,104,706
<p>Let <span class="math-container">$T$</span> be the left shift operator on <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>. How to see that von Neumann algebra generated by <span class="math-container">$T$</span> is <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>?</p>
Community
-1
<p>Suppose <span class="math-container">$X$</span> is infinite. Let <span class="math-container">$C=\{x_1,x_2,x_3,\dots\}$</span> be a countably infinite subset. </p> <p>Define <span class="math-container">$h:X\to X$</span> by <span class="math-container">$h(x)=x, x\in X\setminus C$</span> and <span class="math-container">$h(x_i)=x_{i+1},x_i\in C$</span>.</p> <p>h is injective but not surjective. </p> <p>I'll leave the second to you. </p>
2,934,238
<p>Let <span class="math-container">$a\in \mathbb{Q}$</span> such that <span class="math-container">$18a$</span> and <span class="math-container">$25a$</span> are integers, then we wish to prove that <span class="math-container">$a$</span> must be an integer itself. What that means is that <span class="math-container">$a=\frac{p}{1}$</span> where <span class="math-container">$p \in \mathbb{Z}$</span>. What we do know is that we can express the <span class="math-container">$\gcd(18,25)$</span> as: <span class="math-container">$$ \gcd(18,25)=18x +25y$$</span> Now if <span class="math-container">$x=y=a$</span>, we are done, since: <span class="math-container">$$ \gcd(18,25)=18a +25a=43a$$</span> as the <span class="math-container">$\gcd$</span> is always an integer and so is 43, so <span class="math-container">$a$</span> is also an integer.</p> <p>But, how would I generalise this?</p>
Mo Pol Bol
359,447
<p>You can also prove this by contradiction:</p>

<p>Assume $a=\frac{m}{n}$, where $m,n$ are coprime and $n\gt1$. Then, if $18a=k_1\in\mathbb{Z}$, we must have $n\mid 18$, so by the fundamental theorem of arithmetic $$n=2^b3^c$$ with $b\le 1$, $c\le 2$. However, assuming $$25a=k_2\in\mathbb{Z}$$ implies $n\mid 25$, i.e. $n=5$ or $n=25$, which is obviously a contradiction.</p>
2,934,238
<p>Let <span class="math-container">$a\in \mathbb{Q}$</span> such that <span class="math-container">$18a$</span> and <span class="math-container">$25a$</span> are integers, then we wish to prove that <span class="math-container">$a$</span> must be an integer itself. What that means is that <span class="math-container">$a=\frac{p}{1}$</span> where <span class="math-container">$p \in \mathbb{Z}$</span>. What we do know is that we can express the <span class="math-container">$\gcd(18,25)$</span> as: <span class="math-container">$$ \gcd(18,25)=18x +25y$$</span> Now if <span class="math-container">$x=y=a$</span>, we are done, since: <span class="math-container">$$ \gcd(18,25)=18a +25a=43a$$</span> as the <span class="math-container">$\gcd$</span> is always an integer and so is 43, so <span class="math-container">$a$</span> is also an integer.</p> <p>But, how would I generalise this?</p>
B. Goddard
362,009
<p>Another way to look at this: <span class="math-container">$18a$</span> and <span class="math-container">$25a$</span> are integers. Therefore, so is <span class="math-container">$25a-18a = 7a$</span>.</p> <p>Therefore so is <span class="math-container">$18a-2(7a) = 4a.$</span></p> <p>Therefore so is <span class="math-container">$7a-4a = 3a.$</span></p> <p>Therefore so is <span class="math-container">$4a-3a = a.$</span></p>
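The chain above can be run mechanically, and unwinding it shows $a = 7\cdot(18a) - 5\cdot(25a)$, an explicit Bézout combination (indeed $7\cdot 18 - 5\cdot 25 = 1$). A small Python sketch with exact rationals — the function name and test values are mine:

```python
from fractions import Fraction

def recover_a(u: Fraction, v: Fraction) -> Fraction:
    """Given u = 18a and v = 25a, rebuild a by repeated subtraction."""
    s7 = v - u          # 7a
    s4 = u - 2 * s7     # 4a
    s3 = s7 - s4        # 3a
    return s4 - s3      # a

# Unwinding: a = 7*(18a) - 5*(25a), and 7*18 - 5*25 = 1.
assert 7 * 18 - 5 * 25 == 1

a = Fraction(5)
assert recover_a(18 * a, 25 * a) == a
```

Since each step only subtracts integer multiples of integers, if $18a$ and $25a$ are integers then so is every intermediate value, and in particular $a$ itself.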
2,934,238
<p>Let <span class="math-container">$a\in \mathbb{Q}$</span> such that <span class="math-container">$18a$</span> and <span class="math-container">$25a$</span> are integers, then we wish to prove that <span class="math-container">$a$</span> must be an integer itself. What that means is that <span class="math-container">$a=\frac{p}{1}$</span> where <span class="math-container">$p \in \mathbb{Z}$</span>. What we do know is that we can express the <span class="math-container">$\gcd(18,25)$</span> as: <span class="math-container">$$ \gcd(18,25)=18x +25y$$</span> Now if <span class="math-container">$x=y=a$</span>, we are done, since: <span class="math-container">$$ \gcd(18,25)=18a +25a=43a$$</span> as the <span class="math-container">$\gcd$</span> is always an integer and so is 43, so <span class="math-container">$a$</span> is also an integer.</p> <p>But, how would I generalise this?</p>
Bill Dubuque
242
<p><em>Conceptually</em> <span class="math-container">$\, \dfrac{m}{18} = a = \dfrac{n}{25}\,$</span> so its least denominator divides <em>coprimes</em> <span class="math-container">$18,25$</span> so is <span class="math-container">$1,\,$</span> so <span class="math-container">$\,a\in\Bbb Z$</span></p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> This is an additive analog of this (multiplicative) group result </p> <p><span class="math-container">$$ a^{\large 18} = 1 = a^{\large 25}\,\Rightarrow\, {\rm ord}(a)\mid 18,25\,\Rightarrow\, {\rm ord}(a)=1\,\Rightarrow\, a = 1$$</span></p> <p>For further discussion see <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20%22denominator%20ideals%22">denominator ideals</a> and <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20%22order%20ideals%22">order ideals</a> and <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20%22unique%20fractionization%22">unique fractionization</a>.</p>
244,769
<p>I am DMing a game of DnD and one of my players is really into fear effects, which is cool, but the effect of having monsters suffer from the &quot;panicked&quot; condition gets tedious to render via dice rolls.</p> <p>The rule is: on the battle grid the monster will run for 1 square in a random direction, then from that new position it will move into another random adjacent square. Repeat this process until it's moved its full move speed.</p> <pre><code>movespeed = 6;
points = Point[
   NestList[{(#[[1]] + RandomChoice[{-1, 0, 1}]), #[[2]] + RandomChoice[{-1, 0, 1}]} &amp;, {11/2, 11/2}, movespeed]];
Graphics[{PointSize[Large], points},
 GridLines -&gt; {Range[0, 11], Range[0, 11]},
 PlotRange -&gt; {{0, 11}, {0, 11}}, Axes -&gt; True]
</code></pre> <p>I have written some code that shows me the squares the monster moves through, but I would love to replace the little black dots with numbers like &quot;1&quot;, &quot;2&quot;,...,&quot;6&quot; so that I know the path it actually took.</p>
Adam
74,641
<p>To clear up ambiguities, the following code finds the transformation from the points in xy space <span class="math-container">$(0,1),(1,0)$</span> and <span class="math-container">$(0,0)$</span> to the points in uv space <span class="math-container">$a,b$</span> and <span class="math-container">$c$</span> projected onto <span class="math-container">$\overline{ab}$</span> respectively.</p> <pre><code>With[{a = {ax, ay}, b = {bx, by}, c = {cx, cy}, m = {{mx, mxy}, {myx, myy}}}, With[{v = Projection[c - b, a - b, Dot] + b}, m /. Simplify@ First@Solve[{a == m.{0, 1} + v, c == m.{1, 0} + v}, Flatten@m]]] </code></pre> <p>The result is a matrix <code>m</code> that can be used to make an affine transform, i.e. <code>AffineTransform[{%,v}]</code>, if it's still in the inner <code>With</code>.</p> <p>The math isn't too bad to do without <code>Solve</code>. Consider</p> <pre><code>With[{v=Projection[c - b, a - b, Dot] + b},m=AffineTransform[{ RotationMatrix[ArcTan @@ (a - b)]. ScalingMatrix[Norm[# - v] &amp; /@ {a, c}], v}]] </code></pre> <p>This latter code doesn't do negative scaling, so it is broken when the points are oriented the wrong way (<code>c</code> has to be to the right etc.). You could entirely remove the <code>ScalingMatrix</code>...</p> <p>With regards to the end goal, I did something similar here <a href="https://mathematica.stackexchange.com/questions/241770/detect-and-correct-sheared-squares-in-image">Detect and correct sheared squares in image</a>, though my code was <em>nasty</em>.</p> <hr /> <p>Also I really enjoy doing interactive things with MMA, I'm interested in developing this further with <code>Manipulate</code> etc.</p>
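A plain-Python translation of the `Solve`-free construction above — the translation `v` is the projection of `c` onto the line through `a` and `b`, and the matrix columns are then read off as `c - v` and `a - v`. The point names follow the answer; the helper functions are mine:

```python
# Affine map T(p) = M.p + v with T((0,1)) = a, T((1,0)) = c, and
# T((0,0)) = v = the projection of c onto the line through a and b.

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

def build_affine(a, b, c):
    ab = sub(a, b)
    t = dot(sub(c, b), ab) / dot(ab, ab)
    v = (b[0] + t * ab[0], b[1] + t * ab[1])  # projection point
    cx, cy = sub(c, v)    # first matrix column: image of (1,0) minus v
    ax2, ay2 = sub(a, v)  # second matrix column: image of (0,1) minus v
    M = ((cx, ax2), (cy, ay2))
    return M, v

def apply_affine(M, v, p):
    return (M[0][0] * p[0] + M[0][1] * p[1] + v[0],
            M[1][0] * p[0] + M[1][1] * p[1] + v[1])

def close(p, q, tol=1e-9):
    return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

M, v = build_affine((3, 5), (1, 1), (4, 0))
assert close(apply_affine(M, v, (0, 1)), (3, 5))  # unit y-vector lands on a
assert close(apply_affine(M, v, (1, 0)), (4, 0))  # unit x-vector lands on c
```

As with the second Mathematica version, nothing here forces an orientation, so the construction silently accepts point triples of either handedness.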
3,858,414
<p>I need help solving this task; if anyone has had a similar problem it would help me.</p> <p>The task is:</p> <p>Calculate using the rule <span class="math-container">$\lim\limits_{x\to \infty}\left(1+\frac{1}{x}\right)^x=\large e $</span>:</p> <p><span class="math-container">$\lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\frac{1}{\sin x}} $</span></p> <p>I tried this:</p> <p><span class="math-container">$ \lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{1+\frac{\sin x}{\cos x}}{1+\sin x}\right)^{\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{\sin x+\cos x}{\cos x\cdot(1+\sin x)}\right)^{\frac{1}{\sin x}} $</span></p> <p>But I do not know how to solve this task. Thanks in advance!</p>
Aryaman Maithani
427,810
<p>Suppose that <span class="math-container">$[x] \cap [y] \neq \emptyset$</span> <strong>and</strong> <span class="math-container">$[x] \neq [y]$</span>.</p> <p>Since the intersection is nonempty, we may pick <span class="math-container">$z \in [x] \cap [y]$</span>. Since the classes are not equal, we may pick an element <span class="math-container">$w \in R$</span> which is in one class but not in the other. WLOG, assume that <span class="math-container">$w \in [x]\setminus[y]$</span>.</p> <p>Since <span class="math-container">$w, z \in [x],$</span> we have that <span class="math-container">$x \sim w$</span> and <span class="math-container">$x \sim z$</span>. Using symmetry and transitivity, we get that <span class="math-container">$z \sim w$</span>. However, we also have <span class="math-container">$y \sim z$</span>. Transitivity gives <span class="math-container">$y \sim w$</span> which is a contradiction since <span class="math-container">$w \notin [y]$</span>.</p>
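The disjoint-or-equal dichotomy in this proof can be watched on a concrete relation, e.g. congruence mod $3$ on $\{0,\dots,8\}$ (the example is mine, not from the proof):

```python
# Equivalence relation: x ~ y iff x ≡ y (mod 3), on X = {0, ..., 8}.
X = range(9)

def cls(x):
    """The equivalence class [x]."""
    return frozenset(y for y in X if y % 3 == x % 3)

classes = {cls(x) for x in X}

# Every pair of classes is either identical or disjoint.
for c1 in classes:
    for c2 in classes:
        assert c1 == c2 or not (c1 & c2)

# And the classes partition X.
assert sorted(x for c in classes for x in c) == list(X)
print(sorted(sorted(c) for c in classes))  # → [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```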
10,468
<p>I know that many graph problems can be solved very quickly on graphs of bounded degeneracy or arboricity. (It doesn't matter which one is bounded, since they're at most a factor of 2 apart.) </p> <p>From Wikipedia's article on the clique problem I learnt that finding cliques of any constant size k takes linear time on graphs of bounded arboricity. That's pretty cool.</p> <p>I wanted to know more examples of algorithms where the bounded arboricity condition helps. This might even be well-studied enough to have a survey article written on it. Unfortunately, I couldn't find much about my question. Can someone give me examples of such algorithms and references? Are there some commonly used algorithmic techniques that exploit this promise? How can I learn more about these results and the tools they use?</p>
Martin Vatshelle
23,539
<p>There is one more approach to solve problems like Max Clique on graphs of bounded degeneracy. You can look at the complement graph of a graph $G$ (i.e. every edge is a non-edge and every non-edge is an edge). Solving Max Clique on $G$ is the same as solving Max Independent Set on the complement.</p>

<p>For complements of bounded-degeneracy graphs, algorithms for many problems are known. E.g. Maximum Independent Set, Minimum Dominating Set, Perfect Code, k-Coloring, H-Cover, H-Homomorphism and H-Role Assignment are FPT parameterized by the degeneracy of the complement. See <a href="http://www.ii.uib.no/~martinv/Papers/Logarithmic_booleanwidth.pdf" rel="nofollow">http://www.ii.uib.no/~martinv/Papers/Logarithmic_booleanwidth.pdf</a> (submitted to journal)</p>

<p>Some of these problems make sense to translate to the complement graph, such as:</p>

<p>Can $G$ be colored with $k$ colors -> can the complement be covered by $k$ cliques? (fixed $k$)</p>

<p>Is there a $3$-regular induced subgraph of $G$ -> is there an induced $k$-regular subgraph of the complement on $k+4$ vertices?</p>
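The opening observation — a max clique in $G$ is exactly a max independent set in the complement — brute-forced on a small random graph (illustrative Python, not from the linked paper):

```python
from itertools import combinations
import random

random.seed(0)
n = 8
V = range(n)
# Random graph G and its complement, as sets of 2-element edge sets.
E = {frozenset(e) for e in combinations(V, 2) if random.random() < 0.5}
E_comp = {frozenset(e) for e in combinations(V, 2)} - E

def is_clique(S, edges):
    return all(frozenset(p) in edges for p in combinations(S, 2))

def is_independent(S, edges):
    return all(frozenset(p) not in edges for p in combinations(S, 2))

def max_size(pred, edges):
    # Brute force over all 2^n vertex subsets (fine for n = 8).
    return max(len(S) for k in range(n + 1) for S in combinations(V, k)
               if pred(S, edges))

# A set is a clique in G exactly when it is independent in the complement.
assert max_size(is_clique, E) == max_size(is_independent, E_comp)
```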
124,660
<p>I'm solving a fairly simple equation:</p>

<pre><code>w[p1_, p2_, xT_] := 94.8*cv*p1*y[(p1 - p2)/p1, xT]*Sqrt[(p1 - p2)/p1*mw/t1];
y[r_, xT_] := 1 - (1.4 r)/(3 xT*γ) /. γ -&gt; 1.28;
sol = NSolve[{w[p1, p2, 0.66] == 30, p2 == 1.07}, {p1, p2}] /. {cv -&gt; 1.77, t1 -&gt; 318, mw -&gt; 38};
</code></pre> <p>Mathematica has no problem solving this case. However, when I change the y function to:</p> <pre><code>y[r_, xT_] := Max[0.667, 1 - (1.4 r)/(3 xT*γ)] /. γ -&gt; 1.28
</code></pre> <p>Then the calculation does not seem to converge. Is there a reason why this should be difficult? Can it be implemented differently?</p>
ubpdqn
1,997
<p>My original answer was a misinterpretation. I post this in case it is helpful.</p>

<pre><code>y[r_, xT_, g_] := 1 - (1.4 r)/(3 xT*g)
w[p1_, p2_, xT_, cv_, mw_, t1_] := 94.8*cv*p1*y[(p1 - p2)/p1, xT, 1.28]*Sqrt[(p1 - p2)/p1*mw/t1];
ans = p2 /. NSolve[w[1.07, p2, 0.66, 1.77, 38, 318] == 30, p2]
plot1 = Plot[{w[1.07, p2, 0.66, 1.77, 38, 318], 10 y[(1.07 - p2)/1.07, 0.66, 1.28]}, {p2, 0, 1}, MeshFunctions -&gt; (#2 &amp;), Mesh -&gt; {{30}}, MeshStyle -&gt; {Red, PointSize[0.02]}, GridLines -&gt; {ans, {30}}, Frame -&gt; True, FrameTicks -&gt; {{Automatic, Table[{j, j/10.}, {j, 0, 40, 5}]}, {Automatic, None}}, FrameLabel -&gt; {"p1", "w", "", "y"}, ImageSize -&gt; 300, PlotRange -&gt; {0, 40}]
y2[r_, xT_, g_] := Max[0.667, 1 - (1.4 r)/(3 xT*g)]
w2[p1_, p2_, xT_, cv_, mw_, t1_] := 94.8*cv*p1*y2[(p1 - p2)/p1, xT, 1.28]*Sqrt[(p1 - p2)/p1*mw/t1];
ans2 = p2 /. NSolve[w2[1.07, p2, 0.66, 1.77, 38, 318] == 30, p2]
plot2 = Plot[{w2[1.07, p2, 0.66, 1.77, 38, 318], 10 y2[(1.07 - p2)/1.07, 0.66, 1.28]}, {p2, 0, 1}, MeshFunctions -&gt; (#2 &amp;), Mesh -&gt; {{30}}, MeshStyle -&gt; {Red, PointSize[0.02]}, GridLines -&gt; {ans2, {30}}, Frame -&gt; True, FrameTicks -&gt; {{Automatic, Table[{j, j/10.}, {j, 0, 40, 5}]}, {Automatic, None}}, FrameLabel -&gt; {"p1", "w", "", "y"}, PlotRange -&gt; {0, 40}, ImageSize -&gt; 300]
</code></pre> <p>Visualizing:</p> <p><a href="https://i.stack.imgur.com/ck4BS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ck4BS.png" alt="enter image description here"></a></p> <ul> <li>just avoided all the substitutions</li> <li>always find plotting useful</li> <li>desired solution is selected (first form->{0.676056, 0.134129}, second case->0.676056)</li> </ul>
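If the difficulty is the kink that `Max` introduces (derivative-based steps inside a numerical solver can stall on non-smooth functions), a bracketing method is immune to it. A toy Python sketch on a hypothetical clamped function — the constants here are made up and are not the questioner's model:

```python
# Toy piecewise function with a Max-style kink; root found by bisection,
# which only needs continuity and a sign change, not smoothness.
def y(r):
    return max(0.667, 1.0 - 0.7 * r)   # clamped, non-smooth at the kink

def g(p):
    return p * y(p) - 0.9   # g(0) < 0 < g(2), so a root is bracketed

lo, hi = 0.0, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = 0.5 * (lo + hi)
assert abs(g(root)) < 1e-9
```

In this toy, the unclamped branch has no real root, so the root lies on the flat branch at $p = 0.9/0.667 \approx 1.349$; the bisection finds it without ever differentiating across the kink.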
175,723
<p>I am reading Goldstein's Classical Mechanics and I've noticed there is copious use of the $\sum$ notation. He even writes the chain rule as a sum! I am having a real hard time following his arguments where this notation is used, often with differentiation and multiple indices thrown in for good measure. How do I get some working insight into how sums behave without actually saying "Now imagine n=2. What does the sum become in this case?" Is there an easier way to do this? Is there an "algebra" or "calculus" of sums, like a set of rules for manipulating them? I've seen some documents on the web but none of them seem to come close to Goldstein's usage in terms of sophistication. Where can I get my hands on practice material for this notation?</p>
paul garrett
12,291
<p>Also, since $\theta$ is an algebraic integer, to show $\theta(\theta-1)/2$ is an algebraic integer, it suffices to show that it is $2$-adically integral. </p> <p>Thus, the plan is to prove that <em>either</em> $2|\theta$ or $2|(\theta-1)$, in $\mathbb Z_2$.</p> <p>Hensel's lemma shows that $x^3+11x-4=0$ has solutions $1,3$ mod $4$, and that both these give solutions in $\mathbb Z_2$. Thus, since the thing is a cubic, the third root is also in $\mathbb Z_2$, and (by looking at the constant term) is divisible by $4$, in fact. The solutions $\theta_1,\theta_2$ in $\mathbb Z_2$ congruent to $1,3$ mod $4$ both have the property that $2|(\theta_j-1)$, as desired.</p> <p>True, this did not determine the minimal polynomial of $\theta(\theta-1)/2$.</p>
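A brute-force check of the residue claims (the polynomial is from the answer; the search is mine): modulo $4$, the polynomial $x^3+11x-4$ vanishes exactly at $x \equiv 0, 1, 3$, matching the two unit roots $\equiv 1, 3 \pmod 4$ plus the third root divisible by $4$.

```python
def roots_mod(m):
    """Residues x (mod m) with x^3 + 11x - 4 ≡ 0 (mod m)."""
    return [x for x in range(m) if (x**3 + 11 * x - 4) % m == 0]

assert roots_mod(4) == [0, 1, 3]

# Any root mod 2^k (k >= 2) reduces to a root mod 4, so every residue
# class carrying a root stays over {0, 1, 3}.
for k in range(2, 12):
    assert roots_mod(2**k)                          # roots persist
    assert {r % 4 for r in roots_mod(2**k)} <= {0, 1, 3}
```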
1,343,722
<p>Note: I am looking at the sequence itself, not the sequence of partial sums.</p> <p>Here's my attempt...</p> <p>Setting up:</p> <p>$$\left\{\frac{2(n+1)}{2(n+1)-1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p> <p>Simplifying:</p> <p>$$\left\{\frac{2n+2}{2n+1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p> <p>$$\frac{(2n+2)(2n-1)-(2n)(2n+1)}{(2n+1)(2n-1)}$$</p> <p>$$\frac{4n^2+2n-2-(4n^2+2n)}{(2n+1)(2n-1)}$$</p> <p>$$\frac{-2}{4n^2-1}$$</p> <p>How should I proceed from this point? I think I need to get rid of the ratio, so that I can judge whether or not it'll be positive or negative. Or can I just judge from this point that it will be a negative value? When I use the $\frac{a_{n+1}}{a_{n}}$ test, I get a result that the sequence is strictly decreasing.</p>
Joel
85,072
<p>Assuming as @Soke does, your series is $$\sum_{n=1}^\infty \frac{n^2}{2^n}$$ we can use the geometric series in order to find a closed form solution. The trick is to take derivatives and multiply by $x$.</p> <p>Note that $$f(x) = \sum_{n=1}^\infty x^n = \frac{x}{1-x}$$ yields a derivative of $$\sum_{n=1}^\infty n x^{n-1} = \frac{d}{dx} \frac{x}{1-x} = \frac{1}{(1-x)^2}.$$ We multiply it by $x$ to get: $$\sum_{n=1}^\infty nx^n = \frac{x}{(1-x)^2}.$$</p> <p>If we take another derivative we find $$\sum_{n=1}^\infty n^2 x^{n-1} = \frac{d}{dx} \frac{x}{(1-x)^2} = \frac{(1-x)^2+2(1-x)x}{(1-x)^4}= \frac{1+x}{(1-x)^3}.$$</p> <p>Finally we multiply by $x$ again: $$\sum_{n=1}^\infty n^2 x^n = \frac{x(1+x)}{(1-x)^3}$$ and the answer you are looking for corresponds to $x=1/2$.</p>
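A quick exact-arithmetic check of the closed form and of the value $6$ at $x=\tfrac12$ (the helper name is mine):

```python
from fractions import Fraction

def closed_form(x: Fraction) -> Fraction:
    # sum_{n>=1} n^2 x^n = x(1+x) / (1-x)^3   for |x| < 1
    return x * (1 + x) / (1 - x) ** 3

x = Fraction(1, 2)
assert closed_form(x) == 6

# Partial sums of n^2 / 2^n approach 6; the tail beyond n = 79 is tiny.
partial = sum(Fraction(n * n, 2**n) for n in range(1, 80))
assert abs(partial - 6) < Fraction(1, 10**15)
```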
3,600,633
<p>As I was reading <a href="https://math.stackexchange.com/questions/1918673/how-can-i-prove-that-the-finite-extension-field-of-real-number-is-itself-or-the">this question</a>, I saw Ethan's answer. However, perhaps this is very obvious, but why must the degree of the polynomial be at most <span class="math-container">$2$</span>? I get that the polynomial must be irreducible, but does that force the degree to be at most <span class="math-container">$2$</span>?</p>
N. S.
9,176
<p>Let <span class="math-container">$P(X) \in \mathbb R[X]$</span> be the minimal polynomial of <span class="math-container">$\alpha$</span>.</p>

<p>If <span class="math-container">$\alpha \in \mathbb R$</span> then <span class="math-container">$X-\alpha |P(X)$</span>. Since <span class="math-container">$X-\alpha \in \mathbb R[X]$</span> has <span class="math-container">$\alpha$</span> as a root, it follows that <span class="math-container">$P(X)=X-\alpha$</span>.</p>

<p>If <span class="math-container">$\alpha \notin \mathbb R$</span> then <span class="math-container">$X-\alpha |P(X)$</span>. Moreover, <span class="math-container">$P(\bar{\alpha})=0$</span> means <span class="math-container">$X-\bar{\alpha} |P(X)$</span>. From here, since <span class="math-container">$X-\alpha$</span> and <span class="math-container">$X-\bar{\alpha}$</span> are relatively prime we get that <span class="math-container">$(X-\alpha)(X-\bar{\alpha})|P(X)$</span>.</p> <p>Now, it is easy to see that <span class="math-container">$(X-\alpha)(X-\bar{\alpha}) \in \mathbb R[X]$</span> from where it follows that <span class="math-container">$$P(X)= (X-\alpha)(X-\bar{\alpha}) $$</span></p>
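For completeness, the step asserted to be easy can be written out: the product of the two conjugate linear factors has real coefficients.

```latex
(X-\alpha)(X-\bar{\alpha})
  = X^2 - (\alpha + \bar{\alpha})\,X + \alpha\bar{\alpha}
  = X^2 - 2\,\mathrm{Re}(\alpha)\,X + |\alpha|^2 \;\in\; \mathbb{R}[X].
```

So for non-real $\alpha$ the minimal polynomial is exactly this real quadratic, and the degree is at most $2$ in every case.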
3,809,127
<blockquote> <p>Determine if the sequence <span class="math-container">$x_k \in \mathbb{R}^3$</span> is convergent when <span class="math-container">$$x_k=(2k, 1, k^{-1})$$</span></p> </blockquote> <p>Our professor gave a hint that one should look at <span class="math-container">$||2k-a||$</span> and try to find a contradiction here.</p> <p>So assuming that it converges to say <span class="math-container">$(a,b,c)$</span> we get <span class="math-container">$$||(2k,1,k^{-1})-(a,b,c)||.$$</span></p> <p>Using the hint I found that <span class="math-container">$$||2k-a||=||(1,0,0)\cdot((2k,1,k^{-1})-(a,b,c))||.$$</span></p> <p>From Cauchy-Schwarz it follows that <span class="math-container">$$||2k-a|| \leqslant||(1,0,0)||\ ||(2k,1,k^{-1})-(a,b,c)|| = ||(2k,1,k^{-1})-(a,b,c)||$$</span></p> <p>Hence a contradiction, since <span class="math-container">$||2k-a||$</span> grows without bound when <span class="math-container">$k \to \infty.$</span></p> <p>So a couple of questions. Firstly, I'm a bit confused with the notation here: should I have <span class="math-container">$|2k-a|$</span> instead of <span class="math-container">$||2k-a||$</span>? And more importantly, how should I come up with the intuition to look at <span class="math-container">$||2k-a||$</span> or <span class="math-container">$|2k-a|$</span> (whichever is the right notation)? I probably wouldn't have guessed this if I was doing this on an exam...</p>
Diger
427,553
<p>Start with your setup for <strong>even</strong> <span class="math-container">$n$</span> of two lists <span class="math-container">$$1: (1,n/2+1),(2,n/2+2),...,(n/2,n) \\ 2: (n/2+1,1),(n/2+2,2),...,(n,n/2)$$</span> for which the corresponding sum equals <span class="math-container">$n^2/2$</span>. Any permutation within either list will lead to the same value <span class="math-container">$n^2/2$</span>. For example in list 1 interchange <span class="math-container">$n/2+i$</span> and <span class="math-container">$n/2+j$</span> for some distinct <span class="math-container">$i,j=1,2,...,n/2$</span> which doesn't change <span class="math-container">$|n/2+i-j|+|n/2+j-i|=n$</span> while the other terms don't change. Permuting <span class="math-container">$n/2+i$</span> with <span class="math-container">$i=1,...,n/2$</span> from list 1 with <span class="math-container">$j=1,...,n/2$</span> from list 2 gives <span class="math-container">$|i-j|+|n/2+j-n/2-i|=2|i-j|&lt;n$</span>.</p> <p>For <strong>odd</strong> <span class="math-container">$n$</span> you have the three lists <span class="math-container">$$1: (1,(n+1)/2+1),(2,(n+1)/2+2),...,((n-1)/2,n) \\ 2: ((n+1)/2,(n+1)/2) \\ 3: ((n+1)/2+1,1),((n+1)/2+2,2),...,(n,(n-1)/2)$$</span> which gives the sum <span class="math-container">$(n^2-1)/2$</span>. As before permuting distinct elements <span class="math-container">$i,j=1,...,(n-1)/2$</span> within each list doesn't change the value. E.g. for list 1 we have: <span class="math-container">$|(n+1)/2+j-i|+|(n+1)/2+i-j|=n+1 \, .$</span> Permuting element <span class="math-container">$(n+1)/2+i$</span> from list 1 with <span class="math-container">$j$</span> from list 3 gives <span class="math-container">$|i-j|+|(n+1)/2+j-(n+1)/2-i|=2|i-j|&lt;n+1 \, .$</span> Finally you can check that permuting elements from list 1 or 3 with the element of list 2 doesn't change the value e.g. 
take <span class="math-container">$(n+1)/2+i$</span> from list 1 reduces the sum of list 1 by <span class="math-container">$i$</span>, but the value in list 2 increases to <span class="math-container">$i$</span>.</p> <p>Overall the maximum sum is <span class="math-container">$\left \lfloor \frac{n^2}{2} \right \rfloor$</span>.</p>
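Reading the pair lists as the graph of a permutation $\sigma$, the quantity being maximized is $\sum_i |\sigma(i)-i|$ (my interpretation of the setup). A brute-force check of the claimed maximum $\lfloor n^2/2\rfloor$ for small $n$:

```python
from itertools import permutations

def max_displacement(n: int) -> int:
    """Maximum of sum |sigma(i) - i| over all permutations of 1..n."""
    return max(sum(abs(s - i) for i, s in enumerate(p, start=1))
               for p in permutations(range(1, n + 1)))

# The answer's formula: n^2/2 for even n, (n^2 - 1)/2 for odd n,
# i.e. floor(n^2 / 2) in both cases.
for n in range(1, 8):
    assert max_displacement(n) == n * n // 2
```

The reversal permutation $\sigma(i) = n+1-i$ already attains the bound for even $n$, matching the two-list construction above.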