| qid | question | author | author_id | answer |
|---|---|---|---|---|
261,410 | <p>Let $z_1,z_2,\dots,z_n\in\Bbb{C}$ be distinct and $w_1,w_2,\dots,w_n\in\Bbb{C}$ be arbitrary. Suppose $f, g$ are two polynomials of degree less than $n$ such that
$$f(z_j)=w_j,\qquad g(z_j)=\bar{w}_j \qquad\text{for $1\leq j\leq n$}.$$
Define $\Omega(z)=\prod_{j=1}^n(z-z_j)$. The following puzzles me.</p>
<blockquote>
<p><strong>Question 1.</strong> Is this true?
$$\sum_{k=1}^n\frac{\vert f^{\prime}(z_k)\vert^2}{\vert \Omega^{\prime}(z_k)\vert^2}\leq
\sum_{k=1}^n\frac{\vert g^{\prime}(z_k)\vert^2}{\vert\Omega^{\prime}(z_k)\vert^2}.$$</p>
</blockquote>
<p><strong>EDIT.</strong> I'm sorry, one of the conditions was missing. To give credit, I'll leave the question as it stands and ask the correct one below.</p>
<blockquote>
<p><strong>Question 2.</strong> What if we insist that $f$ and $g$ have equal degrees?</p>
</blockquote>
| Pietro Majer | 6,101 | <p>For $n\ge 3$, let's take $z_j$, for $1\le j\le n$, to be the $n$-th roots of unity, and $w_j:=z_j+4\bar{z_j}$, so that the assumptions are satisfied by $$f(z):=z+4z^{n-1}$$ $$g(z):=4z+z^{n-1}.$$ However,
$$ |g'(z_k)|=|4+(n-1) z_k^{-2}| \le n+3 < 4n-5 \le |1+4(n-1) z_k^{-2}| = |f'(z_k)| $$ for any index $k$ in the sums. </p>
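<p>A quick numeric sanity check of this counterexample (my own addition, here for $n=5$), using that $\bar z = z^{n-1}$ on the unit circle:</p>

```python
import numpy as np

# n-th roots of unity and the interpolation data w_j = z_j + 4*conj(z_j)
n = 5
zs = np.exp(2j * np.pi * np.arange(n) / n)
ws = zs + 4 * np.conj(zs)

# f(z) = z + 4 z^(n-1) interpolates w_j; g(z) = 4 z + z^(n-1) interpolates conj(w_j)
assert np.allclose(zs + 4 * zs ** (n - 1), ws)
assert np.allclose(4 * zs + zs ** (n - 1), np.conj(ws))

# |f'(z_k)| > |g'(z_k)| at every node, so the proposed inequality fails
fp = 1 + 4 * (n - 1) * zs ** (n - 2)
gp = 4 + (n - 1) * zs ** (n - 2)
assert np.all(np.abs(fp) > np.abs(gp))
```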
<p><strong>Rmk.</strong> For $n=2$, $f$ and $g$ are affine functions, with $|f'|=|g'|={|w_1-w_2|\over |z_1-z_2|}$, so that the claim is true. </p>
|
252,147 | <blockquote>
<p><strong>Problem:</strong> Let $G$ be an infinite abelian group. Show that if $G$ has a nontrivial subgroup $K$ such that $K$ is contained in all nontrivial subgroups of $G$, then $G$ is a $p$-group for some prime $p$. Moreover, $G$ is a group of type $p^\infty$ (quasicyclic).</p>
</blockquote>
<p>I have the following result:</p>
<blockquote>
<p>If $G$ is an infinite abelian group all of whose proper subgroup are finite, then $G$ is of type $p^\infty$ group for some prime $p$.</p>
</blockquote>
<p>So I am trying to show that each proper subgroup of $G$ is finite. But I can't see how to proceed. Please help me find some direction.</p>
| Hagen von Eitzen | 39,174 | <p>Assume $K\le G$ is a non-trivial subgroup of $G$ and that for every non-trivial subgroup $X\le G$, we have $K\le X$.
Then $K$ itself has no nontrivial proper subgroup. Especially, for any $k\in K\setminus\{1\}$, we have $\langle k\rangle=K$ and thus $K\cong \mathbb Z$ or $K\cong \mathbb Z/n\mathbb Z$. But only the case with $n=p$ prime leads to $K$ having no nontrivial proper subgroups.</p>
<p>Let $g\in G\setminus\{1\}$ be arbitrary. Then $k\in\langle k\rangle = K\le \langle g\rangle$, i.e. $k=g^n$ for some $n$. If we write $n=p^mr$ with $r$ not divisible by $p$, then $h:=g^{p^{m+1}}$ has order dividing $r$ (because $h^r=g^{pn}=k^p=1$) and thus $k\notin \langle h\rangle$, from which we conclude $h=1$, hence $g^{p^{m+1}}=1$ and the order of $g$ is a power of $p$. This shows that $G$ is a $p$-group.</p>
<p>Can you see why the second claim (type $p^\infty$) follows from here if we use that $|G|=\infty$?
And where did I use that $G$ is abelian?</p>
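<p>To see the containment hypothesis in action in a concrete finite case, here is a small sanity check of my own (an illustration, not part of the proof): in the cyclic group $\mathbb Z/p^k$, the unique subgroup of order $p$ is contained in every nontrivial subgroup.</p>

```python
# Illustration: in Z/p^k the unique subgroup of order p sits
# inside every nontrivial subgroup.
p, k = 3, 4
n = p ** k

def cyclic_subgroup(g):
    """Subgroup of Z/n generated by g (set of its multiples mod n)."""
    return {(g * i) % n for i in range(n)}

K = cyclic_subgroup(n // p)  # the unique subgroup of order p
assert len(K) == p
for g in range(1, n):
    assert K <= cyclic_subgroup(g)
```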
|
3,149,110 | <p>I am learning algebraic number theory, the exercises are so hard for me, could you please recommend me a book with answers? Many thanks!</p>
| B. Goddard | 362,009 | <p>Reid's book from 1910(-ish) gives a very slow introduction. I don't think it has answers, but it has almost everything worked out in the text. If you don't mind the stilted, 100-year-old language, you might find it useful. I see it's on Amazon:</p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/1110380461" rel="nofollow noreferrer">https://www.amazon.com/Elements-Theory-Algebraic-Numbers/dp/1110380461</a></p>
<p>But I know there are scans of it online. </p>
|
604,824 | <p>So the puzzle is like this:</p>
<blockquote>
<p>An ant is out from its nest searching for food. It travels in a straight line from its nest. After the ant gets 40 ft away from the nest, rain suddenly starts to pour and washes away all its scent trail. The ant has the strength to travel 280 ft more before it starves to death. Suppose the ant's nest is a huge wall and the ant can travel along whatever curve it wants; how can it find its way back? </p>
</blockquote>
<p>I interpret it as: I start at the origin. I know that there is a straight line at distance 40 ft from the origin, but I don't know its direction. Along what parametric curve am I sure to hit the line as the parameter $t$ increases, while keeping the total arc length at most 280 ft?</p>
<p>I asked a friend of mine who has a PhD in math, and he told me this is a calculus of variations problem. I wonder if I could use basic calculus (I have learned ODEs as well) to solve this puzzle. My hunch tells me that a spiral should be used as the path, yet I am not sure what kind of spiral to use here. Any hint shall be appreciated. Thanks dudes!</p>
<h3>Clarification by dfeuer</h3>
<p>As some people seem to be having trouble understanding the problem description, I'll add an equivalent one that should be clear:</p>
<p>Starting in the center of a circle of radius 40 ft, draw a path with the shortest possible length that intersects every line that is tangent to the circle.</p>
| EuYu | 9,246 | <p>The envelope of the walls sweeps out a circle of radius $40$. Here's one possible method to find the wall (i.e. a path which intersects every tangent of the circle). From the origin, walk to $(0,40)$. Now walk counter-clockwise around the circle three-quarters of the way to $(40,0)$. Now walk vertically upwards to $(40,40)$. All in all you've travelled $40 + \frac{3}{2}\pi 40 + 40 \approx 268.5$ ft.</p>
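<p>A quick numeric check of the stated length (my own addition):</p>

```python
import math

r = 40.0
# radial leg out to the circle + three quarters of the circle + vertical leg
length = r + 0.75 * (2 * math.pi * r) + r
assert abs(length - 268.5) < 0.1   # matches the claimed ~268.5 ft
assert length <= 280               # within the ant's budget
```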
|
787,558 | <p>We have $ad-bc >1$. Is it true that at least one of $a,b,c,d$ is not divisible by $ad-bc$?
Thanks in advance.</p>
<p><strong>Example:</strong>
$a=2$ , $b = 1$, $c = 2$, $d = 2$, $ad-bc = 2$ </p>
<p>so $b$ is not divisible by $ad-bc$</p>
| 355durch113 | 137,450 | <p>For a=2, d=3, b=2, c=1 the statement is not true. Are you sure you wrote it down correctly?</p>
|
361,862 | <p>I would like you to present and briefly explain some examples of theorems having hypotheses that are (as far as we know) actually necessary in their proofs but whose use in the arguments is extremely subtle and difficult to notice at first sight. I am looking for hypotheses or conditions that appear to be almost absent from the proof but are actually hidden behind some really abstract or technical argument. It would be even more interesting if this unnoticed hypothesis was not noted at first but later had to be added in another paper or publication, not because the proof of the theorem was wrong, but because the author did not notice that this or that condition was actually playing a role behind the scenes and needed to be added. And, finally, an extra point if this hidden hypothesis led to some important development or advance in the area around the theorem, in the sense that it opened new questions or new paths of research. This question might be related to this <a href="https://mathoverflow.net/questions/352249/nontrivially-fillable-gaps-in-published-proofs-of-major-theorems">other one</a>, but notice that it is not the same, as I am speaking about subtleties in proofs that were not exactly incorrect but incomplete, in the sense of not mentioning that some object or result had to be used, maybe in a highly tangential way.</p>
<p>In order to put some order into the possible answers and make this post useful for other people, I would like you to give references and at least explain the subtleties that help the hypothesis hide at first sight, explain how they relate to the actual proof or method of proof, and describe the main steps the community took until this hidden condition was found; i.e., you can in fact write a short history of the evolution of our understanding of the subtleties and nuances surrounding the result you want to mention.</p>
<p>A very well known and classic example of this phenomenon is the whole theory of classical Greek geometry: although correctly developed in the famous work of Euclid, it was later found to be incompletely axiomatized, as there were some axioms that Euclid used but <a href="https://en.wikipedia.org/wiki/Euclidean_geometry#cite_note-6" rel="noreferrer">did not mention</a> as such, mainly because these manipulations are so intuitive that it was not easy to recognize that they were being used in an argument. Happily, a better understanding of these axioms and their internal logical relations, through a long period of study and research lasting millennia, led to the realization that these axioms were necessary though not explicitly mentioned, and to the development of new kinds of geometry and different geometrical worlds.</p>
<p>Maybe this one is (being the most classic, and expanded through so many centuries and pages of research) the best known, most important and most famous example of the phenomenon I am looking for. However, I am also interested in other smaller and more humble examples of this phenomenon appearing in more recent papers, theorems, lemmas and results in general.</p>
<p>Note: I vote for making this community wiki, as it seems that this is the best way of dealing with this kind of question.</p>
| Timothy Chow | 3,106 | <p>This example has been mentioned <a href="https://mathoverflow.net/a/16885">elsewhere on MO</a> but seems worth reproducing here. The abstract of Amnon Neeman's paper <a href="https://doi.org/10.1007/s002220100197" rel="noreferrer">A counterexample to a 1961 “theorem” in homological algebra</a> says:</p>
<blockquote>
<p>In 1961, Jan-Erik Roos published a “theorem”, which says that in an [AB4∗] abelian category, lim<sup>1</sup> vanishes on Mittag–Leffler sequences. … This is a “theorem” that many people since have known and used. In this article, we outline a counterexample. We construct some strange abelian categories, which are perhaps of some independent interest.</p>
</blockquote>
<p>It turns out that the theorem can be repaired by adding some relatively weak hypotheses that are usually satisfied in practice. That the need for such hypotheses apparently went unnoticed for so long is perhaps evidence that they are "highly subtle."</p>
|
381,011 | <p>I should prove this claim:</p>
<blockquote>
<p>Every undirected graph with n vertices and $2n$ edges is connected.</p>
</blockquote>
<p>If it is false I should find a counterexample.
I was thinking of considering the complete graph with $n$ vertices. Such a graph is connected and contains $\frac{n(n-1)}{2}$ edges. Considering that $2n > \frac{n(n-1)}{2}$, my graph would be connected too. But I'm not sure this works, because even if my graph has $2n$ edges it doesn't have to be complete.
Can anybody help me?</p>
| Mark Bennet | 2,906 | <p>How might you prove or give a counter-example? What does it mean for a graph to be connected?</p>
<p>You observe that the number of edges in a complete graph is $\cfrac {n(n-1)}{2}$, and this grows faster than $2n$. So large complete graphs will have many more edges than $2n$.</p>
<p>You might also know that the minimum number of edges to make a connected graph with $n$ vertices is $n-1$ (a tree). So if need be we can delete quite a lot of edges from our complete graph and still leave it connected. So we can create a connected graph with some flexibility on the number of edges, but just deleting edges from a complete graph could leave the result connected. If deleting edges looks a bit tricky, let's try to add a disconnected component to this and still end up with $2n$ edges. </p>
<p>Now the complete graph on 5 vertices has 10 edges, so we've no flexibility there. But when we go up to 6 vertices, we find 15 edges, and we note that this is greater than $2\times 7=14$.</p>
<p>So we can take a complete graph on 6 vertices, add a single vertex, to make 7, and delete one edge to give 14. And we have a disconnected graph with $2n$ edges.</p>
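<p>This counterexample is easy to verify by hand, but here is a small check of my own (using a simple union-find for connectivity):</p>

```python
from itertools import combinations

# K6 plus an isolated 7th vertex, with one edge removed:
# n = 7 vertices, 2n = 14 edges, disconnected.
n = 7
edges = list(combinations(range(6), 2))  # complete graph on vertices 0..5: 15 edges
edges.remove((0, 1))                     # delete one edge -> 14 edges
assert len(edges) == 2 * n

# connectivity via union-find
parent = list(range(n))
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
for u, v in edges:
    parent[find(u)] = find(v)
components = len({find(v) for v in range(n)})
assert components == 2  # vertex 6 is isolated, so the graph is disconnected
```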
|
3,628,358 | <p>As stated, I need to prove that, up to isomorphism, the only simple group of order <span class="math-container">$p^2 q r$</span>, where <span class="math-container">$p, q, r$</span> are distinct primes, is <span class="math-container">$A_5$</span> (the alternating group of degree 5).</p>
<p>Now I know the following: if <span class="math-container">$G$</span> is a simple group and <span class="math-container">$|G| = 60$</span>, then <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$A_5$</span>. However, I don't even know how to begin the proof that <span class="math-container">$|G| = 60$</span>, or anything similar.</p>
| user113019 | 682,920 | <p>Since you asked for an "elementary" proof, I'll try to write one that doesn't use Burnside's transfer theorem.</p>
<p>Let <span class="math-container">$G$</span> be a simple group of order <span class="math-container">$p^2qr$</span>, WLOG <span class="math-container">$q<r$</span>. We call <span class="math-container">$n_p,n_q,n_r$</span> the numbers of the respective Sylow subgroups, which must be all greater than <span class="math-container">$1$</span>: then, <span class="math-container">$n_p \in \{q,r,qr\}$</span> and <span class="math-container">$n_r \in \{p,p^2,pq,p^2q\}$</span>. We also call <span class="math-container">$m \in \{1,p\}$</span> the largest cardinality of the intersection of two distinct Sylow <span class="math-container">$p$</span>-subgroups.</p>
<p>First of all, we observe that <span class="math-container">$G$</span> has no subgroups of index <span class="math-container">$q$</span>, otherwise it would be isomorphic to a subgroup of <span class="math-container">$A_q$</span>, which has no elements of order <span class="math-container">$r$</span>: in particular, <span class="math-container">$n_p \neq q$</span>.</p>
<p>Case 1: <span class="math-container">$n_r \in \{p,p^2\}$</span>.</p>
<p>This forces <span class="math-container">$p>r$</span>: indeed, in both cases we must have <span class="math-container">$r|p^2-1$</span>, i.e. <span class="math-container">$r|p\pm1$</span> and then <span class="math-container">$r \le p+1$</span>, and the inequality <span class="math-container">$p>r$</span> only fails when <span class="math-container">$(p,r)=(2,3)$</span>, which leaves no room for <span class="math-container">$q$</span>. Then, since <span class="math-container">$n_p \ge p+1$</span>, the only possibility for <span class="math-container">$n_p$</span> is <span class="math-container">$qr$</span>.</p>
<p>Case 1.1: <span class="math-container">$n_r \in \{p,p^2\}$</span>, <span class="math-container">$n_p=qr$</span> and <span class="math-container">$m=p$</span>.</p>
<p>Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be two Sylow <span class="math-container">$p$</span>-subgroups such that <span class="math-container">$|P \cap Q|=p$</span>: then, <span class="math-container">$N_G(P \cap Q)$</span> has order a multiple of <span class="math-container">$p^2$</span> and it has more than one Sylow <span class="math-container">$p$</span>-subgroup, so that it is forced to be the whole <span class="math-container">$G$</span>, and <span class="math-container">$P \cap Q \triangleleft G$</span>.</p>
<p>Case 1.2: <span class="math-container">$n_r \in \{p,p^2\}$</span>, <span class="math-container">$n_p=qr$</span> and <span class="math-container">$m=1$</span>.</p>
<p>There are exactly <span class="math-container">$qr(p^2-1)$</span> elements of order <span class="math-container">$p$</span> or <span class="math-container">$p^2$</span>; among the remaining <span class="math-container">$qr$</span> elements, there are exactly <span class="math-container">$n_q(q-1)$</span> of order <span class="math-container">$q$</span> and <span class="math-container">$n_r(r-1)$</span> of order <span class="math-container">$r$</span>, so that</p>
<p><span class="math-container">$1+n_q(q-1)+n_r(r-1) \le qr \Rightarrow n_r(r-1) \le qr-1-n_q(q-1) < q(r-1) \Rightarrow n_r<q$</span>,</p>
<p>which contradicts <span class="math-container">$n_r \ge r+1$</span>.</p>
<p>Case 2: <span class="math-container">$n_r=p^2q$</span>.</p>
<p>Arguing as in Case 1.2, there are exactly <span class="math-container">$p^2q$</span> elements of order <span class="math-container">$\neq r$</span>, and if <span class="math-container">$m=1$</span> we would have <span class="math-container">$1+n_q(q-1)+n_p(p^2-1) \le p^2q \Rightarrow n_p<q$</span>, contradiction. Therefore <span class="math-container">$m=p$</span>, and taking <span class="math-container">$P,Q$</span> as in Case 1.1, <span class="math-container">$N_G(P \cap Q)$</span> has order <span class="math-container">$p^2q$</span> or <span class="math-container">$p^2r$</span>: in the former case it is exactly the set of elements of order <span class="math-container">$\neq r$</span>, so that it is a characteristic subgroup (and then normal), while in the latter it has index <span class="math-container">$q$</span> in <span class="math-container">$G$</span>, but we had already ruled out this case at the beginning.</p>
<p>Case 3: <span class="math-container">$n_r=pq$</span> and <span class="math-container">$n_p=r$</span>.</p>
<p>Write <span class="math-container">$r=kp+1$</span> and <span class="math-container">$pq=hr+1$</span>, with <span class="math-container">$h,k \ge 1$</span>. We have <span class="math-container">$pq=h(kp+1)+1=hkp+h+1$</span>, so that <span class="math-container">$h=tp-1$</span> for some <span class="math-container">$t \ge 1$</span> and <span class="math-container">$q=hk+t$</span>. Since <span class="math-container">$q<r$</span>, we also have <span class="math-container">$(tp-1)k+t<kp+1 \Rightarrow (tp-p-1)k<1-t$</span>, which is impossible if <span class="math-container">$t>1$</span>, so that <span class="math-container">$t=1$</span> and <span class="math-container">$q=(p-1)k+1$</span>. Finally, we have <span class="math-container">$k>1$</span> (otherwise <span class="math-container">$p=q$</span>) and then <span class="math-container">$q>p$</span>. Now, <span class="math-container">$n_p=r$</span> implies that any Sylow <span class="math-container">$p$</span>-subgroup has a normalizer of order <span class="math-container">$p^2q$</span>, so that the latter has a normal Sylow <span class="math-container">$q$</span>-subgroup unless <span class="math-container">$(p,q)=(2,3)$</span> (which leads to <span class="math-container">$|G|=60$</span>, in which case I take for granted that <span class="math-container">$G \cong A_5$</span>), and then <span class="math-container">$n_q=r$</span>. However, we should have <span class="math-container">$q|r-1=kp \Rightarrow q|k \Rightarrow q|\text{gcd}(k,q)=1$</span>, impossible.</p>
<p>Case 4: <span class="math-container">$n_r=pq$</span> and <span class="math-container">$n_p=qr$</span>.</p>
<p>If <span class="math-container">$m=1$</span>, we get the same contradiction as in Case 1.2, therefore <span class="math-container">$m=p$</span>, and similarly to Case 2, taking <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> as usual, we must have <span class="math-container">$|N_G(P \cap Q)|=p^2q$</span>. Since <span class="math-container">$N_G(P \cap Q)$</span> has more than one Sylow <span class="math-container">$p$</span>-subgroup, it has a normal Sylow <span class="math-container">$q$</span>-subgroup (by the classification of groups of order <span class="math-container">$p^2q$</span>), so that the only possibility for <span class="math-container">$n_q$</span> is <span class="math-container">$r$</span>, and then <span class="math-container">$r \equiv 1 (\bmod q)$</span>. Also, since any Sylow <span class="math-container">$r$</span>-subgroup <span class="math-container">$R$</span> has a normalizer of order <span class="math-container">$pr$</span>, a subgroup <span class="math-container">$H<R$</span> of order <span class="math-container">$p$</span> cannot be normal in <span class="math-container">$R$</span>, otherwise the index of <span class="math-container">$N_G(H)$</span> in <span class="math-container">$G$</span> would be <span class="math-container">$1$</span> or <span class="math-container">$q$</span>, leading to a contradiction in both cases. Therefore <span class="math-container">$r \equiv 1 (\bmod p)$</span>, and then <span class="math-container">$r \equiv 1 (\bmod pq)$</span>, which contradicts <span class="math-container">$pq \equiv 1 (\bmod r)$</span>.</p>
|
54,878 | <p>Consider the two-parameter family of linear systems </p>
<p>$$\frac{DY(t)}{Dt} = \begin{pmatrix}
a & 1 \\
b & 1 \end{pmatrix} Y(t)
$$</p>
<p>In the $ab$-plane, identify all regions where this system possesses a saddle, a sink, a spiral sink, and so on. </p>
<p>I was able to get the eigenvalues as $$\lambda = \frac{a+1}{2} \pm \frac{\sqrt{(a+1)^2 - 4(a-b)}}{2}$$</p>
<p>but need help in finding the sink and source.</p>
<p>I got the spiral sink as: if $a \lt -1$</p>
<p>spiral source if $a \gt -1$</p>
<p>and center if $a = -1$</p>
<p>Can someone check this?</p>
| Alp Uzman | 169,085 | <p>Here is a slightly more high-brow answer. Given the trace-determinant plane (together with the qualitative behaviors associated to different regions of it), one can determine all cases in one unified argument.</p>
<p>Let us first be explicit on the dependencies of the coefficient matrix to the parameters and put</p>
<p><span class="math-container">$$A_{a,b}=\begin{pmatrix} a&1\\b&1\end{pmatrix},\,\, a,b\in\mathbb{R}.$$</span></p>
<p>As noted above, <span class="math-container">$\operatorname{tr}(A_{a,b})=a+1$</span> and <span class="math-container">$\det(A_{a,b})=a-b$</span>. Thus the point <span class="math-container">$(a+1,a-b)$</span> on the trace-determinant plane completely determines the qualitative behavior of the system (<span class="math-container">$\ast$</span>). Further, note that since there are no restrictions on <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, all of the trace-determinant plane is covered by varying <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Since we're also asked about the <span class="math-container">$ab$</span>-plane, this suggests that we define a coordinate change from the <span class="math-container">$ab$</span>-plane to the <span class="math-container">$td$</span>-plane (=trace-determinant plane). Define</p>
<p><span class="math-container">$$\Phi:\mathbb{R}^2\to \mathbb{R}^2, \begin{pmatrix}a\\b\end{pmatrix}\mapsto \begin{pmatrix} a+1\\a-b\end{pmatrix}. $$</span></p>
<p>(It is structurally clear where <span class="math-container">$\Phi$</span> comes from. Indeed <span class="math-container">$\Phi(a,b)=(\operatorname{tr}(A_{a,b}),\det(A_{a,b}))$</span>.)</p>
<p>Note that <span class="math-container">$\Phi \begin{pmatrix}a\\b\end{pmatrix}=\begin{pmatrix} 1&0\\1&-1\end{pmatrix}\begin{pmatrix} a\\b\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}$</span>, so that <span class="math-container">$\Phi$</span> is an affine coordinate change with inverse</p>
<p><span class="math-container">$$\Phi^{-1}\begin{pmatrix}t\\d\end{pmatrix}=\begin{pmatrix} 1&0\\1&-1\end{pmatrix}\begin{pmatrix} t\\d\end{pmatrix}+\begin{pmatrix}-1\\-1\end{pmatrix}.$$</span></p>
<p>Here is a humble caricature for this coordinate change (<a href="https://www.desmos.com/calculator/afq5mwx6pc" rel="nofollow noreferrer">https://www.desmos.com/calculator/afq5mwx6pc</a> is the link for the <span class="math-container">$ab$</span>-plane and the trace-determinant plane is from Hirsch, Smale and Devaney's <em>Differential Equations, Dynamical Systems, and an Introduction to Chaos</em> (3e, p.64)):</p>
<p><a href="https://i.stack.imgur.com/SHhb0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SHhb0.png" alt="enter image description here" /></a></p>
<p>The behavioral regions on the trace-determinant plane are thus transferred to the <span class="math-container">$ab$</span>-plane. All that's left is to calculate equations for the curves separating the regions, which is easily done by plugging in the equations for the corresponding curves in the trace-determinant plane. For instance the critical curve <span class="math-container">$d=\dfrac{t^2}{4}$</span> is transformed into <span class="math-container">$\Phi^{-1}\begin{pmatrix}t\\\frac{t^2}{4}\end{pmatrix}= \begin{pmatrix}t-1\\ t-1-\frac{t^2}{4}\end{pmatrix}$</span>.</p>
<hr />
<p>Regarding my statement above marked with (<span class="math-container">$\ast$</span>), note that the trace determinant plane does not distinguish two distinct behaviors on the critical curve <span class="math-container">$\dfrac{t^2}{4}=d$</span>, (if <span class="math-container">$\dfrac{t^2}{4}=d$</span>, then the matrix <span class="math-container">$A_{a,b}=A_{t-1,t-1-d}=A_{t-1,t-1-\frac{t^2}{4}}$</span> may or may not be diagonalizable) hence for the complete analysis of the behaviors some further inquiry is required. Say <span class="math-container">$\dfrac{t^2}{4}=d$</span>. Then the repeated eigenvalue of <span class="math-container">$A$</span> is <span class="math-container">$\dfrac{t}{2}$</span>. The associated eigenvector equation is</p>
<p><span class="math-container">$$\begin{pmatrix} 0\\0\end{pmatrix}=\begin{pmatrix} \frac{t}{2}-1 & 1\\ t-1-\frac{t^2}{4} & 1-\frac{t}{2}\end{pmatrix}\begin{pmatrix} v_1\\ v_2\end{pmatrix} =\begin{pmatrix} \frac{t}{2}-1 & 1\\ -\left(\frac{t}{2}-1\right)^2 & -\left(\frac{t}{2}-1\right)\end{pmatrix}\begin{pmatrix} v_1\\ v_2\end{pmatrix},$$</span></p>
<p>so that any eigenvector associated to <span class="math-container">$\dfrac{t}{2}$</span> is proportional to <span class="math-container">$V_t=\begin{pmatrix} 1\\ -\left(\frac{t}{2}-1\right)\end{pmatrix}$</span>, whence</p>
<p><span class="math-container">$$A_{t-1,t-1-\frac{t^2}{4}}\sim \begin{pmatrix} \frac{t}{2}&1\\0&\frac{t}{2}\end{pmatrix},$$</span></p>
<p>which completes the analysis.</p>
<hr />
<p>Finally, here is a further exercise to exemplify the benefit of the qualitative method I've described here. Say the parameters <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are chosen uniformly (and independently) at random from <span class="math-container">$[-2,2]$</span>. Out of all phase portraits with positive likelihood, which phase portrait is the least likely? (Note that the coordinate change <span class="math-container">$\Phi$</span> is volume preserving.)</p>
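<p>As a quick sanity check of my own on the affine coordinate change above, one can verify numerically that <span class="math-container">$\Phi$</span> and its stated inverse compose to the identity:</p>

```python
import numpy as np

# Phi(a, b) = (a+1, a-b), written in affine form M v + c,
# and the stated inverse M v + (-1, -1); note M is its own inverse.
M = np.array([[1.0, 0.0], [1.0, -1.0]])

def phi(v):      # (a, b) -> (trace, det)
    return M @ v + np.array([1.0, 0.0])

def phi_inv(v):  # (t, d) -> (a, b)
    return M @ v + np.array([-1.0, -1.0])

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=2)
    assert np.allclose(phi_inv(phi(v)), v)
    assert np.allclose(phi(phi_inv(v)), v)
```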
|
697,984 | <p>I want to check whether the position operator $A$, where $Af(x)=xf(x)$ , is self-adjoint. For this to be true it has to be Hermitian and also the domains of it and its adjoint must be equal. The Hilbert space I'm working with is of course $L^2(\mathbb{R}) $ with the natural inner product. The problem I'm having is with checking the domains, the definition of the adjoint domain is extremely unwieldy. Here are the definitions I'm using.</p>
<p>The domain of a linear operator is defined thusly:
$$D(A) =\{ f \in H : Af \in H\}.$$</p>
<p>The domain of its adjoint (and subsequently the adjoint itself) is defined through:
$$D(A^*) = \{ f \in H : \exists f_1 : \forall g \in H, \ (f,Ag)=(f_1,g) \}.$$
The adjoint is then given by $A^*f = f_1$.</p>
<p>I'm not very comfortable with this. Verifying that my operator is Hermitian boils down to just writing down an integral, but now I have no idea how to go about comparing the two domains to establish that $D(A)=D(A^*)$. </p>
<p>The domain of $A$ I've found to be the set of all functions such that $\int_{\mathbb{R}}dx \ x^2|f(x)|^2$ exists. But writing down the definition for the domain of the adjoint gives me:</p>
<p>$$D(A^*)=\{ f \in L^2(\mathbb{R}) : \exists f_1 :\forall g : \int_{\mathbb{R}}dx \ x f^*(x)g(x) = \int_{\mathbb{R}}dx \ f_1^*(x)g(x) \ \}.$$</p>
<p>(here $f_1$ and $g$ also belong to $L^2(\mathbb{R})$).</p>
<p>I'm supposed to conclude that the two sets are equal but I don't know how. Any help would be greatly appreciated.</p>
| homegrown | 125,659 | <p>By the sum rule of probability, we have
$$P[(A\cap B^C)\cup (A^C\cap B)]=P(A\cap B^C)+P(A^C\cap B).$$ Then by noticing that $P(A\cap B^C)=P(A)-P(A\cap B)$ and $P(A^C\cap B)=P(B)-P(A\cap B)$ we arrive at:
$$=P(A)-P(A\cap B)+P(B)-P(A\cap B)=P(A)+P(B)-2P(A\cap B)$$ and we are done.</p>
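<p>For a uniform measure on a small finite sample space, the identity can be checked exhaustively (my own addition, not part of the answer):</p>

```python
from itertools import chain, combinations

# Exhaustive check of P[(A ∩ B^c) ∪ (A^c ∩ B)] = P(A) + P(B) - 2 P(A ∩ B)
# for the uniform measure on a 4-point sample space.
omega = range(4)
N = len(omega)

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for A in map(set, subsets(omega)):
    for B in map(set, subsets(omega)):
        lhs = len((A - B) | (B - A)) / N   # P of the symmetric difference
        rhs = (len(A) + len(B) - 2 * len(A & B)) / N
        assert lhs == rhs
```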
|
697,984 | <p>I want to check whether the position operator $A$, where $Af(x)=xf(x)$ , is self-adjoint. For this to be true it has to be Hermitian and also the domains of it and its adjoint must be equal. The Hilbert space I'm working with is of course $L^2(\mathbb{R}) $ with the natural inner product. The problem I'm having is with checking the domains, the definition of the adjoint domain is extremely unwieldy. Here are the definitions I'm using.</p>
<p>The domain of a linear operator is defined thusly:
$$D(A) =\{ f \in H : Af \in H\}.$$</p>
<p>The domain of its adjoint (and subsequently the adjoint itself) is defined through:
$$D(A^*) = \{ f \in H : \exists f_1 : \forall g \in H, \ (f,Ag)=(f_1,g) \}.$$
The adjoint is then given by $A^*f = f_1$.</p>
<p>I'm not very comfortable with this. Verifying that my operator is Hermitian boils down to just writing down an integral, but now I have no idea how to go about comparing the two domains to establish that $D(A)=D(A^*)$. </p>
<p>The domain of $A$ I've found to be the set of all functions such that $\int_{\mathbb{R}}dx \ x^2|f(x)|^2$ exists. But writing down the definition for the domain of the adjoint gives me:</p>
<p>$$D(A^*)=\{ f \in L^2(\mathbb{R}) : \exists f_1 :\forall g : \int_{\mathbb{R}}dx \ x f^*(x)g(x) = \int_{\mathbb{R}}dx \ f_1^*(x)g(x) \ \}.$$</p>
<p>(here $f_1$ and $g$ also belong to $L^2(\mathbb{R})$).</p>
<p>I'm supposed to conclude that the two sets are equal but I don't know how. Any help would be greatly appreciated.</p>
| Nuduja Mapa | 1,019,214 | <p><strong>P[(A∩B′)∪(A′∩B)]</strong><br />
<strong>= P(A∩B′) + P(A′∩B) − P(A∩B′∩A′∩B)</strong> (inclusion-exclusion)<br />
<strong>= P(A∩B′) + P(A′∩B) − P(A∩A′∩B′∩B)</strong> (commutativity)<br />
<strong>= P(A∩B′) + P(A′∩B) − P(∅)</strong> (complement law, so this term is $0$)<br />
<strong>= P(A) − P(A∩B) + P(B) − P(A∩B)</strong> (since P(A) = P(A∩B′) + P(A∩B) and P(B) = P(A′∩B) + P(A∩B))<br />
<strong>= P(A) + P(B) − 2P(A∩B)</strong></p>
|
2,311,979 | <p>Let $A = (a_{i,j})_{n\times n}$ and $B = (b_{i,j})_{n\times n}$</p>
<p>$(AB) = (c_{i,j})_{n\times n}$, where $c_{i,j} = \sum_{k=1}^n a_{i,k} b_{k,j}$, so</p>
<p>$(AB)^T = (c_{j,i})$, where $c_{j,i} = \sum_{k=1}^n a_{j,k}b_{k,i} $, and
$B^T = b_{j,i}$ and $A^T = a_{j,i}$, so </p>
<p>$B^T A^T = d_{j,i}$ where $d_{j,i} = \sum_{k=1}^n b_{j,k} a_{k,i}$, but this means that $(AB)^T \not = (B^T A^T)$, so where is the problem in this derivation?</p>
<p>Edit: To be clear, let's be more precise.
Let $A = (a_{x,y})_{p\times n}$ and $B = (b_{z,t})_{n\times q}$</p>
<p>So, $A^T_{n\times p} = (a_{y,x})$ and $B^T_{q\times n} = (b_{t,z})$, which implies</p>
<p>$$(B^T A^T)_{i,j}^{q \times p} = \sum_{k=1}^n b_{i,k} a_{k,j},$$ and</p>
<p>$(AB)_{c,d}^{p\times q} = \sum_{k=1}^n a_{c,k} b_{k,d}$, which implies
$$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c}.$$
Since $i,d \in \{1,...,q\}$ and $j,c \in \{1,...,p\}$,
$$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c} = \sum_{k=1}^n a_{i,k} b_{k,j},$$
which again concludes that $(AB)^T \not = (B^T A^T)$.</p>
| Community | -1 | <p>Why are you making things so difficult?</p>
<p>Let $A$ be a $m \times n$ matrix, $B$ be a $n \times p$ matrix.</p>
<p>$$(B^TA^T)_{ij} = \sum_{k=1}^{n}(B^{T})_{ik}(A^T)_{kj}$$</p>
<p>$$= \sum_{k=1}^{n}B_{ki}A_{jk}$$
$$= \sum_{k=1}^{n}A_{jk}B_{ki}$$
$$=(AB)_{ji}$$
$$=((AB)^{T})_{ij}$$</p>
<p>Therefore we conclude</p>
<p>$$(AB)^T = B^T A^T$$</p>
|
923,000 | <p>I am confused because I have seen implies and equivalent used interchangeably. For instance, I've seen </p>
<p>$$x-y=0 \implies x=y$$</p>
<p>And I've also seen</p>
<p>$$x-y=0 \Longleftrightarrow x=y$$</p>
<p>Are both of these statements correct? Which one am I supposed to use?</p>
<p>I know that implies is true if the first statement is false, or both are true. And equivalence is only true if both are true or both are false. So is using them interchangeably okay for stuff like this?</p>
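<p>The relationship between the two connectives can be tabulated directly (a small Python sketch, added for illustration):</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def iff(p, q):
    return p == q

# equivalence is exactly implication in both directions
for p, q in product([False, True], repeat=2):
    assert iff(p, q) == (implies(p, q) and implies(q, p))

# implication alone is weaker: it also holds when p is false and q is true
assert implies(False, True) and not iff(False, True)
```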
| Dave | 174,047 | <p>If you're concerned about time, I don't think reading <em>Calculus</em> by Spivak is the best thing to do.</p>
<p>Either Zorich or Apostol is a great choice. I would say that they're "intermediate" in difficulty. Zorich contains more in-depth discussions of topics, and more examples than does Apostol.</p>
<p>If your goal is only to move on to Royden, you'll probably cover the material more quickly in Apostol. Zorich covers a number of topics not addressed in Apostol, such as vector analysis and submanifolds of $\mathbf{R}^n$. These are important topics, but not direct prerequisites for Royden. Still, I think with all the Lagrange multipliers and similar tools people use in economics, the submanifold topic is important if you want to understand the theory very clearly.</p>
<p>Zorich's first volume is quite concrete, whereas Apostol becomes abstract more quickly. This is probably because he doesn't want to duplicate what would be in a rigorous calculus book like his <em>Calculus</em>, although he does this more than Rudin's book does. His analysis book was for second- or third-year North American students, whereas Zorich's is, at the outset, for first-year Russian ones. Russian students have typically had some calculus in high school, but the practical portion of learning calculus continues into their first-year of university, with harder problems. So in Zorich I, you deal with hard problems on real numbers, rather than delving straight into metric spaces as you would in Apsotol's book.</p>
<p>Zorich covers only Riemann integration, whereas Apostol has chapters on Riemann-Stieltjes integration in one variable, Lebesgue integrals on the line, multiple Riemann integrals, and multiple Lebesgue integrals. The treatment of Lebesgue integration is less abstract than in more advanced books. Since it's limited to $\mathbf{R}$ or $\mathbf{R}^n$, it's more elementary, but at the same time there is some loss in clarity compared to the abstract theory on measure spaces. One reason to use Apostol might be a sort of introduction to the Lebesgue theory before returning to it at a higher level and "relearning" certain parts of it. Whether you'd want this is up to you.</p>
<p>The fact that both Rudin and Apostol have chapters on Riemann-Stieltjes, rather than Riemann, integration, indicates to me that they assumed students had already studied Riemann integrals rigorously, and would be ready for a generalized version right from the start. Considering the type of calculus courses most students take these days, this is rarely the case now. Zorich doesn't have this problem. </p>
<p>All in all, for a typical student who is good at math but didn't learn their calculus from a book like Spivak's or Apostol's <em>Calculus</em>, I think Zorich is the better choice because of the more concrete approach in the first volume (this doesn't necessarily mean easier). On the other hand, time constraints might cause you to prefer Apostol's analysis book.</p>
<p>EDIT: An important point that I neglected to mention is that Zorich's book will be <em>much</em> better than Apostol's if you aren't yet acquainted at all with multivariable calculus. A practical knowledge of some multivariable calculus is probably one of the tacit assumptions that Apostol and Rudin make about their readers, which is what allows them to deal with multivariable calculus in a briefer and more abstract way. Compare Apostol's 23-page chapter on multivariable differential calculus to Zorich's 132 (in the Russian version).</p>
<p>EDIT: Based on your later comments, I would suggest that reading </p>
<ol>
<li><p>Spivak's <em>Calculus</em>, </p></li>
<li><p>Whichever you prefer of Apostol's <em>Mathematical Analysis</em> or Rudin's <em>Principles of Mathematical Analysis</em>.</p></li>
</ol>
<p>would be a reasonable plan.</p>
<p>However, before beginning the multivariable calculus parts of those books, it would be best to learn some linear algebra and multivariable calculus from another source. This could be Volume 2 of Apostol's <em>Calculus</em>. You could instead skip straight to the multivariable part of Volume 1 of Zorich, but you'd have to learn the necessary linear algebra elsewhere first. I don't recommend Spivak's <em>Calculus on Manifolds</em> if you want to learn multivariable calculus for the first time. Also, you won't need Munkres - you'll get enough topology to start with in whichever other book you read. </p>
<p>EDIT: In answer to your additional question, these topics are mostly not discussed in Spivak. </p>
<p>However, Spivak is an excellent introduction to the mathematical way of thinking. That is, although you will not learn all the specific facts that arise in higher-level books (you do learn many, of course), you will learn to read and understand definitions, theorems and proofs the way mathematicians do, and to produce your own proofs. You will become intimately familiar with real numbers, sequences of real numbers, functions of a real variable and limits, so you will have examples in mind for the more general structures introduced in topology. You will also solve difficult problems.</p>
<p>So it is not that you will <em>know</em> topology already when you've read Spivak's book, it's mainly that it ought to be easier for you to learn because you will have improved your way of approaching mathematical questions. Countable sets are in fact discussed in the exercises to Spivak, however.</p>
<p>I can't guarantee that your trouble will "go away," but there is a good chance it will.</p>
<p>Also feel free to use Zorich rather than Rudin or Apostol, after Spivak, or even to jump straight to the multivariable part of Zorich at the end of Volume 1 and start reading from there. </p>
|
46,462 | <p>Hi I have a simple question. How do I plot the following with Day 1 as my X axis and Day 2 as my Y axis? I need the 22 variances plotted according to the Day they were taken from (these were originally 3D measurements taken over 2 days with the same specimens each day, there were 11 specimens and 22 xyz measurements from which I have taken the variances).</p>
<p>Can someone kindly help me out? I can't even find Scatterplot listed in the Mathematica docs; do they call it something else?</p>
<pre><code>{{{"ID", "Day", 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,
13., 14., 15., 16., 17., 18., 19., 20., 21., 22.}, {"H. sapiens",
1., 145.7, 153.2, 164.6, 161.1, 170.8, 191.7, 179.2, 178.5, 198.5,
169.9, 135.8, 182.8, 205.3, 210.3, 197.3, 238.4, 207.4, 209.,
188.6, 201.3, 210.2, 206.9}, {"H. sapiens", 2., 146.8, 152.7,
165.6, 165.1, 172.4, 189.2, 179.2, 193.9, 199.7, 172.9, 136.1,
181.8, 204.3, 212.3, 197.4, 238., 205.7, 211.5, 191.5, 204.3,
218.2, 204.2}, {"Pan troglodytes",
1., -200.9, -230.2, -205.5, -238.8, -237.3, -207.1, -213.2, \
-221.6, -225.9, -247.1, -242.6, -266.5, -259.1, -258., -266.1, -259., \
-227.3, -228.9, -212.1, -213.6, -223.9, -225.1}, {"Pan troglodytes",
2., -199.4, -229.6, -203.5, -243.6, -238.6, -205.7, -213.1, \
-222.3, -227.3, -258.1, -242.9, -265.4, -265.1, -258., -269.9, -260., \
-228.2, -233.3, -212.3, -215.1, -225.4, -223.6}, {"Pan verus ", 1.,
261.2, 273., 274., 283., 285.06, 300., 294., 305.037, 315.33,
289.08, 263.6, 298.46, 306.3, 316.7, 306.5, 331.6, 320.81, 323.7,
298.13, 312.79, 319.2, 310.2}, {"Pan verus", 2., 264.1, 268.7,
273.1, 289.19, 288.03, 299.9, 291.9, 304.137, 316.54, 289.675,
263.4, 298.58, 309.3, 316.2, 306., 332.3, 323.31, 327.4, 297.26,
310.95, 321.8, 310.9}, {"Pan schweinfurthii ", 1., 230.6, 241.2,
241., 257.3, 253.3, 274.1, 265.9, 272.36, 280.147, 259.21, 229.8,
268.19, 277.62, 286.6, 275.56, 306.3, 287.4, 292.04, 270.36,
283.35, 291.3, 281.13}, {"Pan schweinfurthii ", 2., 226.9, 237.9,
239.9, 254.6, 257., 273.7, 262., 273.33, 282.84, 260.05, 229.6,
267.49, 278.23, 287.9, 272.72, 306.2, 287.58, 287.92, 270.85,
283.37, 292.3, 281.88}, {"Pan paniscus ",
1., -421.9, -447., -420.9, -454.5, -451.7, -419.8, -427.8, -431.9, \
-445.7, -463.4, -459.2, -476.3, -470.1, -466.5, -475.5, -450.1, \
-439.5, -445.8, -417.4, -419.5, -434.4, -432.2}, {"Pan paniscus",
2., -426.2, -447.9, -421.5, -455.6, -453., -420.4, -428.7, -432.7, \
-441.5, -466.8, -459.5, -476.4, -470.9, -467.2, -474.7, -472.5, \
-437.5, -446.9, -416.4, -424.7, -433.1, -437.}, {"G. gorilla ",
1., -175.9, -204., -175.1, -221., -217., -168., -187., -198., \
-213., -229., -220.8, -251.8, -251., -245., -256., -243., -211., \
-224., -190., -194., -210., -212.}, {"G. gorilla ", 2., -180.8, -205.,
214.1, -228., -223., -167., -191., -195., -206., -242.9, -221.6, \
-251.8, -253., -243., -256., -243., -211., -224., -189.8, -196., \
-210., -215.}, {"G. graueri ", 1., 220., 192., 222., 177., 184., 228.,
214., 205., 187., 162., 174., 147., 155., 162., 145., 170., 194.,
186., 221.2, 215.5, 198.8, 198.3}, {"G. graueri ", 2., 221., 189.,
226., 182., 179., 230., 214., 210., 188., 161., 172., 146., 153.,
156., 146., 169., 193., 189., 221.2, 220.6, 202.2,
196.9}, {"G. beringei",
1., -763., -793.5, -762.2, -810.3, -802., -749.7, -771., -777., \
-783., -828., -809.3, -836.9, -833., -828., -837.9, -809., -788., \
-795., -767.3, -768.4, -789., -782.8}, {"G. beringei",
2., -763.3, -791.9, -761.3, -805.6, -800., -748.5, -769., -776., \
-774., -817.4, -812.8, -836.4, -833., -825., -837.2, -820., -791., \
-797., -766.6, -769.8, -782.6, -786.3}, {"G. diehli",
1., -78., -106.4, -77., -124., -121., -72.4, -87.5, -94., -99., \
-134., -122.1, -149., -153., -141., -154., -137., -105., -111., -83., \
-87., -104., -105.}, {"G. diehli",
2., -79.6, -105.8, -77., -124., -119., -72.1, -86., -100., -99., \
-134., -121.5, -148., -152., -142., -152., -135., -104., -110., \
-84.7, -86., -103., -107.}, {"P. abelii ",
1., -214.09, -232.6, -209.24, -232.56, -227.49, -185.5, -204.5, \
-209.5, -213.9, -250.52, -249.8, -256.49, -249.85, -244.22, -257.96, \
-237.6, -219.2, -225.2, -198.8, -201.9, -223.3, -224.2}, {"P. abelii \
", 2., -214.93, -230.8, -209.97, -233.98, -230.51, -184.4, -204.4, \
-208.4, -214.1, -249.21, -250., -258.98, -250.1, -242., -256.86, \
-234.5, -225.9, -226.4, -201., -205.8, -225.1, -224.1}, {"P. \
pygmaeus",
1., -288.8, -280.5, -280.5, -265., -264., -238.5, -250.7, -234.2, \
-224.9, -258.9, -296.1, -259.5, -246.3, -234.5, -251.1, -212.5, \
-224.5, -219.6, -249.1, -230.4, -224.4, -233.}, {"P. pygmaeus",
2., -293.5, -284.3, -278.4, -256.8, -255.6, -236.3, -248.4, \
-233.4, -227.3, -261.4, -295.5, -262., -242.6, -233.8, -252.2, -212., \
-225.1, -220.3, -248.2, -230.9, -225.3, -233.7}}}
</code></pre>
| ubpdqn | 1,997 | <p>Significant manual cleaning was required for the block of data in the post.</p>
<p>The data:</p>
<pre><code>data = {{{"ID", "Day", 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.,
12., 13., 14., 15., 16., 17., 18., 19., 20., 21.,
22.}, {"H. sapiens", 1., 145.7, 153.2, 164.6, 161.1, 170.8,
191.7, 179.2, 178.5, 198.5, 169.9, 135.8, 182.8, 205.3, 210.3,
197.3, 238.4, 207.4, 209., 188.6, 201.3, 210.2,
206.9}, {"H. sapiens", 2., 146.8, 152.7, 165.6, 165.1, 172.4,
189.2, 179.2, 193.9, 199.7, 172.9, 136.1, 181.8, 204.3, 212.3,
197.4, 238., 205.7, 211.5, 191.5, 204.3, 218.2,
204.2}, {"Pan troglodytes",
1., -200.9, -230.2, -205.5, -238.8, -237.3, -207.1, -213.2, \
-221.6, -225.9, -247.1, -242.6, -266.5, -259.1, -258., -266.1, -259., \
-227.3, -228.9, -212.1, -213.6, -223.9, -225.1}, {"Pan troglodytes",
2., -199.4, -229.6, -203.5, -243.6, -238.6, -205.7, -213.1, \
-222.3, -227.3, -258.1, -242.9, -265.4, -265.1, -258., -269.9, -260., \
-228.2, -233.3, -212.3, -215.1, -225.4, -223.6}, {"Pan verus", 1.,
261.2, 273., 274., 283., 285.06, 300., 294., 305.037, 315.33,
289.08, 263.6, 298.46, 306.3, 316.7, 306.5, 331.6, 320.81, 323.7,
298.13, 312.79, 319.2, 310.2}, {"Pan verus", 2., 264.1, 268.7,
273.1, 289.19, 288.03, 299.9, 291.9, 304.137, 316.54, 289.675,
263.4, 298.58, 309.3, 316.2, 306., 332.3, 323.31, 327.4, 297.26,
310.95, 321.8, 310.9}, {"Pan schweinfurthii ", 1., 230.6, 241.2,
241., 257.3, 253.3, 274.1, 265.9, 272.36, 280.147, 259.21, 229.8,
268.19, 277.62, 286.6, 275.56, 306.3, 287.4, 292.04, 270.36,
283.35, 291.3, 281.13}, {"Pan schweinfurthii ", 2., 226.9, 237.9,
239.9, 254.6, 257., 273.7, 262., 273.33, 282.84, 260.05, 229.6,
267.49, 278.23, 287.9, 272.72, 306.2, 287.58, 287.92, 270.85,
283.37, 292.3, 281.88}, {"Pan paniscus",
1., -421.9, -447., -420.9, -454.5, -451.7, -419.8, -427.8, \
-431.9, -445.7, -463.4, -459.2, -476.3, -470.1, -466.5, -475.5, \
-450.1, -439.5, -445.8, -417.4, -419.5, -434.4, -432.2}, {"Pan \
paniscus",
2., -426.2, -447.9, -421.5, -455.6, -453., -420.4, -428.7, \
-432.7, -441.5, -466.8, -459.5, -476.4, -470.9, -467.2, -474.7, \
-472.5, -437.5, -446.9, -416.4, -424.7, -433.1, -437.}, {"G. gorilla \
", 1., -175.9, -204., -175.1, -221., -217., -168., -187., -198., \
-213., -229., -220.8, -251.8, -251., -245., -256., -243., -211., \
-224., -190., -194., -210., -212.}, {"G. gorilla ", 2., -180.8, -205.,
214.1, -228., -223., -167., -191., -195., -206., -242.9, \
-221.6, -251.8, -253., -243., -256., -243., -211., -224., -189.8, \
-196., -210., -215.}, {"G. graueri ", 1., 220., 192., 222., 177.,
184., 228., 214., 205., 187., 162., 174., 147., 155., 162., 145.,
170., 194., 186., 221.2, 215.5, 198.8, 198.3}, {"G. graueri ",
2., 221., 189., 226., 182., 179., 230., 214., 210., 188., 161.,
172., 146., 153., 156., 146., 169., 193., 189., 221.2, 220.6,
202.2, 196.9}, {"G. beringei",
1., -763., -793.5, -762.2, -810.3, -802., -749.7, -771., -777., \
-783., -828., -809.3, -836.9, -833., -828., -837.9, -809., -788., \
-795., -767.3, -768.4, -789., -782.8}, {"G. beringei",
2., -763.3, -791.9, -761.3, -805.6, -800., -748.5, -769., -776., \
-774., -817.4, -812.8, -836.4, -833., -825., -837.2, -820., -791., \
-797., -766.6, -769.8, -782.6, -786.3}, {"G. diehli",
1., -78., -106.4, -77., -124., -121., -72.4, -87.5, -94., -99., \
-134., -122.1, -149., -153., -141., -154., -137., -105., -111., -83., \
-87., -104., -105.}, {"G. diehli",
2., -79.6, -105.8, -77., -124., -119., -72.1, -86., -100., -99., \
-134., -121.5, -148., -152., -142., -152., -135., -104., -110., \
-84.7, -86., -103., -107.}, {"P. abelii ",
1., -214.09, -232.6, -209.24, -232.56, -227.49, -185.5, -204.5, \
-209.5, -213.9, -250.52, -249.8, -256.49, -249.85, -244.22, -257.96, \
-237.6, -219.2, -225.2, -198.8, -201.9, -223.3, -224.2}, {"P. abelii \
", 2., -214.93, -230.8, -209.97, -233.98, -230.51, -184.4, -204.4, \
-208.4, -214.1, -249.21, -250., -258.98, -250.1, -242., -256.86, \
-234.5, -225.9, -226.4, -201., -205.8, -225.1, -224.1}, {"P. \
pygmaeus",
1., -288.8, -280.5, -280.5, -265., -264., -238.5, -250.7, \
-234.2, -224.9, -258.9, -296.1, -259.5, -246.3, -234.5, -251.1, \
-212.5, -224.5, -219.6, -249.1, -230.4, -224.4, -233.}, {"P. \
pygmaeus",
2., -293.5, -284.3, -278.4, -256.8, -255.6, -236.3, -248.4, \
-233.4, -227.3, -261.4, -295.5, -262., -242.6, -233.8, -252.2, -212., \
-225.1, -220.3, -248.2, -230.9, -225.3, -233.7}}};
</code></pre>
<p>If I interpret correctly: there are 11 species with data from 2 days (11 replications).
The OP suggests the data recorded are "variances" (there are 22 values per species) and the OP states "from which I have taken the variance". However, variance should be non-negative. </p>
<p>The following plots "day 2" against "day 1" by species. This may not be the desired output.</p>
<pre><code>dat = Rest@data[[1]];
spec = dat[[All, 1]];
datm = #[[3 ;;]] & /@ (Thread[{#1, #2}] & @@@ GatherBy[dat, #[[1]] &]);
lpt = ListPlot[datm,
PlotLegends ->
PointLegend[Automatic,
Style[#, FontFamily -> "Arial"] & /@ DeleteDuplicates[spec],
LegendLayout -> (Grid[#, Frame -> True, Alignment -> Left] &)],
PlotMarkers -> {Automatic, 12}, Frame -> True,
FrameLabel -> {"Day 1", "Day 2"},
BaseStyle -> {FontFamily -> "Arial", 14}, ImageSize -> 600]
</code></pre>
<p><img src="https://i.stack.imgur.com/CnhCW.png" alt="enter image description here"></p>
<p>If the aim is to take means and variances by species, I leave other ways to visualize (e.g. error bar plots, distribution plots) up to the intention of the OP.</p>
<pre><code>mnlp = ListPlot[List /@ (Mean /@ datm),
PlotLegends ->
PointLegend[Automatic,
Style[#, FontFamily -> "Arial"] & /@ DeleteDuplicates[spec],
LegendLayout -> (Grid[#, Frame -> True, Alignment -> Left] &)],
PlotMarkers -> {Automatic, 12}, Frame -> True,
FrameLabel -> {"Day 1", "Day 2"},
BaseStyle -> {FontFamily -> "Arial", 14}, ImageSize -> 600,
PlotLabel -> "Mean Values"];
varlp = ListPlot[List /@ (Variance /@ datm),
PlotLegends ->
PointLegend[Automatic,
Style[#, FontFamily -> "Arial"] & /@ DeleteDuplicates[spec],
LegendLayout -> (Grid[#, Frame -> True, Alignment -> Left] &)],
PlotMarkers -> {Automatic, 12}, Frame -> True,
FrameLabel -> {"Day 1", "Day 2"},
BaseStyle -> {FontFamily -> "Arial", 14}, ImageSize -> 600,
PlotLabel -> "Variance Values"];
</code></pre>
<p>Note: only a minimal change was required, namely the argument of <code>ListPlot</code>. The above objects are shown below:</p>
<p><img src="https://i.stack.imgur.com/51FWk.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/HO8Ch.png" alt="enter image description here"></p>
|
2,832,311 | <p>Suppose I draw 10 tickets at random with replacement from a box of tickets, each of which is labeled with a number. The average of the numbers on the tickets is 1, and the SD of the numbers on the tickets is 1. Suppose I repeat this over and over, drawing 10 tickets at a time. Each time, I calculate the sum of the numbers on the 10 tickets I draw. Consider the list of values of the sample sum, one for each sample I draw. This list gets an additional entry every time I draw 10 tickets.</p>
<p>i) As the number of repetitions grows, the average of the list of sums is increasingly likely to be between 9.9 and 10.1.</p>
<p>ii) As the number of repetitions grows, the histogram of the list of sums is likely to be approximated better and better by a normal curve (after converting the list to standard units).</p>
<p>The answer given is that (i) is correct and (ii) is not correct.</p>
<p>I assumed the reason why (i) is correct is the fact that as the number of draws increases, the average of the list of sums converges to the expected value of the sample sum.</p>
<p>I can't figure out why (ii) is incorrect. Based on the definitions for CLT : </p>
<ul>
<li>The central limit theorem applies to sums of draws.</li>
<li>The number of draws should be reasonably large.</li>
<li>The more lopsided the values are, the more draws needed for
reasonable approximation (compare the approximations of rolling 5 in
roulette to flipping a fair coin).</li>
<li>It is another type of convergence : <strong>as the number of draws grows, the
normal approximation gets better.</strong></li>
</ul>
<p>Aside from the fact that this case does seem to satisfy the above definitions, doesn't the last point confirm what (ii) is suggesting to be true?</p>
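<p>Statement (i) can be illustrated with a short simulation (Python, added for illustration; the box {0, 2} is an arbitrary choice with average 1 and SD 1):</p>

```python
import random

random.seed(42)

def draw_ticket():
    # box of tickets {0, 2}: average 1, SD 1
    return random.choice([0, 2])

# repeat the experiment many times; each entry is the sum of 10 draws
sums = [sum(draw_ticket() for _ in range(10)) for _ in range(20000)]
avg = sum(sums) / len(sums)
# the average of the list of sums settles near 10 = 10 * (box average)
assert abs(avg - 10) < 0.2
```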
| Bill Wallis | 350,028 | <p>In addition to the current answer/comment, you are confusing yourself with conflicting notation.</p>
<p>Given the sequence $a_{n} = 1/n$, you then write
$$
a_{3}(n) \geq a_{2}(N) \hspace{20pt}\text{and}\hspace{20pt} a_{2}(n) \geq a_{100}(N)
$$
but these do not make much sense by your definition. Instead, you would have
$$
a_{2} = 1/2, \hspace{20pt} a_{3} = 1/3, \hspace{20pt} \ldots, \hspace{20pt} a_{100} = 1/100
$$
and so on, so that
$$
a_{n} = 1/n \hspace{20pt}\text{and}\hspace{20pt} a_{N} = 1/N.
$$
Things like $a_{2}(n)$ have not been defined and do not make sense at the moment, which is partially leading to your confusion.</p>
<hr>
<p>To flesh out an example, consider the sequence $a_{n} = 1/n$ with $a = 0$. Let $\varepsilon = 1/10$. Clearly if you pick $N = 10$, then $a_{10} = 1/10 \leq \varepsilon$, so $a_{10}$ is in the $\varepsilon$-ball (around $a = 0$). Similarly, we have $a_{11} = 1/11 \leq 1/10 = \varepsilon$, so that $a_{11}$ is also inside the $\varepsilon$-ball. In fact, every $a_{n}$ for $n \geq N = 10$ lies inside the $\varepsilon$-ball.</p>
<p>What if $\varepsilon = 1/100$? Then we must pick $N = 100$ so that every $a_{n}$ for $n \geq N$ lies inside the $\varepsilon$-ball. It's usually a bit of work to determine how $N$ depends on $\varepsilon$ (in this case it is simply $N = 1/\varepsilon$), but once you can show that $N$ depends on $\varepsilon$ in some way so that that definition holds, you've shown convergence.</p>
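<p>The dependence of $N$ on $\varepsilon$ for $a_{n} = 1/n$ can be sketched in a few lines (Python, added for illustration):</p>

```python
import math

def N_for(eps):
    # smallest N with 1/N <= eps, i.e. N = ceil(1/eps)
    return math.ceil(1 / eps)

for eps in (0.1, 0.01, 0.001):
    N = N_for(eps)
    # every term from index N onward lies inside the eps-ball around a = 0
    assert all(abs(1 / n - 0) <= eps for n in range(N, N + 1000))
```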
|
658,168 | <blockquote>
<p>Prove that the set of nonzero real numbers is a group under the operation $*$ defined by
\begin{align}
a*b = \begin{cases} ab &\mbox{if } a > 0 \\
\frac{a}{b} &\mbox{if } a < 0
\end{cases}
\end{align}</p>
</blockquote>
<p>I have trouble proving the <strong>associativity property</strong> of a group here. Here is my work so far:</p>
<p>If $a > 0, b > 0$, then $(a*b)*c = (ab)*c = (ab)c$ and $a*(b*c) = a*(bc) = a(bc)$. </p>
<p>If $a > 0, b < 0$, then $(a*b)*c = (a/b)*c = (a/b)/c = a/(bc)$ and $a*(b*c) = a*(b/c) = a/(b/c) = ac/b$.</p>
<p>If $a < 0, b > 0$, then $(a*b)*c = (a/b)*c = (a/b)c = ac/b$ and $a*(b*c) = a*(bc) = a/(bc)$.</p>
<p>If $a < 0, b < 0$, then $(a*b)*c = (a/b)*c = (a/b)/c = a/(bc)$ and $a*(b*c) =a*(b/c) = a/(b/c) = ac/b$.</p>
<p>I have been unable to prove that $(a*b)*c = a*(b*c)$ for all cases but the first one. I cannot understand why...</p>
| John Habert | 123,636 | <p>I've highlighted in red the fixes you need. You have to be careful about evaluating $ab$ and $a/b$ to get the sign.</p>
<p>If $a > 0, b < 0$, then $(a*b)*c = \color{red}{(ab)*c = ab/c}$ and $a*(b*c) = a*(b/c) = \color{red}{a(b/c) = ab/c}$.</p>
<p>If $a < 0, b > 0$, then $(a*b)*c = (a/b)*c = \color{red}{(a/b)/c = a/(bc)}$ and $a*(b*c) = a*(bc) = a/(bc)$.</p>
<p>If $a < 0, b < 0$, then $(a*b)*c = (a/b)*c = \color{red}{(a/b)c = ac/b}$ and $a*(b*c) =a*(b/c) = a/(b/c) = ac/b$.</p>
<hr>
<p>As an aside, the associative property is the least of your worries. This operation fails to be closed for the integers. Was the problem for the non-zero rationals?</p>
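<p>With the case analysis fixed, associativity can also be checked mechanically over the nonzero rationals, using exact arithmetic to avoid rounding issues (a Python sketch, added for illustration):</p>

```python
import random
from fractions import Fraction

def star(a, b):
    # the operation from the problem
    return a * b if a > 0 else a / b

random.seed(1)

def rand_nonzero():
    num = random.choice([n for n in range(-20, 21) if n != 0])
    return Fraction(num, random.randint(1, 9))

for _ in range(2000):
    a, b, c = rand_nonzero(), rand_nonzero(), rand_nonzero()
    assert star(star(a, b), c) == star(a, star(b, c))
```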
|
658,168 | <blockquote>
<p>Prove that the set of nonzero real numbers is a group under the operation $*$ defined by
\begin{align}
a*b = \begin{cases} ab &\mbox{if } a > 0 \\
\frac{a}{b} &\mbox{if } a < 0
\end{cases}
\end{align}</p>
</blockquote>
<p>I have trouble proving the <strong>associativity property</strong> of a group here. Here is my work so far:</p>
<p>If $a > 0, b > 0$, then $(a*b)*c = (ab)*c = (ab)c$ and $a*(b*c) = a*(bc) = a(bc)$. </p>
<p>If $a > 0, b < 0$, then $(a*b)*c = (a/b)*c = (a/b)/c = a/(bc)$ and $a*(b*c) = a*(b/c) = a/(b/c) = ac/b$.</p>
<p>If $a < 0, b > 0$, then $(a*b)*c = (a/b)*c = (a/b)c = ac/b$ and $a*(b*c) = a*(bc) = a/(bc)$.</p>
<p>If $a < 0, b < 0$, then $(a*b)*c = (a/b)*c = (a/b)/c = a/(bc)$ and $a*(b*c) =a*(b/c) = a/(b/c) = ac/b$.</p>
<p>I have been unable to prove that $(a*b)*c = a*(b*c)$ for all cases but the first one. I cannot understand why...</p>
| Sammy Black | 6,509 | <p>A different answer addresses your question directly, but I am including this answer to offer a slightly different perspective. The approach described below yields the fact that you have a group <em>automatically</em> (without checking various properties), but presumes that you're comfortable with the notions of group homomorphism/isomorphism/automorphism and semidirect products.</p>
<hr>
<p>As a starting point, note that $\Bbb{R}_{\ne 0}$ (with usual multiplication) is isomorphic to the cartesian product $\Bbb{R}_{> 0} \times \{ \pm 1 \}$ by separating the magnitude of the real number from its sign. Explicitly, the map is
$$
\begin{align}
\Bbb{R}_{\ne 0} &\to \Bbb{R}_{> 0} \times \{ \pm 1 \} \\
a &\mapsto \left( |a|, \frac{a}{|a|} \right)
\end{align}
$$
whose inverse is the map
$$
\left( b, \varepsilon\right) \mapsto b \varepsilon.
$$</p>
<p>Now, the multiplicative group of positive reals (the left-hand factor in the direct product) has the reciprocal map $\varphi \in \operatorname{Aut}(\Bbb{R}_{>0})$, given by $\varphi(b) = \frac{1}{b}$, which is an involution: $\varphi^2 = 1$. Therefore, there is a natural map
$$
\begin{align}
\left\{ \pm 1 \right\} &\xrightarrow{\Phi} \operatorname{Aut}(\Bbb{R}_{>0}) \\
+1 &\mapsto 1 \\
-1 &\mapsto \varphi
\end{align}
$$</p>
<p>Your group is now isomorphic to the semidirect product
$$
\Bbb{R}_{>0} \rtimes \{ \pm 1 \}
$$
since
$$
(b_1, \varepsilon_1) \cdot (b_2, \varepsilon_2) = (b_1 \Phi(\varepsilon_1)(b_2), \varepsilon_1 \varepsilon_2) =
\begin{cases}
(b_1 b_2, \varepsilon_1 \varepsilon_2) & \text{if } \varepsilon_1 = +1 \\
(b_1 \frac{1}{b_2}, \varepsilon_1 \varepsilon_2) & \text{if } \varepsilon_1 = -1.
\end{cases}
$$</p>
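<p>The isomorphism can be verified numerically: pulling the semidirect-product multiplication back along $a \mapsto (|a|, a/|a|)$ reproduces the operation $*$ from the question (a Python sketch, added for illustration; exact rationals are used to avoid rounding):</p>

```python
import random
from fractions import Fraction

def star(a, b):
    # the operation from the question
    return a * b if a > 0 else a / b

def to_pair(a):
    # a  |->  (|a|, a/|a|)
    return (abs(a), 1 if a > 0 else -1)

def from_pair(pair):
    b, eps = pair
    return b * eps

def semidirect(p1, p2):
    (b1, e1), (b2, e2) = p1, p2
    # Phi(+1) = identity, Phi(-1) = the reciprocal map
    b2_twisted = b2 if e1 == 1 else 1 / b2
    return (b1 * b2_twisted, e1 * e2)

random.seed(2)
for _ in range(1000):
    a = Fraction(random.choice([-1, 1]) * random.randint(1, 20), random.randint(1, 9))
    b = Fraction(random.choice([-1, 1]) * random.randint(1, 20), random.randint(1, 9))
    assert from_pair(semidirect(to_pair(a), to_pair(b))) == star(a, b)
```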
|
184,266 | <p>Let $a,b,c$ and $d$ be positive real numbers such that $a+b+c+d=4.$ </p>
<p>Prove the inequality </p>
<blockquote>
<p>$$a^2bc+b^2cd+c^2da+d^2ab \leq 4 .$$ </p>
</blockquote>
<p>Thanks :) </p>
| Y.Z | 19,079 | <p>Let $S=a^2bc+b^2cd+c^2da+d^2ab$. We can easily find that:</p>
<p>$$S-(ac+bd)(ab+cd)=-bd(a-c)(b-d);$$
$$S-(bc+ad)(bd+ac)=ac(a-c)(b-d)$$
which implies $$S\le \max\{(ac+bd)(ab+cd),(bc+ad)(bd+ac)\}.$$</p>
<p>By the AM-GM inequality:</p>
<p>\begin{align*}
(ac+bd)(ab+cd)&\le \left(\frac{(ac+bd)+(ab+cd)}{2}\right)^2\\
{}&=\frac{(a+d)^2(b+c)^2}{4}\\
{}&\le \frac{1}{4}\left[\left(\frac{a+d+b+c}{2}\right)^2\right]^2\\
{}&=4
\end{align*}
Similarly, we have
$$(bc+ad)(bd+ac)\le 4.$$</p>
<p>Thus we have $S\le 4$.</p>
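<p>A random search over the constraint set never exceeds the bound, consistent with $S\le 4$ (a Python sketch, added for illustration):</p>

```python
import random

random.seed(3)

def S(a, b, c, d):
    return a*a*b*c + b*b*c*d + c*c*d*a + d*d*a*b

best = S(1, 1, 1, 1)                 # equality case a = b = c = d = 1
assert best == 4

for _ in range(20000):
    x = [random.random() for _ in range(4)]
    t = sum(x)
    a, b, c, d = (4 * v / t for v in x)   # rescale so a + b + c + d = 4
    assert S(a, b, c, d) <= 4 + 1e-9
```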
|
1,930,401 | <p>Are there any non-linear real polynomials $p(x)$ such that $e^{p(x)}$ has a closed form antiderivative? If not, is the value of $\int_{0}^{\infty}e^{p(x)}dx$ known for any $p$ with negative leading coefficient other than $-x$ and $-x^2$?</p>
| Takahiro Waki | 268,226 | <p>Regarding convex or concave: the optimal polygon is clearly convex. By repeating this argument, we get the theorem that the area of a quadrilateral is maximal iff its four vertices lie on a circle. Using this theorem repeatedly, the area can be maximal iff all vertices lie on a circle.</p>
<p>So $S$ is maximal when every side $L_k$ is a chord of a circle of radius $R$.
If the central angle subtended by the chord $L_k$ is $a_k$,</p>
<p>$$\sin\frac{a_k}{2}=\frac{L_k}{2R}$$</p>
<p>$$\sum a_k=2\sum \sin^{-1}\frac{L_k}{2R}=2\pi$$</p>
<p>$$\sum \frac{L_k}{2R}\sqrt{1-\Bigl(\frac{L_k}{2R}\Bigr)^2}=2\prod\frac{L_k}{2R}\tag1$$</p>
<p>We get $R$ by solving (1). Therefore
$$S=\sum S_k=\sum \frac{RL_k}{2}\sqrt{1-\Bigl(\frac{L_k}{2R}\Bigr)^2}$$</p>
<p>For solving equation (1), see <a href="https://in.answers.yahoo.com/question/index?qid=20140526080730AASjOAI" rel="nofollow">https://in.answers.yahoo.com/question/index?qid=20140526080730AASjOAI</a></p>
|
3,084,934 | <p>I want to prove or disprove that the Fourier transform <span class="math-container">$\mathcal F \colon (\mathcal S(\mathbb R^d), \lVert \cdot \rVert_1) \to L^1(\mathbb R^d)$</span> is unbounded, where <span class="math-container">$\lVert\cdot \rVert_1$</span> denotes the <span class="math-container">$L^1(\mathbb R^d)$</span>-norm. </p>
<p>Having thought about this for a moment, I believe it is indeed unbounded. So I tried to find a sequence of Schwartz functions <span class="math-container">$(f_n)_{n\in \mathbb N} \subseteq \mathcal S(\mathbb R^d)$</span> with <span class="math-container">$\forall n: \lVert f_n \rVert_1 = 1$</span> and <span class="math-container">$$\lVert \mathcal F f_n \rVert_1 \to +\infty.$$</span>
Of course I first thought about Gaussians but couldn't quite find a suitable sequence. Any help appreciated!</p>
| Alexdanut | 629,594 | <p>Alternatively we may use Heine's definition of the limit of a sequence.<br>
Let <span class="math-container">$x_n$</span> be any sequence with <span class="math-container">$x_n \rightarrow a$</span>; then <span class="math-container">$\lim_{n\to \infty} f(x_n)=\infty$</span>.
We have: <span class="math-container">$\lim_{x\to a} \frac{1}{f(x)}=\lim_{n\to \infty} \frac{1}{f(x_n)}=\frac{1}{\lim_{n\to\infty} f(x_n)}=\frac{1}{\infty}=0$</span></p>
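<p>A concrete instance (Python, added for illustration): $f(x) = 1/(x-1)^2$ tends to $\infty$ as $x \to 1$, and along any sequence $x_n \to 1$ the reciprocals $1/f(x_n)$ tend to $0$.</p>

```python
a = 1.0
def f(x):
    return 1 / (x - a) ** 2        # f(x) -> infinity as x -> a

xs = [a + 10 ** (-k) for k in range(1, 8)]   # a sequence x_n -> a with x_n != a
recips = [1 / f(x) for x in xs]              # 1/f(x_n) = (x_n - a)^2

# the reciprocals decrease toward 0
assert all(r < prev for prev, r in zip(recips, recips[1:]))
assert recips[-1] < 1e-13
```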
|
500,144 | <p>Hello, and how are you doing today? I just came across a problem which needs to use Stokes' theorem.</p>
<p>The problems says:</p>
<p>Evaluate the surface integral</p>
<p>$$
\int_{S}\nabla\times\vec{F}\cdot{\rm d}\vec{S}
$$</p>
<p>where F(x,y,z)=$(y^2)i$ + $(2xy)j$+$(xz^2)k$ and S is the surface of the paraboloid $z=
x^2+y^2$ bounded by the planes $x=0,y=0$ and $z=1$ in the first quadrant pointing upward.</p>
<p>I got $\nabla\times\vec{F}$, which is $(-z^2)\,\vec{j}$.</p>
<p>I am stuck here because I don't know the boundary $C$ needed in order to use Stokes' theorem. Could someone please help me get started?</p>
<p>By the way thank you very much for taking your time and consideration to help me on this problem.</p>
<p><img src="https://i.stack.imgur.com/1FxUk.jpg" alt="enter image description here"></p>
| kedrigern | 97,299 | <p>If $$f(x)=(x-a)^3+(x-b)^3+(x-c)^3$$ then $$f'(x)=3(x-a)^2+3(x-b)^2+3(x-c)^2$$ Since $a,b,c$ are distinct real numbers $f'(x) > 0$ for all $x\in\mathbb{R}$ and therefore $f$ is strictly increasing and therefore it has only one real root.</p>
<p>EDIT: The last statement is true since $f$ is a polynomial function of degree $3$ ($a_0>0$) so $\lim_{x\to-\infty} f(x) = -\infty$, $\lim_{x\to\infty} f(x) = \infty$ and $f$ is continuous.</p>
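<p>The conclusion can be spot-checked numerically: for random distinct $a,b,c$ the function changes sign exactly once (a Python sketch, added for illustration; since $f$ is strictly increasing and $f(\min)\le 0\le f(\max)$, the root lies between $\min(a,b,c)$ and $\max(a,b,c)$):</p>

```python
import random

random.seed(4)

def count_sign_changes(f, lo, hi, steps=1000):
    vals = [f(lo + (hi - lo) * i / steps) for i in range(steps + 1)]
    signs = [1 if v > 0 else -1 for v in vals if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u != v)

for _ in range(50):
    a, b, c = random.sample(range(-50, 50), 3)       # distinct values
    f = lambda x, a=a, b=b, c=c: (x - a)**3 + (x - b)**3 + (x - c)**3
    lo, hi = min(a, b, c) - 1, max(a, b, c) + 1
    assert count_sign_changes(f, lo, hi) == 1        # exactly one real root
```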
|
500,144 | <p>Hello, and how are you doing today? I just came across a problem which needs to use Stokes' theorem.</p>
<p>The problems says:</p>
<p>Evaluate the surface integral</p>
<p>$$
\int_{S}\nabla\times\vec{F}\cdot{\rm d}\vec{S}
$$</p>
<p>where F(x,y,z)=$(y^2)i$ + $(2xy)j$+$(xz^2)k$ and S is the surface of the paraboloid $z=
x^2+y^2$ bounded by the planes $x=0,y=0$ and $z=1$ in the first quadrant pointing upward.</p>
<p>I got $\nabla\times\vec{F}$, which is $(-z^2)\,\vec{j}$.</p>
<p>I am stuck here because I don't know the boundary $C$ needed in order to use Stokes' theorem. Could someone please help me get started?</p>
<p>By the way thank you very much for taking your time and consideration to help me on this problem.</p>
<p><img src="https://i.stack.imgur.com/1FxUk.jpg" alt="enter image description here"></p>
| Тимофей Ломоносов | 54,117 | <p>Let's substitute the variable: $x=y+\frac{(a+b+c)}{3}$</p>
<p>The equation will now look like </p>
<p>$$3y^3+(2a^2-2ab-2ac+2b^2-2bc+2c^2)y+\frac{(a+b-2c)(a-2b+c)(b-2a+c)}{9}=0$$</p>
<p>Now we apply Cardano's method.</p>
<p>$$Q=\left(\frac{2a^2-2ab-2ac+2b^2-2bc+2c^2}{3}\right)^3+\left(\frac{\frac{(a+b-2c)(a-3b+c)(b-2a+c)}{9}}{2}\right)^2=\frac{8}{27}(a^2-ab-ac+b^2-bc+c^2)^3+\frac{1}{324}(a+b-2c)^2(a-2b+c)^2(b-2a+c)^2$$</p>
<p>Then the answer is given depending on the sign of Q.</p>
<p>If $Q>0$ there is only one real root</p>
<p>If $Q<0$ there are three distinct real roots</p>
<p>If $Q=0$ there are two distinct real roots.</p>
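<p>Since $a^2-ab-ac+b^2-bc+c^2 = \frac12[(a-b)^2+(b-c)^2+(a-c)^2] > 0$ for distinct $a,b,c$, both terms of $Q$ are nonnegative and the first is strictly positive, so only the one-real-root case actually occurs here. A quick numeric confirmation (Python, added for illustration; the middle factor is written as $(a-2b+c)$, matching the squared product above):</p>

```python
import random

random.seed(5)

def Q(a, b, c):
    X = a*a - a*b - a*c + b*b - b*c + c*c
    prod = (a + b - 2*c) * (a - 2*b + c) * (b - 2*a + c)
    return (8 / 27) * X**3 + (1 / 324) * prod**2

for _ in range(200):
    a, b, c = random.sample(range(-30, 30), 3)   # distinct
    assert Q(a, b, c) > 0                        # only the one-real-root case occurs
```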
|
500,144 | <p>Hello, and how are you doing today? I just came across a problem which needs to use Stokes' theorem.</p>
<p>The problems says:</p>
<p>Evaluate the surface integral</p>
<p>$$
\int_{S}\nabla\times\vec{F}\cdot{\rm d}\vec{S}
$$</p>
<p>where F(x,y,z)=$(y^2)i$ + $(2xy)j$+$(xz^2)k$ and S is the surface of the paraboloid $z=
x^2+y^2$ bounded by the planes $x=0,y=0$ and $z=1$ in the first quadrant pointing upward.</p>
<p>I got $\nabla\times\vec{F}$, which is $(-z^2)\,\vec{j}$.</p>
<p>I am stuck here because I don't know the boundary $C$ needed in order to use Stokes' theorem. Could someone please help me get started?</p>
<p>By the way thank you very much for taking your time and consideration to help me on this problem.</p>
<p><img src="https://i.stack.imgur.com/1FxUk.jpg" alt="enter image description here"></p>
| egreg | 62,967 | <p>Set $m=(a+b+c)/3$, $A=a-m$, $B=b-m$, $C=c-m$ and $x=y+m$. Then your equation becomes
$$
(y-A)^3 + (y-B)^3 + (y-C)^3 = 0
$$
and, since $A+B+C=0$, your expansion applies to give
$$
y^3+(A^2+B^2+C^2)y-\frac{A^3+B^3+C^3}{3}=0
$$
which is a depressed cubic, whose discriminant is
$$
\frac{1}{4}\biggl(-\frac{A^3+B^3+C^3}{3}\biggr)^2+\frac{1}{27}(A^2+B^2+C^2)^3>0
$$
so the equation has only one real root.</p>
|
104,335 | <p>I am implementing code that works correctly but takes too much time, and I could not see how to optimize it to run quickly. Here is my code:</p>
<pre><code>data=RandomInteger[{1,400},{5000,2}];
c=10;
r=60;
pts=c + r {Cos[#], Sin[#]} & /@ Range[0, 2 π, 2 π/16];
newCoord = Table[(# - pts[[i]]) & /@ data, {i, 1, Length@pts}];
PolarCoords = Table[ToPolarCoordinates[#] & /@ newCoord[[i]] /.
{x_, y_} /; y < 0 -> {x, y + 2 π}, {i, 1, Length@pts}]; // AbsoluteTiming
(* {8.59775, Null} *)
</code></pre>
<p>I am running my code on Mac OS X with a Core i7 processor and 8 GB of RAM.</p>
| Searke | 144 | <p>Most of your time is spent in defining PolarCoords. Let's take a look at your code. </p>
<p>It looks like you've tried to optimize it already. Let's try to simplify it first:</p>
<pre><code>PolarCoords =
Map[Function[i,
ToPolarCoordinates /@
newCoord[[i]] /. {x_, y_} /; y < 0 -> {x, y + 2 \[Pi]}],
Range@Length@pts]
</code></pre>
<p>Simpler:</p>
<pre><code>PolarCoords =
Map[Function[i, ToPolarCoordinates /@ newCoord[[i]]],
Range@Length@pts] /. {x_, y_} /; y < 0 -> {x, y + 2 \[Pi]}
</code></pre>
<p>Simpler:</p>
<pre><code>PolarCoords =
Map[ToPolarCoordinates, newCoord] /. {x_, y_} /; y < 0 -> {x, y + 2 \[Pi]}
</code></pre>
<p>And even simpler since ToPolarCoodrindates doesn't need to be mapped:</p>
<pre><code>PolarCoords =
ToPolarCoordinates[newCoord] /. {x_, y_} /; y < 0 :> {x, y + 2 \[Pi]}
</code></pre>
<p>This code above is helpful. It helps me understand what the line is supposed to do. You wanted to first run ToPolarCoordinates over every point and then shift them right by 2Pi if y is negative. </p>
<p>In large vectorized computations, conditionals can be a problem. You can replace the condition with:</p>
<pre><code>{x_, y_} :> {x, Mod[y, 2 Pi]}
</code></pre>
<p>Which does what I think you intended. So if I wanted to make this run fast, I would probably combine the two operations into one function:</p>
<pre><code>f[{x_, y_}] := {Sqrt[x^2 + y^2], Mod[ArcTan[x, y], 2 Pi]}
g[coord_] := Map[f, coord, {2}]
</code></pre>
<p>And then I could either use Compile to make a compiled version of g or use ParallelMap. </p>
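The `Mod` trick above is easy to confirm outside Mathematica, e.g. with Python's `math.atan2`, which also returns angles in $(-\pi,\pi]$ (an illustrative sketch, not part of the original answer): adding $2\pi$ to negative angles is the same as taking the angle mod $2\pi$.

```python
import math
import random

random.seed(1)
for _ in range(1000):
    px, py = random.uniform(-1, 1), random.uniform(-1, 1)
    theta = math.atan2(py, px)                       # in (-pi, pi], like ToPolarCoordinates
    shifted = theta + 2 * math.pi if theta < 0 else theta
    assert abs(shifted - theta % (2 * math.pi)) < 1e-12
```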
|
2,007,403 | <p>Determine the convergence or divergence of </p>
<p>$$\sqrt[n]{n!}$$</p>
<p>I was trying to use the property $\lim_{n\to\infty}\sqrt[n]{n}=1$; maybe I can write this:</p>
<p>$\lim_{n\to\infty}\sqrt[n]{n!}=\lim_{n\to\infty}\sqrt[n]{n \ \times(n-1)\times(n-2)\times(n-3)\cdots2\ \times \ 1}=\lim_{n\to\infty}\sqrt[n]n \ \times\sqrt[n]{(n-1)}\times\sqrt[n]{(n-2)}\times\sqrt[n]{(n-3)}\cdots\sqrt[n]{2}\ \times \sqrt[n] 1= 1\times1\times1\times1\times1\times\cdots1\times1\times1=1$</p>
<p>Am I right?</p>
| Claude Leibovici | 82,404 | <p>Whenever I see a factorial somewhere in a problem of limits, I automatically thing about <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow noreferrer">Stirling approximation</a></p>
<p>$$n!\sim \sqrt{2\pi n}\,\left(\frac n e \right)^n$$ $$\sqrt[n]{n!}\sim (2 \pi n)^{\frac 1{2n}}\,\frac n e $$ So, for large $n$ $$\sqrt[n]{n!}\sim \frac n e $$</p>
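A quick numerical confirmation (an illustrative Python sketch; the helper name is ad hoc): computing $\sqrt[n]{n!}$ as $\exp\!\left(\frac1n\sum_{k=1}^n\ln k\right)$ to avoid overflow, the ratio to $n/e$ is already within $10^{-3}$ of $1$ at $n=10^4$.

```python
import math

def nth_root_of_factorial(n):
    # exp((1/n) * sum(log k)) avoids forming the huge integer n!
    return math.exp(sum(math.log(k) for k in range(1, n + 1)) / n)

n = 10_000
ratio = nth_root_of_factorial(n) / (n / math.e)
assert abs(ratio - 1) < 1e-3    # Stirling: ratio = (2*pi*n)^(1/(2n)) -> 1
```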
|
428,843 | <p>Consider the lines in the image below:</p>
<p><img src="https://i.stack.imgur.com/AWmrd.png" alt="enter image description here"></p>
<p>Given a set of arbitrary points $p1$ and $p2$ where the direction of travel is from the former to the latter, I want to be able to draw directional arrow marks as in the image above.</p>
<p>I got as far as calculating the mid-points of the lines but could not figure out how to cater to various combinations of $x1<x2$, $x1>x2$, etc. Is there a direct way to calculate these points? <strong>EDIT</strong>: By direct, I mean in one step without conditioning of where the points lie with respect to each other.</p>
<p>$f1(p1, p2) = $ get the line coordinates of the left directional marker.<br>
$f2(p1, p2) = $ get the line coordinates of the right directional marker.</p>
| DonAntonio | 31,254 | <p>Hints: Induction on</p>
<p>$$\bullet\;\;x_n<x_{n+1}\iff 1+\sqrt{2+\sqrt{3\ldots+\sqrt n}}<1+\sqrt{2+\sqrt{3+\ldots+\sqrt{n+\sqrt{n+1}}}}\iff$$</p>
<p>$$2+\sqrt{3+\ldots+\sqrt n}<2+\sqrt{3+\ldots\sqrt{n+1}}\iff\ldots$$</p>
<p>$$\bullet\bullet\;x_{n+1}^2=1+\sqrt{2+\sqrt{3+\ldots+\sqrt{n+1}}}\le 1+\sqrt2\left(\sqrt{1+\sqrt{2+\ldots+\sqrt n}}\right)=1+\sqrt2\,x_n$$</p>
<p>$$\iff\left(\sqrt{2+\sqrt{3+\ldots+\sqrt{n+1}}}\right)\le\sqrt{2+2\sqrt{2+\ldots+\sqrt n}}\iff\ldots$$</p>
<p>For (c) you're already done with (a)-(b) since then you have a monotone ascending sequence bounded from above, so the sequence's limit equals its supremum...</p>
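These hints are easy to check numerically (an illustrative Python sketch): building $x_n=\sqrt{1+\sqrt{2+\cdots+\sqrt n}}$ from the inside out, the sequence is increasing and stays below $2$, approaching a limit near $1.7579$.

```python
import math

def x(n):
    # x_n = sqrt(1 + sqrt(2 + ... + sqrt(n))), built from the inside out
    v = 0.0
    for k in range(n, 0, -1):
        v = math.sqrt(k + v)
    return v

vals = [x(n) for n in range(1, 16)]
assert all(u < w for u, w in zip(vals, vals[1:]))   # (a): strictly increasing
assert all(v < 2 for v in vals)                     # bounded above by 2
```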
|
428,843 | <p>Consider the lines in the image below:</p>
<p><img src="https://i.stack.imgur.com/AWmrd.png" alt="enter image description here"></p>
<p>Given a set of arbitrary points $p1$ and $p2$ where the direction of travel is from the former to the latter, I want to be able to draw directional arrow marks as in the image above.</p>
<p>I got as far as calculating the mid-points of the lines but could not figure out how to cater to various combinations of $x1<x2$, $x1>x2$, etc. Is there a direct way to calculate these points? <strong>EDIT</strong>: By direct, I mean in one step without conditioning of where the points lie with respect to each other.</p>
<p>$f1(p1, p2) = $ get the line coordinates of the left directional marker.<br>
$f2(p1, p2) = $ get the line coordinates of the right directional marker.</p>
| Kevin Pardede | 82,064 | <p>This question is 10 days old, but here is another approach.</p>
<p>a) It is already clear that $ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} < \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n+1}}}}$ , because at the innermost level $\sqrt{n} < \sqrt{n+\sqrt{n+1}}$, which is trivial.<br>
My point here is to give some remarks about b) and c); for me it is better to do c) first. We look for a bound of the form $$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} < \sqrt{p+\sqrt{p+\sqrt{p+ ... }}} $$
for a suitable integer $p \geq 2$; $p=1$ is too small, because it is trivial that
$$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} > \sqrt{1+\sqrt{1+\sqrt{1+ ... }}} $$
Let $x=\sqrt{2+\sqrt{2+\sqrt{2+ ... }}}$, then $x^2=2+ \sqrt{2+\sqrt{2+\sqrt{2+ ... }}} \rightarrow x^2-x-2=0 $, thus $x=2$, because $x>0$.
Now let's prove this inequality:
$$\sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} \leq \sqrt{2+\sqrt{2+\sqrt{2+ ... }}}=2 \tag{1}$$
For $x_{n}$ to exceed $2$, we would need $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \geq 3$;
but squaring both sides of (1) and subtracting $1$ gives $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \leq 3$. <br>
For (b), first square both sides so that the '1' disappears, then square again so that the '2' disappears; we arrive at this inequality:
$$\sqrt{3+\sqrt{4+...\sqrt{n}}} \leq 2\sqrt{2+\sqrt{3+...\sqrt{n}}}$$
which is true: writing $u=\sqrt{3+\sqrt{4+...\sqrt{n}}}$ for the left side, the right side equals $2\sqrt{2+u}$, and $u^2=3+\sqrt{4+...+\sqrt{n}}\leq 8$ (since $\sqrt{k+\sqrt{(k+1)+...+\sqrt{n}}}\leq k+1$ for every $k$, by downward induction from $\sqrt{n}\leq n+1$), so $u^2\leq 8\leq 8+4u$, i.e. $u\leq 2\sqrt{2+u}$.
<br> In fact, if you can prove (b) then (c) is trivial and vice versa.</p>
|
69,476 | <p>Hello everybody !</p>
<p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that if the result is exact this may not be the computationally fastest way possible to do it.</p>
<p>Hence, the following problem : if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it ?</p>
<p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiply them by their corresponding factor.</p>
<p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p>
<p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start ! I had no idea how to look for the corresponding litterature, and so I am asking for your help on this one :-)</p>
<p>Thank you !</p>
<p>Nathann</p>
<p>P.S. : <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
| Gerhard Paseman | 3,206 | <p>The cheapest way of finding the value of a polynomial, given unlimited preprocessing resources, is to look up the precalculated value in a table. However, if you know you are going to need several more values evaluated at successive intervals, you might try a method similar to that desired by Charles Babbage: differences. Namely, store the value and the n successive finite differences (of orders 1 through n, similar to evaluations of derivatives) at the point x, and then use n additions to derive the differences and value for the polynomial at the point x+1. If you need to loop through to evaluate the polynomial at successive integers, this gets those values with O(n) additions per evaluation point.</p>
<p>(Of course, needing random or real access to the polynomial will require something different, but you might find storing values of derivatives useful for evaluating the polynomial at nearby points, especially if multiplication is expensive.)</p>
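The difference scheme sketched above can be made concrete as follows (an illustrative Python sketch; names are ad hoc): precompute $p(x_0)$ and its forward differences once, then each further value $p(x_0+1), p(x_0+2),\dots$ costs only $\deg p$ additions.

```python
def poly_eval(coeffs, x):
    # Horner evaluation; coeffs = [a_0, a_1, ..., a_n] for a_0 + a_1 x + ... + a_n x^n
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def difference_table(coeffs, x0):
    # [p(x0), delta p(x0), delta^2 p(x0), ...] -- the state a difference engine holds
    d = len(coeffs) - 1
    col = [poly_eval(coeffs, x0 + k) for k in range(d + 1)]
    table = [col[0]]
    for _ in range(d):
        col = [b - a for a, b in zip(col, col[1:])]
        table.append(col[0])
    return table

coeffs = [3, -1, 0, 2]        # p(x) = 2x^3 - x + 3
state = difference_table(coeffs, 0)
values = []
for k in range(10):           # p(0), p(1), ..., p(9) using additions only
    values.append(state[0])
    for i in range(len(state) - 1):
        state[i] += state[i + 1]
assert values == [poly_eval(coeffs, k) for k in range(10)]
```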
<p>Gerhard "Email Me About System Design" Paseman, 2011.07.04 </p>
|
3,226,028 | <h2>Problem</h2>
<p>I want to know how to solve the differential equation
<span class="math-container">$$ \dot{x} + a\cdot x - b\cdot \sqrt{x} = 0 $$</span> for <span class="math-container">$a>0$</span> and both situations: for <span class="math-container">$b > 0$</span> and <span class="math-container">$b < 0$</span>. </p>
<h2>My work</h2>
<p>One can separate the variables to obtain:
<span class="math-container">$$ \frac{dx}{b\cdot \sqrt{x} - a\cdot x} = dt$$</span> but I do not know how to proceed ...
<a href="https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0</a> it seems to have an explicit solution ... </p>
<h2>Context</h2>
<p>This problem occurs in the following context:
<span class="math-container">$$ \ddot{X} + a \cdot \dot{X} = f(X)$$</span> then multiplying both sides with <span class="math-container">$2\dot{X}^T$</span> one obtains:
<span class="math-container">$$ (\dot{X}^T\dot{X})' + 2a\cdot \dot{X}^T\dot{X} = 2\dot{X}^T f(X)$$</span> Let <span class="math-container">$v= \dot{X}^T \dot{X}$</span> and the above differential equation arises ... </p>
| eyeballfrog | 395,748 | <p>Some caution has to be taken here because of the presence of the <span class="math-container">$\sqrt{}$</span> function--we need to keep track of signs carefully. We know that <span class="math-container">$x(t) \ge 0$</span> everywhere, so there is a function <span class="math-container">$u$</span> such that <span class="math-container">$u \ge 0$</span> and <span class="math-container">$x = u^2$</span>. Additionally, we can scale <span class="math-container">$u$</span> and <span class="math-container">$t$</span> to eliminate the constants. The proper choice for this is <span class="math-container">$x(t) = (b/a)^2 u(a t/2)^2$</span>, giving
<span class="math-container">$$
u\, [u'+u - \mathrm{sgn}(b)] = 0
$$</span>
which has two solutions: <span class="math-container">$u(s)= 0$</span> and <span class="math-container">$u(s) = \mathrm{sgn}(b)+Ce^{-s}$</span>. For this second solution, <span class="math-container">$x(t) = (b/a)^2 [\mathrm{sgn}(b)+Ce^{-s}]^2$</span>. Factoring <span class="math-container">$|b|/a$</span> into the brackets gives
<span class="math-container">$$
x(t) = \left(\frac{b}{a} + Ce^{-at/2}\right)^2 \;\;\;\;\mathrm{or}\;\;\; x(t) = 0
$$</span>
Except, there's one little problem here. We required <span class="math-container">$u\ge 0$</span>, but that term in parentheses, which has the same sign as <span class="math-container">$u$</span>, could be negative. The key is to note that for both solutions, when <span class="math-container">$x(t) = 0$</span>, <span class="math-container">$x'(t)$</span> is also <span class="math-container">$0$</span>. This allows them to be spliced together at that point and still be continuously differentiable. Thus, the general solution is
<span class="math-container">$$
x(t) = \left[\max \left(\frac{b}{a} + Ce^{-at/2},0\right)\right]^2
$$</span>
for an arbitrary real constant <span class="math-container">$C$</span>.</p>
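A numerical residual check of the non-trivial branch (an illustrative Python sketch, using a central difference for $\dot x$; the sample parameter values are arbitrary): with $a>0$, $b>0$ and $C$ chosen so that $\frac ba+Ce^{-at/2}>0$, the expression $\dot x+ax-b\sqrt x$ vanishes up to rounding error.

```python
import math

a, b, C = 1.5, 2.0, 0.5   # sample values; b/a + C e^{-a t/2} stays positive for t >= 0

def x(t):
    return (b / a + C * math.exp(-a * t / 2)) ** 2

h = 1e-6
for t in [0.0, 0.5, 1.0, 3.0, 10.0]:
    xdot = (x(t + h) - x(t - h)) / (2 * h)        # central difference
    residual = xdot + a * x(t) - b * math.sqrt(x(t))
    assert abs(residual) < 1e-6
```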
|
31,308 | <p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p>
<p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions: $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p>
<p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)| = 0.$</p>

<p>What I would like to know is why it is necessary or important for test functions to decay rapidly in this manner, i.e. faster than the reciprocal of any polynomial. I'd appreciate an explanation of the intuition behind this condition and, if possible, a simple example.</p>
<p>Thanks.</p>
<p>EDIT: the OP is actually interested in a particular 1994 paper on "Spatial Statistics" by
Kent and Mardia,
1994 Link between kriging and thin plate splines (with J. T. Kent). In Probability, Statistics and Optimization (F. P. Kelly ed.). Wiley, New York, pp 325-339.</p>
<p>Both are in Statistics at Leeds,</p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p>
<p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p>
<p>Scanned article:
<a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p>
<p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields to the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. is of the form $\frac{1}{\omega^2}.$ </p>
<p>LATER, ALSO FROM THE OP:
I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
| Helge | 3,983 | <p>The Schwartz space $\mathcal{S}$ is just one of the spaces one could use to define distributions. Two other common examples are the smooth functions $C^{\infty}$ and the smooth functions with compact support $C^{\infty} _c$. Then one has the inclusions
$$
C^{\infty} _{c} \subseteq \mathcal{S} \subseteq C^{\infty}
$$
Now distributions are obtained by taking the topological dual of these spaces, so one then has
$$
(C^{\infty})' \subseteq \mathcal{S}' \subseteq (C^{\infty} _{c})'
$$
So the inclusions get reversed: imposing a less restrictive decay condition on the test functions would lead you to a smaller space of distributions. In fact, $(C^{\infty})'$ consists of the distributions of compact support.</p>
<p>The other issue, mentioned in the other posts, is that the Fourier transform takes the Schwartz space into itself. It is much less obvious what the Fourier transform does on $C_c^{\infty}$, and the Fourier transform is not even defined on all of $C^{\infty}$.</p>
|
31,308 | <p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p>
<p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions: $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p>
<p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)| = 0.$</p>

<p>What I would like to know is why it is necessary or important for test functions to decay rapidly in this manner, i.e. faster than the reciprocal of any polynomial. I'd appreciate an explanation of the intuition behind this condition and, if possible, a simple example.</p>
<p>Thanks.</p>
<p>EDIT: the OP is actually interested in a particular 1994 paper on "Spatial Statistics" by
Kent and Mardia,
1994 Link between kriging and thin plate splines (with J. T. Kent). In Probability, Statistics and Optimization (F. P. Kelly ed.). Wiley, New York, pp 325-339.</p>
<p>Both are in Statistics at Leeds,</p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p>
<p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p>
<p>Scanned article:
<a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p>
<p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields to the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. is of the form $\frac{1}{\omega^2}.$ </p>
<p>LATER, ALSO FROM THE OP:
I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
| Zen Harper | 6,651 | <p>If you want to extend <strong>differentiation</strong> to all continuous functions, then (provided you have some convenient mathematical properties of the extension) you are FORCED to use distributions or roughly equivalent things; you have no choice! Similarly, to extend <strong>the Fourier transform</strong> you are forced to consider <em>tempered distributions</em>.</p>
<p>Speaking as a pure mathematician: the main purpose of general distributions is to extend <em>differentiation</em>, not integration (since integration makes things nicer; it is differentiation which is the nastier operation). They are fine as long as you aren't using the Fourier transform.</p>
<p>Thus, every locally integrable function can be regarded as a distribution, and therefore differentiated; so, when you're considering differential equations, this might be all you need (you don't have to worry whether the functions are differentiable or not, because distributions always are). You find distribution solutions, then try to prove that they're actually functions.</p>
<p>It's similar to solving polynomial equations by using complex numbers; even if all the roots are real, it's still sometimes easier to solve them with complex numbers, then try to prove they're real (e.g. by showing they're self-conjugate).</p>
<p>However, if you want to do <em>Fourier Transforms</em> then you have to consider <em>tempered distributions</em> (or Schwartz distributions), since general distributions are sometimes too nasty to have Fourier transforms.</p>
<p>Note that even genuine locally integrable functions need not represent tempered distributions, so general distributions are not appropriate for Fourier transforms even when you only want to consider functions.</p>
<p>But <em>Fourier inversion</em> works perfectly for tempered distributions, no further restrictions are needed, unlike, say, $L^1$. If $f \in L^1$ then $\widehat{f}$ is usually not in $L^1$, so you can't do Fourier inversion theory nicely on $L^1$ (you would have to <strong>assume</strong> that also $\widehat{f} \in L^1$, which is often not true!)</p>
<p>Extension in mathematics is very powerful; when you don't have to worry about restrictions and annoying details, it is easier! For example, <strong>complex numbers are easier than real numbers</strong>, <strong>complex analysis is easier than real analysis</strong>, and <strong>Lebesgue integration is easier than Riemann integration</strong>!! Students never believe this, but it's true if you actually want to <strong>use</strong> it (rather than do toy problems in books)...</p>
|
31,308 | <p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p>
<p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions: $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p>
<p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)| = 0.$</p>

<p>What I would like to know is why it is necessary or important for test functions to decay rapidly in this manner, i.e. faster than the reciprocal of any polynomial. I'd appreciate an explanation of the intuition behind this condition and, if possible, a simple example.</p>
<p>Thanks.</p>
<p>EDIT: the OP is actually interested in a particular 1994 paper on "Spatial Statistics" by
Kent and Mardia,
1994 Link between kriging and thin plate splines (with J. T. Kent). In Probability, Statistics and Optimization (F. P. Kelly ed.). Wiley, New York, pp 325-339.</p>
<p>Both are in Statistics at Leeds,</p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p>
<p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p>
<p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p>
<p>Scanned article:
<a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p>
<p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields to the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. is of the form $\frac{1}{\omega^2}.$ </p>
<p>LATER, ALSO FROM THE OP:
I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
| Peter Luthy | 5,751 | <p>While I'm not saying anything new, I feel the responses thus far either miss the point or are not very complete. Generalized functions (aka distributions) are defined as linear functionals on some class of functions, typically referred to as test functions. To begin with, one usually wants any locally integrable function to be a generalized function. If $f$ is any locally integrable function then the generalized function corresponding to $f$ is just the linear functional $\int f\phi$ when $\phi$ is a test function. So the obvious first choice for the space of test functions is the space of compactly supported functions since integrating a locally integrable function against a smooth compactly supported function always makes sense. Then one can define the derivative of a generalized function, say T, to be the functional T' which satisfies $T'(\phi)=-T(d\phi/dx)$ whenever $\phi$ is a smooth, compactly supported function. If T can be represented by a smooth function, then this is just the integration by parts formula, which makes sense since $\phi$ is compactly supported. So the function $e^{e^{e^x}}$ is a perfectly reasonable generalized function in this case.</p>
<p>As said a number of times above, one would also like to define the Fourier transform of a generalized function via the formula $\hat{T}(\phi)=T(\hat{\phi})$. The problem with the space of compactly supported functions is that <em>the Fourier transform of a nonzero compactly supported function is <strong>never</strong> compactly supported.</em> So $T(\hat{\phi})$ might not make sense if T is allowed to be any locally integrable function. In particular, suppose that $\phi$ was some smooth function of compact support whose Fourier transform goes to zero slower than something like $e^{-x^{10}}$. The function $e^{x^{11}}$ is locally integrable and hence a linear functional on the space of compactly supported smooth functions, but it is easy to see that $\int e^{x^{11}}\hat{\phi}$ isn't going to be a finite number.</p>
<p>The Schwartz space is nice because the Fourier transform of a Schwartz function is a Schwartz function. So given any linear functional T on the Schwartz space (such a T is called a tempered distribution), one can define the Fourier transform $\hat{T}$ of $T$ via the formula $\hat{T}(\phi)=T(\hat{\phi})$ when $\phi$ is a Schwartz function. This formula will always make sense when T is a tempered distribution.</p>
|
856,334 | <p>The problem is based on this picture. <img src="https://i.stack.imgur.com/51fmQ.jpg" alt="enter image description here"></p>
<p>At the beginning, i.e. $t=0$, there is a circle whose center $P$ is at the point $(0,r)$, and $r_0=1$ is its initial radius. $AB$ is a vector making an angle $\theta$ with the $x$ axis, and this vector $AB$ always lies on the tangent of the circle (the side close to $O$). We suppose that $A_0=(x_0,y_0)$ and that the initial value of $\theta$ is $\theta_0$; we also have $|AP|=l=2$. The angle between $PA$ and the $x$ axis is denoted $\varphi$.
The vector $AB$ moves along the tangent of the circle with velocity $-x$, so $A$ gets closer and closer to $O$. When the distance $AP=l<2r$, the radius becomes smaller, and we then have a new $P$ and $r$, where the new $r$ satisfies $AP=2r$. </p>
<p>The question is: when $A\to O$ and $r\to 0$, what is the limit of $\theta$?</p>
<p>I tried to use a differential method to get the result, but couldn't make it work. I know that the angle $\angle PAQ$ is always $\pi/6$ and $\dot{l}=2\dot{r}$. I don't know how to continue. Thanks very much for your help.</p>
| JJacquelin | 108,514 | <p>Starting from the beautiful result of MvG, it is possible to integrate $$
y'=\frac{\mathrm dy}{\mathrm dx}=
-\frac{3x+\sqrt3\left(4y-\sqrt{3x^2 + 4y^2}\right)}
{-3\sqrt{3} x + 4y - \sqrt{3x^2 + 4y^2}}
$$
and finally obtain the equivalent $-2\sqrt{3}\,\frac{x}{\ln(x)}$ for the function $y(x)$. This proves that $y(x)$ tends to $0$ as $x$ tends to $0$.</p>
<p><img src="https://i.stack.imgur.com/BbrIP.jpg" alt="enter image description here"></p>
<p>The preceeding result leads to the equation of the family of curves expressed on parametric form (the parameter is $z$).</p>
<p>This shows that all the curves are homothetic. The factor $C$ is determined by the initial point. So, it is sufficient to draw only one curve (for example $C=1$) to show the behaviour of all. </p>
<p>For small values of the parameter $z$, the point $(x,y)$ is closed to $0$. In order to clearly see the decreassing of $\theta$ while $z$ tends to $0$, it is necessary to go to very small values. This is due to the logarithmic term in the equivalent function. This is clear below on the several graphs with different magnitudes.</p>
<p><img src="https://i.stack.imgur.com/4DYgz.jpg" alt="enter image description here"></p>
<p>When $z$ tends to infinity, $x$ tends to $0$ and $y$ tends to $y_{max} = e^{-2+\Re(\tanh^{-1}(2) ) } $ approximately $= 0.234408$ where $\Re(\tanh^{-1}(2))$ is the real part of $\tanh^{-1}(2)$ which is complex.</p>
|
191,175 | <p>How to calculate the limit of $(n+1)^{\frac{1}{n}}$ as $n\to\infty$?</p>
<p>I know how to prove that $n^{\frac{1}{n}}\to 1$ and $n^{\frac{1}{n}}<(n+1)^{\frac{1}{n}}$. What is the other inequality that might solve the problem?</p>
| André Nicolas | 6,312 | <p>What about $n^{1/n}\lt (n+1)^{1/n}\le (2n)^{1/n}=2^{1/n}n^{1/n}$, then squeezing.</p>
<p>Or else, for $n \ge 2$,
$$n^{1/n}\lt (n+1)^{1/n}\lt (n^2)^{1/n}=(n^{1/n})(n^{1/n}).$$
Then we don't have to worry about $2^{1/n}$.</p>
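Numerically (an illustrative Python sketch), the squeeze $n^{1/n}<(n+1)^{1/n}\le 2^{1/n}\,n^{1/n}$ is visible already at $n=10^6$, with all three quantities heading to $1$.

```python
n = 1_000_000
lower = n ** (1 / n)
middle = (n + 1) ** (1 / n)
upper = 2 ** (1 / n) * n ** (1 / n)
assert lower < middle <= upper     # the squeeze
assert abs(middle - 1) < 1e-4      # all three are already very close to 1
```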
|
3,296,122 | <p>I was given a problem in which a matrix <span class="math-container">$A$</span> was specified along with its determinant value, and I was asked to find the determinant of another matrix <span class="math-container">$B$</span> whose entries were scalar multiples of those of <span class="math-container">$A$</span>. What I did was factor the scalar out of the matrix and substitute the value of the determinant of <span class="math-container">$A$</span>. However, to my great surprise, I found that the scalar had to be factored out of each row, so that it gets raised to the power of the number of rows before the determinant operation is applied. This took me aback and left me absolutely confused. I am looking for a clear-cut (yet simple) explanation of this.</p>
| Kraigolas | 655,232 | <p>As a brief example, consider </p>
<p><span class="math-container">$$\begin{align}A&=\begin{pmatrix}a & b\\ c& d\end{pmatrix}\\[10 pt]
\det(A)&=ad-bc \\[10pt]
B&=nA \\[10 pt]
&=\begin{pmatrix}na & nb\\ nc& nd\end{pmatrix} \\[10 pt]
\det(B)&=(na)(nd)-(nb)(nc) \\[10pt]
&=n^2ad-n^2bc\\[10 pt]
&=n^2(ad-bc)\\[10 pt]
&=n^2\det(A)
\end{align}
$$</span>
In general this is true: for a <span class="math-container">$p\times p$</span> matrix, <span class="math-container">$\det(nA)=n^p\det(A)$</span>. This is because we multiply together <span class="math-container">$p$</span> entries of the matrix in each term when computing the determinant, and thus we pick up <span class="math-container">$p$</span> copies of <span class="math-container">$n$</span>. I suggest playing around with this idea if you are still confused. 
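The same scaling can be confirmed on a $3\times 3$ example (an illustrative Python sketch using the Leibniz permutation formula, which is fine for small matrices; the sample matrix is arbitrary):

```python
from itertools import permutations

def det(M):
    # Leibniz permutation formula; fine for the small matrices used here
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

A = [[2, -1, 3],
     [0, 4, 1],
     [5, 2, -2]]
n_scalar, p = 3, len(A)
B = [[n_scalar * entry for entry in row] for row in A]
assert det(B) == n_scalar ** p * det(A)   # det(nA) = n^p det(A)
```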
|
1,092,091 | <p>I wonder how I can calculate the distance between two points in a $3D$ coordinate system.
I've read about <em><a href="http://www.purplemath.com/modules/distform.htm" rel="nofollow">the distance formula</a></em>:</p>
<p>$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$</p>
<p>(How) can I use that in $3D$ coordinates, or is there another method?</p>
<p>Thanks!</p>
| amWhy | 9,003 | <p>In three dimensions, the distance between two points (which are each triples of the form $(x, y, z))$ is given by $$d= \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}$$</p>
|
2,280,243 | <blockquote>
<p>A tribonacci sequence is a sequence of numbers such that each term from the fourth onward is the sum of the previous three terms. The first three terms in a tribonacci sequence are called its <em>seeds</em>. For example, if the three seeds of a tribonacci sequence are $1, 2$, and $3$, its 4th term is $6$<br>
($1+2+3$), and its 5th term is $11$ ($2+3+6$).</p>
</blockquote>
<p>Find the smallest 5-digit term in a tribonacci sequence if the seeds are $6,19,22$.</p>
<p>I'm having trouble with this. I don't know where to start. The formula for the tribonacci sequence in relation to its seeds is $$u_{n+3} = u_{n} + u_{n+1} + u_{n+2}$$
This tribonacci formula holds for all integers $n$. But that's all I know how to work out. In case it helps, the next few numbers in the sequence mentioned in the question are $47,88,157,292$. Is there some shortcut, because I need to show some working, and two pages full of addition doesn't sound very easy to mark, does it?</p>
| Claude Leibovici | 82,404 | <p>In <a href="http://www.kurims.kyoto-u.ac.jp/EMIS/journals/GMN/yahoo_site_admin/assets/docs/8_GMN-3532-V17N1.243113959.pdf" rel="nofollow noreferrer">this paper</a>, the authors show that, if $S_1,S_2,S_3$ are the seeds, then </p>
<p>$$S_n=T_{n-2}\,S_1+(T_{n-2}+T_{n-3})\,S_2+T_{n-1}\,S_3$$ where $T_k$ is the "usual" Tribonacci number.</p>
<p>Applied to the seeds you give, this generates the following values
$$\left(
\begin{array}{ccc}
n & T_n & S_n \\
1 & 0 & 6 \\
2 & 1 & 19 \\
3 & 1 & 22 \\
4 & 2 & 47 \\
5 & 4 & 88 \\
6 & 7 & 157 \\
7 & 13 & 292 \\
8 & 24 & 537 \\
9 & 44 & 986 \\
10 & 81 & 1815
\end{array}
\right)$$</p>
<p>Hoping that this could help. Just continue for a few terms to get the answer.</p>
<p><strong>Edit</strong></p>
<p>In <a href="http://nntdm.net/papers/nntdm-21/NNTDM-21-1-67-69.pdf" rel="nofollow noreferrer">this paper</a>, the author shows that $$\lim_{n\to \infty } \, \frac{S_{n+1}}{S_{n}}=\lim_{n\to \infty } \, \frac{T_{n+1}}{T_{n}}=\tau=\frac{1}{3} \left(1+\sqrt[3]{19-3 \sqrt{33}}+\sqrt[3]{19+3 \sqrt{33}}\right)$$ which is $\approx 1.83929$. This could also help you to find your result.</p>
<p>Using the last term you provided, making the approximation $S_n=\text{Round}\left[292 \tau ^{n-7}\right]$, the next terms would be $537, 988, 1817$.</p>
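<p>For completeness, a small Python sketch that mechanically iterates the recurrence until it reaches five digits (the function name and cutoff argument are my own):</p>

```python
def first_term_at_least(seeds, bound=10_000):
    """Iterate the tribonacci recurrence until a term reaches `bound`."""
    a, b, c = seeds
    while a < bound:
        a, b, c = b, c, a + b + c
    return a

# seeds 6, 19, 22: ..., 537, 986, 1815, 3338, 6139, 11292
print(first_term_at_least((6, 19, 22)))  # 11292
```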
|
2,927,079 | <p><a href="https://i.stack.imgur.com/ih7X2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ih7X2.png" alt="enter image description here"></a></p>
<p>In the second paragraph, Munkres assumes that there exists a separation of <span class="math-container">$Y$</span> (in the sense he defined in Lemma 23.1) and proves that <span class="math-container">$Y$</span> is not connected. </p>
<p>So I would think the first paragraph should prove the converse statement: if <span class="math-container">$Y$</span> is not connected, then there is a separation. But in the first paragraph Munkres says "Suppose <span class="math-container">$A$</span> and <span class="math-container">$B$</span> form a separation of <span class="math-container">$Y$</span>". Is it true that he does NOT mean the separation that he defined in the statement of Lemma 23.1? Does he mean "Suppose <span class="math-container">$Y$</span> is not connected"? If so, the next sentence that claims that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are open in <span class="math-container">$Y$</span> makes sense, but otherwise (if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> form a separation in the sense he defined in Lemma 23.1) it doesn't.</p>
| yushang | 764,253 | <p>In my understanding, <span class="math-container">$Y$</span> can be viewed as a subset, and the first paragraph proves that <span class="math-container">$\overline{A}\cap B=\emptyset$</span> and <span class="math-container">$A\cap\overline{B}=\emptyset$</span>. This means <span class="math-container">$A,B$</span> is a pair of separated sets. In <em>Counterexamples in Topology</em>, this condition is taken as the definition of a connected set.</p>
|
1,651,427 | <blockquote>
<p>Let $f$ be a bounded function on $[0,1]$. Assume that for any $x\in[0,1)$, $f(x+)$ exists. Define $g(x)=f(x+)$, $x\in [0,1)$, and $g(1)=f(1)$. Is $g(x)$ right continuous? </p>
</blockquote>
<p>Prove it or give me a counterexample.</p>
<p>My ideas:</p>
<p>$(1)$If $f$ is of bounded variation, then $g$ must be right continuous.</p>
<p>$(2)$If the continuous points of $f$ are dense in $[0,1]$, then $g$ must be right continuous.</p>
<p>But I can not find a counterexample. Please help me. Thanks!</p>
| Andres Mejia | 297,998 | <p>Define $f:[0,1] \to \mathbb{R}$ by</p>
<p>$f(x) = \begin{cases}
\ 1 \textrm{ if $x \in [0,1/2)$} \\
\ -1 \textrm{ if $x \in [1/2,1]$} \\
\end{cases}$</p>
<p>then $g(x)$ is well defined for all $x \in [0,1]$. It is not continuous, despite the fact that $g(1)=f(1)$.</p>
|
52,299 | <p>Hello everybody.</p>
<p>I'm looking for an "easy" example of a (non-zero) holomorphic function $f$ on the unit disc with almost everywhere vanishing radial boundary limits: $\lim\limits_{r \rightarrow 1-} f(re^{i\phi})=0$.</p>
<p>Does anyone know such an example?</p>
<p>Best
CJ</p>
| GH from MO | 11,919 | <p>To complement Andrey Rekalo's response, Lusin's construction was generalized by Bagemihl and Seidel (Math. Zeitschrift 61 (1954), online <a href="http://www.digizeitschriften.de/main/dms/img/?PPN=GDZPPN002384698" rel="nofollow noreferrer">here</a>). See their Corollary 4 whose proof takes about 2 pages, much less than the original one by Lusin-Priwaloff. Of course the proof relies on Mergelyan's famous approximation theorem for which see Section 20.5 in Rudin: Real and Complex Analysis.</p>
<p><strong>EDIT:</strong> Lvriemsurf asked in a comment if we can replace "almost every angle" by "every angle" in the construction. The answer is "no", as follows from the <a href="https://encyclopediaofmath.org/wiki/Luzin-Privalov_theorems" rel="nofollow noreferrer">Lusin-Priwaloff theorems</a>.</p>
|
939,868 | <p>There are a lot math journals with title "acta" includes, for instance, Acta Mathematica, acta arithmetica, etc. Would you explain what "acta" means?</p>
| Claude Leibovici | 82,404 | <p>In Latin, $actum$ means a thing done (a deed or record), and $acta$ is the plural form of $actum$; hence "acta" in journal titles means "proceedings" or "records".</p>
|
2,354,004 | <p>I'm struggling with the following sum:</p>
<p>$$\sum_{n=0}^\infty \frac{n!}{(2n)!}$$</p>
<p>I know that the final result will use the error function, but will not use any other non-elementary functions. I'm fairly sure that it doesn't telescope, and I'm not even sure how to get $\operatorname {erf}$ out of that.</p>
<p>Can somebody please give me a hint? No full answers, please.</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
\sum_{n = 0}^{\infty}{n! \over \pars{2n}!} & =
1 + \sum_{n = 1}^{\infty}{\Gamma\pars{n} \over \pars{n - 1}!}\,
{\Gamma\pars{n + 1} \over \Gamma\pars{2n + 1}}
\\[5mm] & =
1 + \sum_{n = 0}^{\infty}{1 \over n!}\,\ \overbrace{%
{\Gamma\pars{n + 1}\Gamma\pars{n + 2} \over \Gamma\pars{2n + 3}}}
^{\ds{\substack{\ds{=\ \mrm{B}\pars{n + 1,n + 2}.}\\[1mm] \ds{\mrm{B}:\ Beta\ Function}}}}
\\[5mm] & =
1 + \sum_{n = 0}^{\infty}{1 \over n!}\ \overbrace{%
\int_{0}^{1}t^{n}\pars{1 - t}^{n + 1}\,\dd t}^{\ds{\mrm{B}\pars{n + 1,n + 2}}}
\\[5mm] & =
1 + \int_{0}^{1}\pars{1 - t}\sum_{n = 0}^{\infty}
{\bracks{t\pars{1 - t}}^{\,n} \over n!}\,\dd t
\\[5mm] & =
1 + \int_{0}^{1}\pars{1 - t}\expo{t\,\pars{1 - t}}\,\dd t
\\[5mm] & =
1 +
\int_{-1/2}^{1/2}\pars{{1 \over 2} - t}\exp\pars{{1 \over 4} - t^{2}}\,\dd t
\\[5mm] & =
1 + \expo{1/4}\ \overbrace{\int_{0}^{1/2}\expo{-t^{2}}\,\dd t}
^{\ds{{1 \over 2}\,\root{\pi}\,\mrm{erf}\pars{1 \over 2}}}\qquad
\pars{~\mrm{erf}:\ Error\ Function~}
\\[5mm] & =
\bbx{1 + {1 \over 2}\,\root{\pi}\expo{1/4}\,\mrm{erf}\pars{1 \over 2}}
\approx 1.5923
\end{align}</span></p>
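<p>A quick numerical sanity check of this closed form, using Python's standard library (a throwaway sketch, not part of the derivation):</p>

```python
import math

# Partial sum of sum_{n>=0} n!/(2n)!  (terms decay very fast)
s = sum(math.factorial(n) / math.factorial(2 * n) for n in range(30))

# Closed form: 1 + (sqrt(pi)/2) * e^(1/4) * erf(1/2)
closed = 1 + 0.5 * math.sqrt(math.pi) * math.exp(0.25) * math.erf(0.5)

print(s, closed)  # both approximately 1.5923
```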
|
150,180 | <p>I am trying to read Gross's paper on Heegner points, and it seems ambiguous to me at some points:</p>
<p>Gross (page 87) said that $Y=Y_{0}(N)$ is the open modular curve over $\mathbb{Q}$ which classifies ordered pairs $(E,E^{'})$ of elliptic curves together with cyclic isogeny $E\rightarrow E^{'}$ of degree $N$. Gross uses on some steps the cyclic isogeny between two elliptic curves over $\mathbb{C}$. One of the books that I have read to understand the theory of modular curves is "A first course in Modular forms, written by Fred Diamond and Jerry Shurman". </p>
<p>Theorem 1.5.1.(page 38)</p>
<p>Let $N$ be a positive integer. </p>
<p>(a) The moduli space for $\Gamma_{0}(N)$ is
$$S_{0}(N)=\{[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]:\tau\in H\},$$
with $H$ is the upper half plan. Two points $[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]$ and $[E_{\tau^{'}},\langle1/N+\Lambda_{\tau^{'}}\rangle]$ are equal if and only if $\Gamma_{0}(N)\tau=\Gamma_{0}(N)\tau^{'}$. Thus there is a bijection
$$\psi:S_{0}(N)\rightarrow Y_{0}(N), [\mathbb{C}/\Lambda_{\tau},\langle1/N+\Lambda_{\tau}\rangle]\mapsto \Gamma_{0}(N)\tau.$$</p>
<p>So, how can we make the link between the equivalence classes of the enhanced elliptic curves for $\Gamma_{0}(N)$ (= the equivalence classes of $(E,C)$ where $E$ is a complex elliptic curve and $C$ is a cyclic subgroup of $E$ of order $N$) defined in Diamond/Shurman's book and the cyclic isogenies $E\rightarrow E^{'}$ of degree $N$ used by Gross.</p>
<p>I also ask if there is any other paper which explains the theory of Heegner points explicitly? </p>
<p>I have looked at Darmon's notes and the Gross-Zagier paper "Heegner points and derivatives of L-series", and it seems that both were influenced by Gross's paper! Is there any other paper which explains Heegner points explicitly and independently of Gross's paper?</p>
<p>(I keep this post open for any further question about Gross's paper and I apologise for any mistakes in my English.)</p>
<p>Thank you.</p>
| ya-tayr | 29,980 | <p>Mimicking the theory of projective resolutions, try this:</p>
<p>Start with the category whose objects are pairs $(V_1,V_0,d:V_1 \to V_0)$ where the $V_i$ are vector bundles, and whose morphisms are pairs $(f_i:V_i \to W_i)_{i \in \{0,1\}}$ intertwining with $d$. </p>
<p>Now divide each Hom group by the subgroup of maps where $f_0:V_0 \to W_0$ lifts to $H:V_0 \to W_1$. Taking cokernels gives a functor to coherent sheaves, and I think it's an equivalence.</p>
<p>PS: correcting something. I forgot the possibility that, on some non-projective schemes, a coherent sheaf might not admit any surjection from a vector bundle. (Or perhaps, might not admit any nontrivial vector bundles at all.)</p>
|
150,180 | <p>I am trying to read Gross's paper on Heegner points, and it seems ambiguous to me at some points:</p>
<p>Gross (page 87) said that $Y=Y_{0}(N)$ is the open modular curve over $\mathbb{Q}$ which classifies ordered pairs $(E,E^{'})$ of elliptic curves together with cyclic isogeny $E\rightarrow E^{'}$ of degree $N$. Gross uses on some steps the cyclic isogeny between two elliptic curves over $\mathbb{C}$. One of the books that I have read to understand the theory of modular curves is "A first course in Modular forms, written by Fred Diamond and Jerry Shurman". </p>
<p>Theorem 1.5.1.(page 38)</p>
<p>Let $N$ be a positive integer. </p>
<p>(a) The moduli space for $\Gamma_{0}(N)$ is
$$S_{0}(N)=\{[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]:\tau\in H\},$$
with $H$ is the upper half plan. Two points $[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]$ and $[E_{\tau^{'}},\langle1/N+\Lambda_{\tau^{'}}\rangle]$ are equal if and only if $\Gamma_{0}(N)\tau=\Gamma_{0}(N)\tau^{'}$. Thus there is a bijection
$$\psi:S_{0}(N)\rightarrow Y_{0}(N), [\mathbb{C}/\Lambda_{\tau},\langle1/N+\Lambda_{\tau}\rangle]\mapsto \Gamma_{0}(N)\tau.$$</p>
<p>So, how can we make the link between the equivalence classes of the enhanced elliptic curves for $\Gamma_{0}(N)$ (= the equivalence classes of $(E,C)$ where $E$ is a complex elliptic curve and $C$ is a cyclic subgroup of $E$ of order $N$) defined in Diamond/Shurman's book and the cyclic isogenies $E\rightarrow E^{'}$ of degree $N$ used by Gross.</p>
<p>I also ask if there is any other paper which explains the theory of Heegner points explicitly? </p>
<p>I have looked at Darmon's notes and the Gross-Zagier paper "Heegner points and derivatives of L-series", and it seems that both were influenced by Gross's paper! Is there any other paper which explains Heegner points explicitly and independently of Gross's paper?</p>
<p>(I keep this post open for any further question about Gross's paper and I apologise for any mistakes in my English.)</p>
<p>Thank you.</p>
| Daniel Schäppi | 1,649 | <p>Here are a few comments that might be useful. I don't think there is a chance that this can work unless the scheme in question has the resolution property (meaning every coherent sheaf is a quotient of a locally free sheaf of finite rank). Otherwise the category of locally free sheaves does not even form a generator of the category of all quasi-coherent sheaves, so it clearly contains more information.</p>
<p>Secondly, Quiaochu Yuan's construction for the affine case does not work globally for most schemes. What he does is indeed freely adding cokernels (to get to coherent sheaves) or freely adding all colimits (to get to quasi-coherent sheaves). The free cocompletion under all colimits of an additive category is given by taking the category of additive presheaves on it. (The free cocompletion under cokernels is simply the closure of the representables under cokernels.) So, if we do that to the category of vector bundles on a scheme, we obtain a category of presheaves. However, any category of presheaves has a projective generator, while the category of quasi-coherent sheaves rarely does.</p>
<p>Finally, something positive that can be said: If you do assume that your scheme satisfies the resolution property (and I'll assume it is quasi-compact, not sure if that's necessary), then the full subcategory of vector bundles is a dense subcategory of the category of quasi-coherent sheaves. This is actually a quite amazing result: in a Grothendieck abelian category, any strong generator is dense, see</p>
<p>Brian Day and Ross Street, Categories in which all strong generators are dense, J. Pure Appl. Algebra 43 (1986), no. 3, 235–242. MR 868984</p>
<p>Thus we know that the category of quasi-coherent sheaves is a reflective subcategory of the free cocompletion of the category of vector bundles. Any reflective subcategory is the localization of the surrounding category at the morphisms that the reflector inverts (that is, it can be obtained by formally inverting a class of morphisms). Since we're dealing with a locally finitely presentable category, this can be further reduced to inverting a generating set of these morphisms. In some sense this says that the category of quasi-coherent sheaves can be obtained by first freely adding colimits, and then imposing some relations (formally turn a certain set of morphisms into isomorphisms).</p>
<p>It seems however rather difficult to get an explicit such set of morphisms in general.</p>
<p><strong>Edit:</strong> I noticed that you're also interested in algebraic spaces and algebraic stacks. The above argument about the category of quasi-coherent sheaves also works at that level of generality as long as the resolution property holds. Specifically, if you have a quasi-compact stack $X$ on the fpqc-site of affine schemes which has the resolution property (in algebraic topology these are sometimes called Adams stacks, since they are precisely the stacks associated to Adams Hopf algebroids), then the category of quasi-coherent sheaves on $X$ is given by a localization of the free cocompletion of the category of dualizable quasi-coherent sheaves on $X$ at a set of morphisms.</p>
<p>Note that it is not clear from this argument whether or not this set of morphisms is entirely determined by the subcategory of dualizable quasi-coherent sheaves. If that is not the case, there could be Adams stacks in the above sense with equivalent categories of dualizable sheaves but inequivalent categories of quasi-coherent sheaves.</p>
|
18,659 | <p>This is more of a philosophy/foundation question.</p>
<p>I usually come across things like "the set of all men", or for example sets of symbols, i.e. sets of non-mathematical objects.</p>
<p>This confuses me, because as I understand it, the only kind of objects that exists in set theory are sets. It doesn't make sense to speak of other objects unless we have formalized them in terms of sets. So what to do with something like the set of all men? Are we working with a different set theory, a naive one? Or is it that we are omitting the formalization, because it is straightforward (e.g. assign a number to every man)?</p>
| Pete L. Clark | 1,149 | <p>If you are being, say, at least semiformal in your approach to set theory, whether or not objects which are not sets exist depends upon the particular brand of set theory you choose. The most common contemporary set theory, ZFC, is a "pure set theory", in which every object is itself a set, so the men indeed do not form a set.</p>
<p>But there are other set theories which allow non set elements, or <a href="http://en.wikipedia.org/wiki/Urelement" rel="noreferrer">urelements</a> (what a great name!). In particular, Quine's New Foundations with Urelements is a relatively popular such theory.</p>
<p>So far as I know it is towards the philosophical end of the spectrum to worry about whether sets should be allowed to contain urelements or not. The mathematical justification for this is that, using the axiom of choice, any set can be put in bijection with a von Neumann ordinal, hence a pure set. But you should be able to speak of sets of men if you want to, I suppose. </p>
<p><b>Addendum</b>: I like Sergei Ivanov's answer. He hits the following key point: if you ask a generic mathematician whether or not an object which is not a set can be an element of a set, you will not get either "yes" or "no" as an answer, but rather an explanation of why they regard the question as being a mathematically unfruitful one. When using sets for mathematical purposes, the "nature" of the objects which comprise sets is now regarded as being completely irrelevant. This is the "structuralist" approach to mathematics, which has been clarified and taken further by the more modern categorical approach. </p>
|
2,468,067 | <p>Can we say that if $f(x)$ and $f^{-1}(x)$ intersect, then at least one point of intersection will lie on $y=x$? </p>
<p>Also, there are many functions, e.g. $f(x)=1-x^3$, where points of intersection exist outside $y=x$. There will be $5$ (an odd number of) points of intersection of $f(x)=1-x^3$ and $f^{-1}(x)=(1-x)^{1/3}$, of which one lies on $y=x$. Will there exist a function with an even number of points of intersection but an odd number of them lying outside $y=x$? </p>
<p>Also, it is clear that in the case of a strictly increasing continuous function, the point of intersection, if it exists, will lie on $y=x$; but will this also be true for a
strictly increasing discontinuous function?</p>
| Fly by Night | 38,495 | <p>It is perfectly possible for the graphs of $f$ and $f^{-1}$ to cross away from the line $y=x$. </p>
<p>What we need is $f(p)=q$ and $f^{-1}(p)=q$. </p>
<p>Given that $f^{-1}$ exists, and so $f$ and $f^{-1}$ are one-to-one, the condition that $f^{-1}(p)=q$ becomes $f(f^{-1}(p)) = f(q)$, i.e. $p=f(q)$. Hence:</p>
<p>The graphs of $f$ and $f^{-1}$ cross at the points $(x,y)=(p,q)$ where <strong>both</strong> $f(p)=q$ <strong>and</strong> $f(q)=p$.</p>
<p>This is trivially true on the line $y=x$, where $p=q$.</p>
|
1,032,650 | <p><img src="https://i.stack.imgur.com/GVk1i.png" alt="enter image description here"></p>
<p>Here, ABCD is a rectangle, and BC = 3 cm. An equilateral triangle XYZ is inscribed inside the rectangle as shown in the figure, where YE = 2 cm. YE is perpendicular to DC. Calculate the length of the side of the equilateral triangle XYZ.</p>
| Ross Millikan | 1,827 | <p>Hint: Extend $EY$ to meet $AB$ at $F$. Drop a vertical line from $X$, hitting $CD$ at $G$. You now have three right triangles with the hypotenuse being the side of the equilateral triangle.</p>
|
1,032,650 | <p><img src="https://i.stack.imgur.com/GVk1i.png" alt="enter image description here"></p>
<p>Here, ABCD is a rectangle, and BC = 3 cm. An equilateral triangle XYZ is inscribed inside the rectangle as shown in the figure, where YE = 2 cm. YE is perpendicular to DC. Calculate the length of the side of the equilateral triangle XYZ.</p>
| g.kov | 122,782 | <p><a href="https://i.stack.imgur.com/d1dCS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1dCS.png" alt="enter image description here"></a></p>
<p>Let <span class="math-container">$|XY|=|YZ|=|ZX|=a$</span>,
<span class="math-container">$\angle EYZ=\theta$</span>, <span class="math-container">$|FY|=1$</span>.</p>
<p>Then <span class="math-container">$\angle XYF=120^\circ-\theta$</span>,</p>
<p><span class="math-container">\begin{align}
\triangle EYZ:\quad
a\cos\theta&=2
\tag{1}\label{1}
,\\
\triangle FXY:\quad
a\cos(120^\circ-\theta)&=1
\tag{2}\label{2}
,
\end{align}</span> </p>
<p><span class="math-container">\begin{align}
a\cos(120^\circ-\theta)&=
\frac {a\sqrt3}2\,\sin\theta
-\frac a2\cos\theta
\\
&=
\frac {a\sqrt3}2\,\sin\theta
-1
,
\end{align}</span> </p>
<p>so the system \eqref{1},\eqref{2}
changes to</p>
<p><span class="math-container">\begin{align}
a\cos\theta&=2
\tag{3}\label{3}
,\\
a\,\sin\theta
&=\frac{4\sqrt3}3
\tag{4}\label{4}
,
\end{align}</span> </p>
<p>which can be easily solved for <span class="math-container">$a$</span>:
<span class="math-container">\begin{align}
a^2\cos^2\theta+
a^2\sin^2\theta
&=2^2+\left(\frac{4\sqrt3}3\right)^2
,\\
a^2&=\frac {28}3
,\\
a&=\frac 23\,\sqrt{21}
.
\end{align}</span></p>
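<p>A small numerical check of this solution in Python (variable names are my own; the angle is recovered from equations (3) and (4)):</p>

```python
import math

a = 2 * math.sqrt(21) / 3                     # claimed side length
theta = math.atan2(4 * math.sqrt(3) / 3, 2)   # angle EYZ, from (3) and (4)

# Equations (3) and (4): a*cos(theta) = 2 and a*sin(theta) = 4*sqrt(3)/3
print(a * math.cos(theta))  # ≈ 2.0
print(a * math.sin(theta))  # ≈ 2.3094 = 4*sqrt(3)/3
```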
|
302,061 | <p>Can you say how to find the number of non-abelian groups of order n?</p>
<p>Suppose n is 24; then from the structure theorem of finite abelian groups we know that there are 3 abelian groups. But what can you say about the number of non-abelian groups of order 24?</p>
<p>The following link is a list of number of groups of order n:
<a href="http://oeis.org/wiki/Number_of_groups_of_order_n" rel="nofollow">http://oeis.org/wiki/Number_of_groups_of_order_n</a>
. But here too they do not mention how to find the number of non-abelian groups of order n.</p>
| Chris Godsil | 16,143 | <p>The short answer is that there is no formula for the number of non-abelian groups of order $n$, nor is there an algorithm for computing the number of such groups of order $n$.</p>
<p>Note that we do not really have a formula for the number of abelian groups.
We can express the number in terms of the number of partitions of an integer; this is something we know a lot about and there are algorithms for computing it for a given value.</p>
|
2,487,638 | <p>$X = C([0, 1], \mathbb{R})$, $T = T(d_\infty)$, where $(X,T)$ is a topological space. Could someone please explain what $X=C([0,1],\mathbb{R})$ means? My teacher never explained. Is it some kind of cover?</p>
| Ben P. | 422,515 | <p>$C([0,1],\mathbb{R})$ is the set of all continuous functions $f:[0,1]\rightarrow\mathbb{R}$</p>
|
2,487,638 | <p>$X = C([0, 1], \mathbb{R})$, $T = T(d_\infty)$. where $(X,T)$ is a topological space could some one please explain what $X=C([0,1],\mathbb{R})$ means, my teacher never explained ? Is it some kind of cover ?</p>
| cronos2 | 148,305 | <p>It's just the collection of all continuous mappings from $[0, 1]$ to $\Bbb R$ (with respect to the usual topology of $[0, 1]$ inherited from $\Bbb R$), endowed with the topology given by the uniform norm $||•||_{\infty}$.</p>
<p>Let me know if you don't understand anything. </p>
|
1,509,340 | <p>I'm just wondering, what are the advantages of using either the Newton form of polynomial interpolation or the Lagrange form over the other?
It seems to me that the computational costs of the two are equal, and seeing as the interpolated polynomial is unique, why ever use one over the other?</p>
<p>I get that they give different forms of the polynomial, but when is one form superior to the other?</p>
| lhf | 589 | <p>Here are two differences:</p>
<ul>
<li><p>Lagrange's form is more efficient when you have to interpolate several data sets on the same data points.</p></li>
<li><p>Newton's form is more efficient when you have to interpolate data incrementally.</p></li>
</ul>
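<p>To make the incremental point concrete, here is a hedged Python sketch of the Newton (divided-difference) form; the function names are my own. Appending a new data point only requires computing one new coefficient, whereas the Lagrange basis polynomials would all change:</p>

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients c[0..n-1] of the Newton form."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form by nested multiplication (Horner-like)."""
    result = c[-1]
    for i in range(len(c) - 2, -1, -1):
        result = result * (x - xs[i]) + c[i]
    return result

xs, ys = [0, 1, 2], [1, 3, 7]   # samples of f(x) = x^2 + x + 1
c = newton_coeffs(xs, ys)
print(newton_eval(xs, c, 3))    # 13.0 = 3^2 + 3 + 1
```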
|
554,025 | <p>I have a question as such:</p>
<blockquote>
<p>Class A has 45 students in it, and class B has 30 students in it. In class A, every student attends any particular lecture with probability 0.7 independent of the other students. For class B, two thirds of lectures are attended by everyone, with probability 1/3 that a student is missing.</p>
<p>Suppose you are looking for Class A, but the doors are not labelled. You open one of the two doors at random, and see 30 students. What is the probability that you opened the right door?</p>
</blockquote>
<p>My work so far:</p>
<p>I worked out the expected value of the number of students in each class: for class <span class="math-container">$A$</span>, it is <span class="math-container">$31.5$</span>, and for class <span class="math-container">$B$</span>, it is <span class="math-container">$29\frac23$</span>. Given this, it is likelier that I have opened the wrong door - but I don't know how to work out the probabilities to arrive at a precise numerical statement. Can anyone help me out?</p>
| bof | 97,206 | <p>A <em>crossing</em> in a drawing of $K_n$ is an unordered pair $\{e,f\}$ of edges which cross each other but have no endpoint in common, i.e., they have $4$ distinct endpoints between them. If $X=\{e,f\}$ is a crossing, let $V(X)$ be the set consisting of the $4$ endpoints of $e$ and $f$.</p>
<p>Let $f(n)$ be the <em>crossing number</em> of $K_n$, i.e., the minimum possible number of crossings in a drawing of $K_n$ in the plane. We know that$f(5)=\frac15\binom54$. In order to prove that $f(n)\ge\frac15\binom n4$ for all $n\ge5$, it will suffice to show that $f(n)/\binom n4$ is a nondecreasing function of $n$.</p>
<p>Consider any $n\ge5$. I want to show that $f(n)/\binom n4\ge f(n-1)/\binom{n-1}4$, i.e., that $(n-4)f(n)\ge nf(n-1)$.</p>
<p>Consider a drawing of $K_n$ with exactly $f(n)$ crossings. Let $S$ be the set of all pairs $(X,v)$ such that $X$ is a crossing in $K_n$ and $v$ is a vertex in $V(K_n)\setminus V(X)$. Choosing $X$ first, we see that $|S|=(n-4)f(n)$. Choosing $v$ first, we see that $|S|\ge nf(n-1)$. Thus $(n-4)f(n)\ge nf(n-1)$, Q.E.D.</p>
<p>Since $f(5)=1$, it follows that $f(n)\ge\frac15\binom n4$ for all $n\ge5$. Likewise, if we know (e.g. from <a href="http://oeis.org/A000241" rel="nofollow">OEIS</a> or <a href="http://en.wikipedia.org/wiki/Crossing_number_%28graph_theory%29" rel="nofollow">Wikipedia</a>) that $f(7)=9$, it follows that $f(n)\ge\frac9{35}\binom n4$ for all $n\ge7$. Moreover, it is easy to see that there is a constant $c\gt0$ such that $f(n)\sim c\binom n4\sim\frac n{24}n^4$ as $n\to\infty$. The same arguments apply, e.g., to the <a href="http://mathworld.wolfram.com/RectilinearCrossingNumber.html" rel="nofollow"><em>rectilinear crossing number</em></a> of $K_n$, i.e., the minimum number of crossings in a drawing of $K_n$ in which edges are drawn as straight line segments.</p>
|
719,681 | <p>There are 2 similar questions on <span class="math-container">$\log$</span> that I'm unable to solve. </p>
<ol>
<li><p>Given that <span class="math-container">$\log_a xy^2 = p$</span> and <span class="math-container">$\log_a x^2/y^3 = q $</span>. Express <span class="math-container">$\log_a 1/\sqrt{xy}$</span> or <span class="math-container">$\log_a 1/(xy)^{1/2}$</span> in terms of <span class="math-container">$p$</span> and <span class="math-container">$q$</span>
(<span class="math-container">$a$</span> is the base). I was thinking along the line of using <span class="math-container">$p - q$</span> but I can't seem to get <span class="math-container">$y^{1/2}$</span>. The answers are <span class="math-container">$\frac{3p+2q}{7}$</span> and <span class="math-container">$\frac{-5p-q}{14}$</span>.</p></li>
<li><p>Given that <span class="math-container">$\log_b(x^3y^2) = p$</span> and <span class="math-container">$\log_b(y/x) = q$</span>. Express <span class="math-container">$\log_b(x^2y)$</span> in terms of <span class="math-container">$p$</span> and <span class="math-container">$q$</span> .(<span class="math-container">$b$</span> is the base)</p></li>
</ol>
| Ali | 136,967 | <p>$$\log_a(xy^2) = \log_a x + 2 \log_a y = p \qquad \text{(Eq. 1)}$$
$$\log_a(x^2/y^3) = 2 \log_a x - 3 \log_a y = q \qquad \text{(Eq. 2)}$$</p>
<p>Solve Eq. 1 and Eq. 2 simultaneously: multiply Eq. 1 by $-2$ on both sides,
$$-2 \log_a x - 4 \log_a y = -2p$$
$$2 \log_a x - 3 \log_a y = q$$
and add, giving $-7 \log_a y = q - 2p$, so
$$\log_a y = \frac{2p - q}{7}.$$
Solving for $\log_a x$ the same way gives
$$\log_a x = \frac{3p + 2q}{7}.$$</p>
<p>Therefore
$$\log_a \frac{1}{(xy)^{1/2}} = -\tfrac12 \log_a x - \tfrac12 \log_a y = -\frac12 \cdot \frac{3p + 2q}{7} - \frac12 \cdot \frac{2p - q}{7} = -\frac{5p}{14} - \frac{q}{14}.$$
I hope that helps.</p>
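<p>A quick numerical check of these formulas with the concrete choice $x=2$, $y=3$ (any base works; natural logarithms are used here, and the variable names are my own):</p>

```python
import math

x, y = 2.0, 3.0
p = math.log(x * y**2)         # log(x y^2)
q = math.log(x**2 / y**3)      # log(x^2 / y^3)

log_x = (3 * p + 2 * q) / 7
log_y = (2 * p - q) / 7
target = -5 * p / 14 - q / 14  # should equal log(1/sqrt(xy))

print(abs(log_x - math.log(x)))                      # ≈ 0
print(abs(target - math.log(1 / math.sqrt(x * y))))  # ≈ 0
```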
|
2,012,318 | <p>Find the volume of:</p>
<p>$V=[(x,y,z): 0 \leqslant z \leqslant 4 - \sqrt{x^2+y^2}, 2x \leqslant x^2+ y^2 \leqslant 4x] $</p>
<p>I should somehow construct a triple integral here in order to solve this, which means that I have to find limits of integration for three variables, but I am just not quite sure how. I assume that I should first integrate over $z$, since the limits for $z$ are already given, but what am I supposed to do with the other two variables? When I find the limits, what function am I going to integrate? Is it going to be just $ \iiint dzdydx $?</p>
| Tyler | 383,143 | <p>To show two sets $X$ and $Y$ are equal, you should show $X\subset Y$ and $Y\subset X$.</p>
<p>Let $x\in A\times (B-C)$. Then $x=(a,y)$, where $a\in A$ and $y\in B-C$. What does that tell us about the element $x$? Can you deduce that it must be an element of $(A\times B)-(A\times C)$? If so, that shows $A\times(B-C)\subset(A\times B)-(A\times C)$.</p>
<p>Similarly, let $x\in(A\times B)-(A\times C)$. Then $x\in A\times B$, so $x=(a,b)$ where $a\in A$ and $b\in B$. Moreover, $x$ is not an element of $A\times C$. Again, what can you deduce about $x$? Show that it is an element of $A\times (B-C)$, and this shows that $(A\times B)-(A\times C)\subset A\times (B-C)$.</p>
<p>After showing these two, equality follows.</p>
|
4,575,771 | <p>I need to show that <span class="math-container">$\int_0^1 (1+t^2)^{\frac 7 2} dt < \frac 7 2 $</span>. I've checked numerically that this is true, but I haven't been able to prove it.</p>
<p>I've tried trigonometric substitutions. Let <span class="math-container">$\tan u= t:$</span></p>
<p><span class="math-container">$$\int_0^1 (1+t^2)^{\frac 7 2} dt = \int_0^{\frac{\sqrt 2}{2}} (1+\tan^2 u )^{\frac 9 2} du = \int_0^{\frac{\sqrt 2}{2}} \sec^9 u \ du = \int_0^{\frac{\sqrt 2}{2}} \sec^{10} u \cos u\ du = \int_0^{\frac{\sqrt 2}{2}} \frac {\cos u}{(1-\sin^2 u)^5} du$$</span>
Now let <span class="math-container">$\sin u = w$</span>. Then:
<span class="math-container">$$\int_0^{\frac{\sqrt 2}{2}} \frac {\cos u}{(1-\sin^2 u)^5} du = \int_0^{\sin {\frac{\sqrt 2}{2}}} \frac {1}{(1-w^2)^5} dw.$$</span>
This last integral is solvable using partial fraction decomposition, but even after going through all the work required I'm not really sure how to compare it with <span class="math-container">$\frac 7 2$</span>, because of that <span class="math-container">$\sin {\frac {\sqrt{2}}{2}}$</span> term, which is not easy to compare.</p>
| Gary | 83,800 | <p>By the Cauchy–Schwarz inequality
<span class="math-container">\begin{align*}
\int_0^1 {(1 + t^2 )^{7/2} {\rm d}t} & \le \sqrt {\int_0^1 {(1 + t^2 )^3 {\rm d}t} \int_0^1 {(1 + t^2 )^4 {\rm d}t} } = \sqrt {\frac{{42496}}{{3675}}} \\ & = \sqrt {\frac{{49}}{4}\frac{{169984}}{{180075}}} < \sqrt {\frac{{49}}{4}} = \frac{7}{2}.
\end{align*}</span>
Alternatively,
<span class="math-container">$$
\sqrt {\frac{{42496}}{{3675}}} < \sqrt {\frac{{42849}}{{3600}}} = \sqrt {\frac{{4761}}{{400}}} = \frac{{69}}{{20}} < \frac{{70}}{{20}} = \frac{7}{2}.
$$</span></p>
|
402,427 | <p><em>Sorry if I don't use the words properly, I haven't learnt these things in English, only some of the words.
Anyway, I'm practicing for one of my exams and sadly this task seemed more challenging for me than it should be. Some kind of explanation would help a lot!</em></p>
<p>10 meters of clothes have 6 holes in it.</p>
<p>a) What kind of distribution does the number of holes per meter follow? (<em>I think it must be Poisson distribution)</em></p>
<p>b) What's the probability that there's more than 10 holes in 5 meters of clothes?</p>
| Community | -1 | <p>If it's not obvious that $\mathbb{Z}[X]/(2,X) \cong \mathbb{F}_2$, then quotient out by one element at a time:</p>
<p>$$ \mathbb{Z}[X]/(2,X) = \left( \mathbb{Z}[X] / (2) \right) / (X) $$</p>
<p>or</p>
<p>$$ \mathbb{Z}[X]/(2,X) = \left( \mathbb{Z}[X] / (X) \right) / (2) $$</p>
<p>and maybe it will be easier.</p>
|
4,236,077 | <p>Suppose that <span class="math-container">$A\subset B$</span> and <span class="math-container">$A\subset C$</span>. Why does this imply <span class="math-container">$A\subset B\cup C?$</span></p>
<p>If <span class="math-container">$x\in A$</span>, then since <span class="math-container">$A\subset B$</span> and <span class="math-container">$A\subset C$</span>, we know <span class="math-container">$x\in B$</span> and <span class="math-container">$x\in C.$</span> This implies <span class="math-container">$x\in B \cap C.$</span></p>
<p>But it is not generally true that <span class="math-container">$B\cap C\subset B\cup C $</span> since <span class="math-container">$B$</span> and <span class="math-container">$C$</span> may be disjoint.</p>
<p>So why does <span class="math-container">$A\subset B$</span> and <span class="math-container">$A\subset C$</span> imply <span class="math-container">$A\subset B\cup C?$</span></p>
| user0102 | 322,814 | <p>The following result is useful to have at hand:</p>
<p><strong>Proposition</strong></p>
<p>Given some set <span class="math-container">$U$</span> as well as <span class="math-container">$X\subseteq U$</span> and <span class="math-container">$Y\subseteq U$</span>, <span class="math-container">$X\subseteq Y$</span> iff <span class="math-container">$X\cap Y = X$</span>.</p>
<p><strong>Solution</strong></p>
<p>Since <span class="math-container">$A\subseteq B$</span> and <span class="math-container">$A\subseteq C$</span>, we conclude that <span class="math-container">$A\cap B = A$</span> and <span class="math-container">$A\cap C = A$</span>.</p>
<p>Having said that, according to the properties of the operations on sets, we get that:</p>
<p><span class="math-container">\begin{align*}
A\cap(B\cup C) & = (A\cap B)\cup(A\cap C)\\\\
& = A\cup A\\\\
& = A
\end{align*}</span>
</p>
<p>Hence we conclude that <span class="math-container">$A\subseteq B\cup C$</span>, and we are done.</p>
<p><strong>REMARK</strong></p>
<p>You can also apply the transitivity property. Indeed, we have
<span class="math-container">\begin{align*}
A\subseteq B \subseteq B\cup C \Rightarrow A\subseteq B\cup C
\end{align*}</span></p>
<p>Another related result (perhaps a stronger one) says that <span class="math-container">$A\subseteq B\cap C$</span>.</p>
<p>Indeed, we can proceed as follows:</p>
<p><span class="math-container">\begin{align*}
A\cap(B\cap C) & = (A\cap B)\cap C\\\\
& = A\cap C\\\\
& = A
\end{align*}</span></p>
<p>Hopefully this helps!</p>
|
4,237,342 | <p>I am a researcher and encountered the following challenging function in my work:</p>
<p><span class="math-container">$$f(S)=\sum_{k=1}^{S-1}(\ln (S)-\ln (k))^2 \bigg [ \frac{1}{(S-k)^2}+\frac{1}{(S+k)^2} \bigg ]$$</span></p>
<p>And I am only interested in the first term of the Taylor expansion of this function when <span class="math-container">$S\to+\infty$</span>. Matlab simulations give me that it is equivalent to:</p>
<p><span class="math-container">$$\frac{a}{S}$$</span></p>
<p>In other words, simply a positive scalar divided by the parameter <span class="math-container">$S$</span>.</p>
<p>Do you have any idea how to compute this term?</p>
<p>Thank you so much.</p>
| Jack D'Aurizio | 44,121 | <p><span class="math-container">$$f(S)=\frac{1}{S}\cdot\underbrace{\frac{1}{S}\sum_{k=1}^{S-1}\log^2\left(\frac{k}{S}\right)\left(\frac{1}{\left(1-\frac{k}{S}\right)^2}+\frac{1}{\left(1+\frac{k}{S}\right)^2}\right)}_{\text{Riemann sum}}$$</span></p>
<p>And the Riemann sum converges to</p>
<p><span class="math-container">$$ a=\int_{0}^{1}\frac{\log^2(x)}{(1-x)^2}\,dx + \int_{0}^{1}\frac{\log^2(x)}{(1+x)^2}\,dx. $$</span>
Considering that <span class="math-container">$\frac{1}{(1-x)^2}=\sum_{n\geq 0}(n+1) x^n$</span>, <span class="math-container">$\frac{1}{(1+x)^2}=\sum_{n\geq 0}(n+1) (-x)^n$</span> and <span class="math-container">$\int_{0}^{1}x^n\log^2(x)\,dx=\frac{2}{(n+1)^3}$</span> we have</p>
<p><span class="math-container">$$ a = 4\sum_{n\geq 0}\frac{(2n+1)}{(2n+1)^3}=\color{red}{\frac{\pi^2}{2}}. $$</span></p>
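<p>A numerical check of the claimed constant (a Python sketch, not part of the original answer): the product $S\cdot f(S)$ should approach $\pi^2/2\approx 4.9348$ as $S$ grows.</p>

```python
import math

def f(S):
    # the sum from the question
    return sum(
        (math.log(S) - math.log(k)) ** 2
        * (1.0 / (S - k) ** 2 + 1.0 / (S + k) ** 2)
        for k in range(1, S)
    )

a = math.pi ** 2 / 2
for S in (500, 2000, 8000):
    print(S, S * f(S), a)   # S*f(S) creeps toward pi^2/2
```

<p>The convergence is slow (the error is roughly of order $\log^2 S/S$, from the logarithmic singularity at $0$), which matches what the Riemann-sum picture predicts.</p>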
|
4,062,987 | <p>I was reading this question here: <a href="https://math.stackexchange.com/questions/1230688/what-are-the-semisimple-mathbbz-modules">What are the semisimple $\mathbb{Z}$-modules?</a> and I understood everything except why we need <span class="math-container">$\alpha_p$</span> copies here <span class="math-container">$$
\bigoplus_{\text{$p$ prime}}(\mathbb{Z}/p\mathbb{Z})^{(\alpha_p)}
$$</span>
And not just <span class="math-container">$$
\bigoplus_{\text{$p$ prime}}\mathbb{Z}/p\mathbb{Z}
$$</span></p>
<p>Could anyone explain this to me please?</p>
| Community | -1 | <p>Without looking at the dup solution, you can find a recursive relation for <span class="math-container">$a_{n+1}$</span> and <span class="math-container">$a_n$</span>. One has: <span class="math-container">$a_{n+1} = \dfrac{1\cdot 3\cdot 5\cdots (2n+1)}{2\cdot 4\cdot 6\cdots (2n+2)}= \dfrac{2n+1}{2n+2}\cdot a_n< a_n$</span> since <span class="math-container">$2n+1 < 2n+2$</span>. And <span class="math-container">$a_n > 0$</span> for all <span class="math-container">$n$</span>, hence it converges as a monotonically decreasing and bounded below sequence.</p>
|
1,936,043 | <p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p>
<p>My thoughts: I know $(-1)^n$ is divergent, so $n$ to the power of a divergent sequence is still divergent? I am not sure how to give a proper proof, pls help!</p>
| Darío A. Gutiérrez | 353,218 | <p>If $n$ even then $-1^n = 1 $
$$n^1 = n\Rightarrow \lim\limits_{n \rightarrow \infty}{({n})} = \infty$$ </p>
<p>If $n$ odd then $-1^n = -1 $
$$n^{-1} = \frac{1}{n}\Rightarrow \lim\limits_{n \rightarrow \infty}{(\frac{1}{n})} = 0$$</p>
|
1,936,043 | <p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p>
<p>My thoughts: I know $(-1)^n$ is divergent, so $n$ to the power of a divergent sequence is still divergent? I am not sure how to give a proper proof, pls help!</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. By setting $$u_n:=n^{(-1)^n}$$ one gets that
$$
\lim_{n \to \infty}u_{2n}=\infty \neq 0 =\lim_{n \to \infty}u_{2n+1}
$$ thus the sequence $\left\{ u_n \right\}$ is <em>divergent</em>.</p>
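<p>A tiny numerical illustration of the two subsequences (not part of the original answer):</p>

```python
def u(n):
    # u_n = n^((-1)^n): even indices give n, odd indices give 1/n
    return n ** ((-1) ** n)

print([u(n) for n in (10, 11, 100, 101)])   # even terms grow, odd terms shrink
```

<p>The even terms run off to infinity while the odd terms tend to $0$, so no single limit can exist.</p>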
|
2,868,047 | <p>My question is in relation to a problem I am trying to solve <a href="https://math.stackexchange.com/questions/2867002/finding-mathbbpygx">here</a>. If $g(.)$ is a monotonically increasing function and $a <b$, is it always true that $a<g(a)<g(b)<b$? Why or why not?</p>
| MJD | 25,554 | <p>In what follows I will refer to “pattern” in the dice rolls. For example the pattern <code>AAABC</code> means that three of the dice show the same number and the other two dice are different from the first three and also different from each other. The roll $1 2 2 4 2$ has this pattern, but $1 2 2 1 2$ does not; that has the pattern <code>AAABB</code>, and similarly $2 2 2 2 1$ is not pattern<code>AAABC</code> but <code>AAAAB</code>. </p>
<p>There are seven patterns possible with five dice:</p>
<pre><code>AAAAA
AAAAB
AAABB
AAABC
AABBC
AABCD
ABCDE
</code></pre>
<p>If you want three of a kind “or better” on five dice, you are interested in the sum of the first four patterns and you want to disregard the other three. What follows is an explanation of how to compute the probability for each pattern.</p>
<p>We will represent the patterns numerically like this: <code>AAABC</code> will be $(2, 0, 1)$ because there are two letters that appear once, no letters that appear twice, and one letter that appears three times. <code>AAABB</code> will be $(0, 1, 1)$ because there are no letters that appear once, one that appears twice, and one that appears three times. <code>AABCD</code> is $(3, 1)$ (we omit the trailing zero) and <code>ABCDE</code> is $(5)$. We'll refer to these numbers as $n_1, n_2, \ldots$. For <code>AAABC</code>, we have $n_1 = 2, n_2 = 0, n_3 = 1$. For five dice, the representations of the seven patterns are:</p>
<p>$$\begin{array}{lrrrrr}
& n_1 & n_2 & n_3 & n_4 & n_5 \\
AAAAA & 0 & 0 & 0 & 0 & 1 \\
AAAAB & 1 & 0 & 0 & 1 \\
AAABB & 0 & 1 & 1 \\
AAABC & 2 & 0 & 1 \\
AABBC & 1 & 2 \\
AABCD & 3 & 1 \\
ABCDE & 5
\end{array}
$$</p>
<p>Now suppose we're rolling $N$ dice each with $d$ sides. If there are $N$ dice, we should always have $\sum i\cdot n_i = N$. We'll also take $k=\sum n_i$; this is just the number of different letters in the pattern.</p>
<p>Then it transpires that the number of ways of rolling any pattern is:</p>
<p>$$
\color{maroon}{d\choose k}\color{darkblue}{k!}
{\color{darkgreen}{N!}\over \color{purple}{\prod {i!}^{n_i}{n_i}!}}
$$</p>
<p>To get the probability, just divide by $d^N$.</p>
<p>I'll work through one example to demonstrate the formula. How many ways are there to roll the pattern <code>AAABC</code>, which is three of a kind, but not counting full house, four of a kind, or five of a kind? For <code>AAABC</code> we have $n_1=2, n_2=0, n_3 = 1$, so $k=n_1+n_2+n_3 = 3$, and the formula gives:</p>
<p>$$\color{maroon}{6\choose 3}\color{darkblue}{3!}
{\color{darkgreen}{5!}\over \color{purple}{(1!^2\cdot2!)(2!^0\cdot0!)(3!^1\cdot1!)}} =
\color{maroon}{20}\cdot \color{darkblue}{6}\cdot
{\color{darkgreen}{120}\over \color{purple}{2\cdot1\cdot6}} = \mathbf{1200}
$$</p>
<p>There are 1200 ways to roll the pattern <code>AAABC</code>, so the probability is $\frac{1200}{6^5} \approx 15.43\%$. Similar calculations for the other three patterns of interest give:</p>
<p>$$\begin{array}{lrrl}
A A A B C & 1200 & 15.43 & \% \\
A A A B B & 300 & 3.86 \\
A A A A B & 150 & 1.93 \\
A A A A A & 6 & 0.08 \\ \hline
\text{Total} & 1656 & 21.30 & \%
\end{array}
$$</p>
<p>So you can expect to get three of a kind or better around one time in five.</p>
<p>(By far the most common pattern is the single pair <code>AABCD</code>, which occurs almost half the time.)</p>
<p>I explained the formula in more detail <a href="https://blog.plover.com/math/yahtzee.html" rel="nofollow noreferrer">in a blog post</a>. <a href="https://perl.plover.com/misc/enumeration/Dice" rel="nofollow noreferrer">This page tabulates the probabilities for every pattern of up to 12 dice</a>. The program that generated these tables is available online: the URL</p>
<pre><code> https://perl.plover.com/misc/enumeration/tabulate-dice.cgi?N=7&S=11
</code></pre>
<p>generates a table for seven dice each with 11 sides. You can adjust the 7 and 11 to suit yourself. <a href="https://perl.plover.com/misc/enumeration/" rel="nofollow noreferrer">Related materials</a>, including program source code, are available also.</p>
|
1,913,835 | <p>I'm having a difficult time explaining/understanding a (seemingly) simple argument of an algorithm that I know I can use to determine if a directed graph <strong>G</strong> is strongly connected.</p>
<p>The algorithm that I know (does this have a name?) goes like this:</p>
<pre><code>Use BFS (breadth-first-search) on G staring from some node S
IF every node is found
Construct G^ (G with reversed Edges, G transpose?)
Use BFS on G^ starting from the same node S
IF every node is found
G is strongly connected
ELSE
G is not strongly connected
ELSE
G is not strongly connected
End
</code></pre>
<p>So the first run of BFS ensures that the node S can reach every other node on G, which makes sense. If it cant, then clearly G is not strongly connected.</p>
<p>The second run of BFS on G^, as I understand, will show that any node on the graph can also make it to S in G. <strong>This is the part I can't fully explain to myself.</strong> </p>
<p>The conclusion of the algorithm I understand, if S can make it to every node and every node can make it to S, then any two nodes will always be able to make it to each other through S.</p>
<p>To reiterate my question, I'm looking for an explanation on why determining S can make it to every node in G^ shows that every node can make it to S in G. I've tried doing a proof by contradiction, but I'm having a hard time wording it out. Any explanation will help me. Thank you!</p>
| Ashwin Ganesan | 157,927 | <p>When you do a BFS starting at a node $S$ of a directed graph, the neighbors of $S$ in the BFS tree would be the out-neighbors of $S$ (not the in-neighbors). The resulting BFS tree tells you the shortest directed path from $S$ to all other nodes. The important observation here is that BFS on a digraph considers only the outgoing arcs from each vertex. So if you want to determine whether there is a directed path from some other node $T$ to $S$, you need to do a BFS rooted at $T$. </p>
<p>Alternatively, you can reverse the direction on each arc of the directed graph and perform a BFS rooted at $S$. There is a directed path from $T$ to $S$ in the given digraph if and only if there is a directed path from $S$ to $T$ in the reverse digraph. This should be clear from small examples, but a formal proof can be given: if $(x_0,x_1), (x_1,x_2), \ldots, (x_{k-1},x_k)$ is a directed path from $S$ to $T$ in the reverse digraph, then $(x_k,x_{k-1}),\ldots,(x_1,x_0)$ is a directed path from $T$ to $S$ in the given digraph. In other words, rather than do a BFS on the digraph starting from node $T$, do a BFS on the reverse digraph starting from node $S$. The latter approach requires only a second BFS on the reverse digraph starting from $S$ rather than a BFS starting from every other node on the given digraph.</p>
<p>You can think of the second BFS starting from $S$ as being a BFS on the original graph itself (not the reverse graph), but with the BFS implemented by choosing neighbors based on incoming arcs (rather than outgoing arcs). </p>
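<p>The two-BFS procedure from the question, with the reverse digraph built explicitly, can be sketched as follows (Python, not part of the original answer; the graph is assumed to be a dict mapping every node to its list of out-neighbours):</p>

```python
from collections import deque

def bfs_reaches_all(adj, s):
    # BFS from s, following outgoing arcs only
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

def strongly_connected(adj):
    # assumes a non-empty digraph with every node present as a key
    s = next(iter(adj))
    if not bfs_reaches_all(adj, s):
        return False
    rev = {u: [] for u in adj}        # reverse every arc
    for u, nbrs in adj.items():
        for v in nbrs:
            rev[v].append(u)
    return bfs_reaches_all(rev, s)

print(strongly_connected({0: [1], 1: [2], 2: [0]}))   # directed cycle: True
print(strongly_connected({0: [1], 1: [2], 2: []}))    # directed path: False
```

<p>The second BFS on <code>rev</code> from the same start node is exactly the check that every node can reach $S$ in the original digraph.</p>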
|
4,219,303 | <p>I'm trying to solve <span class="math-container">$y''-3y^2 =0$</span>, i use the substitution <span class="math-container">$w=\frac{dy}{dx}$</span>.</p>
<p>Using the chain rule i have:
<span class="math-container">$$\frac{d^2y}{dx^2} = \frac{dw}{dx} = \frac{dw}{dy}\cdot \frac{dy}{dx} = w \cdot \frac{dw}{dy}$$</span>
So i can build the system:
<span class="math-container">$$w\cdot \frac{dw}{dy} = 3y^2$$</span>
<span class="math-container">$$\frac{dy}{dx} = w$$</span></p>
<p>But i'm not sure how to solve the system. I appreciate your help.</p>
| Alfredo Maussa | 954,357 | <p>Assuming that what you did is right, you could integrate as separable variables:</p>
<p><span class="math-container">$\int wdw = \int 3y^2dy$</span> , so <span class="math-container">$\frac{w^2}2 = y^3 +C_1 => w=\pm\sqrt{2y^3+2C_1}$</span></p>
<p>then</p>
<p><span class="math-container">$\int \frac1{w(y)}dy = \int dx$</span></p>
<p><span class="math-container">$\int \frac1{\pm\sqrt{2y^3+2C_1}}dy = x$</span></p>
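<p>One way to check the first integration step numerically (a sketch, not part of the original answer): along any numerical solution of $y''=3y^2$, the quantity $\frac{w^2}{2}-y^3$ should stay constant. A hand-rolled RK4 integrator is used just to keep the example self-contained:</p>

```python
def rk4_step(f, t, y, h):
    # one classical Runge-Kutta 4 step for y' = f(t, y), y a list
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def rhs(t, state):
    y, w = state          # y' = w,  w' = 3 y^2
    return [w, 3 * y * y]

state, h = [1.0, 0.0], 1e-4              # arbitrary initial data y(0)=1, y'(0)=0
E0 = state[1] ** 2 / 2 - state[0] ** 3   # w^2/2 - y^3 = C_1 along solutions
for _ in range(2000):
    state = rk4_step(rhs, 0.0, state, h)
y, w = state
print(abs(w ** 2 / 2 - y ** 3 - E0))     # close to zero
```

<p>The invariant staying (numerically) constant is exactly the statement $\frac{w^2}{2} = y^3 + C_1$ from the separable step.</p>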
|
1,603,323 | <p>If <span class="math-container">$A$</span> is a positive definite matrix can it be concluded that the kernel of <span class="math-container">$A$</span> is <span class="math-container">$\{0\}$</span>? </p>
<p>pf: R.T.P <span class="math-container">$\ker A = 0$</span>.
Suppose not, i.e., there exists some <span class="math-container">$x\in\ker A$</span> s.t <span class="math-container">$x\neq 0$</span>, then
<span class="math-container">$$Ax = 0\;\Longrightarrow x^T Ax = 0$$</span>
which is a contradiction by definition of positive definite.
Therefore <span class="math-container">$\ker A=\{0\}$</span>.</p>
| YiFan | 496,634 | <p>Your proof is correct. As an alternative, we have a nonzero <span class="math-container">$x$</span> so that <span class="math-container">$Ax=0=0x$</span>, which means <span class="math-container">$0$</span> is an eigenvalue of <span class="math-container">$A$</span>. But the eigenvalues of a positive definite matrix are supposed to be positive, a contradiction.</p>
|
19,356 | <p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p>
<p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p>
<p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p>
<p>Some other guesses:</p>
<ol>
<li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li>
<li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li>
</ol>
<p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p>
<p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
| James Weigandt | 4,872 | <p>The general question of what a professional mathematician should know was asked by Phil Davis at the end of <a href="http://www.siam.org/news/news.php?id=1642">this article</a>. Barry Mazur <a href="http://www.math.harvard.edu/~mazur/preprints/math_ed_2.pdf">posted a brief response</a> about a year ago.</p>
<p>I'm too young to have a picture of this question 30 years ago. Perhaps Bourbaki's <em>Éléments de mathématique</em> comprised an appropriate list. Someone who is old enough to know should correct me. </p>
|
1,995,663 | <p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p>
<p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p>
<p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt=""five color" graph"></a></p>
| Arkya | 276,417 | <p>This works, as you can check:</p>
<p><a href="https://i.stack.imgur.com/wJgmS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wJgmS.png" alt="enter image description here"></a></p>
|
1,995,663 | <p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p>
<p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p>
<p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt=""five color" graph"></a></p>
| Especially Lime | 341,019 | <p>To answer the "algorithmically" question, this map has some regions that only border four others. There is a relatively short, algorithmic proof that if you can 4-colour all but one of the regions of a map, and the last region, R, only borders four others (call them R_1, R_2, R_3, R_4 in clockwise ordering about R), then you can colour the whole map. To do this, work as follows. It is easy unless R_1 to R_4 already use all the colours. Say they are blue, green, red, yellow in that order around region R. Now we try to recolour R_1 to red, in the hope that this will allow us to colour R blue. If we do that, we have to recolour any red region which borders R_1 blue. Then we have to recolour any blue region which borders one of these red, and so on. We keep doing this until we run out of things to recolour. Now one of two things happens: either we can colour R blue or we had to recolour R_3 blue. In the latter case, there was a chain of red and blue regions stretching from R_1 to R_3, but there can't also be a chain of green and yellow regions stretching from R_2 to R_4 (they have to cross somewhere, but the fact that they use different colours means that they can't). So now if we try the same trick recolouring R_2 yellow, any yellow regions next to R_2 green, and so on, this time we won't have to recolour R_4, and we will be able to colour R green.</p>
<p>We've now shown that if we can colour everything apart from a region that meets four (or fewer) others, we can do this recolouring trick and then colour the last region. Similarly, if we had a colouring of some of the regions, and one of the uncoloured regions only bordered four coloured ones, we can recolour in this way and then colour that region as well. So if we can find an ordering of regions such that each one borders at most four previous ones, we can progressively recolour in this way. In this map we can -- basically progressively remove regions that only have four neighbours in what's left, then reverse that ordering -- so this algorithm will work.</p>
<p>This method was the basis of Kempe's incorrect proof of the 4-colour theorem, and was used by Heawood to prove the 5-colour theorem (using five colours we are ok so long as there is always a region we can remove which borders at most five others, but that is true for any plane map). It can be used to easily find a 4-colouring of Martin Gardner's "April Fools" map, which would be very difficult to find by trial and error.</p>
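<p>The recolouring step described above — flip two colours along a connected "chain" — is easy to state in code. Here is a sketch (Python, not part of the original answer; it works on an abstract adjacency structure, so planarity and the crossing argument are not modelled):</p>

```python
def kempe_swap(adj, colour, v, a, b):
    # swap colours a and b on the connected component of v inside the
    # subgraph spanned by regions coloured a or b (a "Kempe chain")
    stack, chain = [v], {v}
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in chain and colour[w] in (a, b):
                chain.add(w)
                stack.append(w)
    for u in chain:
        colour[u] = b if colour[u] == a else a

# toy example: triangle 0-1-2 with a pendant region 3 bordering 0
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
colour = {0: "red", 1: "blue", 2: "green", 3: "blue"}
kempe_swap(adj, colour, 0, "red", "blue")
print(colour)   # still a proper colouring after the swap
```

<p>Swapping two colours on a whole chain can never make two neighbours equal, which is what makes the recolouring trick safe.</p>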
|
2,835,474 | <p>What is linear about a linear combination of things?. In linear algebra, the "things" we are dealing with are usually vectors and the linear combination gives the span of the vectors. Or it could be a linear combination of variables and functions. But why not just call it combination. Why is the term "linear" included?What is so "linear" about it?</p>
| OnceUponACrinoid | 246,291 | <p>It is linear because such a combination has the form</p>
<p>$$
(const_1) (quantity_1)+(const_2)(quantity_2)+\ldots
$$</p>
<p>as opposed to expressions such as
$$
(const)(quantity_1)(quantity_2)
$$
or</p>
<p>$$
(const)(quantity)^3
$$</p>
<p>More generally, such forms preserve "linearity", in the sense that scaling by constants or adding or substracting them preserves the form.</p>
|
2,520,044 | <p>$$\lim_{x\to2}{\frac{\sqrt{3x-2}-\sqrt{5x-6}}{\sqrt{2x-1}-\sqrt{x+1}}}$$</p>
<p>Evaluate the limit.</p>
<p>Thanks for any help</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>Set $x-2=h$</p>
<p>and rationalize the denominator & the numerator.</p>
<p>Setting the limit point to $0$ often eases the calculation.</p>
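<p>As a quick numerical confirmation (a sketch, not part of the original hint): rationalizing gives the limit $-\sqrt 3$, which sampling near $x=2$ supports.</p>

```python
def f(x):
    # the expression from the question
    return (((3 * x - 2) ** 0.5 - (5 * x - 6) ** 0.5)
            / ((2 * x - 1) ** 0.5 - (x + 1) ** 0.5))

target = -3 ** 0.5            # -sqrt(3), the expected limit
for h in (1e-2, 1e-4, 1e-6):
    print(f(2 + h), f(2 - h), target)
```

<p>Both one-sided samples settle near $-\sqrt 3 \approx -1.732$, consistent with what the rationalization yields.</p>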
|
3,449,826 | <p><span class="math-container">$C$</span> is any closed curve encompassing the whole branch cut.
The approach to this problem would involve using the residue theorem:</p>
<p>1) We first want to find the residues at infinity so we change the form to where we are able to perform a series expansion.</p>
<p>2) Obtain the coefficients of <span class="math-container">$\frac{1}{z}$</span>.</p>
<p>3) Plug the residues back into the formula: <span class="math-container">$\oint\limits_{C}\mathrm{d}z \, f(z)=2\pi i \, \mathrm{Res}(f,\infty)$</span></p>
<p>So for the integral: <span class="math-container">$\oint\limits_{C}\mathrm{d}z \,\sqrt{\frac{z-a}{z-b}}$</span> </p>
<p>I would like some help in obtaining the form which allows for an expansion. If this approach is bad for this integral, then please inform me of a better solution.</p>
<p>A similar problem has been posted before <a href="https://math.stackexchange.com/questions/2288715/integral-int-z-r-sqrtz-az-bdz-a-neq-b-rmaxa-b-z-in-c">here</a>, for which <span class="math-container">$\oint\limits_{C}\mathrm{d}z \,\sqrt{(z-a)(z-b)} = \dfrac{\pi i}{4}(a-b)^2$</span>. I am just referencing this because the problem is so similar as with our approach utilizing residues at infinity.</p>
| GEdgar | 442 | <p>More complete version of Pedrpan's answer... As <span class="math-container">$z \to \infty$</span>, we have
<span class="math-container">$$
\frac{z-a}{z-b} = \frac{1-a/z}{1-b/z} =
\left(1-\frac{a}{z}\right)\left(1+\frac{b}{z}+O(z^{-2})\right)
= 1 +\frac{b-a}{z}+O(z^{-2})
$$</span>
then one branch of the square root is analytic near <span class="math-container">$z=\infty$</span> and
<span class="math-container">$$
\left(\frac{z-a}{z-b}\right)^{1/2} = 1 +\frac{b-a}{2z}+O(z^{-2})
$$</span>
We chose the branch of the square root so that the integrand <span class="math-container">$ \to 1$</span> at <span class="math-container">$z=\infty$</span>.</p>
<p>The residue at <span class="math-container">$z=\infty$</span> is <span class="math-container">$\frac{b-a}{2}$</span> so the integral is
<span class="math-container">$$
2\pi i \frac{b-a}{2} = i\pi(b-a) .
$$</span></p>
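<p>The result is easy to confirm numerically (a Python sketch, not part of the original answer): parametrize a large circle enclosing both branch points and apply the trapezoidal rule. With the principal square root, the ratio <span class="math-container">$(z-a)/(z-b)$</span> stays close to <span class="math-container">$1$</span> on the circle, so no branch jump occurs there.</p>

```python
import cmath, math

a, b = 1.0, -1.0                 # sample branch points
R, N = 10.0, 4096                # big circle, trapezoid points
total = 0j
for k in range(N):
    z = R * cmath.exp(2j * math.pi * k / N)
    dz = 1j * z * (2 * math.pi / N)          # dz = i z dtheta
    total += cmath.sqrt((z - a) / (z - b)) * dz

expected = 1j * math.pi * (b - a)            # i*pi*(b-a) = -2*pi*i here
print(total, expected)
```

<p>The trapezoidal rule is spectrally accurate on a closed contour where the integrand is analytic, so even modest <code>N</code> matches <span class="math-container">$i\pi(b-a)$</span> to near machine precision.</p>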
|
2,965,821 | <p>Let <span class="math-container">$(x_k)_{k\in N}$</span> <span class="math-container">$\subset \mathbb{R^4} $</span>.</p>
<p>Then there's this series, which I have to check for convergence and its limit.</p>
<p><a href="https://i.stack.imgur.com/o6zpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o6zpZ.png" alt="enter image description here"></a></p>
<p>I think that <span class="math-container">$(-1)^k * k$</span> diverges, because of the geometric series, which is saying that if <span class="math-container">$|q|^n$</span> <span class="math-container">$\geq 1$</span>, the series diverges.</p>
<p>Now for the second part we have <span class="math-container">$(-1)^k * (1/k)$</span>, which converges, because <span class="math-container">$1/k$</span> converges towards 0. </p>
<p><span class="math-container">$(-1)^k * k^{300}$</span> diverges too, for the same reason like the first series. </p>
<p><span class="math-container">$arctan (k) = \pi/2$</span>, because it does not tend to 0 as k tends to infinity the divergence test tells us that the infinite series diverges. </p>
<p>I don't know if that is correct at all...</p>
| Bertrand Wittgenstein's Ghost | 606,249 | <p>Chain rule states: <span class="math-container">$\frac {d}{dx}f(g(x))=f'(g(x))\cdot g'(x)$</span>, then <span class="math-container">$$(e^{x^3+2})'=e^{x^3+2}\cdot 3x^2$$</span></p>
|
14,140 | <p>One of the most annoying "features" of <em>Mathematica</em> is that the <code>Plot</code> family does extrapolation on <code>InterpolatingFunction</code>s without any warning. I'm sure it was discussed to hell previously, but I cannot seem to find any reference. While I know how to simply overcome the problem by defining a global variable for the domain of the interpolation, from time to time I forget to do this and then I spend days figuring out where the numerical error originates. This could be avoided if <code>Plot</code> was to give a warning.</p>
<p>Consider the following example. An ODE system is defined and integrated for two different time ranges:</p>
<pre><code>odes = {
a'[t] == -a[t] - .2 a[t]^2 + 2. b[t],
b'[t] == a[t] + .1 a[t]^2 - 1.1 b[t], a[0] == 1, b[0] == 1
};
sol100 = First@NDSolve[odes, {a, b}, {t, 0, 100}];
sol500 = First@NDSolve[odes, {a, b}, {t, 0, 500}];
</code></pre>
<p>Now querying the function value for a point outside of the range correctly gives a warning:</p>
<pre><code>(a /. sol100)[500]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {500} lies outside
the range of data in the interpolating function. Extrapolation will be used. >>
651.034
</code></pre>
</blockquote>
<p>The same is not done when we use the function in <code>Plot</code>:</p>
<pre><code>Show[
Plot[{a[t], b[t]} /. sol100, {t, 0, 400}, PlotStyle -> {Thick, Red}],
Plot[{a[t], b[t]} /. sol500, {t, 0, 400}, PlotStyle -> {Thick, Blue}]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/X7xaL.png" alt="Mathematica graphics"></p>
<p>I've tried to force a warning, with no avail. The following example won't give a warning.</p>
<pre><code>On[InterpolatingFunction::dmval]
Check[Plot[{a[t], b[t]} /. sol100, {t, 0, 500}], "Error",
InterpolatingFunction::dmval]
</code></pre>
<p>Interestingly, one can be sure that <code>InterpolatingFunction::dmval</code> is NOT turned off at all inside the <code>Plot</code> family. In the following example, <code>LogLinearPlot</code> is able to drop a warning about sampling from below the domain (that can be ignored being unrelated, see <a href="https://mathematica.stackexchange.com/q/5986/89">this post</a>, also it seems to be fixed in v9), but it does not give the same warning when sampling from <strong>above</strong> (> 100)! </p>
<pre><code>LogLinearPlot[{a[t], b[t]} /. sol100, {t, 0.1, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {-2.30241} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<p>It is even more disturbing to see that <code>Plot</code> checks the lower boundary but not the upper (thanks to <a href="https://mathematica.stackexchange.com/users/50/j-m">J.M.</a> for the comment):</p>
<pre><code>Plot[{a[t], b[t]} /. sol100, {t, -1, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {-0.989765} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<p>As <a href="https://mathematica.stackexchange.com/users/312/oleksandr-r">Oleksandr</a> has pointed out, it is not about lower vs. upper boundaries but first point vs. the rest. </p>
<pre><code>Plot[{a[t], b[t]} /. sol100, {t, 101, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {101.008} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<h2><strong>Questions</strong></h2>
<ol>
<li>Why does <code>Plot</code> not give a warning when extrapolating an <code>InterpolatingFunction</code>? Is there some higher-level consideration that justifies this behaviour, or is it a bug?</li>
<li>How can one force <code>Plot</code> to give a warning? Is there any workaround that forces <code>InterpolatingFunction::dmval</code> not to be attenuated inside <code>Plot</code>?</li>
</ol>
| Mr.Wizard | 121 | <p>It seems that indeed messages are suppressed somewhere in the evaluation of <code>Plot</code>. Interestingly this appears to happen some time after the first numeric evaluation. If we issue this message for each (numeric) point plotted we see that it is printed only once:</p>
<pre><code>f[x_?NumericQ] := (Message[InterpolatingFunction::"dmval", {21.06`}]; Sin[x])
Plot[f[x], {x, 0, 10}]
</code></pre>
<p>If we make <code>f</code> <a href="https://mathematica.stackexchange.com/a/2676/121">redefine itself</a> so that it only issues the message after the first point is evaluated, the message is never printed:</p>
<pre><code>f[x_?NumericQ] :=
(
f[y_?NumericQ] := (Message[InterpolatingFunction::"dmval", {21.06`}]; Sin[y]);
Sin[x]
)
Plot[f[x], {x, 0, 10}]
</code></pre>
<p>This shows that the suppression of this message is not the result of some special optimization while plotting an <code>InterpolatingFunction</code> (which might be done outside the normal evaluation chain) but rather something more general.</p>
<p>It should be noted that the <code>::dmval</code> message is not the only one suppressed, e.g.:</p>
<pre><code>f[x_?NumericQ] := (Message[qrs::argx, 1, 2]; Sin[x])
Plot[f[x], {x, 0, 10}]
</code></pre>
<p>We can further demonstrate that <code>Plot</code> does not appear to use the high-level function <code>Quiet</code> itself for this suppression, as the following does not restore the messages:</p>
<pre><code>Block[{Quiet = # &}, Plot[f[x], {x, 0, 10}] ]
</code></pre>
<p>One can get messages past Plot with something like this:</p>
<pre><code>Unprotect[Message]
Message[x : InterpolatingFunction::"dmval", other___] :=
Print @ Style[Row[{HoldForm[InterpolatingFunction::"dmval"], ": ",
StringForm[x, other]}], "Message"]
f[x_?NumericQ] := (Message[InterpolatingFunction::"dmval", {21.06`}]; Sin[x])
Plot[f[x], {x, 0, 10}, PlotPoints -> 5, MaxRecursion -> 0]
</code></pre>
<p>But there is no automatic limit to the number of messages printed, making this impractical to use as written.</p>
<p>I am continuing to explore this question.</p>
|
704,073 | <p>I encountered something interesting when trying to differentiate $F(x) = c$.</p>
<p>Consider: $\lim_{x→0}\frac0x$. </p>
<p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals?
I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p>
<p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
| wroobell | 130,926 | <p>You are in fact considering the function $f: \mathbb R \setminus \{0\} \to \mathbb R$ defined by $f(x) = \frac 0x$, so it is equal to $0$ on all of its domain. Let's look at the limit $\lim \limits_{x \rightarrow 0^+} f(x)$. The function is identically equal to $0$ on every open interval $(0,\varepsilon)$ for $\varepsilon>0$, hence the right-sided limit equals $0$. By analogy we also have $\lim \limits_{x \rightarrow 0^-} f(x)=0$. Whether the limit $\lim \limits_{x \rightarrow 0} f(x)$ exists is now a matter of convention. The way I was taught, it makes no sense to talk about the limit of a function outside its domain. But if you think otherwise, then since both one-sided limits exist and equal $0$, the limit exists and equals $0$.</p>
|
704,073 | <p>I encountered something interesting when trying to differentiate $F(x) = c$.</p>
<p>Consider: $\lim_{x→0}\frac0x$. </p>
<p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals?
I.e. why is the function $\frac0x = f(x)$, not undefined at $x=0$?</p>
<p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
| user133943 | 133,943 | <p>A function isn't just an expression, but you can ask whether a single expression can be applied to an argument. The expression $0^{-1}$ is rather meaningless, so you don't know how to get the behavior of the function $f(x)=0\cdot x^{-1}$ at $x=0$ from the expression.</p>
<p>Limits are just a way to describe the behavior (if it looks consistent enough that the limit exists) <strong>around</strong> the point. It doesn't state anything about the value of the function at the point. That is if
$a=\lim_{x\to b}f(x)$
then the function $f_1(x)=\left\{\begin{matrix}f(x)&\text{ if }x\neq b\\a&\text{ if }x=b\end{matrix}\right.$ is continuous.</p>
|
3,046,083 | <p>Is it true that the intersection of the closures of sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> is equal to the closure of their intersection?
<span class="math-container">$ cl(A)\cap{cl(B)}=cl(A\cap{B})$</span> ?</p>
| Kavi Rama Murthy | 142,385 | <p>No. Take <span class="math-container">$A=(-1,0), B=(0,1)$</span>. Note that <span class="math-container">$0$</span> is in the closure of both of these sets, so <span class="math-container">$cl(A)\cap cl(B)=\{0\}$</span>, while <span class="math-container">$A\cap B=\varnothing$</span> and hence <span class="math-container">$cl(A\cap B)=\varnothing$</span>.</p>
|
2,534,999 | <p>I tried to solve $z^3=(iz+1)^3$. I noticed that $(iz+1)^3=i(z-1)^3$ so $(\frac{z-1}{z})^3=i$. How to finish it?</p>
| Fred | 380,717 | <p>If $z^3=(iz+1)^3$, then $|z|=|iz+1|$, hence, if $z=x+iy$, we have $y=1/2$.</p>
<p>Therefore $z=x+\frac{i}{2}$.</p>
<p>Can you proceed ?</p>
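<p>A small numeric sketch of where this leads: since $z^3=(iz+1)^3$ forces $z=\omega(iz+1)$ for a cube root of unity $\omega$, i.e. $z=\omega/(1-i\omega)$, one can check that every root indeed has imaginary part $1/2$:</p>

```python
import cmath

# If z^3 = (iz+1)^3, then z = w*(iz+1) for some cube root of unity w,
# which solves to z = w/(1 - i*w).
roots = []
for k in range(3):
    w = cmath.exp(2j * cmath.pi * k / 3)          # cube root of unity
    z = w / (1 - 1j * w)
    assert abs(z**3 - (1j * z + 1)**3) < 1e-9     # z really solves the equation
    roots.append(z)

print([round(z.imag, 9) for z in roots])          # each imaginary part is 0.5
```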
|
4,527,429 | <p>I am confused as to how we open the abs value, do we get <span class="math-container">$e=0$</span> and <span class="math-container">$e=2x$</span>, or does the identity not exist?</p>
<p>Thanks.</p>
| Shaun | 104,041 | <p>Note that, if the identity <span class="math-container">$e$</span> did exist, then for negative <span class="math-container">$n$</span> we would have</p>
<p><span class="math-container">$$n=n*e=|n-e|\ge 0.$$</span></p>
|
1,767,682 | <p>I was thinking about sequences, and my mind came to one defined like this:</p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, 1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next n terms of the sequence are 1, followed by -1, and so on. Which led me to perhaps a stronger example, </p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next $2^n$ terms of the sequence are 1, followed by -1, and so on.</p>
<p>By the definition of convergence or by Cauchy's criterion, the sequence does not converge, as any N one may choose to define will have an occurrence of -1 after it, which must occur within the next N terms (and certainly this bound could be decreased)</p>
<p>However, due to the decreasing frequency of -1 in the sequence, I would be tempted to say that there is some intuitive way in which this sequence converges to 1. Is there a different notion of convergence that captures the way in which this sequence behaves?</p>
| marty cohen | 13,079 | <p>As André Nicolas wrote,
the Cesaro mean,
for which the $n$-th term
is the average of the
first $n$ terms
will do what you want.</p>
<p>In both your cases,
for large $n$,
if $(a_n)$ is your sequence,
if
$b_n = \frac1{n}\sum_{k=1}^n a_k
$,
then
$b_n \to 1$
since the number of $-1$'s
gets arbitrarily small
compared to the number of $1$'s.</p>
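<p>A quick numeric illustration of this (a sketch; I build the blocks as $2^k$ ones after the $k$-th occurrence of $-1$, which matches the OP's second example up to the exact indexing, and that indexing does not affect the limit):</p>

```python
# Build an initial segment of the sequence: each -1 is followed by a block
# of 2^k ones (k = 1, 2, ...), mimicking the OP's second example.
seq = []
for k in range(1, 15):
    seq.append(-1)
    seq.extend([1] * 2**k)

# Cesaro means b_n = (a_1 + ... + a_n) / n
partial, cesaro = 0, []
for n, a in enumerate(seq, start=1):
    partial += a
    cesaro.append(partial / n)

print(round(cesaro[-1], 4))   # close to 1, although seq itself never converges
```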
|
3,287,016 | <blockquote>
<p>Show <span class="math-container">$\lvert\sin z\rvert \leq C \lvert z\rvert$</span> <span class="math-container">$\forall \lvert z\rvert\leq 1$</span></p>
</blockquote>
<p>The question asks me to show the constant <span class="math-container">$C$</span> exists and to estimate it.</p>
<p>I only know <span class="math-container">$\sin z = \dfrac{e^{iz}-e^{-iz}}{2i}$</span>, so I want to find <span class="math-container">$C$</span> such that <span class="math-container">$\lvert e^{iz} - e^{-iz}\rvert \leq C \lvert z\rvert$</span> for <span class="math-container">$\lvert z\rvert \leq 1$</span>, but I do not know how to solve it, thank you for helping.</p>
| Robert Israel | 8,508 | <p>Hint: <span class="math-container">$|e^{iz}| \le e^{|z|}$</span> so <span class="math-container">$|\sin(z)| \le e$</span> for <span class="math-container">$|z|=1$</span>.</p>
|
3,287,016 | <blockquote>
<p>Show <span class="math-container">$\lvert\sin z\rvert \leq C \lvert z\rvert$</span> <span class="math-container">$\forall \lvert z\rvert\leq 1$</span></p>
</blockquote>
<p>The question asks me to show the constant <span class="math-container">$C$</span> exists and to estimate it.</p>
<p>I only know <span class="math-container">$\sin z = \dfrac{e^{iz}-e^{-iz}}{2i}$</span>, so I want to find <span class="math-container">$C$</span> such that <span class="math-container">$\lvert e^{iz} - e^{-iz}\rvert \leq C \lvert z\rvert$</span> for <span class="math-container">$\lvert z\rvert \leq 1$</span>, but I do not know how to solve it, thank you for helping.</p>
| copper.hat | 27,978 | <p>Note that <span class="math-container">$\sin' = \cos$</span> and <span class="math-container">$|\cos z| \le \cosh (\operatorname{im} z) \le \cosh 1$</span> for <span class="math-container">$|z| \le 1$</span>. Hence <span class="math-container">$|\sin z| \le (\cosh 1) |z|$</span>.</p>
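<p>Both bounds are easy to probe numerically; a small sketch sampling random points of the closed unit disk (note <span class="math-container">$\cosh 1\approx 1.543$</span>, which also sharpens the cruder bound <span class="math-container">$e$</span> from the other answer):</p>

```python
import cmath, math, random

random.seed(0)
coshone = math.cosh(1.0)
for _ in range(10000):
    # random point in the closed unit disk
    r, t = random.random(), random.uniform(0, 2 * math.pi)
    z = cmath.rect(r, t)
    # |sin z| <= cosh(1) * |z|  for |z| <= 1
    assert abs(cmath.sin(z)) <= coshone * abs(z) + 1e-12
```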
|
3,356,951 | <p>On SAT,scores range from 2000 to 2400, with two thirds of the scores falling in the range of 2200 to 2300. If we further assume that test scores are normally distributed in this range from 2000 to 2400, determine the mean and standard deviation.</p>
| Hassan A | 446,166 | <p>No matter how you fit a normal distribution to this data, it will always put positive probability outside of the 2000 to 2400 range. So it is not possible to fix the range to a bounded set and assume that the underlying distribution is normal.
However, we can approximate a normal distribution assuming that the probability of scores being less than 2000 is equal to the probability of scores being above 2400. </p>
<p>Now, let <span class="math-container">$\mu$</span> denote the mean and <span class="math-container">$\sigma^2$</span> denote the variance. Then,
<span class="math-container">$$\Phi(\frac{2300-\mu}{\sigma})-\Phi(\frac{2200-\mu}{\sigma})=\frac{2}{3}$$</span>
and
<span class="math-container">$$1-\Phi(\frac{2400-\mu}{\sigma})=\Phi(\frac{2000-\mu}{\sigma})$$</span></p>
<p>where <span class="math-container">$\Phi(\cdot)$</span> is the CDF of the standard normal. Solving these two equations gives <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma$</span>.</p>
|
1,671,111 | <p>I'm looking for an elegant way to show that, among <em>non-negative</em> numbers,
$$
\max \{a_1 + b_1, \dots, a_n + b_n\} \leq \max \{a_1, \dots, a_n\} + \max \{b_1, \dots, b_n\}
$$</p>
<p>I can show that $\max \{a+b, c+d\} \leq \max \{a,c\} + \max \{b,d\}$ by exhaustively checking all possibilities of orderings among $a,c$ and $b,d$.</p>
<p>But, I feel like there should be a more intuitive/efficient way to show this property for arbitrary sums like the one above.</p>
| user247608 | 247,608 | <p>How about proof by induction?</p>
<p>You've proved the following by slogging it out.
max(a+b,c+d) <= max(a,c) + max(b,d)</p>
<p>To show the induction step, consider the case for three terms:</p>
<p>max(a+b,c+d,e+f) = max(max(a+b,c+d), e+f ) ( max is associative).</p>
<pre><code>            <= max( max(a,c) + max(b,d), e+f )        (1st application of result)
            <= max( max(a,c), e ) + max( max(b,d), f ) (2nd application of result)
             = max(a,c,e) + max(b,d,f)                (associativity of max)
</code></pre>
<p>This is the pattern for the general case, adding the n+1 term x+y </p>
<p>max(a+b,c+d,.....y+z) = max(max(a+b,c+d,....), y+z )</p>
<pre><code>            <= max( max(a,c,.....) + max(b,d,.....), y+z )
            <= max( max(a,c,.....), y ) + max( max(b,d,.....), z )
             = max(a,c,.....,y) + max(b,d,.....,z)
</code></pre>
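<p>A randomized numeric sketch of the inequality being proved (each $a_k+b_k\le\max a+\max b$, so the maximum over $k$ is too):</p>

```python
import random

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(0, 100) for _ in range(n)]
    b = [random.uniform(0, 100) for _ in range(n)]
    # each a_k + b_k <= max(a) + max(b), hence so is the max over k
    lhs = max(x + y for x, y in zip(a, b))
    assert lhs <= max(a) + max(b)
```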
|
4,631,618 | <p>Consider this absolute value quadratic inequality</p>
<p><span class="math-container">$$ |x^2-4| < |x^2+2| $$</span></p>
<p>Right side is always positive for all real numbers,so the absolute value is not needed.</p>
<p>Now consider the cases for the left absolute value</p>
<ol>
<li><span class="math-container">$$ x^2-4 \geq 0 $$</span></li>
</ol>
<p>We get <span class="math-container">$$ x \geq \pm 2 $$</span></p>
<p>Solve for first case <span class="math-container">$$ x^2 - 4 < x^2+2 $$</span> the solution <span class="math-container">$$ 0 < 6$$</span> This is true for all real numbers; taking in consideration the boundary that x has to be greater equal +2 and -2 the first part of the solution I think should be <span class="math-container">$$ L_1 = [2, \infty) $$</span></p>
<ol start="2">
<li><span class="math-container">$$ x^2-4 < 0$$</span> <span class="math-container">$$ x < \pm 2 $$</span></li>
</ol>
<p>Solve for seonc case I get <span class="math-container">$$ x > \pm 1 $$</span> Solution for the second case considering the boundary of 2. should be <span class="math-container">$$ L_2 = (-2,-1) \cup (1,2) $$</span> Final solution;</p>
<p><span class="math-container">$$ L = (-2,-1) \cup (2,\infty) $$</span></p>
<p>According to the solutions,this is wrong; it should be <span class="math-container">$$ L = (-\infty,-1) \cup (1,\infty) $$</span> Rechecking their L1 and L2; L2 should be correct but for L1 they have <span class="math-container">$$ L_1 = R \backslash (-2,2) $$</span></p>
<p>So all real numbers except -2 and 2? Can anyone explain how this is the solution.First the sign is greater equal than +2 -2 shouldnt those numbers be included? Also we are looking for numbers GREATER than -2 and +2, since 2 is greater than -2 I assumed we only need to take numbers from to infinity,how does negative infinity come in consideration here?</p>
<p>Thanks in advance!</p>
| Tuvasbien | 702,179 | <p><strong>A short introduction to hyperbolic trigonometry:</strong></p>
<p>Let <span class="math-container">$\operatorname{ch}(x)=\frac{e^x+e^{-x}}{2}$</span> and <span class="math-container">$\operatorname{sh}(x)=\frac{e^x-e^{-x}}{2}$</span>. Using the Taylor expansion of <span class="math-container">$\exp$</span>, namely <span class="math-container">$e^x=\sum_{n=0}^{+\infty}\frac{x^n}{n!}$</span> we get that</p>
<p><span class="math-container">$$ \operatorname{ch}(x)=\sum_{n=0}^{+\infty}\frac{x^{2n}}{(2n)!} \text{ and } \operatorname{sh}(x)=\sum_{n=0}^{+\infty}\frac{x^{2n+1}}{(2n+1)!} $$</span></p>
<p><strong>Back to the question :</strong></p>
<p>On the one hand,
<span class="math-container">$$ \sum_{n=1}^{+\infty}\frac{2n(2n-1)}{(2n)!}=\sum_{n=1}^{+\infty}\frac{1}{(2n-2)!}=\operatorname{ch}(1) $$</span>
on the other hand,
<span class="math-container">$$ \sum_{n=1}^{+\infty}\frac{2n(2n-1)}{(2n)!}=4\sum_{n=1}^{+\infty}\frac{n^2}{(2n)!}-\sum_{n=1}^{+\infty}\frac{1}{(2n-1)!}=4\sum_{n=1}^{+\infty}\frac{n^2}{(2n)!}-\operatorname{sh}(1) $$</span>
thus
<span class="math-container">$$ \sum_{n=1}^{+\infty}\frac{n^2}{(2n)!}=\frac{\operatorname{ch}(1)+\operatorname{sh}(1)}{4}=\frac{e}{4}. $$</span>
This means that
<span class="math-container">$$ \sum_{n=0}^{+\infty}\frac{(n+1)^2}{(2n)!}=\sum_{n=1}^{+\infty}\frac{n^2}{(2n)!}+\operatorname{sh}(1)+\operatorname{ch}(1)=\frac{5e}{4}. $$</span></p>
<p><em>Remark :</em></p>
<p>I've used hyperbolic trigonometry only to lighten the calculations, but you can of course replace <span class="math-container">$\operatorname{ch}$</span> and <span class="math-container">$\operatorname{sh}$</span> with their expressions using exponentials.</p>
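<p>A quick numerical check of the closed form, using only the Python standard library (the series converges extremely fast, so a few dozen terms suffice):</p>

```python
import math

# Partial sum of sum_{n>=0} (n+1)^2 / (2n)!  -- it converges very fast.
s = sum((n + 1)**2 / math.factorial(2 * n) for n in range(30))
print(s, 5 * math.e / 4)
assert abs(s - 5 * math.e / 4) < 1e-12
```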
|
4,451,894 | <p><strong>Problem</strong><br />
There is a knight on an infinite chessboard. After moving one step, there are <span class="math-container">$8$</span> possible positions, and after moving two steps, there are <span class="math-container">$33$</span> possible positions. The possible position after moving n steps is <span class="math-container">$a_n$</span>, find the formula for <span class="math-container">$a_n$</span>.</p>
<hr />
<p>I found this sequence is <a href="http://oeis.org/A118312" rel="nofollow noreferrer">http://oeis.org/A118312</a></p>
<p>But I can't understand this Recurrence Relation</p>
<p><span class="math-container">$$a_n = 3a_{n-1} - 3a_{n-2} + a_{n-3}, \quad\quad n\geq3$$</span></p>
<p>Can someone give the intuition for this relationship?</p>
| Community | -1 | <h2>An intuitive setup</h2>
<p>The growth of the number of knight moves for <span class="math-container">$n \ge 3$</span> can be modelled by an octagon increasing linearly in size, simulated below</p>
<p><a href="https://i.stack.imgur.com/TRVOA.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TRVOA.gif" alt="enter image description here" /></a></p>
<p><em>Simulation taken from <a href="https://math.stackexchange.com/a/1093185/1044958">here</a>.</em></p>
<p>We will use the following image to construct the recurrence relation.</p>
<p><a href="https://i.stack.imgur.com/yGa1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yGa1H.png" alt="enter image description here" /></a></p>
<p>In the image above,
<span class="math-container">$$a_n = \text{black points + red points + green points + blue points}$$</span>
<span class="math-container">$$a_{n-1} - a_{n-2} = (\text{black points + red points + green points}) - (\text{black points + red points}) = \text{green points}$$</span>
<span class="math-container">$$a_{n-3} = \text{black points}$$</span></p>
<p>Hence, the relation is equivalent to proving
<span class="math-container">$$3\cdot\text{green points} + \text{black points} = \text{black points + red points + green points + blue points}$$</span>
<span class="math-container">$$\iff 2\cdot\text{green points} = \text{red points + blue points}$$</span></p>
<p>As the growth of the number of points is linear, we have
<span class="math-container">$$a = \text{red points}$$</span>
<span class="math-container">$$a + d = \text{green points}$$</span>
<span class="math-container">$$a + 2d = \text{blue points}$$</span></p>
<p>It holds that
<span class="math-container">$$2(a + d) = a + (a + 2d)$$</span></p>
<p><span class="math-container">$\blacksquare$</span></p>
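<p>A brute-force sketch in Python reproduces the initial values quoted in the question and the recurrence; since the octagon grows linearly, <span class="math-container">$a_n$</span> is eventually quadratic in <span class="math-container">$n$</span>, which is exactly what an order-3 recurrence with characteristic polynomial <span class="math-container">$(t-1)^3$</span> encodes (I only test it for moderately large <span class="math-container">$n$</span>, safely inside the quadratic regime):</p>

```python
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

# positions reachable in exactly n moves (revisits allowed)
cur = {(0, 0)}
a = [len(cur)]                      # a[0] = 1
for n in range(1, 13):
    cur = {(x + dx, y + dy) for (x, y) in cur for (dx, dy) in MOVES}
    a.append(len(cur))

print(a[1], a[2])                   # 8 33, as in the question
# once the octagon grows linearly, a_n is quadratic, so the recurrence holds:
for n in range(9, 13):
    assert a[n] == 3 * a[n - 1] - 3 * a[n - 2] + a[n - 3]
```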
|
172,366 | <blockquote>
<p>What will be the value of $P(12)+P(-8)$ if $P(x)=x^{4}+ax^{3}+bx^{2}+cx+d$
provided that $P(1)=10$, $P(2)=20$, $P(3)=30$?</p>
</blockquote>
<p>I put these values and got three simultaneous equations in $a, b, c, d$. What is the smarter way to approach these problems?</p>
| Did | 6,179 | <p>Two remarks, to avoid almost every computation:</p>
<ul>
<li>The polynomial $P(x)-10x$ has roots $1$, $2$ and $3$, hence there exists a polynomial $Q$ such that $P(x)-10x=(x-1)(x-2)(x-3)Q(x)$. </li>
<li>The polynomial $P(x)-10x$ has degree $4$ and leading coefficient $1$, hence $Q(x)=x+z$ for some unknown constant $z$ whose value will be irrelevant.</li>
</ul>
<p>Thus, $P(12)+P(-8)=10\cdot(12-8)+11\cdot10\cdot9\cdot(12+z)+9\cdot10\cdot11\cdot(8-z)$, that is, $P(12)+P(-8)=10\cdot4+11\cdot10\cdot9\cdot(12+z+8-z)=40+990\cdot20=19840$.</p>
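<p>This is easy to verify numerically; the value does not depend on the unknown constant $z$ (a quick sketch):</p>

```python
# P(x) - 10x = (x-1)(x-2)(x-3)(x+z) for an arbitrary constant z.
def P(x, z):
    return 10 * x + (x - 1) * (x - 2) * (x - 3) * (x + z)

for z in (-7, 0, 3, 12.5):
    # the interpolation conditions hold for every z ...
    assert P(1, z) == 10 and P(2, z) == 20 and P(3, z) == 30
    # ... and so does the target value
    assert P(12, z) + P(-8, z) == 19840
```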
|
172,366 | <blockquote>
<p>What will be the value of $P(12)+P(-8)$ if $P(x)=x^{4}+ax^{3}+bx^{2}+cx+d$
provided that $P(1)=10$, $P(2)=20$, $P(3)=30$?</p>
</blockquote>
<p>I put these values and got three simultaneous equations in $a, b, c, d$. What is the smarter way to approach these problems?</p>
| Yuki | 31,172 | <p>Another way of doing this:</p>
<p>We try to find reals $e,f,g$ such that $P(12)+P(-8)=eP(1)+fP(2)+gP(3)$. So, if we try to match each "$x^k$ evaluated" on both sides, we get a system of equations:
$$
\left\{\begin{array}{ccc}
1^ke+2^kf+3^kg&=&12^k+(-8)^k
\end{array}\right.,\quad k=0,\cdots,4
$$
In particular,
$$
\left\{\begin{array}{ccc}
e+f+g&=&1+1\\
e+2f+3g&=&12+(-8)\\
e+2^2f+3^2g&=&12^2+(-8)^2
\end{array}\right.
$$
and we obtain $e=100,f=-198,g=100$. Verifying the other values of $k$:
$$
\left\{\begin{array}{ccc}
100+2^3\cdot(-198)+3^3\cdot100&=&12^3+(-8)^3\\
100+2^4\cdot(-198)+3^4\cdot100&=&12^4+(-8)^4 - 19800
\end{array}\right.
$$
<p>So, $P(12)+P(-8)=100P(1)-198P(2)+100P(3)+19800=19840$</p>
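<p>The numbers can be re-checked mechanically (a quick sketch):</p>

```python
e, f, g = 100, -198, 100

# e, f, g reproduce 12^k + (-8)^k for k = 0, 1, 2, 3 ...
for k in range(4):
    assert e * 1**k + f * 2**k + g * 3**k == 12**k + (-8)**k
# ... and fall short by exactly 19800 at k = 4:
assert 12**4 + (-8)**4 - (e + f * 2**4 + g * 3**4) == 19800

# hence P(12) + P(-8) = 100 P(1) - 198 P(2) + 100 P(3) + 19800
assert e * 10 + f * 20 + g * 30 + 19800 == 19840
```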
|
3,454,095 | <p>Minimize <span class="math-container">$\;\;\displaystyle \frac{(x^2+1)(y^2+1)(z^2+1)}{ (x+y+z)^2}$</span>, if <span class="math-container">$x,y,z>0$</span>.
By setting gradient to zero I found <span class="math-container">$x=y=z=\frac{1}{\displaystyle\sqrt{2}}$</span>, which could minimize the function.</p>
<blockquote>
<p>Question from Jalil Hajimir</p>
</blockquote>
| Xiaohai Zhang | 628,555 | <p>You first fix <span class="math-container">$y, z$</span> and let <span class="math-container">$x > 0$</span> vary. Taking derivative with respect to <span class="math-container">$x$</span>, dropping all those nonnegative terms such as <span class="math-container">$y^2+1$</span> to simplify notation, leads to
<span class="math-container">$$ \frac{d (OP\ full\ expr)}{d x} \approx x - \frac{(x^2+1)}{x+y+z} = \frac{x(y+z) - 1}{x+y+z}, $$</span>
where the <span class="math-container">$\approx$</span> means I dropped some positive terms (they do not affect my analysis of the sign of the derivative).</p>
<p>It is evident that the gradient is negative for small <span class="math-container">$x$</span>, and once <span class="math-container">$x > \frac{1}{y+z}$</span> the gradient becomes positive. Hence the function is minimized at <span class="math-container">$x = \frac{1}{y+z}$</span> when <span class="math-container">$y, z$</span> are fixed. Similarly, the function is minimized at <span class="math-container">$y = \frac{1}{x+z}$</span> when <span class="math-container">$x, z$</span> are fixed. And the function is minimized at <span class="math-container">$z = \frac{1}{x+y}$</span> when <span class="math-container">$x, y$</span> are fixed.</p>
<p>Let the global minimizing point be <span class="math-container">$x_0, y_0, z_0$</span>, given the previous arguments, we must have <span class="math-container">$x_0= \frac{1}{y_0+z_0}, y_0 = \frac{1}{x_0+z_0}, z_0 = \frac{1}{x_0+y_0} \Rightarrow x_0 = y_0 = z_0 = \frac{\sqrt{2}}{2}$</span> (otherwise, we can find a point with smaller value).</p>
<p>Hence the global minimum is unique at <span class="math-container">$x = y = z = \frac{\sqrt{2}}{2}$</span> if one exists. </p>
<p>For a rigorous argument that a global minimum exists, one can look at a compact set <span class="math-container">$[\epsilon, N]\times[\epsilon, N]\times [\epsilon, N]$</span>. The function must have a global minimum on the compact set. One can easily argue it does not take its minimum at the boundary (that would contradict the requirements previously stated; alternatively, compare the function value on the boundary with that of <span class="math-container">$x = y = z = \frac{\sqrt{2}}{2}$</span>). Hence the minimum MUST be in the interior (with gradients being zero).</p>
<p>Hence <span class="math-container">$x = y = z = \frac{\sqrt{2}}{2}$</span> is the unique global minimum.</p>
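<p>A numeric sanity check of the conclusion (a sketch; random search over the positive octant never beats the critical point):</p>

```python
import random

def F(x, y, z):
    return (x*x + 1) * (y*y + 1) * (z*z + 1) / (x + y + z)**2

s = 2**0.5 / 2
fmin = F(s, s, s)
assert abs(fmin - 0.75) < 1e-9         # the minimum value is 3/4

random.seed(0)
for _ in range(20000):
    x, y, z = (random.uniform(0.01, 10) for _ in range(3))
    assert F(x, y, z) >= fmin - 1e-9
```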
|
2,247,522 | <p>Suppose $X$ and $Y$ are discrete random variables. Show that $$E(X \mid Y)=E(X \mid Y^3).$$</p>
<p>The conditional expected value of a discrete random variable is expressed as
$$E(X \mid Y)=\sum xp_{X \mid Y}(x \mid y),$$
where$$p_{X \mid Y}(x \mid y)=\frac{p_{X,Y}(x,y)}{p_Y(y)}.$$ </p>
<p>Similarly, you can say that
$$E(X \mid Y^3)=\sum x p_{X \mid Y^3}(x \mid y^3),$$
where$$p_{X \mid Y^3}(x \mid y^3)=\frac{p_{X,Y^3}(x,y^3)}{p_{Y^3}(y^3)}.$$ </p>
<p>The goal is to show that </p>
<p>$$\sum xp_{X \mid Y}(x \mid y)=\sum xp_{X \mid Y^3}(x \mid y^3).$$</p>
<p>From here I don't really know how to show that the two are equal, some help would be appreciated. </p>
| Jay Zha | 379,853 | <p>Here is a generic approach. Strictly speaking, $\mathbb E[X|Y]$ is just notation; what it really means is $\mathbb E[X|\sigma(Y)]$, where $\sigma(Y)$ is the $\sigma$-field generated by $Y$ (see its definition below).</p>
<p><em>Proof.</em> We need to show that $\sigma(Y)=\sigma(Y^3)$.</p>
<p>Let $f(x)=x^3$, $x \in \mathbb R$, then $Y^3=f \circ Y$
$$\sigma(Y)=\{Y^{-1}(A)|A \in \mathcal B(\mathbb R)\}=\{Y^{-1}\circ f^{-1}\circ f(A)|A \in \mathcal B(\mathbb R)\}$$</p>
<p>$$\sigma(Y^3)=\{(f\circ Y)^{-1}(A)|A \in \mathcal B(\mathbb R)\}=\{Y^{-1} \circ f^{-1}(A)|A \in \mathcal B(\mathbb R)\}$$</p>
<p>Now notice that both $f(x)=x^3$ and $f^{-1}(x)=\sqrt[3]{x}$ are monotone functions, and thus both of them are Borel-measurable (This is by Prop 5.10 of <em>Real Analysis for Graduate Students Richard F. Bass</em>).</p>
<p>Thus the above two sigma algebras are the same, because: $f$ Borel-measurable, we get $\sigma(Y^3)\subset \sigma(Y)$; $f^{-1}$ Borel-measurable, we get $\sigma(Y) \subset \sigma(Y^3)$.</p>
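<p>For discrete variables the statement just says that grouping by $Y$ and grouping by $Y^3$ induce the same partition, because $y\mapsto y^3$ is injective. A small simulation sketch:</p>

```python
import random
from collections import defaultdict

random.seed(0)
# a finite sample from a discrete joint distribution of (X, Y)
sample = [(random.randint(-3, 3), random.randint(-3, 3)) for _ in range(5000)]

def cond_exp(pairs, key):
    """E[X | key(Y)] as a map from observed key(Y) values to averages."""
    groups = defaultdict(list)
    for x, y in pairs:
        groups[key(y)].append(x)
    return {k: sum(v) / len(v) for k, v in groups.items()}

e1 = cond_exp(sample, lambda y: y)        # condition on Y
e2 = cond_exp(sample, lambda y: y**3)     # condition on Y^3

# same conditional expectations, just indexed by y versus y^3
assert all(abs(e1[y] - e2[y**3]) < 1e-12 for y in e1)
```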
|
2,041,441 | <p>$\binom{74}{37}-2$ is divisible by :</p>
<p>a) $1369$</p>
<p>b) $38$</p>
<p>c) $36$ </p>
<p>d) $none$ $of$ $ these$</p>
<p>I have no idea how to solve this...I tried writing $\binom{74}{37}$ in some useful form but its not helping...any clues?? Thanks in advance!!</p>
| 2'5 9'2 | 11,123 | <p>The first line of the following is true by a combinatorial argument (among other arguments) where you count how many ways to choose $p$ marbles from a collection of $2p$ marbles, where half are red and half are blue.
$$\begin{align}
\binom{p+p}{p}
&=\sum_{k=0}^p\binom{p}{k}\binom{p}{p-k}\\
&=2+\sum_{k=1}^{p-1}\binom{p}{k}\binom{p}{p-k}\\
\end{align}$$</p>
<p>Each binomial coefficient in each term in the sum is divisible by $p$ (if $p$ is prime). So mod $p^2$, $$\binom{2p}{p}\equiv2$$ With $p=37$, this shows $1369=37^2$ divides $\binom{74}{37}-2$.</p>
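<p>With exact integer arithmetic this congruence is immediate to test, and the particular case settles the multiple choice (a quick sketch):</p>

```python
from math import comb

# the Vandermonde argument above gives binom(2p, p) ≡ 2 (mod p^2) for primes p
for p in (3, 5, 7, 11, 13, 37):
    assert (comb(2 * p, p) - 2) % (p * p) == 0

# the multiple-choice case: 1369 = 37^2 divides binom(74, 37) - 2
assert (comb(74, 37) - 2) % 1369 == 0
```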
<hr>
<p>On a multiple choice test, I would expect that the intention is to check if a person has simply memorized that $\binom{2p}{p}\equiv2$ mod $p^2$.</p>
|
96,369 | <p>Is there a mathematical term which would include both liminf and limsup? (In a similar way we talk about extrema to describe both maxima and minima?)</p>
<p>The only thing I was able to find was that some authors use the name extreme limits; see google books: <a href="http://www.google.com/search?tbm=bks&tbo=1&q=%22extreme+limits%22+%22lim+sup%22" rel="nofollow">"extreme limits" "lim sup"</a>. It is used e.g. in Thomson, Bruckner, Bruckner: Elementary real analysis, <a href="http://books.google.com/books?id=6l_E9OTFaK0C&pg=PA70" rel="nofollow">p.70</a></p>
<p>EDIT: I've checked a few more searches to see to which extent the
above term is widespread. Searching for
<a href="http://www.google.com/search?tbm=bks&tbo=1&q=%22extreme+limits%22" rel="nofollow">"extreme limits"</a> only
gives a lot of non-mathematical result, so these are basically
attempts to filter out results that are interesting for this
question.</p>
<ul>
<li>Google Books: <a href="http://www.google.com/search?tbm=bks&tbo=1&q=%22extreme+limits%22+sequence" rel="nofollow">"extreme limits" sequence</a></li>
<li>Google Books: <a href="http://www.google.com/search?tbm=bks&tbo=1&q=%22extreme+limits%22+function" rel="nofollow">"extreme limits" function</a></li>
<li>Google books: <a href="http://www.google.com/search?tbm=bks&q=%22Subject%3AMathematics%22+%22extreme+limits%22" rel="nofollow">"Subject:Mathematics" "extreme limits"</a></li>
<li>Google Scholar - searching in Engineering, Computer Science, and
Mathematics: <a href="http://scholar.google.com/scholar?as_subj=eng&q=%22extreme%20limits%22" rel="nofollow">"extreme limits"</a></li>
</ul>
<p>NOTE: This question arose more or less out of curiosity. When I discussed limit superior and limit inferior with some other MSE users, we found out that none of us was familiar with a name which would describe these two things. (Of course, it might be the case that such term is not needed very much, if we are only able to find something which is used frequently.)</p>
| Will Jagy | 10,400 | <p>I would go with cluster point, as the target <span class="math-container">$\mathbb R$</span> is a metric space. See <a href="https://www.planetmath.org/typesoflimitpoints" rel="nofollow noreferrer">this PlanetMath link</a>.</p>
|
3,998,098 | <p>I was asked to determine the locus of the equation
<span class="math-container">$$b^2-2x^2=2xy+y^2$$</span></p>
<p>This is my work:</p>
<blockquote>
<p>Add <span class="math-container">$x^2$</span> to both sides:
<span class="math-container">$$\begin{align}
b^2-x^2 &=2xy+y^2+x^2\\
b^2-x^2 &=\left(x+y\right)^2
\end{align}$$</span></p>
</blockquote>
<p>I see that this is similar to the equation of a circle. How can I find the locus of this expression?</p>
| Piquito | 219,998 | <p>HINT.-A way to study the curve is to consider it as a union of two graphs of functions (similar to how you would see the two functions <span class="math-container">$y=\pm\sqrt{r^2-x^2}$</span> for a circle), so you get two explicit functions:
<span class="math-container">$$y=-x+\sqrt{b^2-x^2}\\y=-x-\sqrt{b^2-x^2}$$</span> whose domains are <span class="math-container">$[-b,b]$</span>.</p>
<p>This way you have an ellipse for each value of <span class="math-container">$b$</span> whose axes lie (approximately) along the lines
<span class="math-container">$$y=0.618x\\y=-1.618x$$</span></p>
<p>For example, for <span class="math-container">$b=2$</span> you have (approximately) the semi-axes <span class="math-container">$1.2361$</span> and <span class="math-container">$3.2361$</span>.</p>
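<p>A short numeric sketch: sampled points on the two branches satisfy the original equation, and for <span class="math-container">$b=2$</span> the extreme distances to the origin come out near <span class="math-container">$\sqrt 5-1\approx 1.236$</span> and <span class="math-container">$\sqrt 5+1\approx 3.236$</span>:</p>

```python
import math

b = 2.0
dists = []
for i in range(1, 4000):
    x = -b + 2 * b * i / 4000          # sample x in (-b, b)
    root = math.sqrt(b * b - x * x)
    for y in (-x + root, -x - root):
        # each branch satisfies b^2 - 2x^2 = 2xy + y^2
        assert abs(b * b - 2 * x * x - (2 * x * y + y * y)) < 1e-9
        dists.append(math.hypot(x, y))

print(round(min(dists), 3), round(max(dists), 3))   # ~1.236 and ~3.236
```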
|
2,900,372 | <p>Let $R=\mathbb{Z}[\sqrt{-2}]$</p>
<blockquote>
<p>1) Is it true that for every free $R-$module $M$ of rank=$n$ that $M$ is a free $\mathbb{Z}-$module of rank=$2n$?</p>
<p>2) Find two non-isomorphic $R$-modules with $19$ elements each.</p>
</blockquote>
<p>For (1) I tried to begin with a basis $\{m_1,...,m_n\}$ of $M$ as a $R-$module and construct a basis $\{m_1',...,m_{2n}'\}$ of it as a $\mathbb{Z}-$module.</p>
<p>For (2) I have no idea.</p>
<p>Any hints?? </p>
| Angina Seng | 436,618 | <p>For $2$ look for modules $R/I$ where $I$ is an ideal of $R$. As $R$
is a principal ideal domain, $I=\left<\alpha\right>$ for some $\alpha=a+b\sqrt{-2}$. Then $|R/\left<\alpha\right>|=|\alpha|^2=a^2+2b^2$. If we
have $\alpha=\eta\beta$ where $\eta$ is a unit in $R$, then $\left<\alpha\right>
=\left<\beta\right>$. But the only units in $R$ are $\pm 1$. Taking $\alpha=1+3\sqrt{-2}$ and $\beta=-1+3\sqrt{-2}$, then
$|R/\left<\alpha\right>|=|R/\left<\beta\right>|=19$. Can you prove
$R/\left<\alpha\right>\not\cong R/\left<\beta\right>$ as $R$-modules?</p>
|
1,284,388 | <p>Solving for variable $d$:</p>
<p>$v = \frac{1}{2}hd^2 + 9.9$</p>
<p>$-2(v - 9.9) = hd^2$</p>
<p>$-2v + 19.8 = hd^2$</p>
<p>$d = \sqrt{\frac{-2v + 19.8}{h}}$</p>
<p>The correct answer is:</p>
<p>$d = \pm\sqrt{\frac{2v - 19.8}{h}}$</p>
| user4640007 | 230,456 | <p>$$V = \frac{1}{2}hd^2 + 9.9$$</p>
<p>Subtract 9.9 from both sides and multiply everything by 2</p>
<p>$$2(V-9.9) = hd^2$$</p>
<p>1) Divide by $h$ (likely the height in the problem)</p>
<p>2) Solve for $d$ by taking the square root of both sides</p>
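<p>A two-line numeric check of the correct algebra, with hypothetical sample values for $V$ and $h$ (note that the OP's version, $-2(V-9.9)/h$, would put a negative quantity under the root whenever $V>9.9$ and $h>0$):</p>

```python
import math

V, h = 30.0, 1.6                       # hypothetical sample values with V > 9.9
d = math.sqrt(2 * (V - 9.9) / h)       # d = +sqrt(2(V - 9.9)/h)
assert abs(0.5 * h * d * d + 9.9 - V) < 1e-12   # plugs back into V = (1/2)hd^2 + 9.9
assert 2 * (V - 9.9) / h > 0 and -2 * (V - 9.9) / h < 0
```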
|
963,503 | <p>Vectors $a$, $b$ and $c$ all have length one. $a + b + c = 0$. Show that
$$
|a-c| = |a-b| = |b-c|
$$
I am not sure how to get started, as writing out the norms didn't help and there is no way to manipulate
$$
|a-c| \le |a-b| + |b-c|
$$
to get an equality. I just need an idea of where to start.</p>
| Paul | 17,980 | <p><strong>Proof:</strong> If there is <span class="math-container">$x \in Z\setminus X$</span>, then</p>
<p><span class="math-container">$x\notin X\cap (Y \cup Z)$</span>, however, <span class="math-container">$x\in (X\cap Y) \cup Z$</span>. A contradiction!</p>
|
204,842 | <p>A probability measure defined on a sample space $\Omega$ has the following properties:</p>
<ol>
<li>For each $E \subset \Omega$, $0 \le P(E) \le 1$</li>
<li>$P(\Omega) = 1$</li>
<li>If $E_1$ and $E_2$ are disjoint subsets $P(E_1 \cup E_2) = P(E_1) + P(E_2)$</li>
</ol>
<p>The above definition defines a measure that is finitely additive (by induction) but not necessarily countably additive.</p>
<p>What is a probability measure that would be finitely additive but not countably additive (for a countable sample space $\Omega$)?</p>
<p>The example that I have seen most commonly on forums (this and elsewhere) is to set $P(E) = 0$ if $E$ is finite and $P(E) = 1$ if $E$ is co-finite. But that is <strong>not</strong> a probability measure as defined above since it is not defined on every subset of $\Omega$. </p>
<p>So an example of such a probability measure, or what is the reasoning that a finitely additive probability measure is not always countably additive?</p>
| hot_queen | 72,316 | <p>In this interesting <a href="http://www.math.wisc.edu/~akumar/INDUCED_IDEALS.pdf" rel="nofollow">note</a>, the author proves the following: It is consistent to have a finitely additive total extension of Lebesgue measure on $[0, 1]$ such that, although the measure-zero sets form a sigma-ideal, there is no real-valued measurable cardinal below the continuum.</p>
|
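The question asks for a concrete example; a standard construction (sketched here as an addition, relying on the axiom of choice) takes a nonprincipal ultrafilter $\mathcal{U}$ on $\Omega=\mathbb{N}$ and sets:

```latex
% Sketch: a total, finitely additive probability measure on N that is
% not countably additive, built from a nonprincipal ultrafilter U.
\[
P(E) =
\begin{cases}
1 & \text{if } E \in \mathcal{U},\\
0 & \text{otherwise.}
\end{cases}
\]
% Finite additivity: disjoint sets cannot both lie in U (their
% intersection is empty), and E_1 \cup E_2 \in U iff E_1 \in U or E_2 \in U.
% Countable additivity fails: every singleton is finite, hence not in U, so
\[
1 = P(\mathbb{N}) = P\Big(\bigcup_{n \in \mathbb{N}} \{n\}\Big)
\neq \sum_{n \in \mathbb{N}} P(\{n\}) = 0.
\]
```

This $P$ is defined on every subset of $\Omega$, so it meets the totality requirement that the cofinite example in the question fails.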
3,965,834 | <p>Does this sum converge or diverge?</p>
<p><span class="math-container">$$ \sum_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span></p>
<p>To solve this I would use <span class="math-container">$$ \sin(z) = \sum \limits_{n=0}^{\infty}(-1)^n\frac{z^{2n+1}}{(2n+1)!} $$</span></p>
<p>and make it to <span class="math-container">$$\sum \limits_{n=0}^{\infty}\sin(n)\cdot\frac{(n^2+3)}{2^n} = \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} \cdot \sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} $$</span></p>
<p>and since <span class="math-container">$$\sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} \text{ and } \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} $$</span></p>
<p>converges <span class="math-container">$$ \sum \limits_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span>
would also converge.</p>
<p>Is my assumption true? I'm also a bit scared to use it since I've got the sin(z) equation from a source outside the stuff that my professor gave us</p>
| Uri Toti | 834,026 | <p>Since you don't know how <span class="math-container">$\sin(n)$</span> behaves as <span class="math-container">$n\to \infty$</span>, you can use the fact that if a series converges absolutely, then the original series converges. So you can see that</p>
<p><span class="math-container">$\sum^{\infty}_{n=0} \left|\frac{\sin(n)(n^2+3)}{2^n}\right| \leqslant \sum^{\infty}_{n=0} \frac{n^2+3}{2^n}$</span></p>
<p>and I'll leave the rest to you... I just wanted to give some mental momentum.</p>
|
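Numeric evidence for the conclusion (an addition, not part of the original answer): the geometric factor $2^n$ dominates, so partial sums stabilize almost immediately.

```python
import math

# Partial sums of sin(n)*(n^2 + 3)/2**n; the tail beyond n = 50 is
# bounded by a sum of terms (n^2 + 3)/2**n, which is astronomically small.
def partial(N):
    return sum(math.sin(n) * (n * n + 3) / 2**n for n in range(N + 1))

s50, s100 = partial(50), partial(100)
print(s50, abs(s100 - s50))  # the difference is far below 1e-9
```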
2,825,522 | <p>I have this problem:</p>
<p><a href="https://i.stack.imgur.com/blD6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blD6N.png" alt="enter image description here"></a></p>
<p>I have not managed to solve the exercise, but this is my breakthrough:</p>
<p><a href="https://i.stack.imgur.com/0dTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dTdO.jpg" alt="enter image description here"></a></p>
<p>How can I continue to find it?</p>
| soktinpk | 188,945 | <p>An easy, general way to solve these problems is to draw a circle around the figure. Then by symmetry, each of the 6 arcs has a measure of $360^\circ/6=60^\circ$. The angle intercepts two arcs on one side and one arc on the other, resulting in $\frac{120^\circ+60^\circ}{2}=90^\circ$.</p>
<p>In the circle below arc $DCB$ measures $120^\circ$ and $EF$ measures $60^\circ$, so by an angle-intercepting-arc theorem, the angle is the average of the intercepted arcs: $90^\circ$.</p>
<p><a href="https://i.stack.imgur.com/uBG9N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uBG9N.png" alt="enter image description here"></a></p>
|
2,268,345 | <p>Find the value of $$S=\sum_{n=1}^{\infty}\left(\frac{2}{n}-\frac{4}{2n+1}\right)$$ </p>
<p>My Try: we have</p>
<p>$$S=2\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{2}{2n+1}\right)$$ </p>
<p>$$S=2\left(1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+\cdots\right)$$ so</p>
<p>$$S=2\left(1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\cdots\right)$$ But we know</p>
<p>$$\ln2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$$ So</p>
<p>$$S=2(2-\ln 2)$$</p>
<p>Is this correct?</p>
| Στέλιος | 403,502 | <p>We can use the digamma function $\psi$. It is known that</p>
<p>$$\psi(z+1)=-\gamma+\sum_{n=1}^{\infty}\dfrac{z}{n(n+z)},\ \ \forall z \in \mathbb{C} \setminus \{-1,-2,...\}.$$</p>
<p>Thus for $z=\frac{1}{2}$ we obtain</p>
<p>$$\psi\left(\dfrac{3}{2}\right)=-\gamma+\sum_{n=1}^{\infty}\dfrac{1}{n(2n+1)}=-\gamma+\sum_{n=1}^{\infty}\left(\dfrac{1}{n}-\dfrac{2}{(2n+1)}\right)=-\gamma+\dfrac{S}{2}.$$</p>
<p>From the other hand, for the half-integer values of $\psi$ we have </p>
<p>$$\psi\left(n+\dfrac{1}{2}\right)=-\gamma-2\ln 2+\sum_{k=1}^n\dfrac{2}{2k-1},$$</p>
<p>from where, for $n=1$, we conclude that</p>
<p>$$\psi\left(\dfrac{3}{2}\right)=-\gamma-2\ln 2+2.$$</p>
<p>Finally $-\gamma+\dfrac{S}{2}=-\gamma-2\ln 2+2 \Rightarrow S=4(1-\ln 2),$
as Simply Beautiful Art mentioned.</p>
<p><em>Note:</em> the convergence of $S$ can be justified by comparison with the Basel problem series $\sum 1/n^2$, since each term equals $\frac{2}{n(2n+1)}$.</p>
|
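A numeric check of the closed form (an addition): since $\frac{2}{n}-\frac{4}{2n+1}=\frac{2}{n(2n+1)}$, the tail beyond $N$ is $O(1/N)$, so a partial sum with large $N$ should approach $4(1-\ln 2)\approx 1.2274$.

```python
import math

# Partial sum of S = sum_{n>=1} 2/(n*(2n+1)) versus the closed form 4*(1 - ln 2).
N = 200_000
S_partial = sum(2.0 / (n * (2 * n + 1)) for n in range(1, N + 1))
print(S_partial, 4 * (1 - math.log(2)))
```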