1,034,697
<p>The exercise goes like this:</p> <p>-Let $W= \{(x,y,z)\mid 2x+3y-z=0\}$. Then $W\subseteq\mathbb{R}^3$; find the dimension of $W$.</p> <p>-Find the dimension of $\mathbb{R}^3/W$.</p> <p>This was a problem from my algebra exam. It was a team exam, and this problem was solved by another member of the team (he had it right); his solution goes like this:</p> <p>Let's take the natural projection $\mathbb{R}^3\to \mathbb{R}^3/W$. We have $\dim \mathbb{R}^3=3$ and $\dim \ker = \dim W=2$. Because the projection is surjective, by the dimension theorem we have $\dim (\mathbb{R}^3/W)=1$.</p> <p>I don't understand the solution (I think he used things we haven't seen in class; he's a little bit ahead). I can see why $\dim \mathbb{R}^3=3$, but not why $\dim \ker= \dim W=2$, nor why $\dim (\mathbb{R}^3/W)=1$. What does being surjective have to do with it?</p> <p>Can someone please explain what is used here? We already got an A on the exam, but I'd like to understand how the problem was solved.</p> <p>I know it's probably something very basic, but I don't know much.</p> <p>Thanks</p>
Learnmore
294,365
<p>I think an easy way to solve this is as follows:</p> <p>$\dim (\mathbb R^3/W)=\dim \mathbb R^3-\dim W$</p> <p>Now let $(a,b,c)\in W$; then $2a+3b-c=0$, so $c=2a+3b$,</p> <p>so a basis for $W$ is $\{(1,0,2)^t,(0,1,3)^t\}$ and $\dim W=2$.</p> <p>So you get your answer.</p> <p>Another way is to try the linear mapping $f:\mathbb R^3\rightarrow \mathbb R$ defined by $f(x,y,z)=2x+3y-z$ (check that it's linear), whose kernel is $W$. Then $\mathbb R^3/W\cong \operatorname{Im}f$, and $\operatorname{Im}f$ is a subspace of $\mathbb R$ of dimension $1$.</p>
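A quick numerical sanity check of these dimensions (a Python/NumPy sketch, not part of the original answer):

```python
import numpy as np

# W is the null space of the 1x3 matrix A = [2 3 -1].
A = np.array([[2.0, 3.0, -1.0]])

rank = np.linalg.matrix_rank(A)     # dimension of the image of A
dim_W = A.shape[1] - rank           # rank-nullity: dim ker A = 3 - rank
dim_quotient = 3 - dim_W            # dim(R^3/W) = dim R^3 - dim W

assert dim_W == 2 and dim_quotient == 1

# The basis vectors proposed in the answer really do lie in W.
for v in [(1, 0, 2), (0, 1, 3)]:
    assert 2*v[0] + 3*v[1] - v[2] == 0
```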
244,433
<p>I have a list:</p> <pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...} </code></pre> <p>And I wanted to remove every third pair and get</p> <pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...} </code></pre>
xzczd
1,871
<pre><code>newdata = data; newdata[[3 ;; ;; 3]] = Nothing; newdata </code></pre> <p>If it's OK to overwrite <code>data</code>, then simply</p> <pre><code>data[[3 ;; ;; 3]] = Nothing; </code></pre>
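For readers outside Mathematica, the same every-third-element deletion can be sketched in Python (the sample list is a hypothetical stand-in for the original data):

```python
# Hypothetical stand-in for the original list of {time, value} pairs.
data = [(2e-9, 0.0025), (4e-9, 0.0025), (6e-9, 0.0025),
        (8e-9, 0.0025), (1e-8, 0.0025), (7e-9, 0.0023),
        (3e-9, 0.0025)]

# Drop every pair whose 1-based position is a multiple of 3,
# mirroring data[[3 ;; ;; 3]] = Nothing.
newdata = [pair for i, pair in enumerate(data, start=1) if i % 3 != 0]
print(newdata)
```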
704,917
<p>I need your help evaluating this integral: <span class="math-container">$$I=\int_0^\infty F(x)\,F\left(x\,\sqrt2\right)\frac{e^{-x^2}}{x^2} \, dx,\tag1$$</span> where <span class="math-container">$F(x)$</span> represents <a href="http://mathworld.wolfram.com/DawsonsIntegral.html" rel="nofollow noreferrer">Dawson's function/integral</a>: <span class="math-container">$$F(x)=e^{-x^2}\int_0^x e^{y^2} \, dy = \frac{\sqrt{\pi}}{2} e^{-x^{2}} \operatorname{erfi}(x).\tag2$$</span></p> <p>Dawson's function can also be represented by the infinite integral <span class="math-container">$$F(x) = \frac{1}{2} \int_{0}^{\infty} e^{-t^{2}/4} \sin(xt) \, dt.$$</span></p> <p>Since <span class="math-container">$F(x)$</span> behaves like <span class="math-container">$x$</span> near <span class="math-container">$x=0$</span> and like <span class="math-container">$\frac{1}{2x}$</span> for large values of <span class="math-container">$x$</span>, we know that integral <span class="math-container">$(1)$</span> converges.</p>
Cleo
97,378
<p>$$I=\frac{\pi^{3/2}}8\left(\sqrt2-4\right)+\frac{3\,\pi^{1/2}}2\arctan\sqrt2$$</p>
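The closed form can be checked numerically (a SciPy sketch, not part of the answer; `scipy.special.dawsn` implements Dawson's function):

```python
import numpy as np
from scipy.special import dawsn
from scipy.integrate import quad

# Integrand of (1): F(x) F(x*sqrt(2)) e^{-x^2} / x^2, via SciPy's Dawson function.
def integrand(x):
    return dawsn(x) * dawsn(x * np.sqrt(2)) * np.exp(-x**2) / x**2

numeric, _ = quad(integrand, 0, np.inf)

# Closed form stated in the answer.
closed = (np.pi**1.5 / 8) * (np.sqrt(2) - 4) + 1.5 * np.sqrt(np.pi) * np.arctan(np.sqrt(2))

print(numeric, closed)   # both approximately 0.7401
```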
4,459,439
<p>Suppose <span class="math-container">$G$</span> is a finite abelian group, and the number of order-2 elements in <span class="math-container">$G$</span> is denoted by <span class="math-container">$N$</span>.</p> <p>I have found that <span class="math-container">$N= 2^n-1$</span> for some <span class="math-container">$n$</span> such that <span class="math-container">$2^n$</span> divides <span class="math-container">$|G|$</span>. I have written my proof below. Would you tell me if this proof is correct? Moreover, can we say more about the number <span class="math-container">$N$</span>?</p> <h2>My Proof</h2> <p>I: The subset of all elements of order 2, together with <span class="math-container">$\{e \}$</span>, is a subgroup of <span class="math-container">$G$</span>, because for all <span class="math-container">$x \ne y: (xy)^2=x^2y^2=e$</span>, and all its elements are self-inverse. Therefore, <span class="math-container">$N+1$</span> divides <span class="math-container">$|G|$</span> by Lagrange's theorem.</p> <p>II: If <span class="math-container">$N=1$</span> (i.e. there is only one element of order 2 in <span class="math-container">$G$</span>), namely <span class="math-container">$x$</span>, everything is fine. However, if we have <span class="math-container">$x$</span> and <span class="math-container">$y$</span> as elements of order 2 in <span class="math-container">$G$</span>, then <span class="math-container">$xy$</span> has order 2 and <span class="math-container">$N=3$</span>. If there exists another element of order 2 in <span class="math-container">$G$</span>, namely <span class="math-container">$z$</span>, then <span class="math-container">$xz,\ yz,\ xyz$</span> have order 2 and <span class="math-container">$N=7$</span>. If there exists yet another element of order 2 in <span class="math-container">$G$</span>, namely <span class="math-container">$w$</span>, then <span class="math-container">$xw,\ yw,\ zw,\ xyw,\ xzw,\ yzw, \ xyzw$</span> have order 2 and <span class="math-container">$N=15$</span>. By induction, <span class="math-container">$N=\binom{n}{1}+\cdots +\binom{n}{n} = 2^n-1$</span> for some <span class="math-container">$n$</span>.</p> <p>With I and II, <span class="math-container">$N= 2^n-1$</span> for some <span class="math-container">$n$</span> such that <span class="math-container">$2^n$</span> divides <span class="math-container">$|G|$</span>.</p> <p>Obviously, if <span class="math-container">$|G|$</span> is odd, then <span class="math-container">$N=0$</span>; and if <span class="math-container">$|G|=36$</span>, then <span class="math-container">$N=1$</span> or <span class="math-container">$N=3$</span>.</p> <p>Is what I wrote correct?</p> <p>Can we be more specific about the number of elements of order 2 in <span class="math-container">$G$</span>?</p>
Berci
41,488
<p>Yes, your argument is fine.</p> <p>Note that the given subgroup of <span class="math-container">$N+1$</span> elements naturally carries a <span class="math-container">$\Bbb Z/2\Bbb Z$</span> vector space structure, so indeed its cardinality must be <span class="math-container">$2^n$</span>.</p> <p>Since such a vector space exists for all <span class="math-container">$n\in\Bbb N$</span>, these (<span class="math-container">$2^n-1$</span>) are exactly the possible numbers for <span class="math-container">$N$</span>.</p>
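The claim can be checked by brute force on small abelian groups, written as products of cyclic groups (a Python sketch, not part of the answer):

```python
from itertools import product
from math import prod

def count_order_two(cyclic_orders):
    """Number of order-2 elements in Z_{m1} x ... x Z_{mk} (written additively)."""
    count = 0
    for elt in product(*(range(m) for m in cyclic_orders)):
        if any(elt) and all((2 * x) % m == 0 for x, m in zip(elt, cyclic_orders)):
            count += 1
    return count

# For each sample group: N + 1 is a power of two, and it divides |G|.
for orders in [(2,), (4,), (2, 2), (2, 4), (2, 2, 2), (12,), (36,), (2, 18), (6, 6)]:
    N = count_order_two(orders)
    size = prod(orders)
    assert (N + 1) & N == 0          # N + 1 is a power of 2
    assert size % (N + 1) == 0       # ... dividing the group order
```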
39,684
<p>In order to have a good view of the whole mathematical landscape, one might want to know a deep theorem from each of the main subjects (I think my view is too narrow and I want to extend it).</p> <p>For example, in <em>number theory</em> it is good to know quadratic reciprocity, and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p> <p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so learning about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
Zev Chonoles
264
<p>Differential Geometry: the <a href="http://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem" rel="nofollow">Gauss-Bonnet theorem</a>. </p> <p>I took a one-semester intro course on differential geometry and we got to this towards the end of the semester, so I feel that a couple of months is an appropriate time frame for this theorem.</p>
Community
-1
<p>In Complex Analysis the <a href="http://en.wikipedia.org/wiki/Riemann_mapping_theorem" rel="nofollow">Riemann Mapping Theorem</a>.</p>
Community
-1
<ul> <li>Primary decomposition theorem in linear algebra.</li> </ul>
2,530,458
<p>Find the range of $$ y =\frac{x}{(x-2)(x+1)} $$</p> <p>Why is the range all real numbers? </p> <p>The denominator cannot be $0$, so isn't the range supposed to exclude $y = 0$?</p>
farruhota
425,072
<p>Alternatively: the function can be expressed as the sum $$y =\frac{x}{(x-2)(x+1)}=\frac13\left(\frac{2}{x-2}+\frac{1}{x+1}\right)=\frac13\left(g(x)+h(x)\right)$$ The functions $g(x)$ and $h(x)$ are hyperbolas with ranges $g\ne0$ and $h\ne0$. But the function $y$ has range all of $\mathbb{R}$; in particular, $y(0)=0$.</p>
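The "all of $\mathbb{R}$" claim can also be seen by clearing denominators: $y = x/((x-2)(x+1))$ becomes a quadratic in $x$ whose discriminant is positive for every real $y$ (a SymPy sketch, not part of the answer):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Clearing denominators in y = x/((x-2)(x+1)) gives a quadratic in x:
quadratic = sp.expand(y*(x - 2)*(x + 1) - x)    # y*x**2 - (y + 1)*x - 2*y

# Its discriminant in x is (y+1)^2 + 8y^2 = 9y^2 + 2y + 1 > 0 for all real y,
# so every y != 0 has a real preimage; y = 0 is attained at x = 0.
disc = sp.expand(sp.discriminant(quadratic, x))
print(disc)   # 9*y**2 + 2*y + 1
assert sp.expand(disc - ((y + 1)**2 + 8*y**2)) == 0
```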
3,009,387
<p>I'm asking the following: is it true that if <span class="math-container">$K$</span> is a normal subgroup of <span class="math-container">$G$</span> and <span class="math-container">$K\leq H\leq G$</span>, then <span class="math-container">$K$</span> is normal in <span class="math-container">$H$</span>? I tried to prove it but failed, so I'm starting to suspect that it is not true. Can you provide a proof or a counterexample of this statement, or a hint about its proof? </p>
Siong Thye Goh
306,553
<p>For your way <span class="math-container">$1$</span>, check the computation of your denominator; it should give you <span class="math-container">$0$</span> again.</p> <p>For your way <span class="math-container">$2$</span>, check the factorization of your denominator as well.</p> <p>Or use L'Hôpital's rule:</p> <p><span class="math-container">$$\lim_{x\rightarrow -5} \frac{2x^2-50}{2x^2+3x-35}= \lim_{x\rightarrow -5} \frac{4x}{4x+3}=\frac{-20}{-17}=\frac{20}{17}$$</span></p>
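A quick SymPy check of the 0/0 form and the limit (not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
num = 2*x**2 - 50
den = 2*x**2 + 3*x - 35

# Both vanish at x = -5, so the expression is a 0/0 indeterminate form there.
assert num.subs(x, -5) == 0 and den.subs(x, -5) == 0

limit_value = sp.limit(num / den, x, -5)
print(limit_value)   # 20/17
```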
user
505,767
<p>As an alternative, substitute <span class="math-container">$y=x+5 \to 0$</span>:</p> <p><span class="math-container">$$\lim_{x\rightarrow -5} \frac{2x^2-50}{2x^2+3x-35}=\lim_{y\rightarrow 0} \frac{2(y-5)^2-50}{2(y-5)^2+3(y-5)-35}=\lim_{y\rightarrow 0} \frac{2y^2-20y}{2y^2-17y}=\lim_{y\rightarrow 0} \frac{2y-20}{2y-17}=\frac{20}{17}$$</span></p>
1,156,874
<p>How to show that $\mathbb{Z}[i]/I$ is a finite field whenever $I$ is a prime ideal? Is it possible to find the cardinality of $\mathbb{Z}[i]/I$ as well?</p> <p>I know how to show that it is an integral domain, because that follows very quickly.</p>
Lubin
17,760
<p>Another approach, especially if you’re only interested in counting $\Bbb Z[i]/I\,$: There’s a theorem in algebraic number theory that if $I$ is a nonzero principal ideal of the ring of integers $R$ in an algebraic number field $K$, say with $I=(\xi)$, then the cardinality of $R/I$ is $\big|\text N(\xi)\big|$, where N is the field-theoretic norm from $K$ down to $\Bbb Q$.</p> <p>In the case $K=\Bbb Q(i)$, every $I$ is principal and the norm already is positive, so you can drop the absolute-value bars. In other words, if $I=(a+bi)$, then the cardinality of $\Bbb Z[i]/I$ is $a^2+b^2$. This has nothing to do with whether $I$ is prime.</p>
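The norm count can be verified empirically by reducing a grid of Gaussian integers modulo $a+bi$ via rounded division and counting the distinct residues (a Python sketch, not part of the answer; the rounding trick assumes an odd norm so that no rounding ties occur):

```python
# Empirical check: the number of residues of Z[i] modulo g = a+bi is a^2 + b^2.

def gaussian_mod(z, g):
    """Reduce the Gaussian integer z modulo g via rounded division."""
    n = g.real**2 + g.imag**2                      # N(g) = a^2 + b^2
    t = z * g.conjugate()                          # z/g = t/n
    q = complex(round(t.real / n), round(t.imag / n))
    return z - g * q

def quotient_size(g, sweep=15):
    residues = set()
    for a in range(-sweep, sweep + 1):
        for b in range(-sweep, sweep + 1):
            residues.add(gaussian_mod(complex(a, b), g))
    return len(residues)

for g in [complex(1, 2), complex(2, 3), complex(3, 0), complex(4, 1)]:
    assert quotient_size(g) == int(g.real**2 + g.imag**2)
```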
130,564
<p>Hi, everyone.</p> <p>I am looking for some references on the period matrix of an abelian variety over an arbitrary field; if you know of any, could you please tell me?</p> <p>By the period matrix of an abelian variety, I mean the following: if $A$ is an abelian variety over the field of complex numbers, then $A \cong V/\Gamma$. If we choose a basis of $V$ and a basis of the lattice $\Gamma$, and express the lattice basis in terms of the basis of $V$, the resulting matrix is called the period matrix.</p> <p>I want to know how to define the period matrix for an abelian variety over an arbitrary field, and some basic properties of this matrix.</p> <p>Thank you very much!</p>
anon
22,503
<p>For an abelian variety $A$ over a subfield $k$ of the complex numbers, you have the first de Rham cohomology group of $A$ over $k$, which is a $k$-vector space, and the first singular cohomology group of $A/\mathbb{C}$, which is a $\mathbb{Q}$-vector space. When tensored up to $\mathbb{C}$ the two vector spaces are canonically isomorphic, and the matrix of periods of $A$ compares bases coming from the two vector spaces.</p>
749,926
<p>I have a group of 10 players and I want to form two teams with them. Each team must have at least one member. In how many ways can I do it?</p>
Jean-Sébastien
31,493
<p>Take one of the guys, say, Joe. We will form Joe's team. Each of the $9$ other players can either be or not be on Joe's team, so there are $2^9$ choices.</p> <p>But this includes the case where all $10$ players are on the same team, so we remove $1$ from that, leading to $2^9-1$.</p> <p><strong>Added</strong> For completeness, I'll add a way to count when leftovers are allowed, though this method loses its efficiency in that case.</p> <p>Either Joe will be picked, or he won't. If he is, the $9$ remaining players can go to Joe's team, his opponents, or the leftovers, which gives $3^9$ possibilities. Of those, we need to subtract the ones that leave the opposing team empty, namely $2^9$ of the $3^9$, giving a total of $3^9-2^9$ valid configurations in which Joe plays.</p> <p>For those where Joe does not play, we can use @Andre's method to find $(3^9-2\cdot2^9+1)/2$. (This is where the "Joe" method stops working.)</p> <p>Another way of solving this is to choose how many members from our original $10$ players are going to be part of the teams, and then sum over the possible combinations. This gives $$ \sum_{k=2}^{10}{10\choose k}(2^{k-1}-1). $$ The term $(2^{k-1}-1)$ can be found using the "Joe" method. Note that the sum starts at $2$ because we don't allow an empty team.</p>
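Both counts can be confirmed by brute force (a Python sketch, not part of the answer):

```python
from itertools import product
from math import comb

# No leftovers: label each of the 10 players A or B, both teams nonempty,
# and halve since the two team labels are interchangeable.
no_leftover = (2**10 - 2) // 2
assert no_leftover == 2**9 - 1 == 511

# Leftovers allowed: each player goes to team A, team B, or the bench (L).
ordered = sum(1 for assign in product('ABL', repeat=10)
              if 'A' in assign and 'B' in assign)
with_leftovers = ordered // 2
assert with_leftovers == (3**10 - 2 * 2**10 + 1) // 2 == 28501

# The sum over team sizes from the answer agrees.
assert sum(comb(10, k) * (2**(k - 1) - 1) for k in range(2, 11)) == with_leftovers
```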
1,485,310
<blockquote> <p>Write a formula/formulae for the following sequence:</p> <p>b). 1,3,6,10,15,...</p> </blockquote> <p>I am not getting any pattern here, from which to derive a formula. This sequence does not look like the examples I could solve: like</p> <blockquote> <p>a) 1,0,1,0,1...</p> </blockquote> <p>where I got that <span class="math-container">$S_n =1 $</span> (for <span class="math-container">$n=1,3,5,7,...$</span>) and <span class="math-container">$S_n=0$</span> (for <span class="math-container">$n=2,4,6,8,...$</span>)</p> <p>or</p> <blockquote> <p>c.) 1,1,1,2,1,3,1,4,1,...</p> </blockquote> <p>where <span class="math-container">$S_n=1$</span> for <span class="math-container">$n=1,2,3,5,7,...$</span> and <span class="math-container">$S_n= n/2$</span> for <span class="math-container">$n=4,6,8,.. $</span></p>
Amey Deshpande
210,002
<p>For (b), you can observe that the differences between consecutive terms form an arithmetic progression.</p> <p>Thus you can define the sequence recursively as $s_1 = 1$, $s_{n+1}=s_n + (n+1)$; solving this recursion gives the closed form $s_n = \frac{n(n+1)}{2}$.</p>
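A short check that the recursion reproduces the sequence and matches the triangular-number closed form $n(n+1)/2$ (a Python sketch, not part of the answer):

```python
# Recursion s_1 = 1, s_{n+1} = s_n + (n+1) versus the closed form n(n+1)/2.
s = [1]
for n in range(1, 10):
    s.append(s[-1] + (n + 1))

print(s[:5])   # [1, 3, 6, 10, 15]
assert all(s[n - 1] == n * (n + 1) // 2 for n in range(1, 11))
```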
2,064,984
<p>I am learning about Student's t-test. </p> <p>I am struggling, however, to find a reasonable explanation of why the standard deviation of the standard normal distribution curve is 1. </p> <p>It says "The Standard Normal Variable is denoted Z and has mean 0 and S.D. 1..."</p> <p>"... this is written as Z ~ N(0,1^2)"</p> <p>Can someone please explain? Very much appreciated.</p>
Logician6
306,688
<p>The "standard normal distribution" has mean 0 and standard deviation 1 because that is in fact the definition of the "standard normal distribution". To understand this, what I think you need to understand is what a "normal distribution" in general is.</p> <p>A "normal distribution" is a distribution of "relative frequency" $\Phi(x)$ with respect to a variable $x$ that is in a very specific shape, specifically \begin{align*} \Phi(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \end{align*} where $\mu$ and $\sigma&gt;0$ are parameters for the mean and standard deviation, respectively.</p> <p>As you've learned in statistics, the normal distribution has a "bell curve shape" (and you'll find $\Phi(x)$ has exactly the shape you've seen).</p> <p>And like I said, the only thing special about the "standard normal distribution" is that the parameters are $\mu=0$ and $\sigma=1$, hence having the equation $\Phi_s(z)=1/\sqrt{2\pi} \cdot e^{-z^2/2}$. I think the real question is "why is the standard normal distribution so useful?" It's useful because it gives an eloquent "$z$-score" measurement that captures how much "standard devations" a given point is from the mean.</p> <p>Let's say the distribution $\Psi(x)$ of height (in feet) of all people in a classroom was the following normal distribution \begin{align*} \Psi(x)=\frac{1}{\sqrt{2\pi}(.5)}e^{-\frac{(x-5.5)^2}{2(.5)^2}}, \end{align*} in other words $\mu=5.5$ and $\sigma=.5$. A height like $x=6$ ft may seem like quite a deviation from the mean, but how do we quantify how much of a deviation is it really? This is where the "standard normal distribution" comes in. Any point on a distribution with given mean and standard deviation parameters can be "normalized" into a $z$-score using the equation \begin{align*} z=\frac{x-\mu}{\sigma}, \end{align*} which quantifies what the point is on the standard normal distribution $\Phi_s(z)$ (since the z-score by the defined equation above has $\mu_z=0$ and $\sigma_z=1$). 
And again, by design, the z-score tells us how many "standard deviations away" something is. So in our example, $6$ ft would be a height that has a $z$-score of $z=1$, meaning it is a single standard deviation from the mean.</p> <p>Hope that helps. And here's the <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="nofollow noreferrer">Wikipedia page</a> for more information (the introduction has an intuitive explanation worth the read). And in case you're curious as to how the $t$-distribution is analogous to the normal distribution, let's just say it converges to the normal distribution as an "additional parameter" (the degrees of freedom) gets very large. But I won't get ahead of myself, and wait for you to ask that if you're curious about that.</p>
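The classroom example above, as a few lines of Python (values taken from the answer):

```python
# The height example: mean 5.5 ft, standard deviation 0.5 ft.
mu, sigma = 5.5, 0.5

def z_score(x):
    """Number of standard deviations between x and the mean."""
    return (x - mu) / sigma

print(z_score(6.0))   # 1.0 -> one standard deviation above the mean
```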
644,935
<p>I'm having trouble integrating $3^x$ using the $px + q$ rule. Can someone please walk me through this?</p> <p>Thanks</p>
amWhy
9,003
<p>We can simply take as primitive $$\int a^x \,\text{d}x = \dfrac{a^x}{\log a} + C$$</p> <p>Suggestion: Verify for yourself that $$\frac{\text{d}}{dx}\left(\frac {a^x}{\log a} + C\right) = a^x$$</p> <p>So $$\int 3^x \,\text{d}x = \dfrac{3^x}{\log 3} + C.$$</p>
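A SymPy check of both the antiderivative and the suggested verification (not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(3**x, x)
print(antiderivative)   # 3**x/log(3)

# The verification suggested above: differentiating recovers 3**x.
assert sp.simplify(sp.diff(antiderivative, x) - 3**x) == 0
```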
90,656
<p>In the introduction to 'A Convenient Setting for Global Analysis', Michor &amp; Kriegl make this claim: "The study of Banach manifolds per se is not very interesting, since they turn out to be open subsets of the modeling space for many modeling spaces."</p> <p>But finite-dimensional manifolds are found to be interesting even though they can be embedded in some Euclidean space (of larger dimension). (Actually this seems to me to make the above claim intuitively plausible, so the claim should be no more than we should expect.)</p> <p>But they go on to say that "Banach manifolds are not suitable for many questions of Global Analysis, as ... a Banach Lie group acting effectively on a finite dimensional smooth manifold must be finite dimensional itself," which does seem a rather strong limitation. </p>
Georges Elencwajg
450
<p>In his <a href="http://archive.numdam.org/ARCHIVE/AIF/AIF_1966__16_1/AIF_1966__16_1_1_0/AIF_1966__16_1_1_0.pdf">remarkable thesis</a> Douady proved that, given a compact complex analytic space $X$, the set $H(X)$ of analytic subspaces of $X$ has itself a natural structure of analytic space .<br> If $X=\mathbb P^n(\mathbb C)$ for example, then $H(X)$ is the Hilbert scheme $ Hilb(\mathbb P^n(\mathbb C))$.<br> However the problem is much more difficult for non algebraic $X$.<br> Douady solved it by massive use of Banach analytic manifolds, the most important of them being the grassmannian of complemented closed subspaces of a Banach space. </p> <p>The thesis starts with the candid statement of its aim: "Le but de ce travail est de munir son auteur du grade de docteur-ès-sciences mathématiques et l'ensemble H(X) des sous espaces analytiques compacts de X d'une structure d'espace analytique", that is to endow its author with the title of doctor in mathematics and the the set H(X) of compact analytic subspaces of X with the structure of analytic space.</p>
1,339,649
<p>The summation convention holds. If $\frac{\partial}{\partial t}g_{ij}=\frac{2}{n}rg_{ij}-2R_{ij}$, then I compute: $$ \frac{1}{2}g^{ij}\frac{\partial}{\partial t}g_{ij}=\frac{1}{2}g^{ij}\left(\frac{2}{n}rg_{ij}-2R_{ij}\right)=\frac{1}{n}r\left(\sum\limits_i\sum\limits_jg^{ij}g_{ij}\right)-g^{ij}R_{ij}=nr-R $$</p> <p>But in Hamilton's THREE-MANIFOLDS WITH POSITIVE RICCI CURVATURE, the result is: $$ \frac{1}{2}g^{ij}\frac{\partial}{\partial t}g_{ij}=r-R $$</p> <p>I don't know where I made my mistake; can someone tell me? Many thanks.</p>
Community
-1
<p>Note: </p> <p>$$\sum_{i,j=1}^n g^{ij} g_{ij} = \text{tr} (g^{-1}g) = \text{tr} (I_n) = n.$$</p> <p>So the first term in your computation is $\frac{1}{n}r \cdot n = r$, not $nr$, which gives Hamilton's $r-R$.</p>
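A quick NumPy check of this trace identity for a few random metrics (not part of the answer):

```python
import numpy as np

# tr(g^{-1} g) = n for any invertible (here symmetric positive definite) g.
rng = np.random.default_rng(0)
for n in (2, 3, 5):
    a = rng.standard_normal((n, n))
    g = a @ a.T + n * np.eye(n)            # symmetric positive definite metric
    g_inv = np.linalg.inv(g)
    total = np.sum(g_inv * g)              # the contraction g^{ij} g_{ij}
    assert np.isclose(total, n)
```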
3,219,428
<p>Sorry for the strange title, as I don't really know the proper terminology.</p> <p>I need a formula that returns 1 if the supplied value is anything from 10 to 99, returns 10 if the value is anything from 100 to 999, returns 100 if the value is anything from 1000 to 9999, and so on.</p> <p>I will be translating this to code and will ensure the value is never less than 1, in case that changes anything.</p> <p>It's probably something really simple but I can't wrap my head around a nice way to do this so... thanks!</p>
Nilotpal Sinha
60,930
<p><span class="math-container">$f(x) = 10^{[\log_{10} x]-1}$</span> where <span class="math-container">$[x]$</span> denotes floor</p>
2,031,964
<p>I am required to prove that the sequence $$a_1=0, \quad a_{n+1}=\frac{a_n+1}{3}, \quad n \in \mathbb{N}$$ is bounded from above and monotonically increasing by induction, and to calculate its limit. Proving that it is monotonically increasing was simple enough, but I don't quite understand how I can prove by induction that it is bounded from above, although I can see that it is bounded. This is a fairly new topic to me. I would appreciate any help.</p>
mfl
148,513
<p>We have </p> <p>$$\begin{cases}a_0&amp;=0\\a_1&amp;=\frac13 \\a_2&amp; =\frac 13+\frac{1}{3^2}\\a_3&amp; =\frac 13+\frac{1}{3^2}+\frac{1}{3^3} \end{cases}$$ What does this suggest? That $$a_n=\sum_{k=1}^n\frac{1}{3^k}=\dfrac{\frac13-\frac{1}{3^{n+1}}}{\frac23}.$$ Show this by induction and conclude that $a_n&lt;\frac 12,\forall n\in \mathbb{N}.$ (Alternatively, the bound follows directly by induction: if $a_n&lt;\frac12$, then $a_{n+1}=\frac{a_n+1}{3}&lt;\frac{\frac12+1}{3}=\frac12$.)</p>
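The formula and the bound can be checked with exact rational arithmetic (a Python sketch using the answer's indexing $a_0=0$, not part of the answer):

```python
from fractions import Fraction

# Recursion with the answer's indexing: a_0 = 0, a_{n+1} = (a_n + 1)/3.
a = [Fraction(0)]
for _ in range(20):
    a.append((a[-1] + 1) / 3)

# Each term matches the geometric sum 1/3 + ... + 1/3^n and stays below 1/2.
for n, an in enumerate(a):
    assert an == sum(Fraction(1, 3**k) for k in range(1, n + 1))
    assert an < Fraction(1, 2)
```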
3,043,598
<p>I have seen a procedure to calculate products like <span class="math-container">$A^{100}B$</span> without actually multiplying, where <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are matrices. But the procedure works only if <span class="math-container">$A$</span> is diagonalizable, because it attempts to write <span class="math-container">$B$</span> as</p> <p><span class="math-container">$B = a_1X_1 + a_2X_2 + ...$</span>, where <span class="math-container">$X_1,X_2,...$</span> are independent eigenvectors (a basis) of <span class="math-container">$A$</span> and <span class="math-container">$a_1,a_2,...$</span> are scalars. </p> <p>Is there any other procedure to multiply such matrices with no restriction on <span class="math-container">$A$</span>, i.e. even when <span class="math-container">$A$</span> is defective (not diagonalizable)?</p>
Gratus
544,878
<p>I think the fastest way in general is to calculate <span class="math-container">$A^{100}$</span> itself faster. Have you heard of the fast exponentiation algorithm?</p> <p>To calculate <span class="math-container">$A^{100}$</span>, you can form <span class="math-container">$A^2, A^4, A^8, \dots, A^{64}$</span> by repeated squaring and compute <span class="math-container">$A^{64} A^{32} A^{4}$</span> instead of multiplying <span class="math-container">$A$</span> 100 times. This takes about 10 matrix multiplications. Then you can use <span class="math-container">$A^{100}$</span> to get the result you want. </p>
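A minimal repeated-squaring implementation for matrices (a Python/NumPy sketch; the Fibonacci matrix is just an arbitrary test case, not from the answer):

```python
import numpy as np

def mat_pow(A, n):
    """A^n by repeated squaring: O(log n) matrix multiplications."""
    result = np.eye(A.shape[0])
    base = A.copy()
    while n:
        if n & 1:                   # multiply in the squares selected by
            result = result @ base  #   the binary expansion of n
        base = base @ base
        n >>= 1
    return result

A = np.array([[1.0, 1.0], [1.0, 0.0]])    # test matrix (Fibonacci matrix)
B = np.array([[1.0], [0.0]])

assert np.allclose(mat_pow(A, 100), np.linalg.matrix_power(A, 100))
result = mat_pow(A, 100) @ B              # A^100 B without diagonalizing A
```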
399,804
<p>The Question was:</p> <blockquote> <p>Express $2\cos{X} = \sin{X}$ in terms of $\sin{X}$ only.</p> </blockquote> <p>I have had dealings with similar problems but for some reason, due to I believe a minor oversight, I am terribly vexed.</p>
Julien
38,053
<p><strong>Note:</strong> too long for a comment...</p> <p>Squaring to force the use of $\cos^2x+\sin^2x=1$ results in an equation which is not equivalent to the original equation. This actually creates another countable set of solutions. </p> <p>First note that, $\cos \theta=0$ does not occur when the equation is fulfilled, so: $$(E): 2\cos x=\sin x \iff \tan x=2 $$ has the solution set $\arctan 2+\pi\mathbb{Z}$.</p> <p>Now if you square $(E)$: $$ (E)^2: 4\cos^2x=\sin^2x\iff (2\cos x-\sin x)(2\cos x+\sin x)=0\iff\tan x=\pm2 $$ has the solution set $(\arctan 2 +\pi\mathbb{Z})\sqcup(-\arctan 2+\pi\mathbb{Z})$. So that's not equivalent to $(E)$.</p> <p>Replacing $\cos x$ by $\pm\sqrt{1-\sin^2x}$ does not really work either. The best we can say is that $$ (E)\iff \cos x\geq 0 \wedge 2\sqrt{1-\sin^2 x}=\sin x\quad\mbox{or}\quad \cos x\leq 0 \wedge -2\sqrt{1-\sin^2 x}=\sin x. $$</p>
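The extra solution set created by squaring is easy to see numerically (a Python sketch, not part of the answer):

```python
import math

f = lambda x: 2 * math.cos(x) - math.sin(x)          # (E):  2 cos x = sin x
g = lambda x: 4 * math.cos(x)**2 - math.sin(x)**2    # (E)^2 after squaring

r1 = math.atan(2)     # genuine solution of (E)
r2 = -math.atan(2)    # extra solution created by squaring

assert abs(f(r1)) < 1e-12 and abs(g(r1)) < 1e-12     # r1 solves (E) and (E)^2
assert abs(g(r2)) < 1e-12                            # r2 solves (E)^2 ...
assert abs(f(r2)) > 1                                # ... but not (E)
```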
5,896
<p>$\sum_{n=1}^{\infty} \frac{\varphi(n)}{n}$, where $\varphi(n)$ is $1$ if $n$ has the digit $7$ in its usual base-$10$ representation, and $0$ otherwise.</p> <p>I am supposed to find out whether this series converges or diverges. I think it diverges, and here is why.</p> <p>There is a series whose partial sums always stay below those of our series, but which diverges. Compare some of the terms of each sequence:</p> <p>$\frac{1}{7} > \frac{1}{8}$<br/> $\frac{1}{70} > \frac{1}{80}$<br/> $\frac{1}{71} > \frac{1}{80}$<br/> $\frac{1}{72} > \frac{1}{80}$<br/> $\vdots$<br/> $\frac{1}{79} > \frac{1}{80}$<br/> $\vdots$<br/> $\frac{1}{700} > \frac{1}{800}$<br/> $\vdots$<br/></p> <p>And continue in this way.</p> <p>Obviously some terms are left out of the sequence on the left, which is fine since our terms on the left are already greater than those on the right. Notice that the right side can be grouped into</p> <p>$\frac{1}{8} + \frac{1}{8} + ... $ because we will have $10$ $\frac{1}{80}$s, $100$ $\frac{1}{800}$s, etc. Thus we are adding up infinitely many $\frac18$s. This is similar to the idea behind the divergence of the harmonic series. So my conclusion is that it diverges. A bunch of other students in my real analysis class have come to the conclusion that it is, in fact, convergent, and launched into a detailed verbal explanation about comparison with a geometric series that I couldn't follow without seeing their work. Is my reasoning, as they suspect, flawed? I can't see how.</p> <p>Sorry about the poor format; I'm new to TeX and couldn't figure out how to format a piecewise function (it was telling me my \left delimiter wasn't recognized).</p>
Michael Lugo
173
<p>Here's another proof that your sum diverges. </p> <p>Consider the sum $\sum_{n=1}^\infty (1-\varphi(n))/n$. This is the sum of the reciprocals of the integers which <i>don't</i> have a 7 in their decimal expansion.</p> <p>The number of integers $n$ with $1-\varphi(n)=1$ and $1 \le n &lt; 10^k$ is $9^k - 1$. (We can choose each of $k$ decimal digits to be anything but 7, except we don't want to choose all zeroes.) Thus the number of integers $n$ with $1-\varphi(n) = 1$ and $10^{k-1} \le n &lt; 10^k$ is $(9^k - 1) - (9^{k-1}-1) = 8 \times 9^{k-1}$.</p> <p>Therefore $$ S_k = \sum_{n=10^{k-1}}^{10^k - 1} {1-\varphi(n) \over n} $$ has $8 \times 9^{k-1}$ nonzero terms, each at most $1/10^{k-1}$. So $S_k \le 8 \times (9/10)^{k-1}$.</p> <p>The infinite sum is then $$ \sum_{k=1}^\infty S_k \le 8 \times \sum_{k=1}^\infty (9/10)^{k-1} = 8 \times 10 = 80 $$ and in particular it's finite.</p> <p>Now, the harmonic series $\sum_{n=1}^\infty 1/n$ diverges; removing terms whose sum converges (like the sum that I just showed to converge) won't change that. So your series diverges.</p>
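The digit counts and the size of the bound can be checked computationally (a Python sketch, not part of the answer):

```python
def has_seven(n):
    return '7' in str(n)

# Count of 1 <= n < 10^k with no digit 7 is 9^k - 1, as claimed.
for k in range(1, 6):
    assert sum(1 for n in range(1, 10**k) if not has_seven(n)) == 9**k - 1

# Partial sums of the "no digit 7" reciprocals stay below the bound of 80.
partial = sum(1 / n for n in range(1, 10**5) if not has_seven(n))
print(partial)   # far below 80
```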
1,220,790
<p>Consider the black scholes equation, </p> <p>$$ \frac{\partial V}{\partial t } + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2 } + ( r-q )S\frac{\partial V}{\partial S }-rV =0 $$</p> <p>How do I show that if $V( S, t)$ is a solution, then $S(\frac{\partial V}{\partial S })$ is also a solution?</p> <p>I tried substituting $S(\frac{\partial V}{\partial S })$ to the equation and working through the calculations but it doesn't seem to work out.</p> <p>On a related note, how do we also show that for $ \beta = 1-2(r-q)/\sigma^2$, </p> <p>$$ W(S, t) = S^\beta V(\frac{1}{S}, t) $$</p> <p>is also a solution?</p> <p>The relevant partial derivatives are, </p> <p>$$ \begin{align} \frac{\partial W}{\partial S} &amp; = \beta S^{\beta -1 }V- S^{\beta -2}\frac{\partial V}{\partial S}\\ \frac{\partial^2 W}{\partial S^2} &amp; = S^{\beta -4}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -3}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta-2}V \end{align} $$</p> <p>So the various terms in the PDE are,</p> <p>$$\begin{align} (r-q)S\frac{\partial W}{\partial S} &amp; = (r-q) \left[ \beta S^{\beta }V- S^{\beta -1}\frac{\partial V}{\partial S} \right] \\ \frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}&amp; =\frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta}V \right] \end{align} $$</p> <p>I can see that $ (r-q) \beta S^{\beta }V$ in the top term cancels with $\frac{1}{2}\sigma^2\beta(\beta -1 )S^{\beta}V $ in the bottom term.</p> <p>So we end up with, </p> <p>$$ (r-q)S\frac{\partial W}{\partial S}+\frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}=-(r-q) S^{\beta -1}\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} \right] $$</p> <p>But beyond this I kind of stuck despite trying various manipulations.</p> <p>Any help will be greatly appreciated!</p>
Raskolnikov
3,567
<p>First note that</p> <p>$$\frac{\partial}{\partial S}\left(S\frac{\partial V}{\partial S}\right) = \frac{\partial V}{\partial S} + S \frac{\partial^2 V}{\partial S^2}$$</p> <p>and</p> <p>$$\frac{\partial^2}{\partial S^2}\left(S\frac{\partial V}{\partial S}\right) = 2\frac{\partial^2 V}{\partial S^2} + S \frac{\partial^3 V}{\partial S^3} \; .$$</p> <p>Then, take the derivative of the Black-Scholes equation with respect to $S$:</p> <p>$$\frac{\partial}{\partial t}\frac{\partial V}{\partial S } + \frac{1}{2}\sigma^2 \left(2S\frac{\partial^2 V}{\partial S^2} + S^2 \frac{\partial^3 V}{\partial S^3}\right) + ( r-q )\left(\frac{\partial V}{\partial S } + S\frac{\partial^2 V}{\partial S^2 }\right)-r\frac{\partial V}{\partial S} =0 \; .$$</p> <p>Using our first two identities, we recognize the second and third terms:</p> <p>$$\frac{\partial}{\partial t}\frac{\partial V}{\partial S } + \frac{1}{2}\sigma^2 S\frac{\partial^2}{\partial S^2}\left(S\frac{\partial V}{\partial S}\right) + ( r-q )\frac{\partial}{\partial S}\left(S\frac{\partial V}{\partial S}\right)-r\frac{\partial V}{\partial S} =0 \; .$$</p> <p>Multiplying everything by $S$, we get</p> <p>$$\frac{\partial}{\partial t}\left(S\frac{\partial V}{\partial S }\right) + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2}{\partial S^2}\left(S\frac{\partial V}{\partial S}\right) + ( r-q )S\frac{\partial}{\partial S}\left(S\frac{\partial V}{\partial S}\right)-rS\frac{\partial V}{\partial S} =0 \; .$$</p>
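The identity behind this derivation, namely that applying the Black-Scholes operator to $S\,\partial V/\partial S$ equals $S$ times the $S$-derivative of the operator applied to $V$, can be verified symbolically (a SymPy sketch; the helper name `bs_operator` is mine):

```python
import sympy as sp

S, t, sigma, r, q = sp.symbols('S t sigma r q', positive=True)
V = sp.Function('V')(S, t)

def bs_operator(W):
    """Left-hand side of the Black-Scholes equation applied to W(S, t)."""
    return (sp.diff(W, t)
            + sp.Rational(1, 2) * sigma**2 * S**2 * sp.diff(W, S, 2)
            + (r - q) * S * sp.diff(W, S)
            - r * W)

# The derivation amounts to: L(S V_S) = S * d/dS(L(V)),
# so L(V) = 0 implies L(S V_S) = 0.
identity = sp.simplify(bs_operator(S * sp.diff(V, S)) - S * sp.diff(bs_operator(V), S))
assert identity == 0
```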
506,720
<p>Hi, how can I find the dimension of a vector space? For example: $V = \mathbb{C}$, $F = \mathbb{Q}$; what is the dimension of $V$ over $\mathbb{Q}$?</p>
John Gowers
26,267
<p>For this particular example, there are two ways to show that the dimension must be infinite: </p> <p>$\mathbb Q$ is countable, so for any countable set $\{a_1,a_2,\dots\}\subset \mathbb C$, the set of linear combinations of elements of that set over $\mathbb Q$ must be countable. But $\mathbb C$ is uncountable, so it must have uncountable dimension over $\mathbb Q$. </p> <p>We can find an infinite linearly-independent set of complex numbers over the rationals. Consider the set $\{\log(2),\log(3),\log(5),\log(7),\log(11)\dots\}$; i.e., the logarithms of the prime numbers. This is easily shown to be linearly independent using the properties of logarithms and prime numbers. </p>
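For completeness, the "easily shown" linear-independence step in the second argument can be spelled out; it is just unique factorization:

```latex
\text{Suppose } \sum_{i=1}^{k} q_i \log(p_i) = 0 \text{ with } q_i \in \mathbb{Q}
\text{ and } p_1,\dots,p_k \text{ distinct primes.}\\
\text{Clearing denominators gives } \sum_{i=1}^{k} n_i \log(p_i) = 0
\text{ with } n_i \in \mathbb{Z}, \text{ i.e. } \prod_{i=1}^{k} p_i^{\,n_i} = 1.\\
\text{Moving the negative exponents to the other side, unique factorization forces }
n_i = 0 \text{ for all } i, \text{ hence } q_i = 0.
```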
11,519
<p>Hello,</p> <p>If $f:\mathbb{R} \to \mathbb{R}$ is a differentiable function, it is very easy to find its Lipschitz constant. Is there any way to extend this to functions $f: \mathbb{R} \to \mathbb{R}^n$ (or similar)?</p>
Pete L. Clark
1,149
<p>Let $f = (f_1,\ldots,f_n): [a,b] \rightarrow \mathbb{R}^n$ be a continuously differentiable function. (See the comments above for an explanation as to why the hypotheses have been strengthened.) </p> <p>For $1 \leq i \leq n$, let </p> <p>$L_i = \max_{x \in [a,b]} |f_i'(x)|$, </p> <p>so that, by the Mean Value Theorem, for $x,y \in [a,b]$,</p> <p>$|f_i(x)-f_i(y)| = |f_i'(c)||x-y| \leq L_i |x-y|$.</p> <p>Then, taking the standard Euclidean norm on $\mathbb{R}^n$, </p> <p>$|f(x)-f(y)|^2 = \sum_{i=1}^n |f_i(x)-f_i(y)|^2 \leq (\sum_{i=1}^n L_i^2) \ |x-y|^2$, </p> <p>so </p> <p>$|f(x)-f(y)| \leq \sqrt{(\sum_{i=1}^n L_i^2)} \ |x-y|$.</p> <p>Thus we can take </p> <p>$L = \sqrt{\sum_{i=1}^n L_i^2}$. </p> <p>Since all norms on $\mathbb{R}^n$ are equivalent -- i.e., differ at most by a multiplicative constant -- the choice of norm on $\mathbb{R}^n$ will change the expression of the Lipschitz constant $L$ in terms of the Lipschitz constants $L_i$ of the components, but not whether $f$ is Lipschitz. </p>
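As a concrete illustration (my own toy example, not from the answer), take $f(x)=(\sin x,\; x^2)$ on $[0,1]$, so $L_1=1$, $L_2=2$ and $L=\sqrt{5}$. A quick numerical spot-check of the resulting bound:

```python
import numpy as np

# f(x) = (sin x, x^2) on [0, 1]:  L1 = max|cos x| = 1,  L2 = max|2x| = 2
L = np.sqrt(1.0**2 + 2.0**2)  # L = sqrt(sum of L_i^2), as derived above

rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 1.0, size=(2, 1000))
f = lambda s: np.stack([np.sin(s), s**2])
gaps = np.linalg.norm(f(x) - f(y), axis=0)
ok = bool(np.all(gaps <= L*np.abs(x - y) + 1e-12))
print(ok)  # True
```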
11,519
<p>Hello,</p> <p>If $f:\mathbb{R} \to \mathbb{R}$ is a differentiable function, it is very easy to find its Lipschitz constant. Is there any way to extend this to functions $f: \mathbb{R} \to \mathbb{R}^n$ (or similar)?</p>
Phil Isett
7,193
<p>Another nice way to do this computation (which generally gives more precise information) is to use the formula</p> <p>$ f(x + h) - f(x) = [ \int_0^1 Df(x + th) dt ] h$</p> <p>Also, the formula for $Df(x)$ in Hahn's post above is not correct.</p>
1,057,675
<p>I was asked to prove that $\lim\limits_{x\to\infty}\frac{x^n}{a^x}=0$ when $n$ is some natural number and $a&gt;1$. However, taking second and third derivatives according to L'Hôpital's rule didn't bring any fresh insights nor did it clarify anything. How can this be proven? </p>
Ben Grossmann
81,360
<p>Sometimes, we want to consider $\infty$ as a value that can be attained. For example, we could define $$ f(x) = \begin{cases} 1/|x| &amp; x \neq 0\\ \infty &amp; x = 0 \end{cases} $$ In this case, we would say that $f:\Bbb R \to (0,\infty]$ is an onto function.</p> <hr> <p>We can consider $\Bbb R = (-\infty,\infty)$ as a subset of the larger space $\overline{\Bbb R} = (-\infty,\infty) \cup \{\pm \infty\}$. Under this consideration, we would state that $\Bbb R$ is open (and not closed) in $\overline{\Bbb R}$.</p>
1,057,675
<p>I was asked to prove that $\lim\limits_{x\to\infty}\frac{x^n}{a^x}=0$ when $n$ is some natural number and $a&gt;1$. However, taking second and third derivatives according to L'Hôpital's rule didn't bring any fresh insights nor did it clarify anything. How can this be proven? </p>
Michael Albanese
39,599
<p>Note that $\infty \notin \mathbb{R}$, so it doesn't make sense to say that $[0, \infty]$ is a subset of $\mathbb{R}$. On the other hand, the set $$[0, \infty) = \{x \in \mathbb{R} \mid 0 \leq x\}$$ completely makes sense and defines a subset of $\mathbb{R}$.</p> <p>However, one can consider the extended real numbers $\overline{\mathbb{R}} = \mathbb{R}\cup\{-\infty, \infty\}$ and extend the order structure from $\mathbb{R}$ by declaring $-\infty &lt; x &lt; \infty$ for every $x \in \mathbb{R}$. In this setting, the set $$[0, \infty] = \{x \in \overline{\mathbb{R}} \mid 0 \leq x \leq \infty\}$$ is a perfectly well-defined subset of $\overline{\mathbb{R}}$, as is $[0, \infty)$ which can be written as $$[0, \infty) = \{x \in \overline{\mathbb{R}} \mid 0 \leq x &lt; \infty\}.$$</p>
1,634,325
<blockquote> <p><strong>Problem</strong>: Is there a sequence whose set of subsequential limits (sublimits) is $\mathbb{N}$? If such a sequence exists, prove it.</p> </blockquote> <p>I tried to solve this problem by guessing what type of sequence is needed. <br>For example: $a_n=(-1)^n$ has two sublimits, $\{1,-1\}$. <br> $a_n=n \times\sin(\frac{\pi}{2})$ has no sublimits because $\lim _{n\to \infty}{n\times\sin(\frac{\pi}{2})}=\infty$. <br> But I don't know how to solve this, so please give me a hint.</p>
DanielWainfleet
254,665
<p>Let $p_m$ be the $m$th prime.For $n\in N,$ if $n=(p_m)^k$ for some $m,k\in N,$ let $x_n=m+1.$ If $n$ is not a power of a prime, let $x_n=1.$</p>
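A small sanity check of this construction (with a hypothetical helper named `x`, using SymPy's prime utilities): each value $m+1$ recurs along the powers of $p_m$, while non-prime-powers keep producing $1$, so every natural number is a sublimit.

```python
from sympy import factorint, primepi

def x(n):
    # x_n = m + 1 when n = (p_m)^k for the m-th prime p_m; otherwise x_n = 1
    f = factorint(n)
    if len(f) == 1:                 # n is a prime power p^k
        p = next(iter(f))
        return int(primepi(p)) + 1
    return 1

# the subsequence n = 3, 9, 27, ... is constantly 3, so 3 is a sublimit;
# non-prime-powers such as 6 give 1, so 1 is a sublimit as well
print(x(3), x(9), x(27), x(6))  # 3 3 3 1
```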
1,556,298
<p>If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.</p> <p>So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.</p> <p>Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.</p> <p>So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is <strong>false</strong> for true $p$ (the machine doesn't work).</p> <p>That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?</p>
Dan Christensen
3,515
<blockquote> <p>When is implication true?</p> </blockquote> <p><span class="math-container">$p\implies q$</span> is true when both <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are true, or when <span class="math-container">$p$</span> is false. Assuming your problem is with the latter.</p> <p><strong>Theorem:</strong></p> <p><span class="math-container">$\neg p \implies [p \implies q]$</span></p> <p><strong>Proof:</strong></p> <p>Suppose <span class="math-container">$\neg p$</span>.</p> <p>Suppose <span class="math-container">$p$</span>.</p> <p>Suppose to the contrary that <span class="math-container">$\neg q$</span>.</p> <p>We have the contradiction <span class="math-container">$p\land \neg p.$</span></p> <p>Therefore <span class="math-container">$\neg\neg q$</span> by contradiction.</p> <p>Remove <span class="math-container">$\neg\neg$</span> to obtain <span class="math-container">$q$</span>.</p> <p>Therefore <span class="math-container">$p\implies q$</span>.</p> <p>Therefore, as required, <span class="math-container">$\neg p \implies [p \implies q]$</span>.</p> <p>Not sure why this is thought to not hold in natural language, but it is widely believed. Anything whether true or false will follow from a falsehood. What is wrong with: If pigs could fly, I would be the King of France. (Assuming pigs can't actually fly. See <a href="http://www.dcproof.com/IfPigsCanFly.html" rel="nofollow noreferrer">If Pigs Could Fly</a> at my math blog.)</p>
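The truth-table claim is small enough to check exhaustively; a throwaway Python check over all four rows:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is defined as (not a) or b
    return (not a) or b

# verify that (not p) -> (p -> q) holds in every row of the truth table
tautology = all(implies(not p, implies(p, q))
                for p, q in product([True, False], repeat=2))
print(tautology)  # True
```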
179,230
<p>I drew the <a href="http://reference.wolfram.com/language/ref/Cos.html" rel="nofollow noreferrer"><code>Cos</code></a> function using this code: </p> <pre><code>GraphicsColumn[ { Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False], Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False], Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False], Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False] } ,ImageSize -&gt; Large ] </code></pre> <p>and manually added circles and arrows, as you can see in the attached picture. How can I export the figure? <code>Export</code> doesn't work, and I can't save or copy it. Please help.<br> <a href="https://i.stack.imgur.com/eSZ4E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eSZ4E.jpg" alt="see image"></a></p>
Bill
18,890
<p>If you can adapt this then <code>Export</code> works just fine</p> <pre><code>g1 = GraphicsColumn[{ Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False, Epilog -&gt; {Disk[{5 Pi, 3/4}, {8, .2}], Disk[{5 Pi, 1/4}, {8, .2}], Arrowheads[{-.05, .05}], Arrow[BezierCurve[{{5 Pi, 1.1}, {10 Pi, 1.3}, {15 Pi, 1.1}}]]}, PlotRange -&gt; {-1, 1.5}], Plot[Cos[0.0625*Pi x], {x, 0, 40*Pi}, Axes -&gt; False, Epilog -&gt; {Disk[{10 Pi, 3/4}, {8, .2}], Disk[{10 Pi, 1/4}, {8, .2}] }]}, ImageSize -&gt; Large] Export["g1.jpg", g1] </code></pre> <p><a href="https://i.stack.imgur.com/IVOO6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IVOO6.jpg" alt="enter image description here"></a></p>
2,883,023
<p>Find the number of zeros of $f(z)=z^6-5z^4+3z^2-1$ in $|z|\leq1$. </p> <p>My attempts have not gotten far. </p> <p>I know we can examine the related equation $f(w)=w^3-5w^2+3w-1$ in $|w|\leq1$, letting $w=z^2$.</p> <p>It is clear that $f(w)=0$ for $|w|=1$ if and only if $w=-1$. </p> <p>My main problem is that this seems obviously like an application of Rouché's theorem, where we let $g(w)=5w^2$, whereupon we have the relation $|f(w)|&lt; |g(w)|$ for all $|w|=1$, with the sole exception that equality holds when $w=-1$. </p> <p>I would like to use Rouché to equate the number of zeros of $f$ and $g$, but my understanding of Rouché is that the inequality should be strict with no exceptions. </p>
dxiv
291,201
<p>Alt. hint: &nbsp; show that the cubic $\,w^3-5w^2+3w-1\,$ has a unique real root which is $\,\gt 1\,$, then use that the product of all three roots is $\,1\,$.</p>
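A numerical cross-check of this root-counting argument (using NumPy, not part of the original hint): the real root of the cubic is $\approx 4.39 > 1$, so the conjugate pair has modulus $\approx 0.48$ by the product-of-roots relation, and the sextic should have exactly $4$ roots in $|z|\le 1$:

```python
import numpy as np

# roots of z^6 - 5 z^4 + 3 z^2 - 1 (coefficients in descending powers)
roots = np.roots([1, 0, -5, 0, 3, 0, -1])
inside = int(np.sum(np.abs(roots) <= 1))
print(inside)  # 4
```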
2,883,023
<p>Find the number of zeros of $f(z)=z^6-5z^4+3z^2-1$ in $|z|\leq1$. </p> <p>My attempts have not gotten far. </p> <p>I know we can examine the related equation $f(w)=w^3-5w^2+3w-1$ in $|w|\leq1$, letting $w=z^2$.</p> <p>It is clear that $f(w)=0$ for $|w|=1$ if and only if $w=-1$. </p> <p>My main problem is that this seems obviously like an application of Rouché's theorem, where we let $g(w)=5w^2$, whereupon we have the relation $|f(w)|&lt; |g(w)|$ for all $|w|=1$, with the sole exception that equality holds when $w=-1$. </p> <p>I would like to use Rouché to equate the number of zeros of $f$ and $g$, but my understanding of Rouché is that the inequality should be strict with no exceptions. </p>
Nosrati
108,128
<p><strong>Hint:</strong> On $|z|=1$ : $$|z^6-1|\leqslant|z|^6+1=2\leqslant5|z|^4-3|z|^2\leqslant|-5z^4+3z^2|$$</p>
622,552
<p>In the context of (most of the time convex) optimization problems:</p> <p>I understand that I can build a Lagrange dual problem and, assuming I know there is strong duality (no gap), I can find the optimum of the primal problem from that of the dual. Now I want to find the primal optimum point (i.e. the point at which the optimum is attained). Somehow, it is assumed that the optima of the primal and the dual are achieved at the same point (which is then a saddle point). Why/when is that true? Can strong duality happen only at a saddle point? Does that require the primal problem to be convex?</p> <p>Now, from this, it is assumed that it is enough to find the minimizer of the Lagrangian at the optimal dual parameters (the solution of the dual problem) in order to get the primal optimal point. Why is that true?</p> <p>Thank you, Dany</p>
jkn
37,377
<p>Ok, so I'm not totally sure whether this addresses all (some!) of your doubts, so let me know if it does not. As a disclaimer, what follows is just one out of several ways to view duality in optimisation (you can find others in the answers to <a href="https://math.stackexchange.com/questions/223235/please-explain-the-intuition-behind-the-dual-problem-in-optimization?rq=1">this post</a>).</p> <p>To make things a bit concrete I'm going to assume that your primal problem is a minimisation problem with objective <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R}$</span> and set of feasible points <span class="math-container">$X\subseteq\mathbb{R}^n$</span>. Suppose we can find some function <span class="math-container">$\phi:\mathbb{R}^m\to\mathbb{R}$</span> and set <span class="math-container">$Y\subseteq\mathbb{R}^m$</span> that has the following property:</p> <blockquote> <p><span class="math-container">$\phi$</span> evaluated at any point <span class="math-container">$y\in Y$</span> gives a lower bound for <span class="math-container">$f$</span> over all of <span class="math-container">$X$</span>. That is, for any <span class="math-container">$y\in Y$</span></p> <p><span class="math-container">$$f(x)\geq \phi(y)\quad\quad\forall x\in X.$$</span></p> </blockquote> <p>Note that the above implies that for any <span class="math-container">$y\in Y$</span>, <span class="math-container">$\phi(y)$</span> is also a lower bound for the optimum value of the primal (that is, <span class="math-container">$p^*:=\inf_{x\in X}f(x)$</span>). Then an interesting question to ask is what is the best lower bound we can extract from <span class="math-container">$\phi$</span> and <span class="math-container">$Y$</span>? 
What is <span class="math-container">$d^*:=\sup_{y\in Y}\phi(y)$</span>?</p> <p>We call answering this question &quot;a dual problem to the original primal problem&quot; and we say that &quot;strong duality&quot; holds for this primal-dual pair if <span class="math-container">$d^*=p^*$</span>. Strong duality is handy because (assuming we can solve the dual problem) it gives us a way to check how suboptimal a candidate optimal point <span class="math-container">$\tilde{x}$</span> (which we have computed in some way) of the primal is by simply computing the difference <span class="math-container">$f(\tilde{x})-d^*$</span> (otherwise, since we don't know <span class="math-container">$p^*$</span>, it is hard to judge how good is our candidate optimal point <span class="math-container">$\tilde{x}$</span>). Indeed, if <span class="math-container">$f(\tilde{x})=d^*$</span> we know that <span class="math-container">$\tilde{x}$</span> is optimal and we can stop looking for a better <span class="math-container">$x$</span>.</p> <p>In order for the above to be of use, we should aim to construct a dual problem that is simple to solve. This is where the Lagrangian dual problem comes in; it is a dual problem which is always convex irrespective of whether the primal is. The objective function of the Lagrangian dual problem is</p> <p><span class="math-container">$$\phi(y):=\inf_{x\in X}L(x,y)$$</span></p> <p>where <span class="math-container">$L$</span> is the so-called &quot;Lagrangian function&quot; and <span class="math-container">$y$</span> are the &quot;Lagrangian multipliers&quot;.</p> <p>To check whether strong duality holds for a given primal-dual pair there is a whole battery of sufficient conditions collectively referred to as constraint qualifications. The most basic of which applies to the case where the primal is convex and the dual is the Lagrangian dual is called Slater's condition. 
It asks that there exists at least one &quot;strictly feasible&quot; point for the primal (that is, that <span class="math-container">$X$</span> has non-empty relative interior). If I remember correctly, in this case strong duality holds if and only if <span class="math-container">$L(x,y)$</span> has a saddle point (which turns out to be a pair of optimal point <span class="math-container">$x$</span> and multipliers <span class="math-container">$y$</span>), however I don't have an intuitive explanation why this is so.</p> <p>With regards to how to actually find an optimal point <span class="math-container">$x^*$</span> what typically happens in practice is that people run an optimisation algorithm (often gradient decent or Newton's) on both (and separately) the primal and dual problems which hopefully generate some sequences <span class="math-container">$(x_n)$</span> and <span class="math-container">$(y_n)$</span> such that</p> <p><span class="math-container">$$f(x_1)\geq f(x_2)\geq f(x_3)\geq\dots,\quad\quad L(y_1)\leq L(y_2)\leq L(y_3)\leq\dots.$$</span></p> <p>Every few iterations they stop and check whether the difference <span class="math-container">$f(x_n)-L(y_n)$</span> is smaller than some tolerance level and if so stop completely and say that <span class="math-container">$x_n$</span> optimal. They call the corresponding <span class="math-container">$y_n$</span> a certificate of optimality, because anyone can then verify that <span class="math-container">$x_n$</span> is (nearly) optimal by checking that the mentioned difference is small. This is the essence of the so called primal-dual solvers. Note that if the primal, said algorithms is guaranteed to generate <span class="math-container">$x_n$</span> such that <span class="math-container">$f(x_n)\to p^*$</span> (ditto with respect to the dual problem).</p>
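To make the last paragraph concrete, here is a toy problem (my own, not from the answer) where the dual optimum certifies the primal one: minimize $x^2$ subject to $x\ge 1$, so $p^*=1$. A SymPy sketch that builds the Lagrangian dual and recovers the primal point from the dual optimum:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
L = x**2 + y*(1 - x)                      # Lagrangian with multiplier y >= 0
xmin = sp.solve(sp.diff(L, x), x)[0]      # minimizer of L over x: x = y/2
phi = sp.simplify(L.subs(x, xmin))        # dual objective: phi(y) = y - y^2/4
ystar = sp.solve(sp.diff(phi, y), y)[0]   # dual optimum y* = 2
d_star = phi.subs(y, ystar)               # d* = 1 = p*  (zero duality gap)
x_star = xmin.subs(y, ystar)              # primal point recovered from the dual
print(d_star, x_star)  # 1 1
```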
28,532
<p><code>MapIndexed</code> is a very handy built-in function. Suppose that I have the following list, called <code>list</code>:</p> <pre><code>list = {10, 20, 30, 40}; </code></pre> <p>I can use <code>MapIndexed</code> to map an arbitrary function <code>f</code> across <code>list</code>:</p> <pre><code>{f[10, {1}], f[20, {2}], f[30, {3}], f[40, {4}]} </code></pre> <p>where the second argument to <code>f</code> is the part specification of each element of the list.</p> <p>But, now, what if I would like to use <code>MapIndexed</code> only at certain elements? Suppose, for example, that I want to apply <code>MapIndexed</code> to only the second and third elements of <code>list</code>, obtaining the following:</p> <pre><code>{10, f[20, {2}], f[30, {3}], 40} </code></pre> <p>Unfortunately, there is no built-in "<code>MapAtIndexed</code>", as far as I can tell. What is a simple way to accomplish this? Thanks for your time.</p>
rm -rf
5
<p><code>If</code> does the job and is simple enough:</p> <pre><code>MapIndexed[If[2 ≤ First@#2 ≤ 3, f[#, #2], #] &amp;, list] (* {10, f[20, {2}], f[30, {3}], 40} *) </code></pre>
113,797
<p>I'm trying to extract every 21st character from this text, s (given below), to create new strings of all 1st characters, 2nd characters, etc.</p> <p>I have already separated the long string into substrings of 21 characters each using</p> <pre><code> splitstring[String : str_, n_] := StringJoin @@@ Partition[Characters[str], n, n, 1, {}] </code></pre> <p>giving:</p> <pre><code> {"ALJJJEAQZJMZKOZDKEHBL", "XLPXNEHZCSEJVVLWHTUDJ", \ "WFYXKKMWNNTNPHDTMGIOP", "OOSYPXGTLOHOPHTDHBHWO", \ "MWGSKXSTNNEYSQHRSGPKP", "CJBNVIYCZHIVPFSWCKFPJ", \ "OZQLNGPTLCIALHMBIGUOP", "ESYNDGACTURTALHLSGFBR", \ "LPRMYKFQFXTEEZQHIUMOC", "CLSUIKWYLRPRRJZWCKUOW", \ "WJLLVYEJXNIEMDQYQTDFC", "HJQLYKFQJTKUCICIRKOWX", \ "PHLWYCCDRKVPAHCYPZFRL", "CNYBIOEWWGHQQBCDHGPRW", \ "WHWIUOQTYOJLHGLFRTTVL", "CQCIUEHZYJKJEWHOVMYOM", \ "JOBTIHCOSGCZVZJFEYHAC", "JQCQFSTWLOHYPXZDHBHTW", \ "TLGIUJIWEXSJGKLTMKAFF", "PYGYICGSPRPLPOZNKPUSH", \ "NSMQCGPWHILVXLZARZLIS", "POBDNEXJYMESQDTAQWKWL", \ "WPBIDCCJXKHQCOTAJXBVZ", "WWDNCCCCWACQHZJREEHRO", \ "LQSSRCVPFHCJCGLURBHFM", "POYKFKWRHKVGLLSYRTISC", \ "ESYKXCEYFNSAHUHYIBJVL", "WZLKPJHQJEQQVHNHSEHTO", \ "TNFJCRMZCCIEXKSALTMFP", "YTLGCORTMRIAILOISWKRJ", \ "DTMRXGVYHEHRSIPRIWKOX", "SLBQVJHSZTMZMIHRAFFTJ", \ "CTMSYGADQEHQXUZIQHLZJ", "OOGRZGVHLKPTIUEHHKHKT", \ "CHCNCMMYLIMBWWHNKXBQP", "DEWQCIVZRNETVVPFCVYST", \ "RTYZYEPWZTMQHLNHSGPRO", "TNFJCRRLNNPRHGYARRJVW", \ "HJBIFTAPWRVUCZODYYLGF", "COBDWKMDTGJCEBDTVRDRO", \ "HJQLPLVHJYKNJSLGEBZDL", "OOWKROWOOOJRXKRAMYMMM", \ "FOBFTGTSLHIGLQLWVVLTL", "TDYBEGVNJLEAQDPRQXKRH", \ "WOGIUCPLCJEAJBYGLTSCY", "OCUDECCQCURAELOAPEHKP", \ "YJBIVOPWZTEVHJHNEYNMX", "CFSHYKPPWCGUMEWYKNHZW", \ "JQSWCRANSOALVIJLPRZDL", "YOFDJVCDHTAVAHTRMTBHP", "RJZBIOEOSCR"} </code></pre> <p>How can I make strings of all the first characters, second characters, etc.?</p> <pre><code> s=ALJJJEAQZJMZKOZDKEHBLXLPXNEHZCSEJVVLWHTUDJWFYXKKMWNNTNPHDTMGIOPOOSYPXG\ TLOHOPHTDHBHWOMWGSKXSTNNEYSQHRSGPKPCJBNVIYCZHIVPFSWCKFPJOZQLNGPTLCIALH\ 
MBIGUOPESYNDGACTURTALHLSGFBRLPRMYKFQFXTEEZQHIUMOCCLSUIKWYLRPRRJZWCKUOW\ WJLLVYEJXNIEMDQYQTDFCHJQLYKFQJTKUCICIRKOWXPHLWYCCDRKVPAHCYPZFRLCNYBIOE\ WWGHQQBCDHGPRWWHWIUOQTYOJLHGLFRTTVLCQCIUEHZYJKJEWHOVMYOMJOBTIHCOSGCZVZ\ JFEYHACJQCQFSTWLOHYPXZDHBHTWTLGIUJIWEXSJGKLTMKAFFPYGYICGSPRPLPOZNKPUSH\ NSMQCGPWHILVXLZARZLISPOBDNEXJYMESQDTAQWKWLWPBIDCCJXKHQCOTAJXBVZWWDNCCC\ CWACQHZJREEHROLQSSRCVPFHCJCGLURBHFMPOYKFKWRHKVGLLSYRTISCESYKXCEYFNSAHU\ HYIBJVLWZLKPJHQJEQQVHNHSEHTOTNFJCRMZCCIEXKSALTMFPYTLGCORTMRIAILOISWKRJ\ DTMRXGVYHEHRSIPRIWKOXSLBQVJHSZTMZMIHRAFFTJCTMSYGADQEHQXUZIQHLZJOOGRZGV\ HLKPTIUEHHKHKTCHCNCMMYLIMBWWHNKXBQPDEWQCIVZRNETVVPFCVYSTRTYZYEPWZTMQHL\ NHSGPROTNFJCRRLNNPRHGYARRJVWHJBIFTAPWRVUCZODYYLGFCOBDWKMDTGJCEBDTVRDRO\ HJQLPLVHJYKNJSLGEBZDLOOWKROWOOOJRXKRAMYMMMFOBFTGTSLHIGLQLWVVLTLTDYBEGV\ NJLEAQDPRQXKRHWOGIUCPLCJEAJBYGLTSCYOCUDECCQCURAELOAPEHKPYJBIVOPWZTEVHJ\ HNEYNMXCFSHYKPPWCGUMEWYKNHZWJQSWCRANSOALVIJLPRZDLYOFDJVCDHTAVAHTRMTBHP\ RJZBIOEOSCR </code></pre>
kglr
125
<pre><code>StringJoin /@ Transpose[Characters @ StringPartition[s,21]] </code></pre> <p><img src="https://i.stack.imgur.com/iFapi.png" alt="Mathematica graphics"></p>
1,192,338
<p>How can one prove that there are infinitely many taxicab numbers? I was reading <a href="http://en.wikipedia.org/wiki/Taxicab_number#Known_taxicab_numbers" rel="nofollow">http://en.wikipedia.org/wiki/Taxicab_number#Known_taxicab_numbers</a> and thought of this question. Any ideas?</p>
Dietrich Burde
83,966
<p>It is easy to show that there are infinitely many positive integers which are representable as the sum of two cubes, e.g., see the article <a href="https://www.google.at/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CB8QFjAA&amp;url=https%3A%2F%2Fcs.uwaterloo.ca%2Fjournals%2FJIS%2FVOL6%2FBroughan%2Fbroughan25.pdf&amp;ei=xOAGVY7dN8PeOP7VgOAF&amp;usg=AFQjCNG8rHk5ueNrG5WD97gCXepEIjun7w&amp;bvm=bv.88198703,d.ZWU" rel="nofollow">Characterizing the Sum of Two Cubes</a> by K.A. Broughan (2003). If we require a representation as the sum of two cubes in at least $N\ge 2$ different ways, then the result is more difficult to show; and the proof uses the theory of elliptic curves etc. For a good survey, see the article <a href="https://www.google.at/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CB8QFjAA&amp;url=https%3A%2F%2Fwww.maa.org%2Fsites%2Fdefault%2Ffiles%2Fpdf%2Fupload_library%2F22%2FFord%2Fsilverman331-340.pdf&amp;ei=zeEGVbFkhsw9zsWAsAY&amp;usg=AFQjCNGrGtbbjX5GN4HF32EDTcaJQDHMPg&amp;bvm=bv.88198703,d.ZWU" rel="nofollow">Taxicabs and sum of two cubes</a> by J. H. Silverman. In particular, the following result due to K. Mahler is discussed:</p> <p><strong>Theorem(Mahler):</strong> There is a constant $c&gt;0$ such that for infinitely many positive integers $m$, the number of positive integer solutions to the equation $x^3+y^3=m$ exceeds $c(\log(m))^{1/3}$.</p>
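Tangentially, the first taxicab number itself is cheap to find by brute force (a throwaway search, not part of the proof above):

```python
from collections import defaultdict
from itertools import combinations_with_replacement

# smallest number writable as a sum of two positive cubes in two ways
sums = defaultdict(list)
for a, b in combinations_with_replacement(range(1, 30), 2):
    sums[a**3 + b**3].append((a, b))
ta2 = min(n for n, ways in sums.items() if len(ways) >= 2)
print(ta2, sums[ta2])  # 1729 [(1, 12), (9, 10)]
```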
3,710,710
<p>Suppose <span class="math-container">$A(t,x)$</span> is a <span class="math-container">$n\times n$</span> matrix that depends on a parameter <span class="math-container">$t$</span> and a variable <span class="math-container">$x$</span>, and let <span class="math-container">$f(t,x)$</span> be such that <span class="math-container">$f(t,\cdot)\colon \mathbb{R}^n \to \mathbb{R}^n$</span>.</p> <p>Is there a chain rule for <span class="math-container">$$\frac{d}{dt} A(t,f(t,x))?$$</span></p> <p>It should be something like <span class="math-container">$A_t(t,f(t,x)) + ....$</span>, what is the other term?</p>
greg
357,854
<p>Consider the scalar function <span class="math-container">$\alpha$</span> which matches the proposed functional form, i.e. <span class="math-container">$$\eqalign{ &amp;\alpha = \alpha(t,f) \qquad &amp;f = f(t,x) \\ &amp;\alpha,t\in{\mathbb R}^{1} \qquad &amp;f,x\in{\mathbb R}^{n} }$$</span> Everyone knows how to go about calculating its total time derivative <span class="math-container">$$\eqalign{ \frac{d\alpha}{dt} &amp;= \frac{\partial\alpha}{\partial t} + \left(\frac{\partial\alpha}{\partial f}\cdot\frac{\partial f}{\partial t}\right) + \left(\frac{\partial\alpha}{\partial f}\cdot\frac{\partial f}{\partial x}\cdot\frac{\partial x}{\partial t}\right) }$$</span> The only twist with the matrix <span class="math-container">$A={\bf[}\alpha_{ij}{\bf]}\;$</span> is that each element is such a function. <br>Therefore <span class="math-container">$$\eqalign{ \frac{dA}{dt} = \left[\frac{d\alpha_{ij}}{dt}\right] }$$</span></p>
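A quick symbolic check of the scalar formula with concrete (hypothetical) choices $\alpha(t,f)=t\sin f$ and $f(t,x)=t x^2$, with $x$ held fixed so the $\partial x/\partial t$ term drops out:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = t*x**2                            # f(t, x)
alpha = lambda tt, ff: tt*sp.sin(ff)  # alpha(t, f)

lhs = sp.diff(alpha(t, f), t)         # total derivative d/dt of alpha(t, f(t,x))
a_t = sp.sin(f)                       # partial alpha / partial t at (t, f)
a_f = t*sp.cos(f)                     # partial alpha / partial f at (t, f)
rhs = a_t + a_f*sp.diff(f, t)         # chain rule with x held fixed
residual = sp.simplify(lhs - rhs)
print(residual)  # 0
```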
464,426
<p>Find the limit of $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p> <p>How should I approach it? I tried to use L'Hopital's Rule, but it just keeps giving me 0/0.</p>
Community
-1
<p>What do you mean by "keep giving you 0/0"?</p> <p>After you apply the L'Hopital's Rule, you should get:</p> <p>\begin{align*} \lim_{x\to1}\frac{x^{1/5}-1}{x^{1/3}-1}&amp;=\lim_{x\to1}\frac{\frac d{dx}(x^{1/5}-1)}{\frac d{dx}(x^{1/3}-1)}\\ &amp;=\lim_{x\to1}\frac{\frac15x^{-4/5}}{\frac13x^{-2/3}}\\ &amp;=\boxed{\dfrac35} \end{align*}</p>
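For what it's worth, a SymPy one-liner confirms the value (and, without L'Hôpital, substituting $x=u^{15}$ reduces the limit to $\lim_{u\to1}\frac{u^3-1}{u^5-1}=\frac{3}{5}$ by factoring out $u-1$):

```python
import sympy as sp

x = sp.symbols('x')
lim = sp.limit((x**sp.Rational(1, 5) - 1)/(x**sp.Rational(1, 3) - 1), x, 1)
print(lim)  # 3/5
```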
915,054
<p>I'm trying to find a closed form of this sum: $$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!}.\tag{1}$$ <a href="http://www.wolframalpha.com/input/?i=Sum%5BGamma%28n%2B1%2F2%29%2F%28%282n%2B1%29%5E4+4%5En+n%21%29%2C+%7Bn%2C1%2CInfinity%7D%5D"><em>WolframAlpha</em></a> gives a large expressions containing multiple generalized hypergeometric functions, that is quite difficult to handle. After some simplification it looks as follows: $$S=\frac{\pi^{3/2}}{3}-\sqrt{\pi}-\frac{\sqrt{\pi}}{324}\left[9\,_3F_2\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\\+3\,_4F_3\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)+\,_5F_4\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\right].\tag{2}$$ I wonder if there is a simpler form. Elementary functions and simpler special funtions (like Bessel, gamma, zeta, polylogarithm, polygamma, error function etc) are okay, but not hypergeometric functions.</p> <p>Could you help me with it? Thanks!</p>
user153012
153,012
<p>Another possible closed form of $S$ is the following. It containts also a generalized hypergeometric function, but just one.</p> <p>$$S = \frac{\sqrt{\pi}}{648} {_6F_5}\left(\begin{array}c\ 1,\frac32,\frac32,\frac32,\frac32,\frac32\\2,\frac52,\frac52,\frac52,\frac52\end{array}\middle|\,\frac14\right).$$</p> <p><a href="http://www.wolframalpha.com/input/?i=hypergeom%28%5B1%2C%203%2F2%2C%203%2F2%2C%203%2F2%2C%203%2F2%2C%203%2F2%5D%2C%20%5B2%2C%205%2F2%2C%205%2F2%2C%205%2F2%2C%205%2F2%5D%2C%201%2F4%29"><em>WolframAlpha</em></a>'s simplification gives back your form.</p>
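This single-${}_6F_5$ form can be confirmed numerically against the defining series; an mpmath sketch:

```python
from mpmath import mp, mpf, gamma, factorial, nsum, hyper, sqrt, pi, inf

mp.dps = 30
# direct series: S = sum_{n>=1} Gamma(n + 1/2) / ((2n+1)^4 4^n n!)
S = nsum(lambda n: gamma(n + mpf('0.5'))/((2*n + 1)**4 * 4**n * factorial(n)),
         [1, inf])
# the quoted closed form: sqrt(pi)/648 * 6F5(1,3/2,...,3/2; 2,5/2,...,5/2; 1/4)
F = sqrt(pi)/648*hyper([1, 1.5, 1.5, 1.5, 1.5, 1.5],
                       [2, 2.5, 2.5, 2.5, 2.5], 0.25)
print(abs(S - F) < mpf('1e-25'))  # True
```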
915,054
<p>I'm trying to find a closed form of this sum: $$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!}.\tag{1}$$ <a href="http://www.wolframalpha.com/input/?i=Sum%5BGamma%28n%2B1%2F2%29%2F%28%282n%2B1%29%5E4+4%5En+n%21%29%2C+%7Bn%2C1%2CInfinity%7D%5D"><em>WolframAlpha</em></a> gives a large expressions containing multiple generalized hypergeometric functions, that is quite difficult to handle. After some simplification it looks as follows: $$S=\frac{\pi^{3/2}}{3}-\sqrt{\pi}-\frac{\sqrt{\pi}}{324}\left[9\,_3F_2\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\\+3\,_4F_3\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)+\,_5F_4\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\right].\tag{2}$$ I wonder if there is a simpler form. Elementary functions and simpler special funtions (like Bessel, gamma, zeta, polylogarithm, polygamma, error function etc) are okay, but not hypergeometric functions.</p> <p>Could you help me with it? Thanks!</p>
Noam Shalev - nospoon
219,995
<p>First, in view of Legrende's duplication formula, <span class="math-container">$$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!}=2\sqrt{\pi}\sum_{n=1}^\infty\frac{\Gamma(2n)}{\Gamma(n)\,n!\,(2n+1)^4\, 16^n} \\=-\frac{\sqrt{\pi}}{3}\int_0^1 \ln^3(x)\sum_{n=1}^{\infty}\frac{\Gamma(2n)}{\Gamma(n)\,n!}\left(\frac{x^2}{16}\right)^ndx\\ =-\sqrt{\pi}-\frac{\sqrt{\pi}}{6}\int_0^1\frac{\ln^3(x)}{\sqrt{1-x^2/4}}dx=-\sqrt{\pi}-\frac{\sqrt{\pi}}{3}\int_0^{\frac{\pi}{6}}\ln^3(2\sin x)dx$$</span></p> <p>Claim: for <span class="math-container">$0&lt;a\leq \frac{\pi}{2}$</span>,</p> <blockquote> <p><span class="math-container">$$\int_0^a \ln^3\left(\frac{\sin x}{\sin a}\right)dx\tag{0}=\frac{4a-3\pi}{2}a^2\ln(2\sin a)-\frac{3\pi}{4}\zeta(3)+3\left(\frac{\pi}{2}-a\right)\Re\left(\frac12 \operatorname{Li}_3(e^{2ia})+\operatorname{Li}_3(1-e^{2ia})\right)+3\Im\left(\frac14\operatorname{Li}_4(e^{2ia})+\operatorname{Li}_4(1-e^{2ia})\right) $$</span></p> </blockquote> <p>Proof. The idea is exactly identical to the proof displayed in <a href="https://math.stackexchange.com/questions/1438381/closed-form-of-int-01-frac-ln2x-sqrtxa-bx-dx">this question</a>. 
The proof is rather tedious (and obviously inefficient), and ends with a somewhat of a cancellation (implying the existence of a shortcut) , so I omit the boring algebra and outline the main ideas, which can be repeated systematically to obtain closed forms for even higher powers of logsine.</p> <p>things to know: <span class="math-container">$$\ln(2\sin x)=\ln(1-e^{2ix})+i\left(\frac{\pi}{2}-x\right) \tag{1}$$</span> <span class="math-container">$$\small\int\frac{\ln^3(1-x)}{x}dx=\ln^3(1-x)\ln(x)+3\ln^2(1-x)\text{Li}_2(1-x)-6\ln(1-x)\text{Li}_3(1-x)+6\text{Li}_4(1-x) \tag{2}$$</span> <span class="math-container">$$\int_0^a x\ln(2\sin x)dx=-\frac{a}{2}\text{Cl}_2(2a)-\frac14\Re\text{Li}_3(e^{2ia})+\frac{\zeta(3)}{4}\tag{3}$$</span> <span class="math-container">$$\int_0^a x^2\ln(2\sin x)dx=-\frac{a^2}{2}\text{Cl}_2(2a)-\frac{a}{2}\Re\text{Li}_3(e^{2ia})+\frac14\Im\text{Li}_4(e^{2ia})\tag{4}$$</span> <span class="math-container">$$\int_0^a \ln(\sin x)dx=-a\ln2-\frac12 \text{Cl}_2(2a)\tag{5}$$</span> <span class="math-container">$$\int_0^a \ln^2(\sin x)dx=\frac{a^3}{3}+a\ln^2 2-a\ln^2(2\sin a)-\ln(\sin a)\text{Cl}_2(2a)-\Im\text{Li}_3(1-e^{2ia})\tag{6}$$</span></p> <p><span class="math-container">$(1)$</span> is trivial, <span class="math-container">$(2)$</span> is not too hard to find, <span class="math-container">$(5)$</span> and <span class="math-container">$(6)$</span> are shown in the linked answer, and <span class="math-container">$(3)$</span>&amp;<span class="math-container">$(4)$</span> are easily found using <span class="math-container">$\,\,\ln(2\sin x)=-\sum_{n\geq1}\frac{\cos(2xn)}{n}$</span>.</p> <p>It is obvious that since we have <span class="math-container">$(5)$</span> and <span class="math-container">$(6)$</span>, the claim <span class="math-container">$(0)$</span> depends on a closed form for <span class="math-container">$\displaystyle\int_0^a \ln^3(\sin x)dx$</span>, and the latter may be evaluated in terms of <span 
class="math-container">$\displaystyle\int_0^a \ln^3(2\sin x)dx$</span>.</p> <p>But, with the help of <span class="math-container">$(1)$</span>, <span class="math-container">$$\int_0^a \ln^3(2\sin x)dx=\Re\int_0^a \ln^3(1-e^{2ix})dx+3\int_0^a \ln(2\sin x)\left(\frac{\pi}{2}-x\right)^2dx\\ =\frac12\Im\int_1^{e^{2ia}}\frac{\ln^3(1-x)}{x} dx+3\int_0^a \ln(2\sin x)\left(\frac{\pi}{2}-x\right)^2dx$$</span></p> <p>(Same idea @RandomVariable had in <a href="https://math.stackexchange.com/questions/917154/looking-for-closed-forms-of-int-0-pi-4-ln2-sin-x-dx-and-int-0-pi-4?lq=1">this answer</a>.)</p> <p>Now we employ <span class="math-container">$(2),(3),(4),$</span> and <span class="math-container">$(5)$</span>. Some expressions cancel and claim follows.<span class="math-container">$\square $</span> </p> <p>This result, together with the fact that <span class="math-container">$e^{i\pi/3}$</span> and <span class="math-container">$1-e^{i\pi/3}$</span> are conjugates, yields <span class="math-container">$\displaystyle \int_0^{\frac{\pi}{6}} \ln^3(2\sin x)dx=-\frac{\pi}{4}\zeta(3)-\frac94\Im\text{Li}_4(e^{i\pi/3})$</span>, and</p> <blockquote> <p><span class="math-container">$$S=\sqrt{\pi}\left(\frac{\pi}{12}\zeta(3)+\frac{9}{12}\Im\text{Li}_4(e^{i\pi/3})-1\right)$$</span></p> </blockquote> <p>This form is equivalent to @user153012's form, as <span class="math-container">$$\frac{2}{\sqrt{3}}\Im\text{Li}_4(e^{i\pi/3})=\sum_{n\geq 0}\frac{(-1)^n}{(3n+1)^4}+\sum_{n\geq 0}\frac{(-1)^n}{(3n+2)^4} \\=\frac{\psi^{(3)}\left(\frac13\right)}{216}-\frac{\pi^4}{81}$$</span></p> <hr> <p>Also, as noted in the comments in the linked question, this may be used to write a closed form for a certain hypergeometric function.</p> <hr> <p>This serves as a generalisation for the series, because <span class="math-container">$\displaystyle \sum_{n=1}^{\infty} \frac{\Gamma(n+1/2)}{(2n+1)^4 n!}a^{2n}=-\sqrt{\pi}\left(1+\frac1{6a}\int_0^{\sin^{-1} a}\ln^3\left(\frac{\sin x}{a}\right)dx\right)$</span> </p> 
<p>As an example, using closed forms for trilogarithms displayed in <a href="https://math.stackexchange.com/questions/932932/known-exact-values-of-the-operatornameli-3-function?rq=1">this post</a>, we have <span class="math-container">$$\int_0^{\frac{\pi}{4}}\ln^3(\sqrt{2}\sin x)dx=-\frac{\pi^3}{128}\ln2-\frac{3\pi}{8}\zeta(3)+\frac34\beta(4)+3\Im\text{Li}_4(1-i)$$</span></p> <p>where <span class="math-container">$\beta(4)=\Im\text{Li}_4(i)$</span> is a value of Dirichlet's beta function.</p> <p>Or equivalently, <span class="math-container">$$\sum_{n=1}^{\infty} \frac{\Gamma\left(n+\frac12\right)}{(2n+1)^4\,2^n\,n!}=-\sqrt{\pi}-\frac{\sqrt{2\pi}}{6}\left(-\frac{\pi^3}{128}\ln2-\frac{3\pi}{8}\zeta(3)+\frac34\beta(4)+3\Im\text{Li}_4(1-i)\right)$$</span></p>
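The boxed value of $S$ above (the series taken at $a=1/2$, via the generalisation stated at the end of the answer) can be sanity-checked in plain Python by summing the defining series of $\zeta(3)$ and $\Im\,\mathrm{Li}_4(e^{i\pi/3})=\sum_{n\ge1}\sin(n\pi/3)/n^4$ directly. This is an editorial sketch, not part of the original post; a few digits of agreement is all it checks.

```python
import math

# Series side of the boxed result: a = 1/2, so a^(2n) = 4^(-n)
S_series = sum(math.gamma(n + 0.5) / ((2 * n + 1)**4 * 4**n * math.factorial(n))
               for n in range(1, 60))

# Closed-form side: zeta(3) and Im Li_4(e^{i pi/3}) summed from their
# defining series (both converge absolutely, so truncation is safe)
zeta3 = sum(1 / n**3 for n in range(1, 200000))
im_li4 = sum(math.sin(n * math.pi / 3) / n**4 for n in range(1, 200000))

S_closed = math.sqrt(math.pi) * (math.pi / 12 * zeta3 + 0.75 * im_li4 - 1)

print(S_series, S_closed)  # both ~ 0.00281
```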
405,866
<p>Original question: For compact metric spaces, plenty of subtly different definitions converge to the same concept. Overtness can be viewed as a property dual to compactness. So is there a similar story for overt metric spaces?</p> <p>Edit: Since overtness is trivially true assuming the Law of the Excluded Middle, clearly the question is primarily interesting when we do not assume the LEM.</p> <p>Edit 2: It looks like it is extremely difficult for a metric space to not be overt even in constructive settings. So editing the question to ask if there is ANY model where metric spaces are not overt.</p> <p>Edit 3: For these reasons I changed the question again, from &quot;Is there any model of mathematics where there exists a metric space that is not overt?&quot;. PT</p>
Arno
15,002
<p>In computable analysis, the typical approach to metric spaces is that of a <em>computable metric space</em>. If we assume that there already is some external concept of the metric space we want to handle, we will ask for a particular dense sequence such that we can compute the distance of any two points in that sequence from their indices. Without any prior understanding of the space, we should probably talk about complete spaces only (to avoid confusion over which points exist and which don't exist), so we again start with a sequence with computable distances, and then consider its completion to be the computably Polish space we are talking about.</p> <p>Either approach means that our spaces by construction have a computable dense sequence, which renders them overt. However, this also means that we do not get that closed (or even <span class="math-container">$\Pi^0_2$</span>-) subspaces of a computably Polish space are computably Polish themselves. The problem is precisely overtness, because an overt closed subset of a computably (Quasi)Polish space already contains a dense sequence. The metric itself obviously translates to subspaces. Thus, we should be looking at non-overt subspaces of a computable metric space to get the examples you are looking for.</p> <p>The spaces <span class="math-container">$2^\omega$</span> and <span class="math-container">$\mathbb{N}^\omega$</span> are particularly illuminating examples of what is happening here. In either space, the usual way to think about a closed set is as the set of infinite paths through some given tree. The set will be overt, if we can choose the tree to be pruned, i.e. have no leaves/deadends. So, the set of infinite paths through Kleene's tree equipped with the distance inherited from <span class="math-container">$2^\omega$</span> would be a very much non-overt space in computable analysis.</p>
405,866
<p>Original question: For compact metric spaces, plenty of subtly different definitions converge to the same concept. Overtness can be viewed as a property dual to compactness. So is there a similar story for overt metric spaces?</p> <p>Edit: Since overtness is trivially true assuming the Law of the Excluded Middle, clearly the question is primarily interesting when we do not assume the LEM.</p> <p>Edit 2: It looks like it is extremely difficult for a metric space to not be overt even in constructive settings. So editing the question to ask if there is ANY model where metric spaces are not overt.</p> <p>Edit 3: For these reasons I changed the question again, from &quot;Is there any model of mathematics where there exists a metric space that is not overt?&quot;. PT</p>
Andrej Bauer
1,176
<p>To conjure up a non-overt space we must slightly change the definition of topology, since even intuitionistically every space is overt, so long as every union of opens is open.</p> <p>Let <span class="math-container">$\Sigma$</span> be the <a href="https://ncatlab.org/nlab/show/Sierpinski+space" rel="nofollow noreferrer">Sierpinski space</a>, whose underlying set consists of the truth values <span class="math-container">$\{\bot, \top\}$</span> and the topology is generated by the basic open <span class="math-container">$\{\top\}$</span>.</p> <p>We topologize the topology <span class="math-container">$\mathcal{O}(X)$</span> of a space <span class="math-container">$X$</span> with the <a href="https://ncatlab.org/nlab/show/Scott+topology" rel="nofollow noreferrer">Scott topology</a>.</p> <p><strong>Definition:</strong> A space <span class="math-container">$X$</span> is overt when the map <span class="math-container">$\exists_X : \mathcal{O}(X) \to \Sigma$</span>, defined by <span class="math-container">$\exists_X(U) = (\exists x \in X . x \in U)$</span>, is continuous.</p> <p>In other words, <span class="math-container">$\exists_X(U) = \top$</span> if, and only if, the open set <span class="math-container">$U$</span> is inhabited.</p> <p><strong>Theorem:</strong> Using the traditional definition of topological spaces, every space <span class="math-container">$X$</span> is overt.</p> <p><em>Proof.</em> Let <span class="math-container">$X$</span> be a topological space. We only need to verify that <span class="math-container">$$\exists_X^{-1}(\{\top\}) = \{U \in \mathcal{O}(X) \mid \exists x \in X \,.\, x \in U\}$$</span> is Scott-open. 
But this is easy: it is obviously an upper set (a superset of an inhabited set is inhabited), so we just need to check the directed union condition: if <span class="math-container">$\bigcup_{i \in I} U_i$</span> is an inhabited directed union, then obviously there is <span class="math-container">$i \in I$</span> such that <span class="math-container">$U_i$</span> is inhabited. <span class="math-container">$\Box$</span></p> <p>So far we have not used excluded middle, so simply passing to an intuitionistic setting is not going to help. We need to change the definition of topological spaces.</p> <p>One possibility is to use <a href="https://ncatlab.org/nlab/show/locale" rel="nofollow noreferrer">locales</a> instead of spaces, in which case overtness is related to the notion of open maps, see Ingo's answer.</p> <p>Here we will do it a more pedestrian way, by changing the definition of topology so that the above definition of <span class="math-container">$\exists_X$</span> becomes invalid for a suitably chosen <span class="math-container">$X$</span>, which will then be an example of a non-overt space. This can be accomplished in several ways, some of which are more direct than others. Let me just present the main idea, using the setup of computable mathematics, as described by Arno in his answer.</p> <p>In (one kind of) computable mathematics we require &quot;everything to be computable&quot;. In particular, open subsets are not closed under arbitrary unions, but only the <em>computable</em> ones. 
For example, the discrete topology on <span class="math-container">$\mathbb{N}$</span> is generated by taking unions of <em>computable</em> sequences of singletons <span class="math-container">$\{\{n\} \mid n \in \mathbb{N}\}$</span>, which means that <span class="math-container">$\mathcal{O}(\mathbb{N})$</span> will consist of the <em>computably enumerable</em> subsets of <span class="math-container">$\mathbb{N}$</span>, and <em>not</em> the powerset of <span class="math-container">$\mathbb{N}$</span>.</p> <p>As it turns out, in this setting we should think of <span class="math-container">$\Sigma = \{\bot, \top\}$</span> as the set of <em>semidecidable</em> truth values, i.e., the computable maps <span class="math-container">$\mathbb{N} \to \Sigma$</span> correspond to the computably enumerable subsets of <span class="math-container">$\mathbb{N}$</span> (not the computable ones, which we get as maps <span class="math-container">$\mathbb{N} \to \{0,1\}$</span> where <span class="math-container">$\{0,1\}$</span> has the discrete topology).</p> <p>Now, consider a subset <span class="math-container">$X \subseteq \mathbb{N}$</span> which is not computably enumerable, such as the complement of the Halting set. We again equip it with the discrete topology generated by the singletons <span class="math-container">$\{\{n\} \mid n \in X\}$</span>. Observe that <span class="math-container">$X$</span> is clearly a metric space, even a discrete metric space whose (computable) metric is defined by <span class="math-container">$d(m, n) = \mathsf{if} \; m = n \; \mathsf{then}\; 0 \; \mathsf{else}\; 1$</span>. The open subsets of <span class="math-container">$X$</span> are of the form <span class="math-container">$S \cap X$</span> where <span class="math-container">$S$</span> is a computably enumerable set. 
If there were a computable map <span class="math-container">$\exists_X : \mathcal{O}(X) \to \Sigma$</span> satisfying <span class="math-container">$\exists_X(U) = (\exists x \in X \,.\, x \in U)$</span>, then <span class="math-container">$\{n \in \mathbb{N} \mid n \in X\}$</span> would be semidecidable, because <span class="math-container">$n \in X \iff \exists_X(X \cap \{n\}) = \top$</span>. This would make <span class="math-container">$X$</span> computably enumerable, which it isn't. Therefore, <span class="math-container">$X$</span> is not overt.</p>
2,312,016
<p>Prove limit of three variables using (ε, δ)-definition.</p> <p>$$\lim_{(x, y, z)\to (0, 1, 2)} (3x+3y-z)=1$$</p> <p>I have no idea how to do this with three variables.</p>
5xum
112,884
<p>You have to prove that for every $\epsilon &gt; 0$, there exists some $\delta &gt; 0$ such that if $$\sqrt{x^2+(y-1)^2+(z-2)^2} &lt; \delta$$ then $|3x+3y-z-1| &lt; \epsilon$.</p> <hr> <p>To do that, here's a <strong>hint</strong>:</p> <ul> <li>If $\sqrt{x^2+(y-1)^2+(z-2)^2} &lt; \delta$, then $|x|&lt;\delta$ and $|y-1|&lt;\delta$ and $|z-2|&lt;\delta$.</li> <li>$3x+3y-z = 3x + 3(1+(y-1)) - (z-2) -2$</li> </ul>
158,662
<p>I know that to prove a language is regular, drawing an NFA/DFA that accepts it is a decent way. But what to do in cases like</p> <p>$$ L=\{ww \mid w \text{ belongs to } \{a,b\}^*\} $$</p> <p>where we need to find out whether it is regular or not. The pumping lemma can be used for irregularity, but how to justify it in a case where it can be regular?</p>
sxd
12,500
<p>This language is not regular.</p> <p>HINT Suppose that it is; then the pumping lemma should hold.</p> <p>Let $p$ be the pumping length, and pick $w = a^pba^pb$. Can you proceed now?</p>
578,337
<p>For $n=1,2,3,\dots,$ and $|x| &lt; 1$ I need to prove that $\frac{x}{1+nx^2}$ converges uniformly to the zero function. How? For $|x| &gt; 1$ it is easy. </p>
ncmathsadist
4,154
<p>You have $${x\over 1 + nx^2} = {1\over \sqrt{n}} {{\sqrt{n}x\over 1 + nx^2}} $$ The function $x\mapsto{x\over 1 + x^2}$ is bounded; let $M$ be the supremum of its absolute value. Then you have $$\left|{x\over 1 + nx^2}\right| \le {M\over \sqrt{n}}.$$</p>
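A small numerical illustration of this bound (a sketch, not part of the original answer): the supremum $M$ of $|x/(1+x^2)|$ is $1/2$, attained at $x=\pm1$, so the uniform bound is $1/(2\sqrt n)$.

```python
import math

def f_n(x, n):
    return x / (1 + n * x**2)

M = 0.5  # sup of |x/(1+x^2)| over the reals, attained at x = +-1

xs = [i / 5000 - 1 for i in range(10001)]  # grid on [-1, 1]
for n in (1, 10, 100, 10000):
    sup_n = max(abs(f_n(x, n)) for x in xs)
    print(n, sup_n, M / math.sqrt(n))  # sup_n <= M/sqrt(n), tending to 0
```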
290,903
<p>I am unable to understand the fundamental difference between a Gradient vector and a Tangent vector. I need to understand the geometrical difference between the both. </p> <p>By Gradient I mean a vector $\nabla F(X)$ , where $ X \in [X_1 X_2\cdots X_n]^T $</p> <p>Note: I saw similar questions on "Difference between a Slope and Gradient" but the answers didn't help me much.</p> <p>Appreciate any effort.</p>
Community
-1
<p>I suppose the question has been answered in the comments.</p> <p>The gradient of a function $(x_1,x_2,\ldots,x_n)\mapsto y$ is the vector $\left(\dfrac{\partial y}{\partial x_1},\dfrac{\partial y}{\partial x_2},\ldots,\dfrac{\partial y}{\partial x_n}\right)$.</p> <p>The tangent to a curve $x\mapsto(y_1,y_2,\ldots,y_m)$ is the vector $\left(\dfrac{\mathrm dy_1}{\mathrm dx},\dfrac{\mathrm dy_2}{\mathrm dx},\ldots,\dfrac{\mathrm dy_m}{\mathrm dx}\right)$.</p> <p>Both can be thought of as special cases of the Jacobian of a vector-valued multivariate function $(x_1,x_2,\ldots,x_n)\mapsto(y_1,y_2,\ldots,y_m)$, which is the matrix $$\begin{pmatrix} \frac{\partial y_1}{\partial x_1} &amp; \frac{\partial y_1}{\partial x_2} &amp; \cdots &amp; \frac{\partial y_1}{\partial x_n}\\ \frac{\partial y_2}{\partial x_1} &amp; \frac{\partial y_2}{\partial x_2} &amp; \cdots &amp; \frac{\partial y_2}{\partial x_n}\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ \frac{\partial y_m}{\partial x_1} &amp; \frac{\partial y_m}{\partial x_2} &amp; \cdots &amp; \frac{\partial y_m}{\partial x_n}\\ \end{pmatrix}$$</p>
489,109
<p>I've been stumped on this problem for hours and cannot figure out how to do it from tons of tutorials.</p> <p>Please note: This is an intro to calculus, so we haven't learned derivatives or anything too complex.</p> <p>Here's the question: </p> <p>Let $f(x) = x^5 + x + 7$. Find the value of the inverse function at a point. $f^{-1}(1035) = $_<strong>__</strong>?</p> <p>I tried setting $f(x)$ as $y$.. and solving for $x$. Clearly that doesn't help lol. I've tried many different approaches and cannot figure out the answer. I used wolframalpha, my textbook, notes, examples, and tons of Google searches and nothing makes sense. Can someone please help? Thanks!!</p>
QED
91,884
<p>$$1035-7=1028=1024+4=4^5+4$$ Therefore $f^{-1}(1035)=4$.</p>
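The spot-check above generalizes: $f(x)=x^5+x+7$ is strictly increasing, so $f^{-1}$ of any value can be found by bisection even when no nice integer solution jumps out. An illustrative sketch, not part of the original answer:

```python
def f(x):
    return x**5 + x + 7

def f_inverse(y, lo=-10.0, hi=10.0):
    # f is strictly increasing, so bisection on [lo, hi]
    # converges to the unique x with f(x) = y
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f_inverse(1035))  # ~ 4, matching 4^5 + 4 + 7 = 1035
```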
1,855,641
<p>$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$$</p> <p>We have three unknowns; if there were two it would be easy, but I have no idea what to do when there are three. Any hints?</p> <p><strong>note</strong>: There isn't any information about the values of $a$, $b$ and $c$.</p>
Roby5
243,045
<p>Using AM-GM inequality, we have</p> <blockquote> <p>$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} \geq 3 \cdot \sqrt[3]{\frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{a}}=3$$</p> </blockquote> <p>The equality is indeed attained when</p> <blockquote> <p>$$\frac{a}{b}=\frac{b}{c}=\frac{c}{a}\tag{1}$$</p> </blockquote> <p>From $(1)$, we must have $a=b=c$ to attain equality.</p> <p><strong>Note:</strong></p> <ul> <li><p>The AM-GM inequality holds true in this case, only when $a,b,c&gt;0$ or $a,b,c&lt;0$. This can be seen by substituting $x=\frac{a}{b},y=\frac{b}{c}, z=\frac{c}{a}$ and setting $x,y,z &gt;0$</p></li> <li><p>A simple counter-example to show that the sign of $a,b,c$ must be same is $a=b=-1$ and $c=1$.(This can also be proved using contradiction in the previous argument.)</p></li> <li><p>If $a,b,c \in \mathbb{R}$ and no other condition is given, then as pointed out by @Behrouz Maleki, $\frac{a}{b}+\frac{b}{c}+\frac{c}{a} \rightarrow -\infty.$</p></li> </ul>
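A quick numerical illustration of the inequality, the equality case, and the sign caveat from the notes (an editorial sketch, not part of the original answer):

```python
import random

def expr(a, b, c):
    return a / b + b / c + c / a

random.seed(0)
for _ in range(10000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    # AM-GM bound: holds whenever a, b, c all have the same sign
    assert expr(a, b, c) >= 3 - 1e-9

print(expr(1.0, 1.0, 1.0))    # equality case a = b = c: gives 3.0
print(expr(-1.0, -1.0, 1.0))  # mixed signs: gives -1.0, below the bound
```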
1,855,641
<p>$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$$</p> <p>We have three unknowns; if there were two it would be easy, but I have no idea what to do when there are three. Any hints?</p> <p><strong>note</strong>: There isn't any information about the values of $a$, $b$ and $c$.</p>
Ryan Roberson
353,357
<p>Well, the best I can think of [noting that you asked for the <em>minimum</em>, not the <em>least absolute value</em>]:</p> <ul> <li><p>if $A$ is some arbitrarily <em>large</em> negative number, e.g. $-10^{100}$</p></li> <li><p>and $B$ is some arbitrarily <em>small</em> positive number, e.g. $10^{-100}$ (so therefore $A/B= -10^{200}$)</p></li> <li><p>and $C$ is some arbitrary <em>non-zero</em> number</p></li> </ul> <p>As $B$ decreases, $B/C$ approaches $0$, and as $A$ approaches $-\infty$, $C/A$ approaches $0$ as well, but $A/B$ approaches $-\infty$.</p> <p>As for minimizing the absolute value, that would take much more complex [or just different] maths. To summarize, $A=-\infty$ and $B=1/\infty$ and $C\ne0$.</p> <p>I am almost certain you meant minimizing the absolute value... but just in case, here's what I could think of.</p>
1,855,641
<p>$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$$</p> <p>We have three unknowns; if there were two it would be easy, but I have no idea what to do when there are three. Any hints?</p> <p><strong>note</strong>: There isn't any information about the values of $a$, $b$ and $c$.</p>
chaviaras michalis
247,390
<p>Another idea to see this is the following: let's suppose we have $$f(a,b,c)= \frac{a}{b} + \frac{b}{c} + \frac{c}{a}, \hspace{5mm}a,b,c\in \mathbb{R}^{*}$$ We have $\frac{\partial f}{\partial a}= \frac{1}{b}-\frac{c}{a^2} , \hspace{2mm} \frac{\partial f}{\partial b} = \frac{1}{c} - \frac{a}{b^2}, \hspace{2mm} \frac{\partial f}{\partial c}=\frac{1}{a} -\frac{b}{c^2} $ </p> <p>So if we demand $\nabla f = 0 $, or equivalently $$\frac{\partial f}{\partial a}= 0 \Leftrightarrow \frac{c}{a^2}=\frac{1}{b} \hspace{5mm} (1)$$ $$\frac{\partial f}{\partial b} = 0 \Leftrightarrow \frac{a}{b^2}=\frac{1}{c}\hspace{5mm} (2)$$ $$\frac{\partial f}{\partial c} = 0 \Leftrightarrow \frac{b}{c^2}=\frac{1}{a}\hspace{5mm} (3)$$ From (1) and (2) we have $$\frac{b}{a^2} = \frac{1}{c} \text{ and } \frac{a}{b^2}=\frac{1}{c}$$ which gives us $b^3=a^3 \Rightarrow b=a$, and (3) then gives us $c^2=a^2 \Rightarrow c = \pm a$</p> <p>The Hessian matrix of $f$ is \begin{equation*} \nabla^2 f = \begin{bmatrix} 2ca^{-3} &amp; -{b^{-2}} &amp; -a^{-2} \\ -b^{-2} &amp; 2ab^{-3}&amp; -c^{-2} \\ -a^{-2} &amp; -c^{-2} &amp; 2bc^{-3} \end{bmatrix} \end{equation*}</p> <p>\begin{equation*} \det(\nabla^2 f) = (2ca^{-3}) \begin{vmatrix} 2ab^{-3} &amp; -c^{-2} \\ -c^{-2} &amp; 2bc^{-3} \end{vmatrix} -(-b^{-2}) \begin{vmatrix} -b^{-2} &amp; -c^{-2} \\ -a^{-2} &amp; 2bc^{-3} \end{vmatrix} +(-a^{-2}) \begin{vmatrix} -b^{-2} &amp;2ab^{-3} \\ -a^{-2} &amp; c^{-2} \end{vmatrix} = \cdots = 8a^{-2}b^{-2}c^{-2} - 4a^{-3}c^{-1} +2b^{-3}c^{-3}-2a^{-2}b^{-2}c^{-2}-2a^{-3}b^{-3} \end{equation*} So when the gradient is zero we have two cases: 1) $a=b,\,c=a$ and 2) $a=b,\,c=-a$</p> <p>For the first case the last determinant gives us $$1) : \hspace{7mm} 6a^{-6}-4a^{-4}$$ and for the second case $$2) : \hspace{7mm} 2a^{-6}+4a^{-4}$$ For the second case we can't have a local minimum, since if we substitute $c=-a$ into the Hessian matrix, then the principal minor given by the element $(1,1)$ of the Hessian equals $-2a^{-2}$, which is obviously negative for every $a\neq 0$.</p> <p>For the first case, since we can find $a$ such that $6a^{-6}-4a^{-4} &gt;0$, we can move forward to the second principal minor determinant, which is: </p> <p>\begin{equation*} \begin{vmatrix} 2a^{-2} &amp; -\frac{1}{a^2}\\ -\frac{1}{a^2} &amp; 2a^{-2} \end{vmatrix} =\cdots = 3a^{-4} &gt;0, \hspace{7mm} \forall a \in \mathbb{R}^{*} \end{equation*} The next move is to look at the element $(1,1)$ of the Hessian matrix, which is $2a^{-2}$, so it is positive again for all $a$ except zero. So for $a=b=c$ we have a local minimum, which is global because all the local minima have the same value $1+1+1=3$.</p>
1,009,503
<p>Theorem 15 in Chapter 15 of Peter Lax's functional analysis book says</p> <p>$X$ is a Banach space, $Y$ and $Z$ are closed subspaces of $X$ that complement each other, $X = Y \oplus Z$, in the sense that every $x\in X$ can be decomposed uniquely as $x = y+z$ where $y\in Y$, $z\in Z$. Denote the two components $y$ and $z$ of $x$ by $y = P_Y x$, $z = P_Z x$. Then</p> <p>1) $P_Y$ and $P_Z$ are linear maps of $X$ onto $Y$ and $Z$ respectively.</p> <p>2) $P_Y^2 = P_Y$, $P_Z^2 = P_Z$, $P_YP_Z = 0$.</p> <p>3) $P_Y$ and $P_Z$ are continuous.</p> <p>However, I would like to say that for all $\hat y\in Y$, $\|y-x\| \leq \|\hat y - x\|$,</p> <p>which is a property of projections we already know, but doesn't seem to be built from the definition of projections given by Lax. I'm not too familiar with Banach spaces, so I don't really want to use this property until I know for sure it is implied by the previous properties. </p> <p>Can anyone give me some intuition if 1) this property is implied by the definitions, 2) this property may not hold in some special case (not sure if $X$ is reflexive), or 3) this is implicit in the definition of projection and I'm just reading the text wrong? </p> <p>Thanks!</p> <p>Edit: Just to add clarification, I'm considering any complemented Banach space (infinite dimensional). Not necessarily a Hilbert space, and no indication that it is reflexive.</p>
Graham Kemp
135,106
<p>In order to count occurrences of 'heads in a row' you stop counting just before the first tail. &nbsp; This is a &ldquo;count successes before one failure&rdquo; scenario; so the number of heads in a row $X$ given a biased coin, with probability of heads $p$, has a <strong>negative binomial probability distribution</strong>.</p> <p>$$\begin{align} X\mid p &amp;\sim \mathcal{N\!\!B}(p, 1) \\[2ex] \mathsf P(X=x\mid p) &amp; = p^x(1-p) &amp; : x\in\{0,1,\ldots \} \\[2ex] \mathsf E[X\mid p] &amp; = \sum_{x=0}^\infty x p^x (1-p) \\[1ex] &amp; = \frac{p}{1-p} \\[2ex] \mathsf E[X\mid p=0.5] &amp; = 1 \\[2ex] \mathsf E[X\mid p=0.8] &amp; = 4 \end{align}$$</p>
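A direct numerical check of the stated pmf and mean (a sketch, assuming the same parametrization $\mathsf P(X=x)=p^x(1-p)$ as above; not part of the original answer):

```python
def pmf(x, p):
    # P(X = x): x heads in a row, then the run is ended by a tail
    return p**x * (1 - p)

for p in (0.5, 0.8):
    xs = range(2000)  # truncation; the tail beyond this is astronomically small
    total = sum(pmf(x, p) for x in xs)
    mean = sum(x * pmf(x, p) for x in xs)
    print(p, total, mean)  # total ~ 1, mean ~ p/(1-p)
```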
1,369,990
<p>I came across a question - <a href="https://www.hackerrank.com/contests/ode-to-code-finals/challenges/pingu-and-pinglings" rel="nofollow">https://www.hackerrank.com/contests/ode-to-code-finals/challenges/pingu-and-pinglings</a></p> <p>The question basically asks to generate all combinations of size k and sum up the product of the numbers in each combination. Is there a general formula to calculate this, as it is quite tough to generate all the possible combinations and operate on them? For example, for n=3 (number of elements), k=2, and the given 3 numbers 4 2 1, the answer will be 14: for k=2 the combinations are {4,2},{4,1},{2,1}, so the answer is (4×2)+(4×1)+(2×1)=8+4+2=14. I hope I am clear in asking my question.</p>
Rus May
17,853
<p>If the numbers in your set are $a_1,\dots,a_n$, then your sum is the coefficient of $x^k$ in the product $\prod_i (1+a_ix)$. If there's a relationship between the $a_i$'s, then you might be able to simplify the product and then extract the coefficient of $x^k$. Otherwise, I doubt there's much simplification.</p>
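The coefficient of $x^k$ in $\prod_i(1+a_ix)$ (the elementary symmetric polynomial $e_k$) can be computed without enumerating subsets, by multiplying the factors in one at a time. A sketch of the standard dynamic program, not part of the original answer:

```python
def sum_of_k_products(nums, k):
    # coeffs[j] holds the coefficient of x^j in the product of the
    # factors (1 + a*x) processed so far, i.e. e_j of those elements
    coeffs = [1] + [0] * k
    for a in nums:
        # update from high degree to low so each element is used at most once
        for j in range(k, 0, -1):
            coeffs[j] += a * coeffs[j - 1]
    return coeffs[k]

print(sum_of_k_products([4, 2, 1], 2))  # 4*2 + 4*1 + 2*1 = 14
```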
115,367
<p>Let $f(z)$ be an analytic function on $D=\{z : |z|\leq 1\}$. $f(z) &lt; 1$ if $|z|=1$. How to show that there exists $z_0 \in D$ such that $f(z_0)=z_0$. I try to define $f(z)/z$ and use Schwarz Lemma but is not successful. </p> <p>Edit: Hypothesis is changed to $f(z) &lt; 1$ if $|z|=1$. I try the following. If $f$ is constant, then conclusion is true. Suppose that $f$ is not constant and $f(z_0)\neq z_0$ for all $z_0\in D$. Then $g(z)=\frac{1}{f(z)-z}$ is analytic. If I can show that $g$ is bounded, then we are done. But it seems that $g$ is not bounded.</p>
user8268
8,268
<p>$|f|$ attains its maximum on the boundary, hence $f$ maps $D$ to $D$, and you can use the Brouwer fixed point theorem.</p>
2,173,918
<p>Let $f(z)=\sum\limits_{k=1}^\infty\frac{z^k}{1-z^k}$. I want to show that this series represents a holomorphic function in the unit disk. I'm, however, quite confused. For example, is $f(z)$ even a power series? It doesn't look as such. Here's what I have so far come up with.</p> <blockquote> <p>Proof:</p> </blockquote> <p>$$ \sum\limits_{k=1}^\infty\frac{z^k}{1-z^k}=-\sum\limits_{k=1}^\infty\frac{1}{1-z^{-k}}=-\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty z^{kn}$$</p> <p>[If the above expression is correct then we have a Laurent series of a power series].</p> <p>Now, $|c_n|=1$ for all $n$. By Parseval's formula,</p> <p>$$2\pi\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty \rho^{2kn}=\sum\limits_{k=-\infty}^{-1} \int\limits_0^{2\pi}\left|g(\rho^ke^{it})\right|^2dt=^? \int\limits_0^{2\pi} \sum\limits_{k=1}^{\infty} \left|g(\rho^{-k}e^{it})\right|^2dt$$ where $0\le\rho\le 1$, and $g$ is some function (holomorphic) on the unit disk. So we know that this integral (above in the middle) exists.</p> <p>Now, I believe, what remains to be proved is that the infinite series of this integral also exists. Do you think this approach is OK, or did I make some mistakes in it? How can we prove that the series $\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty \rho^{2kn}$ converges? And then, since it converges, does this imply that $G$ is holomorphic?</p> <p>Thanks a lot.</p>
zhw.
228,045
<p>Suppose $0&lt;r&lt;1.$ Then $r^k \to 0.$ Thus $r^k\le 1/2$ for large $k.$ For these $k$ and $|z|\le r$ we have</p> <p>$$\left| \frac{z^k}{1-z^k}\right | \le \frac{r^k}{1-r^k} \le 2r^k.$$</p> <p>Since $\sum 2r^k&lt;\infty,$ the Weierstrass M-test implies our series converges uniformly on $\{|z|\le r\}.$ This is true for every $r\in (0,1),$ hence the series converges uniformly on compact subsets of the open unit disc. This proves the series converges to a holomorphic function in the open unit disc.</p>
1,353,498
<p>The problem is to prove or disprove that there is a noncyclic abelian group of order $51$. </p> <p>I don't think such a group exists. Here is a brief outline of my proof:</p> <p>Assume for a contradiction that there exists a noncyclic abelian group of order $51$.</p> <p>We know that every element (except the identity) has order $3$ or $17$. Assume that $|a|=3$ and $|b|=17$. Then I managed to prove that the subgroups generated by $a$ and $b$ only intersect at the identity element, from which we can show that $ab$ is a generator of the whole group, so it is cyclic. Contradiction.</p> <p>So every element (except the identity) has the same order $p$, where $p$ is either $3$ or $17$. </p> <p>If $p=17$, take $a$ not equal to the identity, and take $b$ not in the subgroup generated by $a$. Then we can prove that $a^kb^l$ where $k,l$ are integers between $0$ and $16$ inclusive are distinct, hence the group has more than $51$ elements, contradiction.</p> <p>If $p=3$, take $a$ not equal to the identity and take $b$ not in the subgroup generated by $a$. Then we can prove that $a^kb^l$ where $k,l$ are integers betwen $0$ and $2$ inclusive are distinct. This subgroup has $9$ elements so we can find $c$ that's not of the form $a^kb^l$. Then we can prove that $a^kb^lc^m$ where $k,l,m$ are integers betwen $0$ and $2$ inclusive are distinct. Then this subgroup has $27$ elements so we can find $d$ that's not of the form $a^kb^lc^m$. Then we prove that $a^kb^lc^md^n$ where $k,l,m,n$ are integers between $0$ and $2$ inclusive are distinct, this being $81$ elements. Contradiction.</p>
Arthur
15,500
<p>Any finite abelian group is isomorphic to a group of the form $$\Bbb Z_{p_1^{a_1}}\times \Bbb Z_{p_2^{a_2}}\times\cdots\times \Bbb Z_{p_n^{a_n}}$$ where $p_i$ are (not necessarily distinct) primes. The order of such a group is $p_1^{a_1}\cdots p_n^{a_n}$. How many ways can this be done for $51$?</p>
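The counting behind this hint can be automated: the number of abelian groups of order $n=\prod_i p_i^{a_i}$ is $\prod_i P(a_i)$, where $P$ is the partition function. A sketch (not part of the original answer):

```python
from functools import lru_cache

def prime_exponents(n):
    # exponents a_i in the prime factorization of n, by trial division
    exps, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return exps

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    # number of partitions of n into parts of size <= max_part
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

def abelian_groups(n):
    result = 1
    for e in prime_exponents(n):
        result *= partitions(e)
    return result

print(abelian_groups(51))  # 51 = 3 * 17, exponents (1, 1): only Z_3 x Z_17 = Z_51
print(abelian_groups(8))   # 2^3: Z_8, Z_4 x Z_2, Z_2 x Z_2 x Z_2, so 3
```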
795,193
<p><strong>THEOREM</strong>: Suppose $\{f_n\}$ is a sequence of continuous functions from $[a,b]$ to $\Bbb R$ that converge pointwise to a continuous function $f$ over $[a,b]$. If $f_{n+1}\leq f_n$, then convergence is uniform. </p> <p>Then, why is the continuity of the functions $f_i$ important for the theorem?</p> <p><strong>Almost all textbooks seem to use the property of compactness, which I will get to learn only in the Metric Spaces chapter. Can anyone please give me an alternate reasoning?</strong></p> <p><strong>Attempt</strong>: If each $f_k$ is continuous in $[a,b]$, it means it is bounded as well. Let the infimum of $f_k$ be denoted as $m_k$ and supremum as $M_k$ in $[a,b]$. Then, since the sequence of functions is given to be monotonically increasing, this means:</p> <p>$m_k &lt; m_{k+1} &lt; .......&lt; m_n &lt;...$ and $M_k &lt; M_{k+1} &lt; .......&lt; M_n &lt; ...$</p> <p>How do I proceed next?</p> <p><strong>Edit</strong> : Unless $f_n(x)$ is continuous at every point, we might not be able to infer the very definition of uniform convergence which states that a uniformly convergent sequence of functions $f_i$ such that $lim ~f_i = f$, then unless each $f_i$ is continuous, we won't be able to find the value of $f_i(x)$ at every point $x$ and hence won't be able to write the following definition of uniform convergence "</p> <p>$\forall \epsilon &gt;0, \exists ~m \in N$ s.t $f(x)-f_i(x) &lt; \epsilon ~~\forall~~ n\geq m ~~\forall x ~\in ~ I$ where $I$ is the given interval</p> <p>Would this be correct??</p>
Tony Piccolo
71,180
<p>Compactness cannot be avoided.</p> <p>Let $\varepsilon&gt;0$ and $x \in [a,b]$.</p> <p>Since $f_n(x)$ converges pointwise to $f(x)$, there exists $N_x$ such that $$|f_n(x)-f(x)|&lt;\varepsilon/3 \quad \text{for} \quad n \ge N_x$$ Also, since $f$ and $f_{N_x}$ are continuous, there is an open neighborhood $U_x$ of $x$ such that $$|f(x)-f(y)|&lt;\varepsilon/3\quad \text {and} \quad |f_{N_x}(x)-f_{N_x}(y)|&lt;\varepsilon/3 \quad \text {for} \quad y \in U_x \cap[a,b]$$ Moreover, by monotonicity, one has $$f(x) \le f_n(x) \le f_m(x) \quad \text {for} \quad n\ge m$$ From compactness of $[a,b]$ it follows that there are a finite number of points $x_i \in [a,b]$ such that the $U_{x_i}$ cover $[a,b]$.<br> Let $\bar N$ be the maximum of the $N_{x_i}$.<br> Then, for any $x \in [a,b]$, $\,x$ belongs to some $U_{x_i}$ and $$|f_n(x)-f(x)|\le |f_{N_{x_i}}(x)-f(x)| \le |f_{N_{x_i}}(x)-f_{N_{x_i}}(x_i)|+|f_{N_{x_i}}(x_i)-f(x_i)|+|f(x_i)-f(x)|&lt;\varepsilon$$ for $n \ge \bar N$.</p>
1,443,812
<p>Suppose that $X$ and $Y$ have joint p.d.f.</p> <p>$$ f(x, y) = 3x, \; 0 &lt; y &lt; x &lt; 1.$$ </p> <p>Find $f_X(x)$, the marginal p.d.f. of $X$.</p> <p>This is what I got:</p> <p>$$f_X(x) = \int_0^x f(x, y)dy = \int_0^x 3x dy = 3x^2$$ for $0 &lt; x &lt; 1$.</p> <p>However, if I want to know whether $X$ and $Y$ are independent or not, how can I do it?</p>
purugin
271,254
<p>$X$ and $Y$ are independent if and only if $f(x,y)=f_X(x)\,f_Y(y)$ for all $x,y$. Since you already have $f_X(x)$, find $f_Y(y)$ and then check whether $f_X(x)f_Y(y)$ agrees with $f(x,y)$; if it does, they are independent, otherwise they are not. (Here the support $0&lt;y&lt;x&lt;1$ is not a rectangle, which already rules out independence.)</p>
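A numeric illustration of this check for the density above (an editorial sketch, not part of the original answer): a simple midpoint rule recovers the marginals, and a single point where the product fails already settles the question.

```python
def f(x, y):
    # joint density: 3x on the triangle 0 < y < x < 1
    return 3 * x if 0 < y < x < 1 else 0.0

def integrate(g, a, b, steps=20000):
    # midpoint rule on [a, b]
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

def f_X(x):
    return integrate(lambda y: f(x, y), 0.0, 1.0)  # integrate out y

def f_Y(y):
    return integrate(lambda x: f(x, y), 0.0, 1.0)  # integrate out x

x0, y0 = 0.5, 0.25
print(f(x0, y0))          # 1.5
print(f_X(x0) * f_Y(y0))  # ~ 0.75 * 1.40625 = 1.0547, not 1.5: not independent
```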
11,090
<p>In <em>MMA</em> (8.0.0/Linux), I tried to to create an animation using the command</p> <pre><code>Export["s4s5mov.mov", listOfFigures] </code></pre> <p>and got the output</p> <p><img src="https://i.stack.imgur.com/bFWPP.png" alt="enter image description here"></p> <p>Doing a little research, one can <a href="http://reference.wolfram.com/mathematica/ref/format/QuickTime.html" rel="nofollow noreferrer">read</a> in the Documentation Center that</p> <p><img src="https://i.stack.imgur.com/woicu.png" alt="enter image description here"></p> <p>And I was wondering <strong>if there is some way to overcome this limitation <em>within MMA</em></strong>.</p> <p><strong>EDIT</strong></p> <p>Here is a sample code of the inverted animation problem:</p> <pre><code>movingP = Table[Show[ ParametricPlot[{Sin[x], Cos[x]}, {x, 0, 2 \[Pi]}, AxesLabel -&gt; {x, y}], Graphics[{Red, PointSize[Large], Point[{Sin[(n \[Pi])/8], Cos[(n \[Pi])/8]}]}] ], {n, 0, 15}] Export["~/Desktop/point.avi", movingP] </code></pre> <p>will produce an avi like this:</p> <p><img src="https://i.stack.imgur.com/JPkLa.gif" alt="enter image description here"></p> <p>(The gif has been tampered to look like the avi)</p>
amr
950
<pre><code>stringToHex[str_] := ToExpression["16^^" &lt;&gt; str]; </code></pre> <p>This is just a way of automating the normal notation you would use, which is 16^^6b (check <a href="http://reference.wolfram.com/mathematica/tutorial/InputSyntax.html">here</a> for the documentation).</p>
3,118,462
<p>Cars arrive according to a Poisson process with rate=2 per hour and trucks arrive according to a Poisson process with rate=1 per hour. They are independent. </p> <p>What is the probability that <strong>at least</strong> 3 cars arrive before a truck arrives? </p> <p>My thoughts: Interarrival of cars A ~ Exp(2 per hour), Interarrival of trucks B ~ Exp(1 per hour). </p> <p>Probability that <strong>at least</strong> 3 cars arrive before a truck arrives</p> <p><span class="math-container">$= 1- Pr(B&lt;A) - Pr(A&lt;B)Pr(B&lt;A) - Pr(A&lt;B)Pr(A&lt;B)Pr(B&lt;A) \\= 1 - (\frac{1}{3})-(\frac{2}{3}\cdot\frac{1}{3})-(\frac{2}{3}\cdot\frac{2}{3}\cdot\frac{1}{3})\\=\frac{8}{27}.$</span> </p> <p>Is this correct?</p>
Paras Khosla
478,779
<p>Consider any one factor pair of <span class="math-container">$68$</span>, say <span class="math-container">$1 \times 68$</span>. Now go on dividing one number by <span class="math-container">$2$</span> and multiplying other by <span class="math-container">$2$</span>, this way the product remains the same but the sum changes. Do the same operation again, you get <span class="math-container">$4 \times 17$</span> which sums up to <span class="math-container">$21$</span> and you have the required factor pair.<span class="math-container">$$\begin{array}{ |p{3cm}||p{3cm}|p{3cm}|p{3cm}| } \hline a &amp; \times &amp; b&amp;a+b\\\hline68 &amp; \times &amp; 1 &amp;69\\ 34 &amp;\times &amp; 2 &amp;36 \\ 17 &amp;\times&amp; 4&amp; 21 \\\hline\end{array}$$</span>You can go through the same procedure for <span class="math-container">$ab=748$</span> and <span class="math-container">$a+b=56$</span> although doing so may require greater practice with two-digit <span class="math-container">$ab$</span> as a prerequisite.</p>
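<p>When the halve-and-double trick stalls (e.g. <span class="math-container">$748 = 4\times 187$</span> with <span class="math-container">$187$</span> odd), a direct search over divisors settles it; a small sketch:</p>

```python
def factor_pair(product, total):
    """Find (a, b) with a * b == product and a + b == total, if any."""
    for a in range(1, int(product ** 0.5) + 1):
        if product % a == 0 and a + product // a == total:
            return a, product // a
    return None

print(factor_pair(68, 21))   # (4, 17)
print(factor_pair(748, 56))  # (22, 34)
```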
531,342
<p>Prove or disprove that the greedy algorithm for making change always uses the fewest coins possible when the denominations available are pennies (1-cent coins), nickels (5-cent coins), and quarters (25-cent coins).</p> <p>Does anyone know how to solve this?</p>
vadim123
73,324
<p>After giving out the maximal number of quarters, there will be $0-24$ cents remaining. Then there will be at most 4 nickels to give out. After giving out nickels greedily, there will be $0-4$ cents remaining, so there will be at most 4 pennies to give out. Now, can you prove that we cannot rearrange our change to use fewer coins?</p>
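<p>The claim can at least be checked exhaustively for small amounts; a sketch (exhaustive comparison is evidence, not the rearrangement proof the answer asks for):</p>

```python
def greedy(amount, coins=(25, 5, 1)):
    """Coins used by the greedy algorithm (largest denomination first)."""
    used = 0
    for c in coins:
        used += amount // c
        amount %= c
    return used

def optimal(amount, coins=(25, 5, 1)):
    """Fewest coins possible, via dynamic programming."""
    best = [0] + [None] * amount
    for v in range(1, amount + 1):
        best[v] = 1 + min(best[v - c] for c in coins if c <= v)
    return best[amount]

# Greedy matches the optimum for every amount up to 499 cents.
assert all(greedy(n) == optimal(n) for n in range(500))
```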
2,853,278
<p><a href="https://math.stackexchange.com/a/203701/312406">This answer</a> suggests the idea, that a local ring $(R, \mathfrak{m})$ whose maximal ideal is nilpotent is in fact an Artinian ring.</p> <p>Is this true? If so, how is it proven?</p>
rschwieb
29,335
<p>There is a version of this theorem even for noncommutative rings. It is called the <a href="https://en.wikipedia.org/wiki/Hopkins%E2%80%93Levitzki_theorem" rel="nofollow noreferrer">Hopkins-Levitzki theorem</a>.</p> <p>It says, in a nutshell, that if $R/J(R)$ is semisimple and $J(R)$ is nilpotent, then $R$ is right Artinian iff right Noetherian. Yours is a special case where $R/J(R)$ is a field.</p> <p>Here is the <a href="http://ringtheory.herokuapp.com/search/results/?H=21&amp;H=23&amp;L=63l&amp;L=63r" rel="nofollow noreferrer">DaRT search</a> which currently yields two examples of local, non-Artinian rings with nilpotent Jacobson radicals.</p>
3,921,847
<p>I had the following question:</p> <blockquote> <p>Does there exist a nonzero polynomial <span class="math-container">$P(x)$</span> with integer coefficients satisfying both of the following conditions?</p> <ul> <li><span class="math-container">$P(x)$</span> has no rational root;</li> <li>For every positive integer <span class="math-container">$n$</span>, there exist an integer <span class="math-container">$m$</span> such that <span class="math-container">$n$</span> divides <span class="math-container">$P(m)$</span>.</li> </ul> </blockquote> <p>I created a proof showing that there was no polynomial satisfying both of these conditions:</p> <blockquote> <p>Suppose that we have a nonzero polynomial with integer coefficients <span class="math-container">$P(x)=\sum_i c_i x^i$</span> without a rational root, and for all positive integer <span class="math-container">$n$</span>, we have an integer <span class="math-container">$m_n$</span> such that <span class="math-container">$n|P(m_n)$</span>. This would imply <span class="math-container">$P(m_n)\equiv0\pmod n\Rightarrow \sum_i c_i m_n^i\equiv0$</span>. By Freshman's Dream we have <span class="math-container">$P(m_n+an)=\sum_i c_i(m_n+an)^i\equiv\sum_i c_im_n^i+c_ia^in^i\equiv\sum_ic_im_n^i=P(m_n)\equiv0\pmod n$</span> for some integer <span class="math-container">$a$</span>. Therefore if <span class="math-container">$b\equiv m_n\pmod n$</span> then <span class="math-container">$P(b)\equiv P(m_n)\equiv0\pmod n$</span>.</p> </blockquote> <blockquote> <p>Now the above conditions and findings imply for all prime <span class="math-container">$p$</span>, we have a number <span class="math-container">$m_p$</span> such that <span class="math-container">$p|P(m_p)$</span>, and that if <span class="math-container">$b\equiv m_p\pmod p$</span> then <span class="math-container">$P(b)\equiv 0\pmod p$</span>. 
Consider the set of the smallest <span class="math-container">$n$</span> primes <span class="math-container">$\{p_1,p_2,p_3,\cdots,p_n\}$</span>. By the Chinese Remainder Theorem there exists an integer <span class="math-container">$b$</span> such that <span class="math-container">$b\equiv m_{p_1}\pmod{p_1},b\equiv m_{p_2}\pmod{p_2},b\equiv m_{p_3}\pmod{p_3},\cdots,b\equiv m_{p_n}\pmod{p_n}$</span>. Then <span class="math-container">$p_1,p_2,p_3\cdots,p_n|P(b)\Rightarrow p_1p_2p_3\cdots p_n|P(b)$</span>. As <span class="math-container">$n$</span> approaches infinity (as there are infinitely many primes), <span class="math-container">$p_1,p_2,p_3\cdots,p_n$</span> approaches infinity. Therefore either <span class="math-container">$P(b)=\infty$</span> or <span class="math-container">$P(b)=0$</span>. Since for finite <span class="math-container">$b$</span> and integer coefficients <span class="math-container">$P(b)$</span> must be finite, then <span class="math-container">$P(b)=0$</span>. However as <span class="math-container">$b$</span> is an integer, this implies <span class="math-container">$P$</span> has a rational root, a contradiction.</p> </blockquote> <p>I'm not sure if my proof is correct, and my main concern is that I am incorrectly using the Chinese Remainder Theorem since I am not sure if I can apply it to infinitely many divisors.</p> <p>Is this proof correct, and if not, how do I solve this question?</p> <p><strong>EDIT:</strong> It appears not only is my proof incorrect (as <span class="math-container">$b$</span> does not converge) as Paul Sinclair has shown, but that according to Jaap Scherphuis there are examples of polynomials that satisfy the conditions. Therefore, my question now is how one can prove the <em>existence</em> of these polynomials while using elementary methods (as this is an IMO selection test problem).</p>
Paul Sinclair
258,282
<p>Your definition of <span class="math-container">$b$</span> depends on <span class="math-container">$n$</span>. So it is not a constant <span class="math-container">$b$</span>, but rather each <span class="math-container">$n$</span> has its own <span class="math-container">$b_n$</span>. And it is not the case that <span class="math-container">$P(b) = \infty$</span> or <span class="math-container">$P(b) = 0$</span>, but instead <span class="math-container">$$\lim_{n\to\infty} P(b_n) = \infty\text{ or }\lim_{n\to\infty} P(b_n) = 0$$</span> But this completely spoils your conclusion. Since you do not have a finite fixed <span class="math-container">$b$</span>, you do not have a root <span class="math-container">$P(b) = 0$</span>.</p>
3,921,847
<p>I had the following question:</p> <blockquote> <p>Does there exist a nonzero polynomial <span class="math-container">$P(x)$</span> with integer coefficients satisfying both of the following conditions?</p> <ul> <li><span class="math-container">$P(x)$</span> has no rational root;</li> <li>For every positive integer <span class="math-container">$n$</span>, there exist an integer <span class="math-container">$m$</span> such that <span class="math-container">$n$</span> divides <span class="math-container">$P(m)$</span>.</li> </ul> </blockquote> <p>I created a proof showing that there was no polynomial satisfying both of these conditions:</p> <blockquote> <p>Suppose that we have a nonzero polynomial with integer coefficients <span class="math-container">$P(x)=\sum_i c_i x^i$</span> without a rational root, and for all positive integer <span class="math-container">$n$</span>, we have an integer <span class="math-container">$m_n$</span> such that <span class="math-container">$n|P(m_n)$</span>. This would imply <span class="math-container">$P(m_n)\equiv0\pmod n\Rightarrow \sum_i c_i m_n^i\equiv0$</span>. By Freshman's Dream we have <span class="math-container">$P(m_n+an)=\sum_i c_i(m_n+an)^i\equiv\sum_i c_im_n^i+c_ia^in^i\equiv\sum_ic_im_n^i=P(m_n)\equiv0\pmod n$</span> for some integer <span class="math-container">$a$</span>. Therefore if <span class="math-container">$b\equiv m_n\pmod n$</span> then <span class="math-container">$P(b)\equiv P(m_n)\equiv0\pmod n$</span>.</p> </blockquote> <blockquote> <p>Now the above conditions and findings imply for all prime <span class="math-container">$p$</span>, we have a number <span class="math-container">$m_p$</span> such that <span class="math-container">$p|P(m_p)$</span>, and that if <span class="math-container">$b\equiv m_p\pmod p$</span> then <span class="math-container">$P(b)\equiv 0\pmod p$</span>. 
Consider the set of the smallest <span class="math-container">$n$</span> primes <span class="math-container">$\{p_1,p_2,p_3,\cdots,p_n\}$</span>. By the Chinese Remainder Theorem there exists an integer <span class="math-container">$b$</span> such that <span class="math-container">$b\equiv m_{p_1}\pmod{p_1},b\equiv m_{p_2}\pmod{p_2},b\equiv m_{p_3}\pmod{p_3},\cdots,b\equiv m_{p_n}\pmod{p_n}$</span>. Then <span class="math-container">$p_1,p_2,p_3\cdots,p_n|P(b)\Rightarrow p_1p_2p_3\cdots p_n|P(b)$</span>. As <span class="math-container">$n$</span> approaches infinity (as there are infinitely many primes), <span class="math-container">$p_1,p_2,p_3\cdots,p_n$</span> approaches infinity. Therefore either <span class="math-container">$P(b)=\infty$</span> or <span class="math-container">$P(b)=0$</span>. Since for finite <span class="math-container">$b$</span> and integer coefficients <span class="math-container">$P(b)$</span> must be finite, then <span class="math-container">$P(b)=0$</span>. However as <span class="math-container">$b$</span> is an integer, this implies <span class="math-container">$P$</span> has a rational root, a contradiction.</p> </blockquote> <p>I'm not sure if my proof is correct, and my main concern is that I am incorrectly using the Chinese Remainder Theorem since I am not sure if I can apply it to infinitely many divisors.</p> <p>Is this proof correct, and if not, how do I solve this question?</p> <p><strong>EDIT:</strong> It appears not only is my proof incorrect (as <span class="math-container">$b$</span> does not converge) as Paul Sinclair has shown, but that according to Jaap Scherphuis there are examples of polynomials that satisfy the conditions. Therefore, my question now is how one can prove the <em>existence</em> of these polynomials while using elementary methods (as this is an IMO selection test problem).</p>
GreginGre
447,764
<p>Here is an elementary way to construct such a polynomial explicitly. I will need the fact that if <span class="math-container">$p\nmid a$</span>, then <span class="math-container">$a$</span> is invertible modulo <span class="math-container">$p$</span>, which can be proven using Bézout's theorem.</p> <p>The proof of Fact 1 is extremely long, because I did not want to use group theory but only congruences. It can be shortened to a five-line proof if you allow quotient groups and the group <span class="math-container">$(\mathbb{Z}/p\mathbb{Z})^\times$</span>.</p> <p><strong>Fact 1.</strong> Let <span class="math-container">$p$</span> be an odd prime number. Then the number of nonzero squares modulo <span class="math-container">$p$</span> is <span class="math-container">$\dfrac{p-1}{2}$</span>. In particular, if <span class="math-container">$a,b$</span> are integers both prime to <span class="math-container">$p$</span>, then one of the integers <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, or <span class="math-container">$ab$</span> is a square mod <span class="math-container">$p$</span>.</p> <p><strong>Proof.</strong> Set <span class="math-container">$m=\dfrac{p-1}{2}$</span>. An integer coprime to <span class="math-container">$p$</span> is congruent to some integer <span class="math-container">$\pm k$</span>, where <span class="math-container">$1\leq k\leq m$</span>. Since <span class="math-container">$(-k)^2\equiv k^2 \mod p$</span>, there are at most <span class="math-container">$m$</span> nonzero squares mod <span class="math-container">$p$</span>.</p> <p>Now if <span class="math-container">$1\leq k_1,k_2\leq m$</span> satisfy <span class="math-container">$k_1^2\equiv k_2^2 \mod p$</span>, then <span class="math-container">$p\mid (k_1-k_2)(k_1+k_2)$</span>. Note that <span class="math-container">$2\leq k_1+k_2\leq 2m=p-1$</span>, so <span class="math-container">$p\nmid k_1+k_2$</span>. 
Therefore <span class="math-container">$p\mid k_1-k_2,$</span> meaning <span class="math-container">$k_1=k_2$</span> since <span class="math-container">$-p&lt;-m\leq k_1-k_2\leq m&lt;p$</span>. Consequently, the integers <span class="math-container">$k^2$</span>, <span class="math-container">$1\leq k\leq m$</span>, are all distinct modulo <span class="math-container">$p$</span>, and there are exactly <span class="math-container">$m$</span> nonzero squares mod <span class="math-container">$p$</span>.</p> <p>Now let <span class="math-container">$a,b$</span> be two integers coprime to <span class="math-container">$p$</span>. If <span class="math-container">$a$</span> or <span class="math-container">$b$</span> is a square modulo <span class="math-container">$p$</span>, we are done. Assume <span class="math-container">$a,b$</span> are not squares modulo <span class="math-container">$p$</span>. Since <span class="math-container">$a$</span> is coprime to <span class="math-container">$p$</span>, it is invertible modulo <span class="math-container">$p$</span>, so the elements of <span class="math-container">$A=\{a k^2, 1\leq k\leq m\}$</span> are pairwise distinct modulo <span class="math-container">$p$</span>. 
Now the elements of <span class="math-container">$B=\{k^2, 1\leq k\leq m\}$</span> are all distinct modulo <span class="math-container">$p$</span>.</p> <p>Note that it is not possible to have <span class="math-container">$ak_1^2\equiv k_2^2 \mod p$</span> for some <span class="math-container">$1\leq k_1,k_2\leq m$</span>, since otherwise <span class="math-container">$a$</span> would be a square modulo <span class="math-container">$p$</span>, as we can see by multiplying by the square of an inverse mod <span class="math-container">$p$</span> of <span class="math-container">$k_1$</span>.</p> <p>A counting argument then shows that an integer coprime to <span class="math-container">$p$</span> can be represented by a unique element of <span class="math-container">$A\cup B$</span> modulo <span class="math-container">$p$</span>.</p> <p>Thus, if <span class="math-container">$b$</span> is not a square mod <span class="math-container">$p$</span>, then <span class="math-container">$b$</span> is congruent to an element of <span class="math-container">$A.$</span> Consequently <span class="math-container">$b\equiv a k^2\mod p$</span>, and <span class="math-container">$ab\equiv (ak)^2\mod p$</span> is a square mod <span class="math-container">$p$</span>.</p> <p><strong>Fact 2.</strong> Let <span class="math-container">$p$</span> be a prime number, and let <span class="math-container">$P\in\mathbb{Z}[X]$</span>. Assume that there is <span class="math-container">$x_0\in \mathbb{Z}$</span> such that <span class="math-container">$P(x_0)\equiv 0 \mod p $</span> and <span class="math-container">$P'(x_0)\not\equiv 0 \mod p$</span>. 
Then for all <span class="math-container">$m\geq 0$</span>, there exists <span class="math-container">$x_m\in\mathbb{Z}$</span> such that <span class="math-container">$P(x_m)\equiv 0 \mod p^{m+1}$</span> and <span class="math-container">$x_{m}\equiv x_0 \mod p.$</span></p> <p><strong>Proof.</strong> By induction on <span class="math-container">$m$</span>, the case <span class="math-container">$m=0$</span> being part of the assumption. Assume that such <span class="math-container">$x_m$</span> exists for some <span class="math-container">$m\geq 0$</span> and let us show the existence of some <span class="math-container">$x_{m+1}$</span>. By assumption, <span class="math-container">$P(x_m)=\mu p^{m+1}$</span> for some integer <span class="math-container">$\mu$</span>. Let <span class="math-container">$0\leq \lambda\leq p-1$</span> be such that <span class="math-container">$\lambda$</span> is an inverse of <span class="math-container">$P'(x_0)$</span> modulo <span class="math-container">$p$</span>, and set <span class="math-container">$x_{m+1}=x_m-\mu \lambda p^{m+1}$</span>.</p> <p>Note that for all <span class="math-container">$x,y\in\mathbb{Z}$</span>, we have <span class="math-container">$P(x)=P(y)+(x-y)P'(y)+(x-y)^2 Q(y)$</span> for some <span class="math-container">$Q\in \mathbb{Z}[X]$</span>. Applying this to <span class="math-container">$x=x_{m+1}$</span> and <span class="math-container">$y=x_m$</span>, we get <span class="math-container">$P(x_{m+1})\equiv\mu p^{m+1}-\mu\lambda p^{m+1}P'(x_m) \mod p^{m+2}$</span>.</p> <p>Since <span class="math-container">$x_m\equiv x_0 \mod p$</span>, we have <span class="math-container">$P'(x_m)\equiv P'(x_0) \mod p$</span> and thus <span class="math-container">$\lambda P'(x_m)\equiv \lambda P'(x_0)\equiv 1 \mod p$</span>. Consequently <span class="math-container">$\lambda p^{m+1}P'(x_m)\equiv p^{m+1} \mod p^{m+2}$</span> and <span class="math-container">$P(x_{m+1})\equiv 0 \mod p^{m+2}$</span>. 
Finally, note that by construction, <span class="math-container">$x_{m+1}\equiv x_m \mod p^{m+1}$</span>, so <span class="math-container">$x_{m+1}\equiv x_m \equiv x_0 \mod p$</span>, which completes the induction step.</p> <p><strong>Thm</strong>. Let <span class="math-container">$P=(X^2+X+2)(X^2-17)(X^2-19)(X^2-17\times 19)$</span>. Then <span class="math-container">$P$</span> has no rational roots, but has a root modulo <span class="math-container">$n$</span> for every integer <span class="math-container">$n\geq 2$</span>.</p> <p><strong>Proof.</strong> Clearly, <span class="math-container">$P$</span> has no rational roots. By CRT, it is enough to prove that <span class="math-container">$P$</span> has a root modulo <span class="math-container">$p^m$</span> for all prime <span class="math-container">$p$</span> and all <span class="math-container">$m\geq 1$</span>.</p> <p>Note that <span class="math-container">$1$</span> is a root of <span class="math-container">$X^2+X+2$</span> mod 2. The derivative at <span class="math-container">$1$</span> of this polynomial is <span class="math-container">$3$</span>, which is <span class="math-container">$\not\equiv 0 \mod 2$</span>. Hence <span class="math-container">$X^2+X+2$</span> has a root mod <span class="math-container">$2^m$</span> for all <span class="math-container">$m\geq 1$</span> by Fact 2.</p> <p>Let <span class="math-container">$p$</span> be an odd prime number, <span class="math-container">$p\neq 17,19$</span>. By Fact 1, one of the integers <span class="math-container">$17,19,17\times 19$</span> is a nonzero square mod <span class="math-container">$p$</span>. Let <span class="math-container">$a$</span> be this integer. 
Notice now that the derivative of <span class="math-container">$X^2-a$</span> at an integer <span class="math-container">$x_0$</span> which is nonzero <span class="math-container">$p$</span> is <span class="math-container">$2x_0$</span>, which is also nonzero modulo <span class="math-container">$p$</span>. By Fact 2, <span class="math-container">$X^2-a$</span> has a root mod <span class="math-container">$p^m$</span> for all <span class="math-container">$m\geq 1$</span>.</p> <p>Assume that <span class="math-container">$p=17$</span>. Then <span class="math-container">$6^2\equiv 19 \mod 17$</span>, so <span class="math-container">$X^2-19$</span> has a root mod <span class="math-container">$p$</span>, hence mod <span class="math-container">$p^m$</span> for all <span class="math-container">$m\geq 1$</span> as previously.</p> <p>Assume that <span class="math-container">$p=19$</span>. Then <span class="math-container">$6^2\equiv 17 \mod 19$</span>, so <span class="math-container">$X^2-17$</span> has a root mod <span class="math-container">$p$</span>, hence mod <span class="math-container">$p^m$</span> for all <span class="math-container">$m\geq 1$</span> as previously.</p> <p>All in all <span class="math-container">$P$</span> has a root modulo <span class="math-container">$p^m$</span> for all prime <span class="math-container">$p$</span> and all <span class="math-container">$m\geq 1$</span>.</p>
1,602,312
<p>Can we prove that the pair $\sum_{i=1}^k x_i$, $\prod_{i=1}^k x_i$ is unique for $x_i \in \mathbb{R}$, $x_i &gt; 0$?<br> I stated that conjecture to solve a CS task, but I do not know how to prove it (or to stop using it if it is a false assumption). </p> <p>Is the pair (sum, product) of an array of k positive real numbers unique per array? I do not want to find the underlying numbers, or reverse this pair into a factorisation. I just assumed that the pair is unique per given set, to compare two arrays of equal length and determine whether the numbers inside, with possible multiplicities, are the same. The order in the array does not matter. 0 is discarded to give a meaningful sum and product.</p> <p>At the input I get two arrays of equal length, one of which is sorted, and I want to determine if the elements with multiplicities are the same, but I do not want to sort the second array (this would take more than linear time, which is the objective).<br> So I thought to make checksums; sum and product seemed unique to me, hence the question.</p> <p>For example:<br> I have two arrays: v=[1,3,5,5,8,9.6] and the second g=[9.6,5,3,1,5,8].<br> I calculate sum = 31.6 and product = 5760, for both arrays, since one is a sorted version of the other.<br> And now I calculate this for b=[1,4,4,5,8,9.6]: the sum is 31.6 but the product is 6144. So I assumed that if the sum and product are not both the same for two given arrays, then the elements differ.</p> <p>Getting back to the question, I am thinking that the pair {sum, product} is the same for all permutations of an array (which is desired), but will change when elements are different (which is maybe wishful thinking).</p>
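<p>For <span class="math-container">$k\ge 3$</span> the conjecture fails: the sum and product give only two constraints on <span class="math-container">$k$</span> unknowns, so distinct multisets can collide. A sketch of a concrete counterexample (meaning the checksum idea is unsafe for this task, even ignoring floating-point error):</p>

```python
import math

# Two different multisets of positive reals with equal sum and product.
s1 = [1.0, 8.0, 8.0]                       # sum 17, product 64

r = math.sqrt(97)                          # roots of t^2 - 15t + 32 = 0
s2 = [2.0, (15 + r) / 2, (15 - r) / 2]     # sum 2 + 15 = 17, product 2 * 32 = 64

assert abs(sum(s1) - sum(s2)) < 1e-9
assert abs(math.prod(s1) - math.prod(s2)) < 1e-9
assert sorted(s1) != sorted(s2)            # yet the multisets differ
```

<p>(For <span class="math-container">$k=2$</span> the pair does determine the multiset, since <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are the two roots of <span class="math-container">$t^2-(a+b)t+ab$</span>.)</p>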
Eli Rose
123,848
<p>Let's work in $\mathbb{R}^2$. You can visualize vectors as arrows, and add them by laying the arrows head-to-tail.</p> <p><a href="https://i.stack.imgur.com/pieZh.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/pieZh.gif" alt="enter image description here"></a></p> <p>That's what we're going to do to visualize linearly dependent and linearly independent vectors.</p> <p>Here's a collection of vectors:</p> <p>$$ \begin{aligned} A &amp;= (2, 0)\\ B &amp;= (1, -1)\\ C &amp;= (1, 1) \end{aligned} $$</p> <p>If you drew each of these vectors as arrows on the same coordinate grid, it would look like (a better drawn version of) this.</p> <pre><code> |       C
 |      /
 |     /
 |    /
--------o--------A
 |    \
 |     \
 |      \
 |       B
</code></pre> <p>(Note that the vector $A$ overlaps with the $x$-axis).</p> <p>Are these vectors linearly independent? No, they're dependent, since $A = B + C$. And if you laid $B$'s tail at the head of $C$, you would get $A$.</p> <pre><code> |       C
 |      / \
 |     /   \
 |    /     \
--------o--------A
 |
 |
 |
 |
</code></pre> <p>You can think about linear independence as efficiency -- do you really need everything you've got, or are some of your tools extraneous?</p> <p>Suppose someone asked you to build a bridge with the vectors $A, B, C$. Now they ask you to build the same bridge with just $B$ and $C$. Can you do it?</p> <p>Sure, since whenever you would have used an $A$ you could just use one copy of $B$ plus one copy of $C$.</p> <p>(In fact you could build the same bridge with any two of them, since you can also make $C$ out of $A$ and $B$, by using $A - B = C$.)</p> <p>Even if they gave you a shorter copy of $C$, like $(\frac{1}{2}, \frac{1}{2})$, you're allowed to scale vectors by a constant before you use them, so you could double it and still make $A$. This is where <em>linear combinations</em> come in.</p> <p>A linearly <strong>independent</strong> set is one where none of the elements can be made with a combination of the others, even with scaling allowed.</p>
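<p>The dependence relations above can be verified directly; a tiny sketch:</p>

```python
# The three vectors from the answer, as plain tuples in R^2.
A, B, C = (2, 0), (1, -1), (1, 1)

add = lambda u, v: (u[0] + v[0], u[1] + v[1])
sub = lambda u, v: (u[0] - v[0], u[1] - v[1])

assert add(B, C) == A   # A = B + C, so {A, B, C} is linearly dependent
assert sub(A, B) == C   # C = A - B, so any two of the three suffice
```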
801,668
<p>Sources: <a href="https://rads.stackoverflow.com/amzn/click/0495011665" rel="nofollow noreferrer"><em>Calculus: Early Transcendentals</em> (6 edn 2007)</a>. p. 206, Section 3.4. Question 95.<br> <a href="https://rads.stackoverflow.com/amzn/click/0470383348" rel="nofollow noreferrer"><em>Elementary Differential Equations</em></a> (9 edn 2008). p. 165, Section 3.3, Question 34.</p> <blockquote> <p><strong>Quandary:</strong> If $y=f(x)$, and $x = u(t)$ is a new independent variable, where $f$ and $u$ are twice differentiable functions, what's $\dfrac{d^{2}y}{dt^{2}} $?</p> <p><strong>Answer:</strong> By the chain rule, $\dfrac{dy}{dt} = \dfrac{dy}{dx} \dfrac{dx}{dt} $. Then by the product rule, </p> <p>$\begin{align} \dfrac{d^{2}y}{dt^{2}} &amp; \stackrel{\mathrm{defn}}{=}\dfrac{d}{dt} \dfrac{dy}{dt} \\ &amp; = \color{forestgreen}{ \dfrac{d}{dt} \dfrac{dy}{dx} } &amp; \dfrac{dx}{dt} &amp; +\frac{dy}{dx} \quad \dfrac{d}{dt} \dfrac{dx}{dt} \\ &amp; = \color{red}{\frac{d^{2}y}{dx^{2}}(\frac{dx}{dt})} &amp; \dfrac{dx}{dt} &amp; + \frac{dy}{dx} \quad \frac{d^{2}x}{dt^{2}} \\ &amp; = \frac{d^{2}y}{dx^{2}}(\frac{dx}{dt})^{2} &amp; &amp;+\frac{dy}{dx} \quad \frac{d^{2}x}{dt^{2}} \end{align} $</p> </blockquote> <p>Why $\color{forestgreen}{ \dfrac{d}{dt} \dfrac{dy}{dx} } = \color{red}{\dfrac{d^{2}y}{dx^{2}}(\dfrac{dx}{dt})} $? Please explain informally and intuitively.</p>
Deepak
151,732
<p>Just chain rule in action. Let $\displaystyle\frac{dy}{dx} = z$</p> <p>So $\displaystyle \frac{d}{dt}(\frac{dy}{dx}) = \frac{dz}{dt} = \frac{dz}{dx}.\frac{dx}{dt} = \frac{d}{dx}(\frac{dy}{dx}).\frac{dx}{dt}$, as you wanted to prove.</p>
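<p>The formula can be sanity-checked numerically; a sketch with an assumed concrete pair $f(x)=\sin x$, $x=u(t)=t^2$:</p>

```python
import math

# Assumed example: y = f(x) = sin(x), x = u(t) = t**2, so y(t) = sin(t**2).
t = 1.3
x, dxdt, d2xdt2 = t**2, 2*t, 2.0
dydx, d2ydx2 = math.cos(x), -math.sin(x)

# Formula in question: d2y/dt2 = y''(x) (dx/dt)^2 + y'(x) x''(t)
formula = d2ydx2 * dxdt**2 + dydx * d2xdt2

# Central second difference of y(t) = sin(t^2), as an independent check.
h = 1e-5
y = lambda s: math.sin(s**2)
numeric = (y(t + h) - 2*y(t) + y(t - h)) / h**2

assert abs(formula - numeric) < 1e-4
```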
2,279,120
<p>How do we prove that for any complex number $z$ the minimum value of $|z|+|z-1|$ is $1$? $$ |z|+|z-1|=|z|+|-(z-1)|\geq|z-(z-1)|=|z-z+1|=|1|=1\\\implies|z|+|z-1|\geq1 $$</p> <p>But when I proceed as follows $$ |z|+|z-1|\geq|z+z-1|=|2z-1|\geq2|z|-|1|\geq-|1|=-1 $$ and since the LHS can never be less than 0, $|z|+|z-1|\geq0$</p> <p>Why do I seem to get a different result compared to the first method?</p> <p>i.e. 1st method $\implies (|z|+|z-1|)_{min}=1\\$</p> <p>2nd method $\implies (|z|+|z-1|)_{min}=0\\$</p> <p>What is going wrong in the second approach?</p>
Michael Rozenberg
190,319
<p>$$|z|+|1-z|\geq|z+1-z|=1$$ The equality occurs for $z=\frac{1}{2}$.</p> <p>As for your second approach: every step there is also valid, but a chain of inequalities only proves <em>a</em> lower bound, not the best one. The bound $|z|+|z-1|\geq -1$ is true yet never attained, so it says nothing about the minimum. The minimum is the greatest lower bound that is actually achieved, which the first chain gives (with equality at $z=\frac{1}{2}$, or indeed at any $z$ on the segment $[0,1]$).</p>
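<p>A brute-force grid search (a sketch) agrees with the bound and with where it is attained:</p>

```python
# Grid search for the minimum of |z| + |z - 1| over a patch of the plane.
best = min(
    abs(complex(x, y)) + abs(complex(x, y) - 1)
    for x in [k / 100 for k in range(-200, 201)]
    for y in [k / 100 for k in range(-200, 201)]
)
assert abs(best - 1.0) < 1e-9   # attained along the real segment [0, 1]
```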
25,485
<p>Not sure if this is more appropriate for here or for Math.SE, but here goes:</p> <p>How does one who is self-studying mathematics determine if a textbook is too hard for you?</p> <p>Math is hard in general, but when does a textbook cross that line from being challenging to being nearly intractable?</p> <p>Sometimes I can't tell if I'm just being challenged when I have to re-read one paragraph ten times to understand what the author is saying (even if I understand all the components of their statement individually), or if the book is simply not at the right level for my current background.</p>
guest troll
20,204
<p>When you are self-studying, it is a much harder environment because (a) you lack the external &quot;whip&quot; of the grade, and (b) you don't have a teacher, lectures, etc. With that in mind, you need to pick the books that are most beginner friendly. Especially useful are &quot;programmed instruction&quot; books (e.g. those by Stroud). Please ignore all the people telling you to learn calculus from Spivak or RA from Rudin. That's just Internet jocks, pecker-flexing. You can always go do the Spivak or Rudin LATER. But pick something easy to start with, that you won't give up on. Often Amazon reviews are helpful to find such books. (And also ones that have the answers in the back...very important for self studiers.)</p>
4,631,463
<p>I came across the following limit:<br /> <span class="math-container">$$\lim_{x\to+\infty}\frac{1}{x}\int_0^x|\sin t|\mathrm{d}t=\frac{2}{\pi}$$</span> I wonder how to prove it.</p> <p>Using Mathematica I got the following result: <a href="https://i.stack.imgur.com/CVWgR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CVWgR.png" alt="enter image description here" /></a> Could you suggest some ideas for how to prove this? Any hints will be appreciated.</p>
Ninad Munshi
698,724
<p><span class="math-container">$\textbf{Hint}$</span>: the expression under the limit can be rewritten as</p> <p><span class="math-container">$$ \frac{\int_0^{\left\lfloor \frac{x}{\pi}\right\rfloor \pi} |\sin t|dt + \int_{\left\lfloor \frac{x}{\pi}\right\rfloor \pi}^x |\sin t|dt}{x}$$</span></p> <p><span class="math-container">$$=2 \frac{\left\lfloor \frac{x}{\pi}\right\rfloor}{x}+\frac{\int_{ \left\lfloor \frac{x}{\pi}\right\rfloor \pi}^x|\sin t|dt}{x}$$</span></p> <p>because the integrand is <span class="math-container">$\pi$</span>-periodic and <span class="math-container">$\int_0^{\pi}|\sin t|\,dt = 2$</span>. The equality <span class="math-container">$z \equiv \lfloor z \rfloor + \{z\}$</span> (where <span class="math-container">$0\leq \{ \cdot \} &lt; 1$</span> denotes the fractional part function) can be used to calculate the limit of the term on the left. The squeeze theorem can be used for the remainder on the right.</p>
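<p>Numerically the hint checks out; a sketch using the closed form <span class="math-container">$\int_0^{k\pi}|\sin t|\,dt = 2k$</span> plus the tail over the last partial half-period:</p>

```python
import math

def int_abs_sin(x):
    """Integral of |sin t| from 0 to x, for x >= 0, in closed form."""
    k = math.floor(x / math.pi)            # completed half-periods
    return 2 * k + (1 - math.cos(x - k * math.pi))

x = 1e5
print(int_abs_sin(x) / x, 2 / math.pi)    # both about 0.6366
assert abs(int_abs_sin(x) / x - 2 / math.pi) < 1e-4
```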
254,126
<p>If 0 &lt; a &lt; b, where a, b $\in\mathbb{R}$, determine $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg)$</p> <p>The answer (from the back of the text) is $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) = b$ but I have no idea how to get there. The course is Real Analysis 1, so it's a course on proofs. This chapter is on limit theorems for sequences and series. The squeeze theorem might be helpful. </p> <p>I can prove that $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) \le b$ but I can't find a way to prove the matching lower bound of $b$.</p> <p>Thank you!</p>
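<p>Numerically the claim is plausible (a sketch with assumed values $a=2$, $b=3$); for the proof, divide numerator and denominator by $b^n$ and note that $(a/b)^n\to 0$:</p>

```python
from fractions import Fraction

a, b, n = 2, 3, 200
val = Fraction(a**(n + 1) + b**(n + 1), a**n + b**n)

# (a/b)^n is astronomically small by n = 200, so val is essentially b.
assert abs(float(val) - b) < 1e-12
assert val < b   # the sequence approaches b from below, since a < b
```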
Chris Gerig
22,295
<p>To answer your only question: No. A 1-manifold is either a (open or closed) line segment or a circle or a disjoint union of such. But you can <em>fatten</em> the figure-$8$ (for example) out to get an honest manifold homotopy-equivalent to what you want (plane minus two points).</p>
1,980,510
<p>$$ \lim_{(x,y) \to (1,0)} \frac{(x-1)^2\ln(x)}{(x-1)^2 + y^2}$$</p> <p>I tried L'Hospital's rule and got nowhere. Then I tried using $x = r\cos(\theta)$ and $y = r\sin(\theta)$, but no help. How would I approach this?</p>
Tsemo Aristide
280,301
<p>$$\frac{(x-1)^2\ln(x)}{(x-1)^2+y^2} = \frac{\ln(x)}{1+{y^2\over {(x-1)^2}}}.$$ For $x\neq 1$ the denominator is at least $1$, so $$\left|\frac{(x-1)^2\ln(x)}{(x-1)^2+y^2}\right|\le |\ln(x)|\to 0 \quad \text{as } (x,y)\to(1,0),$$ while the points with $x=1$, $y\neq 0$ give the value $0$ directly. Hence the limit is $0$. (If the limit point were $(0,0)$ instead, the same rewriting would give $-\infty$: then $\ln(x)\to-\infty$ while the denominator tends to $1$.)</p>
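<p>A numerical sketch near the point $(1,0)$ stated in the question (note that at $(1,0)$ we have $\ln(x)\to 0$; the behavior $\ln(x)\to-\infty$ would belong to the point $(0,0)$ instead):</p>

```python
import math

def f(x, y):
    return (x - 1)**2 * math.log(x) / ((x - 1)**2 + y**2)

# Approach (1, 0) from many directions at shrinking distances.
for eps in (1e-2, 1e-4, 1e-6):
    worst = max(
        abs(f(1 + eps * math.cos(a), eps * math.sin(a)))
        for a in [k * 0.1 for k in range(1, 62)]
    )
    assert worst <= 2 * eps   # |f| <= |ln x|, which is O(eps) here
```

<p>The maximum shrinks in proportion to the distance, consistent with a limit of $0$ at $(1,0)$.</p>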
1,393,154
<p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2 = 8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p> <p>Written Differently for Clarity:</p> <p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p> <hr /> <blockquote> <p><strong>EDIT</strong></p> <p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p> </blockquote>
Siwel
259,274
<p>We have that $4n^{3/2}=8^{-1/3}$. Proceed as follows to find $n$.</p> <ul> <li>Simplify $8^{-1/3}=1/\sqrt[3]{8}=1/2$</li> <li>Divide both sides by 4 to get $n^{3/2}=1/8$. This is the next step, bearing in mind the order of operations, which is often remembered with the phrase BODMAS(/BIDMAS) (Brackets, Orders(/Indices), Division, Multiplication, Addition, Subtraction). That is, if we were to add in unnecessary brackets to emphasize the order, we would write $4(n^{3/2})=1/2$, and we can see dividing by 4 is the next available choice to isolate $n$.</li> <li>Raise both sides to the power 2/3 (or, if you like, take the "3/2"th root), and simplify, to get $n=(1/8)^{2/3}=1/8^{2/3}=1/\sqrt[3]{8}^2=1/2^2=1/4$</li> </ul>
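<p>A quick numeric check of $n = 1/4$ (the exponent binds to $n$ alone, per the order of operations):</p>

```python
n = 1 / 4

lhs = 4 * n ** (3 / 2)    # exponentiation happens before the multiplication
rhs = 8 ** (-1 / 3)

print(lhs, rhs)           # both are 0.5, up to floating-point rounding
assert abs(lhs - rhs) < 1e-12
```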
2,843,560
<p>If $\sin x +\sin 2x + \sin 3x = \sin y\:$ and $\:\cos x + \cos 2x + \cos 3x =\cos y$, then $x$ is equal to</p> <p>(a) $y$</p> <p>(b) $y/2$</p> <p>(c) $2y$</p> <p>(d) $y/6$</p> <p>I expanded the first equation to reach $2\sin x(2+\cos x-2\sin x)= \sin y$, but I doubt it leads me anywhere. A little hint would be appreciated. Thanks!</p>
Aryabhata
1,102
<p>If $z = e^{ix} = \cos x + i \sin x$</p> <p>Then we have $z + z^2 + z^3 = e^{iy} = w$ (say)</p> <p>Divide by $z^2$ to see that</p> <p>$$z + \frac{1}{z} + 1 = \frac{w}{z^2}$$</p> <p>The left side is real and thus</p> <p>$$w = az^2$$</p> <p>Since $|w| = |z| = 1$ we must have that $|a| = 1$</p> <p>Thus $$w = \pm z^2$$</p> <p>This gives rise to two equations:</p> <p>$$z + z^3 = 0$$</p> <p>and</p> <p>$$z + 2z^2 + z^3 = 0$$</p> <p>I will leave the rest to you. </p> <p>And as others said, be careful that $e^{ix}$ is periodic and for the question to make sense you might need to put bounds on $x,y$.</p>
2,949,011
<blockquote> <p>If <span class="math-container">$b_n\to\infty$</span> and <span class="math-container">$\{a_n\}$</span> is such that <span class="math-container">$b_n&gt;a_n$</span> for all <span class="math-container">$n$</span>, then <span class="math-container">$a_n\to\infty$</span>.</p> </blockquote> <p>We are to prove this by using either a)formal definitions or b)counter examples. </p> <p>I am very unsure of how to prove using the formal definitions. I can see it is quite clear that if I chose to use a counter example to prove we can't show that <span class="math-container">$a_n$</span> goes to infinity simply because it is less than <span class="math-container">$b_n$</span>, that this would maybe be easier, but once again, I am at a loss on how to go about setting up this proof.</p> <p>Any help in proving this would be great.</p>
hamam_Abdallah
369,188
<p>Your statement is false. As a counterexample, take</p> <p><span class="math-container">$$b_n=n \text{ and } \; a_n=-n.$$</span></p> <p>Then <span class="math-container">$$a_n\le b_n,$$</span></p> <p><span class="math-container">$$b_n\to +\infty$$</span> but <span class="math-container">$$a_n\to -\infty.$$</span></p>
1,013,346
<p>Given a box which contains $3$ red balls and $7$ blue balls. A ball is drawn from the box and a ball of the other color is then put into the box. A second ball is drawn from the box, What is the probability that the second ball is blue? </p> <p>could anyone provide me any hint? </p> <p>Please, don't offer a complete sketch of the solution, a hint is enough for me as this is a homework problem. </p>
Timbuc
118,527
<p>$$\frac{\pi^4}{90}=\sum_{n=1}^\infty\frac1{n^4}=\sum_{n=1}^\infty\frac1{(2n)^4}+\sum_{n=1}^\infty\frac1{(2n-1)^4}=\frac1{16}\sum_{n=1}^\infty\frac1{n^4}+\sum_{n=1}^\infty\frac1{(2n-1)^4}\implies$$</p> <p>$$\implies\sum_{n=1}^\infty\frac1{(2n-1)^4}=\left(1-\frac1{16}\right)\frac{\pi^4}{90}=\frac{\pi^4}{96} $$</p>
1,013,346
<p>Given a box which contains $3$ red balls and $7$ blue balls. A ball is drawn from the box and a ball of the other color is then put into the box. A second ball is drawn from the box, What is the probability that the second ball is blue? </p> <p>could anyone provide me any hint? </p> <p>Please, don't offer a complete sketch of the solution, a hint is enough for me as this is a homework problem. </p>
Vivek Kaushik
169,367
<p>Here is another proof I came up with. Start with the quadruple integral:</p> <p>$$J=\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1} \frac{1}{1-x^2_1x^2_2x^2_3x^2_4}dx_1dx_2dx_3dx_4.$$</p> <p>Expand the integrand as a geometric series: $$\frac{1}{1-x^2_1x^2_2x^2_3x^2_4}=\sum_{n=0}^{\infty}(x_1x_2x_3x_4)^{2n}.$$ Integrating term by term gives:</p> <p>$$J=\sum_{n=0}^{\infty} \frac{1}{(2n+1)^4}.$$ </p> <p>We now try to evaluate: $$J=\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1} \frac{1}{1-x^2_1x^2_2x^2_3x^2_4}dx_1dx_2dx_3dx_4.$$ Make the change of variables $$x_i=\frac{\sin(u_i)}{\cos(u_{i+1})},x_4=\frac{\sin(u_4)}{\cos(u_{1})}, 1\leq i \leq 3.$$ The Jacobian determinant turns out to be: $$\frac{\partial(x_1,...x_4)}{\partial(u_1,...u_4)}=1-x^2_1x^2_2x^2_3x^2_4$$ which cancels with the integrand. The region of integration is the (open) polytope $P$ defined by the constraints $$0&lt;u_{i}+u_{i+1}&lt;\frac{\pi}{2},0&lt;u_{1}+u_{4}&lt;\frac{\pi}{2}, u_{i}&gt;0.$$ So this means $$J=\sum_{n=0}^{\infty} \frac{1}{(2n+1)^4}=\text{Vol}(P)$$ where $\text{Vol}$ means volume. I will consider the scaled polytope $V$ defined by constraints $$0&lt;v_{i}+v_{i+1}&lt;1,0&lt;v_{1}+v_{4}&lt;1, v_{i}&gt;0.$$ You can show through the change of variables $$u_i=\frac{\pi v_i}{2}, 1\leq i \leq 4$$ that $$\text{Vol}(P)=\left(\frac{\pi}{2} \right)^4 \text{Vol}(V).$$ So we will be done once we find $\text{Vol}(V).$ We will find $\text{Vol}(V)$ through probability. </p> <p>We first find $$\text{Pr}\left(v\in V \cap v_1,...v_4&lt;\frac{1}{2}\right),$$ which is the probability we pick an arbitrary point $v$ in $V$ where all coordinates are less than $\frac{1}{2}.$</p> <p>$$\text{Pr}\left(v\in V \cap v_1,...v_4&lt;\frac{1}{2}\right)=\left(\frac{1}{2}\right)^4=\frac{1}{16}.$$ This is the case because the open hypercube $I=(0,\frac{1}{2})^4 \subset V,$ which you can verify. 
</p> <p>Now we find $$\text{Pr}\left(v\in V \cap \text{exactly one } v_i \geq \frac{1}{2}\right),$$ which is the probability we pick an arbitrary point $v$ in $V$ where exactly one coordinate is greater than or equal to $\frac{1}{2}.$ It turns out:</p> <p>$$\text{Pr}\left(v\in V \cap \text{exactly one } v_i \geq \frac{1}{2}\right)=4\int_{\frac{1}{2}}^{1}\int_{0}^{1-v_1}\int_{0}^{1-v_1}\int_{0}^{\frac{1}{2}} 1 dv_3dv_4dv_2dv_1=\frac{1}{12}.$$ What I did here was compute $$\text{Pr}\left(v\in V \cap \text{only } v_1 \geq \frac{1}{2}\right)$$ and multiplied this answer by $4$ to account for all $4$ cases of this condition. </p> <p>Now we find $$\text{Pr}\left(v\in V \cap \text{exactly two } v_i \geq \frac{1}{2}\right),$$ which is the probability we pick an arbitrary point $v$ in $V$ where exactly two coordinates are greater than or equal to $\frac{1}{2}.$ It turns out: $$\text{Pr}\left(v\in V \cap \text{exactly two } v_i \geq \frac{1}{2}\right)=4\int_{\frac{1}{2}}^{1}\int_{\frac{1}{2}}^{v_1}\int_{0}^{1-v_1}\int_{0}^{1-v_1} 1 dv_2dv_4dv_3dv_1=\frac{1}{48}.$$ What I did here was compute $$\text{Pr}\left(v\in V \cap \text{only } v_1,v_3 \geq \frac{1}{2}\cap v_3&lt;v_1\right)$$ and multiplied this answer by $4$ to account for all $4$ cases of this condition. Keep in mind here that <strong>only two nonconsecutive</strong> coordinates can be greater than or equal to $\frac{1}{2}$ at the same time. Thus, if we have more than two greater than or equal to $\frac{1}{2}$ at the same time, we will violate the constraints of $V$. Thus, we now just add up the computed probabilities to get: $$\text{Vol}(V)=\frac{1}{16}+\frac{1}{12}+\frac{1}{48}=\frac{1}{6},$$ which means $$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^4}=\text{Vol}(P)=\left(\frac{\pi}{2}\right)^4\text{Vol}(V)=\left(\frac{\pi^4}{16}\right)\frac{1}{6}=\frac{\pi^4}{96}.$$</p>
353
<p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p> <blockquote> <p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p> </blockquote> <p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
Henry Towsner
62
<p>When I've taught propositional logic I acknowledge that this is a formalism that doesn't perfectly match the English usage, and use it as an opportunity to point out </p> <ol> <li>The evaluation of $\rightarrow$ has to be purely a property of truth values, whereas "implies" in English involves the meaning of the statements, not just whether they're true or false. This is an important property of the propositional logic, and this is the first good opportunity to emphasize it. (My students usually agree that, if $\rightarrow$ has to be truth functional, the accepted interpretation is probably the best that can be done.)</li> <li>They're in good historic company, and there are variations of propositional logic (like modal logic and relevant logic) which try to address exactly this issue, but they have to give up having everything be a function on truth values. This is a nice chance to indicate why there's more than one notion of formal logic.</li> </ol>
353
<p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p> <blockquote> <p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p> </blockquote> <p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
user578
578
<ol> <li><p>Avoid real world analogies. They confuse students because real world means natural language. Instead remind them of implication and its truth table whenever you write a mathematical proposition on the board.</p></li> <li><p>Remind students to work with the definitions. Over and over again.</p></li> <li><p>Introduce the notion of a vacuous truth (or vacuous implication) as soon as possible, I prefer the first class of the semester (even if it's before anything related to propositional calculus), and point it out explicitly whenever it comes up.</p></li> </ol>
353
<p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p> <blockquote> <p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p> </blockquote> <p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
Michael Joyce
1,397
<p>I think the intuition for implication is aided when you consider it in the context of universal quantification.</p> <p>A claim of the form $(\forall x)(A(x) \Rightarrow B(x))$ can be translated roughly as every $x$ that has property $A$ must also have property $B$. Most of the use of conditional claims used in practice have this form implicitly assumed.</p> <p>For example, using Brendan Sullivan's example which I agree is good for explanation, when someone says, "If it rains, I use an umbrella," the implicit reading is that this is a universal principle with the logical structure of $(\forall t)(\text{it rains at time } t \Rightarrow \text{I use an umbrella at time } t)$. Intuitively, we grasp that whatever happens when it is not raining cannot affect the truth value of the universal implication.</p> <p>The challenge in teaching this, of course, is that universal quantification is often not introduced until well after logical implication, but maybe that should be done differently.</p>
1,897,538
<p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real world example of the following.</p> <p>Are there good examples of \begin{equation} \lim_{x \to c} f(x) \neq f(c), \end{equation} or of cases when $c$ is not in the domain of $f(x)$?</p> <p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p> <p>Any ideas are more than welcome!</p> <p><strong>Warning</strong></p> <p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc. is not clear.</p> <p><strong>Edit</strong></p> <p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
dxiv
291,201
<p>If instantaneous elastic collisions count as <code>real world</code>, then the speed $f(t)$ of a ball traveling at constant speed $v$ and hitting a wall at $t = t_0$ is $v$ for $t \lt t_0$, $0$ at $t = t_0$, and $-v$ for $t \gt t_0$, so both one-sided limits exist at $t_0$ but differ from each other and from $f(t_0)$.</p> <p><hr> [ <em>EDIT</em> ] As posted, $f(t)$ is considered to be the signed linear speed along the direction of movement, which is assumed to be perpendicular to the wall.</p> <p>Replacing this with the magnitude of the speed $|f(t)|$ gives a function that equals $v$ for all $t \neq t_0$ and is $0$ at $t = t_0$, which is an example where $\lim_{t \to t_0} |f(t)|$ exists, but is different from $|f(t_0)|$.</p>
588,214
<p>What is the meaning of $[G:C_G(x)]$ in group theory? Is this equivalent to $\frac{|G|}{|Z_G(x)|}$, or to $|Z_G(x)|$?</p>
Henry Swanson
55,540
<p>$C_G(x)$ is the <em>centralizer</em> of $x$ in $G$. That is, $\{ g \in G \mid gxg^{-1} = x \}$ or equivalently, $\{ g \in G \mid gx = xg \}$.</p> <p>The $[G : H]$ notation means the <em>index</em> of $H$ in $G$, and it is defined as the number of cosets of $H$ in $G$. Lagrange's theorem says that this is equal to $\frac{|G|}{|H|}$.</p> <p>By the way, for this particular $H = C_G(x)$, this has a very useful value. It is the size of the conjugacy class of $x$, i.e., $\{ gxg^{-1} \mid g \in G \}$. Are you by chance studying the class equation?</p> <p>EDIT: $C_G(x)$ is in fact a subgroup. Obviously $e \in C_G(x)$. Now, we show that if $a, b \in C_G(x)$, then $ab^{-1} \in C_G(x)$. $$(ab^{-1}) x = ab^{-1}x(bb^{-1}) = ab^{-1}(xb)b^{-1} = ab^{-1}(bx) b^{-1} = axb^{-1} = x(ab^{-1})$$</p>
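A small illustration (my own addition, assuming nothing beyond the definitions above): in $S_3$, represented as permutation tuples, the index of the centralizer of a transposition equals the size of its conjugacy class.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))          # the symmetric group S_3
x = (1, 0, 2)                             # the transposition (0 1)
C = [g for g in G if compose(g, x) == compose(x, g)]   # centralizer C_G(x)
cls = {compose(compose(g, x), inverse(g)) for g in G}  # conjugacy class of x
index = len(G) // len(C)                  # [G : C_G(x)]
assert index == len(cls) == 3             # the three transpositions of S_3
```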
937,912
<p>I'm looking for a closed form of this integral.</p> <p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx ,$$</p> <p>where $\operatorname{Li}_2$ is the <a href="http://mathworld.wolfram.com/Dilogarithm.html" rel="noreferrer">dilogarithm function</a>.</p> <p>A numerical approximation of it is</p> <p>$$ I \approx 1.39130720750676668181096483812551383015419528634319581297153...$$</p> <p>As Lucian said $I$ has the following equivalent forms:</p> <p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx = \int_0^1 \frac{\operatorname{Li}_2\left( \sqrt{x} \right)}{2 \, \sqrt{x} \, \sqrt{1-x}} \,dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\sin x) \, dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\cos x) \, dx$$</p> <p>According to <em>Mathematica</em> it has a closed-form in terms of generalized hypergeometric function, Claude Leibovici has given us <a href="https://math.stackexchange.com/a/937926/153012">this form</a>.</p> <p>With <em>Maple</em> using Anastasiya-Romanova's form I could get a closed-form in term of Meijer G function. It was similar to Juan Ospina's <a href="https://math.stackexchange.com/a/938024/153012">answer</a>, but it wasn't exactly that form. I also don't know that his form is correct, or not, because the numerical approximation has just $6$ correct digits. </p> <p>I'm looking for a closed form of $I$ without using generalized hypergeometric function, Meijer G function or $\operatorname{Li}_2$ or $\operatorname{Li}_3$.</p> <p>I hope it exists. 
Similar integrals are the following.</p> <p>$$\begin{align} J_1 &amp; = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{1+x} \,dx = \frac{\pi^2}{6} \ln 2 - \frac58 \zeta(3) \\ J_2 &amp; = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x}} \,dx = \pi^2 - 8 \end{align}$$</p> <p>Related techniques are in <a href="http://carma.newcastle.edu.au/jon/Preprints/Papers/ArXiv/Zeta21/polylog.pdf" rel="noreferrer">this</a> or in <a href="http://arxiv.org/pdf/1010.6229.pdf" rel="noreferrer">this</a> paper. <a href="http://www-fourier.ujf-grenoble.fr/~marin/une_autre_crypto/Livres/Connon%20some%20series%20and%20integrals/Vol-3.pdf" rel="noreferrer">This</a> one also could be useful.</p>
Juan Ospina
170,228
<p>Using Maple I am obtaining</p> <p>$$1+\frac{\pi }{16}{\ _4F_3(1,1,1,3/2;\,2,2,2;\,1)}+\frac{\sqrt {\pi }}{8} G^{4, 1}_{4, 4}\left(-1\, \Big\vert\,^{1, 5/2, 5/2, 5/2}_{2, 3/2, 3/2, 1}\right) $$</p> <p>and a numerical approximation is</p> <p>$$1.3913063720392030337$$</p>
937,912
<p>I'm looking for a closed form of this integral.</p> <p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx ,$$</p> <p>where $\operatorname{Li}_2$ is the <a href="http://mathworld.wolfram.com/Dilogarithm.html" rel="noreferrer">dilogarithm function</a>.</p> <p>A numerical approximation of it is</p> <p>$$ I \approx 1.39130720750676668181096483812551383015419528634319581297153...$$</p> <p>As Lucian said $I$ has the following equivalent forms:</p> <p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx = \int_0^1 \frac{\operatorname{Li}_2\left( \sqrt{x} \right)}{2 \, \sqrt{x} \, \sqrt{1-x}} \,dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\sin x) \, dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\cos x) \, dx$$</p> <p>According to <em>Mathematica</em> it has a closed-form in terms of generalized hypergeometric function, Claude Leibovici has given us <a href="https://math.stackexchange.com/a/937926/153012">this form</a>.</p> <p>With <em>Maple</em> using Anastasiya-Romanova's form I could get a closed-form in term of Meijer G function. It was similar to Juan Ospina's <a href="https://math.stackexchange.com/a/938024/153012">answer</a>, but it wasn't exactly that form. I also don't know that his form is correct, or not, because the numerical approximation has just $6$ correct digits. </p> <p>I'm looking for a closed form of $I$ without using generalized hypergeometric function, Meijer G function or $\operatorname{Li}_2$ or $\operatorname{Li}_3$.</p> <p>I hope it exists. 
Similar integrals are the following.</p> <p>$$\begin{align} J_1 &amp; = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{1+x} \,dx = \frac{\pi^2}{6} \ln 2 - \frac58 \zeta(3) \\ J_2 &amp; = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x}} \,dx = \pi^2 - 8 \end{align}$$</p> <p>Related techniques are in <a href="http://carma.newcastle.edu.au/jon/Preprints/Papers/ArXiv/Zeta21/polylog.pdf" rel="noreferrer">this</a> or in <a href="http://arxiv.org/pdf/1010.6229.pdf" rel="noreferrer">this</a> paper. <a href="http://www-fourier.ujf-grenoble.fr/~marin/une_autre_crypto/Livres/Connon%20some%20series%20and%20integrals/Vol-3.pdf" rel="noreferrer">This</a> one also could be useful.</p>
Vladimir Reshetnikov
19,661
<p>We will go through a sequence of integrals, and, remarkably, we will see that at each step an integrand will have a continuous closed-form antiderivative in terms of elementary functions, dilogarithms and trilogarithms, so evaluation of an integral is then just a matter of calculating values (or limits) at end-points and taking a difference.</p> <p>I used <em>Mathematica</em> to help me find some of those antiderivatives, but then I significantly simplified them manually. In each case correctness of the result was proved manually by direct differentiation, so we do not have to trust <em>Mathematica</em> on it. Maybe somebody will find a more elegant and enlightening way to evaluate them.</p> <hr> <p>First change the variable $x=\cos\theta$ and rewrite the integral as: $$I=\int_0^{\pi/2}\operatorname{Li}_2(\cos\theta)\,d\theta\tag{0}$$ Then we use a <a href="http://functions.wolfram.com/10.07.07.0003.01">known integral representation</a> of the dilogarithm: $$\operatorname{Li}_2(z)=-\int_0^1\frac{\ln(1-t\,z)}t\,dt.\tag1$$ Use it to rewrite $(0)$ and then change the order of integration: $$I=-\int_0^1\frac1t\int_0^{\pi/2}\ln(1-t\,\cos\theta)\,d\theta\,dt.\tag2$$</p> <hr> <p>Our first goal is to evaluate the inner integral in $(2)$. 
The integrand has a closed-form antiderivative in terms of elementary functions and dilogarithms that is continuous in the region of integration: $$\int\ln(1-t\,\cos\theta)\,d\theta=\theta\!\;\ln\!\left(\frac{1+\sqrt{1-t^2}}2\right)-2\,\Im\,\operatorname{Li}_2\!\left(\frac{1-\sqrt{1-t^2}}t\!\;e^{i\!\;\theta}\right).\tag3$$ (compare it with the <a href="http://mathb.in/40064">raw</a> <em>Mathematica</em> result)</p> <p>Taking the difference of values of $(3)$ at the end-points $\pi/2$ and $0$, we obtain: $$\int_0^{\pi/2}\ln(1-t\,\cos\theta)\,d\theta=\frac\pi2\,\ln\!\left(\frac{1+\sqrt{1-t^2}}2\right)-2\,\Im\,\operatorname{Li}_2\!\left(i\,\frac{1-\sqrt{1-t^2}}t\right).\tag4$$ Recall that the imaginary part of the dilogarithm can be represented as the <a href="http://mathworld.wolfram.com/InverseTangentIntegral.html">inverse tangent integral</a>: $$\Im\,\operatorname{Li}_2(iz)=\operatorname{Ti}_2(z)=\int_0^z\frac{\arctan(v)}v dv.\tag{$4'$}$$ So, $$\int_0^{\pi/2}\ln(1-t\,\cos\theta)\,d\theta=\frac\pi2\,\ln\!\left(\frac{1+\sqrt{1-t^2}}2\right)-2\,\operatorname{Ti}_2\!\left(\frac{1-\sqrt{1-t^2}}t\right).\tag{$4''$}$$</p> <hr> <p>Now our goal is to evaluate the outer integral in $(2)$. Substituting $(4'')$ back into $(2)$ we get: $$I=-\frac\pi2\!\;I_1+2\!\;I_2,\tag5$$ where $$I_1=\int_0^1\frac1t\,\ln\!\left(\frac{1+\sqrt{1-t^2}}2\right)dt,\tag6$$ $$I_2=\int_0^1\frac1t\,\operatorname{Ti}_2\!\left(\frac{1-\sqrt{1-t^2}}t\right)dt.\tag7$$ The integrand in $(6)$ has a closed-form antiderivative in terms of elementary functions and dilogarithms. One way to find it is to change variable $t=2\sqrt{u-u^2}$ and integrate by parts. 
$$\int\frac1t\,\ln\!\left(\frac{1+\sqrt{1-t^2}}2\right)dt=\frac14\,\ln^2\!\left(\frac{1+\sqrt{1-t^2}}2\right)-\frac12\, \operatorname{Li}_2\!\left(\frac{1-\sqrt{1-t^2}}2\right).\tag8$$ (compare it with the <a href="http://mathb.in/40065">raw</a> <em>Mathematica</em> result)</p> <p>Taking the difference of its values at the end-points, and using <a href="http://mathworld.wolfram.com/Dilogarithm.html">well-known values</a> $$\operatorname{Li}_2(1)=\zeta(2)=\frac{\pi^2}6,\tag{$8'$}$$ $$\operatorname{Li}_2\left(\tfrac12\right)=\frac{\pi^2}{12}-\frac{\ln^22}2,\tag{$8''$}$$ we get: $$I_1=\frac{\ln^22}2-\frac{\pi^2}{24}.\tag9$$ To evaluate $I_2$ change the variable $t=\frac{2z}{1+z^2}$: $$I_2=\int_0^1\frac{1-z^2}{z\,(1+z^2)}\operatorname{Ti}_2(z)\,dz.\tag{10}$$ Again, the integrand has a closed-form antiderivative in terms of elementary functions, dilogarithms and trilogarithms. Before giving the result, we will try to split it into smaller parts. First, recall $(4')$ and a simple integral ${\large\int}\frac{1-z^2}{z\,(1+z^2)}dz=\ln\!\left(\frac z{1+z^2}\right)$, and integrate by parts: $$\int\frac{1-z^2}{z\,(1+z^2)}\operatorname{Ti}_2(z)\,dz=\ln\!\left(\frac z{1+z^2}\right)\operatorname{Ti}_2(z)\\-\underbrace{\int\frac{\ln z\cdot\arctan z}z\,dz}_{I_3}+\underbrace{\int\frac{\ln(1+z^2)\cdot\arctan z}z\,dz}_{I_4}.\tag{11}$$ The following results can be checked by direct differentiation: $$I_3=\operatorname{Ti}_2(z)\ln z-\Im\,\operatorname{Li}_3(iz),\tag{$11'$}$$ $$I_4=\left[\frac{\pi^2}3-\ln\left(1+z^2\right)\ln z-\frac12\,\operatorname{Li}_2\!\left(-z^2\right)\right]\arctan z\\-\frac\pi2\,\arctan^2z+\frac\pi8\,\ln^2\left(1+z^2\right)+\operatorname{Ti}_2(z)\ln\left(1+z^2\right)-2\,\Im\,\operatorname{Li}_3(1+iz).\tag{$11''$}$$ Plugging $(11')$ and $(11'')$ into $(11)$ we obtain: $$\int\frac{1-z^2}{z\,(1+z^2)}\operatorname{Ti}_2(z)\,dz=\left[\frac{\pi^2}3-\ln\left(1+z^2\right)\ln z-\frac12\,\operatorname{Li}_2\!\left(-z^2\right)\right]\arctan 
z\\-\frac\pi2\,\arctan^2z+\frac\pi8\,\ln^2\left(1+z^2\right)+\,\Im\,\operatorname{Li}_3(iz)-2\,\Im\,\operatorname{Li}_3(1+iz).\tag{$11'''$}$$ (compare it with the <a href="http://mathb.in/40066">raw</a> <em>Mathematica</em> result)</p> <p>Taking the difference of its values at the end-points $1$ and $0$, we get: $$I_2=\frac{3\!\;\pi^3}{32}+\frac\pi8\!\;\ln^22-2\,\Im\,\operatorname{Li}_3(1+i).\tag{12}$$ Plugging $(9)$ and $(12)$ back into $(5)$ we get the final result:</p> <blockquote> <p>$$\large\int_0^1\frac{\operatorname{Li}_2(x)}{\sqrt{1-x^2}}\,dx=\frac{5\!\;\pi^3}{24}-4\,\Im\,\operatorname{Li}_3(1+i).\tag{$\heartsuit$}$$</p> </blockquote>
907,851
<p>I am really new to math; why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
GEdgar
442
<p>It is true that $-2^2$ is ambiguous (unless you know the convention), because $-(2^2) \ne (-2)^2$. (And, indeed, some calculators or programing languages may do it using their own convention, different from the mathematicians' convention.) The mathematicians' convention is $-2^2 = -(2^2)$. Why? Presumably because we often need to write $-(2^2)$. But if you ever need to write $(-2)^2$ you can just write $2^2$ instead.</p>
907,851
<p>I am really new to math; why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
DanielV
97,045
<p>Universally, exponentiation is agreed to be evaluated before subtraction.</p> <p>However, there are two opposing wishes that go into the design of a grammar:</p> <ul> <li>Negation is evaluated at the same time as subtraction, to avoid irregularities</li> <li>Unary operators are all evaluated before binary operators, to keep things simple</li> </ul> <p>In the first case, $-2^2 = -4$. In the second case, $-2^2 = 4$. Most mathematical publications and mathematics forums will use the first case. However, some software (I believe Haskell like grammars are an example) use the second case.</p>
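For what it's worth (my addition), Python is one of the languages that follows the first convention:

```python
# Unary minus binds more loosely than ** in Python, matching the mathematicians' convention.
assert -2 ** 2 == -4
assert (-2) ** 2 == 4
```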
10,666
<p>My question is about <a href="http://en.wikipedia.org/wiki/Non-standard_analysis">nonstandard analysis</a>, and the diverse possibilities for the choice of the nonstandard model R*. Although one hears talk of <em>the</em> nonstandard reals R*, there are of course many non-isomorphic possibilities for R*. My question is, what kind of structure theorems are there for the isomorphism types of these models? </p> <p><b>Background.</b> In nonstandard analysis, one considers the real numbers R, together with whatever structure on the reals is deemed relevant, and constructs a nonstandard version R*, which will have infinitesimal and infinite elements useful for many purposes. In addition, there will be a nonstandard version of whatever structure was placed on the original model. The amazing thing is that there is a <em>Transfer Principle</em>, which states that any first order property about the original structure true in the reals, is also true of the nonstandard reals R* with its structure. In ordinary model-theoretic language, the Transfer Principle is just the assertion that the structure (R,...) is an elementary substructure of the nonstandard reals (R*,...). Let us be generous here, and consider as the standard reals the structure with the reals as the underlying set, and having all possible functions and predicates on R, of every finite arity. (I guess it is also common to consider higher type analogues, where one iterates the power set &omega; many times, or even ORD many times, but let us leave that alone for now.) </p> <p>The collection I am interested in is the collection of all possible nontrivial elementary extensions of this structure. Any such extension R* will have the useful infinitesimal and infinite elements that motivate nonstandard analysis. It is an exercise in elementary mathematical logic to find such models R* as ultrapowers or as a consequence of the Compactness theorem in model theory. 
</p> <p>Since there will be extensions of any desired cardinality above the continuum, there are many non-isomorphic versions of R*. Even when we consider R* of size continuum, the models arising via ultrapowers will presumably exhibit some saturation properties, whereas it seems we could also construct non-saturated examples. </p> <p>So my question is: what kind of structure theorems are there for the class of all nonstandard models R*? How many isomorphism types are there for models of size continuum? How much or little of the isomorphism type of a structure is determined by the isomorphism type of the ordered field structure of R*, or even by the order structure of R*? </p>
Gerald Edgar
454
<p>See Robinson's book <em>Non-Standard Analysis</em> (North-Holland 1966)...<br> Section 3.1 has some remarks about the order type of the non-standard natural numbers.</p>
2,338,123
<p>Function $f(z)$ is an entire function such that $$|f(z)| \le |z^{n}|$$ for $z \in \mathbb{C}$ and some $n \in \mathbb{N}$.</p> <p>Show that the singularities of the function $$\frac {f(z)}{z^{n}}$$ are removable. What can be implied about the function $f(z)$ if moreover $f(1) = i$? Draw a far-reaching conclusion.</p> <p>My attempt: If the singularities of $\frac {f(z)}{z^{n}}$ are removable, it is entire (not sure, need help with the justification) and bounded, so constant from Liouville's theorem, the constant value of the funcion is $i$, hence $f(z)=iz^{n}$. </p> <p>But what about the $n$ here, is it arbitrary? Could somebody help me prove the removability of the singularities and suggest if my attempt is going the right way?</p>
Jack D'Aurizio
44,121
<p>Since $f(z)$ is an entire function, $g(z)=\frac{f(z)}{z^n}$ may only have a pole at the origin. If that was the case, then $\left|g(z)\right|$ would be unbounded in a neighbourhood of the origin, but that contradicts $\left|g(z)\right|\leq 1$ for any $z\neq 0$. It follows that $z=0$ is a removable singularity for $g(z)$ and $g(z)$ is an entire function. Since $g(z)$ is bounded, by Liouville's theorem it follows that $g(z)$ is constant, so $f(z)=C z^n$. If $f(1)=i$, it follows that $f(z)=i z^n$.</p>
18,895
<p>I know that the answer is $C(8,2)$, but I don't get, why. Can anyone, please, explain it?</p>
aioobe
27,807
<p>$C(8, 2)$ means "$8$ choose $2$", which in this case should be interpreted as "how many ways can you choose $2$ items out of $8$ possible". Here the $8$ items are the bits, and the $2$ comes from the bits you "select to be $1$". That is, there are just as many bytes with exactly two 1s as there are ways to select $2$ bits from $8$ possible.</p>
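A brute-force check (my addition) that the count of 8-bit values with exactly two 1 bits is indeed $C(8,2)$:

```python
import math

# Enumerate all 256 bytes and count those whose binary representation has exactly two 1s.
count = sum(1 for b in range(256) if bin(b).count("1") == 2)
assert count == math.comb(8, 2) == 28
```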
1,776,726
<p>I'm trying to determine whether or not </p> <blockquote> <p>$$\sum_{k=1}^\infty \frac{2+\cos k}{\sqrt{k+1}}$$ </p> </blockquote> <p>converges or not. </p> <p>I have tried using the ratio test but this isn't getting me very far. Is this a sensible way to go about it or should I be doing something else?</p>
ivanhu42
337,970
<p>There is a high-school method.</p> <p>Since $\frac{a^2+1}{a^2+2}$ is an even function of $a$, decreasing when $a \lt 0$ and increasing when $a \gt 0$, all we need to do is find the min value of $\mid a \mid$.</p> <p>Since $x^2 + (a - 3) x - (a - 2) = 0$ has real root(s), it follows that $$ \Delta = (a - 3)^2 + 4 \cdot 1 \cdot (a - 2) \geq 0 $$</p> <p>so we have $a^2 - 2a + 1 = (a-1)^2 \geq 0$, which holds for every real $a$, so the min value of $\mid a \mid$ is $0$,</p> <p>and the min value of $\frac{a^2+1}{a^2+2}$ is $\frac{1}{2}$.</p>
894,909
<p>Given </p> <p>$(x+3)(y−4)=0 $</p> <p>Quantity $A = xy $</p> <p>Quantity $B = -12 $</p> <p>A Quantity $A$ is greater.<br> B Quantity $B$ is greater.<br> C The two quantities are equal.<br> D The relationship cannot be determined from the information given. </p> <p>How is the answer D and not C ? </p> <p>Please help.</p>
Shabbeh
165,678
<p>Because the first equation says either $x=-3$ or $y=4$ (or both), and only in the last case is C correct.</p> <p>For example,</p> <p>$$\begin{cases} x=-3\\y=0\end{cases}\rightarrow \begin{cases} A=0\\B=-12\end{cases}$$</p>
3,755,032
<p>I was given <span class="math-container">$(x,y)$</span> that satisfies both of this equation:</p> <p><span class="math-container">$4|xy| - y^2 - 2 = 0\\(2x+y)^2 + 4x^2 = 2$</span></p> <p>And was asked to find the maximum value of <span class="math-container">$4x + y$</span>.</p> <p>Solving for <span class="math-container">$y^2$</span>, I get this equation: <span class="math-container">$8x^2 + 4|xy| + 4xy - 4 = 0$</span></p> <p>I assumed that if I were to find the maximum value of <span class="math-container">$4x + y$</span>, then both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> must be positive, so <span class="math-container">$xy \ge 0$</span> means that <span class="math-container">$|xy|$</span> is equal to <span class="math-container">$xy$</span>. So, I have</p> <p><span class="math-container">$8x^2 + 8xy - 4 = 0$</span></p> <p>Solving for <span class="math-container">$y$</span>, I have <span class="math-container">$y = \frac{4-8x^2}{8x}$</span> which makes <span class="math-container">$4x+y = f(x) = \frac{24x^2 + 4}{8x}$</span>. Finding the extreme point using <span class="math-container">$f'(x) = 0$</span>, I get <span class="math-container">$x = \frac{1}{\sqrt{6}}$</span>, <span class="math-container">$y = \frac{\sqrt{6}}{3}$</span>, and <span class="math-container">$f(\frac{1}{\sqrt{6}}) = \sqrt{6}$</span>, which is the wrong solution (and this being the minimum point, too, while not satisfying the first equation at the same time ...)</p> <p>The choices were: <span class="math-container">$1, \sqrt{2}, 2, \sqrt{3}, 4$</span>. Some help would be helpful!</p>
Thulashitharan D
786,768
<p>No, it does not; it represents a hyperbola.<br> The general equation of a straight line is <span class="math-container">$ax+by+c=0$</span> where <span class="math-container">$a,b,c \in\mathbb R$</span>. But your equation, when simplified, looks like <span class="math-container">$2y+xy-3=0$</span>. Notice that a straight line's equation has no <span class="math-container">$xy$</span> term (its coefficient is zero), hence it does not represent a straight line.<br><img src="https://i.stack.imgur.com/qXea5.png" alt="" /></p>
3,755,032
<p>I was given <span class="math-container">$(x,y)$</span> that satisfies both of this equation:</p> <p><span class="math-container">$4|xy| - y^2 - 2 = 0\\(2x+y)^2 + 4x^2 = 2$</span></p> <p>And was asked to find the maximum value of <span class="math-container">$4x + y$</span>.</p> <p>Solving for <span class="math-container">$y^2$</span>, I get this equation: <span class="math-container">$8x^2 + 4|xy| + 4xy - 4 = 0$</span></p> <p>I assumed that if I were to find the maximum value of <span class="math-container">$4x + y$</span>, then both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> must be positive, so <span class="math-container">$xy \ge 0$</span> means that <span class="math-container">$|xy|$</span> is equal to <span class="math-container">$xy$</span>. So, I have</p> <p><span class="math-container">$8x^2 + 8xy - 4 = 0$</span></p> <p>Solving for <span class="math-container">$y$</span>, I have <span class="math-container">$y = \frac{4-8x^2}{8x}$</span> which makes <span class="math-container">$4x+y = f(x) = \frac{24x^2 + 4}{8x}$</span>. Finding the extreme point using <span class="math-container">$f'(x) = 0$</span>, I get <span class="math-container">$x = \frac{1}{\sqrt{6}}$</span>, <span class="math-container">$y = \frac{\sqrt{6}}{3}$</span>, and <span class="math-container">$f(\frac{1}{\sqrt{6}}) = \sqrt{6}$</span>, which is the wrong solution (and this being the minimum point, too, while not satisfying the first equation at the same time ...)</p> <p>The choices were: <span class="math-container">$1, \sqrt{2}, 2, \sqrt{3}, 4$</span>. Some help would be helpful!</p>
Tanmay Gajapati
634,349
<p>There's a simple way to distinguish between a linear equation (or any polynomial) and a non-polynomial equation. A polynomial is defined as <span class="math-container">$$y=a_0 + a_1x + a_2x^2 + ... + a_nx^n$$</span> Observe the variable <span class="math-container">$x$</span> is on one side and <span class="math-container">$y$</span> is on the other; this is the general form. So bring one of the variables to one side and check that every exponent is a <em>non-negative integer</em>; only then is the equation a polynomial.<br> Try using this information for your question. (A linear equation is an equation of degree 1.)</p>
2,130,699
<p>I have the coordinates of potentially any points within a 3D arc, representing the path an object will take when launched through the air. X is forward, Z is up, if that is relevant. Using any of these points, I need to create a bezier curve that follows this arc. The curve requires the tangent values for the very start and end of the curve. I can't figure out the formula needed to convert this arc into a bezier curve, and I'm hoping someone can help. I've found plenty of resources discussing circular arcs, but the arc here is not circular.</p>
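One concrete route, sketched below (my own illustration, not from the question): if the path really is a constant-gravity launch, $p(t) = p_0 + v_0 t + \tfrac12 g t^2$, then the arc is a parabola, and a parabola is represented *exactly* by a quadratic Bézier. Matching coefficients gives the control points directly, and the endpoint tangents are just the velocities $v_0$ and $v_0 + gT$. All names here (`p0`, `v0`, `g`, `T`) are assumptions about the setup, not quantities from the question.

```python
# Sketch: a constant-gravity launch path p(t) = p0 + v0*t + 0.5*g*t^2 is a
# parabola, and a parabola is exactly a quadratic Bezier
#   B(u) = (1-u)^2*P0 + 2u(1-u)*P1 + u^2*P2,   u = t/T.
# Matching coefficients gives P1 = P0 + v0*T/2 and P2 = p(T); the endpoint
# tangent directions are v0 (at the start) and v0 + g*T (at the end).

def quad_bezier_from_launch(p0, v0, g, T):
    """Return control points (P0, P1, P2) for the arc traced over t in [0, T]."""
    P0 = tuple(p0)
    P1 = tuple(p + v * T / 2 for p, v in zip(p0, v0))
    P2 = tuple(p + v * T + 0.5 * a * T * T for p, v, a in zip(p0, v0, g))
    return P0, P1, P2

def bezier_point(P0, P1, P2, u):
    """Evaluate the quadratic Bezier at parameter u in [0, 1]."""
    return tuple((1 - u) ** 2 * a + 2 * u * (1 - u) * b + u ** 2 * c
                 for a, b, c in zip(P0, P1, P2))

# Demo: launch at the origin, forward speed 3 (x), upward speed 4 (z).
P0, P1, P2 = quad_bezier_from_launch((0.0, 0.0, 0.0), (3.0, 0.0, 4.0),
                                     (0.0, 0.0, -9.8), 1.0)
print(P0, P1, P2)
```

If the points were sampled rather than generated analytically, `v0`, `g`, and `T` can be recovered by fitting a quadratic in `t` to the samples first; the Bézier construction is unchanged.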
rschwieb
29,335
<p>A commutative ring with only trivial idempotents is called <em>connected</em>, and there are <a href="http://ringtheory.herokuapp.com/commsearch/commresults/?has=58&amp;lacks=5" rel="nofollow noreferrer">lots of connected rings that aren't domains</a>. </p> <p>Most of the ones appearing in the search query can be taken to be $\mathbb R$ algebras. Many of them are simply local rings that aren't domains (including the one that matches Mariano's example.)</p> <p>If you're still willing to consider things that aren't $\mathbb R$ algebras, then notice the one $\mathbb Z[x]/(x^2-1)$, which has no nontrivial idempotents, isn't local, and isn't a domain.</p>
2,461,820
<p>I am a bit confused by the following question. I get that P(T|D) = 0.95 and P(D) = 0.0001, but because I'm unable to work out P(T|~D) I'm struggling to apply the theorem. Am I missing something? I'm also unsure what to do with the information about testing negative, correctly, 95% of the time when you don't have the disease.</p> <p>You take a test T to tell whether you have a disease D. The test comes back positive. You know that the test is 95% accurate (the probability of testing positive when you do have the disease is 0.95, and the probability of testing negative when you don’t have the disease is also 0.95). You also know that the disease is rare: only 1 person in 10,000 gets the disease. What is the probability that you have the disease? How would this change if the disease was more common, say affecting 1 person in 100?</p>
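A numeric sketch of the Bayes computation being asked about (my own illustration, not part of the question). The missing piece, P(T|~D), follows from the stated specificity: P(T|~D) = 1 − P(negative|~D) = 1 − 0.95 = 0.05.

```python
# Bayes' theorem: P(D | T) = P(T | D) P(D) / P(T), where the denominator
# comes from the law of total probability over "diseased" and "healthy".

def posterior(p_d, sens=0.95, spec=0.95):
    """P(D | positive test), given prevalence p_d, sensitivity, specificity."""
    p_t = sens * p_d + (1 - spec) * (1 - p_d)  # total prob. of a positive test
    return sens * p_d / p_t

print(round(posterior(1 / 10000), 4))  # rare disease: about 0.19%
print(round(posterior(1 / 100), 4))    # common disease: about 16%
```

This is the classic base-rate effect: with a rare disease, almost all positives are false positives, so the posterior stays tiny despite the 95% accuracy.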
fleablood
280,126
<p>$e = e^1$</p> <p>$1 = e^0$</p> <p>So $\ln(e) =1$ and $\ln(1) = 0$.</p> <p>And $\ln(\ln e) = \ln (1) = 0$.</p>
54,541
<p>Apparently, Mathematica has no real sprintf-equivalent (unlike any other high-level language known to man). <a href="https://mathematica.stackexchange.com/questions/970/sprintf-or-close-equivalent-or-re-implementation">This has been asked before</a>, but I'm wondering if the new <code>StringTemplate</code> function in Mathematica 10 can be extended to include such formatting capabilities.</p> <p>What I have in mind is a function that takes a <code>TemplateObject</code>, and looks for "formatting specification strings" immediately after <code>TemplateSlot</code>'s and <code>TemplateExpression</code>'s and replaces them with <code>TemplateExpression</code>'s containing appropriate formatting code. So, for example, you could write:</p> <pre><code>st = applyFormat@StringTemplate["Number: `1`%.2 some other text"] </code></pre> <p>and you would get something equivalent to: </p> <pre><code>TemplateObject[{"Number: ", TemplateExpression[ToString[NumberForm[TemplateSlot[1], {\[Infinity], 2}]]], " some other text"}, InsertionFunction -&gt; TextString, CombinerFunction -&gt; StringJoin] </code></pre> <p>I'm not particularly picky about the syntax (it doesn't have to mimic sprintf), as long as:</p> <ul> <li>it's easy to write and easy to read</li> <li>it supports Mathematica's number formatting functions (<code>AccountingForm</code>, <code>ScientificForm</code>...)</li> <li>it's extensible (e.g by delegating the formatting to a pattern that can be overwritten/extended)</li> <li>it's compatible with existing <code>StringTemplate</code> templates</li> </ul> <p>I've started a function that does this, but I'm curious if you have better ideas (both implementation- and syntax-wise), so I'm posting it as an answer, not as part of the question.</p>
a06e
534
<p>You can also try:</p> <pre><code>StringTemplate["Number: `1` some other text"] [ToString[NumberForm[N[Pi], {\[Infinity], 2}]]] </code></pre> <p>That is, leave the formatting to <code>ToString</code>, which does handle <code>NumberForm</code> and everything else.</p>
72,854
<p>Hi everybody,</p> <p>Does there exist an explicit formula for the Stirling Numbers of the First Kind which are given by the formula $$ x(x-1)\cdots (x-n+1) = \sum_{k=0}^n s(n,k)x^k. $$</p> <p>Otherwise, what is the computationally fastest formula one knows?</p>
Feldmann Denis
17,164
<p>There is an explicit formula: $s(n,m)=\frac{(2n-m)!}{(m-1)!}\sum_{k=0}^{n-m}\frac{1}{(n+k)(n-m-k)!(n-m+k)!}\sum_{j=0}^{k}\frac{(-1)^{j} j^{n-m+k} }{j!(k-j)!}.$ For once, it is not in Wikipedia (en), but in the French version of it (and I posted it there myself, if I may so brag).</p>
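On the "computationally fastest" part of the question, a sketch of my own (not the formula from the answer): for a fixed $n$, simply expanding the defining product $x(x-1)\cdots(x-n+1)$ yields all the $s(n,k)$ at once in $O(n^2)$ exact integer operations, which is usually faster in practice than evaluating a closed-form double sum per entry.

```python
# Signed Stirling numbers of the first kind, computed by multiplying out
# x(x-1)...(x-n+1).  coeffs[k] holds s(n, k), matching sum_k s(n,k) x^k.

def stirling_first(n):
    """Return the coefficient list [s(n,0), s(n,1), ..., s(n,n)]."""
    coeffs = [1]                        # the polynomial "1", constant term first
    for i in range(n):                  # multiply the current polynomial by (x - i)
        coeffs = [0] + coeffs           # the x * P part (shift all degrees up)
        for k in range(len(coeffs) - 1):
            coeffs[k] -= i * coeffs[k + 1]   # the -i * P part
    return coeffs

print(stirling_first(4))  # [0, -6, 11, -6, 1], i.e. x^4 - 6x^3 + 11x^2 - 6x
```

The same loop, with `-=` replaced by `+=`, gives the unsigned numbers from the rising factorial.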
2,492,107
<p>The question asks for $\text{span}(A1,A2)$, where</p> <p>$$A1 =\begin{bmatrix}1&amp;2\\0&amp;1\end{bmatrix}$$ $$A2 = \begin{bmatrix}0&amp;1\\2&amp;1\end{bmatrix}$$</p> <p>I began by calculating $c_1[A1] + c_2[A2]$, then converting it into a matrix and row reducing. I found the restrictions where the entries after the augment must equal $0$, then plugged those back into \begin{bmatrix}w&amp;x\\y&amp;z\end{bmatrix} and got the wrong answer. Could anyone please review my work and explain my mistake or the information I am missing? Thank you.</p> <p>The solution I worked out (wrong): <a href="https://i.stack.imgur.com/nUdeq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nUdeq.jpg" alt="enter image description here"></a></p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>Consider $$S_n=\sum_{i=1}^n \frac i{1+i^2+i^4}=\frac 12\left(\sum_{i=1}^n \frac 1{1-i+i^2}-\sum_{i=1}^n \frac 1{1+i+i^2}\right)$$ which beautifully telescopes.</p>
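To see why the hint telescopes (an illustrative check of mine, not part of the original answer): $1+i^2+i^4 = (i^2-i+1)(i^2+i+1)$, and replacing $i$ by $i+1$ turns the first factor into the second, so the partial sums collapse to $S_n = \tfrac12\left(1 - \tfrac{1}{n^2+n+1}\right)$. A quick exact check with Python's `fractions`:

```python
from fractions import Fraction

def S(n):
    """Partial sum of i / (1 + i^2 + i^4), computed term by term."""
    return sum(Fraction(i, 1 + i**2 + i**4) for i in range(1, n + 1))

def closed_form(n):
    """What the telescoping leaves: half of the first term minus half of the last tail."""
    return Fraction(1, 2) * (1 - Fraction(1, n * n + n + 1))

print(all(S(n) == closed_form(n) for n in range(1, 50)))  # True
```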
1,250,755
<p>I'm sure there's something I'm missing here; probably a naive confusion of mathematics with metamathematics. Regardless, I've come up with what looks to me like a proof that (first-order) ZF+AD is inconsistent:</p> <p>Assume otherwise. Then it cannot be shown that ZF+AD is consistent relative to ZF. However, ZFC is consistent relative to ZF, and as ZF+AD is consistent (by our assumption for contradiction,) a model of ZF+AD exists in ZFC by Gödel's completeness theorem. Hence ZF+AD is consistent relative to ZFC. This establishes that ZF+AD is consistent relative to ZF, a contradiction.</p> <p>I'd appreciate anyone pointing out the problematic step(s).</p> <p>EDIT: Upon further thought, under the (misguided) assumption that this argument holds, an adaption of this argument can be used to show that every first-order countable theory has a countable model in ZF: ZFC has a model in ZF (I think this was shown by Cohen via forcing?) and such a theory has a model in ZFC by the first incompleteness theorem. "Composing" these models gives us a model of the theory in ZF.</p>
Sam
221,113
<p>What Asaf Karagila says in the second paragraph of his answer is basically what you are overlooking in your argument. Namely, it is implied in your argument that the statement $''\text{Con}(ZF+AD)\implies ZFC\vdash\text{Con}(ZF+AD)''$ is true in the meta-theory. But this is not necessarily true. What is true in the meta-theory is that $''\varphi\implies ZFC\vdash\varphi''$ whenever $\varphi$ is a $\Sigma_1$ sentence. However, since $\text{Con}(ZF+AD)$ is a $\Pi_1$ sentence, this fact doesn't apply. (By ''true'' I actually mean ''meta-theorem''.)</p> <p>So your conclusion that "a model of $ZF+AD$ exists in $ZFC$ by Gödel's completeness theorem" must actually be rephrased to "a model of $ZF+AD$ exists in $ZFC+\text{Con}(ZF+AD)$ by Gödel's completeness theorem". But this fact (along with the other facts you are using) isn't enough to establish the contradiction you are looking for.</p> <p>Also, the method of forcing does not produce a set model for (all of) $ZFC$ from within $ZFC$. So we are not justified in inferring $''ZFC\vdash\text{Con}(ZFC)''$ by appealing solely to forcing results and Gödel's completeness theorem.</p>
406,437
<p>Calculate the extreme values of $f(x,y)=x^{2}+y^{2}+xy+\dfrac{1}{x}+\dfrac{1}{y}$.</p> <p>Please help me.</p>
Alex Wertheim
73,817
<p>How about the ring of quaternions? This is an example of a non-commutative ring and I don't think it's super difficult to wrap your head around. Understanding the ring of quaternions is also in fact very useful in understanding representations of rotations and groups of isometries for polyhedra. </p>
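A tiny illustration of the non-commutativity mentioned above (my own sketch, not from the answer): writing a quaternion as a tuple $(w, x, y, z)$ for $w + xi + yj + zk$, the Hamilton product gives $ij = k$ but $ji = -k$.

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples,
# following the defining relations i^2 = j^2 = k^2 = ijk = -1.

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k
```

So the multiplication is concrete and easy to compute with, even though it fails to commute, which is exactly why unit quaternions encode 3D rotations (composition of rotations is also non-commutative).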