3,878,723
<blockquote> <p>Find the value of <span class="math-container">$k$</span> if the curve <span class="math-container">$y = x^2 - 2x$</span> is tangent to the line <span class="math-container">$y = 4x + k$</span></p> </blockquote> <p>I have looked at the solution to this question and the first step is to &quot;equate the two functions&quot;:<br /> <span class="math-container">$ x^2 - 2x = 4x + k$</span></p> <p>Why? How does that help solve the problem? And how can I use what I get from equating the two functions to find the solution?</p>
heropup
118,193
<p>If the line <span class="math-container">$y = 4x + k$</span> is tangent to <span class="math-container">$y = x^2 - 2x$</span>, then there exists some value of <span class="math-container">$x$</span> for which the two curves intersect; i.e., <span class="math-container">$$4x + k = x^2 - 2x.$$</span> This results in one condition for <span class="math-container">$x$</span> that depends on <span class="math-container">$k$</span>.</p> <p>The second condition is that <em>at this same value of</em> <span class="math-container">$x$</span>, the derivative of <span class="math-container">$y = x^2 - 2x$</span> equals the slope of the tangent line <span class="math-container">$y = 4x + k$</span>. This results in the condition <span class="math-container">$$2x - 2 = 4.$$</span></p>
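<p>A quick numeric check of the two conditions above (my own sketch, not part of the original answer): the slope condition gives <span class="math-container">$x = 3$</span>, and substituting back yields <span class="math-container">$k = -9$</span>.</p>

```python
# Slope condition: 2x - 2 = 4  ->  x = 3
x = (4 + 2) / 2
# Intersection condition: x^2 - 2x = 4x + k  ->  k = (x^2 - 2x) - 4x
k = (x**2 - 2*x) - 4*x
print(x, k)  # 3.0 -9.0
```

<p>With <span class="math-container">$k = -9$</span> the equation becomes <span class="math-container">$x^2 - 6x + 9 = (x-3)^2 = 0$</span>, a double root, which is exactly what tangency means.</p>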
3,438,048
<p>I've recently obtained my University entrance papers from 1967 (yes, 52 years ago!) and I found the question below difficult. I presume the answer is a symmetric expression in the differences between alpha, beta and gamma. Am I missing some obvious trick? Any help would be appreciated.</p> <p>Simplify and evaluate the determinant <a href="https://i.stack.imgur.com/Dfft4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfft4.png" alt="enter image description here"></a></p> <p>and show that its value is independent of theta.</p>
Vijayakumar Muni
400,662
<p>First of all in your definition of a topology <span class="math-container">$\tau$</span> on <span class="math-container">$X,$</span> there is a typo in condition (c).</p> <p>It is supposed to be <span class="math-container">$``$</span>for <span class="math-container">$\alpha\in J,$</span> where <span class="math-container">$J$</span> is an index set <span class="math-container">$\Big($</span>this index set <span class="math-container">$J$</span> is any of the following three types: (i) <span class="math-container">$J$</span> is a finite set, e.g. <span class="math-container">$\{1,2,\ldots,n\},$</span> where <span class="math-container">$n$</span> is a finite positive integer, (ii) <span class="math-container">$J$</span> is a countably infinite set, e.g. <span class="math-container">$\mathbb{N},$</span> the set of all positive integers, (iii) <span class="math-container">$J$</span> is an uncountable set, e.g. <span class="math-container">$\mathbb{Q}^c,$</span> the set of all irrational numbers<span class="math-container">$\Big),$</span> if <span class="math-container">$A_\alpha\in \tau,$</span> then <span class="math-container">$\bigcup_{\alpha\in J}A_\alpha\in \tau."$</span> </p> <p>The meaning of condition (a) is <span class="math-container">$``$</span>the empty set and the whole <span class="math-container">$X$</span> are the members of <span class="math-container">$\tau."$</span></p> <p>The meaning of condition (b) is <span class="math-container">$``$</span>the finite-intersection of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau."$</span></p> <p>The meaning of condition (c) is <span class="math-container">$``$</span>the arbitrary union of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau."$</span> <span class="math-container">$\big($</span>Here arbitrary union refers to the union of a finite number of members, or the union of countably 
infinite number of members, or the union of uncountable number of members.<span class="math-container">$\big)$</span></p> <p>If all these three conditions satisfies, then <span class="math-container">$\tau$</span> <span class="math-container">$\big($</span>viz. a subset of <span class="math-container">$\mathcal{P}(X)\big)$</span> is called a topology on <span class="math-container">$X,$</span> and <span class="math-container">$X$</span> is called a topological space endowed with a topology <span class="math-container">$\tau.$</span></p> <p>Now, let us come to your example: </p> <p>Given <span class="math-container">$X=\mathbb{R},$</span> the set of all real numbers, viz. <span class="math-container">$(-\infty, \, \infty).$</span></p> <p>Given <span class="math-container">$\tau:=\Big\{U\,\,\Big| \, U\subseteq\mathbb{R},\,\,$</span>if<span class="math-container">$\,\,x\in U,\,\,$</span>then <span class="math-container">$\exists$</span> a finite<span class="math-container">$\,\,\epsilon_x \in (0, \infty)\,\,$</span>s.t.<span class="math-container">$\,\,(x-\epsilon_x, \, x+\epsilon_x)\subseteq U\Big\}.$</span></p> <p>Now let us see why <span class="math-container">$\tau$</span> is a topology on <span class="math-container">$\mathbb{R}.$</span></p> <p>For this, we need to check the conditions (a), (b), and (c). 
Remember, if even one of these conditions fails, then this <span class="math-container">$\tau$</span> will not be called a topology on <span class="math-container">$\mathbb{R}.$</span> </p> <p><span class="math-container">$\underline{\text{Condition (a).}}$</span> Note that <span class="math-container">$\varnothing\subset \mathbb{R}.$</span> But there is no <span class="math-container">$x\in \varnothing.$</span> Hence it is not required to check the condition: <span class="math-container">$``$</span>Does there exist any finite <span class="math-container">$\,\epsilon_x\in (0, \infty)\,\,$</span>s.t.<span class="math-container">$\,(x-\epsilon_x, \, x+\epsilon_x)\subseteq \varnothing,"\,$</span> as this condition is trivially true. <span class="math-container">$\big($</span>To understand more about this logic, please read any textbook on conditional propositions: Let <span class="math-container">$p$</span> and <span class="math-container">$q$</span> be two propositions. Then the compound proposition <span class="math-container">$p\Longrightarrow q$</span> is a true proposition under three cases: (i) both <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are true, (ii) both <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are false, (iii) <span class="math-container">$p$</span> is false and <span class="math-container">$q$</span> is true. 
The truth of the above conditional statement follows from cases (ii) and (iii), since here <span class="math-container">$p$</span> is false.<span class="math-container">$\big)$</span> Therefore <span class="math-container">$\varnothing\in \tau.$</span></p> <p>Now <span class="math-container">$\mathbb{R}\subseteq \mathbb{R}.$</span> Then for every real number <span class="math-container">$x\in \mathbb{R},$</span> one can choose, for instance, <span class="math-container">$\epsilon_x=1.$</span> Then note that <span class="math-container">$(x-1, \, x+1)\subset \mathbb{R}.$</span> Therefore <span class="math-container">$\mathbb{R}\in \tau.$</span></p> <p><span class="math-container">$\underline{\text{Condition (b).}}$</span> Let <span class="math-container">$U_1\in \tau$</span> and <span class="math-container">$U_2\in \tau.$</span> Then <span class="math-container">$U_1\cap U_2\subseteq \mathbb{R},$</span> as <span class="math-container">$U_1\subseteq \mathbb{R}$</span> and <span class="math-container">$U_2\subseteq \mathbb{R}.$</span> </p> <p>If <span class="math-container">$U_1\cap U_2=\varnothing,$</span> then <span class="math-container">$U_1\cap U_2\in \tau,\,$</span> by condition (a).</p> <p>If <span class="math-container">$U_1\cap U_2\neq\varnothing,$</span> then <span class="math-container">$\exists$</span> some <span class="math-container">$x\in U_1\cap U_2.\,\,$</span> Since <span class="math-container">$x\in U_1,$</span> therefore <span class="math-container">$\exists$</span> some finite number, say <span class="math-container">$\epsilon_{1x}\in (0, \infty)$</span> s.t. <span class="math-container">$(x-\epsilon_{1x}, \, x+\epsilon_{1x})\subseteq U_1.\,\,$</span> Similarly, as <span class="math-container">$x\in U_2,$</span> therefore <span class="math-container">$\exists$</span> some finite number, say <span class="math-container">$\epsilon_{2x}\in (0, \infty)$</span> s.t. 
<span class="math-container">$(x-\epsilon_{2x}, \, x+\epsilon_{2x})\subseteq U_2.$</span> Let us define <span class="math-container">$\epsilon_x:=$</span> min<span class="math-container">$\{\epsilon_{1x},\, \epsilon_{2x}\}.$</span> Then <span class="math-container">$\epsilon_x\in (0, \infty)$</span> is also a finite number s.t. <span class="math-container">$(x-\epsilon_x, \, x+\epsilon_x)\subseteq U_1\cap U_2.\,\,$</span> As <span class="math-container">$\,x\,$</span> was arbitrary element in <span class="math-container">$U_1\cap U_2,$</span> so for every <span class="math-container">$x\in U_1\cap U_2,$</span> we can find some finite number <span class="math-container">$\epsilon_x\in (0, \infty),$</span> s.t. <span class="math-container">$(x-\epsilon_x, \, x+\epsilon_x)\subseteq U_1\cap U_2.$</span> Hence <span class="math-container">$U_1\cap U_2\in \tau.$</span> </p> <p>Now we prove that <span class="math-container">$\,U_1\cap\cdots \cap U_n\in \tau,\,\,$</span> whenever <span class="math-container">$\,U_1,\cdots, U_n\in \tau,$</span> where <span class="math-container">$n\in \mathbb{N}$</span> is finite.</p> <p>For this let us assume <span class="math-container">$\,\bigcap_{i=1}^{n-1}U_i\in \tau,\,$</span> <span class="math-container">$($</span>where<span class="math-container">$\,\,n\geq 2).$</span> </p> <p>Then <span class="math-container">$\,\bigcap_{i=1}^{n}U_i=\Big(\bigcap_{i=1}^{n-1}U_i\Big)\bigcap U_n.\,$</span> Denoting <span class="math-container">$\,\bigcap_{i=1}^{n-1}U_i=V,\,\,$</span>then by using the previous argument, we have <span class="math-container">$V\cap U_n\in \tau,\,\,$</span> as <span class="math-container">$V\in \tau,\, U_n\in \tau.\,\,$</span>Therefore <span class="math-container">$\bigcap_{i=1}^{n}U_i\in \tau.\,\,$</span>Hence by induction, any finite-intersection of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau.$</span> </p> <p>*Note: If we take the intersection of infinite 
collection of elements of <span class="math-container">$\tau,$</span> then it need not be a member of <span class="math-container">$\tau.$</span> For instance, let <span class="math-container">$U_n:=\big(-\frac{1}{n},\, \frac{1}{n}\big)\in \tau,$</span> for each <span class="math-container">$n\in \mathbb{N}.$</span> Now if we take their intersection:</p> <p><span class="math-container">$$\bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big)=\{0\}\notin \tau,$$</span> as for the element <span class="math-container">$x=0\in \bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big),$</span> <span class="math-container">$\nexists\,$</span> any finite <span class="math-container">$\epsilon_0\in (0, \infty)\,$</span> s.t. <span class="math-container">$\,(0-\epsilon_0,\, 0+\epsilon_0)=(-\epsilon_0, \epsilon_0)\subseteq \bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big).$</span> </p> <p><span class="math-container">$\underline{\text{Condition (c).}}\,$</span> Let <span class="math-container">$U_\alpha\in \tau,\,$</span> for each <span class="math-container">$\alpha\in J.\,$</span> We need to check whether <span class="math-container">$\,\bigcup_{\alpha\in J}U_\alpha\in \tau?$</span> </p> <p>Let <span class="math-container">$\,x\in \bigcup_{\alpha\in J}U_\alpha.\,$</span> <span class="math-container">$\big($</span>If there is no <span class="math-container">$\,x\in \bigcup_{\alpha\in J}U_\alpha,\,$</span> then <span class="math-container">$\bigcup_{\alpha\in J}U_\alpha=\varnothing\in \tau,\,$</span> by condition (a).<span class="math-container">$\big)\,$</span> Then <span class="math-container">$\,x\in U_\alpha,\,$</span> for some <span class="math-container">$\alpha\in J.\,$</span> Since <span class="math-container">$U_\alpha\in \tau,\,$</span> therefore <span class="math-container">$\exists$</span> some finite <span class="math-container">$\,\epsilon_{\alpha x}\in (0, \infty)\,$</span> s.t.<span class="math-container">$\,(x-\epsilon_{\alpha x},\,\, 
x+\epsilon_{\alpha x})\subseteq U_\alpha.\,$</span> But again, as <span class="math-container">$U_\alpha\subseteq \bigcup_{\alpha\in J}U_\alpha,\,\,$</span> by transitivity of <span class="math-container">$\,\subseteq,\,$</span> we have <span class="math-container">$$(x-\epsilon_{\alpha x},\,\, x+\epsilon_{\alpha x})\subseteq \bigcup_{\alpha\in J}U_\alpha.$$</span> This condition is true for every <span class="math-container">$x\in \bigcup_{\alpha\in J}U_\alpha,\,$</span> as our chosen <span class="math-container">$x$</span> was an arbitrary element of <span class="math-container">$\bigcup_{\alpha\in J}U_\alpha.\,$</span> Hence <span class="math-container">$\,\bigcup_{\alpha\in J}U_\alpha\in \tau.$</span></p> <p>All the above verifications force us to conclude that <span class="math-container">$\mathbb{R}$</span> is a topological space endowed with the topology <span class="math-container">$\tau.$</span></p>
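<p>The nested-interval counterexample in the note above can be illustrated numerically (my own sketch): whatever <code>eps &gt; 0</code> we try for the point <span class="math-container">$0$</span>, the interval <code>(-eps, eps)</code> stops fitting inside <span class="math-container">$(-1/n, 1/n)$</span> once <span class="math-container">$n$</span> exceeds <span class="math-container">$1/\epsilon$</span>.</p>

```python
def interval_contained(eps, n):
    # Is (-eps, eps) a subset of (-1/n, 1/n)?
    return eps <= 1.0 / n

eps = 0.001
print(interval_contained(eps, 100), interval_contained(eps, 2000))  # True False
```

<p>So no single <code>eps</code> works for <em>all</em> <span class="math-container">$n$</span>, which is why <span class="math-container">$\{0\}\notin\tau$</span>.</p>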
1,772,562
<p>Let $f:\mathbb{C}\rightarrow\mathbb{C}$ be holomorphic. If we have $|f(z)|\leq|z|^n$ for some $n\in\mathbb{N}$ and all $z\in\mathbb{C}$, then $f$ is a polynomial.</p> <p>I tried to apply Liouville's theorem but it does not help.</p> <p>Thanks for your help.</p>
Martín-Blas Pérez Pinilla
98,199
<p>Even true if the condition is $|f(z)|\le C|z|^n$ for $|z|\ge R&gt;0$. Let $P_n$ be the $n$th degree Taylor polynomial of $f$ at zero and consider $f-P_n$.</p>
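<p>A sketch of why such a bound forces $f$ to be a polynomial (my own expansion of the hint, via Cauchy's estimates rather than the $f - P_n$ route): for the Taylor coefficients $a_k$ of $f$ and any radius $\rho \ge R$,</p>

```latex
% Cauchy's estimate on the circle |z| = rho, where |f(z)| <= C|z|^n holds:
\left|a_k\right| = \left|\frac{f^{(k)}(0)}{k!}\right|
  \le \frac{\max_{|z|=\rho}|f(z)|}{\rho^{k}}
  \le \frac{C\,\rho^{n}}{\rho^{k}}
  \xrightarrow[\rho\to\infty]{} 0 \qquad (k > n),
```

<p>so every Taylor coefficient beyond degree $n$ vanishes and $f = P_n$.</p>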
88,363
<p>It is easy to truncate Series up to some order, say $n$. My question is how do I remove low orders? Let us say my series is a power series in $x$. I want to remove the terms with negative powers because they diverge at $x = 0$. I can simply write</p> <p>s1-s2, where</p> <p>s1=Normal[Series[blah, {x, 0, n}]]</p> <p>s2=Normal[Series[blah, {x, 0, -1}]]</p> <p>but Mathematica does not automatically cancel the removed terms because they are complicated. The solution would be to use Collect[s1-s2, x, Simplify], but this is horribly slow as I increase $n$ above even 2. I suppose I could simply delete the terms by hand, but the outputs are very messy, and there must exist a proper way to do this.</p>
Acus
18,792
<p>Why not subtract two expansions, as in</p> <pre><code>t1 = Series[1/Sin[x], {x, 0, 10}] t2 = Series[1/Sin[x], {x, 0, 0}] </code></pre> <p>Then</p> <pre><code>Normal[t1] - Normal[t2] </code></pre> <p>which outputs</p> <pre><code>x/6 + (7 x^3)/360 + (31 x^5)/15120 + (127 x^7)/604800 + (73 x^9)/3421440 </code></pre>
3,583,117
<p>I would like to understand clearly why the following equality is true</p> <p><span class="math-container">$P[X+Y \leq z] = E_Y[P[X+Y \leq z \mid Y]]$</span></p> <p>I wrote the right-hand side of the equation as follows:</p> <p><span class="math-container">$E_Y[P[X+Y \leq z \mid Y]] = \sum_y P[X+y \leq z]\,P(Y=y)$</span></p> <p>and I have tried with a toy example where <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are two <span class="math-container">$r.v$</span> that model the throw of a die and it works, but I would like to clearly understand why it is true. I know that it is linked with the law of total probability, right?</p>
roundsquare
706,295
<p><span class="math-container">$P[X+Y \le z] = \sum_{y \in Y}P[Y=y] \times P[X + y \le z] = \sum_{y \in Y}P[Y=y] \times P[X + Y \le z | Y=y] = E_Y[P(X + Y \le z | Y=y)]= E_Y[P(X + Y \le z | Y)]$</span></p> <p>where the last equality is just a bit of notation.</p>
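<p>The chain of equalities can be spot-checked with the dice example mentioned in the question (my own sketch): take <span class="math-container">$X, Y$</span> independent fair dice and <span class="math-container">$z = 7$</span>.</p>

```python
from fractions import Fraction

die = range(1, 7)
z = 7

# Left-hand side: P[X + Y <= z] computed directly over all 36 outcomes
lhs = Fraction(sum(1 for x in die for y in die if x + y <= z), 36)

# Right-hand side: condition on Y, i.e. sum_y P[Y = y] * P[X + y <= z]
rhs = sum(Fraction(1, 6) * Fraction(sum(1 for x in die if x + y <= z), 6)
          for y in die)

print(lhs, rhs)  # 7/12 7/12
```

<p>Both sides agree exactly, which is the law of total probability in action.</p>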
3,413,480
<p>The <a href="https://de.wikipedia.org/wiki/Asymmetrisches_Kryptosystem#Sicherheit" rel="nofollow noreferrer">German Wikipedia article on asymmetric cryptography</a> states that asymmetric cryptography is <em>always</em> based on assumptions which <strong>can not</strong> be proven:</p> <p><em>Die Sicherheit aller asymmetrischen Kryptosysteme beruht also immer auf unbewiesenen Annahmen.</em></p> <p>translation: <em>Thus, the security of all asymmetric cryptosystems is always based on unproven assumptions.</em></p> <p>I could not find any confirmation for this statement in other sources.</p> <ul> <li>is this statement correct?</li> <li>or is it wrong, i.e. while there is no proof for the irreversibility of trapdoor functions yet, it can't be ruled out that there may be a proof that e.g. the prime factorization or discrete logarithm are irreversible functions?</li> </ul> <p>I don't have a strong math background, but e.g. a simple modulo operation is obviously not reversible because the same result can be achieved with different numbers.</p> <ul> <li>5 mod 3 = 2</li> <li>8 mod 3 = 2</li> </ul> <p>So for modulo, a proof of irreversibility exists. (Now, afaik, a modulo-operation is a one-way function but not a trapdoor function - and maybe that's a crucial difference for such a statement of <em>is unprovable</em>).</p> <p>Update:</p> <p>Some clarification: How I read this statement is that <em>always</em> does not refer only to the current knowledge, but says that asymmetric cryptography with trapdoor functions <em>is and will generally always be based on unproven assumptions</em> (i.e. it is generally not possible to find a trapdoor function and prove it is irreversible).</p>
quarague
169,704
<p>This seems to be an open research problem. If you look at <a href="https://en.wikipedia.org/wiki/One-way_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/One-way_function</a> it says that the existence of one-way functions is currently unproven. So for cryptography applications there are two pieces missing for a rigorous proof: first, that one-way functions exist at all, and second, that the particular function used in the cryptographic algorithm is such a one-way function. </p> <p><strong>Edit:</strong> I would say that the statement using 'always' on the German Wikipedia is wrong. With current knowledge it is based on unproven assumptions. If at some later time it is proven that one-way functions exist, one could prove the security of public key cryptography. If on the other hand it is shown that no one-way functions exist, this proves that all public key cryptography is a priori insecure. It might just be the case that we don't know how to invert a particular function even though we know it can't be a one-way function.</p>
3,413,480
<p>The <a href="https://de.wikipedia.org/wiki/Asymmetrisches_Kryptosystem#Sicherheit" rel="nofollow noreferrer">German Wikipedia article on asymmetric cryptography</a> states that asymmetric cryptography is <em>always</em> based on assumptions which <strong>can not</strong> be proven:</p> <p><em>Die Sicherheit aller asymmetrischen Kryptosysteme beruht also immer auf unbewiesenen Annahmen.</em></p> <p>translation: <em>Thus, the security of all asymmetric cryptosystems is always based on unproven assumptions.</em></p> <p>I could not find any confirmation for this statement in other sources.</p> <ul> <li>is this statement correct?</li> <li>or is it wrong, i.e. while there is no proof for the irreversibility of trapdoor functions yet, it can't be ruled out that there may be a proof that e.g. the prime factorization or discrete logarithm are irreversible functions?</li> </ul> <p>I don't have a strong math background, but e.g. a simple modulo operation is obviously not reversible because the same result can be achieved with different numbers.</p> <ul> <li>5 mod 3 = 2</li> <li>8 mod 3 = 2</li> </ul> <p>So for modulo, a proof of irreversibility exists. (Now, afaik, a modulo-operation is a one-way function but not a trapdoor function - and maybe that's a crucial difference for such a statement of <em>is unprovable</em>).</p> <p>Update:</p> <p>Some clarification: How I read this statement is that <em>always</em> does not refer only to the current knowledge, but says that asymmetric cryptography with trapdoor functions <em>is and will generally always be based on unproven assumptions</em> (i.e. it is generally not possible to find a trapdoor function and prove it is irreversible).</p>
kelalaka
338,051
<p>This is due to the fact that the existence of One-Way Functions (OWF) implies that <span class="math-container">$P \neq NP$</span>. In other words, by the contrapositive, if <span class="math-container">$P = NP$</span> then OWFs don't exist. So if this were ever settled, you would know it from the news. Therefore, the security of cryptographic systems is based on unproven results.</p> <p>There are candidates for OWFs, like:</p> <ul> <li>Multiplication and Factoring</li> <li>The Rabin function used in the Rabin Cryptosystem</li> <li>Discrete logarithms used in ECC, ElGamal encryption, DHKE</li> <li>Cryptographically secure hash functions</li> </ul> <p>After extensive research, none of these has been proven to be an OWF, nor has the reverse been proven.</p> <ul> <li>As of Sep. 2021, there has been an advance in this area by Yanyi Liu and Rafael Pass <ul> <li><p><a href="https://arxiv.org/abs/2009.11514" rel="nofollow noreferrer">On One-way Functions and Kolmogorov Complexity</a>.</p> <p>This is an existence result. A nice introduction is in <a href="https://www.quantamagazine.org/researchers-identify-master-problem-underlying-all-cryptography-20220406/" rel="nofollow noreferrer">Quanta Magazine</a></p> </li> </ul> </li> </ul>
1,175,632
<p>Determine whether the following integral converges or diverges: \begin{align*} \iint_Q e^{-xy} \ dA, \end{align*} where $Q$ is the first quadrant of the $xy$-plane.</p> <p>How should I go about this problem? Should I compare it with another known integral?</p>
sakas
521,275
<p>Let <span class="math-container">$X_i=\begin{cases} S, &amp; \text{with prob 0.3 } \\ F, &amp; \text{with prob 0.7 } \\ \end{cases}$</span> for <span class="math-container">$i=1,2,3,4,5$</span> be IID, representing the outcome of the i-th day. The probability asked for is the following (if we are talking about exactly 3 days of success in a row):</p> <p>P(Mr Li gets home in time 3 days in a row) = <span class="math-container">$ P(X_1=X_2=X_3=S)+P(X_1=F,X_2=X_3=X_4=S)+P(X_1=X_2=F,X_3=X_4=X_5=S)= 0.3^3+0.3^3\cdot 0.7+0.3^3\cdot 0.7^2=0.05913$</span> </p> <p>(which is the correct answer in the book)</p>
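<p>A brute-force check (my own sketch) that enumerates all <span class="math-container">$2^5$</span> outcome sequences and sums the weights of the three disjoint events above:</p>

```python
from itertools import product

p = {'S': 0.3, 'F': 0.7}   # S = gets home in time, F = does not

prob = 0.0
for seq in product('SF', repeat=5):
    s = ''.join(seq)
    # the three disjoint events from the answer:
    # SSS on days 1-3, or F then SSS on days 2-4, or FF then SSS on days 3-5
    if s.startswith('SSS') or s.startswith('FSSS') or s.startswith('FFSSS'):
        weight = 1.0
        for day in s:
            weight *= p[day]
        prob += weight

print(round(prob, 5))  # 0.05913
```

<p>The enumeration reproduces the book's value, confirming the three events are disjoint and exhaustive for this question.</p>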
24,186
<p>Consider the code below:</p> <pre><code>s = Solve[(3 - Cos[4*x])*(Sin[x] - Cos[x]) == 2, x, InverseFunctions -&gt; True]; Select[s[[All, 1, 2]], Element[#, Reals] &amp;] </code></pre> <p>In MMA 8.0, I get </p> <pre><code>{-\[Pi], \[Pi]/2, \[Pi]} </code></pre> <p>but in MMA 9.0, I get the empty set <code>{ }</code>.</p> <p>Assuming the solution by MMA 8.0 is correct, can someone show me a workaround for MMA 9.0? </p>
Thies Heidecke
47
<p>In those more complicated cases consisting of multiple steps, using Composition clears things up for me while still retaining a pure functional style. In your example of calculating the distance between two points in 2D I would write:</p> <pre><code>u = {-3, 3}; v = {1, 5}; Composition[Sqrt, #.# &amp;, Subtract][u, v] (* 2 Sqrt[5] *) </code></pre> <p>or as rm -rf pointed out you can just use the builtin</p> <pre><code>EuclideanDistance[u, v] (* 2 Sqrt[5] *) </code></pre> <p>In case you want to apply the extra <code>Abs</code> to each parameter before calculating the distances you can unpack the parameters <code>##</code> into a list, take the absolute value and rewrap it into a new <code>Sequence</code> before passing it to <code>Subtract</code>:</p> <pre><code>Composition[Sqrt, #.# &amp;, Subtract, Sequence @@ Abs[{##}] &amp;][u, v] (* 2 Sqrt[2] *) </code></pre> <p>You could reuse those functions by binding them to a new symbol via <code>Set</code> and extract the idea of mapping a function onto every parameter, taking extra advantage of functions that are <code>Listable</code>, like <code>Abs</code>:</p> <pre><code>MapSequence[f_] := Sequence @@ f[{##}] &amp; /; MemberQ[Attributes[f], Listable] MapSequence[f_] := Sequence @@ (f /@ {##}) &amp; AbsDistance = Composition[Sqrt, #.# &amp;, Subtract, MapSequence[Abs]]; AbsDistance[u, v] (* 2 Sqrt[2] *) </code></pre>
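<p>For comparison, a rough Python analogue of the same compose-then-apply idea (my own sketch; <code>compose</code> is a hypothetical helper, not a standard library function):</p>

```python
import math

def compose(*fs):
    # Hypothetical helper: the right-most function is applied to the
    # arguments first, then each result flows leftward through the chain.
    def composed(*args):
        result = fs[-1](*args)
        for f in reversed(fs[:-1]):
            result = f(result)
        return result
    return composed

u, v = (-3, 3), (1, 5)

distance = compose(math.sqrt,
                   lambda d: sum(c * c for c in d),            # like #.# &
                   lambda a, b: [x - y for x, y in zip(a, b)]) # like Subtract
print(distance(u, v))  # 2*sqrt(5) ~ 4.4721
```

<p>The ordering mirrors Mathematica's <code>Composition</code>: right-most first.</p>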
3,043,039
<p>Let <span class="math-container">$f:(0,1) \to \mathbb{R}$</span> be a given function. Explain how the following definition is not equivalent to the definition of the limit</p> <p><span class="math-container">$\lim\limits_{x \to x_0} f(x) = L$</span></p> <p>of <span class="math-container">$f$</span> at <span class="math-container">$x_0 \in [0,1]$</span>. </p> <p>For any <span class="math-container">$\epsilon \gt 0$</span>, for any <span class="math-container">$\delta \gt 0$</span> such that for all <span class="math-container">$x \in (0,1)$</span> and <span class="math-container">$0 \lt|x-x_0| \lt \delta$</span>, one has <span class="math-container">$|f(x) - L| \lt \epsilon$</span>.</p> <p>This definition is incorrect because for any <span class="math-container">$\epsilon \gt 0$</span> there exists some <span class="math-container">$\delta \gt 0$</span> that is small enough. It can't be any delta. Is this the only reason why this definition is not valid?</p>
user
505,767
<p>Yes, the correct definition requires</p> <p><span class="math-container">$$\forall \epsilon &gt;0 \quad \exists \delta &gt;0 \quad \ldots$$</span></p> <p>and the other part of the definition is correct.</p> <p>Indeed, consider for example <span class="math-container">$f(x)=x$</span> with <span class="math-container">$\lim_{x\to 1} x=1$</span>, take <span class="math-container">$\epsilon =.01$</span> and <span class="math-container">$\delta =.5$</span>, and pick an <span class="math-container">$x$</span> with <span class="math-container">$0&lt;|x-1|&lt;0.5$</span>, say <span class="math-container">$x=1.4$</span>; then we have</p> <p><span class="math-container">$$|f(x)-1|=|1.4-1|=0.4&gt;\epsilon$$</span></p>
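<p>The same counterexample in a few lines of code (my own check): with <span class="math-container">$f(x)=x$</span>, the choice <span class="math-container">$\delta=0.5$</span> admits a point within <span class="math-container">$\delta$</span> of <span class="math-container">$1$</span> whose image is farther than <span class="math-container">$\epsilon$</span> from the limit, so not <em>every</em> <span class="math-container">$\delta$</span> works.</p>

```python
f = lambda x: x
eps, delta = 0.01, 0.5
x = 1.4  # satisfies 0 < |x - 1| < delta

within_delta = 0 < abs(x - 1) < delta
breaks_eps = abs(f(x) - 1) > eps
print(within_delta, breaks_eps)  # True True
```
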
2,848,891
<p>Find the number of solutions of $$\left\{x\right\}+\left\{\frac{1}{x}\right\}=1,$$ where $\left\{\cdot\right\}$ denotes the fractional part of the real number $x$.</p> <h2>My try:</h2> <p>When $x \gt 1$ we get</p> <p>$$\left\{x\right\}+\frac{1}{x}=1$$ $\implies$</p> <p>$$\left\{x\right\}=1-\frac{1}{x}.$$</p> <p>Letting $x=n+f$, where $n \in \mathbb{Z^+}$ and $ 0 \lt f \lt 1$, we get</p> <p>$$f=1-\frac{1}{n+f}.$$</p> <p>By the hint given by $J.G$, I am continuing the solution:</p> <p>we have</p> <p>$$f^2+(n-1)f+1-n=0$$ solving we get</p> <p>$$f=\frac{-(n-1)+\sqrt{(n+3)(n-1)}}{2}$$ $\implies$</p> <p>$$f=\frac{\left(\sqrt{n+3}-\sqrt{n-1}\right)\sqrt{n-1}}{2}$$</p> <p>Now obviously $n \ne 1$, for then we get $f=0$.</p> <p>So $n=2,3,4,5,\ldots$ gives values of $f$ as</p> <p>$\frac{\sqrt{5}-1}{2}$, $\sqrt{3}-1$, and so on, which gives infinitely many solutions.</p>
Community
-1
<p>Let $x:=n+f$. The equation is</p> <p>$$f+\frac1{n+f}=1,$$ </p> <p>giving the solutions in $f$</p> <p>$$f=\frac{\pm\sqrt{(n+1)^2-4}-n+1}2.$$</p> <p>The negative sign cannot work, nor the negative $n$. Then $n\ge1$ is required, but $n=1$ yields $x=1$, which is wrong. Finally,</p> <p>$$f=\frac{\sqrt{(n+1)^2-4}-n+1}2, \forall n&gt;1.$$</p>
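<p>A quick numerical sanity check of this closed form (my own sketch, not part of the answer): for each $n&gt;1$, the resulting $x=n+f$ should satisfy $\{x\}+\{1/x\}=1$.</p>

```python
import math

def frac(t):
    return t - math.floor(t)

# f from the closed form above; x = n + f should satisfy {x} + {1/x} = 1
ok = True
for n in range(2, 50):
    f = (math.sqrt((n + 1)**2 - 4) - n + 1) / 2
    x = n + f
    ok = ok and abs(frac(x) + frac(1 / x) - 1) < 1e-9

print(ok)  # True
```

<p>For $n=2$ this gives $f=\frac{\sqrt5-1}{2}$, matching the first solution listed in the question.</p>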
2,848,891
<p>Find the number of solutions of $$\left\{x\right\}+\left\{\frac{1}{x}\right\}=1,$$ where $\left\{\cdot\right\}$ denotes the fractional part of the real number $x$.</p> <h2>My try:</h2> <p>When $x \gt 1$ we get</p> <p>$$\left\{x\right\}+\frac{1}{x}=1$$ $\implies$</p> <p>$$\left\{x\right\}=1-\frac{1}{x}.$$</p> <p>Letting $x=n+f$, where $n \in \mathbb{Z^+}$ and $ 0 \lt f \lt 1$, we get</p> <p>$$f=1-\frac{1}{n+f}.$$</p> <p>By the hint given by $J.G$, I am continuing the solution:</p> <p>we have</p> <p>$$f^2+(n-1)f+1-n=0$$ solving we get</p> <p>$$f=\frac{-(n-1)+\sqrt{(n+3)(n-1)}}{2}$$ $\implies$</p> <p>$$f=\frac{\left(\sqrt{n+3}-\sqrt{n-1}\right)\sqrt{n-1}}{2}$$</p> <p>Now obviously $n \ne 1$, for then we get $f=0$.</p> <p>So $n=2,3,4,5,\ldots$ gives values of $f$ as</p> <p>$\frac{\sqrt{5}-1}{2}$, $\sqrt{3}-1$, and so on, which gives infinitely many solutions.</p>
fleablood
280,126
<p>It's simpler if you realize that:</p> <p>EDIT (new text): </p> <p>$\{a\} + \{b\} = (a + b) - ([a]+ [b])$ and $([a] + [b])$ is always an integer. So $\{a\} + \{b\}$ is an integer if and only if $a + b$ is an integer.</p> <p>Now $0\le \{a \} &lt; 1$ so $0 \le \{a\} + \{b\} &lt; 2$.</p> <p>So $\{a \} +\{b\} = 0$ if and only if $a, b$ are both integers.</p> <p>And $\{a\} + \{b\} = 1$ if and only if $ a + b$ is an integer but neither $a$ nor $b$ is an integer.</p> <p>Now the only way $x$ and $\frac 1x$ can both be integers is if $x=\pm 1$.</p> <p>So $\{x\} + \{\frac 1x\} = 1$ means nothing more nor less than: $\{x\} + \{\frac 1x\}$ is an integer and $x\ne \pm 1$.</p> <p>OLD: <i>$\{x\} + \{\frac 1x\} = (x + \frac 1x) - ([x] + [\frac 1x])$ and $([x] + [\frac 1x])$ is always an integer. </p> <p>And as $0 \le \{k\} &lt; 1$ and $\frac 1x \ne 0$ and if $\frac 1x$ is defined $x \ne 0$, then it is <em>always</em> the case that $0 &lt; \{x\} + \{\frac 1x\} &lt; 2$ whenever $x \ne 0$.</p> <p>so $\{x\} + \{\frac 1x\} = 1$ means nothing more or less than $x + \frac 1x$ is an integer.</i></p> <p>....</p> <p>So $x + \frac 1x = n$ where $n$ is an integer can be solved by</p> <p>$x^2 + 1 = nx$ and $x \ne 0$ (but as $x = 0\implies x^2 + 1 = 1\ne 0 =nx$ we will not have to worry about that condition.)</p> <p>$x^2 -nx + 1 = 0$</p> <p>So $x = \frac {n \pm \sqrt{n^2 - 4}}{2}$.</p> <p>And those real numbers will exist for any integer $|n| \ge 2$. </p> <p>There are clearly infinitely many such $x = \frac {n \pm \sqrt{n^2 - 4}}{2}; |n| \ge 2$.</p> <p>NEW: However we must omit any $x = \frac {n \pm \sqrt{n^2 - 4}}{2}$ that equals $\pm 1$,</p> <p>i.e.</p> <p>$n \pm 2= \pm\sqrt{n^2 - 4}$</p> <p>$n^2 \pm 4n + 4 = n^2 - 4$</p> <p>$n = \pm 2$.</p> <p>So for the infinite number of integers $n; |n| &gt; 2$ we will have such an $x$.</p> <p>======</p> <p>If we want to convince ourselves: 
</p> <p>Let $x = \frac {-17 +\sqrt{17^2 -4}}2$ then:</p> <p>$17^2 - 4 = 285$ And $15^2 = 225 &lt; 285 &lt; 289 = 17^2$ so </p> <p>$15 &lt;\sqrt{285} &lt; 17$</p> <p>$-2 &lt;-17+\sqrt{285} &lt; 0$</p> <p>$-1 &lt; x = \frac {-17 +\sqrt{285}}2 &lt; 0$ so</p> <p>$\{x\} = \frac {-17 +\sqrt{285}}2 + 1$.</p> <p>And $\frac 1x = \frac 2{-17+\sqrt{17^2 -4}}=$</p> <p>$\frac {2(-17 - \sqrt{285})}{(-17 + \sqrt{285})(-17-\sqrt 285)}=$</p> <p>$\frac {2(-17 - \sqrt{285})}{289-285}$</p> <p>$\frac {-17 - \sqrt{285}}2$</p> <p>So $\frac {-17 - 17}2 &lt; \frac 1x &lt; \frac {-17 - 15}2$</p> <p>So $-17 &lt; \frac 1x &lt; -16$ so</p> <p>$\{\frac 1x\} = \frac 1x + 17 = \frac {-17 - \sqrt{285}}2 + 17$.</p> <p>So $\{x\} + \{\frac 1x\} = \frac {-17 +\sqrt{285}}2 + 1 + \frac {-17 - \sqrt{285}}2 + 17=1$.</p> <p>Ta-da... I guess that is worth doing at least once in a lifetime......</p>
75,795
<p><strong>The problem:</strong></p> <p>If we have</p> <blockquote> <p>$P(H_\eta|E_1,E_2,...,E_e)(1 \leq \eta \leq \mathbb{H})$</p> </blockquote> <p>and</p> <blockquote> <p>$P(E_1,E_2,...,E_e)$</p> </blockquote> <p>for all <strong>True</strong> and <strong>False</strong> values of $E_\epsilon(1 \leq \epsilon \leq e)$ and $H_\eta(1 \leq \eta \leq \mathbb{H})$.</p> <p>Can we find</p> <blockquote> <p>$P(H_h)$, $P(E_\epsilon|H_h) (1 \leq \epsilon \leq e)$ and $P(E_\epsilon) (1 \leq \epsilon \leq e)$</p> </blockquote> <p>?</p>
karmic_mishap
17,529
<p>Yes, the theorem which allows you to calculate $P(H_{h})$ from the given probabilities is Bayes' theorem. The extended form listed on the Wikipedia entry for Bayes' theorem should cover this situation nicely.</p>
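<p>Concretely, $P(H_h)$ follows from the law of total probability, $P(H_h)=\sum_{e} P(H_h\mid e)\,P(e)$, summing over all truth assignments $e$ of $(E_1,\ldots,E_e)$; the other quantities then follow from Bayes' theorem. A toy sketch with made-up numbers (two binary evidence variables; all probability values are hypothetical):</p>

```python
from itertools import product

# Hypothetical inputs for two binary evidence variables E1, E2
p_e = {(True, True): 0.2, (True, False): 0.3,
       (False, True): 0.4, (False, False): 0.1}
p_h_given_e = {(True, True): 0.9, (True, False): 0.5,
               (False, True): 0.3, (False, False): 0.1}

# Law of total probability: P(H) = sum over assignments e of P(H | e) * P(e)
p_h = sum(p_h_given_e[e] * p_e[e] for e in product([True, False], repeat=2))
print(round(p_h, 10))  # 0.46
```

<p>$P(E_1)$ is the analogous sum of $P(e)$ over assignments with $E_1$ true, and $P(E_1\mid H_h)$ then comes from Bayes' theorem.</p>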
23,566
<p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and re-derived the proofs live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p> <p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p> <p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p> <p>Now I'm back at school (master's in statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p> <p>I feel like a tone deaf musician and an ataxic painter at the same time.</p> <p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it in English, but I wouldn't expect it to make such a difference. </p> <p>I know that it will require practice and hard work, but I need direction.</p> <p>Any help is welcome.</p> <p>Kind regards,</p> <p>-- Mathemastov</p>
Carl Offner
2,579
<p>One thing I remember realizing in high school was that I would often see something, or read something, and think to myself, "yes, that makes sense". But then a day later I wouldn't have a clue about it. That was my first introduction to the difference between what I later learned was "active understanding" as opposed to "passive understanding". It's something that is not obvious to anyone without a certain amount of introspection.</p> <p>Another thing I learned (much later, in graduate school) was that if I wanted to learn something, it was a good idea to get at least three books on the subject. I always found that reading just one book, I would inevitably run across something that would stop me dead, completely unable to continue. But if I had one or two other books available, almost always there would be some alternative explanation that made sense to me and made it possible for me to go on.</p>
1,022,950
<p>I was reading about linear dependence between vectors, where I came across the explanation below:</p> <hr> <p>In a rectangular xy-coordinate system every vector in the plane can be expressed in exactly one way as a linear combination of the standard unit vectors. For example, the only way to express the vector (3, 2) as a linear combination of i = (1, 0) and j = (0, 1) is</p> <blockquote> <p>(3, 2) = 3(1, 0) + 2(0, 1) = 3i + 2j ...formula(1)</p> </blockquote> <p>Suppose, however, that we were to introduce a third coordinate axis that makes an angle of 45◦ with the x-axis. The unit vector along the w-axis is</p> <blockquote> <p>w = $(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$</p> </blockquote> <p>Whereas Formula (1) shows the only way to express the vector (3, 2) as a linear combination of i and j, there are infinitely many ways to express this vector as a linear combination of i, j, and w. Three possibilities are</p> <p>(3, 2) = 3(1, 0) + 2(0, 1) + 0$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 3i + 2j + 0w</p> <p>(3, 2) = 2(1, 0) + (0, 1) + $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 2i + j + $\sqrt{2}$w</p> <p>(3, 2) = 4(1, 0) + 3(0, 1) - $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 4i + 3j - $\sqrt{2}$w</p> <hr> <p>What I did not understand is </p> <ul> <li>How these last three expressions of (3,2) are formed; I just did not get any of it. Maybe I am missing some elementary maths.</li> <li>How does introducing another axis allow us to express any vector in <strong>infinitely many ways</strong>, and how do these last three expressions prove that?</li> </ul>
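The quoted combinations are easy to verify numerically, and the check also shows where the infinitude comes from (a Python sketch; the compensation pattern $a = 3 - c/\sqrt{2}$, $b = 2 - c/\sqrt{2}$ is my own illustration of why any choice of the w-coefficient works):

```python
import math

i = (1.0, 0.0)
j = (0.0, 1.0)
w = (1.0 / math.sqrt(2), 1.0 / math.sqrt(2))  # unit vector on the 45-degree axis

def combo(a, b, c):
    """Return a*i + b*j + c*w as a plane vector."""
    return (a * i[0] + b * j[0] + c * w[0], a * i[1] + b * j[1] + c * w[1])

def close(u, v, tol=1e-12):
    return abs(u[0] - v[0]) < tol and abs(u[1] - v[1]) < tol

target = (3.0, 2.0)

# The three combinations from the quoted text:
assert close(combo(3, 2, 0), target)
assert close(combo(2, 1, math.sqrt(2)), target)
assert close(combo(4, 3, -math.sqrt(2)), target)

# Infinitely many: pick the w-coefficient c freely; a and b then compensate.
for c in (-5.0, 0.7, 123.0):
    assert close(combo(3 - c / math.sqrt(2), 2 - c / math.sqrt(2), c), target)
```

Since $c$ ranges over all reals, this gives one expression of (3, 2) per value of $c$ — infinitely many in total.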
peterwhy
89,922
<p>Assuming $n, \epsilon &gt; 0$, $$\begin{align*} \left|\sqrt{\frac{n+1}n}-1\right| &amp;&lt; \epsilon\\ \left|\sqrt{1+\frac1n}-1\right| &amp;&lt; \epsilon\\ \sqrt{1+\frac1n}-1 &amp;&lt; \epsilon\\ \sqrt{1+\frac1n} &amp;&lt; 1 + \epsilon\\ 1+\frac1n &amp;&lt; (1+\epsilon)^2\\ \frac1n &amp;&lt; (1+\epsilon)^2-1\\ n &amp;&gt; \frac1{(1+\epsilon)^2-1}\\ n &amp;&gt; \frac1{2\epsilon + \epsilon^2}\\ \end{align*}$$</p>
1,184,961
<p>I need to prove/show that $n^3 \leq 3^n$ for all natural numbers $n$ by strong induction. I have no clue where to begin!!!! :( I know how to do the beginning steps of showing that it's true for $k = 0$ and $k = 1$, etc., but get stuck on how to start the strong inductive step.</p>
DanielV
97,045
<p>For the strong inductive step, you want to assume:</p> <p>$$k^3 \le 3^k$$</p> <p>and use that to prove</p> <p>$$(k+1)^3 \le 3^{k+1}$$</p> <p>Use the fact that $a &lt; b$ and $b &lt; c$ implies $a &lt; c$.</p>
1,184,961
<p>I need to prove/show that $n^3 \leq 3^n$ for all natural numbers $n$ by strong induction. I have no clue where to begin!!!! :( I know how to do the beginning steps of showing that it's true for $k = 0$ and $k = 1$, etc., but get stuck on how to start the strong inductive step.</p>
Daniel W. Farlow
191,378
<p>We can prove that $n^3 &lt; 3^n$ for all $n\geq 4$ (which is basically the same thing as proving that $n^3 \leq 3^n$ for $n\geq 0$), and we can prove this using <strong>weak</strong> induction (there's no need to use strong induction here).</p> <p>Start by noting that $$ 3n^2+3n+1&lt;2(3^n)\tag{1} $$ is true for $n\geq 4$. One can verify $(1)$ using induction or, more cumbersomely, in a direct fashion. </p> <p><strong>Claim:</strong> For $n\geq 4$, $$ n^3 &lt; 3^n. $$</p> <p><em>Proof.</em> For $n\geq 4$, let $P(n)$ denote the proposition $$ P(n) : n^3 &lt; 3^n. $$</p> <p><strong>Base step ($n=4$):</strong> Since $4^3=64&lt;81=3^4$, the statement $P(4)$ is true.</p> <p><strong>Inductive step:</strong> Suppose that for some fixed $k\geq 4$, $$ P(k) : k^3 &lt; 3^k $$ holds. It must be shown that $$ P(k+1) : (k+1)^3 &lt; 3^{k+1} $$ follows. Starting with the left-hand side of $P(k+1)$, \begin{align} (k+1)^3 &amp;= k^3+3k^2+3k+1\\[0.5em] &amp;&lt; 3^k+3k^2+3k+1\tag{by $P(k)$}\\[0.5em] &amp;&lt; 3^k+2(3^k)\tag{by $(1)$}\\[0.5em] &amp;= 3(3^k)\\[0.5em] &amp;= 3^{k+1}, \end{align} we end up with the right-hand side of $P(k+1)$. Thus, $P(k+1)$ is also true, and this concludes the inductive step $P(k)\to P(k+1)$. </p> <p>Thus, by mathematical induction, $P(n)$ is true for all $n\geq 4$. $\blacksquare$</p>
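Both the main claim and the auxiliary inequality $(1)$ used in the inductive step are easy to brute-force for a range of $n$ (a quick Python sketch):

```python
# Brute-force check of the claim n^3 <= 3^n (equality at n = 3) and of the
# auxiliary inequality (1): 3n^2 + 3n + 1 < 2 * 3^n for n >= 4.
for n in range(0, 60):
    assert n ** 3 <= 3 ** n
for n in range(4, 60):
    assert 3 * n ** 2 + 3 * n + 1 < 2 * 3 ** n
```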
99,572
<p>One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflects back on the combinatorics of the polytopes. This construction requires that the polytope is rational which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes and even polyhedral spheres (and more general objects) where the toric variety construction does not work.</p> <p>I am aware of very general constructions by M. Davis, and T. Januszkiewicz, (one relevant paper might be <a href="http://www.math.osu.edu/%7Edavis.12/old_papers/DJ_toric.dmj.pdf" rel="nofollow noreferrer">Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991)</a> and several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres and perhaps even in greater generality.</p> <p>I ask about an explanation of the scope of these constructions, and, in simple terms as possible, how does the construction go?</p>
Dan Zaffran
2,109
<p>Dear Gil, we have a nonrational construction with F. Battaglia in the case of simplicial fans here: <a href="http://arxiv.org/abs/1108.1637" rel="nofollow">arXiv:1108.1637</a> (D. Zaffran, F. Battaglia, &quot;Foliations modelling nonrational simplicial toric varieties&quot;), where we use foliated compact manifolds instead of toric varieties. In this setting, Stanley's proof of (the necessary part of) the g-conjecture carries over.</p>
120,260
<p>Let $X$ be a simply connected smooth projective variety, whose Picard group is generated by the classes of the irreducible codimension 1 loci $D_1, \ldots, D_k$. Let $E_1, \ldots, E_r$ be other irreducible codimension 1 loci, and suppose that $X^0$ is the complement in $X$ of the divisors $D_i$ and $E_j$.</p> <p>Suppose now that $X^0$ is the complement of $n$ irreducible loci of codimension $1$ in $Y$, a smooth projective variety.</p> <p>Question: Can I conclude that the Picard group of $Y$ has rank $n-r$?</p> <p>I can answer the question affirmatively over $\mathbb{C}$, by using the long exact sequence with compact support associated with the inclusion $Y \setminus X^0 \to Y$, but I would like to know if there is an algebraic proof of this (valid over any algebraically closed field $k$).</p> <p>EDIT: As pointed out in the answer, I am actually assuming that the Picard group of $X$ is FREELY generated by the $D_1, \ldots, D_k$.</p>
alvarezpaiva
21,123
<p>I can't say that what I'll relate is fundamental, but it does fit into the new ideas category. Since I and (my collaborator) Florent Balacheff have given talks on the subject and the paper will be in the ArXiv in a few days I feel free to comment on it. <strong>This post is an annoucement of joint work with Florent Balacheff and Kroum Tzanev.</strong></p> <p>As you comment, the basic result in the geometry of numbers is Minkowski's (first) theorem: <em>If the volume of a <span class="math-container">$0$</span>-symmetric convex body <span class="math-container">$K \subset \mathbb{R}^n$</span> is at least <span class="math-container">$2^n$</span>, then <span class="math-container">$K$</span> contains a non-zero integer point.</em></p> <p>But what happens when the body is not <span class="math-container">$0$</span>-symmetric? It is easy to see that Minkowski's theorem fails completely, but that's because one is not thinking symplectically. By using some Hamiltonian dynamics of the sort Balacheff and I used to study isosystolic inequalities in <a href="https://arxiv.org/abs/1109.4253" rel="nofollow noreferrer">this paper</a>, we guessed that the &quot;right&quot; result should be the following:</p> <p><strong>Conjecture.</strong> If a convex body in <span class="math-container">$\mathbb{R}^n$</span> contains no integer point other than the origin, then the volume of its dual body with respect to the origin is at least (n+1)/n!</p> <p>In other words, one should have a sort of uncertainty principle: if the origin is localized as the unique integer point inside a convex body, the dual body cannot be too small. In fact, its volume is bounded below by <span class="math-container">$(n+1)/n!$</span>. 
Another formulation of the conjecture that seems more elementary goes as follows:</p> <p>If every hyperplane <span class="math-container">$m_1x_1 + \cdots + m_nx_n = 1$</span>, where the <span class="math-container">$m_i$</span> are integers not all equal to zero, intersects a convex body <span class="math-container">$K \subset \mathbb{R}^n$</span>, then the volume of <span class="math-container">$K$</span> is at least <span class="math-container">$(n+1)/n!$</span></p> <p>We proved the conjecture in the case <span class="math-container">$n = 2$</span> and the asymptotic version:</p> <p><strong>Theorem.</strong> There exists a (universal) constant <span class="math-container">$C \leq 1$</span> such that if a convex body <span class="math-container">$K \subset \mathbb{R}^n$</span> contains no integer point other than the origin, then the volume of <span class="math-container">$K^*$</span> is at least <span class="math-container">$C^n(n+1)/n!$</span>.</p> <p>In fact, this result is equivalent to Bourgain-Milman. Moreover, it easily implies the asymptotic version of a conjecture of Ehrhart:</p> <p><strong>Theorem.</strong> There exists a universal constant <span class="math-container">$c \geq 1$</span> such that if <span class="math-container">$K \subset \mathbb{R}^n$</span> is a convex body with barycenter at the origin and containing no other integer point, then the volume of <span class="math-container">$K$</span> is at most <span class="math-container">$c^n (n+1)^n/n!$</span>.</p> <p>However, what is really interesting for us is that at least in the case <span class="math-container">$n=2$</span> the result transcends the geometry of numbers and is really a result in Hamiltonian dynamics.
I just need a definition:</p> <p><strong>Definition.</strong> A hypersurface in the cotangent bundle of a manifold <span class="math-container">$M$</span> is said to be <em>optical</em> if its intersection with every cotangent space is a convex hypersurface enclosing the origin.</p> <p>To an optical hypersurface in the cotangent of a compact manifold we can associate two numbers: the symplectic volume of the region enclosed by <span class="math-container">$\Sigma$</span> and the least action of its periodic characteristics.</p> <p><strong>Theorem.</strong> An optical hypersurface <span class="math-container">$\Sigma$</span> in the cotangent space of the two-torus carries a periodic characteristic whose action is less than or equal to the square root of two-thirds the symplectic volume enclosed by <span class="math-container">$\Sigma$</span>.</p> <p>The inequality is sharp.</p> <p>Finsler geometers will be happier if I translate: <em>If the Holmes-Thompson volume of a (non-reversible) Finsler <span class="math-container">$2$</span>-torus <span class="math-container">$(T^2,F)$</span> is <span class="math-container">$3/2\pi$</span>, then <span class="math-container">$(T^2,F)$</span> carries a (non-contractible) periodic geodesic of length at most <span class="math-container">$1$</span>.</em></p> <p>In other words, this is the (non-reversible) Finsler version of Loewner's systolic inequality. The reversible Finsler version (replace <span class="math-container">$3/2\pi$</span> by <span class="math-container">$2/\pi$</span>) is due to Stéphane Sabourau and can be found <a href="http://jlms.oxfordjournals.org/content/82/3/549.abstract?sid=baf19794-496e-4b85-80ac-9d254af6d2cc" rel="nofollow noreferrer">here</a>.</p>
463,650
<p>Consider the sequence $\left \{ x_{n} \right \}$ that satisfies the condition: $$\left | x_{n+1}-x_{n} \right |&lt; \frac{1}{2^{n}} \ \ \ for\ all\ n=1,2,3,...$$ Part (1): Prove that the sequence $\left \{ x_{n} \right \}$ is convergent.</p> <p>Part (2): Does the result in part (1) hold if we only assume that $\left | x_{n+1}-x_{n} \right |&lt; \frac{1}{n} \ \ \ for\ all\ n=1,2,3,...$?</p> <p>For part (1), I proved that the sequence is Cauchy and hence it is convergent. For part (2), I feel like the sequence is not necessarily convergent. I am trying to come up with a sequence that is divergent, but satisfies the condition given in part (2). Any ideas?</p>
Brian M. Scott
12,042
<p>Let $x_1=1$, say, and let $x_{n+1}=x_n+\frac1{2n}$ for each $n\in\Bbb Z^+$.</p>
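A quick numerical illustration of why this counterexample works (a Python sketch): the gaps satisfy the part (2) hypothesis, yet the partial sums grow like half the harmonic series.

```python
# The proposed sequence: x_1 = 1 and x_{n+1} = x_n + 1/(2n), so that
# x_n = 1 + (1/2) * (1 + 1/2 + ... + 1/(n-1)) grows like (ln n)/2.
def x(n):
    return 1.0 + 0.5 * sum(1.0 / k for k in range(1, n))

# The hypothesis of part (2) holds: |x_{n+1} - x_n| = 1/(2n) < 1/n ...
for n in range(1, 200):
    assert abs(x(n + 1) - x(n)) < 1.0 / n

# ... yet the sequence outgrows any fixed bound (sampled here past 6):
assert x(100_000) > 6.0
```

Since $x_n \to \infty$, the sequence is not Cauchy, so the conclusion of part (1) fails under the weaker $1/n$ hypothesis.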
1,714,654
<p>Show that a box (rectangular parallelepiped) of maximum volume $V$ with prescribed surface area is a cube. Let $$V=xyz$$ $$S=2xy + 2yz + 2zx$$ where $S$ is constant.</p> <p>Using the Lagrange method, I am stuck at $V_{xx}=0=V_{yy}=V_{zz}$ at the (only) critical point. How should I approach this?</p>
RRL
148,510
<p>We have the inequality $z - z^3/6 \leqslant \sin z \leqslant z$ for $z &gt; 0$.</p> <p>Hence, for $x &gt; y$</p> <p>$$1 - \frac{(x^2 - y^2)^2}{6} \leqslant \frac{\sin (x^2 - y^2)}{x^2 - y^2} \leqslant 1,$$</p> <p>and by the squeeze theorem the limit is $1$ as $(x,y) \to (0,0)$ with $x &gt; y.$</p> <p>Similarly, for $y &gt; x$</p> <p>$$1 - \frac{(y^2 - x^2)^2}{6} \leqslant \frac{\sin (y^2 - x^2)}{y^2 - x^2} = \frac{\sin (x^2 - y^2)}{x^2 - y^2} \leqslant 1,$$</p> <p>and by the squeeze theorem the limit is $1$ as $(x,y) \to (0,0)$ with $y &gt; x.$</p> <p>The inequality also shows that as $x \to y$ with $y$ fixed,</p> <p>$$\lim_{x \to y} \frac{\sin (x^2 - y^2)}{x^2 - y^2} = 1,$$</p> <p>and, although the function as written above is undefined for $x = y,$ it can be extended continuously to $1$ on that line.</p> <p>Technically, with $f(x,y) = \sin(x^2 - y^2) /( x^2 - y^2),$ you would say</p> <p>$$\lim_{(x,y) \to (0,0), x \neq y} f(x,y) = 1,$$</p> <p>and $f$ can be continuously extended to a function $\hat{f}$ on $\mathbb{R}^2$ such that</p> <p>$$\lim_{(x,y) \to (0,0)} \hat{f}(x,y) = 1.$$</p>
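The squeeze bound is easy to probe numerically as $(x, y) \to (0, 0)$ (a Python sketch; `f` is the continuous extension described above):

```python
import math

def f(x, y):
    z = x * x - y * y
    return math.sin(z) / z if z != 0 else 1.0  # continuous extension on x = y

# Approach (0, 0) along several directions with x != y; the values stay
# inside the squeeze bounds 1 - z^2/6 <= f <= 1 and tend to 1.
for t in (0.5, 0.1, 0.01, 0.001):
    for (x, y) in ((t, 0.0), (0.0, t), (2 * t, t)):
        z = x * x - y * y
        assert 1 - z * z / 6 <= f(x, y) <= 1

assert abs(f(1e-4, 2e-4) - 1) < 1e-9
```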
438,231
<p>How should I state the general solution for the equation $\sin(4\phi)=\cos(2\phi)$. The angles are $15$, $45$, $75$ and $135$ if I restrict myself within the range $[0,360]$</p>
lab bhattacharjee
33,337
<p>As $\cos2\phi=\sin4\phi=\cos(90^\circ-4\phi)$</p> <p>$\implies 2\phi=n360^\circ\pm(90^\circ-4\phi)$ where $n$ is any integer</p> <p>Taking '+' sign, $2\phi=n360^\circ+90^\circ-4\phi$</p> <p>$\implies 6\phi=n360^\circ+90^\circ \implies \phi=n60^\circ+15^\circ$</p> <p>As $0\le \phi&lt;360^\circ, 0\le n60^\circ+15^\circ&lt;360^\circ\implies 0\le n\le 5$</p> <p>Taking '-' sign, $2\phi=n360^\circ-90^\circ+4\phi$</p> <p>$\implies 2\phi=90^\circ-n360^\circ\implies \phi=45^\circ-n180^\circ$</p> <p>As $0\le \phi&lt;360^\circ, 0\le 45^\circ-n180^\circ&lt;360^\circ \implies -1\le n\le0$</p> <p>So, there are $6+2=8$ solutions in $[0, 360^\circ)$</p>
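A quick check (Python sketch) that the two families give exactly the eight claimed solutions on $[0^\circ, 360^\circ)$:

```python
import math

# The two solution families derived above, restricted to [0, 360):
family_plus = [60 * n + 15 for n in range(0, 6)]  # 15, 75, 135, 195, 255, 315
family_minus = [45 - 180 * n for n in (0, -1)]    # 45, 225
solutions = sorted(family_plus + family_minus)

assert solutions == [15, 45, 75, 135, 195, 225, 255, 315]

# Each really solves sin(4*phi) = cos(2*phi):
for phi in solutions:
    r = math.radians(phi)
    assert abs(math.sin(4 * r) - math.cos(2 * r)) < 1e-12
```

Note that the four angles listed in the question ($15^\circ, 45^\circ, 75^\circ, 135^\circ$) are exactly the solutions below $180^\circ$.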
106,464
<p>I'd like to prove the following:</p> <blockquote> <p>If $\mathfrak{a} \subseteq k[x_0, \ldots, x_n]$ is a homogeneous ideal, and if $f \in k[x_0,\ldots,x_n]$ is a homogeneous polynomial with $\mathrm{deg} \ f &gt; 0$, such that $f(P) = 0 $ for all $P \in Z(\mathfrak{a})$ in $\mathbb P^n$, then $f^q \in \mathfrak{a}$ for some $ q &gt; 0$.</p> </blockquote> <p>I've been given the hint: interpret the problem in terms of the affine ($n+1$)-space whose affine coordinate ring is $k[x_0,\ldots,x_n]$ and use the usual Nullstellensatz. </p> <hr> <p>I'm not really sure what the hint means. We have the isomorphism $k[x_0,...,x_n] \cong k[x_0,...,x_n] / I(\mathbb A_k^{n+1})$ (since $I(\mathbb A^{n+1}) = I(Z(0)) = 0$). But I don't see how this is helpful at all, nor am I sure this is what the hint means.</p> <p>Any help would be greatly appreciated. Thanks</p>
Georges Elencwajg
3,217
<p>Consider $V(\mathfrak a)\subset \mathbb A^{n+1}(k)=k^{n+1}$, the cone in <em>affine</em> $n+1$ space defined by the ideal $\mathfrak a$ .<br> Your polynomial $f$ will vanish on $V(\mathfrak a)$ because that's <em>exactly</em> what it means that it vanishes on $Z(\mathfrak a)$ (see below)<br> The usual Nullstellensatz then implies that $f^q\in \mathfrak a$ for some $q\gt 0 $ . </p> <p><strong>NB</strong> Given a point $P=[a_0:a_1:...:a_n]\in \mathbb P^n (k)$, you cannot define the value of $f$ at $P$ : the expression $f(P)$ doesn't make sense.<br> What makes sense is to say that $f=0 $ on the line $k\cdot (a_0,a_1,...,a_n)\subset k^{n+1}$ .<br> One then writes $f(P)=0$, although $f(P)$ is not defined!<br> So saying that $f$ vanishes on $Z(\mathfrak a)$ really means that $f$ vanishes on $V(\mathfrak a)$ <em>by definition</em>.</p>
322,302
<p>Conjectures play an important role in the development of mathematics. MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p> <p><strong>Question</strong> What are the conjectures in your field that were proved or disproved (a counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question, you are welcome to add some comments for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question, by &quot;recent years&quot; I mean something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say that a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; but on the other hand these need not be considered strict criteria, and let us &quot;assume good will&quot; on the part of the answerer.</p>
Eran
121,912
<p>Ladner's theorem states that there exist <span class="math-container">$\mathsf{NP}$</span>-intermediate problems when <span class="math-container">$\mathsf{P}\neq\mathsf{NP}$</span>. However, the problem constructed in Ladner's proof is rather 'unnatural'. The question arises of whether any 'natural' examples of problems can be <span class="math-container">$\mathsf{NP}$</span>-intermediate.</p> <p>The Dichotomy Conjecture of Feder and Vardi (first stated <a href="https://www.cs.rice.edu/~vardi/papers/stoc93rj.pdf" rel="noreferrer">here</a>) states that, under the assumption that <span class="math-container">$\mathsf{P}\neq\mathsf{NP}$</span>, the computational problems known as constraint satisfaction problems (CSPs for short) are either <span class="math-container">$\mathsf{NP}$</span>-complete or belong to <span class="math-container">$\mathsf{P}$</span>.</p> <p>The consensus in the community (last I knew) is that Dmitriy Zhuk (<a href="https://arxiv.org/abs/1704.01914" rel="noreferrer">https://arxiv.org/abs/1704.01914</a>) and Andrei Bulatov (<a href="https://arxiv.org/abs/1703.03021" rel="noreferrer">https://arxiv.org/abs/1703.03021</a>) have independently proven the conjecture to be true. Their proofs cap a decades long approach of applying universal algebra to the question.</p>
322,302
<p>Conjectures play an important role in the development of mathematics. MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p> <p><strong>Question</strong> What are the conjectures in your field that were proved or disproved (a counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question, you are welcome to add some comments for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question, by &quot;recent years&quot; I mean something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say that a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; but on the other hand these need not be considered strict criteria, and let us &quot;assume good will&quot; on the part of the answerer.</p>
Rodrigo A. Pérez
13,923
<p>Graph theory / Discrete dynamics: In 2007, A. Trahtman <a href="https://arxiv.org/abs/0709.0099" rel="noreferrer">proved</a> the <a href="https://en.m.wikipedia.org/wiki/Road_coloring_theorem" rel="noreferrer">Road Coloring Conjecture</a>, which had been posited 37 years earlier by R. Adler and B. Weiss.</p>
322,302
<p>Conjectures play an important role in the development of mathematics. MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p> <p><strong>Question</strong> What are the conjectures in your field that were proved or disproved (a counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question, you are welcome to add some comments for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question, by &quot;recent years&quot; I mean something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say that a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; but on the other hand these need not be considered strict criteria, and let us &quot;assume good will&quot; on the part of the answerer.</p>
zhoraster
8,146
<p>A remarkable example is the <a href="https://en.wikipedia.org/wiki/Gaussian_correlation_inequality" rel="noreferrer">Gaussian correlation conjecture</a> (which only recently became the Gaussian correlation inequality). The formulation is very simple:</p> <blockquote> <p>For arbitrary centered Gaussian measure, any two convex symmetric sets are positively correlated.</p> </blockquote> <p>It was formulated over 60 years ago (in the above general form, in 1972) and since then had been attacked by many mathematicians. Despite its apparent simplicity, only several partial results had been obtained before its complete proof in 2014. </p> <p>What is remarkable is that the proof was quite simple and came from a retired statistician Thomas Royen, whose previous scientific output was not very noticeable. Moreover, the article was turned down by some scientists. It seems that the true reasons were that the author was not well known, and the article itself did not look serious (you can find its first non-LaTeX version <a href="https://arxiv.org/abs/1408.1028v1" rel="noreferrer">here</a>). Finally, it was <a href="http://www.pphmj.com/abstract/8713.htm" rel="noreferrer">published</a> by some predatory "Far East" journal. Unsurprisingly, it took about two years for the proof to come to the public attention, and for its author to become famous. </p> <p>Unfortunately, the story brings out some unpleasant features of the scientific community: hypocrisy and prejudice. </p> <p>More on the story <a href="https://www.quantamagazine.org/statistician-proves-gaussian-correlation-inequality-20170328" rel="noreferrer">here</a>.</p>
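The inequality itself is easy to probe by Monte Carlo in a low-dimensional case (a Python sketch; the correlation $\rho = 0.6$ and the two symmetric slabs are arbitrary illustrative choices):

```python
import math
import random

# Monte Carlo sanity check of the Gaussian correlation inequality in
# dimension 2, for the symmetric convex sets A = {|x| <= 1}, B = {|y| <= 1}.
random.seed(0)
rho, n = 0.6, 100_000
count_a = count_b = count_ab = 0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = rho * x + math.sqrt(1 - rho * rho) * random.gauss(0.0, 1.0)
    a, b = abs(x) <= 1.0, abs(y) <= 1.0
    count_a += a
    count_b += b
    count_ab += a and b

p_a, p_b, p_ab = count_a / n, count_b / n, count_ab / n
# Positive correlation of the two symmetric convex sets, as the theorem asserts:
assert p_ab >= p_a * p_b
```

For these parameters the margin is substantial ($P(A \cap B)$ is roughly $0.52$ against a product of roughly $0.47$), so sampling noise does not obscure it.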
322,302
<p>Conjectures play an important role in the development of mathematics. MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p> <p><strong>Question</strong> What are the conjectures in your field that were proved or disproved (a counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question, you are welcome to add some comments for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question, by &quot;recent years&quot; I mean something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say that a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; but on the other hand these need not be considered strict criteria, and let us &quot;assume good will&quot; on the part of the answerer.</p>
Orr Shalit
1,193
<p>Here are two examples from operator theory/operator algebras. Both were open problems for more than forty years. The first example is remarkable because it was quite well known, it is so simple to state and was elusive for so much time. The second example is notable because of its importance and because it was open since Arveson's seminal "subalgebras" paper. </p> <ol> <li><p>In 2015 Greg Knese proved that von Neumann's inequality holds for triples of <span class="math-container">$3 \times 3$</span> contractions (this breakthrough followed important work of Lukasz Kosinski). The introduction to <a href="https://arxiv.org/pdf/1508.06488.pdf" rel="noreferrer">Knese's paper</a> explains it all well (I <a href="https://noncommutativeanalysis.com/2015/09/05/one-of-the-most-outrageous-open-problems-in-operatormatrix-theory-is-solved/" rel="noreferrer">blogged about it</a>, in case you want a version with some more superlatives). </p></li> <li><p>In 2013 Davidson and Kennedy proved the existence of "sufficiently many boundary representations" in every operator system, which was an open problem raised by Arveson in 1969. Here is their <a href="https://arxiv.org/pdf/1303.3252.pdf" rel="noreferrer">paper on arxiv</a> (here is a <a href="https://noncommutativeanalysis.com/2013/03/17/forty-five-years-later-a-major-open-problem-in-operator-algebras-is-solved/" rel="noreferrer">blog post</a> I wrote on this, geared towards non-specialists). Davidson and Kennedy's solution came five years after Arveson himself <a href="https://arxiv.org/abs/math/0701329" rel="noreferrer">settled the problem</a> for separable operator systems (following important work of Dritschel and McCullough), which was also an exciting development. </p></li> </ol>
322,302
<p>Conjectures play an important role in the development of mathematics. MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p> <p><strong>Question</strong> What are the conjectures in your field that were proved or disproved (a counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question, you are welcome to add some comments for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question, by &quot;recent years&quot; I mean something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say that a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; but on the other hand these need not be considered strict criteria, and let us &quot;assume good will&quot; on the part of the answerer.</p>
Mozibur Ullah
35,706
<p>The Baez-Dolan cobordism hypothesis (or conjecture), which states that the higher cobordism category is the free symmetric monoidal higher category on a single object, was formalised by Lurie and proven in his 2008 paper classifying topological field theories.</p>
3,686,921
<blockquote> <p>Prove that for all triangles with angles <span class="math-container">$\alpha, \beta, \gamma$</span>, <span class="math-container">$$\frac{\sin\alpha}{\cos\alpha + 1} + \frac{\sin\beta}{\cos\beta + 1} + \frac{\sin\gamma}{\cos\gamma + 1} = \frac{\cos\alpha + \cos\beta + \cos\gamma + 3}{\sin\alpha + \sin\beta + \sin\gamma}$$</span></p> </blockquote> <p>Let <span class="math-container">$\tan\dfrac{\alpha}{2} = a, \tan\dfrac{\beta}{2} = b, \tan\dfrac{\gamma}{2} = c$</span>, we have that <span class="math-container">$$\dfrac{\sin\beta}{\cos\beta + 1} = \dfrac{1}{b}, \cos\beta = \dfrac{1 - b^2}{1 + b^2}, \sin\beta = \dfrac{2b}{1 + b^2}$$</span> and <span class="math-container">$bc + ca + ab = 1$</span>.</p> <p>It needs to be proven that <span class="math-container">$$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{\dfrac{1 - a^2}{1 + a^2} + \dfrac{1 - b^2}{1 + b^2} + \dfrac{1 - c^2}{1 + c^2} + 3}{\dfrac{2a}{1 + a^2} + \dfrac{2b}{1 + b^2} + \dfrac{2c}{1 + c^2}}$$</span></p> <p><span class="math-container">$$\impliedby \frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{\dfrac{1}{1 + a^2} + \dfrac{1}{1 + b^2} + \dfrac{1}{1 + c^2}}{\dfrac{a}{1 + a^2} + \dfrac{b}{1 + b^2} + \dfrac{c}{1 + c^2}}$$</span></p> <p><span class="math-container">$$\impliedby \left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right)\left(\frac{a}{1 + a^2} + \frac{b}{1 + b^2} + \frac{c}{1 + c^2}\right) = \frac{1}{1 + a^2} + \frac{1}{1 + b^2} + \frac{1}{1 + c^2}$$</span></p> <p><span class="math-container">$$\impliedby \left(\frac{a}{b} + \frac{a}{c}\right)\frac{1}{1 + a^2} + \left(\frac{b}{c} + \frac{b}{a}\right)\frac{1}{1 + b^2} + \left(\frac{c}{a} + \frac{c}{b}\right)\frac{1}{1 + c^2} = 0$$</span></p> <p><span class="math-container">$$\impliedby \frac{a(b + c)}{bc(c + a)(a + b)} + \frac{b(c + a)}{ca(a + b)(b + c)} + \frac{c(a + b)}{ab( b + c)(c + a)} = 0$$</span></p> <p><span class="math-container">$$\impliedby \frac{1 - bc}{(1 - ca)(1 - ab)} + \frac{1 - ca}{(1 - ab)(1 - bc)} + \frac{1 - 
ab}{(1 - bc)(1 - ca)} = 0$$</span></p> <p><span class="math-container">$$\impliedby (1 - bc)^2 + (1 - ca)^2 + (1 - ab)^2 = 0$$</span></p> <p><span class="math-container">$$\impliedby bc = ca = ab = 1 \impliedby bc + ca + ab = 3,$$</span> which is definitely incorrect.</p> <p>I've surmised that the correct equality is <span class="math-container">$$\frac{\sin\alpha}{\cos\alpha + 1} + \frac{\sin\beta}{\cos\beta + 1} + \frac{\sin\gamma}{\cos\gamma + 1} = \frac{\cos\alpha + \cos\beta + \cos\gamma + 1}{\sin\alpha + \sin\beta + \sin\gamma},$$</span> but then I wouldn't know what to do first.</p>
lab bhattacharjee
33,337
<p>For <span class="math-container">$1+\cos\alpha\ne0,$</span> <span class="math-container">$$\dfrac{\sin\alpha}{1+\cos\alpha}=\cdots=\tan\dfrac\alpha2$$</span></p> <p>Now, <span class="math-container">$$\tan\dfrac\alpha2+\tan\dfrac\beta2+\tan\dfrac\gamma2$$</span></p> <p><span class="math-container">$$=\dfrac{\sin\left(\dfrac\alpha2+\dfrac\beta2\right)}{\cos\dfrac\alpha2\cos\dfrac\beta2}+\dfrac{\sin\dfrac\gamma2}{\cos\dfrac\gamma2}$$</span></p> <p><span class="math-container">$$=\dfrac{\cos\dfrac\gamma2}{\cos\dfrac\alpha2\cos\dfrac\beta2}+\dfrac{\sin\dfrac\gamma2}{\cos\dfrac\gamma2}\text{ using }\alpha+\beta=\pi-\gamma$$</span></p> <p><span class="math-container">$$=\dfrac{\cos^2\dfrac\gamma2+\sin\dfrac\gamma2\cos\dfrac\alpha2\cos\dfrac\beta2}{\cos\dfrac\alpha2\cos\dfrac\beta2\cos\dfrac\gamma2}$$</span></p> <p>Now the numerator</p> <p><span class="math-container">$$= 1-\sin^2\dfrac\gamma2+\sin\dfrac\gamma2\cos\dfrac\alpha2\cos\dfrac\beta2 $$</span></p> <p><span class="math-container">$$= 1-\sin\dfrac\gamma2\left(\sin\dfrac\gamma2-\cos\dfrac\alpha2\cos\dfrac\beta2\right) $$</span></p> <p><span class="math-container">$$= 1-\sin\dfrac\gamma2\left(\cos\left(\dfrac\alpha2+\dfrac\beta2\right) -\cos\dfrac\alpha2\cos\dfrac\beta2\right) $$</span></p> <p><span class="math-container">$$= 1+\sin\dfrac\alpha2\sin\dfrac\beta2\sin\dfrac\gamma2$$</span></p> <p>Now use <a href="https://math.stackexchange.com/questions/176892/prove-trigonometry-identity-for-cos-a-cos-b-cos-c">Prove trigonometry identity for $\cos A+\cos B+\cos C$</a> </p> <p>and <a href="https://math.stackexchange.com/questions/608307/if-a-b-c-pi-then-show-that-sina-sinb-sinc-4-cos-fraca">If $A + B + C = \pi$, then show that $\sin(A) + \sin(B) + \sin(C) = 4\cos\frac{A}{2}\cos\frac{B}{2}\cos\frac{C}{2}$</a></p>
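For readers who, like the asker, came to doubt the version with the $+3$ numerator: a quick floating-point check (a Python sketch) confirms that identity on sample triangles.

```python
import math

# Check sin(a)/(cos(a)+1) + ... == (cos(a)+cos(b)+cos(c)+3)/(sin(a)+sin(b)+sin(c))
# for several genuine triangles (angles summing to pi).
def lhs(a, b, c):
    return sum(math.sin(t) / (math.cos(t) + 1) for t in (a, b, c))

def rhs(a, b, c):
    return (math.cos(a) + math.cos(b) + math.cos(c) + 3) / (
        math.sin(a) + math.sin(b) + math.sin(c))

triangles = [(math.pi / 3, math.pi / 3, math.pi / 3),
             (math.pi / 2, math.pi / 4, math.pi / 4),
             (0.3, 1.1, math.pi - 1.4)]
for a, b, c in triangles:
    assert abs((a + b + c) - math.pi) < 1e-12  # genuine triangle angles
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-12
```

(For the equilateral triangle both sides equal $\sqrt 3$, matching the proof above.)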
3,882,566
<p>We have <span class="math-container">$0&lt;b≤ a$</span>, and:</p> <p><span class="math-container">$$\underbrace{\dfrac{1+⋯+a^7+a^8}{1+⋯+a^8+a^9}}_{A} \quad \text{and} \quad \underbrace{\dfrac{1+⋯+b^7+b^8}{1+⋯+b^8+b^9}}_{B}$$</span></p> <p>Source: Lumbreras Editors</p> <hr /> <p>This was my strategy:</p> <p><span class="math-container">$1 ≤ \dfrac{1+⋯+a^8}{1+⋯+b^8}$</span> since <span class="math-container">$a^p ≥ b^p$</span>, <span class="math-container">$∀\, p ≥ 0 $</span> (monotonicity).</p> <p>We also have that <span class="math-container">$ \dfrac{b}{a} ≤ 1 $</span></p>
Michael Rozenberg
190,319
<p><span class="math-container">$$1-A=1-\frac{1+a+...+a^8}{1+a+...+a^9}=\frac{a^9}{1+a+...+a^9}=$$</span> <span class="math-container">$$=\frac{1}{\frac{1}{a^9}+\frac{1}{a^8}+...+1}\geq\frac{1}{\frac{1}{b^9}+\frac{1}{b^8}+...+1}=1-B,$$</span> which gives <span class="math-container">$$A\leq B.$$</span></p>
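A quick numerical spot-check of <span class="math-container">$A \leq B$</span> for random <span class="math-container">$0 &lt; b \leq a$</span> (the sampling bounds are arbitrary):

```python
import random

# A = (1 + a + ... + a^8) / (1 + a + ... + a^9), and similarly B with b.
def ratio(x):
    num = sum(x**k for k in range(9))    # 1 + x + ... + x^8
    den = sum(x**k for k in range(10))   # 1 + x + ... + x^9
    return num / den

random.seed(1)
for _ in range(1000):
    b = random.uniform(0.01, 5.0)
    a = random.uniform(b, 5.0)           # guarantees 0 < b <= a
    assert ratio(a) <= ratio(b) + 1e-12  # A <= B
```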
3,536,822
<p>A man has three bags filled with balls. One bag contains balls weighing <span class="math-container">$9$</span> grams, the second bag contains balls weighing <span class="math-container">$10$</span> grams and the third bag contains balls weighing <span class="math-container">$11$</span> grams. The man got confused and does not know which bag contains which balls. But in each bag all the balls weigh the same. He has an old-fashioned scale that is about to break. This means that he can only weigh once with it. How does the man find out which ball weighs what?</p>
lemontree
344,246
<p>Your basic idea is correct, but you are missing an important bit: There is an implicit universal quantification in the definitions:</p> <blockquote> <p>Soundness: <em>For all</em> formulas <span class="math-container">$A, B$</span>, if <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \Rightarrow B$</span><br> Completeness: <em>For all</em> formulas <span class="math-container">$A, B$</span>, if <span class="math-container">$A \Rightarrow B$</span>, then <span class="math-container">$A \vdash B$</span></p> </blockquote> <p>So when a proof system is not sound, what is negated is the universality of the implication:</p> <blockquote> <p>Unsoundness: <em>Not for all</em> formulas <span class="math-container">$A, B$</span>, if <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \Rightarrow B$</span><br> Incompleteness: <em>Not for all</em> formulas <span class="math-container">$A, B$</span>, if <span class="math-container">$A \Rightarrow B$</span>, then <span class="math-container">$A \vdash B$</span></p> </blockquote> <p>This is in turn equivalent to an existential negated statement:</p> <blockquote> <p>Unsoundness: <em>There exist</em> formulas <span class="math-container">$A, B$</span> <em>such that not</em> if <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \Rightarrow B$</span><br> Incompleteness: <em>There exist</em> formulas <span class="math-container">$A, B$</span> <em>such that not</em> if <span class="math-container">$A \Rightarrow B$</span>, then <span class="math-container">$A \vdash B$</span></p> </blockquote> <p>And this is in turn equivalent to the following: </p> <blockquote> <p>Unsoundness: <em>There exist</em> formulas <span class="math-container">$A, B$</span> such that <span class="math-container">$A \vdash B$</span> <em>but not</em> <span class="math-container">$A \Rightarrow B$</span><br> Incompleteness: <em>There exist</em> formulas <span class="math-container">$A, B$</span> such that <span class="math-container">$A \Rightarrow B$</span> <em>but not</em> <span class="math-container">$A \vdash B$</span></p> </blockquote> <p><strong>So if a proof system is unsound, then <em>some</em> of the proofs it produces are not semantically valid. It doesn't have to be per se that <em>everything</em> it proves is nonsense.</strong><br> This may be even more convincing on the incompleteness side: The word "incomplete" just means that <em>some</em> sequents are missing from the proof system; it doesn't have to be that it fails to prove all sequents whatsoever.</p> <p><strong>If a proof system is unsound and complete, then all the semantically valid inferences can be proven, but in addition it proves some sequents that are not actually valid.</strong> </p> <p>Edit (changing my last paragraph thanks to Malice Vendrine's comment):<br> Note also that those provable but invalid sequents do not have to be contradictory: It might just be that they are not true in <em>all</em> structures. For example, a system that proves <span class="math-container">$\vdash A \lor B \to A$</span> would be unsound, because this inference is not universally valid. But neither is its negation (there may well be structures in which the formula is satisfied, for example, in any structure in which <span class="math-container">$A$</span> is true).<br> So proving non-valid formulas does not immediately lead to an inconsistency. Only if we can prove the negation of a formula that is valid (and hence, by completeness, also provable) or, vice versa, a formula whose negation is valid does unsoundness in combination with completeness make the system inconsistent.</p>
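The example in the last paragraph can be checked by truth table; here is a short Python sketch treating <span class="math-container">$A \lor B \to A$</span> as a propositional formula (reading <span class="math-container">$P \to Q$</span> as <span class="math-container">$\lnot P \lor Q$</span>):

```python
from itertools import product

# Evaluate (A or B) -> A on all four truth assignments.
rows = [((not (a or b)) or a) for a, b in product((False, True), repeat=2)]

# Satisfiable (true somewhere) yet not valid (false somewhere), so a
# system proving it is unsound but not thereby inconsistent.
assert any(rows) and not all(rows)
```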
2,855,411
<p>Find all real number(s) $x$ satisfying the equation $\{(x +1)^3\}$ = $x^3$ , where $\{y\}$ denotes the fractional part of $y$ , for example $\{3.1416\ldots\}=0.1416\ldots$.</p> <p>I am trying all positive real numbers from $1,2,\dots$ but I didn't get any decimals.</p> <p>Is there a smarter way to solve this problem? ... Please advise.</p>
Henry
6,460
<p>Hints:</p> <ul> <li>$0 \le \{y\} \lt 1$</li> <li>so any solution to $\{(x +1)^3\} = x^3$ has $0 \le x^3 \lt 1$ and thus $0 \le x \lt 1$</li> <li>so $1 \le x+1 \lt 2$ and $1 \le (x+1)^3 \lt 8$</li> <li>any solution has $(x+1)^3 = x^3 +n$ for $n \in \{1,2,3,4,5,6,7\}$, which gives you seven quadratic equations to check </li> <li>for example, $x=0$ is a solution when $n=1$ </li> </ul>
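Following these hints, a short Python sketch that solves the seven quadratics <span class="math-container">$3x^2+3x+(1-n)=0$</span> and keeps the roots in <span class="math-container">$[0,1)$</span> (the tolerance is an arbitrary choice):

```python
import math

# Any solution has 0 <= x < 1 and (x+1)^3 = x^3 + n for n in 1..7,
# i.e. 3x^2 + 3x + (1 - n) = 0, whose nonnegative root is below.
solutions = []
for n in range(1, 8):
    disc = 12 * n - 3                 # discriminant 9 - 12(1 - n)
    x = (-3 + math.sqrt(disc)) / 6    # the root that can lie in [0, 1)
    if 0 <= x < 1:
        # verify {(x+1)^3} == x^3 up to floating error
        assert abs((x + 1) ** 3 % 1.0 - x ** 3) < 1e-9, (n, x)
        solutions.append(x)

print(solutions)   # six values: x = (sqrt(12n - 3) - 3)/6 for n = 1..6
```

The case <span class="math-container">$n=7$</span> gives <span class="math-container">$x=1$</span>, which falls outside <span class="math-container">$[0,1)$</span>, so exactly six solutions remain.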
131,294
<p>How do I show that $ f(t) = t^2 + t +1 $ is irreducible in $K[t]$, where $K = \{0,1\}$?</p> <p>I know how to tackle this over $\mathbb{Z}$ or $\mathbb{Q}$ using Gauss or Eisenstein, say... but I'm a little unsure how to proceed in this case.</p> <p>Any help is much appreciated.</p>
Belgi
21,335
<p>As said by a couple of users, this polynomial does not have a root in $\mathbb{F}_{2}$ and since it is of degree $2$, it is irreducible.</p> <p>You ask if $f$ can have a quadratic factor or a factor of higher degree; assume that such a factor $g$ exists, i.e. $f=gh$ for some polynomial $h\neq0$. </p> <p>Then $\deg(f)=\deg(g)+\deg(h)$ implies $\deg(g)\leq \deg(f)=2$, and by our assumption $\deg(g)\geq2$, so $f$ cannot have a factor of higher degree. So we have $f=gh$ and since $\deg(f)=\deg(g)=2$, it holds that $\deg(h)=0$, i.e. $h\in\mathbb{F}_{2}$ is a constant polynomial. Since $h\neq0$ (otherwise $f=gh=0$) we have $h=1$, hence $f=g$. </p> <p>That is, you cannot decompose $f$ (not into a linear factor as you said, nor into a quadratic or higher-degree factor, by this explanation) </p>
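For such a small field this can also be checked exhaustively; a short Python sketch (brute force over roots and over the four monic linear pairs):

```python
from itertools import product

# f(t) = t^2 + t + 1 over F2
f = lambda t: (t * t + t + 1) % 2

# No root in F2, so no linear factor with a root, ...
assert all(f(t) == 1 for t in (0, 1))

# ... and no factorization (t + a)(t + b) = t^2 + (a+b)t + ab either:
# we would need a + b = 1 and ab = 1 (mod 2), which is impossible.
for a, b in product((0, 1), repeat=2):
    assert not ((a + b) % 2 == 1 and (a * b) % 2 == 1), "factored!"
```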
1,321,233
<p>German Wikipedia states that Ramsey's theorem is a generalization of the Pigeonhole principle <a href="http://de.wikipedia.org/wiki/Satz_von_Ramsey" rel="noreferrer" title="source">source</a></p> <p>However, it does not say why this is true. I am doing a presentation about Ramsey theory and also want to explain why this is true, but as I am not really a mathematician myself, I can't figure it out by myself.</p> <p>So my question is: </p> <p><em>What is an easy explanation for the fact that Ramsey's theorem is a generalization of the Pigeonhole principle?</em></p> <p>Thanks for any help in advance</p>
Hagen von Eitzen
39,174
<p>The pigeonhole principle states that for a given number of pigeons, if there are enough holes then at least one empty hole is guaranteed to exist.</p> <p>Ramsey's theorem states that if there are enough vertices then at least one thingy (e.g., a red or blue triangle) is guaranteed to exist.</p>
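For the red/blue triangle example, "enough vertices" means <span class="math-container">$R(3,3)=6$</span>, and this threshold is small enough to verify by brute force in Python (<span class="math-container">$K_6$</span> has only <span class="math-container">$2^{15}$</span> edge 2-colorings):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each sorted edge (i, j) to color 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

assert every_coloring_has_mono_triangle(6)       # R(3,3) <= 6
assert not every_coloring_has_mono_triangle(5)   # the C5 coloring escapes
```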
3,299,661
<p>I am familiar with all 3 of the entities I have listed in my question. I know the definitions of "reflexive", "symmetric", and "transitive". However, I am afraid I do not mechanistically understand the "flow" of how we ultimately generate equivalence classes from a particular relation that exhibits the 3 properties of equivalence.</p> <p>To help illustrate my confusion, consider the following example:</p> <p><span class="math-container">$S=\{1,2,3,4,5,6\}$</span></p> <p>Let <span class="math-container">$R_1$</span> be a relation on <span class="math-container">$S$</span> such that <span class="math-container">$x-y$</span> is divisible by <span class="math-container">$3$</span></p> <p>So, firstly, from what I understand about relations, I am going to find all of the order pairs that satisfy this (these ordered pairs are a subset of the cartesian product <span class="math-container">$S$</span> x <span class="math-container">$S$</span>). </p> <p><span class="math-container">$R_1 = \{(1,4) (2,5) (3,6) (4,1) (5, 2) (6, 3)(1,1)(2,2)(3,3)(4,4)(5,5)(6,6)\}$</span></p> <p>Ok cool. These are all of the ordered pairs that "satisfy" or "make the <span class="math-container">$R_1$</span> relation true".</p> <p>For this given relation, I can observe the following:</p> <p><strong>1 -</strong> The reflexive property is satisfied because of the presence of <span class="math-container">$(1,1), (2,2),\ etc$</span></p> <p><strong>1st question</strong>: If, for example (6,6) was not in this set, <span class="math-container">$R_1$</span> could not be deemed reflexive because <span class="math-container">$(3,6)$</span> and <span class="math-container">$(6,3)$</span> are present, correct? (i.e. 
because the element "6" shows up in an ordered pair, <span class="math-container">$(6,6)$</span> MUST show up as well in order to declare this relation reflexive)</p> <p><strong>2 -</strong> The symmetric property is satisfied because of the presence of <span class="math-container">$(1,4) \&amp; (4,1)$</span>, <span class="math-container">$(2,5)\&amp;(5,2),\ etc$</span></p> <p><strong>3 -</strong> The transitive property is satisfied because...</p> <p><strong>2nd question</strong>: I actually do not immediately see why the transitive property is satisfied (I believe that the transitive property <em>should be</em> satisfied because the "congruence modulo n" relation is an equivalence relation...and I'm fairly certain that the relation <span class="math-container">$R_1$</span> that I described is of that form). Is it just because my set is too small to see the transitive property in its stereotypical form?</p> <p>So, assuming that this relation IS an equivalence relation (I believe that it is...for the reason mentioned above), I really do not understand how we go from this single set of ordered pairs to equivalence classes. From example videos I have seen, I know that a set of integers mod 3 will create three equivalence classes...namely, the integers with remainder 0, 1, and 2 when divided by 3. </p> <p><strong>3rd question</strong>: However, I do not really understand, mechanistically, how we "separate" these ordered pairs. All of the ordered pairs are initially grouped together. How do we decide, from this initial <span class="math-container">$R_1$</span> set, which ordered pairs belong to which equivalence class? Obviously, if you know how mod 3 works, you could sort of intuit that 1 and 4 go together because </p> <p><span class="math-container">$1 \bmod 3 = 4 \bmod 3$</span></p> <p>...however, if I knew nothing about how <span class="math-container">$\bmod\ 3$</span> worked, how would I know how to make the appropriate partitions? </p>
Ruben
386,073
<p>If you'd extend your set just a bit you'd see transitivity happen. Consider the set <span class="math-container">$\{1,2,3,4,5,6,7 \}$</span> for example. Now your <span class="math-container">$R_1$</span> contains <span class="math-container">$(1,4), (4,7)$</span> and <span class="math-container">$(1,7)$</span>.</p> <p>Once you know you have an equivalence relation you can just choose one representative, for example <span class="math-container">$1$</span>, to represent the pairs <span class="math-container">$(1,4), (1,7)$</span> and also <span class="math-container">$(4,7)$</span> by transitivity.</p>
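Mechanically, the equivalence class of <span class="math-container">$x$</span> can be read straight off the set of ordered pairs, with no knowledge of mod 3 required: it is simply the set of all <span class="math-container">$y$</span> with <span class="math-container">$(x,y)$</span> in the relation. A short Python sketch on the extended set <span class="math-container">$\{1,\dots,7\}$</span>:

```python
# Build the relation from its defining condition, then partition S
# purely by reading off the pairs.
S = range(1, 8)   # {1, ..., 7}, extended as above so transitivity is visible
R = {(x, y) for x in S for y in S if (x - y) % 3 == 0}

# The class of x is {y : (x, y) in R}; distinct classes form the partition.
classes = {frozenset(y for y in S if (x, y) in R) for x in S}
print(sorted(sorted(c) for c in classes))
# [[1, 4, 7], [2, 5], [3, 6]]
```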
2,094,123
<p>A plane curve is printed on a piece of paper with the directions of both axes specified. How can I (roughly) verify if the curve is of the form $y=a e^{bx}+c$ without fitting or doing any quantitative calculation?</p> <p>For example, for linear curves, I can choose two points on the curve and check if the midpoint is also on the curve. For parabolas, I can examine the geometric relationship between the tangent at a point and the secant connecting the peak and that point. Does the exponential curve have any similar geometric features that I can take advantage of?</p>
Community
-1
<p>Assuming $c=0$ (it's not the graph of an exponential function otherwise):</p> <p>Pick a point on the curve. Draw its tangent and extend it until it meets the $x$-axis. Also drop a vertical from the point to the $x$-axis. Now you have a right triangle.</p> <p>Do this for lots of points. The bases of all the triangles should have the same length.</p>
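The test works because for <span class="math-container">$y=ae^{bx}$</span> the base of each triangle (the subtangent) equals <span class="math-container">$y/y' = 1/b$</span>, independent of the point. A quick numeric check in Python (the values of <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and the sample points are arbitrary):

```python
import math

# Subtangent of y = a*e^(bx): horizontal distance from the foot of the
# vertical to the tangent's x-intercept, which is y(x)/y'(x) = 1/b.
a, b = 3.0, 0.7
y  = lambda x: a * math.exp(b * x)
dy = lambda x: a * b * math.exp(b * x)

subtangents = [y(x) / dy(x) for x in (-2.0, -0.5, 0.0, 1.3, 4.0)]
assert all(abs(s - 1 / b) < 1e-12 for s in subtangents)
```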
1,802,515
<blockquote> <p>Say you have a bank account in which your invested money yields 3% every year, continuously compounded. Also, you have estimated that you spend $1000 every month to pay your bills, that are withdrawn from this account.</p> <p>Create a differential model for that, find its equilibria and determine their stability.</p> </blockquote> <p>My problem here is that the \$1000 withdrawal is not continuous in time, it's discrete. The best I could achieve is, if <span class="math-container">$S(t)$</span> is the current balance: <span class="math-container">$\dot S (t) = 0.0025S(t) - 1000$</span>. I'm using <span class="math-container">$0.0025$</span> as the interest rate because it yields 3% every year, so it should yield 0.25% every month. But I'm pretty confident that it's wrong. Any help would be highly appreciated! Thanks!</p>
Doug M
317,162
<p>The balance must be sufficient to generate $1,000 / month in cash flow:</p> <p>$B(e^{0.0025}-1) = 1000\\ B = \dfrac{1000}{e^{0.0025}-1} = \$399,500$</p>
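Reproducing the arithmetic in Python (the monthly continuously-compounded rate 0.25% comes from the 3% annual rate, as in the question):

```python
import math

# Equilibrium balance: interest earned over one month must equal $1000.
# B * (e^0.0025 - 1) = 1000
B = 1000 / (math.exp(0.0025) - 1)
print(round(B))   # 399500
```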
1,615,177
<p>The primes $p$ are, of course, in one-to-one correspondence with the squares of primes $p^2$. But is there any interval $a &lt; x &lt; b$ possible where the primes thin out so much, that it contains more squares of primes than primes?</p>
Tito Piezas III
4,781
<p>I think the intent of your question was that the number of primes in the interval, call it $\pi(n)'$, is <strong><em>non-zero</em></strong>. If so, then the simplest case is,</p> <p>$$p_n^2\leq x \leq p_{n+1}^2$$</p> <p>for prime $p_i$. Your question then assumes the count as $\pi(n)'&lt;2.$ However,</p> <ol> <li>A consequence of <em><a href="https://en.wikipedia.org/wiki/Legendre%27s_conjecture" rel="nofollow">Legendre's conjecture</a></em> is that the number of primes in that interval is <strong><em>at least 2</em></strong>.</li> <li>More strongly, if <em><a href="https://en.wikipedia.org/wiki/Brocard%27s_conjecture" rel="nofollow">Brocard's conjecture</a></em> is true then, for $n&gt;1$, there are <strong><em>at least 4</em></strong>.</li> </ol> <p>As the count $\pi(n)' =2, 5, 6, 15, 9, 22, 11, 27, 47,\dots$ (<a href="http://oeis.org/A050216" rel="nofollow">A050216</a>) goes up fast, it is highly doubtful that Brocard's conjecture is false.</p>
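The quoted counts <span class="math-container">$\pi(n)'$</span> of primes strictly between consecutive prime squares (A050216) can be reproduced with a small sieve; the limit 1000 below is an arbitrary choice that covers the first several terms:

```python
def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(range(i * i, n + 1, i))
    return [i for i in range(n + 1) if sieve[i]]

ps = primes_up_to(1000)
counts = []
for p, q in zip(ps, ps[1:]):
    if q * q > 1000:          # stop once the upper square exceeds the sieve
        break
    counts.append(sum(1 for r in ps if p * p < r < q * q))

print(counts[:8])   # [2, 5, 6, 15, 9, 22, 11, 27]
```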
129,875
<p>The Fourier transform of the Heaviside step function $u(t)$ <a href="http://fourier.eng.hmc.edu/e101/lectures/handout3/node3.html" rel="nofollow noreferrer">is</a> $\dfrac{1}{iω} + π δ(ω)$.<br> The Laplace transform of the same function <a href="http://leevaraiya.org/releases/LeeVaraiya_DigitalV2_02.pdf#page=569" rel="nofollow noreferrer">is</a> $\dfrac{1}{s}$. (<strong>Edit:</strong> This was my mistake, see <a href="/a/2728442/4890">my answer</a>.)</p> <p>I remember the proof <a href="http://211.71.86.13/web/jp/05sb/xhyxt/ckwx/lec17.pdf" rel="nofollow noreferrer">came from derivatives and signums</a>, and I'm <strong>not</strong> interested in the proof.<br> Rather, I want to understand <em>why</em> they <em>should</em> be different a bit more, shall we say, <em>intuitively</em>.</p> <p>I mean, the Laplace transform of $x(t)$ is just $$\mathcal{L}(x)(s) = \int_{-∞}^∞ e^{-st}x(t)\,dt$$ whereas the Fourier transform of $x(t)$ is just $$\mathcal{F}(x)(ω) = \int_{-∞}^∞ e^{-iωt}x(t)\,dt$$ so it's pretty obvious they <strong>only</strong> differ by the dummy variable name. So if we substitute $s = iω$, then they <em>should</em> turn out to be the same... and yet the result for the Fourier transform contains an extra Dirac delta.</p> <p>Could someone please explain why there is such a discrepancy more or less intuitively (rather than just presenting another mathematical proof)?</p>
Julián Aguirre
4,791
<p>The integral defining $\mathcal{L}(x)$ converges for all $s&gt;0$ (or, more generally, for all $s\in\mathbb{C}$ with $\operatorname{Re}(s)&gt;0$.) However, the integral defining $\mathcal{F}(x)$ does not converge for any $\omega\in\mathbb{R}$. As noted in the comments, $\mathcal{F}(x)$ is not a function, but a distribution; it is defined not as an integral, but through a different process (duality.)</p> <p>Another way of seeing this is that $\mathcal{L}(x)(i\,\omega)$ is not defined for any $\omega\in\mathbb{R}$.</p>
4,215
<p>I suspect it is impossible to split a (any) 3d solid into two, such that each of the pieces is identical in shape (but not volume) to the original. How can I prove this?</p>
David E Speyer
448
<p>You can certainly take a rectangular box, $2^{1/3} \times 2^{2/3} \times 2$ and slice it into two boxes of size $1 \times 2^{1/3} \times 2^{2/3}$.</p>
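A quick Python check of the arithmetic: halving the <span class="math-container">$2^{1/3} \times 2^{2/3} \times 2$</span> box along its longest side gives a box similar to the original, with every side ratio equal to <span class="math-container">$2^{1/3}$</span>:

```python
# Compare sorted side lengths of the original box and one half.
c = 2 ** (1 / 3)
original = sorted([c, c * c, 2])   # [2^(1/3), 2^(2/3), 2]
half     = sorted([1, c, c * c])   # [1, 2^(1/3), 2^(2/3)]

ratios = [o / h for o, h in zip(original, half)]
assert all(abs(r - c) < 1e-12 for r in ratios)   # similar, scale 2^(1/3)
```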
433,403
<ol> <li>Let F(x,y) be the statement, “x can fool y,” where the domain consists of all of the people in the world. Translate this statement into symbolic logic. a. Everyone can be fooled by somebody.</li> </ol> <p>Would it be: For every x.y in W, F(x,y) is in W?</p> <p>I am not getting the gist of this...</p>
Austin Mohr
11,245
<p>I find it helpful to write an intermediate step that is halfway between English and symbolic logic.</p> <p>"Everyone can be fooled by somebody" is the same as "For every person $x$, there exists a person $y$ such that $y$ can fool $x$". If we replace all the English parts with symbols, this becomes "$\forall x \exists y, F(y,x)$". If you like, you could explicitly include the domain and get "$\forall x \in W, \exists y \in W, F(y,x)$", but I think it is clear enough without this.</p>
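The final translation <span class="math-container">$\forall x \exists y, F(y,x)$</span> can be sanity-checked by brute force on a toy domain; the names and the "fools" relation below are made up purely for illustration:

```python
# F(y, x) means "y can fool x"; pairs below are (y, x).
W = ["alice", "bob", "carol"]
fools = {("alice", "bob"), ("bob", "carol"), ("carol", "alice")}

# "forall x exists y, F(y, x)": every x has at least one fooler y.
everyone_can_be_fooled = all(any((y, x) in fools for y in W) for x in W)
print(everyone_can_be_fooled)   # True: each person appears as a second coordinate
```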
354,250
<p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p> <h2>Motivation:</h2> <p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the question that human brains might do reverse-mode automatic differentiation, or what some call backpropagation [7].</p> <p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p> <h2>Problem definition:</h2> <p>Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think that this may be a problem in reverse mathematics [6]. 
But, in order to define equivalence a couple definitions are required:</p> <p><strong>Partial Derivative as a linear map:</strong></p> <p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p> <p><span class="math-container">\begin{equation} \frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1} \end{equation}</span></p> <p>This is the general setting of numerical differentiation [3].</p> <p><strong>Partial Derivative as an operator:</strong></p> <p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p> <p><span class="math-container">\begin{equation} \nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2} \end{equation}</span></p> <p><span class="math-container">\begin{equation} \nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. 
\tag{3} \end{equation}</span></p> <p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p> <h2>The special case of classical mechanics:</h2> <p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p> <p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p> <h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2> <p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p> <ol> <li>Everything is formulated in terms of linear operators.</li> <li>All problems can then be recast in an algebraic language.</li> </ol> <p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. 
That said, I just recently came across this formulation so there may be important subtleties I ignore.</p> <h2>Related problems:</h2> <p>Furthermore, I think it may be worth considering the following related questions:</p> <ol> <li>What would be left of mathematical physics if we could not compute partial derivatives?</li> <li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li> <li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li> </ol> <h2>A historical note:</h2> <p>It is worth noting that more than 1000 years ago as a result of his profound studies on optics the mathematician and physicist Ibn al-Haytham(aka Alhazen) reached the following insight:</p> <blockquote> <p>Nothing of what is visible, apart from light and color, can be perceived by pure sensation, but only by discernment, inference, and recognition, in addition to sensation.-Alhazen</p> </blockquote> <p>Today it is known that even color is a construction of the mind as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p> <p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. 
This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p> <p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p> <h2>References:</h2> <ol> <li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li> <li>L.D. Landau &amp; E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li> <li>Lyness, J. N.; Moler, C. B. (1967). &quot;Numerical differentiation of analytic functions&quot;. SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li> <li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li> <li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group Department of Engineering Science University of Oxford. 2007.</li> <li>Connie Fan. REVERSE MATHEMATICS. University of Chicago. 2010.</li> <li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li> <li>Wikipedia contributors. &quot;Koopman–von Neumann classical mechanics.&quot; Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li> <li>Koopman, B. O. (1931). 
&quot;Hamiltonian Systems and Transformations in Hilbert Space&quot;. Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li> <li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015.</li> <li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li> <li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li> <li>Turing, A.M. (1936). &quot;On Computable Numbers, with an Application to the Entscheidungsproblem&quot;. Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (and Turing, A.M. (1938). &quot;On Computable Numbers, with an Application to the Entscheidungsproblem: A correction&quot;. Proceedings of the London Mathematical Society.</li> <li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li> </ol>
Abdelmalek Abdesselam
7,410
<p>Well if you take out partial derivatives, at least quantum field theory and in particular conformal field theory will survive the massacre. The reason is explained in my MO answer: <a href="https://mathoverflow.net/questions/259155/p-adic-numbers-in-physics/259160#259160">$p$-adic numbers in physics</a></p> <p>One can use random/quantum fields <span class="math-container">$\phi:\mathbb{Q}_{p}^{d}\rightarrow \mathbb{R}$</span> as toy models of fields <span class="math-container">$\phi:\mathbb{R}^d\rightarrow\mathbb{R}$</span>. In this <span class="math-container">$p$</span>-adic or hierarchical setting, Laplacians and all that are nonlocal and not given by partial derivatives.</p> <p>Most equations in physics are <em>local</em> and therefore need partial derivatives in order to be formulated. What should remain, in the very hypothetical scenario proposed in the question, is everything pertaining to <strong>nonlocal</strong> phenomena.</p>
354,250
<p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p> <h2>Motivation:</h2> <p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the question that human brains might do reverse-mode automatic differentiation, or what some call backpropagation [7].</p> <p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p> <h2>Problem definition:</h2> <p>Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think that this may be a problem in reverse mathematics [6]. 
But, in order to define equivalence a couple definitions are required:</p> <p><strong>Partial Derivative as a linear map:</strong></p> <p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p> <p><span class="math-container">\begin{equation} \frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1} \end{equation}</span></p> <p>This is the general setting of numerical differentiation [3].</p> <p><strong>Partial Derivative as an operator:</strong></p> <p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p> <p><span class="math-container">\begin{equation} \nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2} \end{equation}</span></p> <p><span class="math-container">\begin{equation} \nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. 
\tag{3} \end{equation}</span></p> <p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p> <h2>The special case of classical mechanics:</h2> <p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p> <p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p> <h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2> <p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p> <ol> <li>Everything is formulated in terms of linear operators.</li> <li>All problems can then be recast in an algebraic language.</li> </ol> <p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. 
That said, I just recently came across this formulation, so there may be important subtleties of which I am unaware.</p> <h2>Related problems:</h2> <p>Furthermore, I think it may be worth considering the following related questions:</p> <ol> <li>What would be left of mathematical physics if we could not compute partial derivatives?</li> <li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li> <li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li> </ol> <h2>A historical note:</h2> <p>It is worth noting that more than 1000 years ago, as a result of his profound studies on optics, the mathematician and physicist Ibn al-Haytham (aka Alhazen) reached the following insight:</p> <blockquote> <p>Nothing of what is visible, apart from light and color, can be perceived by pure sensation, but only by discernment, inference, and recognition, in addition to sensation. - Alhazen</p> </blockquote> <p>Today it is known that even color is a construction of the mind, as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p> <p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling.
This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p> <p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p> <h2>References:</h2> <ol> <li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li> <li>L.D. Landau &amp; E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li> <li>Lyness, J. N.; Moler, C. B. (1967). &quot;Numerical differentiation of analytic functions&quot;. SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li> <li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li> <li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group Department of Engineering Science University of Oxford. 2007.</li> <li>Connie Fan. REVERSE MATHEMATICS. University of Chicago. 2010.</li> <li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li> <li>Wikipedia contributors. &quot;Koopman–von Neumann classical mechanics.&quot; Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li> <li>Koopman, B. O. (1931). 
&quot;Hamiltonian Systems and Transformations in Hilbert Space&quot;. Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li> <li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015.</li> <li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li> <li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li> <li>Turing, A.M. (1936). &quot;On Computable Numbers, with an Application to the Entscheidungsproblem&quot;. Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (and Turing, A.M. (1938). &quot;On Computable Numbers, with an Application to the Entscheidungsproblem: A correction&quot;. Proceedings of the London Mathematical Society.</li> <li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li> </ol>
rimu
175,280
<p>For one example of non-trivial physics without partial derivatives, one can look into Volume 1 of the Feynman lectures. In Chapter 28, Feynman starts to develop electrodynamics without partial derivatives — they only appear in Volume 2.</p> <p>Instead of the Maxwell equations, Feynman uses a somewhat complex formula for the field that is generated by a single moving charge. The formula only has ordinary derivatives — the first and second time derivatives of the speed of the charged particle — but becomes a bit unusual in that it uses <em>retarded time</em>: The field at a distant point is determined by the movement of the particle some time ago, to correct for the finite speed of light.</p>
2,059,192
<p>I was reading about Sobolev spaces and came across the notation $\dot{H}^1, \dot{H}^{-1}, \dot{H}^t$. I'm familiar with $H^1, H^{-1}, H^t$, but not the dot, and I can't find these spaces defined anywhere. Is this notation common, and could you explain it to me or point me to a reference?</p> <p>I have more or less the same question about the spaces $L_t^2,H_x^2$, and the norm $||\cdot||_{L_t^2H_x^2}$.</p>
Matt
206,546
<p>I have seen the dot notation to mean trace-free: $\dot{H}^1 \equiv H^1_0$. The space $L^2_t,H_x^2$ should mean that the function $u=u(x,t)$ is $L^2$ in time and $H^2$ in space. The norm would be</p> <p>$$\|u\|^2_{L^2_t,H^2_x} = \int_0^T \|u(\cdot,t)\|^2_{H^2(\Omega)} dt.$$</p> <p>In general, a common notation is $L^p(0,T; X)$, where $X$ is a Banach space, and the norm is defined as</p> <p>$$\|u\|_{L^p(0,T;X)} = \left( \int_0^T \|u\|^p_{X} dt\right)^{1/p}.$$</p> <p>In your case we consider $L^2(0,T;H^2(\Omega))$.</p>
842,271
<p>Evaluation of $\displaystyle \int \frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}dx$</p> <p>$\bf{My\; Try::}$ Let $x=t^4$. Then $dx = 4t^3dt$.</p> <p>So the integral is $\displaystyle \int\frac{\sqrt[3]{t^4+t}}{t^2} \cdot 4t^3dt$</p> <p>So the integral is $\displaystyle 4\int t^{\frac{7}{3}}\cdot (1+t^{-3})^{\frac{1}{3}}\,dt$</p> <p>Now how can I proceed after that?</p> <p>Help me.</p> <p>Thanks</p>
Claude Leibovici
82,404
<p>As Pranav Arora showed, there is no nice answer to this antiderivative. Personally, the only way I can think about it is a Taylor expansion of the integrand followed by a term by term integration.</p> <p>As you did, starting with $x=t^4$, we have $$\frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}=t^{-\frac{5}{3}} \sqrt[3] {1+t^3}=t^{-\frac{5}{3}} \Big(1+\frac{t^3}{3}-\frac{t^6}{9}+\frac{5 t^9}{81}-\frac{10 t^{12}}{243}+O\left(t^{13}\right)\Big)$$</p> <p>Then, replacing $t$ by $\sqrt[4] x$, we can find that $$\frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}=\frac{1}{x^{5/12}}+\frac{\sqrt[3]{x}}{3}-\frac{x^{13/12}}{9}+\frac{5 x^{11/6}}{81}-\frac{10 x^{31/12}}{243}+\frac{22 x^{10/3}}{729}-\frac{154 x^{49/12}}{6561}+\frac{374 x^{29/6}}{19683}+O\left(x^{61/12}\right)$$ Now integration leads to $$\int \frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}dx=\frac{12 x^{7/12}}{7}+\frac{x^{4/3}}{4}-\frac{4 x^{25/12}}{75}+\frac{10 x^{17/6}}{459}-\frac{40 x^{43/12}}{3483}+\frac{22 x^{13/3}}{3159}-\frac{616 x^{61/12}}{133407}+\frac{748 x^{35/6}}{229635}+O\left(x^{73/12}\right)$$</p> <p>If we compare the exact and approximate solutions for $$\int_0^a \frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}dx$$ they match quite well for $0 \leq a \leq 2$.</p>
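<p>The claimed agreement can be checked numerically. The Python sketch below (an illustration, standard library only) integrates the smooth substituted form $4t^{4/3}(1+t^3)^{1/3}\,dt$ with Simpson's rule and compares it against the truncated antiderivative above at $a=1$:</p>

```python
def integrand_t(t):
    # After x = t^4, the integrand times dx becomes 4 t^(4/3) (1 + t^3)^(1/3) dt,
    # which is smooth at t = 0 (the original integrand blows up like x^(-5/12)).
    return 4.0 * t ** (4.0 / 3.0) * (1.0 + t ** 3) ** (1.0 / 3.0)

def simpson(f, a, b, n=20_000):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def series_antiderivative(x):
    # The truncated antiderivative from the answer.
    return (12 * x ** (7 / 12) / 7 + x ** (4 / 3) / 4 - 4 * x ** (25 / 12) / 75
            + 10 * x ** (17 / 6) / 459 - 40 * x ** (43 / 12) / 3483
            + 22 * x ** (13 / 3) / 3159 - 616 * x ** (61 / 12) / 133407
            + 748 * x ** (35 / 6) / 229635)

a = 1.0
numeric = simpson(integrand_t, 0.0, a ** 0.25)
assert abs(numeric - series_antiderivative(a)) < 5e-3
```

<p>At $a=1$ the two values agree to about three decimal places, consistent with the observation that the match is good on $0 \leq a \leq 2$.</p>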
2,471,680
<p>I am working with a theorem and I need a reference (or a proof) for the limit $\lim_{n\to\infty}k^n=0$ when $0&lt;k&lt;1$. Kindly guide.</p>
Community
-1
<p>Not too hard to prove it from first principles.</p> <p>We have to prove $$\forall \epsilon &gt; 0: \exists N: \forall n &gt; N: |k^n - 0|&lt; \epsilon$$</p> <p>which is the same as saying </p> <p>$$\forall \epsilon &gt; 0: \exists N: \forall n &gt; N: k^n&lt; \epsilon$$</p> <p><strong>Proof:</strong> </p> <p>Pick $\epsilon &gt; 0$. Then, pick any $$N &gt; \frac{\log \epsilon}{\log k}$$</p> <p>For $n &gt; N$, we have:</p> <p>$$n &gt; \frac{\log \epsilon}{\log k} \implies n \log k &lt; \log \epsilon$$ $$\implies \log k^n &lt; \log \epsilon$$</p> <p>$$\implies k^n &lt; \epsilon \quad$$</p> <p>where we used that $ \log k &lt; 0$ for $k \in (0,1)$, so the inequality sign reverses and where we also used that $\log$ is a strictly increasing function.</p> <p>This ends the proof.</p>
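<p>The choice of $N$ in the proof is easy to illustrate numerically. The Python sketch below (not part of the proof) picks the smallest integer $N$ exceeding $\log \epsilon/\log k$ and spot-checks that $k^n$ falls below $\epsilon$ past it:</p>

```python
import math

def threshold(k, eps):
    # Smallest integer N with N > log(eps) / log(k); note log(k) < 0 for 0 < k < 1,
    # which is why the inequality flips in the proof above.
    return math.floor(math.log(eps) / math.log(k)) + 1

for k in (0.5, 0.9, 0.99):
    for eps in (1e-3, 1e-9):
        N = threshold(k, eps)
        # Every n > N satisfies k**n < eps; spot-check the first few.
        assert all(k ** n < eps for n in range(N + 1, N + 5))
```
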
2,802,156
<p>I have a function:</p> <p>$${\rm F}_i(\xi)=\sum_k^N {\rm D}_k\left(\frac1N\sum_j^N{\rm G}_j({\rm I}_j(\xi))-{\rm G}_k({\rm I}_k(\xi))\right).$$</p> <p>$\xi$ is a vector.</p> <p>How do I calculate the partial derivative using the chain rule?</p> <p>$$\frac{\partial{\rm F}_i}{\partial\xi}=? $$</p> <p>I guess...</p> <p>$$\frac{\partial{\rm F}_i}{\partial\xi}=\sum_k^N\frac{\partial{\rm F}_i}{\partial{\rm D}_k}\left(\frac1N\sum_j^N\frac{\partial {\rm D}_k}{\partial {\rm G}_j}\frac{\partial {\rm G}_j}{\partial {\rm I}_j}\frac{\partial {\rm I}_j}{\partial \xi}-\frac{\partial {\rm D}_k}{\partial {\rm G}_k}\frac{\partial {\rm G}_k}{\partial {\rm I}_k}\frac{\partial {\rm I}_k}{\partial \xi}\right). $$</p> <p><a href="https://i.stack.imgur.com/GBOTf.jpg" rel="nofollow noreferrer">Full version (image)</a></p>
Fimpellizzeri
173,410
<p>Personally, I think the total derivative chain rule is easiest to remember:</p> <p>$$D_{f\circ g}(x) = D_f(g(x)) \cdot D_g(x)$$</p> <p>What the $D$'s look like (as $m\times n$ matrices) of course depends on the domain and codomain of each function.</p>
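<p>For a concrete sanity check of this identity, here is a small Python sketch (central finite differences, standard library only) with hypothetical maps $g:\mathbb{R}^2\to\mathbb{R}^3$ and $f:\mathbb{R}^3\to\mathbb{R}^2$; the numerical Jacobian of $f\circ g$ matches the product $D_f(g(x))\cdot D_g(x)$:</p>

```python
import math

# Hypothetical example maps: g : R^2 -> R^3, f : R^3 -> R^2.
def g(x, y):
    return (x * y, math.sin(x), x + y ** 2)

def f(u, v, w):
    return (u + v * w, u * v - w)

def jacobian(func, point, m, h=1e-6):
    # Central-difference Jacobian of func at point; m = output dimension.
    n = len(point)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        pp, pm = list(point), list(point)
        pp[j] += h; pm[j] -= h
        fp, fm = func(*pp), func(*pm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x0 = (0.7, -1.3)
composed = lambda x, y: f(*g(x, y))
lhs = jacobian(composed, x0, 2)                            # D_{f∘g}(x0), a 2x2 matrix
rhs = matmul(jacobian(f, g(*x0), 2), jacobian(g, x0, 3))   # D_f(g(x0)) · D_g(x0)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-5 for i in range(2) for j in range(2))
```
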
3,917,601
<p>Let <span class="math-container">$(X,\mathcal{M},\mu)$</span> be a measure space. Suppose <span class="math-container">$E_n\in \mathcal{M}$</span> such that <span class="math-container">$$\sum_{n=1}^\infty \mu(E_n) &lt; \infty$$</span> show <span class="math-container">$\mu(\lim\sup_{n\to\infty} E_n) = 0.$</span></p> <p>Also, how can I prove or give a counterexample of the conclusion if the hypothesis is replaced with <span class="math-container">$$\sum_{n=1}^\infty \mu(E_n)^2 &lt; \infty$$</span></p> <p>Can I prove this by Fatou's Lemma? Thank you</p>
varpi
677,270
<p>Proof of the first part:</p> <p>The result you mention is the first Borel-Cantelli Lemma. First recall that <span class="math-container">$$\limsup_n E_n = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k.$$</span> By continuity from above (check the conditions as an exercise) and sub-additivity, we have <span class="math-container">$$\mu\left(\limsup_n E_n\right) = \lim_n\mu\left(\bigcup_{k=n}^\infty E_k\right)\leq \lim_n \left(\sum_{k=n}^\infty \mu(E_k)\right).$$</span> Note that the last term is the tail of a convergent series, hence it converges to zero.</p> <p>For the second part see Sam Wong's solution.</p>
2,752,511
<p>Prove that if $X$ is Hausdorff, $\Delta=\{(x, x)\mid x\in X\}$ is closed in $X\times X$ (with the product topology).</p> <p><strong>My attempt:</strong></p> <p>Let $x_1, x_2\in X$ s.t. $x_1\ne x_2$.</p> <p>There exist neighborhoods $U_1$ and $U_2$ of $x_1$ and $x_2$ that are disjoint.</p> <p>$U_1\times U_2$ is a basis element in the product topology on $X\times X$. So, $U_1\times U_2$ is open in $X\times X$.</p> <p>Let $x\in X$. </p> <p>$(x, x)\in U_1\times U_2\implies x\in U_1$ and $x\in U_2\implies x\in U_1\cap U_2$, which contradicts the fact that $U_1$ and $U_2$ are disjoint.</p> <p>So, $(x, x)\notin U_1\times U_2$.</p> <p>I feel that I'm on the right track but don't know how to proceed. Could someone please help me out?</p>
Siddhartha
257,185
<p>Let $(x, y)\in X\times X-\Delta$</p> <p>$\implies(x, y)\in X\times X$ and $(x, y)\notin\Delta$</p> <p>$\implies x, y\in X\text{ and }x\ne y$</p> <p>There exist neighborhoods $U_x$ and $U_y$ of $x$ and $y$ respectively that are disjoint.</p> <p>$U_x\times U_y$ is a basis element in the product topology on $X\times X$.</p> <p>$(x, y)\in U_x\times U_y$ --------------------------------------------- (1)</p> <p>Let $(u, v)\in U_x\times U_y$. --------------------------------------------- (2)</p> <p>$\implies u\in U_x$ and $v\in U_y$</p> <p>Since $U_x\subset X$ and $U_y\subset X$,</p> <p>$u, v\in X$</p> <p>$\implies(u, v)\in X\times X$ --------------------------------------------- (3)</p> <p>$u=v\implies u=v\in U_x\cap U_y$, a contradiction as $U_x$ and $U_y$ are disjoint.</p> <p>So, $u\ne v$.</p> <p>$\implies(u, v)\notin\Delta$ --------------------------------------------- (4)</p> <p>From (3) and (4),</p> <p>$(u, v)\in X\times X-\Delta$ --------------------------------------------- (5)</p> <p>From (2) and (5),</p> <p>$U_x\times U_y\subset X\times X-\Delta$ --------------------------------------------- (6)</p> <p>Combining (1) and (6),</p> <p>$(x, y)\in U_x\times U_y\subset X\times X-\Delta$</p> <p>So, $X\times X-\Delta$ is open in $X\times X$.</p> <p>So, $\Delta$ is closed in $X\times X$.</p> <p>QED</p>
187,545
<p><span class="math-container">$\DeclareMathOperator\GL{GL}\DeclareMathOperator\L{\mathfrak{L}}$</span>The free Lie algebra <span class="math-container">$\L(V)$</span> generated by an <span class="math-container">$r$</span>-dimensional vector space <span class="math-container">$V$</span> is, in the language of <a href="https://en.wikipedia.org/wiki/Free_Lie_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Free_Lie_algebra</a>, the free Lie algebra generated by any choice of basis <span class="math-container">$e_1, \ldots , e_r$</span> for the vector space <span class="math-container">$V$</span>. (Work over the field <span class="math-container">${\mathbb R}$</span> or <span class="math-container">${\mathbb C}$</span>, whichever you prefer.) It is a graded Lie algebra<br /> <span class="math-container">$$\L(V) = V \oplus \L_2 (V) \oplus \L_3 (V) \oplus \ldots .$$</span> The general linear group <span class="math-container">$\GL(V)$</span> of <span class="math-container">$V$</span> acts on <span class="math-container">$\L(V)$</span> by gradation-preserving Lie algebra automorphisms. Thus each graded piece <span class="math-container">$\L_k (V)$</span> is a finite dimensional representation space for <span class="math-container">$\GL(V)$</span>. (The `weight' of <span class="math-container">$\L_k (V)$</span> is <span class="math-container">$k$</span> in the sense that <span class="math-container">$\lambda \mathrm{Id} \in \GL(V)$</span> acts on <span class="math-container">$\L_k (V)$</span> by scalar multiplication by <span class="math-container">$\lambda^k$</span>.) 
QUESTION: How does <span class="math-container">$\L_k (V)$</span> break up into <span class="math-container">$\GL(V)$</span>-irreducibles?</p> <p>I only really know that <span class="math-container">$\L_2 (V) = \Lambda ^2 (V)$</span>, which is already irreducible.</p> <p>To start the game off, perhaps some reader out there already is familiar with <span class="math-container">$\L_3 (V)$</span> as a <span class="math-container">$\GL(V)$</span>-rep, and can tell me its irreps in terms of the Young diagrams / Schur theory involving 3 symbols?</p> <p>(My motivation arises from trying to understand some details of the subRiemannian geometry <a href="https://en.wikipedia.org/wiki/Sub-Riemannian_manifold" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sub-Riemannian_manifold</a> of the Carnot group whose Lie algebra is the free <span class="math-container">$k$</span>-step Lie algebra, which is <span class="math-container">$\L(V)$</span>-truncated after step <span class="math-container">$k$</span>. )</p>
Tom Church
250
<p>Fix a primitive <span class="math-container">$k$</span>-th root of unity <span class="math-container">$\zeta_k$</span>, and let <span class="math-container">$\rho\in S_k$</span> be a <span class="math-container">$k$</span>-cycle. (I am working over <span class="math-container">$\mathbb{C}$</span> here, obviously.) Klyachko [Kl] proved in 1974 that <span class="math-container">$L_k(V)\simeq \{x\in V^{\otimes k}\,|\,\rho(x) = \zeta_k\cdot x\}$</span>.</p> <p>From this Frobenius reciprocity gives the following formula for the multiplicity of <span class="math-container">$\mathbb{S}_\lambda(V)$</span> in <span class="math-container">$L_k(V)$</span> (though I believe this formula is much older):</p> <p><span class="math-container">$\frac{1}{k}\sum_{d|k}\mu(d)\chi_\lambda(\tau^{k/d})$</span></p> <p>Here <span class="math-container">$\mu$</span> denotes the Möbius function, <span class="math-container">$\chi_\lambda\colon S_k\to \mathbb{Z}$</span> is the character of the corresponding <span class="math-container">$S_k$</span>-irrep, and <span class="math-container">$\tau\in S_k$</span> is a <span class="math-container">$k$</span>-cycle (so <span class="math-container">$\tau^{k/d}$</span> is a product of <span class="math-container">$k/d$</span> disjoint <span class="math-container">$d$</span>-cycles). However this formula is not so helpful, since the irreducible characters of <span class="math-container">$S_k$</span> are not easy to compute; in practice, there is a much more useful (positive, bijective) formula as follows.</p> <ol> <li><p>[As motivation] Since <span class="math-container">$L_k(V)\subset V^{\otimes k}$</span>, the multiplicity of <span class="math-container">$\mathbb{S}_\lambda(V)$</span> in <span class="math-container">$L_k(V)$</span> is bounded above by its multiplicity in <span class="math-container">$V^{\otimes k}$</span>, which is the number of <em>standard tableaux</em> of shape <span class="math-container">$\lambda$</span>.
(A tableau <span class="math-container">$T$</span> of shape <span class="math-container">$\lambda$</span> is a labeling of the boxes of <span class="math-container">$\lambda$</span> by <span class="math-container">$1,\ldots,k$</span>; it is <em>standard</em> if the labels are increasing within each row and within each column.)</p> </li> <li><p>The <em>descent set</em> <span class="math-container">$D_T\subset \{1,\ldots,k\}$</span> of a tableau <span class="math-container">$T$</span> is the subset <span class="math-container">$D_T=\{i\,|$</span> box <span class="math-container">$i$</span> is in a higher row than box <span class="math-container">$i+1\}$</span>. The <em>major index</em> <span class="math-container">$\text{maj}(T)$</span> is the sum <span class="math-container">$\text{maj}(T):= \sum_{i\in D_T}i$</span>.</p> </li> <li><p>The multiplicity of <span class="math-container">$\mathbb{S}_\lambda(V)$</span> in <span class="math-container">$L_k(V)$</span> is the <strong>number of standard tableaux</strong> <span class="math-container">$T$</span> of shape <span class="math-container">$\lambda$</span> <strong>with major index</strong> <span class="math-container">$\text{maj}(T)\equiv 1\bmod{k}$</span>.</p> </li> </ol> <p>You should have no trouble working out the computations you want by hand for small <span class="math-container">$k$</span> using this result (as F.C.'s computations show, the answer gets large fast as <span class="math-container">$k$</span> gets bigger).</p> <p>[Remark on references: I always thought this formula was due to Klyachko as well; from a brief look at his paper I don't see it there (but it may be implicit there; certainly his &quot;<span class="math-container">$\text{ind}(\sigma)$</span>&quot; is precisely the major index modulo <span class="math-container">$k$</span>). It appears explicitly in Kraśkiewicz-Weyman &quot;Algebra of coinvariants and the action of a Coxeter element&quot; which appeared as a preprint in the late 1980s. 
Credit can be difficult to pin down in this period, especially due to the communication barriers between the Soviet Union and other countries; I make no guarantees about the proper credit for these results.]</p> <p>[Kl]: Klyachko's article was in Russian; the English translation is Klyachko, Lie elements in the tensor algebra, Siberian Mathematical Journal 15 (1974) 6, 914-920, <a href="https://doi.org/10.1007/BF00966559" rel="nofollow noreferrer">online here</a></p>
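<p>The major-index count in point 3 is easy to verify by brute force for small <span class="math-container">$k$</span>. The Python sketch below (an illustration; shapes are tuples of row lengths) recovers <span class="math-container">$L_3(V)\simeq\mathbb{S}_{(2,1)}(V)$</span>, which answers the question about <span class="math-container">$\mathfrak{L}_3$</span>, and also <span class="math-container">$L_4(V)\simeq\mathbb{S}_{(3,1)}(V)\oplus\mathbb{S}_{(2,1,1)}(V)$</span>:</p>

```python
from itertools import permutations

def standard_tableaux(shape):
    # Brute force over all fillings; fine for k = sum(shape) <= 5.
    k = sum(shape)
    out = []
    for perm in permutations(range(1, k + 1)):
        t, i = [], 0
        for r in shape:
            t.append(perm[i:i + r]); i += r
        rows_ok = all(row[j] < row[j + 1] for row in t for j in range(len(row) - 1))
        cols_ok = all(t[a][c] < t[a + 1][c]
                      for a in range(len(t) - 1) for c in range(len(t[a + 1])))
        if rows_ok and cols_ok:
            out.append(t)
    return out

def maj(t, k):
    row_of = {v: r for r, row in enumerate(t) for v in row}
    # i is a descent if box i lies in a higher row than box i+1.
    return sum(i for i in range(1, k) if row_of[i] < row_of[i + 1])

def multiplicity(shape):
    # Number of standard tableaux of this shape with maj == 1 (mod k).
    k = sum(shape)
    return sum(1 for t in standard_tableaux(shape) if maj(t, k) % k == 1)

# L_3(V) = S_{(2,1)}(V):
assert [multiplicity(s) for s in [(3,), (2, 1), (1, 1, 1)]] == [0, 1, 0]
# L_4(V) = S_{(3,1)}(V) + S_{(2,1,1)}(V):
assert [multiplicity(s) for s in [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]] \
       == [0, 1, 0, 1, 0]
```
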
3,451,374
<p>Given that I have a random variable <span class="math-container">$\max\{K-X, 0\}$</span> where <span class="math-container">$K&gt;0$</span> is a constant and <span class="math-container">$X$</span> is uniformly distributed on <span class="math-container">$[-K, K]$</span> (or, I guess, more generally with any distribution). How does one go about finding the expectation of such random variables? Some ideas come to mind: I think it could be <span class="math-container">$P(K-X&gt;0)\cdot(K-X)+P(K-X&lt;0)\cdot0$</span>, but that is itself a random variable. Maybe it is obtained by taking the expectation of this one, but I can't justify that.</p>
Kavi Rama Murthy
142,385
<p>In general <span class="math-container">$E\max \{K-X,0\}=E(K-X) I_{X \leq K}=KP(X \leq K)-EXI_{X \leq K}$</span>. In your special case <span class="math-container">$X$</span> is uniform on <span class="math-container">$[-K,K]$</span>, so <span class="math-container">$P(X \leq K)=1$</span> and <span class="math-container">$EXI_{X \leq K}=EX=0$</span>, and this gives <span class="math-container">$E\max \{K-X,0\}=K-EX=K$</span>.</p>
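<p>A quick deterministic check (Python sketch with a midpoint-rule integral; not part of the argument) of <span class="math-container">$E\max\{K-X,0\}=K$</span> for <span class="math-container">$X$</span> uniform on <span class="math-container">$[-K,K]$</span>:</p>

```python
def expected_payoff(K, n=200_000):
    # E[max(K - X, 0)] for X ~ Uniform[-K, K], via the midpoint rule,
    # which is exact on each cell for this piecewise-linear integrand.
    h = 2.0 * K / n
    total = 0.0
    for i in range(n):
        x = -K + (i + 0.5) * h                       # midpoint of the i-th cell
        total += max(K - x, 0.0) * (h / (2.0 * K))   # density of X is 1/(2K)
    return total

K = 3.0
assert abs(expected_payoff(K) - K) < 1e-9   # matches K - E[X] = K
```
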
2,941,456
<blockquote> <p>Given <span class="math-container">$K$</span> elements between <span class="math-container">$1$</span> and <span class="math-container">$7$</span> (inclusive), how many ways can you arrange the elements s.t. their sum adds to <span class="math-container">$N$</span>? </p> </blockquote> <p>I can brute-force my way to counting the number of ways for small <span class="math-container">$K$</span> and <span class="math-container">$N$</span>, but is there a general formula that addresses this problem? I feel like this is a commonly discussed problem, but I just don't know what the solution is. This came out of something I'm working on (programming stuff). Thanks.</p>
Rushabh Mehta
537,349
<p>The best way to solve this is via a <a href="https://en.wikipedia.org/wiki/Generating_function" rel="nofollow noreferrer">generating function</a>. We treat the exponent of <span class="math-container">$x$</span> as the value an element contributes to the sum, so that multiplying terms adds exponents.</p> <p>So, to represent the fact that each element can take on any value from 1 to 7, we introduce a polynomial with 1 term for each power between 1 and 7, i.e., <span class="math-container">$x+x^2+x^3+x^4+x^5+x^6+x^7=\frac{x^8-x}{x-1}$</span>. Then, since there are <span class="math-container">$k$</span> elements, we raise this polynomial to the <span class="math-container">$k$</span>th power, and find the coefficient of <span class="math-container">$x^n$</span> in the result.</p> <p>So, our answer will be the coefficient of <span class="math-container">$x^n$</span> in <span class="math-container">$$\bigg(\frac{x^8-x}{x-1}\bigg)^k$$</span></p>
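<p>Since the question came out of a programming context: the coefficient can be extracted mechanically by multiplying the polynomial out step by step (dynamic programming over coefficients). A Python sketch, cross-checked against brute-force enumeration for small <span class="math-container">$k$</span>:</p>

```python
from itertools import product

def count_gf(k, n, faces=7):
    # Coefficient of x^n in (x + x^2 + ... + x^faces)^k,
    # computed by repeated polynomial multiplication (degree -> coefficient dict).
    poly = {0: 1}
    for _ in range(k):
        new = {}
        for deg, c in poly.items():
            for v in range(1, faces + 1):
                new[deg + v] = new.get(deg + v, 0) + c
        poly = new
    return poly.get(n, 0)

def count_brute(k, n, faces=7):
    # Direct enumeration of ordered k-tuples with entries in {1..faces}.
    return sum(1 for t in product(range(1, faces + 1), repeat=k) if sum(t) == n)

for k in (1, 2, 3):
    for n in range(k, 7 * k + 1):
        assert count_gf(k, n) == count_brute(k, n)

print(count_gf(3, 10))  # ordered triples of values in {1..7} summing to 10: 33
```
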
14,391
<p>I am currently tutoring a student who is really lacking in math. At first I thought she was just resistant; perhaps she thought that her teacher had no choice but to pass her. What truly stunned me is that she is going into tenth grade and has very little common sense with numbers.</p> <p>I once asked her what is 3 minus 1/2, and she could not answer it. So, I drew three circles in front of her and shaded out one half. Now I asked how many circles are left, and she still could not answer it .... </p> <p>An episode like this is not a rarity with her. Moreover, too often she would give responses that even a third-grader would know were nonsense. (What's half-way between 19.5 and 20? She said 9.) According to her mom, she is fine with her other classes. That's why I am wondering if a person can have issues with learning math specifically. </p>
JRN
77
<p>I do not specialize in learning disabilities, but it's possible that your student has <em>dyscalculia</em>. From <a href="https://en.wikipedia.org/wiki/Dyscalculia" rel="noreferrer">Wikipedia</a>:</p> <blockquote> <p>Dyscalculia is difficulty in learning or comprehending arithmetic, such as difficulty in understanding numbers, learning how to manipulate numbers, and learning facts in mathematics. It is generally seen as the mathematical equivalent to dyslexia.</p> <p>It can occur in people from across the whole IQ range – often higher than average – along with difficulties with time, measurement, and spatial reasoning.</p> </blockquote> <p>The Wikipedia page gives some examples of signs and symptoms. For example,</p> <blockquote> <p>Surprisingly, students with dyscalculia often do exceptionally in writing, reading, and speaking.</p> </blockquote>
3,638,028
<p>Find <span class="math-container">$f\circ f$</span> for the function <span class="math-container">$f\colon \mathbb R^2\to \mathbb R^2$</span> given by <span class="math-container">$f(x,y)=(-y,x)$</span>. I know that <span class="math-container">$f$</span> looks like its inverse reflected about one of the axes, and if that is the case then <span class="math-container">$f\circ f$</span> should be expressible through <span class="math-container">$f^{-1}$</span>. I also know that it may equal <span class="math-container">$(-x,-y)$</span>, but I have no idea how <span class="math-container">$f(-y,x)=(-x,-y)$</span>. I also know that it has got something to do with vectors or scalars, but I'm still stuck. I need someone to explain it in detail for me.</p> <p>I am not sure on how to do this question. Could someone please help me?</p>
Jeppe Stig Nielsen
70,134
<p>With colors: <span class="math-container">$$\frac{\color{blue}{k}\color{red}{(k+1)}+\color{green}{2}\color{red}{(k+1)}}{2} =\frac{(\color{blue}{k}+\color{green}{2})\color{red}{(k+1)}}{2}$$</span> The red part <span class="math-container">$\color{red}{(k+1)}$</span> is seen in both terms of the left-hand-side, so can be "moved out" to the right.</p>
1,241,970
<p>A fair coin is tossed three times. Let $X$ be the number of heads that turn up on the first two tosses and $Y$ the number of heads that turn up on the third toss. Give the distribution of $X$, $Y$, $X + Y$, $X − Y$ and $XY$.</p>
Karolina Sz
232,662
<p>Is this a good answer? $$X\in\{0,1,2\},\quad Y\in\{0,1\}$$ $$P(Y=1)=1/2, P(Y=0)=1/2, P(X=0)=1/4, P(X=1)=1/2, P(X=2)=1/4$$ $$P(X+Y=0)=1/8, P(X+Y=1)=3/8, P(X+Y=2)=3/8, P(X+Y=3)=1/8$$ $$P(X-Y=-1)=1/8, P(X-Y=0)=3/8, P(X-Y=1)=3/8, P(X-Y=2)=1/8$$ $P(XY=0)=5/8, P(XY=1)=2/8, P(XY=2)=1/8$ </p>
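<p>These numbers can be confirmed by enumerating the 8 equally likely outcomes directly. A Python sketch (standard library only) that rebuilds each distribution with exact fractions:</p>

```python
from itertools import product
from fractions import Fraction
from collections import Counter

def dist(fn):
    # Distribution of fn(X, Y) over the 8 equally likely coin-toss outcomes.
    c = Counter()
    for toss in product('HT', repeat=3):
        X = toss[0:2].count('H')            # heads among the first two tosses
        Y = 1 if toss[2] == 'H' else 0      # heads on the third toss
        c[fn(X, Y)] += Fraction(1, 8)
    return dict(c)

assert dist(lambda X, Y: X + Y) == {0: Fraction(1, 8), 1: Fraction(3, 8),
                                    2: Fraction(3, 8), 3: Fraction(1, 8)}
assert dist(lambda X, Y: X - Y) == {-1: Fraction(1, 8), 0: Fraction(3, 8),
                                    1: Fraction(3, 8), 2: Fraction(1, 8)}
assert dist(lambda X, Y: X * Y) == {0: Fraction(5, 8), 1: Fraction(2, 8),
                                    2: Fraction(1, 8)}
```
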
397,274
<p>Suppose you have a group isomorphism given by the first isomorphism theorem:</p> <p><span class="math-container">$$G/\ker(\phi) \simeq \operatorname{im}(\phi)$$</span></p> <p>What can we say about the group <span class="math-container">$\ker(\phi)\times \operatorname{im}(\phi)$</span>? In particular, when does the following hold:</p> <p><span class="math-container">$$G\simeq \ker(\phi)\times \operatorname{im}(\phi)?$$</span></p> <p>I ask this question because I want to prove that <span class="math-container">$GL_n^+(\mathbb{R}) \simeq SL_n(\mathbb{R}) \times \mathbb{R}^*_{&gt;0}$</span>, with <span class="math-container">$GL_n^+(\mathbb{R})$</span> the group of matrices with positive determinant. I proved that <span class="math-container">$SL_n(\mathbb{R})$</span> is a normal subgroup and that <span class="math-container">$GL_n^+(\mathbb{R})/ SL_n(\mathbb{R}) \simeq \mathbb{R}^*_{&gt;0}$</span>, using the surjective homomorphism <span class="math-container">$\det(M)$</span>. I tried something with semidirect products but I got stuck.</p>
Najib Idrissi
10,014
<p>Let $N = ker(\phi)$ and $K = im(\phi)$; then you're asking when the extension $1 \to N \to G \to K \to 1$ is trivial.</p> <ul> <li>First you need the extension to be split, that is, there must exist a morphism $s : K \to G$ such that the composition $\phi \circ s$ is the identity. In this case $G \simeq N \rtimes K$, the semidirect product of $N$ and $K$: this is the splitting lemma (for non-abelian groups).</li> <li>Now you want this semidirect product to be direct; this is true iff $K$ is also normal in $G$, or equivalently that there exists a morphism $G \to N$ which is the identity on $N$.</li> </ul> <p>I don't include proofs here, as they're found in any basic group theory notes.</p> <hr> <p>In fact you can get away without the first condition. Indeed, if there exists a map $p : G \to N$ which is the identity on $N$, then a section of $\phi$ automatically exists and the isomorphism $G \cong \operatorname{im}(\phi) \times \operatorname{ker}(\phi) = K \times N$ holds. The required isomorphism is $(\phi, p) : G \to K \times N$ (it's not hard to check that this is in fact an isomorphism).</p>
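<p>For the concrete case in the question this criterion applies directly: $p(M)=\det(M)^{-1/n}M$ is a morphism $GL_n^+(\mathbb{R})\to SL_n(\mathbb{R})$ that is the identity on $SL_n(\mathbb{R})$, so $(\det, p)$ gives $GL_n^+(\mathbb{R})\simeq \mathbb{R}^*_{&gt;0}\times SL_n(\mathbb{R})$. A small numerical Python sketch for $n=2$ (illustrative matrices only, not a proof):</p>

```python
# Check that p(M) = det(M)**(-1/n) * M is a retraction GL_2^+ -> SL_2
# that is the identity on SL_2, as required by the criterion above (n = 2).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def scale(c, M):
    return [[c * x for x in row] for row in M]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def p(M):
    return scale(det2(M) ** -0.5, M)   # well-defined since det(M) > 0

A = [[2.0, 1.0], [0.5, 3.0]]           # det = 5.5 > 0
B = [[1.0, -2.0], [1.0, 4.0]]          # det = 6 > 0
close = lambda M, N: all(abs(M[i][j] - N[i][j]) < 1e-9
                         for i in range(2) for j in range(2))

assert abs(det2(p(A)) - 1.0) < 1e-9            # p lands in SL_2
assert close(p(mul(A, B)), mul(p(A), p(B)))    # p is a homomorphism
S = [[2.0, 3.0], [1.0, 2.0]]                   # det = 1: already in SL_2
assert close(p(S), S)                          # p is the identity on SL_2
```
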
1,840,778
<p>In rectangle $ABCD$, we have $AD = 3$ and $AB = 4$. Let $M$ be the midpoint of $\overline{AB}$, and let $X$ be the point such that $MD = MX$, $\angle MDX = 77^\circ$, and $A$ and $X$ lie on opposite sides of $\overline{DM}$. Find $\angle XCD$, in degrees. </p> <p><img src="https://i.stack.imgur.com/3TsZm.png" alt="Diagram"></p> <p>Thanks!</p>
Huang
18,931
<p>Hint: It's easy to see $MC=MD$, hence points $D$, $X$ and $C$ are located on a circle with $M$ being the center. Now, we know angle $XCD$ is half of angle $DMX$ (why?)</p>
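<p>A coordinate sketch in Python (not part of the hint, and it does spoil the answer): with $A=(0,0)$, $B=(4,0)$, $C=(4,3)$, $D=(0,3)$, $M=(2,0)$, the isosceles triangle $MDX$ has apex angle $\angle DMX = 180^\circ - 2\cdot 77^\circ = 26^\circ$, and the inscribed angle $\angle XCD$ comes out to half of that:</p>

```python
import math

A, B, C, D = (0, 0), (4, 0), (4, 3), (0, 3)
M = (2, 0)                                    # midpoint of AB

# MD = MX and the base angle at D is 77 deg, so the apex angle DMX is 26 deg.
apex = math.radians(180 - 2 * 77)

# Rotate D about M by -apex (clockwise): this puts X on the side of line DM
# opposite A, as required by the problem statement.
dx, dy = D[0] - M[0], D[1] - M[1]
c, s = math.cos(-apex), math.sin(-apex)
X = (M[0] + c * dx - s * dy, M[1] + s * dx + c * dy)

cross = lambda P: (M[0] - D[0]) * (P[1] - D[1]) - (M[1] - D[1]) * (P[0] - D[0])
assert cross(A) * cross(X) < 0                # A and X on opposite sides of DM

def angle(P, Q, R):                           # angle PQR at vertex Q, in degrees
    v1 = (P[0] - Q[0], P[1] - Q[1]); v2 = (R[0] - Q[0], R[1] - Q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

assert abs(angle(M, D, X) - 77) < 1e-6        # the given angle MDX
assert abs(angle(X, C, D) - 13) < 1e-6        # half of the central angle DMX
```
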
440,791
<p>I am trying to figure out if the infinite product <span class="math-container">$$\omega=\frac{5\sqrt{3}}{12}\prod\limits_{\substack{p\equiv 1\pmod3 \\ p\ge 13}}\left(\frac{p-2}{p-1}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\ p\ge 13}}\left(\frac{p}{p-1}\right)$$</span> is asymptotically equal to the infinite product <span class="math-container">$$c=\frac{5775}{2592\pi}\prod\limits_{\substack{p\equiv 1\pmod3 \\ p\ge 13}}\left(\frac{p(p-2)}{(p-1)^2}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\ p\ge 13}}\left(\frac{p^2}{p^2-1}\right)$$</span>.</p> <p>So, I reformulated the products as <span class="math-container">$$\omega(x)=\frac{5\sqrt{3}}{12}\prod\limits_{\substack{p\equiv 1\pmod3 \\ 13\leq p\leq x}}\left(\frac{p-2}{p-1}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\ 13\leq p\leq x}}\left(\frac{p}{p-1}\right)$$</span> and <span class="math-container">$$c(x)=\frac{5775}{2592\pi}\prod\limits_{\substack{p\equiv 1\pmod3 \\ 13\leq p\leq x}}\left(\frac{p(p-2)}{(p-1)^2}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\ 13\leq p\leq x}}\left(\frac{p^2}{p^2-1}\right)$$</span>. Then, taking the quotient we get <span class="math-container">$$\frac{\omega(x)}{c(x)}=\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 1\pmod3 \\ 13\leq p\leq x}}\left(1-\frac{1}{p}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\ 13\leq p\leq x}}\left(1+\frac{1}{p}\right)$$</span> <span class="math-container">$$=\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 2\pmod3 \\ 13\leq p\leq x}}\left(1-\frac{1}{p^2}\right)\prod\limits_{\substack{p\equiv 1\pmod3 \\ 13\leq p\leq x}}\left(1-\frac{1}{p}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\13\leq p\leq x}}\left(\frac{1}{1-\frac{1}{p}}\right).$$</span></p> <p>As <span class="math-container">$x\to\infty$</span>, from <a href="https://core.ac.uk/reader/81188410" rel="nofollow noreferrer">A. Languasco's paper</a>, we observe that the last two products in the quotient above are asymptotically equal. 
So, the limit of the quotient depends on the product <span class="math-container">$$\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 2\pmod3 \\ 13\leq p\leq x}}\left(1-\frac{1}{p^2}\right)$$</span>, but I don't know whether this converges to <span class="math-container">$1$</span> or not as <span class="math-container">$x\to\infty$</span>. I can see that the last product (apart from the constant) has something to do with <span class="math-container">$\frac{1}{\zeta(2)}$</span>, but I don't know whether it's actually smaller or greater than that.</p> <p>I am essentially trying to check whether the two products are eventually the same or not.</p> <p>I would really appreciate it if somebody could provide some hints or ideas on how to progress from here.</p>
Henri Cohen
81,776
<p>You have almost obtained the answer yourself: <span class="math-container">$$\prod_{p\equiv1\pmod{3}}(1-1/p)\prod_{p\equiv2\pmod{3}}(1+1/p)=1/L(\chi,1)$$</span> where <span class="math-container">$\chi(p)=(-3/p)$</span> is the Legendre symbol (as in KConrad's answer), and <span class="math-container">$L(\chi,1)=\pi/(3\sqrt{3})$</span>, so you immediately obtain that with your strange coefficients <span class="math-container">$\omega/c=320/243$</span> exactly.</p> <p>Note that it is not difficult to obtain hundreds of decimals of <span class="math-container">$\omega$</span> and <span class="math-container">$c$</span> if desired.</p>
43,505
<p>I am looking to make a physics based Mathematica project. Ideally the project would take around 12 hours, gathering any experimental data and analyse the findings.</p> <p>I'd have full access to university physics labs. The project would be for 2nd year physics students in the end and would aim to introduce using Mathematica in their work.</p>
Chris Degnen
363
<p>You could try making an intuitive explanation of Planck's constant. There are several formulae and plots to interrelate and explain, notably Wien's law, the Rayleigh-Jeans curve, and then Planck's law. Some plotting challenges.</p> <p><a href="http://en.wikipedia.org/wiki/Planck_constant#Black-body_radiation" rel="nofollow">http://en.wikipedia.org/wiki/Planck_constant#Black-body_radiation</a></p> <p>The result would be a nice explanation of the discovery of the quantum world.</p>
231,887
<p>I'm learning to do proofs, and I'm a bit stuck on this one. The question asks to prove for any positive integer $k \ne 0$, $\gcd(k, k+1) = 1$.</p> <p>First I tried: $\gcd(k,k+1) = 1 = kx + (k+1)y$ : But I couldn't get anywhere.</p> <p>Then I tried assuming that $\gcd(k,k+1) \ne 1$ , therefore $k$ and $k+1$ are not relatively prime, i.e. they have a common divisor $d$ s.t. $d \mid k$ and $d \mid k+1$ $\implies$ $d \mid 2k + 1$</p> <p>Actually, it feels obvious that two integers next to each other, $k$ and $k+1$, could not have a common divisor. I don't know, any help would be greatly appreciated.</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\ $</span> Both <span class="math-container">$\rm\:1+k\:$</span> and <span class="math-container">$\rm\:k\:$</span> are multiples of <span class="math-container">$\rm\:c = gcd(k,1+k),\:$</span> and the set of multiples of any integer <span class="math-container">$\rm\:c\:$</span> enjoy a special structure: they're <a href="https://math.stackexchange.com/a/2942475/242">closed under subtraction</a> since <span class="math-container">$\rm\:ac\! -\! bc = (a\!-\!b)c$</span>.</p> <hr /> <p>Said <span class="math-container">$\rm\: mod\ c\!:\ 1\!+\!k\equiv 0,\,\ k\equiv 0\:\Rightarrow\:1\equiv 0,\:$</span> i.e. <span class="math-container">$\rm\:c\:|\:1$</span></p> <hr /> <p>The same methods show more generally that <span class="math-container">$\rm\,\gcd(k,nk\!+\!1) = 1,\,$</span> which can also be viewed in terms of <a href="https://math.stackexchange.com/a/95825/242">gcd mod reduction</a> as in the Euclidean algorithm.</p>
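A quick numerical sanity check of the more general claim $\gcd(k,nk\!+\!1)=1$ (a Python sketch, not part of the original hint; $n=1$ is the consecutive-integer case):

```python
from math import gcd

# k and n*k + 1 are always coprime; n = 1 gives gcd(k, k + 1) = 1
for k in range(1, 200):
    for n in range(1, 20):
        assert gcd(k, n * k + 1) == 1
print("all coprime")
```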
231,887
<p>I'm learning to do proofs, and I'm a bit stuck on this one. The question asks to prove for any positive integer $k \ne 0$, $\gcd(k, k+1) = 1$.</p> <p>First I tried: $\gcd(k,k+1) = 1 = kx + (k+1)y$ : But I couldn't get anywhere.</p> <p>Then I tried assuming that $\gcd(k,k+1) \ne 1$ , therefore $k$ and $k+1$ are not relatively prime, i.e. they have a common divisor $d$ s.t. $d \mid k$ and $d \mid k+1$ $\implies$ $d \mid 2k + 1$</p> <p>Actually, it feels obvious that two integers next to each other, $k$ and $k+1$, could not have a common divisor. I don't know, any help would be greatly appreciated.</p>
cansomeonehelpmeout
413,677
<p>Let <span class="math-container">$d=\gcd(k,k+1)$</span>, then <span class="math-container">$d\mid (k+1)-k=1$</span>. So <span class="math-container">$d=\pm 1$</span>.</p>
299,824
<p>I've posted this <a href="https://math.stackexchange.com/q/2733614/214353">to Math.SE</a> about a month ago:</p> <p>Seems like $$ \Delta(a_0+a_1t^d+a_2t^{2d}+...+a_nt^{nd})=(-1)^{n\frac{d(d-1)}2}d^{nd}(a_0a_n)^{d-1}[\Delta(a_0+a_1t+a_2t^2+...+a_nt^n)]^d, $$ where $\Delta$ is the discriminant.</p> <p>Presumably this is not difficult to prove, but I just need a reference. Who did discover this formula for the first time?</p>
Igor Rivin
11,142
<p>I don't know of a standard reference, but there is <a href="http://faculty.bard.edu/cullinan/disccomp.pdf" rel="nofollow noreferrer">a note by John Cullinan at Bard College</a> where he computes the discriminant of a <em>composition</em> of two polynomials, so a nontrivially more general result than the one you mention.</p>
299,824
<p>I've posted this <a href="https://math.stackexchange.com/q/2733614/214353">to Math.SE</a> about a month ago:</p> <p>Seems like $$ \Delta(a_0+a_1t^d+a_2t^{2d}+...+a_nt^{nd})=(-1)^{n\frac{d(d-1)}2}d^{nd}(a_0a_n)^{d-1}[\Delta(a_0+a_1t+a_2t^2+...+a_nt^n)]^d, $$ where $\Delta$ is the discriminant.</p> <p>Presumably this is not difficult to prove, but I just need a reference. Who did discover this formula for the first time?</p>
Joe Silverman
11,926
<p>You say "presumably not difficult to prove." Did you try? It seems like a pretty easy exercise, using $$ \Delta(F(t)) = \prod_{F(a)=0} F'(a). $$ Taking $F(t)=f(t^d)$, the roots of $F$ are the $d$'th roots of the roots of $f$, while $F'(t)=dt^{d-1}f'(t^d)$. So I doubt you'll find this formula in a reference, but if you need to use it in a paper, you can just state it as a lemma, and for the proof, say "Exercise using the formula $ \Delta(F(t)) = \prod_{F(a)=0} F'(a) $ with $F(t)=f(t^d)$."</p>
75,900
<p>I have got the following equation: </p> <pre><code>-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2] == 0
</code></pre> <p>Trying to solve it for x, so I evaluate</p> <pre><code>Solve[-(c-x)/Sqrt[b^2+(c-x)^2]+x/Sqrt[a^2+x^2] == 0,x]
</code></pre> <p>This produces</p> <blockquote> <pre><code>{{x -&gt; (a*c)/(a - b)}, {x -&gt; (a*c)/(a + b)}}
</code></pre> </blockquote> <p>It is obviously wrong. Well, the second solution <code>a c/(a + b)</code> is <strong>indeed</strong> the right one, but the first <code>a c/(a - b)</code> is obviously wrong. You can check it yourself with direct substitution (let's take <code>a = 2</code>, <code>b = 3</code>, <code>c = 4</code>):</p> <pre><code>(-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2])/. x -&gt; a*c/(a - b) /. a -&gt; 2 /. b -&gt; 3 /.c -&gt; 4
</code></pre> <p>It produces <code>-(8/Sqrt[17])</code>. Not zero. So wrong.</p> <p>Now let's try the same but with <code>a c/(a + b)</code>:</p> <pre><code>(-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2])/. x -&gt; a*c/(a + b) /. a -&gt; 2 /. b -&gt; 3 /.c -&gt; 4
</code></pre> <p>Produces zero. So what's wrong with <code>Solve</code>?</p> <p>I'm using <em>Mathematica</em> 9.</p>
m_goldberg
3,066
<p>Your argument concluding that <code>(a c)/(a + b)</code> is a root but <code>(a c)/(a - b)</code> is not a root is not sound. </p> <p>You accept that <code>(a c)/(a + b)</code> is a root because your equation is satisfied by the triple <code>{a, b, c} = {2, 3, 4}</code>. However, it immediately follows that <code>(a c)/(a - b)</code> has the same value when <code>{a, b, c} = {2, -3, 4}</code>. So by your own reasoning you should accept <code>(a c)/(a - b)</code> as a solution as well.</p>
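The point is easy to check numerically: $b$ enters the equation only through $b^2$, so flipping its sign changes nothing, and the "wrong" root becomes the right one (a quick Python check, not part of the original answer):

```python
from math import sqrt

def lhs(x, a, b, c):
    # left-hand side of the original equation
    return -(c - x) / sqrt(b**2 + (c - x)**2) + x / sqrt(a**2 + x**2)

a, c = 2, 4
# with b = 3 the root is a*c/(a + b) = 8/5; with b = -3 the other
# root a*c/(a - b) evaluates to the same 8/5 and also satisfies lhs = 0
assert abs(lhs(a * c / (a + 3), a, 3, c)) < 1e-12
assert abs(lhs(a * c / (a - (-3)), a, -3, c)) < 1e-12
```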
3,132,380
<p>To compute this I used the fact that <span class="math-container">$S(n,2) = 2^{n-1}-1$</span> and used the recurrence relation <span class="math-container">$S(n,k) = kS(n-1,k) + S(n-1,k-1)$</span>, and used induction to get that <span class="math-container">$S(n,3)=\dfrac{3^{n-1}+1}{2}-2^{n-1}$</span>.</p> <p>But is there a quicker way to do this? Is there a way to just see plainly, as in, just counting how many ways we could put <span class="math-container">$n$</span> distinguishable balls into <span class="math-container">$3$</span> indistinguishable boxes where no box is empty? I tried but it seems difficult.</p>
Marko Riedel
44,883
<p>With Stirling numbers of the second kind we have the combinatorial class</p> <p><span class="math-container">$$\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}} \textsc{SET}_{=k}(\textsc{SET}_{\ge 1}(\mathcal{Z})).$$</span></p> <p>This yields the EGF</p> <p><span class="math-container">$$\frac{(\exp(z)-1)^k}{k!}.$$</span></p> <p>Setting <span class="math-container">$k=3$</span> we find</p> <p><span class="math-container">$${n\brace 3} = n! [z^n] \frac{1}{6} (\exp(z)-1)^3.$$</span></p> <p>Extracting the coefficient with <span class="math-container">$n\ge 1$</span> we have</p> <p><span class="math-container">$$\frac{1}{6} n! [z^n] (\exp(3z)-3\exp(2z)+3\exp(z)-1) \\ = \frac{1}{6} n! \left(\frac{3^n}{n!} - 3 \frac{2^n}{n!} + 3 \frac{1}{n!} \right) \\ = \frac{1}{6} (3^n - 3 \times 2^n + 3).$$</span></p> <p>This is the claim.</p>
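The closed form can be cross-checked against the defining recurrence (a sketch, assuming the standard recurrence $S(n,k)=kS(n-1,k)+S(n-1,k-1)$ quoted in the question):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # S(n, k) via the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# compare with the closed form (3^n - 3*2^n + 3)/6
for n in range(1, 25):
    assert stirling2(n, 3) == (3**n - 3 * 2**n + 3) // 6
```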
1,953,517
<p>Let $X,Y$ be two independet Poisson variables with parameters $\mu,\lambda&gt;0$. Let $N:=Y+X$ what is $\mathbb{E}(X\vert N=n)$?</p> <p>I already computed $P(X=k\vert N=n)$ for $k,n\in \mathbb{Z}_{+}$ which is $$P(X=k\vert N=n)=\binom{n}{k}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}$$ if $n&gt;k$ else $0$.</p> <p>I know that $n=k+j$. But now I get stucked $$\mathbb{E}(X\vert N=n)=\sum\limits_{k=0}^\infty k\binom{n}{k}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}=\sum\limits_{k=0}^\infty k\frac{n!}{k!(n-k)!}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}$$ $$\sum\limits_{k=0}^\infty \frac{(k+j)!}{(k-1)!(j)!}\frac{\mu^{j}\lambda^k}{(\mu+\lambda)^{k+j}}$$</p> <p>How can I compute the expected value?</p>
Benson Lin
371,844
<p>First, remove 20 bikes at the beginning. This accounts for the "at least" constraints in the question. Now consider each possible value of the number of bikes in warehouse 2, namely $10,11,\cdots,20$.</p> <p>For each case, we remove another $0$ to $10$ bikes.</p> <p>Now we simply have to divide the remaining bikes among warehouses 1, 3 and 4. This can be done with the aid of <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow">stars and bars</a>, which gives us the formula for $n$ bikes and $3$ warehouses as:</p> <p>$$ \binom{n+3-1}{3-1} = \binom{n+2}{2} = \frac{(n+2)(n+1)}{2} $$</p> <p>Now we just have to sum with $n = 80,79,\cdots,70$. This gives us an answer of $32241$</p>
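The final sum can be double-checked by brute force (a quick sketch, not part of the original answer):

```python
# closed form: sum of C(n+2, 2) for the remaining n = 70..80 bikes
total = sum((n + 2) * (n + 1) // 2 for n in range(70, 81))
assert total == 32241

# brute force: count nonnegative pairs (x1, x3) with x1 + x3 <= n,
# the third warehouse receiving the remainder
brute = sum(n + 1 - x1 for n in range(70, 81) for x1 in range(n + 1))
assert brute == total
```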
327,990
<p>So i was working on this:</p> <p>$$ \lim\limits_{x\to1} \frac{x + \sqrt{x} - 2}{x - 1} $$</p> <p>and I thought to simpify my top by multiplying by a conjugate, taking everything other than the $x$ to be the $b$ from $a+b$ so that my conjugate looked like $x - \sqrt{x} + 2$.</p> <p>The multiplication, if correct, led me to $x^2 - x + 4\sqrt{x} - 4$, which i was happy to note had $1$ as a root (which corresponded with my denominator and would allow for me to cancel off the [x-1]'s. </p> <p>However, I'm having trouble finding the chunk that multiplies $(x-1)$ to give the $x^2 - x + 4\sqrt{x} - 4$, and this chunk's limit will define the nominator's limit, which, divided by 2 (from the conjugate left alone at the denominator after I cancel) should be my overall limit, i think = P</p> <p>I'm pretty new to calculus so I may have made some mistakes, but if I am, I'd still like to know how I would have gotten that other root, since Ruffini's isn't working for me.</p> <p>Thanks in advance = D</p>
André Nicolas
6,312
<p><strong>Hint:</strong> I think you will find things <strong>much</strong> easier if you let $x=t^2$. This makes no real mathematical difference, but will send you in the right direction. The $\sqrt{x}$ was causing unnecessary confusion. After the substitution, the work will take seconds only.</p>
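Numerically, the quotient does settle at a single value near $x=1$ (after the substitution $x=t^2$ it reduces, cancelling $t-1$, to $(t+2)/(t+1)$, so the limit should be $3/2$ — a check added here, not part of the original hint):

```python
from math import sqrt

def f(x):
    return (x + sqrt(x) - 2) / (x - 1)

# f(x) should approach 3/2 as x -> 1 from either side
for h in (1e-3, 1e-5, 1e-7):
    assert abs(f(1 + h) - 1.5) < 1e-2
    assert abs(f(1 - h) - 1.5) < 1e-2
```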
1,700
<p>Is there an algorithm in literature to compute an efficient (pareto optimal) and envy-free cake cutting when there are only $n=2$ players and a mediator?</p>
Joseph Malkevitch
1,618
<p>Take a look at:</p> <p><a href="http://ideas.repec.org/p/pad/wpaper/0022.html" rel="nofollow">http://ideas.repec.org/p/pad/wpaper/0022.html</a></p> <p>or the description of Crawford Divide and Choose as described in the book Equity: In Theory and Practice, by H. Peyton Young, Princeton U. Press.</p>
819,704
<p>Here is the problem I have </p> <p>$\lim \limits_{x \to -1} (x + 1)^2 \sin \left(\frac{1}{x + 1}\right)$</p> <p>I approached it like this:</p> <p>\begin{align} -1 \le \sin\left(\displaystyle \frac{1}{x + 1}\right) \le 1 \\ -(x + 1) \le \sin(1) \le (x + 1) \end{align}</p> <p>I then go on to solve the limit by replacing $\sin \left(\frac{1}{x + 1}\right)$ with $-(x + 1)$ and $(x + 1)$ respectively.</p> <p>Although this yields the correct answer, I am unsure if this is the correct way to solve this type of problem using the Squeeze theorem. Is this correct?</p>
DanZimm
37,872
<p>There appears to be a silly mistake which @mm-aops touched on, but I want to point out more directly.</p> <p>If we have an inequality like so:</p> <p>$$ a \le f\left( \frac{b}{c} \right) \le d $$ then we cannot conclude that $$ a \cdot c \le f(b) \le d \cdot c $$ If you want a direct counter example consider $f(x) = x^2$ and $a = 2, b = 4, c = 2, d = 5$ then we have surely $$ 2 \le 4 \le 5 \implies 2 \le f\left( \frac{4}{2} \right) \le 5 $$ but it is not true that $$ 4 \le f(4) \le 10 $$ since $f(4) = 16$.</p> <p>Now with this said you started off correctly noting that $\sin(\cdot)$ is bounded above by $1$ and below by $-1$ i.e. $$ -1 \le \sin \left( \frac{1}{x+1} \right) \le 1 $$ Now since we're interested in the limit of $(x+1)^2 \sin \left( \frac{1}{x+1} \right)$ we should try to bound that. Do you see where to go from here?</p>
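Numerically, the bound this hint is steering toward behaves as expected (a sketch assuming the intended squeeze $|(x+1)^2\sin(1/(x+1))|\le(x+1)^2$, which forces the limit to $0$):

```python
from math import sin

# |(x+1)^2 sin(1/(x+1))| <= (x+1)^2, and the bound -> 0 as x -> -1
for x in (-1 + 1e-2, -1 - 1e-3, -1 + 1e-4, -1 - 1e-5):
    g = (x + 1) ** 2 * sin(1 / (x + 1))
    # small multiplicative slack for floating-point rounding
    assert abs(g) <= (x + 1) ** 2 * (1 + 1e-12)
```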
2,712,631
<p>The problem is to use a power series to evaluate the integral to six decimal places. The upper limit of integration is one and the lower limit of integration is zero.</p> <p>To start the problem I factored $x$ out and focused on $\arctan(3x)$. I knew that by taking the derivative I could get this expression in the form $\frac{1}{1-r}$. After I put it in that form I integrated and solved for $c$. I then put $x$ back into the equation and integrated.</p> <p>Is this an effective method for solving this problem? I determined that the 19th term would give me an answer to the sixth decimal place; is this correct?</p> <p><img src="https://i.stack.imgur.com/0BfNf.jpg" alt="Here is the problem worked through in detail"></p>
Claude Leibovici
82,404
<p><em>Just added for your curiosity.</em></p> <p>After geometryfan's answer, writing $$\arctan(3x)=\sum_{n=0}^{p-1}(-1)^n\frac{(3x)^{2n+1}}{2n+1}$$ you search $p$ such that, for a given value of $x$ $$\frac{(3x)^{2p+1}}{2p+1} \leq \epsilon$$ For simplicity, let $3x=a$ and $2p+1=k$.</p> <p>The solution of $a^k=\epsilon\,k$ is given in terms of <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> $$k=-\frac{W\left(-\frac{\log (a)}{\epsilon }\right)}{\log (a)}$$</p> <p>Since the argument is large, you can approximate the value of $W(z)$ using the expansion given in the linked page $$W(z)=L_1-L_2+\frac{L_2}{L_1}+\frac{L_2(-2+L_2)}{2L_1^2}+\cdots$$ where $L_1=\log(z)$ and $L_2=\log(L_1)$. </p> <p>Applied to the case of $a=0.3$, this would give $$\left( \begin{array}{ccc} \epsilon &amp; k &amp; p \\ 10^{- 1} &amp; 1.54727 &amp; 1 \\ 10^{- 2} &amp; 2.93722 &amp; 1 \\ 10^{- 3} &amp; 4.49314 &amp; 2 \\ 10^{- 4} &amp; 6.14396 &amp; 3 \\ 10^{- 5} &amp; 7.85188 &amp; 4 \\ 10^{- 6} &amp; 9.59722 &amp; 5 \\ 10^{- 7} &amp; 11.3688 &amp; 6 \\ 10^{- 8}&amp; 13.1596 &amp; 7 \\ 10^{- 9} &amp; 14.9652 &amp; 7 \\ 10^{- 10} &amp; 16.7825 &amp; 8 \\ 10^{- 11} &amp; 18.6091 &amp; 9 \\ 10^{- 12} &amp; 20.4435 &amp; 10 \\ 10^{- 13} &amp; 22.2844 &amp; 11 \\ 10^{- 14} &amp; 24.1307 &amp; 12 \\ 10^{- 15} &amp; 25.9818 &amp; 13 \end{array} \right)$$ Checking for $p=13$, we have $\frac {0.3^{27}}{27} \approx 2.82 \times 10^{-16} &lt; 10^{-15}$ while $\frac {0.3^{25}}{25} \approx 3.39 \times 10^{-15} &gt; 10^{-15}$.</p>
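The last row of the table is easy to verify directly (a quick check of the claim at $\epsilon=10^{-15}$, added here for completeness):

```python
# with a = 0.3 the p-th tail term of the arctan series is a^(2p+1)/(2p+1)
assert 0.3 ** 27 / 27 < 1e-15   # p = 13 (k = 27) suffices for eps = 1e-15
assert 0.3 ** 25 / 25 > 1e-15   # p = 12 (k = 25) does not
```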
238,577
<p>The following working program uses <a href="https://mathematica.stackexchange.com/questions/58560/graph-and-markov-chain">Graph and Markov Chain</a></p> <pre><code>P = {{1/2, 1/2, 0, 0}, {1/2, 1/2, 0, 0}, {1/4, 1/4, 1/4, 1/4}, {0, 0, 0, 1}}; proc = DiscreteMarkovProcess[3, P]; Graph[proc, GraphStyle -&gt; &quot;DiagramBlue&quot;, EdgeLabels -&gt; With[{sm = MarkovProcessProperties[proc, &quot;TransitionMatrix&quot;]}, Flatten@Table[DirectedEdge[i, j] -&gt; sm[[i, j]], {i, 2}, {j, 2}]]] sm = MarkovProcessProperties[proc, &quot;TransitionMatrix&quot;] sm == P </code></pre> <p>Since I couldn't make it work for larger matrices, I clarified in the last two lines that sm is just P. But, if I try to replace sm by P in the first part, all hell breaks loose. So, I tried copy paste changing just P to a larger matrix, but this does not work. Why?</p> <pre><code>P = {{0, 1/4, 1/2, 1/4, 0, 0}, {0, 1, 0, 0, 0, 0}, {0, 0, 1/3, 0, 2/3, 0}, {0, 0, 0, 0, 0, 1}, {0, 0, 1/4, 0, 3/4, 0}, {1/4, 0, 0, 0, 3/4, 0}}; P // MatrixForm proc = DiscreteMarkovProcess[1, P]; Graph[proc, EdgeLabels -&gt; With[{sm = MarkovProcessProperties[proc, &quot;TransitionMatrix&quot;]}, Flatten@Table[DirectedEdge[i, j] -&gt; sm[[i, j]], {i, 6}, {j, 6}]]] </code></pre>
florin
54,979
<p>@kglr Thanks! Your code can be shortened a bit, since tm2 is precisely P, to</p> <pre><code> P = {{0, 1/4, 1/2, 1/4, 0, 0}, {0, 1, 0, 0, 0, 0}, {0, 0, 1/3, 0, 2/3, 0}, {0, 0, 0, 0, 0, 1}, {0, 0, 1/4, 0, 3/4, 0}, {1/4, 0, 0, 0, 3/4, 0}}; proc = DiscreteMarkovProcess[1, P]; Graph[proc, EdgeLabels -&gt; {DirectedEdge[i_, j_] :&gt; P[[i, j]]}] </code></pre> <p>The command RuleDelayed (<code>:&gt;</code>) seems very useful :)</p>
288,974
<p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, but why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity :</p> <p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p> <h2>Usual way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both sides by $\displaystyle 1 \over \cos^2 \theta$)</p> <p>$\implies \tan ^2 \theta = \tan^2\theta$</p> <p>$\implies LHS=RHS$</p> <p>$\therefore proved$</p> <h2>Funny way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p> <p>$\implies 0 = 0$</p> <p>$\therefore proved$</p> <p>Please explain why this is wrong.</p>
Kaz
28,530
<p>You are confusing derivation with proving.</p> <p>If you want to prove that some $X = Y$ statement is true, you have to show that that statement can be derived from some other statement which is already known to be true. You're doing it backwards: you're deriving from $X = Y$ some statement which is true, namely $0 = 0$.</p> <p>To generalize this, let us observe that in a typical mathematical proof-by-algebraic-derivation, you start with some questionable statement $S_0$ and then you go through some derivations to show that it is equivalent to some truthful statement $S_T$: $S_0\Leftrightarrow S_1\Leftrightarrow S_2\cdots\Leftrightarrow S_T$. This proof method works only because the arrows go both ways, and so the opposite derivation is possible: although you are proceeding from $S_0$ toward $S_T$, you are in fact at the same time showing that $S_0$ can be derived from $S_T$. For this method to work, however, none of those arrows must be a one way implication (denoted by $\implies$). If you have such a one-way "trap door" in the logic, then the crucial reversal of implication cannot happen, and so the proof does not hold. Even though you arrive at a truthful statement $S_T$, that statement does not imply the truth of your starting proposition $S_0$.</p> <p>Under most derivation steps in algebra, you don't have to worry about the direction because the derivations establish equivalence: this means that there is a two-way implication between the statements. For instance $3X = 3Y$ can be derived from $X = Y$, but also $X = Y$ can be derived from $3X = 3Y$. These statements are equivalent, and so we can connect them with a double arrow: $X = Y \Leftrightarrow 3X = 3Y$.</p> <p>Some derivation steps, however, only go one way, because they involve some "trapdoor" function: an operation which cannot be reversed, because it erases information. One example of a trapdoor function is multiplication by zero, around which your question revolves. 
Another example is taking a remainder in a division.</p> <p>For instance, suppose $X$ and $Y$ are integers. Then we have $$X = Y \implies (X\mod 3) = (Y\mod 3)$$</p> <p>(If X equals Y, then the remainder left when dividing X by 3 is the same as the remainder left when dividing Y by 3). However, the converse isn't true. Just because two numbers have the same remainder when divided by three doesn't mean that they are equal.</p> <h3>More about "trap doors"</h3> <p>More formally, we can define "trapdoor function" as any function which fails to be <em>one-to-one</em> (or <em>injective</em>); one-to-one functions, by contrast, are exactly the ones that have inverse functions. If $g$ is an injective function covering the entire domain of $X$ and $Y$, then we have $X = Y \Leftrightarrow g(X) = g(Y)$. If $h$ fails to be injective (is not invertible) then we have only $X = Y \implies h(X) = h(Y)$. The function $g$ does not have to be <em>onto</em>, only <em>one-to-one</em>.</p> <p>An example of an injective function is $e^x$, over the real numbers. It is a one to one function in that it maps each domain value to a unique value in its range. (But it is not onto: it does not map a domain value to every real number: its output is only positive real numbers. That doesn't matter.)</p> <p>Therefore, we know that $X = Y \Leftrightarrow e^X = e^Y$. In the real domain only!</p> <p>In the domain of complex numbers, $e^x$ is not one to one. More than one value of $x$ will map to the same range value. The inverse function, $\ln x$, is not actually a function in the complex plane, because it is multi-valued. (When the complex logarithm is used as a function anyway, it has to be restricted to a particular "branch".) Therefore if our proof involves complex numbers, $X = Y \Leftrightarrow e^X = e^Y$ does not hold. Complex exponentiation is a "trap door" and so the implication only goes one way: $X = Y \implies e^X = e^Y$.</p>
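The two trapdoors discussed above are easy to demonstrate concretely (a tiny illustration, not part of the original answer):

```python
# multiplying by zero and reducing mod 3 both erase information:
# equal outputs do not imply equal inputs
assert 1 * 0 == 2 * 0 and 1 != 2
assert 4 % 3 == 7 % 3 and 4 != 7

# an injective map (here x -> 3*x on the integers) preserves distinctness,
# so equal outputs do imply equal inputs
assert len({3 * x for x in range(100)}) == 100
```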
288,974
<p>Alright this maybe really funny but I want to know why is this wrong. We often come across identities which we prove by multiplying both the sides of the identity by a certain entity but why don't we multiply it by $0$. That way every identity will be proved in one single line. That is so stupid. I mean, by that way we may also say that $1=2=3$. I know it is wrong. But why? I mean if we can multiply both the sides by $2$ then why not by $0$. For example, consider the following trigonometric identity :</p> <p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p> <h2>Usual way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p> <p>$\implies \tan ^2 \theta = \tan^2\theta$</p> <p>$\implies LHS=RHS$</p> <p>$\therefore proved$</p> <h2>Funny way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both the sides by $0$)</p> <p>$\implies 0 = 0$</p> <p>$\therefore proved$</p> <p>Please explain why is this wrong.</p>
notes
39,595
<p>The "usual way" is often used with inequalities, but we take care to make sure that we can reverse our steps. For instance, suppose we want to prove $$a+b \geq 2\sqrt{ab}.$$ Since both sides are positive, this statement is true iff $$(a+b)^2 \geq 4ab,$$ which is true iff $$(a+b)^2-4ab \geq 0 \Leftrightarrow (a-b)^2 \geq 0.$$</p> <p>Since this last inequality is true, and we can reverse all of our steps, the original inequality is true. Notice that at each step, we carefully ensure that we only do reversible operations (each step we want if and only if statements).</p> <p>When you try to multiply by zero, however, that's a non-reversible step (you can't divide by zero). In fact, the "usual way" that you explain is not valid either, because you divide by $\cos^2{\theta}$, which is reversible only if $\cos \theta \neq 0$. Thus, neither really constitute a proof of the identity $\sin^2 \theta = \tan^2 \theta \cos^2 \theta$. </p> <p>Why does it matter that the steps are reversible? Otherwise, you CAN'T go back from a true statement (0 = 0) to prove the original statement ($\sin^2 \theta = \tan^2 \theta \cos^2 \theta$), and so you have not really proved what you wanted to prove. </p>
3,110,660
<p>Let <span class="math-container">$f:\mathbb{R}^n\to \mathbb{R}^n$</span> be a function of class <span class="math-container">$C^1$</span> such that <span class="math-container">$Df(x)$</span> is invertible for all <span class="math-container">$x\in \mathbb{R}^n$</span>. Show that <span class="math-container">$\{x\in \mathbb{R}^n:f(x)=0\}$</span> is a countable set.</p> <p>My attempt:</p> <p>I prove that, if <span class="math-container">$a\in \mathbb{R}^n$</span>, then there exists <span class="math-container">$\delta&gt;0$</span> such that, if <span class="math-container">$x\in B_{\delta}(a)$</span> then <span class="math-container">$f(x)\neq f(a)$</span>.</p>
zhw.
228,045
<p>Let <span class="math-container">$Z$</span> be the zero set of <span class="math-container">$f.$</span> If <span class="math-container">$Z$</span> were uncountable, it would have a limit point <span class="math-container">$a\in \mathbb R^n.$</span> Thus there would exist a sequence of distinct points <span class="math-container">$x_k\in Z$</span> converging to <span class="math-container">$a.$</span> By the continuity of <span class="math-container">$f,$</span> <span class="math-container">$a\in Z.$</span> But that contradicts your statement "if <span class="math-container">$x\in B_{\delta}(a)$</span> then <span class="math-container">$f(x)\neq f(a)$</span>."</p>
292,122
<p>This question actually came out of another question. In some other post, I saw a reference and, going through it, found this (with $n&gt;0$).</p> <p>Solve for n explicitly without calculator: $$\frac{3^n}{n!}\le10^{-6}$$</p> <p>I would appreciate a hint rather than an explicit solution.</p> <p>Thank You.</p>
Ross Millikan
1,827
<p>I would use <a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">Stirling's approximation</a> $n!\approx \frac {n^n}{e^n}\sqrt{2 \pi n}$ to get $\left( \frac {3e}n\right)^n \frac 1{\sqrt{2 \pi n}} \lt 10^{-6}$. Then for a first cut, ignore the square root part and set $3e \approx 8$ so we have $\left( \frac 8n \right)^n \lt 10^{-6}$. Now take the base $10$ log and get $n(\log 8 - \log n) \lt -6$. Knowing that $\log 2 \approx 0.3$, it looks like $16$ will not quite work, as this will become $16(-0.3)=-4.8$, which is not less than $-6$. Each increment of $n$ lowers it by a factor $5$ around here, or a log of $0.7$. We need a couple of those, so I would look for $18$.</p> <p>Added: the square root I ignored is worth a factor of $10$, which is what makes $17$ good enough.</p> <p><a href="http://www.wolframalpha.com/input/?i=3%5E17/17!" rel="nofollow">Alpha</a> shows that $17$ is good enough.</p>
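The cutoff checks out exactly (a quick verification, not part of the original answer):

```python
from math import factorial

assert 3 ** 17 / factorial(17) < 1e-6   # n = 17 satisfies the inequality
assert 3 ** 16 / factorial(16) > 1e-6   # n = 16 does not
```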
292,122
<p>This question actually came out of another question. In some other post, I saw a reference and, going through it, found this (with $n&gt;0$).</p> <p>Solve for n explicitly without calculator: $$\frac{3^n}{n!}\le10^{-6}$$</p> <p>I would appreciate a hint rather than an explicit solution.</p> <p>Thank You.</p>
half-integer fan
54,125
<p>How about we <strong>overestimate</strong> $3^n$ as $\sqrt{10}^n$ and <strong>underestimate</strong> every contribution to the factorial beyond $10$ as only $10$? Then $$\frac{3^n}{n!} \le \frac{10^5 \cdot 10^{\frac{n-10}2}}{10! \cdot 10^{n-10}} $$ Since $10!$ is about $10^{6.5}$ we have $$ 10^{-1.5} \cdot 10^{-\frac{n-10}2} \le 10^{-6} $$ After taking the $\log_{10}$ we have $$ -1.5 -\frac{n-10}2 \le -6$$ $$ 1.5 + \frac{n-10}2 \ge 6$$ $$ \frac{n-10}2 \ge 4.5$$ $$ n-10 \ge 9 $$ $$ n \ge 19$$ Since you asked for <em>a</em> solution and not the minimal solution this should suffice, and is more plausibly done without a calculator. The only real mental calculation was remembering the approximate value of $10!$ (well, and $3^2$, but let's be reasonable).</p> <p>Note that if you know the factorials for higher numbers (or exact values for some powers of 3, such as $3^7 = 2187$) then you could refine this argument using higher $n$ for the "fixed part".</p>
801,562
<p>We consider that $R$ is a commutative ring with $1_R$.</p> <p>Each $c \in R^*$ (if we see it as a constant polynomial) divides each polynomial of $R[X]$.</p> <p>($c \in R^*$ means that $c$ is invertible.)</p> <p>I haven't understood this. Could you explain it to me?</p> <p>Does it mean that if we have a polynomial $p(X) \in R[X]$, then $\frac{p}{c} \in R[X]$? If yes, why is it like that?</p>
rschwieb
29,335
<p>If "reversible" mean "invertible," then this is true because invertible elements divide everything. (This is true in any ring with units and doesn't depend on the polynomial ring.)</p> <p>If $u$ is invertible, then $a=(au^{-1})u$, and this says that $u|a$.</p>
3,142,339
<p>Let <span class="math-container">$p$</span> be a real number. I am looking for all <span class="math-container">$(x,y)$</span> such that <span class="math-container">$\ln[e^{x}+e^{y}]=px+(1-p)y$</span>. My effort:</p> <p>Take exponent of both sides to obtain <span class="math-container">$e^{x}+e^{y}=e^{px}e^{(1-p)y}$</span> and then let <span class="math-container">$X=e^{x}, Y=e^{y}$</span>, so that <span class="math-container">$X+Y=X^{p}Y^{1-p}$</span>. How can I proceed from here?</p>
Lubin
17,760
<p>Well, Vasily has put his finger on one problem, but I would like to point out a much more serious one.</p> <p>We write a continued fraction to get a number that is the limit of the convergents, that is, of the expressions that you get when you cut your continued fraction off, to be a finite c.f.</p> <p>The first convergent is <span class="math-container">$\frac1i$</span>, no problem, but the second is <span class="math-container">$$\frac1{i+\frac1i}=\frac10\,,$$</span> a most unfortunate development. My recommendation would be to go on to other complex continued fractions, where the partial denominators are rather larger than <span class="math-container">$i$</span>.</p>
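The breakdown at the second convergent is immediate to check numerically (a tiny illustration, added here):

```python
# i + 1/i = i - i = 0, so the second convergent 1/(i + 1/i) is undefined
z = complex(0, 1)
assert abs(z + 1 / z) < 1e-15
try:
    _ = 1 / (z + 1 / z)
    raise AssertionError("expected division by zero")
except ZeroDivisionError:
    pass
```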
1,507,181
<p><a href="https://i.stack.imgur.com/nuhUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nuhUB.png" alt="enter image description here"></a></p> <p><em>We know $G(0) = 0$</em></p> <p>Okay, so I have the above graph but I'm having a difficult time translating it into the graph of $G(x)$.</p> <p>What I know so far is that the slope changes abruptly from 0 to 2 at $x=0$. I also know that the slope gets extremely close to -1 but is never -1 itself, and it gets really close to 0 but isn't 0 itself. Finally, I know that $G(x)$ has a negative slope for $x&lt;0$ and a positive slope for $x&gt;0$.</p> <p>What I don't understand is how we can show that the slope is getting really close to -1 or 0 on $G(x)$? </p>
tomi
215,986
<p>Another way to attack this problem is to draw a tangent field diagram. This is usually done when the gradient is a function of both $x$ and $y$, but there is no reason not to do so in this instance.</p> <p>The basic idea is to draw a short line with gradient $G'(x)$ at every point $(x,y)$:</p> <p><a href="https://i.stack.imgur.com/RlN15.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RlN15.jpg" alt="enter image description here"></a> </p> <p>You then take your starting point $(0,0)$ and draw a curve that would fit the general pattern.</p>
934,660
<p>Prove that for $ n \geq 2$, n has at least one prime factor.</p> <p>I'm trying to use induction. For n = 2, 2 = 1 x 2. For n > 2, n = n x 1, where 1 is a prime factor. Is this sufficient to prove the result? I feel like I may be mistaken here.</p>
Francesco Alem.
175,276
<p>let $n \in \mathbb{N}, n \geq 2$ and let's consider cases:</p> <p>$n\mbox{ is prime}$: done. </p> <p>$n\mbox{ is not prime} \iff n \mbox{ is composite} \implies n=\prod_{i=1}^j P_i^{q_i}$, where $P_k \mbox { is prime}$ and $q_k \mbox { is its exponent}$ $\implies$ $n$ has prime factors: done.</p>
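The case split can also be run as code. A small Python sketch (trial division is my own choice, not part of the original argument): the least divisor greater than 1 of any $n \ge 2$ is prime, so every such $n$ has a prime factor.

```python
# The smallest divisor > 1 of n is necessarily prime: any proper
# factor of it would be a smaller divisor of n.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n): n itself is prime

def is_prime(n):
    return n >= 2 and smallest_prime_factor(n) == n

for n in range(2, 2000):
    p = smallest_prime_factor(n)
    assert n % p == 0 and is_prime(p)   # a prime factor always exists
```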
3,604,745
<p>(I Prefer to open new question because those are my homework and i want to understand my way)</p> <p>In my homework i need to solve the integral: </p> <p><span class="math-container">$$ \int \frac{e^x}{2e^x + \sqrt{e^x}}dx $$</span></p> <p>I tried the substitution method: </p> <p><span class="math-container">$$ t = e^x \Rightarrow dt = e^x dx $$</span></p> <p>Therefore i get: </p> <p><span class="math-container">$$ \int \frac{dt}{2t + \sqrt{t}} $$</span></p> <p>But what now? How can i proceed? Is this the right way? </p> <p>I prefer hint and not whole answer - those are my homework</p> <p>Thanks. </p>
jeea
550,450
<p>Following your work:</p> <p><span class="math-container">$$\int \frac{dt}{2t + \sqrt{t}} = \int\frac{dt}{\sqrt{t}(2\sqrt{t}+1)} = \ln(2\sqrt{t}+1)+c$$</span></p> <p>This is because the derivative of <span class="math-container">$2\sqrt{t}+1$</span>, namely <span class="math-container">$\frac{1}{\sqrt{t}}$</span>, is present, so essentially we have <span class="math-container">$$\int \frac{d(2\sqrt{t}+1)}{2\sqrt{t}+1}$$</span></p> <p>whose answer is <span class="math-container">$\ln(2\sqrt{t}+1)$</span></p>
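A quick numeric check of the antiderivative (a Python sketch; the sample points are arbitrary): the central difference quotient of $\ln(2\sqrt{t}+1)$ matches the integrand.

```python
import math

# F(t) = ln(2*sqrt(t) + 1) should be an antiderivative of 1/(2t + sqrt(t)).
def F(t):
    return math.log(2 * math.sqrt(t) + 1)

def integrand(t):
    return 1 / (2 * t + math.sqrt(t))

h = 1e-6
for t in [0.5, 1.0, 4.0, 9.0]:
    deriv = (F(t + h) - F(t - h)) / (2 * h)   # central difference
    assert abs(deriv - integrand(t)) < 1e-6
```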
747,400
<p>I've run into something confusing.</p> <p>The problem is that I have to find the power series representation of $g(x)$ using the given $f(x)$, specifically $g(x) = \ln(1 - 3x)$ using $f(x) = \frac{1}{1-x}$.</p> <p>Now, I did it, but my result is incorrect: $$\ln(1-3x) = \int \frac{dx}{1-3x} = \int \sum 3^{k}x^{k} = \sum \frac{3^{k}x^{k+1}}{k+1}$$</p> <p>The answer at the back of my textbook is: $$\sum \frac{3^k x^k}{k}$$</p> <p>My result for the interval of convergence was however correct, which leads me to believe that there is maybe a mistake in the book's answer? </p> <p>Any help is appreciated, thanks!</p>
anon
11,763
<p>If you interpret $\underbrace{A\oplus\cdots\oplus A}_n$ as $\bigoplus_{i\in I}A$ for infinite $n$ then you're right back at the definition.</p>
747,400
<p>I've run into something confusing.</p> <p>The problem is that I have to find the power series representation of $g(x)$ using the given $f(x)$, specifically $g(x) = \ln(1 - 3x)$ using $f(x) = \frac{1}{1-x}$.</p> <p>Now, I did it, but my result is incorrect: $$\ln(1-3x) = \int \frac{dx}{1-3x} = \int \sum 3^{k}x^{k} = \sum \frac{3^{k}x^{k+1}}{k+1}$$</p> <p>The answer at the back of my textbook is: $$\sum \frac{3^k x^k}{k}$$</p> <p>My result for the interval of convergence was however correct, which leads me to believe that there is maybe a mistake in the book's answer? </p> <p>Any help is appreciated, thanks!</p>
rschwieb
29,335
<p>Sure, it is always true that a free module (under this definition) will be isomorphic to $A^{(I)}$ for some index set $I$, no matter if the set $I$ is finite or infinite.</p> <p>Suppose you are given $M=\bigoplus_{i\in I} M_i$ with isomorphisms $\phi_i:M_i\to A$. This can be composed with the map that injects $A$ into the $i$th position in $\bigoplus_{i\in I}A$. Abusing notation, use $\phi_i$ to denote this composition, so that $\phi_i:M_i\to \bigoplus _{i\in I} A$. </p> <p>Then you can verify that $\phi: M\to \bigoplus_{i\in I}A$ given by $\phi(\sum m_i)=\sum\phi_i (m_i)$ is an isomorphism.</p> <p>One thing worth pointing out here is the difference between notations $A^n$ and $A^{(n)}$. Usually $A^n:=\prod_{i\in I}A$, and as you probably know $\prod_{i\in I}A\ncong \bigoplus_{i\in I}A$ in general. But $A^I\cong A^{(I)}$ when $I$ is a finite set, so it's safe to use $A^n$ to mean either the product or the sum since $n$ indicates the number of copies is finite.</p>
173,387
<p>How can I properly indent long code in <em>Mathematica</em>? Are there any best practices?</p>
GenericAccountName
38,159
<p>I have a feeling many are going to find things they slightly dislike about my indentation; this is a totally subjective question based on preference. I don't think there's a standardized best practice for coding style.</p> <p>For me, similarly to other languages, I have the brackets line up vertically with the start of the function name. Having commas on separate lines can make it clear where args are separated for some functions. As things get more complicated, tab over.</p> <p>I usually use this kind of indentation in a "code" style cell rather than an "input" cell, or use a standard text editor. </p> <p>Here's my basic form:</p> <pre><code>func[x_] := Module[ { foo = 1, bar = 2 }, If[cond, Print[foo]; , (* Else *) Print[bar]; ]; Switch[x, _Integer, Print[2 * val]; , _String, Print["2 " &lt;&gt; val]; , _, Print["Default"]; ]; (* Some random code from an answer I posted to a different question *) Export["test.gif", ImageResize[#, 100] &amp; /@ Table[ ImageTrim[ Import["ExampleData/coneflower.jpg"] , {{0, 0}, {m, m}} ] , {m, 100, 50, -5} ] , "GIF" ]; ] </code></pre>
3,055,324
<p>I need some help with constructing a proof for the following statement: <span class="math-container">$ \frac{P_1 P_2}{\mathrm{hcf}(P_1,P_2)} = \mathrm{lcm}(P_1,P_2)$</span> where <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> are polynomials with real coefficients.</p> <p>I know how to do the same for integers using prime factors and their exponents, but I am not sure where to go with polynomials.</p>
Joel Pereira
590,578
<p>Think of the irreducible factors of <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> as your prime factors. Suppose <span class="math-container">$P_1 = \gcd(P_1,P_2)\,(q_1q_2\ldots q_n)$</span> and <span class="math-container">$P_2 = \gcd(P_1,P_2)\,(r_1r_2\ldots r_m)$</span>. Thus <span class="math-container">$$\frac{P_1P_2}{\gcd(P_1,P_2)}=\gcd(P_1,P_2)(q_1\ldots q_n)(r_1 \ldots r_m).$$</span> Note that the numerator has <span class="math-container">$[\gcd(P_1,P_2)]^2$</span> as a factor.</p> <p>So the RHS is a common multiple of <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>. You should be able to show that if there is a "smaller" lcm, then we can get a "larger" gcd.</p>
3,055,324
<p>I need some help with constructing a proof for the following statement: <span class="math-container">$ \frac{P_1 P_2}{\mathrm{hcf}(P_1,P_2)} = \mathrm{lcm}(P_1,P_2)$</span> where <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> are polynomials with real coefficients.</p> <p>I know how to do the same for integers using prime factors and their exponents, but I am not sure where to go with polynomials.</p>
William Grannis
332,311
<p>Do it the exact same way. Suppose that the hcf/gcd of <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> is <span class="math-container">$G$</span>. Because <span class="math-container">$G$</span> is a factor of <span class="math-container">$P_1$</span>, there exists an <span class="math-container">$R_1$</span> such that <span class="math-container">$P_1$</span> is equal to <span class="math-container">$R_1G$</span>, and likewise <span class="math-container">$P_2$</span> equals some <span class="math-container">$R_2G$</span>. </p> <p><span class="math-container">$P_1P_2 = R_1R_2GG$</span></p> <p><span class="math-container">${P_1P_2 \over G} = R_1R_2G$</span></p> <p><span class="math-container">$R_1$</span> and <span class="math-container">$R_2$</span> can have no factors in common as any factor <span class="math-container">$H$</span> could be multiplied by <span class="math-container">$G$</span> to obtain a new GCD. </p> <p>Because <span class="math-container">$R_1R_2G$</span> is a multiple of <span class="math-container">$R_1G$</span>, it is a multiple of <span class="math-container">$P_1$</span>, and likewise for <span class="math-container">$P_2$</span>. It is a multiple of both, and no factor can be removed which would preserve its multiplicity. Therefore, it is the Least Common Multiple.</p> <p>Q.E.D.</p>
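The identity can be exercised on a concrete pair. A Python sketch with ad hoc helpers (coefficient lists over $\mathbb{Q}$, highest degree first); the particular polynomials $(x-1)(x+2)$ and $(x-1)(x+3)$ are my own choice for illustration:

```python
from fractions import Fraction as Fr

# Polynomials as coefficient lists over Q, highest degree first.
def pmul(a, b):
    r = [Fr(0)] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            r[i + j] += ca * cb
    return r

def pdivmod(a, b):
    a, q = a[:], []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)                  # leading coefficient is now zero
    while a and a[0] == 0:        # strip leading zeros of the remainder
        a.pop(0)
    return q, a

def pgcd(a, b):
    while b:
        a, b = b, pdivmod(a, b)[1]
    return [c / a[0] for c in a]  # normalized to be monic

g = [Fr(1), Fr(-1)]                        # x - 1
P1 = pmul(g, [Fr(1), Fr(2)])               # (x-1)(x+2)
P2 = pmul(g, [Fr(1), Fr(3)])               # (x-1)(x+3)

G = pgcd(P1, P2)
assert G == g                              # hcf is x - 1

lcm, rem = pdivmod(pmul(P1, P2), G)
assert rem == []                           # G divides P1*P2 exactly
assert pdivmod(lcm, P1)[1] == []           # the lcm is a multiple of P1
assert pdivmod(lcm, P2)[1] == []           # ... and of P2
assert len(lcm) - 1 == 3                   # deg = deg P1 + deg P2 - deg G
```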
549,347
<p>How would I solve the following question and determine if it's true or false?</p> <p>1. $\forall x \in R , \exists y\in R, x^2+y^2=-1$</p> <p>2. $\exists x\in R,\forall y \in R, x^2+y^2=-1$</p> <p>For the first one I think I can justify that it is false:</p> <p>for any arbitrary $x$, $y$ would have to be</p> <p>$y=\sqrt{-x^2-1}$, which would not be a real number.</p> <p>For the second one I can say that the sum of two squares cannot be negative. So it would be false?</p>
meh
70,191
<p>Another simple way to think about this question is to note that for any $$ x \in \mathbb{R}, \quad x^2 \geq 0. $$</p> <p>Then $$x^2 + y^2 \geq 0. $$</p>
2,566,193
<p>Let's say we draw $49$ numbers from $1,\ldots,100$ without replacement. Then we use the arithmetic mean of the sample: $$M=\frac{1}{49}\sum_{i=1}^{49}X_i$$</p> <p>They gave me the hint that $M$ is roughly normally distributed despite the dependencies in the draws. Now I have to determine a symmetric interval $J$ around the mean with $\mathbb{P}(M\in J)\approx0.95$</p> <p>My idea was to use the variance and covariance to solve this problem.</p> <p>$$Var\left(\frac{1}{49}\left(X_1+\ldots+X_{49}\right)\right)$$</p> <p>$$=\frac{1}{49^2}\left(49\cdot Var(X_1)+49\cdot48\cdot Cov(X_1,X_2)\right)$$</p> <p>But I'm not sure how this works</p>
Dietrich Burde
83,966
<p>In addition to Jose's answer, we can give explicit $3\times 3$ matrices $A$ satisfying $A^2+4A+3I=0$, with different traces, e.g., $A=-I$, or $A=-3I$, or $$ A=\begin{pmatrix} 0 &amp; 1/r &amp; -4 \cr r &amp; 0 &amp; -4r \cr 1 &amp; 1/r &amp; -5\end{pmatrix} $$</p> <p>$$ A=\begin{pmatrix} 0 &amp; 9/r &amp; -12 \cr r &amp; 0 &amp; -4r \cr 1 &amp; 3/r &amp; -7\end{pmatrix} $$</p>
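The first matrix can be checked directly. A Python sketch using exact rational arithmetic, with $r=2$ chosen arbitrarily:

```python
from fractions import Fraction as Fr

# Verify A^2 + 4A + 3I = 0 for the first matrix above (r = 2 is an
# arbitrary sample value), and that its trace differs from -I and -3I.
r = Fr(2)
A = [[Fr(0), 1 / r, Fr(-4)],
     [r,     Fr(0), -4 * r],
     [Fr(1), 1 / r, Fr(-5)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)
for i in range(3):
    for j in range(3):
        assert A2[i][j] + 4 * A[i][j] + (3 if i == j else 0) == 0

trace = sum(A[i][i] for i in range(3))
assert trace == -5          # while tr(-I) = -3 and tr(-3I) = -9
```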
1,842,340
<p>A polynomial with integer coefficients is called primitive if its coefficients are relatively prime. For example, $$3{x^2} + 7x + 9$$ is primitive while $$10{x^2} + 5x + 15$$ is not.</p> <p>(a) Prove that the product of two primitive polynomials is primitive.</p> <p>(b) Use this to prove Gauss's Lemma: If a polynomial with integer coefficients can be factored into polynomials with rational coefficients, it can also be factored into primitive polynomials with integer coefficients</p>
Jyrki Lahtonen
11,619
<p>Supplementing Benjamin Lindqvist's answer with the details of how this can be shown <em>in this case</em> from first principles. All this assumes that the codes are binary; for larger field alphabets the claim is false.</p> <p>Assume that a binary linear code $C$ of length $n=k+2$, dimension $k$ and minimum distance $d\ge3$ exists. The code is defined by two check equations, so its parity check matrix $H$ has type $2\times n$. In particular its columns consist of two bits each, so they all are one of $\binom 0 0$, $\binom 0 1$, $\binom 1 0$, $\binom 1 1$.</p> <p>Obviously $H$ cannot contain a column $\binom 0 0$. For if column $i$ were all zeros, then the code $C$ would have a word of weight one, with the sole $1$ at position $i$. Less obvious but still not difficult to see is that $H$ cannot have two equal columns. For if the columns $i$ and $j$ were equal, then the vector $x$ of weight two with ones at positions $i$ and $j$ would satisfy the equation $Hx^T=\binom 0 0$. Meaning that $x\in C$, in violation of the assumption that the minimum weight of $C$ is $\ge3$.</p> <p>So all the $n$ columns of $H$ are non-zero and distinct. Therefore there can be at most three of them. In other words $3\ge n=k+2$ and $1\ge k$.</p>
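For the smallest case $k=2$ (so $n=4$), the conclusion can also be confirmed by exhaustion. A Python sketch brute-forcing all $2\times 4$ parity check matrices:

```python
from itertools import product

# Every binary code of length 4 cut out by two parity checks contains
# a nonzero codeword of weight <= 2, so minimum distance 3 is impossible.
num_matrices = 0
for H in product([0, 1], repeat=8):       # all 2x4 check matrices
    rows = (H[:4], H[4:])
    min_wt = None
    for x in product([0, 1], repeat=4):
        if x == (0, 0, 0, 0):
            continue
        if all(sum(r[i] * x[i] for i in range(4)) % 2 == 0 for r in rows):
            w = sum(x)
            min_wt = w if min_wt is None else min(min_wt, w)
    # the null space has dimension >= 4 - 2 = 2, so codewords exist
    assert min_wt is not None and min_wt <= 2
    num_matrices += 1
assert num_matrices == 256
```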
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
celtschk
34,930
<p>The difference is that in the complex plane, you've got a multiplication $\mathbb C\times\mathbb C\to\mathbb C$ defined, which makes $\mathbb C$ into a field (which basically means that all the usual rules of arithmetics hold.)</p>
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
Emily
31,475
<p>The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.</p> <p>In general, a function from $\mathbb{R}^n$ to itself is differentiable if there is a linear transformation $J$ such that the limit exists:</p> <p>$$\lim_{h \to 0} \frac{\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}}{\|\mathbf{h}\|} = 0$$</p> <p>where $\mathbf{f}, \mathbf{x}, $ and $\mathbf{h}$ are vector quantities.</p> <p>In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:</p> <p>$$\begin{align*} f(x+iy) &amp;\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\ u_x &amp;= v_y, \\ u_y &amp;= -v_x. \end{align*} $$</p> <p>These equations, if satisfied, do certainly give rise to such an invertible linear transformation as required; however, the definition of complex multiplication and division requires that these equations hold in order for the limit</p> <p>$$\lim_{h\ \to\ 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$</p> <p>to exist. Note the difference here: we divide by $h$, not by its modulus.</p> <hr> <p>In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could, if we wanted to), nor is division (which we could also attempt to do, given how we define multiplication). Not having these things means that differentiability in $\mathbb{R}^2$ is a little more "topological" -- we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that a non-singular linear transformation exists at the point of differentiation. This all stems from the generalization of the inverse function theorem, which can basically be approached completely topologically. </p> <p>In $\mathbb{C}$, since we can divide by $h$, because we have a rigorous notion of multiplication and division, we want to ensure that the derivative exists independent of the path $h$ takes. 
If there is some trickeration due to the path $h$ takes, we can't wash it away with topology quite so easily.</p> <p>In $\mathbb{R}^2$, the question of path independence is less obvious, and less severe. Such functions are <em>analytic</em>, and in the reals we can have differentiable functions that are not analytic. In $\mathbb{C}$, differentiability implies analyticity.</p> <hr> <p>Example:</p> <p>Consider $f(x+iy) = x^2-y^2+2ixy$. We have $u(x,y) = x^2-y^2$, and $v(x,y) = 2xy$. It is trivial to show that $$u_x = 2x = v_y, \\ u_y = -2y = -v_x,$$ so this function is analytic. If we take this over the reals, we have $f_1 = x^2-y^2$ and $f_2 = 2xy$, then $$J = \begin{pmatrix} 2x &amp; -2y \\ 2y &amp; 2x \end{pmatrix}.$$ Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.</p> <p>By contrast, consider $f(x+iy) = x^2+y^2-2ixy$. Then,</p> <p>$$u_x = 2x \neq -2x = v_y, \\ u_y = 2y = -v_x,$$</p> <p>so the first Cauchy-Riemann equation fails (except where $x=0$) and the function is not complex-differentiable.</p> <p>However, $$J = \begin{pmatrix} 2x &amp; 2y \\ -2y &amp; -2x \end{pmatrix}$$ which is not everywhere singular, so we can certainly obtain a real-valued derivative of the function in $\mathbb{R}^2$.</p>
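The path dependence is easy to observe numerically. A Python sketch comparing difference quotients along the real and imaginary directions (the base point $1+i$ and the step size are arbitrary choices):

```python
# f(z) = z^2 has a path-independent difference quotient; the example
# u = x^2 + y^2, v = -2xy does not, so it is not complex-differentiable.
def f(z):
    return z * z

def g(z):
    x, y = z.real, z.imag
    return (x * x + y * y) - 2j * x * y

def quotient(func, z, h):
    return (func(z + h) - func(z)) / h

z0, h = 1 + 1j, 1e-6

# f: same limit along real and imaginary directions, equal to 2z
qr = quotient(f, z0, h)
qi = quotient(f, z0, 1j * h)
assert abs(qr - qi) < 1e-4
assert abs(qr - 2 * z0) < 1e-4

# g: the two directions disagree (2-2i vs. -2-2i at this point)
assert abs(quotient(g, z0, h) - quotient(g, z0, 1j * h)) > 1
```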
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
Kendra Lynne
83,385
<p>Since everyone is defining the space, I figured I could give an example of why we use it (relating to your "Electrical Engineering" reference). The <span class="math-container">$i$</span> itself is what makes using complex numbers/variables ideal for numerous applications. For one, note that:</p> <p><span class="math-container">\begin{align*} i^1 &amp;= \sqrt{-1}\\ i^2 &amp;= -1\\ i^3 &amp;= -i\\ i^4 &amp;= 1. \end{align*}</span> In the complex (real-imaginary) plane, this corresponds to a rotation, which is easier to visualize and manipulate mathematically. These four powers "repeat" themselves, so for geometrical applications (versus real number manipulation), the math is more explicit.</p> <p>One of the immediate applications in Electrical Engineering relates to Signal Analysis and Processing. For example, Euler's formula: <span class="math-container">$$ re^{i\theta}=r\cos\theta +ir\sin\theta $$</span> relates complex exponentials to trigonometric formulas. Many times, in audio applications, a signal needs to be decomposed into a series of sinusoidal functions because you need to know their individual amplitudes (<span class="math-container">$r$</span>) and phase angles (<span class="math-container">$\theta$</span>), maybe for filtering a specific frequency:</p> <p><img src="https://i.stack.imgur.com/G9u91.gif" alt="Fourier Transform"></p> <p>This means the signal is being moved from the time-domain, where (time,amplitude) = <span class="math-container">$(t,y)$</span>, to the frequency domain, where (sinusoid magnitude,phase) = <span class="math-container">$(r,\theta)$</span>. The Fourier Transform (denoted "FT" in the picture) does this, and uses Euler's Formula to express the original signal as a sum of sinusoids of varying magnitude and phase angle. To do further signal analysis in the <span class="math-container">$\mathbb{R}^2$</span> domain isn't nearly as "clean" computationally.</p>
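Both facts are quick to confirm numerically with Python's <code>cmath</code> (the sample $r$ and $\theta$ are arbitrary):

```python
import cmath
import math

# The four-step cycle of powers of i
i = 1j
assert abs(i**1 - 1j) < 1e-12
assert abs(i**2 + 1) < 1e-12
assert abs(i**3 + 1j) < 1e-12
assert abs(i**4 - 1) < 1e-12

# Euler's formula: r*e^{i*theta} = r*cos(theta) + i*r*sin(theta)
r, theta = 2.5, 0.7
lhs = r * cmath.exp(1j * theta)
rhs = r * math.cos(theta) + 1j * r * math.sin(theta)
assert abs(lhs - rhs) < 1e-12
```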
12,359
<p>There are, IMO, quite a lot of badly tagged questions and... not very good tags. Some of them were discussed on meta recently; some of these discussions show, IMO, that users who created these tags don't always understand tagging system of Math.SE well enough.</p> <p>On Meta.SO one needs 500 rep to create a new tag, on SO — 1500 (!) rep. On Math.SE — just 300 rep.</p> <p>Since its creation Math.SE has grown a lot. Maybe now it's time to (ask to) raise reputation requirements for creating new tag on Math.SE? (Say, to 1000 rep?)</p> <p>(Idea suggested by <a href="http://meta.math.stackexchange.com/users/4781/tito-piezas-iii">Tito Piezas III</a> in <a href="http://meta.math.stackexchange.com/questions/12314/do-we-really-need-a-ramanujan-radicals-tag#comment48374_12314">this comment</a>.)</p>
Grace Note
14,141
<p>After a quick check on this, we've increased the threshold to the suggested 1000 reputation. That seems a saner number than 1500 for here. Only a handful of tags are created in the 1000-1500 reputation range, many of which were things like <a href="https://math.stackexchange.com/questions/tagged/conditional-probability" class="post-tag" title="show questions tagged &#39;conditional-probability&#39;" rel="tag">conditional-probability</a> and <a href="https://math.stackexchange.com/questions/tagged/partial-derivative" class="post-tag" title="show questions tagged &#39;partial-derivative&#39;" rel="tag">partial-derivative</a> that are quite frequently used.</p>
189,069
<p>The Survival Probability for a walker starting at the origin is defined as the probability that the walker stays positive through n steps. Thanks to the Sparre-Andersen Theorem I know this probability is given by</p> <pre><code>Plot[Binomial[2 n, n]*2^(-2 n), {n, 0, 100}] </code></pre> <p>However, I want to validate this empirically. </p> <p>My attempt to validate this for <code>n=100</code>:</p> <pre><code>FoldList[ If[#2 &lt; 0, 0, #1 + #2] &amp;, Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0]] </code></pre> <p>I want <code>FoldList</code> to stop if <code>#2 &lt; 0</code> evaluates to <code>True</code>, not just substitute in 0. </p>
kirma
3,056
<p>Count the number of steps before the random walk either goes negative or more than <span class="math-container">$m$</span> steps have been taken, for <span class="math-container">$n$</span> walks. Then bin the last successful step counts over the integers, reverse the list, accumulate these values (essentially extending each last nonnegative value backwards to every value before it), reverse again to restore the original order, drop the extra value that counted paths continuing past <span class="math-container">$m$</span> steps, and calculate probabilities:</p> <pre><code>With[{n = 5000, m = 100}, Table[First@ NestWhile[# + {1, RandomVariate@NormalDistribution[]} &amp;, {-1, 0}, Last@# &gt;= 0 &amp;&amp; First@# &lt;= m &amp;], n] // BinCounts[#, {1, Max@# + 1, 1}] &amp; // Reverse // Accumulate // Reverse // Most@#/n &amp; // ListPlot[{#, Table[Binomial[2 j, j] 2^(-2 j), {j, m}]}, PlotRange -&gt; All] &amp;] </code></pre> <p><a href="https://i.stack.imgur.com/S4kAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S4kAz.png" alt="enter image description here"></a></p>
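A rough cross-check of the same idea in Python (the step distribution, seed, and sample sizes are my own choices): simulate walks with standard normal steps and compare the surviving fraction to $\binom{2n}{n}2^{-2n}$.

```python
import math
import random

# Monte Carlo check of the survival probability P(n) = C(2n, n) * 2^(-2n)
# for a walk with standard normal increments (valid for any continuous
# symmetric step distribution).
random.seed(1)
n, trials = 5, 20000

def survives(n):
    s = 0.0
    for _ in range(n):
        s += random.gauss(0, 1)
        if s < 0:
            return False
    return True

exact = math.comb(2 * n, n) * 2.0 ** (-2 * n)
mc = sum(survives(n) for _ in range(trials)) / trials
assert abs(exact - 252 / 1024) < 1e-12   # C(10,5)/2^10 for n = 5
assert abs(mc - exact) < 0.02            # well within sampling error
```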
1,488,388
<p><strong>The Statement of the Problem:</strong></p> <p>Let $G$ be a finite abelian group. Let $w$ be the product of all the elements in $G$. Prove that $w^2 = 1$.</p> <p><strong>Where I Am:</strong></p> <p>Well, I know that the commutator subgroup of $G$, call it $G'$, is simply the identity element, i.e. $1$. But, can I conclude from this that $\forall g \in G, g=g^{-1}$, i.e., $\forall g \in G, g^2 = gg^{-1} = 1$, which is our desired result? That just seems... strange. But, it kind of makes sense. After all, each element in $G$ has an associated inverse element (because it's a group), and because it's abelian, we can always position an element next to its inverse, i.e.</p> <p>$$ w^2 = (g_1g_1^{-1}g_2g_2^{-1}g_3g_3^{-1}\cdot \cdot \cdot g_ng_n^{-1})^2 = (1\cdot 1\cdot 1\cdot \cdot \cdot 1)^2=1.$$</p> <p>Is that all there is to it? Actually, looking at it now, I don't even need to mention the commutator subgroup, do I...</p>
fleablood
280,126
<p>List the elements of $G$ as $\{g_1, \ldots, g_n\}$. For each $g_i$ there is precisely one element $g_i^{-1}$ so that $g_i^{-1}g_i = 1$. It is possible that $g_i = g_i^{-1}$, or it is possible that $g_i^{-1} = g_j$ for some other $j$. It doesn't matter.</p> <p>The set of all inverses $\{g_1^{-1}, \ldots, g_n^{-1}\}$ is the same set as $\{g_1, \ldots, g_n\}$, as each inverse is also in $G$ and there is a 1-1 correspondence between elements and their specific inverses.</p> <p>As $G$ is abelian, $w^2 = \prod g_i \times \prod g_i^{-1} = \prod (g_i^{-1}g_i) = \prod 1 = 1$.</p>
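The conclusion is easy to spot-check in the abelian groups $(\mathbb{Z}/n)^\times$. A Python sketch (the range of $n$ is arbitrary):

```python
from math import gcd

# In each abelian group (Z/n)^*, the product w of all elements
# satisfies w^2 = 1.
for n in range(3, 200):
    elements = [a for a in range(1, n) if gcd(a, n) == 1]
    w = 1
    for a in elements:
        w = (w * a) % n
    assert (w * w) % n == 1
```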
2,792,061
<p>I would like to understand how to apply <em>well-founded induction</em>. I have found two definitions which I list now, followed by the question.</p> <blockquote> <p>(1) A binary relation $\prec$ is <a href="http://www.cs.cornell.edu/courses/cs6110/2013sp/lectures/lec07-sp13.pdf" rel="noreferrer"><em>well-founded</em></a> if there are no infinite descending chains. An <em>infinite descending chain</em> is an infinite sequence $a_0, a_1, \dotsc$ such that $a_{i + 1} \prec a_i$ (it goes in reverse order) for all $i \geq 0$.</p> <p>(2) Well-founded induction says that, in order to prove a property $P$ holds on a set $A$ which has a well-founded binary relation $\prec$, it's enough to prove that $P$ holds for any $a \in A$ whenever $P$ holds for $b \prec a$.</p> </blockquote> <p>That last paragraph I don't quite understand. And I am having trouble parsing the multiple-nested $\Rightarrow$ blocks in the formal definition:</p> <blockquote> <p>$$\forall a \in A.(\forall b \in A.b \prec a \Rightarrow P(b)) \Rightarrow P(a) \Rightarrow \forall a \in A.P(a)$$</p> </blockquote> <p>An alternative from <a href="https://en.wikipedia.org/wiki/Well-founded_relation#Induction_and_recursion" rel="noreferrer">Wikipedia</a> is just as difficult to parse:</p> <blockquote> <p>(3) If $x$ is an element of $X$ and $P(y)$ is true for all $y$ such that $y R x$, then $P(x)$ must also be true.</p> <p>$$\forall x\in X\,[(\forall y\in X\,(y\,R\,x\to P(y)))\to P(x)]\to \forall x\in X\,P(x)$$</p> </blockquote> <p>The questions are:</p> <ol> <li>If one could break down / parse that formal equation to explain its meaning.</li> <li>And likewise for that paragraph (2), what it means that "it's enough to prove that $P$ holds for any $a \in A$ whenever $P$ holds for $b \prec a$"</li> </ol> <p>My attempt at understanding the wiki version is, for all $x \in X$, then if [the first big block] is true, then we can say for all $x \in X$, $P(x)$ is true. 
So that is essentially saying if that big chunk is true, then we have proven our property $P(x)$ is true for all x. Not sure how that is the case, but continuing...</p> <p>Then the "first big block" is saying, if for all $y \in X$ [the second big block] is true, then $P(x)$ is true, for the specific $x$ we are focused on atm. So for all $y \in X$, $P(x)$ is going to be true if that "second big block" is true.</p> <p>Finally, the "second big block" is saying if $y$ <em>precedes</em> $x$, then $P(y)$ is true. So if $y$ comes before $x$, then we at least know $P(y)$ is true. Not really sure what this means (how we know $P(y)$ is true).</p> <p>So to summarize, if for all $x$, that for all $y$, if $y$ precedes $x$ then $P(y)$ is true, that if that is true, then $P(x)$ is true, and if that's true, then $P(x)$ is true for all $x$.</p> <p>I have no idea what I am saying right now lol, I am having a tough time understanding this and looking for some guidance. Thank you so much.</p>
tchappy ha
384,082
<p>I am reading &quot;Linear Algebra Done Right 3rd Edition&quot; by Sheldon Axler.</p> <p>This book contains the following problem. (Exercise 2.A 16 on p.38)</p> <blockquote> <p>Suppose <span class="math-container">$p_0, p_1, \dots, p_m$</span> are polynomials in <span class="math-container">$\mathcal{P}_m(\mathbb{F})$</span> such that <span class="math-container">$p_j(2)=0$</span> for each <span class="math-container">$j$</span>. Prove that <span class="math-container">$p_0, p_1, \dots, p_m$</span> is not linearly independent in <span class="math-container">$\mathcal{P}_m(\mathbb{F})$</span>.</p> </blockquote> <p>My answer is here:</p> <blockquote> <p>If <span class="math-container">$m=0$</span>, <span class="math-container">$p_0$</span> is a constant function and <span class="math-container">$p_0(2)=0$</span>.<br /> So, <span class="math-container">$p_0$</span> is the zero function from <span class="math-container">$\mathbb{F}$</span> to <span class="math-container">$\mathbb{F}$</span>.<br /> So, <span class="math-container">$p_0$</span> is not linearly independent in <span class="math-container">$\mathcal{P}_0(\mathbb{F})$</span>.<br /> Suppose that <span class="math-container">$m\geq1$</span>.<br /> We can write <span class="math-container">$p_j(x)=(x-2)q_j(x)$</span> for any <span class="math-container">$x\in\mathbb{F}$</span>, where <span class="math-container">$q_j\in\mathcal{P}_{m-1}(\mathbb{F})$</span> for each <span class="math-container">$j\in\{0,\dots,m\}$</span>.<br /> Since <span class="math-container">$\text{Span}(1,x,\dots,x^{m-1})=\mathcal{P}_{m-1}(\mathbb{F})$</span>, <span class="math-container">$q_0,\dots,q_m$</span> is linearly dependent by 2.23 on p.35 in &quot;Linear Algebra Done Right 3rd Edition&quot; by Sheldon Axler.<br /> So, there exist numbers <span class="math-container">$a_0,\dots,a_m\in\mathbb{F}$</span>, not all <span class="math-container">$0$</span>, such that <span class="math-container">$a_0 q_0(x)+\cdots+a_m q_m(x)=0$</span> 
for any <span class="math-container">$x\in\mathbb{F}$</span>.<br /> Obviously, <span class="math-container">$(x-2)(a_0 q_0(x)+\cdots+a_m q_m(x))=0$</span> for any <span class="math-container">$x\in\mathbb{F}$</span>.<br /> So, <span class="math-container">$a_0 (x-2) q_0(x)+\cdots+a_m (x-2) q_m(x)=0$</span> for any <span class="math-container">$x\in\mathbb{F}$</span>.<br /> So, <span class="math-container">$a_0 p_0(x)+\cdots+a_m p_m(x)=0$</span> for any <span class="math-container">$x\in\mathbb{F}$</span>.<br /> So, <span class="math-container">$p_0,p_1,\dots,p_m$</span> is not linearly independent in <span class="math-container">$\mathcal{P}_m(\mathbb{F})$</span>.</p> </blockquote>
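A concrete instance of the dependence can be checked numerically (the particular polynomials are my own choice for illustration): take $m=2$, $p_0=(x-2)$, $p_1=(x-2)x$, $p_2=(x-2)(x+1)$; then $p_0+p_1-p_2=0$.

```python
def p0(x): return x - 2            # (x-2) * 1
def p1(x): return x * x - 2 * x    # (x-2) * x
def p2(x): return x * x - x - 2    # (x-2) * (x+1)

assert p0(2) == p1(2) == p2(2) == 0      # all vanish at 2

# The nontrivial relation p0 + p1 - p2 = 0, as the argument guarantees
# (a quadratic identity holding at 3+ points holds identically).
for x in [-3, -1, 0, 1, 2, 5, 10]:
    assert p0(x) + p1(x) - p2(x) == 0
```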
2,503,306
<p>Suppose $g^n = e$. Show the order of $g$ divides $n$.</p> <p>Would I use Euler's Theorem?</p> <p>$a^{\phi(p)} \equiv1 \pmod p$</p> <p>$a^{p-1}\equiv1 \pmod p$</p> <p>$a^p\equiv a\pmod p$</p> <p>So then I would have </p> <p>$g^n\equiv g\pmod n$</p> <p>then I think you use the $\gcd$, which states $\gcd(a,b) = 1$</p> <p>or </p> <p>$a=nq+r$ and $b=nq+r$</p> <p>which is $a\equiv b\pmod n$??</p>
Asinomás
33,907
<p>You don't need Euler's theorem. You just need division with remainder.</p> <p>Suppose the order of $g$ is $d$ and $g^n=e$.</p> <p>Suppose $n$ is not a multiple of $d$, then $n=kd+r$ with $0&lt;r&lt;d$.</p> <p>It follows that $e=g^n=g^{kd}g^r=eg^r=g^r\implies g^r=e,$ contradicting the fact that $d$ is the order of $g$.</p>
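The division-with-remainder argument can be spot-checked in $(\mathbb{Z}/13)^\times$. A Python sketch ($p=13$ is an arbitrary choice):

```python
# For every g in (Z/13)^*, g^n = 1 holds exactly when the order d of g
# divides n, matching the contradiction derived above.
p = 13

def order(g, p):
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

for g in range(2, p):
    d = order(g, p)
    for n in range(1, 5 * p):
        if pow(g, n, p) == 1:
            assert n % d == 0
        else:
            assert n % d != 0
assert order(2, 13) == 12   # 2 is a primitive root mod 13
```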
203,505
<p>Let <span class="math-container">$P(x)$</span> be a non-constant polynomial with real coefficients.</p> <p>Can <a href="http://en.wikipedia.org/wiki/Natural_density" rel="noreferrer">natural density</a> of</p> <p><span class="math-container">$$\{n\ |\ \lfloor P(n)\rfloor \ \text{is prime.}\}$$</span></p> <p>be positive?</p>
David E Speyer
297
<p>No. Let $\omega(p)$ be the number of roots of $f$ modulo $p$. Clearly, for any finite set $S$, the upper asymptotic density of your set is bounded by $\prod_{p \in S} (1-\omega(p)/p)$. (Because the probability that $p \nmid f(n)$ is $1-\omega(p)/p$, these probabilities are independent for distinct primes, and $f(n)$ only equals $p$ finitely many times.) We have $ \prod_{p \in S} (1-\omega(p)/p) \leq \exp (- \sum_{p \in S} \omega(p)/p)$. </p> <p>But the Chebotarev (or Frobenius) density theorem gives that $\sum \omega(p)/p$ diverges to $\infty$, so we may take a finite set $S$ large enough to make $\sum_{p \in S} \omega(p)/p$ greater than any specified $N$.</p> <p>I'll mention how this fits into the Bateman-Horn conjecture. That says that the density should go to $0$ like $\prod \frac{p-\omega(p)}{p-1} \cdot \frac{x}{\log f(x)}$, where the cool thing is that the product converges to a nonzero number if and only if (1) $f$ is irreducible and (2) $\omega(p) \neq p$ for any $p$. But all I need to answer your question is an upper bound of $\prod_{p \leq N} \frac{p-\omega(p)}{p} \cdot x$.</p>
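For a concrete polynomial such as $f(n)=n^2+1$ (my choice for illustration), the shrinking of $\prod_{p\in S}(1-\omega(p)/p)$ is easy to observe numerically:

```python
# omega(p) = number of roots of x^2 + 1 modulo p; the partial products
# prod(1 - omega(p)/p) shrink as the prime set S grows.
def primes(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, b in enumerate(sieve) if b]

def omega(p):
    return sum(1 for x in range(p) if (x * x + 1) % p == 0)

assert omega(2) == 1 and omega(3) == 0 and omega(5) == 2

prod_small = prod_large = 1.0
for p in primes(1000):
    if p <= 10:
        prod_small *= 1 - omega(p) / p
    prod_large *= 1 - omega(p) / p
assert prod_large < prod_small < 1   # the bound keeps decreasing
```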
138,708
<p>I'm dealing with derivatives of scalar functions of matrices and wondering if Mathematica can help me here.</p> <p>The standard approach of expanding it in terms of components is cumbersome. As a motivating example, I want to minimize the following function, where $X$ is a matrix</p> <p>$$f(X) = \text{tr}(X'X)$$</p> <p>I can use <a href="https://tminka.github.io/papers/matrix/" rel="noreferrer">matrix differential calculus</a> to derive that one step of gradient descent to minimize this function is:</p> <p>$$X^* = X - 2 X$$</p> <p>On the other hand, suppose $X$ is square, and my function is $$g(X)=\text{tr}(X^2)$$</p> <p>Now, a single step of gradient descent looks as follows $$X^* = X - 2 X'$$</p> <p>This can get complicated to do by hand. As an example, Exercise 1 of <a href="http://www.janmagnus.nl/misc/mdc2007-3rdedition" rel="noreferrer">Magnus</a> 9.10 asks to show that the gradient descent step of the following function</p> <p>$$h(X) = \det(AXB)$$</p> <p>is the following</p> <p>$$X^* = X-\det(AXB)(A'(B'X'A')^{-1}B')'$$</p> <p>now take $$h^*(X) = \text{tr}(AX'BXC)$$ the formula for a single step of gradient descent is $$ \text{probably something simple} $$</p> <p>Is there a way to get help deriving/checking expressions above in Mathematica?</p> <p>(note, I'm using "gradient descent step" instead of "derivative" because there are multiple notations for derivative which differ in shape, but reformulating it as gradient descent removes ambiguity)</p>
mikado
36,788
<p>It is certainly fairly easy to check these relations for specific sizes of array:</p> <pre><code>X = Array[x, {7, 11}]; Map[D[Tr[Transpose[X].X], #] &amp;, X, {2}] == 2 X (* True *) Y = Array[y, {17, 17}]; Map[D[Tr[Y.Y], #] &amp;, Y, {2}] == 2 Transpose[Y] (* True *) </code></pre> <p>Generalisation to the determinant expression should be relatively simple...</p> <p>EDIT</p> <p>Without being too rigorous about it, we can use the following relations to find some matrix derivatives (I assume in the following that all arguments to <code>Tr</code> are square )</p> <pre><code>With[{m = 4, n = 2, p = 3}, A = Array[a, {m, n}]; X = Array[x, {p, n}]; B = Array[b, {p, p}]; F = Array[c, {n, m}]; Y = Array[y, {n, p}];] Tr[B] == Tr[Transpose[B]] (* True *) Tr[X.Y] == Tr[Y.X] (* True *) </code></pre> <p>This allows us to derive (for example)</p> <pre><code>Tr[A.Transpose[X].B.X.F] == Tr[B.X.F.A.Transpose[X]] == Tr[Transpose[B].X.Transpose[A].Transpose[F].Transpose[X]] // Simplify (* True *) </code></pre> <p>and guess the form of the derivative</p> <pre><code>Map[D[Tr[A.Transpose[X].B.X.F], #] &amp;, X, {2}] == B.X.F.A + Transpose[B].X.Transpose[A].Transpose[F] // Expand (* True *) </code></pre>
2,917,535
<p>I have found this problem in a 10th grade textbook and it's given me headaches trying to solve it. It says, determine the set:</p> <p>$$ A = \left \{ x \in \mathbb Z| \root3\of{\frac{7x+2}{x+5}} \in \mathbb Z\right \} $$</p> <p>So I have to find a condition for x so that the expression under the radical is a perfect cube. I remember solving these kind of exercises with perfect squares, but I can't figure this one out. </p>
fleablood
280,126
<p>Let $ \root3\of{\frac{7x+2}{x+5}} = N$</p> <p>$\frac {7x+2}{x+5} = N^3$</p> <p>$\frac {7x + 35 - 33}{x + 5} = N^3$</p> <p>$\frac {7x + 35}{x+5} - \frac {33}{x+5} = N^3$</p> <p>$7 - \frac {33}{x+5} = N^3 \in \mathbb Z$.</p> <p>So $x+5$ divides $33$. So $x+5 =\pm 1, \pm 3, \pm 11$ or $\pm 33$.</p> <p>That means $N^3 = 7-\frac {33}{x+5} = 8,10,18,40,6,4, -4, -26$. </p> <p>The only one of those that is a perfect cube is $N^3 = 7-\frac {33}{x+5} = 8$, which requires $\frac {33}{x+5} = -1$.</p> <p>So $x + 5 = -33$ and $x = -38$.</p> <p>And check: $\root3\of{\frac{-7\cdot 38+2}{-38+5}}= \sqrt[3]{\frac {-264}{-33}}=\sqrt[3] 8 = 2$.</p>
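A brute-force scan (my addition, in Python) over a range of integers confirms that $x = -38$ is the only integer solution:

```python
# Scan integers x and keep those for which (7x+2)/(x+5) is the cube of
# an integer, confirming the divisor argument above.  The scan range
# and the cube-search bound are arbitrary but generous choices.
def is_cube(q):
    return any(c ** 3 == q for c in range(-50, 51))

solutions = []
for x in range(-1000, 1001):
    if x == -5:
        continue  # denominator vanishes
    num, den = 7 * x + 2, x + 5
    # divisibility first: the quotient must itself be an integer
    if num % den == 0 and is_cube(num // den):
        solutions.append(x)
print(solutions)
```

The scan finds exactly one value, matching the hand computation: $x+5$ must divide $33$, and only $x+5=-33$ makes $7 - 33/(x+5)$ a perfect cube.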