1,392,858
<p>It is known that the space of symmetric matrices $\mathbb{R}_{sym}^{n \times n}$ has dimension $\binom{n+1}{2}$.</p> <p>According to the spectral theorem, every symmetric matrix $A \in \mathbb{R}_{sym}^{n \times n}$ has a spectral decomposition in terms of rank-1 matrices:</p> <p>$$A = \sum_{i=1}^n \lambda_i v_i v_i^T$$</p> <p>Hence we conclude that the space of symmetric matrices has dimension $n$.</p> <p>Where is the fallacy in this reasoning?</p> <p>Thanks in advance.</p>
InsideOut
235,392
<p>The dimension of the space of symmetric $n \times n$ matrices is $\frac{n(n+1)}{2}$. By the spectral theorem every symmetric matrix with real coefficients is diagonalizable; in other words, every symmetric matrix is similar to a diagonal matrix. But this does not mean that the space of symmetric matrices has dimension $n$: the rank-1 matrices $v_i v_i^T$ in the decomposition depend on $A$ itself, so they do not form a single fixed set of $n$ matrices spanning the whole space.</p> <p>The set of diagonal matrices is a vector subspace of the symmetric matrices, and it is very easy to find a symmetric matrix that is not diagonal.</p>
563,499
<p>What's the summation of the following expression?</p> <p>$$\sum_{k=1}^{n+3}\left(\frac{1}{2}\right)^{k}\left(\frac{1}{4}\right)^{n-k}$$ The solution is said to be $$2\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right)$$</p> <p>But I'm getting $$\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right).$$ How is this possible? $$\sum_{k=1}^{n+3}\left(2 \times\frac{1}{4}\right)^{k}\left(\frac{1}{4}\right)^n\left(\frac{1}{4}\right)^{-k} \rightarrow \left(\frac{1}{4}\right)^n \sum_{k=1}^{n+3} 2^k\left(\frac{1}{4}\right)^k \left(\frac{1}{4}\right)^{-k}\rightarrow \left(\frac{1}{4}\right)^n\left(2^{n+3}-1\right)$$</p>
pi37
46,271
<p>Note that $$ \left(\frac{1}{2}\right)^{k}\left(\frac{1}{4}\right)^{n-k}=\left(\frac{1}{4}\right)^n 2^k $$ So the sum in question simplifies to $$ \left(\frac{1}{4}\right)^n\left(\sum_{k=1}^{n+3} 2^k\right) $$ Now by the geometric series formula $$ \sum_{k=1}^{n+3} 2^k=2\sum_{k=0}^{n+2} 2^k=2(2^{n+3}-1) $$ So the total sum is $$ 2\left(\frac{1}{4}\right)^n(2^{n+3}-1) $$</p>
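The closed form above can be sanity-checked exactly with rational arithmetic (an editorial sketch; the function names are mine, only the identity comes from the answer):

```python
# Exact check that sum_{k=1}^{n+3} (1/2)^k (1/4)^(n-k) = 2*(1/4)^n*(2^(n+3)-1).
from fractions import Fraction

def lhs(n):
    # the sum in question, computed exactly term by term
    return sum(Fraction(1, 2)**k * Fraction(1, 4)**(n - k)
               for k in range(1, n + 4))

def rhs(n):
    # the claimed closed form
    return 2 * Fraction(1, 4)**n * (2**(n + 3) - 1)

assert all(lhs(n) == rhs(n) for n in range(20))
```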
555,239
<p>Since the polynomial has three irrational roots, I don't know how to solve the equation with the familiar methods for similar questions. Could anyone answer this?</p>
N. S.
9,176
<p>$$x-y = x^2 + y^2 - xy \Leftrightarrow \\ 2x-2y = 2x^2 + 2y^2 - 2xy \Leftrightarrow \\ 0= 2x^2 + 2y^2 - 2xy-2x+ 2y \Leftrightarrow \\ (x-y)^2+(x-1)^2+(y+1)^2=2$$</p> <p>As $x,y$ are integers, each square can only be $0$ or $1$. Since the three squares sum to $2$, two of them have to be $1$ and the third one must be $0$.</p> <p>The second equation similarly leads to $$(x-y)^2+(x-1)^2+(y-1)^2=2$$</p>
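A brute-force cross-check of the sum-of-squares argument (an editorial sketch; since $(x-1)^2\le 2$ and $(y+1)^2\le 2$, every integer solution lies in a small window, so a bounded search is exhaustive):

```python
# Find all integer solutions of x - y = x^2 + y^2 - xy in a window that
# provably contains them all, and compare with the three solutions the
# sum-of-squares decomposition predicts.
solutions = sorted((x, y)
                   for x in range(-50, 51)
                   for y in range(-50, 51)
                   if x - y == x*x + y*y - x*y)
assert solutions == [(0, -1), (0, 0), (1, 0)]
```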
590,817
<p>(I'm a software developer so excuse me)</p> <p>I'm building an application for a client and one of the formulas that has been provided in the spec is <code>value1 = value2 * (1 + 5%)</code>. When I asked about it I was told that it's some kind of notation for <code>value1 = value2 * 0.15</code>.</p> <p>Also, they said that <code>value1 = value2 * (1 + -5%)</code> is the same as <code>value1 = value2 * 0.95</code>.</p> <p>Could someone tell me what this means? Where can I read about what is actually going on with these calculations?</p> <p>Another question: How can a percentage be negative?</p> <p><strong>edit:</strong> I could be wrong about the first example. Client might have said 1.05 and not 0.15.</p>
Rachit Bhargava
113,115
<p>It seems to me that the first one means:</p> <p>value1 = value2 * 1.05</p> <p>And the second one is right as stated: a negative percentage is just a decrease.</p>
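In code, the convention amounts to multiplying by $(1 + p/100)$, where a negative $p$ gives a decrease (a minimal sketch; the function name is mine):

```python
# "value2 * (1 + p%)" read as scaling by the factor (1 + p/100).
def apply_percent(value, p):
    """Scale value by (1 + p/100); negative p means a decrease."""
    return value * (1 + p / 100)

assert abs(apply_percent(200, 5) - 210) < 1e-9    # +5% -> factor 1.05
assert abs(apply_percent(200, -5) - 190) < 1e-9   # -5% -> factor 0.95
```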
3,151,662
<p>Consider <span class="math-container">$a_1,\dots,a_n\in\mathbb{R}^n$</span> and identify <span class="math-container">$a_j\in\mathcal{L}(\mathbb{R},\mathbb{R}^n)$</span> via <span class="math-container">$\varphi\mapsto \varphi(1)$</span>.</p> <p>Also, consider <span class="math-container">$A\in\mathcal{L}(\mathbb{R}^n)$</span> given by <span class="math-container">$$A\colon (x_1,\dots,x_n)\mapsto a_1x_1 + \dots + a_nx_n\tag{$\star$}$$</span></p> <p>What's the name or symbol of the map <span class="math-container">$$\mathcal{L}(\mathbb{R},\mathbb{R}^n)\times\dots\times\mathcal{L}(\mathbb{R},\mathbb{R}^n)\to\mathcal{L}(\mathbb{R}^n,\mathbb{R}^n),\quad(a_1,\dots,a_n)\mapsto A$$</span> where <span class="math-container">$A$</span> and <span class="math-container">$a_1,\dots,a_n$</span> are related as in <span class="math-container">$(\star)$</span>? I'd like to write e.g. <span class="math-container">$A=a_1\otimes\dots\otimes a_n$</span>. Is there some higher-level concept that induces this map? (E.g. some time ago I wondered whether this map would be the tensor product.)</p> <p>The matrix equivalent would be saying that the columns of <span class="math-container">$A$</span> are the column vectors <span class="math-container">$a_1,\dots,a_n$</span> and writing <span class="math-container">$A=\begin{bmatrix}a_1&amp; \dots &amp;a_n\end{bmatrix}$</span>. However, I'd like to keep things matrix-free.</p> <p>Thanks in advance.</p>
David Richerby
97,480
<p>Matrices are used widely in computer graphics. If you have the coordinates of an object in 3d space, then scaling, stretching and rotating the object can all be done by considering the coordinates to be vectors and multiplying them by the appropriate matrix. When you want to display that object on-screen, the <a href="https://en.wikipedia.org/wiki/3D_projection" rel="noreferrer">projection</a> down to a 2D object is also a matrix multiplication.</p>
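As a concrete illustration of the idea (a minimal editorial sketch in plain Python, no graphics library), rotating a point about the origin is exactly a matrix-vector multiplication:

```python
# Rotate a 2D point by multiplying it with the rotation matrix
#   [cos a  -sin a]
#   [sin a   cos a]
import math

def rotate(point, angle):
    """Apply the 2D rotation matrix for `angle` to `point`."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (c * x - s * y, s * x + c * y)

# rotating (1, 0) by 90 degrees should give (0, 1)
x, y = rotate((1.0, 0.0), math.pi / 2)
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12
```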
3,151,662
alephzero
223,485
<p>Determinants are of great theoretical significance in mathematics, since in general "the determinant of something <span class="math-container">$= 0$</span>" means something very special is going on, which may be either good news or bad news depending on the situation.</p> <p>On the other hand, determinants have very little practical use in numerical calculations, since evaluating a determinant of order <span class="math-container">$n$</span> "from first principles" involves <span class="math-container">$n!$</span> operations, which is prohibitively expensive unless <span class="math-container">$n$</span> is very small. Even Cramer's rule, which is often taught in an introductory course on determinants and matrices, is not the <em>cheapest</em> way to solve <span class="math-container">$n$</span> linear equations in <span class="math-container">$n$</span> variables numerically if <span class="math-container">$n&gt;2$</span>, which is a pretty serious limitation!</p> <p>Also, if the typical magnitude of each term in a matrix of order <span class="math-container">$n$</span> is <span class="math-container">$a$</span>, the determinant is likely to be of magnitude <span class="math-container">$a^n$</span>, and for large <span class="math-container">$n$</span> (say <span class="math-container">$n &gt; 1000$</span>) that number will usually be too large or too small to do <em>efficient</em> computer calculations, unless <span class="math-container">$|a|$</span> is <em>very</em> close to <span class="math-container">$1$</span>.</p> <p>On the other hand, almost <em>every</em> type of numerical calculation involves the same techniques that are used to solve equations, so the practical applications of matrices are more or less "the whole of applied mathematics, science, and engineering". Most applications involve systems of equations that are much too big to create and solve by hand, so it is hard to give realistic simple examples. In real-world numerical applications, a set of <span class="math-container">$n$</span> linear equations in <span class="math-container">$n$</span> variables would still be "small" from a practical point of view if <span class="math-container">$n = 100,000$</span>, and even <span class="math-container">$n = 1,000,000$</span> is not usually big enough to cause any real problems: the solution would only take a few seconds on a typical personal computer.</p>
3,151,662
G Cab
317,234
<p>Besides the applications already mentioned in the previous answers, just consider that matrices are the fundamental basis for <a href="https://en.wikipedia.org/wiki/Finite_element_method" rel="nofollow noreferrer">Finite Element</a> design, today widely used in every sector of engineering. </p> <p>Actually a <a href="https://en.wikipedia.org/wiki/Truss" rel="nofollow noreferrer">truss</a> is a <em>physical</em> representation of a matrix: if its <a href="https://en.wikipedia.org/wiki/Direct_stiffness_method" rel="nofollow noreferrer">stiffness matrix</a> has null determinant, it means that there can be movements without external forces, i.e. the truss will collapse.</p> <p>Also, in the continuum analysis of the deformation of bodies, stress and strain are each represented by matrices (tensors).</p> <p>The rotational inertia of a body is a <a href="https://en.wikipedia.org/wiki/Moment_of_inertia" rel="nofollow noreferrer">matrix</a>.</p> <p>An electric network is described by a matrix relating voltages and currents, and a null determinant denotes a short somewhere.</p> <p>And so on ...</p>
2,623,735
<p><a href="https://i.stack.imgur.com/5QfOQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5QfOQ.png" alt="enter image description here"></a></p> <p>I have proven that $S$ is also a basis. But I am not sure about the second one. Is it just the $3\times 3$ identity matrix, since we don't change anything?</p>
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p>If $A=\tan (\frac {x}{2}) $ then</p> <p>$$\cos (x)=\frac {1-A^2}{1+A^2} $$ and $$\sin (x)=\frac {2A}{1+A^2} $$</p>
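A quick numerical check of the two half-angle formulas in the hint (an editorial sketch):

```python
# Verify cos x = (1 - A^2)/(1 + A^2) and sin x = 2A/(1 + A^2)
# with A = tan(x/2), at a few sample points.
import math

for x in (0.3, 0.7, 1.2, 2.5):
    A = math.tan(x / 2)
    assert abs(math.cos(x) - (1 - A*A) / (1 + A*A)) < 1e-12
    assert abs(math.sin(x) - 2*A / (1 + A*A)) < 1e-12
```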
4,414,843
<p>For reference: Show that the area of triangle <span class="math-container">$ABC$</span> satisfies <span class="math-container">$S_{\triangle ABC} = R\times MN$</span> (where <span class="math-container">$R=BO$</span>).</p> <p>I haven't been able to prove this relationship.</p> <p><a href="https://i.stack.imgur.com/A3ci3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A3ci3.jpg" alt="enter image description here" /></a></p> <p>My progress:</p> <p><span class="math-container">$$S_{\triangle ABC} = \frac{abc}{4R}$$</span> <span class="math-container">$$S_{\triangle ABC}=\frac{AC\times BH}{2}$$</span></p> <p><span class="math-container">$BMHN$</span> is cyclic.</p> <p>Therefore <span class="math-container">$\angle HMN \cong\angle HBN\\ \angle MBH \cong \angle MNH\\$</span></p> <p><span class="math-container">$\triangle AMH \sim \triangle AHB\\ \triangle CNH \sim \triangle CHB$</span></p> <p>...?</p>
sirous
346,566
<p>Hints for a geometric solution:</p> <p>- Draw a circle of radius <span class="math-container">$R$</span> centered at B; it intersects the altitude BH at E.</p> <p>- Connect N to E and extend the line to meet the circle centered at N with radius MN at F. It can be seen that NF is parallel to AC, so it is perpendicular to BH at the point E, and hence <span class="math-container">$FN=MN$</span>. We have:</p> <p><span class="math-container">$S_{ABC}=S_{BFN}+S_{ACNF}=\frac{R\times FN}2+\frac{(FN+AC)(BH-R)}2$</span></p> <p>which finally gives:</p> <p><span class="math-container">$R\times FN=FN\times BH-R\times FN + AC\times BH-AC\times R$</span></p> <p>or:</p> <p><span class="math-container">$2R\times FN=AC\times BH$</span></p> <p>which follows if you prove:</p> <p><span class="math-container">$AC\times R=FN\times BH$</span></p>
677,785
<p>I have to evaluate this integral:</p> <p>$$ \int_0^4 \int_{\sqrt{y}}^2 y^2 {e}^{x^7} \operatorname d\!x \operatorname d\!y\, $$</p> <p>I have no idea what to do with $\;{e}^{x^7}$.</p> <p>I have even <a href="http://www.wolframalpha.com/input/?i=int+e%5Ex%5E7+dx" rel="nofollow">tried $\int{e}^{x^7} dx$ with WolframAlpha</a>, but it gives me something with $\;\Gamma\;$ and I don't know what to do with that.</p> <p>I tried setting $\;u = x^7\;$ and doing another change of variables. I got $\;445 {e}^{128}/9408\;$, but I'm not really sure about it.</p> <p>If anyone could at least point me in the right direction, it would be awesome! Thanks.</p>
Community
-1
<p>The interchange of the order of integration is justified by Fubini's theorem:</p> <p>$$ \int_0^4 \int_\sqrt{y}^2 y^2 {e}^{x^7} dxdy=\int_0^2\int_0^{x^2}y^2 {e}^{x^7} dydx=\frac 1 3\int_0^2 x^6{e}^{x^7} dx=\frac1{21}{e}^{x^7} \bigg|_0^2=\frac {{e}^{2^7}-1}{21} $$</p>
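The key step after swapping the order is the substitution $u=x^7$, which gives $\int x^6 e^{x^7}\,dx = \frac{1}{7}e^{x^7}$. A numerical sanity check of that step (an editorial sketch; with the original upper limit the value is near $e^{128}$, far too steep for naive quadrature, so this checks the same antiderivative on $[0,1]$, where $\int_0^1 x^6 e^{x^7}\,dx=(e-1)/7$):

```python
# Composite Simpson's rule check of  integral_0^1 x^6 e^{x^7} dx = (e-1)/7.
import math

def simpson(f, a, b, n=2000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

approx = simpson(lambda x: x**6 * math.exp(x**7), 0.0, 1.0)
exact = (math.e - 1) / 7
assert abs(approx - exact) < 1e-8
```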
2,101,750
<p>The WP article on general topology has a section titled "<a href="https://en.wikipedia.org/wiki/General_topology#Defining_topologies_via_continuous_functions">Defining topologies via continuous functions</a>," which says,</p> <blockquote> <p>given a set S, specifying the set of continuous functions $S \rightarrow X$ into all topological spaces X defines a topology [on S].</p> </blockquote> <p>The first thing that bothered me about this was that clearly this collection of continuous functions is a proper class, not a set. Is there a way of patching up this statement so that it literally makes sense, and if so, how would one go about proving it?</p> <p>The same section of the article has this:</p> <blockquote> <p>for a function f from a set S to a topological space, the initial topology on S has as open subsets A of S those subsets for which f(A) is open in X.</p> </blockquote> <p>This confuses me, because it seems that this does not necessarily define a topology. For example, let S be a set with two elements, and let f be a function that takes these elements to two different points on the real line. Then f(S) is not open, which means that S is not an open set in S, but that violates one of the axioms of a topological space. Am I just being stupid because I haven't had enough coffee this morning?</p>
Daron
53,993
<p>The first statement can be patched up using functions into the Sierpinski space $\{0,1\}$ with topology $\{\varnothing , \{1\}, \{0,1\}\}$. Since a continuous function $f \colon X \to \{0,1\}$ can be identified with the open set $f^{-1}(1)$, we see that the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.</p>
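A finite illustration of this identification (an editorial sketch with a hypothetical three-point space): maps into the Sierpinski space that are continuous correspond exactly to the open sets.

```python
# Enumerate all functions X -> {0,1} and keep those continuous for the
# Sierpinski topology {{}, {1}, {0,1}}: f is continuous iff f^{-1}({1})
# is open in X. The count should equal the number of open sets of X.
from itertools import product

X = (0, 1, 2)
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)]  # a chain topology

continuous = []
for values in product((0, 1), repeat=len(X)):
    f = dict(zip(X, values))
    preimage = frozenset(x for x in X if f[x] == 1)   # f^{-1}({1})
    if preimage in opens:                              # continuity test
        continuous.append(f)

assert len(continuous) == len(opens)   # one continuous map per open set
```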
1,777,901
<p>My task is this:</p> <p>Show that $$\ln(2) = \sum \limits_{n=1}^\infty \frac{1}{n2^n}$$</p> <p>My work so far: </p> <p>If we expand $\ln(x)$ around $x = 1$, we get:</p> <p>$\ln(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + ...$</p> <p>Substituting $x = 2$ then gives us:</p> <p>$\ln(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - ...$</p> <p>No surprise there; we should always get an alternating series for $\ln(x)$ when doing the Taylor expansion. By using the Euler transform, shown in the middle of this page on the <a href="https://en.wikipedia.org/wiki/Natural_logarithm" rel="nofollow">natural logarithm</a>, one can obtain the wanted result, but how do you derive it? I need someone to actually show and explain in detail how one starts from the left side of the equation and ends up on the other. </p> <p>Thanks in advance </p>
Solumilkyu
297,490
<p>Note that \begin{align} \frac{1}{1-x}&amp;=1+x+x^2+\cdots\qquad(-1&lt;x&lt;1). \end{align} Thus \begin{align} -\ln(1-x)&amp;=\int\frac{1}{1-x}{\rm d}x\\ &amp;=x+\frac{1}{2}x^2+\frac{1}{3}x^3+\cdots\\ &amp;=\sum_{n=1}^\infty\frac{1}{n}x^n. \end{align} By taking $x=\frac{1}{2}$ into the above equation, we have $$\sum_{n=1}^\infty\frac{1}{n2^n}=-\ln\left(\frac{1}{2}\right)=\ln(2).$$</p>
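A quick numerical confirmation of the series just derived (an editorial sketch): the terms $\frac{1}{n2^n}$ decay geometrically, so a short partial sum already agrees with $\ln 2$ to machine precision.

```python
# Partial sum of sum_{n>=1} 1/(n * 2^n) versus ln(2); the tail after
# n = 59 is below 1e-19, so only float rounding remains.
import math

partial = sum(1 / (n * 2**n) for n in range(1, 60))
assert abs(partial - math.log(2)) < 1e-14
```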
1,777,901
Olivier Oloa
118,798
<p><strong>Hint</strong>. One may write</p> <blockquote> <p>$$ \begin{align} \sum \limits_{n=1}^\infty \frac{1}{n2^n}&amp;=\sum \limits_{n=1}^\infty \int_0^{1/2}x^{n-1}dx \\\\&amp;=\int_0^{1/2}\sum \limits_{n=1}^\infty x^{n-1}\:dx \\\\&amp;=\int_0^{1/2}\frac1{1-x}\:dx \\\\&amp;=\left[-\ln(1-x)\right]_0^{1/2} \\\\&amp;=\ln 2 \end{align} $$ </p> </blockquote> <p>as wanted.</p> <p>Similarly, one gets $$ \sum \limits_{n=1}^\infty \frac{t^n}{n}=-\ln(1-t),\quad |t|&lt;1. $$</p>
3,064,501
<p>So I was trying to find the <strong>time complexity</strong> of an algorithm to find the <span class="math-container">$N$</span>th prime number (where <span class="math-container">$N$</span> could be any positive integer).</p> <p>So, is there any way to determine exactly how far away the <span class="math-container">$(N+1)$</span>th prime number will be if the <span class="math-container">$N$</span>th prime number is already known?</p>
DanaJ
117,584
<p>You've got what looks like two questions.</p> <ol> <li><p>The time complexity of finding the nth prime. In practice it is <span class="math-container">$O\big(\frac{n^{2/3}}{\log^2n}\big)$</span> using fast prime-count algorithms. In theory this could be lowered to something on the order of <span class="math-container">$O(n^{1/2})$</span>. See:</p> <ul> <li><a href="https://math.stackexchange.com/questions/507178/most-efficient-algorithm-for-nth-prime-deterministic-and-probabilistic">Most efficient algorithm for nth prime, deterministic and probabilistic?</a></li> <li><a href="https://cs.stackexchange.com/questions/10683/what-is-the-time-complexity-of-generating-n-th-prime-number">https://cs.stackexchange.com/questions/10683/what-is-the-time-complexity-of-generating-n-th-prime-number</a></li> <li><a href="https://github.com/kimwalisch/primecount" rel="nofollow noreferrer">primecount README</a></li> </ul></li> <li><p>Does knowing P(n-1) allow one to quickly find P(n)? Yes, in the sense that we have reduced the problem to a single <em>nextprime</em> call. Once beyond trivial sizes, this is just a loop calling <em>isprime</em>. One might want to use a wheel or a partial sieve to skip obvious composites.</p> <p>See Cramér's conjecture for a thought on how long this would take. In practice this is quite efficient, with inputs of thousands of digits being straightforward to calculate. You can look up the concept of "merits" for gaps. On average you'll find the next prime about <span class="math-container">$\log n$</span> away. For large inputs, I found sieving to a distance of <span class="math-container">$30 \log n$</span> (30 merits) was plenty to get excellent performance with <em>exceptionally</em> few gaps that needed looking farther.</p> <p>Knowing the previous prime hasn't really given us any special information, though, as Gerry pointed out.</p></li> </ol>
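A bare-bones version of the nextprime loop described in point 2 (an editorial sketch; a production implementation would use a wheel or partial sieve plus a fast probabilistic primality test rather than trial division):

```python
# nextprime as "a loop calling isprime", with isprime done by trial division.
def is_prime(m):
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def next_prime(p):
    """Smallest prime strictly greater than p."""
    candidate = p + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

assert next_prime(2) == 3
assert next_prime(13) == 17
assert next_prime(97) == 101
```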
268,676
<p>It is not hard to check that the three roots of $x^3-2=0$ are $\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}$, hence the splitting field of $x^3-2$ over $\mathbb{Q}$ is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}]$. However, since $\sqrt[3]{2}\zeta_3^{2}$ can be computed from $\sqrt[3]{2}$ and $\sqrt[3]{2}\zeta_3$, the splitting field is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3]$.</p> <p>In the case $x^5-2=0$, in the book <em>Galois Theory</em> by J. S. Milne, the author says that the splitting field is $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$. </p> <p>My questions are: </p> <ol> <li>How can the other roots of $x^5-2$ be represented in terms of $\sqrt[5]{2}, \zeta_5$, so that he can write the splitting field as $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$?</li> <li>Is the splitting field of $x^n -a$ over $\mathbb{Q}$ equal to $\mathbb{Q}[\alpha,\zeta_n]$, where $\alpha$ is the real $n$-th root of $a$?</li> </ol>
Hagen von Eitzen
39,174
<p>Note that $(\alpha\zeta_n^k)^n = \alpha^n\zeta_n^{nk}=\alpha^n=a$, $0\le k&lt;n$.</p>
268,676
Alfonso Fernandez
54,227
<ol> <li><p>Assuming that by $\zeta_5$ you mean a primitive fifth root of unity, the roots of $x^5-2$ are simply $\sqrt[5]{2}, \sqrt[5]{2} \zeta_5, \sqrt[5]{2} \zeta_5^2, \sqrt[5]{2} \zeta_5^3, \sqrt[5]{2} \zeta_5^4$.</p></li> <li><p>The roots of $x^n - a$ are given by $x^n=a$. Let $x=re^{i\theta}$ be such a root; then $\left( re^{i\theta} \right)^n = a$, meaning $r^n=a$ and $n\theta = 2\pi k$. This means that the roots are always $\sqrt[n]{a},\sqrt[n]{a}\zeta_n^1,...,\sqrt[n]{a}\zeta_n^{n-1} \in \mathbb{Q}[\sqrt[n]{a},\zeta_n]$.</p></li> </ol>
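A quick numerical check (an editorial sketch) that $\sqrt[5]{2}\,\zeta_5^k$ for $k=0,\dots,4$ really gives five distinct roots of $x^5-2$:

```python
# The five candidate roots are alpha * zeta5^k; each fifth power should be 2,
# since (alpha * zeta5^k)^5 = 2 * zeta5^{5k} = 2.
import cmath

alpha = 2 ** (1 / 5)
zeta5 = cmath.exp(2j * cmath.pi / 5)        # a primitive fifth root of unity

roots = [alpha * zeta5**k for k in range(5)]
assert all(abs(r**5 - 2) < 1e-9 for r in roots)
# and they are pairwise distinct
assert all(abs(roots[i] - roots[j]) > 1e-9
           for i in range(5) for j in range(i + 1, 5))
```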
3,364,316
<p>While reading E. Landau's <em>Grundlagen der Analysis</em> (tr. <em>Foundations of Analysis</em>, 1966), I couldn't understand the proof of <em>Theorem 3</em> in the segment on <em>Natural Numbers</em>, which I've quoted below.</p> <blockquote> <p><strong>Theorem 3:</strong> <em>If</em><br> <span class="math-container">$$x \neq 1$$</span> <em>then there exists one</em> (hence, by Axiom 4, exactly one) <span class="math-container">$u$</span> <em>such that</em><br> <span class="math-container">$$x = u'$$</span> <strong>Proof:</strong> Let <span class="math-container">$\mathbb{S}$</span> be the set consisting of the number <span class="math-container">$1$</span> and of all those <span class="math-container">$x$</span> for which there exists such a <span class="math-container">$u$</span>. (For any such <span class="math-container">$x$</span>, we have of necessity that<br> <span class="math-container">$$x \neq 1$$</span> by Axiom 3.)<br> I) <span class="math-container">$1$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> II) If <span class="math-container">$x$</span> belongs to <span class="math-container">$\mathbb{S}$</span>, then, with <span class="math-container">$u$</span> denoting the number <span class="math-container">$x$</span>, we have<br> <span class="math-container">$$x'=u'$$</span><br> so that <span class="math-container">$x'$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> By Axiom 5, <span class="math-container">$\mathbb{S}$</span> therefore contains all the natural numbers. <span class="math-container">$\square$</span> </p> </blockquote> <p>Landau appeals to the Peano axioms in this proof. Can someone explain what's going on?</p>
Claude Leibovici
82,404
<p><em>Similar to Omnomnomnom's answer.</em></p> <p>Your idea was good: <span class="math-container">$$a_n= \frac{p_{n}}{p_{n}+p_{n+1}} \sim \frac{n \log (n)}{n \log (n)+(n+1) \log (n+1)}$$</span> Now, using a Taylor expansion for large values of <span class="math-container">$n$</span>, <span class="math-container">$$a_n=\frac{1}{2}-\frac{{\log \left({n}\right)}+1}{4 n\log(n)}+\frac{\log^2(n)+\log(n)+1}{8n^2\log^2(n) }+O\left(\frac{1}{n^2}\right)$$</span></p>
2,130,397
<p>If I want to find the power series representation of the following function:</p> <p>$$ \ln \frac{1+x}{1-x} $$</p> <p>I understand that it can be written as </p> <p>$$ \ln (1+x) - \ln(1-x) $$</p> <p>And I understand that if I now write in the power series representations for $\ln(1+x)$ and $\ln(1-x)$:</p> <p>$$\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>My textbook solution does an odd thing where it writes it out as</p> <p>$$\sum_{n=1}^\infty \frac{x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>$$2\sum_{n=1}^\infty \frac{x^{2n-1}}{2n-1} $$</p> <p>I have no idea how it got from the line where I have the power series representations for $\ln(1+x)$ and $\ln(1-x)$ to the last two lines. If anyone could help me link my part to the textbook solution I would really appreciate it! Thank you! </p>
Bernard
202,857
<p><em>Bioche's rules</em> prompt you to set $t=\tan x$, $\mathrm d t=(1+t^2)\mathrm d x$, so you finally get the integral of the rational function $$\int_0^\infty\frac{\mathrm dt}{2+t^2}.$$</p>
1,525,660
<p><a href="https://i.stack.imgur.com/w2y9k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w2y9k.png" alt="image"></a></p> <p>Problem above. (Sorry, I can't embed yet, and the link seems to be removed when hyperlinked.)</p> <p>Hello,</p> <p>This is a fairly simple question (I imagine) that I am stuck on. The first two parts are fairly straightforward; however, I am unsure how to continue beyond that.</p> <p>How does one find the remaining roots, given two of them? I've tried long division, which yielded nothing useful; any clever ideas?</p> <p>Also, for the final part, would the velocity merely be z'(t) and the acceleration z''(t)? (after taking the complex conjugate of the denominator to make it real)</p> <p>It seems too easy for the question carrying the most marks, so I feel like something is probably going wrong. Would I take the real parts of the velocity/acceleration after finding z' and z'' to find the magnitude? Or would it be a case of just finding the magnitude in the same sense as on an Argand diagram (the root of [x^2 + y^2]), simply with the variable t in place?</p> <p>Thanks</p> <p>These questions were taken from a Cambridge 'Maths for Natural Sciences' past exam, if anyone is curious.</p>
E.H.E
187,799
<p>$$\frac{1}{1-x}=\sum_{n=0}^{\infty }x^n$$ $$\frac{d^3}{dx^3}(\frac{1}{1-x})=\frac{6}{(1-x)^4}=\sum_{n=3}^{\infty }n(n-1)(n-2)x^{n-3}$$ $$\frac{1}{(1-x)^4}=\frac{1}{6}\sum_{n=3}^{\infty }n(n-1)(n-2)x^{n-3}$$ now replace $x$ by $x^2$ $$\frac{1}{(1-x^2)^4}=\frac{1}{6}\sum_{n=3}^{\infty }n(n-1)(n-2)x^{2n-6}=1+4x^2+10x^4+20x^6+35x^8+..$$</p>
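The coefficients above can be double-checked programmatically (an editorial sketch): setting $n=m+3$ in the sum shows the coefficient of $x^{2m}$ is $\frac{(m+3)(m+2)(m+1)}{6}=\binom{m+3}{3}$, giving $1, 4, 10, 20, 35, \dots$

```python
# Check the binomial form of the coefficients, then compare the truncated
# series against the closed form 1/(1-x^2)^4 at a sample point.
from math import comb

coeffs = [comb(m + 3, 3) for m in range(5)]
assert coeffs == [1, 4, 10, 20, 35]

x = 0.1
series = sum(comb(m + 3, 3) * x**(2 * m) for m in range(60))
assert abs(series - 1 / (1 - x**2)**4) < 1e-12
```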
112,651
<p>What is known about the set of well orderings of $\aleph_0$ in set theory without choice? I do not mean the set of countable well-order types, but the set of all subsets of $\aleph_0$ which (relative to a pairing function) code well orderings. And I would be interested in an answer in, say, ZF without choice. My actual concern is higher order arithmetic.</p> <p>I would not be surprised if ZF proves there are continuum many. But I don't know.</p> <p>At the opposite extreme, is it provable in ZF that there are not more well orderings of $\aleph_0$ than there are countable well-order types?</p>
Amit Kumar Gupta
7,521
<p>Consider the tree of finite partial attempts to build a well-ordering, and notice that it has size continuum.</p> <p>More rigorously, let:</p> <p>$$T = \{ f : n \to \omega\ |\ n \in \omega, f \mbox{ injective } \}$$</p> <p>ordered by extension. This is clearly an $\omega$ branching tree of height $\omega$, and its branches are precisely the injections $\omega \to \omega$. But we're interested in the set of well-orderings of $\omega$. Now, those injections which are bijections give us distinct well-orderings, but perhaps there are too few of them. What about the branches that aren't surjections? We can create distinct well-orderings out of them too: if a branch $b$ is not surjective and $X$ is the set of naturals missed by its range, consider the well-ordering obtained by taking $b$, then concatenating on to its end the numbers in $X$, ordered naturally.</p> <p>So the branches of our tree are in bijection with a set of well-orderings of $\omega$, and there are continuum many branches, so there are continuum many well-orderings. Note that the set of well-orderings we get is not even the set of all well-orderings. In particular every well-ordering we get has order type $\leq \omega + \omega$.</p>
4,192,687
<p>Let <span class="math-container">$f: [0,1] \rightarrow \mathbb{R}$</span> be a continuous function. <br /> How can I show that <span class="math-container">$ \lim_{s\to\infty} \int_0^1 f(x^s) \, dx$</span> exists?<br /> It is difficult for me to calculate the limit, as no concrete function or function values are given...</p> <p>My idea is to use the dominated convergence theorem with the constant dominating function <span class="math-container">$\max_{[0,1]} |f|$</span> (finite, since <span class="math-container">$f$</span> is continuous on a compact interval), so that <br /> <span class="math-container">$ \lim_{s\to\infty} \int_0^1 f(x^s) \, dx = \int_0^1 \lim_{s\to\infty} f(x^s) \, dx$</span>. But how can I calculate the limit of <span class="math-container">$f(x^s)$</span> then? There is no information given about the monotonicity of the function. Thanks.</p>
Vladimir
154,757
<p><span class="math-container">$\lim_{s\to\infty} f(x^s)=f(0)$</span> for <span class="math-container">$0\le x&lt;1$</span> (since <span class="math-container">$x^s\to 0$</span> and <span class="math-container">$f$</span> is continuous), while the value at the single point <span class="math-container">$x=1$</span> is <span class="math-container">$f(1)$</span>, which does not affect the integral. Thus, <span class="math-container">$\lim_{s\to\infty}\int_0^1f(x^s)dx=f(0)$</span>.</p>
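A concrete instance of this limit (an editorial sketch with the hypothetical choice $f=\cos$): $\int_0^1 \cos(x^s)\,dx$ should approach $\cos(0)=1$ as $s$ grows, since $1-\cos(x^s)\le x^{2s}/2$ integrates to $\frac{1}{2(2s+1)}$.

```python
# Numerically integrate cos(x^1000) over [0,1] with composite Simpson's
# rule; for s = 1000 the value should be within ~2.5e-4 of f(0) = 1.
import math

def simpson(f, a, b, n=20000):         # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

val = simpson(lambda x: math.cos(x**1000), 0.0, 1.0)
assert abs(val - 1.0) < 1e-3           # close to f(0) = cos(0) = 1
```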
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
Henno Brandsma
4,280
<p>There are reasonably accessible proofs that are purely general topology. First one needs to show Brouwer's fixed point theorem (which has an elementary proof, using barycentric subdivision and Sperner's lemma), or some result of similar hardness. Then one defines a topological dimension function (there are 3 that all coincide for separable metric spaces, dim (covering dimension), ind (small inductive dimension), Ind (large inductive dimension)), say we use dim, and then we show (using Brouwer) that $\dim(\mathbb{R}^n) = n$ for all $n$. As homeomorphic spaces have the same dimension (which is quite clear from the definition), this gives the result. This is in essence the approach Brouwer himself took, but he used a now obsolete dimension function called Dimensionsgrad in his paper, which does coincide with dim etc. for locally compact, locally connected separable metric spaces. Lebesgue proposed the covering dimension, but had a false proof for $\dim(\mathbb{R}^n) = n$, which Brouwer corrected.</p> <p>One can find such proofs in Engelking (general topology), Nagata (dimension theory), or nicely condensed in van Mill's books on infinite dimensional topology. These proofs do not use homology, homotopy etc., although one could say that the Brouwer proof of his fixed point theorem (via barycentric subdivision etc.) was a precursor to such ideas.</p>
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
Qiaochu Yuan
232
<blockquote> <p>is there intuition for why a proof is so difficult?</p> </blockquote> <p>Sure: the topological category is horrible. A generic continuous function is bizarre and will <a href="http://en.wikipedia.org/wiki/Space-filling_curve">violate your geometric intuitions</a>. When we prove that $\mathbb{R}^n$ is not homeomorphic to $\mathbb{R}^m$ by proving that $S^n$ is not homotopy equivalent to $S^m$, much of the work goes into proving that, <em>up to homotopy</em>, we can ignore how bizarre generic continuous functions are. That is, you think that "homeomorphic" is an intuitive condition, but it's not. </p> <p>This is why the corresponding question in the smooth category is much easier; generic smooth functions are much less bizarre in a way that is quantified by <a href="http://en.wikipedia.org/wiki/Sard%27s_theorem">Sard's lemma</a>. </p> <hr> <p>Here is a specific example of what I mean. The reason we can distinguish $\mathbb{R}$ from $\mathbb{R}^m, m &gt; 1$ by removing a point is because continuous functions send points to points. It is tempting to argue as follows: we can distinguish $\mathbb{R}^2$ from $\mathbb{R}^m, m &gt; 2$ by removing a <em>line</em>, since the result is not connected for $\mathbb{R}^2$ but is connected for $\mathbb{R}^m$. But of course this argument doesn't work because continuous functions need not send lines to lines; the image of a line can be much bigger, e.g. all of $\mathbb{R}^m$. </p> <p>This is <em>weird.</em> For $\mathbb{R}^2$, as you say, we can rescue this proof by removing a point because we know about simple connectedness and because, again, continuous functions send points to points. But for $\mathbb{R}^3$ we are stuck: removing a plane doesn't work, and removing a line doesn't even work, so if we want to stick to our "removing a point" strategy we had better figure out what the analogue of simple connectedness is in higher dimensions. 
This naturally leads to homotopy and homology, which happen to be strong enough tools to deal with the fact that continuous functions are bizarre, but they don't change the fact that continuous functions are bizarre. </p>
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
Community
-1
<p>Consider the one point compactifications, <span class="math-container">$S^n$</span> and <span class="math-container">$S^m$</span>, respectively. If <span class="math-container">$\mathbb R^n$</span> is homeomorphic to <span class="math-container">$R^m$</span>, their one-point compactifications would be, as well. But <span class="math-container">$H_n(S^n)=\mathbb Z$</span>, whereas <span class="math-container">$H_n(S^m)=0$</span>, for <span class="math-container">$n\ne m,0$</span>.</p>
1,283,325
<p>I got two questions about $p$-adic numbers:</p> <blockquote> <ol> <li>I often read that the field $\mathbb Q_p$ is much different than the field $\mathbb R$.</li> </ol> </blockquote> <p>An element of $\mathbb Q_p$ is of the form $\sum_{i=-k}^{\infty}a_ip^i$ where $a_i\in \{0,...,p-1\}$.</p> <p>But isn't this just a real number? So at least the elements of $\mathbb Q_p$ are a subset of $\mathbb R$? That would mean that these fields are especially different in terms of their operation?</p> <blockquote> <ol start="2"> <li>Let $x\in \mathbb Q_p^*$. Why $x$ can be written uniquely like this: $x=p^na$ where $a$ is an element of the $p$-adic integers?</li> </ol> </blockquote> <p>Thanks in advance!</p>
Dietrich Burde
83,966
<ol> <li><p>No, for example $x=\sqrt{-1}=i\in \mathbb{Q}_5$, but $x\not\in \mathbb{R}$.</p></li> <li><p>See <a href="http://en.wikipedia.org/wiki/P-adic_number">here</a>. </p></li> </ol>
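To make the first point concrete, one can exhibit a 5-adic square root of <span class="math-container">$-1$</span> digit by digit. The sketch below is the standard Hensel/Newton lifting step (function name is my choosing, not from the linked reference); it needs Python 3.8+ for the modular inverse via three-argument `pow`:

```python
def hensel_sqrt_minus_one(p, k, x0):
    # Lift x0 (with x0^2 = -1 mod p) to x with x^2 = -1 mod p^k.
    x, pe = x0, p
    for _ in range(k - 1):
        c = (x * x + 1) // pe             # exact division by the loop invariant
        t = (-c) * pow(2 * x, -1, p) % p  # next base-p digit (modular inverse, Python 3.8+)
        x += t * pe
        pe *= p
    return x

x = hensel_sqrt_minus_one(5, 10, 2)   # start from 2^2 = 4 = -1 (mod 5)
print(x, (x * x + 1) % 5**10)          # second value is 0
```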
1,626,821
<p>Is there a name for a vector with all equal elements? If so, what is it?</p> <p>For example,</p> <p>$$ (7, 7, 7, 7, 7) $$</p>
Narasimham
95,860
<p><em>Diagonal vector</em>, which makes equal angles with the coordinate axes.</p>
875,644
<p>I have a parabolic basin which i am trying to find the equation for so I can reproduce it. I have taken $3$ points along one line of it to find the equation of the parabola, and I'm wondering if there is a way I can go from this to the equation of the parabolic basin. The equation I have for the parabola is:</p> <p>$y = 0.1x^2+0.3 $</p> <p>($b= 0$ so no $x$ term).</p> <p>I understand the equation of a parabolic basin takes the form:</p> <p>$z = ax^2 + by^2$ or something along those lines. </p> <p>Any help would be appreciated.</p>
Shine
157,361
<p>In your first step, it should be: $m=y_1 -y_2$</p> <p>$$\int_0^1\int_0^1g(y_1-y_2)\Bbb{1}_{\{y_1&gt;y_2\}}dy_1dy_2 = \int_0^1\int_{y_2}^{1}g(y_1-y_2)dy_1dy_2=\int_0^1\int_0^{1-y_2}g(m)dm dy_2=\int_0^1\int_0^1 g(m)I_{(0&lt;m&lt;1-y_2)}dmdy_2=\int_0^1\int_0^1 g(m)I_{(0&lt;m&lt;1-y_2)}dy_2 dm \quad=\int_0^1\int_0^1 g(m)I_{(0&lt;y_2&lt;1-m)}dy_2 dm =\int_0^1 g(m)(1-m)dm$$</p>
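A numerical cross-check of the final identity, with an arbitrary sample $g$ of my choosing (midpoint Riemann sums; for $g(m)=m^2$ both sides equal $1/12$):

```python
def lhs(g, n=400):
    # double integral of g(y1 - y2) over the triangle y1 > y2 in the unit square
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y1 = (i + 0.5) * h
        for j in range(n):
            y2 = (j + 0.5) * h
            if y1 > y2:
                total += g(y1 - y2)
    return total * h * h

def rhs(g, n=400):
    # single integral of g(m) * (1 - m) over [0, 1]
    h = 1.0 / n
    return sum(g((i + 0.5) * h) * (1.0 - (i + 0.5) * h) for i in range(n)) * h

g = lambda m: m * m
left, right = lhs(g), rhs(g)
print(left, right)   # both close to 1/12
```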
1,613,645
<p>Let's get started:</p> <p>$$\hat f(n) = \frac{1}{2\pi}\int_0^{2\pi} |x|e^{-inx} dx$$</p> <p>since $|x|$ is an even function:</p> <p>$$= \frac{1}{\pi}\int_0^{\pi} xe^{-inx} dx$$</p> <p>Integration by parts yields:</p> <p>$$e^{-inx}\Big|_0^{\pi} + \frac{1}{in} \int_0^\pi e^{-inx} dx = (-1)^n - 1 + \frac{1}{in} \left( \frac{(-1)^n}{-in} + \frac{1}{in} \right) \\ = (-1)^n - 1 + \frac{(-1)^n - 1}{n^2}$$</p> <p>So if $n$ is even then $\hat f(n) = 0$. Otherwise:</p> <p>$$\hat f(n) = \frac{1}{\pi} \left( -2 -\frac{2}{n^2} \right)$$</p> <p>but that doesn't make sense since we know that $\hat f(n) \to 0$.</p> <p>Where is my mistake? </p> <p><strong>EDIT</strong> it should be </p> <p>$$x e^{-inx}\Big|_0^{\pi} + \frac{1}{in} \int_0^\pi e^{-inx} dx = \frac{\pi e^{-in\pi}}{-in} + \frac{(-1)^n - 1}{n^2}$$</p> <p>So $$\hat f(n) = \frac{1}{\pi} \left( \frac{(-1)^n}{-in} + \frac{(-1)^n - 1}{n^2} \right)$$</p>
Arthur
15,500
<p>It has moved $\frac{35}{60}$ of a full circle (a full circle consists of $60$ minutes, and it has moved $35$ of those). A full circle is $2\pi\cdot15cm=30\pi cm$. It has therefore moved a total of $$ \frac{35}{60}\cdot30\pi cm=\frac{35\pi}{2}cm $$</p>
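In code, assuming as in the answer a $15\,cm$ hand that has swept $35$ of the $60$ minute marks:

```python
import math

radius_cm = 15          # length of the minute hand (from the answer)
minutes_swept = 35      # 35 of the 60 minute marks

arc_cm = (minutes_swept / 60) * 2 * math.pi * radius_cm
print(arc_cm)           # 35*pi/2 = 54.977871...
```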
1,913,873
<p>If $a,b,c,d,e&gt;1$, then prove that: $$\frac{a^2}{b-1} + \frac{b^2}{c-1} + \frac{d^2}{e-1} + \frac{c^2}{a-1} + \frac{e^2}{d-1} \ge 20. $$</p> <p>I don't know how to begin. What should be the approach?</p>
levap
32,262
<p>Note that we can rewrite your expression as</p> <p>$$ \left( \frac{a^2}{b-1} + \frac{b^2}{c-1} + \frac{c^2}{a-1} \right) + \left( \frac{d^2}{e-1} + \frac{e^2}{d-1} \right) \geq 20. $$</p> <p>Let us consider first the baby problem of finding the minimum value of $f(x) = \frac{x^2}{x-1}$ assuming $x &gt; 1$. The minimum is readily seen to be attained at $x = 2$ with value $4$. </p> <p>Next, consider the problem of finding the minimum value of</p> <p>$$ \frac{d^2}{e-1} + \frac{e^2}{d-1} = f(d) \frac{d-1}{e-1} + f(e) \frac{e-1}{d-1} $$</p> <p>assuming $d,e &gt; 1$. The arithmetic-geometric inequality shows that</p> <p>$$ f(d) \frac{d-1}{e-1} + f(e) \frac{e-1}{d-1} \geq 2 \sqrt{f(d)f(e)} \geq 2f(2) = 8 $$</p> <p>where the minimum value of $8$ is indeed attained when $d = e = 2$.</p> <p>Finally, consider the problem of finding the minimum value of</p> <p>$$ \frac{a^2}{b-1} + \frac{b^2}{c-1} + \frac{c^2}{a-1} = f(a)\frac{a-1}{b-1} + f(b)\frac{b-1}{c-1} + f(c)\frac{c-1}{a-1} $$</p> <p>assuming $a,b,c &gt; 1$. Again, the arithmetic-geometric inequality shows that</p> <p>$$ f(a)\frac{a-1}{b-1} + f(b)\frac{b-1}{c-1} + f(c)\frac{c-1}{a-1} \geq 3 (f(a)f(b)f(c))^{\frac{1}{3}} \geq 3f(2) = 12 $$</p> <p>where the minimum indeed attained at $a = b = c = 2$.</p> <p>Combining everything above, we obtain the required result.</p>
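A brute-force numerical spot check of the inequality and its equality case at $a=b=c=d=e=2$ (the random sampling below is my own addition, not part of the proof):

```python
import random

def expr(a, b, c, d, e):
    # a^2/(b-1) + b^2/(c-1) + d^2/(e-1) + c^2/(a-1) + e^2/(d-1)
    return (a * a / (b - 1) + b * b / (c - 1) + d * d / (e - 1)
            + c * c / (a - 1) + e * e / (d - 1))

random.seed(0)
samples = [expr(*(1.001 + 9 * random.random() for _ in range(5)))
           for _ in range(10000)]
print(expr(2, 2, 2, 2, 2), min(samples))   # 20.0, and a sampled minimum >= 20
```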
1,913,873
<p>If $a,b,c,d,e&gt;1$, then prove that: $$\frac{a^2}{b-1} + \frac{b^2}{c-1} + \frac{d^2}{e-1} + \frac{c^2}{a-1} + \frac{e^2}{d-1} \ge 20. $$</p> <p>I don't know how to begin. What should be the approach?</p>
Community
-1
<p>Applying the Cauchy-Schwarz inequality to the sequences \begin{align*} \frac{a_1}{\sqrt{x_1}}, \frac{a_2}{\sqrt{x_2}}, \ldots, \frac{a_n}{\sqrt{x_n}} \\ \sqrt{x_1}, \sqrt{x_2}, \ldots, \sqrt{x_n} \end{align*} we get \begin{align*} \frac{a_1^2}{x_1}+\frac{a_2^2}{x_2}+\cdots + \frac{a_n^2}{x_n} \geq \frac{(a_1+a_2+\cdots + a_n)^2}{x_1+x_2+\cdots+x_n} \end{align*} Hence we have \begin{align*} \frac{a^2}{b-1}+\frac{b^2}{c-1}+\frac{d^2}{e-1}+\frac{c^2}{a-1}+\frac{e^2}{d-1} \geq \frac{(a+b+c+d+e)^2}{a+b+c+d+e-5} \end{align*} Putting $x=a+b+c+d+e$, we need to prove \begin{align*} \frac{x^2}{x-5} \geq 20 \end{align*} Since $x &gt; 5$, this can be written as $x^2 -20x + 100 \geq 0$. This follows readily since \begin{align*} x^2-20x+100 = (x-10)^2\end{align*}</p> <p>The proof also shows that the numerators can be any permutation of $a^2,b^2,c^2,d^2,e^2$.</p>
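The one-variable reduction at the end can be checked directly: with $h(x)=x^2/(x-5)$, the identity $x^2-20x+100=(x-10)^2$ forces $h(x)\ge 20$ on $x&gt;5$, with equality only at $x=10$ (the grid below is just an illustration):

```python
def h(x):
    # the reduced bound (a+b+c+d+e)^2 / (a+b+c+d+e - 5), with x = a+b+c+d+e
    return x * x / (x - 5)

xs = [5 + 0.01 * k for k in range(1, 5001)]   # sample points in (5, 55]
vals = [h(x) for x in xs]
print(h(10), min(vals))   # 20.0 is the minimum, attained at x = 10
```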
185,766
<p>After studying general a linear algebra course, how would an advanced linear algebra course differ from the general course? </p> <p>And would an advanced linear algebra course be taught in graduate schools?</p>
Mathemagician1234
7,012
<p>First of all, it's not clear what an advanced course in linear algebra at either the undergraduate or graduate level consists of. It really depends on what the first course consists of and this varies enormously from university to university depending not only on the background and career paths of the students, but the aims of the instructor. It can be a largely applied course where rigorous theorems about linear transformations and abstract vector spaces are either largely avoided or downplayed, such as those based on Gilbert Strang's textbooks. It can also be a highly abstract course where applications are barely mentioned at all and the fine theoretical structure of finite dimensional vector spaces is developed in full detail, such as Axler's, Halmos' or Hoffman/Kunze's textbooks. And there are textbooks which try to steer a middle course between the 2 extremes, developing both theory and application in more or less equal measure. The classic example of this kind of course is Charles Curtis' textbook. </p> <p>I'm quite sympathetic to the last kind of textbook - finding both the theoretical and applied sides of linear algebra to be of equal importance in developing the subject in its fullest utility for both mathematicians and scientists. </p> <p>Then again, it's not that cut and dried often, either - often actual courses in linear algebra resist such simple classification - therefore, advanced courses to follow such classes up will be even more difficult to construct. Gilbert Strang's justly famous course at MIT, for example, is a course built around the applications of linear algebra to real world problems. But it's hardly a plug-and-chug, mindless algorithm course: Strang analyzes each application and algorithm, as well as the theory behind it, thoroughly. 
But at the same time, it's not really an abstract mathematics course the way we describe it - the deep theorems and proofs of linear algebra, while not ignored, are not really the core concerns of the class. For Strang, the abstract theory of linear algebra is really the domain of an abstract algebra course. (Indeed, Strang's course is partially designed to provide a mastery of the computational aspects of linear algebra needed for MIT students to go on to effectively study modern algebra in <em>Michael Artin's</em> equally famous course!). However, this is MIT we're talking about - hardly your average program with average mathematics majors. </p> <p>In my experience, when most people say "advanced linear algebra", they mean the abstract theory of linear operators in the context of modern algebra. At most universities, this material is covered in serious abstract algebra courses at either the honors undergraduate or first-year graduate level. This means the study of R-modules over commutative rings in the special case where R is a commutative division ring, i.e. a field. This means modules, algebras over R, submodules, R-module maps, product spaces, the Jordan-Holder theorem, tensor products, dual spaces, free modules and perhaps some elementary homological algebra. As I've said, this is usually covered in the student's first substantial year-long algebra course, at either the undergraduate or graduate levels.</p> <p>Also in my experience, there usually isn't a separate "advanced linear algebra" class for students at the advanced undergraduate or graduate level. But there <strong>are</strong> exceptions. For example, Peter Lax's linear algebra book is based on a graduate course on the subject that he's taught at NYU for many years, designed to bring incoming graduate students who are weak in linear algebra up to speed for a second-year functional analysis course. 
Lax became frustrated with the anemic skills in basic linear algebra most graduate students at NYU had and designed this course to rectify this very damaging lacuna in their training. </p> <p>If you're looking for a text that focuses purely on the abstract theory of linear operators, the best book is probably <em>Module Theory: An Approach To Linear Algebra</em> by T.S. Blyth. THE book on the subject and, sadly, out of print and hard to find. Here's to Dover republishing it. </p> <p>Hope that answers your question. </p>
185,766
<p>After studying general a linear algebra course, how would an advanced linear algebra course differ from the general course? </p> <p>And would an advanced linear algebra course be taught in graduate schools?</p>
Paul Siegel
1,509
<p>To answer your question succinctly, a first course on linear algebra should cover the basic computational tools: row reduction, determinants, and eigenvalues. A more advanced course should force the students to come to terms with more abstract language (vector spaces over an arbitrary field), and it should contain a sophisticated treatment of the spectral theorem. In an ideal world it would also introduce multilinear algebra and/or the canonical forms, but these topics are often reserved for graduate courses (often in the context of rings and modules).</p> <p>In practice there is quite a large gap between advanced undergraduate and graduate algebra. This can lead to strange circumstances wherein students first learn about tensor products in a differential geometry course or the Jordan canonical form in a number theory course. I recommend that undergraduates take as much linear algebra as possible since good graduate programs will often assume that students know more linear algebra than they do in practice.</p>
111,425
<p>If $R$ is a unital integral ring, then its characteristic is either $0$ or prime. If $R$ is a ring without unit, then the char of $R$ is defined to be the smallest positive integer $p$ s.t. $ pa = 0 $ for some nonzero element $a \in R$. I am not sure how to prove that the characteristic of an integral domain without a unit is still either $0$ or a prime $p$. I know that if $p$ is the char of $R$, then $px = 0 $ for all $x \in R$. If we assume $ p \neq 0 $ and $R$ has nonzero char, and $p$ factors into $nm$, then $ (nm) a = 0 $ , which means $ n (ma) = 0 $. Well $ma \neq 0$, because this would contradict the minimality of $p$ on $a$. But I don't know where to go from this point w/o invoking a unit. </p> <p>Edit: I had left out the assumption that $R$ is assumed to be a integral domain. This has been corrected. </p>
Mariano Suárez-Álvarez
274
<p>Suppose $p$ is the characteristic of $R$ and not prime, so that $p=mn$ for some positive integers $m, n &gt; 1$. In particular, $p&gt;n$ and $p&gt;m$. <em>According to the definition you are using</em>, $p$ is the least positive number such that there exists a non-zero $a\in R$ with $pa=0$: it follows that $na\neq0$, and then that moreover $m(na)\neq0$. This is absurd, of course, because $m(na)=(mn)a=pa$, as the addition in $R$ is associative.</p>
300,460
<p>How would we go about proving that $$\frac{1}{1\cdot 2} + \frac{1}{2\cdot 3} + \frac{1}{3 \cdot 4} +\ldots +\frac{1}{n(n+1)} = \frac{n}{n+1}$$</p>
Community
-1
<p>$\frac{1}{1\times 2}+\frac{1}{2\times 3}+\ldots+\frac{1}{n(n+1)}$</p> <p>$=(1-\frac12)+(\frac12-\frac13)+\ldots+(\frac{1}{n}-\frac{1}{n+1})$</p> <p>$=1-\frac{1}{n+1}$</p> <p>$=\frac{n}{n+1}$</p>
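The telescoping identity is easy to confirm with exact rational arithmetic (a small verification sketch, not part of the proof):

```python
from fractions import Fraction

def partial_sum(n):
    # sum of 1/(k(k+1)) for k = 1..n, computed exactly
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

ok = all(partial_sum(n) == Fraction(n, n + 1) for n in range(1, 100))
print(ok, partial_sum(3))   # True, 3/4
```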
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
Joseph Nelson Pulikkottil
415,350
<p>There are uncountably many positive real numbers. Take {An} with An = cn, where c is a positive real number. Doing this for every such c gives an increasing sequence for each, and uncountably many of them in total.</p>
4,164,650
<p>The outline of the exercise is: Fix <span class="math-container">$b &gt; 1, y &gt; 0$</span> and show that there is a unique real <span class="math-container">$x$</span> such that <span class="math-container">$b^x = y$</span>. (For further specification see e.g. <a href="https://math.stackexchange.com/questions/598890/baby-rudin-excercie-1-7-existence-of-logarithms">this post</a>) Part f.) requires showing that <span class="math-container">$b^x = y$</span> where <span class="math-container">$x = \sup A, A = \{w \in \mathbb{R}\mid b^w &lt; y\}$</span>. It is quite possible that I am too pedantic, but I am struggling to prove the part since I can't even show that <span class="math-container">$A$</span> is necessarily 1.) non-empty 2.) bounded above.</p> <p>The three cases to consider are that i.) <span class="math-container">$b &lt; y$</span>, ii.) <span class="math-container">$b = y$</span>, iii.) <span class="math-container">$b &gt; y$</span>. In the second case the claim is straightforward to prove. But how can you show in cases i.) and iii.) that <span class="math-container">$A$</span> is bounded above and non-empty, respectively? Specifically I am stuck at the thought that since we are proving the existence of the logarithm, and at this point in the book the author hasn't covered sequences, series and limits, you can't really make an argument that you can multiply/divide <span class="math-container">$b$</span> enough times by itself to get a value greater/less than <span class="math-container">$y$</span>.</p>
Blue
409
<p>The second-degree equation <span class="math-container">$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$</span> corresponds to a conic in <em>general position</em>. Moving its focus to the origin imposes certain relations on the coefficients that can be difficult to describe.</p> <p>It's easier to start with the definition that a (non-circle) conic is the locus of a point whose distance to a given point (the focus) is a constant (the eccentricity) times its distance to a given line (the directrix).</p> <p><span class="math-container">$$\text{distance to focus} = \text{eccentricity} \cdot \text{distance to directrix} \tag{0}$$</span></p> <hr /> <p>In the figure, <span class="math-container">$O$</span> is the focus, <span class="math-container">$P$</span> is the point on the conic, and <span class="math-container">$P'$</span> is the projection of <span class="math-container">$P$</span> onto the directrix, the line at distance <span class="math-container">$d$</span> from <span class="math-container">$O$</span> whose normal makes an angle <span class="math-container">$\theta_0$</span> with the <span class="math-container">$x$</span>-axis.</p> <p><a href="https://i.stack.imgur.com/8ZDrW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ZDrW.png" alt="enter image description here" /></a></p> <p>Writing <span class="math-container">$r$</span> for <span class="math-container">$|OP|$</span> and <span class="math-container">$\theta$</span> for the angle <span class="math-container">$\overline{OP}$</span> makes with the <span class="math-container">$x$</span>-axis (that is, taking <span class="math-container">$(r,\theta)$</span> to be the polar coordinates of <span class="math-container">$P$</span>), we have <span class="math-container">$$d = \frac{r}{e}+r\cos(\theta-\theta_0) \quad\to\quad r = \frac{de}{1+e\cos(\theta-\theta_0)} \tag{1}$$</span> It's customary to replace <span class="math-container">$de$</span> with, say, <span class="math-container">$\ell$</span> for the 
length of the semi-latus rectum (the length <span class="math-container">$r$</span> when <span class="math-container">$\overline{OP}$</span> is parallel to the line; ie, when <span class="math-container">$\theta-\theta_0=90^\circ$</span>). <span class="math-container">$$r = \frac{\ell}{1+e\cos(\theta-\theta_0)} \tag{2}$$</span> This allows the formula to apply to circles, as well. <span class="math-container">$\square$</span></p> <hr /> <p>For the corresponding Cartesian formulation, we square relation <span class="math-container">$(0)$</span> and write <span class="math-container">$$x^2+y^2=e^2 \left(x\cos\theta_0+y\sin\theta_0-d\right)^2 \tag{3}$$</span> Expanding and re-arranging, we have <span class="math-container">$$\begin{align} 0 &amp;= x^2(1-e^2\cos^2\theta_0)+y^2(1-e^2\sin^2\theta_0)-2xye^2\sin\theta_0\cos\theta_0 \\ &amp;\quad+2xde^2\cos\theta_0+2yde^2\sin\theta_0 -e^2d^2 \end{align} \tag{4}$$</span> Again, we can replace <span class="math-container">$de$</span> with semi-latus rectum <span class="math-container">$\ell$</span> to get a version that works with circles: <span class="math-container">$$\begin{align} 0 &amp;= x^2(1-e^2\cos^2\theta_0)+y^2(1-e^2\sin^2\theta_0)-2xye^2\sin\theta_0\cos\theta_0 \\ &amp;\quad+2xe\ell\cos\theta_0+2ye\ell\sin\theta_0 -\ell^2 \end{align} \tag{5}$$</span> I should note that multiplying-through by some arbitrary (non-zero) constant scales the coefficients of <span class="math-container">$(5)$</span> without affecting the conic itself. Accordingly, when comparing the general second-degree equation <span class="math-container">$Ax^2+Bxy+Cy^2+Dx+Ey+F=0$</span>, we have to allow for the constant, say <span class="math-container">$k$</span>, so that <span class="math-container">$A=k(1-e^2\cos^2\theta_0)$</span>, <span class="math-container">$C=k(1-e^2\sin^2\theta_0)$</span>, etc. 
This adds some complexity to retrieving the geometric parameters from the coefficients; see, for instance, <a href="https://math.stackexchange.com/a/801572/409">this answer</a>.</p>
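One can verify numerically that points generated by the polar form <span class="math-container">$(2)$</span> satisfy the Cartesian equation <span class="math-container">$(5)$</span> (the parameter values below are arbitrary samples of my choosing):

```python
import math

e, theta0, ell = 0.7, 0.3, 2.0   # eccentricity, directrix-normal angle, semi-latus rectum

def point_on_conic(theta):
    # polar form (2): r = ell / (1 + e*cos(theta - theta0)), focus at the origin
    r = ell / (1 + e * math.cos(theta - theta0))
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_residual(x, y):
    # the right-hand side of (5); it should vanish on the conic
    c, s = math.cos(theta0), math.sin(theta0)
    return (x * x * (1 - e * e * c * c) + y * y * (1 - e * e * s * s)
            - 2 * x * y * e * e * s * c
            + 2 * x * e * ell * c + 2 * y * e * ell * s - ell * ell)

residuals = [abs(cartesian_residual(*point_on_conic(2 * math.pi * k / 100)))
             for k in range(100)]
print(max(residuals))   # zero up to rounding error
```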
1,850,258
<p>From where can I learn mathematics from the basic blocks up? I feel like I have a lot of holes in the mathematics that I know and I would like to see where all those concepts come from. I would like to see which ideas are taken for granted, as foundations, and which ideas are built from those foundations.</p>
Mathmo123
154,802
<p>I think all four of your questions can be answered by looking at the following question:</p> <blockquote> <p>Which prime numbers $p$ can be written as a sum of two squares?</p> </blockquote> <p>i.e. when do there exist integers $a,b$ such that $$p = a^2+b^2.$$</p> <p>This is certainly a natural, number theoretic question to ask. And the answer relies heavily on the Gaussian integers. Indeed, if $p$ can be expressed in the above form, then, as an element of $\mathbb Z[i]$, $$p=(a+bi)(a-bi),$$ so we can rephrase our question as follows:</p> <blockquote> <p>For which prime numbers $p\in\mathbb Z$ does $p$ split as a product of two elements in $\mathbb Z[i]$?</p> </blockquote> <p>It turns out that the answer to your questions $3$ and $4$ is yes:</p> <ul> <li>An element $P\in \mathbb Z[i]$ is <em>prime</em> if whenever $x,y\in\mathbb Z[i]$ and $P\mid xy$, then $P\mid x$ or $P\mid y$.</li> <li>It turns out that this condition is equivalent to saying that whenever $P=xy$, then one of $x$ or $y$ is a unit - i.e. $\pm1,\pm i$.</li> <li>It also turns out that $\mathbb Z[i]$ is a <em>unique factorisation domain</em> - i.e. every element can be expressed uniquely as a product of primes (up to ordering and multiplication by units).</li> </ul> <p>In particular, we can rephrase our question once more:</p> <blockquote> <p>For which prime numbers $p\in \mathbb Z$ is $p$ no longer prime in $\mathbb Z[i]$?</p> </blockquote> <p>It turns out that this is a question we can answer using the arithmetic of $\mathbb Z[i]$. It is possible (but not easy) to show that $$p\text{ is no longer prime in }\mathbb Z[i]\iff X^2+1\text{ is reducible modulo }p.$$ Note that $X^2+1$ is the minimal polynomial of $i$. Using facts from elementary number theory, $-1$ is a square mod $p$ if and only if $p=2$ or $p\equiv 1 \pmod 4$. This gives an answer to our question.</p> <p>However, the story doesn't stop here. 
Let's say instead, we wanted to know which prime numbers $p$ can be written in the form $$p=a^2+5b^2?$$</p> <p>If we were to play the same game as before, we might want to consider the ring $\mathbb Z[\sqrt{-5}]$. However, in this setting we have a problem: we no longer have unique factorisation, since, for example, $$6 = (1+\sqrt{-5})(1-\sqrt{-5})=2\cdot 3.$$ It is problems like this which the field of algebraic number theory comes to answer.</p>
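The classification of the primes that are sums of two squares can be checked by brute force for small primes (a sketch; the trial-division primality test and the naive two-square search are my own additions):

```python
import math

def is_prime(n):
    # trial division; fine for small n
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def is_sum_of_two_squares(p):
    # search for a, b >= 0 with p = a^2 + b^2
    return any(a * a + math.isqrt(p - a * a) ** 2 == p
               for a in range(math.isqrt(p) + 1))

agree = all(is_sum_of_two_squares(p) == (p == 2 or p % 4 == 1)
            for p in range(2, 500) if is_prime(p))
print(agree)   # True: p = a^2 + b^2 iff p = 2 or p = 1 (mod 4)
```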
3,715,522
<p>I am trying to understand fully how drug half-life works. So I derived this relationship: </p> <p><span class="math-container">$$U_{r} = \frac{1+U_{r-1}}{2}$$</span> where <span class="math-container">$U_{0}=0$</span> and <span class="math-container">$r$</span> ranges over the natural numbers.</p> <p>My issue is how to deduce a relationship for the sum to infinity:</p> <p><span class="math-container">$$S_{\infty}=\lim_{n\to\infty} \sum_{r=1}^n U_{r}$$</span></p> <p>Consequently I need to get the relationship for <span class="math-container">$S_{\infty}$</span> if <span class="math-container">$U_{r} = \frac{A+U_{r-1}}{2}$</span> and <span class="math-container">$U_{0}=0$</span></p>
aritracb
782,875
<p><span class="math-container">$U_r=\frac{1}{2}+\frac{U_{r-1}}{2}=\frac{1}{2}+\frac{1}{4}+\frac{U_{r-2}}{4}=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{U_{r-3}}{8}=...=\sum_{i=1}^{r}\frac{1}{2^i}+\frac{U_0}{2^r}=1-\frac{1}{2^r}$</span></p> <p>and hence <span class="math-container">$\sum_{1}^{n}U_r=n-\sum_1^n\frac{1}{2^r}$</span></p>
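The unrolled recurrence and the closed form can be confirmed exactly with rational arithmetic (function names are my own; the optional argument covers the general <span class="math-container">$A$</span> from the question):

```python
from fractions import Fraction

def U(r, A=Fraction(1)):
    # U_r = (A + U_{r-1}) / 2 with U_0 = 0
    u = Fraction(0)
    for _ in range(r):
        u = (A + u) / 2
    return u

closed_ok = all(U(r) == 1 - Fraction(1, 2**r) for r in range(20))
partial = sum(U(r) for r in range(1, 11))   # equals n - sum of 1/2^r for n = 10
print(closed_ok, partial)                    # True, 9217/1024
```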
663,563
<p>It seems obvious that this integral is zero, and so is the limit, but what theorem are we using here?</p> <p>I see it's connected to Riemann sums over an interval of length zero, right?</p> <p>The function $\mathrm{f}$ is continuous.</p> <p>$$\lim_{x \to 0}\int_0^x\mathrm{f}(t)\ \mathrm{d}t= \ ?$$</p>
Martín-Blas Pérez Pinilla
98,199
<p>Another approach, valid when $f$ is continuous: the integral is a function $F$ s.t. $F′=f$ and $F(0)=0$. By continuity, $\lim F=F(0)=0$.</p>
4,315,572
<p>Exercise:</p> <p>Let us assume that the function <span class="math-container">$f$</span> has derivatives of all orders.</p> <p>Suppose that all zeros of <span class="math-container">$f$</span> have finite multiplicity. Let <span class="math-container">$a$</span> and <span class="math-container">$b$</span> be points of <span class="math-container">$A$</span>, such that <span class="math-container">$a&lt;b$</span> and neither point is a zero. Show that <span class="math-container">$f$</span> has at most finitely many zeros in <span class="math-container">$] a, b[$</span>.</p> <p>(We say that a point <span class="math-container">$c$</span> is a root of <span class="math-container">$f(x)=0$</span> with multiplicity <span class="math-container">$m$</span> if <span class="math-container">$f^{(k)}(c)=0$</span> for <span class="math-container">$k=0, \ldots, m-1$</span> and <span class="math-container">$f^{(m)}(c) \neq 0.$</span> As usual, <span class="math-container">$f^{(0)}$</span> denotes <span class="math-container">$f$</span>.)</p> <p>Lemma:</p> <p>A zero of finite multiplicity is an isolated point of the set of zeros.</p> <p>Proof:</p> <p>Considering the Taylor remainder <span class="math-container">$E(h)=f(c+h)-\left(f(c)+\frac{1}{1 !} f^{\prime}(c) h+\frac{1}{2 !} f^{\prime \prime}(c) h^{2}+\cdots+\frac{1}{m !} f^{(m)}(c) h^{m}\right)$</span> and using L'Hôpital's Rule, we can show that <span class="math-container">$\lim\limits_{h \rightarrow 0} \frac{E(h)}{h^{m}}=0$</span>. Because of the multiplicity, for <span class="math-container">$h \neq 0$</span> we can write <span class="math-container">$$ \frac{f(c+h)-f(c)}{h^{m}}=\frac{1}{m !} f^{(m)}(c)+\frac{E(h)}{h^{m}} $$</span> and from this we can deduce the lemma.</p> <p>So if all zeros of <span class="math-container">$f$</span> have finite multiplicity, then all zeros are isolated. 
Since the interval is bounded, we can use the Bolzano–Weierstrass theorem to show that if <span class="math-container">$f$</span> had infinitely many zeros, then at least one of them would be a limit point of zeros, contradicting the fact that the zeros are isolated.</p> <p>But why do I need <span class="math-container">$f(a)$</span> and <span class="math-container">$f(b)$</span> to be nonzero, as stated in the exercise? I suspect the intermediate value theorem and the Taylor polynomial with remainder are needed, but I don't know how to use them.</p> <p>Remark (maybe this is a hint that I cannot figure out): the next exercise asks: in the previous exercise, if <span class="math-container">$f(a)$</span> and <span class="math-container">$f(b)$</span> have the same sign, show that the number of zeros in <span class="math-container">$]a, b[$</span>, counted by multiplicity, is even. If <span class="math-container">$f(a)$</span> and <span class="math-container">$f(b)$</span> have opposite signs, show that the number of zeros in <span class="math-container">$]a, b[$</span>, counted by multiplicity, is odd.</p>
emil agazade
801,252
<p>Considering the very useful comments and the answer of @TonyK, and after long thought, I have come to the following conclusion.</p> <p>What we know:</p> <ol> <li><p>All zeros are isolated.</p> </li> <li><p>We are given an open bounded interval, but its endpoints lie in the domain of the function.</p> </li> </ol> <p>To find a contradiction, assume that there are infinitely many zeros in <span class="math-container">$(a,b)$</span>. If so, we can construct a sequence <span class="math-container">$\left(a_{n}\right)_{n=1}^{\infty}$</span> with <span class="math-container">$f(a_{n})=0$</span>. Since the interval is bounded, by the Bolzano–Weierstrass theorem we have a subsequence <span class="math-container">$\left(a_{k_{n}}\right)_{n=1}^{\infty}$</span> with <span class="math-container">$\lim\limits _{n \rightarrow \infty} a_{k_{n}}=t$</span>, and by continuity <span class="math-container">$f(t)=0$</span>.</p> <p>If <span class="math-container">$t=a$</span> then <span class="math-container">$f(a)=0$</span>, which is excluded by the assumption of the exercise. Similarly <span class="math-container">$t=b \Rightarrow f(b)=0$</span> is ruled out. So <span class="math-container">$t$</span> must be an interior point of the interval <span class="math-container">$(a,b)$</span>, and it is a zero of <span class="math-container">$f$</span> that is a limit point of other zeros. But a zero that is a limit point of zeros contradicts the fact that &quot;all zeros are isolated&quot;.</p> <p>I would be grateful if somebody checked my answer.</p>
1,700,246
<p>Let $F=\mathbb{F}_{q}$, where $q$ is an odd prime power. Let $e,f,d$ be a standard basis for the $3$-dimensional orthogonal space $V$, i.e. $(e,e)=(f,f)=(e,d)=(f,d)=0$ and $(e,f)=(d,d)=1$. I have an element $g\in SO_{3}(q)$ defined by: $g: e\mapsto -e$, $f\mapsto \frac{1}{2}e -f +d$, $d\mapsto e+d$. I would like to determine the spinor norm of this element using Proposition 1.6.11 in the book 'The Maximal Subgroups of the Low-Dimensional Finite Classical Groups' by Bray, Holt and Roney-Dougal.</p> <p>The proposition is quite long to state, so it would be handy if someone who can help already has a copy of the book to refer to. If not, then please let me know and I can post what the proposition says.</p> <p>Following the proposition, I have the matrices $$A:=I_{3}-g=\left( \begin{array}{ccc} 2 &amp; -\frac{1}{2} &amp; -1 \\ 0 &amp; 2 &amp; 0 \\ 0 &amp; -1 &amp; 0 \end{array} \right),$$ $$F:= \textrm{matrix of the invariant symmetric bilinear form} =\left( \begin{array}{ccc} 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{array} \right).$$</p> <p>If $k$ is the rank of $A$, the proposition says to let $B$ be a $k\times 3$ matrix whose rows form a basis of a complement of the nullspace of $A$. I have that $\ker A=\langle(1,0,2)^{T}\rangle$. Now, by the way the proposition is stated, it seems as if it does not make a difference which complement is taken. However, I have tried $3$ different complements and I get contradictory answers each time.</p> <p>Try 1) Orthogonal complement of $\ker A$, where $B=\left( \begin{array}{ccc} 1 &amp; 0 &amp; 0 \\ 0 &amp; -2 &amp; 1 \end{array} \right)$. This gives me $\det(BAFB^{T})=-25$.</p> <p>Try 2) $B=\left( \begin{array}{ccc} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{array} \right)$. This gives me $\det(BAFB^{T})=-4$.</p> <p>Try 3) $B=\left( \begin{array}{ccc} 0 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 \end{array} \right)$. 
This gives me $\det(BAFB^{T})=0$.</p> <p>The problem is that, whatever complement I take, the determinants should all be non-zero squares at the same time, but this is not the case.</p> <p>I am not sure if I have misunderstood how to use the proposition. Any help will be appreciated. Thanks.</p>
Joe
107,639
<p>Obviously $r^2\ge0$ so if $y\le0$ the required inequality can't be reached.</p> <p>If otherwise $y&gt;0$ you can consider $\sqrt y$ which is again $&gt;0$; then you have two cases:</p> <ul> <li>$x\ge0$; in this case you have $\sqrt x&lt;\sqrt y$ and since $\Bbb Q$ is dense in $\Bbb R$, you can find an $r\in\Bbb Q$ such that $\sqrt x&lt;r&lt;\sqrt y$ from which you get the conclusion;</li> <li>$x&lt;0$; in this case you have $x&lt;0&lt;y$ so you can conclude simply taking $r=0$.</li> </ul>
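As a concrete companion to the density argument (a sketch only: the function name is made up, and it assumes $0 \le x &lt; y$, the interesting case above), one can search denominators with Python's exact <code>Fraction</code> type until a rational $r$ with $x &lt; r^2 &lt; y$ turns up:

```python
from fractions import Fraction

def rational_square_between(x, y):
    """Search for a rational r with x < r**2 < y, assuming 0 <= x < y."""
    d = 1
    while True:
        # smallest numerator n with (n/d)**2 > x
        n = 1
        while Fraction(n, d) ** 2 <= x:
            n += 1
        r = Fraction(n, d)
        if r ** 2 < y:          # success: x < r**2 < y
            return r
        d += 1                  # otherwise refine the denominator

print(rational_square_between(2, 3))   # 3/2, since 2 < 9/4 < 3
```

The search terminates precisely because of the density of $\Bbb Q$ in $\Bbb R$ used in the answer.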
1,182,684
<p>Let $\mathbf{F}(x,y,z) = y \hat{i} + x \hat{j} + z^2 \hat{k}$ be a vector field. Determine if it is conservative, and find a potential if it is.</p> <p><strong>Attempt at solution:</strong></p> <p>We have that $\frac{\partial F_1}{\partial y} = 1 = \frac{\partial F_2}{\partial x} $, $\frac{\partial F_1}{\partial z} = 0 = \frac{\partial F_3}{\partial x}$, $\frac{\partial F_2}{\partial z} = 0 = \frac{\partial F_3}{\partial y}$, so the potential might exist.</p> <p>Now we need to find a function $f$ such that $\nabla f = \mathbf{F}$.</p> <p>For the first component, this means that $\frac{\partial f(x,y,z)}{\partial x} = y $, or after integrating, $f(x,y,z) = yx + C(y,z)$. Now I don't know how to determine the constant of integration $C(y,z)$, and I don't understand if I should add another constant when I integrate the second component.</p> <p>For the second component, we have that $f(x,y,z) = xy + D(x,z)$, and for the third $f(x,y,z) = \frac{z^3}{3} + E(x,y)$. What now?</p> <p>Any help please? In my textbook this is explained in a really terrible way.</p>
Mark Viola
218,419
<p>You don't have to find the integration constant immediately. Keep proceeding as follows. </p> <p>After you determined that $f(x,y,z) = xy+g(y,z)$, differentiate with respect to $y$. </p> <p>This gives $\frac{\partial f}{\partial y}=x+\frac{\partial g}{\partial y}=F_y=x$. </p> <p>Thus, $\frac{\partial g}{\partial y}=0$, which implies that $g$ is a function of $z$ only. In turn, this means that $f(x,y,z)=xy+h(z)$.</p> <p>Next, differentiate $f$ with respect to $z$. </p> <p>This gives $\frac{\partial f}{\partial z}=h'(z)=F_z=z^2$.</p> <p>Thus, $h(z)=\frac13z^3+C$. </p> <p>Finally, $f(x,y,z)=xy+h(z)=xy+\frac13z^3+C$.</p> <p>To check this, we have $$\vec F=\nabla f(x,y,z)$$</p> <p>$$=\hat x\frac{\partial f}{\partial x}+\hat y\frac{\partial f}{\partial y}+\hat z\frac{\partial f}{\partial z}$$</p> <p>$$=\hat xy+\hat yx+\hat zz^2$$which completes the task!</p>
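The result can be double-checked symbolically; the following is just a sanity check (assuming <code>sympy</code> is available), recomputing the gradient of the potential found above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*y + z**3/3          # the potential found above (additive constant dropped)
F = (y, x, z**2)          # components of the given field

grad_f = tuple(sp.diff(f, v) for v in (x, y, z))
assert grad_f == F        # gradient matches the field component-by-component
print(grad_f)             # (y, x, z**2)
```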
165,853
<blockquote> <p>Schauder's conjecture: &quot;<em>Every continuous function, from a nonempty compact and convex set in a (Hausdorff) topological vector space into itself, has a fixed point.</em>&quot; [Problem 54 in The Scottish Book]</p> </blockquote> <p>I wonder whether this conjecture is resolved. I know R. Cauty [Solution du problème de point fixe de Schauder, Fund. Math. 170 (2001) 231–246] proposed an answer, but apparently in the international conference &quot;Fixed Point Theory and its Applications&quot; in 2005, T. Dobrowolski remarked that there is a gap in the proof.</p>
Mohammad Golshani
11,115
<p>I have taken the following from the review of the following paper &quot;<a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2778675" rel="nofollow noreferrer">Schauder's conjecture on convex metric spaces</a>&quot; written in 2010 :</p> <blockquote> <p>One of the most resistant open problems in the theory of nonlocally convex linear metric spaces is:</p> <p>Schauder's Conjecture. Let <span class="math-container">$E$</span> be a compact convex subset in a topological vector space. Then any continuous mapping <span class="math-container">$f:E\to E$</span> has a fixed point.</p> <p>In this paper, the authors prove that it holds for convex metric spaces and consequently compact convex subsets of a <span class="math-container">$CAT(0)$</span> space have the fixed point property for continuous mappings.</p> </blockquote> <p>So it seems that the problem in its general form is still open.</p>
165,853
<blockquote> <p>Schauder's conjecture: &quot;<em>Every continuous function, from a nonempty compact and convex set in a (Hausdorff) topological vector space into itself, has a fixed point.</em>&quot; [Problem 54 in The Scottish Book]</p> </blockquote> <p>I wonder whether this conjecture is resolved. I know R. Cauty [Solution du problème de point fixe de Schauder, Fund. Math. 170 (2001) 231–246] proposed an answer, but apparently in the international conference &quot;Fixed Point Theory and its Applications&quot; in 2005, T. Dobrowolski remarked that there is a gap in the proof.</p>
jaco
49,821
<p>There is an R. Cauty paper from 2012 titled 'Un théorème de Lefschetz–Hopf pour les fonctions à itérées compactes' which, from what I heard, was reviewed to establish a correct proof of Schauder's conjecture (or rather a generalization for iterates of f), and will appear in an international journal.</p> <p>Edit: published: R. Cauty, J. Reine Angew. Math. 729 (2017), 1–27 <a href="https://doi.org/10.1515/crelle-2014-0134" rel="nofollow noreferrer">DOI link</a></p>
173,286
<p>I have these two functions <code>fun</code> and <code>microstep</code>.Fun makes use of a Module construct within which I define the <code>Array</code> I need to store the values of magnetization for different temperatures (each case stored in a different row). <code>microstep</code> is the function that store the data at the correct position at each step of the Monte Carlo algorithm. The monte Carlo procedure doesn't matter really much now, what bothers me is that when I define the magnetization array inside fun, the function doesn't work properly:</p> <pre><code> fun [numbofsets_, nsteps_] := Module [{confinit, magnetization, index}, index = MapIndexed[ { #2[[1]], # } &amp;, numbofsets ]; (* {index, temp} tuple*) confinit = RandomChoice[{-1, 1}, {10, 10}]; (* initial random matrix *) magnetization = ConstantArray[ 0, {Length@numbofsets, nsteps}]; Table[ NestList[ microstep[ ##[[1]], ##[[2]], ##[[3]], ##[[4]] ] &amp; \ , { index[[i, 1]], index[[i, 2]] , confinit, 2 } , nsteps]; , {i, 1, Length@numbofsets}]; ] </code></pre> <p>and</p> <pre><code>microstep[tindex_, temp_, matrix_, mcindex_] := Module[{ tempmatrix = matrix, dimx, dimy, x , y , it = 1/temp, down, up, left, right, spinsum , randnum, bool = False , J = 1 }, (* generic Metropolis Alghoritm *) dimx = Dimensions[matrix][[1]]; dimy = Dimensions[matrix][[2]]; x = RandomInteger[{1, dimx}]; y = RandomInteger[{1, dimy}]; randnum = RandomReal[]; spinsum = Plus[Compile`GetElement[matrix, Mod[x + 1, dimx, 1], y], Compile`GetElement[matrix, Mod[x - 1, dimx, 1], y], Compile`GetElement[matrix, x, Mod[y - 1, dimy, 1]], Compile`GetElement[matrix, x, Mod[y + 1, dimy, 1]]]; If[2*J *spinsum*tempmatrix[[x, y]] &lt; 0 \[Or] randnum &lt; E^(- it*2*J*tempmatrix[[x, y]]*spinsum) , tempmatrix[[x, y]] = -Compile`GetElement[matrix, x, y]; bool = True ]; (* tricky part starts here *) If[bool, magnetization[[tindex, mcindex]] = Abs[(magnetization[[tindex, mcindex - 1]] + 2 *tempmatrix[[x, y]])] ; , magnetization[[tindex, 
mcindex]] = magnetization[[tindex, mcindex - 1]]; ]; {tindex, temp, tempmatrix, mcindex + 1} ] </code></pre> <p>Now if I run</p> <pre><code>fun [{2, 3, 4}, 10] </code></pre> <p>I get</p> <blockquote> <p>"Part specification magnetization[[1,1]] is longer than depth of object"</p> </blockquote> <p>Meanwhile, if I declare the magnetization array outside the Module, the function works properly, giving me the correctly stored values, but it forces me to use global variables:</p> <pre><code>magnetization = ConstantArray[0, {3, 11}]; fun [{2, 3, 4}, 10]; magnetization </code></pre> <blockquote> <p>{ {0, 2, 0, 2, 4, 6, 8, 6, 8, 6, 6}, {0, 2, 2, 0, 2, 2, 4, 4, 2, 0, 2}, {0, 2, 4, 6, 8, 10, 8, 6, 6, 4, 4} }</p> </blockquote> <p>I think the problem arises from the fact that Module is a scoping construct; I thought that a function called inside it would see the local variable, but it doesn't, and I don't know how to solve the problem. In C-like languages pointers can be used; is there anything similar in Mathematica? Also, as always, any suggestion is appreciated.</p>
enano9314
32,571
<p>Just give your function <code>microstep[tindex_, temp_, matrix_, mcindex_]</code> an extra argument, so make it <code>microstep[tindex_, temp_, matrix_, mcindex_, magnetization_]</code> and feed <code>magnetization</code> in as an argument.</p> <p>By default <code>microstep</code> is looking for <code>magnetization</code> in the global scope, so you need to either make <code>magnetization</code> global or feed it in as an argument.</p> <p>Additionally, if you want to rewrite values (i.e. pass by reference instead of pass by value, which is what the above would do) you can pass in arguments in their <code>Held</code> form.</p> <p>So your downvalue would now be <code>microstep[tindex_, temp_, matrix_, mcindex_, Hold[magnetization_]]</code>; obviously, change <code>fun</code> to call <code>microstep</code> with the extra argument.</p> <p>If this is not clear, I can update with more code examples when I get to my machine that has Mma on it.</p> <p>Cheers!</p> <p>Edit -- It seems that if you look at @Heinrik's answer, he implemented what I was thinking.</p>
1,762,001
<p>I recently watched a <a href="https://www.youtube.com/watch?v=SrU9YDoXE88" rel="noreferrer">video about different infinities</a>. That there is $\aleph_0$, then $\omega, \omega+1, \ldots 2\omega, \ldots, \omega^2, \ldots, \omega^\omega, \varepsilon_0, \aleph_1, \omega_1, \ldots, \omega_\omega$, etc..</p> <p>I can't find myself in all of this. Why there are so many infinities, and why even bother to classify infinity, when infinity is just... infinity? <strong>Why do we use all of these symbols? What does even each type of infinity mean?</strong></p>
Q the Platypus
264,438
<p>The major motivation for classifying infinities (other than the intrinsic enjoyment of mathematics) is that different infinities permit different properties.</p> <p>Countable infinities can be reasoned about using inductive proofs. On the other hand, many of the properties analysis makes use of require uncountable sets.</p> <p>Having different infinities often makes it easier to show that two sets are not isomorphic, by allowing us to determine the cardinality of each.</p>
1,762,001
<p>I recently watched a <a href="https://www.youtube.com/watch?v=SrU9YDoXE88" rel="noreferrer">video about different infinities</a>. That there is $\aleph_0$, then $\omega, \omega+1, \ldots 2\omega, \ldots, \omega^2, \ldots, \omega^\omega, \varepsilon_0, \aleph_1, \omega_1, \ldots, \omega_\omega$, etc..</p> <p>I can't find myself in all of this. Why there are so many infinities, and why even bother to classify infinity, when infinity is just... infinity? <strong>Why do we use all of these symbols? What does even each type of infinity mean?</strong></p>
Chill2Macht
327,486
<p>If it's any consolation, in practice you won't encounter more than two different types of infinity: that corresponding to the natural numbers, and that corresponding to the real numbers (the cardinality of the continuum).</p> <p>The reason why it's necessary to differentiate between those two is that the "size" (cardinality) of the real numbers corresponds to the cardinality of the <em>power set</em> of the natural numbers.</p> <p>An elementary theorem states that the "size" (cardinality) of any set is always strictly smaller than that of its power set, and the same holds true for the natural numbers.</p> <p>Hence many of the tricks that work for the "infinity" associated with the natural numbers do not work for the "infinity" associated with their power set/the real numbers.</p> <p>For example, you can "add up" some sequences of "infinitely many" non-zero numbers and still get a finite result, provided that the infinity involved corresponds to that of the natural numbers.</p> <p>However, if the infinity involved corresponds to the real numbers (i.e. an uncountable sum of positive numbers), the sum will always be infinite, no matter how small you try to make the individual terms.</p> <p>It is discrepancies like these which necessitate that we differentiate (in particular) between these two different types of infinities. However, at least speaking as someone who specializes in probability and statistics, any other type of infinity does not occur as often.</p>
1,762,001
<p>I recently watched a <a href="https://www.youtube.com/watch?v=SrU9YDoXE88" rel="noreferrer">video about different infinities</a>. That there is $\aleph_0$, then $\omega, \omega+1, \ldots 2\omega, \ldots, \omega^2, \ldots, \omega^\omega, \varepsilon_0, \aleph_1, \omega_1, \ldots, \omega_\omega$, etc..</p> <p>I can't find myself in all of this. Why there are so many infinities, and why even bother to classify infinity, when infinity is just... infinity? <strong>Why do we use all of these symbols? What does even each type of infinity mean?</strong></p>
Community
-1
<blockquote> <p>infinity is just... infinity?</p> </blockquote> <p>This is your problem; this is simply wrong. Now, to be fair, it's a fairly well entrenched wrong idea because mankind has spent thousands of years trying to reason about the infinite, and we've only really figured out <em>how</em> to do so in the past hundred years or so, and there hasn't really been enough time for this knowledge to seep into the 'common knowledge' of laypersons.</p> <p>Most of the things you wrote down are <strong>ordinal numbers</strong>. Ordinal numbers are used to quantify things called <strong>well order types</strong>, and collectively formalize a particular generalization of the idea of 'counting'. We have "so many symbols" because we need to be able to write down the quantities we're talking about &mdash; it's the same reason we have lots of symbols for integers, such as $1, 2, 3, 10, 11, 20, 134301, 10^{100}, \ldots$</p> <p>One thing I want to point out now: ordinal numbers and cardinal numbers are different ideas, but fairly closely related. They have absolutely nothing to do with the symbol $\infty$ that you encounter in calculus. For that, look up the <strong>extended real numbers</strong>.</p> <p>An <strong>order type</strong> is basically the 'shape' of an <strong>ordered set</strong>. The ordinal number $\omega$ is the shape of the natural numbers, although there are lots of other ordered sets with the same shape, for example:</p> <ul> <li>The set of all even natural numbers, with their usual ordering</li> <li>The set of all powers of 2, with their usual ordering</li> </ul> <p>An example of a different, but still infinite order type is that of the <em>integers</em>. 
We can easily see that the integers and the natural numbers have different order types, because the natural numbers have a least element, but the integers do not.</p> <p>The integers do not form a <em>well</em> order type, though, so there isn't an ordinal number to quantify them.</p> <p>The important thing to realize is that order types can have infinitely many values... and then still have even more, larger values. As a simple example, the real numbers (with their usual ordering) have an order type (again, it's not a well-order type). There are infinitely many values in the interval $(0,1)$, but nonetheless there are more real numbers even larger than all of those!</p> <p>Now, consider the set of values $X = \{ 0, 1/2, 3/4, 7/8, 15/16, \ldots \} $; i.e. the values of the form $1 - 2^{-n}$ for natural numbers $n$. This may be familiar from Zeno's paradoxes.</p> <p>This set of numbers with its usual ordering <em>is</em> a well-order type &mdash; it's yet another example of the order type $\omega$.</p> <p>Now, let $Y$ be the set you get from $X$ by adding an additional point that is larger than everything already in $X$. Let's consider specifically the set $Y = X \cup \{ 1 \}$ with its usual ordering.</p> <p>The ordered set $Y$ is a well-order type, and an infinite one at that, but it's <em>not</em> $\omega$. Two particular features of $Y$ are:</p> <ul> <li>$Y$ has a largest element $1$</li> <li>$1$ does not have an immediate predecessor, despite there existing smaller values</li> </ul> <p>We call this well-order type $\omega + 1$, since we got it by starting with $X$ (which has well-order type $\omega$), and afterwards adding one extra point (which has well-order type 1).</p>
1,165,207
<p>From browsing the internet so far, I've come to the conclusion that an ordered tuple is something in which there is no repetition of the elements, e.g. an ordered 4-tuple is (1,2,3,4) or (5,3,1,7), i.e. no elements are repeated.</p> <p>But an unordered 4-tuple is something like (1,2,2,3) or (7,1,4,1).</p> <p>Is this a valid difference between an ordered and an unordered tuple? If yes, are there any other differences?</p> <p>Thanks in advance.</p>
Robert Israel
8,508
<p>No, that's completely wrong. Whether ordered or unordered, there may or may not be repetitions. The correct distinction is that in an ordered tuple the order counts, and in an unordered tuple it doesn't. So $(1,2,2,3)$ and $(2,1,2,3)$ are different as ordered $4$-tuples, but they are the same as unordered $4$-tuples.</p>
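A tiny Python sketch of the distinction (here an unordered tuple is modeled as a multiset, e.g. via <code>collections.Counter</code>):

```python
from collections import Counter

a = (1, 2, 2, 3)
b = (2, 1, 2, 3)

# As ordered 4-tuples, the order of the entries matters:
print(a == b)                      # False

# Viewed as unordered 4-tuples (multisets), only how many times
# each element occurs matters:
print(Counter(a) == Counter(b))    # True
```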
3,920,812
<p>Please excuse if the formatting of this post is wrong.</p> <p>There's a question that asks for the 2nd derivative of <span class="math-container">$y-2x-3xy=2$</span></p> <p>From what I know, I have to use implicit differentiation, using which I get: <span class="math-container">$$\frac{12+18y}{(1-3x)^{2}}$$</span> But can you solve for y in the initial equation and differentiate two times (aka explicit differentiation)? By doing that I got: <span class="math-container">$$\frac{48}{(1-3x)^{3}}$$</span> I'm not sure if this is a correct answer as I am new to differentiation. I guess that this question is also tied with another question; can you substitute y in implicit differentiation by solving for it in the initial equation?</p> <p>Thank you.</p>
boojum
882,145
<p>(This was too long to be a comment.)</p> <p>You can substitute <span class="math-container">$ \ y \ $</span> into the implicit differentiation calculation when it is a single function suggested by the original equation. Often, though, that equation may be describing two or more implicit functions and it is not necessarily straightforward to select the correct one for substitution.</p> <p>You can get away with it in <em>this</em> problem since there is <em>just one</em> explicit function, <span class="math-container">$ \ y \ = \ \frac{2·(1 \ + \ x)}{1 \ - \ 3x} \ \ . $</span> Inserting that into your first result gives <span class="math-container">$$ y'' \ \ = \ \ \frac{12 \ + \ 18y}{(1 \ - \ 3x)^2 } \ \ = \ \ \frac{12 \ + \ 18·\left[ \ \frac{2·(1 \ + \ x) }{1 \ - \ 3x} \ \right]}{(1 \ - \ 3x)^2} \ · \ \frac{1 \ - \ 3x}{1 \ - \ 3x} $$</span> <span class="math-container">$$ = \ \ \frac{12 · (1 \ - \ 3x) \ + \ 18· 2·(1 \ + \ x) }{(1 \ - \ 3x)^3 } \ \ = \ \ \frac{12 \ - \ 36x \ + \ 36 \ + \ 36x }{(1 \ - \ 3x)^3 } \ \ = \ \ \frac{48}{(1 \ - \ 3x)^3 } \ \ . $$</span></p> <p>So you were doing fine here. Since this is an old question, you likely soon found for other implicit differentiation problems that the avenues for substitution were far more limited.</p>
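The agreement between the implicit and explicit routes can also be confirmed mechanically; here is a short symbolic check (a sketch, assuming <code>sympy</code> is available):

```python
import sympy as sp

x = sp.symbols('x')
y = 2*(1 + x)/(1 - 3*x)            # explicit solution of y - 2x - 3xy = 2

# the explicit formula satisfies the original equation identically:
assert sp.simplify(y - 2*x - 3*x*y - 2) == 0

# differentiating twice reproduces the result above:
y2 = sp.diff(y, x, 2)
assert sp.simplify(y2 - 48/(1 - 3*x)**3) == 0
```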
3,961,131
<p>Suppose we are given 2 predicates <span class="math-container">$A(x)$</span> and <span class="math-container">$B(x)$</span> with domain <span class="math-container">$M$</span>.</p> <p>Suppose next we are given the following predicate <span class="math-container">$$\neg (A(x) \land B(x)) \land (\forall x(A(x) \rightarrow B(x)))$$</span> which we know is true, so <span class="math-container">$$\neg (A(x) \land B(x)) \land (\forall x(A(x) \rightarrow B(x))) = 1$$</span></p> <p>The question is how does it restrict the truth sets of <span class="math-container">$A(x)$</span> and <span class="math-container">$B(x)?$</span></p> <p>It is obvious that we have <span class="math-container">$$\neg (A(x) \land B(x)) = 1 \\ A(x) \land B(x) = 0\\ A(x) = 0 \lor B(x) = 0$$</span> So from that we get that either truth set for <span class="math-container">$A(x)$</span> is <span class="math-container">$E_A \neq M$</span> or truth set for <span class="math-container">$B(x)$</span> is <span class="math-container">$E_B \neq M$</span>.</p> <p>But knowing that <span class="math-container">$$\forall x(A(x) \rightarrow B(x)) = 1\\ \forall x(\neg A(x) \lor B(x)) = 1$$</span> I have no idea how to link it to useful information on truth sets of <span class="math-container">$A(x)$</span> and <span class="math-container">$B(x)$</span>, any suggestions?</p>
Ross Millikan
1,827
<p>You are getting hung up on truth sets when there is only one free variable in the problem. Focus on one element <span class="math-container">$x$</span> of <span class="math-container">$M$</span> and ask whether <span class="math-container">$A(x)$</span> and <span class="math-container">$B(x)$</span> can be true, because all the elements are treated alike. You have found that <span class="math-container">$A(x)$</span> and <span class="math-container">$B(x)$</span> cannot both be true, but that <span class="math-container">$A(x) \implies B(x)$</span>. You should be able to derive that <span class="math-container">$A(x)$</span> is false and <span class="math-container">$B(x)$</span> can be anything. Check that in the original axiom and it works.</p>
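Since every element is treated alike, the constraint can even be checked by brute force over the four truth-value combinations (a throwaway sketch in Python):

```python
from itertools import product

# brute-force a single element x: which truth values of A(x), B(x)
# satisfy  not(A and B)  and  (A implies B)?
solutions = [(A, B) for A, B in product([False, True], repeat=2)
             if not (A and B) and ((not A) or B)]

print(solutions)   # [(False, False), (False, True)]
# so A(x) is forced to be False, while B(x) is unconstrained
```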
401,389
<p>I worked through some examples of Bayes' Theorem and now was reading the proof.</p> <p>Bayes' Theorem states the following:</p> <blockquote> <p>Suppose that the sample space S is partitioned into disjoint subsets <span class="math-container">$B_1, B_2,...,B_n$</span>. That is, <span class="math-container">$S = B_1 \cup B_2 \cup \cdots \cup B_n$</span>, <span class="math-container">$\Pr(B_i) &gt; 0$</span> <span class="math-container">$\forall i=1,2,...,n$</span> and <span class="math-container">$B_i \cap B_j = \varnothing$</span> <span class="math-container">$\forall i\ne j$</span>. Then for an event A,</p> <p><span class="math-container">$\Pr(B_j \mid A)=\cfrac{\Pr(B_j \cap A)}{\Pr(A)}=\cfrac{\Pr(B_j) \cdot \Pr(A \mid B_j)}{\sum\limits_{i=1}^{n}\Pr(B_i) \cdot \Pr(A \mid B_i)}\tag{1}$</span></p> </blockquote> <p><img src="https://i.stack.imgur.com/nnbZU.png" alt="enter image description here" /></p> <p>The numerator is just from the definition of conditional probability in multiplicative form.</p> <p>For the denominator, I read the following:</p> <p><span class="math-container">$A= A \cap S= A \cap (B_1 \cup B_2 \cup \cdots \cup B_n)=(A \cap B_1) \cup (A\cap B_2) \cup \cdots \cup(A \cap B_n)\tag{2}$</span></p> <p>Now this is what I don't understand:</p> <blockquote> <p><strong>The sets <span class="math-container">$A \cup B_i$</span> are disjoint because the sets <span class="math-container">$B_1, B_2, ..., B_n$</span> form a partition.<span class="math-container">$\tag{$\clubsuit$}$</span></strong></p> </blockquote> <p>I don't see how that is inferred or why that is the case. What does the <span class="math-container">$B_i$</span> forming a partition have to do with these sets being disjoint? Can someone please explain this conceptually or via an example?</p> <p>I worked one example where you had 3 coolers and in each cooler you had either root beer or soda. So the first node would be which cooler you would choose and the second nodes would be whether you choose root beer or soda. 
But I don't see why these would be disjoint. If anything, I would say they weren't disjoint because each cooler contains both types of drinks.</p> <p>Thank you in advance! :)</p>
amWhy
9,003
<p>As Tharsis pointed out, and as was clarified in the comments, it is all of the sets given by $\;(A \cap B_i),\; 1 \leq i \leq n\;$ that are pairwise disjoint:</p> <p>$$\;(A \cap B_i)\cap (A \cap B_j) = \varnothing,\;\;\;\forall i, j,\;\;\text{s.t.}\;\;1 \leq i, j\leq n\;\;\text{and}\;\;i\neq j$$</p> <p>e.g., $(A\cap B_1)$ is disjoint from $(A\cap B_2)$, but $(A\cup B_1)$ is certainly not disjoint from $(A\cup B_2)$, etc.</p> <p>Pay attention to the distinction between $\cap$ and $\cup$.</p>
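A toy numerical illustration (the sets here are made up purely for this sketch) of why the intersection pieces are pairwise disjoint whenever the $B_i$ partition $S$:

```python
# illustrative only: a partition B_1, B_2, B_3 of S and an event A
S = {1, 2, 3, 4, 5, 6}
B = [{1, 2}, {3, 4}, {5, 6}]
A = {2, 3, 5}

pieces = [A & Bi for Bi in B]      # the sets A ∩ B_i
print(pieces)                      # [{2}, {3}, {5}]

# pairwise disjoint, and their union recovers A (this is equation (2)):
assert all(pieces[i].isdisjoint(pieces[j])
           for i in range(len(B)) for j in range(i + 1, len(B)))
assert set().union(*pieces) == A
```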
401,389
<p>I worked through some examples of Bayes' Theorem and now was reading the proof.</p> <p>Bayes' Theorem states the following:</p> <blockquote> <p>Suppose that the sample space S is partitioned into disjoint subsets <span class="math-container">$B_1, B_2,...,B_n$</span>. That is, <span class="math-container">$S = B_1 \cup B_2 \cup \cdots \cup B_n$</span>, <span class="math-container">$\Pr(B_i) &gt; 0$</span> <span class="math-container">$\forall i=1,2,...,n$</span> and <span class="math-container">$B_i \cap B_j = \varnothing$</span> <span class="math-container">$\forall i\ne j$</span>. Then for an event A,</p> <p><span class="math-container">$\Pr(B_j \mid A)=\cfrac{\Pr(B_j \cap A)}{\Pr(A)}=\cfrac{\Pr(B_j) \cdot \Pr(A \mid B_j)}{\sum\limits_{i=1}^{n}\Pr(B_i) \cdot \Pr(A \mid B_i)}\tag{1}$</span></p> </blockquote> <p><img src="https://i.stack.imgur.com/nnbZU.png" alt="enter image description here" /></p> <p>The numerator is just from the definition of conditional probability in multiplicative form.</p> <p>For the denominator, I read the following:</p> <p><span class="math-container">$A= A \cap S= A \cap (B_1 \cup B_2 \cup \cdots \cup B_n)=(A \cap B_1) \cup (A\cap B_2) \cup \cdots \cup(A \cap B_n)\tag{2}$</span></p> <p>Now this is what I don't understand:</p> <blockquote> <p><strong>The sets <span class="math-container">$A \cup B_i$</span> are disjoint because the sets <span class="math-container">$B_1, B_2, ..., B_n$</span> form a partition.<span class="math-container">$\tag{$\clubsuit$}$</span></strong></p> </blockquote> <p>I don't see how that is inferred or why that is the case. What does the <span class="math-container">$B_i$</span> forming a partition have to do with these sets being disjoint? Can someone please explain this conceptually or via an example?</p> <p>I worked one example where you had 3 coolers and in each cooler you had either root beer or soda. So the first node would be which cooler you would choose and the second nodes would be whether you choose root beer or soda. 
But I don't see why these would be disjoint. If anything, I would say they weren't disjoint because each cooler contains both types of drinks.</p> <p>Thank you in advance! :)</p>
broccoli
50,577
<p><a href="http://bayesianthink.blogspot.com/2012/08/understanding-bayesian-inference.html" rel="nofollow">Here</a> is a write-up that describes Bayes' theorem in detail, along with a bunch of examples of how to use it (several different write-ups).</p>
1,647,673
<p>Prove or disprove that $$\left|a_1\right|+\left|a_2\right|+\ldots+\left|a_n\right|\leq n\sqrt{a_1^2+\ldots+a_n^2},$$</p> <p>where $a_1,\ldots,a_n\in\mathbb{R}$ and $n\in\mathbb{N}$.</p> <p>EDIT: I was hoping there is a way without using a known inequality, i.e. to prove directly that $RHS-LHS\geq 0$.</p>
Michael Rozenberg
190,319
<p>I think you mean $RHS=\sqrt{n(a_1^2+a_2^2+...+a_n^2)}$.</p> <p>If so, we have</p> <p>$RHS-LHS=\frac{\sum\limits_{1\leq i&lt;j\leq n}\left(|a_i|-|a_j|\right)^2}{RHS+LHS}\geq0,$ which is what you wished to show.</p>
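The algebraic fact behind this is the identity $n\sum a_i^2-\left(\sum |a_i|\right)^2=\sum_{1\le i&lt;j\le n}(|a_i|-|a_j|)^2$, i.e. $RHS^2-LHS^2$ is a sum of squares. A randomized numerical spot-check (a sketch; the sample sizes and tolerances are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(100):
    n = random.randint(1, 8)
    a = [random.uniform(-10, 10) for _ in range(n)]
    lhs = n * sum(t * t for t in a) - sum(abs(t) for t in a) ** 2
    rhs = sum((abs(a[i]) - abs(a[j])) ** 2
              for i in range(n) for j in range(i + 1, n))
    # the two sides agree up to floating-point round-off
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-6)
print("identity holds on all random samples")
```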
760,195
<p>I've seen this proof in a text. I have an issue with it and wanted to check its validity. </p> <p>Let $X\sim B(n,p)$, we seek the expectation. We let $q=1-p$ \begin{equation} E(X)=\sum_{j=0}^{n} j {n\choose j} p^{j}q^{n-j}=p\partial_{p}\sum_{j=0}^{n} {n\choose j} p^{j}q^{n-j}=\underline{p\partial_{p} (p+q)^{n}} \quad\text{(Binomial Theorem)} \\ =pn(p+q)^{n-1}. \end{equation} Now plugging in $p+q=1$ gives the required result. However at the underlined step ($p\partial_{p} (p+q)^{n}$), plugging in $p+q=1$ gives $0$. </p> <p>I'm aware of a couple of other ways to show this result so I am really just interested in whether this proof is valid. I'm not convinced that specifying a specific stage when we can plug in values constitutes a valid proof.</p> <p>Thanks.</p>
DanielV
97,045
<p>There are a few common notations in mathematics where an object is used as a function, but it hides a declaration of an assumption.</p> <p>One example is the line integral: consider (A0), the proposition $X = \oint_{\omega} y\,dz$ . This is actually two propositions, one hiding in the other:</p> <p>$$X = \int_\omega y\,dz \tag{A1}$$ $$\omega \text{ is a closed manifold } \tag{A2}$$</p> <p>Suppose you were to assume (A0), derive some results, and then later add the assumption (A3) that $\omega$ is no longer a closed manifold. Your results would be vacuous, since even though (A2) was never explicitly stated, it contradicts (A3).</p> <p>...</p> <p>Partial derivatives are another example of a function that hides an assumption. The assumption (a0) that $x = \frac{\partial y} {\partial z}$ is actually two (or more) propositions:</p> <p>$$x = \frac {dy}{dz} \tag{a1}$$ $$0 = \frac {d\text{ you have to guess these variables} } {dz} \tag{a2}$$</p> <p>In your example, the formula $\frac{\partial \text{ stuff}}{\partial p}$ is hiding the assumption that $\frac{d q}{d p} = 0$. The further addition of $p + q = 1$, and thus $\frac{dq}{dp} = -1$, makes the final result vacuous, as you observed by the fact that you can get two different answers, <em>if you take what he wrote literally</em>.</p> <p>This is one of those situations where a pedantic person has to be told "read what I meant, not what I wrote". The common practice of writing mathematics to be easily read rather than absolutely pedantically perfect is what makes the study tolerable, although it makes things like peer review very long painful processes of repeatedly asking "what did you really mean here?".</p> <p>The author meant for you to infer that there is a way the derivation could be done which is correct. 
Effectively, first you derive (d3):</p> <p>Consider the functions $$F_n(x,y) = \sum_{j=0}^{n} j {n\choose j} x^{j}y^{n-j} \tag{d1}$$ $$G_n(x,y) = \sum_{j=0}^{n} {n\choose j} x^{j}y^{n-j} \tag{d2}$$ $$x \frac{dG_n}{dx} = F_n \tag{d3}$$</p> <p>Then you apply (d3) to the case of the binomial expectation. (d3) is derived just as a statement about functions, so $x$ and $y$ have no meaning outside of (d3), and thus the additional assumption $p + q = 1$ has nothing to contradict. Actually writing things out that way is tedious though, so except for the few people who are interested in formal logic, a blind eye tends to be turned.</p>
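If it helps, the point that (d3) is an identity of functions, independent of $p+q=1$, can be checked numerically; the following sketch evaluates it at a point where $x+y\neq 1$, and only specializes afterwards:

```python
from math import comb

def F(n, x, y):
    # F_n(x, y) = sum_j j * C(n, j) * x^j * y^(n-j)
    return sum(j * comb(n, j) * x**j * y**(n - j) for j in range(n + 1))

def G(n, x, y):
    # G_n(x, y) = sum_j C(n, j) * x^j * y^(n-j) = (x + y)^n
    return sum(comb(n, j) * x**j * y**(n - j) for j in range(n + 1))

# Check (d3): x * dG/dx == F, via a central finite difference, at a point
# where x + y != 1, so nothing is accidentally specialized too early.
n, x, y, h = 7, 0.3, 0.9, 1e-6
dG_dx = (G(n, x + h, y) - G(n, x - h, y)) / (2 * h)
assert abs(x * dG_dx - F(n, x, y)) < 1e-4

# Only afterwards set x = p, y = 1 - p, recovering E[X] = n * p.
p = 0.3
assert abs(F(n, p, 1 - p) - n * p) < 1e-9
```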
236,927
<p>I thought this would be a hard problem, but I found a link that seems to assign this question as a homework problem. Can someone help me out here: are there infinitely many prime powers that differ by 1, or only finitely many? If so, which are they?</p>
André Nicolas
6,312
<p><a href="http://en.wikipedia.org/wiki/Catalan%27s_conjecture" rel="nofollow">Catalan's Conjecture,</a> a theorem since $2002$, shows that the only example where both exponents are $\ge 2$ is $3^2-2^3=1$.</p> <p>If we allow an exponent equal to $1$, the answer is not known. Perhaps there are infinitely many Fermat primes, that is, primes of the form $2^n+1$ (it turns out that for $2^n+1$ to be prime, we need $n$ to have the shape $2^k$). </p> <p>For many years, only five such primes have been known. There may not be others, or there may be finitely many others, or infinitely many. At the current time, we do not know.</p>
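For what it's worth, the exponents-$\ge 2$ statement is easy to confirm by brute force for small numbers (the bound below is arbitrary):

```python
# Collect all perfect powers x^p with x, p >= 2 up to a bound, then look
# for consecutive pairs.  Catalan/Mihailescu says (8, 9) is the only one.
LIMIT = 10**6

powers = set()
base = 2
while base * base <= LIMIT:
    v = base * base
    while v <= LIMIT:
        powers.add(v)
        v *= base
    base += 1

pairs = sorted((v, v + 1) for v in powers if v + 1 in powers)
assert pairs == [(8, 9)]
```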
236,927
<p>I thought this would be a hard problem, but I found a link that seems to assign this question as a homework problem. Can someone help me out here: are there infinitely many prime powers that differ by 1, or only finitely many? If so, which are they?</p>
Paolo Leonetti
45,736
<p>In number theory, Mihailescu's theorem is the solution to a famous and old conjecture formulated by the French mathematician Eugène Charles Catalan in $1844$ (see "Note extraite d'une lettre adressée à l'éditeur"). Although it was proved in April $2002$, it appeared for the first time in Crelle's Journal in $2004$. Its formulation is really easy: there is a unique pair of powers of natural numbers that differ by $1$. Formally:</p> <p>If $x,y,p,q$ are integers greater than $1$ and $x^p-y^q=1$, then $x=q=3$ and $y=p=2$.</p> <p>If someone is really interested in a complete proof, it can be found in Catalan's Conjecture, Springer, 2007. Otherwise, I can assure you that some highlight cases can be completely solved with elementary tools (I actually made a .pdf file collecting these proofs; if needed I will attach it here):</p> <ul> <li><p>Case $2\mid q$;</p></li> <li><p>Case $2\mid p$;</p></li> <li><p>Case $x=q$ (completely solved by myself; I actually cannot find references for it);</p></li> <li><p>Case $y\mid x-1$ (some of its corollaries are still used as olympiad problems).</p></li> </ul> <p>With regard to the original question, only the case that a prime $p$ is of the form $q^m \pm 1$ for some prime $q$ and positive integer $m$ remains. Avoiding trivial cases, if $q$ is odd then $2\mid q^m\pm 1$, i.e. $p$ must be of the form $2^m \pm 1$. If a prime $p$ is of the form $2^m-1$ it is called a "Mersenne prime"; if a prime is of the form $2^{2^k}+1$ it is called a "Fermat prime". </p> <p>The fact is that, up to now, it is still unknown whether either of these sets of primes is finite or infinite.</p>
348,324
<p>In an interview, the interviewer asked me the following, but I failed to give the answer. </p> <p>$\{0,1\}^\mathbb{N}$ with the product topology is homeomorphic to which subset of $\mathbb{R}$?</p> <p>Can anyone give me the answer and explain it, please? Thanks for your kind help.</p>
Fly by Night
38,495
<p>This idea was dealt with by Roger Penrose on page 370 of his book "<em><a href="http://www.amazon.co.uk/Road-Reality-Complete-Guide-Universe/dp/0099440687" rel="nofollow">The Road to Reality: A Complete Guide to the Laws of the Universe</a></em>".</p> <p>The set is called the <a href="http://en.wikipedia.org/wiki/Cantor_set" rel="nofollow">Cantor Set</a>, and according to Wikipedia: As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space $\{0, 1\}$, where each copy carries the discrete topology. This is the space of all sequences of two digits. </p>
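The homeomorphism itself is concrete: send a $0$–$1$ sequence $(a_1, a_2, \dots)$ to $\sum_i 2a_i/3^i$. A small sketch (finite truncations only) checking that such points have only the digits $0$ and $2$ in base $3$, which is the usual characterization of the Cantor set:

```python
from fractions import Fraction
from itertools import product

def to_cantor(bits):
    # Map a finite 0/1 sequence to sum of 2*a_i / 3^i (exact arithmetic).
    return sum(Fraction(2 * b, 3**i) for i, b in enumerate(bits, start=1))

def ternary_digits(x, k):
    # First k base-3 digits of x in [0, 1).
    digits = []
    for _ in range(k):
        x *= 3
        d = int(x)  # floor, since x >= 0
        digits.append(d)
        x -= d
    return digits

for bits in product((0, 1), repeat=5):
    x = to_cantor(bits)
    assert all(d in (0, 2) for d in ternary_digits(x, 5))
```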
1,412,869
<p>Does it make sense to talk about Christoffel symbols in flat space time? Do they have non-zero values? I understand that the Christoffel symbols appear as an indication of curvature in space. So, are they non-existent in flat space-time?</p>
Victor Caran
529,004
<p>As frakbak explained, one has a notion of Christoffel symbols in flat spacetime, as they basically record information about the derivatives of the metric tensor with respect to the different coordinates. This makes a kind of sense, since changes of the metric tensor describe local changes of a scalar product, which encodes information about changes of projections onto a new coordinate basis, featuring a new tangent plane. While such changes of the projection onto a new coordinate basis are inevitable in an arbitrarily curved spacetime, in flat spacetime you can deliberately provide the covariant derivative with such unnatural changes by switching to an "inappropriate", curved coordinate system. </p>
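A concrete illustration (my own sketch, not tied to any particular textbook): the flat Euclidean plane in polar coordinates has metric $g=\mathrm{diag}(1, r^2)$, and the symbols $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=1/r$ are nonzero even though the space is flat. A finite-difference check of the standard formula $\Gamma^k_{ij}=\frac12 g^{kl}(\partial_i g_{jl}+\partial_j g_{il}-\partial_l g_{ij})$:

```python
def metric(r, theta):
    # Flat plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2
    return [[1.0, 0.0], [0.0, r * r]]

def christoffel(coords, h=1e-5):
    # Derivatives of the metric taken by central differences.
    def dg(axis):
        p, m = list(coords), list(coords)
        p[axis] += h
        m[axis] -= h
        gp, gm = metric(*p), metric(*m)
        return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)]
                for i in range(2)]
    d = [dg(0), dg(1)]               # d[i][j][l] = partial_i g_{jl}
    r = coords[0]
    ginv = [[1.0, 0.0], [0.0, 1.0 / (r * r)]]  # inverse of diagonal metric
    Gam = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for k in range(2):
        for i in range(2):
            for j in range(2):
                Gam[k][i][j] = 0.5 * sum(
                    ginv[k][l] * (d[i][j][l] + d[j][i][l] - d[l][i][j])
                    for l in range(2))
    return Gam

G = christoffel([2.0, 0.5])
assert abs(G[0][1][1] - (-2.0)) < 1e-6   # Gamma^r_{theta theta} = -r
assert abs(G[1][0][1] - 0.5) < 1e-6      # Gamma^theta_{r theta} = 1/r
```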
3,746,630
<p>So I am solving some probability/finance books and I've gone through two similar problems that conflict in their answers.</p> <h2>Paul Wilmott</h2> <p>The first book is Paul Wilmott's <a href="https://smile.amazon.com/Frequently-Asked-Questions-Quantitative-Finance/dp/0470748753" rel="nofollow noreferrer">Frequently Asked Questions in Quantitative Finance</a>. This book poses the following question:</p> <blockquote> <p>Every day a trader either makes 50% with probability 0.6 or loses 50% with probability 0.4. What is the probability the trader will be ahead at the end of a year, 260 trading days? Over what number of days does the trader have the maximum probability of making money?</p> </blockquote> <p><strong>Solution:</strong></p> <blockquote> <p>This is a nice one because it is extremely counterintuitive. At first glance it looks like you are going to make money in the long run, but this is not the case. Let n be the number of days on which you make 50%. After <span class="math-container">$n$</span> days your returns, <span class="math-container">$R_n$</span> will be: <span class="math-container">$$R_n = 1.5^n 0.5^{260−n}$$</span> So the question can be recast in terms of finding <span class="math-container">$n$</span> for which this expression is equal to 1.</p> </blockquote> <p>He does some math, which you can do as well, that leads to <span class="math-container">$n=164.04$</span>. So a trader needs to win at least 165 days to make a profit. He then says that the average profit <em>per day</em> is:</p> <blockquote> <p><span class="math-container">$1−e^{0.6 \ln1.5 + 0.4\ln0.5}$</span> = −3.34%</p> </blockquote> <p>Which is mathematically wrong, but assuming he just switched the numbers and it should be:</p> <blockquote> <p><span class="math-container">$e^{0.6 \ln1.5 + 0.4\ln0.5} - 1$</span> = −3.34%</p> </blockquote> <p>That still doesn't make sense to me. Why are the probabilities in the exponents? 
I don't get Wilmott's approach here.</p> <p>*PS: I ignore the second question, just focused on daily average return here.</p> <hr /> <h2>Mark Joshi</h2> <p>The second book is Mark Joshi's <a href="https://smile.amazon.com/Quant-Interview-Questions-Answers-Second/dp/0987122827" rel="nofollow noreferrer">Quant Job Interview Questions and Answers</a>, which poses this question:</p> <blockquote> <p>Suppose you have a fair coin. You start off with a dollar, and if you toss an <em>H</em> your position doubles, if you toss a <em>T</em> it halves. What is the expected value of your portfolio if you toss infinitely?</p> </blockquote> <p><strong>Solution</strong></p> <blockquote> <p>Let <span class="math-container">$X$</span> denote a toss, then: <span class="math-container">$$E(X) = \frac{1}{2}*2 + \frac{1}{2}\frac{1}{2} = \frac{5}{4}$$</span> So for <span class="math-container">$n$</span> tosses: <span class="math-container">$$R_n = (\frac{5}{4})^n$$</span> Which tends to infinity as <span class="math-container">$n$</span> tends to infinity</p> </blockquote> <hr /> <hr /> <p>Uhm, excuse me what? Who is right here and who is wrong? Why do they use different formulas? Using Wilmott's (second, corrected) formula for Joshi's situation, I get that the average return per day is:</p> <p><span class="math-container">$$ e^{0.5\ln(2) + 0.5\ln(0.5)} - 1 = 0\% $$</span></p> <p>I ran a Python simulation of this, simulating <span class="math-container">$n$</span> days/tosses/whatever, and it seems that the above is not correct. Joshi was right, the portfolio tends to infinity. Wilmott was also right, the portfolio goes to zero when I use his parameters.</p> <p>Wilmott also explicitly dismisses Joshi's approach saying:</p> <blockquote> <p>As well as being counterintuitive, this question does give a nice insight into money management and is clearly related to the Kelly criterion. 
If you see a question like this it is meant to trick you if the expected profit, here 0.6 × 0.5 + 0.4 × (−0.5) = 0.1, is positive with the expected return, here −3.34%, negative.</p> </blockquote> <p>So what is going on?</p> <p>Here is the code:</p> <pre><code>import random

def traderToss(n_tries, p_win, win_ratio, loss_ratio):
    SIM = 10**5  # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        curr = 1  # Starting portfolio
        for _ in range(n_tries):  # number of flips/days/whatever
            if random.random() &lt; p_win:
                curr *= win_ratio
            else:
                curr *= loss_ratio
        ret += curr  # add portfolio value after this simulation
    print(ret/SIM)  # Print average return value (E[X])
</code></pre> <p>Use: <code>traderToss(260, 0.6, 1.5, 0.5)</code> to test Wilmott's trader scenario.</p> <p>Use: <code>traderToss(260, 0.5, 2, 0.5)</code> to test Joshi's coin flip scenario.</p> <hr /> <hr /> <p>Thanks to the followup comments from Robert Shore and Steve Kass below, I have figured out one part of the issue. <strong>Joshi's answer assumes you play once, therefore the returns would be additive and not multiplicative.</strong> His question is vague enough, using the word &quot;your portfolio&quot;, suggesting we place our returns back in for each consecutive toss. 
If this were the case, we need the <a href="https://www.investopedia.com/articles/investing/071113/breaking-down-geometric-mean.asp" rel="nofollow noreferrer">geometric mean</a>, not the arithmetic mean, which is the expected value calculation he does.</p> <p>This is verifiable by changing the Python simulation to:</p> <pre><code>import random

def traderToss():
    SIM = 10**5  # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        if random.random() &lt; 0.5:
            curr = 2  # Our portfolio becomes 2
        else:
            curr = 0.5  # Our portfolio becomes 0.5
        ret += curr
    print(ret/SIM)  # Print single day return
</code></pre> <p>This yields <span class="math-container">$\approx 1.25$</span> as in the book.</p> <p>However, if returns are multiplicative, we need a different approach, which I assume is Wilmott's formula. This is where I'm stuck, because I still don't understand the Wilmott formula. Why is the end-of-day portfolio on average:</p> <p><span class="math-container">$$ R_{day} = r_1^{p_1} * r_2^{p_2} * .... * r_n^{p_n} $$</span></p> <p>Where <span class="math-container">$r_i$</span> and <span class="math-container">$p_i$</span> are the portfolio multiplier and probability for each scenario <span class="math-container">$i$</span>, and there are <span class="math-container">$n$</span> possible scenarios. Where does this (generalized) formula come from in probability theory? This isn't a geometric mean. Then what is it?</p>
Ingix
393,096
<p>The answer by Brian M. Scott shows the main effect at work: In the Wilmott scenario, the <strong>expected value</strong> after 260 trading days is enormous, but the probability of it actually being <strong>higher than the starting value</strong> is small. That's not at odds with each other; it's how the expected value works when you can in theory make enormous gains.</p> <p>Wilmott does make an error however when he calculates the average profit per day, which is where you don't understand the formula. I'll try to show how he got there, and where the error was:</p> <p>If you run his scenario for <span class="math-container">$k$</span> days, then the number of &quot;good days&quot; (where you make <span class="math-container">$50\%$</span>) is a random variable <span class="math-container">$G_k$</span>, and the number of &quot;bad days&quot; (where you lose <span class="math-container">$50\%$</span>) is also a random variable <span class="math-container">$B_k$</span>. Both variables follow a <a href="https://en.wikipedia.org/wiki/Binomial_distribution" rel="nofollow noreferrer">binomial distribution</a> (Bin), where <span class="math-container">$G_k \sim \text{Bin}(k,0.6)$</span> and <span class="math-container">$B_k \sim \text{Bin}(k,0.4)$</span>.</p> <p>Now, Wilmott correctly uses the fact that the value of the return after <span class="math-container">$k$</span> days is</p> <p><span class="math-container">$$R=1.5^{G_k}0.5^{B_k}.$$</span></p> <p>You can find this in his formula</p> <p><span class="math-container">$$R_n=1.5^n0.5^{260-n},$$</span></p> <p>because this talks about the <span class="math-container">$260$</span> trading days scenario (so <span class="math-container">$k=260$</span>) and he defined <span class="math-container">$n$</span> to be the number of good days you have (so <span class="math-container">$n=G_{260}$</span>). This shows that the first terms (<span class="math-container">$1.5$</span> to the power of something) are the same in both formulas. 
Also, we have <span class="math-container">$G_k+B_k=k$</span> (each day is either good or bad), so <span class="math-container">$B_{260}=260-G_{260}=260-n$</span>, which shows that the second terms (<span class="math-container">$0.5$</span> to the power of something) are also the same.</p> <p>Again, up to here everything is correct. We have <span class="math-container">$R=1.5^{G_k}0.5^{B_k}$</span>, so <span class="math-container">$R$</span> is also a random variable. Now we know what the expected values of <span class="math-container">$G_k$</span> and <span class="math-container">$B_k$</span> are; for a binomial distribution that's easy to calculate:</p> <p><span class="math-container">$$E(G_k)=0.6k,\, E(B_k)=0.4k.$$</span></p> <p>The <strong>error</strong>, I assume, was to use the above correct formulas and <strong>incorrectly</strong> conclude that</p> <p><span class="math-container">$$E(R)\overset{\color{red}{\text{wrong}}}{=}1.5^{E(G_k)}0.5^{E(B_k)}=1.5^{0.6k}0.5^{0.4k}=\left(1.5^{0.6}\times0.5^{0.4}\right)^k.$$</span></p> <p>The last part of the formula seems to show that the return changes by a factor of <span class="math-container">$$1.5^{0.6}\times0.5^{0.4} = e^{0.6\ln(1.5)+0.4\ln(0.5)} \approx 0.9666 $$</span> each day, which corresponds to the <span class="math-container">$3.34\%$</span> loss per day they calculated.</p> <p>The error here is that if <span class="math-container">$f(x)$</span> is any non-linear function, then if <span class="math-container">$X$</span> is a random variable then generally</p> <p><span class="math-container">$$E(f(X)) \neq f (E(X))$$</span></p>
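To see the gap concretely: by independence across days, $E(R)$ has the closed form $(0.6\cdot 1.5 + 0.4\cdot 0.5)^k = 1.1^k$, which grows, while the wrong formula decays. A short numeric check:

```python
from math import comb

p, up, down = 0.6, 1.5, 0.5
k = 10

# Exact E[R] over the binomial distribution of good days
exact = sum(comb(k, n) * p**n * (1 - p)**(k - n) * up**n * down**(k - n)
            for n in range(k + 1))

# Closed form via the binomial theorem: E[R] = (p*up + (1-p)*down)^k
assert abs(exact - (p * up + (1 - p) * down) ** k) < 1e-9

# The incorrect "plug expectations into the exponents" value
wrong = (up ** (p * k)) * (down ** ((1 - p) * k))

assert exact > 1      # the expected value actually grows
assert wrong < 1      # the wrong formula predicts steady decay
```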
1,884,852
<p>Suppose there are <em>k</em> dice thrown. Let <em>M</em> denote the minimum of the <em>k</em> numbers rolled. </p> <p>I've learned that finding the individual probability is:</p> <p>$$P(M = m) = P(M \ge m) - P(M \ge m + 1) $$</p> <p>Can someone please explain this to me? I've tried plugging in values for $m = 1, ... , 6$ but it isn't clear to me how that formula is derived. </p>
Sarvesh Ravichandran Iyer
316,409
<p>Note that:</p> <p>$$ \Pr[M \geq m] = \sum_{i=m}^\infty \Pr[M = i] $$ because if $M \geq m$, then either $M=m$ or $M=m+1$ or $M=m+2$ or $M=m+3$ or ... (by the way, the infinite sum exists because it is bounded above by 1)</p> <p>Similarly, $$ \Pr[M \geq (m+1)] = \sum_{i=m+1}^\infty \Pr[M = i] $$ because if $M \geq (m+1)$, then either $M=m+1$ or $M=m+2$ or $M=m+3$ or ...</p> <p>Now, you subtract, and what do you get? $$ \Pr[M \geq m] - \Pr[M \geq (m+1)] = \Pr[M = m] $$</p> <p>So that difference is easy. The problem itself is too, but I'll write a separate answer for that.</p>
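Since the state space is tiny, the identity can be verified by exhaustive enumeration in exact rational arithmetic (no sampling):

```python
from fractions import Fraction
from itertools import product

k = 3  # number of dice; 6^3 outcomes is small enough to enumerate

outcomes = list(product(range(1, 7), repeat=k))
total = len(outcomes)

def p_min_at_least(m):
    return Fraction(sum(1 for o in outcomes if min(o) >= m), total)

def p_min_equals(m):
    return Fraction(sum(1 for o in outcomes if min(o) == m), total)

for m in range(1, 7):
    assert p_min_equals(m) == p_min_at_least(m) - p_min_at_least(m + 1)

# Closed form for comparison: P(M >= m) = ((7 - m)/6)^k
for m in range(1, 7):
    assert p_min_at_least(m) == Fraction(7 - m, 6) ** k
```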
4,036,896
<p>Let <span class="math-container">$\boldsymbol{A}$</span> be a real symmetric matrix and <span class="math-container">$\boldsymbol{B}$</span> a real antisymmetric matrix with <span class="math-container">$\boldsymbol{A}^2 = \boldsymbol{B}^2$</span>; prove <span class="math-container">$\boldsymbol{A} = \boldsymbol{B} = \boldsymbol{0}$</span>.</p> <hr /> <p>I tried the second-order case. Let <span class="math-container">$\boldsymbol{A} = \begin{bmatrix} a_{11} &amp; a_{12} \\ a_{12} &amp; a_{22} \end{bmatrix}$</span>, <span class="math-container">$\boldsymbol{B} = \begin{bmatrix} b_{11} &amp; b_{12} \\ {-b_{12}} &amp; b_{22} \end{bmatrix}$</span>, <span class="math-container">$a_{ij}, b_{ij} \in \mathbb{R}$</span>.</p> <p><span class="math-container">\begin{align} \boldsymbol{A}^2 &amp;= \begin{bmatrix} {a_{11}^2 + a_{12}^2} &amp; {a_{11}a_{12} + a_{12}a_{22}} \\ {a_{11}a_{12} + a_{12}a_{22}} &amp; {a_{12}^2 + a_{22}^2} \end{bmatrix} \\ \boldsymbol{B}^2 &amp;= \begin{bmatrix} {b_{11}^2 - b_{12}^2} &amp; {b_{11}b_{12} + b_{12}b_{22}} \\ {-b_{11}b_{12} - b_{12}b_{22}} &amp; {-b_{12}^2 + b_{22}^2} \end{bmatrix} \end{align}</span></p> <p>Because <span class="math-container">$\boldsymbol{A}^2 = \boldsymbol{B}^2$</span>, I got <span class="math-container">\begin{align} a_{11}^2 + a_{12}^2 &amp;= b_{11}^2 - b_{12}^2 \\ a_{11}a_{12} + a_{12}a_{22} &amp;= b_{11}b_{12} + b_{12}b_{22} \\ a_{11}a_{12} + a_{12}a_{22} &amp;= -b_{11}b_{12} - b_{12}b_{22} \\ a_{12}^2 + a_{22}^2 &amp;= -b_{12}^2 + b_{22}^2 \end{align}</span></p> <p>Hence, <span class="math-container">\begin{align} a_{11}a_{12}+a_{12}a_{22}=b_{11}b_{12}+b_{12}b_{22}=0 \end{align}</span></p> <p>I lost my momentum here.</p>
user8675309
735,806
<p>note the orthogonality<br /> <span class="math-container">$-1\cdot \text{trace}\big(BA\big)= \text{trace}\big(B^TA\big)= \text{trace}\big(B^TA\big)^T= \text{trace}\big(A^TB\big)= \text{trace}\big(AB\big)= \text{trace}\big(BA\big)$</span><br /> <span class="math-container">$\implies \text{trace}\big(BA\big)= 0$</span></p> <p>using this we can compute the squared Frobenius norm of A+B<br /> <span class="math-container">$\Big \Vert A+B\Big \Vert_F^2= \text{trace}\big(A^TA\big) +\text{trace}\big(B^TA\big)+\text{trace}\big(A^TB\big)+\text{trace}\big(B^TB\big) $</span><br /> <span class="math-container">$= \text{trace}\big(A^2\big) +0+0 -\text{trace}\big(B^2\big) $</span><br /> <span class="math-container">$=0 $</span><br /> <span class="math-container">$\implies A+B = \mathbf 0\implies A = -B $</span><br /> so <span class="math-container">$A$</span> is both symmetric and skew symmetric, i.e. <span class="math-container">$A=A^T=-A^T \implies A=\mathbf 0=-B = B$</span></p>
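A quick numerical sanity check of the two trace facts used above (any symmetric $A$ and skew-symmetric $B$ will do; plain Python, no libraries):

```python
import random

random.seed(1)
n = 4

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(n))

M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A = [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]  # symmetric
B = [[(M[i][j] - M[j][i]) / 2 for j in range(n)] for i in range(n)]  # skew

# trace(BA) = 0 whenever A = A^T and B = -B^T
assert abs(trace(matmul(B, A))) < 1e-12

# ||A + B||_F^2 = trace(A^2) - trace(B^2), since the cross terms vanish
S = [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
frob2 = sum(S[i][j] ** 2 for i in range(n) for j in range(n))
assert abs(frob2 - (trace(matmul(A, A)) - trace(matmul(B, B)))) < 1e-12
```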
2,948,327
<p>Being an undergraduate student, I find it difficult to understand the precise differences between ordinary and partial differential equations. Please elaborate in your answer. </p>
gandalf61
424,513
<p>Let's look at some examples of ODEs and PDEs in physics:</p> <p>1) A particle moves under the influence of gravity, electromagnetic forces, viscosity or other forces. The position of the particle is a function of a single independent variable (time) so we can represent the equation of motion of the particle by an ODE.</p> <p>2) A chain hangs under its own weight, and has static loads attached to it at fixed points. The deflection of the chain is a function of a single independent variable (the distance along the chain) so again the deflection of the chain satisfies an ODE.</p> <p>3) A string that is fixed at both ends is displaced from its equilibrium position and released. The deflection of the string is now a function of two independent variables - time and distance along the string - so its equation of motion must be represented by a PDE, which is the "wave equation".</p> <p>4) An insulated metal bar starts at a uniform temperature. Its ends are then heated to two different constant temperatures. Initially the temperature of the bar is a function of two independent variables - time and distance along the bar. So the temperature of the bar satisfies a PDE - the heat equation. As time goes by, the temperature of the bar approaches an equilibrium state where its temperature does not depend on time, but only on the distance along the bar. The equilibrium temperature (as a function of distance only) can be found by solving an ODE.</p>
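For example 4, the relaxation to the time-independent equilibrium is easy to watch numerically. A rough sketch of an explicit finite-difference scheme (grid size and step count chosen arbitrarily):

```python
# Insulated bar with ends held at 0 and 100 degrees; explicit scheme
# u_i += alpha * (u_{i-1} - 2*u_i + u_{i+1}), stable for alpha <= 0.5.
N = 11
alpha = 0.4
u = [50.0] * N          # arbitrary initial temperature
u[0], u[-1] = 0.0, 100.0

for _ in range(3000):
    new = u[:]
    for i in range(1, N - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    u = new

# Equilibrium: temperature linear in position (solution of the ODE u'' = 0)
equilibrium = [100.0 * i / (N - 1) for i in range(N)]
error = max(abs(a - b) for a, b in zip(u, equilibrium))
assert error < 1e-6
```

The time-dependent PDE solution collapses onto the solution of the equilibrium ODE, just as described above.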
2,246,025
<p>How do I solve this? $$y^\prime = y^2 -4$$ I think I am supposed to use the separable equations method and then use partial fractions.</p>
badjohn
332,763
<p>Try studying infinite numbers. They will be quite different from anything in software. (Though they aren't particularly new.)</p> <p>You could look at octonions.</p>
104,818
<p>Can anyone help me with this? I want to know how to solve it.</p> <blockquote> <p>Let $f:\mathbb R \longrightarrow \mathbb R$ be a continuous function with period $P$. Also suppose that $$\frac{1}{P}\int_0^Pf(x)dx=N.$$ Show that $$\lim_{x\to 0^+}\frac 1x\int_0^x f\left(\frac{1}{t}\right)dt=N.$$</p> </blockquote>
Davide Giraudo
9,849
<p>Using the function $g(t)=f\left(\frac P{2\pi}t\right)$, we can assume that $f$ is $2\pi$-periodic. If $P_n$ is a sequence of trigonometric polynomials which converges uniformly to $f$, then $$\left|\frac 1x\int_0^xf\left(\frac 1t\right)dt-\frac 1{2\pi}\int_0^{2\pi}f(t)dt\right|\leq 2\sup_{s\in\mathbb R}|f(s)-P_n(s)|+\left|\frac 1x\int_0^xP_n\left(\frac 1t\right)dt-\frac 1{2\pi}\int_0^{2\pi}P_n(t)dt\right|,$$ so, by linearity, it's enough to show the result when $f$ is of the form $e^{ipt}$, where $p\in\mathbb Z$. It's clear if $p=0$, so we assume $p\neq 0$. We have $\int_0^{2\pi}f(t)dt=\frac 1{ip}(e^{i2\pi p}-1)=0$ and $$\frac 1x\int_0^xf\left(\frac 1t\right)dt=\frac 1x\int_{1/x}^{+\infty}\frac{e^{ips}}{s^2}ds=\int_1^{+\infty}\frac{e^{ip\frac yx}}{y^2}dy,$$ so \begin{align*} \frac 1x\int_0^xf\left(\frac 1t\right)dt&amp;=\left[\frac x{ip}\frac{e^{ip\frac yx}}{y^2}\right]_1^{+\infty}-\frac x{ip}\int_1^{+\infty}\frac{e^{ip\frac yx}}{y^3}(-2)dy\\ &amp;=-\frac x{ip}e^{ip/x}+2\frac x{ip}\int_1^{+\infty}\frac{e^{ip\frac yx}}{y^3}dy, \end{align*} and finally $$\left|\frac 1x\int_0^xf\left(\frac 1t\right)dt\right|\leq \frac x{|p|}\left(1+2\int_1^{+\infty}t^{-3}dt\right)=2\frac x{|p|},$$ which gives the result.</p>
85,052
<p>A housemate of mine and I disagree on the following question: </p> <p>Let's say that we play a game of Yahtzee. Of the five dice you throw, two dice obtain the value 1, two other dice obtain the value 2, and one die shows you six dots on the top side. Given the fact that you haven't thrown a "full house" yet, you start throwing the die whose value is 6 again, until you throw a one or a two. You get to throw the die of six one or two times. If you throw a one or a two the first time, you stop, because now you have the "full house" already. If you haven't thrown a one or a two with the die of six, you throw it again, hoping for a one or a two this time.</p> <p>Now, what is the probability that you throw a one or a two with the fifth die after two turns? (Given the way a rational person operates in this situation.)</p> <p>My take on this question was the following: the probability that you throw a one or a two the first time with the fifth die is $1/3$, and the probability that you don't throw a one or a two the first time, but you do throw a one or a two the second time, is $ 2/3 \cdot 1/3 = 2/9$. Adding these values gives you the probability: $1/3 + 2/9 = 3/9 + 2/9 = 5/9$. </p> <p>My housemate, however, argues that the chance to throw a one or a two the first time is $1/3$, and believes that throwing the fifth die again gives you a probability of throwing a one or a two of $1/3$ again. Adding these values gives the expected probability of throwing a full house of $1/3 + 1/3 = 2/3$. </p> <p>Who is right, my housemate or me?</p> <p>I strongly believe I am right, but even if you tell me I'm right, I might not be able to convince my housemate of the truth. He argues that my way of reasoning implies that the probability of throwing a one or a two with the fifth die the second time is smaller than throwing it the first time. Could you please also provide me with a pedagogically sound way to explain to him why the probability is $5/9$? 
</p> <p>Thanks in advance</p>
David Mitra
18,986
<p>Imagine throwing that die twice. Repeat this 999 times. </p> <p>Out of the 999 trials, how many of them would have the first throw showing 1 or 2? Well, around 1/3 of them: 333 of the 999 trials have the first throw resulting in 1 or 2.</p> <p>Now, how many of the trials will have the second throw resulting in 1 or 2, but not the first? Well, around 666 of the trials do not have the first throw showing 1 or 2 (see above). And around 1/3 of these 666 will have the second throw showing 1 or 2. One third of 666 is 222.</p> <p>So the probability of the first throw showing 1 or 2, or the second showing 1 or 2 but not the first, is $(333+222)/999=5/9$.</p> <p>The last sentence bears repeating: you want the probability that either</p> <p>1) the first throw results in 1 or 2</p> <p>or</p> <p>2) the first throw does not result in 1 or 2 and the second throw results in 1 or 2. </p> <p>You add the probabilities of 1) and 2).</p>
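If words don't convince him, exhaustive enumeration might: the 36 ordered pairs (first roll, second roll) are equally likely, and applying the stopping rule to each pair gives exactly $5/9$:

```python
from fractions import Fraction
from itertools import product

wins = 0
for first, second in product(range(1, 7), repeat=2):
    if first in (1, 2):        # success on the first throw: stop
        wins += 1
    elif second in (1, 2):     # otherwise the second throw must hit
        wins += 1

p = Fraction(wins, 36)
assert p == Fraction(5, 9)
```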
1,884,958
<p>How do I show that <span class="math-container">$$\text{Var}(aX+b)=a^2\text{Var}(X).$$</span> Since I am reading statistics for the first time, I don't have any idea how to start.</p> <p>Thanks for helping me.</p>
RandomGuy
153,631
<p>Directly from the definition: $Var(aX)=E[(aX)^2] - E[(aX)]^2=E[a^2X^2]-E[(aX)]^2=a^2E[X^2]-(aE[X])^2=a^2E[X^2]-a^2E[X]^2=a^2(E[X^2]-E[X]^2)=a^2Var(X),$ where in the third and fourth equalities I have applied the linearity of expectation, in the form $E[cX]=cE[X]$.</p> <p>Next, observe that $Var[Y+b]=Var(Y)$, with a proof similar to the above, again directly using the definition of $Var[Y]$.</p> <p>Putting the two together, you have $Var(aX+b)= a^2Var(X)$. q.e.d.</p>
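A numeric sanity check of both identities on an arbitrary finite sample, using the population variance so the definitions apply exactly:

```python
data = [2.0, 3.5, -1.0, 4.0, 0.5]
a, b = 3.0, 7.0

def var(xs):
    m = sum(xs) / len(xs)                             # E[X]
    return sum((x - m) ** 2 for x in xs) / len(xs)    # E[(X - E[X])^2]

transformed = [a * x + b for x in data]
assert abs(var(transformed) - a * a * var(data)) < 1e-9   # Var(aX+b) = a^2 Var(X)

shifted = [x + b for x in data]
assert abs(var(shifted) - var(data)) < 1e-9               # Var(X+b) = Var(X)
```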
121,909
<p>I came across this question while studying primitive roots. I know it has something to do with the fact that if the order of $a$ is $m$ then for every $k \in \mathbb{Z}$, the order of $a^k$ is $m/(m,k)$. The question is as follows: </p> <blockquote> <p>Let $p$ be an odd prime. Prove that $a^2$ is never a primitive root $\pmod{p}$. </p> </blockquote> <p>I would appreciate any help. Thank you.</p>
JavaMan
6,491
<p>The order of $a^2$ is no more than $\frac{p-1}{2}$, since $(a^2)^{(p-1)/2} = a^{p-1} \equiv 1\pmod{p}$ by <a href="http://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow">Fermat's Little Theorem</a>.</p>
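A quick exhaustive check of this bound for small odd primes, computing multiplicative orders directly:

```python
def order(a, p):
    # multiplicative order of a modulo p (for a not divisible by p)
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

for p in (3, 5, 7, 11, 13, 17, 19):
    for a in range(1, p):
        # every square has order at most (p-1)/2, so it is never a primitive root
        assert order(a * a % p, p) <= (p - 1) // 2
```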
121,909
<p>I came across this question while studying primitive roots. I know it has something to do with the fact that if the order of $a$ is $m$ then for every $k \in \mathbb{Z}$, the order of $a^k$ is $m/(m,k)$. The question is as follows: </p> <blockquote> <p>Let $p$ be an odd prime. Prove that $a^2$ is never a primitive root $\pmod{p}$. </p> </blockquote> <p>I would appreciate any help. Thank you.</p>
Arturo Magidin
742
<p>$g$ is a primitive root modulo $p$ if and only if the order of $g$ modulo $p$ is $p-1$.</p> <p>If the order of $a$ is not $p-1$, then $a^2$ has order less than or equal to the order of $a$, hence is not a primitive root.</p> <p>If the order of $a$ <em>is</em> $p-1$ then what is the order of $a^2$?</p>
389,675
<p>I'm trying to use a program to find the largest prime factor of 600851475143. This is for Project Euler here: <a href="http://projecteuler.net/problem=3">http://projecteuler.net/problem=3</a></p> <p>I first attempted this with the code that goes through every number up to 600851475143, tests its divisibility, and adds it to an array of prime factors, printing out the largest.</p> <p>This is great for small numbers, but for large numbers it would take a VERY long time (and a lot of memory). </p> <p>Now I took university calculus a while ago, but I'm pretty rusty and haven't kept up on my math since. </p> <p>I don't want a straight up answer, but I'd like to be pointed toward resources or told what I need to learn to implement some of the algorithms I've seen around in my program.</p>
apnorton
23,353
<p>Project Euler problems (at least the ones that I have done) tend to deal with a lot of number theory topics. So, reading an introductory number theory book could be helpful.</p> <p>With regards to your particular situation, I suggest finding primes first, then testing the primes for divisibility. That is, to find prime factors of $25$, don't test $1, 2, 3, 4, 5, 6, \ldots$--this would take forever, and you'd also be left with composite factors as well. Rather, find the primes under $25$, and test those: $2, 3, 5, 7,\ldots$ This will be much faster.</p> <p>Also, you may want to research prime-finding algorithms. If you start dealing with elliptic curves, that's probably overkill for Project Euler--there are some simple, yet effective, algorithms out there.</p>
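In that spirit, here is a hedged sketch of a sieve plus trial division by primes, tried on the 13195 example that the problem statement itself works out (so nothing is spoiled):

```python
def primes_up_to(n):
    # Sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def largest_prime_factor(n):
    # Trial division by primes up to sqrt(n); whatever is left is prime.
    largest = 1
    for p in primes_up_to(int(n ** 0.5) + 1):
        while n % p == 0:
            largest, n = p, n // p
    return n if n > 1 else largest

assert largest_prime_factor(13195) == 29   # worked example from the problem
```

Dividing out each prime factor as it is found keeps the remaining cofactor shrinking fast, which is what makes this feasible for large inputs.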
138,723
<p>By cleaning up a notebook, I mean: how can I hide all the code in the notebook so that end-users can't see it? I saw Eric Schulz's famous interactive calculus textbook; the users can't see the code, and there are no cell brackets on the right-hand side of the CDF. </p>
Carl Woll
45,431
<p>Another possibility is to modify the notebook's style sheet by defining a new "Screen Environment" that hides cell brackets and closes input cells:</p> <pre><code>SetOptions[
 EvaluationNotebook[],
 StyleDefinitions -&gt; Notebook[{
   Cell[StyleData[StyleDefinitions -&gt; "Default.nb"]],
   Cell[StyleData[All, "Working"], MenuCommandKey -&gt; "D"],
   Cell[StyleData[All, "Clean"],
    MenuCommandKey -&gt; "C",
    ShowCellBracket -&gt; False,
    MenuSortingValue -&gt; 10000
   ],
   Cell[StyleData["Input", "Clean"], CellOpen -&gt; False]
  },
  StyleDefinitions -&gt; "PrivateStylesheetFormatting.nb"
 ]
]
</code></pre> <p>The "Clean" environment hides cell brackets and closes input cells. I also gave it a keyboard shortcut, so that using <kbd>Command</kbd>+<kbd>c</kbd> switches to the "Clean" environment, and <kbd>Command</kbd>+<kbd>d</kbd> switches back to the working environment. Here is an animation:</p> <p><a href="https://i.stack.imgur.com/HIQPC.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/HIQPC.gif" alt="enter image description here"></a></p> <p>Note that I used the keyboard shortcuts to switch at the end of the animation.</p>
1,321,544
<p>How do you evaluate the following?</p> <p>$$\cos\left[\cos^{-1}\left(\frac{3}{4}\right)\right]$$</p> <p>To me the cosine of an arc cosine is just the value, which would be $3/4$.</p>
Mythomorphic
152,277
<p>One possible way is to draw a right-angled triangle with hypotenuse$=4$, base$=3$.</p> <p>So we have angle between hypotenuse and base, $\theta=\arccos\frac34$.</p> <p>So in the same triangle, $\cos\left[\cos^{-1}\left(\frac{3}{4}\right)\right]=\cos\theta=\frac34$</p>
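Numerically the identity is easy to confirm, keeping in mind that $\cos^{-1}$ returns an angle in $[0,\pi]$, on which $\cos$ simply undoes it. A quick check in Python:

```python
import math

# acos returns the unique angle in [0, pi] whose cosine is 3/4;
# taking cos of that angle recovers 3/4 (up to floating-point rounding)
value = math.cos(math.acos(3 / 4))
```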
1,457,838
<p>I am given position vectors $\vec{OA} = i - 3j$ and $\vec{OC}=3i-j$, and asked to find the position vector of the point that divides the line $\vec{AC}$ in the ratio $-2:3$. </p> <p>So I found the vector $\vec{AC}$, and it is $2i+2j$. Then, if the point of interest is $L$, the position vector is $\vec{OL} = \vec{OA} + \lambda\vec{AC}$, where $\lambda$ is the appropriate scalar we have to apply. But I am a bit confused by the negative ratio. I tried to apply the logic of positive ratios like $3:2$: in that case, we would split the line into 5 "portions" and apply a ratio of $\frac{3}{5}$. I interpret a negative ratio in the following manner: I first split $\vec{AC}$ into 5 "portions", then I move two portions outside of the line, to the left; then I move 2 portions back and 1 in, if that makes any sense at all. Therefore the ratio to be applied should be $\frac{1}{5}$. However, I am not getting the required result.</p>
Samrat Mukhopadhyay
83,973
<p>If $p$ is prime then the latter statement is obvious. Now, let $p&gt;1$ and $\forall 1&lt;n\le \sqrt{p},\ n\not| p$. If $p$ were not prime, then $\exists\ 1&lt;a\le b&lt;p$ such that $p=ab\implies p\ge a^2\implies a\le \sqrt{p}$ and $a|p$, which is a contradiction. </p>
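This equivalence is exactly why a primality test only needs trial divisors up to $\sqrt{p}$. A rough sketch (the function name is mine):

```python
def is_prime(p):
    """p > 1 is prime iff no n with 1 < n <= sqrt(p) divides p."""
    if p < 2:
        return False
    n = 2
    while n * n <= p:       # trial divisors only up to sqrt(p)
        if p % n == 0:
            return False
        n += 1
    return True
```

For instance, `is_prime(91)` correctly returns `False` because the divisor $7 \le \sqrt{91}$ is found, even though $91 = 7 \cdot 13$ "looks" prime at a glance.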
3,187,451
<p>Can you help me find a function <span class="math-container">$f(X,Y)$</span>, such that <span class="math-container">$f(1,x) = f(x,1) = f(\ln x, \ln x)$</span>?</p> <p>Either always, for all <span class="math-container">$x$</span> or in the limit <span class="math-container">$x$</span> tends to infinity, all these three expressions must become equal. Actually in computer science, there are arrays in which update takes constant <span class="math-container">$O(1)$</span> time, and cumulative sum linear <span class="math-container">$O(n)$</span> time.</p> <p>In cumulative arrays, update takes <span class="math-container">$O(n)$</span> time and sum is constant time.</p> <p>While in segment trees, both are <span class="math-container">$O(\ln n)$</span></p> <p>So I thought that there might be some <span class="math-container">$f(\text{update}, \text{sum}) = \text{constant function}$</span>.</p>
Botond
281,471
<p>What do you think about the constant function <span class="math-container">$$f(x,y)=(a_1,...,a_n)$$</span> for as many constants (<span class="math-container">$n$</span>) as you'd like, <span class="math-container">$a_1,...,a_n \in \mathbb{R}$</span>? Being constant, it trivially takes the same value at all three arguments.</p>
3,187,451
<p>Can you help me find a function <span class="math-container">$f(X,Y)$</span>, such that <span class="math-container">$f(1,x) = f(x,1) = f(\ln x, \ln x)$</span>?</p> <p>Either always, for all <span class="math-container">$x$</span> or in the limit <span class="math-container">$x$</span> tends to infinity, all these three expressions must become equal. Actually in computer science, there are arrays in which update takes constant <span class="math-container">$O(1)$</span> time, and cumulative sum linear <span class="math-container">$O(n)$</span> time.</p> <p>In cumulative arrays, update takes <span class="math-container">$O(n)$</span> time and sum is constant time.</p> <p>While in segment trees, both are <span class="math-container">$O(\ln n)$</span></p> <p>So I thought that there might be some <span class="math-container">$f(\text{update}, \text{sum}) = \text{constant function}$</span>.</p>
Empy2
81,790
<p><span class="math-container">$$f(X,Y)=\exp(\sqrt{(X-1)(Y-1)}+1)+|X-Y|$$</span> so that <span class="math-container">$f(x,1)=f(1,x)=x+e-1$</span> and <span class="math-container">$f(\ln(x),\ln(x))=x$</span> for <span class="math-container">$x\ge e$</span>.</p>
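A quick numerical spot-check of this $f$ (using $x = 10 \ge e$, so that $\ln x \ge 1$ and the square root and absolute value resolve as intended):

```python
import math

def f(X, Y):
    # exp(sqrt((X - 1)(Y - 1)) + 1) + |X - Y|
    return math.exp(math.sqrt((X - 1) * (Y - 1)) + 1) + abs(X - Y)

x = 10.0                              # any x >= e makes the identities exact
a = f(x, 1)                           # expected: x + e - 1
b = f(1, x)                           # same by symmetry
c = f(math.log(x), math.log(x))       # expected: x
```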
3,270,856
<p>So I have to calculate this triple integral:</p> <p><span class="math-container">$$\iiint_Gz\,dV$$</span> where G is defined by <span class="math-container">$$x^2+y^2-z^2 \geq 6R^2, x^2+y^2+z^2\leq12R^2, z\geq0$$</span></p> <p>Drawing it gives this:</p> <p><a href="https://i.stack.imgur.com/60hok.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/60hok.jpg" alt="G region"></a></p> <p>It seems I should use spherical coordinates, since a sphere is involved. But I don't know if I can use the default ones or should adapt them, since I have a hyperbola of some sort (but since <span class="math-container">$a=b=c$</span>, is it a double-sided cone then? I know that <span class="math-container">$z&gt;0$</span> means I'm only looking at the upper half, but still).</p> <p>So using this:</p> <p><span class="math-container">$$x=r\cos\phi\cos\theta; y=r\sin\phi\cos\theta ; z=r\sin\theta$$</span></p> <p>I think <span class="math-container">$$\phi \in [0,2\pi]$$</span> and for <span class="math-container">$$r: r^2\leq12R^2$$</span> But I am not sure about theta; how can I get it? From <span class="math-container">$z\geq0$</span> I get <span class="math-container">$r\sin\theta\geq 0$</span>, so <span class="math-container">$\sin\theta \geq 0$</span>, and from that <span class="math-container">$0\leq\theta\leq \frac{\pi}{2}$</span>.</p> <p>Am I doing it correctly, or should I fix anything? Thank you in advance.</p>
user10354138
592,552
<p>With the question's convention <span class="math-container">$z=r\sin\theta$</span> (so <span class="math-container">$x^2+y^2=r^2\cos^2\theta$</span>), the condition <span class="math-container">$$ x^2+y^2-z^2\geq 6R^2 $$</span> gives <span class="math-container">$$ r^2(\cos^2\theta-\sin^2\theta)=r^2\cos 2\theta\geq 6R^2 $$</span> So you want <span class="math-container">$(r,\theta)\in(0,\infty)\times(0,\frac\pi2)$</span> such that <span class="math-container">$$ r^2\leq 12R^2 \text{ and } r^2\cos 2\theta\geq 6R^2 $$</span> So <span class="math-container">$\cos 2\theta\geq\frac{6R^2}{r^2}\geq\frac12$</span>, and hence <span class="math-container">$\theta\in(0,\frac\pi6)$</span>. Putting them together, <span class="math-container">$$ \\\int_G z\,\mathrm{d}V= \int_0^{2\pi}\int_0^{\pi/6}\int_{R\sqrt{6/\cos 2\theta}}^{R\sqrt{12}} r\sin\theta\,r^2\cos\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\phi $$</span></p>
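As a sanity check, here is a midpoint-rule evaluation of the iterated integral (a sketch assuming the latitude convention $z = r\sin\theta$ from the question, under which $x^2+y^2-z^2 = r^2\cos 2\theta$, giving $\theta \in (0, \pi/6)$ and inner radius $R\sqrt{6/\cos 2\theta}$; the function name is mine):

```python
import math

def integral_z(R=1.0, n=400):
    """Midpoint rule for the iterated integral of z over G.

    Latitude convention: z = r sin(theta), dV = r^2 cos(theta) dr dtheta dphi,
    so the integrand is r^3 sin(theta) cos(theta) and phi contributes 2*pi.
    """
    total = 0.0
    dth = (math.pi / 6) / n
    r_hi = R * math.sqrt(12.0)
    for i in range(n):
        th = (i + 0.5) * dth
        r_lo = R * math.sqrt(6.0 / math.cos(2 * th))
        dr = (r_hi - r_lo) / n
        for j in range(n):
            r = r_lo + (j + 0.5) * dr
            total += r ** 3 * math.sin(th) * math.cos(th) * dr * dth
    return 2 * math.pi * total

# Working the r and theta integrals by hand (or redoing the computation in
# cylindrical coordinates) gives 9*pi*R^4/2, about 14.137 for R = 1,
# which the numerical estimate should reproduce.
estimate = integral_z()
```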
3,033,812
<p>My problem: If there are 5 different candies in a jar and a child wants to take out one or more candies, how many ways can this be done? </p> <p>I said it is <span class="math-container">$^5C_1 -\; ^5C_0 = 5-1 = 4$</span> ways. The <span class="math-container">$-1$</span> for the unwanted case using this trick:</p> <p>At least/At most = total number of combinations - unwanted cases</p> <p>But according to my answer sheet, it said <span class="math-container">$2^5 -1$</span> is the answer.</p> <p>So my question is: in what situations should I use exponents, and what impact does it have? </p>
Especially Lime
341,019
<p>Generally you use a power whenever you have several independent choices to make, and each choice has the same options.</p> <p>Here, for each candy in the jar you can choose to take it, or not (<span class="math-container">$2$</span> options). All these decisions are independent, and there are <span class="math-container">$5$</span> of them, so there are <span class="math-container">$2^5$</span> possibilities for which candies you can take from the jar. However, this includes the case of taking none of them, so you need to subtract <span class="math-container">$1$</span> to get the answer you're looking for.</p>
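The count is small enough to check by brute force: enumerate every non-empty subset of the 5 candies directly (a quick sketch):

```python
from itertools import combinations

candies = ["A", "B", "C", "D", "E"]

# every subset of each size 1..5; size 0 (taking nothing) is excluded
nonempty = [s for k in range(1, len(candies) + 1)
            for s in combinations(candies, k)]
count = len(nonempty)   # should equal 2**5 - 1
```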
198,995
<p>From Barbeau's <em>Polynomials</em>:</p> <blockquote> <ul> <li>(a) Is it possible to find a polynomial, apart from the constant $0$ itself, which is identically equal to $0$ (i.e. a polynomial $P(t)$ with some nonzero coefficient such that $P(c)=0$ for each number $c$)?</li> </ul> </blockquote> <p>And then I thought about two hypotheses:</p> <ol> <li><em>P(c-c)</em></li> <li>I thought about a polynomial such as $ax^2+bx+c=0$, then I could make a polynomial with $a=1$, $b=-x$, $c=0$ which would render $x^2+(-x)x=0$. I tested it on Mathematica with values from $-10$ to $10$ and it gave me $0$ for all these values.</li> </ol> <p>When I went for the answer, I've found:</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<img src="https://i.stack.imgur.com/qUtmS.jpg" alt="enter image description here"></p> <p>When I went to the answer, I couldn't understand it; can you help me? I'm trying to understand what he's doing in this answer. I guess it's a way to prove it, but it's still intractable to me. You can explain it or recommend me something to read. I'll be happy if you also tell me something about my hypotheses. Thanks.</p>
Arthur
15,500
<p>I don't know if this will answer all your questions, but what the author is doing is to assume there is a non-zero polynomial which evaluates to zero at all points, then reach a contradiction. He does this by analyzing the hypothetical polynomial in a number of different ways:</p> <hr> <p>Assume the polynomial $p(x)$ evaluates to $0$ at all points, but is not equal to zero. Then it has some definite, positive degree $n$. Since $p(0) = 0$, we must have $p(x) = x\cdot q_0(x)$ for some non-zero polynomial $q_0$.</p> <p>Since $p(1) = 0$, we must have $q_0(1) = 0$, so $q_0(x) = (x-1)q_1(x)$. Therefore we must have $p(x) = x(x-1)q_1(x)$ for some non-zero polynomial $q_1$.</p> <p>Now he continues this until he reaches $p(x) = x(x-1)(x-2)(x-3)\cdots (x-n + 1)(x-n)q_n(x)$ for some polynomial $q_n$. But the left side has degree $n$, while the right side has degree at least $(n+1)$, which is a contradiction.</p> <hr> <p>The linear equation he mentions goes like this:</p> <p>Assume $p(x)$ has positive degree $n$. I'm going to illustrate with $n = 2$, but it's the same for all positive integers. We can then write $p(x) = ax^2 + bx + c$. To find $a, b$ and $c$, we make use of the following: $$ 0 = p(0) = a\cdot 0^2 + b\cdot 0 + c = c \quad\Longrightarrow \quad c=0 $$ as well as $$ 0 = p(1) = a\cdot 1^2 + b\cdot 1 + c = a + b\quad\Longrightarrow \quad a = -b $$ and $$ 0 = p(2) = a\cdot 2^2 + b\cdot 2 + c = 4a + 2b \quad\Longrightarrow \quad 2a = -b $$</p> <p>to which the only solutions are $a = b = c = 0$, and $p$ is the zero polynomial.</p> <hr> <p>The last one he mentions, Taylor's theorem, depends on the fact that many functions are completely determined by its value and all its derivatives at a single point. Polynomials are among these functions. 
Since $p$ always evaluates to zero, all its derivatives do too, and the only Taylor series which satisfies this is the Taylor series for the zero function.</p> <hr> <p><strong>Edit:</strong> Somehow I completely misread the first paragraph. What he does here is a variation on the first explanation I gave, only he proves that all the coefficients have to be zero the following way:</p> <p>Assume $p(x) = a_nx^n + \cdots + a_1x + a_0$. Since $p(0) = 0$, we know that $a_0 = 0$. Now examine the function $a_nx^{n-1} + \ldots + a_2x + a_1 = \frac{p(x)}{x}$, and suppose $a_1 \neq 0$. Then the inequalities listed in the text arrive at a contradiction, so $a_1 = 0$. Now proceed similarly with $\frac{p(x)}{x^2} = a_nx^{n-2} + \ldots + a_3x + a_2$, and $a_2$ has to be zero. And so on. </p>
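The $n=2$ linear system can also be checked concretely: the three conditions $p(0)=p(1)=p(2)=0$ form a Vandermonde-type system whose determinant is nonzero, so only the trivial solution $a=b=c=0$ exists. A sketch (the helper name is mine):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# p(x) = a x^2 + b x + c; requiring p(t) = 0 at t = 0, 1, 2 gives M v = 0,
# with one row (t^2, t, 1) per evaluation point t
M = [[t * t, t, 1] for t in (0, 1, 2)]
```

Since `det3(M)` is nonzero, the homogeneous system `M v = 0` has only the zero solution, matching the argument above.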
202,041
<p>I have been trying to determine the number of metrics of constant curvature on a surface of genus $n$, say $\Sigma$. For low values, the answer is clear, the moduli space is a point for the sphere, and is two dimensional for the torus, but the higher dimensional cases stump me, and I am unable to find the result. Any help or a reference would be appreciated.</p>
Sam Nead
1,650
<p>This is the geometric version of Riemann's "problem of moduli". A connected, closed, oriented surface of genus $g \geq 2$ has a moduli space of dimension $6g - 6$. </p> <p>One way to count the dimension of hyperbolic structures is via <a href="http://en.wikipedia.org/wiki/Fenchel%E2%80%93Nielsen_coordinates" rel="nofollow">Fenchel-Nielsen coordinates</a>. See also Section 10.6 of Farb and Margalit's book "Primer on mapping class groups".</p>
390,129
<p>Let <span class="math-container">$O$</span> be a <span class="math-container">$d$</span>-dimensional rotation matrix (i.e., it has real entries and <span class="math-container">$OO^T = O^TO = I$</span>). Let <span class="math-container">$\mathbf{x}$</span> be a uniformly random bitstring of length <span class="math-container">$d$</span>, i.e., <span class="math-container">$\mathbf{x} \sim U(\{0,1\}^d)$</span>. In other words, <span class="math-container">$\mathbf{x}$</span> is a vertex of the Hamming cube, selected uniformly at random. I would like to show that there exists a <span class="math-container">$C &gt; 0$</span> such that <span class="math-container">$$\mathbb{P}\left[\|O\mathbf{x}\|_1 \leq \frac{d}{4}\right] \leq 2^{-Cd}.$$</span> I am horribly stuck, any ideas on how to approach this problem would be very much appreciated. Below are some of my own attempts. This question is cross-posted at math stack exchange <a href="https://math.stackexchange.com/questions/4099958/probability-of-ell-1-norms-of-vertices-of-the-rotated-hamming-cube">here</a>.</p> <hr> <p><strong>Observation 1:</strong> If <span class="math-container">$O = I$</span>, then the statement holds.</p> <p>If <span class="math-container">$O = I$</span>, then <span class="math-container">$\|O\mathbf{x}\|_1 = \|\mathbf{x}\|_1$</span> is simply the number of ones in the bitstring. Among the <span class="math-container">$2^d$</span> choices for <span class="math-container">$\mathbf{x}$</span>, the number of choices that satisfies <span class="math-container">$\|\mathbf{x}\|_1 \leq d/4$</span> is</p> <p><span class="math-container">$$1 + \binom{d}{1} + \binom{d}{2} + \cdots + \binom{d}{\lfloor d/4\rfloor} \leq 2^{dH(\lfloor d/4\rfloor/d)} \leq 2^{dH(1/4)},$$</span> hence the probability is upper bounded by <span class="math-container">$2^{-d(1-H(1/4))}$</span>. 
Here, <span class="math-container">$H(\cdot)$</span> is the binary entropy function, i.e., <span class="math-container">$H(p) = -p\log_2(p) - (1-p)\log_2(1-p)$</span>.</p> <p><strong>Observation 2:</strong> Numerical experiments support this result. Below is a plot of the probability versus the dimension, where <span class="math-container">$O$</span> is selected at random:</p> <p><a href="https://i.stack.imgur.com/sI9dn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sI9dn.png" alt="Plot of the probability versus the dimension" /></a></p> <p>The blue line is the probability. The orange line is the bound derived in the case where <span class="math-container">$O = I$</span>.</p> <p>For comparison, here is the same numerical experiment, but with <span class="math-container">$O = I$</span>:</p> <p><a href="https://i.stack.imgur.com/ksyxQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ksyxQ.png" alt="enter image description here" /></a></p> <p>Thus, it appears that the introduction of <span class="math-container">$O$</span> decreases the probability.</p> <p>Both plots are obtained by sampling <span class="math-container">$100000$</span> <span class="math-container">$\mathbf{x}$</span>'s at random. The code is here:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import random from scipy.stats import ortho_group H = lambda p : -p * np.log2(p) - (1-p) * np.log2(1-p) C = 1 - H(1/4) print(C) N = 100000 ds,Ps = [],[] for d in range(2,40): O = ortho_group.rvs(dim = d) # O = np.eye(d) P = 0 for _ in range(N): x = random.choices(range(2), k = d) if np.linalg.norm(O @ x, ord = 1) &lt;= d/4: P += 1/N print(d,P) ds.append(d) Ps.append(P) fig = plt.figure() ax = fig.gca() ax.plot(ds, Ps) ax.plot(ds, [2**(-C*d) for d in ds]) ax.set_yscale('log') ax.set_xlabel('d') ax.set_ylabel('P') plt.show() </code></pre>
Marco
143,536
<p>Adding more detail to Mikael's point, the result seems to hold <strong>on average</strong> over <span class="math-container">$O$</span> because of the following:</p> <ol> <li><p>Using a <a href="https://en.wikipedia.org/wiki/Chernoff_bound" rel="nofollow noreferrer">Chernoff bound</a>, we can see that for any constant <span class="math-container">$\epsilon$</span>, with probability at least <span class="math-container">$1- e^{-C d}$</span> the random vector <span class="math-container">$x$</span> has at least <span class="math-container">$\frac{(1-\epsilon)d}{2}$</span> 1's.</p> </li> <li><p>Consider a fixed vector <span class="math-container">$x \in \{0,1\}^d$</span>. For a uniformly distributed <span class="math-container">$O$</span>, <span class="math-container">$O x$</span> is uniformly distributed on the sphere of radius <span class="math-container">$\sqrt{\|x\|_1}$</span>. This has distribution essentially the same as the vector <span class="math-container">$\sqrt{\|x\|_1} (G_1,\ldots,G_d)$</span>, where the <span class="math-container">$G_i$</span>'s are independent Gaussians <span class="math-container">$N(0,\frac{1}{d})$</span> (in fact <span class="math-container">$Ox$</span> has the same distribution as <span class="math-container">$\sqrt{\|x\|_1} (G_1,\ldots,G_d)/\|G\|_2$</span>, or alternatively one can bypass using Gaussians and work with concentration on the sphere, as Mikael suggested).</p> </li> </ol> <p>Then in expectation (over <span class="math-container">$O$</span>) <span class="math-container">$$E \|Ox\|_1 \approx \sqrt{\|x\|_1} \cdot E\|G\|_1 = \sqrt{\|x\|_1} \cdot \sqrt{d} \sqrt{\frac{2}{\pi}},$$</span> where the last equality uses the fact that the expected value of a <a href="https://en.wikipedia.org/wiki/Folded_normal_distribution" rel="nofollow noreferrer">folded Gaussian</a> of standard deviation <span class="math-container">$\sigma$</span> is <span class="math-container">$\sigma \sqrt{\frac{2}{\pi}}$</span>.</p> <p>Moreover, since the function <span class="math-container">$\|\cdot\|_1$</span> is <span class="math-container">$\sqrt{d}$</span>-Lipschitz with respect to <span class="math-container">$\ell_2$</span>, we can use concentration of such functions over Gaussian space to get that with probability at least <span class="math-container">$1 - e^{- Cd}$</span> we have <span class="math-container">$$\|Ox\|_1 \ge (1-\epsilon) \sqrt{\|x\|_1} \cdot \sqrt{d} \sqrt{\frac{2}{\pi}},$$</span> see for example Example 4.2 in van Handel's excellent <a href="https://web.math.princeton.edu/~rvan/APC550.pdf" rel="nofollow noreferrer">notes</a> or inequality (1.6) of Ledoux-Talagrand &quot;Probability in Banach Spaces&quot;.</p> <ol start="3"> <li>Taking a union bound over the two steps above seems to give the desired result.</li> </ol>
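The folded-Gaussian fact used in the expectation step, $E\|G\|_1 = \sqrt{d}\sqrt{2/\pi}$ for i.i.d. $G_i \sim N(0, 1/d)$, is easy to sanity-check by simulation (a sketch; the seed and sample sizes are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
d, trials = 400, 200
sigma = 1 / math.sqrt(d)                 # G_i ~ N(0, 1/d)

samples = []
for _ in range(trials):
    g = [random.gauss(0, sigma) for _ in range(d)]
    samples.append(sum(abs(v) for v in g))   # ||G||_1 for one draw

empirical = sum(samples) / trials
predicted = math.sqrt(d) * math.sqrt(2 / math.pi)   # = d * sigma * sqrt(2/pi)
```

With `d = 400` the prediction is about `15.96`, and the empirical mean should land within a few percent of it.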
970,270
<p><strong>Problem</strong></p> <p>On the one hand, a complex measure decomposes into: $$\mu=\Re_+\mu-\Re_-\mu+i\Im_+\mu-i\Im_-\mu=:\sum_{\alpha=0\ldots3}i^\alpha\mu_\alpha$$</p> <p>This gives rise to the integrability condition: $$f\in L(\mu)\iff f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$</p> <p>On the other hand, a complex measure admits a derivative: $$\mu(E)=\int_Eu\mathrm{d}|\mu|\quad(|u|=1)$$</p> <p>This gives rise to the integrability condition: $$f\in L(\mu):\iff fu\in L(|\mu|)\iff f\in L(|\mu|)$$</p> <blockquote> <p>So the question arises wether these approaches coincide: $$f\in L(|\mu|)\iff f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$</p> </blockquote> <p><strong>Attempt</strong></p> <p>So far by construction it is: $$\mu_\alpha(E)=\int_Eu_\alpha\mathrm{d}|\mu|\quad(0\leq u_\alpha\leq1)$$ which gives for positive functions: $$\int|f|\mathrm{d}\mu_\alpha=\int|f|u_\alpha\mathrm{d}|\mu|\leq\int |f|\mathrm{d}|\mu|$$ That proves the inclusion: $$f\in L(|\mu|)\implies f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$ But what about the converse?</p> <p><strong>Related</strong></p> <p>For a formal treatment see: <a href="https://math.stackexchange.com/q/950599/79762">Complex Measures: Integration</a></p> <p>For a related recapitulation see: <a href="https://math.stackexchange.com/q/950627/79762">Complex Measures: Variation</a></p> <p>For a similar problem see: <a href="https://math.stackexchange.com/q/970046/79762">Complex Functions: Integrability</a> </p> <p>For a general problem see: <a href="https://math.stackexchange.com/q/970004/79762">Radon-Nikodym: Integrability?</a></p>
Disintegrating By Parts
112,478
<p>If we have a signed measure $\mu$ on a measurable space $(\Omega,\Sigma)$, then $\mu$ may be allowed to assume values of $\pm \infty$, but not both. A complex measure $\mu$ is not allowed to assume any type of infinite value. For either type of measure $\mu$ which is not allowed to assume infinite values, the variation measure $|\mu|$ is a finite measure, where $$ |\mu|E = \sup_{\pi}\sum_{S \in \pi}|\mu S| $$ is taken over all finite partitions $\pi$ of measurable subsets of $E$. Because we want to deal with a complex measure, then we assume without loss of generality that all measures related to this discussion are finite.</p> <p>If $\mu$ is a signed measure, then we can decompose $\mu$ into positive meausres $\mu_{+}$, $\mu_{-}$ according to the (Hahn) Jordan decomposition. This decomposition is the same as a variation measure approach: $$ |\mu|=\mu_{+}+\mu_{-},\;\;\; \mu = \mu_{+}-\mu_{-}. $$ This extends to the complex case as well $$ \mu = (\Re\mu)_{+}-(\Re\mu)_{-}+i\{(\Im\mu)_{+}-(\Im\mu)_{-}\} $$ with $$ |\mu| \le |\Re\mu| + |\Im\mu| = |(\Re\mu)_{-}|+|(\Re\mu)_{+}|+|(\Im\mu)_{-}|+|(\Im\mu)_{+}|. $$ But we also have $$ |\Re\mu| \le |\mu|,\;\;\; |\Im\mu| \le |\mu|. $$ So integrability of a function $f$ with respect to $|\mu|$ is equivalent to simultaneous integrability with respect to $|(\Re\mu)_{-}|,|(\Re\mu)_{+}|,|(\Im\mu)_{-}|,|(\Im\mu)_{+}|$.</p>
970,270
<p><strong>Problem</strong></p> <p>On the one hand, a complex measure decomposes into: $$\mu=\Re_+\mu-\Re_-\mu+i\Im_+\mu-i\Im_-\mu=:\sum_{\alpha=0\ldots3}i^\alpha\mu_\alpha$$</p> <p>This gives rise to the integrability condition: $$f\in L(\mu)\iff f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$</p> <p>On the other hand, a complex measure admits a derivative: $$\mu(E)=\int_Eu\mathrm{d}|\mu|\quad(|u|=1)$$</p> <p>This gives rise to the integrability condition: $$f\in L(\mu):\iff fu\in L(|\mu|)\iff f\in L(|\mu|)$$</p> <blockquote> <p>So the question arises wether these approaches coincide: $$f\in L(|\mu|)\iff f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$</p> </blockquote> <p><strong>Attempt</strong></p> <p>So far by construction it is: $$\mu_\alpha(E)=\int_Eu_\alpha\mathrm{d}|\mu|\quad(0\leq u_\alpha\leq1)$$ which gives for positive functions: $$\int|f|\mathrm{d}\mu_\alpha=\int|f|u_\alpha\mathrm{d}|\mu|\leq\int |f|\mathrm{d}|\mu|$$ That proves the inclusion: $$f\in L(|\mu|)\implies f\in L(\mu_\alpha)\quad(\alpha=1,\ldots,3)$$ But what about the converse?</p> <p><strong>Related</strong></p> <p>For a formal treatment see: <a href="https://math.stackexchange.com/q/950599/79762">Complex Measures: Integration</a></p> <p>For a related recapitulation see: <a href="https://math.stackexchange.com/q/950627/79762">Complex Measures: Variation</a></p> <p>For a similar problem see: <a href="https://math.stackexchange.com/q/970046/79762">Complex Functions: Integrability</a> </p> <p>For a general problem see: <a href="https://math.stackexchange.com/q/970004/79762">Radon-Nikodym: Integrability?</a></p>
C-star-W-star
79,762
<blockquote> <p>In fact, the very very heart of the whole story here are the two inequalities: $$|\mu(E)|\leq|\mu|(E)&lt;\infty\quad(E\in\Sigma)$$</p> </blockquote> <p>For positive measures given as derivative: $$\kappa(E)=\int_Eh\mathrm{d}\lambda\quad(h\geq0)$$ the positive integrals agree: $$\int f\mathrm{d}\kappa=\int fh\mathrm{d}\lambda\quad(f\geq0)$$ <em>(Note, these are allowed to be infinite!)</em></p> <p>Now, consider the representation by the Radon-Nikodym derivative: $$\mu_\alpha=\int_Eu_\alpha\mathrm{d}|\mu|\quad(u_\alpha\geq0,|u|=1)$$ Exploiting the bounds: $$u_\alpha\leq|u|$$ $$|u|\leq\sum_{\alpha=0\ldots3}u_\alpha$$ one has the estimates: $$\int|f|\mathrm{d}\mu_\alpha=\int|f|\cdot u_\alpha\mathrm{d}|\mu|\leq\int|f|\cdot|u|\mathrm{d}|\mu|=\int|f|\mathrm{d}|\mu|$$ $$\int|f|\mathrm{d}|\mu|=\int|f|\cdot|u|\mathrm{d}|\mu|\leq\sum_{\alpha=0\ldots3}\int|f|\cdot u_\alpha\mathrm{d}|\mu|=\sum_{\alpha=0\ldots3}\int|f|\mathrm{d}\mu_\alpha$$ and therefore: $$f\in L(|\mu|)\iff f\in L(\mu_\alpha)\quad(\alpha=0\ldots3)$$</p>
3,163,697
<p>So, I have to approximate the value of the square root of e: <span class="math-container">$\sqrt{e}$</span> with a precision of <span class="math-container">$10^{-3}$</span>. I have calculated the first and second derivatives; so instead of <span class="math-container">$\sqrt{e}$</span> I need to approximate the value of <span class="math-container">$$\frac{1}{\sqrt[4]{e}}$$</span> If I follow the formula given by one of the users: <span class="math-container">$$\displaystyle\sum_{n=0}^{\infty}\frac{x^n}{n!}$$</span> I guess the result is the same then?</p> <p>Thanks in advance.</p>
Carl Christian
307,944
<p>There are some fundamental issues which have yet to be addressed. </p> <p>In general, our goal is to compute a target value <span class="math-container">$T \in \mathbb{R}$</span>. Typically, we cannot compute the exact value, so we settle for an approximation <span class="math-container">$A$</span>. The error is by definition <span class="math-container">$E = T-A$</span>. The relative error is by definition <span class="math-container">$R = (T-A)/T$</span>. The relative error is only defined when <span class="math-container">$T \not =0$</span>. The accuracy of the approximation <span class="math-container">$A$</span> is the absolute value of the relative error <span class="math-container">$R$</span>, i.e., the real number <span class="math-container">$|R|$</span>.</p> <p>The use of the word precision is not correct in this context. In numerical analysis, the word precision is used almost exclusively when stating the accuracy of the elementary arithmetic operations. This accuracy is called the machine precision. It is hardware dependent.</p> <p>Computing <span class="math-container">$T = \sqrt{e}$</span> with an accuracy of <span class="math-container">$\tau = 10^{-3}$</span> means to find <span class="math-container">$A$</span> such that <span class="math-container">$$|R| = \frac{|T-A|}{|T|} \leq \tau.$$</span> No more and no less. Now, let <span class="math-container">$n$</span> be a nonnegative integer. By Taylor's formula, there exists a <span class="math-container">$\xi \in (0,\frac{1}{2})$</span> such that <span class="math-container">$$ e^{\frac{1}{2}} = \sum_{j=0}^n \frac{1}{j!}\left(\frac{1}{2}\right)^j + \frac{e^{\xi}}{(n+1)!}\left(\frac{1}{2}\right)^{n+1}.$$</span> This formula is the foundation for our work. It is convenient to do the Taylor expansion at the point <span class="math-container">$x_0 =0$</span> rather than anywhere else, simply because all relevant derivatives can be evaluated exactly at <span class="math-container">$x_0=0$</span>, where they all equal <span class="math-container">$1$</span>. 
We therefore focus on the approximation <span class="math-container">$A_n$</span> given by <span class="math-container">$$A_n = \sum_{j=0}^n \frac{1}{j!}\left(\frac{1}{2}\right)^j.$$</span> We must bound the relative error <span class="math-container">$R_n$</span> given by <span class="math-container">$$ R_n = \frac{T - A_n}{T}.$$</span> The error <span class="math-container">$E_n = T - A_n$</span> satisfies <span class="math-container">$$ |E_n| = |T - A_n| \leq \frac{e^{\xi}}{(n+1)!}\left(\frac{1}{2}\right)^{n+1}$$</span> We do not know the exact value of <span class="math-container">$\xi$</span>. Fortunately, it is also irrelevant, because <span class="math-container">$x \rightarrow e^x$</span> is increasing. It follows that <span class="math-container">$$e^{\xi} \leq e^{\frac{1}{2}} = T = |T|.$$</span> It follows that <span class="math-container">$$ |E_n| \leq \frac{|T|}{(n+1)!}\left(\frac{1}{2}\right)^{n+1}$$</span> or equivalently <span class="math-container">$$ |R_n| \leq \frac{1}{(n+1)!}\left(\frac{1}{2}\right)^{n+1}.$$</span> It is now straightforward to verify that <span class="math-container">$n=4$</span> is the smallest value such that <span class="math-container">$$ \frac{1}{(n+1)!}\left(\frac{1}{2}\right)^{n+1} \leq \tau = 10^{-3}.$$</span></p>
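The cutoff $n=4$ is easy to confirm numerically by computing the partial sums $A_n$ and their relative errors directly (a sketch; the function names are mine):

```python
import math

def taylor_sqrt_e(n):
    """Partial sum A_n = sum_{j=0}^{n} (1/2)^j / j!, approximating e^(1/2)."""
    return sum((0.5 ** j) / math.factorial(j) for j in range(n + 1))

T = math.exp(0.5)

def rel_err(n):
    """|R_n| = |T - A_n| / |T|."""
    return abs(T - taylor_sqrt_e(n)) / T
```

Evaluating, `rel_err(3)` is roughly `1.8e-3` while `rel_err(4)` drops to roughly `1.7e-4`, consistent with the a-priori bound $1/((n+1)!\,2^{n+1})$.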
2,900
<p>I saved an <code>InterpolationFunction</code> in a ".mx" files using <code>DumpSave</code> on a variable that was scoped by a <code>Module</code>. Here is a stripped-down example:</p> <pre><code>Module[{interpolation}, interpolation=Interpolation[Range[10]]; DumpSave["interpolation.mx", interpolation]; ] </code></pre> <p>Is there a way to find out the variable name, presumably of the form <code>interpolation$nnn</code>, of the expression when I <code>Get</code> the interpolation? It is not apparent what the variable is when using</p> <pre><code>&lt;&lt;"interpolation.mx" </code></pre> <p>Next time I will not use a <code>Module</code> for scoping the save variable, but meantime I'd like to access the saved data and assign it to a new variable.</p>
Sjoerd C. de Vries
57
<p>You can open the MX file in an ASCII editor. The variable name is there in plain text (interpolation$511 in my case). The rest is binary gibberish.</p> <p>So given an <code>MX</code> file with a single variable with the <code>$</code> suffix, the following expression can be used to access that variable directly:</p> <pre><code>getExpression[filename_] := Module[{a}, ToExpression[(a = StringCases[Import[filename, "Text"], WordCharacter ... ~~ "$" ~~ NumberString][[1]])]; &lt;&lt; (filename); ToExpression[a] ] </code></pre> <p>E.g.,</p> <pre><code>result = getExpression["interpolation.mx"] (* InterpolatingFunction[{{1,10}},&lt;&gt;] *) </code></pre> <p>Evaluating the variable name before reading it with <code>Get</code> (or <code>&lt;&lt;</code>) seemingly overcomes the <code>Temporary</code> attribute discussed in Leonid's answer. As a (perhaps unwanted) side-effect of the above expression, the original variable remains defined.</p> <p>This approach could probably be extended to work with multiple variables and expressions in one MX file.</p>
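Outside Mathematica, the same observation (the symbol name sits in plain text inside the otherwise binary file) can be exploited with a short script. This is only a sketch: the `name$digits` pattern follows the `interpolation$511` example above, and the function name is mine:

```python
import re

def mx_symbol_names(path):
    """Scan an MX file's raw bytes for Module-generated names like foo$511."""
    data = open(path, "rb").read()
    hits = re.findall(rb"[A-Za-z][A-Za-z0-9]*\$[0-9]+", data)
    return [h.decode("ascii") for h in hits]
```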
4,092,994
<p>The question is</p> <blockquote> <p>Find the solutions to the equation <span class="math-container">$$2\tan(2x)=3\cot(x) , \space 0&lt;x&lt;180$$</span></p> </blockquote> <p>I started by applying the tan double angle formula and the reciprocal identity for cot</p> <p><span class="math-container">$$2\times \frac{2\tan(x)}{1-\tan^2(x)}=\frac{3}{\tan(x)}$$</span> <span class="math-container">$$\implies 7\tan^2(x)=3 \therefore x=\tan^{-1}\left(\pm\sqrt\frac{3}{7} \right)$$</span> <span class="math-container">$$x=-33.2,33.2$$</span></p> <p>Then by using the quadrants <a href="https://i.stack.imgur.com/QFDTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFDTs.png" alt="quadrant" /></a></p> <p>I was led to the final solution that <span class="math-container">$x=33.2,146.8$</span>; however, the answer in the book has an additional solution of <span class="math-container">$x=90$</span>. I understand the reasoning that <span class="math-container">$\tan(180)=0$</span> and <span class="math-container">$\cot(x)$</span> tends to zero as x tends to 90, but how was this solution found?</p> <p>Is there a process for consistently finding these &quot;hidden answers&quot;?</p>
Zalnd
478,981
<p><span class="math-container">$$\frac{4\tan(x)}{1-\tan^2(x)} = \frac{3}{\tan(x)}$$</span></p> <p><span class="math-container">$$\frac{4\tan(x)}{1-\tan^2(x)} - \frac{3}{\tan(x)} = 0$$</span></p> <p><span class="math-container">$$\frac{4\tan^2(x)-3[1-\tan^2(x)]}{\tan(x)[1-\tan^2(x)]} = 0$$</span></p> <p><span class="math-container">$$\frac{7\tan^2(x)-3}{\tan(x)[1-\tan^2(x)]} = 0$$</span></p> <p>You focused in the fact that the equation is satisfied when the numerator is zero, i.e., <span class="math-container">$7\tan^2(x)-3=0$</span>, but the equation is also satisfied when <span class="math-container">$\tan(x)\to\infty$</span> (when the denominator itself tends to infinity).</p> <p><span class="math-container">$$\lim_{\tan(x)\to\infty} \frac{7\tan^2(x)-3}{\tan(x)[1-\tan^2(x)]} = \lim_{\tan(x)\to\infty} \frac{\tan^2(x)\left[7-\frac{3}{\tan^2(x)}\right]}{\tan^2(x)\left[\frac{1}{\tan(x)}-\tan(x)\right]} = \lim_{\tan(x)\to\infty} \frac{7-\frac{3}{\tan^2(x)}}{\frac{1}{\tan(x)}-\tan(x)} = 0$$</span></p>
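All three solutions can be sanity-checked numerically: the two from the quadratic exactly, and $x=90^\circ$ as a limit where both sides vanish (a sketch, working in radians; the function name is mine):

```python
import math

def residual(x):
    """2 tan(2x) - 3 cot(x); zero exactly at a solution (x in radians)."""
    return 2 * math.tan(2 * x) - 3 / math.tan(x)

x1 = math.atan(math.sqrt(3 / 7))    # about 33.2 degrees, from 7 tan^2(x) = 3
x2 = math.pi - x1                   # about 146.8 degrees
near_90 = math.pi / 2 - 1e-7        # approaching the "hidden" solution x = 90
```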
1,453,067
<p>My friend's professor raised this question in a coaching session, and he and I tried everything we could think of. But later I thought that since $\sin (2x) $ can take values only between $-1$ and $+1$, and anything but $+1$ makes the integrand complex (keeping in mind that the integral is meant to be non-complex), there is no real solution to the integral. Am I correct?</p> <p>$\int \sqrt {\sin(2x)-1}\,dx$</p>
Brevan Ellefsen
269,764
<p>$$\int \sqrt {\sin(2x)-1}dx$$ Substitute $u=2x$ and $du=2dx$ $$= \frac{1}{2}\int \sqrt {\sin(u)-1}du$$ Substitute $s=\sin u -1$ and $ds = \cos u \,du$ and notice that $\cos u = \sqrt{1-\sin^2u} = \sqrt{1-(s+1)^2} = \sqrt{1-(s^2 + 2s + 1)} = \sqrt{1- s^2 - 2s - 1} = \sqrt{-s^2 - 2s} = \sqrt{-s(s+2)}$ $$= \frac{1}{2}\int \frac{\sqrt {s}}{\cos(u)}ds$$ $$= \frac{1}{2}\int \frac{\sqrt {s}}{\sqrt{-s(s+2)}}ds$$ $$= \frac{1}{2}\int \sqrt{\frac{s}{-s(s+2)}}ds$$ $$= \frac{1}{2}\int \sqrt{\frac{1}{-(s+2)}}ds$$ $$= \frac{1}{2}\int \frac{1}{\sqrt{-s-2}}ds$$ Substitute $p=-s-2$ and $dp=-ds$ $$= -\frac{1}{2}\int \frac{dp}{\sqrt{p}}$$ Can you take it from here now? Just integrate and then back substitute</p>
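<p>Two steps of the derivation above can be sanity-checked numerically (a Python sketch using <code>cmath</code> for the complex values): the algebraic step <span class="math-container">$-s(s+2)=\cos^2 u$</span>, and the back-substituted antiderivative <span class="math-container">$F(x)=-\sqrt{-\sin 2x-1}$</span> (from <span class="math-container">$p=-s-2=-\sin 2x-1$</span>), whose derivative matches <span class="math-container">$\sqrt{\sin 2x-1}$</span> in modulus; the sign depends on the branch of the complex square root.</p>

```python
import math, cmath, random

random.seed(0)

# step 1: with s = sin(u) - 1, check the identity -s(s+2) = cos^2(u)
step1_err = 0.0
for _ in range(100):
    u = random.uniform(-1.5, 1.5)
    s = math.sin(u) - 1
    step1_err = max(step1_err, abs(-s*(s + 2) - math.cos(u)**2))

# step 2: F(x) = -sqrt(-sin(2x) - 1) differentiates (numerically) to a
# complex number whose modulus equals |sqrt(sin(2x) - 1)|
def F(x):
    return -cmath.sqrt(-math.sin(2*x) - 1)

step2_err = 0.0
for _ in range(100):
    x = random.uniform(0.1, 0.7)
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2*h)   # central difference
    target = cmath.sqrt(math.sin(2*x) - 1)
    step2_err = max(step2_err, abs(abs(deriv) - abs(target)))
```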
2,529,262
<p>I have five real numbers $a,b,c,d,e$ and their arithmetic mean is $2$. I also know that the arithmetic mean of $a^2, b^2,c^2,d^2$, and $e^2$ is $4$. Is there a way by which I can prove that the range of $e$ (or any ONE of the numbers) is $[0,16/5]$. I ran across this problem in a book and am stuck on it. Any help would be appreciated.</p>
quasi
400,434
<p>\begin{align*} &amp;(a - 2)^2 + (b-2)^2 + (c-2)^2 + (d-2)^2 + (e - 2)^2\\[4pt] &amp;= (a^2 + b^2 + c^2 + d^2 +e^2) - 4(a + b + c + d +e) + 20\\[4pt] &amp;= 20 - 4(10) + 20\\[4pt] &amp;= 0\\[10pt] &amp;\;\text{hence}\\[10pt] &amp;\;a = b = c = d = e = 2\\[4pt] \end{align*}</p>
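<p>The algebraic identity driving the answer, <span class="math-container">$\sum(t-2)^2=\sum t^2-4\sum t+20$</span> for five numbers, can be checked numerically (a quick Python sketch); with the given data (<span class="math-container">$\sum t=10$</span>, <span class="math-container">$\sum t^2=20$</span>) every squared deviation must vanish, forcing <span class="math-container">$a=b=c=d=e=2$</span>.</p>

```python
import random

random.seed(0)
max_err = 0.0
for _ in range(100):
    v = [random.uniform(-10, 10) for _ in range(5)]
    lhs = sum((t - 2)**2 for t in v)
    rhs = sum(t*t for t in v) - 4*sum(v) + 20
    max_err = max(max_err, abs(lhs - rhs))

# the forced solution is consistent with both stated means
forced = [2.0] * 5
constraints_hold = (sum(forced) == 10 and sum(t*t for t in forced) == 20)
```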
3,791,936
<p>An advanced sum <a href="https://www.facebook.com/photo.php?fbid=3190290677734375&amp;set=a.222846247812181&amp;type=3&amp;theater" rel="noreferrer">proposed</a> by Cornel Valean:</p> <blockquote> <p><span class="math-container">$$S=\sum_{n=1}^\infty\frac{2^{2n}H_{n+1}}{(n+1)^2{2n\choose n}}$$</span> <span class="math-container">$$=4\text{Li}_4\left(\frac12\right)-\frac12\zeta(4)+\frac72\zeta(3)-4\ln^22\zeta(2)+6\ln2\zeta(2)+\frac16\ln^42-1$$</span></p> </blockquote> <p>I managed to find the integral representation of <span class="math-container">$\ \displaystyle\sum_{n=1}^\infty\frac{2^{2n}H_n}{n^2{2n\choose n}}\ $</span> but not <span class="math-container">$S$</span>:</p> <p><a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Reihenentwicklungen#Potenzen_des_Arkussinus" rel="noreferrer">Since</a></p> <p><span class="math-container">$$\frac{\arcsin x}{\sqrt{1-x^2}}=\sum_{n=1}^\infty\frac{(2x)^{2n-1}}{n{2n\choose n}}$$</span></p> <p>we can write</p> <p><span class="math-container">$$\frac{2\sqrt{x}\arcsin \sqrt{x}}{\sqrt{1-x}}=\sum_{n=1}^\infty\frac{2^{2n}x^{n}}{n{2n\choose n}}$$</span></p> <p>now multiply both sides by <span class="math-container">$-\frac{\ln(1-x)}{x}$</span> then <span class="math-container">$\int_0^1$</span> and use that <span class="math-container">$-\int_0^1 x^{n-1}\ln(1-x)dx=\frac{H_n}{n}$</span> we have</p> <p><span class="math-container">$$\sum_{n=1}^\infty\frac{2^{2n}H_n}{n^2{2n\choose n}}=-2\int_0^1 \frac{\arcsin \sqrt{x}\ln(1-x)}{\sqrt{x}\sqrt{1-x}}dx\tag1$$</span></p> <p>But I could not get the integral representation of <span class="math-container">$S$</span>. Any idea?</p> <p>In case you find the integral, I prefer solutions that do not use contour integration or you can leave it to me to give it a try. 
Thank you.</p> <p>In case the reader is curious about computing the integral in <span class="math-container">$(1)$</span>, set <span class="math-container">$x=\sin^2\theta$</span> then use the Fourier series of <span class="math-container">$\ln(\cos \theta)$</span>.</p>
Ali Shadhar
432,085
<p>Following @Felix's idea above:</p> <p><span class="math-container">$$S=\sum_{n=1}^\infty\frac{2^{2n}H_{n+1}}{(n+1)^2{2n\choose n}}=\sum_{n=2}^\infty\frac{2^{2n-2}H_n}{n^2{2n-2\choose n-1}}$$</span></p> <p>Note that</p> <p><span class="math-container">$$\frac{{2n+2\choose n+1}}{{2n\choose n}}=\frac{\frac{\Gamma(2n+3)}{\Gamma^2(n+2)}}{\frac{\Gamma(2n+1)}{\Gamma^2(n+1)}}=\frac{\frac{(2n+2)(2n+1)\Gamma(2n+1)}{((n+1)\Gamma(n+1))^2}}{\frac{\Gamma(2n+1)}{\Gamma^2(n+1)}}=\frac{(2n+2)(2n+1)}{(n+1)^2}=\frac{2(2n+1)}{n+1}$$</span></p> <p>replace <span class="math-container">$n$</span> by <span class="math-container">$n-1$</span> we get</p> <p><span class="math-container">$$\frac{1}{{2n-2\choose n-1}}=\frac{2(2n-1)}{n{2n\choose n}}$$</span></p> <p>Therefore</p> <p><span class="math-container">$$S=\sum_{n=2}^\infty\frac{2^{2n-1}(2n-1)H_n}{n^3{2n\choose n}}=\sum _{n=1}^{\infty } \frac{2^{2n} H_n}{n^2 {2n\choose n}}-\frac12 \sum _{n=1}^{\infty } \frac{2^{2n} H_n}{n^3 {2n\choose n}}-1\tag1$$</span></p> <p>In the question body we have</p> <p><span class="math-container">$$\sum _{n=1}^{\infty } \frac{2^{2n} H_n}{n^2 {2n\choose n}}=-2\int_0^1 \frac{\arcsin \sqrt{x}\ln(1-x)}{\sqrt{x}\sqrt{1-x}}dx\overset{\sqrt{x}=\sin\theta}{=}-8\int_0^{\pi/2} \theta \ln(\cos\theta)d\theta$$</span></p> <p><span class="math-container">$$=-8\int_0^{\pi/2}\theta\left(-\ln(2)-\sum_{n=1}^\infty\frac{(-1)^n\cos(2n\theta)}{n}\right)d\theta=6\ln(2)\zeta(2)+\frac72\zeta(3)\tag2$$</span></p> <p>and <a href="https://math.stackexchange.com/q/3792648">here</a> we already showed</p> <p><span class="math-container">$$\sum_{n=1}^\infty\frac{2^{2n}H_n}{n^3{2n\choose n}}=-8\text{Li}_4\left(\frac12\right)+\zeta(4)+8\ln^2(2)\zeta(2)-\frac{1}{3}\ln^4(2)\tag3$$</span></p> <p>Finally, plug <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span> in <span class="math-container">$(1)$</span> we obtain</p> <p><span 
class="math-container">$$S=4\text{Li}_4\left(\frac12\right)-\frac12\zeta(4)+\frac72\zeta(3)-4\ln^2(2)\zeta(2)+6\ln(2)\zeta(2)+\frac16\ln^4(2)-1$$</span></p>
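<p>The central-binomial identity used in the reindexing step, <span class="math-container">$\frac{1}{{2n-2\choose n-1}}=\frac{2(2n-1)}{n{2n\choose n}}$</span>, can be verified exactly (a small Python sketch with <code>math.comb</code>):</p>

```python
from math import comb

# equivalent integer form: n * C(2n, n) == 2*(2n - 1) * C(2n - 2, n - 1)
identity_holds = all(
    n * comb(2*n, n) == 2*(2*n - 1) * comb(2*n - 2, n - 1)
    for n in range(1, 200)
)
```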
258,132
<p>Consider the following simple example as motivation for my question. If it were the case that, say, the Riemann hypothesis turned out to be independent of ZFC, I have no doubt it would be accepted by many as a new axiom (or some stronger principle which implied it). This is because we intuitively think that if we cannot find a zero off of the half-line, then there is no zero off the half-line. It just doesn't "really" exist.</p> <p>Similarly, if Goldbach's conjecture is independent of ZFC, we would accept it as true as well, because we could never find any counter-examples.</p> <p>However, is there any reason we should suppose that adding these two independent statements as axioms leads to a consistent system? Yes, because we have the "standard model" of the natural numbers (assuming sufficient consistency).</p> <p>But can this Platonic line of thinking work in a first-order way, for any theory? Or is it specific to the natural numbers, and its second-order model? </p> <p>In other words, let $T$ be a (countable, effectively enumerable) theory of first-order predicate logic. Define ${\rm Plato}(T)$ to be the theory obtained from $T$ by adjoining statements $p$ such that: $p:=\forall x\ \varphi(x)$ where $\varphi(x)$ is a formula with $x$ a free variable (and the only one) and $\forall x\ \varphi(x)$ is independent of $T$. Does ${\rm Plato}(T)$ have a model? Is it consistent?</p> <p>The motivation for my question is that, as an algebraist, I have a very strong intuition that if you cannot construct a counter-example then there is no counter-example and the corresponding universal statement is "true" (for the given universe of discourse). In particular, I'm thinking of the Whitehead problem, which was shown by Shelah to be independent of ZFC. From an algebraic point of view, this seems to suggest that Whitehead's problem has a positive solution, since you cannot really find a counter-example to the claim. 
But does adding the axiom "there is no counter-example to the Whitehead problem" disallow similar new axioms for other independent statements? Or can this all be done in a consistent way, as if there really is a Platonic reality out there (even if we cannot completely touch it, or describe it)?</p>
Maxime Ramzi
102,343
<p>By definition, if a sentence $\phi$ is independent from the theory $T$, then both $T + \phi$ and $T+\neg \phi$ are consistent. Gödel's completeness theorem shows that if a theory is consistent, then it is satisfiable, meaning there is a model that satisfies it. In less technical terms, if you cannot derive a contradiction from an axiom system, then there is a possible world in which this axiom system is satisfied. So if for example RH happened to be independent from ZFC, then you could add RH as an axiom to ZFC to obtain, say, ZFCR, and as long as ZFC is consistent, so is ZFCR (but also, ZFC+$\neg RH$ would be consistent). So to answer your question about $Plato(T)$: if $T$ has a model and $T$ does not prove $\neg p$, then $T+p =Plato(T)$ has a model. This has nothing to do with the integers. But of course, adding new axioms will change the statements that are independent. Maybe $ZFC + RH \vdash CH$, for instance, whereas ZFC alone doesn't.</p> <p>EDIT: As has been pointed out in the comments below, $Plato(T)$ is obtained by adding all statements $p$ such that ... If you consider this rather than what I've considered before, then obviously $Plato(T)$ is inconsistent: consider $\phi(x)$ with only free variable $x$ such that $\forall x, \phi(x)$ is independent from $T$. Then you consider $\psi(y) := (\exists x, \neg \phi(x))\land (y=y)$ which is a formula with only free variable $y$, and is such that $\forall y, \psi(y)$ is independent from $T$. Then $Plato(T)$ contains both $\forall x, \phi(x)$ and $\forall y, \psi(y)$, from which you can derive $\exists x, \neg \phi(x)$, which is an immediate contradiction.</p>
263,650
<p>As proposed by Quillen, Drinfeld, and Deligne and other important mathematicians, there is supposed to be a philosophy that, at least over a field of characteristic zero, assigns to every "deformation problem" a differential graded Lie algebra or $L_{\infty}$-algebra that controls it. </p> <p>I've seen this idea realized in various situations, like for example the deformation theory of a compact complex manifold, which is "controlled" by the Kodaira-Spencer DGLA, or the deformation theory of Dirac structures in exact Courant algebroids. However, from my naive point of view as an outsider, I see this set of techniques completely disconnected from a different type of moduli problems of more "analytic" character. I refer for example to the moduli of Einstein metrics or the moduli of instantons in Donaldson's theory. It looks like the DGLA-philosophy has played virtually no role in the study of the moduli spaces of Einstein metrics and instantons. Why is this so? Does the DGLA-principle still applies to these problems but it does not add anything interesting? What is the DGLA controlling the deformation theory of these problems? It looks like the more analysis requires a moduli space problem, the less of a relevant role is played by the DGLA-philosophy, which seems to be more "algebraically inclined". I wonder because sometimes it looks like the moduli-literature is very polarized in different communities using different techniques to study moduli problems, so I would like to know to which extent the methods of one community apply to the problem considered by a different community.</p> <p>Thanks.</p>
Justin Hilburn
333
<p>I haven't thought about Einstein metrics but, as AHusain mentioned, Kevin Costello has written down many examples. The keyword is elliptic moduli problem. Look at <a href="https://arxiv.org/abs/1111.4234" rel="nofollow noreferrer">https://arxiv.org/abs/1111.4234</a></p>
263,650
domenico fiorenza
8,320
<p>The Quillen-Drinfeld-Deligne-etc. philosophy should not be looked at as something too mysterious.</p> <p>Namely, it reduces to the fact that if the set of objects whose infinitesimal deformations one is interested in is not too wild, then it can be described in the form $f(v)+Q(v)=0$, where $f:V\to W$ is a linear function and $Q:V \to W$ is a quadratic function. Let me give a very simple example of how this works: assume we have a degree four polynomial $p(x)$ and assume $x=a$ is a zero of $p$. We want to describe the other zeroes. To do so, we consider the polynomial $q(x)=p(x+a)$. This will be a fourth degree polynomial vanishing at $x=0$, so it will have the form, say, $q(x)=x^4+2x^3-x^2+7x$. We are now interested in the zeroes of $q$. We can add a new variable $y$ and set $x^2=y$. Now the equation $q(x)=0$ has become the two degree two equations $y^2+2xy-x^2+7x=0$ and $x^2-y=0$. So it is of the form $f(v)+Q(v)=0$ with $v=(x,y)$, $f(x,y)=(7x,-y)$ and $Q(x,y)=(y^2+2xy-x^2,x^2)$.</p> <p>Once the set we want to describe is written in the form $f(v)+Q(v)=0$ we are done. The linear map $f\colon V\to W$ can be seen as a degree 1 map from $V[-1]$ to $W[-2]$, where $V[-1]$ is the same vector space as $V$ but now with all of its elements seen as if they were in degree 1, and $W[-2]$ is the same vector space as $W$ but with all of its elements seen as if they were in degree 2. Also, the quadratic map $Q: Sym^2(V) \to W$ can be seen as a degree zero map $V[-1]\wedge V[-1] \to W[-2]$. It is then immediate to see that setting $\mathfrak{g}=V[-1]\oplus W[-2]$, the graded vector space $\mathfrak{g}$ is a differential graded Lie algebra with differential defined by $f$ and bracket defined by $2Q$ (here we use the fact that we are in characteristic zero or at least not in characteristic 2). 
The set of deformations we are interested in is then described by the equation $dx+\frac{1}{2}[x,x]=0$ for degree 1 elements in $\mathfrak{g}$, i.e., by the solutions of the Maurer-Cartan equation for $\mathfrak{g}$. </p> <p>Moreover, if there is a Lie algebra $\mathfrak{h}$ of symmetries for our set, we can include $\mathfrak{h}$ in our differential graded Lie algebra by setting $\mathfrak{g}^0=\mathfrak{h}$ and extending the definition of the differential and of the bracket so as to encode both the Lie algebra structure on $\mathfrak{h}$ and its Lie action on $V$. Doing this one sees that our infinitesimal deformations modulo the action of $\exp(\mathfrak{h})$ are precisely the Maurer-Cartan elements of $\mathfrak{g}$ modulo the gauge action of $\exp(\mathfrak{g}^0)$.</p> <p>All this to say that describing every (regular enough) deformation problem by means of a DGLA should not be seen as particularly deep. What is special is that sometimes the DGLA governing the problem is naturally attached to the geometry of the problem, e.g., for (almost) complex structures the equation defining them is $J^2=0$, so it is quadratic on the nose, and being quadratic is not an artifact. </p> <p>For Einstein metrics or instantons I have not thought about what the DGLA description would be. It could indeed be interesting to figure out. Yet, as I tried to explain above, if such an approach is not taken in the literature, the reason is not that it cannot in principle be taken; rather, if the DGLA governing these two problems has to be built artificially, then the DGLA point of view adds very little (if anything) over a direct approach to the problem. </p>
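<p>The polynomial example above can be checked numerically: with $v=(x,y)$, $f(x,y)=(7x,-y)$ and $Q(x,y)=(y^2+2xy-x^2,\,x^2)$, the system $f(v)+Q(v)=0$ reproduces $q(x)=0$ exactly on the locus $y=x^2$ (a quick Python sketch):</p>

```python
import random

def q(x):
    return x**4 + 2*x**3 - x**2 + 7*x

def f(x, y):          # linear part
    return (7*x, -y)

def Q(x, y):          # quadratic part
    return (y**2 + 2*x*y - x**2, x**2)

random.seed(0)
max_err = 0.0
for _ in range(100):
    x = random.uniform(-5, 5)
    y = x*x                            # impose the substitution y = x^2
    first = f(x, y)[0] + Q(x, y)[0]    # should equal q(x)
    second = f(x, y)[1] + Q(x, y)[1]   # should vanish identically
    max_err = max(max_err, abs(first - q(x)), abs(second))
```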
2,945,913
<p>I have a quick question about simplifying these exponents and then comparing them:</p> <p><span class="math-container">$8^{\log_2 n}, 2^{3\log_2(\log_2 n)}$</span> and <span class="math-container">$2^{(\log_2 n)^2} $</span></p> <p>I know the third one evaluates to <span class="math-container">$n^{\log_2 n}$</span>, but I'm not sure how I would simplify the other two. I do know that <span class="math-container">$2^{\log_2 n} = n$</span>, but how could I use this to simplify the other ones, because I don't think I'm simplifying them correctly. I also tried simplifying 8 into <span class="math-container">$2^3$</span>, but I wasn't sure what to do from there.</p> <p>Thanks!</p>
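<p>For reference, the standard simplifications are <span class="math-container">$8^{\log_2 n}=(2^3)^{\log_2 n}=2^{3\log_2 n}=n^3$</span> and <span class="math-container">$2^{3\log_2(\log_2 n)}=(\log_2 n)^3$</span>; a quick numeric check (Python sketch):</p>

```python
import math

checks = []
for n in [4.0, 10.0, 100.0, 1234.5]:
    lg = math.log2(n)
    # 8^(log2 n) == n^3
    checks.append(abs(8**lg - n**3) <= 1e-9 * n**3)
    # 2^(3 log2 log2 n) == (log2 n)^3
    checks.append(abs(2**(3*math.log2(lg)) - lg**3) <= 1e-9 * lg**3)
all_ok = all(checks)

# for large n the three expressions are ordered:
# (log2 n)^3  <<  n^3  <<  n^(log2 n)
n = 100.0
lg = math.log2(n)
ordered = lg**3 < n**3 < n**lg
```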
Matthew Hunt
257,736
<p>The usual method for integrals of this form is to use</p> <p><span class="math-container">$$t=\tan\frac{u}{2}$$</span></p>
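<p>For reference, the substitution rests on the half-angle identities <span class="math-container">$\sin u=\frac{2t}{1+t^2}$</span>, <span class="math-container">$\cos u=\frac{1-t^2}{1+t^2}$</span> and <span class="math-container">$du=\frac{2\,dt}{1+t^2}$</span>; a quick numeric check (Python sketch):</p>

```python
import math, random

random.seed(0)
max_err = 0.0
for _ in range(200):
    u = random.uniform(-3.0, 3.0)
    if abs(math.cos(u/2)) < 1e-3:   # avoid the poles of tan(u/2)
        continue
    t = math.tan(u/2)
    max_err = max(max_err,
                  abs(math.sin(u) - 2*t/(1 + t*t)),
                  abs(math.cos(u) - (1 - t*t)/(1 + t*t)))
```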
1,822,562
<p>Does the following series converge? Please explain what method you used to prove it. $$\sum_{n=3}^\infty \frac{\tan\left(\frac{\pi}{n}\right)}{n}$$</p>
Mark Viola
218,419
<p>In <a href="https://math.stackexchange.com/questions/1517704/limit-lim-x-to0-1-tan9x-frac1-arcsin5x/1520012#1520012">THIS ANSWER</a>, I showed using elementary inequalities from geometry that the tangent function satisfies the inequalities </p> <p>$$x\le \tan(x)\le x\sec(x) \tag 1$$</p> <p>for $0\le x&lt;\pi/2$. Therefore, using $(1)$ we find for $n\ge 3$</p> <p>$$\left|\frac{\tan\left(\frac{\pi}{n}\right)}{n}\right|\le \frac{\pi}{n^2}\sec(\pi/n)\le \frac{2\pi }{n^2}$$</p> <p>since the secant function on $[0,\pi/3]$ attains its maximum at $\pi/3$</p> <p>Finally, using $\sum_{n=1}^\infty\frac1{n^2}=\frac{\pi^2}{6}$, we see that the series of interest converges and is in fact, less than $\pi^3/3$. </p>
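<p>Both inequalities used above can be checked numerically, along with the resulting bound $\pi^3/3$ on the sum (a Python sketch; the partial-sum loop is truncated, which only undercounts a positive-term series):</p>

```python
import math

# termwise bound: tan(pi/n)/n <= 2*pi/n^2 for n >= 3
termwise = all(
    math.tan(math.pi/n)/n <= 2*math.pi/n**2 + 1e-15
    for n in range(3, 5000)
)

# partial sums of the positive series stay below pi^3/3
partial = sum(math.tan(math.pi/n)/n for n in range(3, 200000))
bounded = partial < math.pi**3 / 3
```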
3,066,967
<p>Prove that <span class="math-container">$k^2+k+1$</span> is not divisible by <span class="math-container">$101$</span> for any natural <span class="math-container">$k.$</span></p>
Michael Rozenberg
190,319
<p>Let <span class="math-container">$$k^2+k+1\equiv0(\mod101).$$</span> <span class="math-container">$k\equiv0(\mod101)$</span> is impossible (it would give <span class="math-container">$k^2+k+1\equiv1$</span>), so we can assume that <span class="math-container">$k$</span> is not divisible by <span class="math-container">$101$</span>.</p> <p>Now, <span class="math-container">$$(k-1)(k^2+k+1)\equiv0(\mod101)$$</span> or <span class="math-container">$$k^3\equiv1(\mod101).$$</span> But by Fermat's little theorem <span class="math-container">$$k^{100}\equiv1(\mod101),$$</span> and since <span class="math-container">$100=3\cdot33+1$</span>, <span class="math-container">$$k=k^{100}\cdot\left(k^{3}\right)^{-33}\equiv1(\mod101),$$</span> so <span class="math-container">$1^2+1+1=3$</span> would be divisible by <span class="math-container">$101$</span>, which is a contradiction. </p>
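<p>Since there are only <span class="math-container">$101$</span> residues to try, the claim can also be verified by brute force, together with the step that <span class="math-container">$k^3\equiv1$</span> forces <span class="math-container">$k\equiv1$</span> (a quick Python sketch):</p>

```python
# no residue class mod 101 makes k^2 + k + 1 divisible by 101
no_roots = all((k*k + k + 1) % 101 != 0 for k in range(101))

# the only cube root of unity mod 101 is 1, since gcd(3, 100) = 1
cube_roots = [k for k in range(1, 101) if pow(k, 3, 101) == 1]
```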
199,738
<p>It is known that, if a function $f$ from a planar domain $D$ to a Banach space $A$ is weakly analytic [i.e. $l(f)$ is analytic for every $l$ in $A^*$], then $f$ is strongly analytic [i.e. $\lim_{h \to 0} h^{-1}[f(z+h)-f(z)]$ exists in norm for every $z$ in $D$].</p> <p>Now the question is, if above $f$ is assumed to be weakly continuous [i.e.$l(f)$ is continuous for every $l$ in $A^*$], then is it true that $f$ will be strongly continuous.[i.e. $\lim_{h \to 0} [f(z+h)-f(z)] = 0$ in norm for every $z$ in $D$.] </p>
Rudy the Reindeer
5,798
<p>Let $X$ be an infinite dimensional Banach space. Let $T_w$ denote the weak topology and let $T$ denote the norm topology. Then $\mathrm{id}: (X, T_w) \to (X, T)$ is not continuous but $\mathrm{id}: (X, T_w) \to (X, T_w)$ is. </p> <p>Note that every map $f: X \to Y$ that is weakly continuous where weakly means $f: (X, T_w) \to Y$ is also strongly continuous, $f: (X, T) \to Y$, since the norm (or strong) topology contains more open sets than the weak topology (by definition).</p>
2,972,938
<p>When is it possible to make a change of variables in the limit?</p> <p>For example <span class="math-container">$\lim_{x \to \infty}(\ln x/x)$</span>, can I change <span class="math-container">$x=e^{y}$</span>?</p> <p>Then <span class="math-container">$\lim_{x \to \infty}(\ln x/x)= \lim_{y \to \infty}(y/e^{y})$</span>?</p> <p>How can I prove that the change of variables is valid?</p>
user247327
247,327
<p>What?? If "<span class="math-container">$A^2$</span> is the identity matrix" then <span class="math-container">$A^3= A(A^2)= A$</span>. No "calculation" at all required!</p>
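<p>Concretely, with the reflection matrix <span class="math-container">$A=\begin{pmatrix}0&amp;1\\1&amp;0\end{pmatrix}$</span> (for which <span class="math-container">$A^2=I$</span>), a quick check (Python sketch):</p>

```python
def matmul(A, B):
    """Naive matrix product for small matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 1], [1, 0]]      # a reflection: A*A is the identity
A2 = matmul(A, A)
A3 = matmul(A2, A)        # A^3 = (A^2) A = I A = A
```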
222,639
<p>It seems to be true that multiply transitive permutation groups have been classified completely (using CFSG), but I am having trouble finding a reference where this classification is actually stated. Is there a canonical reference?</p>
Michael Zieve
30,412
<p>This is a bit absurd as a reference, but if you just want an explicit list that has the dubious merit of having gotten past a referee, the <span class="math-container">$3$</span>-transitive groups are listed on pp. 86-87 of Abhyankar's paper <a href="https://www.ams.org/journals/bull/1992-27-01/S0273-0979-1992-00270-7/S0273-0979-1992-00270-7.pdf" rel="nofollow noreferrer">Galois theory on the line in nonzero characteristic</a>. That list also explicitly states which groups in the list are <span class="math-container">$4$</span>-transitive, <span class="math-container">$5$</span>-transitive, etc. On the other hand, a drawback of Abhyankar's list is that it's missing some of the groups between <span class="math-container">$\text{PSL}_2(q)$</span> and <span class="math-container">$\text{P}\Gamma\text{L}_2(q)$</span> when <span class="math-container">$q$</span> is an odd fourth power. Personally I prefer the list in Derek Holt's answer, since I value correctness more than publishedness.</p>
2,205,042
<p>I want to show that there exists some $M$ such that for any $n$ and any $x \in [\epsilon, 2\pi - \epsilon]$ we have $\left |\sum_{m = 1}^n e^{imx} \right|\leq M$. Geometrically, it is like starting at the origin facing east, then turning left by an angle $x$ and moving forward by 1 unit of distance, and repeating this $n$ times. We want to show that the spot where you end up is always at distance less than $M$ from the origin. From this geometric interpretation, it is clear that this is the case since $x$ is at least $\epsilon$ and at most $2\pi - \epsilon$. However, how can I show this algebraically? </p>
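<p>One standard route is to sum the geometric series in closed form: $\sum_{m=1}^n e^{imx}=e^{ix}\frac{e^{inx}-1}{e^{ix}-1}$, so the modulus is at most $2/|e^{ix}-1|=1/|\sin(x/2)|\le 1/\sin(\epsilon/2)=:M$ on $[\epsilon,2\pi-\epsilon]$. A quick numeric check of that bound (Python sketch, with $\epsilon=0.3$ as an arbitrary test value):</p>

```python
import cmath, math, random

eps = 0.3                      # arbitrary test value of epsilon
M = 1 / math.sin(eps / 2)      # claimed uniform bound

random.seed(0)
within_bound = True
for _ in range(200):
    n = random.randint(1, 500)
    x = random.uniform(eps, 2*math.pi - eps)
    s = sum(cmath.exp(1j*m*x) for m in range(1, n + 1))
    within_bound = within_bound and abs(s) <= M + 1e-9
```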
mrp
134,447
<p>Well, you are missing the crucial parts. First of all, the set of accepting states should be $F_1 \cup F_2$, not the intersection. The idea is that we start in $M_1$, and when we have accepted a word $w_1$ in $M_1$, then we non-deterministically choose between either stopping there, or continue reading a word $w_2$ which should be in $M_2$. Hence you must add transitions $$\delta(r,a) = \begin{cases} \delta_2(r,a) \cup \{q_1\} \text{ if } r \in F_2 \text{ and } a = \varepsilon \\ \delta_1(r,a) \cup \{q_2\} \text{ if } r \in F_1 \text{ and } a = \varepsilon\end{cases}$$ that can take you from an accept state in one of the automata to the start state in the other. This way, you can "shuffle" back and forth between $M_1$ and $M_2$.</p>
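<p>The $\varepsilon$-linking construction described above can be sketched as a small $\varepsilon$-NFA simulator (Python; the two one-letter machines below, $M_1$ accepting $\{a\}$ and $M_2$ accepting $\{b\}$, are hypothetical examples). With $\varepsilon$-moves from $F_1$ to $q_2$ and from $F_2$ back to $q_1$, and accepting set $F_1\cup F_2$, the linked machine accepts exactly the alternating words $a, ab, aba, \dots$</p>

```python
def eps_closure(states, eps):
    """All states reachable from `states` via epsilon-moves."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def accepts(word, start, delta, eps, accepting):
    cur = eps_closure({start}, eps)
    for a in word:
        nxt = set()
        for q in cur:
            nxt |= set(delta.get((q, a), ()))
        cur = eps_closure(nxt, eps)
    return bool(cur & accepting)

# M1: q1 --a--> f1 (accept);  M2: q2 --b--> f2 (accept)
delta = {("q1", "a"): {"f1"}, ("q2", "b"): {"f2"}}
# link the machines as in the answer: F1 -> q2 and F2 -> q1
eps = {"f1": {"q2"}, "f2": {"q1"}}
accepting = {"f1", "f2"}

results = {w: accepts(w, "q1", delta, eps, accepting)
           for w in ["a", "ab", "aba", "abab", "b", "aa", "ba"]}
```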
510,633
<p>An independent variable is "the input" and the dependent variable is the "output", at least that's how it was explained to us.</p> <p>But if you have some random function, can't both variables be seen as "affecting" the other variable?</p> <p>For example, in $ y = 1/x$, "x" could be seen as an input and "y" the output, but isn't the opposite also true? Why can't "y" be the input and "x" be the output? </p> <p>You could say that the dependent variable is the one that takes a unique value for each value of the independent variable, but that's not necessarily true the other way around. (Think $y = x^2$. $x$ is the independent variable because for every $x$ value there is only one $y$ value, but there isn't a unique $x$ value for every $y$ value.)</p> <p>But how can this definition be extended to functions that have unique domain values for all range values and unique range values for all domain values? Think $y= 1/x$.</p>
Brian M. Scott
12,042
<p>HINT: Use the binomial theorem to expand both sides of the desired inequality. For example,</p> <p>$$\left(1+\frac1n\right)^n=\sum_{k=0}^n\binom{n}k\left(\frac1n\right)^k\;.$$</p> <p>Note that</p> <p>$$\binom{n}k=\frac{n(n-1)(n-2)\ldots(n-k+1)}{k!}\;.$$</p>
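<p>The expansion in the hint can be verified exactly with rational arithmetic, along with the monotonicity $(1+\frac1n)^n&lt;(1+\frac1{n+1})^{n+1}$ that the termwise comparison is presumably driving at (a Python sketch):</p>

```python
from fractions import Fraction
from math import comb

def power_term(n):
    """(1 + 1/n)^n as an exact rational."""
    return (1 + Fraction(1, n))**n

# the binomial-theorem expansion agrees with the closed form
expansion_ok = all(
    power_term(n) == sum(comb(n, k) * Fraction(1, n)**k for k in range(n + 1))
    for n in range(1, 20)
)

# the sequence is strictly increasing
increasing = all(power_term(n) < power_term(n + 1) for n in range(1, 20))
```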
1,136,060
<p>What identity would I need to use to solve for $\theta$? </p> <p>$5 + \cos(\theta) = 7\sin(\theta)$ </p> <p>By plugging this into a calculator, I was able to get $\theta \approx 53.13^\circ$. </p>
copper.hat
27,978
<p>Write $7 \sin \theta - \cos \theta = 5$, then ${7 \over \sqrt{50}} \sin \theta - {1 \over \sqrt{50}}\cos \theta = {5 \over \sqrt{50}}$.</p> <p>Now note that $({7 \over \sqrt{50}})^2 + ({1 \over \sqrt{50}})^2 = 1$. Now find $\psi$ such that $\cos \psi = {7 \over \sqrt{50}}$, $\sin \psi = {1 \over \sqrt{50}}$, then the problem becomes $\sin (\theta-\psi) = {5 \over \sqrt{50}}$.</p> <p>We have $\psi = \arcsin {1 \over \sqrt{50}}$, so $\theta = \arcsin {5 \over \sqrt{50}} + \arcsin {1 \over \sqrt{50}} \approx 53.13^\circ$.</p>
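<p>A quick numeric check of the final answer (Python sketch): $\theta=\arcsin\frac{5}{\sqrt{50}}+\arcsin\frac{1}{\sqrt{50}}$ indeed satisfies the original equation and is about $53.13^\circ$.</p>

```python
import math

theta = math.asin(5/math.sqrt(50)) + math.asin(1/math.sqrt(50))
theta_deg = math.degrees(theta)
# residual of the original equation 5 + cos(theta) = 7 sin(theta)
residual = abs(5 + math.cos(theta) - 7*math.sin(theta))
```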