4,364,686
<p>I have a point in a 3D coordinate system 1 (CS1). There can be two situations: the point is constant, or the point is moving along a straight line from one known position to another at constant speed.</p> <p>CS1 is rotating in another (static) 3D coordinate system (CS2). The rotations of CS1 are known, i.e. the starting and ending angles are known, and the angular speeds are constant, so we can get a precise rotation matrix at any moment of time.</p> <p>I need to find the length of the point's trajectory in CS2.</p> <p>In the simplest case, when the point isn't moving in CS1 and CS1 is rotating around one axis of CS2, the trajectory is a simple arc. In more complex cases, my current solution is to find a few points along the way (having the point's position in CS1 and the rotation angles of CS1 in CS2), interpolate them with a cubic spline, and then get the length of the spline.</p> <p>Is there a more precise and/or more straightforward way to find the trajectory of the point in CS2? Thanks.</p>
Community
-1
<p><strong>Hint:</strong></p> <p>WLOG one of the points describes a unit circle in the plane <span class="math-container">$XY$</span> and the second point a circle in a parallel plane, with some phase difference but at the same unit angular speed.</p> <p><span class="math-container">$$P(t)=(\cos t,\sin t,0),$$</span> <span class="math-container">$$Q(t)=(r\cos(t-t_0),r\sin(t-t_0),z_0).$$</span></p> <p>Now a point that travels between them at constant speed has the coordinates <span class="math-container">$$R(t)=(1-v(t-t_1))P(t)+v(t-t_1)Q(t)$$</span> and <span class="math-container">$$\dot R(t)=-vP(t)+vQ(t)+(1-v(t-t_1))\dot P(t)+v(t-t_1)\dot Q(t).$$</span></p> <p>After simplification, the element of arc seems to be just the square root of a quadratic trinomial, and this has an analytical antiderivative. But you will definitely need a CAS.</p>
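The hint's claim that $|\dot R(t)|^2$ reduces to a quadratic trinomial in $t$ is easy to sanity-check numerically before reaching for a CAS. A sketch with arbitrary (hypothetical) parameters $r, t_0, z_0, v, t_1$:

```python
import numpy as np

# Arbitrary (hypothetical) parameters for the construction in the hint.
r, t0, z0, v, t1 = 0.7, 0.4, 0.3, 0.2, 0.0

def Rdot(t):
    # derivative of R(t) = (1 - v(t - t1)) P(t) + v(t - t1) Q(t)
    P  = np.array([np.cos(t), np.sin(t), 0.0])
    Q  = np.array([r * np.cos(t - t0), r * np.sin(t - t0), z0])
    Pd = np.array([-np.sin(t), np.cos(t), 0.0])
    Qd = np.array([-r * np.sin(t - t0), r * np.cos(t - t0), 0.0])
    s  = v * (t - t1)
    return -v * P + v * Q + (1 - s) * Pd + s * Qd

ts = np.linspace(0.0, 3.0, 60)
speed2 = np.array([Rdot(t) @ Rdot(t) for t in ts])

# If |R'(t)|^2 really is a quadratic trinomial in t, a degree-2 fit is exact.
coeffs = np.polyfit(ts, speed2, 2)
residual = np.max(np.abs(speed2 - np.polyval(coeffs, ts)))

# Arc length over [0, 3] by the trapezoid rule on sqrt(speed2).
speed = np.sqrt(speed2)
arc_length = np.sum((speed[1:] + speed[:-1]) / 2 * np.diff(ts))
```

The fit residual is at machine-precision level because every dot product appearing in $|\dot R|^2$ ($\dot P\cdot\dot Q$, $\dot P\cdot(Q-P)$, etc.) is constant in $t$, leaving only the linear blending weights.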
2,514,236
<p>For example, the matrix could have finitely many rows and columns, but each row/column has uncountably many elements, and you can do the standard matrix multiplication by taking care to match up the entries with corresponding pairs of real-number indices. </p> <p>Do such objects exist, and has there been any work on them?</p>
fred goodman
124,085
<p>Two such generalizations come to mind: integral operators defined by a "kernel" $T(f)(x) = \int K(x, y)\ f(y) dy$. Such operators compose by convolving kernels $\int K_1(x, y) K_2(y, z) dy$, which is evidently a continuous generalization of matrix multiplication. The second generalization is less obviously a direct generalization, but here it is: elements in an arbitrary von Neumann factor of type II$_1$ can be regarded as continuous generalizations of finite matrices.</p> <p>But you should abandon the idea of "finitely many rows and columns" and think "continuously indexed rows and columns" instead.</p>
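The kernel picture can be made concrete by discretizing: on a grid, composing integral operators is ordinary matrix multiplication weighted by the grid spacing. A numerical sketch (the two kernels below are arbitrary choices of mine, just to have smooth examples):

```python
import numpy as np

# Discretize two integral kernels on a uniform grid over [0, 1]; composing the
# operators then becomes matrix multiplication weighted by the grid spacing dy.
n = 200
y, dy = np.linspace(0.0, 1.0, n, retstep=True)
K1 = np.exp(-np.subtract.outer(y, y)**2)   # K1(x, y); an arbitrary smooth kernel
K2 = np.cos(np.subtract.outer(y, y))       # K2(y, z); likewise arbitrary
K12 = K1 @ K2 * dy                         # ~ integral of K1(x, y) K2(y, z) dy

# Spot-check one entry against a finer quadrature of the defining integral.
i, j = 50, 120
yf = np.linspace(0.0, 1.0, 4001)
f = np.exp(-(y[i] - yf)**2) * np.cos(yf - y[j])
fine = np.sum((f[1:] + f[:-1]) / 2 * np.diff(yf))   # trapezoid rule
```

As the grid is refined, `K1 @ K2 * dy` converges to the convolved kernel $\int K_1(x,y)K_2(y,z)\,dy$, which is exactly the sense in which kernel composition generalizes matrix multiplication.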
39,466
<p>I could not solve this problem:</p> <blockquote> <p>Prove that for a non-Archimedean field $K$ with completion $L$, $$\left\{|x|\in\mathbb R \mid x\in K\right\} =\left\{|x|\in\mathbb R \mid x\in L\right\}$$</p> </blockquote> <p>I considered a Cauchy sequence in $K$ whose norms have limit $l$, but I could not construct an element of $K$ with norm $l$ from the sequence.</p> <p>Will anyone please show how to prove it?</p>
t.b.
5,363
<p>If $|x| \lt 1$ then $\frac{1}{1-|x|} = \sum_{n = 0}^{\infty} |x|^{n}$ by the summation formula for the geometric series. Now use this, the triangle inequality and the assumption that $|a_{n}| \leq C$ for all $n$ to estimate your given series from above; hence the series converges absolutely for $|x| \lt 1$.</p> <p>Alternatively, you can easily show that the radius of convergence $\rho^{-1} = \limsup_{n \to \infty} \sqrt[n]{|a_n|}$ satisfies $\rho^{-1} \leq 1$, since $\sqrt[n]{C} \; \xrightarrow{n \to \infty}\; 1$ for all $C \gt 0$. If you look at the proof of this formula for the radius of convergence (usually called the <a href="http://en.wikipedia.org/wiki/Cauchy%2DHadamard_theorem" rel="nofollow">Cauchy-Hadamard theorem</a>), you'll see that this essentially comes down to the same as the first paragraph: a comparison with a geometric series which is known to converge.</p>
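The comparison in the first paragraph is easy to see numerically: partial sums of $\sum a_n x^n$ with $|a_n| \le C$ are dominated by the geometric bound $C/(1-|x|)$ and settle down fast. A quick sketch (the coefficients are random, purely to have something bounded):

```python
import numpy as np

rng = np.random.default_rng(1)
C, x, N = 5.0, 0.9, 2000
a = rng.uniform(-C, C, size=N)             # arbitrary coefficients with |a_n| <= C

partial = np.cumsum(a * x**np.arange(N))   # partial sums of sum a_n x^n
bound = C / (1 - x)                        # the geometric-series majorant C/(1-|x|)
tail = np.abs(partial - partial[-1])       # distance from the (near-)limit
```

Every partial sum stays inside the geometric bound, and the tail decays like $C|x|^n/(1-|x|)$, exactly as the comparison argument predicts.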
2,795,777
<p>I encountered this problem in my linear algebra homework (Linear Algebra with Applications 5th Ed 1.3.44):</p> <p>Consider an $n \times m$ matrix $A$ such that $n &gt; m$. Show there is a vector $b$ in $\mathbb{R}^{n}$ such that the system $Ax=b$ is inconsistent.</p> <p>I have a strong intuition as to why this is true, because the transformation matrix maps a vector in $\mathbb{R}^{m}$ to $\mathbb{R}^{n}$, so it is going from a lower dimension to a higher one. When the $m$ components of $x$ vary, they at most parameterize an $m$-dimensional subspace of $\mathbb{R}^{n}$. However, my "proof" (which is included below) feels very handwavy and sloppy. It may also be incorrect in a number of places. I'd appreciate it if I could get some pointers on how to formalize proofs of this type a little more, so they are rigorous enough to write on a homework/test, perhaps illustrated with this example.</p> <p>My proof:</p> <p>Consider the case where $A$ has at least $m$ linearly independent row vectors. Using elementary row operations, rearrange $A$ into $A'$, so that these $m$ row vectors are the first $m$ rows. $b'$ will refer to the vector $b$ under the same rearrangement of rows. If we place the first $m$ rows in reduced row echelon form using only elementary operations on the first $m$ rows, the augmented matrix $[A'|b']$ will have the following form, where $x_{i}$ is the $i$-th element of the solution vector $x$.</p> <p>\begin{bmatrix} 1 &amp; 0 &amp; \dots &amp; 0 &amp; x_{1} \\ 0 &amp; 1 &amp; \dots &amp; 0 &amp; x_{2}\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ 0 &amp; 0 &amp; \dots &amp; 1 &amp; x_{m} \\ a'_{m+1, 1} &amp; a'_{m+1, 2} &amp; \dots &amp; a'_{m+1, m} &amp; b'_{m+1} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ a'_{n,1} &amp; a'_{n, 2} &amp; \dots &amp; a'_{n,m} &amp; b'_{n} \end{bmatrix}</p> <p>Now consider the $(m+1)$-th row. 
To eliminate the coefficients in this row, we would need $x_{m+1} = b'_{m+1} - \sum_{i=1}^{m} x_{i}\cdot a'_{m+1,i}$, because eliminating each coefficient involves scaling by $a'_{m+1,i}$ and then subtracting. The system is inconsistent whenever $x_{m+1} \neq 0$, so we choose any $b'_{m+1}$ for which this inequality holds (there are infinitely many) to find a $b'$ which makes $A'x=b'$ inconsistent. Then, unswap the rows to turn our $b'$ back into $b$, and we have found a vector which makes our system inconsistent.</p>
quasi
400,434
<p>The matrix $$ A-xI= \begin{bmatrix}-x &amp; 1 &amp;0 &amp;\ldots &amp; 0 &amp;0\\ 0 &amp; -x &amp;1 &amp;\ldots &amp;0 &amp;0 \\ \vdots &amp; \vdots &amp; \vdots &amp; &amp; \vdots &amp;\vdots \\ 0 &amp; 0 &amp;0 &amp;\ldots &amp;-x &amp;1 \\ 10^{10} &amp; 0 &amp;0 &amp;\ldots &amp;0 &amp; -x\end{bmatrix} _{10 \times 10} $$ has only two generalized diagonals not containing a zero:</p> <ul> <li>The main diagonal, with signed product $x^{10}$.$\\[4pt]$ <li>The off diagonal (consisting of all ones), together with the entry $10^{10}$, with signed product $-10^{10}$. </ul> <p>It follows that the characteristic polynomial of $A$ is $x^{10}-10^{10}$, which has only two real roots, namely $\pm 10$.</p>
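The characteristic-polynomial claim is easy to cross-check numerically (a sanity check with numpy, not part of the proof): the eigenvalues should be the ten 10th roots of $10^{10}$, all of modulus $10$, with exactly two of them real.

```python
import numpy as np

# Build the 10x10 matrix from the answer: ones on the superdiagonal
# and 10^10 in the bottom-left corner.
n = 10
A = np.diag(np.ones(n - 1), k=1)
A[-1, 0] = 10.0**10

# Characteristic polynomial is x^10 - 10^10, so the eigenvalues are
# 10 * (10th roots of unity): all of modulus 10, exactly two real.
eigs = np.linalg.eigvals(A)
real_eigs = sorted(e.real for e in eigs if abs(e.imag) < 1e-4)
```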
649,239
<p>By <a href="http://en.wikipedia.org/wiki/Post%27s_theorem" rel="nofollow">Post's Theorem</a> we know that a set $A\subseteq\mathbf{N}$ is recursively enumerable iff it is definable by a $\Sigma_1$-formula, i.e. there exists a $\Sigma_1$-formula $\varphi(x)$ with $x$ free such that for every number $n$: \[ n\in A\longleftrightarrow \mathfrak{N}\vDash\varphi(\overline{n}) \] where $\mathfrak{N}$ is the standard model of the first-order language of Peano Arithmetic.</p> <p>I have the following question: given a r.e. set $A$ can we always find a $\Sigma_1$-formula defining it?</p>
Xoff
36,246
<p>The definition of a recursively enumerable set is that it is the domain of some partial recursive function. </p> <p>There is a primitive recursive function $\psi$ such that $\psi(n,t,x)=0$ if and only if $\phi_n(x)$ (the $n$-th partial recursive function on input $x$) halts in less than $t$ steps, and $\psi(n,t,x)=1$ otherwise. Any primitive recursive function can be defined by a $\Delta_0$-formula. Hence</p> <p>$$\phi_n(x)\mbox{ halts } \Leftrightarrow \exists t\; \psi(n,t,x)=0$$</p> <p>The existence of $\psi$ is a consequence of Kleene's <a href="http://en.wikipedia.org/wiki/Kleene%27s_T_predicate" rel="nofollow">T-predicate</a>.</p>
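The step-counting predicate can be sketched in code with a toy machine model: represent "machine $n$ on input $x$" as a Python generator whose yields are computation steps, so $\psi(n,t,x)$ just runs it for at most $t$ steps. The particular family of machines below is hypothetical, purely for illustration:

```python
# Toy model: machine n halts on x iff (n + 2) divides x, after x // (n + 2)
# steps; otherwise it runs forever. One yield = one computation step.
def machine(n, x):
    steps = 0
    while True:
        if x % (n + 2) == 0 and steps >= x // (n + 2):
            return                    # halt
        steps += 1
        yield                         # one more step

def psi(n, t, x):
    """Return 0 iff machine n halts on input x within t steps, else 1."""
    g = machine(n, x)
    for _ in range(t):
        try:
            next(g)
        except StopIteration:
            return 0
    return 1

# The r.e. set W_0 = {x : machine 0 halts on x}, detected via the Sigma_1
# condition "there exists t with psi(0, t, x) = 0" (t bounded for the demo):
W0 = {x for x in range(10) if any(psi(0, t, x) == 0 for t in range(20))}
```

The point mirrors the answer: deciding "halts within $t$ steps" is a bounded, mechanical check (here, running a generator $t$ times), while membership in the r.e. set only appears after the unbounded $\exists t$.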
3,913,032
<blockquote> <p><strong>Problem.</strong> Let <span class="math-container">$A$</span> be a non-singular <span class="math-container">$n\times n$</span> matrix and let <span class="math-container">$\Gamma=[\Gamma_1\quad\Gamma_2]$</span> be an <span class="math-container">$n\times n$</span> orthogonal matrix where <span class="math-container">$\Gamma_1$</span> is <span class="math-container">$n\times n_1$</span>, <span class="math-container">$\Gamma_2$</span> is <span class="math-container">$n\times n_2$</span> and <span class="math-container">$n=n_1+n_2$</span>. Show that <span class="math-container">$$\det(\Gamma_1^TA\Gamma_1)=\det(A)\det(\Gamma_2^TA^{-1}\Gamma_2).$$</span></p> </blockquote> <p><strong>My Attempts.</strong> Here we make use of the property of orthogonal matrix: <span class="math-container">\begin{align} \det(A)=\det(\Gamma^TA\Gamma)=\det\left(\begin{bmatrix} \Gamma_1^T \\ \Gamma_2^T \end{bmatrix}A\begin{bmatrix} \Gamma_1 &amp; \Gamma_2 \end{bmatrix}\right)=\det\left(\begin{bmatrix} \Gamma_1^TA\Gamma_1 &amp; \Gamma_1^TA\Gamma_2 \\ \Gamma_2^TA\Gamma_1 &amp; \Gamma_2^TA\Gamma_2 \end{bmatrix}\right). \end{align}</span> Since <span class="math-container">$A$</span> is non-singular, <span class="math-container">$\Gamma_1^TA\Gamma_1$</span> is also non-singular. Thus, <span class="math-container">\begin{align} \det(A)=\det(\Gamma_1^TA\Gamma_1)\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right). \end{align}</span> If the formula we want to prove is true, we would have <span class="math-container">\begin{align} 1&amp;=\det(\Gamma_2^TA^{-1}\Gamma_2)\cdot\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right) \\ &amp;=\det\left(\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right). 
\end{align}</span> Nonetheless, I have no idea how to simplify the terms in the parenthesis because I only have <span class="math-container">$\Gamma_1\Gamma_1^T+\Gamma_2\Gamma_2^T=I$</span>. Hope anyone has good suggestions.</p>
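Before hunting further for a proof, it is reassuring to confirm the identity numerically; a sketch with a random $A$ and a random orthogonal $\Gamma$ (obtained from a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n1 = 6, 2                                    # arbitrary sizes with n = n1 + n2
A = rng.standard_normal((n, n))                 # almost surely non-singular
Gamma, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
G1, G2 = Gamma[:, :n1], Gamma[:, n1:]

lhs = np.linalg.det(G1.T @ A @ G1)
rhs = np.linalg.det(A) * np.linalg.det(G2.T @ np.linalg.inv(A) @ G2)
```

As a proof hint (my suggestion, building on your computation): with $M=\Gamma^TA\Gamma$, the lower-right $n_2\times n_2$ block of $M^{-1}=\Gamma^TA^{-1}\Gamma$ is $\Gamma_2^TA^{-1}\Gamma_2$, and Jacobi's identity for complementary minors gives $\det\big((M^{-1})_{22}\big)=\det(M_{11})/\det(M)$, which is exactly the claimed formula since $\det(M)=\det(A)$.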
skh76
850,993
<p>The radius of the sphere should be equal to the radius of the cylinder face. Though you are correct about the height of the cylinder being twice the radius of the sphere.</p>
4,610,394
<p>Clearly, none of the roots are in <span class="math-container">$\mathbb{Q}$</span>, so <span class="math-container">$f(x) = x^4 + 1$</span> does not have any linear factors. Thus, the only thing left is to show that <span class="math-container">$f(x)$</span> cannot split into two quadratic factors.</p> <p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span>, so <span class="math-container">$f(x)$</span> is irreducible.</p> <p>However, I stumbled across this post <a href="https://math.stackexchange.com/questions/1249143/x4-1-reducible-over-mathbbr-is-this-possible">$x^4 + 1$ reducible over $\mathbb{R}$... is this possible?</a> with a comment suggesting that <span class="math-container">$x^4 + 1 = (x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)$</span>, which turns out to be a case that I did not fully consider. It made me realize that <span class="math-container">$\mathbb{Q}[x]$</span> being a UFD only guarantees a unique factorization into irreducible elements of <span class="math-container">$\mathbb{Q}[x]$</span> (and neither <span class="math-container">$x^2 \pm i$</span> nor <span class="math-container">$x^2 \pm \sqrt{2} x + 1$</span> lies in <span class="math-container">$\mathbb{Q}[x]$</span>), so checking a single combination of quadratic products is not sufficient.</p> <p>Therefore, what is the ideal method for checking that <span class="math-container">$x^4 + 1$</span> cannot be reduced to a product of two quadratic polynomials in <span class="math-container">$\mathbb{Q}[x]$</span>? Am I forced to just brute-force check that <span class="math-container">$x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$</span> has no rational solutions <span class="math-container">$(a,b,c,d) \in \mathbb{Q}^4$</span>?</p>
Patrick Stevens
259,262
<p>I'd say it's easiest to use <a href="https://en.wikipedia.org/wiki/Eisenstein%27s_criterion" rel="nofollow noreferrer">Eisenstein's criterion</a> after a <a href="https://en.wikipedia.org/wiki/Eisenstein%27s_criterion#Indirect_(after_transformation)" rel="nofollow noreferrer">shift</a>, although this might be a sledgehammer. <span class="math-container">$$(x+k)^4 + 1 = x^4 + 4k x^3+6k^2 x^2 + 4k^3 x + k^4 +1$$</span> So we want a prime <span class="math-container">$p$</span> dividing each of <span class="math-container">$\{4k, 6k^2, 4k^3, k^4 + 1\}$</span> but <span class="math-container">$p^2$</span> not dividing <span class="math-container">$k^4 + 1$</span>. That looks easy enough: let <span class="math-container">$p = 2$</span> and <span class="math-container">$k = 1$</span>.</p> <hr /> <p>(If you want to use a <em>lot</em> more theory and make the test a little simpler, note that the <a href="https://en.wikipedia.org/wiki/Discriminant" rel="nofollow noreferrer">discriminant</a> of <span class="math-container">$x^4+1$</span> is <span class="math-container">$256$</span>, whose only prime factor is <span class="math-container">$2$</span>, so in fact <span class="math-container">$p=2$</span> is the <em>only</em> prime that could possibly work. I used to understand why this was true, but I no longer do; it's something to do with theorem 1.3 in <a href="https://kconrad.math.uconn.edu/blurbs/gradnumthy/disc.pdf" rel="nofollow noreferrer">https://kconrad.math.uconn.edu/blurbs/gradnumthy/disc.pdf</a> . I believe it may be possible to use this theory to show that <span class="math-container">$p=2$</span> <em>does</em> work without finding <span class="math-container">$k$</span>, but that's far beyond my pay grade.)</p>
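The shift-and-Eisenstein check is easy to replay with sympy (a sanity check of the coefficients, not new mathematics):

```python
from sympy import symbols, expand, Poly

x = symbols('x')
shifted = expand((x + 1)**4 + 1)          # substitute x -> x + 1
coeffs = Poly(shifted, x).all_coeffs()    # leading coefficient first

# Eisenstein at p = 2: 2 divides every non-leading coefficient,
# 2 does not divide the leading one, and 2^2 = 4 does not divide the
# constant term.
eisenstein_ok = (coeffs[0] % 2 != 0
                 and all(c % 2 == 0 for c in coeffs[1:])
                 and coeffs[-1] % 4 != 0)
```

Since irreducibility is invariant under the substitution $x \mapsto x+1$, the check on the shifted polynomial settles the original one.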
4,610,394
<p>Clearly, none of the roots are in <span class="math-container">$\mathbb{Q}$</span>, so <span class="math-container">$f(x) = x^4 + 1$</span> does not have any linear factors. Thus, the only thing left is to show that <span class="math-container">$f(x)$</span> cannot split into two quadratic factors.</p> <p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span>, so <span class="math-container">$f(x)$</span> is irreducible.</p> <p>However, I stumbled across this post <a href="https://math.stackexchange.com/questions/1249143/x4-1-reducible-over-mathbbr-is-this-possible">$x^4 + 1$ reducible over $\mathbb{R}$... is this possible?</a> with a comment suggesting that <span class="math-container">$x^4 + 1 = (x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)$</span>, which turns out to be a case that I did not fully consider. It made me realize that <span class="math-container">$\mathbb{Q}[x]$</span> being a UFD only guarantees a unique factorization into irreducible elements of <span class="math-container">$\mathbb{Q}[x]$</span> (and neither <span class="math-container">$x^2 \pm i$</span> nor <span class="math-container">$x^2 \pm \sqrt{2} x + 1$</span> lies in <span class="math-container">$\mathbb{Q}[x]$</span>), so checking a single combination of quadratic products is not sufficient.</p> <p>Therefore, what is the ideal method for checking that <span class="math-container">$x^4 + 1$</span> cannot be reduced to a product of two quadratic polynomials in <span class="math-container">$\mathbb{Q}[x]$</span>? Am I forced to just brute-force check that <span class="math-container">$x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$</span> has no rational solutions <span class="math-container">$(a,b,c,d) \in \mathbb{Q}^4$</span>?</p>
Oscar Lanzi
248,217
<p>You can indeed try a factorization of the form</p> <p><span class="math-container">$x^4+1=(x^2+ax+b)(x^2+cx+d).$</span></p> <p>Expanding the right side and matching terms with like powers gives</p> <p><span class="math-container">$x^3$</span> terms: <span class="math-container">$a+c=0,c=-a$</span></p> <p><span class="math-container">$x^2$</span> terms: <span class="math-container">$ac+b+d=0,b+d=a^2$</span></p> <p><span class="math-container">$x^1$</span> terms: <span class="math-container">$ad+bc=a(d-b)=0,a=0$</span> or <span class="math-container">$d=b$</span></p> <p>The case <span class="math-container">$a=0$</span> leads to <span class="math-container">$(x^2+i)(x^2-i)$</span> which fails to lie in <span class="math-container">$\mathbb Q[x]$</span>. So we try <span class="math-container">$d=b$</span>, which then means <span class="math-container">$d=b=a^2/2$</span> from the matching of <span class="math-container">$x^2$</span> terms. Then matching the <span class="math-container">$x^0$</span> terms gives:</p> <p><span class="math-container">$x^0$</span> terms: <span class="math-container">$bd=b^2=1.$</span></p> <p>Then <span class="math-container">$b=a^2/2=\pm1$</span> and neither choice of the <span class="math-container">$\pm$</span> sign allows a rational value for <span class="math-container">$a$</span>. In fact only <span class="math-container">$b=+1,a=\pm\sqrt2$</span> admits a quadratic-quadratic factorization even over the reals.</p>
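The coefficient matching above can also be delegated to sympy, which solves the system over $\mathbb{C}$ and confirms that no solution is rational (a brute-force check, not a replacement for the argument):

```python
from sympy import symbols, solve

a, b, c, d = symbols('a b c d')
# coefficient matching for x^4 + 1 = (x^2 + a x + b)(x^2 + c x + d)
eqs = [a + c,            # x^3 coefficient
       a*c + b + d,      # x^2 coefficient
       a*d + b*c,        # x^1 coefficient
       b*d - 1]          # x^0 coefficient
sols = solve(eqs, [a, b, c, d], dict=True)
rational_sols = [s for s in sols if all(v.is_rational for v in s.values())]
```

The solutions found are exactly the ones traced out in the answer: $(0,\pm i,0,\mp i)$, $(\pm\sqrt2,1,\mp\sqrt2,1)$ and $(\pm i\sqrt2,-1,\mp i\sqrt2,-1)$, and none of them is rational.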
65,810
<p>Recently, I have been learning about nef line bundles. I know that when $X$ is projective or Moishezon, a line bundle $L$ over $X$ is said to be nef iff $$L.C=\int_{C}c_{1}(L)\ge 0$$ for every curve $C$ in $X$.</p> <p>Demailly gave a definition of nefness that works on an arbitrary compact complex manifold, i.e., a line bundle $L$ over $X$ is said to be nef if for every $\varepsilon &gt;0$ there exists a smooth hermitian metric $h_{\varepsilon}$ on $L$ such that its curvature $\Theta_{h_{\varepsilon}}(L)\ge -\varepsilon\omega$. For projective manifolds, Demailly's definition coincides with the above one given by integration (this is an easy consequence of Seshadri's ampleness criterion).</p> <p><strong>Question:</strong> Is this equivalence also true for Moishezon manifolds?</p> <p>I don't know of any counterexamples. If it is not true, could someone give me a counterexample?</p>
mrw
15,465
<p>What is $\omega$ here? If it is a Kähler form, then Moishezon + Kähler implies projective, and as you said the two definitions are then equivalent. </p>
55,435
<p>I've recently become interested in the elementary theory of groups due to Sela and Myasnikov-Kharlampovich's work with free groups. I'd like a good introduction to the field of the elementary theory of groups, and in particular I'd like a reference that contains examples of group properties that cannot be read off from a group's elementary theory. For example, it seems that the statements "G is virtually abelian" or "G is hopfian" could not be expressed with first-order sentences, but I don't have enough knowledge yet to determine whether such things are true. Does anyone know such a reference? Or, specifically, does someone know a proof of the fact that being hopfian (or some other group property) can't be read off from a group's elementary theory? Thanks!</p>
HJRW
1,463
<p>I learned a lot from reading Bestvina and Feighn's article <a href="http://arxiv.org/abs/0809.0467" rel="nofollow">Notes on Sela's work: Limit groups and Makanin-Razborov diagrams</a>. It's not a broad introduction to elementary theory, but it does express some of Sela's ideas quite succinctly. You may need a background in geometric group theory (specifically, in understanding laminations on 2-complexes or group actions on $\mathbb{R}$-trees) to get the most out of it.</p> <p>Regarding the question of whether or not the Hopf property is elementary: the answer is obviously "no" if you allow infinitely generated examples. Indeed, consider the free group on countably many generators, $F_\infty$. The elementary theory is completely determined by the set of finitely generated subgroups, so $F_\infty$ is elementarily equivalent to $F_2$. But $F_2$ is Hopfian and $F_\infty$ is not.</p> <p><strong>EDIT:</strong> I'm getting a little nervous about the claim that the elementary theory is determined by the list of finitely generated subgroups. However, Sela and Kharlampovich--Miasnikov proved that the natural inclusions $F_n\subseteq F_{n+1}$ are elementary embeddings, from which it does indeed follow that $F_\infty$ is elementarily equivalent to $F_2$.</p> <p>I don't know a finitely generated example, although I agree that one must surely exist.</p>
2,966,392
<p>Suppose <span class="math-container">$\lim_{n\rightarrow\infty }z_n=z$</span>.<br> Prove <span class="math-container">$\lim_{n\rightarrow\infty}\operatorname{Re}(z_n)=\operatorname{Re}(z)$</span></p> <p>where <span class="math-container">$z\in\mathbb{C}$</span> and <span class="math-container">$z_n$</span> is a complex sequence.</p> <p><b>My work</b></p> <p>Let <span class="math-container">$\epsilon &gt;0$</span>. </p> <p>By hypothesis there exists <span class="math-container">$N\in\mathbb{N}$</span> such that if <span class="math-container">$n\geq N$</span> then <span class="math-container">$|z_n-z|&lt;\epsilon.$</span></p> <p>We know <span class="math-container">$|\operatorname{Re}(z_n)|&lt;|z_n|$</span> and <span class="math-container">$|\operatorname{Re}(z)|&lt;|z|$</span>.</p> <p>Then</p> <p><span class="math-container">$|\operatorname{Re}(z_n)-\operatorname{Re}(z)|\leq|\operatorname{Re}(z_n)|+|\operatorname{Re}(z)|&lt;|z_n|+|z|$</span></p> <p>Here I'm a little stuck. Can someone help me?</p>
T_M
562,248
<p>If <span class="math-container">$|\operatorname{Re}(w)| \leq |w|$</span> is always true… then apply it to <span class="math-container">$z_n - z$</span>.</p>
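A tiny numeric illustration of the hint, with a hypothetical sequence $z_n \to z$: the inequality $|\operatorname{Re}(w)| \le |w|$ applied to $w = z_n - z$ forces the real parts to converge too.

```python
import numpy as np

z = 1.0 + 2.0j
n = np.arange(30)
zn = z + 0.5**n * np.exp(1j * n)      # a hypothetical sequence with z_n -> z

# the key inequality |Re(w)| <= |w|, applied to w = z_n - z:
lhs = np.abs((zn - z).real)
rhs = np.abs(zn - z)
```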
3,623,432
<p>Say I have two independent normal distributions (both with <span class="math-container">$\mu=0$</span> and the same <span class="math-container">$\sigma$</span>), one restricted to positive values and one to negative values, so their pdfs look like:</p> <p><span class="math-container">$p(x, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi} \sigma} \exp\left(-\frac {x^2}{2 \sigma^2}\right), \forall x&gt;0$</span> and</p> <p><span class="math-container">$p(y, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi} \sigma} \exp\left(-\frac{y^2}{2 \sigma^2}\right), \forall y&lt;0$</span>.</p> <p>If I draw samples from both and then take the average <span class="math-container">$ = \frac{x+y}{2}$</span>, I would expect the mean of this average to be zero, but I would expect its variance to be less than the variance of the individual distributions, because averaging a positive and a negative number would "squeeze" the final distribution. </p> <p>I think the correct way to calculate it is using the following integral:</p> <p><span class="math-container">$Var( \frac{x+y}{2}) = \frac{2}{ \pi \sigma^2} \int^{\infty}_{0} \int^{0}_{- \infty} \frac{(x + y)^2}{4}\exp\left(-\frac {x^2}{2 \sigma^2}\right) \exp\left(-\frac {y^2}{2 \sigma^2}\right) dx dy$</span></p> <p>But I am not sure if I am over-simplifying it. Does that logic seem correct, or am I missing something?</p> <p>Thank you.</p> <p>Edited to mention independence and correct formula mistakes.</p>
Erik Cristian Seulean
615,501
<p>I think you can use moment generating functions to get <span class="math-container">$E\big(\big(\frac{X+Y}{2}\big)^2\big)$</span> and <span class="math-container">$E\big(\frac{X+Y}{2}\big)$</span>, from which the variance follows. Since <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent, the MGF factors as a product:</p> <p><span class="math-container">$$M_\frac{X+Y}{2}(t)= E(e^{t\frac{X+Y}{2}}) = E\big(e^{\frac{t}{2}X + \frac{t}{2}Y}\big) = E\big(e^{\frac{t}{2}X}\big)\,E\big(e^{\frac{t}{2}Y}\big)$$</span></p> <p>You can calculate the two factors separately. Completing the square in the exponent: </p> <p><span class="math-container">$$E\big(e^{\frac{t}{2}X}\big) = \frac{\sqrt{2}}{\sqrt{\pi}\,\sigma}\int_0^\infty e^{\frac{t}{2}x}e^{-\frac{x^2}{2\sigma^2}}\,dx = e^{\frac{t^2\sigma^2}{8}}\cdot\frac{\sqrt{2}}{\sqrt{\pi}\,\sigma}\int_0^\infty e^{-\frac{(x-t\sigma^2/2)^2}{2\sigma^2}}\,dx = 2\,e^{\frac{t^2\sigma^2}{8}}\,\Phi\Big(\frac{t\sigma}{2}\Big)$$</span> </p> <p>where <span class="math-container">$\Phi$</span> is the standard normal CDF. Note that the last integral is over a normal density centred at <span class="math-container">$t\sigma^2/2$</span>; by symmetry it equals half of the total mass only when <span class="math-container">$t=0$</span>, which is why the <span class="math-container">$\Phi$</span> factor appears for general <span class="math-container">$t$</span>. (At <span class="math-container">$t=0$</span> the expression correctly reduces to <span class="math-container">$1$</span>.)</p> <p>In a similar manner you can compute <span class="math-container">$E\big(e^{\frac{t}{2}Y}\big)$</span>, where the integral runs from <span class="math-container">$-\infty$</span> to <span class="math-container">$0$</span>; by symmetry it equals <span class="math-container">$2\,e^{\frac{t^2\sigma^2}{8}}\,\Phi\Big(-\frac{t\sigma}{2}\Big)$</span>. </p> <p>After that you can take the first and second derivatives at <span class="math-container">$t=0$</span> to get the values required to calculate the variance. </p>
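Whatever route you take analytically, a Monte Carlo check is cheap. Using the half-normal moments $E[X]=\sigma\sqrt{2/\pi}$ and $E[X^2]=\sigma^2$ together with independence, the closed form is $\operatorname{Var}\big(\frac{X+Y}{2}\big)=\frac{\sigma^2}{2}\big(1-\frac{2}{\pi}\big)$; a sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n = 1.3, 1_000_000
x = np.abs(rng.normal(0.0, sigma, n))     # positive half-normal sample
y = -np.abs(rng.normal(0.0, sigma, n))    # independent negative half-normal sample
avg = (x + y) / 2

# Closed form from E[X] = sigma*sqrt(2/pi), E[X^2] = sigma^2 and independence:
theory = sigma**2 * (1 - 2/np.pi) / 2
```

The empirical mean is indeed near zero and the empirical variance matches the closed form, which is strictly smaller than the half-normal variance $\sigma^2(1-2/\pi)$, confirming the "squeeze" intuition.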
3,174,339
<p>Let <span class="math-container">$M$</span> be a <span class="math-container">$C^{\infty}$</span> manifold. Let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$M$</span>. Now take a subset <span class="math-container">$V \subseteq U$</span> that is closed with respect to the subspace topology on <span class="math-container">$U$</span>. Does it then follow that <span class="math-container">$V$</span> is closed in <span class="math-container">$M$</span>? Thank you. </p> <p>PS I would further like to add more conditions, as it appears that I may have simplified the problem too much, as shown by Kavi Rama Murthy below. We further let <span class="math-container">$(U, \phi)$</span> be a local chart with <span class="math-container">$p \in U$</span>, and <span class="math-container">$V = \phi^{-1}(\overline{B(\phi(p), \varepsilon)})$</span> for <span class="math-container">$\varepsilon &gt; 0$</span> small enough that <span class="math-container">$\overline{B(\phi(p), \varepsilon)} \subseteq \phi(U)$</span>. </p>
wjmolina
25,134
<p>Whether <span class="math-container">$f:X\to Y$</span>, and in particular an injection, is an open map depends on the topologies, e.g., if <span class="math-container">$Y$</span> has the discrete topology, then any <span class="math-container">$f$</span> is an open map. You can check that if <span class="math-container">$f$</span> is injective onto an open subset of <span class="math-container">$Y$</span> and <span class="math-container">$f^{-1}$</span> is continuous, then <span class="math-container">$f$</span> is an open map.</p>
3,174,339
<p>Let <span class="math-container">$M$</span> be a <span class="math-container">$C^{\infty}$</span> manifold. Let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$M$</span>. Now take a subset <span class="math-container">$V \subseteq U$</span> that is closed with respect to the subspace topology on <span class="math-container">$U$</span>. Does it then follow that <span class="math-container">$V$</span> is closed in <span class="math-container">$M$</span>? Thank you. </p> <p>PS I would further like to add more conditions, as it appears that I may have simplified the problem too much, as shown by Kavi Rama Murthy below. We further let <span class="math-container">$(U, \phi)$</span> be a local chart with <span class="math-container">$p \in U$</span>, and <span class="math-container">$V = \phi^{-1}(\overline{B(\phi(p), \varepsilon)})$</span> for <span class="math-container">$\varepsilon &gt; 0$</span> small enough that <span class="math-container">$\overline{B(\phi(p), \varepsilon)} \subseteq \phi(U)$</span>. </p>
Bijco
626,084
<p>Another example: take a set <span class="math-container">$X$</span> with at least two elements. Let <span class="math-container">$X_1$</span> be <span class="math-container">$X$</span> equipped with the discrete topology and <span class="math-container">$X_2$</span> be <span class="math-container">$X$</span> equipped with the trivial topology. Then the identity from <span class="math-container">$X_1$</span> to <span class="math-container">$X_2$</span> is clearly a continuous injection (it is even a bijection), but it clearly does not preserve open sets.</p> <p>I don't think there is a "minimum" requirement for a map to be open; it depends mostly on the topologies. (But homeomorphisms are always open.)</p>
244,333
<p>Consider this equation: </p> <p><span class="math-container">$$\sqrt{\left( \frac{dy\cdot u\,dt}{L}\right)^2+(dy)^2}=v\,dt,$$</span></p> <p>where <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed? </p> <p>This equation arises out of the following problem: </p> <p>A cat sitting in a field suddenly sees a standing dog. To save its life, the cat runs away in a straight line with speed <span class="math-container">$u$</span>. Without any delay, the dog starts running with constant speed <span class="math-container">$v&gt;u$</span> to catch the cat. Initially, <span class="math-container">$v$</span> is perpendicular to <span class="math-container">$u$</span> and <span class="math-container">$L$</span> is the initial separation between the two. If the dog always changes its direction so that it is always heading directly at the cat, find the time the dog takes to catch the cat in terms of <span class="math-container">$v, u$</span> and <span class="math-container">$L$</span>. <hr> See my solution below: </p> <p>Let the dog initially be at <span class="math-container">$D$</span> and the cat at <span class="math-container">$C$</span>, and after time <span class="math-container">$dt$</span> let them be at <span class="math-container">$D'$</span> and <span class="math-container">$C'$</span> respectively. The dog's velocity always points towards the cat.</p> <p>Let <span class="math-container">$DA = dy, \;AD' = dx$</span></p> <p>Let <span class="math-container">$CC'=u\,dt,\;DD' = v\,dt$</span>; as the interval is very small, <span class="math-container">$DD'$</span> can be taken as a straight line.</p> <p>Also we have <span class="math-container">$\frac{DA}{DC}= \frac{AD'}{ CC'}$</span> by similar triangles.</p> <p><span class="math-container">$\frac{dy}{L}= \frac{dx}{u\,dt}\\ dx = \frac{dy\cdot u\,dt}{L}$</span></p> <p><span class="math-container">$\sqrt{(dx)^2 + (dy)^2} = DD' = v\,dt \\ \sqrt{\left(\frac{dy\cdot u\,dt}{L}\right)^2 + (dy)^2} = v\,dt $</span></p> <p>Here <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed?<img src="https://i.stack.imgur.com/Ji3Fc.jpg" alt="enter image description here"></p>
pokrate
50,639
<p>Is this approach right or wrong, and why? </p> <p><img src="https://i.stack.imgur.com/I9nPW.gif" alt="enter image description here"></p>
244,333
<p>Consider this equation : </p> <p><span class="math-container">$$\sqrt{\left( \frac{dy\cdot u\,dt}{L}\right)^2+(dy)^2}=v\,dt,$$</span></p> <p>where <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span> , and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed ? </p> <p>This equation arises out of following problem : </p> <p>A cat sitting in a field suddenly sees a standing dog. To save its life, the cat runs away in a straight line with speed <span class="math-container">$u$</span>. Without any delay, the dog starts with running with constant speed <span class="math-container">$v&gt;u$</span> to catch the cat. Initially, <span class="math-container">$v$</span> is perpendicular to <span class="math-container">$u$</span> and <span class="math-container">$L$</span> is the initial separation between the two. If the dog always changes its direction so that it is always heading directly at the cat, find the time the dog takes to catch the cat in terms of <span class="math-container">$v, u$</span> and <span class="math-container">$L$</span>. <hr> See my solution below : </p> <p>Let initially dog be at <span class="math-container">$D$</span> and cat at <span class="math-container">$C$</span> and after time <span class="math-container">$dt$</span> they are at <span class="math-container">$D'$</span> and <span class="math-container">$C'$</span> respectively. 
The dog's velocity is always pointing towards the cat.</p> <p>Let <span class="math-container">$DA = dy, \;AD' = dx$</span></p> <p>Let <span class="math-container">$CC'=u\,dt,\;DD' = v\,dt$</span>; as the interval is very small, <span class="math-container">$DD'$</span> can be taken as a straight line.</p> <p>Also we have <span class="math-container">$\frac{DA}{DC}= \frac{AD'}{ CC'}$</span> by similar triangles.</p> <p><span class="math-container">$\frac{dy}{L}= \frac{dx}{u\,dt}\\ dx = \frac{u\,dt\,dy}{L}$</span></p> <p><span class="math-container">$\sqrt{(dx)^2 + (dy)^2} = DD' = v\,dt \\ \sqrt{\left(\frac{u\,dt\,dy}{L}\right)^2 + (dy)^2} = v\,dt $</span></p> <p>Here <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed?<img src="https://i.stack.imgur.com/Ji3Fc.jpg" alt="enter image description here"></p>
Maxim
491,644
<p>Suppose the cat is running along the <span class="math-container">$y$</span> axis. If <span class="math-container">$\rho$</span> is the distance between the dog and the cat, <span class="math-container">$\rho_y$</span> is the distance between them along the <span class="math-container">$y$</span> axis and <span class="math-container">$\alpha = \alpha(t)$</span> is the angle between the trajectories, then <span class="math-container">$$\frac {d\rho} {dt} = u \cos \alpha - v, \\ \frac {d\rho_y} {dt} = u - v \cos \alpha, \\ \frac d {dt} (v \rho + u \rho_y) = u^2 - v^2, \\ v \rho + u \rho_y = (u^2 - v^2) t + v L.$$</span> At the moment of capture <span class="math-container">$\rho = \rho_y = 0$</span>, so <span class="math-container">$$t = \frac {v L} {v^2 - u^2}.$$</span></p>
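<p>As a numerical sanity check of this closed form (the Euler integrator, the step size and the sample values <span class="math-container">$u=3$</span>, <span class="math-container">$v=5$</span>, <span class="math-container">$L=4$</span> are our own choices, not part of the argument):</p>

```python
import math

def pursuit_time(u, v, L, dt=1e-5, tol=1e-3):
    """Integrate the pursuit numerically: the cat runs up the y axis at
    speed u from the origin, the dog starts at (L, 0) and always heads
    straight at the cat with speed v > u.  Returns the (approximate)
    catch time, declared when the separation drops below tol."""
    cat_y = 0.0
    dog_x, dog_y = L, 0.0
    t = 0.0
    while True:
        dx, dy = 0.0 - dog_x, cat_y - dog_y
        dist = math.hypot(dx, dy)
        if dist < tol:
            return t
        # dog moves one Euler step toward the cat's current position
        dog_x += v * dx / dist * dt
        dog_y += v * dy / dist * dt
        cat_y += u * dt
        t += dt

u, v, L = 3.0, 5.0, 4.0
print(pursuit_time(u, v, L))         # simulated catch time
print(v * L / (v**2 - u**2))         # closed form: 20/16 = 1.25
```

The simulated time agrees with <span class="math-container">$vL/(v^2-u^2)$</span> to within the stopping tolerance.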
3,327,435
<p>I have no clue for the following problem: </p> <blockquote> <p>Let <span class="math-container">$G$</span> be a finite group, <span class="math-container">$p$</span> a prime number, and <span class="math-container">$S$</span> a Sylow <span class="math-container">$p$</span>-subgroup of <span class="math-container">$G$</span>. Let <span class="math-container">$N$</span> be the normalizer of <span class="math-container">$S$</span> inside <span class="math-container">$G$</span>. Let <span class="math-container">$X, Y$</span> be two subsets of <span class="math-container">$Z(S)$</span> (the center of <span class="math-container">$S$</span>) such that <span class="math-container">$\exists g \in G, gXg^{-1}= Y$</span>. Then we need to show that <span class="math-container">$\exists n \in N$</span> such that <span class="math-container">$gxg^{-1} = nxn^{-1}, \forall x \in X$</span>. </p> </blockquote> <p>So I guess first I can assume <span class="math-container">$X, Y$</span> to be subgroups by taking the smallest subgroups containing them. Then I have no clue. </p>
jgon
90,543
<p>I upvoted Arturo's answer, but I'm going to write out a more complete answer because I was also struggling with this question.</p> <p>We'll follow Arturo's advice, and make the changes Derek Holt suggests in the comments.</p> <p><span class="math-container">$S$</span> centralizes <span class="math-container">$X$</span>, so <span class="math-container">$gSg^{-1}$</span> centralizes <span class="math-container">$Y=gXg^{-1}$</span>. However <span class="math-container">$S$</span> also centralizes <span class="math-container">$Y$</span>. Thus <span class="math-container">$S$</span> and <span class="math-container">$gSg^{-1}$</span> are both Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$C(Y)$</span>. Thus there is <span class="math-container">$c\in C(Y)$</span> with <span class="math-container">$S=cgSg^{-1}c^{-1}$</span>, since Sylow <span class="math-container">$p$</span>-subgroups are conjugate. Since <span class="math-container">$S=(cg)S(cg)^{-1}$</span>, <span class="math-container">$cg\in N(S)$</span>. Then let <span class="math-container">$n=cg$</span>, <span class="math-container">$x\in X$</span>, and <span class="math-container">$y=gxg^{-1}$</span>. Then we have <span class="math-container">$$nxn^{-1}=cgx(cg)^{-1} = cgxg^{-1}c^{-1} = cyc^{-1}=cc^{-1}y=y=gxg^{-1},$$</span> as desired.</p>
954,130
<p>I have to prove that the function $f(n)=3n^2-n+4$ is $O(n^2)$. So I use the definition of big oh:</p> <blockquote> <p>$f(n)$ is big oh $g(n)$ if there exist an integer $n_0$ and a constant $c&gt;0$ such that for all integers $n\geq n_0$, $f(n)\leq cg(n)$.</p> </blockquote> <p>And it doesn't matter what those constants are. So I will choose $c=1$</p> <p>\begin{align} f(n)&amp;\leq cg(n)\\ 3n^2-n+4&amp;\leq 1*n^2\\ 3n^2-n+4&amp;\leq n^2\\ 0&amp;\leq n^2-3n^2+n-4\\ 0&amp;\leq -2n^2+n-4 \end{align}</p> <p>Now I am having trouble figuring out $n_0$ from here. In the book he simplified the polynomial to its roots and logically determined $n_0$. It looks like this polynomial can't be broken down into a $(a\pm b)(c\pm d)$ form. </p>
Leonardo Castro
845,603
<p>You can think of the equation of motion</p> <p><span class="math-container">$s(t) = s(0) + v(0) t + \frac{a}{2} t^2$</span></p> <p>as a Taylor expansion of the function <span class="math-container">$s(t)$</span> around <span class="math-container">$t=0$</span>, exact for constant acceleration. The coefficients of this series are <span class="math-container">$s(0)$</span>, <span class="math-container">$v(0)$</span> and <span class="math-container">$a$</span> (<span class="math-container">$=a(0)$</span>) and have different units.</p> <p>When you expand a function <span class="math-container">$f(x)$</span> in a Taylor series, the coefficients are given by</p> <p><span class="math-container">$a_n = \frac{1}{n!} \left[\frac{d^n f(x)}{dx^n}\right]_{x=x_0}$</span></p> <p>and so they have units of <span class="math-container">$f$</span>, <span class="math-container">$f/x$</span>, <span class="math-container">$f/x^2$</span>, <span class="math-container">$f/x^3$</span>, etc.</p> <p>So, the correct expansion of <span class="math-container">$\exp(x)$</span> for <span class="math-container">$x$</span> in meters is</p> <p><span class="math-container">$\exp(x) = 1\,\mathrm{m} + x + \frac{1}{2\,\mathrm{m}} x^2 + \frac{1}{6\,\mathrm{m}^2} x^3 + \dots $</span></p> <p>For instance,</p> <p><span class="math-container">$\exp(2\,\mathrm{m}) = 1\,\mathrm{m} + 2\,\mathrm{m} + \frac{1}{2\,\mathrm{m}}\, 4\,\mathrm{m}^2 + \frac{1}{6\,\mathrm{m}^2}\, 8\,\mathrm{m}^3 + \dots $</span></p> <p>If you want to keep the expansion as simple as</p> <p><span class="math-container">$\exp(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots $</span>,</p> <p>you'd better have a dimensionless <span class="math-container">$x$</span>.</p> <p>Besides, a unit should be treated as a multiplication factor. Different functions treat multiplied factors in different ways. 
For example,</p> <p><span class="math-container">$\log(4.18\,\mathrm{J}) = \log(4.18) + \log(\mathrm{J})$</span> and</p> <p><span class="math-container">$e^{4.18\,\mathrm{J}} = (e^{\mathrm{J}})^{4.18} = (e^{4.18})^{\mathrm{J}}$</span>.</p> <p>That's too complicated and usually avoided in formulas of the natural sciences.</p> <p>In some equations involving entropy, the logarithm is split as in <span class="math-container">$\log(ab) = \log(a) + \log(b)$</span>, such that the arguments <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are dimensional. The Wikipedia page on the <a href="https://en.wikipedia.org/wiki/Sackur%E2%80%93Tetrode_equation" rel="nofollow noreferrer">Sackur–Tetrode equation</a> has an example of such equations, but also a justification for it:</p> <blockquote> <p>Strictly speaking, the use of dimensioned arguments to the logarithms is incorrect, however their use is a &quot;shortcut&quot; made for simplicity. If each logarithmic argument were divided by an unspecified standard value expressed in terms of an unspecified standard mass, length and time, these standard values would cancel in the final result, yielding the same conclusion. The individual entropy terms will not be absolute, but will rather depend upon the standards chosen, and will differ with different standards by an additive constant.</p> </blockquote>
947,191
<p>Show that $\sum _{n=1 } ^{\infty } (n \pi + \pi/2)^{-1 } $ diverges.</p> <p>Both the root test and the ratio test is inconclusive. Can you suggest a series for the series comparison test?</p> <p>Thanks in advance!</p>
Umberto P.
67,536
<p>Use the <em>limit comparison test</em> with $\sum_{n=1}^\infty n^{-1}$.</p>
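<p>For completeness, the limit that the test requires is a one-line computation:</p>

```latex
\lim_{n\to\infty} \frac{(n\pi + \pi/2)^{-1}}{n^{-1}}
  = \lim_{n\to\infty} \frac{n}{n\pi + \pi/2}
  = \frac{1}{\pi} \in (0,\infty),
```

<p>so the given series diverges together with the harmonic series <span class="math-container">$\sum_{n=1}^\infty n^{-1}$</span>.</p>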
947,191
<p>Show that $\sum _{n=1 } ^{\infty } (n \pi + \pi/2)^{-1 } $ diverges.</p> <p>Both the root test and the ratio test is inconclusive. Can you suggest a series for the series comparison test?</p> <p>Thanks in advance!</p>
Ishfaaq
109,161
<p>It is rarely fruitful to go for the Root and Ratio Tests at the onset. A much more elegant method is to notice that:</p> <p>$$ \dfrac{1}{n \pi + \frac{\pi }{2}} = \dfrac{2 }{ 2n \pi + \pi } = \dfrac{2}{\pi } \cdot \dfrac{1}{2n + 1} \ge \dfrac{2}{\pi } \cdot \dfrac{1}{2n + n} = \dfrac{2}{3 \pi } \cdot \dfrac{1}{n } $$</p> <p>Now use the fact that the harmonic series $ \sum \dfrac 1 n $ diverges. </p>
653,319
<p>I understand that $\lim_{\theta\to0}(\sin(\theta)/\theta) = 1$, but what is $x$ when $\lim_{\theta\to0}(\tan(\theta)/\theta) = x$, where $x$ is a real constant value? </p> <p>Please help me, I will be eternally grateful :D</p>
user124200
124,200
<p>$x=1$</p> <p>Let $\theta=y$. Then $$\tan y=\frac{\sin y}{\cos y},$$ and since $\lim_{y \to 0} \frac{\sin y}{y}=1$ and $\cos 0=1$, we get $$\lim_{y\to 0}\frac{\tan y}{y}=\lim_{y\to 0}\frac{\sin y}{y}\cdot\frac{1}{\cos y}=1.$$</p>
1,529,827
<p>In order to make it clear, I ask three questions:</p> <ol> <li>Does $|2^m - 3^n|&lt;10^6$ have any integer solutions for $m&gt;20$?</li> <li>Is $ \liminf |2^m - 3^n|$ infinite?</li> <li>Is $ \liminf |2^m - 3^n|/m$ finite?</li> </ol>
Gottfried Helms
1,714
<p>The problem can be examined empirically with the use of the continued fraction of <span class="math-container">$ \beta = \log_2(3)$</span>. For each <span class="math-container">$n$</span> we can find the optimal <span class="math-container">$m_n$</span> by <span class="math-container">$m_{n,\text{lo}}=\lfloor n \cdot \beta \rfloor$</span> resp. <span class="math-container">$m_{n,\text{hi}}=\lceil n \cdot \beta \rceil$</span>, and the optimal <span class="math-container">$n$</span> can be found from the convergents of the continued fraction of <span class="math-container">$\beta$</span>.<br> The best results - as I assume - are collected in the work of Wadim Zudilin; for instance you can find a preview of "A new lower bound for ||(3/2)^k||" at W. Zudilin's homepage. </p> <p>I have a small discussion and a couple of - I think: instructive - pictures at my webspace which indicate a dependence of the smallest differences <span class="math-container">$|2^{m_n}-3^n|$</span> on the parameter <span class="math-container">$n$</span> (as also indicated by @Wojowu's answer), up to <span class="math-container">$n \gt 10^{1000}$</span> - maybe this is helpful or at least useful for visual intuition. 
Here is one picture <a href="https://i.stack.imgur.com/rEgTc.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rEgTc.gif" alt="(bild)"></a> </p> <p>A short review of the formulae empirically up to <span class="math-container">$n \approx 3.78 \cdot 1e29$</span> (which is pretty near the right bottom edge in the above picture) suggests, that possibly <span class="math-container">$ \lim \sup g(n)=\Large{{|2^{m_n}-3^n|\over 3^n }} \small \cdot 3n $</span> might be finite, see the following table</p> <pre><code> n g(n) 1.00000000000 1.00000000 5.00000000000 0.80246914 41.0000000000 1.41804877 306.000000000 0.93889533 15601.0000000 0.85156907 79335.0000000 0.87222344 190537.000000 0.03687320 10781274.0000 0.39482048 171928773.000 0.92285323 397573379.000 0.12624166 6586818670.00 0.20003757 137528045312. 0.37077074 5.40930392448E12 1.06988637 1.15717186888E13 0.44771035 4.31166034847E14 1.72524065 5.75093460288E15 1.83493581 1.30441933148E17 0.70175324 3.97560349370E17 0.18091377 4.64028225930E18 0.37542422 2.74441332064E19 0.83355872 7.76921173599E19 0.79349425 2.05632218873E20 0.05490846 3.11509610182E22 0.03729717 1.75219014922E24 1.03849316 1.71940427683E26 1.32651409 3.47354084703E26 1.21289278 2.43842505160E27 1.38110746 1.13220034315E28 0.80968184 8.81376024008E28 0.50404350 3.78155609260E29 0.35681677 </code></pre> <p>This is the link to the essay: <a href="http://go.helms-net.de/math/collatz/2hochS_3hochN_V2.htm" rel="nofollow noreferrer">(essay)</a> </p> <p><strong><em>[update]</em></strong>: Here is a picture of the function <span class="math-container">$ g(n)=\Large {{2^{m_{n,hi}} - 3^n \over 3^n }} \small \cdot 3n $</span> which seems to indicate a bounded interval which does not vanish in the limit of <span class="math-container">$n,m_n \to \infty$</span> up to <span class="math-container">$n \approx 10^{525} $</span> (at the convergents at the continued fraction) :<br> <a href="https://i.stack.imgur.com/j4SSh.png" rel="nofollow 
noreferrer"><img src="https://i.stack.imgur.com/j4SSh.png" alt="(bild)"></a></p>
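<p>For small <span class="math-container">$n$</span> the near-misses are easy to tabulate with exact integer arithmetic; a minimal sketch (the function name is ours, and double-precision <span class="math-container">$\log_2 3$</span> is assumed accurate enough for these small <span class="math-container">$n$</span>):</p>

```python
import math

def best_gap(n):
    """Smallest |2^m - 3^n| over the two candidate exponents
    m = floor(n*log2(3)) and m = floor(n*log2(3)) + 1."""
    m_lo = math.floor(n * math.log2(3))
    return min(abs(2**m - 3**n) for m in (m_lo, m_lo + 1))

for n in range(1, 13):
    print(n, best_gap(n))
```

For example <span class="math-container">$n=5$</span> gives <span class="math-container">$|2^8-3^5| = 13$</span>.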
239,202
<p>Let $\Gamma$ be a $C^2$ compact submanifold of $\mathbb{R}^n$. Consider the distance function $\delta$ from $\Gamma$. It is well known that, for sufficiently small $\varepsilon&gt;0$, $\delta$ is $C^2$ on $\{ 0&lt;\delta &lt; \varepsilon\}$, and that it satisfies the eikonal equation</p> <p>$$ \| \nabla \delta \| = 1, \qquad \text{with} \qquad \delta|_{\Gamma} = 0. $$</p> <p>Now recall Bochner's formula, valid for a smooth function $u \in C^\infty(M)$ on a general Riemannian manifold. It reads:</p> <p>$$ \frac{1}{2} \Delta\left( \| \nabla u\|^2 \right) = \nabla u \cdot \nabla \Delta u + \| \mathrm{Hess} (u) \|^2_{\mathrm{HS}} + \mathrm{Ric}(\nabla u, \nabla u). \qquad (\star)$$</p> <p>Here $\| \cdot \|_{\mathrm{HS}}$ is the Hilbert-Schmidt norm and $\Delta$ is the Laplace-Beltrami operator. When we specify $(\star)$ to $M = \mathbb{R}^n$ and $u$ is a <em>smooth</em> solution of the eikonal equation, we have the following:</p> <p>$$ \nabla u \cdot \nabla \Delta u = - \| \mathrm{Hess} (u) \|^2_{\mathrm{HS}}. \qquad (\star\star)$$</p> <p>Observe that the r.h.s. requires only $C^2$ regularity of $u$, while the l.h.s., a priori, requires a further derivative ($\nabla u \cdot \nabla \Delta u$ is indeed the directional derivative of $\Delta u$ in the direction $\nabla u$).</p> <p><strong>Q: Is it true, even if $\delta$ is a priori only a $C^2$ solution of the eikonal equation, that $\Delta \delta$ admits a directional derivative in the direction of $\nabla \delta$?</strong> Or, in other words, is it true that $(\star\star)$ holds for $\delta$?</p> <p>The same question indeed can be posed in the Riemannian setting.</p> <p>P.S: A direct computation with the distance from the $C^2$ (but not $C^3$) surface in $\mathbb{R}^2$ given by $y= x^{5/2}$ seems to support this claim.</p> <p>This could be a standard fact about the regularity of solutions of the eikonal equation, but I have not been able to find any reference on this precise point.</p>
Raziel
13,915
<p>Since this question seems to have attracted some interest, I will post my own answer (which is a proof of the statement in the comments by Anton).</p> <p><strong>If $u : M \to \mathbb{R}$ is a $C^2$ solution of the Eikonal equation $$ \| \nabla u\| = 1, $$ then $\mathrm{Hess}(u)$, which a priori is only continuous, is smooth along integral lines of $\nabla u$ (and analytic, in the Euclidean case).</strong></p> <p>The proof boils down to show that the Riccati equation for $\mathrm{Hess}(u)$ holds true in the distributional sense (any "simpler" proof is welcome). Assume $M = \mathbb{R}^n$. The eikonal equation is (sum over repeated indices is assumed) $$(\partial_i u) (\partial_i u) = 1.$$</p> <p>Its first derivative gives $$(\partial_{ij} u) (\partial_i u) = 0.$$</p> <p>We can indeed differentiate again, but not use Leibnitz since $\partial_{ij} u$ might not be differentiable. Nevertheless, as distributions we obtain that</p> <p>$$\partial_k (\partial_i \partial_j u) (\partial_i u)= - (\partial_{k\ell} u)( \partial_{\ell j} u) .$$ </p> <p>(To be precise one must consider $\partial_{i}\partial_j u \in C_c^0(\mathbb{R}^n)^*$ so that its distributional derivative is in $C_c^1(\mathbb{R}^n)^*$, and so the multiplication for $C^1(\mathbb{R}^n)$ functions, such as $\partial_i u$, is well defined). We can interchange the derivatives w.r.t. $k$ and $i$ in the distributional sense (indeed we can do it for test functions in $C^\infty_c(\mathbb{R}^n)$, and then by density also for test functions in $C^1_c(\mathbb{R}^n)$):</p> <p>$$\partial_i (\partial_k \partial_j u) (\partial_i u) = - (\partial_{k\ell} u)( \partial_{\ell j} u). $$ </p> <p>Since the r.h.s. is continuous, the identity holds also in the strong sense. 
In particular, the Hessian $H_{kj} := \partial_{k}\partial_j u$ admits derivative in the direction of $v = \nabla u$ and it satisfies the Riccati equation</p> <p>$$ D_v H = -H^2 .$$</p> <p>One can compute explicit solutions of this along the integral lines of $v=\nabla u$, and they are analytic.</p> <p><strong>Riemannian case.</strong> The argument generalizes in a straightforward way to the Riemannian setting, with the non-trivial occurrence of the curvature due to the interchange of covariant derivatives. Denoting with $S$ the symmetric $(1,1)$ tensor associated with $\mathrm{Hess}(u)$ (the shape operator of the level sets of $u$):</p> <p>$$ \nabla_{v} S + S^2 + R = 0,$$</p> <p>where $R$ is the curvature operator $g(R X,Y) = \mathrm{Rm}(x,\nabla u,\nabla u,Y)$. In particular, $S$ is smooth along integral lines of $\nabla u$. Taking the trace, we get:</p> <p>$$(\nabla u) ( \Delta u) + \|\mathrm{Hess}(u)\|_{\mathrm{HS}}^2 + \mathrm{Ric}(\nabla u,\nabla u) = 0,$$</p> <p>which is the particular case of Bochner's formula for $C^2$ solutions of the Eikonal equation.</p> <p><strong>Comments.</strong> These equations are indeed classical, the only non-trivial part (at least to me), was the unexpected extra regularity of $\mathrm{Hess}(u)$ along the integral curve $\gamma(t)$ of $\nabla u$ (which is indeed a geodesic).</p> <p><strong>Remark.</strong> This means that the mean curvature $\Delta u$ of a $C^2$ surface $\{u = 0\}$ is actually smooth in the normal direction (and analytic if the ambient space is Euclidean).</p>
3,252,765
<p>We are trying to codify, in terms of a modern algorithm, the works of the ancient Indian mathematician <em>Udayadivakara</em> (CE 1073). In his work <em>Sundari</em>, he quotes one <em>Acarya Jayadeva</em> who has given methods to solve Pell's equations. In these methods, one can find the cyclic <em>Chakravala</em> method to deal with <span class="math-container">$X^2-DY^2=1$</span>, wrongly attributed to Bhaskara. He also gives the method to solve <span class="math-container">$X^2-DY^2=C$</span> for any integer <span class="math-container">$C$</span>. </p> <ol> <li>His algorithm starts off by finding the nearest square integer <span class="math-container">$&gt;D$</span>, named <span class="math-container">$P^2$</span>. Then <span class="math-container">$a=P^2-D$</span>.</li> <li>Now some <span class="math-container">$b$</span> is chosen in such a way that <span class="math-container">$Db^2+Ca$</span> is some perfect square <span class="math-container">$Q^2$</span>. </li> <li>Then the <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> solutions can be found by using <span class="math-container">$Y=\frac{Q\pm P b}{a}$</span> and <span class="math-container">$X=PY \mp b$</span>.</li> <li>This procedure can continue indefinitely to find all the solutions. </li> <li>Coming to the question of the fundamental solution, i.e. the solution with which Bhavana has to be performed repeatedly to get other solutions (related to the modern automorphism group of the quadratic form), Prof. K.S. Shukla, who first translated the work from Sanskrit to English, says in his example that it should be chosen "appropriately".</li> <li><strong>Our primary question is then what is the criterion to derive this <em>fundamental solution</em>? 
Is there a way to derive such a criterion?</strong> </li> <li>The whole procedure seems to resemble Conway's topograph method which has been posted here several times <a href="https://math.stackexchange.com/questions/1719280/does-the-pell-like-equation-x2-dy2-k-have-a-simple-recursion-like-x2-dy2?noredirect=1&amp;lq=1">Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$?</a> It is quite fascinating to think that some wonderful mind came up with this algorithm about 1000 years ago and the optimality of it is equally amazing.</li> </ol> <p>P.S: If anyone so wishes, we would be happy to provide a version of the original paper written by Prof. Shukla in 1950 dealing with this!</p>
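<p>To make steps 1&ndash;3 concrete, here is a small computational sketch (the function name, the brute-force search over <span class="math-container">$b$</span> and its bound are our own choices; the text itself does not prescribe how <span class="math-container">$b$</span> is found):</p>

```python
from math import isqrt

def jayadeva_step(D, C, b_max=1000):
    """Steps 1-3 as described above: P^2 is the nearest square > D,
    a = P^2 - D; search for b with D*b^2 + C*a a perfect square Q^2,
    then Y = (Q +- P*b)/a and X = P*Y -+ b.  Returns the first integer
    pair (X, Y) found, or None.  The naive loop over b is only a
    stand-in for the text's choice of b."""
    P = isqrt(D) + 1          # nearest square above D (D non-square)
    a = P * P - D
    for b in range(1, b_max):
        Q2 = D * b * b + C * a
        if Q2 < 0:
            continue
        Q = isqrt(Q2)
        if Q * Q != Q2:
            continue
        for s in (1, -1):     # the +- choice in step 3
            num = Q + s * P * b
            if num % a == 0:
                Y = num // a
                X = P * Y - s * b
                if X * X - D * Y * Y == C:
                    return X, Y
    return None

print(jayadeva_step(10, 9))
```

With <span class="math-container">$D=10$</span>, <span class="math-container">$C=9$</span>: <span class="math-container">$P=4$</span>, <span class="math-container">$a=6$</span>, <span class="math-container">$b=1$</span> gives <span class="math-container">$Q^2=64$</span>, hence <span class="math-container">$Y=2$</span>, <span class="math-container">$X=7$</span>, and indeed <span class="math-container">$7^2-10\cdot2^2=9$</span>.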
wendy.krieger
78,024
<p>In the case of <span class="math-container">$X^2-DY^2=C$</span>, this applies.</p> <ol> <li><span class="math-container">$C$</span> is the product of short and special primes, and some <span class="math-container">$Z^2$</span> relative to base <span class="math-container">$D$</span>.</li> <li><span class="math-container">$Z=\gcd(X,Y)$</span> can be any number, but <span class="math-container">$Z^2\mid C$</span>.</li> <li>Where <span class="math-container">$D = 1 \pmod{4}$</span>, then <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> can be integer-halfs (ie both Z+½).</li> </ol> <p>Let's see what this means.</p> <p>In base <span class="math-container">$D$</span>, the period of 1/p will divide <span class="math-container">$(p-1)/2$</span> an even (short) or odd (long) number of times. For example, in decimal, short primes are of the form <span class="math-container">$40n\pm 3^x$</span>, so 3, 13, 31, 37 are 'short primes'. This means that the equation has a solution in terms of <span class="math-container">$C=111$</span> or <span class="math-container">$C=1209=3 * 13 * 31 $</span>, for example, since these are the product of short primes relative to 10.</p> <p>The long primes never occur as solutions unless they divide both <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, in which case <span class="math-container">$p^2 \mid C$</span>.</p> <p>The special primes, are generally taken as the set of 2 and the divisors of <span class="math-container">$D$</span>. These will necessarily accompany certain primes. This give for example, that <span class="math-container">$3$</span> will not occur alone, but requires a <span class="math-container">$2$</span> or <span class="math-container">$5$</span> to be with it. 
So there is no solution to <span class="math-container">$X^2-10Y^2=3$</span>, but there is for <span class="math-container">$X^2-10Y^2=6$</span>.</p> <p><strong>General Theory</strong></p> <p>Algebraically, these equate to values of the form <span class="math-container">$a \pm b\sqrt{D}$</span>. This is an integer-system, in that every number of this form divides some integer <span class="math-container">$C=a^2-D.b^2$</span>. Many of these integer systems are not completely factorable in themselves, but rely heavily on 'multiplication-half' systems. </p> <p>For example, the decimal case <span class="math-container">$a\pm b\sqrt{10}$</span> has a half-multiples of the form <span class="math-container">$a\sqrt 2 \pm b \sqrt 5$</span>. <span class="math-container">$a$</span> and <span class="math-container">$b$</span> can be integer-halfs when the difference of the numbers under the square roots is divisible by four, ie <span class="math-container">$4\mid (5-2)$</span> here.</p> <p>Note also that there can be a unit in the integer-half system. <span class="math-container">$D=10$</span> does not have one, but there is certainly one in <span class="math-container">$D=42$</span> as <span class="math-container">$\sqrt 7 + \sqrt 6$</span>. This system also has multiplication-halfs at <span class="math-container">$a\sqrt 2 + b\sqrt{21}$</span>, and <span class="math-container">$a\sqrt 3+b \sqrt{14}$</span>. </p> <p>Note that any given prime occurs in a particular multiplication-half, and requires its special prime(s) to find a value of <span class="math-container">$C$</span>. So in decimal, the prime <span class="math-container">$41$</span> can occur as a direct solution, whereas <span class="math-container">$37$</span> requires an additional special half-factor (2 or 5) to be a solution. 
An additional multiplication-half will do, so <span class="math-container">$13*37=481=10.7^2-3^2$</span> works.</p> <p>The case of <span class="math-container">$D=130$</span> is interesting. It has four half-systems (1,130), (2,65), (5,26) and (10,13), which work on the XOR principle (ie a prime in (2,65) times one in (10,13) requires a prime from (5,26) to make the square; 2×10×5 is square). But the unit for this system has no half-form.</p> <p>I believe that if you take the primitive roots of the primes, and make a table of even and odd indexes (ie <span class="math-container">$g^i = n\pmod p$</span>), then these rules apply.</p> <ol> <li><p>If the sum of indexes for <span class="math-container">$D \pmod p$</span> is even, then <span class="math-container">$p$</span> can occur as a single-power divisor of <span class="math-container">$C$</span>.</p></li> <li><p>The actual parity of the individual divisors of <span class="math-container">$D$</span> determines which 'half' it falls in. For example, relative to <span class="math-container">$D=30$</span>, we have the prime 13, as long in binary, short in -3, and long in 5. This gives 101, so the period of 13 in base 30 has a length dividing 6. But the unit for <span class="math-container">$x+y \sqrt{30}$</span> has a half falling in <span class="math-container">$\sqrt 6 + \sqrt 5$</span>. So to make these all even, it's not in the set 000,110,001,111 (which require no helper prime), but in 010, 100, 011, 101, which needs either a <span class="math-container">$2$</span> or <span class="math-container">$3$</span>. So we find, for example, that 26 and 39 are possible candidates for <span class="math-container">$C$</span>, but not 13 itself.</p></li> </ol>
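<p>The claim that <span class="math-container">$X^2-10Y^2=3$</span> has no solution while <span class="math-container">$X^2-10Y^2=6$</span> does is easy to check by brute force (the helper below and its search bound are ours; note that modulo <span class="math-container">$5$</span> the form reduces to <span class="math-container">$x^2$</span>, and <span class="math-container">$3$</span> is not a square mod <span class="math-container">$5$</span>):</p>

```python
def small_solutions(D, C, bound=50):
    """All (x, y) with |x|, |y| <= bound and x^2 - D*y^2 == C."""
    return [(x, y) for x in range(-bound, bound + 1)
                   for y in range(-bound, bound + 1)
                   if x * x - D * y * y == C]

print(small_solutions(10, 3))        # empty: obstructed mod 5
print(small_solutions(10, 6)[:4])    # e.g. 4^2 - 10*1^2 = 6
```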
2,438,111
<p>I hope my title somehow encapsulates my problem.</p> <p>Let's say we have a 1-D Grid with the values 2,1,5,8,1,1. Imagine those values are of some physical quantity $\alpha$. The mean of this would be $(2+1+5+8+1+1)/6 = 3$</p> <p>Now let's say we have some function $f(x) = x^2$, which computes another quantity $\beta$ out of the initial ones. When we put in our mean it yields $f(3) = 9$. So one could think that the total $\beta$ would be $6 \times 9=54$.</p> <p>Now let's compute $\beta$ directly for each grid element and sum it up to get the total amount. $\beta_{total} = 4+1+25+64+1+1=96$</p> <p>$96 \neq 54$.</p> <p>Intuitively, I'd say 96 is the right result, but I'm kind of at a loss why the mean times the number of values fails.</p>
eyeballfrog
395,748
<p>Remember your trig identities.</p> <p>$$ A\cos(x - C) = A\cos(x)\cos(C) + A\sin(x)\sin(C) $$</p> <p>So if $f(x) = \sqrt{3}\cos(x) + \sin(x) = A\cos(x-C)$, then $A\cos(C) = \sqrt{3}$ and $A\sin(C) = 1$. If we square both sides and add them together, we get $$ [A\cos(C)]^2 + [A\sin(C)]^2 = A^2[\cos(C)^2 +\sin(C)^2] = A^2 = (\sqrt{3})^2 + (1)^2 = 4 $$ So $A = 2$.</p> <p>From this argument it should be clear that in the general case, the amplitude of $a\sin(x) + b\cos(x)$ is $\sqrt{a^2 + b^2}$.</p>
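<p>A quick numerical sanity check (the grid size is an arbitrary choice): sampling <span class="math-container">$\sqrt{3}\cos x + \sin x$</span> densely over one period, the maximum should approach the claimed amplitude <span class="math-container">$2$</span>.</p>

```python
import math

# Sample sqrt(3)*cos(x) + sin(x) over one full period; the maximum of
# the samples approaches the amplitude sqrt(3 + 1) = 2 as N grows.
N = 100_000
amp = max(math.sqrt(3) * math.cos(2 * math.pi * k / N) +
          math.sin(2 * math.pi * k / N) for k in range(N))
print(amp)
```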
2,438,111
<p>I hope my title somehow encapsulates my problem.</p> <p>Let's say we have a 1-D Grid with the values 2,1,5,8,1,1. Imagine those values are of some physical quantity $\alpha$. The mean of this would be $(2+1+5+8+1+1)/6 = 3$</p> <p>Now let's say we have some function $f(x) = x^2$, which computes another quantity $\beta$ out of the initial ones. When we put in our mean it yields $f(3) = 9$. So one could think that the total $\beta$ would be $6 \times 9=54$.</p> <p>Now let's compute $\beta$ directly for each grid element and sum it up to get the total amount. $\beta_{total} = 4+1+25+64+1+1=96$</p> <p>$96 \neq 54$.</p> <p>Intuitively, I'd say 96 is the right result, but I'm kind of at a loss why the mean times the number of values fails.</p>
Saj_Eda
168,169
<p>Use trigonometric identities and you will see that the amplitude is 2. Why? Follow these lines:</p> <p>$f(x)=\tan(\pi/3)\cos(x)+\sin(x)=\frac{\sin(\pi/3)\cos(x)+\cos(\pi/3)\sin(x)}{\cos(\pi/3)}=2\sin(x+\pi/3)$</p>
2,512,137
<blockquote> <p>A social worker has 77 days to make his visits. He wants to make at least one visit a day, and has 133 visits to make. Is there a period of consecutive days in which he makes a.) 21 b.) 23 visits? Why?</p> </blockquote> <p>a.) Set $a_i$ to be the number of visits up to and including day $i$, for $i = 1,\dots, 77$. If we combine the set of all $a_i$ with the set $\{ a_1+21,a_2+21,\dots,a_{77}+21 \}$, then we have $77\cdot2=154$ numbers, each less than or equal to $133+21 = 154$. Thus the pigeonhole principle does not force any two of these 154 numbers to be identical. So there isn't necessarily a period of 21 days.</p> <p>b.) No: since $23&gt;21$, it would follow from a.) that this doesn't work either.</p> <p>Is this correct thinking? Being somewhat superstitious, I feel that at least one should be correct.. Thanks</p>
anonymous
375,166
<p>In a) you should also add 21 to the relevant set; otherwise you're not accounting for the possibility that the period of consecutive days starts on the first day (i.e. currently you're merely looking for an equality of the form $a_i=a_j+21$ but not $a_i=21$). Adding 21 to the set changes the answer to the affirmative.</p>
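<p>The affirmative answer can also be sanity-checked by brute force over random valid schedules (77 days, at least one visit per day, 133 visits in total); every sample should contain a consecutive stretch with exactly 21 visits. The sampling scheme below is our own illustration, not part of the proof:</p>

```python
import random

def has_window(visits, target=21):
    """True if some run of consecutive days sums to exactly target.
    Uses prefix sums: a window exists iff some prefix equals target,
    or two prefixes differ by target."""
    prefixes = {0}
    s = 0
    for v in visits:
        s += v
        if s - target in prefixes:
            return True
        prefixes.add(s)
    return False

random.seed(0)
for _ in range(200):
    # random composition of 133 into 77 positive parts
    cuts = sorted(random.sample(range(1, 133), 76))
    visits = [b - a for a, b in zip([0] + cuts, cuts + [133])]
    assert sum(visits) == 133 and all(v >= 1 for v in visits)
    assert has_window(visits, 21)
print("all samples contain a 21-visit stretch")
```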
3,506,316
<p>I am trying to evaluate this limit:</p> <p><span class="math-container">$$\lim_{x\to0^{+}}(x-\sin x)^{\frac{1}{\log x}}$$</span></p> <p>It's a <span class="math-container">$0^0$</span> indeterminate form, and I am unsure how to deal with it. I have a feeling that if I could turn it into a form where L'Hopital's rule is applicable, then I'd have a chance at solving the problem.</p> <p>Is there any consistent way of turning a <span class="math-container">$0^0$</span> form into a <span class="math-container">$\frac{\infty}{\infty}$</span> or <span class="math-container">$\frac{0}{0}$</span> form?</p> <p>If not, how do you deal with this kind of limit?</p>
Clement Yung
620,517
<p>The general trick is to apply exponential/logarithm to the expression: <span class="math-container">$$ \lim_{x \to 0^+} (x - \sin{x})^{\frac{1}{\log{x}}} = \lim_{x \to 0^+} e^{\frac{\log(x - \sin{x})}{\log{x}}} = e^{\lim_{x \to 0^+} \frac{\log(x - \sin{x})}{\log{x}}} $$</span> Now you can apply L'Hopital Rule to the limit in the exponent. I use <span class="math-container">$=_L$</span> to denote equality when L'Hopital Rule is applied. <span class="math-container">\begin{align*} \lim_{x \to 0^+} \frac{\log(x - \sin{x})}{\log{x}} &amp;=_L \lim_{x \to 0^+} \frac{\frac{1 - \cos{x}}{x - \sin{x}}}{\frac{1}{x}} \\ &amp;= \lim_{x \to 0^+} \frac{x - x\cos{x}}{x - \sin{x}} \\ &amp;=_L \lim_{x \to 0^+} \frac{1 - \cos{x} + x\sin{x}}{1 - \cos{x}} \\ &amp;=_L \lim_{x \to 0^+} \frac{2\sin{x} + x\cos{x}}{\sin{x}} \\ &amp;=_L \lim_{x \to 0^+} \frac{3\cos{x} - x\sin{x}}{\cos{x}} \\ &amp;= \frac{3 - 0}{1} \\ &amp;= 3 \end{align*}</span> Thus the desired limit is <span class="math-container">$e^3$</span>.</p>
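<p>As a numerical cross-check of the exponent limit: computing <span class="math-container">$x - \sin x$</span> by direct subtraction loses all precision for tiny <span class="math-container">$x$</span>, so the sketch below sums its Taylor series instead (the series cutoff is our own choice):</p>

```python
import math

def x_minus_sin(x, terms=6):
    """x - sin(x) summed from its Taylor series x^3/3! - x^5/5! + ...
    (direct subtraction suffers catastrophic cancellation for tiny x)."""
    total, sign = 0.0, 1.0
    for k in range(1, terms + 1):
        total += sign * x**(2 * k + 1) / math.factorial(2 * k + 1)
        sign = -sign
    return total

def exponent(x):
    # the exponent log(x - sin x)/log(x), which should tend to 3
    return math.log(x_minus_sin(x)) / math.log(x)

for x in (1e-2, 1e-10, 1e-50, 1e-100):
    print(x, exponent(x))
```

The printed values approach <span class="math-container">$3$</span> (slowly, since the correction is of order <span class="math-container">$\ln 6/|\ln x|$</span>), consistent with the limit <span class="math-container">$e^3$</span>.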
3,506,316
<p>I am trying to evaluate this limit:</p> <p><span class="math-container">$$\lim_{x\to0^{+}}(x-\sin x)^{\frac{1}{\log x}}$$</span></p> <p>It's a <span class="math-container">$0^0$</span> indeterminate form, and I am unsure how to deal with it. I have a feeling that if I could turn it into a form where L'Hopital's rule is applicable, then I'd have a chance at solving the problem.</p> <p>Is there any consistent way of turning a <span class="math-container">$0^0$</span> form into a <span class="math-container">$\frac{\infty}{\infty}$</span> or <span class="math-container">$\frac{0}{0}$</span> form?</p> <p>If not, how do you deal with this kind of limit?</p>
user3290550
278,972
<p><span class="math-container">$$\lim_{x\to0^{+}}(x-\sin x)^{\frac{1}{\ln x}}$$</span> <span class="math-container">$$=\lim_{x\to0^{+}}e^{\frac{\ln(x-\sin x)}{\ln x}}$$</span></p> <p><span class="math-container">$$=e^{\lim_{x\to0^{+}}\dfrac{\ln(x-\sin x)}{\ln(x)}}$$</span></p> <p>Applying L'Hopital's rule, as we have the <span class="math-container">$\dfrac{\infty}{\infty}$</span> form:</p> <p><span class="math-container">$$e^{\lim_{x\to0^{+}}\dfrac{x(1-\cos x)}{x-\sin x}}$$</span></p> <p><span class="math-container">$$=e^{\lim_{x\to0^{+}}\dfrac{x^3(1-\cos x)}{x^2(x-\sin x)}}$$</span></p> <p>Now use the standard limits</p> <p><span class="math-container">$$\lim_{x\to0^{+}}\dfrac{x-\sin x}{x^3}=\dfrac{1}{6},\qquad \lim_{x\to0^{+}}\dfrac{1-\cos x}{x^2}=\dfrac{1}{2}.$$</span></p> <p>After putting in these values, you will get the final answer <span class="math-container">$e^3$</span>.</p>
2,346,804
<p>Please help me finish this problem.</p> <p>$xy''+(3x-1)y'-(4x+9)y=0$ where $y(0)=0$</p> <p>$L[xy'']+L[(3x-1)y']-L[(4x+9)y]=L[0]$</p> <p>$L[xy'']=\frac{d}{dp}(p^2Y)$</p> <p>$L[(3x-1)y']=-3\frac{d}{dp}(pY)$</p> <p>$L[(4x+9)y]=-4\frac{dY}{dp}$</p> <p>$-\frac{d}{dp}(p^2Y)-3\frac{d}{dp}(pY)+4\frac{dY}{dp}=0$</p> <p>$-\frac{d}{dp}(p^2Y)-(3pY)\frac{d}{dp}+4\frac{dY}{dp}=0$</p> <p>I don't know what to do from here and would really appreciate some help finishing the problem. </p>
J.G
293,121
<p>You can think of it by the symmetry of the problem. More formally, let $X_i$ be the indicator function of the $i$th student getting picked. Then $\mathbb{E}[X_i]=\mathbb{P}(\text{child $i$ got picked})$. We also know that $\mathbb{E}[\sum_{i=1}^{10} X_i]=3$, as in every possibility, we pick 3 students. By linearity of expectation, $\mathbb{E}[\sum_{i=1}^{10} X_i]=\sum_{i=1}^{10}\mathbb{E}[X_i]$. By symmetry, $\mathbb{E}[X_i]=\mathbb{E}[X_j]$ for any two students $i$ and $j$, so letting, for instance, $x=\mathbb{E}[X_1]$, we get $3=10x$. Divide through to get $x$.</p>
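<p>The symmetry argument can be confirmed by brute force over all <span class="math-container">$\binom{10}{3}=120$</span> equally likely groups (a small illustrative script):</p>

```python
from fractions import Fraction
from itertools import combinations

groups = list(combinations(range(1, 11), 3))   # all C(10,3) = 120 groups of 3
hits = sum(1 for g in groups if 1 in g)        # groups containing student 1
p = Fraction(hits, len(groups))                # P(student 1 is picked)
```

Every student appears in <span class="math-container">$\binom{9}{2}=36$</span> of the <span class="math-container">$120$</span> groups, giving <span class="math-container">$p=\frac{3}{10}$</span>.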
2,346,804
<p>Please help me finish this problem.</p> <p>$xy''+(3x-1)y'-(4x+9)y=0$ where $y(0)=0$</p> <p>$L[xy'']+L[(3x-1)y']-L[(4x+9)y]=L[0]$</p> <p>$L[xy'']=\frac{d}{dp}(p^2Y)$</p> <p>$L[(3x-1)y']=-3\frac{d}{dp}(pY)$</p> <p>$L[(4x+9)y]=-4\frac{dY}{dp}$</p> <p>$-\frac{d}{dp}(p^2Y)-3\frac{d}{dp}(pY)+4\frac{dY}{dp}=0$</p> <p>$-\frac{d}{dp}(p^2Y)-(3pY)\frac{d}{dp}+4\frac{dY}{dp}=0$</p> <p>I don't know what to do from here and would really appreciate some help finishing the problem. </p>
G Tony Jacobs
92,129
<p>Suppose you are the particular student. You are all ten placed into ten positions, three of which are the chosen ones. What is your probability of being in one of those three positions? Three out of ten.</p>
2,346,804
<p>Please help me finish this problem.</p> <p>$xy''+(3x-1)y'-(4x+9)y=0$ where $y(0)=0$</p> <p>$L[xy'']+L[(3x-1)y']-L[(4x+9)y]=L[0]$</p> <p>$L[xy'']=\frac{d}{dp}(p^2Y)$</p> <p>$L[(3x-1)y']=-3\frac{d}{dp}(pY)$</p> <p>$L[(4x+9)y]=-4\frac{dY}{dp}$</p> <p>$-\frac{d}{dp}(p^2Y)-3\frac{d}{dp}(pY)+4\frac{dY}{dp}=0$</p> <p>$-\frac{d}{dp}(p^2Y)-(3pY)\frac{d}{dp}+4\frac{dY}{dp}=0$</p> <p>I don't know what to do from here and would really appreciate some help finishing the problem. </p>
CopyPasteIt
432,081
<p>In order to apply Sample Space Theory to word problems concerning odds, you have to do two things:</p> <p>Describe the problem as a random experiment<br> Select a probability model </p> <p>There may be more than one valid way to do this, so you can't just plug into a formula or apply a simple technique - you have to understand what you are doing. </p> <p>In what follows we will give two additional solutions to the MODEL 1: C(10,3) = 120 design that the OP used. The <a href="https://en.wikipedia.org/wiki/Discrete_uniform_distribution" rel="nofollow noreferrer">discrete uniform distribution</a> will play a role in the sample space design. Also, the following basic concept from probability theory,</p> <blockquote> <p><em>To find the probability of two independent events that occur in sequence, find the probability of each event occurring separately, and then multiply the probabilities.</em></p> </blockquote> <p>will be developed.</p> <hr> <p>Model 2</p> <p>The random experiment is to select one student out of 10, then to select another student out of the remaining 9, and to then select one final student out of the remaining 8 students. This experiment can be viewed as performing a sequence of three independent 'selection' experiments. We want to know the probability of a specific student being selected.</p> <p>Let $T = \{1,2,3,4,5,6,7,8,9,10\}$. The sample space $S$ will be the ordered $\text{3-tuples}$ taken from $T \times T \times T$ that are observed when (mentally) performing the experiment. </p> <p>The sample space will contain $10 \, 9 \, 8 = 720$ outcomes. The probability of any outcome will be $ \frac{1}{10}\frac{1}{9}\frac{1}{8} = \frac{1}{720}$.</p> <p>To answer the question, we can assign our student to $1 \in T$ and look at the event $A$ in $S$ of all $\text{3-tuples}$ containing student $1$. If the student is selected right away, there are $9$ ways to select the next student and $8$ ways for the third, or a $72$ count. 
Continuing this counting argument, we see that $A$ has $72 + 72 + 72$ elements.</p> <p>Ans: $\frac{3\cdot 72}{720} = 30\%$</p> <p>It is not necessary to use this detail or to even count the number of outcomes in $S$ if we use this trick: </p> <p>What is the probability that the student is NOT selected?</p> <p>The chance of not being selected first is $\frac{9}{10}$.<br> The chance of not being selected second is $\frac{8}{9}$.<br> The chance of not being selected third is $\frac{7}{8}$.</p> <p>Since each selection pick is independent from the prior pick, you can multiply, which is easy since things cancel out, and you get $\frac{7}{10}$. So the probability of our student being selected is $1 - \frac{7}{10}$, or again, $30\%$.</p> <hr> <p>Model 3</p> <p>The random experiment: a teacher has 10 new students coming to class and they will be seated randomly in exactly 10 chairs. He is a nasty fellow, and places tacks on three of the seats in the front row. As the students come into the class, they will choose a chair. After they sit down they will either jump up or remain seated. The first student comes into the class and sits down, and we want to know how often he jumps up. </p> <p>Our sample space is $S = \{j, n\}$ where $j$ means the student jumps up and $n$ means the student is not fazed. Let $T = \{1,2,3,4,5,6,7,8,9,10\}$ represent the chairs. If we set it up so that the teacher puts the tacks on chairs $J = \{1,2,3\}$, then we can map $T$ onto $S$ and assign probabilities to the outcomes in $S$. Here, $J$ gets mapped to $j$, so $P(j) = .3$ and $P(n) = .7$ in $S$.</p> <p>The first student walks in the door. We want to know the probability of event $J$ occurring (i.e. the student picks chair 1 OR chair 2 OR chair 3), so that we see the student jump up.</p> <p>Ans: $30\%$</p>
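<p>The complement trick from Model 2 is easy to verify with exact arithmetic (a minimal sketch):</p>

```python
from fractions import Fraction

# probability the fixed student survives all three sequential picks
p_not = Fraction(9, 10) * Fraction(8, 9) * Fraction(7, 8)
p_selected = 1 - p_not
```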
2,322,646
<p>Let $f$ and $\varphi$ be continuous real valued functions on $\mathbb{R}$. Suppose $\varphi(x)=0$ for $|x|&gt;5$ and that $\int_{\mathbb{R}}\varphi(x)\mathbb{d}x=1$. Show that $$\lim_{h\to 0}\left[\frac{1}{h}\int_{\mathbb{R}}f(x-y)\varphi\left(\frac{y}{h}\right)\mathbb{d}y\right]=f(x).$$ I don't know how to proceed. Please help.</p>
Tucker
256,305
<p>Note that $\varphi(y/h) = 0$ for $|y/h|&gt;5$, so we don't need to integrate over the entire real line.</p> <p>$$ \lim_{h\to 0}\frac{1}{h}\int_{-5h}^{5h}f(x-y)\varphi(y/h)\mathbb{d}y $$</p> <p>define $\xi=y/h$.</p> <p>$$ \lim_{h\to 0}\frac{1}{h}\int_{-5}^{5}f(x-h\xi)\varphi(\xi)\mathbb{d}(\xi h) $$</p> <p>$$ \mathbb{d}(h\xi)=h\mathbb{d}\xi $$</p> <p>$$ \lim_{h\to 0}\int_{-5}^{5}f(x-h\xi)\varphi(\xi)\mathbb{d}\xi $$</p> <p>$\xi$ is finite over the integration interval, so we can now apply the limit.</p> <p>$$ \int_{-5}^{5}f(x-0\xi)\varphi(\xi)\mathbb{d}\xi=f(x)\int_{-5}^{5}\varphi(\xi)\mathbb{d}\xi $$</p> <p>but</p> <p>$$ \int_{-5}^{5}\varphi(\xi)\mathbb{d}\xi=1 $$</p> <p>as $\varphi(\xi)$ vanishes when $|\xi|&gt;5$ and as stated at the beginning of the problem $\int_{\mathbb{R}}\varphi(\xi)\mathbb{d}\xi=1$</p>
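<p>The substitution can be checked numerically with a concrete example; the triangle kernel below is an arbitrary choice satisfying the hypotheses (continuous, vanishing for $|y|&gt;5$, integral $1$), and $f(x)=x^2$ is likewise just an example:</p>

```python
def phi(y):
    # continuous, vanishes for |y| > 5, total integral 1 (triangle of area 1)
    return max(0.0, 5.0 - abs(y)) / 25.0

def f(x):
    return x * x

def smoothed(x, h, n=20000):
    # (1/h) * integral over [-5h, 5h] of f(x - y) * phi(y / h) dy, midpoint rule
    a = -5.0 * h
    dy = 10.0 * h / n
    total = 0.0
    for i in range(n):
        y = a + (i + 0.5) * dy
        total += f(x - y) * phi(y / h)
    return total * dy / h

val = smoothed(1.0, 1e-3)   # should be close to f(1) = 1
```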
738,083
<blockquote> <p>Show that if two random variables X and Y are equal almost surely, then they have the same distribution. Show that the reverse direction is not correct.</p> </blockquote> <p>If two r.v.'s are equal a.s., can we write $\mathbb P((X\in B)\triangle (Y\in B))=0$? (How can I write this better?)</p> <p>Then</p> <p>$\mathbb P(X\in B)-\mathbb P(Y\in B)\le \mathbb P(\{X\in B\} \setminus \{Y\in B\})\le \mathbb P((X\in B)\triangle (Y\in B))=0$</p> <p>$\Longrightarrow \mathbb P(X\in B)=\mathbb P(Y\in B)$</p> <p>but the other direction makes no sense to me; I don't know how this can be true.</p>
5xum
112,884
<p>Take $X$ and $Y$ with probabilities $P(X=1)=P(X=2)=P(Y=1)=P(Y=2)=0.5$ and which are independent. Then $$P(X=Y) = P(X=1, Y=1) + P(X=2,Y=2) =\\= P(X=1)P(Y=1)+P(X=2)P(Y=2)=0.5,$$ meaning that $X=Y$ holds with probability $0.5$, not $1$.</p>
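<p>The counterexample can also be tabulated exactly (a small sketch): the two marginal laws agree while $P(X=Y)=\frac12$.</p>

```python
from fractions import Fraction

half = Fraction(1, 2)
# joint law of two independent fair picks from {1, 2}
joint = {(x, y): half * half for x in (1, 2) for y in (1, 2)}

p_equal = sum(p for (x, y), p in joint.items() if x == y)
marg_x = tuple(sum(p for (x, y), p in joint.items() if x == v) for v in (1, 2))
marg_y = tuple(sum(p for (x, y), p in joint.items() if y == v) for v in (1, 2))
```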
432,811
<p>I'm trying to solve $$\operatorname{Arg}(z-2) - \operatorname{Arg}(z+2) = \frac{\pi}{6}$$ for $z \in \mathbb{C}$.</p> <p>I know that $$\operatorname{Arg} z_1 - \operatorname{Arg} z_2 = \operatorname{Arg} \frac{z_1}{z_2},$$ but that's only valid when $\operatorname{Arg} z_1 - \operatorname{Arg} z_2 \in (-\pi,\pi]$, so I'm not sure how to even begin solving this.</p> <p>I'm not familiar with modular arithmetic so if it is possible to solve this without using it then that would be great! (not that I know whether it is required to solve this in the first place)</p> <p>Thank you in advance.</p>
Christian Blatter
1,303
<p>I suggest you draw a figure.</p> <p>When $z$ lies in the lower half plane ${\rm Im}(z)&lt;0$ then $$-\pi&lt;\arg(z-2)-\arg(z+2)&lt;0\ .$$ It follows that there are no points in the lower half plane fulfilling your condition.</p> <p>Consider now a point $z$ in the upper half plane $H:\ {\rm Im}(z)&gt;0$. Then $$0&lt;\arg(z-2)-\arg(z+2)&lt;\pi\ .$$ The condition $$\arg(z-2)-\arg(z+2)={\pi\over 6}$$ means that the two segments connecting $z$ with the points $2$ and $-2$ enclose an angle of ${\pi\over 6}$. The set of $z$ fulfilling this condition is, according to the theorem about peripheral angles (resp., its inverse), an arc of a circle $\gamma$. The midpoint $M$ of $\gamma$ lies on the imaginary axis such that $\angle(2,M,-2)={\pi\over3}$. It follows that $M=2\sqrt{3}i$, and the radius of $\gamma$ is obviously $4$. The equation of this circle $\gamma$ is $$|z-2\sqrt{3}i|^2=16\ ,$$ and the set $S$ you are interested in is $\gamma\cap H$. One could provide a parametric representation of $S$ as follows: $$S=\left\{z=2\sqrt{3}i+4e^{it}\&gt;\biggm|\&gt;-{\pi\over3}&lt;t&lt;{4\pi\over3}\right\}\ .$$</p>
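<p>One can spot-check the locus numerically: sample a few points of $\gamma\cap H$ and verify the argument condition (a rough sketch):</p>

```python
import cmath
import math

center = 2 * math.sqrt(3) * 1j   # midpoint M of the circle, radius 4
diffs = []
for t in (-math.pi / 3 + 0.1, math.pi / 2, 4 * math.pi / 3 - 0.1):
    z = center + 4 * cmath.exp(1j * t)
    assert z.imag > 0             # the sampled points lie in the upper half plane
    diffs.append(cmath.phase(z - 2) - cmath.phase(z + 2))
```

Each difference comes out as $\pi/6$, as the peripheral angle theorem predicts.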
29,766
<p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest me or is my only bet just to google for them?</p>
Péter Komjáth
6,647
<p>The Wikipedia page <a href="http://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics" rel="nofollow">List of unsolved problems in mathematics</a> has a specific (and long) sublist for recently solved problems. </p>
93,621
<p>As we know, most of the spectral sequences are doubly graded. However, this "doubly graded" condition is not a part of the formal definition of spectral sequence. Is there any useful triply (quadruply, quintuply, etc.) graded spectral sequences? If not, is there a hope that some meaningful work can be done with this topic?</p>
Igor Rivin
11,142
<p>Of course this has been studied. You need but google "trigraded complex", and much wisdom will be found. OK, some wisdom, notably Ravenel's Complex Cobordism and Stable Homotopy Groups of Spheres, and a very cute presentation by Noah Forman on</p> <p>Graham Denham. The combinatorial laplacian of the tutte complex. J. Algebra, 242(1):160-175, 2001.</p> <p>and related matters.</p>
3,491,867
<p>I'm working on an integral used to illustrate <span class="math-container">$\pi &gt; \frac{22}{7}$</span> and I'm stuck on finding the name of a theorem for the following:</p> <p>Let <span class="math-container">$f(x)$</span> be a continuous Real Valued function on the interval <span class="math-container">$[a,b]$</span> (where <span class="math-container">$a$</span>, <span class="math-container">$b$</span> can be finite or infinite). If <span class="math-container">$f(x) \geq 0$</span> on <span class="math-container">$[a,b]$</span> then <span class="math-container">$$ \int_a^b f(x) \:dx \geq 0 $$</span></p> <p>Does anyone know what the name of this theorem is?</p>
mathcounterexamples.net
187,663
<p>Not sure this theorem has a name. It is however a property named <strong>positivity of the integral</strong>.</p>
542,391
<p>I understand the processes of putting a matrix into Jordan normal form and forming the transformation matrix associated to "diagonalizing" the matrix. So here's my question:</p> <p>Why is it that when you have an eigenvalue x=0 with algebraic multiplicity greater than 1, that you don't put a 1 in the superdiagonal of the JNF matrix but when the eigenvalue is non-zero and satisfies the same properties, we put a 1 in the superdiagonal of the Jordan normal form?</p> <p>My professor posted solutions to an assignment involving finding a matrix exponential, but the JNF of a matrix had eigenvalue x=0 with algebraic multiplicity of 3,yet had no entries of 1 along the superdiagonal.</p> <p>In advance, I would like to thank you for your help.</p>
Amzoti
38,839
<p>We have a single eigenvalue of $\lambda_1 = 1$ and a triple eigenvalue of $\lambda_{2,3,4} = 0$.</p> <p>For $\lambda=0$, we need to find three linearly independent eigenvectors and can just use the null space of $A$ for this. We have:</p> <p>$$NS(A) = NS \left(\begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0\end{bmatrix}\right)$$</p> <p>This produces $v_{2,3,4} = (0,0,0,1), (0,0,1,0), (0,1,0,0)$ as three linearly independent eigenvectors, thus this matrix is diagonalizable and we can write the Jordan block using the eigenvalues down the main diagonal as:</p> <p>$$J = \begin{bmatrix} 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1\end{bmatrix}$$</p>
363,911
<p>If a function $f:[-2,3]\to \mathbb{R}$ is defined by </p> <p>$f(x)=\begin{cases} 2|x|+1 \; ;\; \text{ if } x \in \Bbb Q \\ 0 \; ;\; \text{ if } x \notin \Bbb Q \end{cases}$ </p> <p>Prove that $f$ is not Riemann integrable.</p> <p>What I came up with:<br> $m_k=0$,$M_k=7$ </p> <p>Which implies $U(P,f)=35$ and $L(P,f)=0$, for any partition of $[-2,3]$. So the upper and lower integrals are not equal,<br> hence $f \notin {\mathscr R}[-2,3]$</p>
Pedro
23,350
<p><strong>ADD</strong> It seems you're being given Darboux's approach to integration. I guess that for each partition $P=\{a=x_0,\dots,x_n=b\}$ of the interval $[a,b]$ you're defining $$M_k=\sup_{x\in[x_{k-1},x_{k}]}f(x)$$ $$m_k=\inf_{x\in[x_{k-1},x_{k}]}f(x)$$</p> <p>and then the lower and upper sums of $f$ over $P$ as $$U(f,P)=\sum_{k=1}^n M_k(x_k-x_{k-1})$$</p> <p>$$L(f,P)=\sum_{k=1}^n m_k(x_k-x_{k-1})$$</p> <p>and then the lower and upper integrals (which always exist) as</p> <p>$$\overline{\int_a^b}f=\inf\{U(f,P):P\text{ is a partition of }[a,b]\}$$</p> <p>$$\underline{\int_a^b}f=\sup\{L(f,P):P\text{ is a partition of }[a,b]\}$$</p> <p>To prove a function is <strong>not</strong> Riemann (equiv. Darboux) integrable, you can show those last numbers differ. But you have to try <strong>many</strong> partitions to see what is really going on, since the supremum and infimum are taken when $P$ varies throughout <strong>all possible</strong> partitions of $[a,b]$. In particular, $M_k$ and $m_k$ will usually vary for different partitions, as the comments show.</p> <hr> <p>I hope you can see that $$\underline{\int_{-2}^3} f=0$$</p> <p>Now you have to prove that $$\overline{\int_{-2}^3} f$$ is bounded away from zero, and you'll have proven the integral cannot exist.</p> <p>To prove both assertions, use that both the irrationals and rationals are dense in $\Bbb R$. For each partition $P=\{-2=x_0,x_1,\dots,x_{n-1},x_n=3\}$, can you see why </p> <p>$$\inf_{[x_{k-1},x_k]}f(x)=0$$</p> <p>for any interval in the partition, for example?</p> <p>On the other hand, what is the minimum value $2|x|+1$ takes on $[-2,3]$? What does this tell you, plus the density of $\Bbb Q$ on $\Bbb R$ about $$\overline{\int_{-2}^3} f\text{ ? }$$</p>
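<p>For uniform partitions of $[-2,3]$ this can be made concrete: on every subinterval the infimum of $f$ is $0$ (irrationals are dense), while the supremum equals that of $2|x|+1$ there (rationals are dense), attained at an endpoint since that function is monotone on each side of $0$. A small sketch:</p>

```python
def darboux_sums(n):
    # uniform partition of [-2, 3] into n subintervals
    a, b = -2.0, 3.0
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    # sup of f on each piece = sup of 2|x| + 1 there (rationals are dense);
    # inf of f on each piece = 0 (irrationals are dense)
    upper = sum((xs[k] - xs[k - 1]) * (2 * max(abs(xs[k - 1]), abs(xs[k])) + 1)
                for k in range(1, n + 1))
    return upper, 0.0

sums = [darboux_sums(n) for n in (10, 100, 1000)]
# upper sums approach the integral of 2|x| + 1, which is 18; lower sums stay 0
```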
387,295
<p>I need to find $$\underset{n \to \infty}{\lim} \underset{x\in [0,1]}{\sup} \left| \frac{x+x^{2}}{1+n+x} \right|.$$ How to show that supremum will be at the point $x=1$?</p>
davidlowryduda
9,754
<p>It's positive, so you don't need absolute values.</p> <p>Then you could take a derivative, set it equal to zero to find your critical points, etc. This answers your question of deciding for which $x$ it attains its max.</p> <p>On the other hand, you could be generous and say the numerator is at most $2$, and the denominator is at least $1+n$. This is easier and sufficient to solve the supremum problem.</p>
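<p>Both observations are easy to check numerically (a quick sketch): the function is increasing on $[0,1]$, so the supremum sits at $x=1$, where it equals $\frac{2}{2+n}\le\frac{2}{1+n}\to 0$.</p>

```python
results = []
for n in (1, 10, 100):
    grid = [k / 1000 for k in range(1001)]                 # x in [0, 1]
    values = [(x + x * x) / (1 + n + x) for x in grid]
    m = max(values)
    # the function is increasing on [0, 1], so the sup sits at x = 1 ...
    assert m == values[-1]
    # ... and the crude bound 2 / (1 + n) dominates it
    assert m <= 2 / (1 + n)
    results.append(m)
```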
3,578,191
<p>Without tables or a calculator, find the value of <span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span>.</p> <p>I do not understand how the positive/negative signs are obtained as shown in the book; is there a formula for expanding these kind of things (what kind of expression is it, by the way?)?</p> <p><a href="https://i.stack.imgur.com/TZjZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZjZo.png" alt="enter image description here"></a></p> <p>This is my solution:</p> <p><span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span></p> <p><span class="math-container">$= \displaystyle\frac{[(\sqrt5+2)^3+(\sqrt5-2)^3][(\sqrt5+2)^3-(\sqrt5-2)^3]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{(\sqrt5+2+\sqrt5-2)[(\sqrt5+2)^2\color{red}{+}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2](\sqrt5+2-\sqrt5+2)[(\sqrt5+2)^2\color{red}{-}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{[2\sqrt5(5+4\sqrt5+4+\color{red}{5-4}+5-4\sqrt5+4][4(5+4\sqrt5+4\color{red}{-(5-4)}+(5-4\sqrt5+4)]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{2584\sqrt5}{8\sqrt5}$</span></p> <p><span class="math-container">$=323$</span></p> <p>Because of the multiplication, I still got the same answer as given in the book. However, is the book or I correct in terms of the positive/negative signs(in red)?</p>
Quanto
686,284
<p>Alternatively, let <span class="math-container">$a=9+4\sqrt5$</span> and <span class="math-container">$b=9-4\sqrt5$</span>. Then, <span class="math-container">$a+b=18$</span>, <span class="math-container">$ab=1$</span> and,</p> <p><span class="math-container">$$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5} =\frac{a^3 - b^3}{8\sqrt5}=\frac{(a-b)(a^2 +ab+b^2)}{8\sqrt5}$$</span> <span class="math-container">$$=\frac{\sqrt{(a+b)^2-4ab}\,[(a+b)^2 -ab]}{8\sqrt5} =\frac{\sqrt{18^2-4}\,(18^2 -1)}{8\sqrt5}=323$$</span></p>
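<p>A floating-point check of the final value and of the quantities $a+b=18$, $ab=1$ used above (a quick sketch):</p>

```python
import math

r5 = math.sqrt(5)
value = ((r5 + 2) ** 6 - (r5 - 2) ** 6) / (8 * r5)
a, b = 9 + 4 * r5, 9 - 4 * r5   # (sqrt5 + 2)^2 and (sqrt5 - 2)^2
```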
3,631,903
<p>A Calculus A level trigonometry problem:</p> <blockquote> <p>Solve <span class="math-container">$\tan x = \dfrac{p}{q}$</span> where <span class="math-container">$p,q\in\mathbb{Z}$</span> such that <span class="math-container">$$3\cos x\ - 4\sin x = -5$$</span></p> </blockquote> <p>I tried moving terms to one side, but that doesn't help much.</p> <p>Any ideas?</p>
19aksh
668,124
<p>We have <span class="math-container">$|x-a| = \begin{cases}x-a,&amp; \text{if } x\ge a \\ -(x-a), &amp; \text{if } x&lt; a\end{cases}$</span></p> <p>So, <span class="math-container">$f(x) = \begin{cases}-(x+1) +x-3(x-1)+2(x-2)-(x+2) &amp; \text{for } x &lt;-1 \\ (x+1) +x-3(x-1)+2(x-2)-(x+2) &amp;\text{for }-1\le x &lt;0 \\ (x+1) -x -3(x-1)+2(x-2)-(x+2) &amp; \text{for } 0\le x&lt;1 \\ (x+1)-x+3(x-1)+2(x-2)-(x+2) &amp;\text{for }1\le x&lt;2 \\ (x+1)-x+3(x-1)-2(x-2)-(x+2) &amp; \text{for } x\ge2\end{cases}, $</span></p> <p>Simplify these and equate to <span class="math-container">$0$</span> to get the values of <span class="math-container">$x$</span></p>
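<p>The original question is not reproduced above, but the sign pattern of the five cases is consistent with $f(x)=|x+1|-|x|+3|x-1|-2|x-2|-(x+2)$; that reconstruction is an inference, not taken from the source. Under that assumption, the case split can be verified pointwise (a quick sketch):</p>

```python
def f_abs(x):
    # assumed original function, inferred from the sign flips in the case table
    return abs(x + 1) - abs(x) + 3 * abs(x - 1) - 2 * abs(x - 2) - (x + 2)

def f_cases(x):
    if x < -1:
        return -(x + 1) + x - 3 * (x - 1) + 2 * (x - 2) - (x + 2)
    if x < 0:
        return (x + 1) + x - 3 * (x - 1) + 2 * (x - 2) - (x + 2)
    if x < 1:
        return (x + 1) - x - 3 * (x - 1) + 2 * (x - 2) - (x + 2)
    if x < 2:
        return (x + 1) - x + 3 * (x - 1) + 2 * (x - 2) - (x + 2)
    return (x + 1) - x + 3 * (x - 1) - 2 * (x - 2) - (x + 2)

samples = [-3, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 3]
ok = all(f_abs(x) == f_cases(x) for x in samples)
```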
1,715,265
<p>I've tried a method similar to showing that $\mathbb{Q}(\sqrt2, \sqrt3)$ is a primitive field extension, but the cube root of 2 just makes it a nightmare.</p> <p>Thanks in advance </p>
Paramanand Singh
72,031
<p>Try to express both $\sqrt{2}$ and $\sqrt[3]{2}$ as rational functions of $a = \sqrt{2}+\sqrt[3]{2}$. The job is simple and easily done via equation $$(a -\sqrt{2})^{3}=2\tag{1}$$ so that $$a^{3}-3\sqrt{2}a^{2}+6a-2\sqrt{2}=2$$ or $$\sqrt{2}=\frac{a^{3}+6a-2}{3a^{2}+2}\tag{2}$$ and we have $$\sqrt[3]{2}=a-\sqrt{2}$$ and using equation $(2)$ we can replace $\sqrt{2}$ by a rational function of $a$, so that $\sqrt[3]{2}$ is also a rational function of $a$. It thus follows that $\mathbb{Q}(a)=\mathbb{Q}(\sqrt{2},\sqrt[3]{2})$.</p>
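<p>Formula $(2)$ is easy to sanity-check in floating point (a quick sketch):</p>

```python
a = 2 ** 0.5 + 2 ** (1 / 3)                        # sqrt(2) + cbrt(2)
sqrt2 = (a ** 3 + 6 * a - 2) / (3 * a ** 2 + 2)    # formula (2)
cbrt2 = a - sqrt2                                  # then cbrt(2) = a - sqrt(2)
```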
3,360,396
<p>In trying to answer <a href="https://math.stackexchange.com/q/1854193/104041">this question</a> on MSE, I got stuck. This taunts me because I think I should be able to do it.</p> <h2>The Question:</h2> <blockquote> <p>Let <span class="math-container">$\phi : G\twoheadrightarrow H$</span> be an epimorphism of groups. Suppose <span class="math-container">$W$</span> and <span class="math-container">$K$</span> are conjugate in <span class="math-container">$H$</span>. Show that <span class="math-container">$\phi^{-1}(W)$</span> and <span class="math-container">$\phi^{-1}(K)$</span> are conjugate in <span class="math-container">$G$</span>.</p> </blockquote> <h2>My Attempt:</h2> <p><a href="https://math.stackexchange.com/questions/1854193/pre-image-of-conjugate-subgroups?r=SearchResults#comment3795230_1854193"><em>Following @DerekHolt's comment . . .</em></a></p> <blockquote> <p>It is true and straightforward <span class="math-container">$g\phi^{-1}(W)g^{\color{red}{-1}} = \phi^{-1}(K)$</span>. Just do the normal thing and show that any element of the LHS is in the RHS and vice versa.</p> </blockquote> <p><em>So here goes . . .</em></p> <p>Let <span class="math-container">$h\in g\phi^{-1}(W)g^{-1}$</span> for some <span class="math-container">$g\in G$</span>. Then there exists a <span class="math-container">$w\in W$</span> for which <span class="math-container">$h=g\phi^{-1}(w)g^{-1}$</span>. But since <span class="math-container">$gWg^{-1}=K$</span>, there exists a <span class="math-container">$k\in K$</span> such that . . . I don't know.</p> <p>It's not immediately obvious what I should do, even given the hint. I think I did something simple wrong already.</p> <hr> <p><em>NB:</em> It's 00:41 here . . . now. That's my excuse.</p>
diracdeltafunk
19,006
<p>Small note: in your attempted proof you have <span class="math-container">$g \in G$</span> and <span class="math-container">$W \subseteq H$</span>, so <span class="math-container">$gWg^{-1}$</span> isn't (a priori) a well-defined thing. Perhaps trying to work with this object made things seem more confusing than they are.</p> <p>Let's make sure to use both of the assumptions: that <span class="math-container">$\phi$</span> is surjective (equivalent to epimorphic in the category of groups), and that <span class="math-container">$W$</span> and <span class="math-container">$K$</span> are conjugate in <span class="math-container">$H$</span>. So, let <span class="math-container">$hWh^{-1} = K$</span> for some <span class="math-container">$h \in H$</span> and let <span class="math-container">$g \in G$</span> such that <span class="math-container">$\phi(g) = h$</span>. We claim that <span class="math-container">$g\phi^{-1}(W)g^{-1} = \phi^{-1}(K)$</span>. First, let <span class="math-container">$x \in g\phi^{-1}(W)g^{-1}$</span> be arbitrary, and write <span class="math-container">$x = g a g^{-1}$</span> for some <span class="math-container">$a \in \phi^{-1}(W)$</span>. Then <span class="math-container">$\phi(a) \in W$</span>, so <span class="math-container">$$\phi(x) = \phi(g) \phi(a) \phi(g^{-1}) = h \phi(a) h^{-1} \in h W h^{-1} = K.$$</span> In other words, <span class="math-container">$x \in \phi^{-1}(K)$</span>, so (since <span class="math-container">$x$</span> was arbitrary) <span class="math-container">$g \phi^{-1}(W) g^{-1} \subseteq \phi^{-1}(K)$</span>. Conversely, let <span class="math-container">$y \in \phi^{-1}(K)$</span> be arbitrary. Then <span class="math-container">$$\phi(g^{-1}yg) = h^{-1} \phi(y) h \in h^{-1} K h = h^{-1} h W h^{-1} h = W,$$</span> so <span class="math-container">$g^{-1} y g \in \phi^{-1}(W)$</span>. Thus, <span class="math-container">$y \in g\phi^{-1}(W)g^{-1}$</span>. 
Since <span class="math-container">$y$</span> was arbitrary, <span class="math-container">$\phi^{-1}(K) \subseteq g\phi^{-1}(W)g^{-1}$</span>.</p>
1,329,078
<p>I am having problems in classifying the differential equation $y''=y(x^2)$ in categories like homogeneous, exact, bernoulli, separable and non-exact so I could have the general solution. </p> <p>Or would someone help me find the solution </p>
André Nicolas
6,312
<p>We count the non-empty subsets of $\{1,2,3,\dots,n\}$. There are $2^n-1$ of them.</p> <p>There are $2^0$ subsets with biggest element $1$, $2^1$ with biggest element $2$, $2^2$ with biggest element $3$, and so on up to $2^{n-1}$ with biggest element $n$. Add up.</p>
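<p>The count by largest element can be brute-forced for a small $n$ (a quick sketch): there are $2^{m-1}$ non-empty subsets with biggest element $m$, and they sum to $2^n-1$.</p>

```python
from itertools import combinations

n = 6
by_max = [0] * (n + 1)      # by_max[m] counts non-empty subsets with maximum m
for r in range(1, n + 1):
    for s in combinations(range(1, n + 1), r):
        by_max[max(s)] += 1
total = sum(by_max)
```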
1,821,582
<blockquote> <p>Find all solutions of $$\{x^3\}+[x^4]=1$$ where $[x]=\lfloor x\rfloor$</p> </blockquote> <p>$$$$</p> <p>I know that $0\le\{x^3\}&lt;1\Rightarrow 0&lt;[x^4]\le 1$. Thus $[x^4]=1$. I couldn't get any further though since I'm having trouble with $x^4$ in the term $[x^4]$. $$$$As an example, in another question, I was given the expression $$[2x]-[x+1]$$ In this, to 'convert' the terms inside the floor function, I split the values of $x$ into 2 cases: $x=[x]+\{x\}, \{x\}\in[0,0.5)$ and $x=[x]+\{x\}, \{x\}\in[0.5,1)$. $$$$In this way, I was able to split the value of $[2x]$ into $2[x]$ and $2[x]+1$ respectively. Hence the expressions within the floor function became easier to deal with as the expression $[2x]-[x+1]$ was converted into $2[x]-[x]-1$ and $2[x]+1-[x]-1$ respectively.$$$$ Is there any way to do something similar with $x^4$ in $[x^4]$ so that the $x^3$ inside $\{x^3\}$ and $x^4$ inside $[x^4]$ are converted into the 'same kind'? If this isn't possible, is there any other way to solve this problem, and other problems in which the expressions within the floor/fractional part function, or the floor/fractional part functions themselves are raised to powers? $$$$Many thanks in anticipation.</p>
JasonM
343,478
<p>$\{x^3\}=0$ and $\lfloor x^4 \rfloor=1$ implies $$x \in \{\sqrt[3]{n} | n \in \mathbb{Z} \} \cap \left([1, \sqrt[4]{2}) \cup (-\sqrt[4]{2}, -1]\right)=\{\pm 1\}$$, so $x=\pm 1$</p>
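<p>A quick numerical scan over cube roots of integers confirms that $\pm 1$ are the only solutions in a reasonable range (a rough sketch; the floor and fractional part are evaluated in floating point):</p>

```python
import math

def lhs(x):
    return (x ** 3 - math.floor(x ** 3)) + math.floor(x ** 4)   # {x^3} + [x^4]

solutions = []
for n in range(-20, 21):
    if n == 0:
        continue
    x = math.copysign(abs(n) ** (1 / 3), n)   # candidates with {x^3} = 0
    if abs(lhs(x) - 1) < 1e-9:
        solutions.append(x)
```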
167,946
<p>I seek to replace derivatives like <code>D[u[x, y], x, x]</code>, which are evaluated as $u^{(2,0)}[x,y]$, by variables with names like <code>uxx</code>. Derivatives that I work with are denoted by their "order-vector"; for example <code>{2,0}</code> is the vector for this particular derivative and <code>{1,1}</code> is for <code>uxy</code>. I have a list of different derivative vectors and I want to create a rule for replacement which is comprised of elements like $u^{(2,0)}[x,y]\rightarrow uxx$. But for that, I need to transform {2,0} into (2,0), which Mathematica doesn't want to do:</p> <pre><code>dsT={1,0,0,0}; ToExpression[StringReplace[ToString[dsT], {"}" -&gt; ")", "{" -&gt; "("}]] </code></pre> <p>This returns an error, though without <code>ToExpression</code> the output is exactly what I need: "(1, 0, 0, 0)". How can I do this conversion for any length of derivative vector?</p>
Vsevolod A.
51,665
<p>Turned out to be:</p> <pre><code>dsT = {1, 0, 0, 1, 0}; vars = Table[ToExpression["x" &lt;&gt; ToString[i]], {i, 1, Length[dsT]}]; Derivative[Sequence @@ dsT][u][Sequence @@ vars] </code></pre> <p>(In the notebook this displays in two-dimensional box form as the superscript expression <span class="math-container">$u^{(1,0,0,1,0)}[x_1,\dots,x_5]$</span>.)</p>
3,597,301
<p>We know that formula of finding mode of grouped data is</p> <p>Mode = <span class="math-container">$l+\frac{(f_1-f_0)}{(2f_1-f_0-f_2)}\cdot h$</span></p> <p>Where, <span class="math-container">$f_0$</span> is frequency of the class preceding the modal class and <span class="math-container">$f_2$</span> is frequency of the class succeeding the modal class. But how to calculate mode when there is no class preceding or succeeding the modal class.</p>
ANANT
905,810
<p>We can take their value as $0$. The frequency of the class succeeding the modal class is taken as $0$ if the modal class is the last one, and likewise the frequency of the preceding class is taken as $0$ if the modal class is the first.</p> <p>You can also check it from the equation, where</p> <p><span class="math-container">$l =$</span> lower limit of the modal class,</p> <p><span class="math-container">$h =$</span> size of the class interval (assuming all class sizes to be equal),</p> <p><span class="math-container">$f_1 =$</span> frequency of the modal class,</p> <p><span class="math-container">$f_0 =$</span> frequency of the class preceding the modal class,</p> <p><span class="math-container">$f_2 =$</span> frequency of the class succeeding the modal class.</p> <p>Even if <span class="math-container">$f_2$</span> is <span class="math-container">$0$</span>, the mode can be easily found by using the above expression.</p>
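<p>For illustration (with made-up frequencies, purely hypothetical): set $f_2=0$ when the modal class is the last class, and $f_0=0$ when it is the first.</p>

```python
def grouped_mode(l, h, f1, f0=0, f2=0):
    # mode = l + (f1 - f0) / (2*f1 - f0 - f2) * h
    return l + (f1 - f0) / (2 * f1 - f0 - f2) * h

# hypothetical classes 0-10, 10-20, 20-30 with frequencies 8, 10, 12:
# the modal class 20-30 is the last one, so f2 is taken as 0
mode_last = grouped_mode(l=20, h=10, f1=12, f0=10, f2=0)

# if instead the first class were modal (say frequencies 9, 4, ...), f0 = 0
mode_first = grouped_mode(l=0, h=10, f1=9, f0=0, f2=4)
```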
1,902,455
<p>$x=e^t$, $y=te^{-t}$</p> <p>$\frac{dy}{dx}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>$\frac{d^2y}{dx^2}= \frac{\frac{dy}{dx}}{\frac{dx}{dt}}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>I don't know why it's giving me this trouble. I entered these answers into my homework and it said it could not understand my answers and they could not be graded. Also, it asked for the interval over which the curve is concave upward, and if I'm not mistaken from my work it would be from $0$ to infinity.</p>
Claude Leibovici
82,404
<p><em>A small trick without any integration.</em></p> <p>Because of the square in denominator, you can assume that $$\int \frac{1-y^2}{(1+y^2)^2} dy = \frac{P_n(y)}{1+y^2}$$ where $P_n(y)$ is a polynomial of degree $n$.</p> <p>Differentiate both sides to get $$\frac{1-y^2}{(1+y^2)^2}=\frac{\left(y^2+1\right) P_n'(y)-2 y P_n(y)}{\left(1+y^2\right)^2}$$ In the rhs, the degree of the numerator is $2+(n-1)=1+n$; since it is $2$ in the lhs, then $n=1$. </p> <p>So, now, we know that $P_1(y)=a+b y$; so $$\frac{1-y^2}{(1+y^2)^2}=\frac{\left(y^2+1\right) b-2 y (a+by)}{\left(1+y^2\right)^2}=\frac{b-2ay-by^2}{\left(1+y^2\right)^2}$$ Now, identify to get $a=0$ and $b=1$.</p>
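<p>With $a=0$, $b=1$ the claim is that $\frac{d}{dy}\frac{y}{1+y^2}=\frac{1-y^2}{(1+y^2)^2}$, which a central-difference check confirms (a quick sketch):</p>

```python
def antiderivative(y):
    return y / (1 + y * y)

def integrand(y):
    return (1 - y * y) / (1 + y * y) ** 2

# compare a symmetric difference quotient of the antiderivative with the integrand
h = 1e-6
errors = [abs((antiderivative(y + h) - antiderivative(y - h)) / (2 * h) - integrand(y))
          for y in (-2.0, -0.5, 0.0, 1.0, 3.0)]
```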
2,913,974
<p>In an additive category, we say that an object $A$ is compact if the functor $\text{Hom}(A, -)$ respects coproducts. That is, if the canonical morphism $$ \coprod_{i} \text{Hom} \left( A, X_{i} \right) \longrightarrow \text{Hom} \left( A, \coprod_{i} X_{i} \right) $$ is a bijection. Suppose $A \oplus B$ is compact. Why are the summands $A$ and $B$ compact? Every source claims this is obvious and provides no justification, but I cannot see why it is true. </p>
Fabio Lucchini
54,738
<p>You have a split exact sequence $0\to A\to A\oplus B\to B\to 0$ which produces the commutative diagram with split exact rows $$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \coprod_{i} \text{Hom} \left( A, X_{i} \right) @&gt;&gt;&gt; \coprod_{i} \text{Hom} \left( A\oplus B, X_{i} \right) @&gt;&gt;&gt; \coprod_{i} \text{Hom} \left( B, X_{i} \right) @&gt;&gt;&gt; 0 \\ @. @VVV @VVV @VVV \\ 0 @&gt;&gt;&gt; \text{Hom} \left( A, \coprod_{i} X_{i} \right) @&gt;&gt;&gt; \text{Hom} \left( A\oplus B, \coprod_{i} X_{i} \right) @&gt;&gt;&gt; \text{Hom} \left( B, \coprod_{i} X_{i} \right) @&gt;&gt;&gt; 0 \end{CD}$$ where the middle vertical row is an isomorphism by assumption. Consequently, the left vertical arrow is monic, while the right vertical arrow is epic. By exchanging the roles of $A$ and $B$, we prove that these arrows are, in fact, isomorphisms.</p>
2,435
<p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p> <p>Linting is an excellent way to point developers to probable errors they might have overlooked. With a dynamic language like Mathematica's, we are a bit restricted in what we can do, since we cannot evaluate code, and most things require evaluation to be sure whether they are a bug or not. Nevertheless, there are checks we can do. For instance <code>If[a=b, ..]</code> is most likely a bug, and even if the developer knew what they were doing, it is bad style.</p> <p>There are trickier examples like <code>If[a&lt;5,...]</code>. This looks okay, but since <code>a&lt;5</code> stays unevaluated if the comparison cannot be done, it is a source of error: you end up with the unevaluated <code>If</code> expression in your wrong result, and debugging might be complicated.</p> <p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue, and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch. Other common sources of error are, e.g., <code>x_?testFunc[#]&amp;</code> or implicit multiplication through linebreaks.</p> <p><strong>Question:</strong> What are common bugs in your code, and could they have been pointed out by a linter? If you would like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and will see if I can implement some of them in IntelliJ.</p> <hr> <p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. 
As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p> <pre><code>PackageScope["myFunc"] PackageExport["MyExportedFunc"] </code></pre> <p>I implemented the following rules</p> <ol> <li>They need to be on their own source line with nothing else on it</li> <li>Their string argument must be a valid identifier</li> </ol> <p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
Szabolcs
12
<h2>Non-ASCII characters</h2> <p>Add an inspection to warn about non-ASCII characters that appear anywhere else except in comments.</p> <p>The encoding that Mathematica will assume when loading a file using <code>Get</code> is not predictable and typically differs between operating systems. E.g., on macOS/Linux it might assume UTF-8 while on Windows it may assume Windows-1252. If the source file contains anything but ASCII, it might be interpreted differently on different platforms.</p> <p>Such characters will typically appear in strings such as usage messages. I personally encountered this:</p> <p><a href="https://github.com/szhorvat/IGraphM/issues/44" rel="nofollow noreferrer">https://github.com/szhorvat/IGraphM/issues/44</a></p> <p>I used non-ASCII characters in my usage messages when I edited the source files on macOS and on Windows they appeared as Chinese characters.</p> <p><em>Note:</em> I would exempt comments from this inspection because misinterpretation won't usually cause problems there. In principle it could, in particular if it affects the closing <code>*)</code> (note that in my example above an extra ASCII letter was corrupted), but usually it won't. I expect many people will want to write comments in their native language.</p> <p><strong>Comment halirutan:</strong> I've implemented this and created a mapping to all known named characters so that I can provide a QuickFix which is able to replace the Unicode character automatically.</p> <p><a href="https://i.stack.imgur.com/by8R8.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/by8R8.gif" alt="enter image description here"></a></p>
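<p>The core of such an inspection is small. Here is a hedged Python sketch (not the plugin's actual implementation, and it deliberately ignores WL's nested comments) that reports non-ASCII characters outside <code>(* ... *)</code> comments:</p>

```python
import re

def non_ascii_outside_comments(source):
    """Return (line, column, char) triples for non-ASCII characters
    that occur outside Mathematica (* ... *) comments.
    Caveat: this simple regex does not handle nested comments, which
    WL allows -- it is only an illustration, not a real parser."""
    # Blank out comment bodies, preserving newlines so that the
    # reported line/column numbers stay valid.
    blanked = re.sub(r"\(\*.*?\*\)",
                     lambda m: re.sub(r"[^\n]", " ", m.group()),
                     source, flags=re.S)
    hits = []
    for lineno, line in enumerate(blanked.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) > 127:
                hits.append((lineno, col, ch))
    return hits
```

<p>On a file whose only non-ASCII character outside comments sits in a usage message, the function reports exactly that one occurrence.</p>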
2,435
Szabolcs
12
<blockquote> <h1>Status Completed</h1> </blockquote> <p>I'll convert my comment as this came up again:</p> <p>This is already handled:</p> <pre><code>_?foo[#]&amp; </code></pre> <p>But with the many operator forms, now this is a common mistake:</p> <pre><code>_?foo[12] </code></pre> <p>For example,</p> <pre><code>f[x_?GreaterEqualThan[5]] := Sqrt[x-5] </code></pre> <p>IDEA should warn about this.</p> <p><strong>Comment halirutan:</strong> Very good catch! I fixed this and it should now also report such operator forms</p> <p><img src="https://i.imgur.com/IMI6rmW.png" alt="img"></p> <p>Will be available in WL Plugin <strong>version 2019.1.2</strong></p>
2,191,360
<blockquote> <p>Show that $$ f(x,y)= \begin{cases} \dfrac{xy^2}{x^2+y^4} &amp; (x,y) ≠ (0,0) \\ 0 &amp; (x,y) = (0,0) \end{cases}$$ is bounded.</p> </blockquote> <p>I thought about splitting it up into different cases like $x&lt;y$ but it turned out to be too many and I could not cover all of them. As a hint I got the idea to use the arithmetic geometric inequality. I hope someone can help me. </p>
farruhota
425,072
<p>It can be estimated as $-1\leq\frac{xy^2}{x^2+y^4}\leq1$:</p> <p>LHS: $-1\leq\frac{xy^2}{x^2+y^4} \iff y^4+xy^2+x^2\geq0$. Viewed as a quadratic in $y^2$, its discriminant is $D=x^2-4x^2=-3x^2&lt;0$ for $x\neq0$, so the quadratic has no real roots and is positive; for $x=0$ the expression reduces to $y^4\geq0$.</p> <p>RHS: $\frac{xy^2}{x^2+y^4}\leq1 \iff y^4-xy^2+x^2\geq0$, with the same discriminant $D=-3x^2&lt;0$ for $x\neq0$, and again $y^4\geq0$ when $x=0$.</p> <p>In fact, the AM–GM inequality from the hint gives a sharper constant: $x^2+y^4\geq2|x|y^2$, so $\left|\frac{xy^2}{x^2+y^4}\right|\leq\frac12$.</p>
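<p>A quick numeric sanity check of the bound (the question's AM–GM hint in fact gives the sharper constant $\tfrac12$, since $x^2+y^4\ge 2|x|y^2$ with equality along $x=y^2$); the grid resolution here is arbitrary:</p>

```python
def f(x, y):
    # the function from the question, with f(0, 0) = 0
    if (x, y) == (0, 0):
        return 0.0
    return x * y**2 / (x**2 + y**4)

# sample on a uniform grid; AM-GM predicts |f| <= 1/2, attained at x = y^2
vals = [f(i / 10.0, j / 10.0) for i in range(-100, 101) for j in range(-100, 101)]
m = max(abs(v) for v in vals)
```

<p>The sampled maximum is exactly $\tfrac12$, hit at the grid point $(1,1)$.</p>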
1,336,506
<p>We know that the usual $\leq$ is a partial order relation on the group of integers $\mathbb Z$ and $\mathbb Z$ is a totally ordered with this partial order relation. Is there any other partially order relation exist in $\mathbb Z$ which makes $\mathbb Z$ a partially ordered group (or totally ordered group)? </p>
Rajesh
54,310
<p>Take $P = \{x\in\mathbb Z: x = 0 \text{ or }x\geq 2 \}.$ Then $G = (\mathbb Z, P)$ is a partially ordered group with positive cone $P.$ In this case, the element $3\wedge4$ does not exist in $G.$ Indeed, if $3\wedge 4 = 3,$ then $3\leq 4,$ so $4 - 3\in P$; but $4 - 3 = 1\notin P.$ This implies $G$ is not order-isomorphic to $\mathbb Z$ with its usual order.</p>
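<p>A mechanical check of the cone properties on a finite window (the variable names are mine, not the answer's):</p>

```python
def in_P(x):
    # the positive cone from the answer: P = {0} ∪ {x : x >= 2}
    return x == 0 or x >= 2

window = range(-50, 51)  # a finite window; the general case is just as easy

# P + P ⊆ P: sums of cone elements stay in the cone
closed_under_addition = all(
    in_P(a + b) for a in window for b in window if in_P(a) and in_P(b))

# P ∩ (-P) = {0}: the induced order is antisymmetric
antisymmetric = all(not (in_P(x) and in_P(-x)) for x in window if x != 0)

# the obstruction used in the answer: 3 <= 4 fails in G, since 4 - 3 = 1 ∉ P
three_leq_four = in_P(4 - 3)
```
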
4,385,908
<p>For an ideal <span class="math-container">$I$</span> in <span class="math-container">$A = \mathbb{C}[x, y, z]$</span> set <span class="math-container">$$Z_{xy}(I) = \{(a, b) \in \mathbb{C}^2: f(a, b, z) = 0\text{ for all }f \in I\text{ and all }z \in \mathbb{C}\}.$$</span></p> <p>Let <span class="math-container">$$J = \{f(x, y): f(a, b) = 0\text{ for all }(a, b) \in Z_{xy}(I)\}.$$</span></p> <p>Prove <span class="math-container">$$Z_{xy}(I) \times \mathbb{C} = Z(I) = \{(a, b, c) \in \mathbb{C}^3: f(a, b, c) = 0\text{ for all }f \in I\} \iff \operatorname{rad}I = JA.$$</span></p> <p>I'm a bit confused about the definition of <span class="math-container">$J$</span>. I'm not sure where the <span class="math-container">$f$</span> is supposed to come from since in the other sets in this problem the function <span class="math-container">$f$</span> took on 3 arguments rather than 2.</p>
John Dawkins
189,130
<p>(1) <span class="math-container">$X_n$</span> needs to be <span class="math-container">$\mathcal F_n$</span>-measurable, meaning only that it must be constant on <span class="math-container">$\{n,n+1,\ldots\}$</span>.</p> <p>(2) you need to have <span class="math-container">$E[X_n\cdot 1_B]=E[Y\cdot 1_B]$</span> for each <span class="math-container">$B\in\mathcal F_n$</span>. For <span class="math-container">$B=\{n,n+1,\ldots\}$</span> this means the constant value of <span class="math-container">$X_n$</span> mentioned in (1) must be <span class="math-container">$$ b_n:={\sum_{k=n}^\infty 2k^{-2}\over \sum_{k=n}^\infty 2^{-k}}=2^n\sum_{k=n}^\infty k^{-2}. $$</span> For <span class="math-container">$B=\{k\}$</span> (where <span class="math-container">$k$</span> is an element of <span class="math-container">$\{1,2,\ldots,n-1\}$</span>) this means <span class="math-container">$X_n(k) = 2^{k+1}k^{-2}$</span>.</p>
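<p>The algebra behind the constant value $b_n$ checks out numerically (truncating both convergent series at a hypothetical cutoff $K$):</p>

```python
K = 2000  # hypothetical truncation point; both series converge fast

def b_ratio(n):
    # b_n written as the ratio of the two truncated sums in the answer
    return sum(2.0 * k**-2 for k in range(n, K)) / sum(2.0**-k for k in range(n, K))

def b_closed(n):
    # the simplified form 2^n * sum_{k >= n} k^{-2}, using that
    # sum_{k >= n} 2^{-k} = 2^{1-n}
    return 2.0**n * sum(k**-2 for k in range(n, K))
```
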
90,070
<h2>Question:</h2> <p>Let <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> be an orthogonal matrix and let <span class="math-container">$\varepsilon&gt;0$</span>. Then does there exist a rational orthogonal matrix <span class="math-container">$B\in\mathbb{R}^{n \times n}$</span> such that <span class="math-container">$\|A-B\|&lt;\varepsilon$</span>?</p> <h2>Definitions:</h2> <ul> <li>A matrix <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> is an <em>orthogonal matrix</em> if <span class="math-container">$A^T=A^{-1}$</span></li> <li>A matrix <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> is a <em>rational matrix</em> if every entry of it is rational.</li> </ul>
Qiaochu Yuan
290
<p>Sure. Consider matrices which fix $n-2$ of the standard basis vectors and describe a rotation in the plane spanned by the last two about an angle $\theta$ such that $\sin \theta, \cos \theta$ are both rational; these are dense in all such rotations, and all such rotations generate the orthogonal group, so the corresponding products (all of which are rational) are dense in the orthogonal group. </p>
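<p>A concrete instance: the Pythagorean triple $(3,4,5)$ gives $\cos\theta=\tfrac35$, $\sin\theta=\tfrac45$, and exact rational arithmetic confirms orthogonality (a sketch; the helper names are mine):</p>

```python
from fractions import Fraction as Fr

def givens_rational(n, i, j, c, s):
    """Rotation in the (i, j) coordinate plane of R^n with rational
    cos = c and sin = s (so c*c + s*s must equal 1), fixing all other
    standard basis vectors -- the matrices described in the answer."""
    assert c * c + s * s == 1
    M = [[Fr(int(r == k)) for k in range(n)] for r in range(n)]
    M[i][i], M[j][j] = c, c
    M[i][j], M[j][i] = -s, s
    return M

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# a rational rotation of R^3 in the span of e1, e2
B = givens_rational(3, 0, 1, Fr(3, 5), Fr(4, 5))
identity = [[Fr(int(r == c)) for c in range(3)] for r in range(3)]
```

<p>Since $\cos\theta,\sin\theta\in\mathbb Q$, every entry of $B$ is rational, and $B^TB=I$ holds exactly, not just to floating-point tolerance.</p>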
90,070
Denis Serre
8,799
<p>I should say <strong>yes</strong>. For this, I shall use the fact that in the unit sphere $\mathbb S^{d-1}$, the set of rational vectors is dense. I shall proceed by induction over $n$.</p> <p>So let $A\in {\bf O}_n(\mathbb R)$ be given. Let $\vec v_1$ be its first column, an element of ${\mathbb S}^{n-1}$. We can choose a rational unit vector $\vec w_1$ arbitrarily close to $\vec v_1$. The first step is to construct a rational orthogonal matrix $B$ with first column $\vec w_1$. To this end we choose inductively rational unit vectors $\vec w_2,\ldots,\vec w_n$. This is possible because at each step, we may take a rational unit vector in the unit sphere of a "rational" subspace. Here, a subspace $F$ is rational if it admits a rational basis.</p> <p>Now, let us form $A_1=B^{-1}A$. This is an orthogonal matrix, whose first column is arbitrarily close to $\vec e_1$. Hence its first line is close to $(1,0,\ldots,0)$ as well. Thus $$A_1\sim\begin{pmatrix} 1 &amp; 0^T \\\\ 0 &amp; R \end{pmatrix}.$$ The matrix $R$ is arbitrarily close to ${\bf O}_{n-1}({\mathbb R})$. By the induction hypothesis, there exists a rational orthogonal matrix $Q$ arbitrarily close to $R$. Then $$B\begin{pmatrix} 1 &amp; 0^T \\\\ 0 &amp; Q \end{pmatrix}$$ is arbitrarily close to $A$.</p>
457,977
<p>I am trying to use residues to compute $$\int_0^\infty\frac{\log x}{(1+x)^3}\,\operatorname d\!x.$$My first attempt involved trying to take a circular contour with the branch cut being the positive real axis, but this ended up cancelling off the term I wanted. I wasn't sure if there was another contour I should use. I also had someone suggest using the substitution $x=e^z$, so the integral becomes $$\int_{-\infty}^\infty\frac{ze^z}{(1+e^z)^3}\,\operatorname d\!z$$so that the poles are at the odd multiples of $i\pi$. I haven't actually worked this out, but it does not seem like the solution the author was looking for (this question comes from an old preliminary exam).</p> <p>Any suggestions on how to integrate?</p>
Felix Marin
85,343
<p><span class="math-container">\begin{align} &amp;\bbox[10px,#ffd]{\int_{0}^{\infty}{\ln\left(x\right) \over \left(1 + x\right)^{3}}\,{\rm d}x} = \int_{0}^{\pi/2} {\ln\left(\tan^{2}\left(x\right)\right) \over \left\lbrack 1 + \tan^{2}\left(x\right)\right\rbrack^{3} } \,2\tan\left(x\right)\sec^{2}\left(x\right)\,{\rm d}x \\[5mm] = &amp;\ 4\int_{0}^{\pi/2} \ln\left(\tan\left(x\right)\right) \tan\left(x\right)\cos^{4}\left(x\right)\,{\rm d}x = 4\int_{0}^{\pi/2} \ln\left(\tan\left(x\right)\right) \sin\left(x\right)\cos^{3}\left(x\right)\,{\rm d}x \\[5mm] = &amp;\ 4\int_{0}^{\pi/2} \ln\left(\sin\left(x\right)\right) \sin\left(x\right)\cos^{3}\left(x\right)\,{\rm d}x - 4\int_{0}^{\pi/2} \ln\left(\cos\left(x\right)\right) \sin\left(x\right)\cos^{3}\left(x\right)\,{\rm d}x \\[5mm] = &amp;\ 4\int_{0}^{1} x\left(1 - x^{2}\right)\ln\left(x\right)\,{\rm d}x + 4\int_{1}^{0}x^{3}\ln\left(x\right)\,{\rm d}x = 4\int_{0}^{1} \left(x - 2x^{3}\right)\ln\left(x\right)\,{\rm d}x \\[5mm] = &amp;\ 4\lim_{n \to 0}{{\rm d} \over {\rm d}n} \int_{0}^{1} \left(x^{n + 1} - 2x^{n + 3}\right)\,{\rm d}x = 4\lim_{n \to 0}{{\rm d} \over {\rm d}n} \left({1 \over n + 2} - {2 \over n + 4}\right) \\[5mm] = &amp;\ 4\lim_{n \to 0} \left\lbrack -\,{1 \over \left(n + 2\right)^{2}} + {2 \over \left(n + 4\right)^{2}} \right\rbrack = 4 \left(-\,{1 \over 4} + {1 \over 8}\right) = -\,{1 \over 2} \end{align}</span></p>
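<p>A crude numeric check of the closed form $-\tfrac12$ (midpoint rule after mapping $(0,\infty)$ to $(0,1)$ via $x=t/(1-t)$; the step count is arbitrary):</p>

```python
import math

# map (0, inf) to (0, 1) via x = t/(1-t), dx = dt/(1-t)^2; the midpoint
# rule avoids both endpoints, where log is singular or the map blows up
N = 200_000
h = 1.0 / N
total = 0.0
for k in range(N):
    t = (k + 0.5) * h
    x = t / (1.0 - t)
    total += math.log(x) / (1.0 + x) ** 3 / (1.0 - t) ** 2 * h
```
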
3,395,098
<p>I am trying to work out for what <span class="math-container">$\lambda_1, \lambda_2 &gt; 0$</span> is it true that <span class="math-container">$f(y) = \lambda_1 e^{y-\lambda_1 e^y} + \lambda_2 e^{y-\lambda_2 e^y}$</span> is unimodal?</p> <p>Experimentally it seems it is unimodal when <span class="math-container">$\lambda_1 &lt; \lambda_2$</span> and <span class="math-container">$\frac{\lambda_2}{\lambda{1}} &lt; 7.5$</span> .</p> <p>To work this out I started with:</p> <p><span class="math-container">$$\frac{d}{dy} \left(\lambda_1 e^{y-\lambda_1 e^y} + \lambda_2 e^{y-\lambda_2 e^y} \right) = \lambda_1 e^{y - \lambda_1 e^y} (1 - \lambda_1 e^y ) + \lambda_2 e^{y - \lambda_2 e^y} (1 - \lambda_2 e^y )$$</span></p> <p>It seems we then need to check when</p> <p><span class="math-container">$$\lambda_1 e^{y - \lambda_1 e^y} (1 - \lambda_1 e^y ) + \lambda_2 e^{y - \lambda_2 e^y} (1 - \lambda_2 e^y ) = 0$$</span></p> <p>has more than one solution when solved for <span class="math-container">$y \in \mathbb{R}$</span>. 
How can we determine the conditions under which it has different numbers of solutions?</p> <h1>Added:</h1> <p>Substituting <span class="math-container">$z = e^y$</span> and dividing by <span class="math-container">$e^{y-1}$</span> we are trying to determine how many solutions</p> <p><span class="math-container">$$ \lambda_1 e^{1-\lambda_1 z}(1-\lambda_1 z) +\lambda_2e^{1-\lambda_2 z}(1-\lambda_2 z) = 0 $$</span></p> <p>has with <span class="math-container">$z &gt; 0$</span>.</p> <h1>Examples:</h1> <p>Example <span class="math-container">$\lambda_1 = 1, \lambda_2 = 7$</span> with only one mode (code in python):</p> <pre><code>import matplotlib.pyplot as plt import numpy as np def pdf_func(y, params): return sum([lambd*np.exp(y - lambd * np.exp(y)) for lambd in params]) params = [1, 7] xs = np.linspace(-10,10,1000) plt.plot(xs, [pdf_func(y, params) for y in xs]) </code></pre> <p><a href="https://i.stack.imgur.com/iN2Yd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iN2Yd.png" alt="enter image description here" /></a></p> <p>Example <span class="math-container">$\lambda_1 = 1, \lambda_2 = 50$</span> with two modes:</p> <p><a href="https://i.stack.imgur.com/HFAnp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HFAnp.png" alt="enter image description here" /></a></p> <h1>Questions</h1> <ul> <li>How can one prove (assuming it is true) that that the number of local maxima that <span class="math-container">$f(y)$</span> has is either 1 or 2 and there are no other possibilities?</li> <li>Is it true that for <span class="math-container">$\lambda_2 &gt; \lambda_1 &gt; 0$</span>, there exists a threshold <span class="math-container">$c$</span> so that if <span class="math-container">$\frac{\lambda_2}{\lambda_1} &lt; c$</span> then <span class="math-container">$f(y)$</span> is unimodal and if not it has two local maxima? (My guess is that the answer is yes and this threshold is around <span class="math-container">$7.5$</span>.)</li> </ul>
Cesareo
397,348
<p>Consider </p> <p><span class="math-container">$$ f(z) = \lambda _1 z e^{-\lambda _1 z}+\lambda _2 z e^{-\lambda _2 z} $$</span></p> <p>with <span class="math-container">$z = e^y$</span></p> <p>and now</p> <p><span class="math-container">$$ f'(z) = -\lambda _1 e^{-\lambda _1 z} \left(\lambda _1 z-1\right)-\lambda _2 e^{-\lambda _2 z} \left(\lambda _2 z-1\right)=0 $$</span></p> <p>or</p> <p><span class="math-container">$$ \frac{\lambda_2^2}{\lambda_1^2}e^{(\lambda_1-\lambda_2)z}\cdot\frac{z-\frac{1}{\lambda_2}}{z-\frac{1}{\lambda_1}} = -1 $$</span></p> <p>here</p> <p><span class="math-container">$$ \frac{\lambda_2^2}{\lambda_1^2}e^{(\lambda_1-\lambda_2)z}\gt 0, $$</span></p> <p>so at any zero the ratio</p> <p><span class="math-container">$$ \frac{z-\frac{1}{\lambda_2}}{z-\frac{1}{\lambda_1}} $$</span></p> <p>must be negative, so</p> <p><span class="math-container">$$ \min_i\frac{1}{\lambda_i}\le z \le \max_i\frac{1}{\lambda_i} $$</span></p> <p>NOTE</p> <p>As an example, for <span class="math-container">$\lambda_1 = 1, \lambda_2 = 50$</span> we have the sign change plot</p> <p><a href="https://i.stack.imgur.com/nF2cB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nF2cB.jpg" alt="enter image description here"></a></p> <p>Now the solutions should be sought in the interval where this ratio is negative, which is <span class="math-container">$0.02\le z \le 1$</span>. The roots here are <span class="math-container">$z =\{0.0211011,0.113856,1\}$</span></p> <p>Now, writing <span class="math-container">$\lambda_2 = \zeta\lambda_1$</span>,</p> <p><span class="math-container">$f'(z,\zeta,\lambda_1)=\lambda _1 \zeta e^{-\lambda _1 z \zeta } \left(\lambda _1 z \zeta -1\right)+\lambda _1 e^{-\lambda _1 z} \left(\lambda _1 z-1\right)$</span>. 
</p> <p>or calling <span class="math-container">$y = \lambda_1 z, x = \zeta$</span></p> <p><span class="math-container">$$ g(x,y) = x(x y -1)e^{-y(x-1)}+y - 1 $$</span></p> <p>or also</p> <p><span class="math-container">$$ x y +W\left(\frac{(y-1)e^{1-y}}{x}\right) - 1 = 0 $$</span></p> <p>Here <span class="math-container">$W(\cdot)$</span> is the Lambert function.</p> <p>Follows the plot for <span class="math-container">$g(x,y) = 0$</span></p> <p><a href="https://i.stack.imgur.com/TcUbk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TcUbk.jpg" alt="enter image description here"></a></p> <p>As can be observed roughly for <span class="math-container">$0\le x \le 7.5$</span> we have one root and for <span class="math-container">$7.5 \lt x$</span> we have three roots.</p> <p>The determination of the passage from one root to three is made as follows. Solving numerically <span class="math-container">$g(x,y) = 0$</span> and <span class="math-container">$\frac{dg}{dy} = 0$</span> we obtain the point <span class="math-container">$(7.56628, 0.802977)$</span> (the black point as the intersection of <span class="math-container">$g(x,y) = 0$</span> in blue and <span class="math-container">$\frac{dg}{dy} = 0$</span> in green)</p> <p><a href="https://i.stack.imgur.com/nNxJt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nNxJt.jpg" alt="enter image description here"></a></p>
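<p>The mode counts in the question's two examples can be reproduced by counting sign changes of $f'(y)$ on a fine grid (a numeric sketch with ad hoc grid bounds and resolution; it assumes the critical points are simple and well separated, as the plots suggest):</p>

```python
import math

def fprime(y, lambdas):
    # derivative of f(y) = sum_i lambda_i * exp(y - lambda_i * e^y),
    # exactly as expanded in the question
    ey = math.exp(y)
    return sum(l * math.exp(y - l * ey) * (1.0 - l * ey) for l in lambdas)

def sign_changes(lambdas, lo=-12.0, hi=4.0, steps=160_000):
    # count strict sign changes of f' on a uniform grid; for simple,
    # well-separated roots this equals the number of critical points
    h = (hi - lo) / steps
    vals = [fprime(lo + k * h, lambdas) for k in range(steps + 1)]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0.0)
```

<p>For $\lambda=(1,7)$ this finds one critical point (unimodal), and for $\lambda=(1,50)$ three (two maxima and a minimum), matching the two plots.</p>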
996,052
<p>A disk of radius <span class="math-container">$5$</span> cm has density <span class="math-container">$10$</span> g/cm<span class="math-container">$^2$</span> at its center, density <span class="math-container">$0$</span> at its edge, and its density is a linear function of the distance from the center. Find the mass of the disk.</p> <p>my answer: <span class="math-container">$157.08$</span>g</p> <p><span class="math-container">$D=10-2x$</span></p> <p>Double integration of <span class="math-container">$D$</span> from <span class="math-container">$y=0$</span> to <span class="math-container">$y=2\pi$</span> and <span class="math-container">$x=0$</span> to <span class="math-container">$x = 5$</span></p>
Fahd Siddiqui
187,108
<p>Dimensional analysis tells me that is incorrect: $D$ has dimensions of $g/cm^2$, and integrating with $dx$ gives it dimensions of $g/cm$, not $g$ as required by the answer.</p> <p><strong>Solution</strong></p> <p>Consider an elemental annulus of the disc, of area $dA=2\pi x\,dx$.</p> <p>Since mass = density $\times$ area,</p> <p>$dm=D\,dA$</p> <p>$m=\int_0^5 (10-2x)\,2\pi x\,dx$</p> <p>$m=2\pi\int_0^5 (10x-2x^2)\,dx=2\pi\left[5x^2-\tfrac{2}{3}x^3\right]_0^5=\frac{250\pi}{3}\approx 261.8$ g</p>
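<p>A numeric check of the correct setup $m=\int_0^5 (10-2x)\,2\pi x\,dx$ (the closed form is $\tfrac{250\pi}{3}\approx 261.8$ g, which also confirms the questioner's $157.08$ g cannot be right):</p>

```python
import math

def density(x):
    # linear density profile: 10 g/cm^2 at the center, 0 at radius 5
    return 10.0 - 2.0 * x

# mass = ∫_0^5 density(x) * 2πx dx, by a simple midpoint rule
N = 100_000
h = 5.0 / N
mass = sum(density((k + 0.5) * h) * 2.0 * math.pi * ((k + 0.5) * h) * h
           for k in range(N))

exact = 250.0 * math.pi / 3.0  # 2π(5x² - 2x³/3) evaluated at x = 5
```
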
1,559,485
<p>Suppose $\sup_{x \in \mathbb{R}} f'(x) \le M$.</p> <p>I am trying to show that this is true if and only if $$\frac{f(x) - f(y)}{x - y} \le M$$</p> <p>for all $x, y \in \mathbb{R}$.</p> <p><strong>Proof</strong></p> <p>$\text{sup}_{x \in \mathbb{R}} f'(x) \le M$</p> <p>$f'(x) \le M$ for all $x \in \mathbb{R}$</p> <p>$\lim_{y \to x} \frac{f(y) - f(x)}{y - x} \le M$</p> <p>$\lim_{y \to x} \frac{f(x) - f(y)}{x - y} \le M$</p> <p>I can see geometrically why this property holds, but how do I get rid of the limit here? Or am I approaching it wrong in general?</p>
fleablood
280,126
<p>Well, first of all, we have to presume $f$ is differentiable on $\mathbb R$ (hence continuous). This statement isn't true otherwise.</p> <p>1) Suppose $\frac{f(x) - f(y)}{x - y} &gt; M$ for some $x, y \in \mathbb R$.</p> <p>By the mean value theorem, there exists a $c$ between $x$ and $y$ where $f'(c) = \frac{f(x) - f(y)}{x - y}$.</p> <p>So $f'(c) &gt; M$.</p> <p>So $\sup f'(x) \le M \implies f'(c) \le M$ for all $c \in \mathbb R \implies$ $\frac{f(x) - f(y)}{x - y} \le M$ for all $x, y \in \mathbb R$.</p> <p>2) Suppose $\frac{f(x) - f(y)}{x - y} \le M$ for all $x, y \in \mathbb R$.</p> <p>Then $\lim_{y \rightarrow x}\frac{f(x) - f(y)}{x - y} = f'(x) \le M$ for all $x \in \mathbb R$. So $\{f'(x) \mid x \in \mathbb R\}$ is bounded above by $M$, so $\sup_{x \in \mathbb{R}} f'(x) \le M$.</p>
434,290
<p>According to <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>, $$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$ what conditions make $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1,$$ so that equation (1) becomes </p> <p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}$$ The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation $$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2}$$ but I couldn't find the argument there; can you explain this a bit, please?</p>
DonAntonio
31,254
<p>And yet another way for you to enjoy. Define</p> <p>$$f(z):=\frac{\text{Log}\,z}{z^4+1}\;,\;\;C_R:=[-R,R]\cup\gamma_R:=\{z\in\Bbb C\;;\;z=Re^{it}\,,\,\,0&lt;t&lt;\pi\}\;,\;\;1&lt;R\in\Bbb R$$</p> <p>Now, the only poles within the region determined by $\,C_R\,$ are the simple (why? And note that $\,z=0\,$ is a branch point of the logarithm, not a pole of $f$) ones </p> <p>$$z_1=e^{\frac{\pi i}4}\;,\;\;z_2=e^{\frac{3\pi i}4}\implies$$</p> <p>$$\begin{align*}\text{Res}_{z=z_1}(f)&amp;=\lim_{z\to z_1}(z-z_1)f(z)\stackrel{\text{l'Hospital}}=\lim_{z\to z_1}\frac{\text{Log}\,z}{4z^3}=\frac{\pi i}{16e^{\frac{3\pi i}4}}=\frac{\pi }{16\sqrt2}\left(1-i\right)\\ \text{Res}_{z=z_2}(f)&amp;=\lim_{z\to z_2}(z-z_2)f(z)\stackrel{\text{l'Hospital}}=\lim_{z\to z_2}\frac{\text{Log}\,z}{4z^3}=\frac{3\pi i}{16e^{\frac{\pi i}4}}=\frac{3\pi }{16\sqrt2}\left(1+i\right)\end{align*}$$</p> <p>So by Cauchy's theorem we get:</p> <p>$$2\pi i\left(\frac{\pi}{16\sqrt2}\left(1-i\right)+\frac{3\pi}{16\sqrt2}\left(1+i\right)\right)=-\frac{\pi^2}{4\sqrt2}+\frac{\pi^2}{2\sqrt2}\,i=\oint\limits_{C_R}f(z)\,dz=\int\limits_{-R}^R\frac{\text{Log}\,x}{x^4+1}\,dx+\int\limits_{\gamma_R}f(z)\,dz$$</p> <p>And since</p> <p>$$\left|\;\int\limits_{\gamma_R}f(z)\,dz\;\right|\le\frac{\pi R\left(\ln R+\pi\right)}{R^4-1}\xrightarrow[R\to\infty]{}0$$</p> <p>we get, writing $\text{Log}\,x=\ln|x|+i\pi$ on the negative real axis,</p> <p>$$\int\limits_{-\infty}^\infty\frac{\text{Log}\,x}{x^4+1}\,dx=2\int\limits_0^\infty\frac{\ln x}{x^4+1}\,dx+i\pi\int\limits_0^\infty\frac{dx}{x^4+1}=-\frac{\pi^2}{4\sqrt2}+\frac{\pi^2}{2\sqrt2}\,i$$</p> <p>and comparing real and imaginary parts:</p> <p>$$\int\limits_0^\infty\frac{\ln x}{x^4+1}\,dx=-\frac{\pi^2}{8\sqrt2}\;,\qquad\int\limits_0^\infty\frac{dx}{x^4+1}=\frac{\pi}{2\sqrt2}$$</p> <p><strong>Note:</strong> To do the above we <em>had</em> to choose a branch cut for the complex logarithm function, yet we didn't choose the usual one (i.e., the non-positive reals) but rather the negative purely imaginary axis, so...<strong>why can we do that?</strong>, and what happened with zero, the great nemesis of the complex logarithm?</p>
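<p>As a numeric cross-check of the residue bookkeeping: splitting $\text{Log}\,x=\ln|x|+i\pi$ on the negative axis turns the contour identity into two real integrals, whose values $\int_0^\infty \frac{\ln x}{x^4+1}\,dx=-\frac{\pi^2}{8\sqrt2}$ and $\int_0^\infty \frac{dx}{x^4+1}=\frac{\pi}{2\sqrt2}$ can be verified with a crude midpoint rule after the substitution $x=t/(1-t)$ (the step count is arbitrary):</p>

```python
import math

def integral_0_inf(f, n=200_000):
    # midpoint rule on (0, 1) after x = t/(1-t), dx = dt/(1-t)^2;
    # the midpoints avoid both singular endpoints
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        x = t / (1.0 - t)
        total += f(x) / (1.0 - t) ** 2 * h
    return total

log_integral = integral_0_inf(lambda x: math.log(x) / (x**4 + 1.0))
plain_integral = integral_0_inf(lambda x: 1.0 / (x**4 + 1.0))
```
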
4,609,833
<p>As far as I can tell using Mathematica, the following identity seems to hold: <span class="math-container">$$(n+2)^n=(n+1)\sum_{k=0}^n\binom{n}{k}\frac{(k+1)^{k-1}(n-k)^{n-k}}{n+1-k},$$</span> where we define <span class="math-container">$0^0=1$</span>. However, I am having trouble proving it. I thought that this looked familiar and found the formulas (5.64) and (5.65) in Graham, Knuth, and Patashnik's &quot;Concrete Mathematics&quot;, which are <span class="math-container">$$\sum_{k=0}^n\binom{n}{k}(tk+r)^k(tn-tk+s)^{n-k}\frac{r}{tk+r}=(tn+r+s)^n$$</span> and <span class="math-container">$$\sum_{k=0}^n\binom{n}{k}(tk+r)^k(tn-tk+s)^{n-k}\frac{r}{tk+r}\frac{s}{tn-tk+s}=(tn+r+s)^n\frac{r+s}{tn+r+s},$$</span> but these seem to just barely not work for my purpose. Does anyone know a proof of this result (or perhaps a counterexample)? I suspect I will have to do some playing with generating functions.</p>
zbrads2
655,480
<p>I think I have found a solution using the principal branch of the Lambert W function. The Lagrange inversion theorem can be used to show that <span class="math-container">$$W_0^p(x)=\sum_{n=p}^\infty\frac{-p(-n)^{n-p-1}}{(n-p)!}x^n,$$</span> so that by shifting the index, we have <span class="math-container">$$\left(\frac{-W_0(-x)}{x}\right)^p=\sum_{n=0}^\infty\frac{p(n+p)^{n-1}}{n!}x^n.$$</span> Then by taking <span class="math-container">$p=\pm1$</span> and multiplying the two cases, we get <span class="math-container">$$\left(\sum_{n=0}^\infty(n+1)^{n-1}\frac{x^n}{n!}\right)\left(\sum_{n=0}^\infty(n-1)^{n-1}\frac{x^n}{n!}\right)=-1.$$</span> Now, the LHS is <span class="math-container">$$\left(\sum_{n=0}^\infty(n+1)^{n-1}\frac{x^n}{n!}\right)\left(\sum_{n=0}^\infty(n-1)^{n-1}\frac{x^n}{n!}\right)=\sum_{n=0}^\infty\sum_{k=0}^n\binom{n}{k}(k-1)^{k-1}(n-k+1)^{n-k-1}\frac{x^n}{n!}.$$</span> Thus, for <span class="math-container">$n&gt;0$</span>, we have <span class="math-container">$$\sum_{k=0}^n\binom{n}{k}(k-1)^{k-1}(n-k+1)^{n-k-1}=0.$$</span> Then on the one hand, we have <span class="math-container">$$\sum_{k=0}^{n+1}\binom{n+1}{k}(k-1)^{k-1}(n-k+2)^{n-k}=0,$$</span> while on the other hand, <span class="math-container">$$\sum_{k=0}^{n+1}\binom{n+1}{k}(k-1)^{k-1}(n-k+2)^{n-k}=-(n+2)^n+\sum_{k=0}^n\binom{n+1}{k+1}(k)^{k}(n-k+1)^{n-k-1}.$$</span> Thus, <span class="math-container">\begin{equation}(n+2)^n=\sum_{k=0}^n\binom{n+1}{k+1}k^{k}(n-k+1)^{n-k-1}\\ =(n+1)\sum_{k=0}^n\binom{n}{k}\frac{k^{k}(n-k+1)^{n-k-1}}{k+1}\\ =(n+1)\sum_{k=0}^n\binom{n}{k}\frac{(n-k)^{n-k}(k+1)^{k-1}}{n+1-k}.\end{equation}</span></p>
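<p>Both the final identity and the intermediate vanishing sum can be confirmed in exact arithmetic for small $n$, using the question's convention $0^0=1$ (the helper names are mine):</p>

```python
from fractions import Fraction
from math import comb

def pw(base, exp):
    # rational power with the conventions 0^0 = 1; negative exponents
    # only occur with nonzero base, e.g. (k+1)^(k-1) at k = 0
    if exp == 0:
        return Fraction(1)
    return Fraction(base) ** exp

def rhs(n):
    # (n+1) * sum_k C(n,k) (k+1)^(k-1) (n-k)^(n-k) / (n+1-k)
    return (n + 1) * sum(
        Fraction(comb(n, k)) * pw(k + 1, k - 1) * pw(n - k, n - k) / (n + 1 - k)
        for k in range(n + 1))

def vanishing(n):
    # sum_k C(n,k) (k-1)^(k-1) (n-k+1)^(n-k-1), claimed zero for n > 0
    return sum(Fraction(comb(n, k)) * pw(k - 1, k - 1) * pw(n - k + 1, n - k - 1)
               for k in range(n + 1))
```
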
4,609,833
joriki
6,622
<p>You can actually do this by applying the first formula that you quoted from <em>Concrete Mathematics</em> three times with <span class="math-container">$r=t=1$</span>, that is,</p> <p><span class="math-container">$$ \sum_{k=0}^n\binom{n}{k}(k+1)^{k-1}(n-k+s)^{n-k}=(n+1+s)^n\;.\tag1 $$</span></p> <p>Split your sum like this:</p> <p><span class="math-container">\begin{eqnarray} \sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k)^{n-k}}{n-k+1} &amp;=&amp; \sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k)^{n-k}}{n-k+1}((n-k+1)-(n-k)) \\ &amp;=&amp; \sum_{k=0}^n\binom nk(k+1)^{k-1}(n-k)^{n-k}-\sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k)^{n-k+1}}{n-k+1}\;. \end{eqnarray}</span></p> <p>The first sum is just <span class="math-container">$(1)$</span> with <span class="math-container">$s=0$</span>, so that’s <span class="math-container">$(n+1)^n$</span>. In the second sum, insert <span class="math-container">$s$</span> and differentiate with respect to it to get rid of the unwanted factors:</p> <p><span class="math-container">\begin{eqnarray} \frac{\partial}{\partial s}\sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k+s)^{n-k+1}}{n-k+1} &amp;=&amp; \sum_{k=0}^n\binom nk(k+1)^{k-1}(n-k+s)^{n-k} \\ &amp;=&amp; (n+1+s)^n \;, \end{eqnarray}</span></p> <p>again applying <span class="math-container">$(1)$</span>. This result is readily integrated, so all we need is a value of the sum at some particular value of <span class="math-container">$s$</span>. Substitute <span class="math-container">$s=1$</span> and again apply <span class="math-container">$(1)$</span> to obtain</p> <p><span class="math-container">\begin{eqnarray} \sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k+1)^{n-k+1}}{n-k+1} &amp;=&amp; \sum_{k=0}^n\binom nk(k+1)^{k-1}(n-k+1)^{n-k} \\ &amp;=&amp;(n+2)^n\;. 
\end{eqnarray}</span></p> <p>Thus, putting it all together,</p> <p><span class="math-container">\begin{eqnarray} \sum_{k=0}^n\binom nk\frac{(k+1)^{k-1}(n-k)^{n-k}}{n-k+1} &amp;=&amp; (n+1)^n-(n+2)^n-\int_1^0\mathrm ds(n+1+s)^n \\ &amp;=&amp; (n+1)^n-(n+2)^n-\left[\frac{(n+1+s)^{n+1}}{n+1}\right]_1^0 \\ &amp;=&amp;(n+1)^n-(n+2)^n-(n+1)^n+\frac{(n+2)^{n+1}}{n+1} \\ &amp;=&amp;\frac{(n+2)^n}{n+1}\;. \end{eqnarray}</span></p>
1,383,781
<p>Given $\mathbb{X}$ = $\mathbb{R^2}$, consider $\| \cdot \|_2$ and $\| \cdot \|_\infty$ </p> <p>We can show that </p> <p>$\| x \|_\infty \leq \| x \|_2 \leq \sqrt2 \| x \|_\infty$ </p> <p>Hence $\| \cdot \|_2$ and $\| \cdot \|_\infty$ are equivalent norms</p> <p>Is there some deeper implication regarding this particular relationship? Why do we care if two norms are equivalent in this sense?</p>
mathcounterexamples.net
187,663
<p>Some applications of this result, and more generally of the fact that all norms on a finite-dimensional space are equivalent:</p> <ul> <li>The compact subsets are exactly the closed and bounded ones.</li> <li>All linear maps are continuous. More generally, all multilinear maps are continuous.</li> <li>All linear maps are bounded on the unit ball.</li> </ul>
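<p>A quick numeric confirmation of the two constants in the question's inequality (the factor $\sqrt2$ is attained at $x=(1,1)$; the sample size is arbitrary):</p>

```python
import math
import random

def norm_inf(v):
    return max(abs(c) for c in v)

def norm_2(v):
    return math.sqrt(sum(c * c for c in v))

# check ||x||_inf <= ||x||_2 <= sqrt(2) ||x||_inf on random vectors in R^2
random.seed(0)
ok = all(
    norm_inf(v) - 1e-12 <= norm_2(v) <= math.sqrt(2.0) * norm_inf(v) + 1e-12
    for v in ([random.uniform(-10, 10), random.uniform(-10, 10)]
              for _ in range(10_000)))
```
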
646,109
<p>For a function $f:\mathbb{R}\rightarrow\mathbb{R}$ that satisfies $f\left(x+y\right)=f\left(x\right)f\left(y\right)$ and is not the zero function, I can prove that $f\left(1\right)&gt;0$ and $f\left(x\right)=f\left(1\right)^{x}$ for each $x\in\mathbb{Q}$. Is there a way to prove that for $x\in\mathbb{R}$?</p> <p>This question has been marked as a duplicate of the question whether $f(xy)=f(x)f(y)$ leads to $f(x)=x^p$ for some $p$. I disagree with that. Both questions are answered by means of construction of a function $g$ that satisfies $g(x+y)=g(x)+g(y)$: in this question $g(x)=\log f(x)$, and in the other $g(x)=\log f(e^x)$. So the answers are alike, but the two questions definitely have different starting points.</p>
Martín-Blas Pérez Pinilla
98,199
<p>Continuity (or continuity at a single point, or measurability) is required. See <a href="http://en.wikipedia.org/wiki/Cauchy%27s_functional_equation" rel="nofollow">Cauchy's functional equation</a>. Your problem is reducible to this.</p>
139,817
<p>Studying stability of certain non-autonomous dynamical systems on Lie groups I have come across the following question: Exactly which finite-dimensional, real Lie groups have adjoint representations that are bounded away from zero?</p> <p>Edit: by "bounded away from zero" I mean that the image of the adjoint representation avoids an open neighborhood of zero in End(g), where g is the Lie algebra. Equivalently, the closure of the image does not contain zero, or, the norm (pick your favorite one) of every element of the adjoint representation is bounded from below by one and the same positive number. By Hadamard's inequality, a determinant bound will do as well. [end edit]</p> <p>This should include compact Lie groups since for those there exists an inner product on the Lie algebra with respect to which all inner automorphisms are orthogonal, i.e. the elements of the adjoint representation have norm 1. Correct?</p> <p>Also, for abelian Lie groups the adjoint representation is trivial, hence again bounded away from zero.</p> <p>I believe that semisimple Lie groups should also be included but can not think of a valid argument.</p> <p>Is there actually a counter example? I tried the general linear group GL(2) but the elements of the adjoint representation that I tried always have (some) unit eigenvalues. Is this an accident? I would have thought that GL(n) itself occurs as an adjoint representation somehow which would then not be bounded away from zero. But evidently I am not quite understanding the different dimensions here (the adjoint representation of GL(2) is a subgroup of GL(4)).</p> <p>My apologies if this is trivial but I could not find anything that looked relevant in several books on Lie groups.</p>
Community
-1
<p>I am not sure what you mean by "bounded away from zero", but if you mean that the closure of the image of the adjoint representation does not contain zero, that is correct. A proof might proceed by showing that the image lies in the adjoint group of the Lie algebra, and that the latter lies in the group of elements of determinant one.</p>
3,156,643
<blockquote> <p>Prove that <span class="math-container">$\sin(x) &lt; x$</span> when <span class="math-container">$0&lt;x&lt;2\pi.$</span></p> </blockquote> <p>I have been struggling on this problem for quite some time and I do not understand some parts of the problem. I am supposed to use rolles theorem and Mean value theorem</p> <p>First using the mean value theorem I got <span class="math-container">$\cos(x) = \dfrac {\sin(x)}x$</span> and since <span class="math-container">$1 ≥ \cos x ≥ -1$</span> , <span class="math-container">$1 ≥ \dfrac {\sin(x)}x$</span> which is <span class="math-container">$x ≥ \sin x$</span> for all <span class="math-container">$x ≥ 0$</span>.</p> <p>Here the first issue is that I didn't know how to change <span class="math-container">$≥$</span> to <span class="math-container">$&gt;$</span>. </p> <p>The second part is proving when <span class="math-container">$x&lt;2\pi$</span> and this part I have no idea.</p> <p>I know that <span class="math-container">$2\pi &gt; 1$</span> , and <span class="math-container">$1 ≥ \sin x$</span> and my thought process ends here.</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$x-\sin\, x=\int_0^{x}[1-\cos\, t]\, dt \geq 0$</span>, and equality can hold only if the non-negative continuous function <span class="math-container">$1-\cos\, t$</span> is identically <span class="math-container">$0$</span> from <span class="math-container">$0$</span> to <span class="math-container">$x$</span>. This is not true for any <span class="math-container">$x&gt;0$</span>, so strict inequality holds. </p>
555,239
<p>Since the polynomial has three irrational roots, I don't know how to solve the equation with familiar ways to solve the similar question. Could anyone answer the question?</p>
André Nicolas
6,312
<p><strong>Added:</strong> The approach below is ugly: It would be most comfortable to delete. </p> <p>We look at your second equation. Look first at the case $x\ge 0$, $y\ge 0$. We have $x^2+y^2-xy=(x-y)^2+xy$. Thus $x^2+y^2-xy\ge xy$. So if the equation is to hold, we need $xy\le x+y$. </p> <p>Note that $xy-x-y=(x-1)(y-1)-1$, so $xy\le x+y$ forces $(x-1)(y-1)\le 1$. For non-negative integers this means $(x-1)(y-1)=0$, $(x-1)(y-1)=1$, or $(x-1)(y-1)\lt 0$; the last case requires $x=0$, $y\ge 2$ or $y=0$, $x\ge 2$, and then the equation reduces to $y=y^2$ (respectively $x=x^2$), which has no solutions in that range. </p> <p>In the first case, we have $x=1$ or $y=1$. Suppose that $x=1$. Then we are looking at the equation $1+y=1+y^2-y$, giving $y=0$ or $y=2$. By symmetry we also have the solution $y=1$, $x=0$ or $x=2$.</p> <p>If $(x-1)(y-1)=1$, we have $x=0$, $y=0$ or $x=2$, $y=2$.</p> <p>Now you can do an analysis of the remaining $3$ cases $x\lt 0$, $y\ge 0$; $y\lt 0$, $x\ge 0$; $x\lt 0$, $y\lt 0$. There is less to these than meets the eye. The first two cases are essentially the same. And since $x^2+y^2-xy=\frac{1}{2}((x-y)^2+x^2+y^2)$, we have $x^2+y^2-xy\ge 0$ for all $x,y$, so $x\lt 0$, $y\lt 0$ is impossible. </p>
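A brute-force search (my addition; it assumes, as the case analysis above does, that the equation in question is $x+y=x^2+y^2-xy$ over the integers) confirms that the pairs found are the only solutions in a large box:

```python
# integer solutions of x + y = x^2 + y^2 - xy in [-50, 50]^2
solutions = sorted(
    (x, y)
    for x in range(-50, 51)
    for y in range(-50, 51)
    if x + y == x * x + y * y - x * y
)
# matches the case analysis: (0,0), (0,1), (1,0), (1,2), (2,1), (2,2)
assert solutions == [(0, 0), (0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]
```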
555,239
<p>Since the polynomial has three irrational roots, I don't know how to solve the equation with familiar ways to solve the similar question. Could anyone answer the question?</p>
individ
128,505
<p>This equation can be rewritten in another form: $\frac{m+1}{n}+\frac{n+1}{m}=a$. It can be solved using the Pell equation:</p> <p>$p^2-(a^2-4)s^2=1$</p> <p>Solutions have the form:</p> <p>$n=2(p-(a+2)s)s$</p> <p>$m=-2(p+(a+2)s)s$</p> <p>And also:</p> <p>$n=\frac{2p(p+(a-2)s)}{a-2}$</p> <p>$m=\frac{2p(p-(a-2)s)}{a-2}$</p> <p>If the equation $p^2-(a^2-4)s^2=4$ has a solution, the formulas are:</p> <p>$n=\frac{p-(a-2)s+2}{2(a-2)}$</p> <p>$m=\frac{p+(a-2)s+2}{2(a-2)}$</p> <p>Anyway, I'll write the more general equation: $\frac{X^2+aX+Y^2+bY+c}{XY}=j$</p> <p>If the expression under the root is a perfect square, set</p> <p>$t=\sqrt{(b+a)^2+4c(j-2)}$</p> <p>Then, using the Pell equation</p> <p>$p^2-(j^2-4)s^2=1,$</p> <p>solutions can be written:</p> <p>$X=\frac{(b+a\pm{t})}{2(j-2)}p^2+(t\mp{(b-a)})ps-\frac{(b(3j-2)+a(6-j)\mp{(j+2)t})}{2}s^2$</p> <p>$Y=\frac{(b+a\pm{t})}{2(j-2)}p^2+(-t\mp{(b-a)})ps-\frac{(b(6-j)+a(3j-2)\mp{(j+2)t})}{2}s^2$</p> <p>We must also take into account that the numbers $p,s$ may have different signs.</p>
944,948
<p>$\textbf{QUESTION-}$ Let $P$ be a p-group with $|P:Z(P)|\leq p^n$. Show that $|P'| \leq p^{n(n-1)/2}$.</p> <p>If $P=Z(P)$ it is true. Now let $n &gt; 1$, then</p> <p>If I see $P$ as a nilpotent group and construct its upper central series, it will end, so let it be</p> <p>$e=Z_0&lt;Z_1&lt;Z_2&lt;......&lt;Z_r=P$ </p> <p>Now as $Z_{i+1}/Z_i=Z(P/Z_i)$, so if I take some $x\in Z_2$\ $Z_1$ then $N$={$[x,y]|y\in P$} $\leq Z_1(P)$ and $N \triangleleft P $, so $P/N$ is a group with order $\leq p^{n-1}$.</p> <p>Now if I let $H=P/N$ then obviously |$H/Z(H)$|$\leq p^{n-1}$.</p> <p>Now $H'\cong P'N/N \cong P'/(P' \cap N)$ so from here I could finally bring $P'$ at least into the picture; now |$P'$|=$|H'|.|P'\cap N|$ so $|P'|\leq |H'||N|$. </p> <p>This is where I am $\textbf{STUCK}$</p> <p>Now, from here how can I calculate or find some power of $p$ bounds on $|H'|$ and $|N|$ so I could get my result.</p>
southsinger
80,958
<p>First of all notice that (insert $-x$ instead of $y$)</p> <p>$$ f' (x) = f' (0) + x^2 $$</p> <p>On the other hand, we know that </p> <p>$$ \lim_{x \to 0} f(x)/x = f'(0) =1 $$</p> <p>so</p> <p>$$ f'(x) = 1+x^2 $$</p>
3,151,662
<p>Consider <span class="math-container">$a_1,\dots,a_n\in\mathbb{R}^n$</span> and identify <span class="math-container">$a_j\in\mathcal{L}(\mathbb{R},\mathbb{R}^n)$</span> via <span class="math-container">$\varphi\mapsto \varphi1$</span>.</p> <p>Also, consider <span class="math-container">$A\in\mathcal{L}(\mathbb{R}^n)$</span> given by <span class="math-container">$$A\colon (x_1,\dots,x_n)\mapsto a_1x_1 + \dots + a_nx_n\tag{$\star$}$$</span></p> <p>What's the name or symbol of the map <span class="math-container">$$\mathcal{L}(\mathbb{R},\mathbb{R}^n)\times\dots\times\mathcal{L}(\mathbb{R},\mathbb{R}^n)\to\mathcal{L}(\mathbb{R}^n,\mathbb{R}^n),\quad(a_1,\dots,a_n)\mapsto A$$</span> where <span class="math-container">$A$</span> and <span class="math-container">$a_1,\dots,a_n$</span> are related as in <span class="math-container">$(\star)$</span>? I'd like to write e.g. <span class="math-container">$A=a_1\otimes\dots\otimes a_n$</span>. Is there some higher level concept that induces that map? (E.g. some time ago I wondered if this map would be the tensor product). </p> <p>The matrix equivalent would be saying that the columns of <span class="math-container">$A$</span> are the column vectors <span class="math-container">$a_1,\dots,a_n$</span> and writing <span class="math-container">$A=\begin{bmatrix}a_1&amp; \dots &amp;a_n\end{bmatrix}$</span>. However, I'd like to keep things matrix free.</p> <p>Thanks in advance.</p>
poetasis
546,655
<p>If the determinant of the coefficient matrix is zero, then a system of equations represented by an $n\times n$ matrix set equal to an $n\times 1$ column vector has no unique solution. If it is non-zero, then there is a unique solution, and it can be found using <a href="https://www.purplemath.com/modules/cramers.htm" rel="nofollow noreferrer">Cramer's Rule</a>. Determinants are also used in Photoshop for various visual tricks; they are used to cast 3D shapes onto a 2D surface; they are used to analyze seismic waves... and a hundred other applications where data need to be crunched in a simple manner.</p>
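As a small illustration of the solvability claim (my addition; this covers the non-zero-determinant case, where the solution is unique), here is Cramer's Rule for a $2\times 2$ system:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])

det = np.linalg.det(A)
assert abs(det) > 1e-12          # non-zero determinant -> unique solution

# Cramer's Rule: replace each column of A by b in turn
x = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])) / det,
    np.linalg.det(np.column_stack([A[:, 0], b])) / det,
])
assert np.allclose(A @ x, b)                    # solves the system
assert np.allclose(x, np.linalg.solve(A, b))    # agrees with direct solve
```

Here `det = 5`, and the rule gives `x = [1, 3]`.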
3,151,662
<p>Consider <span class="math-container">$a_1,\dots,a_n\in\mathbb{R}^n$</span> and identify <span class="math-container">$a_j\in\mathcal{L}(\mathbb{R},\mathbb{R}^n)$</span> via <span class="math-container">$\varphi\mapsto \varphi1$</span>.</p> <p>Also, consider <span class="math-container">$A\in\mathcal{L}(\mathbb{R}^n)$</span> given by <span class="math-container">$$A\colon (x_1,\dots,x_n)\mapsto a_1x_1 + \dots + a_nx_n\tag{$\star$}$$</span></p> <p>What's the name or symbol of the map <span class="math-container">$$\mathcal{L}(\mathbb{R},\mathbb{R}^n)\times\dots\times\mathcal{L}(\mathbb{R},\mathbb{R}^n)\to\mathcal{L}(\mathbb{R}^n,\mathbb{R}^n),\quad(a_1,\dots,a_n)\mapsto A$$</span> where <span class="math-container">$A$</span> and <span class="math-container">$a_1,\dots,a_n$</span> are related as in <span class="math-container">$(\star)$</span>? I'd like to write e.g. <span class="math-container">$A=a_1\otimes\dots\otimes a_n$</span>. Is there some higher level concept that induces that map? (E.g. some time ago I wondered if this map would be the tensor product). </p> <p>The matrix equivalent would be saying that the columns of <span class="math-container">$A$</span> are the column vectors <span class="math-container">$a_1,\dots,a_n$</span> and writing <span class="math-container">$A=\begin{bmatrix}a_1&amp; \dots &amp;a_n\end{bmatrix}$</span>. However, I'd like to keep things matrix free.</p> <p>Thanks in advance.</p>
Raaja_is_at_topanswers.xyz
286,483
<p>In system theory, </p> <ol> <li>systems can be represented by matrices, and each column represents an internal state of the system; </li> <li>if the determinant of one such matrix is zero, then we can say that one of the states associated with certain dynamics is being duplicated;</li> <li>based on some special matrix operations, we arrive at something called the relative gain array (RGA). This gives information on how much the states/outputs of a system interact with each other, collectively speaking.</li> </ol> <p>However, these are just a few examples. There are many more.</p>
422,118
<p>I'm a CS major working on social network analysis and its friends.</p> <p>In page 15 of <a href="http://open.umich.edu/sites/default/files/1446/SI508-F08-Week3.pdf" rel="nofollow">this lecture note</a>, two very interesting questions have been asked. Given a social network graph, in which cases would we find nodes with high betweenness but relatively low degree? And, which cases would cause the opposite to happen, that is, high degree but relatively low betweenness? I'm trying to understand this from an intuitive point of view. I'd really appreciate it if someone can shed some light on this.</p> <p><strong>Added notes:</strong><br/> <strong>Betweenness:</strong> intuition: how many pairs of nodes would have to go through a particular node in order to reach one another in the minimum number of hops? Check <a href="http://en.wikipedia.org/wiki/Betweenness_centrality" rel="nofollow">Betweenness centrality</a> in Wikipedia for formal definitions.</p>
Gill
86,878
<p>So I think that $\triangledown(\top,\top)$ does follow for all normal logics in the sense you have defined. The following would be a proof in any extension of the logic you have given.</p> <p>A $\vdash\top$ by 1</p> <p>B $\vdash\triangledown(\top,\bot)$ by 6</p> <p>C $\vdash\triangledown(\top,\bot)\to(\triangledown(\top,\bot\to\top)\to\triangledown(\top,\top))$ by 4 and 3</p> <p>D $\vdash\bot\to\top$ by 1</p> <p>E $\vdash \triangledown(\top,\bot\to\top)$ by 6</p> <p>Now apply MP twice to C. I think that would work unless I have misunderstood something.</p>
4,414,843
<p>For reference: Show that the area of ​​triangle <span class="math-container">$ABC = R\times MN(R=BO)$</span></p> <p>I can't demonstrate this relationship</p> <p><a href="https://i.stack.imgur.com/A3ci3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A3ci3.jpg" alt="enter image description here" /></a></p> <p>My progress:</p> <p><span class="math-container">$$S_{\triangle ABC} = \frac{abc}{4R}$$</span> <span class="math-container">$$S_{\triangle ABC}=\frac{AC\times BH}{2}$$</span></p> <p><span class="math-container">$BMHN$</span> is cyclic</p> <p>Therefore <span class="math-container">$\angle HMN \cong\angle HBN\\ \angle MBH \cong \angle MNH\\$</span></p> <p><span class="math-container">$\triangle AMH \sim \triangle AHB\\ \triangle CNH \sim \triangle CHB$</span></p> <p>...?</p>
peta arantes
419,513
<p>The orthocenter and the circumcenter of a triangle are isogonal conjugates, therefore <span class="math-container">$\angle ABH=\angle NBO=\alpha\\ HMBN \text{ (cyclic)}\implies \angle BHM=\angle MNB=90^\circ-\alpha\\ \therefore \angle BFN = 90^\circ \implies BO\perp MN \\ [BMON]=[BMN]-[MNO]=\frac{BO\cdot MN}{2}=\frac{R\cdot MN}{2}\quad (I)\\ \triangle MBN \sim \triangle CBA \therefore \frac{MN}{MB}=\frac{b}{a} \\ \angle MHB=90^\circ-(90^\circ-\angle A) \implies MB=BH\sin(\angle A)\\ [BMON]=\frac{R\cdot \frac{b}{a}\cdot BH\cdot \sin(\angle A)}{2}=\frac{[ABC]\cdot R\sin(\angle A)}{a}=\frac{[ABC]}{2}\quad (II)\\ (I)=(II):\ \frac{[ABC]}{2}=\frac{R\cdot MN}{2} \implies \boxed{[ABC]=R\cdot MN} $</span> <a href="https://i.stack.imgur.com/xbaKT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xbaKT.jpg" alt="enter image description here" /></a></p> <p>(Solution by FelipeM.)</p>
4,414,843
<p>For reference: Show that the area of ​​triangle <span class="math-container">$ABC = R\times MN(R=BO)$</span></p> <p>I can't demonstrate this relationship</p> <p><a href="https://i.stack.imgur.com/A3ci3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A3ci3.jpg" alt="enter image description here" /></a></p> <p>My progress:</p> <p><span class="math-container">$$S_{\triangle ABC} = \frac{abc}{4R}$$</span> <span class="math-container">$$S_{\triangle ABC}=\frac{AC\times BH}{2}$$</span></p> <p><span class="math-container">$BMHN$</span> is cyclic</p> <p>Therefore <span class="math-container">$\angle HMN \cong\angle HBN\\ \angle MBH \cong \angle MNH\\$</span></p> <p><span class="math-container">$\triangle AMH \sim \triangle AHB\\ \triangle CNH \sim \triangle CHB$</span></p> <p>...?</p>
JMP
210,189
<p>First establish that <span class="math-container">$MN = BH \times \sin \angle ABC$</span></p> <p>(<a href="https://en.wikipedia.org/wiki/Law_of_sines" rel="nofollow noreferrer">law of sines</a> in the circle with diameter <span class="math-container">$BH$</span> - <span class="math-container">$\sin (\angle HMB):BH = \sin (\angle ABC):MN$</span>).</p> <p>Then,</p> <p><span class="math-container">$\triangle AOC$</span> is isosceles, let <span class="math-container">$\angle OAC=\angle OCA = \beta$</span>.</p> <p>Then <span class="math-container">$\angle AOC = 180^\circ - 2\beta = 2(\alpha+\gamma)$</span>,</p> <p>where <span class="math-container">$\alpha,\gamma$</span> are the base angles of the isosceles triangles <span class="math-container">$\triangle AOB$</span> and <span class="math-container">$\triangle BOC$</span>, so that <span class="math-container">$\alpha+\beta+\gamma=90^\circ$</span> and <span class="math-container">$\angle ABC=\alpha+\gamma$</span>.</p> <p>The perpendicular from <span class="math-container">$O$</span> to <span class="math-container">$AC$</span> divides <span class="math-container">$\angle AOC$</span> into two halves of <span class="math-container">$\alpha+\gamma$</span> each, and so we arrive at <span class="math-container">$AC=2R \sin (\angle ABC)$</span>.</p> <p>Hence <span class="math-container">$[ABC]=\tfrac12\,AC\cdot BH = R\sin(\angle ABC)\,BH = R\times MN$</span>.</p>
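The identity can also be checked numerically (my addition; the construction assumes, per the figure, that $H$ is the foot of the altitude from $B$ and that $M$, $N$ are the feet of the perpendiculars from $H$ to $AB$ and $BC$):

```python
import numpy as np

def foot(P, Q, X):
    """Orthogonal projection of X onto the line through P and Q."""
    d = Q - P
    return P + d * np.dot(X - P, d) / np.dot(d, d)

A = np.array([0.0, 0.0])
B = np.array([1.0, 3.0])
C = np.array([4.0, 0.5])

H = foot(A, C, B)   # foot of the altitude from B
M = foot(A, B, H)   # foot of the perpendicular from H to AB
N = foot(B, C, H)   # foot of the perpendicular from H to BC

u, v = B - A, C - A
area = abs(u[0] * v[1] - u[1] * v[0]) / 2
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
R = a * b * c / (4 * area)                       # circumradius

assert abs(area - R * np.linalg.norm(M - N)) < 1e-9   # [ABC] = R * MN
```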
3,227,215
<p><a href="https://i.stack.imgur.com/7pJ4t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7pJ4t.png" alt="enter image description here" /></a></p> <blockquote> <p><span class="math-container">$(O, R)$</span> is the circumscribed circle of <span class="math-container">$\triangle ABC$</span>. <span class="math-container">$I \in \triangle ABC$</span>. <span class="math-container">$AI$</span>, <span class="math-container">$BI$</span> and <span class="math-container">$CI$</span> intersect <span class="math-container">$AB$</span>, <span class="math-container">$BC$</span> and <span class="math-container">$CA$</span> respectively at <span class="math-container">$M$</span>, <span class="math-container">$N$</span> and <span class="math-container">$P$</span>. Prove that <span class="math-container">$$\large \frac{1}{AM \cdot BN} + \frac{1}{BN \cdot CP} + \frac{1}{CP \cdot AM} \le \frac{4}{3(R - OI)^2}$$</span></p> </blockquote> <p>I have provided my own solution and I would greatly appreciate any other solutions, perhaps one involving trigonometry. I deeply apologise for the misunderstanding.</p>
Michael Rozenberg
190,319
<p>It's wrong.</p> <p>Indeed, let <span class="math-container">$\Delta A'B'C'\sim\Delta ABC$</span> such that <span class="math-container">$\frac{A'B'}{AB}=\epsilon.$</span></p> <p>Thus, if your inequality is true, so we have <span class="math-container">$$\sum_{cyc}\frac{1}{A'M'\cdot B'N'}\leq\frac{4}{3(R'-O'I')}$$</span> or <span class="math-container">$$\frac{1}{\epsilon}\sum_{cyc}\frac{1}{AM\cdot BN}\leq\frac{4}{3(R-OI)},$$</span> which is wrong for <span class="math-container">$\epsilon\rightarrow0^+$</span>.</p>
120,067
<p>The <em>theta function</em> is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property.</p> <blockquote> <p><strong>Theta reciprocity</strong>: $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$.</p> </blockquote> <p>This theorem, while fundamentally analytic&mdash;the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform&mdash;has serious arithmetic significance.</p> <ul> <li><p>It is the key ingredient in the proof of the functional equation of the Riemann zeta function.</p></li> <li><p>It expresses the <em>automorphy</em> of the theta function.</p></li> </ul> <p>Theta reciprocity also provides an analytic proof (actually, the <em>only</em> proof, as far as I know) of the Landsberg-Schaar relation</p> <p>$$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$</p> <p>where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon&gt;0$, and then let $\epsilon\to 0$.</p> <p>This reduces to the formula for the quadratic Gauss sum when $q=1$:</p> <p>$$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} = \begin{cases} \sqrt{p} &amp; \textrm{if } \; p\equiv 1\mod 4 \\\ i\sqrt{p} &amp; \textrm{if } \; p\equiv 3\mod 4 \end{cases}$$</p> <p>(where $p$ is an odd prime). 
From this, it's not hard to deduce Gauss's "golden theorem".</p> <blockquote> <p><strong>Quadratic reciprocity</strong>: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$.</p> </blockquote> <p>For reference, this is worked out in detail in the paper "<a href="http://www.math.kth.se/~akarl/langmemorial.pdf">Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals</a>" by Anders Karlsson.</p> <hr> <p>I feel like there is some deep mathematics going on behind the scenes here, but I don't know what.</p> <blockquote> <p>Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)?</p> </blockquote> <p>Hopefully some wise number theorist can shed some light on this!</p>
Alexey Ustinov
5,712
<p>This is not an answer but a comment concerning the Landsberg-Schaar relation (LS). It admits more than just an analytic proof: the article <a href="https://arxiv.org/abs/1810.06172" rel="nofollow noreferrer">A proof of the Landsberg-Schaar relation by finite methods</a> by Ben Moore gives an elementary proof of the LS. But that proof is unreasonably complicated; the LS can be proven in two lines.</p> <p>It is well known that an arbitrary complex-valued function <span class="math-container">$f$</span> is represented by its finite (discrete) Fourier series <span class="math-container">$$f(x)=\sum\limits_{k=0}^{n-1}\widehat{f}_n(k)e\left(\frac{kx}{n}\right)\qquad(0\le x&lt;n),$$</span> with finite Fourier coefficients <span class="math-container">$$\widehat{f}_n(k)=\dfrac{1}{n}\sum\limits_{x=0}^{n-1}f(x)e\left(-\frac{kx}{n}\right)\qquad(0\le k&lt;n)$$</span> where <span class="math-container">$e(t)=e^{2\pi it}$</span>. The first step is <a href="http://iam.khv.ru/articles/Ustinov/nth10_eng.pdf" rel="nofollow noreferrer">“A Discrete Analog of the Poisson Summation Formula”</a>: if <span class="math-container">$n=n_1n_2$</span> then <span class="math-container">$$\sum\limits_{x=0}^{n_2-1}f(n_1x)=n_2 \sum\limits_{x=0}^{n_1-1}\widehat{f}_n(n_2x).$$</span> It follows directly from the formula for <span class="math-container">$\widehat{f}_n(k)$</span>.</p> <p>The function <span class="math-container">$f(x)=e\left(x^2/( 4pq)\right)$</span> is periodic with the period <span class="math-container">$n=2pq$</span> and <span class="math-container">\begin{align*} \widehat{f}_{2pq}(k)=&amp;\frac{1}{ 2pq}\sum_{y=0}^{2pq-1}e\left(\frac{y^2-2ky}{ 4pq}\right)=\frac{1}{ 2pq}\sum_{y=0}^{2pq-1}e\left(\frac{(y-k)^2-k^2}{ 4pq}\right)=\\=&amp; \frac{1}{ 2pq}e\left(-\frac{k^2}{ 4pq}\right)\sum_{y=0}^{2pq-1}e\left(\frac{y^2}{ 4pq}\right)=\frac{1}{ 2pq}e\left(-\frac{k^2}{ 4pq}\right)\cdot\frac{S(4pq)}{ 2}, \end{align*}</span> where <span
class="math-container">$$S(p)=\sum\limits_{x=1}^{p}e(x^2/p)=\frac{1+i^{-p}}{1+i^{-1}}\cdot\sqrt{p}$$</span> is a Gauss sum. So (the second step) applying the discrete Poisson summation formula to <span class="math-container">$f$</span> with <span class="math-container">$n_1=2q$</span>, <span class="math-container">$n_2=p$</span> and <span class="math-container">$n=n_1n_2=2pq$</span> we get the formula <span class="math-container">$$\sum_{x=0}^{p-1}e\left(\frac{qx^2}{ p}\right)=\frac{S(4pq)}{4q}\sum_{x=0}^{2q-1}e\left(-\frac{px^2}{ 4q}\right),$$</span> which is equivalent to LS.</p>
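A direct numerical check of the LS relation itself (my addition, independent of the proof above), using the statement that it holds for arbitrary positive integers $p$, $q$:

```python
import cmath
from math import pi, sqrt

def ls_lhs(p, q):
    return sum(cmath.exp(2j * pi * n * n * q / p) for n in range(p)) / sqrt(p)

def ls_rhs(p, q):
    return (cmath.exp(1j * pi / 4) / sqrt(2 * q)
            * sum(cmath.exp(-1j * pi * n * n * p / (2 * q))
                  for n in range(2 * q)))

for p, q in [(1, 1), (3, 5), (7, 2), (12, 5), (9, 8)]:
    assert abs(ls_lhs(p, q) - ls_rhs(p, q)) < 1e-9
```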
3,950,463
<blockquote> <p>What is <span class="math-container">$100$</span>th derivative of <span class="math-container">$y=\ln(2x-x^2)$</span> at <span class="math-container">$x=1$</span>?</p> <p><span class="math-container">$a)2\times99!$</span></p> <p><span class="math-container">$b)-2\times99!$</span></p> <p><span class="math-container">$c)2\times101!$</span></p> <p><span class="math-container">$d)-2\times101!$</span></p> </blockquote> <p>I tried to find a pattern by calculating first derivatives of <span class="math-container">$y$</span>:</p> <p><span class="math-container">$$y'=\cfrac{2-2x}{2x-x^2} ,\quad y'(1)=0$$</span></p> <p><span class="math-container">$$y''=\cfrac{-2(2x-x^2)-(2-2x)^2}{(2x-x^2)^2},\quad y''(1)=-2$$</span></p> <p>And <span class="math-container">$y'''$</span>is going to get really ugly. so I couldn't find any pattern.</p>
Ninad Munshi
698,724
<p>We can be a bit clever instead. Notice that</p> <p><span class="math-container">$$\ln(2x-x^2) = \ln(1-1+2x-x^2) = \ln(1-(x-1)^2)$$</span></p> <p><span class="math-container">$\ln(1-x)$</span> has a known Taylor series</p> <p><span class="math-container">$$-\sum_{n=1}^\infty \frac{x^n}{n}$$</span></p> <p>which means the Taylor series of <span class="math-container">$\ln(1-(x-1)^2)$</span> is</p> <p><span class="math-container">$$-\sum_{n=1}^\infty \frac{(x-1)^{2n}}{n}$$</span></p> <p>Comparing the <span class="math-container">$n=50$</span> term with Taylor's theorem gives <span class="math-container">$\frac{y^{(100)}(1)}{100!}=-\frac{1}{50}$</span>, so <span class="math-container">$y^{(100)}(1)=-\frac{100!}{50}=-2\times 99!$</span>, and the answer is b.</p>
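Both steps can be sanity-checked numerically (my addition): the series identity at a sample point inside the radius of convergence, and the resulting factorial arithmetic.

```python
from math import log, factorial

# the series -sum (x-1)^{2n}/n converges for |x-1| < 1
x = 1.3
series = -sum((x - 1) ** (2 * n) / n for n in range(1, 200))
assert abs(log(2 * x - x * x) - series) < 1e-12

# the (x-1)^100 coefficient is -1/50, so f^(100)(1) = -100!/50 = -2 * 99!
assert factorial(100) // 50 == 2 * factorial(99)
```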
10,505
<p>I've only done a few questions here but already it's grinding on me. Why can't we have the writing-answer panel and the preview panel side by side, rather than below, this means for big answers I can't make use of the preview! It'd be great if side by side, two scroll-bars, or even a pop out (I have a window manager that gives me an always-on-top button :)) but this may not help those 'less fortunate' when it comes to OS.</p> <p>It'd also be great if the preview wasn't trying to update EVERY keystroke, I'd love to have (the option, as I can see others disagreeing) an update button, or only when I pressed enter. Maybe only try and update everything after the last parsed expression (rather than the entire messages) </p> <p>It can get QUITE laggy. So please please do this, or have like a profile option to do it. </p> <p>Thanks</p>
robjohn
13,854
<p>Whether you use ChatJax or not in chat, look at the bookmarks described in <a href="https://math.meta.stackexchange.com/a/3297">this answer</a>. The <code>rendering off</code> and <code>rendering on</code> bookmarks will turn off and on the MathJax rendering. While off, this prevents the rendering from affecting your typing.</p>
1,841,958
<p>This is a claim on Wikipedia <a href="https://en.wikipedia.org/wiki/Partially_ordered_set">https://en.wikipedia.org/wiki/Partially_ordered_set</a></p> <p>I am not sure how to make sense of the claim.</p> <p>What does it mean by "ordered by inclusion"? Inclusion as in $\subseteq$? </p> <p>Can someone provide a small example of a couple of subspaces being "ordered" by inclusion?</p> <p>Is this a linear order?</p>
André Nicolas
6,312
<p>Yes, inclusion as in $\subseteq$. A small example may clarify things. Take the usual three-dimensional vector space $\mathbb{R}^3$. Geometrically speaking, there are four types of subspaces of $\mathbb{R}^3$.</p> <p>(i) The space consisting of the zero vector only.</p> <p>(ii) One-dimensional subspaces, which can be identified with lines through the origin.</p> <p>(iii) Two-dimensional subspaces, which may be identified with planes through the origin.</p> <p>(iv) The whole of $\mathbb{R}^3$.</p> <p>Let $U$ be a plane through the origin, and let $V$ be <em>the same</em> plane. Then $U\subseteq V$.</p> <p>Let $U$ be the subspace consisting of the zero vector, and let $V$ be any subspace of $\mathbb{R}^3$. Then $U\subseteq V$.</p> <p>Let $U$ be a line through the origin, and let $V$ be a plane that contains that line. Then $U\subseteq V$.</p> <p>Let $U$ and $V$ be <strong>different</strong> lines through the origin. Then $U\subseteq V$ is false, as is $V\subseteq U$. Thus our partial order on subspaces of $\mathbb{R}^3$ is not a linear order.</p>
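The inclusion test can also be phrased computationally (my addition; `contained` is a helper I introduce here): $\operatorname{span}(U)\subseteq\operatorname{span}(V)$ exactly when appending $U$'s columns to $V$ does not raise the rank.

```python
import numpy as np

def contained(U, V):
    """Is the column span of U contained in the column span of V?"""
    return np.linalg.matrix_rank(np.hstack([V, U])) == np.linalg.matrix_rank(V)

x_axis = np.array([[1.0], [0.0], [0.0]])
y_axis = np.array([[0.0], [1.0], [0.0]])
xy_plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

assert contained(x_axis, xy_plane)     # a line inside a plane containing it
# two different lines: neither contains the other, so the order is not linear
assert not contained(x_axis, y_axis)
assert not contained(y_axis, x_axis)
```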
434,061
<p>I am given a normal matrix $A\in M(n\times n, \mathbb{C})$ (in matrix form $AA^*=A^*A$) with $A^2=A$. The task is to prove that the matrix is Hermitian.</p> <p>But when I try something like $A^*=\,\,...$ , I can't reach $A$, because I can't "get rid of the star" in the expression. Also it is not enough to show $BA=BA^*$ for some $B$, since matrices don't form a field, and I haven't got any other thoughts.</p> <p>Thanks in advance!</p>
Sungjin Kim
67,070
<p><strong>Hint</strong> By spectral theorem, a normal matrix is diagonalizable by a unitary matrix. </p> <p>Then what are the eigenvalues of a diagonalizable matrix $A$ which satisfies $$A^2=A?$$</p>
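Following the hint (my addition): the eigenvalues satisfy $\lambda^2=\lambda$, so $\lambda\in\{0,1\}$, which is real, and a unitarily diagonalizable matrix with real eigenvalues is Hermitian. A numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                  # random unitary via QR
D = np.diag([1, 1, 0, 0]).astype(complex)
A = U @ D @ U.conj().T                  # normal and idempotent by construction

assert np.allclose(A @ A, A)                           # A^2 = A
assert np.allclose(A @ A.conj().T, A.conj().T @ A)     # A A* = A* A
assert np.allclose(A, A.conj().T)                      # Hermitian
```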
3,985,177
<p>When it comes to proving that two sets are equal, say <span class="math-container">$A = B$</span>, we're usually told that we have to prove that <span class="math-container">$A \subset B$</span> and <span class="math-container">$B \subset A$</span>. However, I'm under the impression that this strategy isn't unique. Two sets are equal if they have the same elements, and this commonly used strategy is just one way to prove that.</p> <p>A better way to prove that two sets are equal, at least in my opinion, is to use set notation. However, because I never see this being used, I'm unsure if this method is correct. Example:</p> <p><em>Prove that <span class="math-container">$ (A \cup B) \times C = (A \times C) \cup (B \times C) $</span></em>.</p> <p><em>Proof</em>. We have, <span class="math-container">\begin{align} (A \cup B) \times C &amp;= \{x; x \in A \operatorname{or} B \} \times \{y; y \in C \} \\ &amp;= \{(x, y); x \in A \operatorname{or} B, y \in C \} \\ &amp;= \{(x, y); x \in A, y \in C \} \cup \{(x, y); x \in B, y \in C \} \\ &amp;= (A \times C) \cup (B \times C). \end{align}</span></p> <p>So the question is, is this method correct? Thanks in advance.</p>
Yuval Peres
360,408
<p>No holomorphic function can map the complex plane to a bounded open set. See <a href="https://artofproblemsolving.com/wiki/index.php/Liouville%27s_Theorem_(complex_analysis)" rel="nofollow noreferrer">Liouville's theorem</a>.</p>
1,039,141
<blockquote> <p>Let <span class="math-container">$X = \mathbb{R}$</span> and <span class="math-container">$Y = \{x \in \mathbb{R} :x ≥ 1\}$</span>, and define <span class="math-container">$G : X → Y$</span> by <span class="math-container">$$G(x) = e^{x^2}.$$</span> Prove that <span class="math-container">$G$</span> is onto.</p> </blockquote> <p>Is this going along the right path, and if so, how do I get the function to equal <span class="math-container">$y$</span>?</p> <blockquote> <p><span class="math-container">$G: \mathbb{R} \to\mathbb{N}_1$</span>. Let <span class="math-container">$y \in \mathbb{N}_1$</span>.</p> <p><em>claim:</em> <span class="math-container">$\sqrt{\ln y}$</span> maps to <span class="math-container">$y$</span>.</p> <p>Does <span class="math-container">$\sqrt{\ln y}$</span> belong to <span class="math-container">$\mathbb{N}_1$</span>? Yes, because <span class="math-container">$y \in \mathbb{N}_1$</span>, <span class="math-container">$G( \sqrt{\ln y})=e^{(\sqrt{\ln y})^2}$</span>.</p> </blockquote>
Mimo
195,551
<p>For any $y \in Y$ we have to show that there exists an $x \in X$ such that $G(x) = y$.</p> <p>Now, $$G(x)=y$$ $$\implies e^{x^2} = y$$ $$\implies x^2 = \ln y $$ $$\implies x = \pm \sqrt {\ln y}$$</p> <p>Since $y \in Y$, we have $y \ge 1$, hence $\ln y\ge 0$, and $\pm \sqrt {\ln y}$ is well defined and lies in $X$. Thus for any real $y$ in $Y$ there is at least one real $x$ in $X$ (exactly two when $y&gt;1$) such that $G(x) = y$. Thus $G:X \rightarrow Y$ is onto. </p>
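A quick numerical check of the claimed preimage (my addition):

```python
from math import exp, log, sqrt

for y in [1.0, 2.0, 7.5, 100.0]:
    x = sqrt(log(y))              # well defined since y >= 1
    assert abs(exp(x * x) - y) < 1e-9 * y
```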
2,337,332
<p>Tried a lot. Though unable to find starting point.</p>
Sri-Amirthan Theivendran
302,692
<p>Write $$ 9^{16}-5^{16}=(9^8-5^8)(9^8+5^8)\tag{1}. $$ By Euler's theorem, $$ 9^6\equiv 1\mod 14;\quad 5^6\equiv 1\mod 14 $$ since $5$ and $9$ are coprime to $14$ and $\varphi(14)=6$. Then $$ 9^8-5^8\equiv 9^2-5^2 = 56 = 4\cdot 14\equiv0\mod{14} $$ and the result follows from (1).</p>
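Each congruence in the argument can be verified directly (my addition):

```python
# phi(14) = 6, and 9, 5 are coprime to 14
assert pow(9, 6, 14) == 1
assert pow(5, 6, 14) == 1
assert (9 ** 2 - 5 ** 2) % 14 == 0        # 81 - 25 = 56 = 4 * 14
assert (9 ** 8 - 5 ** 8) % 14 == 0
assert (9 ** 16 - 5 ** 16) % 14 == 0      # the claim itself
```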
3,064,501
<p>So I was trying to find the <strong>time complexity</strong> of an algorithm to find the <span class="math-container">$N$</span>th prime number (where <span class="math-container">$N$</span> could be any positive integer).</p> <p>So is there any way to exactly determine how far <span class="math-container">$(N+1)$</span>th prime number will be if <span class="math-container">$N$</span>th prime number is already known ?</p>
Community
-1
<p>It depends on what you call large, and what theory you use. Using the fact that all primes greater than 3 are 1 or -1 mod 6 and a <a href="https://en.m.wikipedia.org/wiki/Sieve_of_Sundaram" rel="nofollow noreferrer">Sieve of Sundaram</a>-style argument, you can show that any natural number n of certain forms will create at most one half (one prime, one composite) of a twin prime pair. Specifically, the following make at least one of 6n+1 or 6n-1 composite: <span class="math-container">$$\begin{cases}6n+1,\quad\text{n=(6j+1)k+j or n=(6j-1)k-j}\\6n-1,\quad\text{n=(6j+1)k-j}\end{cases}$$</span> where j,k>0. In the case of semiprimes with factors of roughly the same size, j and k will be close together, making each roughly <span class="math-container">$\frac{1}{6}\sqrt p$</span>, where p is the number you are trying to factor. Of course, the twin prime conjecture says there's no point after which all natural numbers are of these forms. Good luck.</p>
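The composite-forcing claims implicit in the cases above follow from factoring identities, e.g. $6\bigl((6j+1)k+j\bigr)+1 = (6j+1)(6k+1)$. Here is an added brute-force verification of all three identities (my addition, not part of the original answer):

```python
# n = (6j+1)k + j  =>  6n+1 = (6j+1)(6k+1)
# n = (6j-1)k - j  =>  6n+1 = (6j-1)(6k-1)
# n = (6j+1)k - j  =>  6n-1 = (6j+1)(6k-1)
for j in range(1, 20):
    for k in range(1, 20):
        assert 6*((6*j+1)*k + j) + 1 == (6*j+1)*(6*k+1)
        assert 6*((6*j-1)*k - j) + 1 == (6*j-1)*(6*k-1)
        assert 6*((6*j+1)*k - j) - 1 == (6*j+1)*(6*k-1)
```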
692,998
<p>The inner product in an $L^2$ space can be defined as:</p> <p>$$\langle f,g\rangle =\int_a^b \bar{f}(x)g(x)w(x)dx$$</p> <p>For Legendre polynomials, we define it as:</p> <p>$$\langle P_m,P_n\rangle =\int_0^1 \bar{P}_m(x)P_n(x)dx$$ so $w(x)=1$.</p> <p>But there are cases in which $w(x)\neq 1$. For example, Laguerre polynomials have $w(x)=e^{-x}$ and Hermite polynomials $w(x)=e^{-x^2}$.</p> <p><strong>Is there any intuition/motivation behind different weight functions of orthogonal polynomials</strong>? I think it might be related to measure theory and Sturm-Liouville problems.</p>
froggie
23,685
<p>I'm not sure this is a great answer, but in the case of the Legendre polynomials, you are working on a compact interval, so the given inner product with weight $\equiv 1$ makes sense.</p> <p>On the other hand, for the Laguerre and Hermite polynomials, you work on the intervals $[0,\infty)$ and $(-\infty, \infty)$, respectively. Since products of polynomials are not integrable on these infinite intervals, you need <em>some</em> weight in the inner product just to get convergence in the integrals. But you can't just choose any weight: you need weights that decay faster at $\infty$ than the reciprocal of any polynomial. Hence choosing weights like $e^{-x}$ (for the Laguerre polynomials) and $e^{-x^2}$ (for the Hermite polynomials). </p> <p>These choices seem very natural to me. I think if you were to ask a random mathematician to name a positive function on $\mathbb{R}$ that decays faster than the reciprocal of any polynomial, they would probably say $e^{-x}$.</p> <p>For the Hermite polynomials, where you are working over $(-\infty, \infty)$, you need to have a weight that decays quickly at both $-\infty$ and $\infty$, hence $e^{-x^2}$.</p>
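As an added illustration (not from the answer itself): the first few Hermite polynomials really are orthogonal under the weight $e^{-x^2}$, and products of polynomials against this weight are integrable even though the polynomials themselves blow up. A crude midpoint-rule quadrature over a wide interval confirms this:

```python
import math

def hermite_inner(f, g, a=-10.0, b=10.0, n=200_000):
    """Midpoint-rule approximation of ⟨f, g⟩ = ∫ f(x) g(x) e^{-x^2} dx."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += f(x) * g(x) * math.exp(-x * x)
    return total * h

def H0(x): return 1.0
def H1(x): return 2.0 * x
def H2(x): return 4.0 * x * x - 2.0   # first Hermite polynomials

assert abs(hermite_inner(H0, H2)) < 1e-6                          # orthogonal
assert abs(hermite_inner(H1, H1) - 2 * math.sqrt(math.pi)) < 1e-6  # ||H1||^2 = 2√π
```

The truncation to $[-10, 10]$ is harmless because $e^{-x^2}$ makes the tails negligible, which is exactly the point about fast-decaying weights.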
5,711
<p>I am teaching abroad to non-native English speakers with a large variance of language skills. </p> <p>I teach both pre-calculus and AP calculus (AB &amp; BC). For both of those classes I define the new terms. I use word problems, have them read from the textbook, take notes in English during class, have the students speak English in class, and teach the class in English. </p> <p>I am asking this question because my CP told me I need to embed more English into my lesson, so the students learn English. </p> <p>So what are some techniques to embed English into a mathematics lesson, and what skills and lessons have you learned from teaching mathematics in an ESL environment?</p>
J W
376
<p>I teach in an ESL environment where students come from a variety of countries. I try to be particularly sensitive to differences in notation/convention in mathematics, which seem to come up most often in the notation for the decimal point, what sign to use for multiplication and how logarithms are written.</p> <p>I have also learned to take my time and avoid using too much idiomatic or low-frequency vocabulary. I usually state difficult points twice, often reformulated slightly the second time, to give students a second chance to understand. Of course, this applies to a non-ESL environment too, but it is especially important when students are struggling to follow the subtleties of mathematics in a second or foreign language.</p> <p>That said, I do not attempt to turn my mathematics lessons into English lessons, apart from the occasional remark. I do, however, mention when mathematical English usage differs from everyday English usage, a classic example being the word "series." I would also say that getting students to work in small groups to solve problems will provide practice in speaking English. You can also set written assignments requiring students to write in full English sentences and not just the symbolic language of mathematics. For the latter, Kevin Houston's <a href="http://www1.maths.leeds.ac.uk/~khouston/pdf/htwm.pdf" rel="nofollow">How to Write Mathematics</a> may come in handy.</p>
5,711
<p>I am teaching abroad to non-native English speakers with a large variance of language skills. </p> <p>I teach both pre-calculus and AP calculus (AB &amp; BC). For both of those classes I define the new terms. I use word problems, have them read from the textbook, take notes in English during class, have the students speak English in class, and teach the class in English. </p> <p>I am asking this question because my CP told me I need to embed more English into my lesson, so the students learn English. </p> <p>So what are some techniques to embed English into a mathematics lesson, and what skills and lessons have you learned from teaching mathematics in an ESL environment?</p>
Tom Au
1,333
<p>One way to teach English along with calculus is to assign so-called "word problems." (E.g., if g is the acceleration due to gravity, how do you integrate or differentiate to get Newton's laws of motion?)</p> <p>It's a situation where students have to "translate" from English to math. Then they have to do the math.</p>
2,130,397
<p>If I want to find the power series representation of the following function:</p> <p>$$ \ln \frac{1+x}{1-x} $$</p> <p>I understand that it can be written as </p> <p>$$ \ln (1+x) - \ln(1-x) $$</p> <p>And I understand that if I now write in the power series representations for $\ln(1+x)$ and $\ln(1-x)$:</p> <p>$$\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>My textbook solution does an odd thing where it writes it out as</p> <p>$$\sum_{n=1}^\infty \frac{x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>$$2\sum_{n=1}^\infty \frac{x^{2n-1}}{2n-1} $$</p> <p>I have no idea how it got from the line where I have the power series representation for $\ln(1+x)$ and $\ln(1-x)$ to the last two lines. If anyone could help me link my part to the textbook solution I would really appreciate it! Thank you! </p>
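For what it's worth, here is one way to see the step the textbook skips (my own sketch, not the textbook's wording). First note that $-\frac{(-1)^{n-1}(-x)^n}{n} = \frac{x^n}{n}$, since $(-1)^{n-1}(-1)^n = -1$. Then:

```latex
\ln\frac{1+x}{1-x}
  = \sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n} + \sum_{n=1}^\infty \frac{x^{n}}{n}
  = \sum_{n=1}^\infty \frac{\bigl((-1)^{n-1}+1\bigr)x^{n}}{n}
  = 2\sum_{k=1}^\infty \frac{x^{2k-1}}{2k-1},
```

since $(-1)^{n-1}+1$ equals $2$ for odd $n$ and $0$ for even $n$; writing $n = 2k-1$ for the surviving odd indices gives the textbook's final form.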
Jan Eerland
226,665
<p>Well, when you substitute $t=\cos\left(x\right)$ then we know that:</p> <p>$$\cos^2\left(x\right)+\sin^2\left(x\right)=t^2+\sin^2\left(x\right)=1\tag1$$</p> <p>Another way, substitute $\text{u}=\tan\left(x\right)$:</p> <p>$$\int_0^\frac{\pi}{2}\frac{1}{1+\cos^2\left(x\right)}\space\text{d}x=\int_0^\infty\frac{1}{2+\text{u}^2}\space\text{d}\text{u}\tag2$$</p> <p>And after that substitute $\text{v}=\frac{\text{u}}{\sqrt{2}}$:</p> <p>$$\int_0^\infty\frac{1}{2+\text{u}^2}\space\text{d}\text{u}=\frac{1}{\sqrt{2}}\int_0^\infty\frac{1}{1+\text{v}^2}\space\text{d}\text{v}=\frac{1}{\sqrt{2}}\left(\lim_{\text{v}\to\infty}\arctan\left(\text{v}\right)-\arctan\left(0\right)\right)=\frac{\frac{\pi}{2}-0}{\sqrt{2}}\tag3$$</p>
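An added numeric check of the value obtained in (3), namely $\pi/(2\sqrt 2) \approx 1.1107$, via a simple midpoint rule (my addition, not part of the answer):

```python
import math

# Midpoint-rule approximation of ∫_0^{π/2} dx / (1 + cos²x).
n = 100_000
a, b = 0.0, math.pi / 2
h = (b - a) / n
approx = 0.0
for i in range(n):
    x = a + (i + 0.5) * h
    approx += 1.0 / (1.0 + math.cos(x) ** 2)
approx *= h

assert abs(approx - math.pi / (2 * math.sqrt(2))) < 1e-8
```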
2,435,596
<p>Suppose we have an unsigned $8$ bit number (min=$0$, max=$255$).</p> <p>the result of "$200 + 200$" overflows to $144$</p> <p>the result of "$100 - 200$" (under?)overflows to $156$</p> <p>Is there are mathematical symbol to represent this?</p>
Community
-1
<p>"Overflow" is the wrong term here:</p> <blockquote> <p>to flow over the edge or brim of (a receptacle, container, etc.).</p> </blockquote> <p><sub>ref: <a href="http://www.dictionary.com/browse/overflow" rel="nofollow noreferrer">http://www.dictionary.com/browse/overflow</a></sub></p> <p>In the cited example, you have an integer that is too large to fit in the data format you are using to represent an integer: specifically 8-bit binary numerals.</p> <p>The operation you are describing is not <em>overflow</em>, but a common reaction to overflow that a lot of computer environments practice: to store just the least significant 8 bits of the (two's complement) result.</p> <p>In computer lingo this is sometimes called "wrapping", envisioning the number line being wrapped up around a circle whose circumference has the 256 possible values you can store.</p> <p>Mathematically, this turns out to be the same as computing a reduced representative <em>modulo</em> 256. Or more accurately, you aren't performing addition of integers, but you are performing addition modulo 256.</p> <p>To specify that the operation and the result are equivalent modulo 256, one writes</p> <p>$$ 200 + 200 = 144 \pmod{256} $$</p> <p>Sometimes, people use the <em>remainder</em> operator (which, confusingly, is often notated as $\bmod$), and would write</p> <p>$$ (200 + 200) \bmod 256 = 144 $$</p> <p>Some standards even explicitly invoke this mathematical operation &mdash; <code>+</code> on 8-bit types isn't defined to be addition of integers, but addition modulo 256.</p> <hr> <p>Incidentally, some computing environments also practice other behaviors; e.g. you can often ask for <em>saturating</em> arithmetic, under which $200 + 200$ would simply produce the maximum value $255$. 
The C and C++ standards decree that overflow (of signed integer types) <em>shall not happen</em>; if you write a program where it does, undefined behavior results, meaning the standard allows the program to do anything at all.</p>
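A small added sketch of the wrapping behaviour described above. Python's `%` operator already reduces into $[0, 256)$ even for negative operands, so it reproduces both examples from the question:

```python
BITS = 8
MOD = 1 << BITS          # 256 representable values in 8 bits

def wrap(x):
    """Value actually stored after unsigned 8-bit wrap-around (mod 256)."""
    return x % MOD       # Python's % always returns a value in [0, MOD)

assert wrap(200 + 200) == 144   # the "overflow" example
assert wrap(100 - 200) == 156   # the "underflow" example
```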
3,125,263
<p>I can't solve the last exercises in a worksheet of Pre-Calculus problems. It says:</p> <p>Quadratic function <span class="math-container">$f(x)=ax^2+bx+c$</span> determines a parabola that passes through points <span class="math-container">$(0, 2)$</span> and <span class="math-container">$(4, 2)$</span>, and its vertex has coordinates <span class="math-container">$(x_v, 0)$</span>.</p> <p>a) Calculate coordinate <span class="math-container">$x_v$</span> of parabola's vertex.</p> <p>b) Calculate <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> coefficients.</p> <p>How can I get parabola's equation with this information and find what is requested?</p> <p>I would appreciate any help. Thanks in advance.</p>
Vinyl_cape_jawa
151,763
<p>HINTS:</p> <p>A graph is a collection of points where the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> coordinates of these points are in a relationship. We sometimes write <span class="math-container">$y$</span> instead of <span class="math-container">$f(x)$</span> to stress this fact. </p> <p>Your equation </p> <p><span class="math-container">$$ y=ax^2+bx+c $$</span> is this relation.</p> <p>Try plugging in the coordinates of your given points, which you know lie on this curve (so they will satisfy the linking relation between the coordinate pairs).</p> <p>I would definitely start with the point <span class="math-container">$(0,2)$</span>; zeros are always good to have around.</p> <p>You will get</p> <p><span class="math-container">$$ 2=a\cdot0^2+b\cdot0+c $$</span></p> <p>Then I would try with the other two points. </p> <p>Hope this helped!</p>
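For completeness, here is where the hints lead if followed to the end; note this gives away the exercise's answer, so treat it as a check rather than as the intended solution path (exact arithmetic via `fractions`, my addition):

```python
from fractions import Fraction as F

# f(x) = a x^2 + b x + c with f(0) = 2, f(4) = 2, and vertex (2, 0)
# (x_v = 2 by symmetry, halfway between x = 0 and x = 4).
c = F(2)                  # from f(0) = 2
# f(4) = 2  =>  16a + 4b + 2 = 2  =>  b = -4a
# f(2) = 0  =>  4a + 2b + 2 = 0  =>  4a - 8a + 2 = 0  =>  a = 1/2
a = F(1, 2)
b = -4 * a

def f(x):
    return a * x * x + b * x + c

assert (f(0), f(4), f(2)) == (2, 2, 0)   # all three conditions hold
```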
1,299,474
<p><img src="https://i.stack.imgur.com/EyYdm.jpg" alt="enter image description here"></p> <p>Here is an attempt at a solution:</p> <p><img src="https://i.stack.imgur.com/IWSah.jpg" alt="enter image description here"></p> <p>Since $f(x)&gt;0$, $f(x)&gt;\delta$ for all x between $1$ and $2$ </p> <p>Is this correct? </p>
xavierm02
10,385
<p>"By the algebra of continuous functions" doesn't look like a proper justification to me. I'd prefer something like "because it's the composition of two continuous functions".</p> <p>But anyway, taking the inverse is unnecessary.</p> <hr> <p>You theorem states that if $f$ is continuous on $[1,2]$, then:</p> <ul> <li><p>There are $m,M\in \Bbb R$ so that $\forall x \in [1,2], m \le f(x)\le M$</p></li> <li><p>There are $x_m,x_M\in [1,2]$ so that $f(x_m)=m$ and $f(x_M)=M$</p></li> </ul> <p>Now, since $\forall x\in [1,2], f(x) &gt; 0$, we have $m &gt; 0$ by taking $x=x_m$.</p> <p>So we have that $\forall x \in [1,2], 0 &lt; \frac{m}{2}&lt;m \le f(x)$ so that $\delta := \frac{m}{2}$ works.</p>
1,299,474
<p><img src="https://i.stack.imgur.com/EyYdm.jpg" alt="enter image description here"></p> <p>Here is an attempt at a solution:</p> <p><img src="https://i.stack.imgur.com/IWSah.jpg" alt="enter image description here"></p> <p>Since $f(x)&gt;0$, $f(x)&gt;\delta$ for all x between $1$ and $2$ </p> <p>Is this correct? </p>
Project Book
234,125
<p>I would've said something along the lines of: $f(x)$ attains its lower bound, say at $a \in [1,2]$; $f(a) &gt; 0$ by hypothesis, so let $\delta = f(a)/2$.</p>
806,532
<p>This question takes place in a general metric space $X$. </p> <p>Let $x$ be an interior* point of $E \subset X$ iff there exists a deleted neighborhood of $x$ that is contained in $E$. </p> <p>This is like the normal definition of "interior point", except it uses "deleted neighborhood" instead of "neighborhood", thus allowing a point not in $E$ to be an interior* point of $E$.</p> <p>My question is: why is this not the standard definition of "interior point"? I see a couple reasons that it would make a more elegant system.</p> <ol> <li>"Limit point" and "interior* point" are both defined in terms of deleted neighborhoods ($x$ is a limit point of $E$ iff all deleted neighborhoods of $x$ include some point of $E$). This is more symmetrical.</li> <li>(Note: I do not yet have a general/categorical notion of duality) "Limit point" and "interior* point" are more adequately dual, for $x$ is a limit point of $E$ iff $x$ is not an interior* point of the complement of $E$, whereas this does not hold for "limit point" and "interior point".</li> <li>The dual notions of closure and interior are more symmetrically defined using "interior* point". The closure is defined as the <b>union</b> of $E$ and the set of limit points of $E$, and the interior is defined as the <b>intersection</b> of $E$ and the set of interior* points of $E$. The duality between closure and interior is harder to see with the standard definition of interior as the set of interior points of $E$. Also the proof that the complement of the closure of $E$ is the interior of the complement of $E$ reduces to a few applications of DeMorgan's law.</li> </ol> <p>So why do people use "interior point" and not "interior* point"? </p>
user642796
8,348
<p>I can't speak much for categorical reasons, but here are my opinions from the topological point of view.</p> <p>I think a large reason is that the closure of a set is a much more fundamental concept than the derived set $A^\prime$ (<em>i.e.</em>, the set of all accumulation (limit) points of $A$). This is perhaps evidenced by the following definition of the closure:</p> <blockquote> <p>$\overline{A}$ is the smallest (with respect to $\subseteq$) closed set including $A$ as a subset.</p> </blockquote> <p>From this one can easily prove the characterisation that $x \in \overline{A}$ iff every (open) neighbourhood of $x$ meets $A$.</p> <p>While we have the equality $\overline{A} = A \cup A^\prime$, this is sort of an artificial way to look at it, since the points of $\overline{A}$ left out of $A^\prime$ are exactly the isolated points of $A$: those elements $x$ of $A$ which have an open neighbourhood which intersects $A$ only at $x$. We recover these points by not caring where open neighbourhoods of $x$ meet $A$, but only insisting that they do.</p> <p>Taking the closure as more primary than the derived set, without too much difficulty we can show that $$X \setminus \mathrm{Int} ( A ) = \overline{X \setminus A}$$ or, equivalently, $$X \setminus \overline{A} = \mathrm{Int} ( X \setminus A )$$ giving a very distinct connection between the concepts of interior and closure. The same connection would not hold with the $\mathrm{Int}^*$ operator.</p> <p>(More anecdotally, I cannot recall ever seeing the $\mathrm{Int}^*$ concept used. This gives at least circumstantial evidence to the idea that topologists have not found much use for it, which is a good reason for not basing too much on the idea.)</p>
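As a sanity check of the closure-interior duality $X \setminus \mathrm{Int}(A) = \overline{X \setminus A}$ stated in the answer (my own illustration, not part of it), one can verify the identity exhaustively on a small finite topological space, computing interior and closure independently from their definitions:

```python
from itertools import combinations

X = frozenset({1, 2, 3, 4})
# A topology on X: contains ∅ and X, closed under unions and intersections.
opens = {frozenset(), frozenset({1}), frozenset({1, 2}),
         frozenset({3, 4}), frozenset({1, 3, 4}), X}

def interior(A):
    """Union of all open sets contained in A."""
    return frozenset().union(*(U for U in opens if U <= A))

def closure(A):
    """Intersection of all closed sets containing A."""
    closed = [X - U for U in opens]
    return frozenset.intersection(*(C for C in closed if A <= C))

# X \ Int(A) = cl(X \ A) for every subset A of X.
for r in range(len(X) + 1):
    for combo in combinations(X, r):
        A = frozenset(combo)
        assert X - interior(A) == closure(X - A)
```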
3,717,932
<p>How can this identity convolution be shown?</p> <p><span class="math-container">$$\int^\infty_{-\infty} f(\tau)\delta(t-\tau)d\tau=f(t)$$</span></p> <p>I keep getting stuck in traps when trying to show this and need a bit of assistance</p>
Enrico M.
266,764
<p>There is no <em>proof</em> of the formula you're asking about; this is a definition. However, here is why the expression is used in so many contexts.</p> <p>Consider a sequence of functions <span class="math-container">$$ \delta_\epsilon(x)=\begin{cases}\frac{1}{2\epsilon},&amp;-\epsilon&lt;x&lt;\epsilon,\\ 0,&amp;\mbox{otherwise}. \end{cases} $$</span></p> <p>It is reasonable to expect that <span class="math-container">$$ \delta(x)=\lim_{\epsilon\to 0}\delta_\epsilon(x). $$</span></p> <p>Now consider the integral and assume that we are free to change the order of operations: <span class="math-container">$$ \int_{\mathbb R}\delta(x)f(x)dx=\lim_{\epsilon\to 0}\int_{\mathbb R}\delta_{\epsilon}(x)f(x)dx=\lim_{\epsilon\to 0}\int_{-\epsilon}^{\epsilon}\frac{1}{2\epsilon} f(x)dx=\lim_{\epsilon\to0}2\epsilon\cdot \frac{1}{2\epsilon} f(\xi), $$</span> where due to the mean value theorem <span class="math-container">$\xi\in(-\epsilon,\epsilon)$</span>.</p> <p>Hence we can conclude that <span class="math-container">$$ \lim_{\epsilon\to 0}f(\xi)=f(0), $$</span> which gives you a &quot;<em>proof</em>&quot; of the original formula.</p> <p>Now just repeat the same for <span class="math-container">$\delta (x-a)$</span>, or <span class="math-container">$\delta(t-\tau)$</span> if you prefer.</p>
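An added numeric illustration of the limiting argument (not part of the answer), taking $f = \cos$: the smeared integral $\int \delta_\epsilon(x) f(x)\,dx$ approaches $f(0) = 1$ as $\epsilon \to 0$:

```python
import math

def smeared(f, eps, n=20_000):
    """Midpoint rule for ∫ δ_ε(x) f(x) dx = (1/(2ε)) ∫_{-ε}^{ε} f(x) dx."""
    h = 2 * eps / n
    total = 0.0
    for i in range(n):
        total += f(-eps + (i + 0.5) * h)
    return total * h / (2 * eps)

# The deviation from f(0) = 1 shrinks like ε²/6 for f = cos.
assert abs(smeared(math.cos, 1e-2) - 1.0) < 1e-3
assert abs(smeared(math.cos, 1e-4) - 1.0) < 1e-7
```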
765,404
<p>Can anyone explain the partial derivative below:</p> <p>$\frac{\partial a^tX^{-1}b}{\partial X} = -X^{-t}ab^tX^{-t}$</p> <p>I was trying to derive this equation using the below formula, but failed.</p> <p><img src="https://i.stack.imgur.com/apR2q.png" alt="enter image description here"></p>
Community
-1
<p>Let $Y = X^{-1}$, since it's easier to type.</p> <p>Taking the differential of $I=Y\cdot X$ you'll find that $$dY = -Y\cdot dX\cdot Y$$</p> <p>Now rearrange $a'\cdot Y\cdot b$ into $ab':Y$, where $A:B = \operatorname{tr}(A'\cdot B)$ denotes the Frobenius inner product, and take the differential $$\eqalign{ d(ab':Y) &amp;= ab':dY \cr &amp;= -ab':(Y\cdot dX\cdot Y) \cr &amp;= -(Y'\cdot ab'\cdot Y'):dX \cr }$$ Passing to the derivative $$ \frac{\partial(ab':Y)}{\partial X} = -(Y'\cdot ab'\cdot Y') $$</p>
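An added finite-difference spot-check of the final formula on a concrete $2\times 2$ example (plain Python, no linear-algebra library assumed; not part of the answer):

```python
# Check that d(a' X^{-1} b)/dX_ij matches (-X^{-T} a b' X^{-T})_ij numerically.
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (p, q), (r, s) = M
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

def f(M, a, b):                       # a' M^{-1} b
    Y = inv2(M)
    return sum(a[i] * Y[i][j] * b[j] for i in range(2) for j in range(2))

X = [[2.0, 1.0], [0.5, 3.0]]
a, b = [1.0, -2.0], [3.0, 0.5]
Y = inv2(X)
eps = 1e-6
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]
        Xp[i][j] += eps
        numeric = (f(Xp, a, b) - f(X, a, b)) / eps
        # (-X^{-T} a b' X^{-T})_{ij} = -sum_{k,l} Y[k][i] a[k] b[l] Y[j][l]
        analytic = -sum(Y[k][i] * a[k] * b[l] * Y[j][l]
                        for k in range(2) for l in range(2))
        assert abs(numeric - analytic) < 1e-4
```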
1,618,042
<p>Is there any operation that makes the set of primes, i.e. $\{2, 3, 5, 7, \dots\}$, a group with identity $2$?</p>
Tsemo Aristide
280,301
<p>Hint: take any bijection between the set of primes and $\mathbb{Z}$ which takes $2$ to $0$, and transport the group structure of $(\mathbb{Z}, +)$ along this bijection.</p>
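An added concrete sketch of the hint (helper names are my own): enumerate the primes, fix a zig-zag bijection with $\mathbb{Z}$ sending $2 \mapsto 0$, and transport addition. The dictionaries below only cover finitely many primes, so this checks the group laws on samples, not in full:

```python
def primes(count):
    """First `count` primes by trial division (fine for small counts)."""
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

P = primes(101)
# Zig-zag bijection: indices 0, 1, 2, 3, 4, ... map to 0, 1, -1, 2, -2, ...
def to_z(i):
    return (i + 1) // 2 if i % 2 else -(i // 2)

z_of = {P[i]: to_z(i) for i in range(len(P))}   # prime  -> integer
p_of = {to_z(i): P[i] for i in range(len(P))}   # integer -> prime

def op(p, q):
    """Addition of Z transported to the primes; 2 plays the role of 0."""
    return p_of[z_of[p] + z_of[q]]

assert op(3, 2) == 3 and op(2, 7) == 7          # 2 is the identity
assert op(3, 5) == 2                             # 3 and 5 are inverses
assert op(op(3, 7), 11) == op(3, op(7, 11))      # associativity (sample)
```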
2,425,157
<p>How do I show that $$ \frac 12 \left(\frac 1 {3^2}+\frac 1{4^2}+ \frac 1{5^2}+\dots\right) &lt; \frac 1 {3^2} + \frac 1{5^2} + \frac1{7^2} +\dots \quad ?$$</p>
farruhota
425,072
<p>Alternatively: Note that the RHS: $$\frac{\pi ^2}{6}=1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+\frac{1}{6^2}+\frac{1}{7^2}+\cdots =\\ \left(1+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\cdots \right)+\frac{1}{2^2}\left(\underbrace{1+\frac{1}{2^2}+\frac{1}{3^2}\cdots}_{\frac{\pi^2}{6}} \right) \Rightarrow $$ $$\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\cdots=\frac{\pi^2}{6}-\frac{1}{2^2}\cdot\frac{\pi^2}{6}-1=\frac{\pi^2}{8}-1.$$ The LHS: $$\frac 12 \left(\frac 1 {3^2}+\frac 1{4^2}+ \frac 1{5^2}+\dots\right)=\frac12\cdot\left(\frac{\pi^2}{6}-1-\frac{1}{2^2}\right)=\frac{\pi^2}{12}-\frac{5}{8}.$$ Hence: $$\frac{\pi^2}{12}-\frac{5}{8}&lt;\frac{\pi^2}{8}-1 \iff 9&lt;\pi^2.$$</p>
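An added numeric cross-check of both the inequality and the closed forms derived above (truncated sums, so the tolerances are loose; not part of the answer):

```python
import math

N = 1_000_000   # truncation point; the neglected tails are O(1/N)
lhs = 0.5 * sum(1.0 / k**2 for k in range(3, N))      # (1/2)(1/3² + 1/4² + ...)
rhs = sum(1.0 / k**2 for k in range(3, N, 2))         # 1/3² + 1/5² + 1/7² + ...

assert lhs < rhs                                      # the inequality itself
assert abs(lhs - (math.pi**2 / 12 - 5/8)) < 1e-5      # closed form of the LHS
assert abs(rhs - (math.pi**2 / 8 - 1)) < 1e-5         # closed form of the RHS
```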