112,320
<p>What is the number of strings of length $235$ which can be made from the letters A, B, and C, such that the number of A's is always odd, the number of B's is greater than $10$ and less than $45$, and the number of C's is always even?</p> <p>What I can think of is </p> <p>$$\left(\binom{235}{235} - \left\lfloor235 - \frac{235}2\right\rfloor\right) \binom{235}{35} \binom {235}{ \lfloor 235/2\rfloor}\;.$$</p> <p>Thanks</p>
Gerry Myerson
8,269
<p>It's the coefficient of $x^{235}$ in $$(x+x^3+x^5+\cdots)(x^{11}+x^{12}+\cdots+x^{44})(1+x^2+x^4+\cdots)$$ Use the formula for sum of a geometric progression, then use the binomial theorem, it should fall right out. </p>
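As a quick sanity check, here is a short Python sketch (illustrative only, not from the original answer) that extracts this coefficient by truncated series multiplication and cross-checks it by direct enumeration. Note the coefficient counts the solution triples $(a,b,c)$ with $a+b+c=235$; to count the strings themselves one would weight each triple by the multinomial coefficient $\frac{235!}{a!\,b!\,c!}$ (equivalently, use exponential generating functions).

```python
N = 235

def series(pred):
    # truncated power series: coefficient of x^k is 1 when pred(k) holds
    return [1 if pred(k) else 0 for k in range(N + 1)]

def mul(p, q):
    # multiply two truncated series, discarding terms above x^N
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        if a:
            for j in range(N + 1 - i):
                r[i + j] += a * q[j]
    return r

A = series(lambda k: k % 2 == 1)        # x + x^3 + x^5 + ...
B = series(lambda k: 11 <= k <= 44)     # x^11 + x^12 + ... + x^44
C = series(lambda k: k % 2 == 0)        # 1 + x^2 + x^4 + ...

gf_count = mul(mul(A, B), C)[N]

# cross-check by direct enumeration of the triples (a, b, c)
direct = sum(1 for a in range(1, N + 1, 2)
               for b in range(11, 45)
               if N - a - b >= 0 and (N - a - b) % 2 == 0)

assert gf_count == direct
print(gf_count)
```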
4,821
<p>A quick bit of motivation: recently a question I answered quite a while ago ( <a href="https://math.stackexchange.com/questions/22437/combining-two-3d-rotations/178957">Combining Two 3D Rotations</a> ) picked up another (IMHO rather poor) answer. While it was downvoted by someone else and I strongly concur with their opinion, I haven't downvoted it myself because I'm leery of any perception of 'competitive' downvoting on questions that I've already answered; in general I tend to be <em>very</em> stingy with downvotes (certainly more than I probably should), but this seems like a particularly thorny case.</p> <p>What I'm wondering is whether this is a reasonable concern (or reasonable approach) on my part; do people concur that this is something to be worried about from an ethical perspective, or should a bad answer be downvoted regardless of whether it might be abstractly 'beneficial' to myself to do so?</p>
Qiaochu Yuan
232
<p>Bad answers should be downvoted if you feel you have the expertise to conclude that they are bad with some confidence. This is useful information you are communicating to other users, who may not have such expertise, and it is worth communicating. I agree that there is some conflict of interest here, but it's not enough to end someone's career or anything. </p>
761,726
<p>We know that the Banach space $X$ is infinite-dimensional; the conclusion we want to show is that its dual $X'$ is then also infinite-dimensional.</p> <p>Here $X'$ denotes the space of bounded linear functionals on $X$.</p>
Prahlad Vaidyanathan
89,789
<p>Again, Hahn-Banach to the rescue : Choose an infinite basis $\mathcal{B}$ of $X$. Start with $v_1\in \mathcal{B}$, and choose $f_1 \in X'$ such that $\|f_1\| = 1$ and $f_1(v_1) = 1$.</p> <p>Now choose $v_2 \in \mathcal{B}$ and use Hahn-Banach to produce $f_2 \in X'$ such that $f_2(v_1) = 0$ and $f_2(v_2) = 1$.</p> <p>Thus proceeding, construct $\mathcal{D}_n := \{f_1, f_2,\ldots, f_n\} \subset X'$ such that $$ f_j(v_i) = 0 \quad\forall i&lt;j, \text{ and } f_j(v_j) = 1 $$ Notice that the set $\mathcal{D}_n$ is linearly independent, and so $\text{dim}(X') \geq n$.</p> <p>This is true for any $n\in \mathbb{N}$, and so $X'$ is infinite dimensional.</p>
4,042,741
<p>I'm really struggling to understand the literal arithmetic being applied to find a complete residue system modulo <span class="math-container">$n$</span>. Below is the definition my textbook provides along with an example.</p> <blockquote> <p>Let <span class="math-container">$k$</span> and <span class="math-container">$n$</span> be natural numbers. A set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> is called a canonical complete residue system modulo <span class="math-container">$n$</span> if every integer is congruent modulo <span class="math-container">$n$</span> to exactly one element of the set</p> </blockquote> <p>I'm struggling to understand how to interpret this definition. Two integers, <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, are &quot;congruent modulo <span class="math-container">$n$</span>&quot; if they have the same remainder when divided by <span class="math-container">$n$</span>. So the set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> would be all integers that share a remainder with <span class="math-container">$b$</span> when divided by <span class="math-container">$n$</span>?</p> <p>Once I understand the definition, here is a simple example provided by my textbook</p> <blockquote> <p>Find three residue systems modulo <span class="math-container">$4$</span>: the canonical complete residue system, one containing negative numbers, and one containing no two consecutive numbers</p> </blockquote> <p>My first point of confusion is &quot;modulo <span class="math-container">$4$</span>&quot;. <span class="math-container">$a \bmod n$</span> is the remainder of Euclidean division of <span class="math-container">$a$</span> by <span class="math-container">$n$</span>. So what is meant by simply &quot;modulo <span class="math-container">$4$</span>&quot;? 
What literal arithmetic do I perform to find a complete residue system using &quot;modulo <span class="math-container">$4$</span>&quot;?</p>
David
893,427
<p>You can draw a picture for modulo 6. Get a piece of paper and place it with the longer dimension horizontal. With a dark pen and a ruler, draw seven horizontal lines completely across the page. In about the middle of the page write the integers from 0 to 5 ONE PER REGION, starting at the bottom.</p> <p>Enter all the other integers (or at least a lot of them) as follows: In each region, repeatedly add 6 to the number on the left until you run out of paper.</p> <p>And then return to the middle and repeatedly subtract 6 from every number on the right.</p> <p>(A complete residue system will be 6 integers, no two being from the same region.)</p>
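The picture can also be mimicked in a few lines of Python (an illustrative sketch, not part of the original answer): the six "regions" are exactly the six residue classes modulo $6$.

```python
# group the integers -18..17 into the six residue classes modulo 6,
# mirroring the six regions in the picture
classes = {r: [] for r in range(6)}
for n in range(-18, 18):
    classes[n % 6].append(n)   # Python's % returns a value in 0..5 here

for r, members in classes.items():
    print(r, members)

# a complete residue system: one representative from each region
system = [members[0] for members in classes.values()]
assert sorted(n % 6 for n in system) == [0, 1, 2, 3, 4, 5]
```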
252,272
<p>I'm working with traces of matrices. The trace is defined for square matrices and there are some useful rules, e.g. <span class="math-container">$\text{tr}(AB) = \text{tr}(BA)$</span> with <span class="math-container">$A$</span> and <span class="math-container">$B$</span> square; more generally, the trace is invariant under cyclic permutations.</p> <p>I was wondering if the formula <span class="math-container">$\text{tr}(AB) = \text{tr}(BA)$</span> holds even if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are rectangular, namely <span class="math-container">$A$</span> is <span class="math-container">$n$</span>-by-<span class="math-container">$m$</span> and <span class="math-container">$B$</span> is <span class="math-container">$m$</span>-by-<span class="math-container">$n$</span>.</p> <p>I figured out that if one completes the involved matrices to be square by adding zero entries in the right places, then the formula still works... but I want to be sure about this!</p>
martini
15,379
<p>Yes, it holds true. Let $A$ be an $n\times m$ matrix and $B$ an $m \times n$ matrix over the commutative ring $R$. We have \begin{align*} \mathrm{tr}(AB) &amp;= \sum_{i=1}^n (AB)_{ii}\\ &amp;=\sum_{i=1}^n \sum_{j=1}^m A_{ij}B_{ji}\\ &amp;= \sum_{j=1}^m \sum_{i=1}^n B_{ji}A_{ij}\\ &amp;= \sum_{j=1}^m (BA)_{jj}\\ &amp;= \mathrm{tr}(BA) \end{align*} So you can prove the formula just by computing.</p>
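The identity is easy to confirm numerically. A small Python sketch (illustrative only) with plain nested-list matrices:

```python
import random

def matmul(A, B):
    # nested-list matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

random.seed(0)
n, m = 3, 5
A = [[random.randint(-9, 9) for _ in range(m)] for _ in range(n)]  # n x m
B = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]  # m x n

# tr(AB) is the trace of an n x n matrix, tr(BA) of an m x m one,
# yet the two numbers coincide
assert trace(matmul(A, B)) == trace(matmul(B, A))
print(trace(matmul(A, B)))
```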
4,393,925
<p>Is the statement that the integral of <span class="math-container">$\tan(x)\,\mathrm{d}x$</span> equals the negative <span class="math-container">$\ln$</span> of the absolute value of <span class="math-container">$\cos(x)$</span> the same as the statement that it equals the <span class="math-container">$\ln$</span> of the absolute value of <span class="math-container">$\sec(x)$</span>? <span class="math-container">$$ \int\tan(x)\,\mathrm{d}x = -\ln\left|\cos(x)\right| + C \equiv \int\tan(x)\,\mathrm{d}x = \ln\left|\sec(x)\right|+C $$</span></p>
Átila Correia
953,679
<p>As @Randall has mentioned, the following property of logarithms is useful in the present context:</p> <p><span class="math-container">\begin{align*} \alpha\ln|x| = \ln|x|^{\alpha} \end{align*}</span></p> <p>At the given case, the proposed integral is given by <span class="math-container">\begin{align*} \int\tan(x)\mathrm{d}x = \int\frac{\sin(x)}{\cos(x)}\mathrm{d}x = -\int\frac{\mathrm{d}(\cos(x))}{\cos(x)} = -\ln|\cos(x)| + c \end{align*}</span></p> <p>Applying the above-mentioned property, we get that <span class="math-container">\begin{align*} \int\tan(x)\mathrm{d}x = \ln|\cos(x)|^{-1} + c = \ln|\sec(x)| + c \end{align*}</span></p>
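A quick numerical spot-check (illustrative only) that the two antiderivatives agree pointwise wherever $\cos x \neq 0$:

```python
import math

# -ln|cos x| and ln|sec x| are the same function wherever cos x != 0
for x in (0.3, 1.0, 2.0, 4.0):
    lhs = -math.log(abs(math.cos(x)))
    rhs = math.log(abs(1.0 / math.cos(x)))
    assert abs(lhs - rhs) < 1e-12
print("the two antiderivatives agree pointwise")
```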
2,351,883
<p>I am learning about tensor products. In trying to understand the definitions, I seem to be getting some contradiction.</p> <p>Consider the differential form</p> <p>$$ d x^{1} \wedge d x^{2} = d x^{1} \otimes d x^{2} - d x^{2} \otimes d x^{1}. $$</p> <p>If I use the symmetry property of the tensor product</p> <p>$$ d x^{2} \otimes d x^{1} = d x^{1} \otimes d x^{2} $$</p> <p>I get 0! This is clearly wrong. I think that I cannot change places inside the tensor product, but I cannot justify why. What is going on?</p>
Community
-1
<p>The symmetry property of the tensor product is not that $u \otimes v = v \otimes u$: it is that the linear map $S$ satisfying $S(u \otimes v) = v \otimes u$ is invertible (and the inverse has the same form).</p> <p>That is, on <em>vector spaces</em> (or vector bundles or other such things), $S$ is an isomorphism $U \otimes V \cong V \otimes U$, which is how the term "symmetry" applies.</p>
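One concrete way to see the difference (an illustrative Python sketch, identifying 2-tensors on $\mathbb{R}^2$ with $2\times 2$ matrices via outer products):

```python
# identify a 2-tensor u (x) v on R^2 with the matrix M[i][j] = u[i]*v[j]
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

dx1, dx2 = [1, 0], [0, 1]
t12 = outer(dx1, dx2)   # dx1 (x) dx2
t21 = outer(dx2, dx1)   # dx2 (x) dx1 -- a different tensor, not the same one

print(t12)  # [[0, 1], [0, 0]]
print(t21)  # [[0, 0], [1, 0]]

# the wedge dx1 ^ dx2 = dx1 (x) dx2 - dx2 (x) dx1 is therefore nonzero
wedge = [[t12[i][j] - t21[i][j] for j in range(2)] for i in range(2)]
print(wedge)  # [[0, 1], [-1, 0]]
```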
141,115
<p>I know only some basics about mathematica. However I need to write down the following sum: </p> <p>$\sum_{\{m_k\}_N}\prod_{k=1}^N\frac{1}{m_k}[T_k(Z(\tau))]^{m_k}$. </p> <p>Where $\{m_k\}_N$ denotes partitions of $N$ i.e. $\sum_{k=1}^Nkm_k=N$. The argument in brackets [..] is the Hecke Operator, for now not that important. My problem is more that I do not know how to write the sum over partitions. The Hecke Operator I would then just insert and I think this would not be the most difficult part. </p> <p>I know that usually I should write some code expressing my trials however I really have no idea how to handle the problem. Could someone please help me. </p>
Szabolcs
12
<p>This happens because texturing is done triangle by triangle. Polygons with more sides are broken down into triangles, and each triangle is textured individually. I believe your example is effectively equivalent to</p> <pre><code>pic = ExampleData[{"TestImage", "Mandrill"}]; Graphics[{Texture[pic], EdgeForm[Black], Polygon[{{0, 0}, {1, 0}, {2, 2}}, VertexTextureCoordinates -&gt; {{0, 0}, {2, 0}, {2, 2}}], Polygon[{{2, 2}, {0.5, 2.5}, {0, 0}}, VertexTextureCoordinates -&gt; {{2, 2}, {0, 2}, {0, 0}}]}] </code></pre> <p><img src="https://i.stack.imgur.com/E1pzq.png" alt="Mathematica graphics"></p>
3,108,847
<p>I am trying to prove that if <span class="math-container">$z\in \mathbb{C}-\mathbb{R}$</span> is such that <span class="math-container">$\frac{z^2+z+1}{z^2-z+1}\in \mathbb{R}$</span>, then <span class="math-container">$|z|=1$</span>.</p> <p>One method through which I approached this problem is to assume <span class="math-container">$z=a+ib$</span> and to see that <span class="math-container">$$\frac{z^2+z+1}{z^2-z+1}=1+\frac{2z}{z^2-z+1}$$</span>.</p> <p>So the problem reduces to showing that <span class="math-container">$|z|=1$</span> whenever <span class="math-container">$\frac{2z}{z^2-z+1}\in \mathbb{R}$</span></p> <p>I put <span class="math-container">$z=a+ib$</span> and then rationalised to get the imaginary part of <span class="math-container">$\frac{2z}{z^2-z+1}$</span> to be <span class="math-container">$\frac{b-b^3-a^2b}{\text{something}}$</span>. I equated this to zero and got my answer.</p> <p>Is there a better method?</p>
Bello Bello
643,633
<p>Let <span class="math-container">$w=\frac{z^2+z+1}{z^2-z+1}=1+\frac{2z}{z^2-z+1}$</span></p> <p>Since <span class="math-container">$\frac{z^2+z+1}{z^2-z+1}\in \mathbb{R}$</span>, we have <span class="math-container">$\operatorname{Im}(w)=0$</span> <span class="math-container">$$\iff w-\bar{w}=0$$</span></p> <p>Now, let's solve <span class="math-container">$1+\frac{2z}{z^2-z+1}=\overline{1+\frac{2z}{z^2-z+1}}$</span></p> <p><span class="math-container">$$\implies \frac{z}{z^2-z+1}=\overline{\frac{z}{z^2-z+1}}$$</span></p> <p>Since <span class="math-container">$z^2-z+1=0$</span> exactly when <span class="math-container">$z= \frac{1 \pm i \sqrt{3}}{2}$</span>, assume <span class="math-container">$z \neq \frac{1 \pm i \sqrt{3}}{2}$</span> <span class="math-container">$$\implies \frac{z}{z^2-z+1}= \frac{\overline{z}}{\overline{z^2}-\overline{z}+1}$$</span></p> <p><span class="math-container">$$\implies z(\overline{z^2}-\overline{z}+1)=\overline{z}(z^2-z+1) $$</span></p> <p>After some simplifications, we have <span class="math-container">$$ (z-\overline{z})(1-|z|^2)=0$$</span> <span class="math-container">$$\implies z-\overline{z}=0 \ \text{ or } \ |z|=1$$</span></p> <p>Since <span class="math-container">$z \notin \mathbb{R}$</span>, we have <span class="math-container">$z-\overline{z} \neq 0$</span>, so the second factor must vanish, giving <span class="math-container">$|z|=1$</span>.</p>
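A quick numerical illustration of the converse direction (illustrative sketch): for $z$ on the unit circle the ratio is indeed real, since with $z=e^{i\theta}$ both numerator and denominator equal $e^{i\theta}(2\cos\theta \pm 1)$.

```python
import cmath

# with z = e^{i*theta}, numerator and denominator are e^{i*theta}(2cos(theta) +/- 1),
# so their quotient is real (where the denominator is nonzero)
for theta in (0.4, 1.1, 2.0, 2.9):
    z = cmath.exp(1j * theta)
    w = (z * z + z + 1) / (z * z - z + 1)
    assert abs(w.imag) < 1e-12
print("the ratio is real on the unit circle, where defined")
```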
696,869
<p>Question: Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible.</p> <p>My work: Based on the section I read, I will treat $I$ as an identity matrix, which is a $1 \times 1$ matrix containing a $1$, or a square matrix whose main diagonal is all ones and the rest zeros. I will also treat $O$ as a zero matrix, which is a matrix of all zeros.</p> <p>So the question wants me to show that the square matrix $A$ will make the equation true. Okay, so I pick $A$ to be $[-1]$, a $1 \times 1$ matrix with a $-1$ inside. This was out of pure luck.</p> <p>This makes $A^2 = [1]$. This makes $2A = [-2]$. The identity matrix is $[1]$.</p> <p>$1 + -2 + 1 = 0$. I satisfied the equation with my choice of $A$, which makes my choice of the matrix $A$ an invertible matrix.</p> <p>I know matrix $A$ times the inverse of $A$ is the identity matrix.</p> <p>$[-1] * \text{inverse} = [1]$. So the inverse has to be $[-1]$.</p> <p>So the inverse of $A$ is $A$. </p> <p>It looks right mathematically speaking. </p> <p>Can anyone tell me how they would pick the square matrix $A$? I picked mine out of pure luck. </p>
Mitchell
529,914
<p>Since $A^2 + 2A + I = 0$, we have $(A + I)^2 = 0$, so $A + I$ is nilpotent. Note that this does <em>not</em> force $A = -I$: for example $A = \begin{pmatrix} -1 &amp; 1 \\ 0 &amp; -1 \end{pmatrix}$ also satisfies the equation.</p> <p>Invertibility follows directly from the equation itself:</p> <p>$A^2 + 2A = -I$</p> <p>$A(A+2I)= -I$, and thus $A(-A -2I) = I$, so $-A - 2I$ is the inverse of $A$.</p> <p>In the $1 \times 1$ case $A = [-1]$ this gives $-A - 2I = [1] - [2] = [-1] = A$, matching your observation that $A$ is its own inverse.</p>
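A small numerical check (illustrative sketch): the relation $A(-A-2I)=I$ holds for any solution of the equation, including solutions other than $-I$. The $2\times 2$ matrix below satisfies $A^2+2A+I=0$ without being $-I$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1, 0], [0, 1]]
A = [[-1, 1], [0, -1]]   # A + I is nilpotent, yet A != -I

# A satisfies A^2 + 2A + I = 0 ...
A2 = matmul(A, A)
lhs = [[A2[i][j] + 2 * A[i][j] + I[i][j] for j in range(2)] for i in range(2)]
assert lhs == [[0, 0], [0, 0]]

# ... and its inverse is -A - 2I
inv = [[-A[i][j] - 2 * I[i][j] for j in range(2)] for i in range(2)]
assert matmul(A, inv) == [[1, 0], [0, 1]]
print("A is invertible with inverse -A - 2I")
```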
84,076
<p>I think computation of the Euler characteristic of a real variety is not a problem in theory.</p> <p>There are some nice papers like <em><a href="http://blms.oxfordjournals.org/content/22/6/547.abstract" rel="nofollow">J.W. Bruce, Euler characteristics of real varieties</a></em>.</p> <p>But suppose we have, say, a very specific real nonsingular hypersurface, given by a polynomial, or a nice family of such hypersurfaces. What is the least cumbersome approach to computation of $\chi(V)$? One can surely count the critical points of an appropriate Morse function, but I hope it's not the only possible way.</p> <p>(Since I am talking about dealing with specific examples, here's one: $f (X_1,\ldots,X_n) = X_1^3 - X_1 + \cdots + X_n^3 - X_n = 0$, where $n$ is odd.)</p> <p><strong>Update:</strong> the original motivation is the following: the well-known results by Oleĭnik, Petrovskiĭ, Milnor, and Thom give upper bounds on $\chi (V)$ or $b(V) = \sum_i b_i (V)$ that are exponential in $n$. It is easy to see that this is unavoidable, e.g. $(X_1^2 - X_1)^2 + \cdots + (X_n^2 - X_n)^2 = 0$ is an equation of degree $4$ that defines exactly $2^n$ isolated points in $\mathbb{R}^n$. I was interested in specific families of real algebraic sets with large $\chi (V)$ or $b (V)$ <em>defined by one equation of degree $3$</em>. I couldn't find an appropriate reference with such examples and it seems like a proof for such example would require some computations (unlike the case of degree $4$).</p>
F. C.
10,881
<p>One possible way to compute Euler characteristic is to use its properties:</p> <ul> <li><p>$\chi$ is additive on disjoint unions</p></li> <li><p>$\chi$ is multiplicative on fibrations</p></li> <li><p>$\chi$ of the point is $1$</p></li> </ul> <p>So one has to either decompose the variety as a disjoint union, or prove that it fibers over some base, and then do the same for the pieces, until one reaches something with known Euler characteristic.</p> <p>The same kind of procedure can be used to count points over finite fields.</p>
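The closing remark about finite fields can be illustrated by a brute-force point count on the specific cubic from the question, here with $n=3$ and small primes (an illustrative sketch; over $\mathbb{F}_3$ every point lies on the hypersurface because $t^3\equiv t \pmod 3$).

```python
from itertools import product

# brute-force point count of x_1^3 - x_1 + x_2^3 - x_2 + x_3^3 - x_3 = 0 over F_p
def count_points(p, n=3):
    return sum(1 for v in product(range(p), repeat=n)
               if sum(t ** 3 - t for t in v) % p == 0)

for p in (3, 5, 7):
    print(p, count_points(p))

# over F_3, t^3 == t for every t, so all p^n = 27 points satisfy the equation
assert count_points(3) == 27
```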
791,372
<p>Hi, I am trying to solve this double integral $$ I:=\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt {xy}}\cos(x+y)\,dx\,dy=(\gamma+2\log 2)\pi^2. $$</p> <p>The constant in the result, $\gamma\approx 0.577$, is known as the Euler-Mascheroni constant. I was thinking to write $$ I=\Re \bigg[\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt{xy}}\, e^{i(x+y)}\, dx\, dy\bigg] $$ and to use Leibniz's rule for differentiation under the integral sign, writing $$ I(\eta, \xi)=\Re\bigg[ \int_0^\infty \int_0^\infty \ \frac{\log (\eta x)\log(\xi y)}{\sqrt{xy}} e^{i(x+y)}dx\,dy \bigg]. $$ After taking the derivatives it became obvious that I need to try another method, since the constants cancel out. How can we solve this integral $I$? Thanks. </p>
Pranav Arora
117,767
<p>The integral is $$I=\Re\left(\left(\int_0^{\infty} \frac{\ln x}{\sqrt{x}}e^{ix}\,dx\right)^2\right)$$ Evaluating the definite integral first: $$J=\int_0^{\infty} \frac{\ln x}{\sqrt{x}}e^{ix}\,dx$$ Use the substitution $\sqrt{x}=e^{i\pi/4}t$ to obtain: $$J=2e^{i\pi/4}\int_0^{\infty} e^{-t^2}\ln(t^2e^{i\pi/2})\,dt=2e^{i\pi/4}\int_0^{\infty} \left(2\ln te^{-t^2}+\frac{i\pi}{2}e^{-t^2}\right)\,dt$$ Using the following results: $$\int_0^{\infty} e^{-t^2}\ln t\,dt=-\frac{\sqrt{\pi}}{4}(\gamma+2\ln2)$$ $$\int_0^{\infty} e^{-t^2}\,dt=\frac{\sqrt{\pi}}{2}$$ we obtain: $$J=2e^{i\pi/4}\left(-\frac{\sqrt{\pi}}{2}(\gamma+2\ln 2)+\frac{i\pi}{2}\frac{\sqrt{\pi}}{2}\right)$$ $$\Rightarrow J = e^{i\pi/4}\sqrt{\pi}\left(\frac{i\pi}{2}-(\gamma+2\ln 2)\right)$$ Squaring $J$ and taking the real part, $$I=\pi^2(\gamma+2\ln 2)$$</p>
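The key ingredient $\int_0^\infty e^{-t^2}\ln t\,dt=-\frac{\sqrt{\pi}}{4}(\gamma+2\ln 2)$ is easy to confirm numerically (illustrative sketch using a midpoint rule):

```python
import math

gamma = 0.5772156649015329   # Euler-Mascheroni constant

# midpoint rule on (0, 10]; the ln t singularity at 0 is integrable,
# and midpoints never hit t = 0
n, T = 200_000, 10.0
h = T / n
approx = h * sum(math.exp(-((k + 0.5) * h) ** 2) * math.log((k + 0.5) * h)
                 for k in range(n))

closed = -math.sqrt(math.pi) / 4 * (gamma + 2 * math.log(2))
print(approx, closed)
assert abs(approx - closed) < 1e-4
```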
280,500
<p>I would like to pose a question about the range of validity of binomial theorem expansions. </p> <p>I know that for rational $n$ that is not a non-negative integer, $$ \left(1+x\right)^{n}=1+nx+\frac{n\left(n-1\right)}{2!}x^{2}+\dots, $$ where the range of validity is $\left|x\right|&lt;1$.</p> <p>My question is: if we tried to expand $\left(1+f(x)\right)^n$, where $f(x)$ is any arbitrary function defined on the reals, does it follow that we could just say that the range of validity of this expansion is $\left|f(x)\right|&lt;1$?</p> <p>For example, could I say that the range of validity of the binomial theorem expansion of $\left(1+(x+2x^3)\right)^n$ is just the set of values of $x$ that satisfy $\left|x+2x^3\right|&lt;1$? Or is it not as straightforward as doing such a substitution?</p> <p>Thanks in advance for your inputs.</p>
Beer
58,613
<p>Sorry to resurrect this post again. But I was following the suggestion above to find the range of validity of $f(x)=\log(1+\sin(x))$, and I obtained that $|\sin(x)|&lt;1 \implies |x|&lt;\frac{\pi}{2} \text{ or } |x-2\pi|&lt;\frac{\pi}{2}$ etc. </p> <p>When I tried to plot $f(x)$ vs its series expansion with Mathematica, I have the image found at <a href="https://i.stack.imgur.com/4ma3M.jpg" rel="nofollow noreferrer">http://i.stack.imgur.com/4ma3M.jpg</a> ( Sorry, I couldn't hotlink the picture as I do not have enough rep.)</p> <p>Wouldn't this imply that the answer obtained earlier only works for $|x|&lt;\frac{\pi}{2}$, and not for other intervals like $|x-2\pi|&lt;\frac{\pi}{2}$? How should I argue for the invalidity of the other intervals? </p> <p>On another note, is range of validity of a power series the same as its radius of convergence? </p> <p>Thanks again.</p>
1,823,187
<blockquote> <p>There are $n \gt 0$ different cells and $n+2$ different balls. Each cell cannot be empty. How many ways can we put those balls into those cells?</p> </blockquote> <p>My solution:</p> <p>Let's start by putting one different ball in each cell: for the first cell there are $n+2$ options to choose a ball, ..., for the $n$th cell there are $3$ options to choose a ball. Total: $\frac{(n+2)!}{2!}$</p> <p>Now we have $2$ balls left, and <strong>my question is</strong>: Can I change the way I choose? I mean, until now I chose a ball for each cell. Can I now choose a cell for each of the remaining balls? Is this legal? If the answer is yes, then let's choose a cell for each ball ($n$ options for picking a cell), so we get $n^2$ options to put $2$ balls in $n$ cells.</p> <p>We get: $\frac{(n+2)!}{ 2!} n^2$</p> <p>Is this correct?</p>
Joshhh
285,105
<p>A simpler example might be $f(x) = \frac{\sin{x}}{\sqrt{x}}$. Since $\frac{1}{\sqrt{x}}$ is continuously differentiable and monotonically decreasing to 0, and since $\sin{x}$ has a bounded and integrable anti-derivative, from Dirichlet's test $\int\limits_1^{\infty}\frac{\sin{x}}{\sqrt{x}}dx$ converges. Integration by parts shows that $\int\limits_1^{\infty}\frac{\sin^2{x}}{x}dx$ diverges.</p>
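A numerical illustration of the contrast (illustrative sketch; the cutoffs and tolerances below are arbitrary choices):

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# convergent case: the tail of sin(x)/sqrt(x) over [1000, 2000] is tiny
tail = midpoint(lambda x: math.sin(x) / math.sqrt(x), 1000, 2000, 400_000)
assert abs(tail) < 0.1

# divergent case: sin(x)^2/x keeps accumulating roughly (1/2)ln(b/a) per chunk
chunk = midpoint(lambda x: math.sin(x) ** 2 / x, 100, 1000, 400_000)
assert chunk > 0.9            # about (1/2) * ln(10)
print(tail, chunk)
```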
2,426,535
<p>In the book <em>Simmons, George F., Introduction to Topology and Modern Analysis</em>, page 98, question 2, the problem is: <strong><em>Let $X$ be a topological space, let $Y$ be a metric space, and let $f:A\subset X\rightarrow Y$ be a continuous map. Then $f$ can be extended in at most one way to a continuous mapping of $\bar{A}$ into $Y$.</em></strong></p> <p>I am trying to prove it this way. Let $x_0\in \bar{A}-A$ and suppose that there are two extensions $f$ and $g$ such that $f(x)=g(x)$ for $x\in A$. Now $f(x_0)\in \overline{f(A)}$ and $g(x_0)\in\overline {g(A)}$. So there exist sequences $\{f(x_n)\}$ and $\{g(y_n)\}$ that converge to $f(x_0)$ and $g(x_0)$ respectively, where $x_n$ and $y_n$ belong to $A$ for all $n$. Then I am stuck!! Please help me complete the proof.</p>
bc78
250,074
<p>To follow your ideas: Assume for a contradiction that <span class="math-container">$f(x_0)\neq g(x_0)$</span> and choose <span class="math-container">$\epsilon=\frac{d(f(x_0),g(x_0))}{4}&gt;0$</span>, where <span class="math-container">$d$</span> is the metric of the metric space <span class="math-container">$Y$</span>. Let <span class="math-container">$N=f^{-1}(B_{\epsilon}(f(x_0)))\cap g^{-1}(B_{\epsilon}(g(x_0)))$</span>. Since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous, <span class="math-container">$N$</span> is a neighborhood of <span class="math-container">$x_0$</span>. Since <span class="math-container">$x_0\in \overline{A}$</span>, <span class="math-container">$N\cap A\neq \emptyset$</span>. Moreover, <span class="math-container">$x_0\in \overline{N\cap A}$</span>: any neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x_0$</span> meets <span class="math-container">$N\cap A$</span>, because <span class="math-container">$U\cap N$</span> is again a neighborhood of <span class="math-container">$x_0$</span> and <span class="math-container">$x_0\in \overline{A}$</span>. Then, the continuity of <span class="math-container">$f$</span> implies that <span class="math-container">$f(\overline{N\cap A})\subset\overline{f(N\cap A)}=\overline{g(N\cap A)}\subset \overline{g(g^{-1}(B_{\epsilon}(g(x_0))))}\subset \overline{B_{\epsilon}(g(x_0))}$</span>. Then, the fact that <span class="math-container">$x_0\in \overline{N\cap A}$</span> implies that <span class="math-container">$$d(f(x_0),g(x_0))\leq \epsilon=\frac{d(f(x_0),g(x_0))}{4},$$</span> which is clearly a contradiction.</p>
537,021
<p>Say I divide a number by $6$; will the number modulo $6$ always be between $0$ and $5$? If so, for a number modulo any $N$, will the result always be between $0$ and $N - 1$?</p>
Prahlad Vaidyanathan
89,789
<p>Yes, this is <a href="http://en.wikipedia.org/wiki/Euclidean_division" rel="nofollow">Euclidean division</a>: For any $a, n \in \mathbb{Z}$, $n\neq 0$, there exist unique $q,r \in \mathbb{Z}$ such that $$ a = qn + r, \qquad 0 \leq r &lt; |n| $$ and by definition, $a\equiv r\pmod{n}$</p>
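Python's `%` operator follows exactly this convention for a positive modulus (note that some languages, such as C, can instead return a negative remainder when $a$ is negative):

```python
# a = q*n + r with 0 <= r < |n|: Python's divmod implements this convention
# for positive n
for a in range(-20, 21):
    q, r = divmod(a, 6)
    assert a == q * 6 + r
    assert 0 <= r <= 5
print("every remainder modulo 6 lies in 0..5")
```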
1,742,982
<p>I was trying to solve the equation using factorials as shown below, but now I'm stuck at this step and need help.</p> <p>$$C(n,3) = 2*C(n,2)$$</p> <p>$$\frac{n!}{3!(n-3)!} = 2\frac{n!}{2!(n-2)!}$$</p> <p>$$3! (n - 3)! = (n - 2)!$$</p>
sayan
312,099
<p>First we have to find the zeros of the equation $y=-(x^2)(x+5)(x-3)$. From the given equation we get the zeros: $0$ with multiplicity $2$, $-5$ with multiplicity $1$, and $3$ with multiplicity $1$.</p> <p>1. $y&gt;0$ when $-5&lt;x&lt;3$ (with $y=0$ at $x=0$)</p> <p>2. $y&lt;0$ when $-\infty&lt;x&lt;-5$ and $3&lt;x&lt;\infty$</p> <p>3. As $x$ tends to $-\infty$, $y$ tends to $-\infty$, and as $x$ tends to $+\infty$, $y$ tends to $-\infty$.</p> <p>Now we have to calculate the max value of $y$.</p> <p>$y=-(x^2)(x+5)(x-3)$</p> <p>$\implies y=-(x^2)({x^2}+2x-15)$</p> <p>$\implies y=-({x^4}+2{x^3}-15{x^2})$</p> <p>Then write the term ${x^4}+2{x^3}-15{x^2}$ in the form</p> <p>$y=-({f(x)}^4)+K_1 \text{ where } K_1&gt;0$</p> <p>or</p> <p>$y=-({f(x)}^4+{g(x)}^2)+K_2 \text{ where } K_2&gt;0$</p> <p>Then you will get $K_1=K_2$, the max value of $y$.</p> <p>Then you will get the range $R(y)=(-\infty,K_1]$.</p>
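A quick numerical check of the sign claims and the maximum (illustrative sketch; the grid and bounds are arbitrary choices):

```python
# sample y = -(x^2)(x+5)(x-3) on a grid to check the sign claims and
# estimate the maximum value K_1
def y(x):
    return -(x ** 2) * (x + 5) * (x - 3)

xs = [-8 + k * 0.001 for k in range(12001)]          # grid on [-8, 4]
assert all(y(x) >= 0 for x in xs if -5 <= x <= 3)    # claim 1 (y = 0 at x = 0)
assert all(y(x) < 0 for x in xs if x < -5 - 1e-9 or x > 3 + 1e-9)  # claim 2
print(max(y(x) for x in xs))                         # numeric estimate of K_1
```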
1,978,035
<p>In the Wikipedia page about quintics, there was a list of quintics that could be solved with trigonometric roots.</p> <p>For example:$$x^5+x^4-4x^3-3x^2+3x+1\tag1$$ has roots of the form $2\cos \frac {2k\pi}{11}$ $$x^5+x^4-16x^3+5x^2+21x-9=0\tag2$$ has roots of the form $\sum_{k=0}^{7}e^{\frac {2\pi i 3^k}{41}}$</p> <hr> <blockquote> <p><strong>Question:</strong> Is there a formula or underlying structure to find the roots of the quintics?</p> </blockquote> <hr> <p>Wikipedia says that the roots are the sums of the first $n$-th roots of unity with $n=10k+1$, so I'm guessing we first have to find $n$, or do we first have to find $k$?</p> <p>Any help is appreciated.</p>
Will Jagy
10,400
<p>ADDED: I remembered where I had seen this before, with lower degree; let $\alpha \neq 1$ be a seventh root of unity, $$ \alpha^7 = 1, $$ and take $$ \gamma = \alpha + \alpha^6, $$ $$ \gamma^2 = \alpha^2 + 2 + \alpha^5, $$ $$ \gamma^3 = \alpha^3 + 3 \alpha + 3 \alpha^6 + \alpha^4. $$ Therefore $$ \gamma^3 + \gamma^2 - 2 \gamma - 1 = \alpha^3 + \alpha^2 + \alpha + 1 + \alpha^6 + \alpha^5 + \alpha^4 = 0 $$ This is a cubic with three irrational roots, and is <a href="https://en.wikipedia.org/wiki/Casus_irreducibilis" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Casus_irreducibilis</a></p> <p><a href="https://i.stack.imgur.com/stt5l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/stt5l.jpg" alt="enter image description here"></a></p> <p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p> <p>I simply checked the one that seems easiest. Cute. Let $\omega \neq 1$ be an 11th root of unity, $$ \omega^{11} = 1, $$ so that a root is $$ \beta = \omega + \omega^{10}$$ $$ \beta^2 = \omega^2 + 2 + \omega^9, $$ $$ \beta^3 = \omega^3 + 3 \omega + 3 \omega^{10} + \omega^8, $$ $$ \beta^4 = \omega^4 + 4 \omega^2 + 6 + 4 \omega^9 + \omega^7, $$ $$ \beta^5 = \omega^5 + 5 \omega^3 + 10 \omega + 10 \omega^{10} + 5 \omega^8 + \omega^6. $$ Then $$ 1 + 3 \beta - 3 \beta^2 - 4 \beta^3 + \beta^4 + \beta^5 = 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} = \frac{\omega^{11} - 1}{\omega - 1} = 0 $$ </p> <p>The quintic above and a few more are listed in <a href="https://en.wikipedia.org/wiki/Quintic_function#Other_solvable_quintics" rel="nofollow noreferrer">Wiki Quintic</a>. I cannot tell the source of these, how the list may be extended, or whether this family gives all such possibilities. 
I checked with gp-pari, for each prime $p = 10n+1,$ the discriminant of their $x^5 + x^4 - 4n x^3 + a x^2 + b x + c$ was $\Delta = w^2 p^4,$ with $w \neq 0 \pmod p.$ For the quintic above, $p = 11,$ and $w = 1.$</p> <p>Here is an early post that discusses things, <a href="https://math.stackexchange.com/questions/1388/how-to-solve-a-cyclic-quintic-in-radicals">How to solve a cyclic quintic in radicals?</a></p> <p>Easy enough to find cubic monics of the form $x^3 + x^2 + b x + c$ with no rational roots and square discriminant, so that the Galois group is $\mathbb Z_3.$ Not that there is no loss in having either that form or $x^3 + bx + c,$ as a pure (integer) translation changes the coefficient of $x^2$ by $3,$ and the differences between $x^3 + x^2 + b x + c$ and $x^3 - x^2 + b x - c$ are not important for this problem.</p> <pre><code> sqrt d b c d sqrt d 7 -142 -701 49 7 7 -2 -1 49 7 13 -4 1 169 13 19 -6 -7 361 19 37 -12 11 1369 37 49 -16 -29 2401 49 56 -9 -1 3136 56 62 -10 -8 3844 62 65 -30 53 4225 65 65 -56 -181 4225 65 79 -26 41 6241 79 86 -14 8 7396 86 91 -16 13 8281 91 91 -44 -127 8281 91 91 -86 -337 8281 91 97 -32 -79 9409 97 104 -17 -25 10816 104 133 -82 259 17689 133 139 -46 103 19321 139 152 -25 31 23104 152 163 -54 -169 26569 163 172 -100 352 29584 172 182 -30 -64 33124 182 183 -20 -9 33489 183 201 -22 5 40401 201 203 -30 41 41209 203 209 -44 -121 43681 209 217 -72 209 47089 217 219 -24 -27 47961 219 247 -82 -311 61009 247 248 -72 -256 61504 248 254 -42 80 64516 254 273 -30 27 74529 273 287 -30 -43 82369 287 296 -49 -137 87616 296 301 -44 83 90601 301 309 -34 -61 95481 309 313 -104 371 97969 313 325 -30 -25 105625 325 349 -116 -517 121801 349 392 -65 167 153664 392 399 -44 69 159201 399 427 -142 601 182329 427 436 -36 -4 190096 436 446 -74 -256 198916 446 448 -37 -29 200704 448 453 -50 -123 205209 453 469 -156 -799 219961 469 496 -41 23 246016 496 520 -121 -545 270400 520 532 -44 -64 283024 532 566 -94 304 320356 566 579 -64 143 335241 579 581 -114 419 
337561 581 589 -44 -7 346921 589 611 -82 235 373321 611 628 -52 64 394384 628 632 -105 -433 399424 632 651 -72 -225 423801 651 679 -86 251 461041 679 688 -57 -121 473344 688 728 -65 -169 529984 728 776 -129 503 602176 776 813 -90 261 660969 813 832 -69 131 692224 832 845 -56 -25 714025 845 851 -86 233 724201 851 sqrt d b c d sqrt d </code></pre>
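For the record, a quick numerical confirmation (illustrative sketch) that the five numbers $2\cos(2k\pi/11)$ really are roots of the quintic above:

```python
import math

# check that beta_k = 2cos(2k*pi/11), k = 1..5, satisfy
# x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1 = 0
def p(x):
    return x ** 5 + x ** 4 - 4 * x ** 3 - 3 * x ** 2 + 3 * x + 1

roots = [2 * math.cos(2 * k * math.pi / 11) for k in range(1, 6)]
for beta in roots:
    assert abs(p(beta)) < 1e-9
print("all five values 2cos(2k*pi/11) are roots")
```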
Roman Chokler
38,328
<p>I am not sure if there is any simple formula for the root structure of polynomials having a solvable Galois group. The only thing is that the subnormal series tells you the order in which you adjoin radicals. It does not tell you which number in each field formed you take a radical of to make the next field, but if you make the correct choices, then the final field is the splitting field, and the roots of the polynomial will be just linear combinations, over the base field, of the elements of the splitting field.</p> <p>For example: $1\triangleleft G_2\triangleleft V\triangleleft A_4\triangleleft S_4$, where $G_2$ is any order-two group containing a double transposition, has the following.</p> <p>$\text{ord}(S_4/A_4)=2$</p> <p>$\text{ord}(A_4/V)=3$</p> <p>$\text{ord}(V/G_2)=2$</p> <p>$\text{ord}(G_2/1)=2$</p> <p>Thus the general quartic's splitting field is constructed by these steps:</p> <p>Step 1: Pick a number in the base field and adjoin its square root.</p> <p>Step 2: Pick a number in the new field and adjoin its cube root and $e^{2\pi i/3}$.</p> <p>Step 3: Pick a number in the new field and adjoin its square root.</p> <p>Step 4: Pick a number in the new field and adjoin its square root, thus producing the splitting field.</p> <p>By the way, for your two quintics, we know that: $$e^{\pm 2\pi i/5}=\frac{-1+\sqrt{5}\pm i\sqrt{10+2\sqrt{5}}}{4}$$ $$e^{\pm 4\pi i/5}=\frac{-1-\sqrt{5}\pm i\sqrt{10-2\sqrt{5}}}{4}$$</p> <p>For $x^5+x^4-4x^3-3x^2+3x+1=0$:</p> <p>If $(a,b,k)$ is any of $(-4,4,2)$, $(2,-4,4)$, $(4,0,6)$, $(-2,-2,8)$, $(0,2,10)$ then \begin{align*} 2\cos \frac{2k\pi}{11}=-\frac{1}{5} &amp;-\frac{1}{10}e^{a\pi i/5}\sqrt[5]{7832-2200\sqrt{5}+(1320+880\sqrt{5})i\sqrt{10+2\sqrt{5}}}\\ &amp;-\frac{1}{10}e^{-a\pi i/5}\sqrt[5]{7832-2200\sqrt{5}-(1320+880\sqrt{5})i\sqrt{10+2\sqrt{5}}}\\ &amp;-\frac{1}{10}e^{b\pi i/5}\sqrt[5]{7832+2200\sqrt{5}+(1320-880\sqrt{5})i\sqrt{10-2\sqrt{5}}}\\ &amp;-\frac{1}{10}e^{-b\pi 
i/5}\sqrt[5]{7832+2200\sqrt{5}-(1320-880\sqrt{5})i\sqrt{10-2\sqrt{5}}} \end{align*}</p> <p>For $x^5+x^4-16x^3+5x^2+21x-9=0$:</p> <p>If $(a,b,n)$ is any of $(0,-2,1)$, $(-4,0,2)$, $(2,2,4)$, $(-2,4,8)$, $(4,-4,16)$ then</p> <p>\begin{align*} \sum_{k=0}^7e^{2n\pi i3^k/41}=-\frac{1}{5} &amp;+\frac{1}{10}e^{a\pi i/5}\sqrt[5]{321768-8200\sqrt{5}+(9840+14760\sqrt{5})i\sqrt{10+2\sqrt{5}}}\\ &amp;+\frac{1}{10}e^{-a\pi i/5}\sqrt[5]{321768-8200\sqrt{5}-(9840+14760\sqrt{5})i\sqrt{10+2\sqrt{5}}}\\ &amp;+\frac{1}{10}e^{b\pi i/5}\sqrt[5]{321768+8200\sqrt{5}+(9840-14760\sqrt{5})i\sqrt{10-2\sqrt{5}}}\\ &amp;+\frac{1}{10}e^{-b\pi i/5}\sqrt[5]{321768+8200\sqrt{5}-(9840-14760\sqrt{5})i\sqrt{10-2\sqrt{5}}} \end{align*}</p>
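Not part of the original answer, but a quick numerical sanity check (a Python sketch of my own) that the claimed trigonometric roots really do satisfy the first quintic:

```python
import math

# Coefficients of x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1, highest degree first
coeffs = [1, 1, -4, -3, 3, 1]

def p(x):
    # Evaluate the quintic at x by Horner's scheme
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Claimed roots: 2*cos(2*k*pi/11) for k = 1, ..., 5
roots = [2 * math.cos(2 * k * math.pi / 11) for k in range(1, 6)]
assert all(abs(p(r)) < 1e-9 for r in roots)
assert abs(sum(roots) + 1) < 1e-9   # sum of roots = -1, matching the x^4 coefficient
```

All five values vanish to machine precision, consistent with the radical expressions above.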
137,571
<p>As the title, if I have a list:</p> <pre><code>{"", "", "", "2$70", ""} </code></pre> <p>I will expect:</p> <pre><code>{"", "", "", "2$70", "2$70"} </code></pre> <p>If I have</p> <pre><code>{"", "", "", "3$71", "", "2$72", ""} </code></pre> <p>then:</p> <pre><code>{"", "", "", "3$71", "3$71", "2$72", "2$72"} </code></pre> <p>And </p> <pre><code>{"", "", "", "3$71", "","", "2$72", ""} </code></pre> <p>should give </p> <pre><code>{"", "", "", "3$71", "3$71", "", "2$72", "2$72"} </code></pre> <p>This is my try:</p> <pre><code>{"", "", "", "2$70", ""} /. {p : Except["", String], ""} :&gt; {p, p} </code></pre> <p>But I don't know why it doesn't work. Poor ability of pattern match. Can anybody give some advice?</p>
Chris Degnen
363
<p>Borrowing kglr's pattern</p> <pre><code>x = {"", "", "1$71", "3$71", "", "", "2$72", "", "", "", ""}; Prepend[ Map[Last, Partition[x, 2, 1] /. {p : Except[""], ""} :&gt; {p, p}], First[x]] </code></pre> <blockquote> <p>{"", "", "1\$71", "3\$71", "3\$71", "", "2\$72", "2\$72", "", "", ""}</p> </blockquote> <p>One might also consider using <code>Apply</code> rather than <code>Map</code>:</p> <pre><code>Last@@@(Partition[x, 2, 1] /. {p : Except[""], ""} :&gt; {p, p})//Prepend[First@x] </code></pre>
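For readers outside Mathematica, the same pairwise rule can be sketched in Python (my own illustration; the function name is arbitrary). Note that only the original non-empty entries trigger a copy, matching the `Partition[x, 2, 1]` behavior where a filled-in value never propagates further:

```python
def fill_one(xs):
    """Copy a non-empty entry into the single empty slot right after it.

    Only the original (pre-fill) values trigger a copy, mirroring the
    pairwise Partition[..., 2, 1] rule from the Mathematica answer.
    """
    out = list(xs)
    for i in range(1, len(xs)):
        if xs[i] == "" and xs[i - 1] != "":
            out[i] = xs[i - 1]
    return out

print(fill_one(["", "", "", "3$71", "", "", "2$72", ""]))
# ['', '', '', '3$71', '3$71', '', '2$72', '2$72']
```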
792,390
<p>Is there a name or short way of writing of $\frac{n!}{m!}$? I've searched and the closest I could find was binomial coefficient. Is there any other way?</p>
Mikotar
149,637
<p>One way that I've seen is $(n)_m$. This is a common way to write this notion in combinatorics.</p>
792,390
<p>Is there a name or short way of writing of $\frac{n!}{m!}$? I've searched and the closest I could find was binomial coefficient. Is there any other way?</p>
Jack M
30,481
<p>That tends to be called a <em>falling</em> or <em>rising</em> factorial, and there are <a href="http://en.wikipedia.org/wiki/Pochhammer_symbol">multiple notations</a>, though none of them are standard enough that you could use one in a paper without defining it first.</p>
51,096
<p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
long trail
13,270
<p>Let $p_n$ be the $n$th prime and let $A_n$ be the set of positive integers whose smallest prime divisor is $p_n$ (throw $1$ in with $A_1$). This is basically applying the sieve of Eratosthenes to the entire set of positive integers. </p>
51,096
<p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
André Nicolas
6,312
<p>Maybe we can start slowly, by doing a decomposition of $\mathbb{N}$ into $2$ disjoint sets, say the <strong>odds</strong> and the <strong>evens</strong>.</p> <p>Let's now go for a decomposition into $3$ disjoint sets. Leave the odds alone, and decompose the evens into those divisible by $2$ but no higher power of $2$, and those divisible by $4$. To put it another way, we are using the odds, twice the odds, and the rest.</p> <p>Continue, and let's introduce some notation. Let $W_0$ be the set of odd positive integers. Let $W_1$ be the set of positive integers which are $2^1$ times an odd number. Let $W_2$ be the set of positive integers which are $2^2$ times an odd number. In general let $W_n$ be the set of integers which are $2^n$ times an odd number. </p> <p>It is clear that the $W_k$ are all infinite, pairwise disjoint, and that their union is all of $\mathbb{N}$.</p>
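A finite computational check of this decomposition (a Python sketch of my own, not part of the answer): every integer up to a bound lands in exactly one $W_k$, and the first few layers have the expected sizes.

```python
from collections import Counter

def layer(n):
    """Return k such that n lies in W_k, i.e. n = 2^k * (odd number)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Each n in {1, ..., N} gets exactly one layer, so the W_k are pairwise
# disjoint and their union covers everything up to N.
N = 1000
counts = Counter(layer(n) for n in range(1, N + 1))
assert sum(counts.values()) == N
assert counts[0] == 500   # W_0: the odds
assert counts[1] == 250   # W_1: twice the odds
assert counts[2] == 125   # W_2: four times the odds
```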
585,808
<p>As part of showing that $$ \sum_{n=1}^\infty \left|\sin\left(\frac{1}{n^2}\right)\right| $$ converges, I ended up with trying to show that $$ \left|\sin\left(\frac{1}{n^2}\right)\right|&lt;\frac{1}{n^2}, \quad n=1, 2, 3,\dots $$ since I know that the sum of the right hand side converges. But I can't show this. I've tried searching but I haven't been able to find anything.</p> <p>What I've tried is that firstly, the absolute values are not needed since $\sin x&gt;0$ if $0&lt;x&lt;1$. I rearranged a little bit: $$ \sin\left(\frac{1}{n^2}\right)-\frac{1}{n^2}&lt;0 $$ and the derivative is $$ \frac{2}{n^3}\left(1-\cos\left(\frac{1}{n^2}\right)\right)&gt; 0 $$ so my idea of showing that it is decreasing and negative for the first $n$ wouldn't work. </p> <p>How can I show this? Help is appreciated. </p> <p>Edit: Maybe I should add that I'm not <em>completely</em> sure it is true, but I tried it numerically and it seems like it. </p>
Community
-1
<p>Consider $f(x)=\sin x -x$. Then $f'(x)=\cos x-1\le 0$ for $x &gt; 0$, with equality only at isolated points, so $f$ is a strictly decreasing function.</p> <p>Since $f(0)=0$, this implies $f(x)&lt; 0$ for $x &gt; 0$, i.e., $\sin x &lt; x$ for $x &gt;0$.</p> <p>Thus, $\sin \big(\frac{1}{n^2}\big)&lt; \frac{1}{n^2}$.</p> <p>Moreover $0 &lt; \frac{1}{n^2} \le 1 &lt; \pi$, so $\sin\big(\frac{1}{n^2}\big) &gt; 0$, and therefore $\big|\sin\big(\frac{1}{n^2}\big)\big| &lt; \frac{1}{n^2}$.</p>
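Not that the analytic proof needs it, but the inequality is easy to spot-check numerically (a small Python sketch, my own addition):

```python
import math

# sin(1/n^2) is positive and strictly below 1/n^2, so the term-by-term
# comparison with the convergent series sum 1/n^2 applies.
# (The range stops well before floating-point rounding would make
# sin(x) and x indistinguishable for very small x.)
for n in range(1, 2000):
    x = 1 / n**2
    assert 0 < math.sin(x) < x
```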
1,219,462
<p>Proposition: Any polynomial of degree $n$ with leading coefficient $(-1)^n$ is the characteristic polynomial of some linear operator.</p> <p>I do not want to construct an 'explicit matrix' corresponding to the polynomial $(-1)^n(\lambda_n x^n+\cdots+ \lambda_0)$. However, I want to use induction to prove the existence holds. However, I have no idea since generally the polynomial is not necessary reducible. Can anyone give any hint?</p>
K_user
221,871
<p>You may get an idea from <a href="http://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow">Companion matrix</a>. </p>
2,253,645
<p>$(1,2)$ intersection $(2,3)=\{2\}$</p> <p>$(1,2)$ intersection $[2,3]=\{2\}$</p> <p>$\{1,2\}$ intersection $[1,2]=[1,2]$</p> <p>$\{1,2\}$ union $[1,2]=[1,2]$</p> <p>$\{1,2\}$ intersection $(1,3)$ intersection $[1,3)=(1,3)$</p> <p>$\{1,2\}$ union $(1,3)$ union $[1,3)=(1,3)$</p> <p>Is my answer correct? I find my self confused when I have sets $\{a,b\}$ and intervals like $(a,b)$ and $[a,b]$ to find intersection and union of them.</p>
niksirat
133,629
<p>Thanks for your answer @rych. Consider a spline $s$ of degree $k=2m+1$ for nodes $x_0&lt;\cdots&lt;x_n$ on the <strong>whole line</strong> $\mathbb{R}$ instead of $[x_0,x_n]$.</p> <p>Definition of spline: \begin{array}{l} 1.\qquad s_i(x)=s(x)|_{[x_i,x_{i+1}]}\in\mathbb{P}_{2m+1},\qquad i=0,1,\ldots,n-1,\\ 2.\qquad s\in\mathbb{C}^{2m}(\mathbb{R}). \end{array} Definition of <strong>natural spline</strong> based on "<em>Numerical Mathematics</em>" by A. Quarteroni, page 357: $$s^{(i)}(x_0)=s^{(i)}(x_n)=0,\quad i=m+1,\ldots,2m.\qquad (*)$$ $s_0\in\mathbb{P}_{2m+1}$ and $s_0\in\mathbb{C}^{2m}(\mathbb{R})$.</p> <p>Consider the Taylor expansion of $s_0$ at $x_0$: $$\begin{align} s_0(x)=&amp;\sum_{i=0}^{2m}{\frac{s_0^{(i)}(x_0)}{i!}(x-x_0)^i}+\left[\frac{s^{(2m+1)}(x_0+)-s^{(2m+1)}(x_0-)}{(2m+1)!}\right](x-x_0)^{2m+1}\\ \stackrel{(*)}{=}&amp; p_m(x)+b_0(x-x_0)^{2m+1},\qquad x\in[x_0,x_1]. \end{align}$$ How can I prove that $s|_{(-\infty,x_0],[x_n,\infty)}\in\mathbb{P}_m$?</p>
543,938
<p>Can anyone share a link to proof of this? $${{p-1}\choose{j}}\equiv(-1)^j(\text{mod}\ p)$$ for prime $p$.</p>
lab bhattacharjee
33,337
<p>$$\binom {p-1}j=\prod_{1\le r\le j}\frac{p-r}r$$</p> <p>Now, $\displaystyle p-r\equiv -r\pmod p\implies \frac{p-r}r\equiv-1\pmod p$</p>
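This identity is easy to verify computationally; here is a small Python check (my own addition, not part of the answer):

```python
from math import comb

# Verify binom(p-1, j) ≡ (-1)^j (mod p) for every 0 <= j < p,
# for a handful of primes p.
for p in [2, 3, 5, 7, 11, 13, 101]:
    for j in range(p):
        assert comb(p - 1, j) % p == (-1) ** j % p
```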
543,938
<p>Can anyone share a link to proof of this? $${{p-1}\choose{j}}\equiv(-1)^j(\text{mod}\ p)$$ for prime $p$.</p>
Marc van Leeuwen
18,880
<p>It is well known that $\binom pi\equiv0\pmod p$ for $0&lt;i&lt;p$. Now Pascal's recurrence gives $\binom{p-1}i\equiv-\binom{p-1}{i-1}\pmod p$ for those$~i$, and so $\binom{p-1}i\equiv(-1)^i\pmod p$ for $0\leq i&lt;p$ follows by an immediate induction on$~i$, with $\binom{p-1}0=1$ as base case.</p> <p>More generally this gives $\binom{p^k-1}i\equiv(-1)^i\pmod p$ for $0\leq i&lt;p^k$ for any positive integer$~k$ (since $\binom{p^k}i\equiv0\pmod p$ for $0&lt;i&lt;p^k$). This result would be a bit harder to prove by just reducing modulo$~p$ all factors in the expansion of $\binom{p^k}i$ than the basic case (since now some factors in numerator and denominator reduce to$~0$ modulo$~p$).</p>
4,202,490
<p>Trying to construct an example for a Business Calculus class (meaning trig functions are not necessary for the curriculum). However, I want to touch on the limit problem involved with the <span class="math-container">$\sin(1/x)$</span> function.</p> <p>I am sure there is a simple function, or there isn't... But would love some insight.</p> <p>I also understand that the functions that satisfy this condition are maybe way outside the scope of the course. I'm just looking for different &quot;flavors&quot; of showing limits that don't exist besides just showing the limit from the left and the limit from the right does not exists.</p>
Toby Bartels
63,003
<p>I endorse Ethan's answer: define the function by drawing a graph. <em>You</em> know that it's <span class="math-container">$ x \mapsto \sin ( 1 / x ) $</span>, but <em>they</em> don't need to know that.</p> <p>If you don't like that, hardmath has given an answer; but you can make it look more like <span class="math-container">$ \sin ( 1 / x ) $</span> (including being continuous) by using <span class="math-container">$ 2 \{ 1 / x \} - 1 $</span> when <span class="math-container">$ [ 1 / x ] $</span> is odd and <span class="math-container">$ 1 - 2 \{ 1 / x \} $</span> when <span class="math-container">$ [ 1 / x ] $</span> is even. Your students might prefer a more explicit piecewise-defined representation: <span class="math-container">$$ \cases { 1 - 2 / x &amp; for $ x &gt; 1 $, \\ 2 / x - 3 &amp; for $ 1 / 2 &lt; x \leq 1 $, \\ 5 - 2 / x &amp; for $ 1 / 3 &lt; x \leq 1 / 2 $, \\ \vdots &amp; etc. } $$</span> This is based on a piecewise-linear approximation of the sine function (or rather a cosine function with period <span class="math-container">$ 2 $</span>). You can make it differentiable by using a piecewise-quadratic approximation instead, twice differentiable using a piecewise-cubic, etc. (But to make it infinitely differentiable, you're back at the sine function.)</p>
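The piecewise-linear function described above can be written compactly with the fractional-part form; a Python sketch (the function name `tri` is my own) that reproduces the listed cases:

```python
import math

def tri(x):
    """Piecewise-linear substitute for the oscillating factor of sin(1/x).

    Returns 2*{1/x} - 1 when floor(1/x) is odd and 1 - 2*{1/x} when it
    is even, so e.g. tri(x) = 1 - 2/x for x > 1, matching the cases above.
    """
    t = 1.0 / x
    n = math.floor(t)
    frac = t - n
    return 2 * frac - 1 if n % 2 == 1 else 1 - 2 * frac
```

As $x \to 0^+$ this hits $1$ at $x = 1/(2k)$ and $-1$ at $x = 1/(2k+1)$, so it oscillates between $-1$ and $1$ and the limit at $0$ fails to exist for the same reason as with $\sin(1/x)$.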
2,744,068
<p>Theorem: The only idempotent matrix whose eigenvalues are all zero is the null matrix.</p> <p>Then how to prove this?</p>
quasi
400,434
<p>Suppose $A$ is idempotent, and $A\ne 0$. <p> Let $x$ be such that $Ax\ne 0$, and let $y=Ax$. <p> Then $Ay= A(Ax) = A^2x = Ax = y$, so $y$ is an eigenvector with eigenvalue $1$.</p>
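The argument can be traced on a concrete example; a tiny Python illustration of my own (hand-rolled 2×2 matrix arithmetic, to stay dependency-free):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

# A nonzero idempotent matrix (a non-orthogonal projection)
A = [[1, 1], [0, 0]]
assert matmul(A, A) == A          # idempotent

# Following the argument: pick x with Ax != 0, set y = Ax; then Ay = y,
# so y is an eigenvector with eigenvalue 1.
x = [0, 1]
y = matvec(A, x)
assert y != [0, 0]
assert matvec(A, y) == y          # eigenvalue 1
```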
2,744,068
<p>Theorem: The only idempotent matrix whose eigenvalues are all zero is the null matrix.</p> <p>Then how to prove this?</p>
copper.hat
27,978
<p>If all eigenvalues are zero, then the characteristic polynomial of $T$ is $\lambda^n$, so $T^n = 0$ by the Cayley-Hamilton theorem. Since $T=T^2 = \cdots = T^n$, we see that $T=0$.</p>
1,334,680
<p>How to apply principle of inclusion-exclusion to this problem?</p> <blockquote> <p>Eight people enter an elevator at the first floor. The elevator discharges passengers on each successive floor until it empties on the fifth floor. How many different ways can this happen</p> </blockquote> <p>The people are <strong>distinguishable</strong>.</p>
DRF
176,997
<p>What you are butting your head against here IMO is the fact that you need a meta-language at the beginning. Essentially at some point you have to agree with other people what your axioms and methods of derivation are, and these concepts cannot be intrinsic to your model.</p> <p>Usually I think we take axioms in propositional logic as understood, with the idea that they apply to purely abstract notions of sentences and symbols. You might sometimes see proofs of basic rules such as Modus Ponens in terms of a meta-language, i.e. not inside the system of logic but rather outside of it.</p> <p>There is a lot of philosophical fodder at this level, since really you need some sort of understanding between different people (real language perhaps, or possibly just shared brain structures which allow for some sort of inherent meta-deduction) to communicate the basic axioms.</p> <p>There is some extra confusion in the way these subjects are usually taught, since propositional logic will often be explained in terms of, for example, truth tables, which seem to already require having some methods for modeling in place. The actual fact IMO is that at the bottom there is a shared turtle of interhuman understanding which allows you to grasp what the axioms you define are supposed to mean and how to operate with them.</p> <p>Anyhow, that's my take on the matter.</p>
1,334,680
<p>How to apply principle of inclusion-exclusion to this problem?</p> <blockquote> <p>Eight people enter an elevator at the first floor. The elevator discharges passengers on each successive floor until it empties on the fifth floor. How many different ways can this happen</p> </blockquote> <p>The people are <strong>distinguishable</strong>.</p>
Stefan Perko
166,694
<p>As already pointed out, this is indeed circular. The only thing you can do is pretend it is not.</p> <p>A simple reason is, for example: how are you going to explain what a proof is? Well, you might just give a philosophical description, but it turns out that you can study proofs mathematically (as sequences of formulas satisfying some properties). If you believe that this is not circular, then you have to accept that "common sense" is always right by default, or something like this. Giving a philosophical description is, I believe, just hiding the fact that it's still circular.</p> <p>But it turns out this isn't a real issue. You've been doing math all your life and it somehow magically worked.</p> <p>When we start to do math rigorously we start with our existing knowledge, let's call it "Math $0$", that is based on what we know from school and "common sense", and then inside of that we formalize math again as a logical system "Math $1$". In a logic course we are just taking this more seriously than in introductory real analysis.</p> <p>Concerning "$=$": in classical mathematics equality is defined in its predicate logic, as is $\in$ as some relation satisfying some rules. (In other flavors of mathematics, I call them "structural" as opposed to "material", there is no "global" notion of equality and one works with equivalence relations in general (or sometimes apartness relations), but don't be confused by this, it's not important.)</p> <p>Ah, and one more thing: "$=$" is usually not "literally" a relation as in: a subset of a cartesian product, we just like to think of it that way.</p>
1,334,680
<p>How to apply principle of inclusion-exclusion to this problem?</p> <blockquote> <p>Eight people enter an elevator at the first floor. The elevator discharges passengers on each successive floor until it empties on the fifth floor. How many different ways can this happen</p> </blockquote> <p>The people are <strong>distinguishable</strong>.</p>
Pepijn Schmitz
249,930
<p>It's turtles all the way down.<a href="https://en.wikipedia.org/wiki/Turtles_all_the_way_down" rel="nofollow">*</a></p> <p>In other words: there is nothing at the bottom of mathematics but philosophy. It's a set of rules we came up with because it seemed useful, but it has no absolute grounding in reality or the universe. It functions well in the "shared delusion" that is our understanding of the universe, but there is no way we could tell whether it actually connects to reality.</p>
1,195,092
<p>The book I am using for my Introduction of Topology course is Principles of Topology by Fred H. Croom.</p> <p>I was given the following problem: </p> <blockquote> <p>Prove that a set $E$ is countable if and only if there is a surjection from $\mathbb{N}$ to $E$. </p> </blockquote> <p>I have a rough idea on how to prove this. Though the converse is tricky and I am having a hard time coming up with a proof to justify my claim. </p> <blockquote> <blockquote> <p>$P \rightarrow Q$: Let $E$ be a nonempty countable set. By definition of a countable set, this implies there is a bijection between $\mathbb{N}$ and $E$. Therefore there is a surjection from $\mathbb{N}$ to $E$. </p> <p>$Q \rightarrow P$: Let there be a surjection from $\mathbb{N}$ to $E$. Thus we have a function $f$ such that $f(\mathbb{N})=E$. Seeking to prove $E$ is countable, we have two cases: if $E$ is finite or if $E$ is infinite. </p> </blockquote> </blockquote> <p>This is where I am stuck. How would I take the two cases of a finite or infinite $E$? Would I need to find (create) such a function $f$ that makes the bijection from $E$ to $\mathbb{N}$, making $E$ countable? Am I on the right track? Any suggestions?</p> <hr> <p>Sorry for the rather long question. If is it rather confusing, let me know so I can clarify. I sincerely thank you for taking the time to read this question. I greatly appreciate any assistance you may provide.</p> <p><em>As a disclaimer, I put the General-Topology tag because this question was given in a topology course.</em> </p>
bobbym
77,276
<p>The absorbing chain matrix is</p> <p>$$\left( \begin{array}{ccccc} \text{} &amp; o &amp; 5 &amp; 55 &amp; 6 \\ o &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; \frac{1}{6} \\ 5 &amp; \frac{2}{3} &amp; 0 &amp; \frac{1}{6} &amp; \frac{1}{6} \\ 55 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 6 &amp; 0 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right)$$</p> <p>The states have been labeled o which represents a throw of (1,2,3,4), 5 which is the first 5 thrown, 55 which is 2 5's in a row and 6 which is the first 6 thrown.</p> <p>$$ \text{Q=}\left( \begin{array}{cc} \frac{2}{3} &amp; \frac{1}{6} \\ \frac{2}{3} &amp; 0 \\ \end{array} \right)$$</p> <p>$$\text{R=}\left( \begin{array}{cc} 0 &amp; \frac{1}{6} \\ \frac{1}{6} &amp; \frac{1}{6} \\ \end{array} \right)$$</p> <p>$$\text{0=}\left( \begin{array}{cc} 0 &amp; 0 \\ 0 &amp; 0 \\ \end{array} \right)$$</p> <p>Now B is the probability of ending in some absorbing state (first row) starting in some transient state (first column).</p> <p>$$\text{B=}\left( \begin{array}{ccc} \text{} &amp; 55 &amp; 6 \\ o &amp; \frac{1}{8} &amp; \frac{7}{8} \\ 5 &amp; \frac{1}{4} &amp; \frac{3}{4} \\ \end{array} \right)$$</p> <p>When we check the element o,6 we see the answer of $ \frac{7}{8} $</p>
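The matrix $B$ above can be reproduced with the standard fundamental-matrix formula $B=(I-Q)^{-1}R$ for absorbing chains; here is an exact-arithmetic check in Python (my own addition, not part of the answer):

```python
from fractions import Fraction as F

# Transient part Q (states o, 5) and absorbing part R (into 55, 6)
Q = [[F(2, 3), F(1, 6)],
     [F(2, 3), F(0)]]
R = [[F(0),    F(1, 6)],
     [F(1, 6), F(1, 6)]]

def inv2(M):
    # Exact inverse of a 2x2 matrix
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Fundamental matrix N = (I - Q)^{-1}, absorption probabilities B = N R
I_minus_Q = [[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]]
B = matmul(inv2(I_minus_Q), R)
print(B)  # starting in o: probability 1/8 of ending in 55, 7/8 of ending in 6
```

The first row comes out as $(1/8,\ 7/8)$ and the second as $(1/4,\ 3/4)$, matching the table.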
220,736
<p>I have reduced a problem I'm working on to something resembling a graph theory problem, and my limited intuition tells me that it's not so esoteric that only I could have ever considered it. <strong>I'm looking to see if someone knows of any related work.</strong> Here's the problem:</p> <hr> <p>Given a roadway map (directed graph) and a set of sensor activations that reports how many vehicles were on each edge at a given time, generate as many sets of routes (i.e. source edge, sink edge, and start-time) that would explain the given sensor activations as possible.</p> <hr> <p>Here is a trivial approach: Create a source and sink for each edge, and generate/consume as much traffic is needed to satisfy the count at that edge for that time-slice.</p> <p>The trivial case is useless to me, as I'm trying to study the dependencies across multiple intersections and multiple time-slices. What I need is a <em>likely</em> explanation, where the routes resemble the kind of routes that actual drivers would choose, and where the distribution of trip lengths also makes sense.</p> <p>If I could generate all such explanations, or at least a great number of them, I could then treat picking the "likely" one as a separate problem.</p> <p>Is there an algorithm that might be applicable here?</p>
Joseph O'Rourke
6,094
<p>Just to emphasize Thomas Richard's remark about smoothness, unless I've miscalculated, a $\frac{1}{4} L$-square leads to area $$2 \epsilon L - \epsilon^2 (4-\pi) &lt; 2 \epsilon L \;.$$ <hr /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://i.stack.imgur.com/9jfoc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9jfoc.jpg" alt="Epsilon"></a></p> <hr /> <p><em>Added</em>. This is to illustrate my "hypothesis" in the comments: Any smooth curve (not necessarily convex) such that its radius of curvature exceeds $\epsilon$ (at every point) leads to area $2 \epsilon L$. In the right figure, too-sharp curvature leads to gaps between the $\epsilon$-disk and the tube boundaries. <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.stack.imgur.com/ubsot.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubsot.jpg" alt="EpsilonCurvature"></a></p> <hr />
13,705
<p>Let $m$ be a positive integer. Define $N_m:=\{x\in \mathbb{Z}: x&gt;m\}$. I was wondering when does $N_m$ have a "basis" of two elements. I shall clarify what I mean by a basis of two elements: We shall say the positive integers $a,b$ generate $N_m$ and denote $N_m=&lt;a,b&gt;$ if every element $x\in N_m$ can be written as $x=\alpha a+\beta b$ where $\alpha,\beta$ are nonnegative integers (not both zero) <STRIKE>and no nonnegative linear combination (with at least one coefficient nonzero) of $a,b$ gives an element not in $N_m$.</STRIKE></p> <p>More specifically, I want to understand the set $\{(a,b,m):N_m=&lt;a,b&gt;\}$.</p> <p>An example would be the triple $(2,3,1)$. Every positive integer greater than 1 can be written as $2\alpha +3\beta$ for nonnegative integers $\alpha,\beta$, for if it is even we can write it as $2\alpha$ for some positive integer $\alpha$ and if it is odd and greater than $3$ then we can write it as $3+2\alpha$ for some positive integer $\alpha$ and of course $3=1\cdot 3$. FInally the smallest integer a positive linear combination of $2,3$ can generate is $2$.</p> <p>PS: Not quite sure what to tag this. Feel free to retag.</p> <p>EDIT: After Jason's answer, it seems the first version of this question is more interesting, where the last condition in paragraph 1 is relaxed.</p>
Aryabhata
1,102
<p>(Now that you have relaxed the last condition of paragraph 1)</p> <p>I suppose you mean nonnegative $\alpha$, $\beta$? (You seem to be allowing them to be zero.)</p> <p>In which case, this looks like the <a href="http://mathworld.wolfram.com/FrobeniusNumber.html" rel="nofollow">Frobenius Problem</a>, which says that for $a,b$ such that $(a,b)=1$, every number greater than $(a-1)(b-1) - 1 $ is representable in the form $a\alpha + b \beta$ with $\alpha \ge 0, \beta \ge 0$.</p> <p>Thus given an $m$, if a factorization of $m+1$ as $(a-1)(b-1)$ with $(a,b) = 1$ exists, then $N_m \subset \lt a, b \gt$.</p> <p>If you are also looking for $m$ such that $m \notin \lt a, b \gt$ (the least such $m$), then there are $m$ for which there are no such $a,b$, as $m$ must necessarily be of the form $(a-1)(b-1) - 1$ with $(a,b) = 1$. </p> <p>In fact we can characterize all such $m$.</p> <p>If $m$ is even, there is no such representation (as both $a-1$ and $b-1$ would have to be odd, making $a$ and $b$ both even, contradicting $(a,b)=1$).</p> <p>If $m$ is odd, then $m+1 = (2-1)(m+2 - 1)$ and thus $a = 2, b = m+2$ will work.</p>
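The Frobenius bound is easy to test by brute force; a hedged Python sketch (my own, not part of the answer):

```python
def representable(n, a, b):
    """True if n = a*alpha + b*beta for some integers alpha, beta >= 0."""
    return any((n - a * alpha) % b == 0 for alpha in range(n // a + 1))

# For coprime a, b the largest non-representable number is
# (a-1)(b-1) - 1 = ab - a - b, as the answer states.
for a, b in [(2, 3), (3, 5), (4, 7), (5, 9)]:
    frob = a * b - a - b
    assert not representable(frob, a, b)
    assert all(representable(n, a, b) for n in range(frob + 1, frob + 60))
```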
69,208
<p>Consider $f:\{1,\dots,n\} \to \{1,\dots,m\}$ with $m &gt; n$. Let $\operatorname{Im}(f) = \{f(x)|x \in \{1,\dots,n\}\}$.</p> <p>a.) What is the probability that a random function will be a bijection when viewed as $$f&#39;:\{1,\dots,n\} \to \operatorname{Im}(f)?$$</p> <p>b.) How many different function f are there for which $$\sum_{i=1}^n f(i)=k?$$ Not sure where to start and have no idea what $\operatorname{Im}(f)$ means. Please help me get started on this.</p>
zyx
14,120
<p>$x^4=0$ is correct if interpreted to mean "the identity holds up to terms of degree 4 or higher". This is because terms of degree $\geq 4$ were dropped from the power series for $\sin$ and $\cos$ at the start of the calculation. If terms of degree $\geq n$ are dropped from the series then this order $n$ approximation to $\sin^2 + \cos^2$ will be equal to $1 + ax^n + bx^{n+1} + \dots$ for some numerical coefficients $a,b,\dots$ (that depend on the value of $n$ used).</p> <p>Near $x=0$ the higher degree terms are smaller and an identity being correct up to higher order means greater accuracy.</p>
2,992,454
<p>Prove :</p> <blockquote> <p><span class="math-container">$f : (a,b) \to \mathbb{R} $</span> is convex, then <span class="math-container">$f$</span> is bounded on every closed subinterval of <span class="math-container">$(a,b)$</span></p> </blockquote> <p>where <span class="math-container">$f$</span> is convex if <span class="math-container">$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y), \forall x,y \in (a,b), \forall \lambda \in [0,1]$</span></p> <hr> <h2>Try</h2> <h3><span class="math-container">$f$</span> is bounded above</h3> <p>Let <span class="math-container">$J = [\alpha, \beta] \subset (a,b)$</span>. </p> <p><span class="math-container">$\forall c \in [\alpha, \beta]$</span>, <span class="math-container">$\exists \lambda_0 \in [0,1]$</span> s.t. <span class="math-container">$c = \lambda_0 \alpha + (1-\lambda_0) \beta$</span>, and</p> <p><span class="math-container">$$ f(c) = f(\lambda_0 \alpha + (1-\lambda_0) \beta) \le \lambda_0 f(\alpha) + (1-\lambda_0) f(\beta) $$</span></p> <p>thus, <span class="math-container">$f$</span> is bounded above on <span class="math-container">$[\alpha, \beta]$</span></p> <p>But I'm stuck at how I should proceed to prove that <span class="math-container">$f$</span> is bounded below.</p>
RRL
148,510
<p>Here is an analytic proof that a convex function <span class="math-container">$f:[\alpha,\beta] \to \mathbb{R}$</span> is bounded on the closed interval.</p> <p>Take <span class="math-container">$M = \max (f(\alpha),f(\beta))$</span>. Note that any <span class="math-container">$x \in [\alpha,\beta]$</span> is of the form <span class="math-container">$x = \lambda\alpha + (1-\lambda)\beta$</span> where <span class="math-container">$0 \leqslant \lambda \leqslant 1$</span>.</p> <p>Hence, we have for all <span class="math-container">$x \in [\alpha, \beta]$</span>, the upper bound</p> <p><span class="math-container">$$f(x) \leqslant \lambda f(\alpha) + (1-\lambda)f(\beta) \leqslant \lambda M + (1-\lambda)M = M$$</span></p> <p>To find a lower bound, write <span class="math-container">$x = \frac{\alpha+\beta}{2} + \theta$</span>. Since <span class="math-container">$\frac{\alpha+\beta}{2} = \frac{1}{2} \left(\frac{\alpha+\beta}{2} + \theta \right) + \frac{1}{2} \left(\frac{\alpha+\beta}{2} - \theta \right) $</span>, we have by convexity</p> <p><span class="math-container">$$f\left(\frac{\alpha+\beta}{2}\right) \leqslant \frac{1}{2}f \left(\frac{\alpha+\beta}{2} + \theta \right) + \frac{1}{2}f \left(\frac{\alpha+\beta}{2} - \theta \right),$$</span></p> <p>and</p> <p><span class="math-container">$$f(x) = f\left(\frac{\alpha+\beta}{2} + \theta \right) \geqslant 2f\left(\frac{\alpha+\beta}{2}\right)- f \left(\frac{\alpha+\beta}{2} - \theta \right)$$</span></p> <p>But from the upper bound we have <span class="math-container">$-f \left(\frac{\alpha+\beta}{2} - \theta \right) \geqslant -M$</span>, and so we have the lower bound</p> <p><span class="math-container">$$f(x) \geqslant 2f\left(\frac{\alpha+\beta}{2}\right)-M = m$$</span></p>
1,480,511
<p>I have included a screenshot of the problem I am working on for context. T is a transformation. What is meant by Tf? Is it equivalent to T(f) or does it mean T times f?</p> <p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
Gondim
237,793
<p>You have shown that $3$ and $5$ divide $a^2$, so $3$ and $5$ also divide $a$ (remember that if a prime divides a product, then it divides at least one of the factors). Now $a=15k$, so you get $b^2 = 15k^2$. Do the same thing to show that $15|b$ and get a contradiction.</p>
1,480,511
<p>I have included a screenshot of the problem I am working on for context. T is a transformation. What is meant by Tf? Is it equivalent to T(f) or does it mean T times f?</p> <p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
Ennar
122,131
<p>I'll give a proof of a more general statement: Let $p$ be prime and $a$ a positive integer coprime with $p$. Then $\sqrt{ap}$ is irrational.</p> <p>Assume that $ap = \frac{x^2}{y^2}$ where $x,y\in\mathbb Z$ are coprime. Then we have $apy^2 = x^2\implies p|x^2\implies p|x$, because $p$ is prime. Thus, $x = px'$ and $apy^2 = p^2x'^2\implies ay^2 = px'^2\implies p|ay^2$. Since $p$ and $a$ are coprime, we have that $p|y^2\implies p|y$. Contradiction, as $x,y$ are coprime by our assumption. Hence, $\sqrt{ap}$ is irrational.</p> <p>In particular, $\sqrt p$ is irrational for any prime $p$, as is $\sqrt{p_1p_2\cdots p_k}$ for distinct primes $p_1,\ldots,p_k$. Thus, $\sqrt{15} = \sqrt{3\cdot 5}$ is irrational.</p> <hr> <p>Also, because for positive reals we have $\sqrt{xy} = \sqrt{x}\sqrt y$, for any positive integer $n$ we can express $\sqrt n = m\sqrt{p_1\cdots p_k}$ for some distinct primes $p_1,\ldots,p_k$. If $k = 0$, $n$ is a perfect square, so $\sqrt n$ is an integer; otherwise, $\sqrt n$ is irrational, by our previous argument that $\sqrt{p_1\cdots p_k}$ is irrational. Now, since $15$ is not a perfect square, $\sqrt {15}$ must be irrational.</p>
93,458
<blockquote> <p>Let <span class="math-container">$n$</span> be a nonnegative integer. Show that <span class="math-container">$\lfloor (2+\sqrt{3})^n \rfloor $</span> is odd and that <span class="math-container">$2^{n+1}$</span> divides <span class="math-container">$\lfloor (1+\sqrt{3})^{2n} \rfloor+1 $</span>.</p> </blockquote> <p>My attempt:</p> <p><span class="math-container">$$ u_{n}=(2+\sqrt{3})^n+(2-\sqrt{3})^n=\sum_{k=0}^n{n \choose k}2^{n-k}(3^{k/2}+(-1)^k3^{k/2})\in\mathbb{2N} $$</span></p> <p><span class="math-container">$$ 0\leq (2-\sqrt{3})^n \leq1$$</span></p> <p><span class="math-container">$$ (2+\sqrt{3})^n\leq u_{n}\leq 1+(2+\sqrt{3})^n $$</span></p> <p><span class="math-container">$$ (2+\sqrt{3})^n-1\leq u_{n}-1\leq (2+\sqrt{3})^n $$</span></p> <p><span class="math-container">$$ \lfloor (2+\sqrt{3})^n \rfloor=u_{n}-1\in\mathbb{2N}+1 $$</span></p>
lhf
589
<p><em>Hint for the first part:</em> Consider $u_n = (2+\sqrt{3})^{n} + (2-\sqrt{3})^{n}$. Prove that $u_n$ is always an even integer and that $u_n = \lceil (2+\sqrt{3})^n \rceil$. Use that $(2-\sqrt{3})^{n}\to 0$.</p> <p>(This has now been incorporated into the edited question.)</p> <p><em>Hint for the second part:</em> Consider $v_n = (1+\sqrt{3})^{n} + (1-\sqrt{3})^{n}$. Find a second-order recursion for $v_n$ based on the quadratic equation that defines $1\pm\sqrt{3}$.</p>
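Following the second hint: since $1\pm\sqrt{3}$ are the roots of $x^2-2x-2=0$, the sequence $v_n$ satisfies $v_n = 2v_{n-1}+2v_{n-2}$ with $v_0=v_1=2$, and $\lfloor (1+\sqrt{3})^{2n}\rfloor + 1 = v_{2n}$ because $0\le(1-\sqrt{3})^{2n}&lt;1$. So the divisibility claim becomes $2^{n+1}\mid v_{2n}$, which a short exact-integer Python loop can check (my own sketch, not part of the hint):

```python
# v_n = (1+sqrt(3))^n + (1-sqrt(3))^n satisfies v_n = 2 v_{n-1} + 2 v_{n-2},
# since 1 ± sqrt(3) are the roots of x^2 - 2x - 2 = 0.
def v(n):
    a, b = 2, 2          # v_0, v_1
    for _ in range(n):
        a, b = b, 2 * b + 2 * a
    return a

# floor((1+sqrt(3))^{2n}) + 1 equals v_{2n}, so the claim
# "2^{n+1} divides floor((1+sqrt(3))^{2n}) + 1" reads:
for n in range(1, 60):
    assert v(2 * n) % 2 ** (n + 1) == 0
```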
853,308
<p>I want to calculate the expected number of steps (transitions) needed for absorption. So the transition probability matrix $P$ has exactly one column (let's say it is the first one) with a $1$ and the rest of that column $0$ as entries.</p> <p>$P = \begin{bmatrix} 1 &amp; * &amp; \cdots &amp; * \\ 0 &amp; \vdots &amp; \ddots &amp; \vdots \\ \vdots &amp; &amp; &amp; \\ 0 &amp; &amp; &amp; \end{bmatrix} \qquad s_0 = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\0 \\ \vdots \\ \\ \end{bmatrix}$</p> <p>How can I now find the expected (<i>mean</i>) number of steps needed for absorption for a given initial state $s_0$?</p> <p>EDIT: An explicit example here:</p> <p>$P = \begin{bmatrix} 1 &amp; 0.1 &amp; 0.8 \\ 0 &amp; 0.7 &amp; 0.2 \\ 0 &amp; 0.2 &amp; 0 \end{bmatrix} \qquad s_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \implies s_1 = \begin{bmatrix} 0.8 \\ 0.2 \\ 0 \end{bmatrix} \implies s_2 = \begin{bmatrix} 0.82 \\ 0.14 \\ 0.04 \end{bmatrix} \ldots $</p>
Mauro ALLEGRANZA
108,274
<p>You have not necessarily to think in term of "definition" as a sort of "replacement" of the intuitive notion of function with its set-theoretic counterpart. </p> <p>We can say instead that set theory provides a "model" for the mathematical concept of function. </p> <p>Functions was already known in mathematics well before set theory. With the set-theoretic definition of function as a set of ordered pairs we have a simple and very useful "model" for them which, of course, does not contradict the "usual" behaviour of functions in mathematics. </p> <p>See also : <a href="http://en.wikipedia.org/wiki/Function_%28mathematics%29" rel="nofollow">Function</a></p>
2,390,538
<p>The problem says:</p> <p>If every closed ball in a metric space $X$ is compact, show that $X$ is separable.</p> <p>I'm trying to use an equivalence in metric spaces that tells us: let X be the metric space, then the following are equivalent:</p> <p>X is 2nd countable</p> <p>X is Lindelöf</p> <p>X is separable</p> <p>I also thought about taking balls of a "big" radius and that they are disjoint, but I do not see how to make the set of balls countable.</p>
Matematleta
138,929
<p>For a different approach, choose $x\in X$ so that $X = \cup_{n \in \mathbb{N}} \overline B_x(n).$ Now consider, for fixed $n\in \mathbb N,$ the fact that the compact ball $\overline B_x(n)$ is totally bounded. This means that, for each $m\in \mathbb N$, there is a $\textit{finite}$ set of points $x_{m,n}\in \overline B_x(n)$ such that every $y\in \overline B_x(n)$ satisfies $d(y,x_{m,n})&lt;1/m$ for some point $x_{m,n}$ of that finite set. It follows that the collection $\left \{ x_{m,n} \right \}_{m\in \mathbb N}$ (over all $m$ and all points of the corresponding finite sets) is a countable dense subset of $\overline B_x(n)$, and therefore that $\left \{ x_{m,n} \right \}_{m,n\in \mathbb N}$ is a countable dense subset of $X$. Thus $X$ is separable. </p>
167,904
<p>In one the the answers to this thread " <a href="https://mathoverflow.net/questions/119850/">Can one embedd the projectivezed tangent space of CP^2 in a projective space? </a> " it was mentioned that " $\mathbb{P}(T\mathbb{P}^2)$ isomorphic to the variety of complete flags in the vector space $\mathbb{C^3}$ ". </p> <p>I'm having a hard time understanding why this is true, and can't seem to find any references. </p>
Steven Sam
321
<p>I will take $P^2$ to mean the space of lines in $C^3$. The tangent space at a particular point in $P^2$ (say represented by a line $L$) is the space of linear maps from $L$ to $C^3/L$. Let $\mathcal{L}$ be the line bundle whose total space is $\{(L,x) \in P^2 \times C^3 \mid x \in L\}$ (this is $O(-1)$ but it doesn't matter). Then from the first fact, the tangent bundle is $Hom(\mathcal{L}, C^3/\mathcal{L}) \cong \mathcal{L}^* \otimes (C^3/\mathcal{L})$ where I use $C^3$ to be the trivial bundle $C^3 \times P^2$.</p> <p>The projectivization of a vector bundle ignores tensoring with line bundles, so $P(TP^2)$ is the same as the projectivization of $C^3/\mathcal{L}$. So its points correspond to a choice of line $L$ and a choice of line in $C^3/L$. The latter is equivalent to a 2-dimensional subspace in $C^3$ containing $L$.</p> <p>In general, $P(TP^n)$ is just the space of partial flags of type $(1,2)$ in $C^{n+1}$.</p> <p>EDIT. A derivation of the above fact for tangent spaces of projective spaces can be found here: <a href="http://concretenonsense.wordpress.com/2009/08/17/tangent-bundle-of-the-grassmannian/" rel="nofollow">http://concretenonsense.wordpress.com/2009/08/17/tangent-bundle-of-the-grassmannian/</a></p>
720,823
<p>I was solving some linear algebra problems and I have a quick question about one problem. I'm given the matrix $A = \{a_1,a_2\}$ where $a_1=[1,1]$ and $a_2=[-1,1]$. I need to solve for the eigenvalues of the matrix $A$ over the complex numbers $\mathbb{C}$. I solved and got the eigenvalue $1-i$ and $1+i$. Now this is where I get confused. I was taught that after finding the eigenvalue you plug it into the equation $[A - \lambda I]$. However, the book is putting it into the this equation. $[\lambda I - A]$. I understand the equations are exactly the same because they are both set to zero. You just have to multiply one by negative one. However, why for this problem are they choosing this other equation. It is the first time I've seen it. Also, once I plug it in I obtain $B=\{a_1,a_2\}$ where $a_1=[i,1]$ and $a_2=[-1,i]$. How do I reduce this matrix so that I have one row completely zero?</p>
Studentmath
125,085
<p>Regarding your first question: as you have said, it's exactly the same. I have seen $[xI-A]$ used mostly for finding the eigenvalues, and the other way around for finding the kernel of a given eigenvalue (for the matching eigenvectors, etc.). It's more comfortable to work with positive $x$'s and negative numbers than the other way around, I guess.</p> <p>Regarding your second question: Hint: consider what happens if you multiply the first row by the scalar $i$ over $\mathbb{C}$. Then you can subtract rows from each other and get one row completely zero, as one is linearly dependent on the other.</p>
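<p>(Added note, not from the original answer: the eigenvalues are easy to confirm numerically, e.g. with NumPy; the routine may return them in either order.)</p>

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

eigvals = sorted(np.linalg.eigvals(A), key=lambda z: z.imag)
assert np.allclose(eigvals, [1 - 1j, 1 + 1j])  # the eigenvalues are 1 ± i
```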
1,429,853
<p>In a number sequence, I've figured the $n^{th}$ element can be written as $10^{2-n}$.</p> <p>I'm now trying to come up with a formula that describes the sum of this sequence for a given $n$. I've been looking at the geometric sequence, but I'm not sure how connect it.</p>
Community
-1
<p>If it is $10^{2-n}$, then the initial term is $10$ and the common ratio is $\frac 1{10}$. Now, can you get it?</p>
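<p>(A quick check, not part of the original hint: the partial sums of $\sum_{n=1}^{N} 10^{2-n}$ agree with the geometric-sum formula $a\,\frac{1-r^N}{1-r}$ with $a=10$, $r=\frac1{10}$.)</p>

```python
a, r = 10.0, 0.1  # first term and common ratio of 10^(2-n), n = 1, 2, ...

for N in range(1, 12):
    direct = sum(10.0 ** (2 - n) for n in range(1, N + 1))
    closed = a * (1 - r ** N) / (1 - r)
    assert abs(direct - closed) < 1e-9  # both compute the same partial sum
```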
1,775,787
<p>How many different numbers can be obtained by rearranging the digits of 1,273,421,695?</p> <p>Would it be C(10,2)*C(10,2)*P(8,6) = 40 million, 824 thousand</p> <p>Or would it be (10*10*8*8*6*5*4*3*2*1)/(2!*2!) = 1 million 152 thousand</p> <p>Thanks in advance for the help.</p>
Justin
337,806
<p>Using permutations, I'm pretty sure that it would be 10! which is equal to (10*9*8*7*6*5*4*3*2*1). </p> <p>Hope that helps, Justin</p>
1,775,787
<p>How many different numbers can be obtained by rearranging the digits of 1,273,421,695?</p> <p>Would it be C(10,2)*C(10,2)*P(8,6) = 40 million, 824 thousand</p> <p>Or would it be (10*10*8*8*6*5*4*3*2*1)/(2!*2!) = 1 million 152 thousand</p> <p>Thanks in advance for the help.</p>
Ross Millikan
1,827
<p>You must place the two <span class="math-container">$1$</span>s, two <span class="math-container">$2$</span>s and <span class="math-container">$6$</span> other digits into <span class="math-container">$10$</span> positions. You can place the <span class="math-container">$1$</span>s in <span class="math-container">$10\choose 2$</span> ways, then the <span class="math-container">$2$</span>s in <span class="math-container">$8 \choose 2$</span> ways. The remaining <span class="math-container">$6$</span> digits must be placed in the remaining <span class="math-container">$6$</span> positions, so <span class="math-container">$6!$</span> ways. Total <span class="math-container">$907200$</span>. Alternatively, compute <span class="math-container">$\frac{10!}{2!2!}$</span>: here <span class="math-container">$10!$</span> is the number of ways of arranging <span class="math-container">$10$</span> digits, and the two factors of <span class="math-container">$2!$</span> remove the duplications from the repeating digits.</p> <p>Posting almagest's comment as an answer so we have a correct one.</p>
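<p>(Added confirmation of both counts, not part of the original answer; Python's standard <code>math</code> module suffices.)</p>

```python
from math import comb, factorial

# Place the two 1s, then the two 2s, then the six distinct digits...
by_placement = comb(10, 2) * comb(8, 2) * factorial(6)
# ...or take all 10! arrangements and divide by 2! for each repeated digit.
by_division = factorial(10) // (factorial(2) * factorial(2))

assert by_placement == by_division == 907200
```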
4,600,992
<p>I have two sequences of random variables <span class="math-container">$\{ X_n\}$</span> and <span class="math-container">$\{Y_n \}$</span>. I know that <span class="math-container">$X_n \to^d D, Y_n \to^d D$</span>. Can I conclude that <span class="math-container">$X_n - Y_n \to^p 0$</span>?</p> <p>If I cannot, what other conditions do I need for the conclusion to hold? Thanks.</p>
donaastor
251,847
<p>Here is a different way to prove that <span class="math-container">$f(x)$</span> approaches infinity. This one doesn't have any series which could confuse you. Your intuition &quot;failed&quot; (tricked you) because you forgot to multiply your brackets with that <span class="math-container">$x$</span> outside. If you compared the coefficients after that, you would guess the correct answer. For all <span class="math-container">$x&gt;2$</span> we have: <span class="math-container">$$f(x)=x\int_1^x\frac{e^t}{t}dt-e^x&gt;x\int_1^{\frac{x}{2}}\frac{e^t}{\frac{x}{2}}dt+x\int_{\frac{x}{2}}^x\frac{e^t}{x}dt-e^x=$$</span> <span class="math-container">$$=x\int_1^{\frac{x}{2}}\frac{2e^t}{x}dt+x\int_{\frac{x}{2}}^x\frac{e^t}{x}dt-e^x=x\int_1^{\frac{x}{2}}\frac{e^t}{x}dt+x\int_1^{\frac{x}{2}}\frac{e^t}{x}dt+x\int_{\frac{x}{2}}^x\frac{e^t}{x}dt-e^x=$$</span> <span class="math-container">$$=x\int_1^{\frac{x}{2}}\frac{e^t}{x}dt+x\int_{1}^x\frac{e^t}{x}dt-e^x=\int_1^{\frac{x}{2}}e^tdt+\int_{1}^xe^tdt-e^x=$$</span> <span class="math-container">$$=e^{\frac{x}{2}}-e+e^x-e-e^x=e^{\frac{x}{2}}-2e.$$</span> The last expression obviously approaches <span class="math-container">$\infty$</span>. The spirit of this solution was based on <span class="math-container">$$x\int_1^x\frac{e^t}{t}dt&gt;x\int_1^x\frac{e^t}{x}dt,$$</span> but it wasn't enough, so I split the integral into <span class="math-container">$2$</span> parts and did this trick on both of them. To prove that <span class="math-container">$f$</span> is increasing, we can simply take its derivative: <span class="math-container">$$f'(x)=\int_1^x\frac{e^t}{t}dt+x\frac{e^x}{x}-e^x=\int_1^x\frac{e^t}{t}dt&gt;0.$$</span> Then the third way to prove that it approaches infinity would be to calculate, once again, its second derivative: <span class="math-container">$$f''(x)=\frac{d}{dx}f'(x)=\frac{e^x}{x}&gt;0,$$</span> and show that <span class="math-container">$f$</span> is convex. 
Any convex function that is strictly increasing must approach infinity.</p>
78,243
<p>A positive integer $n$ is said to be <em>happy</em> if the sequence $$n, s(n), s(s(n)), s(s(s(n))), \ldots$$ eventually reaches 1, where $s(n)$ denotes the sum of the squared digits of $n$.</p> <p>For example, 7 is happy because the orbit of 7 under this mapping reaches 1. $$7 \to 49 \to 97 \to 130 \to 10 \to 1$$ But 4 is not happy, because the orbit of 4 is an infinite loop that does not contain 1. $$4 \to 16 \to 37 \to 58 \to 89 \to 145 \to 42 \to 20 \to 4 \to \ldots$$</p> <p>I have tabulated the happy numbers up to $10^{10000}$, and it appears that they have a limiting density, although the rate of convergence is slow. Is it known if the happy numbers do in fact have a limiting density? In other words, does $\lim_{n\to\infty} h(n)/n$ exist, where $h(n)$ denotes the number of happy numbers less than $n$?</p> <p><img src="https://dl.dropbox.com/u/39561574/happiness.jpg" alt="Relative frequency of happy numbers up to 1e10000"></p>
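<p>(Added sketch, not part of the original question: the tabulation is easy to reproduce. The loop below relies on the standard fact that every orbit of $s$ reaches either $1$ or the cycle through $4$.)</p>

```python
def s(n):
    """Sum of the squared digits of n."""
    return sum(int(d) ** 2 for d in str(n))

def is_happy(n):
    # Every orbit reaches 1 (happy) or enters the cycle
    # 4 -> 16 -> 37 -> 58 -> 89 -> 145 -> 42 -> 20 -> 4 (unhappy).
    while n != 1 and n != 4:
        n = s(n)
    return n == 1

N = 10 ** 5
print(sum(is_happy(n) for n in range(1, N + 1)) / N)  # relative frequency up to N
```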
Gerry Myerson
3,684
<p>Guy, Unsolved Problems In Number Theory, 3rd edition, problem E34, writes, "It seems that about 1/7 of all numbers are happy, but what bounds on the density can be proved?" He doesn't give an answer, so I suppose nothing was known as of the publication of the book. Helen Grundman has written several papers on happy numbers, maybe you could ask her. </p>
118,545
<p>I gather that the question whether the Bruck-Chowla-Ryser condition was sufficient used to top the list, but now that that's settled - what is considered the most interesting open question?</p>
Peter Dukes
45,255
<p>Personally, I am most interested in design theory with an "asymptotic flavor", and I think there are (edit: <em>were</em>, pre-Keevash) some very interesting open questions in this direction.</p> <p>To cut to the chase, I think asymptotic design theory today is effectively searching for a constructive proof of Gustavsson's Theorem: All "admissible" graphs $G$ which are sufficiently large and dense (i.e. $\delta(G) \gtrapprox (1-\epsilon(k)) v$) can be edge-decomposed into cliques of size $k$.</p> <p>There are loads of spin-off questions, such as completion of sparse partial latin squares, construction of optimal packings and coverings, or imposing extra structure on the graph decompositions.</p> <p>(edit: I do agree that the Hadamard conjecture is bigger, for sure.)</p>
332,993
<p>How do I approach the problem?</p> <blockquote> <p>Q: Let $ \displaystyle z_{n+1} = \frac{1}{2} \left( z_n + \frac{1}{z_n} \right)$ where $ n = 0, 1, 2, \ldots $ and $\frac{-\pi}{2} &lt; \arg (z_0) &lt; \frac{\pi}{2} $. Prove that $\lim_{n\to \infty} z_n = 1$.</p> </blockquote>
achille hui
59,379
<p>One common trick to deal with recurrence equation involving reciprocal is to rewrite $z_n$ as ratio of two sequence $p_n, q_n$ to be determined. Notice,</p> <p>$$\frac{p_{n+1}}{q_{n+1}} = z_{n+1} = \frac12 (z_n + \frac{1}{z_n}) = \frac12(\frac{p_n}{q_n} + \frac{q_n}{p_n}) = \frac{p_n^2 + q_n^2}{2p_nq_n}$$</p> <p>We can take $p_{n+1} = p_n^2 + q_n^2$ and $q_{n+1} = 2 p_{n} q_{n}$. From this, we get</p> <p>$$p_{n+1} \pm q_{n+1} = ( p_{n} \pm q_{n} )^2 \implies (p_{n} \pm q_{n}) = (p_{0} \pm q_{0})^{2^n} $$</p> <p>Choose $p_0 = z_0$ and $q_0 = 1$, we get $p_n \pm q_n = (z_0 \pm 1)^{2^n}$ and hence:</p> <p>$$z_n = \frac{(z_0 + 1)^{2^n} + (z_0 - 1)^{2^n}}{(z_0 + 1)^{2^n} - (z_0 - 1)^{2^n}}$$</p> <p>For $z_0 \in \mathbb{C}\setminus\{0\}$ such that $-\frac{\pi}{2} &lt; \arg( z_0 ) &lt; \frac{\pi}{2}$, we have:</p> <p>$$|z_0 + 1| &gt; |z_0 - 1| \implies |\frac{z_0-1}{z_0+1}| &lt; 1$$</p> <p>Let $\alpha = \frac{z_0-1}{z_0+1}$, we get:</p> <p>$$\lim_{n\to\infty} z_n = \lim_{n\to\infty}\frac{1 + \alpha^{2^n}}{1 - \alpha^{2^n}} = 1$$</p> <p>A similar argument shows that if you start with a $z_0$ such that $-\frac{\pi}{2} &lt; \arg( -z_0 ) &lt; \frac{\pi}{2}$, $|\alpha|$ will be greater than $1$ and $z_n$ converges to $-1$ instead.</p>
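<p>(Added remark with a numerical check, not part of the original answer: the map $z \mapsto \frac12\left(z + \frac1z\right)$ is exactly Newton's method for $z^2 = 1$, so the exponent $2^n$ in the closed form corresponds to quadratic convergence.)</p>

```python
z = 2 + 3j  # any z0 with -pi/2 < arg(z0) < pi/2
for _ in range(10):
    z = (z + 1 / z) / 2
assert abs(z - 1) < 1e-12  # converges to 1, very quickly

w = -2 + 3j  # starting in the left half-plane instead...
for _ in range(10):
    w = (w + 1 / w) / 2
assert abs(w + 1) < 1e-12  # ...converges to -1
```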
2,699,753
<p>$a,b$ and $c$ are all natural numbers, and function $f(x)$ always returns a natural number. If$$ \sum_{n=b}^{a} f(n) = c,$$ in terms of $b,c$ and $f$, how would you solve for $a$? Do I require more information to solve for $a$?</p> <p>EDIT: If $x$ increases $f(x)$ increases</p>
Cye Waldman
424,641
<p>In problems of this type I prefer to convert the sequence to a generalized Fibonacci form. I have solved this exact problem elsewhere and the solution to the general form $f(n) = Af(n-1)+B$ is given <a href="https://math.stackexchange.com/questions/2357418/solving-fibonacci-recurrence-relation/2358956#2358956">here</a>. The result is this:</p> <p>$$f_n=\frac{[(A-1)f_0+B]A^n-B}{A-1}$$</p> <p>for $A\ne 1$.</p>
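<p>(Added check, not part of the original answer: the closed form can be verified against the recurrence $f(n)=Af(n-1)+B$ for sample parameters, here $A=3$, $B=5$, $f_0=2$.)</p>

```python
A, B, f0 = 3, 5, 2  # sample parameters with A != 1

def closed(n):
    # f_n = ( ((A-1)*f0 + B) * A^n - B ) / (A - 1); the division is exact
    # because the numerator is congruent to 0 modulo A - 1.
    return (((A - 1) * f0 + B) * A ** n - B) // (A - 1)

f = f0
for n in range(1, 15):
    f = A * f + B          # iterate the recurrence directly...
    assert f == closed(n)  # ...and compare with the closed form
```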
817,680
<p><strong>Question:</strong></p> <blockquote> <p>Assume that $a_{n}&gt;0,n\in N^{+}$, and that $$\sum_{n=1}^{\infty}a_{n}$$ is convergent. Show that $$\sum_{n=1}^{\infty}\dfrac{a_{n}}{(n+1)a_{n+1}}$$ is divergent?</p> </blockquote> <p>My idea: since $\sum_{n=1}^{\infty}a_{n}$ converges, then there exists $M&gt;0$ such $$a_{1}+a_{2}+\cdots+a_{n}&lt;M$$ then I can't continue. Thank you.</p>
vadim123
73,324
<p>This is essentially the same as Erick Wong's lovely <a href="https://math.stackexchange.com/a/281028/73324">solution</a> to a similar problem.</p> <p>Set $\displaystyle b_n=\frac{a_n}{(n+1)a_{n+1}}$. Since $a_n\to 0$ and $a_n&gt;0$ there is some decreasing subsequence $a_{n_1}&gt;a_{n_2}&gt;a_{n_3}&gt;\cdots$. By passing to a subsubsequence if necessary, we may assume without loss that $n_{i+1}\ge 2n_i+1$ for each $i$. Set $d_i=n_{i+1}-n_i$. The strategy now will be to bound $\displaystyle \sum_{n_i\le k&lt;n_{i+1}}b_k$ below by $\frac{1}{2}$ using AM-GM; this proves that the desired series diverges. We have</p> <p>$$\prod_{n_i\le k&lt;n_{i+1}}b_k&gt;\prod_{n_i\le k&lt;n_{i+1}}\frac{1}{n_{i+1}+1}\frac{a_k}{a_{k+1}}=\left(\frac{1}{n_{i+1}+1}\right)^{d_i}\frac{a_{n_i}}{a_{n_{i+1}}}&gt;\left(\frac{1}{n_{i+1}+1}\right)^{d_i}$$</p> <p>Now, by <a href="http://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="nofollow noreferrer">AM-GM</a>, we have $$\sum_{n_i\le k&lt;n_{i+1}}b_k\ge d_i\left(\frac{1}{n_{i+1}+1}\right)\ge \frac{1}{2}$$ Where the last inequality follows from rearranging $\displaystyle \frac{n_{i+1}}{2}\ge n_i+\frac{1}{2}$ as $$d_i=n_{i+1}-n_i\ge \frac{1}{2}(n_{i+1}+1)$$</p>
666,503
<p>How to isolate $x$ in this equation: $px+(\frac{b}{a})px=m$</p> <p>And get $\frac{a}{a+b}\cdot\frac{m}{p}$</p>
Community
-1
<p>Notice that $p$ and $x$ are two common factors of the two terms of the sum so we can factor: $$px+\frac b a px=p\left(1+\frac b a\right)x=p\frac{a+b}ax$$ Can you take it from here?</p>
1,690,854
<p>Solve the equation $$-x^3 + x + 2 =\sqrt{3x^2 + 4x + 5}.$$ I tried. The equation is equivalent to $$\sqrt{3x^2 + 4x + 5} - 2 + x^3 - x=0.$$ $$\dfrac{3x^2+4x+1}{\sqrt{3x^2 + 4x + 5} + 2}+x^3 - x=0.$$ $$\dfrac{(x+1)(3x+1)}{\sqrt{3x^2 + 4x + 5} + 2}+ (x+1) x (x-1)=0.$$ $$(x+1)\left [\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}+ x (x-1)\right ]=0.$$ How can I prove the equation $$\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}+ x (x-1)=0$$ has no solution?</p>
chenbai
59,487
<p>hint: edit 2: when $x&gt;1$, there is no solution; when $-\dfrac{1}{3} \le x \le 0$, there is no solution either.</p> <p>when $0\le x\le 1$, $x(1-x) &lt; f(x)=\dfrac{x}{3}+\dfrac{1}{2+\sqrt{5}}$ </p> <p>and $\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}\ge \dfrac{x}{3}+\dfrac{1}{2+\sqrt{5}}$ </p> <p>so the only remaining possibility is $x &lt;-\dfrac{1}{3}$</p> <p>now consider $g(x)=\dfrac{4}{3}(3x+1)$</p> <p>you need to prove $\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}&gt; g(x) &gt;x(1-x)$, which is easy.</p>
315,235
<p>I am learning about vector-valued differential forms, including forms taking values in a Lie algebra. <a href="http://en.wikipedia.org/wiki/Vector-valued_differential_form#Lie_algebra-valued_forms">On Wikipedia</a> there is some explanation about these Lie algebra-valued forms, including the definition of the operation $[-\wedge -]$ and the claim that "with this operation the set of all Lie algebra-valued forms on a manifold M becomes a graded Lie superalgebra". The explanation on Wikipedia is a little short, so I'm looking for more information about Lie algebra-valued forms. Unfortunately, the Wikipedia page does not cite any sources, and a Google search does not give very helpful results.</p> <blockquote> <p>Where can I learn about Lie algebra valued differential forms?</p> </blockquote> <p>In particular, I'm looking for a proof that $[-\wedge -]$ turns the set of Lie algebra-valued forms into a graded Lie superalgebra. I would also appreciate some information about how the exterior derivative $d$ and the operation $[-\wedge -]$ interact.</p>
Adam Marsh
525,270
<p>Although this is an old post, it comes up prominently in searches and so another viewpoint might be helpful. I’ll view the finite-dimensional real Lie algebra $\mathfrak{g}$ as real matrices under the Lie commutator (such a faithful rep always exists). The reference the treatment is taken from is available online <a href="https://mathphysicsbook.com" rel="nofollow noreferrer">here</a>.</p> <p>At a single point $p$ on a manifold $M$, a $\mathfrak{g}$-valued differential $1$-form can be viewed as a linear mapping</p> <p>$$\check{\Theta}:T_{p}M\to \mathfrak{g}.$$</p> <p>These $1$-forms comprise a real vector space with basis the tensor product of the bases of $T_{p}^{*}M$ and $\mathfrak{g}$ as mentioned in the previous answer. Therefore we can take the exterior product of any number of them to obtain $\mathfrak{g}$-valued $k$-forms.</p> <p>With real-valued $1$-forms $\varphi$, we usually turn $k$-forms into alternating multilinear mappings via an isomorphism which multiplies the values of the forms, e.g.</p> <p>$$\bigwedge_{i=1}^{k}\varphi_{i}\mapsto\sum_{\pi}\textrm{sign}\left(\pi\right)\prod_{i=1}^{k}\varphi_{\pi\left(i\right)},$$ </p> <p>or for two $1$-forms</p> <p>$$(\varphi\wedge\psi)(v,w)=\varphi(v)\psi(w)-\psi(v)\varphi(w).$$</p> <p>(Note that this isomorphism is not unique, and others are in use). With $\mathfrak{g}$-valued forms we can do the same, making sure to keep the order of the factors consistent to ensure anti-symmetry. 
So for example the exterior product of two $\mathfrak{g}$-valued $1$-forms is</p> <p>\begin{aligned}(\check{\Theta}[\wedge]\check{\Psi})\left(v,w\right) &amp; =[\check{\Theta}\left(v\right),\check{\Psi}\left(w\right)]-[\check{\Theta}\left(w\right),\check{\Psi}\left(v\right)]\\ &amp; =\check{\Theta}\left(v\right)\check{\Psi}\left(w\right)-\check{\Psi}\left(w\right)\check{\Theta}\left(v\right)-\check{\Theta}\left(w\right)\check{\Psi}\left(v\right)+\check{\Psi}\left(v\right)\check{\Theta}\left(w\right)\\ &amp; =(\check{\Theta}\wedge\check{\Psi})\left(v,w\right)+(\check{\Psi}\wedge\check{\Theta})\left(v,w\right), \end{aligned} </p> <p>where we denote the exterior product using the Lie commutator by $\check{\Theta}[\wedge]\check{\Psi}$ to avoid ambiguity (note that $[\check{\Theta},\check{\Psi}](v,w)$ and $[\check{\Theta}(v),\check{\Psi}(w)]$ give different results that are in general not even anti-symmetric). We can therefore view $\mathfrak{g}$-valued $k$-forms as alternating multilinear mappings from $k$ vectors to $\mathfrak{g}$.</p> <p>The above also shows that </p> <p>$$(\check{\Theta}[\wedge]\check{\Theta})\left(v,w\right)=2[\check{\Theta}\left(v\right),\check{\Theta}\left(w\right)]$$</p> <p>does not in general vanish; instead, for $\mathfrak{g}$-valued $j$- and $k$-forms $\check{\Theta}$ and $\check{\Psi}$ we have the graded commutativity rule </p> <p>$$\check{\Theta}[\wedge]\check{\Psi}=\left(-1\right)^{jk+1}\check{\Psi}[\wedge]\check{\Theta},$$</p> <p>with an accompanying graded Jacobi identity, forming a $\mathbb{N}$-graded Lie algebra which decomposes into a direct sum of each $\Lambda^{k}$. Alternatively, we can decompose into even and odd grade forms, comprising a $\mathbb{Z}_{2}$ gradation, i.e. 
a Lie superalgebra (since the sign of $(-1)^{jk+1}$ is determined by whether $j$ and $k$ are odd or even).</p> <p>The exterior derivative can be viewed as operating on the form component of $\mathfrak{g}$-valued forms, as mentioned in the previous answer; this can be also seen by examining the definition of the exterior form in light of the basis for Lie algebra valued forms given above.</p>
520,285
<p>I've been going through this representation theory <a href="http://math.berkeley.edu/~serganov/math252/notes1.pdf" rel="nofollow">lecture notes</a>, and I've found the ''hungry knights'' problem very interesting. Do you know any interesting problems similar to that one?</p> <p>To define ''similar'': problems which have a recreational, puzzle-like taste and you can solve them using representation theory/group theory. One example I know is <a href="http://qchu.wordpress.com/2009/06/13/gila-i-group-actions-and-equivalence-relations/" rel="nofollow">this series</a> of blog posts by Qiaochu Yuan applying group theory to few basic combinatorial questions about colorings.</p>
hengxin
51,434
<p>Here is a paper: <a href="http://www.jstor.org/stable/2689726" rel="nofollow">A Theorem about Primes Proved on a Chessboard</a>. It is </p> <blockquote> <p>"An elementary treatment of a class of solutions to the $n$-queens problem leads to a proof of Fermat's theorem on primes which are sums of two squares."</p> </blockquote> <p>It also uses the chessboard to provide a specific geometrical setting for illustrating and interpreting abstract concepts usually first encountered in an introductory abstract algebra course, such as cyclic group, subgraph, and coset.</p>
2,843,822
<p>How do I rewrite $(1\,2)(1\,3)(1\,4)(1\,5)$ as a single cycle? I have tried questions in the form: $(1\,4\,3\,5\,2)(4\,5\,3\,2\,1)$.</p>
TheSimpliFire
471,884
<p>For a product of transpositions that all contain $1$, just reverse the order of the other entries! $$(1\,2)(1\,3)(1\,4)(1\,5)=(1\,5\,4\,3\,2)$$</p>
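<p>(Added check, not part of the original answer: composing the transpositions right to left, so that $(1\,5)$ acts first, reproduces the cycle. Conventions differ between texts, so this sketch assumes right-to-left composition.)</p>

```python
def compose(*perms):
    """Compose permutations (as dicts) right to left: the last one acts first."""
    def image(x):
        for p in reversed(perms):
            x = p.get(x, x)
        return x
    return {x: image(x) for x in range(1, 6)}

t = lambda a, b: {a: b, b: a}  # a transposition

product = compose(t(1, 2), t(1, 3), t(1, 4), t(1, 5))
assert product == {1: 5, 5: 4, 4: 3, 3: 2, 2: 1}  # the cycle (1 5 4 3 2)
```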
191,673
<p>If I input:</p> <pre><code>data = RandomVariate[ProbabilityDistribution[x/8, {x, 0, 4}], 10]; {EstimatedDistribution[data, ProbabilityDistribution[x/8, {x, 0, θ}], ParameterEstimator -&gt; "MaximumLikelihood"], data} </code></pre> <p>Mathematica returns:</p> <pre><code>{ProbabilityDistribution[\[FormalX]/ 8, {\[FormalX], 0, 4.99291}], {3.8921, 2.93817, 2.07761, 1.12473, 3.96292, 1.20091, 2.86696, 1.52381, 2.43073, 3.13515}} </code></pre> <p>I think the mle for a sample from this distribution is the maximum of the sample. What is Mathematica computing here? In other words, why is Mathematica returning 4.99291?</p> <p>From <a href="https://mathematica.stackexchange.com/questions/191673/what-is-the-maximum-likelihood-estimator-of-a-distribution-with-pdf-fx-x-8#comment499117_191674">a comment</a>:</p> <blockquote> <p>I just now restarted Mathematica and I am getting the same bad results. I am using version 9.</p> </blockquote>
m0nhawk
2,342
<p>Your <code>data</code> has only 10 samples of random values. This can result in a discrepancy between the PDF and the estimation.</p> <p>With 1000 samples you will get an estimate close to 4:</p> <pre><code>data = RandomVariate[ProbabilityDistribution[x/8, {x, 0, 4}], 1000]; {EstimatedDistribution[data, ProbabilityDistribution[x/8, {x, 0, θ}], ParameterEstimator -&gt; "MaximumLikelihood"]} (* {ProbabilityDistribution[\[FormalX]/ 8, {\[FormalX], 0, 3.99886}]} *) </code></pre>
1,497,232
<p>Prove or disprove. If $f(A) \subseteq f(B)$ then $A \subseteq B$</p> <p>Let y be arbitrary. </p> <p>$f(A)$ means $\exists a \in A (f(a)=y)$</p> <p>$f(B)$ means $\exists b \in B (f(b)=y)$ </p> <p>but $\forall a \in A \exists ! y \in f(a)(f(a)=y)$ </p> <p>and $\forall b \in B \exists ! y \in f(b)(f(b)=y)$</p> <p>Therefore if $f(a)=y=f(b)$, then $a = b$. This along with the given that $f(A) \subseteq f(B)$ shows that $A \subseteq B$</p>
Weaam
1,746
<p>Suppose $f$ is <em>injective</em> and that $f(A) \subseteq f(B)$. Let $x \in A$. Then $f(x) \in f(A) \subseteq f(B)$ and $f(x) = f(y)$ for some $y \in B$. But since $f$ is injective, $x = y$ and $x \in B$.</p> <p>On the other hand, if $f$ is <em>not injective</em>, then there are distinct $x \neq y$ with $f(x) = f(y)$. Let $A = \{x, y\}, B = \{y\}$. Then $f(A) \subseteq f(B)$ but $A \nsubseteq B$. </p> <p>For a counterexample, take $f(x) = x^2$ with $A = \{-2,2\}$.</p>
1,497,232
<p>Prove or disprove. If $f(A) \subseteq f(B)$ then $A \subseteq B$</p> <p>Let y be arbitrary. </p> <p>$f(A)$ means $\exists a \in A (f(a)=y)$</p> <p>$f(B)$ means $\exists b \in B (f(b)=y)$ </p> <p>but $\forall a \in A \exists ! y \in f(a)(f(a)=y)$ </p> <p>and $\forall b \in B \exists ! y \in f(b)(f(b)=y)$</p> <p>Therefore if $f(a)=y=f(b)$, then $a = b$. This along with the given that $f(A) \subseteq f(B)$ shows that $A \subseteq B$</p>
drhab
75,923
<p>If the codomain $Y$ of a function $f:X\to Y$ is a singleton then $f(A)=Y$ for <strong>any</strong> non-empty set $A\subseteq X$. </p> <p>So $f(A)\subseteq f(B)$ is true for any pair of nonempty subsets of $X$. </p> <p>If $X$ has more than one element then nonempty sets $A, B\subseteq X$ exist with $A\nsubseteq B$.</p> <p>This proves that the statement is not true.</p>
2,870,956
<p>Assume that $f$ is a non-negative real function, and let $a&gt;0$ be a real number.</p> <p>Define $I_a(f)$ to be</p> <p>$I_a(f)$=$\frac{1}{a}\int_{0}^{a} f(x) dx$ </p> <p>We now assume that $\lim_{x\rightarrow \infty} f(x)=A$ exists.</p> <p>Now I want to prove whether $\lim_{a\rightarrow \infty} I_a(f)=A$ is true or not. I have concluded that this is not always true.</p> <p>My approach has been to construct the following counterexample: $f(x)=A+\frac {1}{x}$; it is easily seen that $\lim_{x\rightarrow \infty} f(x)=A$.</p> <p>By integrating the chosen function I get that </p> <p>$\lim_{a\rightarrow \infty}I_a(f)=A+\lim_{a\rightarrow \infty}\frac{1}{a}\int_{0}^a \frac{1}{x}dx = A+\lim_{a\rightarrow \infty} [\frac{\log(a)}{a}-(-\infty)]\rightarrow \infty$. </p> <p>Therefore I concluded that $\lim_{x\rightarrow \infty} f(x)=A$ does not in general imply that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p> <p>I am of course unsure whether my calculations are correct, because you could also write the $\log(0)$ as a limit of $\log(\epsilon)\rightarrow \log(0)$, and then by division by $a$, and letting $a\rightarrow \infty$, get that the whole expression with the integral of $\frac{1}{x}$ goes to zero. In this case, it might be true that $\lim_{x\rightarrow \infty} f(x)=A$ implies that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p> <p>Does anybody have an idea about this?</p>
José Carlos Santos
446,262
<p>Your approach is fine, including the fact that you noticed that there's a problem with $0$. But that problem is easy to solve: define $f(x)=A+\frac1{x+k}$, for some $k&gt;0$. Then the problem will go away.</p>
4,611,065
<p>This question is motivated by curiosity and I haven't much background to exhibit .</p> <p>Going through a couple of books dealing with real analysis, I've noticed that 2 definitions can be given of the exponential function known in algebra as <span class="math-container">$f(x)= e^x$</span>.</p> <p>One definition says : The exponential function is the unique function defined on <span class="math-container">$\mathbb R$</span> such that <span class="math-container">$f(0)=1$</span> and <span class="math-container">$\forall (x) [ f'(x)= f(x)] $</span>.</p> <p>The other one defines the exponential function as the inverse of the natural logarithm function . More precisely <span class="math-container">$(1)$</span> <span class="math-container">$\exp_a (x)$</span> is defined as the inverse of <span class="math-container">$\log_a (x)$</span> , <span class="math-container">$(2)$</span> then , <span class="math-container">$\exp_a (x)$</span> is shown to be identical to <span class="math-container">$a^x$</span>, and finally <span class="math-container">$(3)$</span> every function of the form : <span class="math-container">$a^x$</span> is shown to be a &quot; special case&quot; of the <span class="math-container">$e^x$</span> function.</p> <p>My question :</p> <p>(1) Do these definitions exhaust the ways the exponential function can be defined?</p> <p>(2) Are these definitions actually different at least conceptually ( though denoting in fact the same object)?</p> <p>(3) Is there a reason to prefer one definition over the other? What is each definition good for?</p>
peek-a-boo
568,204
<p><strong>Title question:</strong></p> <p>Yes.</p> <hr /> <p><strong>1. Do these definitions exhaust the ways the exponential function can be defined?</strong></p> <p>No. There are lots of other ways. For example:</p> <ol> <li>for any <span class="math-container">$a&gt;0$</span> you can define <span class="math-container">$a^x$</span> as the unique continuous function <span class="math-container">$f_a:\Bbb{R}\to\Bbb{R}$</span> such that <span class="math-container">$f_a(1)=a$</span> and such that for all <span class="math-container">$x,y\in\Bbb{R}$</span>, we have <span class="math-container">$f_a(x+y)=f_a(x)f_a(y)$</span>. Uniqueness is relatively easy since such functions must agree on <span class="math-container">$\Bbb{Q}$</span>, so by continuity, and density of <span class="math-container">$\Bbb{Q}$</span> in <span class="math-container">$\Bbb{R}$</span>, must be equal on <span class="math-container">$\Bbb{R}$</span>. See <a href="https://math.stackexchange.com/a/3767604/568204">this answer</a> for more details.</li> <li>You could just start with the series definition, <span class="math-container">$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$</span> (the way we arrive at this function is motivated by the IVP <span class="math-container">$f’=f, f(0)=1$</span>, and by inserting a power-series ansatz for <span class="math-container">$f$</span>), and show this series has infinite radius of convergence.</li> <li>you could try to show that for each <span class="math-container">$x\in\Bbb{R}$</span>, the limit <span class="math-container">$\lim\limits_{n\to\infty}\left(1+\frac{x}{n}\right)^n$</span> exists, and you can call this <span class="math-container">$e^x$</span>.</li> <li>you could first define for each <span class="math-container">$a&gt;1$</span>, the quantity <span class="math-container">$a^x$</span> when <span class="math-container">$x\in\Bbb{Q}$</span> (this isn’t too hard, though still requires analysis) and finally define for arbitrary <span
class="math-container">$x\in\Bbb{R}$</span>, <span class="math-container">$a^x:=\sup\{a^r\,:\, r\in\Bbb{Q}, r\leq x\}$</span>. Finally, you can define the number <span class="math-container">$e:=\lim\limits_{n\to\infty}\left(1+\frac{1}{n}\right)^n$</span> (showing it first exists), and then define <span class="math-container">$e^x$</span> this way.</li> </ol> <hr /> <p><strong>2. Are these definitions actually different at least conceptually ( though denoting in fact the same object)?</strong></p> <p>Afterall they’re the same thing, so answering whether they’re different conceptually is a subjective matter, but I’d say yes. For example, (1) is more of an “algebraic-analysis” question in the sense that it is asking for the non-trivial solution to a functional equation; this problem is “easy” on <span class="math-container">$\Bbb{Q}$</span>, but more difficult on <span class="math-container">$\Bbb{R}$</span>. Definition (4) is kind of similar, in that both (1) and (4) tell you essentially the same thing on the rationals, but then (4) is generalizing to the reals by preserving monotonicity.</p> <p>Definition (3) is of course a separate definition, which can be motivated from the definition of the number <span class="math-container">$e$</span> given in definition (4), and some limit properties (which you need to prove).</p> <p>Definition (2) is motivated more from an ODE standpoint.</p> <hr /> <p><strong>3. Is there a reason to prefer one definition over the other? What is each definition good for?</strong></p> <p>Yes! I would prefer either the explicit series definition, or as the unique solution to the ODE. 
The reason to prefer one definition over another depends highly on the type of things you’re working on, and particularly so on the properties of the functions you’re looking to prove immediately.</p> <ul> <li>In an analysis class, the fact that <span class="math-container">$\exp’=\exp$</span> is of paramount importance, which is why you’ll often see it being explicitly defined using the power series, or by first defining <span class="math-container">$\log$</span> somehow (e.g. <span class="math-container">$\log(x)=\int_1^x\frac{1}{t}\,dt$</span> for <span class="math-container">$x&gt;0$</span>) and then inverting it (after showing it maps <span class="math-container">$(0,\infty)$</span> onto <span class="math-container">$\Bbb{R}$</span> bijectively). Next, after you learn about the existence and uniqueness theorems of ODEs, you can easily complete the proof of the equivalence of these three definitions. However, even here, I would argue that defining it via the series first, or using IVP is more fruitful because it gives an obvious generalization to complex analysis. Logarithms are a bit more iffy in the complex plane. Using any of these definitions, the fundamental property <span class="math-container">$\exp(x+y)=\exp(x)\exp(y)$</span> can easily be deduced, so it recovers definition (1) (after you define <span class="math-container">$e:=\exp(1)$</span>). So, <strong>with these definitions, proving properties becomes very quick</strong>. Also, using the series or IVP allows for more generalization, even to the case of Banach algebras (which are extremely important and frequent in higher analysis). 
The only “downside” to this approach is that it requires the knowledge of power series (which you may consider advanced in terms of prerequisites, so unexplainable to elementary/middle-schoolers) and/or the theorems from ODE theory.</li> <li>Definition (1) takes a concept we know from middle/high school (in fact some kiddos “know” these “rules” so much so that they even apply those properties to numbers <span class="math-container">$a\in\Bbb{C}\setminus [0,\infty)$</span> and arrive at all sorts of “paradoxical” results), so in that sense it is a natural generalization based on the idea of “preserving the functional equation”. This definition has the advantage of keeping in-line with how exponentials are taught up to high school. The disadvantage is that to prove the existence of such a function, one needs to appeal to one of the other methods. Also, from here, proving differentiability is slightly annoying (also, defining <span class="math-container">$e$</span> is annoying here).</li> <li>Definition (4) is nice in the sense that if we accept that <span class="math-container">$a^x$</span> exists when <span class="math-container">$x$</span> is rational and <span class="math-container">$a&gt;1$</span>, and the monotonicity properties, then this definition still preserves monotonicity (so, it keeps our high-school dreams alive, hence it’s somewhat nice pedagogically). Also, once you learn about <span class="math-container">$\Bbb{R}$</span> as a complete ordered field, you’re ready to understand this definition (e.g Chapter 1 of <em>Rudin’s Principles of Mathematical Analysis</em>), so prerequisites wise, it is minimal. The disadvantage is that this is very tedious to work with. Proving the functional equation is a small mess, one still has to prove that <span class="math-container">$e$</span>, given by the limit, is well-defined, proving continuity/differentiability properties are a pain etc. 
A more conceptual drawback of this definition is that it relies heavily on the total order of <span class="math-container">$\Bbb{R}$</span>, hence doesn’t generalize to more general situations (e.g <span class="math-container">$\Bbb{C}$</span>, or Banach-algebras more generally).</li> </ul> <p>Also, back to my first bullet point, defining functions as solutions to ODEs may sound abstract, but it is actually very fruitful. Doing it for the exponential and trigonometric functions may sound silly because in middle/high-school we are introduced to them in a particular way, so the ODE approach seems out of the blue. However, once you do some analysis/ODEs you’ll find this very natural, because you’ll notice that very often certain ODEs just keep popping up, e.g the equation for a simple harmonic oscillator, or Bessel’s equation etc. We thus would like to investigate properties of their solutions (the trig functions, and Bessel functions respectively). Btw, <a href="https://math.stackexchange.com/a/4610473/568204">here</a> I show how we can derive the properties of trigonometric functions from the ODEs, and you’ll see that the treatment is pretty quick and also geometric (it’s slightly quicker to start with <span class="math-container">$e^z$</span> as a power series, and prove things directly from there).</p> <p>Regarding definition (1) and my second bullet point here, functional equations are important in analysis, because not only does the exponential function satisfy such an important equation, but the Gamma function <span class="math-container">$\Gamma(z)$</span> (probably one of the more important special transcendental functions in analysis) also satisfies a very important functional equation, providing a smooth interpolation of the factorial (it’s not unique though… but it is unique if we require log-convexity; this is the Bohr-Mollerup theorem). 
See <a href="https://math.stackexchange.com/a/4380874/568204">How can construction of the gamma function be motivated?</a> for a discussion of this matter.</p> <hr /> <p>The bottom line is that it is good to have many ways of looking at the same thing, because it can give you further insight on your original way of thinking, and it can also give you different avenues for generalization. Of course you should try to understand each separately, and then see how they fit together.</p>
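<p>As a quick numerical sanity check (Python; not part of the original answer), definitions (2) and (3) above really do produce the same function: the truncated series and the finite-<span class="math-container">$n$</span> limit both match the library exponential to within the expected truncation error.</p>

```python
import math

def exp_series(x, terms=60):
    # definition (2): partial sum of sum_{n=0}^inf x^n / n!
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

def exp_limit(x, n=10**7):
    # definition (3): (1 + x/n)^n for a large fixed n
    return (1.0 + x / n) ** n

for x in (-2.0, 0.0, 1.0, 3.5):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9
    assert abs(exp_limit(x) - math.exp(x)) < 1e-4
```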
1,576,836
<p>I was solving a question related to functions and I came across a limit which I cannot understand. The question is: <br> If $a$ and $b$ are positive real numbers such that $a-b=2,$ then find the smallest value of the constant $L$ for which $\sqrt{x^2+ax}-\sqrt{x^2+bx}&lt;L$ for all $x&gt;0$<br></p> <hr> <p>First I found the domain of definition of the function in question $\sqrt{x^2+ax}-\sqrt{x^2+bx}$. The domain is $x\geq0 \cup x\leq -a$.<br> Then I found the horizontal asymptote as $x\to \infty$.<br> $\lim_{x\to\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br> $=\lim_{x\to\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br> Then I found the horizontal asymptote as $x\to -\infty$.<br> $\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to-\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to-\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br> $=\lim_{x\to-\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br><br> But when I drew the graph using a graphing calculator, the horizontal asymptote as $x\to-\infty$ was $-1$. I do not understand what mistake I made in calculating the limit as $x\to -\infty$.<br><br> Then I guessed I should have made the substitution $x=-t$ and <br> $\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{t\to\infty}\sqrt{t^2-at}-\sqrt{t^2-bt}=\lim_{t\to \infty}\frac{(b-a)t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}=\lim_{t\to\infty}\frac{-2t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}$<br> $=\lim_{t\to\infty}\frac{-2}{\sqrt{1-\frac{a}{t}}+\sqrt{1-\frac{b}{t}}}=\frac{-2}{2}=-1$<br><br></p> <hr> <p>I want to ask why the answer came out wrong in the first method and correct in the second method.</p> <blockquote> <p>Is it always necessary to put $x=-t$ and change the limit to plus infinity while calculating the limit as $x\to-\infty$?<br> Please help me. Thanks.</p> </blockquote>
Adhvaitha
228,265
<p>The short answer to your question is that $x=\sqrt{x^2}$ only for $x &gt; 0$. For $x&lt;0$, we have $x=-\sqrt{x^2}$.</p> <p>In the first method, the fact that $$\dfrac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}} = \dfrac2{\sqrt{1+a/x}+\sqrt{1+b/x}}$$ is true only for $x &gt; 0$. For $x&lt;0$, we have $x = -\sqrt{x^2}$, and hence we have $$\dfrac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}} = -\dfrac2{\sqrt{1+a/x}+\sqrt{1+b/x}}$$</p>
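<p>To see the sign issue concretely, here is a small numerical check (Python, illustration only, with the sample pair $a=3$, $b=1$ so that $a-b=2$): the same expression approaches $+1$ at $+\infty$ and $-1$ at $-\infty$, precisely because $\sqrt{x^2}=-x$ when $x&lt;0$.</p>

```python
import math

def f(x, a, b):
    return math.sqrt(x * x + a * x) - math.sqrt(x * x + b * x)

a, b = 3.0, 1.0   # sample pair with a - b = 2
big = 1e8         # far out in both directions (note x <= -a is inside the domain)
assert abs(f(big, a, b) - 1.0) < 1e-6     # horizontal asymptote y = 1
assert abs(f(-big, a, b) + 1.0) < 1e-6    # horizontal asymptote y = -1
assert math.sqrt((-big) ** 2) == -(-big)  # sqrt(x^2) = -x for x < 0
```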
1,576,836
<p>I was solving a question related to functions and I came across a limit which I cannot understand. The question is: <br> If $a$ and $b$ are positive real numbers such that $a-b=2,$ then find the smallest value of the constant $L$ for which $\sqrt{x^2+ax}-\sqrt{x^2+bx}&lt;L$ for all $x&gt;0$<br></p> <hr> <p>First I found the domain of definition of the function in question $\sqrt{x^2+ax}-\sqrt{x^2+bx}$. The domain is $x\geq0 \cup x\leq -a$.<br> Then I found the horizontal asymptote as $x\to \infty$.<br> $\lim_{x\to\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br> $=\lim_{x\to\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br> Then I found the horizontal asymptote as $x\to -\infty$.<br> $\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to-\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to-\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br> $=\lim_{x\to-\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br><br> But when I drew the graph using a graphing calculator, the horizontal asymptote as $x\to-\infty$ was $-1$. I do not understand what mistake I made in calculating the limit as $x\to -\infty$.<br><br> Then I guessed I should have made the substitution $x=-t$ and <br> $\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{t\to\infty}\sqrt{t^2-at}-\sqrt{t^2-bt}=\lim_{t\to \infty}\frac{(b-a)t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}=\lim_{t\to\infty}\frac{-2t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}$<br> $=\lim_{t\to\infty}\frac{-2}{\sqrt{1-\frac{a}{t}}+\sqrt{1-\frac{b}{t}}}=\frac{-2}{2}=-1$<br><br></p> <hr> <p>I want to ask why the answer came out wrong in the first method and correct in the second method.</p> <blockquote> <p>Is it always necessary to put $x=-t$ and change the limit to plus infinity while calculating the limit as $x\to-\infty$?<br> Please help me. Thanks.</p> </blockquote>
Harish Chandra Rajpoot
210,295
<p>Notice, here is an easier method to solve </p> <p>let $x=-\large \frac{1}{t}$, $$\lim_{x\to -\infty}(\sqrt{x^2+ax}-\sqrt{x^2+bx})$$ $$=\lim_{t\to 0^+}\left(\sqrt{\left(\frac{-1}{t}\right)^2+a\left(\frac{-1}{t}\right)}-\sqrt{\left(\frac{-1}{t}\right)^2+b\left(\frac{-1}{t}\right)}\right)$$ $$=\lim_{t\to 0^+}\frac{1}{t}\left(\sqrt{1-at}-\sqrt{1-bt}\right)$$ $$=\lim_{t\to 0^+}\frac{\left(\sqrt{1-at}-\sqrt{1-bt}\right)\left(\sqrt{1-at}+\sqrt{1-bt}\right)}{t\left(\sqrt{1-at}+\sqrt{1-bt}\right)}$$ $$=\lim_{t\to 0^+}\frac{1-at-1+bt}{t\left(\sqrt{1-at}+\sqrt{1-bt}\right)}$$ $$=\lim_{t\to 0^+}\frac{b-a}{\sqrt{1-at}+\sqrt{1-bt}}$$ $$=\frac{(b-a)}{1+1}=\color{red}{\frac{b-a}{2}}$$</p>
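<p>The value $\frac{b-a}{2}$ obtained above is easy to confirm numerically (Python, illustration only; the pairs $(a,b)$ are arbitrary sample values):</p>

```python
import math

def f(x, a, b):
    return math.sqrt(x * x + a * x) - math.sqrt(x * x + b * x)

for a, b in ((5.0, 1.0), (3.0, 1.0), (2.0, 7.0)):
    # evaluate far out on the negative axis (x <= -a, -b keeps us in the domain)
    approx = f(-1e8, a, b)
    assert abs(approx - (b - a) / 2.0) < 1e-5
```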
713,732
<p>I want to know how one would go about solving an <em>unfactorable cubic</em>. I know how to factor cubics to solve them, but I do not know what to do if I cannot factor it. For example, if I have to solve for $x$ in the cubic equation: $$2x^3+6x^2-x+4=0$$ how would I do it?</p> <p>Edit: I have heard people telling me to convert it into a <em>depressed cubic</em> (where the $x^2$ term disappears), but I have no idea how to do that.</p> <p>Edit 2: I am aware that there is a <em>cubic formula</em>: for any cubic equation $ax^3+bx^2+cx+d=0$, its roots are: $$x = \sqrt[3]{-\dfrac{b^3}{27a^3} + \dfrac{bc}{6a^2} - \dfrac{d}{2a} + \sqrt{\left(-\dfrac{b^3}{27a^3} + \dfrac{bc}{6a^2}-\dfrac{d}{2a}\right)^2 - \left(\dfrac{c}{3a}-\dfrac{b^2}{9a^2}\right)^3}} + \sqrt[3]{-\dfrac{b^3}{27a^3}+\dfrac{bc}{6a^2}-\dfrac{d}{2a} - \sqrt{\left(-\dfrac{b^3}{27a^3}+\dfrac{bc}{6a^2}-\dfrac{d}{2a}\right)^2 - \left(\dfrac{c}{3a}-\dfrac{b^2}{9a^2}\right)^3}} - \dfrac{b}{3a}$$ This formula is way too complicated, so I do not even bother memorizing it or using it.</p>
Git Gud
55,235
<p>I don't get the <code>Then I sum out the two matrices</code> part. You're supposed to compute $\begin{bmatrix} 1 &amp; 1 &amp; 0 &amp; 1\end{bmatrix} \times \text{Parity matrix}^T$ to find the syndrome.</p> <p>Then you get the syndrome $\begin{bmatrix} 1 &amp; 1\end{bmatrix}^T$ which yields the correction $0101$.</p>
1,043
<p>Hi all,</p> <p>The short-time Fourier transform decomposes a signal window into a sine/cosine series.</p> <p>How would one approximate a signal in the same way, but using a set of arbitrary basis functions instead of sin/cos? These arbitrary basis functions are likely in my case to be very small discrete chunks of a 1-dimensional non-periodic waveform.</p> <p>Is this something wavelets are used for?</p> <p>Please excuse my tag, 'signal-analysis' does not exist and I cannot create it.</p>
Vasile Moșoi
1,093
<p>If we are talking about time-periodic signals, any set of functions with a scalar product (a Hilbert space) may be used, optimized to minimize the mean squared error relative to the original signal. In signal theory there are some functions frequently used for this purpose, such as the Legendre, Chebyshev or Hermite polynomials. For an introduction to this theory you can read the book "Linear Spaces in Engineering" by Fazliollah Reza, Ginn and Company, Toronto-London, 1971.</p>
256,373
<p>I've not been able to find a package which will deal with Geometric Algebra. Perhaps somebody can help?</p>
Bob Hanlon
9,362
<pre><code>Clear[&quot;Global`*&quot;] $Version &quot;12.3.1 for Mac OS X x86 (64-bit) (June 19, 2021)&quot; sol = {Reduce[{x^3 + y^3 + z^3 == n, x &gt;= 0, y &gt;= 0, z &gt;= 0, 0 &lt; n &lt; 100}, {n, x, y, z}, Integers] // ToRules} (* {{n -&gt; 1, x -&gt; 0, y -&gt; 0, z -&gt; 1}, {n -&gt; 1, x -&gt; 0, y -&gt; 1, z -&gt; 0}, {n -&gt; 1, x -&gt; 1, y -&gt; 0, z -&gt; 0}, {n -&gt; 2, x -&gt; 0, y -&gt; 1, z -&gt; 1}, {n -&gt; 2, x -&gt; 1, y -&gt; 0, z -&gt; 1}, {n -&gt; 2, x -&gt; 1, y -&gt; 1, z -&gt; 0}, {n -&gt; 3, x -&gt; 1, y -&gt; 1, z -&gt; 1}, {n -&gt; 8, x -&gt; 0, y -&gt; 0, z -&gt; 2}, {n -&gt; 8, x -&gt; 0, y -&gt; 2, z -&gt; 0}, {n -&gt; 8, x -&gt; 2, y -&gt; 0, z -&gt; 0}, {n -&gt; 9, x -&gt; 0, y -&gt; 1, z -&gt; 2}, {n -&gt; 9, x -&gt; 0, y -&gt; 2, z -&gt; 1}, {n -&gt; 9, x -&gt; 1, y -&gt; 0, z -&gt; 2}, {n -&gt; 9, x -&gt; 1, y -&gt; 2, z -&gt; 0}, {n -&gt; 9, x -&gt; 2, y -&gt; 0, z -&gt; 1}, {n -&gt; 9, x -&gt; 2, y -&gt; 1, z -&gt; 0}, {n -&gt; 10, x -&gt; 1, y -&gt; 1, z -&gt; 2}, {n -&gt; 10, x -&gt; 1, y -&gt; 2, z -&gt; 1}, {n -&gt; 10, x -&gt; 2, y -&gt; 1, z -&gt; 1}, {n -&gt; 16, x -&gt; 0, y -&gt; 2, z -&gt; 2}, {n -&gt; 16, x -&gt; 2, y -&gt; 0, z -&gt; 2}, {n -&gt; 16, x -&gt; 2, y -&gt; 2, z -&gt; 0}, {n -&gt; 17, x -&gt; 1, y -&gt; 2, z -&gt; 2}, {n -&gt; 17, x -&gt; 2, y -&gt; 1, z -&gt; 2}, {n -&gt; 17, x -&gt; 2, y -&gt; 2, z -&gt; 1}, {n -&gt; 24, x -&gt; 2, y -&gt; 2, z -&gt; 2}, {n -&gt; 27, x -&gt; 0, y -&gt; 0, z -&gt; 3}, {n -&gt; 27, x -&gt; 0, y -&gt; 3, z -&gt; 0}, {n -&gt; 27, x -&gt; 3, y -&gt; 0, z -&gt; 0}, {n -&gt; 28, x -&gt; 0, y -&gt; 1, z -&gt; 3}, {n -&gt; 28, x -&gt; 0, y -&gt; 3, z -&gt; 1}, {n -&gt; 28, x -&gt; 1, y -&gt; 0, z -&gt; 3}, {n -&gt; 28, x -&gt; 1, y -&gt; 3, z -&gt; 0}, {n -&gt; 28, x -&gt; 3, y -&gt; 0, z -&gt; 1}, {n -&gt; 28, x -&gt; 3, y -&gt; 1, z -&gt; 0}, {n -&gt; 29, x -&gt; 1, y -&gt; 1, z -&gt; 3}, {n -&gt; 29, x -&gt; 1, y -&gt; 3, z -&gt; 1}, {n -&gt; 29, x -&gt; 3, y -&gt; 1, z 
-&gt; 1}, {n -&gt; 35, x -&gt; 0, y -&gt; 2, z -&gt; 3}, {n -&gt; 35, x -&gt; 0, y -&gt; 3, z -&gt; 2}, {n -&gt; 35, x -&gt; 2, y -&gt; 0, z -&gt; 3}, {n -&gt; 35, x -&gt; 2, y -&gt; 3, z -&gt; 0}, {n -&gt; 35, x -&gt; 3, y -&gt; 0, z -&gt; 2}, {n -&gt; 35, x -&gt; 3, y -&gt; 2, z -&gt; 0}, {n -&gt; 36, x -&gt; 1, y -&gt; 2, z -&gt; 3}, {n -&gt; 36, x -&gt; 1, y -&gt; 3, z -&gt; 2}, {n -&gt; 36, x -&gt; 2, y -&gt; 1, z -&gt; 3}, {n -&gt; 36, x -&gt; 2, y -&gt; 3, z -&gt; 1}, {n -&gt; 36, x -&gt; 3, y -&gt; 1, z -&gt; 2}, {n -&gt; 36, x -&gt; 3, y -&gt; 2, z -&gt; 1}, {n -&gt; 43, x -&gt; 2, y -&gt; 2, z -&gt; 3}, {n -&gt; 43, x -&gt; 2, y -&gt; 3, z -&gt; 2}, {n -&gt; 43, x -&gt; 3, y -&gt; 2, z -&gt; 2}, {n -&gt; 54, x -&gt; 0, y -&gt; 3, z -&gt; 3}, {n -&gt; 54, x -&gt; 3, y -&gt; 0, z -&gt; 3}, {n -&gt; 54, x -&gt; 3, y -&gt; 3, z -&gt; 0}, {n -&gt; 55, x -&gt; 1, y -&gt; 3, z -&gt; 3}, {n -&gt; 55, x -&gt; 3, y -&gt; 1, z -&gt; 3}, {n -&gt; 55, x -&gt; 3, y -&gt; 3, z -&gt; 1}, {n -&gt; 62, x -&gt; 2, y -&gt; 3, z -&gt; 3}, {n -&gt; 62, x -&gt; 3, y -&gt; 2, z -&gt; 3}, {n -&gt; 62, x -&gt; 3, y -&gt; 3, z -&gt; 2}, {n -&gt; 64, x -&gt; 0, y -&gt; 0, z -&gt; 4}, {n -&gt; 64, x -&gt; 0, y -&gt; 4, z -&gt; 0}, {n -&gt; 64, x -&gt; 4, y -&gt; 0, z -&gt; 0}, {n -&gt; 65, x -&gt; 0, y -&gt; 1, z -&gt; 4}, {n -&gt; 65, x -&gt; 0, y -&gt; 4, z -&gt; 1}, {n -&gt; 65, x -&gt; 1, y -&gt; 0, z -&gt; 4}, {n -&gt; 65, x -&gt; 1, y -&gt; 4, z -&gt; 0}, {n -&gt; 65, x -&gt; 4, y -&gt; 0, z -&gt; 1}, {n -&gt; 65, x -&gt; 4, y -&gt; 1, z -&gt; 0}, {n -&gt; 66, x -&gt; 1, y -&gt; 1, z -&gt; 4}, {n -&gt; 66, x -&gt; 1, y -&gt; 4, z -&gt; 1}, {n -&gt; 66, x -&gt; 4, y -&gt; 1, z -&gt; 1}, {n -&gt; 72, x -&gt; 0, y -&gt; 2, z -&gt; 4}, {n -&gt; 72, x -&gt; 0, y -&gt; 4, z -&gt; 2}, {n -&gt; 72, x -&gt; 2, y -&gt; 0, z -&gt; 4}, {n -&gt; 72, x -&gt; 2, y -&gt; 4, z -&gt; 0}, {n -&gt; 72, x -&gt; 4, y -&gt; 0, z -&gt; 2}, {n -&gt; 72, x -&gt; 4, y -&gt; 2, z -&gt; 0}, {n -&gt; 73, x 
-&gt; 1, y -&gt; 2, z -&gt; 4}, {n -&gt; 73, x -&gt; 1, y -&gt; 4, z -&gt; 2}, {n -&gt; 73, x -&gt; 2, y -&gt; 1, z -&gt; 4}, {n -&gt; 73, x -&gt; 2, y -&gt; 4, z -&gt; 1}, {n -&gt; 73, x -&gt; 4, y -&gt; 1, z -&gt; 2}, {n -&gt; 73, x -&gt; 4, y -&gt; 2, z -&gt; 1}, {n -&gt; 80, x -&gt; 2, y -&gt; 2, z -&gt; 4}, {n -&gt; 80, x -&gt; 2, y -&gt; 4, z -&gt; 2}, {n -&gt; 80, x -&gt; 4, y -&gt; 2, z -&gt; 2}, {n -&gt; 81, x -&gt; 3, y -&gt; 3, z -&gt; 3}, {n -&gt; 91, x -&gt; 0, y -&gt; 3, z -&gt; 4}, {n -&gt; 91, x -&gt; 0, y -&gt; 4, z -&gt; 3}, {n -&gt; 91, x -&gt; 3, y -&gt; 0, z -&gt; 4}, {n -&gt; 91, x -&gt; 3, y -&gt; 4, z -&gt; 0}, {n -&gt; 91, x -&gt; 4, y -&gt; 0, z -&gt; 3}, {n -&gt; 91, x -&gt; 4, y -&gt; 3, z -&gt; 0}, {n -&gt; 92, x -&gt; 1, y -&gt; 3, z -&gt; 4}, {n -&gt; 92, x -&gt; 1, y -&gt; 4, z -&gt; 3}, {n -&gt; 92, x -&gt; 3, y -&gt; 1, z -&gt; 4}, {n -&gt; 92, x -&gt; 3, y -&gt; 4, z -&gt; 1}, {n -&gt; 92, x -&gt; 4, y -&gt; 1, z -&gt; 3}, {n -&gt; 92, x -&gt; 4, y -&gt; 3, z -&gt; 1}, {n -&gt; 99, x -&gt; 2, y -&gt; 3, z -&gt; 4}, {n -&gt; 99, x -&gt; 2, y -&gt; 4, z -&gt; 3}, {n -&gt; 99, x -&gt; 3, y -&gt; 2, z -&gt; 4}, {n -&gt; 99, x -&gt; 3, y -&gt; 4, z -&gt; 2}, {n -&gt; 99, x -&gt; 4, y -&gt; 2, z -&gt; 3}, {n -&gt; 99, x -&gt; 4, y -&gt; 3, z -&gt; 2}} *) And @@ (And @@ {x^3 + y^3 + z^3 == n, x &gt;= 0, y &gt;= 0, z &gt;= 0, 0 &lt; n &lt; 100} /. sol) (* True *) </code></pre> <p><strong>EDIT:</strong> Including negative integers. 
Also, to eliminate the unnecessary permutations, <strong>assume that <code>{x, y, z}</code> are ordered</strong>.</p> <pre><code>sol2 = Solve[{x^3 + y^3 + z^3 == n, 0 &lt; n &lt; 100, x &gt;= y &gt;= z}, {n, x, y, z}, Integers] </code></pre> <p><a href="https://i.stack.imgur.com/UBexg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UBexg.png" alt="enter image description here" /></a></p> <p>To avoid the <a href="https://reference.wolfram.com/language/ref/Root.html" rel="nofollow noreferrer"><code>Root</code></a> functions, the variables need to be bounded.</p> <pre><code>sol3 = Solve[{x^3 + y^3 + z^3 == n, 0 &lt; n &lt; 100, x &gt;= y &gt;= z, -200 &lt; x &lt; 200, -200 &lt; y &lt; 200, -200 &lt; z &lt; 200}, {n, x, y, z}, Integers] // Short[#, 20] &amp; </code></pre> <p><a href="https://i.stack.imgur.com/igtr8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/igtr8.png" alt="enter image description here" /></a></p>
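<p>For readers without Mathematica, the same ordered search over the nonnegative case can be sketched in a few lines of Python (a brute-force equivalent, not Wolfram code):</p>

```python
from itertools import combinations_with_replacement

# all n in (0, 100) of the form x^3 + y^3 + z^3 with x >= y >= z >= 0
solutions = {}
for z, y, x in combinations_with_replacement(range(5), 3):  # cubes of 0..4; 5^3 = 125 > 99
    n = x ** 3 + y ** 3 + z ** 3
    if 0 < n < 100:
        solutions.setdefault(n, []).append((x, y, z))

assert solutions[3] == [(1, 1, 1)]
assert solutions[99] == [(4, 3, 2)]
assert 4 not in solutions      # e.g. 4 is not a sum of three nonnegative cubes
assert len(solutions) == 28    # matches the number of distinct n in the output above
```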
4,282,006
<p><strong>Evaluate the limit</strong></p> <p><span class="math-container">$\lim_{x\rightarrow \infty}(\sqrt[3]{x^3+x^2}-x)$</span></p> <p>I know that the limit is <span class="math-container">$1/3$</span> by looking at the graph of this function, but I struggle to show it algebraically.</p> <p>Is there anyone who can help me out and maybe even provide a solution to this problem?</p>
TheSilverDoe
594,484
<p>You have <span class="math-container">$$\sqrt[3]{x^3+x^2} - x = x \left(\sqrt[3]{1 + \frac{1}{x}} - 1 \right) = x \left( 1 + \frac{1}{3x} + o\left(\frac{1}{x} \right) - 1\right) = \frac{1}{3} + o(1)$$</span></p> <p>and you are done.</p>
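<p>A numerical check of this expansion (Python, illustration only): the difference approaches $1/3$, and the error shrinks as $x$ grows.</p>

```python
def f(x):
    # the expression whose limit at infinity is 1/3
    return (x ** 3 + x ** 2) ** (1.0 / 3.0) - x

errors = [abs(f(10.0 ** k) - 1.0 / 3.0) for k in (2, 4, 6)]
assert errors[-1] < 1e-6        # close to 1/3 for large x
assert errors[0] > errors[-1]   # the error decreases as x grows
```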
3,118
<p>Can anyone help me out here? Can't seem to find the right rules of divisibility to show this:</p> <p>If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.</p>
Sayem Rahman
977,607
<p>If <span class="math-container">$a|m$</span> and <span class="math-container">$a+1|m$</span> then <span class="math-container">$lcm(a,a+1)|m$</span>. And we know that <span class="math-container">$lcm(a,a+1)\cdot gcd(a,a+1) = a(a+1)$</span> . But <span class="math-container">$a$</span> and <span class="math-container">$(a+1)$</span> are two consecutive numbers so <span class="math-container">$gcd(a,a+1)$</span> <span class="math-container">$=$</span> <span class="math-container">$1$</span>, hence <span class="math-container">$lcm(a,a+1) = a(a+1)$</span>. So <span class="math-container">$a(a+1)|m$</span></p>
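<p>The argument is easy to confirm exhaustively for small values (Python, illustration only):</p>

```python
from math import gcd

def lcm(p, q):
    return p * q // gcd(p, q)

for a in range(1, 50):
    # consecutive integers are coprime, so lcm(a, a+1) = a(a+1)
    assert gcd(a, a + 1) == 1
    assert lcm(a, a + 1) == a * (a + 1)
    # brute-force the divisibility claim itself for small m
    for m in range(1, 500):
        if m % a == 0 and m % (a + 1) == 0:
            assert m % (a * (a + 1)) == 0
```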
3,310,027
<p>Is the expression <span class="math-container">$\forall a,b,c \in M : \varphi(a,b,c)$</span> equivalent to <span class="math-container">$\forall a \forall b \forall c : (a \in M \land b \in M \land c \in M) \rightarrow \varphi(a,b,c)$</span> ?</p>
md2perpe
168,433
<p>The formula <span class="math-container">$\forall a \in M : \varphi(a)$</span> is syntactic sugar for <span class="math-container">$\forall a \ (a \in M \to \varphi(a))$</span> and <span class="math-container">$\forall a, b, c \in M : \varphi(a,b,c)$</span> is syntactic sugar for <span class="math-container">$\forall a \in M \ \forall b \in M \ \forall c \in M : \varphi(a,b,c)$</span> which according to the former syntactic sugar means <span class="math-container">$\forall a \ (a \in M \to \forall b \ (b \in M \to \forall c \ (c \in M \to \varphi(a,b,c))))$</span> which is logically equivalent to <span class="math-container">$\forall a \ \forall b \ \forall c \ (a \in M \wedge b \in M \wedge c \in M \to \varphi(a,b,c))$</span>.</p>
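<p>The equivalence can be checked mechanically on a small finite universe (Python sketch; the universe, the set $M$ and the sample predicates are of course made up for the illustration):</p>

```python
from itertools import product

U = range(-2, 3)   # a small ambient universe standing in for the domain of discourse
M = {0, 1, 2}      # the bounded domain

for phi in (lambda a, b, c: a + b + c >= 0,   # holds on all of M^3
            lambda a, b, c: a + b + c > 0):   # fails at (0, 0, 0)
    sugared = all(phi(a, b, c) for a in M for b in M for c in M)
    expanded = all((not (a in M and b in M and c in M)) or phi(a, b, c)
                   for a, b, c in product(U, repeat=3))
    assert sugared == expanded
```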
2,189,818
<p>I am having trouble finding the natural parameterization of these curves:</p> <blockquote> <p>$$\alpha(t)=\left(\sin^2\left(\frac{t}{\sqrt{2}}\right),\frac{1}{2}\sin \left(t\sqrt{2}\right), \left(\frac{t}{\sqrt{2}}\right)\right)$$</p> </blockquote> <p>The thing is when finding $$\|\alpha'(t)\|=\sqrt{\frac{3}{2}\sin^2\left({\sqrt{2}t}\right)+1}$$ I do not know how to integrate this. The second one I have is </p> <blockquote> <p>$$\beta(t)=\left(\frac{4}{5}\cos t,1-\sin{t},-\frac{3}{5}\cos t\right)$$</p> </blockquote> <p>I get $s=4\left(1-\cos\left(\frac{t}{2}\right)\right)$ or $t=2\arccos({4-s})$</p> <p>I am to find the Tangent, normal, binormal, tangent and curvature of the curves, but I am at a block, because when I try to naturally parameterize then I come to problems: </p> <ol> <li>For the first one, I cannot figure out the integral of $\|\alpha'(t)\|$</li> <li>For the second one I think I have made a mistake because finding the derivative when putting $t$ in dependence of $s$ in $\beta(t)$ would be very messy business to find the derivative for example. </li> </ol>
Joffan
206,402
<p>I'll illustrate one of the techniques at <a href="https://math.stackexchange.com/questions/5877/efficiently-finding-two-squares-which-sum-to-a-prime">the question lulu linked</a> although due to limitations on the tools I have available I'll use smaller numbers:</p> <p>Say we want to find the squares that add to the prime $9874577$. Then we can use an <a href="https://en.wikipedia.org/wiki/Exponentiation_by_squaring" rel="nofollow noreferrer">exponentiation by squaring</a> (towards $p^{9874576}$) on some small primes to observe that $2^{2468644} \equiv 9874576 \equiv -1 \bmod 9874577$ and that the step before, $2^{1234322} \equiv 1698670$ which is thus a square root of $-1 \bmod 9874577$.</p> <p>Then we can use the Euclidean GCD algorithm for Gaussian integers on $9874577$ and $1698670+i$ to find that $\gcd(9874577,1698670+i) = 2924+1151i$ and so get $2924^2+1151^2 = 9874577$.</p>
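<p>For concreteness, here is a small Python implementation of the two steps (finding a square root of $-1$ by powering a quadratic non-residue, then running the Euclidean algorithm on Gaussian integers represented as integer pairs). The numbers are the same as above, but the code is only an illustrative sketch:</p>

```python
def sqrt_minus_one(p):
    # for a prime p = 1 (mod 4): g^((p-1)/4) squares to -1 once g is a non-residue
    for g in range(2, p):
        r = pow(g, (p - 1) // 4, p)
        if r * r % p == p - 1:
            return r
    raise ValueError("no square root of -1 found; is p = 1 (mod 4) prime?")

def gaussian_gcd(a, b):
    # Euclidean algorithm on Gaussian integers given as (re, im) pairs
    while b != (0, 0):
        (ar, ai), (br, bi) = a, b
        n = br * br + bi * bi
        # components of a * conj(b), then nearest-integer quotient digits
        xr, xi = ar * br + ai * bi, ai * br - ar * bi
        qr, qi = (2 * xr + n) // (2 * n), (2 * xi + n) // (2 * n)
        a, b = b, (ar - qr * br + qi * bi, ai - qr * bi - qi * br)
    return a

p = 9874577
r = sqrt_minus_one(p)
x, y = gaussian_gcd((p, 0), (r, 1))
assert {abs(x), abs(y)} == {2924, 1151}
assert x * x + y * y == p
```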
748,489
<p>I have a measure ($x$) whose domain is $[0, +\infty)$ and which measures some sort of variability. I want to create a new measure ($y$) that represents regularity and is inversely related to $x$.</p> <p>It is easy: just make $y = -x$. However, I want this new measure to be positive, to make it more interpretable. Linearly, it is impossible, because I would have to map $0$ to $+\infty$. However, I can create a non-linear measure that satisfies:</p> <p>$$ f(x) = y $$ $$ f(0) = 1 $$ $$ x \to +\infty \implies y \to 0 $$</p> <p>The logarithmic transformation almost does this, but it does not invert the relation. What function would do this mapping?</p> <p>It would be interesting if I could set a "precision", e.g. if $x &gt; 100$, the step in $f(x)$ can be smaller than $0.01$.</p>
Angel Moreno
327,493
<p>Try more generally: $$ y = f_a(x) = (1+x^a)^{-1/a} , a \in R^+ $$ For example: $$ f_3(x) = \frac {1} {\sqrt[3]{1+x^3}} $$ $$ f_1(x) = \frac {1} {1+x} $$ $$ f_{5/3}(x) = \frac {1} {\sqrt[5]{(1+x^{5/3})^3}} $$</p> <p>The previous functions have a polynomial decrease. If you want an exponential decrease from 1 to 0 you can use the hyperbolic tangent.</p> <p>$$ f(x) = 1 - \tanh(x) $$</p>
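<p>A quick check of the advertised properties for a few exponents (Python, illustration only): $f_a(0)=1$, $f_a$ is strictly decreasing, and it tends to $0$ at infinity.</p>

```python
def f(x, a):
    # the family (1 + x^a)^(-1/a)
    return (1.0 + x ** a) ** (-1.0 / a)

for a in (1.0, 5.0 / 3.0, 3.0):
    assert f(0.0, a) == 1.0                        # f(0) = 1
    ys = [f(x, a) for x in (0.0, 0.5, 1.0, 10.0, 1000.0)]
    assert all(u > v for u, v in zip(ys, ys[1:]))  # strictly decreasing
    assert f(1e6, a) < 0.01                        # tends to 0 at infinity
```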
748,489
<p>I have a measure ($x$) whose domain is $[0, +\infty)$ and which measures some sort of variability. I want to create a new measure ($y$) that represents regularity and is inversely related to $x$.</p> <p>It is easy: just make $y = -x$. However, I want this new measure to be positive, to make it more interpretable. Linearly, it is impossible, because I would have to map $0$ to $+\infty$. However, I can create a non-linear measure that satisfies:</p> <p>$$ f(x) = y $$ $$ f(0) = 1 $$ $$ x \to +\infty \implies y \to 0 $$</p> <p>The logarithmic transformation almost does this, but it does not invert the relation. What function would do this mapping?</p> <p>It would be interesting if I could set a "precision", e.g. if $x &gt; 100$, the step in $f(x)$ can be smaller than $0.01$.</p>
J.G.
56,861
<p>As Angel Moreno has noted, we can use $1-\tanh x$. Of course, that also means we can use $1-\tanh^2 x=\operatorname{sech}^2 x$, which in turn means we could instead use $\operatorname{sech} x$.</p> <p>To be even more general, let $F$ denote your favourite cumulative distribution function of support $[0,\,\infty)$; maybe it's the $\lambda=1$ exponential, viz. $F=1-e^{-x}$. Then $1-F$ will do nicely. Or if you can only think of a support-$\mathbb{R}$ choice for $F$, just use $1-F(\ln x)$ instead. For example, my previous suggestion couldn't use a normal $F$, but it could use a log-normal one.</p>
3,362,916
<p>I'm trying to graph <span class="math-container">$|x+y|+|x-y|=4$</span>. I rewrote the expression as follows to get a function that resembles the direction of unit vectors at <span class="math-container">$\pi/4$</span> to the horizontal axis (take it to be <span class="math-container">$x$</span>)<span class="math-container">$$\biggl|\dfrac{x+y}{\sqrt{2}}\biggr|+\biggl|\dfrac{x-y}{\sqrt{2}}\biggr|=2\sqrt{2}$$</span></p> <p>However, I'm not able to proceed further. Any hints are appreciated. Notice this is an exam problem, so time-efficient methods are key. Please provide any hints accordingly.</p>
Ninad Munshi
698,724
<p>I would take cases. When <span class="math-container">$x &gt; |y|$</span>,</p> <p><span class="math-container">$$|x+y| + |x-y| = 4 \implies x = 2$$</span></p> <p>Try to figure out the rest of the cases on your own then graph everything together.</p>
3,362,916
<p>I'm trying to graph <span class="math-container">$|x+y|+|x-y|=4$</span>. I rewrote the expression as follows to get a function that resembles the direction of unit vectors at <span class="math-container">$\pi/4$</span> to the horizontal axis (take it to be <span class="math-container">$x$</span>)<span class="math-container">$$\biggl|\dfrac{x+y}{\sqrt{2}}\biggr|+\biggl|\dfrac{x-y}{\sqrt{2}}\biggr|=2\sqrt{2}$$</span></p> <p>However, I'm not able to proceed further. Any hints are appreciated. Notice this is an exam problem, so time-efficient methods are key. Please provide any hints accordingly.</p>
Parcly Taxel
357,390
<p>The lines <span class="math-container">$x+y=0$</span> and <span class="math-container">$x-y=0$</span> define a coordinate system rotated <span class="math-container">$45^\circ$</span> from the canonical axes and scaled by a factor of <span class="math-container">$\sqrt2$</span>. Accordingly, <span class="math-container">$|x+y|+|x-y|=4$</span> defines a diamond (rotated square) in this new coordinate system, which corresponds to a (axis-aligned, centred) square in the canonical one.</p> <p>To determine the size of this square, note that <span class="math-container">$(x,y)=(2,0)$</span> satisfies the equation. Thus the graph of <span class="math-container">$|x+y|+|x-y|=4$</span> is a square of side <span class="math-container">$4$</span> centred on the origin.</p>
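<p>Underlying this is the identity $|x+y|+|x-y|=2\max(|x|,|y|)$, so the level set is exactly the square $\max(|x|,|y|)=2$. A randomized check in Python (illustration only):</p>

```python
import random

def g(x, y):
    return abs(x + y) + abs(x - y)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(g(x, y) - 2 * max(abs(x), abs(y))) < 1e-12

# a few exact points on the boundary of the side-4 square (dyadic, so no rounding)
assert g(2, 1.25) == 4 and g(0.5, 2) == 4 and g(-2, 1.75) == 4 and g(2, -2) == 4
```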
925,746
<p>I don't really understand Tautologies or how to prove them, so if someone could help, that would be great! </p>
MPW
113,214
<p>If Q is true, the first implication is true, so the disjunction is true.</p> <p>If Q is false, the second implication is true, so the disjunction is true.</p> <p>In either case, the disjunction is true, <em>quod erat demonstrandum.</em></p>
3,058,019
<blockquote> <p>Two numbers <span class="math-container">$297_B$</span> and <span class="math-container">$792_B$</span>, belong to base <span class="math-container">$B$</span> number system. If the first number is a factor of the second number, then what is the value of <span class="math-container">$B$</span>?</p> </blockquote> <p>Solution: <a href="https://i.stack.imgur.com/6ScrF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ScrF.jpg" alt="enter image description here"></a> </p> <p>But base cannot be negative. Could someone please explain where I am going wrong?</p>
vadim123
73,324
<p>The long division is the source of the error; you can't have <span class="math-container">$7/2$</span> as the quotient. The quotient needs to be an integer, that's what "factor" means.</p> <p>If the quotient is <span class="math-container">$2$</span>, then the base is <span class="math-container">$4$</span>. This is found by solving <span class="math-container">$7B^2+9B+2=\color{red}{ 2}(2B^2+9B+7)$</span>, and discarding the negative root.</p> <p>If the quotient is <span class="math-container">$3$</span>, then the base is <span class="math-container">$19$</span>. This is found by solving <span class="math-container">$7B^2+9B+2=\color{red}{ 3}(2B^2+9B+7)$</span>, and discarding the negative root.</p> <p>No other quotients make any sense. However, if the base is <span class="math-container">$4$</span>, then you don't get digits <span class="math-container">$7$</span> and <span class="math-container">$9$</span>. Hence the answer must be <span class="math-container">$B=19$</span>.</p>
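<p>A brute-force confirmation (Python, illustration only) that $B=19$ is the only base large enough for the digits $7$ and $9$ in which $297_B$ divides $792_B$:</p>

```python
def value(digits, base):
    # interpret a digit list in the given base
    v = 0
    for d in digits:
        v = v * base + d
    return v

hits = [B for B in range(10, 1000)   # digits 7 and 9 require B >= 10
        if value([7, 9, 2], B) % value([2, 9, 7], B) == 0]
assert hits == [19]
assert value([7, 9, 2], 19) // value([2, 9, 7], 19) == 3   # quotient 3, as derived
```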
1,604,573
<p>Consider the random graph $G(n,\frac{1}{n})$. I'm trying to estimate the size of the maximum matching in $G$. </p> <p>If we look at one vertex, the expected value of its degree is $\frac{n-1}{n}$, so it seems like with high probability it should be about $1$.</p> <p>So if I can show that with high probability half of the vertices have degree $1$, then with high probability the maximum matching in $G$ would have size $\frac{n}{4}$, but I couldn't prove it, and I'm looking for a hint on how to show that or something similar to that claim.</p>
user2316602
187,745
<p>I would like to expand on the answer of D Poole, specifically to show the concentration of <span class="math-container">$Y$</span>.</p> <p>It is actually really easy if we use the method of bounded differences:</p> <blockquote> <p><strong>Theorem</strong> Suppose that <span class="math-container">$X_1$</span>, <span class="math-container">$\cdots$</span>, <span class="math-container">$X_n \in \mathcal{X}$</span> are independent, and <span class="math-container">$f:\mathcal{X}^n\to \mathbb{R}$</span>. Let <span class="math-container">$c_1,\cdots,c_n$</span> satisfy <span class="math-container">$$ \sup_{x_1,\cdots,x_n,x_i'} |f(x_1,\cdots, x_{i-1},x_i,x_{i+1},\cdots, x_n) - f(x_1,\cdots, x_{i-1},x'_i,x_{i+1},\cdots, x_n)|\le c_i$$</span> for <span class="math-container">$i=1,\cdots, n$</span>. Then <span class="math-container">$$\mathrm{P}\{f-\mathbb{E}f\ge t\}\le \exp\left( \frac{-2t^2}{\sum_{i=1}^n c_i^2}\right).$$</span></p> </blockquote> <p>Here we have <span class="math-container">$n \choose 2$</span> random variables, namely 0-1 variables saying which edges are present in the graph. When we add/remove one edge, the number of isolated edges changes by at most two. Therefore <span class="math-container">$Y$</span> is exponentially concentrated around the mean.</p>
2,869,442
<blockquote> <p>Check whether the series $$\sum_{n=1}^{\infty}\int_0^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^2}\ dx$$ is convergent.</p> </blockquote> <p>I tried to sandwich the function by $\dfrac{1}{1+x^2}$ and $\dfrac{x}{1+x^2}$ , but this did not help at all. Any other way of approaching?</p>
A. Pongrácz
577,800
<p>I think you tried to estimate the wrong part of the fraction. If you think about it, $\sqrt{x}$ is quite different from both $1$ and $x$ in a small neighborhood of $0$. However, $1+x^2$ is very close to $1$ there. So simply estimate by $\int\limits_{0}^{1/n} \sqrt{x}\,dx= \frac{2}{3}n^{-3/2}$, whose sum is convergent. </p>
2,869,442
<blockquote> <p>Check whether the series $$\sum_{n=1}^{\infty}\int_0^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^2}\ dx$$ is convergent.</p> </blockquote> <p>I tried to sandwich the function by $\dfrac{1}{1+x^2}$ and $\dfrac{x}{1+x^2}$ , but this did not help at all. Any other way of approaching?</p>
Andrei
331,661
<p>$$\sum_{n=1}^\infty\int_0^{\frac{1}{n}}\frac{\sqrt x}{1+x^2}dx&lt;\sum_{n=1}^\infty\int_0^{\frac{1}{n}}\frac{\sqrt x}{1}dx=\sum_{n=1}^\infty\frac{2}{3n^{3/2}}=\frac{2}{3}\zeta\left(\frac{3}{2}\right)$$</p>
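As a numerical sanity check (my own Python sketch, not part of the answer), the partial sums do stay below $\frac23\zeta\left(\frac32\right)\approx 1.7416$; the integrals are approximated with a composite Simpson rule.

```python
import math

def integrand(x):
    return math.sqrt(x) / (1 + x * x)

def simpson(f, a, b, m=200):
    # composite Simpson rule on [a, b] with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

partial = sum(simpson(integrand, 0.0, 1.0 / n) for n in range(1, 2001))
bound = (2.0 / 3.0) * 2.6123753486854883  # (2/3) * zeta(3/2)
print(partial < bound)  # the bound is never exceeded
```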
1,750,104
<p>I've had this question in my exam, which most of my batchmates couldn't solve. The question, by the way, is the inverse Laplace transform of</p> <p>$$\frac{\ln s}{(s+1)^2}$$</p> <p>A hint was also given, which involves the Laplace transform of $\ln t$.</p>
xpaul
66,420
<p>Let $f(t)=\ln t$, then $F(s)=L\{f\}=-\frac{\gamma+\ln s}{s}$. So $\ln s=-sL\{f\}-\gamma$. Let $G(s)=\frac{s}{(s+1)^2}$ and then $g(t)=L^{-1}\{G\}=(t-1)e^{-t}$. Thus $$ \frac{\ln s}{(s+1)^2}=-L\{f\}\frac{s}{(s+1)^2}-\frac{\gamma}{(s+1)^2}=-F(s)G(s)-\frac{\gamma}{(s+1)^2}. $$ Using $$ F(s)G(s)=L\{\int_0^tf(\tau)g(t-\tau)d\tau\} $$ one has \begin{eqnarray} L^{-1}\{\frac{\ln s}{(s+1)^2}\}&amp;=&amp;-\int_0^tf(\tau)g(t-\tau)d\tau-\gamma L^{-1}\{\frac{1}{(s+1)^2}\} \\ &amp;=&amp;-\int_0^t\ln\tau\,(t-\tau-1)e^{-(t-\tau)}d\tau-\gamma te^{-t}\\ &amp;=&amp;e^{-t} (e^t-1-t\text{Chi}(t)-t\text{Shi}(t)). \end{eqnarray} Here $\text{Chi}(t), \text{Shi}(t)$ are the CoshIntegral and SinhIntegral functions.</p>
3,978,378
<p>The question asks: Find an efficient proof for all the cases at once by first demonstrating</p> <p><span class="math-container">$$ (a+b)^2 \leq (|a|+|b|)^2 $$</span></p> <p>My attempt at the proof:</p> <p>for <span class="math-container">$a,b\in\mathbb{R}$</span></p> <p><span class="math-container">$$ \begin{align*} |a+b|^2 =&amp;(a+b)^2 \text{ (Since $\forall x\in\mathbb{R}$, $x^2=|x|^2$)}\\ =&amp; a^2+2ab+b^2\\ \leq&amp;|a|^2+2|a||b|+|b|^2 \text{( Since $\forall x\in\mathbb{R}, x\leq |x|$)}\\ =&amp;(|a| +|b|)^2 \end{align*} $$</span></p> <p>But this is where I get stuck. I can arrive at the inequality, but I do not know how to continue from here. The question states that this should be an efficient proof for all the cases, but jumping from that step to <span class="math-container">$|a+b|\leq |a|+|b|$</span> seems like a big jump with some steps missing. Any push in the right direction would be appreciated!</p>
Martin Argerami
22,857
<p>All you need to say is that if <span class="math-container">$0&lt;r^2&lt;s^2$</span>, then <span class="math-container">$r&lt;s$</span>. This follows from the fact that <span class="math-container">$r\geq s&gt;0$</span> implies <span class="math-container">$r^2\geq s^2$</span>.</p> <p>Also, you are missing a square on <span class="math-container">$|a|$</span>, and your last <span class="math-container">$\leq$</span> should be <span class="math-container">$=$</span>.</p>
519,764
<p>Question: show that the following three points in 3D space A = &lt;-2,4,0>, B = &lt;1,2,-1>, C = &lt;-1,1,2> form the vertices of an equilateral triangle.</p> <p>How do I approach this problem?</p>
DonAntonio
31,254
<p>Another, fancy approach: </p> <p>Calculate the directed vectors</p> <p>$$\underline u:=\vec{AB}=B-A=(3,-2,-1)\;,\;\;\underline v:=\vec{AC}=C-A=(1,-3,2)$$</p> <p>Now calculate the angle $\;\theta:=\angle BAC\;$ using the usual inner product</p> <p>$$\cos\theta:=\frac{\underline u\cdot\underline v}{||\underline u||\;||\underline v||}=\frac{3+6-2}{\sqrt{14}\cdot\sqrt{14}}=\frac12\implies\theta=\arccos\frac12=\frac\pi3 (=60^\circ)$$</p> <p>Well, do something similar for any other vertex you want (say, $\;\vec{BA}\,,\,\vec{BC}\;$) and check you get again the same angle, and a triangle with two angles of $\;60^\circ\;$ must be an equilateral one...</p>
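For completeness, here is a small Python check (mine, not from the answer) that all three side lengths are equal, which is the direct route:

```python
import math

A, B, C = (-2, 4, 0), (1, 2, -1), (-1, 1, 2)

def dist(P, Q):
    # Euclidean distance between two points in R^3
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(P, Q)))

print(dist(A, B), dist(B, C), dist(C, A))  # all three equal sqrt(14)
```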
37,804
<p>I'm trying to gain some intuition for the usefulness of the spectral theory for bounded self-adjoint operators. I work in PDE, and any interesting applications/examples I've ever encountered concern <em>compact operators</em> and <em>unbounded operators</em>. Here I have the examples of $-\Delta$, the Laplacian, and $(-\Delta)^{-1}$, the latter being compact.</p> <p>The most common example I see of a bounded non-compact operator is the shift map on $l_2$ given by $T(u_1,u_2,\cdots) = (u_2,u_3,\cdots)$. While this nicely illustrates the different kinds of spectra, I don't see why this is useful or where this may come up in practice. <em>Why does knowing things about the spectrum of the shift operator help you in any practical way?</em></p> <p>Secondly, concerning the spectral theorem for bounded, <em>self-adjoint</em> operators: all useful applications I have encountered concern <em>compact or unbounded operators</em>. Is there an example arising in PDE (preferably) or some other applied field where knowing the spectral representation of a bounded, non-compact operator is useful? I have yet to encounter one that didn't just reduce to the compact case. Any insight/suggestions are appreciated.</p> <p>Best, dorian</p>
Helge
3,983
<p>Simple answer: bounded operators are simpler than unbounded ones, so it's better to study them.</p> <p>Discretized models lead to bounded operators. For example consider the discrete Laplacian $\Delta$ on $\mathbb{Z}^d$ given by $$ \Delta u(n) = \sum_{| m - n|_1 = 1} u(m), $$ where $u:\mathbb{Z}^d\to \mathbb{C}$. Boundedness just follows from the triangle inequality.</p> <p>Furthermore, if one considers the time evolution operator $U(t)$ (for example in quantum mechanics $U(t) = e^{- i tH}$). Then $U(t)$ is a bounded even unitary operator. ($H$ might not be).</p> <p><b>Edit:</b> Another important example is if $A$ is a (possibly unbounded) operator with non-trivial essential spectrum, then the resolvents $$ (A - z)^{-1} $$ are bounded but not compact.</p>
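To make the boundedness concrete, here is a small numerical illustration in Python (my own sketch, not part of the answer): for the one-dimensional discrete Laplacian, the triangle inequality gives $\|\Delta u\|_2\le 2\|u\|_2$ (and $2d$ in dimension $d$).

```python
import math
import random

def discrete_laplacian(u):
    # u: finitely supported function Z -> R, stored as {n: u(n)}
    support = set(u) | {n - 1 for n in u} | {n + 1 for n in u}
    return {n: u.get(n - 1, 0.0) + u.get(n + 1, 0.0) for n in support}

def l2norm(u):
    return math.sqrt(sum(v * v for v in u.values()))

random.seed(0)
u = {n: random.gauss(0.0, 1.0) for n in range(50)}
print(l2norm(discrete_laplacian(u)) <= 2 * l2norm(u))  # True: norm bound 2d with d = 1
```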
3,765,225
<p>I have a matrix: <span class="math-container">$$\left(\begin{array}{lll} a &amp; 0 &amp; 0 \\ 0 &amp; b &amp; 0 \\ 0 &amp; 0 &amp; c \end{array}\right)$$</span> Which I want to change to: <span class="math-container">$$\left(\begin{array}{lll} a &amp; 0 &amp; 0 \\ 0 &amp; c &amp; 0 \\ 0 &amp; 0 &amp; b \end{array}\right)$$</span> How can I do that with a unitary transformation?</p>
Siddharth Bhat
261,373
<p>The solution proposed seems wrong. Recall that when building a Turing machine that uses non-determinism, the machine accepts if <em>any</em> path accepts.</p> <p>So we cannot <code>ACCEPT</code> only when <em>none</em> of the branches would accept: we can only take <code>ACCEPT</code> decisions from <em>one</em> of the branches of computation, not <em>all</em> of them!</p> <p><a href="http://web.cse.ohio-state.edu/%7Erademacher.10/Sp16_6321/ps3sol.pdf" rel="nofollow noreferrer">A solution set to the above problem</a> uses the fact that this problem is in <span class="math-container">$\overline{\texttt{NP}}$</span>, that is, the <em>complement</em> of <span class="math-container">$\texttt{NP}$</span>, where we can <code>ACCEPT</code> from <em>all</em> branches, and <code>REJECT</code> from <em>one</em> branch.</p>
14,448
<p>Here's the most common way that I've seen letter grades assigned in undergrad math courses. At the end of the semester, the professor: 1) computes the raw score (based on homework, quizzes, and tests) for each student; 2) writes down all the raw scores in order; 3) somewhat arbitrarily clusters the scores into groups corresponding to grades of A, B, C, etc.</p> <p>There are a few problems with this approach though. One might object that the assignment of letter grades is too arbitrary. Students also might object that they don't know in advance what grade they are likely to get in the course.</p> <p>On the other hand, it's difficult to assign letter grades in a less arbitrary manner. For example, if we declare in advance that a score of 80-90 on an exam corresponds to a B, we might find that the exam was too difficult and that scores on the exam were lower than expected.</p> <p><strong>What do you think is the best approach to assigning letter grades in an undergraduate math course?</strong></p>
rnrstopstraffic
6,949
<p>What I do is a blend of the two methods that you describe. At the beginning of the course I give students the strictest grading scale that I might possibly use (a standard 90-80-70 scale). At the end of the term, I reserve the right to lower the scale as needed. In this way, any deviations from the stated scale are in the student's favor but I can account for overall difficulty of the class. </p>
332,380
<p>The following is an excerpt from Sharpe's <em>Differential Geometry - Cartan's Generalization of Klein's Erlangen Program</em>.</p> <blockquote> <p>Now we come to the question of higher derivatives. As usual in modern differential geometry, we shall be concerned only with the skew-symmetric part of the higher derivatives. In essence, what we shall be doing is taking the partial derivatives with respect to the base (i.e., manifold) variables and skew-symmetrizing the result, thus forgetting about the part of the higher derivatives that vanish under this procedure. However, this will not be made explicit in our treatment. The part of the higher derivative that disappears has not been studied much in differential geometry since Elie Cartan showed how useful it is to consider only the skew-symmetric part, that is, the exterior derivative. The old masters did use the symmetric part...</p> </blockquote> <p>"Partial derivatives with respect to the base" must be the covariant derivative of the connection. If I correctly understand what's written in <a href="https://math.stackexchange.com/a/1980156/223002">this answer</a>, then we have for any <strong>torsion free</strong> connection on a manifold the equality <span class="math-container">$ \mathrm dw=\operatorname{Alt}(\nabla w)$</span>.</p> <p>This already seems rather remarkable since the exterior derivative is intrinsic.</p> <p><strong>Question 1.</strong> What is the geometric meaning of the above fact for torsion-free connections? How does taking anti-symmetrization "forget" the structure of a horizontal bundle (connection)?</p> <p><strong>Question 2.</strong> Still for a torsion-free connection, what is the geometric meaning of "the part of the higher derivative that vanish under this procedure" of anti-symmetrization (i.e the symmetric part)?</p> <p><strong>Question 3.</strong> What's the conceptual picture for an arbitrary connection on a manifold (possibly with torsion)? 
Is the anti-symmetrization still the exterior derivative? What is the symmetric part?</p> <p><em>Remark on Q3.</em> I vaguely recall being told that torsion measures the failure of the fundamental theorem of calculus, and also that in torsion-free connections the parallel transport commutator is given by the Lie bracket (again, the latter is intrinsic).</p>
Pedro Lauridsen Ribeiro
11,211
<p>The meaning of higher-order derivatives in differential geometry is better understood through <em>jet bundles</em>. The covariant derivative <span class="math-container">$\nabla\phi$</span> of (say) a smooth section <span class="math-container">$\phi:M\rightarrow E$</span> of a vector bundle <span class="math-container">$\pi:E\rightarrow M$</span>, being a derivative, can be seen as a section of the so-called <em>first-order jet bundle</em> <span class="math-container">$\pi^1_0:J^1 E\cong\mathrm{End}(TM,E)\rightarrow E$</span> of <span class="math-container">$E$</span>. Likewise, derivatives of higher order (say, <span class="math-container">$k\in\mathbb{N}$</span>) of smooth sections of <span class="math-container">$\pi$</span> are described in a coordinate-free fashion by the so-called <em>jet bundle of order</em> <span class="math-container">$k$</span> <span class="math-container">$\pi^k_0:J^k E\rightarrow E$</span>. This hierarchy of (vector) bundles over <span class="math-container">$E$</span> is endowed with vector bundle projections <span class="math-container">$\pi^k_l:J^kE\rightarrow J^l E$</span>, <span class="math-container">$k&gt;l$</span> such that <span class="math-container">$\pi^l_m\circ\pi^k_l=\pi^k_m$</span> for all integers <span class="math-container">$k&gt;l&gt;m\geq 0$</span>. Roughly speaking, one may identify elements of <span class="math-container">$J^k E$</span> locally with Taylor polynomials of order <span class="math-container">$k$</span>, and the projections <span class="math-container">$\pi^k_l$</span> discard derivatives of order above <span class="math-container">$l$</span>. 
</p> <p>The key point is that the leading-order derivatives are, in a sense, always symmetric: the kernel of <span class="math-container">$\pi^k_{k-1}$</span> is a bundle whose total space is canonically isomorphic to <span class="math-container">$S^kT^*M\otimes E$</span> = the tensor product of the <em>symmetrized</em> <span class="math-container">$k$</span>-fold tensor power of <span class="math-container">$T^*M$</span> with <span class="math-container">$E$</span>. Particularly, any reference to a specific covariant derivative when globally defining highest-order derivatives disappears - more precisely, all yield the same result (differences only affect lower-order terms). One can exploit this fact to write smooth sections of <span class="math-container">$J^k E$</span> in terms of symmetrized iterated covariant derivatives (see e.g. <a href="https://mathoverflow.net/questions/240329/jets-of-sections-of-vector-bundles-expressed-by-symmetrized-iterated-covariant-d">this MO question of mine</a>).</p> <p>This also means that any partial antisymmetrization will always discard highest-order derivatives - this holds irrespectively of <span class="math-container">$\nabla$</span> being torsion-free or not (for general vector bundles, this concept usually has no meaning). Hence, if you <em>totally</em> antisymmetrize (as done with <span class="math-container">$k$</span>-forms), only derivatives of <em>at most first order</em> survive. This is of course related to the nilpotency of the exterior derivative. </p> <p>(<strong>EDIT 2</strong>) To illustrate this, let us consider the concept of iterated covariant derivatives, mentioned in the MO question linked in the second paragraph. 
To that end, we combine a covariant derivative on the vector bundle <span class="math-container">$\pi$</span> with a covariant derivative on <span class="math-container">$TM$</span> in order to define covariant derivatives on the tensor bundles <span class="math-container">$\otimes^k T^*M \otimes E\rightarrow M$</span> for all <span class="math-container">$k\in\mathbb{N}$</span> by means of Leibniz's rule. We denote all these covariant derivatives collectively by <span class="math-container">$\nabla$</span>. Once this is done, the iterated covariant derivatives <span class="math-container">$\nabla^k\phi$</span> of all orders <span class="math-container">$k\in\mathbb{N}$</span> of smooth sections <span class="math-container">$\phi$</span> of <span class="math-container">$\pi$</span> are defined recursively as follows: <span class="math-container">$$\nabla^0\phi=\phi\ ,\,\nabla^1\phi=\nabla\phi\ ,\,\nabla^{k+1}\varphi=\nabla(\nabla^k\phi)\ ,$$</span> so that <span class="math-container">$\nabla^k\phi$</span> is a smooth section of <span class="math-container">$\otimes^k T^*M \otimes E\rightarrow M$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>. 
Particularly, for <span class="math-container">$k=2$</span>, one can write <span class="math-container">$$\nabla^2\phi(X,Y)=\nabla_X(\nabla_Y\phi)-\nabla_{\nabla_X Y}\phi\ .$$</span> If we antisymmetrize <span class="math-container">$\nabla^2\phi$</span>, we obtain <span class="math-container">$$\nabla^2\phi(X,Y)-\nabla^2\phi(Y,X)=\nabla_X(\nabla_Y\phi)-\nabla_Y(\nabla_X\phi)-\nabla_{\nabla_X Y-\nabla_Y X}\phi$$</span> <span class="math-container">$$=\text{Riem}_\nabla(X,Y)\phi-\nabla_{T_\nabla(X,Y)}\phi\ ,$$</span> where <span class="math-container">$$\text{Riem}_\nabla(X,Y)\phi=\nabla_X(\nabla_Y\phi)-\nabla_Y(\nabla_X\phi)-\nabla_{[X,Y]}\phi$$</span> is the Riemann curvature tensor of the covariant derivative <em>on</em> <span class="math-container">$\pi$</span> and <span class="math-container">$$T_\nabla(X,Y)=\nabla_X Y-\nabla_Y X-[X,Y]$$</span> is the torsion tensor of the covariant derivative <em>on</em> <span class="math-container">$TM$</span>. The above expression clearly depends on derivatives of <span class="math-container">$\phi$</span> of at most first order. Notice that the above antisymmetrization is closely related to the operation of <em>exterior covariant differentiation</em> <span class="math-container">$d^\nabla$</span> of <span class="math-container">$E$</span>-valued differential forms, as mentioned in Jesper Göransson's answer: since <span class="math-container">$\phi$</span> is a <span class="math-container">$E$</span>-valued 0-form and therefore <span class="math-container">$d^\nabla\phi=\nabla\phi$</span>, we have that <span class="math-container">$$d^\nabla(d^\nabla\phi)(X,Y)=\nabla_X(\nabla_Y\phi)-\nabla_Y(\nabla_X\phi)-\nabla_{[X,Y]}\varphi=\text{Riem}_\nabla(X,Y)\varphi$$</span> <span class="math-container">$$=\nabla^2\phi(X,Y)-\nabla^2\phi(Y,X)+\nabla_{T_\nabla(X,Y)}\varphi\ .$$</span> Particularly, if <span class="math-container">$E=M\times\mathbb{R}$</span> (i.e. 
sections of <span class="math-container">$\pi$</span> are just real-valued functions on <span class="math-container">$M$</span>) and <span class="math-container">$\nabla$</span> is just ordinary differentiation of functions on <span class="math-container">$M$</span> (so that <span class="math-container">$\text{Riem}_\nabla=0$</span> and <span class="math-container">$T_\nabla=0$</span>), we thus recover the nilpotency of the (ordinary) exterior derivative.</p> <p>A similar phenomenon happens if we antisymmetrize the vector field entries of the tensor field <span class="math-container">$\nabla^k\phi$</span> for <span class="math-container">$k&gt;2$</span> - any antisymmetrization with respect to (say) <span class="math-container">$j$</span> entries leaves us with an expression which depends on <span class="math-container">$\nabla^i\phi$</span> for <span class="math-container">$i$</span> only up to <span class="math-container">$k-(j-1)=k-j+1$</span> for all <span class="math-container">$2\leq j\leq k$</span>. 
If <span class="math-container">$j&lt;k$</span>, this is what is called "partial antisymmetrization" above.</p> <p>Returning to covariant derivatives <span class="math-container">$\nabla$</span> <em>on</em> <span class="math-container">$M$</span> (that is, on <span class="math-container">$TM$</span>), the equivalence of the vanishing of the torsion tensor <span class="math-container">$T_\nabla(X,Y)$</span> of the connection <span class="math-container">$\nabla$</span> (which, by the way, simply states that <span class="math-container">$\nabla_X Y-\nabla_Y X$</span> is the Lie bracket of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> = Lie derivative of <span class="math-container">$Y$</span> along <span class="math-container">$X$</span>) to the antisymmetric part of <span class="math-container">$\nabla\omega$</span> being equal to <span class="math-container">$d\omega$</span> for all 1-forms <span class="math-container">$\omega$</span>, mentioned by Deane Yang in his comment to the OP's question, comes from Henri Cartan's formula relating Lie differentiation and exterior differentiation: <span class="math-container">$\mathscr{L}_{\!X}\omega=d(i_X\omega)+i_X(d\omega)$</span>. </p> <p>(<strong>EDIT 1</strong>) To see this, suppose that the torsion tensor of <span class="math-container">$\nabla$</span> vanishes, i.e. 
<span class="math-container">$$\nabla_X Y-\nabla_Y X=[X,Y]=\mathscr{L}_{\!X}Y\ .$$</span> Recall that the action of <span class="math-container">$\nabla$</span> on an 1-form <span class="math-container">$\omega$</span> is given by Leibniz's rule<span class="math-container">$$(\nabla_X\omega)(Y)=X(\omega(Y))-\omega(\nabla_X Y)$$</span> and so is the Lie derivative of <span class="math-container">$\omega$</span>: <span class="math-container">$$(\mathscr{L}_{\!X}\omega)(Y)=X(\omega(Y))-\omega(\mathscr{L}_{\!X}Y)=X(\omega(Y))-\omega([X,Y])\ .$$</span> The antisymmetric part of <span class="math-container">$\nabla\omega$</span> is then given by <span class="math-container">$$(\nabla_X\omega)(Y)-(\nabla_Y\omega)(X)=X(\omega(Y))-Y(\omega(X))-\omega([X,Y])\ .$$</span> Notice that I used the vanishing of the torsion of <span class="math-container">$\nabla$</span> in the rhs. On the other hand, Henri Cartan's formula tells us that <span class="math-container">$$d\omega(X,Y)=(\mathscr{L}_{\!X}\omega)(Y)-Y(\omega(X))=X(\omega(Y))-Y(\omega(X))-\omega([X,Y])\ ,$$</span> hence indeed <span class="math-container">$(\nabla_X\omega)(Y)-(\nabla_Y\omega)(X)=d\omega(X,Y)$</span>. Conversely, if one assumes this last identity and backtracks the above calculations, one obtains that <span class="math-container">$\omega(T_\nabla(X,Y))=0$</span> for all 1-forms <span class="math-container">$\omega$</span>, hence <span class="math-container">$T_\nabla(X,Y)=0$</span> for all vector fields <span class="math-container">$X,Y$</span>.</p> <p>More generally, torsion introduces a sort of "internal rotation" on the frame bundle along parallel transport which cannot be accounted for by pushing forward by the diffeomorphism flow integrating the vector field along which we are differentiating (the latter is the effect measured by Lie derivatives) - hence the failure of the fundamental theorem of calculus. 
Interestingly, given any covariant derivative <span class="math-container">$\nabla$</span> on <span class="math-container">$M$</span>, there is a unique covariant derivative <span class="math-container">$\nabla'$</span> on <span class="math-container">$M$</span> which is <em>torsion-free</em> and <em>has the same geodesics as</em> <span class="math-container">$\nabla$</span>, which means that this "internal rotation" effect can be cancelled by suitably redefining <span class="math-container">$\nabla$</span>. This is not to say that torsion should be always discarded - for instance, when dealing with foliations, there is a natural connection (the so-called <em>Bott connection</em>) which is only defined along vector fields tangent to the foliation - that is, a so-called <em>partial connection</em>. This connection is flat but has nonvanishing torsion, which therefore is important in dealing with the holonomy of the foliation.</p>
569,300
<p>Let $G$ be a finite group and assume it has more than one Sylow $p$-subgroup.</p> <p>It is known that the order of the intersection of two Sylow $p$-subgroups may vary depending on the pair of Sylow $p$-subgroups chosen.</p> <blockquote> <p>I wonder whether there is a condition which guarantees that the intersection of any two Sylow $p$-subgroups has the same order.</p> </blockquote> <p>Thanks for your help.</p>
mesel
106,102
<p>I have made up such a condition.</p> <p>We know that the action of $G$ on $Syl_p(G)$ by conjugation is transitive. If this action is doubly transitive, then the intersections of any two Sylow $p$-subgroups are conjugate, so they must have the same order.</p> <p>Proof: Let $P,Q,R,S$ be elements of $Syl_p(G)$ such that $P\neq Q$ and $R\neq S$. By double transitivity, $\exists x$ in $G$ such that $P^x=R$ and $Q^x=S$, thus $(P\cap Q)^x=P^x\cap Q^x =R\cap S$.</p> <p>But I do not know when $G$ acts doubly transitively on $Syl_p(G)$. Should I ask it as a new question?</p>
548,470
<p>Prove $$(x+1)e^x = \sum_{k=0}^{\infty}\frac{(k+1)x^k}{k!}$$ using Taylor Series.</p> <p>I can see how the $$\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ plops out, but I don't understand how $(x+1)$ can become $(k+1)$.</p>
Stefan
36,189
<p>Rewrite: $$(x+1) e^x = xe^x + e^x, $$ then use your series representation for $e^x$: indeed $xe^x=\sum_{k\geq 1}\frac{x^k}{(k-1)!}=\sum_{k\geq 0}\frac{k\,x^k}{k!}$, and adding $e^x=\sum_{k\geq 0}\frac{x^k}{k!}$ produces the factor $k+1$.</p>
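A quick numerical check (my own Python sketch) that the partial sums of $\sum_k \frac{(k+1)x^k}{k!}$ do converge to $(x+1)e^x$:

```python
import math

def series(x, terms=40):
    # partial sum of sum_k (k+1) x^k / k!
    return sum((k + 1) * x ** k / math.factorial(k) for k in range(terms))

for x in (-1.5, 0.0, 2.0):
    print(x, series(x), (x + 1) * math.exp(x))  # the two columns agree
```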
2,239,203
<p>Why does $\lim\limits_{x\to\infty}(x!)^{1/x}\neq 1?$</p> <p>As far as I know, anything to the power of $0$ is $1$.</p> <p>We have a factorial raised to $1/\infty=0$, but the limit is not $1$? I don't even know what the limit is. But it seems like infinity? Why is this?</p> <p><a href="https://i.stack.imgur.com/hYnAE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hYnAE.png" alt="enter image description here"></a></p>
marty cohen
13,079
<p>A useful inequality is $n! &gt; (n/e)^n$. This can be proved by induction from $(1+1/n)^n &lt; e$.</p> <p>From this $(n!)^{1/n} &gt; n/e$ and this last is unbounded.</p>
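Numerically (a Python sketch of my own, using the log-gamma function to avoid astronomically large factorials), $(n!)^{1/n}$ indeed dominates $n/e$ and grows without bound:

```python
import math

def factorial_root(n):
    # (n!)^(1/n) computed as exp(ln(n!)/n), where ln(n!) = lgamma(n+1)
    return math.exp(math.lgamma(n + 1) / n)

for n in (10, 100, 1000, 10000):
    print(n, factorial_root(n), n / math.e)  # first column always exceeds the second
```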
1,095,918
<p><img src="https://i.stack.imgur.com/EST8r.jpg" alt="my problem is in prop 27, cannot prove it. Can use definition before. Notice that p-closure is the closure of G in the open point topology"></p> <p>For extra notation: C(E,F) is the set of all continuous functions from E to F (topological spaces). Can anybody help me prove the proposition?</p>
Henno Brandsma
4,280
<p>Let $V$ be any entourage in the uniformity of $F$, and $x$ be a fixed point in $E$. Then let $W$ be any symmetric entourage such that $W \circ W \circ W \subseteq V$, which can be done by the standard axioms for a uniformity. Then find $U$ open in $E$ that contains $x$, and such that for all $f \in G$, and for all $y \in U$ we have that $(f(x), f(y)) \in W$.</p> <p>I claim that $U$ now works for $\overline{G}^{(pw)}$ as well, with the original $V$. </p> <p>So let $y \in U$ and $g \in \overline{G}^{(pw)}$. Now consider the following set:</p> <p>$$O = \left\{f \in F^E \mid f(x) \in W[g(x)] \text{ and } f(y) \in W[g(y)] \right\}\text{.}$$</p> <p>This is a basic neighbourhood of $g$ in $F^E$ in the product topology, and so by the assumption that $g \in \overline{G}^{(pw)}$ we have some $f \in O \cap G$. We then know (as $y \in U$, $f \in G$, and by the way we chose $U$) that $(f(x), f(y)) \in W$, and as $f \in O$ we know that $(f(x), g(x)) \in W, (f(y), g(y)) \in W$. So by how we chose $W$ we know that $(g(x), g(y)) \in V$, as required.</p>
2,410,243
<p>Suppose we want to minimize $\operatorname{median}(a_1,\dots,a_N)$, where the $a_i$ are real numbers.</p> <p>Does someone know how to pose this as an integer programming problem, or point me in the direction of a resource? </p>
prubin
458,896
<p>Given <em>bounded</em> variables $a_{1},\dots,a_{N}$ subject to some constraints, you can minimize their median with $N$ additional binary variables $z_{1},\dots,z_{N}$ and one additional continuous variable $y$ if $N$ is odd (or if you are a bit sloppy about the definition of "median"). You minimize $y$ subject to the constraints $y\ge a_{i}-M_{i}z_{i},\, i=1,\dots,N$ and $\sum_{i=1}^{N}z_{i}=\left\lfloor \frac{N}{2}\right\rfloor $, where the $M_{i}$ are suitably large constants. The constraints force $y$ to be greater than or equal to at least $\left\lceil \frac{N}{2}\right\rceil $ of the $a_{i}$; the objective will result in those being the $\left\lceil \frac{N}{2}\right\rceil $ smallest of them and in $y$ being no bigger than the largest of that set, making $y$ the median.</p> <p>If $N$ is even and $a_{(1)},\dots,a_{(N)}$ is the order statistic of the $a$ variables, the median is technically $(a_{\left(\frac{N}{2}\right)}+a_{\left(\frac{N}{2}+1\right)})/2$. If you can live with using $a_{\left(\left\lfloor \frac{N}{2}\right\rfloor \right)}$ as the "median", the above should work. Otherwise, you need a second set of binary variables ($w_{1},\dots,w_{N}$) and a second continuous variable (replace $y$ with $y_{1}$ and $y_{2}$). You minimize $(y_{1}+y_{2})/2$ subject to the constraints \begin{align*} y_{1} &amp; \ge a_{i}-M_{i}z_{i}\quad\forall i\\ y_{2} &amp; \ge a_{i}-M_{i}w_{i}\quad\forall i\\ \sum_{i=1}^{N}z_{i} &amp; =\frac{N}{2}-1\\ \sum_{i=1}^{N}w_{i} &amp; =\frac{N}{2} \end{align*} where the $M_{i}$ are as above. The constraints force $y_{1}$ to be at least as large as $\frac{N}{2}+1$ of the $a_{i}$ and $y_{2}$ to be at least as large as $\frac{N}{2}$ of them. The minimum objective value will occur when $y_{1}=a_{\left(\frac{N}{2}+1\right)}$ and $y_{2}=a_{\left(\frac{N}{2}\right)}$.</p>
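With fixed values $a_i$ and no side constraints, the odd-$N$ formulation can be validated by brute force over the binary variables; this Python sketch (mine, not from the answer) checks that minimizing $y$ recovers the median.

```python
from itertools import combinations

def formulation_min_y(a):
    # Enumerate choices of the binary z: exactly floor(N/2) entries equal 1.
    # For fixed z, the smallest feasible y is the max of the a_i with z_i = 0.
    N = len(a)
    best = float("inf")
    for ones in combinations(range(N), N // 2):
        masked = set(ones)
        y = max(a[i] for i in range(N) if i not in masked)
        best = min(best, y)
    return best

a = [7.0, 1.0, 5.0, 9.0, 3.0]
print(formulation_min_y(a), sorted(a)[len(a) // 2])  # both equal the median 5.0
```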
1,650,881
<p>I found this problem in a book on undergraduate maths in the Soviet Union (<a href="http://www.ftpi.umn.edu/shifman/ComradeEinstein.pdf" rel="nofollow">http://www.ftpi.umn.edu/shifman/ComradeEinstein.pdf</a>):</p> <blockquote> <p>A circle is inscribed in a face of a cube of side a. Another circle is circumscribed about a neighboring face of the cube. Find the least distance between points of the circles.</p> </blockquote> <p>The solution to the problem is in the book (page 61), but I am wondering how to find the maximum distance between points of the circles and I cannot see how the method used there can be used to find this.</p>
grand_chat
215,011
<p>To find the maximum distance instead of the minimum distance, you can follow the same method, but interchange "maximize" with "minimize" everywhere. In particular the analog of Lemma 21.1 is</p> <p><strong>Lemma 21.1b.</strong> Let $\alpha$ and $\beta$ be real numbers. Then $$\min_{t\in[0,2\pi)}\alpha\cos t +\beta\sin t=-\sqrt{\alpha^2+\beta^2}.$$</p> <p>The rest of the proof can be followed. You'll see negative signs popping up where they used to be positive.</p>
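Lemma 21.1b is easy to check numerically; here is a small Python sketch (my own) evaluating the expression on a fine grid of $t$:

```python
import math

def grid_min(alpha, beta, m=100000):
    # minimum of alpha*cos(t) + beta*sin(t) over a fine grid of [0, 2*pi)
    return min(alpha * math.cos(2 * math.pi * k / m) +
               beta * math.sin(2 * math.pi * k / m)
               for k in range(m))

for alpha, beta in ((3.0, 4.0), (-1.0, 2.0), (0.0, 1.0)):
    print(grid_min(alpha, beta), -math.hypot(alpha, beta))  # grid minimum vs. closed form
```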
201,807
<p>I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? </p> <p>P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference. </p>
Ross Millikan
1,827
<p>I would assume that the cars all have distinct speeds and are released at the same time in a random order. The slowest car will accumulate all the cars behind it. The slowest car not in that group will accumulate all the cars behind it and in front of the slowest car. These are the groups at the other end.</p> <p>In this model, imagine we have the fastest $N-1$ in some order and put the slowest car into the list at random. The number of cars in front of the slowest will be uniformly distributed from $0$ to $N-1$, so the expected number of groups $E(N)=1+\frac 1N \sum_{i=0}^{N-1}E(i)$ with $E(0)=0$. A little numerical exploration indicates that $E(N)$ is a little above $\ln (N)$ and the difference is very slowly decreasing. It drops below $0.6$ at $N=22$ and is still above the <a href="http://en.wikipedia.org/wiki/Euler-Mascheroni_constant">Euler-Mascheroni constant</a> at $N=2700.$ </p>
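The recursion can be checked exactly in Python (my own sketch, not part of the answer): it produces the harmonic numbers $H_N$, and indeed $H_N-\ln N$ drops below $0.6$ at $N=22$.

```python
import math
from fractions import Fraction

def E(N, memo={0: Fraction(0)}):
    # E(N) = 1 + (1/N) * sum_{i < N} E(i), with E(0) = 0
    if N not in memo:
        memo[N] = 1 + Fraction(1, N) * sum(E(i) for i in range(N))
    return memo[N]

def H(N):
    return sum(Fraction(1, k) for k in range(1, N + 1))

print(all(E(n) == H(n) for n in range(1, 60)))  # True: E(N) is exactly H_N
print(float(H(21)) - math.log(21), float(H(22)) - math.log(22))  # straddles 0.6
```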
201,807
<p>I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? </p> <p>P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference. </p>
Hagen von Eitzen
39,174
<p>A car will eventually (remember that the road is very long) be in the same group as the one before it if that car is slower from the start or is eventually slower because it has to decelerate for some car ahead. Ultimately, a car will be the first car of a cluster iff none of the cars before it has a slower initial speed. Hence the number of clusters is the number of "new records" or peaks in a sequence of random variables. As a matter of fact, the distribution $V$ does not matter (as long as it is continuous) and one may work simply with a permutation of speeds. Let $F(n,k)$ be the set of permutations of $\{1, \ldots,n\}$ with $k$ peaks and $f(n,k)=|F(n,k)|$. The elements of $F(n+1,k)$ that start with $1$ can be bijected with $F(n,k-1)$ by dropping the leading $1$ and decreasing each number by one. The elements of $F(n+1,k)$ that do not start with $1$ can be mapped to $F(n,k)$ by dropping the $1$ and decreasing each number by one, but this time each element of $F(n,k)$ is obtained $n$ times (depending on the position of the $1$). We conclude $$\tag1 f(n+1,k) = f(n,k-1) + n f(n,k).$$ Actually, we are interested in $E_n=\frac1{n!}\sum_k k f(n,k)$, the expected number of peaks in a random permutation. Multiplying (1) by $k$ and summing over $k$ produces $$ \sum_k k f(n+1,k) = \sum_k (k-1) f(n,k-1)+\sum_k f(n,k-1) + n \sum_k kf(n,k)$$ $$ (n+1)! E_{n+1} = n!E_n+\sum_k f(n,k) + n!nE_n.$$ Using $\sum_k f(n,k)=n!$, we find $$E_{n+1} = E_n+\frac 1{n+1}$$ and with $E_1=1$ we see that $E_n$ is the $n$th harmonic number.</p>
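For small $N$ this can be verified exhaustively; the Python sketch below (mine, not from the answer) counts left-to-right minima ("new records" of slowness, i.e. cluster leaders) over all permutations.

```python
import math
from fractions import Fraction
from itertools import permutations

def expected_clusters(n):
    total = 0
    for perm in permutations(range(n)):
        slowest = n
        for v in perm:
            if v < slowest:   # a new slowest car starts a new cluster
                slowest = v
                total += 1
    return Fraction(total, math.factorial(n))

print([expected_clusters(n) for n in range(1, 7)])
# equals the harmonic numbers H_1..H_6 = 1, 3/2, 11/6, 25/12, 137/60, 49/20
```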
1,377,412
<p>I am brand new to ODE's, and have been having difficulties with this practice problem. Find a 1-parameter solution to the homogeneous ODE:$$2xy \, dx+(x^2+y^2) \, dy = 0$$assuming the coefficient of $dy \ne 0$. The textbook would like me to use the substitution $x = yu$ and $dx=y \, du + u \, dy,\ y \ne 0$. Rewriting the equation with the substitution: $$2uy^2(y \, du + u \, dy)+(y^2u^2+y^2) \, dy = 0$$ divide by $y^2$ $$2u(y \, du + u \, dy) + (u^2+1 ) \, dy=0$$ but after further simplification I end up getting ${dy \over y}$ which would mean I would get a logarithm after integrating, and the answer is given as $$3x^2y+y^3 = c$$ Could I get some help/hints as to how this answer was obtained? </p>
user247327
247,327
<p>Yes, you get $dy/y+ 2u/(3u^2+ 1)\,du= 0$. </p> <p>The integral of $dy/y$ is $\ln(|y|)$, and the integral of $2u/(3u^2+ 1)\,du$, using the substitution $v= 3u^2+ 1$ so that $dv= 6u\,du$ or $2u\,du= dv/3$, is the integral of $(1/v)\, dv/3$, which is $$(1/3)\ln(|v|)= (1/3)\ln(3u^2+ 1)= (1/3)\ln(3x^2/y^2+ 1)$$</p> <p>So integrating $dy/y+ 2u/(3u^2+ 1)\,du= 0$ gives $$\ln(|y|)+ (1/3)\ln(3x^2/y^2+ 1)= C$$ or, equivalently, $-3\ln(|y|)= \ln(3x^2/y^2+ 1)+ C$.</p> <p>Now, <strong>taking the exponential of both sides</strong>, $y^{-3}= C(3x^2/y^2+ 1)$.</p> <p>Finally, multiply both sides by $y^3$ to get $1= C(3x^2y+ y^3)$, which is the same as $1/C= C'= 3x^2y+ y^3$.</p>
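One can double-check the final implicit solution numerically; this Python sketch (my own, not part of the answer) integrates $y' = -2xy/(x^2+y^2)$ with a hand-rolled RK4 stepper and confirms that $3x^2y+y^3$ stays constant along the solution curve.

```python
def rhs(x, y):
    # y' from 2xy dx + (x^2 + y^2) dy = 0
    return -2.0 * x * y / (x * x + y * y)

def rk4(x, y, h, steps):
    # classical fourth-order Runge-Kutta integration
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h * k1 / 2)
        k3 = rhs(x + h / 2, y + h * k2 / 2)
        k4 = rhs(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return x, y

def invariant(x, y):
    return 3 * x * x * y + y ** 3

x1, y1 = rk4(1.0, 1.0, 1e-3, 2000)   # integrate from (1, 1) out to x = 3
print(invariant(1.0, 1.0), invariant(x1, y1))  # both approximately 4
```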