qid (int64, 1 to 4.65M) | question (large_string, length 27 to 36.3k) | author (large_string, length 3 to 36) | author_id (int64, -1 to 1.16M) | answer (large_string, length 18 to 63k) |
|---|---|---|---|---|
2,319,126 | <p>I have to find the dimension of $V+W,$ where $V$ is a vector subspace given by solutions of the linear system:</p>
<p>$$x+2y+z=0$$
$$3y+z+3t=0$$</p>
<p>and $W$ is the subspace generated from vectors
$(4,0,1,3)^T,(1,0,-1,0)^T$.</p>
<p>I don't know how to combine the two subspaces and calculate the dimension.</p>
| Bernard | 202,857 | <p>You have to calculate the dimension of $V\cap W$. A vector $v=\lambda{}^{\mathrm t}(4,0,1,3)+\mu{}^{\mathrm t}(1,0,-1,0)={}^{\mathrm t}(4\lambda+\mu,0,\lambda-\mu,3\lambda)$ satisfies the equations of both planes if and only if
$$\begin{cases}x+2y+z=5\lambda=0\\3y+z+3t=10\lambda-\mu=0\end{cases}\iff \lambda=\mu=0.$$
Hence $V\cap W=\{0\}$, $V+W=V\oplus W$ and
$$\dim(V+W)=\dim V+\dim W=4.$$</p>
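<p>The dimension count is easy to double-check numerically. The sketch below (my addition, not part of the original answer) builds a basis of $V$ as the null space of the coefficient matrix, stacks it with the generators of $W$, and computes the rank:</p>

```python
import sympy as sp

# V = null space of the system x + 2y + z = 0, 3y + z + 3t = 0 in R^4.
A = sp.Matrix([[1, 2, 1, 0],
               [0, 3, 1, 3]])
V_basis = A.nullspace()                 # two vectors, so dim V = 2

W_basis = [sp.Matrix([4, 0, 1, 3]), sp.Matrix([1, 0, -1, 0])]

# dim(V + W) = rank of the matrix whose columns are all four basis vectors.
M = sp.Matrix.hstack(*V_basis, *W_basis)
print(len(V_basis), M.rank())  # 2 4
```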
|
174,149 | <p>How many seven-digit even numbers greater than $4,000,000$ can be formed using the digits $0,2,3,3,4,4,5$?</p>
<p>I have solved the question using different cases: when $4$ is in the first place and when $5$ is in the first place, then using constraints on the last digit.</p>
<p>But is there a smarter way ?</p>
| draks ... | 19,341 | <p>You have the following combinations:</p>
<ul>
<li>$4\dots 0$, where in the middle you permute $\{2,3,3,4,5\}: \frac{5!}{2}$, where the $2$ in the denominator accounts for the repeated $3$. The same holds for $4\dots 2$, $4\dots 4$ and $5\dots 4$.</li>
<li>$5\dots 0$, where in the middle you permute $\{2,3,3,4,4\}: \frac{5!}{2^2}$, where the $2^2$ accounts for the repeated $3$ and the repeated $4$. The same holds for $5\dots 2$.</li>
</ul>
<p>So in total you have $4\cdot\frac{5!}{2}+2\cdot\frac{5!}{2^2}=300\;$ even numbers.</p>
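<p>A quick brute-force count (my own sanity check, not part of the original answer) confirms the total:</p>

```python
from itertools import permutations

digits = (0, 2, 3, 3, 4, 4, 5)
arrangements = set(permutations(digits))   # collapse the duplicate 3s and 4s
count = sum(
    1
    for p in arrangements
    if p[0] != 0                           # a genuine 7-digit number
    and p[-1] % 2 == 0                     # even
    and int("".join(map(str, p))) > 4_000_000
)
print(count)  # 300
```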
|
3,678,417 | <p>I understand:
<span class="math-container">$$\sum\limits^n_{i=1} i = \frac{n(n+1)}{2}$$</span>
what happens when we restrict the range such that:
<span class="math-container">$$\sum\limits^n_{i=n/2} i = ??$$</span></p>
<p>Originally I thought we might just have <span class="math-container">$\frac{n(n+1)}{2}/2$</span>, but I know that's not correct, since a summation starting at <span class="math-container">$n/2$</span> runs over the larger of the values between <span class="math-container">$1..n$</span>.</p>
| hamam_Abdallah | 369,188 | <p>If <span class="math-container">$n$</span> is even of the form <span class="math-container">$2p$</span>, the sum is
<span class="math-container">$$(p+0)+(p+1)+(p+2)+...+(p+n-p)=$$</span></p>
<p><span class="math-container">$$p(n-p+1)+1+2+3+...(n-p)=$$</span></p>
<p><span class="math-container">$$p(n-p+1)+\frac{(n-p)(n-p+1)}{2}=$$</span></p>
<p><span class="math-container">$$\frac{(n-p+1)(n+p)}{2}$$</span></p>
<p>If <span class="math-container">$n$</span> is odd of the form <span class="math-container">$2p-1$</span>, we get the same sum.</p>
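<p>A short script (an add-on of mine, not from the original answer) confirms the closed form <span class="math-container">$\frac{(n-p+1)(n+p)}{2}$</span> with <span class="math-container">$p=\lceil n/2\rceil$</span> for both parities:</p>

```python
def partial_sum(n):
    p = (n + 1) // 2                  # start index: ceil(n/2)
    return sum(range(p, n + 1))

def closed_form(n):
    p = (n + 1) // 2
    return (n - p + 1) * (n + p) // 2

# The direct sum and the closed form agree for even and odd n alike.
assert all(partial_sum(n) == closed_form(n) for n in range(1, 500))
print(partial_sum(10), closed_form(10))  # 45 45
```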
|
2,673,835 | <p>I need to prove that the sequence
$$
f_n = \sum_{i=0}^n\prod_{j=0}^i \left(z+j\right)^{-1} = \frac{1}{z}+\frac{1}{z(z+1)}+\cdots + \frac{1}{z(z+1)(z+2)\cdots (z+n)}$$
converges uniformly to a function on every compact subset of $\mathbb{C}\setminus\{0,-1,-2,-3,\dotsc\}$. The problem has other questions, but this is what stops me. Obviously, $f_n$ is holomorphic in that domain. </p>
<p>In fact, I don't even know what the limit of the sequence is, or a closed form for $f_n$. Can you give me a hint to continue? Please don't spoil the final solution. </p>
<p>I think I could take a compact $K$ and consider the series that define every term of $f_n$ and sum, but I don't get a general formula. </p>
<p>Edit: I get</p>
<p>$$
f_n(z)= \sum_{i=0}^n \frac{1}{z}\frac{\Gamma(z+1)}{\Gamma(z+1+i)}= \Gamma(z)\sum_{i=0}^n\frac{1}{\Gamma(z+1+i)}
$$</p>
<p>This smells like the Weierstrass M-test.</p>
| Ѕᴀᴀᴅ | 302,797 | <p>For a fixed compact set $K$, suppose $K \subseteq B(0, r)$ and $m \in \mathbb{N}_+$, $m > r + 1$. Then for $z \in K$,
\begin{align*}
\left| \sum_{k = m}^\infty \prod_{j = 0}^k (z + j)^{-1} \right| &\leqslant \sum_{k = m}^\infty \prod_{j = 0}^k |z + j|^{-1} = \prod_{j = 0}^{m - 1} |z + j|^{-1} \sum_{k = m}^\infty \prod_{j = m}^k |z + j|^{-1}\\
&\leqslant \prod_{j = 0}^{m - 1} |z + j|^{-1} \sum_{k = m}^\infty \prod_{j = m}^k (j - |z|)^{-1}\\
&\leqslant \prod_{j = 0}^{m - 1} |z + j|^{-1} \sum_{k = m}^\infty \frac{1}{(m - r)^{k - m}} < +\infty.
\end{align*}</p>
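<p>As a numeric illustration of this estimate (my addition; the sample values $z = 1.5+i$, $r=2$, $m=4$, and the truncation $K=60$ are arbitrary choices satisfying $|z|&lt;r$ and $m&gt;r+1$), one can compare the actual tail with the geometric bound:</p>

```python
z, r, m, K = 1.5 + 1.0j, 2.0, 4, 60    # |z| < r and m > r + 1

# Prefactor prod_{j=0}^{m-1} |z+j|^{-1}.
head = 1.0
for j in range(m):
    head /= abs(z + j)

# Actual (truncated) tail: sum_{k=m}^{K} prod_{j=0}^{k} |z+j|^{-1}.
prod, tail = 1.0 + 0.0j, 0.0
for j in range(K + 1):
    prod /= (z + j)
    if j >= m:
        tail += abs(prod)

# Geometric majorant from the answer's last line.
bound = head * sum((m - r) ** -(k - m) for k in range(m, K + 1))
print(tail <= bound)  # True
```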
|
2,673,835 | <p>I need to prove that the sequence
$$
f_n = \sum_{i=0}^n\prod_{j=0}^i \left(z+j\right)^{-1} = \frac{1}{z}+\frac{1}{z(z+1)}+\cdots + \frac{1}{z(z+1)(z+2)\cdots (z+n)}$$
converges uniformly to a function on every compact subset of $\mathbb{C}\setminus\{0,-1,-2,-3,\dotsc\}$. The problem has other questions, but this is what stops me. Obviously, $f_n$ is holomorphic in that domain. </p>
<p>In fact, I don't even know what the limit of the sequence is, or a closed form for $f_n$. Can you give me a hint to continue? Please don't spoil the final solution. </p>
<p>I think I could take a compact $K$ and consider the series that define every term of $f_n$ and sum, but I don't get a general formula. </p>
<p>Edit: I get</p>
<p>$$
f_n(z)= \sum_{i=0}^n \frac{1}{z}\frac{\Gamma(z+1)}{\Gamma(z+1+i)}= \Gamma(z)\sum_{i=0}^n\frac{1}{\Gamma(z+1+i)}
$$</p>
<p>This smells like the Weierstrass M-test.</p>
| Jack D'Aurizio | 44,121 | <p>An alternative approach, related to the fact that the incomplete $\Gamma$ function has a simple Laplace transform. One may notice that the function defined by such series of reciprocal Pochhammer symbols fulfills the functional identity $z f(z) = 1+ f(z+1)$ and $f(1)=e-1$. The same happens for
$$ g(z)=\int_{0}^{1} x^{z-1} e^{1-x}\,dx $$
over $\mathbb{R}^+$ (or $\text{Re}(z)>0$), by integration by parts. $g(z)$ is trivially holomorphic over $\text{Re}(z)>0$ and its only singularity over $\text{Re}(z)=0$ is the simple pole at the origin. Then the integral representation and the functional identity provide an analytic continuation to $\mathbb{C}\setminus\{0,-1,-2,-3,-4,\ldots\}$.</p>
<p>The situation is indeed very similar to the well-known facts about the (complete) $\Gamma$ function, $ z\Gamma(z)=\Gamma(z+1)$ and $\Gamma(z)=\int_{0}^{+\infty}x^{z-1}e^{-x}\,dx$ over $\text{Re}(z)>0$.</p>
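<p>Both the initial value $f(1)=e-1$ and the functional identity $zf(z)=1+f(z+1)$ are easy to confirm numerically with a truncated series (my own check, not part of the answer):</p>

```python
import math

def f(z, terms=40):
    """Truncated series: sum_{i=0}^{terms-1} prod_{j=0}^{i} 1/(z+j)."""
    total, prod = 0.0, 1.0
    for j in range(terms):
        prod /= (z + j)
        total += prod
    return total

print(abs(f(1.0) - (math.e - 1)))       # essentially zero
z = 2.5
print(abs(z * f(z) - (1 + f(z + 1))))   # essentially zero
```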
|
2,148,187 | <p>I am given a charge of $Q(t)$ on the capacitor of an LRC circuit with a differential equation</p>
<p>$Q''+2Q'+5Q=3\sin(\omega t)-4\cos(\omega t)$ with the initial conditions $Q(0)=Q'(0)=0$</p>
<p>$\omega >0$ which is constant and $t$ is time. I am then asked find the steady state and transient parts of the solution and the value of $\omega$ for which the amplitude of the steady state charge maximal.</p>
<p>I believe the transient part is just the homogeneous (complementary) solution to the ODE, and the steady state part is the particular solution.</p>
<p>I solved the homogeneous and got</p>
<p>$Q_{tr}=c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)$ which I am pretty sure is right. </p>
<p>The problem is that I am not given a value for $\omega$ so if I were to go ahead and solve it, I would get a mess because I use undetermined coefficients.</p>
<p>So if I go ahead and "guess" a solution, I get $Q_{ss}=A\sin(\omega t)+B\cos(\omega t)$, but if I differentiate this and actually plug it into the equation I get a huge mess, so I am not sure if that's entirely the right way to approach this problem...</p>
<p>I mean if I actually solved the steady state solution, I get:</p>
<p>$Q(t)= c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)+ \frac{-3\omega^2-8\omega+15}{\omega^4-6\omega^2+25} \sin( \omega t)+\frac{4\omega^2-6\omega-20}{\omega^4-6\omega^2+25} \cos( \omega t)$</p>
<p>Plugging the initial conditions into this would be terrible. Is this even the correct approach?</p>
<p>For the second part of the problem I guess that I would take the derivative of $Q(t)$ and then find the critical points for which there will be a maximum but I am not sure about that.</p>
<p>Any guidance would be much appreciated thanks :) .</p>
| trace | 420,766 | <p>Your $Q(t)$ is actually wrong: it should have come out to plus $8\omega$, not minus. Then use the formula $R^2 = A^2 + B^2$ to find $R(\omega)$, and solve for the value of $\omega$ that makes $R$ maximal.</p>
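<p>One way to settle the disputed sign without redoing the algebra by hand is to solve the $2\times 2$ undetermined-coefficients system numerically (a sketch I am adding; the helper name is mine):</p>

```python
def particular_coeffs(w):
    """Solve the 2x2 system for Q_p = A sin(wt) + B cos(wt) in
    Q'' + 2Q' + 5Q = 3 sin(wt) - 4 cos(wt)."""
    a11, a12 = 5 - w**2, -2 * w   # sin(wt) coefficients of A, B
    a21, a22 = 2 * w, 5 - w**2    # cos(wt) coefficients of A, B
    det = a11 * a22 - a12 * a21   # = w**4 - 6*w**2 + 25
    A = (3 * a22 - (-4) * a12) / det
    B = (a11 * (-4) - a21 * 3) / det
    return A, B

# Compare with the closed forms quoted in the question at a sample frequency.
A, B = particular_coeffs(1.0)
print(A, B)  # 0.2 -1.1
```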
|
1,755,029 | <p>Imagine a cubic array made up of an $n\times n\times n$ arrangement of unit cubes: the cubic array is $n$
cubes wide, $n$ cubes high and $n$ cubes deep. A special case is a $3\times3\times3$ Rubik’s cube, which
you may be familiar with. How many unit cubes are there on the surface of the $n\times n\times n$ cubic array?</p>
<p>As far as I can see there are 27 unit cubes in a $3\times3\times3$ Rubik's cube. But the answer says something different. There are $6n^2$ squares in total on the surface of an $n\times n\times n$ cube, but after that I can't proceed.</p>
<p>Please help :)</p>
| Stella Biderman | 123,230 | <p>So, I think the thing you're missing is that you're counting the number of <em>squares</em> on the surface, not the number of cubes. For example, a corner piece of a Rubik's cube is one cube but contributes three squares.</p>
<p>In an $n\times n\times n$ cube, the <em>outer layer</em> of cubes counts, and the ones on the inside do not. The easiest way to see this is recursively: in an $n\times n\times n$ cube, there is an $(n-2)\times (n-2)\times (n-2)$ cube on the inside, coated by the cubes that form the surface. Subtracting $$n^3-(n-2)^3=6n^2-12n+8$$ gives the final answer.</p>
<p>This generalizes immediately to higher dimensions... In dimension $k$ the answer is $$n^k-(n-2)^k$$</p>
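<p>The formula is easy to verify by brute force (my addition): mark a unit cube as on the surface exactly when one of its coordinates is $0$ or $n-1$.</p>

```python
def surface_count(n):
    # A unit cube is on the surface iff some coordinate is 0 or n-1.
    return sum(
        1
        for x in range(n) for y in range(n) for z in range(n)
        if 0 in (x, y, z) or n - 1 in (x, y, z)
    )

for n in range(2, 10):
    assert surface_count(n) == n**3 - (n - 2)**3 == 6*n**2 - 12*n + 8
print(surface_count(3))  # 26
```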
|
3,074,901 | <p>Find the rank of the following matrix</p>
<p><span class="math-container">$$\begin{bmatrix}1&-1&2\\2&1&3\end{bmatrix}$$</span></p>
<p>My approach: </p>
<p>The row space exists in <span class="math-container">$R^3$</span> and is spanned by two vectors. Since the vectors are independent of each other (they are not scalar multiples of each other), the row rank of the matrix, which is the rank, is two, which is the correct answer.</p>
<p>However, I'm still confused as to why the answer is what it is. If the row space exists in <span class="math-container">$R^3$</span>, doesn't it have to be spanned by at least three vectors? For example, the unit vectors <span class="math-container">$u_1, u_2, u_3$</span> span the space and are independent of each other, so the rank of the space should be 3. </p>
<p>Can someone please tell me the flaw in my logic/understanding?</p>
| Taffies1 | 598,984 | <p>The rank of a matrix is simply the number of nonzero rows in its reduced row echelon form (RREF). The RREF of the given matrix is</p>
<p><span class="math-container">$$\begin{bmatrix}1&0&5/3\\0&1&-1/3\end{bmatrix}$$</span></p>
<p>Clearly, there are 2 nonzero rows in the reduced row echelon form of the given matrix. Thus, the rank is 2. </p>
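<p>For what it's worth (this check is mine, not the answerer's), numpy agrees:</p>

```python
import numpy as np

A = np.array([[1, -1, 2],
              [2,  1, 3]])
# Row rank = column rank = rank; numpy computes it via the SVD.
print(np.linalg.matrix_rank(A))  # 2
```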
|
3,074,901 | <p>Find the rank of the following matrix</p>
<p><span class="math-container">$$\begin{bmatrix}1&-1&2\\2&1&3\end{bmatrix}$$</span></p>
<p>My approach: </p>
<p>The row space exists in <span class="math-container">$R^3$</span> and is spanned by two vectors. Since the vectors are independent of each other (they are not scalar multiples of each other), the row rank of the matrix, which is the rank, is two, which is the correct answer.</p>
<p>However, I'm still confused as to why the answer is what it is. If the row space exists in <span class="math-container">$R^3$</span>, doesn't it have to be spanned by at least three vectors? For example, the unit vectors <span class="math-container">$u_1, u_2, u_3$</span> span the space and are independent of each other, so the rank of the space should be 3. </p>
<p>Can someone please tell me the flaw in my logic/understanding?</p>
| Metric | 221,865 | <p>With the case of <span class="math-container">$\mathbb{R}^3$</span>, the dimension is 3, since it has a basis that contains 3 elements.</p>
<p>The row space of your matrix lives as a "subspace" of the bigger structure <span class="math-container">$\mathbb{R}^3$</span>. That is, you don't view it as <span class="math-container">$\mathbb{R}^3$</span>, but rather as its own entity <em>within</em> <span class="math-container">$\mathbb{R}^3$</span>. It's a nicely structured <em>chunk</em> of <span class="math-container">$\mathbb{R}^3$</span>, if you will.</p>
<p>Being its own entity, it must have its <em>own</em> basis! The row space is filled with linear combinations of the two rows of your matrix, and since the two rows are linearly independent (as you rightfully pointed out), its basis contains only 2 elements, so its dimension is 2!</p>
<p>I think an example is more enlightening. Consider this simpler matrix instead:</p>
<p><span class="math-container">$$\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}$$</span></p>
<p>Its rows are linearly independent, but are elements of <span class="math-container">$\mathbb{R}^3$</span>. In this case, your intuition wouldn't have told you that the dimension is <span class="math-container">$3$</span> simply because they are elements of <span class="math-container">$\mathbb{R}^3$</span>, right? If you look at its row space (i.e., the linear combination of its rows), I'm sure you can see the plane, which is of dimension 2.</p>
|
2,548,469 | <p>Suppose there is a sequence of iid variates from $U(0,1)$, $X_1,X_2,\dots$ If we stop the process when $X_n>X_{n+1}$, what is the expected number of generated variates?</p>
<p>I am just thinking about treating it as a Bernoulli process, so that I can use geometric distribution. Is this the right approach?</p>
| robjohn | 13,854 | <p>$$
\begin{align}
\int\frac{x-1}{x+1}\frac{\mathrm{d}x}{\sqrt{x^3+x^2+x}}
&=\int\frac{x^{-1/2}-x^{-3/2}}{x^{1/2}+x^{-1/2}}\frac{\mathrm{d}x}{\sqrt{\left(x^{1/2}+x^{-1/2}\right)^2-1}}\\
&=2\int\frac1{x^{1/2}+x^{-1/2}}\frac{\mathrm{d}\!\left(x^{1/2}+x^{-1/2}\right)}{\sqrt{\left(x^{1/2}+x^{-1/2}\right)^2-1}}\\
&=2\int\frac1u\frac{\mathrm{d}u}{\sqrt{u^2-1}}\\[9pt]
&=2\sec^{-1}(u)+C\\[15pt]
&=2\sec^{-1}\left(x^{1/2}+x^{-1/2}\right)+C
\end{align}
$$</p>
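<p>The antiderivative can be spot-checked by differentiating numerically (my addition; note that $\sec^{-1}(u)=\cos^{-1}(1/u)$ for $u\ge 1$, and $x^{1/2}+x^{-1/2}\ge 2$ for $x&gt;0$):</p>

```python
import math

def F(x):
    u = math.sqrt(x) + 1 / math.sqrt(x)
    return 2 * math.acos(1 / u)          # 2 arcsec(u), valid since u >= 2

def integrand(x):
    return (x - 1) / ((x + 1) * math.sqrt(x**3 + x**2 + x))

# The central difference of F should match the integrand.
for x in (0.5, 1.0, 2.0, 5.0):
    h = 1e-6
    num = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(num - integrand(x)) < 1e-6
print("antiderivative checks out")
```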
|
1,488,737 | <blockquote>
<p>Let $A$ be a square matrix of order $n$. Prove that if $A^2=A$ then $\mathrm{rank}(A)+\mathrm{rank}(I-A)=n$.</p>
</blockquote>
<p>I tried to bring the $A$ over to the left-hand side and factorise it out, but I do not know how to proceed. Please help. </p>
| martini | 15,379 | <p><strong>Hint</strong>. Show that under the given condition, the following holds:</p>
<p>$$\ker A = \operatorname{im} (I -A) $$</p>
|
2,943,892 | <p>I have heard it said that completeness is a not a property of topological spaces, only a property of metric spaces (or topological groups), because Cauchy sequences require a metric to define them, and different metrics yield different sets of Cauchy sequence, even if the metrics induce the same topology. But why wouldn't the following definition of Cauchy sequence work?</p>
<p>(*) A sequence <span class="math-container">$(x_n)$</span> in a topological space <span class="math-container">$(X,\tau)$</span> is Cauchy if there exists a nested sequence of open sets <span class="math-container">$(U_n)$</span> where each open set is a proper subset of the one before it and the intersection of all of them contains at most one element, such that for any natural number <span class="math-container">$m$</span>, there exists a natural number <span class="math-container">$N$</span> such that for all <span class="math-container">$n\geq N$</span>, we have <span class="math-container">$x_n\in U_m$</span>.</p>
<p>My question is, why isn't this definition equivalent to being Cauchy in all metrics on <span class="math-container">$X$</span> whose topology is <span class="math-container">$\tau$</span>? Are there any metrics where being Cauchy is equivalent to (*)?</p>
| Alex Kruckman | 7,062 | <p>According to your proposed definition, the sequence <span class="math-container">$1,2,3,\dots$</span> would be Cauchy in <span class="math-container">$\mathbb{R}$</span>, witnessed by the sequence of open sets <span class="math-container">$U_n = (n,\infty)$</span>. </p>
<hr>
<p><strong>Edit:</strong> Let me incorporate some information from the comments to make this a more complete answer. </p>
<p>As another example of why your definition is unsatisfactory: In <span class="math-container">$\mathbb{R}^2$</span>, <em>any</em> sequence <span class="math-container">$(a_n,0)$</span> of points on the <span class="math-container">$x$</span>-axis is Cauchy, witnessed by the sequence of open sets <span class="math-container">$\mathbb{R}\times (-1/n,1/n)$</span>. </p>
<p>The fact that completeness and Cauchyness are not topological properties can be formalized by the fact that there are generally many different metrics compatible with a given topology, and these different metrics can induce different notions of completeness and Cauchyness. Looked at a different way, homeomorphisms preserve all topological properties (I would take this to be the definition of a topological property), but homeomorphisms between metric spaces do not necessarily preserve completeness (see the examples in btilly's and Andreas Blass's comments). </p>
<p>On the other hand, the notion of completeness actually lives somewhere in between the world of topological properties and metric properties, in the sense that many different metrics can agree about which sequences are Cauchy. It turns out that they agree when they induce the same <a href="https://en.wikipedia.org/wiki/Uniform_space" rel="noreferrer">uniform structure</a> on the space. And indeed, completeness can be defined purely in terms of the induced uniform structure, so it's really a uniform property. </p>
<p>There is one class of spaces in which topological property and uniform properties coincide: a compact space admits a unique uniformity. So you could call completeness a topological property for compact spaces. But in a rather trivial way, since every compact uniform space is automatically complete. </p>
<p>It could be worthwhile to view compactness as the proper topological version of completeness, i.e. the topological property that comes the closest to agreeing with the uniform/metric property of completeness. </p>
|
1,581,456 | <p>Given functions $g,h: A \rightarrow B$ and a set C that contains at least two elements, with $f \circ g = f \circ h$ for all $f:B \rightarrow C$. Prove that $g = h$. </p>
<p>My logic is to take $C = B$ and $f(x) = x$ for all $x$ in particular, and the result follows immediately. But I don't see the use of the condition on $C$.
Can somebody please help?</p>
| Piquito | 219,998 | <p>Your identity is true. I give here the method for you to prove the point for general $n$. </p>
<p>One has as main preliminary remark $$\color{red}{z^n=-1\iff z^{n+k}=-z^k\iff \frac{1}{1-z^k}=\frac{z^{n-k}}{1+z^{n-k}}}$$ We make $$A=\sum_{k=o}^{n-1}\frac{2k+1}{1-z^{2k+1}}$$ $$B=n\sum_{k=0}^{n-1}\frac{1}{1+z^k}$$ </p>
<p><strong>Example of algebraic verification: $n=6$, even.</strong></p>
<p>The odd exponents in $A$ are $\begin{cases}1\\3\\5\\7=6+1\\9=6+3\\11=6+5\end{cases}$
therefore
$$A=\left({1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}\right)+\left({6+1\over 1+z}+{6+3\over 1+z^3}+{6+5\over 1+z^5}\right)$$
$$A=B\Rightarrow \left({1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}\right)+\left({1\over 1+z}+{3\over 1+z^3}+{5\over 1+z^5}\right)= 6\left({1\over 1+1}+{1\over 1+z^2}+{1\over 1+z^4}\right)$$ Hence $${2\over 1-z^2}+{6\over 1+1}+{10\over 1-z^{10}}= {6\over 1+1}+{6\over 1+z^2}+{6\over 1+z^4}$$ $${2\over 1-z^2}+{10\over 1+z^4}= {6\over 1+z^2}+{6\over 1+z^4}\iff {1\over 1+z^4}={1-2z^2\over 1-z^4}\iff z^4-z^2+1=0 $$ Since $z^6+1=(z^2+1)(z^4-z^2+1)=0$ the algebraic verification is ended.</p>
<p><strong>Example of algebraic verification: $n=7$, odd.</strong></p>
<p>$$A={1\over 1-z}+{3\over 1-z^3}+{5\over 1-z^5}+{7\over 1+1}+{7+2\over 1+z^2}+{7+4\over 1+z^4}+{7+6\over 1+z^6}=B$$
$$\left({1-6z\over 1-z}-{7\over 1+z}+{7z^2\over 1-z^2}+{2+5z^2\over 1+z^2}+{4\over 1+z^4}\right)={7\over 1+z^3}-{3\over 1-z^3}$$
$$\left({2z(2z^7-z^6+z^5-z^4+6z^3-z^2+z-1)\over z^8-1}\right)={7\over 1+z^3}-{3\over 1-z^3}$$ but the parenthesis equals
$${2z(5z^3-2)\over -z-1}$$ because $$\left({2z(2z^7-(\color{red}{z^6-z^5+z^4-z^3+z^2-z+1})+5z^3)\over z^8-1}\right)={2z(5z^3-2)\over -z-1}$$
where the red polynomial is null as a factor of $z^7+1=0$. Therefore $${2z(5z^3-2)\over -z-1}={7\over 1+z^3}-{3\over 1-z^3}= {2(5z^3-2)\over z^6-1}$$ i.e. $${z\over -z-1}={1\over z^6-1}\iff z^7-z=-z-1$$ which ends the proof for $n=7$.</p>
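<p>Since the verification above covers only $n=6$ and $n=7$, a numeric check of $A=B$ at a primitive root $z=e^{i\pi/n}$ of $z^n=-1$ for several $n$ may be reassuring (this sketch is mine):</p>

```python
import cmath

def A(n, z):
    return sum((2*k + 1) / (1 - z**(2*k + 1)) for k in range(n))

def B(n, z):
    return n * sum(1 / (1 + z**k) for k in range(n))

for n in range(1, 12):
    z = cmath.exp(1j * cmath.pi / n)   # a primitive root of z^n = -1
    assert abs(A(n, z) - B(n, z)) < 1e-9
print("A = B verified for n = 1..11")
```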
|
3,166,947 | <p>I want to transform the following
<span class="math-container">$$\prod_{k=0}^{n} (1+x^{2^{k}})$$</span>
to the canonical form <span class="math-container">$\sum_{k=0}^{n} c_{k}x^{k}$</span></p>
<p>This is what I got so far
<span class="math-container">\begin{align*}
\prod_{k=0}^{n} (1+x^{2^{k}})= \dfrac{x^{2^{n}}-1}{x-1} (x^{2^{n}}+1) \\
\end{align*}</span>
but I don't know how to continue, can someone help me with this?</p>
| Explorer | 630,833 | <p>Note that <span class="math-container">$ \sum_{k=0}^{n-1} 2^k=2^n-1$</span>. Therefore, the highest power of <span class="math-container">$x$</span> in <span class="math-container">$\prod_{k=0}^{n-1}(1+x^{2^k})$</span> is <span class="math-container">$2^n-1$</span>. In other words, none of the terms will repeat as we keep multiplying by <span class="math-container">$1+x^{2^n}$</span>. Thus, the relation can be obtained using induction. </p>
<p>It is easy to see that for <span class="math-container">$n=1$</span>
<span class="math-container">\begin{equation}
\prod_{k=0}^1(1+x^{2^k}) = (1+x)(1+x^2)= 1+x+x^2+x^3.
\end{equation}</span>
Now, let us assume that <span class="math-container">$\prod_{k=0}^{n-1}(1+x^{2^k}) = \sum_{k=0}^{2^n-1} x^k$</span>. Then, we have
<span class="math-container">\begin{equation}
\prod_{k=0}^{n}(1+x^{2^k}) = \left(\sum_{k=0}^{2^{n}-1} x^k\right)(1+x^{2^{n}}) = \sum_{k=0}^{2^{n}-1} x^k+ \sum_{k=0}^{2^{n}-1} x^{k+2^n} = \sum_{k=0}^{2^{n+1}-1} x^k.
\end{equation}</span></p>
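<p>The identity can be confirmed symbolically for small <span class="math-container">$n$</span> (my addition):</p>

```python
from functools import reduce
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    # prod_{k=0}^{n} (1 + x^{2^k}) versus 1 + x + ... + x^{2^{n+1}-1}
    lhs = reduce(lambda a, b: a * b, (1 + x**(2**k) for k in range(n + 1)))
    rhs = sum(x**k for k in range(2**(n + 1)))
    assert sp.expand(lhs - rhs) == 0
print("identity verified for n = 0..5")
```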
|
3,959,263 | <p>Let <span class="math-container">$G$</span> be a tree with a maximum degree of the vertices equal to <span class="math-container">$k$</span>.
<strong>At least</strong> how many vertices with a degree of <span class="math-container">$1$</span> can be in <span class="math-container">$G$</span> and why?</p>
<p>I think the answer must be <span class="math-container">$k$</span> but I don't know how to prove it.</p>
| RobPratt | 683,666 | <p>For <span class="math-container">$d\in\{1,\dots,k\}$</span>, let <span class="math-container">$n_d$</span> be the number of nodes of degree <span class="math-container">$d$</span>. By the handshake lemma, we have
<span class="math-container">$$\sum_{d=1}^k d n_d = 2\left(\sum_{d=1}^k n_d - 1\right),$$</span>
which implies that
<span class="math-container">$$n_1 = 2 + \sum_{d=3}^k (d-2) n_d \ge 2 + (k-2) n_k \ge 2 + (k-2) 1 = k.$$</span>
This lower bound is attained by a star with <span class="math-container">$k+1$</span> nodes.</p>
<hr />
<p>Alternatively, perform a depth-first search rooted at a node with degree <span class="math-container">$k$</span>. Each branch yields at least one leaf.</p>
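<p>The bound can also be stress-tested on random labeled trees via Prüfer sequences, using the fact that a node's degree is one plus its number of appearances in the sequence (this check is my addition):</p>

```python
import random
from collections import Counter

def tree_degrees_from_pruefer(seq):
    """Degrees of the labeled tree on n = len(seq) + 2 nodes with
    Pruefer sequence `seq`: deg(v) = 1 + (appearances of v in seq)."""
    n = len(seq) + 2
    deg = Counter({v: 1 for v in range(n)})
    for v in seq:
        deg[v] += 1
    return deg

random.seed(0)
for _ in range(1000):
    n = random.randint(3, 12)
    seq = [random.randrange(n) for _ in range(n - 2)]
    deg = tree_degrees_from_pruefer(seq)
    k = max(deg.values())
    leaves = sum(1 for d in deg.values() if d == 1)
    assert leaves >= k          # at least k leaves, as proved above
print("checked 1000 random trees")
```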
|
586,112 | <p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p>
<p>I think the above statement is false: $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$. As this is not the case here, it is false.</p>
<p>Is my explanation and answer right or not?</p>
| Ahaan S. Rungta | 85,039 | <p>The answer that I have always seen: an integral usually has defined limits, whereas an antiderivative is usually the general case and will almost always have a $+C$, the constant of integration, at the end. Other than this, the two are completely the same. </p>
<p>You will be safe in class, though, if you assume them to be identical when neither of them has defined limits.</p>
|
586,112 | <p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p>
<p>I think the above statement is false: $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$. As this is not the case here, it is false.</p>
<p>Is my explanation and answer right or not?</p>
| Hussain Kamran | 257,959 | <p>I think the indefinite integral and the antiderivative are closely related but not identical things.
The indefinite integral, denoted by the symbol $\int$, is the family of all antiderivatives of the integrand $f(x)$; an antiderivative is any one of the many functions belonging to that family.</p>
<p>For example, consider the indefinite integral $\int 2x\,dx$. Here $2x$ is the function being integrated with respect to $x$, and the integration yields $x^2+c$, where $c$ is a constant. But what is an antiderivative of $2x$? The answer is not unique: it can be $x^2+10$, $x^2+4$, $x^2+0$, etc., and all of these are members of $\int 2x\,dx$.</p>
<p>So the family $\int 2x\,dx$ is the integral, and all its members are antiderivatives.</p>
|
17,455 | <p>Assume $S_1$ and $S_2$ are two $n \times n$ (positive definite if that helps) matrices, $c_1$ and $c_2$ are two variables taking scalar form, and $u_1$ and $u_2$ are two $n \times 1$ vectors. In addition, $c_1+c_2=1$, but in the more general case of $m$ $S$'s, $u$'s, and $c$'s, the $c$'s also sum to 1.</p>
<p>What is the derivative of $(c_1 S_1+c_2 S_2)^{-1}(c_1 u_1+c_2 u_2)$ with respect to both $c_1$ and $c_2$?</p>
| Community | -1 | <p>Since I did this initially unregistered, I can't seem to figure out how to get back in and add a comment instead of an answer. This is a bit long, but I am still having trouble resolving this problem satisfactorily. Also, I apologize for the formatting.</p>
<p>@Jonas Meyer I got similar answers to you the first time I had tried it. The problem I'm actually dealing with is more general and contains perhaps a c3, c4, or any higher number. However, one of these cs will always be defined as c=1-sum(other cs).</p>
<p>In the simpler problem, S^-1*(c1*u1+c2*u2), if you assume c1 and c2 are independent, then the derivative wrt c1 is S^-1*u1 as I said above. This answer makes quite a lot of sense with regards to my current problem. If S=c1*S1+c2*S2 and S1=S2, I also found the answer that you have in the reply, S^-1*(u1-c1*u1-c2*u2). The problem is that this isn't consistent with what is above. This is a portfolio management problem. The first answer will give the equivalent of a portfolio with confidence c1=100% in the u1 view, while the second will give the difference between that view and the blended final portfolio. I am trying to address how much the confidences c1 and c2 are responsible for the final portfolio weights. In the first case, it is obvious: Multiply S^-1*u1 by c1 since it is homogeneous and divide by S^-1*(c1*u1+c2*u2). It is not obvious in the case of S^-1*(u1-c1*u1-c2*u2). For instance, if c1=100% and c2=0%, then the first case will show 100% of the weights are driven by c1 and 0% are driven by c2. The second case will say the contribution is 0% for c1. Something is mistaken here.</p>
<p>If c2=1-c1, then the derivative wrt c1 is S^-1*(u1-u2). For the more complicated case, where S=c1*S1+c2*S2=S2+c1*(S1-S2), the derivative wrt c1 can be calculated using what is in the answer and there will be a term in one part so that if S1=S2 then it falls out and the answer becomes S^-1*(u1-u2). So if you assume c2=1-c1, I think you can get consistent answers between the case with 1 S and the case with 2 Ss. However, it still doesn't resolve my problems when you multiply the derivative wrt c1 by c1 and divide by S^-1*(c1*u1+c2*u2). This number will not provide the relative contributions from c1 to the portfolio weights.</p>
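<p>For the case where c1 and c2 are treated as independent, the product-rule derivative S^-1*u1 - S^-1*S1*S^-1*u (with S=c1*S1+c2*S2 and u=c1*u1+c2*u2) can be checked against finite differences; this sketch is my addition and takes no side in the discussion above:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def spd():
    """A random symmetric positive definite n x n matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

S1, S2 = spd(), spd()
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)

def w(c1, c2):
    return np.linalg.solve(c1 * S1 + c2 * S2, c1 * u1 + c2 * u2)

c1, c2 = 0.3, 0.7
S = c1 * S1 + c2 * S2
u = c1 * u1 + c2 * u2
Sinv = np.linalg.inv(S)

# Product rule with c1, c2 independent:
# d/dc1 [S^-1 u] = -S^-1 S1 S^-1 u + S^-1 u1
grad = Sinv @ u1 - Sinv @ S1 @ Sinv @ u

h = 1e-6
fd = (w(c1 + h, c2) - w(c1 - h, c2)) / (2 * h)
print(np.allclose(grad, fd, atol=1e-5))  # True
```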
|
4,385,209 | <p>Let's start by generalizing the concept of a metric space. An <span class="math-container">$S$</span>-metric space is a set <span class="math-container">$X$</span> with a function <span class="math-container">$d : X \times X \to S$</span> such that</p>
<ul>
<li><span class="math-container">$d(x,y) = 0 \iff x = y$</span></li>
<li><span class="math-container">$d(x,y) = d(y,x)$</span></li>
<li><span class="math-container">$d(x,z) \leq d(x,y) + d(y,z)$</span></li>
</ul>
<p>This is just a metric space which need not necessarily map into <span class="math-container">$\mathbb{R}$</span>. So my question is:</p>
<blockquote>
<p>Are all <span class="math-container">$\mathbb{R}$</span>-metrizable spaces also <span class="math-container">$\mathbb{Q}$</span>-metrizable spaces?</p>
</blockquote>
<p>I suspect the answer is "No", but I have yet to come up with a counterexample. I have shown that a few metrizable spaces are <span class="math-container">$\mathbb{Q}$</span>-metrizable.</p>
<p>For example discrete spaces are <span class="math-container">$\mathbb{Q}$</span>-metrizable since the usual metric has a range of <span class="math-container">$\{0,1\}$</span>. Additionally <span class="math-container">$\mathbb{R}^n$</span> is <span class="math-container">$\mathbb{Q}$</span>-metrizable. If we take <span class="math-container">$d$</span> to be the normal Euclidean metric then we can define <span class="math-container">$d'$</span> such that:</p>
<p><span class="math-container">$
d'(x,y)=\left\lceil d(x,y)\right\rceil
$</span></p>
<p>The first two conditions follow trivially from the fact that <span class="math-container">$d$</span> is a metric and the third is true by virtue of the fact that <span class="math-container">$\lceil x+y\rceil \leq \lceil x\rceil + \lceil y\rceil$</span>.</p>
<p>Since <span class="math-container">$\mathbb{R}^n$</span> is generated by unit balls, this metric generates the usual topology on <span class="math-container">$\mathbb{R}^n$</span>.</p>
<p>This gives us a whole lot more spaces which are homeomorphic to a subspace of <span class="math-container">$\mathbb{R}^n$</span> as well, but I don't see a way to adjust this more generally.</p>
<p>Is there an example of a space which is <span class="math-container">$\mathbb{R}$</span>-metrizable but not <span class="math-container">$\mathbb{Q}$</span>-metrizable by the above definition?</p>
| coudy | 716,791 | <p><span class="math-container">${\bf R}$</span> is not <span class="math-container">${\bf Q}$</span>-metrizable. In fact no connected set <span class="math-container">$X$</span> with more than one point is <span class="math-container">$\bf Q$</span>-metrizable. Let
<span class="math-container">$$\phi(x,y) = d(x,y) \in {\bf Q}$$</span>
This is a continuous function. If <span class="math-container">$X$</span> is connected, so is <span class="math-container">$X\times X$</span> and so is its image <span class="math-container">$\phi(X\times X)$</span>. The only connected subset of <span class="math-container">$\bf Q$</span> containing <span class="math-container">$0$</span> is <span class="math-container">$\{0\}$</span> hence <span class="math-container">$\phi$</span> is zero everywhere.</p>
<p>Here I am assuming that the topology given by the <span class="math-container">$\bf Q$</span>-distance is the euclidean one. If we do not make this assumption, then we can put a <span class="math-container">$\bf Q$</span>-distance on all spaces that are in bijection with <span class="math-container">$\bf R$</span> because <span class="math-container">$\bf R$</span> itself is in bijection with <span class="math-container">$\{0,1\}^{\bf N}$</span> and <span class="math-container">$\{0,1\}^{\bf N}$</span> possesses the <span class="math-container">$\bf Q$</span>-distance
<span class="math-container">$$
d(\{x_n\}_{n \in {\bf N}},\{y_n\}_{n \in {\bf N}}) = 2^{-\min\{n \in {\bf N} \mid x_n \neq y_n\}}.
$$</span></p>
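<p>That <span class="math-container">$2^{-\min\{n \,:\, x_n \neq y_n\}}$</span> really is an (ultra)metric is easy to spot-check on truncated sequences (my addition):</p>

```python
import random

def d(x, y):
    """2^{-min{n : x_n != y_n}} on equal-length binary tuples; 0 if equal."""
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-n)
    return 0.0

random.seed(1)
for _ in range(2000):
    x, y, z = ([random.randint(0, 1) for _ in range(20)] for _ in range(3))
    assert d(x, y) == d(y, x)
    assert (d(x, y) == 0) == (x == y)
    # The strong (ultrametric) inequality implies the ordinary triangle one.
    assert d(x, z) <= max(d(x, y), d(y, z)) <= d(x, y) + d(y, z)
print("ultrametric axioms hold on 2000 random triples")
```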
|
2,878,412 | <p>I've been working on a problem that involves discovering valid methods of expressing natural numbers as Roman Numerals, and I came across a few oddities in the numbering system.</p>
<p>For example, the number 5 could be most succinctly expressed as $\texttt{V}$, but as per the rules I've seen online, could also be expressed as $\texttt{IVI}$. </p>
<p>Are there any rules that bar the second expression from being valid? Or are the rules for Roman numerals such that multiple valid expressions express the same number?</p>
<h1>Edit</h1>
<p>A sample set of rules I've seen online:</p>
<ol>
<li>Only one I, X, and C can be used as the leading numeral in part of a subtractive pair.</li>
<li>I can only be placed before V or X in a subtractive pair.</li>
<li>X can only be placed before L or C in a subtractive pair.</li>
<li>C can only be placed before D or M in a subtractive pair.</li>
<li>Other than subtractive pairs, numerals must be in descending order (meaning that if you drop the first term of each subtractive pair, then the numerals will be in descending order).</li>
<li>M, C, and X cannot be equalled or exceeded by smaller denominations.</li>
<li>D, L, and V can each only appear once.</li>
<li>Only M can be repeated 4 or more times.</li>
</ol>
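<p>For concreteness, rules like these can be checked mechanically. Below is a Python sketch using the widely seen regular expression for the conventional "strict form" of Roman numerals (an assumption on my part: that this pattern faithfully encodes rules 1&#8211;8 above, with <code>M*</code> reflecting rule 8, which lets only M repeat four or more times):</p>

```python
import re

# Conventional "strict form" pattern; note it also matches the empty string,
# so non-empty input is assumed.
STRICT = re.compile(r"^M*(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$")

for s in ["V", "IVI", "MMXXIV", "IIII"]:
    print(s, bool(STRICT.match(s)))
```

<p>Under this pattern, <code>V</code> and <code>MMXXIV</code> are accepted while <code>IVI</code> and <code>IIII</code> are rejected.</p>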
| hmakholm left over Monica | 14,366 | <p>"Roman numerals" presumably means how the actual Romans actually wrote down numbers.</p>
<p>They would never have written five as IVI, full stop.</p>
<p>If you're following a particular set of formal rules that are <em>not</em> "do things as the actual Romans would have done them", then follow those rules. Be prepared to get nonsense results if the rules you follow are sloppily phrased.</p>
|
4,042,741 | <p>I'm really struggling to understand the literal arithmetic being applied to find a complete residue system modulo <span class="math-container">$n$</span>. Below is the definition my textbook provides along with an example.</p>
<blockquote>
<p>Let <span class="math-container">$k$</span> and <span class="math-container">$n$</span> be natural numbers. A set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> is called a canonical complete residue system modulo <span class="math-container">$n$</span> if every integer is congruent modulo <span class="math-container">$n$</span> to exactly one element of the set</p>
</blockquote>
<p>I'm struggling to understand how to interpret this definition. Two integers, <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, are "congruent modulo <span class="math-container">$n$</span>" if they have the same remainder when divided by <span class="math-container">$n$</span>. So the set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> would be all integers that share a quotient with <span class="math-container">$b$</span> divided by <span class="math-container">$n$</span>?</p>
<p>After I understand the definition, this is a simple example provided by my textbook</p>
<blockquote>
<p>Find three residue systems modulo <span class="math-container">$4$</span>: the canonical complete residue system, one containing negative numbers, and one containing no two consecutive numbers</p>
</blockquote>
<p>My first point of confusion is "modulo <span class="math-container">$4$</span>". <span class="math-container">$a{\space}mod{\space}n$</span> is the remainder of Euclidean division of <span class="math-container">$a$</span> by <span class="math-container">$n$</span>. So what is meant by simply "modulo <span class="math-container">$4$</span>"? What literal arithmetic do I perform to find a complete residue system using "modulo <span class="math-container">$4$</span>"?</p>
| Bill Dubuque | 242 | <p>It may prove helpful to understand the general conceptual background that underlies this definition. Congruence is an <em>equivalence relation</em> (generalized equality) so it partitions the integers into equivalences classes, here <span class="math-container">$\,[a] = a + n\Bbb Z\,$</span> is the class of all integers <span class="math-container">$\equiv a\pmod{n}.\,$</span></p>
<p>Conceptually, modular arithmetic is actually working with these classes, e.g. the <a href="https://math.stackexchange.com/a/879262/242">congruence sum rule</a> <span class="math-container">$\,a'\equiv a,\,b'\equiv b\Rightarrow a'+b'\equiv a+b\,$</span> is equivalent to saying that <span class="math-container">$\,[a]+[b] = [a+b]\,$</span> is a well-defined addition on the classes. Ditto for multiplication, so these classes inherit the addition and multiplication from the integers. <a href="https://math.stackexchange.com/a/3563794/242">Further</a> they inherit the ring laws from <span class="math-container">$\,\Bbb Z,\,$</span> i.e. associative, commutative, and distributive laws. Thus the classes enjoy the same algebraic (ring) structure as the integers <span class="math-container">$\Bbb Z,\,$</span> so we can perform arithmetic on them just as we do with integers.</p>
<p>But when calculating it is often more convenient to work with numbers than sets (classes), so we often choose convenient representatives ("normal forms") for each class. A complete system of residues is precisely that. The most common choice here is the least natural in <span class="math-container">$[a],$</span> which is simply its remainder ("residue") <span class="math-container">$= a\bmod n,\,$</span> so this complete system of rep's <span class="math-container">$\bmod n\,$</span> is <span class="math-container">$\,\{0,1,\ldots, n-1\}.\,$</span> Another common choice are reps of least absolute value, e.g. <span class="math-container">$\{0,\pm1,\pm2\}\bmod 5$</span>.</p>
<p>It may help to consider the analogy with fractions. Here the congruence relation for fraction equivalence is <span class="math-container">$\,a/b \equiv c/d\iff ad = bc.\,$</span> and the common normal form reps are again the least terms reps, i.e. fractions with coprime numerator and denominator. The above sum rule is usually never proved at elementary level - which should make you ponder why you accepted its truth without proof. This will be rectified if you study fraction fields (or localizations) in abstract algebra (which also generalizes modular arithmetic in the study of general quotient (residue) rings, and congruences, and their <a href="https://math.stackexchange.com/a/815450/242">relationship</a>).</p>
<p><strong>Remark</strong> <span class="math-container">$ $</span> The choice of the remainder as a normal form rep <a href="https://math.stackexchange.com/a/3267856/242">also works</a> in polynomial rings over a field (e.g. this yields the complex numbers <span class="math-container">$\,\Bbb C \cong \Bbb R[x]/(x^2+1)\,$</span> as the complete system of reps of real polynomials mod <span class="math-container">$\,x^2+1),\,$</span> or in any domain with a (computable) Euclidean algorithm. But in general there may not be any (computable) natural way to choose such normal forms, so we may have no choice but to work with the <em>classes</em> themselves.</p>
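<p>As a computational illustration (Python; the helper name <code>is_complete_residue_system</code> is mine): a set of <span class="math-container">$n$</span> integers is a complete residue system modulo <span class="math-container">$n$</span> exactly when its members hit each remainder <span class="math-container">$0,1,\dots,n-1$</span> once, which covers all three systems asked for in the example:</p>

```python
def is_complete_residue_system(reps, n):
    """True when every integer is congruent mod n to exactly one rep."""
    return sorted(r % n for r in reps) == list(range(n))

print(is_complete_residue_system([0, 1, 2, 3], 4))    # canonical system
print(is_complete_residue_system([-2, -1, 0, 1], 4))  # one with negatives
print(is_complete_residue_system([0, 2, 5, 11], 4))   # no two consecutive
print(is_complete_residue_system([0, 1, 2, 4], 4))    # 4 is congruent to 0: not complete
```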
|
2,351,883 | <p>I am learning about tensor products.
In trying to understand the definitions, I seem to be running into
a contradiction.</p>
<p>Consider the differential form</p>
<p>$$
d x^{1} \wedge d x^{2} = d x^{1} \otimes d x^{2} - d x^{2} \otimes d x^{1}.
$$</p>
<p>If I use the symmetry property of the tensor product</p>
<p>$$
d x^{2} \otimes d x^{1} = d x^{1} \otimes d x^{2}
$$</p>
<p>I get 0! This is clearly wrong.
I think that I cannot change places inside the tensor product,
but I cannot justify why.
What is going on?</p>
| Community | -1 | <p>It is <em>not</em> true that $dx^1\otimes dx^2=dx^2\otimes dx^1$. You might want to check your book how $dx^1\otimes dx^2$ is <em>defined</em>. </p>
<p>Consider a vector space $V$ and $f,g$ being two linear functionals on $V$. Then for $u,v\in V$, one has
$$
(f\otimes g-g\otimes f)(u,v)=f(u)g(v)-g(u)f(v)
$$</p>
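<p>A quick numerical check (a Python sketch of my own) that the two tensor products really differ: take <span class="math-container">$dx^1, dx^2$</span> to be the coordinate functionals on <span class="math-container">$\mathbb R^2$</span> and evaluate on a sample pair of vectors.</p>

```python
# dx1, dx2 as the coordinate functionals on R^2
dx1 = lambda v: v[0]
dx2 = lambda v: v[1]

def tensor(f, g):
    """(f tensor g)(u, v) = f(u) * g(v)"""
    return lambda u, v: f(u) * g(v)

u, v = (1, 0), (0, 1)
print(tensor(dx1, dx2)(u, v))   # 1*1 = 1
print(tensor(dx2, dx1)(u, v))   # 0*0 = 0
# hence dx1 (x) dx2 != dx2 (x) dx1, and the wedge is nonzero on (u, v):
print(tensor(dx1, dx2)(u, v) - tensor(dx2, dx1)(u, v))   # 1
```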
|
1,403,486 | <p>As an introduction to multivariable calculus, I'm given a small introduction to some topological terminology and definitions. As the title says, I have to prove that <span class="math-container">$\{(x,y): x>0\}$</span> is connected. My tools for this are:</p>
<blockquote>
<p><strong>Definition 1</strong>: Two disjoint sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, neither empty, are said to be <strong>mutually separated</strong> if neither contains a boundary point of the other.</p>
<p><strong>Definition 2</strong>: A set is <strong>disconnected</strong> if it is the union of separated subsets.</p>
<p><strong>Definition 3</strong>: A set is <strong>connected</strong> if it is not disconnected.</p>
</blockquote>
<p>Because <strong>Definition 3</strong> is a negation, I'm encouraged to do this by contradiction. Suppose <span class="math-container">$\{(x,y): x>0\}$</span> is disconnected. Then it is the union of mutually separated sets. I don't know where to go from here or if there is a way to show directly that the set can't be expressed as the union of mutually separated sets directly. Any guidance would be appreciated.</p>
| James | 81,163 | <p>What you have written is the same as $X \subseteq Y$ and $Y\subseteq X$. Notice that $X \subseteq Y$ iff $\forall z(z\in X \rightarrow z\in Y)$, so $X\subseteq Y$ and $Y\subseteq X$ iff $\forall z(z\in X \rightarrow z\in Y \text{ and } z\in Y \rightarrow z\in X)$ iff $\forall z(z\in X \leftrightarrow z\in Y)$ iff $X=Y$ by definition.</p>
|
331,859 | <p>I need to find the antiderivative of
$$\int\sin^6x\cos^2x \,\mathrm{d}x.$$ I tried substituting $u$ for $\sin^2 x$ or $\cos^2 x$, but that doesn't work. I also tried using the identity $1-\cos^2 x = \sin^2 x$, and again, if I substitute $t = \sin^2 x$, I'm stuck with its derivative in the $dt$.</p>
<p>Can I be given a hint?</p>
| André Nicolas | 6,312 | <p><strong>Hint</strong> We can use double-angle identities to reduce powers. We could use $\cos 2t=2\cos^2 t-1$ and $\cos 2t=1-2\sin^2 t$. We end up with polynomial of degree $4$ in $\cos 2x$. Repeat the idea where needed. </p>
<p>It is more efficient in this case to use $\sin 2t=2\sin t\cos t$, that is, first rewrite our expression as $(\sin x\cos x)^2\sin^4 x$. Then rewrite as $\frac{1}{16}(\sin^2 2x)(1-\cos 2x)^2$. Expand the square. Not quite finished. But we end up having to integrate $\sin^2 2x$ (standard), $\sin^2 2x\cos 2x$ (simple substitution), and $\sin^2 2x\cos^2 2x$, a close relative of $\sin^2 4x$. </p>
<p><strong>Remark:</strong> In this problem, like in a number of trigonometric integrations, it is possible to end up with dramatically different-looking answers. They all differ by constants, so we are saved by the $+C$. </p>
|
331,859 | <p>I need to find the antiderivative of
$$\int\sin^6x\cos^2x \,\mathrm{d}x.$$ I tried substituting $u$ for $\sin^2 x$ or $\cos^2 x$, but that doesn't work. I also tried using the identity $1-\cos^2 x = \sin^2 x$, and again, if I substitute $t = \sin^2 x$, I'm stuck with its derivative in the $dt$.</p>
<p>Can I be given a hint?</p>
| lab bhattacharjee | 33,337 | <p>$$\int \sin^6x\cos^2xdx=\int \sin^6x(1-\sin^2x)dx=\int \sin^6xdx-\int \sin^8xdx$$
$$=I_6-I_8 \text{ where }I_n=\int\sin^nxdx$$</p>
<p>$$\text{Now, }I_{n+2}=\int\sin^{n+2}xdx=\int\sin^{n+1}x\cdot \sin xdx$$
$$=\sin^{n+1}x\int \sin xdx-\int\left(\frac{d \sin^{n+1}x}{dx} \int \sin xdx\right)dx$$ (using <a href="http://en.wikipedia.org/wiki/Integration_by_parts" rel="nofollow">Integration by parts</a>)
$$=\sin^{n+1}x(-\cos x)-\int(n+1) \sin^nx\cos x(-\cos x)dx$$
$$=-\sin^{n+1}x\cos x+(n+1)\int \sin^nx(1-\sin^2x)dx$$
$$=-\sin^{n+1}x\cos x+(n+1)(I_n-I_{n+2})$$
$$\implies I_{n+2}=-\frac{\sin^{n+1}x\cos x}{(n+2)}+\frac{n+1}{n+2}I_n$$</p>
<p>Now, $I_0=\int \sin^0xdx=\int dx=x$</p>
<p>Put $n=0,2,4,6$ respectively to get the values of $I_2,I_4,I_6,I_8$</p>
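<p>The recurrence can be sanity-checked numerically. The sketch below (Python, with a plain composite Simpson quadrature; all helper names are mine) uses the definite integrals over <span class="math-container">$[0,\pi/2]$</span>, where the boundary term <span class="math-container">$-\sin^{n+1}x\cos x/(n+2)$</span> vanishes at both endpoints, so the recurrence reduces to <span class="math-container">$I_{n+2}=\frac{n+1}{n+2}I_n$</span>:</p>

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

I = lambda p: simpson(lambda x: math.sin(x) ** p, 0, math.pi / 2)

# boundary term vanishes on [0, pi/2], so I_{n+2} = (n+1)/(n+2) * I_n; here n = 6:
print(abs(I(8) - 7 / 8 * I(6)) < 1e-9)   # True

# and the original splitting: integral of sin^6 x cos^2 x = I_6 - I_8
J = simpson(lambda x: math.sin(x) ** 6 * math.cos(x) ** 2, 0, math.pi / 2)
print(abs(J - (I(6) - I(8))) < 1e-9)     # True
```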
|
1,274,514 | <p>I want to prove Proposition <span class="math-container">$5.33$</span> in Rotman's <em>An Introduction to Homological Algebra</em>: let <span class="math-container">$I$</span> be a directed set, and let <span class="math-container">$\{A_i,\alpha_j^i\}$</span>, <span class="math-container">$\{B_i,\beta_j^i\}$</span>, and <span class="math-container">$\{C_i,\gamma_j^i\}$</span> be directed systems of left <span class="math-container">$R$</span>-modules over <span class="math-container">$I$</span>. If <span class="math-container">$r:\{A_i,\alpha_j^i\}\to\{B_i,\beta_j^i\}$</span> and <span class="math-container">$s:\{B_i,\beta_j^i\}\to\{C_i,\gamma_j^i\}$</span> are morphisms of direct systems, and if</p>
<blockquote>
<p><span class="math-container">$$0\to A_i\xrightarrow{r_i}B_i\xrightarrow{s_i}C_i\to0$$</span></p>
</blockquote>
<p>is exact for each <span class="math-container">$i\in I$</span>,then there is an exact sequence</p>
<blockquote>
<p><span class="math-container">$$0\to\varinjlim A_i\xrightarrow{r^\to}\varinjlim B_i\xrightarrow{s^\to }\varinjlim C_i\to0$$</span></p>
</blockquote>
<p>I am having trouble showing that <span class="math-container">$\ker{s^\to}\subset\operatorname{Im} r^\to$</span>.
Can you help me? Thanks.</p>
| Bernard | 202,857 | <p>Take an element $b\in\ker s^\to$, and an element $b_i$ in some $B_i$ such that $\beta_i(b_i)=b $. Then $\gamma_i(s_i(b_i))=0$, so that $\gamma_{ij}(s_i(b_i))=0$ in some $C_j\enspace(j\ge i)$.</p>
<p>As $\,\gamma_{ij}s_i=s_j\beta_{ij}$, this means $\,b_j=\beta_{ij}(b_i)\in\ker s_j$, so there exists $a_j\in A_j$ such that $b_j=r_j(a_j)$. Since $\beta_jr_j=r^\to\alpha_j$, we deduce
$$b=\beta_j(b_j)=r^\to(\alpha_j(a_j)),$$
which proves $b$ is the image of $a=\alpha_j(a_j)$ by $r^\to$.</p>
|
2,343,993 | <blockquote>
<p>Find the limit $$\lim_{n\to\infty}\left(\frac{n}{n+5}\right)^n.$$</p>
</blockquote>
<p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
| marty cohen | 13,079 | <p>It's easier to turn it over.</p>
<p>$\left(\frac{n+5}{n}\right)^n
=\left(1+\frac{5}{n}\right)^n
\to e^5
$
so
$\left(\frac{n}{n+5}\right)^n
\to e^{-5}
$.</p>
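<p>A quick numerical check of the claimed limit (Python):</p>

```python
import math

for n in (10**2, 10**4, 10**6):
    print(n, (n / (n + 5)) ** n)
print("e^-5 =", math.exp(-5))
```

<p>For growing $n$ the printed values decrease toward $e^{-5}\approx 0.0067$.</p>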
|
84,076 | <p>I think computation of the Euler characteristic of a real variety is not a problem in theory.</p>
<p>There are some nice papers like <em><a href="http://blms.oxfordjournals.org/content/22/6/547.abstract" rel="nofollow">J.W. Bruce, Euler characteristics of real varieties</a></em>.</p>
<p>But suppose we have, say, a very specific real nonsingular hypersurface, given by a polynomial, or a nice family of such hypersurfaces. What is the least cumbersome approach to computation of $\chi(V)$? One can surely count the critical points of an appropriate Morse function, but I hope it's not the only possible way.</p>
<p>(Since I am talking about dealing with specific examples, here's one:
$f (X_1,\ldots,X_n) = X_1^3 - X_1 + \cdots + X_n^3 - X_n = 0$, where $n$ is odd.)</p>
<p><strong>Update:</strong> the original motivation is the following: the well-known results by Oleĭnik, Petrovskiĭ, Milnor, and Thom give upper bounds on $\chi (V)$ or $b(V) = \sum_i b_i (V)$ that are exponential in $n$. It is easy to see that this is unavoidable, e.g. $(X_1^2 - X_1)^2 + \cdots + (X_n^2 - X_n)^2 = 0$ is an equation of degree $4$ that defines exactly $2^n$ isolated points in $\mathbb{R}^n$. I was interested in specific families of real algebraic sets with large $\chi (V)$ or $b (V)$ <em>defined by one equation of degree $3$</em>. I couldn't find an appropriate reference with such examples and it seems like a proof for such example would require some computations (unlike the case of degree $4$).</p>
| Ryan Budney | 1,465 | <p>Presumably the least-cumbersome approach will depend on the specific variety you need to work with. </p>
<p>In your case, I'd think of solving for $x_n$ in terms of $x_1,\cdots,x_{n-1}$. There's always at least one solution, and sometimes as many as three. This gives a fairly natural stratification of your variety and you can try to inductively compute the Euler characteristic of the variety as a union of subspaces. I think in your case the Euler characteristic is $3$ when $n=1$, $1$ for $n=2$ and $-1$ for $n=3$. </p>
<p>I'm just doing some quick computations by hand, so they're somewhat heuristic and not guaranteed to be accurate. I imagine a little more work and you could get the general picture, and if the pattern holds it appears that $\chi = 5-2n$. </p>
|
2,898,767 | <p>For $M_n (\mathbb{C})$, the vector space of all $n \times n $ complex matrices,</p>
<p>if $\langle A, X \rangle \ge 0$ for all $X \ge 0$ in $M_n(\mathbb{C})$, then $A \ge 0$.</p>
<p>which of the following define an inner product on $M_n(\mathbb{C})$?</p>
<p>$1)$$ \langle A, B\rangle = tr(A^*B)$</p>
<p>$2)$$ \langle A, B\rangle = tr(AB^*)$</p>
<p>$3)$$\langle A, B\rangle = tr(BA)$</p>
<p>Taken from Zhang linear algebra books page no .112.</p>
<p><a href="https://i.stack.imgur.com/72YYh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72YYh.png" alt="enter image description here"></a></p>
<p><strong>My attempts:</strong>
I read this Wikipedia article, but could not get any idea on how to clarify these options:</p>
<p><a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Inner_product_space</a></p>
<p>Any hints/solutions will be appreciated, thank you.</p>
| mathcounterexamples.net | 187,663 | <p>Usually, for $\alpha \notin \mathbb N$, mathematical packages only define $x \mapsto x^\alpha$ for $x \ge 0$.</p>
|
402,750 | <p>I read that a continuous random variable having an exponential distribution can be used to model inter-event arrival times in a Poisson process. Examples included the times when asteroids hit the earth, when earthquakes happen, when calls are received at a call center, etc. In all these examples, the expected value of the number of events per unit of time, lambda, is known and is constant over time. Moreover, each event's occurrence is independent of previous events' occurrences. And the exponential variable that models inter-event arrivals has the same lambda parameter as the Poisson variable that models the number of events.</p>
<p>And now, my problem: I don't get the connection between the intuition behind the exponential distribution and its pdf. It seems obvious that the more time passes by without an earthquake happening, the more likely it is that an earthquake will happen. Assume my understanding of lambda is correct, i.e., lambda is the rate at which the event happens, e.g., 5 earthquakes per minute on some remote, angry planet. On the pdf graph, the probability of generating a value between 0 and 1 is greater than the probability of generating a value between 4 and 5, for instance. How is this graph related to the fact that on average we have 5 earthquakes per minute?</p>
| Did | 6,179 | <blockquote>
<p>It seems obvious that the more time passes by without an earthquake happening, the more likely it is that an earthquake will happen.</p>
</blockquote>
<p>As explained in the comments, to use an exponential distribution is to assume the opposite: that whatever time passed by without an earthquake happening, still as likely as ever it is than an earthquake will happen. </p>
<p>So, if one thinks this rate should not be constant, how to model the time $X$ until the first earthquake happens? The hypothesis is that, for every $t\geqslant0$, when $s\to0^+$,
$$
P[X\leqslant t+s\mid X\geqslant t]=r(t)s+o(s),
$$
for some function $r$. Thus, the function $\bar F:t\mapsto P[X\geqslant t]$ is such that
$$
\bar F(t+s)=\bar F(t)\cdot(1-r(t)s+o(s)),
$$
for every $t\geqslant0$, that is,
$$
\bar F'(t)=-r(t)\bar F(t).
$$
Since $\bar F(0)=1$ by definition, one gets
$$
\bar F(t)=\exp\left(-\int_0^tr(u)\,\mathrm du\right),
$$
and finally, $X$ has PDF
$$
f(t)=\exp\left(-\int_0^tr(u)\,\mathrm du\right)r(t)\,\mathbf 1_{t\geqslant0}.
$$
One sees that <strong>every</strong> density $f$ can be written like that, since one recovers the instantaneous rate $r$ through the formula
$$
r(t)=\frac{f(t)}{\bar F(t)}=\frac{f(t)}{1-F(t)}.
$$</p>
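<p>As a concrete sketch (Python; I am choosing the constant-rate case <span class="math-container">$r(t)=\lambda$</span> purely for illustration), the general formula specializes to the exponential density <span class="math-container">$f(t)=\lambda e^{-\lambda t}$</span>, and the instantaneous rate is recovered as <span class="math-container">$f/(1-F)$</span>:</p>

```python
import math

lam = 5.0  # constant instantaneous rate r(t) = lambda

F    = lambda t: 1 - math.exp(-lam * t)    # CDF from the survival formula
f    = lambda t: lam * math.exp(-lam * t)  # resulting PDF
rate = lambda t: f(t) / (1 - F(t))         # recovered rate f/(1-F)

print(rate(0.1), rate(2.0))  # both equal lam: the rate never changes
# P[X <= t+s | X >= t] is approximately r(t)*s for small s:
t, s = 0.3, 1e-6
cond = (F(t + s) - F(t)) / (1 - F(t))
print(abs(cond - lam * s) < 1e-9)  # True
```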
|
2,426,535 | <p>In George F. Simmons's <em>Introduction to Topology and Modern Analysis</em> (page 98, problem 2), the problem is: <strong><em>Let $X$ be a topological space, let $Y$ be a metric space, and let $f:A\subset X\rightarrow Y$ be a continuous map. Then $f$ can be extended in at most one way to a continuous mapping of $\bar{A}$ into $Y$.</em></strong></p>
<p>I am trying to prove it this way. Let $x_0\in \bar{A}-A$ and suppose that there are two extensions $f$ and $g$ such that $f(x)=g(x)$ for $x\in A$. Now $f(x_0)\in \overline{f(A)}$ and $g(x_0)\in\overline {g(A)}$. So there exist sequences $\{f(x_n)\}$ and $\{g(y_n)\}$ that converge to $f(x_0)$ and $g(x_0)$ respectively, where $x_n$ and $y_n$ belong to $A$ for all $n$. Then I am stuck! Please help me complete the proof.</p>
| Henno Brandsma | 4,280 | <p>To follow your idea: Suppose we have $f_1$ and $f_2$ that are both continuous extensions of $f: A \to Y$ to $\overline{A}$. Let $p \in \overline{A}$ and so we have a net $a_i, i \in (I,\le)$ from $A$ such that $a_i \to p$.</p>
<p>The continuity of $f_1$ implies that $f_1(a_i) \to f_1(p)$.</p>
<p>The continuity of $f_2$ implies that $f_2(a_i) \to f_2(p)$ .</p>
<p>But the nets (as $f_1(a_i) = f_2(a_i) = f(a_i)$) are identical, so they must converge to the same point in the metric space $Y$ (limits of nets in a Hausdorff space, in particular in a metric space, are unique).</p>
<p>We conclude that $f_1(p) = f_2(p)$.
As this holds for all $p \in \overline{A}$, $f_1 = f_2$ on $\overline{A}$.</p>
|
3,130,040 | <p><strong>Edit:</strong> Please prove this without using Cauchy-Schwarz or Preimage. (Also, please show <em>how</em> a chosen <span class="math-container">$ \epsilon $</span> would work by proving that <span class="math-container">$B(\mathbf x, \epsilon)\subseteq\Omega $</span>)</p>
<p>I've been having trouble proving the following set is <strong>open</strong> without the use of Cauchy-Schwarz inequality.</p>
<p><span class="math-container">$$ \Omega = \{(x,y)\in \mathbb R^2:x+y\ne0 \} $$</span></p>
<p>Our definition of <strong>open</strong> is:</p>
<blockquote>
<p>A set <span class="math-container">$S $</span> is open <span class="math-container">$ \iff $</span> every point is an interior point (i.e. <span class="math-container">$ \forall \mathbf x\in S $</span> there exists <span class="math-container">$\epsilon >0 $</span> such that <span class="math-container">$ B(\mathbf x,\epsilon)\subseteq S) $</span></p>
</blockquote>
<p>I have seen a proof for this using Cauchy-Schwarz inequality however I am not very familiar with it so I would appreciate if someone could share a simpler way to prove this statement. (I am familiar with the regular triangle inequality <span class="math-container">$ ||\mathbf a+\mathbf b|| \le ||\mathbf a|| + \mathbf||\mathbf b || $</span>)</p>
<p>I'm also not very familiar with much of the terminology associated with metric spaces since I am only doing an introductory course to Analysis at the moment, so apologies if any part of this description is ambiguous or unclear.</p>
<p><strong>My Attempt</strong></p>
<p>Suppose <span class="math-container">$\mathbf x =(x,y)\in \Omega$</span>. We need to prove it is an interior point.</p>
<p>Geometrically, it is clear that taking the radius for the ball around <span class="math-container">$ \mathbf x $</span> as <span class="math-container">$ \epsilon=\frac{|x+y |}{\sqrt{2}} $</span> will work. (The perpendicular distance from the point to the line <span class="math-container">$ x+y=0 $</span>).</p>
<p>However to prove this analytically (in case I was working on a similar question where I did not have geometric intuition to help me), I would like to show that</p>
<p><span class="math-container">$$ B\bigg(\mathbf x, \frac{|x+y |}{\sqrt{2}}\bigg) \subseteq\Omega $$</span></p>
<p>So I need to show <span class="math-container">$ \forall \mathbf{a}\in B(\mathbf x,\epsilon), \mathbf{a}\in \Omega $</span>.</p>
<p>I'm stuck at this point. My only guess would be to incorporate <span class="math-container">$$ ||\mathbf x - \mathbf a || < \epsilon $$</span> at some point but honestly I have no idea.</p>
| Surb | 154,545 | <p>Why Cauchy schwartz should be used here ? (Btw, I don't really see how it can be used...). Anyway, </p>
<p><strong>Method 1:</strong>
<span class="math-container">$$\Omega ^c=\{(x,-x)\in\mathbb R^2\mid x\in \mathbb R\},$$</span></p>
<p>which is obviously sequentially closed, and thus closed.</p>
<p><strong>Method 2:</strong></p>
<p>The function <span class="math-container">$f(x,y)=x+y$</span> is "obviously" continuous on <span class="math-container">$\mathbb R^2$</span>. Moreover, <span class="math-container">$$\Omega =f^{-1}\big(\mathbb R\setminus \{0\}\big),$$</span>
and thus <span class="math-container">$\Omega $</span> is open.</p>
|
3,130,040 | <p><strong>Edit:</strong> Please prove this without using Cauchy-Schwarz or Preimage. (Also, please show <em>how</em> a chosen <span class="math-container">$ \epsilon $</span> would work by proving that <span class="math-container">$B(\mathbf x, \epsilon)\subseteq\Omega $</span>)</p>
<p>I've been having trouble proving the following set is <strong>open</strong> without the use of Cauchy-Schwarz inequality.</p>
<p><span class="math-container">$$ \Omega = \{(x,y)\in \mathbb R^2:x+y\ne0 \} $$</span></p>
<p>Our definition of <strong>open</strong> is:</p>
<blockquote>
<p>A set <span class="math-container">$S $</span> is open <span class="math-container">$ \iff $</span> every point is an interior point (i.e. <span class="math-container">$ \forall \mathbf x\in S $</span> there exists <span class="math-container">$\epsilon >0 $</span> such that <span class="math-container">$ B(\mathbf x,\epsilon)\subseteq S) $</span></p>
</blockquote>
<p>I have seen a proof for this using Cauchy-Schwarz inequality however I am not very familiar with it so I would appreciate if someone could share a simpler way to prove this statement. (I am familiar with the regular triangle inequality <span class="math-container">$ ||\mathbf a+\mathbf b|| \le ||\mathbf a|| + \mathbf||\mathbf b || $</span>)</p>
<p>I'm also not very familiar with much of the terminology associated with metric spaces since I am only doing an introductory course to Analysis at the moment, so apologies if any part of this description is ambiguous or unclear.</p>
<p><strong>My Attempt</strong></p>
<p>Suppose <span class="math-container">$\mathbf x =(x,y)\in \Omega$</span>. We need to prove it is an interior point.</p>
<p>Geometrically, it is clear that taking the radius for the ball around <span class="math-container">$ \mathbf x $</span> as <span class="math-container">$ \epsilon=\frac{|x+y |}{\sqrt{2}} $</span> will work. (The perpendicular distance from the point to the line <span class="math-container">$ x+y=0 $</span>).</p>
<p>However to prove this analytically (in case I was working on a similar question where I did not have geometric intuition to help me), I would like to show that</p>
<p><span class="math-container">$$ B\bigg(\mathbf x, \frac{|x+y |}{\sqrt{2}}\bigg) \subseteq\Omega $$</span></p>
<p>So I need to show <span class="math-container">$ \forall \mathbf{a}\in B(\mathbf x,\epsilon), \mathbf{a}\in \Omega $</span>.</p>
<p>I'm stuck at this point. My only guess would be to incorporate <span class="math-container">$$ ||\mathbf x - \mathbf a || < \epsilon $$</span> at some point but honestly I have no idea.</p>
| Carsten S | 90,962 | <p>You can make your life easier by choosing a smaller <span class="math-container">$\varepsilon$</span>. Let's take <span class="math-container">$\varepsilon=|x+y|/2$</span>. Now assume <span class="math-container">$\|(x,y)-(x',y')\|<\varepsilon$</span>. Then <span class="math-container">$|x-x'|<\varepsilon$</span> and <span class="math-container">$|y-y'|<\varepsilon$</span>. Now
<span class="math-container">$$
|x'+y'|\ge|x+y|-|(x+y)-(x'+y')|\ge|x+y|-|x-x'|-|y-y'|>|x+y|-2\varepsilon=0.
$$</span>
The first two inequalities follow from the triangle inequality, and the last one is strict because <span class="math-container">$|x-x'|$</span> and <span class="math-container">$|y-y'|$</span> are each strictly smaller than <span class="math-container">$\varepsilon$</span>. </p>
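<p>This choice of <span class="math-container">$\varepsilon$</span> can also be probed numerically (a Python sketch; the sample point and the random search are my own choices). Every sampled point of the open ball keeps <span class="math-container">$x'+y'$</span> on the same side of <span class="math-container">$0$</span>:</p>

```python
import math, random

random.seed(0)
x, y = 2.0, -1.25            # a point of the set: x + y = 0.75 != 0
eps = abs(x + y) / 2         # the smaller radius chosen above

ok = True
for _ in range(10_000):
    # random point of the open ball B((x, y), eps)
    r, th = eps * random.random(), 2 * math.pi * random.random()
    xp, yp = x + r * math.cos(th), y + r * math.sin(th)
    ok = ok and (xp + yp) * (x + y) > 0   # same sign, in particular nonzero
print(ok)   # True
```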
|
1,041,226 | <p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p>
<p>With the definition: ${n\choose m}= \left\{
\begin{array}{ll}
\frac{n!}{m!(n-m)!} & \textrm{for \(m\leq n\)} \\
0 & \textrm{for \(m>n\)}
\end{array}
\right.$</p>
<p>and $n,m\in\mathbb{N}$.</p>
<p>I'm not really used to calculations with factorials and can't make much sense from it...</p>
| albo | 22,610 | <p>This is the simplest answer:</p>
<p>$$\begin{align*}\begin{split}
{n\choose m-1}+{n\choose m} &= \frac{m}{m}\cdot\frac{n!}{(m-1)!(n-m+1)!}+\frac{(n+1-m)}{(n+1-m)}\cdot\frac{n!}{m!(n-m)!}\\
&=\frac{mn!}{(m)!(n-m+1)!}+\frac{(n+1-m)n!}{m!(n+1-m)!} \\
&=\frac{mn!+(n+1)n!-mn!}{(m)!(n-m+1)!}\\
&=\frac{(n+1)n!}{(m)!(n-m+1)!} \\
&=\frac{(n+1)!}{(m)!(n-m+1)!}
={n+1\choose m}\end{split}\end{align*}$$</p>
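<p>The identity is also easy to verify by brute force (Python, using the standard-library <code>math.comb</code>, available from Python 3.8) over a range of $n, m$:</p>

```python
from math import comb  # Python 3.8+

# Pascal's rule, checked for all 1 <= m <= n < 40:
print(all(comb(n, m - 1) + comb(n, m) == comb(n + 1, m)
          for n in range(1, 40) for m in range(1, n + 1)))   # True
```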
|
1,041,226 | <p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p>
<p>With the definition: ${n\choose m}= \left\{
\begin{array}{ll}
\frac{n!}{m!(n-m)!} & \textrm{for \(m\leq n\)} \\
0 & \textrm{for \(m>n\)}
\end{array}
\right.$</p>
<p>and $n,m\in\mathbb{N}$.</p>
<p>I'm not really used to calculations with factorials and can't make much sense from it...</p>
| erfan soheil | 195,909 | <p>Here is a combinatorial proof; maybe you will like it.
Suppose you have a set with $n+1$ elements and you want to choose $m$ of them. Split the set into two parts: a singleton containing one distinguished element, and the remaining set of $n$ elements. Any choice of $m$ elements then falls into exactly one of two cases:</p>
<p>either you choose $m-1$ elements from the $n$-element set together with the element of the singleton, or you choose all $m$ elements from the $n$-element set and nothing from the singleton. Hence ${n+1\choose m}={n\choose m-1}+{n\choose m}$.</p>
|
1,041,226 | <p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p>
<p>With the definition: ${n\choose m}= \left\{
\begin{array}{ll}
\frac{n!}{m!(n-m)!} & \textrm{for \(m\leq n\)} \\
0 & \textrm{for \(m>n\)}
\end{array}
\right.$</p>
<p>and $n,m\in\mathbb{N}$.</p>
<p>I'm not really used to calculations with factorials and can't make much sense from it...</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
\color{#c00000}{{\pars{1 + z}^{n + 1} \over z^{k + 1}}}&
={\pars{1 + z}\pars{1 + z}^{n} \over z^{k + 1}}
=\color{#c00000}{%
{\pars{1 + z}^{n} \over z^{k + 1}} + {\pars{1 + z}^{n} \over z^{k}}}
\end{align}</p>
<blockquote>
<p>Then,
\begin{align}
\sum_{m\ =\ 0}^{n + 1}{n + 1 \choose m}z^{m - k - 1}
&=\sum_{m\ =\ 0}^{n}{n \choose m}z^{m - k - 1}
+\sum_{m\ =\ 0}^{n}{n \choose m}z^{m - k}
\\[5mm]z^{-k - 1} + \sum_{m\ =\ 1}^{n + 1}{n + 1 \choose m}z^{m - k - 1}
&=z^{-k - 1} + \sum_{m\ =\ 1}^{n}{n \choose m}z^{m - k - 1}
+\sum_{m\ =\ 1}^{n + 1}{n \choose m - 1}z^{m - 1 - k}
\\[5mm]z^{n - k} + \sum_{m\ =\ 1}^{n}{n + 1 \choose m}z^{m - k - 1}
&=\sum_{m\ =\ 1}^{n}{n \choose m}z^{m - k - 1}
+\sum_{m\ =\ 1}^{n}{n \choose m - 1}z^{m - 1 - k} + z^{n - k}
\end{align}</p>
</blockquote>
<p>$$
\sum_{m\ =\ 1}^{n}\color{#c00000}{{n + 1 \choose m}}z^{m - k - 1}
=\sum_{m\ =\ 1}^{n}\bracks{\color{#c00000}{{n \choose m} + {n \choose m - 1}}}
z^{m - k - 1}
$$</p>
<blockquote>
<p>$$
\color{#66f}{\large{n + 1 \choose m}}
=\color{#66f}{\large{n \choose m} + {n \choose m - 1}}\,,\qquad 1\ \leq\ m\ \leq\ n
$$</p>
</blockquote>
|
137,571 | <p>As the title, if I have a list:</p>
<pre><code>{"", "", "", "2$70", ""}
</code></pre>
<p>I will expect:</p>
<pre><code>{"", "", "", "2$70", "2$70"}
</code></pre>
<p>If I have</p>
<pre><code>{"", "", "", "3$71", "", "2$72", ""}
</code></pre>
<p>then:</p>
<pre><code>{"", "", "", "3$71", "3$71", "2$72", "2$72"}
</code></pre>
<p>And </p>
<pre><code>{"", "", "", "3$71", "","", "2$72", ""}
</code></pre>
<p>should give </p>
<pre><code>{"", "", "", "3$71", "3$71", "", "2$72", "2$72"}
</code></pre>
<p>This is my try:</p>
<pre><code>{"", "", "", "2$70", ""} /. {p : Except["", String], ""} :> {p, p}
</code></pre>
<p>But I don't know why it doesn't work; my pattern-matching skills are poor. Can anybody give some advice?</p>
| Mr.Wizard | 121 | <p>As I presently interpret the question</p>
<p>(Now with refinements after considering Chris Degnen's simultaneous answer)</p>
<pre><code>fn[list_] :=
Partition[list, 2, 1, -1, ""] // Cases[{p_, ""} | {_, p_} :> p]
</code></pre>
<p>Test:</p>
<pre><code> {"", "x", "y", "", "z", ""} // fn
</code></pre>
<blockquote>
<pre><code>{"", "x", "y", "y", "z", "z"}
</code></pre>
</blockquote>
<h2>Patterns</h2>
<p>Since you seem only to be interested in a pattern-matching solution here is my proposal to <em>avoid</em> the extremely slow use of <a href="http://reference.wolfram.com/language/ref/ReplaceRepeated.html" rel="nofollow noreferrer"><code>ReplaceRepeated</code></a> while still using pattern-matching as the core of the operation.</p>
<pre><code>fn2[list_] :=
list // ReplacePart @
ReplaceList[list, {a___, p_, "", ___} :> (2 + Length@{a} -> p)]
</code></pre>
<h2>Recursive replacement</h2>
<p>I just realized that this is a perfect time to use a self-referential replacement:</p>
<pre><code>fn3[list_] :=
list /. {a___, p_, "", b___} :> {a, p, p, ##& @@ fn3@{b}}
</code></pre>
<h2>Benchmark</h2>
<p>All three methods are much faster than kglr's <code>foo</code> (note the log-log scale).</p>
<p>Now with Carl Woll's <code>fc</code>, the fastest method yet. ( but no patterns ;-) )</p>
<pre><code>Needs["GeneralUtilities`"]
$RecursionLimit = 1*^5;
BenchmarkPlot[{foo, fn, fn2, fn3, fc},
RandomChoice[{"", "", "", "a", "b"}, #] &
]
</code></pre>
<p><a href="https://i.stack.imgur.com/d1Uuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1Uuk.png" alt="enter image description here"></a></p>
|
51,096 | <p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
| Steven Stadnicki | 785 | <p>While all of the answers here are obviously excellent, I'll throw in one more: imagine the points of $\mathbb{N}^2$ drawn out as an infinite grid of cells (the positive quadrant of the plane, essentially); then fill the cell $(1,1)$ with $1$, the cells $(2,1)$ and $(1,2)$ with $2$ and $3$, and in general all of the cells $(n,1)$ through $(1,n)$ that sum to $n+1$ with the numbers from $\left({1\over 2}n(n-1)\right)+1$ to ${1\over 2}n(n+1)$. This provides an easy arithmetic pairing function $\langle\cdot,\cdot\rangle$ from $\mathbb{N}^2$ to $\mathbb{N}$, mapping $(i,j)$ to $\langle i,j\rangle = {1\over2}(i+j-2)(i+j-1)+i$ (and with explicitly definable inverse functions $j_0(n), j_1(n)$ such that $\langle j_0(n),j_1(n)\rangle = n$ for all $n$, though I won't write those out); $\mathbb{N}$ can then be partitioned into the sets $\mathbb{N}_k = \left\{\langle k,i\rangle: i\in\mathbb{N}\right\}$. This is the approach usually taken in recursion theory, in particular, where the explicit $\Delta_0$ definability of the pairing function and its inverses is important.</p>
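<p>The anti-diagonal pairing formula above (which sends $(1,1)\mapsto 1$, $(1,2)\mapsto 2$, $(2,1)\mapsto 3$, and so on) and its inverse can be sketched in Python; the function names <code>pair</code>/<code>unpair</code> are mine, chosen for illustration:</p>

```python
def pair(i, j):
    """Map (i, j) in N^2 (1-indexed) to n in N, filling anti-diagonals."""
    s = i + j - 2                      # index of the anti-diagonal
    return s * (s + 1) // 2 + i

def unpair(n):
    """Inverse: recover (i, j) with pair(i, j) == n."""
    s = 0
    while (s + 1) * (s + 2) // 2 < n:  # find the anti-diagonal containing n
        s += 1
    i = n - s * (s + 1) // 2
    return i, s + 2 - i

# the sets N_k = {pair(k, i) : i >= 1} are disjoint and cover N:
assert [pair(*p) for p in [(1, 1), (1, 2), (2, 1)]] == [1, 2, 3]
assert all(pair(*unpair(n)) == n for n in range(1, 200))
```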
|
184,361 | <p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p>
<blockquote>
<p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$.</p>
</blockquote>
<p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$</p>
<p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p>
<p>$$z\leq x+n<z+1$$</p>
<p>$$z'\leq x<z'+1$$</p>
<p>Then $$z'+n\leq x+n<z'+n+1$$</p>
<p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p>
<p>However, this doesn't seem to get me anywhere to prove that
$$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p>
<p>and, in general, that </p>
<p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p>
<p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, let alone how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p>
<p>Another property is
$$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p>
<p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p>
<p>$$n \leqslant x < n + 1 \Rightarrow - n - 1 < x \leqslant -n$$</p>
<p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$</p>
<p>$$ - n - 1 \leqslant -x < (-n-1)+1$$</p>
<p>So $[-x]=-[x]-1$</p>
| marty cohen | 13,079 | <p>Here is a complete answer
to the second question:</p>
<p>If $n$ is a positive integer
then
$\lfloor nx \rfloor = \sum_{k=0}^{n-1} \lfloor x+\frac{k}{n} \rfloor$.</p>
<p>Let
$m = \lfloor x \rfloor$
and
$d = x - m$,
so
$0 \le d < 1$.</p>
<p>Let
$j
=\lfloor nd \rfloor
$,
so
$0 \le j \le n-1
$
and
$\frac{j}{n}
\le d
< \frac{j+1}{n}
$.
If
$0 \le k \le n-j-1$,</p>
<p>$\begin{array}\\
m
&\le m+d+\frac{k}{n}\\
&< m+\frac{j+1}{n}+\frac{n-j-1}{n}\\
&= m+\frac{n}{n}\\
&= m+1\\
\end{array}
$</p>
<p>so
$\lfloor m+d+\frac{k}{n} \rfloor
=m
$.</p>
<p>If
$n-j \le k \le n-1$,</p>
<p>$\begin{array}\\
m+d+\frac{k}{n}
&\ge m+\frac{n-j}{n}+\frac{j}{n}\\
&= m+\frac{n}{n}\\
&= m+1\\
\end{array}
$</p>
<p>and</p>
<p>$\begin{array}\\
m+d+\frac{k}{n}
&\lt m+\frac{j+1}{n}+\frac{n-1}{n}\\
&= m+1+\frac{j}{n}\\
&\lt m+2\\
\end{array}
$</p>
<p>so
$\lfloor m+d+\frac{k}{n} \rfloor
=m+1
$.</p>
<p>Therefore</p>
<p>$\begin{array}\\
\sum_{k=0}^{n-1} \lfloor x+\frac{k}{n} \rfloor
&=\sum_{k=0}^{n-1} \lfloor m+d+\frac{k}{n} \rfloor\\
&= \sum_{k=0}^{n-j-1} \lfloor m+d+\frac{k}{n} \rfloor
+\sum_{k=n-j}^{n-1} \lfloor m+d+\frac{k}{n} \rfloor\\
&= (n-j)m+j(m+1)\\
&= nm+j\\
\end{array}
$</p>
<p>and
$\lfloor nx \rfloor
=\lfloor n(m+d) \rfloor
=nm+\lfloor nd \rfloor
=nm+j
$.</p>
<p>We are done.</p>
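<p>The identity just proved can be stress-tested numerically; a sketch using exact rationals (Python's <code>fractions.Fraction</code>, to avoid floating-point trouble at the jump points of the floor function):</p>

```python
from fractions import Fraction
from math import floor

def hermite_holds(x, n):
    """Check floor(n*x) == sum_{k=0}^{n-1} floor(x + k/n)."""
    lhs = floor(n * x)
    rhs = sum(floor(x + Fraction(k, n)) for k in range(n))
    return lhs == rhs

# exhaustive check on a grid of rationals, including negative x
assert all(
    hermite_holds(Fraction(p, q), n)
    for n in range(1, 8)
    for p in range(-30, 31)
    for q in range(1, 7)
)
```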
|
70,143 | <p>Is there a good way to find the fan and polytope of the blow-up of $\mathbb{P}^3$ along the union of two invariant intersecting lines?</p>
<p>Everything I find in the literature is for blow-ups along smooth invariant centers.</p>
<p>Thanks!</p>
| pinaki | 1,508 | <p>To find the polytope associated to a toric variety directly you have to realize the variety as the closure of a map from the torus. In this case at least, it is not too hard to get such a description. Let the homogeneous coordinates of $\mathbb{P}^3$ be $[w:x:y:z]$ and the two lines be $C_1 := \lbrace w = x = 0 \rbrace$ and $C_2 := \lbrace w = y = 0 \rbrace$. Then the blow-up $B$ of $\mathbb{P}^3$ along $C_1 \cup C_2$ is the closure of
<br>
$\lbrace ([w:x:y:z], [w^2:wx:wy: wz: xy]) : [w:x:y:z] \in \mathbb{P}^3 \setminus (C_1 \cup C_2) \rbrace$
<br>
in $\mathbb{P}^3 \times \mathbb{P}^4$. If we identify $\mathbb{C}^3$ with $\mathbb{P}^3 \setminus V(w)$, and write $X, Y, Z$ respectively for $x/w, y/w, z/w$, then $\mathbb{C}^3$ is embedded in $B$ via the map <br>
$(X, Y, Z) \mapsto ([1:X:Y:Z), (1: X: Y: Z: XY])$
<br>
Composing with the Segre embedding (and getting rid of duplicate coordinates), we get <br>
$(X, Y, Z) \mapsto [1: X : Y: Z: X^2: XY: XZ: Y^2: YZ: Z^2: X^2Y: XY^2: XYZ]$
<br>
Therefore the polytope is the convex hull of the exponents of these monomials. I believe its vertices are (0,0,0), (2,0,0), (0,2,0), (2, 1, 0), (1, 2, 0), (0, 0, 2) and (1,1,1). </p>
<p>PS: There is a detail to be filled: blow-ups along singular subvarieties are not in general normal, so a priori $B$ might not be a normal toric variety (i.e. the polytope is associated not to $B$ but the normalization of $B$). But as David shows in his answer (and probably proved for a general torus invariant subspaces in the article he mentions in the comment), $B$ is indeed normal.</p>
|
585,808 | <p>As part of showing that
$$
\sum_{n=1}^\infty \left|\sin\left(\frac{1}{n^2}\right)\right|
$$
converges, I ended up with trying to show that
$$
\left|\sin\left(\frac{1}{n^2}\right)\right|<\frac{1}{n^2}, \quad n=1, 2, 3,\dots
$$
since I know that the sum of the right hand side converges. But I can't show this. I've tried searching but I haven't been able to find anything.</p>
<p>What I've tried is that firstly, the absolute values are not needed since $\sin x>0$ if $0<x<1$. I rearranged a little bit:
$$
\sin\left(\frac{1}{n^2}\right)-\frac{1}{n^2}<0
$$
and the derivative is
$$
\frac{2}{n^3}\left(1-\cos\left(\frac{1}{n^2}\right)\right)> 0
$$
so my idea of showing that it is decreasing and negative for the first $n$ wouldn't work. </p>
<p>How can I show this? Help is appreciated. </p>
<p>Edit: Maybe I should add that I'm not <em>completely</em> sure it is true, but I tried it numerically and it seems like it. </p>
| JP McCarthy | 19,352 | <p>It's actually very straightforward to show that $\sin x\leq x$ for $0<x<1$ which is all you have to do...</p>
<p>You can do it by definition using a unit circle.</p>
<p>You can do it by noting that $\sin 0=0$ and $\cos x<1$ here (i.e. $\sin x$ grows slower than $x$ here).</p>
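<p>A quick numerical illustration (not a proof) of the two facts used here, namely $0<\sin x<x$ on $(0,1)$ and the resulting domination of the series by $\sum 1/n^2 < \pi^2/6$:</p>

```python
import math

# sin(x) < x on (0, 1): check on a fine grid
for i in range(1, 1000):
    x = i / 1000
    assert 0 < math.sin(x) < x

# hence the partial sums of |sin(1/n^2)| stay below sum 1/n^2 < pi^2/6
partial = sum(abs(math.sin(1 / n**2)) for n in range(1, 10_000))
assert partial < math.pi**2 / 6
```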
|
585,808 | <p>As part of showing that
$$
\sum_{n=1}^\infty \left|\sin\left(\frac{1}{n^2}\right)\right|
$$
converges, I ended up with trying to show that
$$
\left|\sin\left(\frac{1}{n^2}\right)\right|<\frac{1}{n^2}, \quad n=1, 2, 3,\dots
$$
since I know that the sum of the right hand side converges. But I can't show this. I've tried searching but I haven't been able to find anything.</p>
<p>What I've tried is that firstly, the absolute values are not needed since $\sin x>0$ if $0<x<1$. I rearranged a little bit:
$$
\sin\left(\frac{1}{n^2}\right)-\frac{1}{n^2}<0
$$
and the derivative is
$$
\frac{2}{n^3}\left(1-\cos\left(\frac{1}{n^2}\right)\right)> 0
$$
so my idea of showing that it is decreasing and negative for the first $n$ wouldn't work. </p>
<p>How can I show this? Help is appreciated. </p>
<p>Edit: Maybe I should add that I'm not <em>completely</em> sure it is true, but I tried it numerically and it seems like it. </p>
| Robert Israel | 8,508 | <p>Hint: $|\sin(t)| \le |t|$ for real $t$, with equality only at $0$. </p>
|
585,808 | <p>As part of showing that
$$
\sum_{n=1}^\infty \left|\sin\left(\frac{1}{n^2}\right)\right|
$$
converges, I ended up with trying to show that
$$
\left|\sin\left(\frac{1}{n^2}\right)\right|<\frac{1}{n^2}, \quad n=1, 2, 3,\dots
$$
since I know that the sum of the right hand side converges. But I can't show this. I've tried searching but I haven't been able to find anything.</p>
<p>What I've tried is that firstly, the absolute values are not needed since $\sin x>0$ if $0<x<1$. I rearranged a little bit:
$$
\sin\left(\frac{1}{n^2}\right)-\frac{1}{n^2}<0
$$
and the derivative is
$$
\frac{2}{n^3}\left(1-\cos\left(\frac{1}{n^2}\right)\right)> 0
$$
so my idea of showing that it is decreasing and negative for the first $n$ wouldn't work. </p>
<p>How can I show this? Help is appreciated. </p>
<p>Edit: Maybe I should add that I'm not <em>completely</em> sure it is true, but I tried it numerically and it seems like it. </p>
| ncmathsadist | 4,154 | <p>If $0 < x < \pi/2$, $\sin(x)$ is the $y$--coordinate of the point on the unit circle distance $x$ via the circle. That is the shortest distance of any path from the point $P$, $(\cos(x), \sin(x))$ to the $x$--axis. Since $x$ describes the length of a path from $P$ to the $x$--axis that is not a straight line, we have $\sin(x) < x$. Draw of picture of this to see it nicely.</p>
|
1,219,462 | <p>Proposition: Any polynomial of degree $n$ with leading coefficient $(-1)^n$ is the characteristic polynomial of some linear operator.</p>
<p>I do not want to construct an 'explicit matrix' corresponding to the polynomial $(-1)^n(\lambda_n x^n+\cdots+ \lambda_0)$. However, I want to use induction to prove the existence holds. However, I have no idea since generally the polynomial is not necessary reducible. Can anyone give any hint?</p>
| Stefano | 108,586 | <p>Hint: The direct proof is not hard. Try with a matrix with a lower diagonal identity and with your polynomial coefficients (in suitable order) in the last column.</p>
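<p>The matrix in the hint is the classical companion matrix. Below is a self-contained pure-Python sketch (all function names are mine) that builds it and checks that $\det(xI-A)$ really reproduces a prescribed monic polynomial; for a leading coefficient $(-1)^n$ as in the proposition, use $\det(A-xI)=(-1)^n\det(xI-A)$ instead:</p>

```python
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p)); q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def det(M):
    """Laplace expansion along the first row; entries are coefficient
    lists (lowest degree first)."""
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = poly_mul(M[0][j], det(minor))
        total = poly_add(total, term if j % 2 == 0 else [-t for t in term])
    return total

def charpoly_of_companion(c):
    """c = [c0, ..., c_{n-1}]: coefficients of p(x) = x^n + c_{n-1}x^{n-1} + ... + c0.
    Returns det(xI - A) for the companion matrix A, as a coefficient list."""
    n = len(c)
    A = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i + 1][i] = 1          # identity on the sub-diagonal
    for i in range(n):
        A[i][n - 1] = -c[i]      # coefficients (negated) in the last column
    xI_minus_A = [[[-A[r][col], 1 if r == col else 0] for col in range(n)]
                  for r in range(n)]
    p = det(xI_minus_A)
    while len(p) > n + 1:        # strip trailing zero coefficients
        assert p[-1] == 0
        p.pop()
    return p

# p(x) = x^3 - 2x + 5  ->  c = [5, -2, 0]
assert charpoly_of_companion([5, -2, 0]) == [5, -2, 0, 1]
```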
|
1,216,302 | <p>I am looking for some sequence of random variables $(X_n)$ such that </p>
<p>$$ \lim_{n \rightarrow \infty} E(X_n^2) = 0 $$</p>
<p>but such that the following <strong>almost sure</strong> convergence does <strong>NOT</strong> hold:</p>
<p>$$ \frac{S_n - E(S_n)}{n} \rightarrow 0$$</p>
<p>where the $S_n$ are the partial sums of the $X_n$.</p>
<p>Note: for any such sequence the convergence in probability will <em>always</em> hold; if the random variables are not correlated, so will the convergence almost surely. In particular, any counterexample must consist of correlated random variables.</p>
<p>Many thanks for your help.</p>
| Davide Giraudo | 9,849 | <p>Let $\left(Y_N\right)_{N\geqslant 1}$ be an i.i.d. sequence such that $\Pr\left(Y_N=1\right)=\Pr\left(Y_N=-1\right)=1/(2N)$ and $\Pr\left(Y_N=0\right)=1-1/N$. Let $\left(n_k\right)_{k\geqslant 1}$ be a strictly increasing sequence of integers that will be specified later. For $n_N\leqslant i\leqslant n_{N+1}-1$, define $X_i:=Y_N$. Since each random variable $X_i$ is centered, it suffices to prove that $S_n/n$ does not converge to $0$ almost surely. Observe that
$$
S_{n_l}=\sum_{N=1}^{l-1}\sum_{i=n_N}^{n_{N+1}-1}X_i=\sum_{N=1}^{l-1} \left(n_{N+1}-n_N\right)Y_N,
$$
hence, replacing $l$ by $l+1$ and dividing by $n_{l+1}$,
$$
\frac{S_{n_{l+1}}}{n_{l+1}}=\left(1-\frac{n_l}{n_{l+1}}\right)Y_l+n_{l+1}^{-1}\sum_{N=1}^{l-1} \left(n_{N+1}-n_N\right)Y_N
$$
and thus (accounting that $\left\lvert Y_i\right\rvert\leqslant 1$, so the last sum is at most $n_l$ in absolute value),
$$
\left\lvert \frac{S_{n_{l+1}}}{n_{l+1}}\right\rvert \geqslant \left\lvert Y_l\right\rvert-2\frac{n_l}{n_{l+1}}.
$$
If we choose the sequence $\left(n_l\right)_{l\geqslant 1}$ such that $n_l/n_{l+1}\to 0$, then
$$
\limsup_{l\to +\infty}\left\lvert \frac{S_{n_{l+1}}}{n_{l+1}}\right\rvert \geqslant \limsup_{l\to +\infty}\left\lvert Y_l\right\rvert.
$$
Using the so-called second Borel-Cantelli lemma (the events $\left\{\left\lvert Y_l\right\rvert=1\right\}$ are independent and their probabilities $1/l$ sum to $+\infty$), one can show that $\limsup_{l\to +\infty}\left\lvert Y_l\right\rvert=1$ almost surely, hence $S_n/n$ does not converge to $0$ almost surely.</p>
<p>Remark that $\left(Y_l\right)_{l\geqslant 1}$ could be replaced by any sequence of centered random variables with converges to $0$ in $\mathbb L^p$ but not almost surely. This shows in particular that having $\left\lVert X_n\right\rVert_p\to 0$ for all $p$ is not sufficient for the strong law of large numbers.</p>
|
3,035,228 | <p>Show that two cardioids <span class="math-container">$r=a(1+\cos\theta)$</span> and <span class="math-container">$r=a(1-\cos\theta)$</span> are at right angles.</p>
<hr>
<p><span class="math-container">$\frac{dr}{d\theta}=-a\sin\theta$</span> for the first curve and <span class="math-container">$\frac{dr}{d\theta}=a\sin\theta$</span> for the second curve but i dont know how to prove them perpendicular.</p>
| Dylan | 135,643 | <p>Since <span class="math-container">$(x,y)=(r\cos\theta,r\sin\theta)$</span>, the 2 curves can be given parametrically as</p>
<p><span class="math-container">\begin{align}
\vec{r}_1 &= \big(a(1+\cos\theta)\cos\theta,a(1+\cos\theta)\sin\theta\big) \\
\vec{r}_2 &= \big(a(1-\cos\theta)\cos\theta,a(1-\cos\theta)\sin\theta\big)
\end{align}</span></p>
<p>Their tangent vectors are</p>
<p><span class="math-container">\begin{align}
\vec{r}_1' &= \big(a(-\sin\theta - \sin2\theta),a(\cos\theta + \cos2\theta) \big) \\
\vec{r}_2' &= \big(a(-\sin\theta + \sin2\theta),a(\cos\theta - \cos2\theta) \big)
\end{align}</span></p>
<p>Then
<span class="math-container">$$ \vec{r}_1' \cdot \vec{r}_2' = a^2(\sin^2\theta - \sin^2 2\theta) + a^2(\cos^2\theta - \cos^2 2\theta) = 0 $$</span></p>
<p>i.e. <span class="math-container">$\vec{r}_1'\perp \vec{r}_2' $</span></p>
<hr>
<p>Alternatively, you can use</p>
<p><span class="math-container">$$ \frac{dy}{dx} = \frac{\frac{dx}{d\theta}}{\frac{dy}{d\theta}} = \frac{\frac{dr}{d\theta}\cos\theta - r\sin\theta}{\frac{dr}{d\theta}\sin\theta + r\cos\theta} $$</span></p>
<p>to compute the tangent slopes that way</p>
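<p>A quick float sanity check (not a proof) of the dot-product computation: the tangent vectors below are the ones derived above, with an arbitrary scale $a$. The dot product vanishes identically in $\theta$, and in particular at $\theta=\pi/2$ and $\theta=3\pi/2$, where the two cardioids actually meet at the same point:</p>

```python
import math

a = 1.7  # arbitrary positive scale

def t1(t):  # tangent vector of r = a(1 + cos t)
    return (a * (-math.sin(t) - math.sin(2 * t)),
            a * (math.cos(t) + math.cos(2 * t)))

def t2(t):  # tangent vector of r = a(1 - cos t)
    return (a * (-math.sin(t) + math.sin(2 * t)),
            a * (math.cos(t) - math.cos(2 * t)))

for t in [math.pi / 2, 3 * math.pi / 2, 0.3, 2.1]:
    ux, uy = t1(t)
    vx, vy = t2(t)
    assert abs(ux * vx + uy * vy) < 1e-12   # perpendicular up to rounding
```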
|
345,766 | <p>I'm trying to calculate this limit expression:</p>
<p>$$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$</p>
<p>Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me:</p>
<p>$$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$</p>
<p>but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. </p>
<p>Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously). </p>
| Taladris | 70,123 | <p>The derivative (with respect to $x$) of $a^x$ is not $x a^{x-1}$ but $ln(a)a^x$, since $a^x=e^{x ln(a)}$.</p>
<p>You can solve your problem by noticing that $\displaystyle \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}=1- \frac{1}{1 +ab + (ab)^2 + ... (ab)^s}$.</p>
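<p>A numerical illustration of the hint, assuming $0 < ab < 1$ so that the geometric sum $S$ converges to $1/(1-ab)$ and the ratio tends to $ab$:</p>

```python
def ratio(ab, s):
    """(ab + ab^2 + ... + ab^s) / (1 + ab + ... + ab^s), i.e. 1 - 1/S."""
    denom = sum(ab ** i for i in range(s + 1))
    return (denom - 1) / denom

for ab in [0.1, 0.5, 0.9]:
    # S -> 1/(1 - ab), so 1 - 1/S -> ab
    assert abs(ratio(ab, 2000) - ab) < 1e-9
```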
|
4,636,101 | <p>Given a curve <span class="math-container">$y = x^3-x^4$</span>, how can I find the equation of the line in the form <span class="math-container">$y=mx+b$</span> that is tangent to only two distinct points on the curve?</p>
<p>The problem given is part of the Madas Special Paper Set. This paper set, seems to not have any answers as Madas himself only released the answers on request. Sadly, Madas passed away this year and the contact information has been removed. So I am here to ask how to solve this question. I have tried to create some simultaneous equations but I cannot get to a resolute answer. I have also tried to differentiate and find some equation for the lines gradient but have come to no success: Please help! <a href="https://i.stack.imgur.com/2CsfY.png" rel="nofollow noreferrer">Question Here</a></p>
| GReyes | 633,848 | <p>If you call <span class="math-container">$f(x)=x^3-x^4$</span>, and you consider the function
<span class="math-container">$$
F(x_1,x_2)=\frac{f(x_1)-f(x_2)}{x_1-x_2}
$$</span>
giving you the slope of the line joining two points on the graph, you can easily see that in the optimal situation (when the line is tangent), if you fix <span class="math-container">$x_1$</span>, the function has a critical point w.r.t.<span class="math-container">$x_2$</span> and vice versa. In one case it is a local max in the other it is a local min. So basically we are trying to find a saddle point for <span class="math-container">$F$</span>.</p>
<p>You have the optimality conditions
<span class="math-container">$$
\frac{\partial F}{\partial x_1}=2x_1+x_2-3x_1^2-2x_1x_2-x_2^2=0;
$$</span>
<span class="math-container">$$
\frac{\partial F}{\partial x_2}=2x_2+x_1-3x_2^2-2x_1x_2-x_1^2=0.
$$</span>
Disregarding the situation <span class="math-container">$x_1=x_2$</span> (corresponding to the degenerate case) and subtracting the above equations you find
<span class="math-container">$$
x_1+x_2=\frac 1{2}
$$</span>
which, when plugged into the first optimality condition, gives a quadratic equation for <span class="math-container">$x_2$</span>:
<span class="math-container">$$
\frac 1{4}+x_2-2x_2^2=0
$$</span>
with solutions <span class="math-container">$x_2=(1\pm\sqrt{3})/4$</span>
Taking into account the symmetry of your equations, you conclude that the points you are looking for have <span class="math-container">$x$</span>-coordinates
<span class="math-container">$$
\frac{1+\sqrt{3}}{4};\qquad \frac{1-\sqrt{3}}{4}
$$</span>
I did the computations on a napkin, make sure that everything is correct.</p>
|
4,636,101 | <p>Given a curve <span class="math-container">$y = x^3-x^4$</span>, how can I find the equation of the line in the form <span class="math-container">$y=mx+b$</span> that is tangent to only two distinct points on the curve?</p>
<p>The problem given is part of the Madas Special Paper Set. This paper set, seems to not have any answers as Madas himself only released the answers on request. Sadly, Madas passed away this year and the contact information has been removed. So I am here to ask how to solve this question. I have tried to create some simultaneous equations but I cannot get to a resolute answer. I have also tried to differentiate and find some equation for the lines gradient but have come to no success: Please help! <a href="https://i.stack.imgur.com/2CsfY.png" rel="nofollow noreferrer">Question Here</a></p>
| MathWonk | 301,562 | <p>If the graph of an unknown linear function <span class="math-container">$L(x)= ax+b$</span> grazes that of the given monic quartic <span class="math-container">$Q_4(x)= x^4 - x^3$</span> from below, as in your picture, then their difference <span class="math-container">$Q_4(x)- L(x)$</span> vanishes to second order at the two points of tangency, hence must have two double roots, thus factor as</p>
<p>(1) <span class="math-container">$Q_4(x)- L(x)= (Q_2(x)^2)$</span> where</p>
<p>(2) <span class="math-container">$Q_2(x) =(x-r_1)(x-r_2)$</span> is a monic quadratic. Note that the quadratic <span class="math-container">$Q_2(x)$</span> is symmetric (even) with respect to a line midway between its roots. Thus it can be written as</p>
<p>(3) <span class="math-container">$Q_2(x)= (x-m)^2 -c^2$</span> where <span class="math-container">$x=m$</span> is the midpoint.</p>
<p>Differentiate (1) twice to eliminate <span class="math-container">$L(x)$</span> and deduce</p>
<p>(4) <span class="math-container">$Q_4''(x) = 2 [Q_2(x) Q_2''(x) + (Q_2')^2]$</span> and note that this quadratic is also even with respect to <span class="math-container">$x=m$</span>. (It is the sum of products of expressions that have this even symmetry.)</p>
<p>The left side of (4) is a known function of <span class="math-container">$x$</span>, quadratic in <span class="math-container">$x$</span>. Explicitly, in this problem <span class="math-container">$Q_4''(X)= 12 x^2 - 6 x =x(12x-6)$</span>.
Finding its line of symmetry (mid-point of roots) determines <span class="math-container">$m=1/4$</span>. Once <span class="math-container">$m$</span> is known, solve for <span class="math-container">$c$</span> by substituting (3) into (4) evaluated at <span class="math-container">$x=m=1/4$</span>. Deduce <span class="math-container">$c=\pm \sqrt{3}/4$</span>.
Note that this determines <span class="math-container">$Q_2(x)$</span> completely: <span class="math-container">$Q_2(x)= (x-\frac{1}{4})^2 -\frac{3}{16}$</span>.</p>
<p><strong>Additional response to OP's comment.</strong>
The line <span class="math-container">$L$</span> is the line that passes through the two points on the graph of the quartic that have as their <span class="math-container">$x$</span> coordinates the values <span class="math-container">$x_1=r_1= m-c$</span> and <span class="math-container">$x_2= r_2= m+c$</span>. It can also be written simply as <span class="math-container">$L(x)= Q_4(x)- Q_2(x)^2$</span>.</p>
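<p>The tangency abscissae found above, $x_{1,2}=(1\pm\sqrt3)/4$, can be cross-checked numerically: the chord of $y=x^3-x^4$ through those two points must have the same slope as the curve at <em>both</em> points, and $x^4-x^3+mx+b$ must be the perfect square $Q_2(x)^2$ with $Q_2(x)=(x-\tfrac14)^2-\tfrac3{16}$. A float sketch:</p>

```python
import math

f  = lambda x: x**3 - x**4
fp = lambda x: 3 * x**2 - 4 * x**3   # derivative

x1 = (1 - math.sqrt(3)) / 4
x2 = (1 + math.sqrt(3)) / 4

slope = (f(x2) - f(x1)) / (x2 - x1)
# the chord slope equals the curve's slope at BOTH points (double tangency)
assert abs(slope - fp(x1)) < 1e-12
assert abs(slope - fp(x2)) < 1e-12

b = f(x1) - slope * x1
# equivalently, x^4 - x^3 + slope*x + b == ((x - 1/4)^2 - 3/16)^2
q = lambda x: (x - 0.25) ** 2 - 3 / 16
for x in [-1.0, -0.3, 0.2, 0.8, 1.5]:
    assert abs((x**4 - x**3 + slope * x + b) - q(x) ** 2) < 1e-9
```

<p>The check also recovers the tangent line itself, $y = x/8 + 1/64$.</p>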
|
3,357,502 | <p>I'm aware that <span class="math-container">$D(n)$</span> can be calculated in O(sqrt(n)) time. Can <span class="math-container">$ D(n, k) $</span> also be calculated in O(sqrt(n)) time? What's the best algorithm?</p>
<p>For example, if <span class="math-container">$n = 8$</span> and <span class="math-container">$k = 3$</span>, then <span class="math-container">$ D(n, k) = \lfloor{\frac{8}{1}}\rfloor + \lfloor{\frac{8}{2}}\rfloor + \lfloor{\frac{8}{3}}\rfloor $</span></p>
| reuns | 276,986 | <p>You meant <span class="math-container">$$\sum_{a,b, ab\le n, a \le k}1 = \sum_{a \le k} \left\lfloor \frac{n}{a} \right\rfloor$$</span></p>
<p>Then yes it can be computed in <span class="math-container">$O(\sqrt{n})$</span> operations because <span class="math-container">$$\left\{\left\lfloor \frac{n}{a} \right\rfloor, a \ge 1\right\}$$</span>
contains at most <span class="math-container">$2 \sqrt{n}+1$</span> elements and </p>
<p><span class="math-container">$\left\lfloor \frac{n}{a} \right\rfloor = c \iff \frac{n}{a} = c+r, r \in [0,1)$</span></p>
<p><span class="math-container">$\iff n= ca+ra, a =\frac{n}{c+r}$</span></p>
<p><span class="math-container">$\iff a \in (\frac{n}{c+1},\frac{n}{c}]$</span> so <span class="math-container">$\left\lfloor \frac{n}{c}\right\rfloor-\left\lfloor \frac{n}{c+1}\right\rfloor$</span> many choices for <span class="math-container">$a$</span></p>
<p>This way we obtain <span class="math-container">$$ \sum_{a \le k} \left\lfloor \frac{n}{a} \right\rfloor = \sum_{a =1}^{ \min(\lfloor \sqrt{n}\rfloor,k)} \left\lfloor \frac{n}{a} \right\rfloor+ \sum_{c=\lfloor n/k\rfloor}^{\lfloor n/\lfloor \sqrt{n}\rfloor\rfloor-1} c\left(\min\left(k,\left\lfloor \frac{n}{c}\right\rfloor\right)-\min\left(k,\left\lfloor \frac{n}{c+1}\right\rfloor\right)\right)$$</span> where each value <span class="math-container">$c$</span> is weighted by the number of admissible <span class="math-container">$a$</span>, hence the factor <span class="math-container">$c$</span>.</p>
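<p>A Python sketch of the resulting $O(\sqrt n)$ algorithm, written in the standard block-iteration form (walk over maximal blocks on which $\lfloor n/a\rfloor$ is constant), checked against the brute force:</p>

```python
def D_brute(n, k):
    return sum(n // a for a in range(1, k + 1))

def D_fast(n, k):
    """Compute sum_{a=1}^{k} floor(n/a) in O(sqrt(n)) block steps."""
    total, a = 0, 1
    while a <= k:
        c = n // a
        if c == 0:                 # all remaining terms are zero
            break
        a_hi = min(k, n // c)      # last a with floor(n/a) == c
        total += c * (a_hi - a + 1)
        a = a_hi + 1
    return total

assert D_fast(8, 3) == 8 + 4 + 2 == 14   # the example from the question
for n in range(1, 200):
    for k in (1, 2, 3, n // 2 + 1, n, n + 5):
        assert D_fast(n, k) == D_brute(n, k)
```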
|
2,188,965 | <p>Can someone explain to me how this step is done? I got a different answer from the one the solution gives.</p>
<p>Simplify $x(y+z)(\bar{x} + y)(\bar{y} + x + z)$</p>
<p>what the solution got </p>
<p>$x(y+z)(\bar{x} + y)(\bar{y} + x + z)$ = $x(y + z\bar{x})(\bar{y} + x + z)$ (using the distributive law)</p>
<p>What I got</p>
<p>$x(y+z)(\bar{x} + y)(\bar{y} + x + z)$ = $x(y\bar{x} + y + z\bar{x} + zy)(\bar{y} + x + z)$ (using the distributive law)</p>
| Shraddheya Shendre | 384,307 | <p>Hint : $a + ab + ac = a(1+b+c) = a$ (absorption, since $1 + x = 1$) and $a+a=a$.<br>
Do you see the '$b$' and '$c$' (and also the '$a$') in your case? </p>
|
4,202,490 | <p>Trying to construct an example for a Business Calculus class (meaning trig functions are not necessary for the curriculum). However, I want to touch on the limit problem involved with the <span class="math-container">$\sin(1/x)$</span> function.</p>
<p>I am sure there is a simple function, or there isn't... But would love some insight.</p>
<p>I also understand that the functions that satisfy this condition are maybe way outside the scope of the course. I'm just looking for different "flavors" of showing that limits don't exist, besides just showing that the limits from the left and from the right disagree.</p>
| hardmath | 3,111 | <p>Consider applying the <a href="https://en.wikipedia.org/wiki/Fractional_part" rel="nofollow noreferrer">fractional part function</a> to <span class="math-container">$1/x^2$</span> or something similar. This would be an even function, so the behavior from the left is the same as the behavior from the right of zero, but neither limit from above or below exists because of oscillations.</p>
<p>Note that defining the fractional part function on negative numbers is done differently by various authors, so that's another reason to introduce <span class="math-container">$1/x^2$</span> and avoid that ambiguity.</p>
<p>I wouldn't call this an <em>algebraic function</em>, although the notions of integer part and fractional part of positive reals numbers should be pretty intuitive for your "Business Calculus" students. The fractional part function is not continuous, so it is scarcely surprising when limits involving it fail to exist.</p>
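<p>A quick numeric illustration that $g(x)=\operatorname{frac}(1/x^2)$ has no limit at $0$: along $x_j = 2^{-j}\to 0$ the values are exactly $0$ (chosen as powers of two so the floats are exact), while along $x_n=(n+\tfrac12)^{-1/2}\to 0$ they are essentially $\tfrac12$:</p>

```python
import math

frac = lambda t: t - math.floor(t)   # fractional part
g = lambda x: frac(1 / x**2)

# x_j = 2^-j -> 0, and g(x_j) = frac(4^j) = 0 exactly in floating point
assert all(g(2.0 ** -j) == 0.0 for j in range(1, 20))

# x_n = (n + 1/2)^(-1/2) -> 0, and g(x_n) is 1/2 up to rounding
assert all(abs(g((n + 0.5) ** -0.5) - 0.5) < 1e-9 for n in range(1, 1000))
```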
|
3,553,233 | <p>I am having difficulty finishing this proof. At first, the proof is easy enough. Here's what I have thus far:<br>
Because <span class="math-container">$5 \nmid n$</span>, we know <span class="math-container">$\exists q \in \mathbb{Z}$</span> such that <span class="math-container">$$n = 5q + r$$</span> where <span class="math-container">$0 < r < 4$</span>. Note <span class="math-container">$r \neq 0$</span> because if <span class="math-container">$r = 0$</span>, then <span class="math-container">$5 \mid n$</span>. Also note that <span class="math-container">$n^2 = 25q^2 + 10qr + r^2$</span>. Then we have four cases: when <span class="math-container">$r=1$</span>, <span class="math-container">$r=2$</span>, <span class="math-container">$r = 3$</span>, and <span class="math-container">$r = 4$</span>. This is where I run into difficulty. In each of these cases, we can prove that either <span class="math-container">$n^2 = 5k + 1$</span> or <span class="math-container">$n^2 = 5k - 1$</span> for some integer <span class="math-container">$k$</span>, but I cannot see how to prove both for each case. Any ideas? </p>
<p>As a side note on how I went to prove each case, I simply plugged <span class="math-container">$r$</span> into the formula <span class="math-container">$n^2 = 25q^2 + 10qr + r^2$</span>. This results in <span class="math-container">$n^2 = 25q^2 + 10q + 1$</span>.
Continuing, we get <span class="math-container">$n^2 = 5(5q^2 + 2q) + 1$</span>, and because <span class="math-container">$5q^2 + 2q$</span> is still an integer, this is of the form <span class="math-container">$n^2 = 5k + 1$</span> for some integer <span class="math-container">$k$</span>. But I cannot find how to make the 1 a negative to prove both cases.</p>
| fleablood | 280,126 | <p>Just do it.</p>
<p><span class="math-container">$n^2 = 25q^2 + 10qr + r^2 = 5(5q^2 + 2qr) + r^2$</span>.</p>
<p>ANd <span class="math-container">$r^2 = 1,4=5-1, 9 =10-1$</span> or <span class="math-container">$16=15+1$</span>.</p>
<p>So if <span class="math-container">$n = 5q + 1$</span> then <span class="math-container">$n^2 = 5(5q^2 + 2q) + 1$</span></p>
<p>If <span class="math-container">$n=5q+2$</span> then <span class="math-container">$n^2 = 5(5q^2 + 4q)+4 = 5(5q^2+4q + 1) -1$</span></p>
<p>If <span class="math-container">$n = 5q+3$</span> then <span class="math-container">$n^2= 5(5q^2 + 6q) + 9 = 5(5q^2 + 6q + 2) -1$</span></p>
<p>And if <span class="math-container">$n = 5q+4$</span> then <span class="math-container">$n^2 = 5(5q^2 + 8q) + 16=5(5q^2 + 8q + 3) + 1$</span>.</p>
<p>That's it.</p>
|
3,553,233 | <p>I am having difficulty finishing this proof. At first, the proof is easy enough. Here's what I have thus far:<br>
Because <span class="math-container">$5 \nmid n$</span>, we know <span class="math-container">$\exists q \in \mathbb{Z}$</span> such that <span class="math-container">$$n = 5q + r$$</span> where <span class="math-container">$0 < r < 4$</span>. Note <span class="math-container">$r \neq 0$</span> because if <span class="math-container">$r = 0$</span>, then <span class="math-container">$5 \mid n$</span>. Also note that <span class="math-container">$n^2 = 25q^2 + 10qr + r^2$</span>. Then we have four cases: when <span class="math-container">$r=1$</span>, <span class="math-container">$r=2$</span>, <span class="math-container">$r = 3$</span>, and <span class="math-container">$r = 4$</span>. This is where I run into difficulty. In each of these cases, we can prove that either <span class="math-container">$n^2 = 5k + 1$</span> or <span class="math-container">$n^2 = 5k - 1$</span> for some integer <span class="math-container">$k$</span>, but I cannot see how to prove both for each case. Any ideas? </p>
<p>As a side note on how I went to prove each case, I simply plugged <span class="math-container">$r$</span> into the formula <span class="math-container">$n^2 = 25q^2 + 10qr + r^2$</span>. This results in <span class="math-container">$n^2 = 25q^2 + 10q + 1$</span>.
Continuing, we get <span class="math-container">$n^2 = 5(5q^2 + 2q) + 1$</span>, and because <span class="math-container">$5q^2 + 2q$</span> is still an integer, this is of the form <span class="math-container">$n^2 = 5k + 1$</span> for some integer <span class="math-container">$k$</span>. But I cannot find how to make the 1 a negative to prove both cases.</p>
| Robert Lewis | 67,071 | <p>To keep things concise I have stretched the notation a bit, but I think my meaning is clear:</p>
<p>In the language of congruence, </p>
<p><span class="math-container">$5 \not \mid n \Longrightarrow n \not \equiv 0 \mod 5; \tag 1$</span></p>
<p>then one of the following holds:</p>
<p><span class="math-container">$n \equiv 1, 2, 3, 4 \mod 5, \tag 2$</span></p>
<p>whence</p>
<p><span class="math-container">$n^2 \equiv 1, 4, 9, 16 \mod 5 \equiv 1, 4, 4, 1 \mod 5; \tag 3$</span></p>
<p>but </p>
<p><span class="math-container">$4 \equiv -1 \mod 5, \tag 4$</span></p>
<p>so</p>
<p><span class="math-container">$n^2 \equiv \pm 1 \mod 5 \Longrightarrow n^2 \pm 1 \equiv 0 \mod 5$</span>
<span class="math-container">$\Longrightarrow \exists k \in \Bbb Z, \; n^2 \pm 1 = 5k \Longrightarrow \exists k \in \Bbb Z, \;n^2 = 5k \pm 1. \tag 5$</span></p>
|
1,334,680 | <p>How to apply principle of inclusion-exclusion to this problem?</p>
<blockquote>
<p>Eight people enter an elevator at the first floor. The elevator
discharges passengers on each successive floor until it empties on the
fifth floor. How many different ways can this happen</p>
</blockquote>
<p>The people are <strong>distinguishable</strong>.</p>
| Giorgio Mossa | 11,888 | <p>I remember when I faced this problem in my early days as a student.
Here's a (hopefully) simple explanation, the result of some years of study.</p>
<p>One of the first thing to understand is that set theory is not just a family of axioms, it is a formal system that is specified by</p>
<ul>
<li>a language, that is the set of well formed formulas</li>
<li>a set of axioms, that is a subset of the well formed formulas</li>
<li>a set of inference rules, that can be thought as (meta)operations on formulas that allow to build inductively the set of theorems for the theory.</li>
</ul>
<p>To be fair one could also treat axioms as inference rules (with no hypothesis) and so regard set theory as a logic-system of its own.
In order to present this system you don't need any notion of first-order logic (you don't need to know what is an interpretation or a model for a theory, you don't need to know what a theory is).</p>
<p>So shortly <strong>set theory is a logic of its own</strong>. In order to use this system (this logic) you don't need to know what first-order logic is. The only thing you need to know is how to use the inference rules to build recursively the theorems of the theory, or if you prefer, you need to know how to build proofs.</p>
<p>This situation is similar to arithmetic, where you don't need to know equational logic (that is, the logic underlying equational theories) to do calculations: you can simply use the computational rules (which can be seen as inference rules) to carry out your calculations (the proofs) in a mechanical way.</p>
<p>So from this perspective it should be clear that mathematical logic (intended as the study of formal systems) does not come first of set theory (when it is regarded as a foundational theory).</p>
<p>On the other hand mathematical logic is a <em>mathematical theory</em> of formal systems. It aims to study and to prove abstract properties of these formal systems, it doesn't simply use them.
In order to develop a theory of this kind one can proceed in two possible ways:</p>
<ul>
<li>either giving an axiomatic theory of formal systems: that is a (meta) formal system whose language is able to express properties of the formal systems, whose axioms express basic properties that these systems should have and whose inference rules allows one to prove any statement that should hold for these formal systems</li>
<li>or defining what a formal system should be in a meta/foundational theory (for instance set theory) and then use the axioms and inference rules of the (meta)theory to prove, from the given definitions, the properties that these formal systems have. </li>
</ul>
<p>Since our minds are really used to thinking in terms of collections, and since set theory is (or at least should be) the formal theory of collections, the second approach to mathematical logic is more appealing, and with this choice mathematical logic in some sense becomes second nature to set theory. </p>
<p>I hope this helps, if you need any clarification feel free to ask in the comments.</p>
|
3,209,237 | <p>The proof of the CRT goes as follows:<br>
Given the number <span class="math-container">$x \in \mathbb{Z}_m$</span>, <span class="math-container">$m=m_1m_2...m_k$</span>
<span class="math-container">$$M_k = m/m_k$$</span>
construct:
<span class="math-container">$$ x = a_1M_1y_1+a_2M_2y_2+...+a_kM_ky_k$$</span>
where <span class="math-container">$y_k$</span> is the particular inverse of <span class="math-container">$M_k\ mod\ m_k$</span>
<span class="math-container">$$\Rightarrow x\equiv a_kM_ky_k\equiv a_k(mod\ m_k)$$</span></p>
<p>What I don't understand is:<br>
how is <span class="math-container">$x\equiv a_1M_1y_1+a_2M_2y_2+...+a_kM_ky_k$</span> and this lies in <span class="math-container">$mod\ m$</span>? Is this because there is some rule in modular arithmetic for adding two numbers in two different mod worlds like: <span class="math-container">$(c \bmod d) + (e \bmod f) = (c+e) \bmod (df)$</span>? As far as I know, there isn't one like that. And how does the addition of these items all in a different mod world provide the solution for <span class="math-container">$x$</span>?</p>
| mathmaniage | 379,585 | <p>The part 2 of the chinese remainder theorem, <a href="https://forthright48.com/chinese-remainder-theorem-part-1-coprime-moduli/" rel="nofollow noreferrer">which starts off at this page and continues to the next</a>, explains the concept of lcm required to understand the OP's question and the concept why the solution exists in mod m, which is actually only a way of finding the minimum solution.</p>
|
1,072,656 | <p>I am building a website which will run on the equations specified below. I am in pre-algebra and do not have any idea how to go about them. My friends say it is a system of equations, but I don't know how to solve those, and no one I know seems to know how to do them with exponents. I was hoping that people on this site could tell me the answer to the problem. If you could explain how to do it, not just for me but for people in my situation, that might make the question better for everyone else looking into it. Thank you for your answer!</p>
<p>Here is the system of equations:</p>
<p>$xy^5=8000$</p>
<p>$(xy^4)-(xy^3)=5000$</p>
| Alice Ryhl | 132,791 | <p>Multiply the second equation by $y^2$; then you get</p>
<p>$$xy^5 \cdot y-xy^5=5000y^2$$</p>
<p>Substitute $xy^5=8000$</p>
<p>$$8000y-8000=5000y^2$$</p>
<p>Divide with $1000$ and move all terms to left side</p>
<p>$$5y^2-8y+8=0$$</p>
<p>Since $d=b^2-4ac=(-8)^2-4\cdot5\cdot8=-96$ which is negative, there are no solutions.</p>
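<p>A quick numeric check of the conclusion (an added sketch, not part of the original answer): the discriminant of $5y^2-8y+8$ is indeed negative, so there is no real solution.</p>

```python
# Discriminant of 5y^2 - 8y + 8 = 0, the quadratic the substitution leads to.
a, b, c = 5, -8, 8
d = b * b - 4 * a * c
print(d)  # negative, so no real y exists
```
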
|
69,208 | <p>Consider $f:\{1,\dots,n\} \to \{1,\dots,m\}$ with $m > n$.
Let $\operatorname{Im}(f) = \{f(x)|x \in \{1,\dots,n\}\}$.</p>
<p>a.)
What is the probability that a random function will be a bijection when viewed as
$$f':\{1,\dots,n\} \to \operatorname{Im}(f)?$$</p>
<p>b.)
How many different functions $f$ are there for which $$\sum_{i=1}^n f(i)=k?$$
Not sure where to start and have no idea what $\operatorname{Im}(f)$ means. Please help me get started on this.</p>
| Lubin | 17,760 | <p>Why not look at the <em>formal series</em> for sine and cosine, $s(x)$ and $c(x)$, exactly the series you’ve written, and notice that $s'=c$, $c'=-s$, and differentiate the formal expression $s^2+c^2$ to get zero. Reflect on that, and realize that you’ve just shown that the formal series $s^2+c^2$ is constant (not for calculus reasons, but because a nonconstant formal series has nonzero derivative). And the constant in question is clearly $1$.</p>
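<p>A small numeric illustration of this idea (my own added sketch; the truncation length and sample point are arbitrary): with enough terms of the two series, $s(x)^2+c(x)^2$ is numerically indistinguishable from the constant $1$.</p>

```python
import math

def s(x, terms=20):
    # truncated sine series: sum of (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(terms))

def c(x, terms=20):
    # truncated cosine series: sum of (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(terms))

val = s(1.3) ** 2 + c(1.3) ** 2
print(val)
```
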
|
2,992,454 | <p>Prove :</p>
<blockquote>
<p><span class="math-container">$f : (a,b) \to \mathbb{R} $</span> is convex, then <span class="math-container">$f$</span> is bounded on every closed subinterval of <span class="math-container">$(a,b)$</span></p>
</blockquote>
<p>where <span class="math-container">$f$</span> is convex if <span class="math-container">$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y), \forall x,y \in (a,b), \forall \lambda \in [0,1]$</span></p>
<hr>
<h2>Try</h2>
<h3><span class="math-container">$f$</span> is bounded above</h3>
<p>Let <span class="math-container">$J = [\alpha, \beta] \subset (a,b)$</span>. </p>
<p><span class="math-container">$\forall c \in [\alpha, \beta]$</span>, <span class="math-container">$\exists \lambda_0 \in [0,1]$</span> s.t. <span class="math-container">$c = \lambda_0 \alpha + (1-\lambda_0) \beta$</span>, and</p>
<p><span class="math-container">$$
f(c) = f(\lambda_0 \alpha + (1-\lambda_0) \beta) \le \lambda_0 f(\alpha) + (1-\lambda_0) f(\beta)
$$</span></p>
<p>thus, <span class="math-container">$f$</span> is bounded above on <span class="math-container">$[\alpha, \beta]$</span> by <span class="math-container">$\max\{f(\alpha), f(\beta)\}$</span></p>
<p>But I'm stuck at how I should proceed to prove that <span class="math-container">$f$</span> is bounded below.</p>
| Hagen von Eitzen | 39,174 | <p>Pick <span class="math-container">$\gamma$</span> with <span class="math-container">$\alpha<\gamma<\beta$</span>. Let <span class="math-container">$A=(\alpha,f(\alpha))$</span>, <span class="math-container">$B=(\beta,f(\beta))$</span>, <span class="math-container">$C=(\gamma,f(\gamma))$</span> Then</p>
<ul>
<li>on <span class="math-container">$[\alpha,\beta]$</span>, <span class="math-container">$f$</span> is below the line <span class="math-container">$AB$</span></li>
<li>on <span class="math-container">$[\alpha,\gamma]$</span>, <span class="math-container">$f$</span> is above the line <span class="math-container">$CB$</span></li>
<li>on <span class="math-container">$[\gamma,\beta]$</span>, <span class="math-container">$f$</span> is above the line <span class="math-container">$AC$</span></li>
</ul>
<p>("below and "above" are meant to include "on" here)<a href="https://i.stack.imgur.com/Zo3oY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zo3oY.png" alt="enter image description here"></a></p>
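<p>These three chord inequalities are easy to test numerically for a concrete convex function (an added sketch, not part of the original answer; here $f=\exp$ on $[\alpha,\beta]=[0,1]$ with $\gamma=\tfrac12$):</p>

```python
import math

f = math.exp                       # a sample convex function
alpha, gamma, beta = 0.0, 0.5, 1.0

def line(p, q, x):
    # the line through (p, f(p)) and (q, f(q))
    return f(p) + (f(q) - f(p)) / (q - p) * (x - p)

xs = [i / 100 for i in range(101)]
below_AB = all(f(x) <= line(alpha, beta, x) + 1e-12 for x in xs)
above_CB = all(f(x) >= line(gamma, beta, x) - 1e-12 for x in xs if x <= gamma)
above_AC = all(f(x) >= line(alpha, gamma, x) - 1e-12 for x in xs if x >= gamma)
print(below_AB, above_CB, above_AC)
```
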
|
1,480,511 | <p>I have included a screenshot of the problem I am working on for context. T is a transformation. What is meant by Tf? Is it equivalent to T(f) or does it mean T times f?</p>
<p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
| marty cohen | 13,079 | <p>This is getting really old.</p>
<p>Copy and paste from
another answer of mine.</p>
<p>If $n$ is a positive integer that is
not a square of an integer,
then $\sqrt{n}$ is irrational.</p>
<p>Let $k$ be such that
$k^2 < n < (k+1)^2$.
Suppose $\sqrt{n}$ is rational.
Then there is a smallest positive integer $q$ such that
$\sqrt{n} = p/q$.</p>
<p>Then $\sqrt{n} = \sqrt{n}\frac{\sqrt{n}-k}{\sqrt{n}-k}
= \frac{n-k\sqrt{n}}{\sqrt{n}-k}
= \frac{n-kp/q}{p/q-k}
= \frac{nq-kp}{p-kq}
$.</p>
<p>Since $k < \sqrt{n} < k+1$,
$k < p/q < k+1$,
or $kq < p < (k+1)q$,
so $0 < p-kq < q$.
We have thus found a representation of
$\sqrt{n}$ with a smaller denominator,
which contradicts the specification of $q$.</p>
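<p>The two facts driving the descent can be checked mechanically (an added sketch, not part of the original answer; the sample fraction $p/q$ is arbitrary):</p>

```python
import math
from fractions import Fraction

checks = []
for n in [2, 3, 5, 7, 10, 13]:      # non-square n
    k = math.isqrt(n)               # k^2 < n < (k+1)^2
    r = math.sqrt(n)
    # sqrt(n) = (n - k*sqrt(n)) / (sqrt(n) - k) is an algebraic identity
    checks.append(abs((n - k * r) / (r - k) - r) < 1e-9)
    # for a sample fraction p/q with k < p/q < k+1, we get 0 < p - k*q < q,
    # so the new representation (n*q - k*p)/(p - k*q) has a smaller denominator
    p, q = 7 * k + 4, 7
    checks.append(k < Fraction(p, q) < k + 1 and 0 < p - k * q < q)
print(all(checks))
```
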
|
3,436,430 | <p>Evaluate <span class="math-container">$$\int_{-2}^{2}\int_{-\sqrt{4-x^2}}^{\sqrt{4-x^2}}\int_{2-\sqrt{4-x^2-y^2}}^{2+\sqrt{4-x^2-y^2}}(x^2+y^2+z^2)^{3/2} \; dz \; dy \; dx$$</span> by converting to spherical coordinates.<br>
We know that <span class="math-container">$(x^2+y^2+z^2)^{3/2} = (\rho^2)^{3/2} = \rho^3$</span>. The range of <span class="math-container">$y$</span> tells us that we have a disk of radius <span class="math-container">$2$</span>. How can we use this to find the limits of each integral and ultimately find the solution? Thank you for the help. </p>
| Mnifldz | 210,719 | <p>The first thing to notice here is exactly what volume you're integrating inside. The volume you're integrating is the sphere of radius <span class="math-container">$2$</span> but centered at the point <span class="math-container">$(0,0,2)$</span>. If we converted directly to spherical coordinates for this volume, our limits will indeed be very tricky. What I would recommend instead is to translate your coordinate system from <span class="math-container">$(x,y,z)$</span> to <span class="math-container">$(x',y',z')$</span> where
<span class="math-container">$$
x'=x \hspace{2pc} y'=y \hspace{2pc} z' = z-2.
$$</span></p>
<p>In this case your differentials all stay the same, but instead what changes is your integrand. Instead it changes to:</p>
<p><span class="math-container">$$
\left (x'^2 + y'^2 + (z'+2)^2 \right )^{3/2} \;\; =\;\; \left (x'^2 + y'^2 + z'^2 + 4z' + 4 \right )^{3/2}.
$$</span></p>
<p>With this transformation we find that the limits of integration will transform to
<span class="math-container">$$
\int_{-2}^2 \int_{-\sqrt{4-x’^2}}^{\sqrt{4-x’^2}} \int_{-\sqrt{4-x’^2-y’^2}}^{\sqrt{4-x’^2-y’^2}} \left (x'^2 + y'^2 + z'^2 + 4z' + 4 \right )^{3/2} dz'dy'dx'.
$$</span></p>
<p>At least here you get to integrate over a sphere centered at the origin. Your integrand will transform as
<span class="math-container">$$
\left (x'^2 + y'^2 + z'^2 + 4z' + 4 \right )^{3/2} dz'dy'dx' \;\; \to \;\; \left (\rho^2 + 4\rho\cos\theta + 4\right )^{3/2}\rho^2\sin\theta d\rho d\theta d\phi.
$$</span></p>
<p>How to integrate this expression is an added difficulty, but at least the bounds are reasonable.</p>
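<p>As a quick sanity check of the integrand transformation (an added sketch, not part of the original answer), the two expressions agree at an arbitrary sample point:</p>

```python
import math

rho, theta, phi = 1.7, 0.9, 2.3          # arbitrary spherical sample point
x = rho * math.sin(theta) * math.cos(phi)
y = rho * math.sin(theta) * math.sin(phi)
z = rho * math.cos(theta)
lhs = (x * x + y * y + z * z + 4 * z + 4) ** 1.5
rhs = (rho * rho + 4 * rho * math.cos(theta) + 4) ** 1.5
diff = abs(lhs - rhs)
print(diff)
```
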
|
93,458 | <blockquote>
<p>Let <span class="math-container">$n$</span> be a nonnegative integer. Show that <span class="math-container">$\lfloor (2+\sqrt{3})^n \rfloor $</span> is odd and that <span class="math-container">$2^{n+1}$</span> divides <span class="math-container">$\lfloor (1+\sqrt{3})^{2n} \rfloor+1 $</span>.</p>
</blockquote>
<p>My attempt:</p>
<p><span class="math-container">$$ u_{n}=(2+\sqrt{3})^n+(2-\sqrt{3})^n=\sum_{k=0}^n{n \choose k}2^{n-k}(3^{k/2}+(-1)^k3^{k/2})\in\mathbb{2N} $$</span></p>
<p><span class="math-container">$$ 0< (2-\sqrt{3})^n <1 \quad (n \ge 1)$$</span></p>
<p><span class="math-container">$$ (2+\sqrt{3})^n< u_{n}< 1+(2+\sqrt{3})^n $$</span></p>
<p><span class="math-container">$$ u_{n}-1< (2+\sqrt{3})^n < u_{n} $$</span></p>
<p><span class="math-container">$$ \lfloor (2+\sqrt{3})^n \rfloor=u_{n}-1\in\mathbb{2N}+1 $$</span></p>
| marty cohen | 13,079 | <p><em>For the first claim:</em></p>
<p>Here is a more general discussion.
As is often the case,
none of this is original,
and I did work it out
on my own for fun.</p>
<p>To make one root of
<span class="math-container">$f(x)
=x^2-2ax+b
=0$</span>
with
<span class="math-container">$a, b > 0$</span>
in <span class="math-container">$(0, 1)$</span>.</p>
<p>The roots are
<span class="math-container">$x_{\pm}
=\dfrac{2a\pm\sqrt{4a^2-4b}}{2}
=a\pm\sqrt{a^2-b}
$</span>.</p>
<p><span class="math-container">$\begin{array}\\
x_{-}
&=a-\sqrt{a^2-b}\\
&=(a-\sqrt{a^2-b})\dfrac{a+\sqrt{a^2-b}}{a+\sqrt{a^2-b}}\\
&=\dfrac{a^2-(a^2-b)}{a+\sqrt{a^2-b}}\\
&=\dfrac{b}{a+\sqrt{a^2-b}}\\
\end{array}
$</span></p>
<p>So we want
<span class="math-container">$0 < b < a+\sqrt{a^2-b}
$</span>
so that
<span class="math-container">$ b < a^2$</span>
and
<span class="math-container">$(b-a)^2 < a^2-b
$</span>
or
<span class="math-container">$b^2-2ab+a^2 < a^2-b
$</span>
or
<span class="math-container">$b^2+b < 2ab
$</span>
or
<span class="math-container">$b< 2a-1$</span>.</p>
<p>If <span class="math-container">$a=2$</span> then
<span class="math-container">$b < 3$</span>
so, if <span class="math-container">$b$</span> is an integer,
<span class="math-container">$b \in \{1, 2\}
$</span>
so
<span class="math-container">$a^2-b
\in \{3, 2\}
$</span>
so
<span class="math-container">$\{x_-, x_+\}
=2\pm\sqrt{3},
2\pm\sqrt{2}
$</span>.</p>
<p>Let <span class="math-container">$c = a^2-b$</span>.</p>
<p><span class="math-container">$\begin{array}\\
x_+^n+x_-^n
&=(a+\sqrt{c})^n+(a-\sqrt{c})^n\\
&=\sum_{k=0}^n \binom{n}{k}a^kc^{(n-k)/2}+\sum_{k=0}^n \binom{n}{k}a^k(-1)^{n-k}c^{(n-k)/2}\\
&=\sum_{k=0}^n \binom{n}{k}a^kc^{(n-k)/2}(1+(-1)^{n-k})\\
&=\sum_{k=0}^n \binom{n}{k}a^{n-k}c^{k/2}(1+(-1)^{k})\\
&=\sum_{k=0}^{\lfloor n/2 \rfloor}2\binom{n}{2k}a^{n-2k}c^{k}\\
&=2\sum_{k=0}^{\lfloor n/2 \rfloor}\binom{n}{2k}a^{n-2k}c^{k}\\
\end{array}
$</span></p>
<p>Since
<span class="math-container">$0 < x_-^n < 1$</span>,
<span class="math-container">$\lfloor x_+^n \rfloor$</span>
is odd and
<span class="math-container">$\lfloor x_+^n\rfloor
=2\sum_{k=0}^{\lfloor n/2 \rfloor}\binom{n}{2k}a^{n-2k}c^{k}-1,
$</span>
so</p>
so</p>
<p><span class="math-container">$\begin{array}\\
x_+^n-\lfloor x_+^n \rfloor
&=x_+^n-2\sum_{k=0}^{\lfloor n/2 \rfloor}\binom{n}{2k}a^{n-2k}c^{k}+1\\
&=1-x_-^n\\
&\to_- 1\\
\end{array}
$</span></p>
<p>If <span class="math-container">$a=2, b=1$</span>
then
<span class="math-container">$x_+^n-\lfloor x_+^n \rfloor
=1-(2-\sqrt{3})^n
=1-\dfrac{1}{(2+\sqrt{3})^n}
$</span>.</p>
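<p>For the concrete case $a=2$, $b=1$ (so $x_+=2+\sqrt3$), the oddness of $\lfloor x_+^n\rfloor$ can be confirmed in exact integer arithmetic (an added sketch, not part of the original answer; it tracks $(2+\sqrt3)^n = A+B\sqrt3$ with integers $A,B$):</p>

```python
import math

A, B = 1, 0                                # (2 + sqrt 3)^0 = 1 + 0*sqrt(3)
parities = []
for n in range(1, 51):
    A, B = 2 * A + 3 * B, A + 2 * B        # multiply (A + B*sqrt 3) by (2 + sqrt 3)
    floor_val = A + math.isqrt(3 * B * B)  # exact floor(A + B*sqrt 3)
    parities.append(floor_val % 2)
print(set(parities))                       # only odd values occur
```
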
|
2,451,469 | <p>I have a system of differential equations:</p>
<p>$$x(t)'' + a \cdot x(t)' = j(t)$$
$$j(t)' = -b \cdot j(t) - x(t)' + u(t)$$</p>
<p>The task is: Substitute $v(t) = x(t)'$ into the system and rewrite the system as 3 coupled linear
differential equations of the same form (with
$y(t) = x(t)$ the solution sought), with the time-dependent vector function $x(t)=(x(t), v(t), j(t))$.</p>
<p>Write down the system matrix $A$, and the vectors $b$ and $d$ explicitly.</p>
<p>Can anyone guide me on how to write the 3 equations?</p>
| alexjo | 103,399 | <p>$$\begin{align}
&x(t)'' + a \cdot x(t)' = j(t)\\
&j(t)' = -b \cdot j(t) - x(t)' + u(t)
\end{align}
$$</p>
<p>Putting $v(t) = x(t)'$ we have
$$\begin{align}
v-x'&=0\\
v' + a v - j&=0\\
j' +b j + v - u&=0
\end{align}
$$
that is
$$
\begin{bmatrix}
x'\\
v'\\
j'
\end{bmatrix}=\begin{bmatrix}
0& 1& 0\\
0 & -a& 1\\
0 &-1 &-b
\end{bmatrix}\begin{bmatrix}
x\\
v\\
j
\end{bmatrix}+\begin{bmatrix}
0\\
0\\
u
\end{bmatrix}
$$</p>
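<p>A quick numeric check (an added sketch; the sample values are arbitrary) that this matrix form reproduces the three scalar equations:</p>

```python
# Sample parameters and state; the forcing u enters only the j' row.
a, b = 2.0, 3.0
x, v, j, u = 1.1, -0.4, 0.7, 0.2
A = [[0, 1, 0],
     [0, -a, 1],
     [0, -1, -b]]
state = [x, v, j]
d = [sum(A[i][k] * state[k] for k in range(3)) for i in range(3)]
d[2] += u
ok = (abs(d[0] - v) < 1e-12                      # x' = v
      and abs(d[1] - (j - a * v)) < 1e-12        # v' + a v = j
      and abs(d[2] - (-b * j - v + u)) < 1e-12)  # j' = -b j - v + u
print(ok)
```
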
|
3,460 | <p>I asked the question "<a href="https://mathoverflow.net/questions/284824/averaging-2-omegan-over-a-region">Averaging $2^{\omega(n)}$ over a region</a>" because this is a necessary step in a research paper I am writing. The answer is detailed and does exactly what I need, and it would be convenient to directly cite the result. However, the author of the answer is anonymous... how would one deal with such a situation? I could of course very easily just reproduce the argument in my paper, but that would be academically dishonest.</p>
| darij grinberg | 2,530 | <p>You seem to view a nickname in the bibliography as embarrassing or unprofessional. There is no reason this should be the case. If you use the built-in "cite" feature of the stackexchange network, the author's nickname will immediately be followed by the URL of their profile, which should allay any doubts about what kind of name it is:</p>
<pre><code>@MISC {496728,
TITLE = {Decomposition of a tensor product of Lie algebra representations into irreducibles},
AUTHOR = {Balerion_the_black (https://math.stackexchange.com/users/88863/balerion-the-black)},
HOWPUBLISHED = {Mathematics Stack Exchange},
NOTE = {URL:https://math.stackexchange.com/q/496728 (version: 2013-09-17)},
EPRINT = {https://math.stackexchange.com/q/496728},
URL = {https://math.stackexchange.com/q/496728}
}
</code></pre>
<p>The profile link also survives name changes and is a unique identifier (and in this case, also a <a href="http://awoiaf.westeros.org/index.php/Balerion_(cat)" rel="nofollow noreferrer">disambiguator</a>).</p>
<p>In the unlikely case the editors object to such a reference, let them suggest an alternative. It is ultimately not your fault if they mutilate your bibliography; this is what arXiv is for :)</p>
|
3,002,874 | <p>I found this limit in a book, without any explanation:</p>
<p><span class="math-container">$$\lim_{n\to\infty}\left(\sum_{k=0}^{n-1}(\zeta(2)-H_{k,2})-H_n\right)=1$$</span></p>
<p>where <span class="math-container">$H_{k,2}:=\sum_{j=1}^k\frac1{j^2}$</span>. However I'm unable to find the value of this limit by myself. After some work I get the equivalent expression</p>
<p><span class="math-container">$$\lim_{n\to\infty}\sum_{k=0}^{n-1}\sum_{j=k}^\infty\frac1{(j+1)^2(j+2)}$$</span></p>
<p>but anyway Im stuck here. Can someone show me a way to compute this limit? Thank you.</p>
<p>UPDATE: Wolfram Mathematica computed its value perfectly, so I guess there is some integral or algebraic identity from which to calculate it.</p>
| Claude Leibovici | 82,404 | <p>Considering your last expression <span class="math-container">$$a_n=\sum_{k=0}^{n-1}\sum_{j=k}^\infty\frac1{(j+1)^2(j+2)}$$</span>
<span class="math-container">$$\sum_{j=k}^\infty\frac1{(j+1)^2(j+2)}=\psi ^{(1)}(k+1)-\frac{1}{k+1}$$</span> making
<span class="math-container">$$a_n=n \,\psi ^{(1)}(n+1)$$</span> the expansion of which being
<span class="math-container">$$a_n=1-\frac{1}{2 n}+\frac{1}{6 n^2}+O\left(\frac{1}{n^4}\right)$$</span></p>
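<p>A numeric check of the original limit (an added sketch, not part of the original answer; $\zeta(2)=\pi^2/6$ and the harmonic sums are accumulated incrementally) shows the expression approaching $1$ at the rate $1-\frac{1}{2n}$ predicted by the expansion:</p>

```python
import math

zeta2 = math.pi ** 2 / 6
n = 4000
H = Hk2 = 0.0        # H_n and H_{k,2}, built up incrementally
total = 0.0
for k in range(n):   # k = 0, 1, ..., n-1
    total += zeta2 - Hk2          # add zeta(2) - H_{k,2}
    Hk2 += 1.0 / (k + 1) ** 2
    H += 1.0 / (k + 1)
a_n = total - H
print(a_n)           # close to 1 - 1/(2n)
```
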
|
1,960,169 | <p><a href="http://puu.sh/rCwCy/c78a9ef78a.png" rel="nofollow noreferrer">Asymptote http://puu.sh/rCwCy/c78a9ef78a.png</a></p>
<p>Well my thinking was if the asymptote is at x = 4, it will reach as close to 4 as possible but will never reach 4, meaning it's not defined at 4. </p>
| MathMajor | 113,330 | <p>We have</p>
<p>$$x^2 + 4x - 5 = (x^2 + 4x + 4) - 4 - 5 = (x + 2)^2 - 9.$$</p>
<p>Therefore, the vertex is at $(-2, \, -9)$.</p>
|
111,899 | <p>Evaluate the integral using trigonometric substitutions. </p>
<p>$$\int{ x\over \sqrt{3-2x-x^2}} \,dx$$</p>
<p>I am familiar with using the right triangle diagram and theta, but I do not know which terms would go on the hypotenuse and sides in this case. If you can determine which numbers or $x$-values go on the hypotenuse, adjacent, and opposite sides, I can figure out the rest, although your final answer would help me check mine. Thanks!</p>
| David Mitra | 18,986 | <p>The "trick" in evaluating
$$\tag{1}
\int{x\over\sqrt{3-2x-x^2}}\,dx
$$
is to complete the square of the expression in the radicand: rewrite $3-2x-x^2$ as$$\tag{2}4-\color{maroon}{(x+1)}^2.$$</p>
<p>I'm not sure what this right triangle diagram you speak of is, but with the method I assume you're using, the second "trick" is to take advantage of one of the Pythagorean Identities so that the square root in $(1)$ can be taken. Looking at $(2)$, you should be reminded of
$$
a^2-\color{maroon}{a^2\sin^2\theta}=a^2\cos^2\theta.
$$
So, one may make the substitution $$\tag{3} (x+1)=2\sin\theta.$$
Then $$4-(x+1)^2 =4-(2\sin\theta)^2= 4-4\sin^2\theta= 4\cos^2\theta$$</p>
<p>Also, from our substitution rule $(3)$: $dx=2\cos\theta\, d\theta $ and $x=2\sin\theta-1$.</p>
<p>The integral $(1)$ then becomes
$$
\int { 2\sin\theta-1 \over2\cos\theta}\cdot2\cos\theta\,d\theta= \int(2\sin\theta-1)\,d\theta.
$$
I'll leave the rest for you... </p>
<p>(and now I recall the triangle business: you can label two sides using $\sin\theta=(x+1)/2$; usually, after you find the antiderivative and write back in terms of $x$, the triangle is used as an an aid to simplify your answer). </p>
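<p>One can check numerically that the substitution really turns the original integrand into $2\sin\theta-1$ (an added sketch, not part of the original answer; the sample angle just needs $\cos\theta>0$):</p>

```python
import math

theta = 0.4                            # sample angle with cos(theta) > 0
x = 2 * math.sin(theta) - 1            # the substitution rule (3)
integrand = x / math.sqrt(3 - 2 * x - x * x)
dx_dtheta = 2 * math.cos(theta)
transformed = integrand * dx_dtheta    # should equal 2 sin(theta) - 1
diff = abs(transformed - (2 * math.sin(theta) - 1))
print(diff)
```
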
|
1,303,274 | <p>Define a sequence {$\ x_n$} recursively by</p>
<p>$$ x_{n+1} =
\sqrt{2 x_n -1}, \quad x_0=a, \ \text{where } a>1
$$
Prove that {$\ x_n$} is strictly decreasing. I'm not sure where to start.</p>
| k170 | 161,538 | <p>$$ \sum\limits_{n=0}^\infty (-1)^n = \lim\limits_{m\to\infty}\sum\limits_{n=0}^m (-1)^n $$
$$ = \lim\limits_{m\to\infty} \frac12\left((-1)^m + 1\right) $$
Note that the sequence
$$a_m=\frac12\left((-1)^m + 1\right) $$
is convergent if
$$ \lim\limits_{m\to\infty} a_{2m} = \lim\limits_{m\to\infty} a_{2m+1} = L$$
However, in this case we have
$$ \lim\limits_{m\to\infty} \frac12\left((-1)^{2m}+1\right) =\frac22= 1 $$
And
$$ \lim\limits_{m\to\infty} \frac12\left((-1)^{2m+1}+1\right) =\frac02= 0 $$
Therefore $a_m$ is a divergent sequence.</p>
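<p>The two subsequences are easy to see in the first few partial sums (an added sketch, not part of the original answer):</p>

```python
# Partial sums of sum (-1)^n: they alternate between 1 and 0 and never settle.
partial = []
s = 0
for n in range(10):
    s += (-1) ** n
    partial.append(s)
print(partial)
```
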
|
1,429,853 | <p>In a number sequence, I've figured the $n^{th}$ element can be written as $10^{2-n}$.</p>
<p>I'm now trying to come up with a formula that describes the sum of this sequence for a given $n$. I've been looking at the geometric sequence, but I'm not sure how to connect it.</p>
| Clayton | 43,239 | <p><strong>Hint:</strong> We have $10^{2-n}=10^2\cdot 10^{-n}=10^2\cdot(10^{-1})^n$ and $$\sum_{k=0}^n x^k=\frac{1-x^{n+1}}{1-x}.$$ </p>
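<p>A quick numeric check of the hint (an added sketch, not part of the original answer; here $x=10^{-1}$, matching $10^{2-k}=100\cdot(10^{-1})^k$):</p>

```python
x, n = 0.1, 12
lhs = sum(x ** k for k in range(n + 1))     # direct geometric sum
rhs = (1 - x ** (n + 1)) / (1 - x)          # closed form from the hint
err = abs(lhs - rhs)
partial_sum = 100 * lhs                     # sum of 10^(2-k) for k = 0..n
print(err, partial_sum)
```
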
|
4,600,131 | <blockquote>
<p>If <span class="math-container">$$f(x)=\binom{n}{1}(x-1)^2-\binom{n}{2}(x-2)^2+\cdots+(-1)^{n-1}\binom{n}{n}(x-n)^2$$</span>
Find the value of <span class="math-container">$$\int_0^1f(x)dx$$</span></p>
</blockquote>
<p>I rewrote this into a compact form.
<span class="math-container">$$\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}$$</span>
Now,
<span class="math-container">$$\int_0^1\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}dx$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}-\sum_{k=1}^n\binom{n}{k}\frac{(-k)^3}{3}(-1)^{k-1}$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}+\sum_{k=1}^n\binom{n}{k}\frac{k^3}{3}(-1)^{k-1}$$</span>
After this, I took <span class="math-container">$\dfrac13$</span> common and did some simplifications but nothing useful came out.</p>
<p>Any help is greatly appreciated.</p>
| lab bhattacharjee | 33,337 | <p><strong>Hint:</strong></p>
<p>Another way:</p>
<p><span class="math-container">$$f(x)=\sum_{k=1}^n\binom nk(x-k)^2(-1)^{k+1}$$</span></p>
<p><span class="math-container">$(x-k)^2=x^2+k(1-2x)+k(k-1)$</span></p>
<p><span class="math-container">$$\binom nk(x-k)^2=x^2\binom nk+(1-2x)n\binom{n-1}{k-1}+n(n-1)\binom{n-2}{k-2}$$</span></p>
<p><span class="math-container">$$\sum_{k=1}^n(-1)^{k+1}\binom nk=-\sum_{k=1}^n\binom nk(-1)^k=1-(1-1)^n$$</span></p>
<p><span class="math-container">$$\sum_{k=1}^n(-1)^{k+1}\binom{n-1}{k-1}=\sum_{k=1}^n\binom{n-1}{k-1}(-1)^{k-1}=(1-1)^{n-1}=?$$</span></p>
<p><span class="math-container">$$\sum_{k=1}^n(-1)^{k+1}\binom{n-2}{k-2}=?$$</span></p>
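<p>The identities used in this hint can be verified mechanically (an added sketch, not part of the original answer; the value $n=7$ and the range of $x$ are arbitrary):</p>

```python
from math import comb

n = 7
# (x-k)^2 = x^2 + k(1-2x) + k(k-1), checked at integer points
id1 = all((x - k) ** 2 == x * x + k * (1 - 2 * x) + k * (k - 1)
          for x in range(-3, 4) for k in range(n + 1))
# C(n,k)*k = n*C(n-1,k-1)
id2 = all(comb(n, k) * k == n * comb(n - 1, k - 1) for k in range(1, n + 1))
# C(n,k)*k*(k-1) = n*(n-1)*C(n-2,k-2)
id3 = all(comb(n, k) * k * (k - 1) == n * (n - 1) * comb(n - 2, k - 2)
          for k in range(2, n + 1))
# sum_{k=1}^n (-1)^(k+1) C(n,k) = 1 - (1-1)^n = 1 for n >= 1
id4 = sum((-1) ** (k + 1) * comb(n, k) for k in range(1, n + 1)) == 1
print(id1, id2, id3, id4)
```
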
|
2,643,900 | <p>for the problem </p>
<p>$$(1-2x)y'=y$$</p>
<p>the BC'S are $y(0)=-1$ and $y(1)=1$ and $0\leq x\leq 1$.</p>
<p>I solved this and got $\ln y =\ln\left(\dfrac 2 {1-2x}\right)+c$.</p>
<p>How do we determine the constant such that $y$ is real and finite everywhere from $0$ to $1$ (both limits included)?</p>
| Doug M | 317,162 | <p>$\ln y =\ln(2/(1-2x))+c\\
y =e^{\ln(2/(1-2x))+c}\\
y =e^{\ln(2/(1-2x))}e^{c}$</p>
<p>But $e^c$ is just as much of an arbitrary constant as $c$ was.</p>
<p>$y =Ce^{\ln(2/(1-2x))}\\
y =\frac {C}{1-2x}$</p>
<p>However, you have a problem at $x = \frac 12$</p>
<p>Going back to:</p>
<p>$(1-2x)y' = y \implies y(\frac 12) = 0$</p>
<p>And that is going to mean that $y = 0$ is the only valid solution over the interval $(0,1)$</p>
<p>And that gives you contradictions at the endpoints.</p>
|
2,643,900 | <p>for the problem </p>
<p>$$(1-2x)y'=y$$</p>
<p>the BC'S are $y(0)=-1$ and $y(1)=1$ and $0\leq x\leq 1$.</p>
<p>I solved this and got $\ln y =\ln\left(\dfrac 2 {1-2x}\right)+c$.</p>
<p>How do we determine the constant such that $y$ is real and finite everywhere from $0$ to $1$ (both limits included)?</p>
| Botond | 281,471 | <p>Your equation is:
$$(1-2x)*\frac{\mathrm{d}y}{\mathrm{d}x}=y$$
$$\frac{1}{y}\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{1}{1-2x}$$
Integrating with respect to $x$:
$$\int\frac{1}{y}\frac{\mathrm{d}y}{\mathrm{d}x}\mathrm{d}x=\int\frac{1}{1-2x}\mathrm{d}x$$
$$\log(y)=-\frac{1}{2}\log(1-2x)+C$$
$$\exp(\log(y))=\exp\left(-\frac{1}{2}\log(1-2x)+C\right)$$
$$y=\exp(C)\exp\left(\log\left(\frac{1}{\sqrt{1-2x}}\right)\right)$$
Redefining $C$:
$$y=\frac{C}{\sqrt{1-2x}}$$
From $y(0)=-1$:
$$-1=\frac{C_1}{\sqrt{1-2*0}}$$
$$C_1=-1$$
From $y(1)=1$
$$1=\frac{C_2}{\sqrt{1-2*1}}$$
$$C_2=\sqrt{-1}$$
So, for $y(1)=1$ the solution is $\frac{i}{\sqrt{1-2x}}=\frac{1}{\sqrt{2x-1}}$ (Thanks to <a href="https://math.stackexchange.com/users/135643/dylan">@Dylan</a> for pointing my mistake out), and for $y(0)=-1$ the solution is $y=\frac{-1}{\sqrt{1-2x}}$<br>
Also, for a first-order differential equation you can give only $1$ initial condition, unless the conditions are consistent with each other (to get a continuous solution). But you can consider the function
$$y(x)=
\begin{cases}
-\dfrac{1}{\sqrt{1-2x}}& \text{for } x< \frac{1}{2}\\
\dfrac{1}{\sqrt{2x-1}} & \text{for } x>\frac{1}{2}
\end{cases}$$
This function satisfies the differential equation for all $x\in \mathbb{R}\setminus\left\{\frac{1}{2}\right\}$, and it's continuous in it's domain.<br>
Note: by using the absolute value you can avoid using complex numbers (and the mistakes they can cause), like <a href="https://math.stackexchange.com/a/2644334/281471">Dylan did in his answer</a>.</p>
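<p>Both branches can be checked against the equation with a finite-difference derivative (an added sketch, not part of the original answer; the step size and sample points are arbitrary):</p>

```python
import math

def deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

left = lambda x: -1 / math.sqrt(1 - 2 * x)   # branch for x < 1/2, y(0) = -1
right = lambda x: 1 / math.sqrt(2 * x - 1)   # branch for x > 1/2, y(1) = 1
# residual of (1 - 2x) y' - y at sample points on each side of x = 1/2
res_l = max(abs((1 - 2 * x) * deriv(left, x) - left(x)) for x in [0.0, 0.2, 0.4])
res_r = max(abs((1 - 2 * x) * deriv(right, x) - right(x)) for x in [0.6, 0.8, 1.0])
bc = abs(left(0.0) + 1) + abs(right(1.0) - 1)  # boundary conditions
print(res_l, res_r, bc)
```
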
|
21,569 | <p>Recently I commented on a question which was about tiling a 100 by 100 grid with a 1 x 8 square. (Actually to prove this was impossible.) One of our users posted a link to a very interesting looking paper on tiling problems as a comment, and I also commented, but now looking through my comments I can't find the post. I assume it was deleted. However, I really want to find this paper! I had planned on giving a talk about it next week. Is there any way for me to locate the question? </p>
| Daniel Fischer | 83,702 | <p>I guess it's <a href="https://math.stackexchange.com/questions/1436647/tiling-problem-related-to-algebra">this question</a>, and it's about <a href="http://mathworld.wolfram.com/KlarnersTheorem.html" rel="nofollow noreferrer">Klarner's theorem</a>, <a href="http://dx.doi.org/10.1016/S0021-9800(69)80044-X" rel="nofollow noreferrer">here</a>?</p>
|
1,699,627 | <p>Why is the following equation true?$$\int_0^\infty e^{-nx}x^{s-1}dx = \frac {\Gamma(s)}{n^s}$$
I know what the Gamma function is, but why does dividing by $n^s$ turn the $e^{-x}$ in the integrand into $e^{-nx}$? I tried writing out both sides in their integral forms but $n^{-s}$ and $e^{-x}$ don't mix into $e^{-nx}$. I tried using the function's property $\Gamma (s+1)=s\Gamma (s)$ but I still don't know how to turn it into the above equation. What properties do I need?</p>
| Doug M | 317,162 | <p>At most 3 kings is the same thing as not 4 kings.</p>
<p>$P(4\text{ kings}) = 48 \big/ \binom{52}{5} = \frac{1}{54145}$</p>
|
1,699,627 | <p>Why is the following equation true?$$\int_0^\infty e^{-nx}x^{s-1}dx = \frac {\Gamma(s)}{n^s}$$
I know what the Gamma function is, but why does dividing by $n^s$ turn the $e^{-x}$ in the integrand into $e^{-nx}$? I tried writing out both sides in their integral forms but $n^{-s}$ and $e^{-x}$ don't mix into $e^{-nx}$. I tried using the function's property $\Gamma (s+1)=s\Gamma (s)$ but I still don't know how to turn it into the above equation. What properties do I need?</p>
| Sumedh | 73,593 | <p>Since you need to find probability $P(\leq 3~\text{kings})$, you can find $P(>3~\text{kings})$ and subtract it from $1$.</p>
<p>$P(>3~\text{kings}) \iff P(4~\text{kings})$, which is $48/52C5 = 1/54145$ </p>
<p>Here, we take $48$, since out of $5$ cards four are kings, and fifth can be any one out of the remaining $48$ cards.</p>
<p>So your final answer is $1 - (1/54145)$ which is $54144/54145$.</p>
|
1,699,627 | <p>Why is the following equation true?$$\int_0^\infty e^{-nx}x^{s-1}dx = \frac {\Gamma(s)}{n^s}$$
I know what the Gamma function is, but why does dividing by $n^s$ turn the $e^{-x}$ in the integrand into $e^{-nx}$? I tried writing out both sides in their integral forms but $n^{-s}$ and $e^{-x}$ don't mix into $e^{-nx}$. I tried using the function's property $\Gamma (s+1)=s\Gamma (s)$ but I still don't know how to turn it into the above equation. What properties do I need?</p>
| barak manos | 131,263 | <p>Split it into <strong>disjoint</strong> events, and then add up their probabilities:</p>
<hr>
<p>The probability of exactly $\color\red0$ kings is:</p>
<p>$$\frac{\binom{4}{\color\red0}\cdot\binom{52-4}{5-\color\red0}}{\binom{52}{5}}$$</p>
<hr>
<p>The probability of exactly $\color\red1$ king is:</p>
<p>$$\frac{\binom{4}{\color\red1}\cdot\binom{52-4}{5-\color\red1}}{\binom{52}{5}}$$</p>
<hr>
<p>The probability of exactly $\color\red2$ kings is:</p>
<p>$$\frac{\binom{4}{\color\red2}\cdot\binom{52-4}{5-\color\red2}}{\binom{52}{5}}$$</p>
<hr>
<p>The probability of exactly $\color\red3$ kings is:</p>
<p>$$\frac{\binom{4}{\color\red3}\cdot\binom{52-4}{5-\color\red3}}{\binom{52}{5}}$$</p>
<hr>
<p>Hence the overall probability is:</p>
<p>$$\sum\limits_{n=0}^{3}\frac{\binom{4}{n}\cdot\binom{52-4}{5-n}}{\binom{52}{5}}=\frac{54144}{54145}$$</p>
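The same arithmetic can be sketched in Python with exact fractions; the complement count from the other approach is included only as a cross-check:

```python
from fractions import Fraction
from math import comb

# Direct sum of the four disjoint cases n = 0, 1, 2, 3 kings.
total = sum(Fraction(comb(4, n) * comb(48, 5 - n), comb(52, 5)) for n in range(4))
assert total == Fraction(54144, 54145)

# Cross-check against the complement count: 1 - P(exactly 4 kings).
assert total == 1 - Fraction(comb(4, 4) * comb(48, 1), comb(52, 5))
```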
|
249,047 | <p>I have the following matrix: $$A=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 1 \\
0 & 0 & 1 \\
\end{bmatrix}
$$
What is the norm of $A$? I need to show the steps, should not use Matlab...<br>
I know that the answer is $\sqrt{\sqrt{5}/2+3/2}$. I am using the simple version to calculate the norm but getting a different answer: $\sqrt{\sum_{i=1}^3\sum_{j=1}^3 a_{ij}^2}=\sqrt{1+1+1+1}=2$.
Maybe this is some different kind of norm, not sure.</p>
<p>This might help - I need to get the condition number of $A$, which is $k(A)=\|A\|\|A^{-1}\|$... that is why I need to calculate the norm of $A$.</p>
| 000 | 22,144 | <p>Find the closed form and take the limit in the case of infinite sums. That is, you find the closed form of the sum $\sum_{4 \le k \le m}\frac{1}{k^2-1}$ and evaluate $\lim_{m \to \infty}(\text{closed form of the sum})$.</p>
<p>In this case, you apply partial fraction decomposition to $\frac{1}{k^2-1}$ and arrive at $$\frac{1}{k^2-1}=\frac{-\frac{1}{2}}{k+1}+\frac{\frac{1}{2}}{k-1}=\frac{1}{2}\left(\frac{1}{k-1}-\frac{1}{k+1}\right).$$</p>
<p>Taking the partial sum $$\sum_{4 \le k \le m}\frac{1}{k^2-1}=\frac{1}{2}\sum_{4 \le k \le m}\frac{1}{k-1}-\frac{1}{k+1}.$$</p>
<p>The sum telescopes, so we see that $$\sum_{4 \le k \le m}\frac{1}{k-1}-\frac{1}{k+1}=\color{red}{\frac{1}{3}}-\color{green}{\frac{1}{5}}+\color{red}{\frac{1}{4}}-\frac{1}{6}+\color{green}{\frac{1}{5}}-\frac{1}{7}+\dots+\frac{1}{m-3}-\color{green}{\frac{1}{m-1}}+\frac{1}{m-2}-\color{red}{\frac{1}{m}}+\color{green}{\frac{1}{m-1}}-\color{red}{\frac{1}{m+1}}=\frac{1}{3}+\frac{1}{4}-\frac{1}{m}-\frac{1}{m+1}.$$</p>
<p>Taking the limit, we have $$\lim_{m \to \infty}\frac{1}{3}+\frac{1}{4}-\frac{1}{m}-\frac{1}{m+1}=\frac{1}{3}+\frac{1}{4}-\lim_{m \to \infty}\left(\frac{1}{m}+\frac{1}{m+1}\right)=\frac{1}{3}+\frac{1}{4}=\frac{7}{12}.$$</p>
<p>Recalling the factor of $\frac{1}{2}$, we arrive at $\sum_{4 \le k \le \infty}\frac{1}{k^2-1}=\frac{1}{2}\cdot\frac{7}{12}=\frac{7}{24}$.</p>
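A minimal numerical sketch comparing the partial sums with the closed form and the limit $7/24$:

```python
m = 10_000
partial = sum(1 / (k * k - 1) for k in range(4, m + 1))

# Closed form of the partial sum: (1/2) * (1/3 + 1/4 - 1/m - 1/(m+1))
closed = 0.5 * (1 / 3 + 1 / 4 - 1 / m - 1 / (m + 1))
assert abs(partial - closed) < 1e-9
assert abs(partial - 7 / 24) < 1e-3  # approaches 7/24 as m grows
```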
|
4,600,992 | <p>I have two sequences of random variables <span class="math-container">$\{ X_n\}$</span> and <span class="math-container">$\{Y_n \}$</span>. I know that
<span class="math-container">$X_n \to^d D, Y_n \to^d D$</span>. Can I conclude that <span class="math-container">$X_n - Y_n \to^p 0$</span>?</p>
<p>If I cannot, what other conditions do I need for the conclusion to hold? Thanks.</p>
| Aaron | 9,863 | <p>Let us use the fundamental theorem of calculus and the mean value theorem, along with some very rough estimates.</p>
<p>If we set <span class="math-container">$f(x)=x\int_1^{x}\frac{e^t}{t}dt-e^x$</span>, then the product rule and the fundamental theorem of calculus yield
<span class="math-container">$$f'(x)=\int_1^{x}\frac{e^t}{t}dt$$</span>
and if we can get rough bounds for this quantity, we can get rough bounds for <span class="math-container">$f(x)$</span>.</p>
<p>If <span class="math-container">$g(x)=e^x/x$</span>, then <span class="math-container">$g'(x)=\frac{(x-1)e^x}{x^2}$</span>, and thus <span class="math-container">$g(x)$</span> is increasing on <span class="math-container">$(1,\infty)$</span>. In particular, if <span class="math-container">$x\geq 1$</span>, then <span class="math-container">$g(x)\geq g(1)=e$</span>. Therefore if <span class="math-container">$x\geq 1, f'(x)=\int_1^x g(t)dt \geq (x-1)e$</span>. Consequently, if <span class="math-container">$x>1$</span>, <span class="math-container">$$f(x)=f(1)+\int_1^{x}f'(t)dt\geq -e+\int_1^x e(t-1)dt=e(x^2/2-x-(1/2)).$$</span></p>
<p>This polynomial lower bound is obviously not bounded above, and thus <span class="math-container">$f(x)$</span> must approach infinity.</p>
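The bound can be checked numerically; this sketch uses composite Simpson's rule for the integral, and the sample points are chosen arbitrarily:

```python
from math import e, exp

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def f(x):
    # f(x) = x * int_1^x e^t / t dt - e^x, as defined in the answer
    return x * simpson(lambda t: exp(t) / t, 1.0, x) - exp(x)

# Check f(x) >= e * (x^2/2 - x - 1/2) at a few points x > 1.
for x in (2.0, 3.0, 5.0, 8.0):
    assert f(x) >= e * (x * x / 2 - x - 0.5) - 1e-6
```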
|
320,355 | <p>Show that $$\nabla\cdot (\nabla f\times \nabla h)=0,$$
where $f = f(x,y,z)$ and $h = h(x,y,z)$.</p>
<p>I have tried but I just keep getting a mess that I cannot simplify. I also need to show that </p>
<p>$$\nabla \cdot (\nabla f \times r) = 0$$</p>
<p>using the first result.</p>
<p>Thanks in advance for any help</p>
| Slugger | 59,157 | <p>We use $\nabla f =(f_x,f_y,f_z)$ and $\nabla h=(h_x,h_y,h_z)$. For the cross product we have $(a,b,c) \times (u,v,w)=\hat{i} (bw-cv)+\hat{j}(cu-aw)+\hat{k} (av-bu) $, alternatively written this is expressed $(a,b,c) \times (x,y,z) = (bw-cv,cu-aw,av-bu)$ but this is exactly the same. Here $a,b,c,u,v,w$ are not meant to mean anything special but are just to illustrate the point.</p>
<p>Using this information we calculate $(\nabla f\times \nabla h)$:
$$(\nabla f\times \nabla h)=\hat{i}(f_yh_z-f_zh_y)+\hat{j}(f_zh_x-f_xh_z)+\hat{k}(f_xh_y-f_yh_x)$$
Now for the dot product $\nabla \cdot (\nabla f\times \nabla h)$ we repeatedly use the product rule to obtain:
$$\nabla \cdot (\nabla f\times \nabla h)= \frac{\partial}{\partial x}(f_yh_z-f_zh_y)+\frac{\partial}{\partial y}(f_zh_x-f_xh_z)+\frac{\partial}{\partial z}(f_xh_y-f_yh_x)$$
$$=(f_{yx}h_z+f_yh_{zx}-f_{zx}h_y-f_zh_{yx})+(f_{zy}h_x+f_zh_{xy}-f_{xy}h_z - f_xh_{zy})+(f_{xz}h_y+f_xh_{yz}-f_{yz}h_x-f_yh_{xz})$$
Now $f_{yz}=f_{zy}$ etcetera assuming that $f$ and $h$ are twice continuous differentiable. Upon close inspection we see that all the terms cancel to give
$$\nabla \cdot (\nabla f\times \nabla h)=0$$</p>
<p>If we consider $r$ to be the radial vector then this is irrotational (from vector calculus). Then it is a result from vector calculus that there exists function $\phi$ such that $r=\nabla \phi$. Then
$$\nabla\cdot (\nabla f\times r)=\nabla\cdot (\nabla f\times \nabla \phi)=0$$
by the previous result.</p>
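The identity can also be spot-checked numerically. In this sketch the choices $f=x^2y+z^3$ and $h=xz+y^2$ are arbitrary; the gradients are entered analytically, and the divergence of their cross product is estimated by central differences:

```python
def F(x, y, z):
    # grad f = (2xy, x^2, 3z^2) for f = x^2 y + z^3
    # grad h = (z, 2y, x)       for h = x z + y^2
    gf = (2 * x * y, x * x, 3 * z * z)
    gh = (z, 2 * y, x)
    # Cross product grad f x grad h, component by component.
    return (gf[1] * gh[2] - gf[2] * gh[1],
            gf[2] * gh[0] - gf[0] * gh[2],
            gf[0] * gh[1] - gf[1] * gh[0])

def div_F(x, y, z, h=1e-5):
    # Central-difference estimate of div F at (x, y, z).
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0]) +
            (F(x, y + h, z)[1] - F(x, y - h, z)[1]) +
            (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

for point in ((1.0, 2.0, 3.0), (-0.7, 0.4, 1.9)):
    assert abs(div_F(*point)) < 1e-6
```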
|
320,355 | <p>Show that $$\nabla\cdot (\nabla f\times \nabla h)=0,$$
where $f = f(x,y,z)$ and $h = h(x,y,z)$.</p>
<p>I have tried but I just keep getting a mess that I cannot simplify. I also need to show that </p>
<p>$$\nabla \cdot (\nabla f \times r) = 0$$</p>
<p>using the first result.</p>
<p>Thanks in advance for any help</p>
| hd.scania | 74,980 | <p>$$\nabla f\times\nabla h\\=\left(\left(\hat{x}\frac{\partial}{\partial x}+\hat{y}\frac{\partial}{\partial y}+\hat{z}\frac{\partial}{\partial z}\right)f\right)\times\left(\left(\hat{x}\frac{\partial}{\partial x}+\hat{y}\frac{\partial}{\partial y}+\hat{z}\frac{\partial}{\partial z}\right)h\right)\\=\left(\hat{x}\frac{\partial f}{\partial x}+\hat{y}\frac{\partial f}{\partial y}+\hat{z}\frac{\partial f}{\partial z}\right)\times\left(\hat{x}\frac{\partial h}{\partial x}+\hat{y}\frac{\partial h}{\partial y}+\hat{z}\frac{\partial h}{\partial z}\right)\\=\begin{vmatrix}\hat{x}&\hat{y}&\hat{z}\\\frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}&\frac{\partial f}{\partial z}\\\frac{\partial h}{\partial x}&\frac{\partial h}{\partial y}&\frac{\partial h}{\partial z}\end{vmatrix}\\=\left(\hat{x}\frac{\partial f}{\partial y}\frac{\partial h}{\partial z}+\hat{y}\frac{\partial f}{\partial z}\frac{\partial h}{\partial x}+\hat{z}\frac{\partial f}{\partial x}\frac{\partial h}{\partial y}\right)-\left(\hat{x}\frac{\partial f}{\partial z}\frac{\partial h}{\partial y}+\hat{y}\frac{\partial f}{\partial x}\frac{\partial h}{\partial z}+\hat{z}\frac{\partial f}{\partial y}\frac{\partial h}{\partial x}\right)$$$$=\left(\hat{x}\tfrac{\partial f}{\partial y}\tfrac{\partial h}{\partial z}+\hat{y}\tfrac{\partial f}{\partial z}\tfrac{\partial h}{\partial x}+\hat{z}\tfrac{\partial f}{\partial x}\tfrac{\partial h}{\partial y}\right)-\left(\hat{x}(\tfrac{\partial f}{\partial z}\tfrac{\partial h}{\partial y})(\tfrac{\partial y}{\partial z}\tfrac{\partial z}{\partial y})+\hat{y}(\tfrac{\partial f}{\partial x}\tfrac{\partial h}{\partial z})(\tfrac{\partial z}{\partial x}\tfrac{\partial x}{\partial z})+\hat{z}(\tfrac{\partial f}{\partial y}\tfrac{\partial h}{\partial x})(\tfrac{\partial x}{\partial y}\tfrac{\partial y}{\partial x})\right)$$$$=\left(\hat{x}\frac{\partial f}{\partial y}\frac{\partial h}{\partial z}+\hat{y}\frac{\partial f}{\partial z}\frac{\partial 
h}{\partial x}+\hat{z}\frac{\partial f}{\partial x}\frac{\partial h}{\partial y}\right)-\left(\hat{x}\frac{\partial f}{\partial y}\frac{\partial h}{\partial z}+\hat{y}\frac{\partial f}{\partial z}\frac{\partial h}{\partial x}+\hat{z}\frac{\partial f}{\partial x}\frac{\partial h}{\partial y}\right)=\vec{0}$$
Hence
$$\nabla\bullet\nabla f\times\nabla h=\nabla\bullet\vec{0}=0$$
We have furthermore
$$\nabla\times(\nabla f\times\nabla h)=\nabla\times\vec{0}=\vec{0}$$
Then, concerning $\nabla\bullet(\nabla f\times\vec{r})=0$: here $\vec{r}\times\nabla h=\vec{0}$ is required, which means $\vec{r}\parallel\nabla h$.</p>
|
2,072,666 | <p>I have the set <b>{∀x ∃y P(x, y), ∀x ¬P(x, x)}</b>. In order to satisfy this set, I know there should exist an interpretation <b>I</b> that satisfies all the elements of the set. For instance, my interpretation assigns 3 to x and 4 to y. Should I apply the same numbers (3, 4) to ∀x ¬P(x, x) as well? Moreover, there are two x's in the predicate's arguments, so should I apply 3 and 3, the value I assigned to x? Thanks.</p>
| Mark Viola | 218,419 | <p>Let $y=-2x/e$. Then, as $x\to 0$, $y\to 0$.</p>
<p>Next, we see that </p>
<p>$$\frac{(-2x/e)}{\log(1+(-2x/e))}=\frac{y}{\log(1+y)}=\frac{1}{\frac{\log(1+y)}{y}}$$</p>
<p>Hence, we can assert that </p>
<p>$$\lim_{x\to 0}\frac{(-2x/e)}{\log(1+(-2x/e))}=\frac{1}{\lim_{y\to 0}\frac{\log(1+y)}{y}}=1$$</p>
<p>as was to be shown!</p>
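A quick numerical sketch of the limit:

```python
from math import e, log

def g(x):
    # The ratio (-2x/e) / log(1 + (-2x/e)) from the answer.
    y = -2 * x / e
    return y / log(1 + y)

# The ratio tends to 1 as x -> 0 (sampled here from the right;
# small negative x works the same way).
for x in (1e-2, 1e-4, 1e-6):
    assert abs(g(x) - 1) < 10 * x
```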
|
607,862 | <p>Let $f$ be a continuous function. What is the maximum of $\int_0^1 fg$ among all continuous functions $g$ with $\int_0^1 |g| = 1$?</p>
| Pablo Rotondo | 22,121 | <p><strong>Hint:</strong> Try concentrating the <em>weight</em> of $g$ around the maximum of $|f|$. What happens then?</p>
|
2,658,627 | <p>I am trying to solve the following:</p>
<blockquote>
<p>Suppose the bank paid 12 % per year, but compounded that interest
monthly. That is, suppose 1 % interest was added to your account every
month. Then how much would you have after 30 years and after 60 years
if you started with $100?</p>
</blockquote>
<p>What I did is using this formula:</p>
<p><span class="math-container">$$ y[n]=(1.001^{12})^{n}\,y[0] $$</span></p>
<p>where <span class="math-container">$ y[0] = 100\$ $</span></p>
<p>The answer for 30 years was <span class="math-container">$~143\$ $</span></p>
<p>but it is wrong, as I've been told.</p>
<blockquote>
<p>When I compound your interest monthly, this means that I give you
1/12th of the total year's rate every single month. In this specific
example we have a 12% interest rate so that would mean that every
single month your account is going to grow by 1%. Now, we are doing
this 12 times a year for 30 years</p>
</blockquote>
<p>It is still confusing to me.</p>
| Mostafa Ayaz | 518,023 | <p>Note that $1$% interest per month is not equivalent to $12$% per year, because$$\text{one year at }1\%\text{ per month: }(1.01)^{12}\approx1.1268\\\text{one year at }12\%\text{ per year: }(1.12)^{1}=1.12$$so we can see$$1\%\text{ interest per month}\neq 12\%\text{ interest per annum}$$Therefore if our interest is $1$% per month, our deposit after $30$ and $60$ years, starting with $100\$$, would be$$M_{30}=100\,(1.01)^{30\times 12}\approx3594.96\$\\M_{60}=100\,(1.01)^{60\times 12}\approx129237.67\$$$and with a yearly $12$% interest we would have$$M_{30}=100\,(1.12)^{30}\approx2995.99\$\\M_{60}=100\,(1.12)^{60}\approx89759.69\$$$</p>
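A sketch of the arithmetic in Python:

```python
start = 100

# 1% per month, compounded monthly
m30 = start * 1.01 ** (30 * 12)
m60 = start * 1.01 ** (60 * 12)
assert abs(m30 - 3594.96) < 0.5
assert abs(m60 - 129237.67) < 1.0

# 12% per year, compounded yearly
y30 = start * 1.12 ** 30
y60 = start * 1.12 ** 60
assert abs(y30 - 2995.99) < 0.5
assert abs(y60 - 89759.69) < 1.0
```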
|
2,491,968 | <p>$f(z)=(z+1/z)^2$ is the given function . How to find whether the function is differentiable at origin or not ?</p>
| Mark | 310,244 | <p>Note that:
$$f(z) = z^2 + 2 + \frac{1}{z^2}$$
This clearly shows that $f(z)$ has a pole of order $2$ at the origin, and is therefore meromorphic there, and not holomorphic.</p>
|
2,491,968 | <p>$f(z)=(z+1/z)^2$ is the given function . How to find whether the function is differentiable at origin or not ?</p>
| Ben | 493,581 | <p>To see it in a more direct sense, let $x \in \mathbb{R}$:</p>
<p>Consider $\lim_{x \rightarrow 0} f(ix)=\lim_{x \rightarrow 0} (xi)^{2} +2 +1/(xi)^{2}=\lim_{x \rightarrow 0} -x^{2} +2 -1/x^{2}=- \infty$.</p>
<p>Now consider $\lim_{x \rightarrow 0} f(x)=\lim_{x \rightarrow 0} x^{2} +2 +1/x^{2}= \infty$.</p>
<p>In the first limit, we are looking at the value of the limit strictly on the imaginary axis, and in the second, we are looking strictly along the real axis. But one approaches $-\infty$ and the other approaches $\infty$. Ignoring the fact that the limits of course go to $\pm \infty$, we see that they are moving in different directions entirely. So, it seems clear now that the limit does not exist.</p>
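A small numerical sketch of the two directional behaviours, using Python's built-in complex numbers:

```python
def f(z):
    return (z + 1 / z) ** 2

for t in (1e-1, 1e-2, 1e-3):
    on_real = f(complex(t, 0))   # approach 0 along the real axis
    on_imag = f(complex(0, t))   # approach 0 along the imaginary axis
    assert on_real.real > 1 / (t * t) - 3   # large and positive
    assert on_imag.real < 3 - 1 / (t * t)   # large and negative
```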
|
1,142,631 | <p>A sequence of probability measures $\mu_n$ is said to be tight if for each $\epsilon$ there exists a finite interval $(a,b]$ such that $\mu_n((a,b])>1-\epsilon$ for all $n$.</p>
<p>With this information, prove that if $\sup_n\int f\,d\mu_n<\infty$ for a nonnegative $f$ such that $f(x)\rightarrow\infty$ as $x\rightarrow\pm\infty$ then $\{\mu_n\}$ is tight.</p>
<p>This is the first I've worked with tight probability measure sequences so I'm not sure how to prove this. Any thoughts?</p>
| Cecilia | 257,899 | <p>The condition to ensure tightness is that $\displaystyle \frac{f(x)}{x} \to \infty$ as $x \to \infty$. In fact, if e.g. $f(x)=|x|$, then $$\sup_n \int f \, d\mu_n<\infty$$ is only a necessary condition for tightness.</p>
|
405,449 | <blockquote>
<p>Is the polynomial $x^{105} - 9$ reducible over $\mathbb{Z}$?</p>
</blockquote>
<p>This exercise I received on a test, and I didn't resolve it. I would be curious in any demonstration with explanations. Thanks!</p>
| Key Ideas | 78,535 | <p>For binomials there is a classical irreducibility test (below). It implies that $\,x^{105}-c\,$ is irreducible over a field $\,F\,$ if $\,c\,$ is not a third, fifth, or seventh power in $\,F,$ since $\,105 = 3\cdot 5\cdot 7.$</p>
<p><strong>Theorem</strong> $\ $ Suppose $\:c\in F\:$ a field, and $\:0 < n\in\mathbb Z.$</p>
<p>$\quad x^n - c\ $ is irreducible over $\:F \iff c \not\in F^p\:$ for all primes $\,p\mid n\:$ and $\ c\not\in -4F^4$ when $\: 4\mid n$.</p>
<p>Proofs can be found in many Field Theory textbooks, e.g. see Lang's Algebra, or see Karpilovsky, <a href="http://books.google.com/books?id=-2WR0fw9gLMC&pg=PA425&lpg=PA425" rel="nofollow">Topics in Field Theory, Theorem 8.1.6.</a></p>
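For $c=9$ and $n=105=3\cdot5\cdot7$ the hypotheses are easy to check: since $4\nmid 105$, only the prime condition matters, and an integer is a $p$-th power in $\mathbb Q$ only if it is a $p$-th power in $\mathbb Z$. A minimal sketch of that check:

```python
def is_pth_power(c, p):
    # Integer p-th power test via rounding the real p-th root.
    r = round(c ** (1 / p))
    return any(k >= 0 and k ** p == c for k in (r - 1, r, r + 1))

# n = 105 = 3 * 5 * 7 and 4 does not divide 105, so by the theorem
# x^105 - 9 is irreducible over Q iff 9 is not a p-th power for p | 105.
for p in (3, 5, 7):
    assert not is_pth_power(9, p)
```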
|
3,976,572 | <p>Suppose <span class="math-container">$f(x)= x^3+2x^2+3x+3$</span> has roots <span class="math-container">$a, b, c$</span>.
Then find the value of
<span class="math-container">$\left(\frac{a}{a+1}\right)^{3}+\left(\frac{b}{b+1}\right)^{3}+\left(\frac{c}{c+1}\right)^{3}$</span>.</p>
<p>My Approach :
I constructed a new polynomial <span class="math-container">$g(x) = f\left(\frac{x^{\frac{1}{3}}}{1-x^{\frac{1}{3}}}\right)$</span> and then used Vieta's formula for the sum of roots taken one at a time to solve the sum.</p>
<p>But then I realised that I won't be able to do so as <span class="math-container">$g(x)$</span> is not a polynomial anymore.
Can anyone help me please.</p>
| Aryan | 866,404 | <p>It involves some tedious calculations, so I'll just sketch my solution:
Using Vieta's formulas, find the coefficients of the polynomial that has roots $\left\{1-\frac{1}{a+1},1-\frac{1}{b+1},1-\frac{1}{c+1}\right\}$. Then just use Newton's identities to find the sum of cubes of the roots of that polynomial.</p>
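A numerical sketch of that plan (the root finder here, a simple Durand-Kerner iteration, is an arbitrary choice). Carrying out the substitution $y=\frac{x}{x+1}$ gives the transformed polynomial $y^3-5y^2+6y-3$, and Newton's identities then give $p_3=e_1p_2-e_2p_1+3e_3=5\cdot13-6\cdot5+9=44$; the numerical roots agree:

```python
def poly(coeffs, z):
    # Horner evaluation; coefficients listed from the highest power down.
    v = 0j
    for c in coeffs:
        v = v * z + c
    return v

def durand_kerner(coeffs, iters=200):
    # Simultaneous iteration for all (complex) roots of a monic polynomial.
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct seeds
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            d = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    d *= r - s
            new.append(r - poly(coeffs, r) / d)
        roots = new
    return roots

roots = durand_kerner([1, 2, 3, 3])                 # x^3 + 2x^2 + 3x + 3
total = sum((r / (r + 1)) ** 3 for r in roots)
assert abs(total - 44) < 1e-6
```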
|
1,391,214 | <p>I've been thinking about the differences in numbers so for example: </p>
<p>$\begin{array}{ccccccc} &&&0&&&\\ &&1&&1&&\\&1&&2&&3&\\0&&1&&3&&6
\end{array}$</p>
<p>or with absolute differences:</p>
<p>$\begin{array}{ccccccccccc}
&&&\vdots&&\vdots&&\vdots\\
&&1&&0&&2&&2\\
&1&&2&&2&&4&&2\\
2&&3&&5&&7&&11&&13
\end{array}$</p>
<p>so I found this and I wanted to know if this has an actual mathematical formula: </p>
<p>$\begin{array}{cccccccc}
&&&&\vdots\\
&&&3&\cdots&\vdots\\
&&2&&5&\cdots&\vdots\\
&1&&3&&8&\cdots&\vdots\\
0&&1&&4&&12&\cdots
\end{array}$</p>
<p>and the sequence continues 0, 1, 4, 12, 32, 80, 192, 448, 1024, 2304, 5120, 11264, 24576, 53248, 114688, 245760, ... In case you couldn't tell how I generated them: the first diagonal, going up from the 0, increases by one each step. I wasn't able to find a formula, but I have found that some of the other diagonals have nice properties; for example, the second one (1, 3, 5, ...) is the odd numbers, and all of those diagonals are linear.</p>
| Oleg567 | 47,993 | <p>This sequence can be described by formula:
$$
u(n)=\left\{
\begin{array}{l}
n\cdot 2^{n-1}, \qquad\; n=0,1,2,3,4,5,6;\\
n\cdot 2^{n-1}+4, \;\; n=7,8,9,...;\end{array}
\right.
$$</p>
<p>$u(0)=0$, $u(1)=1$, $u(2)=4$, $u(3)=12$, $\ldots$, $u(6)=192$; </p>
$u(7)=448+4$, $u(8)=1024+4$, $\ldots$, $u(13)=53248+4$.</p>
|
1,930,558 | <p>I want a good textbook covering the elements of discrete mathematics at an average level. I'm a mathematics undergraduate, so I don't want it to be slanted towards computer science too much. I'm interested in combinatorics and graph theory, but I also want coverage of enumeration and other topics. One book I found is <a href="http://rads.stackoverflow.com/amzn/click/0071005447" rel="nofollow">https://www.amazon.com/Elements-Discrete-Mathematics-C-Liu/dp/0071005447</a>.</p>
| Brian M. Scott | 12,042 | <p>For a math major I strongly recommend Edward A. Scheinerman, <a href="http://rads.stackoverflow.com/amzn/click/0840049420" rel="nofollow"><em>Mathematics: A Discrete Introduction</em></a>; it’s well written, and it’s definitely aimed at math majors, not computer science majors. The book by Kenneth Rosen is exhaustive, but it’s aimed more at computer science majors and is not, in my opinion, all that well written; much the same goes for the one by Richard Johnsonbaugh. Susanna S. Epp, <a href="http://rads.stackoverflow.com/amzn/click/0495391328" rel="nofollow"><em>Discrete Mathematics with Applications</em></a>, is well written and does a decent job of targeting both math majors and computer science majors and is probably a little easier than the Scheinerman for someone who is encountering abstract mathematics and proofs for the first time.</p>
|
227,618 | <p>I'm creating AI for a card game, and I ran into a problem calculating the probability of passing/failing the hand when the AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and the AI needs to play so as not to take the hand.</p>
<p>Assuming there are 4 cards of the suit left in the game and one is in AI's hand, I need to calculate probability that one of the other players would take the hand. Here's an example:</p>
<p>AI player has: J
Other 2 players have: A, K, 7</p>
<p>If a single opponent has AK7 then AI would lose. However, if one of the players has A or K without 7, AI would survive. Now, looking at possible distribution, I have:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
AK 7 survives
A7 K survives
K7 A survives
A 7K survives
K 7A survives
7 AK survives
AK7 loses
</code></pre>
<p>Looking at this, it seems that there is 75% chance of survival.</p>
<p>However, I skipped the permutations that mirror the ones from above. It should be the same, but somehow when I write them all down, it seems that chance is only 50%:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
AK 7 survives
A7 K survives
K7 A survives
KA 7 survives
7A K survives
7K A survives
A K7 survives
A 7K survives
K 7A survives
K A7 survives
7 AK survives
7 KA survives
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
</code></pre>
<p>12 losses, 12 survivals = 50% chance. Obviously, the two counts should agree (shouldn't they?), so I'm missing something in one of the ways of calculating.</p>
<p>Which one is correct?</p>
| Martin Sleziak | 8,297 | <p>We know that $|\mathbb R|=\mathfrak c=2^{\aleph_0}$</p>
<p>$|\mathcal P(\mathbb R)|=2^{\mathfrak c}$</p>
<p>$|\mathbb R^{\mathbb R}|=\mathfrak c^{\mathfrak c}$</p>
<p>So we only need to show that $2^{\mathfrak c}=\mathfrak c^{\mathfrak c}$.</p>
<p>We have
$$\mathfrak c^{\mathfrak c} = (2^{\aleph_0})^{\mathfrak c} = 2^{\aleph_0\mathfrak c}=2^\mathfrak c,$$
since $\aleph_0\mathfrak c=\mathfrak c$.</p>
<p>We get the last equality from
$$\mathfrak c \le \aleph_0 \mathfrak c \le \mathfrak c \mathfrak c = 2^{\aleph_0} 2^{\aleph_0} = 2^{\aleph_0+\aleph_0} = 2^{\aleph_0} = \mathfrak c$$
and from Cantor-Benstein theorem.</p>
|
227,618 | <p>I'm creating AI for a card game, and I ran into a problem calculating the probability of passing/failing the hand when the AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and the AI needs to play so as not to take the hand.</p>
<p>Assuming there are 4 cards of the suit left in the game and one is in AI's hand, I need to calculate probability that one of the other players would take the hand. Here's an example:</p>
<p>AI player has: J
Other 2 players have: A, K, 7</p>
<p>If a single opponent has AK7 then AI would lose. However, if one of the players has A or K without 7, AI would survive. Now, looking at possible distribution, I have:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
AK 7 survives
A7 K survives
K7 A survives
A 7K survives
K 7A survives
7 AK survives
AK7 loses
</code></pre>
<p>Looking at this, it seems that there is 75% chance of survival.</p>
<p>However, I skipped the permutations that mirror the ones from above. It should be the same, but somehow when I write them all down, it seems that chance is only 50%:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
AK 7 survives
A7 K survives
K7 A survives
KA 7 survives
7A K survives
7K A survives
A K7 survives
A 7K survives
K 7A survives
K A7 survives
7 AK survives
7 KA survives
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
</code></pre>
<p>12 losses, 12 survivals = 50% chance. Obviously, the two counts should agree (shouldn't they?), so I'm missing something in one of the ways of calculating.</p>
<p>Which one is correct?</p>
| Asaf Karagila | 622 | <p>Note that $\mathbb R$ has the same cardinality as $\mathbb R\times\mathbb R$. Therefore their power sets have the same size.</p>
<p>In particular $f\colon\mathbb R\to\mathbb R$ is an element in $\mathcal P(\mathbb{R\times R})$, and therefore $\mathbb{R^R}$ has cardinality of at most $2^{|\mathbb R|}$, and the other direction is trivial, because: $$2<|\mathbb R|\implies |2^\mathbb R|\leq|\mathbb{R^R}|\leq|2^\mathbb R|$$</p>
<p>Thus equality holds.</p>
|
3,895,275 | <p>I have the matrix <span class="math-container">$A= \begin{pmatrix} 5 & 3 \\ 2 & 1 \end{pmatrix} $</span> and I should find <span class="math-container">$m$</span>, <span class="math-container">$n$</span>, <span class="math-container">$r$</span> such that <span class="math-container">$mA^2+nA+rI=0$</span> (<span class="math-container">$I$</span> is the identity matrix), and after that find <span class="math-container">$A^{-1}$</span> using that relation.</p>
<p>I really tried to find those variables, but I do not know how to solve this with just one equation.
Could you help me find <span class="math-container">$m$</span>, <span class="math-container">$n$</span>, <span class="math-container">$r$</span> in any way you think works.</p>
| TheSilverDoe | 594,484 | <p><strong>Hint :</strong> <span class="math-container">$m=1$</span>, <span class="math-container">$n=-\mathrm{Tr}(A)$</span> and <span class="math-container">$r=\det(A)$</span> always work. This is called Cayley-Hamilton theorem.</p>
|
4,611,065 | <p>This question is motivated by curiosity, and I don't have much background to exhibit.</p>
<p>Going through a couple of books dealing with real analysis, I've noticed that 2 definitions can be given of the exponential function known in algebra as <span class="math-container">$f(x)= e^x$</span>.</p>
<p>One definition says : The exponential function is the unique function defined on <span class="math-container">$\mathbb R$</span> such that <span class="math-container">$f(0)=1$</span> and <span class="math-container">$\forall (x) [ f'(x)= f(x)] $</span>.</p>
<p>The other one defines the exponential function as the inverse of the natural logarithm function . More precisely <span class="math-container">$(1)$</span> <span class="math-container">$\exp_a (x)$</span> is defined as the inverse of <span class="math-container">$\log_a (x)$</span> , <span class="math-container">$(2)$</span> then , <span class="math-container">$\exp_a (x)$</span> is shown to be identical to <span class="math-container">$a^x$</span>, and finally <span class="math-container">$(3)$</span> every function of the form : <span class="math-container">$a^x$</span> is shown to be a " special case" of the <span class="math-container">$e^x$</span> function.</p>
<p>My question :</p>
<p>(1) Do these definitions exhaust the ways the exponential function can be defined?</p>
<p>(2) Are these definitions actually different at least conceptually ( though denoting in fact the same object)?</p>
<p>(3) Is there a reason to prefer one definition over the other? What is each definition good for?</p>
| Qiaochu Yuan | 232 | <p>There are several equivalent definitions, and it is important and valuable to know that they all define the same function. Really they should all be collected into a "definition-theorem," which might look like this.</p>
<blockquote>
<p><strong>Definition-Theorem:</strong> The following five functions are identical:</p>
</blockquote>
<blockquote>
<ol>
<li>The unique differentiable function satisfying <span class="math-container">$\exp(0) = 1$</span> and <span class="math-container">$\exp'(x) = \exp(x)$</span>.</li>
<li>The inverse of the natural logarithm <span class="math-container">$\ln x = \int_1^x \frac{dt}{t}$</span>.</li>
<li>The function <span class="math-container">$\exp(x) = \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$</span>.</li>
<li>The function <span class="math-container">$e^x$</span> where <span class="math-container">$a^x$</span> is defined for rational <span class="math-container">$x$</span> and non-negative <span class="math-container">$a$</span> in the usual way and then extended by continuity to all real <span class="math-container">$x$</span>, and where <span class="math-container">$e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$</span>.</li>
<li>The function <span class="math-container">$\displaystyle \exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$</span>.</li>
</ol>
</blockquote>
<p>Of these, I personally favor introducing the exponential using definition 1. I think it offers the most satisfying account of why <span class="math-container">$\exp(x)$</span> is a natural and interesting function to study: because it is an <a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors" rel="noreferrer">eigenvector</a> of differentiation. This goes a long way towards explaining the role of <span class="math-container">$\exp(x)$</span> in solving differential equations which is where many of its applications are. However, it is a <em>characterization</em> rather than a <em>construction</em>: one has to do some additional work to show that such a function exists and is unique.</p>
<p>Definitions 3, 4, and 5 are all worth knowing but compared to definition 1 I think they lack motivation. In fact definition 1 is, again in my opinion, the best way to motivate them. Definition 3 arises from applying the <a href="https://en.wikipedia.org/wiki/Euler_method" rel="noreferrer">Euler method</a> to solve <span class="math-container">$f'(x) = f(x)$</span>. Definition 5 arises from writing down a Taylor series solution to <span class="math-container">$f'(x) = f(x)$</span>. And definition 4 provides no reason to single <span class="math-container">$e$</span> out over any other exponential base; that reason is provided by definition 1. On the other hand, these 3 definitions do successfully <em>construct</em> the exponential, which definition 1 does not without more background work.</p>
<p>Definition 2 is, I think, somewhat better motivated than 3, 4, or 5 but I still don't prefer it. The natural logarithm is a great and useful function but what motivates taking its inverse? The definition also requires having developed some theory of integration whereas definition 1 only requires derivatives. And, importantly, it does not readily generalize to the <a href="https://en.wikipedia.org/wiki/Matrix_exponential" rel="noreferrer">matrix exponential</a> (also very useful for solving differential equations and a crucial tool in <a href="https://en.wikipedia.org/wiki/Lie_theory" rel="noreferrer">Lie theory</a>), whereas definition 1 generalizes easily: for a fixed matrix <span class="math-container">$A$</span>, the matrix exponential <span class="math-container">$t \mapsto \exp(tA)$</span> is the unique differentiable function <span class="math-container">$\mathbb{R} \to M_n(\mathbb{R})$</span> satisfying <span class="math-container">$\exp(0) = I$</span> and <span class="math-container">$\frac{d}{dt} \exp(tA) = A \exp(tA)$</span>.</p>
<p>Definitions 3 and 5 also generalize to the matrix exponential whereas definition 4 does not. Even if you don't care about matrices yet this distinction is still relevant for <em>complex</em> exponentials, as in <a href="https://en.wikipedia.org/wiki/Euler%27s_formula" rel="noreferrer">Euler's formula</a>: neither definitions 2 nor 4 prepare you at all for understanding complex exponentials, whereas definitions 1, 3, and 5 generalize straightforwardly to this case as well.</p>
<p>However, definition 2 is notable for, I think, being closest to the historical line of development: natural logarithms were in fact discovered before either <span class="math-container">$e$</span> or the natural exponential. And definition 4 is notable in that it most directly connects the natural exponential to the ordinary pre-calculus exponential.</p>
|
1,576,836 | <p>I was solving a question related to functions and I came across a limit which I cannot understand. The question is:<br>
If $a$ and $b$ are positive real numbers such that $a-b=2,$ then find the smallest value of the constant $L$ for which $\sqrt{x^2+ax}-\sqrt{x^2+bx}<L$ for all $x>0$<br></p>
<hr>
<p>First I found the domain of definition of the function in question, $\sqrt{x^2+ax}-\sqrt{x^2+bx}$. The domain is $x\geq0$ or $x\leq -a$.<br>
Then I found the horizontal asymptote as $x\to \infty$.<br>
$\lim_{x\to\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br>
$=\lim_{x\to\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br>
Then I found the horizontal asymptote as $x\to -\infty$.<br>
$\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{x\to-\infty}\frac{(a-b)x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\lim_{x\to-\infty}\frac{2x}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}$<br>
$=\lim_{x\to-\infty}\frac{2}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}=\frac{2}{2}=1$<br><br>
But when i drew the graph using graphing calculator,the horizontal asymptote as $x\to-\infty$ was $-1$I do not understand what mistake i made in calculating the limit as $x\to -\infty$<br><br>
Then i guessed i should have made the substitution $x=-t$ and <br>
$\lim_{x\to-\infty}\sqrt{x^2+ax}-\sqrt{x^2+bx}=\lim_{t\to\infty}\sqrt{t^2-at}-\sqrt{t^2-bt}=\lim_{t\to \infty}\frac{(b-a)t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}=\lim_{t\to\infty}\frac{-2t}{\sqrt{t^2-at}+\sqrt{t^2-bt}}$<br>
$=\lim_{t\to\infty}\frac{-2}{\sqrt{1-\frac{a}{t}}+\sqrt{1-\frac{b}{t}}}=\frac{-2}{2}=-1$<br><br></p>
<hr>
<p>I want to ask why the answer came out wrong in the first method and correct in the second method.</p>
<blockquote>
<p>Is it always necessary to put $x=-t$ and change the limit to plus
infinity while calculating a limit as $x\to-\infty$?<br>
Please help me. Thanks.</p>
</blockquote>
| MichaelChirico | 205,203 | <p>The crucial thing to recall is that $\sqrt{x^2} = |x|$ -- at its core, the composition of the square root with the quadratic results in something that is essentially piecewise linear.</p>
<p>Consider $x^2 -3x + 4$; let's first express it as a transformation of $x^2$:</p>
<p>$$x^2-3x+4 = (x-\frac{3}{2})^2 + \frac{7}{4}$$</p>
<p>When we take the square root of this expression, and let $x$ get arbitrarily far away from the vertex of the parabola, the vertical shift will matter less and less. To see this, consider the graphs of $\sqrt{x^2-3x+4}$ and of $\sqrt{x^2-3x+10}$ (i.e., differing only in their constant):</p>
<p><a href="https://i.stack.imgur.com/dwnpp.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dwnpp.gif" alt="root-parabolas"></a></p>
<p>(<em>via Wolfram|Alpha</em>)</p>
<p>As we get further from the underlying vertex ($\frac{3}{2}$), the difference in constants stops mattering -- both look identical to the underlying absolute-value function, $|x-\frac{3}{2}|$.</p>
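<p>A quick numerical check (my addition, not part of the original answer) that the constant term stops mattering far from the vertex, using the two example quadratics above:</p>

```python
import math

f = lambda x: math.sqrt(x * x - 3 * x + 4)   # sqrt(x^2 - 3x + 4)
g = lambda x: math.sqrt(x * x - 3 * x + 10)  # same quadratic, larger constant

# The gap between the two graphs shrinks as x moves away from the vertex
for x in (10, 100, 1000):
    print(x, g(x) - f(x))
```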
<p>Turning to the problem at hand, expressing it like the function above we get:</p>
<p>\begin{align}\sqrt{x^2+ax}-\sqrt{x^2+bx} & =\sqrt{(x+\frac{a}{2})^2-\frac{a^2}{4}}-\sqrt{(x+\frac{b}{2})^2-\frac{b^2}{4}} \\\\ & \approx |x+\frac{a}{2}| - |x+\frac{b}{2}| \\\\& = \frac{b}{2}-\frac{a}{2}\end{align}</p>
<p>The last line follows because when $x$ gets very negative (specifically, for $x < \min \{ -\frac{a}{2}, -\frac{b}{2}\}$), both $x+\frac{a}{2}$ and $x+\frac{b}{2}$ will be negative, so they both flip signs when taking the absolute value.</p>
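<p>A quick numerical sanity check of the two asymptotes (my addition, using the arbitrary choice $a=3$, $b=1$ so that $a-b=2$):</p>

```python
import math

a, b = 3.0, 1.0  # any pair of positive reals with a - b = 2

def f(x):
    return math.sqrt(x * x + a * x) - math.sqrt(x * x + b * x)

print(f(1e9))   # close to  1: the asymptote as x -> +infinity
print(f(-1e9))  # close to -1: the asymptote as x -> -infinity
```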
|
2,389,645 | <p>According to Wolfram Alpha, $\lim_{x \to 0^{-}}\lfloor x \rfloor = -1$ and $\lfloor\lim_{x \to 0^{-}} x\rfloor = 0$. The first expression is obvious, but the second doesn't make sense to me. It should be $-1$ because, for example, $\lfloor -0.00001 \rfloor = -1$. My teacher also accepted Wolfram Alpha's result. I'm really confused about it.</p>
| Math Lover | 348,257 | <p>Hint: $\lim_{x\rightarrow 0^{-}}{x}=0$.</p>
<p>You first evaluate the limit and then take the floor value.</p>
|
2,389,645 | <p>According to Wolfram Alpha, $\lim_{x \to 0^{-}}\lfloor x \rfloor = -1$ and $\lfloor\lim_{x \to 0^{-}} x\rfloor = 0$. The first expression is obvious, but the second doesn't make sense to me. It should be $-1$ because, for example, $\lfloor -0.00001 \rfloor = -1$. My teacher also accepted Wolfram Alpha's result. I'm really confused about it.</p>
| Nathanael Skrepek | 423,961 | <p>This is due to the fact that the function <span class="math-container">$x \mapsto \lfloor x\rfloor$</span> isn't continuous.
You have to be aware of how a limit works. For the first term <span class="math-container">$\lim_{x\to 0-} \lfloor x\rfloor$</span> you take values that are negative and close to <span class="math-container">$0$</span>. In other words, for every <span class="math-container">$\epsilon>0$</span> you take <span class="math-container">$y \in (-\epsilon,0)$</span>, and for all these <span class="math-container">$y$</span> you have
<span class="math-container">$$
\lfloor y \rfloor = -1.
$$</span>
Hence, no matter how close you come to <span class="math-container">$0$</span>, the value of the floor is always <span class="math-container">$-1$</span>, which means precisely that the limit is also <span class="math-container">$-1$</span>.</p>
<p>On the other hand, we have <span class="math-container">$\lim_{x\to 0 -} x= 0$</span>, which is obvious. Now if you insert that into the floor brackets, you obtain
<span class="math-container">$$\lfloor\lim_{x\to 0 -} x\rfloor = \lfloor0\rfloor=0$$</span></p>
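<p>A short check with Python's <code>math.floor</code> (my addition), mirroring the two computations:</p>

```python
import math

# floor(x) for x approaching 0 from below is pinned at -1 ...
print([math.floor(x) for x in (-0.1, -0.001, -1e-9)])  # [-1, -1, -1]

# ... while the floor of the limit itself is 0
print(math.floor(0.0))  # 0
```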
|
984,232 | <p>We consider everything in the category of groups. It is known that monomorphisms are stable under pullback; that is, if
$$\begin{array}{ccc}
A_1 & \stackrel{f_1}{\longrightarrow} & A_2 \\
\downarrow{h} & & \downarrow{h'} \\
B_1 & \stackrel{g_1}{\longrightarrow} & B_2
\end{array}
$$ is a pullback, then $g_1$ being one-to-one implies that $f_1$ is also one-to-one. Now if we weaken the condition, suppose that the kernel of $g_1$ is known, what can we say about the kernel of $f_1$? More precisely, if there is a commutative diagram </p>
<p>$$\begin{array}{ccccccccc}
 & & B_0 & & A_1 &\stackrel{f_1}{\longrightarrow} & A_2\\
& & \parallel & &\downarrow{h}& &\downarrow{h'}\\
0 & \stackrel{}{\longrightarrow} &B_0 & \stackrel{g_0}{\longrightarrow} &B_1 & \stackrel{g_1}{\longrightarrow} & B_2 & \stackrel{}{\longrightarrow} & 0
\end{array}$$</p>
<p>where the last row is an exact sequence and $A_1$ is the pullback, can we complete an exact sequence in the first row? Or at least is there a natural map from $B_0$ to $A_1$ making the diagram commutative?</p>
| Andrew Hubery | 367,470 | <p>In a pullback diagram we always have <span class="math-container">$\ker(f_1)\cong\ker(g_1)$</span>.</p>
<p>For, we have the inclusion <span class="math-container">$\ker(g_1)\to B_1$</span> and the trivial map <span class="math-container">$\ker(g_1)\to A_2$</span>. By the universal property of the pullback, we obtain a unique map <span class="math-container">$\ker(g_1)\to A_1$</span>, whose image is then contained in <span class="math-container">$\ker(f_1)$</span>. This gives the required inverse to the induced map <span class="math-container">$\ker(f_1)\to\ker(g_1)$</span>.</p>
|
131,741 | <p>Take the following example <code>Dataset</code>:</p>
<pre><code>data = Table[Association["a" -> i, "b" -> i^2, "c" -> i^3], {i, 4}] // Dataset
</code></pre>
<p><img src="https://i.stack.imgur.com/PZSgO.png" alt="Mathematica graphics"></p>
<p>Picking out two of the three columns is done this way:</p>
<pre><code>data[All, {"a", "b"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/LG3jU.png" alt="Mathematica graphics"></p>
<p>Now instead of just returning the "a" and "b" columns I want to map the functions f and h to their elements, respectively, and still drop "c". Based on the previous result and the documentation of <code>Dataset</code> I hoped the following would do that:</p>
<pre><code>data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p><img src="https://i.stack.imgur.com/Wvwml.png" alt="Mathematica graphics"></p>
<p>As you can see, the behavior is not like the one before. Although the functions are mapped as desired, the unmentioned column "c" still remains in the data.</p>
<p>Do I really need one of the following (clumsy looking) alternatives</p>
<pre><code>data[All, {"a" -> f, "b" -> h}][All, {"a", "b"}]
data[Query[All, {"a", "b"}], {"a" -> f, "b" -> h}]
Query[All, {"a", "b"}]@data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p>to get: </p>
<p><img src="https://i.stack.imgur.com/q62un.png" alt="Mathematica graphics"></p>
<p>or is there a more elegant solution?</p>
| Mike Colacino | 18,550 | <p>I don't find @kglr's solution inelegant, but this is perhaps a little prettier:</p>
<pre><code>data[All, {"a" -> f, "b" -> h}] // KeyDrop["c"]
</code></pre>
|
4,282,006 | <p><strong>Evaluate the limit</strong></p>
<p><span class="math-container">$\lim_{x\rightarrow \infty}(\sqrt[3]{x^3+x^2}-x)$</span></p>
<p>I know that the limit is <span class="math-container">$1/3$</span> by looking at the graph of this function, but I struggle to show it algebraically.</p>
<p>Is there anyone who can help me out and maybe even provide a solution to this problem?</p>
| Antoine | 73,561 | <p>Use the equality <span class="math-container">$a^3 - b^3 = (a - b)(a^2 + a b + b^2)$</span> for <span class="math-container">$a = \sqrt[3]{x^3 + x^2}$</span> and <span class="math-container">$b = x$</span>. Convert your expression to <span class="math-container">$$\frac{(a - b)(a^2 + a b + b^2)}{a^2 + a b + b^2}$$</span>
and simplify further.</p>
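<p>Carrying the hint through to the end (my own completion, not part of the original answer):</p>

```latex
\begin{align*}
\sqrt[3]{x^3+x^2}-x
  &= \frac{(x^3+x^2)-x^3}{(x^3+x^2)^{2/3}+x\,(x^3+x^2)^{1/3}+x^2} \\
  &= \frac{x^2}{x^2\left[(1+\tfrac1x)^{2/3}+(1+\tfrac1x)^{1/3}+1\right]} \\
  &\xrightarrow[x\to\infty]{}\ \frac{1}{1+1+1}=\frac{1}{3}.
\end{align*}
```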
|