| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,049,808 | <p>Let $u_0 = 1$ and $u_{n+1} = \frac{u_n}{1+u_n^2}$ for all $n \in \mathbb{N}$.</p>
<p>I can show that $u_n \sim \frac{1}{\sqrt{2n}}$, but I would like one more term in the asymptotic development, something like $u_n = \frac{1}{\sqrt{2n}}+\frac{\alpha}{n\sqrt{n}} + o\bigl(\frac{1}{n^{3/2}}\bigr)$.</p>
<p>Here is the outline of my proof of $u_n \sim \frac{1}{\sqrt{2n}}$: </p>
<ul>
<li>$(u_n)$ is decreasing, and bounded from below by $0$, hence converges.</li>
<li>The limit $\ell$ satisfies $\ell = \frac{\ell}{1+\ell^2}$, hence $\ell = 0$.</li>
<li>A computation gives $v_n = u_{n+1}^{-2} - u_n^{-2} \to 2$.</li>
<li>Using Cesàro lemma, $\frac{1}{n} \sum_{k=0}^{n-1} v_k \to 2$.</li>
<li>Hence $\frac{1}{n} u_n^{-2} \to 2$.</li>
</ul>
| G.Kós | 141,614 | <p>Let $a_n=\dfrac1{u_n^2}$. Then
$$
a_{n+1}-a_n = \left(\frac{1+u_n^2}{u_n}\right)^2 - \frac1{u_n^2} = 2+u_n^2 = 2+\frac1{a_n}.
$$
From this and $a_1=4$ we can derive a chain of successively sharper estimates for $n\ge2$:
\begin{align*}
a_n &\ge 2n; \\
a_n &\le 2n + \sum_{k=2}^{n-1}\frac1{2k}
< 2n+\frac12\log n; \\
a_n &\ge 2n + \sum_{k=2}^{n-1}\frac1{2k+\frac12\log k}
> 2n+\int_2^n \frac{dx}{2x+\frac12\log x}
\\ &> 2n+\int_2^n \frac{dx}{2x}
- \int_2^n \frac{\frac12\log x}{2x(2x+\frac12\log x)}dx
= 2n+\frac12\log n - \mathcal{O}(1).
\end{align*}</p>
<p>Then
$$
u_n = a_n^{-1/2} =
\frac1{\sqrt{2n}} \left(1+\frac{\log n}{4n}+\mathcal{O}\bigg(\frac1n\bigg)\right)^{-1/2} \\
= \frac1{\sqrt{2n}} \left(1-\frac12 \cdot \frac{\log n}{4n} + \mathcal{O}\bigg(\frac1n\bigg) \right) =
\frac1{\sqrt{2n}} - \frac1{8\sqrt2}\cdot \frac{\log n}{n^{3/2}} + \mathcal{O}\left(\frac1{n^{3/2}}\right).
$$</p>
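<p>A quick numerical sanity check of the estimate $a_n = 2n + \frac12\log n + \mathcal{O}(1)$ (this Python snippet is an addition, not part of the original answer):</p>

```python
import math

# Iterate u_{n+1} = u_n / (1 + u_n^2) from u_0 = 1 and compare
# a_N = 1/u_N^2 against the asymptotic 2N + (1/2) log N.
u = 1.0
N = 100_000
for _ in range(N):
    u = u / (1 + u * u)

a_N = 1.0 / (u * u)
residual = a_N - 2 * N - 0.5 * math.log(N)  # should stay bounded: the O(1) term
leading = u * math.sqrt(2 * N)              # should be close to 1
```

<p>For $N = 10^5$ the residual settles near a small positive constant and $u_N\sqrt{2N}$ is very close to $1$, consistent with the expansion above.</p>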
|
674,620 | <p>What is the slope of the tangent line to the function $$g(x)=x^2 \frac{\cos x}{1+x^3}$$ when $x=\pi/2$?
When $x=a$?</p>
| Semsem | 117,040 | <p>First find the derivative
$$g'(x)= \{\frac{x^2\cos x}{1+x^3}\}'
\\=\frac{(1+x^3)(2x\cos x-x^2\sin x)-(x^2\cos x)(3x^2)}{(1+x^3)^2}$$
Second, the slope is $m=g'(\frac{\pi}{2})$</p>
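<p>As a sanity check (my addition, not part of the original answer), the quotient-rule formula can be compared against a central finite difference:</p>

```python
import math

def g(x):
    return x**2 * math.cos(x) / (1 + x**3)

def g_prime(x):
    # the quotient-rule derivative from the answer
    num = (1 + x**3) * (2*x*math.cos(x) - x**2*math.sin(x)) - (x**2*math.cos(x)) * (3*x**2)
    return num / (1 + x**3)**2

a = math.pi / 2
h = 1e-6
central = (g(a + h) - g(a - h)) / (2 * h)  # numerical slope at x = a
m = g_prime(a)
```

<p>At $x=\pi/2$ the cosine terms vanish, so the formula collapses to $m=-\frac{(\pi/2)^2}{1+(\pi/2)^3}$, and both values agree to many decimal places.</p>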
|
1,549,843 | <p>Prove the following limits:</p>
<p>$$\lim_{x \rightarrow 0^+} x^x = 1$$
$$\lim_{x \rightarrow 0^+} x^{\frac{1}{x}}=0$$
$$\lim_{x \rightarrow \infty} x^{\frac{1}{x}}=1$$</p>
<p>They are not that hard using l'Hospital or the Sandwich theorem. But I am curious whether they can be solved with only basic knowledge of limits. I have been trying to relate them to some famous limits, like the definition of $e$, but without luck.
Thank you for your help.</p>
| Hagen von Eitzen | 39,174 | <p>Using $a^b=\exp(b\ln a)$ as definition of exponentiation with irrational exponents, it is natural to take logarithms; then the claims are equivalent to
$$\lim_{x\to 0^+}x\ln x=0,\qquad \lim_{x\to 0^+}\frac 1x\ln x=-\infty, \qquad \lim_{x\to \infty}\frac 1x\ln x=0.$$
Substituting $x=e^{-y}$ for the first two and $x=e^y$ for the last (so that $y\to+\infty$ in all cases), they are equivalent to
$$\lim_{y\to+\infty}\frac{-y}{e^y} =0,\qquad \lim_{y\to+\infty }(-ye^y)=-\infty,\qquad \lim_{y\to+\infty}\frac{y}{e^y}=0.$$
This makes the middle one clear and the other two equivalent to the fact that the exponential has superpolynomial (or at least superlinear) growth.
If not already known, this follows from the general inequality $e^t\ge 1+t$, from which we find for $t\ge -1$ that $e^t=(e^{t/2})^2\ge (1+t/2)^2=1+t+\frac14t^2\ge\frac14t^2$.</p>
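<p>A quick numerical illustration (my addition) of the three limits and of the inequality $e^t \ge 1+t$ used above:</p>

```python
import math

x_small, x_big = 1e-6, 1e6

v1 = x_small ** x_small        # x^x as x -> 0+, should be near 1
v2 = x_small ** (1 / x_small)  # x^(1/x) as x -> 0+, underflows to 0.0
v3 = x_big ** (1 / x_big)      # x^(1/x) as x -> infinity, should be near 1

# the key inequality e^t >= 1 + t at a few sample points
ineq_ok = all(math.exp(t) >= 1 + t for t in (-0.9, -0.5, 0.0, 1.0, 5.0))
```
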
|
3,917,283 | <p>I have the following : <span class="math-container">$f_n(x)=\frac{nx}{1+nx^3} \quad n=1,2,\dots \quad$</span> and <span class="math-container">$f(x)=\lim_{n \to \infty} f_n(x) ,$</span></p>
<p>and I have done the following : <span class="math-container">$$|f_n(x) -f(x)| = \biggl|\frac{nx}{1+nx^3}-\frac{1}{x^2}\biggr| = \biggl|\frac{1}{x^2(1+nx^3)}\biggr |$$</span> where <span class="math-container">$\frac{1}{x^2}$</span> is the pointwise limit of the sequence of functions (for <span class="math-container">$x \neq 0$</span>). I want <span class="math-container">$\Bigl|\frac{1}{x^2(1+nx^3)}\Bigr|< \epsilon$</span>, but I don't know how to choose <span class="math-container">$N$</span> in order to fully prove whether or not the convergence is uniform. Can somebody help me proceed with the proof?</p>
| mechanodroid | 144,766 | <p>Assume that <span class="math-container">$f_n \to f$</span> uniformly on <span class="math-container">$\langle 0,1\rangle$</span>. For <span class="math-container">$\varepsilon = 1$</span> there exists <span class="math-container">$n_0 \in \Bbb{N}$</span> such that
<span class="math-container">$$n \ge n_0 \implies |f(x)-f_n(x)| < 1.$$</span></p>
<p>Now take <span class="math-container">$n := n_0 + 1$</span> and consider
<span class="math-container">$$\left|f\left(\frac1n\right) - f_n\left(\frac1n\right)\right| = n^2 - \frac1{1+\frac1{n^2}} \ge n^2-1 \ge n_0^2 \ge 1$$</span>
which is a contradiction. This implies that the convergence is not uniform on <span class="math-container">$\langle 0,1\rangle$</span>.</p>
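<p>One can see the obstruction numerically as well (my addition): the gap between $f_n$ and the pointwise limit $f(x)=1/x^2$ at the moving point $x=1/n$ grows instead of shrinking.</p>

```python
def f_n(n, x):
    return n * x / (1 + n * x**3)

def f(x):
    return 1 / x**2  # pointwise limit on (0, 1)

# gap at x = 1/n, the point used in the answer
gaps = [abs(f(1.0/n) - f_n(n, 1.0/n)) for n in (2, 5, 10, 50)]
```
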
|
2,712,318 | <p>Let $a$ be the real root of the equation $x^3+x+1=0$</p>
<p>Calculate $$\sqrt[\leftroot{-2}\uproot{2}3]{{(3a^{2}-2a+2)(3a^{2}+2a)}}+a^{2}$$</p>
<p>The correct answer should be $ 1 $. I've tried to write $a^3$ as $-a-1$ but that didn't help much, I guess there is some trick here :s</p>
| hamam_Abdallah | 369,188 | <p>Expanding the product under the cube root gives</p>
<p>$$A=9a^4+6a^3-6a^3-4a^2+6a^2+4a$$
$$=a (9a^3+2a+4)=a (9 (-a-1)+2a+4)$$
$$=-a (7a+5).$$</p>
<p>If the claimed value $1$ is correct, this must equal
$$B=(1-a^2)^3=1-3a^2+3a^4-a^6.$$</p>
<p>Using $a^6=(a^3)^2=(a+1)^2$, the difference is
$$B-A=1+4a^2+5a+3a^4-(a+1)^2$$
$$=3a^2+3a+3a^4=3a (a^3+a+1)=0.$$</p>
<p>So the claimed value is correct.</p>
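<p>A numerical check (my addition, not part of the original answer): find the real root of $x^3+x+1=0$ by bisection and evaluate the original expression.</p>

```python
import math

def p(x):
    return x**3 + x + 1

# bisection for the real root in (-1, 0): p(-1) = -1 < 0 < 1 = p(0)
lo, hi = -1.0, 0.0
for _ in range(80):
    mid = (lo + hi) / 2
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
a = (lo + hi) / 2

inner = (3*a**2 - 2*a + 2) * (3*a**2 + 2*a)       # should equal -5a - 7a^2
cbrt = math.copysign(abs(inner) ** (1/3), inner)  # real cube root
value = cbrt + a**2                               # should equal 1
```
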
|
2,712,318 | <p>Let $a$ be the real root of the equation $x^3+x+1=0$</p>
<p>Calculate $$\sqrt[\leftroot{-2}\uproot{2}3]{{(3a^{2}-2a+2)(3a^{2}+2a)}}+a^{2}$$</p>
<p>The correct answer should be $ 1 $. I've tried to write $a^3$ as $-a-1$ but that didn't help much, I guess there is some trick here :s</p>
| mz71 | 301,435 | <p>Maybe there is a trick to it, but good ol' factoring can work here.</p>
<p>It is easy to check (using the identity $a^3=-a-1$) that
$$ (3a^2-2a+2)(3a^2+2a) = -5a-7a^2 $$
Let $\sqrt[3]{-5a-7a^2}+a^2 = y$. Given that $a\in \mathbb{R}$, we know that also $y\in \mathbb{R}$, and we must solve for it.
$$y-a^2 = \sqrt[3]{-5a-7a^2} \\ (y-a^2)^3 = -5a-7a^2 \\ y^3 - 3y^2a^2 + 3ya^4-a^6 = -5a-7a^2 \\ y^3-3y^2a^2+3ya(-a-1)-(-a-1)^2=-5a-7a^2 \\ y^3-3y^2a^2-3ya^2-3ya+6a^2+3a-1=0$$ So many threes, only the $6$ stands out. But if we use $6=3+3$: $$ (y^3-1)+(3a^2-3y^2a^2)+(3a^2+3a-3ya^2-3ya)=0 \\ (y^3-1)-3a^2(y^2-1)-(3a^2+3a)(y-1)=0$$ As one can see, it is possible to factor out $(y-1)$, thereby concluding that $y=1$ is a solution. </p>
|
4,432,047 | <p><span class="math-container">$3\times 2$</span> is <span class="math-container">$3+3$</span> or <span class="math-container">$2+2+2$</span>. We know both are correct as multiplication is commutative for whole numbers. But which one is <strong>mathematically</strong> accurate?</p>
| Mark Bennet | 2,906 | <p>The answer to this question depends on context. Strictly speaking, you also need addition to be associative even to know that <span class="math-container">$2+2+2$</span> is well-defined. Addition is usually taken to be a binary operation, and if you want to be very strict you should be expressing it as such.</p>
<p>Both answers are arithmetically correct. Any useful definition will have both of them (or equivalent statements) as consequences.</p>
<p>The foundational question is not whether these statements are correct or true, but whether there are natural axioms from which they flow as consequences. If we are intending to model arithmetic, we would normally say that the axioms were flawed if they did not imply both statements (or equivalents with particular notations for functions and brackets and the like).</p>
<p>As indicated above there are notational issues as well as foundational ones. In some definitions, because addition is binary, <span class="math-container">$2+2+2$</span> would not be counted as a well-formed expression. There are good, as well as bad, notational conventions. Actually defining the conventions (notations) by which apparently natural arithmetical expressions and equations are validly written is somewhat trickier than first appears. But when it is done, the informal expressions we use can generally be seen to be equivalent to valid expressions in appropriate formal systems.</p>
|
80,452 | <p>When can the curvature operator of a Riemannian manifold (M,g) be diagonalized by a basis of the following form</p>
<p>'{${E_i\wedge E_j }$}' where '{${E_i}$}' is an orthonormal basis of the tangent space? If the manifold is three dimensional then it is always possible. But what about higher dimensional cases?</p>
| Thomas Richard | 8,887 | <p>I am not aware of any systematic treatment of this question. However there are two standard examples.</p>
<p>1) If $(M,g)$ is a hypersurface in a constant curvature manifold, then its curvature operator is given by $R=A\wedge A-kI$ where $I$ is the identity and $A$ is the second fundamental form. Then if $e_i$ diagonalise $A$, $e_i\wedge e_j$ diagonalise $R$.</p>
<p>2) Another example is rotationally symmetric metrics on $\mathbb{R}\times S^n$; the proof can be found in Riemannian Geometry by Petersen.</p>
|
80,452 | <p>When can the curvature operator of a Riemannian manifold (M,g) be diagonalized by a basis of the following form</p>
<p>'{${E_i\wedge E_j }$}' where '{${E_i}$}' is an orthonormal basis of the tangent space? If the manifold is three dimensional then it is always possible. But what about higher dimensional cases?</p>
| Robert Bryant | 13,972 | <p>There are still a few interesting things to say about this question, so I thought I'd add some comments.</p>
<p>In one sense, the answer to the question of when a Riemannian metric has an orthonormal coframing that diagonalizes the curvature in the manner requested by the OP is an algebraic problem: As is well-known, the space of curvature tensors (when regarded as quadratic forms on $\Lambda^2$ that satisfy the first Bianchi identity) has dimension $D_n = \tfrac{1}{12}n^2(n^2{-}1)$, while the set of those diagonalizable in some orthonormal coframe is the $\mathrm{O}(n)$-orbit of a linear subspace of dimension $\tfrac{1}{2}n(n{-}1)$, so (when $n\ge 3$) it is a cone $\mathcal{R}_n$ of dimension $n(n{-}1)$. Thus, a Riemannian metric will have to satisfy a set of at least
$$
R_n = \tfrac{1}{12}n^2(n^2{-}1) - n(n{-}1) = \tfrac{1}{12}n(n{-}1)(n{-}3)(n{+}4)
$$
polynomial relations on its curvature in order for such a diagonalization to be possible at every point. Writing out a set of generators for these relations is not likely to be easy and probably won't be enlightening, even for $n=4$, which is when it first becomes nontrivial. Moreover, there is no guarantee that this ideal $\mathcal{I}_n$ of polynomial relations is generated by only $R_n$ polynomials or that you won't still have to impose some inequalities to make sure that the curvature is diagonalizable by an element in $\mathrm{O}(n)$ rather than by an element of $\mathrm{O}(n,\mathbb{C})$ that doesn't lie in $\mathrm{O}(n)$. However, this approach would, in theory, give the answer to the OP's literal question.</p>
<p>On the other hand, one might want to interpret the question as asking how one could 'generate' all of the metrics that satisfy this diagonalizability property, at least locally. This is a more interesting (and more challenging) problem. Willie and Thomas have each given examples of classes of such metrics that essentially depend on one function of $n$ variables: Willie cited the conformally flat metrics, which are locally of the form $e^u g_0$ where $g_0$ is the standard metric on $\mathbb{R}^n$, and Thomas cited the induced metrics on hypersurfaces in a space form of dimension $n{+}1$, each of which, locally, can be described as the graph of one function of $n$ variables. The interesting question is whether these are, themselves, special cases of some more general class of metrics with the desired property. Might there be a class of examples that depend on more than one arbitrary function of $n$ variables? Another interesting question is whether their examples 'reach' all the curvature tensors that satisfy the relations $\mathcal{I}_n$ and, if not, whether there are other examples that do.</p>
<p>This latter question is easier to answer than the former. It is easy to see, just by an algebraic count, that neither the conformally flat metrics nor those induced on hypersurfaces in space forms can actually `reach' all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$. (The two sets of orbits that they reach overlap, and they are distinct, proper closed subsets of $\mathcal{R}_n$.) On the other hand, examples provided by É. Cartan of nondegenerate submanifolds of dimension $n$ in $\mathbb{R}^{2n}$ that have flat normal bundle turn out to have their curvature tensors in $\mathcal{R}_n$ and, using these, one can reach an open subset of the orbits in $\mathcal{R}_n$. Now, Cartan's examples depend locally on $n^2{-}n$ arbitrary functions of <em>two</em> variables (not $n$ variables), and it turns out that they satisfy many more differential equations (of higher order) than just the $\mathcal{I}_n$. For example, in Cartan's examples, the diagonalizing coframing $\omega=(\omega_i)$ turns out to be integrable, i.e., $\omega_i\wedge d\omega_i = 0$ for all $i$, so that the metric itself can be diagonalized in a local coordinate chart and thus is locally of the form
$$
g = e^{2f_1}\ {dx_1}^2 +e^{2f_2}\ {dx_2}^2 + \cdots + e^{2f_n}\ {dx_n}^2.
$$
Meanwhile, the condition for a metric in this form to have its curvature tensor be diagonal with respect to the coframing $\omega = (\omega_i) = (e^{f_i}\ dx_i)$ and, hence, take values in $\mathcal{R}_n$ turns out to be an involutive system of second order PDE for the functions $f_i$ whose general local solution depends on $n^2{-}n$ arbitrary functions of two variables. These turn out to be slightly more general than the ones that arise as Cartan's examples, and, using solutions of this type, one can reach all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$.</p>
<p>However, the question of how to 'generate' the 'general' metric whose curvature tensor takes values in $\mathcal{R}_n$ for $n\ge 4$ seems to be a very difficult problem. It is an overdetermined system for the metric that is not involutive, and computing its first two prolongations, even in the $n=4$ case, yields a system that is extremely algebraically complicated and still not involutive. Thus, I do not know (and I believe that it is not known) whether the general local solution of this problem depends (modulo diffeomorphism) on more than one arbitrary function of $n$ variables.</p>
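<p>The dimension count at the start of this answer can be verified for small $n$ (this snippet is my addition):</p>

```python
def D(n):
    # dimension of the space of algebraic curvature tensors: n^2(n^2-1)/12
    return n**2 * (n**2 - 1) // 12

def R(n):
    # number of relations quoted in the answer: D_n - n(n-1)
    return D(n) - n * (n - 1)

# the closed form n(n-1)(n-3)(n+4)/12 should agree with the direct count
factored = [n * (n - 1) * (n - 3) * (n + 4) // 12 for n in range(3, 10)]
direct = [R(n) for n in range(3, 10)]
```

<p>In particular $R_3 = 0$ (every $3$-dimensional curvature operator is diagonalizable this way, as the question notes) and $R_4 = 8$ is the first nontrivial case.</p>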
|
13,616 | <p>Are there enough interesting results that hold for general locally ringed spaces for a book to have been written? If there are, do you know of a book? If you do, please post it, one per answer and a short description.</p>
<p>I think that the tags are relevant, but feel free to change them.</p>
<p>Also, have there been any attempts to classify locally ringed spaces? Certainly, two large classes of locally ringed spaces are schemes and manifolds, but this still doesn't cover all locally ringed spaces.</p>
| Marc Nieper-Wißkirchen | 1,841 | <p>A locally ringed space is nothing but a local ring object (in the internal sense) in a category of sheaves over a topological space, which happens to be an example of a topos.</p>
<p>So in order to study general properties of locally ringed spaces, one could proceed by studying properties of local ring objects in arbitrary topoi. As any proposition (in the internal language) on a local ring (a commutative ring with $0 \neq 1$ and $s + t = 1 \implies s \in R^\times \lor t \in R^\times$) is true for any topos if and only if it can be derived intuitionistically (which is more or less the same as constructively), the original questions seems to boil down to:</p>
<p>"What are the constructively valid properties and constructions for a local ring?"</p>
<p>For example, the construction of Kähler differentials makes also sense constructively, which immediately implies (by the above reasoning) that every morphism $X \to Y$ of locally ringed spaces has an associated module $\Omega_{X/Y}$ of Kähler differentials.</p>
<p>And there is a lot of literature on constructive algebra. The book of Mines, Richman and Ruitenburg as well as many of the preprints on Fred Richman's homepage are a start. Some material can also be found in "Sheaves in Geometry and Logic" by Mac Lane and Moerdijk.</p>
|
4,352,129 | <p>I am struggling to compute the power series expansion of <span class="math-container">$$f(z) = \frac{1}{2z+5}$$</span> about <span class="math-container">$z=0$</span>, where <span class="math-container">$f$</span> is a complex function. I tried comparing it to the geometric series as follows,<span class="math-container">$$ f(z) = \frac{1}{2z+5} = \frac{1}{1-\omega} = \sum_{n=0}^\infty\omega^n $$</span> which gives us <span class="math-container">$$ 2z+ 5 = 1 - \omega \implies \omega = -2z - 4 \implies f(z) = \sum_{n=0}^\infty(-2z - 4)^n = \sum_{n=0}^\infty(-2)^n(z+2)^n$$</span> which is clearly not even an expansion about <span class="math-container">$z=0$</span>. This is the wrong answer as well according to my textbook which stated that the answer was <span class="math-container">$$\sum_{n=0}^\infty\frac{2^n}{5^{n+1}}(-1)^nz^n$$</span>
could someone please explain where I have gone wrong.</p>
| Riemann'sPointyNose | 794,524 | <p>You've gone wrong because
<span class="math-container">$$
\frac{1}{1-\omega}=\sum_{n\geq 0}\omega^n
$$</span>
assumes that <span class="math-container">${|\omega|<1}$</span>. In your case, this would mean <span class="math-container">${|2z+4|<1}$</span>, i.e. <span class="math-container">${|z+2|<\frac{1}{2}}$</span>. You have calculated the series about <span class="math-container">${z=-2}$</span> with radius of convergence <span class="math-container">${\frac{1}{2}}$</span>.</p>
<p>Instead, consider
<span class="math-container">$$
\frac{1}{2z+5} = \frac{1}{5}\left(\frac{1}{\frac{2}{5}z+1}\right) = \frac{1}{5}\left(\frac{1}{1-\left(\frac{-2}{5}z\right)}\right)
$$</span>
can you take it from here?</p>
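<p>The textbook series can be checked numerically against $\frac{1}{2z+5}$ inside the disc $|z|<\frac52$ (this snippet is my addition, not part of the original answer):</p>

```python
def partial_sum(z, terms=60):
    # partial sum of the textbook series: sum_n (2^n / 5^(n+1)) (-1)^n z^n
    return sum((2**n / 5**(n + 1)) * (-1)**n * z**n for n in range(terms))

z = 1.0                  # a point inside the disc of convergence |z| < 5/2
approx = partial_sum(z)
exact = 1 / (2*z + 5)    # = 1/7
```
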
|
2,433,614 | <p>The solution to the initial-value problem $y' = y^{2} + 1$ with $y(0)=1$ is $y = \tan(x + \frac{\pi}{4})$. I would like to show that this is the correct solution in a way that is analogous to my solution to the differential equation $y' = y - 12$.</p>
<p><strong>Solution to Differential Equation</strong></p>
<p>If $y' = y - 12$,
\begin{equation*}
\frac{y'}{y - 12} = 1 ,
\end{equation*}
\begin{equation*}
\left(\ln(y - 12)\right)' = 1 ,
\end{equation*}
\begin{equation*}
\ln(y - 12) = t + C
\end{equation*}
for some constant $C$. If $C' = e^{C}$,
\begin{equation*}
y = C' e^{t} + 12 .
\end{equation*}</p>
| anomaly | 156,999 | <p>Start out by defining $\sin$ and $\cos$ to be the unique solutions to the differential equation $y'' = -y$ with the boundary conditions $y(0) = 0, y'(0) = 1$ and $y(0) = 1, y'(0) = 0$, respectively. (It's not difficult to show that such functions exist and are unique, and one can verify that the usual Taylor series satisfy the equation.) Standard uniform convergence arguments then show that $\sin$ and $\cos$ are smooth on $\mathbb{R}$. Our goal is then to show that the map $f:[0, 2\pi] \to \mathbb{R}^2$ given by $f(t) = (\cos t, \sin t)$ parametrizes the unit circle $S^1$ (as a closed curve). </p>
<p>First, consider the function $g(t) = \cos^2 t + \sin^2 t$. The derivative $\varphi = \frac{d}{dt} \sin t$ also satisfies $\varphi'' = -\varphi$ with $\varphi(0) = 1$ and
$$\varphi'(0) = \frac{d^2}{dt^2} \sin t\big\vert_{t=0} = -\sin 0 = 0$$
by the definition above. Hence $\varphi(t) = \cos t$; that is, $\frac{d}{dt} \sin t = \cos t$. Applying the same argument to $\cos t$ gives $\frac{d}{dt} \cos t= - \sin t$. Hence
\begin{align*}
g'(t) = 2\cos t \sin t - 2\sin t \cos t = 0.
\end{align*}
Since $g(0) = 1$ by the definition above, we have $g\equiv 1$ everywhere.</p>
<p>Thus we've shown that the image of $f$ lies on $S^1$. Suppose $f(t) = f(t')$ with $t' \not = t$. Then $\sin t = \sin t'$ and $\cos t = \cos t'$, so the smooth function $\varphi(x) = \sin (x + t) - \sin (x + t')$ satisfies $\varphi'' = -\varphi$ with $\varphi(0) = \varphi'(0) = 0$. The solution of that second-order equation with prescribed boundary conditions is unique, so $\varphi = 0$; that is, $\sin$ has period $t' - t$. Such a $t' \not = t$ must exist, since $|f'| = 1$ everywhere. It follows that there exists some constant $N$ such that the map $f:[0, N] \to S^1$ is onto and injective except that $f(0) = f(N)$.</p>
<p>The only thing left to show is that $N = 2\pi$. That's a bit trickier; it depends on what definition you want to take for $\pi$. Continuing with the calculus theme, let's define to be $\pi$ to be half the circumference of the unit circle. Then the usual formula for arc length gives $N = 2\pi$, since $f'(t) = (-\sin t, \cos t)$ has $|f'(t)| = 1$ everywhere.</p>
<p>(We also haven't discussed the angle sum formulas. They can be derived by recasting them in terms of the exponential function, defined to be the unique solution to the differential equation $y' = y$ with boundary condition $y(0) = 1$. Since $y(t + t_0)/y(t_0)$ is also a solution for any $t_0$, we must $y(t + t') = y(t)y(t')$ for all $t, t'$. We could also have worked from $\exp t$ at the beginning, but that doesn't seem to be what you were going for.)</p>
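<p>The construction above can be illustrated numerically (my addition, a sketch only): integrating $y''=-y$ with a standard RK4 scheme from the boundary conditions defining $\sin$, the trajectory $(y, y')$ stays on the unit circle and closes up after $2\pi$.</p>

```python
import math

def rk4_step(y, v, h):
    # one RK4 step for the first-order system y' = v, v' = -y
    k1y, k1v = v, -y
    k2y, k2v = v + h/2*k1v, -(y + h/2*k1y)
    k3y, k3v = v + h/2*k2v, -(y + h/2*k2y)
    k4y, k4v = v + h*k3v, -(y + h*k3y)
    return (y + h/6*(k1y + 2*k2y + 2*k3y + k4y),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

n_steps = 10_000
h = 2 * math.pi / n_steps
y, v = 0.0, 1.0            # y(0) = 0, y'(0) = 1: the conditions defining sin
drift = 0.0
for _ in range(n_steps):
    y, v = rk4_step(y, v, h)
    drift = max(drift, abs(y*y + v*v - 1))  # g(t) = sin^2 + cos^2 should stay 1
```
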
|
927,815 | <p>Find an equation of the plane that passes through the points $A(0, 1, 0)$, $B(1, 0, 0)$ and $C(0, 0, 1)$.</p>
| Novice | 97,093 | <p>Apologies, I viewed the question wrongly at first.</p>
<p>Find the vectors $\vec{AB}$ and $\vec{BC}$:</p>
<p>$\vec{AB}=\left(
\begin{array}{c}
1\\
-1\\
0\\
\end{array}
\right)$</p>
<p>$\vec{BC}=\left(
\begin{array}{c}
-1\\
0\\
1\\
\end{array}
\right)$</p>
<p>Take the cross product; it (or its negative) is a normal of the plane.</p>
<p>Normal $n=\left(
\begin{array}{c}
1\\
1\\
1\\
\end{array}
\right)$</p>
<p>The equation of the plane will be:</p>
<p>$\mathbf{r}\cdot\left(
\begin{array}{c}
1\\
1\\
1\\
\end{array}
\right)=1$
or $x+y+z=1$</p>
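<p>A quick Python check of the computation (my addition, not part of the original answer):</p>

```python
def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

A, B, C = (0, 1, 0), (1, 0, 0), (0, 0, 1)
n = cross(sub(B, A), sub(C, B))          # a normal vector of the plane
d = sum(ni * ai for ni, ai in zip(n, A))

# every one of A, B, C must satisfy n . P = d
on_plane = all(sum(ni * pi for ni, pi in zip(n, P)) == d for P in (A, B, C))
```

<p>Here the cross product comes out as $(-1,-1,-1)$; its negative $(1,1,1)$ is the normal used in the answer, and both give the same plane $x+y+z=1$.</p>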
|
3,827,853 | <p>I am trying to compute the singular cohomology of <span class="math-container">$\mathbb{N}\subset\mathbb{R}$</span>. I have that <span class="math-container">$C_0(\mathbb{N})\cong C_1(\mathbb{N})\cong\mathbb{Z}^{\oplus\mathbb{N}}$</span>, where I mean <span class="math-container">$\mathbb{Z}^{\oplus\mathbb{N}}=\{(n_0,n_1,n_2,...)|n_i\in\mathbb{Z},n_i=0\text{ for all but finitely many }i\}$</span>. Moreover, I know that <span class="math-container">$d_1=d_0=0$</span>.</p>
<p>I know that that Hom<span class="math-container">$(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})\cong C^0(\mathbb{N})\cong C^1(\mathbb{N})$</span>.</p>
<p>I am fairly sure that Hom<span class="math-container">$(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})\cong \mathbb{Z}^\mathbb{N}$</span>, where <span class="math-container">$\mathbb{Z}^\mathbb{N}=\{(n_0,n_1,n_2,\ldots)\mid n_i\in\mathbb{Z}\}$</span>. I am, however, unsure how to prove this.</p>
| Arturo Magidin | 742 | <p>This gives the same answer as the one posted, but invoking categorical arguments.</p>
<p>Since the direct sum is the coproduct, and maps <em>from</em> a coproduct correspond to families of maps from the constituents, each morphism <span class="math-container">$f\colon \oplus_{i\in I}A_i\to B$</span> corresponds to a family of morphism <span class="math-container">$\{f_i\colon A_i\to B\}_{i\in I}$</span>. That is, there is a natural isomorphism (induced by the universal property of the direct sum)
<span class="math-container">$$\mathrm{Hom}(\oplus_{i\in I}A_i,B)\cong \prod_{i\in I}\mathrm{Hom}(A_i,B).$$</span>
(Similarly, maps into the product correspond to families of maps into the factors).
For <span class="math-container">$A_i=B=\mathbb{Z}$</span>, each <span class="math-container">$\mathrm{Hom}(A_i,B)\cong\mathbb{Z}$</span>, which gives
<span class="math-container">$$\mathrm{Hom}(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})\cong \prod_{i\in\mathbb{N}}\mathbb{Z}= \mathbb{Z}^{\mathbb{N}}.$$</span></p>
|
975,207 | <p><strong>Question:</strong></p>
<blockquote>
<p>Solve the following system for $a,b,c\in \mathbb{R}$:
$$\begin{cases}
b^2-6=2\sqrt{2a+6}\\
c^2-6=2\sqrt{2b+6}\\
a^2-6=2\sqrt{2c+6}
\end{cases}$$</p>
</blockquote>
<p>I found the following:$$ (b^2-6)^2=4(2a+6)$$
$$(c^2-6)^2=4(2b+6)$$
$$(a^2-6)^2=4(2c+6)$$
Then maybe $a=b=c$ is one case.</p>
<p>Thank you.</p>
| HK Lee | 37,116 | <p>Let $$ 3T:=a^2+b^2+c^2-18 = 2\sqrt{2a+6} +
2\sqrt{2b+6} + 2\sqrt{2c+6} $$</p>
<p>If $$ a= {\rm max}\ \{ a,b,c\},\ b
= {\rm min}\ \{ a,b,c\},\ a>b$$ then
$$T>b^2-6=2\sqrt{2a+6} > T$$</p>
<p>Contradiction.</p>
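<p>(Added note, not part of the original answer.) Once $a=b=c$ is forced, the common value solves $a^2-6=2\sqrt{2a+6}$; imposing $a^2=2a+6$, i.e. $a=1+\sqrt7$, satisfies it, which can be checked numerically:</p>

```python
import math

a = 1 + math.sqrt(7)          # candidate common value: root of a^2 - 2a - 6 = 0
lhs = a**2 - 6                # left-hand side of each equation when a = b = c
rhs = 2 * math.sqrt(2*a + 6)  # right-hand side
```
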
|
975,207 | <p><strong>Question:</strong></p>
<blockquote>
<p>Solve the following system for $a,b,c\in \mathbb{R}$:
$$\begin{cases}
b^2-6=2\sqrt{2a+6}\\
c^2-6=2\sqrt{2b+6}\\
a^2-6=2\sqrt{2c+6}
\end{cases}$$</p>
</blockquote>
<p>I found the following:$$ (b^2-6)^2=4(2a+6)$$
$$(c^2-6)^2=4(2b+6)$$
$$(a^2-6)^2=4(2c+6)$$
Then maybe $a=b=c$ is one case.</p>
<p>Thank you.</p>
| Arian | 172,588 | <p>$$(b^2-6)^2-(c^2-6)^2=8(a-b)\Leftrightarrow (b^2-c^2)(b^2+c^2-12)=8(a-b)$$
$$(b^2-6)^2-(a^2-6)^2=8(a-c)\Leftrightarrow (b^2-a^2)(b^2+a^2-12)=8(a-c)$$
$$(c^2-6)^2-(a^2-6)^2=8(b-c)\Leftrightarrow (c^2-a^2)(c^2+a^2-12)=8(b-c)$$
Assume that $a\geq b\geq c\geq 0$ then
$$b^2+c^2\geq 12$$
on the other hand
$$b^2+a^2\leq12$$
and $$c^2+a^2\leq12$$
which would yield $$b^2+a^2\leq b^2+c^2\Leftrightarrow a^2\leq c^2\Leftrightarrow a\leq c$$ and
$$c^2+a^2\leq c^2+b^2\Leftrightarrow a^2\leq b^2\Leftrightarrow a\leq b$$
Therefore $a=b=c$.
It remains to check when not all of them are positive.</p>
|
515,915 | <p><strong>Definition</strong> A $p$-adic integer is a (formal) series
$$\alpha=a_0+a_1p+a_2p^2+\ldots$$
with $0\leq a_i<p$.</p>
<p>The set of $p$-adic integers is denoted by $\mathbb{Z}_p$. If we cut an element $\alpha\in\mathbb{Z}_p$ at its $k$-th term
$$\alpha_k=a_0+a_1p+\ldots+a_{k-1}p^{k-1}$$
$\textbf{we get a well-defined element of }$ $\mathbf{\mathbb{Z}/p^k\mathbb{Z}}$.
<p>Could someone explain me the bold part?</p>
| dfeuer | 17,596 | <p>This holds only in the plane, not in 3-space. Do you see why?</p>
<p><strong>Finding an argument:</strong> In the complex plane, a rotation is multiplication by $e^{i\theta}$ and translation is addition of a constant. So rotation followed by translation is $f(x)=e^{i\theta}x+c$ while translation followed by rotation is $g(x)=e^{i\theta}(x+c)=e^{i\theta}x+e^{i\theta}c$. These will be different for any $\theta\ne 2\pi k$ and $c\ne 0$.</p>
<p><strong>Turning the argument into one that doesn't use complex numbers:</strong> Thus we can choose a convenient value for $x$, like, say, $0$, and construct a purely geometric argument based on that (erasing the coordinate system, we see that the point chosen as the axis of rotation will always end up doing something different, and this is really easy to prove).</p>
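<p>The complex-plane argument can be checked directly (this snippet is my addition):</p>

```python
import cmath, math

theta = math.pi / 2   # a quarter turn
c = 1 + 0j            # translation by 1

def rotate_then_translate(x):
    return cmath.exp(1j * theta) * x + c

def translate_then_rotate(x):
    return cmath.exp(1j * theta) * (x + c)

# evaluate both compositions at the rotation centre x = 0
gap = abs(rotate_then_translate(0) - translate_then_rotate(0))
```

<p>Here the two compositions send $0$ to $1$ and to $i$ respectively, so the gap is $\sqrt2$: the operations do not commute.</p>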
|
2,447,014 | <p>I attempted to solve this question using a method learned from a previous answer on here. I was just looking for a bit more guidance.
This is what I have for this problem:</p>
<blockquote>
<p>$y=\frac{4x}{x+1}\\
y(x+1) =4x \\
yx + y = 4x \\
y = 4x-yx \\
y= x(4-y) \\
\frac{y}{4-y}=x \\
\frac{y}{4-y} > 0 \\
0 < y < 4 $</p>
</blockquote>
<p>Is this the correct approach? Also, it says to prove my answer. Would this just be doing the same steps in reverse?
Thanks,</p>
<p>EDIT: It came to my attention that proving it might just be plugging in the values and showing that they satisfy the original domain + target space. Is this correct?</p>
| marty cohen | 13,079 | <p>$\frac{4x}{x+1}
=\frac{4x+4-4}{x+1}
=\frac{4(x+1)-4}{x+1}
=4-\frac{4}{x+1}
$</p>
<p>Now, the ranges.</p>
<p>$\begin{array}\\
x+1:
&(0, \infty)\to (1, \infty)\\
\frac1{x}:
&(1, \infty) \to (0, 1)\\
4x:
&(0, 1)\to (0, 4)\\
4-x:
&(0, 4) \to (0, 4)\\
\end{array}
$
So the range is
$(0, 4)$.</p>
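<p>A numerical check of the range $(0,4)$ (my addition, not part of the original answer):</p>

```python
def f(x):
    return 4 * x / (x + 1)

xs = [10.0**(-k) for k in range(1, 8)] + [10.0**k for k in range(1, 8)]
vals = [f(x) for x in xs]

in_range = all(0 < v < 4 for v in vals)  # all sampled values lie in (0, 4)
near_zero = f(1e-9)                      # f(x) -> 0 as x -> 0+
near_four = f(1e9)                       # f(x) -> 4 as x -> infinity
```
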
|
265,549 | <p>$$ \left\{ x \in\mathbb{R}\; \middle\vert\; \tfrac{x}{|x| + 1} < \tfrac{1}{3} \right\}$$</p>
<p>What is the supremum and infimum of this set? I thought the supremum is $\frac{1}{3}$. But can we say that for any set of the form $\{x \mid x < n\}$, $n$ is the supremum? And for the infimum I have no idea at all. Also, let us consider this example:</p>
<p>$$ \left\{\tfrac{-1}{n} \;\middle\vert\; n \in \mathbb{N}_0\right\}$$</p>
<p>How can I find the infimum and supremum of this set? It confuses me a lot. I know that as $n$ gets bigger $\frac{-1}{n}$ asymptotically approaches $0$ and if $n$ gets smaller $\frac{-1}{n}$ approaches infinity, but that's about it. </p>
| M. Strochyk | 40,362 | <p>Considering separately cases $x\geqslant{0}$ and $x<0$ you can find range of $x$ for which $f(x)=\dfrac{x}{|x| + 1} - \dfrac{1}{3} <0.$</p>
<p>For $x\geqslant {0}$ $$f(x)=\dfrac{x}{x+1}-\dfrac{1}{3}=\dfrac{x+1-1}{x+1}-\dfrac{1}{3}=\dfrac{2}{3}- \dfrac{1}{x+1},$$
For $x<0$ $$f(x)=\dfrac{x}{1-x}-\dfrac{1}{3}=\dfrac{x-1+1}{1-x}-\dfrac{1}{3}= \dfrac{1}{1-x} -\dfrac{4}{3} $$
therefore, the function $f(x)$ increases on $\mathbb{R}.$ Then for $x \in [0, \,+\infty) $ the inequality $f(x)<0$ holds for $0 \leqslant x < \dfrac{1}{2}.$
Solutions of the inequality $\dfrac{1}{1-x} -\dfrac{4}{3}<0$ in the second case lie in $(- \infty, \, 0) \cap \left(- \infty, \, \dfrac{1}{4}\right)= (- \infty, \, 0) .$<br>
<br />
Thus $f(x)<0$ for $x \in (- \infty, \, 0)\cup \left[0, \, \dfrac{1}{2}\right)= \left(- \infty, \, \dfrac{1}{2} \right) \Rightarrow \sup \left \lbrace x\vert \; f(x)<0 \right \rbrace = \dfrac{1}{2} ; \;\; \inf \left \lbrace x\vert \; f(x)<0 \right \rbrace = -\infty .$</p>
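<p>A quick numerical confirmation that the threshold is $\dfrac{1}{2}$ (this snippet is my addition):</p>

```python
def g(x):
    # g(x) < 0 exactly when x belongs to the set
    return x / (abs(x) + 1) - 1/3

inside = [-100.0, -1.0, 0.0, 0.25, 0.4999]  # should satisfy g < 0
outside = [0.5001, 1.0, 100.0]              # should satisfy g > 0

threshold_ok = all(g(x) < 0 for x in inside) and all(g(x) > 0 for x in outside)
```
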
|
2,873,449 | <p>For a continuous function $f: \mathbb{R} \to \mathbb{R}$, evaluate:
$$ \lim_{x \to 0} \frac{1}{x^2} \int_{0}^{x} f(t)t \space dt $$
Since the function is continuous I can assume that it's also integrable since continuity implies integrability. I assume furthermore that there exists a function $F$ which is an antiderivative of $f$ for which the following are true:</p>
<p>$$\int_{a}^{b} f(x) dx =F(b)-F(a) \space\space\space\space\space a,b\in R \space $$
$$ \lim_{x \to x_{0}} \frac{F(x)-F(x_{0})}{x-x_{0}}=f(x_{0})$$
And that for $f$
$$ \lim_{x \to x_{0}} f(x)=f(x_{0})$$</p>
<p>In order to find the limit, I used partial integration and ended up with:
$$ \lim_{x \to 0} \frac{F(x)(x-1) +F(0)}{x^2}$$
At this point, I tried to use L'Hôpital's rule and ended up with the value $\frac{f(0)}{2}$, which seems totally wrong to me.
Any advice would be appreciated, I mainly think that my solution idea is wrong, but I am stuck.</p>
| Ted Shifrin | 71,348 | <p>By L'Hôpital's rule, this limit is equal to
$$\lim_{x\to 0} \frac{xf(x)}{2x} = \frac12\lim_{x\to 0} f(x) = \frac{f(0)}2.$$
(Use the first Fundamental Theorem of Calculus to differentiate the integral, since the integrand is guaranteed continuous.) You were correct.</p>
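<p>A numerical check (my addition, using Simpson's rule and $f=\cos$ as a sample continuous function):</p>

```python
import math

def f(t):
    return math.cos(t)  # any continuous f works; here f(0) = 1

def integral(x, steps=2000):
    # composite Simpson's rule for the integral of f(t) * t over [0, x]
    h = x / steps
    s = f(0) * 0 + f(x) * x
    for i in range(1, steps):
        t = i * h
        s += (4 if i % 2 else 2) * f(t) * t
    return s * h / 3

x = 1e-3
ratio = integral(x) / x**2  # should be close to f(0)/2 = 0.5
```
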
|
1,797,245 | <p>How does one show that $I_n = \int\limits_0^1 x^n e^x dx$ is decreasing?</p>
<p>The best I came up with is this: $I_{n-1} - I_n= \int\limits_0^1 e^xx^{n-1}(1-x)dx$, but how do we go from here?</p>
<p>I'd appreciate some hints.</p>
| rtybase | 22,583 | <p>According to <a href="https://en.wikipedia.org/wiki/Mean_value_theorem#Mean_Value_Theorems_for_Definite_Integrals" rel="nofollow">Mean Value Theorem</a>, there $\exists c \in (0,1)$ such that
$$\int_{0}^{1}e^xx^{n-1}(1-x)dx=e^cc^{n-1}(1-c)(1-0)\geq0$$</p>
<p>because $f(x)=e^xx^{n-1}(1-x)$ is continuous on $[0,1]$</p>
|
1,237,464 | <p>In a bag of red and black balls, $30\%$ were red, and $90\%$ of the black balls and $80\%$ of the red balls are marked balls. What percentage of the bag of balls is marked?</p>
<p>I thought I would have to use: Let </p>
<p>$A = \text{marked red balls},$</p>
<p>$B = \text{marked black balls}.$ </p>
<p>$P(A \cup B) = P(A) + P(B) - P(A \cap B)$</p>
<p>But the answer is simply $0.8\times0.3 + 0.7\times0.9$. How come we don't have to take $P(A \cap B)$ into consideration?</p>
| Community | -1 | <p>Let in bag be $x$ balls.<br/>
Number of red balls: $30\%x$<br/>
Number of black balls: $(100-30)\%x=70\%x$<br/>
Number of marked red balls: $80\%\cdot30\%x=24\%x$<br/>
Number of marked black balls: $90\%\cdot70\%x=63\%x$<br/>
So, answer will be: $24\%x+63\%x=87\%x$</p>
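<p>The same count can be checked with a concrete bag of, say, $1000$ balls (the size is arbitrary; exact integer arithmetic avoids any rounding):</p>

```python
# Concrete bag: 30% of 1000 balls are red, the rest black.
total = 1000
red, black = 300, 700
marked = red * 80 // 100 + black * 90 // 100   # 240 marked red + 630 marked black
print(marked, "of", total, "=>", 100 * marked // total, "%")  # 870 of 1000 => 87 %
```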
|
1,237,464 | <p>In a bag of reds and black balls, $30\%$ were red, and $90\%$ of the black balls and $80\%$ of the red balls are marked balls. What percentage of the bag of balls is marked?</p>
<p>I thought I would have to use: Let </p>
<p>$A = \text{marked red balls},$</p>
<p>$B = \text{marked black balls}.$ </p>
<p>$P(A \cup B) = P(A) + P(B) - P(A \cap B)$</p>
<p>But the answer is simply $0.8\times0.3 + 0.7\times0.9$. How come we don't have to take $P(A \cap B)$ into consideration?</p>
| drhab | 75,923 | <p><strong>Hint</strong>:</p>
<p>$P\left(\text{marked}\right)=P\left(\text{marked and black}\right)+P\left(\text{marked and red}\right)$</p>
<p>$P\left(\text{marked}\right)=P\left(\text{marked}\mid\text{black}\right)P\left(\text{black}\right)+P\left(\text{marked}\mid\text{red}\right)P\left(\text{red}\right)$</p>
<hr>
<p>By definition: $$P\left(\text{marked}\mid\text{black}\right)P\left(\text{black}\right)=P\left(\text{marked and black}\right)$$
and likewise: $$P\left(\text{marked}\mid\text{red}\right)P\left(\text{red}\right)=P\left(\text{marked and red}\right)$$</p>
|
1,405,787 | <p>We're given the following function :</p>
<p>$$f(x,y)=\dfrac{1}{1+x-y}$$</p>
<p>Now , how to prove that the given function is differentiable at $(0,0)$ ?</p>
<p>I found out the partial derivatives as $f_x(0,0)=(-1)$ and $f_y(0,0)=1$ ,</p>
<p>Clearly the partial derivatives are continuous , but that doesn't guarantee differentiability , does it ?</p>
<p>Is there any other way to prove the same ?</p>
| Jack | 206,560 | <p>You need to show that the limit definition of the partial derivative from each direction is the same value. Hence</p>
<p>$\frac{\partial f}{\partial x}(0,0) = \lim_{h \to 0^{-}} \frac{f(0+h,0)-f(0,0)}{h} = \lim_{h \to 0^{-}} \frac{\frac{1}{1+h} - 1}{h} = \lim_{h \to 0^{-}} -\frac{1}{(1+h)^{2}} = -1$ (using L'Hôpital's rule).</p>
<p>$\frac{\partial f}{\partial x}(0,0) = \lim_{h \to 0^{+}} \frac{f(0+h,0)-f(0,0)}{h} = \lim_{h \to 0^{+}} \frac{\frac{1}{1+h} - 1}{h} = \lim_{h \to 0^{+}} -\frac{1}{(1+h)^{2}} = -1$.</p>
<p>Since the two one-sided limits agree, the partial derivative of $f$ with respect to $x$ exists at $(0,0)$. Now rinse and repeat the same thing for $y$ to determine whether the partial derivative with respect to $y$ exists.</p>
|
1,628,386 | <p>I had following limit of two variables as a problem on my calculus test. How does one show whether the limit below exists or does not exist? I think it does not exist but I was not able to show that rigorously. There was a hint reminding that $\lim_{t\to 0}\sin t / t=1$.</p>
<p>$$\lim_{(x,y)\to (0,0)} \frac{3x^2\sin^2y}{2x^4+2\sin y^4}$$</p>
| marty cohen | 13,079 | <p>The first approximation
I would use is
$\sin(x)
\approx x$
for small $x$.</p>
<p>Therefore
$\frac{3x^2\sin^2y}{2x^4+2\sin y^4}
\approx \frac{3x^2y^2}{2x^4+2y^4}
= \frac{3(x/y)^2}{2(x/y)^4+2}
$
(dividing by $y^4$).</p>
<p>Therefore,
if $x/y=c$,
this approaches
$\frac{3c^2}{2c^4+2}
$,
which can take on a variety of values.</p>
|
1,201,904 | <p>I have to implement a circuit following the boolean equation A XOR B XOR C, however the XOR gates I am using only have two inputs (I am using the 7486 XOR gate to be exact, in case that makes a difference)... is there a way around this?</p>
| copper.hat | 27,978 | <p>Exclusive or $\oplus$ is associative (and commutative), so you can use any order
and any pairing.</p>
<p>$A \oplus B \oplus C = (A \oplus B) \oplus C = A \oplus (B \oplus C) $.</p>
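<p>A quick exhaustive check of the associativity claim over all eight input combinations (a Python sketch; the 7486 itself of course only computes 2-input XOR, which is exactly why this property matters):</p>

```python
from itertools import product

# (A xor B) xor C equals A xor (B xor C) on every input, so two cascaded
# 2-input gates realize the 3-input XOR regardless of pairing or order.
for a, b, c in product((0, 1), repeat=3):
    assert (a ^ b) ^ c == a ^ (b ^ c) == a ^ b ^ c
print("associativity verified on all 8 input combinations")
```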
|
82,061 | <p>Well, another problem here I don't get.</p>
<p>From what I know, sin(x)'=cos(x) right?</p>
<p>Well, here is a problem:</p>
<p>Find $\frac{d}{dx}(y\cos(\frac{y}{x^4}))$ ?</p>
<p>Let $u=y\cos(\frac{y}{x^4})$ and $s=y$, $v=\cos(\frac{y}{x^4})$</p>
<p>$$u'=v\frac{ds}{dx}+s \frac{dv}{dx}$$</p>
<p>$$u'=\cos\frac{y}{x^4} \frac{dy}{dx} + y \frac{d}{dx}(\cos\frac{y}{x^4})$$</p>
<p>Now the question, isn't $\frac{d}{dx}(\cos\frac{y}{x^4})=-\sin(\frac{y}{x^4})$ and here it's where it all ends? Why is the teacher going on and differentiates what's in the parentheses? </p>
| Ross Millikan | 1,827 | <p>For $k=2$ you can use the fact that $\left(\sum_n a_n\right)^2=\left(\sum_n a_n^2\right)+2\sum_{i<j}a_ia_j$. Your expression is the third term (without the factor $2$). For $k=3,$</p>
<p>$ \left(\sum_n a_n\right)^3=\left(\sum_n a_n^3\right)+3\sum_{i\ne j}a_i^2a_j+6\sum_{i<j<k}a_ia_ja_k$</p>
<p>$=\left(\sum_n a_n^3\right)+3\sum_i a_i^2((\sum_ja_j)-a_i)+6\sum_{i<j<k}a_ia_ja_k$</p>
<p>$=3(\sum_i a_i^2)(\sum_ja_j)-2\left(\sum_n a_n^3\right)+6\sum_{i<j<k}a_ia_ja_k$</p>
<p>and again you are at $O(n)$. I suspect these are available somewhere, because they get messy.</p>
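<p>Both identities can be verified exactly on random integer lists (a Python sketch; the helper name <code>check</code> is illustrative):</p>

```python
from itertools import combinations
import random

def check(a):
    s1 = sum(a)
    s2 = sum(x * x for x in a)
    s3 = sum(x ** 3 for x in a)
    pairs = sum(x * y for x, y in combinations(a, 2))
    triples = sum(x * y * z for x, y, z in combinations(a, 3))
    assert s1 ** 2 == s2 + 2 * pairs                       # k = 2 identity
    assert s1 ** 3 == 3 * s2 * s1 - 2 * s3 + 6 * triples   # k = 3 identity

random.seed(0)
for _ in range(100):
    check([random.randint(-10, 10) for _ in range(6)])
print("both identities hold on 100 random integer lists")
```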
|
2,266,342 | <p>So on my review for my final exam there is this question: </p>
<p>Is there a linear transformation from $P_2$ to $P_2$ with the following properties? In each case, either give an example of such a transformation or prove that no such transformation exist.</p>
<blockquote>
<p>$T(t^2+t+1) = t^2+t+1, T(t^2+2t+3) = 3t^2+2t+1, T(t^2+2t+2) = 2t^2+2t+1$</p>
</blockquote>
<p>So we know that the standard basis for $P_2$ is </p>
<p>$\{1,t,t^2\}$</p>
<p>My question is how do you even start? Is there a trick? we have never done this kind of question in our class before.</p>
| Robert Israel | 8,508 | <p>Are $t^2 + t + 1$, $t^2 + 2 t + 3$, $t^2 + 2 t + 2$ linearly independent? If so, then they form a basis and there is such a linear transformation. If not, you must determine whether these equations are consistent.</p>
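<p>A minimal sketch of that independence check, writing each polynomial as its coefficient vector and computing a $3\times 3$ determinant (the helper <code>det3</code> is illustrative, not part of the question):</p>

```python
def det3(rows):
    """Determinant of a 3x3 matrix given as three row tuples."""
    a, b, c = rows
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

vecs = [(1, 1, 1),   # t^2 + t + 1   as (constant, t, t^2) coefficients
        (3, 2, 1),   # t^2 + 2t + 3
        (2, 2, 1)]   # t^2 + 2t + 2
print(det3(vecs))    # 1 (nonzero)
```

<p>Since the determinant is nonzero, the three inputs are linearly independent, hence a basis of $P_2$, so a linear transformation with the prescribed values exists.</p>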
|
2,902,768 | <p>$f:\mathbb{R}^2 \to \mathbb{R}$</p>
<p>$f\Bigg(\begin{matrix}x\\y\end{matrix}\Bigg)=\begin{cases}\frac{xy^2}{x^2+y^2},(x,y)^T \neq(0,0)^T \\0 , (x,y)^T=(0,0)^T\end{cases}$</p>
<p>I need to determine all partial derivatives for $(x,y)^T \in \mathbb{R}^2$:</p>
<p>$f_x=y^2/(x^2+y^2)-2x^2y^2/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>$f_y=2xy/(x^2+y^2)-2xy^3/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>and $f_x=f_y=0$ for $(x,y)^T = (0,0)$.</p>
<p>Then I need to determine $\frac{\partial f}{\partial v}((0,0)^T)$ for all $v=(v_1,v_2)^T \in \mathbb{R}^2$.</p>
<p>I tried: $\frac{1}{s} (f(x+sv)-f(x))$ at $x=(0,0)^T$ is equal to $\frac{1}{s} f(sv)$=$\frac{1}{s} f\Big(\begin{matrix}sv_1\\sv_2\end{matrix}\Big)$.</p>
<p>Which is either equal to $0$ when the argument is $(0,0)^T$ or it is $\frac{1}{s}\frac{sv_1s^2v_2^2}{s^2v_1^2+s^2v_2^2}$ which converges to $\frac{v_1v_2^2}{v_1^2+v_2^2}$ as $s \to 0$.</p>
<p>Is that correct so far?</p>
<p>And how do I know if $f$ is continuously partial differentiable on $\mathbb{R}^2$? According to our professor $f$ is not differentiable at $0$. How do I show that? As far as I know it has something to do with that something is not linear but I don't know what exactly. So I guess it can't be continuously partial differentiable on $\mathbb{R}^2$ as well but I am not sure about that.</p>
<p>Thanks for your help!</p>
| mechanodroid | 144,766 | <p>The partial derivatives at $(0,0)$ are</p>
<p>$$\partial_x f(0,0) = \lim_{h\to 0} \frac{f(h,0) - f(0,0)}h = 0$$
$$\partial_y f(0,0) = \lim_{h\to 0} \frac{f(0,h) - f(0,0)}h = 0$$</p>
<p>so the only candidate for the differential $Df(0,0)$ is the zero operator.</p>
<p>However, the limit</p>
<p>$$\lim_{(h_1, h_2) \to (0,0)} \frac{\|f(h_1, h_2) - f(0,0) - Df(0,0)(h_1, h_2)\|}{\|(h_1, h_2)\|} = \lim_{(h_1, h_2) \to (0,0)} \frac{\|f(h_1, h_2)\|}{\sqrt{h_1^2 + h_2^2}} = \lim_{(h_1, h_2) \to (0,0)} \frac{h_1h_2^2}{(h_1^2 + h_2^2)^{3/2}}$$
does not exist because e.g. for $h_1 = h_2$ we get $$\lim_{h_1 \to 0} \frac{h_1^3}{|h_1|^3}$$</p>
<p>which is $\pm 1$, depending on whether $h_1$ approaches $0$ from the left or from the right.</p>
<p>Therefore, $f$ is not differentiable at $(0,0)$.</p>
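<p>Numerically, the difference quotient along $h_1=h_2$ indeed approaches values of opposite sign from the two sides — here $\pm 1/(2\sqrt2)$, since the constant $2^{3/2}$ was dropped above and the absolute value is omitted to expose the sign (a Python sketch):</p>

```python
def q(h1, h2):
    """Signed difference quotient (f(h) - f(0) - Df(0,0)h) / |h| with Df(0,0) = 0."""
    return (h1 * h2 ** 2 / (h1 ** 2 + h2 ** 2)) / (h1 ** 2 + h2 ** 2) ** 0.5

for h in (1e-3, 1e-6):
    print(q(h, h), q(-h, -h))   # ~ +0.3536 and -0.3536 for every h
```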
|
925,941 | <p>Solve:</p>
<p>$xy=-30$<br>
$x+y=13$</p>
<p>{15, -2} is a particular solution, but, how would I know if is the only solution, or what would be the way to solve this without "guessing" ?</p>
| user26486 | 107,671 | <p>The other answers here show the strategies of:</p>
<hr>
<p>$1$) Substitution (the most general). See Kim Jong Un's answer on this question.</p>
<hr>
<p>$2)$ Solving a quadratic equation that has the roots $x,y$ (this uses the <a href="http://en.wikipedia.org/wiki/Vieta's_formulas" rel="nofollow">Vieta's formulas</a>). See Sami Ben Romdhane's answer.</p>
<hr>
<p>$3)$ Using the fact that $(x+y)^2-4xy=(x-y)^2$ to have the numeric value of $x-y$. Then $(x+y)-(x-y)=2y$, an easy way to get the value of $y$. See André Nicolas' answer.</p>
<hr>
<p>I'm posting another method that works great for this particular problem.</p>
<p>Add the equations in a specific way to get:$$\begin{align}xy+2(x+y)&=-30+2\cdot 13=-4\\\iff (x+2)(y+2)&=0\end{align}$$</p>
<p>Thus either $x=-2$ or $y=-2$, which give the solutions $(-2,15)$ and $(15,-2)$, respectively.</p>
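<p>A quick check that both resulting pairs satisfy the original system as well as the factored equation (Python sketch):</p>

```python
# xy + 2(x + y) = -30 + 2*13 = -4, and (x+2)(y+2) = xy + 2(x+y) + 4 = 0.
for x, y in [(-2, 15), (15, -2)]:
    assert x * y == -30 and x + y == 13
    assert (x + 2) * (y + 2) == 0
print("(-2, 15) and (15, -2) both check out")
```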
|
339,423 | <p>Assuming $\lbrace R_i \rbrace _{i=1}^{n}$ be a set of rings, show that $$e=(e_1,...,e_n)$$ is the identity of $$R=R_1 \times R_2 \times ... \times R_n$$ if and only if $e_i$ is the identity of $R_i$ for all $1 \leq i \leq n$. I come across this statement when $R$ is the internal direct product of $R_i$ but here, we don have this assumption. How do we tackle this problem?</p>
| Branimir Ćaćić | 49,610 | <p>$
\newcommand{\so}{\mathfrak{sl}}
\newcommand{\gl}{\mathfrak{gl}}
\newcommand{\bC}{\mathbb{C}}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\SL}{SL}
\DeclareMathOperator{\SO}{SO}
\DeclareMathOperator{\tr}{tr}
\newcommand{\i}[1]{{#1}^{-1}}
\newcommand{\c}[2]{{#2}{#1}\i{{#2}}}
$Let $\so(2,\bC)$ denote the complex vector space of $2 \times 2$ complex traceless matrices.</p>
<ol>
<li><p>Since similar matrices have the same trace,
$$
\forall X \in \so(2,\bC), \; \forall A \in \GL(2,\bC), \; \c{X}{A} \in \so(2,\bC).
$$
Hence, $(A,X) \mapsto \c{X}{A}$ defines a representation of $\GL(2,\bC)$ on the vector space $\so(2,\bC)$, which restricts to a representation $\pi_0 : \SL(2,\bC) \to \GL(\so(2,\bC))$ of $\SL(2,\bC)$ on $\so(2,\bC)$.</p></li>
<li><p>You should check that the vector space $\so(2,\bC)$ admits the basis of Pauli matrices $\{\sigma_1,\sigma_2,\sigma_2\}$, where
$$
\sigma_1 = \begin{pmatrix}0&1\\1&0\end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
$$
Hence, one has an isomorphism $\gamma : \bC^3 \to \so(2,\bC)$ defined by $\gamma(x) := x_1 \sigma_1 + x_2 \sigma_2 + x_3 \sigma_3$, which in turn induces a representation $\pi : \SL(2,\bC) \to \GL(\bC^3) \cong \GL(3,\bC)$ of $\SL(2,\bC)$ on $\bC^3$ by
$$
\forall A \in \SL(2,\bC), \; \pi(A) := \c{\pi_0(A)}{\gamma}.
$$</p></li>
</ol>
<p>Given all this, you must show that:</p>
<ol>
<li><p>$\pi(\SL(2,\bC)) = \SO(3,\bC)$.</p></li>
<li><p>For any $T \in \SO(3,\bC)$, $\i{\pi}(T)$ has exactly two elements.</p></li>
</ol>
<p>It is worth noting, in terms of wider context, that this construction is intimately linked to the intimately interrelated theories of Clifford algebras and of spin and spin$^\bC$ groups. In fact, the restriction of $\pi : \SL(2,\bC) \to \GL(3,\bC)$ to a representation of $\operatorname{SU}(2)$ on $\bC^3$ yields the accidental isomorphism $\operatorname{Spin}(3) \cong \operatorname{SU}(2)$, for, as you can check $\pi(\operatorname{SU}(2)) = \SO(3,\mathbb{R})$. Moreover, the isomorphism $\gamma : \bC^3 \to \so(2,\bC)$ yields the identification of $\bC^2$ as the unique (up to unitary equivalence) irreducible $\operatorname{Spin}(3)$-module.</p>
|
339,423 | <p>Assuming $\lbrace R_i \rbrace _{i=1}^{n}$ be a set of rings, show that $$e=(e_1,...,e_n)$$ is the identity of $$R=R_1 \times R_2 \times ... \times R_n$$ if and only if $e_i$ is the identity of $R_i$ for all $1 \leq i \leq n$. I come across this statement when $R$ is the internal direct product of $R_i$ but here, we don have this assumption. How do we tackle this problem?</p>
| Ted | 15,012 | <p><span class="math-container">$\DeclareMathOperator{\tr}{tr}$</span>The map <span class="math-container">$(A,B) \mapsto \tr(AB)$</span> defines a symmetric, nondegenerate bilinear form on the space of traceless matrices in <span class="math-container">$M_2\mathbb{C}$</span>. Clearly the action of <span class="math-container">$g$</span> preserves this form; hence, we get a map <span class="math-container">$SL_2\mathbb{C} \to SO_3\mathbb{C}$</span>. To see that the map is 2:1, check that the kernel of this map is <span class="math-container">$\{1, -1\}$</span>. Surjectivity is more difficult to check.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Community | -1 | <p>$$\sum\limits_{n=1}^{\infty} n = 1 + 2 + 3 + \cdots \text{ad inf.} = -\frac{1}{12}$$</p>
<p>You can also see many more here: <a href="http://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/" rel="nofollow">The Euler-Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation</a></p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| user02138 | 2,720 | <p>\begin{eqnarray}
\sum_{k = 0}^{\lfloor q - q/p \rfloor} \left \lfloor \frac{p(q - k)}{q} \right \rfloor = \sum_{k = 1}^{q} \left \lfloor \frac{kp}{q} \right \rfloor
\end{eqnarray}</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Community | -1 | <p>Ah, this is one identity which comes into use for proving the <em>Euler's Partition Theorem</em>. The identity is as follows: $$ (1+x)(1+x^{2})(1+x^{3}) \cdots = \frac{1}{(1-x)(1-x^{3})(1-x^{5}) \cdots}$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Yrogirg | 14,862 | <p>I actually think <a href="http://en.wikipedia.org/wiki/Currying">currying</a> is really cool:</p>
<p>$$(A \times B) \to C \; \simeq \; A \to (B \to C)$$</p>
<p>Though not strictly an identity, but an isomorphism.</p>
<p>When I met it for the first time it seemed to be a bit odd but it is so convenient and neat. At least in programming.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Z. L. | 55,611 | <p>$32768=(3-2+7)^6 / 8$</p>
<p>Just a funny coincidence.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| marty cohen | 13,079 | <p>$\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3) = \pi$
(using the principal value),
but if you blindly use the addition formula
$\tan^{-1}(x) + \tan^{-1}(y)
= \tan^{-1}\dfrac{x+y}{1-x y}$
twice, you get zero:</p>
<p>$\tan^{-1}(1) + \tan^{-1}(2) =
\tan^{-1}\dfrac{1+2}{1-1*2}
=\tan^{-1}(-3)$;
$\tan^{-1}(1) + \tan^{-1}(2) + \tan^{-1}(3)
=\tan^{-1}(-3) + \tan^{-1}(3)
=\tan^{-1}\dfrac{-3+3}{1-(-3)(3)}
= 0$.</p>
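<p>Numerically (a Python sketch using <code>math.atan</code>, whose range is the principal branch $(-\pi/2,\pi/2)$ — exactly the source of the discrepancy):</p>

```python
import math

# Direct sum of principal values: equals pi.
direct = math.atan(1) + math.atan(2) + math.atan(3)
assert abs(direct - math.pi) < 1e-12

# "Blind" use of the addition formula: the intermediate atan(-3) sits in the
# wrong branch, and the chained result collapses to 0 instead of pi.
step1 = math.atan((1 + 2) / (1 - 1 * 2))   # atan(-3)
t = math.tan(step1)
naive = math.atan((t + 3) / (1 - t * 3))
assert abs(naive) < 1e-9
print("direct sum = pi, naive chained formula = 0")
```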
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| xpaul | 66,420 | <p>I have one: In a $\Delta ABC$,
$$\tan A+\tan B+\tan C=\tan A\tan B\tan C.$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Felix Marin | 85,343 | <p>$${\Large%
\sqrt{\,\vphantom{\huge A}\color{#00f}{20}\color{#c00000}{25}\,}\, =\ \color{#00f}{20}\ +\ \color{#c00000}{25}\ =\ 45}
$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Jaycob Coleman | 76,533 | <p>Let $\sigma(n)$ denote the sum of the divisors of $n$.</p>
<p>If $$p=1+\sigma(k),$$ then $$p^a=1+\sigma(kp^{a-1})$$
where $a,k$ are positive integers and $p$ is a prime such that $p\not\mid k$.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| ZHN | 131,755 | <p>$$\frac{\pi}{4}=\sum_{n=1}^{\infty}\arctan\frac{1}{f_{2n+1}}, $$
where the $f_{2n+1}$ are Fibonacci numbers, $n=1,2,\ldots$</p>
|
4,556,592 | <p>Check if the applications defined below are linear transformations:</p>
<p>a) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_1 – 1, x_2).$</span></p>
<p>b) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_2, x_1).$</span></p>
<p>c) <span class="math-container">$T: M_{2,2} \to \mathbb R, T (\left [ \begin{matrix}
a & b \\
c & d \\
\end{matrix} \right ]) = a – 2b + c – 2d$</span>.</p>
<hr />
<p>My attempt:</p>
<p>a) Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_1 – 1 + v_1 – 1, u_2 + v_2) = (u_1 + v_1 – 2, u_2 + v_2) \ne T(u) + T(v)$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (c(u_1 - 1), cu_2) = (cu_1 - c, u_2) \ne cT(u)$</span>.</p>
<p>No.</p>
<p>b)Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_2 + v_2 , u_1 + v_1) = T(v) + T(u).$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (cu_2, cu_1) = c(u_2, u_1) \ne cT(u).$</span></p>
<p>No.</p>
<p>c)Let <span class="math-container">$u = (a, b, c, d)$</span> e <span class="math-container">$v = (e, f, g, h)$</span></p>
<p><span class="math-container">$T(u+v) = T([a, b, c, d] + [e, f, g, h]) = (a+b) – 2(b+f) + (c+g) – 2(d+h) = (a -2b +c -2d, e -2f + c -2h) = T(u) + T(v)$</span></p>
<p><span class="math-container">$T(ku) = T(k[a, b, c, d]) = ka – k2b + kc – k2d = k(a – 2b + c – 2d) = kT(u).$</span></p>
<p>Yes.</p>
<p>Are my checks correct?</p>
<p>Thanks.</p>
| 311411 | 688,046 | <p><em>Partial Answer:</em></p>
<p>For part (a) you are not applying the rule of function <span class="math-container">$T$</span> correctly.</p>
<p><span class="math-container">$$(a+)\quad T(u_1+v_1, u_2+v_2) = (u_1 + v_1 – 1, u_2 + v_2) \not = (u_1 + v_1 – 2, u_2 + v_2).$$</span></p>
<p><span class="math-container">$$(a\cdot)\quad\quad\quad\quad\quad T(cu) = T(cu_1, cu_2) = (cu_1 - 1, cu_2) \not= (c(u_1 - 1), cu_2).$$</span></p>
<p>"No" is however the correct conclusion. To justify it, it is better to simply note <span class="math-container">$T(0)\not=0$</span>. (It can be shown <span class="math-container">$T(0)=0$</span> is a necessary condition for any linear mapping.)</p>
<p>For part (c+), a different sort of mistake appears in your work. You have written (with your typo fixed): <span class="math-container">$$T([a, b, c, d] + [e, f, g, h]) $$</span> <span class="math-container">$$= (a+e) – 2(b+f) + (c+g) – 2(d+h) $$</span> <span class="math-container">$$= (a -2b +c -2d, e -2f + c -2h)$$</span> as though an output of <span class="math-container">$T$</span> is an ordered pair. But this <span class="math-container">$T$</span> outputs a real number (takes values in a single dimension).</p>
|
4,556,592 | <p>Check if the applications defined below are linear transformations:</p>
<p>a) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_1 – 1, x_2).$</span></p>
<p>b) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_2, x_1).$</span></p>
<p>c) <span class="math-container">$T: M_{2,2} \to \mathbb R, T (\left [ \begin{matrix}
a & b \\
c & d \\
\end{matrix} \right ]) = a – 2b + c – 2d$</span>.</p>
<hr />
<p>My attempt:</p>
<p>a) Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_1 – 1 + v_1 – 1, u_2 + v_2) = (u_1 + v_1 – 2, u_2 + v_2) \ne T(u) + T(v)$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (c(u_1 - 1), cu_2) = (cu_1 - c, u_2) \ne cT(u)$</span>.</p>
<p>No.</p>
<p>b)Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_2 + v_2 , u_1 + v_1) = T(v) + T(u).$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (cu_2, cu_1) = c(u_2, u_1) \ne cT(u).$</span></p>
<p>No.</p>
<p>c)Let <span class="math-container">$u = (a, b, c, d)$</span> e <span class="math-container">$v = (e, f, g, h)$</span></p>
<p><span class="math-container">$T(u+v) = T([a, b, c, d] + [e, f, g, h]) = (a+b) – 2(b+f) + (c+g) – 2(d+h) = (a -2b +c -2d, e -2f + c -2h) = T(u) + T(v)$</span></p>
<p><span class="math-container">$T(ku) = T(k[a, b, c, d]) = ka – k2b + kc – k2d = k(a – 2b + c – 2d) = kT(u).$</span></p>
<p>Yes.</p>
<p>Are my checks correct?</p>
<p>Thanks.</p>
| Shubham Namdeo | 206,275 | <p>a). Note that <span class="math-container">$T(0,0)=(-1,0)\neq (0,0)$</span>. Therefore, <span class="math-container">$T$</span> fails to be linear map. Because if <span class="math-container">$T$</span> was linear then <span class="math-container">$T(0,0)=T(-1 \cdot (0,0)) = -1 T(0,0)$</span> and this implies that <span class="math-container">$2 T(0) =0 \implies T(0) =0$</span>, as the underlying field is <span class="math-container">$\mathbb{R}$</span>.</p>
<p>b). For <span class="math-container">$(x_1,x_2),(y_1,y_2)\in \mathbb{R}^2$</span> and <span class="math-container">$\alpha , \beta \in \mathbb{R}$</span>, we have <span class="math-container">$T\big(\alpha (x_1, x_2)+\beta(y_1,y_2)\big) = T(\alpha x_1+\beta y_1, \alpha x_2+\beta y_2) = (\alpha x_2+\beta y_2, \alpha x_1+\beta y_1)=\alpha (x_2,x_1)+\beta(y_2,y_1)=\alpha T(x_1,x_2)+\beta T(y_1,y_2).$</span></p>
<p>Hence, <span class="math-container">$T$</span> is a linear map.</p>
<p>c). For <span class="math-container">$\alpha , \beta \in \mathbb{R}$</span>, we have <span class="math-container">$\alpha \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}
+\beta \begin{pmatrix}
x & y\\
z & w
\end{pmatrix} = \begin{pmatrix}
\alpha a + \beta x & \alpha b + \beta y \\
\alpha c + \beta z & \alpha d + \beta w
\end{pmatrix}.\;\;$</span> So,
<span class="math-container">$$T \begin{pmatrix}
\alpha a + \beta x & \alpha b + \beta y \\
\alpha c + \beta z & \alpha d + \beta w
\end{pmatrix} = \alpha a + \beta x- 2(\alpha b + \beta y) + \alpha c + \beta z -2 (\alpha d + \beta w)$$</span> <span class="math-container">$$= \alpha (a-2b+c-2d)+\beta (x-2y+z-2w)= \alpha T\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}+\beta T\begin{pmatrix}
x & y\\
z & w
\end{pmatrix} .$$</span></p>
<p>Hence, <span class="math-container">$T$</span> is a linear map.</p>
|
3,687,484 | <p>Here, <span class="math-container">$H^1(\mathbb{R}^2)$</span> is the standard Sobolev spaces for <span class="math-container">$L^2(\mathbb{R}^2)$</span> functions whose weak derivative belongs to <span class="math-container">$L^2(\mathbb{R}^2).$</span></p>
<p>My question in the title comes from calculus of variations. It is usually the case that a minimizer of some given energy functional defined on <span class="math-container">$H^1(\mathbb{R}^2)$</span> is known to be continuous (or even <span class="math-container">$C^2(\mathbb{R}^2))$</span>. I want to know the behavior of this minimizer at infinity.</p>
<p>If <span class="math-container">$u \in L^2(\mathbb{R}^2),$</span> then is known <span class="math-container">$\liminf_{|x| \to \infty} u(x) = 0.$</span> But it cannot say <span class="math-container">$\limsup_{|x| \to \infty} u(x) = 0$</span> since counterexamples exist.</p>
<p>If we assume <span class="math-container">$u \in H^{1+\epsilon}(\mathbb{R}^2)$</span> for some <span class="math-container">$\epsilon > 0,$</span> then the classical Morrey inequality implies uniform Hölder continuity of <span class="math-container">$u.$</span> So we can conclude <span class="math-container">$\limsup_{|x| \to \infty} u(x) = 0$</span> via proof by contradiction.</p>
<p>So my problem is about the case <span class="math-container">$\epsilon = 0.$</span> That is, when
<span class="math-container">$$u \in H^1(\mathbb{R}^2) \cap C(\mathbb{R}^2),$$</span>
is it true that
<span class="math-container">$$\limsup_{|x| \to \infty} u(x) = 0?$$</span></p>
<p>Using proof by contradiction, I think this should be true. Here is my non-rigorous argument.</p>
<blockquote>
<p>Assume not, then there are <span class="math-container">$\epsilon > 0$</span> and <span class="math-container">$x_n \in \mathbb{R}^2$</span> such that <span class="math-container">$|x_n| \to \infty$</span> and <span class="math-container">$|u(x_n)| \geq 2\epsilon.$</span> By the continuity, there is <span class="math-container">$r_n > 0$</span> such that <span class="math-container">$|u(x)| \geq \epsilon$</span> for all <span class="math-container">$x \in B(x_n, r_n).$</span>
Since <span class="math-container">$u \in L^2, r_n \to 0$</span> as <span class="math-container">$n \to \infty.$</span>
<strong>I think non-rigorously that</strong>
<span class="math-container">$$ \int_{B(x_n, r_n)} |\nabla u|^2 \gtrsim \int_{B(x_n, r_n)} (\frac{\epsilon}{r_n})^2 = \epsilon^2$$</span>
for large <span class="math-container">$n$</span> and
<span class="math-container">$$
\int_{\mathbb{R}^2} |\nabla u|^2 \geq \sum_{n\,\text{is large}} \int_{B(x_n, r_n)} |\nabla u|^2.
$$</span>
So they imply a contradiction <span class="math-container">$\int_{\mathbb{R}^2} |\nabla u|^2 = \infty$</span></p>
</blockquote>
<p>I appreciate any discussion.</p>
<p><strong>Edit</strong>: How about <span class="math-container">$u$</span> is additionally assumed to be <span class="math-container">$C^1(\mathbb{R}^2)$</span> or even <span class="math-container">$C^2(\mathbb{R}^2)?$</span> Is there any proof or counterexample?</p>
| Yung-Hsiang Huang | 225,984 | <p>This seems to be <strong>not</strong> a general property for functions in <span class="math-container">$H^1(\mathbb{R}^2) \cap C(\mathbb{R}^2)$</span> because I find the following remark from <a href="https://www.springer.com/gp/book/9780857292261" rel="nofollow noreferrer">this book</a>.</p>
<p><a href="https://i.stack.imgur.com/XgKRR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XgKRR.png" alt="enter image description here"></a></p>
|
495,069 | <p>Find an equation of the plane.
The plane that passes through the line of intersection of the planes
$x − z = 1$ and $y + 4z = 1$
and is perpendicular to the plane
$x + y − 2z = 2$.</p>
<p>I keep getting the answer of $7x-y+5z=6$ and I am told that it is wrong. I do not understand what I am doing wrong.
I have the 1st normal vector to be $\langle 1,0,-1\rangle$ and the second normal vector to be $\langle0,1,4\rangle$
and when I did a cross product on them, I got $\langle1,-4,1\rangle$ to be the direction of the line.
I then got a 2nd vector parallel to the desired plane as $\langle1,1,-2\rangle$ since its perpendicular to $x+y-2z=2$
I got a normal plane by the cross product of the 2 normal vectors and the result was $\langle 7,-1,5\rangle$
Then I plugged $\langle7,-1,5\rangle$ into the scalar equation of the plane for $\langle a,b,c\rangle$ and used the point $(1,1,0) $
and for my final answer I got $7x-y+5z=6$
I need help figuring out where I went wrong.</p>
| Cameron Buie | 28,900 | <p>$$\langle 1,-4,1\rangle\times\langle 1,1,-2\rangle=\langle 7,3,5\rangle$$</p>
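<p>For reference, the cross product can be recomputed directly (a Python sketch; the helper <code>cross</code> is illustrative). The middle component is $+3$, not $-1$, which suggests a sign slip in the $j$-cofactor:</p>

```python
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

n = cross((1, -4, 1), (1, 1, -2))
print(n)  # (7, 3, 5)
# With the point (1, 1, 0) from the question, the plane would then be
# 7x + 3y + 5z = 7*1 + 3*1 + 5*0 = 10.
```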
|
1,310,530 | <p>From: $2015$ Singapore Mathematical Olympiad Secondary 2 (Grade 8) Question 21 Round 1 on 3rd June.</p>
<blockquote>
<p>Find the value of $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}$ (No use of calculators)</p>
</blockquote>
<p>My attempt:Special cases formula -> $x^2=(x+1)(x-1)+1$</p>
<p>Therefore,</p>
<p>\begin{align}
\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}&=\sqrt {(99^2-1)(101^2-1)+(100\times 2)^2}\\
\end{align}</p>
<p>Now take $y$=100</p>
<p>\begin{align}
\sqrt {(99^2-1)(101^2-1)+(100\times 2)^2}&=\sqrt {[(y-1)^2-1][(y+1)^2-1]+(2y)^2}\\
&= \sqrt {[y^2-2(y)(1)+1-1][y^2+2(y)(1)+1-1]+4y^2}\\
&= \sqrt {(y^2-2y)(y^2+2y)+4y^2}\\
&= \sqrt {y^4-4y^2+4y^2}\\
&= \sqrt {y^4}\\
&= y^2
\end{align}</p>
<p>$100^2=10000$</p>
<p>therefore $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}=10000$</p>
<p>But using the calculator,the answer is 10002.Where did I go wrong,and is there a simpler way to do this other than using long multiplication?</p>
<p>EDITED ANSWER(Error found by @Mathlove):</p>
<p>\begin{align}
\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}&=\sqrt {(99^2+1)(101^2+1)+(100\times 2)^2}\\
\end{align}</p>
<p>Now take $y$=100</p>
<p>\begin{align}
\sqrt {(99^2+1)(101^2+1)+(100\times 2)^2}&=\sqrt {[(y-1)^2+1][(y+1)^2+1]+(2y)^2}\\
&= \sqrt {[y^2-2(y)(1)+1+1][y^2+2(y)(1)+1+1]+4y^2}\\
&= \sqrt {(y^2-2y+2)(y^2+2y+2)+4y^2}\\
&= \sqrt {(y^4-4y^2)+2(y^2+2y+2)-4y-2y^2+4y^2+4y^2}\\
&= \sqrt {y^4-4y^2+2y^2+4y+4-4y-2y^2+4y^2+4y^2}\\
&= \sqrt {y^4+4y^2+4}\\
&= \sqrt {(y^2+2)^2}\\
&= y^2+2
\end{align}</p>
<p>$100^2+2=10002$</p>
<p>therefore $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}=10002$</p>
| mathlove | 78,967 | <p>Note that $$98\times 100+2\not =99^2-1$$
and that
$$100\times 102+2\not=101^2-1.$$</p>
<p>We have
$$98\times 100+2=(99-1)(99+1)+2=99^2-1+2=99^2+1$$
and
$$100\times 102+2=(101-1)(101+1)+2=101^2-1+2=101^2+1.$$</p>
|
39,828 | <p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that.</p>
<p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p>
<ul>
<li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li>
<li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li>
<li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li>
</ul>
<p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p>
<p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting.</p>
<blockquote>
<p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p>
</blockquote>
<p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p>
<p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p>
<p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. But one broad question behind my specific one is</p>
<blockquote>
<p>How much would you subscribe to the statement that
EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p>
</blockquote>
<p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p>
<hr>
<p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p>
<hr>
<p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
| roy smith | 9,449 | <p>If it interests you.</p>
|
39,828 | <p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that.</p>
<p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p>
<ul>
<li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li>
<li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li>
<li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li>
</ul>
<p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p>
<p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting.</p>
<blockquote>
<p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p>
</blockquote>
<p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p>
<p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p>
<p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. But one broad question behind my specific one is</p>
<blockquote>
<p>How much would you subscribe to the statement that
EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p>
</blockquote>
<p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p>
<hr>
<p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p>
<hr>
<p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
| Ronnie Brown | 19,949 | <p>Dan Shechtman, winner of the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, said: “The main lesson that I have learned over time is that a good scientist is a humble and listening scientist and not one that is sure 100 percent in what [they read] in the textbooks.”</p>
<p>My research on groupoids and higher groupoids was started in the 1960s by a dissatisfaction with a van Kampen theorem that did not compute the fundamental group of the circle, a basic example: but groupoids were at the time regarded as "rubbish" by many senior mathematicians, and the idea of higher van Kampen theorems using higher groupoids was described by one such for 10 years as "ridiculous". (He gave in eventually!)</p>
<p>My worry is that people may be encouraged to follow high ups, rather than to analyse a programme on mathematical grounds, and so to develop their own feeling for mathematical structures.</p>
<p>January, 2015: One needs a variety of strategies, one of which is to look at what a theory does <strong>not</strong> do but somehow in principle should. This is the notion of <strong>anomaly</strong>. I have listed 5 anomalies in standard algebraic topology in <a href="http://groupoids.org.uk/pdffiles/galway7.pdf" rel="nofollow noreferrer">this presentation</a> Dec, 2014, Galway.</p>
<p>See also the advice given to me 1964 by S. Ulam, quoted in my web page discussing the issue of <a href="http://groupoids.org.uk/famousproblems.html" rel="nofollow noreferrer">famous problems</a> in category theory.</p>
<p>Alexander Grothendieck wrote to me that: "Throughout my whole life as a mathematician, the possibility of making explicit, elegant computations has always come out by itself, as a byproduct of a thorough conceptual understanding of what was going on. Thus I never bothered about whether what would come out would be suitable for this or that, but just tried to understand -- and it always turned out that understanding was all that mattered."</p>
<p>So I always advocate writing and rewriting to make things clear to yourself, testing that by explaining to other people. At Bangor we explained to research students that a thesis must have a "thesis". So having decided on the latter, the <strong>first</strong> thing for the student to do is write up the background to that "thesis", which can always be expected to be a useful part of the final thesis. All sorts of things may turn up in that process.</p>
|
3,843,559 | <p>Can someone please give me a hint why for metric spaces we have</p>
<p><span class="math-container">$d_1(x,y)<d(x,y)\Rightarrow \{x|d(x,y)<\varepsilon\}\subset \{x|d_1(x,y)<\varepsilon\}$</span></p>
<p>I have expected the opposite:</p>
<p><span class="math-container">$d_1(x,y)<d(x,y)\Rightarrow \{x|d(x,y)<\varepsilon\}\supset \{x|d_1(x,y)<\varepsilon\}$</span></p>
| Gabriel Delgado | 671,176 | <p>The way I see it, it is a matter of precision: think of $x$ and $y$ as points, with $d_1$ the distance in centimeters and $d$ the same distance in millimeters. Numerically we then have $d_1 < d$, and every measurement using $d$ is therefore more accurate...</p>
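A one-line proof of the stated implication (my own sketch, assuming $d_1(x,y)<d(x,y)$ holds for all points): if $d(x,y)<\varepsilon$, then

```latex
d_1(x,y) < d(x,y) < \varepsilon ,
```

so every point of the $\varepsilon$-ball for $d$ already lies in the $\varepsilon$-ball for $d_1$, which is exactly $\{x\mid d(x,y)<\varepsilon\}\subset \{x\mid d_1(x,y)<\varepsilon\}$: the smaller metric admits more points into its ball.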
|
1,729,893 | <p>Let $\nu_1, \nu_2, \nu_3 \in \mathbb{R}$ not all be zero.</p>
<p>I wish to show
$$\nu_1^2 + \nu_1\nu_2 + \nu_2^2+\nu_2\nu_3 + \nu_3^2 > 0\text{.}$$</p>
<p><a href="http://www.wolframalpha.com/input/?i=x%5E2%2Bx*y%2By%5E2%2By*z%2Bz%5E2+%3E+0" rel="nofollow">Wolfram</a> seems to suggest splitting this into cases, but I'm wondering if there's a shorter way to approach this. This expression seems very similar to the binomial expansion, minus the fact that we have three terms that are squared and two cross-terms occurring (rather than one cross-term multiplied by $2$).</p>
<p>As for my work, if any one of these are $0$, (I think) this is a trivial exercise. If any two of these are $0$, this is a trivial exercise (you're left with a squared non-zero term). But if all three are non-zero? Then I'm at a loss on how to pursue this, because there isn't a clean way to deal with three variables (or is there?). I've tried seeing if Wolfram could perhaps factor the above. It can't, but maybe, I thought, we could try working with
$$(\nu_1 + \nu_2 + \nu_3)^2 = (\nu_1^2 + \nu_1\nu_2 + \nu_2^2+\nu_2\nu_3 + \nu_3^2) +2\nu_1\nu_3+\nu_1\nu_2+\nu_2\nu_3$$
but there is no guarantee that this is $> 0$ either (take $\nu_3 = -(\nu_1 + \nu_2)$, for example).</p>
| marty cohen | 13,079 | <p>Expanding Deathkamp Drone's suggestion,
though I thought of it independently:</p>
<p>We want to show that
$a^2+ab+b^2+bc+c^2 > 0
$.</p>
<p>This is the same as
$2a^2+2ab+2b^2+2bc+2c^2 > 0
$.</p>
<p>$\begin{array}{rl}
2a^2+2ab+2b^2+2bc+2c^2
&=a^2+a^2+2ab+b^2+b^2+2bc+c^2+c^2\\
&=a^2+(a+b)^2+(b+c)^2+c^2\\
&\gt 0\\
\end{array}
$</p>
<p>unless
$a=0, a+b=0, b+c=0, c=0$
which implies that
$a=b=c=0$.</p>
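The sum-of-squares identity used above can be machine-checked on an integer grid (a small sketch I added; agreement of two quadratic forms on enough points settles the identity):

```python
# Check 2(a^2 + ab + b^2 + bc + c^2) == a^2 + (a+b)^2 + (b+c)^2 + c^2
# for all integer triples in [-5, 5]^3.
for a in range(-5, 6):
    for b in range(-5, 6):
        for c in range(-5, 6):
            lhs = 2 * (a * a + a * b + b * b + b * c + c * c)
            rhs = a * a + (a + b) ** 2 + (b + c) ** 2 + c * c
            assert lhs == rhs

print("identity verified on [-5, 5]^3")
```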
|
356,940 | <p>According to Wolfie:</p>
<p>$2^{-1} \bmod 5 = 3$</p>
<p><a href="http://www.wolframalpha.com/input/?i=2%5E-1+mod+5" rel="nofollow">http://www.wolframalpha.com/input/?i=2%5E-1+mod+5</a></p>
<p>Why is that?</p>
| HSN | 58,629 | <p>You need to think by yourself what inverses mean, without thinking about mod 5 first. The multiplicative inverse of $a$ -if it exists- is some element $b$ such that $a\cdot b = 1$.</p>
<p>Now work mod 5. The inverse of $2$ is an element $b\in\mathbb{Z}/(5)$ such that $2\cdot b\equiv 1 \bmod 5$. It shouldn't be too hard to verify that $b=3$ does the trick, i.e. $2\cdot3\equiv 1\mod 5$. If you don't know that inverses are unique yet, it may be a good exercise to establish that result first.</p>
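For what it's worth, modern Python can do this check directly — since version 3.8 the three-argument `pow` accepts a negative exponent (a side note, not part of the original answer):

```python
# pow(a, -1, m) returns the multiplicative inverse of a modulo m,
# raising ValueError when gcd(a, m) != 1.
inv = pow(2, -1, 5)
print(inv)            # -> 3
print((2 * inv) % 5)  # -> 1, i.e. 2 * 3 = 6 ≡ 1 (mod 5)
```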
|
1,843,724 | <p>I know that the Maclaurin expansion of $e^x$ is $$1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...$$
But I'm not sure how to find the Maclaurin series here.
I tried this:</p>
<p>$$
f'(x)=-2xe^{-x^2}, \qquad f'(0)=0
$$
And the same happens for every derivative that follows, so how can I get a power series out of it?</p>
| Enrico M. | 266,764 | <p>That is easier than you think.</p>
<p>General expansion:</p>
<p>$$e^A = 1 + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \cdots$$</p>
<p>So if $A = -x^2$ you get</p>
<p>$$e^{-x^2} = 1 - x^2 + \frac{1}{2}(-x^2)^2 + \frac{1}{6}(-x^2)^3 + \cdots$$</p>
<p>So</p>
<p>$$e^{-x^2} = 1 - x^2 + \frac{x^4}{2!} - \frac{x^6}{3!} + \cdots$$</p>
<p>To find the general series expansion, you just have to remember the series expansion of $e^A$:</p>
<p>$$e^A = \sum_{k = 0}^{+\infty} \frac{A^k}{k!}$$</p>
<p>So again: if $A = -x^2$ you gain finally</p>
<p>$$\boxed{e^{-x^2} = \sum_{k = 0}^{+\infty} \frac{(-x^2)^k}{k!} = \sum_{k = 0}^{+\infty} \frac{(-1)^k}{k!} x^{2k}}$$</p>
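A quick numeric sanity check of the boxed series (my addition): partial sums should converge to `exp(-x**2)`.

```python
import math

def series_exp_neg_sq(x, terms=30):
    # Partial sum of sum_{k>=0} (-1)^k x^(2k) / k!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(k)
               for k in range(terms))

for x in (0.0, 0.5, 1.0, 2.0):
    approx = series_exp_neg_sq(x)
    exact = math.exp(-x * x)
    print(f"x = {x}: series = {approx:.12f}, exp(-x^2) = {exact:.12f}")
    assert abs(approx - exact) < 1e-9
```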
|
2,086,134 | <p>Solve for $x$ if $4^x + x^x = 20$ where $x \in \mathbb{Z}$ </p>
<p>My Attempt: </p>
<p>By inspection,<br>
$x=2$</p>
<p>Now,
$4^2 + 2^2 = 20$</p>
<p>$16+4=20$</p>
<p>$20=20$</p>
<p>Which is True.</p>
<p>But, I Could not do it by any process. Can anyone help me? </p>
<p>Thanks in Advance.</p>
| Community | -1 | <p>If the solution is an integer,</p>
<p>$$4^1+1^1<20,\\4^0+0^0<20^*,\\\frac1{4^n}+\frac{(-1)^n}{n^n}\le1+1<20$$ and for all $n>2$, $$4^n+n^n>4^2+2^2=20.$$</p>
<hr>
<p>$^*$Whatever the meaning of $0^0$.</p>
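A brute-force scan over a window of integers (my own sketch; it relies on Python's convention that `0**0 == 1`, matching the footnote's caveat) confirms $x=2$ is the only integer solution in range:

```python
# Search integer x in [-10, 10] with 4^x + x^x == 20.  Negative x gives
# small rational values, so a tolerance comparison in floats suffices.
solutions = [x for x in range(-10, 11)
             if abs(4.0 ** x + float(x) ** x - 20) < 1e-9]
print(solutions)  # -> [2]
```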
|
341,621 | <p>Is there an inequality such as
$$(a+b)^2 \leq 2(a^2 + b^2)$$
for a general power $k$:
$$(a+b)^k \leq C(a^k + b^k)?$$</p>
| Giuseppe Negro | 8,157 | <p>Take the ratio $$f(a, b)=\frac{(a+b)^k}{a^k+b^k}$$and observe that it is homogeneous of degree zero, that is $f(a, b)=f(\lambda a, \lambda b)$ for all $\lambda >0$. So $f$ is constant along rays in $\mathbb{R}^2$ and so, in particular,
$$f(a, b)\le \max_{(x, y)\in \mathbb{S}^1} f(x, y),$$
where $\mathbb{S}^1$ is the unit circle. This means that the sought inequality is true with $C=\max_{\mathbb{S}^1} f(x,y)$. You can check that with $k=2$ you recover $C=2$.</p>
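For $a,b\ge 0$ one can also pin down an explicit (and sharp) constant — a standard convexity sketch I'm adding, not part of the answer above. Since $t\mapsto t^k$ is convex for $k\ge1$,

```latex
\left(\frac{a+b}{2}\right)^{k}\le\frac{a^{k}+b^{k}}{2}
\qquad\Longrightarrow\qquad
(a+b)^{k}\le 2^{\,k-1}\bigl(a^{k}+b^{k}\bigr),
```

so $C=2^{k-1}$ works, and taking $a=b$ shows it is best possible; at $k=2$ this recovers $C=2$.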
|
2,703,559 | <p>My assignment asks me to prove that the only automorphism of order 2 of $\mathbb{Z}_q$ is $m \mapsto -m$ for $q = 3$ and $q = 5$ and $q = 7$. I have been stuck for ages, and now I wonder if it is true. I need some help to get started. This is my attempt: Write $\phi(m) = km$ and assume $k$ and $q$ are relatively prime (this is necessary, otherwise $\phi$ is not an automorphism); then:
$$
m = \phi^2(m) = k^2m \implies k^2 \equiv 1 \mod q
$$
But I don't know how to go on from there. I can't see that $k = -1$ is the only option.</p>
| cansomeonehelpmeout | 413,677 | <p>You are almost there. </p>
<p>You're right that $k=-1$ is a possibility, since $k^2\equiv_q 1$ (you also know that there are at most two values $k$ can have, since $\mathbb{Z}_q$ is a field). Remember that if $k=1$, i.e., $m\mapsto m$, then the order of the automorphism is $1$, not $2$.</p>
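An exhaustive check over the three moduli (my addition) makes the "at most two roots" point concrete:

```python
# Over Z_q (q prime) the polynomial k^2 - 1 has at most two roots,
# and they are exactly k ≡ ±1; verify for q = 3, 5, 7.
for q in (3, 5, 7):
    roots = [k for k in range(1, q) if (k * k) % q == 1]
    print(q, roots)
    assert roots == [1, q - 1]
```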
|
3,892,595 | <p>While solving number theory problems I sometimes I have to use a function that can be defined as</p>
<p><span class="math-container">$$
f(n) =
\begin{cases}
1 & \text{ if } n = 1, \\
0 & \text{ if } n > 1.
\end{cases}
$$</span></p>
<p>where <span class="math-container">$ n $</span> is a positive integer.</p>
<p>For example, using this function, we can define Euler's totient function as</p>
<p><span class="math-container">$$ \varphi(n) = \sum_{k=1}^n f(\gcd(k, n))$$</span></p>
<p>where <span class="math-container">$ n $</span> is a positive integer.</p>
<p>Is there a name or notation for such a function already? I just want to make sure that I do not create my own notation for something that already has a popular name or notation in mathematics.</p>
| davidlowryduda | 9,754 | <p>In Apostol's Introduction to Analytic Number Theory, he calls this function <span class="math-container">$I$</span>.</p>
<p><a href="https://i.stack.imgur.com/Voa4A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Voa4A.png" alt="enter image description here" /></a></p>
<p>Part of Apostol's rationale is that <span class="math-container">$I$</span> acts like the identity element in the group of arithmetic functions (where Dirichlet multiplication is the operator), and so something like <span class="math-container">$I$</span> is a good name.</p>
<p>Elsewhere I've seen it called many things. There is no standard name, though there are lots of unambiguous names. If I were king of the notational universe, I might use Kronecker-delta based names like <span class="math-container">$\delta(\cdot)$</span>, <span class="math-container">$\delta_1(\cdot)$</span>, or <span class="math-container">$\delta_{[n=1]}(\cdot)$</span> --- but (fortunately) I am not king of the notational universe.</p>
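Whatever name one prefers, the totient formula from the question is easy to test against a direct count (a small check I added, not Apostol's):

```python
from math import gcd

def I(n):
    # Dirichlet identity element: 1 at n == 1, else 0.
    return 1 if n == 1 else 0

def phi(n):
    # Totient via the question's formula: sum of I(gcd(k, n)).
    return sum(I(gcd(k, n)) for k in range(1, n + 1))

print([phi(n) for n in range(1, 11)])  # -> [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]
```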
|
3,892,595 | <p>While solving number theory problems I sometimes I have to use a function that can be defined as</p>
<p><span class="math-container">$$
f(n) =
\begin{cases}
1 & \text{ if } n = 1, \\
0 & \text{ if } n > 1.
\end{cases}
$$</span></p>
<p>where <span class="math-container">$ n $</span> is a positive integer.</p>
<p>For example, using this function, we can define Euler's totient function as</p>
<p><span class="math-container">$$ \varphi(n) = \sum_{k=1}^n f(\gcd(k, n))$$</span></p>
<p>where <span class="math-container">$ n $</span> is a positive integer.</p>
<p>Is there a name or notation for such a function already? I just want to make sure that I do not create my own notation for something that already has a popular name or notation in mathematics.</p>
| unobservable_node | 442,550 | <p>It is usually called an indicator function in signal processing. It is denoted as either <span class="math-container">$\delta[n-1]$</span> (a shifted unit impulse) or <span class="math-container">$\mathbb{1}_{n=1}$</span>.</p>
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview:</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far; all we know is the solution (which was given <strong>after</strong> we had given up):</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution?</p>
| apnorton | 23,353 | <p><strong>Spoiler Alert:</strong> <em>(I use the answer given above in the response below. If you don't want to see it, you may want to skip this answer...)</em></p>
<p>I'm replacing $\times$ by $\circ$, as the latter is more commonly used with unknown operations. I hate it when people redefine a common symbol, then use "$=$" to describe a relationship.</p>
<p>Note that $$\begin{align}3\circ4 &= 4\cdot 2\\
4\circ 5 &= 5\cdot 10\\
5\circ 6 &= 6\cdot 5\\
6\circ 7 &= 7\cdot 7 \\
7\circ 8 &= 8\cdot 28 \\
\end{align}$$</p>
<p>Thus, we can define:
$$a\circ b\quad{\buildrel \rm def\over =}\quad b\cdot x_a$$
Where $x_n$ is some sequence. OEIS yields three possible sequences:
$$x_n = \frac{\binom{n+2}{2}\gcd(n,3)}{3},\quad n \ge 0$$
(<a href="http://oeis.org/A234041">A234041</a>)
$$x_n = \text{denominatorOf}\left(\frac{(n-2)(n+3)}{(n)(n+1)}\right)\quad n \ge 3$$
(<a href="https://oeis.org/A027626">A027626</a>: GCD of $n$-th and $(n+1)$st tetrahedral numbers, offset by me for this problem)</p>
<p>The last sequence from OEIS is <a href="http://oeis.org/A145911">A145911</a> which is not promising <em>at all</em>. (It's a combination of, what appears to be, $3$ other sequences.)</p>
|
1,401,760 | <p>First I tried to use integration:
$$y=\lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{n}$$
$$\log y=\lim_{n\to\infty}\sum_{r=1}^n\log\frac{a}{r}$$
But I could not express it as a <em>Riemann integral</em>. Now I am thinking about the sandwich theorem.</p>
<p>$$\frac{a^n}{n!}=\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{t} \cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}=\frac{a^t}{t!}\cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}$$
Since $\frac{a}{t+1}>\frac{a}{t+2}>\frac{a}{t+3}>\cdots>\frac{a}{n}$
$$\frac{a^n}{n!}<\frac{a^t}{t!}\cdot\big(\frac{a}{t+1}\big)^{n-t}$$
since $\frac{a}{t+1}<1$, $$\lim_{n\to\infty}\big(\frac{a}{t+1}\big)^{n-t}=0$$
Hence, $$\lim_{n\to\infty}\frac{a^t}{t!}\big(\frac{a}{t+1}\big)^{n-t}=0$$
And by using sandwich theorem, $y=0$. Is this correct?</p>
| haqnatural | 247,767 | <p>You can prove it as follows:</p>
<p>For every $\varepsilon >0$, fix $m$ with $m+1>\left| a \right|$; then, if $n$ is big enough,
$$0<\left| \frac {a^n}{ n! } \right| =\frac { \left| a \right| }{ 1 } \cdot \frac { \left| a \right| }{ 2 } \cdots \frac { \left| a \right| }{ m } \cdot \frac { \left| a \right| }{ m+1 } \cdots \frac { \left| a \right| }{ n } <\frac { { \left| a \right| }^m }{ m! } { \left( \frac { \left| a \right| }{ m+1 } \right) }^{ n-m }<\varepsilon $$</p>
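Numerically the mechanism is visible at once (an illustration I added, not part of the proof): for $a=10$ the terms grow while $n<|a|$ and then collapse, since each new factor $|a|/n$ is below $1$.

```python
import math

a = 10.0
ns = (1, 5, 10, 20, 40, 80)
values = [a ** n / math.factorial(n) for n in ns]
for n, v in zip(ns, values):
    print(f"n = {n:3d}: a^n / n! = {v:.3e}")

assert values[-1] < 1e-30  # super-exponential decay once n >> a
```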
|
163,043 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/160847/polynomials-irreducible-over-mathbbq-but-reducible-over-mathbbf-p-for">Polynomials irreducible over $\mathbb{Q}$ but reducible over $\mathbb{F}_p$ for every prime $p$</a> </p>
</blockquote>
<p>Anyone know an $f\in \mathbb{Z}[x]$ that is irreducible over $\mathbb{Q}$ but whose reduction mod $p$ is reducible over the first three positive primes?</p>
| Community | -1 | <p>"Most" polynomials are irreducible. So if we can find any polynomial reducible modulo 2, 3, and 5, we should have no trouble finding one that is also irreducible over the integers.</p>
<p>And we can even consider the three primes independently, and use the Chinese Remainder Theorem (CRT) to combine our results.</p>
<p>The simplest reducible polynomial modulo 2 is $x^2$. The same modulo 3 and 5. So applying the CRT is trivial in this case, since $x^2$ reduces to $x^2$ modulo 2 3 and 5.</p>
<p>Alas, $x^2$ is reducible over the integers. But "most" polynomials are irreducible, so just pick another polynomial that reduces to $x^2$ modulo 2, 3, and 5. As $30$ is equivalent to $0$ modulo 2, 3, and 5, we can add it to any coefficient without changing its residues modulo 2, 3, and 5.</p>
<p>We were, in some sense, incredibly unlucky with our first guess of $x^2$; any other guess is extremely likely to find a good example by choosing any other one. $x^2 + 30$ turns out to be irreducible, so there you go!</p>
<p>Let's do a more involved example. Suppose we choose</p>
<ul>
<li>$f(x) \equiv x(x-1) = x^2 + x\pmod 2 $</li>
<li>$f(x) \equiv (x-1)(x-2) = x^2 + x + 2 \pmod 3 $</li>
<li>$f(x) \equiv x^3 \pmod 5 $</li>
</ul>
<p>Applying the CRT to each coefficient gives</p>
<ul>
<li>$f(x) \equiv 6 x^3 + 25 x^2 + 25x + 20 \pmod {30}$</li>
</ul>
<p>Happily, $6 x^3 + 25 x^2 + 25x + 20$ is irreducible over the integers.</p>
|
2,172,870 | <p>I have two vectors $u$, and $v$, I know that $\mid u \mid$ = 3 and $\mid v \mid$ = 5, and that $u\cdot v = -12$. I need to calculate the length of the vector $(3u+2v) \times (3v-u)$.</p>
<p>Because I know the dot product of the vectors, I know the cosine of the angle between them, $\cos \theta = -0.8$, and also $\sin \theta = 0.6$. Using this I started calculating the components of the vectors, but got nowhere. Am I missing some sort of fast, clever way of doing this?</p>
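For reference, the quick route is bilinearity of the cross product together with $u\times u=v\times v=0$ (a worked sketch I'm adding):

```latex
(3u+2v)\times(3v-u)
= 9\,(u\times v)-3\,(u\times u)+6\,(v\times v)-2\,(v\times u)
= 11\,(u\times v),
```

so the length is $11\,|u\times v| = 11\,|u||v|\sin\theta = 11\cdot3\cdot5\cdot0.6 = 99$ — no components needed.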
| Lelouch | 152,626 | <p>The definition of a 'closed' knight's tour on an $m \times n$ board is a sequence of steps from a starting square $a_1$ to another square $a_{mn}$, such that every square is visited exactly once and the last square is only one knight's step away from $a_1$. Having said that, it is obvious that for $mn \equiv 1 \pmod 2$ there exists no closed tour. </p>
<p>Short proof:</p>
<p>Suppose $a_1$ is black. Since a knight's move always changes the color of the square, $a_i$ must be black for every odd $i \le mn$. Since $mn$ is odd, $a_{mn}$ must be black, which implies that it cannot be a square one knight's step away from $a_1$. Thus there exists no closed tour for odd $mn$.</p>
|
3,609,791 | <p><span class="math-container">$f(x) = \frac{x}{(1+x^2)}$</span></p>
<p>I need to find the range of function.</p>
<p>My method-</p>
<p>Let <span class="math-container">$f(x)=y$</span> </p>
<p>Then <span class="math-container">$x^2y-x+y=0$</span></p>
<p><span class="math-container">$$x=\frac{1\pm\sqrt{1-4y^2}}{2y}$$</span></p>
<p>For <span class="math-container">$x$</span> to be real,</p>
<p><span class="math-container">$1-4y^2\ge0$</span> <strong>and</strong> <span class="math-container">$y\neq 0$</span></p>
<p><span class="math-container">$(2y+1)(2y-1)\le0$</span> <strong>and</strong> <span class="math-container">$y\neq0$</span></p>
<p>Hence <span class="math-container">$y \in [-1/2,1/2] -\{0\}$</span></p>
<p>So far so good.</p>
<p>But if I put <span class="math-container">$0$</span> in the function,</p>
<p>Then <span class="math-container">$f(0)= 0/1+0 =0$</span></p>
<p>While my solution says that <span class="math-container">$y$</span> cannot be zero. </p>
<p>Where am I going wrong?</p>
| Michael Rozenberg | 190,319 | <p>The expression can indeed be equal to $0$.</p>
<p>You solved this problem for <span class="math-container">$y\neq0$</span>.</p>
<p>Now, it is easy to see that <span class="math-container">$y=0$</span> is attained.</p>
<p>I like the following way.</p>
<p>Let <span class="math-container">$x>0.$</span></p>
<p>Thus, by AM-GM
<span class="math-container">$$\frac{x}{x^2+1}\leq\frac{x}{2\sqrt{x^2\cdot1}}=\frac{1}{2}.$$</span>
The equality occurs for <span class="math-container">$x=1$</span>.</p>
<p>Also, for <span class="math-container">$x<0$</span> by AM-GM again we obtain:
<span class="math-container">$$\frac{x}{x^2+1}\geq\frac{x}{2\sqrt{x^2\cdot1}}=\frac{x}{2|x|}=\frac{x}{2(-x)}=-\frac{1}{2}.$$</span>
The equality occurs for <span class="math-container">$x=-1$</span>.</p>
<p>Now, let <span class="math-container">$x=0$</span>.</p>
<p>We obtain a value <span class="math-container">$y=0$</span>.</p>
<p>Thus, since <span class="math-container">$f$</span> is a continuous function, we get the following range:
<span class="math-container">$$\left[-\frac{1}{2},\frac{1}{2}\right]$$</span></p>
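A numeric spot check of this range (my addition): sampled values never leave $[-\tfrac12,\tfrac12]$, and the endpoints are hit at $x=\pm1$.

```python
def f(x):
    return x / (1 + x * x)

samples = [k / 100.0 for k in range(-1000, 1001)]
values = [f(x) for x in samples]
assert all(-0.5 <= v <= 0.5 for v in values)
assert f(1.0) == 0.5 and f(-1.0) == -0.5 and f(0.0) == 0.0
print(f"min = {min(values)}, max = {max(values)}")
```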
|
4,356,457 | <p>Let <span class="math-container">$k$</span> be a finite field of size <span class="math-container">$q$</span>, let <span class="math-container">$n\ge1$</span>, and let <span class="math-container">$\mathrm{GL}_n(k)$</span> and <span class="math-container">$M_n(k)$</span> be the group of invertible matrices and ring of <span class="math-container">$n\times n$</span> matrices, respectively.</p>
<p>Now, I have calculated (by grouping by the characteristic polynomial) that the number of conjugacy classes are:</p>
<ul>
<li><span class="math-container">$q-1$</span> in <span class="math-container">$\mathrm{GL}_1(k)=k^\times$</span>, <span class="math-container">$q$</span> in <span class="math-container">$M_1(k)=k$</span>;</li>
<li><span class="math-container">$q^2-1$</span> in <span class="math-container">$\mathrm{GL}_2(k)$</span>, <span class="math-container">$q^2+q$</span> in <span class="math-container">$M_2(k)$</span>; and</li>
<li><span class="math-container">$q^3-q$</span> in <span class="math-container">$\mathrm{GL}_3(k)$</span>, <span class="math-container">$q^3+q^2+q$</span> in <span class="math-container">$M_3(k)$</span>.</li>
</ul>
<p>I conjecture that the pattern continues on the <span class="math-container">$M_n(k)$</span>-side, i.e., that there are always <span class="math-container">$q^n+\dots+q$</span> conjugacy classes in <span class="math-container">$M_n(k)$</span>. However, I cannot seem to prove this. How should I proceed?</p>
<p>Also, is there an analogous formula for the number of conjugacy classes in <span class="math-container">$\mathrm{GL}_n(k)$</span>?</p>
<p><strong>Note:</strong> By conjugacy class in <span class="math-container">$M_n(k)$</span>, I mean the equivalence classes under the relation, for <span class="math-container">$a,b\in M_n(k)$</span>, of <span class="math-container">$a\sim b$</span> iff there exists a <span class="math-container">$u\in\mathrm{GL}_n(k)$</span> such that <span class="math-container">$a=ubu^{-1}$</span>.</p>
| reuns | 276,986 | <p>I find <span class="math-container">$$\sum_{n\ge 0} a_n t^n=\prod_{e=1}^\infty \frac1{1-t^e|k|} \qquad \text{and}\qquad\sum_{n\ge 0} b_n t^n=\prod_{e=1}^\infty \frac{1-t^e}{1-t^e|k|}$$</span> for the generating functions of the number of <span class="math-container">$GL_n(k)$</span>-conjugacy classes in <span class="math-container">$M_n(k)$</span> and <span class="math-container">$GL_n(k)$</span> respectively.</p>
<p>The idea is that <span class="math-container">$$\frac1{1-t|k|}= \sum_{f\in k[x]\text{ monic}} t^{\deg(f)}$$</span>
so each term in the expansion of <span class="math-container">$\prod_{e=1}^\infty \frac1{1-t^e|k|}$</span> is of the form <span class="math-container">$t^{\sum_{e\ge 1}e \deg(f_e)}$</span> which corresponds to the conjugacy class of matrices <span class="math-container">$A\in M_n(k), n = \sum_{e\ge 1}e \deg(f_e)$</span> such that the <span class="math-container">$k[x]$</span>-module structure on <span class="math-container">$k^n$</span> given by <span class="math-container">$A$</span> is isomorphic to the <span class="math-container">$k[x]$</span>-module <span class="math-container">$$\prod_{e\ge 1} \prod_{h \text{ irreducible},h^m \| f_e} (k[x]/(h^e))^m$$</span></p>
|
264,595 | <p>I've been trying to find an asymptotic expansion of the following series</p>
<p>$$C(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n!{\sqrt{n}} }$$</p>
<p>and</p>
<p>$$L(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n(n!{\sqrt{n}}) }$$</p>
<p>around $+\infty$, in the form</p>
<p>$$\exp(x^2)\Big(1+\frac{a_1}{x}+\frac{a_2}{x^2} + .. +\frac{a_k}{x^k}\Big) + O\Big(\frac{\exp(x^2)}{x^{k+1}}\Big)$$</p>
<p>where $x$ is a positive real number. As far as I progressed, I obtained only</p>
<p>$$C(x) = \exp(x^2) + O\Big(\frac{\exp(x^2)}{x}\Big).$$</p>
<p>I tried to use ideas from <a href="https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1">https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1</a>, <a href="https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk">https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk</a>, <a href="https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&lq=1">https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&lq=1</a>, but I was unable to make them work in my case. </p>
<p>Any suggestions would be greatly appreciated!</p>
<p>(If someone has a solid culture in this kind of things, is there are any specific names for $C(x) $ and $L(x) $ ?).</p>
<p>PS:</p>
<p>This question was asked on the math.SE but was closed as duplicate of <a href="https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100">https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100</a>. However, the latter question provides only the first term of the asymptotic expansion and does not address sufficiently the problem considered here.</p>
| esg | 48,831 | <p>I sketch the arguments for $C(x)$, the arguments for $L(x)$ are essentially the same.</p>
<p>The specific form of the sum suggests probabilistic arguments.
Let $X_x$ be a $\mathrm{Poiss}(x^2)$-distributed random variable and note that
$$C(x)\,e^{-x^2}=\mathbb{E}\frac{x}{\sqrt{X_x}}\,1_{\{X_x\geq 1\}}$$
It is known that $\frac{X_x -x^2}{x}\longrightarrow N(0,1)$ (standard normal) in distribution as $x\longrightarrow \infty$ and that
all moments $\mathbb{E}\left(\frac{X_x-x^2}{x}\right)^k$ of $X_x$ converge to the corresponding moments of $N(0,1)$, so that (for large $x$) $X_x$ is concentrated around $x^2$, with deviations of order $x$.</p>
<p>To use that information split $\{X_x\geq 1\}$ on the rhs into (say) the parts $1\leq X_x <\tfrac{1}{2}x^2$, $X_x-x^2 >\tfrac{1}{2} x^{2}$ and
$|X_x-x^2| \le \tfrac{1}{2}x^{2}$.</p>
<p>By routine arguments the integrals over the first two parts are asymptotically exponentially small.
For the remaining part write
\begin{align*}
\mathbb{E}\frac{x}{\sqrt{X_x}}\,1_{\{|X_x-x^2|\leq \tfrac{1}{2}x^2\}}
&=\mathbb{E}\frac{x}{\sqrt{x^2+(X_x-x^2)}}\,1_{\{|X_x-x^2|\leq \tfrac{1}{2} x^2\}}\\
&=\mathbb{E}\frac{1}{\sqrt{1 +\frac{1}{x^2}(X_x-x^2)}}\,1_{\{|X_x-x^2|\leq \tfrac{1}{2} x^2\}}\\
&=\mathbb{E}\sum_{k=0}^\infty {-\frac{1}{2} \choose k} \frac{1}{x^{2k}}(X_x-x^2)^k
\,1_{\{|X_x-x^2|\leq \tfrac{1}{2}x^2\}}\\
%C(x)\,e^{-x^2}&= 1+\frac{3}{8}x^{-2} + \frac{65}{128}x^{-4} + \frac{1225}{1024}x^{-6} + \frac{1619583}{425984}x^{-8}+\mathcal{O}(x^{-10})\\
%L(x)\,x^2\,e^{-x^2}&= 1+\frac{15}{8}x^{-2} + \frac{665}{128}x^{-4}+\frac{19845}{1024}x^{-6}+\frac{37475823}{425984}x^{-8}+\mathcal{O}(x^{-10})
\end{align*}
Clearly the series may be integrated termwise, and completing the tails changes it only by asymptotically exponentially small terms. For $k\geq 2$ the central moment
$c_k(x):=\mathbb{E}\left(X_x-x^2\right)^k$ is a polynomial in $x^2$ of degree $\lfloor k/2\rfloor$.
Thus (after regrouping of terms) the formal series
$$\sum_{k=0}^\infty x^{-2k}{-\tfrac{1}{2} \choose k} c_k(x)$$
gives a full asymptotic expansion of $C(x)e^{-x^2}$. Evaluating the first eight terms gives
$$C(x)\,e^{-x^2}= 1+\frac{3}{8}x^{-2} + \frac{65}{128}x^{-4} + \frac{1225}{1024}x^{-6} + \frac{131691}{32768}x^{-8}+\mathcal{O}(x^{-10})$$</p>
<p><strong>EDIT</strong>: I corrected the coefficient of $x^{-8}$. Thanks to Johannes Trost
for pointing out that I miscalculated. Similarly the coefficient of $x^{-8}$
in the asymptotic series for $L(x)$ given in the comment must be corrected.</p>
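<p>The expansion can be checked numerically by summing the series directly (a quick sketch; working with logarithms avoids overflow of $x^{2n+1}$ and $n!$, and the truncation point $n=600$ and test value $x=10$ are arbitrary choices):</p>

```python
import math

def C_times_exp(x, nmax=600):
    # C(x) * exp(-x^2) = sum_{n>=1} x^(2n+1) / (n! sqrt(n)) * exp(-x^2),
    # computed term-by-term in log space to avoid overflow.
    total = 0.0
    for n in range(1, nmax + 1):
        log_term = ((2 * n + 1) * math.log(x) - math.lgamma(n + 1)
                    - 0.5 * math.log(n) - x * x)
        total += math.exp(log_term)
    return total

x = 10.0
exact = C_times_exp(x)
# First four terms of the asymptotic expansion above; the neglected
# x^{-8} term is about 4.0e-8 at x = 10.
approx = 1 + 3 / (8 * x**2) + 65 / (128 * x**4) + 1225 / (1024 * x**6)
```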
|
3,636,040 | <p>I am reading a paper titled "Cross-Calibration for Data Fusion of EO-1/Hyperion and Terra/ASTER" which mentions a particular minimization problem
<img src="https://i.stack.imgur.com/UgZrB.png" alt="Frobenius norm of y(I)-r(I)x"></p>
<p>I am not able to comprehend how this equation is converted (during implementation, in the code) to a quadratic optimization problem specified in the form of <img src="https://i.stack.imgur.com/x0eWT.png" alt="generic form of quadratic optimization equation"> and how would the matrices <span class="math-container">$H$</span> and <span class="math-container">$f$</span> be computed. </p>
<p>I know the question is a bit ambiguous and might require more information. But I'm just looking for some guidance as to how to begin with the conversion between the two forms (Frobenius norm and Quadratic optimization)</p>
| Siong Thye Goh | 306,553 | <p>For your paper <span class="math-container">$\tilde{y}_i$</span> and <span class="math-container">$r_i$</span> are row vectors. </p>
<p>We have </p>
<p><span class="math-container">\begin{align}\|\tilde{y_i}-r_iX_i\|_F^2 &= (\tilde{y_i}-r_iX_i)(\tilde{y_i}-r_iX_i)^T\\
&=\|\tilde{y_i}\|_F^2-2r_iX_i\tilde{y}_i^T+r_i X_i X_i^Tr_i^T\end{align}</span></p>
<p>which is in quadratic form.</p>
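<p>Matching terms with the generic form <span class="math-container">$\frac12 x^T H x + f^T x$</span> suggests <span class="math-container">$H = 2X_iX_i^T$</span> and <span class="math-container">$f = -2X_i\tilde y_i^T$</span> (a sketch; the factor of 2 and dropping the constant <span class="math-container">$\|\tilde y_i\|^2$</span>, which does not affect the minimizer, are my conventions):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5                    # r is a 1 x d row vector, X is d x n, y is 1 x n
X = rng.standard_normal((d, n))
y = rng.standard_normal((1, n))
r = rng.standard_normal((1, d))

# ||y - r X||_F^2 = ||y||^2 - 2 r X y^T + r (X X^T) r^T,
# so in the generic QP form (1/2) r H r^T + f^T r^T + const:
H = 2 * X @ X.T                # (d x d), positive semidefinite
f = -2 * (X @ y.T)             # (d x 1)
const = (y @ y.T).item()

direct = float(np.linalg.norm(y - r @ X) ** 2)
quadratic = (0.5 * r @ H @ r.T + r @ f).item() + const
```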
|
3,169,258 | <p>I need to evaluate on the circle <span class="math-container">$\left|z\right|=\pi$</span> the integral
<span class="math-container">$$\int_{\left|z\right|=\pi}\frac{\left|z\right|e^{-\left|z\right|}}{z}dz.$$</span>
The function is not holomorphic there. Anyway, I tried to integrate it using polar coordinates and simplifying the modulus, and I got <span class="math-container">$2\pi e^{-\pi}$</span> while the result should be <span class="math-container">$2\pi^2 ie^{-\pi}$</span>.
I'm sure it is trivial and I overlooked a stupid error. Can anybody tell me where?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>Let <span class="math-container">$z = \pi e^{i\theta}, \theta\in[0,2\pi]$</span>:
<span class="math-container">$$
\int_{|z|=\pi}\frac{|z|e^{-|z|}}{z}dz =
\int_0^{2\pi}\frac{\pi e^{-\pi}}{\pi e^{i\theta}}\,\pi i e^{i\theta}\,d\theta = 2\pi^2 i e^{-\pi}.
$$</span>
<strong>But...</strong> Cauchy formula can be used:
<span class="math-container">$$
\int_{|z|=\pi}\frac{|z|e^{-|z|}}{z}dz =
\int_{|z|=\pi}\frac{\pi e^{-\pi}}{z}dz = 2\pi^2 i e^{-\pi}.
$$</span></p>
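<p>The value can also be confirmed by direct numerical integration of the parametrized integral (a quick sketch; the midpoint-rule resolution is an arbitrary choice):</p>

```python
import cmath, math

N = 200_000                                  # sample points on the circle
total = 0j
for k in range(N):
    theta = 2 * math.pi * (k + 0.5) / N      # midpoint rule in theta
    z = math.pi * cmath.exp(1j * theta)      # z on the circle |z| = pi
    dz = 1j * math.pi * cmath.exp(1j * theta) * (2 * math.pi / N)
    total += abs(z) * cmath.exp(-abs(z)) / z * dz

expected = 2j * math.pi**2 * math.exp(-math.pi)
```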
|
470,081 | <p>There is a question from an old topology prelim that is somewhat giving me a hard time. Consider the cylinder $X= S^1 \times [-1,1]$. Now we define an equivalence relation $\sim$ as follows: For points $v,v' \in S^1$, we have $(v,-1) \sim (v',-1)$ and $(v,1) \sim (v',1)$. I am asked to show that the quotient space $X^{*}= S^1 \times [-1,1]/\sim$ is homeomorphic to the unit sphere $S^2$. The problem is I can't off the top of my head come up with a decent continuous bijection from the quotient space onto $S^2$. What might work here? </p>
<p>Suppose I had some sort of continuous bijection $h: X^{*} \rightarrow S^2$. Now the quotient map $p: X \rightarrow X^{*}$ is continuous and surjective, and since $X$ is compact, so is $X^{*}$. We also know that $S^2$ being a topological manifold is Hausdorff. Recall that if there is a continuous bijection between the compact space $X^{*}$ (any compact space for that matter) and the Hausdorff space $S^2$ (or any Hausdorff space), then that continuous bijection is a homeomorphism. This is what I intended to do, but I still can't come up with such a continuous bijection. Also, perhaps I am a bit confused in trying to visualize the quotient space. I would really appreciate some input on this, and any ideas that may prove useful.</p>
| Cheerful Parsnip | 2,941 | <p>Here's an alternative approach which may or may not be what your topology prelim was asking for. The quotient space is gotten by crushing each end of the cylinder $S^1\times[-1,1]$ to a point. I claim this is a 2-manifold. This is clear away from the two crushed points. Let $p$ be the point which is gotten by crushing $S^1\times\{1\}$. Let $U$ be the set of all equivalence classes of points $[(x,t)]$ in the quotient with $t>0$. I claim the map $f\colon D^2\to U$ given by $f(r,\theta)=[(\cos \theta,\sin\theta ,1-r)]$ is a homeomorphism. Here I am using polar coordinates in the unit disk $D^2$. (This step is probably just about equivalent in complexity to doing the problem a different way, but maybe it's slightly easier to see.) So $p$ has an open neighborhood homeomorphic to an open subset of $\mathbb R^2$. Symmetrically, the other quotient point is also a manifold point. So we know the quotient is a surface. Moreover, our argument shows that it is the union of two open disks and the circle at $t=0$, which is the disjoint union of a point and an open interval. So the Euler characteristic is $1-1+2=2$, and by the classification of surfaces, it must be a sphere.</p>
|
3,804,195 | <p>I am currently reworking the following exercise that was worked during my previous class's lecture on normed vector spaces:</p>
<blockquote>
<p>Show that <span class="math-container">$||x||_{\infty} \leq ||x||_{2} \leq ||x||_{1}$</span> for any <span class="math-container">$x \in \mathbb{R}^n$</span>.</p>
</blockquote>
<p>After class, I asked if this is the same as proving that <span class="math-container">$||a||_p\le ||a||_1$</span> for any <span class="math-container">$p \geq 1$</span> or, more generally, <span class="math-container">$||a||_q\le ||a||_p$</span> whenever <span class="math-container">$p \leq q$</span>. My professor then asked me to finish the following computation and report back my findings:</p>
<p><span class="math-container">$(\sum_{i=0}^n |a_i|^p)^{1/p}\le (\sum_{i=0}^{n-1} |a_i|^p)^{1/p} +(|a_n|^p)^{1/p}\\\le (\sum_{i=0}^{n-2} |a_i|^p)^{1/p} +(|a_{n-1}|^p)^{1/p} + (|a_n|^p)^{1/p} \\ ...\\...\\$</span></p>
<p>However, as I have made progress I have a few questions.</p>
<p><span class="math-container">$\bullet$</span> Is this using Minkowski's inequality repeatedly?</p>
<p><span class="math-container">$\bullet$</span> I don't fully understand the step <span class="math-container">$(\sum_{i=0}^n |a_i|^p)^{1/p}\le (\sum_{i=0}^{n-1} |a_i|^p)^{1/p} +(|a_n|^p)^{1/p}$</span>. Why are we adding the <span class="math-container">$p$</span> norm of the <span class="math-container">$n^{th}$</span> term to (<span class="math-container">$\sum_{i=0}^{n-1} |a_i|^p)^{1/p}$</span>?</p>
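<p>(A quick numerical sanity check of the target chain <span class="math-container">$||x||_{\infty} \leq ||x||_{2} \leq ||x||_{1}$</span> on random vectors — a sketch, with the dimension and sample count chosen arbitrarily:)</p>

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))            # 100 random vectors in R^6
inf_norms = np.linalg.norm(X, ord=np.inf, axis=1)   # max |x_i|
two_norms = np.linalg.norm(X, ord=2, axis=1)        # Euclidean norm
one_norms = np.linalg.norm(X, ord=1, axis=1)        # sum |x_i|
```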
| glS | 173,147 | <p>As per <a href="https://math.stackexchange.com/a/3804250/173147">the other answer</a>, linear programming achieves this. The idea is that the problem of finding the coefficients <span class="math-container">$p_i\ge0$</span> such that <span class="math-container">$\sum_{i=1}^n p_i \mathbf x_i = \mathbf x\in\mathbb R^d$</span> with <span class="math-container">$\sum_i p_i=1$</span> can be reframed as the task of finding the vector <span class="math-container">$\mathbf p\in\mathbb R^n$</span> such that
<span class="math-container">$X\mathbf p = \mathbf x,$</span>
where <span class="math-container">$X$</span> is the matrix whose <span class="math-container">$i$</span>-th <em>column</em> is <span class="math-container">$\mathbf x_i$</span>, under the constraints <span class="math-container">$\mathbf p\ge0$</span> and <span class="math-container">$\sum_i p_i = 1$</span>. This is a linear program in standard form.</p>
<p>Approaching it as a standard linear algebraic task, we can define the quantities
<span class="math-container">$$
\tilde X=\begin{pmatrix} X \\ 1 \cdots 1\end{pmatrix}
\quad\text{ and }\quad
\tilde{\mathbf x}\equiv \begin{pmatrix}\mathbf x \\ 1\end{pmatrix},\tag1
$$</span>
to rewrite the problem as <span class="math-container">$\tilde X\mathbf p =\tilde{\mathbf x}$</span>.
Here, <span class="math-container">$\tilde X$</span> is a matrix of dimensions <span class="math-container">$(d+1)\times n$</span>.
This equation admits solutions <em>iff</em> <span class="math-container">$\tilde{\mathbf x}\in\operatorname{range}(\tilde X)$</span>, in which case the set of solutions has the form
<span class="math-container">$$\mathbf p \in \tilde X^+\tilde{\mathbf x} + \ker(\tilde X),\tag2$$</span>
with <span class="math-container">$\tilde X^+$</span> denoting the pseudo-inverse of <span class="math-container">$\tilde X$</span>.
For (2) to be an actual solution of our convex problem, we also need <span class="math-container">$p_i\ge 0$</span> for all <span class="math-container">$i$</span>. We then use the shifts allowed by <span class="math-container">$\ker(\tilde X)$</span> to achieve this.</p>
<hr />
<p>As a concrete example of this method in action, consider in <span class="math-container">$\mathbb R^2$</span> the points
<span class="math-container">$$
\mathbf x_1 = (1, 1),
\qquad \mathbf x_2 = (-1, 1),
\qquad \mathbf x_3 = (-1, -1),
\qquad \mathbf x_4 = (1, -1),$$</span>
that is, the vertices of a square around the origin. Suppose we then have <span class="math-container">$\mathbf x=(0.5, 0.8)$</span>, and wish to find the convex decomposition of <span class="math-container">$\mathbf x$</span> in terms of <span class="math-container">$\mathbf x_i$</span>. We start by computing <span class="math-container">$\tilde X$</span> and <span class="math-container">$\tilde X^+$</span>, which read
<span class="math-container">$$
\tilde X = \begin{pmatrix} 1 & -1 & -1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 1 \end{pmatrix},
\qquad
\tilde X^+ = \frac14\begin{pmatrix} 1 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & -1 & 1 \\ 1 & -1 & 1 \end{pmatrix}.
$$</span>
Applying <span class="math-container">$\tilde X^+$</span> to <span class="math-container">$\tilde{\mathbf x}$</span> we get
<span class="math-container">$\tilde X^+ \tilde{\mathbf x}=(0.575, 0.325, -0.075, 0.175)^T$</span>.
One element of this vector is negative, which means that this is not a valid convex combination. We then consider the one-dimensional space obtained translating this point with the kernel of <span class="math-container">$\tilde X$</span>.
This is one-dimensional in this case: <span class="math-container">$\ker(\tilde X)=\mathbb R (-1, 1, -1, 1)^T$</span>. The full set of solutions is therefore
<span class="math-container">$$\begin{pmatrix} 0.575 \\ 0.325 \\ -0.075 \\ 0.175\end{pmatrix} + t \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1\end{pmatrix}, \qquad t\in\mathbb R.$$</span>
We can easily find that all the elements are positive for
<span class="math-container">$t\in[-0.175, -0.075]$</span>, which gives us the set of viable convex decompositions of <span class="math-container">$\mathbf x$</span>.</p>
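<p>The same computation goes through in a few lines of NumPy (a sketch reproducing the example above; extracting the null space from the SVD is one possible choice):</p>

```python
import numpy as np

pts = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
x = np.array([0.5, 0.8])

# Build X~ = [X ; 1 ... 1] and x~ = [x ; 1].
Xt = np.vstack([pts.T, np.ones(len(pts))])   # shape (3, 4)
xt = np.append(x, 1.0)

p0 = np.linalg.pinv(Xt) @ xt                 # particular solution X~+ x~
_, _, Vt = np.linalg.svd(Xt)
k = Vt[-1]                                   # spans ker(X~), 1-dimensional here
k = k / k[1]                                 # normalize to (-1, 1, -1, 1)

# p0 + t*k must be componentwise >= 0, giving an interval of feasible t:
lo = max(-p0[i] / k[i] for i in range(4) if k[i] > 0)
hi = min(-p0[i] / k[i] for i in range(4) if k[i] < 0)
```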
<p>Here's a visualisation of the set of solutions found with this technique:</p>
<p><span class="math-container">$\hspace{100pt}$</span><img src="https://i.stack.imgur.com/jVSgQ.gif" width="300"></p>
<p>Of course, the complexity of this naive approach might scale badly with the number of points and dimension, so for the general case optimised linear programming algorithms will perform much better.</p>
<hr />
<p>A slightly more complex variation of the above is obtained asking to decompose <span class="math-container">$\mathbf x$</span> using <em>five</em> different vertices. In this case the kernel of <span class="math-container">$\tilde X$</span> is two-dimensional, and thus there is a two-dimensional convex region of possible coefficients. We can visualise this situation as follows:</p>
<p><span class="math-container">$\hspace{100pt}$</span><img src="https://i.stack.imgur.com/9zPUJ.gif" width="300"></p>
<p>where the figure on the left is used to choose a point in <span class="math-container">$\tilde X^+\tilde{\mathbf x}+\ker(\tilde X)$</span> such that all elements are positive, and on the right we show the corresponding convex decomposition of the red point in terms of the black ones.</p>
<hr />
<p>The above animation was generated with Mathematica using the following code</p>
<pre><code>decoupleOneElement[list_List,index_Integer]:={
Normal @ SparseArray[index -> 1, Length @ list],
ReplacePart[list, index -> 0] // # / Total @ # &
};
lineRepresentationConvexComb[coefficients_]:=With[{
nonzeroIndices = Flatten @ Position[
coefficients, _?(# !=0 &)
]
},
Fold[
Last @ Sow @ decoupleOneElement[#1, #2]&,
coefficients[[nonzeroIndices]],
Range[Length @ nonzeroIndices - 1]
] // Reap // Last // Last // Map[
ReplacePart[
ConstantArray[0, Length @ coefficients],
Thread @ Rule[nonzeroIndices, #]
]&,
#, {2}
]&
];
DynamicModule[{pts, x, XTilde, kernelOfXTilde},
pts = {{1, 1}, {-1, 1}, {-1, -1}, {1, -1}};
x = {0.5, 0.8};
XTilde = Append[Transpose@pts, ConstantArray[1, Length@pts]] // Echo[#, "X=", MatrixForm]&;
kernelOfXTilde = First @ NullSpace @ XTilde // Echo[#,"ker(X)=", MatrixForm]&;
Manipulate[
Dot[PseudoInverse @ XTilde, Append[x,1]] + t * kernelOfXTilde // lineRepresentationConvexComb //
Map[Total[# * pts]&, #, {2}] & // Map @ Line // Deploy @ Graphics[{
PointSize @ 0.03, Point @ pts,
{Red, Point @ x},
#
},
Axes -> True, AxesOrigin -> {0, 0},
GridLines -> Automatic,
PlotRange -> ConstantArray[{-1.2, 1.2}, 2]
]&,
{{t, -0.1}, -0.175, -0.075, 0.001, Appearance->"Labeled"}
]
]
</code></pre>
<hr />
<p>The code to generate the second visualisation is the following:</p>
<pre><code>DynamicModule[{
pts, x, XTilde, kernelOfXTilde, solutionSet, solutionConstraints,
pointInConstraints = {-0.1, 0.1}, movingPoint = False
},
pts = {{1, 1}, {0, 2}, {-1, 1}, {-1, -1}, {1, -1}};
x = {0.5, 1};
XTilde = Append[Transpose @ pts, ConstantArray[1, Length @ pts]] // Echo[#, "X=", MatrixForm]&;
kernelOfXTilde = NullSpace @ XTilde // Echo[#,"ker(X)=", MatrixForm] &;
solutionSet = Plus[
Dot[PseudoInverse @ XTilde, Append[x,1]],
{t1, t2} * kernelOfXTilde // Apply @ Sequence
];
solutionConstraints = And @@ Thread @ GreaterEqual[solutionSet, 0];
Row @ {
(* plot constraints on the parameters, and allow to choose point in the feasible region *)
EventHandler[
RegionPlot[
solutionConstraints,
{t1, -4, 4}, {t2, -4, 4}, PlotRange -> All, PlotPoints -> 400,
ImageSize -> 300, FrameLabel->{"t1", "t2"}, Frame -> True, PlotRangePadding -> None
] ~ Show ~ Graphics[{PointSize @ 0.03, Point @ Dynamic @ pointInConstraints}],
{
"MouseDown" :> If[
Not @ TrueQ @ movingPoint && Norm[pointInConstraints - MousePosition["Graphics"]] <= 0.1,
movingPoint = True
],
"MouseUp" :> If[TrueQ @ movingPoint, movingPoint = False],
"MouseDragged" :> With[{mp = MousePosition["Graphics"]},
If[TrueQ @ movingPoint && TrueQ[solutionConstraints /. Thread @ Rule[{t1, t2}, mp]],
pointInConstraints = mp
]
]
}
],
(* show convex combination corresponding to the chosen values of t1 and t2 *)
Dynamic @ Deploy @ Graphics[{
PointSize @ 0.03, Point @ pts,
{Red, Point @ x},
solutionSet /. Thread @ Rule[{t1, t2}, pointInConstraints] //
lineRepresentationConvexComb //
Map[Total[# * pts]&, #, {2}] & // Map @ Line
},
Axes -> True, AxesOrigin -> {0, 0},
GridLines -> Automatic, ImageSize -> 300,
PlotRange -> All
]
}
]
</code></pre>
|
2,131,224 | <blockquote>
<p>Say I have vectors $x, y$, then is $\text{proj }_x y $ a scalar multiple of $x$?</p>
</blockquote>
<p>I have a book saying that it is, but I have no clue why this true. Is this really true?</p>
| C. Falcon | 285,416 | <p>By definition, $\textrm{proj}_xy\in\textrm{span}(x)$, so <strong>yes</strong>, this is true.</p>
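<p>Concretely, with the usual inner-product formula (assuming that is the definition in use), <span class="math-container">$\textrm{proj}_x y=\frac{x\cdot y}{x\cdot x}\,x$</span>, so the scalar is <span class="math-container">$\frac{x\cdot y}{x\cdot x}$</span>. A quick numerical check:</p>

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, -1.0, 4.0])

c = np.dot(x, y) / np.dot(x, x)   # the scalar (x . y) / (x . x)
proj = c * x                      # proj_x y, by the standard formula
```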
|
1,465,755 | <p>How do I find the angles $\alpha ,\beta ,\theta $ between the vector $(1,0,-1)$ and the unit vectors $i,j,k$ along the axes?</p>
<p>This question is not making sense to me. I know that in order to find the angle between any two nonzero vectors, I just have to take their dot product and divide it by the product of their lengths as such: $\cos { \theta } =\frac { \overrightarrow { v } \cdot \overrightarrow { w } }{ \left\| v \right\| \left\| w \right\| } $</p>
<p>How can I extend this knowledge to the 3 dimensional vector I was given? I don't know how I can get the dot product of the given vector with the given unit vectors. </p>
<p>Hints only, please. No actual solution. </p>
| Chappers | 221,811 | <p>Hint: $i=(1,0,0)$, $j=(0,1,0)$, $k=(0,0,1)$. Or equally, $$ (1,0,-1) = i-k $$.</p>
|
2,375,714 | <p>I know this is a really easy question but for some reason I'm having trouble with it.</p>
<p>If $M$ is an object in an additive category $\mathcal C$, and $\text{Hom}_{\mathcal C}(M,M) = 0$, then $M = 0$.</p>
<p>I know that this implies $\text{id}_M =0$ but I'm having trouble showing that $M$ satisfies the condition to be the zero object, or showing that the map from $0\rightarrow M$ is necessarily invertible.</p>
<p>We have that this composite $M \rightarrow 0 \rightarrow M$ is the $0$ map and the identity simultaneously but I'm stumped.</p>
<p>I think I'm thinking about it too much.</p>
| Angina Seng | 436,618 | <p>Given an object $A$ there will be a morphism from $A$ to $M$, the
zero morphism. Composing an arbitrary morphism $f:A\to M$ with the
unique map from $M$ to $M$ you get $f$, as the unique map from $M$ to $M$
is the identity, and also the zero morphism, as it factors through the
zero object. There is therefore exactly one morphism from $A$ to $M$: $M$ is a terminal object. Terminal objects are all isomorphic. So $M$ is isomorphic
to the zero object.</p>
|
3,243,243 | <p>I am learning category theory using Basic Category Theory by Tom Leinster as my main source. In the chapter on natural transformations he says that isomorphism of categories is unreasonably strict for the notion of the sameness of two categories. Isomorphism would require functors,
<span class="math-container">$$
F:A\rightarrow B,G:B\rightarrow A
$$</span>
such that
<span class="math-container">$$
G\circ F=1_A, F\circ G=1_B
$$</span></p>
<p>Instead he says that for an equivalence we loosen this requirement, asking only that the composite functors be isomorphic to the identity functors,
<span class="math-container">$$
G\circ F\cong 1_A,F\circ G\cong 1_B
$$</span>
Then this is better. This section threw me for a loop. I don't understand the difference between the equivalence and the isomorphism statements. Any help clarifying what is trying to be said here is greatly appreciated.</p>
| user289143 | 289,143 | <p>Two functors are said to be isomorphic if there is a natural isomorphism between them, i.e. a natural transformation <span class="math-container">$\eta: G \circ F \rightarrow 1_A$</span> such that the components <span class="math-container">$\eta_X: (G \circ F) (X) \rightarrow 1_A(X)$</span> are isomorphisms <span class="math-container">$\forall X \in ob(A)$</span>.</p>
|
33,424 | <p>Suppose $f(z) = P(z)e^{Q(z)}$ where $P,Q$ are real polynomials. What is the number of non-real zeros of $f^{(k)}$ as $k$ increases?</p>
<p>We know that $f''$ has $\geq m$ zeros where $m$ depends on $Q(z)$. </p>
| John | 7,929 | <p>One could use the level sets $\{z \in H^{+}: \text{Im} Q(z) = 0 \}$ to count non-real zeros of the derivatives of $f(z)$ where $Q$ is the Newton's method function for $f$.</p>
|
859,209 | <p>I'm looking for an efficient method of solving the following inequality: $$\left(\frac{x-3}{x+1}\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 <0$$</p>
<p>I've tried first determining where the expression inside the absolute value is positive or negative, then removing the absolute value accordingly, but it turned out to be quite complex and apparently also wrong. Are there any other ways?</p>
| Indrayudh Roy | 70,140 | <p>Substituting $t=|\frac {x-3}{x+1}|$ will help. Can you see it?</p>
|
859,209 | <p>I'm looking for an efficient method of solving the following inequality: $$\left(\frac{x-3}{x+1}\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 <0$$</p>
<p>I've tried first determining where the expression inside the absolute value is positive or negative, then removing the absolute value accordingly, but it turned out to be quite complex and apparently also wrong. Are there any other ways?</p>
| Mark Fischler | 150,362 | <p>When $t \equiv \frac{x-3}{x+1} > 0$, the solution to $ t^2 - 7t + 10 <0$ is $ 2<t<5$. (Factor the quadratic.)</p>
<p>When $t < 0$, the solution to $ t^2 + 7t + 10 <0$ is $ -5<t<-2$. </p>
<p>On the $t < 0$ branch, we need to solve $-5 < \frac{x-3}{x+1} < -2$. We break this up into two possibilities, $x < -1$ and $x > -1$, because when we multiply through by $x+1$ in the $x < -1$ case we have to flip the sense of the inequality.</p>
<p>When $x < -1$ we then get on one side $-2(x+1) < x-3$, which gives $x > +\frac{1}{3}$ and this does not work. But for $x > -1$ we get
$$\begin{array}{c}
-5(x+1) < x-3 < 2(x-1) \\
6x > -2 \rightarrow x > -\frac{1}{3} \\
3x < +1 \rightarrow x < +\frac{1}{3}
\end{array}
$$
which has solution
$$
-\frac{1}{3} < x < \frac{1}{3}
$$</p>
<p>On the $t > 0$ branch, we need to solve $2 < \frac{x-3}{x+1} < 5$. We again break this up into two possibilities, $x < -1$ and $x > -1$.</p>
<p>When $x < -1$ we then get on one side $2(x+1) > x-3$, which gives $x > -5$, and $5(x+1) < x-3$, which gives $x < -2$; together, $-5 < x < -2$. For $x > -1$ we get $2(x+1) < x-3$, which holds only if $x < -5$, which contradicts $x > -1$, so that case does not give any solutions.</p>
<p>Thus the answer combines the two solution regions:</p>
<p>$$
\left( -5 < x < -2 \right) \bigcup \left( -\frac{1}{3} < x < \frac{1}{3} \right)
$$</p>
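<p>A brute-force numerical scan confirms the solution set $\left(-5,-2\right)\cup\left(-\frac13,\frac13\right)$ (a quick sketch; the sample points are offset so that the pole $x=-1$ and the interval endpoints are never hit exactly):</p>

```python
def lhs(x):
    # left-hand side of the inequality
    t = (x - 3) / (x + 1)
    return t * t - 7 * abs(t) + 10

def in_solution(x):
    # the solution set found above
    return -5 < x < -2 or -1/3 < x < 1/3

# Grid offset by 0.005 avoids x = -1 and the endpoints -5, -2, +-1/3.
samples = [k / 100 + 0.005 for k in range(-800, 800)]
mismatches = [x for x in samples if (lhs(x) < 0) != in_solution(x)]
```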
|
2,370,378 | <p>How many distinct pairs of disjoint hyperplanes of size $q^{n-1}$ exist in $\mathbb{F}_q^n$?</p>
<p>Initially I had just thought to pick n points to define a hyperplane, and divide by the number of ways to pick such points, that is:</p>
<p>$\frac{q^n \choose n}{q^{n-1} \choose n}$</p>
<p>From there each hyperplane would presumably have $q-1$ disjoint neighbors giving:</p>
<p>$\frac{(q-1){q^n \choose n}}{2{q^{n-1} \choose n}}$</p>
<p>However, I realize this does not deal with sets of points which are coplanar. This number then is a lower bound, as these coplanar sets are not counted enough times. Is there a way to get around this? Knowing the number of sets of n non-coplanar points would presumably do the trick.</p>
| Angina Seng | 436,618 | <p>We call disjoint hyperplanes <em>parallel</em>. For a given hyperplane there
will be $q-1$ planes parallel to it. There are $(q^n-1)/(q-1)$
hyperplanes through the origin (why?) and so $q(q^n-1)/(q-1)$ hyperplanes overall. When counting pairs of disjoint hyperplanes I suppose
one has to decide whether one is counting ordered or unordered pairs...</p>
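<p>The resulting count — $H=q(q^n-1)/(q-1)$ hyperplanes in total, each with $q-1$ parallels, hence $H(q-1)/2$ unordered disjoint pairs — can be verified by brute force for a small case. A sketch over $\mathbb F_3^2$ (representing each affine hyperplane as the solution set of $a\cdot x = c$ is my choice of encoding):</p>

```python
from itertools import product, combinations

q, n = 3, 2
points = list(product(range(q), repeat=n))

# Each affine hyperplane is the solution set of a . x = c with a != 0;
# the frozenset dedupes scalar multiples of (a, c) defining the same set.
hyperplanes = set()
for a in product(range(q), repeat=n):
    if a == (0,) * n:
        continue
    for c in range(q):
        hyperplanes.add(frozenset(
            p for p in points
            if sum(ai * pi for ai, pi in zip(a, p)) % q == c
        ))

disjoint_pairs = sum(
    1 for h1, h2 in combinations(hyperplanes, 2) if not (h1 & h2)
)
total = q * (q**n - 1) // (q - 1)      # predicted number of hyperplanes
```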
|
1,886,239 | <p>I am trying to figure out whether or not the following series is convergent: $$\sum_{k=1}^{\infty} \frac{\ln(3^k-2k)}{3k+k^2}$$ </p>
<p>Now, I know from the back of the book that it is divergent, but I haven't been able to show it. I think I am supposed to compare it to some other series, but I don't know which one. I have tried looking at the integral $\int_{1}^{\infty} f(x) dx$ but that integral was really hard to solve (haven't managed it) which makes me think that there should be an easier way. </p>
<p>In general, I am having some trouble with this type of exercise where I should use comparison tests. I never know what I should compare it to.</p>
| Robert Z | 299,698 | <p>Hint: use Asymptotic Comparison Test
$$0\leq\frac{\ln(3^k-2k)}{3k+k^2}\sim \frac{\ln(3^k)}{k^2}=\frac{\ln(3)}{k}.$$</p>
|
14,765 | <p>I like to make the "dominoes" analogy when I teach my students induction.</p>
<p>I recently came across the following video:</p>
<p><a href="https://www.youtube.com/watch?v=-BTWiZ7CYoI" rel="noreferrer">https://www.youtube.com/watch?v=-BTWiZ7CYoI</a></p>
<p>In this video, a sequence of concrete block wall caps are set up like dominoes on the top of a wall. The first wall cap is knocked down, setting off the domino effect. The blocks are spaced so that they are resting on each other when they fall, but just barely. So rather than resting flat each block is supported slightly by its successor. When the last block falls, however, it falls flat (having no subsequent block to rest on). This causes the block behind it to slip off, and lay flat, which causes the brick behind it to slip off and lie flat, until all the blocks are lying flat perfectly end to end.</p>
<p>Is there any instance of a similar phenomena occurring in mathematics? I am thinking of a situation in which you want to prove both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span> (say). If you are able to prove: </p>
<ol>
<li><span class="math-container">$P(1)$</span></li>
<li><span class="math-container">$\forall k \in \{1,2,3, \dots, 99\} P(k) \implies P(k+1)$</span></li>
<li><span class="math-container">$P(100) \implies Q(100)$</span></li>
<li><span class="math-container">$\forall k \in \{ 100, 99, 98, \dots, 3,2\}, Q(k) \implies Q(k-1)$</span></li>
</ol>
<p>Then it will follow that both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> are true for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span>.</p>
<p>If an example is found, it could be a great example for teaching because it would force students to think through the logic of why induction works rather than blindly following a certain form of "an induction proof".</p>
| guest | 12,117 | <p>Use repetition and carrot/stick (e.g. weekly period-long exams, daily one question pop quizzes, in class games). </p>
<p>Question shows an unconscious assumption that clear explanation is the key criteria. ("I told them several times.") It's not. We are not computers that get fixed forever with a line of code. We are physical, imperfect beings. Our brains adapted to make sense of our universe but are subject to various fallacies (e.g. optical illusions). We learn to avoid various errors by "imitation and practice" (cf. Aristotle).</p>
<p>Instead of taking the attitude "how can these students make this mistake" have a position that is more sympathetic and tough at the same time. "I get why you made the error but I will keep hammering you until you get it right." Drill sergeant. Or even the "yeah, this stuff is tough...let me tell you a sneaky trick" (like you are a fellow criminal, fellow imperfect human).</p>
<p>Make little silly names for the errors. A computer would not need them. It would grok the iff symbol. But humans are humans. If you call it "offsides" or "penalty kick" or whatever, it will make a weird association that helps them.</p>
|
126,057 | <p>Hi,</p>
<p>Can one define a Fubini-Study metric/Kaehler metric on the projective space of an infinite dimensional Hilbert space, i.e. using the formula $\partial \bar{\partial} \log |Z|^2$?
This should be very well-known to the experts. Anyhow I don't have much experience with infinite dimension and worried that something may go wrong. I appreciate any comments or references.</p>
| Ahmed Sulejmani | 26,516 | <p>The answer is yes and can be found, for example, in S. Kobayashi, "Geometry of bounded domains", Trans. Amer. Math. Soc. 92 (1959), 267–290.</p>
|
3,643,917 | <p>Find the limit of
<span class="math-container">$$\lim_{x\to 0+} \frac{1}{x^2} \int_{0}^{x} t^{1+t} dt$$</span>. </p>
<p>My idea is to use L'Hospital's rule, but I am not sure why I can use it here or how I should do it. Many thanks to those who are willing to help. </p>
| Paramanand Singh | 72,031 | <p>Use the substitution <span class="math-container">$u=t^2$</span> to reduce the expression under limit to <span class="math-container">$$\frac{1}{2x^2}\int_{0}^{x^2}(\sqrt {u}) ^{\sqrt{u}} \, du$$</span> By Fundamental Theorem of Calculus the desired limit equals <span class="math-container">$1/2$</span> as the integrand above tends to <span class="math-container">$1$</span> as <span class="math-container">$u\to 0$</span>.</p>
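<p>(A numerical sanity check that the limit is $1/2$, using a simple midpoint rule — the step count and test points are arbitrary choices:)</p>

```python
def ratio(x, n=20000):
    # Midpoint-rule approximation of (1/x^2) * integral_0^x t^(1+t) dt
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (1 + t) * h
    return total / (x * x)
```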
|
610,472 | <p>What is the value of these limits;</p>
<p>$\lim_{x\rightarrow 1^{+}}\frac{\lfloor x\rfloor-1}{\lfloor x\rfloor-x}$</p>
<p>$\lim_{x\rightarrow 1^{-}}\frac{\lfloor x\rfloor-1}{\lfloor x\rfloor-x}$</p>
| Mikasa | 8,581 | <p>Hint:</p>
<p>Look at the following figures. As $x\to 1^{+}$, in a small neighborhood of $x=1$ on the right we have: $$x\to1^+\longrightarrow x\in [1,1+\epsilon)\longrightarrow \lfloor x\rfloor=1$$</p>
<p><img src="https://i.stack.imgur.com/PJRjQ.jpg" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/vYgey.jpg" alt="enter image description here"></p>
|
3,624,953 | <p>Let <span class="math-container">$(X_t)_{t \in [0, T]}$</span> be a continuous stochastic process with paths which are a.s. continuous, the underlying space of which is irrelevant but is well defined. Let <span class="math-container">$a$</span> be a constant
Define two stopping times
<span class="math-container">$$\tau_1 = \inf\{t \geq 0: X_t > a\}$$</span>
<span class="math-container">$$\tau_2 = \inf\{t \geq 0: X_t = a\}$$</span>
Evidently, <span class="math-container">$X_{\tau_2} = a$</span>. However, can we claim <span class="math-container">$X_{\tau_1} = a$</span> ? This "feels like" having something to do with continuity/topology but I cannot figure it out. Any help would be greatly appreciated.</p>
| Kavi Rama Murthy | 142,385 | <p>You need some assumption on <span class="math-container">$X_0$</span> for this. Assume that <span class="math-container">$X_0 <a$</span>. Then you can apply the following elementary fact:</p>
<p>Let <span class="math-container">$f:[0,\infty) \to \mathbb R$</span> be a continuous function with <span class="math-container">$f(0) <a$</span>. Then <span class="math-container">$\inf \{t\geq 0 : f(t) \geq a\}=\inf \{t\geq 0: f(t)=a\}$</span>.</p>
<p>The proof of this fact is an easy consequence of the intermediate value property of continuous functions.</p>
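<p>A numerical sketch of the elementary fact (my addition — a deterministic continuous path with illustrative grid and tolerance, not a proof): for $f(t)=\sin t$ with $f(0)=0<a=\tfrac12$, the first time $f$ reaches level $a$ and the first time $f$ equals $a$ coincide at $\arcsin(1/2)=\pi/6$.</p>

```python
import math

f = math.sin                              # continuous path with f(0) = 0 < a
a = 0.5
ts = [k * 1e-5 for k in range(200001)]    # grid on [0, 2]
tau_geq = min(t for t in ts if f(t) >= a)            # ~ inf{t : f(t) >= a}
tau_eq = min(t for t in ts if abs(f(t) - a) < 1e-4)  # ~ inf{t : f(t) = a}
```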
|
102,932 | <p>This is a naive question but I hope that the answers will be educational. When is it the case that a finitely presented group $G$ admits a faithful $2$-dimensional complex representation, e.g. an embedding into $\text{GL}_2(\mathbb{C})$? (I am mostly interested in sufficient conditions.) </p>
<p>I think I can figure out the finite groups with this property (they can be conjugated into $\text{U}(2)$ and taking determinants reduces to the classification of finite subgroups of $\text{SU}(2)$ and an extension problem) as well as the f.g. abelian groups with this property (there can't be too much torsion). But already I don't know what finitely presented groups appear as, say, congruence subgroups of $\text{GL}_2(\mathcal{O}_K)$ for $K$ a number field. </p>
<p>What can be said if you are given, say, a nice space $X$ with fundamental group $G$? I hear that in this case linear representations of $G$ are related to vector bundles on $X$ with flat connection. </p>
| Ian Agol | 1,345 | <p>If the group <span class="math-container">$G$</span> does not have <a href="http://en.wikipedia.org/wiki/Property_FA" rel="nofollow noreferrer">property FA</a>, then a necessary and sufficient
condition is that the group embeds in <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_K)$</span>, for some number
field <span class="math-container">$K$</span> (although there are such subgroups which do not have property FA).
This follows from <a href="http://en.wikipedia.org/wiki/Bass-Serre_theory" rel="nofollow noreferrer">Bass-Serre theory</a>. Of course, this begs the question of classifying finitely presented subgroups of <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_K)$</span> with property FA.</p>
<p>More generally, <a href="http://books.google.com/books?id=MOAqeoYlBMQC&lpg=PP1&pg=PR6#v=onepage&q&f=false" rel="nofollow noreferrer">Bass-Serre theory implies</a> that a general finitely generated
subgroup of <span class="math-container">$\operatorname{GL}_2(K)$</span> will have a graph of groups decomposition
into subgroups of <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_K)$</span> for some number fields <span class="math-container">$K$</span>.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Geometrization_conjecture" rel="nofollow noreferrer">geometrization theorem</a> and <a href="http://en.wikipedia.org/wiki/Ending_lamination_conjecture" rel="nofollow noreferrer">ending lamination theorems</a> classify
discrete subgroups of <span class="math-container">$\operatorname{PSL}_2(\mathbb{C})$</span> (which up to finite-index
embed in <span class="math-container">$\operatorname{SL}_2(\mathbb{C})$</span>), by their topological type as a 3-orbifold
and the ending lamination data.</p>
<p>You ask about congruence subgroups of <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_K)$</span>.
If <span class="math-container">$K=\mathbb{Q}$</span> or <span class="math-container">$K=\mathbb{Q}(\sqrt{-D})$</span>, for some <span class="math-container">$D>0$</span>,
then the group is a discrete non-uniform lattice in <span class="math-container">$\operatorname{PSL}_2(\mathbb{R})$</span>
or <span class="math-container">$\operatorname{PSL}_2(\mathbb{C})$</span>, and one may classify the congruence
subgroups of <span class="math-container">$\operatorname{SL}_2(\mathbb{Z})$</span> by a result of Tim Hsu (more
generally, I think there exists and algorithm to determine
if a finite-index subgroup of <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_{\mathbb{Q}(\sqrt{-D})})$</span>
is congruence, but I don't know if it's written down - I could describe
it for you though if you're interested). More generally, one can
determine if a discrete non-uniform arithmetic lattice in <span class="math-container">$\operatorname{PSL}_2(\mathbb{R})$</span>
or <span class="math-container">$\operatorname{PSL}_2(\mathbb{C})$</span> is congruence.</p>
<p>Otherwise, <a href="http://www.ams.org/mathscinet-getitem?mr=272790" rel="nofollow noreferrer">Serre essentially showed</a> that a finite-index subgroup of <span class="math-container">$\operatorname{GL}_2(\mathcal{O}_K)$</span>
will have the congruence subgroup property (and thus, non-uniform
lattices in a product <span class="math-container">$(\mathbb{H}^2)^k\times (\mathbb{H}^3)^l$</span> will
have this property if <span class="math-container">$k+l>1$</span>).</p>
<p>For examples of groups which don't have property FA, there's a paper
of <a href="http://www.ams.org/journals/proc/2006-134-11/S0002-9939-06-08398-5/home.html" rel="nofollow noreferrer">Calegari and Dunfield</a> which constructs an ascending HNN
extension subgroup of <span class="math-container">$\operatorname{SL}_2(\mathbb{C})$</span>.</p>
<p>There are many necessary conditions which show that various groups
cannot embed in <span class="math-container">$\operatorname{GL}_2(\mathbb{C})$</span>, some of which you describe.
But I think a general classification is beyond reach at this point.</p>
<p>As you say, if a space <span class="math-container">$X$</span> has a <span class="math-container">$\mathbb{C}^2$</span> bundle with flat
connection, then you get a representation of <span class="math-container">$G=\pi_1(X)$</span> into
<span class="math-container">$\operatorname{GL}_2(\mathbb{C})$</span>. The space of such flat bundles is computable
if <span class="math-container">$G$</span> is finitely presented, it amounts to computing the <a href="http://en.wikipedia.org/wiki/Character_variety" rel="nofollow noreferrer">character
variety</a> of <span class="math-container">$G$</span> into <span class="math-container">$\operatorname{GL}_2(\mathbb{C})$</span>. However, it is difficult
to tell if there is a faithful representation. If you can solve
the word problem in <span class="math-container">$G$</span>, then in principle one can determine if
a representation is not faithful. Also, it seems difficult to certify that
a representation is faithful, except if it is discrete. The
difficulty is to find a nice fundamental domain for the
action on a product of symmetric spaces on which the group
acts discretely (in fact, it might not exist).</p>
|
102,932 | <p>This is a naive question but I hope that the answers will be educational. When is it the case that a finitely presented group $G$ admits a faithful $2$-dimensional complex representation, e.g. an embedding into $\text{GL}_2(\mathbb{C})$? (I am mostly interested in sufficient conditions.) </p>
<p>I think I can figure out the finite groups with this property (they can be conjugated into $\text{U}(2)$ and taking determinants reduces to the classification of finite subgroups of $\text{SU}(2)$ and an extension problem) as well as the f.g. abelian groups with this property (there can't be too much torsion). But already I don't know what finitely presented groups appear as, say, congruence subgroups of $\text{GL}_2(\mathcal{O}_K)$ for $K$ a number field. </p>
<p>What can be said if you are given, say, a nice space $X$ with fundamental group $G$? I hear that in this case linear representations of $G$ are related to vector bundles on $X$ with flat connection. </p>
| Misha | 21,684 | <p>First, instead of subgroups of $GL(2,C)$ it suffices to consider subgroups of $SL(2, C)$. I will also restrict to subgroups which are not virtually solvable (since these should be easy to classify). Lastly, I will consider characterization of subgroups of $SL(2, C)$ "up to abstract commensuration" since the criterion is much cleaner in this setting. (Recall that two abstract groups $G_1, G_2$ are called "commensurable" if they contain isomorphic finite-index subgroups.) </p>
<p>Definition. Let $p$ be a prime and $c$ an integer. A $p$-congruence structure of degree $c$ for a group $\Gamma$ is a descending chain of finite-index normal subgroups of $\Gamma$:
$\Gamma=N_0\supset N_1 \supset ... \supset N_k ... $, such that:</p>
<p>(i) $\bigcap_{k=0}^\infty N_k= 1$; </p>
<p>(ii) $N_1/N_k$ is a finite $p$-group for every $k\ge 2$; </p>
<p>(iii) $d(N_i/N_j)\le c$ for all $j\ge i\ge 1$, where $d(H)$ denotes the minimal number of generators
of a group $H$. </p>
<p>Theorem. A finitely-generated (non-virtually solvable) group $\Gamma$ is commensurable to a
subgroup of $SL(2,C)$ if and only if $\Gamma$ admits (for some prime $p$) a $p$-congruence structure of degree $c=3$. </p>
<p>This theorem is an application of "A Group Theoretic Characterization of Linear Groups" by Alex Lubotzky (Journal of Algebra, 113 (1988), 207-214), combined with the classification of Lie algebras of dimension $\le 3$ over fields. Note that Lubotzky does not get a group-theoretic characterization of subgroups of $SL(n,C)$ for particular values of $n$ (even up to commensurability), however, for $n=2$ the situation is better than for the general $n$ since dimension of the Lie group is so small in this case. </p>
|
1,405,809 | <p>So we have to find $\lim\limits_{x\to 0} \frac{a^x-1}{x}$ without using any series expansions or the L'Hopital's rule.<br>
I can do it using both of those, but I have no idea how to do it without them. I tried many substitutions but nothing worked. Please point me in the right direction.</p>
| Sabino Di Trani | 82,009 | <p>Let's try with substitution $x=\frac{t}{\log a}$</p>
<p>Using $a=e^{\log a}$, and noting that $t\to 0$ as $x\to 0$, you obtain $$\lim_{x\to 0}\frac{a^x-1}{x}=\lim_{t\to 0} \left(\frac{a^{\frac{t}{\log a}}-1}{t}\right) \log a= \lim_{t\to 0} \left(\frac{e^t-1}{t}\right) \log a= \log a,$$ by the standard limit $\lim_{t\to 0}\frac{e^t-1}{t}=1$.</p>
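<p>A numerical sanity check (my addition): for a sample base $a=5$ the difference quotient approaches $\log a\approx 1.6094$.</p>

```python
import math

a = 5.0
quotients = [(a ** x - 1) / x for x in (1e-2, 1e-4, 1e-6)]
```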
|
257,027 | <p><strong>Question:</strong> Consider a distribution $D$, and $n$ i.i.d. random variables $X_i$, all distributed according to $D$. Let $p^D_2:=\Pr[X_1=X_2]$. What is a lower bound for $p^D_n:=\Pr[\exists i\neq j. X_i=X_j]$ (as a function of $p^D_2$)?</p>
<p><strong>Conjecture:</strong> $p^D_n \geq 1-\bigl(1-p^D_2\bigr)^{n\choose 2}$. [<strong>EDIT:</strong> This particular bound is wrong. Counterexample by Will Perkins: $D(1)=0.8$, $D(2)=0.1$, $D(3)=0.1$, $n=3$.]</p>
<p><strong>What bounds would I like:</strong> Tight bounds are preferred, of course. The conjecture above would be sufficient. But any bound that allows me to show the following is fine: For some $n\in O\bigl(\sqrt{1/p_2^D}\bigr)$, we have that $p^D_n\geq\frac12$.</p>
<p><strong>Relation to uniform birthday inequality:</strong> If $D$ is the uniform distribution on $N$ elements, then $p^D_2=1/N$, and $p^D_n\leq \bigl(1-\tfrac1N\bigr)^{n\choose 2}$ [1]. Thus the conjecture holds for uniform $D$.</p>
<hr>
<p><strong>Approaches I tried:</strong></p>
<p><strong>Approach 1:</strong> I tried to show that, for fixed $q$, we have that $p_n^D \geq p_n^U$ where $U$ is the uniform distribution on $1/q$ elements. (Assuming that $1/q$ is an integer.) Then I would just have to find a formula for $p_n^U$ which is the uniform birthday inequality. Unfortunately, it turns out that this approach cannot work: Consider the distribution $D$ on three elements with probabilities $2/3,1/6,1/6$. Then $p_2^D=1/2$. And $p_3^D<1$. (Because there is a nonzero chance of picking three different elements.) But for $U$ being the uniform distribution on $2$ elements, we have $p^U_3=1$. Thus $p_n^D \ngeq p_n^U$ for $n=3$.</p>
<p><strong>Approach 2:</strong> [<strong>EDIT:</strong> This approach cannot work because it would show the conjecture above which is wrong.] Perkins [1] shows implicitly in his introduction that the conjecture above (Definition 1 in [1]) is true for any distribution $D$ that satisfies the "repulsion inequality" (Definition 2 in [1]). This repulsion inequality says, in our special case and our notation:
$$
\Pr[X_{N+1}\in\{X_1,\dots,X_N\}|X_1,\dots,X_N\text{ all distinct}]
\geq
\Pr[X_{N+1}\in\{X_1,\dots,X_N\}].
$$
(Here $X_1,\dots,X_{N+1}$ are i.i.d. according to $D$.) Thus, showing the repulsion property would answer my question. But I have not been able to prove the repulsion property.</p>
<p><strong>Related work:</strong> I have found many references considering the Birthday inequality for non-uniform distributions, e.g., [2]. However, in all those cases, it was only shown that $p_n^D\geq p_n^U$ where $U$ is the uniform distribution on the support of $D$ (note that the support of $D$ can be very large if $D$ has a large number of low probability events). Or they contained exact formulas for the probability $p_n^D$ from which I did not manage to derive a bound in terms of $p_2^D$. There is one <a href="https://mathoverflow.net/q/255880/101775">question</a> on MathOverflow that asks for the same thing (in somewhat different words), but it gives much less details and has only an incorrect answer.</p>
<p>[1] Will Perkins, Birthday Inequalities, Repulsion, and Hard Spheres, <a href="http://arxiv.org/abs/1506.02700v2" rel="nofollow noreferrer">http://arxiv.org/abs/1506.02700v2</a></p>
<p>[2] Clevenson, M. Lawrence, and William Watkins. "Majorization and the birthday inequality." Mathematics Magazine 64.3 (1991): 183-188. <a href="http://www.jstor.org/stable/2691301" rel="nofollow noreferrer">http://www.jstor.org/stable/2691301</a></p>
| zhoraster | 8,146 | <p>Let $X_i$ take values $x_1,x_2,\dots$ with probabilities $p_1,p_2,\dots$</p>
<p>Define the events
$$
A_i = \{\exists j\neq i: X_i = X_j\}
$$
By the Chung-Erdős inequality,
$$
p^D_n = P\left(\bigcup_{i=1}^n A_i\right) \ge \frac{\big(\sum_{i=1}^n P(A_i)\big)^2}{\sum_{i=1}^n P(A_i) + \sum_{i\neq j}P(A_i\cap A_j)}\\ = \frac{n^2P(A_1)^2}{nP(A_1) + n(n-1)P(A_1\cap A_2)}.
$$
Now
$$
P(A_i) = 1- \sum_{m\ge 1} p_m (1-p_m)^{n-1} \approx 1-\sum_{m\ge 1} (p_m - (n-1) p_m^2)= (n-1)p^D_2 .
$$
Further,
$$
P(A_1\cap A_2)\le \sum_{m\ge 1} p_m^2 + (n-2)(n-3)\sum_{m'\neq m''}(p_{m'})^2(p_{m''})^2\\\le p_2^D +n(n-1)(p_2^D)^2.
$$
Therefore,
$$
p_n^D \gtrsim \frac{n^2(n-1)^2(p_2^D)^2}{2n(n-1)p_2^D + n^2(n-1)^2(p_2^D)^2}.
$$
Taking now $n\sim C(p_2^D)^{-1/2}$ with $C>1$, we get
$$
p_n^D \gtrsim \frac{C^4}{2C^2 + C^4}>\frac13.
$$</p>
<p>Though this is on the sketchy side, it may be useful. My point is that the Chung–Erdős inequality should do the trick.</p>
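<p>A Monte Carlo sketch (my addition, using the counterexample distribution quoted in the question): for $D=(0.8,0.1,0.1)$ we get $p_2^D=0.66$ and, with $n=3$, an exact collision probability of $1-3!\cdot 0.8\cdot 0.1\cdot 0.1=0.952$, which the simulation reproduces — and which indeed falls below the conjectured bound $1-(1-p_2^D)^3\approx 0.961$.</p>

```python
import random

random.seed(0)
probs = [0.8, 0.1, 0.1]           # the counterexample distribution
p2 = sum(p * p for p in probs)    # Pr[X1 = X2] = 0.66
n = 3

def has_collision():
    draws = random.choices(range(3), weights=probs, k=n)
    return len(set(draws)) < n

trials = 50000
pn = sum(has_collision() for _ in range(trials)) / trials
```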
|
1,818,557 | <p>I will be teaching some "topology" to high school students. I was wondering how to explain to such a school student that on a sphere the shortest path between 2 points is given by a great circle?</p>
<p>Also, how to explain that if they lived on a sphere they would have no notion of "above" or "below"? I cannot find a nice way to convince them, since they see the sphere embedded in 3D.</p>
| Zarrax | 3,035 | <p>You can always rotate the sphere so that points A and B are both on the equator. The idea then is you reduce your distance from point B the fastest if you head in the direction of point B, and that direction is along the equator. </p>
|
1,818,557 | <p>I will be teaching some "topology" to high school students. I was wondering how to explain to such a school student that on a sphere the shortest path between 2 points is given by a great circle?</p>
<p>Also, how to explain that if they lived on a sphere they would have no notion of "above" or "below"? I cannot find a nice way to convince them, since they see the sphere embedded in 3D.</p>
| Martin Kochanski | 340,970 | <p>Use a ball. Note that the less curvy a line is, the straighter it is (and therefore shorter). </p>
<p>Then note that cutting a slice through the <strong>middle</strong> of the ball gets you the straightest line available. Which is a good definition of a great circle.</p>
<p><em>(And as other answerers have pointed out, if the ball is edible then you will be nourishing bodies as well as minds).</em></p>
|
1,133,694 | <p>Suppose $a_{1},a_{2},a_{3}...a_{n}$ is a complex sequence satisfying $\bigl\lvert\left(\sum_{{k=1}}^{n}a_{k}b_{k}\right)\bigr\rvert \leq1$ for all $b_1,b_2,...,b_n$ such that $\left(\sum_{{k=1}}^{n}\mid b_{k}\mid^{2}\right)\leq1$. Show that $\left(\sum_{{k=1}}^{n}\mid a_{k}\mid^{2}\right)\leq1$</p>
<p>I'm considering to prove by contradiction of Cauchy-Schwarz inequality but don't know where to start. The conclusion seems obvious.</p>
| WimC | 25,313 | <p>If $a=0$ then the inequality holds. Now suppose $\lVert a \rVert=\left(\sum|a_j|^2\right)^{1/2} > 0$. Let $b_j = \overline{a_j}/\lVert a \rVert$. Then $\sum |b_j|^2=1$ and $\lVert a \rVert = \sum a_j b_j \leq 1 $.</p>
|
1,133,694 | <p>Suppose $a_{1},a_{2},a_{3}...a_{n}$ is a complex sequence satisfying $\bigl\lvert\left(\sum_{{k=1}}^{n}a_{k}b_{k}\right)\bigr\rvert \leq1$ for all $b_1,b_2,...,b_n$ such that $\left(\sum_{{k=1}}^{n}\mid b_{k}\mid^{2}\right)\leq1$. Show that $\left(\sum_{{k=1}}^{n}\mid a_{k}\mid^{2}\right)\leq1$</p>
<p>I'm considering to prove by contradiction of Cauchy-Schwarz inequality but don't know where to start. The conclusion seems obvious.</p>
| Zoe | 212,300 | <p>Suppose $\textbf{a}$ and $\textbf{b}$ are complex vectors with $\textbf{a}=(a_1,a_2,...,a_n)$ and $\textbf{b}=(b_1,b_2,...,b_n)$.</p>
<p>Then $|\textbf{a}|^2=\sum_{k=1}^{n}|a_k|^2$ and $|\textbf{b}|^2=\sum_{k=1}^{n}|b_k|^2$.</p>
<p>Start proof by supposing $|\textbf{a}|^2 >1$.</p>
<p>Let $b_k=\frac{\overline{a_k}}{|\textbf{a}|}$ (the conjugate is needed because $\sum a_kb_k$ is a plain bilinear sum, not the Hermitian inner product), so $|\textbf{b}|^2=\sum_{k=1}^{n}|b_k|^2 =1$, which satisfies the condition given in the statement.</p>
<p>According to the statement, we should have $|\sum_{k=1}^{n}a_kb_k|\leq1$. However, we actually have
$$\Bigl|\sum_{k=1}^{n}a_kb_k\Bigr|=\frac{1}{|\textbf{a}|}\Bigl|\sum_{k=1}^{n}a_k\overline{a_k}\Bigr|=\frac{1}{|\textbf{a}|}\sum_{k=1}^{n}|a_k|^2=|\textbf{a}|>1.$$</p>
<p>Therefore $|\textbf{a}|^2$ (that is, $\sum_{k=1}^{n}|a_k|^2$) cannot be greater than $1$. The proof is complete.</p>
|
1,950,466 | <p>Let $x_1, x_2, \dots, x_n \in \mathbb{H}$, where $\mathbb{H}$ is a Hilbert space, and $x_j \neq 0$ for every $j$. If $x_i \perp x_j$ for $i \neq j$, then show that the $x_j$'s are linearly independent.</p>
<p>I think I need to use the fact that $\langle\alpha x + \beta y , z\rangle = \alpha\langle x,z\rangle + \beta\langle y,z\rangle$, but how?</p>
| DonAntonio | 31,254 | <p>Suppose $\;\sum\limits_{i=1}^na_ix_i=0\;$ , then for all $\;1\le k\le n\;$ :</p>
<p>$$\left\langle 0,x_k\right\rangle=\left\langle \sum\limits_{i=1}^na_ix_i\,,\,\,x_k\right\rangle=\sum\limits_{i=1}^na_i\langle x_i,\,x_k\rangle=a_k\langle x_k,\,x_k\rangle$$</p>
<p>End the proof now.</p>
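<p>A concrete check of the statement with sample vectors (my addition; the Gram-matrix determinant is nonzero exactly when the vectors are linearly independent):</p>

```python
def inner(u, v):
    # Hermitian inner product <u, v> = sum conj(u_k) * v_k
    return sum(x.conjugate() * y for x, y in zip(u, v))

# pairwise-orthogonal nonzero vectors in C^4
vecs = [[1, 1, 0, 0], [1, -1, 0, 0], [0, 0, 1, 1j]]
gram = [[inner(u, v) for v in vecs] for u in vecs]

def det3(m):
    # determinant of a 3x3 matrix, expanded along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(gram)   # nonzero Gram determinant => linear independence
```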
|
4,651,798 | <p>This question comes from the province-stage olympiad in my country in order to qualify for the national stage:</p>
<blockquote>
<p>Given the set <span class="math-container">$S=\{1,2,3,4\}$</span>. The number of non-empty subsets <span class="math-container">$A_1,A_2, ..., A_6$</span> that fulfil these three criteria:</p>
<ol>
<li><span class="math-container">$A_1\cap A_2=\emptyset$</span>.</li>
<li><span class="math-container">$A_1\cup A_2\subseteq A_3$</span>.</li>
<li><span class="math-container">$A_3\subseteq A_4\subseteq\dots\subseteq A_6$</span>.
is ...</li>
</ol>
</blockquote>
<p>From what I managed to conclude, <span class="math-container">$|A_1|+|A_2|\leq|A_3|$</span> and <span class="math-container">$|A_3|\leq|A_4|\leq|A_5|\leq|A_6|$</span> from the second and third criterion respectively. Then, I thought to divide the first criterion into two cases: <span class="math-container">$|A_1|=|A_2|$</span> and <span class="math-container">$|A_1|\neq|A_2|$</span>. I don't know how to proceed from here. Any help would be appreciated.</p>
| Marc van Leeuwen | 18,880 | <p>It seems simplest to me to put <span class="math-container">$B_2=A_2\cup A_1$</span> and <span class="math-container">$B_i=A_i$</span> for <span class="math-container">$i\in\{1,3,4,5,6\}$</span>; we can recover the <span class="math-container">$A_i$</span> from the <span class="math-container">$B_i$</span> by <span class="math-container">$A_2=B_2\setminus B_1$</span> since <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> are disjoint and <span class="math-container">$A_i=B_i$</span> for <span class="math-container">$i\in\{1,3,4,5,6\}$</span>. The question then becomes to count the chains <span class="math-container">$\emptyset\subset B_1\subset B_2\subseteq B_3\subseteq B_4\subseteq B_5\subseteq B_6$</span>. Given such a chain we can map every <span class="math-container">$k\in\{1,2,3,4\}$</span> to <span class="math-container">$\min(\{i\mid 1\leq{i}\leq 6\land k\in B_i\}\cup\{7\})$</span>, which map <span class="math-container">$f:\{1,2,3,4\}\to\{1,2,3,4,5,6,7\}$</span> must satisfy <span class="math-container">$\{1,2\}\subseteq\operatorname{Im}(f)$</span> since <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> have to be non-empty. Conversely any map satisfying the conditions defines a unique chain of subset <span class="math-container">$B_i$</span>, since <span class="math-container">$B_i=\{\, k\in\{1,2,3,4\}\mid f(k)\leq{i}\,\}$</span>.</p>
<p>Without the condition on <span class="math-container">$\operatorname{Im}(f)$</span> the number of maps is <span class="math-container">$7^4$</span>. The number of maps without <span class="math-container">$1$</span> in its image is <span class="math-container">$6^4$</span>, and so is the number without <span class="math-container">$2$</span> in its image. But those two sets to exclude overlap in the <span class="math-container">$5^4$</span> maps that have neither <span class="math-container">$1$</span> nor <span class="math-container">$2$</span> in its image, so by inclusion/exclusion the answer is <span class="math-container">$7^4-2\times6^4+5^4=434$</span>.</p>
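<p>The count can be confirmed by brute force (my addition; enumerating over nested chains keeps the search small):</p>

```python
from itertools import combinations

def subsets_of(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

S = frozenset({1, 2, 3, 4})
count = 0
for A6 in subsets_of(S):
    for A5 in subsets_of(A6):
        for A4 in subsets_of(A5):
            for A3 in subsets_of(A4):
                for A1 in subsets_of(A3):
                    if not A1:
                        continue
                    # A2: non-empty subset of A3 disjoint from A1;
                    # A1 non-empty forces A3, ..., A6 non-empty too
                    for A2 in subsets_of(A3 - A1):
                        if A2:
                            count += 1
```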
|
4,202,654 | <p>This question regards the proof of proposition 8 in chapter 8.1 of Bosch's 'Algebraic Geometry and Commutative Algebra'.</p>
<p>Let <span class="math-container">$A, B$</span> be <span class="math-container">$R$</span>-algebras, <span class="math-container">$f_0: A \to B$</span> a morphism of <span class="math-container">$R$</span>-algebras (that is, an <span class="math-container">$R$</span>-linear homomorphism of rings), and <span class="math-container">$\mathfrak{I} \subseteq B$</span> and ideal satisfying <span class="math-container">$\mathfrak{I}^2=0$</span>.</p>
<p>Let <span class="math-container">$f_1: A \to B$</span> be an <span class="math-container">$R$</span>-linear map satisfying <span class="math-container">$f_1(xy) = f_1(x)f_1(y)$</span>, for <span class="math-container">$x, y \in A$</span>, and <span class="math-container">$f_0 \equiv f_1 \mbox{ mod } \mathfrak{I}$</span>. I want to understand how this implies that <span class="math-container">$f_1(1) = 1$</span>. The author argues as follows:</p>
<p>"Using the geometric series in conjunction with <span class="math-container">$\mathfrak{I}^2=0$</span>, the congruence <span class="math-container">$f_1(1) \equiv f_0(1) = 1 \mbox{ mod } \mathfrak{I}$</span> shows that <span class="math-container">$f_1(1)$</span> is a unit in <span class="math-container">$B$</span>. The latter must be idempotent and, thus, corresponds with <span class="math-container">$1$</span> if <span class="math-container">$f_1$</span> is multiplicative."</p>
<p>What does he mean by 'geometric series'? Why must <span class="math-container">$f_1(1)$</span> be idempotent? Can someone help me?</p>
| Mark Saving | 798,694 | <p>I think an easier proof is to note that <span class="math-container">$f_1 - f_0 \in \mathfrak{I}$</span>. Then <span class="math-container">$0 = (f_1(1) - f_0(1))^2 = (f_1(1) - 1)^2 = f_1(1)^2 - 2 f_1(1) + 1$</span>, so <span class="math-container">$1 = 2 f_1(1) - f_1(1)^2$</span>. Applying the multiplicative property gives us <span class="math-container">$f_1(1)^2 = f_1(1) f_1(1) = f_1(1 \cdot 1) = f_1(1)$</span>, so we have <span class="math-container">$1 = 2 f_1(1) - f_1(1)^2 = 2 f_1(1) - f_1(1) = f_1(1)$</span>.</p>
<p>Note that in general, if <span class="math-container">$a$</span> is nilpotent then <span class="math-container">$1 - a$</span> is a unit. This is because if <span class="math-container">$a^n = 0$</span>, we have <span class="math-container">$1 = 1^n - a^n = (1 - a) (1 + ... + a^{n - 1})$</span> (this is the geometric series argument that the author was probably alluding to). So since <span class="math-container">$f_1(x) = f_0(x) - (f_0(x) - f_1(x))$</span>, and <span class="math-container">$f_0(x) - f_1(x)$</span> is nilpotent, it follows that <span class="math-container">$f_1(x)$</span> is a unit whenever <span class="math-container">$f_0(x) = 1$</span>.</p>
<p>In particular, <span class="math-container">$f_1(1)$</span> must be a unit since <span class="math-container">$f_0(1) = 1$</span>.</p>
<p>And we have shown that <span class="math-container">$f_1(1)^2 = f_1(1)$</span>. Since <span class="math-container">$f_1(1)$</span> is a unit, we can divide both sides by <span class="math-container">$f_1(1)$</span> to get that <span class="math-container">$f_1(1) = 1$</span>.</p>
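<p>The "nilpotent implies $1-a$ is a unit" fact is easy to watch concretely (my addition, using a nilpotent $3\times 3$ matrix as a stand-in ring element — the rings in the proposition are abstract): the finite geometric series $1+a+a^2$ inverts $1-a$ once $a^3=0$.</p>

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
a = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]    # strictly upper triangular: a^3 = 0

a2 = mat_mul(a, a)
a3 = mat_mul(a2, a)
one_minus_a = [[I[i][j] - a[i][j] for j in range(3)] for i in range(3)]
geom = [[I[i][j] + a[i][j] + a2[i][j] for j in range(3)] for i in range(3)]
product = mat_mul(one_minus_a, geom)     # (1 - a)(1 + a + a^2) = 1 - a^3 = 1
```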
|
4,202,654 | <p>This question regards the proof of proposition 8 in chapter 8.1 of Bosch's 'Algebraic Geometry and Commutative Algebra'.</p>
<p>Let <span class="math-container">$A, B$</span> be <span class="math-container">$R$</span>-algebras, <span class="math-container">$f_0: A \to B$</span> a morphism of <span class="math-container">$R$</span>-algebras (that is, an <span class="math-container">$R$</span>-linear homomorphism of rings), and <span class="math-container">$\mathfrak{I} \subseteq B$</span> and ideal satisfying <span class="math-container">$\mathfrak{I}^2=0$</span>.</p>
<p>Let <span class="math-container">$f_1: A \to B$</span> be an <span class="math-container">$R$</span>-linear map satisfying <span class="math-container">$f_1(xy) = f_1(x)f_1(y)$</span>, for <span class="math-container">$x, y \in A$</span>, and <span class="math-container">$f_0 \equiv f_1 \mbox{ mod } \mathfrak{I}$</span>. I want to understand how this implies that <span class="math-container">$f_1(1) = 1$</span>. The author argues as follows:</p>
<p>"Using the geometric series in conjunction with <span class="math-container">$\mathfrak{I}^2=0$</span>, the congruence <span class="math-container">$f_1(1) \equiv f_0(1) = 1 \mbox{ mod } \mathfrak{I}$</span> shows that <span class="math-container">$f_1(1)$</span> is a unit in <span class="math-container">$B$</span>. The latter must be idempotent and, thus, corresponds with <span class="math-container">$1$</span> if <span class="math-container">$f_1$</span> is multiplicative."</p>
<p>What does he mean by 'geometric series'? Why must <span class="math-container">$f_1(1)$</span> be idempotent? Can someone help me?</p>
| Milten | 620,957 | <p>You have <span class="math-container">$f_1(1) = f_0(1)+x = 1+x$</span> for some <span class="math-container">$x\in\mathfrak I$</span>. Then
<span class="math-container">$$
f_1(1)(1-x) = 1-x^2 = 1,
$$</span>
since <span class="math-container">$\mathfrak I^2=0$</span>, so <span class="math-container">$f_1(1)$</span> is a unit. This can be seen as an application of the geometric series, since in general
<span class="math-container">$$
(1+x)^{-1} = \sum_{n=0}^\infty (-x)^n
$$</span>
if <span class="math-container">$x^n=0$</span> for large enough <span class="math-container">$n$</span>.</p>
<p>By multiplicativity,
<span class="math-container">$$
f_1(1)=f_1(1\cdot1) = f_1(1)^2,
$$</span>
showing idempotence. And the only idempotent unit is <span class="math-container">$1$</span>.</p>
|
789,802 | <p>How do I solve $\displaystyle \lim_{x \to 0} \sqrt{x^2 + x^3} \sin \frac{\pi}{x}$ using squeeze Theorem?</p>
<p>My book only teaches me the simplest use of the Theorem. I have no idea what should I do with a function as complex as this...</p>
<p>I know I have to start with:</p>
<p>$$ -1 \le \sin \frac{\pi}{x} \le 1$$</p>
<p>But what do I do next?</p>
| Ivo Terek | 118,056 | <p>You have $$-1 \leq \sin \frac{\pi}{x} \leq 1$$</p>
<p>Multiply it by what is left:</p>
<p>$$-\sqrt{x^2 + x^3} \leq \sqrt{x^2 + x^3} \sin \frac{\pi}{x} \leq \sqrt{x^2 + x^3}$$</p>
<p>What happens when $x \to 0$? The limit we want will be <em>squeezed</em> between which values? </p>
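<p>A numerical check of the squeeze (my addition): $|g(x)|$ never exceeds $\sqrt{x^2+x^3}$, and that bound itself shrinks to $0$.</p>

```python
import math

def g(x):
    return math.sqrt(x ** 2 + x ** 3) * math.sin(math.pi / x)

samples = [0.1, 0.003, 1e-4, -1e-4]           # x^2 + x^3 >= 0 for x >= -1
bounds = [math.sqrt(x ** 2 + x ** 3) for x in samples]
values = [g(x) for x in samples]
```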
|
2,721,372 | <p>A sequence is defined by $a_1=2$ and $a_n=3a_{n-1}+1$. Find the sum $a_1+a_2+\cdots+a_n$.</p>
<p>How do I find the sum? The first terms are $a_1=2$, $a_2=7,\ldots$</p>
<p>Also, I found the value $a_n=\frac{5}{6}\cdot3^n-\frac{1}{2}$.</p>
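<p>The closed form above can be checked against the recurrence, and the requested sum then follows from the geometric series as $\sum_{k=1}^{n}a_k=\frac{5\cdot 3^{\,n+1}-15-6n}{12}$ (my addition — a numerical verification, kept in exact integer arithmetic):</p>

```python
def a_seq(n):
    # the recurrence a_1 = 2, a_k = 3*a_{k-1} + 1
    terms = [2]
    for _ in range(n - 1):
        terms.append(3 * terms[-1] + 1)
    return terms

def a_closed(k):
    return (5 * 3 ** k - 3) // 6             # = (5/6)*3^k - 1/2, exactly

def sum_closed(n):
    return (5 * 3 ** (n + 1) - 15 - 6 * n) // 12

terms = a_seq(10)
```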
| Jordan | 545,687 | <p>In general, the predicate logic statement that $\forall x(x \in A \implies x \in B)$ is written as $A \subseteq B$.</p>
<p>The <em>empty set</em> $\emptyset$ is the set that contains no elements. Therefore, the empty set is a subset of any set, that is, $\emptyset \subseteq X$ for all $X$. This is because the statement $x \in \emptyset$ is false for any $x$, so the implication</p>
<p>$$
\forall x(x \in \emptyset \implies x \in X)
$$</p>
<p>must be true. (See the truth table below for the <em>implication</em> connective.)</p>
<p>$$
\begin{array}{c|l|c}
\text{p} & \text{q} & \text{$p \implies q$} \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
$$</p>
<p>Note that the bottom two rows of the truth table are <em>vacuously</em> true.</p>
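<p>The vacuous-truth point can be spelled out in code (my addition; the small finite universe is just for illustration):</p>

```python
implies = lambda p, q: (not p) or q   # the implication connective tabulated above

empty = set()
X = {1, 2, 3}
universe = {0, 1, 2, 3, 4}

# "for all x: x in empty implies x in X" holds because the antecedent never does
stmt = all(implies(x in empty, x in X) for x in universe)
```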
|
1,348,587 | <p>Let $f$ be a real-valued function (a function with target space the set of reals). Let $P(x, M)$ stand for $|f(x)| \leq M $, let $N$ be the set of positive real numbers, and let $\mathbb{R}$ be the set of real numbers.</p>
<p>a) Which of the following statements is an accurate translation of "f is bounded"?</p>
<p>(i): ($\forall M \in N$)($\exists x \in \mathbb{R}$)($P(x,M)$)</p>
<p>(ii): ($\exists M \in N$)($\forall x \in \mathbb{R}$)($P(x,M)$)</p>
<p>(iii): ($\forall x \in \mathbb{R}$)($\exists M \in N$)($P(x,M)$)</p>
<p>(iv): ($\exists x \in \mathbb{R}$)($\forall M \in N$)($P(x,M)$)</p>
<p>I understand that (III) is the answer that defines a bounded function, but I don't understand how it differs from (II). Also, if someone can provide me with a more explicitly method of reading these types of statements that would really help clarify a lot of things.</p>
| Mauro ALLEGRANZA | 108,274 | <p>The difference between $\exists x \forall y$ and $\forall y \exists x$ is clearly shown by this example regarding <em>natural numbers</em>:</p>
<blockquote>
<p>$\forall n \ \exists m \ (n < m)$</p>
</blockquote>
<p>is clearly true in $\mathbb N$,</p>
<p>while :</p>
<blockquote>
<p>$\exists m \ \forall n \ (n < m)$</p>
</blockquote>
<p>is false in $\mathbb N$.</p>
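<p>The same quantifier-order point, phrased for the boundedness statements in the question (my addition; the finite sample and the candidate bounds are only illustrative): statement (iii) holds for <em>every</em> function, since $M$ may depend on $x$, while (ii) fails for an unbounded one such as $f(x)=x$.</p>

```python
f = lambda x: x                       # an unbounded function on the reals
xs = [float(k) for k in range(-100, 101)]

# (iii)-style: for each x separately some M works, e.g. M = |f(x)| + 1
per_point = all(abs(f(x)) <= abs(f(x)) + 1 for x in xs)

# (ii)-style: one single M must work for every x at once; for f(x) = x,
# any fixed M is defeated once the sample reaches past M
candidates = [1.0, 10.0, 50.0]
uniform = any(all(abs(f(x)) <= M for x in xs) for M in candidates)
```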
|
1,715,032 | <p>When given propositions to prove such as the following question:
prove that $|z+i| = |z-i|$ if $z \in \mathbb{R}$.</p>
<p>Would I have to prove this proposition without substituting $z$ for a complex number?</p>
| Community | -1 | <p>You should certainly NOT substitute a complex number for $z$ (not the other way around, anyway!) if you are asked to prove the identity for REAL valued $z$.</p>
<p>So - rephrasing - yes, you must prove that proposition without substituting some random real numbers for $z$. That will only prove the proposition for the few values you substituted, it will not prove it to be true for ALL real values of $z$. For that you need a more general, theoretical proof. Proofs are exceptionally rarely complete by just plugging in a few values and noticing that the proposition holds.</p>
<p>In this case, if $z$ is real, you can apply the definition of the absolute value of a complex number, because the real part and the imaginary part of $z + i$ and $z - i$ are plainly clear. It all reduces to the obvious identity $\sqrt{z^2 + 1^2} = \sqrt{z^2 + (-1)^2}$, which holds for ALL possible values of $z$, simply because $1^2 = (-1)^2$ regardless of $z$.</p>
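<p>A numeric spot check (my addition — it illustrates, but as the answer stresses does not replace, the general proof):</p>

```python
pairs = []
for z in (-3.7, -1.0, 0.0, 0.5, 12.25):
    lhs = abs(complex(z, 1.0))    # |z + i| = sqrt(z^2 + 1)
    rhs = abs(complex(z, -1.0))   # |z - i| = sqrt(z^2 + 1)
    pairs.append((lhs, rhs))
```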
|
2,783,129 | <p>How many different values of $x$ from $0°$ to $180°$ satisfy the equation $(2\sin x-1)(\cos x+1) = 0$?</p>
<p>The solution shows that one of these is true:</p>
<p>$\sin x = \frac12$ and thus $x = 30^\circ$ or $120^\circ$ </p>
<p>$\cos x = -1$ and thus $x = 180^\circ$</p>
<p><strong>Question:</strong> Inserting the $\arcsin$ of $1/2$ will yield to $30°$, how do I get $120^\circ$? and what is that $120^\circ$, why is there $2$ value but when you substitute $\frac12$ as $x$, you'll only get $1$ value which is the $30^\circ$?</p>
<p>Also, when I do it inversely: $\sin(30^\circ)$ will result to 1/2 which is true as $\arcsin$ of $1/2$ is $30^\circ$. But when you do $\sin(120^\circ)$, it will be $\frac{\sqrt{3}}{2}$, and when you calculate the $\arcsin$ of $\frac{\sqrt{3}}{2}$, it will result to $60^\circ$ and not $120^\circ$. Why?</p>
| user | 505,767 | <p>As you noted, $x=120°$ is indeed not a solution:</p>
<ul>
<li>$(2\sin 120°-1)(\cos 120°+1)=\left(2\frac{\sqrt3}2-1\right)\left(-\frac12+1\right)\neq 0$</li>
</ul>
<p>but</p>
<ul>
<li>$2\sin x-1=0 \implies x=\frac16 \pi + 2k\pi,\,x=\frac56 \pi + 2k\pi$</li>
<li>$\cos x+1 = 0 \implies x=\pi + 2k\pi=(2k+1)\pi$</li>
</ul>
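<p>A scan over integer degrees confirms exactly three solutions in $[0°,180°]$ — namely $30°$, $150°$ and $180°$ — and shows that $120°$ is not one of them (my addition; the tolerance only absorbs floating-point error):</p>

```python
import math

def h(deg):
    x = math.radians(deg)
    return (2 * math.sin(x) - 1) * (math.cos(x) + 1)

roots = [d for d in range(181) if abs(h(d)) < 1e-9]
value_at_120 = h(120)    # = (sqrt(3) - 1) * 1/2, far from zero
```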
|