803,687
<p>How to prove: </p> <blockquote> <p>$A/m^n$ is Artinian for all $n\geq 0$ if $A$ is a Noetherian ring and $m$ a maximal ideal.</p> </blockquote> <p>Any suggestions?</p>
Keenan Kidwell
628
<p>You need to prove that $A/\mathfrak{m}^n$ is of finite length as a module over itself, or equivalently, as an $A$-module (since ideals of $A/\mathfrak{m}^n$ are the same as $A$-submodules). Consider the chain $A/\mathfrak{m}^n\supseteq \mathfrak{m}/\mathfrak{m}^n\supseteq\cdots\supseteq\mathfrak{m}^{n-1}/\mathfrak{m}^n\supseteq 0$ of $A$-submodules of $A/\mathfrak{m}^n$. As $A$ is Noetherian, these are all finitely generated $A$-modules. In particular, the quotients by consecutive terms are finitely generated $A$-modules killed by $\mathfrak{m}$, i.e., finite-dimensional $A/\mathfrak{m}$-modules. A vector space over a field is of finite length over that field if and only if it is of finite dimension, and the $A$-length of an $A$-module killed by $\mathfrak{m}$ is the same as the $A/\mathfrak{m}$-dimension. Thus the quotients in this filtration are all of finite length over $A$. By induction, using that length is additive in short exact sequences, $A/\mathfrak{m}^n$ itself is of finite length over $A$.</p>
4,404,751
<p>I want to prove the following: Every symmetric matrix whose entries are calculated as <span class="math-container">$ 1/(n -1) $</span> with <span class="math-container">$n$</span> as the size of the matrix, except for the diagonal which is 0, has a characteristic polynomial with a root at <span class="math-container">$x=1$</span>. In other words, every such matrix has an eigenvalue of 1.</p> <p>For example Matrix 1:</p> <p><span class="math-container">\begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; \frac{1}{2} \\ \frac{1}{2} &amp; 0 &amp; \frac{1}{2} \\ \frac{1}{2} &amp; \frac{1}{2} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-x^3+\frac{3 x}{4}+\frac{1}{4} $</span> ,which has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 2:</p> <p><span class="math-container">\begin{array}{cccc} 0 &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; 0 &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; 0 &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-(1/27) - (8 x)/27 - (2 x^2)/3 + x^4 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 3:</p> <p><span class="math-container">\begin{array}{ccccc} 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$ f(x)=(4 + 60 x + 320 x^2 + 640 x^3 - 1024 x^5)/1024 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> 
<p>I want to show that this is true for any such n by n matrix, i.e. for all n.</p> <p>Looking for some tips and tricks on how to approach this.</p>
Koro
266,435
<p>Note the following lemma:</p> <p>If the sum of every row of an <span class="math-container">$ m$</span> by <span class="math-container">$m$</span> matrix <span class="math-container">$A$</span> is <span class="math-container">$k$</span>, then <span class="math-container">$k$</span> is an eigenvalue of <span class="math-container">$A$</span>.</p> <p><strong>Proof</strong>: Observe that <span class="math-container">$A[1]_{m\times 1}=k[1]_{m\times 1}$</span>, where <span class="math-container">$[1]_{m\times 1}$</span> represents a matrix of order <span class="math-container">$m$</span> by <span class="math-container">$1$</span>, whose every entry equals <span class="math-container">$1$</span>. QED.</p> <p>In your case, <span class="math-container">$k=1$</span>.</p>
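The lemma is easy to illustrate numerically. A sketch (the helper name `hollow_matrix` is my own): build the matrix from the question for several sizes and confirm both that $1$ is an eigenvalue and the lemma's one-line proof $A\mathbf{1}=\mathbf{1}$.

```python
import numpy as np

# A sketch (helper name mine): the matrix from the question has 0 on the
# diagonal and 1/(n-1) elsewhere, so every row sums to 1.
def hollow_matrix(n):
    A = np.full((n, n), 1.0 / (n - 1))
    np.fill_diagonal(A, 0.0)
    return A

for n in (3, 4, 5, 8):
    A = hollow_matrix(n)
    ones = np.ones(n)
    assert np.allclose(A @ ones, ones)                    # the lemma's proof
    assert np.isclose(np.linalg.eigvalsh(A), 1.0).any()   # 1 is an eigenvalue
```

The remaining eigenvalues are all $-1/(n-1)$, which matches the characteristic polynomials in the question.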
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
richard1941
133,895
<p>There are several topics of calculus: limits, differentiation, integration, power series, approximation of functions... all are bound together by the concept of INFINITY. That is, they are the result of an infinite process. So I would say that calculus is the study of a particular set of infinite processes that have useful problem-solving ability.</p>
31,782
<p>This question is edited following the comment of Joseph. He pointed out that the main object of the first version of this question is the cut locus.</p> <p>Recall that the cut locus of a set <span class="math-container">$S$</span> in a geodesic space <span class="math-container">$X$</span> is the closure of the set of all points <span class="math-container">$p \in X$</span> that have two or more distinct shortest paths in <span class="math-container">$X$</span> from <span class="math-container">$S$</span> to <span class="math-container">$p$</span>. <a href="http://en.wikipedia.org/wiki/Cut_locus" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Cut_locus</a></p> <p>A simple lemma shows that, for a disk <span class="math-container">$D^2$</span> with a Riemannian metric and piecewise smooth <em>generic</em> boundary, the cut locus of <span class="math-container">$D^2$</span> with respect to its boundary is a tree. A picture of such a tree can be found on page 542, figure 17 of the article of Thurston "Shapes of polyhedra". The tree is white. <a href="http://arxiv.org/PS_cache/math/pdf/9801/9801088v2.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/math/pdf/9801/9801088v2.pdf</a> For an ellipse on the 2-plane, the tree is the segment that joins its focal points.</p> <p>More generally, for a Riemannian manifold <span class="math-container">$M^n$</span> with boundary, the cut locus of <span class="math-container">$\partial M$</span> should be a deformation retract of <span class="math-container">$M$</span>. (I guess it is a <span class="math-container">$CW$</span> complex of dimension less than <span class="math-container">$n$</span>.) 
To prove this lemma, notice that <span class="math-container">$M^n\setminus\operatorname{cut-locus}(\partial M^n)$</span> is canonically foliated by geodesic segments that join <span class="math-container">$\operatorname{cut-locus}(\partial M^n)$</span> with <span class="math-container">$\partial M$</span>.</p> <p>I wonder if this lemma has a name or maybe is contained in some textbook on Riemannian geometry?</p>
John Sullivan
46,735
<p>The 'simple lemma' certainly isn't always true. Let's consider just open disks in the Euclidean plane with a flat metric. Fremlin (Proc. LMS, 1997) calls the set of points without a unique nearest boundary point the "skeleton". He gives an example where the skeleton is somewhere dense (that is, its closure, which you call the "cut locus", has nonempty interior). The skeleton will always be an R-tree.</p>
292,639
<p>Traditional Fourier analysis picks a period and then describes a function as: $$f(x) = \frac{1}{2} a_0 + \sum_{k=1}^\infty\, (a_k \cos{(\omega \cdot kx)} + b_k \sin{(\omega \cdot kx)})$$</p> <p>I am wondering whether there is a way to Fourier-analyze a function in a way that the period is dependent on $x$. Let $g$ be a continuous (or differentiable, if necessary) function that is positive for all arguments $x$. Is there a representation of $f$ in a form that looks something like this? $$f(x) = \frac{1}{2} a'_0 + \sum_{k=1}^\infty\, (a'_k \cos{(g(x) \cdot kx)} + b'_k \sin{(g(x) \cdot kx)})$$</p> <p>Perhaps one can first make $f(x)$ periodic in the traditional sense, then apply Fourier analysis, and then backtransform the result. This is just an idea.</p> <p>By the way, $g$ needs to have certain properties for this question to make sense. If $g(x)$ falls so steeply that no period is ever completed, Fourier analysis might not be sensible. If someone would like to work out the details, he may feel free to do so here.</p>
Community
-1
<p>Note that the function is an odd function and you are integrating it from $-4$ to $4$. In general, if $f(x)$ is odd and integrable, we have $$\int_{-a}^a f(x)\, dx = 0$$ where $a \in \mathbb{R}$.</p>
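A numeric illustration of the symmetry fact above (the particular odd function and the quadrature helper are my own choices, not from the answer):

```python
import numpy as np

# A sketch (function and grid are mine): numerically integrate an odd
# function over a symmetric interval and watch the result vanish.
def symmetric_integral(f, a, n=200_001):
    x = np.linspace(-a, a, n)
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule

odd = lambda t: t**3 + np.sin(t)   # a sum of odd functions is odd
print(symmetric_integral(odd, 4.0))   # ~ 0
```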
162,147
<p>I have</p> <pre><code>J = Table[{x10, y10, x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}] L = Table[{x10, y10, 2.0*x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}] </code></pre> <p>I want the third elements of J and L to be added, while the first and second elements stay as they are (they are the same in both cases), for example.</p>
Henrik Schumacher
38,178
<pre><code>result = J; result[[All, All, 3]] += L[[All, All, 3]] </code></pre>
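For readers outside Mathematica, a NumPy analogue of the same in-place update (the array construction below is my own rendering of the `Table` calls, not part of the answer):

```python
import numpy as np

# A sketch: rebuild the two 3x3 tables of triples, then copy J and add
# only the third components, mirroring result[[All, All, 3]] += ...
xs = np.arange(0.0, 1.01, 0.5)                 # {x10, 0, 1, 0.5}
X, Y = np.meshgrid(xs, xs, indexing="ij")
J = np.stack([X, Y, X * Y], axis=-1)           # shape (3, 3, 3)
L = np.stack([X, Y, 2.0 * X * Y], axis=-1)

result = J.copy()
result[..., 2] += L[..., 2]                    # add only the third components
print(result[..., 2])                          # equals 3 * x10 * y10
```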
3,033,202
<p>I have been banging my head against this proof for a few days now, as I can visualize why it is true in my head, but don't know how to prove it in words:</p> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be nonempty subsets of <span class="math-container">$\mathbb{R}$</span>. Show that if there exist disjoint, open sets <span class="math-container">$U$</span> and <span class="math-container">$V$</span> with <span class="math-container">$A \subseteq U$</span> and <span class="math-container">$B \subseteq V$</span>, then <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated.</p> <p>I've seen two answers to this proof on here, but I don't fully understand either one of them, and the question was asked so long ago that neither of those users are active on here anymore, so I can't even ask them specific questions to help my understanding. I have tried proving it directly, but immediately get bogged down in multiple cases of what <span class="math-container">$A$</span> looks like in <span class="math-container">$U$</span> while <span class="math-container">$B$</span> looks a certain way in <span class="math-container">$V$</span>, and vice versa. I have also tried assuming that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> aren't separated, and I can't find a way to reach a contradiction (or contrapositive) from that. I would greatly appreciate any assistance as it is the only proof from my homework that I haven't been able to figure out on my own.</p>
José Carlos Santos
446,262
<p>You can use the fact that<span class="math-container">$$\arcsin'(x)=\frac1{\sin'(\arcsin x)}=\frac1{\cos(\arcsin x)}=\frac1{\sqrt{1-x^2}}.$$</span>So, and since <span class="math-container">$\arcsin(0)=0$</span>, you get that<span class="math-container">$$\arcsin s=\int_0^s\frac{\mathrm dx}{\sqrt{1-x^2}}.$$</span></p>
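The identity is easy to sanity-check numerically (the midpoint-rule helper below is my own sketch):

```python
import math

# A sketch (quadrature scheme mine): arcsin(s) should match the integral
# of 1/sqrt(1 - x^2) from 0 to s, computed here by the midpoint rule.
def arcsin_by_integral(s, n=100_000):
    h = s / n
    return sum(h / math.sqrt(1.0 - ((k + 0.5) * h) ** 2) for k in range(n))

print(arcsin_by_integral(0.5), math.asin(0.5))   # both ~ pi/6 = 0.52359...
```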
177,091
<p>What would $\int\limits_{-\infty}^\infty e^{ikx}dx$ be equal to where $i$ refers to imaginary unit? What steps should I go over to solve this integral? </p> <p>I saw this in the Fourier transform, and am unsure how to solve this.</p>
draks ...
19,341
<p>$\int\limits_{-\infty}^\infty e^{ikx}dx$ is a (<em>EDIT: scaled by $\frac1{2\pi}$</em>) representation of <a href="http://en.wikipedia.org/wiki/Dirac_delta_function" rel="nofollow">Dirac's $\delta(k)$ function</a>. For the antiderivative see the other answers...</p>
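A numerical illustration of the delta-like behavior (everything below, including the names `dirichlet` and `pair_with`, is my own sketch, not from the answer): truncating the integral at $\pm L$ gives $\frac{1}{2\pi}\int_{-L}^{L} e^{ikx}\,dx = \frac{\sin(kL)}{\pi k}$, and pairing that kernel with a smooth test function recovers the value at $k=0$ as $L$ grows.

```python
import math

# A sketch, not part of the answer: the truncated, 1/(2*pi)-scaled integral
# equals sin(kL)/(pi*k), a Dirichlet-type kernel.
def dirichlet(k, L):
    return L / math.pi if k == 0 else math.sin(k * L) / (math.pi * k)

# Pairing the kernel with a smooth test function f approaches f(0) as L
# grows -- the defining behavior of the delta (quadrature grid mine).
def pair_with(f, L, K=40.0, n=400_001):
    h = 2.0 * K / (n - 1)
    return h * sum(f(-K + i * h) * dirichlet(-K + i * h, L) for i in range(n))

f = lambda k: math.exp(-k * k)
print(pair_with(f, 50.0))   # close to f(0) = 1
```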
18,090
<p>I am wondering if anyone could prove that the following definitions of a recurrent/persistent state for Markov chains are equivalent:</p> <p>1) $P(\exists n\ge1: X_n=i\mid X_0=i)=1$</p> <p>2) Let $T_{ij}=\min\{n\ge1: X_n=j\}$ given $X_0=i$; a state $i$ is recurrent if for every $j$ such that $P(T_{ij}&lt;\infty)&gt;0$, one has $P(T_{ij}&lt;\infty)=1$</p>
Did
6,179
<p>See <a href="https://mathoverflow.net/questions/53011/proof-of-equivalent-definitions-of-recurrent-states-for-markov-chains">this page</a>.</p> <p><strong>Edit:</strong> Since incorrect statements continue to pop up on this page, let me explain once again the situation.</p> <p>First, two notations (standard in the field): for every state $i$, $P_i$ denotes the distribution of the Markov chain $(X_n)_{n\ge0}$ conditionally on the event $[X_0=i]$ and $T_i=\inf\{n\ge1;X_n=i\}$ denotes the first hitting time of $i$. Next, a tautology: the events $[\exists n\ge1,X_n=i]$ and $[T_i&lt;+\infty]$ are equal.</p> <p>Second, a statement of the question the OP is interested in. One considers two properties which may or may not be satisfied by a given Markov chain $(X_n)_{n\ge0}$ and a given state $i$:</p> <p>(1) $P_i(\exists n\ge1,X_n=i)=1$.</p> <p>(2) For every state $j$, either $P_i(T_j&lt;+\infty)=0$ or $P_i(T_j&lt;+\infty)=1$.</p> <p>Property (1) is often taken as a definition of the fact that $i$ is a <em>recurrent state</em> of the Markov chain $(X_n)_{n\ge0}$. Property (2) is more unusual.</p> <p>Now, an answer to the question.</p> <p>Property (1) implies property (2). Proof: see Byron's post. Note that if (1) holds, both $P_i(T_j&lt;+\infty)=0$ and $P_i(T_j&lt;+\infty)=1$ can occur for different states $i$ and $j$ of the same Markov chain. Example: state space $\{0,1,2\}$, transitions $0\to1$, $0\to2$, $1\to1$, $1\to2$, $2\to1$ and $2\to2$ all with positive probabilities, the other transitions with probability zero. Then $P_i(T_0&lt;+\infty)=0$ and $P_i(T_j&lt;+\infty)=1$ for every $i$ and every $j\ne0$.</p> <p>Property (2) does not imply property (1). Proof by an example: state space $\mathbb{Z}$, transitions $i\to i+1$ with probability $1$. 
Then $P_i(T_j&lt;+\infty)=0$ if $j\le i$, $P_i(T_j&lt;+\infty)=1$ if $j\ge i+1$, but $P_i(\exists n\ge1,X_n=i)=0$ for every $i$.</p> <p>Finally (and once again), a congenial reference for this is the first chapter of the book <em>Markov chains</em> by James Norris, which is freely available <a href="http://www.statslab.cam.ac.uk/~james/Markov" rel="nofollow noreferrer">here</a>.</p>
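The three-state example in the answer can be checked numerically; a sketch (the specific transition probabilities $1/2$ are my own choice, the answer only requires them to be positive):

```python
import numpy as np

# A sketch of the answer's example on states {0, 1, 2}: state 0 is never
# re-entered, so P_i(T_0 < oo) = 0, while P_i(T_j < oo) = 1 for j != 0.
P = np.array([
    [0.0, 0.5, 0.5],   # 0 -> 1 or 2, never back to 0
    [0.0, 0.5, 0.5],   # 1 -> 1 or 2
    [0.0, 0.5, 0.5],   # 2 -> 1 or 2
])

def hit_prob(P, start, target, n_steps=200):
    # Approximates P_start(T_target < oo) by making `target` absorbing and
    # iterating the chain; T counts steps n >= 1, so when start == target
    # we take one un-absorbed step first.
    Q = P.copy()
    Q[target] = 0.0
    Q[target, target] = 1.0
    dist = P[start].copy() if start == target else np.eye(len(P))[start]
    for _ in range(n_steps):
        dist = dist @ Q
    return float(dist[target])

print(hit_prob(P, 1, 0), hit_prob(P, 1, 2))   # 0 and (essentially) 1
```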
466,873
<p>I'm trying to prove that if $K$ is a finite field, then every subset of $\mathbb A^n(K)$ is algebraic. I know that if $K$ is finite, then every element of $K$ is algebraic, i.e., for every $a\in K$ there is a polynomial $f\in K[x]$ such that $f(a)=0$, but this didn't help me to solve the question. I'm almost sure that we have to use this to solve the question.</p> <p>I need help.</p> <p>Thanks in advance.</p>
Alex Youcis
16,497
<p>Here is another approach; I'll give you a hint. Show that every map $K^n\to K$ is a polynomial. Begin by showing that you can write the indicator function at any point as a polynomial (remember you can find a polynomial that has, as roots, any subset of $K$). Then, just add up appropriate multiples of these indicator functions.</p>
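A sketch of the hint, specialized to a prime field $\mathbb{F}_p$ where the indicator at $a$ is simply $1-(x-a)^{p-1}$ by Fermat's little theorem (all names below are mine; shown for one variable, and the $n$-variable indicator is a product of such one-variable indicators):

```python
# A sketch over GF(p) with p prime, so arithmetic is just "mod p".
p = 5

def indicator(a, x):
    # 1 - (x - a)^(p-1) mod p equals 1 at x = a and 0 elsewhere,
    # by Fermat's little theorem.
    return (1 - pow(x - a, p - 1, p)) % p

def interpolate(values):
    # Realize an arbitrary map GF(p) -> GF(p), x -> values[x],
    # as a sum of scaled indicator polynomials.
    return lambda x: sum(values[a] * indicator(a, x) for a in range(p)) % p

f = interpolate([3, 1, 4, 1, 0])
print([f(x) for x in range(p)])   # [3, 1, 4, 1, 0]
```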
4,159,451
<p>Kindly see the bolded sentence below. What probability concept does &quot;the most likely value&quot; refer to? How do I symbolize it?</p> <p>I don't think the author's referring to Expected Value, because it's additive.</p> <blockquote> <p>      The principle of additivity is so intuitively appealing that it’s easy to think it’s obvious. But, just like the pricing of life annuities, it’s not obvious! To see that, substitute other notions in place of expected value and watch everything go haywire. Consider:<br />       <em><strong>The most likely value of the sum of a bunch of things is the sum of the most likely values of each of the things.</strong></em> [emphasis mine]<br />       That’s totally wrong. Suppose I choose randomly which of my three children to give the family fortune to. The most likely value of each child’s share is zero, because there’s a two in three chance I’m disinheriting them. But the most likely value of the sum of those three allotments—in fact, its <em>only possible</em> value—is the amount of my whole estate.</p> </blockquote> <p>Ellenberg, <em>How Not to Be Wrong</em> (2014), page 213.</p>
Thomas Andrews
7,933
<p>You can only really define the “most likely” value for some random variable. If <span class="math-container">$f$</span> is a pdf for a random variable <span class="math-container">$X,$</span> then the most likely value of <span class="math-container">$X$</span> is the “mode” of <span class="math-container">$X,$</span> a value <span class="math-container">$x_0$</span> such that:</p> <p><span class="math-container">$$f(x_0)=\sup_x f(x).$$</span></p> <p>But the mode is not necessarily unique. And even if <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> have a unique mode, <span class="math-container">$X_1+X_2$</span> might not.</p> <p>I suppose, if <span class="math-container">$f$</span> is continuous, you could define <span class="math-container">$M_f=\sup_x f(x),$</span> then define, for <span class="math-container">$M&lt;M_f,$</span> <span class="math-container">$E_{f,M} =\{x\mid f(x)&gt;M\}$</span> and define our “most likely value” as:</p> <p><span class="math-container">$$\begin{align}\mathcal A(X)&amp;=\lim_{M\to {M_f}^-}\frac{\int_{E_{f,M}}xf(x)\,dx}{\int_{E_{f,M}}f(x)\,dx} \\&amp;=\lim_{M\to {M_{f}}^{-}}E\left(X\mid X\in E_{f,M}\right)\end{align}$$</span></p> <p>That will be, roughly, a weighted average when there are multiple values for the mode, where the weights prefer the ones where <span class="math-container">$f$</span> converges more slowly to <span class="math-container">$\sup f.$</span> In particular, if there is one mode, <span class="math-container">$\mathcal A(X)$</span> will be that mode.</p> <p>This definition satisfies some useful rules.</p> <p><span class="math-container">$\mathcal A(aX+b)=a\mathcal A(X)+b$</span> when <span class="math-container">$a,b$</span> are (constant) real numbers.</p> <p>If <span class="math-container">$X$</span> has an even distribution (<span class="math-container">$f(x)=f(-x)$</span>) then <span class="math-container">$\mathcal A(X)=E(X)=0.$</span></p> <p>This definition works for some unbounded <span class="math-container">$f$</span>, for example <span class="math-container">$f(x)=\frac{1}{2\sqrt{x}}$</span> in <span class="math-container">$(0,1].$</span> Then <span class="math-container">$M_f=+\infty.$</span></p> <p>If <span class="math-container">$X_1,\dots,X_n$</span> are real continuous random variables, then we would want:</p> <p><span class="math-container">$$\mathcal A(X_1+X_2+\cdots+X_n)=\mathcal A(X_1)+\mathcal A(X_2)+\cdots +\mathcal A(X_n).$$</span></p> <p>This is true if the <span class="math-container">$X_i$</span> are normal, or more generally if <span class="math-container">$X_i-E(X_i)$</span> is even.</p> <p>But this won’t be true in general.</p>
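Ellenberg's inheritance example, quoted in the question, can be made concrete by enumeration; a sketch (the estate value 3 is my own choice):

```python
from collections import Counter

# A sketch of the quoted example: one of three children, chosen uniformly,
# inherits the whole estate.
estate = 3
scenarios = [[estate if i == chosen else 0 for i in range(3)]
             for chosen in range(3)]

def mode(values):
    # the most frequent value -- the "most likely value" under discussion
    return Counter(values).most_common(1)[0][0]

shares = [[s[i] for s in scenarios] for i in range(3)]
modes = [mode(share) for share in shares]   # each child's most likely share
total = [sum(s) for s in scenarios]         # distribution of the sum

print(modes, mode(total))   # [0, 0, 0] vs 3: the mode is not additive
```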
3,912,722
<blockquote> <p>Circle of radius <span class="math-container">$r$</span> touches the parabola <span class="math-container">$y^2+12x=0$</span> at its vertex. Centre of circle lies left of the vertex and circle lies entirely within the parabola. What is the largest possible value of <span class="math-container">$r$</span>?</p> </blockquote> <p>So my book has given the solution as follows:</p> <blockquote> <p>The equation of the circle can be taken as: <span class="math-container">$(x+r)^2+y^2=r^2$</span><br /> and when we solve the equation of the circle and the parabola, we get <span class="math-container">$x=0$</span> or <span class="math-container">$x=12-2r$</span>.</p> </blockquote> <blockquote> <p>Then, <span class="math-container">$12-2r≥0$</span> and finally, the largest possible value of <span class="math-container">$r$</span> is <span class="math-container">$6$</span>.</p> </blockquote> <p>This is where I got stuck as I'm not able to understand why that condition must be true. I get that the circle must lie within the parabola...</p> <p>Can someone please explain this condition to me?</p>
g.kov
122,782
<p>Under given constraints, the maximal value of the radius <span class="math-container">$r_{\max}=\tfrac1\kappa$</span>, where <span class="math-container">$\kappa$</span> is the curvature of the parabola at its vertex.</p> <p>For the given parabola <span class="math-container">$y^2=-12x$</span> we can use a convenient parametric representation</p> <p><span class="math-container">\begin{align} y(t)&amp;=t ,\quad x(t)=-\tfrac1{12}\,t^2 ,\\ y'(t)&amp;=1 ,\quad x'(t)=-\tfrac1{6}\,t ,\\ y''(t)&amp;=0 ,\quad x''(t)=-\tfrac1{6} , \end{align}</span></p> <p>so we can use a known expression for <a href="https://en.wikipedia.org/wiki/Curvature#In_terms_of_a_general_parametrization" rel="nofollow noreferrer">the curvature of parametric curve</a></p> <p><span class="math-container">\begin{align} \kappa(t)&amp;= \frac{x'y''-y'x''}{\sqrt{(x'^2+y'^2)^3}} \\ &amp;=\frac{-\tfrac1{6}\,t\cdot0-1\cdot(-\tfrac16)} {\sqrt{\Big(\tfrac1{36}\,t^2+1\Big)^3}} =\frac{36}{\sqrt{(t^2+36)^3}} . \end{align}</span></p> <p>The vertex is located at the point <span class="math-container">$(x(t),y(t))|_{t=0}$</span>, so the answer is</p> <p><span class="math-container">\begin{align} r_{\max}&amp;=\frac1{\kappa(0)} =\frac{\sqrt{(0^2+36)^3}}{36} =6 . \end{align}</span></p>
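The curvature computation above admits a quick finite-difference check (the helper names and step size are mine; since $x(t)$ and $y(t)$ are polynomials of degree at most 2, the central differences below are exact up to rounding):

```python
# A sketch: numerically evaluate the curvature formula used in the answer,
# kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), for the given parabola.
def x(t): return -t * t / 12.0
def y(t): return t

def curvature(t, h=1e-4):
    x1 = (x(t + h) - x(t - h)) / (2 * h)            # x'(t)
    y1 = (y(t + h) - y(t - h)) / (2 * h)            # y'(t)
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / (h * h) # x''(t)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h) # y''(t)
    return (x1 * y2 - y1 * x2) / (x1 * x1 + y1 * y1) ** 1.5

print(1.0 / curvature(0.0))   # r_max = 6
```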
3,912,722
<blockquote> <p>Circle of radius <span class="math-container">$r$</span> touches the parabola <span class="math-container">$y^2+12x=0$</span> at its vertex. Centre of circle lies left of the vertex and circle lies entirely within the parabola. What is the largest possible value of <span class="math-container">$r$</span>?</p> </blockquote> <p>So my book has given the solution as follows:</p> <blockquote> <p>The equation of the circle can be taken as: <span class="math-container">$(x+r)^2+y^2=r^2$</span><br /> and when we solve the equation of the circle and the parabola, we get <span class="math-container">$x=0$</span> or <span class="math-container">$x=12-2r$</span>.</p> </blockquote> <blockquote> <p>Then, <span class="math-container">$12-2r≥0$</span> and finally, the largest possible value of <span class="math-container">$r$</span> is <span class="math-container">$6$</span>.</p> </blockquote> <p>This is where I got stuck as I'm not able to understand why that condition must be true. I get that the circle must lie within the parabola...</p> <p>Can someone please explain this condition to me?</p>
fleablood
280,126
<p>For the circle to be entirely <em>within</em> the parabola means the parabola and the circle intersect at exactly one point.</p> <p>However solving <span class="math-container">$(x+r)^2 + y^2 = r^2$</span> and <span class="math-container">$y^2 + 12x = 0$</span> we actually will find <em>three</em> (potential) points of intersection.</p> <blockquote> <p><span class="math-container">$y^2=-12x$</span> so <span class="math-container">$(x+r)^2 +y^2 =(x+r)^2 -12x = r^2$</span> and <span class="math-container">$x(x+2r -12) =0$</span> so</p> </blockquote> <blockquote> <p><span class="math-container">$x = 0$</span> and <span class="math-container">$y=0$</span>, or <span class="math-container">$x = (12-2r)$</span> and <span class="math-container">$y = \pm \sqrt{12(2r-12)}$</span>.</p> </blockquote> <p><span class="math-container">$x=0, y=0$</span> is okay, but we can't have <span class="math-container">$x = (12-2r)$</span> and <span class="math-container">$y= \pm \sqrt{12(2r-12)}$</span> be two more points of intersection.</p> <p>The only way to avoid those two <em>potential</em> points existing is if either: i) <span class="math-container">$(12-2r, \pm \sqrt{12(2r-12)}) = (0,0)$</span>, in other words if <span class="math-container">$r=6$</span>; or ii) <span class="math-container">$\sqrt{12(2r-12)}$</span> doesn't exist, in other words if <span class="math-container">$12(2r-12) &lt; 0$</span>, i.e. if <span class="math-container">$r &lt; 6$</span>.</p> <p>So if <span class="math-container">$r \le 6$</span> then only one point of intersection exists.</p> <p>If <span class="math-container">$r &gt; 6$</span> then three points of intersection exist.</p> <p>As we figured that three points of intersection were impossible, we conclude <span class="math-container">$r \le 6$</span>.</p> <p>......</p> <p>Hmm, I suppose we have to justify my very first sentence: &quot;For the circle to be entirely <em>within</em> the parabola means the parabola and the circle intersect at exactly one point&quot;.</p> <p>Note: if <span class="math-container">$r &gt; 6$</span> and if <span class="math-container">$12-2r &lt; x &lt; 0$</span> then the points of the circle <span class="math-container">$y=\pm\sqrt{r^2 - (x+r)^2}$</span> satisfy <span class="math-container">$y^2 &gt; -12x$</span>, so those points of the circle are <em>outside</em> the parabola.</p> <p>========</p> <p>Maybe a clearer explanation of what, <em>algebraically</em>, a circle being <em>within</em> a parabola means.</p> <p>A point <span class="math-container">$(w,v)$</span> is &quot;inside&quot; the parabola <span class="math-container">$y^2 + 12x =0$</span>, or <span class="math-container">$y = \pm \sqrt {-12x}$</span>, if the point <span class="math-container">$(w,v)$</span> is left of the vertex of <span class="math-container">$y^2 + 12x = 0$</span>, that is to say <span class="math-container">$w &lt; 0$</span>, and if the point <span class="math-container">$(w,v)$</span> is within the bounds of the parabola at <span class="math-container">$x = w$</span>; that is to say, if <span class="math-container">$y^2 + 12w =0$</span> so <span class="math-container">$y =\pm \sqrt {-12w}$</span>, then the value of <span class="math-container">$v$</span> satisfies <span class="math-container">$-\sqrt{-12w} \le v \le \sqrt{-12w}$</span>.</p> <p>Now the equation of the circle is <span class="math-container">$(x+r)^2 + y^2 = r^2$</span>, so a point <span class="math-container">$(w,v)$</span> on the circle satisfies <span class="math-container">$(w+r)^2 + v^2 = r^2$</span>, and if the circle is inside the parabola we must have <span class="math-container">$-\sqrt{-12w} \le v \le \sqrt{-12w}$</span>.</p> <p>Solving: <span class="math-container">$w^2 + 2wr +r^2 + v^2 = r^2$</span> so <span class="math-container">$v =\pm\sqrt{-w(w+2r)}$</span>, so we need <span class="math-container">$-\sqrt{-12w} \le -\sqrt{-w(w+2r)} \le \sqrt{-w(w+2r)}\le \sqrt{-12w}$</span>.</p> <p>For all points of the circle we have <span class="math-container">$w$</span> between <span class="math-container">$-2r$</span> and <span class="math-container">$0$</span>, so <span class="math-container">$w \le 0$</span>. If <span class="math-container">$w=0$</span> we have <span class="math-container">$v=0$</span> and the inequality holds.</p> <p>If <span class="math-container">$w &lt; 0$</span> then <span class="math-container">$\sqrt{-w} &gt; 0$</span> and we can divide the inequality by <span class="math-container">$\sqrt{-w}$</span> to get</p> <p><span class="math-container">$-\sqrt{12}\le -\sqrt{w+2r} \le \sqrt{w+2r} \le \sqrt{12}$</span> or <span class="math-container">$0 \le w+2r \le 12$</span> for all <span class="math-container">$w &lt; 0$</span>. That means that <span class="math-container">$2r &lt; 12 + |w|$</span> for all <span class="math-container">$|w|&gt;0$</span>, or that <span class="math-container">$r &lt; 6 + \epsilon$</span> for all <span class="math-container">$\epsilon &gt; 0$</span>.</p> <p>Sigh... okay, this throws everyone the first time they see it, but that means <span class="math-container">$r \le 6$</span>. (If <span class="math-container">$r &gt; 6$</span> then setting <span class="math-container">$\epsilon = r-6$</span> gives <span class="math-container">$\epsilon &gt; 0$</span> and the contradiction <span class="math-container">$r &lt; 6 + \epsilon = 6 +(r-6) = r$</span>.)</p> <p>.....</p> <p>Hmmm.... this proof is nowhere near as clear or easy to follow as my first argument that if a circle is within a parabola then it intersects it at exactly one point.</p> <p>But.... although it is geometrically obvious that that statement about intersection is true, I'm not sure we can justify it without an argument.</p>
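The case analysis can also be brute-force checked: sample points of the circle and test the inside-the-parabola inequality $y^2 \le -12x$ pointwise (the sampling scheme and tolerance below are my own):

```python
import math

# A sketch: the circle (x + r)^2 + y^2 = r^2 stays inside y^2 <= -12x
# exactly when r <= 6, matching the answer's conclusion.
def circle_inside_parabola(r, samples=10_000):
    for k in range(samples):
        theta = 2.0 * math.pi * k / samples
        px = -r + r * math.cos(theta)
        py = r * math.sin(theta)
        if py * py > -12.0 * px + 1e-9:   # tolerance for the tangency at r = 6
            return False
    return True

print(circle_inside_parabola(6.0), circle_inside_parabola(6.5))   # True False
```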
4,305,593
<blockquote> <p>Prove that <span class="math-container">$$ \int_{0}^{\infty} \frac{\sin^2 x-x\sin x}{x^3} \, dx = \frac{1}{2} - \ln 2 .$$</span></p> </blockquote> <p>Integration by parts gives</p> <p><span class="math-container">\begin{align*} &amp;\lim_{R\to \infty} \int_{0}^{R} \frac{\sin^2 x-x\sin x}{x^3} \, dx \\ &amp;= \lim_{R\to \infty} \biggl( \int_{0}^{R} \frac{\sin^2x}{x^3} \, dx - \int_{0}^{R} \frac{\sin x}{x^2} \, dx \biggr)\\ &amp;= \lim_{R\to\infty} \biggl( \frac{\sin^2 x}{-2x^2}\Biggr\rvert_{0}^{R} - \int_{0}^{R} \frac{\sin (2x)}{-2x^2} \, dx - \biggl(-\frac{\sin x}{x} \Biggr\rvert_{0}^{R} + \int_{0}^{R} \frac{\cos x}{x} \,dx \biggr) \biggr) \\ &amp;= \lim_{R\to \infty} \biggl(\frac{1}{2} + \int_{0}^{2R} \frac{\sin u}{u^2/2} \, \Bigl(\frac{1}2 \, du\Bigr) - \biggl( 1 + \int_{0}^{R} \frac{\cos x}{x} \, dx \biggr)\biggr) \\ &amp;\hspace{22em}\text{(using the substitution $u\mapsto 2x$)}\\ &amp;= -\frac{1}{2} + \lim_{R\to \infty} \biggl(\int_{0}^{2R} \frac{\sin u}{u^2} \, du - \int_{0}^{R} \frac{\cos x}{x} \,dx \biggr)\\ &amp;= \frac{1}{2} + \lim_{R\to\infty} \biggl(\int_{R}^{2R} \frac{\cos x}{x} \, dx \biggr) \end{align*}</span></p> <p>Thus it suffices to show that <span class="math-container">$\lim_{R\to\infty} \int_{R}^{2R} \frac{\cos x}{x} \, dx = \ln 2$</span>. The Taylor series expansion of <span class="math-container">$\cos x$</span> is given by <span class="math-container">$\cos x = \sum_{i=0}^{\infty} \frac{(-1)^i x^{2i}}{(2i)!}$</span>.</p> <blockquote> <p>(If the step below (the one involving the interchanging of an infinite sum and integral) is valid, why exactly is it valid? For instance, does it use uniform convergence?)</p> </blockquote> <p>The limit equals</p> <p><span class="math-container">$$ \lim_{R\to\infty} \int_{R}^{2R} \biggl( \frac{1}{x} + \sum_{i=1}^{\infty} \frac{(-1)^i x^{2i-1}}{(2i)!} \biggr) \, dx = \ln 2 + \lim_{R\to\infty} \sum_{i=1}^{\infty} \biggl[\frac{(-1)^ix^{2i}}{(2i)(2i)!}\biggr]_{R}^{2R} . 
$$</span></p> <p>But I don't know how to show <span class="math-container">$\lim_{R\to\infty} \lim_{R\to\infty} \sum_{i=1}^{\infty} \Bigl[\frac{(-1)^ix^{2i}}{(2i)(2i)!}\Bigr]_{R}^{2R} = -2 \ln 2$</span>.</p>
Sangchul Lee
9,340
<p>Here is a correct solution that you might want to compare with:</p> <p><span class="math-container">\begin{align*} &amp;\int_{0}^{\infty}\frac{\sin^2 x - x \sin x}{x^3} \, \mathrm{d}x \\ &amp;= \lim_{\substack{R\to\infty \\ \varepsilon \to 0^+}} \int_{\varepsilon}^{R}\frac{\sin^2 x - x \sin x}{x^3} \, \mathrm{d}x \\ &amp;= \lim_{\substack{R\to\infty \\ \varepsilon \to 0^+}} \biggl( \left[ -\frac{\sin^2 x}{2x^2} + \frac{\sin x}{x} \right]_{\varepsilon}^{R} + \int_{\varepsilon}^{R} \left( \frac{\sin(2x)}{2x^2} - \frac{\cos x}{x} \right) \, \mathrm{d}x \biggr) \\ &amp;= -\frac{1}{2} + \lim_{\substack{R\to\infty \\ \varepsilon \to 0^+}} \biggl( \int_{2\varepsilon}^{2R} \frac{\sin x}{x^2} \, \mathrm{d}x - \int_{\varepsilon}^{R} \frac{\cos x}{x} \, \mathrm{d}x \biggr) \\ &amp;= -\frac{1}{2} + \lim_{\substack{R\to\infty \\ \varepsilon \to 0^+}} \biggl( \left[ -\frac{\sin x}{x} \right]_{2\varepsilon}^{2R} + \int_{2\varepsilon}^{2R} \frac{\cos x}{x} \, \mathrm{d}x - \int_{\varepsilon}^{R} \frac{\cos x}{x} \, \mathrm{d}x \biggr) \\ &amp;= \frac{1}{2} + \lim_{\substack{R\to\infty \\ \varepsilon \to 0^+}} \biggl( \int_{R}^{2R} \frac{\cos x}{x} \, \mathrm{d}x - \int_{\varepsilon}^{2\varepsilon} \frac{\cos x}{x} \, \mathrm{d}x \biggr). \end{align*}</span></p> <p>Now by noting that</p> <p><span class="math-container">\begin{align*} \left|\int_{R}^{2R} \frac{\cos x}{x} \, \mathrm{d}x\right| &amp;= \left| \frac{\sin R}{R} - \frac{\sin (2R)}{2R} + \int_{R}^{2R} \frac{\sin x}{x^2} \, \mathrm{d}x \right| \\ &amp;\leq \frac{1}{R} + \frac{1}{2R} + \int_{R}^{2R} \frac{1}{x^2} \, \mathrm{d}x \\ &amp;= \frac{2}{R}, \end{align*}</span></p> <p>it follows that</p> <p><span class="math-container">$$ \lim_{R\to\infty} \int_{R}^{2R} \frac{\cos x}{x} \, \mathrm{d}x = 0. 
$$</span></p> <p>On the other hand, by substituting <span class="math-container">$x = \varepsilon u$</span>,</p> <p><span class="math-container">$$ \int_{\varepsilon}^{2\varepsilon} \frac{\cos x}{x} \, \mathrm{d}x = \int_{1}^{2} \frac{\cos(\varepsilon u)}{u} \, \mathrm{d}u \xrightarrow{\varepsilon \to 0^+} \int_{1}^{2} \frac{1}{u} \, \mathrm{d}u = \log 2 $$</span></p> <p>by the dominated convergence theorem. Therefore</p> <p><span class="math-container">$$ \int_{0}^{\infty}\frac{\sin^2 x - x \sin x}{x^3} \, \mathrm{d}x = \frac{1}{2} - \log 2. $$</span></p>
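As a quick numerical sanity check of the final value (a sketch added here, not part of the original answer; the cutoff at $300\pi$ and the tolerance are ad hoc choices), one can integrate half-period by half-period with scipy:

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    # (sin^2 x - x sin x) / x^3; behaves like -x/6 near 0, so it is integrable there
    return (np.sin(x)**2 - x*np.sin(x)) / x**3

# Integrate half-period by half-period out to 300*pi; the remaining tail is O(1/R^2).
pieces = [quad(integrand, max(k*np.pi, 1e-9), (k + 1)*np.pi)[0] for k in range(300)]
val = sum(pieces)

expected = 0.5 - np.log(2)
print(val, expected)  # both ≈ -0.1931
```

The agreement to several digits supports the closed form $\tfrac12 - \log 2$.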
4,305,593
<blockquote> <p>Prove that <span class="math-container">$$ \int_{0}^{\infty} \frac{\sin^2 x-x\sin x}{x^3} \, dx = \frac{1}{2} - \ln 2 .$$</span></p> </blockquote> <p>Integration by parts gives</p> <p><span class="math-container">\begin{align*} &amp;\lim_{R\to \infty} \int_{0}^{R} \frac{\sin^2 x-x\sin x}{x^3} \, dx \\ &amp;= \lim_{R\to \infty} \biggl( \int_{0}^{R} \frac{\sin^2x}{x^3} \, dx - \int_{0}^{R} \frac{\sin x}{x^2} \, dx \biggr)\\ &amp;= \lim_{R\to\infty} \biggl( \frac{\sin^2 x}{-2x^2}\Biggr\rvert_{0}^{R} - \int_{0}^{R} \frac{\sin (2x)}{-2x^2} \, dx - \biggl(-\frac{\sin x}{x} \Biggr\rvert_{0}^{R} + \int_{0}^{R} \frac{\cos x}{x} \,dx \biggr) \biggr) \\ &amp;= \lim_{R\to \infty} \biggl(\frac{1}{2} + \int_{0}^{2R} \frac{\sin u}{u^2/2} \, \Bigl(\frac{1}2 \, du\Bigr) - \biggl( 1 + \int_{0}^{R} \frac{\cos x}{x} \, dx \biggr)\biggr) \\ &amp;\hspace{22em}\text{(using the substitution $u\mapsto 2x$)}\\ &amp;= -\frac{1}{2} + \lim_{R\to \infty} \biggl(\int_{0}^{2R} \frac{\sin u}{u^2} \, du - \int_{0}^{R} \frac{\cos x}{x} \,dx \biggr)\\ &amp;= \frac{1}{2} + \lim_{R\to\infty} \biggl(\int_{R}^{2R} \frac{\cos x}{x} \, dx \biggr) \end{align*}</span></p> <p>Thus it suffices to show that <span class="math-container">$\lim_{R\to\infty} \int_{R}^{2R} \frac{\cos x}{x} \, dx = \ln 2$</span>. The Taylor series expansion of <span class="math-container">$\cos x$</span> is given by <span class="math-container">$\cos x = \sum_{i=0}^{\infty} \frac{(-1)^i x^{2i}}{(2i)!}$</span>.</p> <blockquote> <p>(If the step below (the one involving the interchanging of an infinite sum and integral) is valid, why exactly is it valid? For instance, does it use uniform convergence?)</p> </blockquote> <p>The limit equals</p> <p><span class="math-container">$$ \lim_{R\to\infty} \int_{R}^{2R} \biggl( \frac{1}{x} + \sum_{i=1}^{\infty} \frac{(-1)^i x^{2i-1}}{(2i)!} \biggr) \, dx = \ln 2 + \lim_{R\to\infty} \sum_{i=1}^{\infty} \biggl[\frac{(-1)^ix^{2i}}{(2i)(2i)!}\biggr]_{R}^{2R} . 
$$</span></p> <p>But I don't know how to show <span class="math-container">$\lim_{R\to\infty} \sum_{i=1}^{\infty} \Bigl[\frac{(-1)^ix^{2i}}{(2i)(2i)!}\Bigr]_{R}^{2R} = -2 \ln 2$</span>.</p>
Claude Leibovici
82,404
<p><span class="math-container">$$I=\int \frac{\sin^2 (x)-x\sin (x)}{x^3}\, dx=\int \frac{\sin^2 (x)}{x^3}\, dx-\int \frac{\sin (x)}{x^2}\, dx$$</span> <span class="math-container">$$\int \frac{\sin^2 (x)}{x^3}\, dx=\frac 1 2\int \frac{1-\cos(2x)}{x^3}\,dx$$</span> <span class="math-container">$$\int \frac{1-\cos(2x)}{x^3}\,dx=-\frac 1{2x^2}-\int \frac{\cos(2x)}{x^3}\,dx$$</span> <span class="math-container">$$\int \frac{\cos(2x)}{x^3}\,dx=-\frac{\cos(2x)}{x^2}-\int\frac{\sin(2x)}{x^2}\,dx$$</span> <span class="math-container">$$\int\frac{\sin(2x)}{x^2}\,dx=-\frac{\sin(2x)}{x}+2\int \frac{\cos(2x)}x\,dx$$</span> <span class="math-container">$$\int \frac{\cos(2x)}x\,dx=\int \frac{\cos(y)}y\,dy=\text{Ci}(y)=\text{Ci}(2x)$$</span></p> <p>Combining all the above <span class="math-container">$$\color{red}{\int \frac{\sin^2 (x)}{x^3}\, dx=\text{Ci}(2 x)-\frac{\sin (x) (\sin (x)+2 x \cos (x))}{2 x^2}}$$</span> <span class="math-container">$$\int \frac{\sin (x)}{x^2}\, dx=-\frac{\sin (x)}{x}+\int \frac{\cos(x)}x\,dx$$</span></p> <p><span class="math-container">$$\color{red}{\int \frac{\sin (x)}{x^2}\, dx=-\frac{\sin (x)}{x}+\text{Ci}(x)}$$</span></p> <p>So, <span class="math-container">$$\color{blue}{I=-\text{Ci}(x)+\text{Ci}(2 x)-\frac{1}{4 x^2}+\frac{\cos (2 x)}{4 x^2}+\frac{\sin(x)}{x}-\frac{\sin (2 x)}{2 x}}$$</span> If <span class="math-container">$x\to \infty$</span> its value is <span class="math-container">$0$</span>.</p> <p>Now, for <span class="math-container">$x\to 0$</span>, using expansion we have <span class="math-container">$$\left(\log (2)-\frac{1}{2}\right)-\frac{x^2}{12}+\frac{13 x^4}{1440}+O\left(x^6\right)$$</span> <span class="math-container">$$J=\int_\epsilon^\infty \frac{\sin^2 (x)-x\sin (x)}{x^3}\, dx=\left(\frac{1}{2}-\log (2)\right)+\frac{\epsilon^2}{12}+O\left(\epsilon^4\right)$$</span></p>
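The closed-form antiderivative above can be sanity-checked numerically with mpmath (a sketch added here, not from the original answer; mpmath's <code>ci</code> is the cosine integral $\text{Ci}$):

```python
import mpmath as mp

mp.mp.dps = 30  # extra precision tames the cancellation in (cos(2x) - 1)/(4x^2)

def F(x):
    # The blue antiderivative from the answer (defined up to an additive constant)
    x = mp.mpf(x)
    return (-mp.ci(x) + mp.ci(2*x) - 1/(4*x**2) + mp.cos(2*x)/(4*x**2)
            + mp.sin(x)/x - mp.sin(2*x)/(2*x))

integrand = lambda x: (mp.sin(x)**2 - x*mp.sin(x)) / x**3

# F' should reproduce the integrand (checked at an arbitrary point)
print(mp.diff(F, 1.7), integrand(mp.mpf('1.7')))

# Limits quoted in the answer: F -> 0 at infinity, F -> log(2) - 1/2 at 0+
print(F('1e6'))                       # ≈ 0
print(F('1e-3'), mp.log(2) - 0.5)     # agree up to the x^2/12 correction
```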
619,607
<p>$\mathbf{(1)}$ Find $y^{\prime}$ of $y=8^{\sqrt x}$ </p> <p>My try: </p> <p>$\ln y=\ln(8)^{\sqrt x}$<br> $\dfrac{1}{y}y^{\prime}=\sqrt{x}\ln8$<br> I don't know how to proceed with the right side. </p> <p>$\mathbf{(2)}$ Find $y^{\prime}$ of $y=(t+4)(t+6)(t+7).$</p> <p>For this one I have no idea what to do, so I don't have any work to show. My text says to use logarithmic differentiation, but I still don't know how to solve this. </p> <p>Thank you. </p>
Michael Albanese
39,599
<p>In the first problem, you got down to $\ln y = \ln(8)^{\sqrt{x}}$ which is correct, but you made a mistake in the next line. On the left you differentiated, but on the right you rewrote $\ln(8)^{\sqrt{x}}$ as $\sqrt{x}\ln(8)$. While it is true that $\ln(8)^{\sqrt{x}} = \sqrt{x}\ln(8)$, the equation you wrote is false as you did not differentiate both the left and right hand sides of the equation. I recommend first simplifying the right hand side so that you obtain $\ln y = \sqrt{x}\ln(8)$ and now proceed to differentiate.</p> <p>The process for all logarithmic differentiation problems is the same: </p> <ul> <li>take logarithms of both sides, </li> <li>simplify using the properties of the logarithm ($\ln(AB) = \ln(A) + \ln(B)$, etc.), </li> <li>differentiate both sides (making sure to use implicit differentiation where necessary), </li> <li>rearrange for $y'$,</li> <li>replace $y$ by the corresponding expression in terms of $x$. </li> </ul> <p>These steps should allow you to complete the first problem, as well as begin the second. Feel free to ask for help if you get stuck.</p>
3,746,986
<p>Let <span class="math-container">$\phi:\mathbb (0,\infty) \to [0,\infty)$</span> be a continuous function, and let <span class="math-container">$c \in (0,\infty)$</span> be fixed.</p> <p>Suppose that &quot;<span class="math-container">$\phi$</span> is convex at <span class="math-container">$c$</span>&quot;. i.e. for any <span class="math-container">$x_1,x_2&gt;0, \alpha \in [0,1]$</span> satisfying <span class="math-container">$\alpha x_1 + (1- \alpha)x_2 =c$</span>, we have <span class="math-container">$$ \phi(c)=\phi\left(\alpha x_1 + (1- \alpha)x_2 \right) \leq \alpha \phi(x_1) + (1-\alpha)\phi(x_2) . $$</span></p> <p>Assume also that <span class="math-container">$\phi$</span> is strictly decreasing in a neighbourhood of <span class="math-container">$c$</span>.</p> <blockquote> <p>Do the one-sided derivatives <span class="math-container">$\phi'_{-}(c),\phi'_{+}(c)$</span> necessarily exist?</p> </blockquote> <p><strong>Edit:</strong></p> <p>As pointed by Aryaman Maithani if <span class="math-container">$c$</span> is a global minimum of <span class="math-container">$\phi$</span>, then clearly <span class="math-container">$\phi$</span> is convex at <span class="math-container">$c$</span>, but there should be no reason to expect for existence of one-sided derivatives. (e.g. <span class="math-container">$\phi(x)=\sqrt{|x|}, c=0$</span>).</p> <p><strong>Edit 2:</strong></p> <p>In the example described <a href="https://math.stackexchange.com/a/3747376/104576">here</a>, the left derivative does not exist. Can we create an example where the right derivative does not exist?</p>
Asaf Shachar
104,576
<p>This answer is merely an attempt to fill in the details in the example described <a href="https://math.stackexchange.com/a/3747376/104576">here</a>. Convexity of <span class="math-container">$\phi$</span> at <span class="math-container">$0$</span> means that</p> <p><span class="math-container">$$ 0=\phi(0) \leq \alpha \phi(x) + (1-\alpha)\phi(y), \tag{1} $$</span> for every <span class="math-container">$-1&lt; x \le 0 \le y \le 1$</span> satisfying <span class="math-container">$$ \alpha x + (1- \alpha)y =0. \tag{2} $$</span> In particular, for every <span class="math-container">$-1&lt;x \le 0 \le y \le 1$</span>, we should have <span class="math-container">$$ 0 \le \alpha \sqrt{1 - (1+x)^2} + (1-\alpha)(-y)=\alpha\big( \sqrt{1 - (1+x)^2} +x\big). $$</span> This is equivalent to <span class="math-container">$$ x^2+x=x(x+1) \le 0, $$</span> which holds since <span class="math-container">$-1&lt;x\le 0$</span>.</p> <p>Now, suppose that <span class="math-container">$-1&lt; x \le 0 \le 1 \le y $</span>. The inequality <span class="math-container">$(1)$</span> holds if and only if <span class="math-container">$$ 0\leq \alpha \sqrt{1 - (1+x)^2} + (\alpha-1). $$</span></p> <p>We also have <span class="math-container">$-\alpha x=(1-\alpha)y\ge (1-\alpha) \Rightarrow (\alpha-1) \ge \alpha x$</span>, so <span class="math-container">$$ \alpha \sqrt{1 - (1+x)^2} + (\alpha-1) \ge \alpha \big(\sqrt{1 - (1+x)^2} + x\big) \ge 0 $$</span> holds as before for <span class="math-container">$-1&lt; x \le 0$</span>.</p>
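The first case admits a quick randomized check (a sketch added here; the parametrization $\alpha = y/(y-x)$ simply solves the constraint $\alpha x + (1-\alpha)y = 0$):

```python
import math, random

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-0.999, 0.0)       # -1 < x <= 0
    y = random.uniform(0.0, 1.0)          #  0 <= y <= 1
    if y == x:                            # only possible when both are 0
        continue
    a = y / (y - x)                       # then a*x + (1 - a)*y = 0
    assert 0.0 <= a <= 1.0
    lhs = a * math.sqrt(1.0 - (1.0 + x)**2) + (1.0 - a) * (-y)
    assert lhs >= -1e-12, (x, y, a, lhs)

print("0 <= alpha*sqrt(1-(1+x)^2) + (1-alpha)(-y) held on all samples")
```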
4,218,513
<p>This is an old Schwarz lemma problem from the August 2020 UMD qualifying exam for analysis, which is posted <a href="https://www-math.umd.edu/images/pdfs/quals/Analysis/Analysis-August-2020.pdf" rel="nofollow noreferrer">here</a>. The precise wording from the test is:</p> <p>Suppose <span class="math-container">$f(z)$</span> is a holomorphic function on the unit disk with <span class="math-container">$|f(z)| \le 3$</span> for all <span class="math-container">$|z| &lt; 1$</span>, and <span class="math-container">$f(1/2) = 2$</span>. Show that <span class="math-container">$f(z)$</span> has no zeros in the disk <span class="math-container">$|z| &lt; 1/8$</span>. (Hint: first show <span class="math-container">$f(0) \neq 0$</span>).</p> <p>There is another kind of standard problem that assumes <span class="math-container">$|f(0)| \ge r$</span> and asks to prove <span class="math-container">$|f(z)| \ge \frac{r - |z|}{1 - r|z|}$</span> on the disk <span class="math-container">$|z| &lt; r$</span>; see problem 5 from chapter IX.1 of Gamelin. This problem instead gives you a value for <span class="math-container">$f(a)$</span>, <span class="math-container">$a \neq 0$</span>.</p>
leoli1
649,658
<p>Another, perhaps more direct approach, but which also doesn't really use the hint: Let <span class="math-container">$|a|&lt;\frac{1}{8}$</span>. Consider <span class="math-container">$g_a = f\circ \phi_a$</span> where <span class="math-container">$\phi_a(z)=\frac{z-a}{1-\overline{a}z}$</span> as in your answer. If <span class="math-container">$f(-a)=0$</span>, i.e. <span class="math-container">$g_a(0)=0$</span>, the Schwarz lemma would imply that <span class="math-container">$|g_a(z_a)|=|f(\frac{1}{2})|\leq 3|z_a|$</span> where <span class="math-container">$z_a = \phi_a^{-1}(\frac{1}{2})=\frac{1+2a}{2+\overline{a}}$</span>. But we have <span class="math-container">$f(\frac{1}{2})=2$</span> and <span class="math-container">$3|z_a|=3\frac{|1+2a|}{|2+\overline{a}|}&lt;3\cdot\frac{1+\frac{2}{8}}{2-\frac{1}{8}}=2$</span>, which would contradict the above inequality.</p>
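The key estimate $3|z_a| < 2$ for $|a| < \tfrac18$ is easy to confirm numerically (an added sketch, not part of the original answer):

```python
import cmath, random

def three_z_a(a):
    # 3*|z_a| with z_a = (1 + 2a) / (2 + conj(a))
    return 3.0 * abs((1.0 + 2.0*a) / (2.0 + a.conjugate()))

random.seed(1)
worst = 0.0
for _ in range(100_000):
    r = random.uniform(0.0, 0.125)
    theta = random.uniform(0.0, 2.0*cmath.pi)
    a = r * cmath.exp(1j*theta)
    worst = max(worst, three_z_a(a))

print(worst)  # stays strictly below 2, so |f(1/2)| = 2 gives the contradiction
```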
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
Ben
144,928
<p>Let <span class="math-container">$x_n$</span> be the nth positive solution of <span class="math-container">$\csc(x)=x$</span>, i.e. <span class="math-container">$x_1\approx 1.1141$</span>, <span class="math-container">$x_2\approx 2.7726$</span>, etc. Then,</p> <p><span class="math-container">$$\sum_{n=1}^{\infty}\frac{1}{x_n^2}=1$$</span></p> <hr> <p>Edit: Even more surprisingly, if we define <span class="math-container">$s(k)=\sum x_n^{-k}$</span>, then we have the generating function</p> <p><span class="math-container">\begin{align*} \sum_{k=1}^{\infty}s(2k)x^{2k} &amp;=\frac{x}{2}\left(\frac{1+x\cot(x)}{\csc(x)-x}\right) \\ &amp;=x^2+\frac{2x^4}{3}+\frac{21x^6}{40}+\frac{59x^8}{140}+\frac{24625x^{10}}{72576}+\cdots \end{align*}</span></p> <p>Unfortunately it seems that, as with the Riemann zeta function, the values of <span class="math-container">$s$</span> at odd integers are out of reach.</p>
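Here is a numerical check of the first identity (an addition, not from the original answer; it brackets the two roots of $x\sin x = 1$ in each interval $(2k\pi,(2k+1)\pi)$ where $\sin$ is positive):

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: x*np.sin(x) - 1.0        # csc(x) = x  <=>  x sin(x) = 1

roots = [brentq(f, 0.5, np.pi/2), brentq(f, np.pi/2, np.pi - 1e-9)]
for k in range(1, 400):
    a = 2*k*np.pi                      # sin > 0 on (a, a + pi): two roots there
    roots.append(brentq(f, a + 1e-9, a + np.pi/2))
    roots.append(brentq(f, a + np.pi/2, a + np.pi - 1e-9))

print(roots[0], roots[1])              # ≈ 1.1141 and 2.7726, as quoted above
partial = sum(1.0/x**2 for x in roots)
print(partial)                         # partial sums increase towards 1
```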
35,579
<p>Consider the kernel of the homomorphism from the product of two copies of the free group, $F_2 \times F_2$, onto the integers sending every generator to 1. How to see that this subgroup is not finitely presented?</p>
IJL
556,441
<p>Euler characteristics of classifying spaces give a way to see this. The Euler characteristic of the free group $F_2$ is -1, because the figure of 8 has two 1-cells and one 0-cell. Euler characteristics multiply for direct products, so the Euler characteristic of $F_2 \times F_2$ is 1. The Euler characteristic of the infinite cyclic group is 0 (= the Euler characteristic of the circle). </p> <p>In general, if $N$ is a normal subgroup of $G$ with quotient $Q=G/N$, there is a product formula: the Euler characteristic of $G$ is the product of those of $N$ and $Q$, whenever all three Euler characteristics are defined. </p> <p>Going back to our example, since there is no solution to $1=0\cdot x$, we see that the kernel cannot have an Euler characteristic. We know that the kernel is finitely generated, and we know that it has a 2-dimensional classifying space, so the only way it can fail to have an Euler characteristic is if the classifying space needs infinitely many 2-cells, or equivalently if the kernel is not finitely presented. </p>
2,122,955
<p><em>My question is from T.Y.Lam- A First Course in Noncommutative Rings.</em></p> <blockquote> <p>For a ring $R$, if all left $R$-modules are semisimple, then all short exact sequences of left $R$-modules split.</p> </blockquote> <p>A module $M$ is a semisimple module if every submodule $N$ of $M$ is a direct summand, that is there exists a submodule $N'$ such that $M=N \oplus N'$.</p> <p>In order to prove the proposition above, let us consider a short exact sequence $0 \rightarrow A \hookrightarrow B \rightarrow C \rightarrow0$, where $A,B$ and $C$ are semisimple left $R$-modules. What should my strategy be in order to show $B\cong A \oplus C$?</p>
Mustafa
400,050
<blockquote> <p><strong>Theorem:</strong> Let <span class="math-container">$R$</span> be a ring and let <span class="math-container">$M,M_1,M_2$</span> be <span class="math-container">$R$</span>-modules. If <span class="math-container">$f:M_1\to M$</span>, <span class="math-container">$g:M\to M_2$</span> are <span class="math-container">$R$</span>-homomorphisms and <span class="math-container">$0\to M_1\to M \to M_2\to 0 $</span> is an exact sequence, then the following are equivalent:</p> <p><span class="math-container">$1)$</span> there exists a homomorphism <span class="math-container">$\alpha:M\to M_1$</span> where <span class="math-container">$\alpha \circ f=i_{M_1}$</span></p> <p><span class="math-container">$2)$</span> there exists a homomorphism <span class="math-container">$\beta:M_2\to M$</span> where <span class="math-container">$g \circ \beta=i_{M_2}$</span></p> <p><span class="math-container">$3)$</span> the exact sequence splits and <span class="math-container">$M\cong Im(f) \oplus ker(\alpha) \cong Im(\beta) \oplus ker(g) \cong M_1 \oplus M_2 $</span></p> </blockquote>
818,850
<p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
Phil H
11,736
<p>The first thing to note is that a decimal representation is just a bunch of fractions; how many whole tenths, how many whole hundredths etc.</p> <p>Now an irrational number is one that is never a whole number of these fractions, however small you make the fractions. </p> <p>As someone pointed out, you can demonstrate numbers with recurring decimal digits like 1/3; it is no whole 1s, 3 whole tenths, 3 whole hundredths, etc. Because this pattern repeats, the student should get the idea that it's never going to match the fractions based on 10s, and thus recurs without stopping (which is a more graspable phrase than 'infinitely').</p> <p>Thus if the problem is that the decimal representation goes on without stopping, you have an example without going as far as irrational numbers.</p> <p>By contrast, of course, an irrational number is one which goes on indefinitely regardless of the base of the number, and that's rather harder to explain. </p>
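The 'how many whole tenths, hundredths, ...' description is literally the long-division algorithm; a tiny illustration (an addition, not part of the original answer):

```python
# Long division of 1 by 3: each step asks how many whole tenths,
# hundredths, thousandths, ... still fit.
numerator, denominator = 1, 3
digits, remainder = [], numerator
for _ in range(12):
    remainder *= 10
    digits.append(remainder // denominator)
    remainder %= denominator

print(digits)  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]: the 3 recurs without stopping
```

The remainder is 1 after every step, which is exactly why the digit pattern repeats forever.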
818,850
<p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
Wilf
123,393
<p>As usual with visualizing things, a graph does help - here is a graph of y=$\sqrt x$ <br><em>(y=$x^\frac {1}{2}$ if you prefer )</em>:</p> <p><img src="https://i.stack.imgur.com/92fAh.png" alt="a graph of y=x^(1/2) should be here...."></p> <p>As $x$ gets bigger, $\sqrt x$ gets bigger, but by smaller and smaller amounts - example values:</p> <p>$\sqrt 1=1$, $\sqrt 2=1.414$~, $\sqrt 3=1.732$~, $\sqrt 4=2$, $\sqrt 5=2.236$~</p> <p>Also, of course nothing appears below $y=0$ or to the left of $x=0$, as it is impossible to get the square root of a negative number.</p>
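The shrinking increments are easy to tabulate (a small addition to the answer above):

```python
import math

values = [math.sqrt(n) for n in range(1, 7)]
print([round(v, 3) for v in values])   # [1.0, 1.414, 1.732, 2.0, 2.236, 2.449]

# Each successive gap is smaller than the one before it
gaps = [b - a for a, b in zip(values, values[1:])]
print([round(g, 3) for g in gaps])
assert all(later < earlier for earlier, later in zip(gaps, gaps[1:]))
```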
174,676
<p>When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following:</p> <p>"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives: $$-4x \equiv 20 \pmod{13}$$</p> <p>so that $$x \equiv -5 \equiv 8 \pmod{13}.$$</p> <p>Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)</p> <p>I would also be interested to hear if anyone else does anything similar.</p>
CopyPasteIt
432,081
<p>When presented with</p> <p><span class="math-container">$\tag 1 ax \equiv b \pmod{n}$</span></p> <p>if <span class="math-container">$a \mid b$</span> the solution is right in front of you.</p> <p>But there is also a 'plug in' solution if <span class="math-container">$a \mid n-1$</span> or <span class="math-container">$a \mid n+1$</span>:</p> <p>If <span class="math-container">$a \mid n-1$</span> then <span class="math-container">$x = \Large(\frac{n-1}{a}) \normalsize (-b)$</span> solves <span class="math-container">$\text{(1)}$</span>.</p> <p>If <span class="math-container">$a \mid n+1$</span> then <span class="math-container">$x = \Large(\frac{n+1}{a}) \normalsize (b)$</span> solves <span class="math-container">$\text{(1)}$</span>.</p> <p>Can we 'make hay' with the OP's linear congruence?</p> <p><span class="math-container">$\quad 9x \equiv 7 \pmod{13} \; \text{ iff } \; -4x \equiv 7 \pmod{13} \; \text{ iff }$</span><br> <span class="math-container">$ \quad 4x \equiv -7 \pmod{13} \; \text{ iff } \; 4x \equiv 6 \pmod{13}$</span></p> <p>We are in business now with <span class="math-container">$4x \equiv 6 \pmod{13}$</span> since <span class="math-container">$4 \mid 12$</span>; a solution is</p> <p><span class="math-container">$\quad x = \Large(\frac{n-1}{a}) \normalsize (-b) = (3)(-6) = -18 \equiv 8 \pmod{13}$</span></p> <hr /> <p>Here is an example where the <span class="math-container">$n + 1$</span> manipulation can be used:</p> <p><span class="math-container">$\quad 5x \equiv 1 \pmod{17} \; \text{ iff } \; -12x \equiv 1 \pmod{17} \; \text{ iff }$</span><br> <span class="math-container">$ \quad 12x \equiv -1 \pmod{17} \; \text{ iff } \; 12x \equiv 16 \pmod{17} \; \text{ iff } \; 6x \equiv 8 \pmod{17}$</span></p> <p>We are in business now with <span class="math-container">$6x \equiv 8 \pmod{17}$</span> since <span class="math-container">$6 \mid 18$</span>; a solution is</p> <p><span class="math-container">$\quad x = \Large(\frac{n+1}{a}) \normalsize (b) = 
(3)(8) = 24 \equiv 7 \pmod{17}$</span></p>
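Both 'plug in' rules, and the two worked examples, can be verified mechanically (a brute-force sketch added here, not part of the original answer):

```python
def solves(a, x, b, n):
    return (a*x - b) % n == 0

# Rule 1: if a | n-1, then x = ((n-1)/a) * (-b) solves a*x ≡ b (mod n).
# Rule 2: if a | n+1, then x = ((n+1)/a) * b  solves a*x ≡ b (mod n).
for n in range(2, 60):
    for a in range(1, n):
        for b in range(n):
            if (n - 1) % a == 0:
                assert solves(a, (n - 1)//a * (-b), b, n)
            if (n + 1) % a == 0:
                assert solves(a, (n + 1)//a * b, b, n)

# The two examples worked above:
assert (9*8 - 7) % 13 == 0    # 9x ≡ 7 (mod 13) has solution x = 8
assert (5*7 - 1) % 17 == 0    # 5x ≡ 1 (mod 17) has solution x = 7
print("all checks passed")
```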
1,056,397
<p>I saw the following problem: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x$$ My first thought was to say that the $x$ term is overpowered when $x$ becomes large enough, so the square root becomes just $\sqrt{9x^2} = 3x$, and the value of the limit is zero.</p> <p>However, the solution given is $1\over6$, and starts with: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x=\lim_{x\to \infty} {(\sqrt{9x^2+x}-3x)(\sqrt{9x^2+x}+3x)\over \sqrt{9x^2+x}+3x}$$</p> <p>I'm assuming the continuation is:</p> <p>$$=\lim_{x\to \infty} {x\over \sqrt{9x^2+x}+3x} = \lim_{x\to \infty} {x\over \sqrt{9x^2}+3x} = \lim_{x\to \infty} {x \over 6x} = {1 \over 6}$$</p> <p>The question is, why is it okay to ignore the $x$ term in $\sqrt{9x^2+x}+3x$ but not in $\sqrt{9x^2+x}-3x$?</p>
Mark Bennet
2,906
<p>You should note that $9x^2+x = (3x+\frac 16)^2-\frac 1{36}$ so that $\sqrt {9x^2+x}\approx 3x+\frac 16$ for large $x$</p> <p>When you subtract $3x$ in the numerator, the dominant term in $x$ vanishes and the highest order term is $\frac 16$.</p> <p>In the denominator, you add $3x$ and the highest order term is $6x$.</p> <p>The use of the conjugate in this way - where the highest order term in the numerator will cancel - is quite common, and is useful in other circumstances too. Here it avoids having to complete the square or justify the approximation.</p>
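sympy confirms both the completed square and the value of the limit (an addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The completed square used above: 9x^2 + x = (3x + 1/6)^2 - 1/36
assert sp.expand((3*x + sp.Rational(1, 6))**2 - sp.Rational(1, 36)) == 9*x**2 + x

expr = sp.sqrt(9*x**2 + x) - 3*x
print(sp.limit(expr, x, sp.oo))   # 1/6
```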
248,167
<p>I want to know why, when I look at the Julia sets of the quadratic family, I see only a finite number of repeating patterns, rather than a countable infinity of them.</p> <p>My question is specifically about the interaction of these three theorems:</p> <p><strong>Theorem 1</strong>: Let $z_0\in\mathbb{C}$ be a repelling periodic point of the function $f_c:z\mapsto z^2+c$. Tan Lei proved in the 90s that the filled in Julia set $K_c$ is asymptotically $\lambda$-self-similar about $z_0$, where $\lambda$ denotes the multiplier of the orbit.</p> <p><strong>Theorem 2</strong>: (Iterated preimages are dense) Let $z\in J_c$; then the set of iterated preimages $\cup_{n\in\mathbb{N}} ~ f^{-n}(z)$ is dense in $J_c$.</p> <p><strong>Theorem 3</strong>: $J_c$ is the closure of repelling periodic points.</p> <p>Let's expand on Theorem 1:<br> Technically it means that the sets $(\lambda^n \tau_{-z_0} K_c)\cap\mathbb{D}_r$ approach (in the Hausdorff metric of compact subsets of $\mathbb{C}$) a set $X \cap \mathbb{D_r}$ where the limit model $X \subset \mathbb{C}$ is $\lambda$-self-similar: $X = \lambda X$.<br> Practically this means that, when one zooms into a computer generated $K_c$ about $z_0$, the image becomes, to all practical purposes, self-similar. No new information is gained by zooming again about $z_0$.</p> <p>Lei also proved that $K_c$ is asymptotically $\lambda$-self-similar about the preimages of $z_0$, with the same limit model $X$, up to rotation and rescaling. This means that zooming in at each point in the repelling cycle of $z_0$ provides basically the same spectacle, except perhaps rotated, as zooming into $z_0$ does. Moreover, the preimages of $z_0$ are dense in $J_{c}$ (Theorem 2), meaning that this $X$ pattern can be seen throughout the Julia set.</p> <p>Now, let us consider a different repelling periodic point $z_1$.
Lei tells us that $K_c$ will be asymptotically self-similar about $z_1$ and all <em>its</em> pre-images, with an <em>a priori different</em> limit set $Y$. Since the pre-images of $z_1$ are also dense in $J_c$, we may observe the limit model $Y$ all over $J_c$.</p> <p>So, <strong><em>a priori</em></strong> to each repelling periodic orbit, there should be an associated limit model, and each of these limit models could be distinct. <em>However</em>, when I look at a computer generated Julia set, the parts of it that are asymptotically self-similar seem to approach one of a <strong><em>finite</em></strong> set of limit models (up to rotation).</p> <p>Why is it so? Maybe my eye cannot see the difference? Or the computer cannot generate all of the detail?</p> <p>Or is it the case that the limit models are finite?</p> <p><a href="https://i.stack.imgur.com/JKaA9.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JKaA9.jpg" alt="Simple Julia zoom"></a> In this image (read like a comic strip), I zoom into the neighbourhood of a point, four times, then purposely "miss the center", and zoom onto a detail for four more times. The patterns that emerge are very similar. Are they the same?<br> This is perhaps one of the simplest Julia sets, but the experience is the same with more complicated ones.</p>
Per Alexandersson
1,056
<p>To extend my comment and emphasize the self-similarity of Julia sets and the Dragon curve, here is an interpolation between the two.</p> <p><a href="https://i.stack.imgur.com/7ANvL.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/7ANvL.gif" alt="enter image description here"></a></p> <p>Each frame is generated by two complex functions,</p> <pre><code>f1[z_, t_] := ((1.0 + I) z/2) t + (1 - t) (Sqrt[z + 0.9 I]); f2[z_, t_] := (1 - (1.0 - I) z/2) t + (1 - t) (-Sqrt[ z + 0.9 I]); </code></pre> <p>where $t$ goes from $0$ to $1$ in the animation frames. For $t=0$, we have the classical Julia fractal for $c=-0.9i$, and at $t=1$, we have the two generators for the Dragon curve.</p> <p>So what about the colors? Let $J$ be the attracting set. Then $f_1(J)$ is the black set, and $f_2(J)$ is the blue set, and $J=f_1(J) \cup f_2(J)$. This puts emphasis on the self-similar nature.</p> <p>So, note that in the Dragon curve case, since $f_1$ and $f_2$ are analytic and affine, they do not distort the picture at all, so you'll see <em>exact</em> copies at smaller levels. In the Julia case, we only have analytic maps, so there is some distortion caused by the square root, but the picture is more or less preserved (this is the nature of analytic maps).</p>
248,167
<p>I want to know why, when I look at the Julia sets of the quadratic family, I see only a finite number of repeating patterns, rather than a countable infinity of them.</p> <p>My question is specifically about the interaction of these three theorems:</p> <p><strong>Theorem 1</strong>: Let $z_0\in\mathbb{C}$ be a repelling periodic point of the function $f_c:z\mapsto z^2+c$. Tan Lei proved in the 90s that the filled in Julia set $K_c$ is asymptotically $\lambda$-self-similar about $z_0$, where $\lambda$ denotes the multiplier of the orbit.</p> <p><strong>Theorem 2</strong>: (Iterated preimages are dense) Let $z\in J_c$; then the set of iterated preimages $\cup_{n\in\mathbb{N}} ~ f^{-n}(z)$ is dense in $J_c$.</p> <p><strong>Theorem 3</strong>: $J_c$ is the closure of repelling periodic points.</p> <p>Let's expand on Theorem 1:<br> Technically it means that the sets $(\lambda^n \tau_{-z_0} K_c)\cap\mathbb{D}_r$ approach (in the Hausdorff metric of compact subsets of $\mathbb{C}$) a set $X \cap \mathbb{D_r}$ where the limit model $X \subset \mathbb{C}$ is $\lambda$-self-similar: $X = \lambda X$.<br> Practically this means that, when one zooms into a computer generated $K_c$ about $z_0$, the image becomes, to all practical purposes, self-similar. No new information is gained by zooming again about $z_0$.</p> <p>Lei also proved that $K_c$ is asymptotically $\lambda$-self-similar about the preimages of $z_0$, with the same limit model $X$, up to rotation and rescaling. This means that zooming in at each point in the repelling cycle of $z_0$ provides basically the same spectacle, except perhaps rotated, as zooming into $z_0$ does. Moreover, the preimages of $z_0$ are dense in $J_{c}$ (Theorem 2), meaning that this $X$ pattern can be seen throughout the Julia set.</p> <p>Now, let us consider a different repelling periodic point $z_1$.
Lei tells us that $K_c$ will be asymptotically self-similar about $z_1$ and all <em>its</em> pre-images, with an <em>a priori different</em> limit set $Y$. Since the pre-images of $z_1$ are also dense in $J_c$, we may observe the limit model $Y$ all over $J_c$.</p> <p>So, <strong><em>a priori</em></strong> to each repelling periodic orbit, there should be an associated limit model, and each of these limit models could be distinct. <em>However</em>, when I look at a computer generated Julia set, the parts of it that are asymptotically self-similar seem to approach one of a <strong><em>finite</em></strong> set of limit models (up to rotation).</p> <p>Why is it so? Maybe my eye cannot see the difference? Or the computer cannot generate all of the detail?</p> <p>Or is it the case that the limit models are finite?</p> <p><a href="https://i.stack.imgur.com/JKaA9.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JKaA9.jpg" alt="Simple Julia zoom"></a> In this image (read like a comic strip), I zoom into the neighbourhood of a point, four times, then purposely "miss the center", and zoom onto a detail for four more times. The patterns that emerge are very similar. Are they the same?<br> This is perhaps one of the simplest Julia sets, but the experience is the same with more complicated ones.</p>
Lasse Rempe
3,651
<p>Given that I have started making pictures, I thought it might be worthwhile adding another, shorter, direct answer to your questions, in addition to my longer, more detailed one.</p> <p><strong>Question 1.</strong> Are the limits in your pictures the same (up to a linear map)?</p> <p><strong>Answer.</strong> Yes. The only points in your "double basilica" picture at which two bounded Fatou components (interior regions of the filled Julia set) meet are preimages of the same periodic point. (This is the $\alpha$-fixed point of the first renormalisation.) Hence the Julia set near the two points is related by a conformal map, and the two scaling limits are the same, up to a linear transformation. </p> <p><strong>Question 2.</strong> Are there only finitely many scaling limits? <strong>Answer.</strong> No. But you must focus in on different periodic points to observe them. In other words, <strong>first</strong> fix your periodic point, then zoom in. </p> <p>You did not give the precise parameter for your example, but here are scaling limits for the parameter $c=-1.3$. Full Julia set:</p> <p><a href="https://i.stack.imgur.com/Aydwhm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Aydwhm.png" alt="Double basilica Julia set"></a></p> <p>Scaling limits at the three real periodic points $x_1=-0.744989959798873$, $x_2=0.241619848709566$ and $x_3=1.744989959798873$ ($\alpha$ fixed point, $\alpha$ fixed point of the first renormalisation, and $\beta$ fixed point, respectively):</p> <p><img src="https://i.stack.imgur.com/NqTVqm.png" width="200"/> <img src="https://i.stack.imgur.com/Vv4Rwm.png" width="200"/> <img src="https://i.stack.imgur.com/8s6xmm.png" width="200"/></p> <p>Finally, the scaling limit near the period 3 point $1.131900530695346 + 0.227896812185643i$. I give three successive zooms (each finer by a factor of 10), to emphasise the spiralling structure due to the non-real multiplier. 
The periodic point is at the centre of each picture.</p> <p><img src="https://i.stack.imgur.com/0Drnl.png" width="200"/> <img src="https://i.stack.imgur.com/Uxo6x.png" width="200"/> <img src="https://i.stack.imgur.com/BRxfO.png" width="200"/></p> <p>You can clearly see that the scaling limits are different. You can pick more periodic points and obtain more scaling limits.</p>
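<p>One can check the periods of these points numerically, assuming the standard parametrisation $f_c(z)=z^2+c$ with $c=-1.3$ (this convention is consistent with the listed value of $x_1$; the $\beta$ fixed point is computed here directly as the larger root of $z^2-z+c=0$):</p>

```python
import math

# assumes the standard quadratic family f_c(z) = z^2 + c with c = -1.3
c = -1.3

def f(z):
    return z * z + c

def orbit(z, n):
    """Apply f_c to z a total of n times."""
    for _ in range(n):
        z = f(z)
    return z

x1 = -0.744989959798873                       # alpha fixed point: f(x1) = x1
x2 = 0.241619848709566                        # alpha point of the renormalisation: period 2 under f
beta = (1 + math.sqrt(1 - 4 * c)) / 2         # beta fixed point, roughly 1.7449899597988732
z3 = 1.131900530695346 + 0.227896812185643j   # the period-3 point zoomed into above
```

<p>The check confirms that $x_1$ and $\beta$ are fixed, that $x_2$ has period $2$, and that the complex point has period $3$.</p>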
190,497
<p>Q: Is there a reference for a detailed proof of <a href="http://en.wikipedia.org/wiki/Explicit_formula" rel="nofollow">Riemann's explicit formula</a>?</p> <p>I am reading the <a href="http://www.claymath.org/millennium/Riemann_Hypothesis/1859_manuscript/EZeta.pdf" rel="nofollow">TeXified version</a> of Riemann's manuscript, but sometimes I can't follow (I think the author has kept typos from the <a href="http://www.claymath.org/millennium/Riemann_Hypothesis/1859_manuscript/riemann1859.pdf" rel="nofollow">original paper</a>).</p> <p>Here are some points I have trouble with (but there are others):</p> <ul> <li><p>How does he calculate $$\int_{a+i\mathbb{R}} \frac{d}{ds} \left( \frac{1}{s} \log(1-s/\beta)\right) x^s ds$$ on page 4?</p></li> <li><p>What do I need to know about <a href="http://en.wikipedia.org/wiki/Logarithmic_integral_function" rel="nofollow">$Li(x)$</a> to see how the terms $Li(x^{1/2+i\alpha})$ appear?</p></li> </ul>
Raymond Manzoni
21,783
<p>I would recommend reading Edwards' excellent <a href="http://books.google.com/books?id=ruVmGFPwNhQC&amp;pg=PA1" rel="nofollow">'Riemann's Zeta Function'</a> (it's <a href="http://rads.stackoverflow.com/amzn/click/0486417409" rel="nofollow">cheap</a>!) since it details every part of Riemann's article (with more rigorous proofs, of course!). </p> <p>Concerning your second question see <a href="http://books.google.com/books?id=ruVmGFPwNhQC&amp;pg=PA27" rel="nofollow">page 27</a> of Edwards' book.<br> About the $\mathrm{Li}$ function you probably won't need more than the information on Wikipedia.<br> What is really required to understand Riemann's proof is a good knowledge of complex integration (the venerable <a href="http://rads.stackoverflow.com/amzn/click/1603864547" rel="nofollow">Whittaker and Watson</a> may be useful for this).</p>
2,026,406
<p>As written in the title, does there exist a $\mathbb C$-Banach space isometric to a Hilbert space but whose norm is not induced by an inner product?</p> <p>Since an inner product on a Hilbert space has to satisfy the parallelogram identity, how could it be that such a $\mathbb C$-Banach space exists?</p>
marco2013
79,890
<p>Let $f:E \rightarrow F$ be a (linear) isometry from $E$, a Banach space, to $F$, a Hilbert space with inner product $(x,y) \mapsto L(x,y)$. Then $\forall x \in E, \|f(x)\|_F=\|x\|_E$.</p> <p>$\forall x,y \in F, \Re L(x,y)=\frac{1}{2}(\|x+y\|_F^2-\|x\|_F^2-\|y\|_F^2)$</p> <p>because $\|x+y\|_F^2=L(x+y,x+y)=L(x,x)+L(x,y)+L(y,x)+L(y,y)$ and because $L(y,x)= \overline{L(x,y)}$.</p> <p>And $\forall x,y \in F , \Im L(x,y)=\frac{1}{2}(\|x-iy\|_F^2-\|x\|_F^2-\|y\|_F^2)$</p> <p>So $L'(x,y)=L(f(x),f(y))=\frac{1}{2}(\|f(x)+f(y)\|_F^2-\|f(x)\|_F^2-\|f(y)\|_F^2)+\frac{i}{2}(\|f(x)-if(y)\|_F^2-\|f(x)\|_F^2-\|f(y)\|_F^2)$</p> <p>is an inner product on $E$.</p> <p>And $\forall x \in E, L'(x,x)=\|f(x)\|_F^2=\|x\|_E^2$.</p> <p>So $\|.\|_E$ is induced by an inner product.</p>
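<p>The two polarization identities used above are easy to sanity-check numerically. Here is a small sketch, using the convention that the inner product is conjugate-linear in the <em>first</em> argument (the convention that produces exactly these sign choices):</p>

```python
import random

def ip(x, y):
    # inner product on C^n, conjugate-linear in the first argument
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

def nrm2(x):
    return ip(x, x).real  # ||x||^2

random.seed(0)
x = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
y = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]

# Re L(x,y) = (||x+y||^2 - ||x||^2 - ||y||^2) / 2
re_part = 0.5 * (nrm2([a + b for a, b in zip(x, y)]) - nrm2(x) - nrm2(y))
# Im L(x,y) = (||x-iy||^2 - ||x||^2 - ||y||^2) / 2
im_part = 0.5 * (nrm2([a - 1j * b for a, b in zip(x, y)]) - nrm2(x) - nrm2(y))
```
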
76,747
<pre><code>Plot[D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>gives out several lines of error messages and an empty plot.</p> <pre><code>Plot[Derivative[1][Abs[#1] &amp; ][x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>just gives out an empty plot.</p> <p>How do I plot $\left(\left|x\right|\right)^\prime$?</p>
Michael E2
4,999
<p>To extend the other answers, if you're going to be using the derivative of <code>Abs</code> often in your computations and do not need the complex absolute value, then you can define the <a href="http://reference.wolfram.com/mathematica/ref/Derivative.html" rel="nofollow noreferrer"><code>Derivative</code></a> of <code>Abs</code> once and for all, using whichever formula for the derivative of <code>Abs</code> you find convenient.</p> <pre><code>Derivative[1][Abs][x_] = Piecewise[{{1, x &gt; 0}, {-1, x &lt; 0}}, Indeterminate]; Plot[Evaluate@D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p><img src="https://i.stack.imgur.com/o6tAp.png" alt="Mathematica graphics"></p> <p>Note that one must use <code>Evaluate</code> on the derivative. (see, for instance, <a href="https://mathematica.stackexchange.com/questions/1301/generalivar-is-not-a-valid-variable-when-plotting-what-actually-causes-this">General::ivar is not a valid variable when plotting - what actually causes this and how to avoid it?</a>).</p> <p>To <a href="http://reference.wolfram.com/mathematica/ref/Unset.html" rel="nofollow noreferrer"><code>Unset</code></a> the definition, do the following:</p> <pre><code>Derivative[1][Abs][x_] =. </code></pre> <p>You can also localize the definition of the derivative to a block of code as follows:</p> <pre><code>Internal`InheritedBlock[{Derivative}, Derivative[1][Abs][x_] = Piecewise[{{1, x &gt; 0}, {-1, x &lt; 0}}, Indeterminate]; ... &lt;code&gt; ... ] </code></pre> <p>The definition of <code>Derivative</code> is automatically reset in this case.</p>
76,747
<pre><code>Plot[D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>gives out several lines of error messages and an empty plot.</p> <pre><code>Plot[Derivative[1][Abs[#1] &amp; ][x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>just gives out an empty plot.</p> <p>How do I plot $\left(\left|x\right|\right)^\prime$?</p>
m_goldberg
3,066
<p>You could rewrite it as</p> <pre><code>Plot[Evaluate @ ComplexExpand @ D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>But I would rewrite it as</p> <pre><code>dAbs[x_] = ComplexExpand @ D[Abs[x], x]; Plot[dAbs[x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>Both produce</p> <p><img src="https://i.stack.imgur.com/turwj.png" alt="plot"></p>
3,851,399
<p>I am attempting to prove the following statement</p> <blockquote> <p>Prove that if n is any integer then 4 either divides n^2 or n^2 − 1</p> </blockquote> <p>I have started with the case of n = 2k</p> <pre><code>Consider the case n = 2k n = 2k n^2 = 4k^2 ⇒ n = 4k ∴ 4 divides n^2 as there is some integer k in which n = k </code></pre> <p>Would this be considered as a correct proof for the first case here? Is there any additions I should make?</p> <p>I also attempted case 2, where n = 2k+1, however I am less sure of the direction I have taken this and is incomplete, so some advice on this would also be appreciated.</p> <pre><code>Consider the case n = 2k+1 n = 2k+1 n^2 = (2k+1)^2 n^2 = 4k^2 + 1 n^2 - 1 = 4k^2 </code></pre>
Anonymous
772,237
<p>Here's another simple proof using parity. <span class="math-container">$[$</span>Actually the same idea as @Ares's, but without writing <span class="math-container">$n$</span> in the form <span class="math-container">$2k$</span> or <span class="math-container">$(2k + 1)$</span>.<span class="math-container">$]$</span></p> <p>Suppose <span class="math-container">$n$</span> is even. Then <span class="math-container">$2|n$</span>, which implies <span class="math-container">$4|n^2$</span>, and this case is done; so suppose from now on that <span class="math-container">$n$</span> is odd.</p> <p>If <span class="math-container">$n$</span> is odd, note that <span class="math-container">$(n^2 - 1) = (n + 1)(n - 1)$</span>. In this case both <span class="math-container">$(n + 1)$</span> and <span class="math-container">$(n - 1)$</span> are even, as <span class="math-container">$n$</span> is odd. So <span class="math-container">$2|(n+1)$</span> and <span class="math-container">$2|(n-1)$</span>, which implies <span class="math-container">$$(2\cdot 2)\mid(n+1)(n-1)$$</span> <span class="math-container">$$\Rightarrow 4|(n^2 - 1)$$</span></p> <p>In other words, <span class="math-container">$4|n^2$</span> if <span class="math-container">$n$</span> is even, and <span class="math-container">$4|(n^2 - 1)$</span> if <span class="math-container">$n$</span> is odd.</p>
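<p>For those who like to double-check such parity arguments by machine, a brute-force verification over a range of integers (an illustration only, of course not a proof):</p>

```python
def four_divides_square_or_pred(n: int) -> bool:
    """True iff 4 | n^2 or 4 | (n^2 - 1)."""
    return n * n % 4 == 0 or (n * n - 1) % 4 == 0

# exhaustive check on -1000..1000
ok = all(four_divides_square_or_pred(n) for n in range(-1000, 1001))
```
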
413,982
<p>How can I divide this using long division?</p> <p>$$\frac{ax^3-a^2x^2-bx^2+b^2}{ax-b}$$</p> <p><strong>Edit</strong></p> <p>Sorry guys I wrote it wrong... Fixed it now.</p>
Community
-1
<p>Hint: See if $(ax-b)$ is a factor of $(ax^3-a^2x^2-bx^2+b^2)$ by writing:</p> <p>$$(ax^3-a^2x^2-bx^2+b^2)=(ax-b)(\square x^2+\square x+\square)$$</p> <p>and see if you can fill in the $\square$s.</p>
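<p>One way to confirm that $(ax-b)$ really is a factor, without spoiling the long division, is the remainder theorem: the cubic must vanish at $x = b/a$. A quick numerical spot-check (illustrative only):</p>

```python
import random

def value_at_root(a: float, b: float) -> float:
    # evaluate the cubic at x = b/a, the root of ax - b
    x = b / a
    return a * x**3 - a**2 * x**2 - b * x**2 + b**2

random.seed(1)
max_residual = max(abs(value_at_root(random.uniform(0.5, 3.0), random.uniform(-3.0, 3.0)))
                   for _ in range(100))   # keep a away from 0
```
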
1,789,414
<blockquote> <p><strong>Question :</strong> Prove that the equation $x + \cos(x) + e^{x} = 0$ has <em>exactly</em> one root</p> </blockquote> <hr> <p><em>This is what I thought of doing:</em></p> <p>$$\text{Let} \ \ \ f(x) = x + \cos(x) + e^{x}$$ By using the Intermediate Value Theorem on the open interval $(-\infty, \infty)$, and then by showing that </p> <p>$$\left(\lim_{x \to -\infty}f(x) &lt; 0 &lt; \lim_{x \to +\infty}f(x)\right) \lor \left(\lim_{x \to +\infty}f(x) &lt; 0 &lt; \lim_{x \to -\infty}f(x)\right)$$</p> <p>I could show that $\exists\ x \in \mathbb{R} \ s.t.\ f(x) = 0$.</p> <p>However this method, although it does show the existence of an $x$ such that $f(x)=0$, it doesn't show that there is <b><em>only</em> one $x$</b> that satisfies the statement $f(x)=0$.</p> <hr> <p>The original question, suggest's using either <em>Rolle's Theorem</em> or the <em>Mean Value Theorem</em>, however we face the same problem with both theorems as both theorems prove the existence of <b><em>at least</em> a single $x$</b> (or any arbitrary number) satisfying their given statements, they don't prove the existence of <b><em>only one $x$</em></b> satisfying their statements.</p> <p>All three theorem's I've mentioned here :</p> <ol> <li><em>Intermediate Value Theorem</em></li> <li><em>Rolle's Theorem</em></li> <li><em>Mean Value Theorem</em></li> </ol> <p>Can all be used to show that the equation $x + cos(x) + e^{x} = 0$ has <b><em>at least</em></b> one root, but they can't be used to show $x + cos(x) + e^{x} = 0$ has <b><em>only</em></b> one root. (Or can they?)</p> <p>How can this problem be solved then, using either Rolle's Theorem or the Mean Value Theorem (or even the Intermediate Value Theorem)</p>
Michael Burr
86,421
<p>Hint: Consider the derivative: $$ 1-\sin(x)+e^x. $$ Since $1\geq \sin(x)$, we have $1-\sin(x)\geq0$, and since $e^x&gt;0$, the derivative is always strictly positive. If there were two roots, then there would be a place where the derivative is zero. (why?)</p>
1,789,414
<blockquote> <p><strong>Question :</strong> Prove that the equation $x + \cos(x) + e^{x} = 0$ has <em>exactly</em> one root</p> </blockquote> <hr> <p><em>This is what I thought of doing:</em></p> <p>$$\text{Let} \ \ \ f(x) = x + \cos(x) + e^{x}$$ By using the Intermediate Value Theorem on the open interval $(-\infty, \infty)$, and then by showing that </p> <p>$$\left(\lim_{x \to -\infty}f(x) &lt; 0 &lt; \lim_{x \to +\infty}f(x)\right) \lor \left(\lim_{x \to +\infty}f(x) &lt; 0 &lt; \lim_{x \to -\infty}f(x)\right)$$</p> <p>I could show that $\exists\ x \in \mathbb{R} \ s.t.\ f(x) = 0$.</p> <p>However this method, although it does show the existence of an $x$ such that $f(x)=0$, it doesn't show that there is <b><em>only</em> one $x$</b> that satisfies the statement $f(x)=0$.</p> <hr> <p>The original question, suggest's using either <em>Rolle's Theorem</em> or the <em>Mean Value Theorem</em>, however we face the same problem with both theorems as both theorems prove the existence of <b><em>at least</em> a single $x$</b> (or any arbitrary number) satisfying their given statements, they don't prove the existence of <b><em>only one $x$</em></b> satisfying their statements.</p> <p>All three theorem's I've mentioned here :</p> <ol> <li><em>Intermediate Value Theorem</em></li> <li><em>Rolle's Theorem</em></li> <li><em>Mean Value Theorem</em></li> </ol> <p>Can all be used to show that the equation $x + cos(x) + e^{x} = 0$ has <b><em>at least</em></b> one root, but they can't be used to show $x + cos(x) + e^{x} = 0$ has <b><em>only</em></b> one root. (Or can they?)</p> <p>How can this problem be solved then, using either Rolle's Theorem or the Mean Value Theorem (or even the Intermediate Value Theorem)</p>
Z Ahmed
671,540
<p>Let <span class="math-container">$f(x)=x+\cos x + e^x$</span>; then <span class="math-container">$f'(x)=1-\sin x+ e^x&gt;0$</span>, which implies that <span class="math-container">$f(x)$</span> is strictly increasing, so <span class="math-container">$f(x)=0$</span> has at most one real root. Since <span class="math-container">$f(-\infty)=-\infty$</span> and <span class="math-container">$f(\infty)=+\infty$</span>, by the IVT the equation has at least one real root. Hence the equation has exactly one real root.</p>
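<p>The argument in both answers is easy to illustrate numerically: bisection locates the unique root, and the derivative $1-\sin x+e^x$ stays positive on a sample grid (a sketch, not a proof):</p>

```python
import math

def f(x):
    return x + math.cos(x) + math.exp(x)

def fp(x):
    return 1 - math.sin(x) + math.exp(x)   # the derivative

# f(-2) < 0 < f(0), so bisection on [-2, 0] finds the root
lo, hi = -2.0, 0.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2

min_deriv = min(fp(-10 + 0.01 * k) for k in range(2001))   # grid on [-10, 10]
```
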
1,792,422
<p>I need help with an optimization problem. I have a rectangular space being fenced. Three sides are fenced with a material costing 4 dollars and the last side costs 16 dollars. I was given that the area is 250 and asked to minimize the cost.</p> <p>I got what I think is the correct formula for the cost where $$C(y)=\frac{5000}{y}+8y.$$ This is where I got stuck. In my class, we're allowed to use Desmos to get the optimized value but when I plug the number into my formula I end up with negative values for the dimensions. Any help would be nice. </p>
Martin Argerami
22,857
<p>$$ \frac1{2\sqrt c}=\frac{\sqrt 4-\sqrt 0}{4-0}=\frac24=\frac12. $$ Then $ \sqrt c=1 $, so $c=1$. </p>
618,763
<p>Let $\mathbb{F}$ be a field, $R = \mathbb{F}[X,X^{-1}]$ the ring of Laurent polynomials over $\mathbb{F}$</p> <blockquote> <blockquote> <p>a) Find the group of units in $R$.</p> <p>b) Find a Euclidean norm on $R$.</p> </blockquote> </blockquote> <p>So for a), I understand the units are $uX^{\pm i}$ where $ u \in \mathbb{F}, u\neq 0$.</p> <p>Can you give me a hint about b)?</p>
8k14
115,868
<p><strong>Hint</strong> Prove that any $f\in R$ can be represented as $f=x^{-n}g$ for $g\in F[x]$. Then rewrite the condition that $f$ is a unit as a condition for polynomials. </p> <p>As for b), as you know, $\deg$ is an Euclidean norm for $F[x]$. Try to find its analogue for $R$. For $f=x^{-n}g$ with $g\in F[x], g(0)\not=0$, what should it be equal to?</p>
1,039,326
<p>What is the fastest way to solve for $z^3 = -2 (1+i \sqrt 3) \bar z$?</p> <p>I know how to do this using complex algebra, but that takes a long time. Can someone show me a faster way?</p>
AlexR
86,940
<p>Multiply by $z$ to obtain $z^4 = -2 (1+i\sqrt 3)|z|^2$. Taking absolute values gives $|z|^4 = |{-2(1+i\sqrt 3)}|\,|z|^2$, so for $z\neq 0$ we see that $|z|^2 = \sqrt{4 + 4 \cdot 3} = \sqrt{16} = 4$; finally, looking at angles we have $$4 \arg z \equiv \arg (-1-i\sqrt 3) = -\frac{2\pi}3 \pmod{2\pi}$$ Combined we get $z = |z| e^{i\arg z} = 2 e^{i\left(-\frac{\pi}6 + k \frac\pi2\right)} = (\sqrt 3 - i) \cdot e^{\frac{ik\pi}2}$ where $k\in\{0,1,2,3\}$, so all in all $$z = \pm(\sqrt 3 - i) \quad\text{or}\quad z = \pm(1+ \sqrt3 i)$$ And of course the trivial solution $z=0$.</p> <hr> <p>Note that I used a few equalities: $\arg zw \equiv \arg z + \arg w \pmod{2\pi}$, $|z| = \sqrt{\Re^2 z + \Im^2 z} = \sqrt{z\bar z}$ and $\exp(i\frac\pi6) = \frac12(\sqrt3 + i).$</p>
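<p>A direct check with a few lines of Python confirms the solution set: the points $z=(\sqrt 3-i)\,i^k$ for $k=0,1,2,3$, that is $\pm(\sqrt 3-i)$ and $\pm(1+\sqrt 3 i)$, together with $z=0$, satisfy the original equation, while $\pm(1-\sqrt 3 i)$ do not:</p>

```python
import math

w = -2 * (1 + 1j * math.sqrt(3))          # coefficient on the right-hand side

def residual(z: complex) -> float:
    # |z^3 - w * conj(z)|; zero exactly when z solves the equation
    return abs(z**3 - w * z.conjugate())

sqrt3 = math.sqrt(3)
solutions = [sqrt3 - 1j, 1 + sqrt3 * 1j, -sqrt3 + 1j, -1 - sqrt3 * 1j, 0j]
non_solutions = [1 - sqrt3 * 1j, -1 + sqrt3 * 1j]
```
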
1,460,704
<p>Intuitively we know that $n^2$ grows faster than $n$, thus the difference tends to negative infinity. But I have trouble proving it symbolically because of the indeterminate form $\infty - \infty$. Is there anyway to do this without resorting to the Epsilon-Delta definition ?</p>
Yes
155,328
<p>Let $M &lt; 0$; then $n-n^{2} = n(1-n) &lt; n(1-5) = -4n &lt; M$ if $n &gt; \max \{ 5, |M|/4 \}$.</p>
1,778,982
<p>I can prove that $G$ is cyclic, but I am not sure how to prove the orders. I know I need to use the Fundamental Theorem of Cyclic Groups but I'm not sure how to apply it. Is there something obvious I am missing?</p>
Domenico Vuono
227,073
<p>If $|G|=m$, then by a theorem of Sylow, for each prime power $p^i$ dividing $m$ (with $i$ an integer and $p$ prime) there exists a subgroup of order $p^i$. For this reason you have two possibilities: $pq$ or $p^3$.</p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
Rayhunter
76,236
<ol> <li><p>Induction is used to prove that a statement is true for every natural number.</p></li> <li><p>Here you want to prove a statement about the limit as the number of terms tends to infinity, not a statement about each individual $n$.</p></li> </ol> <p>That said, it should be clear that mathematical induction is inapplicable here. </p> <p>Trying to use induction to solve this problem is like trying to eat soup with a fork.</p>
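<p>One standard workaround (an added note): induction <em>can</em> prove the finite identity $1+\frac14+\cdots+\frac1{4^n}=\frac43\left(1-4^{-(n+1)}\right)$, and the limit statement then follows by letting $n\to\infty$. A quick numerical check of that closed form:</p>

```python
def partial_sum(n: int) -> float:
    # 1 + 1/4 + ... + 1/4^n
    return sum(4.0 ** -k for k in range(n + 1))

def closed_form(n: int) -> float:
    return (4 / 3) * (1 - 4.0 ** -(n + 1))

max_err = max(abs(partial_sum(n) - closed_form(n)) for n in range(30))
gap = abs(partial_sum(29) - 4 / 3)   # partial sums approach 4/3
```
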
2,160,671
<p>I got stuck with this problem.</p> <p>\begin{matrix} 1 &amp; 1 &amp; 1 &amp; 1\\ 0 &amp; 1 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 1\\ \end{matrix}</p> <p>Consider the $3\times 4$ matrix $\bf A$ (above). Do the columns of $\bf A$ span $\mathbb R^3$? </p> <p>Prove your answer. Also, Find a $4\times 3$ matrix $\bf B$, such that $\bf AB = I_3$</p> <p>--</p> <p>I know that the columns of $\bf A$ span $\mathbb R^3$ as there more columns than rows. But I cannot understand how to find matrix $\bf B$ because I cannot implement "super-augmented" matrix and do Gauss-Jordan elimination. Looks like I need to do something with 4th column of $\bf A$ and 4th row of $\bf B$. What do you think?</p> <p>Thanks!</p>
egreg
62,967
<p>You <em>can</em> use the augmented matrix method! \begin{align} \left[\begin{array}{cccc|ccc} 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \end{array}\right] &amp;\to \left[\begin{array}{cccc|ccc} 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; 0 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \end{array}\right] \\&amp;\to \left[\begin{array}{cccc|ccc} 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; -1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; 0 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 1 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \end{array}\right] \end{align} Now the left part represents the equations and each column in the right part represents the constant terms in a linear system.</p> <p>For instance, the first system is $$ \begin{cases} x_1+x_4=1\\ x_2-x_4=0\\ x_3+x_4=0 \end{cases} $$ so the first column of a right inverse is (choosing $x_4=0$) $$ \begin{bmatrix}1\\0\\0\\0\end{bmatrix} $$ Thus a right inverse is $$ \begin{bmatrix} 1 &amp; -1 &amp; 0 \\ 0 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 0 \end{bmatrix} $$</p>
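<p>A quick machine check of the result (plain Python, no libraries):</p>

```python
A = [[1, 1, 1, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

B = [[1, -1,  0],
     [0,  1, -1],
     [0,  0,  1],
     [0,  0,  0]]

# AB should be the 3x3 identity
AB = [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(3)] for i in range(3)]
```
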
2,160,671
<p>I got stuck with this problem.</p> <p>\begin{matrix} 1 &amp; 1 &amp; 1 &amp; 1\\ 0 &amp; 1 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 1\\ \end{matrix}</p> <p>Consider the $3\times 4$ matrix $\bf A$ (above). Do the columns of $\bf A$ span $\mathbb R^3$? </p> <p>Prove your answer. Also, Find a $4\times 3$ matrix $\bf B$, such that $\bf AB = I_3$</p> <p>--</p> <p>I know that the columns of $\bf A$ span $\mathbb R^3$ as there more columns than rows. But I cannot understand how to find matrix $\bf B$ because I cannot implement "super-augmented" matrix and do Gauss-Jordan elimination. Looks like I need to do something with 4th column of $\bf A$ and 4th row of $\bf B$. What do you think?</p> <p>Thanks!</p>
krishna
417,603
<p>$B$ is a $4\times 3$ matrix. We will find the columns of $B$ using the fact that each column of $AB$ is a linear combination of the columns of $A$, with coefficients taken from the corresponding column of $B$. $\\1\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}+0\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}+0\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}+0\begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix}=\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}$ So the first column of $B$ is given by $ \begin{bmatrix} 1\\ 0\\ 0\\ 0 \end{bmatrix}$. Similarly $ \\-1\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}+1\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}+0\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}+0\begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix}=\begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$ so the second column of $B$ is given by $\begin{bmatrix} -1\\ 1\\ 0\\ 0 \end{bmatrix}$, and $\\0\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}-1\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}+1\begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}+0\begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$ so the third column of $B$ is given by $\begin{bmatrix} 0\\ -1\\ 1\\ 0 \end{bmatrix}$. Altogether, $B=\begin{bmatrix} 1 &amp;-1 &amp;0 \\ 0&amp; 1 &amp;-1 \\ 0&amp; 0 &amp;1 \\ 0&amp; 0 &amp; 0 \end{bmatrix}$.</p> <p>Note that the matrix $B$ is not unique. </p>
1,856,071
<p>If I would extend two lines $l_1$ and $l_2$ they would intersect with an angle of 90 degrees. How should I write with math terms that there would be a 90 degree angle. I assume $l_1 \perp l_2$ is wrong if they do not intersect (when not extending them).</p> <p>Is there a possibility to express a 90 degree angle between an extension of $l_1$ and $l_2$? How to express an extension of $l_1$?</p>
Mike Pierce
167,197
<p>Generally <em>lines</em> are thought to go on indefinitely (they don't have endpoints), so the $l_1$ and $l_2$ you are describing should really be called <em>line segments</em>. With that in mind you could say something like:</p> <blockquote> <p>Let $\ell_1$ and $\ell_2$ be the lines that contain the line segments $l_1$ and $l_2$ respectively. Note that $\ell_1 \perp \ell_2$.</p> </blockquote>
2,314,369
<p>My objective is to represent a unit square using a single matrix.<br>The four corners of the unit square are represented by column vectors as follows,</p> <p>$\left[\begin{matrix} 0 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 1 \end{matrix}\right],\left[\begin{matrix} 0 \\ 1 \end{matrix}\right]$</p> <p>when we write these points in a single matrix, it becomes $2 X 4$ matrix and we cannot multiply it with the standard matrix which is $2 X 2$, but when written as $\left[\begin{matrix} 0&amp;0 \\ 1&amp;0\\ 1&amp;1\\0&amp;1 \end{matrix}\right] $, it becomes $4 X 2 $ matrix which we can multiply with standard matrix of $2 X 2$ and produces the desired result. </p> <p>I find it counter-intuitive as points are represented by column vectors.</p> <p><strong><em>Question:</em></strong> Is it valid to say that by writing a 4 X 2 matrix I am representing a square by mentioning the four corners of it ?</p>
Christian Sykes
322,386
<p>You can't multiply $$X = \begin{bmatrix}0&amp;1&amp;1&amp;0\\ 0&amp;0&amp;1&amp;1 \end{bmatrix}$$ by a $2\times2$ matrix $A$ on the right, but you can on the <em>left</em>. In fact, if $x_1, \dots, x_k$ are $n\times 1$ vectors, $A$ is any $n\times n$ matrix, and $$X = \begin{bmatrix}x_1&amp;\dots&amp;x_k\end{bmatrix},$$ then $$AX = \begin{bmatrix}Ax_1&amp;\dots&amp;Ax_k\end{bmatrix}.$$ Thus we can apply a linear operator to an entire collection of points (<em>any</em> finite collection from the same finite dimensional vector space) "simultaneously" via matrix multiplication.</p>
4,094,333
<blockquote> <p>Suppose <span class="math-container">$a_1,a_2&gt;0$</span> and <span class="math-container">$a_{n+2}=2+\dfrac{1}{a_{n+1}^2}+\dfrac{1}{a_n^2}(n\ge 1)$</span>. Prove <span class="math-container">$\{a_n\}$</span> converges.</p> </blockquote> <p>First, we may show <span class="math-container">$\{a_n\}$</span> is bounded for <span class="math-container">$n\ge 3$</span>, since <span class="math-container">$$2 \le a_{n+2}\le 2+\frac{1}{2^2}+\frac{1}{2^2}=\frac{5}{2},~~~~~~ \forall n \ge 1.$$</span></p> <p>But how to go on?</p>
xpaul
66,420
<p>Note that <span class="math-container">$2&lt;a_n&lt;3$</span> for <span class="math-container">$n\ge5$</span>. For <span class="math-container">$n\ge5$</span>, <span class="math-container">\begin{eqnarray} &amp;&amp;|a_{n+3}-a_{n+2}|\\ &amp;=&amp;\bigg|\frac{1}{a_{n+2}^2}-\frac{1}{a_{n}^2}\bigg|=\frac{(a_{n+2}+a_{n})|a_{n+2}-a_{n}|}{a_{n+2}^2a_{n}^2}\\ &amp;\le&amp;\frac{6}{2^2\cdot 2^2}|a_{n+2}-a_{n}|=\frac38|a_{n+2}-a_{n}|\\ &amp;\le&amp;\frac38\big(|a_{n+2}-a_{n+1}|+|a_{n+1}-a_{n}|\big). \end{eqnarray}</span> So, writing <span class="math-container">$d_k=|a_{k+1}-a_k|$</span>, we have <span class="math-container">$d_{n+2}\le\frac38(d_{n+1}+d_n)$</span> for <span class="math-container">$n\ge5$</span>. Let <span class="math-container">$r\approx0.83$</span> be the positive root of <span class="math-container">$r^2=\frac38(r+1)$</span>; an easy induction then gives <span class="math-container">$d_n\le Cr^n$</span> for a suitable constant <span class="math-container">$C&gt;0$</span>. Hence <span class="math-container">$\sum_n d_n&lt;\infty$</span>, so <span class="math-container">$\{a_n\}$</span> is Cauchy and <span class="math-container">$\lim_{n\to\infty}a_n=L$</span> exists.</p>
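<p>Numerically the convergence is easy to observe; note also that any limit $L$ must satisfy $L=2+\frac{2}{L^2}$, i.e. $L^3=2L^2+2$ (an illustration to accompany the proof above):</p>

```python
def iterate(a1: float, a2: float, steps: int) -> float:
    """Run the recurrence a_{n+2} = 2 + 1/a_{n+1}^2 + 1/a_n^2 and return the last term."""
    a, b = a1, a2
    for _ in range(steps):
        a, b = b, 2 + 1 / b**2 + 1 / a**2
    return b

# two very different positive starting pairs
L1 = iterate(1.0, 5.0, 200)
L2 = iterate(0.3, 0.3, 200)
```
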
2,572,564
<p>So I have a straight line, in the classic <span class="math-container">$y = mx + b$</span>, and I'm just trying to translate the formula for the line a certain distance along its normal.</p> <p>For example, with <a href="https://i.stack.imgur.com/r8vct.jpg" rel="nofollow noreferrer">this graph</a>, how would I translate the red line to the blue one if (for example) <span class="math-container">$x$</span> was <span class="math-container">$4$</span>?</p>
user
505,767
<p>By <strong>first order expansion</strong> (as $x\to0$) $$\frac{\log(1+\frac{x^4}{3})}{\sin^6(x)} =\frac{\frac{x^4}{3}+o(x^6)}{x^6+o(x^6)} \to +\infty$$</p> <p>By <strong>standard limits</strong> $$\frac{\log(1+\frac{x^4}{3})}{\frac{x^4}{3}} \cdot\frac{\frac{x^4}{3}}{x^6}\cdot\frac{x^6}{\sin^6(x)}\to +\infty$$</p>
581,599
<p>Is it generally true that $$\frac{\mathrm d}{\mathrm da} \int_{-\infty}^{a-y} f(x)\, \mathrm dx = f(a-y),$$ where $a$ and $y$ in the expression are constants?</p> <p>To give context to the question, I am reading about the function convolution of two independent random variables $X, Y$. </p>
Eastsun
3,599
<p>Here is a proof using the definition of derivative (assuming $f$ is continuous):</p> <p>$$\begin{eqnarray*}\frac{d}{d a}\int_{-\infty}^{a-y}f(x)dx &amp;=&amp; \lim_{t\to0}\frac{\int_{-\infty}^{a-y}f(x)dx-\int_{-\infty}^{a-y-t}f(x)dx}{t}\\ &amp;=&amp; \lim_{t\to0}\frac{\int_{a-y-t}^{a-y}f(x)dx}{t} \quad \textbf{mean value theorem for integration}\\ &amp;=&amp; \lim_{t\to0}f(\theta_t)\quad \textbf{where}\quad \theta_t\in(a-y-t,a-y)\\ &amp;=&amp; f(a-y)\end{eqnarray*}$$</p>
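<p>A concrete instance (illustrative, with $f$ chosen here just for the example): take $f(x)=\frac1{1+x^2}$, whose integral from $-\infty$ to $t$ is $\arctan t+\frac\pi2$; a centred difference quotient in $a$ then matches $f(a-y)$:</p>

```python
import math

def f(x):
    return 1 / (1 + x * x)

def F(t):
    # integral of f from -infinity to t
    return math.atan(t) + math.pi / 2

a, y, h = 1.7, 0.4, 1e-5
numeric = (F(a + h - y) - F(a - h - y)) / (2 * h)   # d/da of F(a - y)
exact = f(a - y)
```
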
1,656,294
<p>Apart from the Dirichlet function (equals 1 for rational x and 0 for irrational x), and similar constructs (equals some constant $a$ for rational x and $b$ for irrational x, where $a\neq b$), what are the other functions which are defined on all real numbers and discontinuous at all points?</p> <p>Are there infinitely many such functions? </p>
jjagmath
571,433
<p>Set <span class="math-container">$y = \tan x$</span>. Then <span class="math-container">$y \in (0,1)$</span> and</p> <ul> <li><span class="math-container">$t_1 = y^y$</span></li> <li><span class="math-container">$t_2 = y^{1/y}$</span></li> <li><span class="math-container">$t_3 = y^{-y}$</span></li> <li><span class="math-container">$t_4 = y^{-1/y}$</span></li> </ul> <p>We got rid of the noise. Since <span class="math-container">$\log$</span> is increasing, this is the same as comparing</p> <ul> <li><span class="math-container">$\log t_1 = y \log y$</span></li> <li><span class="math-container">$\log t_2 = \dfrac{1}{y} \log y$</span></li> <li><span class="math-container">$\log t_3 = -y \log y$</span></li> <li><span class="math-container">$\log t_4 = -\dfrac{1}{y} \log y$</span></li> </ul> <p>It's obvious that <span class="math-container">$$-\frac{1}{y} &lt; -y &lt; y &lt; \frac{1}{y}$$</span> Multiplying by <span class="math-container">$\log y &lt;0$</span> we get <span class="math-container">$$-\frac{1}{y}\log y &gt; -y\log y &gt; y \log y &gt; \frac{1}{y} \log y$$</span></p> <p>which means <span class="math-container">$$t_4&gt;t_3&gt;t_1&gt;t_2$$</span></p>
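<p>A quick numerical confirmation of the resulting ordering $t_4&gt;t_3&gt;t_1&gt;t_2$ for several $x$ with $\tan x\in(0,1)$ (an added check):</p>

```python
import math

def ordered(x: float) -> bool:
    y = math.tan(x)                  # y in (0, 1) for x in (0, pi/4)
    t1, t2 = y**y, y**(1 / y)
    t3, t4 = y**(-y), y**(-1 / y)
    return t4 > t3 > t1 > t2

all_ordered = all(ordered(x) for x in [0.1, 0.3, 0.5, 0.7])
```
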
1,427,992
<blockquote> <p>Let $k$ be a divisor of $n^4+n^3+n^2+n+1$ , then either $k$ or $k-1$ is divisible by $5$.</p> </blockquote> <p>I solved this problem by considering the prime factorization of $k$ and showing it to be either $5$ or $1$ modulo $5$ with the help of Fermat's Little Theorem and an elementary divisibility trick.</p> <p>I noticed that there was nothing special about number $5$ here, by similar reasoning one can show that any divisor of $n^{p-1}+n^{p-2}+...+n+1$ is either divisible by $p$ or $1$ mod $p$.</p> <p>While proving this, I made this observation ;</p> <blockquote> <p>Let $f(n,k)=a_nn^k+a_{n-1}n^{k-1}+...a_0$ ( $\forall a_k$ integers), then </p> <p>$f(m,k)$ is always divisible by $\gcd[(n-m), f(n,k)]$</p> </blockquote> <p>I checked this for random numbers, and it turned out to be true. I have been trying to prove it with the same techniques but for some reason, these are not working.</p> <p>How can I approach this conjecture?</p>
f10w
31,498
<p>We will show that <span class="math-container">$$f(x)+f^*(u) = \langle x,u \rangle \Longleftrightarrow u \in \partial f(x).$$</span></p> <p>Indeed, we have <span class="math-container">\begin{align*} u \in \partial f(x) &amp;\Longleftrightarrow f(z) \geq f(x) + \langle u, z-x\rangle \quad\forall z \\ &amp;\Longleftrightarrow \langle u, x\rangle - f(x) \ge \langle u, z\rangle - f(z) \quad\forall z \\ &amp;\Longleftrightarrow \langle u, x\rangle - f(x) = \sup_z\left\{ \langle u, z\rangle - f(z)\right\} \\ &amp;\Longleftrightarrow \langle u, x\rangle - f(x) = f^*(u) \\ &amp;\Longleftrightarrow f(x)+f^*(u) = \langle x,u \rangle, \text{QED}. \end{align*}</span></p>
1,427,992
<blockquote> <p>Let $k$ be a divisor of $n^4+n^3+n^2+n+1$ , then either $k$ or $k-1$ is divisible by $5$.</p> </blockquote> <p>I solved this problem by considering the prime factorization of $k$ and showing it to be either $5$ or $1$ modulo $5$ with the help of Fermat's Little Theorem and an elementary divisibility trick.</p> <p>I noticed that there was nothing special about number $5$ here, by similar reasoning one can show that any divisor of $n^{p-1}+n^{p-2}+...+n+1$ is either divisible by $p$ or $1$ mod $p$.</p> <p>While proving this, I made this observation ;</p> <blockquote> <p>Let $f(n,k)=a_nn^k+a_{n-1}n^{k-1}+...a_0$ ( $\forall a_k$ integers), then </p> <p>$f(m,k)$ is always divisible by $\gcd[(n-m), f(n,k)]$</p> </blockquote> <p>I checked this for random numbers, and it turned out to be true. I have been trying to prove it with the same techniques but for some reason, these are not working.</p> <p>How can I approach this conjecture?</p>
Svetoslav
254,733
<p>Let $f:X\rightarrow \mathbb R$ and let $X^*$ be the dual of $X$. First recall the <strong>definition of $f^*: X^*\rightarrow \mathbb R$</strong></p> <p>$f^*(u^*)=\sup\limits_{x\in X}{\{\langle u^*,x\rangle - f(x)\}}$, where $u^*\in X^*$ and $\langle .,.\rangle$ is the duality pairing in $X^*\times X$.</p> <p>Then the <strong>definition of the subdifferential set</strong> is:</p> <p>$\partial f (x)=\{u^*\in X^*: f(z)\ge f(x)+\langle u^*,z-x\rangle\ \ \forall z\in X\}$.</p> <p><strong>Proposition:</strong></p> <p>$f(x)+f^*(u^*)-\langle u^*,x\rangle =0\Leftrightarrow u^*\in\partial f(x)$.</p> <p><strong>Proof:</strong></p> <p>1) Let $f(x)+f^*(u^*)-\langle u^*,x\rangle =0$. From the definition $f^*(u^*)=\sup\limits_{z\in X}{\{\langle u^*,z\rangle - f(z)\}}\Rightarrow$</p> <p>$0=f(x)+f^*(u^*)-\langle u^*,x\rangle \ge f(x)+\langle u^*,z\rangle -f(z)-\langle u^*,x\rangle\quad \forall z\in X$ which is $f(z)\ge f(x)+\langle u^*,z-x\rangle \quad \forall z\in X$</p> <p>2) Let $u^*\in \partial f(x)$. By definition $f(z)\ge f(x)+\langle u^*,z-x\rangle\quad \forall z\in X$. Consequently $\langle u^*,x\rangle-f(x)\ge \langle u^*,z\rangle - f(z)\quad\forall z\in X$</p> <p>$\Rightarrow \langle u^*,x\rangle-f(x)\ge \sup\limits_{z\in X}{\{\langle u^*,z\rangle - f(z)\}}=f^*(u^*)\quad\quad (1)$. </p> <p>But from the definition of $f^*(u^*)\Rightarrow f^*(u^*)\ge \langle u^*,x\rangle-f(x)\quad\quad (2)$. </p> <p>From $(1)$ and $(2)$ the equality follows.</p>
833,353
<p>Let $a,b \geq 0$ and $0&lt;p,q &lt; 1$ s.t. $p + q = 1$.</p> <p>Is it true that $a^p b^q \leq a+b$?</p>
PhoemueX
151,552
<p>Regarding the edited question, assume w.l.o.g. that $a \leq b$.</p> <p>Then</p> <p>$$a^p b^q \leq b^p b^q = b \leq a+b.$$</p>
833,353
<p>Let $a,b \geq 0$ and $0&lt;p,q &lt; 1$ s.t. $p + q = 1$.</p> <p>Is it true that $a^p b^q \leq a+b$?</p>
Alexander Vigodner
92,276
<p>Recall Young's inequality: for $\tilde p,\tilde q&gt;1$ with $\frac1{\tilde p}+\frac1{\tilde q}=1$ and $\tilde a,\tilde b\ge0$, $$ \frac{\tilde a^{\tilde p}}{\tilde p}+\frac{\tilde b^{\tilde q}}{\tilde q}\ge \tilde a\tilde b. $$ Set $\tilde p=1/p$, $\tilde q=1/q$ (so that $\frac1{\tilde p}+\frac1{\tilde q}=p+q=1$) and $\tilde a=a^p$, $\tilde b =b^q$. Then $\tilde a^{\tilde p}=a$ and $\tilde b^{\tilde q}=b$, so the inequality above gives $$ a^p b^q=\tilde a\tilde b\le pa+qb\le a+b. $$</p>
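<p>A random spot-check of the inequality $a^pb^q\le a+b$ for $a,b\ge0$ and $p+q=1$, $0&lt;p,q&lt;1$ (illustrative only):</p>

```python
import random

def holds(a: float, b: float, p: float) -> bool:
    q = 1 - p
    return a**p * b**q <= a + b + 1e-9   # tiny slack for floating-point rounding

random.seed(42)
ok = all(holds(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0.01, 0.99))
         for _ in range(10000))
```
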
2,675,758
<p>For example:</p> <p>There are two green marbles and two red marbles in a bag. Two marbles are chosen at random, what is the probability that the two marbles which have been chosen are of the same colour?</p> <p>Ordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |G2~G1|, |R1~R2|, |R2~R1|\} = 4$ ways <br> Sample space = $^4P_2 = 12$ ways <br> Probability = $\dfrac{4}{12} = \dfrac{1}{3}$ <br></p> <p>Unordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |R1~R2|\} = 2$ ways <br> Sample space = $^4C_2 = 6$ ways <br> Probability = $\dfrac{2}{6} = \dfrac{1}{3}$ <br></p> <p>Ordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{|G~G|, |G~R|, |R~G|, |R~R|\} = 4$ ways <br> Probability = $\dfrac{2}{4} = \dfrac{1}{2}$ <br></p> <p>Unordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{||G~G|, |G~R|, |R~R||\} = 3$ ways <br> Probability = $\dfrac{2}{3}$</p> <p>Which approach is the correct one? And more importantly, <strong>why</strong> are the other approaches wrong?</p>
Community
-1
<p>$$x^2+4x+4=100$$</p> <p>$$x^2+4x-96=0$$</p> <p>$$(x-8)(x+12)=0$$</p> <p>So, $x=8$ or $x=-12$.</p> <p>$8&gt;0$, but $-12 \ngtr 0$, so applying the restriction, $x=8$</p>
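<p>The factorization can be confirmed quickly (this check is not in the original answer):</p>

```python
# Both candidate roots satisfy the original equation; only x = 8 meets x > 0.
roots = [8, -12]
for x in roots:
    assert x**2 + 4*x + 4 == 100
assert [x for x in roots if x > 0] == [8]
```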
2,614,776
<p><strong>Exercise :</strong></p> <blockquote> <p><strong>(A.1)</strong> Let G be a group. The subgroup <span class="math-container">$Z(G) = \{z \in G | zg =gz \space \forall \space g \in G\}$</span> is called the center of <span class="math-container">$G$</span>. Show that <span class="math-container">$Z(G)$</span> is a proper subgroup of <span class="math-container">$G$</span>.</p> <p><strong>(A.2)</strong> Find the center of the dihedral group <span class="math-container">$D_3$</span>.</p> <p><strong>(A.3)</strong> Show that the groups <span class="math-container">$D_3$</span> and <span class="math-container">$S_3$</span> are isomorphic. Using (A.2) find the center of <span class="math-container">$S_3$</span>.</p> <p><strong>(A.4)</strong> Show that for all <span class="math-container">$n&gt;2$</span> the center <span class="math-container">$Z(S_n)$</span> of the permutation group <span class="math-container">$S_n$</span> contains only the identity permutation.</p> </blockquote> <p><strong>Attempt :</strong></p> <p><strong>(A.1)</strong></p> <p>From the definition of the identity element : <span class="math-container">$eg = ge = g \space \forall \space g \in G$</span>. This means that <span class="math-container">$e \in Z(G) \Rightarrow Z(G) \neq \emptyset$</span>.</p> <p>Now, let <span class="math-container">$a,b \in Z(G)$</span>. Then :</p> <p><span class="math-container">$$\forall \space g \in G: (ab)g = a(bg) = a(gb) = (ag)b = (ga)b = g(ab)$$</span></p> <p>which means that <span class="math-container">$ab \in Z(G)$</span>.</p> <p>Finally, let <span class="math-container">$c \in Z(G)$</span>. 
Then :</p> <p><span class="math-container">$\forall \space g \in G : cg=gc \Rightarrow c^{-1}(cg)c^{-1}=c^{-1}(gc)c^{-1} \Rightarrow egc^{-1} =c^{-1}ge \Rightarrow gc^{-1} = c^{-1}g \Rightarrow c^{-1} \in Z(G) $</span></p> <p>which finally leads to <span class="math-container">$Z(G) \leq G$</span>.</p> <p><strong>(A.2)</strong></p> <p>I found an elaboration for the center of the dihedral group <span class="math-container">$D_{2n}$</span> <a href="https://ysharifi.wordpress.com/2011/02/02/center-of-dihedral-groups/" rel="nofollow noreferrer">here</a> but it won't work to find out the center of <span class="math-container">$D_3$</span>, so I would appreciate any tips or links.</p> <p>For <strong>(A.3)-(A.4)</strong> I am at a loss on how to even start, so I would really appreciate any thorough answer or links with similar exercises - tips.</p> <p>Sorry for not being able to provide an attempt but currently I'm going over problems that were in exams the previous years (for my semester exams) and it seems there is a lot of stuff that I have difficulty handling, which seems more weird than what we had handled this far.</p>
Nick A.
412,202
<p>A2: Use the relations $σ^3=e,τ^2=e,στσ=τ^{-1}$.</p> <p>A4: Do you know what conjugate elements are and what the cycle representation is? If yes, then this is a very easy exercise, since the center $Z(G)$ consists of precisely the elements that are conjugate only to themselves. So if a non-identity element $σ$ of $S_n$ lies in the center, it has no conjugates other than itself. Can you continue from here?</p>
270,875
<p>When I run</p> <pre><code>FullSimplify[Sqrt[(1 - Sqrt[1 - 2 x^2 y^2])^2], {1 &gt; 2 x^2 y^2 &gt; 0}] </code></pre> <p>I obtain <code>1 - Sqrt[1 - 2 x^2 y^2]</code>. However, when I run</p> <pre><code>FullSimplify[ExpandAll[Sqrt[(1 - Sqrt[1 - 2 x^2 y^2])^2]], {1 &gt; 2 x^2 y^2 &gt; 0}] </code></pre> <p>I am never able to recover the simpler expression and I always obtain <code>Sqrt[2 - 2 x^2 y^2 - 2 Sqrt[1 - 2 x^2 y^2]]</code>. (I did try various <code>PowerExpand</code> and <code>ComplexityFunction</code> variants with no avail.)</p> <p><em><strong>How to deal with this? Is there a way to ask <code>FullSimplify</code> to look into square roots and search for squares of simpler expressions that have unambiguous square roots given the assumptions?</strong></em></p> <hr /> <p>Ultimately, I believe that the issue is with FullSimplify not realizing that some forms of the expressions would simplify even further in the next step <em>given the assumptions</em>.</p> <p>Here is a more advanced example of what I am dealing with and that I would like to simplify more algorithmically. First:</p> <pre><code>FullSimplify[ -Sqrt[2] x^2 y + Sqrt[1 - x^2 y^2 - Sqrt[1 - 2 x^2 y^2]]/y + x Sqrt[ ((-1 + 2 x^2 y^2) (-1 + Sqrt[1 - 2 x^2 y^2]))/(1 + Sqrt[1- 2 x^2 y^2]) ], {1 &gt; 2 x^2 y^2 &gt; 0, x &gt; 0, y &gt; 0}] </code></pre> <p>returns the same expression. However, one can see by &quot;manual&quot; operations that</p> <pre><code>Simplify[ Simplify[ Simplify[ -Sqrt[2] x^2 y + Sqrt[1 - x^2 y^2 - Sqrt[1 - 2 x^2 y^2]]/y + x Sqrt[ ((-1 + 2 x^2 y^2) (-1 + Sqrt[1 - 2 x^2 y^2]))/(1 + Sqrt[1 - 2 x^2 y^2]) ], {1 &gt; 2 x^2 y^2 &gt; 0, x &gt; 0, y &gt; 0}] /. {1 - x^2 y^2 - Sqrt[1 - 2 x^2 y^2] -&gt; (1 - Sqrt[1 - 2 x^2 y^2])^2/2}, {1 &gt; 2 x^2 y^2 &gt; 0, x &gt; 0, y &gt; 0}] /. 
{((-1 + 2 x^2 y^2) (-1 + Sqrt[1 - 2 x^2 y^2]))/(1 + Sqrt[1 - 2 x^2 y^2]) -&gt; ((1 - 2 x^2 y^2) 2 x^2 y^2)/(1 + Sqrt[1 - 2 x^2 y^2])^2}, {1 &gt; 2 x^2 y^2 &gt; 0, x &gt; 0, y &gt; 0}] </code></pre> <p>the expression is actually identically zero given the assumptions. Note that no further assumptions were used in these &quot;manual&quot; operations, only identities that apply under the assumptions <code>{1 &gt; 2 x^2 y^2 &gt; 0, x &gt; 0, y &gt; 0}</code> were used. Ideally the solution should work also in this example.</p>
Akku14
34,287
<p>Eliminate the term you want to get rid of with Solve</p> <pre><code>f = Sqrt[(1 - Sqrt[1 - 2 x^2 y^2])^2]; fe = ExpandAll[f]; fsol /. First@ Solve[{fe == fsol, g == Sqrt[1 - 2 x^2 y^2], g &gt;= 0}, fsol, {g}, Reals] // FullSimplify[#, 1 &gt; 2 x^2 y^2 &gt; 0] &amp; (* 1 - Sqrt[1 - 2 x^2 y^2] *) </code></pre>
3,489,896
<p>It is clear to me that <span class="math-container">$\liminf X_n \le \limsup X_n$</span>, but every time I see <span class="math-container">$\liminf X_n \le \sup X_n$</span> it seems less obvious to me. Could someone help me understand this inequality?</p>
mathcounterexamples.net
187,663
<p><strong>Hint</strong></p> <p>As <span class="math-container">$\limsup X_n \le \sup X_n$</span>, this second inequality <span class="math-container">$\liminf X_n \le \sup X_n$</span> should be obvious to you if <span class="math-container">$\liminf X_n \le \limsup X_n$</span> is.</p> <p>So you're left to prove, and to think about why, <span class="math-container">$\limsup X_n \le \sup X_n$</span> holds.</p>
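<p>A concrete sequence (not in the original answer) that separates all three quantities: $X_n=(-1)^n(1+1/n)$ has $\liminf X_n=-1$, $\limsup X_n=1$, and $\sup X_n=3/2$ (attained at $n=2$).</p>

```python
# Approximate liminf/limsup by the inf/sup of a far tail of the sequence.
X = [(-1)**n * (1 + 1/n) for n in range(1, 100001)]

sup = max(X)
limsup_approx = max(X[50000:])       # sup of a far tail
liminf_approx = min(X[50000:])       # inf of a far tail

assert abs(sup - 1.5) < 1e-12        # attained at n = 2
assert abs(limsup_approx - 1.0) < 1e-3
assert abs(liminf_approx + 1.0) < 1e-3
assert liminf_approx <= limsup_approx <= sup
```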
115,630
<p>Let $V$ and $W$ be two algebraic structures, $v\in V$, $w\in W$ be two arbitrary elements.</p> <p>Then, what is the geometric intuition of $v\otimes w$, and more complex $V\otimes W$ ? Please explain for me in the most concrete way (for example, $v, w$ are two vectors in 2 dimensional vector spaces $V, W$)</p> <p>Thanks</p>
Michael Hardy
11,667
<p>The difference between the ordered pair $(v,w)$ of vectors and the tensor product $v\otimes w$ of vectors is that for a scalar $c\not\in\{0,1\}$, the pair $(cv,\;w/c)$ is different from the pair $(v,w)$, but the tensor product $(cv)\otimes(w/c)$ is the same as the tensor product $v\otimes w$.</p>
115,630
<p>Let $V$ and $W$ be two algebraic structures, $v\in V$, $w\in W$ be two arbitrary elements.</p> <p>Then, what is the geometric intuition of $v\otimes w$, and more complex $V\otimes W$ ? Please explain for me in the most concrete way (for example, $v, w$ are two vectors in 2 dimensional vector spaces $V, W$)</p> <p>Thanks</p>
Mnifldz
210,719
<p>The way I like to think about this is that if we have two vector spaces $V$ and $W$, their tensor product $V \otimes W$ intuitively is the idea "for each vector $v\in V$, attach to it the entire vector space $W$." Notice however that since we have $V\otimes W \cong W \otimes V$ we can rephrase this as the idea "for every $w \in W$ attach to it the entire vector space $V$." What distinguishes $V\times W$ from $V\otimes W$ is that the tensor space is essentially a linearized version of $V\times W$. To obtain $V\otimes W$ we start with the product space $V\times W$ and consider the free abelian group over it, denoted by $\mathcal{F}(V\times W)$. Given this larger space, that admits scalar products of ordered pairs, we set up equivalence relations of the form $(av,w) \sim a(v,w) \sim (v, aw)$ with $v\in V, w \in W$ and $a \in \mathbb{F}$ (a field), so that we identify points in the product space that yield multilinear relationships in the quotient space generated by these relationships. If $\mathcal{S}$ is the subspace that is spanned by these equivalence relations, we define $V\otimes W = \mathcal{F}(V\times W)/\mathcal{S}$.</p> <p>Personally, I like to understand the tensor product in terms of multilinear maps and differential forms since this further makes the notion of tensor product more intuitive for me (and this is typically why tensor products are used in physics/applied math). For instance if we take an $n$-dimensional real vector space $V$, we can consider the collection of multilinear maps of the form</p> <p>$$ F: \underbrace{V \otimes \ldots \otimes V}_{k \; times} \to \mathbb{R} $$ </p> <p>where $F \in V^* \otimes \ldots \otimes V^*$, and this tensor space has basis vectors $dx^{i_1} \otimes \ldots \otimes dx^{i_k}$ and $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$. 
Notice now that given $k$ vectors $v_1, \ldots, v_k \in V$ we have that multilinear tensor functional $dx^{i_1} \otimes \ldots \otimes dx^{i_k}$ acts on the ordered pair $(v_1, \ldots, v_k)$ as </p> <p>$$ dx^{i_1} \otimes \ldots \otimes dx^{i_k}(v_1, \ldots, v_k) \;\; =\;\; dx^{i_1}(v_1) \ldots dx^{i_k}(v_k). $$ </p> <p>Where we have that $dx^{i_j}(v_j)$ takes the vector $v_j$ and picks out its $i_j$-th component. Extending this notion to differential forms where instead we would have basis elements $dx^{i_1} \wedge \ldots \wedge dx^{i_k}$ (where antisymmetrization is taken into account), we would have that this basis form would take a set of $k$ vectors and in some sense "measure" how much these vectors overlap with the subspace generated by the basis $\{x_{i_1}, \ldots, x_{i_k}\}$. </p>
115,630
<p>Let $V$ and $W$ be two algebraic structures, $v\in V$, $w\in W$ be two arbitrary elements.</p> <p>Then, what is the geometric intuition of $v\otimes w$, and more complex $V\otimes W$ ? Please explain for me in the most concrete way (for example, $v, w$ are two vectors in 2 dimensional vector spaces $V, W$)</p> <p>Thanks</p>
rehctawrats
407,385
<blockquote> <p>geometric intuition of <span class="math-container">$v\otimes w$</span> [...] in the most concrete way</p> </blockquote> <p>Let me try to throw in an answer on <strong>freshmen level</strong>. I deliberately try to leave out any information that is not essential to the problem, e.g. don't bother about the specific dimensions of the quantities.</p> <p>Say we have a matrix <span class="math-container">$V$</span> and a vector <span class="math-container">$w$</span> resulting in <span class="math-container">$$Vw=v$$</span> (implying consistent dimensions of the matrix <span class="math-container">$V$</span> and of the <em>column vectors</em> <span class="math-container">$v$</span> and <span class="math-container">$w$</span> over, e.g., <span class="math-container">$\mathbb{R}$</span>). Now, let's <strong>forget about <span class="math-container">$V$</span> and try to reconstruct it given only <span class="math-container">$v$</span> and <span class="math-container">$w$</span></strong>.</p> <p>It is generally not possible to fully reconstruct it from this little information, but we can do our best by <strong>try</strong>ing <span class="math-container">$$\tilde{V}:=v\otimes w \frac{1}{\lVert w\rVert^2}$$</span> In this case the tensor product can be rewritten to <span class="math-container">$$v\otimes w := v\,w^{\sf T}$$</span> Then, checking the ability of <span class="math-container">$\tilde{V}$</span> to approximate the action of <span class="math-container">$V$</span> we see <span class="math-container">$$\tilde{V}w=v\,w^{\sf T}w\frac{1}{\lVert w\rVert^2}=v$$</span> which is the best we can do.</p> <p>To make it more concrete, let's throw in some numbers in the easiest non-trivial case: <span class="math-container">$$Vw=\left(\begin{matrix}1&amp;0\\0&amp;2\end{matrix}\right)\left(\begin{matrix}1\\0\end{matrix}\right)=\left(\begin{matrix}1\\0\end{matrix}\right)=v$$</span> Then <span 
class="math-container">$$\tilde{V}=\left(\begin{matrix}1\\0\end{matrix}\right)\left(\begin{matrix}1&amp;0\end{matrix}\right)=\left(\begin{matrix}1&amp;0\\0&amp;0\end{matrix}\right)\approx V=\left(\begin{matrix}1&amp;0\\0&amp;2\end{matrix}\right)$$</span> <strong>The subspace spanned by <span class="math-container">$w=\left(\begin{matrix}1\\0\end{matrix}\right)$</span> is accurately mapped by <span class="math-container">$\tilde{V}$</span>.</strong></p> <p>To give more (very elementary) geometric insight, let's make the matrix slightly more interesting: <span class="math-container">$$V=\left(\begin{matrix}1&amp;1\\0&amp;2\end{matrix}\right)$$</span> This leaves the <span class="math-container">$\left(\begin{matrix}1\\0\end{matrix}\right)$</span>-space invariant and scales and shears the <span class="math-container">$\left(\begin{matrix}0\\1\end{matrix}\right)$</span>-space. We now "test" the matrix by the "arbitrary" vector <span class="math-container">$w=\left(\begin{matrix}1\\1\end{matrix}\right)$</span> and get <span class="math-container">$$Vw=\left(\begin{matrix}1&amp;1\\0&amp;2\end{matrix}\right)\left(\begin{matrix}1\\1\end{matrix}\right)=\left(\begin{matrix}2\\2\end{matrix}\right)=v$$</span> and the approximate reconstruction <span class="math-container">$$\tilde{V}=\left(\begin{matrix}2\\2\end{matrix}\right)\left(\begin{matrix}1&amp;1\end{matrix}\right)\frac{1}{2}=\left(\begin{matrix}1&amp;1\\1&amp;1\end{matrix}\right)\approx V=\left(\begin{matrix}1&amp;1\\0&amp;2\end{matrix}\right)$$</span> <strong>We have lost the information about the invariance of the <span class="math-container">$\left(\begin{matrix}1\\0\end{matrix}\right)$</span>-axis and the scaling &amp; shearing behavior of the <span class="math-container">$\left(\begin{matrix}0\\1\end{matrix}\right)$</span>-axis, but the diagonal <span class="math-container">$\left(\begin{matrix}1\\1\end{matrix}\right)$</span>-space's behavior is captured by <span 
class="math-container">$\tilde{V}$</span>.</strong></p> <p>To increase the level of this answer by an <span class="math-container">$\varepsilon&gt;0$</span>, the <strong>spectral decomposition</strong> of a diagonalizable matrix <span class="math-container">$V$</span> should be mentioned. If <span class="math-container">$v_1,\dots,v_n$</span> are normalized eigenvectors with corresponding eigenvalues <span class="math-container">$\lambda_1,\dots,\lambda_n$</span>, then <span class="math-container">$$V=\sum_{i=1}^n v_i\otimes v_i \lambda_i$$</span></p> <hr> <p>To finish this, I'd like to give one concrete (and special) application of the tensor product in partial differential equations. It links this question to the divergence theorem, which is itself very intuitive. To keep it descriptive, let's stick to the case of solid mechanics (because everyone has an inherent understanding of bodies and forces), although this holds for all elliptic PDEs in general.</p> <p>Say we have an elastic body (e.g. a rubber band) occupying a domain <span class="math-container">$\Omega\subset\mathbb{R}^3$</span>. Some part of it is subject to Dirichlet boundary conditions (e.g. you're statically stretching it). This results in internal stresses, which are typically represented by matrices <span class="math-container">$\sigma\in\mathbb{R}^{3\times3}$</span>.
The system of PDEs describing this state is <span class="math-container">$${\rm div}(\sigma)=0$$</span> (considering the boundary conditions and the governing material law which closes these equations).</p> <p>In some applications you are interested in the average stress throughout the body <span class="math-container">$\Omega$</span> <span class="math-container">$$\bar{\sigma}=\frac{1}{\mu(\Omega)}\int_\Omega \sigma \, {\rm d}\mu$$</span> The divergence theorem implies that the average stress can be represented by the surface integral <span class="math-container">$$\bar{\sigma}=\frac{1}{\mu(\Omega)}\int_{\partial\Omega} \left(\sigma n\right) \otimes x \, {\rm d}A$$</span> where <span class="math-container">$\partial\Omega$</span> is the boundary of <span class="math-container">$\Omega$</span>, <span class="math-container">$n$</span> is the surface normal, and <span class="math-container">$x$</span> the coordinate vector.</p> <p><strong>This means that the volume average of a matrix can be represented integrating the pseudo-inverse of its application over the surface.</strong></p>
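<p>The rank-1 reconstruction $\tilde V=(v\otimes w)/\lVert w\rVert^2$ from the worked $2\times 2$ example above can be reproduced in a few lines (this sketch is not part of the original answer; it uses plain lists rather than any linear-algebra library):</p>

```python
# Rank-1 "pseudo-inverse" reconstruction V~ = (v outer w) / ||w||^2.
def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

V = [[1, 1], [0, 2]]
w = [1, 1]
v = matvec(V, w)                    # v = V w = (2, 2)
norm2 = sum(wi * wi for wi in w)    # ||w||^2 = 2
Vt = [[x / norm2 for x in row] for row in outer(v, w)]

assert v == [2, 2]
assert Vt == [[1.0, 1.0], [1.0, 1.0]]   # matches the answer's reconstruction
assert matvec(Vt, w) == [2.0, 2.0]      # V~ acts like V on span{w}
```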
3,649,125
<p>Bridge is a game of four players in which each player is dealt 13 cards from a standard 52 card deck. Bridge players (such as myself) are interested in the number of possible deals, where each player is distinct. This can be counted by</p> <p><span class="math-container">$$\binom{52}{13}\binom{39}{13}\binom{26}{13}\binom{13}{13}=5.364\times10^{28}$$</span></p> <p>However, this number is misleadingly large, since bridge players usually only care about the face cards (jack, queen, king, and ace) in each suit. We often consider the cards with denominations 2-10 as indistinguishable.<b> Supposing we distinguish only face cards, what is the number of possible deals? </b></p> <p><a href="http://www.rpbridge.net/7z74.htm" rel="nofollow noreferrer">This source</a> puts the figure at <span class="math-container">$8.110\times10^{15}$</span> based on a computer program. I am curious if there is a more elegant mathematical solution.</p>
Ross Millikan
1,827
<p>There are <span class="math-container">$16$</span> cards of interest, so distributing them would give you <span class="math-container">$4^{16}$</span> possibilities. The problem is that no hand can contain more than <span class="math-container">$13$</span> cards, so we subtract off the ones that have a hand of more than that. There are <span class="math-container">$4$</span> hands where one hand has all <span class="math-container">$16$</span> of the interesting cards. For hands with <span class="math-container">$15$</span> cards, there are <span class="math-container">$4$</span> ways to choose the hand that gets them, <span class="math-container">$16$</span> ways to choose the card taken out, and <span class="math-container">$3$</span> ways to choose the hand that gets the odd one. For hands with <span class="math-container">$14$</span> cards, there are <span class="math-container">$4$</span> ways to choose the hand that gets them, <span class="math-container">$16 \choose 2$</span> ways to choose the two other cards, and <span class="math-container">$3^2$</span> ways to choose which hand gets them. The grand total is <span class="math-container">$$4^{16}-4-4\cdot16\cdot 3-4{16 \choose 2}3^2=4294962780\approx 4.3\cdot 10^9$$</span> The source you link considers <span class="math-container">$10$</span>s to be distinct as well, which is why the number is so much higher. A similar analysis works if you considered <span class="math-container">$10$</span>s. I get <span class="math-container">$$4^{20}-4\sum_{k=0}^6{20 \choose k}3^k=1099381833744\approx 1.1\cdot 10^{12}$$</span></p>
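<p>Both closed forms above can be verified with exact integer arithmetic (this check is not in the original answer):</p>

```python
# Count deals distinguishing 16 cards (J,Q,K,A) resp. 20 cards (incl. tens).
from math import comb

# 4^16 distributions, minus those giving some hand more than 13 cards
n16 = 4**16 - 4 - 4*16*3 - 4*comb(16, 2)*3**2
assert n16 == 4294962780

# same inclusion-exclusion with 20 distinguished cards
n20 = 4**20 - 4*sum(comb(20, k) * 3**k for k in range(7))
assert n20 == 1099381833744
```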
1,200,878
<p>Evaluate the following integral:</p> <p>$$\int \frac{\sin x (2 \cos x - \sin x)}{2\sin x + \cos x} dx$$</p> <p>I have tried integration by parts, taking $\sin x$ as the first function, but I got stuck at the next step. Please help me.</p>
Elaqqad
204,937
<p><strong>Hints</strong> First you can notice that:</p> <p>$$\frac{\sin x (2 \cos x - \sin x)}{2\sin x + \cos x}=\cos x-\frac{1}{2\sin x + \cos x} $$</p> <p>and remember that it's very useful to make use of the substitution $t=\tan(x/2)$ for the second integral; one then finds the primitive:</p> <p>$$\sin x-\frac{2}{\sqrt5}\tanh^{-1}\left(\frac{t-2}{\sqrt5}\right),\qquad t=\tan(x/2) $$</p>
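<p>The algebraic split in the hint is easy to spot-check numerically (this check is not in the original answer): multiply out $\cos x\,(2\sin x+\cos x)-1=2\sin x\cos x-\sin^2 x=\sin x(2\cos x-\sin x)$.</p>

```python
# Numeric spot-check of
#   sin x (2 cos x - sin x) / (2 sin x + cos x) = cos x - 1/(2 sin x + cos x)
from math import sin, cos

for k in range(1, 60):
    x = 0.1 * k
    denom = 2 * sin(x) + cos(x)
    if abs(denom) < 1e-6:          # skip points near the singularity
        continue
    lhs = sin(x) * (2 * cos(x) - sin(x)) / denom
    rhs = cos(x) - 1 / denom
    assert abs(lhs - rhs) < 1e-9
```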
3,081,918
<p>It was claimed <a href="https://math.stackexchange.com/a/2387782/446262">here</a> that the convergence of the series<span class="math-container">$$\sum_{n=2}^\infty \frac{\Lambda(n)-1}{n^{1/2}\log^3 n}\tag1$$</span>(where <span class="math-container">$\Lambda$</span> is the <a href="https://en.wikipedia.org/wiki/Von_Mangoldt_function" rel="noreferrer">Von Mangoldt function</a>) is equivalent to the Riemann hypothesis. Is this true? That post provided a link to the Wikipedia article about the Von Mangoldt function, which does <em>not</em> mention this. Also, <a href="https://aimath.org/WWN/rh/articles/html/95a/" rel="noreferrer">this page</a> about the Von Mangoldt function in the context of the Riemann hypothesis makes no mention to that.</p> <p>If it is true that the convergence of the series <span class="math-container">$(1)$</span> is equivalent to the Riemann hypothesis, then I would like to have a reference for that.</p>
Conrad
298,272
<p>A good place to start is by reading Terry Tao's post <a href="https://terrytao.wordpress.com/2013/07/19/the-riemann-hypothesis-in-various-settings/" rel="nofollow noreferrer">"The Riemann hypothesis in various settings"</a></p>
509,487
<p>Prove $1+{n \choose 1}2+{n \choose 2}4+...+{n \choose n-1}2^{n-1}+{n \choose n}2^n=3^n$ using combinatorial arguments. I have no idea how to begin solving this, a nudge in the right direction would be appreciated. </p>
robjohn
13,854
<p>Using red, green, and blue, how many ways can you paint a fence with $n$ pickets?</p> <p><strong>Hint:</strong> Count the number of ways to paint the fence if $k$ pickets are to be red or green. There are $\binom{n}{k}$ ways to select $k$ pickets to be painted red or green and for each of those choices, there are $2^k$ ways to paint those pickets. The remaining pickets are painted blue.</p>
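<p>The counting argument above amounts to the binomial identity $\sum_{k=0}^n\binom nk 2^k=3^n$, which can be confirmed directly (this check is not in the original answer):</p>

```python
# For each picket choose one of 3 colours: 3^n total; grouping by the number
# k of red-or-green pickets gives sum_k C(n,k) * 2^k.
from math import comb

for n in range(0, 25):
    assert sum(comb(n, k) * 2**k for k in range(n + 1)) == 3**n
```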
1,457,797
<p>I'm told that a circle intersects the y-axis at $(0,0)$ and $(0, -4)$.</p> <p>I have a tangent that starts at $(0, -6)$. I want to find the point of intersection.</p> <p>I am taking the midpoint of the circle to be $(0, -2)$ and the radius to be $2$ from the points above.</p> <p>The equation of the circle then is:</p> <p>${(x - 0)^2 + (y + 2)^2 = 4}$</p> <p>$\implies$ ${x^2 + y^2 + 4y = 0}$</p> <p>I am taking the equation of the line to be ${y = -6}$.</p> <p>If I substitute ${y = -6}$ into ${x^2 + y^2 + 4y = 0}$ I get</p> <p>${x^2 + 36 - 24 = 0}$</p> <p>${x^2 = -12}$</p> <p>I think I have gone wrong somewhere.</p>
BLAZE
144,533
<blockquote> <p>I am taking the equation of the line to be ${y = -6}$.</p> <p>If I substitute ${y = -6}$ into ${x^2 + y^2 + 4y = 0}$ I get</p> <p>${x^2 + 36 - 24 = 0}$</p> <p>${x^2 = -12}$</p> <p>I think I have gone wrong somewhere.</p> </blockquote> <p>You have not done anything wrong; the reason this happens is that the line $y=-6$ does not intersect the circle. This is why you get ${x^2 = -12}$, which has no real roots. That is exactly as it should be.</p> <p>But to get any further with this question we need some more information from you:</p> <blockquote> <p>I have a tangent that starts at $(0,−6)$.</p> </blockquote> <p>A tangent to what?</p> <blockquote> <p>I am taking the midpoint of the circle to be $(0,−2)$.</p> </blockquote> <p>By midpoint I presume you mean circle centre. But what makes you think this is at $(0,−2)$? </p> <blockquote> <p>I want to find the point of intersection.</p> </blockquote> <p>Point of intersection of what with what? (I'm assuming one of those is the circle) </p> <blockquote> <p>I am taking the equation of the line to be $y=−6$.</p> </blockquote> <p>Why? Equation of what line? The tangent to the circle?</p>
4,135,473
<p>Can you explain how the lower integral is the supremum of a sum? Isn't <span class="math-container">$L(f,P)$</span> just a sum and therefore a single number? Definition 5.13 says from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, not from each <span class="math-container">$x_j$</span> to <span class="math-container">$x_{j-1}$</span>. And it says for a given partition, not finer and finer partitions.</p> <p>I'm just very confused on these definitions and would love if someone could rephrase it in a way that matches this definition. (I know there's another way to define it in terms of limits but I need to understand it the way this textbook defines it.)</p> <p><a href="https://i.stack.imgur.com/itDAx.png" rel="nofollow noreferrer">Textbook definition of Riemann sums</a></p> <p><a href="https://i.stack.imgur.com/brlL9.png" rel="nofollow noreferrer">Textbook definition of upper and lower integrals</a></p>
WhatsUp
256,378
<p>For <span class="math-container">$x &gt; 0$</span>:</p> <p><span class="math-container">$[2x]$</span> is the number of positive integers <span class="math-container">$\leq 2x$</span>. Count them by separating the even and odd numbers.</p> <p>For general <span class="math-container">$x$</span>: add a sufficiently large integer to it.</p>
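<p>The hint appears to lead to the identity $\lfloor 2x\rfloor=\lfloor x\rfloor+\lfloor x+1/2\rfloor$: among the positive integers $\le 2x$, the even ones $2m$ satisfy $m\le x$ (there are $\lfloor x\rfloor$ of them) and the odd ones $2m-1$ satisfy $m\le x+1/2$ (there are $\lfloor x+1/2\rfloor$ of them). This reading of the hint is an assumption, not part of the original answer; the identity can be checked with exact rationals:</p>

```python
# Exact-rational check of floor(2x) = floor(x) + floor(x + 1/2); the identity
# in fact holds for all real x, not only x > 0.
from fractions import Fraction
from math import floor

for num in range(-200, 201):
    x = Fraction(num, 7)
    assert floor(2 * x) == floor(x) + floor(x + Fraction(1, 2))
```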
3,629,379
<p>I'm trying to understand the following proof that a space <span class="math-container">$X$</span> is compact if and only if every net has a cluster point. I have a specific confusion with how cluster points relate to closure which is expanded on after the proof. </p> <p>A cluster point is defined as </p> <p><a href="https://i.stack.imgur.com/IEQUo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IEQUo.png" alt="enter image description here"></a></p> <p>Here is the proof of the implication that if every net has a cluster point then <span class="math-container">$X$</span> is compact. </p> <p><a href="https://i.stack.imgur.com/6zdRd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6zdRd.png" alt="enter image description here"></a></p> <blockquote> <p>How does closure relate to cluster points?</p> </blockquote> <p>I do not understand why <span class="math-container">$x$</span> must be in <span class="math-container">$\overline{X \backslash U_\alpha}$</span>. As far as I understand the closure of a space is the space with all its limit points. But a cluster point need not be a limit point? Is there some other reason <span class="math-container">$x$</span> must be in the closure?</p>
Jacob Manaker
330,413
<p>Yes, there are counterexamples. Apart from the trivial group, there is also the two element group <span class="math-container">$C_2=\{e,a\}$</span>, where <span class="math-container">$a^2=e$</span>. In this group, <span class="math-container">$a$</span> is the unique non-identity element and <span class="math-container">$a^2=e$</span> so <span class="math-container">$a$</span> cannot be written as a product of non-identity elements.</p> <p>These are the only counter-examples. Indeed, let <span class="math-container">$G$</span> be group of cardinality at least three and let <span class="math-container">$g\in G$</span>. We write <span class="math-container">$g$</span> as a product of two non-identity elements. If <span class="math-container">$g=e$</span>, then take <span class="math-container">$h\neq e$</span> and we have <span class="math-container">$g=e=h*h^{-1}$</span>. If <span class="math-container">$g\neq e$</span>, then there exists <span class="math-container">$h\in G\setminus\{e,g\}$</span> and <span class="math-container">$g=h*(h^{-1}g)$</span>, with neither <span class="math-container">$h$</span> nor <span class="math-container">$h^{-1}g$</span> being the identity.</p>
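<p>A brute-force confirmation on the cyclic groups $(\mathbb Z_n,+\bmod n)$ (this sketch is not in the original answer): for $n\ge 3$ every element is a sum of two non-zero elements, while in $\mathbb Z_2$ the element $1$ is not.</p>

```python
# In Z_n, check whether every element g is h + k (mod n) with h, k nonzero.
def representable(n):
    return all(
        any((h + k) % n == g for h in range(1, n) for k in range(1, n))
        for g in range(n)
    )

assert not representable(2)                    # the C_2 counterexample
assert all(representable(n) for n in range(3, 20))
```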
2,596,347
<p>Suppose $X$ is compact and Hausdorff and that $f:X \to Y$ is continuous, closed, and surjective. How can I show that $Y$ is Hausdorff?</p>
Henno Brandsma
4,280
<p>Let $y_1 \neq y_2$ be distinct points of $Y$. By surjectivity we have $x_1 ,x_2 \in X$ such that $f(x_1) = y_1 , f(x_2) = y_2$. In particular, $f[\{x_i\}] = y_i$ for both $i$ and as in $X$ singletons are closed (Hausdorff implies $T_1$) and $f$ is a closed map (first time we use it), we have that both $\{y_i\}$ are closed in $Y$.</p> <p>Now, define $F_i = f^{-1}[\{y_i\}] \subseteq X$, $i=1,2$. $F_1$ and $F_2$ are disjoint closed (by continuity of $f$) subsets of $X$, which is compact and Hausdorff and thus $T_4$ or normal. So in $X$ we can find open disjoint subsets $U_1, U_2$ such that $F_1 \subseteq U_1$ and $F_2 \subseteq U_2$.</p> <p>Now define $O_i = Y\setminus f[X\setminus U_i]$ for $i=1,2$.</p> <p>These sets are open as $f$ is a closed map and complements of open sets are closed and complements of closed sets are open.</p> <p>For each $i$ $y_i \in O_i$: suppose $y_i \notin O_i$ then by definition $y_i \in f[X\setminus U_i]$ so $y_i = f(p)$ for some $p \in X \setminus U_i$. But by definition $p \in F_i$ as it maps to $y_i$ and so should be in $U_i$ by $F_i \subseteq U_i$, a contradiction with $p \in X \setminus U_i$. This contradiction shows that $y_i \in O_i$.</p> <p>$O_1 \cap O_2 = \emptyset$. Suppose that $y$ lies in both. Then for some $x \in X$, $f(x) = y$. As $U_1$ and $U_2$ are disjoint, $x$ must lie in $X\setminus U_1$ or in $X \setminus U_2$ (or both); suppose it lies in $X \setminus U_j$ for some $j \in \{1,2\}$. But then $y = f(x) \in f[X\setminus U_j]$ which means $y \notin O_j$, which is our contradiction. </p> <p>So $O_1$ and $O_2$ are the required disjoint open neighbourhoods of $y_1$ resp. $y_2$.</p> <p>Sanity check: we used the closedness of $f$ and its Hausdorffness too, (also to get normality from the combination with compactness), and the continuity of $f$ and its surjectivity. The proof would also have worked if $X$ would have been merely $T_4$ (normal and $T_1$) and $f$ the same (continuous closed surjection).</p>
1,580,736
<p>I know that $\arcsin(x) + \arccos(x) = \frac{\pi}2$,</p> <p>but how to use that to solve the following question?</p> <p>$$2\arcsin(x)-3\arccos(x)=\frac{\pi}6 $$</p>
Narasimham
95,860
<p>Eliminate either arcsin or arccos.</p> <p>If arcsin is eliminated, substituting $\arcsin x=\pi/2-\arccos x$ into the equation gives $\pi-5\arccos x=\pi/6$, so $\arccos x=\pi/6$ and $ x= \cos (\pi/6)= \sqrt3 /2. $</p> <p>(The first one is an identity. The second one is an equation to be solved for $x$. An identity can be used to <em>simplify</em> part of the equation but not <em>solve</em> it using <em>only</em> this identity).</p>
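<p>A quick numeric confirmation (not in the original answer) that $x=\sqrt3/2$ satisfies the equation:</p>

```python
# Check 2*arcsin(x) - 3*arccos(x) = pi/6 at x = sqrt(3)/2.
from math import asin, acos, sqrt, pi, isclose

x = sqrt(3) / 2
assert isclose(asin(x) + acos(x), pi / 2)          # the identity used
assert isclose(2 * asin(x) - 3 * acos(x), pi / 6)  # the original equation
```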
116,220
<p>Say two graphs are not isomorphic but are both strongly regular with the same set of parameters. Are there any parameters (other than the usual such as order, degrees, eigenvalues and multiplicities, etc.) that are determined, e.g., independence number, chromatic number, etc.?</p> <p>Thanks for any help</p>
GeoffDS
13,886
<p>Okay, well, I checked Brouwer's website and combined that with the <a href="https://mathoverflow.net/questions/39312/smallest-non-isomorphic-strongly-regular-graphs">comment</a> to the accepted answer of a question on this site. I checked the complement of the Shrikhande graph versus the complement of the line graph of $K_{4, 4}$ using Sage and found independence numbers of 3 and 4, and chromatic numbers of 6 and 4, respectively. Both are strongly regular with parameters (16, 9, 4, 6). So, that answers my question for some parameters.</p> <p>They have the same girth though.</p>
116,220
<p>Say two graphs are not isomorphic but are both strongly regular with the same set of parameters. Are there any parameters (other than the usual such as order, degrees, eigenvalues and multiplicities, etc.) that are determined, e.g., independence number, chromatic number, etc.?</p> <p>Thanks for any help</p>
Dima Pasechnik
11,100
<p>It's a classic result that a graph parameter called <i>Lovasz theta-function</i> $\theta(\Gamma)$ of a strongly regular graph $\Gamma$ is determined by its parameters. And the significance of $\theta(\Gamma)$ is that it is <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v1i1a1" rel="nofollow">"sandwiched"</a> between the clique number and the chromatic number.</p> <p>In more detail, the parameters of the s.r.g. $\Gamma$ determine a 3-dimensional commutative algebra of symmetric matrices (the adjacency matrix $A(\Gamma)$ of $\Gamma$, the adjacency matrix of its complement, and the identity matrix span this algebra). Anything that can be expressed in terms of this algebra, which is specified by the eigenvalues of $A(\Gamma)$, is a parameter you are asking about, and $\theta(\Gamma)$ is one of them. Another one is the number of spanning trees, as by Matrix Tree Theorem it is determined by the eigenvalues.</p>
4,412,371
<p>Would appreciate some help with proving the following statement:</p> <p>Let <span class="math-container">$a_{n}$</span> be a sequence, and <span class="math-container">$\lim _{n\rightarrow \infty }a_{n}=1 $</span>.</p> <p>Prove that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>. It's easy to see that if <span class="math-container">$a_{n}$</span> is monotonically decreasing, then we can choose the sequence <span class="math-container">$b_{n}=\sum_{k=1}^{n}a_{k}$</span>, then <span class="math-container">$\lim _{n\rightarrow \infty }b_{n}=\infty$</span> and since <span class="math-container">$\forall n\in \mathbb{N} \left( a_{n+1}\leq a_{n}\right)$</span>, we can assume that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>, but what happens when <span class="math-container">$a_{n}$</span> is monotonically increasing?</p>
Falcon
766,785
<p>If <span class="math-container">$a_n \to 1$</span> then there is <span class="math-container">$N \in \mathbb N$</span> such that <span class="math-container">$a_n \ge 1/2$</span> for all <span class="math-container">$n \ge N$</span>. Therefore, for <span class="math-container">$n \ge N$</span>, <span class="math-container">$$b_n = \sum_{k = 1}^n a_k \ge (n - N) \frac{1}{2} + \sum_{k = 1}^N a_k$$</span> and <span class="math-container">$(n - N)1/2$</span> goes to <span class="math-container">$\infty$</span> for <span class="math-container">$n \to \infty.$</span></p>
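To see the bound in action, here is a small numeric illustration (my own sketch; the example sequence $a_k = 1 - 1/(k+1)$ is an arbitrary choice that tends to $1$):

```python
# a_k = 1 - 1/(k+1) tends to 1 and satisfies a_k >= 1/2 for every k >= 1,
# so N = 1 works in the argument above.
def a(k):
    return 1 - 1 / (k + 1)

def b(n):
    # partial sum a_1 + ... + a_n
    return sum(a(k) for k in range(1, n + 1))

N = 1
for n in (10, 100, 1000):
    # lower bound from the answer: b_n >= (n - N)/2 + (a_1 + ... + a_N)
    print(n, b(n) >= (n - N) / 2 + b(N))  # True for each n
```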
290,507
<p>Every metric space is associated with a topological space in a canonical way. According to <a href="http://topospaces.subwiki.org/wiki/Induced_topology_from_metric_is_functorial" rel="nofollow">this</a> source, this amounts to a <a href="http://en.wikipedia.org/wiki/Full_and_faithful_functors" rel="nofollow">full functor</a> from the category of metric spaces with continuous maps to the category of topological spaces with continuous maps.</p> <p>Is it possible that there exists another way of obtaining a topological space from a metric space that is equally deserving of the label "canonical"? Perhaps something that no one has thought of yet? To say it in a different way, is there a sense in which the aforementioned functor is unique? Let's assume that the morphisms between metric spaces are precisely the continuous maps, although an answer that considers other morphisms between metric spaces is welcome.</p> <p>Now obviously this is a soft question, as I have neglected to specify what it means for a map to be deserving of the term "canonical." For this reason, let me motivate the question a little.</p> <p>At some point in an introductory work on analysis, the author will define the meaning of the expression "the topological space associated with (or induced by) a metric space." I'd like to know if this definition is in some sense "the unique correct definition," or whether it is only "one of many possible."</p> <hr> <p>EDIT: Let's put this another way. Obviously there are many functions $\mathsf{Met} \rightarrow \mathsf{Top}$, but most of them are pretty boring. So we can restrict ourselves to functors, where $\mathsf{Met}$ and $\mathsf{Top}$ are viewed as categories (technically we also need to specify what the morphisms of $\mathsf{Met}$ should be). Anyway, as Martin points out, we're still going to be left with lots of "boring" functors. So I guess the question is, how do we get rid of all the "boring" ones? And once we do, is the canonical functor the only one that's left? Obviously I haven't defined "boring" so this is a very soft question.</p> <p>Magma suggests the following refinement of the question: does the canonical functor satisfy a suitable universal mapping property?</p> <hr> <p>Here's another angle. Suppose we run into an alien species, which studies topological spaces (and calls them topotopos, and what we would call "an open set of a topological space" they call "an openopen of a topotopo"). They also study metric spaces (and call them metrometros.) We send that species a message asking them about "the openopens of a metrometro." Will their notion of open set of a metric space coincide with our notion? And if so, why?</p>
Martin Brandenburg
1,650
<p>Let $(X,d)$ be a metric space. Then the canonical topology $\tau_{can}$ is the coarsest topology on $X$ which makes (with the product topology) $d : X \times X \to \mathbb{R}$ continuous.</p> <p>Proof: It follows from the triangle inequality that $d$ is continuous in the canonical topology (in fact short when we endow $X \times X$ with the sum metric). Conversely, if $\tau$ is a topology such that $d$ is continuous with respect to $\tau$, then in particular for every $x_0 \in X$ the map $X \cong X \times \{x_0\} \to X \times X \xrightarrow{d} \mathbb{R}$ is continuous. The preimage of $(-\infty,r)$ is the open ball of radius $r$ and center $x_0$. This shows $\tau_{can} \subseteq \tau$.</p> <p>It follows that the canonical forgetful functor $\mathsf{Met} \to \mathsf{Top}$ is <em>terminal</em> among all functors $F : \mathsf{Met} \to \mathsf{Top}$ over $\mathsf{Set}$ such that $d : F(X) \times F(X) \to \mathbb{R}$ is continuous for all $(X,d) \in \mathsf{Met}$. The initial such functor is given by the discrete topology.</p> <p>So I am pretty sure that aliens will come up with the same topology.</p>
290,507
<p>Every metric space is associated with a topological space in a canonical way. According to <a href="http://topospaces.subwiki.org/wiki/Induced_topology_from_metric_is_functorial" rel="nofollow">this</a> source, this amounts to a <a href="http://en.wikipedia.org/wiki/Full_and_faithful_functors" rel="nofollow">full functor</a> from the category of metric spaces with continuous maps to the category of topological spaces with continuous maps.</p> <p>Is it possible that there exists another way of obtaining a topological space from a metric space that is equally deserving of the label "canonical"? Perhaps something that no one has thought of yet? To say it in a different way, is there a sense in which the aforementioned functor is unique? Let's assume that the morphisms between metric spaces are precisely the continuous maps, although an answer that considers other morphisms between metric spaces is welcome.</p> <p>Now obviously this is a soft question, as I have neglected to specify what it means for a map to be deserving of the term "canonical." For this reason, let me motivate the question a little.</p> <p>At some point in an introductory work on analysis, the author will define the meaning of the expression "the topological space associated with (or induced by) a metric space." I'd like to know if this definition is in some sense "the unique correct definition," or whether it is only "one of many possible."</p> <hr> <p>EDIT: Let's put this another way. Obviously there are many functions $\mathsf{Met} \rightarrow \mathsf{Top}$, but most of them are pretty boring. So we can restrict ourselves to functors, where $\mathsf{Met}$ and $\mathsf{Top}$ are viewed as categories (technically we also need to specify what the morphisms of $\mathsf{Met}$ should be). Anyway, as Martin points out, we're still going to be left with lots of "boring" functors. So I guess the question is, how do we get rid of all the "boring" ones? And once we do, is the canonical functor the only one that's left? Obviously I haven't defined "boring" so this is a very soft question.</p> <p>Magma suggests the following refinement of the question: does the canonical functor satisfy a suitable universal mapping property?</p> <hr> <p>Here's another angle. Suppose we run into an alien species, which studies topological spaces (and calls them topotopos, and what we would call "an open set of a topological space" they call "an openopen of a topotopo"). They also study metric spaces (and call them metrometros.) We send that species a message asking them about "the openopens of a metrometro." Will their notion of open set of a metric space coincide with our notion? And if so, why?</p>
Ittay Weiss
30,953
<p>The open ball topology is unique in the following sense. Firstly, following on ideas of Flagg one can consider a generalization of metric spaces where instead of taking values in <span class="math-container">$[0,\infty ]$</span> the metric takes values in a value quantale. If <span class="math-container">$Met$</span> stands for the category of all such metric spaces, valued in all possible value quantales, and taking morphisms to be the usual <span class="math-container">$\epsilon-\delta $</span> notion of continuous mappings, this category is equivalent to <span class="math-container">$Top$</span>, the usual category of topological spaces. The equivalence is given by the usual open-ball topology, correctly interpreted to use only balls of suitably positive radius, where the radius is measured in the given value quantale. So far this is just a straightforward generalization of the usual scenario; we simply do not insist on using the specific value quantale <span class="math-container">$[0,\infty ]$</span>. </p> <p>The open ball topology functor <span class="math-container">$Met\to Top$</span> above is the unique concrete equivalence between the categories. In other words, if <span class="math-container">$F\colon Met\to Top$</span> is an equivalence of categories and its effect on the underlying sets is trivial (so that the functor assigns to each metric a topology on the same set), then <span class="math-container">$F$</span> is necessarily the open ball topology construction. </p>
4,502,175
<p>I want to create a simple demo of moving an eye (black circle) inside a bigger circle with a black stroke when moving a mouse. I have cursor positions mouseX and mouseY on a canvas and I need to map the value of mouse position into a circle so the eye is moving inside the circle.</p> <p>This should be trivial but I have no idea how to solve this problem.</p> <p>This is a coding problem but I think that I will get the best results from this Q&amp;A. If not I will ask on stack overflow.</p> <p>This is the code that shows the problem.</p> <p><a href="https://editor.p5js.org/jcubic/sketches/E2hVGceN9" rel="nofollow noreferrer">https://editor.p5js.org/jcubic/sketches/E2hVGceN9</a></p> <p>If you use map function in P5JS library (that is linear map from one range to a different range) I get the black circle to move in a square with a side equal to the diameter of the circle. So the black circle is outside.</p> <p>I'm not sure what should I use to calculate the position of the black circle so it's always inside the bigger circle.</p> <p><a href="https://i.stack.imgur.com/u8V9U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u8V9U.png" alt="enter image description here" /></a></p>
peterwhy
89,922
<p>By your use of the <a href="https://p5js.org/reference/#/p5/map" rel="nofollow noreferrer">map</a> function on both <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, the centre of the black circle (with diameter <span class="math-container">$20$</span>) will be limited to a rectangle.</p> <p>If the centre of the outer circle (with diameter <span class="math-container">$100$</span>) is at the origin, your code limits the centre of the black circle to <span class="math-container">$|x|\le 40$</span> and <span class="math-container">$|y|\le 40$</span>: (your code, my comment)</p> <pre><code> let x = map(posX, 0, width, 50 + 10, 150 - 10); let y = map(posY, 0, height, 50 + 10, 150 - 10); // i.e. ..., 100 - 40, 100 + 40); </code></pre> <p>One extreme case, at the top or bottom of the outer circle, the possible <span class="math-container">$x$</span> can only be <span class="math-container">$0$</span> relative to the outer centre. Then one change is to limit the <span class="math-container">$x$</span>-position of the black circle to exactly that, and keep the <span class="math-container">$y$</span> formula unchanged:</p> <pre><code> let x = map(posX, 0, width, 100, 100); // or simply let x = 100; </code></pre> <p>A less-extreme case is to allow the black circle to move in a rectangle inside the outer circle. Note how the centre of the black circle can be at most <span class="math-container">$40$</span> away from the centre of the outer circle. 
If we let <span class="math-container">$maxY$</span> be the maximum <span class="math-container">$y$</span>-distance, then the maximum <span class="math-container">$x$</span> distance would be <span class="math-container">$\sqrt{40^2-maxY^2}$</span>:</p> <pre><code> let maxY = 24; // for example, must be between 0 and 40 let maxX = sqrt(40 * 40 - maxY * maxY); let x = map(posX, 0, width, 100 - maxX, 100 + maxX); let y = map(posY, 0, height, 100 - maxY, 100 + maxY); </code></pre> <p>Setting <span class="math-container">$maxY=40$</span> would be the extreme case above.</p>
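The arithmetic can be checked outside p5 as well. Below is a Python sketch of the same map-to-an-inscribed-rectangle idea (my own illustration; `map_range` mimics p5's `map`, and the 200×200 canvas, outer centre (100, 100), and maximum pupil-centre distance 40 are the values from the answer). With `max_y = 24` the half-width works out to exactly 32, and every mouse position lands inside the outer circle:

```python
from math import sqrt, hypot

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    # same linear interpolation as p5's map()
    return out_lo + (out_hi - out_lo) * (value - in_lo) / (in_hi - in_lo)

WIDTH, HEIGHT = 200, 200       # assumed canvas size
CX, CY, MAX_R = 100, 100, 40   # outer centre and max pupil-centre distance

max_y = 24
max_x = sqrt(MAX_R**2 - max_y**2)  # = 32 when max_y = 24

def eye_position(pos_x, pos_y):
    x = map_range(pos_x, 0, WIDTH, CX - max_x, CX + max_x)
    y = map_range(pos_y, 0, HEIGHT, CY - max_y, CY + max_y)
    return x, y

# every mouse position on a coarse grid maps inside the outer circle
worst = max(hypot(x - CX, y - CY)
            for px in range(0, WIDTH + 1, 10)
            for py in range(0, HEIGHT + 1, 10)
            for x, y in [eye_position(px, py)])
print(worst <= MAX_R + 1e-9)  # True (the rectangle corners touch the circle)
```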
1,241,586
<p>I want to prove that for all positive integers $n$, $2^1+2^2+2^3+...+2^n=2^{n+1}-2$. By mathematical induction:</p> <p>1) it holds for $n=1$, since $2^1=2^2-2=4-2=2$</p> <p>2) if $2+2^2+2^3+...+2^n=2^{n+1}-2$, then prove that $2+2^2+2^3+...+2^n+2^{n+1}=2^{n+2}-2$ holds.</p> <p>I have no idea how to proceed with step 2), could you give me a hint?</p>
Michael Hoppe
93,935
<p>Define $S=2^1+\cdots+2^n$ and realize that $2S=2^2+\cdots+2^{n+1}=S+2^{n+1}-2$.</p>
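The telescoping identity is easy to confirm numerically; here is a quick Python check (not part of the original answer):

```python
# S = 2^1 + ... + 2^n satisfies 2S = S + 2^(n+1) - 2, hence S = 2^(n+1) - 2
for n in range(1, 20):
    S = sum(2**k for k in range(1, n + 1))
    assert 2 * S == S + 2**(n + 1) - 2   # the identity used in the answer
    assert S == 2**(n + 1) - 2           # the closed form to be proved
print("verified for n = 1..19")
```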
81,016
<p>I'm trying to learn about derived categories of algebraic stacks. To be honest, as of now, I don't need anything fancy nor deep. In my setup I have a scheme $X$ (well, a smooth and projective variety over $\mathbb{C}$) and a finite group $G$ acting on it. I would like to understand simple things like: what does a coherent sheaf on $[X/G]$ look like? what do the (derived) pushforward functors between the various spaces do? and the pullbacks?</p> <p>Any pointers would be greatly appreciated!</p>
Yosemite Sam
16,857
<p>In case anyone is interested, I've been finding this thesis useful <a href="http://tobias-lib.uni-tuebingen.de/volltexte/2007/2941/pdf/diss.pdf" rel="nofollow">http://tobias-lib.uni-tuebingen.de/volltexte/2007/2941/pdf/diss.pdf</a></p>
2,437,466
<p>A point, $p$, is defined as a cluster point of a set $S$ if $\forall \epsilon &gt; 0,$ there exists an open ball of radius $\epsilon$ centered at $p$ that contains infinitely many points in $S$.</p> <p>We want to prove that a subset $S$ of a metric space $(E,d)$ is closed iff $S$ contains all of its cluster points.</p> <p>I believe I have the forward direction, which seems to be pretty straightforward, but I am having difficulty with the $(\Leftarrow)$ direction. My idea was to somehow use the fact that $S$ contains all of its cluster points to show that every infinite subset of $S$ contains a cluster point. Then we can say $S$ is sequentially compact, therefore it is compact, and hence closed.</p> <p>I'm just mostly unsure of how to show the first part. Thanks for any comments.</p>
Jaroslaw Matlak
389,592
<p>As your first steps of mathematical induction are right, I'll just perform a proof of the inductive thesis for $n=k+1$.</p> <p>First notice that for $k\geq 1$ $$\frac{4(k+1)}{k+2} &lt; \frac{(2k+2)(2k+1)}{(k+1)^2}$$ proof: $$\frac{(2k+2)(2k+1)}{(k+1)^2}-\frac{4(k+1)}{k+2}=\frac{(2k+2)(2k+1)(k+2)-4(k+1)^3}{(k+1)^2(k+2)}$$ and expanding the numerator gives $$(2k+2)(2k+1)(k+2)-4(k+1)^3=(4k^3+14k^2+14k+4)-(4k^3+12k^2+12k+4)=2k(k+1)&gt;0,$$ hence $$\frac{4(k+1)}{k+2}&lt;\frac{(2k+2)(2k+1)}{(k+1)^2}$$</p> <p>Thus, since $\frac{4(k+1)}{k+2}$ and $\frac{4^k}{k+1}$ are both positive, we have: $$\frac{4^{k+1}}{k+2}=\frac{4(k+1)}{(k+2)}\frac{4^k}{k+1} &lt; \frac{4(k+1)}{(k+2)}\frac{(2k)!}{(k!)^2}&lt;\frac{(2k+2)(2k+1)}{(k+1)^2}\frac{(2k)!}{(k!)^2} = \frac{(2k+2)!}{((k+1)!)^2}$$</p>
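Both the key step inequality and the full statement that $\frac{4^n}{n+1}$ stays below $\binom{2n}{n}$ for $n\ge 2$ (with equality at $n=1$) can be verified with exact integer arithmetic; note the cross-multiplied difference comes out as $2k(k+1)$. A Python sketch I added:

```python
from fractions import Fraction
from math import comb

# cross-multiplied difference in the key step:
# (2k+2)(2k+1)(k+2) - 4(k+1)^3 = 2k(k+1) > 0 for k >= 1
for k in range(1, 50):
    assert (2*k + 2) * (2*k + 1) * (k + 2) - 4 * (k + 1)**3 == 2 * k * (k + 1)

# the statement proved by induction; comb(2n, n) == (2n)!/(n!)^2
for n in range(2, 50):
    assert Fraction(4**n, n + 1) < comb(2 * n, n)
print("checked")
```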
128,009
<p>Hi, Is there any known sequence such that the sum of a combination of one subsequence never equals another subsequence sum. The subsequences should have elements only from the parent sequence.</p> <p>Thanks Sundi</p>
Pedro Lauridsen Ribeiro
11,211
<p>I have never seen such a characterization (and I'm inclined to agree with André that there may not be one at the same level of simplicity as in the case of invariant differential operators), but my best bet towards one would be to employ a Beals-Cordes-type characterization of pseudodifferential operators: a bounded operator $T$ in the Hilbert space $L^2(\mathbb{R}^n)$ is a pseudodifferential operator of order zero if and only if its commutators with any element of the Lie algebra generated by the coordinate vector fields and (multiplication by) the linear monomials in $\mathbb{R}^n$ (seen as the Abelian Lie group of $n$-dimensional translations) are bounded operators in $L^2(\mathbb{R}^n)$ as well. Obviously, the invariant operators are those that, in addition, have zero commutator with all elements of the Lie algebra of $\mathbb{R}^n$. See for instance H. O. Cordes, "The Technique of Pseudodifferential Operators" (Cambridge, 1995), Chapter 8. </p> <p>An appropriate replacement of the above Lie algebra in the case of a (reductive) Lie group would involve its Lie algebra and the analog of multiplication by linear monomials, which probably involves the Fourier-Helgason transform (see, for instance, S. Helgason, "Geometric Analysis on Symmetric Spaces" (AMS, 1994), Chapter III). </p>
128,009
<p>Hi, Is there any known sequence such that the sum of a combination of one subsequence never equals another subsequence sum. The subsequences should have elements only from the parent sequence.</p> <p>Thanks Sundi</p>
Dick Palais
7,311
<p>It seems to me that an interesting "first step" or sub-problem---and one that should be fairly straightforward---is to characterize the space of invariant symbols of pseudodifferential operators on a Lie group (or more generally on a homogeneous space of a Lie group).</p>
1,504,888
<p>Let $R$ be an integral domain. The following is a well-known fact:</p> <blockquote> <p>Let $a,b \in R$. Then $(a)=(b)$ if and only if there exists a unit $u \in R$ such that $a=ub$.</p> </blockquote> <p>I would like to generalize this result to ideals which are generated by two elements. So the question is:</p> <blockquote> <p>Let $a,a',b,b' \in R$. Is there some property of these four elements which is equivalent to having $(a,b)=(a',b')$?</p> </blockquote> <p>I know this question is a bit vague; I hope somebody could give me a reference or hint.</p> <p>EDIT: As Mathmo123's comment suggests, maybe the condition could be</p> <blockquote> <p>There exists $U \in GL_2(R)$ such that $$\left[ \begin{matrix} a \\ b\end{matrix} \right] = U \left[ \begin{matrix} a' \\ b'\end{matrix} \right]$$</p> </blockquote> <p>Clearly, this is a sufficient condition. But is it necessary as well?</p>
Olivier Bégassat
11,258
<p>Concerning domains, there is a nice partial result covering the case where the ideal is principal.</p> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$R$</span> be a domain, and let <span class="math-container">$a,b,x\in R$</span> be elements of <span class="math-container">$R$</span>. The following are equivalent :</p> <ol> <li><span class="math-container">$\langle a,b\rangle = \langle x\rangle$</span></li> <li>There exists <span class="math-container">$P=\begin{pmatrix}u &amp; v \\ -b' &amp; a'\end{pmatrix}\in\mathrm{GL}_2(R)$</span> with <span class="math-container">$\det(P)=1$</span>, such that <span class="math-container">$\begin{pmatrix}x\\0\end{pmatrix}=P\begin{pmatrix}a\\b\end{pmatrix}$</span></li> </ol> </blockquote> <p>This is (part of) Theorem 1.1, chapter IV in <a href="https://www.amazon.fr/Modules-sur-anneaux-commutatifs-exercices/dp/2916352333" rel="nofollow noreferrer">Modules sur les anneaux commutatifs</a>, G.-M. Diaz-Toca, H. Lombardi, C. Quitté.</p> <p><strong>Proof.</strong> The case where <span class="math-container">$x=0$</span> is trivial, so we assume <span class="math-container">$x\neq 0$</span>.</p> <p>(1)<span class="math-container">$\implies$</span>(2). By hypothesis there exist <span class="math-container">$u,v\in R$</span> and <span class="math-container">$a',b'\in R$</span> such that <span class="math-container">$ua+vb=x$</span>, <span class="math-container">$xa'=a$</span> and <span class="math-container">$xb'=b$</span>. Then one also has <span class="math-container">$0=ab-ab=xa'b-xb'a=x(-b'a+a'b)$</span>. 
Since <span class="math-container">$R$</span> is a domain and <span class="math-container">$x\neq 0$</span>, this implies <span class="math-container">$-b'a+a'b=0$</span>, thus <span class="math-container">$$\begin{pmatrix}x\\0\end{pmatrix}=\begin{pmatrix}u &amp; v \\ -b' &amp; a'\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}$$</span> Furthermore <span class="math-container">$x=ua+vb=x(ua'+vb')$</span> and again, since <span class="math-container">$R$</span> is a domain and <span class="math-container">$x\neq 0$</span> we have <span class="math-container">$\det(P)=ua'+vb'=1$</span>.</p> <p>The converse implication (2)<span class="math-container">$\implies$</span>(1) was already noted in previous posts. <span class="math-container">$\square$</span></p>
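A concrete instance of the lemma over $R=\mathbb{Z}$, where $\langle a,b\rangle = \langle \gcd(a,b)\rangle$ (a Python sketch I added; the Bézout coefficients come from the extended Euclidean algorithm):

```python
def ext_gcd(a, b):
    # return (g, u, v) with u*a + v*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

# Build the matrix P from the proof: first row (u, v) from Bezout,
# second row (-b', a') with a = x a', b = x b'.
a, b = 4, 6
x, u, v = ext_gcd(a, b)          # x = 2, with u*a + v*b = x
ap, bp = a // x, b // x          # a' and b'
P = [[u, v], [-bp, ap]]

det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
image = (P[0][0] * a + P[0][1] * b, P[1][0] * a + P[1][1] * b)
print(det, image)  # 1 (2, 0)
```

So $P$ has determinant $1$ and sends $(a,b)^T=(4,6)^T$ to $(x,0)^T=(2,0)^T$, exactly as in the lemma.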
81,261
<p>We have $Z_1,Z_2,Z_3$ are independent standard normal random variables. Find</p> <p>a) $\mathbb P(Z_1&lt;Z_2+Z_3)$</p> <p>b) $\mathrm{Var}(Z_1Z_2^2)$</p> <p>c) $\mathbb P(Z_1/Z_2&gt;1)$</p> <p>d) $\mathbb P(Z_1^2&gt;Z_2^2+Z_3^2)$</p>
Did
6,179
<p>Re d), $(Z_2,Z_3)$ is a random point of the complex plane whose modulus and argument are independent, the argument being uniformly distributed and the modulus $R=\sqrt{Z_2^2+Z_3^2}$ having density $$ u(r)=r\mathrm e^{-r^2/2}\cdot[r\gt0]. $$ <strong>First application:</strong> For every $r\gt0$, $$ \mathrm P(R^2\lt r^2)=\int\limits_0^ru(s)\mathrm ds=1-\mathrm e^{-r^2/2}, $$ hence one is looking for $$ A=\mathrm P(R^2\lt Z_1^2)=\mathrm E(1-\mathrm e^{-Z_1^2/2})=1-B\quad\mbox{with}\quad B=\mathrm E(\mathrm e^{-Z_1^2/2}). $$ <strong>Second application:</strong> $$ C=\mathrm E(\mathrm e^{-R^2/2})=\int\limits_0^{+\infty}\mathrm e^{-r^2/2}\,u(r)\mathrm dr=\left.-(1/2)\mathrm e^{-r^2}\right|_0^{+\infty}=1/2. $$ On the other hand, $Z_2$ and $Z_3$ are i.i.d. and distributed like $Z_1$ hence $$ C=\mathrm E(\mathrm e^{-Z_2^2/2}\mathrm e^{-Z_3^2/2})=\mathrm E(\mathrm e^{-Z_2^2/2})\cdot\mathrm E(\mathrm e^{-Z_3^2/2})=B^2. $$ <strong>Conclusion:</strong> $A=1-1/\sqrt2$.</p>
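A Monte Carlo sanity check of the conclusion $A = 1 - 1/\sqrt{2} \approx 0.293$ (my own Python sketch, not part of the original answer; the seed and sample size are arbitrary):

```python
import random
from math import sqrt

random.seed(0)
n = 200_000
# count samples with Z1^2 > Z2^2 + Z3^2 for independent standard normals
hits = sum(
    random.gauss(0, 1)**2 > random.gauss(0, 1)**2 + random.gauss(0, 1)**2
    for _ in range(n)
)
estimate = hits / n
print(abs(estimate - (1 - 1 / sqrt(2))) < 0.01)  # True (std. error ~ 0.001)
```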
392,297
<p>For which sets of primes <span class="math-container">$P$</span> is there a finite type <span class="math-container">$\mathbb{Z}$</span>-algebra <span class="math-container">$A$</span> such that<span class="math-container">$$p\in P\iff\mathrm{Hom}(A, \mathbb{F}_p)=\emptyset?$$</span>Do all the finite <span class="math-container">$P$</span> arise this way?</p> <p><span class="math-container">$A=\mathbb{Z}/n$</span> works for the cofinite <span class="math-container">$P$</span>.</p>
R.P.
17,907
<p>Here's a sketch of an answer. I think the answer is that you can get three types of sets: (i) finite sets, (ii) co-finite sets, and (iii) sets of the form <span class="math-container">$$ S_f = \{ p : f(x) ~ \textrm{has a root in $\mathbb{F}_p$}\} - S_0 $$</span> for some polynomial <span class="math-container">$f \in \mathbb{Z}[X]$</span>, and a finite set of primes <span class="math-container">$S_0$</span>. By the Chebotarev density theorem, the sets <span class="math-container">$S_f$</span> have a density, which is a non-zero rational number.</p> <p>What this really means is that you can get all sets <span class="math-container">$P$</span> already by considering rings of the form <span class="math-container">$\mathbb{Z}[1/N][X]/(f)$</span>, where <span class="math-container">$N \geq 1$</span> is an integer and <span class="math-container">$f \in \mathbb{Z}[X]$</span> is a polynomial, which maybe shouldn't be so surprising.</p> <p>I'm rusty on the technical side of these things, so I might be wrong, but I don't think so.</p> <p>Case (i): if the characteristic of <span class="math-container">$A$</span> is positive, then <span class="math-container">$\operatorname{Hom}(A,\mathbb{F}_p)=\emptyset$</span> for all but finitely many <span class="math-container">$p$</span>.</p> <p>So we may assume that <span class="math-container">$\operatorname{char}(A)=0$</span>. Now let <span class="math-container">$R \subset A$</span> be the subring consisting of elements that are algebraic over <span class="math-container">$\mathbb{Z}$</span>.</p> <p>Case (ii): <span class="math-container">$R$</span> is a subring of <span class="math-container">$\mathbb{Q}$</span>. Then I claim that <span class="math-container">$\operatorname{Hom}(A,\mathbb{F}_p)=\emptyset$</span> for only finitely many <span class="math-container">$p$</span>. 
Indeed, let <span class="math-container">$A'$</span> be a quotient of <span class="math-container">$A$</span> corresponding to an irreducible component of <span class="math-container">$\operatorname{Spec}(A)$</span>, then <span class="math-container">$A' \otimes_{\mathbb{Z}} \mathbb{Q}$</span> is the affine coordinate ring of a variety <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{Q}$</span>, which I think is guaranteed to be geometrically irreducible by the fact that <span class="math-container">$R$</span> does not contain any irrational algebraic elements. But then, by the Lang-Weil bounds, this variety <span class="math-container">$V$</span> will have points over <span class="math-container">$\mathbb{F}_p$</span> for sufficiently large <span class="math-container">$p$</span>. Hence certainly <span class="math-container">$\operatorname{Hom}(A,\mathbb{F}_p) \neq \emptyset$</span> for all but finitely many <span class="math-container">$p$</span>.</p> <p>Case (iii): <span class="math-container">$R$</span> contains non-rational algebraic elements. This means that every <span class="math-container">$p$</span> for which <span class="math-container">$\operatorname{Hom}(A,\mathbb{F}_p) \neq \emptyset$</span> must be a prime which splits in <span class="math-container">$R$</span> into at least one prime with residue class degree <span class="math-container">$1$</span>. By Chebotarev, the set <span class="math-container">$S$</span> of these primes has a density <span class="math-container">$\delta$</span> with <span class="math-container">$0 &lt; \delta &lt; 1$</span>. 
Since the irreducible components of <span class="math-container">$A_{R} = \operatorname{Spec}(A) \otimes R$</span> are (again to the best of my knowledge) geometrically irreducible as <span class="math-container">$R$</span>-schemes, we can repeat the argument from case (ii) to show that, for <span class="math-container">$p$</span> in <span class="math-container">$S$</span> sufficiently large, these irreducible components of <span class="math-container">$A_{R}$</span> have points over <span class="math-container">$\mathbb{F}_p$</span>, which in turn implies that for all but finitely many <span class="math-container">$p$</span> in <span class="math-container">$S$</span> we have <span class="math-container">$\operatorname{Hom}(A,\mathbb{F}_p) \neq \emptyset$</span>.</p>
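For a concrete instance of case (iii): with $A=\mathbb{Z}[X]/(X^2+1)$, a homomorphism $A\to\mathbb{F}_p$ is the same as a root of $X^2+1$ in $\mathbb{F}_p$, which exists precisely when $p=2$ or $p\equiv 1 \pmod 4$, a set of density $1/2$. A small Python check of this (my illustration, not part of the original answer):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def has_root(p):
    # does X^2 + 1 have a root in F_p?
    return any((x * x + 1) % p == 0 for x in range(p))

for p in (q for q in range(2, 200) if is_prime(q)):
    # Hom(Z[X]/(X^2+1), F_p) is nonempty iff X^2+1 has a root mod p,
    # which happens iff p = 2 or p ≡ 1 (mod 4)
    assert has_root(p) == (p == 2 or p % 4 == 1)
print("checked primes below 200")
```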
1,628,616
<p>I need to prove that the set of functions of the form $\{g\colon [-1,1]\to \mathbb{R}\, |$ there exists a constant $c$ such that $g(x) = c\cdot x\}$ is complete. How shall I proceed? I am taking a Cauchy sequence and want to prove that it converges to a function in this set, but how can I guarantee the limit will have the same form? (that is linear)</p>
HK Lee
37,116
<p>$$ B= \{ (x_1,\cdots,x_{n+1} )\in {\bf R}^{n+1} | \sum_i x_i^2 =1,\ x_i\geq 0 \} $$</p> <p>So we have $$ f: B\rightarrow B',\ f(x_1,\cdots,x_{n+1} ) = (x_1,\cdots, x_n,0) $$</p> <p>Fix $p=(\epsilon,\cdots,\epsilon,0)\in B'$. Define $$ C:= \{(x_1,\cdots, x_n,0) \in B'| \sum_i x_i^2=1 \} $$</p> <p>Consider $$ c(t)= p + (v-p)t ,\ v\in C $$</p> <p>Then $$ f(t):= |c(t)|^2 = \sum_i ( \epsilon +(x_i - \epsilon )t ) ^2 $$</p> <p>So $$ f' = \sum_i 2( \epsilon +(x_i - \epsilon )t ) (x_i - \epsilon ) = \sum_i 2\epsilon (x_i - \epsilon ) + 2 (x_i - \epsilon )^2t $$</p> <p>$$ f'(t_0)=0,\ t_0=-\frac{\sum_i \epsilon (x_i-\epsilon )}{ \sum_i (x_i-\epsilon)^2 } = \frac{-v\cdot p + |p|^2}{|v-p|^2} $$</p> <p>Since $\angle (v,p) &lt; \frac{\pi}{2}$, we have $-v\cdot (1,\cdots, 1,0) &lt; 0 $. Since $\epsilon$ is small, $t_0&lt;0$. Hence $f(t)$ is increasing on $[0,1]$, so $c(t)$ is in $B'$. Note that from this we have a deformation retract from $B'$ to $p$. Hence we have a homeomorphism from the unit ball to $B'$.</p>
1,507,859
<blockquote> <p>$$\int \tan \sqrt {x} \,dx$$</p> </blockquote> <p>I was trying to solve this. But it took very long time and three pages. Could someone please tell me how to solve this quickly. </p>
Zhanxiong
192,408
<p>Since $a &gt; 1$, we can write $a = 1 + \delta$ for some $\delta &gt; 0$. By the binomial formula, for each $n$, $$a^n = (1 + \delta)^n = 1 + n\delta + \cdots + \delta^n &gt; 1 + n\delta.$$ Since $\{n\delta\}$ is unbounded (for fixed $\delta$) by the Archimedean property of real numbers, the result follows. </p>
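The bound $a^n > 1 + n\delta$ is Bernoulli's inequality; a quick numeric check in Python (my sketch, with an arbitrary choice of $\delta$):

```python
delta = 0.01
a = 1 + delta
# Bernoulli: (1 + delta)^n >= 1 + n*delta, strictly for n >= 2
for m in (2, 10, 100, 1000):
    assert a**m > 1 + m * delta

# so to push a^n past any bound M it suffices to take n > (M - 1)/delta
M = 50.0
n = int((M - 1) / delta) + 2
print(a**n > M)  # True
```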
1,507,859
<blockquote> <p>$$\int \tan \sqrt {x} \,dx$$</p> </blockquote> <p>I was trying to solve this. But it took very long time and three pages. Could someone please tell me how to solve this quickly. </p>
Paramanand Singh
72,031
<p>This is another approach which uses proof by contradiction. Suppose that the sequence $s_{n} = a^{n}$ is bounded above. Since $a &gt; 1$ we can see that $$s_{n + 1} = a^{n + 1} = a\cdot a^{n} = as_{n} &gt; s_{n}$$ so that the sequence $s_{n}$ is strictly increasing. Since it is also bounded above, it follows that the limit $L = \lim_{n \to \infty}s_{n}$ exists. Moreover, because the sequence is strictly increasing, $L$ must be greater than any term in the sequence. In particular $L &gt; a &gt; 1$. Since $s_{n + 1} = as_{n}$, it follows by taking limits that $L = aL$. However this is not possible if both $a, L$ are greater than $1$. This contradiction finishes our job and $s_{n} = a^{n}$ is unbounded.</p>
1,391,014
<p>Let $f,g:[0,1] \rightarrow [0,1]$ be continuous functions such that $f\circ g =g\circ f$. Prove that there exists $x \in [0,1]$ such that $f(x)=g(x)$</p>
PhoemueX
151,552
<p>This answer was inspired by the one of @PaulSinclair.</p> <p>Let us assume that the claim is false, so that $f\left(x\right)\neq g\left(x\right)$ for all $x\in I:=\left[0,1\right]$. By swapping $f,g$, we can assume $f\left(0\right)&lt;g\left(0\right)$. If we had $f\left(x\right)\geq g\left(x\right)$ for some $x\in I$, the intermediate value theorem (applied to $f-g$) would yield the claim, in contradiction to our assumption. Hence, $f\left(x\right)&lt;g\left(x\right)$ for all $x\in I$.</p> <p>By continuity, there is $\varepsilon&gt;0$ with $\varepsilon+f\left(x\right)\leq g\left(x\right)$ for all $x\in I$. In particular $g\left(x\right)\geq\varepsilon$ for all $x\in I$.</p> <p>By induction, we show $g^{n}\left(I\right)\subset\left[n\varepsilon,\infty\right)$ for all $n\in\mathbb{N}$. For $n=1$, we just showed that this holds.</p> <p>Now, assume $g^{n}\left(I\right)\subset\left[n\varepsilon,\infty\right)$ and let $x\in I$ be arbitrary. We have \begin{eqnarray*} g^{n+1}\left(x\right) &amp; = &amp; g\left(g^{n}\left(x\right)\right)\\ &amp; \geq &amp; \varepsilon+f\left(g^{n}\left(x\right)\right)\\ &amp; \overset{f\circ g=g\circ f}{=} &amp; \varepsilon+g^{n}\left(f\left(x\right)\right)\\ &amp; \overset{\text{induction}}{\geq} &amp; \varepsilon+n\varepsilon\\ &amp; = &amp; \left(n+1\right)\varepsilon \end{eqnarray*} and thus $g^{n+1}(I) \subset [(n+1)\varepsilon, \infty)$.</p> <p>For $n$ large enough, this is a contradiction to $g\left(I\right)\subset I$. Hence, the claim must hold.</p>
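To see the statement in action with a concrete commuting pair (my illustration, not from the original answer): $f(x)=x^2$ and $g(x)=x^4$ commute, since both compositions equal $x^8$, and indeed they agree at $x=0$ and $x=1$.

```python
f = lambda x: x**2
g = lambda x: x**4

# the two maps commute on [0, 1] ...
pts = [i / 100 for i in range(101)]
assert all(abs(f(g(x)) - g(f(x))) < 1e-15 for x in pts)

# ... so by the result above they must agree somewhere;
# scan for the smallest gap |f - g| on the grid
x_star = min(pts, key=lambda x: abs(f(x) - g(x)))
print(x_star, f(x_star) == g(x_star))  # 0.0 True
```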
73,767
<p>I have the following expressions $g^{ab+cd+...}$ or in the full form <code>Power[g,Plus[Times[a,b],Times[c,d],...]]</code></p> <p>How to convert this expression into</p> <p><code>Power[g,Times[a,b]]Power[g,Times[c,d]]...</code>?</p> <p>Here $...$ could be either nothing or one more, but of course it would be nice to get the most general expression for any number of terms.</p>
ybeltukov
4,678
<p>These forms are mathematically equivalent. So if you want to <em>display</em> the exponents separately you can use</p> <pre><code>MakeBoxes[Power[z_, Plus@x__], StandardForm] := RowBox[SuperscriptBox[ToBoxes@z, ToBoxes@#] &amp; /@ {x}] g^(a b + c d) </code></pre> <p><img src="https://i.stack.imgur.com/m8Rml.png" alt="enter image description here"></p> <p>You can evaluate <code>Clear[MakeBoxes]</code> to get back the default formatting.</p> <p>If you want to keep them separate for further <em>pattern matching</em> you can freely use a custom function</p> <pre><code>pow[z_, x__Plus] := Times @@ (pow[z, #] &amp; /@ x) g^(a b + c d) /. Power -&gt; pow </code></pre> <p><img src="https://i.stack.imgur.com/WwUKe.png" alt="enter image description here"></p> <pre><code>% /. pow -&gt; Power </code></pre> <p><img src="https://i.stack.imgur.com/grDY9.png" alt="enter image description here"></p>
2,636,929
<p>I know that it can be obtained by simply differentiating the equation and finding the roots of the derivative, but it is a lengthy and tricky process. I am looking for a faster and more straightforward way.</p> <p>A more effective and quick way to find the answer via simple differentiation will also be appreciated.</p>
J.G.
56,861
<p>Let $y=\arcsin x$, so you're trying to extremise $y^4+(\frac{\pi}{2}-y)^4$. The first two derivatives with respect to $y$ are proportional to $y^3-(\frac{\pi}{2}-y)^3,\,y^2+(\frac{\pi}{2}-y)^2$. The former vanishes when $y=\frac{\pi}{4}$ i.e. $x=\frac{1}{\sqrt{2}}$, while the latter is non-negative so we have obtained a minimum of the original function. In particular, $\arcsin^4\frac{1}{\sqrt{2}}+\arccos^4\frac{1}{\sqrt{2}}=2\left(\frac{\pi}{4}\right)^4=\frac{\pi^4}{128}$. To find the maximum, go to the extreme values of $x$ in the original function, namely $\pm 1$. <a href="https://www.wolframalpha.com/input/?i=plot%20(arcsin(x))%5E4%2B(arccos(x))%5E4%20from%20-1%20to%201" rel="nofollow noreferrer">You'll find</a> the maximum is at $x=-1$, giving $(-\frac{\pi}{2})^4+\pi^4=\frac{17\pi^4}{16}$.</p>
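<p>For what it's worth, both extremes are easy to confirm numerically; a quick Python check (values rounded):</p>

```python
import math

def F(x):
    return math.asin(x) ** 4 + math.acos(x) ** 4

x_min = 1.0 / math.sqrt(2.0)   # interior critical point, y = arcsin(x) = π/4
print(round(F(x_min), 6))      # 2·(π/4)^4 = π^4/128 ≈ 0.761009
print(round(F(-1.0), 4))       # (-π/2)^4 + π^4 = 17π^4/16 ≈ 103.4972
```
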
3,113,243
<p>Let <span class="math-container">${\mathcal D} = C^\infty_c$</span>, the space of <em>test functions</em> (smooth functions <span class="math-container">$\Bbb R \to \Bbb C$</span> with compact support). Let <span class="math-container">$\mathcal D^*$</span> be the space of <em>distributions</em> (continuous linear functionals <span class="math-container">${\mathcal D} \to {\Bbb C}$</span>). </p> <p>I am trying to fill in details of a proof in Richards &amp; Youn's <em>Theory of Distributions: A Nontechnical Introduction</em> that</p> <p><span class="math-container">$\qquad$</span> If <span class="math-container">$S \in \mathcal D^*$</span> and <span class="math-container">$\varphi, \psi \in {\mathcal D}$</span>, then <span class="math-container">$S * (\varphi * \psi) = (S * \varphi) * \psi$</span>.</p> <p>My sticking point is this: I need to show that <span class="math-container">$\rho_n \to \rho$</span> in <span class="math-container">$\mathcal D$</span>, where </p> <p><span class="math-container">$\qquad \rho(x) = \int_{-\infty}^{+\infty} \varphi(v) \psi(x-v) dv \quad$</span> (This defines the convolution <span class="math-container">$\rho := \varphi * \psi$</span>.) <span class="math-container">$\qquad \rho_n(x) = \sum_{m=-\infty}^{+\infty} \varphi (\frac m n)\psi(x - \frac m n) {\frac 1 n}$</span>.</p> <p>By "convergence in <span class="math-container">$\mathcal D$</span>" is meant that, for each <span class="math-container">$k = 0, 1, 2, \dots$</span>, the sequence of derivatives <span class="math-container">$\rho_n^{(k)}$</span> converges <em>uniformly</em> to the derivative <span class="math-container">$\rho^{(k)}$</span>.</p> <p>Because <span class="math-container">$\text{support }\varphi \subseteq [-M, M]$</span> for some positive integer <span class="math-container">$M$</span>, the ostensibly infinite limits can be replaced by finite ones. 
Observe that <span class="math-container">$\rho_n$</span> is a Riemann sum for <span class="math-container">$\rho$</span>, where <span class="math-container">$\rho_n$</span> uses the partition <span class="math-container">${\mathscr P}_n = \{ \frac m n \}$</span> consisting of equally spaced points separated by distance <span class="math-container">$\frac 1 n$</span>. As <span class="math-container">$\rho$</span> necessarily exists and as <span class="math-container">$\text{mesh } {\mathscr P}_n = {\frac 1 n} \to 0$</span>, one has <em>pointwise</em> convergence <span class="math-container">$\rho_n(x) \to \rho(x)$</span> for each <span class="math-container">$x \in \Bbb R$</span>.</p> <p>Any test function is uniformly continuous, and <span class="math-container">$\varphi, \psi, \rho,$</span> and <span class="math-container">$\rho_n$</span> are all test functions. Richards &amp; Youn state: </p> <p><strong><span class="math-container">$\qquad \rho_n \to \rho$</span> uniformly because of that uniform continuity.</strong><br> <span class="math-container">$\qquad$</span> <strong>Can anyone provide details as to why that should be so?</strong></p> <p>Some facts that might be helpful: <span class="math-container">$\text{support }\psi \subseteq [-N, N]$</span> for some positive integer <span class="math-container">$N$</span>. Therefore <span class="math-container">$\text{support }\rho, \text{ support }\rho_n \subseteq [-M-N, M+N]$</span>. </p> <p>(I don't need to prove uniform convergence of the <span class="math-container">$k$</span>th derivatives, <span class="math-container">$k&gt;0$</span>. Such a proof would be virtually identical to that for uniform convergence of <span class="math-container">$\rho_n$</span> to <span class="math-container">$\rho$</span>.)</p>
Aphelli
556,825
<p><span class="math-container">$$|\rho(x)-\rho_n(x)|=\left|\sum_m{\int_{m/n}^{(m+1)/n}{(\varphi(v)\psi(x-v)-\varphi(m/n)\psi(x-m/n))\,dv}}\right| \leq \sum_m{\int_{m/n}^{(m+1)/n}{|\varphi(v)\psi(x-v)-\varphi(v)\psi(x-m/n)+\varphi(v)\psi(x-m/n)-\varphi(m/n)\psi(x-m/n)|\, dv}}$$</span></p> <p>Thus </p> <p><span class="math-container">$$|\rho(x)-\rho_n(x)| \leq \sum_m{\|\psi\|_{\infty}\int_{m/n}^{(m+1)/n}{|\varphi(v)-\varphi(m/n)|\,dv}+\|\psi'\|_{\infty}\int_{m/n}^{(m+1)/n}{|\varphi(v)||v-m/n|\,dv}}$$</span>. </p> <p>Hence </p> <p><span class="math-container">$$|\rho(x)-\rho_n(x)| \leq \|\psi\|_{\infty}\|\varphi'\|_{\infty}\sum_{|m| \leq Mn}{\int_{m/n}^{(m+1)/n}{|v-m/n|\,dv}}+1/n\|\psi'\|_{\infty}\|\varphi\|_{L^1} \leq C/n$$</span></p> <p>where <span class="math-container">$C$</span> is a constant depending on the first-order seminorms of <span class="math-container">$\varphi,\psi$</span> and on the support of <span class="math-container">$\varphi$</span> (assumed to be in <span class="math-container">$(-(M-1),M-1)$</span>).</p>
3,113,243
<p>Let <span class="math-container">${\mathcal D} = C^\infty_c$</span>, the space of <em>test functions</em> (smooth functions <span class="math-container">$\Bbb R \to \Bbb C$</span> with compact support). Let <span class="math-container">$\mathcal D^*$</span> be the space of <em>distributions</em> (continuous linear functionals <span class="math-container">${\mathcal D} \to {\Bbb C}$</span>). </p> <p>I am trying to fill in details of a proof in Richards &amp; Youn's <em>Theory of Distributions: A Nontechnical Introduction</em> that</p> <p><span class="math-container">$\qquad$</span> If <span class="math-container">$S \in \mathcal D^*$</span> and <span class="math-container">$\varphi, \psi \in {\mathcal D}$</span>, then <span class="math-container">$S * (\varphi * \psi) = (S * \varphi) * \psi$</span>.</p> <p>My sticking point is this: I need to show that <span class="math-container">$\rho_n \to \rho$</span> in <span class="math-container">$\mathcal D$</span>, where </p> <p><span class="math-container">$\qquad \rho(x) = \int_{-\infty}^{+\infty} \varphi(v) \psi(x-v) dv \quad$</span> (This defines the convolution <span class="math-container">$\rho := \varphi * \psi$</span>.) <span class="math-container">$\qquad \rho_n(x) = \sum_{m=-\infty}^{+\infty} \varphi (\frac m n)\psi(x - \frac m n) {\frac 1 n}$</span>.</p> <p>By "convergence in <span class="math-container">$\mathcal D$</span>" is meant that, for each <span class="math-container">$k = 0, 1, 2, \dots$</span>, the sequence of derivatives <span class="math-container">$\rho_n^{(k)}$</span> converges <em>uniformly</em> to the derivative <span class="math-container">$\rho^{(k)}$</span>.</p> <p>Because <span class="math-container">$\text{support }\varphi \subseteq [-M, M]$</span> for some positive integer <span class="math-container">$M$</span>, the ostensibly infinite limits can be replaced by finite ones. 
Observe that <span class="math-container">$\rho_n$</span> is a Riemann sum for <span class="math-container">$\rho$</span>, where <span class="math-container">$\rho_n$</span> uses the partition <span class="math-container">${\mathscr P}_n = \{ \frac m n \}$</span> consisting of equally spaced points separated by distance <span class="math-container">$\frac 1 n$</span>. As <span class="math-container">$\rho$</span> necessarily exists and as <span class="math-container">$\text{mesh } {\mathscr P}_n = {\frac 1 n} \to 0$</span>, one has <em>pointwise</em> convergence <span class="math-container">$\rho_n(x) \to \rho(x)$</span> for each <span class="math-container">$x \in \Bbb R$</span>.</p> <p>Any test function is uniformly continuous, and <span class="math-container">$\varphi, \psi, \rho,$</span> and <span class="math-container">$\rho_n$</span> are all test functions. Richards &amp; Youn state: </p> <p><strong><span class="math-container">$\qquad \rho_n \to \rho$</span> uniformly because of that uniform continuity.</strong><br> <span class="math-container">$\qquad$</span> <strong>Can anyone provide details as to why that should be so?</strong></p> <p>Some facts that might be helpful: <span class="math-container">$\text{support }\psi \subseteq [-N, N]$</span> for some positive integer <span class="math-container">$N$</span>. Therefore <span class="math-container">$\text{support }\rho, \text{ support }\rho_n \subseteq [-M-N, M+N]$</span>. </p> <p>(I don't need to prove uniform convergence of the <span class="math-container">$k$</span>th derivatives, <span class="math-container">$k&gt;0$</span>. Such a proof would be virtually identical to that for uniform convergence of <span class="math-container">$\rho_n$</span> to <span class="math-container">$\rho$</span>.)</p>
Greg Grunberg
129,384
<p>I didn't quite understand Mindlack's answer, but it did suggest some critical steps which I've been able to exploit to form my own answer:</p> <p>Write <span class="math-container">$$\rho(x) := \int_{-M}^M {\varphi(v) \psi(x-v) \, dv} = \sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} {\varphi(v) \psi(x-v) \, dv},$$</span><br> <span class="math-container">$$\rho_n(x) := \sum_{m=-Mn+1}^{Mn} {\varphi({\frac m n}) \psi(x-{\frac m n}) {\frac 1 n}} = \sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} {\varphi({\frac m n}) \psi(x-{\frac m n}) \,dv}.$$</span> </p> <p>For the Riemann sum <span class="math-container">$\rho_n(x)$</span> we partition the integration interval <span class="math-container">$[-M, M]$</span> (where <span class="math-container">$M$</span> is a positive integer) into subintervals <span class="math-container">$[{\frac {m-1} n}, {\frac m n}]$</span> of equal width <span class="math-container">$\frac 1 n$</span>; the integrand is evaluated at the right endpoint <span class="math-container">$\frac m n$</span> of each subinterval. 
Then</p> <p><span class="math-container">$$|\rho(x)-\rho_n(x)| = \left| \sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} \left[ \varphi(v) (\psi(x-v) -\psi(x - {\frac m n})) + (\varphi(v)- \varphi({\frac m n})) \psi(x-{\frac m n}) \right] \,dv \right|$$</span><br> (we've added and subtracted the same term from the integrand), so use of the triangle inequality gives <span class="math-container">$$\le {\sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} \left( \left| \varphi(v) \right| \cdot \left| \frac {\psi(x-v) - \psi(x-{\frac m n})} {(x-v)-(x-{\frac m n})} \right| \cdot \left| {(x-v)-(x-{\frac m n})} \right| + \left| \frac {\varphi(v) - \varphi(\frac m n)} {v - \frac m n} \right| \cdot \left| v - \frac m n\right| \cdot \left| \psi(x - \frac m n) \right| \right) \,dv.}$$</span> </p> <p>Observe: </p> <ul> <li>Factor <span class="math-container">$|\varphi(v)|$</span> is dominated by supremum norm <span class="math-container">$||\varphi||_\infty$</span>, while factor <span class="math-container">$\left| \psi(x - \frac m n)\right|$</span> is dominated by <span class="math-container">$||\psi||_\infty$</span>. </li> <li><p>From Rudin's <em>Principles of Mathematical Analysis</em>: Suppose continuous vector-valued function <span class="math-container">${\mathbf f}:[a,b] \to {\Bbb R}^d$</span> is differentiable on <span class="math-container">$(a,b)$</span>. Then there exists a point <span class="math-container">$t \in (a,b)$</span> such that<br> <span class="math-container">$$ \left| \frac {{\mathbf f}(b) - {\mathbf f}(a)} {b-a} \right| \le |{\mathbf f'}(t)|.$$</span> Consequently factor <span class="math-container">$\left| \frac {\varphi(v) - \varphi(\frac m n)} {v - \frac m n} \right|$</span> is dominated by <span class="math-container">$||\varphi'||_\infty$</span>. 
(If <span class="math-container">$\varphi$</span> is complex-valued rather than real-valued, think of <span class="math-container">$\varphi(v)$</span> as an element of <span class="math-container">${\Bbb R}^2.$</span>) Similarly factor <span class="math-container">$\left| \frac {\psi(x-v) - \psi(x-{\frac m n})} {(x-v)-(x-{\frac m n})} \right|$</span> is dominated by <span class="math-container">$||\psi'||_\infty$</span>. </p></li> <li><p>Factors <span class="math-container">$\left| {(x-v)-(x-{\frac m n})} \right|$</span> and <span class="math-container">$\left| v- {\frac m n} \right|$</span> in the integrand's two terms both equal <span class="math-container">${\frac m n} - v$</span>.</p></li> <li><span class="math-container">$ \int_{(m-1)/n}^{m/n}({\frac m n} - v) \,dv = {\frac 1 {2n^2}}.$</span></li> <li>The summation <span class="math-container">$\sum_{m=-Mn+1}^{Mn}$</span> contains <span class="math-container">$2Mn$</span> terms.</li> </ul> <p>Consequently <span class="math-container">$$\forall x: |\rho(x)-\rho_n(x) | \le {\frac {C_0M} n} \quad \text{where} \quad C_0 := ||\varphi||_\infty \cdot ||\psi' ||_\infty + ||\varphi'||_\infty \cdot ||\psi||_\infty.$$</span></p> <p>Differentiating <span class="math-container">$k$</span> times with respect to <span class="math-container">$x$</span> the initial expressions for <span class="math-container">$\rho(x)$</span> and <span class="math-container">$\rho_n(x)$</span>, we have more generally <span class="math-container">$$\rho^{(k)}(x) := \int_{-M}^M {\varphi(v) \psi^{(k)}(x-v) \, dv} = \sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} {\varphi(v) \psi^{(k)}(x-v) \, dv}$$</span><br> <span class="math-container">$$\rho_n^{(k)}(x) := \sum_{m=-Mn+1}^{Mn} {\varphi({\frac m n}) \psi^{(k)}(x-{\frac m n}) {\frac 1 n}} = \sum_{m=-Mn+1}^{Mn} \int_{(m-1)/n}^{m/n} {\varphi({\frac m n}) \psi^{(k)}(x-{\frac m n}) \,dv}.$$</span></p> <p>An argument identical to the above but with <span class="math-container">$\psi^{(k)}$</span> replacing 
<span class="math-container">$\psi$</span> then gives <span class="math-container">$$|\rho^{(k)}-\rho_n^{(k)} | \le {\frac {C_k M} n} \quad \text{where} \quad C_k := ||\varphi||_\infty \cdot ||\psi^{(k+1)} ||_\infty + ||\varphi'||_\infty \cdot ||\psi^{(k)}||_\infty.$$</span></p> <p>For each <span class="math-container">$k = 0,1,2,\dots$</span> we have <span class="math-container">$\rho_n^{(k)} \to \rho^{(k)}$</span> uniformly since the upper bound <span class="math-container">${\frac {C_k M} n}$</span>, which is independent of <span class="math-container">$x$</span>, shrinks to zero as <span class="math-container">$n \to \infty$</span>. Conclude <span class="math-container">$\rho_n \to \rho$</span> in <span class="math-container">$\mathcal D$</span>.</p>
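<p>The uniform bound above is easy to watch numerically. A rough Python sketch (my own illustration, with the standard bump function $e^{-1/(1-x^2)}$ standing in for both $\varphi$ and $\psi$; the "exact" $\rho$ is itself a fine midpoint-rule approximation):</p>

```python
import math

def bump(x):
    # smooth test function supported on (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def rho(x, steps=20000):
    # rho(x) = ∫ bump(v) · bump(x - v) dv, midpoint rule over the support [-1, 1]
    dv = 2.0 / steps
    return dv * sum(bump(-1.0 + (i + 0.5) * dv) * bump(x + 1.0 - (i + 0.5) * dv)
                    for i in range(steps))

def rho_n(x, n):
    # the Riemann sum rho_n(x) = sum_m bump(m/n) · bump(x - m/n) · (1/n)
    return sum(bump(m / n) * bump(x - m / n) for m in range(-n, n + 1)) / n

xs = [i / 10.0 for i in range(-20, 21)]
exact = {x: rho(x) for x in xs}
errs = {}
for n in (5, 10, 20):
    errs[n] = max(abs(exact[x] - rho_n(x, n)) for x in xs)
    print(n, errs[n])
```

<p>In practice the observed sup-norm error typically decays much faster than the guaranteed $C/n$ rate, since equally spaced sums of smooth compactly supported functions are superconvergent; the bound derived above is only an upper estimate.</p>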
40,473
<p>I found this « Worm on the rubber band » problem in the book <em>Concrete Mathematics</em>.</p> <p>A slow worm $W$ starts at one end of a meter-long rubber band and crawls one centimetre per minute toward the other end.</p> <p>At the end of each minute, a keeper of the band $K$ stretches it one meter.</p> <p>Does the worm ever reach the finish?</p> <p>The given solution is: $W$ reaches the finish if and when $H(n)$ surpasses 100, where $H(n)$ is the $n$th Harmonic number.</p> <p>How does one solve the generalized problem with continuous data, where $W$ crawls with velocity $u(t)$ and $K$ stretches the band so that its far end recedes with velocity $v(t)$, with $u$ and $v$ arbitrary functions of time?</p> <p>For example, I want to find if and when the worm will reach the end of a rubber band of length $L$ when (with $a$ and $b$ constants) $u(t)=at$ and $v(t)=bt$.</p>
Jean-Pierre
6,240
<p>Thanks to Lubos, I get the following results:</p> <p>With the data of the question (a rubber band of length $L$, $u(t)=at$ and $v(t)=bt$ with $a$ and $b$ constants), the worm will <em>always</em> reach the end of the band, at time</p> <p>$$T=\sqrt{\frac{2L}{b}\left(e^{b/a}-1\right)}.$$</p> <p>If $u$ is constant and $v(t)=bt$ with $b$ constant, the worm reaches the end at time</p> <p>$$T=\sqrt{\frac{2L}{b}}\,\tan\left(\sqrt{\frac{bL}{2u^{2}}}\right),$$</p> <p>but <em>only</em> if $b &lt; \dfrac{\pi^{2}u^{2}}{2L}$.</p>
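<p>A rough numerical check of the first formula in Python (my own sketch): with $v(t)=bt$ the band length is $L(t)=L+bt^2/2$, the worm's crawled fraction $\theta$ satisfies $\theta'(t)=u(t)/L(t)$ with $u(t)=at$, and $\theta$ should hit $1$ exactly at the claimed time $T$:</p>

```python
import math

def reach_time(a, b, L):
    # claimed closed form: T = sqrt((2L/b) * (e^{b/a} - 1))
    return math.sqrt((2.0 * L / b) * (math.exp(b / a) - 1.0))

def fraction_crawled(t_end, a, b, L, steps=200000):
    # midpoint-rule integration of theta'(t) = u(t)/L(t) = a*t / (L + b*t^2/2)
    dt = t_end / steps
    theta = 0.0
    for i in range(steps):
        s = (i + 0.5) * dt
        theta += (a * s) / (L + 0.5 * b * s * s) * dt
    return theta

a, b, L = 1.0, 1.0, 1.0
T = reach_time(a, b, L)
theta_T = fraction_crawled(T, a, b, L)
print(round(T, 4), round(theta_T, 4))   # 1.8538 1.0
```
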
55,853
<p>Given a function $f:X\to X$, let $\text{Fix}(f)=\{x\in X\mid x=f(x)\}$.</p> <p>In a recent <a href="https://math.stackexchange.com/questions/55846/is-the-set-of-fixed-points-in-a-non-hausdorff-space-always-closed/55847#55847">comment</a>, I wondered whether $X$ is Hausdorff $\iff$ $\text{Fix}(f)\subseteq X$ is closed for every continuous $f:X\to X$ (the forwards implication is a simple, well-known result). At first glance it seemed plausible to me, but I don't have any particular reason for thinking so. I'll also repost Qiaochu's comment to me below for reference: </p> <blockquote> <p>I would be very surprised if this were true, but it doesn't seem easy to construct a counterexample. Any counterexample needs to be infinite and $T_1$, but not Hausdorff, and I don't have good constructions for such spaces which don't result in a huge collection of endomorphisms...</p> </blockquote> <p>Is there a non-Hausdorff space $X$ for which $\text{Fix}(f)\subseteq X$ is closed for every continuous $f:X\to X$?</p>
Sam
3,208
<p>Let me propose the following counterexample:</p> <p>Take $X = \overline{\mathbb Q}$ the one-point compactification of $\mathbb Q$. This space is not Hausdorff, since $\mathbb Q$ is not locally compact (the problem is $\infty$).</p> <p>Now let $f: \overline{\mathbb Q} \to \overline{\mathbb Q}$ be a continuous function, and let $x\in \overline{\mathrm{Fix}(f)}$ be an arbitrary point in the closure of $\mathrm{Fix}(f)$.</p> <p><em>Case I:</em> Suppose $\infty \in \mathrm{Fix}(f)$. Then either $x = \infty$ or we must have that the restriction $f|_{\mathbb Q}: \mathbb Q \to \overline{\mathbb Q}$ is continuous. But then also $x \in \overline{\mathrm{Fix}(f|_\mathbb{Q})} \subset \mathrm{Fix}(f|_\mathbb{Q}) \cup \{\infty\}= \mathrm{Fix}(f)$.</p> <p><em>Case II:</em> Now suppose $\infty \notin \mathrm{Fix}(f)$ and $x\ne \infty$. Then there is a convergent sequence $x_n \to x$ with $x_n\in \mathrm{Fix}(f)$. But then by continuity: $f(x) = \lim_{n\to \infty} f(x_n) = \lim_{n\to \infty} x_n = x$.</p> <p>So there is only one case left:</p> <p><em>Can we have $x=\infty \in \overline{\mathrm{Fix}(f)}$ but at the same time $\infty \notin \mathrm{Fix}(f)$?</em></p> <p>If this were the case, then $\mathrm{Fix}(f)$ would definitely not be compact. But this implies that there must be a sequence in $x_n \in \mathrm{Fix}(f) \subset \mathbb Q$ without a convergent subsequence - that is: no convergent subsequence in $\mathbb Q$!</p> <p>But then such a sequence has a subsequence converging to $\infty$, which implies that $\infty \in \mathrm{Fix}(f)$.</p> <p>Hoping I haven't made some silly mistake, this concludes the argument that this $X$ is indeed a counterexample.</p>
2,124,560
<p>A pentagon ABCDE contains 5 triangles whose areas are each one. The triangles are ABC, BCD, CDE, DEA, and EAB. Find the area of ABCDE?</p> <p>Is there a theorem for overlapping triangle areas?</p>
Jack D'Aurizio
44,121
<p>If $ABC$ and $BCD$ have the same area, then $BC\parallel AD$ and so on. It follows that $ABCDE$ is not necessarily a regular pentagon, but it is for sure the image of a regular pentagon through an affine map. Affine maps preserve the ratios between areas, hence it is enough to understand what is the area of a regular pentagon $ABCDE$ in which $[ABC]=1$. Such area is $\color{red}{\frac{5+\sqrt{5}}{2}}$. <a href="https://i.stack.imgur.com/1tMOS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1tMOS.png" alt="enter image description here"></a> $$\scriptstyle\text{A non-regular pentagon with } [ABC]=[BCD]=[CDE]=[DEA]=[EAB] $$</p>
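<p>The regular-pentagon computation is easy to confirm numerically; a small Python sketch using the shoelace formula:</p>

```python
import math

def shoelace(pts):
    # absolute area of a simple polygon via the shoelace formula
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

# regular pentagon inscribed in the unit circle
pent = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
        for k in range(5)]
tri = shoelace(pent[:3])        # [ABC] for three consecutive vertices
total = shoelace(pent)
print(round(total / tri, 5))    # 3.61803 = (5 + √5)/2
```
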
1,001,913
<p>If we want to graph a horizontal line, we will do the following:</p> <pre><code>y = 0x + 3 </code></pre> <p>No matter the domain for x, the range for y will always be 3. Therefore, we have a horizontal line.</p> <pre><code>y = 0(0) + 3 = (0,3) y = 0(1) + 3 = (1,3) y = 0(2) + 3 = (2,3) </code></pre> <p>Now the formula to graph a vertical line looks like this:</p> <pre><code>x = 3 </code></pre> <p>Well, wait a second. Where is the y? I would like to see the y in the equation. But it is missing. How can I write the equation for a vertical line that includes the y variable? This is all I can think of:</p> <pre><code>x = 0y + 3 </code></pre> <p>And with the following domain:</p> <pre><code>x = 0(0) + 3 x = 0(1) + 3 x = 0(2) + 3 </code></pre> <p>Is this correct? Is it ok to reverse the x and y, as I just did above? Or does this not make it a slope-intercept equation anymore? It should still be a linear equation, since the variables are raised to the first power, in my opinion. But the slope-intercept form looks like this: y = mx + b. So I am not sure if this is still a slope-intercept equation. </p>
Brian M. Scott
12,042
<p>There is a fairly easy combinatorial proof. We count the number of subsets of $\{0,1,\ldots,n\}$ of size $m+1$ in two ways. First, of course, there are $\binom{n+1}{m+1}$ of them. Now suppose that $S=\{a_0,\ldots,a_m\}$ is such a subset, where $a_0&lt;\ldots&lt;a_m$. Clearly $a_k\ge k$; let $t=a_k-k\ge 0$. Moreover, </p> <p>$$n\ge a_m\ge a_k+(m-k)=m+t\;,$$</p> <p>so $t\le n-m$. For a given $t\in\{0,\ldots,n-m\}$ there are </p> <p>$$\binom{a_k}k=\binom{t+k}k=\binom{t+k}t$$</p> <p>ways to choose the $k$ members of $S$ below $a_k=t+k$ and </p> <p>$$\binom{n-(t+k)}{m-k}=\binom{n-k-t}{n-m-t}$$</p> <p>ways to choose the $m-k$ members of $S$ above $a_k=t+k$. Thus, there are</p> <p>$$\binom{t+k}t\binom{n-k-t}{n-m-t}$$</p> <p>ways to choose $S$ with $a_k=t+k$ and altogether</p> <p>$$\sum_{t=0}^{n-m}\binom{t+k}t\binom{n-k-t}{n-m-t}$$</p> <p>ways to choose $S$. It follows that</p> <p>$$\sum_{t=0}^{n-m}\binom{t+k}t\binom{n-k-t}{n-m-t}=\binom{n+1}{m+1}\;.$$</p>
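<p>The identity is also easy to verify by brute force for small parameters; a quick Python check:</p>

```python
from math import comb

def lhs(n, m, k):
    # sum_{t=0}^{n-m} C(t+k, t) * C(n-k-t, n-m-t)
    return sum(comb(t + k, t) * comb(n - k - t, n - m - t)
               for t in range(n - m + 1))

for n in range(8):
    for m in range(n + 1):
        for k in range(m + 1):
            assert lhs(n, m, k) == comb(n + 1, m + 1)
print("identity holds for all 0 <= k <= m <= n < 8")
```
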
1,816,783
<p>When can one conclude that a character table has non-real entries?</p> <p>In particular, by constructing the character table for $\mathbb{Z}/3\mathbb{Z}$ or $A_4$ how does one determine that some of the entries will be nonreal? Is the reason that the same complex values in the table for $\mathbb{Z}/3\mathbb{Z}$ also appear in the table for $A_4$ because $A_4 / ({1,(12)(34),(13)(24),(14)(23)})\cong \mathbb{Z}/3\mathbb{Z}$?</p> <p>Here's what I have for $\mathbb{Z}/3\mathbb{Z}$.</p> <p><a href="https://i.stack.imgur.com/z8Oln.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z8Oln.jpg" alt="enter image description here"></a> </p> <p>Using short orthogonality relations I conclude that $$1+a^2+b^2=1+c^2+d^2+=1+a^2+c^2=1+b^2+d^2=3,$$ and $$1+a+b=1+c+d=1+ac+bd=0,$$ I don't see how to conclude from this that $a=d=\omega$ and $b=c=\bar{\omega}=\omega^2$, where $\omega=e^{2\pi i/3}.$</p>
Gerry Myerson
8,269
<p>Characters of degree 1 are homomorphisms from the group to the multiplicative group of nonzero complex numbers. It follows that if, say, $g$ is a group element of order 3, then $(\chi(g))^3=1$, so $\chi(g)$ is 1, $\omega=e^{2\pi i/3}$, or $\omega^2$. Once you have found all the linear characters with $\chi(g)=1$, if you have any more characters they must have $\chi(g)=\omega$ or $\chi(g)=\omega^2$. </p>
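<p>One can confirm numerically that the resulting table (with $a=d=\omega$ and $b=c=\omega^2$) satisfies the orthogonality relations quoted in the question; a short Python check:</p>

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)     # primitive cube root of unity
# character table of Z/3Z: rows are the three degree-1 characters
table = [[1, 1, 1],
         [1, w, w * w],
         [1, w * w, w]]
# row orthogonality: (1/3) * sum_g chi_i(g) * conj(chi_j(g)) = delta_ij
for i in range(3):
    for j in range(3):
        inner = sum(table[i][g] * table[j][g].conjugate() for g in range(3)) / 3
        assert abs(inner - (1.0 if i == j else 0.0)) < 1e-12
print("orthogonality relations verified")
```
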
1,653,569
<p>Consider, in the first-order NBG theory of sets, the following axioms: $$\vdash\exists x\forall y(y\notin x)$$ and $$\vdash\forall y(y\notin\varnothing)$$</p> <p>The second one defines the constant $\varnothing$. Moreover, from the second axiom we get the first by the $\exists$I inference rule.</p> <p>So my question is: can we replace the axiom $$\vdash\exists x\forall y(y\notin x)$$ with $$\vdash\forall y(y\notin\varnothing)$$?</p>
Mauro ALLEGRANZA
108,274
<p>Yes, but we have to assume the individual constant $\emptyset$ as primitive in the language.</p> <p>The way used e.g. in Mendelson's textbook [see 6th ed., 2015, page 231] is to formalize $\mathsf {NBG}$ in a f-o language with a single predicate letter $A^2_2$ (i.e. $\in$) but no function letters or individual constants.</p> <hr> <p>There is a similar situation in f-o <a href="https://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic" rel="nofollow">Peano arithmetic</a> $\mathsf {PA}$: the usual presentation is with the individual constant $0$ as primitive and the axiom $\forall x \ (0 \ne S(x))$, but we can dispense with $0$ and replace the above axiom with $\exists z \forall x \ (z \ne S(x))$. </p>
1,356,585
<p>I am reading Hajime Sato's: <em>Algebraic Topology, an Intuitive Approach</em>. His Sample Problem 1.3 is: <em>Show that the topological spaces X and I are not homeomorphic.</em> (Note that this requires a font where I appears as a simple straight vertical line.) </p> <p>Sato argues by contradiction. Brief sketch of his argument: Suppose there existed a homeomorphism $f: X \to I$. Remove the point $x_0$ which lies at the crossing point of X. Then the restriction of $f$ to the new domain is still homeomorphic. But $X-x_0$ consists of four disjoint line segments (each half open), and $I-f(x_0)$ consists of two disjoint line segments (each half open). Therefore, he concludes, the spaces aren't homeomorphic. </p> <p>I don't understand how the conclusion (in the final sentence) follows. I know that two spaces are homeomorphic if there exists a bijective continuous map between them with a continuous inverse. And I know the characterization of continuity in terms of open sets. But somehow I am not seeing it. I feel like I'm missing something really obvious. Any ideas?</p>
Ivo Terek
118,056
<p>The issue is that connectedness is preserved by homeomorphisms. More precisely, two homeomorphic spaces must have the same number of connected components. If $X \cong I$, then $X \setminus \{\rm crossing~point\} \cong I \setminus \{\rm one ~point\}$. But $X \setminus \{\rm crossing~point\}$ has four connected components, while $I \setminus \{\rm one ~point\}$ has at most two. Done.</p>
748,083
<p>I'm trying to find the Jordan Normal Form of the matrix $A = \begin{bmatrix} 1 &amp; 0 &amp; 1 &amp; -1 \\0 &amp; 1 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 1\\ 0 &amp; 0 &amp; 0 &amp; 1\end{bmatrix}$ </p> <p>(That is, a matrix J and a matrix S such that $A = S J S^{-1})$.</p> <p>Finding the eigenvalues and eigenvectors is no problem. The only eigenvalue is $\lambda$=1 with multiplicity 4. The corresponding eigenvectors are $x = \begin{bmatrix}1\\0\\0\\0\end{bmatrix}$ and $y = \begin{bmatrix}0\\1\\0\\0\end{bmatrix}$.</p> <p>However, I get stuck after that. In order to find the S and J matrix, I have to determine the consistency of the matrices $(A-I)w = x$ and $(A-I)w = y$. But both of these matrices are inconsistent:</p> <p>$(A-I)w = x \rightarrow \begin{array}{cccc|c}0 &amp; 0 &amp; 1 &amp; -1 &amp;1\\0 &amp; 0 &amp; 1 &amp; 0 &amp; 0\\0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\end{array} \rightarrow \begin{array}{cccc|c}0 &amp; 0 &amp; 1 &amp; 0 &amp;1\\0 &amp; 0 &amp; 1 &amp; 0 &amp;0\\0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\end{array}$</p> <p>Which is clearly inconsistent since w3 = 0 from row 2 and w3 = 1 from row 1.</p> <p>Also, $(A-I)w = x \rightarrow \begin{array}{cccc|c}0 &amp; 0 &amp; 1 &amp; -1 &amp;0\\0 &amp; 0 &amp; 1 &amp; 0 &amp; 1\\0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\end{array} \rightarrow \begin{array}{cccc|c}0 &amp; 0 &amp; 1 &amp; 0 &amp;0\\0 &amp; 0 &amp; 1 &amp; 0 &amp;1\\0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0\end{array}$</p> <p>Which is also clearly inconsistent. </p> <p>So right now I do not know how to construct either the S or J matrix, since at least one of the systems $(A-I)w = x$ or $(A-I)w = y$ should have had a solution (and then also a solution for $(A-I)^2$ and possibly $(A-I)^3$ if there's one 1x1 and one 3x3 block). </p> <p>Can someone help me out on how to construct the S and J matrices? Thanks!</p>
Scrooge McDuck
83,481
<p>The value of the function at $0$ is $0$ for every $a$. The function is continuous, so you could take the derivative and use the binomial theorem to prove that it is positive for every $x &gt; 0$ and negative for every $x &lt; 0$.</p> <p>EDIT: <a href="https://i.imgur.com/CZcJutJ.jpg" rel="nofollow noreferrer">http://i.imgur.com/CZcJutJ.jpg</a></p>
2,354,094
<p>I've been stumped by this problem:</p> <blockquote> <p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that $$f\circ g=h$$ $$g\circ h=f$$ $$h\circ f=g$$ or prove that no three such functions exist.</p> </blockquote> <p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p> <p>How do I prove this, or how do I find these functions if they do exist?</p> <p>All help is appreciated!</p> <p>The functions should also be continuous.</p>
lulu
252,071
<p>An example:</p> <p>$$f(x) = \begin{cases} x, &amp; \text{if $x\in \mathbb Q$} \\ -x, &amp; \text{if $x\notin \mathbb Q$} \end{cases}$$</p> <p>$$g(x) = \begin{cases} -x, &amp; \text{if $x\in \mathbb Q$} \\ x, &amp; \text{if $x\notin \mathbb Q$} \end{cases}$$</p> <p>$$h(x)=-x$$</p>
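<p>A quick Python sanity check of the three identities (with <code>Fraction</code> values standing in for rationals and floats such as $\sqrt2$ standing in for irrationals; this is only a rough model of the rational/irrational split, since floats are themselves rational):</p>

```python
from fractions import Fraction
import math

def f(x):
    # x on the "rationals" (Fraction), -x on the "irrationals" (float)
    return x if isinstance(x, Fraction) else -x

def g(x):
    return -x if isinstance(x, Fraction) else x

def h(x):
    return -x

samples = [Fraction(3, 7), Fraction(-2, 5), math.sqrt(2), -math.pi]
for x in samples:
    assert f(g(x)) == h(x)
    assert g(h(x)) == f(x)
    assert h(f(x)) == g(x)
print("f∘g = h, g∘h = f, h∘f = g on all samples")
```

Note that negating a <code>Fraction</code> yields a <code>Fraction</code> and negating a float yields a float, which is what makes this type-based stand-in consistent under composition.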