Dataset schema:
- qid: int64 (1 to 4.65M)
- question: large_string (lengths 27 to 36.3k)
- author: large_string (lengths 3 to 36)
- author_id: int64 (-1 to 1.16M)
- answer: large_string (lengths 18 to 63k)
1,212,336
<p>Let $\mathbb R$ be the set of all real numbers. Is $\{\mathbb R^+,\mathbb R^-,\{0\}\}$ a partition of $\mathbb R$? Explain your answer.</p> <p>My answer is no because of $\{0\}$. I am confused by $\{0\}$. Please help.</p>
Batman
127,428
<p>Draw a picture of $\log x$ and the left and right Riemann sums corresponding to the integral with interval width $1$. Which one matches the sum on the left hand side? </p>
74,347
<blockquote> <p>Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$.</p> </blockquote> <p>This question comes just after the definition of differentiation and the theorem that if $f$ has a finite derivative at $c$, then $f$ is also continuous at $c$. Please help, my textbook does not have the answer. </p>
AD - Stop Putin -
1,154
<p>How about $f(x) = \max(\sin(n\pi x),0)$ or perhaps $g(x) = |\sin(n\pi x)|$?</p>
1,290,516
<p>Find the values of $m$ if the line $y=mx+2$ is a tangent to the curve $x^2-2y^2=1$.</p> <p>My working:</p> <p>First we rearrange $x^2-2y^2=1$ and solve for $y$ so that we can find the gradient. We get $y^2=\frac{1}{2}x^2-\frac{1}{2}\implies y=\pm\sqrt{\frac{1}{2}x^2-\frac{1}{2}}$.</p> <p>We take the positive root for demonstration<br> $\frac{dy}{dx}=\frac{1}{2}x(\frac{1}{2}x^2-\frac{1}{2})^{-\frac{1}{2}}=\frac{x}{2\sqrt{\frac{1}{2}x^2-\frac{1}{2}}}$</p> <p>Setting $\frac{dy}{dx}=m$ and squaring gives $(1-2m^2)x^2=-2m^2$</p> <p>Since the tangent touches the curve, we can substitute to get $x^2-2(mx+2)^2=1$, i.e. $(1-2m^2)x^2=9+8mx$</p> <p>$(1-2m^2)x^2=-2m^2$ and $(1-2m^2)x^2=9+8mx$ are two equations in two unknowns, so we should be able to find the values of $m$, but I couldn't find any easy way to solve these two simultaneous equations. Is there an easier method?</p> <p>I tried equating them to get $9+8mx=-2m^2$, but that still leaves two unknowns in one equation.</p> <p>Also, if we don't use those two simultaneous equations, can we solve this question with a different method?</p> <p>I am trying to solve WITHOUT implicit differentiation.</p> <p>Many thanks for the help!</p>
André Nicolas
6,312
<p>Let $(a,b)$ be a point of tangency. We have $2x-4y\frac{dy}{dx}=0$, so the slope of the tangent line at $(a,b)$ (if $b\ne 0$) is $\frac{a}{2b}$.</p> <p>The tangent line has equation $y-b=(x-a)(a/2b)$. Simplifying, and comparing with $y=mx+2$, we find that $b-a^2/(2b)=2$. It follows that $2b^2-a^2=4b$. Since $a^2-2b^2=1$, we conclude that $b=-1/4$. The rest is routine.</p> <p><strong>Remark:</strong> Note that the point of tangency happens to lie on the "lower" half of the hyperbola, so taking the positive square root turns out not to be useful. </p>
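A quick numeric check of this answer (a sketch; the zero-discriminant tangency test below is a standard criterion, not part of the answer itself):

```python
import math

# Follow the answer: the point of tangency (a, b) satisfies b = -1/4,
# a^2 - 2b^2 = 1, and the slope is m = a/(2b).
b = -0.25
a = math.sqrt(1 + 2 * b**2)      # take a > 0; the other sign gives m = +3/sqrt(2)
m = a / (2 * b)

# Substituting y = mx + 2 into x^2 - 2y^2 = 1 gives
# (1 - 2m^2)x^2 - 8mx - 9 = 0; tangency means a double root.
disc = (8 * m)**2 + 4 * (1 - 2 * m**2) * 9
print(m, disc)                   # m = -3/sqrt(2) ~ -2.1213, disc ~ 0
```

So the two tangent lines have slopes $m=\pm3/\sqrt2$, consistent with the tangency condition.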
1,701,176
<p>The problem I'm having is with the logs. I go:</p> <p>$$\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}} \cdot \frac{n-2}{n-1} \Big)$$</p> <p>$$=\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}}\Big) \cdot \lim_{n \to \infty} \Big(\frac{n-2}{n-1} \Big)$$</p> <p>and here I know that $$\lim_{n \to \infty} \Big(\frac{n-2}{n-1} \Big) = \lim_{n \to \infty} \Bigg(\frac{1-\frac{2}{n}}{1-\frac{1}{n}} \Bigg) = \frac{\lim_{n \to \infty} ({1-\frac{2}{n}})}{\lim_{n \to \infty} (1-\frac{1}{n})} = 1$$</p> <p>However, I don't know how to do the equivalent for $$\lim_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}}\Big)$$</p> <p>I know that the numerator and denominator both tend to infinity as $n$ grows, but I don't know how to compute the limit algebraically and show that it's also $1$.</p>
Doug M
317,176
<p>I claim $\lim\limits_{n \to \infty} \Big( \frac{\log{(n+1)}}{\log{(n)}}\Big) = 1 $. In which case, I must show that</p> <p>$\forall \epsilon &gt; 0, \exists N&gt;0$ such that $n&gt;N\implies |\Big( \frac{\log{(n+1)}}{\log{(n)}}\Big) -1|&lt;\epsilon$</p> <p>$|\Big( \frac{\log{(n+1)}-\log{(n)}}{\log{(n)}}\Big)|&lt;\epsilon$</p> <p>$|\Big( \frac{\log{(1+1/n)}}{\log{(n)}}\Big)| &lt; \frac1n &lt; \frac1N &lt;\epsilon$ for $n&gt;N\ge 3$, since $\log(1+1/n)&lt;\frac1n$ and $\log(n)&gt;1$</p> <p>Let $N &gt; \max(3,1/\epsilon)$</p>
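A numeric spot-check of the bound used in the epsilon-N argument (a sketch; the $1/n$ bound holds for $n\ge3$):

```python
import math

# The gap log(n+1)/log(n) - 1 = log(1 + 1/n)/log(n) is bounded by 1/n
# for n >= 3, since log(1 + 1/n) < 1/n and log(n) > 1.
for n in [10, 10**3, 10**6]:
    gap = math.log(n + 1) / math.log(n) - 1
    print(n, gap, gap < 1 / n)
```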
1,131,622
<p>The question itself is a very easy one:<br/></p> <blockquote> <p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p> </blockquote> <p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/> But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, all the possible cases are (girl,girl),(girl,boy),(boy,girl) which yields that the probability is two cases out of three and is thus $\frac23$.<br/> But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not what my friends explained to me. However, I just can't dissuade them from their opinion, nor can I prove that I am wrong.<br/> So what on earth is the probability, and why? Any help is appreciated. Thanks in advance.<hr/> Especially: can anybody show why <strong>my</strong> explanation is wrong? Isn't whether the other kid is a boy or a girl a 50/50 event? <hr/> EDIT:<br/> Thanks for all the help you provided for me, and special thanks go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement of this problem.<br/> As is pointed out in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a>, two distinct interpretations of the statement "one of whom is a girl" give rise to the ambiguity:<br/></p> <blockquote> <p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/> From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p> </blockquote>
Community
-1
<p>The a priori probabilities indeed follow a binomial distribution, and all pairs are equiprobable $$P(BB)=P(BG)=P(GB)=P(GG)=\dfrac14.$$ The distribution of the number of boys follows $(\dfrac14,\dfrac12,\dfrac14)$.</p> <p>Now you are told that $BB$ is excluded, so the a posteriori probabilities become $$P(BB|\lnot BB)=0,P(BG|\lnot BB)=P(GB|\lnot BB)=P(GG|\lnot BB)=\dfrac{1/4}{3/4}=\frac13$$ (all pairs but $BB$ remain equiprobable). The distribution of the number of boys follows $(\dfrac13,\dfrac23,0)$.</p>
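Both readings can be checked by simulation (a sketch; the two sampling schemes below correspond to the two interpretations quoted in the question's edit):

```python
import random

random.seed(1)
families = [(random.choice("BG"), random.choice("BG")) for _ in range(200_000)]

# Reading 1: condition on "at least one girl" (exclude BB) and ask for
# at least one boy -- this matches the a posteriori computation above.
with_girl = [f for f in families if "G" in f]
p1 = sum("B" in f for f in with_girl) / len(with_girl)

# Reading 2: meet one child at random, observe it is a girl, and ask
# whether the other child is a boy.
hits = total = 0
for f in families:
    j = random.randint(0, 1)
    if f[j] == "G":
        total += 1
        hits += f[1 - j] == "B"
p2 = hits / total

print(round(p1, 2), round(p2, 2))   # ~0.67 vs ~0.5
```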
394,517
<p>How can I evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>As we are dealing with a limit of real numbers, we need $x&gt;0$; here $x\to+\infty$</p> <p>Put $x=y^2$ with $y&gt;0$</p> <p>$$\implies \sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}$$</p> <p>$$=\sqrt{y^2+y}-\sqrt{y^2-y}$$</p> <p>$$=\frac{y^2+y-(y^2-y)}{\sqrt{y^2+y}+\sqrt{y^2-y}} \text{ (rationalizing the numerator)}$$</p> <p>$$=\frac2{\sqrt{1+\frac1y}+\sqrt{1-\frac1y}} \text{ (dividing the numerator &amp; the denominator by } y)$$</p> <p>Now, as $x\to\infty, y\to\infty$</p> <hr> <p>Alternatively, put $x=\frac1{h^2}$ with $h&gt;0$ </p> <p>$$\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}$$</p> <p>$$= \frac{\sqrt{1+h}-\sqrt{1-h}}h$$</p> <p>$$= \frac{1+h-(1-h)}{(\sqrt{1+h}+\sqrt{1-h})h} \text{ (rationalizing the numerator)}$$</p> <p>$$=\frac2{\sqrt{1+h}+\sqrt{1-h}}\text{ if }h\ne0$$ </p> <p>As $x\to\infty$, $h\to0^+$, so $h\ne0$</p>
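A numeric check that the expression really approaches $1$ (a sketch; the values shown track the rationalized form $2/(\sqrt{1+1/y}+\sqrt{1-1/y})$):

```python
import math

def f(x):
    return math.sqrt(x + math.sqrt(x)) - math.sqrt(x - math.sqrt(x))

# As x grows, f(x) approaches the limit 1.
for x in [10.0, 1e4, 1e8]:
    print(x, f(x))
```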
394,517
<p>How can I evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?</p>
doubting thomas
78,251
<p>Rationalize. Then divide throughout by $\sqrt x$. Then take the limit.</p>
2,867,207
<p>This question is a (perhaps naive) 'simplification' of a result in a paper, so the answer could be negative.</p> <p>Define the cone $\Sigma(\theta)$ for $\theta\in(0,\pi/2]$, $$\Sigma(\theta) = \left\{ z = x+iy : x&gt;0, |y|&lt;(\tan\theta)x\right\}, $$ and define the norm $\|f\|_\theta$ for functions $f$ analytic on $\Sigma(\theta)$ by</p> <p>$$\|f\|_\theta = \sup_{z\in \Sigma(\theta)}|f(z)|$$ Let $Y_\theta$ be the space of functions analytic on $\Sigma(\theta)$ with $\|f\|_\theta &lt; \infty$. Also let $\chi$ be a distinguished analytic function in $Y_{\pi/2}$ with $\chi(0) = 0$. It would seem then, that the following inequality is true: Let $f\in Y_{\theta}$. Then for any $\theta'&lt;\theta$, $$ \|\chi f'\|_{\theta'} \leq C \frac{\|f\|_{\theta}}{\theta-\theta'} $$ where the constant $C$ can depend on $\chi$. How is this proven, and how does $\chi$ help?</p> <hr> <p>Remarks</p> <ul> <li>(basic Cauchy Estimate) Let $B(r)$ denote the ball around $0$ of radius $r$, and let $|f|_r$ denote $\sup_{z\in B(r)} |f(z)|$. Let's say $f\in X_r$ if $f:B(r)\to \mathbb C$ is analytic on its domain, with $|f|_r&lt;\infty$. From the usual Cauchy formula $f'(z) = \frac1{2\pi i}\int_{\partial D} \frac{f(w)\, dw}{(w-z)^2}$ it is not hard to prove that for any $f\in X_r$, with any $r'&lt;r$, $$ |f'|_{r'} \leq C \frac{|f|_{r}}{r-r'}.$$</li> </ul>
DeepSea
101,504
<p>You can begin by assuming otherwise; that is, assume the fraction is reducible. So $\exists k \in \mathbb{N}, k &gt; 1$ such that $k \mid 21n + 4$ and $k \mid 14n+3\implies 21n+4 = ak, 14n+3 = bk$ for some natural numbers $a,b$. Thus $42n+8 = 2ak, 42n+9 = 3bk\implies 3bk - 2ak = 1\implies k(3b-2a) = 1\implies k = 1$, contradicting the assumption that $k &gt; 1$. This means $\text{gcd}(21n+4,14n+3) = 1$, i.e. $\dfrac{21n+4}{14n+3}$ is irreducible.</p>
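The identity behind the contradiction is $3(14n+3)-2(21n+4)=1$, so any common divisor divides $1$. A quick spot-check:

```python
from math import gcd

# 3(14n + 3) - 2(21n + 4) = 1, so gcd(21n + 4, 14n + 3) = 1 for every n.
assert all(gcd(21 * n + 4, 14 * n + 3) == 1 for n in range(1, 10_000))
print("21n+4 and 14n+3 are coprime for n = 1..9999")
```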
3,208,613
<p>We have a fair <span class="math-container">$3$</span>-sided die <span class="math-container">$(a,b,c)$</span>, and we perform the following experiment:</p> <p>Roll the die until we have seen <span class="math-container">$10$</span> of any one of the sides, and let <span class="math-container">$X$</span> be the number of times the die was rolled.</p> <p>e.g. <span class="math-container">${bcbaaaaaaaaaa,bbbbbbbbbb,aaaaaaacacccccccccc}$</span></p> <p>We are interested in the probability of <span class="math-container">$X$</span> for all possible values of <span class="math-container">$X$</span>, i.e. <span class="math-container">$X\in\{10,11,\ldots,28\}$</span>.</p>
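The distribution can be computed exactly with a small dynamic program over the three side-counts (a sketch; the state records how many times each side has appeared, and the experiment stops the moment any count reaches $10$):

```python
from fractions import Fraction
from collections import defaultdict

dist = defaultdict(Fraction)           # P(X = k), as exact rationals
states = {(0, 0, 0): Fraction(1)}      # (count_a, count_b, count_c) -> probability
rolls = 0
while states:
    rolls += 1
    nxt = defaultdict(Fraction)
    for counts, p in states.items():
        for s in range(3):             # each side comes up with probability 1/3
            c = list(counts)
            c[s] += 1
            if c[s] == 10:
                dist[rolls] += p / 3   # a side just hit 10: X = rolls
            else:
                nxt[tuple(c)] += p / 3
    states = nxt

print(sorted(dist))                    # support is 10, 11, ..., 28
print(float(dist[10]))                 # P(X = 10) = 3 * (1/3)^10 = 1/3^9
```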
Jane Cooper
647,608
<p>Hint: Observe that by the fundamental theorem of algebra, every polynomial of degree 2 has 2 roots, counted with multiplicity. Therefore, if we have 2 polynomials of degree 2, we need one of them to have a repeated root and not the other one. The quadratic formula will be useful here.</p> <p>Edit: Matthew pointed out that it could be possible that they both have two roots, but one in common. I was mistaken.</p>
2,479,363
<p>For every square matrix $A$, does there always exist a non-diagonal matrix $B$ such that $AB=BA$?</p>
Will Jagy
10,400
<p>well, no, not if $A$ is diagonal with all distinct diagonal entries. </p> <p>The simplest way to state the relevant theorem is this: for a square matrix $M,$ if the characteristic polynomial of $M$ and the minimal polynomial of $M$ are the same, then the only matrices that commute with $M$ are polynomials in $M,$ that is $$ w_0 I + w_1 M + w_2 M^2 + \cdots + w_{n-1} M^{n-1}, $$ when $M$ is an $n$ by $n$ matrix. We do not need to use degree $n$ or higher because of Cayley-Hamilton.</p>
2,479,363
<p>For every square matrix $A$, does there always exist a non-diagonal matrix $B$ such that $AB=BA$?</p>
David314
493,155
<p>In general, no.</p> <p>However, when $A$ is not diagonal, the answer is yes, since $B=A$ satisfies $B$ is not diagonal and $AB=BA$.</p> <p>Consider</p> <p>$$A= \begin{bmatrix} 0 &amp; 0 \\ 0 &amp; 1 \end{bmatrix} $$</p> <p>and any</p> <p>$$B= \begin{bmatrix} a &amp; b \\ c &amp; d \end{bmatrix} $$</p> <p>Then</p> <p>$$AB= \begin{bmatrix} 0 &amp; 0 \\ c &amp; d \end{bmatrix} $$</p> <p>but</p> <p>$$BA= \begin{bmatrix} 0 &amp; b \\ 0 &amp; d \end{bmatrix} $$</p> <p>so if $AB=BA$, $b=c=0$, forcing $B$ to be diagonal.</p>
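The $2\times2$ computation above can be checked mechanically (a minimal sketch in plain Python; the particular non-diagonal $B$ below is an arbitrary choice):

```python
# Verify the example: A = diag(0, 1), and AB = BA forces b = c = 0.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 0], [0, 1]]
B = [[5, 7], [11, 13]]       # arbitrary non-diagonal B
print(matmul(A, B))          # [[0, 0], [11, 13]]
print(matmul(B, A))          # [[0, 7], [0, 13]]
assert matmul(A, B) != matmul(B, A)
```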
1,119,563
<p>Why is $\sec^{-1}(2/\sqrt{2}) = \sec^{-1}(\sqrt{2})$ true?</p>
Workaholic
201,168
<p>Since </p> <p>$$ \dfrac{2}{\sqrt{2}}=\dfrac{\sqrt{2}\sqrt{2}}{\sqrt{2}}=\sqrt{2}. $$</p> <p>In general, for $a&gt;0$ we have</p> <p>$$ \dfrac{a}{\sqrt{a}}=\sqrt{a}. $$</p>
2,466,949
<p>Room coordinates follow my walls; to use the guidance system I build the position from various other sensors &amp; construct a GPS position from it.</p> <p>As I also need a "fake" compass, I'm trying to interface a moving robot with a sensor I made.</p> <p>The robot expects the compass to send it the values of a 3-axis magnetometer. As my sensor gives me the orientation, pitch &amp; roll, I have this formula:</p> <p>$\text{Orientation}=\text{atan2}( (-\text{ymag}*\cos(\text{Roll}) + \text{zmag}*\sin(\text{Roll}) ) , (\text{xmag}*\cos(\text{Pitch}) + \text{ymag}*\sin(\text{Pitch})*\sin(\text{Roll})+ \text{zmag}*\sin(\text{Pitch})*\cos(\text{Roll})))$</p> <p>As I have 3 unknown variables &amp; one equation, I need more equations. But I'm stuck: there should be a way, based on the Orientation values, to get constraints (i.e. in $\text{atan2}(y,x) = \arctan(y/x)$ if $x &gt; 0$, etc.) but I can't translate those relations into equations.</p> <p>Am I missing something, or is it impossible?</p> <p>What I'm trying to do:</p> <p>-get Xmag, Ymag and Zmag; those are the expected output of the fake compass. </p> <p>-Known variables are: Orientation (Yaw), Pitch &amp; Roll. On the robot system (X: right of robot, Y: front of robot, Z: going up), Yaw is the rotation about Z in reference to an arbitrarily selected "North", Pitch the rotation about X, and Roll the rotation about Y.</p>
JMoravitz
179,297
<p>This can be explained as follows, using the multiplication principle:</p> <ul> <li>Pick which of the seven available spaces is occupied by the <code>n</code></li> <li>Pick which of the six remaining available spaces is occupied by the <code>g</code></li> <li>$\vdots$</li> <li>Pick which of the four remaining spaces is occupied by the <code>m</code></li> <li>All remaining three spaces will be occupied by the <code>a</code>'s.</li> </ul> <p>Applying the multiplication principle, there are $7\cdot6\cdot5\cdot4$ ways to do this.</p>
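A brute-force cross-check (a sketch; the question itself is not shown here, so the word "anagram" — one each of n, g, r, m plus three a's in seven spaces — is an assumption consistent with the steps above):

```python
from itertools import permutations

# Count distinct arrangements of the letters of "anagram" (assumed word):
distinct = len(set(permutations("anagram")))
print(distinct, 7 * 6 * 5 * 4)   # both 840
```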
2,708,891
<p>Let $R$ be a ring and consider $f = r_nx^n + 1\cdot x^{n-1} + \cdots + r_1x + r_0\; \in R[X]$ such that $r^n = 0$ for all $r \in R$. Then can I call $f$ a monic polynomial in $R[X]$ (assume $r_n$ is non-invertible)?</p>
K B Dave
534,616
<p>I think the answer is <em>no</em> for the same reason that $x^p-x$ is not said to be the zero polynomial in $\mathbb{F}_p[x]$.</p> <p>One "reason" that $x^p-x$ is not the zero polynomial in $\mathbb{F}_p[x]$ is that, even though it evaluates to zero in $\mathbb{F}_p$, it does not evaluate to zero in every $\mathbb{F}_p$-algebra—in particular, it's not zero in the tautological evaluation $x\mapsto x$.</p> <p>Similarly, even though the leading term of your polynomial evaluates to zero in $R$, it doesn't evaluate to zero in every $R$-algebra.</p>
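The $\mathbb{F}_p$ phenomenon cited in the answer is easy to verify directly (a sketch; this is Fermat's little theorem):

```python
# x^p - x evaluates to zero at every element of F_p, even though its
# coefficients are not all zero -- so it is not the zero polynomial.
p = 7
values = [(a**p - a) % p for a in range(p)]
print(values)        # all zeros
assert set(values) == {0}
```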
10,974
<p>Is the following true: if two chain complexes of free abelian groups have isomorphic homology modules, then they are chain homotopy equivalent?</p>
Mariano Suárez-Álvarez
1,409
<p>The natural functor $K^b(\mathbb Z\mathrm{-free})\to D^b(\mathbb Z)$ from the homotopy category of bounded complexes of finitely generated free abelian groups to the derived category of bounded complexes of finitely generated abelian groups is an equivalence. This means that a map of bounded complexes of finitely generated free abelian groups which induces an isomorphism in homology is a homotopy equivalence. </p> <p>This and the fact that one can always lift a morphism $f:H_\bullet(X)\to H_\bullet(Y)$ between the homologies of two complexes of free abelian groups to a morphism $\tilde f:X\to Y$ of complexes which induces $f$ give an affirmative answer to your question.</p>
1,239,211
<p>I have been allowed to attend some preparatory lectures for a seminar on the Goodwillie Calculus of Functors. I found in my notes from one of the lectures two statements which I would like to ask about.</p> <p>The first one is probably straightforward and I'm guessing is related to Whitehead-type theorems. Still, I would like a detailed explanation of what it means.</p> <blockquote> <ol> <li>Every homotopy type is a filtered colimit of finite CW complexes.</li> </ol> </blockquote> <p>The second statement is a lot more problematic because I don't understand any of the context. Here it is:</p> <blockquote> <ol start="2"> <li>We want to look at (extraordinary) homology theories $h_\ast :\mathsf{Top}\rightarrow \mathsf{grAb}$ which commute with filtered colimits.</li> </ol> </blockquote> <p>My question is why do we want to study homology theories which commute with filtered colimits? So that we may reduce to (finite) CW complexes? Is there anything else?</p> <p>This statement is preceded in my notes by the following theorem of Whitehead: </p> <p><strong><em>Theorem.</em></strong> <em>For any extraordinary homology theory which is finitary ($\overset?=$ determined by values on finite CW complexes) there exists a spectrum $E\in \mathsf{Sp}$ such that $h_\ast (X)=\pi_\ast (E\wedge X)$ where $\pi _\ast$ are stable homotopy groups and $\wedge $ is the smash product.</em></p> <p>Now I don't yet know anything about either spectra or stable homotopy, so I can't make out much of this theorem myself.</p>
Najib Idrissi
10,014
<p>I think it's easier to understand if you look at it the other way around. Singular homology preserves filtered colimits (exercise: prove it), but it does not preserve other types of colimits in general (exercise: find a counterexample, a very simple one in fact; if you're stuck, have a look <a href="https://math.stackexchange.com/questions/1155065/homology-and-colimits?lq=1">here</a>). So the theorem shows how useful this property is:</p> <blockquote> <p>Every homotopy type is a filtered colimit of finite CW complexes.</p> </blockquote> <p>By this theorem, it means that when you want to understand homology, it's sufficient to know what it does with finite CW complexes, and then, because you know it preserves filtered colimits, you will also know what it does to every homotopy type.</p> <p>And now it also makes sense why we restrict our attention to generalized homology theories that only preserve filtered colimits (but not necessarily general ones): otherwise, singular homology wouldn't even be an example of a generalized homology theory, so it's not quite clear what we would be generalizing here...</p> <p>And now the adjective "finitary" makes sense: a homology theory is said to be finitary if it preserves filtered colimits, and then by the first theorem it is indeed determined by its value on finite CW complexes.</p>
1,955,729
<p>Are undecidable problems only those that have no algorithm to give a yes or no answer in a finite time, or are there problems with no algorithm to give a yes/no answer even in an infinite time? (If undecidability means there isn't a yes/no answer over a finite period, doesn't that mean that, given enough time, these problems are actually decidable?)</p> <p>If some problems do require an infinite time to be answered, can you give examples of these problems?</p>
Robert Israel
8,508
<p>By definition, an algorithm must give an answer in a finite time (although that time may be arbitrarily long). </p> <p>For example: the halting problem is undecidable. That is, there is no algorithm that, given a Turing machine and its input, decides whether or not the Turing machine given that input will halt.</p> <p>Now you could just simulate running the Turing machine on that input. If the simulation halts, you return Yes. However, there is the possibility that the Turing machine will go on forever without halting. We can't just say "if it runs forever, return No", precisely because only an answer in a finite time counts as an answer.</p>
1,955,729
<p>Are undecidable problems only those that have no algorithm to give a yes or no answer in a finite time, or are there problems with no algorithm to give a yes/no answer even in an infinite time? (If undecidability means there isn't a yes/no answer over a finite period, doesn't that mean that, given enough time, these problems are actually decidable?)</p> <p>If some problems do require an infinite time to be answered, can you give examples of these problems?</p>
Mitchell Spector
350,214
<p>The hyperarithmetic sets of integers are precisely the ones for which membership in them is decidable algorithmically, with a finite set of instructions, if you are allowed to do an infinite sequence of steps in any positive time interval you want. (These sets are the effective analogue to the Borel sets.)</p> <p>A simple example of a hyperarithmetic set which is not recursive is the set of all (codes for) Turing machines which halt, with an empty tape as input. There are many hyperarithmetic sets that are much more complicated than this one, however.</p> <p>A good source for this is <em>Theory of Recursive Functions and Effective Computability</em>, by Hartley Rogers, Jr. See Chapter 16 (The Analytical Hierarchy), section 16.5 (Generalized Computability), p. 407 (in the paragraphs under the heading Generalized Machines).</p> <p>Needless to say, this doesn't make all hyperarithmetic sets decidable. They're only decidable in this generalized sense in which infinite sequences of algorithmic steps are allowed.</p>
589,309
<p>Find all pairs of primes $p$ and $q$ such that $p$ divides $q^2 -4$ and $q$ divides $p^2-1$.</p>
Community
-1
<p>Hint:</p> <p>$p$ divides $q+2$ or $q-2$, and $q$ divides $p+1$ or $p-1$</p> <p>Consider the cases one by one: </p> <p>$q+2=lp,\,p+1=mq\Rightarrow q+2=l(mq-1)=lmq-l\Rightarrow q= ??$</p> <p>try the other cases...</p> <ul> <li>$q+2=lp,\,p-1=mq$ </li> <li>$q-2=lp,\,p-1=mq$ </li> <li>$q-2=lp,\,p+1=mq$ </li> </ul>
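An exploratory brute-force search (a sketch; the bound 200 is an arbitrary choice) suggests the solutions are the pairs $(p,2)$ for odd primes $p$ — since $q=2$ makes $q^2-4=0$ — together with $(5,3)$:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(2, 200) if is_prime(n)]
pairs = [(p, q) for p in primes for q in primes
         if (q * q - 4) % p == 0 and (p * p - 1) % q == 0]
print(pairs)
```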
2,200,034
<p>Suppose the points A and B are connected by two roads of lengths $S_1$ and $S_2$. Cars can drive from A to B on either of the two roads. They start at point A and must decide, <strong>one after the other</strong>, which road to take. They know how many cars already chose to drive on each road.</p> <p>The speed of all cars on a road is equal to $\frac{1}{\sqrt{N}}$, where $N$ is the number of cars driving on that road at any time. </p> <p>What is the limit of the ratio of cars on road $1$ to road $2$ as the number of cars that arrive per unit of time tends to infinity?</p> <p>EDIT: The drivers can make decisions instantly and are $100\%$ logical.</p>
Brian Tung
224,454
<p>I think you're better off looking for the long-run equilibrium result in the limit as time goes to infinity (assuming that interarrival times are small compared to the travel time), rather than as the arrival rate goes to infinity.</p> <p>With that interpretation, at equilibrium, the travel times on the two roads are equal. That is, the speeds on the two roads are proportional to their lengths. Symbolically,</p> <p>$$ \frac{S_2}{S_1} = \frac{1/\sqrt{N_2}}{1/\sqrt{N_1}} = \sqrt\frac{N_1}{N_2} $$</p> <p>This can be simplified to obtain</p> <p>$$ \frac{N_2}{N_1} = \left(\frac{S_1}{S_2}\right)^2 $$</p>
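A greedy simulation reproduces the equilibrium ratio (a sketch; the decision rule — each driver joins the road with the smaller resulting travel time $S_i\sqrt{N_i+1}$ — is an assumed model of the "100% logical" drivers):

```python
import math

S1, S2 = 1.0, 2.0
N1 = N2 = 0
for _ in range(100_000):
    # Join whichever road has the smaller travel time after joining it.
    if S1 * math.sqrt(N1 + 1) <= S2 * math.sqrt(N2 + 1):
        N1 += 1
    else:
        N2 += 1
print(N2 / N1, (S1 / S2)**2)   # both ~0.25
```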
366,844
<p>Using the infinite product of $\sin(\pi z)$, one can find the Hadamard product for $e^z-1$:</p> <p>$$e^z-1 =2ie^{z/2}\sin(-iz/2)= 2i e^{z/2} (-iz/2) \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right)\\= e^{z/2} z \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right).$$</p> <p>I don't see a way to find the product for $\cos\pi z$. A naive attempt is letting $\{a_n\}\subset{\Bbb C}$ be all the zeros of $\cos(\pi z)$ and showing the possible convergence of $$ \prod_{n=1}^\infty\left(1-\frac{z}{a_n}\right) $$</p> <p>Is there an alternative way to find the Hadamard product in the title for $\cos\pi z$?</p>
Ron Gordon
53,268
<p>Well, you can perform a logarithmic differentiation and get a series that may be summed using the residue theorem.</p> <p>Let $p(z)$ be the product in question; we intend to prove that $p(z)=\cos{\pi z}$. </p> <p>$$\log{p} = \sum_{n=0}^{\infty} \log{\left ( 1-\frac{4 z^2}{(2 n+1)^2}\right)}$$</p> <p>$$\frac{d}{dz} \log{p} = -z \sum_{n=-\infty}^{\infty} \frac{1}{(n+(1/2))^2-z^2}$$</p> <p>Note that we were able to use the symmetry of the sum to change the lower limit to $-\infty$. This sum is in a form that may be evaluated using the residue theorem:</p> <p>$$\sum_{n=-\infty}^{\infty} f(n) = -\sum_k \text{Res}_{s=s_k} [\pi \cot{\pi s} \, f(s)]$$</p> <p>where the $s_k$ are the non-integral poles of $f$. In this case, $f(s) = 1/((s+(1/2))^2-z^2)$, so that the poles of $f$ are at $s_{\pm} = -1/2 \pm z$. The residues of these poles are</p> <p>$$\frac{\pi \cot{(-\pi/2 + \pi z)}}{2 z} - \frac{\pi \cot{(-\pi/2 - \pi z)}}{2 z} = -\frac{\pi \tan{\pi z}}{z}$$</p> <p>Therefore</p> <p>$$\frac{d}{dz} \log{p} = -\pi \tan{\pi z} \implies \log{p} = \log{\cos{\pi z}} + C$$</p> <p>where $C$ is a constant of integration, which using $p(0)=1$ implies that $C=0$. Then</p> <p>$$p(z) = \cos{\pi z}$$</p> <p>as was to be shown.</p>
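The identity $p(z)=\cos\pi z$ can also be checked numerically (a sketch; truncating the product after $N$ factors leaves a relative error of roughly $z^2/N$):

```python
import math

# Partial product of prod_{n>=0} (1 - 4z^2/(2n+1)^2) versus cos(pi z).
z = 0.3
prod = 1.0
for n in range(200_000):
    prod *= 1 - 4 * z * z / (2 * n + 1)**2
print(prod, math.cos(math.pi * z))
```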
366,844
<p>Using the infinite product of $\sin(\pi z)$, one can find the Hadamard product for $e^z-1$:</p> <p>$$e^z-1 =2ie^{z/2}\sin(-iz/2)= 2i e^{z/2} (-iz/2) \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right)\\= e^{z/2} z \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right).$$</p> <p>I don't see a way to find the product for $\cos\pi z$. A naive attempt is letting $\{a_n\}\subset{\Bbb C}$ be all the zeros of $\cos(\pi z)$ and showing the possible convergence of $$ \prod_{n=1}^\infty\left(1-\frac{z}{a_n}\right) $$</p> <p>Is there an alternative way to find the Hadamard product in the title for $\cos\pi z$?</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\left.\prod_{n = 0}^{m} \bracks{1 - 4z^{2}/\pars{2n + 1}^{2}} \right\vert_{\,m\ \in\ \mathbb{N}_{\,\geq 1}}} = \prod_{n = 0}^{m} {\pars{n + 1/2}^{2} - z^{2} \over \pars{n + 1/2}^{2}} \\[5mm] = &amp;\ \on{f}_{m}\pars{z}\on{f}_{m}\pars{-z} \end{align}</span> where <span class="math-container">\begin{align} \on{f}_{m}\pars{z} &amp; \equiv \prod_{n = 0}^{m} {n + 1/2 - z \over n + 1/2} = {\pars{1/2 - z}^{\overline{m + 1}} \over \pars{1/2}^{\overline{m + 1}}} \\[5mm] &amp; = {\pars{1/2 - z + m}!\,/\,\Gamma\pars{1/2 - z} \over \pars{1/2 + m}!\,/\,\Gamma\pars{1/2}} \\[5mm] &amp; \stackrel{{\rm as}\ m\ \to\ \infty}{\sim}\,\,\, {\root{\pi} \over \Gamma\pars{1/2 - z}}\ \times \\[2mm] &amp;\ {\root{2\pi}\pars{1/2 - z + m}^{1 - z + m}\,\,\, \expo{-1/2 + z - m} \over \root{2\pi}\pars{1/2 + m}^{1 + m}\,\, \expo{-1/2 - m}} \\[5mm] &amp; \stackrel{{\rm as}\ m\ \to\ \infty}{\sim}\,\,\, {\root{\pi} \over \Gamma\pars{1/2 - z}}\ \times \\[2mm] &amp;\ {m^{1 - z + m}\,\,\,\,\bracks{1 + \pars{1/2 - z}/m}^{\,m} \over m^{m + 1}\,\,\,\bracks{1 + \pars{1/2}/m}^{\,m}}\expo{z} \\[5mm] &amp; \stackrel{{\rm as}\ m\ \to\ \infty}{\sim}\,\,\, {\root{\pi} \over 
\Gamma\pars{1/2 - z}}\,m^{-z} \end{align}</span> Then, <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\left.\prod_{n = 0}^{m} \bracks{1 - 4z^{2}/\pars{2n + 1}^{2}} \right\vert_{\,m\ \in\ \mathbb{N}_{\,\geq 1}}} \\[5mm] = &amp;\ \lim_{m \to \infty}\braces{\bracks{{\root{\pi} \over \Gamma\pars{1/2 - z}}\,m^{-z}}\bracks{{\root{\pi} \over \Gamma\pars{1/2 + z}}\,m^{z}}} \\[5mm] = &amp;\ {\pi \over \pi/\sin\pars{\pi\bracks{1/2 + z}}} = \bbx{\cos\pars{\pi z}} \\ &amp; \end{align}</span></p>
2,429,769
<p>I watched a <a href="https://www.youtube.com/watch?v=lXkRj6MKbZs" rel="nofollow noreferrer">great youtube video</a> about how to prove that a limit of a multivariable function exists. It explained that one method is by substitution. For example, we can solve $$\lim_{(x, y) \to (0,0)} \frac{xy}{\sqrt{x^2 + y^2}}$$</p> <p>by substituting with polar coordinates - namely letting $x = r\cos\theta$ and $y=r\sin\theta$ </p> <p>The youtube video mentioned that polar coordinates are not the only accepted form of substitution. </p> <p><strong>What are other forms of substitution one can use? When do you know which form to substitute with?</strong></p>
An aedonist
143,679
<p>Maybe the following visualisation could be helpful.</p> <p>Draw a rectangle with a base of length $1$ and a height of $v$, the initial velocity.</p> <p>Next to it, to the right, draw a rectangle whose base is still $1$, while the height equals $v - d$, $d$ for drag. You can continue, so that the $n$-th rectangle has height $v - (n-1)d$, until the height becomes negative. It turns out the travelled distance equals the total area of the rectangles, which is quite easy to calculate.</p> <p>A reasonable approximation (should you reduce your time step duration in future) is given by $\frac {v \cdot \frac{v}{d}}{2} = \frac{v^2}{2d}$, the formula for the area of a triangle. For the exact result you could look up "Gauss's trick", allegedly discovered by the great mathematician at the age of 7.</p>
1,208,323
<p>I am trying to prove that an $n \times n$ matrix $A$ and $A^T$ have the same eigenvalues.</p> <p>I can prove that $A$ and $A^T$ have the same entries on the diagonal, but I am not sure where to go from there.</p>
PersonaA
226,382
<p>Hint: They will have the same eigenvalues if they have the same characteristic polynomial (and it is easy to show that they do).</p>
1,208,323
<p>I am trying to prove that an $n \times n$ matrix $A$ and $A^T$ have the same eigenvalues.</p> <p>I can prove that $A$ and $A^T$ have the same entries on the diagonal, but I am not sure where to go from there.</p>
Community
-1
<p>$\lambda$ is an eigenvalue of the $n \times n$ matrix $A$ iff $\det(A-\lambda I)=0$.</p> <p>Remember that the determinant of a matrix is equal to the determinant of its transpose. Thus if $\det(A-\lambda I)=0$ then $\det([A-\lambda I]^T)=0$. But $[A-\lambda I]^T = A^T-\lambda I^T = A^T - \lambda I$. Therefore $\det(A-\lambda I) = 0 \implies \det(A^T - \lambda I) =0$. Thus $A$ and $A^T$ have the same eigenvalues.</p>
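The key fact $\det(M)=\det(M^T)$ can be spot-checked numerically (a sketch with a hypothetical $3\times3$ matrix; $[A-\lambda I]^T = A^T-\lambda I$, so equal determinants at every $\lambda$ mean equal characteristic polynomials):

```python
def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def transpose(M):
    return [list(r) for r in zip(*M)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]
for lam in [-2.0, 0.5, 1.0, 3.7]:   # sample values of lambda
    AmL = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    assert abs(det3(AmL) - det3(transpose(AmL))) < 1e-12
print("det(A - lam I) == det(A^T - lam I) at all sampled lam")
```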
1,301,522
<p>Many texts will define a manifold as "a second-countable Hausdorff space that is locally homeomorphic to Euclidean space". By definition of homeomorphism, shouldn't this really and officially read as "locally homeomorphic to a <em>subset</em> of Euclidean space"?</p>
Rob Arthan
23,171
<p>Definitely not: "locally homeomorphic to an <em>open</em> subset of Euclidean space" would be equivalent to the stated (and standard) definition, but Euclidean $n$-space for any $n &gt; 0$ has subsets that are not manifolds, e.g., $\{0\} \cup \{1/n : 0 &lt; n \in \mathbb{N}\} \subseteq \mathbb{R}$ or the union of the $x$-axis and the $y$-axis in $\mathbb{R}^2$. Any such subset would be a manifold according to your proposed alternative definition.</p>
241,998
<p>Consider a list of even length, for example <code>list={1,2,3,4,5,6,7,8}</code></p> <p>What is the fastest way to accomplish both of these operations?</p> <p><strong>Operation 1</strong>: two-by-two element inversion; the output is:</p> <pre><code>{2,1,4,3,6,5,8,7} </code></pre> <p>Code that works is:</p> <pre><code>Flatten[Reverse/@Partition[list,2]] </code></pre> <p><strong>Operation 2</strong>: two-by-two reverse; the output is:</p> <pre><code>{7,8,5,6,3,4,1,2} </code></pre> <p>Code that works is:</p> <pre><code>Flatten@(Reverse@Partition[list,2]) </code></pre> <p><strong>The real lists have length 16; no need for anything adapted to long lists</strong></p>
kglr
125
<pre><code>PermutationList[Cycles[Partition[list,2]]] </code></pre> <blockquote> <pre><code>{2, 1, 4, 3, 6, 5, 8, 7} </code></pre> </blockquote> <p>Simply <code>Reverse</code> the output above to get your second list:</p> <pre><code>Reverse @ % </code></pre> <blockquote> <pre><code>{7, 8, 5, 6, 3, 4, 1, 2} </code></pre> </blockquote> <p>Alternatively, define a <code>Cycles</code> object that can be used to <code>Permute</code> other lists:</p> <pre><code>cycles = Cycles[Partition[Range @ 8, 2]]; Permute[list, cycles] </code></pre> <blockquote> <pre><code>{2, 1, 4, 3, 6, 5, 8, 7} </code></pre> </blockquote> <p>or</p> <pre><code> list[[PermutationList @ cycles]] </code></pre> <blockquote> <pre><code>{2, 1, 4, 3, 6, 5, 8, 7} </code></pre> </blockquote> <p><code>Reverse</code> the input list and <code>Permute</code>:</p> <pre><code>Permute[Reverse @ list,cycles] </code></pre> <blockquote> <pre><code>{7, 8, 5, 6, 3, 4, 1, 2} </code></pre> </blockquote> <pre><code>Permute[CharacterRange[&quot;A&quot;, &quot;H&quot;], cycles] </code></pre> <blockquote> <pre><code>{&quot;B&quot;, &quot;A&quot;, &quot;D&quot;, &quot;C&quot;, &quot;F&quot;, &quot;E&quot;, &quot;H&quot;, &quot;G&quot;} </code></pre> </blockquote> <pre><code>Permute[Reverse @ CharacterRange[&quot;A&quot;, &quot;H&quot;], cycles] </code></pre> <blockquote> <pre><code>{&quot;G&quot;, &quot;H&quot;, &quot;E&quot;, &quot;F&quot;, &quot;C&quot;, &quot;D&quot;, &quot;A&quot;, &quot;B&quot;} </code></pre> </blockquote>
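For cross-checking outside Mathematica, the same two operations can be sketched in Python using slicing (an illustration, not part of the answer):

```python
lst = [1, 2, 3, 4, 5, 6, 7, 8]

# Operation 1: swap within consecutive pairs.
op1 = [x for a, b in zip(lst[0::2], lst[1::2]) for x in (b, a)]

# Operation 2: the same pairwise swap applied to the reversed list.
r = lst[::-1]
op2 = [x for a, b in zip(r[0::2], r[1::2]) for x in (b, a)]

print(op1)   # [2, 1, 4, 3, 6, 5, 8, 7]
print(op2)   # [7, 8, 5, 6, 3, 4, 1, 2]
```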
3,772,399
<p>I need help with the following question:</p> <p>Let <span class="math-container">$X_i$</span> be independent, non-negative random variables, <span class="math-container">$i \in \{1,...,n\}$</span>. I want to show that for all <span class="math-container">$t &gt; 0$</span>, <span class="math-container">$$P(S_n &gt; 3t) \leq P(\max_{1 \leq i \leq n} X_i &gt; t) + P(S_n &gt;t)^2$$</span> where we define <span class="math-container">$S_n \equiv \sum_{i = 1}^n X_i$</span></p> <hr /> <p><strong>My &quot;attempt&quot;:</strong> I'm not really sure how to approach, but obviously we can say that <span class="math-container">$$P(S_n &gt; 3t) = P(S_n &gt; 3t, \max_{1 \leq i \leq n} X_i &gt; t) + P(S_n &gt; 3t, \max_{1 \leq i \leq n} X_i \leq t) \\ \leq P(\max_{1 \leq i \leq n} X_i &gt; t) + \sum_{i=1}^n P(S_i &gt; 3t, S_j \leq 3t \quad \forall j &lt; i, \max_{i \leq n} X_i \leq t)$$</span> since we have that <span class="math-container">$\{S_n &gt; 3t\} = \bigcup_{i=1}^n \{S_i &gt; 3t, S_j \leq 3t \quad \forall j &lt; i\}$</span> and this is a disjoint union, but I don't know where to go from here. Any help would be appreciated!</p>
Eric Wofsey
86,856
<p>You can make a continuity argument to reduce to the case of diagonalizable matrices. The characteristic polynomial of <span class="math-container">$C_A$</span> varies continuously with <span class="math-container">$A$</span>, and diagonalizable matrices are dense in <span class="math-container">$GL_n(\mathbb{C})$</span> (for instance, because every matrix is conjugate to an upper triangular one, and an upper triangular matrix can always be perturbed to a diagonalizable one by just making the diagonal entries distinct).</p> <p>So, if you know the eigenvalues (with their multiplicities) of <span class="math-container">$C_A$</span> when <span class="math-container">$A$</span> is diagonal, you can deduce them for arbitrary <span class="math-container">$A$</span> by continuity. In the case that <span class="math-container">$A$</span> is diagonal, you can write down what <span class="math-container">$C_A$</span> does to the entries of a matrix quite explicitly to find its eigenvalues.</p> <p>More details on how to finish are hidden below.</p> <blockquote class="spoiler"> <p> Suppose <span class="math-container">$A$</span> is diagonal with diagonal entries <span class="math-container">$a_1,\dots,a_n$</span>. Then <span class="math-container">$C_A$</span> multiplies the <span class="math-container">$ij$</span> entry of a matrix by <span class="math-container">$a_ia_j^{-1}$</span> (since left multiplication by <span class="math-container">$A$</span> multiplies the <span class="math-container">$i$</span>th row by <span class="math-container">$a_i$</span> and right multiplication by <span class="math-container">$A^{-1}$</span> multiplies the <span class="math-container">$j$</span>th column by <span class="math-container">$a_j^{-1}$</span>). In other words, with respect to the standard basis on <span class="math-container">$M_n(\mathbb{C})$</span>, <span class="math-container">$C_A$</span> is diagonal with diagonal entries <span class="math-container">$a_ia_j^{-1}$</span>.<br /> <br /> Thus, if <span class="math-container">$A$</span> is any diagonalizable matrix, the eigenvalues of <span class="math-container">$C_A$</span> (with multiplicity) are <span class="math-container">$a_ia_j^{-1}$</span>, where the <span class="math-container">$a_i$</span> are the eigenvalues of <span class="math-container">$A$</span>. It follows by continuity that the same is true for arbitrary <span class="math-container">$A\in GL_n(\mathbb{C})$</span>.</p> </blockquote>
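As a quick sanity check, here is a small pure-Python computation (the choice $A = \operatorname{diag}(2,3,5)$ is an arbitrary example, not from the answer) verifying that each matrix unit $E_{ij}$ is an eigenvector of $C_A(X) = AXA^{-1}$ and that the resulting spectrum is exactly the multiset of ratios of the eigenvalues of $A$:

```python
from fractions import Fraction

# Hypothetical example: A = diag(2, 3, 5), so its eigenvalues are 2, 3, 5.
a = [Fraction(2), Fraction(3), Fraction(5)]
n = len(a)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[a[i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]
Ainv = [[1 / a[i] if i == j else Fraction(0) for j in range(n)] for i in range(n)]

# Collect the scaling factor that C_A attaches to every matrix unit E_ij.
factors = []
for i in range(n):
    for j in range(n):
        E = [[Fraction(1) if (r, c) == (i, j) else Fraction(0) for c in range(n)]
             for r in range(n)]
        CE = matmul(matmul(A, E), Ainv)
        factors.append(CE[i][j])  # eigenvalue attached to E_ij
        # all other entries of C_A(E_ij) stay zero:
        assert all(CE[r][c] == 0 for r in range(n) for c in range(n) if (r, c) != (i, j))

ratios = sorted(a[i] / a[j] for i in range(n) for j in range(n))
print(sorted(factors) == ratios)  # True: the spectrum of C_A is the ratios a_i/a_j
```

Note that the multiset $\{a_i/a_j\}$ is unchanged under swapping $i$ and $j$, so either ratio convention describes the same spectrum.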
1,989,182
<p>Why does only one particular solution allow enough degrees of freedom for the general solution?</p>
Artem
29,547
<p>This is an elementary but very important fact for any <em>linear</em> operators. That is, let $A$ be a linear operator, $A\colon U\longrightarrow V$. Consider the problem $$ A(u)=v,\tag{1} $$ that is, to find a $u\in U$ for a given $v\in V$. Assume that such a solution exists. Then it is true that the general solution to $(1)$ can be written $$ u_g=u_h+u_p, $$ where $u_h\in\ker A$, that is, solves the homogeneous equation $A(u)=0$, and $u_p$ is <em>any</em> particular solution to $(1)$.</p> <p><em>Proof:</em> First, using the linearity of $A$, show that if $u_1$ and $u_2$ solve $(1)$ then $u_1-u_2$ solves $A(u)=0$. Now, assume that some specific $u_p$ solves $(1)$ and let $u_h$ be a solution to $A(u)=0$. Then, clearly, $u_p+u_h$ also solves $(1)$. Now take <em>any</em> $u_g$ that solves $(1)$. Again, $u_g-u_p$ solves $A(u)=0$, and hence $u_g-u_p=u_h$, from which $$ u_g=u_h+u_p. $$</p>
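A tiny numerical illustration of this fact (the singular matrix $A$ below is my own arbitrary example): two particular solutions of $A(u)=v$ differ by an element of $\ker A$.

```python
# Hypothetical example: A = [[1, 1], [2, 2]] acting on R^2 (a singular matrix).
def apply_A(u):
    return (u[0] + u[1], 2 * u[0] + 2 * u[1])

v = (3, 6)

u_p1 = (1, 2)   # one particular solution of A(u) = v
u_p2 = (0, 3)   # another particular solution
assert apply_A(u_p1) == v and apply_A(u_p2) == v

# Their difference solves the homogeneous equation A(u) = 0,
# i.e. it lies in ker A, exactly as the proof above shows.
diff = (u_p1[0] - u_p2[0], u_p1[1] - u_p2[1])
print(apply_A(diff))  # (0, 0)
```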
2,221,897
<p>Show that</p> <p>$$\lim_{n \to \infty} \sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \ln(2)$$</p> <p>How many ways are there to prove it?</p> <p>Is there a standard way?</p> <p>I was thinking about making it a Riemann sum. Or telescoping.</p> <p>What is the easiest way? What is the shortest way?</p>
Claude Leibovici
82,404
<p><em>Just added for your curiosity.</em></p> <p>Riemann sum is certainly the fastest way to do it but you can also do it differently using generalized harmonic numbers (after partial fraction decomposition) and obtain $$S_n=\sum_{k=3}^n \frac{2k}{k^2+n^2+1}=-H_{2-\sqrt{-n^2-1}}+H_{n-\sqrt{-n^2-1}}-H_{\sqrt{-n^2-1}+2}+H_{n+\sqrt{-n^2-1}}$$ Now, using the asymptotics $$H_p=\gamma +\log \left({p}\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ you should arrive at $$S_n=(\log (1-i)+\log (1+i))+\frac{1}{2 n}-\frac{20}{3 n^2}-\frac{1}{4 n^3}+O\left(\frac{1}{n^4}\right)$$ that is to say $$S_n=\log (2)+\frac{1}{2 n}-\frac{20}{3 n^2}-\frac{1}{4 n^3}+O\left(\frac{1}{n^4}\right)$$ which, for sure, shows the limit but also how it is approached; moreover, it gives a very good approximation of $S_n$.</p> <p>For example $S_{10}=\frac{1402864984}{2067340275}\approx 0.6786$ while the above expansion would give $\log (2)-\frac{203}{12000}\approx 0.6762$.</p> <p><strong>Edit for clarity</strong></p> <p>Let $\alpha=-i\sqrt{n^2+1}$, $\beta=i\sqrt{n^2+1}$ be the roots of $k^2+n^2+1=0$. Partial fraction decomposition leads to $$\frac{2k}{k^2+n^2+1}=\frac{2}{\alpha -\beta }\left(\frac \alpha {k-\alpha}-\frac \beta {k-\beta} \right)$$ Now, using $$\sum_{k=3}^n\frac 1{k-a}=\psi ^{(0)}(-a+n+1)-\psi ^{(0)}(3-a)=H_{n-a}-H_{2-a}$$</p>
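The exact value of $S_{10}$ quoted above, and the approach of $S_n$ toward $\log(2)$, can both be checked with a short Python computation (an independent numerical check, not part of the original derivation):

```python
from fractions import Fraction
import math

# Exact partial sum S_n = sum_{k=3}^{n} 2k / (k^2 + n^2 + 1).
def S(n):
    return sum(Fraction(2 * k, k * k + n * n + 1) for k in range(3, n + 1))

# The exact rational value quoted above for n = 10:
print(S(10) == Fraction(1402864984, 2067340275))  # True

# The partial sums approach log(2); the leading error term is about 1/(2n).
n = 10**5
s = sum(2 * k / (k * k + n * n + 1) for k in range(3, n + 1))
print(s - math.log(2))  # small
```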
2,221,897
<p>Show that</p> <p>$$\lim_{n \to \infty} \sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \ln(2)$$</p> <p>How many ways are there to prove it?</p> <p>Is there a standard way?</p> <p>I was thinking about making it a Riemann sum. Or telescoping.</p> <p>What is the easiest way? What is the shortest way?</p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>$\ds{\lim_{n \to \infty}\sum_{k = 3}^{n}{2k \over k^{2} + n^{2} + 1} = \ln\pars{2}:\ {\Large ?}}$.</p> </blockquote> <p>\begin{align} \lim_{n \to \infty}\sum_{k = 3}^{n}{2k \over k^{2} + n^{2} + 1} &amp; = \lim_{n \to \infty}\sum_{k = 3}^{n}{2k \over k^{2} + n^{2}} + \lim_{n \to \infty}\sum_{k = 3}^{n}\pars{% {2k \over k^{2} + n^{2} + 1} - {2k \over k^{2} + n^{2}}} \\[5mm] &amp; = 2\lim_{n \to \infty}\bracks{{1 \over n}\sum_{k = 3}^{n}{k/n \over 1 + \pars{k/n}^{2}}} - 2\lim_{n \to \infty}\underbrace{\sum_{k = 3}^{n} {k \over \pars{k^{2} + n^{2} + 1}\pars{k^{2} + n^{2}}}} _{\ds{&lt; {\sum_{k = 3}^{n}k \over \pars{n^{2} + 10}\pars{n^{2} + 9}} \stackrel{\mrm{as}\ n\ \to\ \infty}{\to} {\large 0}}} \\[5mm] &amp; = 2\int_{0}^{1}{x \over x^{2} + 1}\,\dd x = \bbx{\ln\pars{2}} \approx 0.6931 \end{align}</p>
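The Riemann-sum step above is easy to check numerically; this sketch (my addition) evaluates the left Riemann sum for $2\int_0^1 \frac{x}{1+x^2}\,dx$, starting at $k=3$ to match the sum in the answer:

```python
import math

# Left Riemann sum for 2 * integral_0^1 x/(1+x^2) dx with n subintervals,
# which is the main term isolated in the answer above.
def riemann(n):
    return (2 / n) * sum((k / n) / (1 + (k / n) ** 2) for k in range(3, n + 1))

for n in (10, 100, 10000):
    print(n, riemann(n))  # tends to log(2) = 0.693147...
```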
1,265,531
<p>I understand the question but I am not sure how to solve it. For example, if we flip HHHTTTTT then the remaining two flips must be heads because of the condition in the question. This however seems counterintuitive. I believe that there are $2^{10}$ possible strings, but I am unsure of how to count all possible strings that begin with HHH.</p>
ajotatxe
132,456
<p>If $A$ means three heads at the beginning, and $B$ means $5$ heads and $5$ tails, we want to compute $$p(A/B)=\frac{p(A\cap B)}{p(B)}$$</p> <p>And $$p(A\cap B)=\frac{\binom72}{2^{10}}$$ $$p(B)=\frac{\binom{10}5}{2^{10}}$$</p>
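These binomial counts can be confirmed by brute-force enumeration of all $2^{10}$ equally likely flip sequences (a verification sketch, not part of the original answer):

```python
from itertools import product
from math import comb

# Enumerate all 2^10 strings of 10 fair coin flips.
favorable = total = 0
for flips in product("HT", repeat=10):
    if flips.count("H") == 5:               # condition B: 5 heads and 5 tails
        total += 1
        if flips[:3] == ("H", "H", "H"):    # event A: starts with three heads
            favorable += 1

print(favorable, total)  # 21 252, i.e. C(7,2) and C(10,5)
print(favorable / total)  # p(A|B) = 1/12
```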
33,153
<p>Here is one definition of a differential equation:</p> <blockquote> <p>&quot;An equation containing the derivatives of one or more dependent variables, with respect to one or more independent variables, is said to be a differential equation (DE)&quot; <em>(Zill - A First Course in Differential Equations)</em></p> </blockquote> <p>Here is another:</p> <blockquote> <p>&quot;A differential equation is a relationship between a function of time &amp; its derivatives&quot; <em>(Braun - Differential equations and their applications)</em></p> </blockquote> <p>Here is another:</p> <blockquote> <p>&quot;Equations in which the unknown function or the vector function appears under the sign of the derivative or the differential are called differential equations&quot; <em>(L. Elsgolts - Differential Equations &amp; the Calculus of Variations)</em></p> </blockquote> <p>Here is another:</p> <blockquote> <p>&quot;Let <span class="math-container">$f(x)$</span> define a function of <span class="math-container">$x$</span> on an interval <span class="math-container">$I: a &lt; x &lt; b$</span>. By an ordinary differential equation we mean an equation involving <span class="math-container">$x$</span>, the function <span class="math-container">$f(x)$</span> and one or more of its derivatives&quot; <em>(Tenenbaum/Pollard - Ordinary Differential Equations)</em></p> </blockquote> <p>Here is another:</p> <blockquote> <p>&quot;A differential equation is an equation that relates in a nontrivial way an unknown function &amp; one or more of the derivatives or differentials of an unknown function with respect to one or more independent variables.&quot; <em>(Ross - Differential Equations)</em></p> </blockquote> <p>Here is another:</p> <blockquote> <p>&quot;A differential equation is an equation relating some function <span class="math-container">$f$</span> to one or more of its derivatives.&quot; <em>(Krantz - Differential equations demystified)</em></p> </blockquote> <p>Now, you can see that while there is some minor variation between them (calling <span class="math-container">$f(x)$</span> the function instead of <span class="math-container">$f$</span>, or calling it a relationship instead of an equation), they all generally hint at the same thing.</p> <p>However:</p> <blockquote> <p>&quot;Let <span class="math-container">$U$</span> be an open domain of n-dimensional euclidean space, &amp; let <span class="math-container">$v$</span> be a vector field in <span class="math-container">$U$</span>. Then by the differential equation determined by the vector field <span class="math-container">$v$</span> is meant the equation <span class="math-container">$x' = v(x), x \in U$</span>.</p> <p>Differential equations are sometimes said to be equations containing unknown functions and their derivatives. This is false. For example, the equation <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span> is not a differential equation.&quot; <em>(Arnold - Ordinary Differential Equations)</em></p> </blockquote> <p>This is quite different, and the last comment basically says that all of the above definitions, in all of the standard textbooks, are in fact incorrect.</p> <p>Would anyone care to expand upon this point? Some of you might know about Arnold's book &amp; perhaps be able to give some clearer examples than <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>; I honestly can't even see how to make sense of <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>. The more explicit (and with more detail) the better!</p> <p>A second question I would really appreciate an answer to would be - is there any other book that takes the view of differential equations that Arnold does? I can't find any elementary book that starts by defining differential equations in the way Arnold does and then goes on to work in phase spaces etc. Multiple references welcomed.</p>
Sam Lisi
8,343
<blockquote> <p><i> "When I use a word," Humpty Dumpty said, in a rather a scornful tone, "it means just what I choose it to mean—neither more nor less."</i></p> </blockquote> <p>I think Arnol'd is correct, but I think he is being unnecessarily confrontational about it. All the books on your list that I am familiar with nearly immediately jump to a more precise formulation that a differential equation is one of the two following things: \[ y^{(n)}(t) = F(t, y(t), y'(t), \dots, y^{(n-1)}(t) ), \] or \[ G(t, y(t), \dots, y^{(n)}(t)) = 0. \]</p> <p>Here is another example of an equation that I would not want to call a differential equation: \[ y'(t) = y(t-1). \] This meets the heuristic definition, but fails to be of the form I specified above (or of the form Arnol'd considers).</p> <p>I now see that Qiaochu has written nearly the same thing above. </p> <p>btw, I think Arnold's book is fantastic, but should be complemented with a more standard treatment of ODE, if only so that you know what everyone else knows in addition to the topics Arnold focuses on. </p> <hr> <p>EDIT: To answer the 2nd half of the question, I don't know of any books that are as geometric as Arnold. IMO, the big strength of his book is that he makes the geometric intuition jump out at the reader, and downplays the analytical side of things. This complements the more traditional books that focus on the analytical aspects (and on explicit solutions) and lose all the geometry.</p> <p>Arnold has another book that is somewhat more advanced, <i>Mathematical Methods of Classical Mechanics</i>. I think it's another great book, though it's hard to read. He also has a book called <i>Geometrical methods in the theory of ODE</i>. This is also a more advanced book, so it is not one you want to look at yet.</p> <p>A book that I found very compelling was Hirsch and Smale, <i>Differential Equations, Dynamical Systems and Linear Algebra</i>. 
It's more analytical than Arnold, but is more geometric than most.</p> <hr> <p>EDIT 8 years later: Let me add a recommendation for Strogatz's <i>Nonlinear dynamics and chaos</i>. I think it's a beautiful book and wish I could go back in time and give it to my younger self. </p>
378,966
<p>$$A_t-A_{xx} = \sin(\pi x)$$ $$A(0,t)=A(1,t)=0$$ $$A(x,t=0)=0$$ Find $A$.</p> <p>I know I need to find the homogeneous and particular solutions. I'm just not sure how to proceed with this PDE.</p>
Ron Gordon
53,268
<p>The solution may be accomplished using a Laplace transform. Defining</p> <p>$$\hat{A}(x,s) = \int_0^{\infty} dt \, A(x,t) \, e^{-s t}$$</p> <p>and applying the initial condition, we get an ordinary differential equation in $x$:</p> <p>$$\frac{d^2}{dx^2} \hat{A} - s \hat{A} = -\frac{1}{s} \sin{\pi x}$$</p> <p>The zero boundary conditions in $x$ mean that the homogeneous solution is zero. The solution then takes the form $\hat{A}(x,s) = P \sin{\pi x}$. Plugging this into the equation, we get the solution</p> <p>$$\hat{A}(x,s) = \frac{\sin{\pi x}}{s (\pi^2 + s)}$$</p> <p>You can use partial fractions, or simply look up in a table of inverse LT's; the solution is</p> <p>$$A(x,t) = \frac{1}{\pi^2} \sin{\pi x} \, (1-e^{-\pi^2 t})$$</p>
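As a quick numerical check (my addition, using central finite differences at a few arbitrary sample points), the closed form satisfies the PDE and the side conditions:

```python
import math

# Candidate solution derived above, taken as given:
def A(x, t):
    return math.sin(math.pi * x) * (1 - math.exp(-math.pi**2 * t)) / math.pi**2

h = 1e-4
def residual(x, t):
    # central differences for A_t and A_xx
    A_t  = (A(x, t + h) - A(x, t - h)) / (2 * h)
    A_xx = (A(x + h, t) - 2 * A(x, t) + A(x - h, t)) / h**2
    return A_t - A_xx - math.sin(math.pi * x)

# PDE residual is ~0 at interior points; boundary and initial conditions hold.
print(max(abs(residual(x, t)) for x in (0.2, 0.5, 0.8) for t in (0.1, 1.0)))
print(A(0, 0.7), A(1, 0.7), A(0.3, 0))
```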
163,640
<p>Early in a course in Algebra the result that every group can be embedded as a subgroup<br> of a symmetric group is introduced. One can further work on it to embed it as a subgroup of a suitable (higher degree) alternating group.</p> <p>Inverting the view point we can say that the family of simple groups $A_n, n\geq 5$, contains all finite groups as their subgroups.</p> <p>My question now is, is the same true for each of the other infinite families listed in the Classification of Finite Simple Groups?</p> <p>In case the answer to this question is negative it might lead to some categorization. Cayley's embedding theorem is often considered a 'useless theorem', as no result about that group can be proved using that embedding. (Is that correct?) Other simple groups being somewhat more special (structure preserving maps of some non-trivial structure), we can categorize groups according to which infinite family(ies) they fall into. And groups embeddable in a particular family, but not embeddable in another may exhibit some special property.</p> <p>Hope this provides a motivation for the question.</p>
Derek Holt
35,840
<p>In general, for the groups in a family of fixed Lie rank, there will be a bound on the degree of an alternating group that can occur as a subgroup, so the answer to your question is no. This is easily seen from the fact that they have representations of a fixed degree. For example $E_8(q)$ has a representation of degree $248$ over ${\mathbb F}_q$, so it cannot possibly contain $A_n$ for $n &gt; 250$, and I would guess that there is a much lower bound than that.</p> <p>Of course, if by a family you mean one of the doubly infinite families like $A_n(q)$ for arbitrary $n$ then the answer is yes, because, for each such family, by making $n$ sufficiently large, the groups will contain alternating groups of arbitrarily large degrees as subgroups of their Weyl groups.</p> <p>To be more specific, the image of the natural permutation representation of $A_n$ over ${\mathbb F}_q$ preserves a unitary form and an orthogonal form with matrix $I_n$, so $A_n &lt; L_n(q)$, $A_n &lt; U_n(q)$ and $A_n$ lies in one of the types of orthogonal groups. I am not sure if it lies in the orthogonal type that preserves a diagonal form with non-square determinant (but I think it does), but that type certainly contains $A_{n-1}$. It is also easy to see that $A_n &lt; {\rm Sp}_{2n}(q)$ for all $q$. That deals with all of the doubly infinite families.</p>
163,640
<p>Early in a course in Algebra the result that every group can be embedded as a subgroup<br> of a symmetric group is introduced. One can further work on it to embed it as a subgroup of a suitable (higher degree) alternating group.</p> <p>Inverting the view point we can say that the family of simple groups $A_n, n\geq 5$, contains all finite groups as their subgroups.</p> <p>My question now is, is the same true for each of the other infinite families listed in the Classification of Finite Simple Groups?</p> <p>In case the answer to this question is negative it might lead to some categorization. Cayley's embedding theorem is often considered a 'useless theorem', as no result about that group can be proved using that embedding. (Is that correct?) Other simple groups being somewhat more special (structure preserving maps of some non-trivial structure), we can categorize groups according to which infinite family(ies) they fall into. And groups embeddable in a particular family, but not embeddable in another may exhibit some special property.</p> <p>Hope this provides a motivation for the question.</p>
DavidLHarden
12,610
<p>Another bit of information available from Cayley's Theorem: </p> <p>It is possible to prove, without using the transfer homomorphism, that a finite group $G$ with a cyclic (and nontrivial) Sylow 2-subgroup has a normal 2-complement.<br> First we show that having a cyclic, nontrivial Sylow 2-subgroup implies the group has a subgroup of index 2: a generator of a Sylow 2-subgroup gets sent, via Cayley's embedding, to an odd permutation of the elements of $G$.<br> Next, note that if that subgroup of index 2 again has even order, it again has a subgroup of index 2.</p>
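The first step is easy to see concretely in small cyclic groups (the examples $\mathbb{Z}_6$ and $\mathbb{Z}_{12}$ below are my own, not from the answer): a generator of the Sylow 2-subgroup acts as an odd permutation in Cayley's regular representation.

```python
def sign_of_regular_action(n, g):
    """Sign of the permutation x -> x + g of Z_n (Cayley's regular action)."""
    seen, cycles = set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = (x + g) % n
    return (-1) ** (n - cycles)

# In Z_6 the Sylow 2-subgroup is {0, 3}; its generator 3 acts as
# (0 3)(1 4)(2 5), an odd permutation, so Z_6 has a subgroup of index 2.
print(sign_of_regular_action(6, 3))   # -1

# In Z_12 the Sylow 2-subgroup {0, 3, 6, 9} is cyclic with generator 3;
# the regular action of 3 is three 4-cycles, again odd.
print(sign_of_regular_action(12, 3))  # -1

# By contrast, an element of odd order gives an even permutation.
print(sign_of_regular_action(6, 2))   # 1
```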
2,767,679
<blockquote> <p>Let $A$ be an $m \times n$ matrix and let $B, C$ be $n \times p$ matrices. Prove that $A(B + C) = AB + AC$</p> </blockquote> <p>I know it's obvious that it is and that every mathematician takes this for granted but I've been asked to prove it and I don't know how to do it without just multiplying out the brackets. Any help would be greatly appreciated.</p>
Joppy
431,940
<p>The $(i, j)$th entry of the left hand side is $$ \sum_{k = 1}^n a_{ik}(b_{kj} + c_{kj})$$ while the $(i, j)$th entry of the right hand side is $$ \sum_{k = 1}^n a_{ik}b_{kj} + \sum_{k = 1}^n a_{ik} c_{kj}$$ which are indeed equal. And since every entry is equal, the matrices must be equal.</p>
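The entrywise identity above is easy to spot-check with random integer matrices (a verification sketch, not a substitute for the proof):

```python
import random

random.seed(0)
m, n, p = 2, 3, 4  # arbitrary small dimensions

def rand_matrix(r, c):
    return [[random.randint(-9, 9) for _ in range(c)] for _ in range(r)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

A, B, C = rand_matrix(m, n), rand_matrix(n, p), rand_matrix(n, p)
print(matmul(A, matadd(B, C)) == matadd(matmul(A, B), matmul(A, C)))  # True
```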
3,910,739
<p>I am trying to find a pdf for a random variable <span class="math-container">$X$</span> where <span class="math-container">$X=-2Y+1$</span> and <span class="math-container">$Y$</span> is given by <span class="math-container">$N(4,9)$</span></p> <p>Here is my attempt:</p> <p>we know <span class="math-container">$\mu=4$</span> and <span class="math-container">$\sigma=3$</span>. so that the normal distribution of <span class="math-container">$Y$</span> is given by <span class="math-container">$\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}$</span><br /> We can differentiate the cumulative function of <span class="math-container">$X$</span> to get the pdf for <span class="math-container">$X$</span>.<br /> cdf of <span class="math-container">$X = P(X&lt;x)$</span> = <span class="math-container">$P(-2Y+1&lt;x)=P(Y&lt;\frac{-(x-1)}{2})=\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy$</span><br /> so <span class="math-container">$\frac{d}{dx}(\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy)=f(x)$</span>, which is the density function for <span class="math-container">$X$</span><br /> <span class="math-container">$f(x)=-\frac{1}{6\sqrt{2\pi}}e^\frac{-(\frac{-(x-1)}{2}-4)^2}{18}$</span></p> <p>Is this a correct way to approach the problem? I feel like my answer is very funky.</p>
Kolmogorov
551,240
<p>Alternatively, one may use the very well known (and easy) fact that</p> <blockquote> <p>If <span class="math-container">$S \sim N(0,1)$</span>, then <span class="math-container">$T = aS + b \sim N(b , a^2)$</span>.</p> </blockquote> <p>Here, since <span class="math-container">$Y \sim N(4,9)$</span>, we can write <span class="math-container">$Y = 3S + 4$</span>. Therefore, <span class="math-container">$$X = -2Y +1 = -2(3S + 4) + 1 = -6S - 7$$</span> Thus, <span class="math-container">$X \sim N(-7 , 36)$</span>.</p>
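A Monte Carlo spot-check of the conclusion $X \sim N(-7, 36)$ (the sample size and tolerances below are arbitrary choices of mine):

```python
import random
import statistics

random.seed(42)
# Sample Y ~ N(4, 9), i.e. mean 4 and standard deviation 3, then form X = -2Y + 1.
ys = [random.gauss(4, 3) for _ in range(200_000)]
xs = [-2 * y + 1 for y in ys]

print(statistics.fmean(xs))  # close to -7
print(statistics.stdev(xs))  # close to 6, i.e. variance close to 36
```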
870,240
<p>Which number is larger? $\underbrace{888\cdots8}_\text{19 digits}\times\underbrace{333\cdots3}_\text{68 digits}$ or $\underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots67}_\text{68 digits}$? Why? How much is it larger?</p>
Martin Sleziak
8,297
<p>You have<br> $\underbrace{888\cdots8}_\text{19 digits}\times\underbrace{333\cdots3}_\text{68 digits} = 8\cdot 3 \cdot (\underbrace{111\cdots1}_\text{19 digits}\times \underbrace{111\cdots1}_\text{68 digits}) = 24 \cdot (\underbrace{111\cdots1}_\text{19 digits}\times \underbrace{111\cdots1}_\text{68 digits})$.<br> Similarly we get<br> $\underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots6}_\text{68 digits} = 4\cdot 6 \cdot (\underbrace{111\cdots1}_\text{19 digits}\times \underbrace{111\cdots1}_\text{68 digits}) = 24 \cdot (\underbrace{111\cdots1}_\text{19 digits}\times \underbrace{111\cdots1}_\text{68 digits})$.<br> So these two numbers are equal.</p> <p>It is clear that $$\underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots6}_\text{68 digits} \le \underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots7}_\text{68 digits}.$$ Since the multiplier is increased by one, the difference is exactly $\underbrace{444\cdots4}_\text{19 digits}$.</p>
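Python's arbitrary-precision integers make it easy to confirm both claims directly (an independent check, my addition):

```python
# The two "equal" products from the answer, and the question's actual product.
a = int("8" * 19) * int("3" * 68)           # 888...8 (19 digits) * 333...3 (68 digits)
b = int("4" * 19) * int("6" * 67 + "7")     # 444...4 (19 digits) * 666...67 (68 digits)

print(a == int("4" * 19) * int("6" * 68))   # True: the two plain products coincide
print(b - a == int("4" * 19))               # True: the difference is exactly 444...4
```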
4,115,069
<p>I understand 'functionals' as functions of functions, for example:</p> <p><span class="math-container">$$ S[y(x)]= \int_{t_1}^{t_2} \sqrt{1+(y')^2} dx$$</span></p> <p>which is the famous arc length integral.</p> <p>Now, in a similar way, we can write a limit as:</p> <p><span class="math-container">$$L(a, [y(x)] ) = \lim_{x \to a} y(x) \tag{1}$$</span></p> <p>In this way, we can think of a limit as a function of a 'function' and a 'number'. So, would it be correct to call the above object a functional? Why/why not?</p> <p>Examples of (1):</p> <p><span class="math-container">$$L(0,\frac{\sin x}{x}) = \lim_{x \to 0} \frac{\sin x}{x} = 1$$</span></p> <p><span class="math-container">$$L(0,e^x) = 1$$</span></p> <p>etc.</p> <hr /> <p>This doubt mainly emerged while I was answering <a href="https://math.stackexchange.com/questions/4114333/is-l-a-function-of-a/4115070#4115070">this post</a></p>
user21820
21,820
<p>It's not been pointed out yet, but your syntax is wrong. In &quot;<span class="math-container">$L(0,e^x)$</span>&quot; the variable &quot;<span class="math-container">$x$</span>&quot; is undefined, so the whole thing is meaningless if you want <span class="math-container">$L$</span> to be a function. The point is that not all syntax is literally a function. The limit notation includes a limiting variable, which is not expressible in terms of functions in the usual sense. It is the same with summation; in &quot;<span class="math-container">$\sum_{k=1}^n f(k)$</span>&quot; the &quot;<span class="math-container">$f(k)$</span>&quot; is an expression with one free variable <span class="math-container">$k$</span>, <strong>not</strong> a function, and neither is &quot;<span class="math-container">$\sum_{k=1}^n$</span>&quot; a function since it binds the variable <span class="math-container">$k$</span>. It is also the same with quantifiers; in &quot;<span class="math-container">$∀x ∃y ( Q(x,y) )$</span>&quot; the &quot;<span class="math-container">$∀x$</span>&quot; is certainly not a function.</p> <p>So if you want to treat the limiting operation as an abstract mathematical object, you need to <strong>reify</strong> the relevant expressions and syntactic structure. <em>peek-a-boo</em> has shown you one way to do that for limits, but I will show you how to do that for summation to make the underlying concept clearer. Reification simply means to capture the 'essence' of a concept by an object.</p> <p>We typically define summation for commutative rings. Given any commutative ring <span class="math-container">$(R,0,1,+,·)$</span> and any function <span class="math-container">$f : D→R$</span> with <span class="math-container">$D⊆ℤ$</span>, we can define <span class="math-container">$S(f,m,n) = \sum_{k=m}^n f(k)$</span> for every <span class="math-container">$m,n∈D$</span>. 
Here <span class="math-container">$f$</span> reifies the expression &quot;<span class="math-container">$f(k)$</span>&quot; with free variable <span class="math-container">$k$</span>, in the sense that applying <span class="math-container">$f$</span> to the value of an expression &quot;<span class="math-container">$t$</span>&quot; captures the essence of the expression &quot;<span class="math-container">$f(t)$</span>&quot; (i.e. the value of &quot;<span class="math-container">$f(k)$</span>&quot; after substituting the free variable by the term &quot;<span class="math-container">$t$</span>&quot;). And <span class="math-container">$S$</span> reifies the concept of <span class="math-container">$\sum_{k=m}^n E$</span> built from expressions E,m,n where <span class="math-container">$E$</span> has one free variable <span class="math-container">$k$</span>.</p> <p>Importantly, notice that the free variable <strong>does matter</strong> in the syntactic constructions; &quot;<span class="math-container">$\sum_{k=m}^n f(i)$</span>&quot; would <strong>not</strong> mean <span class="math-container">$\sum_{k=m}^n f(k)$</span>. But this issue of matching free variables <strong>does not appear</strong> in the reified parts. 
This commonly occurs in reifying most mathematical notation, including quantifiers, summations/products, limits and so on.</p> <p>Also, if you do want to formalize reasoning about limits in a clean algebraic manner and also be able to deal with undefined limits algebraically, then the best approach is to extend the possible limit values to include some <a href="https://en.wikipedia.org/wiki/Sentinel_value" rel="nofollow noreferrer">sentinel value</a>, say <span class="math-container">$null$</span>, and then define <span class="math-container">$L(f,x)$</span> to be the limit of <span class="math-container">$f$</span> at <span class="math-container">$x$</span> if it exists but <span class="math-container">$null$</span> otherwise, for any function <span class="math-container">$f : D→ℂ$</span> with <span class="math-container">$x∈D⊆ℂ$</span>.</p>
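The reified summation operator $S$ described above is exactly a higher-order function; a minimal Python sketch:

```python
# The reified summation operator S(f, m, n) = sum_{k=m}^{n} f(k).
# The function object f plays the role of the expression "f(k)":
# after reification, no free variable is left over.
def S(f, m, n):
    return sum(f(k) for k in range(m, n + 1))

print(S(lambda k: k * k, 1, 4))  # 1 + 4 + 9 + 16 = 30

# The name of the bound variable is irrelevant after reification:
# lambda k: k*k and lambda i: i*i are the same function.
print(S(lambda i: i * i, 1, 4))  # also 30
```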
16,627
<p>Yesterday, I wrote <a href="https://math.stackexchange.com/a/904777">this answer</a>, but then realized that the OP had considered breaking things into specific cases, so I deleted my answer. Right after I deleted my answer, I saw that the OP had accepted my answer. I undeleted my answer and commented to the OP, asking if they indeed found the answer useful. They said yes, so I left the answer.</p> <p>The answer is accepted, and shows as such in the <a href="https://math.stackexchange.com/users/13854/robjohn?tab=answers">list of my answers</a>. However, I never received the reputation for the acceptance and it does not show up in the <a href="https://math.stackexchange.com/users/13854/robjohn?tab=reputation">record of my reputation</a>.</p> <p>I'm not so worried about the 15 points, but I am curious about why this happened and hope that it can be fixed so that others won't miss reputation they've earned.</p>
Community
-1
<p>This appears to be a <a href="http://en.wikipedia.org/wiki/Race_condition" rel="nofollow noreferrer">race condition</a> between deletion and acceptance. In the example linked above, the answer's revision history shows it <a href="https://math.stackexchange.com/posts/904777/revisions">was deleted</a> at 6:23:36 on August 21, 2014. The timeline shows it was <a href="https://math.stackexchange.com/posts/904770/timeline">accepted</a> at exactly the same time. Deleted answers <a href="https://meta.stackexchange.com/a/148507/">lose the checkmark</a> while they are deleted. In this case, the accept-vote got deleted and never restored (this is shown as "unaccept" event in the timeline). </p> <p>I checked earlier instances of <a href="http://data.stackexchange.com/math/query/220296/answers-that-were-deleted-after-being-accepted" rel="nofollow noreferrer">answers deleted/undeleted after being accepted</a>: their owners still have +15. Most recent example is <a href="https://math.stackexchange.com/q/772481/">Definition of Harmonic Conjugates</a>:</p> <ol> <li>Answer was <a href="https://math.stackexchange.com/posts/772481/timeline">accepted on April 28</a></li> <li>It was deleted (and then undeleted) <a href="https://math.stackexchange.com/posts/904777/revisions">on August 21</a> ... which I guess was you testing the system, precisely because of this question. </li> <li>The accept-vote is preserved and +15 remains. </li> </ol> <hr> <p>I admit observation bias: my query finds only answers with surviving accept-votes, because this is how it is written. There are circumstances when an answer is shown to be accepted (with a checkmark) but no accept-vote exists (hence no +15 is given): </p> <ol> <li>Question owner was deleted before accept-votes began to be transferred to Community. 
<a href="https://meta.stackexchange.com/q/238465/">Posted on Meta</a></li> <li>Accept-vote was invalidated due to serial voting, <a href="https://meta.stackexchange.com/q/238466/">also on Meta</a></li> </ol>
2,193,550
<p>Prove that $G$ acts faithfully on $X$ when no two distinct elements of $G$ act in the same way on every element of $X$.</p> <p>So I don't have much of a proof, but here's what I'm thinking. I know that for $G$ to act faithfully on $X$, the identity is the only element that fixes every element in $X$, so $\forall x \in X, ex = x$. So this means that, $\forall g \in G, gx \neq x$. So if $\phi$ is the action, $\ker\phi = \{e \in G \mid ex = x\}$. I don't know how to derive the proof of this though.</p>
Daniel
150,142
<p>You can easily verify that the order of the given element is 8: just compute all powers and check that the first one that yields 1 is the 8th power.</p> <p>On the other hand, you're right when you say that the polynomial in the quotient is irreducible (there are no square roots of -2 in $\Bbb F_5$), but since the degree of this polynomial is 2, the quotient field would have $5^2 = 25$ elements, among which $25 - 1 = 24$ are invertible. Notice that since $8\mid 24$, there is no contradiction at all.</p>
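The element being checked is not quoted here, but the answer's facts (no square root of $-2$ in $\Bbb F_5$, order $8$ dividing $24$) are consistent with the class of $x$ in $\Bbb F_5[x]/(x^2+2)$. A minimal sketch under that assumption, counting powers exactly as the answer suggests:

```python
# Hypothetical check: model GF(25) = F_5[x]/(x^2 + 2), where x^2 = -2 = 3 (mod 5).
# Elements are pairs (a, b) representing a + b*x; the element tested is the class of x.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, with x^2 = 3
    return ((a * c + 3 * b * d) % 5, (a * d + b * c) % 5)

def order(elem):
    power, n = elem, 1
    while power != (1, 0):       # (1, 0) is the multiplicative identity
        power = mul(power, elem)
        n += 1
    return n

print(order((0, 1)))  # -> 8
```

The printed order divides $24 = |\Bbb F_{25}^\times|$, which is exactly why the answer sees no contradiction.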
1,821,248
<p>Let $\sigma$ be a permutation on $\{1,2,3,4,5\}$ satisfying $\sigma^{-1}(j) \leq \sigma(j)$ for all $j$, $1 \leq j \leq 5$. Which of the following are true?</p> <ol> <li><p>$\sigma \circ \sigma(j)=j~\forall j, 1 \leq j \leq 5$.</p></li> <li><p>$\sigma^{-1}(j)=\sigma(j)~\forall j, 1 \leq j \leq 5$.</p></li> <li><p>The set $ \{k: \sigma(k) \neq k \}$ has an even number of elements.</p></li> <li><p>The set $\{k:\sigma(k) =k \}$ has an odd number of elements.</p></li> </ol> <p>Can someone tell me how to solve it in $1$ or $2$ minutes, with a trick or some concept behind it?</p>
Virtuoz
153,521
<p>For $k=1,\ldots 5$ denote $$ i_k = \sigma(k),\; k = \sigma^{-1}(i_k) $$ Then $$ k=\sigma^{-1}(i_k) \le \sigma(i_k) $$ That's why $\sigma(i_5) = 5, \sigma(i_4) = 4, \ldots \sigma(i_1) = 1$. Since $\sigma(i_k) = k$ and $\sigma^{-1}(i_k) = k$ we get $$\sigma = \sigma^{-1}$$ Next steps must be obvious :)</p>
1,821,248
<p>Let $\sigma$ be a permutation on $\{1,2,3,4,5\}$ satisfying $\sigma^{-1}(j) \leq \sigma(j)$ for all $j$, $1 \leq j \leq 5$. Which of the following are true?</p> <ol> <li><p>$\sigma \circ \sigma(j)=j~\forall j, 1 \leq j \leq 5$.</p></li> <li><p>$\sigma^{-1}(j)=\sigma(j)~\forall j, 1 \leq j \leq 5$.</p></li> <li><p>The set $ \{k: \sigma(k) \neq k \}$ has an even number of elements.</p></li> <li><p>The set $\{k:\sigma(k) =k \}$ has an odd number of elements.</p></li> </ol> <p>Can someone tell me how to solve it in $1$ or $2$ minutes, with a trick or some concept behind it?</p>
drhab
75,923
<p>In general: If $\tau$ and $\sigma$ are permutations on $\left\{ 1,\dots,n\right\} $ with $\tau\left(j\right)\leq\sigma\left(j\right)$ for each $j\in\left\{ 1,\dots,n\right\} $ then with induction it can be shown that $\sigma^{-1}\left(i\right)=\tau^{-1}\left(i\right)$ for $i=1,\dots,n$ or equivalently $\tau=\sigma$.</p> <p>Base case: If $\sigma\left(k\right)=1$ then also $\tau\left(k\right)=1$ so $\sigma^{-1}\left(1\right)=\tau^{-1}\left(1\right)$.</p> <p>Suppose it is true for $i=1,\dots,m$ and $\sigma^{-1}\left(m+1\right)=r$ or equivalently $\sigma\left(r\right)=m+1$. Then $\tau\left(r\right)\leq m+1$ but $\tau\left(r\right)=s\leq m$ leads to a contradiction: $r=\tau^{-1}\left(s\right)=\sigma^{-1}\left(s\right)\neq\sigma^{-1}\left(m+1\right)$. </p> <p>So in your question we have: $$\sigma^{-1}=\sigma$$</p>
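For a statement this finite, the induction can also be spot-checked by brute force over all of $S_5$ (a check added here, not part of the original proof):

```python
from itertools import permutations

# Brute-force check of the claim for n = 5: if tau(j) <= sigma(j) for every j,
# then tau == sigma.  Permutations of {1,...,5} are tuples with p[j-1] = p(j).
domain = (1, 2, 3, 4, 5)
for sigma in permutations(domain):
    for tau in permutations(domain):
        if all(t <= s for t, s in zip(tau, sigma)):
            assert tau == sigma
print("checked all", 120 * 120, "pairs in S_5 x S_5")
```

Since $\sum_j \tau(j)=\sum_j\sigma(j)=15$, termwise $\leq$ with equal sums forces equality, which is the one-line version of the induction above.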
1,300,273
<p>I have a question about evaluating the limit:</p> <p>$$\lim_{x \to\infty }\left(x^{f(x)}-x \right)$$</p> <p>where:</p> <p>$f(x)$ is a continuous map from the positive reals to the positive reals , and</p> <p>$\lim_{x\rightarrow \infty }f(x)= 1$.</p> <p>I attempted to apply L'Hôpital's rule by writing:</p> <p>$x^{f(x)}-x$ = $\log(\exp(x^{f(x)})/\exp(x))$ </p> <p>then applying the rule to $\exp(x^{f(x)})/\exp(x)$, however this quotient appeared in the resulting expression and successive applications of the rule would not remove it. </p> <p>The Wikipedia article on L'Hôpital's rule (link below) mentions the way the original expression can occur in the result of applying the rule. The article gives some examples where this problem is solved by using transformations but I could not get that method to work in this case. </p> <p>I would appreciate any help in evaluating this limit and/or in referring me to a source where it or similar limits are evaluated.</p> <p><a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule</a></p> <p>EDIT </p> <p>Thanks for the comment and the answer (now deleted). They show me I left some information out of my question. Apologies for the omission. I should have included the following:</p> <ol> <li><p>The function $f(x)$ is assumed to be $C^\infty$ on the positive reals.</p></li> <li><p>The limit will depend on $f(x)$ so I was looking for an evaluation of the limit that related the limit to the properties of $f(x)$. For example, I was looking for those properties of $f(x)$ that imply the limit is $\infty$ and those that imply it is finite.</p></li> </ol>
Community
-1
<p>Let $L$ be the desired limit. Taking $f(x)=1$, we trivially get $L=0$. Now take $f(x)=1+\frac1{\ln x}$: we get</p> <p>$$x^{f(x)}-x=x\left(x^{f(x)-1}-1\right)=x\left(e-1\right)\xrightarrow{x\to\infty}\infty$$ so we see that the result depends on the choice of $f$.</p>
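A numeric sketch of the two cases (my own illustration, not part of the answer): with $f(x)=1+\frac1{\ln x}$ the difference grows like $(e-1)x$, while $f(x)=1$ gives identically $0$.

```python
import math

# Two choices of f: f(x) = 1 gives x^f(x) - x = 0 identically, while
# f(x) = 1 + 1/ln(x) gives x^f(x) - x = (e - 1) x, which diverges.
for x in (1e3, 1e6, 1e9):
    diff = x ** (1 + 1 / math.log(x)) - x
    assert math.isclose(diff, (math.e - 1) * x, rel_tol=1e-9)
    assert x ** 1 - x == 0
print("the limit depends on f: 0 in one case, +infinity in the other")
```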
14,340
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://mathematica.stackexchange.com/questions/3247/consistent-plot-styles-across-multiple-mma-files-and-data-sets">Consistent Plot Styles across multiple MMA files and data sets</a> </p> </blockquote> <p>So, here's my problem; I have a lot of data that is shown in different plots. I want all the plots to have the same options (<code>PlotStyle</code>, <code>Axes</code>, <code>BaseStyle</code>, <code>FrameTicks</code>, etc...). I also want to be able to modify these options (because the size and <code>FontSize</code> change depending on where I want to use the plots, in my thesis or in a presentation) and do that without having to change each <code>Plot</code> function by hand.</p> <p>I guess what I'm looking for is something like this:</p> <pre><code>optionPacket = PlotStyle -&gt; {RGBColor[1, 0, 0]}, Frame -&gt; True, BaseStyle -&gt; {FontSize -&gt; 20}; </code></pre> <p>and then use it like this:</p> <pre><code>ListPlot[mydata, optionPacket] ListPlot[mydata2, optionPacket] </code></pre> <p>Is there any way to accomplish this? (What I just posted obviously doesn't work or I wouldn't be asking).</p>
J. M.'s persistent exhaustion
50
<p>Use <code>Sequence[]</code> for the purpose:</p> <pre><code>optionPacket = Sequence[PlotStyle -&gt; {RGBColor[1, 0, 0]}, Frame -&gt; True, BaseStyle -&gt; {FontSize -&gt; 20}] {ListPlot[RandomVariate[NormalDistribution[], {7, 2}], optionPacket], ListPlot[RandomVariate[WeibullDistribution[1, 2], {7, 2}], optionPacket]} // GraphicsRow </code></pre> <p><img src="https://i.stack.imgur.com/LGQGl.png" alt="list plots"></p> <hr> <p>Szabolcs mentions that you can also use a plain list for the purpose, so</p> <pre><code>optionPacket = {PlotStyle -&gt; {RGBColor[1, 0, 0]}, Frame -&gt; True, BaseStyle -&gt; {FontSize -&gt; 20}} </code></pre> <p>works just as well.</p>
3,915,771
<p>I'm given the series:</p> <p><span class="math-container">$$\sum_{n=2}^{\infty} \frac{n^2}{n^4-n-3}$$</span></p> <p>I know it converges, however I'm meant to show that by the comparison test. What would be a good choice here? <span class="math-container">$\frac{1}{k^2}$</span> and <span class="math-container">$\frac{1}{k}$</span> don't work here, obviously.</p>
José Carlos Santos
446,262
<p>Since <span class="math-container">$\sum_{n=2}^\infty\frac1{n^2}$</span> converges and since<span class="math-container">$$\lim_{n\to\infty}\frac{\dfrac{n^2}{n^4-n-3}}{\dfrac1{n^2}}=1,$$</span>your series converges.</p>
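A small numeric sketch of the limit-comparison step (mine, not from the answer): the ratio of the general term to $1/n^2$ tends to $1$.

```python
from fractions import Fraction

# The ratio of the general term n^2/(n^4 - n - 3) to 1/n^2 is
# n^4/(n^4 - n - 3), which tends to 1; exact rationals avoid rounding.
def term(n):
    return Fraction(n ** 2, n ** 4 - n - 3)

for n in (10, 100, 10_000):
    print(n, float(term(n) * n ** 2))  # ratio against 1/n^2, approaching 1
```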
672,707
<p><img src="https://i.stack.imgur.com/Tr5Jy.gif" alt="enter image description here" /></p> <p>2 How do I solve this equation involving a logarithm? 3</p>
Newb
98,587
<p>$$\log_2\left(\frac{x}{2}\right) = \log_3\left(\frac{2+x}{3}\right)$$</p> <p>$$\log_2(x)-\log_2(2) = \log_3(2+x)-\log_3(3)$$</p> <p>$$\log_2(x)-1 = \log_3(2+x)-1$$</p> <p>$$\log_2(x) = \log_3(2+x)$$</p> <p>$$\log_3(2+x)=\frac{\log_{10}(2+x)}{\log_{10}(3) }$$</p> <p>$$\log_2(x) = \frac{\log_{10}(x)}{\log_{10}(2)}$$</p> <p>$$\therefore \frac{\log_{10}(2+x)}{\log_{10}(3)} = \frac{\log_{10}(x)}{\log_{10}(2)}$$ From now on, we'll write $\log_{10}(a)$ as $\log(a)$.</p> <p>$$\log(2+x)\log(2) = \log(3)\log(x)$$</p> <p>$$\log(2+x) = \frac{\log(3)}{\log(2)}\log(x)$$</p> <p>$$ \frac{\log(3)}{\log(2)} \approx 1.585\ldots$$</p> <p>$$\log(2+x) = 1.585\log(x)$$</p> <p>$$\log(2+x) = \log(x^{1.585})$$</p> <p>I don't have the time to finish off the answer right now, I'll do it later. Hopefully it should be obvious, just subtract and multiply out.</p>
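The answer stops before extracting the root. A numeric sketch (bisection on a helper $g$ introduced here, not part of the post) pins down the solution of $\log_2(x) = \log_3(2+x)$:

```python
import math

# g(x) = log_2(x) - log_3(2 + x); a root of g solves the reduced equation.
def g(x):
    return math.log(x) / math.log(2) - math.log(2 + x) / math.log(3)

lo, hi = 2.0, 4.0          # g(2) < 0 < g(4), so a root lies in between
for _ in range(80):        # plain bisection
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # approx 2.63
```

At the root, both sides of $\log_2(x)=\log_3(2+x)$ evaluate to about $1.395$.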
15,237
<p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with a bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if at all?</p> <p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra a long time ago; I did poorly in the course and now I can hardly recall anything from it. If I invest a substantial amount of time studying this subject well now, will it help me in my calculus course in any sense?</p>
guest
11,935
<p>My personal impression across many fields is that a certain amount of extra subject matter can help (especially with "top high school" classes like AP Calculus or AP Chemistry or AP Bio), but that by and large, the issues in teaching and learning intro topics like first year college calc, chem, physics, bio, and through diffyQ and engineering math are MUCH more about the basics AND about PEDAGOGY as opposed to being further advanced in your field. I have seen in chem where the salty old prof (who barely passed a Ph.D., and only with the grace of his advisor) had way more empathy and ability to teach freshman chem (e.g. the analogy of wheels to a car for the limiting reactant) versus the shiny research profs. Plus he knew stoich and equilibrium problems inside out. </p> <p>So, net, net: I don't think advanced education really helps you be a better teacher. Often it can even be a negative (look at the peeps here wanting to "do something interesting" rather than horsing up the troops.) </p> <p>Really, if you even have a math UNDERGRAD (as opposed to ed), you are well, well equipped to teach calculus. After that, really concentrate on being a salty, salty TEACHER. Not a mathematician.</p>
15,237
<p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with a bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if at all?</p> <p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra a long time ago; I did poorly in the course and now I can hardly recall anything from it. If I invest a substantial amount of time studying this subject well now, will it help me in my calculus course in any sense?</p>
Jessica B
4,746
<p>There's a general point I can't see explicitly in the other answers: knowing more maths (and generally having spent time knowing/thinking about maths) helps you have a bigger picture. A lot of maths starts to fit together better as you know more for longer.</p> <p>As a (not very good) analogy, suppose someone left a sandwich on a bench, and an ant found it. The ant could teach all its ant friends how to reach the sandwich up one of the legs of the bench, and that is arguably enough. But it can't guess that there are three other legs they could climb up.</p> <p>Having a sense of the bigger picture doesn't automatically make you better at teaching. But it does make it easier to be good at teaching. It means you have a better idea which bits are important, which aspects come up elsewhere in maths, how to reverse-engineer suitable exam questions...</p>
285,227
<p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p> <p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$ I further know how to multiply two power series in one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{k=0}^\infty d_n(x-a)^n$ then $$ f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n $$ with $$ e_n = \sum_{m=0}^n c_md_{n-m} $$</p>
Community
-1
<p>My solution</p> <p>Let $x,y \in \mathbb R$ and </p> <p>$f(z) := \sum_{n=0}^\infty \left(\frac {x^n}{n!} \right )z^n$ and $g(z) := \sum_{n=0}^\infty \left(\frac {y^n}{n!} \right )z^n$. Then $\exp(x) \exp(y) = f(1)g(1)$. That is $$ f(z)g(z) = \sum_{n=0}^\infty \left( \sum_{m=0}^n \frac {x^m y^{n-m}}{m! (n-m)!} \right)z^n $$ $$ = \sum_{n=0}^\infty \frac 1 {n!} (x+y)^n z^n $$ thus $f(1)g(1) = \exp(x+y)$.</p>
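The one step that makes the proof work is the binomial-theorem identity inside the parentheses. A numeric spot-check (my own, not part of the solution):

```python
import math

# Spot-check: sum_{m=0}^{n} x^m y^(n-m) / (m! (n-m)!) == (x + y)^n / n!,
# which is the binomial theorem divided through by n!.
x, y = 0.3, 0.7
for n in range(12):
    lhs = sum(x ** m * y ** (n - m) / (math.factorial(m) * math.factorial(n - m))
              for m in range(n + 1))
    assert math.isclose(lhs, (x + y) ** n / math.factorial(n))
print("Cauchy-product coefficients agree with (x + y)^n / n! for n < 12")
```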
285,227
<p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p> <p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$ I further know how to multiply two power series in one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{k=0}^\infty d_n(x-a)^n$ then $$ f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n $$ with $$ e_n = \sum_{m=0}^n c_md_{n-m} $$</p>
nordmann
59,392
<p>$A(t) = \sum_{n=0}^\infty \frac {x^n}{n!}t^n$, so $A(1)=\exp(x)$<br> $B(t) = \sum_{n=0}^\infty \frac {y^n}{n!}t^n$, so $B(1)=\exp(y)$<br> $C(t) = A(t)B(t)=\sum_{n=0}^\infty \left(\sum_{k+z=n} \frac {x^k}{k!}\cdot\frac {y^z}{z!}\right)t^n=\sum_{n=0}^\infty \frac {(x+y)^n}{n!}t^n=\exp((x+y)t)$</p> <p>and use $t=1$ to get $C(1)=\exp(x+y)$.</p> <p>sry i was too late^^</p>
599,602
<p>Please help with this calculus question. I'm asked to solve $$(1+y^2) \,\mathrm{d}x = (\tan^{-1}y - x)\,\mathrm{d}y.$$</p>
alexjo
103,399
<p>The ODE $$ (1+y^2)+(x-\arctan y)y'=0\tag 1 $$ is not exact because, putting $M(x,y)=1+y^2$ and $N(x,y)=x-\arctan y$, $$M_y=\frac{\partial M}{\partial y}=2y\neq 1=\frac{\partial N}{\partial x}=N_x.$$ We have to find an integrating factor $\mu(y)$ such that $$\frac{\partial (\mu M)}{\partial y}=\frac{\partial (\mu N)}{\partial x}$$ that is $$ \mu'(1+y^2)+2y\mu=\mu $$ and isolating $\mu$ $$ \frac{\mu'}{\mu}=\frac{1-2y}{1+y^2}. $$ Integrating we have $$ \log\mu=\arctan y-\log(1+y^2) $$ that is $$ \mu(y)=\frac{\operatorname{e}^{\arctan y}}{1+y^2} $$ Multiplying the eq. (1) by $\mu$ $$ \operatorname{e}^{\arctan y}+\frac{\operatorname{e}^{\arctan y}}{1+y^2}(x-\arctan y)y'=0\tag 2 $$ and calling $$P(x,y)=\operatorname{e}^{\arctan y}$$ and $$Q(x,y)=\frac{\operatorname{e}^{\arctan y}}{1+y^2}(x-\arctan y)$$ we see that the eq. (2) is exact because $$P_y=\frac{\operatorname{e}^{\arctan y}}{1+y^2}=Q_x.$$</p> <p>Then, defining $f(x,y)$ such that $f_x=P$ and $f_y=Q$, the solution will be given by $f(x,y)=K$ where $K$ is an arbitrary constant.</p> <p>Integrating $f_x$ with respect to $x$ we have $$ f(x,y)=\int f_x\operatorname{d}x=\int \operatorname{e}^{\arctan y}\operatorname{d}x=\operatorname{e}^{\arctan y}x+g(y) $$ with $g(y)$ an arbitrary function of $y$.</p> <p>Differentiating with respect to $y$ and observing that $f_y=Q$ we find $$ g'(y)=-\frac{\operatorname{e}^{\arctan y}}{1+y^2}\arctan y $$ and integrating we'll have $$ g(y)=\operatorname{e}^{\arctan y}(1-\arctan y). $$ Substituting $g(y)$ into $f(x,y)$ we have $$ f(x,y)=\operatorname{e}^{\arctan y}(1+x-\arctan y) $$ and the solution will be given by $f(x,y)=K$, that is $$ \operatorname{e}^{\arctan y}(1+x-\arctan y)=K.\tag 3 $$</p> <p><strong>NOTE</strong></p> <p>If you kow the Lambert $W$-function, also called the omega function, defined as the function $W(z)$ that satisfies $$W(z)\operatorname{e}^{W(z)}=z$$ you can express $y(x)$ from eq. 
(3) as $$ y(x)=\tan\left(1+x+W\left(-K\operatorname{e}^{-(1+x)}\right)\right) $$</p>
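A dependency-free numeric spot-check (mine, not part of the answer) that the final $f$ really has $f_x=P$ and $f_y=Q$, i.e. that the integrating factor did its job:

```python
import math

# f(x, y) = e^{arctan y} (1 + x - arctan y) should satisfy f_x = P and f_y = Q,
# with P = e^{arctan y} and Q = e^{arctan y} (x - arctan y)/(1 + y^2).
def f(x, y):
    return math.exp(math.atan(y)) * (1 + x - math.atan(y))

def P(x, y):
    return math.exp(math.atan(y))

def Q(x, y):
    return math.exp(math.atan(y)) * (x - math.atan(y)) / (1 + y ** 2)

h = 1e-6
for (x, y) in [(0.5, -1.0), (1.0, 2.0), (-0.3, 0.7)]:
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # central differences
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    assert math.isclose(fx, P(x, y), rel_tol=1e-6)
    assert math.isclose(fy, Q(x, y), rel_tol=1e-6)
print("f_x = P and f_y = Q at the sample points")
```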
2,462,722
<p><a href="https://i.stack.imgur.com/jtWGA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jtWGA.jpg" alt="enter image description here"></a></p> <p>I got this from QFT Demystified in the author attempt to derive the Euler Lagrange equation. But isn't the Taylor expansion for $f(x+a)$ supposed to be: $$f(x+a)=f(a)+x\frac{df(a)}{dx}+...(1)$$ Even if I exchange $x\rightarrow \epsilon, a \rightarrow x$, it should have been: $$f(\epsilon+x)=f(x)+\epsilon\frac{df(x)}{d\epsilon}+...(2)$$</p> <p>I understand that in (1), $a$ is the point where we start or "nail" the fitting process, and $x$ is the independent variable. But what are $\epsilon$ and $x$ in (2)? Feel free to be rigorous if there's no intuitive way to answer, cause I really don't understand Taylor expansion at all and I need any answer. :((</p> <p>Thank you! :D</p>
Gerhard S.
474,939
<p>The confusion arises because of the notation you are using. Write (1) as $$f(x+a)=f(a)+xf'(a).$$ Now proceed as you suggest and replace $x$ by $\epsilon$ and $a$ by $x$. This yields $$f(\epsilon+x)=f(x)+\epsilon f'(x).$$ Is it clearer now?</p>
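A concrete first-order sketch (with $f=\sin$ as an arbitrary choice of mine, not from the answer): keeping $f(x)+\epsilon f'(x)$ leaves a remainder that shrinks like $\epsilon^2$, with the derivative taken in $x$, the expansion point.

```python
import math

# f(x + eps) ~ f(x) + eps * f'(x): x is the expansion point, eps the small shift.
x = 1.2
for eps in (1e-2, 1e-3, 1e-4):
    exact = math.sin(x + eps)
    first_order = math.sin(x) + eps * math.cos(x)
    # the error of the linear approximation is O(eps^2)
    assert abs(exact - first_order) < eps ** 2
print("error is O(eps^2), as the expansion predicts")
```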
2,740,349
<blockquote> <p>A triangle has side lengths $3$, $5$, and $7$. Express $\cos(y)+\sin(y)$, where $y$ is the largest angle in the triangle.</p> </blockquote> <p>I have tried applying Pythagoras' theorem, expressing the other two angles in some way, and splitting the triangle into smaller triangles, but all without success. </p> <p>Thanks in advance.</p>
Nico
340,686
<p>Yes, in fact this is sometimes taken as the definition of the determinant. You may have seen the determinant defined as the unique alternating, $n$-linear map such that $\det(e_1,\dots,e_n) = 1$. Rephrased, we can say this succinctly as "the unique $n$-form such that $\omega (e_1,\dots,e_n) = 1$". </p> <p>One way to approach this is to define the wedge product of $k\in\Lambda^k(V^*)$ and $l \in \Lambda^l(V^*)$ by $$k\wedge l = \frac{1}{k!l!} \mathrm{Asym}(k \otimes l)$$ where $\mathrm{Asym}$ is the antisymmetrization operator on $k$-tensors: $$\mathrm{Asym}(T)(x_1,\dots,x_k) = \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma) T(x_{\sigma(1)},\dots,x_{\sigma(k)}).$$</p> <p>Since $\dim(\Lambda^k(V^*)) = \binom{n}{k}$ (for $\dim(V) = n$), we clearly see that the space of $n$-forms has dimension $\binom{n}{n} = 1$. All you have to do now is find an $n$-form such that $\omega(e_1,\dots,e_n) = 1$ (you seem to have the right idea on how to find that), and voila, we have produced the (unique) determinant operator. </p> <p>For more details I highly recommend chapter 3 of Loring Tu's <em>Introduction to Manifolds</em>, available in pdf form online. </p>
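Expanding the antisymmetrization of $e_1^*\otimes\cdots\otimes e_n^*$ gives the familiar Leibniz sum over permutations. A small sketch of that sum (function names `sign`, `det`, `prod_entries` are mine), checked against a cofactor-expanded value:

```python
import itertools

def sign(perm):
    # permutation parity via inversion count
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def prod_entries(rows, p):
    out = 1
    for i, j in enumerate(p):
        out *= rows[i][j]
    return out

def det(rows):
    # Leibniz formula: sum over S_n of sgn(sigma) * prod_i rows[i][sigma(i)]
    n = len(rows)
    return sum(sign(p) * prod_entries(rows, p)
               for p in itertools.permutations(range(n)))

A = [[2, 1, 0], [0, 3, -1], [4, 0, 5]]
print(det(A))  # -> 26, matching cofactor expansion along the first row
```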
3,692,435
<p>Prove the following identity:</p> <p><span class="math-container">$\displaystyle\sum_{k=0}^{n}\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k} = \binom{2n+1}{n}$</span></p> <p>What I tried:</p> <p>I figured that <span class="math-container">$\displaystyle\binom{2n+1}{n} = (2n+1) C_n$</span> and <span class="math-container">$\displaystyle\sum_{k=0}^{n}\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k}= \sum_{k=0}^{n}C_k\binom{2n-2k}{n-k}$</span></p> <p>From here I tried simplifying <span class="math-container">$\displaystyle\binom{2n-2k}{n-k}$</span> to something I could work with, but did not succeed.</p> <p>I also know that <span class="math-container">$\displaystyle C_n = \sum_{k=0}^{n-1}C_k C_{n-k-1}$</span> so I tried to prove <span class="math-container">$\displaystyle\sum_{k=0}^{n}C_k\binom{2n-2k}{n-k}= C_n + \sum_{k=0}^{n-1}C_k\binom{2n-2k}{n-k} = C_n + 2n\sum_{k=0}^{n-1}C_kC_{n-k-1}$</span> but that approach also failed (I couldn't prove the last equality).</p> <p>Any suggestions?</p>
Angina Seng
436,618
<p>This is <span class="math-container">$$\sum_{k=0}^n C_kA_{n-k}=B_n\tag{*}$$</span> where <span class="math-container">$$C_n=\frac1{n+1}\binom{2n}{n},$$</span> <span class="math-container">$$A_n=\binom{2n}{n}$$</span> and <span class="math-container">$$B_n=\binom{2n+1}{n}.$$</span> All we need to confirm (*) is to prove the generating function identity <span class="math-container">$$C(x)A(x)=B(x)$$</span> where <span class="math-container">$A(x)=\sum_{n=0}^\infty A_n x^n$</span> etc.</p> <p>But <span class="math-container">$$C(x)=\frac{1-\sqrt{1-4x}}{2x}$$</span> and <span class="math-container">$$A(x)=\frac1{\sqrt{1-4x}}$$</span> so that <span class="math-container">$$C(x)A(x)=\frac1{2x\sqrt{1-4x}}-\frac1{2x} =\frac12\sum_{m=1}^\infty\binom{2m}mx^{m-1} =\frac12\sum_{n=0}^\infty\binom{2n+2}{n+1}x^n$$</span> so all we now need is <span class="math-container">$$\binom{2n+1}n=\frac12\binom{2n+2}{n+1}.$$</span></p>
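The identity is also easy to confirm directly for small $n$ (an integer-only spot-check of mine, independent of the generating-function argument):

```python
from math import comb

# Check sum_k C_k * binom(2n-2k, n-k) == binom(2n+1, n) for n = 0..24.
def catalan(k):
    # (k+1) divides comb(2k, k) exactly, so // is exact here
    return comb(2 * k, k) // (k + 1)

for n in range(25):
    lhs = sum(catalan(k) * comb(2 * (n - k), n - k) for k in range(n + 1))
    assert lhs == comb(2 * n + 1, n)
print("identity holds for n = 0..24")
```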
2,098,882
<p>How to show that:</p> <p>$\forall k\in\mathbb{N}^*$ and $\forall x\in\mathbb{R}^*$, the inequality $\left(kx-1\right)e^{kx}&gt;-1$ holds.</p> <p>Thank you for your help.</p>
Bernard
202,857
<p>It results from the variations of $f$ on $\mathbf R$: $f'(x)=k^2x\mathrm e^{kx}$, so $f$ decreases on $\mathbf R^-$, increases on $\mathbf R^+$ and has a minimum at $x=0$, hence for $x\ne 0$, $f(x)&gt;f(0)=-1$.</p>
2,098,882
<p>How to show that:</p> <p>$\forall k\in\mathbb{N}^*$ and $\forall x\in\mathbb{R}^*$, the inequality $\left(kx-1\right)e^{kx}&gt;-1$ holds.</p> <p>Thank you for your help.</p>
PMC1234
338,746
<p>We consider the function $f$ defined on $\mathbb{R}$ by $f(x)=(kx-1)e^{kx}$, with $k$ a positive integer.</p> <p>Let $X=kx$. We then have $f(X)=(X-1)e^X$. Let's show that $f$ is strictly greater than $-1$. </p> <p>$f$ is differentiable on $\mathbb{R}$ where: \begin{align*} \forall X\in\mathbb{R},\ f'(X)&amp;=\left((X-1)e^X\right)'\\ &amp;=Xe^X \end{align*} Knowing that $e^X&gt;0$ on $\mathbb{R}$, $f'$ has the sign of $X$. $f$ is strictly decreasing for $X&lt;0$ and strictly increasing for $X&gt;0$. Hence, the function has a minimum at $X=0$ which equals: \begin{align*} f(0)&amp;=(0-1)e^0\\ &amp;=-1 \end{align*} Thus, for $X\neq 0$ we have $f(X)&gt;f(0)$, that is, $f(x)&gt;-1$, i.e. $(kx-1)e^{kx}&gt;-1$ for all $x\neq 0$.</p>
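A grid-scan sketch of the conclusion (my own check, not part of either answer): the minimum $-1$ is attained only at $x=0$.

```python
import math

# f(x) = (k x - 1) e^{k x} has minimum -1 at x = 0 and exceeds -1 elsewhere.
def f(k, x):
    return (k * x - 1) * math.exp(k * x)

for k in (1, 2, 3):
    for i in range(-400, 401):
        x = i / 100
        if x != 0:
            assert f(k, x) > -1
    assert f(k, 0) == -1
print("minimum -1 attained only at x = 0 on the sampled grid")
```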
3,752,770
<p>I tested this in python using:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 10*2*np.pi, 10000) y = np.sin(x) plt.plot(y/y) plt.plot(y) </code></pre> <p>Which produces:</p> <p><a href="https://i.stack.imgur.com/pCwoV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCwoV.png" alt="" /></a></p> <p>The blue line representing <code>sin(x)/sin(x)</code> appears to be <code>y=1</code></p> <p>However, I don't know if the values at the point where <code>sin(x)</code> crosses the x-axis really equals 1, 0, infinity or just undefined.</p>
John Hughes
114,036
<p>Let's ask a simpler question: is <span class="math-container">$\frac{x}{x} = 1$</span> ?</p> <p>The answer (which follows from the axioms for a field) is that <span class="math-container">$y = \frac{x}{x} = x \cdot x^{-1}$</span> is <em>undefined</em> if <span class="math-container">$x = 0$</span>, so while <span class="math-container">$\frac{x}x = 1$</span> for <span class="math-container">$x \ne 0$</span>, for <span class="math-container">$x = 0$</span> it's not even defined.</p> <p>What about <span class="math-container">$y = \frac{x-a}{x-a}$</span>? Once again, that's equal to <span class="math-container">$1$</span> for <span class="math-container">$x \ne a$</span>, and undefined for <span class="math-container">$x = a$</span>.</p> <p>Now let's bring computers into it. The way we draw plots on a computer is to take a sequence of points <span class="math-container">$x_1 &lt; x_2 &lt; x_3 &lt; \ldots &lt; x_n$</span>, and plot them with their corresponding <span class="math-container">$y$</span>-values. In between these points, we, as authors of naive plotting programs, often connect the dots with a line segment because...well, because that's <em>usually</em> right, for nice-enough functions. When your functions have discontinuities, though, it's definitely <em>not</em> right.</p> <p>In the case of <span class="math-container">$y = \frac{x}{x}$</span>, if the <span class="math-container">$x_i$</span> are all nonzero, then the corresponding <span class="math-container">$y_i$</span> are all <span class="math-container">$1$</span>, and we connect-the-dots to get a horizontal line at <span class="math-container">$y = 1$</span>, which is correct everywhere except where <span class="math-container">$x = 0$</span>, which <em>should</em> be a hole in the graph, but won't be. 
Of course, if one of your <span class="math-container">$x_i$</span> actually <em>is</em> zero, then when your computer attempts to compute <span class="math-container">$\frac{x_i}{x_i} = \frac{0}{0}$</span>, it'll probably produce <code>NaN</code>, a special value meaning &quot;not a number,&quot; and the graphics-plotting part of the program will perhaps ignore it (bad) or perhaps try to use it to plot something (which will be nonsense).</p> <p>In short, in this case, mathematics and computing have diverged from one another.</p> <p>What about the <span class="math-container">$\frac{x-a}{x-a}$</span> case? The answer's the same: your <span class="math-container">$y_i$</span> values will all be <span class="math-container">$1$</span>, unless some <span class="math-container">$x_i = a$</span>. But what if <span class="math-container">$a$</span> is a number that <em>cannot</em> be represented on a computer, something like <span class="math-container">$a = \pi$</span>? Then you're guaranteed that the <span class="math-container">$y_i$</span> values will all be <span class="math-container">$1$</span>, and you'll see a horizontal line, even though you <em>should</em> see a line with a hole in it. In short, the plot is guaranteed to be wrong (unless the software is much more subtle than the sort of thing you showed).</p> <p>Key idea: Computer numbers and real numbers are not the same, and for subtle stuff like well-defined-ness and limits, pretending that they <em>are</em> the same can lead to some bad results, and often deep misunderstandings.</p>
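The 0/0-to-NaN behavior described above can be seen directly with the question's own NumPy setup (a minimal sketch added here):

```python
import numpy as np

# Where sin(x) is exactly 0, the machine computes 0/0, which is NaN, not 1;
# a connect-the-dots plot silently papers over that hole.
x = np.array([0.0, np.pi / 2])        # include an exact zero of sin
with np.errstate(invalid='ignore'):   # silence the 0/0 "invalid value" warning
    y = np.sin(x) / np.sin(x)
print(y)  # first entry is nan, second is 1.0
```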
202,719
<p><code>Reduce</code> often provides a much fuller solution than <code>Solve</code>. But it's always in the form of a true statement rather than functions or replacement rules, e.g.</p> <p>Input:</p> <pre><code>Reduce[Sin[x^2] + Cos[a] == 0 &amp;&amp; -π/2 &lt;= x &lt;= π/2, x] </code></pre> <p>Output:</p> <pre><code>(Cos[a] == -1 &amp;&amp; (x == -Sqrt[(π/2)] || x == Sqrt[π/2])) || (-1 &lt; Cos[a] &lt;= Sin[1/4 (-4 π + π^2)] &amp;&amp; (x == -Sqrt[π + ArcSin[Cos[a]]] || x == Sqrt[π + ArcSin[Cos[a]]])) || (Cos[a] == 0 &amp;&amp; x == 0) || (-1 &lt; Cos[a] &lt; 0 &amp;&amp; (x == -Sqrt[-ArcSin[Cos[a]]] || x == Sqrt[-ArcSin[Cos[a]]])) </code></pre> <p>What I need is a function that takes the variable solved for (i.e. <code>x</code> here) as its input and gives corresponding value (or a list of values/separate functions for non-unique solutions) as the output. For example, the above output would be represented similarly to the following:</p> <pre><code>solution[x_] = Piecewise[{ {{-Sqrt[Pi/2], Sqrt[Pi/2]}, Cos[a] == -1}, {{-Sqrt[Pi + ArcSin[Cos[a]]], Sqrt[Pi + ArcSin[Cos[a]]]}, -1 &lt; Cos[a] &lt;= Sin[(1/4)*(-4*Pi + Pi^2)]}, {0, Cos[a] == 0}, {{-Sqrt[-ArcSin[Cos[a]]], Sqrt[-ArcSin[Cos[a]]]}, -1 &lt; Cos[a] &lt; 0}}] </code></pre> <p>What is a good way to transform the expression returned by <code>Reduce</code> to such a function?</p>
AsukaMinato
68,689
<p>Convert it to string and solve it.</p> <pre><code>convert[x_] := x /. (Or[(a_) &amp;&amp; (b_)]) :&gt; {b, a} // InputForm // ToString // StringReplace[#, "||" -&gt; ","] &amp; // "Piecewise[{" ~~ # ~~ "}]" &amp; // ToExpression; </code></pre> <p>this works for easy situation, for example:</p> <pre><code>convert[(a &lt;= -2 &amp;&amp; 1 + 2 a &lt;= y &lt;= 1) || (-2 &lt; a &lt;= 2 (1 - Sqrt[2]) &amp;&amp; 1/4 (4 a - a^2) &lt;= y &lt;= 1) || (2 (1 - Sqrt[2]) &lt; a &lt;= 2 (-1 + Sqrt[2]) &amp;&amp; -1 &lt;= y &lt;= 1) || (2 (-1 + Sqrt[2]) &lt; a &lt;= 2 &amp;&amp; -1 &lt;= y &lt;= 1/4 (4 a + a^2)) || (a &gt; 2 &amp;&amp; -1 &lt;= y &lt;= -1 + 2 a)] </code></pre> <p>gives <span class="math-container">$$\begin{cases} 2 a+1\leq y\leq 1 &amp; a\leq -2 \\ \frac{1}{4} \left(4 a-a^2\right)\leq y\leq 1 &amp; -2&lt;a\leq 2 \left(1-\sqrt{2}\right) \\ -1\leq y\leq 1 &amp; 2 \left(1-\sqrt{2}\right)&lt;a\leq 2 \left(\sqrt{2}-1\right) \\ -1\leq y\leq \frac{1}{4} \left(a^2+4 a\right) &amp; 2 \left(\sqrt{2}-1\right)&lt;a\leq 2 \\ -1\leq y\leq 2 a-1 &amp; a&gt;2 \end{cases}$$</span></p> <p>but for complex situations,someone told me a better solution.</p> <pre><code>/. (Or[a_ &amp;&amp; b_]) :&gt; {b, a} // Apply[Piecewise[{##}] &amp;] </code></pre> <p>for example</p> <pre><code>(Cos[a] == -1 &amp;&amp; (x == -Sqrt[(\[Pi]/2)] || x == Sqrt[\[Pi]/2])) || (-1 &lt; Cos[a] &lt;= Sin[1/4 (-4 \[Pi] + \[Pi]^2)] &amp;&amp; (x == -Sqrt[\[Pi] + ArcSin[Cos[a]]] || x == Sqrt[\[Pi] + ArcSin[Cos[a]]])) || (Cos[a] == 0 &amp;&amp; x == 0) || (-1 &lt; Cos[a] &lt; 0 &amp;&amp; (x == -Sqrt[-ArcSin[Cos[a]]] || x == Sqrt[-ArcSin[Cos[a]]])) /. 
(Or[a_ &amp;&amp; b_]) :&gt; {b, a} // Apply[Piecewise[{##}] &amp;] </code></pre> <p>gives</p> <p><span class="math-container">$$ \begin{cases} x=-\sqrt{\frac{\pi }{2}}\lor x=\sqrt{\frac{\pi }{2}} &amp; \cos (a)=-1 \\ x=-\sqrt{\sin ^{-1}(\cos (a))+\pi }\lor x=\sqrt{\sin ^{-1}(\cos (a))+\pi } &amp; -1&lt;\cos (a)\leq \sin \left(\frac{1}{4} \left(\pi ^2-4 \pi \right)\right) \\ x=0 &amp; \cos (a)=0 \\ x=-\sqrt{-\sin ^{-1}(\cos (a))}\lor x=\sqrt{-\sin ^{-1}(\cos (a))} &amp; -1&lt;\cos (a)&lt;0 \\ \end{cases}$$</span></p>
440,439
<p>I was working more on the topic on my previous question when I have to know whether the following statement is true to circumvent the &quot;exception&quot; caused by division by singular matrices; again, long story short, the statement follows:</p> <p>If two singular matrices <span class="math-container">$A, B$</span> exist s.t. the determinant of <span class="math-container">$EA-B$</span> is identically zero for all real matrices <span class="math-container">$E$</span>, then either <span class="math-container">$A=YB$</span> or <span class="math-container">$B=ZA$</span>, <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> being undetermined matrices.</p> <p>Is it true (vacuously or not) in general?</p>
Conrad
133,811
<p>This holds in general in any domain <span class="math-container">$U$</span>, while of course if <span class="math-container">$f=h^2$</span> is bounded, then <span class="math-container">$h$</span> bounded, so the extra condition is automatically satisfied (similarly note that <span class="math-container">$f$</span> bounded is equivalent to <span class="math-container">$g$</span> bounded, so only one of the two is needed).</p> <p>Wlog let's assume <span class="math-container">$f, g$</span> not identically zero, so they have isolated zeroes since we are in a domain <span class="math-container">$U$</span>. Since <span class="math-container">$f^3=g^2$</span> it follows that any zero of <span class="math-container">$f$</span> of multiplicity <span class="math-container">$k \ge 1$</span> is a zero of <span class="math-container">$g^2$</span> of multiplicity <span class="math-container">$3k$</span>, hence <span class="math-container">$k$</span> even and as a zero of <span class="math-container">$g$</span> it has multiplicity <span class="math-container">$3k/2 &gt;k$</span>. This means that <span class="math-container">$h=g/f$</span> is analytic in <span class="math-container">$U$</span> so <span class="math-container">$g^2=h^2f^2=f^3$</span> hence <span class="math-container">$f=h^2$</span> outside of its isolated zeroes and by continuity there too, so in all <span class="math-container">$U$</span> and then <span class="math-container">$g=hf=h^3$</span> and we are done!</p>
4,527,300
<p>An AP practice question asks:</p> <p><span class="math-container">$$\lim_{h\to0} \frac{(1+h)^3 + \frac{8}{\sqrt{1+h}}-9}{h} $$</span></p> <p>The answer should be -1. How did they get this without a calculator?</p>
Lorago
883,088
<p>Let <span class="math-container">$f(x)=x^3+\frac{8}{\sqrt{x}}$</span>. Then <span class="math-container">$f(1)=9$</span>, and <span class="math-container">$f(1+h)=(1+h)^3+\frac{8}{\sqrt{1+h}}$</span>. This means that your limit can be written as</p> <p><span class="math-container">$$\lim_{h\to0}\frac{f(1+h)-f(1)}{h}=f'(1).$$</span></p> <p>But we know using standard differentiation rules that</p> <p><span class="math-container">$$f'(x)=3x^2-\frac{4}{x\sqrt{x}},$$</span></p> <p>and so <span class="math-container">$f'(1)=3-4=-1$</span> is your limit.</p>
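A numeric sketch (mine, not part of the answer) that the difference quotient really settles near $-1$:

```python
import math

# ((1+h)^3 + 8/sqrt(1+h) - 9)/h is (f(1+h) - f(1))/h for f(x) = x^3 + 8/sqrt(x),
# and it approaches f'(1) = -1 as h shrinks.
def quotient(h):
    return ((1 + h) ** 3 + 8 / math.sqrt(1 + h) - 9) / h

for h in (1e-3, 1e-5, 1e-7):
    print(h, quotient(h))
assert abs(quotient(1e-7) - (-1)) < 1e-5
```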
4,527,300
<p>An AP practice question asks:</p> <p><span class="math-container">$$\lim_{h\to0} \frac{(1+h)^3 + \frac{8}{\sqrt{1+h}}-9}{h} $$</span></p> <p>The answer should be -1. How did they get this without a calculator?</p>
binbni
1,011,566
<p>We can also calculate it this way. <span class="math-container">\begin{align} \lim_{h \to 0}\frac{{(1+h)^3}+\frac{8}{\sqrt{1+h}}-9}{h} &amp;=\lim_{h\to0}\frac{{h^3+3h^2+3h+8(\frac{1}{\sqrt{1+h}}-1)}}{h} \\ &amp;=\lim_{h \to 0}h^2+3h+3+8\frac{1-\sqrt{1+h}}{h\sqrt{1+h}}\\ &amp;=\lim_{h \to 0}h^2+3h+3+8\frac{1-(1+h)}{h\sqrt{1+h}(1+\sqrt{1+h})}\\ &amp;=\lim_{h \to 0}h^2+3h+3+8\frac{-1}{\sqrt{1+h}(1+\sqrt{1+h})}=-1 \end{align}</span></p>
1,291,511
<p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
Michael Hardy
11,667
<p>You have $\dfrac{dy}{dx}=y$. Often one writes $\dfrac{dy} y = dx$ and then evaluates both sides of $\displaystyle\int\frac{dy} y = \int dx$, etc.</p> <p>However, for a question like this perhaps one should be more careful.</p> <p>If $f(x)\ne 0$ for all $x$, then one has $\dfrac{f'(x)}{f(x)}=1$ for all $x$. This implies $$ \frac{d}{dx} \log |f(x)| = 1 $$ for all $x$. The mean value theorem entails there can be no antiderivatives besides $$ \log|f(x)| = x + \text{constant} $$ for all $x$. This implies $|f(x)| = e^x\cdot\text{a positive constant}$, so $f(x)=e^x\cdot\text{a nonzero constant}$.</p> <p>Dividing by $f(x)$ assumes $f(x)$ is not $0$, so one has to check separately the case where $f(x)$ is everywhere $0$, and it checks.</p> <p>Now what about cases where $f(x)=0$ for some $x$ but not all? Can we rule those out? We would have $f(x_0)=0$ and $f(x_1)\ne 0$. Then one solution would be $$ g(x) = f(x_1)e^{x-x_1} = f(x_1)e^{-x_1} e^x = A e^x. $$ But $g(x_0)\ne0=f(x_0)$ and $g(x_1)=f(x_1)$ and $(f-g)'=f-g$ everywhere.</p> <p>[to be continued, maybe${}\,\ldots\,{}$]</p>
1,291,511
<p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
Lukas Betz
238,388
<p>Consider $g(x) = f(x)\exp(-x)$. Then we have $g'(x) = f'(x)\exp(-x)-f(x)\exp(-x) = 0$. Thus, $g\equiv c$ for some constant $c$. Hence $f(x) = c\exp(x)$.</p>
1,822,008
<p>Here are two functions: $f\left(u,v\right)=u^{2}+3v^{2}$</p> <p>$g\left(x,y\right)=\begin{pmatrix} e^{x}\cos y \\ e^{x}\sin y \end{pmatrix} $</p> <p>I need to make Jacobian matrix of $f\circ g$. I found derivative of their composition:</p> <p>$\frac{d\left(f\circ g\right) }{d\left(x,y\right) }=2e^{2x}\cos^{2}{y}+4e^{2x}\sin{y}\cos{y}+6e^{2x}sin^{2}{y} $</p> <p>How do I put that in Jacobian matrix?</p>
b00n heT
119,285
<p>Using the chain rule instead: \begin{align*}D(f\circ g)(x,y)&amp; =\color{red}{Df(g(x,y))}\cdot\color{blue}{ Dg(x,y)}\\ &amp; = \color{red}{\begin{pmatrix} 2u&amp;6v \end{pmatrix}\circ(g(x,y))}\cdot \color{blue}{ \begin{pmatrix}e^x\cos y &amp; -e^x\sin y \\ e^x\sin y &amp; e^x\cos y\end{pmatrix}}\\ &amp; =\color{red}{ \begin{pmatrix} 2e^x\cos y&amp;6e^x\sin y \end{pmatrix}}\cdot \color{blue}{ \begin{pmatrix}e^x\cos y &amp; -e^x\sin y \\ e^x\sin y &amp; e^x\cos y\end{pmatrix}}\\ \phantom{asd} \\ &amp; = \begin{pmatrix}2e^{2x}\cos^2y + 6e^{2x}\sin^2y &amp; -2e^{2x}\cos y \sin y + 6e^{2x}\sin y\cos y \end{pmatrix}\\ \phantom{asd} \\ &amp; = 2e^{2x}\begin{pmatrix}1 + 2\sin^2y &amp; 2\sin y\cos y \end{pmatrix} \end{align*}</p>
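A finite-difference check of the final row vector (my own addition; `F` is the composition $f\circ g$ written out explicitly):

```python
# Compare the closed-form Jacobian 2 e^{2x} (1 + 2 sin^2 y, 2 sin y cos y)
# against central differences of F(x, y) = f(g(x, y)).
import math

def F(x, y):
    u, v = math.exp(x) * math.cos(y), math.exp(x) * math.sin(y)
    return u**2 + 3 * v**2

def jacobian_formula(x, y):
    s = 2 * math.exp(2 * x)
    return (s * (1 + 2 * math.sin(y)**2), s * 2 * math.sin(y) * math.cos(y))

def jacobian_numeric(x, y, eps=1e-6):
    dFx = (F(x + eps, y) - F(x - eps, y)) / (2 * eps)
    dFy = (F(x, y + eps) - F(x, y - eps)) / (2 * eps)
    return (dFx, dFy)
```

At any test point the two agree to well within the discretization error of the central difference.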
1,175,297
<p>Note: The following definitions from my book, Discrete Mathematics and Its Applications [7th ed, 598].</p> <p>This is my book's definition for a reflexive relation <img src="https://i.stack.imgur.com/og5wE.png" alt="enter image description here"></p> <p>This is my book's definition for a anti symmetric relation <img src="https://i.stack.imgur.com/OaZGk.png" alt="enter image description here"></p> <p>Is a reflexive relation just the same as a anti symmetric relation? From what I've, the only way to meet that antisymmetric requirement is to have the same ordered pair, say an element a from Set A, (a,a). If you have anything other than the same ordered pair, (1,2) and (2,1), it will not meet the antisymmetric requirement. But the overall definition of reflexive relation is that it's the same ordered pair. Are they just two ways of saying the same thing? Is it possible to have one and not the other?</p>
Michael Burr
86,421
<p>No, antisymmetric is not the same as reflexive.</p> <p>An example of a relation that is reflexive, but not antisymmetric is the relation $$R=\{(1,1),(1,2),(2,2),(2,1)\}$$ on $$A=\{1,2\}.$$ It is reflexive because for all elements of $A$ (which are $1$ and $2$), $(1,1)\in R$ and $(2,2)\in R$. The relation is not anti-symmetric because $(1,2)$ and $(2,1)$ are in $R$, but $1\not=2$.</p> <p>An example of a relation that is not reflexive, but is antisymmetric is the empty relation $R=\emptyset$ on $A=\{1\}$. It doesn't have $(1,1)$ in it, but it is vacuously antisymmetric.</p> <p>On a further note: reflexive is: $$\forall a\in A, (a,a)\in R.$$</p> <p>Anti-symmetric is $$\forall (a,b),(b,a)\in R, a=b.$$</p> <p>Note that the statements have different hypotheses and conclusions (even though they look similar).</p>
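Both counterexamples can be machine-checked (my own helper functions, translating the two definitions literally):

```python
# reflexive: every a in A has (a, a) in R.
def reflexive(A, R):
    return all((a, a) in R for a in A)

# antisymmetric: whenever (a, b) and (b, a) are both in R, a == b.
def antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

A1, R1 = {1, 2}, {(1, 1), (1, 2), (2, 2), (2, 1)}   # reflexive, not antisymmetric
A2, R2 = {1}, set()                                  # antisymmetric, not reflexive
```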
154,757
<p>I have this data:</p> <ul> <li><p>$a=6$</p></li> <li><p>$b=3\sqrt2 -\sqrt6$ </p></li> <li><p>$\alpha = 120°$</p></li> </ul> <p><strong>How to calculate the area of this triangle?</strong></p> <p>there is picture:</p> <p><img src="https://i.stack.imgur.com/hr2Cp.jpg" alt=""></p>
zzzzzzzzzzz
26,498
<p>Assuming the diagram like so, where $C = \alpha = 120°$<img src="https://i.stack.imgur.com/I9Umc.gif" alt="enter image description here"></p> <p>Then we have the equation </p> <p>$$Area = \displaystyle\frac{a b\sin C}{2}$$</p> <p>This is the same as the equation you probably know,</p> <p>$$Area = \displaystyle\frac{\text{base}\times\text{height}}{2}$$</p> <p>Do you know why?</p>
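Plugging in the given data (my own addition; the closed form $\frac92(\sqrt6-\sqrt2)$ below is my simplification, not from the answer):

```python
# Area = a*b*sin(C)/2 with a = 6, b = 3*sqrt(2) - sqrt(6), C = 120 degrees.
import math

a = 6
b = 3 * math.sqrt(2) - math.sqrt(6)
C = math.radians(120)

area = a * b * math.sin(C) / 2
closed_form = 4.5 * (math.sqrt(6) - math.sqrt(2))   # = (9/2)(sqrt 6 - sqrt 2)
```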
3,222,871
<p>Let <span class="math-container">$P(x, y, 1)$</span> and <span class="math-container">$Q(x, y, z)$</span> lie on the curves <span class="math-container">$$\frac{x^2}{9}+\frac{y^2}{4}=4$$</span> and <span class="math-container">$$\frac{x+2}{1}=\frac{y-\sqrt{3}}{\sqrt{3}}=\frac{z-1}{2}$$</span> respectively. Then find the square of the minimum distance between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>.</p> <p>My Attempt is:</p> <p>I tried to find minimum distance between the points <span class="math-container">$(-2,\sqrt{3})$</span> and <span class="math-container">$(6\cos \theta,4\sin \theta)$</span>.</p>
Vedant Chourey
638,765
<p>You can use the method of Lagrange multipliers. The function to examine is the distance between the two points <span class="math-container">$(x,y,z)$</span> and <span class="math-container">$(x,y,1)$</span>, i.e. <span class="math-container">$\phi = \sqrt{(z-1)^2}$</span>. The constraints are respectively <span class="math-container">$$ \frac{x^2} {9} + \frac{y^2} {4} = 4 $$</span> and <span class="math-container">$$ \frac{x+2} {1} = \frac{y- \sqrt{3}} {\sqrt{3}} =\frac{z-1} {2}. $$</span> The auxiliary function is formed as <span class="math-container">$$ F(x_1, x_2, \ldots, x_n, \alpha_1, \alpha_2, \ldots, \alpha_k ) = f(x_1, x_2, \ldots, x_n) + \sum_{i=0}^k \alpha_i \beta_i ( x_1, x_2, \ldots, x_n), $$</span> where the <span class="math-container">$\beta_i$</span> are the constraint functions. Now <span class="math-container">$$\frac{\partial F}{\partial x_1} =0=\frac{\partial F}{\partial x_2} = \cdots = \frac{\partial F}{\partial x_n}, $$</span> which gives the stationary points of <span class="math-container">$F$</span>. From these you find the extremum points, obtaining the values of the multipliers <span class="math-container">$ \alpha_1 , \alpha_2, \ldots , \alpha_k $</span> along the way. You can further obtain the points of maximum distance in the same way.</p>
3,222,871
<p>Let <span class="math-container">$P(x, y, 1)$</span> and <span class="math-container">$Q(x, y, z)$</span> lie on the curves <span class="math-container">$$\frac{x^2}{9}+\frac{y^2}{4}=4$$</span> and <span class="math-container">$$\frac{x+2}{1}=\frac{y-\sqrt{3}}{\sqrt{3}}=\frac{z-1}{2}$$</span> respectively. Then find the square of the minimum distance between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>.</p> <p>My Attempt is:</p> <p>I tried to find minimum distance between the points <span class="math-container">$(-2,\sqrt{3})$</span> and <span class="math-container">$(6\cos \theta,4\sin \theta)$</span>.</p>
Christian Blatter
1,303
<p>You can do it without using Lagrange's method. Consider the parametric representations <span class="math-container">$$p(s):=\bigl(6\cos s,4\sin s,1\bigr)\qquad(s\in{\mathbb R}/(2\pi))$$</span> and <span class="math-container">$$q(t):=\bigl(t-2,\sqrt{3}(t+1),2t+1\bigr)\qquad(t\in{\mathbb R})\ .$$</span> We have to determine <span class="math-container">$s$</span> and <span class="math-container">$t$</span> such that the vector <span class="math-container">$$f(s,t):=p(s)-q(t)$$</span> is orthogonal to <span class="math-container">$p'(s)=\bigl(-6\sin s, 4\cos s,0\bigr)$</span> and to <span class="math-container">$q'(t)=(1,\sqrt{3},2)=:u$</span>. In this way one obtains the equations <span class="math-container">$$f(s,t)\cdot p'(s)=0,\qquad f(s,t)\cdot u=0\ .$$</span> Computing <span class="math-container">$t=h(s)$</span> from the second equation leads to the single equation <span class="math-container">$$g(s):={1\over4}\bigl(-14 \sqrt{3} \cos s - 12 \sqrt{3} \cos(2s) - (51 + 86 \cos s) \sin s\bigr)=0\ .$$</span> The last equation has four solutions <span class="math-container">$s_i$</span> (found numerically), and computing the values <span class="math-container">$$d_i^2:=\bigl|f\bigl(s_i,h(s_i)\bigr)\bigr|^2$$</span> we obtain exactly the values found by @Cesareo.</p> <p>Here is my computer output for this problem:</p> <p><a href="https://i.stack.imgur.com/u796d.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u796d.jpg" alt="enter image description here"></a></p>
3,222,871
<p>Let <span class="math-container">$P(x, y, 1)$</span> and <span class="math-container">$Q(x, y, z)$</span> lie on the curves <span class="math-container">$$\frac{x^2}{9}+\frac{y^2}{4}=4$$</span> and <span class="math-container">$$\frac{x+2}{1}=\frac{y-\sqrt{3}}{\sqrt{3}}=\frac{z-1}{2}$$</span> respectively. Then find the square of the minimum distance between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>.</p> <p>My Attempt is:</p> <p>I tried to find minimum distance between the points <span class="math-container">$(-2,\sqrt{3})$</span> and <span class="math-container">$(6\cos \theta,4\sin \theta)$</span>.</p>
Claude Leibovici
82,404
<p>Starting from @Christian Blatter's answer, using <span class="math-container">$s=2 \tan ^{-1}(x)$</span> and expanding, we end with <span class="math-container">$$2 \sqrt{3}\, x^4+70 \,x^3+72 \sqrt{3} \,x^2-274\, x-26 \sqrt{3}=0$$</span> Let <span class="math-container">$x=t-\frac{35}{4 \sqrt{3}}$</span> to get the depressed quartic <span class="math-container">$$t^4-\frac{937 }{8}t^2+\frac{24467}{24 \sqrt{3}} t-\frac{166043}{256}=0$$</span> which can be exactly solved using radicals.</p> <p>Following the steps given <a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">here</a>, we have <span class="math-container">$$\Delta=\frac{386701126204}{27}\quad P=-937\quad Q=\frac{24467}{3 \sqrt{3}}\quad \Delta_0=5935\quad D=-261003$$</span> So, four real roots with <span class="math-container">$$p=-\frac{937}{8}\quad q=\frac{24467}{24 \sqrt{3}}$$</span></p> <p>Just finish to get the exact values of <span class="math-container">$(t_1,t_2,t_3,t_4)$</span> from which <span class="math-container">$(x_1,x_2,x_3,x_4)$</span> and finally <span class="math-container">$(s_1,s_2,s_3,s_4)$</span> in terms of messy radicals. </p>
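A numeric check of the substitution (my own addition): the roots of the original quartic, shifted by $35/(4\sqrt3)$, should satisfy the stated depressed quartic.

```python
# Verify that x = t - 35/(4*sqrt(3)) turns
#   2*sqrt(3) x^4 + 70 x^3 + 72*sqrt(3) x^2 - 274 x - 26*sqrt(3) = 0
# into t^4 - (937/8) t^2 + (24467/(24*sqrt(3))) t - 166043/256 = 0.
import numpy as np

s3 = np.sqrt(3)
roots = np.roots([2 * s3, 70, 72 * s3, -274, -26 * s3])
t = roots + 35 / (4 * s3)
depressed = t**4 - (937 / 8) * t**2 + (24467 / (24 * s3)) * t - 166043 / 256
```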
4,072,769
<blockquote> <p>How to evaluate this? <span class="math-container">$$\prod_{k=1}^m \tan \frac{k\pi}{2m+1}$$</span></p> </blockquote> <p>My work</p> <p>I couldn't figure out a method to solve this product. I thought that this identity could help. <span class="math-container">$$\frac{e^{i\theta}-1}{e^{i\theta}+1}=i\tan \frac{\theta}{2}$$</span></p> <p>By supposing <span class="math-container">$z=e^{\frac{2k\pi}{2m+1}i}$</span>, then, <span class="math-container">$$i\tan \frac{k\pi}{2m+1}=\frac{z-1}{z+1}$$</span> So, the product <span class="math-container">$$\displaystyle\prod_{k=1}^m \tan\frac{k\pi}{2m+1}=\displaystyle\prod_{k=1}^m \frac{z-1}{i(z+1)}=\displaystyle\prod_{k=1}^m \frac{e^{\frac{2k\pi}{2m+1}i}-1}{i(e^{\frac{2k\pi}{2m+1}i}+1)}$$</span></p> <p>which is getting more complicated.</p> <p>Answer is <span class="math-container">$\sqrt{2m+1}$</span>. Any help is appreciated.</p>
Quanto
686,284
<p>Let</p> <p><span class="math-container">$$f(x) = x^{n-1} + x^{n-2} + x^{n-3} + \cdots + x + 1 = \prod_{k=1}^{n-1}(x - e^{i\frac{2\pi k}n}) $$</span> and set <span class="math-container">$n=2m+1$</span> to evaluate</p> <p><span class="math-container">$$\frac{f(1)}{f(-1)}= 2m+1 = \prod_{k=1}^{2m}\frac{1 - e^{i\frac{2\pi k}{2m+1}}}{1 + e^{i\frac{2\pi k}{2m+1}}} = i^{2m} \prod_{k=1}^{2m}\tan \frac{k\pi}{2m+1} = \prod_{k=1}^{m}\tan^2\frac{k\pi}{2m+1} $$</span> Thus</p> <p><span class="math-container">$$\prod_{k=1}^m \tan \frac{k\pi}{2m+1}=\sqrt{2m+1}$$</span></p>
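The closed form is easy to verify numerically for small $m$ (my own check, not part of the answer):

```python
# Product of tan(k*pi/(2m+1)) for k = 1..m should equal sqrt(2m+1).
import math
from functools import reduce

def tan_product(m):
    return reduce(lambda p, k: p * math.tan(k * math.pi / (2 * m + 1)),
                  range(1, m + 1), 1.0)
```

For instance $m=1$ gives $\tan(\pi/3)=\sqrt3$, matching $\sqrt{2\cdot1+1}$.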
2,431,861
<p>Let $P(z)=\displaystyle \sum_{0\le k\le n}a_kz^k$ a complex polynomial. What conditions must satisfy the coefficients $a_k$ to have $$P(z)=-\overline{P(\overline z)}\space \space ?$$</p>
José Carlos Santos
446,262
<p>That's when and only when every $a_k$ is purely imaginary.</p> <p>If every $a_k$ is purely imaginary, then\begin{align}-\overline{P\bigl(\overline z\bigr)}&amp;=-\overline{\sum_{k=0}^na_k\overline z^k}\\&amp;=-\sum_{k=0}^n\overline{a_k}z^k\\&amp;=\sum_{k=0}^na_kz^k\end{align}because $(\forall k\in\{0,1,\ldots,n\}):\overline{a_k}=-a_k$.</p> <p>On the other hand, if $-\overline{P\bigl(\overline z\bigr)}=P(z)$, then $(\forall k\in\{0,1,\ldots,n\}):\overline{a_k}=-a_k$ and therefore each $a_k$ is purely imaginary.</p>
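A spot check of the forward direction (my own addition): with randomly chosen purely imaginary coefficients, $P(z)=-\overline{P(\overline z)}$ holds at every sample point.

```python
# Random purely imaginary coefficients a_k = i*c_k, evaluated on random z.
import numpy as np

rng = np.random.default_rng(1)
coeffs = 1j * rng.standard_normal(6)          # a_0 .. a_5, purely imaginary

def P(z):
    return sum(a * z**k for k, a in enumerate(coeffs))

z = rng.standard_normal(20) + 1j * rng.standard_normal(20)
```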
90,876
<p>$$2x-\dfrac{x+1}{2} + \dfrac{1}{3}(x+3)= \dfrac{7}{3}$$</p> <p>When I solve this I always end up with 11x = 5, which is wrong, no matter which way I solve it. Does anyone know how to solve it? Steps? (Because I know the answer should be x=1)</p>
David Mitra
18,986
<p>$$\eqalign{&amp;2x -{x+1\over 2}+{x+3\over 3 }={7\over 3}\cr &amp;\iff12x \color{red}{- 3}(x+1) +{2 (x+3)}={14}\cr &amp;\iff12x-3x\color{red}{-3}+2x+6 ={14}\cr &amp;\iff 11x =11\cr &amp;\iff x=1 } $$</p> <p>You most likely forgot to "distribute the negative" (since you said you obtained $11x=5$).</p> <p><hr> To see what's going on there: we are using the rule that subtraction of a quantity is the same adding $(-1)$ times the quantity. </p> <p>$$\eqalign{12x -3(x+1)&amp;=12x +(-1)\cdot3 (x+1) \cr &amp;=12x +(-3)(x+1)\cr &amp;=12x+(-3)x+(-3)\cdot1\cr&amp;=12x-3x-3.}$$</p> <p>Of course, once you're accustomed to it, you just "distribute the negative sign".</p>
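The solution can be confirmed by exact substitution (my own check, using rational arithmetic to avoid rounding):

```python
# Verify x = 1 satisfies 2x - (x+1)/2 + (x+3)/3 = 7/3.
from fractions import Fraction

x = Fraction(1)
lhs = 2 * x - (x + 1) / 2 + (x + 3) / 3
```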
928,644
<blockquote> <p>Let $f,g$ be $\mathcal E$-$\mathcal B(\mathbb R)$-measurable functions. I want to show piecewise function $h$ of $f$ and $g$ is also measurable.</p> </blockquote> <p>Suppose $(X, \mathcal E)$ is a measure space, let $f,g$ be $\mathcal E$-$\mathcal B(\mathbb R)$-measurable functions and let $A \in \mathcal E$.</p> <p>I want to show $h: X \rightarrow \mathbb R$ given by $ h(x) = \left\{ \begin{array}{lr} f(x) : x \in A\\ g(x) : x \in A^C \end{array} \right.\\$ is again a $\mathcal E$-$\mathcal B(\mathbb R)$-measurable function.</p> <p>I've tried writing $(-\infty, a]$ as two disjoint sets $A_1, A_2$ such that $A_1 \cup A_2 = (-\infty, a]$, but then $f^{-1}(-\infty, a]) = f^{-1}(A_1 \cup A_2) = f^{-1}(A_1) \cup f^{-1}(A_2)$ and I can't say whether this is an element of $\mathcal E$. Also I don't use that $A \in \mathcal E$.</p> <p>Can anyone help ? </p>
PhoemueX
151,552
<p>Do you know that multiplication and addition of measurable functions are again measurable? If yes, simply note that</p> <p>$$ h = f \cdot \chi_A + g \cdot \chi_{A^c}, $$</p> <p>where $\chi_A$ is the characteristic function of $A$.</p>
198,739
<p>I'm actually doing an exercise where I have to draw graphs of functions. I understand r=|s| but not |r|=|s|. Are they the same?</p>
Brian M. Scott
12,042
<p>They are not the same. If $r=|s|$, then $r$ can never be negative, but $|r|=|s|$ is true if $r=-1$ and $s=1$ (or for that matter if $s=-1$). The statement that $|r|=|s|$ just says that $r=\pm s$, i.e., that $r=s$ or $r=-s$: in both cases $r$ and $s$ will have the same absolute value, regardless of their algebraic signs.</p>
103,540
<p>Suppose you have a triangular chessboard of size $n$, whose "squares" are ordered triples $(x,y,z)$ of nonnegative integers that add up to $n$. A rook can move to any other point that agrees with it in one coordinate -- for example, if you are on $(3,1,4)$ then you can move to $(2,2,4)$ or to $(6,1,1)$, but not to $(4,3,1)$.</p> <p>What is the maximum number of mutually non-attacking rooks that can be placed on this chessboard?</p> <p>More generally, is anything known about the graph whose vertices are these ordered triples and whose edges are rook moves?</p>
Will Sawin
18,060
<p>For n=6 you can fit 5 rooks</p> <p>(0,2,4) (4,0,2) (1,4,1) (3,3,0) (2,1,3)</p> <p>For n=9 you can fit 7 rooks</p> <p>(0,3,6) (6,0,3) (2,6,1) (4,5,0) (3,1,5) (5,2,2) (1,4,4)</p>
1,983,614
<p>Consider a measurable space $(\Omega, \mathcal{F})$ and let $I$ be an arbitrary index set. </p> <p>Is the following true?</p> <blockquote> <p>If $\left( A_i \right)_{i \in I}$ is a chain in $\mathcal{F}$ &ndash; that is, $\forall i \in I$, $A_i \in \mathcal{F}$ and for all $i, j \in I$, we have $A_i \subseteq A_j$ or $A_j \subseteq A_i$ &ndash; then $$\displaystyle \bigcup_{i \in I} A_i \in \mathcal{F}.$$</p> </blockquote>
Michael Hardy
11,667
<p>Within the known axioms of set theory you cannot disprove that $2^{\aleph_0} = \aleph_1.$ Recall that $\aleph_1$ is defined as the cardinality of the set of all countable ordinals and $2^{\aleph_0}$ is the cardinality of $[0,1]$.</p> <p>Let $A$ be any non-measurable subset of $[0,1]$. Suppose $|[0,1]| = 2^{\aleph_0} = \aleph_1$. Since $A$ must be uncountable and since no cardinality lies between $\aleph_0$ and $\aleph_1$ (that much is provable within conventional set theory), we have $|A|=\aleph_1$, so $$ A = \{ a_i : i \text{ is a countable ordinal} \}, $$ for some indexing $i\mapsto a_i$. Then let $$ A_i = \{ a_j : j \le i\}. $$ Then for each $i$, the set $A_i$ is countable, hence measurable, but $$ \bigcup_i A_i = A $$ is not measurable.</p> <p>Since you can't disprove that $2^{\aleph_0} = \aleph_1,$ you can't prove that your proposed union must be measurable.</p>
2,710,200
<p>I need to find the norm of an operator from $l^2 \to l^1$, but I'm struggling because of the different norms on $l^2$ and $l^1$. </p> <p>The operator is defined by $T:l^2 \to l^1, x_i \mapsto 2^{-i}x_i$. </p> <p>Using the canonical basis, I have that $||T||\geq 1/2$, but I have a feeling this is not a very good lower bound. I also cant seem to find any upper bound, because I have that $$||Tx||_1 = \sum_{i=1}^{\infty}2^{-i}x_i$$ but I can't relate this to $||x||_2$ because $||x||_2= (\sum_i^\infty x_i^2)^{1/2}$. </p> <p>Thanks for any help!</p>
Rigel
11,776
<p>Using the Cauchy-Schwarz inequality you get $$ \|Tx\|_1 = \sum_{i=1}^\infty 2^{-i} |x_i| \leq \left(\sum_{i=1}^\infty 4^{-i}\right)^{1/2} \left(\sum_{i=1}^\infty|x_i|^2\right)^{1/2} = \frac{1}{\sqrt{3}} \|x\|_2. $$ On the other hand, if you choose $x=(x_i)$ with $x_i = 2^{-i}$, you check in a moment that you get equality in the above inequality, hence $\|T\| = 1/\sqrt{3}$.</p> <p>Edit: same answer given in comments by acetone.</p>
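A numeric confirmation (my own addition): truncating the extremizer $x_i = 2^{-i}$ at $N$ terms, the ratio $\|Tx\|_1/\|x\|_2$ converges to $1/\sqrt3$.

```python
# Ratio ||Tx||_1 / ||x||_2 for the extremal sequence x_i = 2^{-i}.
import math

N = 60
x = [2.0**-i for i in range(1, N + 1)]
norm1 = sum(2.0**-i * xi for i, xi in enumerate(x, start=1))   # ||Tx||_1
norm2 = math.sqrt(sum(xi * xi for xi in x))                    # ||x||_2
ratio = norm1 / norm2
```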
2,515,765
<p>The following question is from an intermediate calculus book I am going through: </p> <p>Find two sets in $\mathbb R^2$ that have the same interior, but whose complements have different interiors. </p> <p>This seems like the kind of question that should be fairly straightforward, but I just can't think of an answer. I have tried, for instance, taking the first set to be $\mathbb R^2$ minus an open disc and comparing with $\mathbb R^2$ minus a closed disc. However the complements have the same interior. I have also tried the same with removing single points, squares etc, but nothing seems to work. If anyone can shed any light on this I'd be very grateful.<br> Thanks in advance. </p>
William Elliot
426,203
<p>Take $\{0\}$ and $\{0,1\}$: both have empty interior, but their complements are open and distinct, so the complements are their own (different) interiors.</p>
3,082,944
<blockquote> <p>Prove that space <span class="math-container">$X$</span> of all symmetric matrices in <span class="math-container">$GL_2(\mathbb R)$</span> with both the eigenvalues belonging to the interval <span class="math-container">$(0,2),$</span> with the topology inherited from <span class="math-container">$M_2(\mathbb R) $</span> is <strong>connected</strong>.</p> </blockquote> <p>Space of all symmetric matrices in <span class="math-container">$M_2(\mathbb R)$</span> is path -connected. </p> <p>I was not able to show why <span class="math-container">$det(\lambda A+(1-\lambda)B)\neq 0$</span> where <span class="math-container">$\lambda \in (0,1)$</span> and <span class="math-container">$A,B\in GL_2(\mathbb R), A=A^T,B=B^T.$</span></p> <p>Also how to use the eigenvalues from <span class="math-container">$(0,2)$</span> to prove the connectedness.</p> <p>I hope my doubts are clear to you.</p> <p>Any help is appreciated. Thank you.</p>
José Carlos Santos
446,262
<p>The space <span class="math-container">$SO_2(\mathbb{R})$</span> is connected. Now let<span class="math-container">$$\Lambda=\left\{\begin{bmatrix}\lambda_1&amp;0\\0&amp;\lambda_2\end{bmatrix}\,\middle|\,\lambda_1,\lambda_2\in(0,2)\right\}.$$</span>The set <span class="math-container">$\Lambda$</span> is connected too. So, the range of the map<span class="math-container">$$\begin{array}{ccc}SO_2(\mathbb{R})\times\Lambda&amp;\longrightarrow&amp;GL_2(\mathbb{R})\\(P,D)&amp;\mapsto&amp;P^{-1}DP\end{array}$$</span>is connected too. But this range is <span class="math-container">$X$</span>.</p>
1,520,643
<p>If A² = I, prove that the matrix A is diagonalizable.</p> <p>I have computed the eigenvalues to be 1 or -1 but I'm not sure how to proceed from here. </p> <p>I'm thinking along the lines of "since rank(A + I) + rank(A - I) = n, therefore there exists n linearly independent vectors which corresponds to n eigenvectors. Hence, A is diagonalizable". Is that correct?</p> <p>Thanks in advance!</p>
skyking
265,767
<p>You're right the eigenvalues are $\pm1$. Next step is to observe that the eigenvectors corresponding to an eigenvalue form a linear subspace - therefore you can create a basis for each of these subspaces. Then you end up with a basis $u_j$ and $v_k$ so that $Au_j = u_j$ and $Av_k=-v_k$. Express $A$ in this basis.</p>
1,520,643
<p>If A² = I, prove that the matrix A is diagonalizable.</p> <p>I have computed the eigenvalues to be 1 or -1 but I'm not sure how to proceed from here. </p> <p>I'm thinking along the lines of "since rank(A + I) + rank(A - I) = n, therefore there exists n linearly independent vectors which corresponds to n eigenvectors. Hence, A is diagonalizable". Is that correct?</p> <p>Thanks in advance!</p>
abel
9,252
<p>we know that the eigenvalues of $A$ are $\pm 1.$ suppose the dimension of the null space of $A - I$ is $k\ge 1$ and a basis for the null space is $\{x_1, x_2, \ldots, x_k\}.$ pick any $y$ not in the null space of $A-I.$ then $Ay - y \ne 0$ and $A(Ay-y) = A^2y-Ay = y-Ay = -1(Ay-y)$ that is $Ay-y$ is an eigenvector of $A$ corresponding to the eigenvalue $-1.$ therefore the eigenspaces of $A$ span the whole space $R^n$ which implies that $A$ is diagonalizable.</p>
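A quick numeric illustration (my own, not from either answer): any matrix of the form $PDP^{-1}$ with $D=\operatorname{diag}(\pm1)$ squares to the identity, and numerically it has eigenvalues $\pm1$ with a full set of independent eigenvectors.

```python
# Random involution A = P D P^{-1}, D = diag(1, 1, -1, -1).
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((4, 4))
D = np.diag([1.0, 1.0, -1.0, -1.0])
A = P @ D @ np.linalg.inv(P)

w, V = np.linalg.eig(A)   # eigenvalues and eigenvector matrix
```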
2,853,668
<blockquote> <p>Show that $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{x^n}$$ converges for every $x&gt;1$.</p> </blockquote> <p>Let $a(x)$ be the sum of the series. Is $a$ continuous at $x=2$? Is it differentiable?</p> <p>I guess the first part follows from the Leibniz test, but I am not sure about it.</p>
marty cohen
13,079
<p>Let's look at the partial sums, and let $y = -1/x$ so $-1 &lt; y &lt; 0$..</p> <p>$\begin{array}\\ s_m(y) &amp;=\sum_{n=1}^{m} (-1)^{n-1}(-y)^n\\ &amp;=\sum_{n=1}^{m} (-1)^{n-1}(-1)^ny^n\\ &amp;=-\sum_{n=1}^{m} y^n\\ &amp;=-y\sum_{n=0}^{m-1} y^n\\ &amp;=-y\dfrac{1-y^m}{1-y}\\ &amp;=\dfrac{-y}{1-y}-\dfrac{-y^{m+1}}{1-y}\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ s_m(y)+\dfrac{y}{1-y} &amp;=\dfrac{y^{m+1}}{1-y}\\ \end{array} $</p> <p>Therefore $\sum_{n=1}^{m} \frac{(-1)^{n-1}}{x^n}+\dfrac{-1/x}{1+1/x} =\dfrac{1}{(-x)^{m+1}(1+1/x)} $ or $\sum_{n=1}^{m} \frac{(-1)^{n-1}}{x^n}-\dfrac{1}{x+1} =\dfrac{(-1)^{m+1}}{x^{m}(x+1)} $.</p> <p>What is needed now is to show that $\lim_{m \to \infty} \dfrac{1}{x^{m}(x+1)} =0$.</p> <p>(This is from "What is Mathematics")</p> <p>Since $x &gt; 1$, $x = 1+z$ where $z &gt; 0$.</p> <p>By Bernoulli's inequality, $x^m =(1+z)^m \ge 1+mz \gt mz =m(x-1)$, so $ \dfrac{1}{x^{m}(x+1)} \lt \dfrac{1}{m(x-1)(x+1)} =\dfrac{1}{m(x^2-1)} $, so to make $\dfrac{1}{x^{m}(x+1)} \lt \epsilon $ it is enough to take $m \gt \dfrac1{\epsilon(x^2-1)} $.</p> <p>This is certainly not the best $m$, but it is completely elementary.</p>
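A numeric check of the closing identity (my own addition): the partial sums approach $\frac{1}{x+1}$, and the error is exactly $\frac{1}{x^m(x+1)}$ as derived above; at $x=2$ the sum is $\frac13$.

```python
# Partial sums of sum (-1)^(n-1) / x^n at x = 2, truncated at m terms.
x, m = 2.0, 30
partial = sum((-1)**(n - 1) / x**n for n in range(1, m + 1))
error = abs(partial - 1 / (x + 1))
bound = 1 / (x**m * (x + 1))
```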
2,969,004
<p>I have seen several references to "order" of an element in the Symmetric Group. Specifically, that the order of a cycle is the least common multiple of the lengths of the cycles in its decomposition.</p> <p>But the Symmetric Group is not cyclic, and I'm only familiar with the concept of "order" for cyclic groups. So what does it mean in this context?</p>
seamp
606,999
<p>The order of an element <span class="math-container">$g$</span> in a finite group <span class="math-container">$G$</span> is the smallest integer <span class="math-container">$n \in \mathbb{N}^*$</span> such that <span class="math-container">$g^n = e$</span> (the neutral element of the group). This is well-defined for every finite group <span class="math-container">$G$</span>, so in particular for the Symmetric group as well.</p> <p>Indeed, Lagrange's theorem implies that for any finite group <span class="math-container">$G$</span> with <span class="math-container">$p$</span> elements and any <span class="math-container">$g \in G$</span>, <span class="math-container">$g^p = e$</span>, which shows that every <span class="math-container">$g$</span> has finite order, less than or equal to <span class="math-container">$p$</span>. </p>
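To connect this with the lcm-of-cycle-lengths statement from the question, here is a small sketch (my own; the `compose` and `order` helpers are illustrative, not from the answer) checking in $S_5$ that a permutation with cycles of lengths 2 and 3 has order $\operatorname{lcm}(2,3)=6$:

```python
# Permutations of {0,...,4} as tuples; (p o q)(i) = p[q[i]].
from math import lcm

def compose(p, q):
    return tuple(p[i] for i in q)

identity = (0, 1, 2, 3, 4)
g = (1, 0, 3, 4, 2)   # cycles (0 1) and (2 3 4)

def order(g):
    p, n = g, 1
    while p != identity:
        p, n = compose(p, g), n + 1
    return n
```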
15,871
<p>I would like to state something about the existence of solutions $x_1,x_2,\dots,x_n \in \mathbb{R}$ to the set of equations</p> <p>$\sum_{j=1}^n x_j^k = np_k$, $k=1,2,\dots,m$</p> <p>for suitable constants $p_k$. By "suitable", I mean that there are some basic requirements that the $p_k$ clearly need to satisfy for there to be any solutions at all ($p_{2k} \ge p_k^2$, e.g.).</p> <p>There are many ways to view this question: find the coordinates $(x_1,\dots,x_n)$ in $n$-space where all these geometric structures (hyperplane, hypersphere, etc.) intersect. Or, one can see this as determining the $x_j$ necessary to generate the truncated Vandermonde matrix $V$ (without the row of 1's) such that $V{\bf 1} = np$ where ${\bf 1} = (1,1,\dots,1)^T$ and $p = (p_1,\dots,p_m)^T$.</p> <p>I'm not convinced one way or the other that there has to be a solution when one has $m$ degrees of freedom $x_1,\dots,x_m$ (same as number of equations). In fact, it would be interesting to even be able to prove that for finite number $m$ equations $k=1,2,\dots,m$ that one could find $x_1,\dots,x_n$ for bounded $n$ (that is, the number of data points required does not blow up).</p> <p>A follow on question would be to ask if requiring ordered solutions, i.e. $x_1 \le x_2 \le \dots \le x_n$, makes the solution unique for the cases when there is a solution.</p> <p>Note: $m=2$ is easy. There is at least one solution = the point(s) where a line intersects a circle given that $p_2 \ge p_1^2$. </p> <p>Any pointers on this topic would be helpful -- especially names of problems resembling it.</p>
fedja
1,131
<p>Normally your "defect" is called an "additional assumption"/"extra condition"/... and the typical phrase is "the inverse implication also holds under the additional assumption that...". Yes, the search for such things is something that mathematicians do on an everyday basis trying to bridge the gap between what is necessary and what is sufficient. </p> <p>The thing you should understand is that in real life it is not necessarily the first priority to have $D$ as weak as possible from the logical standpoint. What often matters much more is that $D$ is easy to verify, holds in many interesting cases, allows one to give an easy and elegant proof, etc. Actually, Joel has already brought the idea of pure logical minimality to its absurd extreme form, so I hardly need to comment more on this issue.</p> <p>By the way, the assumption that $P\implies Q$ is completely unnecessary in your definition of the "defect"; it makes just as much sense without it. Indeed, the usual story is that we know something ($Q$), we want to conclude something else ($P$), we suspect that the implication $Q\implies P$ is (may be) not always true, but we want this implication not for its own sake but to figure out something about some object $X$, so we ask what other property $D$ $X$ possesses that together with $Q$ will give us $P$. </p>
15,871
<p>I would like to state something about the existence of solutions $x_1,x_2,\dots,x_n \in \mathbb{R}$ to the set of equations</p> <p>$\sum_{j=1}^n x_j^k = np_k$, $k=1,2,\dots,m$</p> <p>for suitable constants $p_k$. By "suitable", I mean that there are some basic requirements that the $p_k$ clearly need to satisfy for there to be any solutions at all ($p_{2k} \ge p_k^2$, e.g.).</p> <p>There are many ways to view this question: find the coordinates $(x_1,\dots,x_n)$ in $n$-space where all these geometric structures (hyperplane, hypersphere, etc.) intersect. Or, one can see this as determining the $x_j$ necessary to generate the truncated Vandermonde matrix $V$ (without the row of 1's) such that $V{\bf 1} = np$ where ${\bf 1} = (1,1,\dots,1)^T$ and $p = (p_1,\dots,p_m)^T$.</p> <p>I'm not convinced one way or the other that there has to be a solution when one has $m$ degrees of freedom $x_1,\dots,x_m$ (same as number of equations). In fact, it would be interesting to even be able to prove that for finite number $m$ equations $k=1,2,\dots,m$ that one could find $x_1,\dots,x_n$ for bounded $n$ (that is, the number of data points required does not blow up).</p> <p>A follow on question would be to ask if requiring ordered solutions, i.e. $x_1 \le x_2 \le \dots \le x_n$, makes the solution unique for the cases when there is a solution.</p> <p>Note: $m=2$ is easy. There is at least one solution = the point(s) where a line intersects a circle given that $p_2 \ge p_1^2$. </p> <p>Any pointers on this topic would be helpful -- especially names of problems resembling it.</p>
Gerald Edgar
454
<p>In one field of mathematics, it's the "Tauberian condition".</p>
3,134,991
<p>If nine coins are tossed, what is the probability that the number of heads is even?</p> <p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p> <p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p> <p><span class="math-container">$n = 9, k = 0$</span></p> <p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p> <p><span class="math-container">$n = 9, k = 2$</span></p> <p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p> <p><span class="math-container">$n = 9, k = 4$</span> <span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p> <p><span class="math-container">$n = 9, k = 6$</span></p> <p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p> <p><span class="math-container">$n = 9, k = 8$</span></p> <p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p> <p>Add all of these up: </p> <p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
Selene Routley
10,549
<p>Nine coins, so that the two events </p> <p><span class="math-container">$\mathscr{E}_1$</span> = #heads is even and </p> <p><span class="math-container">$\mathscr{E}_2$</span> = #tails is even</p> <p>are mutually exclusive (the number of tails is 9 - the number of heads, so the former is even iff the latter is odd) and comprise all possibilities, thus <span class="math-container">$P(\mathscr{E}_1) + P(\mathscr{E}_2) =1$</span>. But if the coins are fair, then the probabilities must be unchanged if we swap the roles of heads and tails. Hence <span class="math-container">$P(\mathscr{E}_1)= P(\mathscr{E}_2)$</span> and we immediately see both probabilities must be <span class="math-container">$\frac{1}{2}$</span>.</p> <hr/> <p>Now you may be wondering why your approach gave a different number, because it is basically sound. You've simply made a slip. </p> <p>Your approach is: sum every second term in the 10-member (i.e. an even number of terms) sequence whose <span class="math-container">$n^{th}$</span> term is the probability of <span class="math-container">$n$</span> heads. So the sum is:</p> <p><span class="math-container">$$S_1=\sum_{k=0}^{\lfloor N/2\rfloor} \binom{N}{2\,k}\left(\frac{1}{2}\right)^N$$</span></p> <p>with <span class="math-container">$N$</span> odd (here equal to 9).</p> <p>But, by dint of <span class="math-container">$\binom{N}{2\,k} = \binom{N}{N-2\,k}$</span>, this sum is equal to the sum of all the other terms</p> <p><span class="math-container">$$S_2 =\sum_{k=0}^{\lfloor N/2\rfloor} \binom{N}{N-2\,k}\left(\frac{1}{2}\right)^N$$</span></p> <p>in the sequence that don't belong to the first sum. So <span class="math-container">$S_1=S_2$</span> and clearly <span class="math-container">$S_1+S_2=1$</span>, because this sum is the sum of the probabilities of all possible mutually exclusive outcomes, therefore 1; or, alternatively, call up the binomial theorem and see that <span class="math-container">$S_1+S_2=\left(\frac{1}{2} + \frac{1}{2}\right)^9=1$</span>.</p>
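As a quick numerical cross-check of both arguments (a Python sketch added purely for illustration, not part of the original reasoning), summing the even-$k$ binomial terms exactly:

```python
from math import comb

# Exact computation of P(number of heads is even) for N = 9 fair coins:
# sum C(N, k) / 2**N over even k -- the OP's approach, carried out without slips.
N = 9
p_even = sum(comb(N, k) for k in range(0, N + 1, 2)) / 2**N
print(p_even)  # 0.5, agreeing with the symmetry argument
```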
2,406,061
<p>I am also confused about whether these are symbols or have some meaning of their own. PS- I know that <span class="math-container">$\operatorname{d}y\over\operatorname{d}x$</span> geometrically represents the slope. But, I've come across <span class="math-container">$\operatorname{d}x\over\operatorname{d}y$</span> to make problems easier. What does <span class="math-container">$\operatorname{d}x\over\operatorname{d}y$</span> mean?</p>
Daniel Cunha
355,450
<p>You should be very careful: those are merely notations.</p> <p>$\frac{d\,y(x)}{d\,x}$ is the derivative of a variable $y$ with respect to $x$. It represents how much $y$ varies for small variations of $x$. If you draw the curve of $y(x)$, the derivative will be the slope, as you said.</p> <p>The opposite works as well: if one can define $x$ as a function of $y$ (which is the same as inverting the function $y(x)$ - we must restrict to a subset of the domain where it is injective to do so), you can differentiate it too: $\frac{d\,x(y)}{d\,y}$</p> <p>But be careful! It is just a notation; it is not the division of $dx$ by $dy$, it is a limit, as you can see <a href="https://en.wikipedia.org/wiki/Derivative" rel="noreferrer">here</a>.</p> <p>$\boxed{\frac{d\,y(x)}{d\,x} = \lim\limits_{h\rightarrow0} \frac{y(x+h)-y(x)}{h}}$</p> <p><a href="http://www.felderbooks.com/papers/dx.html" rel="noreferrer">Here</a> is some discussion about the meaning of $dx$ alone; pay attention to the commentary at the end: "Since I first posted this paper, two different people have emailed me to tell me that Real Mathematicians don't do this. Playing with dx in the ways described in this paper is apparently one of those smarmy tricks that physicists use to give headaches to mathematicians."</p>
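To make the "it is a limit, not a division" point concrete, here is a small numerical sketch (the function $y=x^3$ and the sample point are arbitrary choices for illustration): the difference quotients of $y(x)$ and of its inverse approximate reciprocal slopes.

```python
# Difference quotients approximating dy/dx and dx/dy for y = x**3,
# which is invertible for x > 0; at x0 = 2, dy/dx = 12 and dx/dy = 1/12.
def y(x):
    return x**3

def x_of_y(t):        # the inverse function, x = y**(1/3)
    return t ** (1 / 3)

x0, h = 2.0, 1e-6
dydx = (y(x0 + h) - y(x0)) / h                    # ~ 3 * x0**2 = 12
dxdy = (x_of_y(y(x0) + h) - x_of_y(y(x0))) / h    # ~ 1/12
print(dydx, dxdy, dydx * dxdy)                    # product ~ 1
```

So at a point where both derivatives exist and are nonzero, $\frac{dx}{dy}$ behaves like the reciprocal of $\frac{dy}{dx}$, even though neither is literally a fraction.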
1,291,107
<p>Let $X$ be a random variable and $f$ its density. How can one calculate $E(X\vert X&lt;a)$?</p> <p>From the definition we have:</p> <p>$$E(X\vert X&lt;a)=\frac{E\left(X \mathbb{1}_{\{X&lt;a\}}\right)}{P(X&lt;a)}$$</p> <p>Is this equal to:</p> <p>$$\frac{\int_{\{X&lt;a\}}xf(x)dx}{P(X&lt;a)}$$</p> <p>? If yes, then how does one justify it? Thanks. I'm a conditional expectation noob.</p> <p>Also, what is $E(X|X=x_0)$? In the discrete case it is $x_0$...</p>
KittyL
206,286
<p>From the beginning: $$(1-x)(x-5)^3=x-1\\ (1-x)(x-5)^3+1-x=0\\ (1-x)(x-5)^3+(1-x)=0\\ (1-x)[(x-5)^3+1]=0\\$$</p> <p>This implies $1-x=0$ or $(x-5)^3=-1$. I believe you can solve these.</p>
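A quick check of the two real roots this factoring produces, $x=1$ and $x=4$ (Python used purely for verification):

```python
# Verify that x = 1 (from 1 - x = 0) and x = 4 (from (x - 5)**3 = -1)
# both satisfy the original equation (1 - x)(x - 5)**3 = x - 1.
f = lambda x: (1 - x) * (x - 5)**3 - (x - 1)
for root in (1, 4):
    print(root, f(root))  # both print 0
```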
97,130
<p>I tried to prove that $$(1-2x)^2=1/3+4/\pi^2\sum_1^\infty \cos(2n x \pi)/n^2$$ for $x \in [0,1)$ with Fourier analysis, but I just found a Fourier series which defines the function. I also found the Fourier series of $\cos(2n x \pi)$.</p> <p>I don't think these results are helpful.</p> <p>Any suggestions on how to prove this equation?</p>
AD - Stop Putin -
1,154
<p><strong>Hint/Problems</strong></p> <p>(<em>Changed since the previous was wrong -- the function is not even in itself.</em>)</p> <ol> <li><p>Note first that $x\mapsto e(x)=(1-2x)^2$ is even on $[0,1]$ in the sense $e(x)= e(1-x)$ (either visualise it or verify by computation: $(1-2(1-x))^2 = (1-2+2x)^2 = (-1+2x)^2$).</p></li> <li><p>Now, let us look at the function $s$ on $I=[-1,1]$, which is the <em>even extension</em> of the function $x\mapsto (1-2x)^2$ on $[0,1]$ (so the graph looks a bit like $\omega$). </p></li> <li><p>Note that the sine Fourier coefficients are 0, while the cosine Fourier coefficients are given by $$a_n = 2\int_0^1s(x)\cos(n\pi x)dx.$$</p></li> <li><p>Next, $a_n=0$ for odd $n$; this follows from 1 (try to see why without a calculation) or a simple calculation, and for even $n\ne0$ we have $$a_n= \frac{16}{n^2\pi^2}$$ while $a_0=2/3$.</p></li> <li><p>Why is $s$ equal to its Fourier series on $[-1,1]$? </p></li> <li><p>What is $s$ equal to in $[0,1]$?</p></li> </ol> <hr> <p><em>Last edit</em></p> <p>For even $n$ we have $n=2k$ for some $k$; note also that $a_0$ should be divided by 2 in the expansion.</p>
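Before working through the problems, one can check the claimed expansion numerically; here is a short Python sketch (the sample points and the truncation length are arbitrary choices):

```python
from math import cos, pi

# Compare (1 - 2x)**2 against a truncation of
# 1/3 + (4/pi**2) * sum_{n >= 1} cos(2*pi*n*x) / n**2 on [0, 1).
def series(x, terms=20000):
    return 1 / 3 + (4 / pi**2) * sum(cos(2 * pi * n * x) / n**2
                                     for n in range(1, terms + 1))

for x in (0.1, 0.25, 0.7):
    print(x, (1 - 2 * x)**2, series(x))  # the two columns agree closely
```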
97,130
<p>I tried to prove that $$(1-2x)^2=1/3+4/\pi^2\sum_1^\infty \cos(2n x \pi)/n^2$$ for $x \in [0,1)$ with Fourier analysis, but I just found a Fourier series which defines the function. I also found the Fourier series of $\cos(2n x \pi)$.</p> <p>I don't think these results are helpful.</p> <p>Any suggestions on how to prove this equation?</p>
robjohn
13,854
<p>Consider the equivalent problem using $y=x-\frac12$ on the interval $[-\frac12,\frac12]$: Prove that $$ 4y^2=\frac13+\frac{4}{\pi^2}\sum_{n=1}^\infty(-1)^n\frac{\cos(2\pi ny)}{n^2}\tag{1} $$ Since $\pi\csc(\pi z)$ has residue $(-1)^n$ at each integer, let's consider $f_y(z)=\pi\csc(\pi z)\frac{\cos(2\pi zy)}{z^2}$.</p> <p>Let $\gamma_N$ be the rectangular path $$ (N{+}\!\tfrac12)-iN\to(N{+}\!\tfrac12)+iN\to-(N{+}\!\tfrac12)+iN\to-(N{+}\!\tfrac12)-iN\to(N{+}\!\tfrac12)-iN $$ It is not hard to see that for $|y|&lt;\frac12$, $$ \lim_{N\to\infty}\;\;\oint_{\gamma_N}f_y(z)\;\mathrm{d}z=0\tag{2} $$ Equation $(2)$ relates the residue of $f_y$ at $0$ with the sum in $(1)$: $$ \operatorname{Res}(f_y,0)+ 2\sum_{n=1}^\infty(-1)^n\frac{\cos(2n\pi y)}{n^2}=0\tag{3} $$ Let's look at the Laurent series for $f_y$: $$ \begin{align} \pi\csc(\pi z)\frac{\cos(2\pi zy)}{z^2} &amp;=\left(\frac1z+\frac{\pi^2}{6}z+\dots\right)\left(\frac{1}{z^2}-2\pi^2y^2+\dots\right)\\ &amp;=\frac{1}{z^3}+\left(\frac{\pi^2}{6}-2\pi^2y^2\right)\frac{1}{z}+\dots\tag{4} \end{align} $$ Equation $(4)$ says that $\operatorname{Res}(f_y,0)=\frac{\pi^2}{6}-2\pi^2y^2$. Combining this with $(3)$ yields $$ \sum_{n=1}^\infty(-1)^n\frac{\cos(2n\pi y)}{n^2}=\pi^2\left(y^2-\frac{1}{12}\right)\tag{5} $$ which immediately verifies $(1)$.</p>
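A numerical spot check of $(5)$ (a Python sketch added for illustration; the sample points in $(-\frac12,\frac12)$ are arbitrary):

```python
from math import cos, pi

# Partial sums of sum_{n >= 1} (-1)**n * cos(2*pi*n*y) / n**2
# versus the closed form pi**2 * (y**2 - 1/12), for |y| < 1/2.
def lhs(y, terms=20000):
    return sum((-1)**n * cos(2 * pi * n * y) / n**2
               for n in range(1, terms + 1))

for y in (0.0, 0.2, -0.3):
    print(y, lhs(y), pi**2 * (y**2 - 1 / 12))  # the two columns agree closely
```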
182,756
<p>I'm trying to solve a non linear ODE numerically with <code>ParametricNDSolve</code>, but as far as I got is shown below. My problem is to set the find root correctly. What I know is this: <code>x'[0] == 0, x[R] == 0, x'[R] == 0</code>. Any help? Here is my code: </p> <pre><code>c = -0.7177; r1 = 0.8; r2 = 125; R = 1.39; f[r_] := Piecewise[{{0, 0 &lt;= r &lt;= r1}, {900/(1 - r1^3), r1 &lt; r &lt;= 1}, {0, 1 &lt; r &lt;= R}}] ps = ParametricNDSolveValue[{x''[r] + (1/r) x'[r] == c n Exp[-x[r]] + f[r], x'[0] == 0, x[0] == x0}, {x, x'}, {r, 0, R}, {x0,n}, Method -&gt; "StiffnessSwitching"] ff = FindRoot[{Last[ps[x0,n]][R] == 0, First[ps[x0,n]][R] == 0}, {x0, -2}] </code></pre>
Henrik Schumacher
38,178
<p>As Alex Trounev said, this is a second-order ODE with discontinuous right-hand side. You can use <code>Piecewise</code> to set up the forcing term:</p> <pre><code>rhs = Piecewise[{ {c n0 Exp[-x[r]] + (3 h)/(a^3 - b^3), a &lt;= r &lt; b} }, c n0 Exp[-x[r]] ] </code></pre> <blockquote> <p><span class="math-container">$$\begin{cases} \frac{3 h}{a^3-b^3}+c \,\text{n0}\, e^{-x(r)} &amp; a\leq r&lt;b \\ c \,\text{n0} \, e^{-x(r)} &amp; \text{True} \end{cases}$$</span></p> </blockquote> <p>Notice that for convenience, I used <code>c n0 Exp[-x[r]]</code> as the default term. The full equation can be set up as</p> <pre><code>c = 0.72; h = 300; a = 15; b = 17; R = 25; ϵ = $MachineEpsilon; ps = ParametricNDSolveValue[ { x''[r] + 2 x'[r] == rhs, x[R] == 0, x'[R] == 0 }, x, {r, ϵ, R}, {n0}, Method -&gt; "StiffnessSwitching", WorkingPrecision -&gt; 30 ]; </code></pre> <p>Solving it for a given parameter</p> <pre><code>f = ps[0.00001]; </code></pre> <p>Plotting the result:</p> <pre><code>Plot[f[r], {r, ϵ, R}] </code></pre> <p><a href="https://i.stack.imgur.com/tF960.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tF960.png" alt="enter image description here"></a></p> <p>Something must be wrong in your model: The solution blows up heavily towards <span class="math-container">$r = 0$</span> so that <code>x'[0] == 0</code> cannot be expected. Actually, you cannot prescribe more than two boundary conditions for an ODE of order 2.</p>
182,756
<p>I'm trying to solve a non linear ODE numerically with <code>ParametricNDSolve</code>, but as far as I got is shown below. My problem is to set the find root correctly. What I know is this: <code>x'[0] == 0, x[R] == 0, x'[R] == 0</code>. Any help? Here is my code: </p> <pre><code>c = -0.7177; r1 = 0.8; r2 = 125; R = 1.39; f[r_] := Piecewise[{{0, 0 &lt;= r &lt;= r1}, {900/(1 - r1^3), r1 &lt; r &lt;= 1}, {0, 1 &lt; r &lt;= R}}] ps = ParametricNDSolveValue[{x''[r] + (1/r) x'[r] == c n Exp[-x[r]] + f[r], x'[0] == 0, x[0] == x0}, {x, x'}, {r, 0, R}, {x0,n}, Method -&gt; "StiffnessSwitching"] ff = FindRoot[{Last[ps[x0,n]][R] == 0, First[ps[x0,n]][R] == 0}, {x0, -2}] </code></pre>
Alex Trounev
58,388
<p>This problem has a solution. It is given below</p> <pre><code>c = 0.72; h = 300; a = 15; b = 17; R = 25; f[r_] := Piecewise[{{0, 0 &lt;= r &lt;= a}, {(3 h)/(a^3 - b^3), a &lt; r &lt;= b}, {0, b &lt; r &lt;= R}}] ps = ParametricNDSolveValue[{x''[r] + 2 x'[r] == c n0 Exp[-x[r]] + f[r], x'[0] == 0, x[0] == x0}, x, {r, 0, R}, {n0, x0}]; n = FindRoot[{ps[n0, x0][R] == 0, ps[n0, x0]'[R] == 0}, {n0, -.2}, {x0, 1}] {n0 -&gt; 9.19855*10^-8, x0 -&gt; 0.585175} {Plot[ Evaluate[Table[ps[n0, 1][r], {n0, -.2, 2, .1}]], {r, 0, R}, PlotRange -&gt; All], Plot[ps[n[[1, 2]], n[[2, 2]]][r], {r, 0, R}]} </code></pre> <p><a href="https://i.stack.imgur.com/BnH2M.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BnH2M.png" alt="fig1"></a></p>
64,130
<p>This is an arithmetic follow-up to my previous question <a href="https://mathoverflow.net/questions/64112/does-there-exist-a-non-trivial-semi-stable-curve-of-genus-1-with-only-4-singular">Does there exist a non-trivial semi-stable curve of genus &gt;1 with only 4 singular fibres</a> </p> <p>Let $k$ be an algebraically closed field and let $f:X\longrightarrow \mathbf{P}^1_{k}$ be a semi-stable curve. Let $s$ denote the number of singular fibres. If $X$ is non-isotrivial and of positive genus, we have that $s&gt;2$ (Beauville and Szpiro). As Angelo stated in my previous question, for genus >1 and $k=\mathbf{C}$, Sheng-Li Tan has shown that $s&gt;4$.</p> <p>Now, let $S=\textrm{Spec} \mathbf{Z}$ and let $X\longrightarrow S$ be a (regular) semi-stable arithmetic surface. Let $s$ be the number of singular fibres. Fontaine has shown that $s&gt;0$ if $X$ is of positive genus. </p> <p><strong>Question.</strong> Let $g&gt;0$ be an integer. Does there exist a semi-stable arithmetic surface $X\longrightarrow S$ of genus $g$ with precisely one singular fibre?</p> <p>I expect the answer to be yes for $g=1$ but no for $g&gt;1$.</p> <p><strong>Example.</strong> The modular curve $X_1(\ell)$ ($\ell$ big enough) has semi-stable reduction over Spec $\mathbf{Z}[\zeta_{l}]$. This model has precisely one singular fibre. Note that the modular curve $X_1(l)$ does not have semi-stable reduction over $\mathbf{Z}$.</p>
JSE
431
<p>I have the opposite intuition -- I would think the answer would be yes for all g. In genus 1, you are asking (I think) whether there are elliptic curves with prime conductor. There are a lot of elliptic curves of prime conductor; I believe the question of whether there are infinitely many is open, and considered hard. </p> <p>In higher genus, I would still expect a lot of semistable curves with only one singular fiber; after all, the singular fibers in your question are CLOSED fibers, not GEOMETRIC fibers as in the case of k=C. A better analogy would be curves over P^1_k where k is a finite field; it is much harder to find a curve whose bad fibers form a subscheme of degree 1 than it is to find a curve whose bad fibers form a subscheme with 1 irreducible component. You are asking for the latter.</p> <p>Loosely speaking, I think if you write down a hyperelliptic curve y^2 = f(x) and the discriminant of f(x) is prime, you're almost there, maybe with some problems at 2. Are there infinitely many polynomials of a given degree with prime discriminant? Surely yes, though again this is a "Schinzel-type" statement which would be very hard to prove.</p>
70,803
<p>Let $S$ be the sphere in $\mathbb{R}^3$ and $C:[0,1]\to S$ a continuously differentiable curve on $S$. Let $T:[0,1]\to\mathbb{R}^3$ denote the tangent vector of $C$. Let $P(t)$ be the plane containing $C(t)$ and having normal vector $T(t)$.</p> <p>Given a size $d$ of the "paint brush" we define the "brush" $b:[0,1]\to \mathcal{P}(S)$ by letting $b(t)$ be the points of $S$ that are at most a distance $d$ (metric on the sphere) from $C(t)$ that are contained in $P(t)$.</p> <p>We can think of this as saying the "brush" $b(t)$ is an arc on the sphere that is "orthogonal" to the motion $C(t)$ of the "paint brush".</p> <p>Given $d$ what is the arclength of the shortest curves such that $\cup_{t\in[0,1]} b(t) = S$. This says that the "paint brush" covered the sphere.</p>
Jean-Marc Schlenker
9,890
<p>This question is somewhat related to <a href="https://mathoverflow.net/questions/69099/shortest-closed-curve-to-inspect-a-sphere">this recent one</a>. More precisely, the comment by Gjergji Zaimi in the earlier question gives a painting of length $2\sqrt{2}\pi$ for $d=\pi/4$, which, as explained in another comment there, is optimal for a path at constant distance from the sphere. So for $d=\pi/4$ the optimal length should be $2\sqrt{2}\pi$.</p>
4,510,384
<p>In exercise 2.13 of page 43 of the book <a href="https://rads.stackoverflow.com/amzn/click/com/0134746759" rel="nofollow noreferrer" rel="nofollow noreferrer">Mathematical Proofs: A Transition to Advanced Mathematics</a> the reader is asked to state the logical negation of some statements. Of these, I find the authors' answer to one of them baffling.</p> <p>The statement to negate is:</p> <blockquote> <p>&quot;Two sides of the triangle have the same length.&quot;</p> </blockquote> <p>The authors' negation of the statement is:</p> <blockquote> <p>&quot;The sides of the triangle have different lengths&quot;.</p> </blockquote> <p>Am I mistaken in assuming that when negating a statement, one is supposed to state what previously presumed false as true and vice versa? If one assumes the proposition &quot;Two sides of the triangle have the same length.&quot; to be true, is it erroneous to conclude that the negation would be &quot;The sides of the triangle have different lengths&quot; or (exclusively) 'Three sides of the triangle have the same length'? I thank your aid in advance.</p>
fleablood
280,126
<p>Your difficulty arises from interpreting the sentence &quot;two sides are equal&quot; as &quot;two sides are equal (and the third side is a different length)&quot;. However, that is not what &quot;two sides are equal&quot; means. &quot;Two sides are equal&quot; means &quot;there are two sides that are equal... we don't know <em>which</em> two sides are equal and we don't know anything about whether the third is or is not also equal to those two sides&quot;.</p> <p>Hopefully, viewing it that way, you can see why the answer is what it is.</p> <p>.... read on if you wish......</p> <p>If all three sides are equal then any two sides will be equal, so all three sides being equal is compatible with (and is a special case of) two sides being equal.</p> <p>The negation of &quot;two sides are equal&quot; is &quot;there are no two sides that are equal&quot;, and that is equivalent to &quot;all sides are different&quot;.</p> <p>However, if we take the statement &quot;<em>EXACTLY</em> two sides are equal&quot;, that would mean that two sides are equal and the third is a different length. The negation of <em>that</em> would be that it is not the case that exactly two sides are equal, so either there are fewer than two sides that are equal (no two sides are equal) or there are more than two sides that are equal (all three sides are equal).</p> <p>So the negation of &quot;<em>EXACTLY</em> two sides are equal&quot; would be &quot;either all sides are different or all sides are the same&quot;.</p> <p>.....</p> <p>But &quot;two sides are equal&quot; does NOT mean &quot;exactly two sides are equal&quot;. &quot;Two sides are equal&quot; means &quot;there exists at least one pair of equal sides&quot;. And the negation <em>IS</em> &quot;all sides are different&quot;.</p>
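The point about existential statements can also be checked mechanically; here is a small Python sketch (the range of sample side lengths is an arbitrary choice) confirming that "all sides are different" is exactly the negation of "there exist two equal sides":

```python
from itertools import product

# "Two sides are equal" read as an existential: some pair of sides is equal
# (the equilateral case included). Its negation: all three sides differ.
def two_sides_equal(a, b, c):
    return a == b or b == c or a == c

def all_sides_different(a, b, c):
    return a != b and b != c and a != c

# Exactly one of the two statements holds for every triple of side lengths.
for sides in product(range(1, 6), repeat=3):
    assert two_sides_equal(*sides) != all_sides_different(*sides)

print(two_sides_equal(3, 3, 3))      # True: the equilateral case still counts
print(all_sides_different(2, 3, 4))  # True
```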
1,176,615
<p>I am invited to calculate the minimum of the following set:</p> <p>$\big\{ \lfloor xy + \frac{1}{xy} \rfloor \,\Big|\, (x+1)(y+1)=2 ,\, 0&lt;x,y \in \mathbb{R} \big\}$.</p> <p>Is there any idea?</p> <p>(The question changed because there is no maximum for the set (as proved in the following answers) and I assume that the source makes mistake)</p>
Ishfaaq
109,161
<p>This needs further verification. </p> <p><em>I believe that the maximum does not exist since the set is not bounded above.</em> </p> <p>Suppose $x, y \gt 0$ satisfies $ (x + 1)(y + 1) = 2 $. Then we can conclude that </p> <ol> <li>$ xy + x+ y = 1 $</li> <li>$ 1 + \dfrac{1}{x} + \dfrac{1}{y} = \dfrac{1}{xy} $ (dividing by $xy$)</li> </ol> <p>Subtracting equation $2$ from $1$ we can rearrange this to $$ xy + \dfrac{1}{xy} = 2 + \left({\dfrac{1}{x} - x}\right ) + \left({\dfrac{1}{y} - y}\right ) $$ </p> <p>Now notice that the expression $\left({\dfrac{1}{t} - t}\right )$ can be made arbitrarily large by making $t$ arbitrarily small. Hence the floor function $ \lfloor xy + \frac{1}{xy} \rfloor $ is also not bounded above and so the maximum of the given set cannot exist. </p>
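A numerical illustration of this unboundedness (a Python sketch; the sample values of $x$ are arbitrary):

```python
# On the constraint (x + 1)(y + 1) = 2 with x, y > 0 we have
# y = (1 - x) / (1 + x) for 0 < x < 1; watch xy + 1/(xy) blow up as x -> 0+.
def value(x):
    y = (1 - x) / (1 + x)
    p = x * y
    return p + 1 / p

for x in (0.5, 0.1, 0.01, 0.001):
    print(x, value(x))  # grows without bound as x shrinks
```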
451,722
<p>I want to find the projection of the point $M(10,-12,12)$ on the plane $2x-3y+4z-17=0$. The normal of the plane is $N(2,-3,4)$.</p> <p>Do I need to use Gram–Schmidt process? If yes, is this the right formula?</p> <p>$$\frac{N\cdot M}{|N\cdot N|} \cdot N$$</p> <p>What will the result be, vector or scalar?</p> <p>Thanks!</p>
eccstartup
26,947
<p>Set the projection point on the plane as $P=(x,y,z)$.</p> <p>You need three equations:</p> <ol> <li><p>Point $P$ on the plane. $$2x-3y+4z=17$$</p></li> <li><p>$\vec{MP}\perp plane$</p></li> </ol> <p>$$\vec{MP}\perp \vec{PQ_1}$$</p> <p>$$\vec{MP}\perp \vec{PQ_2}$$</p> <p>where $Q_1$ and $Q_2$ are two different points on the plane.</p> <p>Because $\vec{MP}// \vec{N}$, you can use $\vec{N}$ instead of $\vec{MP}$ above.</p>
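For this particular point and plane, solving the three equations gives the same result as the standard projection formula $P = M - \frac{N\cdot M - d}{N\cdot N}N$; a small Python check (the formula-based route is added here for illustration, it is not part of the answer above):

```python
# Project M = (10, -12, 12) onto the plane 2x - 3y + 4z = 17 with normal
# N = (2, -3, 4), via P = M - ((N.M - d) / (N.N)) * N.
N = (2, -3, 4)
M = (10, -12, 12)
d = 17

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
t = (dot(N, M) - d) / dot(N, N)             # (104 - 17) / 29 = 3
P = tuple(m - t * n for m, n in zip(M, N))  # (4.0, -3.0, 0.0)
print(P, dot(N, P))                         # N.P = 17, so P lies on the plane
```

Note that the result $P$ is a point (a vector); the only scalar in the computation is the coefficient $t$ along the normal direction.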
75,880
<p>Say $f:X\rightarrow Y$ and $g:Y\rightarrow X$ are functions where $g\circ f:X\rightarrow X$ is the identity. Which of $f$ and $g$ is onto, and which is one-to-one?</p>
Brian M. Scott
12,042
<p>HINT: $$\begin{array}{}&amp;&amp;\bullet&amp;&amp;\\ &amp;&amp;&amp;\searrow&amp;\\ \bullet&amp;\to&amp;\bullet&amp;\to&amp;\bullet\\ X&amp;f&amp;Y&amp;g&amp;X \end{array}$$</p>