| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,845,448 | <p>I know that $(AB)^T = B^TA^T$ and that $(A^T)^{-1}= (A^{-1})^T$, but I couldn't reach any convincing answer. Can someone demonstrate the expression?</p>
| Community | -1 | <p>You also have the property
$$(AB)^{-1}=B^{-1}A^{-1},$$
so the inverse of $(AB)^{T}$ can be computed as
$$\left[(AB)^{T}\right]^{-1}=\left[B^{T}A^{T}\right]^{-1}=\left[A^{T}\right]^{-1}\left[B^{T}\right]^{-1}=\left[A^{-1}\right]^{T}\left[B^{-1}\right]^{T}$$</p>
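As a sanity check, the identity can be verified on concrete small matrices (a sketch of mine, not part of the original answer; it uses plain lists so no linear-algebra library is assumed):

```python
# Verify [(AB)^T]^{-1} = (A^{-1})^T (B^{-1})^T on concrete 2x2 matrices.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(X):  # transpose
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv(X):  # 2x2 inverse via the adjugate formula
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[2, 1], [1, 1]]   # det = 1
B = [[1, 2], [0, 1]]   # det = 1

lhs = inv(T(mul(A, B)))
rhs = mul(T(inv(A)), T(inv(B)))
print(lhs == rhs)  # True
```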
|
4,429,607 | <blockquote>
<p>Minimum value of <span class="math-container">$M$</span> such that <span class="math-container">$\exists a, b, c \in \mathbb{R}$</span> and
<span class="math-container">$$
\left|4 x^{3}+a x^{2}+b x+c\right| \leq M ,\quad \forall|x| \leq 1
$$</span></p>
</blockquote>
<p>What I considered was that putting <span class="math-container">$x = 0, 1, -1$</span> shows that <span class="math-container">$c$</span> must lie between <span class="math-container">$-M$</span> and <span class="math-container">$M$</span>, and similarly <span class="math-container">$|\pm4 + a \pm b +c| \leq M$</span>; from this I was not able to conclude anything about <span class="math-container">$M$</span>. Or is it possible to solve it by requiring the derivative to be <span class="math-container">$0$</span> at the minima?</p>
<p><strong>Update</strong>: I made a graph to try working from it. One observation: the minimum seems to occur when the lines at <span class="math-container">$M$</span> and <span class="math-container">$-M$</span> touch both the maxima and the minima of the cubic (not sure, though, if this is correct). Does it help? <a href="https://i.stack.imgur.com/U9rzt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U9rzt.jpg" alt="enter image description here" /></a></p>
| dshin | 16,006 | <p>Let <span class="math-container">$P[x]$</span> be the set of all cubic polynomials with leading coefficient <span class="math-container">$4$</span>, and let <span class="math-container">$P_M[x]$</span> be the subset of <span class="math-container">$P[x]$</span> consisting of polynomials <span class="math-container">$p(x)$</span> satisfying the required property that
<span class="math-container">$$-M\leq p(x) \leq +M,$$</span>
for all <span class="math-container">$x\in [-1,+1]$</span>.</p>
<p>Clearly, if <span class="math-container">$p(x) \in P_M[x]$</span>, then also <span class="math-container">$q(x)={p(x)-p(-x)\over 2} \in P_M[x]$</span>, and <span class="math-container">$q$</span> is odd with the same leading coefficient <span class="math-container">$4$</span>. This implies that if <span class="math-container">$P_M[x]$</span> is nonempty, then so is the subset <span class="math-container">$Q_M[x] \subset P_M[x]$</span> of its odd members. Hence we can restrict our attention to polynomials of the form <span class="math-container">$f_b(x)=4x^3+bx$</span> over the range <span class="math-container">$x\in[0,1]$</span>.</p>
<p>In other words, the problem becomes computing</p>
<p><span class="math-container">$$M^* = \min_b \max_{x\in[0,1]}\left|f_b(x)\right|$$</span></p>
<p>where <span class="math-container">$f_b(x)=4x^3+bx$</span>.</p>
<p>Note <span class="math-container">$f'_b(x) = 12x^2+b$</span>. If <span class="math-container">$b\geq0$</span>, then <span class="math-container">$f_b(x)$</span> is nondecreasing over the range <span class="math-container">$[0,1]$</span>. While if <span class="math-container">$b<0$</span>, then <span class="math-container">$f_b(x)$</span> has a single local extremum in the range <span class="math-container">$(0, \infty)$</span> at <span class="math-container">$x=\sqrt{{-b \over 12}}$</span>. This local extremum is a minimum that falls in the range <span class="math-container">$[0,1]$</span> iff <span class="math-container">$-12\leq b\leq 0$</span>.</p>
<p>For <span class="math-container">$b\not \in (-12,0)$</span> and <span class="math-container">$x\in[0,1]$</span>, then, <span class="math-container">$\left|f_b(x)\right|$</span> is maximized at one of the endpoints of the interval <span class="math-container">$[0,1]$</span>, and so we have:</p>
<p><span class="math-container">$$\begin{eqnarray}
\min_{b\not\in(-12,0)} \max_{x\in[0,1]}\left|f_b(x)\right| &=& \min_{b\not\in(-12,0)} \left|f_b(1)\right| \\
&=& \min_{b\not\in(-12,0)} \left|4+b\right| \\
&=& 4
\end{eqnarray}$$</span></p>
<p>For <span class="math-container">$b\in[-12,0]$</span>, on the other hand, <span class="math-container">$\left|f_b(x)\right|$</span> is maximized at either <span class="math-container">$x=\sqrt{{-b \over 12}}$</span> or <span class="math-container">$x=1$</span>, so</p>
<p><span class="math-container">$$\begin{eqnarray}
\min_{b\in[-12,0]} \max_{x\in[0,1]}\left|f_b(x)\right| &=&
\min_{b\in[-12,0]} \max_{x\in\bigl\{\sqrt{{-b \over 12}}, 1\bigr\}}\left|f_b(x)\right| \\
&=& \min_{b\in[-12,0]} \max\left(\left|\sqrt{{-b^3 \over 27}}\right|, \left|4+b\right|\right)
\end{eqnarray}$$</span></p>
<p>The <span class="math-container">$\max$</span> is minimized at <span class="math-container">$b=-3$</span>; this can be seen by solving the equation</p>
<p><span class="math-container">$$\left|\sqrt{{-b^3 \over 27}}\right| = \left|4+b\right|,$$</span></p>
<p>which, after squaring both sides, yields a cubic equation; factoring it as <span class="math-container">$(b+3)(b+12)^2=0$</span> shows that the relevant root is <span class="math-container">$b=-3$</span>.</p>
<p>The final answer then is <span class="math-container">$M^*=1$</span>, attained by the cubic polynomial <span class="math-container">$4x^3-3x$</span>.</p>
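The min–max value above can also be checked numerically (a rough grid-search sketch of mine, not part of the original answer):

```python
# Grid-search approximation of M* = min_b max_{x in [0,1]} |4x^3 + b x|.
def inner_max(b, steps=1001):
    # max of |4x^3 + b x| over an evenly spaced grid on [0, 1]
    return max(abs(4 * (i / (steps - 1)) ** 3 + b * (i / (steps - 1)))
               for i in range(steps))

# scan b over [-12, 0] in steps of 0.01
best_b, best_m = min(((b / 100, inner_max(b / 100))
                      for b in range(-1200, 1)), key=lambda t: t[1])
print(best_b, round(best_m, 3))  # expect b near -3 and M* near 1
```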
|
4,429,607 | <blockquote>
<p>Minimum value of <span class="math-container">$M$</span> such that <span class="math-container">$\exists a, b, c \in \mathbb{R}$</span> and
<span class="math-container">$$
\left|4 x^{3}+a x^{2}+b x+c\right| \leq M ,\quad \forall|x| \leq 1
$$</span></p>
</blockquote>
<p>What I considered was that putting <span class="math-container">$x = 0, 1, -1$</span> shows that <span class="math-container">$c$</span> must lie between <span class="math-container">$-M$</span> and <span class="math-container">$M$</span>, and similarly <span class="math-container">$|\pm4 + a \pm b +c| \leq M$</span>; from this I was not able to conclude anything about <span class="math-container">$M$</span>. Or is it possible to solve it by requiring the derivative to be <span class="math-container">$0$</span> at the minima?</p>
<p><strong>Update</strong>: I made a graph to try working from it. One observation: the minimum seems to occur when the lines at <span class="math-container">$M$</span> and <span class="math-container">$-M$</span> touch both the maxima and the minima of the cubic (not sure, though, if this is correct). Does it help? <a href="https://i.stack.imgur.com/U9rzt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U9rzt.jpg" alt="enter image description here" /></a></p>
| River Li | 584,414 | <p><strong>Remarks</strong>: It is related to the Chebyshev polynomials of the first kind. I used it before e.g. in <a href="https://math.stackexchange.com/questions/3940136/minimizing-the-maximum-of-x2-x-k/3949211">Minimizing the maximum of $|x^2 - x - k|$</a></p>
<hr>
<p>The condition is written as
<span class="math-container">$$|x^3 + (a/4)x^2 + (b/4)x + c/4| \le M/4, \quad \forall -1 \le x \le 1.$$</span></p>
<p>Recall the well-known result:</p>
<p>For any given <span class="math-container">$n \ge 1$</span>, among the polynomials of degree <span class="math-container">$n$</span> with leading coefficient <span class="math-container">$1$</span>,
<span class="math-container">$$f(x) = \frac{1}{2^{n - 1}}T_n(x)$$</span>
is the one whose maximal absolute value on the interval <span class="math-container">$[-1, 1]$</span> is minimal,
where <span class="math-container">$T_n(x)$</span> is the Chebyshev polynomial of the first kind.
This maximal absolute value is <span class="math-container">$\frac{1}{2^{n - 1}}$</span>, and <span class="math-container">$|f(x)|$</span> reaches this maximum exactly <span class="math-container">$n + 1$</span> times, at
<span class="math-container">$$x = \cos \frac{k\pi}{n}, \quad 0 \le k \le n.$$</span>
See the section "Minimal <span class="math-container">$\infty$</span>-norm" at <a href="https://en.wikipedia.org/wiki/Chebyshev_polynomials" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Chebyshev_polynomials</a>, or <a href="https://handwiki.org/wiki/Chebyshev_polynomials" rel="nofollow noreferrer">https://handwiki.org/wiki/Chebyshev_polynomials</a></p>
<p>Here, <span class="math-container">$n = 3$</span>, <span class="math-container">$T_3(x) = 4x^3 - 3x$</span>,
and <span class="math-container">$\frac{M^\ast}{4} = \frac{1}{2^2}$</span> i.e. <span class="math-container">$M^\ast = 1$</span>.</p>
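The equioscillation property for n = 3 can be illustrated numerically (my own sketch, not part of the original answer):

```python
import math

def f(x):
    # T_3(x) / 2^{n-1} with n = 3, i.e. (4x^3 - 3x) / 4
    return (4 * x**3 - 3 * x) / 4

# |f| stays within 1/4 on a fine grid of [-1, 1] ...
grid_max = max(abs(f(-1 + 2 * i / 4000)) for i in range(4001))

# ... and the bound is attained, with alternating signs, at the
# n + 1 = 4 points x = cos(k*pi/3), k = 0..3
extrema = [f(math.cos(k * math.pi / 3)) for k in range(4)]
print(round(grid_max, 6), [round(v, 6) for v in extrema])
# 0.25 [0.25, -0.25, 0.25, -0.25]
```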
|
3,843,352 | <p>It’s called a Diophantine equation, and this particular one is sometimes known as the “sum of three cubes” problem.</p>
<blockquote>
<p>A Diophantine equation is a polynomial equation, usually in two or
more unknowns, such that only the integer solutions are sought or
studied (an integer solution is such that all the unknowns take
integer values).</p>
</blockquote>
<p>It seems easy for <span class="math-container">$x^3+y^3+z^3=8$</span>.
<span class="math-container">$x=1$</span>, <span class="math-container">$y=-1$</span> and <span class="math-container">$z=2.$</span></p>
<p>But what for higher values of <span class="math-container">$k$</span>?</p>
| poetasis | 546,655 | <p>This is the low-tech approach I took for this question. Note the optional output file in lines 140 and 220. The user may copy/paste or import the data into a spreadsheet for sorting as I did for the other answer.</p>
<pre><code> 100 print "Enter limit";
110 input l1
120 print time$
130 print
140 rem open "outfile.txt" for output as #1
150 for n1 = -l1 to l1
160 for n2 = n1 to l1
170 for n3 = n2 to l1
180 t0 = n1^3+n2^3+n3^3
190 for t9 = 1 to l1
200 if t0 = t9
210 print " " n1 n2 n3 t9 "."
220 rem print #1, n1 n2 n3 t9 "."
230 endif
240 next t9
250 next n3
260 next n2
270 rem The following IF/ELSE skips zero
280 rem for the next iteration of n1
290 if n1 = -1
300 n1 = 1
310 goto 160
320 endif
330 next n1
340 print time$
</code></pre>
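For comparison, the same brute-force search can be sketched in Python (my own port, not the original author's code; variable names mirror the BASIC program, and its zero-skipping step is omitted):

```python
# Brute-force search for x^3 + y^3 + z^3 = k with sorted triples
# x <= y <= z in [-limit, limit] and targets 1 <= k <= limit,
# mirroring the nested loops of the BASIC program above.
def three_cubes(limit):
    hits = []
    for n1 in range(-limit, limit + 1):
        for n2 in range(n1, limit + 1):
            for n3 in range(n2, limit + 1):
                t0 = n1**3 + n2**3 + n3**3
                if 1 <= t0 <= limit:
                    hits.append((n1, n2, n3, t0))
    return hits

# the example from the question, x^3 + y^3 + z^3 = 8,
# shows up as the sorted triple (-1, 1, 2)
print((-1, 1, 2, 8) in three_cubes(8))  # True
```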
|
3,455,967 | <p>Solve: (Hint: use <span class="math-container">$x\ln y=t$</span>)<span class="math-container">$$(xy+2xy\ln^2y+y\ln y)\text{d}x+(2x^2\ln y+x)\text{d}y=0$$</span></p>
<p>My Work:</p>
<p><span class="math-container">$$x\ln y=t, \text{ d}t=\ln y \text{ d}x+\frac{x}{y} \text{ d}y$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y+\ln y)\text{ d}x+\left(\frac{2x^2}{y}\ln y+\frac{x}{y}\right)\text{d}y=0$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y)\text{ d}x+\left(\frac{2x^2}{y}\ln y\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$(x+2t\ln y)\text{ d}x+\left(\frac{2x}{y}t\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$\left(\frac{x}{t}+2\ln y\right)\text{d}x+\left(2\frac{x}{y}\right)\text{d}y+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x+2\text{ d}t+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x=\left(-2-\frac{1}{t}\right)\text{d}t$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-t^2-t+c$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-x^2\ln^2y-x\ln y+c$$</span></p>
<p><strong>1.</strong> Is my answer correct? </p>
<p><strong>2.</strong> How could we recognize that we should use <span class="math-container">$x\ln y=t$</span> if the question didn't hint at it? </p>
<p><strong>3.</strong> The whole route I took to solve this equation felt strange to me (for example, we had <span class="math-container">$dx,dy,dt$</span> together in line 3), and I had not seen this way of solving differential equations before. Is there any other way to simplify it?</p>
| Maverick | 171,392 | <p>I am using the product rule of differentiation in reverse</p>
<p>Divide throughout by <span class="math-container">$y$</span></p>
<p><span class="math-container">$$\left(x+2x\ln^2 y+\ln y\right)dx+\left(2x^2\frac{\ln y}{y}+\frac{x}{y}\right)dy=0$$</span></p>
<p><span class="math-container">$$x\,dx+\ln^2 y\,d(x^2)+x^2\,d(\ln^2 y)+\ln y\,d(x)+x\,d(\ln y)=0$$</span></p>
<p><span class="math-container">$$x\,dx+d(x^2\ln^2 y)+d(x\ln y)=0$$</span>
Integrating,
<span class="math-container">$$\frac{x^2}{2}+x^2\ln^2 y+x\ln y=c$$</span></p>
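One way to confirm this implicit solution is to check numerically that the gradient of F(x, y) = x²/2 + x² ln²y + x ln y reproduces the dx and dy coefficients of the equation after dividing by y (a finite-difference sketch of mine, not part of the original answer):

```python
import math

def F(x, y):  # candidate implicit solution (constant along solutions)
    return x**2 / 2 + x**2 * math.log(y)**2 + x * math.log(y)

def M(x, y):  # coefficient of dx after dividing the equation by y
    return x + 2 * x * math.log(y)**2 + math.log(y)

def N(x, y):  # coefficient of dy after dividing the equation by y
    return 2 * x**2 * math.log(y) / y + x / y

h = 1e-6
for (x, y) in [(0.7, 1.3), (1.5, 2.0), (2.0, 0.5)]:
    dFdx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    dFdy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    assert abs(dFdx - M(x, y)) < 1e-6 and abs(dFdy - N(x, y)) < 1e-6
print("gradient of F matches (M, N)")
```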
|
3,455,967 | <p>Solve: (Hint: use <span class="math-container">$x\ln y=t$</span>)<span class="math-container">$$(xy+2xy\ln^2y+y\ln y)\text{d}x+(2x^2\ln y+x)\text{d}y=0$$</span></p>
<p>My Work:</p>
<p><span class="math-container">$$x\ln y=t, \text{ d}t=\ln y \text{ d}x+\frac{x}{y} \text{ d}y$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y+\ln y)\text{ d}x+\left(\frac{2x^2}{y}\ln y+\frac{x}{y}\right)\text{d}y=0$$</span></p>
<p><span class="math-container">$$(x+2x\ln^2y)\text{ d}x+\left(\frac{2x^2}{y}\ln y\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$(x+2t\ln y)\text{ d}x+\left(\frac{2x}{y}t\right)\text{d}y+\text{d}t=0$$</span></p>
<p><span class="math-container">$$\left(\frac{x}{t}+2\ln y\right)\text{d}x+\left(2\frac{x}{y}\right)\text{d}y+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x+2\text{ d}t+\frac{\text{d}t}{t}=0$$</span></p>
<p><span class="math-container">$$\frac{x}{t}\text{d}x=\left(-2-\frac{1}{t}\right)\text{d}t$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-t^2-t+c$$</span></p>
<p><span class="math-container">$$\frac{x^2}{2}=-x^2\ln^2y-x\ln y+c$$</span></p>
<p><strong>1.</strong> Is my answer correct? </p>
<p><strong>2.</strong> How could we recognize that we should use <span class="math-container">$x\ln y=t$</span> if the question didn't hint at it? </p>
<p><strong>3.</strong> The whole route I took to solve this equation felt strange to me (for example, we had <span class="math-container">$dx,dy,dt$</span> together in line 3), and I had not seen this way of solving differential equations before. Is there any other way to simplify it?</p>
| user577215664 | 475,762 | <p><span class="math-container">$$(xy+2xy\ln^2 y+y\ln y)dx+(2x^2\ln y+x)dy=0$$</span>
<span class="math-container">$$(x+2x\ln^2 y+\ln y )+(2x^2\ln y+x)(\ln y)'=0$$</span>
<em>2. How could we recognize that we should use <span class="math-container">$x\ln y=t$</span> if the question didn't hint at it?</em></p>
<p>Only <span class="math-container">$\ln y $</span> terms in the equation. Substitute <span class="math-container">$z=\ln y$</span>
<span class="math-container">$$(x+2xz^2+z)+(2x^2z+x)z'=0$$</span>
<span class="math-container">$$(x+2xz^2+z)+x^2(z^2)' +xz'=0$$</span>
<span class="math-container">$$ x +(x^2z^2)' +(xz)'=0$$</span>
Integrate and substitute back:
<span class="math-container">$$ \frac 12x^2 +x^2z^2 +xz=K$$</span>
<span class="math-container">$$ \frac 12x^2 +x^2\ln^2y +x\ln y =K$$</span> </p>
|
4,093,406 | <p>Which of the equations have at least two real roots?
<span class="math-container">\begin{aligned}
x^4-5x^2-36 & = 0 & (1) \\
x^4-13x^2+36 & = 0 & (2) \\
4x^4-10x^2+25 & = 0 & (3)
\end{aligned}</span>
I wasn't able to notice something clever, so I solved each of the equations. The first one has <span class="math-container">$2$</span> real roots, the second one <span class="math-container">$4$</span> real roots and the last one does not have real roots. I am pretty sure that the idea behind the problem wasn't solving each of the equations. What can we note to help us solve it faster?</p>
| Gregory | 197,701 | <p>Descartes' rule of signs says that:</p>
<ul>
<li>exactly 1 positive and 1 negative real root for the first one.</li>
<li>2 or 0 positive and 2 or 0 negative for the second and third.</li>
</ul>
<p>Thus you already know that the first one has two real roots. For the second and third, you can use the discriminant of the quadratic in <span class="math-container">$x^2$</span> to see that the third has no real roots and the second will have 2 or 4.</p>
<p><strong>EDIT:</strong> Since you're just interested in whether there are 2 or more roots, the discriminant being positive is enough (as saulspatz pointed out), provided at least one root of the quadratic in <span class="math-container">$x^2$</span> is nonnegative.</p>
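The sign-change counting can be sketched in a few lines (my own illustration of Descartes' rule, not part of the original answer):

```python
def sign_changes(coeffs):
    # count sign changes in a coefficient list, skipping zeros
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# coefficients of x^4, x^3, x^2, x^1, x^0 for equations (1)-(3)
polys = [[1, 0, -5, 0, -36], [1, 0, -13, 0, 36], [4, 0, -10, 0, 25]]

results = []
for p in polys:
    pos = sign_changes(p)  # bound on the number of positive roots
    neg = sign_changes([c * (-1) ** i for i, c in enumerate(p)])  # p(-x)
    results.append((pos, neg))
print(results)  # [(1, 1), (2, 2), (2, 2)]
```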
|
4,093,406 | <p>Which of the equations have at least two real roots?
<span class="math-container">\begin{aligned}
x^4-5x^2-36 & = 0 & (1) \\
x^4-13x^2+36 & = 0 & (2) \\
4x^4-10x^2+25 & = 0 & (3)
\end{aligned}</span>
I wasn't able to notice something clever, so I solved each of the equations. The first one has <span class="math-container">$2$</span> real roots, the second one <span class="math-container">$4$</span> real roots and the last one does not have real roots. I am pretty sure that the idea behind the problem wasn't solving each of the equations. What can we note to help us solve it faster?</p>
| Aditya Singh | 1,012,095 | <p>All you need to do is</p>
<p>let <span class="math-container">$x^2=a$</span></p>
<p>The above equations then become</p>
<p><span class="math-container">$a^2-5a-36=0$</span></p>
<p><span class="math-container">$a^2-13a+36=0$</span></p>
<p><span class="math-container">$4a^2-10a+25=0$</span></p>
<p>Now calculate <span class="math-container">$D=b^2-4ac$</span> for each quadratic. Since we assumed <span class="math-container">$x^2=a$</span>, each positive root <span class="math-container">$a$</span> gives two real values of <span class="math-container">$x$</span>, a root <span class="math-container">$a=0$</span> gives one, and a negative root gives none.</p>
<p>The first has <span class="math-container">$D=169>0$</span> with roots <span class="math-container">$9$</span> and <span class="math-container">$-4$</span>, hence <span class="math-container">$2$</span> real roots.</p>
<p>The second has roots <span class="math-container">$9$</span> and <span class="math-container">$4$</span>, hence <span class="math-container">$4$</span> real roots.</p>
<p>The third has <span class="math-container">$D=100-400<0$</span>, hence no real roots.</p>
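A short check of this bookkeeping (my own sketch, not part of the original answer): solve each quadratic in a and keep only the nonnegative roots, since only those give real x.

```python
import math

def real_root_count(p, q, r):
    # number of real x with p*x^4 + q*x^2 + r = 0, via a = x^2
    D = q * q - 4 * p * r
    if D < 0:
        return 0
    count = 0
    for a in {(-q + s * math.sqrt(D)) / (2 * p) for s in (1, -1)}:
        if a > 0:
            count += 2        # x = +sqrt(a) and x = -sqrt(a)
        elif a == 0:
            count += 1        # x = 0
    return count

print(real_root_count(1, -5, -36),   # 2
      real_root_count(1, -13, 36),   # 4
      real_root_count(4, -10, 25))   # 0
```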
|
2,707,299 | <p>Suppose that the conditional distribution of $Y$ given $X=x$ is Poisson with mean $E(Y|x)=x$, $Y|x\sim POI(x)$, and that $X\sim EXP(1)$</p>
<p>Find $V[Y]$</p>
<p>We know that</p>
<p>$$V[Y]=E_x[Var(Y|X)] + Var_x[E(Y|X)]$$</p>
<p>we also know that </p>
<p>$$V(Y|x)=E(Y^2|x)-[E(Y|x)]^2$$</p>
<p>I know we can find $V[Y]$ with those 2 formulas, but I am getting confused with so many expectances and variances around each other. Can anyone explain me how to solve this?</p>
| zoli | 203,663 | <p>If the conditional distribution of $Y$ given $X$ is Poisson with parameter $X$ then the conditional expectation is also $X$. That is, if $X$ is of exponential distribution with parameter one then</p>
<p>$$E[Y]=\int_0^{\infty}E[Y\mid X=x\,]e^{-x}\ dx=\int_0^{\infty}xe^{-x}\ dx=E[X]=1.$$</p>
<p>Also,</p>
<p>$$E[Y^2]=\int_0^{\infty}E[Y^2\mid X=x\,]e^{-x}\ dx=\int_0^{\infty}(x^2+x)e^{-x}\ dx$$
because the <a href="https://www.wolframalpha.com/input/?i=second+moment+of+the+Poisson+distribution" rel="nofollow noreferrer">second moment of the Poisson distribution</a> is the square of the parameter plus the parameter.</p>
<p>Now,
$$\int_0^{\infty}(x^2+x)e^{-x}\ dx=\int_0^{\infty}x^2e^{-x}\ dx+\int_0^{\infty}xe^{-x}\ dx=E[X^2]+E[X]=2+1=3$$
because <a href="https://www.wolframalpha.com/input/?i=second+moment+of+the+exponential+distribution" rel="nofollow noreferrer">the second moment of the exponential distribution of parameter $1$</a> is $2$.</p>
<p>The variance of $Y$ is then $$E[Y^2]-E^2[Y]=3-1=2.$$</p>
<hr>
<p>The second moments of the distributions above can be calculated from the corresponding MGFs, which can be found on Wikipedia. If you have the MGF, take the second derivative and substitute $0$.</p>
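A quick Monte Carlo simulation supports $V[Y]=2$ (my own sketch, not part of the original answer; it uses Knuth's Poisson sampler, which is adequate for the small means that arise here):

```python
import math
import random

# Monte Carlo check of V[Y] = 2 for X ~ Exp(1), Y | X = x ~ Poisson(x).
random.seed(0)

def poisson(lam):
    # Knuth's multiplication method for Poisson sampling
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

n = 200_000
ys = [poisson(random.expovariate(1.0)) for _ in range(n)]
mean = sum(ys) / n
var = sum((v - mean) ** 2 for v in ys) / n
print(round(mean, 2), round(var, 2))  # close to E[Y] = 1 and V[Y] = 2
```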
|
2,078,264 | <p>I've been see the following question on group theory:</p>
<p>Let $p$ be a prime, and let $G = SL2(p)$ be the group of $2 \times 2$ matrices of determinant $1$ with entries in the field $F(p)$ of integers $\mod p$.</p>
<p>(i) Define the action of $G$ on $X = F(p) \cup \{ \infty \}$ by Mobius transformations. [You need not show that it is a group action.] </p>
<p>State the orbit-stabiliser theorem. Determine the orbit of $\infty$ and the stabiliser of $\infty$. Hence compute the order of $SL2(p)$.</p>
<p>I know that matrices correspond to Mobius maps, but not how the action of a Mobius map can be used to define the action of a matrix (I don't really know what this part means, to be honest).
I tried the next part, but wasn't sure whether to consider the vector $(\infty,\infty)$, $(\infty,a)$ or $(b,\infty)$ $(a,b \in F(p))$.
Any help would be greatly appreciated!</p>
<p>(Sorry the question title isn't very related to the question, I just didn't know what to put specifically!)</p>
| Andreas Caranti | 58,401 | <p>$\newcommand{\Set}[1]{\left\{ #1 \right\}}$
You have $$f(z) = \frac{a z + b}{c z + d},$$ and
$$f(\infty) = \frac{a \infty + b}{c \infty + d} = \frac{a}{c}.$$</p>
<p>This is best understood by letting the group act on the projective line
$$
\mathcal{P} = \Set{[x, y] : x, y \in F(p), (x, y) \ne (0, 0)},
$$
by
$$
f([x, y]) = [a x + b y, c x + d y],
$$
where $[x, y] = \Set{\lambda (x, y) : \lambda \in F(p)^{*} }$. In this notation, $\infty = [1, 0]$.</p>
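The orbit–stabiliser count — orbit of $\infty$ of size $p+1$, stabiliser of size $p(p-1)$, hence $|SL_2(p)| = p(p^2-1)$ — can be checked by direct enumeration for small primes (my own sketch, not part of the original answer):

```python
from itertools import product

def sl2_order(p):
    # count 2x2 matrices over F_p with determinant 1
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p == 1)

def stabiliser_of_infinity(p):
    # matrices sending [1, 0] to [a, c] = [a, 0], i.e. with c = 0
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if c == 0 and (a * d) % p == 1)

for p in (2, 3, 5):
    print(p, sl2_order(p), p * (p * p - 1),
          stabiliser_of_infinity(p), p * (p - 1))
# each row shows |SL_2(p)| = p(p^2 - 1) and |Stab(infinity)| = p(p - 1)
```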
|
3,007,664 | <p>I can't find the right approach to tackle the question whether
<span class="math-container">$$\lim_{N\to\infty} \sum_{n=1}^N \exp\Bigg(-N \sin^2\left(\frac{n\pi}{2N}\right)\Bigg)$$</span>
and
<span class="math-container">$$\lim_{N\to\infty} \sum_{n=1}^N \exp\Biggl(-\sin^2\left(\frac{n\pi}{2N}\right)\Biggr)$$</span>
converge or diverge. The fact that the limiting variable appears both as the upper bound of summation as well as in the individual summands seems to make the standard methods known to me inapplicable.</p>
<p>I suspect that the second limit (i.e. the one not containing <span class="math-container">$N$</span> in the exponent directly) does not exist, but that the first one may. I would be very grateful if you could point me to methods that allow one to determine the existence of the limits.</p>
<p>If a limit exists, I would also be very interested in understanding how, if at all, one could (approximately) replace the sum with an integral.</p>
<p>(For background, these questions have arisen during my study of the Rouse theory of polymer dynamics, e.g. in chapter 7.3.2. of Doi and Edwards, "The Theory of Polymer Dynamics". Physical explanations of how one can justify the treatment therein would be very welcome, too.)</p>
<p>Thank you in advance!</p>
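A quick numerical experiment (my own exploration, not an answer from the thread) suggests that neither limit exists: the first sum appears to grow roughly like the square root of N, while every term of the second is at least 1/e, so the second sum grows at least linearly.

```python
import math

def S1(N):  # the sum with N inside the exponent
    return sum(math.exp(-N * math.sin(n * math.pi / (2 * N)) ** 2)
               for n in range(1, N + 1))

def S2(N):  # the sum without N inside the exponent
    return sum(math.exp(-math.sin(n * math.pi / (2 * N)) ** 2)
               for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, round(S1(N), 3), round(S2(N) / N, 3))
# S1(N) keeps growing (for n << N each term is about exp(-pi^2 n^2 / 4N),
# so the sum behaves like a Gaussian sum of width ~ sqrt(N)), while every
# term of S2 is at least 1/e, so S2(N) >= N/e and S2(N)/N stays bounded.
```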
| Myunghyun Song | 609,441 | <p>This is only a partial result (EDIT: full result is given below), but I'll post it anyway. Let us write <span class="math-container">$(x,y)$</span> if <span class="math-container">$y = f(x)$</span> holds. Since <span class="math-container">$f(f(x)) = x$</span>, <span class="math-container">$(x,y)$</span> is equivalent to <span class="math-container">$(y,x)$</span> and <span class="math-container">$(x,y)\wedge (x,y')$</span> implies <span class="math-container">$y= y'$</span>. What we can directly observe is that <span class="math-container">$0= f(f(0)) = f(0)$</span> by letting <span class="math-container">$x=y=0$</span>. What is less trivial is <span class="math-container">$f(1) = 1$</span>. Let <span class="math-container">$\gamma =f(1)$</span>. Then by letting <span class="math-container">$y = 1$</span> and <span class="math-container">$y= \gamma$</span>, we get
<span class="math-container">$$
(x+ \gamma + f(x),1+ f(x) +\gamma x)\wedge \;(x + 1+\gamma f(x), \gamma + f(x) +x).$$</span>
Hence, <span class="math-container">$1+ f(x) +\gamma x = x + 1+\gamma f(x)$</span> for all <span class="math-container">$x$</span>; if <span class="math-container">$\gamma \neq 1$</span>, this forces <span class="math-container">$f(x) = x$</span> for all <span class="math-container">$x$</span>, contradicting <span class="math-container">$f(1)=\gamma\neq 1$</span>.<br>
We next show that <span class="math-container">$$(-\frac{f(x) - f(x')}{x-x'}, -\frac{x-x'}{f(x) - f(x')}),\quad \forall x\neq x'\;\cdots(*).$$</span> Assume <span class="math-container">$(a,b)\wedge (x,y)\wedge (x',y').$</span> Then, by the FE,
<span class="math-container">$$
(a+x+by, b+y +ax)\wedge (a+x'+by', b+y' +ax').
$$</span> We can equate <span class="math-container">$b+y +ax$</span> and <span class="math-container">$ b+y' +ax'$</span> by letting <span class="math-container">$a= -\frac{y-y'}{x-x'}$</span>. Then <span class="math-container">$b$</span> should satisfy <span class="math-container">$a+x+by = a+x'+by'$</span>, that is, <span class="math-container">$ b= -\frac{x-x'}{y-y'}.$</span> Because <span class="math-container">$y = f(x), y' = f(x')$</span>, this proves the claim.<br>
From the previous claim, we know that <span class="math-container">$(-1,-1)$</span>, i.e. <span class="math-container">$f(-1) = -1$</span>. By letting <span class="math-container">$x' = 0, - 1$</span>, we also know that <span class="math-container">$(x,y)$</span> implies
<span class="math-container">$$(-\frac{y}{x}, -\frac{x}{y})\wedge (-\frac{y+ 1}{x + 1}, -\frac{x+ 1}{y + 1}).
$$</span>If we write <span class="math-container">$-\frac{y-y'}{x-x'}= k$</span>, then <span class="math-container">$(-\frac{1/k+ 1}{k + 1}, -\frac{k+ 1}{1/k + 1}) = (-\frac{1}{k}, -k)$</span> also shows that <span class="math-container">$(\frac{y-y'}{x-x'}, \frac{x-x'}{y-y'})$</span> whenever <span class="math-container">$x\neq x'$</span>. Next, by putting <span class="math-container">$y=-1$</span> in the original functional equation, we also have <span class="math-container">$(x,y)$</span> implies
<span class="math-container">$$(x-y-1, y-x-1).$$</span> Iterating this <span class="math-container">$n$</span> times we also have
<span class="math-container">$$(2^n(x-y)-1, 2^n(y-x)-1) \;\cdots (**).
$$</span><br>
Our next claim is that if <span class="math-container">$(x,x) \vee (x, x+2^{1-N})$</span>, then <span class="math-container">$(x+ 2^{-N}, x+2^{-N})$</span> (<span class="math-container">$N\geq 0$</span>.) To show this, let us write <span class="math-container">$(x+ 2^{-N}, \alpha)$</span> and derive an equation about <span class="math-container">$\alpha$</span>. If <span class="math-container">$(x,x)$</span>, <span class="math-container">$(*)$</span> implies <span class="math-container">$( 2^N(x-\alpha ), \frac{1}{2^N(x-\alpha )})$</span>. By applying <span class="math-container">$(**)$</span> to <span class="math-container">$(x+ 2^{-N}, \alpha)$</span> we also have <span class="math-container">$(2^N(x-\alpha), 2^N(\alpha-x)-2)$</span>. Hence, <span class="math-container">$2^N(\alpha-x)-2 = \frac{1}{2^N(x-\alpha )}$</span> and this implies <span class="math-container">$2^N(x-\alpha )=-1$</span> as desired. In case <span class="math-container">$(x, x+2^{1-N})$</span> can be dealt with similarly by noting that <span class="math-container">$(-2^N(x-\alpha )-2, - \frac{1}{2^N(x-\alpha )+2})$</span>.<br>
A very similar argument can also prove that
<span class="math-container">$$(x,x) \vee (x, x-2^{1-N})\Rightarrow (x-2^{-N}, x-2^{-N}).$$</span>
So far, all the tedious arguments show that <span class="math-container">$f(x) = x$</span> holds for every dyadic rationals <span class="math-container">$x=\frac{j}{2^n}$</span>, that is, the set <span class="math-container">$F$</span> of fixed points of <span class="math-container">$f$</span> contains every dyadic rational. The following are some other facts about <span class="math-container">$F$</span>: <br>(i) If <span class="math-container">$(x,y)$</span>, then <span class="math-container">$x+y+xy$</span> and <span class="math-container">$x+y+1$</span> belongs to <span class="math-container">$F$</span>.<br>
(ii) If <span class="math-container">$x,y \in F$</span>, then <span class="math-container">$x+y+xy \in F$</span>.<br>
(iii) If <span class="math-container">$x \in F$</span>, then <span class="math-container">$x\pm \frac{j}{2^n}\in F$</span>. (Above claim)<br>
And for some <span class="math-container">$(x,f(x))$</span> such that <span class="math-container">$x\notin F$</span>, this generates so many non-fixed pairs
<span class="math-container">$$(\pm \left[\frac{f(x) - q}{x-q}\right]^r,\pm \left[\frac{x- q}{f(x)-q}\right]^r), \quad (x+q+qf(x),f(x)+q+qx)_{q \neq 1},$$</span> for all dyadic rational <span class="math-container">$q$</span> and <span class="math-container">$r\in \mathbf{N}$</span>.<br>I tried but failed to get further ideas about how <span class="math-container">$F$</span> and <span class="math-container">$\mathbf{R}\setminus F$</span> looks like. But I wish this helps somehow.</p>
<p>(Note: Actually in the above proposition, <span class="math-container">$(x, x\pm 2^{1-N})$</span> cannot occur since their difference cannot be in <span class="math-container">$F$</span>.)</p>
<hr>
<p>Following yesterday's post, I've completed the proof: <span class="math-container">$f = id$</span> is the only solution of the equation.<br></p>
<p>I'll briefly review some facts already proved.<br>
(i) <span class="math-container">$F + \mathbf{Q}_{dyad} = F.$</span><br>
(ii) If <span class="math-container">$x,y \in F$</span>, then <span class="math-container">$xy \in F$</span>, or equivalently, <span class="math-container">$F\cdot F = F$</span>. (Since <span class="math-container">$x-1, y-1 \in F$</span> implies <span class="math-container">$xy -1\in F$</span> and thus <span class="math-container">$xy \in F$</span>.)<br>
(iii) If <span class="math-container">$(x,y)$</span>, then <span class="math-container">$x+y, \;xy + x+ y \in F$</span>.<br></p>
<p>Our claim starts with: If <span class="math-container">$0\neq x \in F$</span>, then <span class="math-container">$\frac{1}{x} \in F$</span>. Proof is simple. Let <span class="math-container">$(\frac{1}{x},\gamma).$</span> Then <span class="math-container">$\gamma +\frac{1}{x}\in F$</span>. Since <span class="math-container">$x\in F$</span>, <span class="math-container">$\gamma x + 1 \in F$</span>, and <span class="math-container">$\gamma x \in F$</span>. Note that <span class="math-container">$(\gamma x, \frac{1}{\gamma x}).$</span>(slope of <span class="math-container">$(\frac{1}{x},\gamma),(0,0).$</span>) This shows <span class="math-container">$\gamma = \pm \frac{1}{x}.$</span> If it were that <span class="math-container">$(\frac{1}{x},-\frac{1}{x})$</span>, then <span class="math-container">$\frac{1}{x^2} \in F$</span>. This implies that <span class="math-container">$x\cdot\frac{1}{x^2}=\frac{1}{x} \in F$</span>, as desired.<br></p>
<p>Our next claim is that <span class="math-container">$(x,y)$</span> implies <span class="math-container">$xy \in F$</span>. We start from <span class="math-container">$x+y, \;x+y+xy \in F$</span>. If <span class="math-container">$x+y =0$</span>, it's already done. Otherwise, <span class="math-container">$\frac{x+y+xy}{x+y} = 1+ \frac{xy}{x+y} \in F$</span>, hence <span class="math-container">$\frac{xy}{x+y}\in F$</span>. Then this implies <span class="math-container">$$(x+y)\cdot \frac{xy}{x+y} = xy \in F.$$</span><br></p>
<p>Our almost final claim is that <span class="math-container">$0< y \in F$</span> implies <span class="math-container">$\sqrt{y} \in F$</span>. Suppose <span class="math-container">$(\sqrt[4]{y}, \gamma)$</span>. Then <span class="math-container">$\gamma^2 \sqrt{y} \in F$</span>. Since <span class="math-container">$\frac{1}{y} \in F$</span>, we have <span class="math-container">$\frac{\gamma^2}{\sqrt{y}} \in F.$</span> Notice that <span class="math-container">$(\frac{\gamma}{\sqrt[4]{y}}, \frac{\sqrt[4]{y}}{\gamma})$</span> and hence <span class="math-container">$(\frac{\gamma^2}{\sqrt{y}}, \frac{\sqrt{y}}{\gamma^2})$</span> Thus we have <span class="math-container">$\gamma^4 = y$</span>, <span class="math-container">$\gamma = \pm \sqrt[4]{y}$</span>. Suppose <span class="math-container">$\gamma = \sqrt[4]{y}$</span>. Then, <span class="math-container">$\sqrt[4]{y} \in F$</span> implies <span class="math-container">$\sqrt{y} \in F$</span>. Otherwise, <span class="math-container">$(\sqrt[4]{y}, -\sqrt[4]{y})$</span> implies <span class="math-container">$-\sqrt{y} \in F$</span>. Thus <span class="math-container">$\sqrt{y} \in F.$</span><br></p>
<p>Finally we are ready to prove that <span class="math-container">$(x,y)$</span> implies <span class="math-container">$ x-y=d =0$</span>. Assume to the contrary that <span class="math-container">$d>0$</span>. Then, as I showed in yesterday's post, <span class="math-container">$(d-1, -d-1).$</span> But this implies <span class="math-container">$(-1+d)\cdot(-1-d) = 1-d^2 \in F$</span>, and thus <span class="math-container">$d^2 \in F.$</span> So <span class="math-container">$d$</span> must be in <span class="math-container">$F$</span> and <span class="math-container">$d-1$</span> also. This leads to <span class="math-container">$d-1 = -d-1$</span>, contradicting <span class="math-container">$d>0$</span>. So, the only solution of the functional equation is <span class="math-container">$f(x) = x$</span>!</p>
|
3,007,664 | <p>I can't find the right approach to tackle the question whether
<span class="math-container">$$\lim_{N\to\infty} \sum_{n=1}^N \exp\Bigg(-N \sin^2\left(\frac{n\pi}{2N}\right)\Bigg)$$</span>
and
<span class="math-container">$$\lim_{N\to\infty} \sum_{n=1}^N \exp\Biggl(-\sin^2\left(\frac{n\pi}{2N}\right)\Biggr)$$</span>
converge or diverge. The fact that the limiting variable appears both as the upper bound of summation as well as in the individual summands seems to make the standard methods known to me inapplicable.</p>
<p>I suspect that the second limit (i.e. the one not containing <span class="math-container">$N$</span> in the exponent directly) does not exist, but that the first one may. I would be very grateful if you could point me to methods that allow one to determine the existence of the limits.</p>
<p>If a limit exists, I would also be very interested in understanding how, if at all, one could (approximately) replace the sum with an integral.</p>
<p>(For background, these questions have arisen during my study of the Rouse theory of polymer dynamics, e.g. in chapter 7.3.2. of Doi and Edwards, "The Theory of Polymer Dynamics". Physical explanations of how one can justify the treatment therein would be very welcome, too.)</p>
<p>Thank you in advance!</p>
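<p>For what it's worth, here is a quick numerical experiment (my own addition, a heuristic check rather than a proof). It suggests that neither limit exists: every term of the second sum is at least $e^{-1}$, so that sum is at least $N/e$, while the first sum appears to grow like $\sqrt{N/\pi}$, which is what a Laplace-method estimate of the corresponding integral predicts:</p>

```python
import math

def S1(N):
    # sum_n exp(-N sin^2(n pi / (2N)))
    return sum(math.exp(-N * math.sin(n * math.pi / (2 * N)) ** 2)
               for n in range(1, N + 1))

def S2(N):
    # sum_n exp(-sin^2(n pi / (2N))); every term is >= 1/e, so S2(N) >= N/e
    return sum(math.exp(-math.sin(n * math.pi / (2 * N)) ** 2)
               for n in range(1, N + 1))

for N in (100, 400, 1600, 6400):
    # S1(N)/sqrt(N) appears to approach 1/sqrt(pi) ~ 0.5642,
    # while S2(N)/N stays bounded away from 0
    print(N, S1(N) / math.sqrt(N), S2(N) / N)
```

<p>Replacing the first sum by $\frac{2N}{\pi}\int_0^{\pi/2} e^{-N\sin^2 t}\,dt$ (Riemann-sum spacing $\pi/(2N)$) and applying Laplace's method around $t=0$ gives the $\sqrt{N/\pi}$ growth; the printed ratios are consistent with that.</p>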
| Tianlalu | 394,456 | <p>There is a proof given by Professor Wu Wei-chao in 2001; you may download the paper <a href="http://xueshu.baidu.com/usercenter/paper/show?paperid=c09c9061b96b50fcbf62d3e3f4963de6&site=xueshu_se" rel="nofollow noreferrer">here</a>. </p>
<p>Here is an overview of the proof:</p>
<ul>
<li><p><span class="math-container">$f$</span> is one-to-one, and <span class="math-container">$$f(0)=0,\tag{2}$$</span></p></li>
<li><p><span class="math-container">$\forall y\in\Bbb R\setminus \{0\}$</span>, <span class="math-container">$$f\left(-\frac y{f(y)}\right)=-\frac{f(y)}y,\qquad f\left(-\frac{f(y)}y\right)=-\frac y{f(y)},\tag{8,9}$$</span> </p></li>
<li><p><span class="math-container">$$f(\pm1)=\pm1,\tag{11,12}$$</span></p></li>
<li><p><span class="math-container">$\forall x\in\Bbb R$</span>,
<span class="math-container">$$f(x+f(x)+1)=f(x)+x+1,\qquad f(x-f(x)-1)=-x+f(x)-1\tag{13,14}$$</span></p></li>
<li><p><span class="math-container">$$f(\pm2)=\pm2,\tag{23,29}$$</span></p></li>
<li><p><span class="math-container">$$(f(\pm\sqrt2))^2=2,\qquad(f(\pm\frac 1{\sqrt2}))^2=\frac12,\tag{42,46}$$</span></p></li>
<li><p><span class="math-container">$\forall x\in\Bbb R$</span>,
<span class="math-container">$$f(3x-4)=-3f(-x)-4.\tag{47}$$</span></p></li>
<li><p>Introduce 5 lemmas,</p>
<blockquote>
<p><strong>Lemma 4.</strong> <span class="math-container">$\forall a\in \Bbb R$</span>, if <span class="math-container">$$\tag{58}f(-x-a)=-f(x)-a,\qquad (\forall x\in\Bbb R)$$</span> and <span class="math-container">$$\tag{59}f(x-f(x)-\dfrac {a^2}4)=f(x)-x-\dfrac {a^2}4,\qquad (\forall x\in\Bbb R)$$</span>
then
<span class="math-container">$$f(x)=x\qquad (\forall x\in\Bbb R).$$</span></p>
</blockquote></li>
<li><p>Separate into 3 cases and finish the proof.</p></li>
</ul>
<blockquote>
<p><strong>Case 1:</strong> <span class="math-container">$f(\pm\sqrt 2)=\pm\sqrt 2\implies f(x)=x$</span> (<span class="math-container">$\forall x\in\Bbb R$</span>).</p>
<p><strong>Case 2:</strong> <span class="math-container">$f(\pm\sqrt 2)=\mp\sqrt 2, f(\pm\frac 1{\sqrt2})=\pm\frac 1{\sqrt2}\implies$</span> contradiction to the injectivity of <span class="math-container">$f$</span>.</p>
<p><strong>Case 3:</strong> <span class="math-container">$f(\pm\sqrt 2)=\mp\sqrt 2, f(\pm\frac 1{\sqrt2})=\mp\frac 1{\sqrt2}\implies$</span> contradiction to <span class="math-container">$f(\sqrt 2)=-\sqrt 2$</span>.</p>
</blockquote>
<hr>
<p>Wu also generalized the result.</p>
<blockquote>
<p><strong>Generalization:</strong> Suppose <span class="math-container">$F$</span> is a field or a ring, <span class="math-container">$\Bbb Z\subseteq F\subseteq \Bbb C$</span>, and <span class="math-container">$f:F\to F$</span> satisfies
<span class="math-container">$$f(x+f(y)+yf(x))=y+f(x)+xf(y),$$</span></p>
<ul>
<li><p>if <span class="math-container">$F=\Bbb R$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in\Bbb R$</span>);</p></li>
<li><p>if <span class="math-container">$F=\Bbb C$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in\Bbb C$</span>) or <span class="math-container">$f(x)=\overline x$</span> (<span class="math-container">$\forall x\in\Bbb C$</span>);</p></li>
<li><p>if <span class="math-container">$F=\Bbb Q$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in\Bbb Q$</span>);</p></li>
<li><p>if <span class="math-container">$F=\Bbb Z$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in\Bbb Z$</span>);</p></li>
<li><p>if <span class="math-container">$F=\{a+br\mid \forall a,b\in\Bbb Q\}$</span>, where <span class="math-container">$r$</span> is fixed irrational and <span class="math-container">$r^2\in\Bbb Q$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in F$</span>) or <span class="math-container">$f(x)=\overline x$</span> (<span class="math-container">$\forall x\in F$</span>), here we define <span class="math-container">$\overline{a+br}=a-br$</span>;</p></li>
<li><p>if <span class="math-container">$F=\{a+br\mid \forall a,b\in\Bbb Z\}$</span>, where <span class="math-container">$r$</span> is fixed irrational and <span class="math-container">$r^2\in\Bbb Z$</span>, then <span class="math-container">$f(x)=x$</span> (<span class="math-container">$\forall x\in F$</span>) or <span class="math-container">$f(x)=\overline x$</span> (<span class="math-container">$\forall x\in F$</span>), here we define <span class="math-container">$\overline{a+br}=a-br$</span>.</p></li>
</ul>
</blockquote>
|
1,761,189 | <p>Suppose that voters arrive to a voting booth according to a Poisson process with rate $\lambda = 100$ voters per hour. The voters will vote for two candidates, candidate $1$ and candidate $2$, with equal probability $\left(\frac{1}{2}\right)$. I am assuming that we start the voting at time $t=0$ and it continues indefinitely. I am trying to find the probability of candidate $1$ receiving $k$ votes in the first $4$ hours of voting, CONDITIONAL on $1000$ voters arriving during the first $10$ hours. My calculation keeps coming up as $\frac{1}{2}$ due to the symmetry involved. </p>
<p>My idea is to define $A_t =$ number of votes for candidate $1$ at time $t$, then find $P(A_4 = k | N(10) = 1000)$, where $N(t)$ is the number of voters who have arrived at time $t$. Could someone tell me if this is the right approach? Thanks.</p>
| Henry | 6,460 | <p>Each voter who turns up within the $10$ hours has a probability of $\frac{4}{10}$ of arriving in the first $4$ hours and, independently, a probability of $\frac12$ of voting for the first candidate, so a probability of $\frac15$ of both. </p>
<p>So, given $1000$ voters in total, you have a binomial distribution $A_4 \sim Bin(1000,\frac15)$, i.e. $$P(A_4=k\mid N_{10}=1000)={1000 \choose k}\frac{4^{1000-k}}{5^{1000}}$$</p>
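<p>A quick Monte Carlo check of this thinning argument (my addition, not part of the original answer): conditional on $N(10)=1000$, the arrival times are i.i.d. uniform on $[0,10]$, and each voter independently votes for candidate $1$ with probability $\frac12$:</p>

```python
import random

random.seed(0)

def simulate_A4(n_voters=1000):
    # conditional on N(10) = 1000, arrival times are i.i.d. Uniform(0, 10);
    # each voter votes for candidate 1 with probability 1/2, independently
    count = 0
    for _ in range(n_voters):
        arrives_early = random.random() < 4 / 10
        votes_cand1 = random.random() < 1 / 2
        if arrives_early and votes_cand1:
            count += 1
    return count

trials = 4000
samples = [simulate_A4() for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
# Binomial(1000, 1/5) predicts mean 200 and variance 160
print(mean, var)
```

<p>The empirical mean and variance land close to the Binomial$(1000,\frac15)$ values $200$ and $160$.</p>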
|
2,814,793 | <p>I don't know what I did wrong. Can anyone point out my mistake? The problem is:</p>
<blockquote>
<p>For $\lim\limits_{x\to1}(2-1/x)=1$, finding $\delta$, such that if $0<|x-1|<\delta$, then $|f(x)-1|<0.1$</p>
</blockquote>
<p>Here is what I did:</p>
<p>Since $|f(x)-1|=|2-1/x-1|=|1-1/x|=|x-1|/x<0.1$, then $|x-1|<0.1x$.
We know that $0<|x−1|<δ$, so $\delta=0.1x=x/10$</p>
<p>The answer sheet says $\delta=1/11$. I don't understand it.</p>
| fleablood | 280,126 | <p>$\delta$ must be a constant determined by $.1$ and not dependent on the value of $x$.</p>
<p>(How could you choose an $x$ so that $|x -1| <\frac x{10}$ if we don't know what $x$ is in the first place?)</p>
<p>So you need to find a <em>constant</em> $\delta$ so that $|x - 1| < \delta \implies |2-\frac 1x-1|=|1-\frac 1x| < .1$.</p>
<p>To find that $|1 - \frac 1x| < .1$ so</p>
<p>$.9 < \frac 1x < 1.1$ so </p>
<p>$\frac {10}{11} < x < \frac {10}{9}$</p>
<p>$1 - \frac 1{11} < x < 1 + \frac 1{9}$</p>
<p>$-\frac 1{11} < x -1 < \frac 1{9}$</p>
<p>$|x - 1| < \min(\frac 1{11},\frac 1{9}) = \frac 1{11}$.</p>
<p>So that is <em>a</em> (and the maximum possible) delta.</p>
<p><em>NOW</em> it's easy to choose an $x$ so that $|x-1| < \frac 1{11}$. And when we do, it must logically (by following all those steps backwards) be the case that $|2 - \frac 1x - 1| < .1$.</p>
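<p>A small grid check in Python (my addition) confirms both that $\delta = \frac 1{11}$ works and that it cannot be enlarged:</p>

```python
def err(x):
    return abs((2 - 1 / x) - 1)   # |f(x) - 1| = |1 - 1/x|

delta = 1 / 11

# every x with 0 < |x - 1| < delta keeps the error below 0.1 ...
xs = [1 + (k / 1000) * (delta - 1e-9) for k in range(-999, 1000) if k != 0]
assert all(err(x) < 0.1 for x in xs)

# ... while any larger delta admits points just left of 10/11 that fail
assert err(10 / 11 - 1e-4) > 0.1
print("delta = 1/11 verified on a grid")
```

<p>The failure just left of $x = \frac{10}{11}$ shows $\frac1{11}$ is the maximum possible $\delta$, matching the answer sheet.</p>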
|
2,230,417 | <p>I need to formally prove that $f(x)=x^2$ is a contraction on each interval $[0,a]$, $0<a<0.5$. Intuitively, the fact that its derivative lies in the range $(-1,1)$ implies that the distance between $f(x)$ and $f(y)$ is less than the distance between $x$ and $y$. </p>
<p>But now I need an explicit $\lambda$ such that $0\le \lambda<1$ and $d(f(x),f(y))\le \lambda d(x,y)$, where $d$ is the standard metric on $\Bbb R$.</p>
<p>Thanks a lot!</p>
| Hamza | 185,441 | <p>If $x\in[0,a]$ we get
$$
\sup_{t\in[0,a]}|f'(t)|=\sup_{t\in[0,a]}|2t|=2a
$$
So
$$
d(f(x),f(y))\leq 2a d(x,y) \qquad \forall x,y\in [0,a]
$$
and we can't find a constant less than $2a$, in fact for $x=a$ and $y=a-\epsilon$ for $a>\epsilon>0$ we get :
$$
\frac{d(f(x),f(y))}{d(x,y)}=\frac{d(a^2,(a-\epsilon)^2)}{\epsilon}=\frac{2a\epsilon -\epsilon^2}{\epsilon}=2a-\epsilon
$$</p>
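<p>A numerical spot-check of this bound (my addition), taking $a=0.4$ as an example: random pairs satisfy the Lipschitz bound with constant $2a$, and the difference quotient approaches $2a$ near the right endpoint, so no smaller constant can work:</p>

```python
import random

random.seed(1)
a = 0.4   # any 0 < a < 0.5 gives a contraction constant 2a < 1

pairs = [(random.uniform(0, a), random.uniform(0, a)) for _ in range(10000)]
assert all(abs(x * x - y * y) <= 2 * a * abs(x - y) + 1e-15 for x, y in pairs)

eps = 1e-6
ratio = abs(a * a - (a - eps) ** 2) / eps   # equals 2a - eps exactly
print(ratio)   # just below 2a = 0.8
```

<p>The bound itself follows from $|x^2-y^2| = (x+y)|x-y| \le 2a|x-y|$ on $[0,a]$, which is why the assert never fires.</p>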
|
1,920,776 | <p>As far as I know, there is a theorem which says the following: </p>
<blockquote>
<p>The prime ideals of $B[y]$, where $B$ is a PID, are $(0), (f)$, for irreducible $f \in B[y]$, and all maximal ideals. Moreover, each maximal ideal is of the form $m = (p,q)$, where $p$ is an irreducible element in $B$ and $q$ is an irreducible element in $(B/(p))[y]$.</p>
</blockquote>
<p>I managed to prove that any prime non-principal ideal $m \subset B[y]$ is maximal. This gives the first part of the theorem. Now I want to prove that $m = (p,q)$. I can show that $m \cap B \neq (0)$. Thus $m$ contains a non-zero element $a$ from $B$. Since $B$ is a PID and $m$ is prime, $m$ must contain an irreducible element of $B$ (we can take an irreducible factor $p$ of $a$). </p>
<p>As I understand it, I now need to show that $m$ contains an element $q$ (see the theorem above). Then it follows that $(p,q) \subseteq m$. After that I have to prove that $m \subseteq (p,q)$. How do I make these last steps?</p>
<p>Yes, I have seen similar questions with $B = k[x]$ or $\mathbb{Z}$, but I still cannot prove the last part.</p>
| egreg | 62,967 | <p>Let's look at a decimal example. You want to do $735-78$.</p>
<p>Borrow 1000 from the Number Bank; the loan is subject to no interest, but you <em>must</em> give back what you got as soon as you have used it.</p>
<p>Now consider that
$$
735-78=735+(1000-78)-1000
$$
The subtraction $1000-78$ is very easy to do: just do $9$-complement on the rightmost three digits (the missing one at the far left is, of course, $0$), getting $921+1$, so our operation now reads
$$
735-78=735+921+1-1000
$$
Since
\begin{array}{rr}
735 & + \\
921 & = \\
\hline
1656
\end{array}
we can give back 1000 to the bank and add 1:
$$
735-78=656+1=657
$$</p>
<p>In base two it's exactly the same, the only difference being that the $1$-complement (instead of the $9$-complement) is very easy, because it consists of flipping the digits. You don't need the loan either, because you work on a fixed number of bits, and numbers that overflow are simply reduced by forgetting the leftmost digit. So if you have to do</p>
<pre><code>00101001 - 00001110
</code></pre>
<p>you can flip the digits in the second number and add, forgetting the leftmost bit that may become 1:</p>
<pre><code>00101001 +
11110001 =
----------
00011010 +
1 =
----------
00011011
</code></pre>
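<p>The same 8-bit computation can be replayed in a few lines of Python (my addition), with <code>& 0xFF</code> playing the role of forgetting the leftmost bit:</p>

```python
BITS = 8
MASK = (1 << BITS) - 1          # 0b11111111

def sub_via_complement(x, y):
    # one's complement of y (flip the bits), add 1, add to x,
    # then drop any carry out of the top bit
    return (x + ((~y) & MASK) + 1) & MASK

x, y = 0b00101001, 0b00001110   # 41 and 14
result = sub_via_complement(x, y)
print(format(result, "08b"))    # 00011011, i.e. 41 - 14 = 27
assert result == (x - y) & MASK == 27
```

<p>Here <code>(~y) & MASK</code> is exactly the digit flip described above.</p>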
|
1,920,776 | <p>As far as I know, there is a theorem which says the following: </p>
<blockquote>
<p>The prime ideals of $B[y]$, where $B$ is a PID, are $(0), (f)$, for irreducible $f \in B[y]$, and all maximal ideals. Moreover, each maximal ideal is of the form $m = (p,q)$, where $p$ is an irreducible element in $B$ and $q$ is an irreducible element in $(B/(p))[y]$.</p>
</blockquote>
<p>I managed to prove that any prime non-principal ideal $m \subset B[y]$ is maximal. This gives the first part of the theorem. Now I want to prove that $m = (p,q)$. I can show that $m \cap B \neq (0)$. Thus $m$ contains a non-zero element $a$ from $B$. Since $B$ is a PID and $m$ is prime, $m$ must contain an irreducible element of $B$ (we can take an irreducible factor $p$ of $a$). </p>
<p>As I understand it, I now need to show that $m$ contains an element $q$ (see the theorem above). Then it follows that $(p,q) \subseteq m$. After that I have to prove that $m \subseteq (p,q)$. How do I make these last steps?</p>
<p>Yes, I have seen similar questions with $B = k[x]$ or $\mathbb{Z}$, but I still cannot prove the last part.</p>
| fleablood | 280,126 | <p>Basic assumption is that we are doing modulo arithmetic over some $2^{n+1}$. i.e. $x \equiv x + 2^{n+1}$ for all $x$. (I suppose I should write $x \equiv x + 2^{n+1} \pmod{2^{n+1}}$ but I'm not going to write the $(\operatorname{mod}{2^{n+1}})$ part.) Especially $0 \equiv 2^{n+1}$.</p>
<p>So what we want to show is the $-x \equiv 2^{n+1} - x$ is "twos complement".
If $x = \sum_{i=0}^{n} a_i 2^i; a_i = \{0,1\}$, the "twos complement" is $\overline x = (\sum_{i=0}^{n} b_i 2^i) + 1$ where $a_i + b_i = 1$. So
$$\begin{align}
x + \overline x
&= ( \sum_{i=0}^{n} a_i 2^i) + (\sum_{i=0}^{n} b_i 2^i) + 1 \\
&= (\sum_{i=0}^{n}(a_i + b_i)*2^i) + 1 \\
&= (\sum_{i=0}^{n} 1*2^i) +1 \\
&= 11\ldots11_2 + 1 &&\text{(subscript $2$ denotes binary)} \\
&\equiv 00\ldots00_2 \\
&\equiv (2^{n+1} - 1) + 1 \\
&= 2^{n+1} \\
&\equiv 0
\end{align}$$</p>
<p>So $\overline x = 2^{n+1} -x \equiv 0 - x = -x$</p>
<p><strong>Example:</strong> Let $n = 8$ so we have numbers $0, \ldots, 255$ with $256 \equiv 0$, set $x = 100 = 64 + 32 + 4 = 01100100_2$ and $y = 10011011_2 = 155$. Then
$$x + y = 11111111_2 = 255 = 256 - 1 \equiv -1$$ and
$$z = y + 1 = 10011011_2 + 1 = 10011100_2 = 156$$
However $z = y+ 1 = (x+y) +1 - x = (256 -1) +1 - x = 256 -x \equiv -x \equiv -100$.</p>
<p>And if we actually do it: $y = 10011011_2=128 + 16 + 8 + 2 + 1 = 155$</p>
<p>$z = 10011100_2 = 128 + 16 + 8 + 4 = 156$. </p>
<p>And, indeed, $156 + 100 = 256 \equiv 0$ so $156 \equiv -100 \pmod{256}$.</p>
<hr>
<p>Another way to think of it is to do a "10s complement":</p>
<p>To find $-15,890,256 \bmod 1{,}000{,}000{,}000$ we would do the "10s complement" of $015,890,256$ which has decimal expansion $$(9-0)(9-1)(9-5)(9-8)(9-9)(9-0)(9-2)(9-5)(9-6) = 984{,}109{,}743.$$ [Note: $984{,}109{,}743 + 15{,}890{,}256=999{,}999{,}999$.]</p>
<p>Add $1$ to get $984{,}109{,}744$ and, indeed $984,109,744 \equiv -15,890,256 \pmod {1{,}000{,}000{,}000}$.</p>
<p>But "$2$" only has two values, $0$ and $1$, so it is easier.</p>
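<p>Both congruences in the examples are easy to confirm with Python's <code>%</code> operator, which always returns the non-negative representative (a small check of my own):</p>

```python
# 8-bit case: the two's complement of 100 is 156, which represents -100 mod 256
x = 100
twos = ((~x) & 0xFF) + 1        # flip 8 bits, then add 1
assert twos == 156
assert (twos + x) % 256 == 0    # so 156 = -100 (mod 256)

# decimal analogue: "ten's complement" of 15,890,256 on 9 digits
n = 15_890_256
tens = (10**9 - 1 - n) + 1      # nine's complement digit-by-digit, then add 1
assert tens == 984_109_744
assert (-n) % 10**9 == tens
print("both complements check out")
```
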
|
1,624,888 | <h2>Question</h2>
<p>In the following expression can $\epsilon$ be a matrix?</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<h2>Background</h2>
<p>So in quantum mechanics we generally have a solution $|m\rangle$ to a Hamiltonian:</p>
<p>$$ H | m\rangle = E |m\rangle $$</p>
<p>Now using perturbation theory:</p>
<p>$$ (H + \epsilon H_1) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) = (E + \epsilon E_1 + \epsilon^2 E_2 + \dots) ( |m\rangle +\epsilon|m_1\rangle + \epsilon^2 |m_2\rangle + \dots) $$</p>
<p>I was curious and substituted $\epsilon$ as a matrix:</p>
<p>$$ \epsilon =
\left( \begin{array}{cc}
0 & 0 \\
1 & 0 \end{array} \right) $$</p>
<p>where $\epsilon$ is now a nilpotent matrix, we get:</p>
<p>$$ \left( \begin{array}{cc}
H | m \rangle & 0 \\
H |m_1 \rangle + H_1 | m\rangle & H |m \rangle \end{array} \right) = \left( \begin{array}{cc}
E | m \rangle & 0 \\
E |m_1 \rangle + E_1 | m\rangle & E |m \rangle \end{array} \right)$$</p>
<p>This is what we'd expect if we compared powers of $\epsilon$. All this made me wonder whether $\epsilon$ could be a matrix, say something like $| m_k\rangle \langle m_k |$? Say we choose $\epsilon \to \hat I \epsilon$;</p>
<p>then there exists a radius of convergence. What is the radius of convergence in the general case of an arbitrary matrix?</p>
| Dac0 | 291,786 | <p>I would say there's nothing preventing you from using a matrix as a perturbation in a matrix equation, as long as you let the norm of the perturbation tend to $0$ in the limit.</p>
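<p>The block-matrix bookkeeping can also be checked numerically. Below is a sketch in plain Python (my illustration, not part of the original answer); small integer matrices stand in for $H$, $H_1$ and integer columns for the kets. Writing $H+\epsilon H_1$ as the Kronecker combination $I\otimes H + \epsilon\otimes H_1$, the nilpotency $\epsilon^2=0$ truncates the product exactly at first order:</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):
    # Kronecker product: rows indexed by (i, p), columns by (j, q)
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

I2  = [[1, 0], [0, 1]]
eps = [[0, 0], [1, 0]]                 # nilpotent: eps . eps = 0
assert matmul(eps, eps) == [[0, 0], [0, 0]]

H,  H1 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]   # toy stand-ins for H, H_1
m,  m1 = [[1], [2]], [[3], [5]]               # toy stand-ins for |m>, |m_1>

op    = madd(kron(I2, H), kron(eps, H1))      # H + eps*H_1 as a block matrix
state = madd(kron(I2, m), kron(eps, m1))      # |m> + eps*|m_1>

# (H + eps H_1)(|m> + eps |m_1>) = H|m> + eps (H|m_1> + H_1|m>), exactly
expected = madd(kron(I2, matmul(H, m)),
                kron(eps, madd(matmul(H, m1), matmul(H1, m))))
assert matmul(op, state) == expected
print("eps^2 = 0 truncates the expansion at first order")
```
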
|
3,520,057 | <p>How would you draw <span class="math-container">$A\cup(B\cap C\cap D)$</span>? I tried with Paint, but it doesn't look good, and I don't know if there are tools to make this easier. I would appreciate any help or tips.</p>
<p><img src="https://i.stack.imgur.com/fc6Pq.png" alt="enter image description here"></p>
| Alex Kruckman | 7,062 | <blockquote>
<p>How can we say that the direct limit is the tensor product of the given family? </p>
</blockquote>
<p>Well, Atiyah and Macdonald are <em>defining</em> the tensor product of an infinite family of <span class="math-container">$A$</span>-algebras. We're free to make whatever definitions we want...</p>
<blockquote>
<p>Shouldn't one also prove the universal property of the tensor product here? </p>
</blockquote>
<p>Yes, that would provide some justification that the definition is a reasonable one. But be careful: in the category of <span class="math-container">$A$</span>-algebras, the universal property satisfied by the tensor product of two algebras <span class="math-container">$B\otimes_A C$</span> is the universal property of the coproduct! It is the universal property of the infinite coproduct that is satisfied by Atiyah and Macdonald's definition, not anything to do with multilinear maps. That is, what Atiyah and Macdonald are doing in this exercise is providing an explicit construction of the infinite coproduct in the category of <span class="math-container">$A$</span>-algebras.</p>
<p>The reason why the underlying <span class="math-container">$A$</span>-module of the coproduct of <span class="math-container">$A$</span>-algebras <span class="math-container">$B$</span> and <span class="math-container">$C$</span> agrees with the tensor product of the underlying <span class="math-container">$A$</span>-modules of <span class="math-container">$B$</span> and <span class="math-container">$C$</span> is that bilinear maps <span class="math-container">$B\times C\to D$</span> are closely related to pairs of maps <span class="math-container">$B\to D$</span> and <span class="math-container">$C\to D$</span>. For example, if we have a pair of <span class="math-container">$A$</span>-algebra homomorphisms <span class="math-container">$f\colon B\to D$</span> and <span class="math-container">$g\colon C\to D$</span>, then we can form a bilinear map <span class="math-container">$B\times C\to D$</span> by <span class="math-container">$(b,c)\mapsto f(b)g(c)$</span>. This relationship breaks down in the infinite case. Given a family of <span class="math-container">$A$</span>-algebra homomorphisms <span class="math-container">$f_\lambda\colon B_\lambda\to D$</span> for all <span class="math-container">$\lambda\in \Lambda$</span>, we can't get a multilinear map in the same way: multiplication of infinitely many outputs of the <span class="math-container">$f_\lambda$</span> doesn't make sense in <span class="math-container">$D$</span>.</p>
<blockquote>
<p>How the arbitrary element of the <span class="math-container">$A$</span>-algebra <span class="math-container">$B$</span> will look like? </p>
</blockquote>
<p>In general, the direct limit of a directed system of algebraic structures can be described as the union of all the structures in the system, modulo the equivalence relation defined by <span class="math-container">$c\in C$</span> is equivalent to <span class="math-container">$d\in D$</span> if and only if <span class="math-container">$c$</span> and <span class="math-container">$d$</span> agree later in the system, i.e. there is some structure <span class="math-container">$E$</span> in the system with maps <span class="math-container">$f\colon C\to E$</span> and <span class="math-container">$g\colon D\to E$</span> such that <span class="math-container">$f(c) = g(d)$</span>. </p>
<p>In this particular case, the canonical <span class="math-container">$A$</span>-algebra homomorphism <span class="math-container">$B_J\to B_{J'}$</span> that Atiyah and Macdonald refer to is the one that extends a tensor by <span class="math-container">$1$</span>s. E.g. if <span class="math-container">$B_J = B_1\otimes_A B_2$</span> and <span class="math-container">$B_{J'}$</span> is <span class="math-container">$B_1\otimes_A B_2\otimes_A B_3 \otimes_A B_4$</span>, then the map <span class="math-container">$B_J\to B_{J'}$</span> is determined by <span class="math-container">$x_1\otimes x_2\mapsto x_1\otimes x_2\otimes 1\otimes 1$</span>. So the elements of the direct limit are all elements of finite tensor products from the family, where we view two elements as equal if they are equal after we extend them both by <span class="math-container">$1$</span>s to put them in the same finite tensor product. </p>
<p>It turns out that this is the same as considering all finite linear combinations of infinite tensors <span class="math-container">$\bigotimes_{\lambda\in \Lambda} x_\lambda$</span>, where all but finitely many of the <span class="math-container">$x_\lambda$</span> are equal to <span class="math-container">$1$</span>, modulo the usual relations defining the tensor product. See Eric Wofsey's answer <a href="https://math.stackexchange.com/questions/2054068/infinite-tensor-product-as-infinite-coproduct-in-the-category-of-r-algebras">here</a> for more details and a sketch of the proof that this construction satisfies the universal property of the coproduct. </p>
<blockquote>
<p>Can one define the tensor products in the same way for directed systems of arbitrary families of, say <span class="math-container">$A$</span>-modules or say vector spaces of a field? </p>
</blockquote>
<p>No, the ring structure is crucial here, since we use <span class="math-container">$1$</span> to define the canonical maps <span class="math-container">$B_J\to B_{J'}$</span>. For infinite tensor products of modules or vector spaces, one has to consider multilinear maps. See the discussion <a href="https://mathoverflow.net/questions/11767/infinite-tensor-products">here</a>. </p>
|
273,463 | <p>I want to solve this differential equation</p>
<p><span class="math-container">$$
\frac{1}{\Phi(\phi)} \frac{d^2 \Phi}{d \phi^2}=-m^2
$$</span></p>
<p>with b.c.</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>It is easy to solve with Mathematica:</p>
<pre><code>eq1 = D[Phi[phi], {phi, 2}]/Phi[phi];
sol1 = DSolve[eq1 == -m^2, Phi[phi], phi];
</code></pre>
<p>out is <code>{{Phi[phi] -> C[1] Cos[m phi] + C[2] Sin[m phi]}}</code></p>
<hr />
<p>In <em>QUANTUM CHEMISTRY</em> by McQuarrie(<a href="https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420" rel="nofollow noreferrer">https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420</a>)</p>
<p><span class="math-container">$$
\Phi(\phi)=A_m e^{i m \phi} \quad \text { and } \quad \Phi(\phi)=A_{-m} e^{-i m \phi}
$$</span></p>
<p>periodicity condition</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>so
<span class="math-container">$$
\begin{gathered}
A_m e^{i m(\phi+2 \pi)}=A_m e^{i m \phi} \\
A_{-m} e^{-i m(\phi+2 \pi)}=A_{-m} e^{-i m \phi}
\end{gathered}
$$</span></p>
<p>by all of above can get</p>
<p><span class="math-container">$$
e^{\pm i 2 \pi m}=1
$$</span></p>
<p>so</p>
<p><span class="math-container">$$
m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>finally</p>
<p><span class="math-container">$$
\Phi_m(\phi)=A_m e^{i m \phi} \quad m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>Given the periodicity condition <span class="math-container">$\Phi(\phi+2 \pi)=\Phi(\phi)$</span>, how do I add this condition to my code?</p>
<hr />
<p>In general <em>mathematical physics</em> textbooks, one obtains
<span class="math-container">$$
\Phi(\phi)=A_m \mathrm{e}^{\mathrm{i} m \phi}+A_{-m} \mathrm{e}^{-\mathrm{i} m \phi} \quad(m=0,1,2 \cdots)
$$</span></p>
| user293787 | 85,954 | <p>There is <a href="https://reference.wolfram.com/language/ref/PeriodicBoundaryCondition.html" rel="nofollow noreferrer">PeriodicBoundaryCondition</a>. The syntax is a bit complicated:</p>
<pre><code>PBC=PeriodicBoundaryCondition[Phi[phi],phi==2*Pi,phi|->phi-2*Pi];
</code></pre>
<p>This can be used with some solvers such as</p>
<pre><code>{vals,funs}=NDEigensystem[{Phi''[phi],PBC},Phi[phi],{phi,0,2*Pi},10];
</code></pre>
<p>This will compute the first 10 eigenvalues and eigenfunctions</p>
<pre><code>vals
(* {0.,-1.00001,-1.00001,-4.00085,-4.00085,-9.00943,-9.00943,-16.0513,-16.0513,-25.1881} *)
</code></pre>
<p>This is a <strong>numerical</strong> result and contains numerical errors; the actual numbers are of course <span class="math-container">$$0,-1,-1,-4,-4,-9,-9,-16, \ldots$$</span>
corresponding to <span class="math-container">$-m^2$</span> in the OP's question.</p>
<p>To plot the eigenfunctions, use for example</p>
<pre><code>Plot[{funs[[6]],funs[[7]]},{phi,0,2*Pi}]
</code></pre>
<p>which are linear combinations of <span class="math-container">$e^{3i\phi}$</span> and <span class="math-container">$e^{-3i\phi}$</span> in OPs question.</p>
<p><a href="https://i.stack.imgur.com/SHFgB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SHFgB.png" alt="enter image description here" /></a></p>
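<p>For readers without Mathematica, the same spectrum can be approximated in plain Python (my addition): discretizing $\Phi''=\lambda\Phi$ on a uniform periodic grid gives a circulant second-difference matrix, whose eigenvalues are known in closed form:</p>

```python
import math

N = 400                     # grid points on [0, 2 pi)
h = 2 * math.pi / N

# eigenvalues of the periodic second-difference (circulant) matrix:
# lam_k = -(2 - 2 cos(2 pi k / N)) / h^2, each approximating -m^2 for small m
lams = sorted((-(2 - 2 * math.cos(2 * math.pi * k / N)) / h ** 2
               for k in range(N)), reverse=True)
print([round(l, 3) for l in lams[:7]])
# close to [0, -1, -1, -4, -4, -9, -9], i.e. -m^2 with double multiplicity
```

<p>Refining the grid (larger <code>N</code>) drives these values toward the exact $-m^2$, matching the <code>NDEigensystem</code> output above.</p>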
|
273,463 | <p>I want to solve this differential equation</p>
<p><span class="math-container">$$
\frac{1}{\Phi(\phi)} \frac{d^2 \Phi}{d \phi^2}=-m^2
$$</span></p>
<p>with b.c.</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>It is easy to solve with Mathematica:</p>
<pre><code>eq1 = D[Phi[phi], {phi, 2}]/Phi[phi];
sol1 = DSolve[eq1 == -m^2, Phi[phi], phi];
</code></pre>
<p>out is <code>{{Phi[phi] -> C[1] Cos[m phi] + C[2] Sin[m phi]}}</code></p>
<hr />
<p>In <em>QUANTUM CHEMISTRY</em> by McQuarrie(<a href="https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420" rel="nofollow noreferrer">https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420</a>)</p>
<p><span class="math-container">$$
\Phi(\phi)=A_m e^{i m \phi} \quad \text { and } \quad \Phi(\phi)=A_{-m} e^{-i m \phi}
$$</span></p>
<p>periodicity condition</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>so
<span class="math-container">$$
\begin{gathered}
A_m e^{i m(\phi+2 \pi)}=A_m e^{i m \phi} \\
A_{-m} e^{-i m(\phi+2 \pi)}=A_{-m} e^{-i m \phi}
\end{gathered}
$$</span></p>
<p>by all of above can get</p>
<p><span class="math-container">$$
e^{\pm i 2 \pi m}=1
$$</span></p>
<p>so</p>
<p><span class="math-container">$$
m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>finally</p>
<p><span class="math-container">$$
\Phi_m(\phi)=A_m e^{i m \phi} \quad m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>Given the periodicity condition <span class="math-container">$\Phi(\phi+2 \pi)=\Phi(\phi)$</span>, how do I add this condition to my code?</p>
<hr />
<p>In general <em>mathematical physics</em> textbooks, one obtains
<span class="math-container">$$
\Phi(\phi)=A_m \mathrm{e}^{\mathrm{i} m \phi}+A_{-m} \mathrm{e}^{-\mathrm{i} m \phi} \quad(m=0,1,2 \cdots)
$$</span></p>
| xzczd | 1,871 | <p>First of all, the deduction in <em>Quantum chemistry</em> by <em>McQuarrie</em> is not rigorous (if we don't call it incorrect). As shown in Nasser's answer, the general solution (when no b.c. is imposed) is</p>
<pre><code>generalsol[phi_] =
With[{ϕ = Phi[phi]},
Collect[DSolveValue[{D[ϕ, {phi, 2}]/ϕ == -m^2}, ϕ, phi] //
TrigToExp, E^_, Simplify] /. {C[1] - I C[2] -> 2 Subscript[A, "m"],
C[1] + I C[2] -> 2 Subscript[A, -"m"]}]
(* E^(-I m phi) Subscript[A, -"m"] + E^(I m phi) Subscript[A, "m"] *)
</code></pre>
<p>which is <strong>not</strong> the separate</p>
<p><span class="math-container">$$
\Phi(\phi)=A_m e^{i m \phi} \quad \text { and } \quad \Phi(\phi)=A_{-m} e^{-i m \phi}
$$</span></p>
<p>Then what's the rigorous way to deduce the desired solution? To deduce it, we need to be aware of the following facts:</p>
<ol>
<li><p>Though often written as a single <span class="math-container">$Φ(ϕ+2π)=Φ(ϕ)$</span> in various materials, when we say periodic boundary condition (b.c.) is imposed, <em>usually</em> it means we're imposing <span class="math-container">$Φ(ϕ+2π)=Φ(ϕ)$</span>, <span class="math-container">$Φ'(ϕ+2π)=Φ'(ϕ)$</span>, <span class="math-container">$Φ''(ϕ+2π)=Φ''(ϕ)$</span>, …. The number of imposed periodic b.c.s depends on the order of differential equations.</p>
</li>
<li><p><code>DSolve</code> (at least for now) is still a solver for generally valid solutions only; i.e., solutions that are valid only for <span class="math-container">$m=0,\pm 1,\pm 2,\ldots$</span>, i.e. <span class="math-container">$m\in \mathbb{Z}$</span>, cannot be directly found by <code>DSolve</code>. (This behavior is similar to that of <code>Solve</code>. If you don't know what I mean, try <code>Solve[{a + 1 == 0, b == 0}, a]</code> and <code>Solve[{a + 1 == 0, b == 0}, a, MaxExtraConditions -> Infinity]</code>.)</p>
</li>
</ol>
<p>OK, keeping these facts in mind, let's start. With <code>generalsol</code> at hand, we just need to deduce <span class="math-container">$m\in \mathbb{Z}$</span>. The ODE is of 2nd order, so we need 2 periodic b.c.s <code>Phi[0] == Phi[2 Pi], Phi'[0] == Phi'[2 Pi]</code>. Substitute the general solution into these and <code>Reduce</code>:</p>
<pre><code>Reduce[{Phi'[0] == Phi'[2 Pi], Phi[0] == Phi[2 Pi]} /. Phi -> generalsol,
m] // FullSimplify
(*
(C[1] ∈
Integers && (m == 1 + 2 C[1] || m == 2 C[1])) || (Subscript[A, -"m"] == 0 &&
Subscript[A, "m"] == 0) || (Subscript[A, -"m"] == Subscript[A, "m"] && m == 0)
*)
</code></pre>
<p><span class="math-container">$$(c_1\in \mathbb{Z}\land (m=1+2 c_1\lor m=2 c_1))\lor \left(A_{-\text{m}}=0\land A_{\text{m}}=0\right)\lor \left(A_{-\text{m}}=A_{\text{m}}\land m=0\right)$$</span></p>
<p><span class="math-container">$A_{-\text{m}}=0\land A_{\text{m}}=0$</span> corresponds to the general trivial solution, which will be obtained if we substitute the 2 periodic b.c.s into <code>DSolve</code>:</p>
<pre><code>phisol = DSolve[{D[Phi[phi], {phi, 2}]/Phi[phi] == -m^2, Phi[0] == Phi[2 Pi],
Phi'[0] == Phi'[2 Pi]}, Phi[phi], phi]
(* {{Phi[phi] -> 0}} *)
</code></pre>
<p><span class="math-container">$(c_1\in \mathbb{Z}\land (m=1+2 c_1\lor m=2 c_1))\lor \left(A_{-\text{m}}=A_{\text{m}}\land m=0\right)$</span> is equivalent to <span class="math-container">$m\in \mathbb{Z}$</span>. (Sadly I haven't found a way to make <em>Mathematica</em> prove this equivalence, but I think this is obvious enough. )</p>
<p>Alternatively, we can use <code>Solve</code> with <code>Reduce</code> option:</p>
<pre><code>Solve[{Phi'[0] == Phi'[2 Pi], Phi[0] == Phi[2 Pi]} /. Phi -> generalsol, m,
Method -> Reduce] // Simplify
(*
{{m -> ConditionalExpression[2 C[1], Element[C[1], Integers]]},
{m -> ConditionalExpression[1 + 2 C[1], Element[C[1], Integers]]}}
*)
</code></pre>
<p>The result is cleaner, but still, I haven't found a way to make <em>Mathematica</em> prove it's equivalent to <span class="math-container">$m\in \mathbb{Z}$</span>.</p>
|
3,299,072 | <p>I want to prove that
<span class="math-container">$$\frac{(x-y)(x-z)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-x)(z-y)}{z^2}\geq 0$$</span>
for positive numbers <span class="math-container">$x,y,z$</span>.
I don't know how to even begin. I must say I'm not 100% certain the inequality always holds.</p>
<p>I tried the sort of factoring involved in proving Schur's inequality, but it doesn't seem to work here. I also tried to distribute the denominators to obtain terms of the form (1-y/x)(1-z/x) and then maybe substituting x/y=a, y/z=b, z/x=c, etc. </p>
| Michael Rozenberg | 190,319 | <p>Let <span class="math-container">$x\geq y\geq z$</span>.</p>
<p>Thus, <span class="math-container">$$\sum_{cyc}\frac{(x-y)(x-z)}{x^2}\geq\frac{(x-z)(y-z)}{z^2}-\frac{(x-y)(y-z)}{y^2}=$$</span>
<span class="math-container">$$=(y-z)\left(\frac{x-z}{z^2}-\frac{x-y}{y^2}\right)\geq0$$</span>
because <span class="math-container">$y-z\geq0,$</span> <span class="math-container">$x-z\geq x-y$</span> and <span class="math-container">$\frac{1}{z^2}\geq\frac{1}{y^2}.$</span></p>
<p>Done!</p>
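<p>As a quick numerical sanity check of the inequality itself (not part of the proof; the sampling range and count are arbitrary choices):</p>

```python
import random

random.seed(0)

def cyclic_sum(x, y, z):
    # the left-hand side of the inequality
    return ((x - y) * (x - z) / x**2
            + (y - x) * (y - z) / y**2
            + (z - x) * (z - y) / z**2)

for _ in range(1000):
    x, y, z = (random.uniform(0.1, 10.0) for _ in range(3))
    assert cyclic_sum(x, y, z) >= -1e-6  # tolerance for float rounding
```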
|
3,299,072 | <p>I want to prove that
<span class="math-container">$$\frac{(x-y)(x-z)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-x)(z-y)}{z^2}\geq 0$$</span>
for positive numbers <span class="math-container">$x,y,z$</span>.
I don't know how to even begin. I must say I'm not 100% certain the inequality always holds.</p>
<p>I tried the sort of factoring involved in proving Schur's inequality, but it doesn't seem to work here. I also tried to distribute the denominators to obtain terms of the form (1-y/x)(1-z/x) and then maybe substituting x/y=a, y/z=b, z/x=c, etc.</p>
| Gibbs | 498,844 | <p>There is no loss of generality in assuming <span class="math-container">$0 < x \leq y \leq z$</span>. Rewrite the inequality as
<span class="math-container">$$\frac{(x-y)^2+(y-z)(x-y)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-y)^2+(y-x)(z-y)}{z^2} \geq 0$$</span>
and rearrange the terms as follows:
<span class="math-container">$$\frac{(x-y)^2}{x^2}+\frac{(z-y)^2}{z^2}+(z-y)(y-x)\left(\frac{1}{x^2}+\frac{1}{z^2}-\frac{1}{y^2}\right)\geq 0.$$</span>
Since <span class="math-container">$(z-y)(y-x)\geq 0$</span> by assumption, it suffices to prove
<span class="math-container">$$\frac{1}{x^2}+\frac{1}{z^2}-\frac{1}{y^2} \geq 0,$$</span>
which is equivalent to
<span class="math-container">$$\left(\frac{y}{x}\right)^2+\left(\frac{y}{z}\right)^2-1\geq0.$$</span>
But the latter is obviously true as <span class="math-container">$y/x \geq 1$</span>.</p>
|
1,781,467 | <p>First, I know what the right answer is, and I know how to solve it. What I'm trying to figure out is why I can't get the following process to work.</p>
<p>The probability that we get 2 consecutive heads with one flip is 0. The probability that we get 2 consecutive heads with 2 flips = 1/4. The probability of getting 2 consecutive heads with 3 flips = 1/6. The probability of getting 2 consecutive heads with 4 flips = 2/10 = 1/5. And the probability of getting 2 consecutive heads with 5 flips = 3/16. </p>
<p>Am I doing something wrong? I don't see any easy way to use these numbers to solve the original problem of finding the expected number of coin flips to get 2 consecutive heads.</p>
| Jonathan | 338,908 | <p>The definition of expected value in this case is:</p>
<p>$$E[x]=\sum_{x=2}^{\infty}xP[x]$$</p>
<p>and</p>
<p>$$P[x]=\frac{x-1}{2^x}$$</p>
<p>Combining these yields:</p>
<p>$$E[x]=\sum_{x=2}^{\infty}\frac{x^2-x}{2^x}$$</p>
<p>Therefore $E[x]=4$.</p>
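<p>A quick numeric check of this series, taking the stated $P[x]$ at face value (300 terms is far more than needed, since the terms decay like $x^2/2^x$):</p>

```python
# partial sums of E[x] = sum_{x>=2} x (x - 1) / 2^x, and of P[x] itself
expected = sum(x * (x - 1) / 2**x for x in range(2, 300))
total_prob = sum((x - 1) / 2**x for x in range(2, 300))
assert abs(expected - 4.0) < 1e-9    # the claimed value of E[x]
assert abs(total_prob - 1.0) < 1e-9  # the stated P[x] does sum to 1
```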
|
3,422,861 | <p>The problem is as follows:</p>
<blockquote>
<p>The acceleration of an oscillating sphere is defined by the equation
<span class="math-container">$a=-ks$</span>. Find the value of <span class="math-container">$k$</span> such as <span class="math-container">$v=10\,\frac{cm}{s}$</span> when <span class="math-container">$s=0$</span>
and <span class="math-container">$s=5$</span> when <span class="math-container">$v=0$</span>.</p>
</blockquote>
<p>The given alternatives in my book are as follows:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&15\\
2.&20\\
3.&10\\
4.&4\\
5.&6\\
\end{array}$</span></p>
<p>What I attempted to do here is to use integration to find the value of <span class="math-container">$k$</span>.</p>
<p>Since the acceleration measures the rate of change between the speed and time then:</p>
<p>I'm assuming that they are using a weird notation <em>"s"</em> for the time.</p>
<p><span class="math-container">$\dfrac{d(v(s))}{ds}=-ks$</span></p>
<p><span class="math-container">$v(s)=-k\frac{s^2}{2}+c$</span></p>
<p>Using the given condition: <span class="math-container">$v(0)=10$</span></p>
<p><span class="math-container">$10=c$</span></p>
<p><span class="math-container">$v(s)=-k\frac{s^2}{2}+10$</span></p>
<p>Then it mentions: <span class="math-container">$v(5)=0$</span></p>
<p><span class="math-container">$0=-k\frac{25}{2}+10$</span></p>
<p>From this it can be obtained:</p>
<p><span class="math-container">$k=\frac{20}{25}=\frac{4}{5}$</span></p>
<p>However, this value doesn't appear among the alternatives. What part did I misunderstand? Can somebody help me here?</p>
| IamWill | 389,232 | <p>The term oscillating is very generic, so this could be a pendulum for example. I'm assuming <span class="math-container">$s$</span> stands for the position of the sphere. Then, notice that <span class="math-container">$$a =\frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt} = \frac{dv}{ds}v$$</span>
Therefore, we have <span class="math-container">$$\int v dv = -\int ks ds \Rightarrow \frac{1}{2}{v}^{2}= -\frac{1}{2}ks^{2}+C$$</span>
Now, if <span class="math-container">$s=0$</span> we have <span class="math-container">$v=10$</span> (I'm ignoring units) so <span class="math-container">$C = 50$</span>. Now, if <span class="math-container">$s = 5$</span>, <span class="math-container">$v = 0$</span> and this implies <span class="math-container">$k=4$</span>.</p>
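<p>The same computation can be written out in a few lines (a sketch; units in cm and s as in the problem, constants as above):</p>

```python
v0 = 10.0      # v = 10 cm/s at s = 0
s_stop = 5.0   # v = 0 at s = 5
C = 0.5 * v0**2          # from (1/2) v^2 = -(1/2) k s^2 + C at s = 0
k = 2 * C / s_stop**2    # from 0 = -(1/2) k s_stop^2 + C
assert k == 4.0
```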
|
264,450 | <p>We know that the class of open intervals $(a,b)$, where $a,b$ are rational numbers is a countable base for $\mathbb R$. </p>
<p>But, $[a,b]$ where $a,b$ are rational numbers does not produce a base for $\mathbb R$.</p>
<p>Can we say that any $(a,b)$ or $[a,b]$ where <strong>$a$ is rational number and $b$ is an irrational number</strong> produce a base for $\mathbb R$?</p>
| Daron | 53,993 | <p>I am not sure if this is related to your question at all, but we could say that the family $\{ [a,b]\ \colon a,b \in \mathbb R, a <c< b\}$ forms a <strong>neighborhood basis</strong> at the point $c$ with respect to the Euclidean topology. This is saying that $c$ is in the interior of $U$ iff there is some closed interval of nonzero length such that $c \in [a,b] \subseteq U$. The notion of a neighborhood basis is different from a base for a topology, which might lead to confusion.</p>
|
2,094,698 | <p>I am trying to figure out how to create a formula to solve this problem I have.</p>
<p>OK, the problem</p>
<p>I need to work out how much money to give each person based on a percentage of a total value, but the problem is that the percentage varies from person to person.</p>
<p>Ie,</p>
<p>Total amount = $1,000,000</p>
<p>Total People = 100</p>
<p>90 people get 100% of $1,000,000 / 100 people</p>
<p>5 people get 80% of $1,000,000 / 100 people</p>
<p>3 people get 60% of $1,000,000 / 100 people</p>
<p>2 people get 40% of $1,000,000 / 100 people</p>
<p>Now the issue I am having is that not even one cent can be left over.
Do I just work out what each person would get from $1,000,000 / 100 people and then just divide the remainder?</p>
<p>ie,</p>
<p>$1,000,000 / 100 = 10,000</p>
<p>10,000 * 100% = $10,000 </p>
<p>10,000 * 80% = $8,000</p>
<p>10,000 * 60% = $6,000</p>
<p>10,000 * 40% = $4,000</p>
<hr>
<p>And the total of that</p>
<p>90 x 10,000 = $900,000</p>
<p>5 x 8,000 = $40,000</p>
<p>3 x 6,000 = $18,000</p>
<p>2 x 4,000 = $8,000</p>
<p>Total of $966,000, remaining 34,000 dollars to be divided by 100 and added to each person's figure.</p>
<p>Or is there a better way to go about this? </p>
<p>EDIT: Another way I thought of doing this would be to have anyone who gets less than a 100% share get the desired percentage of what the full-share people get. </p>
<p>ie,
Say a full share worked out to be,
10,340 Dollars, have someone who gets an 80% share get 8,272 dollars etc and so on.</p>
<p>I just need to know how to get this worked out in a simple equation.</p>
<p>Thank for the help. And could someone edit the tags, I have no idea what to tag this with.</p>
| KyloRen | 406,318 | <p>So after all this time I figured out how to solve the issue. And I am also going to say that there is an issue with the wording of my question which will throw confusion as to what I really wanted in the first place. </p>
<p>So as the question states,</p>
<blockquote>
<p>Total amount = $1,000,000</p>
<p>Total People = 100</p>
<p>90 people get 100% of $1,000,000 / 100 people</p>
<p>5 people get 80% of $1,000,000 / 100 people</p>
<p>3 people get 60% of $1,000,000 / 100 people</p>
<p>2 people get 40% of $1,000,000 / 100 people</p>
</blockquote>
<p>So what I did is break down the percentage to units of value needed to be given to that particular person. ie, someone who is to get the so called "100%", they will get 100 unit values. Someone who was to get "40%", they will get 40 unit values, etc, etc.</p>
<p>So math is this below.</p>
<p><a href="https://i.stack.imgur.com/OZIBl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OZIBl.png" alt="enter image description here"></a></p>
<p>So when you do the percentages, using the greatest value as the value that is considered 100%, it breaks down to this,</p>
<pre><code> 10,352 = 100%
8,282 / 10,352 = 80%
6,211 / 10,352 = 60%
4,141 / 10,352 = 40%
</code></pre>
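<p>One way to code the unit-value idea so that not even one cent is left over is the largest-remainder method in integer cents. This is a sketch using the numbers from the question; the rounding rule (extra cents go to the largest fractional parts) is my own choice, not something from the original answer:</p>

```python
TOTAL_CENTS = 100_000_000                         # $1,000,000.00 in cents
groups = [(90, 100), (5, 80), (3, 60), (2, 40)]   # (number of people, unit weight)
weights = [w for n, w in groups for _ in range(n)]
total_units = sum(weights)                        # 9660 units in total

# each person's exact share is TOTAL_CENTS * w / total_units cents;
# hand out the integer part, then give the leftover cents, one each,
# to the people with the largest fractional parts
shares = [TOTAL_CENTS * w // total_units for w in weights]
fracs = [TOTAL_CENTS * w % total_units for w in weights]
leftover = TOTAL_CENTS - sum(shares)
for i in sorted(range(len(weights)), key=fracs.__getitem__, reverse=True)[:leftover]:
    shares[i] += 1

assert sum(shares) == TOTAL_CENTS                 # not even one cent left over
assert shares[0] in (1_035_196, 1_035_197)        # a full share ≈ $10,351.97
```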
<p>So I apologize for my poor explanation in my original question, but this is what I was looking for.</p>
<p>Cheers, have a great day.</p>
|
1,526,014 | <p>I have no idea how to compute this infinite sum. It seems to pass the convergence test. It even seems to be equal to $\frac{p}{(1-p)^2}$, but I cannot prove it. Any insightful piece of advice will be appreciated.</p>
| Mark Viola | 218,419 | <p>Assume that $|p|<1$.</p>
<p><strong>METHOD $1$:</strong></p>
<p>$$p\left(\frac{d}{dp}\sum_{k=1}^\infty p^k\right)=\sum_{k=1}^{\infty}kp^k$$</p>
<hr>
<p><strong>METHOD $2$:</strong></p>
<p>$$\begin{align}
\sum_{k=1}^\infty kp^k&=\sum_{k=1}^{\infty}p^k\sum_{j=1}^k (1)\\\\
&=\sum_{j=1}^{\infty}\sum_{k=j}^{\infty}p^k\\\\
&=\sum_{j=1}^{\infty}\frac{p^j}{1-p}\\\\
&=\frac{p}{(1-p)^2}
\end{align}$$</p>
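<p>A quick numerical check of the closed form for one arbitrary value of $p$ (the tail beyond a few hundred terms is negligible):</p>

```python
p = 0.3
closed_form = p / (1 - p)**2                     # the claimed value of the sum
partial = sum(k * p**k for k in range(1, 500))   # truncated series
assert abs(partial - closed_form) < 1e-12
```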
|
3,070,127 | <p>Let <span class="math-container">$B := \{(x, y) \in \mathbb{R}^2
: x^2 + y^2 \le 1\}$</span>be the closed ball in <span class="math-container">$\mathbb{R^2}$</span> with center at the origin.
Let I denote the unit interval <span class="math-container">$[0, 1].$</span> Which of the following statements are true?</p>
<p><span class="math-container">$(a)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow \mathbb{R}$</span> which is one-one</p>
<p><span class="math-container">$(b)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow \mathbb{R}$</span> which is onto.</p>
<p><span class="math-container">$(c)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow I × I$</span> which is one-one.</p>
<p><span class="math-container">$(d)$</span> There exists a continuous function <span class="math-container">$f : B \rightarrow I × I$</span> which is onto.</p>
<p>I think none of the options is correct.</p>
<p>Options <span class="math-container">$a)$</span> and <span class="math-container">$b)$</span> are false just using the logic of compactness, that is, <span class="math-container">$\mathbb{R}$</span> is not compact;
options c) and d) are false just using the logic of connectedness, that is, <span class="math-container">$B-\{0\}$</span> is not connected but <span class="math-container">$I × I-\{0\}$</span> is connected.</p>
<p>Is my logic correct or not?</p>
<p>Any hints/solution will be appreciated </p>
<p>thanks u</p>
| Caleb Stanford | 68,107 | <p>You can also arrive at jmerry's closed form answer directly from your answer. You got:
<span class="math-container">$$\sum_{j=0}^{n-1}\binom{2n}{j}$$</span></p>
<p>Now, notice that
<span class="math-container">$$\sum_{j=0}^{n-1}\binom{2n}{j} = \sum_{j=0}^{n-1} \binom{2n}{2n-j} = \sum_{j=n+1}^{2n} \binom{2n}{j}.$$</span></p>
<p>Therefore, the answer can be written
<span class="math-container">\begin{align*}
\sum_{j=0}^{n-1}\binom{2n}{j}
&= \frac12\;\left({\displaystyle\sum_{j=0}^{n-1}\binom{2n}{j} + \sum_{j=0}^{n-1}\binom{2n}{j}}\right) \\
&= \frac12\;\left({\displaystyle\sum_{j=0}^{n-1}\binom{2n}{j} + \sum_{j=n+1}^{2n} \binom{2n}{j}}\right) \\
&= \frac12\;\left({\displaystyle\left\{\sum_{j=0}^{2n}\binom{2n}{j}\right\} - \binom{2n}{n}}\right) \\
&= \frac12\;\displaystyle\left(2^{2n} - \binom{2n}{n}\right).
\end{align*}</span></p>
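<p>A quick check of the final identity with exact integer arithmetic:</p>

```python
from math import comb

for n in range(1, 12):
    lhs = sum(comb(2 * n, j) for j in range(n))   # j = 0 .. n-1
    rhs = (2 ** (2 * n) - comb(2 * n, n)) // 2    # the closed form
    assert lhs == rhs
```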
|
3,072,274 | <p><a href="https://i.stack.imgur.com/WaOF8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WaOF8.png" alt="enter image description here"></a></p>
<blockquote>
<p>Let <span class="math-container">$n\in\mathbb N$</span> be fixed. For <span class="math-container">$0\le k \le n$</span>, let <span class="math-container">$C_k=\binom nk$</span>. Evaluate: <span class="math-container">$$\sum_{\substack{0\le k\le n\\ k\text{ even}}} \frac{C_k}{k+1}.$$</span></p>
</blockquote>
<p>My attempt : <span class="math-container">$(x+1)^n=\sum_{k=0}^{n}{\binom{n}{k}x^k}\implies \frac{(x+1)^{n+1}}{n+1}+C=\int (x+1)^ndx=\sum_{k=0}^{n}{\binom{n}{k}\frac{1}{k+1}x^{k+1}}.$</span></p>
<p>Putting <span class="math-container">$x=0$</span> we have <span class="math-container">$C=\frac{-1}{n+1}$</span>. </p>
<p>so my answer is <span class="math-container">$$\frac{2^{n+1}-1}{n+1}$$</span></p>
<p>Is it true?</p>
<p>Any hints/solutions will be appreciated.</p>
<p>thanks u </p>
| Robert Lewis | 67,071 | <p>Yes, <span class="math-container">$H$</span> is normal in <span class="math-container">$G$</span>; for</p>
<p><span class="math-container">$h \in H, \; g \in G \tag 1$</span></p>
<p>we have</p>
<p><span class="math-container">$$\begin{align}
\det (g^{-1}hg) &= \det(g^{-1}) \det(h) \det(g)\\
& = \det(h) \det(g^{-1})\det(g) \\
&= \det(h) \det(g^{-1}g) \\
&= \det(h) \det(1_G) \\
&= \det(h) > 0; \tag 2
\end{align}$$</span></p>
<p>thus,</p>
<p><span class="math-container">$g^{-1}hg \in H, \tag 3$</span></p>
<p>which shows that <span class="math-container">$H$</span> is normal.</p>
<p><strong><em>Note:</em></strong> This demonstration apparently holds for <span class="math-container">$GL(n, F)$</span>, where <span class="math-container">$F$</span> is any field which accommodates a notion of "positivity". If <span class="math-container">$F$</span> is not such a field, we can still assert that the subgroup of elements with <span class="math-container">$\det(h) = 1$</span> is normal. Thanks to Jyrki Lahtonen for pointing this out.</p>
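<p>A small numeric illustration of the key step (the $2\times 2$ matrices here are arbitrary choices; this verifies $\det(g^{-1}hg)=\det(h)$ for one pair, not the full normality claim):</p>

```python
def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):
    d = det(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

g = [[2.0, 1.0], [1.0, 1.0]]   # any invertible g
h = [[3.0, 0.0], [0.0, 0.5]]   # det(h) = 1.5 > 0
conj = mul(mul(inv(g), h), g)  # g^{-1} h g
assert abs(det(conj) - det(h)) < 1e-9
```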
|
307,287 | <p>According to <a href="https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem" rel="nofollow noreferrer">Brouwer's fixed point theorem</a>, for compact convex $K\subset\mathbb{R}^n$, every continuous map $K\rightarrow K$ has a fixed point.</p>
<p>However, these fixed points cannot be chosen continuously, even for $K=[0,1]$, in the sense that there is no continuous map $fix:[0,1]^{[0,1]}\rightarrow[0,1]$ such that $\forall f:[0,1]\rightarrow[0,1]\;f(fix(f))=fix(f)$. To see this, consider a family of functions $f_x:[0,1]\rightarrow[0,1]$ ($x\in[0,1]$) such that $f_0(y)=\frac{1}{3}$; $f_{\frac{1}{2}}(y)=\frac{1}{3}$ for $y\leq\frac{1}{3}$, $f_{\frac{1}{2}}(y)=\frac{2}{3}$ for $y\geq\frac{2}{3}$, and linearly interpolates between those for $\frac{1}{3}<y<\frac{2}{3}$; $f_1(y)=\frac{2}{3}$; and $f_x$ linearly interpolates between $f_0$ and $f_\frac{1}{2}$ for $0<x<\frac{1}{2}$ and between $f_\frac{1}{2}$ and $f_1$ for $\frac{1}{2}<x<1$. In order for $fix$ to continuously select a fixed point, we would need $fix(f_x)=\frac{1}{3}$ for $x\leq\frac{1}{2}$ and $fix(f_x)=\frac{2}{3}$ for $x\geq\frac{1}{2}$, a contradiction.</p>
<p>But one could imagine gradually shifting probability from the lower fixed point of $f_x$ to the upper fixed point as $x$ increases. [Edit: actually, one couldn't do that; as Noam Elkies points out in the comments, this example answers my own question.]</p>
<p>Hence my question: For a compact convex $K\subset\mathbb{R}^n$, is there a continuous map $fix:K^K\rightarrow\Delta(K)$ such that $\forall f:K\rightarrow K$, $fix(f)$ is supported on fixed points of $f$? Here $K^K$ is given the <a href="https://en.wikipedia.org/wiki/Compact-open_topology" rel="nofollow noreferrer">compact-open topology</a> and $\Delta(K)$ is the space of probability distributions over $K$, equipped with the <a href="https://en.wikipedia.org/wiki/Convergence_of_measures#Weak_convergence_of_measures" rel="nofollow noreferrer">weak topology</a>.</p>
| Martin Kell | 123,897 | <p>There can be no such function even in the category of smooth functions. </p>
<p>Here is an example for functions $f:[0,1]\to [0,1]$ with $f(0)=\epsilon$ and $f(1)=1-\epsilon$: </p>
<p>(1) Let $\operatorname{id}:x\mapsto x$ and choose a smooth function $f_0$ with $f_0=\operatorname{id}$ on $[\frac{1}{3},\frac{2}{3}]$, $f_0>\operatorname{id}$ on $[0,\frac{1}{3})$ and $f_0<\operatorname{id}$ $(\frac{2}{3},1]$. </p>
<p>(2) There is a sequence of smooth functions $f_n$ converging to $f_0$ with $f_n(\frac{1}{3})=\frac{1}{3}$, $f_n>\operatorname{id}$ on $[0,\frac{1}{3})$ and $f_n<\operatorname{id}$ on $(\frac{1}{3},1]$.</p>
<p>(3) There is a sequence of smooth functions $g_n$ converging to $f_0$ with $g_n(\frac{2}{3})=\frac{2}{3}$, $g_n>\operatorname{id}$ on $(0,\frac{2}{3})$ and $g_n<\operatorname{id}$ on $(\frac{2}{3},1]$.</p>
<p>Each of the two sequences has a unique fixed point, equal to $\frac{1}{3}$ and $\frac{2}{3}$ respectively. Thus there are exactly two distinct probability measures associated to the fixed point sets of $f_n$ and of $g_n$. Obviously the limits of those probability measures do not agree. In particular, any choice of $fix$ cannot be continuous at $f_0$. </p>
|
3,497,328 | <p>I'm stuck. Can I get a hint? I heard the answer is zero.</p>
<p>I'm guessing we use the SSA congruent triangle theorem. </p>
<p>If <span class="math-container">$m∠A = 50°$</span>, side <span class="math-container">$a = 6$</span> units, and side <span class="math-container">$b = 10$</span> units, what is the maximum number of distinct triangles that can be constructed? </p>
| N. F. Taussig | 173,070 | <p>The <a href="https://en.wikipedia.org/wiki/Law_of_sines" rel="nofollow noreferrer">Law of Sines</a> states that
<span class="math-container">$$\frac{a}{\sin\alpha} = \frac{b}{\sin\beta} = \frac{c}{\sin\gamma}$$</span>
Therefore,
<span class="math-container">$$\sin\beta = \frac{b\sin\alpha}{a} = \frac{10\sin(50^\circ)}{6}$$</span>
What do you know about the range of the sine function?</p>
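<p>Numerically, the right-hand side exceeds $1$, which is the point of the hint (quick check, with the angle in degrees as stated):</p>

```python
import math

sin_beta = 10 * math.sin(math.radians(50)) / 6
assert sin_beta > 1   # impossible for a real angle, so zero triangles
```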
|
286,852 | <p>I'm currently trying to understand time continuous Fourier transformations for my Signal Analysis class (Electrical Engineering major). In my Textbook we have a set of "useful FT-pairs", one of which being
$x(t)<=FT=>X(\omega)$</p>
<p>$\sin(\omega_0*t)<=>i\pi[\delta(\omega+\omega_0)-\delta(\omega-\omega_0)]$</p>
<p>I'm trying to derive this on my own, but I keep running into the same dead end. I replace the sine with $\frac 1{2i}(e^{i\omega_0 t}-e^{-i\omega_0 t})$ and drop that into $\int f(t)\, e^{-i\omega t}\,dt$. Multiplying the functions together by adding the exponents and then finding the anti-derivative isn't hard, but I don't understand how you get from that to the $\delta$ functions. I played around on Wolfram Alpha and found out that apparently $\int_{-\infty}^{\infty} e^{it(\omega + \omega_0)}\,dt$ doesn't converge except for $\omega = -\omega_0$, and then it diverges to infinity. This seems close to the behavior of the delta function, but I don't think that's enough to justify replacing them?</p>
<p>Could <em>really</em> use some help understanding this.</p>
| Dirk | 3,148 | <p>Your observations are right. The Fourier transform of $x\mapsto\sin(ax)$ is not defined by the integral but rather in a <em>weak sense</em> or in the sense of distributions (http://en.wikipedia.org/wiki/Distribution_(mathematics)#Tempered_distributions_and_Fourier_transform). This is also explained on the Wikipedia page on the Fourier transform (http://en.wikipedia.org/wiki/Fourier_transform#Tempered_distributions).</p>
|
1,926,593 | <p>I am currently taking a course in Discrete Math. The first part of our lesson this week is regarding sequences. I am stuck on formulas like the ones shown in the images I attached... I was hoping someone might be able to help me learn how to solve them. :)</p>
<p>Ps: What does it mean when <span class="math-container">$n-1$</span> is written below the function? Is it the inverse of <span class="math-container">$a^{n-1}$</span>?</p>
<p>I'm sorry if this is a dumb question haha. I've been studying every day for the past 3 weeks and my brain is officially exhausted. Thank-you so much for your time!</p>
<blockquote>
<p>Given <span class="math-container">$a_n = 3 a_{n-1} + 1$</span> and <span class="math-container">$a_0 = 2$</span>, compute <span class="math-container">$a_2$</span>.</p>
<p>Given the recurrence relation <span class="math-container">$a_n = -2a_{n-1}$</span> where <span class="math-container">$a_0 = 1$</span>, find <span class="math-container">$a_5$</span>.</p>
</blockquote>
| JMP | 210,189 | <p>For the inductive step, assume:</p>
<p>$$\sum_{i=0}^n \dbinom{i}{m} = \dbinom{n+1}{m+1}$$</p>
<p>Add $\dbinom{n+1}{m}$ to both sides to give:</p>
<p>$$\sum_{i=0}^{n+1} \dbinom{i}{m} = \dbinom{n+1}{m}+\dbinom{n+1}{m+1}=\dbinom{n+2}{m+1}$$</p>
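<p>A brute-force check of the hockey-stick identity used above (here <code>comb(i, m)</code> is $0$ when $i < m$, matching the convention in the sum):</p>

```python
from math import comb

# sum_{i=0}^{n} C(i, m) = C(n+1, m+1)
for n in range(0, 12):
    for m in range(0, n + 1):
        assert sum(comb(i, m) for i in range(n + 1)) == comb(n + 1, m + 1)
```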
|
2,811,908 | <p>I have this monster as the part of a longer calculation. My goal would be to somehow make it... nicer.</p>
<p>Intuitively, I would try to somehow utilize the derivate and the integral against each other, but I have no idea, exactly how.</p>
<p>I suspect, this might be a relative common problem, i.e. if we want to derivate a parametrized definite integral against one of its parameters. Maybe it has even some trick... or methodology to handle it.</p>
<p>Of course it is not a problem if the integral remains. The primary goal would be in this stage, to eliminate or significantly simplify the derivative operator.</p>
| GEdgar | 442 | <p>This is probably in your calculus textbook ... under reasonable conditions,</p>
<p>$$
\frac{d}{db}\int_0^1e^{bx}f(x)dx = \int_0^1 \frac{\partial}{\partial b}e^{bx}f(x)dx = \int_0^1 x e^{bx}f(x)dx
$$</p>
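<p>A numeric sanity check of differentiation under the integral sign, with an arbitrary choice $f(x)=x^2$ (the midpoint rule and step sizes below are my own choices, purely for illustration):</p>

```python
import math

def f(x):
    return x * x   # arbitrary illustrative choice of f

def I(b, n=4000):
    # midpoint rule for the integral of e^{b x} f(x) over [0, 1]
    h = 1.0 / n
    return h * sum(math.exp(b * (i + 0.5) * h) * f((i + 0.5) * h)
                   for i in range(n))

def J(b, n=4000):
    # midpoint rule for the integral of x e^{b x} f(x) over [0, 1]
    h = 1.0 / n
    return h * sum((i + 0.5) * h * math.exp(b * (i + 0.5) * h) * f((i + 0.5) * h)
                   for i in range(n))

b, eps = 0.7, 1e-5
numeric_derivative = (I(b + eps) - I(b - eps)) / (2 * eps)  # d/db by central difference
assert abs(numeric_derivative - J(b)) < 1e-6
```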
|
2,806,164 | <p>I recently came across a question that asked for the derivative of $e^x$ with respect to $y$. I answered $\frac{d}{dy}e^x$ but the answer was $e^x\frac{dx}{dy}$. How is that the answer? I am confused.</p>
| peterh | 111,704 | <p>Informally, the question understood it in the sense of <em>how the $y$ coordinate of the curve $(x;e^x)$ changes for infinitesimally small changes of the $x$ coordinate</em>. This is the common interpretation of the derivative, but it is not the only one. You probably got this question in differential calculus. Your answer was correct, but missed exactly this little important detail.</p>
|
2,806,164 | <p>I recently came across a question that asked for the derivative of $e^x$ with respect to $y$. I answered $\frac{d}{dy}e^x$ but the answer was $e^x\frac{dx}{dy}$. How is that the answer? I am confused.</p>
| mr_e_man | 472,818 | <p>The question you have been asked is a bad question, if you weren't given more information.</p>
<p>If $e^x$ is considered a function of two (independent) variables $x$ and $y$, then "derivative" probably means "partial derivative", and $\frac{\partial}{\partial y}e^x=0$.</p>
<p>If there is some relation between $x$ and $y$, then the chain rule applies.</p>
<p>$$\frac{d}{dy}(e^x)=\frac{d}{dx}(e^x)\cdot\frac{dx}{dy}=e^x\frac{dx}{dy}$$</p>
<p>And your answer $\frac{d}{dy}e^x$ is technically correct, but it's basically restating the question.</p>
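<p>A quick numeric illustration of the chain rule here, with an assumed relation $x(y)=y^2$ chosen purely for the demo:</p>

```python
import math

def x(y):
    return y * y   # an assumed relation x(y), chosen purely for the demo

def dxdy(y):
    return 2 * y   # its derivative dx/dy

y0, eps = 0.5, 1e-6
# central-difference estimate of d/dy e^{x(y)} at y0
numeric = (math.exp(x(y0 + eps)) - math.exp(x(y0 - eps))) / (2 * eps)
chain = math.exp(x(y0)) * dxdy(y0)   # e^x * dx/dy
assert abs(numeric - chain) < 1e-6
```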
|
4,480,276 | <p>[This question is rather a very easy one which I found to be a little bit tough for me to grasp. If there is any other question that has been asked earlier which addresses the same topic then kindly link this here as I am unable to find such questions by far.]</p>
<p>let <span class="math-container">$f(x)=x.$</span> If we are to find limit of it at <span class="math-container">$a$</span> by the definition <span class="math-container">$|f(x)-a|<\epsilon \implies |x-a|<\epsilon$</span>. Again <span class="math-container">$|x-a|<\delta$</span>. Then how does it prove that <span class="math-container">$\epsilon=\delta$</span>. Can't <span class="math-container">$\delta$</span> be larger or smaller than <span class="math-container">$\epsilon$</span>?</p>
<p>N.B: I am self learning limits from Calculus Early Transcendentals by James Stewart. I could reach upto the lesson "Precise Definition of Limits" which involves such a problem and didn't explain much about the explained problem</p>
| Átila Correia | 953,679 | <p>I think it is worth working with the definition of limit.</p>
<p>Let <span class="math-container">$f:X\to Y$</span> be a real-valued function with real domain s.t. <span class="math-container">$a\in\mathbb{R}$</span> is an accumulation point of <span class="math-container">$X$</span>.</p>
<p>We say the limit of <span class="math-container">$f$</span> at <span class="math-container">$a$</span> equals <span class="math-container">$L$</span> iff
<span class="math-container">\begin{align*}
(\forall\varepsilon\in\mathbb{R}_{>0})(\exists\delta_{\varepsilon}\in\mathbb{R}_{>0})(\forall x\in X)(0 < |x - a| < \delta_{\varepsilon} \Rightarrow |f(x) - L| < \varepsilon)
\end{align*}</span></p>
<p>In the present case, <span class="math-container">$f(x) = x$</span>.</p>
<p>Hence, if we suppose that <span class="math-container">$0 < |x - a| < \delta_{\varepsilon}$</span> (which is a function of <span class="math-container">$\varepsilon$</span> unknown so far), we get:
<span class="math-container">\begin{align*}
0 < |x - a| < \delta_{\varepsilon} \Rightarrow |f(x) - a| = |x - a| < \delta_{\varepsilon} := \varepsilon
\end{align*}</span></p>
<p>So (as previously mentioned by the other users), given <span class="math-container">$\varepsilon > 0$</span>, it suffices to take <span class="math-container">$\delta_{\varepsilon} = \varepsilon$</span>.</p>
<p>Hopefully this helps!</p>
|
1,648,354 | <p>We have been taught that for linear functions, usually expressed in the form $y=mx+b$, when given inputs of 0, 1, 2, 3, etc., you can get from one output to the next by adding some constant (in this case, 1).
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 3
\end{array}
$$</p>
<p>But with exponential functions (which are usually expressed in the form $y=a\cdot b^x $), instead of adding a constant, you multiply by a constant. (In this case, 2)
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 8
\end{array}
$$
But... we can keep going, can't we?
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 16\\
4 & 256
\end{array}
$$
In this example, you square the last output to get to the next one. I cannot find a 'general form' for such an equation, nor can I find much information online. Is there a name for these functions? Is there a general form for them? And can we keep going even past these 'super exponential functions'?</p>
| Brady Gilg | 188,927 | <p>Each of your tables is found by taking 2 to the power of the previous table (possibly with a shift by 1). As the formula for your second table is $2^x$, the formula for your third table is $2^{2^{x-1}}$ (for $x \geq 1$).</p>
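<p>A quick check of the proposed formula against the table values:</p>

```python
table = {0: 1, 1: 2, 2: 4, 3: 16, 4: 256}

# proposed closed form 2^(2^(x-1)), valid for x >= 1
for x in range(1, 5):
    assert 2 ** (2 ** (x - 1)) == table[x]

# the question's "square the last output" rule, from x = 2 on
for x in range(2, 5):
    assert table[x] == table[x - 1] ** 2
```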
|
2,253,676 | <p>$f(x)$ continuous in $[0,\infty)$. Both $\int^\infty_0f^4(x)dx, \int^\infty_0|f(x)|dx$ converge. </p>
<p>I need to prove that $\int^\infty_0f^2(x)dx$ converges.</p>
<p>Knowing that $\int^\infty_0|f(x)|dx$ converges, I can tell that $\int^\infty_0f(x)dx$ converges.</p>
<p>Other than that, I don't know anything. I tried to maybe integrate by parts but I got to nowhere</p>
<p>$\int^\infty_0f^4(x)dx = \int^\infty_0f^2(x)|f(x)|\cdot |f(x)| dx$.. </p>
| Angina Seng | 436,618 | <p>By AM/GM
$$3f(x)^2\le f(x)^4+|f(x)|+|f(x)|.$$</p>
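<p>A pointwise numeric check of this AM/GM bound (the sample grid is arbitrary; the bound holds for every real value of $f$):</p>

```python
# check 3 f^2 <= f^4 + |f| + |f| on a grid of sample values
for i in range(-400, 401):
    f = i / 100.0
    assert 3 * f * f <= f**4 + 2 * abs(f) + 1e-9   # tolerance for float rounding
```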
|
2,253,676 | <p>$f(x)$ continuous in $[0,\infty)$. Both $\int^\infty_0f^4(x)dx, \int^\infty_0|f(x)|dx$ converge. </p>
<p>I need to prove that $\int^\infty_0f^2(x)dx$ converges.</p>
<p>Knowing that $\int^\infty_0|f(x)|dx$ converges, I can tell that $\int^\infty_0f(x)dx$ converges.</p>
<p>Other than that, I don't know anything. I tried to maybe integrate by parts but I got to nowhere</p>
<p>$\int^\infty_0f^4(x)dx = \int^\infty_0f^2(x)|f(x)|\cdot |f(x)| dx$.. </p>
| zhw. | 228,045 | <p>Let $A = \{|f|\le 1\}, B = \{|f|>1 \}.$ Then $\int_A f^2 \le \int_A |f|<\infty$ and $\int_B f^2 \le \int_B f^4<\infty.$ The result follows.</p>
|
2,013,382 | <blockquote>
<p>If $AX=\lambda X$ for every eigenvector $X\in \mathbb{R}^n$ then show that $A=\lambda I_n$.</p>
</blockquote>
<p>Given, $AX=\lambda X$ for every $X\in \mathbb{R}^n$.</p>
<p>Then $(A-\lambda I_n)X=0$ is true for every $X\in \mathbb{R}^n$.</p>
<p>But how to show $A=\lambda I_n$? Please help.</p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>If $\vec e_i$ is the vector of the standard basis with components $\delta_{i,j}$, then
$A\vec e_i$ is the column $i$ of the matrix $A$.</p>
<p>And we have $A\vec e_i=\lambda \vec e_i$, whose components are $\lambda \delta_{i,j}$, so....</p>
<hr>
<p>For $\vec e_1=[1,0,0]^T$ we have
$$
A\vec e_1=\begin{bmatrix}
a_{11}&a_{12}&a_{13}\\
a_{21}&a_{22}&a_{23}\\
a_{31}&a_{32}&a_{33}\\
\end{bmatrix}
\begin{bmatrix}
1\\0\\0
\end{bmatrix}=
\lambda\begin{bmatrix}
1\\0\\0
\end{bmatrix}
\iff
$$
$$
\begin{bmatrix}
a_{11}\\a_{21}\\a_{31}
\end{bmatrix}=
\begin{bmatrix}
\lambda\\0\\0
\end{bmatrix}
$$
and we can do the same for the other vectors of the basis.</p>
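<p>The fact that $A\vec e_1$ picks out the first column of $A$ is easy to check numerically (the matrix entries below are arbitrary):</p>

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
e1 = [1, 0, 0]

# matrix-vector product A e1
Ae1 = [sum(A[i][j] * e1[j] for j in range(3)) for i in range(3)]
assert Ae1 == [A[i][0] for i in range(3)]   # == [1, 4, 7], column 1 of A
```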
|
3,447,872 | <p>Subtracting both equations
<span class="math-container">$$x(a-b)+b-a=0$$</span>
<span class="math-container">$$(x-1)(a-b)=0$$</span>
Since <span class="math-container">$a\not = b$</span>
<span class="math-container">$$x=1$$</span>
Substitution of x gives
<span class="math-container">$$1+a+b=0$$</span> which is contradictory to the question. What did I do wrong?</p>
<p><em>There is a solution for this question that proves the required condition satisfactorily, but I want to the know reason behind this contradiction.</em></p>
| Elke Hennig | 728,267 | <p>You assume that both equations have the same solution. This is not stated in the problem. Only the difference of the two solutions to each equation is identical.</p>
|
3,447,872 | <p>Subtracting both equations
<span class="math-container">$$x(a-b)+b-a=0$$</span>
<span class="math-container">$$(x-1)(a-b)=0$$</span>
Since <span class="math-container">$a\not = b$</span>
<span class="math-container">$$x=1$$</span>
Substitution of x gives
<span class="math-container">$$1+a+b=0$$</span> which is contradictory to the question. What did I do wrong?</p>
<p><em>There is a solution for this question that proves the required condition satisfactorily, but I want to the know reason behind this contradiction.</em></p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>You have wrongly assumed a common root; only then does the subtraction make sense.</p>
<p>If <span class="math-container">$p,q;(p\ge q)$</span> are the roots of <span class="math-container">$$ct^2+dt+e=0$$</span></p>
<p><span class="math-container">$p-q=\dfrac{-d+\sqrt{d^2-4ce}-(-d-\sqrt{d^2-4ce)}}2=?$</span></p>
<p>We need <span class="math-container">$$\sqrt{a^2-4\cdot1\cdot b}=\sqrt{b^2-4\cdot1\cdot a}$$</span></p>
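<p>A quick numeric check of the root-difference formula in the case $c=1$ (the sample coefficients are arbitrary):</p>

```python
import math

# roots of t^2 + d t + e (monic, c = 1); difference p - q = sqrt(d^2 - 4e)
d, e = -7.0, 10.0                 # arbitrary sample with real roots 5 and 2
disc = math.sqrt(d * d - 4 * e)   # sqrt(49 - 40) = 3
p = (-d + disc) / 2
q = (-d - disc) / 2
assert (p, q) == (5.0, 2.0)
assert p - q == disc
```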
|
3,301,387 | <p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I root both sides I end up with Z=Z+1. Why is this a bad first step when dealing with complex numbers?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>if so then it would be
<span class="math-container">$$z^6+6z^5+15z^4+20z^3+15z^2+6z+1=z^6$$</span>
or factorized
<span class="math-container">$$ \left( 2\,z+1 \right) \left( {z}^{2}+z+1 \right) \left( 3\,{z}^{2}+
3\,z+1 \right)
=0$$</span></p>
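<p>Since both sides are degree-$5$ polynomials, agreement at six or more points already proves the factorization; a quick check at seven integer points (exact arithmetic):</p>

```python
def lhs(z):
    return (z + 1)**6 - z**6

def rhs(z):
    return (2 * z + 1) * (z * z + z + 1) * (3 * z * z + 3 * z + 1)

for z in range(-3, 4):   # 7 integer sample points
    assert lhs(z) == rhs(z)
```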
|
286,082 | <p><a href="https://en.wikipedia.org/wiki/Budan's_theorem" rel="nofollow noreferrer">Budan's theorem</a> gives an upper bound for the number of real roots of a real polynomial in a given interval $(a,b)$. This bound is not sharp (see the example in Wikipedia).</p>
<p>My question is the following: let us suppose that Budan's theorem tells us "there are $0$ or $2$ roots in the interval $(a,b)$" (or more generally "there are $0$, $2$, ... $2n$ roots"). Let us suppose also that there are actually no roots in the interval. Is it true that there are $2$ roots whose real part lies in the interval $(a,b)$?</p>
<p>Thank you!</p>
| Harry Richman | 81,295 | <p>You could take
$$ f(x) = x^4 + 1$$
as a counterexample. Budan's theorem would tell you there's at most 4 roots in the interval $(-\epsilon, \epsilon)$ for any positive $\epsilon$. But $f(x)$ doesn't have any roots with real part 0.</p>
<hr>
<p>Also $x^3 + 1$ works for the same reason. But degree 3 is minimal -- something quadratic like $x^2 + 1$ gives you the wrong intuition.</p>
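<p>The sign counting can be made concrete with a short Python sketch (my addition; it drops zero values from the derivative sequence, the usual convention):</p>

```python
# Budan's count for f(x) = x^4 + 1 on (-eps, eps)
def deriv(p):                       # p holds coefficients in ascending powers
    return [k * a for k, a in enumerate(p)][1:]

def evalp(p, x):
    return sum(a * x ** k for k, a in enumerate(p))

def variations(p, x):
    signs = []
    while p:
        v = evalp(p, x)
        if v != 0:                  # drop zeros, per the usual convention
            signs.append(v)
        p = deriv(p)
    return sum(1 for u, w in zip(signs, signs[1:]) if u * w < 0)

f = [1, 0, 0, 0, 1]                 # x^4 + 1
eps = 1e-3
budan_bound = variations(f, -eps) - variations(f, eps)
```

<p>Here <code>budan_bound</code> is $4$, while $x^4+1$ has no real roots in $(-\epsilon,\epsilon)$ and no roots with real part $0$.</p>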
|
1,377,927 | <p>Prove using mathematical induction that
$(x^{2n} - y^{2n})$ is divisible by $(x+y)$.</p>
<p><strong>Step 1:</strong> Proving that the equation is true for $n=1 $</p>
<p>$(x^{2\cdot 1} - y^{2\cdot 1})$ is divisible by $(x+y)$ </p>
<p><strong>Step 2:</strong> Taking $n=k$</p>
<p>$(x^{2k} - y^{2k})$ is divisible by $(x+y)$</p>
<p><strong>Step 3:</strong> proving that the above equation is also true for $(k+1)$</p>
<p>$(x^{2k+2} - y^{2k+2})$ is divisible by $(x+y)$.</p>
<p>Can anyone assist me what would be the next step? Thank You in advance!</p>
| Bernard | 202,857 | <p><strong>Hint:</strong> Rewrite:
$$x^{2k+2}-y^{2k+2}=(x^{2k+2}-y^{2k}x^2)+(y^{2k}x^2-y^{2k+2})=x^2(x^{2k}-y^{2k})+y^{2k}(x^2-y^2).$$</p>
<p><strong>Added:</strong> Note it can be proved without induction:
$$x^{2n}-y^{2n}=(x^2)^n-(y^2)^n=(x^2-y^2)(x^{2(n-1)}+x^{2(n-2)}y^2+\dots+x^2y^{2(n-2)}+y^{2(n-1)}),$$
and the first factor is divisible by $x+y$.</p>
|
1,377,927 | <p>Prove using mathematical induction that
$(x^{2n} - y^{2n})$ is divisible by $(x+y)$.</p>
<p><strong>Step 1:</strong> Proving that the equation is true for $n=1 $</p>
<p>$(x^{2\cdot 1} - y^{2\cdot 1})$ is divisible by $(x+y)$ </p>
<p><strong>Step 2:</strong> Taking $n=k$</p>
<p>$(x^{2k} - y^{2k})$ is divisible by $(x+y)$</p>
<p><strong>Step 3:</strong> proving that the above equation is also true for $(k+1)$</p>
<p>$(x^{2k+2} - y^{2k+2})$ is divisible by $(x+y)$.</p>
<p>Can anyone assist me what would be the next step? Thank You in advance!</p>
| Harish Chandra Rajpoot | 210,295 | <p>Step 1: putting $n=1$, we get $$x^{2n}-y^{2n}=x^2-y^2=(x-y)(x+y)$$ The above number is divisible by $(x+y)$. Hence the statement is true for $n=1$ </p>
<p>step 2: assuming that for $n=m$, $(x^{2n}-y^{2n})$ is divisible by $(x+y)$ then we have $$(x^{2m}-y^{2m})=k(x+y) \tag 1$$
where $k$ is an integer. </p>
<p>step 3: putting $n=m+1$ we get $$x^{2(m+1)}-y^{2(m+1)}$$ $$=x^{2m+2}-y^{2m+2}$$
$$=x^{2m+2}-x^{2m}y^2-y^{2m+2}+x^{2m}y^2$$ $$=(x^{2m+2}-x^{2m}y^2)+(x^{2m}y^2-y^{2m+2})$$ $$=x^{2m}(x^2-y^2)+y^2(x^{2m}-y^{2m})$$ Substituting the value of $(x^{2m}-y^{2m})$ from (1), we get $$x^{2m}(x-y)(x+y)+y^2k(x+y)$$ $$=\left(x^{2m}(x-y)+ky^2\right)(x+y)$$ It is clear that the above number is divisible by $(x+y)$. Hence the statement holds for $n=m+1$</p>
<p>Thus $(x^{2n}-y^{2n})$ is divisible by $(x+y)$ for all positive integers $\color{blue}{n\geq 1}$</p>
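<p>As a purely numerical illustration of the statement (a sketch in exact integer arithmetic, not a substitute for the induction):</p>

```python
# Check that (x + y) divides x^(2n) - y^(2n) for a grid of positive integers
ok = all(
    (x ** (2 * n) - y ** (2 * n)) % (x + y) == 0
    for x in range(1, 8)
    for y in range(1, 8)
    for n in range(1, 6)
)
```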
|
91,766 | <p>Hello,
I am looking for a proof for the Chern-Gauss-Bonnet theorem. All I have found so far that I find satisfactory is a proof that the euler class defined via Chern-Weil theory is equal to the pullback of the Thom class by the zero section, but I would like a proof of the fact that this class gives the Euler characteristic when coupled to the fundamental class. Thanks in advance. </p>
| Renato G. Bettiol | 15,743 | <p>One reference that seems fairly good and that I just found by googling those key words is <a href="https://web.archive.org/web/20100524152105/http://www.math.upenn.edu/~alina/GaussBonnetFormula.pdf" rel="nofollow noreferrer">https://web.archive.org/web/20100524152105/http://www.math.upenn.edu/~alina/GaussBonnetFormula.pdf</a>.</p>
<p>The first time I learnt this, however, was with these lecture notes:
F. Mercuri, P. Piccione, D. V. Tausk, <em>Notes on Morse theory</em>, Publicações Matemáticas do IMPA, Rio de Janeiro, 2001, ISBN 85-244-0178-8; which maybe a little hard to find, but are very nicely written and I like them very much. Though, the proof you are looking for should be widely available elsewhere (google gives thousands of results, and I only looked at the first ones)...</p>
|
4,219,193 | <p>The <code>p implies q</code> statement is often described in various ways including:<br />
(1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br />
(2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p>
<p>I see the truth table for (1) as</p>
<pre><code>p | q | if p then q
-------------------
T | T | T
T | F |
F | T |
F | F |
</code></pre>
<p>I see the truth table for (2) as</p>
<pre><code>p | q | p only if q
-------------------
T | T |
T | F | F
F | T |
F | F | T
</code></pre>
<p>How are the two statements the same? What is wrong with my understanding?</p>
<hr />
<p><strong>Addenda</strong><br />
1). There are some excellent excellent answers/suggestions here but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p>
<blockquote>
<p>I think the intuitive way to think of this is if something is
contradicted then it is false but if nothing can be contradicted it is
by default true.</p>
</blockquote>
<p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
| Al Brown | 955,529 | <p>Truth table last</p>
<p>First, on necessary and sufficient:</p>
<p>If A, then B. This means that anytime you have A, you will have B. It does not say that anytime you have B, you <strong>will</strong> have A. There could be times with B but not A. If anytime you have A you have B, then B is <strong>necessary</strong> for having A. That’s because you can’t have A without B. A is <strong>sufficient</strong> for B (knowing we have A, that is sufficient to know we have B).</p>
<p>These are all the same:</p>
<p>A <span class="math-container">$\implies$</span> B</p>
<p>If A, then B,</p>
<p>B <span class="math-container">$\impliedby$</span> A</p>
<p>B is necessary for A,</p>
<p>A is sufficient for B,</p>
<p>B if A</p>
<p>A only if B</p>
<p>There is no case with A but not B; there may be cases with B but not A,</p>
<p>A implies B,</p>
<p>In a venn diagram, the circle for the cases of A lies entirely within the circle for the cases of B.</p>
<p>Whenever A, then B ,</p>
<p>We can’t have A without B.</p>
<p>And....</p>
<p><strong>A | B | statement</strong></p>
<p>T | T | T</p>
<p>T | F | F</p>
<p>F | T | T</p>
<p>F | F | T</p>
<p>Meaning the statement must be A implies B</p>
<hr />
<p>All those were the same</p>
<p><strong>p | q | p only if q</strong> (this is p implies q)</p>
<p>T | T | T</p>
<p>T | F | F</p>
<p>F | T | T</p>
<p>F | F | T</p>
<p>This just means that a row gets a T whenever it does not violate (p implies q); the only violating row is the one with p true and q false.</p>
<p>Whenever p and q are true (which is also written, whenever p and q), then we are at a point where <span class="math-container">$p \implies q$</span> is true. And also where <span class="math-container">$q \implies p$</span>. What would violate <span class="math-container">$p \implies q$</span> and make it false? Well, if we have p but don’t have q.</p>
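<p>For what it's worth, the table can also be generated mechanically; here is a small Python sketch (my addition), encoding $A \implies B$ as <code>(not A) or B</code>:</p>

```python
from itertools import product

# Material implication: A -> B is false only in the row (A=True, B=False)
rows = [(a, b, (not a) or b) for a, b in product([True, False], repeat=2)]
```

<p>The four rows reproduce the truth table above.</p>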
|
533,628 | <p>I'm learning how to take indefinite integrals with U-substitutions on <a href="https://www.khanacademy.org/math/calculus/integral-calculus/u_substitution/v/u-substitution" rel="nofollow">khanacademy.org</a>, and in one of the videos he says that: $$\int e^{x^3+x^2}(3x^2+2x) \, dx = e^{x^3+x^2} + \text{constant}$$
I understand that the differential goes away, but not how the whole $(3x^2+2x)$ term go away together with the $dx$.</p>
| Elchanan Solomon | 647 | <p>Here is the intuition behind the proof. As you have done, we split the sum into two pieces. Each piece is controlled differently.</p>
<p>To control the first piece of the sum, we note that our sequence is bounded by some constant $M$, so that this sum is at most</p>
<p>$$M\frac{m}{n}$$</p>
<p>That is, $m$ terms of value at most $M$, multiplied by the $1/n$ in front.</p>
<p>To control the second part of the sum, we note that our sequence $a_n$ is getting close to $a$. Thus, if we choose $m$ big enough, the $|a_n - a|$ term is less than $\epsilon /2$ for $n>m$. This means that the second sum is at most</p>
<p>$$\frac{\epsilon}{2}\frac{n-m}{n} \leq \frac{\epsilon}{2}$$</p>
<p>The last observation is that if $n$ is chosen very large, we can also have</p>
<p>$$M\frac{m}{n} \leq \frac{\epsilon}{2}$$</p>
<p>Putting these together gives us our bound.</p>
<p>To recap: We want to split our sum into a piece with a bounded number of terms and another piece with a growing number of terms. On the piece with a bounded number of terms, we use the bound on the sequence. On the piece with a growing number of terms, we just make sure to start far out enough that the terms are small. Finally, we shrink $\frac{1}{n}$ to make both terms as small as we please.</p>
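<p>A numerical illustration of this averaging argument (my own sketch, using $a_k = a + 1/k$, which converges to $a$): the Cesàro mean lands close to $a$, with error $H_n/n \approx \log(n)/n$.</p>

```python
# Cesàro mean of a_k = a + 1/k; the error behaves like log(n)/n
a = 3.0
n = 100_000
mean = sum(a + 1.0 / k for k in range(1, n + 1)) / n
```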
|
2,629,115 | <blockquote>
<p>We will define the convolution (slightly unconventionally to match Rudin's proof) of $f$ and $g$ as follows:
$$(f\star g)(x)=\int_{-1}^1f(x+t)g(t)\,dt\qquad(0\le x\le1)$$</p>
<ol>
<li>Let $\delta_n(x)$ be defined as $\frac n2$ for $-\frac1n<x<\frac1n$ and 0 for all other $x$. Let $f(x)$ be defined as $x$ for $.4<x<.7$ and let $f(x)=0$ for all other $x$. Find a piecewise algebraic expression for $f\star\delta_{10}$ and graph $f\star\delta_{10}$. Repeat the exercise for $f\star\delta_{20}$. In what sense does $f\star\delta_n$ converge on $[0,1]$ and to what function does it converge?</li>
</ol>
</blockquote>
<p>Hello everyone,
I just need help finding a piecewise algebraic expression for $f* \delta_{ 10}$. I think I should be able to figure out everything else in the question once I know how to do this. </p>
<p>Thoughts/things I know (how to do):
Delta 10 is defined as $5$ for $-1/10<x<1/10$ and $0$ otherwise.
$f(x)=x$ for $0.4<x<0.7$ and $0$ otherwise. I know how both graphs look like visually. I guess you could say there is a jump in the graph of f(x) at 0.4 and 0.7 and a jump in the graph of $\delta_{10}$ at $-1/10$ and $1/10$.</p>
<p>Now confusions: I believe that perhaps I will have to split up the problem into several cases/integrals as both f(x) and delta are piecewise. The main problem in this question is I don't understand what the t represents, so I don't know how to set up my bounds, as my bounds are in terms of t (as we integrate with respect to $dt$). I believe that the integral seemingly from the definition is only valid for $0\leq x\leq 1$ inclusive. Also if I'm doing $f* \delta_{10}$, let's say for x between $0.4<x<0.7$, then wouldn't $f(x+t)=x+t$ and $g(t)=0$? Overall, I think I'm confused so I could really use some guidance on this problem for the first case $f*\delta_{10}$, then I believe I could figure out the rest.
Thank you!</p>
| Quoka | 410,062 | <p>By definition of convolution and $\delta_{10}$, we have:
\begin{align*}
\left(f*\delta_{10}\right)(x) = \int_{-1}^1 f(x+t)\delta_{10}(t)\, dt
=5\int_{-1/10}^{1/10} f(x+t)\, dt
=5\int_{x-1/10}^{x+1/10} f(t)\, dt
\end{align*}
Now, $f(t) = 0$ outside of the interval $(0.4, 0.7)$. Thus, the above integral evaluates to $0$ if either $x-1/10 \geq 0.7 \iff x\geq 0.8$ or $x+1/10 \leq 0.4 \iff x\leq 0.3$. We are therefore only concerned with the case when $x\in (0.3, 0.8)$. Assuming that $x$ is in that interval, and recalling that $f(t) = t$ on $(0.4, 0.7)$, we have
\begin{align*}
\int_{x-1/10}^{x+1/10} f(t)\, dt
&=\int_{\max\{x-1/10,\, 0.4\}}^{\min\{x+1/10,\, 0.7\}} f(t)\, dt
=\int_{\max\{x-1/10,\, 0.4\}}^{\min\{x+1/10,\, 0.7\}} t\, dt\\
&=\tfrac{1}{2}\left(\min\{x+1/10,\, 0.7\}^2 - \max\{x-1/10,\, 0.4\}^2\right)
\end{align*}
We conclude that
$$
\left(f*\delta_{10}\right)(x) = \begin{cases}
\tfrac{5}{2}\left(\min\{x+1/10,\, 0.7\}^2 - \max\{x-1/10,\, 0.4\}^2\right) &\text{if } x\in(0.3, 0.8) \\
0 &\text{otherwise}
\end{cases}
$$
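<p>A direct numerical evaluation of the definition (a sketch of mine; it approximates $5\int_{x-1/10}^{x+1/10} f(t)\,dt$ with a midpoint rule, using $f(t)=t$ on $(0.4,0.7)$ as in the question) can be used to spot-check values:</p>

```python
def f(t):
    return t if 0.4 < t < 0.7 else 0.0

def conv(x, m=20000):
    # midpoint rule for 5 * integral of f over [x - 1/10, x + 1/10]
    a, b = x - 0.1, x + 0.1
    h = (b - a) / m
    return 5.0 * h * sum(f(a + (i + 0.5) * h) for i in range(m))

val = conv(0.5)   # exact value: (5/2)(0.6^2 - 0.4^2) = 0.5
```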
|
3,766,146 | <p>Show that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<hr />
<p><strong>My attempt:</strong></p>
<p>We build a Riemann sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=\frac{n}{N}+1,\,\,\,n\in\mathbb{N}_0$</span></p>
<p>That gives us:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\sum\limits_{n=1}^N \left(\frac{n}{N}+1-\left(\frac{n-1}{N}+1\right)\right)\frac{1}{\frac{n}{N}+1}=\sum\limits_{n=1}^N \frac{1}{N}\frac{N}{N+n}=\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p>We know from the definition, that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\int\limits_1^2 \frac{dx}{x}$$</span></p>
<p>Now we show that,</p>
<p><span class="math-container">$$\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<p>First we choose another Riemann sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=2^{\frac{n}{N}},\,\,\,n\in\mathbb{N}_0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\sum\limits_{n=1}^N 2^{\frac{1}{N}}-1=N\left(2^{\frac{1}{N}}-1\right)$$</span></p>
<p>Since we know that (with <span class="math-container">$x \in \mathbb{R})$</span>:</p>
<p><span class="math-container">$$\lim\limits_{x\rightarrow0}\frac{2^x-1}{x}=\ln(2)\Longrightarrow \lim\limits_{x\rightarrow \infty}x(2^{\frac{1}{x}}-1)=\ln(2)\Longrightarrow \lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\ln(2)$$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\ln(2)=\lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\lim\limits_{N\rightarrow \infty}\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\int\limits_1^2 \frac{dx}{x}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p><span class="math-container">$\Box$</span></p>
<hr />
<p>Hey it would be great, if someone could check my reasoning (if its correct) and give me feedback and tips :)</p>
| Claude Leibovici | 82,404 | <p><em>Without Riemann sums.</em>
<span class="math-container">$$S_N=\sum\limits_{n=1}^N\frac{1}{N+n}=H_{2 N}-H_N$$</span> Using the asymptotics of harmonic numbers
<span class="math-container">$$S_N=\log (2)-\frac{1}{4 N}+\frac{1}{16 N^2}+O\left(\frac{1}{N^4}\right)$$</span></p>
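<p>This expansion is easy to confirm numerically (my own sketch):</p>

```python
import math

# S = sum_{n=1}^{N} 1/(N+n) = H_{2N} - H_N
N = 1000
S = sum(1.0 / (N + n) for n in range(1, N + 1))
approx = math.log(2) - 1 / (4 * N) + 1 / (16 * N ** 2)
```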
|
3,766,146 | <p>Show that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<hr />
<p><strong>My attempt:</strong></p>
<p>We build a Riemann sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=\frac{n}{N}+1,\,\,\,n\in\mathbb{N}_0$</span></p>
<p>That gives us:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\sum\limits_{n=1}^N \left(\frac{n}{N}+1-\left(\frac{n-1}{N}+1\right)\right)\frac{1}{\frac{n}{N}+1}=\sum\limits_{n=1}^N \frac{1}{N}\frac{N}{N+n}=\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p>We know from the definition, that:</p>
<p><span class="math-container">$$\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\int\limits_1^2 \frac{dx}{x}$$</span></p>
<p>Now we show that,</p>
<p><span class="math-container">$$\int\limits_1^2 \frac{dx}{x}=\ln(2)$$</span></p>
<p>First we choose another Riemann sum with:</p>
<p><span class="math-container">$1=x_0<x_1<...<x_{N-1}<x_N=2$</span></p>
<p><span class="math-container">$x_n:=2^{\frac{n}{N}},\,\,\,n\in\mathbb{N}_0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^N(x_n-x_{n-1})\frac{1}{x_n}=\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\sum\limits_{n=1}^N 2^{\frac{1}{N}}-1=N\left(2^{\frac{1}{N}}-1\right)$$</span></p>
<p>Since we know that (with <span class="math-container">$x \in \mathbb{R})$</span>:</p>
<p><span class="math-container">$$\lim\limits_{x\rightarrow0}\frac{2^x-1}{x}=\ln(2)\Longrightarrow \lim\limits_{x\rightarrow \infty}x(2^{\frac{1}{x}}-1)=\ln(2)\Longrightarrow \lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\ln(2)$$</span></p>
<p>We get:</p>
<p><span class="math-container">$$\ln(2)=\lim\limits_{N\rightarrow \infty}N(2^{\frac{1}{N}}-1)=\lim\limits_{N\rightarrow \infty}\sum\limits_{n=1}^N\left(2^{\frac{n}{N}}-2^{\frac{n-1}{N}}\right)\frac{1}{2^{\frac{n-1}{N}}}=\int\limits_1^2 \frac{dx}{x}=\lim\limits_{N\rightarrow\infty}\sum\limits_{n=1}^N\frac{1}{N+n}$$</span></p>
<p><span class="math-container">$\Box$</span></p>
<hr />
<p>Hey it would be great, if someone could check my reasoning (if its correct) and give me feedback and tips :)</p>
| Community | -1 | <p>Your reasoning is correct, but you make it more complicated than needed.</p>
<p><span class="math-container">$$\frac1N\sum_{n=1}^N\frac1{1+\dfrac nN}\to\int_0^1\frac{dx}{1+x}=\left.\ln(1+x)\right|_0^1.$$</span></p>
|
31,790 | <p>See following three snippet code:</p>
<pre><code>(*can compile*)
range = Range[-2, 2, 0.005];
Compile[{},
With[{r = range}, Table[ArcTan[x, y], {x, r}, {y, r}]]][];
(*can compile*)
Compile[{},
With[{r = Range[-2, 2, 0.005 - 10^-8]}, Table[ArcTan[x, y], {x, r}, {y, r}]]][];
(*can't compile*)
Compile[{},
With[{r = Range[-2, 2, 0.005]}, Table[ArcTan[x, y], {x, r}, {y, r}]]][];
</code></pre>
<p>Why the last code can't be compiled? I used Mathematica 9.0.1.</p>
| Jens | 245 | <p>In addition to Mr. Wizard's analysis, one can also avoid the indeterminacy by replacing <code>ArcTan</code> as follows:</p>
<pre><code>Compile[{},
With[{r = Range[-2, 2, 0.005]},
Table[Arg[Complex[x, y]], {x, r}, {y, r}]]][];
</code></pre>
<p>The fact that <code>ArcTan[0,0]</code> is undefined is a real nuisance, and I never saw the point of it because that form of the function is mainly used for practical applications such as plotting, where the purely mathematical reasons against defining that special value don't really bother anyone.</p>
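<p>As a side remark outside Mathematica (a Python sketch of mine): environments that follow the C99/IEEE convention define the two-argument arctangent at the origin, which sidesteps the issue entirely:</p>

```python
import cmath
import math

# C99/IEEE convention: atan2(+0.0, +0.0) == +0.0, so nothing is indeterminate here
origin_angle = math.atan2(0.0, 0.0)
phase = cmath.phase(complex(0.0, 0.0))   # the Arg-based analogue used above
```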
|
4,396,765 | <p>What would be an efficient algorithm to determine if <span class="math-container">$n \in \mathbb{N}$</span> can be written as <span class="math-container">$n = a^b$</span> for some <span class="math-container">$a,b \in \mathbb{N}, b>1$</span>?</p>
<p>So far, I've tried:</p>
<pre><code>import math

def ispower(n):
if n<=3:
return False
LIM = math.floor(math.log2(n))
for b in range(2,LIM+1):
a = math.pow(n, 1/b)
a_floor = math.floor(a)
print(a,a_floor)
if a == a_floor:
return True
return False
</code></pre>
<p>That is, checking if the <span class="math-container">$b-th$</span> roots are integer, for <span class="math-container">$b$</span> from 2 to <span class="math-container">$LIM$</span>, where <span class="math-container">$LIM$</span> stands for the ultimate limit of n being a power of 2.</p>
<p>Thanks for your comments.</p>
| J.-E. Pin | 89,374 | <p>Since <span class="math-container">$n = n^1$</span>, every number can be written in the required form. Thus an efficient algorithm is to always answer yes.</p>
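<p>If one does insist on $b>1$, as in the question's code, a float-free variant (my own sketch; the helper name is made up) can round the floating-point root and then confirm the candidate with exact integer arithmetic, avoiding the <code>a == a_floor</code> pitfall:</p>

```python
def is_perfect_power(n):
    # True iff n == a**b for integers a >= 2, b >= 2
    # (n <= 3 is rejected, matching the question's ispower)
    if n < 4:
        return False
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        for cand in (a - 1, a, a + 1):   # guard against floating-point rounding
            if cand > 1 and cand ** b == n:
                return True
    return False
```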
|
2,411,081 | <p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
| Enrico M. | 266,764 | <p>Abel's theorem: let $g = fh$ be defined on $[a, b)$ with $b \in (a, \infty]$. If it holds that:</p>
<ul>
<li><p>$h$ is continuous, positive, decreasing and $\lim_{x \to b} h(x) = 0$</p></li>
<li><p>$f$ is locally integrable on $[a, b)$ (i.e. integrable in the neighbourhood of any point) and $F$ is bounded on $[a, b)$, where $F: [a,b) \to \Bbb R$,</p></li>
</ul>
<p>$$F(x) = \int_a^x f(t) dt$$</p>
<p>Then, $\int_a^b g(x)dx$ is convergent. </p>
<p>Use this with $a = 1$, $b = \infty$, $h(x) = 1/x^2$</p>
|
2,411,081 | <p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
| marty cohen | 13,079 | <p>For any $a, b >0$,
$(\log x)^a/x^b \to 0$
as $x \to \infty$.</p>
<p>Therefore
$\int_1^{\infty}\dfrac{(\log x)^adx}{x^{1+b}}
$
converges for
$a,b >0$
(and diverges for
$b=0$).</p>
|
2,411,081 | <p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
| Stu | 460,772 | <p>$\displaystyle \mathcal{I}=\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$</p>
<p>As $\displaystyle \lim_{x\to +\infty}\frac{\ln^5(x)}{\sqrt{x}}=0\implies \exists a>0, \text{such that } \forall x>a \quad \frac{\ln^5(x)}{\sqrt{x}}<1\iff\frac{\ln^5(x)}{x^2}<\frac{1}{x^{\frac{3}{2}}}$</p>
<p>$\displaystyle\implies \frac{\ln^5(x)}{x^2} =\mathcal{O}\left(\frac{1}{x^{\frac{3}{2}}}\right)\implies \int _{a}^\infty\frac{\ln^5(x)}{x^2}dx$ converges </p>
<p>As $\displaystyle \int _{1}^a\frac{\ln^5(x)}{x^2}\;dx$ is bounded $\implies\mathcal{I}$ converges</p>
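<p>All three answers address convergence only; as a numerical cross-check (my own sketch), the substitution $u=\ln x$ gives $\int_1^\infty \frac{\ln^5 x}{x^2}\,dx=\int_0^\infty u^5e^{-u}\,du=5!=120$, and simple quadrature agrees:</p>

```python
import math

# Composite Simpson's rule for integral_0^60 of u^5 e^(-u) du;
# the tail beyond 60 is below 1e-17, so the result should be close to 5! = 120
def g(u):
    return u ** 5 * math.exp(-u)

m, T = 6000, 60.0                       # m even
h = T / m
total = g(0.0) + g(T)
total += 4 * sum(g((2 * i - 1) * h) for i in range(1, m // 2 + 1))
total += 2 * sum(g(2 * i * h) for i in range(1, m // 2))
value = total * h / 3
```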
|
1,397,646 | <blockquote>
<p>Given $f_n:= e^{-n(nx-1)^2} $, any suggested approaches for showing that $\displaystyle \lim_{n \to \infty}\int^1_0 f_n(x)dx = 0$? </p>
</blockquote>
<p>I have already shown that $f_n \to 0$ pointwise (and not uniformly) on $[0,1]$ and have been trying to show that $$ \lim_{n \to \infty}\int^1_0 f_n(x)dx = 0$$ I know that $ \forall n \in \mathbb {N}, $ $\exists x_n = \frac {1}{n} \in [0,1]$ such that $f_n(x_n) = 1$, which has me thinking of doing something where I split the integral in such a way that isolates the interval to which the point $x_n$ belongs. Something along the lines of $$ \lim_{n \to \infty}\int^1_0 f_n(x)dx = \lim_{n \to \infty}\Big(\int^1_{x_n}f_n(x)dx + \int_0^{x_n} f_n(x)dx\Big)$$ However, this particular split doesn't help much because I can't really do anything with the first integral like I could with the second, bounding it with the sup norm and then observing that $x_n \to 0 $ as $ n \to \infty$ since $\sup_{[0,1]}|f_n| = 1$ (or so I think I can't). Any hints to other approaches I could use or to any modifications to my initial approach?</p>
<p>Thanks in advance.</p>
| Zhanxiong | 192,408 | <p>Break the interval to $[0, 1/n]$ and $[1/n, 1]$ is a good idea, but may be still too stringent, so why don't try $[0, 2/n]$ and $[2/n, 1]$?</p>
<p>On the interval $[0, 2/n]$, the integrand is bounded by $1$.</p>
<p>On the interval $[2/n, 1]$, the integrand is bounded by
$$\exp(-n(2 - 1)^2) = e^{-n}.$$
Therefore,
$$\int_0^1 f_n(x) dx \leq 1 \times \frac{2}{n} + e^{-n} \times \left(1 - \frac{2}{n}\right) \to 0$$
as $n \to \infty$. Hence the result follows.</p>
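<p>A numerical check (my own sketch, midpoint rule) shows that $\int_0^1 f_n$ is already far below the bound $2/n+e^{-n}$ for moderate $n$:</p>

```python
import math

def integral(n, m=200_000):
    # midpoint rule for integral_0^1 of exp(-n*(n*x - 1)^2) dx
    h = 1.0 / m
    return h * sum(math.exp(-n * (n * (i + 0.5) * h - 1.0) ** 2) for i in range(m))

i10, i100 = integral(10), integral(100)
```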
|
2,409,377 | <p>I'm trying to prove that if $F \simeq h_C(X)$ or "$X$ represents the functor $F$", then $X$ is unique up to unique isomorphism. I already know that if $h_C(X) \simeq F \simeq h_C(Y)$ that $s: X \simeq Y$ since Yoneda says that $h_C$ is fully faithful, so reflects isomorphisms (in either direction). If $h_C(X) \simeq h_C(Y)$ is unique then I'm done as then $\psi(s) = \psi(s')$ and so $s = s'$, where $\psi : h_C(X,Y) \to \text{Hom}_{C^{\wedge}}(h_C(X), h_C(Y))$ is the Yoneda bijection. </p>
<p>But how do I know that $h_C(X) \simeq h_C(Y)$ is unique?</p>
| D_S | 28,556 | <p>The Yoneda lemma gives you a canonical bijection $$d: \textrm{Hom}_{C}(X,Y) \rightarrow \textrm{Hom}_{\textrm{nat}}(\textrm{Hom}_C(-,X),\textrm{Hom}_C(-,Y))$$</p>
<p>Specifically, to each morphism $f: X \rightarrow Y$, associate the natural transformation $d(f): \textrm{Hom}_C(-,X) \rightarrow {Hom}_{C}(-,Y)$ which assigns to each object $T$ of $C$ the function </p>
<p>$$d(f)_T: \textrm{Hom}_C(T,X) \rightarrow \textrm{Hom}_C(T,Y)$$</p>
<p>$$ g \mapsto f \circ g$$</p>
<p>"Unique isomorphism" says in this case that if $\Phi: \textrm{Hom}_C(-,X) \rightarrow \textrm{Hom}_C(-,Y)$ is an isomorphism of functors, then there is a unique isomorphism $\phi: X \rightarrow Y$ such that $d(\phi) = \Phi$. In other words, the bijection $d$ sends nonisomorphisms to nonisomorphisms.</p>
|
2,662,792 | <p>I'm looking for a Galois extension $F$ of $\mathbb{Q}$ whose associated Galois group $\mbox{Gal}(F, \mathbb{Q})$ is isomorphic to $\mathbb{Z}_3 \oplus \mathbb{Z}_3$. I'm wondering if I should just consider some $u \notin Q$ whose minimum polynomial in $Q[x]$ has degree 9. In that case we'd have $[\mathbb{Q}(u) : \mathbb{Q}] = 9$ and thus $|\mbox{Gal}(\mathbb{Q}(u), \mathbb{Q})| = 9$ (assuming the extension is Galois). But since there are two distinct groups of order 9 (up to isomorphism), I'm not sure that this will yield the desired result. </p>
| David Lui | 445,002 | <p><a href="http://www.math.ku.edu/~dlk/abgal.pdf" rel="nofollow noreferrer">http://www.math.ku.edu/~dlk/abgal.pdf</a></p>
<p>This paper might help. It solves the inverse Galois problem for all abelian groups, and your group is abelian.</p>
|
2,662,792 | <p>I'm looking for a Galois extension $F$ of $\mathbb{Q}$ whose associated Galois group $\mbox{Gal}(F, \mathbb{Q})$ is isomorphic to $\mathbb{Z}_3 \oplus \mathbb{Z}_3$. I'm wondering if I should just consider some $u \notin Q$ whose minimum polynomial in $Q[x]$ has degree 9. In that case we'd have $[\mathbb{Q}(u) : \mathbb{Q}] = 9$ and thus $|\mbox{Gal}(\mathbb{Q}(u), \mathbb{Q})| = 9$ (assuming the extension is Galois). But since there are two distinct groups of order 9 (up to isomorphism), I'm not sure that this will yield the desired result. </p>
| Adrián Barquero | 900 | <p>This is something that nowadays can be easily done by using the <a href="http://www.lmfdb.org/" rel="nofollow noreferrer">LMFDB Database</a>.</p>
<p>For example, if you go to the above link, on the left hand side you'll find a column with the words Number Fields and the options <a href="http://www.lmfdb.org/NumberField/" rel="nofollow noreferrer">Global</a> and Local. By clicking on the Global option you'll be taken to a page that allows you to search for number fields, i.e., finite extensions of $\mathbb{Q}$, and in particular it allows you to do a refined search according to different parameters that you might be interested in.</p>
<p>In this case, you can restrict the degree of your number fields to be 9 and you can choose the Galois group to be $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3 \mathbb{Z}$ by typing the option C3xC3.</p>
<p>For instance, a <a href="http://www.lmfdb.org/NumberField/?degree=9&galois_group=C3xC3&ram_quantifier=exactly&count=20" rel="nofollow noreferrer">simple search</a> with those options easily shows that if $\alpha$ is a root of the polynomial $f(x):= x^9 - 15x^7 - 4x^6 + 54x^5 + 12x^4 - 38x^3 - 9x^2 + 6x + 1$, then the number field <a href="http://www.lmfdb.org/NumberField/9.9.62523502209.1" rel="nofollow noreferrer">$F = \mathbb{Q}(\alpha)$</a> is Galois over $\mathbb{Q}$ and has Galois group $\mathrm{Gal}(F/\mathbb{Q}) \cong \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3 \mathbb{Z}$.</p>
<p>I encourage you to try to learn to use the website, since it provides a wonderful tool for doing experiments in number theory and in related areas.</p>
<p><strong>PS:</strong> As a useful exercise, why don't you try changing the condition on the Galois group to the other possible group of order 9. That will definitely answer your question about whether just choosing a root of an irreducible polynomial of degree 9 might work ;-)</p>
|
2,662,792 | <p>I'm looking for a Galois extension $F$ of $\mathbb{Q}$ whose associated Galois group $\mbox{Gal}(F, \mathbb{Q})$ is isomorphic to $\mathbb{Z}_3 \oplus \mathbb{Z}_3$. I'm wondering if I should just consider some $u \notin Q$ whose minimum polynomial in $Q[x]$ has degree 9. In that case we'd have $[\mathbb{Q}(u) : \mathbb{Q}] = 9$ and thus $|\mbox{Gal}(\mathbb{Q}(u), \mathbb{Q})| = 9$ (assuming the extension is Galois). But since there are two distinct groups of order 9 (up to isomorphism), I'm not sure that this will yield the desired result. </p>
| Angina Seng | 436,618 | <p>$F=\Bbb Q(\cos(2\pi/7),\cos(2\pi/9))$ works.</p>
<p>It's the compositum of two linearly disjoint Galois cubic extensions.</p>
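<p>One can at least sanity-check numerically (evidence, not a proof; my own sketch) that both generators are cubic over $\mathbb{Q}$: $y=2\cos(2\pi/7)$ satisfies $y^3+y^2-2y-1=0$ and $z=2\cos(2\pi/9)$ satisfies $z^3-3z+1=0$, and neither cubic has a rational root:</p>

```python
import math

y = 2 * math.cos(2 * math.pi / 7)
z = 2 * math.cos(2 * math.pi / 9)
res_y = y ** 3 + y ** 2 - 2 * y - 1    # minimal polynomial of 2*cos(2*pi/7)
res_z = z ** 3 - 3 * z + 1             # minimal polynomial of 2*cos(2*pi/9)
```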
|
9,540 | <p>I'm following <a href="http://reference.wolfram.com/mathematica/ref/FinancialData.html">http://reference.wolfram.com/mathematica/ref/FinancialData.html</a></p>
<p>I get the following:</p>
<pre><code>In[6]:= DateListLogPlot[FinancialData["^DJI", All]]
</code></pre>
<blockquote>
<p>During evaluation of In[6]:= DateListLogPlot::ntdt: The first argument to DateListLogPlot should be a list of pairs of dates and real values, a list of real values, or a list of several such lists. >></p>
<p>Out[6]= DateListLogPlot[Missing["NotAvailable"]]</p>
</blockquote>
<pre><code>In[8]:= FinancialData["DJI", All]
</code></pre>
<blockquote>
<p>During evaluation of In[8]:= FinancialData::notent: DJI is not a known entity, class, or tag for FinancialData. Use FinancialData[] for a list of entities. >></p>
<p>Out[8]= FinancialData["DJI", All]</p>
</blockquote>
<pre><code>In[9]:= FinancialData["^DJI"]
</code></pre>
<blockquote>
<p>Out[9]= Missing["NotAvailable"]</p>
</blockquote>
<p>What's going on here? Is the DJI data unavailable somehow?</p>
| Vitaliy Kaurov | 13 | <p>If one looks for curated data accessible from Mathematica, the <code>WolframAlpha</code> function should always be considered as an option, because it links to curated data bases with frequent continuous updating:</p>
<pre><code>data = WolframAlpha["^DJI price history",
{{"HistoryDaily:Close:FinancialData", 1}, "TimeSeriesData"},
PodStates -> {"HistoryDaily:Close:FinancialData__All data"}];
</code></pre>
<p>And this is basically the same plot as in the documentation:</p>
<pre><code>DateListLogPlot[data, Filling -> 1, Joined -> True]
</code></pre>
<p><img src="https://i.stack.imgur.com/jKjPO.png" alt="enter image description here"></p>
<p>You can also access these data interactively to obtain the same programming syntax; this is a good way to learn how the syntax looks in different cases:</p>
<p><img src="https://i.stack.imgur.com/Mq4HC.png" alt="enter image description here"></p>
<p>To understand better <code>WolframAlpha</code> integration in <em>Mathematica</em> I suggest the following sources:</p>
<ul>
<li><a href="http://reference.wolfram.com/mathematica/ref/WolframAlpha.html" rel="noreferrer">WolframAlpha function documentation</a></li>
<li><a href="http://reference.wolfram.com/mathematica/tutorial/DataFormatsInWolframAlpha.html" rel="noreferrer">Data Formats in Wolfram|Alpha tutorial</a> </li>
<li><a href="http://www.wolfram.com/broadcast/virtualconference/wolfram-alpha-in-mathematica/" rel="noreferrer">Mathematica: Wolfram|Alpha Integration free video course</a></li>
</ul>
|
1,897,212 | <p>Does there exist any surjective group homomorphism from <span class="math-container">$(\mathbb R^* , .)$</span> (the multiplicative group of non-zero real numbers) onto <span class="math-container">$(\mathbb Q^* , .)$</span> (the multiplicative group of non-zero rational numbers)? </p>
| quid | 85,306 | <p>Let me add a more abstract view to the concrete arguments already given. </p>
<p>Recall that a group $(G,\cdot)$ is called divisible if for each $a\in G$, positive integer $n$, the equation $X^n = a$ has a solution. </p>
<p>(The terminology makes more sense in additive notation where it means $nX=a$ has a solution; that is, one can 'divide the $a$ into $n$ equal parts.')</p>
<p>Now observe:</p>
<ul>
<li>the homomorphic image of a divisible group is divisible. </li>
<li>the only divisible subgroup of $(\mathbb{Q}^{\ast}, \cdot)$ is the trivial one.</li>
<li>the (sub)group $(\mathbb{R}_+^{\ast}, \cdot)$ is divisible.</li>
</ul>
<p>Thus, the image of $(\mathbb{R}_+^{\ast}, \cdot)$ under every homomorphism $\varphi$ from $(\mathbb{R}^{\ast}, \cdot)$ to $(\mathbb{Q}^{\ast}, \cdot)$ must be trivial.</p>
<p>Now, for $x$ negative we have that $x^2$ is positive and thus $\varphi(x)^2=\varphi(x^2)= 1$. Thus, $\varphi(x)$ is $\pm 1$. (Also see another answer for this.)</p>
<p>Assume there are two negative numbers $x,y$ such that $\varphi(x) \neq \varphi (y)$; then $1= \varphi(xy) = \varphi(x)\varphi(y) = -1$, a contradiction. </p>
<p>Thus either all negative numbers have image $1$ or all negative number have image $-1$. </p>
<p>It follows that the only two homomorphism there could be are $x \mapsto 1$ and $x \mapsto \operatorname{sign}{(x)}$. </p>
<p>Both are indeed homomorphisms, yet neither is surjective. </p>
|
2,439,278 | <p>I wanted to solve the following problem.</p>
<blockquote>
<p>In $\triangle ABC$ we have $$\sin^2 A + \sin^2 B = \sin^2 C + \sin A \sin B \sin C.$$ Compute $\sin C$.</p>
</blockquote>
<p>Since it's an equation for a triangle, I assumed that $\pi = A + B + C$ would be important to consider.</p>
<p>I've tried solving for $\sin C$ as a quadratic, rewriting $\sin C = \sin (\pi - A - B)$, but nothing seemed to work.</p>
<p>How does one approach this problem? Any help would be appreciated.</p>
<p>The answer is</p>
<blockquote class="spoiler">
<p>$$\frac{2\sqrt{5}}{5}$$</p>
</blockquote>
| Michael Rozenberg | 190,319 | <p>By the law of sines the triangle $ABC$ is similar to the triangle with side lengths $\sin{A}$, $\sin{B}$ and $\sin{C}$. </p>
<p>Thus, by the law of cosines
$$\sin^2C=\sin^2A+\sin^2B-2\sin{A}\sin{B}\cos{C}$$ and by the given
$$\sin^2C=\sin^2A+\sin^2B-\sin{A}\sin{B}\sin{C}.$$
Id est, $$2\cos{C}=\sin{C}$$ or
$$4(1-\sin^2C)=\sin^2C$$ or
$$\sin{C}=\frac{2}{\sqrt5}$$ or
$$C=\arcsin\frac{2}{\sqrt{5}}.$$
Done!</p>
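<p>As a quick numerical sanity check (a Python sketch of my own; the particular values of $A$ are arbitrary choices, not part of the derivation), fix $C=\arctan 2$ and verify that the stated identity then holds for any admissible split of the remaining angle:</p>

```python
import math

# C is the angle with tan(C) = 2, so sin(C) = 2/sqrt(5)
C = math.atan(2)
errors = []
for A in (0.3, 0.7, 1.1):        # arbitrary second angles (all keep B > 0)
    B = math.pi - A - C          # the third angle of the triangle
    lhs = math.sin(A) ** 2 + math.sin(B) ** 2
    rhs = math.sin(C) ** 2 + math.sin(A) * math.sin(B) * math.sin(C)
    errors.append(abs(lhs - rhs))
print(max(errors), math.sin(C))
```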
|
2,439,278 | <p>I wanted to solve the following problem.</p>
<blockquote>
<p>In $\triangle ABC$ we have $$\sin^2 A + \sin^2 B = \sin^2 C + \sin A \sin B \sin C.$$ Compute $\sin C$.</p>
</blockquote>
<p>Since it's an equation for a triangle, I assumed that $\pi = A + B + C$ would be important to consider.</p>
<p>I've tried solving for $\sin C$ as a quadratic, rewriting $\sin C = \sin (\pi - A - B)$, but nothing seemed to work.</p>
<p>How does one approach this problem? Any help would be appreciated.</p>
<p>The answer is</p>
<blockquote class="spoiler">
<p>$$\frac{2\sqrt{5}}{5}$$</p>
</blockquote>
| Dr. Sonnhard Graubner | 175,066 | <p>Using the law of sines we get
$$\sin^2(C)\left(\frac{a^2+b^2}{c^2}\right)=\sin^2(C)+\frac{ab}{c^2}\sin^3(C)$$
since $$\sin(C)\neq 0$$ we obtain
$$\sin(C)=\frac{a^2+b^2-c^2}{ab}$$ using $$2\cos(C)=\frac{a^2+b^2-c^2}{ab}$$ we get $$\tan(C)=2$$</p>
|
1,063,774 | <blockquote>
<p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p>
</blockquote>
<p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
| marty cohen | 13,079 | <p>For fun,
I'll try to do
a proof by contradiction.</p>
<p>Suppose
$\lim_{n \to \infty} r^n
\ne 0
$.
Then,
since $S =(r^n)_{n=1}^{\infty}$
is a decreasing sequence
(since $0 < r < 1$),
$s = \lim S$
exists
and $s > 0$.
(Only $\lim\inf$
is needed
for what follows.)</p>
<p>Then,
for any $c > 0$,
there is an
$n(c)$
such that
$r^{n(c)}
< s+c
$.</p>
<p>Now,
consider
$r^{n(c)+1}$.
$r^{n(c)+1}
=r^{n(c)}r
<(s+c)r
$.
But this can be made
less than $s$
if
$(s+c)r
< s
$
or
$sr+cr < s$
or
$cr < s-sr = s(1-r)$
or
$c < \frac{s(1-r)}{r}$.
But,
by assumption,
$r^{n(c)+1}
\ge s
$.</p>
<p>Therefore
the assumption that
$\lim r^n > 0$
leads to a contradiction,
so
$\lim r^n = 0$.</p>
|
47,188 | <p>I am an amateur mathematician, and I had an idea which I worked out a bit and sent to an expert. He urged me to write it up for publication. So I did, and put it on arXiv. There were a couple of rounds of constructive criticism, after which he tells me he thinks it ought to go a "top" journal (his phrase, and he went on to name two). </p>
<p>His opinion is that my outsider status will have no effect on the reviewing process and my paper will be taken seriously. I am pleased and quite flattered to hear this, and my inclination is to do as he suggests. But I have to say it sounds too good to be true. Does this match other people's experience? I understand it's rare enough for undergraduates to publish anywhere, and I am not even an undergraduate! Surely a "non-mathematician submitting to top mathematical journal" must instantly rank high on the crackpot scale. How often does this actually occur successfully these days? Any advice?</p>
| sleepless in beantown | 8,676 | <p>What do you have to lose by submitting an article for publication? You'll have an even better record/credentialing/verification of the work you've put into it by being published in a journal of good reputation. In the worst case, you will get a rejection letter, perhaps with a good explanation of why they are rejecting the article. The in-between case is pretty good too: you'll receive a referee report which may criticize your approach, suggest particular points to be polished and corrected, perhaps suggest a different approach to take, perhaps suggest that some of this work has been done by others whom you should read and study or perhaps refer to in your work.</p>
<p>If you're lucky enough to be asked to rewrite and resubmit for consideration, you're possibly on your way to being published. If not, you'll at least have made some progress and educated yourself about the academic publication culture, and will be more prepared for the next article which you prepare for submission.</p>
<p>May I recommend that you find out what your activation energy level is, exceed it, and go ahead and clean up your paper and submit it for publication. Best wishes and good luck. <em>Go for it!</em></p>
<p>Would you mind sharing the arxiv link to your work?</p>
|
1,659,224 | <p>Find a sequence $a_n$ such that $\lim_{n \to \infty}|a_{n+1} - a_n | = 0$ while the sequence does not converge.</p>
<p>I am thrown for a loop. For a sequence not to converge to any $L$ it means that there is some $\epsilon > 0$ such that for every $N$ there is an $n \geq N$ with $|a_n - L| \geq \epsilon$.</p>
<p>or I could change some terms and state it in terms of a Cauchy sequence, but either way how can the terms have a distance of 0 but the sequence not be consider converging? hints??</p>
| Arkady | 23,522 | <p>Hint: Take your favourite example of a <strong>series</strong> that does not converge but whose terms tend to zero. Think of the partial sums as the elements of a new sequence.</p>
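<p>One concrete instance of this hint, sketched in Python: the partial sums of the harmonic series have successive differences $1/(n+1)\to 0$, yet the sequence of partial sums grows without bound.</p>

```python
def partial_sums(n_terms):
    """Partial sums H_1, H_2, ... of the harmonic series."""
    s, out = 0.0, []
    for k in range(1, n_terms + 1):
        s += 1.0 / k
        out.append(s)
    return out

a = partial_sums(100_000)
print(a[-1] - a[-2])   # consecutive terms get arbitrarily close (here 1/100000)
print(a[-1])           # but the sequence keeps growing, roughly like ln(n)
```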
|
1,659,224 | <p>Find a sequence $a_n$ such that $\lim_{n \to \infty}|a_{n+1} - a_n | = 0$ while the sequence does not converge.</p>
<p>I am thrown for a loop. For a sequence not to converge to any $L$ it means that there is some $\epsilon > 0$ such that for every $N$ there is an $n \geq N$ with $|a_n - L| \geq \epsilon$.</p>
<p>or I could change some terms and state it in terms of a Cauchy sequence, but either way how can the terms have a distance of 0 but the sequence not be consider converging? hints??</p>
| Omran Kouba | 140,450 | <p>I would take $a_n=\sqrt{n}$. This would do the job.</p>
|
1,108,787 | <p><img src="https://i.stack.imgur.com/YfmKh.png" alt="enter image description here"></p>
<p>I know we have to prove these 10 properties to prove a set is a vector space. However, I don't understand how to prove numbers 4 and 5 on the list.</p>
| MathMajor | 113,330 | <p>I think it will help to see examples of sets that fulfill $(4)$ and $(5)$, along with some that don't.</p>
<p>To prove $(4)$, you just need to identify an element called the zero element. For example, if $V = \mathbb{R}^3$, then $\vec 0 = (0, 0, 0)$ would be the zero element, because for any point $(a, b, c) \in \mathbb{R}^3$, we have $(a, b, c) + (0, 0, 0) = (a, b, c)$. </p>
<p>If $W = \{f \mid f(\cdot) > 0 \}$, then $\vec 0 \not\in W$, because $f(x) = 0 \not>0$, and hence $W$ is not a vector space.</p>
<p>To prove $(5)$, we seek to find the additive inverse of any element $x$. For example, if $V = \mathbb{R}$, then for any element $x \in V$, $-x$ would be its additive inverse because $x + (-x) = 0$, and $(5)$ is fulfilled. </p>
<p>To see an example where an additive inverse does not exist, we once again consider $W = \{f \mid f(\cdot) > 0 \}$. For any $f \in W$, its additive inverse would be $-f$, but $f > 0$ so that $-f <0$ and hence $-f \not\in W$. Therefore $(5)$ fails over $W$.</p>
<p>I think it would also be instructive to see some examples of $0$ elements and additive inverses in other spaces:</p>
<p>(i) If $V = P_n$, the space of $n$-th degree polynomials, then $p(x) = 0$ is the zero element.</p>
<p>(ii) If $V = M_2$, then $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ is the zero element.</p>
<p>(iii) If $V = M_2$, and $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in V$, then its additive inverse would be $ \begin{bmatrix} -a & -b \\ -c & -d \end{bmatrix}$.</p>
<p>(iv) If $V = P_n$, and $p(x) \in V$, then its additive inverse would be $-p(x)$.</p>
<p>Do you understand now?</p>
|
1,535,914 | <p>When I plot the following function, the graph behaves strangely:</p>
<p><span class="math-container">$$f(x) = \left(1+\frac{1}{x^{16}}\right)^{x^{16}}$$</span></p>
<p>While <span class="math-container">$\lim_{x\to +\infty} f(x) = e$</span> the graph starts to fade at <span class="math-container">$x \approx 6$</span>. What's going on here? (plotted on my trusty old 32 bit PC.)
<a href="https://i.stack.imgur.com/YL7WQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YL7WQ.png" alt="Approximating e" /></a></p>
<p>I guess it's because of computer approximation and loss of significant digits. So I started calculating the binary representation to see if this is the case. However in my calculations values of <span class="math-container">$x=8$</span> should still behave nicely.</p>
<p>If computer approximation would be the problem, then plotting this function on a 64 bit pc should evade the problem (a bit). I tried the Wolfram Alpha servers:</p>
<p><a href="https://i.stack.imgur.com/joNc2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/joNc2.png" alt="Wolfram alpha" /></a></p>
<p>The problem remains for the same values of <span class="math-container">$x$</span>.</p>
<h3>Questions</h3>
<ol>
<li>Could someone pinpoint the problem? What about the 32 vs 64 bit plot?</li>
<li>Is there a way to predict at which <span class="math-container">$x$</span>-value the graph of the function below would start to fail?
<span class="math-container">$$f_n(x) = \left(1+\frac{1}{x^n}\right)^{x^n}$$</span></li>
</ol>
| Gottfried Helms | 1,714 | <p>Here is a plot using the (arbitrary precision) calculator Pari/GP. I use 200 dec digits precision as default in my computations and got this plot without oscillation up to x=20: </p>
<p><a href="https://i.stack.imgur.com/nDTtY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nDTtY.png" alt="bild"></a></p>
<p>I tried it so far up to x=256; no oscillation. See here the image up to x=128 (just to have the left increase visible) </p>
<p><a href="https://i.stack.imgur.com/adZn6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/adZn6.png" alt="bild"></a></p>
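<p>The fading in ordinary double precision can also be reproduced and explained directly: once $1/x^{16}$ drops below half an ulp of $1$, the sum $1+1/x^{16}$ rounds to exactly $1$; the visible jitter starts earlier because forming $1 + t$ already loses most digits of $t$. A small Python sketch (assuming IEEE-754 doubles; the choice of test points is mine):</p>

```python
import math

def f_naive(x):
    t = 1.0 / x ** 16
    return (1.0 + t) ** (x ** 16)      # forming 1 + t loses almost all digits of t

def f_stable(x):
    t = 1.0 / x ** 16
    return math.exp(x ** 16 * math.log1p(t))   # log1p(t) never forms 1 + t

for x in (5.0, 6.0, 8.0, 12.0):
    print(x, f_naive(x), f_stable(x))  # naive value degrades, then collapses to 1.0
```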
|
3,217,130 | <p>How can I prove this by induction? I am stuck when there is a <span class="math-container">$\Sigma$</span> and two variables, how would I do it? I understand the first step but have problems when i get to the inductive step.</p>
<p><span class="math-container">$$\sum_{j=1}^n(4j-1)=n(2n+1)$$</span></p>
| Maxime Ramzi | 408,637 | <p><span class="math-container">$xyx = xyx$</span> therefore with <span class="math-container">$a=x, b=yx, c=xy$</span> we get <span class="math-container">$ab = ca$</span>, so <span class="math-container">$b=c$</span>, that is <span class="math-container">$xy=yx$</span></p>
|
3,059,020 | <p>The integral is<span class="math-container">$$\int_0^{2\pi}\frac{\mathrm dθ}{2-\cosθ}.$$</span>To save time: the indefinite integral is <span class="math-container">$\dfrac2{\sqrt{3}}\tan^{-1}\left(\sqrt3\tan\left(\dfracθ2\right)\right)$</span>.</p>
<p>Evaluating it from <span class="math-container">$0$</span> to <span class="math-container">$ 2 \pi$</span> yields<span class="math-container">$$\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tanπ)-\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tan0)=0-0=0.$$</span>But using complex analysis, the integral is transformed into<span class="math-container">$$2i\int_C\frac{\mathrm dz}{z^2-4z+1}=2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)},$$</span>
where <span class="math-container">$C$</span> is the boundary of the circle <span class="math-container">$|z|=1$</span>. Then by Cauchy's integral formula, since <span class="math-container">$z=2-\sqrt3$</span> is inside the domain of the region bounded by <span class="math-container">$C$</span>, then:
<span class="math-container">$$2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)}=2πi\frac{2i}{2-\sqrt3-2-\sqrt3}=2πi\frac{2i}{-2\sqrt3}=\frac{2π}{\sqrt3}.$$</span></p>
<p>Using real analysis I get <span class="math-container">$0$</span>, using complex analysis I get <span class="math-container">$\dfrac{2π}{\sqrt3}$</span>. What is wrong?</p>
| Mark Viola | 218,419 | <p>Note that the tangent function, <span class="math-container">$\tan(x)$</span>, is discontinuous when <span class="math-container">$x=\pi/2+n\pi$</span>. So, the antiderivative <span class="math-container">$\frac2{\sqrt{3}} \arctan\left(\sqrt 3 \tan(\theta/2)\right)$</span> is not valid over the interval <span class="math-container">$[0,2\pi]$</span>.</p>
<p>Instead, we have </p>
<p><span class="math-container">$$\int_0^{2\pi}\frac{1}{2-\cos(\theta)}\,d\theta=2\int_0^\pi\frac{1}{2-\cos(\theta)}\,d\theta=\frac{4}{\sqrt3}\left.\left(\arctan\left(\sqrt 3 \tan(\theta/2)\right)\right)\right|_0^\pi=\frac{2\pi}{\sqrt3}$$</span></p>
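<p>The corrected value is also easy to confirm numerically; a midpoint-rule sketch in Python (my own check, not part of the argument):</p>

```python
import math

N = 100_000                      # midpoint rule over [0, 2*pi]
h = 2 * math.pi / N
total = sum(h / (2 - math.cos((k + 0.5) * h)) for k in range(N))
print(total, 2 * math.pi / math.sqrt(3))   # both ~ 3.6276
```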
|
603,291 | <p>Suppose $f:(a,b) \to \mathbb{R} $ satisfies $|f(x) - f(y) | \le M |x-y|^\alpha$ for some $\alpha >1$
and all $x,y \in (a,b) $. Prove that $f$ is constant on $(a,b)$. </p>
<p>I'm not sure which theorem I should look at to prove this. Can you guys give me a bit of a hint? First of all, how does one prove that a function $f(x)$ is constant on $(a,b)$? Just show $f'(x) = 0$?</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> Show that $f'(y)$ exists and is equal to $0$ for all $y$. Then as usual by the Mean Value Theorem our function is constant. </p>
|
3,127,795 | <p>So I'm struggling to understand how to find the angle in this circle. We've recently learnt about trigonometry, like finding the area of a circle, but I can't seem to remember which formula I have to use to find this angle. Can anyone lend a helping hand?
<img src="https://i.stack.imgur.com/V2Sm7.jpg" alt=""></p>
| Dr. Sonnhard Graubner | 175,066 | <p>You can use the law of cosines:
<span class="math-container">$$10^2=7^2+7^2-2\cdot 7^2\cos(\angle{AOB})$$</span></p>
|
597,986 | <p>It's a classical theorem of Lie group theory that any compact connected abelian Lie group must be a torus. So it's natural to ask what if we delete the connectedness, i.e. the problem of classification of the compact abelian Lie groups.</p>
| Mariano Suárez-Álvarez | 274 | <p>The quotient of your Lie group $G$ by the connected component of the identity $G_0$ is a finite discrete abelian group $A$, because the Lie group is compact. It follows that we have an extension $$0\to G_0\to G\to A\to 0$$ Such a thing is classified (forgetting topologies) by an element of $H^2(A,G_0)$. This group is zero, because $A$ is finite and $G_0$ is divisible. It follows that $G\cong A\times G_0$. Since $G_0$ is a compact and connected, it is a torus. </p>
<p>This completely describes the possible $G$s.</p>
|
3,752,167 | <p>Freshman question, really, but the more I think about it, the more I doubt.</p>
<p>Suppose that two sets belong to the same equivalence class. Are they in effect interchangeable? (I understand that there is no axiom of `interchangeability' in the definition of an equivalence relation.)</p>
<p>For example, consider the equivalence class of all people who are 30 years old. This equivalence class contains both men and women who are 30; and men and women are different 'objects' if I may say. Yet if I consider the class of people who are 30 for some analysis, it does not matter if I pick a man or a woman. They are interchangeable as long as what matters is their age.</p>
<p>I just wonder if this is characteristic of <em>all</em> equivalence classes one can encounter in mathematics.</p>
| Ethan Bolker | 72,858 | <p>The answer to your question is in the question, here:</p>
<blockquote>
<p>They are interchangeable as long as what matters is their age.</p>
</blockquote>
<p>If all that matters in any particular context is the condition that specifies the equivalence relation then any representative will do.</p>
|
2,009,134 | <p>Suppose $$\frac{{{{\sin }^4}(\alpha )}}{a} + \frac{{{{\cos }^4}(\alpha )}}{b} = \frac{1}{{a + b}}$$ for some $a,b\ne 0$. </p>
<p>Why does $$\frac{{{{\sin }^8}(\alpha )}}{{{a^3}}} + \frac{{{{\cos }^8}(\alpha )}}{{{b^3}}} = \frac{1}{{{{(a + b)}^3}}}$$</p>
| Piquito | 219,998 | <p>Solving the system
$$\begin{cases}X^2+Y^2=1\\\dfrac{X^4}{a}+\dfrac{Y^4}{b}=\dfrac{1}{a+b}\end{cases}$$ where obviously $X$ and $Y$ are the sine and cosine respectively, one has $$X=\pm\sqrt{\dfrac{a}{a+b}}\\Y=\pm\sqrt{\dfrac{b}{a+b}}$$</p>
<p>This gives directly the equalities $\dfrac{X^8}{a^3}=\dfrac{a^4}{a^3(a+b)^4}$ and
$\dfrac{Y^8}{b^3}=\dfrac{b^4}{b^3(a+b)^4}$ hence the result</p>
<p>$$\dfrac{X^8}{a^3}+\dfrac{Y^8}{b^3}=\dfrac{1}{(a+b)^3}$$</p>
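<p>A quick numerical check of both the hypothesis and the conclusion (a Python sketch; the pairs $(a,b)$ are arbitrary positive choices of mine):</p>

```python
errs = []
for a, b in [(1.0, 2.0), (3.0, 5.0), (0.5, 4.5)]:
    s2 = a / (a + b)            # sin^2(alpha) forced by the hypothesis
    c2 = 1.0 - s2               # cos^2(alpha)
    errs.append(abs(s2 ** 2 / a + c2 ** 2 / b - 1.0 / (a + b)))            # hypothesis
    errs.append(abs(s2 ** 4 / a ** 3 + c2 ** 4 / b ** 3 - (a + b) ** -3))  # conclusion
print(max(errs))
```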
|
694,090 | <p>Let $V$ and $W$ be vector spaces over $\Bbb{F}$ and $T:V \to W$ a linear map. If $U \subset V$ is a subspace we can consider the map $T$ for elements of $U$ and call this the restriction of $T$ to $U$, $T|_{U}: U \to W$ which is a map from $U$ to $W$. Show that</p>
<p>$$\ker T|_{U} = \ker T\cap U.$$</p>
<p>I know the definition of a linear map is </p>
<p>$f(x+y)=f(x) +f(y)$ and </p>
<p>$f(ax)=a\cdot f(x)$</p>
<p>I also know the kernel is the set of points which are mapped to zero.</p>
<p>However, I am struggling to piece this all together.</p>
<p>Thanks in advance for all the help.</p>
| Community | -1 | <p>Notice that by definition</p>
<p>$$\forall u\in U,\quad T_{|U}(u)=T(u)$$</p>
<p>Now if $u\in U$ such that $u\in \ker T_{|U}$ then we have
$$T_{|U}(u)=T(u)=0$$
and this means that $u\in \ker T$ hence</p>
<p>$$\ker T_{|U}\subset U\cap \ker T$$
Conversely if $u\in U\cap \ker T$ then
$$0=T(u)= T_{|U}(u)$$
hence $u\in \ker T_{|U}$ and we have the double inclusion. Conclude.</p>
|
268,482 | <p>One has written a paper, the main contribution of which is a few conjectures. Several known theorems turned out to be special cases of the conjectures, however no new case of the conjectures was proven in the paper. In fact, no new theorem was proven in the paper. </p>
<p>The work was reported on a few seminars, and several experts found the conjectures interesting. </p>
<p>One would like to publish this paper in a refereed journal. The paper was rejected from a certain journal just two days after its submission because "this genre of article does not fit the journal".</p>
<blockquote>
<p><strong>QUESTION.</strong> Are there examples of publications of this genre in refereed journals?</p>
</blockquote>
<p><strong>ADD:</strong> The mentioned paper explains the background, states the conjectures, discusses various special cases and consequences, and lists known cases. It is 20 pages long. </p>
| John Pardon | 35,353 | <p>The main contribution of the paper "The McKay conjecture and Galois automorphisms" by Gabriel Navarro is to propose various conjectures. It is published in the Annals of Mathematics.</p>
<p><a href="http://www.ams.org/mathscinet-getitem?mr=2144975" rel="noreferrer">http://www.ams.org/mathscinet-getitem?mr=2144975</a></p>
|
19,848 | <p><strong>{Xn}</strong> is a sequence of independent random variables each with the same
<strong>Sample Space $\{0,1\}$</strong> and <strong>Probability $\{1-1/n^2,\; 1/n^2\}$</strong>
<br> <em>Does this sequence converge with probability one (Almost Sure) to the <strong>constant 0</strong>?</em>
<br><br>Essentially the {Xn} will look like this (it's random, but just to show that the frequency of ones drops)<br>
010010000100000000100000000000000001......</p>
| Shai Covo | 2,810 | <p>Let's elaborate on the answers already given, by considering the more general case where $X_n = 1$ with probability $a_n$ and $X_n = 0$ with probability $1-a_n$, where $a_n \downarrow 0$ (the typical cases being $a_n = 1/n$ and $a_n = 1/n^2$) and the $X_n$ are not necessarily independent.</p>
<p>The most natural approach is to use Borel-Cantelli lemma, as follows. Denote by $E_n$ the event $\lbrace X_n = 1 \rbrace$.
If $\sum\nolimits_{n = 1}^\infty {{\rm P}(E_n )} < \infty$, then the probability of $E_n$ occurring for infinitely many $n$ is $0$. Thus, if $\sum\nolimits_{n = 1}^\infty {a_n} < \infty$, then $X_n \to 0$ almost surely (that is, with probability $1$), since, almost surely, $X_n = 0$ for all $n$ sufficiently large. If, on the other hand, $\sum\nolimits_{n = 1}^\infty {{\rm P}(E_n )} = \infty$ and the events $E_n$ are independent, then the probability of $E_n$ occurring for infinitely many $n$ is $1$. Thus, if $\sum\nolimits_{n = 1}^\infty {a_n} = \infty$ and the $X_n$ are independent, then almost surely $X_n$ does not converge to $0$ (even though ${\rm P}(X_n = 0) \to 1$, which only gives that $X_n \to 0$ in probability).</p>
<p>As for chandok's approach, define $Y_n = \sum\nolimits_{k = 1}^n {X_k }$, so that $0 \leq Y_1 \leq Y_2 \leq \cdots$. Since the sequence $Y_n$ is monotone, it converges, say to $Y$ (which might be infinite). By the Monotone Convergence Theorem, ${\rm E}(Y_n) \to {\rm E}(Y)$ (to put it otherwise, $\int_\Omega {Y_n \,{\rm dP}} \to \int_\Omega {Y\,{\rm dP}}$), where the right-hand side might be infinite (and is obviously infinite if $Y$ is). Now, ${\rm E}(Y_n) = \sum\nolimits_{k = 1}^n {a_k }$. Hence, if $\sum\nolimits_{k = 1}^\infty {a_k }$ is finite, then so is ${\rm E}(Y)$, or $\int_\Omega {Y\,{\rm dP}}$. From measure theory (since $Y$ is nonnegative), it follows that $Y$ is almost everywhere finite, or, in our setting, $Y$ is almost surely finite. But $Y$ is finite if and only if $X_n = 0$ for all $n$ sufficiently large. So, $\sum\nolimits_{k = 1}^\infty {a_k } < \infty$ implies $X_n \to 0$. However, if $\sum\nolimits_{k = 1}^\infty {a_k } = \infty$, then one cannot a priori conclude from ${\rm E}(Y) = \infty$ that $Y$ is almost surely infinite (simply since a random variable can have infinite expectation).</p>
<p>EDIT: With the above notation and assuming that the $X_n$ are independent, the characteristic function $\varphi _{n } (u)$ of $Y_n$ is given by
$$
\varphi _{n } (u) = {\rm E}[e^{{\rm i}uY_n } ] = {\rm E}[e^{{\rm i}u(X_1 + \cdots + X_n )} ] = \prod\limits_{k = 1}^n {{\rm E}[e^{{\rm i}uX_k } ]} = \prod\limits_{k = 1}^n {[(1 - a_k ) + a_k e^{{\rm i}u} ]}.
$$
Now, $X_n \to 0$ if and only if $Y := \lim _{n \to \infty } Y_n $ is finite. So it is instructive to note here the following (general) result. If $\varphi _{n } (u)$ converges to a function $\varphi(u)$ for every $u \in \mathbb{R}$ and $\varphi(u)$ is continuous at $u=0$, then $\varphi(u)$ is the characteristic function of some distribution.</p>
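<p>The Borel-Cantelli dichotomy above is easy to see in simulation (a Python sketch; a finite run cannot of course prove an almost-sure statement, it only illustrates it; note that $a_1 = 1$, so both runs always start with a one):</p>

```python
import random

def count_ones(exponent, n_max):
    """Simulate independent X_n with P(X_n = 1) = 1 / n**exponent."""
    return sum(1 for n in range(1, n_max + 1)
               if random.random() < 1.0 / n ** exponent)

random.seed(0)
ones_sq = count_ones(2, 10 ** 6)   # summable case: only finitely many ones expected
ones_h = count_ones(1, 10 ** 6)    # non-summable case: ones keep appearing (~ log n of them)
print(ones_sq, ones_h)
```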
|
87,091 | <p>Find $C$ such that</p>
<p>$$Ce^{(-4x^2-xy-4y^2)/2}$$</p>
<p>is a joint probability density of a $2$-variable Gaussian.</p>
<p>If someone could give me a jumping off point, or a process as to how to go about this, I'd really appreciate it.</p>
| Steven Gamer | 47,540 | <p>It is easy to show that the commutator subgroup is a characteristic subgroup , hence it is a normal subgroup. Proving G/C abelian is straightforward recalling that C is the subgroup of G generated by all commutators.</p>
|
87,091 | <p>Find $C$ such that</p>
<p>$$Ce^{(-4x^2-xy-4y^2)/2}$$</p>
<p>is a joint probability density of a $2$-variable Gaussian.</p>
<p>If someone could give me a jumping off point, or a process as to how to go about this, I'd really appreciate it.</p>
| Community | -1 | <p>Recall that $gh = hg[g,h]$ and that $[G,G]$ is the subgroup of $G$ generated by all commutators. $[G,G] \unlhd G$ because $$[h,k]g = g[h,k][[h,k],g].$$</p>
<p>The map $$G \longrightarrow \frac{G}{[G,G]}$$ is called abelianization (precisely because of the theorem we are about to prove).</p>
<p>Every element of $G/[G,G]$ is of the form $g [G,G]$ and this group is abelian because $$(g [G,G]) (g' [G,G]) = g g' [G,G] = g g' [g',g] [G,G] = g' g [G,G] = (g' [G,G]) (g [G,G]).$$</p>
|
545,728 | <p>So I have $X \sim \text{Geom}(p)$ and the probability mass function is:</p>
<p>$$p(1-p)^{x-1}$$</p>
<p>From the definition that:</p>
<p>$$\sum_{n=1}^\infty ns^{n-1} = \frac {1}{(1-s)^2}$$</p>
<p>How would I show that $E(X)=\frac 1p$?</p>
| hejseb | 70,393 | <p>What is $s$ in your case? Consider:</p>
<p>$$
E(X)=\sum_{x=1}^\infty x p(1-p)^{x-1}=p\sum_{x=1}^\infty x(1-p)^{x-1}=\cdots=?
$$</p>
<p>Can you see how to take it from here?</p>
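<p>Numerically the hint checks out; truncating the series far enough out (a Python sketch of mine, with an arbitrary truncation point):</p>

```python
def expected_value(p, n_terms=10_000):
    """Truncated series sum of x * p * (1-p)^(x-1) for x = 1..n_terms."""
    return sum(x * p * (1 - p) ** (x - 1) for x in range(1, n_terms + 1))

for p in (0.1, 0.3, 0.5):
    print(p, expected_value(p), 1 / p)   # the two values agree closely
```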
|
2,161,463 | <p>I am in the final stages of a proof and need help. I have simplified my starting expression down to $\dfrac{v\Gamma(-1+\frac{v}{2})}{2\Gamma(\frac{v}{2})}$</p>
<p>I know the above expression is to equal $\frac{v}{v-2}$</p>
<p>I am having a hard time getting there.</p>
<p>I know $\Gamma(n) = (n-1)!$</p>
<p>So I thought something like $\dfrac{v*(\dfrac{v}{2}-2)(\dfrac{v}{2}-1)(\dfrac{v}{2}-0)}{2(\dfrac{v}{2}-1)(\dfrac{v}{2}-0)}$ Would work but the terms only cancel to produce $\dfrac{v*(\dfrac{v}{2}-2)}{2}$</p>
<p>Can anyone give me a push in the right direction?</p>
| Jack D'Aurizio | 44,121 | <p>$\Gamma\left(\frac{v}{2}\right) = \left(\frac{v}{2}-1\right)\,\Gamma\left(\frac{v}{2}-1\right)$ hence
$$ \frac{v\,\Gamma\left(\frac{v}{2}-1\right)}{2\,\Gamma\left(\frac{v}{2}\right)}= \frac{v\,\Gamma\left(\frac{v}{2}-1\right)}{2\,\left(\frac{v}{2}-1\right)\,\Gamma\left(\frac{v}{2}-1\right)}=\frac{v}{2\left(\frac{v}{2}-1\right)}=\frac{v}{v-2}.$$</p>
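<p>The identity is easy to sanity-check numerically, e.g. with Python's <code>math.gamma</code> (the sample values of $v$ are arbitrary; any $v>2$ works):</p>

```python
import math

errs = []
for v in (5.0, 8.0, 12.5):   # arbitrary values with v > 2
    lhs = v * math.gamma(v / 2 - 1) / (2 * math.gamma(v / 2))
    errs.append(abs(lhs - v / (v - 2)))
print(max(errs))
```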
|
3,752,948 | <p>Find <span class="math-container">$\displaystyle \lim_{x \to\frac{\pi} {2}} \{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}^{\cos^2 x} $</span><br />
I tried the following:
Let <span class="math-container">$A=\displaystyle \lim_{x \to\frac{\pi}{2}} \{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}^{\cos^2 x} $</span></p>
<p>Then <span class="math-container">$\ln A=\lim_{x \to\frac{\pi} {2}} \cos^2 x \ln\{1^{\sec^2 x} + 2^{\sec^2 x} + \cdots + n^{\sec^2 x}\}$</span><br />
which implies
<span class="math-container">$$\ln A = \lim_{x \to\frac {\pi}{2}} \frac{1^{\sec^2 x} + 2^{\sec^2 x}+ \cdots + n^{\sec^2 x}}{\sec^2 x}$$</span>
I'm stuck here. If I am on the right way please guide me to reach conclusion. Otherwise please describe the actual way. Thanks in advance.</p>
| marty cohen | 13,079 | <p>Since
<span class="math-container">$\sec(x) = 1/\cos(x)$</span>
and
<span class="math-container">$\cos(\pi/2+y)
=\cos(\pi/2)\cos(y)-\sin(\pi/2)\sin(y)
=-\sin(y)
$</span>,</p>
<p><span class="math-container">$\begin{array}\\
a(n)
&=\lim_{x \to \pi/2}\left(\sum_{k=1}^n k^{\sec^2(x)}\right)^{\cos^2(x)}\\
&=\lim_{y \to 0}\left(\sum_{k=1}^n k^{\sec^2(y+\pi/2)}\right)^{\cos^2(y+\pi/2)}\\
&=\lim_{y \to 0}\left(\sum_{k=1}^n k^{1/\sin^2(y)}\right)^{\sin^2(y)}\\
&=\lim_{z \to 0}\left(\sum_{k=1}^n k^{1/z}\right)^{z}
\qquad z = \sin^2(y)\\
&=\lim_{w \to \infty}\left(\sum_{k=1}^n k^{w}\right)^{1/w}
\qquad w = 1/z\\
&=\lim_{w \to \infty}\left(n^w\sum_{k=1}^n (k/n)^{w}\right)^{1/w}\\
&=\lim_{w \to \infty}n\left(\sum_{k=1}^n (k/n)^{w}\right)^{1/w}\\
&=\lim_{w \to \infty}n\left(\sum_{k=0}^{n-1} ((n-k)/n)^{w}\right)^{1/w}\\
&=\lim_{w \to \infty}n\left(\sum_{k=0}^{n-1} (1-k/n)^{w}\right)^{1/w}\\
&=\lim_{w \to \infty}n\left(1+\sum_{k=1}^{n-1} (1-k/n)^{w}\right)^{1/w}\\
&\ge n\\
\text{and}\\
a(n)
&=\lim_{w \to \infty}n\left(1+\sum_{k=1}^{n-1} (1-k/n)^{w}\right)^{1/w}\\
&\le\lim_{w \to \infty}n\left(1+n-1\right)^{1/w}\\
&=\lim_{w \to \infty}nn^{1/w}\\
&\to n\\
\end{array}
$</span></p>
<p>so the limit is <span class="math-container">$n$</span>.</p>
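<p>The claimed limit is also easy to check numerically: already at moderate $w$ the sum is dominated by $n^w$ (a Python sketch, with arbitrary sample values of mine):</p>

```python
def f(n, w):
    """(1^w + 2^w + ... + n^w)^(1/w), the inner expression at w = sec^2(x)."""
    return sum(k ** w for k in range(1, n + 1)) ** (1.0 / w)

vals = [(n, f(n, 200)) for n in (3, 7, 10)]
print(vals)   # the second entries are essentially n
```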
|
478,523 | <p>I'm trying to reason through whether $\int_{x=2}^\infty \frac{1}{xe^x} dx$ converges.</p>
<p>Intuitively, it would seem that since $\int_{x=2}^\infty \frac{1}{x} dx$ diverges, multiplying the denominator by something to make the integrand smaller might make it converge.</p>
<p>If I apply a Taylor expansion to the $\frac{1}{e^x}$, then I get</p>
<p>$\displaystyle \int_{x=2}^\infty \frac{1}{x} e^{-x} dx = \int_{x=2}^\infty \frac{1}{x}(1 + (-x) + \frac{(-x)^2}{2} + \cdots) dx$</p>
<p>My initial thought was to multiply everything out and just look at the leading order behavior, $ x^{n-1}$, but this would seem to mean the integral diverges.</p>
<p>What is wrong with this approach?</p>
| user1337 | 62,839 | <p>Using the Taylor expansion was fine, the problem is in the "leading order" approach.</p>
<p>You chose to express $\frac{1}{e^x}$ as an <em>infinite</em> sum; you cannot keep only finitely many of its terms without justification, since the truncation error is not small on the unbounded interval of integration. </p>
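<p>For the convergence itself, comparison works: $\frac{1}{xe^x}\le e^{-x}$ for $x\ge 1$, so the tail integral is at most $e^{-2}$. A rough midpoint-rule estimate (a Python sketch of my own; the cutoff at $x=60$ is an arbitrary point where the integrand is negligible):</p>

```python
import math

h, total, x = 1e-4, 0.0, 2.0
while x < 60.0:                  # integrand is far below double precision by x = 60
    m = x + h / 2                # midpoint of the current subinterval
    total += h / (m * math.exp(m))
    x += h
print(total, math.exp(-2))       # the integral is comfortably below the e^{-2} bound
```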
|
2,620,700 | <p>I want to prove or disprove that ($\mathbb{R},\mathcal{E}_1$) with equivalence relation $x\mathcal{R}y \Leftrightarrow \exists\, a \neq 0 : y = ax$ is homeomorphic to $(\{0,1\}, \tau)$ where $\tau$ is the discrete topology. </p>
<p>To me it seems that there are two classes of equivalence: every point is equivalent to every point, and 0 is equivalent only to 0. It then seems plausible that they are homeomorphic. The function I tried is this one: f(x) = 0 if x is 0, and 1 in every other situation. Does that work?</p>
<p>Thank you!</p>
<p>EDIT: $\mathcal{E}_1$ is the euclidean topology. The question is: Is the quotient space homeomorphic to the space with two elements with discrete topology?</p>
| Alex Kruckman | 7,062 | <p>You've observed that the quotient space $(\mathbb{R},\mathcal{E}_1)/\mathcal{R}$ consists of two points, the equivalence classes $C_0 = \{0\}$ and $C_1 = \mathbb{R}\backslash \{0\}$. Now what's the topology on the quotient space?</p>
<p>$(\mathbb{R},\mathcal{E}_1)/\mathcal{R}$ is homeomorphic to the two-point space with the discrete topology if and only if it has the discrete topology itself - that is, both of the singleton sets $\{C_0\}$ and $\{C_1\}$ are open. Are they? You just need to check the definition of the quotient topology... </p>
|
1,130,029 | <p>I am not sure if this question is off topic or not but a question like this has been asked on this site before - <a href="https://math.stackexchange.com/questions/409408/insertion-sort-proof">Insertion sort proof</a></p>
<p>Here is an example of insertion sort running on a set of data
<a href="https://courses.cs.washington.edu/courses/cse373/13wi/lectures/02-25/19-sorting2-select-insert-shell.pdf" rel="nofollow noreferrer">https://courses.cs.washington.edu/courses/cse373/13wi/lectures/02-25/19-sorting2-select-insert-shell.pdf</a></p>
<p><img src="https://i.stack.imgur.com/5Iamh.png" alt="enter image description here" /></p>
<p>Here are the instructor's runtime proofs for the different cases (slide 10)
<img src="https://i.stack.imgur.com/7Ml65.png" alt="enter image description here" /></p>
<p>Can anyone explain the intuition behind the i/2 in the average case? I get the worst case (number of comparisons = element number) and the best case (everything in order, 1 comparison per element).</p>
| subash rajaa | 254,060 | <p>The intuition relies on the fact that, for every position in the array, on average half of the preceding elements are greater than the element being inserted, and hence each of those contributes a comparison.</p>
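<p>A small experiment supports this (a Python sketch, not from the slides; counting element comparisons over random inputs):</p>

```python
import random

def insertion_sort_comparisons(arr):
    """Sort a copy of arr with insertion sort, returning the comparison count."""
    a, comparisons = list(arr), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:      # found the insertion point
                break
            a[j + 1] = a[j]      # shift the larger element right
            j -= 1
        a[j + 1] = key
    return comparisons

random.seed(1)
n, trials = 200, 300
avg = sum(insertion_sort_comparisons(random.sample(range(10_000), n))
          for _ in range(trials)) / trials
print(avg, n * n / 4)   # ~ i/2 work per element gives ~ n^2/4 comparisons overall
```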
|
373,313 | <p>Given a manifold <span class="math-container">$M$</span>, we can always embed it in some Euclidean space (general position theorem). Hence we can define the minimal embedding space of <span class="math-container">$M$</span> to be the smallest Euclidean space that we can embed <span class="math-container">$M$</span> in. My question is, will this depend on the category of <span class="math-container">$M$</span> (piecewise linear or smooth)? I am not an expert in this area and I know the difference between these two categories can be subtle. Any pointer is very appreciated.</p>
| skupers | 798 | <p>Yes, it depends on the category of manifolds you are considering.</p>
<p>For example, by Corollary 1.4 of <a href="https://www.sciencedirect.com/science/article/pii/0040938365900418" rel="noreferrer">Hsiang-Levine-Szczarba</a>, the 16-sphere with non-standard smooth structure does not admit a smooth embedding into <span class="math-container">$\mathbb{R}^{19}$</span>, and hence also not in any Euclidean space of smaller dimension. However, as it is PL-homeomorphic to the standard 16-sphere it does admit a PL-embedding into <span class="math-container">$\mathbb{R}^{19}$</span>.</p>
|
2,390,215 | <p>There exist a bijection from $\mathbb N$ to $\mathbb Q$, so $\mathbb Q$ is countable. And by well ordering principle $\mathbb N$ has a least member say $n_1$ which is mapped to something in $\mathbb Q$, In this way $\mathbb Q$ can be well arranged? Is my argument good? </p>
| rschwieb | 29,335 | <p>If you assume the axiom of choice, <em>any</em> set can be well ordered using this strategy. See <a href="https://math.stackexchange.com/questions/2283566/what-is-the-largest-well-ordered-set">this post</a></p>
<p>But if the original question was really "is $\mathbb Q$ well-ordered <em>by its natural ordering?</em>" (the one we all learn in grade school),</p>
<p><em>then</em> the answer is no, because you can find subsets which have no least element (e.g. $\{1/2^n\mid n\in \mathbb N\}$).</p>
|
2,390,215 | <p>There exist a bijection from $\mathbb N$ to $\mathbb Q$, so $\mathbb Q$ is countable. And by well ordering principle $\mathbb N$ has a least member say $n_1$ which is mapped to something in $\mathbb Q$, In this way $\mathbb Q$ can be well arranged? Is my argument good? </p>
| Alec Rhea | 232,121 | <p>The answer is yes, but how we prove it depends on our assumptions -- are you assuming the axiom of choice (AC)? Your method of finding a bijection between $\mathbb{Q}$ and $\mathbb{N}$ is a good idea, however it is easiest to 'pass through' $\mathbb{Z}$ along the way in my opinion. I will sketch a possible proof with and without the axiom of choice.</p>
<p><strong>With AC</strong></p>
<p>If we are assuming choice then yes, since the axiom of choice allows us to well-order any set as a consequence of what is called the <strong>counting principle</strong>. This principle states that for any set $x$, there is some ordinal $\alpha$ such that there exists a bijection $f:\alpha\rightarrow x$. We may then well order the rationals as follows:</p>
<blockquote>
<p>Let $f:\alpha\rightarrow\mathbb{Q}$ be the ordinal bijection guaranteed by the counting principle in the case that $x=\mathbb{Q}$. We may then well-order the rationals under the ordering $$\leq_f=\{(p,q):f^{-1}(p)\leq f^{-1}(q)\}.$$ Accordingly we have that for all rational numbers $p$ and $q$, $$p\leq_f q\iff f^{-1}(p)\leq f^{-1}(q).$$</p>
</blockquote>
<p>This is a non-constructive process, in the sense that we essentially pull the existence of this bijection directly as a consequence of the axiom of choice (which implies the counting principle and vice versa) and then use it to well-order the rationals, seemingly trivially, using the fact that the ordinals are well-ordered.</p>
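The induced ordering $\leq_f$ can be illustrated with a finite toy sketch in Python (the enumeration below is an arbitrary stand-in for an initial segment of the bijection, chosen purely for illustration and not part of the answer):

```python
from fractions import Fraction

# Stand-in for an initial segment of a bijection f: alpha -> Q.
f = [Fraction(0), Fraction(1), Fraction(-1), Fraction(1, 2), Fraction(-1, 2)]
index = {q: i for i, q in enumerate(f)}   # plays the role of f^{-1}

def leq_f(p, q):
    """p <=_f q  iff  f^{-1}(p) <= f^{-1}(q)."""
    return index[p] <= index[q]

# Under <=_f every nonempty subset has a least element (smallest index).
# In particular 1 <=_f 1/2 here, unlike under the usual ordering.
assert leq_f(Fraction(1), Fraction(1, 2))
assert min([Fraction(1, 2), Fraction(-1)], key=index.get) == Fraction(-1)
```

The point of the sketch is only that pulling the well-order of the ordinals back through $f^{-1}$ mechanically yields a well-order on the image.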
<p><strong>Without AC</strong></p>
<p>Without the axiom of choice the answer is still yes, but it requires us to actually construct a bijection. </p>
<blockquote>
<p>We will use the composition of the following bijections $g:\mathbb{N}\rightarrow\mathbb{Z}$ and $f:\mathbb{Z}\rightarrow\mathbb{Q}$ to construct a bijection $h=f\circ g:\mathbb{N}\rightarrow\mathbb{Q}.$ We define $$g(n)=\begin{cases}\frac{n}{2},&\text{if}\ n\ \text{is even},\\-\frac{(n+1)}{2},&\text{if}\ \ n\ \text{is odd},\end{cases}$$ and we consider $0$ even. I'm too tired to think up/copy one right now (will probably edit later), but we can then define any bijection $f:\mathbb{Z}\rightarrow\mathbb{Q}$ that we like, and take $h=f\circ g$ to obtain a bijection between $\mathbb{N}$ and $\mathbb{Q}$, as desired.</p>
</blockquote>
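As a quick sanity check on the map $g$ above, here is a Python sketch (not part of the original answer) confirming it is a bijection on initial segments:

```python
def g(n):
    """The bijection N -> Z from the answer, with 0 counted as even:
    0, -1, 1, -2, 2, -3, 3, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

# The first 2k+1 naturals {0, ..., 2k} map exactly onto {-k, ..., k},
# so g is injective and eventually hits every integer.
k = 100
assert {g(n) for n in range(2 * k + 1)} == set(range(-k, k + 1))
print([g(n) for n in range(7)])   # [0, -1, 1, -2, 2, -3, 3]
```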
|
815,418 | <p>Ok, so I've been playing around with radical graphs and such lately, and I discovered that if the </p>
<pre><code>x_n = √(x √(x √(x ⋯ √x)))   (with n nested copies of x)
</code></pre>
<p>Then</p>
<p>$$\text{the "infinith" } x = x$$</p>
<p>Example: </p>
<p>$$\sqrt{4\sqrt{4\sqrt{4\sqrt{4\ldots}}}}=4$$<br>
Try it yourself: type calc in Google search, hit √, then a number, such as $4$, and repeat, ending with $4$ (or press the buttons instead).</p>
<p>I'm a math-head, not big enough though, I think this sequence is divergent or convergent or whatever, too lazy to search up the difference.</p>
<p>However, can this be explained to me? Like how the Pythagorean Theorem can be explained visually.</p>
| Nicky Hekster | 9,605 | <p>Suppose the nested radical converges and let $y = \sqrt{x\sqrt{x\sqrt{x\cdots}}}$. Squaring both sides gives $y^2=xy$, whence $y=x$ or $y=0$. But $y \gt 0$, so $y=x$.</p>
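The fixed-point argument can also be watched numerically; this Python sketch (illustrative, not from the answer) iterates $y \mapsto \sqrt{x\,y}$:

```python
import math

def nested_radical(x, depth=60):
    """Approximate sqrt(x sqrt(x sqrt(x ...))) by iterating y -> sqrt(x*y)."""
    y = math.sqrt(x)                 # one radical deep
    for _ in range(depth):
        y = math.sqrt(x * y)         # wrap one more sqrt(x * ...) around it
    return y

# In fact y_k = x^(1 - 2^(-k-1)), so the iterates converge to x for any x > 0.
print(nested_radical(4))             # approximately 4
print(nested_radical(7))             # approximately 7
```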
|
261,156 | <p>Well, I am trying to write a code that makes the number:</p>
<p><span class="math-container">$$123456\dots n\tag1$$</span></p>
<p>So, when <span class="math-container">$n=10$</span> we get:</p>
<p><span class="math-container">$$12345678910$$</span></p>
<p>And when <span class="math-container">$n=15$</span> we get:</p>
<p><span class="math-container">$$123456789101112131415$$</span></p>
<p>And when <span class="math-container">$n=4$</span> we get:</p>
<p><span class="math-container">$$1234$$</span></p>
| kglr | 125 | <pre><code>f1 = FromDigits @ StringRiffle[Range[#], ""] &;
f1 /@ {4, 10, 15}
</code></pre>
<blockquote>
<pre><code>{1234, 12345678910, 123456789101112131415}
</code></pre>
</blockquote>
<p>And</p>
<pre><code>f2 = FromDigits @* StringJoin @* IntegerString @* Range;
f2 /@ {4, 10, 15}
</code></pre>
<blockquote>
<pre><code>{1234, 12345678910, 123456789101112131415}
</code></pre>
</blockquote>
<p>A variation on @user1066's answer:</p>
<pre><code>f3 = Array[IntegerString, #, 1, FromDigits @* StringJoin] &;
f3 /@ {5, 10, 15}
</code></pre>
<blockquote>
<pre><code>{12345, 12345678910, 123456789101112131415}
</code></pre>
</blockquote>
|
261,156 | <p>Well, I am trying to write a code that makes the number:</p>
<p><span class="math-container">$$123456\dots n\tag1$$</span></p>
<p>So, when <span class="math-container">$n=10$</span> we get:</p>
<p><span class="math-container">$$12345678910$$</span></p>
<p>And when <span class="math-container">$n=15$</span> we get:</p>
<p><span class="math-container">$$123456789101112131415$$</span></p>
<p>And when <span class="math-container">$n=4$</span> we get:</p>
<p><span class="math-container">$$1234$$</span></p>
| AccidentalFourierTransform | 34,893 | <p>For fun, here are some more options, which are quite distinct from the already existing ones.</p>
<p>First, a recursive definition:</p>
<pre><code>f[1] = 1;
f[n_] := f[n] = f[n - 1]*10^Floor[Log[10, 10*n]] + n
</code></pre>
<p>And, second, one using <a href="https://mathematica.stackexchange.com/users/3056/kirma">kirma</a>'s and <a href="https://mathematica.stackexchange.com/users/72682/flinty">flinty</a>'s suggestion, i.e., the <a href="https://en.wikipedia.org/wiki/Champernowne_constant" rel="nofollow noreferrer">Champernowne constant</a>:</p>
<pre><code>f[n_] := IntegerPart[ChampernowneNumber[10] 10^((n + 1) Floor[Log[10, 10*n]] - (10^Floor[Log[10, 10*n]] - 1)/(10 - 1))]
</code></pre>
<p>They both yield the same answer as other posts, naturally.</p>
<p>For more info, see <a href="https://oeis.org/A007908" rel="nofollow noreferrer">OEIS</a>.</p>
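For readers without Mathematica, the recursive definition above can be cross-checked against direct string concatenation; the following Python sketch (the helper names are mine, not from the answer) mirrors both:

```python
from math import floor, log10

def f_direct(n):
    """Concatenate the decimal digits of 1, 2, ..., n into one integer."""
    return int("".join(str(k) for k in range(1, n + 1)))

def f_rec(n):
    """Mirror of the recursive definition: shift f(n-1) left by the
    number of decimal digits of n (= Floor[Log[10, 10 n]]), then add n."""
    if n == 1:
        return 1
    return f_rec(n - 1) * 10 ** floor(log10(10 * n)) + n

print(f_direct(15))                  # 123456789101112131415
assert all(f_direct(n) == f_rec(n) for n in range(1, 150))
```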
|
188,139 | <p>Let $f$ be entire and non-constant. Assuming $f$ satisfies the functional equation $f(1-z)=1-f(z)$, can one show that the image of $f$ is $\mathbb{C}$?</p>
<p>The values $f$ takes on the unit disc seems to determine $f$...</p>
<p>Any ideas?</p>
| Robert Israel | 8,508 | <p>If $f({\mathbb C})$ misses $w$, then by the functional equation it also misses $1-w$. The only case where $w = 1-w$ is $w=1/2$, and putting $z=1/2$ in the functional equation gives $f(1/2)=1-f(1/2)$, i.e. $f(1/2)=1/2$, so $1/2$ is attained. Hence any missed value would come paired with a second, distinct missed value, contradicting Picard's "little" theorem for the non-constant entire $f$.</p>
|
1,110,652 | <p>Would anyone happen to know any introductory video lectures / courses on partial differential equations? I have tried to find it without success (I found, however, on ODEs). </p>
<p>It does not have to be free material, but something not to expensive would be nice.</p>
| Sophie Clad | 190,787 | <p>Here is an answer to your question. Complete courses on PDEs are available here:</p>
<p><a href="http://nptel.ac.in/courses.php" rel="nofollow">http://nptel.ac.in/courses.php</a></p>
<p>These lectures are also on the YouTube channel 'nptel', but the contents and syllabus can be seen at the link above.</p>
|