qid | question | author | author_id | answer |
|---|---|---|---|---|
464,426 | <p>Find the limit of $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p>
<p>How should I approach it? I tried to use L'Hopital's Rule, but it just keeps giving me 0/0.</p>
| Eric Jablow | 70,913 | <p>Or, and this will lead you to the chain rule, </p>
<p>$$
\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1} = \lim_{x\to 1}\frac{x^{1/5}-1}{x-1}\lim_{x\to 1}\frac{x-1}{x^{1/3}-1},
$$</p>
<p>provided both the limits on the right exist. Continue with</p>
<p>$$
\lim_{x\to 1}\frac{x^{1/5}-1}{x-1}\lim_{x\to 1}\frac{x-1}{x^{1/3}-1} = \lim_{x\to 1}\frac{x^{1/5}-1}{x-1}\left[\lim_{x\to 1}\frac{x^{1/3}-1}{x-1}\right]^{-1}.
$$</p>
<p>You can use the definition of derivative here, or do the limits by hand.</p>
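<p>A quick numeric sanity check of the value $\frac{1/5}{1/3}=\frac{3}{5}$ that this method yields (a sketch, evaluating the ratio near $x=1$):</p>

```python
def f(x):
    # the original ratio (x^(1/5) - 1) / (x^(1/3) - 1)
    return (x ** (1 / 5) - 1) / (x ** (1 / 3) - 1)

# approach x = 1 from both sides; both one-sided values should be near
# (1/5) / (1/3) = 3/5 = 0.6
for h in (1e-3, 1e-6):
    print(f(1 + h), f(1 - h))
```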
|
147,661 | <p>Let $R$ be a (noncommutative) ring and $a \in R$ such that $a(1-a)$ is nilpotent. Why is $1+a(t-1)$ a unit in $R[t,t^{-1}]$? Probably one just has to write down an inverse element, but I could not find it. Perhaps there is a trick related to the geometric series which motivates the choice of the inverse element, which can be actually made into a formal proof because the series is finite since $a(1-a)$ is nilpotent?</p>
<p>Here is a proof that the two-sided ideal generated by $1+a(t-1)$ is $R[t,t^{-1}]$ (which already finishes the proof when $R$ is commutative): Let $A$ be the quotient, we have to show $A=0$. Now, $A$ contains elements $a,t$ such that $a(1-a)$ is nilpotent, $t$ is invertible and $(1-a) + ta = 0$. If we multiply this equation by $a^u (1-a)^v$, we see that $a^{u+1}(1-a)^v=0 \Rightarrow a^u (1-a)^v = 0$ as well as $a^u (1-a)^{v+1} = 0 \Rightarrow a^u (1-a)^v = 0$ for all $u,v \geq 0$. Since $a(1-a)$ is nilpotent, we get by induction that $a$ and $1-a$ are nilpotent. But $a$ nilpotent implies that $1-a$ is a unit, so it can only be nilpotent when $A=0$.</p>
<p>As mt_ mentions below, the general case may be reduced to the commutative case by working with the commutative subring $\mathbb{Z}[a] \subseteq R$. Anyway, I would like to know if there is a short proof which just writes down the inverse.</p>
<p>EDIT: I would also like to know why the converse is true: When $1+a(t-1)$ is a unit, why is $a(1-a)$ nilpotent? Rosenberg claims this in his book without proof (he only says "by the same reasoning as in ...", but this doesn't work the same!). Here, we cannot assume that $R$ is commutative, since the inverse <em>may</em> contain coefficients which do not lie in $\mathbb{Z}[a]$.</p>
<p>Background: This is needed in the proof of the Bass-Heller-Swan Theorem.</p>
| Bill Dubuque | 242 | <p>A simple local argument works more generally. Note that $\rm\: c = 1 + a(t-1)\:$ is a unit both $\rm\:mod\ a\:$ and $\rm\: mod\ b = a\!-\!1,\:$ since $\rm\: c\equiv 1\pmod a,\:$ and $\rm\:c\equiv t\pmod{a\!-\!1}.\:$ Now apply</p>
<p><strong>Theorem</strong> $\rm\ \ ab\:$ nilpotent $\rm\: \Rightarrow\ (U\!+\!aR)\cap(U\!+\!bR) \subseteq U\ $ where $\rm\:U = $ units of $\rm\:R$</p>
<p><strong>Proof</strong> $\ $ An element is a unit iff it lies in no maximal ideal, so it suffices to show that every element $\rm\:c\:$ of said intersection lies in no maximal ideal $\rm\:M.\:$ Suppose $\rm\:c\in M.\:$ By $\rm\:(ab)^n = 0\in M\:$ prime, either $\rm\:a\in M\:$ or $\rm\:b\in M.\:$ But if $\rm\:a\in M\,\:$ then $\rm\:c\in U\!+\!aR$ $\Rightarrow$ $\rm\:c = u + a\:r,\:$ for $\rm\:u\in U,\:r\in R,\:$ thus $\rm\:c,a\in M\:$ $\Rightarrow\:$ unit $\rm\:u = c - a\:\!r\in M,\:$ contradiction. Ditto by symmetry if $\rm\:b\in M.\ \ $ <strong>QED</strong></p>
<p>This yields an explicit inverse. With the above $\rm c$, note $\rm\:c(t)c(t^{-1}) = 1 + a(1-a)(t-1)^2 t^{-1},\:$ i.e. $\rm\: cc' = 1\!-\!n,\:$ for $\rm\:n\:$ nilpotent, say $\rm\:n^k = 0.\:$ Multiply both sides by $\rm\:d = 1+n+\cdots+n^{k-1}\:$ to get $\rm\:cc'd = 1-n^k = 1,\:$ yielding $\rm\:c^{-1} = c'd$.</p>
<p>This works generally if $\rm\:(a,b) = 1,\:$ viz. since $\rm\:c\:$ is a unit mod $\rm\:a\:$ and $\rm\:b\:$ we can use CRT to find $\rm\:c' = c^{-1}$ mod $\rm\:ab,\:$ so $\rm\:cc' = 1\!+\!abr = 1\!-\!n.\:$ Since $\rm\:ab\:$ is nilpotent, so is $\rm\:n,\:$ hence $\rm\: c^{-1} = c'd\:$ as above.</p>
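<p>A toy check of this recipe (an illustrative sketch, not part of the argument above): take $R=\mathbb{Z}/4$ and $a=2$, so $a(1-a)=2$ and $2^2=0$; Laurent polynomials over $\mathbb{Z}/4$ are encoded as dicts mapping exponents of $t$ to coefficients.</p>

```python
MOD = 4  # work over Z/4Z

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = (r.get(e, 0) + c) % MOD
    return {e: c for e, c in r.items() if c}

def mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = (r.get(e1 + e2, 0) + c1 * c2) % MOD
    return {e: c for e, c in r.items() if c}

a = 2                           # a(1-a) = -2 = 2 mod 4, and 2**2 = 0
c  = {0: (1 - a) % MOD, 1: a}   # c(t)   = 1 + a(t - 1)
cp = {0: (1 - a) % MOD, -1: a}  # c(1/t) = 1 + a(1/t - 1)

cc = mul(c, cp)                                          # = 1 - n
n = add({0: 1}, {e: (-v) % MOD for e, v in cc.items()})  # n = 1 - c c'
assert mul(n, n) == {}          # n is nilpotent: n**2 = 0
d = add({0: 1}, n)              # d = 1 + n (geometric series truncates)
inv = mul(cp, d)                # the claimed inverse c' d
print(mul(c, inv))              # {0: 1}, i.e. the constant 1
```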
|
55,076 | <p>Let $T: \mathbb{R}^3 \rightarrow \mathbb{R}^3 $ be the linear operator given by the matrix </p>
<p>$$
A = \begin{pmatrix}
1 & 0 & 0 \\
1 & 2 & 0 \\
0 & 0 & 3 \end{pmatrix}
$$</p>
<p>The problem I am trying to understand is the following. </p>
<p>True or False? If $W$ is a $T$-invariant subspace of $\mathbb{R}^3$, then there exists a $T$-invariant subspace $W'$ of $\mathbb{R}^3$ such that $W \oplus W' = \mathbb{R}^3$. I think the answer is true and I will list my ideas below, but I think there must be an easier way to approach this.</p>
<p>Since $A$ has distinct eigenvalues $1,2,3$ we see the minimal polynomial is $m_A(x) = (x-1)(x-2)(x-3)$</p>
<p>and therefore, using the fundamental structure theorem for modules over a PID, we have $\mathbb{R}^3 = \mathbb{R}[x]/(x-1)\oplus\mathbb{R}[x]/(x-2)\oplus \mathbb{R}[x]/(x-3) = \mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R} $ as $\mathbb{R}[x]$-modules</p>
<p>From this calculation does it follow that $W' = \mathbb{R}\oplus\mathbb{R}$ is the invariant subspace we are looking for?</p>
<p>Is there a way to do this problem without using the fundamental structure theorem for modules over a PID?</p>
| Dennis Gulko | 6,948 | <p>The answer is true for the given matrix $A$, since it is semi-simple (diagonalizable), as Geoff explained in his answer, but this is not true in general: for example, if you take $$B = \begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 0 & 1 \end{pmatrix}$$
Then $W=Span\{(0,1,0),(0,0,1)\}$ is $B$-invariant, but there is no $B$-invariant subspace $W'$ such that $W\oplus W'=\mathbb{R}^3$.</p>
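<p>A concrete way to see why no invariant complement exists here (a small sketch in plain Python, with $B$ acting on column vectors):</p>

```python
B = [[1, 0, 0],
     [1, 1, 0],
     [0, 0, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# W = span{(0,1,0), (0,0,1)}: both basis vectors are fixed by B, so W is
# B-invariant
assert matvec(B, [0, 1, 0]) == [0, 1, 0]
assert matvec(B, [0, 0, 1]) == [0, 0, 1]

# A B-invariant complement W' would be a line spanned by an eigenvector v
# with v[0] != 0.  But (B - I)v = 0 reads v[0] = 0 (second row), so every
# eigenvector of B lies in W; e.g. (1, 0, 0) is moved off its own line:
print(matvec(B, [1, 0, 0]))   # [1, 1, 0], not a multiple of (1, 0, 0)
```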
|
55,076 | <p>Let $T: \mathbb{R}^3 \rightarrow \mathbb{R}^3 $ be the linear operator given by the matrix </p>
<p>$$
A = \begin{pmatrix}
1 & 0 & 0 \\
1 & 2 & 0 \\
0 & 0 & 3 \end{pmatrix}
$$</p>
<p>The problem I am trying to understand is the following. </p>
<p>True or False? If $W$ is a $T$-invariant subspace of $\mathbb{R}^3$, then there exists a $T$-invariant subspace $W'$ of $\mathbb{R}^3$ such that $W \oplus W' = \mathbb{R}^3$. I think the answer is true and I will list my ideas below, but I think there must be an easier way to approach this.</p>
<p>Since $A$ has distinct eigenvalues $1,2,3$ we see the minimal polynomial is $m_A(x) = (x-1)(x-2)(x-3)$</p>
<p>and therefore, using the fundamental structure theorem for modules over a PID, we have $\mathbb{R}^3 = \mathbb{R}[x]/(x-1)\oplus\mathbb{R}[x]/(x-2)\oplus \mathbb{R}[x]/(x-3) = \mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R} $ as $\mathbb{R}[x]$-modules</p>
<p>From this calculation does it follow that $W' = \mathbb{R}\oplus\mathbb{R}$ is the invariant subspace we are looking for?</p>
<p>Is there a way to do this problem without using the fundamental structure theorem for modules over a PID?</p>
| Pierre-Yves Gaillard | 660 | <p>Let $a$ be an endomorphism of a finite dimensional vector space $V$ over a field $K$. Assume that the eigenvalues of $a$ are in $K$. Let $\lambda$ be such an eigenvalue, let $f\in K[X]$ be the minimal polynomial of $a$, and let $d$ be its degree. Then there is a unique polynomial $g_\lambda$ of degree $ < d$ such that $g_\lambda(a)$ is the projector onto the generalized $\lambda$-eigenspace. Moreover $g_\lambda$ is determined by the congruences
$$g_\lambda\equiv\delta_{\lambda,\mu}\ \bmod\ (X-\mu)^{m(\mu)}$$
for all eigenvalues $\mu$, where $m(\mu)$ is the multiplicity of $\mu$ as a root of $f$, and where $\delta$ is the Kronecker symbol. If $K$ is of characteristic $0$, these congruences can be solved by Taylor's Formula.</p>
<p><strong>EDIT.</strong> We have $$g_\lambda=T_\lambda\left(\frac{(X-\lambda)^{m(\lambda)}}{f}\right)\frac{f}{(X-\lambda)^{m(\lambda)}}\quad,$$
where $T_\lambda$ means "degree $ < m(\lambda)$ Taylor polynomial at $X=\lambda$". </p>
<p>In particular, if, as in your case, all the $m(\lambda)$ are equal to one, we get
$$g_\lambda=\prod_{\mu\not=\lambda}\ \frac{X-\mu}{\lambda-\mu}\quad.$$</p>
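<p>For the matrix $A$ in the question (eigenvalues $1,2,3$, each with $m(\lambda)=1$), the last formula is plain Lagrange interpolation; a small numeric sketch of the resulting projectors:</p>

```python
A = [[1, 0, 0],
     [1, 2, 0],
     [0, 0, 3]]
I = [[float(i == j) for j in range(3)] for i in range(3)]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def proj(lam, eigs=(1, 2, 3)):
    # g_lam(A) = product over mu != lam of (A - mu*I) / (lam - mu)
    P = I
    for mu in eigs:
        if mu != lam:
            M = [[(A[i][j] - mu * I[i][j]) / (lam - mu) for j in range(3)]
                 for i in range(3)]
            P = mmul(P, M)
    return P

Ps = [proj(lam) for lam in (1, 2, 3)]
for P in Ps:
    assert mmul(P, P) == P           # each g_lam(A) is idempotent
S = [[sum(P[i][j] for P in Ps) for j in range(3)] for i in range(3)]
print(S == I)                        # the projectors sum to the identity
```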
|
1,681,205 | <p>I would like a <strong>hint</strong> for the following, more specifically, what strategy or approach should I take to prove the following?</p>
<p><em>Problem</em>: Let $P \geq 2$ be an integer. Define the recurrence
$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor$$
with initial conditions:
$$p_0 = P + \left\lfloor \frac{P}{2} \right\rfloor$$
$$p_1 = P + 2\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_2 = P + 3\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_3 = P + 4\left\lfloor \frac{P}{2} \right\rfloor$$</p>
<p>Prove that the following limit converges:
$$\lim_{n\rightarrow \infty} \frac{p_n}{z^n}$$
where $z$ is the positive real solution to the equation $x^4 - x^3 - \frac{1}{2} = 0$.</p>
<p><em>Note</em>: I've already proven the following:
$$\lim_{n\rightarrow \infty} \frac{p_n}{p_{n-1}} = z$$
Any ideas? I am not sure if this result helps. The sequence $p_n/z^n$ is also bounded above and below. I've attempted to show that the sequence $\frac{p_n}{z^n}$ is Cauchy, but had no luck with that. I don't know what the limit converges to either.</p>
<p><em>Edit</em>: I believe the limit should converge as $p_n$ achieves an end behaviour of the form $cz^n$ for $c \in \mathbb{R}$ (this comes from the fact that the limit of the ratios of $p_n$ converge to $z$), however I do not know how to make this rigorous.</p>
<p><em>Edit 2</em>: Proving the limit exists is equivalent to showing
$$p_0 \cdot \prod_{n=1}^{\infty} \left( \frac{p_n/p_{n-1}}{z} \right)$$
converges.</p>
<p><strong>UPDATED:</strong></p>
<p>If someone could prove that $|p_n-z \cdot p_{n-1}|$ is bounded above (or converges, or diverges), then the proof is complete.</p>
| KonKan | 195,021 | <p>Let us start with the solution of the homogeneous recurrence
$$\phi_n = \phi_{n-1} + \frac{\phi_{n-4}}{2}$$
its characteristic equation is
$$x^4 - x^3 - \frac{1}{2} = 0$$
This equation has $4$ solutions: two of them are complex, and the other two are a negative and a positive real number. Their approximate values, as given by Mathematica, are:
$$z_1=1.25372, \ \ z_2=-0.669107, \ \ z_3=0.207691 + 0.743573 i, \ \ z_4=0.207691 - 0.743573 i,$$
(the root you call $z$ in the question is the one labeled $z_1$ above). Notice that the positive real solution $z=z_1=1.25372$ is the one with the greatest magnitude among the $4$ solutions (actually, it is the only one whose magnitude exceeds $1$). </p>
<p>Now, the general solution to the homogeneous recurrence is:
$$\phi_n=c_1z^n_1+c_2z^n_2+c_3z^n_3+c_4z^n_4$$
where $c_1, c_2, c_3, c_4$ are constants to be determined from the initial conditions posed in your question. Since $z=z_1=1.25372$ has the greatest magnitude among the roots of the characteristic equation, the above general solution asymptotically (for $n$ large enough) tends to $c_1z^n$ i.e.
$$\phi_n\sim c_1z^n$$
Consequently,
$$\frac{\phi_n}{z^n}\sim c_1 \ \ \textrm{ i.e. } \ \ \lim_{n\rightarrow\infty}\frac{\phi_n}{z^n}=c_1$$
where $c_1$ will be determined by the solution of the $4\times 4$ linear system of equations
$$
\phi_i=c_1z^i_1+c_2z^i_2+c_3z^i_3+c_4z^i_4
$$
for $i=0,1,2,3$, with $\phi_i = p_i$ given by the initial conditions posted in the question and $z_1=z,z_2,z_3,z_4$ the roots of the characteristic equation given above. </p>
<p>Let me now try to justify why the convergence of $\frac{\phi_n}{z^n}$ implies also the convergence of $\frac{p_n}{z^n}$. The recurrence
$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor = p_{n-1} + \frac{p_{n-4}}{2} - \epsilon_n$$
differs from the homogeneous one by a bounded function $0\leq\epsilon_n< 1$ of $n$. Since we are dealing with linear recurrences and increasing sequences $p_n$, $\phi_n$, and we are interested in the asymptotic behaviour of the solutions for large $n$, the two are essentially the same. The solutions $p_n$ and $\phi_n$ differ by an $O(1)$ special solution (of the posted, non-homogeneous recurrence):
$$
p_n=\phi_n+O(1) \Rightarrow p_n\sim\phi_n\Rightarrow\frac{p_n}{z^n}\sim \frac{\phi_n}{z^n}\Rightarrow\lim_{n\rightarrow\infty}\frac{p_n}{z^n}=c_1
$$
We can also see that the bigger the value of $P\geq 2$ (given in the initial conditions), the quicker $\frac{p_n}{z^n}$ converges.</p>
<p><strong>P.S.</strong> Regarding the estimate that the general solutions $p_n$ and $\phi_n$ of the respective recurrences differ by an $O(1)$ special solution of the non-homogeneous one, my argument is the following: when dealing with non-homogeneous linear recurrences with constant coefficients, i.e.
$$
p_n+c_1p_{n−1}+...+c_dp_{n−d}=h(n)
$$
and $h(n)=const$, then it is customary to seek a constant special solution. Since here the non-homogeneous term is the bounded function $0\leq\epsilon_n< 1$, I guess it is reasonable to conjecture that the corresponding special solution is $O(1)$. </p>
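<p>A numeric sketch of this convergence for $P=2$ (the bisection used for $z$ below is just one way to get the root):</p>

```python
def g(x):
    return x ** 4 - x ** 3 - 0.5

lo, hi = 1.0, 2.0             # g(1) = -0.5 < 0 < g(2) = 7.5
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid
z = lo                        # ~1.25372, the positive real root

P = 2
p = [P + P // 2, P + 2 * (P // 2), P + 3 * (P // 2), P + 4 * (P // 2)]
for n in range(4, 200):
    p.append(p[-1] + p[-4] // 2)

for n in (25, 50, 100, 199):
    print(n, p[n] / z ** n)   # the ratio settles to the constant c_1
```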
|
1,746,180 | <p>I have already solved a few integrals with substitution, but in this case I have no idea how to start. How do I solve the integral $$\int_0^{\pi/2} \frac{\sqrt{\sin(x)}}{\sqrt{\sin(x)}+\sqrt{\cos(x)}} dx$$ using the substitution $x=\frac{\pi}{2}-t$? Can you tell me how to start?
It would be great!</p>
| Dr. Sonnhard Graubner | 175,066 | <p>After the substitution we have the integral
$$-\int_{\pi/2}^0{\frac {\sqrt {\cos \left( t \right) }}{\sqrt {\cos \left( t \right) }
+\sqrt {\sin \left( t \right) }}}dt$$</p>
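<p>Adding this to the original integral $I$ gives $2I=\int_0^{\pi/2}\frac{\sqrt{\sin t}+\sqrt{\cos t}}{\sqrt{\sin t}+\sqrt{\cos t}}\,dt=\frac{\pi}{2}$, so $I=\frac{\pi}{4}$. A quick numeric check (midpoint rule; the step count is an arbitrary choice):</p>

```python
import math

def integrand(x):
    s, c = math.sqrt(math.sin(x)), math.sqrt(math.cos(x))
    return s / (s + c)

N = 100000
h = (math.pi / 2) / N
I = sum(integrand((k + 0.5) * h) for k in range(N)) * h
print(I, math.pi / 4)   # both ~0.785398
```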
|
335,116 | <p>As a PhD student, if I want to do something algebraic / linear-algebraic such as representation theory as well as do PDEs, in both the theoretical and numerical aspects of PDEs, would this combination be compatible and / or useful? Is it feasible?</p>
<p>I'd be grateful for an online resource to look into.</p>
<p>Thanks,</p>
| Dima Pasechnik | 11,100 | <p>There is e.g. a book
<a href="https://singer.math.ncsu.edu/papers/dbook2.ps" rel="noreferrer">Differential Galois Theory</a> by M. van der Put and M. F. Singer, where in Appendix D one can find things on the PDE case (the book is mostly about ODEs).</p>
<p>In mathematical physics there are topics such as <a href="https://en.wikipedia.org/wiki/Knizhnik%E2%80%93Zamolodchikov_equations" rel="noreferrer">KZ equations</a>
which are related to linear representations of braid groups.</p>
|
2,764,381 | <p>One thing I don't understand is what $\sin(0)$ and $\sin(1)$ are, exactly. I am alright with the concept of radians ($\pi$) but don't understand the $0$ and $1$. What do they mean?</p>
| Karn Watcharasupat | 501,685 | <p>Generally, unless stated otherwise, the interval $[0,1]$ means the range of $x$ is from $0$ radians to $1$ radian.</p>
<p>$\sin 0$ is basically $\sin x|_{x=0}=0$. For $\sin 1$, the argument is $1$ radian, i.e. $\frac{180^\circ}{\pi}\approx 57.3^\circ$, so $\sin 1 \approx 0.8415$ (whereas $\sin 1^\circ \approx 0.0175$).</p>
<p>This is how it looks like:
<a href="https://i.stack.imgur.com/l8sYr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l8sYr.jpg" alt="enter image description here"></a></p>
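<p>A quick check of the radian/degree distinction (note that $0.0175$ is the value of $\sin 1^\circ$, while $\sin 1$ in radians is about $0.8415$):</p>

```python
import math

print(math.sin(0))                 # 0.0
print(math.sin(1))                 # 1 radian -> ~0.8415
print(math.sin(math.radians(1)))   # 1 degree -> ~0.0175
```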
|
290,903 | <p>I am unable to understand the fundamental difference between a Gradient vector and a Tangent vector.
I need to understand the geometrical difference between the two. </p>
<p>By Gradient I mean a vector $\nabla F(X)$ , where $ X \in [X_1 X_2\cdots X_n]^T $</p>
<p>Note: I saw similar questions on "Difference between a Slope and Gradient" but the answers didn't help me much.</p>
<p>Appreciate any effort.</p>
| dekuShrub | 556,042 | <p>The gradient is like the derivative of a function of multiple variables. It shows the rate of change with respect to all the given variables of the function, and it is also a vector that points in the direction of the normal to a level set of the function at a given point.</p>
<p>A tangent vector points along a level set of the function, and the tangent vectors at a point span the tangent plane to that set (a plane, if the level set is a surface in 3D). The tangent vectors are thus perpendicular to the gradient.</p>
<p>It is easy to find the tangent plane from the normal vector, since the components of the normal vector are the coefficients in the equation of the tangent plane.</p>
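<p>A tiny numeric illustration (the function $F(x,y)=x^2+y^2$ is my example, not from the discussion above): on a level circle of $F$, the gradient is perpendicular to the tangent direction.</p>

```python
import math

def grad_F(x, y):
    # gradient of F(x, y) = x^2 + y^2
    return (2 * x, 2 * y)

theta = 0.7
x, y = math.cos(theta), math.sin(theta)        # a point on the unit circle
tangent = (-math.sin(theta), math.cos(theta))  # tangent to the circle there

g = grad_F(x, y)
dot = g[0] * tangent[0] + g[1] * tangent[1]
print(dot)   # ~0: the gradient is normal to the level curve
```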
|
2,104,984 | <p>For every $x\in \mathbb{R}$, $f(x+6)+f(x-6)=f(x)$ is satisfied. What may be the period of $f(x)$?
I tried writing several $f$ values but I couldn't get something like $f(x+T)=f(x)$.</p>
| Mark Bennet | 2,906 | <p>Here is a big hint: apply the identity you have been given to $f(x+6)$ to obtain $f(x+6)=f(x+12)+f(x)$ and then ...</p>
|
2,104,984 | <p>For every $x\in \mathbb{R}$, $f(x+6)+f(x-6)=f(x)$ is satisfied. What may be the period of $f(x)$?
I tried writing several $f$ values but I couldn't get something like $f(x+T)=f(x)$.</p>
| Bumblebee | 156,886 | <p>If $f(x+6)+f(x-6)=f(x),$ then $f(x+12)+f(x)=f(x+6)$ and therefore $f(x+12)=-f(x-6)=-(-f(x-24)).$ Hence you have $$f(x)=f(x-36).$$</p>
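<p>One concrete function satisfying the identity, useful for checking the period $36$ (the choice $f(x)=\cos(\pi x/18)$ is an illustration; it works because $\cos(a+b)+\cos(a-b)=2\cos a\cos b$ and $\cos(\pi/3)=\frac12$):</p>

```python
import math

def f(x):
    return math.cos(math.pi * x / 18)

for x in (0.0, 1.3, 7.7, -4.2):
    assert abs(f(x + 6) + f(x - 6) - f(x)) < 1e-12   # the given identity
    assert abs(f(x + 18) + f(x)) < 1e-12             # f(x + 18) = -f(x)
    assert abs(f(x + 36) - f(x)) < 1e-12             # period 36
print("ok")
```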
|
772,315 | <p>Statement: $\limsup\limits_{n\to\infty} c_n a_n = c \limsup\limits_{n\to\infty} a_n$</p>
<p>Please help find a counterexample to this statement if $c<0$.</p>
<p>Edit: also suppose $c_n \to c$ and $\limsup a_n$ is finite</p>
| Joe | 107,639 | <p>$c_n\equiv-1$, $a_n=\sin\left(\frac{n\pi}{6}\right)$.</p>
<p>Then $\limsup(c_na_n)=1$, but $\limsup a_n=1$ too; hence $c\limsup a_n=-\limsup a_n=-1\neq 1=\limsup(c_na_n)$.</p>
|
2,173,918 | <p>Let $f(z)=\sum\limits_{k=1}^\infty\frac{z^k}{1-z^k}$. I want to show that this series represents a holomorphic function in the unit disk. I'm, however, quite confused. For example, is $f(z)$ even a power series? It doesn't look as such. Here's what I have so far come up with.</p>
<blockquote>
<p>Proof:</p>
</blockquote>
<p>$$ \sum\limits_{k=1}^\infty\frac{z^k}{1-z^k}=-\sum\limits_{k=1}^\infty\frac{1}{1-z^{-k}}=-\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty z^{kn}$$</p>
<p>[If the above expression is correct then we have a Laurent series of a power series].</p>
<p>Now, $|c_n|=1$ for all $n$. By Parseval's formula,</p>
<p>$$2\pi\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty \rho^{2kn}=\sum\limits_{k=-\infty}^{-1} \int\limits_0^{2\pi}\left|g(\rho^ke^{it})\right|^2dt=^? \int\limits_0^{2\pi} \sum\limits_{k=1}^{\infty} \left|g(\rho^{-k}e^{it})\right|^2dt$$
where $0\le\rho\le 1$, and $g$ is some function (holomorphic) on the unit disk. So we know that this integral (above in the middle) exists.</p>
<p>Now, I believe, what remains to be proved is that the infinite series of this integral also exists. Do you think this approach is OK, or did I make some mistakes in it? How can we prove that the series $\sum\limits_{k=-\infty}^{-1}\sum\limits_{n=0}^\infty \rho^{2kn}$ converges? And then, since it converges, does this imply that $G$ is holomorphic?</p>
<p>Thanks a lot.</p>
| Jacky Chong | 369,395 | <p>Hint: Observe
\begin{align}
f_k(z) = \frac{z^k}{1-z^k}
\end{align}
is definitely holomorphic on the open unit disc for all $k$. In particular, we have that
\begin{align}
g_n(z) = \sum^n_{k=1}f_k(z)
\end{align}
is also holomorphic on the unit disc. Now, show that $g_n$ is a normally convergent sequence, i.e. show that $g_n$ converges uniformly on compact subsets of the unit disc. </p>
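<p>To carry out this last step (a sketch of the standard estimate): for $|z|\le r<1$ one has
$$\left|\frac{z^k}{1-z^k}\right|\le\frac{r^k}{1-r^k}\le\frac{r^k}{1-r},$$
and $\sum_{k=1}^\infty \frac{r^k}{1-r}$ converges, so by the Weierstrass $M$-test the series converges uniformly on $\{|z|\le r\}$. Every compact subset of the unit disc lies in some such closed disc, so $g_n$ converges normally and the limit is holomorphic.</p>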
|
4,041,758 | <p>If you have a line passing through the middle of a circle, does it create a right angle at the intersection of the line and curve?</p>
<p>More generally, is it valid to define an angle between a line and a curve? Is using the tangent to the curve at the point of intersection a valid interpretation (i.e. a semicircle has $2$ right angles)?</p>
<p>I saw it in a debate thread and it got me curious now.</p>
| Narasimham | 95,860 | <p>Yes. The semicircle can sometimes even be referred to as a (curvilinear) <em>diangle</em>; the sum of the two shown right angles is <span class="math-container">$\pi$</span>.</p>
<p>If Q and P are opposite points of a diameter in a circle then the tangent at Q makes a right angle to the line PQ. Similarly tangent at P makes a right angle to the line QP.</p>
<p>The question is about two possibilities for the "diangle", and both cases are shown in a rough sketch below. The second region, however, cannot be called a semicircle, but rather a lens shape, etc.</p>
<p><a href="https://i.stack.imgur.com/ykZG2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ykZG2.png" alt="enter image description here" /></a></p>
|
964,989 | <p>Question: Find the order of $(1/2)^{1/2}$, $(1/e)^{1/e}$, $(1/3)^{1/4}$ without using calculator.</p>
<p>Extra constraint: You only have about 150 seconds to do it; failing to do so will, eh... make you run out of time on the exam, which affects the chance of being admitted into graduate school!</p>
<p>Back to the question: when I first saw it I tried to use $(1/2.7)^{1/3}$ to approximate $(1/e)^{1/e}$, compared the $12^{th}$ powers of the numbers, and ended up getting the wrong result.</p>
<p>Later (too late!) I found out that $f(x)=(1/x)^{1/x}$ has a minimum at $x=e$, by taking the derivative $f'(x)=f(x)(\ln(x)-1)/x^2$. In addition, the second derivative (or some kind of feeling) is required to make sure it's a minimum, not a maximum. However, I feel this approach might take unnecessarily long.</p>
<p>Had you seen this question, what would have been your first instinct/approach? Is there any faster way to do it?</p>
| Pieter21 | 170,149 | <p>I'd first raise both $(1/2)^{1/2}$ and $(1/3)^{1/4}$ to the power $4$, to find $1/4 < 1/3$. (Or, if the exponent $1/4$ is a typo for $1/3$, raise to the power $6$ instead, to compare $1/8$ and $1/9$; here $1/8 > 1/9$.)</p>
<p>For the comparisons with $(1/e)^{1/e}$, I'd take logarithms and compare, knowing $\ln(2)$ is about $0.69$ and $\ln(3)$ is more than $1$.</p>
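<p>A quick check of both tricks (values approximate):</p>

```python
import math

a = (1 / 2) ** (1 / 2)            # ~0.7071
b = (1 / math.e) ** (1 / math.e)  # ~0.6922
c = (1 / 3) ** (1 / 4)            # ~0.7598

# power trick: raising positive numbers to the 4th power preserves order
assert a ** 4 < c ** 4            # 1/4 < 1/3, hence a < c
# log trick: ln b = -1/e ~ -0.368 < ln a = -ln(2)/2 ~ -0.347, hence b < a
assert math.log(b) < math.log(a)
print(b, a, c)   # (1/e)^(1/e) < (1/2)^(1/2) < (1/3)^(1/4)
```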
|
539,363 | <p>Two horses start simultaneously towards each other and meet after 3 h 20 min. How much time will it take the slower horse to cover the whole distance if the first arrived at the place of departure of the second 5 hours later than the second arrived at the place of departure of the first.</p>
<p><strong>MY TRY</strong>::</p>
<p>Let the speed of the 1st be $a$ kmph and of the 2nd be $b$ kmph.</p>
<p>Let the distance between A and B be $d$ km.</p>
<p>$d = \frac{10a}{3} + \frac{10b}{3}$</p>
<p>and</p>
<p>$\frac{d}{a} - \frac{d}{b} = 5$</p>
<p>Now I can't solve it. :(</p>
<p><strong>Spoiler</strong>: The answer is $10$ hours.</p>
| BaronVT | 39,526 | <p>First, let's identify the target: the slower horse's time is $\frac{d}{a} = \frac{d}{b}+5$, so it suffices to find $\frac{d}{b}$. Solve for $a$ in your first equation, $a = \frac{3}{10} d - b$, and substitute into the second equation</p>
<p>$$
\frac{d}{\frac{3}{10} d - b} - \frac{d}{b} = 5\\
db- d\left(\frac{3}{10} d - b\right) = 5b\left(\frac{3}{10} d - b\right)\\
d\left(2b - \frac{3}{10}d \right) = \frac{3}{2}bd - 5b^2 \\
\frac{3}{10}d^2- \frac{1}{2}d b -5b^2 = 0
$$
then, dividing by $b^2$
$$
\frac{3}{10}\left(\frac{d}{b}\right)^2 - \frac{1}{2}\frac{d}{b} - 5 =0 \\
3\left(\frac{d}{b}\right)^2 - 5\frac{d}{b} - 50 =0
$$
which is a quadratic in $\frac{d}{b}$: its positive root gives $\frac{d}{b} = 5$, and then the slower horse's time is $\frac{d}{a} = \frac{d}{b} + 5 = 10$ hours.</p>
|
539,363 | <p>Two horses start simultaneously towards each other and meet after 3 h 20 min. How much time will it take the slower horse to cover the whole distance if the first arrived at the place of departure of the second 5 hours later than the second arrived at the place of departure of the first.</p>
<p><strong>MY TRY</strong>::</p>
<p>Let the speed of the 1st be $a$ kmph and of the 2nd be $b$ kmph.</p>
<p>Let the distance between A and B be $d$ km.</p>
<p>$d = \frac{10a}{3} + \frac{10b}{3}$</p>
<p>and</p>
<p>$\frac{d}{a} - \frac{d}{b} = 5$</p>
<p>Now I can't solve it. :(</p>
<p><strong>Spoiler</strong>: The answer is $10$ hours.</p>
| copper.hat | 27,978 | <p>Let $s_i$ be the speeds of the horses, with $s_1>s_2$ for definiteness. Let $d$ be the total distance. Let $t_i$ be the time taken to cover $d$, that is, $t_i = \frac{d}{s_i}$.</p>
<p>Clearly we have $s_i >0, d>0$.</p>
<p>Since the horses meet after 200 mins., we have the total distance travelled to be $d$, that is, $200(s_1+s_2) = d$. This gives $200(\frac{s_1}{d}+\frac{s_2}{d}) = 200(\frac{1}{t_1}+\frac{1}{t_2}) =1$,</p>
<p>Since the slower horse takes 300 mins. longer to cover $d$, we have $t_2-t_1 = 300$.</p>
<p>Substituting $t_1 = t_2-300$ into $200(\frac{1}{t_1}+\frac{1}{t_2}) =1$ gives $200(\frac{1}{t_2-300}+\frac{1}{t_2}) =1$, multiplying across by $t_2(t_2-300)$ simplifies to $200(2t_2-300) = t_2(t_2-300)$, which reduces to $(t_2-100)(t_2-600) = 0$. Using $t_1 = t_2-300$ gives solutions $(-200,100), (300,600)$.</p>
<p>The only solution with $t_i >0$ is $(300,600)$, so the answer is $t_2 = 600$ mins.</p>
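<p>A three-line check of this solution (times in minutes):</p>

```python
t1, t2 = 300, 600   # faster and slower horse's times for the full distance
assert t2 - t1 == 300                              # 5 h = 300 min longer
assert abs(200 * (1 / t1 + 1 / t2) - 1) < 1e-12    # they meet at 200 min
print(t2 / 60)   # 10.0 hours
```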
|
4,417,896 | <p>I have only found information regarding doing this by integration by parts. By differentiating under the integral sign, I let
<span class="math-container">$$I_n = \int_0^\infty x^n e^{-\lambda x} dx $$</span>
and get <span class="math-container">$\frac{dI_n}{d\lambda} = -I_{n+1} $</span> and therefore <span class="math-container">$\frac{dI_n}{d\lambda} = -\frac{n+1}{\lambda} I_n$</span>. Proceeding from here I solve the ODE to get <span class="math-container">$I_n = Ae^{-\frac{n+1}{\lambda}x}$</span>.</p>
<p>This is clearly wrong. What went wrong? I am unsure how to proceed with this differentiation of the integral approach to solve this problem.</p>
| Eugene | 726,796 | <p>Let us differentiate the function
<span class="math-container">$$
\begin{aligned}
I_n(\lambda) = \int_0^{+\infty}x^ne^{-\lambda x}dx
\end{aligned}
$$</span></p>
<p>with respect to <span class="math-container">$\lambda$</span>:</p>
<p><span class="math-container">$$
\begin{aligned}
\frac{d}{d\lambda}I_n(\lambda) &= \int_0^{+\infty}x^n\frac{d}{d\lambda}\left(e^{-\lambda x}\right)dx = \int_0^{+\infty}x^n \left(-xe^{-\lambda x}\right)dx = \\
&= \int_0^{+\infty}x^{n+1}d\left(\frac{1}{\lambda}e^{-\lambda x}\right) = \\
&= \underbrace{\left. x^{n+1}\left(\frac{1}{\lambda}e^{-\lambda x}\right)\right|_0^{+\infty}}_{=0}-\int_0^{+\infty}\left(\frac{1}{\lambda}e^{-\lambda x}\right)(n+1)x^ndx = \\
&= -\frac{n+1}{\lambda}\int_0^{+\infty}x^ne^{-\lambda x}dx = -\frac{n+1}{\lambda}I_n(\lambda).
\end{aligned}
$$</span></p>
<p>Thus, we have a differential equation for <span class="math-container">$I_n(\lambda)$</span>:
<span class="math-container">$$
\begin{aligned}
\frac{d}{d\lambda}I_n(\lambda) = -\frac{n+1}{\lambda}I_n(\lambda) &\Leftrightarrow \frac{dI_n}{I_n} = -(n+1)\frac{d\lambda}{\lambda} \Leftrightarrow \log(I_n) = -(n+1)\log(\lambda) + C \Leftrightarrow \\
&\Leftrightarrow \log(I_n) = \log\left(C\lambda^{-(n+1)}\right) \Leftrightarrow I_n(\lambda) = \frac{C}{\lambda^{n+1}}.
\end{aligned}
$$</span></p>
<p>So, <span class="math-container">$I_n(\lambda) = \frac{C}{\lambda^{n+1}}$</span>. To find the constant <span class="math-container">$C$</span>, one needs an initial condition.</p>
<p>Let us calculate <span class="math-container">$I_n(1)$</span>. Then, <span class="math-container">$C = I_n(1)$</span>.</p>
<p><span class="math-container">$$
\begin{aligned}
I_n(1) &= I_n = \int_0^{+\infty}x^ne^{-x}dx = \left|\text{integrating by parts}\right| = \\
&= nI_{n-1} = n(n-1)I_{n-2} = \ldots = n!I_0 = \\
&= n!\underbrace{\int_0^{+\infty}e^{-x}dx}_{=1} = n! = C.
\end{aligned}
$$</span></p>
<p>Finally, we have
<span class="math-container">$$
I_n(\lambda) = \int_0^{+\infty}x^ne^{-\lambda x}dx = \frac{n!}{\lambda^{n+1}}
$$</span></p>
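<p>A numeric sanity check of the final formula (the truncation point and step count below are arbitrary choices):</p>

```python
import math

def I_num(n, lam, upper=60.0, steps=100000):
    # midpoint-rule approximation of the integral, truncated at `upper`
    h = upper / steps
    return sum(((k + 0.5) * h) ** n * math.exp(-lam * (k + 0.5) * h)
               for k in range(steps)) * h

for n, lam in [(3, 1.0), (5, 2.0)]:
    print(I_num(n, lam), math.factorial(n) / lam ** (n + 1))
```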
|
1,383,380 | <p>On page 12 of Stein and Shakarchi's textbook 'Complex Analysis', the authors state that the <em>Cauchy-Riemann equations link complex and real analysis</em>. I have completed courses on real and complex analysis, but I feel that this is somewhat of an over-statement. But perhaps it is just me who doesn't have a good enough overview.</p>
<p>$$
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x}
$$</p>
<p>If anyone with a clear insight is able to concisely explain how one could justify writing something like this -- then that insight would be most valuable.</p>
| Augustin | 241,520 | <p>There could be both!</p>
<p>The existence of such a point is a consequence of the intermediate value theorem.</p>
|
464,489 | <p>We are given $H = \{(1),(13),(24),(12)(34),(13)(24),(14)(23),(1234),(1432)\}$ is a subgroup of $S_4$. Also assume $K = \{(1),(13)(24)\}$ is a normal subgroup of $H$. Show $H/K$ isomorphic to $Z_2\oplus Z_2$. </p>
<p>This is just a practice question (not assignment). So I have tried finding $H/K$ explicitly.</p>
<p>$H/K = \{\{(1),(13)(24)\},\{(13),(24)\},\{(14)(23),(12)(34)\},\{(1234),(1432)\}\}$. We know there are only $2$ groups of order $4$. One of the elements in $H/K$ is $(1234)K$; doesn't this element have order $4$, making $H/K$ cyclic and hence not isomorphic to $Z_2\oplus Z_2$? </p>
| Prahlad Vaidyanathan | 89,789 | <p>No - be careful. $(1234)^2 = (13)(24) \in K$. Hence, $\left((1234)K\right)^2 = K$, so $(1234)K$ has order $2$ in $H/K$. In fact, you can just check by hand that all the elements of $H$ either square to $(1)$ or to $(13)(24)$ [You need to use the fact that disjoint cycles commute]. Hence, every element of $H/K$ has order $\leq 2$, which means $H/K$ has to be isomorphic to $\mathbb{Z}_2\oplus \mathbb{Z}_2$.</p>
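<p>A brute-force check of the claim that every element of $H$ squares into $K$ (permutations encoded as tuples of images, i.e. $s$ sends $i$ to <code>s[i-1]</code>; an illustrative sketch):</p>

```python
e = (1, 2, 3, 4)
H = [
    e,
    (3, 2, 1, 4),   # (13)
    (1, 4, 3, 2),   # (24)
    (2, 1, 4, 3),   # (12)(34)
    (3, 4, 1, 2),   # (13)(24)
    (4, 3, 2, 1),   # (14)(23)
    (2, 3, 4, 1),   # (1234)
    (4, 1, 2, 3),   # (1432)
]
K = {e, (3, 4, 1, 2)}

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i - 1] - 1] for i in (1, 2, 3, 4))

# every element squares into K, so every coset of K has order <= 2 in H/K
for s in H:
    assert compose(s, s) in K
print("all squares lie in K")
```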
|
1,602,312 | <p>Can we prove that the pair $\left(\sum_{i=1}^k x_i, \prod_{i=1}^k x_i\right)$ is unique for $x_i \in \mathbb{R}_{>0}$?<br>
I stated that conjecture to solve a CS task, but I do not know how to prove it (or I should stop using it if it is a false assumption). </p>
<p>Is the pair of sum and product of an array of $k$ real numbers $> 0$ unique per array? I do not want to find the underlying numbers or reverse this pair into a factorisation. I just assumed that the pair is unique per given multiset, in order to compare two arrays of equal length and determine whether the numbers inside, with possible multiplicities, are the same. The order within the array does not matter. $0$ is discarded to give a meaningful sum and product.</p>
<p>At the input I get two arrays of equal length, one of which is sorted, and I want to determine if the elements with multiplicities are the same, but I do not want to sort the second array (sorting would take more than linear time, and linear time is the objective).<br>
So I thought to make checksums; the sum and product seemed unique to me, hence the question.</p>
<p>For example:<br>
I have two arrays: v=[1,3,5,5,8,9.6] and the second g=[9.6,5,3,1,5,8].<br>
I calculate sum = 31.6, and product = 5760. For both arrays, since one is sorted version of another.<br>
And now I calculate this for b=[1,4,4,5,8,9.6], sum is 31.6 but product is 6144.
So I assumed that if both sum and product are not the same for two given arrays, than the element differs.</p>
<p>Getting back to the question, I am thinking that the pair {sum, product} is the same for all permutations of an array (which is desired), but will change when the elements are different (which is maybe wishful thinking).</p>
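<p>As a sanity check on the conjecture itself (a brute-force sketch; the search range is an arbitrary choice): even for $k=3$ positive integers the pair (sum, product) does not determine the multiset.</p>

```python
from itertools import combinations_with_replacement

# look for two different multisets of positive integers sharing both
# their sum and their product
found = None
seen = {}
for combo in combinations_with_replacement(range(1, 13), 3):
    key = (sum(combo), combo[0] * combo[1] * combo[2])
    if key in seen and seen[key] != combo:
        found = (seen[key], combo, key)
        break
    seen[key] = combo
print(found)   # ((1, 6, 6), (2, 2, 9), (13, 36)): same sum and product
```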
| Community | -1 | <p>Euclidean vectors are the easiest to visualize. Let's go through this in steps. We'll skip a set of only one vector until the end and start with a set of two vectors.</p>
<p>If a set of two Euclidean vectors is linearly <em>dependent</em> then there exists a line containing both vectors and the origin. If a set of two Euclidean vectors is linearly <em>independent</em> then there does <em>not</em> exist a line which contains both vectors and the origin.</p>
<p><a href="https://i.stack.imgur.com/NKvYQ.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/NKvYQ.gif" alt="enter image description here" /></a></p>
<p>If a set of three Euclidean vectors is linearly <em>dependent</em> then there exists a plane containing all three vectors and the origin. If a set of three Euclidean vectors is linearly <em>independent</em> then there does <em>not</em> exist any plane containing all three and the origin.</p>
<p><a href="https://i.stack.imgur.com/SlvU1.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/SlvU1.gif" alt="enter image description here" /></a></p>
<p>Likewise if a set of <span class="math-container">$n$</span> Euclidean vectors is linearly <em>dependent</em> then there exists an <span class="math-container">$(n-1)$</span>-dimensional subspace containing all <span class="math-container">$n$</span> vectors and the origin. If a set of <span class="math-container">$n$</span> Euclidean vectors is linearly <em>independent</em> then there does not exist any <span class="math-container">$(n-1)$</span>-dimensional subspace containing all <span class="math-container">$n$</span> vectors and the origin. This is a bit harder to visualize for <span class="math-container">$n\gt 3$</span>.</p>
<p>Now let's handle the case of a set of only <span class="math-container">$1$</span> vector. It should be true that a set of one Euclidean vector is linearly <em>dependent</em> if and only if there exists a <span class="math-container">$0$</span>-dimensional subspace containing that vector and the origin. But what is a zero-dimensional subspace? It's a point. So if a zero-dimensional subspace is a point, then the only zero-dimensional subspace containing the origin must <strong>be</strong> the origin. So we see that a set of <span class="math-container">$1$</span> vector is linearly <em>dependent</em> if and only if that one vector is the zero vector.</p>
<p><a href="https://i.stack.imgur.com/1NMvm.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/1NMvm.gif" alt="enter image description here" /></a></p>
<p>(<a href="http://algebra.math.ust.hk/vector_space/07_independence/lecture2.shtml" rel="noreferrer"><em>Source of Images</em></a>)</p>
|
1,602,312 | <p>Can we prove that $\sum_{i=1}^k x_i$ and $\prod_{i=1}^k x_i$ is unique for $x_i \in R > 0$?<br>
I stated that conjecture to solve a CS task, but I do not know how to prove it (or stop using it if it is a false assumption). </p>
<p>Is the pair of Sum and Product of array of k real numbers > 0 unique per array? I do not want to find underlying numbers, or reverse this pair into factorisation. I just assumed that pair is unique per given set to compare two arrays of equal length to determine whether numbers inside with possible multiplicities are the same. The order in array does not matter. 0 is discarded to give meaningful sum and product.</p>
<p>At the input I get two arrays of equal length, one of which is sorted, and I want to determine if the elements with multiplicities are the same, but I do not want to sort the second array (sorting would take more than linear time, and linear time is the objective).<br>
So I thought to make checksums, sum and product seemed unique to me, hence the question.</p>
<p>For example:<br>
I have two arrays: v=[1,3,5,5,8,9.6] and the second g=[9.6,5,3,1,5,8].<br>
I calculate sum = 31.6, and product = 5760. For both arrays, since one is sorted version of another.<br>
And now I calculate this for b=[1,4,4,5,8,9.6], sum is 31.6 but product is 6144.
So I assumed that if both sum and product are not the same for two given arrays, then the elements differ.</p>
<p>Getting back to the question, I am thinking that the pair {sum, product} is the same for all permutations of an array (which is desired), but will change when elements are different (which may be wishful thinking).</p>
| fosho | 166,258 | <p>A set of <strong>linearly independent</strong> vectors: pick any vector in this set and you cannot write it as a linear combination of the other vectors in this set. For example suppose $\{x_1,x_2,x_3\}$ is a set of linearly independent vectors. Let's say you want to write $x_2$ as a linear combination of $x_1$ and $x_3$ i.e. $x_2 = a\cdot x_1 + b\cdot x_3$ with $ a,b \in \mathbb{R}$. Since the vectors are <strong>linearly independent</strong>, this is impossible. A more concrete example (illustrating the exact same thing) is done by taking $x_1 = \sin x , x_2 = \tan x , x_3 = \cos x$</p>
<p>A set of <strong>linearly dependent</strong> vectors: again pick any vector in this set. Then we <em>can</em> write it as a linear combination of the other vectors in the set. Here an example could be $\{x,2,x+2\}$ we see that </p>
<p>$x = 1\cdot(x+2) -1 \cdot x$<br>
$2 = 0\cdot x + 1\cdot 2$<br>
$x+2 = 1\cdot x + 1\cdot 2$ </p>
|
1,602,312 | <p>Can we prove that $\sum_{i=1}^k x_i$ and $\prod_{i=1}^k x_i$ is unique for $x_i \in R > 0$?<br>
I stated that conjecture to solve a CS task, but I do not know how to prove it (or stop using it if it is a false assumption). </p>
<p>Is the pair of Sum and Product of array of k real numbers > 0 unique per array? I do not want to find underlying numbers, or reverse this pair into factorisation. I just assumed that pair is unique per given set to compare two arrays of equal length to determine whether numbers inside with possible multiplicities are the same. The order in array does not matter. 0 is discarded to give meaningful sum and product.</p>
<p>At the input I get two arrays of equal length, one of which is sorted, and I want to determine if the elements with multiplicities are the same, but I do not want to sort the second array (sorting would take more than linear time, and linear time is the objective).<br>
So I thought to make checksums, sum and product seemed unique to me, hence the question.</p>
<p>For example:<br>
I have two arrays: v=[1,3,5,5,8,9.6] and the second g=[9.6,5,3,1,5,8].<br>
I calculate sum = 31.6, and product = 5760. For both arrays, since one is sorted version of another.<br>
And now I calculate this for b=[1,4,4,5,8,9.6], sum is 31.6 but product is 6144.
So I assumed that if both sum and product are not the same for two given arrays, then the elements differ.</p>
<p>Getting back to the question, I am thinking that the pair {sum, product} is the same for all permutations of an array (which is desired), but will change when elements are different (which may be wishful thinking).</p>
| bhbr | 160,776 | <p>Two parallel vectors are linearly dependent. Three vectors lying in a plane are linearly dependent. Four vectors lying in the same three-dimensional hyperplane are linearly dependent.</p>
<p>In n-dimensional space, you can find at most n linearly independent vectors.</p>
<p>Think of the vectors as rods with which you want to span up a tent: one rod gives you just a line, two rods give you a face. you need a third rod outside of that plane (linearly independent) to span up a volume. Any additional rods cannot span into a fourth dimension, so four rods in three dimensions must be linearly dependent.</p>
|
891,370 | <p>I got the equation $8.513 \times 1.00531^{\Large t} = 10$. The task is to solve for $t$. The correct answer is $t = 31$. How do I get there?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
&8.513\times 1.00531^{t}=10\ \imp\ 1.00531^{t}={10 \over 8.513}\ \imp\
\ln\pars{1.00531^{t}} = \ln\pars{10 \over 8.513}
\\[3mm]&\imp t\ln\pars{1.00531} = \ln\pars{10 \over 8.513}\ \imp\
\color{#66f}{\large t = {\ln\pars{10/8.513} \over \ln\pars{1.00531}}}
\approx {\tt 30.3988}
\end{align}</p>
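<p>A quick numerical check of this value, as a short Python sketch (nothing here beyond the two logarithms above):</p>

```python
import math

# Solve 8.513 * 1.00531**t = 10 for t by taking logarithms:
#   t = ln(10 / 8.513) / ln(1.00531)
t = math.log(10 / 8.513) / math.log(1.00531)

print(round(t, 4))  # about 30.40 -> rounds up to the quoted t = 31
```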
|
3,499,862 | <p>What is the equivalent of <span class="math-container">$\neg (\forall x) (P(x) \vee Q(x))$</span>? Will <span class="math-container">$P(x) \vee Q(x)$</span> be negated too? Or is just <span class="math-container">$\forall x$</span> negated?</p>
| N. Bar | 641,014 | <p><span class="math-container">$$\neg (\forall x)(Px \lor Qx)$$</span></p>
<p>By quantifier negation (QN) rules,
<span class="math-container">$$(\exists x)\neg(Px \lor Qx)$$</span></p>
<p>By DeMorgan's Law</p>
<p><span class="math-container">$$(\exists x)(\neg Px \land \neg Qx)$$</span></p>
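<p>The equivalence can also be verified mechanically over a small finite domain; the following Python sketch (domain and predicates chosen arbitrarily for illustration) checks every possible assignment of truth values:</p>

```python
# Check that not(forall x: P(x) or Q(x)) == exists x: (not P(x)) and (not Q(x))
# over every assignment of two predicates on a small domain.
from itertools import product

domain = [0, 1, 2]

for p_vals in product([False, True], repeat=len(domain)):
    for q_vals in product([False, True], repeat=len(domain)):
        P = dict(zip(domain, p_vals))
        Q = dict(zip(domain, q_vals))
        lhs = not all(P[x] or Q[x] for x in domain)
        rhs = any((not P[x]) and (not Q[x]) for x in domain)
        assert lhs == rhs
print("equivalence holds on all 64 assignments")
```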
|
4,510,697 | <p><a href="https://i.stack.imgur.com/HKvBy.png" rel="nofollow noreferrer">How do I find explicit solutions of x and y in this system?</a></p>
| John Omielan | 602,049 | <p>You've got the right idea, but as <a href="https://math.stackexchange.com/questions/4510682/find-postive-integer-x-with-x1-and-dfracx6-1x-1-is-perfect-square#comment9470655_4510682">Kenta S's comment</a> indicates, it's <span class="math-container">$d \mid 3$</span> (since <span class="math-container">$x \equiv -1 \pmod{x+1}$</span> means <span class="math-container">$x^4 + x^2 + 1 \equiv (-1)^4 + (-1)^2 + 1 \equiv 3 \pmod{x + 1}$</span>), not <span class="math-container">$d \mid 2$</span>, so the other option is <span class="math-container">$d = 3$</span>. For that case, for some integers <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, we have</p>
<p><span class="math-container">$$x^4 + x^2 + 1 = 3m^2, \; \; x + 1 = 3n^2 \; \; \to \; \; x \equiv -1 \pmod{3} \tag{1}\label{eq1A}$$</span></p>
<p>With your factorization of</p>
<p><span class="math-container">$$x^4+x^2+1=(x^2+x+1)(x^2-x+1) \tag{2}\label{eq2A}$$</span></p>
<p>note that</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
\gcd(x^2+x+1,x^2-x+1) & = \gcd(x^2+x+1,x^2+x+1-(x^2-x+1)) \\
& = \gcd(x^2+x+1,2x)
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$</span></p>
<p>Since <span class="math-container">$x^2 + x + 1 = x(x+1) + 1$</span> is always odd, and <span class="math-container">$\gcd(x^2+x+1,x) = 1$</span>, this means that <span class="math-container">$x^2+x+1$</span> and <span class="math-container">$x^2-x+1$</span> are always relatively prime. With \eqref{eq1A} giving that <span class="math-container">$x \equiv -1 \pmod{3}$</span>, then <span class="math-container">$x^2 - x + 1 \equiv (-1)^2 - (-1) + 1 \equiv 3 \equiv 0 \pmod{3}$</span>. Thus, from the first part of \eqref{eq1A} and using \eqref{eq2A}, there are integers <span class="math-container">$s$</span> and <span class="math-container">$t$</span> where <span class="math-container">$m = st$</span> and</p>
<p><span class="math-container">$$x^2 + x + 1 = s^2, \; \; x^2 - x + 1 = 3t^2 \tag{4}\label{eq4A}$$</span></p>
<p>However, <span class="math-container">$x^2 \lt x^2 + x + 1 \lt (x + 1)^2$</span>, so the first part of \eqref{eq4A} is not possible. This means there are <strong>no</strong> integers <span class="math-container">$x \gt 1$</span> where <span class="math-container">$\frac{x^6 - 1}{x - 1}$</span> is a perfect square.</p>
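<p>The conclusion is easy to test by brute force; a Python sketch (the search bound 2000 is arbitrary):</p>

```python
from math import isqrt

def is_square(n: int) -> bool:
    r = isqrt(n)
    return r * r == n

# (x^6 - 1)/(x - 1) = x^5 + x^4 + x^3 + x^2 + x + 1
hits = [x for x in range(2, 2001) if is_square(sum(x**k for k in range(6)))]
print(hits)  # expected: [] -- no perfect squares, matching the proof
```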
|
2,032,241 | <p>In Euler's (number theory) theorem one line reads: since $d|ai$ and $d|n$ and $gcd(a,n)=1$ then $d|i$. I've been staring at this for over an hour and I am not convinced why this is true could anyone explain why? I have tried all sorts of lemma's I've seen before but I honestly just can't see it and I feel I'm going round in circles. Could someone just explain to me why it is making me feel stupid. Here is the full proof for context and I highlighted the lines I don't get. Thanks! (Also hcf=gcd as I know that confuses some people.)</p>
<p><a href="https://i.stack.imgur.com/JEcNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEcNA.png" alt="enter image description here"></a></p>
| Sean Haight | 273,590 | <p>Since $d|n$ and gcd$(a,n) = 1$ we must have gcd$(d,a) = 1$, since if $a$ and $d$ shared some common divisor, then $a$ and $n$ would share some common divisor. Now since $d|ai$ and gcd$(a,d) = 1$ , $d|i$. This last statement is a straight forward Theorem that says that if $d|bc$ and gcd$(d,b) = 1$ then $d|c$. Below is a proof: </p>
<p>Since gcd$(d,b) = 1$, we can write $xd + yb = 1$ for integers $x$ and $y$. Multiplying both sides of this equation by $c$ we obtain $xdc + ybc = c$. Since $d|bc$, $bc = kd$ for some integer $k$. Substituting this into our above equation yields $xdc + ykd = c$. Thus $(xc + yk)d = c$ so $d|c$. </p>
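<p>The Bézout step can be illustrated numerically; a Python sketch (the numbers $d=7$, $b=10$, $c=21$ are chosen purely for illustration):</p>

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Example: d = 7, b = 10, c = 21; gcd(d, b) = 1 and d | b*c, so d | c.
d, b, c = 7, 10, 21
g, x, y = ext_gcd(d, b)
assert g == 1 and x * d + y * b == 1   # Bezout coefficients
assert (b * c) % d == 0                # d divides bc
# Multiply x*d + y*b = 1 by c:  x*d*c + y*(b*c) = c, and d divides both terms.
assert c % d == 0
print(f"{d} divides {c}")
```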
|
972,281 | <p>I have to find the inverse laplace transform of:</p>
<p>$\mathcal{L}^{-1}(\frac{s}{-8+2s+s^2})$</p>
<p>I found it was </p>
<p>$\frac{2}{3}e^{-4t}+\frac{1}{3}e^{2t}$</p>
<p>But the question I'm asked is, determine $A,B,C,D$ such that $e^{At}(B\cosh(Ct)+D\sinh(Ct))$ is a solution of the inverse Laplace transform.</p>
<p>I have no idea how to proceed. I've tried multiple things to no avail.</p>
<p>Any help will be greatly appreciated. Thanks.</p>
<p>Edit: I know hyperbolic trigs can be rewritten in terms of exponentials, but I can't figure out how to use this to my advantage.</p>
| DLV | 160,304 | <p>I've solved this.</p>
<p>One creates a system of equations out of this expression. And equate it to my simplified answer.</p>
|
239,653 | <p>It is usually told that Birch and Swinnerton-Dyer developped their famous conjecture after studying the growth of the function
$$
f_E(x) = \prod_{p \le x}\frac{|E(\mathbb{F}_p)|}{p}
$$
as $x$ tends to $+\infty$ for elliptic curves $E$ defined over $\mathbb{Q}$, where the product is defined for the primes $p$ where $E$ has good reduction at $p$. Namely, this function should grow at the order of
$$
\log(x)^r
$$
when $x$ tends to $+\infty$, where $r$ is the (algebraic) rank of $E$.</p>
<p><strong>Question 1.</strong> Why is it natural to look at these kind of products?</p>
<p>Nowadays, people usually state the BSD conjecture as the equality
$$
r = \text{ord}_{s=1}L(E,s)\text{.}
$$</p>
<p><strong>Question 2.</strong> Are these two statements equivalent?</p>
| Community | -1 | <p>Question 1: The $L$-functon $L(E/\mathbf{Q},s)$ is a product over $L_p(E/\mathbf{Q},s) := 1/(1-a_pp^{-s}+p^{1-2s})$. Now plug in $s=1$ and use $p+1 - |E(\mathbf{F}_p)| = a_p$. (This is only a heuristic.)</p>
|
4,431,185 | <p>Let <span class="math-container">$ABC$</span> be a triangle and denote its area by <span class="math-container">$k = \mathrm{area}(ABC)$</span>. I want to divide <span class="math-container">$ABC$</span> into two sub-triangles <span class="math-container">$ABE$</span> and <span class="math-container">$AEC$</span> such that <span class="math-container">$\mathrm{area}(ABE) = t$</span> and <span class="math-container">$\mathrm{area}(AEC) = k-t$</span> for some <span class="math-container">$t < k$</span>.</p>
<p>Is it possible to find <span class="math-container">$E$</span> exactly? By "exactly" I mean that you could get an arbitrarily close approximation by repeatedly bisecting the edge <span class="math-container">$BC$</span> to find the split point <span class="math-container">$E$</span>.</p>
| nkhedekar | 803,212 | <p>I'm not sure what you mean by "compute this exactly". To find the point <span class="math-container">$E$</span> on the side <span class="math-container">$BC$</span>, you can compare the formulae for the areas of the new and original triangles <span class="math-container">$ABE$</span> and <span class="math-container">$ABC$</span>. Assuming <span class="math-container">$BC$</span> and <span class="math-container">$BE$</span> as the bases in the function <span class="math-container">$1/2 \times base \times height$</span>, the height remains unchanged and the ratio of the areas will be <span class="math-container">$t/k$</span> giving you <span class="math-container">$BE = BC \times t/k$</span></p>
|
25,485 | <p>Not sure if this is more appropriate for here or for Math.SE, but here goes:</p>
<p>How does one who is self-studying mathematics determine if a textbook is too hard for you?</p>
<p>Math is hard in general, but when does a textbook cross that line from being challenging to being nearly intractable?</p>
<p>Sometimes I can't tell if I'm just being challenged when I have to re-read one paragraph ten times to understand what the author is saying (even if I understand all the components of their statement individually), or if the book is simply not at the right level for my current background.</p>
| Sue VanHattum | 60 | <p>I don't know how to decide, in general, whether a book is the right one or not. In particular cases, it might make sense to ask here, with the details.</p>
<p>But I have a great answer for the related question, <em><strong>"How do I approach reading a math book?"</strong></em></p>
<p>It's very different than reading other books, and takes practice. (If a class I'm teaching uses a good textbook, I require students to take notes on each section we cover. That at least gets them started on actually reading a math book.)</p>
<p><a href="https://web.stonehill.edu/compsci/History_Math/math-read.htm" rel="nofollow noreferrer">Here is a great article about how to read a math book</a> that I've modified for my students. (I mostly took out stuff that didn't seem vital, so my students would read the article itself!) One of the authors, Shai Simonson, has written a great book about learning math, <em><strong><a href="http://web.stonehill.edu/compsci/RediscoveringMath/RM.html" rel="nofollow noreferrer">Rediscovering Mathematics</a></strong></em>, which you might also find useful. (<a href="https://www.bookfinder.com/search/?ac=sl&st=sl&ref=bf_s2_a1_t1_1&qi=BocApO.85IhokaQKfjp0CRsqLlE_1660182923_1:7:9&bq=author%3Dshai%2520simonson%26title%3Drediscovering%2520mathematics%2520you%2520do%2520the%2520math%2520classroom%2520resource%2520materials" rel="nofollow noreferrer">I avoid amazon, so here's where I'd look to buy it.</a>)</p>
<p>I tell my students to expect reading the textbook to go slowly, and to expect to read it 3 times. Skim the first time to get a feel for the big picture, read carefully the 2nd time and mark anything that doesn't make sense, and work ahead of the author (in examples) in your third read. I think you'll find all that in this article, and lots more.</p>
|
1,022,342 | <p>Assume we have a function $y=f(x)$ that is $\textit{C}^\infty$ and the function has a number of local maximums. Assume there are $k$ such maximums $\{m_1, m_2, m_3, \ldots , m_k\}$ where $f'(m_i)=0$. </p>
<p>If I performed a gradient ascent algorithm for any point $x$ where $f(x)$ is defined, it would get to one of these local maximums $m_i$. In this way, every point $x$ where $f(x)$ is defined, it <em>belongs</em> to a certain local maximum (via gradient ascent). </p>
<p>Is there a formal mathematical treatment of this concept that I just described? I have reduced it to two-dimensions, but I am looking to apply it to three dimensional surfaces. </p>
| Community | -1 | <p>It sounds like a zone of attraction or neighborhood of an optimum.</p>
<p>If you think of the gradient search as a dynamic process then these zones are limit sets. </p>
|
1,022,342 | <p>Assume we have a function $y=f(x)$ that is $\textit{C}^\infty$ and the function has a number of local maximums. Assume there are $k$ such maximums $\{m_1, m_2, m_3, \ldots , m_k\}$ where $f'(m_i)=0$. </p>
<p>If I performed a gradient ascent algorithm for any point $x$ where $f(x)$ is defined, it would get to one of these local maximums $m_i$. In this way, every point $x$ where $f(x)$ is defined, it <em>belongs</em> to a certain local maximum (via gradient ascent). </p>
<p>Is there a formal mathematical treatment of this concept that I just described? I have reduced it to two-dimensions, but I am looking to apply it to three dimensional surfaces. </p>
| Anthony Carapetis | 28,513 | <p>At least in the context of Morse theory these are known as the <em>(un)stable manifolds</em> of the critical points $m_i$.</p>
<p>If $\phi(x,t)$ is the gradient flow of $f$ (or more generally any flow):</p>
<p>$$ \phi(x,0) = x \\ \frac{\partial}{\partial t} \phi(x,t) = \nabla f(\phi(x,t))$$
then the stable and unstable manifolds at a critical point $p$ of $f$ are defined by</p>
<p>$$ W^s_p = \{ x | \lim_{t \to \infty} \phi(x,t) = p\}\;\text{ and }\;
W^u_p = \{ x | \lim_{t \to -\infty} \phi(x,t) = p\}$$ respectively.</p>
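<p>In one dimension the resulting partition into stable sets is easy to see numerically; a Python sketch with a made-up double-peak function $f(x) = -(x^2-1)^2$, whose local maxima sit at $x = \pm 1$:</p>

```python
def grad_ascent(x, grad, lr=0.01, steps=5000):
    """Follow the gradient uphill from x; returns the (approximate) limit point."""
    for _ in range(steps):
        x += lr * grad(x)
    return x

# f(x) = -(x^2 - 1)^2 has local maxima at x = -1 and x = +1;
# its derivative is f'(x) = 4x(1 - x^2).
grad = lambda x: 4 * x * (1 - x**2)

# Every starting point with x < 0 flows to -1, every x > 0 flows to +1:
basins = {x0: round(grad_ascent(x0, grad), 3) for x0 in (-1.7, -0.4, 0.3, 1.6)}
print(basins)  # {-1.7: -1.0, -0.4: -1.0, 0.3: 1.0, 1.6: 1.0}
```

The two sets $(-\infty,0)$ and $(0,\infty)$ are exactly the stable manifolds of the two maxima here, with the origin (a critical point of $f$) on the boundary.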
|
4,359,019 | <p>Let <span class="math-container">$P \in \mathbb{R}^{N\times N}$</span> be an orthogonal matrix and <span class="math-container">$f: \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times N}$</span> be given by <span class="math-container">$f(M) := P^T M P$</span>. I am reading about random matrix theory and an exercise is to calculate the Jacobian matrix of <span class="math-container">$f$</span> and its Jacobian determinant.</p>
<p><strong>Question:</strong> How to calculate the Jacobian of a matrix-valued function? How is it defined? Somehow the notation here confuses me. I suspect that <span class="math-container">$Jf = P^T P$</span> and thus <span class="math-container">$\det(Jf) = 1\cdot 1 = 1$</span>.</p>
| F_M_ | 1,012,495 | <p>Consider the map <span class="math-container">$\psi \colon E' \to F'$</span> given by <span class="math-container">$\psi(f) = \left. f \right|_F.$</span> For surjectivity, let <span class="math-container">$f \in F'$</span> be given. Then we can extend <span class="math-container">$f$</span> to <span class="math-container">$\tilde{f} \in E'$</span> (since <span class="math-container">$f$</span> is continuous and densily defined). This extension coincides with <span class="math-container">$f$</span> on <span class="math-container">$F$</span> hence <span class="math-container">$\psi(\tilde{f}) = f,$</span> so <span class="math-container">$\psi$</span> is surjective.</p>
<p>Remains to show that <span class="math-container">$\psi$</span> is isometric (this will imply that <span class="math-container">$\psi$</span> is injective), i.e. <span class="math-container">$\left\|f\right\| = \left\|\left. f\right|_F\right\|.$</span> It is clear that <span class="math-container">$\left\|\left. f\right|_F\right\| \leq \left\|f\right\|.$</span> For the other inequality. Let <span class="math-container">$\epsilon>0$</span> and find <span class="math-container">$x\in E$</span> with <span class="math-container">$\left\|f\right\| \leq |f(x)| + \epsilon,$</span> and you can find <span class="math-container">$y \in F$</span> such that <span class="math-container">$\left\|x-y\right\| < \epsilon$</span> and <span class="math-container">$|f(x)-f(y)|<\epsilon.$</span> (Note <span class="math-container">$f$</span> is continuous and <span class="math-container">$F$</span> is dense in <span class="math-container">$E$</span>). Then we have
<span class="math-container">\begin{align*}
\left\|f\right\|& \leq |f(x)| + \epsilon \\
& \leq |f(x)| - |f(y)| + |f(y)| + \epsilon \\
& \leq \left|f(x)-f(y)\right| + |f(y)| + \epsilon \\
& \leq |f(y)| + 2\epsilon.
\end{align*}</span>
Taking <span class="math-container">$\epsilon \to 0,$</span> yields <span class="math-container">$\left\|f\right\| \leq \left\|\left.f\right|_F\right\|.$</span></p>
<p>In conclusion <span class="math-container">$\psi$</span> is an isometric isomorphism.</p>
|
839,043 | <p>I have been studying power functions, and started to think about imaginary powers. Take the function $x^i$. Because I don't know how to multiply a number $i$ times, I tried to simplify the equation</p>
<p>$x^i = x^{\sqrt{-1}} = x^{(-1)^{1/2}}$</p>
<p>Then, using the property of exponents that states an exponent to an exponent is the two multiplied, I get</p>
<p>$x^{(-1)^{1/2}} = x^{-1/2} = (\sqrt{x})^{-1} = \frac{1}{\sqrt{x}}$</p>
<p>However, plugging in <a href="http://www.wolframalpha.com/input/?i=%7Bx%5Ei%2C+1%2F%28sqrt%28x%29%29%7D" rel="nofollow">any number</a> shows that</p>
<p>$x^i \neq \frac{1}{\sqrt{x}}$</p>
<p>Where did I go wrong?</p>
| user99680 | 99,680 | <p>Your definition works for Real numbers. But $(-1)^{1/2}$ is not a Real number. You need to use the complex logz, meaning you need to choose a branch of logz, so that you can define: $$x^i=e^{ilogx} $$;let $x=a+ib$, then this is equal to: $$e^{i(a+ib)}=e^{-b+ia} $$. If you do not select a branch of logz, you have a many-valued "function", i.e., you have many candidates for the logz.</p>
<p>once you have a branch, this is well-defined and single-valued.</p>
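<p>A numerical illustration of this, as a Python sketch (the <code>cmath</code> module uses the principal branch of the logarithm): for real $x>0$, $\log x$ is real, so $x^i$ lands on the unit circle rather than equaling $1/\sqrt{x}$.</p>

```python
import cmath, math

x = 2.0
# Principal branch: x^i = e^{i log x}; for real x > 0, log x is real,
# so x^i = cos(log x) + i sin(log x) -- a point on the unit circle.
via_log = cmath.exp(1j * cmath.log(x))
direct = x ** 1j                      # Python's ** agrees with the branch choice

print(via_log, abs(via_log))          # modulus 1, not 1/sqrt(x)
assert abs(via_log - direct) < 1e-12
assert abs(abs(via_log) - 1) < 1e-12
assert abs(abs(via_log) - 1 / math.sqrt(x)) > 0.1   # so x^i != 1/sqrt(x)
```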
|
2,357,899 | <p>I want to prove that the cartesian product of a finite amount of countable sets is countable. I can use that the union of countable sets is countable. </p>
<p><strong>My attempt:</strong></p>
<p>Let $A_1,A_2, \dots, A_n$ be countable sets. </p>
<p>We prove the statement by induction on $n$</p>
<p>For $n = 1$, the statement clearly holds, as $A_1$ is countable. Now, suppose that $B := A_1 \times A_2 \times \dots A_{n-1}$ is countable.</p>
<p>We have: $$B \times A_n = \{(b,a)|b \in B, a \in A_n\}$$
$$= \bigcup_{a \in A_n} \{(b,a)|b \in B\}$$</p>
<p>and $\{(b,a)|b \in B\}$ is countable for a fixed $a \in A_n$, since the function $f_a: B \to B \times \{a\}: b \to (b,a)$ is a bijection, and $B$ is countable by induction hypothesis. Because the union of countable sets remains countable, we have proven that $(A_1 \times \dots A_{n-1}) \times A_n$ is countable, and because $f: (A_1 \times \dots A_{n-1}) \times A_n \to A_1 \times \dots A_{n-1} \times A_n: ((a_1, \dots, a_{n-1}),a_n) \mapsto (a_1, \dots, a_{n-1},a_n)$
is a bijection, the result follows.</p>
<p>Questions:</p>
<blockquote>
<ul>
<li>Is this proof correct/rigorous? </li>
<li>Are there other proofs that are
easier? </li>
<li>Someone pointed out that we can prove this theorem using the
'zigzag'-argument. Can someone provide this proof? I think this
zigzag-method is too graphical, and therefore not rigorous, so if
someone can clarify why this method is completely rigorous, I would be
more than glad to award him the bonus.</li>
</ul>
</blockquote>
| H. H. Rugh | 355,946 | <p>The phrase 'the union of countable sets is countable' is wrong unless you add the word 'countable' before 'union'.</p>
<p>The zig-zag argument is nothing but a graphical way to describe a bijection between $ {\Bbb N}\times{\Bbb N}$ and $ {\Bbb N}$.
You may also give an explicit formula for such a bijection. Using the French convention that $0\in {\Bbb N}$,
you may check that</p>
<p>$$ \phi (m,n) = m + \sum_{k=1}^{m+n} k $$
yields such a bijection. The inverse map is the 'zig-zag' path. Having constructed this function we may iterate to solve when taking further products. For example:
$$ (m,n,p) \in {\Bbb N}\times{\Bbb N} \times {\Bbb N}\mapsto \phi(\phi(m,n),p) \in {\Bbb N} $$
is a bijection etc...</p>
<p>Update: Some hints for the bijection:</p>
<p>1) Injectivity: Show that if $(m,n) \neq (m',n') \in {\Bbb N} \times {\Bbb N}$ then $$m + \sum_{k=1}^{m+n} k \neq m' + \sum_{k=1}^{m'+n'} k$$
(distinguish the cases when $m+n=m'+n'$ and $m+n \neq m'+n'$)</p>
<p>2) Surjectivity: We have $\phi(0,0)=0$. Suppose $\phi(m,n)=j$. Then if $n>0$ note that $\phi(m+1,n-1)=j+1$, while if $n=0$ then $\phi(0,m+1)=j+1$. Conclude using induction ...</p>
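<p>Both hints can also be confirmed by enumeration; a Python sketch over the first few diagonals $m+n = \text{const}$:</p>

```python
def phi(m, n):
    # phi(m, n) = m + sum_{k=1}^{m+n} k = m + (m+n)(m+n+1)/2
    return m + (m + n) * (m + n + 1) // 2

# Enumerate all pairs on the diagonals m+n = 0..N-1; phi should map them
# bijectively onto {0, 1, ..., N(N+1)/2 - 1}.
N = 50
values = sorted(phi(m, d - m) for d in range(N) for m in range(d + 1))
assert values == list(range(N * (N + 1) // 2))
print("phi is a bijection on the first", len(values), "pairs")
```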
|
2,348,909 | <p>Example 7.3 of Baby Rudin states that the sum
\begin{align}
\sum_{n=0}^{\infty}\frac{x^2}{(1 + x^2)^n}
\end{align}
is, for $x \neq 0$, a convergent geometric series with sum $1 + x^2$. This confuses me. As far as I know, a geometric series is a series of the form
\begin{align}
\sum_{n=0}^{\infty}x^n,
\end{align}
and such a series converges to $\frac{1}{1 - x}$ if $x \in [0,1)$. Now, it is certainly true that if we put $y = \frac{x^2}{1 + x^2}$, then $\frac{1}{1 - y} = 1 + x^2$. But in this case, shouldn't the series we're discussing be
\begin{align}
\sum_{n=0}^{\infty}\left[\frac{x^2}{1 + x^2}\right]^n?
\end{align}
What am I missing?</p>
| Mark Viola | 218,419 | <p>$$\begin{align}
\sum_{n=0}^\infty \frac{x^2}{(1+x^2)^n}&=x^2\sum_{n=0}^\infty \left((1+x^2)^{-1}\right)^n\\\\
&=\frac{x^2}{1-(1+x^2)^{-1}}\\\\
&=\frac{x^2(1+x^2)}{1+x^2-1}\\\\
&=1+x^2
\end{align}$$</p>
<p>for $x\ne0$.</p>
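<p>A quick numerical sanity check of this closed form (a Python sketch):</p>

```python
# Partial sums of sum_{n>=0} x^2 / (1+x^2)^n should approach 1 + x^2.
for x in (0.5, 1.0, 3.0):
    s = sum(x**2 / (1 + x**2)**n for n in range(200))
    assert abs(s - (1 + x**2)) < 1e-9
print("partial sums match 1 + x^2")
```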
|
81,267 | <p>I have the following problem: I have a (a lot)*3 table, meaning that I have 3 columns, say X, Y and Z, with real values. In this table some of the rows have the same (X,Y) values, but with different value of Z. For instance</p>
<pre><code>{{12.123, 4.123, 513.423}, {12.123, 4.123, 33.43}}
</code></pre>
<p>have the same (X,Y) but different Z. This is a case of multiplicity=2, but in principle it could be higher. What I want to do is to take all the unique rows, AND in case they have multiplicity >1 (i.e. repeated (X,Y)), pick the one with minimum Z value. In the previous example it would be the second one.</p>
<p>I hope I have been clear! Thank you very much indeed!</p>
| alephalpha | 6,652 | <pre><code>Min /@ Transpose@# & /@ GatherBy[data, Most]
</code></pre>
<p>Or if you're using Mathematica 10:</p>
<pre><code>First@*MinimalBy[Last] /@ GatherBy[data, Most]
</code></pre>
|
81,267 | <p>I have the following problem: I have a (a lot)*3 table, meaning that I have 3 columns, say X, Y and Z, with real values. In this table some of the rows have the same (X,Y) values, but with different value of Z. For instance</p>
<pre><code>{{12.123, 4.123, 513.423}, {12.123, 4.123, 33.43}}
</code></pre>
<p>have the same (X,Y) but different Z. This is a case of multiplicity=2, but in principle it could be higher. What I want to do is to take all the unique rows, AND in case they have multiplicity >1 (i.e. repeated (X,Y)), pick the one with minimum Z value. In the previous example it would be the second one.</p>
<p>I hope I have been clear! Thank you very much indeed!</p>
| 2012rcampion | 21,750 | <p>Using the new <code>Association</code> method <code>GroupBy</code>:</p>
<pre><code>Append @@@ Normal[GroupBy[data, Most -> Last, Min]]
</code></pre>
<p>This method is slightly slower than Bob Hanlon's and alephalpha's (and produces identical results), but I thought I'd show off one of the v10 features.</p>
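<p>For readers working outside Mathematica, the same grouping idea can be written in any language with hash maps; a Python sketch (a single linear-time pass, as the question asks for):</p>

```python
def min_z_rows(rows):
    """Keep, for each (x, y) key, the row with the smallest z."""
    best = {}
    for x, y, z in rows:
        key = (x, y)
        if key not in best or z < best[key]:
            best[key] = z
    return [[x, y, z] for (x, y), z in best.items()]

data = [[12.123, 4.123, 513.423], [12.123, 4.123, 33.43], [1.0, 2.0, 5.0]]
print(min_z_rows(data))  # [[12.123, 4.123, 33.43], [1.0, 2.0, 5.0]]
```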
|
3,609,906 | <p>I need to compute <span class="math-container">$$\lim_{n \to \infty}\sqrt{n}\int_{0}^{1}(1-x^2)^n dx.$$</span>
I proved that
for <span class="math-container">$n\ge1$</span>,
<span class="math-container">$$\int_{0}^{1}(1-x^2)^ndx={(2n)!!\over (2n+1)!!},$$</span>
but I don't know how to continue from here.</p>
<p>I also need to calculate <span class="math-container">$\int_{0}^{1}(1-x^2)^ndx$</span> for <span class="math-container">$n=50$</span> with a <span class="math-container">$1$</span>% accuracy. I thought about using Taylor series but also failed.</p>
| marty cohen | 13,079 | <p>Redoing what has been done
many many many times before.</p>
<p><span class="math-container">$\begin{array}\\
I_n
&=\int_0^1 (1-x^2)^n dx\\
I_0
&=\int_0^1 dx\\
&= 1\\
I_1
&=\int_0^1 (1-x^2) dx\\
&=1-\dfrac13\\
&=\dfrac23\\
I_n
&=\int_0^1 (1-x^2)^n dx\\
&=x(1-x^2)^n|_0^1+\int_0^1 2x^2n(1-x^2)^{n-1} dx\\
&\qquad\text{integrating by parts}\\
&\qquad f = (1-x^2)^n, f' = -2xn(1-x^2)^{n-1}, g' = 1, g = x\\
&=2n\int_0^1 x^2(1-x^2)^{n-1} dx\\
&=2n\int_0^1 (x^2-1+1)(1-x^2)^{n-1} dx\\
&=2n\int_0^1 (1-(1-x^2))(1-x^2)^{n-1} dx\\
&=2n\int_0^1 (1-x^2)^{n-1} dx-2n\int_0^1 (1-x^2)^{n} dx\\
&=2nI_{n-1}-2nI_n\\
\text{so}\\
I_n
&=\dfrac{2n}{2n+1}I_{n-1}\\
\dfrac{I_n}{I_{n-1}}
&=\dfrac{2n}{2n+1}\\
I_n
&=\dfrac{I_n}{I_{0}}\\
&=\prod_{k=1}^n\dfrac{I_k}{I_{k-1}}\\
&=\prod_{k=1}^n\dfrac{2k}{2k+1}\\
&=\dfrac{\prod_{k=1}^n(2k)}{\prod_{k=1}^n(2k+1)}\\
&=\dfrac{\prod_{k=1}^n(2k)\prod_{k=1}^n(2k)}{\prod_{k=1}^n(2k)\prod_{k=1}^n(2k+1)}\\
&=\dfrac{4^nn!^2}{(2n+1)!}\\
&=\dfrac{4^nn!^2}{(2n)!(2n+1)}\\
&\approx\dfrac{4^n(\sqrt{2\pi n}(n/e)^n)^2}{\sqrt{2\pi 2n}(2n/e)^{2n}(2n+1)}
\qquad\text{Stirling strikes twice}\\
&=\dfrac{4^n(2\pi n)(n^{2n}/e^{2n})}{2\sqrt{\pi n}\,4^n(n^{2n}/e^{2n})(2n+1)}\\
&=\dfrac{2\pi n}{2\sqrt{\pi n}(2n+1)}\\
&=\dfrac{\sqrt{\pi n}}{(2n+1)}\\
&=\dfrac{\sqrt{\pi n}}{2n(1+1/(2n))}\\
&=\dfrac{\sqrt{\pi n}}{2n}\dfrac1{1+1/(2n)}\\
&=\dfrac{\sqrt{\pi }}{2\sqrt{n}}\dfrac1{1+1/(2n)}\\
&=\dfrac{\sqrt{\pi }}{2\sqrt{n}}(1-\dfrac1{2n}+O(\dfrac1{n^2}))\\
\end{array}
$</span></p>
<p>so
<span class="math-container">$\sqrt{n}I_n
=\dfrac{\sqrt{\pi }}{2}(1-\dfrac1{2n}+O(\dfrac1{n^2}))
\to\dfrac{\sqrt{\pi }}{2}
$</span>.</p>
|
<p>One of the challenges of undergraduate teaching is logical implication. The case-by-case definition, in particular, is quite disturbing for most students, who have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Jim Hefferon | 213 | <p>My approach is like that of others but I like to use math instead of everyday language. I get them to agree that we want this statement to be true: "if <em>x</em> is a perfect square then <em>x</em> is not prime" simply because <em>x=y*y</em> is a factorization. Then we use various <em>x</em>'s to get the different lines of the truth table.</p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Clive Long | 7,954 | <p>This may add to the detailed discussions and explanations above.
I had problems with mathematical implication until I read Keith Devlin's "Introduction to Mathematical Thinking". Hopefully what follows is a faithful summary of some of the ideas in the book.</p>
<ol>
<li>Implication is not the conditional</li>
<li>Truth tables can help</li>
</ol>
<p>1 We learn and experience that most things happen because of causality, or our interpretation of it. $A$ happens then $B$ happens. This gets expressed in different ways: $A$ happens so $B$ <strong>will</strong> happen, <strong>if</strong> $A$ happens <strong>then</strong> $B$ <strong>will</strong> happen, etc. These different expressions all get conflated and some people learn or are taught they all are represented by the expressions:</p>
<p>"if $A$ then $B$"</p>
<p>or</p>
<p>$A \implies B$</p>
<p>All of the above are wrapped up in the the idea of <strong>implication</strong>.</p>
<p>We all feel comfortable, or we internalise that when $A$ happens, $B$ <strong>will certainly</strong> happen because $A$ <strong>caused</strong> $B$. This is implication.</p>
<p>However, we are left with an uncertainty about what happens when $A$ doesn't happen.</p>
<p>To deal with this uncertainty in mathematics we need to abandon the idea of causality and fall back to a reduced idea of the conditional.</p>
<p>In Devlin's terms</p>
<blockquote>
<p>"implication is the conditional with causality"</p>
</blockquote>
<p>or the equivalent</p>
<blockquote>
<p>"the conditional is implication without causality" </p>
</blockquote>
<p>So what?</p>
<p>Well we can <strong>choose to define</strong> the conditional for every possible (4 ways) of combining $A$ and $B$ being True or False. I will break off point 1 here then come back.</p>
<p>For point 2 I understand these things in truth table terms, not linguistic terms.</p>
<ol start="2">
<li>In parallel, if we think of simple logic (I am not sure of the exact terminology - <em>propositional logic?</em>) we can combine two statements $A$ and $B$ with a variety of operations: and, or. We learn (usually passively) even if we can't express the ideas formally, about the meaning of compound statements $A$ OR $B$, $A$ AND $B$ whose truth value depends on the truth value of $A$ and $B$ and we can define these in truth tables. In more concrete terms, most people are comfortable with TRUE OR FALSE = TRUE and TRUE AND FALSE = FALSE and so on for all possible combinations.</li>
</ol>
<p>Again, so what?</p>
<p>Well, $A \implies B$ is also a compound statement whose truth value depends upon the truth value of $A$ and $B$. We can now <strong>define</strong> the truth table values for False $\implies$ False to <strong>be</strong> TRUE. That's weird and uncomfortable.</p>
<p>There are two ways to deal with this first uncomfortable weirdness.</p>
<p>Either, <strong>claim</strong> that </p>
<p>$A \implies B$</p>
<p>is equivalent to </p>
<p>$\neg A \lor B$</p>
<p>derive the truth table values and demonstrate no contradictions,</p>
<p>alternatively (and this is how I deal with the situation - Devlin does not write this), we do not think about the result (truth value) of a compound statement for various "input" values as being <strong>True</strong> or <strong>False</strong>, but just ask whether the logical argument is <strong>valid</strong> (in place of true) or <strong>invalid</strong> (in place of false).</p>
<p>For example if A: x = 3, is <strong>False</strong> and B: x < 7 is <strong>False</strong></p>
<p>then</p>
<p>$A \implies B$ in this case is a valid argument. In this case:
"if x = 3 then x < 7" is <strong>valid</strong> (vacuously, since here x $\neq$ 3) and hence <strong>true</strong>.</p>
<p>Rather than the truth table values being some arbitrary definition of the "meaning" of $A \implies B$, the conditional <strong>contains and is consistent with</strong> our hazier understanding of implication or if ... then, where A "causing" B is wrapped up in our understanding.</p>
<p>So back to 1. My understanding is this</p>
<p>$A \implies B$ is the conditional without causality, and all written or spoken expressions such as "implies" or "so" or "when" or "sufficent" or "necessary" I translate into the form $A \implies B$ ($A \to B$ ???) and work from there.</p>
<p>I would recommend Devlin's book (I have no commercial interests or connection) and the free Stanford/Coursera course by Devlin that covers the same ground.</p>
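The claimed equivalence of $A \implies B$ with $\neg A \lor B$ can also be checked over all four truth assignments, as suggested above; a minimal sketch:

```python
# Exhaust the four truth assignments: "(not A) or B" produces exactly
# the truth table of the material conditional A => B.
def implies(a, b):
    return (not a) or b

table = [(a, b, implies(a, b)) for a in (True, False) for b in (True, False)]
assert table == [(True, True, True), (True, False, False),
                 (False, True, True), (False, False, True)]
```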
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Dan Christensen | 6,103 | <p>If you will eventually be teaching the basic methods of proof (conditional proof, proof by contradiction, etc.) in your course for math majors, you might consider starting with the truth tables for NOT, AND and OR and postpone the truth table for IMPLIES until they understand some of those basic methods of proof. Then they should be able to understand a proof of ~A => [A => B] (see below) which corresponds to lines 3 and 4 of the usual truth table for IMPLIES, and IIUC is usually where problems seem to arise. (Screenshot from my proof checker.)</p>
<p><a href="https://i.stack.imgur.com/PKaDn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PKaDn.png" alt="enter image description here" /></a></p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Dan Christensen | 6,103 | <p>My favourite example is the implication, "If it is raining, then it is cloudy."</p>
<p>This does not mean that rain <em>causes</em> cloudiness. (There is no causality in mathematics.) In the usual sense of if-then statements in mathematics, it means only that it is not the case (present tense) that it is both raining and not cloudy. Using this definition, we can easily construct the truth table:</p>
<p><span class="math-container">$~~~~~~~~~R~~ C~~~~ R\implies C$</span></p>
<ol>
<li><span class="math-container">$~~T~~T~~~~~~~~~T$</span></li>
<li><span class="math-container">$~~T~~F~~~~~~~~~F$</span></li>
<li><span class="math-container">$~~F~~T~~~~~~~~~T$</span></li>
<li><span class="math-container">$~~F~~F~~~~~~~~~T$</span></li>
</ol>
<p>Where <span class="math-container">$R = $</span> "It is raining", and <span class="math-container">$C = $</span> "It is cloudy" (both in the present tense).</p>
<p>(We can also formally prove the above "definition", <span class="math-container">$(A\implies B) \iff \neg (A \land \neg B)$</span> from "first principles" <a href="http://www.dcproof.com/DeriveImplies.html" rel="nofollow noreferrer">here</a>. Only 19 lines.)</p>
<p>This is the so-called material conditional. (Other forms of the conditional may be required for past and future tenses and causality. See <a href="https://en.wikipedia.org/wiki/Conditional" rel="nofollow noreferrer">Wikipedia</a>)</p>
<p>As required, if it is true that it is raining (<span class="math-container">$R$</span>), and it is true that if it is raining then it is cloudy (<span class="math-container">$R\implies C$</span>), then <span class="math-container">$C$</span> must be true (see line 1).</p>
<p><span class="math-container">$~~~~(R \land (R\implies C))\implies C$</span></p>
<p>This is the so-called Detachment Rule.</p>
<p>To disprove that <span class="math-container">$R\implies C$</span>, it is sufficient to prove that <span class="math-container">$R$</span> is true and <span class="math-container">$C$</span> is false (see line 2).</p>
<p>Interestingly, if it is not raining (<span class="math-container">$\neg R$</span>), then it is true that <span class="math-container">$R\implies C$</span>, regardless of the truth value of <span class="math-container">$C$</span> (see lines 3 and 4).</p>
<p><span class="math-container">$~~~~\neg R \implies (R\implies C)$</span></p>
<p>Note that we cannot infer from this statement that <span class="math-container">$C$</span> is true, or that it is false, since <span class="math-container">$R$</span> itself is false.</p>
<p>This is the so-called Principle of Vacuous Truth. It is rarely if ever used in daily discourse, since we don't usually give much consideration to implications with a false antecedent. It is, however, a valid and useful method of proof in mathematics.</p>
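The definition used here — that <span class="math-container">$A\implies B$</span> means it is not the case that <span class="math-container">$A$</span> holds and <span class="math-container">$B$</span> fails — can be tabulated mechanically. A small sketch reproducing lines 1–4 of the table above:

```python
# Build the truth table of "not (A and not B)" and compare it with the
# four lines listed for R => C in the text.
rows = [(r, c, not (r and not c)) for r in (True, False) for c in (True, False)]
# Lines 1-4: T T T, T F F, F T T, F F T
assert [v for (_, _, v) in rows] == [True, False, True, True]
```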
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as pure mathematician, I cannot come up with a good real world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more the examples are approachable (e.g. a freshman college student), the more I will be grateful to you! In particular, I would like the examples to come from natural or social sciences. Indeed, in a first class in Calculus it is not clear the importance of indicator functions, etc..</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| Slade | 33,433 | <p>If $f(t)$ is the number of humans alive at time $t$, then there is a discontinuity of $f$ at every time a human is born or dies.</p>
<p>The same logic holds for bank accounts, disk space, the number of molecules of caffeine in your body, and all other discrete phenomena.</p>
|
3,133,328 | <p>I am currently working on a question about proving the sum of eigenvalues and I have been searching for the solution <a href="https://www.youtube.com/watch?v=OLl_reBXY-g" rel="nofollow noreferrer">from YouTube</a>.</p>
<p>However, I do not understand why the teacher uses the diagonal method to show that the sum of eigenvalues equals the trace of matrix <span class="math-container">$A$</span>. Doesn't the diagonal method only apply on <span class="math-container">$3 \times 3$</span> or less dimensional matrix? </p>
<p>Thank you so much.</p>
| DonAntonio | 31,254 | <p>If we already know that the determinant of a square matrix is, up to sign, a sum of some products of the matrix's entries, all of which are formed by taking <strong>exactly one</strong> entry from each row and <strong>exactly one</strong> entry from each column, then it is not that hard: observe the coefficient of <span class="math-container">$\;\lambda^{n-1}\;$</span> in the char. polynomial <span class="math-container">$\;p_A(\lambda):=| \lambda I - A | \;$</span> of the matrix <span class="math-container">$\;A\;$</span> . </p>
<p>Observe what my definition of the char. pol. is: in the video they use <span class="math-container">$\; |A- \lambda I|\;$</span> , which always yields the "problem" of having either <span class="math-container">$\;1\;$</span> or <span class="math-container">$\;-1\;$</span> as the main coefficient of the pol., and this is avoided with the definition I use.</p>
<p>In order to get the coefficient of <span class="math-container">$\;\lambda^{n-1}\;$</span> in that determinant's expansion, we must first observe that we can write the char. pol. as </p>
<p><span class="math-container">$$p_A(\lambda)=\prod_{k=1}^n(\lambda-\lambda_k)=(\lambda-\lambda_1)(\lambda-\lambda_2)\cdot\ldots\cdot(\lambda-\lambda_n)$$</span></p>
<p>where the <span class="math-container">$\;\lambda_k\;$</span> are the matrix's eigenvalues (there may well be repeated values. We don't claim all the <span class="math-container">$\;\lambda_k\,'$</span> s are different, of course) . Warning: the above expression happens in a field that contains all the eigenvalues of the given matrix, so it may <em>not</em> be the field over which our matrix is defined! If working over the rational or real fields, the above expression is always true when working over the complex <span class="math-container">$\;\Bbb C\;$</span> .</p>
<p>Now, in the above expression evaluate the coefficient of <span class="math-container">$\;\lambda^{n-1}\;$</span> : you must take <span class="math-container">$\;n-1\;$</span> times <span class="math-container">$\;\lambda\;$</span> and once some <span class="math-container">$\;\lambda_i\;$</span> . At the end you get <span class="math-container">$\;-(\lambda_1+\lambda_2+\ldots+\lambda_n)\lambda^{n-1}\;$</span> . Observe the sign: it is always <span class="math-container">$\;-$</span>Tr.<span class="math-container">$\,A\;$</span> <em>with my definition</em>. With the one used in the video it is always Tr.<span class="math-container">$\,A\,$</span> . There you go...</p>
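The coefficient extraction can be illustrated by expanding <span class="math-container">$\prod_{k=1}^n(\lambda-\lambda_k)$</span> mechanically. A small sketch (coefficients stored lowest degree first; helper names are mine):

```python
# Multiply a polynomial (coefficient list, lowest degree first) by (x - r).
def poly_mul_linear(c, r):
    shifted = [0] + c                    # x * c
    scaled = [-r * a for a in c] + [0]   # -r * c
    return [s + t for s, t in zip(shifted, scaled)]

# Expand p(lambda) = prod (lambda - lambda_k) from its roots.
def charpoly(roots):
    c = [1]
    for r in roots:
        c = poly_mul_linear(c, r)
    return c

roots = [2, -3, 5, 7]
c = charpoly(roots)
n = len(roots)
assert c[n] == 1                 # the polynomial is monic
assert c[n - 1] == -sum(roots)   # coeff of lambda^(n-1) is -(sum of eigenvalues)
```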
|
937,912 | <p>I'm looking for a closed form of this integral.</p>
<p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx ,$$</p>
<p>where $\operatorname{Li}_2$ is the <a href="http://mathworld.wolfram.com/Dilogarithm.html" rel="noreferrer">dilogarithm function</a>.</p>
<p>A numerical approximation of it is</p>
<p>$$ I \approx 1.39130720750676668181096483812551383015419528634319581297153...$$</p>
<p>As Lucian said $I$ has the following equivalent forms:</p>
<p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx = \int_0^1 \frac{\operatorname{Li}_2\left( \sqrt{x} \right)}{2 \, \sqrt{x} \, \sqrt{1-x}} \,dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\sin x) \, dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\cos x) \, dx$$</p>
<p>According to <em>Mathematica</em> it has a closed-form in terms of generalized hypergeometric function, Claude Leibovici has given us <a href="https://math.stackexchange.com/a/937926/153012">this form</a>.</p>
<p>With <em>Maple</em> using Anastasiya-Romanova's form I could get a closed-form in term of Meijer G function. It was similar to Juan Ospina's <a href="https://math.stackexchange.com/a/938024/153012">answer</a>, but it wasn't exactly that form. I also don't know that his form is correct, or not, because the numerical approximation has just $6$ correct digits. </p>
<p>I'm looking for a closed form of $I$ without using generalized hypergeometric function, Meijer G function or $\operatorname{Li}_2$ or $\operatorname{Li}_3$.</p>
<p>I hope it exists. Similar integrals are the following.</p>
<p>$$\begin{align}
J_1 & = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{1+x} \,dx = \frac{\pi^2}{6} \ln 2 - \frac58 \zeta(3) \\
J_2 & = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x}} \,dx = \pi^2 - 8 \end{align}$$</p>
<p>Related techniques are in <a href="http://carma.newcastle.edu.au/jon/Preprints/Papers/ArXiv/Zeta21/polylog.pdf" rel="noreferrer">this</a> or in <a href="http://arxiv.org/pdf/1010.6229.pdf" rel="noreferrer">this</a> paper. <a href="http://www-fourier.ujf-grenoble.fr/~marin/une_autre_crypto/Livres/Connon%20some%20series%20and%20integrals/Vol-3.pdf" rel="noreferrer">This</a> one also could be useful.</p>
| GEdgar | 442 | <p>Another one... replacing the Meijer G in Juan's answer
$$
{\mbox{$_4$F$_3$}(1/2,1/2,1,1;\,3/2,3/2,3/2;\,1)}+
\frac{\pi \,
{\mbox{$_4$F$_3$}(1,1,1,3/2;\,2,2,2;\,1)}}{16}
\\
\approx 1.3913072075067666818109648381255138301541952863
$$
and with user's comment
$$
{\mbox{$_4$F$_3$}(1/2,1/2,1,1;\,3/2,3/2,3/2;\,1)}+\frac{\pi^3}{48}-\frac{\pi (\log 2)^2}{4}
$$
agreeing with Claude.</p>
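A numerical sanity check of the second expression: since <span class="math-container">$(1)_k=k!$</span> and <span class="math-container">$(1/2)_k/(3/2)_k=1/(2k+1)$</span>, the <span class="math-container">$k$</span>-th term of the <span class="math-container">$_4F_3$</span> reduces to <span class="math-container">$4^k(k!)^2/\big((2k+1)!\,(2k+1)^2\big)$</span> (this reduction is my own, not from the answer), so the value can be summed term by term via the ratio of consecutive terms.

```python
from math import log, pi

# Partial sum of 4F3(1/2,1/2,1,1; 3/2,3/2,3/2; 1); term_0 = 1 and
# t_{k+1}/t_k = 4 (k+1)^2 (2k+1)^2 / ((2k+2)(2k+3)^3).
def hyp(terms):
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= 4 * (k + 1)**2 * (2*k + 1)**2 / ((2*k + 2) * (2*k + 3)**3)
    return s

I = hyp(10**5) + pi**3 / 48 - pi * log(2)**2 / 4
assert abs(I - 1.39130720750676668) < 1e-6
```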
|
907,851 | <p>I am really new into math, why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
| Community | -1 | <p>The <a href="http://en.wikipedia.org/wiki/Order_of_operations" rel="nofollow">order of operations</a> is a convention that tells us which operation to perform first. In the given examples there are several operations <em>(listed by priority)</em>: operations in parentheses, powers, multiplication and negation, so we have</p>
<p>$$-2^2=-4\leftarrow\text{the power takes priority over the negation}$$</p>
<p>$$(-2)\cdot(-2)=4\leftarrow\text{the negation in the parentheses takes priority}$$</p>
<p>before the multiplication.</p>
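Many programming languages follow the same convention; in Python, for instance, the power binds more tightly than the leading minus:

```python
# Exponentiation is applied before unary minus, matching the convention.
assert -2**2 == -4      # parsed as -(2**2)
assert (-2)**2 == 4     # the parentheses force the negation first
```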
|
251,028 | <p>I want to make <span class="math-container">$S[\{,\cdots, \}]$</span> as follows</p>
<p>First input of <span class="math-container">$S$</span> is given list <span class="math-container">$\{1,2,3,\cdots, n\}$</span> and it produces <span class="math-container">$s_{123\cdots n}$</span></p>
<p>Further, if the ordering of the list is given differently, still gives increasing order. i.e.,</p>
<p><span class="math-container">$S[{1,3,2}] = s_{123}$</span></p>
<p><span class="math-container">$S[{1,2,4,3}] = s_{1234}$</span></p>
<p>and so on.</p>
| Acus | 18,792 | <p>First, look at <a href="https://mathematica.stackexchange.com/questions/187654/how-to-build-a-templatebox-with-dynamic-length-gridbox">How to build a TemplateBox with dynamic length GridBox</a> since similar questions already have answers.</p>
<p>My solution (adopted from more general context) is below. With little more adaptation the output even can be copied, edited and reused, assuming that you do not change number of indices (this seems is limitation of current Mathematica two dimensional input template possibilities).</p>
<pre><code>MakeBoxes[mvDownUp[{indown___Integer}, {inup___Integer}],
sf : StandardForm] := With[{
argsa = Riffle[Flatten[Rest /@ Sort[
Transpose[{{indown, inup},
Join[Table[
AdjustmentBox[Slot[i], BoxBaselineShift -> 1], {i,
Length[{indown}]}],
Table[AdjustmentBox[Slot[i], BoxBaselineShift -> -1], {i,
Length[{indown}] + 1, Length[{indown, inup}]}]]}]
]], ","]
},
With[{
pfd =
Function[
StyleBox[RowBox[argsa], FontSize -> Small,
FontTracking -> "Condensed", AutoSpacing -> False]],
pfi =
ReleaseHold[
RowBox[{"mvDownUp", "@@",
MakeExpression[{Take[{##}, Length[{indown}]],
Take[{##}, {Length[{indown}] + 1,
Length[{indown, inup}]}]}, sf]}]] &
},
TemplateBox[
Flatten[{MakeBoxes[#, sf] & /@ {indown},
MakeBoxes[#, sf] & /@ {inup}}], "mvDownUp",
DisplayFunction :> pfd, InterpretationFunction :> pfi,
SyntaxForm -> "fish",
Tooltip -> ToString[mvDownUp[{indown}, {inup}]]]
]
];
With[{baseSymbolN = "S", bs = Symbol["S"]},
MakeBoxes[bs[in_mvDownUp], sf : StandardForm] :=
With[{sty = (FontColor -> Black), inEx = MakeBoxes[in, sf]},
With[{
pfd =
Function[
StyleBox[RowBox[{StyleBox[baseSymbolN, sty], #1}],
AutoSpacing -> False, FontTracking -> "Condensed"]],
pfi = Function[RowBox[{baseSymbolN, "[", #1, ",", #2, "]"}]]},
TemplateBox[{inEx}, baseSymbolN, DisplayFunction :> pfd,
InterpretationFunction :> pfi, SyntaxForm -> "fish"]]]]
S[x_List] := S[mvDownUp[Sort[x], {}]] /; ! OrderedQ[x]
S[x_List] := S[mvDownUp[x, {}]]
S[{3, 1, 2, 7}]
</code></pre>
|
210,849 | <p>Let $p$ be an odd prime and $a$ be an integer with $\gcd(a, p) = 1$. Show that $x^2 - a \equiv 0 \mod p$ has either $0$ or $2$ solutions modulo $p$</p>
<p>I am clueless with this one. Hints please.</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ In a field, $\rm\ x_2^2 = x_1^2 \iff 0 = x_2^2-x_1^2 = (x_2\!-x_1\!)\,(x_2\!+x_1\!)\iff x_2 = \pm\, x_1$</p>
<p><strong>Remark</strong> $\ $ Generally a nonzero polynomial over a domain <a href="https://math.stackexchange.com/a/173483/242">has no more roots than its degree.</a> </p>
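The dichotomy is easy to confirm by brute force for a small odd prime (a sketch; <span class="math-container">$p=23$</span> is an arbitrary choice):

```python
# For each a coprime to p, count the roots of x^2 - a = 0 (mod p):
# the count is always 0 or 2, and the two roots are x and -x (= p - x).
p = 23
for a in range(1, p):
    roots = [x for x in range(p) if (x * x - a) % p == 0]
    assert len(roots) in (0, 2)
    if roots:
        assert roots[1] == p - roots[0]   # roots come in pairs +-x
```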
|
3,404,673 | <p>Let <span class="math-container">$G$</span> be a unipotent connected linear algebraic group over a field <span class="math-container">$F$</span>. Then <span class="math-container">$G$</span> is called <em>split</em> if there is a series of closed subgroup schemes <span class="math-container">$1 = G_0 \subset G_1 \subset \cdots \subset G_t =G$</span> with each <span class="math-container">$G_i$</span> normal in <span class="math-container">$G_{i+1}$</span> and <span class="math-container">$G_{i+1}/G_i \cong_F \mathbb G_a$</span>.</p>
<p>I have read that for <span class="math-container">$F$</span> perfect, every such <span class="math-container">$G$</span> is split, provided it is connected. And for <span class="math-container">$F$</span> of characteristic zero, every such <span class="math-container">$G$</span> is moreover automatically connected.</p>
<p>I was trying to verify this in the case of <span class="math-container">$G = \operatorname{Res}_{E/F} \mathbb G_a$</span>, where <span class="math-container">$E/F$</span> is a quadratic extension. Does <span class="math-container">$G$</span> really have such a composition series?</p>
<p>I tried considering the diagonal embedding <span class="math-container">$\Delta$</span> of <span class="math-container">$\mathbb G_a$</span> into <span class="math-container">$G$</span> and looking at the quotient <span class="math-container">$G/\Delta$</span>. It should be the case that <span class="math-container">$G/\Delta \cong \mathbb G_a$</span>, but I'm not able to verify this. The natural map <span class="math-container">$G/\Delta \rightarrow \mathbb G_a$</span> given by <span class="math-container">$(x,y)\Delta \mapsto x- y $</span> is not defined over <span class="math-container">$F$</span>. </p>
| D_S | 28,556 | <p>Tricky business if you were using the definition of <span class="math-container">$G = \operatorname{Res}_{E/F}(\mathbb G_a)$</span> as a form of <span class="math-container">$\mathbb G_a \times \mathbb G_a$</span> and you didn't recollect Hilbert's Theorem 90... </p>
<p>Worse, I actually did my PhD thesis on the restriction of scalars of a certain group and I didn't realize that <span class="math-container">$\operatorname{Res}_{E/F}(\mathbb G_a)$</span> is the same algebraic group as <span class="math-container">$\mathbb G_a \times \mathbb G_a$</span>. It's obvious from the Yoneda lemma, but less obvious from the way I was thinking about restriction of scalars. </p>
<p>Here I'm taking <span class="math-container">$G$</span> to be the group given on <span class="math-container">$\overline{F}$</span>-points by <span class="math-container">$G(\overline{F}) = \overline{F} \times \overline{F}$</span>, with the Galois action</p>
<p><span class="math-container">$$\sigma.(x,y) = \begin{cases} (\sigma(x),\sigma(y)) & \textrm{ if } \sigma|_E =1 \\ (\sigma(y),\sigma(x)) & \textrm{ if } \sigma|_E \neq 1 \end{cases}$$</span>
for <span class="math-container">$\sigma \in \operatorname{Gal}(\overline{F}/F)$</span>. Now choose <span class="math-container">$\beta \in \overline{F}$</span> such that <span class="math-container">$E = F(\sqrt{\beta})$</span>, and define </p>
<p><span class="math-container">$$\phi: G \rightarrow \mathbb G_a \times \mathbb G_a$$</span></p>
<p><span class="math-container">$$\phi(x,y) = (x+y, \sqrt{\beta}(y-x))$$</span></p>
<p>This can be checked to be defined over <span class="math-container">$F$</span>.</p>
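One can check concretely that <span class="math-container">$\phi$</span> commutes with the two Galois actions (hence is defined over <span class="math-container">$F$</span>) for <span class="math-container">$F=\mathbb Q$</span>, <span class="math-container">$\beta=2$</span>, encoding elements of <span class="math-container">$E=\mathbb Q(\sqrt2)$</span> as pairs <span class="math-container">$(a,b)=a+b\sqrt2$</span>. A sketch, with helper names of my own:

```python
from fractions import Fraction

def conj(z):                     # nontrivial element of Gal(E/F): sqrt(2) -> -sqrt(2)
    return (z[0], -z[1])

def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def sub(z, w):
    return (z[0] - w[0], z[1] - w[1])

def mul_sqrt2(z):                # (a + b*sqrt(2)) * sqrt(2) = 2b + a*sqrt(2)
    return (2 * z[1], z[0])

def phi(x, y):                   # phi(x, y) = (x + y, sqrt(2)*(y - x))
    return (add(x, y), mul_sqrt2(sub(y, x)))

def sigma_G(x, y):               # twisted Galois action on Res_{E/F}(G_a)
    return (conj(y), conj(x))

def sigma_plain(u, v):           # coordinatewise action on G_a x G_a
    return (conj(u), conj(v))

# phi intertwines the two actions, so it is defined over F = Q.
pts = [((Fraction(p), Fraction(q)), (Fraction(r), Fraction(s)))
       for p in (-2, 1) for q in (0, 3) for r in (-1, 2) for s in (1, 4)]
for x, y in pts:
    assert sigma_plain(*phi(x, y)) == phi(*sigma_G(x, y))
```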
|
2,780,043 | <p>In the Navy we had bunk beds with a locker and lock both built in. You could gain access with a combination lock on the side that you could reprogram the code for. The lock was nothing more than several push buttons. Ive wondered for a long time now how many possible combinations there were. I believe the lock had $n=4$ buttons, but I would like to generalize to all $n\in\Bbb N$.</p>
<p>The system is easy enough to understand. Here are the rules:</p>
<ul>
<li>there are $n$ push buttons</li>
<li>each button can be pressed no more than once, and clearly you needn't push any button at all.</li>
<li>the locker combination can comprise of any number of distinct pressings.</li>
<li>order of pressing is relevant, so its a permutation problem</li>
<li>any grouping of buttons can be pressed simultaneously (wherein concurrently pressed buttons have no order).</li>
</ul>
<p>So, for example, if there are $n=3$ buttons and we are pressing all three buttons then $(1)(2)(3)$ is a viable combination, which is distinct from $(2)(1)(3)$ and from $(3)(2)(1)$, et. cetera, totaling 6 possible permutations when pressed separately. These are three button, three pressing combinations.</p>
<p>But those combinations are also distinct from three button, two pressing combinations such as $(1)(23)$ and $(12)(3)$ and three button, one pressing combinations like $(123)$. These were an additional three when (some) pressings are done concurrently, but since the order of the pressings are relevant then $(3)(12)$ and $(23)(1)$ are two more. Note that $(12)=(21)$ since concurrently pressed buttons have no order. But $(1)(23)\ne (23)(1)$ since the non-concurrently pressed groups $(1)$ and $(23)$ are ordered with respect to one another.</p>
<p>Naturally, one can press but two of the three, one of the three, or none of the three, for a multitude of additional possible combinations.</p>
<p>Id like to solve this problem, but in particular for the $n=4$ case but also in the general case.</p>
<p>This is a problem Ive been trying to solve for a decade.</p>
<p>For three buttons $1,2,3$ the combinations are $(), (1), (2), (3), (12), (13), (23), (123), (1)(2), (2)(1), (1)(3), (3)(1), (2)(3), (3)(2), (1)(23), (23)(1), (13)(2), (2)(13), (12)(3), (3)(12), (1)(2)(3), (1)(3)(2), (2)(1)(3), (2)(3)(1), (3)(1)(2), (3)(2)(1)$</p>
<p>26 combinations unless I missed some.</p>
<p>I shouldnt have to specify that there is no explicit limit on the number of pressings a combination can comprise, but you are implicitly restricted by the number of buttons and the fact that each can be pressed no more than once.</p>
| String | 94,971 | <p><strong>UPDATE</strong>: I have since learned (thanks to N. Shales in the comment section and the other answer in part) that $f(n,k)$ are called <a href="https://en.m.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling numbers of the second kind</a>. Furthermore
$$
a_n=\sum_{k=1}^n f(n,k)k!
$$
is called the <a href="https://en.m.wikipedia.org/wiki/Ordered_Bell_number" rel="nofollow noreferrer">$n^{th}$ ordered Bell number</a>, and so my answer simply reduces to:
$$
2 a_n
$$
Here follows my original post:</p>
<hr>
<p>Fantastic! Only 12 days after answering a <a href="https://math.stackexchange.com/questions/2759389/intuitive-method-of-finding-probability-of-getting-an-onto-function-from-all-pos/2762968#2762968">similar/related question</a>, I can make use of the overkill I performed there:</p>
<blockquote>
<p>In how many ways can $n$ elements be divided into $k$ distinct groups?</p>
</blockquote>
<p>The answer turns out to be described by the recurrence relation:
$$
f(n,k) = k\cdot f(n-1,k)+f(n-1,k-1)
$$
where $f(n,0)=0$ and $f(n,1)=1$. As stated there, this is reflected in the following table:
$$
\begin{array}{|r|rrrrr|}
\hline
&1&2&3&4&5\\
\hline
1&1&1&1&1&1\\
2&&1&3&7&15\\
3&&&1&6&25\\
4&&&&1&10\\
5&&&&&1\\
\hline
\end{array}
$$
and according to <a href="https://oeis.org/A048993" rel="nofollow noreferrer">The On-Line Encyclopedia of Integer Sequences</a>, this has no closed form yet.</p>
<hr>
<p>From this we can construct the figure you are after, namely we have one of two situations:</p>
<ol>
<li>We push all buttons during the combination. This corresponds to dividing the $n$ buttons into $k$ groups and pushing them in one of the $k!$ possible orders.</li>
<li>We push all but some set of buttons during the combination. This corresponds to dividing the $n$ buttons into $k$ groups, shuffling them to one of the $k!$ possible orders and leaving out the first group.</li>
</ol>
<p>Hence the answer becomes:
$$
2\sum_{k=1}^{n} f(n,k)\cdot k!
$$
if I am not mistaken. For $n=4$ in particular, it then must be:
$$
2(1\cdot 1!+7\cdot 2!+6\cdot3!+1\cdot 4!)=150
$$
and for $n=3$ the figure agrees with your suggested figure, namely:
$$
2(1\cdot 1!+3\cdot 2!+1\cdot 3!)=26
$$</p>
<hr>
<p>I have checked the cases $n=0,1,2,3,4,5,6$ by using a little programming, and they all match the theoretical figures $1,2,6,26,150,1082,9366$ I found by the above method. Another search in <a href="https://oeis.org/A000629" rel="nofollow noreferrer">The On-Line Encyclopedia of Integer Sequences</a> showed that those figures are actually in there too.</p>
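The recurrence and the final count are easy to check by machine for <span class="math-container">$n\ge1$</span> (the <span class="math-container">$n=0$</span> case, with only the empty combination, is counted separately); a small sketch:

```python
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, k):
    # Ways to divide n elements into k non-empty groups (Stirling numbers
    # of the second kind), via the recurrence f(n,k) = k f(n-1,k) + f(n-1,k-1).
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return k * f(n - 1, k) + f(n - 1, k - 1)

def combinations(n):
    return 2 * sum(f(n, k) * factorial(k) for k in range(1, n + 1))

assert combinations(3) == 26
assert combinations(4) == 150
assert [combinations(n) for n in range(1, 7)] == [2, 6, 26, 150, 1082, 9366]
```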
|
10,666 | <p>My question is about <a href="http://en.wikipedia.org/wiki/Non-standard_analysis">nonstandard analysis</a>, and the diverse possibilities for the choice of the nonstandard model R*. Although one hears talk of <em>the</em> nonstandard reals R*, there are of course many non-isomorphic possibilities for R*. My question is, what kind of structure theorems are there for the isomorphism types of these models? </p>
<p><b>Background.</b> In nonstandard analysis, one considers the real numbers R, together with whatever structure on the reals is deemed relevant, and constructs a nonstandard version R*, which will have infinitesimal and infinite elements useful for many purposes. In addition, there will be a nonstandard version of whatever structure was placed on the original model. The amazing thing is that there is a <em>Transfer Principle</em>, which states that any first order property about the original structure true in the reals, is also true of the nonstandard reals R* with its structure. In ordinary model-theoretic language, the Transfer Principle is just the assertion that the structure (R,...) is an elementary substructure of the nonstandard reals (R*,...). Let us be generous here, and consider as the standard reals the structure with the reals as the underlying set, and having all possible functions and predicates on R, of every finite arity. (I guess it is also common to consider higher type analogues, where one iterates the power set ω many times, or even ORD many times, but let us leave that alone for now.) </p>
<p>The collection I am interested in is the collection of all possible nontrivial elementary extensions of this structure. Any such extension R* will have the useful infinitesimal and infinite elements that motivate nonstandard analysis. It is an exercise in elementary mathematical logic to find such models R* as ultrapowers or as a consequence of the Compactness theorem in model theory. </p>
<p>Since there will be extensions of any desired cardinality above the continuum, there are many non-isomorphic versions of R*. Even when we consider R* of size continuum, the models arising via ultrapowers will presumably exhibit some saturation properties, whereas it seems we could also construct non-saturated examples. </p>
<p>So my question is: what kind of structure theorems are there for the class of all nonstandard models R*? How many isomorphism types are there for models of size continuum? How much or little of the isomorphism type of a structure is determined by the isomorphism type of the ordered field structure of R*, or even by the order structure of R*? </p>
| Simon Thomas | 4,706 | <p>The following was useful in a recent paper on asymptotic cones with Kramer, Shelah and Tent. How many ultraproducts $\prod_{\mathcal{U}} \mathbb{N}$ exist up to isomorphism, where $\mathcal{U}$ is a non-principal ultrafilter over $\mathbb{N}$? If $CH$ holds, then obviously just one ... if $CH$ fails, then $2^{2^{\aleph_{0}}}$.</p>
<p>In the case when $CH$ fails, the ultraproducts are already nonisomorphic as linearly ordered sets. The proof uses the techniques of Chapter VI of Shelah's
book "Classification Theory and the Number of Non-isomorphic Models".</p>
|
496,488 | <p>I have a few questions about the proof of the universality of linearly ordered sets. Could anyone advise, please? Thank you.</p>
<p>Lemma: Suppose $(A,<_{1})$ is a linearly ordered set and $(B,<_{2})$ is a dense linearly ordered set without end points. Assume $F\subseteq A$ and $E\subseteq B$ are finite and $h:F \to E$ is an isomorphism from $(F,<_{1})$ to $(E,<_{2})$. If $a\in A-F$, then $\exists b\in B-E$ such that $h\cup \left\{(a,b)\right\}$ is an isomorphism from $F \cup \left\{a\right\}$ to $E \cup \left\{b\right\}$.</p>
<p>Theorem: Every countable linearly ordered set is embeddable into the linearly ordered set $(\mathbb{Q},<)$</p>
<p>Proof: Let $(A,<_{1})$ be a countable linearly ordered set. Let $f:\mathbb{N}\to A$ be a bijection. Fix a bijection $g:\mathbb{N} \to \mathbb{Q}$. We define an embedding $h:A\to \mathbb{Q}$ recursively. Let $h(f(0))=g(0).$ </p>
<p>The induction hypothesis: $h:\left\{f(i): i<n\right\} \to \left\{h(f(i)): i<n\right\}$ is an isomorphism. Let $a=f(n)$, $F=\left\{f(i): i<n\right\}$, $E= \left\{h(f(i)): i<n\right\}$, and $D=\left\{m\in \mathbb{N}: g(m) \not\in E \text{ and } h\cup\left\{(a,g(m)) \right\} \text{ is an isomorphism}\right\}$. By the Lemma, $D\neq\varnothing$. Let $m^{*}=\min(D)$ and define $h(f(n))=g(m^{*})$. Done.</p>
<p>My questions:
What is the role of $h(f(0))=g(0)$? How does it follow that $h: A =F \cup \left\{f(n)\right\} \cup (A-(F \cup \left\{f(n)\right\})) \to \mathbb{Q}=E\cup\left\{g(m^{*})\right\} \cup (\mathbb{Q}-(E \cup \left\{g(m^{*})\right\}))$ is an isomorphism? </p>
| Brian M. Scott | 12,042 | <p>First I’ll explain the proof and some of the motivation for it in greater detail, and then I’ll answer your specific questions.</p>
<p>For greater readability let $a_k=f(k)$ and $q_k=g(k)$ for each $k\in\Bbb N$, so that $F=\{a_k:k<n\}$, the subset of $A$ on which $h$ has already been defined. $E=\{h(a_k):k<n\}=h[F]$, the copy of $F$ in $\Bbb Q$. Now we want to extend $h$ to $a_n$ (called $a$ in the proof) by defining $h(a_n)$ to be some rational number that occupies the same position relative to the members of $E$ as $a_n$ occupies relative to the members of $F$. In other words, if $k,\ell<n$, and $a_k<_1 a_n<_1 a_\ell$, we want $h(a_n)$ to satisfy $h(a_k)<h(a_n)<h(a_\ell)$ in $\Bbb Q$. The lemma says that this is always possible. Specifically, it says that there is a $q\in\Bbb Q\setminus E$ such that $h\cup\{\langle a_n,q\rangle\}$ is an isomorphism from $F\cup\{a_n\}$ to $E\cup\{q\}$. In other words, if $$B=\big\{q\in\Bbb Q:h\cup\{\langle a_n,q\rangle\}\text{ is an isomorphism from }F\cup\{a_n\}\text{ to }E\cup\{q\}\big\}$$ is the set of possible choices for this $q$, the lemma says that $B\ne\varnothing$. For technical reasons the proof that you have says this a little indirectly: instead of using $B$, the actual set of candidates for $h(a_n)$, it uses the set of indices of those candidates in the enumeration of $\Bbb Q$ by $g$. That set is</p>
<p>$$D=\big\{m\in\Bbb N:h\cup\{\langle a_n,q_m\rangle\}\text{ is an isomorphism from }F\cup\{a_n\}\text{ to }E\cup\{q_m\}\big\}\;.$$</p>
<p>The advantage to using $D$ instead of $B$ is that we can now make a <strong>non-arbitrary</strong> choice of an element of $B$: we let $m^*=\min D$, thereby choosing $q_{m^*}$ from $B$, instead of just picking any old element of $B$. From the definition of $D$ we know that $h\cup\{\langle a_n,q_{m^*}\rangle\}$ is an isomorphism from $F\cup\{a_n\}$ to $E\cup\{q_{m^*}\}$, so we extend $h$ to $h\cup\{\langle a_n,q_{m^*}\rangle\}$ by defining $h(a_n)=q_{m^*}$. We now have an isomorphism from $\{a_k:k\le n\}$ to $\{h(a_k):k\le n\}$, and we can continue the inductive construction of $h$.</p>
<p>In the end we have a function $h:A\to\Bbb Q$ defined on all of $A$. Suppose that $a,a'\in A$. There are $k,\ell\in\Bbb N$ such that $a=a_k$ and $a'=a_\ell$; let $n=\max\{k,\ell\}$. In the construction above we ensured that $h(a_n)$ was ‘in the right place’ relative to all $h(a_i)$ with $i<n$: if $a_i<_1 a_n$, then $h(a_i)<h(a_n)$, and if $a_n<_1 a_i$, then $h(a_n)<h(a_i)$. Thus, once $h(a_n)$ was chosen, we were assured that if $a<_1 a'$, then $h(a)=h(a_k)<h(a_\ell)=h(a')$, and similarly, if $a'<_1 a$, then $h(a')=h(a_\ell)<h(a_k)=h(a)$. Thus, $h$ is an isomorphism: it preserves the ordering. The key idea here is that $h$ is an isomorphism iff its restriction to every two-element subset of $A$ is an isomorphism: if $h$ preserves the order of each pair of elements of $A$, it preserves the entire order. This is why we can construct it one point at a time: if we correctly place the image of each new point with respect to all of the earlier points, then in the end we’ve preserved the order of every pair of points and therefore the entire ordering of $A$.</p>
<p>Now for your specific questions:</p>
<ol>
<li><p>$\Bbb Q$ is <em>homogeneous</em>: it ‘looks the same’ at every point. More technically, if $q_0$ and $q_1$ are distinct points of $\Bbb Q$, there is an isomorphism of $\Bbb Q$ onto itself that sends $q_0$ to $q_1$; the map $q\mapsto q+(q_1-q_0)$ works, for instance. This means that it doesn’t matter where we start embedding $A$ into $\Bbb Q$: if we have one embedding $h:A\to\Bbb Q$, we can ‘slide’ the embedding within $\Bbb Q$ just by adding a rational constant to each image. That is, if $r\in\Bbb Q$, the map $h_r:A\to\Bbb Q:a\mapsto h(a)+r$ is also an isomorphism from $A$ into $\Bbb Q$, but instead of sending to $a_0$ to $q_0$, it sends it to $q_0+r$. In short, it really doesn’t matter where we send $a_0$, but we have to send it somewhere, and if we follow the rule used in the induction step of picking the first available rational in the enumeration of $\Bbb Q$ by $g$, we’ll send $a_0$ to $q_0$.</p></li>
<li><p>What you’ve written is not a correct description of $h$ after the induction step. At that point, as I said above, $h:F\cup\{a_n\}\to E\cup\{q_{m^*}\}$, and $h$ is an isomorphism between $F\cup\{a_n\}$ and $E\cup\{q_{m^*}\}$. In the notation of your proof those sets are $F\cup\{f(n)\}$ and $E\cup\{g(m^*)\}$.</p></li>
</ol>
<p><strong>Added:</strong> I forgot to mention another reason for using $D$ instead of $B$: if $\langle A,<_1\rangle$ is a dense order without endpoints, the construction will produce an isomorphism from $A$ <strong>onto</strong> $\Bbb Q$, thereby showing that up to isomorphism there is just one countable dense linear order without endpoints. If $\Bbb Q\setminus h[A]\ne\varnothing$, let $m\in\Bbb N$ be minimal such that $q_m\notin h[A]$. A bit of thought should convince you that there must have been some stage $n$ in the construction when $\{q_k:k<m\}\subseteq E$ (since all $q_k$ with $k<m$ are in the range of $h$) <strong>and</strong> $q_m$ was in the same place relative to the members of $E$ as $a_n$ was relative to the members of $F$. That means that $q_m\in B$ at stage $n$, and $m\in D$. And since at that point we’d already ‘used’ all $q_k$ with $k<m$, $m$ must have been the smallest member of $D$, and we’d have set $m^*=m$. But then we’d have set $h(a_n)=q_m$, putting $q_m\in h[A]$ after all. This contradiction shows that $h[A]$ must be all of $\Bbb Q$.</p>
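<p>If it helps to see the greedy rule in action, here is a small Python simulation of the construction. The particular enumeration of $\Bbb Q$ and the sample order are my own arbitrary choices, not part of the proof:</p>

```python
from fractions import Fraction

def enumerate_rationals(n):
    """A fixed enumeration q_0, q_1, ... of distinct rationals (one choice of g)."""
    out, seen = [Fraction(0)], {Fraction(0)}
    height = 2                                 # height = |numerator| + denominator
    while len(out) < n:
        for den in range(1, height):
            num = height - den
            for p in (num, -num):
                q = Fraction(p, den)
                if q not in seen:
                    seen.add(q)
                    out.append(q)
        height += 1
    return out[:n]

def embed(elements, less, rationals):
    """Greedy rule from the proof: send each new element to the FIRST rational
    in the enumeration that sits correctly relative to all earlier images."""
    h = {}
    for a in elements:
        for q in rationals:
            if q in h.values():
                continue
            if all((less(b, a) and h[b] < q) or (less(a, b) and q < h[b]) for b in h):
                h[a] = q
                break
    return h

# Toy example: the set {0, ..., 9} carrying the REVERSED usual order
elems = list(range(10))
less = lambda x, y: x > y                      # x <_1 y  iff  x > y as integers
h = embed(elems, less, enumerate_rationals(200))
ok = all(h[x] < h[y] for x in elems for y in elems if less(x, y))
print(ok)  # True: h preserves the order <_1
```

<p>On this toy order the rule produces $h(0)=0$, $h(1)=-1$, $h(2)=-2,\dots$: each new point goes to the first rational in the enumeration landing on the correct side of every earlier image, exactly as in the induction step above.</p>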
|
18,895 | <p>I know that the answer is $C(8,2)$, but I don't get why.
Can anyone please explain it?</p>
| Yochai Timmer | 5,881 | <p>$C(8,2)$ is kinda self-explanatory:<br>
you have $8$ bits, and you choose which $2$ of them are ones.</p>
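<p>A throwaway brute-force check of that count:</p>

```python
from itertools import product
from math import comb

# count all 8-bit strings containing exactly two ones
count = sum(1 for bits in product((0, 1), repeat=8) if sum(bits) == 2)
print(count, comb(8, 2))  # 28 28
```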
|
1,981,014 | <blockquote>
<p>Given the function $f(x)=x^{2}-4x-5$, $A=[0,3)$ and $B=[0,1]$, find $f(A)$ and $f^{-1}(B)$.</p>
</blockquote>
<p>I found $f(A)$ by looking at the graph $f([0,3))=[-9,-5]$ but how would I calculate this without the graph? If I plug in $0$ into $f$ I get $-5$, and if I plug in $3$ I get $-8$, when I should get $-9$.</p>
| Community | -1 | <p>More generally, if $f: \mathbb{R} \rightarrow \mathbb{R}, f(x) = ax^2 + bx + c, a \gt 0$ then $f$ is decreasing on $(-\infty, - \frac b {2a}]$ and increasing on $[- \frac b {2a}, + \infty)$. You can use this together with the fact that $f$ is continuous to get the images you want.</p>
|
1,981,014 | <blockquote>
<p>Given the function $f(x)=x^{2}-4x-5$, $A=[0,3)$ and $B=[0,1]$, find $f(A)$ and $f^{-1}(B)$.</p>
</blockquote>
<p>I found $f(A)$ by looking at the graph $f([0,3))=[-9,-5]$ but how would I calculate this without the graph? If I plug in $0$ into $f$ I get $-5$, and if I plug in $3$ I get $-8$, when I should get $-9$.</p>
| Parcly Taxel | 357,390 | <p>$$f(x)=x^2-4x-5=(x-2)^2-9$$
The vertex of $f$ lies at $(2,-9)$ and 2 lies in $[0,3)$, so $f[A]$ has a closed lower endpoint at $-9$. 0 is farther from 2 than 3 is, so $f(0)=-5$ constitutes the upper endpoint, which is also closed. Hence $f[A]=[-9,-5]$.</p>
<p>As for $f^{-1}[B]$, solve for the places where $f(x)$ attains the endpoint values of $B$:
$$f(x)=0\text{ when }x=-1\text{ or }x=5$$
$$f(x)=1\text{ when }x=2\pm\sqrt{10}$$
Since $f(x)$ is decreasing when $x<2$ and increasing when $x>2$, we get
$$f^{-1}[B]=[2-\sqrt{10},-1]\cup[5,2+\sqrt{10}]$$</p>
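<p>These endpoints are easy to sanity-check numerically (a quick sketch; the sampling grid is arbitrary):</p>

```python
from math import sqrt

f = lambda x: x ** 2 - 4 * x - 5

# image of A = [0, 3): dense sample, compare with [-9, -5]
xs = [3 * i / 10000 for i in range(10000)]
vals = [f(x) for x in xs]
print(round(min(vals), 4), max(vals))  # -9.0 (near x = 2) and -5.0 (at x = 0)

# preimage of B = [0, 1]: the four endpoint candidates really map to 0 and 1
endpoints = [-1.0, 5.0, 2 - sqrt(10), 2 + sqrt(10)]
print([round(f(c), 9) for c in endpoints])  # [0.0, 0.0, 1.0, 1.0]
```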
|
309,395 | <p>I am currently learning about <em>Direct Proofs</em>. I am struggling trying to find a starting point to prove the Statement: <em>For all real numbers $c$, if $c$ is a root of a polynomial with rational coefficients, then c is a root of a polynomial with integer coefficients</em>. Based on a definition given by the book and the professor: A number is $c$ is called a root of a polynomial $p(x)$ <em>if, and only if,</em> $p(c) = 0$. But how can I prove the veracity of this statement using the given definition?</p>
| Emily | 31,475 | <p>Example (as a hint):</p>
<p>$$P(x) = \frac{3}{2}x^2 + 2x - \frac{1}{3}$$</p>
<p>Multiply by 6:</p>
<p>$$ 6P(x) = \frac{18}{2}x^2 + 12x - \frac{6}{3}$$
$$ 6P(x) = 9x^2 + 12x - 2$$</p>
<p>If $c$ is a zero of $P(x)$, then $6P(c) = 0$, so $0 = 9c^2+12c-2$, and so $c$ is a zero of a polynomial with integer coefficients.</p>
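<p>The same trick works for any polynomial with rational coefficients: multiply through by the least common multiple of the denominators. A small sketch in exact arithmetic (illustration only; note <code>math.lcm</code> needs Python 3.9+):</p>

```python
from fractions import Fraction
from math import lcm

# P(x) = (3/2) x^2 + 2 x - 1/3, stored as a coefficient list [a2, a1, a0]
coeffs = [Fraction(3, 2), Fraction(2), Fraction(-1, 3)]

# multiply through by the lcm of the denominators to get integer coefficients
m = lcm(*(c.denominator for c in coeffs))
int_coeffs = [int(c * m) for c in coeffs]
print(m, int_coeffs)  # 6 [9, 12, -2]

# Q(x) = m * P(x), so P and Q have exactly the same roots
P = lambda x: sum(c * x ** (2 - i) for i, c in enumerate(coeffs))
Q = lambda x: sum(c * x ** (2 - i) for i, c in enumerate(int_coeffs))
c0 = Fraction(1, 7)  # arbitrary exact test point
print(Q(c0) == m * P(c0))  # True
```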
|
1,776,726 | <p>I'm trying to determine whether or not </p>
<blockquote>
<p>$$\sum_{k=1}^\infty \frac{2+\cos k}{\sqrt{k+1}}$$ </p>
</blockquote>
<p>converges or not. </p>
<p>I have tried using the ratio test but this isn't getting me very far. Is this a sensible way to go about it or should I be doing something else?</p>
| egreg | 62,967 | <p>Write
$$
\frac{a^2+1}{a^2+2}=1-\frac{1}{a^2+2}
$$
Minimizing this is the same as maximizing $1/(a^2+2)$ which, in turn, is the same as minimizing $a^2+2$ or, as well, minimizing $a^2$.</p>
<p>Since
$$
a=-\frac{x^2-3x-2}{x-1}
$$
the minimum value for $a^2$ is obtained when $x^2-3x-2=0$.</p>
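<p>A quick numerical confirmation that the minimum occurs at a root of $x^2-3x-2$ (sketch; the scan range and step are arbitrary):</p>

```python
from math import sqrt

a = lambda x: -(x ** 2 - 3 * x - 2) / (x - 1)
g = lambda x: (a(x) ** 2 + 1) / (a(x) ** 2 + 2)

# grid-search the minimum of g over x, skipping the pole at x = 1
xs = [x / 1000 for x in range(-5000, 5001) if x != 1000]
best = min(xs, key=g)

roots = [(3 + sqrt(17)) / 2, (3 - sqrt(17)) / 2]   # roots of x^2 - 3x - 2 = 0
print(round(g(best), 4))                            # 0.5, since a = 0 there
print(min(abs(best - r) for r in roots) < 0.01)     # True: minimizer is near a root
```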
|
1,098,438 | <p>This problem is really bothering me for some time, I appreciate if you have some idea and insight.</p>
<blockquote>
<p>Prove that</p>
<p>$$2^{2^n}+5^{2^n}+7^{2^n}$$</p>
<p>is divisible by $39$ for all natural numbers $n$.</p>
</blockquote>
<p>There was a suggestion that this should be done by mathematical induction, however, with a twist, but I could not see what the twist is.</p>
| Empy2 | 81,790 | <p>$2^{2^{n+1}}=2^{2^n2}=(2^{2^n})^2$ so each term in this sequence is the square of the previous one.<br>
Look at the remainders when you divide them by $13$:
$$2^{2^1}=4\\
2^{2^2}=4^2=16=3\pmod{13}\\
2^{2^3}=3^2=9\pmod{13}\\
2^{2^4}=9^2=81=3\pmod{13}$$
So the remainders repeat after that, and are $4,3,9,3,9,3,9$.<br>
Do the same thing for $5^{2^n}$ and $7^{2^n}$, and for the remainders modulo $3$.</p>
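<p>Putting the pieces together, a quick computational check of both the claim and the mod-$13$ pattern (sketch):</p>

```python
def terms_mod(m, n_max=12):
    """(2^(2^n) + 5^(2^n) + 7^(2^n)) mod m for n = 1..n_max, via repeated squaring."""
    a, b, c = 2 * 2 % m, 5 * 5 % m, 7 * 7 % m   # the n = 1 values
    sums = []
    for _ in range(n_max):
        sums.append((a + b + c) % m)
        a, b, c = a * a % m, b * b % m, c * c % m  # each term is the square of the last
    return sums

print(terms_mod(39))                              # twelve zeros
print([pow(2, 2 ** n, 13) for n in range(1, 8)])  # [4, 3, 9, 3, 9, 3, 9]
```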
|
1,098,438 | <p>This problem has been bothering me for some time; I'd appreciate any ideas or insight.</p>
<blockquote>
<p>Prove that</p>
<p>$$2^{2^n}+5^{2^n}+7^{2^n}$$</p>
<p>is divisible by $39$ for all natural numbers $n$.</p>
</blockquote>
<p>There was a suggestion that this should be done by mathematical induction, however, with a twist, but I could not see what the twist is.</p>
| VividD | 121,158 | <p>Following similar patterns of proof from other answers, one can also prove:</p>
<ul>
<li>$2^{2^n}+3^{2^n}+5^{2^n}$ is always divisible by $19$</li>
<li>$2^{2^n}+4^{2^n}+6^{2^n}$ is always divisible by $7$</li>
<li>$3^{2^n}+4^{2^n}+7^{2^n}$ is always divisible by $37$</li>
</ul>
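<p>All three variants (and the original) can be spot-checked the same way (sketch):</p>

```python
# each claim: d divides a^(2^n) + b^(2^n) + c^(2^n) for all n >= 1
claims = [((2, 5, 7), 39), ((2, 3, 5), 19), ((2, 4, 6), 7), ((3, 4, 7), 37)]
ok = all((pow(a, 2 ** n, d) + pow(b, 2 ** n, d) + pow(c, 2 ** n, d)) % d == 0
         for (a, b, c), d in claims for n in range(1, 10))
print(ok)  # True for n = 1..9
```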
|
1,037,068 | <p>If two sequences converge to the same limit, we have
$$\lim_{n\rightarrow \infty }\left ( a_{n} \right )=\lim_{n\rightarrow \infty }\left ( b_{n} \right )$$</p>
<p>As a follow up, is the following equality also true?
$$\lim_{n\rightarrow \infty }\left ( \ln a_{n} \right )=\lim_{n\rightarrow \infty }\left ( \ln b_{n} \right )$$</p>
<p>Notice that I didn't put absolute value brackets, because I am working with sequences involving only positive terms at the moment.</p>
| Disintegrating By Parts | 112,478 | <p>You have probably seen how Parseval's equality for a given $x$ is equivalent to convergence of $\sum_{j}\langle x,v_j\rangle v_j$, but have forgotten:
$$
x = \left(x-\sum_{j=1}^{N}\langle x,v_j\rangle v_j\right)+\sum_{j=1}^{N}\langle x,v_j\rangle v_j
$$
This is an orthogonal decomposition, meaning that the expression in parentheses on the right is orthogonal to the sum that is written after that. Therefore, by the Pythagorean Theorem,
$$
\begin{align}
\|x\|^{2} & = \left\|x-\sum_{j=1}^{N}\langle x,v_j\rangle v_j\right\|^{2}
+ \left\|\sum_{j=1}^{N}\langle x,v_j\rangle v_j\right\|^{2}\\
& = \left\|x-\sum_{j=1}^{N}\langle x,v_j\rangle v_j\right\|^{2}
+\sum_{j=1}^{N}|\langle x,v_j\rangle|^{2}
\end{align}
$$
Now you can see that the sum on the far right converges to $\|x\|^{2}$ iff the first term on the right converges to $0$. In other words, Parseval's equality is equivalent to norm convergence of the vector sum to $x$. I've written this as an ordered sum, but the order does not matter. The uniqueness of the coefficients for any such sum converging to $x$ follows from orthogonality of the $v_j$.</p>
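<p>The Pythagorean identity above is easy to verify numerically with a toy orthonormal set (a dependency-free sketch; the vectors are arbitrary choices):</p>

```python
from math import sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# an orthonormal pair v_1, v_2 in R^3, chosen by hand, and an arbitrary x
v = [(1.0, 0.0, 0.0), (0.0, 1 / sqrt(2), 1 / sqrt(2))]
x = (3.0, 1.0, -2.0)

coeffs = [dot(x, vj) for vj in v]                                      # <x, v_j>
proj = [sum(c * vj[i] for c, vj in zip(coeffs, v)) for i in range(3)]  # sum <x,v_j> v_j
residual = [xi - pi for xi, pi in zip(x, proj)]

# ||x||^2 = ||x - sum_j <x,v_j> v_j||^2 + sum_j |<x,v_j>|^2
lhs = dot(x, x)
rhs = dot(residual, residual) + sum(c * c for c in coeffs)
print(abs(lhs - rhs) < 1e-9)  # True
```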
|
3,562,202 | <p>Show that <span class="math-container">$$\int_0^{2\pi}\arctan\Bigl( \frac {\sin x} {\cos x -2 }\Bigr) \, dx=0.$$</span> I can write this as <span class="math-container">$$\mathrm {Im}\int_0^{2\pi}\log( e^{ix}-2)\,dx.$$</span> Making the substitution <span class="math-container">$z =e^{ix} $</span> leads to <span class="math-container">$\operatorname{Im}\int_C\log( z-2)\frac {dz}{iz}$</span>. This should be equal to <span class="math-container">$2\pi \log (-2)$</span>. However, the imaginary part of <span class="math-container">$\log (-2)$</span> is not <span class="math-container">$0,$</span> so I must be wrong somewhere. Thanks for any clarification.</p>
| Random Variable | 16,033 | <p>I initially thought the issue was that <span class="math-container">$\operatorname{Arg}(a+ib)=\arctan \left(\frac{b}{a} \right)$</span> only if <span class="math-container">$a>0$</span>.</p>
<p>But <span class="math-container">$$ \begin{align} \int_{0}^{2 \pi} \arctan \left(\frac{\sin x}{\cos x -2} \right) \, \mathrm dx &= \int_{0}^{\pi} \arctan \left(\frac{\sin x}{\cos x -2} \right) \, \mathrm dx + \int_{\pi}^{2 \pi} \arctan \left(\frac{\sin x}{\cos x -2} \right) \, \mathrm dx \\ &= \int_{0}^{\pi} \left( \Im \left(\log(e^{ix}-2)\right) - \pi\right)\, \mathrm dx + \int_{\pi}^{2\pi} \left( \Im \left(\log(e^{ix}-2)\right) + \pi\right)\, \mathrm dx \\ &= \Im \int_{0}^{2 \pi} \log(e^{ix}-2) \, \mathrm dx \end{align}$$</span></p>
<p>So that's not the issue.</p>
<p>The issue is instead the fact that we're using the principal branch of the logarithm, and the branch cut for <span class="math-container">$\log(z-2)$</span> intersects the unit circle.</p>
<p>To get around this, we can rewrite the integral as </p>
<p><span class="math-container">$$ \begin{align} \int_{0}^{2 \pi} \arctan \left(\frac{-\sin x}{2-\cos x }\right) \mathrm dx &= \Im \int_{0}^{2 \pi} \log(2-e^{ix}) \, \mathrm dx \\ &= \Im \int_{|z|=1} \log(2-z) \, \frac{\mathrm dz}{iz} \\ &=\Im \left( 2 \pi \log 2 \right) \\ &=0. \end{align}$$</span></p>
<p>When using the principal branch of the logarithm, the branch cut for <span class="math-container">$\log(2-z)$</span> is on <span class="math-container">$[2, \infty)$</span> since that is where <span class="math-container">$2-z$</span> is real and negative.</p>
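<p>As a final sanity check, composite Simpson quadrature (stdlib only) confirms that the integral vanishes:</p>

```python
from math import atan, sin, cos, pi

f = lambda x: atan(sin(x) / (cos(x) - 2))

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(f, 0.0, 2 * pi)
print(abs(val) < 1e-10)  # True: the integrand is odd about x = pi, so the integral is 0
```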
|
820,015 | <p>Suppose that you have $k$ dice, each with $N$ sides, where $k\geq N$. The definition of a straight is when all $k$ dice are rolled, there is at least one die revealing each number from $1$ to $N$. </p>
<p>Given the pair $(k,N)$, what is the probability that any particular roll will give a straight?</p>
| Graham Kemp | 135,106 | <p>An equivalent problem is fitting $N-1$ bars between $k$ stars so that there is one star between each bar. Here the stars represent the $k$ dice, and the bars are the borders of boxes which represent the $N$ values the dice can take.</p>
<p>There are ${k-1\choose N-1}$ ways to do this.</p>
<p>There are ${N+k-1\choose N-1}$ ways to arrange the <a href="http://en.wikipedia.org/wiki/Stars_and_bars_%28combinatorics%29" rel="nofollow">stars and bars</a> without this condition.</p>
<p>So the probability of getting a straight is: $$\mathcal{\Large P}(\text{Straight})=\frac{k-1\choose N-1}{N+k-1\choose N-1}=\frac{(k-1)!k!}{(k-N)!(N+k-1)!}, \quad\forall k\geq N$$</p>
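<p>The last simplification step can be checked directly (this verifies only the algebra relating the two expressions, under the same stars-and-bars model):</p>

```python
from math import comb, factorial

def p_binom(k, N):   # binomial form
    return comb(k - 1, N - 1) / comb(N + k - 1, N - 1)

def p_fact(k, N):    # factorial form from the final formula
    return factorial(k - 1) * factorial(k) / (factorial(k - N) * factorial(N + k - 1))

pairs = [(p_binom(k, N), p_fact(k, N)) for N in range(1, 8) for k in range(N, 12)]
print(all(abs(a - b) < 1e-12 for a, b in pairs))  # True
```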
|
54,541 | <p>Apparently, Mathematica has no real sprintf-equivalent (unlike any other high-level language known to man). <a href="https://mathematica.stackexchange.com/questions/970/sprintf-or-close-equivalent-or-re-implementation">This has been asked before</a>, but I'm wondering if the new <code>StringTemplate</code> function in Mathematica 10 can be extended to include such formatting capabilities.</p>
<p>What I have in mind is a function that takes a <code>TemplateObject</code>, and looks for "formatting specification strings" immediately after <code>TemplateSlot</code>'s and <code>TemplateExpression</code>'s and replaces them with <code>TemplateExpression</code>'s containing appropriate formatting code. So, for example, you could write:</p>
<pre><code>st = applyFormat@StringTemplate["Number: `1`%.2 some other text"]
</code></pre>
<p>and you would get something equivalent to: </p>
<pre><code>TemplateObject[{"Number: ",
TemplateExpression[ToString[NumberForm[TemplateSlot[1], {\[Infinity], 2}]]],
" some other text"}, InsertionFunction -> TextString,
CombinerFunction -> StringJoin]
</code></pre>
<p>I'm not particularly picky about the syntax (it doesn't have to mimic sprintf), as long as:</p>
<ul>
<li>it's easy to write and easy to read</li>
<li>it supports Mathematica's number formatting functions (<code>AccountingForm</code>, <code>ScientificForm</code>...)</li>
<li>it's extensible (e.g by delegating the formatting to a pattern that can be overwritten/extended)</li>
<li>it's compatible with existing <code>StringTemplate</code> templates</li>
</ul>
<p>I've started a function that does this, but I'm curious if you have better ideas (both implementation- and syntax-wise), so I'm posting it as an answer, not as part of the question.</p>
| Niki Estner | 242 | <p>This is my first attempt:</p>
<pre><code>Clear[formatPattern, formatPatternQ, applyFormat, applyFormatToValue, \
builtinFormatFunction]
formatPattern =
StartOfString ~~ "%" ~~
format : (DigitCharacter ... ~~ ("." ~~ DigitCharacter ..) ...) ~~
formatName : WordCharacter ... ~~ rest : ___;
formatPatternQ = StringMatchQ[#, formatPattern] &;
applyFormat[t_] := t
applyFormat[
TemplateObject[{b___, slot_TemplateSlot | TemplateExpression[slot_],
fmtStr_?formatPatternQ, r___}, opts : ___]] :=
Module[{formatting, restStr},
{{formatting, restStr}} =
StringCases[fmtStr,
formatPattern :> {applyFormatToValue[
ToExpression /@ StringSplit[format, ".", All], formatName],
rest}];
applyFormat[
TemplateObject[{b, TemplateExpression[formatting[slot]], restStr,
r}, opts]]]
applyFormatToValue[{padding_Integer, decimalSpec___}, format_][
slot_] :=
padString[applyFormatToValue[{Null, decimalSpec}, format][slot],
padding]
applyFormatToValue[{Null, decimalPlaces_}, format_][slot_] :=
Module[{formatFunction, value},
formatFunction = builtinFormatFunction[format];
value = N@slot;
ToString[formatFunction[value, {\[Infinity], decimalPlaces}],
TraditionalForm]]
builtinFormatFunction["sci"] := ScientificForm
builtinFormatFunction["eng"] := EngineeringForm
builtinFormatFunction["num" | ""] := NumberForm
builtinFormatFunction["acc"] := AccountingForm
(* not yet implemented - formatted values are in TraditionalForm, I
don't know how to format those *)
padString = #1 &;
</code></pre>
<p>Usage:</p>
<pre><code>applyFormat[StringTemplate["abc `1`%.3 def"]][π*10000]
</code></pre>
<blockquote>
<p>Out[27]= abc 31415.927 def</p>
</blockquote>
<pre><code>applyFormat[StringTemplate["abc `1`%.3sci def"]][π*10000]
</code></pre>
<blockquote>
<p>Out[28]= abc 3.142*10^4 def</p>
</blockquote>
<pre><code>applyFormat[StringTemplate["abc `1`%5.3eng def"]][π*10000]
</code></pre>
<blockquote>
<p>Out[29]= abc 31.416*10^3 def</p>
</blockquote>
|
3,413,253 | <p>Can anyone give me some examples and non-examples of Lindelöf or second countable spaces, and of spaces that are Lindelöf but not second countable? I understand the definitions but find them hard to visualize.
I have tried googling it, but I only found trivial examples like finite sets or the empty set.</p>
<p>In general, how can one construct a topological space that is Lindelöf or second countable?</p>
<p>Someone on Stack Exchange said the real line with the discrete topology is Lindelöf, but I do not think so. We can take the open cover consisting of all the singleton sets, which are open in the discrete topology. Since this cover is uncountable and no proper subcollection still covers the line, it has no countable subcover, so by definition the space is not Lindelöf.</p>
<p>Last question: is $(0,1)$ in the real line, equipped with the usual topology, Lindelöf? I think it is Lindelöf, but I could not give a formal proof.
$(0,1)$ fails to be compact, since the open cover $\{(1/n,1-1/n)\}$ has no finite subcover; but this cover does not help against the Lindelöf property, since it is itself countable and the rational numbers are dense in $(0,1)$. So intuitively I think it is Lindelöf.</p>
<p>I wrote a pretty long question. My mother tongue is not English. Hopefully, you guys can understand me.</p>
| Wlod AA | 490,755 | <p>An example of a Lindelöf non-second countable space, which has some additional nice properties, was constructed/discovered during the Prague 1961 Topological Conference (by wh). The point-set is the unit disc</p>
<p><span class="math-container">$$\ B(\mathbf 0\,\ 1)\ :=
\ \{p\in\mathbb R^2: |p|\le 1\} $$</span></p>
<p>The neighborhoods of the points <span class="math-container">$\ p\ $</span> of the disk,
with <span class="math-container">$\ |p|<1,\ $</span> are the ordinary Euclidean ones. In the case of <span class="math-container">$\ |p|=1,\ $</span> a base neighborhood, <span class="math-container">$\ N_{a\,b}(p),\ $</span> is determined by points <span class="math-container">$\ a\ b\ $</span> such that <span class="math-container">$\ |a|=|b|=1\ $</span> and <span class="math-container">$\ a\ne p\ne b\ne a.\ $</span> This neighborhood consists of points which are between the chord which connects <span class="math-container">$\ a\ $</span> to <span class="math-container">$\ p\ $</span> and the unit circle, together with a similar one for <span class="math-container">$\ b\ $</span> and <span class="math-container">$\ p\ $</span> (the arcs <span class="math-container">$\ ap\ $</span> and <span class="math-container">$\ pb\ $</span> are such that <span class="math-container">$\ a\ $</span> does not belong to the arc <span class="math-container">$\ pb,\ $</span> nor <span class="math-container">$\ b\ $</span> to <span class="math-container">$\ ap.$</span>)</p>
<blockquote>
<p><em><strong>Note:</strong> Following that Prague conference, my example was published in a paper by A.Archangielski and W.Holsztyński (there is only one paper by these two authors). I've solved a respective question asked by Archangielski.</em></p>
</blockquote>
|
635,301 | <p>I need some help with the following problem: </p>
<blockquote>
<p>Let $f:\Bbb C \to \Bbb C$ be continuous satisfying that $f(\Bbb C)$ is an open set and that $|f(z)| \to \infty$ as $z\to \infty$. Prove that $f(\Bbb C)=\Bbb C$. </p>
</blockquote>
<p>My idea on this one is to prove by contradiction and assume that $S=f(\Bbb C)\ne\Bbb C$ to get some contradiction with the given two properties of the function. But I have no idea on how to proceed next. </p>
<p>Thanks in advance. </p>
| Community | -1 | <p>Your conditions imply that we can <em>continuously extend</em> $f$ to give a continuous function $\hat{f}$ from the Riemann sphere to itself given below</p>
<p>$$ \hat{f}(x) = \begin{cases}
f(x) & x \neq \infty \\ \infty & x = \infty \end{cases}$$</p>
|
465,001 | <p>$\mathbf{h}_i\in\mathbb{C}^{M}$ are column vectors $\forall i=\{1, 2, \cdots, K\}$.</p>
<p>$q_i\in\mathbb{R}_+$ are scalars $\forall i=\{1, 2, \cdots, K\}$</p>
<p>$\lvert\bullet\rvert$ denotes determinant of a square matrix or Euclidean norm of a vector according to the context. </p>
<p>From Sylvester's theorem, it's trivial to show that $\lvert\mathbf{I}_M+\mathbf{h}_1q_1\mathbf{h}_1^{\text{H}}\rvert=1+q_1\left|\mathbf{h}_1\right|^2$. Is it possible to extend the theorem to simplify $\lvert\mathbf{I}_M+\sum_{i=1}^K\mathbf{h}_iq_i\mathbf{h}_i^{\text{H}}\rvert$?</p>
<p>P. S. It's part of a problem involving multiple access channels and I am not sure if a definite solution exists. </p>
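<p>For what it's worth, the rank-one identity checks out numerically; here is a dependency-free sketch with a tiny complex-determinant routine (the random vector, the value of $q_1$, and the dimension are arbitrary choices):</p>

```python
import random

def det(M):
    """Determinant via Gaussian elimination with partial pivoting (complex entries)."""
    M = [row[:] for row in M]
    n, d = len(M), 1 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d                                        # row swap flips the sign
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

random.seed(0)
dim, q = 4, 2.5
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]

# A = I + h q h^H, a rank-one update of the identity
A = [[(1 if i == j else 0) + q * h[i] * h[j].conjugate() for j in range(dim)]
     for i in range(dim)]

lhs = det(A)
rhs = 1 + q * sum(abs(z) ** 2 for z in h)
print(abs(lhs - rhs) < 1e-9)  # True: det(I + q h h^H) = 1 + q ||h||^2
```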
| amWhy | 9,003 | <p>Actually, for both functions $\cos\left(\frac 1x\right)$ and $\sin \left(\frac 1x\right)$, the limits as $x\to 0$ do not exist, and for the same reasons, regardless of whether we consider the limit as $x \to 0^+$ or $x\to 0^-$.</p>
<p>Look, for example, at the behavior of $\cos \left(\frac 1x\right)$ on the interval $(-0.1, 0.1)$. </p>
<p><img src="https://i.stack.imgur.com/VHX5i.png" alt="enter image description here"></p>
<p>For any function $f(x)$, for a limit $L$ to exist as $x \to a$, we have $$\lim_{x\to a}f(x)=L\in \mathbb R\iff \forall(x_n)\to a,\; f(x_n)\to L$$</p>
<p>For both $f(x) = \cos \frac 1x$ and $f(x) = \sin \frac 1x$, there is no such $L$ to which $f(x)$ converges as $x \to 0$, whether from the right or from the left.</p>
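<p>Concretely, two sequences tending to $0$ pin down different candidate limits (sketch):</p>

```python
from math import cos, pi

# two sequences x_n -> 0+ along which cos(1/x) takes different constant values
peaks = [1 / (2 * pi * n) for n in range(1, 6)]          # here 1/x = 2*pi*n
troughs = [1 / ((2 * n + 1) * pi) for n in range(1, 6)]  # here 1/x = (2n+1)*pi

print([round(cos(1 / x), 6) for x in peaks])    # [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(cos(1 / x), 6) for x in troughs])  # [-1.0, -1.0, -1.0, -1.0, -1.0]
```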
|
8,013 | <p>Given a set $S$, one can easily find a set with greater cardinality -- just take the power set of $S$. In this way, one can construct a sequence of sets, each with greater cardinality than the last. Hence there are at least countably infinite many orders of infinity.</p>
<p><strong>But do there exist uncountably infinite orders of infinity?</strong></p>
<p>To be precise, does there exist an uncountable set of sets whose elements all have distinct cardinalities?</p>
<p>The first answer to <a href="https://math.stackexchange.com/questions/5378/types-of-infinity">Types of infinity</a> suggests the answer is "yes", but only establishes a countable number of cardinalities (which, to be fair, was what the question was asking about).</p>
<p>I've been exposed to enough mathematical logic to realize that I'm walking in a minefield; let me know if I've already mis-stepped.</p>
| W Warren | 32,183 | <p>...lots of the math on Destiny's wall (as Mika McKinnon describes them) are just
guesses about energy inflow and outflow...particle densities...space-time configurations inside either a wormhole spacetime..or related space-times (accretion disk anomalous events,.. solar flare events..etc). In the original
pilot episode "Air Part 1" you see the REAL equations for a 'traversable wormhole'..Morris - Thorne or one of the later followup papers..at arxiv.org there are several..just enter "traversable wormhole". Mika then stated on her
consultants blog (you can email her about this by the way)..she just sort of dumped everything she could think the scientists on Destiny might need to look into. One interesting set of equations was in the episode "Human" where Dr. Rush has outlines of Shor's theorems on the University's chalkboard..and some of
Weyl's group theory polynomials (squiggly stuff with the powers and subscripts).. on his whiteboard at home. Any of this actually valid? Well yes and no...to give Mika credit..she sort of 'skips around "subspace equations"' by discussing the Weyl stuff in Dr. Rush's comments on "codes and projections of wormhole power" in both "Human" and "Air part 1". The implication is that Destiny's FTL drive might be 'similar to Star Treks'.(which is a running joke since the entire series has references to other Sci-Fi Tv and movies...startrek starwars..etc.).. but that their FTL drive doesn't seem to need antimatter likes Trek's does..just lots of good solar protons and related solar ejecta.</p>
<p>Glad they never told actor Robert Carlyle about those "Weyl squiggles" (zeta and eta symbols for differential form geometries)..he might have had to explain them to Hoda and kathy Lee...on the Today Show...those ladies get tipsy enough to jump on anything. WW</p>
|
34,204 | <p>I have several contour lines and one point. How can I find a point in one of those contour lines which is nearest to the given point?</p>
<pre><code>(*Create the implicit curves*)
Data={{10,20,1},{10,40,2},{10,60,3},{10,80,4},{20,25,2},{20,45,3},{20,65,4},{30,30,3},{30,50,4},{40,35,4},{40,55,5},{50,20,4},{50,40,5},{60,25,5}};
U=NonlinearModelFit[Data,a x^b (y^(1-b))+c,{a,b,c},{x,y}];
L={ContourPlot[U[x,y]=={1},{x,0,100},{y,0,100},ContourStyle->Red],ContourPlot[U[x,y]=={2},{x,0,100},{y,0,100},ContourStyle->Magenta],ContourPlot[U[x,y]=={3},{x,0,100},{y,0,100},ContourStyle->Brown],ContourPlot[U[x,y]=={4},{x,0,100},{y,0,100},ContourStyle->Blue],ContourPlot[U[x,y]=={5},{x,0,100},{y,0,100},ContourStyle->Green]};
(*Point nearest to which we need to find the points on the curves*)
pt={30,50};
(*Graphic*)
Show[L,Graphics[{PointSize[Large],Blue,Point[pt]}],FrameLabel->{"X","Y"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/kElkt.jpg" alt="enter image description here"></p>
| PlatoManiac | 240 | <p>For a general purpose & more accurate solution I can suggest the following.</p>
<p><strong>Update</strong>(the missing <code>data</code> part)</p>
<pre><code>(* Create the implicit curves analytically *)
conts = ((a x^b (y^(1 - b)) + c /. U["BestFitParameters"]) == #) & /@Range[5];
(* Given x val function to find y=f(x) values per curve *)
fun[exp_] := {#,y /. Quiet@FindRoot[Evaluate@(exp /. x -> #), {y, 1}]} &;
(* Find discrete (x,f(x)) for 0<x<100 *)
data = (fun[#] /@ Range[.1, 101, .25]) & /@ conts;
</code></pre>
<p>Continuing the old code</p>
<pre><code>(* Interpolate the data *)
ints = Interpolation /@ data;
(* Given a point or a list of points following function finds the minimizer
(x,f(x)) on each curve *)
xpts[pt_] := With[{int = #},
NMinimize[{Evaluate@Total[Sqrt[(x - #1)^2 + (int[x] - #2)^2] &@@@ pt], .1 <x<100},
{x}]] & /@ ints;
{#2[[2]], #3[#2[[2]]]} & @@ Flatten@First@Sort@Transpose@{xpts[pt], ints}
</code></pre>
<blockquote>
<p>{36.9385, 52.1471}</p>
</blockquote>
<p>For fun lets allow some dynamics! Note that I did not update the animations here. The code is self-contained now so one can reproduce similar things easily.</p>
<p><img src="https://i.stack.imgur.com/1d928.gif" alt="enter image description here"></p>
<p><strong>Note:</strong></p>
<p>The above function is written in such a way that one can have more than one point say $X=(x_1,...,x_n)$. Then it will find the unique minimizer $P=(a,b)$ on the given contours such that $dist=\sum_{i=1}^{n}d(P,x_i)$ is minimized where $d(*,*)$ is the Euclidean distance metric. Such a case follows in the figure. Such an example follows below. Here the number or points $\#(X)=n$ is increasing at every step of the animation. They are the black dots on the plot.</p>
<p>One should also note that, given a large number of points, one can try <code>FindMinimum</code> in place of <code>NMinimize</code>, but for up to 500 points <code>NMinimize</code> seems to do the job well enough.</p>
<p><img src="https://i.stack.imgur.com/nt70d.gif" alt="enter image description here"></p>
<p><strong>Code:</strong></p>
<pre><code>xptsSum[pt_] :=
With[{int = #},
x /. Last@
NMinimize[{Evaluate@
Total[Sqrt[(x - #1)^2 + (int[x] - #2)^2] & @@@ pt], .1 < x <
100}, {x}]] & /@ ints; CptSum =
Table[{50 + Sin[u^.6] u , 40 - Cos[u^.4] u}, {u, 0, 50, 50./89}];
clus = Take[CptSum, #] & /@ Range[Length@CptSum];
imsSum = ParallelMap[With[{pt = #},
xVals = xptsSum[pt];
Plot[Evaluate[(#[x]) & /@ ints], {x, .1, 100},
PlotRange -> {{0, 100}, {-10, 110}}, Frame -> True,
AspectRatio -> 1, ImageSize -> 500,
Epilog -> {{Directive[Opacity[.7], Gray],
Line[pt]}, {Directive[Opacity[.7], Black], PointSize@Small,
Point /@ pt}, {Red, PointSize@Large,
MapThread[Point@{#2, #1[#2]} &, {ints, xVals}]}}]] &, clus];
ListAnimate[imsSum, DefaultDuration -> 10]
</code></pre>
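<p>For readers without Mathematica, the core idea above — minimizing the Euclidean distance from a fixed point to a sampled curve — can be sketched in plain Python. This is only a hypothetical stand-in for the <code>NMinimize</code> call; the helper name and the parabola used as a test curve are illustrative, not part of the original answer:</p>

```python
import math

def nearest_on_curve(f, pt, x_lo, x_hi, samples=20000):
    """Return (distance, x, f(x)) minimizing the distance from pt to the graph of f."""
    px, py = pt
    best = None
    for i in range(samples + 1):
        x = x_lo + (x_hi - x_lo) * i / samples
        d = math.hypot(x - px, f(x) - py)
        if best is None or d < best[0]:
            best = (d, x, f(x))
    return best

# Nearest point on y = x^2 to (0, 2): calculus gives x = sqrt(3/2),
# distance sqrt(7)/2, so the grid search should land very close to that.
d, x, y = nearest_on_curve(lambda t: t * t, (0.0, 2.0), 0.0, 2.0)
print(d, x, y)
```

<p>A local refinement step (e.g. golden-section search around the best grid point) would then play the role of <code>NMinimize</code>'s polishing.</p>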
|
34,204 | <p>I have several contour lines and one point. How can I find a point in one of those contour lines which is nearest to the given point?</p>
<pre><code>(*Create the implicit curves*)
Data={{10,20,1},{10,40,2},{10,60,3},{10,80,4},{20,25,2},{20,45,3},{20,65,4},{30,30,3},{30,50,4},{40,35,4},{40,55,5},{50,20,4},{50,40,5},{60,25,5}};
U=NonlinearModelFit[Data,a x^b (y^(1-b))+c,{a,b,c},{x,y}];
L={ContourPlot[U[x,y]=={1},{x,0,100},{y,0,100},ContourStyle->Red],ContourPlot[U[x,y]=={2},{x,0,100},{y,0,100},ContourStyle->Magenta],ContourPlot[U[x,y]=={3},{x,0,100},{y,0,100},ContourStyle->Brown],ContourPlot[U[x,y]=={4},{x,0,100},{y,0,100},ContourStyle->Blue],ContourPlot[U[x,y]=={5},{x,0,100},{y,0,100},ContourStyle->Green]};
(*Point nearest to which we need to find the points on the curves*)
pt={30,50};
(*Graphic*)
Show[L,Graphics[{PointSize[Large],Blue,Point[pt]}],FrameLabel->{"X","Y"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/kElkt.jpg" alt="enter image description here"></p>
| C_S | 10,062 | <p>I combined my data and the answer from PlatoManiac, as follows:</p>
<pre><code>(*Input Data*)
Data={{10,20,1},{10,40,2},{10,60,3},{10,80,4},{20,25,2},{20,45,3},{20,65,4},{30,30,3},{30,50,4},{40,35,4},{40,55,5},{50,20,4},{50,40,5},{60,25,5}};
U = NonlinearModelFit[Data, a x^b (y^(1 - b)) + c, {a, b, c}, {x, y}];
(*Create the implicit curves analytically*)
conts=(U[x,y]==#)&/@{1,2,3,4,5};
(*Given x val following function finds y=f(x) values per curve*)
fun[exp_]:={#,y/.Quiet@FindRoot[Evaluate@(exp/.x->#),{y,1}]}&;
(*Find discrete (x,f(x)) for 0<x<100*)
data=(fun[#]/@Range[.1,101,10])&/@conts;
(*point closest to which we need to find the points on the curves*)
pt={{34,52}};
(*Interpolate the data*)
ints=Interpolation/@data;
(*Given a point or a list of points following function finds the minimizer (x,f(x)) on each curve*)
xpts[pt_]:=With[{int=#},NMinimize[{Evaluate@Total[Sqrt[(x-#1)^2+(int[x]-#2)^2]&@@@pt],.1<x<100},{x}]]&/@ints;
{#2[[2]],#3[#2[[2]]]}&@@Flatten@First@Sort@Transpose@{xpts[pt],ints}
{35.9226, 53.3513}
</code></pre>
<p>The result can be visualized in code:</p>
<pre><code>ContourPlot[Evaluate@conts,{x,0,100},{y,0,100},ContourStyle->{Red,Magenta,Brown,Blue,Green},FrameLabel->{"X","Y"},Epilog->{{Blue,PointSize@Large,Point[pt]},{Red,PointSize@Large,Point[{35.9226,53.3513}]}}]
</code></pre>
<p><img src="https://i.stack.imgur.com/0EV3K.jpg" alt="enter image description here"></p>
<p>Thank you all for your answers.</p>
|
683,149 | <p>So I am just learning about bijections, and I am having difficulty figuring out whether these three functions are bijections and how to prove it. </p>
<ol>
<li>$f(x)=x/2$</li>
<li>$f(x)=2x^2$</li>
<li>$f(x)=\lfloor x\rfloor$</li>
</ol>
<p>Sorry I forgot to add the entire question. It is "Determine whether each of these functions is a bijection from $\Bbb R$ to $\Bbb R$."</p>
| Community | -1 | <p>Since you're new to bijections, I'll work out the details of one of them and leave the others for you to try. To show that $f(x) = x/2$ is a bijection, we need to check two things: Whether $f$ is $1-1$, and onto.</p>
<ul>
<li><p>To see if $f$ is $1-1$, we need to see if
$$f(a) = f(b) \implies a = b$$
(That is, whether every output has at <em>most</em> one corresponding input). This is equivalent to asking whether
$$\frac a 2 = \frac b 2 \implies a = b$$
which is certainly true.</p></li>
<li><p>To see if $f$ is onto, we need to see if every real number has a corresponding input. That is, given $b \in \mathbb{R}$, can we find another real number $a$ for which $f(a) = b$? Certainly: Just set $a = 2b$.</p></li>
</ul>
<p>Can you try the other two? Hint: Neither is a bijection. </p>
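<p>A finite sample can refute injectivity or surjectivity (though never prove them on all of $\Bbb R$); a small illustrative check of the three candidates, with ad hoc names:</p>

```python
import math

f1 = lambda x: x / 2          # candidate 1
f2 = lambda x: 2 * x * x      # candidate 2
f3 = math.floor               # candidate 3

sample = [x / 10 for x in range(-50, 51)]

# f1 never collides on the sample, and a = 2b is always a preimage of b.
inj1 = len({f1(x) for x in sample}) == len(sample)

# f2 fails injectivity (f2(-1) = f2(1)) and never hits negative values.
inj2_fails = f2(-1.0) == f2(1.0)

# f3 fails injectivity (floor(0.1) = floor(0.2)) and misses non-integers.
inj3_fails = f3(0.1) == f3(0.2)

print(inj1, inj2_fails, inj3_fails)  # True True True
```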
|
1,878,806 | <p>I am a graduate school freshman.</p>
<p>I have not taken a probability course,</p>
<p>so I have no background in probability.</p>
<p>Could you suggest a probability book, at any level?</p>
| Wavelet | 336,350 | <p>This is a surprisingly difficult question to answer. My natural instinct as a mathematician at heart (though no PhD, yet) is to echo Justin and tell you to go straight into measure theory and then worry about probability. Classical probability theory is indeed best understood in a measure theoretic context, though its historical development tells a different story. On one hand, even if you get a probability book for "beginners", you'll be learning the axioms of measure theory (or at least some of them) without knowing it, so you might as well dive straight into the deep end. On the other hand, if you don't have a sufficient background, you risk losing interest due to the lack of context and motivation. Really, it all depends on your goals and background, and if you could elaborate on those, I'd be more capable of pointing you in the right direction. </p>
<p>If you do indeed want to dive into the deep end, Ash and Doleans-Dade's Measure and Probability Theory is your best bet IMO. It has a nice balance of exposition and rigor but if you've never taken at least a calculus class (the <em>very</em> minimum pre-req IMO) you'll be completely lost. </p>
<p>Edit: Now that I read your question more closely, I see you say graduate school "freshman" (I've never heard that characterization of a grad student but I assume 1st year) so it really depends on your field of study. If it is mathematics or statistics, you absolutely need the measure theoretic version.</p>
|
848,415 | <p>If the limit of one sequence $\{a_n\}$ is zero and the limit of another sequence $\{b_n\}$ is also zero does that mean that $\displaystyle\lim_{n\to\infty}(a_n/b_n) = 1$?</p>
| TZakrevskiy | 77,314 | <p>Let $a_n=1/n$, $b_n=1/n^2$. What can you say on the ratio?</p>
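<p>A quick numerical illustration of the hint: with $a_n=1/n$ and $b_n=1/n^2$ the ratio equals $n$ and diverges, while a different pair with the same (zero) limits gives a different finite limit:</p>

```python
a = lambda n: 1 / n        # -> 0
b = lambda n: 1 / n ** 2   # -> 0

print([a(n) / b(n) for n in (10, 100, 1000)])  # the ratio equals n, so it diverges

c = lambda n: 2 / n        # also -> 0, but at a different rate
print(c(1000) / a(1000))   # -> 2, not 1
```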
|
2,825,237 | <p>$$\begin{cases}
4x \equiv 14 \pmod m \\
3x \equiv 2 \pmod 5
\end{cases}$$</p>
<p>I want to prove that for $m \in 4\mathbb{Z}$ there are no solutions(1). Moreover, I want to determine all m for which I have solutions(2). </p>
<p>First of all, the second equation is equivalent to $ x \equiv 4$ (mod 5).</p>
<p>If $m$ and $5$ are coprime, the Chinese remainder theorem states that I have a solution, so I pick $m$ and $5$ not coprime. In this case, $m = 5t$ for some $t \in \mathbb{Z}$.
I can write:
$$ 4x = 14 + (5t)c$$
$$ 4x = 4 + 5(2 + tc)$$
$$ 4x \equiv 4 \pmod 5$$
$$ x \equiv 1 \pmod 5$$
However, I also have that $ x \equiv 4 \pmod 5$, so there is no solution. In the proof I did not use the fact that $m$ is a multiple of $4$, so I think the answer for (2) is that we have a solution only for $(m,a) = 1$. Is that right?</p>
| lhf | 589 | <p>If $m$ is a multiple of $4$, then $4x \equiv 14 \bmod m$ implies $0 \equiv 4x \equiv 14 \equiv 2 \bmod 4$, a contradiction. </p>
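<p>The contradiction is quick to confirm by brute force; a small sketch (the helper name is ad hoc, and searching $x$ up to $5m$ covers a full period of both congruences):</p>

```python
def solvable(m):
    """Is there x with 4x ≡ 14 (mod m) and x ≡ 4 (mod 5)?"""
    return any(4 * x % m == 14 % m and x % 5 == 4 for x in range(5 * m))

print([solvable(m) for m in (4, 8, 12, 16, 20, 40)])  # all False: 4 | m kills it
print(solvable(3))  # True: x = 14 works when m = 3
```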
|
136,996 | <p>What is known about $f(k)=\sum_{n=0}^{k-1} \frac{k^n}{n!}$ for large $k$?</p>
<p>Obviously it is a partial sum of the series for $e^k$ -- but this partial sum doesn't reach close to $e^k$ itself because we're cutting off the series right at the largest terms. In the full series, the $(k+i-1)$th term is always at least as large as the $(k-i)$th term for $1\le i\le k$, so $f(k)< e^k/2$. Can we estimate more precisely how much smaller than $e^k$ the function is?</p>
<p>It would look very nice and pleasing if, say, $f(k)\sim e^{k-1}$ for large $k$, but I have no real evidence for that hypothesis.</p>
<p>(Inspired by <a href="https://math.stackexchange.com/questions/136932/">this question</a> and my answer thereto).</p>
| Aryabhata | 1,102 | <p>This appears as problem #96 in Donald J Newman's excellent book: A Problem Seminar. </p>
<p>The problem statement there is:</p>
<blockquote>
<p>Show that</p>
<p>$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^n}{n!} \sim
\frac{e^n}{2}$$</p>
</blockquote>
<p>Where $a_n \sim b_n$ mean $\lim \frac{a_n}{b_n} = 1$.</p>
<p>Thus we can estimate your sum (I have swapped $n$ and $k$) as</p>
<p>$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^{n-1}}{(n-1)!} \sim
\frac{e^n}{2}$$</p>
<p>as by Stirling's formula, $\dfrac{n^n}{n!e^n} \to 0$.</p>
<p>The solution in the book proceeds as follows:</p>
<p>The remainder term for a the Taylor Series of a function $f$ is</p>
<p>$$ R_n(x) = \int_{0}^{x} \frac{(x-t)^n}{n!} f^{n+1}(t) \ \text{d}t$$</p>
<p>which for our purposes, comes out as</p>
<p>$$\int_{0}^{n} \frac{(n-t)^n}{n!} e^t \ \text{d}t$$</p>
<p>Making the substitution $n-t = x$ gives us the integral</p>
<p>$$ \int_{0}^{n} \frac{x^n}{n!} e^{-x} \ \text{d}x$$</p>
<p>In an earlier problem (#94), he shows that</p>
<p>$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$</p>
<p>which using the substitution $n+x = t$ gives</p>
<p>$$ \int_{n}^{\infty} t^n e^{-t} \ \text{d}t \sim \frac{n^n}{e^n} \sqrt{\frac{\pi n}{2}}$$</p>
<p>Using $\int_{0}^{\infty} x^n e^{-x}\ \text{d}x = n!$ and Stirling's formula now gives the result.</p>
<p>To prove that</p>
<p>$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$</p>
<p>He first makes the substitution $x = \sqrt{n} t$ to obtain</p>
<p>$$ \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \ = \sqrt{n} \int_{0}^{\infty} \left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \ \text{d}t$$</p>
<p>Now $$\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \le (1+t)e^{-t}$$ and thus by the dominated convergence theorem,</p>
<p>$$ \lim_{n\to \infty} \frac{1}{\sqrt{n}} \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x $$</p>
<p>$$= \int_{0}^{\infty} \left(\lim_{n \to \infty}\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t}\right) \ \text{d}t$$</p>
<p>$$ = \int_{0}^{\infty} e^{-t^2/2} \ \text{d}t = \sqrt{\frac{\pi}{2}} $$</p>
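<p>The asymptotic is easy to probe numerically; a small sketch (terms computed incrementally, so everything stays within double precision for moderate $n$) shows the ratio creeping up to $1/2$ from below:</p>

```python
import math

def partial_ratio(n):
    """sum_{k=0}^{n-1} n^k / k!, divided by e^n."""
    term, total = 1.0, 0.0
    for k in range(n):
        total += term          # add n^k / k!
        term *= n / (k + 1)    # advance to n^(k+1) / (k+1)!
    return total / math.exp(n)

for n in (10, 100, 400):
    print(n, partial_ratio(n))  # approaches 0.5 from below
```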
|
4,105,854 | <blockquote>
<p>Solve for integers <span class="math-container">$x, y$</span> and <span class="math-container">$z$</span>:</p>
<p><span class="math-container">$x^2 + y^2 = z^3.$</span></p>
</blockquote>
<p>I tried manipulating by adding and subtracting <span class="math-container">$2xy$</span> , but it didn't give me any other information, except the fact that <span class="math-container">$z^3 - 2xy$</span> and <span class="math-container">$z^3+2xy$</span> are perfect squares.</p>
<p>This doesn't give us much information to work on. I don't know if my steps are correct, I do not know how to approach this problem.</p>
<p>Any help would be appreciated.</p>
| RobertTheTutor | 883,326 | <p>First I search for trivial solutions: <span class="math-container">$(0,0,0)$</span> works. Likewise <span class="math-container">$(0,1,1)$</span> and <span class="math-container">$(1,0,1)$</span>.</p>
<p><span class="math-container">$z$</span> has to be non-negative, as its cube is a sum of squares. Likewise if a negative value of <span class="math-container">$x$</span> or <span class="math-container">$y$</span> worked, so would a positive value. So we can stick to positive <span class="math-container">$(x,y,z)$</span> since we already found the solutions with <span class="math-container">$0$</span>'s.</p>
<p>So, next up would be <span class="math-container">$z=2$</span>, so <span class="math-container">$x^2 + y^2 = 8$</span>, which has a pretty immediate solution <span class="math-container">$(2,2,2)$</span>.</p>
<p>Going to <span class="math-container">$z = 3$</span>, we would need two squares that add up to <span class="math-container">$27$</span>, and none work.</p>
<p>We could continue taking values of <span class="math-container">$z$</span> one at a time, or we can pick a constraint and see if we can find a set of solutions. Suppose <span class="math-container">$x = 1$</span>. Are there any squares that are one less than a cube?</p>
<p>Comparing <span class="math-container">$1,4,9,16,25,36,49,64,81,100,121,144,169,196,225...$</span> with</p>
<p><span class="math-container">$1,8,27,64,125,216,343,512,729,1000,1331,1728...$</span>. I don't see any such option, but I do notice that <span class="math-container">$121 + 4 = 125$</span>, so there is a solution <span class="math-container">$(2,11,5)$</span>.</p>
<p>If we consider even vs odd possibilities: if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are both even, so must <span class="math-container">$z$</span> be. However we can't factor all the <span class="math-container">$2$</span>'s out. If <span class="math-container">$(2a, 2b, 2c)$</span> is a solution, <span class="math-container">$(a, b, c)$</span> is not.</p>
<p>It looks like others are finding better patterns in the comments so I'll leave it there.</p>
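<p>The trial search above is easy to automate; a stdlib sketch (ad hoc helper name) enumerating all $(x,y)$ with $0\le x\le y$ for small $z$:</p>

```python
from math import isqrt

def solutions_for(z):
    """All (x, y, z) with 0 <= x <= y and x^2 + y^2 = z^3."""
    cube = z ** 3
    out = []
    for x in range(isqrt(cube // 2) + 1):
        y2 = cube - x * x
        y = isqrt(y2)
        if y * y == y2:
            out.append((x, y, z))
    return out

for z in range(6):
    print(z, solutions_for(z))
# z = 2 recovers (2, 2, 2); z = 5 gives (2, 11, 5) and (5, 10, 5); z = 3 gives nothing.
```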
|
1,582,275 | <p>Suppose that $B = S^{-1}AS$ for some $n \times n$ matrices $A$, $B$, and $S$.</p>
<ol>
<li>Show that if $x \in \ker(B)$ then $Sx \in \ker(A)$.</li>
</ol>
<p>Proof: $B = S^{-1}AS$ implies that $SB = AS$ which implies that $SBx = ASx = 0$, that is $Sx \in \ker(A)$.</p>
<ol start="2">
<li>Show that the linear transformation $T : \ker(B) \to \ker(A), \, x \mapsto Sx$ is an isomorphism. </li>
</ol>
<p>I know how to prove part 1, but I am not sure what to do for part 2. </p>
| Justpassingby | 293,332 | <p>A similar relationship exists with the roles of $A$ and $B$ reversed, and with $S$ replaced with its own inverse:</p>
<p>$$A=SBS^{-1}=(S^{-1})^{-1}BS^{-1}$$</p>
<p>This means that the conclusion from point 1 can be applied to this situation, as well: $S^{-1}$ maps the kernel of $A$ into the kernel of $B.$</p>
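<p>Part 1 can be sanity-checked on a concrete $2\times 2$ example in plain Python; the particular $S$ and $B$ below are an arbitrary illustration, not taken from the question:</p>

```python
def matmul(P, Q):
    """2x2 matrix product, matrices as tuples of rows."""
    return tuple(tuple(sum(P[i][k] * Q[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def matvec(P, v):
    return tuple(sum(P[i][k] * v[k] for k in range(2)) for i in range(2))

S     = ((1, 1), (0, 1))
S_inv = ((1, -1), (0, 1))            # inverse of S
B     = ((0, 0), (0, 1))             # ker(B) is spanned by x = (1, 0)
A     = matmul(matmul(S, B), S_inv)  # chosen so that B = S^{-1} A S

x = (1, 0)                           # x in ker(B)
print(matvec(B, x))                  # (0, 0)
print(matvec(A, matvec(S, x)))       # (0, 0): Sx lies in ker(A)
```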
|
4,063,337 | <p>In an exercise I'm asked the following:</p>
<blockquote>
<p>a) Find a formula for <span class="math-container">$\int (1-x^2)^n dx$</span>, for any <span class="math-container">$n \in \mathbb N$</span>.</p>
<p>b) Prove that, for all <span class="math-container">$n \in \mathbb N$</span>: <span class="math-container">$$\int_0^1(1-x^2)^n dx = \frac{2^{2n}(n!)^2}{(2n + 1)!}$$</span></p>
</blockquote>
<p>I used the binomial theorem in <span class="math-container">$a$</span> and got:</p>
<p><span class="math-container">$$\int (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) (-1)^k \ \frac{x^{2k + 1}}{2k+1} \ \ \ + \ \ C$$</span></p>
<p>and so in part (b) i got:</p>
<p><span class="math-container">$$\int_0^1 (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) \ \frac{(-1)^k}{2k+1}$$</span></p>
<p>I have no clue on how to arrive at the expression that I'm supposed to arrive. How can I solve this?</p>
| Ayoub | 536,671 | <p>I assume integration is done between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>.</p>
<ol>
<li>For <span class="math-container">$n\in\mathbb{N}$</span>, let's have <span class="math-container">$I_n=\int_{0}^1(1-x^2)^{n}\,dx$</span>.</li>
</ol>
<p>Then for any <span class="math-container">$n\in\mathbb{N}$</span>,</p>
<p><span class="math-container">$$I_{n+1}=\int_{0}^1 (1-x^2)(1-x^2)^{n}\,dx=I_n-\int_{0}^1 x^2(1-x^2)^{n}\,dx$$</span></p>
<p>Integratin par parts :</p>
<p><span class="math-container">$$\int_{0}^1 x^2(1-x^2)^{n}\,dx=\int_{0}^1 x\times x(1-x^2)^{n}\,dx=[x\times\frac{-(1-x^2)^{n+1}}{2n+2}]_0^1-\int_{0}^{1}\frac{-1}{2n+2}(1-x^2)^{n+1}dx$$</span></p>
<p>Therefore <span class="math-container">$$\int_{0}^1 x^2(1-x^2)^{n}\,dx=\frac{1}{2(n+1)}I_{n+1}$$</span></p>
<p>Going back to the first equality :</p>
<p><span class="math-container">$$I_{n+1}=I_n-\frac{1}{2n+2}I_{n+1}$$</span></p>
<p>Reordering everything :</p>
<p><span class="math-container">$$I_{n+1}=\frac{2(n+1)}{2n+3}I_n$$</span></p>
<ol start="2">
<li>Since <span class="math-container">$I_0=1$</span>, the former equality implies <span class="math-container">$I_n\neq 0$</span> for all <span class="math-container">$n\in\mathbb{N}$</span>. Let <span class="math-container">$n$</span> be a non-negative integer.</li>
</ol>
<p>Then</p>
<p><span class="math-container">$$\frac{I_n}{I_0}=\prod_{d=0}^{n-1}\frac{I_{d+1}}{I_d}=\prod_{d=0}^{n-1}\frac{2d+2}{2d+3}=\prod_{d=0}^{n-1}\frac{2d+2}{2d+3}\cdot\frac{2d+2}{2d+2}=\frac{(2^n n!)^2}{(2n+1)!}=\frac{2^{2n}(n!)^2}{(2n+1)!}$$</span></p>
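<p>Both the binomial sum from the question and the closed form can be compared exactly with rational arithmetic; a quick sketch (the helper names are ad hoc):</p>

```python
from fractions import Fraction
from math import comb, factorial

def binomial_sum(n):
    # the OP's expansion: sum_{k=0}^n C(n,k) (-1)^k / (2k+1)
    return sum(Fraction(comb(n, k) * (-1) ** k, 2 * k + 1) for k in range(n + 1))

def closed_form(n):
    return Fraction(2 ** (2 * n) * factorial(n) ** 2, factorial(2 * n + 1))

print(all(binomial_sum(n) == closed_form(n) for n in range(15)))  # True
```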
|
4,063,337 | <p>In an exercise I'm asked the following:</p>
<blockquote>
<p>a) Find a formula for <span class="math-container">$\int (1-x^2)^n dx$</span>, for any <span class="math-container">$n \in \mathbb N$</span>.</p>
<p>b) Prove that, for all <span class="math-container">$n \in \mathbb N$</span>: <span class="math-container">$$\int_0^1(1-x^2)^n dx = \frac{2^{2n}(n!)^2}{(2n + 1)!}$$</span></p>
</blockquote>
<p>I used the binomial theorem in <span class="math-container">$a$</span> and got:</p>
<p><span class="math-container">$$\int (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) (-1)^k \ \frac{x^{2k + 1}}{2k+1} \ \ \ + \ \ C$$</span></p>
<p>and so in part (b) i got:</p>
<p><span class="math-container">$$\int_0^1 (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) \ \frac{(-1)^k}{2k+1}$$</span></p>
<p>I have no clue on how to arrive at the expression that I'm supposed to arrive. How can I solve this?</p>
| Henry Lee | 541,220 | <p><span class="math-container">$$I(n)=\int_0^1(1-x^2)^ndx$$</span>
<span class="math-container">$u=x^2\Rightarrow dx=\frac{du}{2x}=\frac{du}{2u^{1/2}}$</span> and so:
<span class="math-container">$$I(n)=\frac12\int_0^1(1-u)^nu^{-1/2}du=\frac12 B\left(\frac12,n+1\right)=\frac{\Gamma\left(\frac12\right)\Gamma(n+1)}{2\Gamma\left(n+\frac32\right)}$$</span>
Now we can use the identity that:
<span class="math-container">$$\Gamma\left(\frac12+(n+1)\right)=\frac{(2n+2)!}{4^{n+1}(n+1)!}\sqrt{\pi}$$</span>
<span class="math-container">$$\Gamma\left(\frac12\right)=\sqrt{\pi}$$</span>
and so:
<span class="math-container">$$\frac{\Gamma(1/2)\Gamma(n+1)}{2\Gamma(n+3/2)}=\frac{4^{n+1}(n+1)!n!}{2(2n+2)!}=\frac{2^{2n+1}(n+1)(n!)^2}{(2(n+1))!}$$</span>
now we can use the fact that:
<span class="math-container">$$(2n+2)!=(2n+2)(2n+1)!=2(n+1)(2n+1)!$$</span>
and so we arrive at:
<span class="math-container">$$\frac{2^{2n}(n!)^2}{(2n+1)!}$$</span></p>
|
4,063,337 | <p>In an exercise I'm asked the following:</p>
<blockquote>
<p>a) Find a formula for <span class="math-container">$\int (1-x^2)^n dx$</span>, for any <span class="math-container">$n \in \mathbb N$</span>.</p>
<p>b) Prove that, for all <span class="math-container">$n \in \mathbb N$</span>: <span class="math-container">$$\int_0^1(1-x^2)^n dx = \frac{2^{2n}(n!)^2}{(2n + 1)!}$$</span></p>
</blockquote>
<p>I used the binomial theorem in <span class="math-container">$a$</span> and got:</p>
<p><span class="math-container">$$\int (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) (-1)^k \ \frac{x^{2k + 1}}{2k+1} \ \ \ + \ \ C$$</span></p>
<p>and so in part (b) i got:</p>
<p><span class="math-container">$$\int_0^1 (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) \ \frac{(-1)^k}{2k+1}$$</span></p>
<p>I have no clue on how to arrive at the expression that I'm supposed to arrive. How can I solve this?</p>
| Trevor Gunn | 437,127 | <p>My idea here is to use the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">Beta function</a>:</p>
<p><span class="math-container">$$ \int_0^1 t^{m}(1 - t)^n \;dt = \frac{\Gamma(m+1)\Gamma(n+1)}{\Gamma(m+n+2)}. $$</span></p>
<p>Where the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow noreferrer">Gamma function</a> satisfies <span class="math-container">$\Gamma(n + 1) = n!$</span> if <span class="math-container">$n$</span> is an integer, and also <span class="math-container">$\Gamma$</span> satisfies the recurrence <span class="math-container">$\Gamma(n + 1) = n\Gamma(n)$</span>.</p>
<p>So if we substitute <span class="math-container">$t = x^2$</span>, <span class="math-container">$dt = 2x\; dx \iff \frac{1}{2\sqrt{t}}\;dt=dx$</span> then</p>
<p><span class="math-container">\begin{align}
\int_0^1 (1 - x^2)^n \; dx &= \frac{1}{2} \int_0^1 t^{-1/2} (1 - t)^n \;dt \\
&= \frac{1}{2} \frac{\Gamma(\frac12)n!}{\Gamma(n+\frac32)}. \tag{1}
\end{align}</span></p>
<p>And now using the recurrence <span class="math-container">$\Gamma(n + 1) = n\Gamma(n)$</span>, we find that</p>
<p><span class="math-container">\begin{align}
\Gamma(n+\tfrac32) &= (n+\tfrac12)\Gamma(n + \tfrac12) \\
&= (n+\tfrac12)(n - \tfrac12) \Gamma(n - \tfrac12) \\
&\hspace{2.5mm}\vdots \\
&= (n+\tfrac12)(n - \tfrac12) \cdots \tfrac{5}{2} \cdot \tfrac{3}{2} \cdot \tfrac12 \cdot \Gamma(\tfrac12) \\
&= \frac{1}{2^n} (2n + 1)(2n - 1) \cdots 5 \cdot 3 \cdot 1 \cdot \Gamma(\tfrac12) \tag{2}
\end{align}</span></p>
<p>The product <span class="math-container">$(2n + 1)(2n - 1) \cdots 5 \cdot 3 \cdot 1$</span> is also known as the <a href="https://en.wikipedia.org/wiki/Double_factorial" rel="nofollow noreferrer">double factorial</a> <span class="math-container">$(2n + 1)!!$</span>. It is well-known that we can rewrite it in terms of <span class="math-container">$(2n + 1)!$</span> as</p>
<p><span class="math-container">\begin{align}
(2n + 1)(2n - 1) \cdots 5 \cdot 3 \cdot 1 &= \frac{(2n + 1)(2n)(2n - 1)(2n - 2) \cdots 3 \cdot 2 \cdot 1}{(2n)(2n-2) \cdots 2} \\
&= \frac{(2n+1)!}{2^n \cdot n!}. \tag{3}
\end{align}</span></p>
<p>So now finally, if you combine <span class="math-container">$(1), (2), (3)$</span>, you get the result you want.</p>
|
638,164 | <p>Let $\textbf{F}\left ( x, y \right )=\left ( -\frac y{x^2+y^2},\frac x{x^2+y^2} \right )$ be a vector field in $\mathbb{R}^2-\left \{ \textbf{0} \right \}$.</p>
<p>I know that the potential function of $\textbf{F}$ on $x>0$ is $\arctan \left ( \frac yx \right )$.</p>
<p>But I want to know the potential function of $\textbf{F}$ on $\mathbb{R}^2-\left \{ \textbf{0} \right \}$. Does it exist?</p>
| Joe Z. | 24,644 | <p>As the person from Math Overflow hinted, your question is best viewed in the paradigm of the base $n$ expansion of $1/(n-1)^2$.</p>
<p>Consider the following sum: $\displaystyle \sum_{k=1}^\infty \frac{k}{n^{k+1}}$. This is equal to $\displaystyle \frac{1}{n^2} + \frac{2}{n^3} + \frac{3}{n^4} + \frac{4}{n^5} + \cdots$ onward to infinity, which is in turn just equal to $\displaystyle \frac{n}{(n-1)^2}$.</p>
<p>The number $1n^{n-3} + 2n^{n-4} + 3n^{n-5} + \cdots + (n-3)n^1 + (n-1)$ (which is $12345679$ in base $10$) is simply this expansion multiplied by $n^{n-2}$, to produce $\displaystyle \frac{n^{n-1}}{(n-1)^2}$, and then truncating the fractional portion. So, since we're just shifting decimal places, let's look at what happens in just the case of $\displaystyle \frac{1}{(n-1)^2}$ because it's easier to work with.</p>
<p>In the case of $\displaystyle \frac{1}{(n-1)^2}$, the expansion in base $n$ is equal to $$ \frac{1}{n^2} + \frac{2}{n^3} + \frac{3}{n^4} + \frac{4}{n^5} + \cdots + \frac{n-1}{n^{n}} + \frac{n}{n^{n+1}} + \cdots$$</p>
<p>Now, here's an example of that in action in base 10:</p>
<pre><code>0.0 1 2 3 4 5 6 7 9 0 1 2 3 ...
0 4 8 1 2 ...
1 5 9 1 3 ...
2 6 1 0 ...
3 7 1 1 ...
</code></pre>
<p>You'll notice that the tens digit of 10 butts into the place value of the ones digit of 9, causing a carry over and increasing the value of 8 in that place to 9.</p>
<p>For multiples of $\displaystyle \frac{1}{(n-1)^2}$ = $\displaystyle \frac{k}{(n-1)^2}$ , the same deal applies only in multiples. For $k = 2$, for example, $\displaystyle \frac{2}{(n-1)^2}$ is just equal to:</p>
<p>$$ \frac{2}{n^2} + \frac{4}{n^3} + \frac{6}{n^4} + \frac{8}{n^5} + \cdots + \frac{2(n-1)}{n^{n}} + \frac{2n}{n^{n+1}} + \cdots$$</p>
<p>All that changes is that the numbers being added as place values go into two digits more quickly. </p>
<pre><code>0.0 2 4 6 9 1 3 5 8 0 2 4 6 ...
0 8 1 6 2 4 ...
2 1 0 1 8 2 6 ...
4 1 2 2 0 ...
6 1 4 2 2 ...
</code></pre>
<p>Notice that the digit increases by $2$ each time now, except when the tens digit of the next number increases by $1$, in which case it increases by $3$.</p>
<p>Also notice that there is no $7 (= 9 - 2)$ in this expansion, just as there was no $8(= 9 - 1)$ in the first one. This is because when you try to get $7$ (e.g. with the $6$ in $16$ + the $1$ in $18$), the $2$ in $20$ butts in and forces the $7$ to carry again, increasing it to $8$.</p>
<p>This basically means that whenever the ones digit would become a number that's $7$ or greater, it increases by $3$ rather than $2$. But that just means that we can treat $7$ as skipped entirely, and in fact we're actually always just moving $2$ steps forward in the cyclic sequence $0, 1, 2, 3, 4, 5, 6, 8, 9$.</p>
<p>And this applies in general, for $k/(n-1)^2$, we're moving $k$ steps forward in the sequence $[0, 1, 2, 3, 4, 5, 6, \cdots, n-1]$ with $(n - 1 - k)$ removed. This "trick" even works with numbers that aren't relatively prime to $n - 1$:</p>
<p>$$ 3 / 81 = 0.037037037037037... = \text{3 steps forward in } [0, 1, 2, 3, 4, 5, 7, 8, 9] $$
$$ 9 / 81 = 0.111111111111111... = \text{9 steps forward in } [1, 2, 3, 4, 5, 6, 7, 8, 9] $$</p>
<p>In base 7:</p>
<p>$$ 2_7 / 51_7 = 0.025025025025025..._7 = \text{2 steps forward in } [0, 1, 2, 3, 5, 6] $$
$$ 3_7 / 51_7 = 0.040404040404040..._7 = \text{3 steps forward in } [0, 1, 2, 4,
5, 6] $$</p>
<p>Keep in mind, though, that as soon as you get through the first set of $n - 1$, you go back to one step but everything in the original number sequence shifts to the right by one:</p>
<p>$$ 9 / 81 = 0.111111111111111... = \text{9 steps forward in } [1, 2, 3, 4, 5, 6, 7, 8, 9] $$
$$ 10 / 81 = 0.123456790123456... = \text{1 step forward in } [1, 2, 3, 4, 5, 6, 7, 9, 0] $$</p>
<p>$$ 18 / 81 = 0.222222222222222... = \text{9 steps forward in } [2, 3, 4, 5, 6, 7, 8, 9, 1] $$
$$ 19 / 81 = 0.234567901234567... = \text{1 step forward in } [2, 3, 4, 5, 6, 7, 9, 0, 1] $$</p>
<p>So the fact that all the digits except for one we removed are represented is simply a consequence of the fact that the order of $k$ in the cyclic group $\mathbb{Z}_{n-1}$ is still $(n-1)$ when $k$ is relatively prime to $(n-1)$.</p>
<p>This doesn't quite constitute a rigorous proof (mostly because I'm too lazy to formalize it and it took quite a while for me to wrap my head around what I was saying myself), but I think you have enough of an idea now that you can figure it out yourself.</p>
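<p>The digit pattern described above is easy to verify by long division; a stdlib sketch (the helper name is ad hoc) checking that for each $k$ coprime to $9$ the nine-digit period of $k/81$ uses every digit except $9-k$:</p>

```python
def period_digits(k, denom=81, length=9):
    """First `length` decimal digits of k/denom, via long division."""
    digits, r = [], k
    for _ in range(length):
        r *= 10
        digits.append(r // denom)
        r %= denom
    return digits

for k in (1, 2, 4, 5, 7, 8):
    d = period_digits(k)
    print(k, d, set(range(10)) - set(d))  # the one missing digit is 9 - k
```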
<hr>
<p>As an additional note, it is in fact true that the multiples of $123456789$ (note! not $12345679$ which I explained above) that are relatively prime to $9$ also result in a permutation of those digits (including 0 when you reach 10 digits):</p>
<p>$$1 \times 123456789 = 123456789$$</p>
<p>$$2 \times 123456789 = 246913578$$</p>
<p>$$4 \times 123456789 = 493827156$$</p>
<p>$$5 \times 123456789 = 617283945$$</p>
<p>$$7 \times 123456789 = 864197523$$</p>
<p>$$8 \times 123456789 = 987654312$$</p>
<p>$$10 \times 123456789 = 1234567890$$</p>
<p>$$etc.$$</p>
<p>Well, up until 19, anyway:</p>
<p>$$19 \times 123456789 = 2345678991$$</p>
<hr>
<p>You also asked for a proof that if the proposition holds for $k < n$, it also holds for $n \le k \le n^2$. This is actually just a simple induction, if you've already proven the $k < n$ cases.</p>
<p>Suppose that we have a sequence of digits $a b c d \ldots$ that contains all distinct digits except for $n - k$ in the sequence described above (i.e. the digit increases by $k$ each time now, except when it would exceed $n - k$, in which case it increases by $k+1$.).</p>
<p>Then increasing each digit by $1$ (as adding $n$ to the numerator will do) will mean that all the digits are still distinct, except for one that carries over to zero, increasing the previous digit by $1$. But the one immediately before it became $n - k$ upon increasing by 1, so increasing it to $n - k + 1$ means that it's still not the same as any other digits (because the one that <em>was</em> $n - k + 1$ before has since increased to $n - k + 2$). And the resulting sequence also has the property that "the digit increases by $k$ each time now, except when it would exceed $n - k$, in which case it increases by $k+1$", so the induction can continue going on.</p>
|
2,725,831 | <blockquote>
<p>If we define $$f(x)=\left\lfloor \frac {x^{2x^4}}{x^{x^2}+3}\right\rfloor$$ and we have to find unit digit of $f(10)$</p>
</blockquote>
<p>I tried approximation, factorization, and substitutions like $x^2=u$, but to no avail. Moreover, the stacked exponents are confusing the hell out of me. Can someone please provide some hints?</p>
| Ross Millikan | 1,827 | <p>First substitute in $10$ for $x$
$$f(10)=\left\lfloor \frac {10^{2\cdot 10^4}}{10^{10^2}+3}\right\rfloor\\
=\left\lfloor \frac {10^{20000}}{10^{100}+3}\right\rfloor$$
Now ask <a href="http://www.wolframalpha.com/input/?i=10%5E20000%2F(10%5E100%2B3)%20mod%2010" rel="nofollow noreferrer">Alpha</a>
<a href="https://i.stack.imgur.com/vY8TC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vY8TC.png" alt="enter image description here"></a><br>
or ask Python<br>
<a href="https://i.stack.imgur.com/hnSOR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hnSOR.png" alt="enter image description here"></a><br>
and the answer is $3$ </p>
<p>Added in response to the comment: You can do long division in base $10^{100}$. Unfortunately the numerator still has $200$ digits, so it will be a long haul. The denominator is a simple $13$. I suspect you are intended to write
$$f(10)=\left\lfloor \frac {10^{2\cdot 10^4}}{10^{10^2}+3}\right\rfloor\\
=\left\lfloor 10^{19900}\frac {10^{100}}{10^{100}+3}\right\rfloor\\
=\left\lfloor 10^{19900}\frac {1}{1+3\cdot 10^{-100}}\right\rfloor\\
=\left\lfloor 10^{19900}(1-3\cdot 10^{-100}+(3\cdot 10^{-100})^2-(3\cdot 10^{-100})^3+\ldots )\right\rfloor$$
and note that all the terms with exponents less than $199$ get too many zeros from the $10^{19900}$ to matter, then evaluate the term with exponent $199$. Then note that the term with exponent $200$ doesn't carry and is positive, so you only care about the term with exponent $199$. We have $-(3^{199})\equiv -7 \equiv 3 \pmod {10}$ so the answer is $3$.</p>
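<p>The Alpha and Python screenshots above can be reproduced directly with Python's arbitrary-precision integers; the computation is fast even though the numerator has over $20000$ digits:</p>

```python
# Unit digit of floor(10^20000 / (10^100 + 3))
q = 10 ** 20000 // (10 ** 100 + 3)
print(q % 10)              # 3

# Consistent with the series argument: the digit comes from -(3^199) mod 10.
print((-(3 ** 199)) % 10)  # 3
```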
|
151,076 | <p>If A and B are partially-ordered-sets, such that there are injective order-preserving maps from A to B and from B to A, is there necessarily an order-preserving bijection between A and B ?</p>
| Martin Brandenburg | 1,650 | <p>One needs additional assumptions:</p>
<p>If $f : A \to B$ is an order-embedding of partial orders with the property that no element of $f(A)$ is comparable with an element of $B \setminus f(A)$, and $g : B \to A$ is a map with the same properties, then $A$ and $B$ are isomorphic. The usual proof of Schröder-Bernstein goes through.</p>
<p>Is this a new observation? Probably not. Any references are appreciated! </p>
|
2,933,753 | <p>Given two finite groups <span class="math-container">$G, H$</span>, we are going to say that <span class="math-container">$G<_oH$</span> if either</p>
<p>a. <span class="math-container">$|G|<|H|$</span></p>
<p>or </p>
<p>b. <span class="math-container">$|G|=|H|$</span> and <span class="math-container">$\displaystyle\sum_{g\in G} o(g)<\sum_{h\in H} o(h)$</span>,</p>
<p>where <span class="math-container">$o(g)$</span> denotes the order of the element <span class="math-container">$g$</span> (has this ordering a name?).</p>
<p>What is the smallest example (in this ordering) of a pair of nonisomorphic groups such that <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are incomparable, i.e., such that they have same cardinal and same sum of orders of elements?</p>
| Olexandr Konovalov | 70,316 | <p>To give an answer which uses open source software - one could do this in <a href="https://gap-system.org" rel="nofollow noreferrer">GAP</a>.</p>
<p>First, the GAP code based on the one from the answer by @Travis. It looks quite similar.</p>
<pre><code>gap> sumOfOrders := G -> Sum(List(G,Order));
function( G ) ... end
gap> sumOfOrdersList := n -> SortedList(List(AllSmallGroups(n),sumOfOrders));
function( n ) ... end
gap> List([1..16],sumOfOrdersList);
[ [ 1 ], [ 3 ], [ 7 ], [ 7, 11 ], [ 21 ], [ 13, 21 ], [ 43 ],
[ 15, 19, 23, 27, 43 ], [ 25, 61 ], [ 31, 63 ], [ 111 ],
[ 31, 33, 45, 49, 77 ], [ 157 ], [ 57, 129 ], [ 147 ],
[ 31, 39, 47, 47, 47, 55, 55, 55, 59, 67, 75, 87, 87, 171 ] ]
</code></pre>
<p>You can see that the last list contains 47 three times. Now let's find those three groups:</p>
<pre><code>gap> l:=AllSmallGroups(16,g->sumOfOrders(g)=47);
[ <pc group of size 16 with 4 generators>,
<pc group of size 16 with 4 generators>,
<pc group of size 16 with 4 generators> ]
</code></pre>
<p>and get their IDs:</p>
<pre><code>gap> List(l,IdGroup);
[ [ 16, 3 ], [ 16, 10 ], [ 16, 13 ] ]
</code></pre>
<p>Some notion about their structure can be obtained by <code>StructureDescription</code> (which however does not define the group up to isomorphism - see <a href="https://www.gap-system.org/Faq/faq.html#7.12" rel="nofollow noreferrer">here</a>):</p>
<pre><code>gap> List(l,StructureDescription);
[ "(C4 x C2) : C2", "C4 x C2 x C2", "(C4 x C2) : C2" ]
</code></pre>
<p>If I were less lucky and had not managed to find an example with such quick exploration, I would likely write some code for a more systematic search, like the code below, following the guidelines from "Small groups search" in the <a href="http://alex-konovalov.github.io/gap-lesson/05-small-groups/" rel="nofollow noreferrer">GAP Software Carpentry lesson</a>:</p>
<pre><code>TestOneOrder:=function(n)
# find the smallest example among the groups of order n
local s,i,m,d,x;
# Calculate lists of sums of element orders.
# Avoid using AllSmallGroups(n) which potentially may be very large
s := List([1..NrSmallGroups(n)],i->Sum(List(SmallGroup(n,i),Order)));
if Length(Set(s))=NrSmallGroups(n) then
# Sum of element orders uniquely defines each group
return fail;
else
# There are duplicates - find them first
d := Filtered( Collected(s), x -> x[2] > 1 );
# Find the minimal possible value of the sum of element orders
m := Minimum( List( d, x-> x[1] ) );
# Find positions of m in the list
# Return the list of group IDs
return List( Positions(s,m), x -> [n,x] );
fi;
end;
FindSmallestPair:=function(n)
# check all groups of order up to n
local i, res;
for i in [1..n] do
# \r at the end of the print returns to the beginning of the line
Print("Checking groups of order ", i, "\r");
res := TestOneOrder(i);
if res<>fail then
# print new line before displaying the output
Print("\n");
return res;
fi;
od;
return fail;
end;
</code></pre>
<p>You can find this code on GitHub <a href="https://gist.github.com/alex-konovalov/23bc8398349b64c67de85542bedd307d" rel="nofollow noreferrer">here</a>. Reading it into GAP, one could obtain the same result as follows:</p>
<pre><code>gap> FindSmallestPair(20);
Checking groups of order 16
[ [ 16, 3 ], [ 16, 10 ], [ 16, 13 ] ]
</code></pre>
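<p>As a cross-check that does not need GAP: for the abelian group in the list, C4 x C2 x C2, element orders are lcms of the component orders, so the sum 47 can be recomputed in a few lines of Python (my own sketch, not part of the original answer):</p>

```python
from itertools import product
from math import gcd, lcm

# order of the residue a in the cyclic group Z_n
def cyclic_order(n, a):
    return n // gcd(n, a)

# sum of element orders over C4 x C2 x C2; should match the value 47 above
total = sum(
    lcm(cyclic_order(4, a), cyclic_order(2, b), cyclic_order(2, c))
    for a, b, c in product(range(4), range(2), range(2))
)
print(total)  # → 47
```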
|
3,163,317 | <p>I am given the question "Find the equation of the line in standard form with slope <span class="math-container">$m = -\frac{1}{3}$</span> and passing through the point <span class="math-container">$(1, \frac{1}{3})$</span>"</p>
<p>The solution is provided as <span class="math-container">$x + 3y = 2$</span></p>
<p>I arrived at <span class="math-container">$3y + 3x =4$</span></p>
<p>Here is my working:</p>
<p>Point-slope formula:
<span class="math-container">$y - y_1 = m(x - x_1)$</span></p>
<p>My given slope m is <span class="math-container">$-\frac{1}{3}$</span> and point is <span class="math-container">$(1, \frac{1}{3})$</span></p>
<p>So:</p>
<p><span class="math-container">$y - \frac{1}{3} = -\frac{1}{3}(x - 1)$</span></p>
<p>I then multiplied both sides by 3 to get rid of the fractions:</p>
<p><span class="math-container">$3(y - \frac{1}{3}) = 3(-\frac{1}{3}(x - 1))$</span></p>
<p><span class="math-container">$3y - 1 = -3(x - 1)$</span></p>
<p><span class="math-container">$3y - 1 = -3x + 3$</span></p>
<p><span class="math-container">$3y = -3x + 4$</span></p>
<p><span class="math-container">$3y + 3x = 4$</span></p>
<p>Where did I go wrong and how can I arrive at <span class="math-container">$x + 3y = 2$</span>?</p>
| Alessio Del Vigna | 639,470 | <p>When you multiply both sides by <span class="math-container">$3$</span>, you made a mistake in the RHS: <span class="math-container">$3\cdot\left(-\frac{1}{3}(x-1)\right) = -(x-1)$</span>, not <span class="math-container">$-3(x-1)$</span>. With that fixed, <span class="math-container">$3y - 1 = -x + 1$</span>, so <span class="math-container">$3y = -x + 2$</span>, i.e. <span class="math-container">$x + 3y = 2$</span>.</p>
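<p>A quick exact-arithmetic check (my addition; the helper name is made up) that the book's line x + 3y = 2 really does pass through (1, 1/3) with slope -1/3:</p>

```python
from fractions import Fraction

# x + 3y = 2 rearranged as y = (2 - x)/3
def y_on_line(x):
    return Fraction(2 - x, 3)

print(y_on_line(1))                 # → 1/3, so (1, 1/3) is on the line
print(y_on_line(5) - y_on_line(4))  # → -1/3, the slope per unit step in x
```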
|
1,513,056 | <p>Why is the limit of an <strong>integer</strong>-valued function $f(x)$ also an integer? For example, a function that's defined on the interval $[a, \infty)$ and whose limit is $L$.</p>
| Hippalectryon | 150,347 | <p>If the limit of $f$ is $L$, then a direct consequence is that for $x$ big enough, $f$ will be as close to its limit as you want, which is to say that $\epsilon=|f(x)-L|$ becomes as small as you want.</p>
<p>Hint : what happens if $\epsilon<\min(L-\lfloor L\rfloor,\lfloor L\rfloor+1-L)$ ?</p>
|
477,483 | <p>I'm trying to solve this question, but I haven't managed to find an answer...</p>
<p>Does there exist a topology with cardinality $\alpha$, for every $\alpha \ge 1$?</p>
| bubba | 31,744 | <blockquote>
<p>is there any trivial quad tessellation that minimizes distortion in this case</p>
</blockquote>
<p>As indicated in a comment, a cube is a tesselation of a sphere using squares as the tiles. Not a very interesting/useful tesselation, from a graphics point of view, though, unless the spherical object is very small or you're trying very hard to minimize the number of polygons used.</p>
<p>You can get a low-distortion quad tesselation by "inflating" a cube to get 6 patches that lie on the sphere. Or, saying it another way, you project the faces of the cube radially onto the sphere. One of the faces of the cube can be represented by the map $(u,v) \mapsto (u,v,1)$ for $-1 \le u \le 1$ and $-1 \le v \le 1$; after the radial projection, you get the parametrization
$$
\mathbf x(u,v) = \left(\frac{u}{\sqrt{u^2 + v^2 +1}},
\frac{v}{\sqrt{u^2 + v^2 +1}},
\frac{1}{\sqrt{u^2 + v^2 +1}} \right)
$$
for a patch that covers one sixth of the sphere. You tesselate this patch in the obvious way, by making constant-sized steps in $u$ and $v$. The other five faces can be handled similarly, or can be obtained by rotations.</p>
<p>Here's a picture of one patch:</p>
<p><img src="https://i.stack.imgur.com/epBkx.jpg" alt="sphere patch"></p>
<p>I don't know if the distortion is minimal, but it seems fairly low, to me.</p>
<p>Patches of this type can also be written in NURBS form (non-uniform rational b-spline), and you may be able to hand these to your graphics subsystem and have it do the tesselation for you.</p>
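<p>A small numeric sanity check of the parametrization (my addition): every image point should have unit norm, i.e. lie on the sphere.</p>

```python
import math

# radial projection of the cube face z = 1 onto the unit sphere,
# as parametrized in the answer
def patch(u, v):
    r = math.sqrt(u * u + v * v + 1)
    return (u / r, v / r, 1 / r)

norms = [sum(c * c for c in patch(u, v))
         for u, v in [(-1, -1), (0, 0.5), (0.3, -0.7), (1, 1)]]
print(all(abs(n - 1) < 1e-12 for n in norms))  # → True
```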
|
791,020 | <p>From my textbook. </p>
<p>$$\sum\limits_{k=0}^\infty (-\frac{1}{5})^k$$</p>
<p>My work:</p>
<p>So a constant greater than or equal to $1$ raised to ∞ is ∞.</p>
<p>A number $n$ with $0<n<1$ raised to $\infty$ is $0$. So when taking the limit of the terms you get $0$, but when writing the problem a different way, $(-1)^k/(5^k)$, it seems like an alternating series. Can someone help me figure this out?</p>
| Marc van Leeuwen | 18,880 | <p>The basic binomial formula $(1+x)^\alpha=\sum_k\binom\alpha kx^k$ is valid as a formal power series in $x$, or for concrete numbers$~x$ with $|x|<1$ (supposing that like is the case here, $\alpha$ is not a natural number). This cannot be used directly in the example, but you could bring the expression into a similar form by extracting a factor $-2$ from the expression $x-2$, writing $(x-2)^\alpha=(-2(1-\frac x2))^\alpha=(-2)^\alpha(1-\frac x2)^\alpha$ for $\alpha=\frac12$.</p>
<p>But this brings to the front a question that one should have considered at the beginning: what is the value $(-2)^\frac12$ that your formula gives at $x=0$, and will have to be the constant term of any reasonable expansion? The value $\sqrt{-2}$ is not very well defined; it does not exist as real number, and as complex number it it could be either $\def\i{\mathbf i}\sqrt2\i$ or $-\sqrt2\i$. So a formula with real numbers valid near $x=0$ is just not possible; if you want a formula that describes one of the complex square roots of $x-2$ near $x=0$ you can choose $r$ to be one of the mentioned square roots of $-2$, and write
$$(x-2)^\frac12=r\sqrt{1-\frac x2}
=r\sum_k\binom{1/2}k\Bigl(-\frac12\Bigr)^{k}x^k.
$$
This series (which has purely imaginary coefficients due to the factor$~r$) converges for $|x|<2$.</p>
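<p>A numeric sketch of that last expansion (my addition): summing the binomial series via a coefficient recurrence and squaring the result should recover $x-2$ for $|x|<2$.</p>

```python
import cmath

r = cmath.sqrt(-2)  # one of the two complex square roots of -2

def sqrt_x_minus_2(x, terms=60):
    # binomial series (1 + t)^(1/2) at t = -x/2; binom(1/2, k) by recurrence
    t = -x / 2
    coef, s = 1.0, 0.0
    for k in range(terms):
        s += coef * t**k
        coef *= (0.5 - k) / (k + 1)
    return r * s

x = 0.7
w = sqrt_x_minus_2(x)
print(abs(w * w - (x - 2)) < 1e-9)  # → True: w squares to x - 2
```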
|
2,180,102 | <p>If you have 3 labeled points on a surface of a paper. Like </p>
<pre><code> 1
2 3
</code></pre>
<p>This makes a perfect equilateral triangle.</p>
<p>From this perspective I can say that the camera is on top of the paper looking down. We can say the camera is at coordinate $(0,0,100)$, which is a 0-degree rotation about the Z axis and a 90-degree rotation about the Y axis.</p>
<p>Then, I move the camera to some arbitrary spot. Now, the points are like</p>
<pre><code> 1
2 3
</code></pre>
<p>This looks like the camera is farther back from the paper and was lowered, say at location $(0, -100, 50)$, which is about a -90-degree rotation about the Z axis and a 45-degree rotation about the Y axis.</p>
<p>So my question is basically: given $(x_1,y_1), (x_2,y_2), (x_3,y_3)$, is there some formula that can take these arbitrary points, compare them with the original 3, and determine how much of an X, Y, Z rotation the camera has undergone?</p>
<p>I can also rotate on angles like this. For example, I can take the second example from above, then I can rotate my head clockwise, making the numbers flip like</p>
<pre><code>2
1
3
</code></pre>
<p>I think getting a normal vector from the center might be a better thing to find.</p>
| Adren | 405,819 | <p><strong>Remark</strong></p>
<p>I think that the multiplicativeness of $\phi$ should rather be considered as a <em>consequence</em> of the fact that the groups of units of $\mathbb{Z}/n\mathbb{Z}$ and $\displaystyle{\prod_{i=1}^j\left(\mathbb{Z}/p_i^{k_i}\mathbb{Z}\right)}$ are isomorphic and hence equipotent.</p>
<p><strong>Detail</strong></p>
<p>Let $u : A\to B$ be a ring isomorphism. For all $x\in A$, the invertibility of $x$ means the existence of $x'\in A$ such that $xx'=1_A$, which is equivalent to $u(xx')=1_B$, that is to $u(x)\,u(x')=1_B$, or finally to the invertibility of $u(x)$.</p>
|
3,708,243 | <p>Equation: </p>
<blockquote>
<p><span class="math-container">$x^2-x-6=0$</span></p>
</blockquote>
<p>The two roots of this equation are <span class="math-container">$3$</span> and <span class="math-container">$-2$</span>. When writing the answer can I also write it as <span class="math-container">$-2, 3$</span> or do I have to maintain a certain order?</p>
| swordlordswamplord | 593,999 | <p>For min z I got 5 where
x1=1, x2=2, and x5=2 </p>
<p>I forgot what the relationship between min and max is when finding solutions to an LP... I know the question wants to maximize z, but the first constraint is tricky because it wants a negative number</p>
|
26,259 | <p>I've been reading <em>generatingfunctionology</em> by Herbert S. Wilf (you can find a copy of the second edition on the author's page <a href="http://www.math.upenn.edu/~wilf/DownldGF.html" rel="nofollow">here</a>).</p>
<p>On page 33 he does something I find weird. He wants to shuffle the index forward and does so like this:
\begin{align*}
(f_{n+1})_{n\in N_0} &= \frac{(f(X)-f(0))}{X}\\
(f_{n})_{n\in N_0} &= \sum_{n\in N_0} f_nx^n
\end{align*}</p>
<p>Why is this allowed? Namely, $X$ has a noninvertible constant term ($0$), so how is this division (multiplication by the reciprocal) defined within the ring of formal power series?</p>
| Qiaochu Yuan | 232 | <p>The ring of formal power series is an integral domain, so you can cancel factors from two sides of an equation. (This is equivalent to saying that it embeds into a field, the ring of formal Laurent series.) This is the same thing you do when you say that $2 = \frac{6}{3}$ even though $3$ isn't invertible in $\mathbb{Z}$. </p>
|
26,259 | <p>I've been reading <em>generatingfunctionology</em> by Herbert S. Wilf (you can find a copy of the second edition on the author's page <a href="http://www.math.upenn.edu/~wilf/DownldGF.html" rel="nofollow">here</a>).</p>
<p>On page 33 he does something I find weird. He wants to shuffle the index forward and does so like this:
\begin{align*}
(f_{n+1})_{n\in N_0} &= \frac{(f(X)-f(0))}{X}\\
(f_{n})_{n\in N_0} &= \sum_{n\in N_0} f_nx^n
\end{align*}</p>
<p>Why is this allowed? Namely, $X$ has a noninvertible constant term ($0$), so how is this division (multiplication by the reciprocal) defined within the ring of formal power series?</p>
| Mitch | 1,919 | <p>The concept of ring formalizes all the allowed operations, but I think this can be presented without all reference to the machinery.</p>
<p>The original function is $f_n$. The generating function is $f(x)$ where $x$ is the variable ($X$ in the OP's question): </p>
<p>$$f(x) = \sum_{n\ge0} f_n x^n$$</p>
<p>Then we can operate on this as follows using naive power series operations: $f(0) = f_0$, so </p>
<p>$$f(x) - f(0) = \sum_{n\ge1} f_n x^n$$</p>
<p>(whatever the value of $f_0$; it can be invertible or not - we're not trying to invert the gf here at all). So then </p>
<p>$$\frac{f(x) - f(0)}{x} = \frac{1}{x}\sum_{n\ge1} f_n x^n = \sum_{n\ge1} f_n x^{n-1} = \sum_{n\ge0} f_{n+1} x^n$$</p>
<p>which is what is desired, the gf for $f_{n+1}$.</p>
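<p>A concrete sanity check of the shift rule (my addition), with $f_n = 2^n$ so that $f(x) = 1/(1-2x)$: the shifted series should generate $f_{n+1} = 2^{n+1}$.</p>

```python
# f_n = 2^n gives f(x) = 1/(1 - 2x) for |x| < 1/2
def f(x):
    return 1 / (1 - 2 * x)

x = 0.1
lhs = (f(x) - f(0)) / x                            # (f(x) - f(0)) / x
rhs = sum(2 ** (n + 1) * x**n for n in range(80))  # gf of f_{n+1} = 2^(n+1)
print(abs(lhs - rhs) < 1e-12)  # → True
```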
|
15,124 | <p>Let $X \geq 1$ be an integer r.v. with $E[X]=\mu$. Let $X_i$ be a sequence of iid rvs with the distribution of $X$. On the integer line, we start at $0$, and want to know the expected position after we first cross $K$, which is some fixed integer. Each next position is determined by adding $X_i$ to the previous position. So the question is, if we stop this process after the first time $\tau$ for which $Y_{\tau}=\sum_{i=1}^{\tau}X_i > K$, that is, after the first time it crosses $K$, then what is $E[Y_{\tau}-K]$?. Can we get a bound of $O(\mu)$?</p>
| Shai Covo | 2,810 | <p>Let $\tau = \min \{ n \geq 1:X_1 + \cdots + X_n > K \}$. Then $\tau$ is an integer-valued random variable, bounded from above by $K+1$ (since $X_i \geq 1$). Note that $\tau = n$ if and only if $\sum\nolimits_{i = 1}^{n - 1} {X_i } \le K$ and $\sum\nolimits_{i = 1}^{n} {X_i } > K$. Thus, the event $\lbrace \tau = n \rbrace$ depends only on the values $X_1,\ldots,X_n$. So, by definition, $\tau$ is a stopping time with respect to the sequence $X_1,X_2,\ldots$. Now, $X_1,X_2,\ldots$ are i.i.d. with finite expectation $\mu$, and $\tau$ is a stopping time for them. Moreover, ${\rm E}(\tau) < \infty$ since $\tau \leq K+1$. Hence, by Wald's identity,
$$
{\rm E}\bigg(\sum\limits_{i = 1}^\tau {X_i } \bigg) = {\rm E}(\tau )\mu \leq (K+1)\mu.
$$
So if we put $Y_\tau = \sum\nolimits_{i = 1}^\tau {X_i }$, we get
$$
{\rm E}(Y_\tau - K) = {\rm E}(Y_\tau) - K \leq (K+1)\mu - K.
$$</p>
<p>EDIT:</p>
<p>Since $\tau \geq 1$, we have
$$
\mu - K \leq {\rm E}(Y_\tau - K) \leq (K+1)\mu - K.
$$ </p>
<p>As we have seen above, the problem reduces to calculating ${\rm E}(\tau)$. Put $S_n = \sum\nolimits_{i = 1}^n {X_i }$ ($S_0 = 0$).
Note that
$$
{\rm P}(\tau = n) = {\rm P}(S_{n - 1} \le K,S_n > K) = {\rm P}(S_{n - 1} \le K) - {\rm P}(S_n \le K).
$$
Hence,
$$
{\rm E}(\tau) = \sum\limits_{n = 1}^{K + 1} {n{\rm P}(\tau = n)} = \sum\limits_{n = 1}^{K + 1} n[{\rm P}(S_{n - 1} \le K) - {\rm P}(S_n \le K)] = \sum\limits_{n = 0}^K {{\rm P}(S_n \le K)}.
$$
So, we can write
$$
{\rm E}(\tau) = 1 + \sum\limits_{n = 1}^\infty {{\rm P}(S_n \le K)} = 1 + \sum\limits_{n = 1}^\infty {F^{(n)}(K)},
$$
where $F^{(n)}$ is the distribution function of $S_n$. For $t>0$ real, define $m(t) = \sum\nolimits_{n = 1}^\infty {F^{(n)}(t)}$.
From the theory of renewal processes, we know that $m(t) = {\rm E}(N_t)$, where $\lbrace N_t:t \geq 0 \rbrace$ is a renewal process with inter-arrival times distributed according to the distribution of $X$. $m(t)$ is called the renewal function. It may be worth noting that by the Elementary Renewal Theorem,
$$
\mathop {\lim }\limits_{t \to \infty } \frac{{m(t)}}{t} = \frac{1}{\mu }.
$$
Returning to our original setting, we have
$$
{\rm E}(Y_\tau ) = {\rm E}(\tau )\mu = (1 + m(K))\mu.
$$
So, the problem reduces to calculating $m(K)$. </p>
<p>Finally, <a href="http://www.postech.ac.kr/class/ie272/ie666_temp/Renewal.pdf" rel="noreferrer">here</a> is some useful link concerning renewal theory, which is very relevant to this answer.</p>
<p><a href="http://www.postech.ac.kr/class/ie272/ie666_temp/Renewal.pdf" rel="noreferrer">http://www.postech.ac.kr/class/ie272/ie666_temp/Renewal.pdf</a></p>
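<p>Both identities, Wald's E(Y_tau) = E(tau)*mu and the formula E(tau) = sum over n from 0 to K of P(S_n &lt;= K), can be verified exactly for a small concrete case (a sketch of mine: X uniform on {1,2,3}, so mu = 2, with K = 10):</p>

```python
from fractions import Fraction

p = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}  # law of X
mu = sum(x * q for x, q in p.items())
K = 10

# exact conditional expectations given the current partial sum s <= K
Et, Ey = {}, {}   # E[tau | S = s], E[Y_tau | S = s]
for s in range(K, -1, -1):
    Et[s] = 1 + sum(q * Et[s + x] for x, q in p.items() if s + x <= K)
    Ey[s] = sum(q * (Ey[s + x] if s + x <= K else Fraction(s + x))
                for x, q in p.items())

# E(tau) via the renewal-style sum: sum_{n=0}^{K} P(S_n <= K)
dist, Etau = {0: Fraction(1)}, Fraction(0)
for n in range(K + 1):
    Etau += sum(dist.values())          # adds P(S_n <= K)
    new = {}
    for s, q in dist.items():
        for x, px in p.items():
            if s + x <= K:
                new[s + x] = new.get(s + x, Fraction(0)) + q * px
    dist = new

print(Et[0] == Etau, Ey[0] == mu * Et[0])  # → True True
```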
|
1,332,420 | <p>The solution of this task is ${7\pi\over2}$, but I don't know how to get to this solution. I used the double-angle formulas for $\sin{2x}$ and $\cos{2x}$, moved everything to one side of the equation, and factored out $2\cos{x}$ to get this: $$2\cos{x}(\sin{x} - {\sqrt{2}\over2} - \cos{x}) = 0$$
From this I get that $\cos{x} = 0$, whose solutions are $x = {\pi\over2}$ and $x = {3\pi\over2}$. But I don't know what to do next with the expression inside the brackets to get the rest.</p>
<p>EDIT: So I get at the end that $\sin{2x} = {1\over2}$ but solutions I get then are:
$$\sin{2x} = {1\over 2}$$
$$2x = {\pi\over6} \quad \text{&}\quad 2x = {5\pi\over6}$$
$$x = {\pi\over12} \quad \text{&}\quad x = {5\pi\over12}$$
And when I then sum these up together with those already found:
$${\pi\over2}+{3\pi\over2} + {\pi\over12}+{5\pi\over12}=$$
$$={6\pi\over12}+{18\pi\over12} + {\pi\over12}+{5\pi\over12}=$$
$$={30\pi\over12}={5\pi\over2}$$
Which is not the solution I was looking for</p>
| DeepSea | 101,504 | <p><strong>hint</strong>:$$\sin x - \cos x = \dfrac{\sqrt{2}}{2} \Rightarrow \sin\left(x-\frac{\pi}{4}\right)=\dfrac{1}{2}$$. Can you continue ?</p>
|
1,332,420 | <p>The solution of this task is ${7\pi\over2}$, but I don't know how to get to this solution. I used the double-angle formulas for $\sin{2x}$ and $\cos{2x}$, moved everything to one side of the equation, and factored out $2\cos{x}$ to get this: $$2\cos{x}(\sin{x} - {\sqrt{2}\over2} - \cos{x}) = 0$$
From this I get that $\cos{x} = 0$, whose solutions are $x = {\pi\over2}$ and $x = {3\pi\over2}$. But I don't know what to do next with the expression inside the brackets to get the rest.</p>
<p>EDIT: So I get at the end that $\sin{2x} = {1\over2}$ but solutions I get then are:
$$\sin{2x} = {1\over 2}$$
$$2x = {\pi\over6} \quad \text{&}\quad 2x = {5\pi\over6}$$
$$x = {\pi\over12} \quad \text{&}\quad x = {5\pi\over12}$$
And when I then sum these up together with those already found:
$${\pi\over2}+{3\pi\over2} + {\pi\over12}+{5\pi\over12}=$$
$$={6\pi\over12}+{18\pi\over12} + {\pi\over12}+{5\pi\over12}=$$
$$={30\pi\over12}={5\pi\over2}$$
Which is not the solution I was looking for</p>
| Community | -1 | <p>$$(2\sin x-\sqrt{2}-2\cos x)=0$$
$$(2\sin x-2\cos x)=\sqrt{2}$$
$$4\sin^2 x-8\sin x \cos x+4\cos^2 x=2$$
$$4-8\sin x \cos x=2$$
$$\sin x\cos x=\frac{1}{4}$$
$$0.5\sin 2x=\frac{1}{4}$$
$$\sin 2x=\frac{1}{2}$$
$$x=\frac{5\pi}{12}\quad\text{or}\quad x=\frac{13\pi}{12}$$
<p>(Squaring also produces the candidates $x=\frac{\pi}{12}$ and $x=\frac{17\pi}{12}$, but these are extraneous, since they give $\sin x-\cos x=-\frac{\sqrt{2}}{2}$.) See the graph of the function.</p>
<p>Together with $x=\frac{\pi}{2}$ and $x=\frac{3\pi}{2}$ from $\cos x=0$, the sum of all solutions is $\frac{\pi}{2}+\frac{3\pi}{2}+\frac{5\pi}{12}+\frac{13\pi}{12}=\frac{7\pi}{2}$, which matches the stated answer.</p>
<p><img src="https://i.stack.imgur.com/nyN8T.png" alt="enter image description here"></p>
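<p>Since the equation was squared along the way, it is worth plugging the candidates back into the unsquared equation (a quick check I added):</p>

```python
import math

def lhs(x):
    # the unsquared equation: 2 sin x - sqrt(2) - 2 cos x
    return 2 * math.sin(x) - math.sqrt(2) - 2 * math.cos(x)

print(abs(lhs(5 * math.pi / 12)) < 1e-12)   # → True: genuine solution
print(abs(lhs(math.pi / 12)) < 1e-12)       # → False: extraneous root
print(abs(math.sin(2 * (5 * math.pi / 12)) - 0.5) < 1e-12)  # → True
```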
|
3,156,962 | <blockquote>
<p>Find general term of <span class="math-container">$1+\frac{2!}{3}+\frac{3!}{11}+\frac{4!}{43}+\frac{5!}{171}+....$</span></p>
</blockquote>
<p>However, it has been asked to check convergence, but how can I do that before knowing the general term? I can't see any pattern; comment quickly!</p>
| lab bhattacharjee | 33,337 | <p>I'll write <span class="math-container">$\alpha=m,\beta=n$</span> for the ease of typing</p>
<p>If <span class="math-container">$d(\ge1)$</span> divides <span class="math-container">$m^3-3mn^2,n^3-3m^2n$</span></p>
<p><span class="math-container">$d$</span> will divide <span class="math-container">$n(m^3-3mn^2)+3m(n^3-3m^2n)=-8m^3n$</span></p>
<p>and <span class="math-container">$3n(m^3-3mn^2)+m(n^3-3m^2n)=-8mn^3$</span></p>
<p>Consequently, <span class="math-container">$d$</span> must divide <span class="math-container">$(-8m^3n,-8mn^3)=8mn(m^2,n^2)=8mn$</span></p>
<p>Since <span class="math-container">$(m,n)=1$</span>, any prime dividing both <span class="math-container">$d$</span> and <span class="math-container">$mn$</span> would divide both <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, which is impossible; so <span class="math-container">$d$</span> is coprime to <span class="math-container">$mn$</span>, and hence <span class="math-container">$d$</span> will divide <span class="math-container">$8$</span></p>
<p>As <span class="math-container">$(m,n)=1,$</span> both <span class="math-container">$m,n$</span> cannot be even</p>
<p>If <span class="math-container">$m$</span> is even, <span class="math-container">$n^3-3m^2n$</span> will be odd <span class="math-container">$\implies d=1$</span></p>
<p>If both <span class="math-container">$m,n$</span> are odd,. <span class="math-container">$m^2,n^2\equiv1\pmod8$</span></p>
<p><span class="math-container">$m(m^2-3n^2)\equiv m(1-3)\equiv-2m\pmod8$</span></p>
<p>Similarly, we can establish the highest power of <span class="math-container">$2$</span> in <span class="math-container">$m^3-3mn^2$</span> will be <span class="math-container">$2$</span></p>
<p><span class="math-container">$\implies d=2$</span> if <span class="math-container">$m,n$</span> are odd</p>
|
4,016,921 | <p>Let <span class="math-container">$X, Y, Z$</span> be three random variables. Is the equality
<span class="math-container">$$
P(X=x, Y=y |Z=z) = P(X=x|Y=y,Z=z) P(Y=y|Z=z)
$$</span>
true?
Note that <span class="math-container">$P(X,Y)$</span> denotes the joint probability of random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p>
| metamorphy | 543,769 | <p>Actually <span class="math-container">$_2F_1(a,b;a;z)=(1-z)^{-b}$</span> for <span class="math-container">$|z|<1$</span> and any <span class="math-container">$a,b$</span> (well, with <span class="math-container">$a\notin\mathbb{Z}_{\leqslant 0}$</span>). Indeed, both are equal to <span class="math-container">$\sum_{n=0}^\infty b(b+1)\dots(b+n-1)z^n/n!$</span>, by definition of <span class="math-container">$_2F_1$</span> for the LHS, and by the <a href="https://en.wikipedia.org/wiki/Binomial_series" rel="nofollow noreferrer">binomial series</a> for the RHS.</p>
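<p>The reduction is easy to confirm numerically (a sketch of mine, summing the series with the Pochhammer-ratio recurrence):</p>

```python
# 2F1(a, b; a; z): the a's cancel, leaving sum_n (b)_n z^n / n!,
# which is the binomial series for (1 - z)^(-b)
def f21_aba(b, z, terms=80):
    term, s = 1.0, 0.0
    for n in range(terms):
        s += term
        term *= (b + n) * z / (n + 1)   # ratio of consecutive terms
    return s

b, z = 0.75, 0.4
print(abs(f21_aba(b, z) - (1 - z) ** (-b)) < 1e-9)  # → True
```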
|
1,610,616 | <blockquote>
<p>$6$ letters are to be posted in three letter boxes. The number of ways
of posting the letters when no letter box remains empty is?</p>
</blockquote>
<p>I solved the sum like dividing into possibilities $(4,1,1),(3,2,1)$ and $(2,2,2)$ and calculated the three cases separately getting $90,360$ and $90$ respectively.</p>
<p>What I wanted to know: is there any shortcut method to solve this problem faster? Can stars and bars be used? How?</p>
<p>Or can the method of coefficients be used, like finding the coefficient of $t^6$ in $(t+t^2+t^3+t^4)^3$? Will that method work here?</p>
| Abhinav | 671,238 | <p>The number of ways of dividing the letters into 3 and 3 is 6!/3!3!=20.
3 are placed directly, one in each letter box. 3 are left; the number of ways for those is 3*3*3=27, as there are three options for each letter.
Total ways = 27*20 = 540</p>
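<p>The total 540 (which also equals the question's casework 90 + 360 + 90) can be confirmed by brute force and by inclusion-exclusion (a check I added):</p>

```python
from itertools import product

# each of the 6 letters independently goes into one of 3 boxes;
# count assignments that leave no box empty
brute = sum(1 for a in product(range(3), repeat=6) if set(a) == {0, 1, 2})
incl_excl = 3**6 - 3 * 2**6 + 3 * 1**6   # surjections onto 3 boxes
print(brute, incl_excl)  # → 540 540
```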
|
2,151,491 | <p>For which prime numbers $p$ does the decimal for $\frac{1}{p}$ have cycle length $p-1$? I started with some simple examples to find an idea to solve:</p>
<p>$\frac{1}{2}=0.5,\frac{1}{3}=0.\overline{3},\frac{1}{5}=0.2,\frac{1}{7}=0.\overline{142857},\frac{1}{11}=0.\overline{09},\frac{1}{13}=0.\overline{076923}$</p>
<p>Here I only had $7$. Beyond these I can't find an idea for a solution. Any hints?</p>
| Peter | 82,961 | <p>Let $p$ be a prime number different from $2$ and $5$. This implies that $p$ is coprime to $10$.</p>
<p>Fermat's little theorem states that $$10^{p-1}\equiv 1\mod p$$ is satisfied. If for every prime factor $q$ of $p-1$ we have $$10^{\frac{p-1}{q}}\not\equiv 1\mod p$$ we can conclude that $p-1$ is the smallest exponent such that the above equivalence holds.</p>
<p>For example: $$p=17$$ $$p-1=2^4$$ $$10^8\equiv 16\mod 17$$</p>
<p>So, $10$ is a primitive root modulo $17$, so the period has length $16$</p>
<p>$$p=31$$ $$p-1=2\cdot 3\cdot 5$$ $$10^{15}\equiv 1\mod 31$$</p>
<p>So, $10$ is not a primitive root modulo $31$. The period has length $15$ in this case.</p>
<p>In general, the length of the period is $ord_p(10)$</p>
<p>The period lengths for the primes up to $200$:</p>
<pre><code>? forprime(j=1,200,if(gcd(j,10)==1,print(j," ",znorder(Mod(10,j)))))
3 1
7 6
11 2
13 6
17 16
19 18
23 22
29 28
31 15
37 3
41 5
43 21
47 46
53 13
59 58
61 60
67 33
71 35
73 8
79 13
83 41
89 44
97 96
101 4
103 34
107 53
109 108
113 112
127 42
131 130
137 8
139 46
149 148
151 75
157 78
163 81
167 166
173 43
179 178
181 180
191 95
193 192
197 98
199 99
?
</code></pre>
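<p>For readers without PARI, the same orders can be recomputed in a few lines (my addition; the decimal period of 1/p is the multiplicative order of 10 mod p):</p>

```python
# multiplicative order of 10 modulo p (p coprime to 10)
def order10(p):
    k, x = 1, 10 % p
    while x != 1:
        x = x * 10 % p
        k += 1
    return k

# spot-check a few rows of the table above
print(order10(7), order10(13), order10(17), order10(31))  # → 6 6 16 15
```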
|