| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,794,724 | <blockquote>
<p>Let $F:U\rightarrow W$ be a linear transformation from the vector
space $U$ to the vector space $W$. Show that the image space of $F$,</p>
<p>$$V(F)=\{w\in W:w=F(u) \ \ \text{for some} \ \ u\in U\},$$</p>
<p>is a subspace of $W$.</p>
</blockquote>
<p>Okay, I know that in order for $M$ to be a subspace of a vector space $V$, $M$ has to be </p>
<ol>
<li>non-empty</li>
<li>closed under addition with vectors and multiplication with scalars.</li>
</ol>
<p>So I have to show that $V(F)$ is non-empty and closed under addition with vectors and multiplication with scalars. </p>
<p>Can someone break down to me how this is done? I don't really understand what is being stated in the curly brackets and how to apply that to show 1. and 2.</p>
| Aloizio Macedo | 59,234 | <p>The curly bracket says that $w$ is an element of the set $V(F)$ if and only if $w$ is the result of applying $F$ to some element $u \in U$.</p>
<p>For instance, if $F: \mathbb{R}^2 \to \mathbb{R}^3$ is given by $F(x,y)=(x,0,0)$, then $(1,0,0)$ is in $V(F)$, since $(1,0,0)=F(1,3)$, for example. But $(3,2,0)$ is not in $V(F)$, since if it were, its second coordinate would have to be $0$.</p>
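<p>For this concrete $F$, the membership test can be sketched in a few lines of Python (the helper names are illustrative, not part of the original answer):</p>

```python
# Sketch for the concrete example F(x, y) = (x, 0, 0); helper names are illustrative.

def F(x, y):
    """The linear map R^2 -> R^3 from the example above."""
    return (x, 0.0, 0.0)

def in_image(w):
    # For this particular F, w = (a, b, c) lies in V(F) iff b = c = 0,
    # since F(a, y) = (a, 0, 0) for every y.
    a, b, c = w
    return b == 0 and c == 0

assert F(1, 3) == (1, 0.0, 0.0)   # (1,0,0) = F(1,3), so it is in V(F)
assert in_image((1, 0, 0))
assert not in_image((3, 2, 0))    # second coordinate is nonzero
```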
<p>So, suppose that $w_1$ and $w_2$ are in $V(F)$. Then they are the image of some $u_1$ and $u_2$, respectively (i.e., $F(u_1)=w_1$ and $F(u_2)=w_2$).</p>
<p>We want to prove that $w_1+w_2$ is still in $V(F)$. But since $F$ is linear, you have a pretty good candidate for an element $u$ such that $F(u)=w_1+w_2$. Can you tell who it is?</p>
<p>Repeat this train of thought for the rest.</p>
|
3,162,338 | <p>Consider <span class="math-container">$ x_1, x_2, ..., x_n \in \mathbb{R}$</span>.</p>
<p>We have to prove that each <span class="math-container">$\sqrt{x_i}$</span> is rational if the sum <span class="math-container">$\sqrt{x_1} + \ldots + \sqrt{x_n}$</span> is rational. </p>
<p>I think that I could prove it using the fact that only the sum of opposite irrational parts is rational, for example <span class="math-container">$ \sqrt 2 + (2-\sqrt 2) = 2 $</span>, because if <span class="math-container">$ x_1, x_2, ..., x_n \in \mathbb{R} $</span> then <span class="math-container">$ \sqrt{-1} \neq - \sqrt 1$</span>. </p>
| Peter Foreman | 631,494 | <p>Let
<span class="math-container">$$x_1=2$$</span>
<span class="math-container">$$x_2=(2-\sqrt{2})^2=6-4\sqrt{2}$$</span>
Then
<span class="math-container">$$\sqrt{x_1}+\sqrt{x_2}=2$$</span>
But
<span class="math-container">$$x_2\not\in \mathbb{Q}$$</span></p>
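<p>A quick numerical check of this counterexample (Python, added for illustration):</p>

```python
import math

x1 = 2.0
x2 = (2 - math.sqrt(2)) ** 2              # equals 6 - 4*sqrt(2), which is irrational
assert math.isclose(x2, 6 - 4 * math.sqrt(2))

s = math.sqrt(x1) + math.sqrt(x2)         # sqrt(2) + (2 - sqrt(2))
assert math.isclose(s, 2.0)               # the sum is rational even though x2 is not
```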
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some examples of questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| Pieter Geerkens | 64,624 | <p>There is more than one way to count <em>genetics</em>. Suppose that 120 years ago all four pairs of your mother's great-grandparents (8 in all) were pregnant with their first child, and both pairs of your father's grandparents (4 in all) were pregnant with their first child. </p>
<p>Thus at that moment in time you have 12 ancestors, of mixed generations back, just entering child-bearing years. </p>
<p>If exactly one of those twelve ancestors is full-blooded Cherokee, and the other eleven have no Cherokee blood, then:</p>
<p><strong>Yes, you are clearly 1/12th Cherokee.</strong></p>
<p>Whether you are counting back in even generations or in even years is an arbitrary choice, and it is incorrect to state that only one makes sense. Perhaps there are other valid ways to count ancestry back that would enable other fractions and counting.</p>
|
97,261 | <p>This semester, I will be taking a senior undergrad course in advanced calculus "real analysis of several variables", and we will be covering topics like: </p>
<ul>
<li>Differentiability</li>
<li>Open mapping theorem</li>
<li>Implicit function theorem</li>
<li>Lagrange multipliers, submanifolds</li>
<li>Integrals</li>
<li>Integration on surfaces</li>
<li>Stokes' theorem, Gauss' theorem</li>
</ul>
<p>I need to know if any of you know good textbooks that contain practice problems with full solutions or hints that can be used to understand the material. Most of the textbooks I found cover only the material, with few examples.</p>
| Adam | 20,333 | <p><a href="http://www.math.harvard.edu/~shlomo/" rel="nofollow">http://www.math.harvard.edu/~shlomo/</a></p>
|
97,261 | <p>This semester, I will be taking a senior undergrad course in advanced calculus "real analysis of several variables", and we will be covering topics like: </p>
<ul>
<li>Differentiability</li>
<li>Open mapping theorem</li>
<li>Implicit function theorem</li>
<li>Lagrange multipliers, submanifolds</li>
<li>Integrals</li>
<li>Integration on surfaces</li>
<li>Stokes' theorem, Gauss' theorem</li>
</ul>
<p>I need to know if any of you know good textbooks that contain practice problems with full solutions or hints that can be used to understand the material. Most of the textbooks I found cover only the material, with few examples.</p>
| analysisj | 14,966 | <p>Spivak has a good book on multivariable calculus. Harvard, I think, uses Hubbard's Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach. Another book which is used is Shifrin's Multivariable Mathematics, which I believe Harvard uses as well. For Shifrin's book at least, you can buy a solution manual online.</p>
|
3,585,271 | <p>Let <span class="math-container">$V$</span> be an affine variety in <span class="math-container">$K^n$</span> with ideal <span class="math-container">$I=I(V)$</span>, where <span class="math-container">$K$</span> is an algebraically closed field. Let <span class="math-container">$V'$</span> be the variety with defining ideal <span class="math-container">$Radical(I)$</span>. Usually <span class="math-container">$K[x_1,\ldots,x_n]/I$</span> and <span class="math-container">$K[x_1,\ldots,x_n]/Radical(I)$</span> have different Hilbert series. Does <span class="math-container">$V'$</span> consist of several components of <span class="math-container">$V$</span>? Which part of <span class="math-container">$V$</span> is not in <span class="math-container">$V'$</span>? Thank you very much.</p>
| Con | 682,304 | <p>As varieties they are the same. Usually one considers varieties defined by radical ideals for the correspondence of Hilbert's Nullstellensatz.</p>
<p>Here you can see one reason why one wants to work with schemes. If for example <span class="math-container">$n = 1$</span>, <span class="math-container">$I = (x^2)$</span> and hence <span class="math-container">$\sqrt{I}= (x)$</span>, you get the origin in <span class="math-container">$\mathbb{A}^1$</span> in both cases, but one of them is actually a bit "thicker" (which the classical varieties cannot really see at first). That is, because this origin has multiplicities coming from <span class="math-container">$x^2 = 0$</span>. In scheme-theoretic language this is called non-reduced. This amounts to having nilpotency in your coordinate ring (or in your sheaf).</p>
|
1,384,752 | <p>I ran across a problem involving existential quantifiers which has stumped me.
Let U, our universe, be the set of all people. Let S(x) be the predicate "x is a student" and I(x) be the predicate "x is intelligent".
I want to write the statement "Some students are intelligent" in the correct logical form. I can see 2 possible ways to write it</p>
<p>1) There exists an x in U such that ( S(x) AND I(x) )</p>
<p>2) There exists an x in U such that ( S(x) implies I(x) )</p>
<p>If I draw a Venn diagram, it seems like option 1 must be true, but from this same diagram (where the sets where S(x) is true and I(x) is true intersect), it is also true that there is an x such that if x is in the set where S(x) is true, then x is in the set where I(x) is true. This makes me wonder whether these two statements are logically equivalent, but I have a feeling they are not.</p>
<p>Thanks,
Matt</p>
| Greg Martin | 16,078 | <p>The two statements are not equivalent. The second one would be true if there is even one nonstudent $x$ in the universe, regardless of intelligence; it would also be true if there is even one intelligent person $x$ in the universe, regardless of student status.</p>
<p>Understanding why this is the case depends on truly understanding how mathematical if-then statements work (in this case, the existential quantifier can be ignored, as I don't think it's part of the error you're making).</p>
|
3,985,302 | <p>Let <span class="math-container">$L/K$</span> be an extension of local fields. We can find <span class="math-container">$\alpha$</span> such that <span class="math-container">$\mathcal{O}_L=\mathcal{O}_K[\alpha]$</span>. What do we know about this generating element? I think that this <span class="math-container">$\alpha$</span> can be selected in such a way that, in addition to the above property, it is also a uniformizing parameter at the same time. But I can not prove it.</p>
| KCd | 619 | <p>If you look at a <em>proof</em> that <span class="math-container">$\mathcal O_L = \mathcal O_K[\alpha]$</span> for some <span class="math-container">$\alpha$</span>, such as Prop. 3, Chapter III, Section 1 in Lang's "Algebraic Number Theory", then you should be able to see that you can take as <span class="math-container">$\alpha$</span> either <span class="math-container">$\zeta$</span> or <span class="math-container">$\zeta + \pi$</span> where <span class="math-container">$\zeta$</span> is an arbitrary generator (modulo the maximal ideal of <span class="math-container">$\mathcal O_L$</span>) of the residue field extension and <span class="math-container">$\pi$</span> is an arbitrary choice of uniformizer. When <span class="math-container">$L/K$</span> is unramified we can take as <span class="math-container">$\pi$</span> a prime from <span class="math-container">$K$</span> and thus <span class="math-container">$\mathcal O_L = \mathcal O_K[\zeta + \pi] = \mathcal O_K[\zeta]$</span>. When <span class="math-container">$L/K$</span> is totally ramified, we can use <span class="math-container">$\zeta = 1$</span> (or <span class="math-container">$\zeta = 0$</span>), so <span class="math-container">$\mathcal O_L = \mathcal O_K[\pi]$</span>.</p>
<p>Lang proves the result for an extension of DVRs with a separable residue field extension; no assumption of completeness, so in particular no use of Hensel's lemma. But the assumption that the DVR downstairs has an integral closure in the extension of its fraction field that is also a DVR is not at all a typical situation in a "global" situation such as the localization of a Dedekind domain, as there can be more than one prime upstairs lying over the prime downstairs. In the complete setting things are nicer since every valuation ring of a local field is a DVR.</p>
|
1,595,946 | <blockquote>
<p>Let $f:(a,b)\to\mathbb{R}$ be a continuous function such that
$\lim_\limits{x\to a^+}{f(x)}=\lim_\limits{x\to b^-}{f(x)}=-\infty$.
Prove that $f$ has a global maximum.</p>
</blockquote>
<p>Apparently, this is similar to the EVT and I believe the proof would be similar, but I cannot think anything related...</p>
| Tsemo Aristide | 280,301 | <p>Since $\lim_{x\rightarrow a}f(x)=\lim_{x\rightarrow b}f(x)=-\infty$, there exists $u$ with $0<u<(b-a)/2$ such that $|x-a|<u$ or $|x-b|<u$ implies $f(x)<-1$. The restriction of $f$ to the compact interval $[a+u,b-u]$ has a maximum $m$. Thus for every $x\in (a,b)$, $f(x)\leq \max\{-1,m\}$, so $f$ is bounded above; let $M=\sup_{(a,b)}f$. </p>
<p>Since $\lim_{x\rightarrow a}f(x)=\lim_{x\rightarrow b}f(x)=-\infty$, there also exists $u_M>0$ such that $|x-a|<u_M$ or $|x-b|<u_M$ implies $f(x)<-2|M|-1$. The restriction of $f$ to the compact interval $[a+u_M,b-u_M]$ attains a maximum $L$ at some $x_0\in [a+u_M,b-u_M]$; in fact $L=M$. </p>
<p>To see this, suppose that $L<M$. Then there exists $c>0$ such that $M-c>L$ and $M-c>-2|M|-1$. Consider $x\in (a,b)$: if $|x-a|<u_M$ or $|x-b|<u_M$ then $f(x)<-2|M|-1<M-c$, and if $x\in [a+u_M,b-u_M]$ then $f(x)\leq L<M-c$. Thus $M-c$ is an upper bound for $f$ that is smaller than the supremum $M$, a contradiction. Hence $L=M$ and $f$ attains its global maximum at $x_0$.</p>
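<p>To illustrate the statement numerically, here is a small Python sketch with the concrete choice $f(x)=\ln\big((x-a)(b-x)\big)$ on $(0,1)$ (my own example, not from the proof):</p>

```python
import math

a, b = 0.0, 1.0
f = lambda x: math.log((x - a) * (b - x))    # tends to -inf at both endpoints

# Sample the open interval on a fine grid and locate the largest value.
xs = [a + (b - a) * k / 100000 for k in range(1, 100000)]
best_x = max(xs, key=f)

assert abs(best_x - 0.5) < 1e-3                          # maximum attained inside (a, b)
assert math.isclose(f(best_x), math.log(0.25), rel_tol=1e-6)
```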
|
4,083,697 | <p>I'm thinking about the example <span class="math-container">$f(x)=(x-1)^2$</span> which is clearly symmetric about the line <span class="math-container">$x=1$</span>. The question is really how do you show that it is symmetric about <span class="math-container">$x=1$</span> algebraically? I notice that if you plug in <span class="math-container">$-x+2$</span> you end up with <span class="math-container">$y=(-x+1)^2$</span> which is equivalent to the original function. I'm looking for this answer because I'm curious how one would show a similar property for a function that is more tricky to graph.</p>
| Andrei | 331,661 | <p>I would write it like this:
<span class="math-container">$$|z\cdot w|=|r_1|\cdot|r_2|\cdot|\cos\theta_1+i\sin\theta_1|\cdot|\cos\theta_2+i\sin\theta_2|=|r_1|\cdot|r_2|\cdot 1\cdot 1$$</span>
Then, if <span class="math-container">$|z\cdot w|=0$</span>, at least one of the factors must be <span class="math-container">$0$</span>.</p>
|
3,518,285 | <p>I started studying the book of Daniel Huybrechts, Complex Geometry An Introduction. I tried studying <a href="https://mathoverflow.net/questions/13089/why-do-so-many-textbooks-have-so-much-technical-detail-and-so-little-enlightenme">backwards</a> as much as possible, but I have been stuck on the concepts of <a href="https://en.wikipedia.org/wiki/Linear_complex_structure" rel="nofollow noreferrer">almost complex structures</a> and <a href="https://en.wikipedia.org/wiki/Complexification" rel="nofollow noreferrer">complexification</a>. I have studied several books and articles on the matter including ones by <a href="https://kconrad.math.uconn.edu/blurbs/linmultialg/complexification.pdf" rel="nofollow noreferrer">Keith Conrad</a>, <a href="https://individual.utoronto.ca/jordanbell/notes/complexification.pdf" rel="nofollow noreferrer">Jordan Bell</a>, <a href="http://www.physics.rutgers.edu/~gmoore/618Spring2019/GTLect2-LinearAlgebra-2019.pdf" rel="nofollow noreferrer">Gregory W. Moore</a>, <a href="https://www.springer.com/gp/book/9780387728285" rel="nofollow noreferrer">Steven Roman</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/2881246834" rel="nofollow noreferrer" rel="nofollow noreferrer">Suetin, Kostrikin and Mainin</a>, <a href="https://www.springer.com/gp/book/9783319115108" rel="nofollow noreferrer">Gauthier</a></p>
<p>I have several questions on the concepts of almost complex structures and complexification. Here is one:</p>
<p>I understand for a finite dimensional <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$V=(V,\text{Add}_V: V^2 \to V,s_V: \mathbb R \times V \to V)$</span>, the following are equivalent</p>
<ol>
<li><span class="math-container">$\dim V$</span> even</li>
<li><span class="math-container">$V$</span> has an almost complex structure <span class="math-container">$J: V \to V$</span></li>
<li><span class="math-container">$V$</span> has a complex structure <span class="math-container">$s_V^{\#}: \mathbb C \times V \to V$</span> that agrees with its real structure: <span class="math-container">$s_V^{\#} (r,v)=s_V(r,v)$</span>, for any <span class="math-container">$r \in \mathbb R$</span> and <span class="math-container">$v \in V$</span></li>
<li>if and only if <span class="math-container">$V \cong \mathbb R^{2n} \cong (\mathbb R^{n})^2$</span> for some positive integer <span class="math-container">$n$</span> (that turns out to be half of <span class="math-container">$\dim V$</span>) if and only if <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>.</li>
</ol>
<p>The last condition makes me think that the property 'even-dimensional' for finite-dimensional <span class="math-container">$V$</span> is generalised by the property '<span class="math-container">$V \cong W^2$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>' for finite or infinite dimensional <span class="math-container">$V$</span>.</p>
<p>Question: For <span class="math-container">$V$</span> finite or infinite dimensional <span class="math-container">$\mathbb R-$</span>vector space, are the following equivalent?</p>
<ol start="5">
<li><p><span class="math-container">$V$</span> has an almost complex structure <span class="math-container">$J: V \to V$</span></p></li>
<li><p>Externally, <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span> vector space <span class="math-container">$W$</span></p></li>
<li><p>Internally, <span class="math-container">$V=S \bigoplus U$</span> for some <span class="math-container">$\mathbb R-$</span> vector subspaces <span class="math-container">$S$</span> and <span class="math-container">$U$</span> of <span class="math-container">$V$</span> with <span class="math-container">$S \cong U$</span> (and <span class="math-container">$S \cap U = \{0_V\}$</span>)</p></li>
</ol>
| BCLC | 140,308 | <p>As a supplement to the other answers, I'm going to prove (6 or) 7 implies 5 without axiom of choice. This is based on <a href="https://math.stackexchange.com/users/431940/joppy">Joppy</a>'s <a href="https://math.stackexchange.com/a/3559659">answer</a> and <a href="https://math.stackexchange.com/users/686397/woolierthanthou">WoolierThanThou</a>'s <a href="https://math.stackexchange.com/questions/3518285/infinite-dimensional-vector-space-has-almost-complex-structure-if-and-only-if-it#comment7326517_3518448">comment</a>:</p>
<p>Given an isomorphism <span class="math-container">$\theta: S \to U$</span>, define <span class="math-container">$J: V \to V$</span> on the direct sum <span class="math-container">$V = S \bigoplus U$</span> by setting <span class="math-container">$J(s \oplus u) := - \theta^{-1}(u) \oplus \theta(s)$</span>.</p>
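<p>As a sanity check that this $J$ really is an almost complex structure (i.e. $J\circ J=-\mathrm{id}$), here is a small Python sketch with the illustrative choices $V=\mathbb R^2\oplus\mathbb R^2$ and $\theta=\mathrm{id}$ (both choices are my own, not forced by the answer):</p>

```python
# Illustrative choices: S = U = R^2 and theta the identity isomorphism.
theta = lambda v: v
theta_inv = lambda v: v
neg = lambda v: tuple(-x for x in v)

def J(s, u):
    # J(s (+) u) := -theta^{-1}(u) (+) theta(s), as in the answer above
    return (neg(theta_inv(u)), theta(s))

s, u = (1.0, 2.0), (3.0, -4.0)
ss, uu = J(*J(s, u))
assert ss == neg(s) and uu == neg(u)   # J applied twice negates both summands: J^2 = -id
```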
|
607,264 | <p>Let the directional derivative of a function $f(x,y)$ at a point $P$ in the direction of $(1/\sqrt{5})\mathbf{i}+(2/\sqrt{5})\mathbf{j}$ be $16/\sqrt{5}$ and the partial derivative $\partial f / \partial x$ evaluated at $P$ be $6$. Then what is the directional derivative in the direction of $\mathbf{i}-\mathbf{j}$?</p>
<p>I got $1$ as the answer but this is incorrect. It should come out to $1/\sqrt{2}$.</p>
<p>My work so far:</p>
<p>$D(P)=(f_x,f_y)\cdot(1/\sqrt{5},2/\sqrt{5}) $</p>
<p>if $\partial f/ \partial x=6$ then $6+2f_y=16 \implies f_y=5$</p>
<p>so $(1\cdot6)+(5\cdot(-1))=1$.</p>
| Eric Thoma | 35,667 | <p>The directional derivative of $f(x,y)$ along some unit vector $\mathbf{v}$ is $\nabla f(x,y) \cdot \mathbf{v}$.</p>
<p>Applying this to the given in the problem:</p>
<p>$$ \frac{1}{\sqrt{5}}\cdot\frac{\partial f}{\partial x} + \frac{2}{\sqrt{5}}\cdot\frac{\partial f}{\partial y}=\frac{16}{\sqrt{5}}$$</p>
<p>Using the value ${\partial f}/{\partial x} = 6$, we get ${\partial f}/{\partial y} = 5$. Thus, $\nabla f = (6,5)$, and $\nabla f \cdot (1,-1)/\sqrt{2}=1/\sqrt{2}$</p>
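<p>A numerical cross-check of both directional derivatives (Python, added for illustration):</p>

```python
import math

grad = (6.0, 5.0)   # (f_x, f_y) at P, from the computation above
dot = lambda v, w: sum(p * q for p, q in zip(v, w))

v1 = (1 / math.sqrt(5), 2 / math.sqrt(5))      # the given direction
assert math.isclose(dot(grad, v1), 16 / math.sqrt(5))

v2 = (1 / math.sqrt(2), -1 / math.sqrt(2))     # i - j, normalized to unit length
assert math.isclose(dot(grad, v2), 1 / math.sqrt(2))
```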
|
1,588,665 | <p>I have been reading up on finding the eigenvectors and eigenvalues of a symmetric matrix lately and I am totally unsure of <strong>how and why</strong> it works. Given a matrix, I can find its eigenvectors and values like a machine but the problem is, I have no intuition of how it works.</p>
<p>1) I understand that $v^tAv$ is the equation of an ellipse in matrix form</p>
<p>2) I understand how Lagrangian multipliers work</p>
<p>Can someone please show me the proof where finding the eigenvectors of such a matrix gives me the principal components ? </p>
<p>I am following this topic.
<a href="https://math.stackexchange.com/questions/87199/maximizing-symmetric-matrices-v-s-non-symmetric-matrices">Maximizing symmetric matrices v.s. non-symmetric matrices</a> I know how to find the eigenvalues of the matrix but <strong>I have no idea how it works</strong></p>
| Pedro | 70,305 | <p>To solve an equation, you have to establish a sequence of logical <strong>equivalences</strong>.</p>
<p>In your first method, you only established a sequence of logical <strong>implications</strong>. This is the reason why you lost the solution $x=0$.</p>
<p><em>Remark:</em> In order to solve an equation, a sequence of <strong>implications</strong> can fail for two reasons:</p>
<ol>
<li>You lose one (or more than one) "right solution".</li>
<li><p>You find one (or more than one) "wrong solution".</p>
<ul>
<li><p>Example 1 (provided by you):
$$\begin{align}
&9x^2-36x=0\\
\Rightarrow\quad&9x^2=36x\\
\Rightarrow\quad&9x=36\qquad \text{if } x\neq 0\\
\Rightarrow\quad&x=4\qquad \text{if } x\neq 0\\
\end{align}$$
Here, we lost the right solution $x=0$.</p></li>
<li><p>Example 2:
$$\begin{align}
&x^2+1=0\\
\Rightarrow\quad&(x^2+1)(x^2-1)=0(x^2-1)\\
\Rightarrow\quad&x^4-1=0\\
\Rightarrow\quad&x^4=1\\
\Rightarrow\quad&x=1\\
\end{align}$$
Here, we find the wrong solution $x=1$.</p></li>
<li><p>Example 3 (provided by you too):
$$\begin{align}
&9x^2-36x=0\\
\Leftrightarrow\quad&x(9x-36)=0\\
\Leftrightarrow\quad&x=0\text{ or }9x-36=0\\
\Leftrightarrow\quad&x=0\text{ or }x=4\\\end{align}$$
Here, we lost nothing right and find nothing wrong.</p></li>
<li><p>Example 4:
$$\begin{align}
&9x^2-36x=0\\
\Leftrightarrow\quad&9x^2=36x\\
\Leftrightarrow\quad&9x=36 \text{ if } x\neq 0\qquad \text { or } \qquad x=0\\
\Leftrightarrow\quad&x=4\text { or }x=0\\
\end{align}$$
Here, we solved the equation because we established a sequence of equivalences.</p></li>
</ul></li>
</ol>
|
353,658 | <p>Let $g : [0, 1] \rightarrow \mathbb{R}$ be twice differentiable with $g^{\prime \prime}(x) > 0$ for all $x \in [0,1]$. Suppose that $g(0) > 0$ and $g(1) = 1$. Prove if $g$ has a fixed point in $(0,1)$, then $g^{\prime}(1) > 1$.</p>
<p>My attempt: Define a function $h(x)=g(x)-x$. Since $g$ has a fixed point, say $c \in (0,1)$, we have $h(c)=g(c)-c=0$. </p>
<p>Notice that we have $h(c)=h(1)=0$, by Rolle's Theorem, there exists $d \in (c,1)$ such that $h^{\prime}(d)=0$</p>
<p>Applying the Mean Value Theorem to $h^{\prime}$ on $[d,1]$, there exists $e \in (d,1)$ such that $h^{\prime \prime}(e)=\frac{h^{\prime}(d)-h^{\prime}(1)}{d-1}$. Notice that we have $h^{\prime \prime}(x)=g^{\prime \prime}(x) >0 $ for all $x \in [0,1]$. Hence, $-h^{\prime}(1)<0 \implies g^{\prime}(1) > 1$</p>
<p>Can anyone explain to me why we need to use Rolle's theorem here?</p>
| R Salimi | 71,371 | <p>We don't need Rolle's theorem: since $g^{\prime\prime}>0$, $g^{\prime}$ is strictly increasing. Also $g$ has a fixed point $c$ in $(0,1)$ and $1$ is a fixed point of $g$, so the Mean Value Theorem alone suffices: $\frac{g(1)-g(c)}{1-c}=g^{\prime}(t)=1$ for some $t\in(c,1)$, and since $g^{\prime}$ is strictly increasing, $g^{\prime}(1)>g^{\prime}(t)=1$.
Of course, by $g^{\prime}(1)$ I mean $\lim_{x\to1^-}g^{\prime}(x)$, because $1$ is an endpoint of $[0,1]$.</p>
|
60,322 | <p>I'm interested in Lie theory and its connections to dynamical systems theory. I am starting my studies and would like references to articles on the subject.</p>
| Asaf | 8,857 | <p>The connections between Dynamics and Lie Groups (or Algebraic groups) comes mainly in two flavours:</p>
<ol>
<li>Smooth dynamics, as others have stated: Hamiltonian dynamics and differential equations.</li>
<li>Applications of Ergodic theory and Topological dynamics to Lie groups (or more generally, homogeneous spaces), or, as Lindenstrauss calls it, homogeneous dynamics.</li>
</ol>
<p>The homogeneous dynamics realm is again divided into two main areas:</p>
<ol>
<li>"Geometric Applications", i.e. most of Margulis works (rigidity and such). Those are problems that deal directly with these settings.</li>
<li>"Other applications", mainly number theoretical applications, which basically can be modelled on such spaces and dynamical methods (such as orbit classification or measure classification) come to use.</li>
</ol>
<p>I find myself more expert on 2.2 side, and I don't know anything about smooth dynamics, so I'll leave you with just one reference - <a href="https://books.google.co.uk/books?id=9nL7ZX8Djp4C&printsec=frontcover&dq=katok+Introduction+to+the+Modern+Theory+of+Dynamical+Systems&source=bl&ots=oSibX9lEAI&sig=vUEQGSMYsHrcJXM0F-sCTF88so4&hl=en&ei=dpOXTaeLLJCVswa7zP2wCA&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCAQ6AEwAQ%2520%2522%2520#v=onepage&q&f=false/" rel="nofollow noreferrer">Katok-Introduction to the Modern Theory of Dynamical Systems</a>, which is some sort of general encyclopedia, and might be a good place to start your journey.</p>
<p>About homogeneous dynamics.
The area doesn't have a usual reference, and to be exact, there are hardly any references at all.
A good place to start would be <a href="https://books.google.co.uk/books?id=PiDET2fS7H4C&printsec=frontcover&dq=manfred+einsiedler+tom+ward&source=bl&ots=StxUpooJPF&sig=XuK6I5W0YFqCFw79ki3M22pcYzw&hl=en&ei=95OXTePGLoPOswbgpP3HCA&sa=X&oi=book_result&ct=result&resnum=3&ved=0CB8Q6AEwAg#v=onepage&q&f=false%20%22/" rel="nofollow noreferrer">Einsiedler, Ward - Ergodic Theory With a View Towards Number Theory</a>; this is a relatively new book in the GTM series, which is well written, gives an introduction to ergodic theory, and in the later part proves Ratner's theorems for SL_2 (Furstenberg, Dani, Dani-Smillie). It also discusses some of the dynamics of nilpotent systems, such as the Heisenberg group (which is the starting point toward the Green-Tao theorem).</p>
<p>For the more advanced reader, the best place would be Elon's own notes - <a href="http://www.ma.huji.ac.il/%7Eelon/flows10s2/index.html" rel="nofollow noreferrer">Lindenstrauss notes from a prev. course in HU</a>, another good place would be the Clay Pisa proceedings, containing lecture notes of Eskin regarding Ratner theorems, and a paper by Lindenstrauss and Einsiedler about their work on diagonalizable actions.
If one is particularly interested in Ratner theorems, one can look in - <a href="https://people.uleth.ca/%7Edave.morris/books/Ratner.html" rel="nofollow noreferrer">Dave Morris' book about Ratner's theorems</a>.</p>
<p>For those who are interested in Margulis works (Arithmeticity and such), I know only of two references - <a href="https://books.google.co.uk/books?id=V9z2cjHOhVoC&printsec=frontcover&dq=Discrete+subgroups+of+semisimple+Lie+groups&source=bl&ots=BeVLv0TEKD&sig=suc4JrQSJWaUozIyrD-QM-q22m8&hl=en&ei=g5WXTeWcCIbYsgbNl7jNCA&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCIQ6AEwAQ#v=onepage&q&f=false/" rel="nofollow noreferrer">Margulis-Discrete subgroups of semisimple Lie groups</a> which is out of print and extremely hard to find (and also hard to understand, one needs some familiarity with Lie groups and algebraic groups), and the other one is a book by Zimmer - <a href="https://rads.stackoverflow.com/amzn/click/com/0817631844" rel="nofollow noreferrer" rel="nofollow noreferrer">Ergodic Theory and Semisimple Groups</a>.
Dave Morris has a draft of a book about Arithmetic groups which might be of interest as well - <a href="http://people.uleth.ca/%7Edave.morris/books/IntroArithGroups.html" rel="nofollow noreferrer">Morris - Introduction to Arithmetic Groups</a>.</p>
<p>Well I'm hoping I gave you enough reading for some time.</p>
|
1,652,701 | <p>Am I on the right track to solving this?</p>
<p>$$e^z=6i$$
Let $w=e^z$</p>
<p>Thus,</p>
<p>$$w=6i$$
$$e^w=e^{6i}$$
$$e^w=\cos(6)+i\sin(6)$$
$$\ln(e^w)=\ln(\cos(6)+i\sin(6))$$
$$w=\ln(\cos(6)+i\sin(6))$$
$$e^z=\ln(\cos(6)+i\sin(6))$$
$$\ln(e^z)=\ln(\ln(\cos(6)+i\sin(6)))$$
$$z=\ln(\ln(\cos(6)+i\sin(6)))$$</p>
<p>I had another method that started by taking the natural log of both sides right away, but that leads to $\arctan(6/0)$, which is undefined...</p>
| Mark Viola | 218,419 | <p><strong>HINT:</strong></p>
<p>$$6i=e^{\log(6)+i\pi/2+i2n\pi}=e^z$$</p>
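<p>Numerically, every branch of the hint indeed satisfies $e^z=6i$ (Python sketch, added for illustration):</p>

```python
import cmath

# z = log 6 + i*pi/2 + 2*pi*i*n for any integer n, as in the hint above
for n in range(-2, 3):
    z = cmath.log(6) + 1j * cmath.pi / 2 + 2j * cmath.pi * n
    assert cmath.isclose(cmath.exp(z), 6j)   # every branch maps back to 6i
```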
|
1,652,701 | <p>Am I on the right track to solving this?</p>
<p>$$e^z=6i$$
Let $w=e^z$</p>
<p>Thus,</p>
<p>$$w=6i$$
$$e^w=e^{6i}$$
$$e^w=\cos(6)+i\sin(6)$$
$$\ln(e^w)=\ln(\cos(6)+i\sin(6))$$
$$w=\ln(\cos(6)+i\sin(6))$$
$$e^z=\ln(\cos(6)+i\sin(6))$$
$$\ln(e^z)=\ln(\ln(\cos(6)+i\sin(6)))$$
$$z=\ln(\ln(\cos(6)+i\sin(6)))$$</p>
<p>I had another method that started by taking the natural log of both sides right away, but that leads to $\arctan(6/0)$, which is undefined...</p>
| Noble Mushtak | 307,483 | <p>Hopefully, from all of these solutions, you know how to solve this problem. Now, let's try doing it your way. You've done everything right so far:
$$z=\ln(\ln(\cos(6)+i\sin(6)))$$</p>
<p>By <a href="https://en.wikipedia.org/wiki/Euler%27s_identity" rel="nofollow noreferrer">Euler's Identity</a>, we have $\cos(6)+i\sin(6)=e^{6i}$, so clearly, taking the $\ln$ of this is just $6i$:
$$z=\ln(6i)$$</p>
<p>Now, if we go back to our original equation:
$$e^z=6i$$</p>
<p>The equation we have at the end is just the $\ln$ of both sides of the original equation. Everything you did is valid, but once the simplification is done you return to the original equation, which is why this approach went in a circle.</p>
|
2,234,744 | <p>Please help me find the value of the following integral:<br>
$$\frac{(5050)\int^1_0(1-x^{50})^{100} dx}{\int^1_0(1-x^{50})^{101} dx}$$
I tried solving both numerator and denominator via by-parts but it isn't giving me a conclusive solution. Any other suggestions?</p>
| Marios Gretsas | 359,315 | <p>One way to solve it, besides a change of variable, is to use the binomial theorem in both the numerator and the denominator.</p>
<p>$\sum_{n=0}^{100}\binom{100}{n}(x^{50})^{100-n}(-1)^n=(1-x^{50})^{100}$ </p>
<p>$\binom{n}{k}=\frac{n!}{k!(n-k)!}$</p>
<p>Use the linearity of the integral and integrate each term of the series.</p>
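<p>As a numerical sanity check (not part of the answer): the substitution $t=x^{50}$ turns $I_n=\int_0^1(1-x^{50})^n\,dx$ into the Beta integral $I_n=\frac{1}{50}B(\tfrac{1}{50},n+1)$, which can be evaluated with log-gamma, and the ratio in the question comes out to $5051$:</p>

```python
import math

def I(n, p=50):
    # I_n = (1/p) * B(1/p, n+1) = Gamma(1/p) Gamma(n+1) / (p Gamma(n+1+1/p)),
    # via the substitution t = x^p; lgamma keeps the evaluation stable.
    return math.exp(math.lgamma(1 / p) + math.lgamma(n + 1)
                    - math.lgamma(n + 1 + 1 / p)) / p

ratio = 5050 * I(100) / I(101)
assert math.isclose(ratio, 5051, rel_tol=1e-9)
```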
|
470,617 | <ol>
<li><p>Two competitors won $n$ votes each.
How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li>
<li><p>One competitor won $a$ votes, and the other won $b$ votes. $a>b$.
How many ways are there to count the votes, in a way that the first competitor is always ahead of the other?
(They can have the same amount of votes along the way)</p></li>
</ol>
<p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number $\frac{1}{n+1}\binom{2n}{n}$, by the proof with the grid.</p>
<p>I am unsure about how to go about solving the second problem.</p>
| Robert Israel | 8,508 | <p>When in doubt, convert trig functions to complex exponentials.</p>
<p>If $w = e^{i\pi/7}$,
$\csc^2(\pi/7) = \dfrac{-4}{(w-1/w)^2}$ and similarly for the others
with $w$ replaced by $w^2$ and $w^4$.
Simplifying,
$$ \csc^2(\pi/7) + \csc^2(2\pi/7) + \csc^2(4\pi/7) - 8 \\=
-4\,{\frac {2\,{w}^{16}+{w}^{14}+3\,{w}^{12}+3\,{w}^{10}+3\,{w}^{8}+3
\,{w}^{6}+3\,{w}^{4}+{w}^{2}+2}{ \left( {w}^{8}-1 \right) ^{2}}}
$$
and the numerator is divisible by
$w^6+w^5+w^4+w^3+w^2+w+1 = 0$</p>
<p>EDIT: This cries out for generalization. We also have
$$\csc^2(\pi/3)+ \csc^2(2\pi/3) = 8/3$$
$$\csc^2(\pi/15) + \csc^2(2\pi/15) + \csc^2(4\pi/15) + \csc^2(8\pi/15) = 32$$
but unfortunately
$$\csc^2(\pi/31) + \csc^2(2\pi/31) + \csc^2(4\pi/31) + \csc^2(8\pi/31) + \csc^2(16\pi/31)$$
is irrational (it seems to be a root of $z^3-160 z^2+3904 z-23552 = 0$)</p>
<p>EDIT: Instead we have
$$ \sum_{j=1}^{15} \csc^2(j \pi/31) = 160$$</p>
<p>Actually it seems $$ \sum_{j=1}^n \csc^2(j \pi/(2n+1)) = \dfrac{2n(n+1)}{3}$$ for all positive integers $n$. In the case $n=3$, since $\csc(4\pi/7) = \csc(3\pi/7)$,
$$\csc^2(\pi/7) + \csc^2(2\pi/7) + \csc^2(4\pi/7) = \csc^2(\pi/7) + \csc^2(2\pi/7) + \csc^2(3\pi/7) = 8$$
In the case $n=7$, there are actually four "basic" equations involving
$\csc^2(j \pi/15)$:
$$\eqalign{\csc^2(5\pi/15) &= 4/3\cr
\csc^2(3\pi/15) + \csc^2(6\pi/15) &= 4\cr
10 \csc^2(\pi/15) + \csc^2(2\pi/15)+\csc^2(7\pi/15) &= 36\cr
\csc^2(\pi/15) - 10 \csc^2(3 \pi/15) + \csc^2(4\pi/15) &= -4\cr}$$
and $\csc^2(\pi/15) + \csc^2(2\pi/15) + \csc^2(4\pi/15) + \csc^2(8\pi/15) = 32$ is the sum of the last two (using the fact that $\csc(8 \pi/15) = \csc(7 \pi/15)$).</p>
|
470,617 | <ol>
<li><p>Two competitors won $n$ votes each.
How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li>
<li><p>One competitor won $a$ votes, and the other won $b$ votes. $a>b$.
How many ways are there to count the votes, in a way that the first competitor is always ahead of the other?
(They can have the same amount of votes along the way)</p></li>
</ol>
<p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid.</p>
<p>I am unsure about how to go about solving the second problem.</p>
| lab bhattacharjee | 33,337 | <p>Let $7x=\pi\implies 4x=\pi-3x$</p>
<p>$$\frac1{\sin2x}+\frac1{\sin4x}=\frac{\sin4x+\sin2x}{\sin4x\sin2x}$$</p>
<p>$$=\frac{2\sin3x\cos x}{\sin(\pi-3x)\sin2x}(\text{ using } \sin2A+\sin2B=2\sin(A+B)\cos(A-B))$$</p>
<p>$$=\frac{2\sin3x\cos x}{\sin(3x)2\sin x\cos x}(\text{ using } \sin2C=2\sin C\cos C \text{ and }\sin(\pi-y)=\sin y)$$
$$=\frac1{\sin x}$$</p>
<p>$$\implies \frac1{\sin x}-\frac1{\sin2x}-\frac1{\sin4x}=0$$</p>
<p>Squaring we get,</p>
<p>$$\frac1{\sin^2x}+\frac1{\sin^22x}+\frac1{\sin^24x}=2\left(\frac1{\sin x\sin4x}+\frac1{\sin x\sin2x}-\frac1{\sin2x\sin4x}\right)=2\frac{(\sin2x+\sin4x-\sin x)}{\sin x\sin2x\sin4x}$$</p>
<p>Now, $$\sin2x+\sin4x-\sin x=2\sin3x\cos x-\sin(\pi-6x)\text{ as }x=\pi-6x$$</p>
<p>$$\implies \sin2x+\sin4x-\sin x=2\sin3x\cos x-\sin6x$$</p>
<p>$$=2\sin3x\cos x-2\sin3x\cos3x=2\sin3x(\cos x-\cos3x)=2\sin3x(2\sin2x\sin x)$$ using $\cos2C-\cos2D=2\sin(C+D)\sin(D-C)$</p>
<p>$$\implies \frac{\sin2x+\sin4x-\sin x}{\sin x\sin2x\sin4x}=4$$ as $\sin x\sin2x\sin4x\ne0$ if $7x=\pi$</p>
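<p>The final identity can be spot-checked numerically (my addition):</p>

```python
from math import sin, pi

x = pi / 7
total = 1 / sin(x) ** 2 + 1 / sin(2 * x) ** 2 + 1 / sin(4 * x) ** 2
print(total)   # ≈ 8.0
```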
|
2,849,450 | <p>Let's consider a generic linear programming problem. Is it possible that the decision variables of the objective function assume (at the optimal solution) irrational values?</p>
<p>Also, is it possible that some entries of the $A$ matrix are irrational?</p>
| Siong Thye Goh | 306,553 | <p>Yes.</p>
<p>$$\min x$$ subject to $$\sqrt{2}x=1$$ is a valid linear programming instance.</p>
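<p>For a concrete feel (my own hedged sketch, not part of the answer): a tiny LP whose data contain $\sqrt2$ and whose optimum is attained at an irrational vertex. Brute-force vertex enumeration in pure Python suffices for two variables:</p>

```python
from itertools import combinations
from math import sqrt, isclose

# maximize x + y  subject to  sqrt(2)*x + y <= sqrt(2),  x + sqrt(2)*y <= sqrt(2),
# x >= 0, y >= 0   (each constraint written as a1*x + a2*y <= b)
A = [(sqrt(2), 1.0), (1.0, sqrt(2)), (-1.0, 0.0), (0.0, -1.0)]
b = [sqrt(2), sqrt(2), 0.0, 0.0]

def feasible(x, y, tol=1e-9):
    return all(a1 * x + a2 * y <= bi + tol for (a1, a2), bi in zip(A, b))

best = None
for i, j in combinations(range(4), 2):
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if isclose(det, 0.0, abs_tol=1e-12):
        continue                        # parallel constraint lines: no vertex
    x = (b[i] * c2 - a2 * b[j]) / det   # Cramer's rule for the 2x2 system
    y = (a1 * b[j] - b[i] * c1) / det
    if feasible(x, y) and (best is None or x + y > best[0]):
        best = (x + y, x, y)

print(best)   # optimum 4 - 2*sqrt(2) ≈ 1.17157 at x = y = 2 - sqrt(2)
```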
|
664,349 | <blockquote>
<p>If $G$ is a finite group where every non-identity element is generator of $G$, what is the order of $G$?</p>
</blockquote>
<p>I know that the order of $G$ must be prime, but I'm not sure how to go about proving this from the problem statement. </p>
<p>Any hints on where to start?</p>
| AG. | 80,733 | <p>Since every nonidentity element of $G$ is a generator, $G$ is generated by some element $x$; hence $G$ is cyclic. If the order of $G$ is composite, say $|G|=p r$, where $p, r >1$, then there exists a nonidentity element $x^p$ that does not generate $G$, a contradiction. Thus, the order of $G$ must be a prime. </p>
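<p>A computational illustration of the underlying fact (my addition): in $\mathbb{Z}_n$ an element $k$ generates iff $\gcd(k,n)=1$, so "every non-identity element generates" holds exactly when $n$ is prime:</p>

```python
from math import gcd

def every_nonidentity_generates(n):
    # In Z_n, k generates the whole group iff gcd(k, n) == 1.
    return all(gcd(k, n) == 1 for k in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 200):
    assert every_nonidentity_generates(n) == is_prime(n)
print("checked n = 2..199")
```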
|
1,581,756 | <p>Find the general solution of $$z(px-qy)=y^2-x^2$$ Let $F(x,y,z,p,q)=z(px-qy)+x^2-y^2$. This gives $$F_x=zp+2x$$
$$F_y=-zq-2y$$
$$F_z=px-qy$$
$$F_p=zx$$
$$F_q=-zy$$
By Charpit's method we have $$\frac{dx}{zx}=\frac{dy}{-zy}=\frac{dz}{z(px-qy)}=\frac{dp}{-zp-2x-p^2x+pqy}=\frac{dq}{zq+2y-pxy+q^2y}$$</p>
<p>By equating the first two I am getting $xy=k$.</p>
<p>But I am not able to solve the last two. </p>
<p>Thanks for the help!!</p>
| MathIsNice1729 | 274,536 | <p>For starters, the $\tau$ function is multiplicative. That means, $\tau(mn)=\tau(m)\tau(n)$ where $(m, n) =1$. But, as you seek intuition, I won't use it in the discussion. Now, your claim is true for $4$. The numbers which have $2$ divisors are primes $p$. Now, if we multiply it by something so that the product has more than $4$ divisors, then the multiplier must be of the form $p_1 p_2$ where at most one can be $p$ itself. But, we can always represent it as a multiple of $p_1 p_2$ which has exactly $4$ divisors. Now, a similar argument can be stated about integers with exactly $3$ divisors, which must be of the form $p^2$. So, every integer that has more than $4$ divisors can be expressed as a multiple of an integer with exactly $4$ divisors. But, no similar argument can be presented for numbers with higher divisor counts. Hence your claim is indeed true. </p>
<p>For the second case, just use the algebraic form of the $\tau$ function. Although, some simple combinatorics can give you the intuition. </p>
|
3,559,167 | <p>I know that it is a bounded below set and the infimum is 4, but I'm unsure of going about how to prove that it is indeed bounded. Any help would be greatly appreciated! </p>
| Lee Mosher | 26,501 | <p>Hint #1: If you want to investigate intuitively whether or not it is bounded, try evaluating <span class="math-container">$x + \frac{4}{x}$</span> for values of <span class="math-container">$x$</span> that get larger and larger, such as <span class="math-container">$x=10$</span>, <span class="math-container">$x=100$</span>, <span class="math-container">$x=1000$</span>, <span class="math-container">$x=100000$</span>, and so on. </p>
<p>Hint #2: Perhaps, if someone tells you "This number <span class="math-container">$B>0$</span> is an upper bound", you can figure out rigorously whether or not they are telling the truth by attempting to solve the inequality <span class="math-container">$x + \frac{4}{x} > B$</span>.</p>
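<p>Both hints are easy to carry out numerically (a sketch of my own): the values grow without bound, and for any claimed bound $B>0$ the inequality $x+4/x>B$ has solutions, e.g. $x=B$ itself, since $B+4/B>B$:</p>

```python
def f(x):
    return x + 4.0 / x

# Hint #1: the values grow without bound
for x in [10, 100, 1000, 100000]:
    print(x, f(x))

# Hint #2: no B > 0 is an upper bound, because f(B) = B + 4/B > B
for B in [1.0, 10.0, 1e3, 1e6]:
    assert f(B) > B
```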
|
<p>When I looked at my notes, I realized something I had not noticed before. It concerns a modular arithmetic
question.</p>
<p>The question is <span class="math-container">${\sqrt 2} \pmod7$</span></p>
<p>It is a very trivial question. The solution is: if <span class="math-container">$x \equiv {\sqrt 2} \pmod7$</span>, then <span class="math-container">$x^{2} \equiv ({\sqrt 2})^{2} \pmod7$</span></p>
<p><span class="math-container">$\therefore x^{2} \equiv 2 \pmod7$</span> and <span class="math-container">$x=+3,-3,+4,-4$</span></p>
<p>However, there is something I am stuck on. How can we work with <span class="math-container">${\sqrt 2}$</span>, given the definition of modular arithmetic? It says that</p>
<p><span class="math-container">$a \equiv b \pmod m$</span> where <span class="math-container">$a,b$</span> are integers and <span class="math-container">$m$</span> is a positive integer. I think that <span class="math-container">${\sqrt 2}$</span> contradicts the definition of modular arithmetic because it is not an integer.</p>
<p>Can you enlighten me? What am I missing?</p>
<p>NOTE: Someone might suggest that when you raise both sides to a power, <span class="math-container">${\sqrt 2}$</span> turns out to be an integer.</p>
<p>My answer to this question: Yes, it turns out to be an integer, but in order to raise <span class="math-container">${\sqrt 2}$</span> to a power, it must be an integer, because the definition says that <span class="math-container">$a^{e} \equiv b^{e} \pmod m$</span> where <span class="math-container">$a,b$</span> are integers and <span class="math-container">$m,e$</span> are positive integers.</p>
| YJT | 731,237 | <p><span class="math-container">$\sqrt{2}$</span> is defined to be "a number that when you multiply it by itself, gives you <span class="math-container">$2$</span>". In each field, the definition of "multiply" might be different and hence this number can be different (or it might not exist). In your example, <span class="math-container">$\sqrt{2}$</span> is a<span class="math-container">$^*$</span> number in <span class="math-container">$Z_7$</span>, not to be confused with <span class="math-container">$1.41\ldots$</span>.</p>
<p><strong>Edit:</strong> <span class="math-container">$^*$</span>Actually, there are two such numbers, and "the square root of..." is not well defined here (as it is in <span class="math-container">$\mathbb{R}$</span>) to be the positive number one.</p>
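<p>A brute-force check (my addition) shows exactly which elements of $\mathbb{Z}_7$ square to $2$:</p>

```python
roots = [x for x in range(7) if (x * x) % 7 == 2]
print(roots)   # [3, 4]  -- note 4 ≡ -3 (mod 7), so these are the two square roots of 2
```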
|
697,336 | <p>Let $ABCD$ be a trapezoid, such that $AB$ is parallel to $CD$. Through $O$, the intersection point of the diagonals $AC$ and $BD$ consider a parallel line to the bases. This line meets $AD$ at $M$ and $BC$ at $N$. </p>
<p>Prove that $OM=ON$ and: $$\frac{2}{MN}=\frac1{AB}+\frac1{CD}$$</p>
| kmitov | 84,067 | <p><img src="https://i.stack.imgur.com/iLNJo.png" alt="enter image description here"></p>
<p>The picture above supports the solution.</p>
<p>On the other hand, it is possible to proceed another way.</p>
<p>Triangles AMO and ACD are similar. Then $\frac{MO}{DC}=\frac{AO}{AC}$</p>
<p>Triangles BNO and BCD are similar. Then $\frac{NO}{DC}=\frac{BO}{BD}$</p>
<p>Triangles DOC and AOB are similar. Then $\frac{DO}{OB}=\frac{OC}{AO}$</p>
<p>From the last equality it follows that $\frac{DB}{OB}=\frac{AC}{AO}$</p>
<p>Then $\frac{MO}{DC}=\frac{AO}{AC}=\frac{OB}{DB}=\frac{NO}{DC}$, so $MO=NO$.</p>
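<p>A coordinate sanity check of both claims (my addition): pick a concrete trapezoid, intersect the diagonals, draw the horizontal line through $O$, and verify $OM=ON$ and $2/MN=1/AB+1/CD$:</p>

```python
from math import isclose

# Trapezoid with AB parallel to CD: A=(0,0), B=(6,0), D=(1,3), C=(4,3)  (AB=6, CD=3)
A, B, D, C = (0.0, 0.0), (6.0, 0.0), (1.0, 3.0), (4.0, 3.0)

def line_intersect(P, Q, R, S):
    # intersection point of the lines PQ and RS
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = P, Q, R, S
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

O = line_intersect(A, C, B, D)                    # intersection of the diagonals
y = O[1]
M = line_intersect(A, D, (0.0, y), (1.0, y))      # horizontal through O meets AD at M
N = line_intersect(B, C, (0.0, y), (1.0, y))      # ... and meets BC at N

OM = abs(O[0] - M[0]); ON = abs(N[0] - O[0]); MN = abs(N[0] - M[0])
AB_len, CD_len = 6.0, 3.0
assert isclose(OM, ON)
assert isclose(2 / MN, 1 / AB_len + 1 / CD_len)
print(OM, ON, MN)   # MN is the harmonic mean of 6 and 3
```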
|
418,748 | <p>I tried to calculate, but couldn't get out of this:
$$\lim_{x\to1}\frac{x^2+5}{x^2 (\sqrt{x^2 +3}+2)-\sqrt{x^2 +3}}$$</p>
<p>then multiply by the conjugate.</p>
<p>$$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ </p>
<p>Thanks!</p>
| colormegone | 71,645 | <p>This limit problem seems to cooperate <em>rather</em> nicely with the "conjugate factor" method. Multiply the numerator and denominator by the "conjugate" of the numerator to obtain</p>
<p>$$\lim_{x \rightarrow 1} \ \frac{\sqrt{x^2+3}-2}{x^2 - 1} \ \cdot \frac{\sqrt{x^2+3}+2}{\sqrt{x^2+3}+2} $$</p>
<p>$$ = \ \lim_{x \rightarrow 1} \frac{(x^2+3) \ - \ 4 }{(x^2 - 1) \ \cdot \ (\sqrt{x^2+3}+2) } = \ \lim_{x \rightarrow 1} \frac{x^2 \ - \ 1 }{(x^2 - 1) \ \cdot \ (\sqrt{x^2+3}+2) } $$</p>
<p>$$= \ \lim_{x \rightarrow 1} \frac{ 1 }{ \sqrt{x^2+3}+2 } \ = \ \frac{1}{4} \ . $$</p>
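<p>A numeric spot check (my addition) agrees with the value $1/4$:</p>

```python
from math import sqrt

def f(x):
    return (sqrt(x * x + 3) - 2) / (x * x - 1)

for x in [1 + 1e-4, 1 - 1e-4, 1 + 1e-7]:
    print(x, f(x))   # all ≈ 0.25
```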
|
<p>A quasigroup is a pair $(Q,/)$, where $/$ is a binary operation on $Q$, such that (1) for each $a,b\in Q$ there exist unique solutions to the equations
$a/x=b$ and $y/a=b$.</p>
<p>Now I want to extract a class of quasigroups that captures characteristics from $(Q_+,/)$, where $Q_+$ is the set of positive rational numbers and $/$ is division. So far I have chosen the three properties below and want to know if they are independent or if any of them can be derived from the other two plus (1):</p>
<ol>
<li>$a/(b/c)=c/(b/a)$, for all $a,b,c\in Q$</li>
<li>$(a/b)/c=(a/c)/b$, for all $a,b,c\in Q$</li>
<li>$(a/b)/(c/d)=(d/c)/(b/a)$, for all $a,b,c,d\in Q$</li>
</ol>
| Sean English | 220,739 | <p>3 comes from the first two.</p>
<p>$$(a/b)/(c/d)\\=d/(c/(a/b))\\=d/(b/(a/c))\\=(a/c)/(b/d)\\=(a/(b/d))/c\\=(d/(b/a))/c\\=(d/c)/(b/a)$$</p>
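<p>Since the motivating model is $(\mathbb{Q}_+,/)$ with ordinary division, here is a randomized exact-arithmetic check (my addition) that division satisfies laws 1–3:</p>

```python
from fractions import Fraction
from random import randint, seed

def div(a, b):
    return a / b   # exact division of Fractions

seed(0)
for _ in range(1000):
    a, b, c, d = (Fraction(randint(1, 50), randint(1, 50)) for _ in range(4))
    assert div(a, div(b, c)) == div(c, div(b, a))                   # law 1
    assert div(div(a, b), c) == div(div(a, c), b)                   # law 2
    assert div(div(a, b), div(c, d)) == div(div(d, c), div(b, a))   # law 3
print("all identities hold")
```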
|
1,689,923 | <p>I have a sequence $a_{n} = \binom{2n}{n}$ and I need to check whether this sequence converges to a limit without finding the limit itself. Now I tried to calculate $a_{n+1}$ but it doesn't get me anywhere. I think I can show somehow that $a_{n}$ is always increasing and that it has no upper bound, but I'm not sure if that's the right way</p>
| Christian Blatter | 1,303 | <p>For each $n$-set $A\subset[2n]$ we can form two different $(n+1)$-sets $$A':=A\cup\{2n+1\},\ A'':=A\cup\{2n+2\}\quad\subset [2(n+1)]\ .$$This implies ${2(n+1)\choose n+1}\geq 2{2n\choose n}$, hence $a_n\to\infty$ $(n\to\infty)$.</p>
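<p>The doubling inequality from this argument is easy to confirm numerically (my addition):</p>

```python
from math import comb

# the combinatorial argument gives C(2(n+1), n+1) >= 2 * C(2n, n)
for n in range(1, 40):
    assert comb(2 * (n + 1), n + 1) >= 2 * comb(2 * n, n)
print(comb(80, 40))   # already astronomically large: the sequence grows at least geometrically
```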
|
1,843,274 | <p>Good evening to everyone. So I have this inequality: $$\frac{\left(1-x\right)}{x^2+x} <0 $$ It becomes $$ \frac{\left(1-x\right)}{x^2+x} <0 \rightarrow \left(1-x\right)\left(x^2+x\right)<0 \rightarrow x^3-x>0 \rightarrow x\left(x^2-1\right)>0 $$ Therefore from the first $ x>0 $, from the second $ x_1 = 1 $ and $x_2=-1$ therefore $ x $ belongs to $(-\infty,-1)$ and $(1,\infty)$ therefore $x$ belongs to $(1,\infty)$. But on the answer sheet it shows that it's defined on $(-1,0)$ and $(1,\infty)$. Where I am wrong? Thanks for any response.</p>
| Zau | 307,565 | <p>$$f(x) = x(x^2-1)= x^3-x$$
$$f'(x)= 3x^2-1$$ so $f$ takes local extreme values at the points $x = \sqrt 3 /3 $ and $x = -\sqrt 3 /3 $.</p>
<p>It is increasing on the intervals ($-\infty$,$-\sqrt 3 /3 $) and ($\sqrt 3 /3 $,$+\infty$), and decreasing on the interval ($-\sqrt 3 /3 $,$\sqrt 3 /3$).</p>
<p>$$f(-\sqrt 3 /3) \geq0$$
$$f(\sqrt 3 /3) \leq 0$$</p>
<p>So it can draw the $f(x)$'s diagram:</p>
<p><a href="https://i.stack.imgur.com/YEcQW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YEcQW.png" alt="The image of function f(x)"></a>
EDIT: your answer is wrong because $x>0$ is not necessary. For instance, at $x = -0.5$ we have $x<0$ and $x^2-1 <0$, yet the product $x(x^2-1)$ is still positive. Therefore you can't assert $x>0$. </p>
|
4,489,675 | <p>When saying that in a small time interval <span class="math-container">$dt$</span>, the velocity has changed by <span class="math-container">$d\vec v$</span>, and so the acceleration <span class="math-container">$\vec a$</span> is <span class="math-container">$d\vec v/dt$</span>, are we not assuming that <span class="math-container">$\vec a$</span> is constant in that small interval <span class="math-container">$dt$</span>, otherwise considering a change in acceleration <span class="math-container">$d\vec a$</span>, the expression should have been <span class="math-container">$\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$</span> (Again assuming rate of change of acceleration is constant). According to that argument, I can say that <span class="math-container">$\vec v$</span> is also constant in that time interval and so <span class="math-container">$\vec a = \vec 0$</span>.</p>
<p>Can someone point out where exactly I have gone wrong. Also this was just an example, my question is general.</p>
| G Cab | 317,234 | <p>Since you are a physicist (and I am an engineer), to avoid becoming entangled with
the centuries-long debate on infinitesimals on one side, as well as with the uncertainty principle on the other (!), let's
go back to the approach that Newton himself took to justify his law of motion:
<a href="https://en.wikipedia.org/wiki/Finite_difference" rel="nofollow noreferrer">finite differences</a>, i.e.
<a href="https://en.wikipedia.org/wiki/Finite_difference#Newton%27s_series" rel="nofollow noreferrer">Newton's series</a></p>
<p>You are measuring the position <span class="math-container">$s$</span> of an object every <span class="math-container">$\tau$</span> seconds and come up with a record
<span class="math-container">$$(s_0, 0\tau),(s_1,1 \tau), \cdots , (s_n, n \tau) , \cdots$$</span>.<br />
You have reason to believe that the phenomenon is repetible and want to model its position vs time behaviour.</p>
<p>You might realize that the ratio
<span class="math-container">$$
v_k = \frac{{\Delta s_k }}{{\Delta t_k }} = \frac{{s_{k + 1} - s_k }}{\tau }
$$</span>
is not constant, and so a <em>linear</em> model <span class="math-container">$s_n = s_0 + v n \tau$</span> is not satisfactory.</p>
<p>But then, instead you might realize that the difference of second degree
<span class="math-container">$$
a_k = \frac{{\Delta v_k }}{\tau }
= \frac{{\Delta ^2 s_k }}{{\tau ^2 }} = \frac{{s_{k + 2} - 2s_{k + 1} + s_k }}{{\tau ^2 }}
$$</span>
is quite constant, which means that the Newton series truncated at the second degree
is a quite satisfactory model.<br />
But such a Newton series is just the polynomial of second degree that interpolates the measured points.<br />
And since it "interpolates", it gives a prediction also for smaller <span class="math-container">$\tau$</span>'s that you can verify
if you have a clock sensitive enough. Mathematically it gives you a law that is continuous in time, and thus has<br />
a continuous second derivative.</p>
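<p>A small Python sketch (my own; the sampled trajectory is an assumed example) of the differencing described above: for $s(t)=s_0+v_0t+\tfrac12 a t^2$ sampled at spacing $\tau$, the first differences vary, but the second differences $\Delta^2 s_k/\tau^2$ come out as the constant $a$:</p>

```python
tau = 0.1
s0, v0, a = 2.0, 3.0, -9.8                     # assumed sample trajectory
s = [s0 + v0 * (k * tau) + 0.5 * a * (k * tau) ** 2 for k in range(10)]

# first differences: velocity estimates
v = [(s[k + 1] - s[k]) / tau for k in range(len(s) - 1)]

# second differences: acceleration estimates
acc = [(s[k + 2] - 2 * s[k + 1] + s[k]) / tau ** 2 for k in range(len(s) - 2)]

print(v[:3])    # not constant
print(acc[:3])  # all ≈ -9.8
```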
|
2,135,717 | <p>Let $G$ be an Abelian group of order $mn$ where $\gcd(m,n)=1$. </p>
<p>Assume that $G$ contains an element of $a$ of order $m$ and an element $b$ of order $n$. </p>
<p>Prove $G$ is cyclic with generator $ab$.</p>
<hr>
<p>The idea is that $(ab)^k$ for $k \in [0, \dots , mn-1]$ will make distinct elements but do not know how to argue it. </p>
<p>Could I say something like $<a>=A$, $<b>=B$, somehow $AB=\{ ab : a \in A , b \in B \}$ and that has order $|A||B|=mn$?</p>
<p>Don't know if it's the same exact or similar to <a href="https://math.stackexchange.com/questions/1870217/finite-group-of-order-mn-with-m-n-coprime">Finite group of order $mn$ with $m,n$ coprime</a>.</p>
| Bernard | 202,857 | <p>Suppose $(ab)^k=e$. This means $a^k=b^{-k}\in\langle a\rangle\cap\langle b\rangle$. But since $m\wedge n=1$,$\;\langle a\rangle\cap\langle b\rangle=\{e\}$ by <em>Lagrange's theorem</em>. Thus $a^k=e$ and $b^k=e$, which implies $k $ is a common multiple of $m$ and $n$, i.e. a multiple of $mn$ since $m\wedge n=1$.</p>
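<p>A concrete instance (my addition): in $\mathbb{Z}_{12}$, written additively, $a=4$ has order $3$, $b=3$ has order $4$, and $a+b=7$ indeed has order $12$, i.e. generates:</p>

```python
def order(g, n):
    # order of g in the additive group Z_n
    k, x = 1, g % n
    while x != 0:
        x = (x + g) % n
        k += 1
    return k

n, a, b = 12, 4, 3
assert order(a, n) == 3 and order(b, n) == 4
print(order((a + b) % n, n))   # 12
```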
|
<p><span class="math-container">$X, Y$</span> are sets and <span class="math-container">$f : X → Y$</span> a function. Show
the equivalence of the following statements:</p>
<p>(i) <span class="math-container">$f$</span> is injective</p>
<p>(ii) <span class="math-container">$f^{-1}\!\bigl(f(A) \bigr)=A \quad \text{for all}~ A \subset X$</span></p>
| Dr. Mathva | 588,272 | <p><strong>Hint</strong> <span class="math-container">$\;$</span> Evaluate the expected value <span class="math-container">$E$</span> of the game. We know the game is fair if <span class="math-container">$E=$</span>"Money Paid for entering the game".</p>
<blockquote class="spoiler">
<p> <span class="math-container">$$E=\frac{1}{2}\cdot 0+\frac{1}{4}\cdot 1+\frac{1}{8}\cdot 2+\frac{1}{16}\cdot 4+\ldots=\sum^\infty_{i=0}\frac{1}{4}=\infty$$</span></p>
</blockquote>
|
<p>I need your help with a lifelong problem I have always had. There are two things in life I hate most, 1) weddings and 2) long division. For the life of me I hate division. I am horrible at it. I can multiply large numbers in my head but I can’t divide if my life depended on it. When the division operation is 2 digits over 1 or 2 digits, it’s not a major issue. But when I divide anything 3+ over a 2 digit number, it becomes an issue.</p>
<p>I am trying to find a way to divide that appeals to my abilities to multiply. I am trying to find another way to approach a division problem than the typical long division approach.</p>
<p>For example: 876/2. Simple enough, Ans: 438. This isn’t a problem; finding the new approach is.
I am trying to break this division number into single digits.
$$8=2(4)+0$$
$$7=2(3)+1$$
$$6=2(3)+0$$</p>
<p>My problem is how does this approach get me to 438? The numbers in the parenthesis get me to 433 with a summed remainder of 1. If I carry the one and put it on the first digit of my number, 3, I get 434.</p>
<p>In typical long division, the remainder is carried over. So
$$8=2(4)+0$$
$$7=2(3)+1 $$(the remainder 1 is carried down)
$$16=2(8)+0$$</p>
<p>Is there a way to do long division by doing the division individually on each number and then summing the remainder?</p>
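<p>The carried-remainder scheme sketched above can be phrased as a digit-by-digit loop, where each remainder is multiplied by 10 and carried into the next digit rather than summed at the end. A Python sketch (my own illustration):</p>

```python
def digit_divide(n, d):
    """Long division of n by d, one decimal digit at a time."""
    q, rem = 0, 0
    for digit in map(int, str(n)):
        cur = rem * 10 + digit      # carry the remainder leftward
        q = q * 10 + cur // d       # cur < 10*d always, so cur // d is one digit
        rem = cur % d
    return q, rem

print(digit_divide(876, 2))     # (438, 0)
print(digit_divide(12345, 17))  # (726, 3)
```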
| RDizzl3 | 77,680 | <p>Yes, that operation is legal. It just comes from multiplying $\frac{(4-x^2)}{5x}$ on both sides.</p>
|
2,970,787 | <blockquote>
<p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p>
</blockquote>
<p>If I divide whole expression by maximum power i.e. <span class="math-container">$x^2$</span> I get,<span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span>
Numerator tends to <span class="math-container">$1$</span> ,Denominator tends to <span class="math-container">$0$</span></p>
<p>So I get the answer as <span class="math-container">$+\infty$</span></p>
<p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span></p>
<p>What am I missing here? "Can someone give me the precise steps that I should write in such a case." Thank you very much!</p>
<p>NOTE: I cannot use L'hopital for finding this limit.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>You are playing with dividing by <span class="math-container">$0$</span> </p>
<p>To avoid that, notice <span class="math-container">$$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1} = \lim_{x\to -\infty} \frac{x^2-2x +1}{x+1}$$</span></p>
<p><span class="math-container">$$= \lim_{x\to -\infty} x-3+\frac {4}{x+1} = -\infty $$</span></p>
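<p>A short numeric table (my addition) shows the divergence and that the rewritten form agrees with the original:</p>

```python
def f(x):
    return (x - 1) ** 2 / (x + 1)

for x in [-10.0, -1e3, -1e6]:
    print(x, f(x), x - 3 + 4 / (x + 1))   # the two expressions agree up to rounding

assert f(-1e6) < -999000   # heading to minus infinity
```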
|
3,078,707 | <p>The above question is the equation <span class="math-container">$(2.4)$</span> of the following paper:</p>
<p><a href="http://www.jmlr.org/papers/volume6/tsuda05a/tsuda05a.pdf" rel="nofollow noreferrer">MATRIX EXPONENTIATED GRADIENT UPDATES</a>.</p>
<p>Let <span class="math-container">$M$</span> and <span class="math-container">$N$</span> be two <span class="math-container">$n \times n$</span> positive definite matrices where <span class="math-container">$M=U\Lambda U^{\top}$</span>, <span class="math-container">$N=\tilde{U}\tilde{\Lambda} \tilde{U}^{\top}
$</span> and <span class="math-container">$(\lambda_i,v_i)$</span> are eigenpairs of <span class="math-container">$M$</span>, likewise for <span class="math-container">$N$</span>.</p>
<p>How to show the following
<span class="math-container">$$\text{Tr}(M\log N)=\sum_{i,j}\lambda_i\log(\tilde{\lambda_j})(u_i^{\top}\tilde{u}_j)^2$$</span></p>
<p>First I do not know what <span class="math-container">$i,j$</span> mean in summation, and how we have it using two summations. Second how to get that.</p>
<p>My try:</p>
<p><span class="math-container">\begin{align}
\text{Tr}(M\log N) &=
\text{Tr}(U\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}) \\
& = \text{Tr}(\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}U)
\end{align}</span></p>
<p>How can I proceed using matrix calculus to get the result not by expanding? what is the hidden trick?</p>
| Intelligenti pauca | 255,730 | <p>If I understand your question properly, we know a point <span class="math-container">$P=(-c,d)$</span> on the ellipse, with its tangent line, we also know vertex <span class="math-container">$A=(0,0)$</span> of the ellipse and that its major axis lies on the <span class="math-container">$x$</span>-axis, but we don't know the position of the other vertex <span class="math-container">$(-2a,0)$</span>, nor of the ellipse center <span class="math-container">$(-a,0)$</span>. </p>
<p>The equation of the ellipse is:
<span class="math-container">$$
{(x+a)^2\over a^2}+{y^2\over b^2}=1,
$$</span>
where semiaxes <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are to be found. Plugging in the coordinates of point <span class="math-container">$P=(-c,d)$</span> gives a first equation:
<span class="math-container">$$
{(a-c)^2\over a^2}+{d^2\over b^2}=1.
$$</span>
Another equation can be deduced by equating <span class="math-container">$dy/dx$</span> at <span class="math-container">$P$</span> to the known slope <span class="math-container">$m$</span> of the tangent; implicit differentiation gives <span class="math-container">$y'=-\frac{b^2}{a^2}\,\frac{x+a}{y}$</span>, so
<span class="math-container">$$
m=-{b^2\over a^2}{a-c\over d}.
$$</span>
Combining those two equations we can eliminate <span class="math-container">$b$</span> and solve for <span class="math-container">$a$</span>:
<span class="math-container">$$
a={c\,(d+mc)\over d+2mc}.
$$</span></p>
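<p>A quick numeric check (my addition) on a known ellipse, $\frac{(x+2)^2}{4}+y^2=1$ (so the true $a=2$, $b=1$): taking the point with $x=-1$, computing the tangent slope from implicit differentiation of $\frac{(x+a)^2}{a^2}+\frac{y^2}{b^2}=1$, and eliminating $b$ between the point condition and the slope condition recovers the correct semiaxis:</p>

```python
from math import sqrt, isclose

# Known ellipse (x+2)^2/4 + y^2 = 1: vertex at the origin, a = 2, b = 1.
a_true, b_true = 2.0, 1.0
c = 1.0                                                   # take the point with x = -c
d = b_true * sqrt(1 - (a_true - c) ** 2 / a_true ** 2)    # so P = (-c, d) lies on the ellipse

# Tangent slope at P from implicit differentiation: y' = -(b^2/a^2) (x+a)/y
m = -(b_true ** 2 / a_true ** 2) * (a_true - c) / d

# Eliminating b between the point condition and the slope condition gives
# a = c (d + m c) / (d + 2 m c); it should recover a_true.
a_rec = c * (d + m * c) / (d + 2 * m * c)
print(a_rec)   # ≈ 2.0
```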
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| N. S. | 9,176 | <p>Let $a_k...a_0$ be the digits of $n$. Then</p>
<p>$$n=10^ka_k+\cdots+10a_1+a_0$$
and hence</p>
<p>$$n -\mbox{sum of its digits}=\left( 10^ka_k+\cdots+10a_1+a_0\right)-\left( a_k+\cdots+a_1+a_0\right)\\
=a_k(10^k-1)+\cdots+a_1(10-1)=a_k\cdot 99\ldots9+\cdots+a_1 \cdot 9 \\
=9\left( a_k\cdot 11\ldots1+\cdots+a_2\cdot 11+a_1 \right)$$</p>
<p>As $n -\mbox{sum of its digits}$ is a multiple of nine, it is a multiple of $3$.</p>
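<p>The same argument justifies the recursive test from the question; a short exhaustive check (my addition):</p>

```python
def digit_sum(n):
    return sum(int(ch) for ch in str(n))

def div3_by_digits(n):
    while n >= 10:          # recursively replace n by its digit sum
        n = digit_sum(n)
    return n in (0, 3, 6, 9)

for n in range(0, 5000):
    assert div3_by_digits(n) == (n % 3 == 0)
print(div3_by_digits(1212582439), div3_by_digits(124524))   # False True
```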
|
563,431 | <p>Find the absolute maximum and minimum of $f(x,y)= y^2-2xy+x^3-x$ on the region bounded by the curve $y=x^2$ and the line $y=4$. You must use Lagrange Multipliers to study the function on the curve $y=x^2$.</p>
<p>I'm unsure how to approach this because $y=4$ is given. Is this a trick question?</p>
| Gazi | 131,093 | <p>Let G be any group of order 52. Then by the Sylow theorems it has only one Sylow 13-subgroup and hence it is normal in G; let it be H. Then G/H forms a quotient group of order 4, which implies G/H is cyclic and hence G is abelian.</p>
|
3,159,199 | <p>I did a question to find relative extrema for the following function:
<span class="math-container">$f(x)=x^2$</span> on <span class="math-container">$[−2,2].$</span></p>
<p>The answer said that there is no relative maxima for this function because relative extrema cannot occur at the end points of a domain.
Why is this so ?
Thank you.</p>
| st.math | 645,735 | <p>There is a relative minimum at <span class="math-container">$0$</span>. But there are also relative <em>maxima</em> at <span class="math-container">$2$</span> and at <span class="math-container">$-2$</span>.</p>
<p>The definition for a relative maximum at a point <span class="math-container">$x_0$</span> is that <span class="math-container">$f(x)\leq f(x_0)$</span> for all <span class="math-container">$x$</span> in a sufficiently small neighborhood of <span class="math-container">$x_0$</span> (intersected with the function's domain).</p>
<p>For <span class="math-container">$x_0=2$</span>, for example, it is true that <span class="math-container">$f(x)\leq f(2)$</span> for all <span class="math-container">$x$</span> in the sufficiently small interval <span class="math-container">$(1.5,2.5)\cap[-2,2]=(1.5,2]$</span> around the point <span class="math-container">$x_0=2$</span>. So, it is a relative maximum.</p>
<p>There is no reason to believe that relative extrema cannot occur at the boundaries of intervals.</p>
|
2,987,994 | <p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p>
<p>Does anyone know of any good ones to tackle?</p>
| chroma | 1,006,726 | <p>I know I am kind of late, but here are a few
<span class="math-container">$$\int_{-\infty}^\infty\frac{\ln{(x^2+1)}}{x^4+x^2+1}\,dx$$</span>
<span class="math-container">$$\int_{-\infty}^\infty\frac{\ln{(x^4+x^2+1)}}{x^4+1}\,dx$$</span>
<span class="math-container">$$\int_{-\infty}^\infty\left(x^2+\frac{1}{x^2}\right)e^{-\left(x^2+\frac{1}{x^2}\right)}\,dx$$</span>
<span class="math-container">$$\int_{-\infty}^\infty\frac{x^2+2\cos{x}-2}{x^4}\,dx$$</span>
<span class="math-container">$$\int_{-\infty}^\infty\frac{x^2+2\cos{x}-2}{x^4(x^2+1)}\,dx$$</span>
<span class="math-container">$$\int_{-\infty}^\infty\left(\frac{1-\cos{x}}{x^2}\right)^2\,dx$$</span>
<span class="math-container">$$\int_0^\infty\frac{\ln{(x+\sqrt{x^2+1})}}
{(x+\sqrt{x^2+1})^2}\,dx$$</span>
<span class="math-container">$$\int_0^{\frac{\pi}{2}}x\sqrt{\tan{x}}\space dx$$</span></p>
|
203,967 | <p>I have a tree as below. </p>
<pre><code>Graph[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5}, VertexLabels -> "Name"]
</code></pre>
<p><a href="https://i.stack.imgur.com/cs0Ztm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cs0Ztm.jpg" alt="enter image description here"></a></p>
<p>However, I want the drawing to put parent nodes at a higher position than their respective children nodes (i.e. 7 should be drawn higher than 2, instead of lower than 2). What can I do to achieve this (I don't want to specify the vertex coordinates manually)? Thanks!</p>
| kglr | 125 | <p>You can use</p>
<ol>
<li><code>Graph</code> with the option <code>GraphLayout -> "LayeredDigraphEmbedding"</code>, or </li>
<li><a href="https://reference.wolfram.com/language/ref/LayeredGraphPlot.html" rel="nofollow noreferrer"><code>LayeredGraphPlot</code></a>, or</li>
<li><a href="https://reference.wolfram.com/language/ref/TreePlot.html" rel="nofollow noreferrer"><code>TreePlot</code></a> specifying the root vertex in the third argument.</li>
</ol>
<h3> </h3>
<pre><code> Graph[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5},
VertexLabels -> "Name",
GraphLayout->"LayeredDigraphEmbedding"]
</code></pre>
<p><a href="https://i.stack.imgur.com/flYmK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/flYmK.png" alt="enter image description here"></a></p>
<pre><code>LayeredGraphPlot[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5},
VertexLabels -> "Name"]
</code></pre>
<blockquote>
<p>same picture</p>
</blockquote>
<pre><code>TreePlot[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5}, Automatic, 7,
VertexLabels-> "Name", DirectedEdges -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/IsupD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsupD.png" alt="enter image description here"></a></p>
<p><strong>Note:</strong> <code>LayeredGraphPlot</code> and <code>TreePlot</code> produce <code>Graphics</code> objects.</p>
|
203,967 | <p>I have a tree as below. </p>
<pre><code>Graph[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5}, VertexLabels -> "Name"]
</code></pre>
<p><a href="https://i.stack.imgur.com/cs0Ztm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cs0Ztm.jpg" alt="enter image description here"></a></p>
<p>However, I want the drawing to put parent nodes at a higher position than their respective children nodes (i.e. 7 should be drawn higher than 2, instead of lower than 2). What can I do to achieve this (I don't want to specify the vertex coordinates manually)? Thanks!</p>
| Szabolcs | 12 | <pre><code>Graph[{7 -> 2, 2 -> 3, 3 -> 4, 3 -> 5}, VertexLabels -> "Name",
GraphLayout -> {"LayeredEmbedding", "RootVertex" -> 7}]
</code></pre>
<p><a href="https://i.stack.imgur.com/t4RL2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t4RL2.png" alt="enter image description here"></a></p>
|
597,899 | <p>Let $\epsilon > 0$ be given. Suppose we have that $$a - \epsilon < F(x) < a + \epsilon$$</p>
<p>Does it follow that $a - \epsilon < F(x) \leq a $ ??</p>
| Haha | 94,689 | <p>In fact the symbolization $a\leq b$ means that </p>
<p>"$a$ is less than <strong>or</strong> equal to $b$." $(1)$ </p>
<p>(A proposition with <strong>or</strong>, say $p = p_1$ or $p_2$, is true iff at least one of the $p_i$'s is true.)</p>
<p>Now in $(1)$ the two individual propositions cannot be true at the same time. This doesn't matter, though, because we only need at least one of them to be true. So if we have that $a<b$ (strictly), then it is correct to say that $a\leq b$, because the proposition $(1)$ is true.</p>
<p>In this case, though, this is a somewhat loose use of the symbol, so try not to write $a\leq b$ when you know that $a<b$.</p>
|
<p>I'm reading James Anderson's <em>Automata Theory with Modern Applications</em>. Here:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/sFWNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sFWNh.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/k9zne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k9zne.png" alt="enter image description here" /></a></p>
</blockquote>
<p>And I tried to prove the following theorem (for prefix codes).</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/S5BgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S5BgC.png" alt="enter image description here" /></a></p>
</blockquote>
<p>I tried in the following way: Suppose <span class="math-container">$C$</span> is a prefix code which is not uniquely decipherable, that is there is a string <span class="math-container">$u \in C$</span> with two different expressions <span class="math-container">$u=ab=cd$</span>. But <span class="math-container">$u=vw$</span> and hence <span class="math-container">$vw=ab=cd$</span> where <span class="math-container">$w= \lambda$</span> and <span class="math-container">$\lambda$</span> is the empty word, therefore <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span> which contradicts your hypothesis that <span class="math-container">$u$</span> is not uniquely decipherable.</p>
<p>Is this correct? I am confused because I paired <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span> and I'm not sure if that is valid.</p>
| copper.hat | 27,978 | <p>It might be easier to do an inductive proof on the number of code symbols in two words.</p>
<p>Suppose <span class="math-container">$C$</span> is a prefix code for <span class="math-container">$S$</span>.</p>
<p>I use <span class="math-container">$|s|$</span> for the length (in terms of <span class="math-container">$\Sigma$</span>) of <span class="math-container">$s$</span>.</p>
<p>Choose <span class="math-container">$s \in S$</span> and write <span class="math-container">$s=c_1 t_1 = c_2 t_2$</span> where <span class="math-container">$c_k \in C$</span> and <span class="math-container">$t_k \in C^*$</span>.
Suppose <span class="math-container">$|c_1| \le |c_2|$</span> and write <span class="math-container">$s= c_1 w t_1'$</span>, where <span class="math-container">$|w| = |c_2|-|c_1|$</span> and <span class="math-container">$t_1 = w t_1'$</span>. Then <span class="math-container">$c_1w = c_2$</span> and hence <span class="math-container">$c_1 = c_2$</span> and <span class="math-container">$w = \lambda$</span>.
Now repeat with <span class="math-container">$t_1, t_2$</span>.</p>
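<p>As a quick sanity check of unique decipherability (my own supplement, not part of the proof), a brute-force script can count the factorizations of concatenations over a small example prefix code:</p>

```python
from itertools import product

def factorizations(s, code):
    """Count the ways s can be written as a concatenation of codewords."""
    if s == "":
        return 1
    return sum(factorizations(s[len(c):], code)
               for c in code if s.startswith(c))

# A prefix code over {0,1}: no codeword is a prefix of another.
code = ["0", "10", "110", "111"]

# Every concatenation of up to 4 codewords decodes in exactly one way.
for words in product(code, repeat=4):
    assert factorizations("".join(words), code) == 1
```

Swapping in a non-prefix code such as `["0", "01", "10"]` makes the assertion fail, since `"010"` then has two parses.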
|
3,809,699 | <p>When we define a norm on a vector space, we consider only real or complex vector spaces. But can we generalize the norm to a vector space over an arbitrary field? I think this can be done, but we have to define a suitable modulus function on the ground field so that the property $\|cx\|=|c|\|x\|$ is meaningful, i.e. so that $|c|$ makes sense. Is it possible to define a norm in this general situation? And why do we only bother about real or complex vector spaces?</p>
| Physical Mathematics | 592,278 | <p>Let <span class="math-container">$F$</span> be a field and <span class="math-container">$V$</span> a vector space over <span class="math-container">$F$</span>. Define <span class="math-container">$|\cdot |: F \to \mathbb{R}$</span> by <span class="math-container">$|f| = 1$</span> for <span class="math-container">$f \ne 0 $</span> and <span class="math-container">$|0| = 0$</span>. Define <span class="math-container">$\| \cdot \| : V \to \mathbb{R}$</span> by <span class="math-container">$\|v \| = 1$</span> for <span class="math-container">$v \ne 0$</span> and <span class="math-container">$\| 0\| = 0$</span>.</p>
<p>Then <span class="math-container">$\|v\| \geq 0$</span>, <span class="math-container">$\|v\| = 0\iff v =0 $</span>, if <span class="math-container">$\lambda \ne 0$</span> and <span class="math-container">$v \ne 0$</span>, then <span class="math-container">$\lambda v \ne 0 $</span> and <span class="math-container">$\|\lambda v \| = 1 = |\lambda| \|v\|$</span>, otherwise <span class="math-container">$\lambda v = 0$</span> and <span class="math-container">$|\lambda| \|v\| = 0$</span>, so <span class="math-container">$\|\lambda v\| = |\lambda |\|v\|$</span> either way.</p>
<p>Finally as long at least one of <span class="math-container">$v$</span> or <span class="math-container">$w$</span> is nonzero, then <span class="math-container">$\|v + w\| \leq 1 \leq \|v\| + \|w\|$</span>, else <span class="math-container">$v = w = 0$</span>, so <span class="math-container">$\|v+w\| = \|0\| =0 \leq 0 + 0 = \|v\| + \|w\|$</span>.</p>
<p>So we have defined a not-very-interesting norm on an arbitrary vector space over an arbitrary field. Note that we need to define a norm on the field, which should satisfy the positive-definiteness requirement as well as the triangle inequality (which ours does), and perhaps also a multiplicativity requirement: <span class="math-container">$|ab| = |a||b|$</span> (which ours does also).</p>
<p>Finally, as was commented, there are more interesting norms on number fields in number theory. But this shows it is always possible in some sense.</p>
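<p>A small script (my addition; the field <span class="math-container">$\mathbb F_5$</span> and the dimension are arbitrary choices for illustration) can verify the axioms for this trivial norm exhaustively:</p>

```python
import itertools

def abs_F(f):   # the norm on the field: |0| = 0 and |f| = 1 otherwise
    return 0 if f == 0 else 1

def norm(v):    # the norm on vectors: ||0|| = 0 and ||v|| = 1 otherwise
    return 0 if all(x == 0 for x in v) else 1

p = 5           # work in (F_5)^2
vectors = list(itertools.product(range(p), repeat=2))
for v in vectors:
    assert (norm(v) == 0) == (v == (0, 0))            # definiteness
    for lam in range(p):
        lv = tuple(lam * x % p for x in v)
        assert norm(lv) == abs_F(lam) * norm(v)       # homogeneity
    for w in vectors:
        vw = tuple((x + y) % p for x, y in zip(v, w))
        assert norm(vw) <= norm(v) + norm(w)          # triangle inequality
```

Homogeneity relies on the ground ring being a field: a nonzero scalar times a nonzero vector stays nonzero.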
|
354,213 | <p>This is a similar question to the one I have posted <a href="https://math.stackexchange.com/questions/354124/given-u-2-5-3-how-to-find-unit-vectorsu-w-s-t-uv-is-maximal-and-u">before</a>. The problem
is as in the title:</p>
<blockquote>
<p>Given $u=(-2,5,3)$ find a unit vector $v$ s.t $|u\times v|$ is
maximal, and then a unit vector $w$ s.t $|(u\times v)\cdot w|$ is
minimal</p>
</blockquote>
<p>Again, I can write out the equations, but there should be an easier
way to find the max/min than solving the equations (since, as I stated
in my previous question, they don't know how to solve for the max/min).</p>
<p>Any help on the problem is greatly appreciated!</p>
| Dimitris | 37,229 | <p>Let $A$ be an open subset of $X\times Y=[0,1]\times [0,1]$. Then you can write it as a countable union: $$A=\bigcup_{i=1}^\infty I_i \times J_i$$
in which $I_i,J_i$ are open intervals of $[0,1]$.
Of course, $I_i\times J_i$ are all measurable subsets of $[0,1]^2$, and therefore their union is measurable (since the measurable subsets of $[0,1]^2$ form a σ-algebra).</p>
<p>Each Borel set in $X\times Y$ is produced by countable unions and intersections of sets of the form of A (above), therefore it is measurable ( again, use the fact that the measurable subsets of $[0,1]^2$ form a σ-Algebra).</p>
<p>Now, <strong>for the second question</strong>, consider a continuous function $f:[0,1]\times[0,1]\longrightarrow \mathbb R$.</p>
<p>Then, we know that it is measurable if and only if for each borel set $B\subset \mathbb R$, it holds that $f^{-1}(B)$ is measurable.</p>
<p>Take a borel set $B\subset\mathbb R$. We know that the inverse image of a borel set through a continuous function is a borel set , therefore $f^{-1}(B)$ is a Borel set, thus it is measurable, QED.</p>
|
78,341 | <p>I believe I read somewhere that residually finite-by-$\mathbb{Z}$ groups are residually finite. That is, if $N$ is residually finite with $G/N\cong \mathbb{Z}$ then $G$ is residually finite.</p>
<p>However, I cannot remember where I read this, and nor can I find another place which says it. I was therefore wondering if someone could confirm whether this is true or not, and if it is give either a proof or a reference for this result? (If not, a counter-example would not go amiss!)</p>
<p>Note that I definitely know it is true if $N$ is f.g. free (this can be found in a paper of G. Baumslag, "Finitely generated cyclic extensions of free groups are residually finite" (Bull. Amer. Math. Soc., <strong>5</strong>, 87-94, 1971)).</p>
| Denis Osin | 10,251 | <p>I think what Yves meant in his comment to Andreas' answer is the result of Mal'cev [A. I. Mal'cev, On homomorphisms onto finite groups, Ivanov. Gos. Ped. Inst. Ucen. Zap., 18 (1958), pp. 49-60, in Russian] stating that any split extension of a finitely generated residually finite group by a residually finite group is residually finite. This is indeed an easy exercise. In particular, this implies that every (f.g. residually finite) - by - Z group is residually finite as every extension by a free group (e.g., by Z) splits. </p>
<p>If either of the conditions "finitely generated" or "split" is relaxed, the result does not hold. The counterexample with infinitely generated kernel is already provided by Yves. For non-split extensions there are even examples of the form $$1\to \mathbb Z/2\mathbb Z \to G\to Q\to 1,$$ where $Q$ is (finitely generated) residually finite while $G$ is not. In particular, being residually finite is not a quasi-isometry invariant.</p>
<p>Such examples can be found among solvable (moreover, center-by-metabelian) groups or even among groups of intermediate growth (see [A. Erschler, Not residually finite groups of intermediate growth, commensurability and non-geometricity, Journal of Algebra, 272 (2004), 154-172].</p>
|
506,397 | <p>I would like to know the condition for a random variable <span class="math-container">$Y$</span> in order to make <span class="math-container">$\mathbb{E}[\max\{X_1+Y,X_2\}] > \mathbb{E}[\max\{X_1, X_2\}]$</span>, where <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are iid.</p>
<p>Any help would be appreciated.</p>
<h3>Comment by OP incorporated by dfeuer</h3>
<p>I tried to use the upper and lower bounds of the highest order statistics for inid and iid random variables to solve the problem, but they are not tight. The brute force might be applying the convolution on the sum term, then the cdf of the highest order statistic for inid random variables.</p>
| Einar Rødland | 37,974 | <p>If you start off by subtracting $\mathrm{E}[X_1]$ from both sides and let $Z=X_2-X_1$, the desired inequality becomes
$$\mathrm{E}[\max(Y,Z)]>\mathrm{E}[\max(0,Z)].$$</p>
<p>For any distribution $U$ with $F_U(t)=\Pr[U\le t]$ the cumulative distribution, we can write
$$
\mathrm{E}[U]=\int_{-\infty}^\infty \chi(t>0)-F_U(t)\,dt
$$
where $\chi(t>0)$ is the indicator function, which is 1 when $t>0$ is true and 0 when it is false.</p>
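<p>(A quick numerical sanity check of this representation, not part of the argument: for $U\sim\mathrm{Uniform}(-1,2)$, whose mean is $1/2$, a midpoint-rule integration of $\chi(t>0)-F_U(t)$ recovers the mean.)</p>

```python
def F_uniform(t, a=-1.0, b=2.0):
    """CDF of the uniform distribution on [a, b]."""
    return min(max((t - a) / (b - a), 0.0), 1.0)

def expectation_via_cdf(F, lo=-10.0, hi=10.0, n=200_000):
    """Midpoint-rule integral of chi(t>0) - F(t) over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t = lo + (i + 0.5) * h
        total += ((1.0 if t > 0 else 0.0) - F(t)) * h
    return total

# E[U] = (a + b)/2 = 1/2 for U ~ Uniform(-1, 2)
assert abs(expectation_via_cdf(F_uniform) - 0.5) < 1e-3
```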
<p>Using $F_{\max(Y,Z)}(t)=F_Y(t)F_Z(t)$, the inequality becomes equivalent to
$$
\int_{-\infty}^\infty \chi(t>0)-F_Y(t)F_Z(t)\,dt
>\int_0^\infty 1-F_Z(t)\,dt
$$
which we can rewrite into
$$
\int_{-\infty}^\infty \Big[\chi(t>0)-F_Y(t)\Big]\cdot F_Z(t)\,dt>0.
$$
I can't quite see any way of transforming this into a simple statistical statement: i.e. like one in terms of expected values or probabilities. However, if we define
$$
H_Z(t)=\int_0^t F_Z(s)\,ds
$$
with the convention that $\int_0^a=-\int_a^0$ to deal with the $a<0$ cases, we can rewrite the integral condition into
$$
\textrm{E}\left[H_Z(Y)\right]=\int_{-\infty}^\infty H_Z(t)\,dF_Y(t)>0
$$
where $dF_Y(t)=f_Y(t)\,dt$, with $f_Y$ the probability density of $Y$.</p>
|
985,103 | <p>The set $\{u_{1},u_{2}\cdots,u_{6}\}$
is a basis for a subspace $\mathcal{M}$ of $\mathbb{F}^{m}$ if and
only if $\{u_{1}+u_{2},u_{2}+u_{3}\cdots,u_{6}+u_{1}\}$
is also a basis for $\mathcal{M}$.
So far I have that the two bases are just rearranged sums of each other, but I don't know where else to go with it.</p>
| marty cohen | 13,079 | <p>If $a_n/b_n \to 0$, then, for any $c > 0$, there is an $N(c)$ such that $|a_n/b_n| < c$ for $n > N(c)$, or $|a_n| < c|b_n|$.</p>
<p>If $|b_n|$ is bounded, there is a $B > 0$ such that $|b_n| < B$.</p>
<p>Therefore $|a_n| < cB$ for $n > N(c)$.</p>
<p>Finally, choose $c = \epsilon/B$. Then $|a_n| < \epsilon$ for $n > N(\epsilon/B)$.</p>
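<p>A numerical illustration (with a made-up bounded sequence $b_n = 2+\sin n$, so $B=3$, and $a_n = b_n/n$) of how the choice $c=\epsilon/B$ forces $|a_n|<\epsilon$:</p>

```python
import math

B = 3.0                              # bound: |b_n| < B for integer n
def b(n): return 2.0 + math.sin(n)   # a bounded sequence
def a(n): return b(n) / n            # then a_n / b_n = 1/n -> 0

eps = 1e-3
c = eps / B                          # the choice made in the proof
N = int(1 / c) + 1                   # for n > N we have |a_n / b_n| = 1/n < c
for n in range(N + 1, N + 1001):
    assert abs(a(n) / b(n)) < c      # the ratio is below c ...
    assert abs(a(n)) < eps           # ... which forces |a_n| < eps
```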
|
3,527,004 | <p>As stated in the title, I want <span class="math-container">$f(x)=\frac{1}{x^2}$</span> to be expanded as a series with powers of <span class="math-container">$(x+2)$</span>. </p>
<p>Let <span class="math-container">$u=x+2$</span>. Then <span class="math-container">$f(x)=\frac{1}{x^2}=\frac{1}{(u-2)^2}$</span></p>
<p>Note that <span class="math-container">$$\int \frac{1}{(u-2)^2}du=\int (u-2)^{-2}du=-\frac{1}{u-2} + C$$</span></p>
<p>Therefore, <span class="math-container">$\frac{d}{du} (-\frac{1}{u-2})= \frac{1}{x^2}$</span> and</p>
<p><span class="math-container">$$\frac{d}{du} (-\frac{1}{u-2})= \frac{d}{du} (-\frac{1}{-2(1-\frac{u}{2})})=\frac{d}{du}(\frac{1}{2} \frac{1}{1-\frac{u}{2}})=\frac{d}{du} \Bigg( \frac{1}{2} \sum_{n=0}^\infty \bigg(\frac{u}{2}\bigg)^n\Bigg)$$</span></p>
<p><span class="math-container">$$= \frac{d}{du} \Bigg(\sum_{n=0}^\infty \frac{u^n}{2^{n+1}}\Bigg)= \frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=$$</span></p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>From this we can conclude that </p>
<p><span class="math-container">$$f(x)=\frac{1}{x^2}=\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>Is this solution correct?</p>
| almagest | 172,006 | <p>Narrowly on your question: <strong>no</strong>, it is not correct. But it is almost correct.</p>
<p>The error is that just before the end you forgot to get the range for <span class="math-container">$n$</span> correct. When you differentiate a term in <span class="math-container">$x^0$</span> you get 0, not a term in <span class="math-container">$x^{-1}$</span>. </p>
<p>As a quite separate point there are other ways of getting a power series for <span class="math-container">$\frac{1}{(u-2)^2}$</span>, but you did not ask for alternative ways.</p>
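<p>With the range corrected (the $n=0$ term vanishes, so the series effectively starts at $n=1$), a quick numerical check at a sample point inside the region of convergence $|x+2|<2$ confirms the expansion:</p>

```python
def partial_sum(x, terms=60):
    """Partial sum of sum_{n>=1} n / 2^(n+1) * (x+2)^(n-1)."""
    u = x + 2
    return sum(n / 2 ** (n + 1) * u ** (n - 1) for n in range(1, terms + 1))

x = -1.5                 # |x + 2| = 0.5 < 2, inside the disc of convergence
assert abs(partial_sum(x) - 1 / x ** 2) < 1e-12   # 1/x^2 = 4/9 here
```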
|
1,694,159 | <p>I am prepping for my mid semester exam, and came across with the following question:</p>
<blockquote>
<p>Find the closed form for the sum $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k}$, using the assumption that $k = 0, 1,...n$ and $n$ can be any natural number.</p>
</blockquote>
<p>So what I have done is to note the fact that $$\binom{n}{j}\binom{j}{k}= \frac{n!}{j!(n-j)!}\frac{j!}{k!(j-k)!}=\frac{n!}{(n-j)!\ k!\ (j-k)!}$$</p>
<p>Then we can write the summation as $$\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}}= \sum_{j=k}^n (-1)^{j+k} \frac{n!}{(n-j)!\ k!\ (j-k)!} = \frac{n!}{k!} \sum_{j=k}^n (-1)^{j+k} \frac{1}{(n-j)!\ (j-k)!} $$</p>
<p>I tried to let $m=j-k$:
$$\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m+2k} \frac{1}{(n-m-k)!\ m!}=\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m} \frac{1}{(n-m-k)!\ m!}$$</p>
<p>But I am not sure what to proceed next. Any help would be highly appreciated!</p>
| T.A.Tarbox | 413,002 | <p>Let the integer $k$ be fixed and let $\mathcal C_k$ denote the linear operator acting upon polynomials in $x$ that returns the value of the coefficient of $x^k$.
Then evidently,
\begin{align}
\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}} &= \sum_{j=k}^n {(-1)^{j}\binom{n}{j}\cdot \mathcal C_k \cdot (1-x)^j} \\
&= \mathcal C_k\cdot\sum_{j=0}^n {(-1)^{j}\cdot (1-x)^j\binom{n}{j}}\\
&= \mathcal C_k\cdot(1-(1-x))^n \\
&= \mathcal C_k\cdot x^n \\
&= \delta_{kn}
\end{align}where $\delta_{kn}$ is the Kronecker delta.<br><br>
Q.E.D.</p>
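<p>A brute-force check of the identity for small $n$ (a quick supplement, not part of the argument):</p>

```python
from math import comb

def S(n, k):
    return sum((-1) ** (j + k) * comb(n, j) * comb(j, k)
               for j in range(k, n + 1))

for n in range(9):
    for k in range(n + 1):
        assert S(n, k) == (1 if k == n else 0)   # the Kronecker delta
```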
|
226,265 | <blockquote>
<p>Suppose <span class="math-container">$(X,d)$</span> is a metric space. Does every open cover of <span class="math-container">$X$</span> have a minimal subcover with respect to inclusion?</p>
</blockquote>
<p>In other words:</p>
<blockquote>
<p>If <span class="math-container">$\mathcal{O}$</span> is an open cover of a metric space <span class="math-container">$X$</span>, then does there exist an open cover <span class="math-container">$\mathcal{U} \subseteq \mathcal{O}$</span> such that, if <span class="math-container">$\mathcal{U}' \subsetneq \mathcal{U}$</span>, then <span class="math-container">$\mathcal{U}'$</span> does <strong>not</strong> cover <span class="math-container">$X$</span> ?</p>
</blockquote>
| Cameron Buie | 28,900 | <p>Take any $x\in X$, and consider the covering by the sets $\{y\in X:d(x,y)<n\}$ for all positive integers $n$. If $X$ is unbounded (with respect to the metric $d$), then this open cover has no minimal subcover.</p>
|
226,265 | <blockquote>
<p>Suppose <span class="math-container">$(X,d)$</span> is a metric space. Does every open cover of <span class="math-container">$X$</span> have a minimal subcover with respect to inclusion?</p>
</blockquote>
<p>In other words:</p>
<blockquote>
<p>If <span class="math-container">$\mathcal{O}$</span> is an open cover of a metric space <span class="math-container">$X$</span>, then does there exist an open cover <span class="math-container">$\mathcal{U} \subseteq \mathcal{O}$</span> such that, if <span class="math-container">$\mathcal{U}' \subsetneq \mathcal{U}$</span>, then <span class="math-container">$\mathcal{U}'$</span> does <strong>not</strong> cover <span class="math-container">$X$</span> ?</p>
</blockquote>
| Thomas Andrews | 7,933 | <p>Let <span class="math-container">$X=\mathbb R$</span> and <span class="math-container">$U_n=(-n,n)$</span>. Then <span class="math-container">$\{U_n\}$</span> covers <span class="math-container">$X$</span> but it doesn't have a minimal sub-cover.</p>
|
1,349 | <p>In this question here the OP asks for hints for a problem rather than a full proof.</p>
<p><a href="https://math.stackexchange.com/questions/14477">Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$</a></p>
<p>Now, while I would like to respect that request, I also feel that questions on this site are not intended just for the OP's benefit. This leads me to the question...</p>
<blockquote>
<p><strong>Question</strong>: Is there any way to use some form of <em>spoiler space</em>, so that it's possible to post the answer for the other readers' benefit, but at the same time hiding it from those who do not want it?</p>
</blockquote>
<p>My attempted "look at the previous version of this post" turned out a disaster. I've seen people use rot13, but that seems like a lot of fuss (and clashes with the mathematics).</p>
<p>On some sites they use white text on white background for spoily material, which, when you select with the mouse, reveals the text. Is that possible?</p>
<hr />
<p>Testing:</p>
<blockquote>
<p>! Spoiler Space</p>
<p>! More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Spoiler Space
More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> What happens if I write a really long sentence. Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Maybe it'll involve some maths like <span class="math-container">$E=mc^2$</span> or exclamation marks <span class="math-container">$n!=n \times (n-1)!$</span>.</p>
</blockquote>
| JDH | 413 | <p>What may be needed is a means by which an answer is visible to everyone except the OP.</p>
|
360,293 | <p>Calculate $\sum_{n=2}^\infty ({n^4+2n^3-3n^2-8n-3\over(n+2)!})$</p>
<p>I thought about maybe breaking the polynomial into two different fractions in order to make the sum more manageable and reduce it to something similar to $\lim_{n\to\infty}(1+{1\over1!}+{1\over2!}+...+{1\over n!})$, but I didn't manage.</p>
| lab bhattacharjee | 33,337 | <p>Express $$n^4+2n^3-3n^2-8n-3=(n+2)(n+1)n(n-1)+B(n+2)(n+1)n+C(n+2)(n+1)+D(n+2)+E\tag{1}$$</p>
<p>So that $$T_n=\frac{n^4+2n^3-3n^2-8n-3}{(n+2)!}=\frac1{(n-2)!}+\frac B{(n-1)!}+\frac C{(n)!}+\frac D{(n+1)!}+\frac E{(n+2)!}$$</p>
<p>Putting $n=-2$ in $(1)$, $E=(-2)^4+2(-2)^3-3(-2)^2-8(-2)-3=1$</p>
<p>Similarly, putting $n=-1,0,1$ we can find $D=0,C=-2,B=0$ .</p>
<p>$$\implies T_n=\frac{n^4+2n^3-3n^2-8n-3}{(n+2)!}=\frac1{(n-2)!}-\frac 2{n!} +\frac 1{(n+2)!}$$</p>
<p>Putting $n=2, T_2=\frac1{0!}-\frac 2{2!} +\frac 1{4!}$</p>
<p>Putting $n=3, T_3=\frac1{1!}-\frac 2{3!} +\frac 1{5!}$</p>
<p>Putting $n=4, T_4=\frac1{2!}-\frac 2{4!} +\frac 1{6!}$</p>
<p>$$\cdots$$</p>
<p>So, the sum will be $$\sum_{0\le r<\infty}\frac1{r!}-2\sum_{2\le s<\infty}\frac1{s!}+\sum_{4\le t<\infty}\frac1{t!}$$</p>
<p>$=\sum_{0\le r<\infty}\frac1{r!}-2\left(\sum_{0\le s<\infty}\frac1{s!}-\frac1{0!}-\frac1{1!}\right)+\sum_{0\le t<\infty}\frac1{t!}-\left(\frac1{0!}+\frac1{1!}+\frac1{2!}+\frac1{3!}\right)$</p>
<p>$$=e-2e+e-\{-2\left(\frac1{0!}+\frac1{1!}\right)+\left(\frac1{0!}+\frac1{1!}+\frac1{2!}+\frac1{3!}\right)\}=-\left(-4+\frac83\right)=\frac43$$</p>
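<p>A quick numerical check of the telescoped sum (the partial sums converge to $\frac43$):</p>

```python
from math import factorial

def T(n):
    return (n**4 + 2*n**3 - 3*n**2 - 8*n - 3) / factorial(n + 2)

total = sum(T(n) for n in range(2, 200))
assert abs(total - 4 / 3) < 1e-12    # the series sums to 4/3
```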
|
770,430 | <p>How to find the value of $X$?</p>
<p>If $X= \frac {1}{1001}+\frac {1}{1002}+\frac {1}{1003}+\cdots+\frac {1}{3001}$</p>
| Umberto | 92,940 | <p>Mathematica gives an approximation of</p>
<p>1.098612251</p>
<p>Ref.: <a href="http://www.wolframalpha.com/share/clip?f=d41d8cd98f00b204e9800998ecf8427egmfbkbu8jv&mail=1" rel="nofollow">http://www.wolframalpha.com/share/clip?f=d41d8cd98f00b204e9800998ecf8427egmfbkbu8jv&mail=1</a></p>
<p>Just to note that $\ln 3 \approx 1.098612289$. So the approximation given before is VERY good...</p>
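<p>The same value can be checked with a few lines of Python (a supplement to the Mathematica computation):</p>

```python
import math

X = sum(1 / k for k in range(1001, 3002))   # 1/1001 + ... + 1/3001
assert abs(X - 1.098612251) < 1e-7          # matches the value above
assert abs(X - math.log(3)) < 1e-7          # and is very close to ln 3
```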
|
572,125 | <p>How to show this function's discontinuity?<br></p>
<p>$ f(n) = \left\{
\begin{array}{l l}
\frac{xy}{x^2+y^2} & \quad , \quad(x,y)\neq(0,0)\\
0 & \quad , \quad(x,y)=(0,0)
\end{array} \right.$</p>
| Brian M. Scott | 12,042 | <p>If $\kappa$ is a regular cardinal, and $\alpha$ is any ordinal less than $\kappa$, then every function $f:\alpha\to\kappa$ is bounded. In particular, any function $f:\omega\to\omega_2$ is bounded.</p>
<p>More generally, all functions $f:\alpha\to\kappa$ will be bounded if $\alpha<\operatorname{cf}\kappa$, the <a href="https://en.wikipedia.org/wiki/Cofinality#Cofinality_of_ordinals_and_other_well-ordered_sets" rel="nofollow"><em>cofinality</em></a> of $\kappa$.</p>
|
4,134,734 | <p>I'm trying to verify that a certain function of two variables <span class="math-container">$F(x,y)$</span> satisfies the conditions of a joint CDF. Showing that each condition holds has been fairly straightforward except, that is, for the condition that</p>
<p><span class="math-container">$a<b,c<d\implies F(b,d)-F(b,c)-F(a,d)+F(a,c)\geqslant 0$</span>.</p>
<p>I honestly don't know where to start for this one. It's easy enough when the task is to show that this condition is violated but showing that it holds is another matter.</p>
<p>For reference, the particular bivariate function I'm trying to show satisfies this condition is the following (a modification a function Jordan M. Stoyanov examines in Section 5.6 of his book Counterexamples in Probability, 3rd Edition):</p>
<p><a href="https://i.stack.imgur.com/Z0Wwm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z0Wwm.png" alt="enter image description here" /></a></p>
<p>Any tips about how I might proceed would be appreciated.</p>
| Vons | 274,987 | <p>When the denominator is positive, the condition is the same as checking <span class="math-container">$\frac{F(b,d)-F(b,c)}{F(a,d)-F(a,c)}\ge 1$</span>. Geometrically, this says that the slope of any cross-sectional slice parallel to the xz-plane is greater when the x-value is greater.</p>
<p>Here are some cases that are worth considering.</p>
<p><span class="math-container">$$\frac 12<d<1\\
0<c<\frac 12\\
b>1\\
0<a<1$$</span></p>
<p>Gives <span class="math-container">$\frac{d-c}{\frac a2-ac}\ge1$</span>? <span class="math-container">$\frac{d-c}{a\left(\frac 12 -c\right)}\ge1$</span>, so this is true.</p>
<p>Another case that is worth considering is <span class="math-container">$0<c<d<\frac 12,b>1,0<a<1$</span>, in which case we have <span class="math-container">$\frac{d-c}{ad-ac}\ge 1$</span>? <span class="math-container">$\frac 1a\ge 1$</span>, so this is true.</p>
<p>Last case here is <span class="math-container">$0<c<d<\frac 12, 0<a<b<1$</span>, in which case we have <span class="math-container">$\frac{bd-bc}{ad-ac}\ge1$</span>? <span class="math-container">$\frac ba\ge1$</span>, so this is true.</p>
<p>A picture of the cdf. The slope/vertical distance between any two points that are further right on the x-axis must be at least as great as the slope/vertical distance between the same two points shifted to a lesser x-value.</p>
<p><a href="https://i.stack.imgur.com/lqSLt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lqSLt.png" alt="enter image description here" /></a></p>
|
19,148 | <p>I always find the strong law of large numbers hard to motivate to students, especially non-mathematicians. The weak law (giving convergence in probability) is so much easier to prove; why is it worth so much trouble to upgrade the conclusion to almost sure convergence?</p>
<p>I think it comes down to not having a good sense of why, practically speaking, a.s. convergence is better than convergence i.p. Sure, I can prove that one implies the other and not conversely, but the counterexamples feel contrived. I understand the advantages of a.s. convergence on a technical level, but not on the level of everyday life.</p>
<p>So my question: how would you explain to, say, an engineer, the significance of having a.s. convergence as opposed to i.p.? Is there a "real-life" example of bad behavior that we're ruling out?</p>
| Yuri Bakhtin | 2,968 | <p>Suppose you are in the context of collecting data and estimating the mean.</p>
<p>Imagine a situation where SLLN does not hold. It means that with positive probability accumulating new data is useless.</p>
|
2,227,709 | <p>Find the solutions of $x^2+2x+3 \equiv 0 \pmod{198}$</p>
<p>I have no idea how to approach this problem. I have a small hint that we should consider $x^2+2x+3 \equiv 0 \pmod{12}$</p>
| Travis Willse | 155,629 | <p>Here's a more-or-less generalizable, manual way of finding all of the solutions:</p>
<p>First, as P. Vanchinathan does, change variable to $a := x + 1$, which transforms the equation into one with zero linear term:
$$a^2 + 2 \equiv 0 \pmod {198} .$$
(This step is optional, but it reduces the amount of later work.)</p>
<p>Now, we exploit the prime factorization $198 = 2 \cdot 3^2 \cdot 11$. Reducing modulo $2$ gives $a^2 \equiv 0 \pmod {2}$, so any solution $a$ to the above display equation has the form $a = 2b$. Substituting into the previous display equation gives
$$4 b^2 + 2 \equiv 0 \pmod {198},$$
which is equivalent to $$2 b^2 + 1 \equiv 0 \pmod {99}.$$</p>
<p>Now reducing modulo $11$ (and multiplying by $6$ to produce a monic polynomial on the l.h.s.) leaves
$$b^2 + 6 \equiv 0 \pmod {11} .$$
The l.h.s. factors as $(b + 4)(b - 4)$, so any solution $b$ to the equation modulo $99$ has the form $$b = 11 c \pm 4 .$$
Substituting in the above equation modulo $99$ and proceeding as before (in particular, multiplying both sides of the resulting equation by $7$, which is coprime to $9$ and hence produces an equivalent equation) gives
$$c^2 \pm 4 c + 3 \equiv 0 \pmod 9 .$$
We may factor the left-hand side as $(c \pm 1)(c \pm 3)$. Since the prime factorization of $9$ is $9 = 3^2$, the equation in $c$ has a solution iff either factor is $0$ or both of the above factors are congruent to $0$ modulo $3$. The latter is impossible since the difference of those factors modulo $3$ is $\pm 1$, so the solutions are exactly
$c = \mp 1, \mp 3$. Substituting these four solutions successively into our equations for $b, a, x$ gives all of the solutions to the original equation: $$x \equiv 13, 57, 139, 183 ,$$
which agrees with the solution lioness99a produced using W.A.</p>
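<p>A brute-force enumeration over a full period confirms these four residues (a quick check, not part of the derivation):</p>

```python
solutions = [x for x in range(198) if (x * x + 2 * x + 3) % 198 == 0]
assert solutions == [13, 57, 139, 183]
```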
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the other hand, all linear-time samplers I know create processes for which 1) fails, like Perlin noise, white noise, pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| Asher2211 | 742,113 | <p><span class="math-container">$$(x-y)^2\ge0$$</span><br />
<span class="math-container">$$⇒ x^2+y^2\ge2xy=(x+y)^2-x^2-y^2=(x+y)^2-1$$</span><br />
<span class="math-container">$$⇒ 1=x^2+y^2\ge(x+y)^2-1$$</span><br />
<span class="math-container">$$⇒ 2\ge(x+y)^2$$</span><br />
<span class="math-container">$$⇒ x+y≤\sqrt2$$</span></p>
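<p>A quick numerical check of the bound (parametrizing the circle as $(\cos t,\sin t)$; this is a supplement, not part of the algebraic argument above):</p>

```python
import math

# Maximize x + y over the unit circle by sampling the parametrization.
best = max(math.cos(t) + math.sin(t)
           for t in (2 * math.pi * i / 100_000 for i in range(100_000)))
assert abs(best - math.sqrt(2)) < 1e-9   # maximum attained at t = pi/4
```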
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the other hand, all linear-time samplers I know create processes for which 1) fails, like Perlin noise, white noise, pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| Community | -1 | <p><span class="math-container">$$x^2+y^2=1$$</span> is the unit circle, and</p>
<p><span class="math-container">$$\cos(\theta)+\sin(\theta)=\sqrt2\cos\left(\theta-\dfrac\pi4\right)\le\sqrt 2.$$</span></p>
|
3,960,549 | <p>There are <span class="math-container">$3$</span> cyclones on average in a year in Russia.</p>
<p>What is the probability that there will be a cyclone in the next TWO years?</p>
<p>I just want to know the value of <span class="math-container">$\lambda$</span> in the Poisson distribution! As we know, <span class="math-container">$\lambda$</span> refers to a fixed time interval.
My calculation: as it's two years, <span class="math-container">$\lambda = 3\cdot 2 = 6$</span>.</p>
<p>But my confusion is: should we count the present year too? I mean, if we don't count this year we can take time = <span class="math-container">$2$</span> years, otherwise <span class="math-container">$3$</span>...</p>
<p>Is my calculation ok?</p>
| tommik | 791,458 | <p><span class="math-container">$\lambda=6$</span> thus your probability is</p>
<p><span class="math-container">$$P[X>0]=1-e^{-6}\approx 99.75\%$$</span></p>
|
3,960,549 | <p>There are <span class="math-container">$3$</span> cyclones on average in a year in Russia.</p>
<p>What is the probability that there will be a cyclone in the next TWO years?</p>
<p>I just want to know the value of <span class="math-container">$\lambda$</span> in the Poisson distribution! As we know, <span class="math-container">$\lambda$</span> refers to a fixed time interval.
My calculation: as it's two years, <span class="math-container">$\lambda = 3\cdot 2 = 6$</span>.</p>
<p>But my confusion is: should we count the present year too? I mean, if we don't count this year we can take time = <span class="math-container">$2$</span> years, otherwise <span class="math-container">$3$</span>...</p>
<p>Is my calculation ok?</p>
| Botnakov N. | 452,350 | <p>I suppose that your question means that there is some misunderstanding in the question about taking <span class="math-container">$2 \lambda$</span>: otherwise there would be no questions.</p>
<p>Let <span class="math-container">$\xi$</span> be the number of cyclones during the first year and <span class="math-container">$\eta$</span> is the number of cyclones during the second year. Thus <span class="math-container">$\xi \sim Pois(\lambda)$</span>, <span class="math-container">$\eta \sim Pois(\lambda)$</span>. We implicitly suppose that <span class="math-container">$\xi$</span> is independent of <span class="math-container">$\eta$</span> - otherwise this approach will not work.</p>
<p><span class="math-container">$E \xi = \lambda =3$</span>.</p>
<p>We use the next fact: the sum of independent r.v. with distributions <span class="math-container">$Pois(\lambda_1)$</span> and <span class="math-container">$Pois(\lambda_2)$</span> has distribution <span class="math-container">$Pois(\lambda_1 + \lambda_2)$</span>. Hence <span class="math-container">$\xi+ \eta \sim Pois(2 \lambda)$</span>.</p>
<p>Hence <span class="math-container">$P(\xi + \eta > 0) = 1 - e^{-6}$</span> is the probability of at least one cyclone and <span class="math-container">$P(\xi + \eta = 1) = 6e^{-6}$</span> is the probability that there will be exactly one cyclone.</p>
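<p>A quick numerical sanity check of these values (a pure-Python sketch; the helper function is mine, not part of the answer):</p>

```python
import math

# X = xi + eta ~ Poisson(lam), with lam = 3 cyclones/year * 2 years = 6.
lam = 3 * 2

def pois_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

p_at_least_one = 1 - pois_pmf(0, lam)   # 1 - e^{-6}
p_exactly_one = pois_pmf(1, lam)        # 6 e^{-6}
```

<p>This gives <span class="math-container">$P(\xi+\eta>0) = 1-e^{-6} \approx 0.9975$</span> and <span class="math-container">$P(\xi+\eta=1) = 6e^{-6} \approx 0.0149$</span>.</p>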
|
2,605,546 | <p>Simple question, but I am not sure why, for $ f = \frac{\lambda}{2}\sum_{j=1}^{D} w_j^2$,
$$\frac{\partial f}{\partial w_j}= \lambda w_j$$</p>
<p>I would have thought the answer would be $\frac{\partial f}{\partial w_j}= \lambda \sum_{j=1}^{D} w_j^2$</p>
<p>Since we get the derivative of $w_j^2$ which is $2w_j$, pull out the 2, getting rid of $\frac{\lambda}{2}$, and multiply by $w_j$. </p>
| Community | -1 | <p>$\frac {\partial \omega_j^2}{\partial \omega_j}=2\omega _j$,
and $\frac {\partial \omega_i^2}{\partial \omega_j}=0$ for $i\neq j$.</p>
|
2,605,546 | <p>Simple question, but I am not sure why, for $ f = \frac{\lambda}{2}\sum_{j=1}^{D} w_j^2$,
$$\frac{\partial f}{\partial w_j}= \lambda w_j$$</p>
<p>I would have thought the answer would be $\frac{\partial f}{\partial w_j}= \lambda \sum_{j=1}^{D} w_j^2$</p>
<p>Since we get the derivative of $w_j^2$ which is $2w_j$, pull out the 2, getting rid of $\frac{\lambda}{2}$, and multiply by $w_j$. </p>
| user247327 | 247,327 | <p>What you have written doesn't quite make sense! The given function is a function of the D variables, $\omega_1, \omega_2, \cdot\cdot\cdot, \omega_D$. But then what variable do you want to differentiate with respect to? Having used "j" as the summation index, you should not then use "j" as an index outside that summation! </p>
<p>It would be better to use some other index, say "i": having summed over <strong>all</strong> variables, differentiate with respect to <strong>one</strong> of them.</p>
<p>The derivative of a <strong>constant</strong> is 0. And when you are differentiating <em>with respect to</em> the variable $\omega_j$, all the other variable are treated as <strong>constants</strong>.</p>
<p>For example, if $D = 4$, then $f(\omega_1, \omega_2, \omega_3, \omega_4)= \frac{\lambda}{2}\sum_{j=1}^4 \omega_j^2= \frac{\lambda}{2}\omega_1^2+ \frac{\lambda}{2}\omega_2^2+ \frac{\lambda}{2}\omega_3^2+ \frac{\lambda}{2}\omega_4^2$. The derivative with respect to any one of those variables, say $\omega_j$, is $\lambda \omega_j$.</p>
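<p>A small finite-difference check of this (a sketch; the value of $\lambda$ and the vector are arbitrary choices of mine):</p>

```python
# Check numerically that, for f(w) = (lam/2) * sum_i w_i^2, the partial
# derivative with respect to w_j is lam * w_j: only the j-th term of the
# sum depends on w_j, so every other term differentiates to zero.

lam = 0.5
w = [1.0, -2.0, 3.0, 0.25]

def f(v):
    return lam / 2 * sum(vi * vi for vi in v)

def num_partial(j, h=1e-6):
    """Central finite difference in the j-th coordinate."""
    wp = list(w); wp[j] += h
    wm = list(w); wm[j] -= h
    return (f(wp) - f(wm)) / (2 * h)

errors = [abs(num_partial(j) - lam * w[j]) for j in range(len(w))]
```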
|
676,171 | <blockquote>
<p>Prove that if $F$ is a field, every proper prime ideal of $F[X]$ is maximal.</p>
</blockquote>
<p>Should I be using the theorem that says an ideal $M$ of a commutative ring $R$ is maximal iff $R/M$ is a field? Any suggestions on this would be appreciated. </p>
| Alex Youcis | 16,497 | <p>Yes.</p>
<p>Hint: Every prime ideal $P$ of $F[x]$ is of the form $P=(f(x))$ for some polynomial $f$. Use this to show that $F[X]/P$ is a domain which is a finite dimensional vector space, and so a field (why?).</p>
|
3,795,932 | <p><span class="math-container">$\mathbb {Z}G $</span> is not Artinian where <span class="math-container">$ G$</span> is a finite group.</p>
<p>I know that <span class="math-container">$\mathbb {Z} $</span> is not Artinian, but <span class="math-container">$\mathbb {Z} $</span> is not an ideal of the group ring. So how does one see this? Any help would be appreciated!</p>
| b00n heT | 119,285 | <p>I am not sure that the fraction simplifies as you say (check that again). But the fact that the two results are equivalent derives from
<span class="math-container">$$\ln(2x-2)=\ln(2\cdot(x-1))=\ln(2)+\ln(x-1),$$</span>
So that the <span class="math-container">$2$</span> gets put into the constant <span class="math-container">$C$</span>.</p>
|
849,038 | <p>I'm once again struggling to see the equivalence of two definitions. In my abstract algebra book (<em>Abstract Algebra</em> by Beachy and Blair) it says that elements $d_1,d_2,\dots,d_n \in D$, where $D$ is a UFD, are said to be relatively prime in $D$ if there is no irreducible element $p \in D$ such that $p \mid d_i$ for $i=1,2,\dots,n$.</p>
<p>However in my other book (A classical introduction to modern number theory by ireland and rosen) it says that two elements $a$ and $b$ are relatively prime if the only common divisors are units.</p>
<p>I am sure both definitions cannot be correct. For instance, suppose that $a_1$ and $a_2$ are relatively prime according to the first definition; then we may very well have the case where a nonunit $p_1$ that is not irreducible divides both $a_1$ and $a_2$, so according to definition 2 the elements are not relatively prime. In this case, the elements are both relatively prime and not at the same time, which shows that the definitions are not equivalent.</p>
<p>Am I missing something here or are the authors simply being sloppy?</p>
| Michael Hardy | 11,667 | <p>Here I am thinking I should remember something about this, or maybe I shouldn't. We have a matrix "equation":
$$
\begin{bmatrix} 1 & h_1 & w_1 \\ \vdots & \vdots & \vdots \\ 1 & h_n & w_n \end{bmatrix} \begin{bmatrix} 7 \\ 0.08 \\ 0.06 \end{bmatrix} \overset{\text{?}} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \tag 1
$$
The equality is not actually equality: on the right, if we put the fitted ages rather than the observed ages, we would have equality. Write $(1)$ in matrix form as
$$
X\beta = A.
$$
The matrix $X$ has no inverse: there is no $3\times n$ matrix we could put to its right and get the $n\times n$ identity matrix. But there is a <b>left</b> inverse: there is a $3\times3$ matrix we can put to its left to get the $3\times3$ identity matrix. That matrix is
$$
\underbrace{(X^T X)^{-1}}_{3\times3} \underbrace{{}\quad X^T\quad{}}_{3\times n}. \tag 2
$$
Hence
$$
\begin{bmatrix} 7 \\ 0.08 \\ 0.06 \end{bmatrix} = (X^T X)^{-1} X^T A \tag 3
$$
<b>Exercise:</b> Equality holds in $(3)$ regardless of whether $A$ is the column of fitted ages or the column of observed ages.</p>
<p>Now we add an $(n+1)$th row to $X$, getting $\begin{bmatrix} X \\ W \end{bmatrix}$ where $W\in\mathbb R^{1\times3}$. Now instead of $(2)$ we will have
$$
\left(\begin{bmatrix} X^T & W^T \end{bmatrix} \begin{bmatrix} X \\ W \end{bmatrix}\right)^{-1} \begin{bmatrix} X^T & W^T \end{bmatrix} \begin{bmatrix} 7 \\ 0.08 \\ 0.06 \end{bmatrix} \overset{\text{?}} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \\ a_{n+1} \end{bmatrix}.
$$</p>
<p>The matrix we need to invert becomes the $3\times3$ matrix
$$
\begin{bmatrix} X^T X + W^T W \end{bmatrix}.
$$
How to do that without doing the whole thing over from scratch is what I don't know at this moment.</p>
<p>Software experts must have considered this problem, but I don't know what has been done.</p>
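<p>One standard technique for exactly this update is the Sherman–Morrison identity: for an invertible symmetric $A$ and a new row $W = w^T$, $(A + ww^T)^{-1} = A^{-1} - \dfrac{A^{-1}ww^TA^{-1}}{1 + w^TA^{-1}w}$, so the old inverse of $X^TX$ can be reused rather than refactored from scratch. A minimal pure-Python sketch on a $2\times2$ example (the concrete numbers are mine, purely for illustration):</p>

```python
# Sherman-Morrison rank-one update: reuse the inverse of A = X^T X after a
# new data row w is appended, i.e. compute (A + w w^T)^{-1} from A^{-1}.
# This relies on A being symmetric, so that w^T A^{-1} = (A^{-1} w)^T.

def inv2(a):
    """Inverse of a 2x2 matrix [[p, q], [r, s]]."""
    (p, q), (r, s) = a
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

def matvec(a, v):
    return [a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1]]

def sherman_morrison(a_inv, w):
    u = matvec(a_inv, w)                      # A^{-1} w
    denom = 1.0 + w[0] * u[0] + w[1] * u[1]   # 1 + w^T A^{-1} w
    return [[a_inv[i][j] - u[i] * u[j] / denom for j in range(2)]
            for i in range(2)]

A = [[4.0, 1.0], [1.0, 3.0]]
w = [1.0, 2.0]
updated = sherman_morrison(inv2(A), w)

# Direct recomputation of (A + w w^T)^{-1}, for comparison:
A_plus = [[A[i][j] + w[i] * w[j] for j in range(2)] for i in range(2)]
direct = inv2(A_plus)
```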
|
3,738,789 | <p>I know if I stick two pins on a paper and trace a taut loop around them, I get an ellipse. With one pin, I get a circle. The question is: are there names for the shapes I get if I trace a taut loop around 3, 4, 5, ..., k pins, assuming the pins are not collinear and the polygon formed by joining them is convex, i.e. every pin deflects the loop at some point as I trace around the pins? Is there a general formula for the locus? Any good, readable references on this? (I am not a mathematician, just a hobbyist.) Thank you.</p>
| Community | -1 | <p>If you keep the loop taut, the moving part will form a variable triangle defined by two pins and the pen for a while, then change one pin at a time. During this process, the pen draws arcs of ellipse, forming a continuous curve with continuous tangent.</p>
<p>The endpoints of the arcs will be found by lengthening the sides (when the loop is about to leave a pin, it is straight) and forming convex polygons that have the required perimeter. When you have the two pins of contact and the length, you have the ellipse equation. Additional geometry is needed to find the delimiting angles.</p>
<p><a href="https://i.stack.imgur.com/gszxJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gszxJ.png" alt="enter image description here" /></a></p>
<p>For the case of this figure, the trajectory is made of six elliptic arcs.</p>
|
2,543,834 | <p>Ok, so in my differential equations class we've been doing problems which more or less amount to solving equations of the form:</p>
<p><span class="math-container">$$\frac{dY}{dt} = AY$$</span></p>
<p>Where <span class="math-container">$A$</span> is just some <span class="math-container">$2\times2$</span> linear transformation and <span class="math-container">$Y$</span> is a parametric vector function defined more specifically as</p>
<p><span class="math-container">$$Y(t) = \begin{bmatrix}
x(t) \\
y(t)
\end{bmatrix}$$</span></p>
<p>The end result, assuming that there exists <span class="math-container">$\lambda_1, \lambda_2 \ne 0; \lambda_1 \ne \lambda_2$</span> which define the eigen values for A, is a definition for <span class="math-container">$Y(t)$</span> of the form,</p>
<p><span class="math-container">$$Y(t) = k_1e^{\lambda_1t}\vec{V_1} + k_2e^{\lambda_2t}\vec{V_2}$$</span></p>
<p>Where <span class="math-container">$\vec{V_1}, \vec{V_2}$</span> are the corresponding eigen vectors to their respective eigen values and <span class="math-container">$k_1, k_2$</span> are just some constants.</p>
<p>For solutions which involve either <span class="math-container">$k_1 = 0$</span> or <span class="math-container">$k_2 = 0$</span>, the end result is a straight-line solution. The rest are exponential curves within the vector space defined by the eigen vectors.</p>
<p>My understanding of eigen vectors, from a linear algebra class I took a year ago, so far is as follows (roughly):</p>
<ul>
<li><p>geometrically speaking, an eigenvector is any vector whose direction after transformation by some matrix <span class="math-container">$A$</span> remains the same. It's only scaled and/or negated.</p>
</li>
<li><p>Every eigenvector for some matrix <span class="math-container">$A$</span> composes a subspace which in turn defines the <em>eigen space</em> for <span class="math-container">$A$</span>'s vector basis.</p>
</li>
<li><p>therefore, the eigenvectors which <span class="math-container">$span(A)$</span> are linearly independent and define a coordinate space which also exists within <span class="math-container">$A$</span>.</p>
</li>
</ul>
<p>Regardless of whether or not the above is correct (if there's a mistake, any clarification/correction would be appreciated), what is it about eigenvectors <em>specifically</em> which allows for them to be used to solve these forms of differential equations?</p>
| Doug M | 317,162 | <p>$Y' = A Y\\
Y = e^{At}Y_0$</p>
<p>$e^{At} = \sum_\limits{n=0}^\infty \frac {A^nt^n}{n!}$</p>
<p>$A = PDP^{-1}\\
A^n = PD^nP^{-1}$</p>
<p>$e^{At} = P\left(\sum_\limits{n=0}^\infty \frac {D^nt^n}{n!}\right)P^{-1}\\
Y = Pe^{Dt}P^{-1}Y_0$</p>
<p>$e^{Dt} = \begin{bmatrix} e^{\lambda_1 t}\\&e^{\lambda_2 t}\end{bmatrix}$</p>
<p>$P^{-1}Y_0 = \begin {bmatrix} k_1\\k_2\end{bmatrix}$</p>
<p>$Y = k_1V_1 e^{\lambda_1 t} + k_2V_2 e^{\lambda_2 t}$</p>
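<p>To make the recipe concrete, a small numerical verification (pure Python; the matrix, eigenpairs, and constants are my own illustrative choices, not from the question):</p>

```python
import math

# For A = [[0, 1], [1, 0]] the eigenpairs are l1 = 1, V1 = (1, 1) and
# l2 = -1, V2 = (1, -1).  Check by finite differences that
# Y(t) = k1 e^{l1 t} V1 + k2 e^{l2 t} V2 satisfies Y' = A Y.

A = [[0.0, 1.0], [1.0, 0.0]]
l1, V1 = 1.0, (1.0, 1.0)
l2, V2 = -1.0, (1.0, -1.0)
k1, k2 = 0.7, -0.3          # arbitrary constants

def Y(t):
    return tuple(k1 * math.exp(l1 * t) * V1[i] + k2 * math.exp(l2 * t) * V2[i]
                 for i in range(2))

def AY(t):
    y = Y(t)
    return tuple(A[i][0] * y[0] + A[i][1] * y[1] for i in range(2))

def dY(t, h=1e-6):
    """Central finite-difference derivative of Y."""
    yp, ym = Y(t + h), Y(t - h)
    return tuple((yp[i] - ym[i]) / (2 * h) for i in range(2))

residual = max(abs(dY(0.5)[i] - AY(0.5)[i]) for i in range(2))
```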
|
673,385 | <p>Question:</p>
<blockquote>
<p>Determine the multiplicative inverse of $x^2 + 1$ in $GF(2^4)$ with $$m(x) = x^4 + x + 1.$$ </p>
</blockquote>
<p>My confusion is over the $GF (2^4)$.</p>
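<p>For concreteness, here is a brute-force sketch of the computation (my own illustration, using the common encoding of <span class="math-container">$GF(2)$</span> polynomials as bitmasks, bit <span class="math-container">$i$</span> holding the coefficient of <span class="math-container">$x^i$</span>):</p>

```python
# Find the inverse of x^2 + 1 in GF(2^4) = GF(2)[x] / (x^4 + x + 1)
# by trying all 15 nonzero field elements.

M = 0b10011   # m(x) = x^4 + x + 1
A = 0b00101   # x^2 + 1

def gf_mul(a, b, m=M):
    """Carry-less multiplication of a and b, reduced modulo m."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:   # degree reached 4: reduce by m
            a ^= m
    return r

inverse = next(p for p in range(1, 16) if gf_mul(A, p) == 1)
```

<p>This yields <span class="math-container">$x^3+x+1$</span> (bitmask <code>0b1011</code>); the extended Euclidean algorithm in <span class="math-container">$GF(2)[x]$</span> gives the same result and is the standard by-hand method.</p>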
| Vadim | 26,767 | <p><strong>Hint</strong>:</p>
<p>$$I_1(x)=\int_x^1\frac{dt}{\sqrt{1-t^4}}=\frac{1}{\sqrt{2}}\int_{0}^{\arccos x}\frac{du}{\sqrt{1-\frac{1}{2}\sin^2u}}$$</p>
<p>$$I_2(x)=\int_0^x \frac{dt}{\sqrt{1+t^4}}=\frac{1}{2}\int_{0}^{\arccos\frac{1-x^2}{1+x^2}}\frac{du}{\sqrt{1-\frac{1}{2}\sin^2u}}$$</p>
<p>and since $$\arccos 0 = \arccos \frac{1-1^2}{1+1^2}$$</p>
<p>$$\frac{I_1(0)}{I_2(1)}=\sqrt{2}$$</p>
<p>From the hint the substitutions should be pretty straightforward.</p>
<hr>
<p>Added due to the comment below:</p>
<p>The first one: let $t=\cos u$, $u=\arccos t$.</p>
<p>$$\frac{dt}{\sqrt{1-t^4}}=\frac{-\sin udu}{\sqrt{1-\cos^4 u}}=\frac{-\sin udu}{\sqrt{\sin^2 u(1+\cos^2 u)}}$$</p>
<p>and you get the RHS almost immediately.</p>
<p>The second one: let $t=\tan\frac{u}{2}$, $t^2=\frac{1}{\cos^2 \frac{u}{2}}-1=\frac{1-\cos u}{1+\cos u}$, $u=\arccos \frac{1-t^2}{1+t^2}$. Plug it in, and get the RHS also (almost) immediately.</p>
|
673,385 | <p>Question:</p>
<blockquote>
<p>Determine the multiplicative inverse of $x^2 + 1$ in $GF(2^4)$ with $$m(x) = x^4 + x + 1.$$ </p>
</blockquote>
<p>My confusion is over the $GF (2^4)$.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}%
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\down}{\downarrow}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}%
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}%
\newcommand{\fermi}{\,{\rm f}}%
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}%
\newcommand{\half}{{1 \over 2}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\isdiv}{\,\left.\right\vert\,}%
\newcommand{\ket}[1]{\left\vert #1\right\rangle}%
\newcommand{\ol}[1]{\overline{#1}}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}%
\newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}%
\newcommand{\sech}{\,{\rm sech}}%
\newcommand{\sgn}{\,{\rm sgn}}%
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
\color{#00f}{\large I_{1}}&\equiv\int_{0}^{1}{\dd t \over \root{1 - t^{4}}}
=\int_{0}^{1}\pars{1 - t}^{-1/2}\,{1 \over 4}\,t^{-3/4}\,\dd t
={1 \over 4}\int_{0}^{1}t^{-3/4}\pars{1 - t}^{-1/2}\,\dd t
={1 \over 4}\,{\rm B}\pars{{1 \over 4},\half}
\\[3mm]&={1 \over 4}\,{\Gamma\pars{1/4}\Gamma\pars{1/2} \over \Gamma\pars{3/4}}
={1 \over 4}\,{\Gamma\pars{1/4}\root{\pi} \over \pi/\bracks{\Gamma\pars{1/4}\sin\pars{\pi/4}}}=
\color{#00f}{\large{1 \over 4\root{2\pi}}\,\Gamma^{\,2}\pars{1 \over 4}}
\approx 1.3110
\end{align}</p>
<blockquote>
<p>${\rm B}\pars{x,y}$ and $\Gamma\pars{z}$ are the <i>Beta</i> and <i>Gamma</i> functions, respectively. We used well known properties of them.</p>
</blockquote>
<p>\begin{align}
\color{#00f}{\large I_{2}}&\equiv\int_{0}^{1}{\dd t \over \root{1 + t^{4}}}
=\half\int_{0}^{\infty}{\dd t \over \root{1 + t^{4}}}
\end{align}
Lets $\ds{x \equiv {1 \over 1 + t^{4}}\quad\iff\quad t = \pars{{1 \over x} - 1}^{1/4}}$
\begin{align}
\color{#00f}{\large I_{2}}&=\half\int_{0}^{\infty}{\dd t \over \root{1 + t^{4}}}
=\half\int_{1}^{0}x^{1/2}\,{1 \over 4}\,\pars{1 - x \over x}^{-3/4}
\pars{-\,{\dd x \over x^{2}}}
\\[3mm]&={1 \over 8}\int_{0}^{1}x^{-3/4}\pars{1 - x}^{-3/4}\,\dd x
={1 \over 8}\,{\rm B}\pars{{1 \over 4},{1 \over 4}}
={1 \over 8}\,{\Gamma\pars{1/4}\Gamma\pars{1/4} \over \Gamma\pars{1/2}}
\\[3mm]&=\color{#00f}{\large{1 \over 8\root{\pi}}\,\Gamma^{\,2}\pars{1 \over 4}}
\approx 0.9270
\end{align}</p>
<blockquote>
<p>$$
\color{#00f}{\large{I_{1} \over I_{2}}} = {1/\pars{4\root{2}} \over 1/8}
=\color{#00f}{\large\root{2}}
$$</p>
</blockquote>
|
673,385 | <p>Question:</p>
<blockquote>
<p>Determine the multiplicative inverse of $x^2 + 1$ in $GF(2^4)$ with $$m(x) = x^4 + x + 1.$$ </p>
</blockquote>
<p>My confusion is over the $GF (2^4)$.</p>
| math110 | 58,742 | <p>The first one: let $t^2=\sin{x}$; then
$$I_{1}=\dfrac{1}{2}\int_{0}^{\frac{\pi}{2}}\dfrac{dx}{\sqrt{\sin{x}}}$$</p>
<p>The second one: let
$t^2=\tan{x}$</p>
<p>then
$$I_{2}=\dfrac{1}{2}\int_{0}^{\frac{\pi}{4}}\dfrac{dx}{\sqrt{\sin{x}\cos{x}}}=\dfrac{1}{2\sqrt{2}}\int_{0}^{\frac{\pi}{2}}\dfrac{dx}{\sqrt{\sin{x}}}$$
so
$$\dfrac{I_{1}}{I_{2}}=\sqrt{2}$$</p>
|
3,195,618 | <p>Prove that the topological space <span class="math-container">$ \mathbb{R^2} $</span> with the dictionary order topology is first countable, but not second countable.</p>
<p>I am a bit stuck. Some hints would help. For first countability I am having trouble finding a local base at each <span class="math-container">$ (x,y) \in \mathbb{R^2}$</span>. For second countability,
can I, for example, look at the first quadrant and write it as <span class="math-container">$ \cup_{x \in \mathbb{R^{+}} } ((x,0), (x, + \infty )) $</span>, which is a disjoint union of uncountably many uncountable sets, so there can't be a countable base?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Substituting <span class="math-container">$$\frac{dy(x)}{dx}=\frac{dt}{dx}\frac{dy(t)}{dt}$$</span> we get
<span class="math-container">$$y''(t)+9y(t)=0$$</span> and <span class="math-container">$$y''(x)=\frac{d^2t}{dx^2}\frac{dy(t)}{dt}+\left(\frac{dt}{dx}\right)^2\frac{d^2y(t)}{dt^2}$$</span></p>
|
237,446 | <p>I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$
which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is this correct? If not, what did I do wrong?</p>
| WimC | 25,313 | <p>$\log(x) \leq x - 1$, so $\log(x) = n \log(x^{1/n}) \leq n (x^{1/n} - 1)$ for all integral $n \geq 1$. Take $x = \log(n)$ to get $\log(\log(n)) \leq n(\log(n)^{1/n}-1)$ or $n(1-\log(n)^{1/n}) \leq -\log(\log(n))$. This shows that your limit is $-\infty$.</p>
|
237,446 | <p>I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$
which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is this correct? If not, what did I do wrong?</p>
| lisyarus | 135,314 | <p>Use Taylor!</p>
<p>$$n(1-\sqrt[n]{\log n}) = n (1-e^{\frac{\log\log n}{n}}) \approx n\left(1-\left(1+\frac{\log\log n}{n}\right)\right) = - \log\log n$$</p>
<p>which clearly tends to $-\infty$.</p>
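<p>A quick numerical check of this estimate (sketch; the sample points are arbitrary choices of mine):</p>

```python
import math

# Check that n*(1 - ln(n)^(1/n)) tracks -ln(ln(n)), hence tends to -infinity.

def term(n):
    return n * (1 - math.log(n) ** (1 / n))

pairs = [(term(n), -math.log(math.log(n))) for n in (10**3, 10**6, 10**9)]
```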
|
1,902,878 | <p>If $a^x=bc$, $b^y=ca$ and $c^z=ab$, prove that: $xyz=x+y+z+2$.</p>
<p>My approach:
Here,</p>
<p>$$a^x=bc$$
$$a=(bc)^{\frac {1}{x}}$$</p>
<p>and,</p>
<p>$$b=(ca)^{\frac {1}{y}}$$
$$c=(ab)^{\frac {1}{z}}$$</p>
<p>I got stopped from here. Please help me to continue </p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>We have $$abc=a^{x+1}=b^{y+1}=c^{z+1}=K\text{(say)}$$</p>
<p>$\implies a=K^{1/(x+1)}$ etc.</p>
<p>Put the values of $a,b,c$ in one of $a^x=bc, b^y=ca,c^z=ab$</p>
|
3,776,217 | <p>Prove that</p>
<p><span class="math-container">\begin{equation}
y(x) = \sqrt{\dfrac{3x}{2x + 3c}}
\end{equation}</span></p>
<p>is a solution of</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + \dfrac{y}{2x} = -\frac{y^3}{3x}
\end{equation}</span></p>
<p>All the math to resolve this differential equation is already done. The exercise simply asks to prove the solution.</p>
<p>I start by pointing out that it has the form</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + P(x)y = Q(x)y^3
\end{equation}</span></p>
<p>where</p>
<p><span class="math-container">\begin{equation}
P(x) = \dfrac{1}{2x}, \qquad Q(x) = -\frac{1}{3x}
\end{equation}</span></p>
<p>Rewriting y(x) as</p>
<p><span class="math-container">\begin{equation}
y(x) = (3x)^{\frac{1}{2}} (2x + 3c)^{-\frac{1}{2}}
\end{equation}</span></p>
<p>Getting rid of that square root, I'll need it later on to simplify things</p>
<p><span class="math-container">\begin{equation}
[y(x)]^2 = 3x(2x + 3c)^{-1}
\end{equation}</span></p>
<p>Calculating dy/dx</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} &= \frac{1}{2}(3x)^{-\frac{1}{2}}(3)(2x + 3c)^{-\frac{1}{2}} + \left(-\dfrac{1}{2}\right)(2x + 3c)^{-\frac{3}{2}}(2)(3x)^{\frac{1}{2}} \\
&= \frac{3}{2}(3x)^{-\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} - (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{3}{2}} \\
&= (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= y \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= \dfrac{y}{2x} - y(2x + 3c)^{-1} \\
&= \dfrac{y}{2x} - y\left(\dfrac{y^2}{3x}\right) \\
&= \dfrac{y}{2x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>Finally</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} + P(x)y &= \dfrac{y}{2x} - \dfrac{y^3}{3x} + \dfrac{y}{2x} \\
&= \dfrac{y}{x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>which obviously isn't the same as equation 2. I don't know where I screwed up.</p>
| user | 505,767 | <p>Your work seems fine and we obtain</p>
<p><span class="math-container">$$\dfrac{dy}{dx} =\dfrac{y}{2x} - \dfrac{y^3}{3x}$$</span></p>
<p>which should be the correct differential equation.</p>
<p>I don't understand why you have added the term <span class="math-container">$P(x)y$</span> in the last step.</p>
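<p>A numerical residual check supports this (sketch; the constant <span class="math-container">$c=1$</span> and the sample points are arbitrary choices of mine):</p>

```python
import math

# Check that y(x) = sqrt(3x / (2x + 3c)) satisfies the equation the
# question actually derived: y' = y/(2x) - y^3/(3x).

c = 1.0

def y(x):
    return math.sqrt(3 * x / (2 * x + 3 * c))

def rhs(x):
    return y(x) / (2 * x) - y(x) ** 3 / (3 * x)

def dydx(x, h=1e-6):
    """Central finite-difference derivative of y."""
    return (y(x + h) - y(x - h)) / (2 * h)

residual = max(abs(dydx(x) - rhs(x)) for x in (0.5, 1.0, 2.0))
```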
|
3,776,217 | <p>Prove that</p>
<p><span class="math-container">\begin{equation}
y(x) = \sqrt{\dfrac{3x}{2x + 3c}}
\end{equation}</span></p>
<p>is a solution of</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + \dfrac{y}{2x} = -\frac{y^3}{3x}
\end{equation}</span></p>
<p>All the math to resolve this differential equation is already done. The exercise simply asks to prove the solution.</p>
<p>I start by pointing out that it has the form</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + P(x)y = Q(x)y^3
\end{equation}</span></p>
<p>where</p>
<p><span class="math-container">\begin{equation}
P(x) = \dfrac{1}{2x}, \qquad Q(x) = -\frac{1}{3x}
\end{equation}</span></p>
<p>Rewriting y(x) as</p>
<p><span class="math-container">\begin{equation}
y(x) = (3x)^{\frac{1}{2}} (2x + 3c)^{-\frac{1}{2}}
\end{equation}</span></p>
<p>Getting rid of that square root, I'll need it later on to simplify things</p>
<p><span class="math-container">\begin{equation}
[y(x)]^2 = 3x(2x + 3c)^{-1}
\end{equation}</span></p>
<p>Calculating dy/dx</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} &= \frac{1}{2}(3x)^{-\frac{1}{2}}(3)(2x + 3c)^{-\frac{1}{2}} + \left(-\dfrac{1}{2}\right)(2x + 3c)^{-\frac{3}{2}}(2)(3x)^{\frac{1}{2}} \\
&= \frac{3}{2}(3x)^{-\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} - (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{3}{2}} \\
&= (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= y \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= \dfrac{y}{2x} - y(2x + 3c)^{-1} \\
&= \dfrac{y}{2x} - y\left(\dfrac{y^2}{3x}\right) \\
&= \dfrac{y}{2x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>Finally</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} + P(x)y &= \dfrac{y}{2x} - \dfrac{y^3}{3x} + \dfrac{y}{2x} \\
&= \dfrac{y}{x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>which obviously isn't the same as equation 2. I don't know where I screwed up.</p>
| Lutz Lehmann | 115,115 | <p>You could probably get an easier calculation by going over the logarithmic derivative,
<span class="math-container">$$
\log(y(x))=\frac12(\log(3x)-\log(2x+3c))
\\~\\
\implies
\frac{y'(x)}{y(x)}=\frac12\left(\frac1x-\frac2{2x+3c}\right)
=\frac1{2x}-\frac1{3x}y(x)^2
$$</span>
The last step is obtained by doing the minimum to eliminate the constant <span class="math-container">$c$</span>.</p>
<p>As you can see, this again confirms your result.</p>
<hr />
<p>As to the original equation itself, it solves as
<span class="math-container">$$
(y^{-2})'=-2y^{-3}y'=\frac{y^{-2}}x+\frac2{3x}\\
\left(\frac{y^{-2}}x\right)'=\frac2{3x^2}\implies \frac{y^{-2}}x=-\frac2{3x}+c\\
y(x)=\pm\sqrt{\frac3{3cx-2}}
$$</span>
which is quite different from your solution formula.</p>
|
2,886,675 | <p>I suspect the following is exactly true (for positive $\alpha$):</p>
<p>\begin{equation}
\sum_{n=1}^\infty e^{- \alpha n^2 }= \frac{1}{2} \sqrt { \frac{ \pi}{ \alpha} }
\end{equation}</p>
<p>If the above is exactly true, then I would like to know a proof of it.
I accept that proving a particular limit exactly may be far more difficult than just applying a general theorem to show that the limit exists. Also, as the result involves $\pi$, this makes me think the proof could well be a long one, BUT … ?</p>
<p>To give some context, the above series crops up in calculating the 'One Particle Translational Partition Function' for the quantum mechanical 'Particle In A Box'.</p>
| Community | -1 | <p>No doubt that the equality is wrong. For large $\alpha$, the first term dominates and the asymptotic behavior is $e^{-\alpha}$.</p>
<p>I am not even sure that there exists a value of $\alpha$ such that the expressions are equal.</p>
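<p>A numerical comparison supports this (sketch; by the Jacobi theta transformation, $1 + 2\sum_{n\ge1} e^{-\alpha n^2} = \sqrt{\pi/\alpha}$ up to terms of order $e^{-\pi^2/\alpha}$, so the correct small-$\alpha$ statement is $\sum_{n\ge1} e^{-\alpha n^2} = \tfrac12\left(\sqrt{\pi/\alpha}-1\right)$ plus exponentially small corrections; the suspected identity misses the $-\tfrac12$):</p>

```python
import math

# Compare S(alpha) = sum_{n>=1} e^{-alpha n^2} with the suspected value
# (1/2) sqrt(pi/alpha), and with the theta-corrected value
# (sqrt(pi/alpha) - 1)/2, for a small and a large alpha.

def S(alpha, terms=2000):
    return sum(math.exp(-alpha * n * n) for n in range(1, terms + 1))

def claimed(alpha):
    return 0.5 * math.sqrt(math.pi / alpha)

err_small = S(0.001) - claimed(0.001)          # ~ -1/2: suspected identity off
gap_small = S(0.001) - (claimed(0.001) - 0.5)  # ~ 0: corrected identity holds
err_large = S(5.0) - claimed(5.0)              # large: fails badly for big alpha
```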
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $,
and so the limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| E.H.E | 187,799 | <h2>Hint</h2>
<p><span class="math-container">$$\lim\limits_{x \to \infty} x\sin\left(\frac{11}{x}\right)=11\lim\limits_{x \to 0^+} \frac{\sin(11x)}{11x}$$</span></p>
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $,
and so the limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| user | 505,767 | <p>It is true that</p>
<p>$$-1\le \sin \frac{11}{x} \le 1$$</p>
<p>but since $x\to \infty$ we have that $$\sin \frac{11}{x}\to0$$</p>
<p>thus the limit is in the indeterminate form $0\cdot \infty$.</p>
<p>To solve we can set for example $y=\frac1x\to 0$ then</p>
<p>$$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)=\lim_{y \to 0} \frac{\sin\left(11y\right)}{y}=11\cdot\lim_{y \to 0} \frac{\sin\left(11y\right)}{11y}=11$$</p>
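<p>A numerical check of the limit (sketch; the sample points are arbitrary choices of mine):</p>

```python
import math

# x*sin(11/x) approaches 11 from below as x grows: the expression is the
# indeterminate form 0 * infinity, not a divergent product.

values = [x * math.sin(11 / x) for x in (1e2, 1e4, 1e6)]
```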
|
417,064 | <p>Let T be a totally ordered set that is <strong>finite</strong>. Does it follow that the minimum and maximum of T exist?
Since T is finite, I believe there exists a minimal element of T. From that it may be possible to show that the minimal element is the minimum, but I am not sure whether this is the right approach. </p>
| Seirios | 36,434 | <p>By induction, you can show that any linearly ordered set $T$ of cardinality $n>0$ has a minimum and a maximum. </p>
<p>If $n=1$, $T=\{p\}$ and $p$ is the minimum and the maximum of $T$. Then, if $\text{card}(T)=n+1$, take a subset of cardinality $n$ and its minimum $m$ and its maximum $M$. If $p$ is the remaining element of $T$, just distinguish the cases $p<m$, $p>M$ and $m<p<M$.</p>
|
3,991,351 | <p>As stated in the title.</p>
<p>Any (sufficiently well-behaved <span class="math-container">$2\pi$</span>-periodic) function can be expressed as
<span class="math-container">$$f(x)=\frac{a_0}{2}+\sum^{\infty}_{n=1}(a_n\cos(nx)+b_n\sin(nx)) \tag{1}$$</span>
where
<span class="math-container">$$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx \tag{2}$$</span>
<span class="math-container">$$b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)dx \tag{3}$$</span></p>
<hr />
<p>I am trying to show that (1)-(3) are consistent with <span class="math-container">$f(x)=e^{ix}=\cos(x)+i\sin(x)$</span>.</p>
<p>However, for <span class="math-container">$a_n=\frac{1}{\pi}\int^{\pi}_{-\pi}e^{ix}\cos(nx)dx$</span> I got <span class="math-container">$a_n=0$</span>, similarly, <span class="math-container">$b_n=0$</span>.</p>
<p>What's gone wrong? Are the integral limits <span class="math-container">$\int_{-\pi}^{\pi}$</span> not valid in general? Do they require modification for non-periodic functions?</p>
| frog | 84,997 | <p>If you allow for complex coefficients, then the Fourier-series of <span class="math-container">$x\mapsto \cos (x) + \mathrm i \sin(x)$</span> is just itself (by uniqueness of the Fourier-series).
You can formally compute the Fourier-coefficients of <span class="math-container">$x\mapsto \mathrm e^{\mathrm i x}$</span> using integration by parts twice and you obtain (for <span class="math-container">$k>1$</span>)
<span class="math-container">$$
a_k = -\frac{2k \sin(k \pi)}{\pi(k^2-1)}=0,
$$</span>
since <span class="math-container">$k\in\mathbb N$</span> and <span class="math-container">$a_0 = 0$</span> and <span class="math-container">$a_1 = 1$</span>. The completely similar computation (again for <span class="math-container">$k>1$</span>) shows
<span class="math-container">$$
b_k = -\frac{2\mathrm i \sin(k\pi)}{\pi(k^2-1)}=0
$$</span>
and <span class="math-container">$b_0=0$</span> and <span class="math-container">$b_1 =\mathrm i$</span>. Therefore the Forier-series of the two functions agree and they are therefore the same.</p>
<p>EDIT: GEdgar made a very valid point. In order to evaluate the integrals, one can use the ODE he gave, as mentioned in the comments, but then one gets another proof of the identity along the way. So I propose another way of doing it:</p>
<p>We use that <span class="math-container">$f(x) = \mathrm e^{\mathrm i x}$</span> satisfies the ODE <span class="math-container">$f' = \mathrm i f$</span> with the initial datum <span class="math-container">$f(0)=1$</span>. If <span class="math-container">$f$</span> has the Fourier-series
<span class="math-container">$$
f(x) = \frac{a_0}{2}+\sum_{n=1}^\infty (a_n\cos(nx) + b_n\sin(nx))
$$</span>
you can take a formal derivative and see that
<span class="math-container">$$
f'(x) = \sum_{n=1}^\infty (-na_n\sin(nx) + nb_n\cos(nx))
$$</span>
Using the ODE above you get <span class="math-container">$a_0=0$</span> and the relations (for <span class="math-container">$n>0$</span>): <span class="math-container">$- n a_n = \mathrm ib_n$</span> and <span class="math-container">$nb_n = \mathrm i a_n$</span>.
From there you get <span class="math-container">$\mathrm i n^2b_n=\mathrm i b_n$</span> and therefore <span class="math-container">$b_n=0$</span> for <span class="math-container">$n>1$</span>. Similarly you get <span class="math-container">$a_n=0$</span> for <span class="math-container">$n>1$</span>. If <span class="math-container">$n=1$</span> one finds <span class="math-container">$-a_1=\mathrm ib_1$</span> and <span class="math-container">$b_1 = \mathrm i a_1$</span>. But now we know that <span class="math-container">$f(0) =a_1 = 1$</span> and therefore <span class="math-container">$b_1 = \mathrm i$</span>.</p>
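<p>The coefficient values can also be confirmed numerically (a sketch using the midpoint rule, which is exact up to rounding for trigonometric polynomials of low frequency):</p>

```python
import cmath
import math

# Fourier coefficients of f(x) = e^{ix} on [-pi, pi]:
# expect a_1 = 1, b_1 = i, and all other coefficients 0.

N = 4096
H = 2 * math.pi / N

def coeff(trig, n):
    """(1/pi) * integral over [-pi, pi] of e^{ix} trig(nx) dx, midpoint rule."""
    s = 0j
    for k in range(N):
        x = -math.pi + (k + 0.5) * H
        s += cmath.exp(1j * x) * trig(n * x)
    return s * H / math.pi

a0 = coeff(math.cos, 0)   # expect 0
a1 = coeff(math.cos, 1)   # expect 1
b1 = coeff(math.sin, 1)   # expect i
a2 = coeff(math.cos, 2)   # expect 0
```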
|
3,246,240 | <p>I have the following problem</p>
<p><span class="math-container">$\frac{d^2y}{dx^2} + \lambda y = 0 , y'(0)=0$</span> and <span class="math-container">$y(3)=0$</span></p>
<p>I'm trying to solve for the eigenvalues <span class="math-container">$\lambda_n$</span> for <span class="math-container">$n=1,2,3,\dots$</span>
and eigenfunctions <span class="math-container">$y_n$</span> for <span class="math-container">$n=1,2,3,\dots$</span></p>
<p>I'm considering all cases for the values of <span class="math-container">$\lambda$</span>:</p>
<p><span class="math-container">$\lambda = 0$</span>: <span class="math-container">$y= Ax+B$</span> and <span class="math-container">$y' = A$</span> - applying conditions yields <span class="math-container">$A=0=B$</span></p>
<p>Now i get stuck on the cases for <span class="math-container">$\lambda < 0$</span> and <span class="math-container">$\lambda > 0$</span>.</p>
<p>I have tried a similar approach to the <span class="math-container">$\lambda = 0$</span> case by setting <span class="math-container">$\lambda = p^2 >0$</span> for example but I'm unsure where to go from here</p>
<p>Any help or guidance will be <strong>greatly</strong> appreciated!</p>
<p>edit: my latest attempt</p>
<p><img src="https://i.stack.imgur.com/smiUu.jpg" alt="enter image description here"></p>
| uniquesolution | 265,735 | <p>Starting with your inequality
<span class="math-container">$$|\hat f(\alpha +h)-\hat f(\alpha )|\leq \int_{\mathbb R}|f(x)||e^{-2i\pi xh}-1|dx,$$</span>
observe that the r.h.s does not depend on <span class="math-container">$\alpha$</span>, hence
<span class="math-container">$$\sup_{\alpha\in\mathbb{R}}|\hat f(\alpha +h)-\hat f(\alpha )|\leq \int_{\mathbb R}|f(x)||e^{-2i\pi xh}-1|dx,$$</span>
Since <span class="math-container">$f\in L^1(\mathbb{R})$</span>, the r.h.s tends to zero as <span class="math-container">$h\to 0$</span>, (by the dominated convergence theorem), hence
<span class="math-container">$$\lim_{h\to 0}\sup_{\alpha\in\mathbb{R}}|\hat f(\alpha +h)-\hat f(\alpha )|=0,$$</span>
and this is equivalent to saying that <span class="math-container">$\hat{f}$</span> is uniformly continuous.</p>
|
88,319 | <p>I've had the same effect in Mathematica 9 and 10.</p>
<p>I'm trying to color a 3D Plot with another function, let's call it colorFun (it should highlight the areas where the colorFun is above a certain threshold), but ColorFunction seems to use the wrong coordinates.</p>
<p>Horribly colored minimal example</p>
<pre><code>colorFun := Function[{x, y},If[x < y, Red, Blue]]
Plot3D[Evaluate[x^2+y^2],{x,0,1},{y,0,2},ColorFunction->colorFun]
</code></pre>
<p><img src="https://i.stack.imgur.com/ecaTd.png" alt="Holy cow it's ugly!" title="sooo ugly!"></p>
<p>Note that x and y have different intervals plotted, so the divide should not be through the middle. Similar things happen if you change the colorFun to something like <code>y < 0.5</code>.
It seems that the ColorFunction is not using the same coordinates as the function, but rather a kind of normalized version, always going from 0 to 1.</p>
<p>Is this a bug, or is Mathematica beating my ability to understand computers again?</p>
| Mr.Wizard | 121 | <p>You'll get a much crisper output if you use Mesh functionality instead:</p>
<pre><code>Plot3D[x^2 + y^2, {x, 0, 1}, {y, 0, 2},
MeshFunctions -> {#/#2 &},
Mesh -> {{1}},
MeshShading -> {Red, Blue}
]
</code></pre>
<p><img src="https://i.stack.imgur.com/YHxOM.png" alt="enter image description here"></p>
<p>Or with additional grid lines:</p>
<pre><code>Plot3D[x^2 + y^2, {x, 0, 1}, {y, 0, 2},
MeshFunctions -> {#/#2 &, # &, #2 &},
Mesh -> {{1}, 12, 12},
MeshShading -> {{{Red, Blue}}}
]
</code></pre>
<p><img src="https://i.stack.imgur.com/trOTM.png" alt="enter image description here"></p>
|
2,596,700 | <p>We can see, for example, that the years 2009 and 2015 have identical calendars. Similarly, 2000 and 2028.</p>
<p>I read once that, given any year X, there will be another year Y at most 28 years later whose calendar is identical to that of X.</p>
<p>Here I am referring to our usual calendar, the Gregorian.</p>
<p>I have not yet been able to prove such an assertion.
I ask for help.</p>
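<p>As a quick empirical check of the matches mentioned above (an added sketch using only Python's standard library): two Gregorian years have identical calendars exactly when January 1 falls on the same weekday and both years have the same leap status.</p>

```python
import calendar
from datetime import date

# Identical calendars <=> same weekday for Jan 1 and same leap status.
def same_calendar(y1, y2):
    return (date(y1, 1, 1).weekday() == date(y2, 1, 1).weekday()
            and calendar.isleap(y1) == calendar.isleap(y2))

print(same_calendar(2009, 2015))  # True
print(same_calendar(2000, 2028))  # True
```

<p>Scanning the 28 years following any given year with this test is an easy way to probe the assertion computationally.</p>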
| Azlif | 513,870 | <p>Hint:
Write the limit as $$\left[\left(1 - \frac{1}{a_n}\right)^{a_n} \right]^{\frac{n}{a_n}}.$$
The limit inside the bracket is $\frac{1}{e}$; what must the value of $n/{a_n}$, as $n\to \infty$, be, to ensure that the limit converges?</p>
|
786,643 | <p>Considering $$\int\frac{\ln(x+1)}{2(x+1)}dx$$ I first solved it by seeing it as similar to the derivative of $\ln^2(x+1)$, so, multiplying by $\frac22$, the solution is $$\int\frac{\ln(x+1)}{2(x+1)}dx=\frac{\ln^2(x+1)}{4}+const.$$ But then we can solve it using integration by parts, and this is the solution that I found:
$$\frac12\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)-\frac12\int\frac{\ln(x+1)}{x+1}dx$$
Seeing it as an equation, I brought the integral $-\frac12\int\frac{\ln(x+1)}{x+1}dx$ to the left, obtaining $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)+const.$$ that is, $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln^2(x+1)+const.$$ I know that the first solution is correct, but I used two ways of solution that both seem to be correct. How is this possible? Where is the mistake? Thank you in advance for your help!</p>
| RRL | 148,510 | <p>Look at the denominator in the integrand of your final result. The 2 is missing that was present in the original integral. Divide both sides of the equation by 2 and you get the desired result.</p>
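<p>A quick numerical spot-check of this (an added sketch, not part of the original answer): differentiating $\ln^2(x+1)/4$ reproduces the original integrand $\ln(x+1)/(2(x+1))$.</p>

```python
import math

# Central-difference check: d/dx [ln(x+1)^2 / 4] == ln(x+1) / (2 (x+1)).
def antiderivative(x):
    return math.log(x + 1) ** 2 / 4

def integrand(x):
    return math.log(x + 1) / (2 * (x + 1))

h = 1e-6
for x in (0.5, 1.0, 3.0):
    deriv = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    print(x, deriv, integrand(x))  # the last two columns agree
```
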
|
786,643 | <p>Considering $$\int\frac{\ln(x+1)}{2(x+1)}dx$$ I first solved it by seeing it as similar to the derivative of $\ln^2(x+1)$, so, multiplying by $\frac22$, the solution is $$\int\frac{\ln(x+1)}{2(x+1)}dx=\frac{\ln^2(x+1)}{4}+const.$$ But then we can solve it using integration by parts, and this is the solution that I found:
$$\frac12\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)-\frac12\int\frac{\ln(x+1)}{x+1}dx$$
Seeing it as an equation, I brought the integral $-\frac12\int\frac{\ln(x+1)}{x+1}dx$ to the left, obtaining $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)+const.$$ that is, $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln^2(x+1)+const.$$ I know that the first solution is correct, but I used two ways of solution that both seem to be correct. How is this possible? Where is the mistake? Thank you in advance for your help!</p>
| kmitov | 84,067 | <p>Both answers are the same. Can you see that </p>
<p>$\int\frac{\ln(x+1)}{(x+1)}dx=\frac{1}{2}\ln(x+1)\ln(x+1)+const.$</p>
<p>$\frac{1}{2}\int\frac{\ln(x+1)}{(x+1)}dx=\frac{1}{4}\ln(x+1)\ln(x+1)+const.$</p>
|
66,314 | <p>This is very similar to my earlier question <a href="https://mathematica.stackexchange.com/questions/60069/one-to-many-lists-merge">One to Many Lists Merge</a> but somehow different. I have two lists, first column in each list represents its key. I want to merge these two lists. The only problem is that these two lists have some common keys but not all keys are the same and that they are of different lengths. For example, </p>
<pre><code>list1 = {{1, a, aa}, {2, b, bb}, {3, c, cc}, {4, d, dd}, {6, f,
ff}, {7, g, gg}, {13, j, jj}};
list2 = {{1, 10, 100, 1000}, {2, 20, 200, 2000}, {5, 50, 500,
5000}, {6, 60, 600, 6000}, {7, 70, 700, 7000}, {9, 90, 900,
9000}};
</code></pre>
<p>I am trying to merge these lists using their keys. If the key doesn't match, one list should have missing values such as -99.99. The result I am looking for the above two list would be</p>
<pre><code>answerlist = {{1, a, aa, 10, 100, 1000}, {2, b, bb, 20, 200,
2000}, {3, c, cc, -99.99, -99.99, -99.99}, {4, d,
dd, -99.99, -99.99, -99.99}, {5, -99.99, -99.99, 50, 500,
5000}, {6, f, ff, 60, 600, 6000}, {7, g, gg, 70, 700,
7000}, {9, -99.99, -99.99, 90, 900, 9000}, {13, j,
jj, -99.99, -99.99, -99.99}};
</code></pre>
<p>Thank you for your time and support in advance. </p>
| Mr.Wizard | 121 | <p>Another formulation:</p>
<pre><code>merge1[a_List, b_List, pad_] :=
Module[{rules, keys},
rules = Apply[# -> {##2} &, {a, b}, {2}];
keys = Union @@ Keys @ rules;
Join[List /@ keys, ##, 2] & @@
(Lookup[#, keys, pad & /@ #[[1, 2]] ] & /@ rules)
]
</code></pre>
<p>Test:</p>
<pre><code>merge1[list1, list2, -99.99] == answerlist
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
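<p>For comparison (an added sketch in Python, not a translation of the Mathematica code above), the same full outer join with padding can be phrased with dictionaries keyed on the first column. The helper name <code>merge</code> and the small sample lists are hypothetical.</p>

```python
# Full outer join on the first column; missing payloads are padded with `pad`.
def merge(a, b, pad):
    da = {row[0]: row[1:] for row in a}
    db = {row[0]: row[1:] for row in b}
    wa, wb = len(a[0]) - 1, len(b[0]) - 1   # payload widths of each list
    keys = sorted(set(da) | set(db))
    return [[k] + list(da.get(k, [pad] * wa)) + list(db.get(k, [pad] * wb))
            for k in keys]

list1 = [[1, "a"], [2, "b"], [4, "d"]]
list2 = [[1, 10], [3, 30]]
print(merge(list1, list2, -99.99))
# [[1, 'a', 10], [2, 'b', -99.99], [3, -99.99, 30], [4, 'd', -99.99]]
```
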
|
3,970,488 | <p>While solving a differential equation I encountered this derivative: let <span class="math-container">$$z=\frac {dt}{dx} $$</span> I don't understand how they obtain <span class="math-container">$$ \frac {dz}{dx}=z^3\frac {d^2x}{dt^2}$$</span></p>
| Ninad Munshi | 698,724 | <p>Equivalently we have the expression</p>
<p><span class="math-container">$$\frac{dx}{dt}=\frac{1}{z}$$</span></p>
<p>then take <span class="math-container">$\frac{d}{dt}$</span> on both sides and apply chain rule</p>
<p><span class="math-container">$$\frac{d^2x}{dt^2} = \frac{d}{dt}\frac{1}{z} = \frac{dx}{dt}\cdot\left(\frac{d}{dx}\frac{1}{z}\right) = \frac{dx}{dt}\cdot\left(-\frac{1}{z^2}\frac{dz}{dx}\right)$$</span></p>
<p>Putting this together with first equation gives us</p>
<p><span class="math-container">$$\frac{dz}{dx} = -z^3\frac{d^2x}{dt^2}$$</span></p>
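<p>As a concrete numerical sanity check (an added sketch; the invertible pair $t(x)=x^3$ is an assumed example, not from the question), both sides of the final identity agree:</p>

```python
# Assumed example: t(x) = x**3, so x(t) = t**(1/3) and z = dt/dx = 3 x**2.
def x_of_t(t):
    return t ** (1.0 / 3.0)

x0 = 1.3
t0 = x0 ** 3
h = 1e-4

z = 3 * x0 ** 2   # z = dt/dx evaluated at x0
dz_dx = 6 * x0    # exact d/dx of 3 x**2
# d^2x/dt^2 by a central finite difference
d2x_dt2 = (x_of_t(t0 + h) - 2 * x_of_t(t0) + x_of_t(t0 - h)) / h ** 2

print(dz_dx, -z ** 3 * d2x_dt2)  # both sides of dz/dx = -z^3 d^2x/dt^2
```
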
|
115,387 | <p>Have two series, just a quick check of some simple series:</p>
<p>$\sum _{1}^{\infty} \frac {1}{\sqrt {2n^{2}-3}}$</p>
<p>Considering
$\frac {1}{\sqrt {2n^{2}-3}}$ > $\frac {1}{\sqrt {4n^{2}}}$ = $\frac {1}{2n}$</p>
<p>Since
$\sum _{1}^{\infty} \frac {1}{2n}$ $\rightarrow$ Diverges, hence by the comparison test we have that $\sum _{1}^{\infty} \frac {1}{\sqrt {2n^{2}-3}}$
diverges.</p>
<p>The second one is
$\sum _{1}^{\infty} (-1)^{n}(1+\frac{1}{n})^{n}$</p>
<p>Considering $(-1)^{n}(1+\frac{1}{n})^{n} \le |(-1)^{n}(1+\frac{1}{n})^{n}|= (1+\frac{1}{n})^{n}$</p>
<p>We know that $(1+\frac{1}{n})^{n}$ converges, hence we have that $\sum _{1}^{\infty}(1+\frac{1}{n})^{n}$ converges, so again by the comparison test we have that $\sum _{1}^{\infty} (-1)^{n}(1+\frac{1}{n})^{n}$
converges absolutely $\Rightarrow$ converges.</p>
<p>Many thanks in advance.</p>
| David Mitra | 18,986 | <p>The first argument is ok (you should start the series at $n=2$ though). </p>
<p>The second is not. As you imply, for the second series, you should be able to compute the <i>value of</i> the limit of (part of) the <i>sequence of terms you are adding</i>: $\lim\limits_{n\rightarrow\infty}(1+{1\over n})^n$. Then, based on the value of this limit, can the series converge?</p>
<p>Hint: The limit above is not zero. Thus $\lim\limits_{n\rightarrow\infty}[(-1)^n(1+{1\over n})^n]\ne0$. What can you say about a series $\sum\limits_{n=1}^\infty a_n$ if $\lim\limits_{n\rightarrow\infty}a_n\ne0$?</p>
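<p>Numerically (an added illustration), the terms approach $e\approx 2.718$ rather than $0$, so the series fails the $n$-th term test:</p>

```python
import math

# (1 + 1/n)**n tends to e, not 0, so the summands do not vanish.
for n in (10, 1000, 100000):
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)
```
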
|
3,693,735 | <p><span class="math-container">$X \not= \emptyset$</span>, <span class="math-container">$Y \not= \emptyset$</span>, and <span class="math-container">$(X,T)$</span> and <span class="math-container">$(Y,V)$</span> are topological spaces. Let <span class="math-container">$f:X \rightarrow Y$</span> be a homeomorphism and <span class="math-container">$A \subseteq X$</span>. If <span class="math-container">$x \in X$</span> is an accumulation point of <span class="math-container">$A$</span>, show that <span class="math-container">$f(x)$</span> is an accumulation point of <span class="math-container">$f(A)$</span>.</p>
| QuantumSpace | 661,543 | <p>Your answer is correct. Here is a more "standard" approach, using the Sylow-theorems (which should be the first thing you think of given your assumptions):</p>
<p>By Sylow's third theorem we have a unique normal Sylow <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> (use <span class="math-container">$1 < q < p$</span>).</p>
<p>Consider the quotient <span class="math-container">$G/P$</span>. This quotient is cyclic as it has prime order and thus abelian. Therefore, the commutator <span class="math-container">$G'$</span> is contained in <span class="math-container">$P$</span>. I.e. <span class="math-container">$G' \subseteq P$</span>. Because <span class="math-container">$G$</span> is non-abelian, we have more than <span class="math-container">$1$</span> Sylow <span class="math-container">$q$</span>-group (otherwise you can show that the unique Sylow q subgroup <span class="math-container">$Q$</span> contains the commutator <span class="math-container">$G'$</span> and then it will follow that <span class="math-container">$G' =1$</span>, which means that <span class="math-container">$G$</span> is abelian). Thus <span class="math-container">$|Syl_q(G)| = p$</span> (this order must divide the order of <span class="math-container">$G$</span>). However, <span class="math-container">$p =|Syl_q(G)| \equiv 1\bmod q$</span> and your result follows.</p>
<p>Alternatively, if you want to avoid the commutator subgroup, you can argue by contradiction that <span class="math-container">$|Syl_q(G)| =1$</span>, but then there is a normal subgroup <span class="math-container">$Q$</span> of order <span class="math-container">$q$</span> and then it follows that <span class="math-container">$G \cong P \times Q$</span>, so that <span class="math-container">$G$</span> is abelian. Contradiction.</p>
|
273,619 | <p>How to use a <code>list</code> to specify part of another nested list <code>mat</code>? We don't want to write <code>mat[[list[[1]],list[[2]],list[[3]],...]]</code>.</p>
<pre><code>mat = RandomInteger[10, {5, 6, 7, 8}];
list = {3, 4, 6};
mat[[3, 4, 6]]
</code></pre>
| lericr | 84,894 | <p>Couple of options.</p>
<p>Use Extract instead:</p>
<pre><code>Extract[mat, list]
</code></pre>
<p>Apply Sequence to the list:</p>
<pre><code>mat[[Sequence @@ list]]
</code></pre>
|
273,619 | <p>How to use a <code>list</code> to specify part of another nested list <code>mat</code>? We don't want to write <code>mat[[list[[1]],list[[2]],list[[3]],...]]</code>.</p>
<pre><code>mat = RandomInteger[10, {5, 6, 7, 8}];
list = {3, 4, 6};
mat[[3, 4, 6]]
</code></pre>
| Syed | 81,355 | <p>A variation could be:</p>
<pre><code>Fold[Part, mat, list]
</code></pre>
<blockquote>
<p><code>{9, 0, 9, 2, 6, 5, 4, 3}</code></p>
</blockquote>
|
1,771,920 | <p>Okay so here's the question </p>
<blockquote>
<p>Seventy percent of the light aircraft that disappear while in flight
in a certain country are subsequently discovered. Of the aircraft that
are discovered, 60% have an emergency locator, whereas 90% of the
aircraft not discovered do not have such a locator. Suppose that a
light aircraft has disappeared. If it has an emergency locator, what
is the probability that it will be discovered?</p>
</blockquote>
<p>And here's <strong>my</strong> answer:</p>
<p><a href="https://i.stack.imgur.com/IyadD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IyadD.png" alt="answer"></a></p>
<p>The answer to this question was, however, given as 93%. I don't understand how they got that answer and I was pretty confident in my solution. Can someone either tell me the answer given in the text is incorrect or what's wrong with my solution?</p>
<p>Thanks so much!</p>
| JKnecht | 298,619 | <p>$P(D \mid E)$ </p>
<p>$= P(D \cap E)/P(E)$</p>
<p>$= P(D \cap E)/P((E \cap D) \cup (E \cap \overline{D}))$</p>
<p>$= P(D \cap E)/\{P(E \cap D) + P(E \cap \overline{D})\}$</p>
<p>$= \{P(D) \cdot P(E \mid D)\}/\{(P(D) \cdot P(E \mid D)) + P(\overline{D}) \cdot P(E \mid \overline{D})\}$</p>
<p>$= (0.70 \cdot 0.60)/((0.70 \cdot 0.60) + (0.30 \cdot 0.10))$</p>
<p>$= 0.93$</p>
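<p>The arithmetic in the last two lines can be spot-checked directly (an added sketch):</p>

```python
# Bayes' rule with the numbers from the problem.
p_d = 0.70               # P(D): aircraft is discovered
p_e_given_d = 0.60       # P(E | D): locator, given discovered
p_e_given_not_d = 0.10   # P(E | not D) = 1 - 0.90

numerator = p_d * p_e_given_d
p_e = numerator + (1 - p_d) * p_e_given_not_d
print(round(numerator / p_e, 4))  # 0.9333
```
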
|
202,379 | <p>Suppose for some constants $\alpha,\beta,\gamma$ that we're given the following ODE: $$\alpha y''+\beta xy'+\gamma y=0.$$ Now, I know how to find the general solution for $y(x)$ if any of $\alpha,\beta,\gamma$ should turn out to be $0$, but I've just ended up with the ODE $$2y''+xy'+y=0.$$ Can anybody give me the first (few) step(s) of a general procedure one can use for such ODEs?</p>
| Robert Israel | 8,508 | <p>Maple gives the general solution using the Kummer M and U functions.</p>
<p>$$ y\left( x \right) = c_{1}\, x\, \mathrm{e}^{-\frac{\beta x^{2}}{2\alpha}}\, {\rm M}\!\left(\frac{2\beta-\gamma}{2\beta},\ \frac{3}{2},\ \frac{\beta x^{2}}{2\alpha}\right) + c_{2}\, x\, \mathrm{e}^{-\frac{\beta x^{2}}{2\alpha}}\, {\rm U}\!\left(\frac{2\beta-\gamma}{2\beta},\ \frac{3}{2},\ \frac{\beta x^{2}}{2\alpha}\right) $$</p>
<p>It could also be written in terms of hypergeometric functions.</p>
|
202,379 | <p>Suppose for some constants $\alpha,\beta,\gamma$ that we're given the following ODE: $$\alpha y''+\beta xy'+\gamma y=0.$$ Now, I know how to find the general solution for $y(x)$ if any of $\alpha,\beta,\gamma$ should turn out to be $0$, but I've just ended up with the ODE $$2y''+xy'+y=0.$$ Can anybody give me the first (few) step(s) of a general procedure one can use for such ODEs?</p>
| Luis Costa | 39,495 | <p>The general form as you have it, is a hypergeometric differential equation. You can manipulate it into a standard form and then apply the Frobenius method. It's already worked out here for several cases:</p>
<p><a href="http://en.wikipedia.org/wiki/Frobenius_solution_to_the_hypergeometric_equation" rel="nofollow">http://en.wikipedia.org/wiki/Frobenius_solution_to_the_hypergeometric_equation</a></p>
|
701,241 | <p><code>¬(p∨q)∧(p∨r)</code> Does this mean the negation of both <code>(p∨q)</code> and <code>(p∨r)</code> or just <code>(p∨q)</code>?
If it was just <code>p∨q</code>, it would make more sense to me for the negation to be inside the brackets, like <code>(¬p∨q)</code>, but maybe that's just the programmer in me. I have also seen <code>(¬p∨¬q)</code>; does that mean the same as <code>¬(p∨q)</code>? It starts to get rather confusing.</p>
<p>Does it have to do with the order of evaluation, for example evaluating <code>(p∨q)</code> first and then its negation?</p>
| fgp | 42,986 | <p>Usually negation is assumed to have rather high precedence, i.e. it binds strongly to whatever stands to the right of it. So $\lnot a \land b$ means $(\lnot a)\land b$, and $\lnot a \lor b$ similarly means $(\lnot a )\lor b$. Whether $a$ and $b$ are variables or themselves expressions is irrelevant, so $\lnot(p\lor q)\land(p\lor r)$ means $(\lnot(p\lor q))\land(p\lor r)$.</p>
<p>Finally, $(\lnot p \lor \lnot q)$ means $((\lnot p) \lor (\lnot q))$.</p>
<p>$\lnot$ is comparable to the unary minus sign here: you'd read $-a+b$ as $(-a)+b$, <em>not</em> as $-(a+b)$.</p>
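<p>For comparison (an added aside, not part of the original answer), Python's boolean operators follow the same convention: <code>not</code> binds tighter than <code>and</code>, so the two readings can be compared directly.</p>

```python
p, q = True, False
print(not p and q)    # False : parsed as (not p) and q
print(not (p and q))  # True  : explicit grouping changes the meaning
```
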
|