974,207
<p>Given the sequence <span class="math-container">$\{b_n\}$</span>, let <span class="math-container">$\lim_{n \to \infty}\ b_n = b$</span>.</p> <p>Suppose that the sequence <span class="math-container">$\{a_n\}$</span> and the number <span class="math-container">$a$</span> have the property that there exist <span class="math-container">$M\in \mathbb{R}$</span> and <span class="math-container">$N \in \mathbb{N}$</span> such that</p> <p><span class="math-container">$$|a_n - a| \leq M\cdot |b_n - b|, \ \forall n\in \mathbb{N}: \ n \geq N$$</span></p> <p>Prove that <span class="math-container">$\lim_{n \to \infty} \ a_n = a$</span>.</p> <hr> <p>I need to show that:</p> <p><span class="math-container">$$\forall \epsilon &gt; 0 \ \ \exists N \in \mathbb{N}: \ \forall n \geq N: |a_n - a| &lt; \epsilon$$</span></p> <p>I know how to set <span class="math-container">$\epsilon$</span> such that <span class="math-container">$\epsilon &gt; 0$</span>. I’m lost from here. Because <span class="math-container">$\{b_n\}$</span> converges, I know</p> <p><span class="math-container">$|b_n - b| &lt; \epsilon$</span></p> <p>And I <em>think</em> it’s safe to assume:</p> <p><span class="math-container">$|b_n - b| \leq M\cdot |b_n - b|$</span>.</p> <p>So I could prove this either by showing</p> <p><span class="math-container">$$|a_n - a| \leq |b_n - b|$$</span></p> <p>Or,</p> <p><span class="math-container">$$M \cdot |b_n - b| &lt; \epsilon$$</span></p> <p>But I’m not sure how to start either way. Any suggestions?</p>
Community
-1
<p>Let $\epsilon&gt;0$. Replacing $M$ by $\max(M,1)$ if necessary, we may assume $M&gt;0$. Since $\lim_{n\to\infty }b_n=b$, there's $n_0\in\Bbb N$ such that </p> <p>$$|b_n-b|\le\frac {\epsilon}M\quad\text{whenever}\; n\ge n_0$$ so for $n\ge n_0$ we have</p> <p>$$|a_n-a|\le M|b_n-b|\le\epsilon$$ and the result follows.</p>
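A quick numerical illustration of this squeeze argument (the particular sequences below are invented for the purpose; any pair satisfying the hypothesis behaves the same way):

```python
# Numerical sanity check: if |a_n - a| <= M*|b_n - b| and b_n -> b,
# then a_n -> a.  The sequences are made up for illustration.
M = 3.0
b = 2.0
a = 5.0

def b_n(n):        # b_n -> b, since the perturbation (-1)^n / n -> 0
    return b + (-1) ** n / n

def a_n(n):        # chosen so |a_n - a| = (M/2)*|b_n - b| <= M*|b_n - b|
    return a + (M / 2) * (b_n(n) - b)

# Given eps, the proof takes n_0 with |b_n - b| <= eps/M for n >= n_0;
# here |b_n - b| = 1/n, so n_0 = ceil(M/eps) works.
eps = 1e-3
n_0 = int(M / eps) + 1
assert all(abs(a_n(n) - a) <= eps for n in range(n_0, n_0 + 1000))
```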
2,027,888
<h2>Exercise</h2> <p>If $H$ is the Heaviside function, prove, using the definition below, that $\lim \limits_{t \to 0}{H(t)}$ does not exist.</p> <hr> <h2>Definition</h2> <blockquote> <p>Let $f$ be a function defined on some open interval that contains the number $a$, except possibly $a$ itself. Then we say that the limit of $f(x)$ as $x$ approaches $a$ is $L$, and we write $$\lim \limits_{x \to a}{f(x)} = L$$ if for every number $\epsilon &gt; 0$ there is a number $\delta &gt; 0$ such that $$\text{if } 0 &lt; |x - a| &lt; \delta \text{ then } |f(x) - L| &lt; \epsilon$$</p> </blockquote> <hr> <h2>Hint</h2> <blockquote class="spoiler"> <p> Use an indirect proof as follows. Suppose that the limit is $L$. Take $\epsilon = \frac{1}{2}$ in the definition of a limit and try to arrive at a contradiction.</p> </blockquote> <hr> <h2>Attempt</h2> <p>Let $\delta$ be any (preferably small) positive number.</p> <p>$H(0 - \delta) = H(-\delta) = 0$</p> <p>$H(0 + \delta) = H(\delta) = 1$</p> <p>$H(0 - \delta) =^? H(0 + \delta) \implies 0 =^? 1 \implies 0 \neq 1 \implies H(0 - \delta) \neq H(0 + \delta)$</p> <p>$\lim \limits_{t \to 0^-}{H(t)} \neq \lim \limits_{t \to 0^+}{H(t)} \implies \lim \limits_{t \to 0}{H(t)}$ does not exist</p> <hr> <h2>Request</h2> <p>I don't even know where to begin, even with the hint.</p> <p><strong>Can someone kickstart the proof for me?</strong>$^1$</p> <p><strong>$^1$ Update:</strong> I've come up with an attempt. Is it valid? It seems that I don't use the hint to my advantage; so if indeed my attempt is correct, what is the alternative proof using the hint?</p>
John11
300,543
<p>Your attempt expresses the right idea but doesn't directly use the definition. </p> <p>Here is one way to do it following the hint:</p> <p>Suppose $\lim_{t\rightarrow 0}H(t)=L$. Then for every $\epsilon &gt;0$, there exists $\delta &gt;0$ such that $\left | H(t)-L \right |&lt; \epsilon $ if $\left | t-0 \right |=\left | t \right |&lt;\delta $. In particular it must work for $\epsilon =\frac{1}{2}$.</p> <p>Take $t=\frac{\delta }{2}&lt;\delta $ and $t^{'}=-\frac{\delta }{2}&gt; -\delta $. Then we have:</p> <p>$\left | H(t)-L \right |=\left | 1-L \right |&lt; \frac{1}{2}$</p> <p>Similarly, we have:</p> <p>$\left | H(t^{'})-L \right |=\left | 0-L \right |=\left | L \right |&lt; \frac{1}{2}$</p> <p>Now, using the triangle inequality:</p> <p>$1=\left | (1-L)+L \right |\leq \left | 1-L \right |+\left | L \right |&lt; \frac{1}{2}+\frac{1}{2}=1 $</p> <p>But wait! We have reached the following contradiction: $1&lt; 1$</p> <p>We have thus shown that $\lim \limits_{t \to 0}{H(t)}$ doesn't exist.</p>
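The heart of the contradiction, that no single candidate limit L can lie within 1/2 of both 0 and 1, is easy to check numerically (an illustration of the argument, not a substitute for the proof):

```python
# Whatever L is, |1 - L| + |0 - L| >= 1 by the triangle inequality,
# so the two quantities cannot both be < 1/2 simultaneously.
def H(t):          # Heaviside function
    return 1 if t >= 0 else 0

delta = 0.01       # any delta > 0 works
t_pos, t_neg = delta / 2, -delta / 2
for L in [x / 100 for x in range(-100, 201)]:   # scan candidate limits
    assert abs(H(t_pos) - L) >= 0.5 or abs(H(t_neg) - L) >= 0.5
```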
829,433
<p><img src="https://i.stack.imgur.com/FgnTN.jpg" alt="enter image description here"></p> <p>Why is it that when I take the average of the "per hour" column, it's different from the total?</p>
cnick
133,048
<p>Your total column is weighting the per hour numbers by the number of hours.</p> <p>That is, if you straight average the per hour numbers, $$ (20+10+11+3)/4 = 11, $$ you are not getting the average number of 'per hours' because there is more time spent on 'audits' than on 'invoices'</p> <p>The actual average would be $$ 10379/1379.70 = \frac{(785.84*3 + 302.65*10 + 188.20*20 + 103.01*11)}{(785.84+302.65+188.20+103.01)} = 7.5 $$ </p>
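The two averages can be reproduced in a few lines (the hours and per-hour figures are read off the screenshot, so treat them as approximate):

```python
# Straight (unweighted) mean of the per-hour column vs. the hour-weighted
# mean that the "total" row effectively computes.
hours    = [785.84, 302.65, 188.20, 103.01]   # approximate, from the image
per_hour = [3, 10, 20, 11]

straight = sum(per_hour) / len(per_hour)      # (20+10+11+3)/4 = 11
weighted = sum(h * r for h, r in zip(hours, per_hour)) / sum(hours)

assert straight == 11.0
assert abs(weighted - 7.45) < 0.01            # matches the ~7.5 in the answer
```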
163,585
<p>Which (finite, undirected) graphs have this property?</p> <p>Every vertex $v$ can be labeled with a positive integer $l(v)$.</p> <p>Variant 1: For each vertex $v$, $l(v) \geq \sum_{[v,w] \in E,\, w \neq v} l(w)/2$.</p> <p>Variant 2: For each vertex $v$, $l(v) &gt; \sum_{[v,w] \in E,\, w \neq v} l(w)/2$.</p>
Qiaochu Yuan
232
<p>The <a href="http://en.wikipedia.org/wiki/ADE_classification" rel="nofollow">simply laced Dynkin diagrams</a> $A_n, D_n, E_6, E_7, E_8$ are precisely the connected (simple, undirected) graphs with <a href="http://en.wikipedia.org/wiki/Spectral_radius" rel="nofollow">spectral radius</a> less than $2$. This is more or less equivalent to a closely related result I prove in <a href="http://qchu.wordpress.com/2010/04/27/the-mckay-correspondence-i/" rel="nofollow">this blog post</a> describing the connected (simple, undirected) graphs of spectral radius exactly $2$.</p> <p>The condition you wrote down might be equivalent to this condition, but I can't easily see the equivalence if it is. The Wikipedia article suggests that there is a characterization using the Laplacian (the spectral radius is defined using the adjacency matrix) that looks like what you're saying and gives references. </p>
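The spectral-radius characterization is easy to test numerically; the helper names below (`adjacency_path`, `spectral_radius`) are made up for this sketch, which uses only pure-Python power iteration:

```python
import math

def adjacency_path(n):
    # path graph on n vertices: the Dynkin diagram A_n
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

def adjacency_cycle(n):
    # cycle on n vertices: not a Dynkin diagram, spectral radius exactly 2
    A = adjacency_path(n)
    A[0][n - 1] = A[n - 1][0] = 1
    return A

def spectral_radius(A, iters=1000):
    # power iteration on B = A + 2I; the shift makes the top eigenvalue
    # simple even for bipartite graphs (whose spectra are symmetric),
    # so the iteration converges to rho(A) + 2
    n = len(A)
    B = [[A[i][j] + 2 * (i == j) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    top = 1.0
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        top = max(abs(x) for x in w)
        v = [x / top for x in w]
    return top - 2

# A_7 (a path) has spectral radius 2*cos(pi/8) < 2; the 7-cycle has exactly 2
assert spectral_radius(adjacency_path(7)) < 2
assert abs(spectral_radius(adjacency_path(7)) - 2 * math.cos(math.pi / 8)) < 1e-6
assert abs(spectral_radius(adjacency_cycle(7)) - 2) < 1e-6
```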
3,270,382
<p>I'm having trouble constructing (and understanding the construction of) a metric that induces the dictionary order topology on <span class="math-container">$\mathbb R \times \mathbb R$</span>. There are example metrics posted here on Math.SE, but that is not what I'm looking for. I'd like to understand how to <strong>approach</strong> constructing the proper metric. My first attempt was to define <span class="math-container">$$d((x_1,x_2),(y_1,y_2)) = \begin{cases} |y_1 - x_1|, &amp; \mbox{if } 0 &lt; |y_1 - x_1| \\ |y_2 - x_2|, &amp; \mbox{if } x_1 = y_1, \mbox{and } 0 &lt; |y_2 - x_2| \\ 0, &amp; \mbox{if } x = y \end{cases}$$</span> However, even if I didn't make a mistake in my calculations, and this is in fact a metric, I don't know how to approach proving that <span class="math-container">$d$</span> induces the dictionary order topology. To summarize, I'd like to know how to approach constructing the metric and whether my first attempt even works.</p>
freakish
340,986
<p>I assume that by "dictionary" order you mean lexicographical order. Note that there is more than one way to define <a href="https://en.wikipedia.org/wiki/Product_order" rel="nofollow noreferrer">ordering on a product of posets</a>. In particular <span class="math-container">$(a,b)&lt;(a',b')$</span> if and only if <span class="math-container">$a&lt;a'$</span> <strong>or</strong> (<span class="math-container">$a=a'$</span> and <span class="math-container">$b&lt;b'$</span>).</p> <p>So the order topology is generated by open intervals. To avoid confusion I will write <span class="math-container">$I[x,y]$</span> to denote an open interval in a given poset and <span class="math-container">$(x,y)$</span> to denote a pair of points. What exactly are these intervals in the case of lexicographical ordering? If <span class="math-container">$a&lt;a'$</span> then <span class="math-container">$(a,b)&lt;(a',b')$</span> for any <span class="math-container">$b,b'$</span> and so</p> <p><span class="math-container">$$I[(a,b), (a',b')]=\{a,a'\}\times I[b,b']\cup \{x\in\mathbb{R}\ |\ a&lt;x&lt;a'\}\times\mathbb{R}$$</span></p> <p>These can be further reduced by noticing that each such subset is a union of <span class="math-container">$\{x\}\times I[b,b']$</span> which are elements of the basis as well. So this is the basis of the order topology. Intuitively vertical, one-dimensional lines are open.</p> <p>Now given any metric we construct a topology out of it by taking open balls as a basis. So what are open balls in your metric? Assume that <span class="math-container">$r&gt;0$</span> and let <span class="math-container">$(a,b)\in\mathbb{R}^2$</span>. 
If <span class="math-container">$d((a,b), (a',b'))&lt;r$</span> then we have two cases: (1) if <span class="math-container">$a\neq a'$</span> then <span class="math-container">$|a-a'|&lt;r$</span> and <span class="math-container">$b,b'$</span> can be arbitrary and (2) if <span class="math-container">$a=a'$</span> then <span class="math-container">$|b-b'|&lt;r$</span>. In particular the open ball <span class="math-container">$B((a,b), r)$</span> can be written as</p> <p><span class="math-container">$$(I[a-r, a+r]\backslash\{a\})\times\mathbb{R}\cup\{a\}\times I[b-r, b+r]$$</span></p> <p>Intuitively this is sort of the opposite of the order topology basis. These are "squares" infinitely wide everywhere except for the middle.</p> <p>But do these collections generate the same topology? Unfortunately, they don't. First of all note that given two bases <span class="math-container">$\mathcal{B},\mathcal{B'}$</span> and corresponding topologies <span class="math-container">$\mathcal{T},\mathcal{T'}$</span> we have <span class="math-container">$\mathcal{T}\subseteq\mathcal{T'}$</span> if and only if for any <span class="math-container">$B\in\mathcal{B}$</span> and any <span class="math-container">$x\in B$</span> there is <span class="math-container">$B'\in\mathcal{B'}$</span> such that <span class="math-container">$x\in B'\subseteq B$</span>. <a href="https://math.stackexchange.com/questions/1165414/how-to-show-2-bases-generate-the-same-topology/1165424">See here</a>.</p> <p>With that you can check that your metric topology is a subset of the order topology. But not the other way around. Balls in your metric topology are too big, they don't fit in one-dimensional lines.</p> <p>So how to fix that? Well, you were on the right track. You just gave too much freedom on the first coordinate, in particular you allowed it to be arbitrarily small. And so open balls always "catch" infinitely many elements from the first coordinate. You can fix that by slightly modifying your metric. 
Put </p> <p><span class="math-container">$$d((x_1,x_2),(y_1,y_2)) = \begin{cases} 1+|x_1-y_1|, &amp; \mbox{if } 0 &lt; |y_1 - x_1| \\ |y_2 - x_2|, &amp; \mbox{if } x_1 = y_1, \mbox{and } 0 &lt; |y_2 - x_2| \\ 0, &amp; \mbox{if } x = y \end{cases}$$</span></p> <p>Now note that for small enough <span class="math-container">$r$</span>, i.e. <span class="math-container">$0&lt;r&lt;1$</span> the open ball <span class="math-container">$B((a,b), r)$</span> is indeed a vertical line <span class="math-container">$\{a\}\times I[b-r,b+r]$</span>.</p>
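As a sanity check of the modified metric (a sketch; the function `d` below is just a direct transcription of the case definition above):

```python
# The modified metric: points with different first coordinates are at
# distance > 1, so for r < 1 the open ball B((a,b), r) is exactly the
# vertical segment {a} x (b - r, b + r).
def d(p, q):
    (x1, x2), (y1, y2) = p, q
    if x1 != y1:
        return 1 + abs(x1 - y1)
    return abs(x2 - y2)

a, b, r = 0.0, 0.0, 0.5
# no point with a different first coordinate fits in the ball of radius 1/2:
assert all(d((a, b), (a + dx, y)) > 1
           for dx in (0.1, -0.3, 2.0) for y in (-5, 0, 5))
# while vertically nearby points are inside it:
assert d((a, b), (a, b + 0.4)) < r
```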
2,006,870
<p>Is the following inequality true?</p> <p>$\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{ij}\,a_{ik}\,a_{jl}\,a_{kl} \right) \leq \left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^{1/2}\left( \sum \limits_{i=1}^\infty \sum \limits_{k=1}^\infty a_{ik}^2 \right)^{1/2}\left( \sum \limits_{j=1}^\infty \sum \limits_{l=1}^\infty a_{jl}^2 \right)^{1/2}\left( \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{kl}^2 \right)^{1/2}=\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^2$</p> <p>where $a_{ij}$s are real numbers.</p>
RGS
329,832
<p>To see if your method works it is enough to see if</p> <p>$$x\cdot y + x - y = x $$</p> <p>or</p> <p>$$x\cdot y + x - y = y $$</p> <p>for some $x, y $.</p> <p>Solving the first we get</p> <p>$$ x\cdot y + x - y = x \iff x \cdot y - y = 0 \iff y(x-1) = 0 \iff x = 1 \vee y = 0$$</p> <p>So your equation returns $x$ (regardless of $y $) if $x = 1$, or if $y = 0$ (regardless of $x $).</p> <p>$$x\cdot y + x - y = y \iff x \cdot y + x = 2y \iff (y + 1)x = 2y \stackrel{y \not= -1}{\iff} x = \frac {2y}{y + 1}$$</p> <p>Hence if $y \not= -1 \wedge x = \frac {2y}{y + 1}$ your method returns $y $.</p> <p>Therefore your method is not 100% perfect. If you change it to</p> <p>$$x\cdot y + x + y $$</p> <p>then by symmetry it would be enough to check that neither number is $0$ or $-1$ to ensure it works.</p>
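The failure families found above can be confirmed numerically (a quick check, not part of the original answer):

```python
# With y != -1 and x = 2y/(y+1), x*y + x - y equals y, so the "method"
# cannot tell the two numbers apart; and x = 1 or y = 0 always returns x.
def method(x, y):
    return x * y + x - y

for y in [3.0, -0.5, 7.0]:
    x = 2 * y / (y + 1)                 # the failure family for output y
    assert abs(method(x, y) - y) < 1e-12
assert method(1.0, 123.0) == 1.0        # x = 1 returns x regardless of y
assert method(42.0, 0.0) == 42.0        # y = 0 returns x regardless of x
```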
3,059,150
<p>Does there exist any open linear (vector) subspace of a Hilbert space? I could not think of any example.</p> <p>Actually, I was reading the book by Simmons, where almost every theorem assumes that "M is a closed linear subspace". It seemed natural to me to think about subspaces which are not closed. I have got an example which is not closed: take the Hilbert space <strong>H = L^2[0,1] with the L^2 norm</strong> and the subspace <strong>of all polynomials</strong>; it is not closed because its closure is <strong>H</strong>, and it is not open, as can be seen here: <a href="https://math.stackexchange.com/q/385464/428326">Set of all polynomials on [0, 1/2] is not open in C[0, 1/2]</a>. Then I tried to think of an example of an open subspace, but I could get nowhere, as I am not familiar with infinite-dimensional vector spaces. Not closed does not necessarily mean open.</p>
Robert Lewis
67,071
<p>No.</p> <p>Let <span class="math-container">$N$</span> be a normed space and</p> <p><span class="math-container">$M \subsetneq N \tag 1$</span></p> <p>a <em>proper</em> subspace. Then <span class="math-container">$M$</span> contains no nonempty open set. For if</p> <p><span class="math-container">$\emptyset \ne U \subset M \tag 2$</span></p> <p><em>were</em> open, with</p> <p><span class="math-container">$M \ni m \in U, \tag 3$</span></p> <p>we could find <span class="math-container">$\rho &gt; 0$</span> such that the open ball</p> <p><span class="math-container">$B(m, \rho) \subset U; \tag 4$</span></p> <p>then picking any </p> <p><span class="math-container">$0 \ne v \in N \setminus M \tag 5$</span></p> <p>the vector </p> <p><span class="math-container">$m + \alpha (v - m) \in B(m, \rho) \tag 6$</span></p> <p>if <span class="math-container">$0 \ne \alpha \in \Bbb R$</span> is sufficiently small, since</p> <p><span class="math-container">$\Vert (m + \alpha (v - m)) - m \Vert = \Vert \alpha (v - m) \Vert = \vert \alpha \vert \Vert v - m \Vert &lt; \rho \tag 7$</span></p> <p>for</p> <p><span class="math-container">$\vert \alpha \vert &lt; \dfrac{\rho}{\Vert v - m \Vert}; \tag 8$</span></p> <p>but then</p> <p><span class="math-container">$m + \alpha(v - m) \in M, \tag 9$</span></p> <p>whence</p> <p><span class="math-container">$\alpha(v - m) = m + \alpha(v - m) - m \in M, \tag{10}$</span></p> <p>whence</p> <p><span class="math-container">$v - m \in M, \tag{11}$</span></p> <p>whence</p> <p><span class="math-container">$v = v - m + m \in M, \tag{12}$</span></p> <p>in contradiction to (5); therefore no <span class="math-container">$B(m, \rho)$</span> as in (4) can exist, and <span class="math-container">$M$</span> cannot be open, since it contains no nonempty open set.</p>
142,858
<p>Let $f:X\to Y$ be a proper surjection of complex algebraic varieties. Let $H_i$ denote Borel-Moore homology. Then $$ \mathrm{Gr}^W_{-k} H_k(X) \to \mathrm{Gr}^W_{-k} H_k(Y) $$ is surjective.</p> <p><strong>Question:</strong> Does anyone know a reference for this fact?</p> <p>I have a proof, but it's not as simple as it could be. And it uses generic smoothness of $f$, so it's only valid in characteristic zero.</p> <hr> <p>The cycle class map from Chow groups to Borel-Moore homology lands in the lowest weight part, so the above is somehow analogous to the fact that proper pushforward is surjective in Chow for a surjective map.</p> <p>If $f$ instead is an open immersion, then the induced map on lowest weights is also surjective. This is similarly analogous to the fact that flat pullback is surjective in Chow for open immersions. But here the proof for the result in Borel-Moore homology is very simple, which is one reason I think there should be a simple proof of the above result, too.</p> <hr> <p>Addendum. Here's my proof, it's very similar to the one linked to by novice. However, the proof in Lewis's book doesn't need generic smoothness; it uses instead the existence of a subvariety mapping generically finitely onto $Y$, as in the suggestion of ACL below.</p> <p>Note first that if $f$ is in addition smooth, then $H_k(X) \to H_k(Y)$ is onto and there's not much to it. For the general case take $U \subset X$ where $f$ is smooth, and let $Z = X \setminus U$. Then there is a map between the long exact sequences $$ \cdots \to H_k(Z) \to H_k(X) \to H_k(U) \to H_{k-1}(Z) \to \cdots $$ and $$ \cdots \to H_k(f(Z)) \to H_k(Y) \to H_k(Y \setminus f(Z)) \to H_{k-1}(f(Z)) \to \cdots $$ as follows. The maps $H_\bullet(X) \to H_\bullet(Y)$ and $H_\bullet(Z) \to H_\bullet(f(Z))$ are the obvious ones. The map $H_\bullet(U) \to H_\bullet(Y \setminus f(Z))$ is the composite $$ H_\bullet(U) \to H_\bullet(X \setminus f^{-1}(f(Z))) \to H_\bullet(Y \setminus f(Z)). 
$$</p> <p>Now apply $W_{-k}$ to the long exact sequences. The two maps $H_k(Z) \to H_k(f(Z))$ and $H_k(U) \to H_k(Y \setminus f(Z))$ are surjective on lowest weights: the former by noetherian induction and the latter because it's the composition of pullback for an open immersion and a pushforward for a smooth proper morphism. Since in addition $W_{-k}H_{k-1}(Z) = W_{-k}H_{k-1}(f(Z)) = 0$ the result follows by the four lemma.</p>
Geordie Williamson
919
<p>I guess this is standard and there is a citable reference. I think the following is an argument which only uses the formalism (e.g. also works in the étale case). (This is an edited version of my first incorrect answer.)</p> <p>Firstly, if $X \to Y \to Z \stackrel{+1}{\to}$ is a distinguished triangle with $X, Y$ of wt $\ge 0$, then $Z$ is of weight $\ge 0$. (This is easy if one thinks about Frobenius eigenvalues.)</p> <p>Dually, if $X \to Y \to Z \stackrel{+1}{\to}$ is a dt with $Y, Z$ of wt $\le 0$ then $X$ is of wt $\le 0$.</p> <p>Now the constant sheaf $k_Y$ on $Y$ is of wt $\le 0$. Hence the dualizing sheaf $\omega_Y$ on $Y$ is of wt $\ge 0$.</p> <p>Let $f : X \to Y$ be as in your question. Consider the distinguished triangle $K \to f_!f^!\omega_Y \to \omega_Y \stackrel{+1}{\to}$ (where $K$ is defined as the shift of the cone over the adjunction morphism $f_!f^! \to id$). The above remarks show that $K$ is of weights $\ge -1$. </p> <p><b>Claim:</b> We are done if we can show $K$ is of weights $\ge 0$.</p> <p><i>Proof:</i> Pushing to a point we get a long exact sequence</p> <p>$\dots \to H^k(Y,K) \to H^k(X,\omega_X) \to H^k(Y,\omega_Y) \to H^{k+1}(Y,K) \to \dots$</p> <p>now everything in $H^{k+1}(Y,K)$ is of weight $\ge k+1$ (because $*$-pushforward preserves wt $\ge 0$). We conclude that we have a surjection</p> <p>$gr_W^kH^k(X,\omega_X) \to gr_W^kH^k(Y,\omega_Y)$</p> <p>Finally, because $H_k^{BM}(X) = H^{-k}(X,\omega_X)$ (Borel-Moore homology) the claim follows.</p> <p>We now give a sketch of how to prove the claim. Because this argument is getting more complicated than I had first intended, I'll only give a sketch; I can try to provide more details if it is useful for you.</p> <p>Consider a weight filtration $W$ on $\omega_Y$ (it is not "the" weight filtration because $\omega_Y$ is not necessarily perverse). I claim that $gr^W_{\le 0}(\omega_Y) = gr^W_0(\omega_Y)$ is isomorphic to $IC(Y)[d_Y](d_Y)$ (where $d_Y = \dim Y$). 
The basic idea is that $\omega_Y$ does not have any sections supported on subvarieties, and any other $IC$ in $gr_{\le 0}(\omega_Y)$ would contribute such a forbidden section. Similarly, if $W$ denotes a weight filtration on $\omega_X$ then $gr^W_{0}(\omega_X)$ is $IC(X)[d_X](d_X)$.</p> <p>Now consider the adjunction map $f_!f^!\omega_Y \to \omega_Y$. The weight zero part is given by the map $f_!IC(X) = f_! gr^W_0(\omega_X) \to gr^W_0(\omega_Y)$. Now by the decomposition theorem (here we use surjectivity) $IC(Y)$ occurs as a summand of $f_!IC(X)$ in smallest degree. </p> <p>Now one deduces (I can only see how to do this using generic smoothness at the moment) that there exists $IC(Y)[?](?) \to f_!\omega_X$ such that the induced map </p> <p>$$IC(Y)[?](?) \to gr^W_0(\omega_Y) = IC(Y)[?](?)$$</p> <p>is an isomorphism. We conclude that the triangle</p> <p>$K \to f_!\omega_X \to \omega_Y \stackrel{+1}{\to}$</p> <p>can be replaced by a triangle</p> <p>$K \to L \to gr^W_{\ge 1}(\omega_Y) \stackrel{+1}{\to}$</p> <p>with $L$ of wts $\ge 0$ and $gr^W_{\ge 1}(\omega_Y)$ of weights $\ge 1$. We conclude that $K$ has weights $\ge 0$ as claimed.</p>
2,735,179
<p>I am trying to sketch the curve given by the following two parametric equations.</p> <p>$x=\cos^3\theta$</p> <p>$y=\sin^3\theta$</p> <p>Or the single Cartesian equation:</p> <p>$x^{\frac{2}{3}}+y^{\frac{2}{3}}=1$</p> <p>So,</p> <p>I put the graph in Desmos and got:</p> <p><a href="https://i.stack.imgur.com/HtFNG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HtFNG.png" alt="enter image description here"></a> </p> <p>Now, the general advice for parametric curves is to create a table of values (by hand or with a calculator) and plot the (x,y) coordinates roughly on a graph.</p> <p>Simple enough.</p> <p>My problem is this:</p> <p>The circle with the equation $x^2 + y^2 =1$ gives the graph:</p> <p><a href="https://i.stack.imgur.com/fwZLh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fwZLh.png" alt="enter image description here"></a></p> <p>Now how do I know whether the graph curves "inwards" as in the first graph or "outwards" like a circle?</p> <p>I am aware of increasing/decreasing functions and implicit differentiation.</p> <p>$\frac{d}{dx}[x^{\frac{2}{3}}+y^{\frac{2}{3}}]=\frac{d}{dx}[1]$</p> <p>$\frac{dy}{dx}= - \sqrt[3]{\frac{y}{x}}$</p> <p>Hence in the 1st quadrant where $x&gt;0$ and $y&gt;0$, and in the 3rd quadrant where $x&lt;0$ and $y&lt;0$ (same signs)</p> <p>$\frac{dy}{dx}&lt;0$, therefore it is a decreasing function here.</p> <p>In the 2nd quadrant where $x&lt;0$ but $y&gt;0$ and in the 4th quadrant where $x&gt;0$ but $y&lt;0$ (opposite signs)</p> <p>$\frac{y}{x}&lt;0$</p> <p>$\therefore \frac{dy}{dx} &gt;0$, hence an increasing function,</p> <p>which should explain the shape.</p> <p>Does anyone have a less mechanical way of doing this, as I feel I worked "backwards", since I knew what I was aiming for once I had seen the correct graph?</p>
Henry
6,460
<p>If you want a Bayesian approach, you will want a <a href="https://en.wikipedia.org/wiki/Prior_probability" rel="nofollow noreferrer">prior distribution</a> for the probability $p$ of heads. With a Bernoulli or binomial random variable, the <a href="https://en.wikipedia.org/wiki/Conjugate_prior" rel="nofollow noreferrer">conjugate family</a> (whose main merit is that it is easiest to work with) is the <a href="https://en.wikipedia.org/wiki/Beta_distribution" rel="nofollow noreferrer">Beta distribution</a> with parameters $\alpha$ and $\beta$.</p> <p>Seeing $h$ heads and $t$ tails, i.e. a likelihood proportional to $p^h(1-p)^t$, will give a posterior distribution for $p$ which is also a Beta distribution, but with parameters $\alpha+h$ and $\beta+t$. This posterior distribution will have a mean of $\dfrac{\alpha+h}{\alpha+h+\beta+t}$, a mode of $\dfrac{\alpha+h-1}{\alpha+h+\beta+t-2}$, and a standard deviation of $\sqrt{\dfrac{(\alpha+h)(\beta+t)}{(\alpha+h+\beta+t)^2(\alpha+h+\beta+t+1)}}$ </p> <p>Common choices for the prior are $\alpha=\beta=1$ (a uniform prior), $\alpha=\beta=0$ (an improper Haldane prior), and $\alpha=\beta=\frac12$ (a Jeffreys prior).</p> <p>For example, starting with $\alpha=\beta=\frac12$ and your observation of $h=6,t=4$ would give a posterior distribution for $p$ with mean about $0.59$, mode about $0.61$ (compare these to the naive estimate of $\frac6{10}=0.6$) and standard deviation about $0.14$ </p>
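The final numbers can be reproduced in a few lines of stdlib Python (the variable names are mine, not from the answer):

```python
import math

# Jeffreys prior alpha = beta = 1/2, data h = 6 heads and t = 4 tails,
# giving a Beta(alpha + h, beta + t) posterior.
alpha = beta = 0.5
h, t = 6, 4
a, b = alpha + h, beta + t

mean = a / (a + b)
mode = (a - 1) / (a + b - 2)
sd   = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

assert abs(mean - 0.59) < 0.005   # mean about 0.59
assert abs(mode - 0.61) < 0.005   # mode about 0.61
assert abs(sd - 0.14) < 0.005     # standard deviation about 0.14
```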
3,243,655
<p>Question:</p> <blockquote> <p>Prove the equation <span class="math-container">$2x - 6y = 3$</span> has no integer solution for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p> </blockquote> <p>I need to verify my proof. I think I did it correctly, but I am not fully sure, since I don't have solutions in my book. I basically proved by contradiction and assumed there was an integer solution for x or y. I then solved for <span class="math-container">$x $</span> and <span class="math-container">$y$</span> in <span class="math-container">$2x - 6y = 3$</span>, getting <span class="math-container">$x = 3y + 3/2$</span> and <span class="math-container">$y = x/3 - 1/2$</span>. Since both <span class="math-container">$x,y$</span> are not integers, I said it contradicts that <span class="math-container">$x$</span> or <span class="math-container">$y$</span> had an integer solution, meaning the original statement was correct. Did I prove this right, or should I redo?</p>
Community
-1
<p>Your proof, as it stands, is not correct. Some particular issues:</p> <ul> <li><p>You get to a point where <span class="math-container">$x = 3y + 3/2$</span> and <span class="math-container">$y = x/3 - 1/2$</span> and state that "since both <span class="math-container">$x,y$</span> are not integers..."; this assumes the conclusion that you're aiming for.</p></li> <li><p>It's not true that both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are not integers. You could have one of the two be an integer, and the other not be an integer. For example, <span class="math-container">$(0, -1/2)$</span> and <span class="math-container">$(3/2, 0)$</span> are both solutions with <span class="math-container">$x$</span> or <span class="math-container">$y$</span> integral, but not both.</p></li> <li><p>It's not convincing that <span class="math-container">$3y + 3/2$</span> is not an integer (and the point immediately above shows that that claim is not true!).</p></li> <li><p>Stylistically, your assumptions are not explicitly stated and there is no introduction to the proof. You haven't stated that you're assuming <span class="math-container">$(x, y)$</span> to be a pair of integers solving a particular equation. Therefore, it's not clear how the contradiction is reached; at a minimum, you need to form the negation of the statement and clearly include assumptions.</p></li> </ul> <p>So unfortunately, this is not a properly written proof. But it can be fixed without too much work; here's an outline to follow:</p> <p>1) Introduce the players. Say "We proceed by contradiction. Assume that <span class="math-container">$x, y$</span> are integers such that <span class="math-container">$2x - 6y = 3$</span>."</p> <p>2) Isolate one of the variables and get the contradiction. Perhaps "Then <span class="math-container">$x = 3y + \frac 3 2$</span>. 
Since <span class="math-container">$3y$</span> is an integer (why?), the sum <span class="math-container">$3y + \frac 3 2$</span> is not an integer."</p> <p>3) Tell the reader why this is a problem. "This contradicts the assumption that <span class="math-container">$x$</span> is an integer."</p>
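The conclusion is also easy to check by brute force over a window of integers (a sanity check, not a substitute for the proof; the parity observation in the comment is the quickest independent argument):

```python
# No integer pair solves 2x - 6y = 3: the left side 2(x - 3y) is always
# even, while 3 is odd.  Brute force over a window as a sanity check.
assert all(2 * x - 6 * y != 3
           for x in range(-100, 101) for y in range(-100, 101))
# and the left-hand side is indeed always even:
assert all((2 * x - 6 * y) % 2 == 0
           for x in range(-5, 6) for y in range(-5, 6))
```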
3,337,147
<p>Let <span class="math-container">$R$</span> be a unique factorization domain (UFD). Given <span class="math-container">$a,b \in R$</span> not simultaneously equal to zero, an element <span class="math-container">$d \in R$</span> is by definition a greatest common divisor (GCD) of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> provided:</p> <ol> <li><span class="math-container">$d \mid a$</span> and <span class="math-container">$d \mid b$</span>.</li> <li>For all <span class="math-container">$d' \in R$</span> such that <span class="math-container">$d' \mid a$</span> and <span class="math-container">$d' \mid b$</span>, we have that <span class="math-container">$d' \mid d$</span>.</li> </ol> <p>Let <span class="math-container">$U(R) := \{\,\text{units in $R$}\,\}$</span> and assume <span class="math-container">$a,b \neq 0$</span>, <span class="math-container">$a,b \notin U(R)$</span>. Since <span class="math-container">$R$</span> is a UFD, there exist <span class="math-container">$u,v \in U(R)$</span>, irreducible elements <span class="math-container">$p_1,\dots,p_s \in R$</span> which are mutually non associate, <span class="math-container">$d_1,\dots,d_s,e_1,\dots,e_s \in \mathbb{N}$</span> such that: <span class="math-container">\begin{equation} a = u \cdot p_1^{d_1} \cdots p_s^{d_s} \, , \quad b = v \cdot p_1^{e_1} \cdots p_s^{e_s} \end{equation}</span> For all <span class="math-container">$1 \leq i \leq s$</span>, let <span class="math-container">$f_i := \min\{d_i,e_i\}$</span>. We want to prove that a GCD of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is: <span class="math-container">\begin{equation} c := p_1^{f_1} \cdots p_s^{f_s} \end{equation}</span></p>
jvdhooft
437,988
<p>Your calculations on the third draw are off. Either you have selected two balls of the same color (probability <span class="math-container">$\frac{1}{3}$</span>) in which case you are certain to win, or you have selected two balls of different color (probability <span class="math-container">$\frac{2}{3}$</span>) in which case you win half of the time. We have:</p> <p>First draw: <span class="math-container">$\frac{2}{4} \cdot 1 = \frac{1}{2}$</span></p> <p>Second draw: <span class="math-container">$\frac{2}{3} \cdot 1 = \frac{2}{3}$</span></p> <p>Third draw: <span class="math-container">$\frac{1}{3} \cdot 1 + \frac{2}{3} \cdot \frac{1}{2} \cdot 1 = \frac{2}{3}$</span></p> <p>Fourth draw: <span class="math-container">$1$</span></p> <p>Adding this all up, we arrive at <span class="math-container">$\frac{17}{6} \approx 2.83$</span>.</p>
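Exact arithmetic with the stdlib `fractions` module confirms the total:

```python
from fractions import Fraction

# per-draw success probabilities as computed in the answer
first  = Fraction(2, 4)                                   # 1/2
second = Fraction(2, 3)                                   # 2/3
third  = Fraction(1, 3) + Fraction(2, 3) * Fraction(1, 2) # 2/3
fourth = Fraction(1)                                      # certain

total = first + second + third + fourth
assert total == Fraction(17, 6)
assert abs(float(total) - 2.83) < 0.01
```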
3,253,145
<p>I am doing this question <a href="https://i.stack.imgur.com/AEaIn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AEaIn.jpg" alt="enter image description here" /></a> It seems it can't be solved using separation of variables (my assumption, after checking by substituting <span class="math-container">$w(y,t)=f(y)g(t)$</span> and ending up with a term which does <strong>not depend on only a single variable</strong>).</p> <blockquote> <ol> <li><p>Is my assumption right?</p> </li> <li><p>If so, how do I solve this equation? I am stuck.</p> </li> </ol> </blockquote>
Community
-1
<p>Add <span class="math-container">$$2^{n+1}-1=\sum_{i=0}^n2^i$$</span> and you get</p> <p><span class="math-container">$$\sum_{i=0}^n2^i+\sum_{i=0}^n\pm_i2^i=2\sum_{i=0}^nb_i2^i$$</span> where the <span class="math-container">$b_i$</span> are <span class="math-container">$0$</span> or <span class="math-container">$1$</span>.</p> <p>So your numbers are</p> <p><span class="math-container">$$2m-2^{n+1}+1$$</span> where <span class="math-container">$m$</span> is any <span class="math-container">$(n+1)$</span>-bit integer (leading zeroes allowed).</p> <p>In other words, every other integer in <span class="math-container">$[1-2^{n+1},\,2^{n+1}-1]$</span>.</p>
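The characterization can be verified by brute force for small n: the achievable values of a sum of signed powers of two are exactly the odd integers from 1 - 2^(n+1) up to 2^(n+1) - 1 (a quick check, not part of the original answer):

```python
from itertools import product

# enumerate every sign choice for sum_{i=0}^{n} (+-1) * 2^i
for n in range(1, 7):
    achievable = {sum(s * 2 ** i for i, s in enumerate(signs))
                  for signs in product((-1, 1), repeat=n + 1)}
    expected = set(range(1 - 2 ** (n + 1), 2 ** (n + 1), 2))  # odd integers
    assert achievable == expected
```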
3,462,868
<p>What is the minimal size of a set <span class="math-container">$\mathfrak S$</span> of <span class="math-container">$k$</span>-element subsets of <span class="math-container">$\{1,...,n\}$</span> such that for any <span class="math-container">$k$</span>-element subset <span class="math-container">$S$</span> of <span class="math-container">$\{1,...,n\}$</span> there is an <span class="math-container">$S'\in\mathfrak S$</span> with <span class="math-container">$S\cap S'=\varnothing$</span>?</p> <p>As shown in an answer and comments to it, for <span class="math-container">$n&lt;2k$</span> there are no such <span class="math-container">$\mathfrak S$</span>, for <span class="math-container">$n=2k$</span> the only possibility is to take for <span class="math-container">$\mathfrak S$</span> all <span class="math-container">$k$</span>-element subsets, so that in this case the answer is <span class="math-container">$\binom{2k}k$</span>, while for <span class="math-container">$n\geqslant (k+1)k$</span> one can (and must at least) take any <span class="math-container">$k+1$</span> pairwise disjoint <span class="math-container">$k$</span>-element subsets and the answer is <span class="math-container">$k+1$</span>. 
Thus the cases <span class="math-container">$2k&lt;n&lt;(k+1)k$</span> remain unsolved.</p> <p>As suggested in a comment below: in case this is very hard, - mainly I would like to know this in the case <span class="math-container">$n=3k$</span>.</p> <p>Here are some calculations (being updated using the accepted answer): denoting by <span class="math-container">$\mu(n,k)$</span> the minimal size of <span class="math-container">$\mathfrak S$</span> as above, <span class="math-container">$$ \begin{matrix} \mu(\geqslant2,1)=2&amp;\mu(4,2)=6&amp;\mu(6,3)=20&amp;\mu(8,4)=70&amp;\mu(10,5)=252\\ &amp;\mu(5,2)=4&amp;\mu(7,3)=12&amp;\mu(9,4)=30&amp;\mu(11,5)\leqslant113\\ &amp;\mu(\geqslant6,2)=3&amp;\mu(8,3)=8&amp;\mu(10,4)\leqslant21&amp;\mu(12,5)\leqslant72\\ &amp;&amp;\mu(9,3)=7&amp;\mu(11,4)\leqslant18&amp;\mu(13,5)\leqslant54\\ &amp;&amp;\mu(10,3)=6&amp;\mu(12,4)=12&amp;\mu(14,5)\leqslant42\\ &amp;&amp;\mu(11,3)=5&amp;\mu(13,4)\leqslant14&amp;\mu(15,5)\leqslant31\\ &amp;&amp;\mu(\geqslant12,3)=4&amp;\mu(14,4)\leqslant12&amp;\mu(16,5)\leqslant28\\ &amp;&amp;&amp;\mu(15,4)\leqslant10&amp;\mu(17,5)\leqslant26\\ &amp;&amp;&amp;\mu(16,4)\leqslant9&amp;\mu(18,5)\leqslant24\\ &amp;&amp;&amp;\mu(17,4)\leqslant8&amp;\mu(19,5)\leqslant22\\ &amp;&amp;&amp;\mu(18,4)\leqslant7&amp;\mu(20,5)\leqslant20\\ &amp;&amp;&amp;\mu(19,4)\leqslant6&amp;\mu(21,5)\leqslant18\\ &amp;&amp;&amp;\mu(\geqslant20,4)=5&amp;\mu(22,5)\leqslant17\\ &amp;&amp;&amp;&amp;\mu(23,5)\leqslant16\\ &amp;&amp;&amp;&amp;\mu(24,5)\leqslant14\\ &amp;&amp;&amp;&amp;\mu(25,5)\leqslant12\\ &amp;&amp;&amp;&amp;\mu(26,5)\leqslant10\\ &amp;&amp;&amp;&amp;\mu(27,5)\leqslant9\\ &amp;&amp;&amp;&amp;\mu(28,5)\leqslant8\\ &amp;&amp;&amp;&amp;\mu(29,5)\leqslant7\\ &amp;&amp;&amp;&amp;\mu(\geqslant30,5)=6 \end{matrix} $$</span></p>
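<p>(Added sanity check, not part of the original question.) The first few table entries for $k=2$ can be confirmed with a naive exhaustive search; anything much larger is out of reach this way, since the search is exponential in the number of $k$-subsets.</p>

```python
from itertools import combinations

def mu(n, k):
    """Smallest family S of k-subsets of {1..n} such that every k-subset is
    disjoint from at least one member of S. Brute force; tiny n and k only."""
    subsets = list(combinations(range(1, n + 1), k))
    for size in range(1, len(subsets) + 1):
        for family in combinations(subsets, size):
            if all(any(not (set(s) & set(f)) for f in family) for s in subsets):
                return size
    return None  # no such family exists (happens when n < 2k)

small_values = {(4, 2): mu(4, 2), (5, 2): mu(5, 2), (6, 2): mu(6, 2)}
```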
pico
666,807
<p>It's an integration rule:</p> <p><span class="math-container">$\int f'(t)~dt = f(t) + C$</span></p> <p>Or:</p> <p><span class="math-container">$\int \frac{df(t)}{dt}~dt = f(t) + C$</span></p>
2,096,296
<p>Recently I discovered that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{π^2}{6}$, and we know that the sum of reciprocals of the naturals to the first power diverges to infinity. So, just out of curiosity, I was wondering whether there is a number between $1$ and $2$ at which the sum of reciprocals raised to that power starts to diverge. Is there such a number? </p>
T C
368,626
<p>This is what you need: <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)</a> (see section 6.3, the <em>p</em>-series).</p> <p>To sum up what you asked:</p> <p>If $k&gt;1$ then $\sum_{n=1}^\infty \frac{1}{n^k}$ converges.</p> <p>If $k\leq 1$ then $\sum_{n=1}^\infty \frac{1}{n^k}$ diverges.</p> <p>In particular, there is no threshold strictly between $1$ and $2$: the series converges for every $k&gt;1$.</p>
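<p>(Added numerical illustration, not part of the original answer.) Partial sums for $k=2$ settle near $\pi^2/6\approx 1.6449$, while the harmonic partial sums ($k=1$) keep growing like $\log n$.</p>

```python
import math

def partial_sum(k, n_terms):
    """Partial sum of the p-series sum 1/n^k up to n_terms."""
    return sum(1 / n**k for n in range(1, n_terms + 1))

s2 = partial_sum(2, 100000)                 # close to pi^2 / 6
h_small = partial_sum(1, 1000)              # harmonic partial sums ...
h_big = partial_sum(1, 1000000)             # ... keep growing like log n
```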
2,504,737
<p>During one of my daily exercises, I was looking for properties of the elements of Cartan calculus. I stumbled on Wikipedia's page about interior products <a href="https://en.wikipedia.org/wiki/Interior_product" rel="nofollow noreferrer">(here)</a>, and I've noticed a property that sounds very useful: $$ \iota_{[X,Y]}\omega=[\mathcal L_X,\iota_Y]\omega. $$ In Wikipedia's notations, $\iota$ is the interior product, $\mathcal L_X$ is the Lie derivative with respect to the vector field $X$ (and $X$ and $Y$ are vector fields). $\omega$ is a differential form on a manifold $M$. As there is no source given, I'm trying to prove this equality, and failing due to a sign. Maybe I'm doing some stupid error somewhere.</p> <p>My attempt so far follows.</p> <p>First: I note that the operator on the right has the property $$ [\mathcal L_X,\iota _Y](\omega\wedge\eta)=[\mathcal L_X,\iota_Y]\omega\wedge\eta+(-1)^k\omega\wedge[\mathcal L_X,\iota_Y]\eta, $$ where $\omega$ is assumed to be a $k-$form, and $\eta$ is an arbitrary form. This is the same interaction with wedge product as the left hand side. It follows that I can decide the value of the right hand side locally, where I can expand any form as tensor product of the basis forms. Hence, if I prove that the equality holds for $0-$ and $1-$forms, I'm done.</p> <p>I start with $0-$forms: I take a function $f$ from $M$ to the field, and compute left and right hand side. Well, "compute": the left hand side is the application of an inner product to a function, that is zero by definition, while on the right hand side I either have a contraction first, annulling $f$, or I have a Lie derivative acting first. The Lie derivative of a function is a function, so it follows that $\iota_Y\mathcal L f=0$, and I'm done for this case.</p> <p>For $1-$forms: let $\alpha$ be such an $1-$form. The left hand side is $$ \iota_{[X,Y]}\alpha=\alpha([X,Y]). 
$$ I split the calculation of the right hand side in two, and use Cartan's formula $\mathcal L_X=d\iota_X+\iota_Xd$ whenever necessary. Lie derivatives are ugly, all hail Cartan. $$ \mathcal L_X\iota_Y\alpha=\mathcal L_X(\alpha (Y))=X(\alpha(Y)),\\ \iota_Y\mathcal L_X\alpha=\iota_Yd\iota_X\alpha+\iota_Y\iota_Xd\alpha=Y(\alpha(X))+\underline{d\alpha(Y,X)}. $$ Underlined for your convenience is the step where I have most likely made an error, but I fail to see why. I state that $\iota_Y\iota_Xd\alpha=d\alpha(Y,X)$ as I am applying $X$ first, and $Y$ second, and $\iota$ places vectors at the beginning of a string. Is it correct? Was I stupid here?</p> <p>Continuing: I use $$ d\alpha(Y,X)=Y(\alpha(X))-X(\alpha(Y))-\alpha([Y,X]). $$ Here comes the failure. I'd expect stuff to cancel out, but that's not happening. The signs of $Y(\alpha(X))$ agree, so they do not cancel.</p> <p>Where am I going wrong? I strongly suspect that it is something in the underlined passage, but I need clarification about what I did wrong.</p> <p>Thanks all in advance for your time.</p> <p>(p.s.: in some other places of this site there is a proof of that relying on a property of $\mathcal L_X$ acting on differential forms. As I said, I despise $\mathcal L_X$. I'd prefer to see the error in this proof, as it is short and nice. Lie has done many wonderful things, and a horrible derivative)</p>
Ted Shifrin
71,348
<p>Yes, you are off by a sign on the term you figured. Remember that (for a $2$-form $\phi$) $\iota_X\phi(\cdot) = \phi(X,\cdot)$, so $\iota_Y\big(\iota_X\phi\big)=\phi(X,Y)$.</p>
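<p>(Added illustration, not part of the original answer.) A concrete way to see the sign, modelling a $2$-form as a random antisymmetric bilinear form on $\mathbb{R}^4$: plugging in $X$ first and then $Y$ yields $\phi(X,Y)$, which equals $-\phi(Y,X)$, exactly the sign the question was missing.</p>

```python
import random

random.seed(0)
n = 4
# A random antisymmetric matrix A models a 2-form: phi(u, v) = u^T A v.
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A = [[M[i][j] - M[j][i] for j in range(n)] for i in range(n)]

def phi(u, v):
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

X = [random.uniform(-1, 1) for _ in range(n)]
Y = [random.uniform(-1, 1) for _ in range(n)]

iota_X_phi = lambda v: phi(X, v)   # interior product fills the FIRST slot
val = iota_X_phi(Y)                # iota_Y (iota_X phi) = phi(X, Y)
```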
362,944
<blockquote> <p>Consider a Brownian bridge <span class="math-container">$B: [0,1]\to \mathbb{R}$</span> with <span class="math-container">$B(0)=B(1)=0$</span>. Let <span class="math-container">$M[0, 1/2]=\max_{x\in[0,1/2]}B(x)$</span>. How can one prove that <span class="math-container">$$\mathbb{P}(M[0, 1/2]\geq s)\leq 2\mathbb{P}(B(1/2)\geq s/2)?$$</span></p> </blockquote> <p>Actually, this is a "no big max" argument that could be used in the proof of the construction of the Airy line ensemble. "No big max" means that the top curve on <span class="math-container">$(a, b)$</span> cannot get too high. </p> <p>(Definition of Brownian bridge) If <span class="math-container">$\{B(t): t\geq 0\}$</span> is standard Brownian motion, then <span class="math-container">$\{Z(t): 0\leq t\leq 1\}$</span> is a Brownian bridge process when <span class="math-container">$$Z(t)=B(t)-tB(1).$$</span></p>
dohmatob
78,539
<p>You can prove it via standard computations. See (2.1), (2.7), and (2.8) of <a href="https://www.researchgate.net/publication/236984395_On_the_maximum_of_the_generalized_Brownian_bridge" rel="nofollow noreferrer">On the maximum of the generalized Brownian bridge</a>.</p>
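<p>(Added Monte Carlo sanity check, mine and not from the cited paper.) Simulating the bridge as $Z(t)=W(t)-tW(1)$ on a grid and comparing the empirical left-hand side with the explicit right-hand side (using $B(1/2)\sim N(0,1/4)$) shows the inequality holds with plenty of room for $s=1$.</p>

```python
import math, random

random.seed(1)
steps, paths, s = 200, 4000, 1.0
dt = 1.0 / steps
sigma = math.sqrt(dt)
exceed = 0
for _ in range(paths):
    w = [0.0]
    for _ in range(steps):
        w.append(w[-1] + random.gauss(0.0, sigma))
    # Bridge Z(t) = W(t) - t W(1); maximum over the grid points in [0, 1/2].
    m_half = max(w[i] - (i * dt) * w[-1] for i in range(steps // 2 + 1))
    if m_half >= s:
        exceed += 1
lhs = exceed / paths                                  # empirical P(M[0,1/2] >= s)
# B(1/2) ~ N(0, 1/4), so P(B(1/2) >= s/2) = P(N(0,1) >= s) = erfc(s/sqrt 2)/2.
rhs = 2 * 0.5 * math.erfc(s / math.sqrt(2))
```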
3,212,488
<p><span class="math-container">$$\int_{0}^{\infty} \ln{(1+a x)}{x^{-b-1}} dx$$</span><br> I defined two branch cuts along the real axis: <span class="math-container">$[-\infty ,-\frac{1}{a}]$</span> &amp; <span class="math-container">$[0,\infty]$</span> with the following contour: <a href="https://i.stack.imgur.com/4rd0E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4rd0E.jpg" alt="contour"></a><br> I defined the <span class="math-container">$arg{(z)} =0$</span> above the positive branch cut and <span class="math-container">$arg(z)=2\pi$</span> below the positive branch cut. Similarly <span class="math-container">$arg(z)=\pi$</span> above the negative branch cut and <span class="math-container">$arg(z)=-\pi$</span> below the negative branch cut.</p> <p>Using the triangle inequality for integrals it can easily (but with care) be shown that the integrals along all parts of the circles tend to <span class="math-container">$0$</span> as <span class="math-container">$R \to \infty$</span> (radius of the outer circle) and <span class="math-container">$\epsilon \to 0$</span> (radius of the smaller circles around the singularities).<br> By doing this we get the necessary bounds for <span class="math-container">$a$</span> and <span class="math-container">$b$</span>: <span class="math-container">$a&gt;0$</span> and <span class="math-container">$0&lt;b&lt;1$</span></p> <p>The integrals along the positive branch cut work out nicely and result in: <span class="math-container">$$(1-e^{-2 b \pi i}) \int_{0}^{\infty} \ln{(1+a x)}{x^{-b-1}} dx$$</span><br> When I work out the integrals along the negative branch cut, I end up with the following for the integral along the top side: <span class="math-container">$$e^{-b \pi i} \int_{\infty}^{\frac{1}{a}} \ln{(1-a x)}{x^{-b-1}} dx$$</span><br> After factoring out <span class="math-container">$-1$</span> from the inside of the ln you are left with: <span class="math-container">$$e^{-b \pi i} \int_{\infty}^{\frac{1}{a}} (\ln{(a x-1)}-i \pi){x^{-b-1}} dx$$</span><br>
Applying the same procedure to the integral along the bottom side of the negative branch cut yields: <span class="math-container">$$e^{b \pi i} \int_{\frac{1}{a}}^{\infty} (\ln{(a x-1)}+i \pi){x^{-b-1}} dx$$</span><br> This is troublesome because the <span class="math-container">$e^{b \pi i}$</span> stops me from combining the two integrals along the negative branch cut in order to cancel out the integrals involving the <span class="math-container">$\ln$</span><br> In the following <a href="http://residuetheorem.com/2015/10/27/integral-with-two-branch-cuts-ii/" rel="nofollow noreferrer">article</a> the author magically has <span class="math-container">$e^{-b \pi i}$</span> in front of the integral which allows him to cancel out the parts including the <span class="math-container">$\ln$</span> which simplifies it enormously. </p> <p>Can someone please explain to me what I am doing wrong with the integrals along the negative branch cut? </p> <p>Any help is greatly appreciated!</p>
Robert Israel
8,508
<p>Hint: for (finite) real numbers <span class="math-container">$x$</span>, <span class="math-container">$e^{-x} &gt; 0$</span>.</p>
1,037,632
<p>How to find the last 2 digits of $2014^{2001}$? What about the last 2 digits of $9^{(9^{16})}$?</p>
Krishnamurari
195,422
<p>$9^{\phi(100)}=9^{40}\equiv 1\pmod{100}\implies9^{40k}\equiv 1\pmod{100}$</p> <p>Now consider $9^{16}=40k+?$ , that is the same as finding $x\equiv 9^{16}\pmod{40}$</p> <p>$9^{\phi(40)}=9^{16}\equiv 1\pmod{40}\implies 9^{16}=40k+1$</p> <p>$9^{40k}\equiv 1\pmod{100}\implies 9^{40k+1}\equiv 9\pmod{100}$</p>
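<p>(Added check, not part of the original answer.) Python's three-argument <code>pow</code> does modular exponentiation directly, so both derivations are easy to double-check:</p>

```python
# Last two digits = residue mod 100, via fast modular exponentiation.
last_two_9 = pow(9, 9**16, 100)        # the exponent 9^16 is an exact integer
last_two_2014 = pow(2014, 2001, 100)   # 2014 = 14 (mod 100)
```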
15,702
<p>One way to prove that a field $K$ has no ideals except the entire field and the trivial ideal is to note the fact that every element $x$ has an inverse. By the definition of an ideal, if $x$ is in the ideal then $x^{-1}x$ is because $x^{-1} \in K$. But now we have that 1 is in the ideal, and so again by the definition of an ideal we have that every element is in the ideal. Therefore it is either the entire field or trivial.</p> <p>However, this works for any would-be ideal that has a unit; hence my question. I don't see how this coheres particularly with the idea that ideals are generalizations of things like "multiple of $n$", or that we use them to form quotient rings.</p> <p>Can someone please explain whether this has a deeper meaning or if it's not really important? I think it might have something to do with what is written in the "motivation" section in the <a href="http://en.wikipedia.org/wiki/Ideal_%28ring_theory%29" rel="nofollow">Wikipedia article for ideals</a> but I'm not really sure.</p> <p>Edit: I do realize that not all subsets without a unit are ideals. Sorry for the confusion.</p>
Matt E
221
<p>One way to think of an ideal is as the set of multiples of the (possibly non-existent) g.c.d. of all the elements it contains. (I am thinking here of the case of a commutative ring with $1$, so that distinctions between left, right, and two-sided don't matter.)</p> <p>In the integers, for example, any set of elements has a g.c.d. with all the reasonable properties that you could want, and furthermore if $\{a_i\}_{i \in I}$ is a set of integers, then the ideal generated by the $a_i$ is in fact principal, and its generator is a g.c.d. of this collection. In a more general ring, g.c.d.s don't necessarily exist, or even if they do, they don't have all the properties that they do in the integers. So rather than trying to work with g.c.d.s, we can introduce ideals, which generally have better properties (or, rather, do many of the same jobs in more general rings that g.c.d.s and their set of multiples do in the context of the integers).</p> <p>With this in mind, one sees (a) why ideals are the natural kernels of quotient maps: think about the case of the integers, working <em>modulo</em> $n$ means setting all multiples of $n$ equal to zero; (b) why an ideal with a unit will be the trivial ideal (i.e. the whole ring): because the g.c.d. of any set containing a unit will have to be $1$.</p>
2,823,568
<p>It is well known that the sum</p> <p>$$ \sum _{{k=0}}^{\infty }{\frac {x^{k}}{k!}} $$</p> <p>converges to $e^{x}$. In particular, for $x=1$ we have $\sum _{{k=0}}^{\infty }{\frac {1}{k!}}=e$. But what about the sum over the reciprocals of primorials, i.e.,</p> <p>$$ \sum _{{k=0}}^{\infty }{\frac {x^{k}}{k\#}}, $$</p> <p>where $k\#$ denotes the product of all primes equal to or smaller than $k$. Does the sum $\sum _{{k=0}}^{\infty }{\frac {1}{k\#}}$ converge, like its factorial analogue does?</p> <p>In the same spirit, it would also be interesting to ask whether the sum</p> <p>$$ \sum _{{k=0}}^{\infty }{\frac {x^{k}}{p_k\#}} $$</p> <p>converges, where $p_k\#$ is the product of the first $k$ primes. Unfortunately, I have found no reference to such sums after a quick search on the internet. What can be said about these sums?</p>
Pagode
564,189
<p>For the second one, d'Alembert's ratio test is also sufficient.</p> <p>Indeed, you have $$ \forall k \in \mathbb{N}^*, \ \frac{x^{k+1}}{p_{k+1}\#}\frac{p_{k}\#}{x^{k}}=\frac{x}{p_{k+1}} $$</p> <p>which obviously converges to 0.</p> <p><strong>Conclusion:</strong></p> <p>The radius of convergence of the second series is infinite.</p>
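<p>(Added numerical check, not part of the original answer.) The term ratios eventually shrink below $1$ for any fixed $x$, and the partial sums of $\sum_k 1/(p_k\#)$ (the case $x=1$) stabilise quickly, around $0.70523$.</p>

```python
from fractions import Fraction

def first_primes(count):
    """The first `count` primes, by trial division."""
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

ps = first_primes(15)
x = 10
ratios = [Fraction(x, p) for p in ps]     # consecutive-term ratio x / p_{k+1}

primorial, total, partials = 1, Fraction(0), []
for p in ps:
    primorial *= p                         # p_k#
    total += Fraction(1, primorial)        # partial sum of sum 1 / p_k#
    partials.append(float(total))
```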
1,493,415
<p>When I did exercises in probability theory I found this limits as follows and verified it with Mathematica 8.0, and also noticed when $p=\dfrac12$ it shows that $\displaystyle p^n\sum_{k=0}^{n-1}\binom{k+n-1}{k}q^k \equiv \frac12$, but how it works?</p> <blockquote> <p>$$ \lim_{n\to\infty}p^n\sum_{k=0}^{n-1}\binom{k+n-1}{k}q^k=\begin{cases}0, &amp; p&lt;0.5\\ 0.5, &amp; p=0.5\\ 1, &amp; p&gt;0.5\end{cases} \qquad p,q&gt;0, p+q=1, n\in\mathbf N^* $$</p> </blockquote>
Robert Israel
8,508
<p>Hint: there is a finite interval $E$ such that $\int_{E^c} |f(x)| \; dx &lt; \epsilon$. Then approximate the restriction of $f$ to $E$ by simple functions...</p>
1,493,415
<p>When I did exercises in probability theory I found this limits as follows and verified it with Mathematica 8.0, and also noticed when $p=\dfrac12$ it shows that $\displaystyle p^n\sum_{k=0}^{n-1}\binom{k+n-1}{k}q^k \equiv \frac12$, but how it works?</p> <blockquote> <p>$$ \lim_{n\to\infty}p^n\sum_{k=0}^{n-1}\binom{k+n-1}{k}q^k=\begin{cases}0, &amp; p&lt;0.5\\ 0.5, &amp; p=0.5\\ 1, &amp; p&gt;0.5\end{cases} \qquad p,q&gt;0, p+q=1, n\in\mathbf N^* $$</p> </blockquote>
John Dawkins
189,130
<p>I'd first prove the limit statement for $f\in L^1$ that is also a step function: $f(x) = \sum_{k=1}^n 1_{(a_k,b_k]}(x)\cdot c_k$, where $a_1&lt;b_1\le a_2&lt;b_2&lt;\cdots\le a_n&lt;b_n$, and the $c_k$ are real. Then show that a general $f\in L^1$ can be approximated in $L^1$ by such step functions. For this second step it might be helpful to first approximate $f\in L^1$ by continuous functions of compact support.</p>
1,430,447
<p>Sketch the graph of the integrand function and use it to help evaluate the integral</p> <p><span class="math-container">$$\int_{-1}^{1}\left(|x|-1\right)dx.$$</span></p> <p><a href="https://i.stack.imgur.com/NY3Ei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NY3Ei.png" alt="enter image description here"></a></p> <p>I think I can evaluate the integral; for <span class="math-container">$x\ge 0$</span> an antiderivative is</p> <p><span class="math-container">$F(x) = \frac{1}{2}x^2 - x + c$</span></p> <p>but how do I sketch the graph?</p>
Mark Viola
218,419
<p>For $x\ge 0$, the integrand is $x-1$. The graph begins at $(0,-1)$ and ends at $(1,0)$.</p> <p>For $x\le 0$, the integrand is $-x-1$. The graph begins at $(-1,0)$ and ends at $(0,-1)$.</p>
1,430,447
<p>Sketch the graph of the integrand function and use it to help evaluate the integral</p> <p><span class="math-container">$$\int_{-1}^{1}\left(|x|-1\right)dx.$$</span></p> <p><a href="https://i.stack.imgur.com/NY3Ei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NY3Ei.png" alt="enter image description here"></a></p> <p>I think I can evaluate the integral; for <span class="math-container">$x\ge 0$</span> an antiderivative is</p> <p><span class="math-container">$F(x) = \frac{1}{2}x^2 - x + c$</span></p> <p>but how do I sketch the graph?</p>
nathan.j.mcdougall
181,447
<p>The graph can be considered as a piecewise function of two lines either side of $x=0$. Specifically, these are the lines $$y=\pm x-1$$ If you graph these lines it should be clear that the integral can be found piecewise. The integral will be $$\begin{align}\int\,(|x|-1)\;\mathrm{d}x=\begin{cases}\int\,(x-1)\;\mathrm{d}x\mbox{ for }x\geq 0\\\int\,(-x-1)\;\mathrm{d}x\mbox{ for }x&lt; 0\end{cases}\\=\begin{cases}\frac{1}{2}x^2-x+c_0\mbox{ for }x\geq 0\\-\frac{1}{2}x^2-x+c_1\mbox{ for }x&lt; 0\end{cases}\\\end{align}$$ Where $c_0,c_1\in\mathbb{R}$</p> <p>To evaluate the definite integral, then, we have $$\begin{align}\int_{-1}^1\,(|x|-1)\;\mathrm{d}x=\int_{-1}^0\,(|x|-1)\;\mathrm{d}x+\int_{0}^1\,(|x|-1)\;\mathrm{d}x\\=\left[-\frac{1}{2}x^2-x\right]_{-1}^0+\left[\frac{1}{2}x^2-x\right]_{0}^1\\=\left(\frac{1}{2}-1\right)+\left(\frac{1}{2}-1\right)\\=\boxed{-1}\end{align}$$</p>
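<p>(Added numerical cross-check, not part of the original answer.) The midpoint rule reproduces the value $-1$; with an even number of subintervals it is exact here up to rounding, since the integrand is piecewise linear with its kink at a grid point.</p>

```python
# Midpoint-rule approximation of the integral of |x| - 1 over [-1, 1].
n = 100000
h = 2.0 / n
approx = sum((abs(-1.0 + (i + 0.5) * h) - 1.0) * h for i in range(n))
```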
3,270,504
<blockquote> <p>Prove that <span class="math-container">$\log|e^z-z|\leq |z|+1$</span> where <span class="math-container">$z\in\mathbb{C}$</span> with <span class="math-container">$|z|\geq e$</span>.</p> </blockquote> <p><strong>Background:</strong></p> <p>This is from a proof that <span class="math-container">$e^z-z$</span> has infinitely many zeroes. At this stage we have assumed, for contradiction, that <span class="math-container">$e^z-z$</span> has no zeros.</p> <p><strong>My attempt:</strong></p> <p>I assume that the meaning of <span class="math-container">$\log$</span> here is the principal branch of <span class="math-container">$\log$</span>.</p> <p>We know that <span class="math-container">$|w|\in\mathbb{R} ,\ \forall w\in\mathbb{C}$</span>. Because <span class="math-container">$\log$</span> is increasing in <span class="math-container">$\mathbb{R}^+$</span> and according to the triangle inequality we get <span class="math-container">$$\log|e^z-z|\leq\log(|e^z|+|z|)$$</span> But I'm not sure how to proceed. Thanks.</p>
A.Γ.
253,273
<p>Hint: estimate for <span class="math-container">$r=|z|\ge 0$</span> <span class="math-container">$$ \log|e^z-z|\le\log(e^r+r)=\log(e^r[1+re^{-r}])=r+\log(1+re^{-r}). $$</span> Prove that the function <span class="math-container">$f(r)=re^{-r}$</span> attains its maximum at <span class="math-container">$r=1$</span>, and that <span class="math-container">$1+e^{-1}\le e$</span>.</p>
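<p>(Added numerical sanity check, not part of the original answer.) Random sample points with $|z|\ge e$ all satisfy the inequality, and the two auxiliary facts in the hint also check out:</p>

```python
import cmath, math, random

random.seed(0)

def holds(z):
    """Does log|e^z - z| <= |z| + 1 hold at this point?"""
    return math.log(abs(cmath.exp(z) - z)) <= abs(z) + 1

samples = [cmath.rect(random.uniform(math.e, 50), random.uniform(0, 2 * math.pi))
           for _ in range(500)]
all_hold = all(holds(z) for z in samples)

f = lambda r: r * math.exp(-r)            # f(r) = r e^{-r}, maximised at r = 1
max_at_one = all(f(1.0) >= f(0.1 * k) for k in range(1, 200))
fact = 1 + math.exp(-1) <= math.e
```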
3,268,398
<blockquote> <p>General solution of the equation <span class="math-container">$$x\left(\frac{dy}{dx}\right)^2+\left(y-x\right)\frac{dy}{dx}\:-y=0 $$</span>is</p> </blockquote> <p>Option are as follows:</p> <p><span class="math-container">$a)\qquad (x-y+c)(xy-c)=0$</span></p> <p><span class="math-container">$b)\qquad (x+y+c)(xy-c)=0$</span></p> <p><span class="math-container">$c)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p> <p><span class="math-container">$d)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p>
Dr. Sonnhard Graubner
175,066
<p>Hint: <span class="math-container">$$\frac{dy}{dx}=1$$</span> or <span class="math-container">$$\frac{dy}{dx}=-\frac{y(x)}{x}$$</span></p>
3,268,398
<blockquote> <p>General solution of the equation <span class="math-container">$$x\left(\frac{dy}{dx}\right)^2+\left(y-x\right)\frac{dy}{dx}\:-y=0 $$</span>is</p> </blockquote> <p>Option are as follows:</p> <p><span class="math-container">$a)\qquad (x-y+c)(xy-c)=0$</span></p> <p><span class="math-container">$b)\qquad (x+y+c)(xy-c)=0$</span></p> <p><span class="math-container">$c)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p> <p><span class="math-container">$d)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p>
nmasanta
623,924
<p><span class="math-container">$$x\left(\frac{dy}{dx}\right)^2+\left(y-x\right)\frac{dy}{dx}\:-y=0$$</span> <span class="math-container">$$\implies xp^2+(y-x)p-y=0\qquad \text{where}\quad p\equiv \frac{dy}{dx}$$</span></p> <p><span class="math-container">$$\implies xp(p-1)+y(p-1)=0$$</span> <span class="math-container">$$\implies (p-1)(y+xp)=0$$</span> <span class="math-container">$$\implies \text{either}\quad y+xp=0\qquad \text{or}\quad p-1=0$$</span></p> <p><span class="math-container">$$p-1=0\implies x-y+c=0\qquad \text{and}$$</span> <span class="math-container">$$y+xp=0\implies xy-c=0$$</span></p> <p>General solution is <span class="math-container">$$(x-y+c)(xy-c)=0$$</span></p>
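<p>(Added verification, not part of the original answer.) Both factors can be checked by direct substitution into the ODE: plug $y=x+c$ (with $p=1$) and $y=c/x$ (with $p=-c/x^2$) into $xp^2+(y-x)p-y$ and confirm the residual vanishes.</p>

```python
def residual(x, y, p):
    """Left-hand side x p^2 + (y - x) p - y of the ODE."""
    return x * p**2 + (y - x) * p - y

c = 3.7
pts = [0.5, 1.0, 2.0, -1.5]
# Family 1: y = x + c, so dy/dx = 1.
res1 = [residual(x, x + c, 1.0) for x in pts]
# Family 2: xy = c, i.e. y = c/x, so dy/dx = -c/x^2.
res2 = [residual(x, c / x, -c / x**2) for x in pts]
```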
3,281,965
<p>It is known that a way to check whether a number <span class="math-container">$n$</span> is prime, is to check for divisors of <span class="math-container">$n$</span> from <span class="math-container">$2$</span> to <span class="math-container">$\lfloor\sqrt{n}\rfloor$</span>. If we find any divisor, then <span class="math-container">$n$</span> is not prime. If we don't, then we don't need to check for divisors bigger than <span class="math-container">$\sqrt{n}$</span> (and <span class="math-container">$n$</span> is prime).</p> <p>An "approximation" of this method would be to check for divisors the same way but from <span class="math-container">$2$</span> to <span class="math-container">$log_2n$</span>. If we find any divisor we declare that <span class="math-container">$n$</span> is not prime. If we don't find any divisor we declare that <span class="math-container">$n$</span> is prime. Of course this method will not always give the correct results. My question is:</p> <p>If the second algorithm declares a number to be prime, what is the probability that this number is actually prime?</p>
LAGRIDA
634,307
<p>The probability that <span class="math-container">$n$</span> is not divisible by <span class="math-container">$p$</span> is <span class="math-container">$1-\frac{1}{p}$</span>, so, heuristically,</p> <p>the probability that <span class="math-container">$n$</span> is prime is:</p> <p><span class="math-container">$$\prod_{\substack{\log_2(n) &lt; p \leq \sqrt{n} \\ \text{p prime}}} \left( 1 - \frac{1}{p} \right)$$</span></p> <p>By Mertens' third theorem, as <span class="math-container">$n \to +\infty$</span>:</p> <p><span class="math-container">$$\prod_{\substack{p \leq n \\ \text{p prime}}} \left(1-\frac{1}{p}\right) \sim \frac{e^{-\gamma}}{\log(n)}$$</span></p> <p>Then:</p> <p><span class="math-container">$$\prod_{\substack{\log_2(n) &lt; p \leq \sqrt{n} \\ \text{p prime}}} \left( 1 - \frac{1}{p} \right) \sim \dfrac{2 \log(\log_2(n))}{\log(n)}$$</span></p>
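<p>(Added empirical check, not part of the original answer.) Running the "divide only up to $\log_2 n$" test on a block of integers, the fraction of declared primes that are actually prime sits close to the asymptotic estimate:</p>

```python
import math

def declared_prime(n):
    """The question's approximate test: trial division only up to log2(n)."""
    return all(n % d for d in range(2, int(math.log2(n)) + 1))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

lo, hi = 10**4, 2 * 10**4
declared = [n for n in range(lo, hi) if declared_prime(n)]
empirical = sum(is_prime(n) for n in declared) / len(declared)

# The answer's asymptotic estimate, evaluated at the middle of the range.
n_mid = (lo + hi) // 2
estimate = 2 * math.log(math.log2(n_mid)) / math.log(n_mid)
```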
500,446
<p>Let $p$ be a prime with $p \geq 5$. Consider the congruence $x^3 \equiv a$ (mod p) with $\gcd(a,p)=1$. Show that the congruence has either no solution or three incongruent solutions modulo $p$ if $p \equiv 1$ (mod 6), and has a unique solution modulo $p$ if $p \equiv 5$ (mod 6).</p> <p>My attempt: By Lagrange's theorem, the congruence $x^3 \equiv a$ (mod p) has at most $3$ incongruent solutions modulo $p$. Suppose the congruence has a solution $b$, so that $b^3 \equiv a$ (mod p). Then $x^3 \equiv a \equiv b^3 \pmod p \Rightarrow x^3 -b^3 \equiv 0 \pmod p$. Note that $x^3 -b^3=(x-b)(x^2+bx+b^2)$.</p> <p>Now I am stuck here. I observe that if $p \equiv 1$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has two incongruent solutions modulo $p$, and if $p \equiv 5$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has one unique solution modulo $p$.</p> <p>Can anyone guide me?</p>
N. S.
9,176
<p>As $p \neq 2$ you know that $2$ is invertible modulo $p$. Let $\alpha$ be its inverse.</p> <p>By completing the square we get</p> <p>$$(x^2+bx+b^2) \equiv 0 \Rightarrow (x+\alpha b)^2 = x^2+bx+\alpha^2 b^2 \equiv b^2(\alpha^2-1) \equiv b^2 \alpha^2 (1-4) \equiv -3(\alpha b)^2 \pmod{p}$$</p> <p>Now all you have to do is study if $-3$ is a quadratic residue modulo $p$. The fact that you have to look at $p \mod 6$ comes very naturally from here:</p> <p>$$\left( \frac{-3}{p} \right)=\left( \frac{-1}{p} \right)\left( \frac{3}{p} \right)=(-1)^{\frac{p-1}{2}} (-1)^{\frac{p-1}{2}\frac{3-1}{2}}\left( \frac{p}{3} \right)=\left( \frac{p}{3} \right)$$</p> <p><strong>Simpler solution</strong> Note that $b \not\equiv 0 \pmod{p}$. Then</p> <p>$$x^3 \equiv b^3 \pmod{p} \Leftrightarrow (xb^{-1})^3 \equiv 1 \pmod{p}$$</p> <p>Now, if $p \equiv 1 \pmod{6}$, once you prove that $y^3 \equiv 1 \pmod{p}$ has three solutions, the conclusion follows immediately.</p> <p>If $p \equiv 5 \pmod{6}$, you can prove that $y^3 \equiv 1 \pmod{p}$ has a unique solution. From here, it follows immediately that $x^3 \equiv b^3 \pmod{p} \Rightarrow x \equiv b \pmod{p}$. Thus, the function $f (x)= x^3$ is a one-to-one function $\mod{p}$, hence a bijection.</p>
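<p>(Added verification, not part of the original answer.) The solution counts can be checked directly for small primes: for $p\equiv 1\pmod 6$ each unit $a$ has $0$ or $3$ cube roots, while for $p\equiv 5\pmod 6$ cubing is a bijection, so every $a$ has exactly one.</p>

```python
def solution_counts(p):
    """Set of possible counts of solutions x of x^3 = a (mod p), over a = 1..p-1."""
    return {sum(1 for x in range(p) if pow(x, 3, p) == a)
            for a in range(1, p)}

res_1mod6 = {p: solution_counts(p) for p in (7, 13, 31)}    # p = 1 (mod 6)
res_5mod6 = {p: solution_counts(p) for p in (5, 11, 17, 23)}  # p = 5 (mod 6)
```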
1,694,991
<p>I tried doing $u$-substitution and got $-20e$ as my final answer, but I think the correct answer is just $20$. I'm not sure what I did wrong, but probably had to do with plugging in infinity... could someone explain the process of solving this integral?</p>
User8128
307,205
<p>I think integration by parts is the way to go here, not $u$-sub. Recall, we have $$\int^\infty_0 u(x) v'(x) dx = \left[u(x)v(x) \right]_0^\infty - \int^\infty_0 u'(x) v(x) dx.$$I'm abusing notation here; of course we shouldn't just "plug in" $\infty$, we should take a limit but of course this amounts to the same thing. Putting $u(x) = x/20$ and $v(x) =-20e^{-x/20}$, we see $$\int^\infty_0 \frac x {20} e^{-x/20} dx = \left[ - xe^{-x/20} \right]^\infty_0 + \int^\infty_0 e^{-x/20} dx.$$ The bracketed term is zero so $$\int^\infty_0 \frac x {20} e^{-x/20} dx = \left[-20e^{-x/20}\right]^\infty_0 = 20.$$</p>
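<p>(Added numerical check, not part of the original answer.) The midpoint rule on a long truncated interval reproduces the value $20$; this is also the mean of the exponential density $\frac{1}{20}e^{-x/20}$.</p>

```python
import math

# Midpoint rule for the integral of (x/20) e^(-x/20) over [0, T];
# the tail beyond T = 400 contributes less than 1e-6.
T, n = 400.0, 400000
h = T / n
approx = 0.0
for i in range(n):
    x = (i + 0.5) * h
    approx += (x / 20.0) * math.exp(-x / 20.0) * h
```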
3,299,863
<p>Let <span class="math-container">$f_{1}(x) = e^{x^5}$</span> and <span class="math-container">$f_{2}(x) = e^{x^3}$</span>. Let <span class="math-container">$g(x) = f_{1}f_{2}$</span>. Find <span class="math-container">$g^{(18)}(0)$</span>.</p> <p>By series expansion at <span class="math-container">$x = 0$</span>:</p> <p><span class="math-container">$f_{1}(x) = \sum_{k \ge 0} {x^{5k} \over k! }$</span> and <span class="math-container">$f_{2}(x) = \sum_{m \ge 0}{x^{3m} \over {m!}}$</span>, then</p> <p><span class="math-container">$$g(x) = \sum_{k, m \ge 0}{x^{5k + 3m} \over {m!k!}}.$$</span></p> <p>Substituting <span class="math-container">$5k + 3m = n$</span> we get <span class="math-container">$g(x) = \sum_{n \ge 0} \left( \sum_{5k + 3m = n}{1 \over {m!k!}} \right) x^{n} $</span>.</p> <p>Solving the Diophantine equation <span class="math-container">$5k + 3m = 18$</span>, there are two ordered pairs of non-negative integers <span class="math-container">$(k, m)$</span>: <span class="math-container">$(3, 1), (0, 6)$</span>. Thus, <span class="math-container">$g^{(18)}(0) = 18! \left[ { {1 \over {3!1!}} + {1 \over {0!6!}}} \right].$</span></p> <p>Is there a general method for finding the <span class="math-container">$n^{th}$</span> derivative of functions <span class="math-container">$\prod_{1 \le i \le n}f_{i}$</span>? Obviously, if there are no solutions then a derivative of a function at some point will be <span class="math-container">$0$</span>. But what can be said when there are infinitely many solutions?</p> <p><strong>UPD: 01.08.2019</strong></p> <p>Consider the function <span class="math-container">$f(x) = e^{1 \over 1 - x}$</span>. Then by expansion at 0: <span class="math-container">$$f(x) = e\sum_{n \ge 0} \sum_{x_{1} + 2x_{2} + \cdots = n} {{1} \over {x_{1}!x_{2}!\cdots}} x^{n},$$</span> which gives an infinite diophantine equation. 
More generally, it can be applied to functions of the form <span class="math-container">$f(x)^{g(x)}$</span>. Referring to my earlier question, what can be said about the derivative at <span class="math-container">$x = 0$</span> of such a function?</p>
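<p>(Added sanity check, not part of the original post.) Multiplying the truncated Taylor series of $e^{x^5}$ and $e^{x^3}$ with exact rational arithmetic confirms the value $g^{(18)}(0)=18!\left(\frac{1}{3!\,1!}+\frac{1}{0!\,6!}\right)$:</p>

```python
from fractions import Fraction
from math import factorial

N = 18
# Truncated Taylor coefficients of exp(x^5) and exp(x^3) about 0.
f1 = [Fraction(0)] * (N + 1)
f2 = [Fraction(0)] * (N + 1)
for k in range(N + 1):
    if 5 * k <= N:
        f1[5 * k] = Fraction(1, factorial(k))
    if 3 * k <= N:
        f2[3 * k] = Fraction(1, factorial(k))

# Cauchy product gives the Taylor coefficients of g = f1 * f2.
g = [sum(f1[i] * f2[n - i] for i in range(n + 1)) for n in range(N + 1)]
g18 = g[N] * factorial(N)              # g^(18)(0) = 18! * [x^18] g(x)
claimed = factorial(18) * (Fraction(1, factorial(3)) + Fraction(1, factorial(6)))
```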
Noah Schweber
28,111
<p>Just look at the usual <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition of convergence:</p> <blockquote> <p><span class="math-container">$\lim_{x\rightarrow a}f(x)=L$</span> iff for every <span class="math-container">$\epsilon&gt;0$</span> there is a <span class="math-container">$\delta&gt;0$</span> such that for all <span class="math-container">$x$</span>, if <span class="math-container">$0&lt;\vert x-a\vert&lt;\delta$</span> then <span class="math-container">$\vert f(x)-L\vert&lt;\epsilon$</span>.</p> </blockquote> <p>This makes perfect sense in the hyperreals, without changing anything: just make sure that all the variables are allowed to range over the hyperreals. So, for example, to show that <span class="math-container">$\lim_{x\rightarrow c}x=c$</span>, we just set <span class="math-container">$\delta=\epsilon$</span> as usual.</p> <p>Note that this definition applies to <em>any</em> function <span class="math-container">${}^*\mathbb{R}\rightarrow{}^*\mathbb{R}$</span>. Of course, we're usually really interested in the ones which come from functions on <span class="math-container">$\mathbb{R}$</span>; given such an <span class="math-container">$f$</span>, in the nonstandard setting we replace <span class="math-container">$f$</span> with <span class="math-container">${}^*f$</span> and go as above. 
We can then use the transfer property to show that everything we're going to get in this context is actually true in standard analysis too.</p> <p>Similarly, to tell whether a sequence <span class="math-container">$X=(x_n)_{n\in\mathbb{N}}$</span> from the standard world converges to some standard real <span class="math-container">$L$</span>, we first pass to its nonstandard version <span class="math-container">${}^*X=({}^*x_n)_{n\in{}^*\mathbb{N}}$</span> and then ask, in the hyperreal world, the usual question: is it the case that for all <span class="math-container">$\epsilon&gt;0$</span> there is some <span class="math-container">$n$</span> such that for all <span class="math-container">$m&gt;n$</span> we have <span class="math-container">$\vert {}^*x_m-L\vert&lt;\epsilon$</span>? Transfer tells us (as usual) that this gives the desired result.</p>
2,921,390
<blockquote> <p>Aashna and Radhika see the integers $1$ to $211$ written on a blackboard. They alternate turns and in every step each of them wipes out any $11$ numbers until only $2$ numbers are left on the blackboard. If the difference of these $2$ numbers (by subtracting the smaller from the larger) is $\geq 111$, the first player wins, otherwise the second. If you were Aashna, would you choose to play 1st or 2nd and why?</p> </blockquote> <p>By intuition only – I don’t see how I can prove it:</p> <p>Since there are 19 turns ($19\times 11=209$ numbers + 2 on the blackboard), I would choose to play 1st. In my 1st move I would wipe out numbers 101 to 111 since these are the only ones that do not have a pair to meet the rule. Then for whichever numbers Radhika would remove, I would respond by removing the numbers that would have a difference of 111, for example for 92 I wipe out 203 etc. If Radhika chose to remove pairs with difference equal to 111, there would still be a single number, for which I would remove its pair and then I would also remove pairs with difference 111 or 112 and so on.</p> <p>Does this method guarantee a win?</p>
Batominovski
72,152
<p>There are $100$ pairs with difference $111$: $\{1,112\}$, $\{2,113\}$, $\ldots$, $\{100,211\}$. As you noticed, the $11$ numbers that are unpaired are $101$, $102$, $\ldots$, $111$. Since the second player can eliminate at most $9\cdot 11$ numbers, the second player does not touch at least one pair of difference $111$. Thus, if the first player does not play stupidly by corrupting these pairs, then the first player can always win.</p> <p>You have recommended a good strategy. In the first move, the first player removes the $11$ unpaired numbers $101$, $102$, $\ldots$, $111$. On every later turn, after the second player has played, whenever a number $x\in\{x,y\}$ with $|x-y|=111$ was removed but $y$ was not, the first player responds by removing $y$. Since $11$ is odd, the number of such $x$'s that the second player removed must be odd, so the first player removes an odd number of such $y$'s. If the first player has not yet completed the turn (i.e., fewer than $11$ numbers have been played), then an even number of removals remains, and the first player removes both numbers from some pairs with difference $111$ until the turn is finished. Therefore, by the end of the $(2k+1)$-st turn, only $11k$ pairs of difference $111$ have been killed (and the other pairs are completely untouched).</p>
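<p>The counts used in this argument can be verified mechanically; here is a quick sketch (not part of the original answer):</p>

```python
# Pairs {a, a+111} inside {1, ..., 211}, and the numbers with no partner.
numbers = set(range(1, 212))
pairs = [(a, a + 111) for a in sorted(numbers) if a + 111 in numbers]
paired = {x for pair in pairs for x in pair}
unpaired = sorted(numbers - paired)

print(len(pairs))   # 100 pairs of difference 111
print(unpaired)     # [101, 102, ..., 111]
```

<p>With $100$ pairs and only $9\cdot 11=99$ numbers removable by the second player over the whole game, at least one pair always survives untouched, which is the heart of the argument.</p>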
184,575
<p>I have a large jagged list, that is each sub-list has a different length. I would like to <code>Flatten</code> this list for <code>Histogram</code> purposes, but it seems to be taking an inordinate amount of time and memory</p> <pre><code>jaggedList=Table[RandomReal[1,RandomSample[Range[400000,800000],1]],{n,100}]; </code></pre> <p>Just to illustrate, length of each of elements of the main list</p> <pre><code>ListPlot[Length/@jaggedList] </code></pre> <p><a href="https://i.stack.imgur.com/Io5a0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Io5a0.png" alt="list lengths"></a></p> <p>Full Flatten takes a long time, my real data is several times larger, it gets painfully slow</p> <pre><code>fullFlatten=Flatten@jaggedList;//AbsoluteTiming {10.0055,Null} </code></pre> <p>I noticed flattening non-jagged sub-lists is not a problem</p> <pre><code>partialFlatten=Flatten/@jaggedList;//AbsoluteTiming {0.289219,Null} </code></pre> <p>Memory usage is huge on the final result of the full list, even though number of elements is the same:</p> <pre><code>ByteCount/@{fullFlatten,partialFlatten,jaggedList} {1460378864,486808224,486808224} </code></pre> <p>Would super appreciate any tips on what I can change to make this faster / more memory compact !</p>
Carl Woll
45,431
<p>The difference between using <a href="http://reference.wolfram.com/language/ref/Flatten" rel="noreferrer"><code>Flatten</code></a> and using <a href="http://reference.wolfram.com/language/ref/Join" rel="noreferrer"><code>Join</code></a> as in @kglr's answer is that <a href="http://reference.wolfram.com/language/ref/Flatten" rel="noreferrer"><code>Flatten</code></a> unpacks. Here is a smaller example:</p> <pre><code>SeedRandom[1] list = Table[RandomReal[1, RandomSample[2;;5, 1]], 3] </code></pre> <blockquote> <p>{{0.269558, 0.445678, 0.158104, 0.751213, 0.965444}, {0.0518202, 0.675946, 0.698472}, {0.344389, 0.830322, 0.556863}}</p> </blockquote> <p>Turn on packing messages:</p> <pre><code>On["Packing"] </code></pre> <p>Then, using <a href="http://reference.wolfram.com/language/ref/Flatten" rel="noreferrer"><code>Flatten</code></a>:</p> <pre><code>Flatten[list] </code></pre> <blockquote> <p>Developer`FromPackedArray::unpack: Unpacking array in call to HoldForm.</p> <p>Developer`FromPackedArray::punpack: Unpacking array with dimensions {5} in call to Flatten.</p> <p>Developer`FromPackedArray::unpack: Unpacking array in call to HoldForm.</p> <p>Developer`FromPackedArray::punpack: Unpacking array with dimensions {3} in call to Flatten.</p> <p>Developer`FromPackedArray::unpack: Unpacking array in call to HoldForm.</p> <p>General::stop: Further output of Developer`FromPackedArray::unpack will be suppressed during this calculation.</p> <p>Developer`FromPackedArray::punpack: Unpacking array with dimensions {3} in call to Flatten.</p> <p>General::stop: Further output of Developer`FromPackedArray::punpack will be suppressed during this calculation.</p> <p>{0.269558, 0.445678, 0.158104, 0.751213, 0.965444, 0.0518202, 0.675946, 0.698472, 0.344389, 0.830322, 0.556863}</p> </blockquote> <p>and using <a href="http://reference.wolfram.com/language/ref/Join" rel="noreferrer"><code>Join</code></a>:</p> <pre><code>Join @@ list </code></pre> <blockquote> <p>{0.269558, 0.445678, 
0.158104, 0.751213, 0.965444, 0.0518202, 0.675946, 0.698472, 0.344389, 0.830322, 0.556863}</p> </blockquote> <p>As you can see, using <a href="http://reference.wolfram.com/language/ref/Join" rel="noreferrer"><code>Join</code></a> generates no unpacking messages, which is why it is much faster.</p>
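<p>A loose analogue outside Mathematica, assuming NumPy is available: concatenating the packed blocks directly keeps the data packed, while rebuilding the result element by element forces everything through boxed Python floats, which is the same kind of cost that <code>Flatten</code>'s unpacking incurs here. This is an illustration only, not part of the original answer.</p>

```python
import numpy as np

# A small ragged list of packed (NumPy) arrays.
ragged = [np.random.rand(n) for n in (5, 3, 4)]

# Fast path: join the packed blocks directly (analogue of Join @@ list).
joined = np.concatenate(ragged)

# Slow path: element-by-element rebuild through Python floats
# (analogue of the unpacking triggered by Flatten on ragged input).
unpacked = np.array([x for block in ragged for x in block])

print(joined.shape)   # (12,)
```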
1,318,880
<p>I'm trying to prove that $\operatorname{lcm}(n,m) = nm/\gcd(n,m)$ I showed that both $n,m$ divides $nm/\gcd(n,m)$ but I can't prove that it is the smallest number. Any help will be appreciated.</p>
Atvin
215,617
<p>Hint: For any $a,b$ real numbers: $\min(a,b)+\max(a,b)=a+b$.</p> <p>Now write the prime factorizations $a=p_1^{a_1} p_2^{a_2}\cdots$ and $b=p_1^{b_1} p_2^{b_2}\cdots$ (allowing zero exponents). The exponent of $p_i$ in $\gcd(a,b)$ is $\min(a_i,b_i)$ and in $\operatorname{lcm}(a,b)$ it is $\max(a_i,b_i)$, so applying the equation above to every $p_i$ gives $\gcd(a,b)\cdot\operatorname{lcm}(a,b)=ab$.</p>
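<p>A quick numeric sanity check of the identity, with the lcm computed independently as the smallest positive common multiple so the check is not circular (a sketch, not part of the original hint):</p>

```python
from math import gcd

def lcm(a, b):
    # smallest positive multiple of a that b also divides
    m = a
    while m % b:
        m += a
    return m

for a, b in [(12, 18), (7, 13), (100, 75)]:
    assert gcd(a, b) * lcm(a, b) == a * b

print(lcm(12, 18))  # 36
```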
2,337,357
<p><a href="https://i.stack.imgur.com/iWuCv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iWuCv.png" alt="enter image description here"></a></p> <p>The derivative of $\frac1x$ is $\frac{-1}{x^2}$.</p> <p>How do I find the $c$ if there is no zero in the derivative of the function?</p> <p>I started with $-1/x^2= -0.0625$ but I'm confused from here on.</p>
DreamConspiracy
454,309
<p>We will consider the probability that two of the same number do not appear. After the first roll, this is of course $1$. Then, on the second roll, there is a $\frac {19}{20}$ chance that no roll is repeated. On the third roll, there is an $\frac {18}{20}$ chance that no roll is repeated. Multiplying together and subtracting from one we get: $$1-\left(\frac {19}{20}\right)\left(\frac {18}{20}\right)=0.145$$ You have to remember to consider all previous die rolls, and not just the one immediately before.</p>
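<p>Assuming the setting is three rolls of a $20$-sided die, exact enumeration over all $20^3$ outcomes confirms the value (a sketch, not from the original answer):</p>

```python
from itertools import product

# Count outcomes of three d20 rolls in which some value repeats.
total = repeats = 0
for rolls in product(range(20), repeat=3):
    total += 1
    if len(set(rolls)) < 3:
        repeats += 1

p = repeats / total
print(p)  # 0.145 exactly: 1 - (20*19*18)/20**3
```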
15,093
<p>For example, I am confident that very few students majoring in pure mathematics can write a complete proof to the <a href="https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem" rel="noreferrer">Abel–Ruffini theorem</a> (there is no algebraic solution to general polynomial equations of degree five or higher with arbitrary coefficients) by the time of their graduation. I suspect many students with a Master's degree or Doctorate in pure mathematics could not prove this theorem either. They may know the conclusion, but may not be able to sketch an idea of the proof, let alone give a complete proof.</p> <p>My question is: should we educate pure mathematics major students in such a way that they should know how to prove most of the classical results in mathematics such as the Abel–Ruffini theorem and the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra" rel="noreferrer">Fundamental Theorem of Algebra</a> before getting their bachelor's degree, or at least their master's Degrees?</p>
paul garrett
63
<p>This is an interesting question, but, understandably, confounds at least two different things. E.g., is it really the case that to "know" a true mathematical fact is to be able to produce its proof on command? I think not. Another diagnostic question: must we understand thermodynamics and the Carnot cycle to drive a car usefully? Must we be able to prove the stability of the proton before setting our coffee cup on the table? Yes, of course, I'm exaggerating... but my exaggeration is in the direction I think is relevant.</p> <p>Namely, <em>awareness</em> is the key point (and assimilation of the facts into one's world-view... to the extent that they might have some impact and affect one's own decisions). </p> <p>My opinion on this is in the same vein as my objection to people being told to do every exercise before moving forward: not only are many of those exercises either make-work or pranks, but many are also incomprehensible without understanding what happens in the sequel... which one will not see until after? A bit perverse. Sure, some such pranks are "fun" in Math Olympiads and Putnam and such, but...</p> <p>The problem that I see is that undergrads are too often conditioned to be paranoid that there's some unfathomable flaw in what they've written... that can only be adjudicated by the oracular professor. One of the worst corollaries of this is that kids are very inhibited about broadening their scope, because they're already worried about defending themselves with regard to a tiny, trivial "plot of land", and are taught to give no credence to their own critical faculties.</p> <p>So, yes, I think this question raises some good issues, but is literally a bit mis-aimed in its assumptions.</p>
660,034
<p>I wondered if all decimal expansions of $\frac{1}{n}$ could be thought of in such a way, but clearly for $n=6$,</p> <p>$$.12+.0024+.000048+.00000096+.0000000192+...\neq.1\bar{6}$$</p> <p>Why does it work for 7 but not 6? Is there only one such number per base, <em>i.e.</em> 7 in base 10? If so what is the general formula?</p>
Community
-1
<p>Do you know about geometric series? Your series is really $$7(0.02 + 0.0004 + \cdots) = 7 \frac{2/100}{1 - 2/100} = 7 \cdot \frac{1}{49}$$ When you replace $7$ by something else, say $n$, your series is similarly $n \frac{2/100}{1 - 2/100} = \frac{n}{49}$. </p> <p>In particular, only when $n = 7$ would that be equal to $\frac{1}{n}$.</p>
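<p>Numerically (a sketch, not part of the original answer; the closed form is $n\cdot\frac{2/100}{1-2/100}=\frac{n}{49}$):</p>

```python
# Partial sums of n*(2/100)^1 + n*(2/100)^2 + ... for a given multiplier n.
def series(n, terms=40):
    return sum(n * (2 / 100) ** k for k in range(1, terms + 1))

print(series(7))   # ~ 1/7 = 0.142857...
print(series(6))   # ~ 6/49 = 0.12244..., not 1/6
```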
1,656,963
<p>If $(a_n) \to 0$, then applying the algebraic limit theorem, what is $ \lim_\limits{n\to\infty} \frac {1+2a_n}{1+3a_n - 4a^2_n}$?</p> <p>Would I just be able to do: $\lim_\limits{n\to\infty} \frac {1+2(0)}{1+3(0) - 4(0)}$?</p> <p>$\lim \frac {1}{1} = 1$.</p>
Roman83
309,360
<p>If $(a_n)\to a$ and $1+3a-4a^2 \neq 0$ (i.e. $a\neq 1$ and $a \neq -\frac 14$), then $ \lim_\limits{n\to\infty} \frac {1+2a_n}{1+3a_n - 4a^2_n}=\frac {1+2a}{1+3a - 4a^2}$</p>
199,842
<p>I understand the reasoning behind $\pi r^2$ for a circle area however I'd like to know what is wrong with the reasoning below:</p> <p>The area of a square is like a line, the height (one dimension, length) placed several times next to each other up to the square all the way until the square length thus we have height x length for the area.</p> <p>The area of a circle could be thought of a line (The radius) placed next to each other several times enough to make up a circle. Given that circumference of a circle is $2 \pi r$ we would, by the same reasoning as above, have $2 \pi r^2$. Where is the problem with this reasoning?</p> <p>Lines placed next to each other would only go straight like a rectangle so you'd have to spread them apart in one of the ends to be able to make up a circle so I believe the problem is there somewhere. Could anybody explain the issue in the reasoning above?</p>
Will Orrick
3,736
<p>The main issue is that you don't form an area by placing <em>lines</em> next to each other--you need to place <em>strips</em> next to each other. As you say, to form an $a\times a$ square, you can place $n$ strips of dimension $a\times w$ next to each other, where $w=a/n$, giving total area $naw=a^2$.</p> <p><img src="https://i.stack.imgur.com/MKMXO.gif" alt="enter image description here"></p> <p>Your suggestion amounts to forming the area of a circle of radius $r$ by placing $n$ strips of dimension $w\times r$ next to each other radially, where $w=2\pi r/n$, giving area $nrw=2\pi r^2$. But look what happens if you do this:</p> <p><img src="https://i.stack.imgur.com/8jc0f.gif" alt="enter image description here"></p> <p>The problem is that the strips overlap, so the total area of the $n$ strips is greater than the area of the circle. If you run the animation (by reloading the page, if necessary) you can convince yourself that in the $n\rightarrow\infty$ limit, half of each strip contributes to the final area. (Observe that as strips are added in the counterclockwise direction, roughly half of each strip gets covered by subsequent strips.)</p> <p>We can fix the problem of overlapping strips more easily by using triangles of base $w$ and height $r$ instead of rectangles:</p> <p><img src="https://i.stack.imgur.com/8YyNv.gif" alt="enter image description here"></p> <p>This gives area $\frac{1}{2}nrw=\pi r^2$.</p>
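<p>The bookkeeping in the pictures can be mimicked numerically; a sketch, with the radius value assumed for illustration:</p>

```python
from math import sin, pi

r = 2.0
n = 100000
w = 2 * pi * r / n                           # arc length allotted to each strip

strips = n * (r * w)                         # overlapping r-by-w rectangles
triangles = n * (0.5 * w * r)                # base-w, height-r triangles
polygon = n * 0.5 * r * r * sin(2 * pi / n)  # inscribed regular n-gon

print(strips / (pi * r * r))      # about 2: the rectangles double-count
print(triangles / (pi * r * r))   # about 1
print(polygon / (pi * r * r))     # tends to 1 as n grows
```

<p>The inscribed-polygon version replaces each curved-base triangle by a chord-based one, so its area $\tfrac12 n r^2\sin(2\pi/n)$ only reaches $\pi r^2$ in the limit.</p>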
3,436,804
<p>Does the following series converge? If yes, what is its value in simplest form?</p> <p><span class="math-container">$$\left( \frac{1}{1} \right)^2+\left( \frac{1}{2}+\frac{1}{3} \right)^2+\left( \frac{1}{4}+\frac{1}{5}+\frac{1}{6} \right)^2+\left( \frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10} \right)^2+\dots$$</span></p> <p>I have no idea how to start. Any hint would be really appreciated. THANKS!</p>
Andronicus
528,171
<p>Notice that:</p> <p><span class="math-container">$$\left(\frac{1}{1}\right)^2+\left(\frac{1}{2} + \frac{1}{3}\right)^2 + \dots&lt;\left(\frac{1}{1}\right)^2+\left(\frac{2}{2}\right)^2+\left(\frac{3}{4}\right)^2+\left(\frac{4}{7}\right)^2+\dots=1+\sum_{n=2}^{\infty}\left(\frac{n}{\frac{n(n-1)}{2}+1}\right)^2&lt;1+\sum_{n=2}^{\infty}\left(\frac{2}{n-1}\right)^2$$</span></p> <p>Here each group of $n$ terms is bounded by $n$ copies of its largest term, $\frac{1}{n(n-1)/2+1}$. Since the series is bounded by a convergent series, it is convergent itself.</p>
3,436,804
<p>Does the following series converge? If yes, what is its value in simplest form?</p> <p><span class="math-container">$$\left( \frac{1}{1} \right)^2+\left( \frac{1}{2}+\frac{1}{3} \right)^2+\left( \frac{1}{4}+\frac{1}{5}+\frac{1}{6} \right)^2+\left( \frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10} \right)^2+\dots$$</span></p> <p>I have no idea how to start. Any hint would be really appreciated. THANKS!</p>
robjohn
13,854
<p><strong>Approximation with Euler-Maclaurin Sum Formula</strong></p> <p>The Euler-Maclaurin Sum Formula gives the fairly well-known asymptotic series for the Harmonic Numbers <span class="math-container">$$ \begin{align} H_n &amp;=\gamma+\log(n)+\frac1{2n}-\frac1{12n^2}+\frac1{120n^4}\\[3pt] &amp;-\frac1{252n^6}+\frac1{240n^8}-\frac1{132n^{10}}+O\!\left(\frac1{n^{12}}\right)\tag1 \end{align} $$</span> where <span class="math-container">$\gamma$</span> is the Euler-Mascheroni constant. <span class="math-container">$\gamma$</span> doesn't actually come from the Euler-Maclaurin Sum Formula, but is defined as <span class="math-container">$\lim\limits_{n\to\infty}(H_n-\log(n))$</span>.</p> <p>Substituting into <span class="math-container">$(1)$</span> and expanding Taylor series gives <span class="math-container">$$ \begin{align} \left(H_{n(n+1)/2}-H_{n(n-1)/2}\right)^2 &amp;=\frac4{n^2}-\frac{16}{3n^4}+\frac{32}{45n^6}+\frac{1424}{315n^8}\\[3pt] &amp;+\frac{3392}{1575n^{10}}-\frac{112912}{10395n^{12}}+O\!\left(\frac1{n^{14}}\right)\tag2 \end{align} $$</span> Equation <span class="math-container">$(2)$</span> not only shows that the series converges, but applying the Euler-Maclaurin Sum Formula to <span class="math-container">$(2)$</span> results in <span class="math-container">$$ \begin{align} &amp;\sum_{k=1}^n\left(H_{k(k+1)/2}-H_{k(k-1)/2}\right)^2\\ &amp;=C-\frac4n+\frac2{n^2}+\frac{10}{9n^3}-\frac8{3n^4}+\frac{398}{225n^5}+\frac{16}{45n^6}-\frac{4378}{2205n^7}\\[3pt] &amp;+\frac{712}{315n^8}-\frac{22718}{14175n^9}+\frac{1696}{1575n^{10}}+\frac{138}{4235n^{11}}-\frac{56456}{10395n^{12}}+O\!\left(\frac1{n^{13}}\right)\tag3 \end{align} $$</span> Plugging <span class="math-container">$n=200$</span> into <span class="math-container">$(3)$</span> gives <span class="math-container">$$ \bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^\infty\left(H_{k(k+1)/2}-H_{k(k-1)/2}\right)^2=3.170466061427153684757796531}\tag4 $$</span> As a check, <span 
class="math-container">$n=300$</span> gives the same result.</p>
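<p>The constant in $(4)$ can be reproduced by direct summation plus the leading $-4/n$ tail correction from $(3)$; a rough numerical sketch (the truncation point is assumed for illustration):</p>

```python
# Sum (H_{k(k+1)/2} - H_{k(k-1)/2})^2 for k <= N; the k-th block sums the
# reciprocals of the k consecutive integers j+1, ..., j+k where j = k(k-1)/2.
N = 2000
s = 0.0
j = 0
for k in range(1, N + 1):
    block = sum(1.0 / (j + i) for i in range(1, k + 1))
    j += k
    s += block * block

print(s + 4 / N)   # close to 3.170466...
```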
1,105,787
<p>I was given an exercise in my calculus class that I don't really understand. The problem says: Find the area limited by the curves $$ y = \frac{x+4}{x^2+1} ,\space x = -2 ,\space x = 3,\space y = 0 $$</p> <p>I don't really know what approach to follow here. My guess would be to solve it using Riemann sums or maybe definite integrals, using $ x = -2 $ and $ x = 3 $ as the interval, but I'm totally lost.</p>
mickep
97,236
<p>It is meant that you should calculate the integral $$ \int_{-2}^3 \frac{x+4}{x^2+1}\,dx $$ Also, see this picture where the curves are drawn.</p> <p><img src="https://i.stack.imgur.com/Lw4JX.png" alt="picture"></p>
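<p>For completeness, the antiderivative is elementary, $\tfrac12\ln(x^2+1)+4\arctan x$, and a crude midpoint-rule evaluation agrees with it (a sketch, not part of the original answer):</p>

```python
from math import log, atan

def F(x):
    # antiderivative of (x + 4)/(x^2 + 1)
    return 0.5 * log(x * x + 1) + 4 * atan(x)

exact = F(3) - F(-2)

# midpoint-rule approximation of the same integral
a, b, n = -2.0, 3.0, 200000
h = (b - a) / n
approx = h * sum((a + (i + 0.5) * h + 4) / ((a + (i + 0.5) * h) ** 2 + 1)
                 for i in range(n))

print(exact)   # about 9.7714 (in fact ln(2)/2 + 3*pi)
```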
813,301
<p>$f$ is strictly increasing and $g$ is decreasing. How to find whether $f\circ g$ and $g\circ f$ are increasing, decreasing, strictly increasing or strictly decreasing?</p> <p>This is what I did,</p> <p>$f \circ g=f(g(x))$</p> <p>If we take $x_1 &lt; x_2$,</p> <p>$f(x_1) &lt; f(x_2) $ and $\ g(x_1) \ge g(x_2) $</p> <p>Assuming $f(g(x_1)) &lt; f(g(x_2))$, then,</p> <p>$g(x_1)&lt;g(x_2) \implies x_2&lt;x_1$ </p> <p>This is a contradiction, therefore our assumption is wrong. After this, what should I assume to prove this? If I assume $f(g(x_1)) \ge f(g(x_2)) \ $, a problem occurs since $f$ is strictly increasing and the assumption has an equality possibility.</p> <p>Or is there a more effective method than this? Can it be applied to prove the same for $g\circ f$?</p>
Christian Blatter
1,303
<p>It is not necessary to produce a contradiction.</p> <p>When $x&lt;y$ then $g(x)\geq g(y)$ and therefore $f\bigl(g(x)\bigr)\geq f\bigl(g(y)\bigr)$. This already proves that $f\circ g$ is decreasing. You cannot hope for more: It could be that $g(x)=g(y)$, so that we obtain an instance of $x&lt;y$ and $f\circ g(x)=f\circ g(y)$.</p> <p>Similarly for $g\circ f$.</p>
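<p>A concrete instance of the non-strictness (functions chosen for illustration): take $f(x)=x^3$, strictly increasing, and $g(x)=-\lfloor x\rfloor$, decreasing but constant on each interval $[m,m+1)$.</p>

```python
from math import floor

def f(x):          # strictly increasing
    return x ** 3

def g(x):          # decreasing, but constant on each [m, m+1)
    return -floor(x)

# f∘g is decreasing ...
assert f(g(0.5)) > f(g(1.5)) > f(g(2.5))
# ... but not strictly: distinct inputs can give equal outputs
print(f(g(0.2)), f(g(0.8)))   # equal values
```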
483,173
<p>A non-zero matrix $A$ is said to be nilpotent if $A^k=0$ for some positive integer $k\geq2$. If $A$ is nilpotent, is $I+A$ invertible? Here $I$ is the identity matrix.</p>
Ben Grossmann
81,360
<p>Short answer: yes.</p> <p>Long answer:</p> <blockquote class="spoiler"> <p>If $A^k=0$, consider the product $$A^{2n+1}+I=(A+I)(A^{2n}-A^{2n-1}+\cdots+I)$$ For a sufficiently large integer $n$.</p> </blockquote>
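<p>Concretely, for the simplest nilpotent matrix the inverse suggested by this factorization collapses to $I-A$, since $(I+A)(I-A)=I-A^2=I$ when $A^2=0$ (a small sketch with plain lists, not part of the original hint):</p>

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [0, 0]]           # nilpotent: A^2 = 0
I = [[1, 0], [0, 1]]
I_plus_A = [[1, 1], [0, 1]]
candidate = [[1, -1], [0, 1]]  # I - A

print(matmul(I_plus_A, candidate))   # the identity matrix
```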
492,031
<p>I am currently reading through Hatcher's Algebraic Topology book. I am having some trouble understanding the difference between a deformation retraction and just a retraction. Hatcher defines them as follows:</p> <p>A <strong>deformation retraction</strong> of a space $X$ onto a subspace $A$ is a family of maps $f_t:X \to X$, $t \in I$, such that $f_0=\mathbb{1}$ (the identity map), $f_1(X)=A$, and $f_t|A=\mathbb{1}$ for all $t$.</p> <p>A <strong>retraction</strong> of $X$ onto $A$ is a map $r:X \to X$ such that $r(X)=A$ and $r|A=\mathbb{1}$.</p> <p>Is the notion of time the important characteristic that sets the two ideas apart? (It seems that the definition of deformation retraction utilizes time in its definition, whereas retraction seems to not.)</p> <p>Any insight is appreciated. Also, if anyone have additional suggested reading material to help with concepts in Algebraic topology, that would be much appreciated.</p>
Giorgio Mossa
11,888
<p>As you noted, the two notions are different: a deformation retraction is a continuous family of continuous functions, i.e. a homotopy, while a retraction is just a continuous function.</p> <p>A retraction is just a map that sends all the points of $X$ into $A$ while fixing the points of $A$.</p> <p>A deformation retraction, by contrast, is a family of mappings that fix the points of $A$, but there is more: we require the family to be continuous, meaning that the induced map $X \times I \to X$ sending every pair $(x,t) \in X \times I$ to $f_t(x)$ is a continuous function.</p> <p>There is also another way to see deformation retractions and, more generally, homotopies. For every space $Y$ we can consider the set $Y^I=\mathbf{Top}(I,Y)$ of continuous paths in $Y$ and topologize this set with the <a href="http://en.wikipedia.org/wiki/Compact_open_topology">compact-open topology</a>.</p> <p>Since the space $I$ is locally compact, a general theorem tells us that there is a bijection $$\mathbf{Top}(X \times I , Y) \cong \mathbf{Top}(X,Y^I)$$ sending every map $F \colon X \times I \to Y$ to the map $\bar F \colon X \to Y^I$ that to every $x \in X$ associates the continuous function $\bar F(x) \colon I \to Y$ such that for $t \in I$, $\bar F(x)(t)=F(x,t)$ (this bijection is natural both in $X$ and $Y$).</p> <p>Because of this bijection we can define a homotopy to be just a continuous function in $\mathbf{Top}(X,Y^I)$.</p> <p>If we adopt this point of view, of homotopies as continuous mappings into path spaces, a deformation retraction is just a mapping that associates to every point $x \in X$ a path starting at the point $x$ and ending at some point of $A$. The path corresponding to a point is the <em>trajectory that the point follows during the deformation of $X$ to $A$</em>.</p> <p>Still, a deformation retraction of $X$ to $A$ is not simply a homotopy: it is a homotopy relative to the subspace $A$ between the identity and a map $r \colon X \to X$ such that $r(X) \subseteq A$. From the requirement that $r$ is homotopic to $1_X$ via a homotopy relative to $A$ it follows that $r$ must be a retraction: by definition, a homotopy relative to $A$ sends every point $a$ of $A$ to a constant path, and this path connects $a$ to $r(a)$, so these must be equal.</p> <p>This means that if $A$ is a deformation retract of $X$ then $A$ is also a retract; the converse, however, does not hold in general. As a counterexample, consider the map that sends $S^1$ (the circle) to a point: this is a retraction, but there is no deformation retraction of $S^1$ to a point (to prove this one needs to do a little work and build some invariant like the $\pi_1$, which is in the next chapter of the book).</p> <p>Hope this helps in understanding the ideas and the differences between these two concepts.</p>
3,220,273
<p>It seems the definition of a parallelogram is locked to quadrilaterals for some reason. Is there a reason for this? Why couldn't a parallelogram (given the way the word seems rather than as a mathematical/geometric construct) contain greater than two pairs of parallel sides? In a hexagon for example, all six sides are parallel to their opposing side. Is there a term for this kind of object?</p> <p>It seems to me there must be some value in describing a polygon with even numbers of sides in which the opposing sides are parallel to each other. While a hexagon, octagon, decagon, etc. all match this rule, you could have polygons with unequal sides as well.</p> <p><a href="https://i.stack.imgur.com/Ln3Yg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ln3Yg.png" alt="enter image description here"></a></p> <p><strong>Edit 1:</strong> Object described by Mark Fischler</p> <p><a href="https://i.stack.imgur.com/uEWA9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uEWA9.png" alt="Object described by Mark Fischler"></a></p> <p><strong>Zonogon:</strong></p> <p><a href="https://i.stack.imgur.com/KfV1w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KfV1w.png" alt="enter image description here"></a></p>
Ethan Bolker
72,858
<p>Interesting question. Parallelograms are quadrilaterals for historical reasons. They could have been defined to include your examples, but weren't. Now the meaning is so common that it can't be changed. </p> <p>I don't think there is a name for your class of polygons. The reason is in this:</p> <blockquote> <p>It seems to me there must be some value in describing a polygon with even numbers of sides in which the opposing sides are parallel to each other.</p> </blockquote> <p>If there were some value - if these polygons came up often in geometry - then someone would have named them. If you have interesting things to say about them and publish your thoughts you'll invent a name in your paper. If it's widely read the name will stick.</p> <p>I thought <em>parallelogon</em> would be a good possibility, but that name is taken: <a href="https://en.wikipedia.org/wiki/Parallelogon" rel="noreferrer">https://en.wikipedia.org/wiki/Parallelogon</a> . </p> <p>The convex polygons whose sides come in <em>equal</em> parallel pairs are <em>zonogons</em>: <a href="https://en.wikipedia.org/wiki/Zonogon" rel="noreferrer">https://en.wikipedia.org/wiki/Zonogon</a> . Your polygons have zonogons as nontrivial <a href="https://en.wikipedia.org/wiki/Minkowski_addition" rel="noreferrer">Minkowski summands</a>. </p>
4,563,135
<p>Denote the linear space of linear operators on the linear space <span class="math-container">$V$</span> with field <span class="math-container">$\mathbb{F}$</span> by <span class="math-container">$L(V)$</span> and the linear space of <span class="math-container">$n \times n$</span> matrices with entries in <span class="math-container">$\mathbb{R}$</span> by <span class="math-container">$\mathbb{R}^{n\times n}$</span>. Let <span class="math-container">$T:V \to V$</span> be an operator on the linear space <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$C_T(x)=\det(x I - T)$</span> be its characteristic polynomial. The coefficients of <span class="math-container">$C_T$</span> are in <span class="math-container">$\mathbb{R}$</span> since by definition <span class="math-container">$C_T(x)=\det\left(\mathcal{M}_B^B(xI-T)\right)$</span> and all the entries of <span class="math-container">$\mathcal{M}_B^B(xI-T)$</span> are in <span class="math-container">$\mathbb{R}$</span>, where <span class="math-container">$\mathcal{M}_B^B$</span> is a linear isomorphism between <span class="math-container">$L(V)$</span> and <span class="math-container">$\mathbb{R}^{n \times n}$</span> with <span class="math-container">$B$</span> being a basis for <span class="math-container">$V$</span>. Consequently, if <span class="math-container">$\lambda \in \mathbb{C} - \mathbb{R}$</span> is a root of <span class="math-container">$C_T$</span> then <span class="math-container">$\bar\lambda$</span> is also a root of <span class="math-container">$C_T$</span> with the same algebraic multiplicity. 
Now, I want to show that</p> <p><span class="math-container">$$\dim \ker (\lambda I - T)^{(m)} = \dim \ker (\bar \lambda I - T)^{(m)}, \qquad m = 1,\dots,r \tag{1}$$</span></p> <p>where <span class="math-container">$r$</span> is the algebraic multiplicity of <span class="math-container">$\lambda$</span> and <span class="math-container">$\bar \lambda$</span>. According to the spectral decomposition theorem, we have <span class="math-container">$V = \cdots \oplus V_\lambda \oplus \cdots \oplus V_{\bar \lambda} \oplus \cdots$</span>, where <span class="math-container">$V_{\lambda} = \ker (\lambda I - T)^{(r)}$</span> and <span class="math-container">$V_{\bar \lambda} = \ker (\bar \lambda I - T)^{(r)}$</span>. Let <span class="math-container">$B=(\cdots, B_{\lambda},\cdots,B_{\bar \lambda},\cdots)$</span> be the corresponding basis of this decomposition. Equation <span class="math-container">$(1)$</span> literally means that the blocks <span class="math-container">$\mathcal{M}_{B_\lambda}^{B_{\lambda}}(T|_{V_{\lambda}})$</span> and <span class="math-container">$\mathcal{M}_{B_{\bar \lambda}}^{B_{\bar \lambda}}(T|_{V_{\bar \lambda}})$</span> have complex conjugate Jordan sub-blocks of the following form</p> <p><span class="math-container">\begin{align} J_{\lambda} = \begin{bmatrix} \lambda &amp; 1 &amp; \cdots &amp; 0 \\ 0 &amp; \lambda &amp; \ddots &amp; \vdots \\ \vdots &amp; \vdots &amp; \ddots &amp; 1 \\ 0 &amp; 0 &amp; \cdots &amp; \lambda \end{bmatrix}_{d \times d}, \qquad J_{\bar \lambda}= \begin{bmatrix} \bar \lambda &amp; 1 &amp; \cdots &amp; 0 \\ 0 &amp; \bar \lambda &amp; \ddots &amp; \vdots \\ \vdots &amp; \vdots &amp; \ddots &amp; 1 \\ 0 &amp; 0 &amp; \cdots &amp; \bar \lambda \end{bmatrix}_{d \times d}, \qquad J_{\bar \lambda} = \overline{J_{\lambda}} \end{align}</span></p> <p>or more compactly,</p> <p><span class="math-container">$$\mathcal{M}_{B_{\bar \lambda}}^{B_{\bar \lambda}}(T|_{V_{\bar \lambda}}) = 
\overline{\mathcal{M}_{B_\lambda}^{B_{\lambda}}(T|_{V_{\lambda}})}$$</span> How can I prove equation <span class="math-container">$(1)$</span>?</p>
Hosein Rahnama
267,844
<p>For any matrix <span class="math-container">$A$</span> with entries in <span class="math-container">$\mathbb{R}$</span> we have</p> <p><span class="math-container">$$(\bar \lambda I - A)^{(m)} = \Big(\overline{\lambda \bar I - \bar A}\Big)^{(m)} = \Big(\overline{\lambda I - A}\Big)^{(m)} = \overline{(\lambda I - A)^{(m)}}.$$</span></p> <p>Consequently, equation <span class="math-container">$(1)$</span> that we want to show is equivalent to</p> <p><span class="math-container">$$\dim \ker (\lambda I - A)^{(m)} = \dim \ker \overline{(\lambda I - A)^{(m)}}, \qquad m = 1,\dots,r$$</span></p> <p>where <span class="math-container">$A:= M_B^B(T)$</span> is the matrix of <span class="math-container">$T$</span> with respect to some basis <span class="math-container">$B$</span> of <span class="math-container">$V$</span>. It is enough to show that for any <span class="math-container">$m \times n$</span> matrix <span class="math-container">$P$</span> with entries in <span class="math-container">$\mathbb{C}$</span>, the following holds.</p> <p><span class="math-container">$$\dim \text{col} P = \dim \text{col} \overline P \tag{$*$}$$</span></p> <p>This means that complex conjugation does not change rank. This also implies that</p> <p><span class="math-container">$$\dim \ker P = \dim \ker \overline P,$$</span></p> <p>if you apply the rank-nullity theorem to <span class="math-container">$P$</span> and <span class="math-container">$\overline P$</span>. A direct proof of <span class="math-container">$(*)$</span> is given <a href="https://math.stackexchange.com/questions/490434/complex-conjugation-does-not-change-rank/4466754#4466754">here</a>.</p>
4,024,871
<p>I'm looking to build a function <span class="math-container">$f:S^2 \to \mathbb R^2$</span> such that <span class="math-container">$f(x)\neq f(−x)$</span> for all <span class="math-container">$x\in S^2$</span>.</p> <p>By Borsuk-Ulam Theorem, this function must be discontinuous. I was trying to build a not too complicated function, but I always encountered a problem.</p> <p>I appreciate any help.</p>
DanielWainfleet
254,665
<p><span class="math-container">$S^2=\{(\cos u,\sin u \cos v,\sin u \sin v):u\in [0,2\pi)\land |v|\le \pi /2\}.$</span></p> <p>Consider the equator <span class="math-container">$E=\{(\cos u, \sin u,0):u\in [0,2\pi)\}.$</span></p> <p>For <span class="math-container">$u\in [0,2\pi)$</span> let <span class="math-container">$g(\cos u, \sin u,0)=u.$</span></p> <p>If <span class="math-container">$\sin u \sin v &gt;0$</span> let <span class="math-container">$g(\cos u,\sin u \cos v,\sin u \sin v)=3\pi.$</span></p> <p>If <span class="math-container">$\sin u \sin v &lt;0$</span> let <span class="math-container">$g(\cos u,\sin u \cos v,\sin u \sin v)=-3\pi.$</span></p> <p>Let <span class="math-container">$f(x)=(g(x),0)$</span> for all <span class="math-container">$x\in S^2.$</span></p> <p><span class="math-container">$f$</span> maps <span class="math-container">$E$</span> bijectively to <span class="math-container">$[0,2\pi)\times \{0\},$</span> and <span class="math-container">$x\in E\iff -x\in E$</span> with <span class="math-container">$x\ne -x,$</span> so <span class="math-container">$f(x)\ne f(-x)$</span> on <span class="math-container">$E.$</span></p> <p>If <span class="math-container">$x\in S^2\setminus E$</span> then <span class="math-container">$\{f(x),f(-x)\}=\{(3\pi,0),(-3\pi,0)\}$</span> so <span class="math-container">$f(x)\ne f(-x).$</span></p>
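<p>The case analysis can be spot-checked at sample points; a sketch (not part of the original answer) where the equator branch recovers the angle $u$ via <code>atan2</code>:</p>

```python
from math import atan2, pi, cos, sin

def g(p):
    x, y, z = p                     # z plays the role of sin(u)sin(v)
    if z > 0:
        return 3 * pi
    if z < 0:
        return -3 * pi
    return atan2(y, x) % (2 * pi)   # on the equator: the angle u in [0, 2*pi)

def f(p):
    return (g(p), 0.0)

samples = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8),
           (1.0, 0.0, 0.0), (cos(2.0), sin(2.0), 0.0)]
for p in samples:
    q = (-p[0], -p[1], -p[2])
    assert f(p) != f(q)
print("ok")
```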
2,288,358
<p>Is there a straightforward way to prove the following inequality: $$|1 + k\big(\exp(it)-1\big)|\leq 1 $$ where $k\in(0,1)$ and $t \in \mathbb{R}$ (correction, see dxiv answer) with $|t| \leq 1$, other than writing the quantity into its real and imaginary parts and checking that they satisfy the required inequalities (which is long and seems inelegant) ? </p>
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p>Put $$y=\frac {xt}{\sqrt {3} }$$</p> <p>the integral becomes</p> <p>$$\frac {1}{\sqrt {3}}\int_0^1 x^3(\int_0^\sqrt{3}\sqrt {1+t^2}dt)dx $$</p> <p>with $t=\sinh(u) $, you can finish.</p>
1,745,180
<p>I want to prove that the polynomial </p> <p>$$ f_p(x) = x^{2p+2} - cx^{2p} - dx^p - 1 $$</p> <p>,where $c&gt;0$ and $d&gt;0$ are real numbers, has distinct roots. Also $p&gt;0$ is an even integer. How can I prove that the polynomial $f_p(x)$ has distinct roots for any $c$,$d$ and $p$.</p> <p>PS: There is a similar topic that <a href="https://math.stackexchange.com/questions/1740673/how-to-prove-that-my-polynomial-has-distinct-roots?lq=1">How to prove that my polynomial has distinct roots?</a></p>
Community
-1
<p>Here is a <strong><em>partial</em></strong> solution that shows there can be no multiple REAL roots. The proof doesn't work for complex numbers though (and I am not sure the result is even true in complex numbers).</p> <p>$f' = (2p+2) x^{2p+1} - 2pcx^{2p-1} - pdx^{p-1}$. If $x$ is a multiple root of $f$, then both $f$ and $f'$ vanish at $x$. But $(2p+2) f - xf' = -2cx^{2p}-(p+2)dx^p-(2p+2) &lt; 0$ for all $x \in \mathbb R$ because $p$ is even and $c$, $d$ and $p$ are all positive.</p>
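<p>The linear combination at the heart of this argument can be verified numerically (a sketch; the helper name is my own):</p>

```python
def check_identity(p, c, d, x):
    # f and f' exactly as in the answer
    f  = x**(2*p + 2) - c*x**(2*p) - d*x**p - 1
    fp = (2*p + 2)*x**(2*p + 1) - 2*p*c*x**(2*p - 1) - p*d*x**(p - 1)
    lhs = (2*p + 2)*f - x*fp
    rhs = -2*c*x**(2*p) - (p + 2)*d*x**p - (2*p + 2)
    assert abs(lhs - rhs) <= 1e-9 * (1 + abs(rhs))  # the algebraic identity
    assert rhs < 0  # so f and f' cannot vanish simultaneously at a real x

for p in (2, 4):                       # p even and positive
    for x in (-1.7, -0.3, 0.0, 0.4, 2.1):
        check_identity(p, 1.0, 2.0, x)
```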
372,401
<p>Let us assume that the boundary of the domain in the definition of the Sobolev spaces $L^2$ and $H_0^1$ is sufficiently smooth.</p> <p>Let $|\cdot |$ denote the norm in $L^2$. Then for a function $v$ in $H_0^1$, the norm is given via $\|v\|^2=|v|^2+|\nabla v|^2$. </p> <p>In general, one cannot bound the $H_0^1$-norm by the $L^2$-norm, as the gradient of a function, cannot be bounded by the function values.</p> <p>What if for $v\in H_0^1$, one has $|v|=0$. Does this imply that $|\nabla v|=0$?</p> <p>I have tried to come to terms with this in 1D. Consider an interval $(a,b)$ and $u\in L^2(a,b)$, with a weak derivative $u'\in L^2(a,b)$ and $u(a) = u(b) = 0$. Then, $u$ is absolutely continuous almost everywhere, and one has $u(x) = \int_a^xu'(s)ds$. Then, $0=|u|^2=\int_a^b(\int_a^xu'(s)ds)^2dx$ which somehow should give that $\int_a^bu'(s)^2ds$ is zero as well...</p>
xyzzyz
23,439
<p>Let <span class="math-container">$C$</span> be that sine curve part, and <span class="math-container">$S$</span> be the vertical segment. Suppose there exists a path <span class="math-container">$\omega: [0, 1] \to A$</span> that connects a point <span class="math-container">$(0, 1) \in S$</span> with <span class="math-container">$(\frac{1}{\pi}, 0) \in C$</span>. Consider the set <span class="math-container">$U = \{t \in [0, 1]: \omega(t) \in C \} = \omega^{-1}(C)$</span>. As the sine curve is open in <span class="math-container">$A$</span>, <span class="math-container">$U$</span> is an open subset of <span class="math-container">$[0, 1]$</span>. Let <span class="math-container">$t_0 = \inf U$</span>. As <span class="math-container">$U$</span> is open and <span class="math-container">$0 \not \in U$</span> (because <span class="math-container">$\omega(0) = (0, 1) \in S$</span>), we have <span class="math-container">$t_0 \not \in U$</span>, so <span class="math-container">$\omega(t_0) \in S$</span>, while there are <span class="math-container">$t_1 &gt; t_0$</span> arbitrarily close to <span class="math-container">$t_0$</span> with <span class="math-container">$\omega(t_1) \in C$</span>. Consider a small open neighbourhood <span class="math-container">$V \subset A$</span> of <span class="math-container">$\omega(t_0)$</span> that has infinitely many disconnected components. By continuity of <span class="math-container">$\omega$</span>, there's some interval <span class="math-container">$(t_0 - \epsilon, t_0 + \epsilon)$</span> such that <span class="math-container">$\omega((t_0 - \epsilon, t_0 + \epsilon)) \subset V$</span>. For some point <span class="math-container">$t \in (t_0, t_0 + \epsilon)$</span> with <span class="math-container">$\omega(t) \in C$</span>, the image <span class="math-container">$\omega(t)$</span> is in a different component of <span class="math-container">$V$</span> than <span class="math-container">$\omega(t_0)$</span>, which is a contradiction, because <span class="math-container">$\omega((t_0 - \epsilon, t_0 + \epsilon))$</span> is connected, as a continuous image of a connected set.</p>
3,443,672
<p>The equation is <span class="math-container">$$\tan\frac{5\pi}{6} \cos x=1-\sin x$$</span> <span class="math-container">$$\sin\frac{5\pi}{6} \cos x=\cos\frac{5\pi}{6}-\cos\frac{5\pi}{6} \sin x$$</span> <span class="math-container">$$\sin\left(\frac{5\pi}{6}+x\right)=\cos \frac{5\pi}{6}$$</span> which looks weird to me. What am I doing wrong?</p>
lab bhattacharjee
33,337
<p>Another way and probably a better one,</p> <p><span class="math-container">$$2+\sqrt3=\csc30+\cot30=\dfrac{1+\cos30}{\sin30}=\cot(?)=\tan(?)$$</span></p> <p>Use <a href="https://en.m.wikipedia.org/wiki/Tangent_half-angle_substitution#The_substitution" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Tangent_half-angle_substitution#The_substitution</a> to find</p> <p><span class="math-container">$$\dfrac{1-\sin x}{\cos x}=\tan(45^\circ-x/2)$$</span></p>
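<p>The half-angle identity in the last line is easy to spot-check numerically (with <span class="math-container">$x$</span> in radians, so <span class="math-container">$45^\circ$</span> becomes <span class="math-container">$\pi/4$</span>):</p>

```python
import math

# (1 - sin x)/cos x should equal tan(pi/4 - x/2) wherever cos x != 0
for x in (-1.2, -0.4, 0.3, 0.9, 1.4):
    lhs = (1 - math.sin(x)) / math.cos(x)
    rhs = math.tan(math.pi / 4 - x / 2)
    assert abs(lhs - rhs) < 1e-12
```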
3,588,605
<p>We have an isosceles <span class="math-container">$\triangle ABC, AC=BC, \measuredangle ACB=40^\circ$</span> and a point <span class="math-container">$M$</span> such that <span class="math-container">$\measuredangle MAB=30^\circ$</span>, <span class="math-container">$\measuredangle MBA=50^\circ$</span>. Find <span class="math-container">$\measuredangle BMC$</span>. <a href="https://i.stack.imgur.com/5qtEf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5qtEf.png" alt="enter image description here" /></a> Starting with <span class="math-container">$\angle ABC=\angle BAC=70^\circ \Rightarrow \angle CBM=20 ^\circ$</span>. Let us construct the equilateral <span class="math-container">$\triangle ABH$</span>. If we look at <span class="math-container">$\triangle ACH, \angle ACH=20^\circ$</span> and <span class="math-container">$\angle CAH=10^\circ$</span>. Can we show <span class="math-container">$\triangle AHC \cong CHB$</span>? Any other ideas?</p>
bjorn93
570,684
<p>Here's a trigonometric approach. Let <span class="math-container">$\angle BCM=\varphi\Rightarrow \angle ACM=40^{\circ}-\varphi$</span>. Apply the law of sines in <span class="math-container">$\triangle AMC$</span> and <span class="math-container">$\triangle BMC$</span>: <span class="math-container">$$\frac{AC}{CM}=\frac{\sin(80^{\circ}-\varphi)}{\sin(40^\circ)} \\ \frac{BC}{CM}=\frac{\sin(20^{\circ}+\varphi)}{\sin(20^\circ)} $$</span> Since <span class="math-container">$AC=BC$</span>, the two ratios with the sines are equal. We have <span class="math-container">$\sin(40^\circ)=2\sin(20^\circ)\cos(20^\circ)$</span>, so <span class="math-container">$$\frac{\sin(80^{\circ}-\varphi)}{2\cos(20^\circ)}=\sin(20^\circ+\varphi) \Leftrightarrow \\ \sin(80^{\circ}-\varphi)=2\sin(20^\circ+\varphi)\cos(20^\circ)$$</span> Then use the sum-product identities: <span class="math-container">$$\sin(80^{\circ}-\varphi)=\sin(\varphi)+\sin(\varphi+40^\circ) \Leftrightarrow \\ \sin(\varphi)=\sin(80^{\circ}-\varphi)-\sin(\varphi+40^\circ) \Leftrightarrow \\ \sin(\varphi)=2\sin(20^\circ-\varphi)\cos(60^\circ) \Leftrightarrow \\ \sin(\varphi)=\sin(20^\circ-\varphi) $$</span> Since <span class="math-container">$0&lt;\varphi&lt;40^{\circ}$</span>, the last equality implies <span class="math-container">$\varphi=20^\circ-\varphi\Leftrightarrow \varphi=10^{\circ}$</span>, and we find <span class="math-container">$\angle BMC=150^{\circ}$</span>.</p>
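<p>The value <span class="math-container">$\angle BMC=150^{\circ}$</span> can also be confirmed by placing the configuration in coordinates and measuring the angle directly (a numerical sketch; the normalization <span class="math-container">$AB=1$</span> and the helper names are my own):</p>

```python
import math

rad = math.radians
A, B = (0.0, 0.0), (1.0, 0.0)

# M: law of sines in triangle ABM with angles 30 (at A), 50 (at B), 100 (at M)
AM = math.sin(rad(50)) / math.sin(rad(100))
M = (AM * math.cos(rad(30)), AM * math.sin(rad(30)))

# C: base angles are 70 degrees, so AC = sin(70)/sin(40) with AB = 1
AC = math.sin(rad(70)) / math.sin(rad(40))
C = (AC * math.cos(rad(70)), AC * math.sin(rad(70)))

def angle_at(Q, P, R):
    """Angle PQR (measured at Q), in degrees."""
    v = (P[0] - Q[0], P[1] - Q[1])
    w = (R[0] - Q[0], R[1] - Q[1])
    c = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.degrees(math.acos(c))

assert abs(angle_at(M, B, C) - 150) < 1e-6
```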
3,588,605
<p>We have an isosceles <span class="math-container">$\triangle ABC, AC=BC, \measuredangle ACB=40^\circ$</span> and a point <span class="math-container">$M$</span> such that <span class="math-container">$\measuredangle MAB=30^\circ$</span>, <span class="math-container">$\measuredangle MBA=50^\circ$</span>. Find <span class="math-container">$\measuredangle BMC$</span>. <a href="https://i.stack.imgur.com/5qtEf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5qtEf.png" alt="enter image description here" /></a> Starting with <span class="math-container">$\angle ABC=\angle BAC=70^\circ \Rightarrow \angle CBM=20 ^\circ$</span>. Let us construct the equilateral <span class="math-container">$\triangle ABH$</span>. If we look at <span class="math-container">$\triangle ACH, \angle ACH=20^\circ$</span> and <span class="math-container">$\angle CAH=10^\circ$</span>. Can we show <span class="math-container">$\triangle AHC \cong CHB$</span>? Any other ideas?</p>
Quanto
686,284
<p><a href="https://i.stack.imgur.com/6THq1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6THq1.png" alt="enter image description here"></a></p> <p>Construct the equilateral triangle <span class="math-container">$AHB$</span>. Given that <span class="math-container">$AC = BC, AH = BH$</span> and the shared <span class="math-container">$CH$</span>, the triangles <span class="math-container">$AHC$</span> and <span class="math-container">$BHC$</span> are congruent. Then, <span class="math-container">$\angle BCH = \dfrac12\angle ACB = 20^\circ$</span>.</p> <p>Since <span class="math-container">$AH = BH$</span> and <span class="math-container">$\angle BAM = \angle HAM = 30^\circ$</span>, the triangles <span class="math-container">$BAM$</span> and <span class="math-container">$HAM$</span> are congruent, which yields <span class="math-container">$\angle HBM = \angle BHM = \angle HBC = 10^\circ$</span> and <span class="math-container">$HM || CB$</span>. </p> <p>Then, since <span class="math-container">$HM || CB$</span>, the points <span class="math-container">$H$</span> and <span class="math-container">$M$</span> lie at the same distance <span class="math-container">$h$</span> from the line <span class="math-container">$BC$</span>. Since <span class="math-container">$\angle BCH = \angle CBM = 20^\circ$</span>, we have <span class="math-container">$CH = BM = h/\sin 20^\circ$</span>.</p> <p>As a result, the triangles <span class="math-container">$CHB$</span> and <span class="math-container">$BMC$</span> are congruent, which leads to,</p> <p><span class="math-container">$$\angle BMC = \angle CHB = 180^\circ - \angle CBH - \angle BCH = 180^\circ - 10^\circ - 20^\circ = 150^\circ$$</span></p>
471,710
<p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
Mostafa
62,686
<p>The reason why we use radians for $\sin x$ (or other trigonometric functions) <strong>in calculus</strong> is explained <a href="http://en.wikipedia.org/w/index.php?title=Trigonometric_functions&amp;oldid=563025229#The_significance_of_radians">here</a> and <a href="http://en.wikipedia.org/w/index.php?title=Radian&amp;oldid=560056445#Advantages_of_measuring_in_radians">here</a> in Wikipedia.</p> <p>Having known that, notice that small angle approximation is just the Taylor expansion of $\sin x$ and $\cos x$ around $x=0$:</p> <p>$$\sin x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}\tag{1}$$ $$ \cos x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}\tag{2}$$</p> <p>If you scale $x$ by some constant $\omega$, then you must replace $x$ with $\omega x$ in $(1)$ and $(2)$. So, working in degrees $($ $\omega=\frac{\pi}{180}$ $)$, the approximation will become: $$\sin \theta\approx \frac{\pi}{180}\theta\tag{$\theta$ in degrees}$$</p>
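<p>The scaling is easy to see numerically: in radians <span class="math-container">$\sin x/x \to 1$</span>, while in degrees the corresponding ratio tends to <span class="math-container">$\pi/180$</span> (a quick sketch):</p>

```python
import math

# In radians, sin(x)/x -> 1 as x -> 0, so sin x ~ x:
for x in (0.5, 0.1, 0.01):
    print(x, math.sin(x) / x)          # approaches 1

# Measured in degrees, sin(theta deg)/theta -> pi/180 ~ 0.01745 instead:
for theta in (30.0, 5.0, 0.5):
    print(theta, math.sin(math.radians(theta)) / theta)
```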
4,524,393
<p>I don't know if this question is best suited to this stack exchange. If it isn't, feel free to migrate it or close it. This question was inspired by a mistake I saw in a math class. I corrected the professor, and he acknowledged it. I then said, &quot;Some students think professors never make mistakes&quot;. And he said, &quot;Yes, and those students are mistaken&quot;. So, what are some famous or at least semi-famous examples of math professors making errors in the classroom? Note, the mistake has to have taken place in a classroom, not in a journal or book or paper.</p>
John Wayland Bales
246,513
<p>Aristotle famously taught that the only regular solids that tiled space were the cube and the tetrahedron when, in fact, only the cube does so.</p> <p>It took 1800 years for the mistake to be corrected.</p> <p><a href="http://www.ams.org/notices/201211/rtx121101540p.pdf" rel="noreferrer">Mysteries in Packing Regular Tetrahedra</a></p> <p>Note: It is generally accepted that most of Aristotle's surviving writings were, in fact, lecture notes. So the tetrahedron error is an error which he would have made in a class lecture.</p>
4,524,393
<p>I don't know if this question is best suited to this stack exchange. If it isn't, feel free to migrate it or close it. This question was inspired by a mistake I saw in a math class. I corrected the professor, and he acknowledged it. I then said, &quot;Some students think professors never make mistakes&quot;. And he said, &quot;Yes, and those students are mistaken&quot;. So, what are some famous or at least semi-famous examples of math professors making errors in the classroom? Note, the mistake has to have taken place in a classroom, not in a journal or book or paper.</p>
usr0192
275,654
<p>Wiles' initial proof of Fermat's Last Theorem contained an error. In a documentary he said he was about to give up on it, but tried one last time to understand what the exact problem was, and was then able to fix it (with the help of Richard Taylor, his former graduate student).</p>
689,433
<p>In class, my professor said that given a Markov chain $\{X_k\}$ it intuitively should be true that </p> <p>$P(X_{k+1} = x_{k+1} \, \mid \, X_0 = a_0, \dots, X_{k-1}= a_{k-1}) = P(X_{k+1} = x_{k+1}\, \mid \,X_{k-1}= a_{k-1})$</p> <p>and asked us to prove it as an exercise. Note that the indices in the conditional only go up to $k-1$. Now, to me, this in fact seems counterintuitive and contradicting the memoryless dependence of Markov chains only on the most recent state. In any case, I have been unable to prove or disprove the statement without imposing any further assumptions on $X_k$. Can anyone resolve this?</p>
Bjørn Kjos-Hanssen
61,578
<p>The idea is that $X_k$, whatever it is, has to be something, so you sum over it: $$ P(X_{k+1} = x_{k+1} \mid X_0 = a_0, \dots, X_{k-1} = a_{k-1}) $$ $$ = \sum_{x_k} P(X_{k+1} = x_{k+1}, X_k=x_k \mid X_0 = a_0, \dots, X_{k-1} = a_{k-1} ) $$ $$ = \sum_{x_k} P(X_{k+1} = x_{k+1}\mid X_k=x_k, X_0 = a_0, \dots, X_{k-1} = a_{k-1} )\cdot P( X_k=x_k \mid X_0 = a_0, \dots, X_{k-1} = a_{k-1} ) $$</p>
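<p>The same marginalization can be confirmed on a concrete chain: for a small random transition matrix, the conditional law of <span class="math-container">$X_3$</span> given <span class="math-container">$(X_0, X_1)$</span> agrees with the two-step transition probabilities, i.e. it depends on <span class="math-container">$X_1$</span> only (a pure-Python sketch; the chain and its initial distribution are arbitrary choices):</p>

```python
import random

random.seed(0)
n = 3
P = [[random.random() for _ in range(n)] for _ in range(n)]
for row in P:                      # make each row a probability distribution
    s = sum(row)
    for j in range(n):
        row[j] /= s

pi0 = [1 / n] * n                  # initial distribution of X_0

# joint[a][b][c][d] = P(X0=a, X1=b, X2=c, X3=d) for the Markov chain
joint = [[[[pi0[a] * P[a][b] * P[b][c] * P[c][d]
            for d in range(n)] for c in range(n)]
          for b in range(n)] for a in range(n)]

# two-step transition matrix P^2
P2 = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

for a in range(n):
    for b in range(n):
        tot = sum(joint[a][b][c][d] for c in range(n) for d in range(n))
        for d in range(n):
            # P(X3=d | X0=a, X1=b): sum out X2, then normalize
            cond = sum(joint[a][b][c][d] for c in range(n)) / tot
            assert abs(cond - P2[b][d]) < 1e-12   # depends on X1 only
```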
689,433
<p>In class, my professor said that given a Markov chain $\{X_k\}$ it intuitively should be true that </p> <p>$P(X_{k+1} = x_{k+1} \, \mid \, X_0 = a_0, \dots, X_{k-1}= a_{k-1}) = P(X_{k+1} = x_{k+1}\, \mid \,X_{k-1}= a_{k-1})$</p> <p>and asked us to prove it as an exercise. Note that the indices in the conditional only go up to $k-1$. Now, to me, this in fact seems counterintuitive and contradicting the memoryless dependence of Markov chains only on the most recent state. In any case, I have been unable to prove or disprove the statement without imposing any further assumptions on $X_k$. Can anyone resolve this?</p>
Did
6,179
<blockquote> <p>The "intuitive" statement is actually true for any Markov chain. More generally, for every $0\leqslant i\lt j\leqslant k$, $$ P(X_{j:k} = x_{j:k} \mid X_{0:i} = x_{0:i})=P(X_{j:k} = x_{j:k} \mid X_{i} = x_{i}). $$</p> </blockquote> <p>Let $(\ast)=P(X_{k+1} = x_{k+1} \mid X_{0:k-1} = x_{0:k-1})$. Then $$(\ast) = \displaystyle\sum_{x}P(X_{k+1} = x_{k+1},X_k=x \mid X_{0:k-1} = x_{0:k-1})$$ by the decomposition of the event $[X_{k+1} = x_{k+1}]$ along the partition $\bigcup\limits_x[X_k=x]=\Omega$. For every $x$, $$P(X_{k+1} = x_{k+1},X_k=x \mid X_{0:k-1}=x_{0:k-1}) = P(X_{k+1} = x_{k+1},X_k=x \mid X_{k-1}=x_{k-1})$$ by the Markov property at time $k-1$. Hence $$(\ast) = \displaystyle\sum_{x}P(X_{k+1} = x_{k+1},X_k=x \mid X_{k-1}=x_{k-1})= P(X_{k+1} = x_{k+1} \mid X_{k-1}=x_{k-1})$$ by the previous display and the decomposition of the event $[X_{k+1} = x_{k+1}]$ along the partition $\bigcup\limits_x[X_k=x]=\Omega$.</p>
2,710,656
<p>I now there is a continuous surjective map from $\Bbb{R}\to\Bbb{R}^2$ thanks to Peano curve.</p> <p>My question is simple: does there exist a $C^1$ surjective map from $\Bbb{R}\to\Bbb{R}^2$ ? I think that the answer is no and I have seen this long time ago but I was too young to understand the proof. Unfortunately I cannot think of a proof now.</p> <p>Any idea ?</p>
DanielWainfleet
254,665
<p>Notation: <span class="math-container">$f''S=\{f(x):x\in S\}$</span> when <span class="math-container">$S$</span> is a subset of the domain of <span class="math-container">$f.$</span></p> <p>Let <span class="math-container">$f:\Bbb R\to \Bbb R^2$</span> be a continuous surjection. For <span class="math-container">$n\in \Bbb Z,$</span> each <span class="math-container">$[n,n+1]$</span> is compact so <span class="math-container">$f''[n,n+1]$</span> is compact and therefore closed in <span class="math-container">$\Bbb R^2.$</span> Since <span class="math-container">$\Bbb R^2=\cup_{n\in \Bbb Z}f''[n,n+1],$</span> the Baire category theorem implies that some <span class="math-container">$f''[n,n+1]$</span> has non-empty interior. So for some <span class="math-container">$n$</span> we have <span class="math-container">$f''[n,n+1]\supset [a,a+b]\times [a',a'+b']$</span> with positive <span class="math-container">$b,b'$</span>. </p> <p>To simplify the notation we will assume WLOG (by a change of scale and a change of variable) that <span class="math-container">$f''[0,1]\supset [0,1]^2.$</span></p> <p>For <span class="math-container">$1&lt;n\in \Bbb N,$</span> consider <span class="math-container">$[0,1]^2$</span> as an <span class="math-container">$n\times n$</span> checkerboard of closed sub-squares with sides of length <span class="math-container">$1/n.$</span> Let <span class="math-container">$C_n$</span> be the set of centers of these sub-squares. 
(The point is that <span class="math-container">$C_n$</span> has <span class="math-container">$n^2$</span> members and if <span class="math-container">$(u,v),(u',v')$</span> are distinct members of <span class="math-container">$C_n$</span> then <span class="math-container">$|u-u'|\geq 1/n$</span> or <span class="math-container">$|v-v'|\geq 1/n$</span>...or both).</p> <p>Let <span class="math-container">$B_n \subset [0,1]$</span> where <span class="math-container">$B_n$</span> has <span class="math-container">$n^2$</span> members and <span class="math-container">$f''B_n=C_n.$</span> Choose distinct <span class="math-container">$x_n,x'_n\in B_n$</span> such that <span class="math-container">$|x_n-x'_n|\leq (n^2-1)^{-1}.$</span> Let <span class="math-container">$f(x_n)=(u_n,v_n)$</span> and <span class="math-container">$f(x'_n)=(u'_n,v'_n).$</span> We have <span class="math-container">$|u_n-u'_n|\geq 1/n$</span> or <span class="math-container">$|v_n-v'_n|\geq 1/n$</span> (or both). </p> <p>Let <span class="math-container">$A$</span> be an infinite subset of <span class="math-container">$\Bbb N$</span> \ <span class="math-container">$\{1\}$</span> such that <span class="math-container">$\forall n\in A\;(|u_n-u'_n|\geq 1/n)$</span> or <span class="math-container">$\forall n\in A\; (|v_n-v'_n|\geq 1/n).$</span> </p> <p>WLOG assume <span class="math-container">$\forall n\in A\; (|u_n-u'_n|\geq 1/n).$</span></p> <p>Suppose <span class="math-container">$f$</span> were continuously differentiable. Then with <span class="math-container">$f(x)=(f_1(x),f_2(x)),$</span> the function <span class="math-container">$f'_1(x)$</span> is continuous. Now <span class="math-container">$\frac {f_1(x_n)-f_1(x'_n)}{x_n-x'_n}= f'_1(y_n)$</span> for some <span class="math-container">$y_n$</span> between <span class="math-container">$x_n$</span> and <span class="math-container">$x'_n$</span> by the MVT. 
</p> <p>For all <span class="math-container">$n\in A$</span> we have <span class="math-container">$|f'_1(y_n)|=$</span> <span class="math-container">$\frac {|u_n-u'_n|}{|x_n-x'_n|}\geq$</span> <span class="math-container">$ \frac {1/n}{(n^2-1)^{-1}}=n-n^{-1}.$</span> </p> <p>But <span class="math-container">$A$</span> is infinite; therefore <span class="math-container">$\{|f'_1(x)|: x\in [0,1]\}$</span> is unbounded above, which is impossible if <span class="math-container">$f'_1$</span> is continuous.</p>
440,528
<p>My question is about group theory:</p> <blockquote> <p>How many subgroups does a non-cyclic group contain whose order is 25?</p> </blockquote> <p>How can i answer that question?</p> <p>Can you generalize the answer?</p> <p>Thanks for your help.</p>
amWhy
9,003
<p>There are three possibilities for the order of any subgroup $H$ of a group $G$ of order $25$:</p> <ul> <li>$|H| = 1 \iff H = \{e\}$</li> <li>$|H| = 5,$ since $5\mid 25$.</li> <li>$|H| = 25$, if $H = G$.</li> </ul> <p>We're given that $G$ is non-cyclic, so the order of any element $x \neq e$ must be $5$, <strong>else</strong>, if $25$, it would generate the group, and hence the group would be cyclic. (Contradiction). I.e., $x \neq e \; \implies \;|\langle x \rangle| = 5$, and for each distinct subgroup $\langle x_i\rangle = \{e, x_i, x_i^2, x_i^3, x_i^4\}$, the elements $x_i, x_i^2, x_i^3, x_i^4$ are each of order $5$, and any one of them generates the <strong>same</strong> subgroup as does $x_i$.</p> <p>Since $G$ is non cyclic, and $|G| = 25 = 5^2,$ where $5$ is prime, we know that $G \cong \mathbb Z_5 \times \mathbb Z_5$</p> <p>This gives the count: distinct subgroups of order $5$ intersect trivially, so the $24$ non-identity elements split into sets of $4$, giving $24/4 = 6$ subgroups of order $5$. Together with the trivial subgroup and $G$ itself, $G$ has exactly $8$ subgroups.</p>
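<p>Since <span class="math-container">$G \cong \mathbb Z_5 \times \mathbb Z_5$</span>, the cyclic subgroups can be enumerated by brute force (a Python sketch):</p>

```python
from itertools import product

# elements of Z_5 x Z_5
G = list(product(range(5), repeat=2))

def generated(x):
    """Cyclic subgroup of Z_5 x Z_5 generated by x."""
    return frozenset(((k * x[0]) % 5, (k * x[1]) % 5) for k in range(5))

cyclic = {generated(x) for x in G}      # the trivial subgroup plus those of order 5
order5 = [H for H in cyclic if len(H) == 5]
assert len(order5) == 6                 # (25 - 1)/(5 - 1) subgroups of order 5
# together with {e} and G itself: 6 + 2 = 8 subgroups in total
```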
35,375
<p>Good morning. Today I read that "number theory is nothing but the study of $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$" (here: <a href="http://www.math.uconn.edu/~alozano/elliptic/finding%20points.pdf" rel="nofollow">http://www.math.uconn.edu/~alozano/elliptic/finding%20points.pdf</a>). Can anyone give a very naive layman's explanation of what this actually means?</p> <p>Furthermore, I have a doubt: $\bar {\mathbb{Q}}$ is the algebraic closure of $\mathbb{Q}$, and what confuses me is that the field of rational numbers $\mathbb{Q}$ is not algebraically closed, since for any $a_{1},a_{2},\dotsc,a_{n}\in \mathbb{Q}$ the polynomial $(x-a_{1})(x-a_{2})\cdots(x-a_{n})+1$ has no zero in $\mathbb{Q}$.</p> <p>Then why are we considering the field extension $\bar {\mathbb{Q}}/\mathbb{Q}$ when $\mathbb{Q}$ is not algebraically closed? Won't it contradict the definition of algebraic closure?</p> <p>But I am not getting the answer I was looking for. I want to know what is going on behind $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$: what is the thing we get if we take $\mathbb{\bar{Q}}/\mathbb{Q}$, and what does taking $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$ give someone?</p> <p>Thank you</p>
Gerry Myerson
8,269
<p>You might want to let us know where you read that quote, the context could be helpful in determining the author's intentions. I'm guessing that it wasn't meant to be taken too seriously, and I'd recommend you take it to mean only that there are a few things in Number Theory that you can understand better if you know something about that Galois group. </p> <p>If you want to look into it further, a good keyphrase is "absolute Galois group." </p>
3,480,857
<p>For <span class="math-container">$x \in \mathbb{R}^n$</span> we define <span class="math-container">$\Vert x \Vert _\infty := \sup_{k = 1,..,n} |x_k|$</span> (meaning that <span class="math-container">$\Vert x \Vert _\infty $</span> is the biggest component of <span class="math-container">$x$</span> according to amount)</p> <p>How can one prove that </p> <p><span class="math-container">$$\Vert x\Vert_\infty \leq \Vert x \Vert \leq \sqrt{n}\Vert x\Vert_\infty$$</span></p> <p>I have seen it on <a href="https://en.wikipedia.org/wiki/Norm_%28mathematics%29#Properties" rel="nofollow noreferrer">Wikipedia</a>, but there's no proof to it.</p> <p>I know that using Cauchy–Schwarz inequality we get for all <span class="math-container">$x\in\mathbb{R}^n$</span> <span class="math-container">$$ \Vert x\Vert_1= \sum\limits_{i=1}^n|x_i|= \sum\limits_{i=1}^n|x_i|\cdot 1\leq \left(\sum\limits_{i=1}^n|x_i|^2\right)^{1/2}\left(\sum\limits_{i=1}^n 1^2\right)^{1/2}= \sqrt{n}\Vert x\Vert_2 $$</span></p> <p>but that doesn't help me.</p>
N. S.
9,176
<p><span class="math-container">$$\| x \|_\infty = \sup_{k = 1,..,n} |x_k| =\sqrt{ \sup_{k = 1,..,n} |x_k|^2 } \leq \sqrt{ \sum_{k = 1}^n |x_k|^2 }$$</span></p> <p>For the second inequality <span class="math-container">$$\sqrt{ \sum_{j = 1}^n |x_j|^2 } \leq \sqrt{ \sum_{j = 1}^n \sup_{k = 1,..,n} |x_k|^2 } = \sqrt{n \sup_{k = 1,..,n} |x_k|^2} = \sqrt{n}\,\| x \|_\infty$$</span></p>
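<p>Both bounds can be exercised numerically on random vectors (a quick sketch):</p>

```python
import math
import random

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 10)
    x = [random.uniform(-5, 5) for _ in range(n)]
    norm_inf = max(abs(t) for t in x)
    norm_2 = math.sqrt(sum(t * t for t in x))
    # ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf
    assert norm_inf <= norm_2 + 1e-12
    assert norm_2 <= math.sqrt(n) * norm_inf + 1e-12
```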
1,955,505
<blockquote> <p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p> </blockquote> <p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
dxiv
291,201
<p>Hint: &nbsp;&nbsp;for $|x| \lt 1$</p> <p>$$f(x) = \sum_{n=0}^{\infty} x^{n+2} = \frac{x^2}{1-x} = -1 -x + \frac{1}{1-x}$$</p> <p>Now consider $f''(\frac{1}{4})$.</p>
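<p>For a quick numerical sanity check of the hint (a sketch; the closed form <code>2/(1-x)**3</code> is just <span class="math-container">$f''(x)$</span> computed from the expression above):</p>

```python
# Partial sum of sum_{n>=0} (n^2 + 3n + 2)/4^n, compared with 128/27.
s = sum((n * n + 3 * n + 2) / 4 ** n for n in range(60))

# f(x) = -1 - x + 1/(1-x) gives f''(x) = 2/(1-x)^3; evaluate at x = 1/4.
closed = 2 / (1 - 0.25) ** 3

assert abs(s - 128 / 27) < 1e-12
assert abs(closed - 128 / 27) < 1e-12
```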
3,113,850
<blockquote> <p><span class="math-container">$$f(x)=\sum_{n=1}^{\infty}{\frac{x^{n-1}}{n}},$$</span></p> <p>Prove that <span class="math-container">$f(x)+f(1-x)+\log(x)\log(1-x)=\frac{{\pi}^2}{6}$</span></p> </blockquote> <p>In my mind though, I think that this is related to the Basel problem <span class="math-container">$\left(\sum\limits_{n=1}^{\infty}{\frac{1}{n^2}}\right)$</span>, but I don't know how to solve this.</p> <p>Any help would be greatly appreciated :-)</p> <p>Edit:</p> <p>My attempt:</p> <p>I cannot use LaTeX expertly, so I post an image. The circled part:<a href="https://i.stack.imgur.com/gFDdf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gFDdf.jpg" alt="enter image description here " /></a></p>
Invisible
721,644
<p>I'm not sure if the proof is concise enough, but here is my attempt. Every remark is welcome! <span class="math-container">$\text{Laplace formulae}$</span></p> <p>Let <span class="math-container">$A\in M_n(\mathbb F).$</span></p> <blockquote> <p><span class="math-container">$$A_{ij}:=\sum_{\substack{p\in S_n\\p(i)=j}}(-1)^{I(p)}a_{1p(1)}\cdot\ldots\cdot a_{i-1,p(i-1)}a_{i+1,p(i+1)}\cdot\ldots\cdot a_{np(n)}$$</span></p> </blockquote> <p>Note (reminder): there is no <span class="math-container">$a_{ip(i)}$</span>, that's why <span class="math-container">$A_{ij}$</span> is called <span class="math-container">$\text{the algebraic complement}$</span> of the element <span class="math-container">$a_{ij}.$</span> Now we can simplify the initial formula: <span class="math-container">$$\det(A)=\sum_{j=1}^n a_{ij}A_{ij}\leftarrow\text{expansion along the }i\text{-th row}$$</span> As colleagues have already mentioned:<span class="math-container">$$ A_{ij}=0,\forall\; i,j\in\{1,\ldots,n\}\implies\det(A)=0$$</span></p> <p>We can also write: <span class="math-container">$$A_{ij}=(-1)^{i+j}\Delta_{ij}$$</span> More concrete example: <span class="math-container">$$B=\begin{bmatrix}A_{11}&amp;A_{12}&amp;\ldots &amp;A_{1,n-1}&amp;A_{1n}\\A_{21}&amp;A_{22}&amp;\ldots&amp;A_{2,n-1}&amp;A_{2n}\\\vdots&amp;\vdots&amp;\ddots&amp;\vdots&amp;\vdots\\A_{n1}&amp;A_{n2}&amp;\ldots&amp;A_{n,n-1}&amp;A_{nn}\end{bmatrix}$$</span> If the matrix <span class="math-container">$B$</span> above were to represent what happens when we calculate <span class="math-container">$\det(A)$</span> with all the algebraic complements: <span class="math-container">$A_{ij}=0\;\forall\;i,j\in\{1,\ldots,n\}$</span>, it would be <span class="math-container">$\text{the null-matrix}$</span>. 
</p> <p>Here comes the initial matrix: </p> <blockquote> <p><span class="math-container">$$A=(a_{ij}),\;a_{ij}=x,\;\forall\;i,j\in\{1,\ldots,n\},x\in\mathbb F$$</span></p> </blockquote> <p><strong>This is the example @Arthur suggested.</strong></p> <p>We can also prove @Arthur's argument that tells us <span class="math-container">$\text{the matrix must be singular}.$</span> I can't think of cases other than Arthur's, but even if they were the only ones, they would be enough for the statement <span class="math-container">$A=0_n$</span> to fail.</p>
4,627,133
<p>We have a <span class="math-container">$(3\times 3)$</span> matrix <span class="math-container">$A$</span> with real entries. We know that <span class="math-container">$A$</span> is orthogonal and <span class="math-container">$\operatorname{trace}(A)&gt;1$</span>. Show that the matrix <span class="math-container">$A+I_{3}$</span> is invertible.</p> <p>We can see that <span class="math-container">$\det(A)=1$</span> or <span class="math-container">$\det(A)=-1$</span>. We can easily find <span class="math-container">$\operatorname{trace}(A^{*})=\det(A)\operatorname{trace}(A)$</span>. Suppose <span class="math-container">$\det(A+I_{3})=0$</span>. If we take the characteristic polynomial of <span class="math-container">$A$</span> <span class="math-container">$$ P(x)=-\det(A-xI_{3})=-x^{3}+\operatorname{trace}(A)x^{2}-\operatorname{trace}(A^*)x+\det(A) $$</span> we can find that <span class="math-container">$P(-1)=0$</span> so <span class="math-container">$1+\operatorname{trace}(A)+\det(A) \operatorname{trace}(A)+\det(A)=0$</span>. If <span class="math-container">$\det(A)=1$</span> we easily get a contradiction, but in the case <span class="math-container">$\det(A)=-1$</span> the equation holds identically, so no contradiction arises. I tried using eigenvalues to reach a contradiction with the fact that <span class="math-container">$ \operatorname{trace}(A)&gt;1$</span>, but got nowhere.</p>
user1551
1,551
<p>Without using eigenvalues: for any unit vector <span class="math-container">$u$</span>, complete it to an orthonormal basis <span class="math-container">$\{u,v,w\}$</span> of <span class="math-container">$\mathbb R^3$</span>. Then <span class="math-container">$$ \begin{aligned} 1&amp;&lt;\operatorname{tr}(A)\\ &amp;=\langle u,Au\rangle+\langle v,Av\rangle+\langle w,Aw\rangle\\ &amp;\le\langle u,Au\rangle+\|v\|\|Av\|+\|w\|\|Aw\|\\ &amp;\le\langle u,Au\rangle+2\\ &amp;=\langle u,(A+I)u\rangle+1.\\ \end{aligned} $$</span> Therefore <span class="math-container">$\langle u,(A+I)u\rangle&gt;0$</span> and in turn, <span class="math-container">$(A+I)u\ne0$</span>. Since <span class="math-container">$u$</span> is an arbitrary unit vector, <span class="math-container">$A+I$</span> must be nonsingular.</p>
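<p>A numerical illustration (not a proof): sampling random <span class="math-container">$3\times 3$</span> rotation matrices, which are orthogonal, and checking that <span class="math-container">$\det(A+I)\ne 0$</span> whenever <span class="math-container">$\operatorname{trace}(A)&gt;1$</span>. The helper names are my own; Rodrigues' formula is used only to manufacture orthogonal matrices:</p>

```python
import math
import random

def rotation(axis, theta):
    """3x3 rotation matrix about `axis` by `theta` (Rodrigues' formula)."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def det3(A):
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

random.seed(0)
for _ in range(500):
    axis = [random.uniform(-1, 1) for _ in range(3)]
    A = rotation(axis, random.uniform(0, math.pi))
    tr = A[0][0] + A[1][1] + A[2][2]
    if tr > 1:
        IA = [[A[i][j] + (i == j) for j in range(3)] for i in range(3)]
        assert abs(det3(IA)) > 1e-6   # A + I invertible whenever trace(A) > 1
```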
65,392
<p>Hello,</p> <p>I was wondering if anyone is aware of an elementary proof of the claim in the title, assuming the existence of nilpotent injectors in soluble groups. By elementary I mean a proof that does not involve any recourse to Fitting classes etc.</p> <p>Thanks in advance. </p>
Geoff Robinson
14,450
<p>I take a nilpotent injector of a finite solvable group $G$ to be a nilpotent subgroup $M$ of $G$ such that $M \cap N$ is a maximal nilpotent normal subgroup of $N$ whenever $N$ is subnormal in $G$. Assuming existence of $M$, I think uniqueness up to conjugacy follows inductively. Notice that $M \cap H$ is a nilpotent injector of $H$ whenever $H$ is normal in $G.$</p> <p>We may suppose that $Z(G) = 1$. Now let $p$ be a prime divisor of $|F(G)|$. Since $F(G) \leq M$, we have $O_{p'}(M) \leq C_{G}(O_{p}(G)).$ Thus $O_{p'}(M) = O_{p'}(L)$, where $L = M \cap C_{G}(O_{p}(G))$ is a nilpotent injector of $C_{G}(O_{p}(G))$. For notice that $O_{p'}(L) \lhd M$ so that $O_{p'}(L) \leq O_{p'}(M)$, while $O_{p'}(M) \leq M \cap C_{G}(O_{p}(G)) =L$ and $O_{p'}(M) \leq O_{p'}(L)$.</p> <p>Now $L$ is unique up to conjugacy within $C_{G}(O_{p}(G))$, so certainly unique up to conjugacy in $G$. Hence $O_{p'}(M)= O_{p'}(L)$ is unique up to conjugacy within $G$. By maximality as a nilpotent subgroup, $ M = P \times O_{p'}(M)$, where $P$ is a Sylow $p$-subgroup of $C_{G}(O_{p'}(M)).$ Hence we see that $M$ is unique up to conjugacy in $G$.</p>
65,392
<p>Hello,</p> <p>I was wondering if anyone is aware of an elementary proof of the claim in the title, assuming the existence of nilpotent injectors in soluble groups. By elementary I mean a proof that does not involve any recourse to Fitting classes etc.</p> <p>Thanks in advance. </p>
Tom Morris
15,601
<p>Another good reference here is "Injectors and Normal Subgroups of Finite Groups" by Avinoam Mann. Israel Journal of Mathematics, Vol 9.</p>
507,867
<p>I saw an inequality for $n\times n$ matrices. I was wondering if the inequality is true or not?</p> <p>Does $\det(A)&gt;0$ imply $\det(I+A)&gt;0$?</p>
mrf
19,440
<p>Let $$ A = \begin{bmatrix} -3 &amp; 0 \\ 0 &amp; -1/2 \end{bmatrix}.$$</p>
2,459,651
<p>What additional properties must an operation have besides commutativity so that commutativity along with other properties implies associativity?</p> <p>Where can I read about such structures?</p>
Cameron Buie
28,900
<p>Suppose we have an operation $\star$ on a set $S$ such that for all $x,y,z\in S,$ we have $$x\star(y\star z)=(x\star z)\star y.$$ If $\star$ is also commutative, then $\star$ is associative.</p> <p>The above is borrowed from Axiom 4 of <a href="https://en.wikipedia.org/wiki/Tarski%27s_axiomatization_of_the_reals" rel="nofollow noreferrer">Tarski's axiomatization of the real numbers</a>. Axioms $4$ and $5$ together imply (and are implied by) the axioms of an abelian group: associativity, identity, inverses, and commutativity.</p>
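<p>For concreteness, here is one way to write out that implication (my own verification, not quoted from Tarski's presentation): for any $x,y,z\in S$,</p>

```latex
\begin{aligned}
(x\star y)\star z &= z\star(x\star y)  && \text{commutativity}\\
                  &= (z\star y)\star x && \text{the axiom, applied with } z,x,y \text{ in place of } x,y,z\\
                  &= x\star(z\star y)  && \text{commutativity}\\
                  &= x\star(y\star z)  && \text{commutativity inside the parentheses.}
\end{aligned}
```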
87,319
<p>How might I show that there's no metric on the space of measurable functions on $([0,1],\mathrm{Lebesgue})$ such that a sequence of functions converges a.e. iff the sequence converges in the metric?</p>
Dirk
3,148
<p>There is no topology on $L^1([0,1])$ which describes the notion of "convergence almost everywhere".</p> <p>Well, I just noticed that this has been answered a few seconds ago. Anyway, I'd like to point out the nice note "Convergence Almost Everywhere is Not Topological" which can be read <a href="http://ordman.net/MathResearch/1966A.pdf" rel="nofollow">here</a>...</p>
87,319
<p>How might I show that there's no metric on the space of measurable functions on $([0,1],\mathrm{Lebesgue})$ such that a sequence of functions converges a.e. iff the sequence converges in the metric?</p>
David Mitra
18,986
<p>Just to add to the other answers (since it was not explicitly stated): there is a sequence in $L_1[0,1]$ that converges in measure but not pointwise a.e. </p> <p>For example: </p> <p>$f_1(x)=1$, </p> <p>$f_2(x)= \chi_{[0,1/2]}$</p> <p>$f_3(x)=\chi_{[1/2,1]}$</p> <p>$f_4(x)=\chi_{[0,1/4]} $</p> <p>$f_5(x)=\chi_{[1/4,1/2]} $</p> <p>$f_6(x)=\chi_{[1/2,3/4]} $</p> <p>$f_7(x)=\chi_{[3/4,1]} $</p> <p>$f_8(x)=\chi_{[0,1/8]} $</p> <p>$\phantom{f_8(x)}\ \ \vdots$</p> <p>Where $\chi_A$ is the indicator function on $A$.</p> <p>$\{f_n\}$ converges in measure to $0$ but does not converge pointwise a.e.</p>
1,818,281
<p>Suppose I have a pair of 2 non-linear differential equations of the form: $$\begin{matrix} \frac{dy}{dt}=f(x,y)\\ \frac{dx}{dt}=g(x,y) \end{matrix}$$ Equilibrium points are where the trajectory ends up on, when plotted on the $x-y$ plane.</p> <p>What are the qualitative differences between a 'stable node' and a 'stable spiral'?</p> <p>Are they both stable?</p>
okrzysik
246,433
<p>Yes, both stable nodes and stable spirals are stable equilibria (as indicated by their names), and they are qualitatively very similar to one another. </p> <p>The only real difference between the two is that solution trajectories in the $x-y$ phase plane for a stable spiral tend to spiral around the equilibrium before they are "sucked" into it. This is analogous to water spiralling around a sink hole as it flows into it.</p> <p>On the other hand a stable node just "sucks" solution trajectories directly into it - you could think of this as water flowing into a sink but without any (or relatively little) rotation.</p> <p>To illustrate the point I've added phase plane portraits below showing examples of both equilibrium types. These plots come from solving two simple linear ODE systems. Each system of ODEs has an equilibrium at $(x,y)=(0,0)$, and as you can see the solution trajectories in the LHS plot spiral into their equilibrium point, whilst in the RHS plot the trajectories flow directly into their equilibrium point.</p> <p><a href="https://i.stack.imgur.com/TpBRQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TpBRQ.jpg" alt="enter image description here"></a></p>
1,031,304
<p>Let $\varphi$ be a bounded, differentiable function on $\mathbb{R}$ such that $\varphi'$ is bounded and uniformly continuous on $\mathbb{R}$.</p> <p>We want to prove that $\displaystyle\frac{\varphi(x+h)-\varphi(x)}{h}\to\varphi'(x)$ uniformly as $h\to 0$</p> <p>I can prove $\varphi$ is uniformly continuous, but I don't know what to do with it.</p> <p>Any hints?</p>
copper.hat
27,978
<p>You have $\phi(x+h)-\phi(x)-\phi'(x)h = \int_0^1 (\phi'(x+th)-\phi'(x))h\,dt$.</p> <p>Let $\epsilon&gt;0$ and choose $\delta$ such that if $\|\xi\| &lt; \delta$ then $\|\phi'(x+\xi)-\phi'(x) \| &lt; \epsilon$ (by uniform continuity, $\delta$ does not depend on $x$).</p> <p>Then for $\|h\| &lt; \delta$, $\|\phi(x+h)-\phi(x)-\phi'(x)h \| \le \epsilon \|h\|$.</p> <p>(There is no need to assume that $\phi,\phi'$ are bounded.)</p>
283,473
<p>I'd like to be able to construct polynomials $p$ whose graphs look like this:</p> <p><img src="https://i.stack.imgur.com/8Rk3a.jpg" alt="enter image description here"></p> <p>We can assume that the interval of interest is $[-1, 1]$. The requirements on $p$ are:</p> <p>(1) Equi-oscillation (or roughly equal, anyway) between two extremes. A variation of 10% or so in the values of the extrema would be OK.</p> <p>(2) Zero values and derivatives at the ends of the interval, i.e. $p(-1) = p(1) =p'(-1) = p'(1) = 0$</p> <p>I want to do this for degrees up to around 30 or so. Just even degrees would be OK.</p> <p>If it helps, these things are a bit like Chebyshev polynomials (but different at the ends).</p> <p>The one in the picture has equation $0.00086992073067855669451 - 0.056750328789339152999 t^2 + 0.60002383910750621904 t^4 - 2.3217878459074773378 t^6 + 4.0661558859963998471 t^8 - 3.288511471137768132 t^{10} + t^{12}$</p> <p>I got this through brute-force numerical methods (solving a system of non-linear equations, after first doing a lot of work to find good starting points for iteration). I'm looking for an approach that's more intelligent and easier to implement in code.</p> <p>Here is one idea that might work. Suppose we want a polynomial of degree $n$. Start with the Chebyshev polynomial $T_{n-2}(x)$. Let $Q(x) = T_{n-2}(sx)$, where the scale factor $s$ is chosen so that $Q(-1) = Q(1) = 0$. Then let $R(x) = (1-x^2)Q(x)$. This satisfies all the requirements except that its oscillations are too uneven -- they're very small near $\pm1$ and too large near zero. Redistribute the roots of $R$ a bit (somehow??) to level out the oscillations. </p> <p><strong>Comments on answers</strong></p> <p>Using the technique suggested by achille hui in an answer below, we can very easily construct a polynomial with the desired shape. 
Here is one:</p> <p><img src="https://i.stack.imgur.com/b0Es2.jpg" alt="achille hui solution"></p> <p>The only problem is that I was hoping for a polynomial of degree 12, and this one has degree 30.</p> <p>Also, I was expecting the solution to grow monotonically outside the interval $[-1,1]$, and this one doesn't, as you can see here:</p> <p><img src="https://i.stack.imgur.com/eIKkN.jpg" alt="behaviour beyond unit interval"></p>
fedja
12,992
<p>Here the Newton version of the Asymptote code that does the job. Now it runs really fast in the range requested. The polynomial is encoded by its roots (stored in the x array). Thanks to the kind soul who had the patience to type 4 spaces in the beginning of each line :). </p> <p><strong>Edit.</strong> Since bubba wanted to know where the main iteration step came from, here is a short explanation. First of all, as I said, it is convenient to work with the sum $f(x)=\sum_k\log|x-x_k|$ instead of the product $\prod_k (x-x_k)$ (for both numerical and analytic reasons). This sum dives to $-\infty$ at each root $x_k$ and achieves a local maxima $y_{k-1}$ and $y_k$ at some points $z_{k-1}\in(x_{k-1},x_k)$ and $z_k\in(x_k,x_{k+1})$. Now, let's make a leap of faith and assume that $y_{k-1}$ and $y_k$ are not very different and also that the $z$-points are approximately in the middle of the corresponding intervals (the first assumption is bound to happen sooner or later if our algorithm works at all and the second one is actually poorly justified by itself but more reasonable than any other wild guess about the location). We want to shift the root $x_k$ so that the maxima $y_{k-1}$ and $y_k$ will become equal. Linearizing and assuming that the intervals $(x_{k-1},x_k)$ and $(x_k,x_{k+1})$ are of approximately equal length, we see that the partial derivatives of $f(z_{k-1})$ and $f(z_k)$ with respect to $x_k$ are about $\frac{1}{4(x_{k+1}-x_{k-1})}$ and $-\frac{1}{4(x_{k+1}-x_{k-1})}$ respectively, so, solving the corresponding linear equation, we get the correction $p=(y_k-y_{k-1})(x_{k+1}-x_{k-1})/8$ to apply to $x_k$. That explains the main formula. Unfortunately, if we just use it literally, a lot of crazy things may happen up to the loss of the order of roots, so we should make sure that our shifts are not too large at the early stages when the disbalances are quite large. 
The second line is a mere truncation of the shifts to the size which, we believe, will constitute only a fraction of the distance between roots (which initially is $2/n$), so that no rearrangement of roots or screwing up of the Newton iteration scheme will occur. The exact choice of the constant $0.3$ is empirical (i.e., I just checked that it worked in the requested range and, say, $0.5$ did not). The formal justification of the algorithm will require quite a lot of careful estimates (you can easily notice that some of the assumptions I made are on the border of wishful thinking) but, since all we need is one sequence of polynomials of not too high degree, I thought it would be worth trying without worrying too much about formal proofs because the final result is verifiable and once you get it, who cares what steps led to it? I know, this is a dismal thing to say for a mathematician, but, since I'm wearing a programmer's hat here, I decided I could try to get away with it :).</p> <pre><code>int n=51; //The number of intermediate roots plus 1 (degree-3)
real[] x,y,z; //The root array, the maximum array,
//and the critical point array
for(int k=0;k&lt;n+1;++k) {x[k]=2*k/n-1; if(k&gt;0) z[k-1]=(x[k-1]+x[k])/2;}
//Just initialized the roots to an arithmetic progression
//and the critical points to midpoints between roots
real f(real t)
{
real s=2*log(1-t^2);
for(int k=1;k&lt;n;++k) s+=log(abs(t-x[k]));
return s;
}
//This is just the logarithm of the absolute value of the polynomial
//with double roots at +1,-1 and simple roots at x[k], k=1,...,n-1
real g(real t)
{
real s=2*(1/(t+1)+1/(t-1));
for(int k=1;k&lt;n;++k) s+=1/(t-x[k]);
return s;
}
//This is the derivative of f
real G(real t)
{
real s=-2*(1/(t+1)^2+1/(t-1)^2);
for(int k=1;k&lt;n;++k) s-=1/(t-x[k])^2;
return s;
}
//This is the derivative of g
for(int m=0;m&lt;15*n+30;++m)
//The outer cycle; the number of iterations is sufficient
//to cover n up to 70 with the roots found with machine precision (1E-15)
{
for(int k=0;k&lt;n/2;++k)
{
real a=z[k];
a-=g(a)/G(a);
y[k]=f(a); y[n-1-k]=y[k];
z[k]=a; z[n-1-k]=-a;
}
//Newton update of critical points with symmetry taken into account
real[] xx=copy(x);
for (int k=1;k&lt;n/2;++k)
{
real p=(y[k]-y[k-1])*(xx[k+1]-xx[k-1])/8;
if (abs(p)&gt;0.3/n) p*=0.3/n/abs(p);
x[k]+=p; x[n-k]-=p;
}
//The main iteration step: move each root to balance the maxima
//adjacent to it. It can be done
//better if we look beyond the adjacent maxima to evaluate what will
//happen but, since it works in a reasonable time, I was too lazy to bother.
}
write(x); write(y);
//outputs the roots and the maxima of f. Note that the roots at +1 and -1
//are double and the rest are simple.
pause();
//Just doesn't let the window close before you look at it.
</code></pre>
2,391,769
<p>I'm concerned with the total number of ones, and the total number of runs, but not with the size of any of the runs.</p> <p>For example, $N=8$, $R=3$, $C=5$ includes 11101010, 01101011 among the 24 total possible strings.</p> <p>I can compute these for small $N$ easily enough, but I am specifically interested in the distribution for $N=65536$. As this will result in very large integers, the log probability distribution is equally useful.</p> <p>I found [1] and [2], which includes this:</p> <p>Let $N_{n;g_k,s_k}$ denote the number of binary strings which contain for given $g_k$ and $s_k$, $g_k=0,1,…,⌊\frac{s_k}{k}⌋$, $s_k=0,k,k+1,…,n$, exactly $g_k$ runs of 1’s of length at least $k$ with total number of 1’s (with sum of lengths of runs of 1’s) exactly equal to $s_k$ in all possible binary strings of length $n$.</p> <p>An expression for this is given in eq. (24):</p> <p>$N_{n;g_k,s_k} = \sum_{y=0}^{n-s_k} {y+1 \choose g_k } {s_k-(k-1)g_k-1 \choose g_k-1} \sum_{j=0}^{⌊\frac{n-y-s_k}{k}⌋} (-1)^j {y+1-g_k \choose j} {n-s_k-kj-g_k \choose n-s_k-kj-y} $</p> <p>for $g_k \in \{1, ..., ⌊\frac{s_k}{k}⌋\}$, $s_k \in \{k, k+1, ..., n\}$.</p> <p>I think this is exactly what I'm looking for, with $k = 1$, $s_k = C$ and $g_k = R$. However, when I implemented this I did not get the expected results (Python shown below, edge cases omitted), based on comparing to counting all strings for N=8. I am working backwards to try to understand where I might have gone wrong, but not having much luck yet. 
I wonder if I am misunderstanding the result.</p> <pre><code>from math import factorial

def choose(n, k):
    # minimal factorial-based n-choose-k (the original helper was not shown);
    # returns 0 when k is out of range
    if k &lt; 0 or k &gt; n:
        return 0
    return factorial(n) // (factorial(k) * factorial(n - k))

def F(x, y, n):
    # x = C or s_k (cardinality)
    # y = R or g_k (runCount)
    # n = N (total bits)
    a1 = 0
    for z in range(n-x+1):
        b1 = choose(z+1, y) * choose(x-1, y-1)
        a2 = 0
        for j in range(n-z-x+1):
            a2 += (-1) ** j * choose(z+1-y, j) * choose(n-x-j-y, n-x-j-z)
        a1 += b1 * a2
    return a1
</code></pre> <p>Note that the <code>choose</code> function uses factorial, which I realize won't work for larger $N$ - but should be fine for $N=8$.</p> <p>Edit: corrected a sign error typo in eq. (24) and the equivalent error in the python code.</p> <p>[1] Counting Runs of Ones and Ones in Runs of Ones in Binary Strings, Frosso S. Makri, Zaharias M. Psillakis, Nikolaos Kollas <a href="https://file.scirp.org/pdf/OJAppS_2013011110241057.pdf" rel="nofollow noreferrer">https://file.scirp.org/pdf/OJAppS_2013011110241057.pdf</a></p> <p>[2] On success runs of a fixed length in Bernoulli sequences: Exact and asymptotic results, Frosso S. Makri, Zaharias M. Psillakis <a href="http://www.sciencedirect.com/science/article/pii/S0898122110009284" rel="nofollow noreferrer">http://www.sciencedirect.com/science/article/pii/S0898122110009284</a></p>
G Cab
317,234
<p><a href="https://i.stack.imgur.com/52D4H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/52D4H.png" alt="runs_ones_1"></a></p> <p>Consider a binary string and let's put an additional (dummy) fixed $0$ at the start of the it. We individuate as a <em>run</em> a $0$ followed by contiguous $1$'s.</p> <p>So, given a string of length $N$, total of ones $C$, and number of runs $R$, we will have $N-C$ zeros, of which $N-C-R+1$ are "free", that is not tight to mark the runs.</p> <p>The number of ways to constitute the runs is the number of (standard) compositions of $C$ into $R$ parts, that is $$ {{C-1} \choose {R-1}}$$.<br> The number of ways to place the "free" zeros will be equal to the number of <em>weak</em> compositions of their number into $R+1$ parts (in front of the first and then after each run), i.e.: $$ {{N-C-R+1+R+1-1} \choose {R}}={{N-C+1} \choose {R}}$$. </p> <p>Which confirms <em>N.Shales</em>'s answer, so that the merit should go to him.</p> <p><em>Addendum</em> </p> <p>Concerning formula (24), for what I can see, it looks like that in the 3rd binomial ${{y+1+g_k} \choose {j}}$ there is a sign typo, and that it should be $\cdots -g_k$.<br> Then putting for instance $k=1,\; s_k=n-1 \; g_k=2$ it will correctly give $n-2$,<br> and with $k=1,\; s_k=C \; g_k=R$ it will give the formula above.<br> But not having the full text, I cannot check that further.</p>
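<p>As a sanity check (an illustrative Python script, not part of the argument above), a brute-force count agrees with the closed form; for instance $N=8$, $C=5$, $R=3$ gives the $24$ strings mentioned in the question:</p>

```python
from itertools import groupby, product
from math import comb

def count_strings(N, C, R):
    # Brute force: binary strings of length N with exactly C ones forming exactly R runs
    total = 0
    for bits in product("01", repeat=N):
        s = "".join(bits)
        if s.count("1") == C and sum(1 for key, _ in groupby(s) if key == "1") == R:
            total += 1
    return total

# Closed form from above: C(C-1, R-1) * C(N-C+1, R)
assert count_strings(8, 5, 3) == comb(4, 2) * comb(4, 3) == 24
```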
2,019,711
<p>Correction: For what values of the real number $a$, can $$ a(x_1^2+x_2^2+x_3^2)+2(x_1x_2+x_1x_3+x_2x_3) $$ be expressed as sum of the form $$ \alpha x^2+\beta y^2+\gamma z^2 $$ where $\alpha,\beta,\gamma$ are real numbers? </p>
user361424
361,424
<p>The proof that it won't work in that case is trivial - you get an impossible fraction of $\frac{f(x)}{0}$. But if what you're looking for is "why" it won't work, beyond that obvious reason, consider it geometrically - what Newton's method does is follow the tangent line to its x-intercept. If the derivative is zero, the tangent line is horizontal; if it's horizontal, there's no x-intercept. If what you're looking for is a detailed listing of when Newton's method will and won't work, there are too many variegated pathological cases for that to exist. Keeping in mind that geometric interpretation, following the tangent line to the x-intercept, should give you a general sense.</p>
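<p>To make that geometric picture concrete, here is a small illustrative sketch of the iteration (a minimal version, not a robust implementation), with an explicit guard for the horizontal-tangent case:</p>

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    # Follow the tangent line at the current point to its x-intercept.
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:
            # Horizontal tangent: no x-intercept, so the next step is undefined.
            raise ZeroDivisionError("derivative vanished at x = %r" % x)
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            return x
    return x

# A well-behaved starting point converges (here to sqrt(2)):
r = newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0)

# Starting at a critical point fails immediately, since f'(0) = 0:
# newton(lambda x: x**2 - 2, lambda x: 2*x, 0.0)  raises ZeroDivisionError
```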
199,235
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/9505/xy-yx-for-integers-x-and-y">$x^y = y^x$ for integers $x$ and $y$</a> </p> </blockquote> <p>Determine the number of solutions of the equation $n^m = m^n$ where both m and n are integers.</p>
Cameron Buie
28,900
<p>Well, there are countably infinitely many, yes? There are (after all) only that many integer pairs $(m,n)$, so certainly no more than that will be solutions to the equation. On the other hand, any pair $(m,m)$ will be a solution, and there are countably infinitely many of those.</p>
38,597
<p>Or equivalently, if $G$ is a group, do the projective and injective dimension of $Z$ (viewed as a $ZG$-module) agree?</p> <p>Thanks! </p>
Dan Ramras
4,042
<p>I like to think of group (co)homology topologically, so I would say the integral (co)homological dimension of G is the integral (co)homological dimension of BG. Thinking this way there are lots of geometric examples in which the integral homological dimension is less than the integral cohomological dimension: for example, non-orientable surfaces are classifying spaces for their fundamental groups. Their integral homological dimension is 1, and their integral cohomological dimension is 2.</p> <p>Of course for other coefficient systems, this will no longer be the case (as Tom points out in his comment below). Note, for example, that the rational cohomological dimension of a non-orientable surface is 1.</p> <p>[Okay, I'm going to admit some confusion in regards to the comments on the original question. Am I thinking of the flat or the injective dimension here, when I take homology of BG? From the comments, it sounds like this must correspond to the flat dimension? I don't have Brown's book in front of me to un-confuse myself...]</p>
1,226,162
<p>\begin{align} x' &amp;= -x^3 + x^5 + (x^4)(y^5)\\[.7em] y' &amp;= -8y^3 + y^5 - 10(y^4)(x^5) \end{align} $(0,0)$ is obviously a critical point of the system, and we are given that it is asymptotically stable, but have to show it. </p> <p>I have tried to make a Lyapunov function $V(x,y) = ax^2 + cy^2$, with a,c > 0 but I am having trouble to prove that $\frac{d}{dt} V(x,y)$ is negative definite. I get some complicated polynomial I can't use logic to finalize. How can I change the Lyapunov to come up with a meaningful conclusion?</p> <p>\begin{align} \frac{d}{dt}V(x,y) = 2ax(-3x^2 + 5x^4 + 4x^3y^5) + 2cy(-24y^2+5y^4-40y^3x^5)\\[.7em] \end{align}</p>
RTJ
223,807
<p>For $$V(x,y)=\frac{1}{2}(x^2+y^2)$$ we have $$\dot{V}=-x^4-8y^4+x^6+y^6-9y^5x^5\\ \leq -(1-x^2-|xy|)x^4-8(1-(1/8)y^2-|xy|)y^4$$ Thus if we define the region (neighborhood of the origin) $$\Omega:=\left\{(x,y)|(x^2+|xy|&lt;1)\textrm{ and }\frac{1}{8}y^2+|xy|&lt;1\right\}$$ we have that $$\dot{V}(x,y)&lt;0\qquad \forall (x,y)\in\Omega\setminus\{(0,0)\}$$ Now if we choose the level set $$\Omega_0:=\left\{(x,y)|V(x,y)&lt;\frac{1}{4}\right\}\subseteq \Omega$$ we also have that $$\dot{V}(x,y)&lt;0\qquad \forall (x,y)\in\Omega_0\setminus\{(0,0)\}$$ Therefore every solution that starts within $\Omega_0$ remains therein and asymptotically converges to the origin ($V$ is strictly decreasing).</p>
77,485
<p>Find the smallest integer $n$ such that $$\left(1-\frac{n}{365}\right)^n &lt; \frac{1}{2}.$$</p> <p>I cannot use a calculator, and I do not know where to begin.</p>
Jens
505
<p>You could get an initial guess, by </p> <p>$$ \begin{aligned} &amp;&amp;\left(1-\frac{n}{365}\right)^n &amp;&lt; \frac{1}{2}.\\ &amp;\Leftrightarrow &amp; n\ln\left(1-\frac{n}{365}\right) &amp;&lt; -\ln 2\\ \end{aligned} $$</p> <p>and using $\ln(1 + x)\approx x$ for small $x$ to arrive at $n\approx \sqrt{365\ln2}$. Then, you'd need to guess a value for this square root, and check if it really meets your requirements.</p>
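<p>Although the exercise forbids a calculator, the guess is easy to confirm by machine; an illustrative Python check (not part of the intended pencil-and-paper argument):</p>

```python
import math

# The initial guess from the approximation above: sqrt(365 ln 2) is about 15.9
guess = math.sqrt(365 * math.log(2))

# Scan upward for the smallest integer n with (1 - n/365)^n < 1/2
n = 1
while (1 - n / 365) ** n >= 0.5:
    n += 1
# n ends up as 16, consistent with the guess of roughly 15.9
```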
4,587,608
<p>The following is the definition of a change-of-coordinate matrix from a textbook I'm using:</p> <p>Let <span class="math-container">$\beta$</span> and <span class="math-container">$\beta'$</span> be two ordered bases for a finite-dimensional vector space V, and let <span class="math-container">$Q=[I_v]_{\beta}^{\beta'}$</span>. Then <span class="math-container">$Q$</span> is called a change of coordinate matrix.</p> <p>There is, however, no explanation of what <span class="math-container">$[I_v]$</span> means, and I can't understand it.</p> <p>Presumably, it should mean the identity matrix of the size appropriate for the vector space <span class="math-container">$V$</span>. It appears that the identity matrix is the same independent of the basis (1s on the diagonal), but in that case, how could it possible take us from one basis to another?</p> <p>Any help with what <span class="math-container">$[I_v]$</span> signifies would be greatly appreciated.</p>
Marian G.
527,214
<p>You want to solve the given integral in a simple way. I suggest the following approach, where no substitution is needed.</p> <p>First, modify the integral as follows:</p> <p><span class="math-container">\begin{align*} \int\frac{-t^2}{\sqrt{1-t^2}}\,\mathrm dt &amp;=\int\left(\frac{1-t^2}{\sqrt{1-t^2}}-\frac{1}{\sqrt{1-t^2}}\right)\,\mathrm dt\\[.7em] &amp;=\int\sqrt{1-t^2}\,\mathrm dt-\arcsin(t). \end{align*}</span></p> <p>Next, apply integration by parts on the last integral with <span class="math-container">$u=\sqrt{1-t^2}$</span> and <span class="math-container">$v'=1$</span>. Hence, <span class="math-container">$u'=\frac{-t}{\sqrt{1-t^2}}$</span> and <span class="math-container">$v=t$</span>. Consequently, we infer that</p> <p><span class="math-container">\begin{equation} \int\sqrt{1-t^2}\,\mathrm dt =t\sqrt{1-t^2}-\int\frac{-t^2}{\sqrt{1-t^2}}\,\mathrm dt. \end{equation}</span></p> <p>Substituting it back, we obtain</p> <p><span class="math-container">\begin{equation} \int\frac{-t^2}{\sqrt{1-t^2}}\,\mathrm dt =t\sqrt{1-t^2}-\int\frac{-t^2}{\sqrt{1-t^2}}\,\mathrm dt-\arcsin(t). \end{equation}</span></p> <p>From the last line, we can solve your integral immediately since the integral on the right-hand side is identical with the one on the left. Therefore,</p> <p><span class="math-container">\begin{equation} \boldsymbol{\int\frac{-t^2}{\sqrt{1-t^2}}\,\mathrm dt =\frac12\cdot\left(t\sqrt{1-t^2}-\arcsin(t)\right)+C}. \end{equation}</span></p>
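<p>As a quick numerical sanity check of the final result (an aside, not part of the derivation), one can compare a central-difference derivative of the antiderivative against the integrand:</p>

```python
import math

def F(t):
    # the antiderivative obtained above (with C = 0)
    return 0.5 * (t * math.sqrt(1 - t * t) - math.asin(t))

def integrand(t):
    return -t * t / math.sqrt(1 - t * t)

# F'(t) should agree with the integrand everywhere on (-1, 1)
h = 1e-6
for t in [-0.7, -0.2, 0.1, 0.4, 0.8]:
    assert abs((F(t + h) - F(t - h)) / (2 * h) - integrand(t)) < 1e-6
```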
276,725
<p>Is there a positive number $n$ of distinct odd integers $z_1,z_2, \ldots, z_n \geq 3$ such that $\frac{1}{z_1} + \frac{1}{z_2} + \cdots + \frac{1}{z_n} = 1$?</p>
user6043040
321,541
<p>Another solution: 1=1/3+1/5+1/7+1/9+1/11+1/13+1/23+1/721+1/979007+1/661211444787+1/622321538786143185105739+1/511768271877666618502328764212401495966764795565+1/209525411280522638000804396401925664136495425904830384693383280180439963265695525939102230139815</p> <p>You may have to verify it via symbolic software.</p>
4,320,437
<p>I was thinking about linear tramsformations and i came up with this example: <span class="math-container">$$f:\mathbb{R}^n \to \mathbb{C}^n\\ f(x)=ix$$</span> for this example, domain and co-domain are not defined over the same field and all linear transformations that i encountered by now had domain and co-domain defined over the same field. I was wondering that if this is a valid linear transformation or not? and if not, why did we put such a constraint?</p> <p>also, if it is possible, keep the explanation simple because i'm pretty new in pure math. thank you in advance.</p>
Elliot Yu
165,060
<p>Linear transformations are always defined with a single underlying field <span class="math-container">$K$</span>, so that both the domain and the codomain are <span class="math-container">$K$</span>-vector spaces. This is because for linearity to make sense, we need to have the same notion of scalar multiplication in both the source and the target space. In other words, in order to say that <span class="math-container">$f: V\to W$</span> is linear, we must be able to say that <span class="math-container">$f$</span> doesn't care if you scalar multiply the argument first and then apply it or apply it then scalar multiply, i.e. <span class="math-container">$f(\alpha v) = \alpha f(v)$</span> for all vectors <span class="math-container">$v\in V$</span> and scalars <span class="math-container">$\alpha$</span>. This only makes sense if you can &quot;do the same scalar multiplication&quot; on <span class="math-container">$V$</span> and on <span class="math-container">$W$</span>, namely we need this multiplication by <span class="math-container">$\alpha$</span> to make sense in both spaces.</p> <p>The above is the most general case, but one class of special cases allow you to say a little more. This is when <span class="math-container">$V$</span> is a <span class="math-container">$K$</span>-vector space, <span class="math-container">$W$</span> a <span class="math-container">$L$</span>-vector space, and <span class="math-container">$L$</span> is a field extension of <span class="math-container">$K$</span>, or equivalently, when <span class="math-container">$K$</span> is a subfield of <span class="math-container">$L$</span>. In this case, any <span class="math-container">$L$</span> vector space can be viewed as a <span class="math-container">$K$</span> vector space, by simply ignoring the &quot;extra scalars&quot; in <span class="math-container">$L$</span>. 
Thus when we say a map <span class="math-container">$f: V\to W$</span> is linear, we only require that <span class="math-container">$f(\alpha v) = \alpha f(v)$</span> for all <span class="math-container">$\alpha\in K$</span>. Taking the subfield <span class="math-container">$K$</span> to be <span class="math-container">$\mathbb{R}$</span> and the extension field <span class="math-container">$L$</span> to be <span class="math-container">$\mathbb{C}$</span>, this is exactly the situation in your example.</p> <p>As a side note, a more complicated way to reconcile the two different fields when one is an extension of the other is through an <a href="https://en.wikipedia.org/wiki/Change_of_rings#Extension_of_scalars" rel="noreferrer">extension of scalars</a>. This goes beyond the scope of your original question, so I will only mention it and encourage you to explore.</p>
1,166,127
<p>I want to select an even rows of matrix. How can I show it using mathematical notations.</p> <p>Something like this:</p> <p>$X_{i:}$ where <code>i</code> is even.</p>
Mathophile
219,073
<p>Let $A_{m\times n}$ be the original matrix, and $B$ be the matrix which consists of even rows of $A$. Then $B$ can be defined as follows:</p> <p>$B_{i,j}=A_{2i,j}$ $\ \ (1\leq i\leq \lfloor m/2\rfloor) $</p>
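<p>As an aside (not part of the notation itself), the same selection is a one-liner in code. A minimal Python sketch, using the $1$-based row numbering from above:</p>

```python
# A 4x2 matrix as nested lists; rows are numbered 1..4 as in the notation above
A = [[1, 2],
     [3, 4],
     [5, 6],
     [7, 8]]

# B consists of the even-numbered rows of A, i.e. B[i] = A[2i] in 1-based terms
B = [row for i, row in enumerate(A, start=1) if i % 2 == 0]
# B == [[3, 4], [7, 8]]
```

(With NumPy the same selection would be the slice <code>A[1::2]</code>, since NumPy indexing is $0$-based.)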
839,516
<p>What is the number of binary sequences of length $n$, with no two consecutive zeros, and if starts with $0$ has to end with $1$.</p> <p>Would appreciate suggestions and help.</p> <p>I tried counting the total sequences and than substractung the ones containing 2 consecutive zeros, and then substracting the ones starting and ending with seros- but it got messy and confusing </p>
Pavan Sangha
154,686
<p>In the case of starting with a $0$</p> <p>Let $A(n)$ be the number of such strings of length $n$ and suppose we know $A(k)$ for $k&lt;n$. Now each string of length $n$ can end in a $01$ or $11$. If it ends in $01$ the digit immediately before the $0$ must be a $1$, so it must actually end in $101$, and the first $n-2$ digits form a string counted by $A(n-2)$. If it ends in $11$ then the first $n-1$ digits form a string counted by $A(n-1)$. It follows that $$A(n)=A(n-1)+A(n-2).$$ In other words a string of length $n$ can be obtained by appending a $01$ onto the end of a string of length $n-2$ or a $1$ onto a string of length $n-1$.</p> <p>Perhaps look at something similar for strings starting with a $1$; of course there are more options. If you call this $B(n)$ your total will be $$C(n)=A(n)+B(n).$$</p>
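<p>A quick brute-force check of the recurrence on small cases (an illustrative Python sketch; the helper names are my own):</p>

```python
from itertools import product

def is_valid(s):
    # no two consecutive zeros; if it starts with 0 it must end with 1
    return "00" not in s and (s[0] != "0" or s[-1] == "1")

def A(n):
    # number of valid strings of length n that start with a 0
    return sum(1 for bits in product("01", repeat=n)
               if bits[0] == "0" and is_valid("".join(bits)))

# A(n) = A(n-1) + A(n-2) holds on small cases, e.g. A(4) = 2 and A(5) = 3
for n in range(4, 11):
    assert A(n) == A(n - 1) + A(n - 2)
```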
1,344,892
<p>Let $K$ be a field, and let $A$ be a $K$-algebra such that $\alpha \in A$. Then the natural homomorphism $$ \phi: K[x] \to K[\alpha], \hspace{3mm} (x \mapsto \alpha )$$ has a kernel which is a principal ideal $ \langle f \rangle$ and so $$ K[x] / \langle f \rangle \cong K[\alpha]$$</p> <p>Notice that $K[\alpha]$ is a field. The book then states that, if $n=$ deg $f$ we have that $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1} \}$ are a $K$-basis of $K[\alpha]$. </p> <p>I am not sure how to convince myself that this set is indeed a basis of $K[\alpha]$, how would I go about showing this? </p>
Noah Schweber
28,111
<p>Towards showing it spans, let's prove the easier proposition that $\alpha^n$ is in the span of $\{1, \alpha, . . ., \alpha^{n-1}\}$. (It's easy to get the rest of the way from this.) Write $f=c_nx^n+ . . . + c_0x^0$, with $c_n\not=0$. Then consider the element $$z=-{c_{n-1}x^{n-1}+c_{n-2}x^{n-2}+ . . . +c_0x^0\over c_n}$$ What can you say about the image of $z$ in $K[\alpha]$?</p> <p>Towards showing it is linearly independent, suppose I have a nontrivial linear combination of $1, . . . , \alpha^{n-1}$ which equals zero in $K[\alpha]$; can I use that to get a polynomial $g\in K[x]$ with degree $&lt;n$, whose $f$-image is $0$? Why is this a problem?</p>
293,371
<p>This is part of a homework assignment for a real analysis course taught out of "Baby Rudin." Just looking for a push in the right direction, not a full-blown solution. We are to suppose that $f(x)f(y)=f(x+y)$ for all real x and y, and that f is continuous and not zero. The first part of this question let me assume differentiability as well, and I was able to compose it with the natural log and take the derivative to prove that $f(x)=e^{cx}$ where c is a real constant. I'm having a little more trouble only assuming continuity; I'm currently trying to prove that f is differentiable at zero, and hence all real numbers. Is this an approach worth taking?</p>
KarlG
60,744
<p>Since $f(t)=f(t+0)=f(t)f(0)$ we can conclude that $f(0)=1$. If there exists a real number $β$ such that $f(β)=0$, then for any real number $x$ we have </p> <p>$f(x)=f(β+(x-β))=f(β)f(x-β)=0$</p> <p>which implies that $f$ is identically zero. Since $f$ is not identically zero, no such $β$ exists, so $f$ never vanishes; by the continuity of $f$ (and $f(0)=1&gt;0$) we can conclude that $f(x)&gt;0$ for all real numbers.</p> <p>For a positive integer $n$, $f(n)=f(1+1+...+1)=f(1)f(1)\cdots f(1)=f(1)^n$.</p> <p>Since $f(-1)=f(1+(-2))=f(1)f(-2)=f(1)f(-1)f(-1)$,</p> <p>it follows that $f(-1)=f(1)^{-1}$.</p> <p>For a negative integer $m$, $f(m)=f(-1+(-1)+...+(-1))=f(-1)^{-m}=(f(1)^{-1})^{-m}=f(1)^m$.</p> <p>Since $f(0)=1=f(1)^0$, $f(n)=f(1)^n$ holds for all integers.</p> <p>Let $q$ and $m$ be positive integers, then $f(m)=f(1/q+1/q+...+1/q)=f(1/q)^{qm}=f(1)^m$.</p> <p>It follows that $f(1/q)=f(1)^{1/q}$. This also holds for negative integers $q_1$ and $m_1$.</p> <p>Then, $f(s/t)=f(1/t+...+1/t)=f(1/t)^s=(f(1)^{1/t})^s=f(1)^{s/t}$, for any rational number.</p> <p>Since the rationals are dense in the reals and $f$ is continuous, for any real number $x$ we can find a sequence $(r_n)$ that will converge to $x$ with $f(r_n)=f(1)^{r_n}$, so that $f(x)=f(1)^x$ for all real numbers.</p> <p>Since $f(1)&gt;0$ and is real-valued, $\log(f(1))=c$ exists. Then:</p> <p>$f(x)=f(1)^x=(\exp(\log(f(1))))^x=(\exp(c))^x=(e^c)^x=e^{cx}$.</p> <p>Thanks again!</p>
586,724
<p>Please, how do I integrate $$\frac{1}{(1-e^{2x})^{1/2}}\,?$$ I have tried $u= e^x$, but I think that is wrong, so can anyone help me?</p>
Claude Leibovici
82,404
<p><strong>HINT</strong>. Would you try to define $u$ such that $u^2 = 1 - e^{2x} $ ? It will become quite simple.</p>
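<p>If you want to check where the substitution leads: carrying it through by hand I get the candidate antiderivative $-\operatorname{artanh}\sqrt{1-e^{2x}}$ (my own working, not part of the hint; the integrand only makes sense for $x&lt;0$). A quick numerical sanity check:</p>

```python
import math

# Candidate antiderivative worked out from u^2 = 1 - e^{2x} (my own
# working, not given in the hint); valid for x < 0.
F = lambda x: -math.atanh(math.sqrt(1 - math.exp(2 * x)))
integrand = lambda x: 1 / math.sqrt(1 - math.exp(2 * x))

# The central-difference derivative of F should reproduce the integrand.
h = 1e-6
for x in [-0.5, -1.0, -2.5]:
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert math.isclose(numeric_derivative, integrand(x), rel_tol=1e-6)

print("F'(x) matches the integrand")
```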
514,352
<p>Can anybody give me an example of a multiplicative function $f$ such that $$\prod_p \sum_{k=0}^\infty f(p^k)$$ converges absolutely and such that $$\sum_{n=1}^\infty f(n)$$ diverges?</p>
coffeemath
30,316
<p>The product $\Pi_p(1+a_p)$ converges absolutely iff the sum $\sum_p a_p$ does. </p> <p>A multiplicative function may be defined as $f(1)=1$ and then for each prime $p$ and power $k \ge 1$ the values of $f(p^k)$ may be assigned arbitrarily, and then $f$ itself may be defined by extending from prime powers using the multiplicativity.</p> <p>So suppose we define for each $p$ that $f(p)=1$, while also choosing the values of $f$ at higher powers of $p$ in such a way that $$a_p \equiv f(p)+f(p^2)+ \cdots = \frac{1}{p^2}.$$ [This will entail negative values for some of the higher $f(p^k)$, but that has no effect on the sum of all the terms being $1/p^2$, provided we choose them right.] Then the product $\Pi_p(1+a_p)$ has its $a_p=\frac{1}{p^2}$, and so converges absolutely since the sum of reciprocal squares of primes does.</p> <p>However the sum $\sum_n f(n)$ diverges, since if it converges then its $n$th term must approach zero, yet for any squarefree number $n$ we have $f(n)=1.$</p>
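<p>A small numerical sketch of this construction, taking (as one concrete choice of the "higher" values, my assumption) $f(p^k)=0$ for $k\geq 3$, so that $a_p=1/p^2$ exactly:</p>

```python
# One concrete instance of the construction: f(p) = 1, f(p^2) = 1/p^2 - 1,
# and f(p^k) = 0 for k >= 3, extended multiplicatively.
# Then a_p = f(p) + f(p^2) + ... = 1/p^2.
def factorize(n):
    """Return {prime: exponent} of n by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def f(n):
    value = 1.0
    for p, k in factorize(n).items():
        if k == 2:
            value *= 1.0 / p**2 - 1.0
        elif k >= 3:
            value *= 0.0
        # k == 1 contributes a factor f(p) = 1
    return value

# Terms do not tend to zero: f(n) = 1 on every squarefree n.
squarefree = [n for n in range(2, 200)
              if all(k == 1 for k in factorize(n).values())]
assert all(f(n) == 1.0 for n in squarefree)

# Meanwhile sum_p a_p = sum_p 1/p^2 stays bounded, so the product
# prod_p (1 + a_p) converges absolutely.
primes = [p for p in range(2, 10000)
          if all(p % q for q in range(2, int(p**0.5) + 1))]
assert sum(1.0 / p**2 for p in primes) < 0.5

print("f = 1 on squarefree n; sum of a_p is bounded")
```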
889,628
<p>The question and answer is shown but I don't fully understand the answer for part a. Could someone please explain to me why the integral setup for the marginal density function of y1 is from y1 to 1, and not 0 to 1? And also the same thing for y2. Thank you <img src="https://i.stack.imgur.com/ECQQ7.png" alt="Question"></p> <p><img src="https://i.stack.imgur.com/Kg8N1.png" alt="enter image description here"></p>
JRN
18,398
<p>I teach infinity by using the number line. Tell the child that each point on the line represents a number, where the numbers are arranged such that the larger numbers are on the right. Then tell them that <em>infinity</em> is the number (the point) that is the largest (the farthest to the right). If they're smart, they'll see that no such number (or point) exists. <em>Infinity</em> is a concept with no counterpart number (or point).</p>
318,299
<blockquote> <p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p> </blockquote> <p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
Henno Brandsma
4,280
<p>In a locally connected space $X$, all connected components of open sets are open. This is in fact equivalent to being locally connected.</p> <p>Proof: (one direction) let $O$ be an open subset of a locally connected space $X$. Let $C$ be a component of $O$ (as a (sub)space in its own right). Let $x \in C$. Then let $U_x$ be a connected neighbourhood of $x$ in $X$ such that $U_x \subset O$, which can be done as $O$ is open and the connected neighbourhoods form a local base. Then $U_x,C \subset O$ are both connected and intersect (in $x$) so their union $U_x \cup C \subset O$ is a connected subset of $O$ containing $x$, so by maximality of components $U_x \cup C \subset C$. But then $U_x$ witnesses that $x$ is an interior point of $C$, and this shows all points of $C$ are interior points, hence $C$ is open (in either $X$ or $O$, that's equivalent).</p> <p>Now $\mathbb{R}$ is locally connected (open intervals form a local base of connected sets) and so every open set is a disjoint union of its components, which are open connected subsets of $\mathbb{R}$, hence are open intervals (potentially of infinite "length", i.e. segments). That there are at most countably many of them follows from the already given "rational in every interval" argument. </p>
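<p>Not part of the proof, but for a set $U$ given as a finite union of open intervals, "take the connected components" is just the familiar sort-and-merge computation; a small sketch:</p>

```python
# Finite illustration: given U as a union of open intervals, its connected
# components fall out of a sort-and-merge pass.
def components(intervals):
    """Merge open intervals (a, b) into the disjoint components of their union."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a < merged[-1][1]:   # overlaps the current component
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:                              # starts a new component
            merged.append((a, b))
    return merged

U = [(0, 2), (1, 3), (2.5, 2.75), (5, 6), (5.5, 8)]
assert components(U) == [(0, 3), (5, 8)]

# Open intervals sharing only an endpoint are distinct components:
assert components([(0, 1), (1, 2)]) == [(0, 1), (1, 2)]

print(components(U))  # -> [(0, 3), (5, 8)]
```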
318,299
<blockquote> <p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p> </blockquote> <p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
peter
92,783
<p>I hope this is right, as it is a lemma I've thought of and plan to use in a project due in several days; it somewhat generalizes the question asked:</p> <p>Suppose that $U$ is a set of intervals in $\mathbb{R}$ (closed, open, semi-closed, etc.). Then there exists a set of disjoint intervals $V$ in $\mathbb{R}$ s.t. $\bigcup_{I\in U}I=\biguplus_{I\in V}I$. If none of the intervals are degenerate or $U$ is countable, then this set can be taken to be countable. And if they were all open, then we could take the segments of $V$ to be open (also, if $U$ is countable then we won't be needing the Axiom of Choice).</p> <p>Proof: Let us order the elements of $U$: $U=\langle I_\beta\,|\,\beta\leq\alpha\rangle$ where $\alpha$ is the first ordinal of cardinality $|U|$. (If $U$ is countable then this doesn't require AC, and from here on it will be standard induction with a simple construction in the end for $\omega$.)</p> <p>I'll build, by means of transfinite induction, $V_\beta=\langle J^\gamma_\beta\,|\,\gamma\leq\beta\rangle$, a sequence of segments for all $\beta\leq\alpha$, such that every two sets in $V_\beta$ are either disjoint or equal, such that $\displaystyle{\bigcup_{\gamma\leq\beta}I_\gamma=\biguplus_{\gamma\leq\beta}J^\gamma_\beta}$, and such that $\forall\beta$, $\langle J^\beta_\gamma\rangle_{\gamma\geq\beta}$ is a non-descending sequence of sets.</p> <p>For $V_0$, take $V_0=\langle I_0\rangle$. Suppose that we have built the required $V_\gamma$, $\gamma&lt;\beta$ for some $\beta\leq\alpha$; then we will build $V_\beta$ in the following way: $\forall\gamma&lt;\beta$, denote $\widetilde{J}_\gamma=\bigcup_{\gamma\leq\delta&lt;\beta}J_\delta^\gamma$, which are still segments (being non-decreasing unions). 
If $I_\beta$ is disjoint of all $\widetilde{J}_\gamma$, taking $V_\beta\!=\!\langle \widetilde{J}_\gamma\,|\,\gamma&lt;\beta\rangle\cup\{(\beta,I_\beta)\}$ would give us a sequence $\langle V_\gamma\,|\,\gamma\leq\beta\rangle$ satisfying the required conditions of it (the only non trivial thing is that pairs of $\widetilde{J}_\gamma$ are either disjoint of each other or are equal, but that is also quite trivial since if the contrary would have occurred, then $\exists\gamma_1&lt;\gamma_2&lt;\beta$ s.t. $\widetilde{J}_{\gamma_1}\neq\widetilde{J}_{\gamma_2}$ and $\widetilde{J}_{\gamma_1}\cap\widetilde{J}_{\gamma_2}\neq\emptyset$, but then, $\exists \beta&gt;\delta_1\geq\gamma_1, \beta&gt;\delta_2\geq\gamma_2$ s.t. $J^{\gamma_1}_{\delta_1}\cap J^{\gamma_2}_{\delta_2}\neq\emptyset$, meaning that either $J^{\gamma_1}_{\delta_2}= J^{\gamma_2}_{\delta_2}$ or $J^{\gamma_1}_{\delta_1}= J^{\gamma_2}_{\delta_1}$ thus, $\forall\beta&gt;\epsilon\geq\delta_1,\delta_2$, $J^{\gamma_1}_{\epsilon}= J^{\gamma_2}_{\epsilon}$ and since we are talking here about non-decreasing sequences, this will contradict $\widetilde{J}_{\gamma_1}\neq\widetilde{J}_{\gamma_2}$). And if $I_\beta$ isn't disjoint of all $\widetilde{J}_\gamma$, Then we can take $J_\beta^\gamma=\widetilde{J}_\gamma$ for all $\gamma&lt;\beta$ that don't intersect with $I_\beta$ and $J_\beta^\gamma=\bigcup_{\delta&lt;\beta\text{ s.t. }\widetilde{J}_\delta\cap I_\beta\neq\emptyset}{\widetilde{J}_\delta}\cup I_\beta$ - segment for all of the other $\gamma\leq\beta$. Then again from the same arguments, $\langle V_\gamma\,|\,\gamma\leq\beta\rangle$ would satisfy the required conditions. </p> <p>Finally, we can take $V=\{J_\alpha^\beta\,|\,\beta\leq\alpha\}$ to get what we wanted in the first place. 
And obviously, if our segments were all non-degenerate to begin with, from the way we constructed our set, all of the segments in $V$ will be non-degenerate (and thus of positive measure), but they are disjoint and so there is only a countable number of them. And if the segments in $U$ were all open, then obviously, so will the segments in $V$.$\square$</p>
318,299
<blockquote> <p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p> </blockquote> <p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
Christian Bueno
86,451
<p>The following is certainly not the quickest approach to a proof, but when this question was first posed to me in class, my first intuition was to use some elementary graph theory:</p> <hr> <p>Let $U$ be an open set of $\mathbb{R}$. As we know, $\mathbb{R}$ has a countable basis $\mathcal{B}$ comprised of connected open sets and so we may write $U=\bigcup_{n\in I} U_n$, where for each $n$ we have $U_n\in\mathcal{B}$ and $I$ is some countable index set. </p> <p>Let $G$ be the <strong><em><a href="http://en.wikipedia.org/wiki/Intersection_graph" rel="nofollow">intersection graph</a></em></strong> of $\{U_n\}$. That is to say, the vertex set of $G$ is simply $\{U_n\}$ and there is an edge between $U_i$ and $U_j$ iff they have nonempty intersection. It's easy to convince yourself that:</p> <ul> <li>This graph must have countably many <a href="http://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29" rel="nofollow">graphically-connected</a> components (otherwise we'd have uncountably many vertices which is impossible).</li> <li>The intersection graph of $A\subseteq\{U_n\}$ is graphically-connected iff for any two $V,W\in A$ there is a sequence $V=U_{n_1},U_{n_2},\ldots,U_{n_k}=W$ such that $U_{n_i}\cap U_{n_{i+1}}\neq\varnothing$. </li> <li>The union $\bigcup A$ is a connected set of $\mathbb{R}$ whenever the intersection graph of $A$ is graphically-connected. </li> </ul> <p>Thus, when we take the union of all the vertices within a graphically-connected component, for every component, we obtain countably-many connected open sets. The union of these sets is of course $U$ itself. Since the connected open sets of $\mathbb{R}$ are intervals (including rays), we're done.</p> <hr> <p><strong>Side Note</strong>: This would also work in $\mathbb{R}^n$ or in general, any topological space $X$ that has a countable basis comprised of connected sets. 
Well, so long as we replace <em>countable union of disjoint open intervals</em> with <em>countable union of disjoint open connected sets.</em></p>
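<p>For a finite family of basis intervals, the argument above is directly executable: build the intersection graph and take its graphically-connected components. A sketch (the sample intervals are an arbitrary choice):</p>

```python
from collections import deque

# Finite mirror of the argument: build the intersection graph of a family
# of open intervals and take its graphically-connected components; the
# union over each component is a single interval.
def intersects(i, j):
    (a, b), (c, d) = i, j
    return max(a, c) < min(b, d)           # open intervals overlap

def graph_components(intervals):
    n = len(intervals)
    adj = [[j for j in range(n)
            if j != i and intersects(intervals[i], intervals[j])]
           for i in range(n)]
    seen, comps = set(), []
    for s in range(n):                     # BFS over each graph component
        if s in seen:
            continue
        seen.add(s)
        comp, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        lo = min(intervals[v][0] for v in comp)
        hi = max(intervals[v][1] for v in comp)
        comps.append((lo, hi))
    return sorted(comps)

basis = [(0, 2), (1, 3), (5, 6), (5.5, 8)]
assert graph_components(basis) == [(0, 3), (5, 8)]
print(graph_components(basis))
```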
183,768
<p>Prove convergence/divergence of the series: $$\sum_{n=1}^{\infty}\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)}$$</p> <p>Here is what I have at the moment:</p> <p><strong>Method I</strong></p> <p>My first way uses a result that is related to <strong><a href="http://en.wikipedia.org/wiki/Wallis_product" rel="nofollow noreferrer">Wallis product</a></strong> that we'll denote by $W_{n}$. Also,<br> we may denote $\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)}$ by $P_{n}$. Having noted these and taking a large value of $n$<br> we get: $$(P_{n})^2 =\frac{1}{W_{n} \cdot (2n+1)}\approx\frac{2}{\pi}\cdot \frac{1}{2n+1}$$ $$P_{n}\approx \sqrt {\frac{2}{\pi}} \cdot \frac{1}{\sqrt{2n+1}}$$ </p> <p>Further, since $P_{n}$ is decreasing, we have that: $$\sum_{k=1}^{n} P_{k} \ge n\,P_{n}\approx\sqrt {\frac{2}{\pi}} \cdot \frac{n}{\sqrt{2n+1}}\to\infty,$$ which shows that the series diverges.</p> <p><strong>Method II</strong></p> <p>The second way is to resort to the powerful <strong><a href="http://mathworld.wolfram.com/KummersTest.html" rel="nofollow noreferrer">Kummer's Test</a></strong> and firstly proceed with the ratio test: $$\lim_{n\to\infty} \frac{P_{n+1}}{P_{n}}=\lim_{n\to\infty}\frac{2n+1}{2n+2}=1$$ and according to the result, the ratio test is inconclusive.</p> <p>Now, we apply Kummer's test and get: $$\lim_{n\to\infty}\left(n\cdot\frac{P_{n}}{P_{n+1}}-(n+1)\right)=\lim_{n\to\infty}\left(-\frac{n+1}{2n+1}\right)=-\frac{1}{2} \le 0$$ Since $$\sum_{n=1}^{\infty} \frac{1}{n} \longrightarrow \infty$$ our series diverges and we're done.</p> <p>On the site I've also found <a href="https://math.stackexchange.com/questions/118383/convergence-of-sum-limits-n-1-infty-left-dfrac-1-cdot3-cdots-2n-1?rq=1">a related question</a> with answers that can be applied for my question. Since I already have some answers for my question you may regard it as a recreational one, and if you have a nice proof to share I'd be glad to receive it. I like this question very much and want to build up a collection of nice proofs for it. Thanks. </p>
Rijul Saini
27,729
<p>As robjohn notes, $$ \frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}=\frac{(2n)!}{2^{2n}n!^2} = \frac 1{4^n} \binom{2n}{n} $$ Noting that $$(2n+1) \binom{2n}{n} &gt; \sum_{i=0}^{2n} \binom{2n}{i} = 4^n$$ As $\binom{2n}{n}$ is the largest binomial coefficient.</p> <p>Therefore, $$\frac 1{4^n} \binom{2n}{n} &gt; \frac{1}{2n+1},$$ and hence the series diverges, by the comparison test.</p>
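<p>A quick numerical check of the key inequality and of the resulting comparison (nothing here is needed for the proof):</p>

```python
from math import comb

# Check the key inequality (2n+1) * C(2n, n) > 4^n, and that the partial
# sums of C(2n, n) / 4^n dominate those of 1/(2n+1).
partial, lower = 0.0, 0.0
for n in range(1, 200):
    assert (2 * n + 1) * comb(2 * n, n) > 4**n
    partial += comb(2 * n, n) / 4**n
    lower += 1 / (2 * n + 1)

assert partial > lower       # consistent with divergence by comparison
print(f"partial sum through n = 199: {partial:.3f} (lower bound {lower:.3f})")
```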
2,259,243
<p>How to solve T(n) = T(n-2) + n using iterative substitution</p> <pre><code>Base case: T(0) = 1 T(1) = 1 Solve: T(n) = T(n-2) + n </code></pre> <p>Currently I have:</p> <pre><code>T(n) = T(n-2) + n = T(n-4) + n - 2 + n = T(n-4) + 2n - 2 = T(n-6) + n - 4 + n - 2 + n = T(n-6) + 3n - 6 = T(n-8) + n - 6 + n - 4 + n -2 + n = T(n-8) + 4n - 12 = T(n-10) + n - 8 + n - 6 + n - 4 + n - 2 + n = T(n-10) + 5n - 20 </code></pre> <p>The pattern I see is: </p> <p>$$\ T(n-2 \sum_{i=1}^k i) + n \sum_{i=0}^k i - \sum_{i=0}^{k-1} i(i+1) $$</p> <p>but this may be wrong because I am completely stuck after this</p>
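<p>For what it's worth, the expansions suggest the pattern $T(n)=T(n-2k)+kn-k(k-1)$ (the constants $0,2,6,12,20$ are $k(k-1)$); this is my own reading of the pattern, and a brute-force check agrees:</p>

```python
# Brute-force check of the pattern the expansions suggest:
# T(n) = T(n - 2k) + k*n - k*(k - 1), with T(0) = T(1) = 1.
def T(n, memo={0: 1, 1: 1}):
    if n not in memo:
        memo[n] = T(n - 2) + n
    return memo[n]

for n in range(2, 60):
    for k in range(1, n // 2 + 1):
        assert T(n) == T(n - 2 * k) + k * n - k * (k - 1)

# Unrolling all the way down (k = n // 2) gives a closed form.
for n in range(60):
    m = n // 2
    assert T(n) == 1 + m * n - m * (m - 1)

print("pattern verified up to n = 59")
```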
Anindya Sengupta
664,444
<p>In your proof, you are assuming that a permutation cannot be both even and odd (for otherwise, you could express B as a product of an even number of transpositions, and B^(-1) as a product of an odd number of transpositions, and then your proof falls flat). To prove that a permutation cannot be BOTH even and odd, you need the fact that the identity can only be expressed as a product of an even number of transpositions. So you do need an alternative approach to proving this result. That's why most proofs are lengthy.</p>
1,553,530
<p>I have a small question about how to finish the proof in the title. The main idea seems to be make an assumption of ∀x∀y (Px→(Py→x=y)) and to derive a contradiction between Raa and ¬Raa from that, which then proves the conclusion:</p> <p>So</p> <p>1.∀x∀y (Px→(Py→x=y))</p> <ol start="2"> <li>∀y (Pa→(Py→a=y))</li> </ol> <p>3.(Pa→(Pb→a=b))</p> <ol start="4"> <li>Assume Pa, then by modus ponens</li> </ol> <p>5.Pb→a=b</p> <p>Pb can be derived from the premiss ∀x∃y(Rxy∧Py), which gives</p> <p>6.∃y(Ray∧Py)</p> <ol start="7"> <li><p>Existential elimination gives the assumption Rab∧Pb</p> </li> <li><p>Pb by conjunction elimination</p> </li> <li><p>Plugging Pb into 5. gives a=b by modus ponens</p> </li> <li><p>Use the assumption Rab∧Pb again</p> </li> <li><p>Use conjunction elimination to get Rab</p> </li> <li><p>Then combine 9. and 11. to get Raa</p> </li> <li><p>Then take the premiss ∀x¬Rxx</p> </li> <li><p>Use universal quantifier elimination to get ¬Raa</p> </li> <li><p>From the contradiction between 14. and 12. you can prove ¬Pa (Pa was the assumption in 4.)</p> <p>But where do I go from here? I need to get another contradiction in order to discharge my assumption ∀x∀y (Px→(Py→x=y)) and prove the conclusion...but anything I assume seems impossible to discharge again!</p> </li> </ol> <p>Thanks for your help and sorry for the long explanation!</p>
Ollie
293,142
<p>For those who like a nice Gentzen-style proof: </p> <p>Have ¬∀x∀y(Px→(Py→(x=y)) as your root. </p> <p>Line1: ∃Elim. Have ∃y(Rcy∧Py) and ¬∀x∀y(Px→(Py→(x=y)) on the same line. </p> <p>To prove ∃y(Rcy∧Py) it's just a simple ∀Elim from ∀x∃y(Rxy∧Py)</p> <p>Line2: ¬Intro. Have Pa and ¬Pa on the same line. </p> <p>To prove Pa it's just a simple ^Elim from (Rca ^ Pa) which is discharged at line1.</p> <p>Line3 (above ¬Pa): ∃Elim. Have ∃y(Ray∧Py) and ¬Pa on the same line. </p> <p>We can derive ∃y(Ray∧Py) from ∀x∃y(Rxy∧Py).</p> <p>Line4 (above ¬Pa): ¬Intro. Have Raa and ¬Raa on the same line. </p> <p>¬Raa can be derived from ∀x¬Rxx. </p> <p>Line5 (above Raa): Have (a=b) and Rab on the same line. </p> <p>Rab can be derived from (Rab^Pb), which is discharged at line3. </p> <p>Line6 (above (a=b)): ->Elim. Have (Pb -> (a=b)) and Pb on the same line. </p> <p>Pb can be derived from (Rab^Pb), which is discharged at line3.</p> <p>Line7: ->Elim. Have (Pa -> (Pb -> (a=b))) and Pa on the same line. </p> <p>Pa is discharged at line 4. Two more uses of ∀Elim and you get ∀x∀y(Px→(Py→x=y)) which is discharged at line2. </p> <p>The proof is then complete. : )</p>
64,905
<p>Let's see if we could use MO to put some pressure on certain publishers...</p> <p>Although it is wonderful that it has been put <a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p> <p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
Petrus
11,860
<p><em>Lectures on Chevalley Groups</em> by Robert Steinberg.</p>
64,905
<p>Let's see if we could use MO to put some pressure on certain publishers...</p> <p>Although it is wonderful that it has been put <a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p> <p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
John Stillwell
1,587
<p>Paul Cohen - <em>Set Theory and the Continuum Hypothesis</em>.</p>
2,031,842
<p>If $3$ is the remainder when dividing $P(x)$ by $(x-3)$, and $5$ is the remainder when dividing $P(x)$ by $(x-4)$, what is the remainder when dividing $P(x)$ by $(x-3)(x-4)$?</p> <p>I'm completely puzzled by this; I'm not sure where to start.</p> <p>Any hint would be much appreciated. </p>
Mark Bennet
2,906
<p>If you carry out the division you should be able to show that $$P(x)=Q(x)(x-3)(x-4)+R(x)$$</p> <p>Two questions: What is the maximum degree of the remainder $R(x)$? Can you see how to use the remainder theorem for the cases $x=3,4$ that you know already so that $Q(x)$ becomes irrelevant?</p>
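<p>Carrying the hint through (spoiler): $R$ has degree at most $1$, and the remainder theorem gives $R(3)=P(3)=3$ and $R(4)=P(4)=5$, which force $R(x)=2x-3$. A quick check with an arbitrary sample quotient:</p>

```python
# Sanity check: the remainder R(x) = a*x + b must satisfy R(3) = P(3) = 3
# and R(4) = P(4) = 5 (remainder theorem), giving a = 2, b = -3.
a, b = 2, -3
R = lambda x: a * x + b

# Build a sample P with an arbitrary quotient Q; any choice of Q gives a
# polynomial with the stated remainders on division by (x-3) and (x-4).
Q = lambda x: x**2 + 7
P = lambda x: Q(x) * (x - 3) * (x - 4) + R(x)

assert P(3) == 3 and P(4) == 5
assert R(3) == 3 and R(4) == 5
print("R(x) = 2x - 3")
```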