| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,397,834 | <blockquote>
<h2><span class="math-container">$$49y″−98y′+48y= 0 \quad\quad\quad \,\, y(2)=3,y′(2)=9.$$</span></h2>
</blockquote>
<p>When I solved, I got that my <span class="math-container">$r_1= \frac67$</span> and <span class="math-container">$r_2= \frac87.$</span> Then I got that <span class="math-container">$y= C_1e^{\frac67 t} +C_2e^{\frac87 t}.$</span> I'm not sure how to proceed from here in regards to substituting <span class="math-container">$y(2)= 3$</span> and <span class="math-container">$y'(2)=9$</span> to get <span class="math-container">$C_1$</span> and <span class="math-container">$C_2$</span> values. </p>
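<p><em>(Added sketch, not part of the original post.)</em> Substituting <span class="math-container">$t=2$</span> into <span class="math-container">$y$</span> and <span class="math-container">$y'$</span> gives a <span class="math-container">$2\times2$</span> linear system for <span class="math-container">$C_1,C_2$</span>, which can be solved numerically:</p>

```python
import numpy as np

# General solution y(t) = C1*exp(6t/7) + C2*exp(8t/7).
# y(2) = 3 and y'(2) = 9 give two linear equations in C1 and C2.
r1, r2 = 6/7, 8/7
A = np.array([[np.exp(2*r1),    np.exp(2*r2)],
              [r1*np.exp(2*r1), r2*np.exp(2*r2)]])
rhs = np.array([3.0, 9.0])
C1, C2 = np.linalg.solve(A, rhs)

y  = lambda t: C1*np.exp(r1*t) + C2*np.exp(r2*t)
dy = lambda t: r1*C1*np.exp(r1*t) + r2*C2*np.exp(r2*t)
```

<p>Evaluating <code>y(2)</code> and <code>dy(2)</code> then recovers the two initial conditions.</p>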
| b00n heT | 119,285 | <p>Since I'm not completely satisfied with the other answers, rewrite the equation as
<span class="math-container">$$y(x)^5=8-x^5$$</span>
and now partially differentiate with respect to
<span class="math-container">$x$</span> on both sides (using the chain rule):
<span class="math-container">$$5y^4(x)y'(x)=-5x^4$$</span>
and dividing by <span class="math-container">$y^4(x)$</span> you obtain
<span class="math-container">$$y'(x)=-\frac{x^4}{y^4(x)}$$</span>
and so on and so forth</p>
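<p><em>(Added check, not in the original answer.)</em> The differentiation step can be confirmed symbolically, assuming SymPy is available:</p>

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Implicitly differentiate y(x)^5 = 8 - x^5 and solve for y'(x).
expr = y(x)**5 - (8 - x**5)
dydx = sp.solve(sp.diff(expr, x), sp.Derivative(y(x), x))[0]
# dydx should come out as -x^4 / y(x)^4, matching the answer.
```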
|
140,459 | <p>There is a long tradition of mathematicians remarking that FLT in itself is a rather isolated claim, attractive only because of its simplicity. And people often note that a great thing about current proofs of FLT is their use of the modularity theorem, which is just the opposite: arcane, and richly connected to a lot of results.</p>
<p>But have there been uses of FLT itself? Beyond implying simple variants of itself, are there any more serious uses yet?</p>
<p>I notice the discussion in <a href="https://mathoverflow.net/questions/50183/fermats-last-theorem-and-computability-theory">Fermat's Last Theorem and Computability Theory</a> concludes that one purported use is not seriously using FLT.</p>
| Alex B. | 35,416 | <p>Corollary 3.17 in <a href="http://arxiv.org/abs/1206.1822">this paper of Stefan Keil</a> uses FLT for exponent 7 to show that if $E/\mathbb{Q}$ is an elliptic curve with a rational 7-torsion point $P$, and $E\rightarrow E'$ is the 7-isogeny with kernel $\langle P\rangle$, then $E'(\mathbb{Q})[7]=0$. There are of course lots of ways of proving this, but the paper does it by writing down a parametrisation of all elliptic curves over $\mathbb{Q}$ with 7-torsion and of their rational 7-isogenies, and then playing with parameters to get a contradiction to FLT.</p>
|
140,459 | <p>There is a long tradition of mathematicians remarking that FLT in itself is a rather isolated claim, attractive only because of its simplicity. And people often note that a great thing about current proofs of FLT is their use of the modularity theorem, which is just the opposite: arcane, and richly connected to a lot of results.</p>
<p>But have there been uses of FLT itself? Beyond implying simple variants of itself, are there any more serious uses yet?</p>
<p>I notice the discussion in <a href="https://mathoverflow.net/questions/50183/fermats-last-theorem-and-computability-theory">Fermat's Last Theorem and Computability Theory</a> concludes that one purported use is not seriously using FLT.</p>
| Wojowu | 30,186 | <p>Recall that around 1977 Mazur completely classified the possible torsion groups of elliptic curves over <span class="math-container">$\mathbb Q$</span>. A few years prior, Kubert had worked on this problem and established a number of partial results, including, in the paper <a href="https://doi.org/10.1112/plms/s3-33.2.193" rel="noreferrer">"Universal bounds on the torsion of elliptic curves"</a>, the following statement (Main result 1, second part):</p>
<blockquote>
<p>If <span class="math-container">$\ell>3$</span> is a prime for which Fermat's last theorem is valid, then <span class="math-container">$\ell^2\nmid |E_\mathrm{tor}(\mathbb Q)|$</span>.</p>
</blockquote>
<p>(let me remark that the proof splits into cases <span class="math-container">$\ell>5$</span>, which substantially uses the assumption, and <span class="math-container">$\ell=5$</span> which doesn't and uses a complicated descent argument)</p>
<p>With this theorem we can, relatively easily, prove that if FLT holds, then <span class="math-container">$|E_\mathrm{tor}(\mathbb Q)|$</span> is a product of a squarefree number and a factor of <span class="math-container">$12$</span>.</p>
<p>Of course, this result predates FLT by a long shot, and was quickly superseded by Mazur's theorem, but it is still noteworthy because it relies on full FLT and not just a single case.</p>
|
1,142,530 | <p>Find an equation of the plane.
The plane that passes through the point
(−3, 2, 1) and contains the line of intersection of the planes
x + y − z = 4
4x − y + 5z = 2</p>
<p>I know the normal to plane 1 is <1,1,-1> and the normal to plane 2 is <4,-1,5>. The cross product of these 2 would give a vector that is in the plane I need to find.</p>
<p>P1 x P2 = <4,-9,-5></p>
<p>So now I have a point (-3,2,1) and a vector <4,-9,-5> on the plane but I'm not sure what to do next.</p>
| kobe | 190,421 | <p>Let $\Pi$ be the plane that we seek. Setting $z = 0$ in the two equations and solving simultaneously for $x$ and $y$, we find that $(6/5,14/5,0)$ is a point in the line of intersection of the two planes. So $(6/5,14/5,0)$ lies on $\Pi$. Since $(-3,2,1)$ also lies on $\Pi$, the vector from $(-3,2,1)$ to $(6/5,14/5,0)$, i.e., $\vec{w} = \langle 21/5,4/5,-1\rangle$, lies on $\Pi$. So the vector $\vec{n} = \vec{v} \times \vec{w}$, where $\vec{v} = \langle 4,-9,-5\rangle$ is the cross product you already computed, is normal to $\Pi$. So the equation of the plane is $\vec{n}\cdot \langle x + 3, y - 2, z - 1\rangle = 0$. Now simplify to get it in the form $ax + by + cz = d$.</p>
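<p><em>(Added numerical check, assuming NumPy.)</em> The whole construction can be verified end to end:</p>

```python
import numpy as np

n1 = np.array([1.0, 1.0, -1.0])   # normal of x + y - z = 4
n2 = np.array([4.0, -1.0, 5.0])   # normal of 4x - y + 5z = 2
v = np.cross(n1, n2)              # direction of the intersection line: <4, -9, -5>

p = np.array([-3.0, 2.0, 1.0])    # the given point on the sought plane
q = np.array([6/5, 14/5, 0.0])    # point on the intersection line (set z = 0)
w = q - p                         # second vector lying in the plane
n = np.cross(v, w)                # normal of the sought plane
d = n @ p                         # plane equation: n . (x, y, z) = d
```

<p>Both <code>p</code> and <code>q</code> satisfy the plane equation, and so does <code>q + v</code>, confirming that the plane contains the line of intersection.</p>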
|
3,414,009 | <p>Given functions
<span class="math-container">$$
f(x)=\sum_{i=0}^\infty a_ix^i\,\,\,\text{ and }\,\,\,g(x)=\sum_{j=0}^\infty b_jx^j
$$</span>
the following simplification
<span class="math-container">$$
f(x)g(x)=\left(\sum_{i=0}^\infty a_ix^i\right)\left(\sum_{j=0}^\infty b_jx^j \right)=\sum_{i=0}^\infty\sum_{j=0}^\infty a_ib_jx^{i+j}
$$</span>
is relatively intuitive. I guess a rather simple and informal proof would consist in expanding a few terms and recognize the emerging pattern. For example,
<span class="math-container">$$
(a_0+a_1x+a_2x^2+\cdots)(b_0+b_1x+b_2x^2+\cdots)\\
=a_0b_0+a_0b_1x+a_0b_2x^2+\cdots+a_1b_0x+a_1b_1x^2+a_1b_2x^3+\cdots
$$</span>
and recognize this as a double summation. However, I fear some of my students might be confused by it. Is there a more straightforward way to argue in order to justify the intuition behind such a simplification?</p>
| Hagen von Eitzen | 39,174 | <p>There's no intuition, it's just
<span class="math-container">$$\left(\sum_{i=0}^\infty a_ix^i\right)\cdot c=\sum_{i=0}^\infty a_ix^ic $$</span>
with <span class="math-container">$c:=\sum_{j=0}^\infty b_jx^j$</span>, i.e.,
<span class="math-container">$$\left(\sum_{i=0}^\infty a_ix^i\right)\cdot \left(\sum_{j=0}^\infty b_jx^j\right)=\sum_{i=0}^\infty \left(a_ix^i\sum_{j=0}^\infty b_jx^j\right) $$</span>
followed by
<span class="math-container">$$c\cdot \left(\sum_{j=0}^\infty b_jx^j\right)=\sum_{j=0}^\infty cb_jx^j $$</span>
with <span class="math-container">$c=a_ix^i$</span>, i.e.,
<span class="math-container">$$a_ix^i\sum_{j=0}^\infty b_jx^j = \sum_{j=0}^\infty a_ib_jx^{i+j} $$</span>
and therefore, after summation,
<span class="math-container">$$\left(\sum_{i=0}^\infty a_ix^i\right)\cdot \left(\sum_{j=0}^\infty b_jx^j\right)=
\sum_{i=0}^\infty\sum_{j=0}^\infty a_ib_jx^{i+j}$$</span></p>
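<p><em>(Added check, not in the original answer.)</em> The identity can be spot-checked on truncations: multiply two degree-<span class="math-container">$N$</span> polynomials and compare with the double sum, where the finite sums make everything exact:</p>

```python
import sympy as sp

x = sp.symbols('x')
N = 6
a = [sp.Integer(i + 1) for i in range(N)]        # arbitrary stand-in coefficients a_i
b = [sp.Integer(2*i - 3) for i in range(N)]      # arbitrary stand-in coefficients b_j

f = sum(a[i]*x**i for i in range(N))
g = sum(b[j]*x**j for j in range(N))
double_sum = sum(a[i]*b[j]*x**(i + j) for i in range(N) for j in range(N))

residual = sp.expand(f*g - double_sum)           # should be identically zero
```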
|
1,577,838 | <p>Let $p$ and $q$ be primes such that $p=4q+1$. Then $2$ is a primitive root modulo $p$.</p>
<p>Proof. </p>
<p>Note that $q\not=2$ since $4\cdot2+1=9$ is not prime. $\mathrm{ord}_p(2)\vert p-1=4q$, so $\mathrm{ord}_p(2)=1,\;2,\;4,\;q,\;2q,\;\mathrm{or}\;4q$.</p>
<p>Clearly $\mathrm{ord}_p(2) \not= 1$, and $\mathrm{ord}_p(2)\not=2$ since $4\equiv1(\text{mod }p) \implies p=3$ but $3\not=4q+1$ for any positive integer $q$. Also $\mathrm{ord}_p(2)\not=4$ because $2^4=16\equiv1(\text{mod }p)\implies p=3 \text{ or } 5$. It has been shown that $p\not=3$ and $p=5\implies q=1$ which is not prime.</p>
<p>Suppose $\mathrm{ord}_p(2)=q$. Then $2^q\equiv1(\text{mod }p)$. Let $g$ be a primitive root modulo $p$, so that $2\equiv g^i(\text{mod }p)$ for some $i\in\mathbb{Z}$. Then $$g^{iq}\equiv1(\text{mod }p)\implies p-1\vert iq\implies iq=k(p-1)=4kq\implies i=4k$$ for some $k\in\mathbb{Z}$. So $2\equiv g^{4k}(\text{mod }p)$ and $2$ is a square modulo $p$, which means $p\equiv\pm1(\text{mod }8)$. Hence, either $8\vert p-1$ or $8\vert p+1$. If $8\vert p-1$, then $p-1=4q=8l\implies q=2l$ for some $l\in\mathbb{Z}$, so $q$ is even, which is impossible since $q\not=2$. If instead $8\vert p+1$, then $p+1=4q+2=8l\implies2q+1=2l$ for some $l\in\mathbb{Z}$, which is impossible. Thus $\mathrm{ord}_p(2)\not=q$.</p>
<p>Suppose $\mathrm{ord}_p(2)=2q$. Then $2^{2q}\equiv1(\text{mod }p)$. Let $g$ be a primitive root modulo $p$, so that $2\equiv g^i(\text{mod }p)$ for some $i\in\mathbb{Z}$. Thus $$g^{2iq}\equiv1(\text{mod }p)\implies p-1\vert 2iq\implies2iq=4kq\implies i=2k$$ for some $k\in\mathbb{Z}$, so $2$ is a square modulo $p$, which has been shown to be false. Therefore $\mathrm{ord}_p(2)\not=2q$.</p>
<p>Hence $\mathrm{ord}_p(2)=4q=p-1$ and 2 is a primitive root modulo $p$. $\square$</p>
<p>I feel confident that my proof is correct, mostly because I cannot find any obvious errors. Are there any major errors in the proof? If not, are there any details I should have included? For example, should I have specified that $1\leq i\leq p-1$, or was I okay to be lazy there? Is there anything that was unnecessary to include in the proof? Does it need to be 'cleaned up?' Thank you for your time.</p>
| Dylan Weil | 499,493 | <p>I'm not certain that this proof is actually valid. Saying that $g^{2iq} \equiv 1 \Rightarrow (p-1)|2iq$ seems to assume that $O_p(2) = p - 1$ which is what you're trying to prove. If this wasn't being implicitly assumed, there'd be no guarantee even that $(p-1) \leq 2iq$.</p>
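<p><em>(Added check, not part of the thread.)</em> Whatever the status of the proof, the claim itself is easy to test numerically: for primes <span class="math-container">$q$</span> with <span class="math-container">$p=4q+1$</span> also prime, the multiplicative order of <span class="math-container">$2$</span> mod <span class="math-container">$p$</span> should equal <span class="math-container">$p-1$</span>:</p>

```python
def multiplicative_order(a, p):
    """Smallest k >= 1 with a^k = 1 (mod p); assumes gcd(a, p) = 1."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# All primes q < 200 with p = 4q + 1 prime; 2 should be a primitive root mod p.
cases = [(q, 4*q + 1) for q in range(2, 200)
         if is_prime(q) and is_prime(4*q + 1)]
results = all(multiplicative_order(2, p) == p - 1 for q, p in cases)
```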
|
1,331,850 | <p>I haven't done something like this in a long time. How do I set something like this up? Can someone help me with the beginning or give me some direction?</p>
<p><img src="https://i.stack.imgur.com/jluer.png" alt="enter image description here">
<img src="https://i.stack.imgur.com/dv5yu.png" alt="enter image description here"></p>
| quazi5 | 249,373 | <p>Notice that your vector field is conservative. You can then calculate the potential function by finding the integral of F(x,y); in this case V = (1/2)x^2 + (1/2)y^2 + 2xy. Then plug in the end point and the beginning point, t = 3 and t = 0 respectively. This leaves us with V(c(3)) - V(c(0)) = V(4,6) - V(1,1) = [8+18+48] - [1/2 + 1/2 + 2] = 71. We can shorthand it with just the endpoints because of the properties of a conservative vector field. </p>
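<p><em>(Added check.)</em> The problem itself is only in the images, but taking the field implied by the stated potential, <span class="math-container">$F(x,y)=(x+2y,\,y+2x)$</span> (an assumption), the endpoint computation can be verified with SymPy:</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
V = sp.Rational(1, 2)*x**2 + sp.Rational(1, 2)*y**2 + 2*x*y
F = (sp.diff(V, x), sp.diff(V, y))   # gradient field: (x + 2y, y + 2x)

# Endpoints c(0) = (1, 1) and c(3) = (4, 6), as in the answer.
work = V.subs({x: 4, y: 6}) - V.subs({x: 1, y: 1})
```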
|
607,044 | <p>I'm looking at some work with Combinatorial Game Theory and I have currently got:
(P-Position is previous player win, N-Position is next player win)</p>
<p>Every Terminal Position is a P-Position,</p>
<p>For every P-Position, any move will result in a N-Position,</p>
<p>For every N-Position, there exists a move to result in a P-Position.</p>
<p>These I am okay with; the problem comes when working with the Sprague-Grundy function
<span class="math-container">$g(x)=\operatorname{mex}\{g(y):y \in F(x)\}$</span>, where <span class="math-container">$F(x)$</span> is the set of possible moves from <span class="math-container">$x$</span> and <span class="math-container">$\operatorname{mex}$</span> is the minimum excluded natural number.</p>
<p>I can see every Terminal Position has SG Value 0, these have x=0, and F(0) is the empty set.</p>
<p>The problem comes in trying to find a way to prove the remaining two conditions for positions, can anyone give me a hand with these?</p>
| sapta | 173,460 | <p>The sequence <span class="math-container">$U_n$</span> is strictly decreasing,
as <span class="math-container">$U_{n+1}-U_n=\frac{1}{n+1}-\log (1+\frac{1}{n})<0$</span>, which can be checked by comparing with the Riemann integral of the function <span class="math-container">$f(x)=\frac{1}{x}$</span>.
Now, again using the Riemann integral, we can show that
<span class="math-container">$\frac{1}{n}>\log(n+1)-\log(n)$</span>, so
<span class="math-container">$$\sum_{k=1}^{n}\frac{1}{k}>(\log 2-\log 1)+(\log 3-\log 2)+(\log 4-\log 3)+....+(\log(n+1)-\log n )=\log(1+n)$$</span>
<span class="math-container">$$U_n>\log(1+n)-\log n>0;$$</span>
so <span class="math-container">$U_n$</span> is decreasing and bounded below; hence <span class="math-container">$U_n$</span> converges.</p>
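<p><em>(Added check; assuming <span class="math-container">$U_n=\sum_{k=1}^n\frac1k-\log n$</span>, as the inequalities suggest.)</em> Monotonicity and positivity are easy to verify numerically; the limit is the Euler–Mascheroni constant <span class="math-container">$\gamma\approx0.5772$</span>:</p>

```python
import math

def U(n):
    # U_n = (1 + 1/2 + ... + 1/n) - log n
    return sum(1.0/k for k in range(1, n + 1)) - math.log(n)

values = [U(n) for n in range(1, 2001)]
decreasing = all(a > b for a, b in zip(values, values[1:]))
positive = all(v > 0 for v in values)
```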
|
702,879 | <p>I am studying exponentials from MacLane-Moerdijk's book, "Sheaves in geometry and Logic". I do not understand the following: Induced by the product-exponent adjunction, consider the bijection $$\hom(Y\times X,Z)\to\hom(Y,Z^X)\;\;\;\;\;\;(\star)$$
They say: The existence of the above adjunction can be stated in elementary terms (i.e., without using Hom-sets). For, set $Y=Z^X$ in $(\star)$; The identity arrow $1:Z^X\to Z^X$ on the right in $(\star)$ then corresponds, under the adjunction, to an arrow $e:Z^X\times X\to Z$. <em>The bijection $f\mapsto f'$ of $(\star)$, by naturality, now becomes the statement that to each $f:Y\times X\to Z$ there is a unique $f':Y\to Z^X$ such that the diagram</em> </p>
<p><img src="https://i.stack.imgur.com/xGHJ4.jpg" alt="enter image description here"></p>
<p><em>commutes</em>.</p>
<p>I do not understand how the definition of naturality gives the above statement. Naturality, if I am not mistaken, says (naturality in $Y$, if this is what is meant) that if $$\alpha_Y:\hom(Y\times X,Z)\to\hom(Y,Z^X) $$ then for all $g:Y\to W$,$$(-\circ g)\circ \alpha_W=\alpha_Y\circ(-\circ (g\times1)) $$
How does that give the statement? I am sure I am doing something wrong. Can anyone clarify, please?</p>
| user21929 | 133,385 | <p>The naturality of $\alpha_Y$ in $Y$ means that: for any $Y$, for any $W$, for any $g : Y \rightarrow W$, for any $h: W \times X \rightarrow Z$, we have $\alpha_{W}(h) \circ g = \alpha_Y(h \circ (g \times id_X))$, hence ${\alpha_{Y}}^{-1}(\alpha_{W}(h) \circ g) = h \circ (g \times id_X)$. With $W = Z^X$ and $h = {\alpha_{Z^X}}^{-1}(id_{Z^X}) = e$, we obtain ${\alpha_{Y}}^{-1}(g) = e \circ (g \times id_X)$. </p>
<p>I show the existence of $f'$: let $f: Y \times X \rightarrow Z$; we set $g = \alpha_Y(f) = f'$.</p>
<p>I show the unicity of $f'$: let $f'_1, f'_2: Y \rightarrow Z^X$ such that $e \circ (f'_1 \times id_X) = e \circ (f'_2 \times id_X)$; we have ${\alpha_Y}^{-1}(f'_1) = {\alpha_Y}^{-1}(f'_2)$, hence $f'_1 = f'_2$.</p>
|
3,344,017 | <p>Solve in positive integers:
<span class="math-container">$$
y^3 - x^3 = z^2,$$</span>
where <span class="math-container">$\gcd (x, y, z)=1$</span>.</p>
| Richard Jensen | 658,583 | <p>Let me stress that the way you show set equality is almost always by the following method:</p>
<p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be sets. Let <span class="math-container">$a \in A$</span>, and <span class="math-container">$b \in B$</span>. If you can show that <span class="math-container">$a \in B$</span> and <span class="math-container">$b \in A$</span>, then <span class="math-container">$A = B$</span>, since <span class="math-container">$a$</span> and <span class="math-container">$b$</span> were chosen arbitrarily. With that in mind, here is how to solve your question with this method:</p>
<p>Let <span class="math-container">$x = (a,b) \in (\mathbb{R}-\mathbb{Z})\times \mathbb{N}$</span>. Since <span class="math-container">$(\mathbb{R}-\mathbb{Z}) \subset \mathbb{R}$</span>, we have that <span class="math-container">$a \in \mathbb{R}$</span> and <span class="math-container">$b \in \mathbb{N}$</span>, so <span class="math-container">$x \in \mathbb{R} \times \mathbb{N}$</span>. Since <span class="math-container">$a \notin \mathbb{Z}$</span>, we have that <span class="math-container">$x = (a,b) \notin (\mathbb{Z} \times\mathbb{N})$</span>, and we conclude that <span class="math-container">$x \in (\mathbb{R} \times \mathbb{N}) - (\mathbb{Z} \times \mathbb{N})$</span>. </p>
<p>For the other way around, let <span class="math-container">$x = (a,b) \in (\mathbb{R} \times \mathbb{N}) - (\mathbb{Z} \times \mathbb{N})$</span>, and do similar procedures as above.</p>
|
3,344,017 | <p>Solve in positive integers:
<span class="math-container">$$
y^3 - x^3 = z^2,$$</span>
where <span class="math-container">$\gcd (x, y, z)=1$</span>.</p>
| J.G. | 56,861 | <p>To prove <span class="math-container">$(A\setminus B)\times C=(A\times C)\setminus(B\times C)$</span>, note both sides have only ordered pairs as elements, so we just need the following: <span class="math-container">$$(x,\,y)\in(A\setminus B)\times C\iff x\in A\setminus B\land y\in C\iff x\in A\land x\notin B\land y\in C\\\iff (x\in A\land y\in C)\land\neg(x\in B\land y\in C)\\\iff((x,\,y)\in A\times C)\land((x,\,y)\notin B\times C)\iff(x,\,y)\in(A\times C)\setminus(B\times C).$$</span></p>
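<p><em>(Added check.)</em> The identity <span class="math-container">$(A\setminus B)\times C=(A\times C)\setminus(B\times C)$</span> can also be brute-forced on small finite sets (the sets here are arbitrary):</p>

```python
from itertools import product

A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {'u', 'v'}

lhs = set(product(A - B, C))                      # (A \ B) x C
rhs = set(product(A, C)) - set(product(B, C))     # (A x C) \ (B x C)
```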
|
69,472 | <blockquote>
<p><strong>Theorem 1</strong><br>
If $g \in C[a,b]$ and $g(x) \in [a,b]$ $\forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br>
If, in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
then the fixed point in $[a,b]$ is unique. </p>
<p><strong>Fixed-point Theorem</strong>
Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
Then, for any number $p_0$ in $[a,b]$, the sequence defined by
$$p_n = g(p_{n-1}), n \geq 1$$
converges to the unique fixed-point in $[a,b]$</p>
</blockquote>
<p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p>
<blockquote>
<p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p>
</blockquote>
<p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder is there a general way to find the interval of convergence rather trial and error? Thank you.</p>
| Chris Taylor | 4,873 | <p>The reason that the iteration <span class="math-container">$x\leftarrow \tfrac{1}{2}(x+3/x)$</span> converges so rapidly to <span class="math-container">$\sqrt{3}$</span> is because it is derived from Newton's method. Newton's method says that to find a root of a function <span class="math-container">$f(x)$</span>, we pick a starting point and repeat the iteration</p>
<p><span class="math-container">$$x \leftarrow x - \frac{f(x)}{f'(x)}$$</span></p>
<p>until we have achieved our desired accuracy. As long as certain conditions on <span class="math-container">$f$</span> are satisfied (its derivative doesn't vanish, for example) and our initial guess is close enough to the root, this will always converge to the correct answer, and moreover it exhibits quadratic convergence, which means that you approximately double the number of decimal places of accuracy after each step of the iteration.</p>
<p>To approximate the square root of a number <span class="math-container">$c$</span>, we need a function whose root is the square root of <span class="math-container">$c$</span>. A simple choice is</p>
<p><span class="math-container">$$f(x) = x^2 - c$$</span></p>
<p>The Newton iteration which finds the root of this equation is</p>
<p><span class="math-container">$$x \leftarrow x - \frac{x^2 - c}{2x}$$</span></p>
<p>which you can rearrange to give</p>
<p><span class="math-container">$$x \leftarrow \frac{1}{2} \left( x + \frac{c}{x}\right)$$</span></p>
<p>which, if you take <span class="math-container">$c=3$</span>, is exactly the iteration you asked about in your question.</p>
|
69,472 | <blockquote>
<p><strong>Theorem 1</strong><br>
If $g \in C[a,b]$ and $g(x) \in [a,b]$ $\forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br>
If, in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
then the fixed point in $[a,b]$ is unique. </p>
<p><strong>Fixed-point Theorem</strong>
Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
Then, for any number $p_0$ in $[a,b]$, the sequence defined by
$$p_n = g(p_{n-1}), n \geq 1$$
converges to the unique fixed-point in $[a,b]$</p>
</blockquote>
<p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p>
<blockquote>
<p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p>
</blockquote>
<p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder is there a general way to find the interval of convergence rather trial and error? Thank you.</p>
| Ragib Zaman | 14,657 | <p>As others have already said, your first equation is derived from Newton's method. The idea of it is very simple, and it answers your last question. Basically, to approximate the location of a root of a function, we approximate the function locally by its tangent, find the tangent's root instead, and that value is a decent approximation for the location of the desired root. It is a good exercise to use the idea to derive the formula for Newton's method. </p>
<p>However, we can view the equation produced for square roots in a more elementary light. If $x_n>0$ is a decent approximation for $\sqrt{n}$, then $$ \frac{ x_n + n/x_n }{2} $$ will be even better. Why? If $x_n $ is an approximation for $\sqrt{n}$ slightly too small (big), then $ n/x_n$ is an approximation for $\sqrt{n}$ slightly too big (small) - and then we average them to get somewhere closer in between. I still remember from all the calculations I used to do with this that $19601/13860$ is a good approximation for $\sqrt{2}$.</p>
|
1,008,610 | <blockquote>
<p>If <span class="math-container">$a$</span> is a group element, prove that <span class="math-container">$a$</span> and <span class="math-container">$a^{-1}$</span> have the same order.</p>
</blockquote>
<p>I tried doing this by contradiction.</p>
<p>Assume <span class="math-container">$|a|\neq|a^{-1}|$</span>.</p>
<p>Let <span class="math-container">$a^n=e$</span> for some <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$(a^{-1})^m=e$</span> for some <span class="math-container">$m\in \mathbb{Z}$</span>, and we can assume that <span class="math-container">$m < n$</span>.</p>
<p>Then <span class="math-container">$e= e*e = (a^n)((a^{-1})^m) = a^{n-m}$</span>. However, <span class="math-container">$a^{n-m}=e$</span> implies that <span class="math-container">$n$</span> is not the order of <span class="math-container">$a$</span>, which is a contradiction and <span class="math-container">$n=m$</span>.</p>
<p>But I realized this doesn’t satisfy the condition if <span class="math-container">$a$</span> has infinite order. How do I prove that piece?</p>
| André Nicolas | 6,312 | <p>Suppose that $a$ has infinite order. We show that $a^{-1}$ cannot have finite order. Suppose to the contrary that $(a^{-1})^m=e$ for some positive integer $m$. We have by repeated application of associativity that
$$a^m (a^{-1})^m=e.$$
It follows that $a^m=e$.</p>
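<p><em>(Added check, not part of the answer.)</em> The statement can be sanity-checked in a concrete finite group, e.g. the multiplicative group mod a prime, where <span class="math-container">$a^{-1}\equiv a^{p-2}\pmod p$</span> by Fermat's little theorem:</p>

```python
def order(a, p):
    """Order of a in the multiplicative group mod prime p."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 101
# Every nonzero a mod p should have the same order as its inverse.
checks = all(order(a, p) == order(pow(a, p - 2, p), p) for a in range(1, p))
```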
|
1,008,610 | <blockquote>
<p>If <span class="math-container">$a$</span> is a group element, prove that <span class="math-container">$a$</span> and <span class="math-container">$a^{-1}$</span> have the same order.</p>
</blockquote>
<p>I tried doing this by contradiction.</p>
<p>Assume <span class="math-container">$|a|\neq|a^{-1}|$</span>.</p>
<p>Let <span class="math-container">$a^n=e$</span> for some <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$(a^{-1})^m=e$</span> for some <span class="math-container">$m\in \mathbb{Z}$</span>, and we can assume that <span class="math-container">$m < n$</span>.</p>
<p>Then <span class="math-container">$e= e*e = (a^n)((a^{-1})^m) = a^{n-m}$</span>. However, <span class="math-container">$a^{n-m}=e$</span> implies that <span class="math-container">$n$</span> is not the order of <span class="math-container">$a$</span>, which is a contradiction and <span class="math-container">$n=m$</span>.</p>
<p>But I realized this doesn’t satisfy the condition if <span class="math-container">$a$</span> has infinite order. How do I prove that piece?</p>
| coreyman317 | 525,188 | <p>It seems a more straightforward solution exists?</p>
<p>If <span class="math-container">$g$</span> has infinite order then so does <span class="math-container">$g^{-1}$</span> since otherwise, for some <span class="math-container">$m\in\mathbb{Z}^+$</span>, we have <span class="math-container">$(g^{-1})^m=e=(g^m)^{-1}$</span>, which implies <span class="math-container">$g^m=e$</span> since the only element whose inverse is the identity is the identity. This contradicts that <span class="math-container">$g$</span> has infinite order, so <span class="math-container">$g^{-1}$</span> must have infinite order.</p>
<p>If <span class="math-container">$g$</span> has finite order <span class="math-container">$n$</span>, then by existence of inverses in a group <span class="math-container">$$g^n=e\iff$$</span> <span class="math-container">$$g^n \cdot (g^{-1})^n=e\cdot(g^{-1})^n\iff$$</span> <span class="math-container">$$g^n\cdot g^{-n}=(g^{-1})^n\iff$$</span> <span class="math-container">$$ e = (g^{-1})^n$$</span> This implies <span class="math-container">$|g^{-1}|\leq n$</span>.</p>
<p>If <span class="math-container">$|g^{-1}|<n$</span>, say <span class="math-container">$m$</span>, then <span class="math-container">$(g^{-1})^m=e=(g^m)^{-1}\implies g^m=e$</span>, which contradicts that <span class="math-container">$|g|=n>m$</span>. So <span class="math-container">$|g^{-1}|=n$</span> if <span class="math-container">$|g|=n$</span>.</p>
|
1,633,722 | <p>Consider the family of $n \times n$ real matrices $A$, for which there is a $n \times n$ real matrix $B$ with $AB-BA=A$. How large can the rank of a matrix in this family be?</p>
<h2>Motivation</h2>
<p>Prasolov's book contains an exercise about proving that if $A,B$ are matrices with $AB-BA=A$ then $A$ cannot be invertible. (It is easy to prove this by multiplying both sides of the equation by $A^{-1}$ assuming it exists and then taking trace.) </p>
<p>I tried to come up with concrete examples of such matrices $A,B$ with $A$ having as large a rank as possible. By restricting $B$ to diagonal matrices one comes up with a very simple criterion and it is easy to construct matrices with rank $\lfloor n/2 \rfloor$. For example for $ n = 4$ and for $ad - bc \neq 0$ we have
$$A = \begin{pmatrix} 0 & 0 & a & b \\ 0 & 0 & c & d \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\0 & 0 & 0 & 1 \end{pmatrix}.$$ </p>
| Balloon | 280,308 | <p>You can compute the determinant, see that
$$\begin{vmatrix}1&5&0\\2&1&-2\\0&1&3\end{vmatrix}=5-2\cdot 15=-25\neq 0,$$
and so $(x,y,c)$ is a basis of $\mathbb{R}^3,$ and thus $c\not\in\mathrm{span}(x,y).$ </p>
<p>The error would be that you have written $(2,0,1)$ instead of $(1,2,0),$ as Aurey told you.</p>
<hr>
<p><strong>Edit</strong> :</p>
<p>You can first note that $x,y$ is a free family, and that</p>
<p>$$\begin{vmatrix}2&5&0\\0&1&-2\\1&1&3\end{vmatrix}=2\cdot5-10=0,$$
so the only remaining possibility is that $c\in\mathrm{span}(x,y).$</p>
|
1,633,722 | <p>Consider the family of $n \times n$ real matrices $A$, for which there is a $n \times n$ real matrix $B$ with $AB-BA=A$. How large can the rank of a matrix in this family be?</p>
<h2>Motivation</h2>
<p>Prasolov's book contains an exercise about proving that if $A,B$ are matrices with $AB-BA=A$ then $A$ cannot be invertible. (It is easy to prove this by multiplying both sides of the equation by $A^{-1}$ assuming it exists and then taking trace.) </p>
<p>I tried to come up with concrete examples of such matrices $A,B$ with $A$ having as large a rank as possible. By restricting $B$ to diagonal matrices one comes up with a very simple criterion and it is easy to construct matrices with rank $\lfloor n/2 \rfloor$. For example for $ n = 4$ and for $ad - bc \neq 0$ we have
$$A = \begin{pmatrix} 0 & 0 & a & b \\ 0 & 0 & c & d \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\0 & 0 & 0 & 1 \end{pmatrix}.$$ </p>
| Domenico Vuono | 227,073 | <p>If $c$ is in the span of $x$ $y$ then $$c=a \begin{bmatrix}
2 \\
0 \\
1 \\
\end{bmatrix}+b \begin{bmatrix}
5 \\
1 \\
1 \\
\end{bmatrix}$$ you have to solve a linear system to determine $a$ and $b$. If the linear system has no solutions, then $c$ isn't in the span of $x$ and $y$.</p>
|
221,667 | <p>I'm taking a second course in linear algebra. Duality was discussed in the early part of the course. But I don't see any significance of it. It seems to be an isolated topic, and it hasn't been mentioned anymore. So what's exactly the point of duality?</p>
| Suresh Venkat | 865 | <p>This will probably not be apparent in a linear algebra course, but duality is the workhorse of optimization. Roughly speaking, you can often frame an optimization problem as trying to minimize some quantity subject to linear constraints (that form a matrix). Then in order to solve this problem you usually need to understand the "dual" problem, which is another optimization problem whose constraint matrix is the transpose of the original constraint matrix. It is by understanding the primal and dual spaces simultaneously that you can prove that you have an optimal (or near optimal) solution for your problem. </p>
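<p><em>(Added sketch; the numbers are made up for illustration.)</em> The key inequality behind this is <em>weak duality</em>: for the primal <span class="math-container">$\max\, c^Tx$</span> s.t. <span class="math-container">$Ax\le b,\ x\ge0$</span> and its dual <span class="math-container">$\min\, b^Ty$</span> s.t. <span class="math-container">$A^Ty\ge c,\ y\ge0$</span>, any feasible pair satisfies <span class="math-container">$c^Tx\le b^Ty$</span>:</p>

```python
import numpy as np

# Primal: maximize c.x  subject to  A x <= b, x >= 0
# Dual:   minimize b.y  subject to  A^T y >= c, y >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([1.0, 1.0])   # primal feasible: A x = (3, 4) <= b
y = np.array([0.5, 1.0])   # dual feasible: A^T y = (3.5, 2.0) >= c
assert np.all(A @ x <= b) and np.all(A.T @ y >= c)

gap = b @ y - c @ x        # weak duality guarantees gap >= 0
```

<p>Here the gap is positive, so neither point is optimal; at an optimal primal/dual pair the gap closes to zero (strong duality).</p>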
|
1,640,373 | <p>The definition of a topological space is a set with a collection of subsets (the topology) satisfying various conditions. A metric topology is given as the set of open subsets with respect to the metric. But if I take an arbitrary topology for a metric space, will this set coincide with the metric topology? </p>
<p>I'm trying to justify why we call the elements of a topology "open". If my above question is true, then at least in a metric space, the set of open sets is equivalent to the topology of the metric space.
So am I right in thinking that when we remove the metric, we are generalising this equivalence by defining the open sets as those that satisfy the conditions of a topology?</p>
| Jack's wasted life | 117,135 | <p>You don't even need the matrix representation. T will map a vector to $(0,0,0)$ iff its $y$-component is zero. So your basis is the correct one.</p>
|
2,479,305 | <p>I'm studying first-order logic and I saw the textbook put the equality symbol $=$ as a logical symbol.</p>
<p>It's not a logical connective symbol, so $=xy$ is an atomic formula.
<a href="https://en.wikipedia.org/wiki/Atomic_formula" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Atomic_formula</a></p>
<p>However, the textbook explained it in such a way:" For example, $=v_1v_2$ is an atomic formula, since $=$ is a two-place predicate symbol and...".</p>
<p>But $=$ was listed and was obviously a Logical symbol ( a "superset" of equality symbol), not a parameter( a "superset" of Predicate symbols ).</p>
<p>How could $=$ be both a logical symbol and predicate symbol?</p>
| Bram28 | 256,001 | <p>First, $=$ is a predicate symbol since it says something about two objects, namely that they are identical. Out of all predicate symbols, though, it is the only 'logical' one, so it is indeed not like any other predicate, nor is it like any other logical symbol.</p>
<p>OK, but why is it a logical symbol? One answer is that $=$ is a logical symbol in the sense that in logic it has a <em>fixed</em> meaning (identity!), just like the other logical symbols like $\land$, $\neg$, $\forall$ have a fixed meaning. Whereas the non-logical symbols, such as atomic propositions like $P$ and $Q$, or predicates like $P(x)$ or $R(x,y)$ do not have a fixed meaning: indeed, one can interpret those symbols in any way one wants, which is exactly what formal <em>interpretations</em> do.</p>
<p>... but that doesn't really answer the question ... we can still ask <em>why</em> $=$ is treated as a logical symbol ... why is its meaning fixed, unlike other predicate symbols?</p>
<p>Well, conceptually, identity is 'logical' in the sense that no matter what context or subject or domain one is talking about, identity is just that: identity. That is, in any domain, it will be true that every object is identical to itself, and that if $a=b$, and $a$ has some property $P$, then $b$ will have that property as well (of course! $a$ and $b$ are the very same object!). So, a statement like $a=a$ is <em>logically</em> true; it is true no matter how crazy of a world one imagines; no need to even know what $a$ stands for.</p>
<p>Contrast that with a statement like $1 < 2$: this is 'merely' true in the sense that we mathematically defined $1$, $2$, and $<$ in a certain way ... that is, it is a statement about mathematically defined objects, and its truth is therefore relative to a specific domain, and therefore not a pure <em>logical</em> truth.</p>
|
929,243 | <p>Hey guys, I just need help finishing this simplification. Sorry if I didn't type the symbols correctly.</p>
<p>My solution so far:
$$
(¬p \vee ¬(p\wedge¬q)) \wedge (¬p \vee ¬q)≡
(¬p \vee (¬p \vee q)) \wedge (¬p \vee ¬q)≡
$$
at this point I'm stuck. Is there any way I can take care of the not-$p$ $\vee$ not-$p$?</p>
<p>Thanks</p>
| Adriano | 76,987 | <p>Yes, that would be Idempotent Law:
\begin{align*}
(\neg p \lor \neg p \lor q) \land (\neg p \lor \neg q)
&\equiv (\neg p \lor q) \land (\neg p \lor \neg q) \\
&\equiv \neg p \lor (q \land \neg q) \\
&\equiv \neg p \lor \bot \\
&\equiv \neg p \\
\end{align*}</p>
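<p>For a mechanical double-check, the whole simplification can be verified by brute-forcing the truth table (a quick sketch; the function name is arbitrary):</p>

```python
from itertools import product

def lhs(p, q):
    # (not p or not(p and not q)) and (not p or not q)
    return (not p or not (p and not q)) and (not p or not q)

rows = list(product([False, True], repeat=2))
equivalent = all(lhs(p, q) == (not p) for p, q in rows)  # matches the final line, not-p
```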
|
2,982,978 | <p><strong>Exercise :</strong></p>
<blockquote>
<p>Let <span class="math-container">$S: l^1 \to l^1$</span> be the right-shift operator :
<span class="math-container">$$S(x_1,x_2,\dots) = (0,x_1,x_2,\dots)$$</span>
Prove that <span class="math-container">$S$</span> is bounded and find its norm.</p>
</blockquote>
<p><strong>Attempt :</strong></p>
<p>The space <span class="math-container">$l^1$</span> is : <span class="math-container">$$l^1 = \{x=(x_n) : \sum_{n=1}^\infty |x_n| < + \infty\}$$</span></p>
<p>To show that the operator <span class="math-container">$S$</span> is bounded, I must show that :</p>
<p><span class="math-container">$$\exists \; M>0 : \|Sx\|\leq M\|x\|$$</span></p>
<p>But I can't really see how to proceed on this particular proof without having any knowledge of the norm that should be used. Should the norm of the <span class="math-container">$l^1$</span> space be used? If so, what does "find the norm of the operator S" mean? </p>
<p>I would <strong>really</strong> appreciate any tips/solutions or clarifications regarding this particular exercise.</p>
<p><strong>Note :</strong> I have <strong>NOT</strong> been introduced to isometries in my Functional Analysis class yet, so I am looking for an elementary bounded operator approach.</p>
| Matematleta | 138,929 | <p>From scratch: </p>
<p><span class="math-container">$\vec x=(x_1,x_2,\cdots )$</span> and <span class="math-container">$S(\vec x)=(0,x_1,x_2,\cdots)$</span> so </p>
<p><span class="math-container">$\|\vec x\|=\sum^{\infty}_{i=1}|x_i|$</span> and <span class="math-container">$\|S(\vec x)\|=\sum^{\infty}_{i=1}|x_i|=\|\vec x\|.$</span></p>
<p>Now, since the ratio equals <span class="math-container">$1$</span> for every <span class="math-container">$\vec x \neq 0$</span>, <span class="math-container">$\|S\|=\sup_{\vec x \neq 0}\frac{\|S(\vec x)\|}{\|\vec x\|}=1$</span>.</p>
<p>Therefore, <span class="math-container">$\|S\|=1$</span> and, in particular <span class="math-container">$S$</span> is bounded.</p>
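<p>As a finite sanity check (truncating an $l^1$ sequence to its first 50 terms, chosen arbitrarily for illustration), the right shift does not change the sum of absolute values, which is why the operator norm is exactly $1$:</p>

```python
# A sample absolutely summable sequence, truncated to 50 terms
x = [((-1) ** n) / (n + 1) ** 2 for n in range(50)]
Sx = [0.0] + x  # right shift: prepend a zero

norm_x = sum(abs(t) for t in x)
norm_Sx = sum(abs(t) for t in Sx)
ratio = norm_Sx / norm_x  # equals 1 for every nonzero x
```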
|
339,090 | <p>I would like to pose a question on a variation on the classical <a href="http://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Extensions_and_generalizations" rel="nofollow">coupon collector's problem</a>: coupon type $i$ is to be collected $k_i$ times. What is the expected stopping time or the expected number of trials needed to have collected all the sought after $(k_1,k_2,...,k_n)$ coupons?</p>
<p>We can compute this with recursion. But is there a better, more direct approach? What I am particularly interested to see is a martingale method.</p>
| Rus May | 17,853 | <p>I have wondered about this exact question and wrote up some results <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v15i1n31/pdf" rel="nofollow">here</a>. This paper gives an explicit answer to your question (and allows for non-uniform probabilities among the coupons). It does not use, as you suggested, Martingales. However, if you are able to obtain similar results using Martingales, I'd be extremely interested in seeing them.</p>
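<p>Short of a closed form, the expected stopping time is easy to estimate by Monte Carlo (a hedged sketch: the helper name and the quota vectors below are invented for illustration, and coupons are drawn uniformly):</p>

```python
import random

def trials_to_collect(quotas, rng):
    """Draw uniform coupons until coupon i has been collected quotas[i] times."""
    need = list(quotas)
    remaining = sum(need)
    draws = 0
    while remaining > 0:
        i = rng.randrange(len(need))
        draws += 1
        if need[i] > 0:
            need[i] -= 1
            remaining -= 1
    return draws

rng = random.Random(0)
n_trials = 20000
# Classical case k_i = 1 with n = 3 coupons: E[T] = 3 * (1 + 1/2 + 1/3) = 5.5
est_classical = sum(trials_to_collect((1, 1, 1), rng) for _ in range(n_trials)) / n_trials
# A generalized case, e.g. quotas (2, 1, 1), is estimated the same way
est_general = sum(trials_to_collect((2, 1, 1), rng) for _ in range(n_trials)) / n_trials
```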
|
2,294,969 | <p>I can't seem to find a path to show that:</p>
<p>$$\lim_{(x,y)\to(0,0)} \frac{x^2}{x^2 + y^2 -x}$$</p>
<p>does not exist.</p>
<p>I've already tried with $\alpha(t) = (t,0)$, $\beta(t) = (0,t)$, $\gamma(t) = (t,mt)$ and with some parabolas... they all led me to the limit being $0$ but this exercise says that there's no limit when approaching $(0,0)$. Hints? Thank you.</p>
| Michael Seifert | 248,639 | <p>Other commenters have noted that the function value can be made to approach 1 along the parabolic path $x = y^2$. In fact, the function's value along a path going to the origin can be made to approach any non-zero real number by choosing an elliptical or hyperbolic path instead. If we look at the curve
$$
y^2 - x = \lambda x^2
$$
for an arbitrary constant $\lambda \neq 0$, then as we approach the origin along this curve, the value of $f(x,y)$ will approach
$$
\lim_{(x,y) \to (0,0)} \frac{x^2}{x^2 + y^2 - x} = \lim_{(x,y) \to (0,0)} \frac{x^2}{x^2 + \lambda x^2} = \frac{1}{1 + \lambda},
$$
and this latter fraction can be set equal to any $q \in \mathbb{R} \setminus \{0,1\}$ by choosing $\lambda = \frac{1}{q} - 1$.</p>
<p>The curves themselves can easily be seen to be ellipses or hyperbolas by rearranging the above relation to read
$$
-\lambda \left( x + \frac{1}{2\lambda} \right)^2 + y^2 = -\frac{1}{4 \lambda},
$$
which is an ellipse when $\lambda < 0$ and is a hyperbola when $\lambda > 0$.
The parameterization of such a curve would be $(t, \sqrt{ \lambda t^2 + t})$; note that in the case $\lambda < 0$, we would have to restrict the range of $t$ to $[0, -1/\lambda]$.</p>
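<p>A quick numerical illustration of the path dependence (the sample points are arbitrary): with $\lambda = 1$ the function is identically $\frac{1}{2}$ along the curve $y^2 - x = x^2$, while along the $x$-axis it tends to $0$:</p>

```python
import math

def f(x, y):
    return x * x / (x * x + y * y - x)

lam = 1.0
# Along the curve y^2 - x = lam * x^2, i.e. y = sqrt(lam*t^2 + t) with x = t
vals_path = [f(t, math.sqrt(lam * t * t + t)) for t in (1e-2, 1e-4, 1e-6)]
# Along y = 0: f(t, 0) = t/(t - 1) -> 0
vals_axis = [f(t, 0.0) for t in (1e-2, 1e-4, 1e-6)]
```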
|
2,486,095 | <p>Given an acute-angled triangle $\Delta ABC$ having it's Orthocentre at $H$ and Circumcentre at $O$. Prove that $\vec{HA} + \vec{HB} + \vec{HC} = 2\vec{HO}$</p>
<p>I realise that $\vec{HO} = \vec{BO} + \vec{HB} = \vec{AO} + \vec{HA} =\vec{CO} + \vec{HC}$ which leads to $3\vec{HO} = (\vec{HA} + \vec{HB} + \vec{HC}) + (\vec{AO} + \vec{BO} + \vec{CO})$</p>
<p>How can I prove that $(\vec{AO} + \vec{BO} + \vec{CO}) = \vec{HO}$ in order to solve the problem?</p>
<p>Thank you for answering!</p>
| marty cohen | 13,079 | <p>5 is a pretty small number of iterations. Try a larger number and see what happens. Maybe 100.</p>
<p>Also, check two residuals at each iteration.</p>
|
832,715 | <blockquote>
<p>Suppose that the distribution of a random variable
$X$ is symmetric with respect to the point $x = 0$. If $\mathbb{E}(X^4)>0$ then $Var(X)$ and $Var(X^2)$ are both positive.</p>
</blockquote>
<p>How is that true? I am getting $Var(X)=\mathbb{E}(X^2)$ and $Var(X^2)=\mathbb{E}(X^4)-(\mathbb{E}(X^2))^2$, but do not know why $\mathbb{E}(X^2)>0$ & $\mathbb{E}(X^4)>(\mathbb{E}(X^2))^2.$</p>
| Avraham | 91,378 | <p>I think we can prove everything <strong>but</strong> the fact that $Var(X^2) > 0.$ We are given that $E(X) = 0$ and $E(X^4) > 0$, the question is how to show both Var$(X)$ and Var$(X^2)$ are strictly positive.</p>
<p>Knowing $E(X^4) > 0$ allows us to say that
$$
\begin{align}
\frac{\sum_{i=1}^nx_i^4}{n} &> 0\\
\Rightarrow \sum_{i=1}^nx_i^4 &> 0
\end{align}
$$</p>
<p>Now each $x_i^4$ individually must be greater than or equal to $0$ as the power is even, and there must exist <em>some</em> $x_i \neq 0$ to allow for $\sum_{i=1}^nx_i^4 > 0$. Discard all $x_i = 0$, and call the remaining observations the set ${i^*}$. The remaining $x_{i^*} \neq 0$ so $\sum_{i^*} x_{i^*}^2 > 0$ as well. All that are left are 0 observations which cannot reduce the value, so $E(X^2)$ must also be greater than 0, thus:
$$
\begin{align}
Var(X) &= E(X^2) - E(X)\\
&= E(X^2) > 0\\
&\blacksquare\; Var(X) > 0
\end{align}
$$</p>
<p>Now we get to the tricky part. We can say
$$
\begin{align}
Var(X^2) &= E((X^2)^2) - E(X^2)^2\\
&= E(X^4) - E(X^2)^2
\end{align}
$$. </p>
<p>What is $E(X^4)$? It's
$$
\begin{align}
&=\frac{\sum_{i^* = 1}^{n^*}x_{i^*}^4}{n^*}\\
&=\frac{\sum_{i^* = 1}^{n^*}\left(x_{i^*}^2\right)^2}{n^*}\\
\end{align}
$$
What is $E(X^2)^2$? It's
$$
\begin{align}
&=\left(\frac{\sum_{i^* = 1}^{n^*}x_{i^*}^2}{n^*}\right)^2\\
&=\frac{\left(\sum_{i^* = 1}^{n^*}x_{i^*}^2\right)^2}{\left(n^*\right)^2}
\end{align}
$$</p>
<p>Comparing the two is a case of <a href="https://en.wikipedia.org/wiki/Jensen%27s_inequality" rel="nofollow">Jensen's inequality</a>: since squaring is a convex function, $E(X^4) = E\left((X^2)^2\right) \geq E(X^2)^2$, so $Var(X^2) \geq 0$ always. But the inequality is strict only when $X^2$ is not (almost surely) constant, and the given hypotheses do not rule out a constant $X^2$, so I'm not sure we can always conclude $Var(X^2) > 0$.</p>
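<p>A concrete check of the caveat above (the two-point distribution is invented for illustration): for $X = \pm 1$ with probability $\frac{1}{2}$ each, which is symmetric about $0$ with $E(X^4) = 1 > 0$, one gets $Var(X) = 1 > 0$ but $Var(X^2) = 0$:</p>

```python
from fractions import Fraction

outcomes = [Fraction(-1), Fraction(1)]  # X = -1 or +1, each with probability 1/2
p = Fraction(1, 2)

def E(g):
    return sum(p * g(x) for x in outcomes)

EX = E(lambda x: x)        # 0 (symmetry)
EX2 = E(lambda x: x * x)   # 1
EX4 = E(lambda x: x ** 4)  # 1 > 0

var_X = EX2 - EX ** 2      # 1 > 0
var_X2 = EX4 - EX2 ** 2    # 0: Var(X^2) need not be positive
```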
|
194,373 | <p>Let $\Omega$ be a bounded smooth domain and define $\mathcal{C} = \Omega \times (0,\infty)$. Below, $x$ refers to the variable in $\Omega$ and $y$ to the variable in $(0,\infty)$. The map $\operatorname{tr}_\Omega:H^1(\mathcal C) \to L^2(\Omega)$ refers to the trace operator ($\operatorname{tr}_\Omega u = u(\cdot,0)$ for smooth functions). <img src="https://i.stack.imgur.com/F2lgh.png" alt="enter image description here"></p>
<p>How do I know that the constant functions are in that bigger space (let's just take $\epsilon =1$)? They obviously have finite $H^\epsilon(\mathcal{C})$ norm but that is not enough. </p>
<p>We can approximate (<a href="https://math.stackexchange.com/questions/1102898/why-does-this-completion-of-a-sobolev-space-contain-constant-functions-please-e">see this</a>) the constant function $1$ by $u_n$, where $u_n(x,y) = 1$for $y \in [0,n)$ and $u_n(x,y) = 0$ for $y \in [2n, \infty)$ and $u_n(x,y)$ linearly interpolates between $(n,2n)$. <strike>This is Cauchy with respect to the $H^\epsilon$ norm</strike> (<strong>edit: it's not Cauchy</strong>), but how to prove that $1$ is in $H^\epsilon$? I thought we could say $\lVert u_n - 1 \rVert_{H^\epsilon(\mathcal C)} \to 0$ but this is not sensible since $tr_\Omega$ is only defined for $H^1(\mathcal C)$ functions, and $1$ is not in $H^1(C)$.</p>
| Jean Van Schaftingen | 42,047 | <p>I think that the trace is <em>defined by the completion</em>. Indeed for every $u \in H^1 (\mathcal{C})$, you have $\Vert\operatorname{tr} u\Vert_{L^2 (\Omega)} \le \Vert u \Vert_{\varepsilon}$. Since $H^1 (\mathcal{C})$ is dense in $H^\varepsilon (\mathcal{C})$, the trace operator $\operatorname{tr}$ is a well-defined continuous linear operator on $H^\varepsilon (\mathcal{C})$.</p>
<p>By the way, it is the same argument that shows that $\nabla v$ is defined on $H^\varepsilon (\mathcal{C})$. A more refined argument (relying I think on the Hardy inequality) would show that the restriction on $\Omega \times [0, R]$ is well-defined from $H^\varepsilon (\mathcal{C})$ to $L^2 (\Omega \times [0, R])$.</p>
<p>Going back to the question, you just then need to observe that your approximating sequence of the constant has its traces (as traces of $H^1 (\mathcal{C})$ functions) converging in the space $L^2 (\Omega)$. That is the case since the sequence is constant in a neighbourhood of the set $\Omega \times \{0\}$.</p>
|
234,466 | <p>This is the second problem in Neukirch's Algebraic Number Theory. I did the proof but it feels a bit too slick and I feel I may be missing some subtlety, can someone check it over real quick?</p>
<p>Show that, in the ring $\mathbb{Z}[i]$, the relation $\alpha\beta =\varepsilon\gamma ^n$, for $\alpha,\beta$ relatively prime numbers and $\varepsilon$ a unit, implies $\alpha =\varepsilon '\xi ^n$ and $\beta =\varepsilon ''\eta ^n$, with $\varepsilon '$,$\varepsilon ''$ units.</p>
<p>So basically, because the Gaussian integers are a unique factorization domain and alpha and beta are relatively prime, I have the prime decomposition:</p>
<p>$\alpha = \varepsilon' p_1^{e_1}...p_r^{e_r}$</p>
<p>$\beta = \varepsilon'' p_s^{e_s}...p_y^{e_y}$</p>
<p>$\varepsilon\gamma^n = \varepsilon q_1^{nf_1}...q_k^{nf_k}$</p>
<p>And so $\alpha\beta = p_1^{e_1}...p_r^{e_r}p_s^{e_s}...p_y^{e_y}$. Where we have a one-to-one correspondence between the $p_i^{e_i}$ and the $q_i^{nf_i}$ and thus setting $p_i^{e_i} = q_i^{f_i}$, in accordance with this correspondence, we obtain our desired xi and eta. </p>
<p>Does this make sense? I never used anything specific to the Gaussian integers so if this is right then it holds for all UFDs. Thanks.</p>
| Berci | 41,488 | <p>Yes, it seems to hold in every UFD.</p>
|
145,046 | <p>I'm a first year graduate student of mathematics and I have an important question.
I like studying math, and when I attend a course I try to study in the best way possible, with different textbooks; moreover, I try to understand the concepts rather than worry about the exams. Despite this, months after such intense study I inexorably forget most of the things I have learned. For example, if I study algebraic geometry, commutative algebra or differential geometry, only the main ideas remain in my mind at the end. Conversely, when I deal with subjects such as linear algebra, real analysis, abstract algebra or topology (simpler subjects that I studied in the first or second year), I'm comfortable.
So my question is: what should remain in the mind of a student after a one-semester course? What is the point of learning and understanding many demonstrations if one then forgets them all?</p>
<p>I'm sorry for my poor english.</p>
| Asaf Karagila | 622 | <p>Usually what remains in your mind is the general idea: a vague outline of the terminology and theorems. It may sound bad, but it's fine. I can honestly say that I hardly remember anything from most courses I took, except the things I had to teach, or directly relate to my work. This includes things from a course I took the previous semester and even things from a course I am currently taking whose topic is in fact close to my own research.</p>
<p>It is not that bad. What is important is to learn how to store this data so that the next time you run into it you will immediately recognize it - or at least recognize that you should recognize it.</p>
<p>I can give an example from my own experience, after learning and forgetting most of the basic course in Galois theory, I was still able to identify a similar picture when looking at the covering spaces of a topological space and the $\pi_1(X,x_0)$ structure between them. I went to ask the professor who taught me Galois if there is any connection and indeed there was.</p>
<p>Remember that a good mathematician should be able to see analogies between theorems, and even theories. So identifying similar ideas and similar patterns is more or less essential to this point. </p>
<p>To the actual details of all the theorems, I wish I could tell you that your memory can fight the good war of remembering and win. It is not usually the case, and even friends that in our undergrad courses remembered everything in details have now - not too long into grad school - forgotten a lot of the material.</p>
<p>Let me sum up with what I think you should take from courses. You should be able at least locally (during the course and up to one semester later) be able to remember most of the proofs, or at least the theorems. Later on you need to remember the idea, and the methods used in the proofs. The methods can take you an extra mile later on when you approach proving things on your own because if you see similarities you can often use similar methods.</p>
|
1,232,036 | <p>I have a midterm tomorrow, and while studying for it I saw this question; however, I don't have any idea how to solve it. (I could not come up with a legitimate proof; all I could do was verify what the problem claims by plugging in some functions.) I will appreciate it if you can help.</p>
<p>Suppose that $ h(t)$ is continuous and bounded on $(-\infty,\infty)$ and </p>
<p>$$|h(t)| \leq M, \forall t\in (-\infty,\infty)$$</p>
<p>Show that equation
$$
y′(t) + y(t) = h(t)
$$</p>
<p>has one solution that is bounded on $(-\infty,\infty)$. Also, show that if $h(t)$ is a periodic function, then $y(t)$ is also periodic.</p>
<p>Regards,</p>
<p>Amadeus</p>
| Chappers | 221,811 | <p>Multiplying by $e^{t}$, we have
$$ (e^{t}y(t))' = e^t h(t). $$
Integrating, we have
$$ y(t) = y(0)e^{-t} + e^{-t}\int_0^{t} e^{s}h(s) \, ds $$
Now we have to find a bounded solution. Since $h(s)\geqslant-M$, we can define $H(s)=h(s)+M$ positive, and hence we can also write the solution as
$$ y(t) = y(0)e^{-t} + e^{-t}\int_0^{t} e^{s}(H(s)-M) \, ds \\
= y(0)e^{-t} + e^{-t}\int_0^{t} e^{s}H(s) \, ds + M(e^t-1)e^{-t} \\
= M+(y(0)-M)e^{-t} + e^{-t}\int_0^{t} e^{s}H(s) \, ds $$</p>
<p>Now, the integrand is positive. Let us first check what happens as $t \to \infty$. The second term dies away. The integrand is between $0$ and $2M$, so for $t>0$,
$$ y(t) \leqslant M + (y(0)-M) + 2M \int_0^t e^{s-t} \, ds = y(0)+ 2M(1-e^{-t}), $$
which is bounded. Now the problem comes, $-\infty$. As $t \to -\infty$, the integral tends to
$$ -A=\int_0^{-\infty} e^{s} H(s) \, ds = -\int_0^{\infty} e^{-s} H(-s) \, ds, $$
which converges because $H$ is bounded. Now, the obvious candidate for the bounded solution has
$$ y(0)-M-A=0, \tag{1} $$
because otherwise, there's no way that $y$ is bounded, since we'd have $y = O( e^{-t})$. Now, another way to write $y$ is
$$ y(t) = M + (y(0)-M-A)e^{-t} - e^{-t} \int_t^{-\infty} e^s H(s) \, ds $$
(I'm not convinced about my sign there, but it's irrelevant.) Now impose (1), so either way,
$$ \lvert y(-t)-M \rvert = e^{t} \int_{t}^{\infty} e^{-s} H(-s) \, ds, $$
where I've changed the sign of $t$ for convenience. All we have to do now is show that this is bounded. But $H(-s)<2M$, so the right-hand side is less than
$$ 2M \int_t^{\infty} e^{t-s} \, ds = 2M, $$
and so there is precisely one bounded solution.</p>
<hr>
<p>Right, now the periodic bit. This is obviously false for general $y$, since I can choose what I like for $y(0)$ and get a nonperiodic $e^{-t}$, so we'll have to assume that $y(-\infty)$ is finite. Therefore, take
$$ y(t) = a + e^{-t} \int_{-\infty}^t e^{s}g(s) \, ds, $$
where $g$ has integral zero over a period. (then
$$ y'(t) + y(t) = a+ e^{-t} \int_{-\infty}^t e^{s}g(s) \, ds- e^{-t} \int_{-\infty}^t e^{s}g(s) \, ds+g(t)=a+g(t), $$
so $a+g(s)=h(s)$, and this is a legitimate solution). Now suppose $g$ has period $T$. Then
$$ y(t+T)-y(t) = e^{-T} e^{-t} \int_{-\infty}^{t+T} e^{s}g(s) \, ds -e^{-t} \int_{-\infty}^t e^{s}g(s) \, ds \\
= e^{-t} \left( \int_{-\infty}^{t+T} e^{s-T} g(s) \, ds - \int_{-\infty}^t e^{s}g(s) \, ds \right) \tag{2} $$
Setting $u=s-T$ in the first integral, the upper limit becomes $t+T-T=t$, and we have
$$ \int_{-\infty}^{t} e^{u} g(u+T) \, du - \int_{-\infty}^t e^{s}g(s) \, ds $$
But $g(u+T)=g(u)$ by definition, so the bracket in (2) is zero, and $y$ is also periodic. Phew!</p>
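<p>A numerical sketch of the bounded solution (the test function $h(t) = \cos t$ and the truncation window are chosen for illustration): here $y(t) = e^{-t}\int_{-\infty}^t e^{s}h(s)\,ds$ works out to $\frac{1}{2}(\cos t + \sin t)$, which is bounded and shares the period $2\pi$ of $h$:</p>

```python
import math

def h(t):
    return math.cos(t)

def y_integral(t, window=40.0, steps=50000):
    """Trapezoid approximation of e^{-t} * integral_{t-window}^{t} e^s h(s) ds.
    The neglected tail below t - window is at most e^{-window} * max|h|."""
    a = t - window
    dt = window / steps
    total = 0.0
    for k in range(steps + 1):
        s = a + k * dt
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(s - t) * h(s)
    return total * dt

def y_closed(t):
    return 0.5 * (math.cos(t) + math.sin(t))  # solves y' + y = cos t

errs = [abs(y_integral(t) - y_closed(t)) for t in (0.0, 1.0, 2.5)]
period_gap = abs(y_integral(1.0 + 2 * math.pi) - y_integral(1.0))
```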
|
774,434 | <p><a href="https://www.wolframalpha.com/input/?i=Sum%5BBinomial%5B3n,n%5Dx%5En,%20%7Bn,%200,%20Infinity%7D%5D" rel="noreferrer">Wolfram alpha tells me</a> the ordinary generating function of the sequence $\{\binom{3n}{n}\}$ is given by
$$\sum_{n} \binom{3n}{n} x^n = \frac{2\cos[\frac{1}{3}\sin^{-1}(\frac{3\sqrt{3}\sqrt{x}}{2})]}{\sqrt{4-27x}}$$
How do I prove this?</p>
| epi163sqrt | 132,007 | <p>As was already mentioned in the comment section, the <em>Lagrange Inversion Formula</em> is an appropriate method to prove this identity. In the following I use the notation from R. Sprugnoli's (et al.) paper <a href="http://www.researchgate.net/publication/226195157_Lagrange_Inversion_When_and_How/file/e0b495230d098ec3f7.pdf">Lagrange Inversion: when and how</a>.</p>
<blockquote>
<p>Let us suppose that a formal power series $w=w(t)$ is implicitly defined by a relation $w=t\Phi(w)$, where $\Phi(t)$ is a formal power series such that $\Phi(0)\ne0$. The <em>Lagrange Inversion Formula (LIF)</em> states that:</p>
<p>$$[t^n]w(t)^k=\frac{k}{n}[t^{n-k}]\Phi(t)^n$$</p>
</blockquote>
<p>There are several variations of the LIF stated in the paper. We use in the following $G6$:</p>
<blockquote>
<p>Let $F(t)$ be any formal power series and $w=t\Phi(w)$ as before, then the following is valid:</p>
<p>\begin{align*}
[t^n]F(t)\Phi(t)^n=\left[\left.\frac{F(w)}{1-t\Phi'(w)}\right|w=t\Phi(w)\right]\tag{1}
\end{align*}</p>
</blockquote>
<p><em>Note:</em> The notation $[\left.f(w)\right|w=g(t)]$ is a linearization of $\left.f(w)\right|_{w=g(t)}$ and denotes the substitution of $g(t)$ to every occurrence of $w$ in $f(w)$ (that is, $f(g(t))$). In particular, $w=t\Phi(w)$ is to be solved in $w=w(t)$ and $w$ has to be substituted in the expression on the left of the $|$ sign.</p>
<blockquote>
<p>We prove the following identity</p>
<p>\begin{align*}
\sum_{n\ge0}\binom{3n}{n}t^n=\frac{2\cos\left[\frac{1}{3}\sin^{-1}\left(\frac{3\sqrt{3}\sqrt{t}}{2}\right)\right]}{\sqrt{4-27t}}\tag{2}
\end{align*}</p>
</blockquote>
<p>$$ $$</p>
<blockquote>
<p>Let $F(t)=1$ and $\Phi(t)=(1+t)^3$.</p>
<p>Since
$$t\Phi'(w)=3t(1+w)^2=\frac{3t\Phi(w)}{1+w}=\frac{3w}{1+w}$$
we obtain
\begin{align*}
\binom{3n}{n}&=[t^n]F(t)\Phi(t)^n=[t^n](1+t)^{3n}\\
&=[t^n]\left[\left.\frac{1}{1-t\Phi'(w)}\right|w=t\Phi(w)\right]=[t^n]\left[\left.\frac{1+w}{1-2w}\right|w=t\Phi(w)\right]\\
\end{align*}
Let
\begin{align*}
A(t):=\sum_{n\ge0}\binom{3n}{n}t^n=\left.\frac{1+w}{1-2w}\right|_{w=t\Phi(w)}
\end{align*}
Expressing $A(t)=\frac{1+w}{1-2w}$ in terms of $w$, we get
$$w=\frac{A(t)-1}{2A(t)+1}$$
Since $w=t\Phi(w)=t(1+w)^3$, we get
\begin{align*}
\frac{A(t)-1}{2A(t)+1}=t\left(1+\frac{A(t)-1}{2A(t)+1}\right)^3
\end{align*}</p>
<p>which simplifies to:</p>
<p>\begin{align*}
(4-27t)A(t)^3-3A(t)-1=0\tag{3}
\end{align*}</p>
</blockquote>
<p>In order to get the RHS of $(2)$ we first analyse the structure of $(3)$ which is</p>
<p>$$f(t)A(t)^3-3A(t)=1$$</p>
<p>with $f(t)$ linear and observe a similarity of this structure with the identity</p>
<p>$$4\cos^3{t}-3\cos{t}=\cos{3t}$$</p>
<blockquote>
<p>We use the Ansatz:</p>
<p>$$A(t) := \frac{2\cos\left(g(t)\right)}{\sqrt{4-27t}}$$</p>
<p>and obtain
\begin{align*}
(4-27t)&A(t)^3-3A(t)=\\
&=\frac{8\cos^3\left(g(t)\right)}{\sqrt{4-27t}}-\frac{6\cos\left(g(t)\right)}{\sqrt{4-27t}}=\\
&=\frac{2\cos\left(3g(t)\right)}{\sqrt{4-27t}}\\
&=1
\end{align*}
Since
\begin{align*}
2\cos\left(3g(t)\right)&=\sqrt{4-27t}\\
4\cos^2\left(3g(t)\right)&=4-27t\\
\sin^2\left(3g(t)\right)&=\frac{27}{4}t\\
\end{align*}
we get
\begin{align*}
g(t)&=\frac{1}{3}\sin^{-1}\left(\frac{3\sqrt{3t}}{2}\right)\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Box
\end{align*}</p>
<p>and conclude the identity $(2)$ is valid.</p>
</blockquote>
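<p>Both the closed form $(2)$ and the cubic $(3)$ can be checked numerically at a sample point inside the radius of convergence $|t| < \frac{4}{27}$ (the sample point and truncation order below are arbitrary):</p>

```python
import math

def series_A(t, n_max=60):
    # Partial sum of the left-hand side of (2)
    return sum(math.comb(3 * n, n) * t ** n for n in range(n_max + 1))

def closed_A(t):
    # Right-hand side of (2)
    return 2 * math.cos(math.asin(3 * math.sqrt(3 * t) / 2) / 3) / math.sqrt(4 - 27 * t)

t = 0.01  # inside |t| < 4/27
A = series_A(t)
gap_closed = abs(A - closed_A(t))                   # identity (2)
gap_cubic = abs((4 - 27 * t) * A ** 3 - 3 * A - 1)  # cubic (3)
```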
|
1,834,756 | <p>The Taylor expansion of the function $f(x,y)$ is:</p>
<p>\begin{equation}
f(x+u,y+v) \approx f(x,y) + u \frac{\partial f (x,y)}{\partial x}+v \frac{\partial f (x,y)}{\partial y} + uv \frac{\partial^2 f (x,y)}{\partial x \partial y}
\end{equation}</p>
<p>When $f=f(x,y,z)$, is the following true?</p>
<p>$$\begin{align}
f(x+u,y+v,z+w) \approx f(x,y,z) &+ u \frac{\partial f (x,y,z)}{\partial x}+v \frac{\partial f (x,y,z)}{\partial y} + w \frac{\partial f (x,y,z)}{\partial z}
\\
&+uv \frac{\partial^2 f (x,y,z)}{\partial x \partial y} + vw \frac{\partial^2 f (x,y,z)}{\partial y \partial z}+ uw \frac{\partial^2 f (x,y,z)}{\partial x \partial z} \\
&+ uvw \frac{\partial^3 f (x,y,z)}{\partial x \partial y \partial z}
\end{align}$$</p>
| Hosein Rahnama | 267,844 | <p>The general formula for the Taylor expansion of a sufficiently smooth real valued function <span class="math-container">$f:\mathbb{R}^n \to \mathbb{R}$</span> at <span class="math-container">$\mathbf{x}_0$</span> is</p>
<p><span class="math-container">$$f({\bf{x}})=f({\bf{x}}_0)+\nabla f({\bf{x}}_0) \cdot ({\bf{x}}-{\bf{x}}_0) + \frac{1}{2} ({\bf{x}}-{\bf{x}}_0) \cdot \nabla \nabla f ({\bf{x}}_0) \cdot ({\bf{x}}-{\bf{x}}_0) + O(\lVert{\bf{x}}-{\bf{x}}_0\rVert^3)$$</span></p>
<p>If you call <span class="math-container">${\bf{x}}-{\bf{x}}_0:={\bf{h}}$</span> then the above formula can be rewritten as</p>
<p><span class="math-container">$$f({\bf{x}}_0+{\bf{h}})=f({\bf{x}}_0)+\nabla f({\bf{x}}_0) \cdot {\bf{h}} + \frac{1}{2} {\bf{h}} \cdot \nabla \nabla f ({\bf{x}}_0) \cdot {\bf{h}} + O(\lVert\mathbf{h}\rVert^3)$$</span></p>
<p>In these formulas, <span class="math-container">$\nabla f$</span> is the (first) gradient of <span class="math-container">$f$</span>, <span class="math-container">$\nabla\nabla f$</span> is usually called the Hessian (second gradient) of <span class="math-container">$f$</span>, and <span class="math-container">$O$</span> is the famous <a href="https://en.wikipedia.org/wiki/Big_O_notation#Infinitesimal_asymptotics" rel="nofollow noreferrer">big O notation</a>. You can extend this formulation for functions like <span class="math-container">$f:\mathbb{R}^n \to \mathbb{R}^m$</span>. You may also find it useful to take a look at <a href="https://en.wikipedia.org/wiki/Taylor_series#Taylor_series_in_several_variables" rel="nofollow noreferrer">this link</a>.</p>
|
639,665 | <p>How can I calculate the inverse of $M$ such that:</p>
<p>$M \in M_{2n}(\mathbb{C})$ and $M = \begin{pmatrix} I_n&iI_n \\iI_n&I_n \end{pmatrix}$, and I find that $\det M = 2^n$. I tried to find the $comM$ and apply $M^{-1} = \frac{1}{2^n} (comM)^T$ but I think it's too complicated.</p>
| Christoph | 86,801 | <p>You can easily just do Gaussian elimination to compute this inverse as you would do for any other explicitly given matrix.</p>
<p>We will use row operations and track what we did on the right hand side, so we start with
$$
\pmatrix{I_n & iI_n \\ iI_n & I_n} \,\Bigg|\,
\pmatrix{I_n & 0 \\ 0 & I_n}.
$$
Multiply the first $n$ rows by $i$ and substract them from the rows $n+1$ to $2n$ and you get
$$
\pmatrix{I_n & iI_n \\ 0 & 2I_n} \,\Bigg|\,
\pmatrix{I_n & 0 \\ -iI_n & I_n}.
$$
Multiply the rows $n+1$ to $2n$ by $\frac 1 2$,
$$
\pmatrix{I_n & iI_n \\ \phantom{\frac 1 2}\!\!\!0 & I_n} \,\Bigg|\,
\pmatrix{I_n & 0 \\ -\frac 1 2iI_n & \frac 1 2 I_n}.
$$
Now multiply the rows $n+1$ to $2n$ by $i$ and subtract from the first $n$ rows to get
$$
\pmatrix{\phantom{\frac 1 2}\!\!\!I_n & 0 \\ \phantom{\frac 1 2}\!\!\!0 & I_n} \,\Bigg|\,
\pmatrix{\frac 1 2 I_n & -\frac 1 2 i I_n \\ -\frac 1 2iI_n & \frac 1 2 I_n}.
$$
Thus,
$$
\pmatrix{I_n & iI_n \\ iI_n & I_n}^{-1}
=
\pmatrix{\frac 1 2 I_n & -\frac 1 2 i I_n \\ -\frac 1 2iI_n & \frac 1 2 I_n}.
$$</p>
<p>For the determinant we can use row operations as well:
$$
\det\pmatrix{I_n & iI_n \\ iI_n & I_n}
=
\det\pmatrix{I_n & iI_n \\ 0 & 2I_n}
=
2^n.
$$</p>
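<p>For the smallest case $n = 1$ (each block is the $1 \times 1$ "matrix" $1$), the result is easy to verify with complex arithmetic:</p>

```python
# n = 1: M and its claimed inverse
M = [[1, 1j], [1j, 1]]
Minv = [[0.5, -0.5j], [-0.5j, 0.5]]

prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
identity_gap = max(abs(prod[i][j] - (1 if i == j else 0))
                   for i in range(2) for j in range(2))
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]  # 1 - i*i = 2 = 2^n for n = 1
```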
|
2,916,887 | <p>The formula for Shannon entropy is as follows,</p>
<p>$$\text{Entropy}(S) = - \sum_i p_i \log_2 p_i
$$</p>
<p>Thus, a fair six sided dice should have the entropy,</p>
<p>$$- \sum_{i=1}^6 \dfrac{1}{6} \log_2 \dfrac{1}{6} = \log_2 (6) = 2.5849...$$</p>
<p>However, the entropy should also correspond to the average number of questions you have to ask in order to know the outcome (as illustrated in <a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4" rel="noreferrer">this guide</a> under the heading <em>Information Theory</em>).</p>
<p>Now, trying to construct a decision tree to describe the average number of questions we have to ask in order to know the outcome of a dice, and this seems to be the optimal one:</p>
<p><img src="https://i.stack.imgur.com/yCi7X.png" alt="Decision tree for dice"></p>
<p>Looking at the average number of questions in the image, there are 3 questions in 4/6 of the cases and 2 questions in 2/6 of the cases. Thus the entropy should be:</p>
<p>$$\dfrac{4}{6} \times 3 + \dfrac{2}{6} \times 2 = 2.6666...$$</p>
<p>So, obviously the result for the entropy isn't the same in the two calculations. How come?</p>
| Ahmad Bazzi | 310,385 | <p>There is nothing wrong with what you did. <strong>In the book "Elements of Information Theory", there is a proof that the average number of questions needed lies between $H(X)$ and $H(X)+1$, which agrees with what you did</strong>. So, in terms of "questions", the entropy gives you an accuracy within $1$ question. The following argument is from "Elements of Information Theory":</p>
<h2>Proof that $H(X) \leq L < H(X) + 1$</h2>
<p>If $L$ is the average number of questions (in the book it is referred to as the expected description length), it could be written as
$$L = \sum p_i l_i$$
subject to the constraint that each $l_i$ is an integer, because $l_i$ reflects the number of questions asked to arrive at the answer of the $i^{th}$ outcome. Also, you have $$\sum D ^{-l_i} \leq 1$$ where $D$ is the size of your alphabet. Furthermore, the optimal number of questions can be found via the $D$-adic probability distribution closest to the distribution of $X$ in relative entropy, that is, by finding the $D$-adic $r$, where
$$r_i = \frac{D^{-l_i}}{\sum_j D^{-l_j}}$$ that minimizes
$$L - H(X) = D(p \Vert r) - \log(\sum D^{-l_i}) \geq 0$$
The choice of questions $l_i = \log_D \frac{1}{p_i}$ will give $L = H$. Since $\log_D \frac{1}{p_i}$ is not necessarily an integer, you can take
$$l_i = \lceil \log_D \frac{1}{p_i} \rceil.$$
Using <a href="https://en.wikipedia.org/wiki/Kraft%E2%80%93McMillan_inequality" rel="noreferrer">Kraft-McMillan's inequality</a>, you can say
$$\sum D^{-\lceil \log_D \frac{1}{p_i} \rceil} \leq \sum D^{- \log_D \frac{1}{p_i}} = \sum p_i = 1$$
Now you'll get that the optimal $l_i$ are bounded between
$$\log_D \frac{1}{p_i} \leq l_i < \log_D \frac{1}{p_i} + 1$$
which gives you</p>
<blockquote>
<p>$$H(X) \leq L < H(X) + 1$$
You computed $L \simeq 2.666$ and $H(X) \simeq 2.58$</p>
</blockquote>
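<p>The bound is easy to check numerically for the fair die (a minimal sketch):</p>

```python
import math

probs = [1 / 6] * 6
H = -sum(p * math.log2(p) for p in probs)  # log2(6), about 2.585
L = (4 / 6) * 3 + (2 / 6) * 2              # average questions in the tree, about 2.667
bound_holds = H <= L < H + 1
```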
|
3,985,917 | <p>I am trying to show algebraically that <span class="math-container">$8^3>9^{8/3}$</span>. This came from trying to complete the base case of an induction proof.</p>
<p>I have struggled because <span class="math-container">$8$</span> and <span class="math-container">$9$</span> cannot be manipulated to be the same base. Otherwise I could just argue that <span class="math-container">$3>\dfrac{8}{3}$</span>.</p>
<p>I tried raising both sides to the third power and got <span class="math-container">$8^9>9^8$</span>. I can rewrite this as <span class="math-container">$8^9>9^{9-1}$</span> but I am not sure if this is the right direction.</p>
| user2661923 | 464,411 | <p>An approach that is alternative to (but inferior to) David Lui's answer is if you happen to know that</p>
<p><span class="math-container">$$\log_{10} ~2 \approx 0.301 ~~\text{and}~~ \log_{10} ~3 \approx 0.477.$$</span></p>
<p>Then, you simply compare</p>
<p><span class="math-container">$$0.301 \times (3 \times 3) ~~\text{vs}~~ 0.477 \times [2 \times (8/3)].$$</span></p>
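<p>Both the approximate logarithm comparison and the exact integer comparison obtained by cubing can be checked directly (a minimal sketch):</p>

```python
lhs = 0.301 * (3 * 3)        # approx log10(8^3) = 9 * log10(2)
rhs = 0.477 * (2 * (8 / 3))  # approx log10(9^(8/3)) = (16/3) * log10(3)
exact = 8 ** 9 > 9 ** 8      # cubed form: 134217728 > 43046721
```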
|
3,403,364 | <p>Here's a little number puzzle question with strange answer:</p>
<blockquote>
<p>In an apartment complex, there is an even number of rooms. Half of the rooms have one occupant, and half have two occupants. How many roommates does the average person in the apartment have? </p>
</blockquote>
<p>My gut instinct was to say <span class="math-container">$\frac{1}{2}$</span>, but apparently that is wrong and the correct answer is <span class="math-container">$\frac{2}{3}$</span>????</p>
<p>I saw this problem on Twitter with very little explanation and wasn't able to find it online anywhere. If anyone could shed some light on this for me that would be awesome : ) </p>
| rogerl | 27,542 | <p>If there are <span class="math-container">$2n$</span> rooms, then there are <span class="math-container">$3n$</span> people. Clearly <span class="math-container">$2n$</span> of them have exactly one roommate...</p>
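As an added check, with $2n$ rooms the average can be computed directly: the $n$ singly occupied rooms contribute $n$ people with $0$ roommates and the $n$ doubly occupied rooms contribute $2n$ people with $1$ roommate each:

```python
def average_roommates(n):
    # n single rooms: n people with 0 roommates
    # n double rooms: 2n people with 1 roommate each
    counts = [0] * n + [1] * (2 * n)
    return sum(counts) / len(counts)

assert abs(average_roommates(10) - 2 / 3) < 1e-12
```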
|
1,790,311 | <p>Show the following equalities $5 \mathbb{Z} +8= 5\mathbb{Z} +3= 5\mathbb{Z} +(-2)$.</p>
<p>$5 \mathbb{Z} +8=\{5z_{1}+8: z_{1} \in \mathbb{Z}\}$,</p>
<p>$5 \mathbb{Z} +3=\{5z_{2}+3: z_{2} \in \mathbb{Z}\}$,</p>
<p>$5 \mathbb{Z} +(-2)=\{5z_{3}+(-2): z_{3} \in \mathbb{Z}\}$.</p>
<p>So, how can prove to use these defitions?</p>
| Pipicito | 93,689 | <p>Note that for any $n \in \mathbb{Z}$ you have $\mathbb{Z} = \mathbb{Z} + n$.</p>
<p>Thus, $5 \mathbb{Z} +8 = 5 \mathbb{Z} + 5 + 3 = 5 (\mathbb{Z} + 1)+ 3 = 5\mathbb{Z} + 3$.</p>
<p>Similarly, you have $5 \mathbb{Z} + 8 = 5 \mathbb{Z} + 10 - 2 = 5\mathbb{Z} - 2$.</p>
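As an added sanity check, on any finite window of integers the three sets agree, since each one is exactly the residue class of $3$ modulo $5$:

```python
A = {5 * z + 8 for z in range(-100, 100)}
B = {5 * z + 3 for z in range(-100, 100)}
C = {5 * z - 2 for z in range(-100, 100)}

# compare the three sets on a window well inside all three generated ranges
window = set(range(-50, 51))
assert A & window == B & window == C & window
assert all(x % 5 == 3 for x in A & window)
```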
|
23,312 | <p>What is the importance of eigenvalues/eigenvectors? </p>
| lhf | 589 | <p>The behaviour of a linear transformation can be obscured by the choice of basis. For some transformations, this behaviour can be made clear by choosing a basis of eigenvectors: the linear transformation is then a (non-uniform in general) scaling along the directions of the eigenvectors. The eigenvalues are the scale factors.</p>
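As an added illustration (not part of the original answer; the matrix is a made-up example), for the symmetric matrix with rows [2, 1] and [1, 2] the eigenvalues solve the characteristic polynomial $\lambda^2 - 4\lambda + 3 = 0$, and along each eigenvector direction the map acts as a pure scaling:

```python
import math

# eigenvalues of [[2, 1], [1, 2]] from lambda^2 - tr*lambda + det = 0
tr, det = 4, 3
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2   # 1.0 and 3.0

def apply(v):
    # multiply [[2, 1], [1, 2]] by the vector v
    x, y = v
    return (2 * x + 1 * y, 1 * x + 2 * y)

# along the eigenvector directions the map is a (non-uniform) scaling
assert apply((1, -1)) == (lam1 * 1, lam1 * -1)   # scaled by 1
assert apply((1, 1)) == (lam2 * 1, lam2 * 1)     # scaled by 3
```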
|
23,312 | <p>What is the importance of eigenvalues/eigenvectors? </p>
| daven11 | 10,607 | <p>This made it clear for me <a href="https://www.youtube.com/watch?v=PFDu9oVAE-g" rel="nofollow noreferrer">https://www.youtube.com/watch?v=PFDu9oVAE-g</a></p>
<p>There's a lot of other linear algebra videos as well from 3 blue 1 brown</p>
|
142,677 | <p>Consider the following list of equations:</p>
<p>$$\begin{align*}
x \bmod 2 &= 1\\
x \bmod 3 &= 1\\
x \bmod 5 &= 3
\end{align*}$$</p>
<p>How many equations like this do you need to write in order to uniquely determine $x$?</p>
<p>Once you have the necessary number of equations, how would you actually determine $x$?</p>
<hr/>
<p><strong>Update:</strong></p>
<p>The "usual" way to describe a number $x$ is by writing</p>
<p>$$x = \sum_n 10^n \cdot a_n$$</p>
<p>and listing the $a_n$ values that aren't zero. (You can also extend this to some radix other than 10.)</p>
<p>What I'm interested in is whether you could instead express a number by listing all its residues against a suitable set of modulii. (And I'm <em>guessing</em> that the prime numbers would constitute such a "suitable set".)</p>
<p>If you were to do this, how many terms would you need to quote before a third party would be able to tell which number you're trying to describe?</p>
<p>That was my question. However, since it appears that the Chinese remainder theorem is extremely hard, I guess this is a bad way to denote numbers...</p>
<p>(It also appears that $x$ will never be uniquely determined without an upper bound.)</p>
| Community | -1 | <p>This is a classic example of <a href="http://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="nofollow noreferrer">Chinese remainder theorem</a>. To solve it, one typically proceeds as follows. We have <span class="math-container">$$x = 2k_2 + 1 = 3k_3 + 1 = 5k_5 + 3.$$</span>
Since <span class="math-container">$\displaystyle x = 2k_2 + 1 = 3k_3 + 1$</span>, we have that <span class="math-container">$2k_2 = 3k_3$</span> i.e. <span class="math-container">$2|k_3$</span> and <span class="math-container">$3|k_2$</span>, since <span class="math-container">$(2,3) = 1$</span>. Hence, <span class="math-container">$k_3 = 2k_6$</span> and <span class="math-container">$k_2 = 3k_6$</span>. Hence, we now get that <span class="math-container">$$x = 6k_6 + 1 = 5k_5+3.$$</span> Rearranging, we get that <span class="math-container">$$6k_6 - 5k_5 = 2.$$</span> Clearly, <span class="math-container">$(2,2)$</span> is a solution to the above. In general, if <span class="math-container">$ax+by=c$</span> has integer solutions and <span class="math-container">$(x_0,y_0)$</span> is one such integer solution, then all integer solutions are given by <span class="math-container">$$(x,y) = \displaystyle \left( x_0 + k \frac{\text{lcm}[\lvert a \rvert,\lvert b \rvert]}{a}, y_0 - k \frac{\text{lcm}[\lvert a \rvert,\lvert b \rvert]}{b} \right)$$</span> where <span class="math-container">$k \in \mathbb{Z}$</span>.
Hence, all the integer solutions to <span class="math-container">$6k_6 - 5k_5 = 2$</span>, are given by <span class="math-container">$$(k_6,k_5) = \left( 2 + 5k, 2 + 6k \right)$$</span>
Hence, <span class="math-container">$x = 5k_5 + 3 = 30k + 13$</span> i.e. <span class="math-container">$$x \equiv 13 \bmod 30.$$</span></p>
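As an added verification, a brute-force search over one full period $30 = 2\cdot 3\cdot 5$ confirms the unique residue:

```python
# all x in one period satisfying the three congruences
solutions = [x for x in range(30)
             if x % 2 == 1 and x % 3 == 1 and x % 5 == 3]
assert solutions == [13]
```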
|
1,693,630 | <p><a href="https://i.stack.imgur.com/TVeGv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TVeGv.jpg" alt="enter image description here"></a></p>
<p>This is my attempt at finding $\frac{d^2y}{dx^2}$. Can some one point out where I'm going wrong here?</p>
| Emilio Novati | 187,568 | <p>You are wrong because you have not computed
$$
\frac{d}{dx} \frac{dy}{dx}
$$
but</p>
<p>$$
\frac{d}{dt} \frac{dy}{dx}
$$</p>
<p>if you set
$$
\frac{dy}{dt}=\dot y \qquad \frac{dx}{dt}=\dot x
$$
than the formula for the second derivative is:
$$
\frac{d^2y}{dx^2}=\frac{d}{dt} \frac{dy}{dx}\frac{dt}{dx}=\frac{\dot x \ddot y-\dot y \ddot x}{\dot x^3}
$$
For a proof you can see :<a href="https://en.wikipedia.org/wiki/Parametric_derivative#Second_derivative" rel="nofollow">https://en.wikipedia.org/wiki/Parametric_derivative#Second_derivative</a></p>
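As an added numerical check of the formula (with the made-up test curve $x=t^2$, $y=t^3$, for which the exact second derivative is $3/(4t)$):

```python
def second_derivative(t):
    # (x' y'' - y' x'') / x'^3 for the curve x = t^2, y = t^3
    xd, yd = 2 * t, 3 * t * t    # first derivatives in t
    xdd, ydd = 2, 6 * t          # second derivatives in t
    return (xd * ydd - yd * xdd) / xd ** 3

t = 1.7
assert abs(second_derivative(t) - 3 / (4 * t)) < 1e-9
```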
|
885,129 | <p>I want to speed up the convergence of a series involving rational expressions; the expression is $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$$
If I have not misunderstood anything, the error in the infinite sum is at most the absolute value of the last neglected term. The formula for the $n$th term is $\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$ from the definition of the series. To get the series I used Maxima, the computer algebra system. I have noticed that to get 13 decimal places of the series one must wade through $312958$ terms of the series. I had to kill the computer GUI and some other system processes and run Maxima to compute the sum. It took about 5 minutes. The final sum I obtained was $0.3106137076850$. Is there any way to speed up the convergence of the sum? In general is there any way to speed up the convergence of the sum of $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {p(x)} {q(x)}$$ </p>
<p>where both ${p(x)}$ and ${q(x)}$ are rational functions?</p>
| WimC | 25,313 | <p>There are several methods to speed up the summation of series. For example <a href="https://en.wikipedia.org/wiki/Euler_summation" rel="nofollow noreferrer">Euler summation</a> or the <a href="https://en.wikipedia.org/wiki/Shanks_transformation" rel="nofollow noreferrer">Shanks transformation</a>. Here is a simple method that works quite well. Let $$f(n)=\frac{n^2+2n-1}{n^4+2n^2+1}$$ and $$g(n)=\tfrac{1}{2}f(2n-1)-f(2n)+\tfrac{1}{2}f(2n+1).$$ Then $$f(1)-f(2)+f(3)-\ldots = \tfrac{1}{2}f(1) +g(1)+g(2)+g(3)+\ldots$$ but the right hand side converges much faster than the left hand side. This is a generic method. For example if you take $$f(n)=\frac{1}{n}$$ then $$g(n)=\frac{1}{(2n-1)\cdot 2n \cdot (2n+1)}$$ and $$\log(2)=1-\frac{1}{2}+\frac{1}{3}-\ldots =\frac{1}{2}+\frac{1}{1\cdot 2\cdot 3} + \frac{1}{3\cdot 4 \cdot 5}+\ldots$$</p>
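As an added numerical check of the generic method, using the $\log 2$ example from the answer: with $N$ terms, the accelerated series gets far closer to $\log 2$ than the direct alternating sum with $2N$ terms:

```python
import math

def g(n):
    # accelerated terms for f(n) = 1/n
    return 1.0 / ((2 * n - 1) * (2 * n) * (2 * n + 1))

N = 100
direct = sum((-1) ** (n + 1) / n for n in range(1, 2 * N + 1))
accelerated = 0.5 + sum(g(n) for n in range(1, N + 1))

err_direct = abs(direct - math.log(2))
err_accel = abs(accelerated - math.log(2))
assert err_accel < err_direct / 100   # dramatically faster convergence
```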
|
507,454 | <p>I had a geometry class which was proctored using the Moore method, where the questions were given but not the answers, and the students were the source of all answers in the class. One of the early questions which we never solved is listed in the title.</p>
<p>In this case, use any reasonable definition of "between-ness". I believe the definition we used was "$B$ is between $A$ and $C$ if and only if $|AB|+|BC|=|AC|$". A collineation is a mapping where every line is mapped to a line. A mapping is a function that operates over the set of points within the given space and returns points in the given space.</p>
<p>When we were studying this question, we managed to get to the point that a between-ness mapping must map lines to line segments. In particular, we had managed to show that for every $A-B-C$ (read: "$B$ is between $A$ and $C$") in the mapped space by applying between-ness preserving mapping $m$, we could guarantee that pre-images $P_A, P_C$ such that $m(P_A)=A,m(P_C)=C$ implies the existence of pre-image $P_B$ such that $P_A-P_B-P_C$ and $m(P_B)=B$. I have never seen any full proof of the title statement.</p>
<p>I would like to read any hints that are known for solving this question. Feel free to completely solve it, but please hide the full solution in such a way that I can start with your hint(s) and have an opportunity to finish the solution for myself.</p>
| Stefan4024 | 67,746 | <p>It's well-known fact that every prime number, except for $2$ and $3$ can be written as $6k \pm 1$. SO we have:</p>
<p>$$(6k \pm 1)^2 - (6l \pm 1)^2 = 36k^2 \pm 12k + 1 - 36l^2 \mp 12l - 1$$ </p>
<p>$$= 36(k^2-l^2) \pm 12(k - l) = 12(3k^2 - 3l^2 \pm k \mp l)$$</p>
<p>The sum in the parenthesis is always even. Why?</p>
<p>Hence we proved that it's divisible by 24.</p>
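As an added check, the divisibility by $24$ can be verified exhaustively for all primes greater than $3$ below $200$:

```python
def is_prime(n):
    # simple trial division, enough for this small range
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(5, 200) if is_prime(p)]
assert all((p * p - q * q) % 24 == 0 for p in primes for q in primes)
```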
|
1,918,144 | <p>Suppose $\sum\limits_{k=0}^{\infty} b_k $ converges absolutely and has the sum $b$. Suppose $a \in\mathbb R$ with $|a|<1$. What is the sum of the series $\sum\limits_{k=0}^{\infty}(a^kb_0 +a^{k-1}b_1 +a^{k-2}b_2 +...+b_k)$?</p>
| Bernard | 202,857 | <p>This is the <em>Cauchy product</em> of the series $\sum_{k=0}^\infty b_k$ and $\sum_{k=0}^\infty a^k$. As both series converge absolutely, their Cauchy product converges to the product of the two sums, $b\cdot\dfrac1{1-a}$.</p>
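As an added numerical check, with the made-up choice $b_k = 2^{-k}$ (so $b = 2$) and $a = 1/3$, the sum should be $b/(1-a) = 3$:

```python
a = 1 / 3
b_terms = [0.5 ** k for k in range(60)]   # sum is (almost exactly) b = 2
b = sum(b_terms)

# partial sum of the series sum_k (a^k b_0 + a^(k-1) b_1 + ... + b_k)
total = sum(a ** (k - j) * b_terms[j]
            for k in range(60) for j in range(k + 1))

assert abs(total - b / (1 - a)) < 1e-6
```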
|
625,975 | <p>I'm just starting to learn computability. Some treatments of the subject use a relation they call $T$, which I <em>think</em> is called the universal recursive relation. It's defined something like this (<a href="http://www.its.caltech.edu/~jclemens/courses/02ma117a/handouts/handout6.pdf" rel="nofollow">http://www.its.caltech.edu/~jclemens/courses/02ma117a/handouts/handout6.pdf</a> p. 3):</p>
<blockquote>
<p>$T_n(m, c, x_1, \ldots , x_n) \iff \operatorname{Final}(m, c)\land c$ is a computation starting
from $\ast 1^{x_1}B\cdots B1^{x_n}$, i.e. $c$ codes a halting computation for the
machine m computing a function with inputs $x_1, \ldots , x_n$</p>
</blockquote>
<p>As I understand this relation, it holds when and only when the program with code $m$ and input $(x_1,\ldots , x_n)$ halts with output $c$. </p>
<p>This relation is said to be recursive (and primitive recursive - see p. 2 of the handout I linked). What I don't see is how this is compatible with the halting problem being undecidable. If the relation was recursive, couldn't you use it to check whether the program halts for some given input? (I've probably misunderstood what the relation is but I can't see how else to interpret it.)</p>
<p>Thanks a lot.</p>
| tomasz | 30,222 | <p>Just because we can tell if a given computation is the halting computation doesn't mean we can tell if one exists at all.</p>
<p>The halting relation is $\exists c\, T_n(m,c,x_1,\ldots,x_n)$.</p>
|
265 | <p>I'm a mod at Computational Science SE. Sometimes, users ask questions on Computational Science about doing something in Wolfram Alpha. As far as I can tell, Wolfram Alpha uses Mathematica for its math engine. Are those questions on topic here?</p>
<p>Note: I was made aware of <a href="https://mathematica.meta.stackexchange.com/questions/68/other-wri-product-discussion">Other WRI product discussion?</a>. I'm scoping my question specifically to uses of Wolfram Alpha that look something like <a href="https://scicomp.stackexchange.com/questions/1407/solving-system-of-equations-with-wolfram-alpha">https://scicomp.stackexchange.com/questions/1407/solving-system-of-equations-with-wolfram-alpha</a>, where the question is talking about using Wolfram Alpha for mathematics only. </p>
| Szabolcs | 12 | <p>We haven't had questions like that so far, so there is no precedent.</p>
<p>But I'd say questions about how to do something with Wolfram|Alpha <em>without</em> using Mathematica should be explicitly off topic. Based on the discussion you linked to I think most will agree.</p>
|
1,823,487 | <p>There are $m$ different people and a <strong>circle</strong> that has $m+r$ identical seats. How many ways can we put those people in the circle?</p>
<p>If the seats were not identical then the solution was: $ \frac{1*(m+r-1)!}{r!} $</p>
<p>I can't understand how the fact that the seats are identical affects my solution? Is the only difference which guy is next to which one?</p>
<p>For example: $n=4 , r=2 $ , are those all options the same? </p>
<p>(the numbers representing a specific man and $X$ representing an empty seat) $(1,X,2,X,3,4,1) = (1,X,2,X,4,3,1) = (1,X,X,2,3,4,1)$ ?</p>
<p>What can I understand if the question will be changed to identical $m+r$ seats (not a circle) ?</p>
<p>Thank you,</p>
<p>Stav.</p>
| Brian M. Scott | 12,042 | <p>First imagine that one seat is marked. Then we can think of the seats as actually being arranged in a line, with the marked seat at the beginning of the line. There are $\binom{m+r}m$ ways to pick $m$ of the seats to be occupied by people, and those $m$ people can be arranged in $m!$ ways in the chosen seats, so there are</p>
<p>$$\binom{m+r}mm!=\frac{(m+r)!}{r!}\tag{1}$$</p>
<p>ways to seat the people, <em>not</em> $\dfrac{(m+r-1)!}{r!}$.</p>
<p>Now in your example consider the arrangements </p>
<p>$$\begin{align*}&\langle 1,X,2,X,3,4\rangle,\langle X,2,X,3,4,1\rangle,\langle 2,X,3,4,1,X\rangle,\langle X,3,4,1,X,2\rangle,\langle 3,4,1,X,2,X\rangle,\\
&\text{and }\langle 4,1,X,2,X,3\rangle\;,
\end{align*}$$</p>
<p>where the first entry in each tuple corresponds to the marked seat. If we remove the marking, making the seats identical, these $6$ arrangements can no longer be distinguished: the only thing that distinguished them before was the marking on the one seat that let us decide which seat was the first to be listed.</p>
<p>More generally, the $m+r$ circular permutations of a given seating are all indistinguishable when the seats are identical: there is simply no way to pick out one of the $m+r$ entries in the tuple to be the first one. Thus, each <em>distinguishable</em> arrangement when the seats are identical is counted $m+r$ times by $(1)$, and there are therefore only</p>
<p>$$\frac1{m+r}\cdot\frac{(m+r)!}{r!}=\frac{(m+r-1)!}{r!}$$</p>
<p>distinguishable arrangements.</p>
|
4,055,850 | <p>I was thinking it could be plugging in <span class="math-container">$x^2+x$</span> to the <span class="math-container">$f'(x)$</span>, then using the Chain Rule to solve it, but I'm not sure if it is right. Please help!</p>
| Ikhtesad Mannan | 1,019,660 | <p>That's taken from the IB Oxford AA HL textbook. In IB, the real scale is the x-axis and the imaginary scale is the y-axis. According to that, the book is wrong. It has gotten the conjugate and the conjugate of the opposite the wrong way around.</p>
|
2,780,216 | <p>How many integer solutions are there to the equation $a + b + c = 21$ if $a \geq3$,
$b \geq 1$, and $c \geq 2b$?</p>
| Abhishek Choudhary | 452,208 | <p>$a$ can vary from $3$ to $18$, $b$ from $1$ to $6$, and $c$ from $2$ to $17$, but these ranges are not independent, so the count is not a simple product. Fixing $b$: the constraints $c \geq 2b$ and $a = 21 - b - c \geq 3$ give $2b \leq c \leq 18 - b$, which requires $b \leq 6$ and leaves $19 - 3b$ choices of $c$. Hence the number of solutions is $\sum_{b=1}^{6}(19-3b) = 16+13+10+7+4+1 = 51$.</p>
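As an added brute-force enumeration to check the count:

```python
# enumerate all integer solutions of a + b + c = 21 with a >= 3, b >= 1, c >= 2b
count = 0
for a in range(3, 22):
    for b in range(1, 22):
        c = 21 - a - b
        if c >= 2 * b:
            count += 1
print(count)   # 51
```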
|
1,722,692 | <p>I am asked to find</p>
<p>$$\lim_{x \to 0} \frac{\sqrt{1+x \sin(5x)}-\cos(x)}{\sin^2(x)}$$</p>
<p>and I tried not to use L'Hôpital but it didn't seem to work. After using it, same thing: the fractions just gets bigger and bigger.</p>
<p>Am I missing something here?</p>
<p>The answer is $3$</p>
| Claude Leibovici | 82,404 | <p>Let us play with Taylor series to get even more than just the limit, considering $$\sin(5x)=5 x-\frac{125 x^3}{6}+O\left(x^5\right)$$ $$1+x \sin(5x)=1+5 x^2-\frac{125 x^4}{6}+O\left(x^6\right)$$ $$\sqrt{1+x \sin(5x)}=1+\frac{5 x^2}{2}-\frac{325 x^4}{24}+O\left(x^6\right)$$ $$\sqrt{1+x \sin(5x)}-\cos(x)=3 x^2-\frac{163 x^4}{12}+O\left(x^6\right)$$ $$\frac{\sqrt{1+x \sin(5x)}-\cos(x)}{\sin^2(x)}=\frac{3 x^2-\frac{163 x^4}{12}+O\left(x^6\right) }{x^2-\frac{x^4}{3}+O\left(x^6\right)}$$ Now, performing the long division $$\frac{\sqrt{1+x \sin(5x)}-\cos(x)}{\sin^2(x)}=3-\frac{151 x^2}{12}+O\left(x^4\right)$$ which shows the limit and how it is approached.</p>
<p>For illustration purposes, let us use $x=0.01$; the "exact" value should be $\approx 2.9987421$ while the above approximation gives $2.9987400$</p>
<p>For $x=0.1$, the corresponding values would be $\approx 2.8782301$ and $2.8741700$.</p>
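As an added numerical check of the expansion (not part of the original answer):

```python
import math

def f(x):
    return (math.sqrt(1 + x * math.sin(5 * x)) - math.cos(x)) / math.sin(x) ** 2

for x in (0.1, 0.01):
    series = 3 - 151 * x * x / 12
    # the two agree up to the neglected O(x^4) error
    assert abs(f(x) - series) < 100 * x ** 4
```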
|
30,305 | <p>I want to call <code>Range[]</code> with its arguments depending on a condition. Say we have </p>
<pre><code>checklength = {5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6}
</code></pre>
<p>I then want to call <code>Range[]</code> 13 times (the length of <code>checklength</code>) and do <code>Range[5]</code> when <code>checklength[[#]] == 5</code> and <code>Range[2, 6]</code> when <code>checklength[[#]] == 6</code>. <code>If[]</code> would seem an appropriate way to do it, </p>
<pre><code>Range[If[checklength[[#]] == 5, 5, XXX]]& /@ Range[13]
</code></pre>
<p>but I don't know what to put for "XXX", since I need "2,6" there without any brackets. I've tried </p>
<pre><code>Range[If[checklength[[#]]==5, 5, Flatten[{2,6}]]]& /@ Range[13]
</code></pre>
<p>but that doesn't help (in fact if you think about it, it shouldn't!). The problem is, I need an unbracketed pair of numbers to be treated as a single argument and I don't know how to do that. I can think of one quite messy solution, </p>
<pre><code>Range[If[checklength[[#]] == 5, 1, 2], If[checklength[[#]] == 5, 5, 6]& /@ Range[13]
</code></pre>
<p>but I'd be disappointed if there's not a better way to do it. Even though this does the trick, the general question remains of how to treat unbracketed comma separated numbers as a single item.</p>
| jVincent | 1,194 | <p>The solution you are asking for is to pass the sequence of 2,6 which is nicely defined as <code>Sequence[2,6]</code> However if we simply put that as the last argument in <code>If</code>, it will be misinterpreted, for instance <code>If[False,Sequence[1,2]]</code> would return 2. So we need to use <code>Unevaluated</code>:</p>
<pre><code>Range[If[checklength[[#]] == 5, 5, Unevaluated[Sequence[2, 6]]]] & /@ Range[13]
</code></pre>
<p>However in your particular case I would question why you are not instead simply using:</p>
<pre><code>If[checklength[[#]] == 5, Range[5], Range[2, 6]] & /@ Range[13]
</code></pre>
<p>or:</p>
<pre><code>If[# == 5, Range[5], Range[2, 6]] & /@ checklength
</code></pre>
<p>Which seems less convoluted to interpret and does what you need.</p>
|
4,000,935 | <p>How do you show that <span class="math-container">$7a^2 - 12b^2 = 8c^2$</span> has no integer solutions</p>
<hr />
<p>When <span class="math-container">$x = a/c$</span> and <span class="math-container">$y = b/c$</span> then <span class="math-container">$\gcd(a,b,c) = 1$</span> I believe.</p>
<p>If we use mod5, and neither <span class="math-container">$a,b,c$</span> is divisible by 5 then <span class="math-container">$7a^2 - 12b^2$</span> can have remainder 1 or 4 and <span class="math-container">$8c^2$</span> can only have remainder 2 or 3. But if either <span class="math-container">$a$</span> or <span class="math-container">$b$</span> is divisible by 5 then <span class="math-container">$7a^2 - 12b^2$</span> will have remainder 2 or 3.</p>
<p>I've considered using mod7 so <span class="math-container">$-12b^2$</span> would have remainder either <span class="math-container">$1,2,$</span> or <span class="math-container">$4.$</span> But <span class="math-container">$8c^2$</span> can have remainder 1,2 or 4 as well.</p>
<p>I'm not really sure how to go about this, any help would be appreciated thanks.</p>
| lab bhattacharjee | 33,337 | <p>Hint</p>
<p>WLOG <span class="math-container">$(a,c)=1$</span></p>
<p>Take the equation modulo <span class="math-container">$3$</span>:</p>
<p><span class="math-container">$$a^2\equiv2c^2\pmod3$$</span></p>
<p>But for any integer <span class="math-container">$d,$</span></p>
<p><span class="math-container">$$d^2\equiv0,1\pmod3\not\equiv2$$</span></p>
|
4,000,935 | <p>How do you show that <span class="math-container">$7a^2 - 12b^2 = 8c^2$</span> has no integer solutions</p>
<hr />
<p>When <span class="math-container">$x = a/c$</span> and <span class="math-container">$y = b/c$</span> then <span class="math-container">$\gcd(a,b,c) = 1$</span> I believe.</p>
<p>If we use mod5, and neither <span class="math-container">$a,b,c$</span> is divisible by 5 then <span class="math-container">$7a^2 - 12b^2$</span> can have remainder 1 or 4 and <span class="math-container">$8c^2$</span> can only have remainder 2 or 3. But if either <span class="math-container">$a$</span> or <span class="math-container">$b$</span> is divisible by 5 then <span class="math-container">$7a^2 - 12b^2$</span> will have remainder 2 or 3.</p>
<p>I've considered using mod7 so <span class="math-container">$-12b^2$</span> would have remainder either <span class="math-container">$1,2,$</span> or <span class="math-container">$4.$</span> But <span class="math-container">$8c^2$</span> can have remainder 1,2 or 4 as well.</p>
<p>I'm not really sure how to go about this, any help would be appreciated thanks.</p>
| Community | -1 | <p>Using mod <span class="math-container">$4$</span>, <span class="math-container">$a$</span> is divisible by <span class="math-container">$2$</span>.<br>
Let <span class="math-container">$a = 2k$</span>:</p>
<p><span class="math-container">$7k^2 - 3b^2 = 2c^2$</span></p>
<p>HINT: GO mod 3 and use that <span class="math-container">$t^2 \equiv 0,1 \pmod 3$</span> for all positive integers t</p>
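As an added exhaustive check in a small box, consistent with the claimed non-existence of non-trivial solutions:

```python
# the only integer solution with |a|, |b|, |c| <= 30 is the trivial one
sols = [(a, b, c)
        for a in range(-30, 31)
        for b in range(-30, 31)
        for c in range(-30, 31)
        if 7 * a * a - 12 * b * b == 8 * c * c]
assert sols == [(0, 0, 0)]
```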
|
733,101 | <p>I've been stuck for a while on this question and haven't found applicable resources.</p>
<p>I have 10 choices and can select 3 at a time. I am allowed to repeat choices (combination), but the challenge is that ABA and AAB are not unique.</p>
<p>10 choose 3 is the question.</p>
<p>I have been working on a smaller set to find a formula. 3 choose 3.</p>
<p>I came up with 27 results (if order matters) and 10 results if order doesn't matter. AAA, AAB, AAC, ABB, ACB, ACC, BBB, BBC, BCC, CCC</p>
<p>How do I go about solving these problems.</p>
<p>My closest hypothesis is choices^slots / slots! == 3^3/3!</p>
| user2566092 | 87,313 | <p>Hint: First consider the case that all 3 chosen elements are distinct, and count the possibilities (taking into account that order doesn't matter). Then consider the case when exactly 2 chosen elements are equal, and count the possibilities. Finally count the possibilities when all 3 elements are equal.</p>
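As an added check, the case count the hint leads to is $\binom{10}{3} + 10\cdot 9 + 10 = 220 = \binom{12}{3}$, which direct enumeration confirms:

```python
from itertools import combinations_with_replacement
from math import comb

# unordered selections of 3 from 10 with repetition allowed
count = sum(1 for _ in combinations_with_replacement(range(10), 3))
assert count == comb(12, 3) == 220

# the small case from the question: 3 choices, 3 slots -> 10 outcomes
assert sum(1 for _ in combinations_with_replacement("ABC", 3)) == 10
```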
|
394,294 | <p>I would like to know the asymptotics of the following sequences of integrals:
<span class="math-container">$$ I_n = \int _0
^{+ \infty}
e^{-t} \left ( \dfrac{t}{1 + t} \right )^n
\ dt
$$</span></p>
<p>I have tried using Laplace's method or the saddle point method, but I have been unable to conclude anything.</p>
<p>I have tried to find out the behaviour using a software computation. Here is what I found:
<span class="math-container">\begin{equation}
\begin{array}{|c|c|c|}
\hline
\\
n & I_n & - \ln(I_n)
\\
\hline
1 = 2^0 & 1.9269472 \cdot 10^{-1} & 1.646648079928304
\\
\hline
2 = 2^1 & 8.7215768 \cdot 10^{-2} & 2.4393701372220464
\\
\hline
4 = 2^2 & 2.6524946 \cdot 10^{-2} & 3.629669596451481
\\
\hline
8 = 2^3 & 4.7442047 \cdot 10^{-3} & 5.35083146190712
\\
\hline
16 = 2^4 & 4.1306898 \cdot 10^{-4} & 7.7918959541372
\\
\hline
32 = 2^5 & 1.3310510 \cdot 10^{-5} & 11.226956600782769
\\
\hline
64 = 2^6 & 1.0697730 \cdot 10^{-7} & 16.05064913913718
\\
\hline
128 = 2^7 & 1.2206772 \cdot 10^{-10} & 22.826445101120278
\\
\hline
256 = 2^8 & 8.8802107 \cdot 10^{-15} & 32.35495110837777
\\
\hline
512 = 2^9 & 1.3241607 \cdot 10^{-20} & 45.77092301635973
\\
\hline
1024 = 2^{10} & 8.1182635 \cdot 10^{-29} & 64.6808514171555
\\
\hline
2048 = 2^{11} & 2.1076879 \cdot 10^{-40} & 91.3578121482943
\\
\hline
4096 = 2^{12} & 9.3011756 \cdot 10^{-57} & 129.01720949348262
\\
\hline
8192 = 2^{13} & 7.3886987 \cdot 10^{-80} & 182.2068558012616
\\
\hline
16384 = 2^{14} & 1.7003331 \cdot 10^{-112} & 257.358706225986
\\
\hline
32768 = 2^{15} & 1.2703122 \cdot 10^{-158} & 363.5691819464147
\\
\hline
65536 = 2^{16} & 7.9749999 \cdot 10^{-224} & 513.7027491850932
\\
\hline
131072 = 2^{17} & 5.2817088 \cdot 10^{-316} & 725.9526396990342
\\
\hline
262144 = 2^{18} & 2.4716651 \cdot 10^{-446} & 1026.0480594180656
\\
\hline
524288 = 2^{19} & 1.2878115 \cdot 10^{-630} & 1450.3756643021711
\\
\hline
\end{array}
\end{equation}</span>
From this tabular, it seems that <span class="math-container">$\ln I_n \sim - 2 \sqrt{n}$</span>.</p>
<p><strong>Is there any method to study such sequences of integrals?</strong></p>
| MathTolliob | 171,738 | <p>Let us assume the following conjecture:</p>
<blockquote>
<p>Let <span class="math-container">$h: \mathbb{R}^+ \longrightarrow \mathbb{R}$</span> be a continuous function.
Let also <span class="math-container">$g_n: \mathbb{R}^+ \longrightarrow \mathbb{R}$</span> be a <span class="math-container">$\mathcal{C}^2$</span> function over <span class="math-container">$\mathbb{R}^+$</span>, depending on a positive integer <span class="math-container">$n$</span>, with a unique minimum <span class="math-container">$\mathcal{U}_n$</span> such that <span class="math-container">$g_n''(\mathcal{U}_n) > 0$</span>.
Therefore, we have the following estimation:
<span class="math-container">$$ \int _0
^{+ \infty}
h(t) e^{- g_n(t)}
\ dt
\underset{n \longrightarrow + \infty}{\sim}
h(\mathcal{U}_n)
\cdot
\sqrt { \dfrac{2 \pi}{g_n''(\mathcal{U}_n)} }
\cdot
e^{- g_n(\mathcal{U}_n)}
\ .
$$</span></p>
</blockquote>
<p>This is a small adaptation of the classical Laplace method (see the [Wikipedia page on Laplace's method, section "Other formulation"][1]).</p>
<p>In our case, as pointed out by Carlo Beenakker, we have <span class="math-container">$h(t) = 1$</span>,
<span class="math-container">$g_n(t) = t - n \ln t + n \ln(1 + t)$</span> and <span class="math-container">$\mathcal{U}_n = \dfrac{\sqrt{4n + 1} - 1}{2}$</span>. So, we finally find that
<span class="math-container">$$ I_n
\underset{n \longrightarrow + \infty}{\sim}
\sqrt{e \pi} n^{\frac{1}{4}} e^{-2 \sqrt{n}}
\ .
$$</span>
[1]: <a href="https://en.wikipedia.org/wiki/Laplace%27s_method" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Laplace%27s_method</a></p>
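As an added numerical cross-check (not part of the original answer), a simple trapezoid rule for $I_n$ can be compared with the asymptotic formula at a moderate value such as $n = 400$:

```python
import math

def integrand(t, n):
    # e^{-t} (t / (1 + t))^n, computed in log form for stability
    return math.exp(-t + n * (math.log(t) - math.log(1 + t)))

def integral(n, T=200.0, steps=100_000):
    # composite trapezoid on (0, T]; the integrand vanishes at both ends
    h = T / steps
    s = 0.5 * integrand(T, n)
    for k in range(1, steps):
        s += integrand(k * h, n)
    return s * h

n = 400
asympt = math.sqrt(math.e * math.pi) * n ** 0.25 * math.exp(-2 * math.sqrt(n))
ratio = integral(n) / asympt
assert abs(ratio - 1) < 0.05   # agreement within a few percent
```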
|
3,319,122 | <p>This is from Tao's Analysis I: </p>
<p><a href="https://i.stack.imgur.com/DYQxE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DYQxE.png" alt="enter image description here"></a></p>
<p>So far I managed to show (inductively) that these sets do exist for every <span class="math-container">$\mathit{N}\in\mathbb{N}$</span>, but I'm finding it hard to show they're unique.</p>
<p>One of the things I'm (also) trying to proof is that <span class="math-container">$\mathit{N}\in\mathit{A_N}$</span>, but I'm stuck on that one too. </p>
<p>I'm not allowed to use the ordering of the natural numbers.</p>
| José Carlos Santos | 446,262 | <p>Formally, your question makes no sense, because <span class="math-container">$f$</span> is undefined at <span class="math-container">$0$</span>. But suppose that you extend the domain of <span class="math-container">$f$</span> to <span class="math-container">$\mathbb R$</span>, defining<span class="math-container">$$f(0)=\lim_{x\to0}\frac x{e^x-1}=1.$$</span>Then, yes, <span class="math-container">$f$</span> is decreasing near <span class="math-container">$0$</span>. That's so because <span class="math-container">$f(x)=\frac1{g(x)}$</span>, where<span class="math-container">\begin{align}g(x)&=\begin{cases}\frac{e^x-1}x&\text{ if }x\neq0\\1&\text{ otherwise}\end{cases}\\&=1+\frac x2+\frac{x^2}{3!}+\cdots\end{align}</span>Then <span class="math-container">$g'(0)=\frac12>0$</span> and, since <span class="math-container">$g$</span> is increasing near <span class="math-container">$0$</span>, <span class="math-container">$f$</span> is decreasing near <span class="math-container">$0$</span>.</p>
|
2,701,582 | <p>I thought I was doing this right until I checked my answer online and got a different one. I worked through the problem again and got my original answer a second time so this one is bothering me since the other similar ones I have done checked out okay. Please let me know if I'm doing something wrong, thanks!</p>
<blockquote>
<p>Find $x, y$ contained in integers such that $475x+2018y=1$, then find a value for $475^{-1}$ (mod $2018$).</p>
</blockquote>
<p>Since it is in the form of $ax+by=1$ I know that the $\gcd(a,b)=1$. I still did the division algorithm since that helps me with the back substitution. Here is what I got.</p>
<p>Division Algm:</p>
<ul>
<li><p>$2018=(4\times 475)+118$</p></li>
<li><p>$475=(4\times 118)+3$</p></li>
<li><p>$118=(39\times 3)+1$</p></li>
</ul>
<p>Back Sub:</p>
<ul>
<li><p>$1=118-(39\times 3)$</p></li>
<li><p>$1=118-39(475-(4\times 118))$</p></li>
<li><p>$1=(157\times 118)-(39\times 475)$</p></li>
<li><p>$1=157\times (2018-(4\times 475))-(39\times 475)$</p></li>
<li><p>$1=(157\times 2018)-(667\times 475)$</p></li>
</ul>
<p>So $x=667$ and $y=157$</p>
<p>The second question I answered from the first part which is $475x$ congruent to $1$(mod $2018$) so that would just be $667$ from the first part. Any help is appreciated, thanks! </p>
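As an added verification of the arithmetic above: note that the back-substitution gives $1 = 157\cdot 2018 - 667\cdot 475$, so the coefficient of $475$ is $-667$, and the inverse of $475$ modulo $2018$ is $-667 \equiv 1351 \pmod{2018}$. This can be checked mechanically:

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = egcd(475, 2018)
assert g == 1 and 475 * x + 2018 * y == 1

inv = x % 2018
assert inv == 1351 and (475 * inv) % 2018 == 1
assert inv == pow(475, -1, 2018)   # Python 3.8+ modular inverse
```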
| Dirk | 379,594 | <p>One famous generating set of $S_n$ is
$$A = \{ (1,2), (1,2,3,4,\ldots, n)\}.$$</p>
<p>The proof that this set generates the whole group is basically the proof of correctness for the bubble sort algorithm.
Now you can only generate four subgroups from subsets of this set: the trivial one, the whole group, and two cyclic groups; but of course $S_n$ has many more subgroups in general.</p>
<p>Looking at two other famous generating sets, this will also not work:
$$A_1 = \{ (i,j) \mid i < j\}, \,\,\, A_2 = \{(i,i+1) \mid 1 \leq i < n\}.$$
The first one is the set of all transpositions; the second one consists of the so-called Coxeter generators of $S_n$. Neither can generate the alternating group from a subset: any nonempty set of transpositions generates a group containing a transposition, while the alternating group contains none.</p>
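As an added computational check (not part of the original answer), closing the two generators under composition recovers all of $S_n$ for small $n$:

```python
def compose(p, q):
    # permutations as tuples of images of 0..n-1; apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def generated(gens, n):
    # breadth-first closure of the generating set under composition
    identity = tuple(range(n))
    group, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            k = compose(h, g)
            if k not in group:
                group.add(k)
                frontier.append(k)
    return group

n = 4
transposition = (1, 0, 2, 3)   # (1 2)
cycle = (1, 2, 3, 0)           # (1 2 3 4)
assert len(generated([transposition, cycle], n)) == 24   # |S_4| = 24
```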
|
2,701,582 | <p>I thought I was doing this right until I checked my answer online and got a different one. I worked through the problem again and got my original answer a second time so this one is bothering me since the other similar ones I have done checked out okay. Please let me know if I'm doing something wrong, thanks!</p>
<blockquote>
<p>Find $x, y$ contained in integers such that $475x+2018y=1$, then find a value for $475^{-1}$ (mod $2018$).</p>
</blockquote>
<p>Since it is in the form of $ax+by=1$ I know that the $\gcd(a,b)=1$. I still did the division algorithm since that helps me with the back substitution. Here is what I got.</p>
<p>Division Algm:</p>
<ul>
<li><p>$2018=(4\times 475)+118$</p></li>
<li><p>$475=(4\times 118)+3$</p></li>
<li><p>$118=(39\times 3)+1$</p></li>
</ul>
<p>Back Sub:</p>
<ul>
<li><p>$1=118-(39\times 3)$</p></li>
<li><p>$1=118-39(475-(4\times 118))$</p></li>
<li><p>$1=(157\times 118)-(39\times 475)$</p></li>
<li><p>$1=157\times (2018-(4\times 475))-(39\times 475)$</p></li>
<li><p>$1=(157\times 2018)-(667\times 475)$</p></li>
</ul>
<p>So $x=667$ and $y=157$</p>
<p>The second question I answered from the first part which is $475x$ congruent to $1$(mod $2018$) so that would just be $667$ from the first part. Any help is appreciated, thanks! </p>
| Ivan Di Liberti | 36,248 | <p>Given a prime number $p$, $S_p$ is always generated by a transposition and a $p$-cycle. There is no way to find a subset of this set that generates $A_n$.</p>
|
3,370,521 | <p>I have to prove that the following series converges:
<span class="math-container">$$
\sum_{n=1}^{\infty}\frac{\sin n}{n(\sqrt{n}+\sin n)}
$$</span>
I tried to use Dirichlet's test but I was not really sure whether the denominator is a monotonically decreasing function. If it is than the problem becomes easy.</p>
| José Carlos Santos | 446,262 | <p>No. Take<span class="math-container">$$\begin{array}{rccc}f\colon&D(0,1)&\longrightarrow&\mathbb C\\&z&\mapsto&\frac1{1-z}\end{array}$$</span>and <span class="math-container">$a_n=-1+\frac1n$</span> for each <span class="math-container">$n\in\mathbb N$</span>.</p>
|
2,317,166 | <p>I just pondered this question and have tried out several methods to solve it(mainly using trigonometry). However, I am not satisfied with my trigonometrical proof and is looking for better proofs.</p>
<p>Give an <em>elegant proof</em> that the diagonals are the longest lines in a square.
It would rather be nice if the proof is by <strong>"reductio ad absurdum"</strong> method. Thank you :).</p>
<p>Bonus : Also prove that the diagonals of a rectangle are the longest lines.</p>
| Andrew D. Hwang | 86,418 | <p>Consider the axis-oriented rectangle $R$ consisting of all points whose Cartesian coordinates $(x, y)$ satisfy
$$
x_{0} \leq x \leq x_{1},\qquad
y_{0} \leq y \leq y_{1}.
$$
If $(x, y)$ and $(x', y')$ are points of $R$, then $|x' - x| \leq |x_{1} - x_{0}|$ and $|y' - y| \leq |y_{1} - y_{0}|$, so their distance satisfies
$$
\sqrt{(x' - x)^{2} + (y' - y)^{2}} \leq \sqrt{(x_{1} - x_{0})^{2} + (y_{1} - y_{0})^{2}}.
$$
The right-hand upper bound is the distance between the corners $(x_{0}, y_{0})$ and $(x_{1}, y_{1})$, and between the corners $(x_{0}, y_{1})$ and $(x_{1}, y_{0})$.</p>
|
493,600 | <p>I am presented with the following task:</p>
<p>Can you use the chain rule to find the derivatives of $|x|^4$ and $|x^4|$ in $x = 0$? Do the derivatives exist in $x = 0$? I solved the task in a rather straight-forward way, but I am worried that there's more to the task:</p>
<p>First of all, both functions are a variable raised to an even power, so given that $x$ is a real number, we have that $|x^4| = |x|^4$. In order to force practical use of the chain rule, we write $|x|^4 = \sqrt{x^2}^4$. We are using the fact that raising a number to an even power, like taking the absolute value, gives us nonnegative numbers exclusively. If we choose the chain $u = x^2$, thus $g(u) = \sqrt{u}^4 = u^2$, we have that $u' = 2x$ and $g'(u) = (u^2)' = 2u$. Then the derivative of the function, which I for practical reasons will name $f(x)$, is $f'(x) = 2x^2 \cdot 2x = 4x^3$. We see that the general power rule applies here, seeing as we work with a variable raised to an even power. The derivative at the point $x = 0$ is $4 \cdot 0^3 = \underline{\underline{0}}$. Thus we can conclude that the derivative exists at $x = 0$.</p>
<p>Is this fairly logical? I'm having a hard time seeing that there is anything more to this task, but it feels like it went a bit too straightforward.</p>
| user71352 | 71,352 | <p>You can apply the chain rule to $(x^{2})^{2}$, which happens to equal $|x^{4}|=|x|^{4}$, since it is the composition of two functions differentiable at $0$ (i.e. take $f(x)=x^{2}$; then this is $f\circ f$). You could actually proceed quicker since $|x^{4}|=|x|^{4}=x^{4}$, so we don't need the chain rule. What you are using is that $f(x)=|x^{4}|=|x|^{4}$ agrees everywhere with a function that is differentiable everywhere and is hence differentiable everywhere itself. You can prove this using difference quotients.</p>
<p>Let $f$ be a function that agrees everywhere with a function $g$ whose derivative at $x_{0}$ exists. Then $f$ is differentiable at $x_{0}$ and $f'(x_{0})=g'(x_{0})$.</p>
<p>$\lim_{h\to0}\frac{f(x_{0}+h)-f(x_{0})}{h}=\lim_{h\to0}\frac{g(x_{0}+h)-g(x_{0})}{h}=g'(x_{0})$. So $f'(x_{0})$ exists and equals $g'(x_{0})$.</p>
|
246,384 | <p>How can I plot a hexagon inside this cylinder so that it is always in the same direction as the cylinder, no matter what the orientation is? In addition, the hexagon must always be the same size as the cylinder.</p>
<p>For example, start at the origin (10,9,8) and go to (1,2,3)?</p>
<pre><code>Graphics3D [{Cylinder [{{10, 9, 8}, {1, 2, 3}}, 0.5]}
</code></pre>
<p>I tried this:</p>
<pre><code>Table [Show [Graphics3D [Cylinder [{{0., 0., 0}, {0., 0., l}}, 1]], PolyhedronData [{"Prism", 6}]], {l, 0.1 , 3, 0.1}]
</code></pre>
| Rohit Namjoshi | 58,370 | <p>The nested lists are saved as strings so <code>ToExpression</code> has to be mapped.</p>
<pre><code>data = {{1, {{2, 3, 4}, {5, 6, 7, {8, 9}}}, 10, {11, 12}}};
export = ExportString[data, "CSV"];
import = ImportString[export, "CSV"];
importData = import // Map[ToExpression]
importData == data
(* True *)
</code></pre>
|
1,272,647 | <p>I guess it's a simple question, but it really escaped my memory.</p>
<p>If $a + b =1$, then how can I call those $a$ and $b$ numbers? </p>
<p>$a$ is not an inversion of $b$, and it's not reciprocal of $b$.. but I'm sure that they do have a 'name'.</p>
| k1.M | 132,351 | <p>This pair of numbers has no special name in mathematics, as far as I know, because mathematicians usually use $\lambda$ and $1-\lambda$ instead of $a$ and $b$, so there is no need for a special name...</p>
|
1,272,647 | <p>I guess it's a simple question, but it really escaped my memory.</p>
<p>If $a + b =1$, then how can I call those $a$ and $b$ numbers? </p>
<p>$a$ is not an inversion of $b$, and it's not reciprocal of $b$.. but I'm sure that they do have a 'name'.</p>
| quapka | 112,628 | <p>I considered posting this as just a comment, because I am quite sure it does not really answer the question (though it is a bit related). But since I tend to write a lot, I've chosen to post an answer.</p>
<p><strong>Background</strong></p>
<p>Consider $V$ a vector space over the field $\mathbb{K}$ (e.g. $\mathbb{K} = \mathbb{C}$ and $V = \mathbb{C}^n, n \in \mathbb{N}$). Then there is the well-known notion of a <em>linear combination</em> of vectors. If $\alpha_i \in \mathbb{K}, v_i\in V, i = 1, \ldots, n$, then a linear combination is the expression</p>
<p>$$
\sum_{i=1}^n \alpha_i v_i = \alpha_1 v_1 + \alpha_2 v_2 + \ldots + \alpha_n v_n
$$
when the vectors stand alone. Or, for some $x\in V$, we say that $x$ is a linear combination of the $v_i$ if</p>
<p>$$
x= \sum_{i=1}^n \alpha_i v_i.
$$</p>
<p><strong>Answer</strong></p>
<p>Now, to the question. Besides the linear combination, there is a special kind of linear combination called an <em>affine combination</em>. We again have $\alpha_i \in \mathbb{K}, v_i \in V$. And we say that this</p>
<p>$$
\tag{1}
\sum_{i=1}^n \alpha_i v_i
$$</p>
<p>is an affine combination if an additional condition holds, namely</p>
<p>$$
\tag{2}
\sum_{i=1}^n \alpha_i = 1.
$$</p>
<p>So, there is not really a name for those $\alpha_i$ (maybe just <em>affine coefficients</em>), but the expression (1) is called <strong>affine combination</strong> (of $v_i$) if the condition (2) holds.</p>
<p>In your case $a, b$ might be those <em>affine coefficients</em> for some vectors in general.</p>
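<p>One defining feature of affine combinations, which a short Python sketch (names mine) can illustrate, is that they commute with translations precisely because the coefficients sum to $1$:</p>

```python
# Affine combination of points in R^2; the coefficients sum to 1.
def combine(coeffs, points):
    return tuple(sum(a * p[k] for a, p in zip(coeffs, points)) for k in range(2))

coeffs = (0.5, 0.25, 0.25)                 # 0.5 + 0.25 + 0.25 = 1
points = [(1.0, 2.0), (3.0, -1.0), (0.0, 4.0)]
t = (10.0, -7.0)                           # a translation vector

shifted = [(x + t[0], y + t[1]) for (x, y) in points]
a = combine(coeffs, shifted)
b = combine(coeffs, points)
assert a == (b[0] + t[0], b[1] + t[1])     # the translation passes through
```

<p>If the coefficients did not sum to $1$, the translation would be scaled by $\sum_i \alpha_i$ instead of passing through unchanged.</p>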
|
128,122 | <p>Original Question: Suppose that $X$ and $Y$ are metric spaces and that $f:X \rightarrow Y$. If $X$ is compact and connected, and if to every $x\in X$ there corresponds an open ball $B_{x}$ such that $x\in B_{x}$ and $f(y)=f(x)$ for all $y\in B_{x}$, prove that f is constant on $X$. </p>
<p>Here's my attempt:
Cover $X$ by $\bigcup _{x \in X}B_x$.
Since $X$ is compact there is a finite sub-covering $\bigcup _{i=1}^{N}B_{x_i}$ of $X$.
Given $x\in X$ there is an $i$ between $1$ and $N$ such that $x\in B_{x_{i}}$.
By assumption $f(x)=f(x_{i})$. Since there are only finitely many balls covering $X$, $f(X)$ is finite, say $f(X)=\{a_{1},...a_{m}\}$.</p>
<p>Where do I go from here, I want to show that f(X) is a singleton. Is $X$ a singleton too?</p>
| Bill Cook | 16,423 | <p>Notice that since $f(X)$ is a finite set and $Y$ is Hausdorff you can cover each $a_i$ with an open ball disjoint from all others. For example: let $B_i = B_{a_i}(\epsilon_i)$ where $\epsilon_i$ is $1/4$ the minimal distance between $a_i$ and the rest of the $a$'s (this exists because you have a finite set). Then you're pretty much done. Just note:</p>
<ul>
<li>$f^{-1}(B_i)$ is open because $B_i$ is open and $f$ is continuous.</li>
<li>$f^{-1}(\{a_i\})$ is closed because $\{a_i\}$ is closed and $f$ is continuous.</li>
<li>$f^{-1}(\{a_i\})=f^{-1}(B_i)$ since the only element of the range contained in $B_i$ is $a_i$.</li>
</ul>
<p>Therefore, each $C_i=f^{-1}(\{a_i\})=f^{-1}(B_i)$ is open and closed. But for $i\not=j$, $C_i$ and $C_j$ are disjoint and their union is all of $X$. So if there is more than one of them, $X$ is not connected! </p>
|
85,814 | <p>how to solve $\pm y \equiv 2x+1 \pmod {13}$ with Chinese remainder theorem or iterative method?</p>
<p>It comes from solving $x^2+x+1 \equiv 0 \pmod {13}$ (*), and the background is the following:</p>
<blockquote>
<p>$13$ is prime. (*) holds, by Euclid's lemma, if and only if $4(x^2+x+1) \equiv 0 \pmod {13}$,
or if and only if $(2x+1)^2 \equiv -3 \pmod {13}$. So if $p=13$, then by Euler's criterion
$[ \frac{-3}{13} ] \equiv (-3)^{\frac{13-1}{2}} = (-3)^6 = 9^3 \equiv (-4)^3 =-64 \equiv 1 \pmod{13} $. Hence the equation $y^2 \equiv -3 \pmod{13}$ has two incongruent solutions (Lemma 4.1.3) $\pm y$, so solutions of the equations $\pm y \equiv 2x+1 \pmod{13}$ are solutions of the equation (*).
So my most important question is: how do you change the equation $\pm y \equiv 2x+1 \pmod{13}$ to the form $ax\equiv b \pmod{13}$, in other words to a form where you can use either the Chinese remainder theorem or the iterative method to solve $\pm y \equiv 2x+1 \pmod{13}$ and finally (*)?
Finally, just out of curiosity: is $[\frac{-3}{7}]\equiv (-3)^3 = -27 \equiv -1 \pmod{7}$? So is $-27 \bmod 7$ equal to $1$ or $-1$?</p>
</blockquote>
| Arturo Magidin | 742 | <p>To solve $x^2 + x + 1 \equiv 0 \pmod{13}$, you can use the usual quadratic formula, interpreted appropriately. The solutions are given by
$$x = \frac{-1 \pm\sqrt{-3}}{2},$$
where "$\sqrt{-3}$" means an integer $y$ such that $y^2\equiv -3\pmod{13}$, and "$\frac{1}{2}$" means an integer $z$ such that $2z\equiv 1\pmod{13}$.</p>
<p>The latter is easy: $z\equiv 7\pmod{13}$ does the job.</p>
<p>For the former, you need $-3$ to be a quadratic residue modulo $13$; it <em>is</em>, because $3\equiv 4^2\pmod{13}$, and $-1\equiv 5^2\pmod{13}$, so $4\times 5 = 20\equiv 7\pmod{13}$ does the trick; indeed, $7^2 = 49 \equiv -3\pmod{13}$.</p>
<p>So the two solutions to $x^2+x+1\equiv 0\pmod{13}$ are:
$$x= (2)^{-1}(-1+\sqrt{-3}) \equiv 7(-1+7) = 7(6) = 42 \equiv 3\pmod{13}$$
and
$$x= (2)^{-1}(-1-\sqrt{-3}) \equiv 7(-1-7) = 7(-8) = -56 \equiv 9\pmod{13}.$$</p>
<p>You can verify this easily:
$$\begin{align*}
3^2 + 3 + 1 &= 9+3+1 = 13\equiv 0\pmod{13},\\
9^2+9+1&=81+9+1 = 91=13\times 7\equiv 0\pmod{13}.
\end{align*}$$</p>
<p>If you want to go your route, you are trying to find values of $x$ such that $(2x+1)^2\equiv -3\pmod{13}$. That means finding the two square roots of $-3$ modulo $13$; as above, they are $7$ and $-7\equiv 6$, so you are looking for the values of $x$ such that $2x+1\equiv 7\pmod{13}$ and $2x+1\equiv 6\pmod{13}$. These translate to $2x\equiv 6\pmod{13}$ and $2x\equiv 5\pmod{13}$. We can solve these by multiplying both sides by $7$ (the multiplicative inverse of $2$), so we get
$$x\equiv 7(6) = 42\equiv 3\pmod{13}\qquad\text{and}\qquad x\equiv 7(5) = 35 \equiv 9\pmod{13},$$
the same answers as above. </p>
<p>I don't know why you are trying to find $\pm y\equiv 2x+1\pmod{13}$. This does not give you what you want.</p>
|
3,263,795 | <p>Let <span class="math-container">$A = (a_{ij})$</span> be an invertible <span class="math-container">$n\times n$</span> matrix. I wonder how to prove that <span class="math-container">$A$</span> is a product of elementary matrices. I suspect that we need to transform it into the identity matrix by using elementary row operations, but how to do it exactly?</p>
<p><strong>P.S.</strong> I've checked questions which could be considered similar and neither of them deals with this exact (general) situation. Please don't mark this question as a duplicate unless you find a precise answer.</p>
| Toby Mak | 285,313 | <p>Adding to John Omielan's answer, <a href="https://math.stackexchange.com/questions/2295360/prove-the-perpendicular-bisector-of-chord-passes-through-the-centre-of-the-circle">according to the perpendicular bisector theorem</a>, if a point is on the perpendicular bisector, then it is equidistant from the segment's endpoints.</p>
<p>The converse of this theorem shows that if the point is equidistant from the endpoints of a segment, then it lies on the perpendicular bisector. Since <span class="math-container">$(x,y)$</span> is the centre of the circle passing through <span class="math-container">$(0,0)$</span> and <span class="math-container">$(a,b)$</span>, then the centre also must lie on the perpendicular bisector.</p>
|
275,308 | <p>Problems with calculating </p>
<p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$</p>
<p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$</p>
<p>Correct answer is -2. Please show where this time I've error. Thanks in advance!</p>
| user 1591719 | 32,016 | <p>$$\lim_{x\to 0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\to 0}\frac{\ln(\cos(2x))}{1-\cos 2x}\times\lim_{x\to 0}\frac{x}{\sin x}\times\lim_{x\to 0}\frac{1-\cos 2x }{(2x)^2}\times4=(-1)\times1\times\frac{1}{2}\times4=-2$$</p>
|
1,025,117 | <p>Let $V$ be finite dim $K-$vector space. If w.r.t. any basis of $V$, the matrix of $f$ is a diagonal matrix, then I need to show that $f=\lambda Id$ for some $\lambda\in K$. </p>
<p>I am trying a simple approach: to show that $(f-\lambda Id)(e_i)=0$ where $(e_1,...,e_2)$ is a basis of $V$. Let the diagonal matrix be given by $diag(\lambda_1,...,\lambda_2).$ Then $$(f-\lambda Id)(e_i)=diag(\lambda_1,...,\lambda_2) (0,0,..,0,1,0,...,0)^T - (0,0,..,0,\lambda ,0,...,0)$$ $$=(0,0,...,0,\lambda_i - \lambda,0,...,0)$$ where $1,\lambda,\lambda_i-\lambda$ are in the $i^{th}$ position. I don't see how to conclude $\lambda_i=\lambda$. </p>
| mfl | 148,513 | <p>Using L'Hospital:</p>
<p>$$\lim_{x\to \infty}x\ \log\left(\frac{x+c}{x-c}\right)=\lim_{x\to \infty}\frac{\log\left(\frac{x+c}{x-c}\right)}{\frac 1x}\underbrace{=}_{\mathrm{L'Hospital}} \lim_{x\to \infty} \frac{\frac{x-c}{x+c}\cdot \frac{-2c}{(x-c)^2}}{-\frac {1}{x^2}}=\lim_{x\to \infty} \frac{x-c}{x+c}\cdot \frac{2cx^2}{(x-c)^2}=2c$$</p>
<p>Without using L'Hospital:</p>
<p>$$\lim_{x\to \infty}x\ \log\left(\frac{x+c}{x-c}\right)=\lim_{x\to \infty} \log\left(1+\frac{2c}{x-c}\right)^x=\log \lim_{x\to \infty}\left(1+\frac{2c}{x-c}\right)^x\\=\log \lim_{x\to \infty}\left(\left(1+\frac{2c}{x-c}\right)^{x-c}\right)^{\frac{x}{x-c}}=\log e^{2c}=2c.$$</p>
|
1,025,117 | <p>Let $V$ be finite dim $K-$vector space. If w.r.t. any basis of $V$, the matrix of $f$ is a diagonal matrix, then I need to show that $f=\lambda Id$ for some $\lambda\in K$. </p>
<p>I am trying a simple approach: to show that $(f-\lambda Id)(e_i)=0$ where $(e_1,...,e_2)$ is a basis of $V$. Let the diagonal matrix be given by $diag(\lambda_1,...,\lambda_2).$ Then $$(f-\lambda Id)(e_i)=diag(\lambda_1,...,\lambda_2) (0,0,..,0,1,0,...,0)^T - (0,0,..,0,\lambda ,0,...,0)$$ $$=(0,0,...,0,\lambda_i - \lambda,0,...,0)$$ where $1,\lambda,\lambda_i-\lambda$ are in the $i^{th}$ position. I don't see how to conclude $\lambda_i=\lambda$. </p>
| Ivo Terek | 118,056 | <p>We have:$$\lim_{x\to \infty}x\ \log\left(\frac{x+c}{x-c}\right) = \lim_{x\to \infty}\ \log\left(\frac{x+c}{x-c}\right)^x = \log \lim_{x \to +\infty}\left(\frac{1+\frac{c}{x}}{1-\frac{c}{x}}\right)^x = \log \lim_{x \to +\infty} \frac{\left(1+\frac{c}{x}\right)^x}{\left(1-\frac{c}{x}\right)^x}$$</p>
<p>Using the fundamental limit for $e$: $$= \log \frac{e^c}{e^{-c}} = \log e^{2c} = 2c.$$</p>
|
1,071,321 | <p>The problem is quite simple to formulate. If you have a large group of people (n > 365), and their birthdays are uniformly distributed over the year (365 days), what's the probability that every day of the year is someone's birthday?</p>
<p>I am thinking that the problem should be equivalent to finding the number of ways to place n unlabeled balls into k labeled boxes, such that all boxes are non-empty, but C((n-k)+k-1, (n-k))/C(n+k-1, n) (C(n,k) being the binomial coefficient) does not yield the correct answer.</p>
| Graham Kemp | 135,106 | <p>Birthday Coverage is basically a <a href="http://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow">Coupon Collector's</a> problem.</p>
<p>You have $n$ people who drew birthdays with repetition, and wish to find the probability that all $365$ different days were drawn among all $n$ people. ($n\geq 365$)</p>
<p>$$\mathsf P(T\leq n)= 365!\; \left\lbrace\begin{matrix}n\\365\end{matrix}\right\rbrace\; 365^{-n} $$</p>
<p>Here the braces indicate a <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow">Stirling number of the second kind</a>, also represented as $\mathrm S(n, 365)$.</p>
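<p>Equivalently, by inclusion–exclusion over the set of missed days, $\mathsf P(T\leq n)=\sum_{j=0}^{365}(-1)^j\binom{365}{j}\left(1-\frac{j}{365}\right)^n$, which agrees with the Stirling-number formula (both count surjections). A Python sketch (the function name is mine):</p>

```python
from math import comb

def coverage_prob(days, n):
    """P(all `days` categories appear among n uniform draws),
    by inclusion-exclusion over the set of missed categories."""
    return sum((-1) ** j * comb(days, j) * (1 - j / days) ** n
               for j in range(days + 1))

# Small exact case: days = 3, n = 3 gives 3!/3^3 = 6/27.
assert abs(coverage_prob(3, 3) - 6 / 27) < 1e-12

# With 3000 people, every day of the year is covered with probability ~0.907.
p = coverage_prob(365, 3000)
assert 0.90 < p < 0.92
```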
|
1,470,819 | <p>Let $f$ be defined (and real-valued) on $[a,b]$. For any $x\in [a,b]$ form a quotient $$\phi(t)=\dfrac{f(t)-f(x)}{t-x} \quad (a<t<b, t\neq x),$$ and define $$f'(x)=\lim \limits_{t\to x}\phi(t),$$ provided this limit exists in accordance with Defintion 4.1. </p>
<p>I have one question. Why Rudin considers $t\in (a,b)$? What would be if $t\in [a,b]?$</p>
| user133281 | 133,281 | <p>Let the endpoints be $a$ and $b$. In the semilog plot, we plot $xf(x) = e^y f(e^y)$ as a function of $y = \log(x)$. The endpoints then are $\log(a)$ and $\log(b)$ and the area under the curve is
$$
\int_{\log(a)}^{\log(b)} e^y f(e^y) \, dy = \int_{a}^{b} f(x) \, dx = Q
$$
which follows from the <a href="https://en.wikipedia.org/wiki/Integration_by_substitution" rel="nofollow">substitution rule</a>.</p>
|
320,704 | <p>Given an irrational <span class="math-container">$a$</span>, the sequence <span class="math-container">$b_n := na$</span> is dense and equidistributed in <span class="math-container">$\mathbb S^1$</span> where we view <span class="math-container">$\mathbb S^1$</span> as <span class="math-container">$[0, 1]$</span> with its endpoints identified. </p>
<p>Given a point <span class="math-container">$p$</span> in <span class="math-container">$\mathbb S^1$</span>, can we obtain a quantitative upper bound (that can depend on <span class="math-container">$a, p, e$</span>) on the smallest <span class="math-container">$n$</span> such that <span class="math-container">$na$</span> is in <span class="math-container">$B_e (p)$</span>?</p>
| Aaron Meyerowitz | 8,008 | <p>I think that the answer from @burtonpeterj is in fact pretty much best possible. Note that it does not utilize <span class="math-container">$p.$</span> I don't think there is a way to work general values of <span class="math-container">$p$</span> into the estimate.</p>
<p>If you take the fractional parts of <span class="math-container">$0,a,2a,3a,\cdots,na$</span> they divide <span class="math-container">$[0,1]$</span> into sub-intervals. The <a href="https://en.wikipedia.org/wiki/Three-gap_theorem" rel="nofollow noreferrer">three gap theorem</a> states that there will be at most three distinct lengths. The particular lengths can be determined from the convergents mentioned above.</p>
<p>This allows us to find the smallest <span class="math-container">$n$</span> so that all the sub-intervals have length <span class="math-container">$2e$</span> or less. Then this would work for all values of <span class="math-container">$p$</span> in the unit interval and be the right one for <span class="math-container">$p$</span> being the middle of the last interval of longer length which get split (and any points close enough to it.)</p>
<p>Here are a few special cases which illustrate what can happen.</p>
<ul>
<li><p>If <span class="math-container">$a=\alpha$</span> is positive but much smaller than <span class="math-container">$e$</span> (<span class="math-container">$\alpha \ll e$</span>) then at about <span class="math-container">$n=\frac{1-2e}{\alpha}$</span> we have lots of intervals of length <span class="math-container">$\alpha$</span> and one just barely under <span class="math-container">$2e.$</span></p></li>
<li><p>The case that <span class="math-container">$a=\frac{p}{q}$</span> is rational doesn't really fit the question but, ignoring that, we see that at <span class="math-container">$n=q-1$</span> there is an interval of length <span class="math-container">$\frac2{q}$</span> which gets split exactly in half. Then we just get the points we had before.</p></li>
<li><p>If <span class="math-container">$a=\frac{p}{q}+\alpha$</span> is irrational with <span class="math-container">$|\alpha| \ll e \ll \frac1q $</span> then up to about <span class="math-container">$n=q$</span> it is just about like for <span class="math-container">$a=\frac{p}{q}$</span> and at that point we have <span class="math-container">$q$</span> subintervals all of length about <span class="math-container">$\frac1q.$</span> Then the following fractional parts start chipping off new subintervals of length <span class="math-container">$|\alpha|.$</span> </p></li>
</ul>
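<p>A quick empirical illustration of the three gap theorem (my own sketch), using the golden ratio as the rotation number:</p>

```python
from math import sqrt

a = (sqrt(5) - 1) / 2                        # an irrational rotation number

for n in (5, 13, 30, 100):
    pts = sorted((k * a) % 1.0 for k in range(n + 1))
    # n + 1 gaps on the circle, including the wrap-around gap.
    gaps = [pts[i + 1] - pts[i] for i in range(n)]
    gaps.append(1.0 - pts[-1] + pts[0])
    gaps.sort()
    # Count distinct gap lengths, merging floating-point noise.
    distinct = 1 + sum(1 for i in range(len(gaps) - 1)
                       if gaps[i + 1] - gaps[i] > 1e-9)
    assert distinct <= 3                     # the three gap theorem
```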
|
320,619 | <p>Consider $\ell^\infty $ the vector space of real bounded sequences endowed with the sup norm, that is
$||x|| = \sup_n |x_n|$ where $x = (x_n)_{n \in \Bbb N}$. </p>
<p>Prove that $B'(0,1) = \{x \in l^\infty : ||x|| \le 1\} $ is not compact.</p>
<p>Now, we are given a hint that we can use the equivalence of sequential compactness and compactness without proof. </p>
<p>However, I don't understand how sequences of sequences work? Do I need to find a set of sequences and order them such that they do not converge to the same sequence? </p>
<p>Does the sequence $(y_n)$, where $y_n$ is the sequence that is $0$ in every coordinate except the $n$th, where it is $1$, satisfy the requirement that it does not have a convergent subsequence? </p>
<p>I think I have probably just confused myself with this sequence of sequences lark. Sorry and thanks. </p>
| saz | 36,150 | <p>You have to find a sequence in $B'(0,1)$ which does not contain a convergent subsequence. Because the existenc of such a sequence implies (right from the definition) that $B'(0,1)$ isn't sequentially compact.</p>
<p><strong>Hint</strong> Consider the sequence $(y^n)_n$ defined by $$y^n_k := \begin{cases} 1 & n=k \\0 & n \not= k \end{cases} \qquad (k \in \mathbb{N})$$ Then $y^n \in B'(0,1)$ for all $n \in \mathbb{N}$ and $\|y^n-y^m\|_{\infty}=1$ for $m \not= n$. This means that $(y^n)_n$ doesn't contain a Cauchy-subsequence, hence in particular no convergent subsequence.</p>
|
3,420,459 | <p>I have this <a href="https://i.stack.imgur.com/ew8Id.png" rel="nofollow noreferrer">question:</a></p>
<blockquote>
<p>If <span class="math-container">$a\otimes b=a^b-b^a$</span>, what is <span class="math-container">$(3\otimes 2)\otimes (4\otimes 1)$</span>?</p>
</blockquote>
<p>The answer in the solution set I was given is <span class="math-container">$-2,$</span> but I'm not sure how to get there, and I can't find any sort of similar problems online. Can anyone explain the process of solving this and how to find more resources for solving these problems?</p>
| K B Dave | 534,616 | <p>If the problem had been posed this way:</p>
<blockquote>
<p>Let <span class="math-container">$f$</span> be the function with</p>
<ul>
<li>domain pairs of positive numbers, </li>
<li>range all real numbers, and</li>
<li>value at <span class="math-container">$(a,b)$</span> given by <span class="math-container">$$\begin{align} f(a,b) &= a^b-b^a \end{align}$$</span>
and suppose that the numbers <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are given by
<span class="math-container">$$\begin{align} u &= f(3,2) & v &= f(4,1) \end{align}$$</span>
What is the value of <span class="math-container">$f(u,v)$</span>?</li>
</ul>
</blockquote>
<p>Would you be able to solve it?</p>
|
536,805 | <p>Three cards are drawn sequentially from a deck that contains 16 cards numbered 1 to 16 in an arbitrary order. Suppose the first card drawn is a 6.</p>
<p>Define the event of interest, $A$, as the set of all increasing 3-card sequences, i.e. $A=\{(x_1,x_2,x_3) \mid x_1 < x_2 < x_3\}$, where $x_1,x_2,x_3\in\{1,\cdots,16\}$. Define event $B$ as the set of 3-card sequences that start with 6, i.e. $B=\{(x_1,x_2,x_3) \mid x_1=6\}$, or simply $B=\{(6,x_2,x_3)\}$.</p>
<p>Let $S_{x_3=t}$ represent the subset $\{(6,x_2,t) \mid 6 \lt x_2 \lt t\}$; then $|A \cap B|=\displaystyle\sum_{t=8}^{16}|S_{x_3=t}|$.</p>
<p>Then what is $|A\cap B|$?</p>
| Henry | 6,460 | <p>Hint: </p>
<ul>
<li>$S_{x_3=8} = \{(6,7,8)\}$</li>
<li>$S_{x_3=9} = \{(6,7,9),(6,8,9)\}$</li>
<li>$S_{x_3=t} = \{(6,7,t),\ldots,(6,t-1,t)\}$</li>
<li>Count each</li>
<li>Add up the counts</li>
</ul>
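<p>Carrying out the hint: $|S_{x_3=t}| = t-7$, so $|A\cap B| = \sum_{t=8}^{16}(t-7) = 1+2+\cdots+9 = 45$. A brute-force Python check (my own sketch) confirms this:</p>

```python
# Count increasing 3-card sequences from {1,...,16} that start with 6.
count = sum(1 for x2 in range(1, 17) for x3 in range(1, 17) if 6 < x2 < x3)
assert count == sum(t - 7 for t in range(8, 17)) == 45
```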
|
891,137 | <p>Count all $n$-length strings of digits $0, 1,\dots, m$ that have an equal number of $0$'s and $1$'s. Is there a closed form expression?</p>
| Jack D'Aurizio | 44,121 | <p>$f(x)=\log(1+x)$ is a concave function over $(-1,+\infty)$, since its second derivative equals:
$$f''(x) = -\frac{1}{(1+x)^2}<0.$$
Concavity implies that the graph of $f(x)$ always lies under the graph of any tangent line. </p>
<p>So, consider the equation of the tangent line in $x=0$ in order to have:
$$\forall x\in(-1,+\infty),\qquad \log(x+1)\leq x.$$</p>
<hr>
<p>Equivalently, you can exploit the convexity of $e^x$ to prove that:
$$\forall y\in\mathbb{R},\qquad e^{y}\geq y+1,$$
then take the logarithm of both members.</p>
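<p>A quick numerical check of the inequality over a range of points (my own sketch):</p>

```python
from math import log

# log(1+x) <= x for all x > -1, with equality only at x = 0.
for x in (-0.99, -0.5, -0.1, 0.0, 0.1, 1.0, 10.0, 1e6):
    assert log(1 + x) <= x
assert log(1 + 0.0) == 0.0    # equality at the tangency point
```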
|
891,137 | <p>Count all $n$-length strings of digits $0, 1,\dots, m$ that have an equal number of $0$'s and $1$'s. Is there a closed form expression?</p>
| Varun Iyer | 118,690 | <p>So the inequality is:</p>
<p>$$\log(1+x) \le x$$</p>
<p>First, notice that $x > -1$ for all $x$ in the function $f(x) = \log(1+x)$</p>
<p>Now start plugging in values, and see that for any $x$ with $x > -1$, we have $x > \log(1+x)$, except when $x=0$, in which case $x = \log(1+x)$.</p>
<p>So the solution set is $$x > -1$$</p>
<p><strong>EDIT</strong></p>
<p>For a calculus approach.</p>
<p>Use the derivative.</p>
<p>$$f'(x) = \frac{1}{1+x}$$</p>
<p>and, for $g(x) = x$,</p>
<p>$$g'(x) = 1$$</p>
<p>Now, when $x > -1$, both functions are increasing. However, as $x\to\infty$, the fraction $\frac{1}{1+x} \to 0$ and will never be $>1$. Therefore, since the logarithm function is never increasing as fast as $f(x) = x$, you can conclude that the inequality is satisfied.</p>
|
891,137 | <p>Count all $n$-length strings of digits $0, 1,\dots, m$ that have an equal number of $0$'s and $1$'s. Is there a closed form expression?</p>
| André Nicolas | 6,312 | <p><strong>A start:</strong> Let $f(x)=x-\ln(1+x)$. Note that $f(0)=0$. Use the <em>derivative</em> of $f(x)$ to conclude that $f(x)$ reaches a minimum at $x=0$.</p>
|
3,369,438 | <p><span class="math-container">$\begin{array}{|l} \forall xP(x) \vee \forall x \neg P(x) \quad premise
\\ \exists xQ(x) \rightarrow \neg P(x) \quad premise \\ \forall xQ(x) \quad premise
\\\hline \begin{array}{|l} \forall xP(x) \quad assumption
\\\hline \vdots \quad
\\ \forall x\neg P(x) \quad \end{array}
\\\begin{array}{|l} \forall x\neg P(x) \quad assumption \\\hline \end{array}
\\ \forall x \neg P(x) \quad \vee elim\\ \end{array}$</span></p>
<p>Am I on the right path to solving this or how should I be thinking about it? I think I may have strayed from the path in trying to use <span class="math-container">$\vee$</span> elim, just didn't see a different path to take. </p>
| Graham Kemp | 135,106 | <p>Hint: you have an existential and a universal in the premises. Assume a witness for the existence and see what happens under the assumption of <span class="math-container">$\forall x~P(x)$</span>.</p>
<p><span class="math-container">$\begin{array}{|l} \forall xP(x) \vee \forall x \neg P(x) \quad premise
\\ \exists xQ(x) \rightarrow \neg P(x) \quad premise \\ \forall xQ(x) \quad premise
\\\hline \begin{array}{|l} \forall xP(x) \quad assumption\\\hline \begin{array}{|l}[c]~Q(c)\to\neg P(c)\quad assumption\\\hline \vdots \end{array}\\\ldots \qquad existential~elimination \\ \vdots
\\ \forall x\neg P(x) \quad \end{array}
\\\begin{array}{|l} \forall x\neg P(x) \quad assumption
\\\hline \end{array}
\\ \forall x \neg P(x) \quad \vee elim\\ \end{array}$</span></p>
|
2,056,499 | <p>I am trying to prove that if
$$
\lim_{x \to c} (f(x)) = L_1
\\ \lim_{x \to c} (g(x)) = L_2
\\ L_1, L_2 \geq 0
$$
Then
$$
\lim_{x \to c} f(x)^{g(x)} = (L_1)^{L_2}
$$</p>
<p>I am doing this for fun, and my prof said that it shouldn't be too hard, but all I got so far is
$$
\forall \epsilon >0 \ \exists \delta > 0 : \text{if}\ \ |x-c|<\delta\ \ \ \text{then}\ |P(x)-L|<\epsilon
\\ |f(x)^{g(x)} - (L_1)^{L_2}| < \epsilon
$$
I have no idea how to proceed. Can someone help me out? I started by defining h(x) as $$(f(x))^{(g(x))}$$ but I couldn't go anywhere with that without basically defining the limit of h(x) as x approaches c to be L1^L2</p>
| RyRy the Fly Guy | 412,727 | <p>Given
<span class="math-container">$$
\lim_{x \to c} (f(x)) = L_1
\\ \lim_{x \to c} (g(x)) = L_2
$$</span></p>
<p>as long as <span class="math-container">$L_1 > 0$</span>, then your statement is true as follows:</p>
<p><span class="math-container">$$\lim_{x \rightarrow c} f(x)^{g(x)} = \lim_{x \rightarrow c} e^{\ln f(x)^{g(x)}} = \lim_{x \rightarrow c} e^{g(x) \ln f(x)}$$</span></p>
<p>Now we take the limit of a composed function, and since the exponential function is continuous on <span class="math-container">$\mathbb{R}$</span>,</p>
<p><span class="math-container">$$\lim_{x \rightarrow c} e^{g(x) \ln f(x)} = e^{\lim_{x \rightarrow c}g(x) \ln f(x)}$$</span></p>
<p>Let's focus on computing the exponent...</p>
<p><span class="math-container">$$\lim_{x \rightarrow c}[g(x) \ln f(x)] = \lim_{x \rightarrow c}[g(x)] \cdot \lim_{x \rightarrow c} [\ln f(x)] = \lim_{x \rightarrow c}[g(x)] \cdot \ln \lim_{x \rightarrow c} [f(x)]$$</span></p>
<p>Note the expression <span class="math-container">$\ln \lim_{x \rightarrow c} [f(x)] = \ln L_1$</span> is the reason we must restrict <span class="math-container">$L_1 > 0$</span>. We continue</p>
<p><span class="math-container">$$\lim_{x \rightarrow c}[g(x)] \cdot \ln \lim_{x \rightarrow c} [f(x)] = L_2 \cdot \ln L_1 = \ln L_1^{L_2} $$</span></p>
<p>Now, let's combine the base with the exponent...</p>
<p><span class="math-container">$$e^{\lim_{x \rightarrow c}g(x) \ln f(x)} = e^{\ln L_1^{L_2}} = L_1^{L_2}$$</span></p>
<p>Hence, </p>
<p><span class="math-container">$$\lim_{x \rightarrow c} f(x)^{g(x)} = L_1^{L_2}$$</span></p>
|
35,964 | <p>This is kind of an odd question, but can somebody please tell me that I am crazy with the following question, I did the math, and what I am told to prove is simply wrong:</p>
<p>Question:
Show that a ball dropped from height of <em>h</em> feet and bounces in such a way that each bounce is $\frac34$ of the height of the bounce before travels a total distance of 7 <em>h</em> feet.</p>
<p>My Work:
$$\sum_{n=0}^{\infty} h \left(\frac34\right)^n = 4h$$</p>
<p>Obviously 4 <em>h</em> does not equal 7 <em>h</em> . What does the community get?</p>
<p>I know that my calculations are correct, see Wolfram Alpha and it confirms my calculations, that only leaves my formula, or the teacher being incorrect...</p>
<p>Edit:
Thanks everyone for pointing out my flaw, it should be something like:
$$-h + \sum_{n=0}^{\infty} 2h \left(\frac34\right)^n = 7h$$
<p>Thanks in advance for any help!</p>
| Yuval Filmus | 1,277 | <p>Note that when the ball bounces it goes both up and down. So from the second term onwards, you need to count each term twice. Therefore the answer is $2 \cdot 4h - h = 7h$ ($h$ is the first term, which is only counted once).</p>
|
35,964 | <p>This is kind of an odd question, but can somebody please tell me that I am crazy with the following question, I did the math, and what I am told to prove is simply wrong:</p>
<p>Question:
Show that a ball dropped from height of <em>h</em> feet and bounces in such a way that each bounce is $\frac34$ of the height of the bounce before travels a total distance of 7 <em>h</em> feet.</p>
<p>My Work:
$$\sum_{n=0}^{\infty} h \left(\frac34\right)^n = 4h$$</p>
<p>Obviously 4 <em>h</em> does not equal 7 <em>h</em> . What does the community get?</p>
<p>I know that my calculations are correct, see Wolfram Alpha and it confirms my calculations, that only leaves my formula, or the teacher being incorrect...</p>
<p>Edit:
Thanks everyone for pointing out my flaw, it should be something like:
$$-h + \sum_{n=0}^{\infty} 2h \left(\frac34\right)^n = 7h$$</p>
<p>Thanks in advance for any help!</p>
| Arturo Magidin | 742 | <p>Your computation does not give the <em>total distance traveled</em>, it only gives the distance it traveled <em>downward.</em></p>
<p>The ball first falls $h$. Then it <em>rises</em> $\frac{3}{4}h$, and falls $\frac{3}{4}h$ again; then it <em>rises</em> $(\frac{3}{4})^2h$, and falls that much again. Etc.</p>
<p>So the total distance traveled by the ball is
$$h + 2h\left(\frac{3}{4}\right) + 2h\left(\frac{3}{4}\right)^2 + \cdots = h + \sum_{n=1}^{\infty} \frac{3h}{2}\left(\frac{3}{4}\right)^{n-1}.$$</p>
<p>This gives, by the usual formula, a total distance of:
$$ h + \frac{\quad\frac{3h}{2}\quad}{1 - \frac{3}{4}} = h + \frac{\quad\frac{3h}{2}\quad}{\frac{1}{4}} = h + 6h = 7h.$$</p>
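<p>As a quick numeric sanity check of this decomposition (a Python sketch, taking $h = 1$ for concreteness):</p>

```python
# The ball falls h once; then each bounce n = 1, 2, 3, ... adds an
# up-and-down pair of length 2*h*(3/4)**n.
h = 1.0
total = h + sum(2 * h * (3 / 4) ** n for n in range(1, 200))
print(round(total, 6))  # 7.0
```

<p>Truncating at 200 terms already agrees with $7h$ to many decimal places, since the tail of the geometric series is negligible.</p>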
|
40,500 | <blockquote>
<p>What are the most
fundamental/useful/interesting ways in
which the concepts of Brownian motion,
martingales and markov chains are
related?</p>
</blockquote>
<p>I'm a graduate student doing a crash course in probability and stochastic analysis. At the moment, the world of probability is a confusing blur, but I'm starting with a grounding in the basic theory of markov chains, martingales and Brownian motion. While I've done a fair amount of analysis, I have almost no experience in these other matters and while understanding the definitions on their own isn't too difficult, the big picture is a long way away.</p>
<p>I would like to <strong>gather together results and heuristics</strong>, each of which links together two or more of Brownian motion, martingales and Markov chains in some way. Answers which <strong>relate probability to real or complex analysis</strong> would also be welcome, such as "Result X about martingales is much like the basic fact Y about sequences".</p>
<p>The thread may go on to contain a Big List in which each answer is the posters' favourite as yet unspecified result of the form "This expression related to a markov chain is always a martingale because blah. It represents the intuitive idea that blah".</p>
<p>Because I know little, I can't gauge the worthiness of this question very well so apologies in advance if it is deemed untenable by the MO police.</p>
| Steven Heston | 6,769 | <p>Girsanov's Theorem.</p>
|
<p>I am trying to show that $A=\{(x,y) \in \Bbb{R}^2 \mid -1 < x < 1, -1< y < 1 \}$ is an open set algebraically. </p>
<p>Let $a_0 = (x_o,y_o) \in A$. Suppose that $r = \min\{1-|x_o|, 1-|y_o|\}$ then choose $a = (x,y) \in D_r(a_0)$. Then</p>
<p>Edit: I am looking for the proof of the algebraic implication that $\|a-a_0\| = \sqrt {(x-x_o)^2+(y-y_o)^2} < r \Rightarrow|x| < 1 , |y| < 1 $</p>
| Parcly Taxel | 357,390 | <p>The question states that all the points are on the circumference. Three non-collinear points already determine a unique circle passing through them, so any three of the four given points may be chosen and the fourth will automatically lie on the circle.</p>
<p>The problem of finding the centre of the circle through three points is well-known. <a href="https://en.wikipedia.org/wiki/Circumscribed_circle#Circumcenter_coordinates" rel="nofollow">Wikipedia</a> itself gives the following solution $(R_x,R_y)$ for the three points $(A_x,A_y), (B_x,B_y), (C_x,C_y)$:</p>
<p>$$R_x = \left[(A_x^2 + A_y^2)(B_y - C_y) + (B_x^2 + B_y^2)(C_y - A_y) + (C_x^2 + C_y^2)(A_y - B_y)\right] / D$$
$$R_y = \left[(A_x^2 + A_y^2)(C_x - B_x) + (B_x^2 + B_y^2)(A_x - C_x) + (C_x^2 + C_y^2)(B_x - A_x)\right]/ D$$
$$D = 2\left[A_x(B_y - C_y) + B_x(C_y - A_y) + C_x(A_y - B_y)\right]$$</p>
<p>However, there is no circle passing through four or more points <em>in general position</em>. For example, if four points are given as $(0,0),(2,0),(0,2)$ and $(-1,-1)$, the first three points determine a circle with centre $(1,1)$ and radius $\sqrt2$, but the fourth point does not lie on this circle. If this arises, the best you can do is minimise the sum-distance of each point to the circle itself, which becomes a least-squares fitting problem. Many resources for this are available too, like <a href="http://www.math.stonybrook.edu/~scott/Book331/Fitting_circle.html" rel="nofollow">this one</a> from Stony Brook.</p>
|
11,629 | <p>The basic logic course in school gives the impression that logic has both the syntax and the semantics aspects. Recently, I wonder whether the syntax part still plays an essential role in the current studies. Below are some of my observations, I hope the idea from the community can make them more complete.</p>
<p>Model theory: Even though model theory is stated in the language of logic, it can be viewed as the study of local isomorphism (see Poizat's "A course in Model Theory"). The syntax part is therefore a natural (though might be uncomfortable for some) way to view the theory rather than a necessity.</p>
<p>Recursion theory: The object of study is the notion of computability in different context. If we believe in Church-Turing Thesis, then these concept are independent of the formalism chosen. </p>
<p>Set theory: The intimate relationship between large cardinal and determinacy perhaps can suggest that this is a universal phenomenon. Will this phenomenon disappear if we change the language of mathematics to, for example, category theory?</p>
<p>Proof theory: I know too little to say anything.</p>
<p>If the observation is true, is it justified to demand that Turing degrees, and large cardinals receive the same mathematical status as, for example, prime numbers?</p>
| Adam | 2,361 | <p>I would posit that proof theory is exactly the syntactical part of logic, and the other three branches (set theory, model theory, and recursion theory) are what's left when you get rid of the syntax. Which is probably why those three branches get more attention nowadays.</p>
|
1,952,562 | <p>Find an equation of the tangent line to the curve $y = sin(3x) + sin^2 (3x)$ given the point (0,0). Answer is $y = 3x$, but please explain solution steps. </p>
| dantopa | 206,581 | <p><strong>Goal</strong></p>
<p>Find the slope $m$, and intercept $b$, for the line
$$
y = mx + b,
$$
tangent at the origin to the curve
$$
f(x) = \sin ^2(3 x)+\sin (3 x).
$$</p>
<p><strong>Intercept $b$</strong></p>
<p>Because the function goes through the origin, the tangent line will also go through the origin. Therefore the $y-$intercept $b=0$.</p>
<p><strong>Slope $m$</strong></p>
<p>The slope of the tangent line $m$ is, by definition, the same as the slope of the target function at the point of contact. The slope of the function is
$$
f'(x) = 6 \sin (3 x) \cos (3 x) + 3 \cos (3 x).
$$
The problem specifies $x_{*} = 0.$ The point of contact is
$$
\left( x_{*}, f(x_{*}) \right) = \left( 0, 0 \right).
$$
The slope of the function at the point of contact is
$$
f'( x_{*} ) = 6 \sin (3 x_{*}) \cos (3 x_{*}) + 3 \cos (3 x_{*}) = 6\cdot 0 \cdot 0 + 3 \cdot 1 = 3.
$$
The slope of the tangent line is
$$
m = f'( x_{*} ) = 3.
$$</p>
<p><strong>Solution</strong>
The equation of the line tangent to $f(x)$ at $x_{*} = 0$ is
$$
y = mx + b = 3x.
$$</p>
<p><a href="https://i.stack.imgur.com/hEXOm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hEXOm.png" alt="Tangent line"></a></p>
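<p>A central-difference check of the slope (a numeric Python sketch, independent of the symbolic differentiation above):</p>

```python
import math

def f(x):
    return math.sin(3 * x) + math.sin(3 * x) ** 2

# Numeric estimate of f'(0); it should agree with the slope m = 3.
h = 1e-6
slope = (f(h) - f(-h)) / (2 * h)
print(round(slope, 4))  # 3.0
```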
|
4,480,570 | <p>I am reading over the proof of <strong>Lemma 10.32 (Local Frame Criterion for Subbundles)</strong> in Lee's <em>Introduction to Smooth Manifolds</em>.</p>
<p>The lemma says</p>
<blockquote>
<p>Let <span class="math-container">$\pi: E \rightarrow M$</span> be a smooth vector bundle and suppose that for each <span class="math-container">$p\in M$</span> we are given an <span class="math-container">$m$</span>-dimensional linear subspace <span class="math-container">$D_p \subseteq E_p$</span>. Then <span class="math-container">$D = \cup_{p \in M} D_p \subseteq E$</span> is a smooth subbundle of <span class="math-container">$E$</span> iff each point of <span class="math-container">$M$</span> has a neighborhood <span class="math-container">$U$</span> on which there exist smooth local sections <span class="math-container">$\sigma_1, \cdots, \sigma_m: U \rightarrow E$</span> with the property that <span class="math-container">$\sigma_1(q), \cdots, \sigma_m(q)$</span> form a basis for <span class="math-container">$D_q$</span> at each <span class="math-container">$q \in U$</span>.</p>
</blockquote>
<p>Overall I understand the proof of this lemma, besides the part where we need to show that <span class="math-container">$D$</span> is an embedded submanifold with or without boundary of <span class="math-container">$E$</span>. Professor Lee's proof says that</p>
<blockquote>
<p>it suffices to show that each <span class="math-container">$p \in M$</span> has a neighborhood <span class="math-container">$U$</span> such that <span class="math-container">$D \cap \pi^{-1}(U)$</span> is an embedded submanifold (possibly with boundary) in <span class="math-container">$\pi^{-1}(U) \in E$</span>.</p>
</blockquote>
<p>It is not very obvious to me why it is sufficient by showing this. May someone explain the logic to me?</p>
<p>Edit: Here's my attempt to reason it: By Theorem 5.8, if <span class="math-container">$D ∩ \pi^{-1}(U)$</span> is an embedded submanifold in <span class="math-container">$\pi^{-1}(U)$</span>, it satisfies the local k-slice condition. Now because <span class="math-container">$D$</span> is a union of <span class="math-container">$D ∩ \pi^{-1}(U)$</span> over different neighborhoods of <span class="math-container">$p \in M$</span>, it satisfies the local k-slice condition as well, and hence again by Theorem 5.8, <span class="math-container">$D$</span> is an embedded submanifold.</p>
<p>Please let me know if anything is wrong and how it can be corrected.</p>
<p>Thank you very much.</p>
<p>Here's a screenshot of the Lemma and its (partial) proof:
<a href="https://i.stack.imgur.com/87mUp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87mUp.jpg" alt="enter image description here" /></a></p>
| subrosar | 602,170 | <p>Suppose that there is an open cover <span class="math-container">$\{U_i\}$</span> of <span class="math-container">$E$</span> such that <span class="math-container">$D\cap U_i\hookrightarrow E$</span> is a smooth embedding and <span class="math-container">$D\cap U_i$</span> is open in <span class="math-container">$D$</span>. I claim that <span class="math-container">$D\hookrightarrow E$</span> must be a smooth embedding.</p>
<p>Because <span class="math-container">$D\cap U_i$</span> is open in <span class="math-container">$D$</span>, it follows that <span class="math-container">$D\hookrightarrow E$</span> is locally an immersion, so it must be an immersion globally. It is left to show that the inclusion is also a topological embedding.</p>
<p>It suffices to show that there is an open cover <span class="math-container">$\{V_i\}$</span> on <span class="math-container">$D$</span> which also forms an open cover on <span class="math-container">$D\subseteq E$</span> (subspace topology) on which the inclusion is a topological embedding. We were given that the inclusion is a topological embedding on sets of the open cover <span class="math-container">$\{D\cap U_i\}$</span>. Since each <span class="math-container">$U_i$</span> is open in <span class="math-container">$E$</span>, each <span class="math-container">$D\cap U_i$</span> is open in <span class="math-container">$D$</span> with the subspace topology induced by <span class="math-container">$E$</span>.</p>
|
2,362,942 | <p>How could I notate a matrix rotation?</p>
<p>Example:
$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\:\:\: A_{\text{rotated}} = \begin{pmatrix} c & a \\ d & b\end{pmatrix}$. </p>
<p>Notice the whole matrix is "rotated" clockwise. Is there any notation for this, and anyway to compute it generally via basic matrix operations such as addition and multiplication or other?</p>
| Jim Ferry | 458,592 | <p>A bit more generally, how does one express the actions of $D_4$ on a square matrix? The transpose expresses a flip about the main diagonal. Multiplying on the left by the matrix
$$F = \left(\begin{matrix} 0 & 0 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\0 & 1 & \cdots & 0 \\1 & 0 & \cdots & 0 \\ \end{matrix}\right)$$
expresses a flip about the vertical axis. These two operations generate $D_4$. In particular, clockwise rotation of a matrix $A$ may be expressed as $A^T F$.</p>
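<p>In code, right-multiplying the transpose by $F$ amounts to transposing and then reversing each row (a plain-Python sketch):</p>

```python
# Clockwise rotation: transpose, then reverse each row -- the same map
# as A^T F, since right-multiplication by F reverses the column order.
def rotate_cw(M):
    return [list(row)[::-1] for row in zip(*M)]

A = [[1, 2],
     [3, 4]]
print(rotate_cw(A))  # [[3, 1], [4, 2]]
```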
|
46 | <p>I have solved a couple of questions myself in the past, and I think some of them are interesting to the public and will most likely appear in the future. One example for this is the question how to enable antialiasing in the Linux frontend, for which there is no native support right now. My question would now be whether posting these as a new question would be appropriate, and then immediately answer it.</p>
| Robert Cartaino | 80 | <p>If your questions are genuinely interesting or intriguing problems, then posting self-answered questions is generally considered okay. </p>
<p>My only caution this early in a beta is with regard to asking questions simply to
"seed" the site. Stack Exchange sites don't generally need to be seeded for the purpose of adding bulk. Take a look at this blog post:</p>
<p><a href="http://blog.stackoverflow.com/2010/07/area-51-asking-the-first-questions/"><strong>Asking the First Questions</strong></a></p>
<p>But if your question is interesting, certainly… feel free to post.</p>
|
810,190 | <blockquote>
<p>Page 29 of Source 1: Denote the complex conjugate by * :
$\mathbf{u \cdot v} = \sum_{1 \le i \le n} u_i^*v_i = (\mathbf{v \cdot u})^*$</p>
<p><a href="http://www.math.sunysb.edu/~eitan/la13.pdf" rel="nofollow">Page 1 of Source 2:</a> $\mathbf{u \cdot v} = \mathbf{u}^T\mathbf{ \bar{v} }$.</p>
<p><a href="http://www2.math.umd.edu/~hking/Hermitian.pdf" rel="nofollow">Page 1 of Source 3:</a> Denote $\mathbf{u^*} = \mathbf{\bar{u}^T} $. Then $ \mathbf{ <u,v> = u*v = \bar{u}^Tv } $.</p>
</blockquote>
<p>Would someone please explain and elucidate all these differences? Which is right? I'm confused. I believe that $u \cdot v = <u, v>$, if $< >$ is considered as the $\cdot$?</p>
<p>In view of the answer below, which is the most convenient and powerful that I should remember?</p>
| Peter Franek | 62,009 | <p>Source 1. and 3. are identical. It depends on the convention: there is not really a big difference, just in one case it is linear in the first entry and antilinear ($(u, \alpha v)=\bar\alpha (u,v)$) in the second entry (Source 2) and in the other convention, it is antilinear in the first and linear in the second entry (Source 1 and 3). I believe that more common is the convention from source 1 & 3.</p>
|
133,936 | <p>I am trying to understand a part of the following theorem:</p>
<blockquote>
<p><strong>Theorem.</strong> Assume that $f:[a,b]\to\mathbb{R}$ is bounded, and let $c\in(a,b)$. Then, $f$ is integrable on $[a,b]$ if and only if $f$ is integrable on $[a,c]$ and $[c,b]$. In this case, we have
$$\int_a^bf=\int_a^cf+\int_c^bf.$$
<em>Proof.</em> If $f$ is integrable on $[a,b]$, then for every $\epsilon>0$ there exists a partition $P$ such that $U(f,P)-L(f,P)<\epsilon$. Because refining a partition can only potentially bring the upper and lower sums closer together, we can simply add $c$ to $P$ if it is not already there. Then, let $P_1=P\cap[a,c]$ be a partition of $[a,c]$, and $P_2=P\cap[c,b]$ be a partition of $[c,b]$. It follows that
$$U(f,P_1)-L(f,P_1)<\epsilon\text{ and }U(f,P_2)-L(f,P_2)<\epsilon,$$
implying that $f$ is integrable on $[a,c]$ and $[c,b]$.</p>
<p>[...]</p>
</blockquote>
<p>How does that last expression "follow?" Neither $P_1$ nor $P_2$ are refinements of $P$, but they are still somehow less than $\epsilon$; will that not make their difference larger? That is,
$$U(f,P_i)-L(f,P_i)\geqslant U(f,P)-L(f,P),$$
for $i=1,2$? Thanks in advance!</p>
| nullUser | 17,459 | <p>Think about how $U(f,P_1)-L(f,P_1)$ relates to $U(f,P)-L(f,P)$. Suppose $P_1=\{ t_0,t_1,\ldots,t_n \}$ and $P = \{ t_0, t_1, \ldots, t_n,\ldots,t_{n+k}\}$. Then we have</p>
<p>$$
U(f,P_1)-L(f,P_1) = \sum_{i=1}^n (\sup\limits_{[t_{i-1},t_i]}(f)-\inf\limits_{[t_{i-1},t_i]}(f))|t_i-t_{i-1}|
\leq \sum_{i=1}^{n+k} (\sup\limits_{[t_{i-1},t_i]}(f)-\inf\limits_{[t_{i-1},t_i]}(f))|t_i-t_{i-1}| = U(f,P)-L(f,P) < \epsilon
$$
where we used the fact that $\sup(f) - \inf(f)$ is nonnegative on each partition interval.</p>
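<p>For a monotone increasing $f$ the sup and inf on each subinterval are the endpoint values, so the gaps $U - L$ can be computed directly; a small sketch illustrating that each sub-partition gap is at most the full gap:</p>

```python
# U(f,P) - L(f,P) for increasing f: sup = f(right endpoint), inf = f(left).
def gap(f, pts):
    return sum((f(b) - f(a)) * (b - a) for a, b in zip(pts, pts[1:]))

f = lambda x: x * x              # increasing on [0, 2]
P = [0, 0.5, 1, 1.4, 2]          # partition of [0, 2] containing c = 1
P1 = [t for t in P if t <= 1]    # P restricted to [a, c]
P2 = [t for t in P if t >= 1]    # P restricted to [c, b]
print(gap(f, P1), gap(f, P2), gap(f, P))  # the sub-gaps sum to the full gap
```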
|
2,931,762 | <blockquote>
<p>Given a function <span class="math-container">$f$</span> is defined for integers <span class="math-container">$m$</span> and <span class="math-container">$n$</span> as given:
<span class="math-container">$$f(mn) = f(m)\,f(n) - f(m+n) + 1001$$</span>
where either <span class="math-container">$m$</span> or <span class="math-container">$n$</span> is equal to <span class="math-container">$1$</span>, and <span class="math-container">$f(1) = 2$</span>.</p>
<p>The problem itself is to prove that <span class="math-container">$$f(x) = f(x-1) + 1001$$</span> in order to find the value of <span class="math-container">$f(9999)$</span>. </p>
</blockquote>
<p>As such, what I've already tried is:<br>
Replacing <span class="math-container">$n$</span> as <span class="math-container">$1$</span> and <span class="math-container">$m$</span> as <span class="math-container">$x$</span>, and trying to solve from there. </p>
<p><span class="math-container">$$f(x) = f(1) * f(x) - f(x+1) + 1001$$</span>
<span class="math-container">$$f(x) = 2 * f(x) - f(x+1) + 1001,$$</span></p>
<p>Although I'm not sure how I should continue from here, it did occur that the negative sign could be manipulated in some way, but I'm fairly certain that <span class="math-container">$-f(x+1)$</span>, cannot be rewritten as <span class="math-container">$f(-x-1)$</span>, or anything of the sorts.</p>
<p>So if anyone has any thoughts or insight as how I should proceed, it would be greatly appreciated.</p>
| ℋolo | 471,959 | <p>You got $$f(x)=2f(x)-f(x+1)+1001$$Now, let's find $f(x+1)$ using $f(x)$:$$f(x)=2f(x)-f(x+1)+1001\iff f(x+1)=f(x)+1001$$Now, let $y=x+1, x=y-1$ you get $$f(y)=f(y-1)+1001$$</p>
<p>(Note, both $x$ and $y$ are just names of the variables, so if it is more convenient to you, you can rewrite the final line using $x$ instead of $y$)</p>
|
2,931,762 | <blockquote>
<p>Given a function <span class="math-container">$f$</span> is defined for integers <span class="math-container">$m$</span> and <span class="math-container">$n$</span> as given:
<span class="math-container">$$f(mn) = f(m)\,f(n) - f(m+n) + 1001$$</span>
where either <span class="math-container">$m$</span> or <span class="math-container">$n$</span> is equal to <span class="math-container">$1$</span>, and <span class="math-container">$f(1) = 2$</span>.</p>
<p>The problem itself is to prove that <span class="math-container">$$f(x) = f(x-1) + 1001$$</span> in order to find the value of <span class="math-container">$f(9999)$</span>. </p>
</blockquote>
<p>As such, what I've already tried is:<br>
Replacing <span class="math-container">$n$</span> as <span class="math-container">$1$</span> and <span class="math-container">$m$</span> as <span class="math-container">$x$</span>, and trying to solve from there. </p>
<p><span class="math-container">$$f(x) = f(1) * f(x) - f(x+1) + 1001$$</span>
<span class="math-container">$$f(x) = 2 * f(x) - f(x+1) + 1001,$$</span></p>
<p>Although I'm not sure how I should continue from here, it did occur that the negative sign could be manipulated in some way, but I'm fairly certain that <span class="math-container">$-f(x+1)$</span>, cannot be rewritten as <span class="math-container">$f(-x-1)$</span>, or anything of the sorts.</p>
<p>So if anyone has any thoughts or insight as how I should proceed, it would be greatly appreciated.</p>
| fleablood | 280,126 | <blockquote>
<p>Given a function f is defined for integers m and n as given:
f(mn)=f(m)f(n)−f(m+n)+1001
where either m or n is equal to 1</p>
</blockquote>
<p>As multiplication and addition are commutative we can assume, wlog, <span class="math-container">$n = 1$</span>, and that is just a convoluted way of saying <span class="math-container">$f(m) = 2f(m) -f(m+1) +1001$</span>, i.e. <span class="math-container">$f(m+1) = f(m) + 1001$</span>. Which means, by induction, <span class="math-container">$f(m) = f(1) + 1001(m-1)$</span>.</p>
<p>And as <span class="math-container">$f(1) = 2$</span> we have <span class="math-container">$f(m) = 2 + 1001(m-1)$</span> for all <span class="math-container">$m\in \mathbb Z$</span>. </p>
<p>So <span class="math-container">$f(9999) = 2 + 1001*9998=10008000$</span></p>
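<p>The recurrence is easy to confirm by brute force (a quick check, not part of the argument):</p>

```python
# Iterate f(m+1) = f(m) + 1001 starting from f(1) = 2 up to m = 9999;
# this matches the closed form f(m) = 2 + 1001*(m - 1).
f = 2
for m in range(1, 9999):
    f += 1001
print(f)  # 10008000
```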
|
2,304,332 | <p>This is related to <a href="https://math.stackexchange.com/questions/2162294/prove-that-there-exists-f-g-mathbbr-to-mathbbr-such-that-fgx-i/2294324#2294324">Prove that there exists $f,g : \mathbb{R}$ to $\mathbb{R}$ such that $f(g(x))$ is strictly increasing and $g(f(x))$ is strictly decreasing.</a></p>
<p>But according to the proof of Ewan Delanoy, you must use a $p(x)=kx$, $q(x)=-kx$, where $k\neq1$, otherwise the iterative group is trivial, then the proof failed. I think it is still possible to construct such $f,g$ that their compositions are exactly $y=\pm x$.</p>
| José Carlos Santos | 446,262 | <p>If $f(g(x))=x$, then $f$ is surjective and $g$ is injective. And if $g(f(x))=-x$, then $g$ is surjective and $f$ is injective. Therefore, $f$ and $g$ are bijections and then it follows from $f(g(x))=x$ that $g=f^{-1}$. But this contradicts the fact that $g(f(x))=-x$.</p>
|
<p>I'm referring to Sakasegawa's formula for calculating average line length in a queuing system.</p>
<p><a href="https://i.stack.imgur.com/Ydg3Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ydg3Y.png" alt="enter image description here"></a></p>
<p>I don't understand the result intuitively. For example, our server is able to serve 10 customers per hour, and we have 8 arriving every hour. So we have 1 queue and 1 server, and a utilization of 0.8:</p>
<ul>
<li>I think to myself, "we're underutilized, and not running at full capacity, so there should be no line". I.e. average line length = 0</li>
<li>But the actual answer from the formula is 3.2</li>
</ul>
<p>How can we have 3 people lined up if we are able to process them faster than they come in?? I just don't see it intuitively.</p>
| Mark Spearman | 1,078,333 | <p>This formula is for an M/M/m queue which has exponential interarrival times and exponential service times and is an approximation for <span class="math-container">$m>1$</span>.</p>
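<p>For the single-server case in the question ($m = 1$, one exponential server at $\rho = 0.8$), Sakasegawa's approximation reduces to the exact M/M/1 expression $L_q = \rho^2/(1-\rho)$, which is where the 3.2 comes from:</p>

```python
# M/M/1 mean number waiting in queue (excluding the customer in service),
# assuming Poisson arrivals (8/hr) and exponential service (10/hr).
rho = 8 / 10
Lq = rho ** 2 / (1 - rho)
print(round(Lq, 4))  # 3.2
```

<p>The queue is nonempty on average not because capacity is short, but because arrivals and service times are random: idle stretches can never be "banked" against busy bursts.</p>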
|
853,878 | <p>I'm learning calc and after learning about how to differentiate using product rule and chain rule etc. I came across marginal cost and marginal revenue. I'm pretty familiar with cost, profit and revenue. Why would I want to find the marginal revenue (aka rate of change for revenue or cost)? What does this rate of change give me? Without getting too mechanical and mathematical, can someone explain why I want to know the so called rate of change?</p>
| Puzzle_riddle | 161,144 | <p>Great question and there are many potential answers to this question. Let me give you one that from my experience is used extensively.</p>
<p>Suppose you have a firm that produces widgets. A widget sells for \$300, and it costs \$100 per unit, if you only produce 5; the next 5 will cost \$200 per unit to produce, the next 5 \$300, etc.</p>
<p>How much would the firm choose to produce? If you look at total costs, you might reason that it pays the firm to produce 18 units, for it will cost 5*100+5*200+5*300+3*400=4200 and it will sell for 18*300=5400, hence a profit of 5400-4200=1200. Not bad, right?</p>
<p>Well, let us now look at marginal cost. The marginal cost of producing the 15th unit is 300, but the cost of producing the 16th is 400. Just looking at the margin, it is clear that it will not pay to produce more than 15 units. To verify, if the firm produces 15 units, it will cost 5*100+5*200+5*300=3000 and it will sell for 15*300=4500. A profit of \$1500! Much better than \$1200!</p>
<p>That should, I hope, give you some basic intuition as to why thinking in marginal costs is more useful than thinking in terms of total profit.</p>
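<p>The comparison can be checked with a short loop over production levels (a sketch using the example's stepwise marginal-cost schedule):</p>

```python
# Marginal cost: 100 for units 1-5, 200 for units 6-10, 300 for 11-15, ...
def marginal_cost(unit):
    return 100 * ((unit - 1) // 5 + 1)

def profit(q):  # price is 300 per widget
    return 300 * q - sum(marginal_cost(u) for u in range(1, q + 1))

# Profit is flat at 1500 for q = 10..15 (marginal cost = price) and falls after.
best = max(range(21), key=lambda q: (profit(q), q))
print(best, profit(best))  # 15 1500
```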
|
540,217 | <p><strong>Question:</strong></p>
<p>$\int ^1_0 \frac {\ln x}{1-x^2}dx$ - converges or diverges?</p>
<p><strong>What we did:</strong></p>
<p>We tried to compare with $-\frac 1x$ and $-\frac 1{x-1}$ but ended up finding that these convergence tests fail. Our book says this integral diverges, but Wolfram on the other hand says it converges. How come?</p>
| Ron Gordon | 53,268 | <p>There are two potential sources of divergence if the integral were to diverge (it doesn't): at $x=0$ and $x=1$. At $x=0$, the integral behaves as $\ln{x}$, which has antiderivative $x \ln{x}-x$. You may show that the limit of this expression as $x \to 0$ is $0$ using e.g.,L'Hopital. So the integrand is integrable at $x=0$.</p>
<p>At $x=1$, you may show that</p>
<p>$$\lim_{x\to 1} \frac{\ln{x}}{1-x^2} = -\frac12$$</p>
<p>so that the integrand is integrable here as well. Thus, the integral converges.</p>
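<p>As an aside, the value has a closed form: expanding $\frac{1}{1-x^2}=\sum_n x^{2n}$ and using $\int_0^1 x^{2n}\ln x\,dx = -\frac{1}{(2n+1)^2}$ gives $-\pi^2/8$. A crude midpoint rule (which never evaluates the endpoints) reproduces this:</p>

```python
import math

# Midpoint-rule estimate of the (convergent) improper integral.
f = lambda x: math.log(x) / (1 - x * x)
n = 100_000
val = sum(f((k + 0.5) / n) for k in range(n)) / n
print(round(val, 4))  # -1.2337, i.e. -pi^2/8
```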
|
588,488 | <p>I know that for the harmonic series
$\lim_{n \to \infty} \frac1n = 0$ and
$\sum_{n=1}^{\infty} \frac1n = \infty$.</p>
<p>I was just wondering, is there a sequence ($a_n =\dots$) that converges "faster" (I am not entirely sure what's the exact definition here, but I think you know what I mean...) than $\frac1n$ to $0$ and its series $\sum_{n=1}^{\infty}{a_n}= \infty$?</p>
<p>If not, is there proof of that?</p>
| Fixed Point | 30,261 | <p>I had wondered about this too a long time ago and then came across this. The series</p>
<p>$$\sum_{n=3}^{\infty}\frac{1}{n\ln n (\ln\ln n)}=\infty$$</p>
<p>diverges, and this can be proven very easily by the integral test. But here is the kicker: this series requires a googolplex of terms before the partial sum exceeds 10. Talk about slow! It is only natural that if the natural log is slow, taking the natural log of the natural log makes the divergence even slower.</p>
<p>Here is another one. This series</p>
<p>$$\sum_{n=3}^{\infty}\frac{1}{n\ln n (\ln\ln n)^2}=38.43...$$</p>
<p>actually converges, by the same exact (integral) test. But it converges so slowly that this series requires $10^{3.14\times10^{86}}$ terms before obtaining two-digit accuracy. So using these iterated logarithms you can come up with a series which converges or diverges "arbitrarily slowly".</p>
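<p>A short computation drives home how slow the divergence of the first series is (a sketch):</p>

```python
import math

# Partial sum of 1/(n ln n lnln n): even 10^5 terms stay well below 10,
# since the partial sums grow only like ln(ln(ln(N))).
s = 0.0
for n in range(3, 100_001):
    s += 1 / (n * math.log(n) * math.log(math.log(n)))
print(round(s, 2))  # still in single digits
```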
<p>Reference: Zwillinger, D. (Ed.). CRC Standard Mathematical Tables and Formulae, 30th ed. Boca Raton, FL: CRC Press, 1996.</p>
|
249,332 | <p>Consider the following list</p>
<pre><code>list={{{0,0,0},0},{{0,0,1},a},{{0,0,-1},-a},{{1,0,1},b},{{1,0,0},-b},{{1,1,1},a+b},{{1,1,1},a-b},{{-1,0,-1},{-a-b}},{{-1,0,-1},{-a+b}}};
</code></pre>
<p>how can this list be sorted such that a>b>0, therefore the expected result would be</p>
<pre><code>list={{{1,1,1},a+b},{{0,0,1},a},{{1,0,1},b},{{1,1,1},a-b},{{0,0,0},0},{{-1,0,-1},{-a+b}},{{1,0,0},-b},{{0,0,-1},-a},{{-1,0,-1},{-a-b}}};
</code></pre>
<p>in this specific example we also have that <code>2b>a</code> however it is not that important.</p>
| kglr | 125 | <pre><code>ordering = Reverse @
Ordering[list[[All, 2]] /. FindInstance[a > b > 0 && 2 b > a, {a, b}][[1]]];
list[[ordering]]
</code></pre>
<blockquote>
<pre><code>{{{1, 1, 1}, a + b}, {{0, 0, 1}, a}, {{1, 0, 1}, b}, {{1, 1, 1}, a - b},
{{0, 0, 0}, 0}, {{-1, 0, -1}, -a + b}, {{1, 0, 0}, -b}, {{0, 0, -1}, -a},
{{-1, 0, -1}, -a - b}}
</code></pre>
</blockquote>
<p>This should be faster for long lists.</p>
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.