qid int64 (1 to 4.65M) | question large_string (lengths 27 to 36.3k) | author large_string (lengths 3 to 36) | author_id int64 (-1 to 1.16M) | answer large_string (lengths 18 to 63k) |
|---|---|---|---|---|
1,234,726 | <p>How many lattice paths are there from $(0, 0)$ to $(10, 10)$ that do not pass through the point $(5, 5)$ but do pass through $(3, 3)$?</p>
<p>What I have so far:</p>
<p>The number of lattice paths from $(0,0)$ to $(n,k)$ is equal to the binomial coefficient $\binom{n+k}n$ (according to Wikipedia). So the number of lattice paths from $(0, 0)$ to $(10, 10)$ is $\binom{20}{10}$, and the number of lattice paths through $(5, 5)$ is $\binom{10}{5}$, and the number of lattice paths that pass through $(3, 3)$ is $\binom{6}{3}$. What next?</p>
| imok1948 | 468,333 | <p>Let's count:<br>
T = total number of paths from (0,0) to (n,k) = (n+k)!/(n!k!).<br>
A = total number of paths from (0,0) to (a,b) = (a+b)!/(a!b!), where (a,b) is the point to be avoided (in your question that is (5,5)).<br>
B = total number of paths from (a,b) to (n,k) = ((n-a)+(k-b))!/((n-a)!(k-b)!).<br>
Ans = T-(A*B)</p>
<p>If there is more than one point you don't want to visit, then count the paths through each forbidden point in the same way (using inclusion-exclusion when a single path can hit several of them) and subtract from T.</p>
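To sanity-check this subtraction scheme (my addition, not the answerer's), a brute-force dynamic program on a small grid can be compared with the closed form; here, paths from $(0,0)$ to $(4,4)$ avoiding $(2,2)$:

```python
from math import comb

def paths_avoiding(n, k, a, b):
    """Closed form: total paths minus paths forced through (a, b)."""
    total = comb(n + k, n)                                       # T
    through = comb(a + b, a) * comb((n - a) + (k - b), n - a)    # A * B
    return total - through

def paths_avoiding_bruteforce(n, k, a, b):
    """Dynamic programming over the grid, zeroing out the forbidden point."""
    ways = [[0] * (k + 1) for _ in range(n + 1)]
    ways[0][0] = 1
    for i in range(n + 1):
        for j in range(k + 1):
            if (i, j) == (a, b):
                ways[i][j] = 0
                continue
            if i > 0:
                ways[i][j] += ways[i - 1][j]
            if j > 0:
                ways[i][j] += ways[i][j - 1]
    return ways[n][k]

print(paths_avoiding(4, 4, 2, 2))              # 34
print(paths_avoiding_bruteforce(4, 4, 2, 2))   # 34
```

Both methods agree, which supports the T − A·B count for a single forbidden point.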
|
2,589,938 | <p>Suddenly I am confused by a very elementary question:</p>
<blockquote>
<p>Let $a, b, c$ be the sides of a triangle. How about this inequality:
$$ a+b > c. $$</p>
</blockquote>
<p>Is it the definition of a triangle or is it a theorem?</p>
| lhf | 589 | <p>This follows from the law of cosines:
$$
c^2 = a^2+b^2 - 2ab \cos \hat C
\le a^2+b^2 + 2ab
= (a+b)^2
$$</p>
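As a small numerical companion to this argument (mine, not the answerer's): building the third side from the law of cosines for angles strictly between $0$ and $\pi$ always yields $c < a+b$.

```python
import math

def third_side(a, b, gamma):
    """Law of cosines: side opposite the angle gamma between sides a and b."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# Sweep angles; c^2 = a^2 + b^2 - 2ab cos(gamma) <= (a + b)^2 since cos >= -1,
# with strict inequality for a non-degenerate triangle (0 < gamma < pi).
a, b = 3.0, 4.0
for k in range(1, 100):
    gamma = math.pi * k / 100
    c = third_side(a, b, gamma)
    assert c < a + b          # strict triangle inequality
```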
|
3,244,649 | <ul>
<li>Show the set <span class="math-container">$A=\{(m,n)\in \mathbb{N}\times \mathbb{N} : m\leq n\}$</span> is countably infinite.</li>
</ul>
<p>If <span class="math-container">$A$</span> is countable then we need to show that there is a bijection between <span class="math-container">$A$</span> and <span class="math-container">$\mathbb{N}$</span>, but how can I show <span class="math-container">$A$</span> is countably infinite? </p>
<p>Thanks...</p>
| qualcuno | 362,866 | <p>Note that <span class="math-container">$A$</span> can be written as </p>
<p><span class="math-container">$$
A = \bigsqcup_{n \in \mathbb{N}}\{(m,n) : m \leq n\}.
$$</span></p>
<p>That is, <span class="math-container">$A$</span> is the disjoint <strong>countable</strong> union of sets <span class="math-container">$F_n = \{(m,n) : m \leq n\}$</span>. These are finite: for a fixed <span class="math-container">$n$</span>, the set <span class="math-container">$F_n$</span> has <span class="math-container">$n$</span> elements, namely <span class="math-container">$(1,n) , (2,n) \dots, (n,n)$</span>. Thus <span class="math-container">$A$</span> is a countable union of countable sets, which says that <span class="math-container">$A$</span> itself is countable. </p>
<p>To see that it is infinite, observe that the mapping <span class="math-container">$d : n \in \mathbb{N} \mapsto (n,n) \in A$</span> is injective, and so <span class="math-container">$A$</span> cannot have finitely many elements.</p>
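The block decomposition can be turned into an explicit bijection with $\mathbb{N}$ (my sketch, not part of the answer): listing $F_1, F_2, \dots$ in order sends $(m,n)$ to index $\frac{n(n-1)}{2}+m$.

```python
def index_of(m, n):
    """Position of (m, n), 1 <= m <= n, when F_1, F_2, ... are listed in order.
    Block F_n starts right after the 1 + 2 + ... + (n-1) earlier elements."""
    assert 1 <= m <= n
    return n * (n - 1) // 2 + m

# The map hits 1, 2, 3, ... exactly once: enumerate all pairs up to n = 30.
seen = sorted(index_of(m, n) for n in range(1, 31) for m in range(1, n + 1))
assert seen == list(range(1, 30 * 31 // 2 + 1))   # bijection onto an initial segment
```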
|
2,258,139 | <p>A natural number $n>1$ is called <em>good</em> if$$n \mid 2^n+1.$$ For example, $n=3$ is good, as $3 \mid 2^3+1=9$. Prove that if $N_1$ and $N_2$ are good, then:</p>
<ul>
<li>$\mathrm{lcm}(N_1,N_2)$ and $\gcd(N_1,N_2)$ are good,</li>
<li>$N_1\cdot N_2$ is good. </li>
</ul>
<p>This seems pretty difficult for me. Any hints?</p>
| TBTD | 175,165 | <p>Here is a proof of part 1. Assume,
$$
n_1 \mid 2^{n_1}+1 \ \text{and} \ n_2 \mid 2^{n_2}+1.
$$
Denote $d=\gcd(n_1,n_2)$. We have,
$$
2^{n_1}\equiv -1 \pmod{d} \ \text{and} \ 2^{n_2}\equiv -1 \pmod{d}.
$$
Now, from <a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity" rel="nofollow noreferrer">Bézout's identity</a>, there exists $a,b\in\mathbb{Z}$ such that $d=an_1+bn_2$. Since, all of $d,n_1,n_2$ are odd, exactly one of $a,b$ is odd and the other one is even. Thus, $a+b$ is necessarily odd. Now,
$$
2^d \equiv 2^{an_1}2^{bn_2}\equiv (-1)^a(-1)^b \equiv (-1)^{a+b}\equiv -1 \pmod{d},
$$
proving that $(n_1,n_2) \mid 2^{(n_1,n_2)}+1$.</p>
<p>For the story involving the least common multiple, let us assume that the set of all prime divisors of $n_1$ and $n_2$ is $\{p_1,p_2,\dots,p_k\}$ (after taking the union of the prime divisors of $n_1$ and of $n_2$). Hence, by the unique factorization theorem, there exist nonnegative integers $\{\alpha_1,\dots,\alpha_k\}$ and $\{\beta_1,\dots,\beta_k\}$ such that:
$$
n_1 = p_1^{\alpha_1}\cdots p_k^{\alpha_k} \ \text{and} \ n_2 = p_1^{\beta_1}\cdots p_k^{\beta_k}.
$$
Note that some of these numbers can very well be $0$. Now, since
$$
2^{n_1}+1 \mid 2^{[n_1,n_2]}+1 \ \text{and} \ 2^{n_2}+1 \mid 2^{[n_1,n_2]}+1
$$
(these divisibilities hold because $[n_1,n_2]/n_1$ and $[n_1,n_2]/n_2$ are odd, $n_1$ and $n_2$ being odd), we have
$$
n_1 \mid 2^{[n_1,n_2]}+1 \ \text{and} \ n_2 \mid 2^{[n_1,n_2]}+1.
$$
Therefore, in the prime factorization of $2^{[n_1,n_2]}+1$, if the corresponding weights for $p_1,\dots,p_k$ are precisely $\theta_1,\dots,\theta_k$, we have
$$
\theta_i \geq \alpha_i \ \text{and} \ \theta_i\geq\beta_i,\forall i = 1,2,\dots,k \implies \theta_i \geq \max\{\alpha_i,\beta_i\},\forall i.
$$
Since the exponent of the prime $p_i$ in the factorization of $[n_1,n_2]$ is precisely $\max\{\alpha_i,\beta_i\}$, we are done.</p>
<p>Next, we will proceed into part 2. Assuming same notation for the prime decomposition of $n_1$ and $n_2$, we shall prove that
$$
p_i^{\alpha_i+\beta_i}\mid 2^{n_1n_2}+1, \forall i.
$$
Here's how to do it. Start with $p_1$, and assume that $\alpha_1 \geq \beta_1$ (you can do the exact same argument for the other case by swapping only one step below).</p>
<p>$$2^{n_1n_2}+1 = (2^{n_1}+1)\underbrace{((2^{n_1})^{n_2-1}-(2^{n_1})^{n_2-2}+\dots + 1)}_{\triangleq (*)}.$$
Clearly, since $n_1 \mid 2^{n_1}+1$, $p_1 ^{\alpha_1} \mid 2^{n_1}+1$. We will now prove that the second term in the factorization above is divisible by $p_1^{\beta_1}$, and by similar logic for the rest of the $p_2,\dots,p_k$, we will be done.</p>
<p>By our assumption, $\beta_1 \leq \alpha_1$, we have that $p_1^{\beta_1}\mid p_1^{\alpha_1}\mid 2^{n_1}+1$. Hence,
$$
2^{n_1}\equiv -1 \pmod{p_1^{\beta_1}}.
$$
Now, we will compute $(*)$ modulo $p_1^{\beta_1}$. It is easy to see that
$$
(*) \equiv \underbrace{1 + 1 + \dots + 1}_{n_2 \ \text{times}}\equiv n_2 \equiv 0 \pmod{p_1^{\beta_1}}
$$
hence, we are done. </p>
<p>Had it been the case $\beta_1 \geq \alpha_1$, we could have used the equivalent factorization:
$$
(2^{n_2}+1)((2^{n_2})^{n_1-1}-(2^{n_2})^{n_1-2}+\dots + 1)
$$
to arrive at the result. Last step is to execute the exact same steps for $p_2,p_3,\dots,p_k$, in an analogous way, and conclude via the fact that the numbers $z_i \triangleq p_i^{\alpha_i+\beta_i}$ are mutually coprime, namely $(z_i,z_j) = 1$ if $i \neq j$. Since we have shown that each of $z_i$ divides $2^{n_1n_2}+1$ and they are all coprime, it must be the case that their product also divides $2^{n_1n_2}+1$. $\Box$</p>
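The three claims are cheap to spot-check numerically; the sketch below is mine and simply tests the statement on the small good numbers below $300$.

```python
from math import gcd, lcm

def is_good(n):
    """n > 1 is good when n divides 2^n + 1, tested via modular exponentiation."""
    return n > 1 and (pow(2, n, n) + 1) % n == 0

goods = [n for n in range(2, 300) if is_good(n)]   # 3, 9, 27, 81, 171, 243

# Every pair of good numbers should give a good gcd, lcm, and product,
# exactly as proved above.
for n1 in goods:
    for n2 in goods:
        assert is_good(gcd(n1, n2))
        assert is_good(lcm(n1, n2))
        assert is_good(n1 * n2)
```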
|
4,000,062 | <p>A sequence <span class="math-container">$\left\{a_n\right\}$</span> is defined as <span class="math-container">$a_n=a_{n-1}+2a_{n-2}-a_{n-3}$</span> and <span class="math-container">$a_1=a_2=\frac{a_3}{3}=1$</span></p>
<blockquote>
<p>Find the value of <span class="math-container">$$a_1+\frac{a_2}{2}+\frac{a_3}{2^2}+\cdots\infty$$</span></p>
</blockquote>
<p>I actually tried this using the difference-equation method. Let the solution be of the form <span class="math-container">$a_n=\lambda^n$</span>:
<span class="math-container">$$\lambda^n=\lambda^{n-1}+2\lambda^{n-2}-\lambda^{n-3}$$</span> which gives the cubic equation <span class="math-container">$\lambda^3-\lambda^2-2\lambda+1=0$</span>. But I am not able to find the roots manually.</p>
| Ekaveera Gouribhatla | 31,458 | <p>Let <span class="math-container">$$S=\sum_{n=1}^{\infty}\frac{a_n}{2^{n-1}}$$</span>
We have,
<span class="math-container">$$S=\frac{1}{1}+\frac{1}{2}+\frac{3}{4}+P$$</span>
Where <span class="math-container">$$P=\sum_{n=4}^{\infty}\frac{a_n}{2^{n-1}}$$</span>
Using the given recurrence we have,
<span class="math-container">$$P=\sum_{n=4}^{\infty}\frac{a_{n-1}+2a_{n-2}-a_{n-3}}{2^{n-1}}$$</span>
So we get,
<span class="math-container">$$P=\sum_{n=4}^{\infty}\frac{a_{n-1}}{2^{n-1}}+2\sum_{n=4}^{\infty}\frac{a_{n-2}}{2^{n-1}}-\sum_{n=4}^{\infty}\frac{a_{n-3}}{2^{n-1}}$$</span>
By Change of variable for each summation we get
<span class="math-container">$$P=\sum_{k=3}^{\infty}\frac{a_k}{2^{k}}+2\sum_{k=2}^{\infty}\frac{a_k}{2^{k+1}}-\sum_{k=1}^{\infty}\frac{a_k}{2^{k+2}}$$</span>
Which implies
<span class="math-container">$$P=\frac{1}{2}\sum_{k=3}^{\infty}\frac{a_k}{2^{k-1}}+\frac{2}{4}\sum_{k=2}^{\infty}\frac{a_k}{2^{k-1}}-\frac{1}{8}\sum_{k=1}^{\infty}\frac{a_k}{2^{k-1}}$$</span>
<span class="math-container">$\implies$</span>
<span class="math-container">$$P=\frac{1}{2}\left(\frac{a_3}{4}+P\right)+\frac{1}{2}\left(\frac{a_2}{2}+\frac{a_3}{4}+P\right)-\frac{S}{8}$$</span>
Using <span class="math-container">$a_3=3,a_2=1$</span>
<span class="math-container">$$P=\frac{3}{8}+\frac{P}{2}+\frac{1}{4}+\frac{3}{8}+\frac{P}{2}-\frac{S}{8}$$</span>
Thus we get
<span class="math-container">$$\frac{S}{8}=\frac{3}{8}+\frac{1}{4}+\frac{3}{8}$$</span>
<span class="math-container">$\implies$</span>
<span class="math-container">$$S=8$$</span></p>
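As a quick check of mine (not part of the answer), running the recurrence numerically confirms $S=8$:

```python
# Generate a_n from a_n = a_{n-1} + 2 a_{n-2} - a_{n-3} with a_1 = a_2 = 1, a_3 = 3,
# then sum a_n / 2^(n-1).  The dominant root of the characteristic cubic is about
# 1.80 < 2, so the series converges and the partial sums settle near 8.
a = [1, 1, 3]                      # a_1, a_2, a_3
while len(a) < 200:
    a.append(a[-1] + 2 * a[-2] - a[-3])

S = sum(an / 2 ** n for n, an in enumerate(a))   # n = 0 corresponds to a_1 / 2^0
assert abs(S - 8) < 1e-6
```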
|
2,213,626 | <p>How can you prove that if the gcd(a,b) = 1 then gcd(a,bi) = 1 in the Gaussian integers? I know that $i$ is a unit in the ring, but how can you rigorously prove this?</p>
| QuantumEyedea | 213,634 | <p>Integrate both sides of the second form of your DE with respect to $x$ and then you're left with an equation
$$
\tfrac{1}{2} (y')^2 + a y^{-2}/2 = b y
$$</p>
<p>Then you are just left with a first-order DE from there.</p>
|
2,358,490 | <p>Let $V$ be a finite dimensional vector space over the field $K$, and let $W_1$ and $W_2$ be subspaces. Express $(W_1+W_2)^{\perp}$ in terms of $W_1^{\perp}$ and $W_2^{\perp}$. Also, express $(W_1\cap W_2)^{\perp}$ in terms of $W_1^{\perp}$ and $W_2^{\perp}$.</p>
<p>I have no idea what this exercise is asking. Remark: I am self-studying and I do not have solutions.</p>
<p><strong>Questions</strong>:</p>
<p>What am I supposed to prove? How should I prove it?</p>
| Itay4 | 385,242 | <p>I will prove: $(W_1+W_2)^{\perp}=W_1^\perp\cap W_2^\perp.$</p>
<p>$W_1,W_2\subset W_1+W_2,$ and we know $(W_1+W_2)^{\perp} \subset W_1^{\perp},W_2^{\perp}$ </p>
<p>So we get: $$(W_1+W_2)^{\perp} \subset W_1^{\perp}\cap W_2^{\perp}$$</p>
<p>In the other direction,</p>
<p>Let $v\in W_1^{\perp}\cap W_2^{\perp},$ so for every $w_1+w_2 \in W_1 + W_2:$</p>
<p>$$\langle v,w_1+w_2 \rangle=\langle v,w_1\rangle+\langle v,w_2\rangle=0$$</p>
<p>Hence, $v\in (W_1+W_2)^{\perp}.$ </p>
|
550,230 | <p>If 2 vectors form a basis for $\mathbb{R}^2$, must these 2 vectors always be orthogonal to each other?</p>
<p>For instance, the standard bases in $\mathbb{R}^2$ are definitely orthogonal (easily drawn). How about other bases?</p>
| Ted Shifrin | 71,348 | <p>No, they definitely need not be orthogonal, just non-parallel.</p>
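A concrete example (my illustration): $(1,0)$ and $(1,1)$ are a basis of $\mathbb{R}^2$ but are not orthogonal.

```python
# Two vectors form a basis of R^2 iff the matrix with them as columns is
# invertible, i.e. has nonzero determinant; orthogonality (zero dot product)
# is a separate, stronger condition.
u = (1.0, 0.0)
v = (1.0, 1.0)

det = u[0] * v[1] - u[1] * v[0]     # 2x2 determinant
dot = u[0] * v[0] + u[1] * v[1]     # inner product

assert det != 0    # linearly independent: a basis
assert dot != 0    # ...but not orthogonal
```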
|
137,435 | <p>Consider the braid group on n strands given in the usual Artin presentation. Then add extra relations: each Artin generator has order d. For example, if d=2, one recovers the symmetric group. I would like to know what the order of the group is for arbitrary n and d. Even knowing the name of such groups would be helpful, though, as my attempts to determine this by searching the literature have so far failed.</p>
| Julián Aguirre | 4,791 | <p>Let $A=\{(x,y):y\ge e^x\}$ and $B=\{(x,0)\}$. Then $A+B=\{(x,y):y>0\}$.</p>
|
1,353,432 | <p>I know two proofs about the approximation of Euler-Mascheroni constant $\gamma$ that are very technical. So I would like to know if someone has a strategic proof to show that $0.5<\gamma< 0.6$.</p>
<blockquote>
<p>Let be $\gamma\in \mathbb{R}$ such that</p>
<p>$$\large\gamma= \lim_{n\to +\infty}\left[\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)-\log{(n+1)}\right].$$</p>
<p>Show that $0.5<\gamma< 0.6$</p>
</blockquote>
<p>P.S.: In my book, the author use $\log{(n+1)}$ in the limit definition of $\gamma$.</p>
| Jaume Oliver Lafont | 134,791 | <p>Setting $n=1$ and $m=8$ into the following inequality involving harmonic numbers</p>
<p>$$
2H_n-H_{n(n+1)}<\gamma<2H_m-H_{m^2}
$$</p>
<p>gives</p>
<p>$$
0.5<\gamma<0.692
$$</p>
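Numerically (my check, taking the quoted inequality on faith), the two bounds evaluate as claimed: the lower bound $0.5$ comes out as $2H_1-H_2$ and the upper bound as $2H_8-H_{64}$.

```python
def H(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

lower = 2 * H(1) - H(1 * 2)    # = 2 - 3/2 = 0.5
upper = 2 * H(8) - H(8 ** 2)   # about 0.692

gamma = 0.5772156649015329     # Euler-Mascheroni constant, for comparison
assert lower < gamma < upper
print(lower, upper)
```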
|
1,643,013 | <p>I have just started learning about differential equations, as a result I started to think about this question but couldn't get anywhere. So I googled and wasn't able to find any particularly helpful results. I am more interested in the reason or method rather than the actual answer. Also I do not know if there even is a solution to this but if there isn't I am just as interested to hear why not.</p>
<p>Is there a solution to the differential equation:</p>
<p>$$f(x)=\sum_{n=1}^\infty f^{(n)}(x)$$</p>
| E.H.E | 187,799 | <p>The question means:
$$y-y'-y''-y'''-\cdots=0.$$
The differential equation is linear and homogeneous, and the characteristic equation is
$$1-r-r^2-r^3-\cdots=0,$$
that is,
$$r(1+r+r^2+r^3+\cdots)=1.$$
By using the geometric series (valid for $|r|<1$),
$$\frac{r}{1-r}=1$$
$$r=1-r$$
$$r=\frac{1}{2}$$
so the function is
$$y=Ce^{\frac{x}{2}}$$</p>
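A quick verification of mine: for $y=e^{x/2}$ each derivative is $y^{(n)}=2^{-n}e^{x/2}$, so the sum of all derivatives is a geometric series that returns $y$.

```python
import math

def nth_derivative(x, n):
    """n-th derivative of y = exp(x/2), namely (1/2)^n * exp(x/2)."""
    return 0.5 ** n * math.exp(x / 2)

x = 1.7                                        # arbitrary test point
series = sum(nth_derivative(x, n) for n in range(1, 60))
assert abs(series - math.exp(x / 2)) < 1e-12   # sum of all derivatives equals y
```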
|
1,867,401 | <p>I refer to this derivation of the gradient in polar coordinates: <a href="http://www.math.jhu.edu/~js/Math202/polar.grad.chain.pdf" rel="nofollow">http://www.math.jhu.edu/~js/Math202/polar.grad.chain.pdf</a></p>
<p>I can understand all parts except why the unit gradient $$\hat{e_r}=\langle\cos\theta,\sin\theta\rangle$$
$$\hat{e_\theta}=\langle-\sin\theta,\cos\theta\rangle$$</p>
<p>are defined as such?</p>
<p>I can hazard a guess due to $x=r\cos\theta$, $y=r\sin\theta$, so when $r=1$, that possibly leads to $\hat{e_r}$?
And $\hat{e_\theta}$ has to be orthogonal, but why not $\langle \sin\theta, -\cos\theta\rangle$?</p>
<p>Any other explanation?</p>
<p>Thanks for any help.</p>
| Hans Lundmark | 1,242 | <p>Consider a point with polar coordinates $(r,\theta)$. It lies, of course, at the distance $r$ from the origin. A change of $d\theta$ in the value of $\theta$ will move this point a distance $r \, d\theta$ along the circle of radius $r$. (Notice the factor $r$; it says that the farther out you are, the bigger is the effect of a change in the angle.)</p>
<p>The derivative $\partial f/\partial \theta$ only measures the function's sensitivity to changes in the value of the coordinate $\theta$, and to get the physically interesting number which measures sensitivity to moving the point in the $\mathbf{e}_{\theta}$ direction, you have to compensate by dividing by this factor of $r$.</p>
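The factor of $r$ can be seen numerically (my sketch, with an arbitrary sample function): the directional derivative along $\mathbf{e}_\theta$ matches $\frac{1}{r}\,\partial f/\partial\theta$, not $\partial f/\partial\theta$ itself.

```python
import math

def f(x, y):
    return x ** 2 * y          # sample smooth function

r, theta = 2.0, 0.7
x, y = r * math.cos(theta), r * math.sin(theta)

h = 1e-6
# Cartesian gradient by central differences.
fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
fy = (f(x, y + h) - f(x, y - h)) / (2 * h)

# Directional derivative along e_theta = (-sin t, cos t).
d_etheta = -math.sin(theta) * fx + math.cos(theta) * fy

# Sensitivity to the coordinate theta at fixed r.
F = lambda rr, tt: f(rr * math.cos(tt), rr * math.sin(tt))
dF_dtheta = (F(r, theta + h) - F(r, theta - h)) / (2 * h)

assert abs(d_etheta - dF_dtheta / r) < 1e-6   # the 1/r factor is essential
```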
|
129,261 | <p>I need to prove several inequalities trivially. (i.e. without using AM-GM, re-arrangement etc). I just keep hitting a blank. Could anyone help?</p>
<p>$$x^{4}+y^{4}+z^{4}\geq x^{2}yz+xy^{2}z+xyz^{2}$$</p>
| Brian M. Scott | 12,042 | <p>The righthand side is clearly $xyz(x+y+z)$. Assume that $x,y,z>0$. For fixed $x+y+z$, $xyz$ is maximized when $x=y=z$, and $x^4+y^4+z^4$ is minimized when $x=y=z$. This is easy to see if you can visualize the surfaces $x+y+z=k$, $xyz=k$, and $x^4+y^4+z^4=k$ for a positive constant $k$. Thus, for a fixed value of $x+y+z$, the worst case for the inequality is when $x=y=z$: that’s when the righthand side is biggest and the lefthand side is smallest. But when $x=y=z$, the two sides are clearly equal, so the inequality holds for all $x,y,z>0$. (From there, by the way, it’s easy to show that it holds for all real $x,y,z$.)</p>
|
3,491,978 | <blockquote>
<p>Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.</p>
</blockquote>
<p>My attempt:</p>
<p>(X,d) is compact. Therefore there exists a finite subcover of X.</p>
<p>Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover.</p>
<p>Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui. </p>
<p>I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just topological space, so I feel my reasoning is flawed. </p>
<p>Any help is greatly appreciated!</p>
| fleablood | 280,126 | <p>The problem with that proof is you have an <span class="math-container">$\epsilon$</span> defined for each <span class="math-container">$x$</span>. You need to prove there is an <span class="math-container">$\epsilon$</span> that works for all <span class="math-container">$x$</span> but is independent of the value of <span class="math-container">$x$</span></p>
<p>You are given an open cover <span class="math-container">$U$</span> consisting of open sets <span class="math-container">$U_\alpha$</span>. We know this has a finite subcover, but we should resist assuming that is enough and actually work on finding the <span class="math-container">$\epsilon$</span> in terms of the <span class="math-container">$x \in X$</span>.</p>
<p>As you point out, for each <span class="math-container">$x\in X$</span> there is a <span class="math-container">$U_\zeta \in U$</span> so <span class="math-container">$x\in U_\zeta$</span>. And there is an <span class="math-container">$\epsilon_x$</span> (for <em>that</em> <span class="math-container">$x$</span>, not necessarily for <em>all</em> <span class="math-container">$x$</span>) so that <span class="math-container">$B(x,\epsilon_x) \subset U_\zeta$</span>. Fine. </p>
<p>(<i>Hi... I'm a time traveler from the future. I'm sticking in an extra step right now. We also have <span class="math-container">$B(x,\frac {\epsilon_x}2) \subset B(x,\epsilon_x)\subset U_\zeta$</span>. I'll explain why I'm doing that later.</i>)</p>
<p>If we collect these neighborhoods, <span class="math-container">$B(x,\epsilon_x)$</span> into a collection, <span class="math-container">$\mathcal B=\{B(x,\epsilon_x)|x\in X\}$</span>. Well.... it's pretty easy to show that <span class="math-container">$\mathcal B$</span> is an open cover of <span class="math-container">$X$</span>! So <span class="math-container">$\mathcal B$</span> has a finite subcover.</p>
<p>(<i>Hi.... Time traveler again. We also have <span class="math-container">$\mathcal C=\{B(x,\frac {\epsilon_x}2)|x\in X\}$</span> is also a open cover with a finite subcover</i>.)</p>
<p>Which, if we put it in other words.... there is a finite subset <span class="math-container">$\{x_1,\dots, x_n\}\subset X$</span> so that <span class="math-container">$\{B(x_i, \epsilon_{x_i})|x_i \in \{x_1,\dots,x_n\}\}\subset \mathcal B$</span> and <span class="math-container">$X\subset \cup_{i=1}^n B(x_i, \epsilon_{x_i})$</span></p>
<p>(<i>Hi... remember me? The time traveler? Just noting that there is also subset of <span class="math-container">$\mathcal C$</span> that will cover <span class="math-container">$X$</span> and a subset <span class="math-container">$\{w_1,......., w_m\}\subset X$</span> that acts as an index.</i>)</p>
<p>Now there are a finite number of <span class="math-container">$\epsilon_x$</span> so there must exist a <span class="math-container">$E=\min\{\epsilon_{x_i}\} > 0$</span> (<i>Hi.... there also must be a <span class="math-container">$E'=\min\{\frac {\epsilon_{w_j}}2\}$</span></i>) and thus as every <span class="math-container">$x$</span> must be in some <span class="math-container">$B(x_i,\epsilon_{x_i})$</span>. So <span class="math-container">$B(x, E) \subset B(x_i,\epsilon_{x_i}) \subset U_\zeta$</span> for some <span class="math-container">$U_\zeta$</span> and we are done and...</p>
<p>Oh SHHHHHugar!!!!!! Although <span class="math-container">$d(x,x_i) < \epsilon_{x_i}$</span> and <span class="math-container">$E< \epsilon_{x_i}$</span> that doesn't mean that for any <span class="math-container">$y\in B(x,E)$</span> that <span class="math-container">$y \in B(x_i,\epsilon_{x_i})$</span> as <span class="math-container">$d(y,x_i) \le d(y,x)+d(x,x_i)< E + \epsilon_{x_i} \not < \epsilon_{x_i}$</span>. If only there were some way I could travel back in time and fix my mistake.</p>
<p><em>warping weird music</em>.</p>
<p>Hi. For every <span class="math-container">$x\in X$</span> then <span class="math-container">$x\in B(w_j, \frac {\epsilon_{w_j}}2)$</span> for some <span class="math-container">$w_j$</span>. Thus for any <span class="math-container">$y\in B(x, E')$</span> then <span class="math-container">$d(y, w_j) \le d(y,x)+ d(x,w_j) < E' + \frac {\epsilon_{w_j}}2 < \frac {\epsilon_{w_j}}2+\frac {\epsilon_{w_j}}2 < \epsilon_{w_j}$</span> and so ....</p>
<p><span class="math-container">$B(x, E')\subset B(w_j, \epsilon_{w_j})\subset U_\zeta$</span> for some <span class="math-container">$U_\zeta\in U$</span>. And we are done.</p>
|
2,379,955 | <p>Assume I want to minimise this:
$$ \min_{x,y} \|A - x y^T\|_F^2$$
then I am finding best rank-1 approximation of A in the squared-error sense and this can be done via the SVD, selecting $x$ and $y$ as left and right singular vectors corresponding to the largest singular value of A.</p>
<p>Now instead, is possible to solve:
$$ \min_{x} \|A - x b^T\|_F^2$$
for $b$ also fixed?</p>
<p>If this is possible, is there also a way to solve:
$$ \min_{x} \|A - x b^T\|_F^2 + \|C - x d^T\|_F^2$$
where I think of $x$ as the best "average" solution between the two parts of the cost function.</p>
<p>I am of course longing for a closed-form solution but a nice iterative optimisation approach to a solution could also be useful.</p>
| Rodrigo de Azevedo | 339,790 | <p><span class="math-container">$$\| \mathrm A - \mathrm x \mathrm b^{\top} \|_{\text{F}}^2 = \cdots = \| \mathrm b \|_2^2 \, \| \mathrm x \|_2^2 - \langle \mathrm A \mathrm b , \mathrm x \rangle - \langle \mathrm x , \mathrm A \mathrm b \rangle + \| \mathrm A \|_{\text{F}}^2$$</span></p>
<p>Taking the gradient of this cost function,</p>
<p><span class="math-container">$$\nabla_{\mathrm x} \| \mathrm A - \mathrm x \mathrm b^{\top} \|_{\text{F}}^2 = 2 \, \| \mathrm b \|_2^2 \, \mathrm x - 2 \mathrm A \mathrm b$$</span></p>
<p>which vanishes at the minimizer</p>
<p><span class="math-container">$$\mathrm x_{\min} := \color{blue}{\frac{1}{\| \mathrm b \|_2^2} \mathrm A \mathrm b}$$</span></p>
<p>Note that</p>
<p><span class="math-container">$$\mathrm A - \mathrm x_{\min} \mathrm b^{\top} = \mathrm A - \mathrm A \left( \frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b } \right) = \mathrm A \left( \mathrm I_n - \frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b } \right)$$</span></p>
<p>where</p>
<ul>
<li><p><span class="math-container">$\frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b }$</span> is the projection matrix that projects onto the line spanned by <span class="math-container">$\mathrm b$</span>, which we denote by <span class="math-container">$\mathcal L$</span>.</p>
</li>
<li><p><span class="math-container">$\left( \mathrm I_n - \frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b } \right)$</span> is the projection matrix that projects onto the orthogonal complement of line <span class="math-container">$\mathcal L$</span>.</p>
</li>
</ul>
<p>Hence, the minimum is</p>
<p><span class="math-container">$$\| \mathrm A - \mathrm x_{\min} \mathrm b^{\top} \|_{\text{F}}^2 = \left\| \mathrm A \left( \mathrm I_n - \frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b } \right) \right\|_{\text{F}}^2 = \color{blue}{\left\| \left( \mathrm I_n - \frac{ \,\,\,\mathrm b \mathrm b^{\top} }{ \mathrm b^\top \mathrm b } \right) \mathrm A^\top \right\|_{\text{F}}^2}$$</span></p>
<p>which is the sum of the <em>squared</em> <span class="math-container">$2$</span>-norms of the projections of the rows of <span class="math-container">$\rm A$</span> (i.e., the columns of <span class="math-container">$\rm A^\top$</span>) onto the orthogonal complement of line <span class="math-container">$\mathcal L$</span>.</p>
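As a numerical sanity check (my own sketch, not part of the answer), the closed form $x_{\min}=Ab/\|b\|_2^2$ can be tested by confirming that random perturbations never decrease the residual. The same calculus applied to the two-term objective from the question would give $x=(Ab+Cd)/(\|b\|_2^2+\|d\|_2^2)$; that extension is my own reading, stated under the same least-squares reasoning rather than taken from the answer.

```python
import random

def matvec(A, v):
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

def frob2_residual(A, x, b):
    """|| A - x b^T ||_F^2"""
    return sum((A[i][j] - x[i] * b[j]) ** 2
               for i in range(len(A)) for j in range(len(b)))

random.seed(0)
m, n = 4, 3
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
b = [random.gauss(0, 1) for _ in range(n)]

nb2 = sum(bj ** 2 for bj in b)
x_min = [v / nb2 for v in matvec(A, b)]        # closed form  x = A b / ||b||^2

# Perturbing the minimizer should never decrease the residual
# (the objective is a convex quadratic in x).
base = frob2_residual(A, x_min, b)
for _ in range(200):
    x_pert = [xi + random.gauss(0, 0.1) for xi in x_min]
    assert frob2_residual(A, x_pert, b) >= base
```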
|
3,767,452 | <blockquote>
<p>Suppose we have
<span class="math-container">$$\begin{align}
\cos x + \cos y + \cos z &= \frac{3}{2}\sqrt{3} \\[4pt]
\sin x + \sin y + \sin z &= \frac{3}{2}
\end{align}$$</span></p>
<p>How can we solve for <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span>?</p>
</blockquote>
<p>According to Wolfram Alpha, the values of <span class="math-container">$x, y, z$</span> must be the same i.e. <span class="math-container">$\pi/6$</span> modulo <span class="math-container">$2\pi$</span>.</p>
<blockquote>
<p>How do we solve the equations analytically?</p>
</blockquote>
<p><strong>What I am able to prove</strong>. I am able to show that two out of three variables <span class="math-container">$x,y, z$</span> must be equal. This I can do by reformulating the problem as "maximize <span class="math-container">$\sin x$</span> subject to the above constraints." and doing Lagrange optimization. I am sure there must be a simpler way.</p>
<p>Problem source: <a href="https://cmientranceexam.in/docs/trigonometry/" rel="nofollow noreferrer">From CMI Entrance 2010 paper</a></p>
| lab bhattacharjee | 33,337 | <p>Hint</p>
<p><span class="math-container">$$(\cos x+\cos y+\cos z)^2+(\sin x+\sin y+\sin z)^2=?$$</span></p>
<p><span class="math-container">$$\implies\cos(x-y)+\cos(y-z)+\cos(z-x)=3$$</span></p>
<p>Since <span class="math-container">$\cos A\le1$</span> for every <span class="math-container">$A$</span>,</p>
<p>each of the cosine ratio will be <span class="math-container">$$=1$$</span></p>
|
136,264 | <p>I have a question concerning the stability analysis for a kind of differential equation taking the form
$$\dot x=Ax+Bw,$$
where $A\in \mathbb{R}^{n \times n}$, $B\in \mathbb{R}^{n \times m}$ are constant matrices and
$w \in \mathbb{R}^m$ is a normal random variable, i.e., $w\sim \mathcal{N}(0,W)$ with $W$ be a symmetric and positive definite matrix.</p>
<p>Since zero is not an equilibrium of the system, the Lyapunov analysis does not make sense. When input-to-state stability analysis is considered, robust control theory does not apply due to the unboundedness of $w$. By resorting to stochastic stability in the sense of mean square or almost surely, the Itô formula seems to be invalid.</p>
<p>HOW to carry out the stability analysis of this kind of systems? Any pointer will be helpful. Thanks!</p>
| just-someone | 100,295 | <p>Here are the rankings of (not just math) journals employed by some countries:</p>
<ul>
<li><p><a href="https://sucupira.capes.gov.br/sucupira/public/" rel="nofollow noreferrer">Brazil</a> (in Portuguese)</p></li>
<li><p><a href="http://www.julkaisufoorumi.fi/en" rel="nofollow noreferrer">Finland</a></p></li>
<li><p><a href="https://dbh.nsd.uib.no/publiseringskanaler/Forside.action?request_locale=en" rel="nofollow noreferrer">Norway</a> </p></li>
</ul>
|
3,555,084 | <blockquote>
<p>Let
<span class="math-container">$$f(z) = e^z (1+\cos\sqrt{z} ) $$</span>
<span class="math-container">$\Omega=\{z\in\Bbb C: |z|\gt r\}$</span>, <span class="math-container">$r\gt 0$</span>. What is <span class="math-container">$f(\Omega)$</span>?</p>
<p>where <span class="math-container">$\sqrt{z}=\exp{(\text{Log }z/2)}, \text{Log }z=\log|z|+i\arg z,\arg z\in(-\pi,\pi]$</span></p>
</blockquote>
<p><span class="math-container">$\infty$</span> is an essential singularity. By Picard's Great Theorem, <span class="math-container">$\Bbb C\setminus f(\Omega) $</span> contains at most one point. Is <span class="math-container">$f(\Omega)$</span> all of <span class="math-container">$\Bbb C$</span>?</p>
<p>When <span class="math-container">$ z\in\Bbb R $</span>, <span class="math-container">$f(z)\geq0$</span>. According to the <a href="https://en.wikipedia.org/wiki/Schwarz_reflection_principle" rel="nofollow noreferrer">Schwarz reflection principle</a> and Picard's Great Theorem, all <span class="math-container">$x+iy$</span> with <span class="math-container">$y\ne0$</span> belong to <span class="math-container">$f(\Omega)$</span>. How do we show that all negative real numbers belong to <span class="math-container">$f(\Omega)$</span>? Thanks a lot!</p>
<p><strong>Is there a simple, more elementary proof</strong> than Conard's? </p>
| Allawonder | 145,126 | <p>If you write the expression as <span class="math-container">$$x^4\left(1-\sqrt{1+6/x^2}+3/x^2\right)=\frac{1-\sqrt{1+6/x^2}+3/x^2}{1/x^4},$$</span> then you may be able to apply the Marquis de L'hopital's method.</p>
<p>After exactly two iterations, you get <span class="math-container">$$\frac92\frac{\frac{1}{\sqrt{1+6/x^2}}-1}{1/x^2},$$</span> where you may now take an elementary limit as <span class="math-container">$x\to+\infty.$</span></p>
|
3,555,084 | <blockquote>
<p>Let
<span class="math-container">$$f(z) = e^z (1+\cos\sqrt{z} ) $$</span>
<span class="math-container">$\Omega=\{z\in\Bbb C: |z|\gt r\}$</span>, <span class="math-container">$r\gt 0$</span>. What is <span class="math-container">$f(\Omega)$</span>?</p>
<p>where <span class="math-container">$\sqrt{z}=\exp{(\text{Log }z/2)}, \text{Log }z=\log|z|+i\arg z,\arg z\in(-\pi,\pi]$</span></p>
</blockquote>
<p><span class="math-container">$\infty$</span> is an essential singularity. By Picard's Great Theorem, <span class="math-container">$\Bbb C\setminus f(\Omega) $</span> contains at most one point. Is <span class="math-container">$f(\Omega)$</span> all of <span class="math-container">$\Bbb C$</span>?</p>
<p>When <span class="math-container">$ z\in\Bbb R $</span>, <span class="math-container">$f(z)\geq0$</span>. According to the <a href="https://en.wikipedia.org/wiki/Schwarz_reflection_principle" rel="nofollow noreferrer">Schwarz reflection principle</a> and Picard's Great Theorem, all <span class="math-container">$x+iy$</span> with <span class="math-container">$y\ne0$</span> belong to <span class="math-container">$f(\Omega)$</span>. How do we show that all negative real numbers belong to <span class="math-container">$f(\Omega)$</span>? Thanks a lot!</p>
<p><strong>Is there a simple, more elementary proof</strong> than Conard's? </p>
| Barry Cipra | 86,747 | <p>Just to give yet another approach, let <span class="math-container">$u=x^2+3$</span>. Then, for <span class="math-container">$x\ge0$</span>,</p>
<p><span class="math-container">$$x^2(x^2-x\sqrt{x^2+6}+3)=(u-3)\left(u-\sqrt{u^2-9}\right)={9u\over u+\sqrt{u^2-9}}-{27\over u+\sqrt{u^2-9}}\to{9\over1+1}-0={9\over2}$$</span></p>
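The two algebraic forms can be compared numerically (my check); the rationalized form is also the numerically stable one for large $x$:

```python
import math

def direct(x):
    """Naive evaluation x^2 (x^2 - x sqrt(x^2+6) + 3): cancellation-prone for large x."""
    return x * x * (x * x - x * math.sqrt(x * x + 6) + 3)

def rationalized(u):
    """The form above with u = x^2 + 3: 9u/(u + sqrt(u^2-9)) - 27/(u + sqrt(u^2-9))."""
    s = u + math.sqrt(u * u - 9)
    return 9 * u / s - 27 / s

x = 100.0
u = x * x + 3
assert abs(direct(x) - rationalized(u)) < 1e-3
assert abs(rationalized(1e8) - 4.5) < 1e-6     # the limit is 9/2
```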
|
3,336,870 | <p>Suppose <span class="math-container">$S= \{x\in\mathbb{R}^5 : x_1+x_5=0\}$</span> is a subspace of <span class="math-container">$\mathbb{R}^5$</span>.</p>
<p>Then what is the orthogonal complement for <span class="math-container">$S$</span>?</p>
<p><em>My interpretation:</em></p>
<p>We can represent as <span class="math-container">$[1, 0, 0, 0, 1] [x_1, x_2, x_3, x_4, x_5]^{T} = 0$</span></p>
<p>So, the solutions for plane lies in null space and hence <span class="math-container">$S^{\bot}$</span> is the row space i.e., <span class="math-container">$c[1, 0, 0, 0, 1]$</span> for all real <span class="math-container">$c$</span>.</p>
<p>Am I correct?</p>
| aryan bansal | 698,119 | <p>1) Let's say the apples are identical:<br>
$w_1 + 2 + w_2 + 2 + w_3 + 2 = 10$, where $w_i$ is a whole number giving the extra apples the $i$-th person gets.<br>
So $w_1 + w_2 + w_3 = 4$, which has $\binom{6}{2}$ solutions (stars and bars).</p>
<p>2) They are not identical:<br>
$10 = 2+2+6$ or $2+3+5$ or $2+4+4$ or $3+3+4$.<br>
We make the groups and then arrange them among the three people:
$$\frac{10!}{2!\,2!\,6!\,2!}\cdot 3! + \frac{10!}{2!\,3!\,5!}\cdot 3! + \frac{10!}{2!\,4!\,4!\,2!}\cdot 3! + \frac{10!}{3!\,3!\,4!\,2!}\cdot 3!$$
(the extra division by $2!$ appears whenever two group sizes coincide, so that equal groups are not counted twice).</p>
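The stars-and-bars count in case 1 is small enough to confirm by brute force (a check of mine):

```python
from itertools import product
from math import comb

# Ways to hand out 10 identical apples to 3 people, each getting at least 2:
# w1 + w2 + w3 = 4 with w_i >= 0, counted directly and by stars and bars.
brute = sum(1 for w in product(range(5), repeat=3) if sum(w) == 4)
assert brute == comb(6, 2) == 15
```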
|
3,360,912 | <p>Why, if we have a strictly increasing, continuous, and onto function, must its inverse be continuous? Could anyone explain this for me, please?</p>
| bsbb4 | 337,971 | <p>In short, the number of lattice points of given bounded magnitude grows linearly, giving a contribution proportional to <span class="math-container">$n \cdot \frac{1}{n^2} = \frac{1}{n}$</span>, making the series diverge.</p>
<p>I give a proof that <span class="math-container">$\sum_{\omega \in \Lambda^*} |\omega|^{-s}$</span> converges iff <span class="math-container">$s>2$</span>:</p>
<p>Fix a lattice <span class="math-container">$\Lambda \subset \Bbb C$</span> and set <span class="math-container">$\Lambda_r = \{m\lambda_1 + n\lambda_2 | m,n \in \Bbb Z \,\,\text{and}\,\, \max(|m|, |n|) = r \}$</span>.</p>
<p>Then <span class="math-container">$\Lambda^*$</span> is a disjoint union of the <span class="math-container">$\Lambda_r$</span>, <span class="math-container">$r > 0$</span>. Observe that <span class="math-container">$|\Lambda_r| = 8r$</span>.</p>
<p>Let <span class="math-container">$D$</span> and <span class="math-container">$d$</span> be the greatest and least moduli of the elements of the parallelogram <span class="math-container">$\Pi_1$</span> containing <span class="math-container">$\Lambda_1$</span>. Then we have <span class="math-container">$rD \geq |\omega| \geq rd$</span> for all <span class="math-container">$\omega \in \Lambda_r$</span>.</p>
<p>Define <span class="math-container">$\sigma_{r, s} = \sum_{\omega \in \Lambda_r} |\omega|^{-s}$</span>.</p>
<p><span class="math-container">$\sigma_{r, s}$</span> lies between <span class="math-container">$8r(rD)^{-s}$</span> and <span class="math-container">$8r(rd)^{-s}$</span>. Therefore <span class="math-container">$\sum_{r=1}^\infty \sigma_{r, s}$</span> converges iff <span class="math-container">$\sum r^{1-s}$</span> converges, i.e. iff <span class="math-container">$s > 2$</span>. </p>
<p>The claim follows.</p>
<p>This proof follows the one in Jones and Singerman's <em>Complex Functions</em>, p. 91.</p>
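<p>The counting step <span class="math-container">$|\Lambda_r| = 8r$</span> depends only on the coefficient pairs <span class="math-container">$(m,n)$</span>, so it can be checked directly — a sketch:</p>

```python
def shell_size(r):
    # coefficient pairs (m, n) with max(|m|, |n|) == r
    return sum(
        1
        for m in range(-r, r + 1)
        for n in range(-r, r + 1)
        if max(abs(m), abs(n)) == r
    )

for r in range(1, 6):
    print(r, shell_size(r))  # shell sizes 8, 16, 24, 32, 40, i.e. 8r
```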
|
2,548,177 | <p>I'd like to define <code>sumdiv</code> in Maple such that this:</p>
<pre><code>with(numtheory);
f:=x->x^2;
sumdiv(f(d)*mobius(100/d), d=1..100);
</code></pre>
<p>would do a sum on all divisors <code>d</code> of $100$.</p>
<p><strong>How to do such a sum over divisors in Maple?</strong></p>
<p>Here's what I've tried:</p>
<pre><code>isdivisible:=(a,b)->if a mod b = 0 then 1 else 0 fi;
sum(f(d)*mobius(100/d)*isdivisible(100,d), d=1..100);
</code></pre>
<p>but even if <code>isdivisible(100,d)</code> is $0$ (i.e. $d$ not divisible by $100$), it tries to evaluate <code>mobius(100/d)</code> anyway which is impossible, thus an error.</p>
<blockquote>
<p>Error, (in mobius) invalid arguments</p>
</blockquote>
| acer | 12,448 | <p>You can do this by adding up the results from the Maple command <code>numtheory[divisors]</code>, or you could go more directly to the Maple command <code>numtheory[sigma]</code>.</p>
<pre><code>restart;
`+`(op(numtheory[divisors](100)));
217
numtheory[sigma](100);
217
F1 := n -> `+`(op(numtheory[divisors](n))):
F2 := n -> numtheory[sigma](n):
seq( F1(i), i = 10 .. 20 );
18, 12, 28, 14, 24, 24, 31, 18, 39, 20, 42
seq( F2(i), i = 10 .. 20 );
18, 12, 28, 14, 24, 24, 31, 18, 39, 20, 42
</code></pre>
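<p>For comparison, the same divisor sums are easy to write in plain Python — a sketch, including the Möbius-weighted sum that the question actually asked about:</p>

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # naive Möbius function: 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors)
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            count += 1
        else:
            d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

sigma = sum(divisors(100))
weighted = sum(d**2 * mobius(100 // d) for d in divisors(100))
print(sigma, weighted)  # 217 and 7200
```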
|
331,710 | <p>I need to find the area of a parallelogram with vertices $(-1,-1), (4,1), (5,3), (10,5)$.</p>
<p>If I denote $A=(-1,-1)$, $B=(4,1)$, $C=(5,3)$, $D=(10,5)$, then I see that $\overrightarrow{AB}=(5,2)=\overrightarrow{CD}$. Similarly $\overrightarrow{AC}=\overrightarrow{BD}$. So I see that these points indeed form a parallelogram.</p>
<p>It is assignment from linear algebra class. I wasn't sure if I had to like use a matrix or something. </p>
| Berci | 41,488 | <p><strong>Hints:</strong></p>
<ol>
<li>The area of a parallelogram with side vectors $\bf a$ and $\bf b$ is $|\det(\mathbf a\ \mathbf b)|$.</li>
<li>For a parallelogram $A,B,C,D$ its side vectors are e.g. $B-A$ and $C-A$.</li>
</ol>
|
331,710 | <p>I need to find the area of a parallelogram with vertices $(-1,-1), (4,1), (5,3), (10,5)$.</p>
<p>If I denote $A=(-1,-1)$, $B=(4,1)$, $C=(5,3)$, $D=(10,5)$, then I see that $\overrightarrow{AB}=(5,2)=\overrightarrow{CD}$. Similarly $\overrightarrow{AC}=\overrightarrow{BD}$. So I see that these points indeed form a parallelogram.</p>
<p>It is assignment from linear algebra class. I wasn't sure if I had to like use a matrix or something. </p>
| lab bhattacharjee | 33,337 | <p>As a diagonal of the parallelogram $A(-1,-1),B(4,1),C(5,3),D(10,5)$ divides it into two congruent triangles, the two triangles have the same area.</p>
<p>So, the area of the parallelogram will be $2\cdot$ area of any one of $\triangle ABC,\triangle ADC, \triangle ABD, \triangle BCD$</p>
<p>For example, the <a href="http://www.cut-the-knot.org/triangle/RouthTheorem.shtml" rel="nofollow">area</a> of $$\triangle ABC=\frac12\left|\det\begin{pmatrix} -1&-1&1 \\ 4&1&1 \\ 5&3&1\end{pmatrix}\right|$$</p>
<p>$$=\frac12\left|\det\begin{pmatrix} -1&-1&1 \\ 5&2&0 \\ 6&4&0\end{pmatrix}\right|\text{ applying} R_2'=R_2-R_1,R_3'=R_3-R_1$$</p>
<p>$$=\frac12\left|5\cdot4-6\cdot2\right|=4$$</p>
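<p>The arithmetic checks out numerically as well — a sketch using the side vectors <span class="math-container">$\overrightarrow{AB}$</span> and <span class="math-container">$\overrightarrow{AC}$</span>:</p>

```python
A, B, C = (-1, -1), (4, 1), (5, 3)

def vec(p, q):
    return (q[0] - p[0], q[1] - p[1])

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

triangle_area = abs(det2(vec(A, B), vec(A, C))) / 2  # area of triangle ABC
parallelogram_area = 2 * triangle_area
print(triangle_area, parallelogram_area)  # 4.0 and 8.0
```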
|
3,413,837 | <p>Jerry the mouse is hungry and according to some confidential information, there is a tempting piece of cheese at the end of one of the three paths after the junction he just found himself!</p>
<p>Fortunately, Tom is standing right there and Jerry hopes he can get some useful information as to which path he must get; most importantly because Spike and Tyke, the dogs, are at the end of the other two paths!</p>
<p>The only problem is that Tom gives true and false replies in alternating order. Furthermore, he has no way of knowing which will be first, the truth or the lie!</p>
<p>He is only allowed to ask Tom 2 questions that can be answered by a “yes” and a “no”.</p>
<p>What must be the two questions he must ask?</p>
<hr>
<p>No matter how hard I tried, I can't figure out anything...
I have seen several variations for the 2 doors problem but this one is different!</p>
| antkam | 546,005 | <p>The problem with these knights (always truthful) / knaves (always lying) puzzles is that many of them can be solved <em>the same way</em>. Simply ask:</p>
<blockquote>
<p>"If I were to ask you is door X correct, what would you say?" </p>
</blockquote>
<p>A knave would have to lie twice, and therefore tell the truth. No need to go into the mud about the next question, use an XOR etc. All those solutions work too, but with the template above, you don't even need to think. Which is sad. :(</p>
<p>In fact, my answer is the plain-English way to say "Is door X correct IFF you are a knight?" (or equiv: "Is door X correct XOR you are a knave?")</p>
<p>Credit: IIRC I read this in one of the logic puzzle books by <a href="https://en.wikipedia.org/wiki/Raymond_Smullyan" rel="nofollow noreferrer">Raymond Smullyan</a></p>
|
2,180,398 | <p>I am not sure if I got the correct answers to these basic probability questions.</p>
<blockquote>
<p>You and I play a die rolling game, with a fair die. The die is equally likely to land on any of its $6$ faces. We take turns rolling the die, as follows. </p>
<p>At each round, the player rolling the die wins (and the game stops) if (s)he rolls either "$3$", "$4$","$5$", or "$6$". Otherwise, the next player has a chance to roll. </p>
<ol>
<li>Suppose I roll 1st. What is the probability that I win in the 1st round? </li>
<li>Suppose I roll in the 1st round, and we play the game for at most three rounds. What is the probability that<br>
a. I win?<br>
b. You win?<br>
c. No one wins? (i.e., I roll "$1$" or "$2$" in the 3rd round?)</li>
</ol>
</blockquote>
<p>My answers:</p>
<ol>
<li>$4/6$</li>
<li>a. I think it is $(4/6)^2$ . Can someone explain why it isn't $4/6 + 4/6$ ?<br>
b. I think this is $4/6$.<br>
c. I think it is $(2/6)^3$</li>
</ol>
| Student | 304,018 | <p>$\color{red}{\text{REMARK: I wrote this answer supposing that one round is one person rolling because }}$
$\color{red}{\text{of the formulation of your second question. If this is not the case, look at N.F. Taussig's}}$
$\color{red}{\text{answer.}}$</p>
<p>First of all: you answered the first question correctly! </p>
<p>For your second question: let me denote the persons by person $A$ and person $B$. Suppose person $A$ goes first, then question 2a becomes: 'suppose person $A$ rolls in the first round and at most 3 rounds are played. What is the chance of $A$ winning?'. </p>
<p>Well: either person $A$ wins in the first round, which has a chance of $\frac{4}{6}$. If not, then person $A$ has to win in the third round. Note that this implies person $A$ does not win in the first round and that person $B$ does not win in the second round! Person $A$ not winning in the first round occurs with a chance of $\frac{2}{6}$, and the same chance holds for $B$ not winning round $2$. Now person $A$ still has to win in the third round, which occurs (again) with a chance $\frac{4}{6}$. </p>
<p>Conclusion: $A$ wins if $A$ wins round $1$ (game stops) or $A$ does not win round $1$, $B$ does not win round $2$ and $A$ wins round $3$. This gives us the following chance:
$$\frac{4}{6} + \frac{2}{6}\frac{2}{6}\frac{4}{6}.$$</p>
<p>Can you apply this reasoning to find the answer to 2b? </p>
<p>To end with some good news: also 2c is correct ;)</p>
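<p>Exact rational arithmetic confirms the whole case analysis — a sketch with <code>Fraction</code>:</p>

```python
from fractions import Fraction

win = Fraction(4, 6)   # rolling 3, 4, 5 or 6
miss = Fraction(2, 6)  # rolling 1 or 2

p_A = win + miss * miss * win  # A wins round 1, or rounds 1-2 miss and A wins round 3
p_B = miss * win               # A misses round 1, B wins round 2
p_none = miss ** 3             # all three rolls miss

print(p_A, p_B, p_none)         # 20/27, 2/9, 1/27
assert p_A + p_B + p_none == 1  # the three cases exhaust all outcomes
```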
|
249,107 | <p>I'm working on my thesis about semidirect products and the splitting lemma. I got the following theorems to prove and I'm not sure how to start. I would appreciate any help.</p>
<p>1. Let $f:A\to B$ be a map.</p>
<p>Show:</p>
<p>a) if $g:B\to A$ so that $gf=id_{A}$ then $f$ is injective</p>
<p>b) if $g:B\to A$ so that $fg=id_{B}$ then $f$ is surjective</p>
<ol>
<li><p>$A$, $B$, $G$ groups and there is a short exact sequence</p>
<p>$1\to A\to G\to B\to 1$</p></li>
</ol>
<p>then $\alpha :A\to G$ is injective and $\beta :G\to B$ is surjective. Please show that.</p>
| Hagen von Eitzen | 39,174 | <p>Finding the proof is practically unavoidable as soon as you make yourself clear what you are given and what you want to show:</p>
<p>1a) Assume $f(x)=f(y)$. You want to show that this implies $x=y$. You are given the fact that $g(f(t))=t$ for all $t\in A$. Aha: $f(x)=f(y)$ implies $x=g(f(x))=g(f(y))=y$.</p>
<p>1b) Assume $b \in B$. You want to exhibit some $a\in A$ with $f(a)=b$. You are given the fact that $f(g(t))=t$ for all $t\in B$. Aha: Given $b\in B$, we have $a:=g(b)\in A$ with $f(a)=f(g(b))=b$.</p>
<p>Regarding short exact sequences of groups:
Assume $\alpha(x)=\alpha(y)$. then $\alpha(xy^{-1})=1$, i.e. $xy^{-1}\in\ker(1\to A)=\{1\}$, i.e. $xy^{-1}=1$ and finally $x=y$.
Assume $b\in B$. Then $b\in\ker(B\to 1)$, hence $b\in\operatorname{im}(\beta)$, i.e. there exists $g\in G$ with $\beta(g)=b$.</p>
|
157,823 | <p>Check whether function series is convergent (uniformly):</p>
<p>$\displaystyle\sum_{n=1}^{+\infty}\frac{1}{n}\ln \left( \frac{x}{n} \right)$ for $x\in[1;+\infty)$</p>
<p>I don't know how to do that.</p>
| Unoqualunque | 17,703 | <p>The series doesn't converge. Use integral test or Cauchy-condensation-test via monotonicity of the general term</p>
|
3,912,734 | <p>My text book in linear algebra - out of the blue - claims that:</p>
<p><span class="math-container">$|\lambda u|=|\lambda||u|$</span></p>
<p>Where u is a vector and <span class="math-container">$\lambda$</span> is a constant.</p>
<p>I would understand if || were used to denote absolute numbers, but in this book, || is used to denote length (so the dot product of the vector v with itself is written as <span class="math-container">$|v|^2$</span>)</p>
<p>I'm positively dumbfounded. How do I prove this and why would a scalar like <span class="math-container">$\lambda$</span> even have a length in the first place?</p>
<p>EDIT:</p>
<p>This is used in a later section to prove that the projection of v on u equals:</p>
<p><span class="math-container">$$|\frac{u \cdotp v}{|u|^2}u|=\frac{|u \cdotp v|}{|u|^2}|u|=\frac{|u \cdotp v|}{|u|}$$</span> so it definitly seems they're referring to lengths, and not absolute numbers. Otherwise I can't really see that working.</p>
| lhf | 589 | <p>Perhaps this makes it clear: <span class="math-container">$|\lambda u|_V=|\lambda|_{\mathbb R}\,|u|_V$</span>.</p>
<p>It reads: the length of <span class="math-container">$\lambda u$</span> in the vector space <span class="math-container">$V$</span> is the product to the absolute value of <span class="math-container">$\lambda$</span> in <span class="math-container">$\mathbb R$</span> by the length of <span class="math-container">$u$</span> in <span class="math-container">$V$</span>.</p>
<p>Using subscripts as above sometimes helps to make sense of what is going on.
For instance, if <span class="math-container">$T:V\to W$</span> is a linear transformation and <span class="math-container">$v_1,v_2 \in V$</span>, then
<span class="math-container">$$
T(v_1+ v_2) = T(v_1) + T(v_2)
$$</span>
actually means
<span class="math-container">$$
T(v_1+_V v_2) = T(v_1) +_W T(v_2)
$$</span></p>
|
3,912,734 | <p>My text book in linear algebra - out of the blue - claims that:</p>
<p><span class="math-container">$|\lambda u|=|\lambda||u|$</span></p>
<p>Where u is a vector and <span class="math-container">$\lambda$</span> is a constant.</p>
<p>I would understand if || were used to denote absolute numbers, but in this book, || is used to denote length (so the dot product of the vector v with itself is written as <span class="math-container">$|v|^2$</span>)</p>
<p>I'm positively dumbfounded. How do I prove this and why would a scalar like <span class="math-container">$\lambda$</span> even have a length in the first place?</p>
<p>EDIT:</p>
<p>This is used in a later section to prove that the projection of v on u equals:</p>
<p><span class="math-container">$$|\frac{u \cdotp v}{|u|^2}u|=\frac{|u \cdotp v|}{|u|^2}|u|=\frac{|u \cdotp v|}{|u|}$$</span> so it definitly seems they're referring to lengths, and not absolute numbers. Otherwise I can't really see that working.</p>
| Hagen von Eitzen | 39,174 | <p>You may know about the dot product that <span class="math-container">$(\lambda u)\cdot (\mu v)=\lambda\mu (u\cdot v)$</span>.
Hence
<span class="math-container">$$ |\lambda u|=\sqrt{(\lambda u)\cdot (\lambda u)}=\sqrt{\lambda ^2 (u\cdot u)}=|\lambda|\sqrt{u\cdot u}=|\lambda|\,|u|.$$</span></p>
|
1,047,263 | <p>I used to do this on my calculators and it never worked! I think it's because you can't multiply any number by itself to get a negative number. Is that even true? I think it is! I've tried it out and it never worked! Look here:$$0.5\cdot0.5=0.25$$$$0\cdot0=0$$$$-1\cdot-1=1$$$$2.1\cdot2.1=4.41$$$$-7.5\cdot-7.5=56.25$$I <em>never</em> get a negative for multiplying negative numbers because a negative times a negative is a positive and no matter what I do, I always get a nonnegative number! Tell me what you think! Is this why?</p>
| k170 | 161,538 | <p>For every real $x$ such that $x\gt 0$, we have
$$ \sqrt{-x}=\sqrt{(-1)x}=\sqrt{-1}\sqrt{x} =i\sqrt{x} $$</p>
|
691,112 | <p>Suppose $A$ and $B$ are finite sets and $f:A\rightarrow B$. Prove that if $|A|>|B|$, then $f$ is not one-to-one.</p>
<p>Scratch work:</p>
<p>Since the goal is in negation, I try to prove it by contradiction and assume that $f$ is one-to-one. Since $A$ has more elements than $B$, it can't be the case that $f$ is one-to-one because some $a\in A$ has to share images with other. But other than the false assumption $f$ is one-to-one, I have no other clue to proceed with the question. What technique should I apply? Please give hints and guidance. Thanks in advance.</p>
| naslundx | 130,817 | <p>Select a subset $A' \subset A$ such that $|A'| = |B|$. Let $A' = \{a_1, a_2, \dots, a_n\}$ and $B = \{b_1, b_2, \dots, b_n\}$. There is a trivial bijection (one-to-one and onto) $f:A' \to B$.</p>
<p>Now assume, for contradiction, that $g: A \to B$ is one-to-one. The restriction of $g$ to $A'$ is then an injection between finite sets of the same size, hence a bijection, so every $b_i \in B$ already equals $g(a_j)$ for some $a_j \in A'$.</p>
<p>There exists $a_{n+1} \in (A - A')$, which is mapped to some element $b_i \in B$. By the above, $b_i = g(a_j)$ for some $a_j \in A'$ with $a_j \neq a_{n+1}$. Hence $g(a_{n+1}) = g(a_j)$ and $g$ is not one-to-one, a contradiction.</p>
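<p>For small sets the claim can be checked exhaustively — a sketch enumerating every function from a 4-element set to a 3-element set:</p>

```python
from itertools import product

A = range(4)  # |A| = 4
B = range(3)  # |B| = 3

# every function A -> B, encoded as the tuple of its values
functions = list(product(B, repeat=len(A)))
injective = [f for f in functions if len(set(f)) == len(A)]

print(len(functions), len(injective))  # 81 functions, 0 of them injective
```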
|
691,112 | <p>Suppose $A$ and $B$ are finite sets and $f:A\rightarrow B$. Prove that if $|A|>|B|$, then $f$ is not one-to-one.</p>
<p>Scratch work:</p>
<p>Since the goal is in negation, I try to prove it by contradiction and assume that $f$ is one-to-one. Since $A$ has more elements than $B$, it can't be the case that $f$ is one-to-one because some $a\in A$ has to share images with other. But other than the false assumption $f$ is one-to-one, I have no other clue to proceed with the question. What technique should I apply? Please give hints and guidance. Thanks in advance.</p>
| Jose Antonio | 84,164 | <p>We write $A\preceq B$ if there exists an injection $A\to B$.</p>
<p><strong>Theorem:</strong> Let $A, B$ be finite sets. Then, $A \preceq B, \iff \#A \le \#B$.</p>
<p>Proof: ($\Rightarrow$)
Let $\varphi(n)$ be the statement "B is a set of size $\,n\,$ and $A \preceq B \rightarrow \#A \le n$."</p>
<p>$$S = \left\{\,n\in \omega:\varphi(n)\, \right\}.$$</p>
<p>For $n = 0\,$, B is the empty set, and the only injection is to itself. So, clearly $0 \in S.$ Assume $n \in S $ we need to show that $n^{+} \in S$.</p>
<p>Suppose $ \#B =n^{+}$ and $\,f: A \rightarrow B\,$ is an injective map. We choose an element $b\in B$. If $\,b \in f[A] $ then $ f(a) = b\,$ for a unique $a\in A$. Let $A^{*} = A-\left\{a \right\}$ and $\,B^{*} = B -\left\{b \right\}$. We define the function $g: A^{*} \rightarrow B^{*}$ to be the restriction of $f$ in $A^{*}$, i.e., $\,g = f\restriction_{A^{*}}$. Then, $g\,$ is a one-to-one function and by our inductive hypothesis $\#A^{*} \le \#B^{*}.$ But since $\#A^{*} = \#A-1\,$ and $\#B^{*} = n$. Then, $\#A-1 \le \ n \rightarrow \#A \le n^{+} .$</p>
<p>If $b \notin f[A]$. Let $B^{*} = B-\left\{b \right\}$. Where $f: A \rightarrow B^{*}$ is a one-to-one function, and by inductive hypothesis $\#A \le \#B^{*}.$ But since $\#B^{*} = \#B-1 = n$. Then, $\#A \le n^{+} $. </p>
<p>Hence $n^{+} \in S$, which closes the induction.</p>
<p>($\Leftarrow$) We need to show that $\#A \le \#B$ implies the existence of an injective map $f: A\rightarrow B$.</p>
<p>For $\#B = 0\,$, that means $\#A = 0$. And clearly $f: \emptyset \rightarrow \emptyset $ is an injection. Suppose our claims holds for n, we need to show that also holds for $n^{+}$. For $\# B = n^{+}$, as $n^{+} \not = 0\,$ the set is nonempty, so there exist an element $b \in B$. Let $B^{*} = B -\left\{b \right\}$, then we have that $\#B^{*} = n$. </p>
<p>If $\#A \le \#B^{*}$ by our inductive hypothesis, there exist a injective map $g : A \rightarrow B^{*}$. Let $i_{ B^{*} \rightarrow B}$ be the inclusion map, i.e., $i_{ B^{*} \rightarrow B}: B^{*} \rightarrow B : j \mapsto j\,$, which is an injection. Therefore, the composition $\, i_{ B^{*} \rightarrow B} \circ g: A \rightarrow B\,$, is an injection as desired. </p>
<p>If $\, \#A \not\le \#B^{*}$ but $ \#A \le \#B $, i.e., $ \#A = n^{+}$. We set $A^{*} = A-\left\{a \right\}$. Then $\#A^{*}\le \#B^{*}$ and by the inductive hypothesis there exist an injective map $h': A^{*} \rightarrow B^{*}$. We can define the function $h: A \rightarrow B$, by adding the ordered pair $ \langle a, b \rangle $ to the function $h'$. That is, $h: = h' \cup \left\{\, \langle a, b \rangle \, \right\}$ (as $ \langle a, b \rangle $ is a genuine extra element, the map $h$ is one-to-one). $\;\;\ \Box$</p>
<p>Since $\#A> \# B$ then by the above theorem cannot exists an injective map $A$ to $B$. </p>
|
1,515,775 | <p>So, everyone knows the famous <a href="https://en.wikipedia.org/wiki/Lagrange%27s_four-square_theorem" rel="nofollow">Lagrange's four-square theorem</a>, which states, that every positive integer can be written down as the sum of $4$ square numbers. Since $4=2^2$, and $2$ represents the square numbers, could this be stated for bigger numbers too? For example, $8=2^3$, so we could state, that every positive integer can be written down as the sum of $8$ cube numbers? I tried to find a counter-example for this statement, but didn't have any success.</p>
<p>What do you think about this idea? Can you tell me a counter example or a problem with the thinking? Thanks!</p>
| Thomas Delaney | 250,032 | <p>I think yours is an interesting question, and I strongly feel that there is a finite $n$ such that all integers can be expressed as the sum of $n$ cubes. However, isn't it easy to find a counter-example to your conjecture that $n=8$? What about $23$, which can be expressed as $8+8+1+1+1+1+1+1+1$? It requires $9$ cubes.
I'm guessing $n$ might be around $10$.</p>
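<p>Indeed $23$ (and also $239$) needs nine cubes, and the classical Wieferich–Kempner result $g(3)=9$ says nine always suffice. A small dynamic program — a sketch — confirms the counterexample:</p>

```python
def min_cubes(upto):
    # best[v] = least number of positive cubes summing to v
    best = [0] + [float("inf")] * upto
    cubes = [k**3 for k in range(1, 8) if k**3 <= upto]
    for v in range(1, upto + 1):
        best[v] = 1 + min(best[v - c] for c in cubes if c <= v)
    return best

best = min_cubes(300)
print(best[23], best[239])  # 9 and 9
```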
|
2,331,191 | <p>Use either direct proof, proof by contrapositive, or proof by contradiction.</p>
<p>Using proof by contradiction method</p>
<blockquote>
<p>Assume n is a perfect square and n+3 is a perfect square (proof by
contradiction)</p>
<p>There exists integers x and y such that <span class="math-container">$n = x^2$</span> and <span class="math-container">$n+3 = y^2$</span></p>
<p>Then <span class="math-container">$x^2 + 3 = y^2$</span></p>
<p>Then <span class="math-container">$3 = y^2 - x^2$</span></p>
<p>Then <span class="math-container">$3 = (y-x)(y+x)$</span></p>
<p>Then <span class="math-container">$y+x = 3$</span> and <span class="math-container">$y-x=1$</span></p>
<p>Then <span class="math-container">$x = 1, y = 2$</span></p>
<p>Since <span class="math-container">$x = 1$</span>, that implies <span class="math-container">$n = 1$</span></p>
<p><em><strong>this is how far I got</strong></em></p>
</blockquote>
<p>Anyone know what I should do now?</p>
| Steven Alexis Gregory | 75,410 | <p>First, as mentioned by others, $1$ is a perfect square and $1+3$ is a perfect square. So you need to prove "If $n > 1$ is a perfect square, then $n+3$ is not a perfect square."</p>
<p>Let $n = m^2$ where $m$ and $n$ are positive integers and $n > 1$. Then we must have $m > 1$. Since $m$ is an integer, we can say that $m \ge 2$. Hence $2m+1 \ge 5$. So</p>
<p>$$(m+1)^2 = m^2 + (2m+1) \ge n + 5 > n+3$$</p>
<p>Since $n < n+3 < n+5$, then $m^2 < n+3 < (m+1)^2$</p>
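<p>A brute-force search — a sketch — confirms that $n=1$ is the only exception:</p>

```python
from math import isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

hits = [n for n in range(1, 1_000_000) if is_square(n) and is_square(n + 3)]
print(hits)  # [1]
```

<p>This matches the proof: past $n=1$, consecutive squares are more than $3$ apart.</p>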
|
461 | <p>There is a function on $\mathbb{Z}/2\mathbb{Z}$-cohomology called <em>Steenrod squaring</em>: $Sq^i:H^k(X,\mathbb{Z}/2\mathbb{Z}) \to H^{k+i}(X,\mathbb{Z}/2\mathbb{Z})$. (Coefficient group suppressed from here on out.) Its notable axiom (besides things like naturality), and the reason for its name, is that if $a\in H^k(X)$, then $Sq^k(a)=a \cup a \in H^{2k}(X)$ (this is the cup product). A particularly interesting application which I've come across is that, for a vector bundle $E$, the $i^{th}$ Stiefel-Whitney class is given by $w_i(E)=\phi^{-1} \circ Sq^i \circ \phi(1)$, where $\phi$ is the Thom isomorphism.</p>
<p>I haven't found much more than an axiomatic characterization for these squaring maps, and I'm having trouble getting a real grip on what they're doing. I've been told that $Sq^1$ corresponds to the "Bockstein homomorphism" of the exact sequence $0 \to \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$. Explicitly, if we denote by $C$ the chain group of the space $X$, we apply the exact covariant functor $Hom(C,-)$ to this short exact sequence, take cohomology, then the connecting homomorphisms $H^i(X)\to H^i(X)$ are exactly $Sq^1$. This is nice, but still somewhat mysterious to me. Does anyone have any good ideas or references for how to think about these maps?</p>
| Andy Putman | 317 | <p>For the Steenrod squares, I highly recommend the first couple of chapters of the book "Cohomology operations and applications in homotopy theory" by Mosher and Tangora. It's beautifully written (and now available in a cheap Dover edition).</p>
|
461 | <p>There is a function on $\mathbb{Z}/2\mathbb{Z}$-cohomology called <em>Steenrod squaring</em>: $Sq^i:H^k(X,\mathbb{Z}/2\mathbb{Z}) \to H^{k+i}(X,\mathbb{Z}/2\mathbb{Z})$. (Coefficient group suppressed from here on out.) Its notable axiom (besides things like naturality), and the reason for its name, is that if $a\in H^k(X)$, then $Sq^k(a)=a \cup a \in H^{2k}(X)$ (this is the cup product). A particularly interesting application which I've come across is that, for a vector bundle $E$, the $i^{th}$ Stiefel-Whitney class is given by $w_i(E)=\phi^{-1} \circ Sq^i \circ \phi(1)$, where $\phi$ is the Thom isomorphism.</p>
<p>I haven't found much more than an axiomatic characterization for these squaring maps, and I'm having trouble getting a real grip on what they're doing. I've been told that $Sq^1$ corresponds to the "Bockstein homomorphism" of the exact sequence $0 \to \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$. Explicitly, if we denote by $C$ the chain group of the space $X$, we apply the exact covariant functor $Hom(C,-)$ to this short exact sequence, take cohomology, then the connecting homomorphisms $H^i(X)\to H^i(X)$ are exactly $Sq^1$. This is nice, but still somewhat mysterious to me. Does anyone have any good ideas or references for how to think about these maps?</p>
| Dev Sinha | 4,991 | <p>Here's how I explain Steenrod squares to geometers. First, if $X$ is a manifold of dimension $d$ then one can produce classes in $H^n(X)$ by proper maps $f: V \to X$ where $V$ is a manifold of dimension $d-n$ through many possible formalisms - eg. intersection theory (the value on a transverse $i$-cycle is the count of intersection points), or using the fundamental class in locally finite homology and duality, or Thom classes, or as the pushforward $ f_*(1) $ where $1$ is the unit class in $H^0(V)$. Taking this last approach, suppose $f$ is an immersion and thus has a normal bundle $\nu$. If $x = f_*(1) \in H^n(X)$ then $Sq^i(x) = f_*(w_i(\nu))$. This is essentially the Wu formula.</p>
<p>That is, if cohomology classes are represented by submanifolds, and for example cup product reflects intersection data, then Steenrod squares remember normal bundle data.</p>
|
1,643,579 | <p>I've an homework problem that i'm unable to find the right answer.</p>
<p>The problem is:</p>
<p>The line $tx + sy = 2$ goes through point $(2,1)$ and is parallel to line $y = 8 -3x$, find the value of $t^2 + s^2$. </p>
<p>$ A. {32\over49}$ $B.{18\over49}$ $C.{36\over49}$ $D.{25\over49} $ $E.{40\over49} $</p>
<p>I was able to find a parallel line at $ y = -3x + 7 $ but i'm unable to find any of the possible answers to be right.</p>
| Mark Viola | 218,419 | <p><strong>HINT:</strong></p>
<p>Using $\cos(x)=1-\frac12 x^2+O(x^4)$ we have</p>
<p>$$\frac{1-\prod_{k=1}^n\cos(kx)}{x^2}=\frac12\sum_{k=1}^{n}k^2+O(x^2)$$</p>
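<p>For <span class="math-container">$n=3$</span> the hint predicts the limit <span class="math-container">$(1^2+2^2+3^2)/2 = 7$</span>; a numeric check — a sketch, with <span class="math-container">$x=10^{-3}$</span> small enough that the <span class="math-container">$O(x^2)$</span> term is negligible — agrees:</p>

```python
import math

def g(x, n=3):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= math.cos(k * x)
    return (1.0 - prod) / (x * x)

print(g(1e-3))  # approximately 7
```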
|
1,643,579 | <p>I've an homework problem that i'm unable to find the right answer.</p>
<p>The problem is:</p>
<p>The line $tx + sy = 2$ goes through point $(2,1)$ and is parallel to line $y = 8 -3x$, find the value of $t^2 + s^2$. </p>
<p>$ A. {32\over49}$ $B.{18\over49}$ $C.{36\over49}$ $D.{25\over49} $ $E.{40\over49} $</p>
<p>I was able to find a parallel line at $ y = -3x + 7 $ but i'm unable to find any of the possible answers to be right.</p>
| Mark Viola | 218,419 | <p>Since it has yet to be posted, I thought it would be instructive to present the approach suggested by @DanielFischer. We note that we can write </p>
<p>$$a_{n+1}-a_n=\frac12(n+1)^2 \tag 1$$</p>
<p>Summing $(1)$ we find that </p>
<p>$$\begin{align}
\sum_{k=1}^{n-1}(a_{k+1}-a_k)&=a_{n}-a_1\\\\
a_n&=a_1+\frac12\sum_{k=1}^{n-1}(k+1)^2\\\\
&=\frac12\sum_{k=1}^{n}k^2\\\\
&=\frac{1}{12}n(n+1)(2n+1)
\end{align}$$ </p>
<hr>
<p>For the second part, we note that </p>
<p>$$\frac{6a_n}{n^3}=\left(1+\frac1n\right)\left(1+\frac1{2n}\right)$$</p>
<p>Therefore, the limit of interest is </p>
<p>$$\begin{align}
\lim_{n\to \infty}\left(\frac{6a_n}{n^3}\right)^{(n^2+1)/n}&=\lim_{n\to \infty}\left(\frac{6a_n}{n^3}\right)^{n\left(\frac{n^2+1}{n^2}\right)}\\\\
&=e^{3/2}
\end{align}$$</p>
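<p>Both parts can be sanity-checked — a sketch, using exact rationals for the recurrence and floats for the limit:</p>

```python
import math
from fractions import Fraction

def a(n):
    n = Fraction(n)
    return n * (n + 1) * (2 * n + 1) / 12

# the closed form satisfies a_{n+1} - a_n = (n+1)^2 / 2 with a_1 = 1/2
assert a(1) == Fraction(1, 2)
for k in range(1, 50):
    assert a(k + 1) - a(k) == Fraction((k + 1) ** 2, 2)

# (6 a_n / n^3)^{(n^2+1)/n} -> e^{3/2}
n = 10**6
base = (1 + 1 / n) * (1 + 1 / (2 * n))
print(base ** ((n * n + 1) / n), math.exp(1.5))  # nearly equal
```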
|
51,757 | <p>I'm trying to find closed form for</p>
<p>$$\sum_{k=1}^{n}\sin\frac{1}{k}$$</p>
<p>I typed it in Mathematica 6.0 and WolframAlpha, but no result what i expected.</p>
<p>Any hints will be appreciated, thank you.</p>
| Andrew | 11,265 | <p>The sum can be expanded in an asymptotic series, the first several terms being
$$
\sum_{k=1}^n\sin\frac{1}{k}=
\log n+a+\frac1{2n}-\frac1{12n^3}+O\left(\frac1{n^4}\right),
$$
where
$$
a=\gamma+\sum_{k=1}^\infty (-1)^k\frac{\zeta(2k+1)}{(2k+1)!}
$$
and $\gamma$ is the Euler constant. The value of $a$ is $0.38...$ as written by Henry.</p>
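<p>Both expressions for $a$ agree numerically — a sketch; the crude <code>zeta</code> with an integral tail estimate is my own helper and is accurate enough for $s \ge 3$:</p>

```python
import math

def zeta(s, K=1000):
    # partial sum plus the integral tail estimate; fine for s >= 3
    return sum(k ** -s for k in range(1, K + 1)) + K ** (1 - s) / (s - 1)

gamma = 0.5772156649015329  # Euler's constant

# the series formula for a
a_series = gamma + sum(
    (-1) ** k * zeta(2 * k + 1) / math.factorial(2 * k + 1) for k in range(1, 8)
)

# the constant extracted directly from the partial sums
n = 100000
s_n = sum(math.sin(1 / k) for k in range(1, n + 1))
a_direct = s_n - math.log(n) - 1 / (2 * n)

print(a_series, a_direct)  # both approximately 0.38532
```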
|
4,069,053 | <blockquote>
<p>If a finite field has characteristic 2 why is every element a square?</p>
</blockquote>
<p>I've attempted this problem by calculating the square root as follows: <span class="math-container">$\mathbb{F}$</span> has <span class="math-container">$q = 2^m$</span> elements, so if <span class="math-container">$a \in \mathbb{F}$</span> then <span class="math-container">$a = a^q = (a^{2^{(m-1)}})^2$</span>. Since <span class="math-container">$a \in \mathbb{F}$</span> is a square element if there exists <span class="math-container">$b \in \mathbb{F}$</span> s.t. <span class="math-container">$b^2 = a$</span>, I believe the next step would be to show that the square root <span class="math-container">$b = a^{2^{(m-1)}} \in \mathbb{F}$</span>. Is this reasonable or should I take a different approach to the problem (and if so what approach?)?</p>
| Mark Bennet | 2,906 | <p>You can do it like this.</p>
<p>If <span class="math-container">$a^2=b^2$</span> then <span class="math-container">$a=\pm b$</span>, but in characteristic <span class="math-container">$2$</span> we have <span class="math-container">$b=-b$</span> so that if <span class="math-container">$a^2=b^2$</span> then <span class="math-container">$a=b$</span>.</p>
<p>Now square all the <span class="math-container">$n$</span> distinct elements of your finite field. You get <span class="math-container">$n$</span> different answers. And since there are only <span class="math-container">$n$</span> elements in the field each of them must appear in the list of squares.</p>
<hr />
<p>You can also do it by noting that the multiplicative group of non-zero elements has odd order, and every element of a group of odd order is a square. Working with <span class="math-container">$q$</span> in your case is a simple case of this underlying fact.</p>
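<p>The squaring map (the Frobenius) can be checked to be a bijection on small fields of characteristic <span class="math-container">$2$</span>. A sketch with carry-less polynomial arithmetic; the moduli <span class="math-container">$x^2+x+1$</span>, <span class="math-container">$x^3+x+1$</span>, <span class="math-container">$x^4+x+1$</span> are standard irreducible choices for <span class="math-container">$GF(4)$</span>, <span class="math-container">$GF(8)$</span>, <span class="math-container">$GF(16)$</span>:</p>

```python
def gf_mul(a, b, mod_poly, deg):
    # multiply in GF(2^deg); elements are bit masks of polynomial coefficients
    r = 0
    while b:          # carry-less ("XOR") multiplication
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    for i in range(r.bit_length() - 1, deg - 1, -1):  # reduce mod mod_poly
        if (r >> i) & 1:
            r ^= mod_poly << (i - deg)
    return r

def squares_of(mod_poly, deg):
    return {gf_mul(x, x, mod_poly, deg) for x in range(2 ** deg)}

for mod_poly, deg in [(0b111, 2), (0b1011, 3), (0b10011, 4)]:
    print(deg, sorted(squares_of(mod_poly, deg)))  # every field element is a square
```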
|
569,927 | <p>$p^{\; \left\lfloor \sqrt{p} \right\rfloor}\; -\; q^{\; \left\lfloor \sqrt{q} \right\rfloor}\; =\; 999$</p>
<p>How do you find positive integer solutions to this equation?</p>
| Dietrich Burde | 83,966 | <p>The exponents must be less than $4$ (we have $17^4-16^4=17985$ and
$16^4-15^3=62161$). There is an easy solution with $q=1$, namely $(p,q)=(10,1)$, since $10^{\lfloor \sqrt{10}\rfloor}-1^{\lfloor \sqrt{1}\rfloor}=10^3-1=999$. The other solution arises from a difference of two cubes equal to $999$. In fact, $37=4^3-3^3$ is a difference of two consecutive cubes, and $999=3^3\cdot 37$. This gives
$$
12^3-9^3=999.
$$</p>
<p>For the numbers equal to the difference of consecutive cubes see the sequence A003215 in integer sequences, which is the crystal ball sequence for a hexagonal lattice.</p>
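<p>A brute-force search — a sketch; the bound $2000$ is arbitrary but well beyond where the gap argument applies — finds exactly the two solutions:</p>

```python
from math import isqrt

def v(p):
    return p ** isqrt(p)  # p^floor(sqrt(p))

N = 2000
by_value = {}
for q in range(1, N + 1):
    by_value.setdefault(v(q), []).append(q)

solutions = [(p, q) for p in range(1, N + 1) for q in by_value.get(v(p) - 999, [])]
print(solutions)  # [(10, 1), (12, 9)]
```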
|
367,643 | <p>A Norman window has the shape of a rectangle with a semi circle on top; diameter of the semicircle exactly matches the width of the rectangle. Find the dimensions of the Norman window whose perimeter is 300 in that has maximal area.</p>
<p><a href="https://i.stack.imgur.com/WBPm7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WBPm7.png" alt="enter image description here"></a></p>
<p>The area of the semicircle would be <span class="math-container">$(\pi(w/2)^2)/2$</span>. The area of the rectangle would be <span class="math-container">$hw$</span>. I know that the perimeter is 300 in, and that the perimeter would be <span class="math-container">$2h+w+(w\pi) = 300$</span>. How would I write <span class="math-container">$h$</span> in terms of <span class="math-container">$w$</span>, and then solve for <span class="math-container">$h$</span> to specify the dimensions? The total area would be the 2 sub-areas added together. I would have to take the derivative of the combined areas to solve for the width and height. What would be the proper steps for doing this?</p>
| colormegone | 71,645 | <p>OK, I spotted the error: when you wrote the expression (which Anil Baseski repeated) for the perimeter, you used the circumference of a circle written as $C = \pi d$. However, the "lunette" of the Norman window is only a semi-circle, so the perimeter equation should be $p = 2h + w + \frac{\pi}{2} w = 300$ . The corrections are then</p>
<p>$$h = 150 - \left(\frac{\pi + 2}{4}\right)w , A = 150w - \left(\frac{4 + \pi}{8}\right)w^2,$$</p>
<p>$$\frac{dA}{dw} = 150 - \left(\frac{4 + \pi}{4}\right)w = 0 \Rightarrow w = \frac{600}{4 + \pi} \approx 84.0$$</p>
<p>$$ \Rightarrow h \approx 150 - \left(\frac{\pi + 2}{4}\right) \cdot 84.0 \approx 42.0.$$</p>
<p>Needless to say, such windows in actual use are designed for esthetics and not maximal area: this window is way too wide, relative to its total height, to be appealing...</p>
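<p>As a sanity check (not part of the calculus argument), a brute-force grid search over the width reproduces the same optimum:</p>

```python
# Maximize A(w) over a fine grid of widths; the grid bounds are arbitrary.
import math

def area(w):
    h = 150 - (math.pi + 2) / 4 * w            # from the perimeter constraint
    return h * w + math.pi * (w / 2) ** 2 / 2  # rectangle + semicircular lunette

best_w = max((w / 100 for w in range(1, 15000)), key=area)
# best_w agrees, within grid resolution, with 600 / (4 + pi) ~ 84.0
```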
|
802,014 | <p>For all sets $A$, $B$, $C$, if $A$ is a subset of $B$, $B$ is a subset of $C$, and $C$ is a subset of $A$, then $A = B = C$.</p>
<p>This is a true statement, so I need to provide a proof? Thus, when a statement is false I refute it with a counterexample, whereas if it is true it has to be established by a proof?</p>
| Rene Schipperus | 149,912 | <p>Yes, what you say is correct. The statement is an application of the Extensionality Axiom of set theory. Extensionality says that if two sets have the same elements then they are equal. It is easy to see from $A \subseteq B$ and $B \subseteq C \subseteq A$ that an element belongs to $A$ if and only if it belongs to $B$, so by extensionality $A=B$. The same argument with $B \subseteq C$ and $C \subseteq A \subseteq B$ gives $B=C$, hence $A=B=C$.</p>
|
1,407,700 | <p>I am stuck on solving the following systems of equations with 3 variables. The textbook asks to use the addition method so can we please stick to that.</p>
<p>${5x -y = 3}$</p>
<p>${3x + z = 11}$</p>
<p>${y - 2z = -3}$</p>
<p>I am used to systems of equations where each equation has at least one instance of the variable e.g. ${x + y + z = 1}$ but in each of the above, one of the variables is omitted in each equation.</p>
<p>Could somebody explain what to do in this situation? Should I multiply both sides by one of the variables to balance it up? </p>
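<p>(For reference, here is one possible elimination order worked out in code; this is a sketch, not necessarily the textbook's prescribed sequence. The missing variables simply have coefficient $0$, so equations can be added directly:)</p>

```python
# (1) 5x -  y      =  3
# (2) 3x      +  z = 11
# (3)       y - 2z = -3
# (1) + (3): 5x - 2z = 0            (y is eliminated)
# from (2):  z = 11 - 3x
# substitute: 5x - 2*(11 - 3x) = 0  ->  11x = 22
x = 22 / 11
z = 11 - 3 * x
y = 5 * x - 3
assert (5 * x - y, 3 * x + z, y - 2 * z) == (3.0, 11.0, -3.0)
```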
| Alex1357 | 137,382 | <p>$f$ is not uniformly continuous.
Short explanation:
For $x \to \infty$ we have asymptotically $\sin \frac 1x \approx \frac 1x$, so $f(x) \approx x^2$. </p>
<p>A bit longer explanation: There is a $y_0 > 0$ such that $\sin(y) > \frac y2 > 0$ for $0<y<y_0$. Consequently, $\sin(\frac 1x) > \frac 1 {2x}$ and $f(x)>\frac {x^2}2$ for $x > \frac 1{y_0}$.</p>
<p>A function which is uniformly continuous can grow at most linearly, though. For an $\epsilon, \delta$ instance of the definition one gets a bound of the form $f(x) \leq f(0) + \frac \epsilon \delta x + \epsilon$.</p>
|
1,390,382 | <p>I have a problem that comes from absorbing random walks on a connected undirected graph $G$ with two types of nodes, absorbing nodes and free nodes. We randomly pick a node to start, once the random walk reaches an absorbing node, it will never leave the node again. But if we are at a free node, we will pick an outgoing edge with probability proportional to the edge weight and go to one of the neighbouring nodes. With proper labelling of the nodes, the transition matrix $T(G)$ can be written as $$T = \begin{pmatrix} T_{aa} & T_{af} \\ T_{fa} & T_{ff} \end{pmatrix}, $$ where the matrix $T_{aa}$ corresponds to the block of probabilities corresponding to transitions from an absorbing node to another absorbing node, and so on. For the computation of the overall limiting distribution, I need to show that $\lim_{n \rightarrow \infty}{T_{ff}^{n} = 0}$ so that I can have a nice looking result $$\lim_{n \rightarrow \infty}{T^n} = \begin{pmatrix} I & 0 \\ (I-T_{ff})^{-1}T_{fa} & 0 \end{pmatrix}$$ </p>
<p>I now believe that the limit is indeed zero and I think the easiest way is probably proving that all eigenvalues have absolute value strictly smaller than 1. And I can already show that the absolute values of all eigenvalues are no larger than 1. Can somebody help me to prove that there cannot be an eigenvalue whose absolute value is 1? Also, once we know no eigenvalue of $T_{ff}$ is 1, we would immediately have the fact that $I-T_{ff}$ is invertible. Other approaches are also welcome. Thanks.</p>
| Clement C. | 75,808 | <p>I may be missing something on the conditions: what about $$A=\begin{pmatrix} 0 & 1 & 0\\ 1& 0 & 0\\ 0&0&0\end{pmatrix}?$$ You have $A^3 = A$, so its powers cannot converge to $0$.</p>
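<p>A quick computational check of this counterexample (plain Python, no libraries):</p>

```python
def matmul(A, B):
    """Square matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
A3 = matmul(matmul(A, A), A)
assert A3 == A  # A^3 = A, so the powers of A alternate and never tend to 0
```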
|
3,360,879 | <p>Given the localised ring <span class="math-container">$\mathbb{Z}_{(2)}=\{\frac{a}{b}:a,b \in \mathbb{Z}, 2 \nmid b \}$</span>, I want to show that this is an integral domain.</p>
<p>We choose some fraction <span class="math-container">$ \frac{a}{b}\in \mathbb{Z}_{(2)}$</span>, where <span class="math-container">$a \in \mathbb{Z}$</span> and <span class="math-container">$b \in \mathbb{Z}$</span>, such that <span class="math-container">$2 \nmid b$</span>, and pick another fraction <span class="math-container">$ \frac{a'}{b'}\neq 0\in \mathbb{Z}_{(2)}$</span> with <span class="math-container">$a' \in \mathbb{Z}$</span> and <span class="math-container">$b' \in \mathbb{Z}$</span> such that <span class="math-container">$2 \nmid b'$</span>. We look at the equation <span class="math-container">$ \frac{a}{b}*\frac{a'}{b'}=0$</span>. Can we just conclude that <span class="math-container">$a$</span> or <span class="math-container">$a'$</span> has to be zero, because the only zero divisor in <span class="math-container">$\mathbb{Z}$</span> is zero? How could I argue alternatively with the prime ideal <span class="math-container">$(2)$</span>?</p>
| Olórin | 187,521 | <p>Let <span class="math-container">$A$</span> be a commutative ring with unit and <span class="math-container">$S$</span> multiplicative subset of <span class="math-container">$A$</span> (contains <span class="math-container">$1$</span> by definition), and <span class="math-container">$S^{-1}A$</span> the localization. Then <span class="math-container">$\frac{a}{s} \frac{a'}{s'} = 0$</span> means that there is an <span class="math-container">$s'' \in S$</span> such that <span class="math-container">$s''(aa' \times 1 - 0 \times ss') = 0$</span> in <span class="math-container">$A$</span>. (As <span class="math-container">$\frac{0}{1}$</span> is the localization's zero.) With <span class="math-container">$A = \mathbf{Z}$</span> and <span class="math-container">$S = A \backslash \mathfrak{p}$</span> where <span class="math-container">$\mathfrak{p} = (2)$</span> which is the setup you are dealing with, one sees that <span class="math-container">$a$</span> or <span class="math-container">$a'$</span> must be zero.</p>
<p><em>Remark</em>. If <span class="math-container">$\mathbf{Z}_{(2)}=\{\frac{a}{b} \in\mathbf{Q}\;|\;a,b \in \mathbf{Z}, 2 \nmid b \}$</span> then <span class="math-container">$0 = \frac{a}{b} \frac{c}{d} = \frac{ac}{bd}$</span> implies that <span class="math-container">$a$</span> or <span class="math-container">$c$</span> is zero, same conclusion.</p>
|
19,996 | <p>In 1556, Tartaglia claimed that the sums<br>
1 + 2 + 4<br>
1 + 2 + 4 + 8<br>
1 + 2 + 4 + 8 + 16<br>
are alternately prime and composite. Show that his conjecture is false. </p>
<p>With a simple counterexample, $1 + 2 + 4 + \dots + 128 + 256 = 511 = 7 \cdot 73$, which is composite where the alternation predicts a prime, apparently it's false. However, I want to prove it in the general case instead of using a specific counterexample, but I got stuck :( !
I tried:<br>
The sum $\sum_{i=0}^n 2^i$ is equal to $2^{n+1} - 1$. If I assume that $2^{n+1} - 1$ is prime, then we must show that $2^{n+1} - 1 + 2^{n+1} = 2^{n+2} - 1$ is not composite. Or we assume $2^{n+1} - 1$ is composite and we must show that $2^{n+2} - 1$ is not prime.
But I have no clue how $2^{n+2} - 1$ relates to its previous prime/composite. Any hint?</p>
| Per Alexandersson | 934 | <p>If $2^{n}-1$ is prime, then $n$ must be odd (for $n>2$): otherwise, we could factor
$2^{n}-1$ as $(2^{n/2}+1)(2^{n/2}-1).$</p>
<p>So <i>at least</i> every other number in the series $2^n-1,\ n=1,2,3,\dots$
must be composite, by this difference-of-squares factorization.</p>
<p>EDIT: Reread the question, but you can do the same for $a^3-b^3 = (a-b)(a^2+ab+b^2)$ so whenever $n$ is divisible by 3, you can do a factorization as above. In general, $a^n-b^n$ is divisible by $a-b,$
so if $n$ is not prime, $n=pq$, and we have
$$2^{n}-1 = (2^q)^p - 1^p = (2^q-1)Q$$
where $Q$ is an integer.</p>
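<p>For what it's worth, a direct computation confirms where the alternation first fails (trial division is plenty at this size; <code>is_prime</code> is just a throwaway helper):</p>

```python
def is_prime(m: int) -> bool:
    """Naive trial-division primality test (fine for small m)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Tartaglia's sums 1 + 2 + ... + 2^(n-1) = 2^n - 1, starting at n = 3:
results = [(2**n - 1, is_prime(2**n - 1)) for n in range(3, 10)]
# 7 prime, 15 composite, 31 prime, 63 composite, 127 prime,
# 255 composite, then 511 = 7 * 73 composite -- the alternation breaks.
```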
|
1,641,076 | <p>Let's say I have the following decomposition: </p>
<p>$$\{100,10011,00110\}^*$$</p>
<p>How would I determine if the decomposition is ambiguous or unambiguous?</p>
| Maria | 308,011 | <p>The line of intersection lies in both planes, so it
must be orthogonal to the normal ("facing") vector of each plane. Hence,
you can find a direction vector for the line of intersection by taking the
cross product of the two normal vectors of these planes, since the cross
product of two vectors produces a vector that is perpendicular to
both of the inputs.</p>
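<p>A small numerical illustration with two hypothetical planes, $x+y+z=1$ and $x-y+2z=3$ (any non-parallel pair would do):</p>

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n1 = (1, 1, 1)     # normal vector of x + y + z = 1
n2 = (1, -1, 2)    # normal vector of x - y + 2z = 3
d = cross(n1, n2)  # direction vector of the line of intersection
assert dot(d, n1) == 0 and dot(d, n2) == 0  # perpendicular to both normals
```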
|
2,521,710 | <p>I am trying to do a proof for convergence. But I am stuck in my proof not getting any further... What is missing to finish that proof?</p>
<p>$$a_n = \frac{1}{(n+1)^2}$$
Show that: $$\lim_{n \to \infty}a_n=0$$</p>
<p>Let $\epsilon > 0$ and $\forall n \ge n_0 = \lceil \frac{1}{\sqrt{\epsilon}}\rceil+1 \in \mathbb Z^+:$</p>
<p>$\begin{align}
|a_n-0| &\equiv |a_n| \\
&\equiv | \frac{1}{(n+1)^2}|\\
&\equiv|\frac{1}{(n+1)} \cdot \frac{1}{(n+1)}| \\
&\equiv |\frac{1}{n+1}| \cdot |\frac{1}{n+1}| \\
&< |\frac{1}{n}| \cdot |\frac{1}{n}| \\
&(\text{because } n \in \mathbb N) \\
&= \frac{1}{n^2} < \epsilon \text{ (by definition of convergence) }
\end{align}$</p>
<p>$\begin{align}
n \ge n_0 &= \lceil \frac{1}{\sqrt{\epsilon}}\rceil +1 \\
&> \lceil \frac{1}{\sqrt{\epsilon}} \rceil \\
&\ge \frac{1}{\sqrt{\epsilon}}
\end{align}$</p>
<p>thus</p>
<p>$\begin{align}
n &> \frac{1}{\sqrt{\epsilon}} \\
&\equiv \frac{1}{n} > \sqrt{\epsilon} \\
&\equiv \frac{1}{n^2} > \epsilon
\end{align}$</p>
<p>But the line that should follow: </p>
<p>$\begin{align}
\frac{1}{n^2} &< \epsilon
\equiv \frac{1}{n^2} < \frac{1}{n^2}
\end{align}$</p>
<p>which is wrong.. </p>
| boaz | 83,796 | <p>Let $\varepsilon>0$. We need to find $n_0$, such that
$$
\left|\frac{1}{(n+1)^2}-0\right|<\varepsilon
$$
for all $n_0\leq n$. </p>
<p>Set $n_0=\lceil 1/\varepsilon \rceil$. Note that if $n_0\leq n$, then
$$
\left|\frac{1}{(n+1)^2}-0\right|=\frac{1}{(n+1)^2}\leq\frac{1}{n+1}<\frac{1}{n}\leq\frac{1}{n_0}=\frac{1}{\lceil 1/\varepsilon \rceil}
\leq\frac{1}{1/\varepsilon}=\varepsilon
$$
Q.E.D.</p>
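<p>A quick numerical sanity check of this choice of $n_0$ (illustrative only; the proof above is what matters, and the tolerances tested are arbitrary):</p>

```python
import math

def n0(eps: float) -> int:
    """The threshold index n0 = ceil(1/eps) from the proof."""
    return math.ceil(1 / eps)

# For several tolerances, every n >= n0 satisfies 1/(n+1)^2 < eps:
for eps in (0.5, 0.1, 0.01, 1e-4):
    n = n0(eps)
    assert all(1 / (k + 1) ** 2 < eps for k in range(n, n + 1000))
```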
|
1,921,562 | <p>Couldn't solve this indefinite integral, can someone help me? $$\int \frac {x^3+4x^2+6x+1}{x^3+x^2+x-3} dx$$</p>
| B. Goddard | 362,009 | <p>First, do long division to change your improper rational expression into a polynomial plus a proper rational expression. $$\int 1 +\frac{3x^2+5x+4}{x^3+x^2+x-3} \; dx.$$ Then factor the denominator by noting that if you plug in $x=1$ you get $0$, so $x-1$ is a factor. Then do partial fractions to get $$\int 1 + \frac{2}{x-1} + \frac{x+2}{x^2+2x+3} \; dx.$$ Finally, complete the square in the quadratic to get $$\frac{(x+1)+1}{(x+1)^2+2}$$ and substitute $u=x+1$. That leaves $$\int 1 \;dx + \int \frac{2}{x-1} \; dx + \int \frac{u}{u^2+2} \; du + \int \frac{1}{u^2+2} \; du.$$</p>
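<p>The algebra can be double-checked with exact rational arithmetic (sample points chosen to avoid the pole at $x=1$):</p>

```python
from fractions import Fraction

def integrand(x):
    return (x**3 + 4 * x**2 + 6 * x + 1) / (x**3 + x**2 + x - 3)

def decomposed(x):
    # 1 + 2/(x-1) + (x+2)/(x^2+2x+3), as derived above
    return 1 + 2 / (x - 1) + (x + 2) / (x**2 + 2 * x + 3)

for t in (2, 3, -2, 10):
    t = Fraction(t)
    assert integrand(t) == decomposed(t)
```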
|
1,179,843 | <p>Proving $\sum_{n=1}^\infty \frac{\xi ^n}{n}$ is not uniformly convergent for $\xi \in (0,1)$.</p>
<p>I am trying to do the above. I have attempted to show the sequence of partial sums is not uniformly Cauchy by considering $||\frac{\xi ^n}{n} ||_{\sup}$, but to no avail. Any help please.</p>
| user2566092 | 87,313 | <p>Hint: Show for $z$ close to $1$ that the convergence of the series becomes arbitrarily slow. More formally, show the negation of uniform convergence: There exists $\epsilon > 0$ such that for all positive integers $N$, there exists $z \in (0,1)$ and $n \geq N$ such that $|f_n(z) - f(z)| \geq \epsilon$. Here $f_n$ is the $n$th partial sum of your series, and $f$ is the limiting function. It should be clear that this is true because you can choose $z$ to be whatever you want, arbitrarily close to $1$.</p>
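<p>The arbitrarily slow convergence is easy to see numerically. On $(0,1)$ the series has the closed form $\sum_{n\ge 1} z^n/n = -\log(1-z)$; for any fixed partial sum, a $z$ close enough to $1$ makes the error exceed $\epsilon = 1$ (the sample value of $z$ below is arbitrary):</p>

```python
import math

def partial_sum(z: float, n: int) -> float:
    return sum(z**k / k for k in range(1, n + 1))

def f(z: float) -> float:
    return -math.log(1 - z)   # closed form of the series on (0, 1)

# For each fixed n, z close enough to 1 keeps the error above 1:
for n in (10, 100, 1000):
    z = 1 - 1e-9
    assert f(z) - partial_sum(z, n) > 1
```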
|
275,310 | <p>I am a bit confused. What is the difference between a linear and affine function? Any suggestions will be appreciated</p>
| kozenko | 145,312 | <p>An affine function is the composition of a linear function followed by a translation:
$x \mapsto ax$ is linear; $x \mapsto ax+b$ (the translation $x \mapsto x+b$ composed with the linear map $x \mapsto ax$) is affine.
See <i>Modern Basic Pure Mathematics</i> by C. Sidney.</p>
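<p>The distinction shows up concretely in additivity: a linear map satisfies $f(x+y)=f(x)+f(y)$, while an affine map with $b \neq 0$ does not (the toy values below are arbitrary):</p>

```python
a, b = 3, 5
linear = lambda x: a * x       # linear: preserves addition and scaling
affine = lambda x: a * x + b   # affine: linear map followed by a translation

assert linear(2 + 4) == linear(2) + linear(4)
assert affine(2 + 4) != affine(2) + affine(4)  # off by exactly b
```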
|
3,580,293 | <blockquote>
<p>The value of <span class="math-container">$$\lim\limits_{x \rightarrow \infty} \left(5^x + 5^{3x}\right)^{\frac{1}{x}}$$</span> is...</p>
</blockquote>
<p>My approach :</p>
<blockquote>
<p><span class="math-container">$$\lim\limits_{x \rightarrow \infty} \left(5^x + 5^{3x}\right)^{\frac{1}{x}}$$</span>
<span class="math-container">$$=\lim\limits_{x\rightarrow\infty} \left(5^x\left(1 + 5^{2x}\right)\right)^\frac{1}{x}$$</span>
<span class="math-container">$$=\lim\limits_{x\rightarrow\infty} \left(5^x\right)^\frac{1}{x}\left(1 + 5^{2x}\right)^\frac{1}{x}$$</span>
<span class="math-container">$$=\lim\limits_{x\rightarrow\infty} 5\left(1 + 5^{2x}\right)^\frac{1}{x}$$</span></p>
</blockquote>
<p>But I don't know how to proceed. Can anyone help? Thanks!</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$5^3=(5^{3x})^{1/x}<(5^x+5^{3x})^{1/x} = 5^3(1+5^{-2x})^{1/x}\to 5^3 \text{ as } x\to\infty$$</span>
So by the sandwich theorem the required limit is $125$.</p>
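<p>A numerical check of the limit (the sample values of $x$ are arbitrary):</p>

```python
# (5^x + 5^(3x))^(1/x) approaches 5^3 = 125 as x grows:
vals = [(5**x + 5**(3 * x)) ** (1 / x) for x in (5, 10, 20)]
assert all(abs(v - 125) < 1e-3 for v in vals)
```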
|
77,089 | <p>Fix a field $k$. For a singular variety $X$, I understand that the Grothendieck group $K^0(X)$ of vector bundles on $X$ is not necessarily isomorphic to the Grothendieck group $K_0(X)$ of coherent sheaves on $X$. </p>
<p>I am curious to learn what is known about these two groups in one family of examples: $\mathbb P^n_{D}$, where $D$ is the dual numbers $D=k[\epsilon]/(\epsilon^2)$. </p>
<p>References would be especially appreciated, as I know very little about K-theory.</p>
| Georges Elencwajg | 450 | <p>If $X$ is a noetherian separated scheme and $X_{red}$ its reduction, we have $K_0(X)=K_0(X_{red})$: in other words, $K_0$ doesn't see nilpotents.<br>
Much more generally and profoundly, Quillen has proved that for all his $K$-theory groups, $K_i(X)=K_i(X_{red})$.<br>
In your particular case you thus have (in the following $T$ is an indeterminate)
$$K_0(\mathbb P^n_D)=K_0(\mathbb P^n_k)=\mathbb Z[T]/(T^{n+1})$$</p>
<p>As for $K^0$, a special case of a theorem of Berthelot (SGA 6, Exposé VI, Théorème 1.1, page 365)
states that, for any commutative ring $A$, we have $K^0(\mathbb P^n_A)=K^0(A)[T]/(T^{n+1})$.<br>
If $A=D=k[\epsilon]$, we have $K^0(D)=\mathbb Z$, since projective modules over local rings (like $D$) are free.<br>
So here too $$K^0(\mathbb P^n_D)=\mathbb Z[T]/(T^{n+1})$$</p>
<p><strong>Bibliography</strong><br>
Srinivas has written <a href="http://books.google.com/books/about/Algebraic_K_Theory.html?id=Sr8iTL98eiAC">this nice book</a> on $K$-theory. </p>
<p>And as an homage to the recently sadly departed Daniel Quillen, let me refer to his groundbreaking paper<br>
"Higher algebraic $K$-theory I", published in Springer's Lecture Notes LNM 341.</p>
|
1,234,661 | <p>I lost my baby Rudin book on real analysis book but I recall a pair of results in homework exercises that he seemed to indicate that there is no "boundary" between convergent and divergent series of positive decreasing terms. One result was that if $a_n$ is positive decreasing, and $\sum_n a_n$ is divergent, then $\sum_n a_n/s_n$ is also divergent where $s_n$ is the $n$th partial sum of the $a_i$. So, since the original series diverges, we can keep repeating this construction to get series that diverge slower and slower, since $\lim_{n \to \infty} s_n = \infty$.</p>
<p>Rudin paired this with another homework exercise result about convergent series, something showing that given any convergent series (possibly of positive decreasing terms) you could construct a new series that converged "slower" in some obvious sense. Can someone recall that result?</p>
| zhw. | 228,045 | <p>If $\sum a_n = \infty,$ then we can find $n_1 < n_2 < \dots $ such that $\sum_{n_k\le n < n_{k+1}} a_n > 1$ for all $k.$ Define $b_n = a_n/k$ for $n_k\le n < n_{k+1}.$ Then $\sum b_n =\infty,$ and $b_n/a_n \to 0.$</p>
<p>If $\sum a_n < \infty,$ then we can find $n_1 < n_2< \dots $ such that $\sum_{n_k\le n < n_{k+1}} a_n < 1/2^k$ for all $k.$ Define $b_n = ka_n$ for $n_k\le n < n_{k+1}.$ Then $\sum b_n <\infty,$ and $b_n/a_n \to \infty.$</p>
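<p>The divergent-case construction is easy to run numerically. With the hypothetical choice $a_n = 1/n$, the blocks end roughly at powers of $e$, so $b_n/a_n = 1/k \to 0$ while the partial sums of $b$ keep growing like a harmonic series in $k$:</p>

```python
def slower_divergent(a, n_terms):
    """b_n = a_n / k on the k-th block, where a block is closed
    as soon as its a-sum exceeds 1 (the construction above)."""
    b, k, block_sum = [], 1, 0.0
    for n in range(1, n_terms + 1):
        b.append(a(n) / k)
        block_sum += a(n)
        if block_sum > 1:       # close block k, start block k+1
            k, block_sum = k + 1, 0.0
    return b

b = slower_divergent(lambda n: 1.0 / n, 10**6)
assert b[-1] / (1.0 / 10**6) < 0.1  # b_n / a_n has dropped below 1/10
assert sum(b) > 2.5                 # yet the b-series keeps growing (slowly)
```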
|
1,475,235 | <p>Why doesn't $e^x$ have an inverse in the complex plane? Can someone please clarify it?</p>
| Jack D'Aurizio | 44,121 | <p>Here comes the overkill: by <a href="https://en.wikipedia.org/wiki/Picard_theorem" rel="nofollow">Great Picard's Theorem</a>, any analytic function with an essential singularity at infinity takes every complex value, with at most one exception, an infinite number of times. $e^z$ clearly has an essential singularity at infinity.</p>
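<p>More concretely, the failure of injectivity is just the $2\pi i$-periodicity of $e^z$, which is easy to check numerically (the sample point $z = 1 + 2i$ is arbitrary):</p>

```python
import cmath

z = 1 + 2j
w1 = cmath.exp(z)
w2 = cmath.exp(z + 2j * cmath.pi)  # e^z is 2*pi*i-periodic
assert abs(w1 - w2) < 1e-9         # same value at two different inputs
```

<p>Since $e^z$ takes each nonzero value at infinitely many points, no global inverse exists; one only gets branches of the logarithm.</p>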
|
1,406,535 | <p>Let $ f$ be a function such that $|f(u)-f(v)|\leq|u-v|$ for all real $u$ and $v$ in an interval $[a,b]$.Then:<br>
$(i)$Prove that $f$ is continuous at each point of $[a,b]$.<br></p>
<p>$(ii)$Assume that $f$ is integrable on $[a,b]$.Prove that,$|\int_{a}^{b}f(x)dx-(b-a)f(c)|\leq\frac{(b-a)^2}{2}$,where $a\leq c \leq b$<br></p>
<p>I tried to solve the second part; for the first part I could not get an idea.<br>
$|\int_{a}^{b}f(x)dx-(b-a)f(c)|=|\int_{a}^{b}f(x)-f(c)dx|=\int_{a}^{b}|f(x)-f(c)|dx\leq\int_{a}^{b}|x-c|dx\leq\int_{a}^{c}(c-x)dx+\int_{c}^{b}(x-c)dx$<br></p>
<p>But I am not getting the desired result. What have I done wrong here? Or is there another method to prove it? Please help.</p>
| Jyrki Lahtonen | 11,619 | <p>I begin by trying also to answer your question:</p>
<blockquote>
<p>Why would we exclude $a$ such that $\gcd(a, n) = 1$ when the test works well <em>without</em> such condition? If we know that $\gcd(a, n) \not= 1$, we already know that $n$ is not prime, thus the test is pointless.</p>
</blockquote>
<p>The answer is that we are not excluding those $a$, but the chances of randomly getting a Fermat witness $a$ such that $\gcd(a,n)>1$ may be extremely low. At least it can easily be too low to help us in the task of primality testing.</p>
<hr>
<p>On with my more verbose rant.</p>
<p>I am not an expert on primality testing, but I think $561$ is too small to truly bring to light the risks of primality testing by crude Little Fermat. After all, $3$ is a small number, and you are likely to hit a multiple of three sooner rather than later. But what if you stumble upon a Carmichael number with only relatively large prime factors (for the purposes of crypto we are talking 100+ digit prime number candidates after all).</p>
<p>For example, by <a href="https://en.wikipedia.org/wiki/Carmichael_number" rel="nofollow">Chernik's result</a> if $6k+1,12k+1$ and $18k+1$ are all primes, then their product $n$ is a Carmichael number. If here all those three primes have about 40 digits, then the probability that any given Fermat witness says "Possible prime" is $1-10^{-40}$ or thereabouts - not good when the number to be tested is actually composite!</p>
<p>But, if you use a slightly more sophisticated witness based primality testing, such as <a href="https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test" rel="nofollow">Rabin-Miller</a>, then you never have this problem. Each (R-M) witness yelling "Possible prime!" lowers the odds of your candidate still being a composite by a known minimum factor (IIRC the factor is four - but do check your sources on Rabin-Miller). </p>
<p>So the point is that due to the existence of Carmichael numbers we need to use something other than basic Little Fermat to get something like 99.99% certainty of the candidate being a prime with a reasonable amount of testing
(when 8 R-M witnesses, each cutting the chance of a false positive by a factor of four, will do it).</p>
<p>If the number to be tested were a Carmichael number, then I'm not aware of any $\epsilon>0$ such that each witness would lower the probability of a false positive by a factor of $(1-\epsilon)$. Furthermore (see the Wikipedia page I linked to earlier), there are quite a few Carmichael numbers in any given range. True, they form a tiny fraction, but when Rabin-Miller gives us a kind of guarantee that Fermat cannot, at the same computational complexity, why use Little Fermat?</p>
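<p>For concreteness, here is a minimal sketch of the Rabin-Miller test (illustrative only, not production code, and not a substitute for a vetted library implementation):</p>

```python
import random

def is_mr_witness(a: int, n: int) -> bool:
    """True if a proves that the odd number n > 2 is composite."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1          # n - 1 = 2^s * d with d odd
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False                   # a says "possible prime"
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True                        # n is certainly composite

def probably_prime(n: int, rounds: int = 8) -> bool:
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    return not any(is_mr_witness(random.randrange(2, n - 1), n)
                   for _ in range(rounds))
```

<p>Unlike plain Little Fermat, this correctly rejects Carmichael numbers such as $561$ and $1729$ with overwhelming probability.</p>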
|
102,357 | <p>Let $G=PSL(2,q)$ where $q$ is prime power. What is Aut$(G\times G)$ and Aut$(G\times G\times G)$? Also if $G=A_{n}$ where $A_{n}$ is the alternating group of degree $n$, then what is Aut$(G\times G)$? </p>
<p>Thanks in advance</p>
| Igor Rivin | 11,142 | <p>See <a href="http://en.wikipedia.org/wiki/Direct_product_of_groups#Automorphisms_and_endomorphisms" rel="nofollow">this wikipedia article</a>. The result mentioned there implies that, for a non-abelian simple group $G$, the automorphism group of the direct power $G^n$ is the wreath product $\operatorname{Aut}(G) \wr S_n$.</p>
|
181,855 | <p>In the latest <a href="http://what-if.xkcd.com/113/" rel="noreferrer">what-if</a> Randall Munroe asks for the smallest number of geodesics that intersect all regions of a map. The following shows that five paths of satellites suffice to cover the 50 states of the USA:
<img src="https://i.stack.imgur.com/gyfYt.png" alt="from what-if.xkcd.com"></p>
<p>A similar configuration where the lines are actually great circles is claimed by the author:</p>
<blockquote>
<p>They're all slightly curved, since the Earth is turning under the satellites, but it turns out that this arrangement of lines also works for the much simpler version of the question that ignores orbital motion: "How many straight (great-circle) lines does it take to intersect every state?" For both versions of the question, my best answer is a version of the arrangement above.</p>
</blockquote>
<p>There has been quite some work on similar-sounding problems. For stabbing (or finding transversals of) line segments see, as an example, <a href="http://link.springer.com/article/10.1007%2FBF01934440" rel="noreferrer">Stabbing line segments</a> by H. Edelsbrunner, H. A. Maurer,
F. P. Preparata, A. L. Rosenberg, E. Welzl and D. Wood (and papers which reference it),
or L. M. Schlipf's <a href="http://www.diss.fu-berlin.de/diss/receive/FUDISS_thesis_000000096077" rel="noreferrer">dissertation</a> with examples of different kinds.</p>
<p><strong>Is there an algorithmic approach known to tackle this problem (or for the
simpler problem when all regions of the map are convex)?</strong></p>
<p>In the case of the 50 states of the USA, it is of course easy to see that one great circle does not suffice: take two states (e.g. New York and Louisiana) such that all great circles that intersect those do not pass through a third state (e.g. Alaska). Similarly one can show that we need at least 3 great circles. </p>
<p>Maybe it would be helpful to consider all triples of regions that do not lie on a great circle and use this hypergraph information to deduce lower bounds.</p>
<p><strong>What are good methods to find lowers bounds?</strong></p>
<p>Randall Munroe's conjectures that 5 is optimal:</p>
<blockquote>
<p>I don't know for sure that 5 is the absolute minimum; it's possible
there's a way to do it with four, but my guess is that there isn't.
[...] If
anyone finds a way (or proof that it's impossible) I'd love to see it!</p>
</blockquote>
| Gerhard Paseman | 3,206 | <p>Here is a suggestion following the idea of the original poster to show for the given instance that four is too low a bound. Assume that four geodesics suffice and aim for a contradiction as follows:</p>
<p>Consider five states sharing or almost sharing a common longitude, e.g. the five states north of Texas or those north of and including Louisiana. One geodesic has to run through at least two of them. It should be easy to prove by hand that no four geodesics can both cover the lower 48 states and have one geodesic cover three of these five states, for any geodesic that covers at least three of these states has to leave many other states uncovered, including a group of four contiguous states and a few more groups of states (to be described soon). Now one needs three geodesics to cover this group of four and a few more groups, and one of the geodesics has to hit at least two out of this group of four. But after trying to cover three of this group of four, one has too many states left over with this choice of geodesics.</p>
<p>So the plan is to start with the group of five states, consider each subset of two or more that can be covered by a geodesic, and then pick groups (ideally columns or linear arrangements) of states on either side of the given geodesic which are four or more in number, and consider the possibilities remaining when using one of three geodesics to cover two or more of the four states. When all these possibilities are considered, one may find five or more states left over which form a complete graph on five points in the hypergraph of geodesic relationships. Indeed, for most subsets of the initial five states north of Texas, many of those geodesics do not hit Michigan, Ohio, West Virginia, or Virginia. Of those that do hit one of those four states, they will not hit Kentucky, Tennessee, Georgia, or Florida.</p>
<p>By careful analysis of the states uncovered from considering the first five states, it should be possible to choose wisely two or three candidate sets of four states. For each of these candidate, one should be able to come up with five or more states not coverable by two geodesics.</p>
<p>Gerhard "Another Economical Use Of Pentagrams" Paseman, 2014.10.01</p>
|
360,063 | <p>Let <span class="math-container">$\mathbb{N}$</span> denote the set of positive integers. For any prime <span class="math-container">$p\in\mathbb{N}$</span> let <span class="math-container">$p\mathbb{N} = \{np: n\in \mathbb{N}\}$</span>. Is there a partition <span class="math-container">${\cal P}$</span> of <span class="math-container">$\mathbb{N}\setminus\{1\}$</span> such that for all <span class="math-container">$B \in {\cal P}$</span> and every prime <span class="math-container">$p\in\mathbb{N}$</span> we have <span class="math-container">$|B \cap p\mathbb{N}|=1$</span>?</p>
| LSpice | 2,383 | <p>Recursively define a sequence of <span class="math-container">$B$</span>'s as follows. Initially, each is empty. At each step <span class="math-container">$n > 1$</span>, place <span class="math-container">$n$</span> in the first <span class="math-container">$B$</span> that contains only elements coprime to <span class="math-container">$n$</span>. Clearly, for each prime <span class="math-container">$p$</span>, there is no <span class="math-container">$B$</span> that contains two distinct multiples of <span class="math-container">$p$</span>. Now fix a prime <span class="math-container">$p$</span> and a natural number <span class="math-container">$N > 1$</span>, and consider the first <span class="math-container">$B$</span> that contains no multiple of <span class="math-container">$p$</span> after step <span class="math-container">$N$</span> has completed. The first power <span class="math-container">$p^k$</span> of <span class="math-container">$p$</span> that is larger than <span class="math-container">$N$</span> cannot be placed in any earlier <span class="math-container">$B$</span> (since all have a multiple of <span class="math-container">$p$</span>), so it will be placed in <span class="math-container">$B$</span> if no multiple <span class="math-container">$p d$</span> of <span class="math-container">$p$</span> with <span class="math-container">$N < p d < p^k$</span> has been.</p>
<p>As @StevenStadnicki <a href="https://mathoverflow.net/questions/360063/slicing-up-mathbbn-setminus-1/360069#comment906934_360069">points out</a>, it's interesting to investigate the structure of these sets. (I started to do it by hand, and found it sort of addictive.) Here's some Haskell code to allocate the first <span class="math-container">$N$</span> numbers (doubtless both inefficient and unidiomatic, but it seems to work):</p>
<pre><code>-- each part is stored as (product of its members, members)
insert n [] = [(n, [n])]
insert n ((c,bs):bss) = if gcd c n == 1 then (n*c, n:bs):bss else (c,bs) : insert n bss
-- allocate 2..n greedily
insertTo 1 = []
insertTo n = insert n $ insertTo (n - 1)
</code></pre>
<p>One runs it as</p>
<pre><code>map snd $ insertTo 1000
</code></pre>
<p>(for example), whose output starts</p>
<pre><code>[[997,991,983,977,971,967,953,947,941,937,929,919,911,907,887,883,881,877,863,859,857,853,839,829,827,823,821,811,809,797,787,773,769,761,757,751,743,739,733,727,719,709,701,691,683,677,673,661,659,653,647,643,641,631,619,617,613,607,601,599,593,587,577,571,569,563,557,547,541,523,521,509,503,499,491,487,479,467,463,461,457,449,443,439,433,431,421,419,409,401,397,389,383,379,373,367,359,353,349,347,337,331,317,313,311,307,293,283,281,277,271,269,263,257,251,241,239,233,229,227,223,211,199,197,193,191,181,179,173,167,163,157,151,149,139,137,131,127,113,109,107,103,101,97,89,83,79,73,71,67,61,59,53,47,43,41,37,31,29,23,19,17,13,11,7,5,3,2],
[961,841,529,361,289,169,121,49,25,9,4],
[667,323,143,35,6],[899,437,221,77,15,8],
[713,247,187,21,10],[551,391,91,55,12],
[851,493,209,65,27,14],[299,133,85,33,16],
[377,253,119,95,18],[703,527,319,161,39,20],
[989,779,629,403,203,45,22],
[893,731,533,407,217,115,24],
[943,817,341,259,125,51,26],
[799,481,451,145,57,28],
[901,611,589,473,287,30] …
</code></pre>
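<p>For readers more comfortable outside Haskell, here is the same greedy allocation in Python, together with a check that within each part no prime divides two members (i.e. the members are pairwise coprime):</p>

```python
from math import gcd

def partition(limit):
    """Greedily place 2..limit: each part stores [product of members, members],
    so the coprimality test against a whole part is a single gcd."""
    parts = []
    for n in range(2, limit + 1):
        for part in parts:
            if gcd(part[0], n) == 1:
                part[0] *= n
                part[1].append(n)
                break
        else:
            parts.append([n, [n]])
    return [members for _, members in parts]

parts = partition(200)
for members in parts:
    assert all(gcd(a, b) == 1
               for i, a in enumerate(members) for b in members[i + 1:])
```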
|
3,276,984 | <p>Why do we divide the small difference $d\sqrt{x}$ by the difference in area $dx$, when we normally divide the difference in area by the small difference?<a href="https://i.stack.imgur.com/LDKB7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LDKB7.jpg" alt="enter image description here"></a></p>
<p>My huge confusion is: when we try to find the derivative of $x^2$ we <strong>divide the difference in area $d(x^2)$ by the small difference $dx$</strong> -> <strong>$d(x^2)/dx$</strong>. But when I try to find the derivative of $\sqrt{x}$, I get a correct answer only when I <strong>divide the small difference by the difference in area</strong> -> <strong>$d\sqrt{x}/dx$</strong>??</p>
<p>Thanks in advance! :)</p>
| Henry | 6,460 | <p>You are trying to find the rate of change in the side of a square relative to the change in area. So you have a division, similar to finding other instantaneous rates of change. </p>
<p>If <span class="math-container">$x$</span> is the area and <span class="math-container">$s=\sqrt{x}$</span> the side then you are looking at what happens when you increase the area slightly. Writing <span class="math-container">$d\sqrt{x}$</span> at this stage may be more confusing than something like <span class="math-container">$\delta s$</span></p>
<p>You get the change in area to be <span class="math-container">$\delta x = 2 s \, \delta s + (\delta s)^2 \approx 2 s \, \delta s$</span> when <span class="math-container">$\delta s$</span> is small compared with <span class="math-container">$s$</span>. So in terms of relative changes <span class="math-container">$\dfrac{\delta s}{\delta x} \approx \dfrac{1}{2s}$</span>. Throughout this process <span class="math-container">$\delta s$</span> and <span class="math-container">$\delta x$</span> are small</p>
<p>You then take the limit, substitute back and get the derivative <span class="math-container">$\dfrac{d}{dx} \sqrt{x} = \dfrac{1}{2\sqrt{x}}$</span></p>
|
28,955 | <p>I need to crack a stream cipher with a repeating key.</p>
<p>The length of the key is definitely 16. Each key character can be any of the characters numbered 32-126 in ASCII.</p>
<p>The algorithm goes like this:</p>
<p>Let's say you have a plain text:</p>
<p>"Welcome to Q&A for people studying math at any level and professionals in related fields."</p>
<p>Let's say that the password is:</p>
<p>"0123456789abcdef"</p>
<p>Then, to encrypt the plaintext, just XOR them together. If the key isn't long enough, just repeat it. e.g., </p>
<p>Welcome to Q&A for people studying math at any level and professionals in related fields.</p>
<pre><code> XOR
</code></pre>
<p>0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789ab</p>
<p>I have 2 English messages encrypted with the above algorithm and with the same key.
I know about the commutative property of XOR and that it can be exploited for the example above.
I've read that this is a pretty weak cipher and it has been cracked. However, I have no idea how to do it. So, where can I find a cryptanalysis tool to do it for me?</p>
| Dave Hull | 254,289 | <p>There's a good description of an approach that can be used to crack this in the <a href="http://cryptopals.com" rel="nofollow">cryptopals.com</a> challenges, see set 1, problem 6 for details.</p>
<p>I've written a PowerShell script that can crack repeating xor key crypto based on the info in cryptopals.com's description of the algorithm. It generally works well, though this is a problem of probability not certainty, so your mileage may vary.</p>
<p>The Cryptopals.com method uses Hamming Distance at the bit level, which is effectively the Index of Coincidence.</p>
<p>If you want to try my script or read through it to understand the problem better, you can find a link to the script and a write up on its usage, deficiencies and workarounds at <a href="http://trustedsignal.blogspot.com/2015/07/cracking-repeating-xor-key-crypto.html" rel="nofollow">http://trustedsignal.blogspot.com/2015/07/cracking-repeating-xor-key-crypto.html</a>.</p>
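<p>For reference, the repeating-key XOR scheme described in the question can be sketched in a few lines of Python (the key and messages below are the illustrative ones from the question, not real data):</p>

```python
from itertools import cycle

def repeating_xor(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the key, repeating the key as needed.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"0123456789abcdef"
msg = b"Welcome to Q&A for people studying math at any level"
msg2 = b"and professionals in related fields."
ct, ct2 = repeating_xor(msg, key), repeating_xor(msg2, key)

# XOR is its own inverse, so applying the same key again decrypts:
assert repeating_xor(ct, key) == msg
# XORing two ciphertexts under the same key cancels the key, leaving the
# XOR of the two plaintexts -- the weakness the analysis above exploits:
xored = bytes(a ^ b for a, b in zip(ct, ct2))
plain_xored = bytes(a ^ b for a, b in zip(msg, msg2))
assert xored == plain_xored
```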
|
<p>I am trying to simulate a signal that randomly increases its phase; so far I have tried two things, but neither worked. I usually use MATLAB, but I want to learn some <em>Mathematica</em>, so I thought I would try this in <em>Mathematica</em>.</p>
<p>My first try was</p>
<pre><code>times = Table[i, {i, 0, 2, 0.05}];
function[source_, t_][fiIN_] :=
With[{fi = fiIN + 0.01*RandomReal[]},source*(1 + Sin[2 Pi*t + fi])];
</code></pre>
<p>Where I wanted to feed <code>fi</code> back into <code>fiIN</code> for each subsequent <code>t</code> value (in the list <code>times</code>). I did not know how to make this work, though, so I went on with my second try:</p>
<pre><code>fiupdate[fi_] := fiupdate[fi - 1] + 0.01*RandomReal[];
fiupdate[1] = 0;
fitimes = Range[Length[times]];
</code></pre>
<p>However, this function does not remember the value of <code>RandomReal[]</code> for the earlier steps, so <code>fiupdate[10]</code> could be smaller than <code>fiupdate[9]</code> or <code>fiupdate[8]</code>. Also, when using this function I get an error:</p>
<pre><code>fis = fiupdate[fitimes];
$RecursionLimit::reclim: Recursion depth of 1024 exceeded
</code></pre>
<p>I'm not sure how to make this work. Any help is appreciated. Thank you!</p>
| ssch | 1,517 | <p>Your problem can be reduced to creating an increasing function, <code>phase</code>, and then use <code>Sin[t + phase[t]]</code>.</p>
<p>Here is one way to do this by interpolating a sorted list of random numbers:</p>
<pre><code>tmax = 40;
phase = Interpolation[Sort[RandomReal[10, tmax]]];
Plot[phase[t], {t, 1, tmax}]
Plot[{Sin[t], Sin[t + phase[t]]}, {t, 1, tmax}]
</code></pre>
<p><img src="https://i.stack.imgur.com/Wkv50.png" alt="plots"></p>
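<p>The same idea can be sketched outside Mathematica as well. This small Python sketch (a loose analogue, not a translation of the code above) builds a never-decreasing phase as a running sum of small non-negative increments, which is essentially what the question's <code>fiupdate</code> was aiming for:</p>

```python
import math
import random

random.seed(0)
ts = [0.05 * i for i in range(801)]  # t from 0 to 40 in steps of 0.05

# Monotonically increasing phase: cumulative sum of small non-negative steps.
phase, acc = [], 0.0
for _ in ts:
    acc += 0.01 * random.random()
    phase.append(acc)

signal = [math.sin(2 * math.pi * t + p) for t, p in zip(ts, phase)]
# The phase never decreases:
assert all(phase[i] <= phase[i + 1] for i in range(len(phase) - 1))
```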
|
<p>I don't know how to proceed with solving $$\sum_{i=1}^{n}i^{k}(n+1-i).$$ Please advise.</p>
| Ross Millikan | 1,827 | <p>You can factor out the $(n+1)$ to give $(n+1)\sum_{i=1}^n i^k-\sum_{i=1}^n i^{k+1}$ For positive integral $k$ you can use <a href="http://en.wikipedia.org/wiki/Faulhaber%27s_formula" rel="nofollow">Faulhaber's formulas</a>. What kind of $k$ are you considering?</p>
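<p>The factoring step can be spot-checked numerically (a small Python sketch, added for illustration):</p>

```python
def lhs(n, k):
    # The original sum: sum_{i=1}^{n} i^k (n+1-i)
    return sum(i**k * (n + 1 - i) for i in range(1, n + 1))

def rhs(n, k):
    # The factored form: (n+1) sum_{i=1}^{n} i^k  -  sum_{i=1}^{n} i^(k+1)
    s1 = sum(i**k for i in range(1, n + 1))
    s2 = sum(i ** (k + 1) for i in range(1, n + 1))
    return (n + 1) * s1 - s2

assert lhs(3, 1) == 10  # 1*3 + 2*2 + 3*1
assert all(lhs(n, k) == rhs(n, k) for n in range(1, 15) for k in range(5))
```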
|
<p>I know that when calculating the pre-image under a function <span class="math-container">$f:X \rightarrow Y$</span> of a given subset <span class="math-container">$B$</span>, that is, <span class="math-container">$f^{-1}(B)=\{x \in X\mid f(x) \in B\}$</span>, the <span class="math-container">$f^{-1}$</span> symbol is just notation, because it does not imply the existence of the inverse of the function.</p>
<p>But, taking this into account, can we conclude that <span class="math-container">$f(A)=B \implies A=f^{-1}(B)$</span> is not necessarily true if <span class="math-container">$f$</span> is not invertible, and that the converse is also not necessarily true? I have been trying to find a counterexample but haven't been able to. Thanks.</p>
| FShrike | 815,585 | <p>It is indeed false. It's true when <span class="math-container">$f(x)\in B$</span> implies <span class="math-container">$x\in A$</span> - equivalently, <span class="math-container">$f(X\setminus A)\subseteq Y\setminus B$</span>. This is true, for example, if <span class="math-container">$f$</span> is injective. But consider <span class="math-container">$X=\Bbb R,Y=\Bbb R$</span> and <span class="math-container">$f(x)=x^2$</span>.</p>
<p><span class="math-container">$f([0,\infty))=[0,\infty)$</span> but <span class="math-container">$f^{-1}([0,\infty))=\Bbb R$</span>, so we do not have <span class="math-container">$f(A)=B\implies A=f^{-1}(B)$</span>.</p>
<p><span class="math-container">$f$</span> doesn't <em>need</em> to be injective for this to be true, either. Consider again <span class="math-container">$X=Y=\Bbb R$</span> and:</p>
<p><span class="math-container">$$f:x\mapsto\begin{cases}x^2&|x|\le1\\x&|x|>1\end{cases}$$</span></p>
<p>If <span class="math-container">$A=[-1,1]$</span> then <span class="math-container">$f(A)=B=[0,1]$</span>, and <span class="math-container">$f^{-1}(B)=[-1,1]=A$</span> despite the fact <span class="math-container">$f$</span> does not inject.</p>
|
<p>I know that when calculating the pre-image under a function <span class="math-container">$f:X \rightarrow Y$</span> of a given subset <span class="math-container">$B$</span>, that is, <span class="math-container">$f^{-1}(B)=\{x \in X\mid f(x) \in B\}$</span>, the <span class="math-container">$f^{-1}$</span> symbol is just notation, because it does not imply the existence of the inverse of the function.</p>
<p>But, taking this into account, can we conclude that <span class="math-container">$f(A)=B \implies A=f^{-1}(B)$</span> is not necessarily true if <span class="math-container">$f$</span> is not invertible, and that the converse is also not necessarily true? I have been trying to find a counterexample but haven't been able to. Thanks.</p>
| Arturo Magidin | 742 | <p><strong>Theorem.</strong> Let <span class="math-container">$f\colon X\to Y$</span> be a function. Then <span class="math-container">$f$</span> is one-to-one if and only if for every <span class="math-container">$A\subseteq X$</span>, if <span class="math-container">$f(A)=B$</span> then <span class="math-container">$f^{-1}(B)=A$</span>.</p>
<p><em>Proof.</em> Suppose first that <span class="math-container">$f$</span> is one-to-one, and let <span class="math-container">$f(A)=B$</span>. To prove <span class="math-container">$f^{-1}(B)=A$</span> we prove they each contain each other.</p>
<p>If <span class="math-container">$a\in A$</span>, then <span class="math-container">$f(a)\in B$</span>, and therefore, since <span class="math-container">$f^{-1}(B)=\{x\in X\mid f(x)\in B\}$</span>, we have that <span class="math-container">$a\in f^{-1}(B)$</span>. Thus, <span class="math-container">$A\subseteq f^{-1}(B) = f^{-1}(f(A))$</span>. (This holds without any assumptions on <span class="math-container">$f$</span> beyond it being a function).</p>
<p>Now let <span class="math-container">$x\in f^{-1}(B)$</span>. Then <span class="math-container">$f(x)\in B=f(A)=\{f(a)\mid a\in A\}$</span>, so there exists <span class="math-container">$a\in A$</span> such that <span class="math-container">$f(x)=f(a)$</span>. <strong>Because <span class="math-container">$f$</span> is assumed to be one-to-one</strong>, we conclude that <span class="math-container">$x=a$</span>, so <span class="math-container">$x\in A$</span>. Thus, <span class="math-container">$f^{-1}(B)\subseteq A$</span>, and we have equality.</p>
<p>Conversely, assume that for all <span class="math-container">$A\subseteq X$</span>, if <span class="math-container">$B=f(A)$</span> then <span class="math-container">$A=f^{-1}(B)$</span>. Let <span class="math-container">$x,x'\in X$</span> be such that <span class="math-container">$f(x)=f(x')$</span>, and let <span class="math-container">$A=\{x\}$</span>. Then <span class="math-container">$f(x')\in f(A) = \{f(x)\}=\{f(x')\}$</span>, so therefore <span class="math-container">$x'\in f^{-1}(f(A))=A=\{x\}$</span>. Therefore, <span class="math-container">$x'=x$</span>, so we conclude that <span class="math-container">$f$</span> is one-to-one. <span class="math-container">$\Box$</span></p>
|
2,952,028 | <p>The question asks: Find the values of k for which the line</p>
<p><span class="math-container">$y=2x-k$</span> is tangent to the circle with equation <span class="math-container">$x^2+y^2=5$</span></p>
<p>So I started by substituting,</p>
<p><span class="math-container">$x^2+(2x-k)^2=5$</span></p>
<p><span class="math-container">$x^2+4x^2-4xk+k^2=5$</span></p>
<p><span class="math-container">$5x^2+k^2-4xk-5=0$</span></p>
<p>But after this I couldn't see a way to factor it that would make able to find the discriminant and set that equal to <span class="math-container">$0$</span>.</p>
<p>Help would be appreciated.</p>
| Szeto | 512,032 | <p>Rewrite it as
<span class="math-container">$$5x^2-(4k)x+(k^2-5)=0$$</span></p>
<p>Thus,
<span class="math-container">$$\Delta=16k^2-20(k^2-5)=100-4k^2$$</span></p>
<p>Setting <span class="math-container">$\Delta=0$</span>, we obtain <span class="math-container">$k=\pm 5$</span>.</p>
<p>Visualization by Desmos:
<a href="https://i.stack.imgur.com/iNWqW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iNWqW.jpg" alt=""></a></p>
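<p>The computation can be double-checked mechanically (a small Python sketch, added for illustration):</p>

```python
# Substituting y = 2x - k into x^2 + y^2 = 5 gives 5x^2 - 4kx + (k^2 - 5) = 0.
def discriminant(k):
    # b^2 - 4ac with a = 5, b = -4k, c = k^2 - 5
    return (-4 * k) ** 2 - 4 * 5 * (k**2 - 5)  # simplifies to 100 - 4k^2

assert discriminant(5) == 0 and discriminant(-5) == 0
# For k = 5 the double root is x = 4k/(2a) = 2; the touch point lies on the circle:
x = 4 * 5 / (2 * 5)
y = 2 * x - 5
assert x * x + y * y == 5
```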
|
3,787,111 | <p>Sets can have minimal and least elements, and they are two different things, for example:
given the set <span class="math-container">$A=\{\{1\},\{2\},\{3\},\{1,2\},\{2,3\},\{1,3\},\{1,2,3\}\}$</span> and the subset relation, this set has <span class="math-container">$\{1\}$</span> as a minimal element but not as a least element. Now, my question is: what is <span class="math-container">$\min(A)$</span>? I would say that it doesn't exist, since there is no least element, but does <span class="math-container">$\min(A)$</span> really mean "least element of the set <span class="math-container">$A$</span>"? I'm asking because the notation "<span class="math-container">$\min$</span>" looks like it's referring to the minimal element and not the least one. And is this notation really referring to the least element of a set, or is it instead a completely different thing, a unary operation used for the natural numbers, integers and real numbers?</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Determine first the image of the interval <span class="math-container">$(-2,3)$</span>. Observe that <span class="math-container">$f$</span> is an even function, increasing on <span class="math-container">$\mathbf R^+$</span>, so that
<span class="math-container">$$f((-2,3))\subset f((-3,3))=f((0,3))=\dots$$</span></p>
|
3,787,111 | <p>Sets can have minimal and least elements, and they are two different things, for example:
given the set <span class="math-container">$A=\{\{1\},\{2\},\{3\},\{1,2\},\{2,3\},\{1,3\},\{1,2,3\}\}$</span> and the subset relation, this set has <span class="math-container">$\{1\}$</span> as a minimal element but not as a least element. Now, my question is: what is <span class="math-container">$\min(A)$</span>? I would say that it doesn't exist, since there is no least element, but does <span class="math-container">$\min(A)$</span> really mean "least element of the set <span class="math-container">$A$</span>"? I'm asking because the notation "<span class="math-container">$\min$</span>" looks like it's referring to the minimal element and not the least one. And is this notation really referring to the least element of a set, or is it instead a completely different thing, a unary operation used for the natural numbers, integers and real numbers?</p>
| farruhota | 425,072 | <p>Alternatively, you can draw the graph:</p>
<p><span class="math-container">$\hspace{2cm}$</span><a href="https://i.stack.imgur.com/C1tDI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1tDI.png" alt="enter image description here" /></a></p>
<p>Note that the extreme points of a function occur at either border points or critical (turning) points. You checked the border points <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. The function is maximum at the point <span class="math-container">$B$</span>. You should also check the turning point <span class="math-container">$C$</span>, where the function reaches its minimum. Hence, the range is between the minimum and maximum. The answer is choice <span class="math-container">$A$</span>.</p>
|
67,630 | <p>I know of a theorem from Axler's <em>Linear Algebra Done Right</em> which says that if $T$ is a linear operator on a complex finite dimensional vector space $V$, then there exists a basis $B$ for $V$ such that the matrix of $T$ with respect to the basis $B$ is upper triangular.</p>
<p>The proof of this theorem is by induction on the dimension of $V$. For dim $V = 1$ the result clearly holds, so suppose that the result holds for vector spaces of dimension less than $V$. Let $\lambda$ be an eigenvalue of $T$, which we know exists for $V$ is a complex vector space.</p>
<p>Consider $U = $ Im $(T - \lambda I)$. It is not hard to show that Im $(T - \lambda I)$ is an invariant subspace under $T$ of dimension less than $V$.</p>
<p>So by the induction hypothesis, $T|_U$ is an upper triangular matrix. So let $u_1 \ldots u_n$ be a basis for $U$. Extending this to a basis $u_1 \ldots u_n, v_1 \ldots v_m$ of $V$, the proof is complete by noting that for each $k$ such that $1 \leq k \leq m$, $T(v_k) \in $ span $\{u_1 \ldots u_n, v_1 \ldots v_k\}$.</p>
<p>The proof of this theorem seems to be only using the hypothesis that $T$ has at least one eigenvalue (in a complex vector space). So I turned to the following example of a linear operator $T$ on a <em>real</em> vector space $(\mathbb{R}^3)$ instead that has one real eigenvalue:</p>
<p>$T(x,y,z) = (x, -z ,y)$. </p>
<p>In the standard basis of $\mathbb{R}^3$ the matrix of $T$ is </p>
<p>$\left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{array}\right]$</p>
<p>The only real eigenvalue of this matrix is $1$ with corresponding eigenvector </p>
<p>$\left[\begin{array}{c}1 \\ 0 \\ 0 \end{array}\right]$.</p>
<p>This is where I run into trouble: If I just extend this to any basis of $\mathbb{R}^3$, then using the standard basis again will not put the matrix of $T$ in upper triangular form. Why can't the method used in the proof above be used to put the matrix of $T$ in upper triangular form?</p>
<p>I know this is not possible for if $T$ were to be put in upper triangular form, then this would mean all its eigenvalues are real which contradicts it having eigenvalues $\pm i$ as well.</p>
| Per Alexandersson | 934 | <p>You can always use Gaussian elimination to put a matrix in upper triangular form.
Every step in that process is reversible (invertible).
Moreover, Gaussian elimination can be thought of as a basis change,
since every step can also be represented as a matrix multiplication
by an invertible matrix (a matrix that looks like the identity with some extra off-diagonal entries).</p>
<p>I leave it as an exercise to work out the details.</p>
|
3,716,619 | <p>Evaluating
<span class="math-container">$$\lim_{x\to 0}\left(\frac{\pi ^2}{\sin ^2\pi x}-\frac{1}{x^2}\right)$$</span>
with L'Hospital is so tedious. Does anyone know a way to evaluate the limit without using L'Hospital? I have no idea where to start.</p>
| robjohn | 13,854 | <p><strong>Pre-Calculus Answer to the Question</strong></p>
<p>Note that since <span class="math-container">$\lim\limits_{x\to0}\frac{\sin(x)}x=1$</span>, as shown in <a href="https://math.stackexchange.com/a/75151">this answer</a>, and <span class="math-container">$\frac1x$</span> is continuous at <span class="math-container">$x=1$</span>, we also have <span class="math-container">$\lim\limits_{x\to0}\frac x{\sin(x)}=1$</span>.
<span class="math-container">$$
\begin{align}
\lim_{x\to 0}\left(\frac{\pi^2}{\sin^2(\pi x)}-\frac1{x^2}\right)
&=\lim_{x\to 0}\frac{\pi^2x^2-\sin^2(\pi x)}{x^2\sin^2(\pi x)}\tag{1a}\\
&=\lim_{x\to 0}\frac{\pi x-\sin(\pi x)}{(\pi x)^3}\lim_{x\to0}\frac{\pi x+\sin(\pi x)}{\sin(\pi x)}\lim_{x\to 0}\frac{\pi^2(\pi x)}{\sin(\pi x)}\tag{1b}\\[3pt]
&=\frac16\cdot2\cdot\pi^2\tag{1c}\\[6pt]
&=\frac{\pi^2}3\tag{1d}
\end{align}
$$</span>
Explanation:<br />
<span class="math-container">$\text{(1a)}$</span>: algebra<br />
<span class="math-container">$\text{(1b)}$</span>: factorization<br />
<span class="math-container">$\text{(1c)}$</span>: apply <span class="math-container">$\lim\limits_{x\to0}\frac x{\sin(x)}=1$</span> from above<br />
<span class="math-container">$\phantom{\text{(1c):}}$</span> and <span class="math-container">$\lim\limits_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16$</span> from below<br />
<span class="math-container">$\text{(1d)}$</span>: computation</p>
<hr />
<p><strong>Proof that <span class="math-container">$\boldsymbol{\lim\limits_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16}$</span></strong></p>
<p>Assume that <span class="math-container">$0\lt x\le\frac\pi3$</span>. Then, <span class="math-container">$\cos(x)\ge\frac12$</span> and <span class="math-container">$0\le\sin(x)\le x\le\tan(x)$</span>. Therefore,
<span class="math-container">$$
\begin{align}
\frac{x-\sin(x)}{x^3}
&\le\frac{\tan(x)-\sin(x)}{x^3}\tag{2a}\\
&=\frac{\tan(x)}{x}\frac{1-\cos(x)}{x^2}\tag{2b}\\
&=\frac1{\cos(x)}\frac{\sin(x)}{x}\frac{2\sin^2(x/2)}{4\,(x/2)^2}\tag{2c}\\[6pt]
&\le1\tag{2d}
\end{align}
$$</span>
Furthermore,
<span class="math-container">$$
\begin{align}
&\frac{x-\sin(x)}{x^3}-\frac14\frac{x/2-\sin(x/2)}{(x/2)^3}\tag{3a}\\
&=\frac{2(x/2)-2\sin(x/2)\cos(x/2)}{8(x/2)^3}-\frac{2(x/2)-2\sin(x/2)}{8(x/2)^3}\tag{3b}\\
&=\frac{2\sin(x/2)(1-\cos(x/2))}{8(x/2)^3}\tag{3c}\\
&=\frac{2\sin(x/2)\,2\sin^2(x/4)}{8(x/2)^3}\tag{3d}
\end{align}
$$</span>
Since <span class="math-container">$\lim\limits_{x\to0}\frac{\sin(x)}x=1$</span>, <span class="math-container">$(3)$</span> shows that
<span class="math-container">$$
\lim_{x\to0}\left(\frac{x-\sin(x)}{x^3}-\frac14\frac{x/2-\sin(x/2)}{(x/2)^3}\right)=\frac18\tag4
$$</span>
For any <span class="math-container">$n$</span>, adding <span class="math-container">$\frac1{4^k}$</span> times <span class="math-container">$(4)$</span> with <span class="math-container">$x\mapsto x/2^k$</span> for <span class="math-container">$k$</span> from <span class="math-container">$0$</span> to <span class="math-container">$n-1$</span> gives
<span class="math-container">$$
\begin{align}
\lim_{x\to0}\left(\frac{x-\sin(x)}{x^3}-\frac1{4^n}\frac{x/2^n-\sin\left(x/2^n\right)}{\left(x/2^n\right)^3}\right)
&=\frac18\frac{1-(1/4)^n}{1-1/4}\tag{5a}\\
&=\frac16-\frac16\frac1{4^n}\tag{5b}
\end{align}
$$</span>
Thus, for any <span class="math-container">$\epsilon\gt0$</span>, choose <span class="math-container">$n$</span> large enough so that <span class="math-container">$\frac1{4^n}\le\frac\epsilon2$</span>. Then, <span class="math-container">$(5)$</span> says that we can choose a <span class="math-container">$\delta\gt0$</span> so that if <span class="math-container">$0\lt x\le\delta$</span>,
<span class="math-container">$$
\frac{x-\sin(x)}{x^3}-\overbrace{\frac1{4^n}\frac{x/2^n-\sin\left(x/2^n\right)}{\left(x/2^n\right)^3}}^{\frac12[0,\epsilon]_\#}
=\frac16-\!\overbrace{\ \ \ \frac16\frac1{4^n}\ \ \ }^{\frac1{12}[0,\epsilon]_\#}\!+\frac12[-\epsilon,\epsilon]_\#\tag6
$$</span>
where <span class="math-container">$[a,b]_\#$</span> represents a number between <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. The bounds above the braces follow from <span class="math-container">$(2)$</span> and the choice of <span class="math-container">$n$</span>.</p>
<p>Equation <span class="math-container">$(6)$</span> says that for <span class="math-container">$0\lt x\le\delta$</span>,
<span class="math-container">$$
\frac{x-\sin(x)}{x^3}=\frac16+[-\epsilon,\epsilon]_\#\tag7
$$</span>
Since <span class="math-container">$\frac{x-\sin(x)}{x^3}$</span> is even, we can say that <span class="math-container">$(7)$</span> is true for <span class="math-container">$0\lt|x|\le\delta$</span>, which means that
<span class="math-container">$$
\lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag8
$$</span></p>
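<p>As a quick numerical check of the value <span class="math-container">$\pi^2/3 \approx 3.2899$</span> (plain Python, added for illustration; not part of the proof):</p>

```python
import math

def g(x):
    # The expression whose limit as x -> 0 is computed above.
    return math.pi**2 / math.sin(math.pi * x) ** 2 - 1 / x**2

target = math.pi**2 / 3
for x in (1e-1, 1e-2, 1e-3):
    print(x, g(x))  # approaches target from above as x -> 0
assert abs(g(1e-3) - target) < 1e-3
```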
|
4,621,390 | <p>I'm studying Linear Algebra and have come to think of a column vector as an ordered bunch of objects, where each object is the product of a scalar and a basis vector, vis:</p>
<p><span class="math-container">$\begin{bmatrix}
a \\
b \\
\vdots \\
\end{bmatrix} = a \hat{i} + b \hat{j} + \dots$</span></p>
<p>Where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are scalars and <span class="math-container">$\hat{i}$</span> and <span class="math-container">$\hat{j}$</span> are <em>unique</em> basis vectors. By that I mean <span class="math-container">$\hat{i}$</span> must not equal <span class="math-container">$\hat{j}$</span>.</p>
<p>Is that correct?</p>
| StudentsTea | 170,148 | <p>@mr_e_man got it:</p>
<blockquote>
<p>If <span class="math-container">$\hat{i} = \hat{j}$</span>, then <span class="math-container">$(1) \hat{i} + (−1) \hat{j} = 0$</span>, which contradicts linear independence of <span class="math-container">$\hat{i}$</span> and <span class="math-container">$\hat{j}$</span>. A basis is supposed to be linearly independent.</p>
</blockquote>
<p>Vectors must come from vector spaces, every vector space must have a basis, and a basis must be linearly independent. So each component in a column vector must correspond to a unique basis vector.</p>
|
4,651,364 | <p>I am trying to evaluate the following contour integral by evaluating the residue at <span class="math-container">$z=0$</span> of the integrand.</p>
<p><span class="math-container">$$I=\oint_{|z|=1} \frac{z^n+z^{-n}}{2iz(1-rz)(1-rz^{-1})}dz.$$</span></p>
<p>We can manipulate the integrand (which we denote by <span class="math-container">$f(z)$</span>) and ignore the term with positive <span class="math-container">$z$</span> exponent to get</p>
<p><span class="math-container">$$\mathop{\rm Res}_{z=0}f(z) = \mathop{\rm Res}_{z=0} \frac{z^{-n}}{2i(1-rz)(z-r)}.$$</span></p>
<p>I am not quite sure what to do here. I'm not sure how to deal with a pole that isn't just a simple pole. How do I deal with this?</p>
| Pavan C. | 914,078 | <p>Suppose <span class="math-container">$d_X$</span> and <span class="math-container">$d_Y$</span> are the metrics of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, respectively.</p>
<p><span class="math-container">$f$</span> is isometric on <span class="math-container">$X_0$</span>. Then for any <span class="math-container">$a, b \in X_0$</span>, <span class="math-container">$d_X(a,b) = d_Y(f(a), f(b))$</span>.</p>
<p>Take any points <span class="math-container">$x_1, x_2 \in X$</span>. Since <span class="math-container">$X_0$</span> is dense in <span class="math-container">$X$</span>, we can create sequences <span class="math-container">$a_n$</span> and <span class="math-container">$b_n$</span> in <span class="math-container">$X_0$</span> such that, as <span class="math-container">$n \to \infty$</span>, <span class="math-container">$a_n \to x_1$</span> and <span class="math-container">$b_n \to x_2$</span>. And by continuity, as <span class="math-container">$n \to \infty$</span>, <span class="math-container">$f(a_n) \to f(x_1)$</span> and <span class="math-container">$f(b_n) \to f(x_2)$</span>.</p>
<p>Finally, we know that for every <span class="math-container">$n$</span>, <span class="math-container">$d_X(a_n,b_n) = d_Y(f(a_n), f(b_n))$</span>. Since <a href="https://proofwiki.org/wiki/Distance_Function_of_Metric_Space_is_Continuous" rel="nofollow noreferrer">any metric function for a metric space is also continuous</a>, we have that <span class="math-container">$d_X(a_n, b_n) \to d_X(x_1, x_2)$</span> and <span class="math-container">$d_Y(f(a_n) f(b_n)) \to d_Y(f(x_1), f(x_2))$</span>, so by taking the limit on both sides, we get <span class="math-container">$d_X(x_1, x_2) = d_Y(f(x_1), f(x_2))$</span>.</p>
|
945,334 | <p>Here is a lemma whose proof is as under:</p>
<blockquote>
<p>If $S \in L(X,Y)$ and lim$_{r \to 0}\frac{\|Sr\|}{\|r\|}=0$,then $S=0$.</p>
</blockquote>
<p>Proof:</p>
<p>The condition $\lim_{r \to 0}\Big(\frac{\|Sr\|}{\|r\|}\Big)=0$ means that for each $\epsilon \gt 0$ there is a $\delta \gt 0$ such that
$\Big(\frac{\|Sr\|}{\|r\|}\Big)\leq \epsilon $ whenever $0\lt \|r\| \lt \delta$</p>
<p>Let $\text{u}\in X$ be a non-zero vector. Choose a non-zero $t\in \mathbb R$ so that $\|t\text{u}\|\lt \delta$. Then
$\Big(\frac{\|S(t\text{u})\|}{\|t\text{u}\|}\Big)= \Big(\frac{\|S\text{u}\|}{\|\text{u}\|}\Big)\leq \epsilon $
and therefore $\|S\text{u}\|\leq\epsilon \|\text{u}\|.$</p>
<p>This is true for any $\epsilon \gt 0$. Hence $S\text{u}=0$ for all $\text{u} \in X$. This means that $S=0$.</p>
<p>$---------------------------------------$</p>
<p>I can't understand the step: why do we take a vector $\text{u}\in X$ and then introduce $t$ in the proof?
Please help.</p>
| Petr Naryshkin | 178,423 | <p>Let $(y_k)$ be a convergent subsequence with limit $A$. Since $(x_n)$ is monotone, $(y_k)$ is monotone as well, so (for an increasing sequence) every $y_k$ is at most $A$. We know that for every $\epsilon > 0$ there exists $K$ such that $k > K \Rightarrow y_k > A - \epsilon$. Then every term of the original sequence that comes after $y_K$ is also greater than $A - \epsilon$, since the sequence is monotone. Also, for every $x_n$ there is some $y_k$ further along in $(x_n)$, so every $x_n \le A$. Hence the sequence converges to $A$. (This was for an increasing sequence; for a decreasing one the argument is analogous.)</p>
|
<p>Bill took the entrance exams for a specific gymnasium. <span class="math-container">$602$</span> students took part and were ranked after the exams, and the first <span class="math-container">$108$</span> students will be offered a place, which they may accept. Each student who is offered a place declines it with a small probability <span class="math-container">$p=0.02$</span>, the same for all and independent of the rest. Bill is at position <span class="math-container">$113$</span>, so he will be accepted if at least <span class="math-container">$5$</span> of the first <span class="math-container">$112$</span> students do not enter the gymnasium. I want to give an exact expression for the probability <span class="math-container">$q$</span> that Bill gets accepted, and also an approximate expression for <span class="math-container">$q$</span>.</p>
<p>Is the probability that Bill gets accepted equal to</p>
<p><span class="math-container">$$5 \cdot 0.02?$$</span></p>
<p>Or do we have to take also something else into consideration?</p>
| Acccumulation | 476,070 | <p>This is a binomial distribution with <span class="math-container">$p=.02$</span>, <span class="math-container">$n=112$</span>, and five successes required. So the simple way to find the answer is simply to find a binomial calculator. For instance, <a href="https://stattrek.com/online-calculator/binomial.aspx" rel="nofollow noreferrer">https://stattrek.com/online-calculator/binomial.aspx</a> gives 7.49%</p>
<p>If you want to do it by hand, you can take </p>
<p><span class="math-container">$\sum_5^{112} \binom{112}{n}(.02)^n(.98)^{112-n}=1-\sum_0^4 \binom{112}{n}(.02)^n(.98)^{112-n}$</span></p>
<p>You can also treat this as being approximated by a Poisson distribution with <span class="math-container">$\lambda = 112*.02=2.24$</span> and find the probability <span class="math-container">$x\geq5$</span>, which gives 7.68%, which is close to the exact answer of 7.49%.</p>
|
2,623,924 | <p>My textbook explains the proof, which I don't understand:</p>
<p>"Consider two bases $v_1...v_p$ and $w_1...w_q$ of V. Since the vectors $v$ are linearly independent and the vectors $w$ span V..."</p>
<p>How exactly does $w$ span V? </p>
<p>The book then says the same for vectors $v$, that $v$ spans V and hence $p=q$, but I don't really understand how you can assume that given two sets of vectors that are basis, one set must span V.</p>
| Arnaud Mortier | 480,423 | <p>Yes, you can find such examples, loads of them. However your question seems to be related to the big O notation so I'll answer in the spirit. </p>
<p><em>Just because $\log(f)>c\log(g)$ <strong>for this particular c</strong> doesn't mean that $\log(f)$ is not a $O(\log(g))$.</em></p>
<p>In @dxiv 's example, for instance, $\log(f)$ is a $O(\log(g))$, but you need a bigger constant than $c$ to prove it.</p>
<p>Edit: take any two continuous positive-valued functions $ f $ and $ g $ on an open domain containing $0 $, say, such that $ f(0)\neq 1 $ and $ g (0)=1 $. Then $ f $ is a big O of $ g $ near $0 $ but $\log f $ isn't a big O of $\log g $. Big Os are tricky. If you don't like this example because it happens near $0 $, feel free to replace $0 $ by $\infty $.</p>
|
300,944 | <p>Show that there are no intergers $x$ and $y$ such that</p>
<p>$P(x,y)=x^2-5y^2=2$</p>
<p>Hint from professor:</p>
<p>Consider the equation in a convenient $\mod (n)$ so that you end up with a polynomial in a single variable. Then proceed as solving number of congruence.</p>
<hr>
<p>I'm not sure how to approach this question.</p>
<p>Since $P(x,y)=x^2-5y^2=2$</p>
<p>then $x^2-5y^2=0$ $\to$ $x^2=5y^2$</p>
<p>we have $5y^2\equiv0\mod(x)$</p>
<p>then how do I continue..?</p>
<p>Thank you!!</p>
| Herng Yi | 34,473 | <p>Suppose there exists integers $x$ and $y$ such that $x^2 - 5y^2 = 2$. Use the fact that</p>
<blockquote>
<p>Every square number is congruent to either $0$ or $1$ modulo $4$. $(\ast)$</p>
</blockquote>
<p>Hence, $x^2 - 5y^2 \equiv x^2 - y^2 \equiv 2 \pmod{4}$. However, the difference of two squares $x^2 - y^2 \equiv -1, 0, 1 \pmod{4}$ due to $(\ast)$.</p>
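<p>Since residues mod <span class="math-container">$4$</span> are periodic in <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with period <span class="math-container">$4$</span>, both facts can be verified exhaustively (a small Python check, added for illustration):</p>

```python
# Squares mod 4 are only ever 0 or 1:
assert {(x * x) % 4 for x in range(4)} == {0, 1}
# Hence x^2 - 5y^2 mod 4 never attains the value 2:
residues = {(x * x - 5 * y * y) % 4 for x in range(4) for y in range(4)}
print(residues)  # the possible residues; 2 is not among them
assert 2 not in residues
```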
|
300,944 | <p>Show that there are no intergers $x$ and $y$ such that</p>
<p>$P(x,y)=x^2-5y^2=2$</p>
<p>Hint from professor:</p>
<p>Consider the equation in a convenient $\mod (n)$ so that you end up with a polynomial in a single variable. Then proceed as solving number of congruence.</p>
<hr>
<p>I'm not sure how to approach this question.</p>
<p>Since $P(x,y)=x^2-5y^2=2$</p>
<p>then $x^2-5y^2=0$ $\to$ $x^2=5y^2$</p>
<p>we have $5y^2\equiv0\mod(x)$</p>
<p>then how do I continue..?</p>
<p>Thank you!!</p>
| awllower | 6,792 | <p>I think your professor means to reduce the equation modulo $5$, so that it becomes $x^2\equiv 2 \pmod 5$. But, by the supplementary law of quadratic reciprocity, $2$ is not a quadratic residue mod $5$, so this is impossible.</p>
|
3,159,884 | <p>Prove that if <span class="math-container">$|z+w|=|z-w|$</span> then <span class="math-container">$z\overline{w}$</span> is purely imaginary.</p>
<p>To start off, I said let <span class="math-container">$z=a+bi$</span> and let <span class="math-container">$w=p+qi$</span>. Not sure where to go from here after subbing in those for <span class="math-container">$z$</span> and <span class="math-container">$w$</span>.</p>
| Bernard | 202,857 | <p>You don't even need to explicit <span class="math-container">$z$</span> and <span class="math-container">$w$</span>. You just have to show that
<span class="math-container">$$\overline{z\,\overline w}=-z\,\overline w.$$</span></p>
<p>You can use that <span class="math-container">$|a|=|b|\iff a\,\overline a=b\,\overline b$</span>.</p>
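<p>A quick numeric spot check of the statement (hand-picked values, added for illustration; not a proof):</p>

```python
z, w = 2 + 1j, -1 + 2j
# These satisfy |z + w| = |z - w| (both equal sqrt(10)):
assert abs(abs(z + w) - abs(z - w)) < 1e-12
prod = z * w.conjugate()
print(prod)  # purely imaginary, as the claim predicts
assert prod.real == 0
```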
|
83,565 | <p>I am learning Mathematica on the fly, one of my tasks is to find the variance of white noise. I followed the tutorial for finding white noise by using the code:</p>
<pre><code>WN = WhiteNoiseProcess[NormalDistribution[0, 10]];
data = RandomFunction[WN, {0, 10000}];
</code></pre>
<p>I know I can use the following code to find the variance:<code>Variance[data]</code> </p>
<p>However, I would like to find it by using the formula for variance. I checked the reference built into Mathematica and it says I can simply use: </p>
<pre><code>Total[(list-Mean[list])^2]/(Length[list]-1)
</code></pre>
<p>I input data for the list:</p>
<pre><code>Total[(data-Mean[data])^2]/(Length[data]-1)
</code></pre>
<p>When I do this, I don't get the same output as when I use the <code>Variance[data]</code> code, but instead get:
<img src="https://i.stack.imgur.com/ZcpyR.png" alt="enter image description here"></p>
<p>So, I am curious what I am doing wrong? I'm sure it's something simple I am not doing, but after spending a couple of hours wrestling with this, I am breaking down to ask. Sorry if this is a dumb question. Thank you in advance for your time. </p>
| bbgodfrey | 1,063 | <p>The specific approach is as follows. Convert data to an ordinary list, eliminate an extra set of <code>{}</code>, and insert the list into your formula:</p>
<pre><code>dta = First@Normal@data;
Last@Total[(dta - Last@Mean[dta])^2]/(Length[dta] - 1)
</code></pre>
<p>which gives the same result as </p>
<pre><code>Variance[data]
</code></pre>
<p>namely <code>102.245</code> for the particular set of random numbers used.</p>
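<p>The same $n-1$ formula can be cross-checked outside Mathematica; here is a minimal Python sketch (illustrative only, with its own randomly generated data rather than the <code>WhiteNoiseProcess</code> sample):</p>

```python
# Cross-check: Total[(list - Mean[list])^2]/(Length[list] - 1) is the usual
# unbiased sample variance (n - 1 denominator), reproduced here in Python.
import random
import statistics

random.seed(0)
data = [random.gauss(0, 10) for _ in range(10_000)]   # ~ NormalDistribution[0, 10]

mean = sum(data) / len(data)
manual = sum((x - mean) ** 2 for x in data) / (len(data) - 1)

assert abs(manual - statistics.variance(data)) < 1e-6
```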
|
83,565 | <p>I am learning Mathematica on the fly, one of my tasks is to find the variance of white noise. I followed the tutorial for finding white noise by using the code:</p>
<pre><code>WN = WhiteNoiseProcess[NormalDistribution[0, 10]];
data = RandomFunction[WN, {0, 10000}];
</code></pre>
<p>I know I can use the following code to find the variance: <code>Variance[data]</code> </p>
<p>However, I would like to find it by using the formula for variance. I checked the reference built into Mathematica and it says I can simply use: </p>
<pre><code>Total[(list-Mean[list])^2]/(Length[list]-1)
</code></pre>
<p>I input data for the list:</p>
<pre><code>Total[(data-Mean[data])^2]/(Length[data]-1)
</code></pre>
<p>When I do this, I don't get the same output as when I use the <code>Variance[data]</code> code, but instead get:
<img src="https://i.stack.imgur.com/ZcpyR.png" alt="enter image description here"></p>
<p>So, I am curious what I am doing wrong? I'm sure it's something simple I am not doing, but after spending a couple of hours wrestling with this, I am breaking down to ask. Sorry if this is a dumb question. Thank you in advance for your time. </p>
| kglr | 125 | <pre><code>SeedRandom[1]
WN = WhiteNoiseProcess[NormalDistribution[0, 10]];
data = RandomFunction[WN, {0, 10000}];
</code></pre>
<h2>CentralMoment</h2>
<pre><code>variance = # /(# - 1)& @ #["PathLength"] CentralMoment[#["Values"], 2]&;
variance[data]
</code></pre>
<blockquote>
<p>97.25341025240807</p>
</blockquote>
<pre><code>Variance[data]
</code></pre>
<blockquote>
<p>97.25341025240806</p>
</blockquote>
<h2>MomentConvert + MomentEvaluate</h2>
<pre><code>variance2 = MomentEvaluate[MomentConvert[CentralMoment[2], "UnbiasedSampleEstimator"],
#["Values"]]&;
variance2 @ data
</code></pre>
<blockquote>
<p>97.25341025240809</p>
</blockquote>
|
275,974 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/264889/how-is-this-called-rationals-and-irrationals">How is this called? Rationals and irrationals</a> </p>
</blockquote>
<p>Please help me prove, that
$$\underset{n\rightarrow\infty}{\lim}\left(\underset{k\rightarrow\infty}{\lim}(\cos(|n!\pi x|)^{2k})\right)=\begin{cases}
1 & \iff x\in\mathbb{Q}\\
0 & \iff x\notin\mathbb{Q}
\end{cases}$$</p>
<p>Seems very complicated, but it's from Calc I. I've tried using series expansions of cos, but it doesn't lead to the answer. Thanks in advance!</p>
<p><strong>Edit</strong></p>
<p>Please don't use too much advanced techniques.</p>
| Calvin Lin | 54,563 | <p><strong>Hint:</strong> Show that if $x \in \mathbb{Q}$, then there exists some $N$ such that for $n > N$, $n! \pi x$ is an integer multiple of $2\pi$. Conclude that it tends to 1.</p>
<p>Show that if $x \not \in \mathbb{Q}$, then $\lim_{k\rightarrow \infty} [\cos (n! \pi x)]^{2k} = 0$.</p>
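<p>The rational case of this hint can be checked with exact integer arithmetic; a small Python sketch (the choice $x = 3/7$ is arbitrary):</p>

```python
# For rational x = p/q and n >= q, n! * x is an integer (and even, once n!
# carries enough factors of 2), so cos(n! * pi * x)^(2k) = 1 exactly.
from fractions import Fraction
from math import factorial

x = Fraction(3, 7)                      # an arbitrary rational with q = 7
integers = []
for n in range(7, 15):                  # n >= q
    m = factorial(n) * x
    assert m.denominator == 1           # n! * x really is an integer
    integers.append(int(m))

# cos(m * pi) = (-1)^m, so the inner limit is ((-1)^m)^(2k) = 1 for every k.
assert all(((-1) ** m) ** 2 == 1 for m in integers)
```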
|
3,059,833 | <blockquote>
<p>If the equation <span class="math-container">$2^{2x} + a*2^{x+1} + a + 1=0$</span> has roots of opposite sign, then what are the exhaustive values of $a$?</p>
</blockquote>
<p>I tried taking <span class="math-container">$2^x = t$</span>. But then didn't know what to do.</p>
<p>The equation became, <span class="math-container">$t^2 + 2at + a +1 =0$</span>. But then, what conditions should I impose?</p>
<p>Since one root is negative and the other positive, the only conclusion that I could draw was that <span class="math-container">$x<0$</span> for the first condition and <span class="math-container">$x>0$</span> for the second condition. But, I don't know how do I proceed from here?</p>
<p>Any help would be appreciated.</p>
| Shubham Johri | 551,962 | <p>In order for distinct real roots, <span class="math-container">$a^2-a-1>0$</span>.</p>
<p>Since <span class="math-container">$x_1>0,2^{x_1}=t_1=-a+\sqrt{a^2-a-1}>1$</span>. </p>
<p>Similarly, <span class="math-container">$x_2<0\therefore 0<2^{x_2}=t_2=-a-\sqrt{a^2-a-1}<1$</span>.</p>
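<p>A numeric scan of these conditions (my own sketch, not part of the original answer; the sample points are arbitrary) suggests the admissible set is the open interval $(-1,-2/3)$:</p>

```python
# Conditions from the answer: distinct real roots (a^2 - a - 1 > 0),
# t1 = -a + sqrt(...) > 1 (so x1 = log2(t1) > 0), 0 < t2 = -a - sqrt(...) < 1.
from math import sqrt

def admissible(a):
    d = a * a - a - 1
    if d <= 0:
        return False                    # no distinct real roots in t
    t1, t2 = -a + sqrt(d), -a - sqrt(d)
    return t1 > 1 and 0 < t2 < 1

assert admissible(-0.8)                 # works
assert not admissible(-0.5)             # discriminant condition fails
assert not admissible(-1.1)             # t2 becomes negative here
```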
|
3,401,630 | <p>I am trying to prove the inequality
<span class="math-container">$$\frac{1}{n}-\frac{1}{(n+1)^2}>\frac{1}{n+1}\quad \forall \ n>1.$$</span> How would I go about doing this? I've tried solving it on my own but my final answer is <span class="math-container">$1>0$</span>. </p>
| Dr. Sonnhard Graubner | 175,066 | <p>Multiplying by <span class="math-container">$n+1>0$</span> we get
<span class="math-container">$\frac{n+1}{n}-\frac{1}{n+1}>1$</span>; multiplying this in turn by <span class="math-container">$n(n+1)>0$</span>, this becomes
<span class="math-container">$$(n+1)^2-n>n(n+1)$$</span> this is <span class="math-container">$$n^2+n+1>n^2+n$$</span> or <span class="math-container">$1>0$</span>
which is true; all steps are reversible, so the original inequality follows.</p>
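<p>Since the steps are reversible, the inequality can also be confirmed by brute force; a trivial Python check:</p>

```python
# Direct check of 1/n - 1/(n+1)^2 > 1/(n+1); the difference of the two sides
# is 1/(n*(n+1)^2), which is strictly positive.
ok = all(1 / n - 1 / (n + 1) ** 2 > 1 / (n + 1) for n in range(1, 1000))
assert ok
```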
|
1,765 | <p>Can anyone explain to me this behaviour? I've been having more than a couple of similar doubts these last weeks. </p>
<p>For example</p>
<pre><code>f[_?NumericQ] := 8;
</code></pre>
<p>Now, if I do</p>
<pre><code>With[{a = f[a]}, HoldForm@Block[{NumericQ = True &}, a]]
</code></pre>
<p>I get</p>
<pre><code>Block[{NumericQ = True &}, f[a]]
</code></pre>
<p>And if I do</p>
<pre><code>Block[{NumericQ = True &}, f[a]]
</code></pre>
<p>I get</p>
<pre><code>8
</code></pre>
<p>So far so good... Another so-far-so-goodie is (notice the <code>:=</code>)</p>
<pre><code>With[{a := f[a]}, Block[{NumericQ = True &}, a]]
8
</code></pre>
<p>Question: Can anyone help me understand why this output?</p>
<pre><code>With[{a = f[a]}, Block[{NumericQ = True &}, a]]
f[a]
</code></pre>
<p>Could it be that <code>With</code> (<code>=</code> version) not only evaluates and replaces, but also guarantees that the replaced expression won't be reevaluated no matter what until the <code>With</code> is exited? If that's the case I wasn't expecting that. What's happening here?</p>
<p><strong>EDIT</strong></p>
<p>Question also applies to Function</p>
<pre><code>Block[{NumericQ = True &}, #] &[f[a]]
f[a]
</code></pre>
<p>And not just those too. Everything I try behaves the same way... A couple of other examples</p>
<pre><code>Block @@ (Hold[{NumericQ = True &}, exp] /. exp -> f[a])
With[{a := Evaluate@f[a]}, Block[{NumericQ = True &}, a]]
</code></pre>
<p>both give <code>f[a]</code></p>
<p><strong>EDIT</strong></p>
<pre><code>With[{g = h}, h = 8; Print[g] ]
</code></pre>
<p>prints 8, not h, so clearly h is reevaluated inside the <code>With</code> in this case, so <code>g</code> is not so constant. </p>
<p><strong>EDIT</strong></p>
<p>Ok, another couple of examples</p>
<pre><code>In[10]:= ClearAll[h, f];
h := 8 /; NumericQ["a"];
f[_?NumericQ] := 8;
</code></pre>
<p>Now, both <code>h</code> and <code>f[a]</code> remain unevaluated</p>
<pre><code>In[21]:= {h, f[a]}
Out[21]= {h, f[a]}
</code></pre>
<p>Now, with the <code>OwnValues</code> everything works as expected</p>
<pre><code>In[17]:= With[{g = h},
Block[{NumericQ = True &}, g]]
Out[17]= 8
</code></pre>
<p>But with the <code>DownValues</code>, it doesn't </p>
<pre><code>In[19]:= With[{g = f[a]},
Block[{NumericQ = True &}, g]]
Out[19]= f[a]
</code></pre>
<p>Similarly</p>
<pre><code>f[a_?NumericQ] := 8;
g[e_] := Block[{NumericQ = True &}, e]
</code></pre>
<p><code>f[a]</code> evaluates to <code>f[a]</code>, but weirdly</p>
<pre><code>In[17]:= g[Unevaluated@f[a]]
Out[17]= 8
In[18]:= g[f[a]]
Out[18]= f[a]
</code></pre>
| Verbeia | 8 | <p>See the answers to <a href="https://mathematica.stackexchange.com/a/567/8">this question</a>. </p>
<p>The short answer is that, yes, <code>With</code> is designed for creating local <em>constants</em>, so once the local expressions are initialised, they won't be re-evaluated. </p>
<p>Using the <code>{a := f[a]}</code> notation gets around this design feature of <code>With</code>.
To be honest, though, I have never seen it used before. It is not mentioned in the documentation, and I can't help thinking that it breaks something in subtle ways. </p>
<p>Of course <code>Block[{a = f[a], NumericQ = True &}, a]</code> gives a <code>$RecursionLimit</code> error, but returns <code>8</code>.</p>
<p>Is there a reason why <code>Block[{NumericQ = True &}, f[a]]</code> doesn't fulfil your purpose?</p>
<p>In response to your edit, <code>Block[{NumericQ = True &}, #] & [f[a]]</code> doesn't give your desired output because the <code>f[a]</code> is outside the <code>Block</code> scoping construct. Again, see <a href="https://mathematica.stackexchange.com/a/569/8">Leonid's answer to the previously mentioned question</a> as well as the other answers there.</p>
|
28,892 | <p>I was searching on MathSciNet recently for a certain paper by two mathematicians. As I often do, I just typed in the names of the two authors, figuring that would give me a short enough list. My strategy was rather dramatically unsuccessful in this case: the two mathematicians I listed have written 80 papers together!</p>
<p>So this motivates my (rather frivolous, to be sure) question: which pair of mathematicians has the most joint papers? </p>
<p>A good meta-question would be: can MathSciNet search for this sort of thing automatically? The best technique I could come up with was to think of one mathematician that was both prolific and collaborative, go to their "profile" page on MathSciNet (a relatively new feature), where their most frequent collaborators are listed, alphabetically, but with the wordle-esque feature that the font size is proportional to the number of joint papers. </p>
<p>Trying this, to my surprise I couldn't beat the 80 joint papers I've already found. Erdos' most frequent collaborator was Sarkozy: 62 papers (and conversely Sarkozy's most frequent collaborator was Erdos). Ronald Graham's most frequent collaborator is Fan Chung: 76 papers (and conversely).</p>
<p>I would also be interested to hear about triples, quadruples and so forth, down to the point where there is no small set of winners.</p>
<hr>
<p><b>Addendum</b>: All right, multiple people seem to want to know. The 80 collaboration pair I stumbled upon is Blair Spearman and Kenneth Williams. </p>
| lhf | 532 | <p>Perhaps Hardy and Littlewood?</p>
|
28,892 | <p>I was searching on MathSciNet recently for a certain paper by two mathematicians. As I often do, I just typed in the names of the two authors, figuring that would give me a short enough list. My strategy was rather dramatically unsuccessful in this case: the two mathematicians I listed have written 80 papers together!</p>
<p>So this motivates my (rather frivolous, to be sure) question: which pair of mathematicians has the most joint papers? </p>
<p>A good meta-question would be: can MathSciNet search for this sort of thing automatically? The best technique I could come up with was to think of one mathematician that was both prolific and collaborative, go to their "profile" page on MathSciNet (a relatively new feature), where their most frequent collaborators are listed, alphabetically, but with the wordle-esque feature that the font size is proportional to the number of joint papers. </p>
<p>Trying this, to my surprise I couldn't beat the 80 joint papers I've already found. Erdos' most frequent collaborator was Sarkozy: 62 papers (and conversely Sarkozy's most frequent collaborator was Erdos). Ronald Graham's most frequent collaborator is Fan Chung: 76 papers (and conversely).</p>
<p>I would also be interested to hear about triples, quadruples and so forth, down to the point where there is no small set of winners.</p>
<hr>
<p><b>Addendum</b>: All right, multiple people seem to want to know. The 80 collaboration pair I stumbled upon is Blair Spearman and Kenneth Williams. </p>
| Eric Rowell | 6,355 | <p>E. Cline, B. Parshall, and L. Scott have 28 papers together, spanning 35 years or so. Just one example of a triple... </p>
|
4,272,755 | <p>Calculate the triple integral using spherical coordinates: <span class="math-container">$\int_C z^2dxdydz$</span> where C is the region in <span class="math-container">$R^3$</span> described by <span class="math-container">$1 \le x^2+y^2+z^2 \le 4$</span></p>
<p>Here's what I have tried:</p>
<p>My computation for <span class="math-container">$z$</span> is: <span class="math-container">$\sqrt{1-x^2-y^2} \le z \le \sqrt{4-x^2-y^2}$</span>, as for y I get: <span class="math-container">$-\sqrt{1-x^2}\le y \le \sqrt{4-x^2}$</span> and for x I get: <span class="math-container">$1 \le x \le 2$</span></p>
<p>The triple integral becomes:</p>
<p><span class="math-container">$$\int_{1}^2 dx \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}dy \int_{-\sqrt{1-x^2-y^2}}^{\sqrt{1-x^2-y^2}}z^2dz$$</span></p>
<p>The way I have pictured theta is as so:</p>
<p><a href="https://i.stack.imgur.com/Y1fvn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y1fvn.png" alt="enter image description here" /></a></p>
<p>Where The Red + Green is equal to <span class="math-container">$\frac{\pi}{2}$</span> But because we're only interested in the region from <span class="math-container">$1 \le x \le 2$</span> this covers only <span class="math-container">$\frac{\pi}{4}$</span></p>
<p>The integral becomes:</p>
<p>I presumed <span class="math-container">$\phi$</span> only goes from <span class="math-container">$\frac{\pi}{4}$</span> also, and we know that <span class="math-container">$r$</span> goes from <span class="math-container">$1\le r \le 2$</span></p>
<p>So the final integral becomes:</p>
<p><span class="math-container">$$\int_0^{\frac{\pi}{4}} d\theta \int_0^{\frac{\pi}{4}} \cos^2(\phi)\sin(\phi) \space d\phi \int_1^2 r^4 dr$$</span></p>
<p>Because <span class="math-container">$z^2 = r^2 \cos^2(\phi)$</span></p>
<p>However my answer that I get is <span class="math-container">$2\pi(\sqrt{2}-4)$</span> but the answer should be <span class="math-container">$\frac{124\pi}{15}$</span>. I would greatly appreciate the communities assistance</p>
| Math Lover | 801,574 | <p>The region <span class="math-container">$C$</span> is <span class="math-container">$1 \le x^2+y^2+z^2 \le 4$</span>, which
is the entire region between spherical surfaces <span class="math-container">$x^2 + y^2 + z^2 = 1$</span> and <span class="math-container">$x^2 + y^2 + z^2 = 4$</span>.</p>
<p>So clearly, <span class="math-container">$0 \leq \phi \leq \pi$</span> and <span class="math-container">$0 \leq \theta \leq 2 \pi$</span>. Also, <span class="math-container">$ 1 \leq \rho \leq 2$</span>.</p>
<p>So the integral should be,</p>
<p><span class="math-container">$ \displaystyle \int_0^{2\pi} \int_0^{\pi} \int_1^2 \rho^4 \cos^2 \phi \sin \phi ~ d\rho ~ d\phi ~ d\theta$</span></p>
<p>Alternatively, for ease of calculation, note that due to symmetry of the region <span class="math-container">$C$</span>, we must have</p>
<p><span class="math-container">$ \displaystyle \int_C x^2 ~ dV = \int_C y^2 ~ dV = \int_C z^2 ~ dV$</span></p>
<p>So,
<span class="math-container">$ ~ \displaystyle \int_C z^2 ~ dV = \frac{1}{3} \int_C (x^2 + y^2 + z^2) ~ dV$</span></p>
<p><span class="math-container">$ \displaystyle = \frac{1}{3} \int_0^{2\pi} \int_0^{\pi} \int_1^2 \rho^4 \sin \phi ~ d \rho ~ d\phi ~ d\theta$</span></p>
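<p>The value $\frac{124\pi}{15}$ is easy to confirm numerically; a crude midpoint-rule sketch in Python (the grid sizes are arbitrary):</p>

```python
# Midpoint-rule check of the integral above; the exact value is 124*pi/15.
from math import pi, sin, cos

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The integrand rho^4 * cos(phi)^2 * sin(phi) factorises, so integrate each
# one-dimensional piece separately.
I_rho = midpoint(lambda r: r ** 4, 1, 2, 2000)                  # exact: 31/5
I_phi = midpoint(lambda p: cos(p) ** 2 * sin(p), 0, pi, 2000)   # exact: 2/3
result = I_rho * I_phi * (2 * pi)

assert abs(result - 124 * pi / 15) < 1e-4
```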
|
85,343 | <p>I am looking for a reference to study classical (i.e., not quantized) Yang-Mills theory. </p>
<p>Most of the sources I find focus on mathematical aspects of the theory, like Bleecker's book <em>Gauge theory and variational principles</em>, or Baez & Muniain's <em>Gauge fields, knots and gravity</em>.</p>
<p>But I am more interested something similar to the standard development of electromagnetism, as can be found, for example, in Landau & Lifchitz's course on theoretical physics. To be more exact, I would like to learn about the field equations (Yang-Mills and Einstein equations), but also about the corresponding Lorentz Law, the energy of the Yang-Mills field,... and the analogous concepts of what is made for the electromagnetism.</p>
<p>That is, I look for a rigorous exposition where, at the same time, I could learn whether it is possible to prove, at the classical level, the quick decrease of the strong interaction, and things like that.</p>
| Steve Huntsman | 1,847 | <p>This is underrepresented in the literature. I have Nakahara and have looked at Frenkel (both listed in other answers) as well as many other "standard" references. The best book reference for classical YM theory that I found was <a href="http://books.google.com/books?id=BxjL6EkIpfUC">Rubakov's <em>Classical Theory of Gauge Fields</em>.</a></p>
|
85,343 | <p>I am looking for a reference to study classical (i.e., not quantized) Yang-Mills theory. </p>
<p>Most of the sources I find focus on mathematical aspects of the theory, like Bleecker's book <em>Gauge theory and variational principles</em>, or Baez & Muniain's <em>Gauge fields, knots and gravity</em>.</p>
<p>But I am more interested something similar to the standard development of electromagnetism, as can be found, for example, in Landau & Lifchitz's course on theoretical physics. To be more exact, I would like to learn about the field equations (Yang-Mills and Einstein equations), but also about the corresponding Lorentz Law, the energy of the Yang-Mills field,... and the analogous concepts of what is made for the electromagnetism.</p>
<p>That is, I look for a rigorous exposition where, at the same time, I could learn whether it is possible to prove, at the classical level, the quick decrease of the strong interaction, and things like that.</p>
| Hollis Williams | 119,114 | <p>My personal suggestion is 'Differential Geometry, Gauge Theories, and Gravity' by M. Gockeler and T. Schucker. However, it assumes a fairly high degree of mathematical sophistication (it's one of the texts in the 'Cambridge Monographs on Mathematical Physics' series). If you do get it, it is really worth the effort to master the topics inside, and the range of topics covered is fascinating.</p>
|
2,666,409 | <blockquote>
<p>If $SL(2)=\{A\in \mathcal{M}_2: \det(A)=1\}$, find a parametrization around the identity matrix $I_2$ and find the first fundamental form. </p>
</blockquote>
<p>I've proved that $SL(2)$ is a hypersurface in $\mathbb{R}^4$, so it has dimension $3$. However, I don't understand very well what "around the identity" means. I think that if I have the parametrization I'll be able to compute the first fundamental form. </p>
<p>Thank you. </p>
| Mariano Suárez-Álvarez | 274 | <p>The group is the set of matrices $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ such that $ad-bc=1$. If you fix $a$, $b$ and $c$, then you can solve for $d$ using a somewhat obvious formula, and that formula works in a neighborhood of the point $(a,b,c)=(1,0,0)$ which gives the identity. This gives you a parametrization of the many points on the group including the identity. I'll let you write it down explicitly and seeing if it gives a chart or not.</p>
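<p>One possible reading of this hint (my construction, not the original author's worked solution): near the identity $a \neq 0$, so solve $ad - bc = 1$ for $d = (1+bc)/a$ and use $(a,b,c)$ as coordinates. A quick Python check:</p>

```python
# The chart (a, b, c) -> [[a, b], [c, (1 + b*c)/a]] lands in SL(2) for a != 0.
import random

def chart(a, b, c):
    """Map (a, b, c), a != 0, to a matrix (pair of rows) with determinant 1."""
    return ((a, b), (c, (1 + b * c) / a))

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

random.seed(1)
for _ in range(100):
    m = chart(random.uniform(0.5, 1.5),     # stay near a = 1
              random.uniform(-0.5, 0.5),
              random.uniform(-0.5, 0.5))
    assert abs(det(m) - 1) < 1e-12

assert chart(1, 0, 0) == ((1, 0), (0, 1.0))  # (1, 0, 0) gives the identity
```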
|
1,064,091 | <p>I am asked to generate 200 and 1000 points from a bivariate normal mixture density.
I am trying to understand the algorithm, not just the matlab code (I have to write it, not use an existing function). I found a code on mathworks: <a href="http://www.mathworks.com/help/matlab/ref/randn.html#bufqioz-2" rel="nofollow">http://www.mathworks.com/help/matlab/ref/randn.html#bufqioz-2</a>
on the second example, but I do not understand the math behind it. If someone can help me out with that, or explain it independently of the code, or if you have another algorithm which can be explained, I will be grateful.
Thanks a lot!</p>
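<p>For reference, the standard two-stage algorithm (first pick a component with probability equal to its weight, then draw from that component's bivariate normal via a Cholesky factor) can be sketched as below; the weights, means and covariances are made-up placeholders, not the ones from the MathWorks page:</p>

```python
# Two-stage sampling from a 2-component bivariate normal mixture.
import random
from math import sqrt

random.seed(42)

# Hypothetical mixture: (weight, mean, 2x2 covariance) per component.
components = [
    (0.3, (0.0, 0.0), ((1.0, 0.4), (0.4, 1.0))),
    (0.7, (4.0, 4.0), ((2.0, 0.0), (0.0, 0.5))),
]

def sample_bivariate_normal(mean, cov):
    # 2x2 Cholesky factor L, then x = mean + L @ (z1, z2).
    l11 = sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = sqrt(cov[1][1] - l21 * l21)
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return (mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2)

def sample_mixture(n):
    out = []
    for _ in range(n):
        u, acc = random.random(), 0.0
        chosen = components[-1]             # fallback guards float round-off
        for comp in components[:-1]:
            acc += comp[0]
            if u <= acc:
                chosen = comp
                break
        out.append(sample_bivariate_normal(chosen[1], chosen[2]))
    return out

points = sample_mixture(1000)
assert len(points) == 1000
```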
| Robert Israel | 8,508 | <p>The equality is quite definitely wrong. Try any example.</p>
|
226,449 | <p>Many counting formulas involving factorials can make sense for the case $n= 0$ if we define $0!=1 $; e.g., the Catalan numbers and the number of trees with a given number of vertices. Now here is my question:</p>
<blockquote>
<p>If $A$ is an associative and commutative ring, then we can define
sum and product operations on the set of all the finite subsets of our ring,
denoted by $+ \left(A\right) $ and $\times \left(A\right)$. While it
is intuitive to define $+ \left( \emptyset \right) =0$, why should
the product of zero elements be $1$? Does the fact that $0! =1$ have anything to do with $1$ being the multiplicative identity of the integers?</p>
</blockquote>
| eggcrook | 71,261 | <p>The most elementary way to understand is just to work backwards. If you start with a sum of several elements and subtract them one at a time, you get $0$ after you finish. If you start with a product of several elements and divide them out one at a time, you get $1$ after you finish. Instead of thinking of $0!$ as an empty product and getting all metaphysical, just think of it as nonempty product with all the terms factored out.</p>
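<p>Python's standard library happens to encode exactly these conventions, which makes a neat illustration of the "work backwards" idea:</p>

```python
import math

assert sum([]) == 0              # empty sum = additive identity
assert math.prod([]) == 1        # empty product = multiplicative identity
assert math.factorial(0) == 1    # consistent with 0! as an empty product

# Work backwards: start from a product and divide the factors out one by one.
product = math.prod([2, 3, 4])   # 24
for factor in (2, 3, 4):
    product //= factor
assert product == 1              # nothing left to multiply -> 1
```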
|
542,951 | <p>I am trying to show that the function $f:[0,1]\to \mathbb{R}$ defined by $f(x)=\sin \dfrac{1}{x}$ if $x\neq 0$ and $f(0)=0$ possesses the intermediate value property (IVP). Though it looks easy, I have no clue how to start. Any help would be appreciated.</p>
| JessicaK | 102,435 | <p>Consider the interval</p>
<p>$$I_{k} = \left[ \frac{1}{\left(2k+\frac{3}{2}\right)\pi}, \frac{1}{\left(2k+\frac{1}{2}\right) \pi}\right]$$</p>
<p>and notice that $f(I_{k}) = [-1,1]$ for every $k\in \mathbb{N}$: on $I_{k}$ the quantity $1/x$ sweeps out the interval $[(2k+\frac{1}{2})\pi,(2k+\frac{3}{2})\pi]$, on which $\sin$ attains every value in $[-1,1]$. Since $I_{k} \to 0$ as $k \to \infty$, every neighbourhood of $0$ contains some $I_{k}$, so $f$ takes every value of $[-1,1]$ arbitrarily close to $0$; the intermediate value property follows.</p>
|
1,851,084 | <p>I have to solve the following problem:
find the matrix $A \in M_{n \times n}(\mathbb{R})$ such that:
$$A^2+A=I$$ and $\det(A)=1$.
How many of these matrices can be found when $n$ is given?
Thanks in advance.</p>
| quid | 85,306 | <p>The fact $A^2 + A = I$, means that $X^2 + X -1$ is an annihilating polynomial of $A$. </p>
<p>The minimal polynomial of $A$ thus divides that polynomial. Thus all the eigenvalues of $A$ are among the roots of $X^2 + X - 1$, which are<br>
$$\frac{-1 \pm \sqrt{5}}{2}.$$</p>
<p>Since the determinant is the product of all eigenvalues (with multiplicity) it follows that both have the same multiplicity and this multiplicity is even. </p>
<p>Since the sum of all multiplicities is the dimension, it follows that the dimension must be divisible by $4$. </p>
<p>This condition suffices. A matrix has this property if and only it is similar to a diagonal matrix with half the diagonal entries $\frac{-1 + \sqrt{5}}{2}$ and the other $\frac{-1 - \sqrt{5}}{2}$. </p>
<p>The matrix is diagonalisable as the minimal polynomial has only distinct roots.</p>
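<p>The case $n = 4$ can be exhibited concretely (my construction, consistent with the answer): the companion matrix of $X^2+X-1$ satisfies the equation with determinant $-1$, and a block-diagonal sum of two copies fixes the determinant. A pure-Python check:</p>

```python
# C = [[0, 1], [1, -1]] is the companion matrix of X^2 + X - 1:
# C^2 + C = I and det(C) = -1, so diag(C, C) is a 4x4 solution with det 1.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

C = [[0, 1], [1, -1]]
assert matadd(matmul(C, C), C) == [[1, 0], [0, 1]]    # C^2 + C = I
assert C[0][0] * C[1][1] - C[0][1] * C[1][0] == -1    # det C = -1

# Block diagonal diag(C, C): still satisfies A^2 + A = I, det = (-1)^2 = 1.
A = [[0, 1, 0, 0],
     [1, -1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, -1]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert matadd(matmul(A, A), A) == I4
```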
|
1,873,194 | <p>Entropy of random variable is defined as:</p>
<p>$$H(X)= -\sum_{i=1}^n p_i \log_2(p_i)$$</p>
<p>Which as far as I understand can be interpreted as how many yes/no questions one would have to ask on average, to find out the value of the random variable $X$.</p>
<p>But what if the log base is changed to for example e? What would the interpretation be then? Is there an intuitive explanation?</p>
| Chill2Macht | 327,486 | <p>Obviously this interpretation breaks down (at least somewhat) for non-integer bases, but for any logarithm base $b$, not just base $b=2$, we can interpret the information with respect to that base, $$H_b(X) = -\sum_{i=1}^n p_i \log_b(p_i)$$ to be the average number of $b-$ary questions one would have to ask on average in order to find out the value of the random variable $X$.</p>
<p>What do I mean by a $b-$ary question? One that has at most $b$ possible answers. Any yes/no question is a $2-$ary question, and if one can reword it cleverly enough, any $2-$ary question can be stated as a yes/no question (since yes and no are both exhaustive and mutually exclusive).</p>
<p>More precisely, a $b$-ary question is one that has $b$ possible answers, which together exhaust all possibilities and each of which is mutually exclusive. In general, for $b \not=2$, it might be difficult to think of truly $b-$ary questions which don't involve artificially restricting the possible answers.</p>
<p>Consider, for example, trying to determine the value of a dice roll. If you are limited to $2-$ary questions, (e.g. "did it roll a $1$?") then on average it will take $\log_2 6$ questions to ascertain what value it rolled. However, if you are allowed to ask the $6-$ary question "did it roll a 1, 2, 3, 4, 5, or 6"? Then it will only take you $\log_6 6 =1$ question on average to ascertain the value.</p>
<p>If you consider the question tree consisting of all possible questions to derive the answer, then $b$ is just the number of branches emanating from each node (since each node is a question). If you increase $b$, you will decrease the number of questions necessary on average to ascertain the answer because each node will have more branches emanating from it, and thus you can traverse a larger portion of the set of possible answers by hitting fewer nodes. </p>
<p>This is in fact <em>very</em> similar to the idea of recursion equations and recursion trees in computer science; in particular, I encourage you to look at Section 4.4 of Cormen et al, <em>Introduction to Algorithms</em>, for an explanation of how the logarithm enters naturally into these types of situations. </p>
<p>We can think of asking a $b-$ary question as dividing the problem of finding the value of the random variable $X$ into $b$ subproblems of equal size -- then the analogy with recursion should become more clear (hopefully).</p>
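<p>The base change is just a rescaling, $H_b(X) = H_2(X)/\log_2 b$, so entropy in nats is entropy in bits times $\ln 2$; a short Python illustration with the die example (my numbers):</p>

```python
from math import e, log

def entropy(ps, base):
    return -sum(p * log(p, base) for p in ps if p > 0)

die = [1 / 6] * 6
bits = entropy(die, 2)     # ~2.585 yes/no questions on average
nats = entropy(die, e)     # the same uncertainty measured in nats
senary = entropy(die, 6)   # exactly 1 six-way question

assert abs(bits - log(6, 2)) < 1e-12
assert abs(senary - 1.0) < 1e-12
assert abs(nats - bits * log(2)) < 1e-12   # change of base: H_e = H_2 * ln 2
```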
|
4,552,955 | <p>I'm solving a probability problem, and I've ended up with this sum:</p>
<p><span class="math-container">$$\sum\limits_{k=0}^{n-a-b}\binom{n-a-b}{k}(a+k-1)!(n-a-k)!$$</span></p>
<p>WolframAlpha says I should get the answer <span class="math-container">$\frac{n!}{a\binom{a+b}{a}}$</span>, but I don't see how to get there. I tried to get to something containing <span class="math-container">$\binom{a+k-1}{k}$</span> so that I could use the Hockey Stick Theorem, but I wasn't successful.</p>
<p>So any hints would be very welcome, thanks for any help</p>
| Bruno B | 1,104,384 | <p>It seems to be the result of taking <span class="math-container">$x := a$</span>, <span class="math-container">$y := b+1$</span>, <span class="math-container">$z := 1$</span> and "<span class="math-container">$n$</span>" <span class="math-container">$:= n - a - b$</span> in the <a href="https://en.wikipedia.org/wiki/Rothe%E2%80%93Hagen_identity" rel="nofollow noreferrer">Rothe-Hagen identity</a> (thank you for indirectly making me discover this complicated yet fascinating identity today).</p>
<p>This gives you, using <span class="math-container">$\displaystyle \frac{c}{d} \binom{d}{d - c} = \frac{c \cdot d!}{d \cdot c!(d-c)!} = \binom{d-1}{d - c}$</span> to simplify the formula:
<span class="math-container">$$\sum_{k = 0}^{n - a - b} \binom{a + k - 1}{k}\binom{n - a - k}{n - a - b - k} = \binom{n}{n - a - b}$$</span></p>
<p>Let's reorder all the factorials hidden in the binomials:
<span class="math-container">$$\begin{split}1 =& \frac{1}{\binom{n}{n - a - b}} \sum_{k = 0}^{n - a - b} \binom{a + k - 1}{k}\binom{n - a - k}{n - a - b - k}\\& = \frac{(a+b)!(n - a - b)!}{n!}\sum_{k = 0}^{n - a - b} \frac{(a + k - 1)!}{k!(a-1)!}\frac{(n - a - k)!}{(n - a - b - k)!b!}\\
& = \frac{1}{n!}\frac{(a+b)!}{(a-1)!b!} \sum_{k = 0}^{n - a - b} \frac{(n - a - b)!}{(n - a - b - k)!k!}(a + k - 1)!(n - a - k)!\\
& = \frac{a\binom{a+b}{a}}{n!} \sum_{k = 0}^{n - a - b} \binom{n - a - b}{k}(a + k - 1)!(n - a - k)!\end{split}$$</span>
This finally grants:
<span class="math-container">$$\sum_{k = 0}^{n - a - b} \binom{n - a - b}{k}(a + k - 1)!(n - a - k)! = \frac{n!}{a\binom{a+b}{a}}$$</span></p>
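<p>The closed form is easy to spot-check numerically; a brute-force Python verification over small parameters (my script, assuming $a \ge 1$, $b \ge 0$ and $a + b \le n$):</p>

```python
# Verify sum_k C(n-a-b, k) * (a+k-1)! * (n-a-k)!  ==  n! / (a * C(a+b, a)).
from math import comb, factorial

def lhs(n, a, b):
    return sum(comb(n - a - b, k) * factorial(a + k - 1) * factorial(n - a - k)
               for k in range(n - a - b + 1))

def rhs(n, a, b):
    # n!/(a*C(a+b,a)) = n! * a! * b! / (a * (a+b)!), kept in exact integers.
    num = factorial(n) * factorial(a) * factorial(b)
    den = a * factorial(a + b)
    assert num % den == 0
    return num // den

for n in range(2, 12):
    for a in range(1, n):
        for b in range(0, n - a + 1):
            assert lhs(n, a, b) == rhs(n, a, b)
```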
|
2,942,879 | <p>Suppose that <span class="math-container">$lim_{n\rightarrow \infty} a_n = L$</span> and <span class="math-container">$L \neq 0$</span>. Prove there is some <span class="math-container">$N$</span> such that <span class="math-container">$a_n \neq 0$</span> for all <span class="math-container">$n \geq N$</span>.</p>
<p>We know by the definition of convergence of a sequence, <span class="math-container">$\forall \epsilon > 0, \exists\ N \in \mathbb{N}$</span> such that <span class="math-container">$\forall n \geq N$</span>, <span class="math-container">$|x_n - L| < \epsilon$</span>.</p>
<p>So take such an <span class="math-container">$N$</span> which we know exists since we're given the limit <span class="math-container">$L$</span>.
So for the condition <span class="math-container">$|a_n - L| < \epsilon$</span> to hold <span class="math-container">$\forall \epsilon > 0$</span>, <span class="math-container">$a_n \neq 0$</span>, as otherwise if <span class="math-container">$a_n = 0$</span>, since <span class="math-container">$L \neq 0$</span>, we could find an <span class="math-container">$\epsilon$</span> such that <span class="math-container">$\epsilon < |-L| = L$</span>.</p>
<p>Proof seems rather short and maybe not rigorous enough. Wanted thoughts or if there's a better way to do this.</p>
| Siong Thye Goh | 306,553 | <p>You might like to choose your <span class="math-container">$\epsilon$</span> explicitly; for example, take <span class="math-container">$\epsilon = \frac{|L|}2$</span>.</p>
<p>Then we can find <span class="math-container">$N$</span> such that for all <span class="math-container">$n \ge N$</span>, we have <span class="math-container">$|a_n -L| < \frac{|L|}2$</span>. Hence <span class="math-container">$$L-\frac{|L|}{2}<a_n< L+\frac{|L|}2$$</span></p>
<p>If <span class="math-container">$L<0$</span>, we have <span class="math-container">$L+\frac{|L|}2=\frac{L}2<0.$</span> Hence for all <span class="math-container">$n\ge N$</span>, <span class="math-container">$a_n<0$</span>.</p>
<p>If <span class="math-container">$L>0$</span>, we have <span class="math-container">$L-\frac{|L|}2=\frac{L}2>0.$</span> Hence for all <span class="math-container">$n\ge N$</span>, <span class="math-container">$a_n >0$</span>.</p>
<p>Remark:</p>
<p>Be careful that <span class="math-container">$|-L|$</span> need not be equal to <span class="math-container">$L$</span>.</p>
|
3,910,623 | <p>There is a problem that appears in an interview<span class="math-container">$^\color{red}{\star}$</span> with <a href="https://en.wikipedia.org/wiki/Vladimir_Arnold" rel="nofollow noreferrer">Vladimir Arnol'd</a>.</p>
<blockquote>
<p>You take a spoon of wine from a barrel of wine, and you put it into your cup of tea. Then you return a spoon of the (nonuniform!) mixture of tea from your cup to the barrel. Now you have some foreign substance (wine) in the cup and some foreign substance (tea) in the barrel. Which is larger: the quantity of wine in the cup or the quantity of tea in the barrel at the end of your manipulations?</p>
</blockquote>
<p>This problem is also quoted <a href="https://math.stackexchange.com/q/895627">here</a>.</p>
<hr />
<p>Here's my solution:</p>
<p>The key is to consider the proportions of wine and tea in the second spoonful (that is, the spoonful of the nonuniform mixture that is transported from the cup to the barrel). Let <span class="math-container">$s$</span> be the volume of a spoonful and <span class="math-container">$c$</span> be the volume of a cup. The quantity of wine in this second spoonful is <span class="math-container">$\frac{s}{s+c}\cdot s$</span> and the quantity of tea in this spoonful is <span class="math-container">$\frac{c}{s+c}\cdot s$</span>. Then the quantity of wine left in the cup is <span class="math-container">$$s-\frac{s^2}{s+c}=\frac{sc}{s+c}$$</span> and the quantity of tea in the barrel now is also <span class="math-container">$\frac{cs}{s+c}.$</span> So the quantities that we are asked to compare are the same.</p>
<p>However, Arnol'd also says</p>
<blockquote>
<p>Children five to six years old like them very much and are able to solve them, but they
may be too difficult for university graduates, who are spoiled by formal mathematical training.</p>
</blockquote>
<p>Given the simple nature of the solution, I'm going to guess that there is a trick to it. How would a six year old solve this problem? My university education is interfering with my thinking.</p>
<hr />
<p><span class="math-container">$\color{red}{\star}\quad$</span> S. H. Lui, <a href="https://www.ams.org/notices/199704/arnold.pdf" rel="nofollow noreferrer">An interview with Vladimir Arnol′d</a>, Notices of the AMS, April 1997.</p>
| Atbey | 327,944 | <p>The volume of the spoon, <span class="math-container">$s$</span>, is the conserved quantity. It is also the amount of wine in the cup.<br> When you then take some mixture <span class="math-container">$\mathit{tea}+\mathit{wine} = s$</span> into the spoon, <br><span class="math-container">$s-\mathit{wine}$</span> is the amount of wine left in the cup <em>and</em> the amount of tea poured into the wine barrel.</p>
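<p>This conservation argument is indifferent to how well the spoonful is mixed, which a tiny simulation makes vivid (the volumes and the mixing fraction are arbitrary):</p>

```python
# The invariance holds for ANY mixing: let the returned spoonful contain an
# arbitrary wine fraction f, and compare the two "foreign" quantities.
import random

random.seed(7)
s = 1.0                                 # spoon volume (arbitrary units)
for _ in range(1000):
    wine_in_cup = s                     # after the first transfer
    f = random.random()                 # wine fraction of the returned spoonful
    wine_back = f * s
    tea_back = s - wine_back
    wine_in_cup -= wine_back            # wine remaining in the cup
    tea_in_barrel = tea_back            # tea now in the barrel
    assert abs(wine_in_cup - tea_in_barrel) < 1e-12
```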
|