| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,208,613 | <p>We have a fair <span class="math-container">$3$</span> sided die <span class="math-container">$(a,b,c)$</span>, and we perform the following experiment:</p>
<p>Roll the die until we have seen <span class="math-container">$10$</span> of any of the sides, let <span class="math-container">$X$</span> be the number of times the die was rolled.</p>
<p>e.g. <span class="math-container">$\{bcbaaaaaaaaaa,\, bbbbbbbbbb,\, aaaaaaacacccccccccc\}$</span></p>
<p>We are interested in the probability of <span class="math-container">$X$</span> for all possible values of <span class="math-container">$X$</span>, i.e. <span class="math-container">$X\in\{10,11,\dots,28\}$</span>.</p>
| angryavian | 43,949 | <p>The roots of the two equations are
<span class="math-container">$$\frac{1}{2}\left(-(k-1) \pm \sqrt{(k-1)^2 + 8 (k+1)}\right) = \frac{1}{2}(-(k-1) \pm (k+3)) = \{2, -(k+1)\}$$</span>
<span class="math-container">$$\frac{1}{2(k-1)} \left(-k \pm \sqrt{k^2 - 4(k-1)}\right) = \frac{1}{2(k-1)}(-k \pm (k-2)) = \{-\frac{1}{k-1}, -1\}$$</span>
so you want
<span class="math-container">$$\{2, -(k+1), -\frac{1}{k-1}, -1\}$$</span>
to have cardinality <span class="math-container">$3$</span>.</p>
|
4,336,706 | <p>Let <span class="math-container">$\mathbb {K}$</span> be a field. Let <span class="math-container">$f: \mathbb {K}^2 \rightarrow \mathbb {K}^2; x \mapsto Ax+b$</span> be an affine transformation. Suppose <span class="math-container">$f$</span> has a fixed point line (i.e. a line such that every point on that line is a fixed point of <span class="math-container">$f$</span>). When does the linear map <span class="math-container">$x \mapsto Ax$</span> have a fixed point line?</p>
<ul>
<li><em>What I tried:</em><br />
I tried to construct a fixed point line of the linear map from the one of <span class="math-container">$f$</span>, but to no avail. I know that <span class="math-container">$(0,0)$</span> is a fixed point of the linear map. If I could obtain one other fixed point I would be done, since by linearity the line through the origin and that point would consist only of fixed points. So it boils down to finding a fixed point of the linear map other than the origin. Another thought: By our assumption the coefficient matrix of the inhomogeneous system of linear equations <span class="math-container">$(A-I_2)x=-b$</span> has rank one.
Now we are interested in the homogeneous system. Any hints?</li>
</ul>
| Kavi Rama Murthy | 142,385 | <p>Counter-example <span class="math-container">$f(x)=\frac 1x$</span> when <span class="math-container">$f$</span> is only <span class="math-container">$C^{\infty}$</span> on <span class="math-container">$(0,1)$</span>.</p>
<p>If <span class="math-container">$f$</span> is also continuous on <span class="math-container">$[0,1]$</span> then this is a trivial application of MVT to the function <span class="math-container">$xf(x)$</span>.</p>
|
1,265,801 | <p>Let $p$ be an odd prime and $a, b \in \Bbb Z$ with $p$ not dividing $a$ and $p$ not dividing $b$. Prove that among the congruences $x^2 \equiv a \mod p$, $\ x^2 \equiv b \mod p$, and $x^2 \equiv ab \mod p$, either all three are solvable or exactly one.</p>
<p>Please help I'm trying to study for final in number theory and I can't figure out this proof.</p>
| Joffan | 206,402 | <p>Since $p$ is a prime, it has at least one primitive root $g$. </p>
<ul>
<li>Every quadratic residue $q$ (coprime to $p$) can be expressed as $g^{2k}\equiv q \bmod p$. </li>
<li>Every quadratic non-residue $n$ can be expressed as $g^{2k+1}\equiv n \bmod p$.</li>
</ul>
<p>Therefore your assertion is equivalent to the parity of the exponent of a chosen primitive root. Odd exponents give quadratic non-residues, even exponents give quadratic residues, and multiplying the numbers $a$ and $b$ together leads to adding the exponents.</p>
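<p>As a quick numerical sanity check (my addition, not part of the argument above), one can verify the parity claim for a small prime, say <span class="math-container">$p=11$</span> with primitive root <span class="math-container">$g=2$</span>:</p>

```python
# Check: even powers of a primitive root g mod p are exactly the
# nonzero quadratic residues, odd powers the non-residues (p = 11, g = 2).
p, g = 11, 2
residues = {x * x % p for x in range(1, p)}            # nonzero quadratic residues
even_powers = {pow(g, 2 * k, p) for k in range(1, p)}  # g^(2k) mod p
odd_powers = {pow(g, 2 * k + 1, p) for k in range(p)}  # g^(2k+1) mod p
print(sorted(residues))  # [1, 3, 4, 5, 9]
```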
|
2,402,429 | <p>Let $P(x) = x^3 + 2x^2+3x+4$ and $a$ be the root of equation $x^4+x^3+x^2+x+1=0$.</p>
<p>Find the value of $P(a)P(a^2)P(a^3)P(a^4)$</p>
<p>Is my answer correct ?</p>
<p>Since root of equation $x^4+x^3+x^2+x+1=0$ is the $5^{th}$ primitive root of 1,</p>
<p>so $a, a^2, a^3, a^4$ are roots of $x^4+x^3+x^2+x+1=0$ </p>
<p>$P(a)P(a^4)=(a^3+2a^2+3a+4)(\frac{1}{a^3}+\frac{2}{a^2}+\frac{3}{a}+4)= 15+5a^4+5a$</p>
<p>Similarly, $P(a^2)P(a^3)=15+5a^3+5a^2$</p>
<p>$P(a)P(a^2)P(a^3)P(a^4)=(15+5a^4+5a)(15+5a^3+5a^2)=125$</p>
| Mark Bennet | 2,906 | <p>Here is another way. Call the product that you want Q.</p>
<p>Note that $xP(x)-P(x)=x^4+x^3+x^2+x-4$</p>
<p>As you correctly observe the values you are substituting are all roots of $x^4+x^3+x^2+x+1=0$ and hence for these values you get $(x-1)P(x)=-5$ whence $$(a-1)(a^2-1)(a^3-1)(a^4-1)Q=(-5)^4=625$$</p>
<p>Now pairing the first and last factor and the middle two factors and noting that $a^5=1$ we have $$(a-1)(a^2-1)(a^3-1)(a^4-1)=(2-a-a^4)(2-a^2-a^3) =$$$$=4-2(a+a^2+a^3+a^4)+a^3+a^4+a+a^2$$ and we know that $$a^4+a^3+a^2+a=-1$$ So $5Q=625$ and $Q=125$</p>
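<p>A quick floating-point check of the final value (my addition; it uses an explicit primitive <span class="math-container">$5$</span>th root of unity, which the argument above never needs):</p>

```python
# Verify P(a)P(a^2)P(a^3)P(a^4) = 125 for a = exp(2*pi*i/5).
import cmath

P = lambda x: x**3 + 2 * x**2 + 3 * x + 4
a = cmath.exp(2j * cmath.pi / 5)
Q = P(a) * P(a**2) * P(a**3) * P(a**4)
print(Q)  # numerically close to 125 + 0j
```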
|
408,590 | <p>I'm looking for references (books/lecture notes) for :</p>
<ul>
<li>Cardinality without choice, Scott's trick;</li>
<li>Cardinal arithmetic without choice.</li>
</ul>
<p>Any suggestions? Thanks in advance.</p>
| Cameron Buie | 28,900 | <p>Azriel Lévy's "Basic Set Theory" discusses Scott's trick, and does some discussion of choiceless arithmetic.</p>
|
490,641 | <p>In Niels Lauritzen, <em>Concrete Abstract Algebra</em>, I'm having trouble showing the following:</p>
<p>The problem starts out like this:</p>
<p>$f(X)=a_nX^n+\cdots+a_1X+a_0, a_i \in \mathbb Z, n \in \mathbb N$ </p>
<p>Part (i) which I think I've done right:</p>
<p>i) Show $X-a \mid X^n-a^n$:
$X^n - a^n = (X-a)(X^{n-1} + X^{n-2}a + \cdots + Xa^{n-2} + a^{n-1}), n\ge2 \Rightarrow X-a \mid X^n-a^n$.</p>
<p>Part (ii) which I am having trouble solving:</p>
<blockquote>
<p>ii) Show that if: $a, N \in \mathbb Z$, $f$ has degree $n$ modulo $N$, $f(a) \equiv 0$ (mod $N$) then $f(X) \equiv (X-a)g(X)$ (mod $N$) and $g$ has degree $n-1$ modulo $N$.</p>
</blockquote>
<p>Hint: use (i) and $f(X) \equiv f(X) - f(a)$ (mod $N$).</p>
<p>$f$ has degree $n$ modulo $N$ means $N$ does not divide $a_n$.</p>
<p>Thanks for your time.</p>
| Calvin Lin | 54,563 | <p><strong>Hint:</strong> Ignoring modulo $N$, what would be the coefficient of $x^{n-1} $ in $g(x)$.</p>
<p><strong>Hint:</strong> If this coefficient is 0 modulo $N$, what can we say about the coefficient of $x^{n}$ in $f$?</p>
|
323,128 | <p>Show that in every (not necessarily connected) graph there is a path from every vertex $u$ of odd degree to some other vertex $v$ ($u \neq v$), also of odd degree.</p>
| Brian M. Scott | 12,042 | <p>HINT: Let $u$ be a vertex of odd degree. Start at $u$ and walk from vertex to vertex, never repeating an edge, until you can’t proceed any further.</p>
|
2,263,759 | <p>Let $E$ be a vector space and $\varphi: E \to E$ be a linear map. Let $x, y \in E \setminus \{0\}$ and $\lambda, \mu \in F$ such that
$\varphi(x) = \lambda x$ and $\varphi(y) = \mu y$. Prove that if $\lambda \neq \mu$ then $\{x, y\}$ is linearly independent.</p>
<p>This proof seems like it should be on the simpler side. But perhaps I am over thinking it. This is what I have:</p>
<p><strong>Proof :</strong>
Suppose $\{x,y\}$ is not linearly independent.
Then, there exist scalars $a,b$ s.t. $ax+by=0$ where $a=b=0$.
So, $0=ax+by=\varphi(x)+\varphi(y)= \varphi(x+y)$</p>
<p>And this is where I am stuck. I can use the fact that $\varphi$ is linear, and show that $\varphi(0)=0$, but the map is not necessarily injective, so there might be more elements in the null space. I can't use $\varphi(x+y)=c(x+y)$ because this assertion would only hold in dimension $1$. What am I missing? Am I going about this in the right way?</p>
<p>Thank you in advance!</p>
| egreg | 62,967 | <p>You may want to see a proof that can be generalized to more than two vectors.</p>
<p>Consider $\alpha x+\beta y=0$. Then
\begin{align}
\varphi(\alpha x+\beta y)&=0 \\
\mu(\alpha x+\beta y)&=0
\end{align}
which become
\begin{align}
\alpha\lambda x+\beta\mu y&=0 \\
\alpha\mu x+\beta\mu y&=0
\end{align}
Subtract the second from the first:
$$
\alpha(\lambda-\mu)x=0
$$
Since $(\lambda-\mu)x\ne0$ by assumption, we obtain $\alpha=0$ and so $\beta y=0$, from which $\beta=0$.</p>
<hr>
<p>This can be generalized in a similar way: if $x_k$ is an eigenvector relative to $\lambda_k$, for $k=1,2,\dots,n$ and the eigenvalues are pairwise distinct, then $\{x_1,\dots,x_n\}$ is linearly independent.</p>
<p>The case $n=1$ is obvious. Suppose the thesis holds for $n-1$ eigenvectors relative to distinct eigenvalues. Then, given $\alpha_1x_1+\dots+\alpha_nx_n=0$, we have
\begin{align}
\varphi(\alpha_1x_1+\dots+\alpha_nx_n)&=0 \\
\lambda_n(\alpha_1x_1+\dots+\alpha_nx_n)&=0
\end{align}
which becomes
\begin{align}
\alpha_1\lambda_1x_1+\dots+\alpha_{n-1}\lambda_{n-1}x_{n-1}+\alpha_n\lambda_nx_n&=0 \\
\alpha_1\lambda_nx_1+\dots+\alpha_{n-1}\lambda_nx_{n-1}+\alpha_n\lambda_nx_n&=0
\end{align}
Subtract the second from the first: then
$$
\alpha_1(\lambda_1-\lambda_n)x_1+\dots+
\alpha_{n-1}(\lambda_{n-1}-\lambda_n)x_{n-1}=0
$$
and, by the induction hypothesis,
$$
\alpha_k(\lambda_k-\lambda_n)=0 \qquad (k=1,\dots,n-1)
$$
forcing $\alpha_1=\dots=\alpha_{n-1}=0$. Hence also $\alpha_nx_n=0$ and $\alpha_n=0$.</p>
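<p>A small numerical illustration of the two-vector case (my addition; the matrix and eigenvectors are just a sample I chose):</p>

```python
# For A = [[2, 1], [0, 3]], x1 = (1, 0) and x2 = (1, 1) are eigenvectors
# for the distinct eigenvalues 2 and 3, so they must be independent.
A = [[2, 1], [0, 3]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x1, lam1 = [1, 0], 2
x2, lam2 = [1, 1], 3
assert matvec(A, x1) == [lam1 * c for c in x1]
assert matvec(A, x2) == [lam2 * c for c in x2]

# Two vectors in R^2 are independent iff their determinant is nonzero.
det = x1[0] * x2[1] - x1[1] * x2[0]
print(det)  # 1
```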
|
920,429 | <p>Given that $x(t)=(c_1+c_2 t + c_3 t^2)e^t$ is the general solution to a differential equation, how do you work backwards to find the differential equation? </p>
| mvw | 86,776 | <p>The general solution
$$
x(t) = (c_1 + c_2 t + c_3 t^2)\, e^t
$$
has three parameters $c_i$, so one would need three integrations to reintroduce them from a differential equation, or
three differentiations to eliminate them:
$$
\begin{align}
x(t) &= (c_1 + c_2 t + c_3 t^2)\, e^t \Rightarrow \\
\dot{x}(t) &= (c_2 + 2\, c_3 t)\, e^t + x(t) \Rightarrow \\
\ddot{x}(t) &= 2\, c_3 \, e^t + 2 \dot{x}(t) - x(t) \Rightarrow \\
\dddot{x}(t) &= 2\, c_3 \, e^t + 2 \ddot{x}(t) - \dot{x}(t)
\end{align}
$$
This would give
$$
\dddot{x}(t) - \ddot{x}(t) = 2\ddot{x}(t)-3\dot{x}(t)+x(t)
$$
or
$$
\dddot{x}(t) - 3\ddot{x}(t) + 3\dot{x}(t) - x(t) = 0
$$</p>
<p>Check: I am too lazy to solve this manually, so I used the machine:
<a href="https://www.wolframalpha.com/input/?i=x%27%27%27+-+3+x%27%27+%2B+3+x%27+-+x+%3D+0" rel="nofollow">Wolfram Alpha solution</a></p>
<p>It reconstructs your general solution.</p>
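<p>Instead of a computer algebra system, a crude finite-difference check also works (my addition; the constants <span class="math-container">$c_i$</span> are arbitrary sample values):</p>

```python
# Check numerically that x(t) = (c1 + c2 t + c3 t^2) e^t satisfies
# x''' - 3 x'' + 3 x' - x = 0, using central differences at t = 0.5.
import math

c1, c2, c3 = 1.0, -2.0, 0.5
x = lambda t: (c1 + c2 * t + c3 * t * t) * math.exp(t)

t, h = 0.5, 1e-3
d1 = (x(t + h) - x(t - h)) / (2 * h)
d2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
d3 = (x(t + 2 * h) - 2 * x(t + h) + 2 * x(t - h) - x(t - 2 * h)) / (2 * h**3)
residual = d3 - 3 * d2 + 3 * d1 - x(t)
print(abs(residual))  # close to 0 (finite-difference error only)
```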
|
920,429 | <p>Given that $x(t)=(c_1+c_2 t + c_3 t^2)e^t$ is the general solution to a differential equation, how do you work backwards to find the differential equation? </p>
| Ana M | 167,359 | <p>Given a homogeneous linear differential equation of order $n$, we can get the solutions by writing the characteristic polynomial and equating to zero.</p>
<p>$c_nx^{(n)}(t)+c_{n-1}x^{(n-1)}(t)+...+c_{1}x^{(1)}(t)+c_0x(t)=0$</p>
<p>$x(t)=\sum_iA_ie^{\lambda_i t}$, where $\lambda_i$ is a $\lambda$ such that solves the characteristic polynomial:</p>
<p>$c_n\lambda^{n}+c_{n-1}\lambda^{n-1}+...+c_1\lambda^{1}+c_0=0$</p>
<p>However, if some $\lambda_i$ is a solution of the characteristic polynomial with a multiplicity $k>1$, its contribution to the general $x(t)$ is not $Ae^{\lambda_i t}$, but $\sum_{j=0}^{k-1}A_jt^{j}e^{\lambda_i t}$. For example, if the characteristic polynomial is $(\lambda-2)^2$ then the solution is not $Ae^{2t} + Be^{2t}$, but $Ae^{2t} + Bte^{2t}$.</p>
<p>This is exactly our case. We can see the solution is of the form $\sum_{j=0}^{k-1}A_jt^{j}e^{\lambda_i t}$ where $\lambda_i=1$, $k-1=2\rightarrow k=3$.</p>
<p>So the characteristic polynomial is $(\lambda-1)^3=\lambda^3-3\lambda^2+3\lambda-1$.</p>
<p>From this we can conclude that the differential equation is $x^{(3)}(t)-3x^{(2)}(t)+3x^{(1)}(t)-x(t)=0$.</p>
|
366,844 | <p>Using the infinite product of $\sin(\pi z)$, one can find the Hadamard product for $e^z-1$:</p>
<p>$$e^z-1 =2ie^{z/2}\sin(-iz/2)= 2i e^{z/2} (-iz/2) \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right)\\= e^{z/2} z \prod_n \left(1+\frac{z^2}{4\pi^2 n^2}\right).$$</p>
<p>I don't see a way to find the product for $\cos\pi z$. A naive attempt is letting $\{a_n\}\subset{\Bbb C}$ be all the zeros of $\cos(\pi z)$ and showing the possible convergence of
$$
\prod_{n=1}^\infty\left(1-\frac{z}{a_n}\right)
$$</p>
<p>Is there an alternative way to find the Hadamard product in the title for $\cos\pi z$?</p>
| Clayton | 43,239 | <p><strong>Hint:</strong> Use $\sin(2z)=2\sin(z)\cos(z)$ so that $$\cos(z)=\frac{\sin(2z)}{2\sin(z)}.$$ If you're careful about how you write it, you will see that all of the 'even terms' cancel nicely. I do not have time right now, but if you haven't been able to solve it within a few hours, I will return and post my solution.</p>
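<p>For the record, here is a sketch of where the hint leads (my addition, assuming the standard product <span class="math-container">$\sin(\pi z)=\pi z\prod_{n\ge1}(1-z^2/n^2)$</span>):</p>

```latex
\cos(\pi z) = \frac{\sin(2\pi z)}{2\sin(\pi z)}
            = \frac{2\pi z \prod_{n\ge 1}\left(1-\frac{4z^2}{n^2}\right)}
                   {2\pi z \prod_{n\ge 1}\left(1-\frac{z^2}{n^2}\right)}
% the even-indexed factors n = 2m of the numerator cancel the
% denominator exactly, leaving only the odd-indexed ones:
            = \prod_{m\ge 1}\left(1-\frac{4z^2}{(2m-1)^2}\right).
```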
|
2,231,092 | <p>I am reading <a href="http://people.ucalgary.ca/~rzach/static/open-logic/open-logic-complete.pdf" rel="nofollow noreferrer">Open Logic TextBook</a>. In which there is a proposition about Extensionality of first order sentences (6.12) It goes like this, </p>
<p>Let $\phi$ be a sentence, and $M$ and $M'$
be structures.
If $c_M = c_{M'}$
, $R_M = R_{M'}$
, and $f_M = f_{M'}$
for every constant symbol $c$,
relation symbol $R$, and function symbol $f$ occurring in $\phi$, then $ M \models \phi$ iff $M' \models \phi$</p>
<p>Does this statement implicitly imply that the Domain is exactly the same set, since $f_M = f_{M'}$ I am confused at this statement, does it mean, $f_M = f_{M'}$ only on the domain of constant values (or other covered terms?)</p>
| hamam_Abdallah | 369,188 | <p><strong>hint</strong></p>
<p>Your line has the following Cartesian equation </p>
<p>$$y=\frac {1}{7}x+b $$</p>
<p>with $$b=-3+\frac {2}{7}=-\frac {19}{7} $$</p>
<p>thus</p>
<p>$$r=\sqrt {x^2+y^2}=f (x) $$</p>
<p>and $$\tan (\theta)=\frac {y}{x} =g (x)$$
or $$x=h (\theta) $$</p>
<p>thus, your polar equation will be
$$r=f (h (\theta)). $$</p>
|
2,576,344 | <p>This problem is about expected value, and it's a real world problem.</p>
<p>I know so far that $f$ is strictly increasing, if that makes the proof more concise (but if you can also prove it without this assumption, that would be awesome). Find all solutions for $f$ when $f(P_a \cdot a+P_b \cdot b)=P_a \cdot f(a)+P_b \cdot f(b)$ for all $a,b \in R$, and for $P_a,P_b \in R_+$. I'm 80% sure $f$ must be $f(x)=mx$ but I don't know how to prove it.</p>
| angryavian | 43,949 | <p>By taking $P_b=0$, you have $f(P_a \cdot a) = P_a \cdot f(a)$ for any $a$ and $P_a$.</p>
<p>Taking $a=1$ shows that $f(P_a) = P_a \cdot f(1)$ for any $P_a$.</p>
<p>For purely cosmetic reasons you can rewrite the last result as $f(x) = x \cdot f(1)$ for all $x$.</p>
<p>Letting $m:=f(1)$ shows that $f(x) = mx$.</p>
<p>This shows that a necessary condition for $f$ to satisfy the original condition is "$f$ must be of the form $f(x)=mx$ where $m:=f(1)$." To show it is also a sufficient condition, you can directly check that the original condition holds with this kind of $f$.</p>
<hr>
<p>The question was edited (after my original answer) to assert $P_a , P_b \ge 0$.</p>
<p>The same argument as before shows that $f(x) = f(1) \cdot x$ for $x \ge 0$.
Then by applying the original assumption, we have $f(-1) + f(1) = f(-1 + 1) = f(0) = 0$ and thus $f(-1) = -f(1)$. Applying the same argument as before shows $f(-x) = f(-1) \cdot x = -f(1) \cdot x$ for $x \ge 0$. Thus $f(x)=f(1) \cdot x$ for all real $x$.</p>
|
241,998 | <p>Consider a list of even length, for example <code>list={1,2,3,4,5,6,7,8}</code></p>
<p>what is the fastest way to accomplish both of these operations?</p>
<p><strong>Operation 1</strong>: two by two element inversion, the output is:</p>
<pre><code>{2,1,4,3,6,5,8,7}
</code></pre>
<p>A code that works is:</p>
<pre><code>Flatten@(Reverse@Partition[list,2])
</code></pre>
<p><strong>Operation 2</strong>: Two by two Reverse, the output is:</p>
<pre><code>{7,8,5,6,3,4,1,2}
</code></pre>
<p>A code that works is:</p>
<pre><code>Flatten@(Reverse@Partition[Reverse[list],2])
</code></pre>
<p><strong>The real lists have length 16, so there is no need for anything adapted to long lists.</strong></p>
| Daniel Huber | 46,318 | <p>We can achieve a speed up of approx. 2 by using <code>Riffle</code>:</p>
<pre><code>list = {1, 2, 3, 4, 5, 6, 7, 8};
</code></pre>
<p>For the first problem:</p>
<pre><code>Riffle[list[[2 ;; ;; 2]], list[[1 ;; ;; 2]]]
</code></pre>
<p>For the second problem:</p>
<pre><code>Riffle[list[[-2 ;; 1 ;; -2]], list[[-1 ;; 2 ;; -2]]]
</code></pre>
|
1,969,903 | <blockquote>
<p>a) Evaluate the one-dimensional Gaussian integral</p>
<p><span class="math-container">$I(a)$</span> = <span class="math-container">$\int_R exp(-ax^2)dx$</span>, <span class="math-container">$a>0$</span></p>
<p>b) evaluate the two-dimensional Gaussian integral using a)</p>
<p><span class="math-container">$I_2(a,b)$</span> = <span class="math-container">$\int_{R^2} exp(-ax^2 -by^2)dxdy, a,b>0$</span></p>
</blockquote>
<p>For a) I have done the following:</p>
<p><span class="math-container">$\int_{-\infty}^{+\infty}$</span> <span class="math-container">$e^{-ax^2}dx = 2\int_0^{\infty}e^{-ax^2}dx $</span></p>
<p><span class="math-container">$I^2$</span> = 4<span class="math-container">$\int_0^\infty$$\int_0^\infty$$e^{-a(x^2 + y^2)}dydx$</span></p>
<p>...</p>
<p><span class="math-container">$I^2$</span> = <span class="math-container">$\sqrt{\pi a}$</span></p>
<p><span class="math-container">$I$</span> = <span class="math-container">$\pi$$\sqrt a$</span></p>
<p>I am having difficulties understanding how to solve b) any help or guide will be appreciated.</p>
| Batman | 127,428 | <p>Note that $e^{-ax^2-by^2} = e^{-a x^2}e^{-b y^2}$.</p>
<p>Then, note that $\iint_{\mathbb{R}^2} e^{-ax^2-by^2} dy dx = \iint_{\mathbb{R}^2} e^{-a x^2}e^{-b y^2} dy dx = \int_{\mathbb{R}} e^{-a x^2} dx \int_{\mathbb{R}} e^{-b y^2} dy$, and apply your result from part (a). </p>
<p>Also, note $\int_{\mathbb{R}} \frac{1}{\sqrt{2 \pi \sigma}} e^{-\frac{x^2}{2 \sigma^2}} dx = 1$ (compare this to your answer for part (a)). </p>
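<p>A crude quadrature check of both results (my addition; truncating to <span class="math-container">$[-8,8]$</span> is harmless because the integrand decays so fast):</p>

```python
# Midpoint-rule check: I(a) = sqrt(pi/a), and the 2D integral factors,
# so I2(a, b) = I(a) * I(b) = pi / sqrt(a*b).
import math

def I(a, L=8.0, n=100_000):
    h = 2 * L / n
    return h * sum(math.exp(-a * (-L + (k + 0.5) * h) ** 2) for k in range(n))

Ia, Ib = I(2.0), I(3.0)
print(Ia)       # ~ sqrt(pi/2) ~ 1.2533
print(Ia * Ib)  # ~ pi / sqrt(6) ~ 1.2825
```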
|
3,984,230 | <blockquote>
<p><span class="math-container">$2^x=4x$</span></p>
</blockquote>
<p>I can't seem to solve this equation. The furthest I have been able to get is
<span class="math-container">$x-\log_2(x)=2$</span>, but I can't figure how to solve. When I graph <span class="math-container">$2^x$</span> and <span class="math-container">$4x$</span> they intersect at <span class="math-container">$x=4$</span> and <span class="math-container">$x=0.31$</span>, so I know it is possible to solve.</p>
| Community | -1 | <p>The equation has indeed the easy integer solution <span class="math-container">$x=4$</span> which can be found by inspection. Graphing reveals a second solution (and it is possible to formally prove that there are no other real roots).</p>
<p>If you have heard of Taylor expansions, as the second root is not that large,
you can expand <span class="math-container">$e^{x\ln2}$</span> to second order and solve</p>
<p><span class="math-container">$$1+x\ln2 +\frac{(x\ln2)^2}2=4x.$$</span></p>
<p>The smallest solution of this quadratic is <span class="math-container">$x=0.30935\cdots$</span>, which is close to the sought answer.</p>
<hr />
<p>Even better, you can use the known approximation <span class="math-container">$0.31$</span> and write</p>
<p><span class="math-container">$$e^{(0.31+y)\ln 2}=e^{0.31\ln 2}e^{y\ln 2}=4(0.31+y).$$</span></p>
<p>Solving as above,</p>
<p><span class="math-container">$$y=-9.3067447651\cdots\,\cdot10^{-5}$$</span> and <span class="math-container">$$x=0.3099069323807\cdots,$$</span> where all decimals are exact !</p>
|
40,572 | <p>Dummit and Foote, p. 204</p>
<p>They suppose that $G$ is simple with a subgroup of index $k = p$ or $p+1$ (for a prime $p$), and embed $G$ into $S_k$ by the action on the cosets of the subgroup. Then they say</p>
<p>"Since now Sylow $p$-subgroups of $S_k$ are precisely the groups generated by a $p$-cycle, and distinct Sylow $p$-subgroups intersect in the identity"</p>
<p>Am I correct in assuming that these statements follow because the Sylow $p$-subgroups of $S_k$ must be cyclic of order $p$? They calculate the number of Sylow $p$-subgroups of $S_k$ by counting (number of $p$-cycles)/(number of $p$-cycles in a Sylow $p$-subgroup). They calculate the number of $p$-cycles in a Sylow $p$-subgroup to be $p(p-1)$, which I don't see.</p>
| André Nicolas | 6,312 | <p>Don't really like to give general rules or tricks, since then looking for them can interfere with the analysis of a problem. But the presence of <strong>or</strong> means we are trying to count the <strong>union</strong> of two sets $A$ and $B$. And sometimes the best way to count a union is to count $A$, count $B$, count what they have in common, and do the obvious thing. Your intuitive explanation of the "why" in this case was clear and correct.</p>
<p>Your tutor's answer is right, but your description of it is not. We are counting the number of <strong>hands</strong>, order is irrelevant. The $c(13,4)$ in the answer comes not from the <em>location</em> of the spades. It actually counts the number of ways of choosing $4$ spades from the $13$. For <strong>every</strong> way of choosing the spades, there are $c(39,9)$ ways to choose the remaining cards (you kind of typed it backwards), so the number of $4$-spade hands is
$$C(13,4)C(39,9)$$</p>
|
269,242 | <p>The number of primes in each of the $\phi(n)$ residue classes relatively prime to $n$ are known to occur with asymptotically equal frequency (following from the proof of the Prime Number Theorem). Does the same result hold on pairs of consecutive primes on the $\phi(n)^2$ pairs of congruence classes?</p>
<p>To wit: Consider $\{(2, 3), (3, 5), (5, 7), (7, 11), \ldots\}\pmod n$. Does $(a,b)$ occur with natural density
$$\begin{cases}
1/\phi(n)^2,&\gcd(ab,n)=1\\
0,&\text{otherwise}
\end{cases}
$$
?</p>
| user54998 | 54,998 | <p>I think it would be extremely unlikely that, at this point in time, one could prove anything of this kind. Even the much weaker problem (for general $n$) of whether there are infinitely many such pairs (if $\gcd(ab,n) = 1$) seems extremely difficult as soon as $\phi(n) > 2$. </p>
<p>Consider the very special case that $n = 4$. If $\pi_1(x)$ and $\pi_3(x)$ denote the prime counting function for primes congruent to $1$ and $3$ mod $4$ respectively, the fact that:</p>
<p>$$\limsup(\pi_1(x) - \pi_3(x)) = \infty, \qquad \limsup (\pi_3(x) - \pi_1(x)) = \infty$$</p>
<p>implies that there are infinitely many such pairs for any $(a,b)$ mod $4$ with $ab$ odd. However, such results are completely insufficient for showing that there are infinitely many <i>triples</i> (say) of consecutive primes which are $1$ mod $4$. Similarly, the problem for pairs of primes when $\phi(n) > 2$ seems very difficult. It seems to me that the only approach to such problems is via generalizations of the twin prime conjecture, and (at best) such results will never give positive density.</p>
<p>To summarize: as a general heuristic, of course, your conjecture seems quite reasonable, but even the weaker conjecture (that there are infinitely many pairs for all suitable $a$ and $b$) seems out of reach unless $n = 3$, $4$, or $6$.</p>
|
1,874,914 | <p>In order to find $e^{At}$ we can't just take the entrywise exponential of $A$ as we would in its diagonalized form. We need to diagonalize $A=S^{-1}\Lambda S$ in order to find $e^{At}=S^{-1}e^{\Lambda t}S$. Why is this the case? I know we can't take the exponential of the matrix right away; do we need to take the exponential of the diagonal and conjugate by $S$ in order to reach the answer every time?</p>
<p>If yes, why?</p>
| SC Maree | 357,023 | <p>Definitions with matrices are not as obvious as you might expect. Consider for example the matrix inverse. If $x$ is a number, its inverse $x^{-1}$ should satisfy $x x^{-1} = 1$. A simple computation gives of course $x^{-1} = \frac{1}{ x}$. With a matrix $A$, this is not as simple: we require $A A^{-1} = I$, where $I$ is the identity matrix. In this case, what should $\frac 1 A$ mean? It is not the inverse of each of its entries (in general). However, with diagonal matrices, things are easier, because the inverse of a diagonal matrix is indeed the inverse of its diagonal entries,
$$\begin{pmatrix}a & 0 & 0\\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} ^{-1} = \begin{pmatrix}a^{-1} & 0 & 0\\ 0 & b^{-1} & 0 \\ 0 & 0 & c^{-1} \end{pmatrix}$$</p>
<p>A similar thing holds with exponentials. The exponential of a diagonal matrix is the exponent of its diagonal entries,
$$\exp\begin{pmatrix}a & 0 & 0\\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} = \begin{pmatrix}e^a & 0 & 0\\ 0 & e^b & 0 \\ 0 & 0 & e^c \end{pmatrix}.$$</p>
<p>If you do a diagonalization, $A^k = U^{-1}D^kU$, we can follow the definition of the matrix exponential, which is given by the infinite summation,
$$\exp(tA) = \sum_{k=0}^{\infty} \frac{t^{k} A^{k}}{k!} = U^{-1}\left(\sum_{k=0}^{\infty} \frac{t^{k} D^{k}}{k!} \right)U = U^{-1}e^{tD}U,$$</p>
<p>And Yes! We simplified the infinite summation to something we know: the exponential of a diagonal matrix. </p>
|
378,966 | <p>$$A_t-A_{xx} = \sin(\pi x)$$
$$A(0,t)=A(1,t)=0$$
$$A(x,t=0)=0$$
Find $A$.</p>
<p>I know I need to find the homogeneous and particular solutions. Im just not sure on this PDE.</p>
| Kaster | 49,333 | <p>You have to guess a particular solution first.
$$
A^p = B\sin \pi x \\
-A^p_{xx} = B\pi^2\sin \pi x = \sin \pi x \\
B = \frac 1{\pi^2}
$$
so
$$
A^p = \frac 1{\pi^2} \sin \pi x
$$
General solution of inhomogeneous problem is a sum of general solution of homogeneous problem and particular solution. So
$$
A = A^h + A^p
$$
It'll be much easier to solve the homogeneous problem instead. So all you need to do is alter the BCs as follows
$$
A(0,t) = A^h(0,t)+A^p(0,t) = A^h(0,t)+0 = \fbox{$A^h(0,t)=0$} \\
A(1,t) = A^h(1,t)+A^p(1,t) = A^h(1,t)+0 = \fbox{$A^h(1,t)=0$} \\
A(x,0) = A^h(x,0)+A^p(x,0) = A^h(x,0)+\frac 1{\pi^2}\sin \pi x = 0 \Leftrightarrow \fbox{$A^h(x,0) = -\frac 1{\pi^2} \sin \pi x$}
$$
So, now solve homogeneous heat equation with BCs provided above.</p>
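<p>For reference, carrying out that last step (my addition; a sketch via separation of variables): the homogeneous heat equation with these BCs has solutions <span class="math-container">$\sin(n\pi x)e^{-n^2\pi^2 t}$</span>, and the initial condition picks out <span class="math-container">$n=1$</span>:</p>

```latex
A^h(x,t) = -\frac{1}{\pi^2}\,\sin(\pi x)\,e^{-\pi^2 t},
\qquad
A(x,t) = A^h + A^p = \frac{1}{\pi^2}\left(1-e^{-\pi^2 t}\right)\sin(\pi x),
% which indeed satisfies A_t - A_xx = sin(pi x), A(0,t) = A(1,t) = 0,
% and A(x, 0) = 0.
```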
|
2,706,776 | <p>In solving the wave equation
$$u_{tt} - c^2 u_{xx} = 0$$
it is commonly 'factored'</p>
<p>$$u_{tt} - c^2 u_{xx} =
\bigg( \frac{\partial }{\partial t} - c \frac{\partial }{\partial x} \bigg)
\bigg( \frac{\partial }{\partial t} + c \frac{\partial }{\partial x} \bigg)
u = 0$$</p>
<p>to get
$$u(x,t) = f(x+ct) + g(x-ct).$$</p>
<p><strong>My question is: is this legitimate?</strong></p>
<p>The partial differentiation operators are not variables, but here in 'factoring' they are treated as such.</p>
<p>Also it does not seem that both factors can individually be set to zero to obtain the solution--either one or the other, or both might be zero.</p>
| knzhou | 247,403 | <p>Yes, this logic is totally legitimate. In the language of linear algebra, we have two operators $A$ and $B$ and the wave equation is
$$AB \psi = 0, \quad A = \partial_t - c \partial_x, \quad B = \partial_t + c \partial_x.$$
Since $A$ and $B$ commute, they may be simultaneously diagonalized, i.e. there is a basis $\psi_i$ where
$$A \psi_i = \lambda_i \psi_i, \quad B \psi_i = \lambda'_i \psi_i$$
so the wave equation becomes
$$AB \psi_i = \lambda_i \lambda'_i \psi_i = 0.$$
This is satisfied for $\lambda_i = 0$ or $\lambda'_i = 0$, so the set of solutions to the wave equation is just the union of the nullspaces of $A$ and $B$. </p>
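<p>One can also spot-check the d'Alembert form numerically (my addition; <span class="math-container">$f$</span>, <span class="math-container">$g$</span>, and <span class="math-container">$c$</span> below are arbitrary sample choices):</p>

```python
# u(x, t) = f(x + c t) + g(x - c t) should satisfy u_tt = c^2 u_xx;
# here f = sin, g = cube, c = 2, checked by central differences.
import math

c = 2.0
u = lambda x, t: math.sin(x + c * t) + (x - c * t) ** 3

x0, t0, h = 0.3, 0.7, 1e-4
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
res = u_tt - c**2 * u_xx
print(abs(res))  # close to 0
```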
|
2,416,446 | <p>I need to prove the following inequality: </p>
<blockquote>
<p>$$\sum_{n=m+1}^\infty \frac{1}{n^2}\leq \frac1m$$</p>
</blockquote>
<p>But I'm stuck with it. I found online geometric justifications for this but I'd really appreciate to see actual proof. Any hints? </p>
| Kenny Lau | 328,173 | <p>$$\begin{array}{rcl}
\displaystyle \sum_{n=m+1}^\infty \frac{1}{n^2}
&\le& \displaystyle \sum_{n=m+1}^\infty \frac{1}{(n-1)n} \\
&=& \displaystyle \sum_{n=m+1}^\infty \left(\frac{1}{n-1} - \frac1n \right) \\
&=& \displaystyle \left(\frac{1}m - \frac1{m+1} \right) + \left(\frac{1}{m+1} - \frac1{m+2} \right) + \cdots \\
&=& \dfrac1m
\end{array}$$</p>
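<p>A numerical check of the bound (my addition, with the sample value <span class="math-container">$m=5$</span> and a large cutoff):</p>

```python
# Partial tail sums of 1/n^2 from n = m+1 stay below 1/m.
m, N = 5, 100_000
tail = sum(1.0 / (n * n) for n in range(m + 1, N + 1))
print(tail, "<", 1 / m)  # ~0.1813 < 0.2
```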
|
2,416,446 | <p>I need to prove the following inequality: </p>
<blockquote>
<p>$$\sum_{n=m+1}^\infty \frac{1}{n^2}\leq \frac1m$$</p>
</blockquote>
<p>But I'm stuck with it. I found online geometric justifications for this but I'd really appreciate to see actual proof. Any hints? </p>
| Bernard | 202,857 | <p><strong>Hint:</strong></p>
<p>For each $n$ you have
$$\frac1{n^2}\le\frac1{n(n-1)}=\frac1{n-1}-\frac1n.$$</p>
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| Some Guy | 730,299 | <p>(<a href="https://artofproblemsolving.com" rel="nofollow noreferrer">https://artofproblemsolving.com</a>) is a very good math site. It has Art of Problem Solving books and it also has a lot of math games to practice your math skills. Alcumus is the best game on there (<a href="https://artofproblemsolving.com/alcumus" rel="nofollow noreferrer">https://artofproblemsolving.com/alcumus</a>)</p>
|
3,234,217 | <p>Let <span class="math-container">$a,b,c \in \mathbb{R},$</span> <span class="math-container">$\vec{v_1}=\begin{pmatrix}1\\4\\1\\-2 \end{pmatrix},$</span> <span class="math-container">$\vec{v_2}=\begin{pmatrix}-1\\a\\b\\2 \end{pmatrix},$</span> and <span class="math-container">$\vec{v_3}=\begin{pmatrix}1\\1\\1\\c \end{pmatrix}.$</span> What are the conditions on the numbers <span class="math-container">$a,b,c$</span> so that the three vectors are linearly dependent in <span class="math-container">$\mathbb{R}^4$</span>? I know that the usual method of solving this is to show that there exist scalars <span class="math-container">$x_1,x_2,x_3$</span> not all zero such that
<span class="math-container">\begin{align}
x_1\vec{v_1}+x_2\vec{v_2}+x_3\vec{v_3}=\vec{0}.
\end{align}</span></p>
<p>Doing this would naturally lead us to the augmented matrix
<span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
1& b & 1 &0\\
-2 & 2 & c &0\\
\end{pmatrix}</span></p>
<p>Doing some row reduction would lead us to the matrix</p>
<p><span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}</span>
I'm not quite sure how to proceed after this. Do I take cases on when whether <span class="math-container">$b+1$</span> or <span class="math-container">$c+2$</span> are zero and nonzero? </p>
| AD - Stop Putin - | 1,154 | <p>Your start is fine. </p>
<p>Consider in the equation <span class="math-container">$x_1v_1+x_2v_2 + x_3v_3 =0$</span>. </p>
<p>We have two choices:</p>
<ol>
<li><span class="math-container">$x_3=0$</span>.</li>
</ol>
<p>(Can you see the similarity between <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span>?)
<span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span> will spann a one dimensional space if and only if <span class="math-container">$a=-4$</span> and <span class="math-container">$b= -1$</span>. That is the vectors will be linear dependent for any choice of <span class="math-container">$c$</span>.</p>
<ol start="2">
<li><span class="math-container">$x_3\ne0$</span>. </li>
</ol>
<p>In this case the equation is equivalent to
<span class="math-container">$xv_1 + yv_2=v_3$</span>.</p>
<p>(I leave the fun part of solving that to you. )</p>
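<p>(Not part of the argument above, but if you want to sanity-check candidate conditions numerically, a small numpy sketch does the job; the sample triples below are my own choices for illustration.)</p>

```python
import numpy as np

def rank_of(a, b, c):
    """Rank of the 4x3 matrix whose columns are v1, v2, v3."""
    M = np.array([[ 1, -1, 1],
                  [ 4,  a, 1],
                  [ 1,  b, 1],
                  [-2,  2, c]], dtype=float)
    return np.linalg.matrix_rank(M)

# v2 = -v1 when a = -4, b = -1 (any c): dependent.
print(rank_of(-4, -1, 5))   # 2
# v3 = (1/4)v1 - (3/4)v2 when b = -1, c = -2: dependent.
print(rank_of(0, -1, -2))   # 2
# A generic triple: independent.
print(rank_of(0, 0, 0))     # 3
```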
|
147,095 | <p>I was wondering if there is any stationary distribution for bipartite graph? Can we apply random walks on bipartite graph? since we know the stationary distribution can be found from Markov chain, but we have two different islands in bipartite graph and connections occur between nodes from different groups. </p>
| JP McCarthy | 35,482 | <p>$\newcommand{\Raw}{\Rightarrow}\newcommand{\raw}{\rightarrow}\newcommand{\N}{\mathbb{N}}$
I just want to echo a few of the other answers and add one point to RW's: I understand that the graph is bipartite if and only if $-1$ is an eigenvalue of the stochastic operator. There is a bit more info than the OP asked for but I had it handy so said I would add it.</p>
<p>A central question, for a given random walk, is whether or not the
$\xi_k$ display limiting behaviour as $k\raw\infty$. If
`$\xi_\infty$' exists, what is its distribution? Here $\xi_k$ is the position of the walk after $k$ steps. </p>
<p>One possible obstruction to the existence of a limit is periodicity. Consider a Markov chain $\xi$ on a set $X=X_0\cup X_1$ with $X_0\cap X_1=\emptyset$ and neither of the $X_i=\emptyset$ for $i=0,1$. Suppose $\xi$ has the property that $\xi_{2k+i}\in X_i$, for $k\in\N_0$, $i=0,1$. Then `$\xi_\infty$' cannot exist in the obvious way. In a certain sense $\xi$ must be <em>aperiodic</em> for limiting behaviour to exist. This is of course the case for bipartite graphs.</p>
<p>Suppose $\xi$ is a Markov chain and the limit $\nu P^n\raw \theta$ exists. Loosely speaking, after a long time $N$, $\xi_N$ has distribution $\mu(\xi_N)\sim \theta$:
$$\begin{align}
\nu P^N\sim \theta
\\\Raw \nu P^{N}P\sim\theta P
\\\Raw \nu P^{N+1}\sim \theta P
\end{align}$$
But $\nu P^{N+1}\sim \theta$ also and hence $\theta P\sim \theta$. So if '$\xi_\infty$' exists then its distribution $\theta$ may have the property $\theta P=\theta$. Such a distribution is said to be a <em>stationary distribution for $P$</em>. </p>
<p>Relaxing the supposition on `$\xi_\infty$' existing, do stationary distributions exist? Clearly they are left eigenvectors of eigenvalue 1 that have positive entries summing to 1.</p>
<p>If $k(x)\in F(X)$ is any constant function then $Pk=k$ so $k$ is a right eigenfunction of eigenvalue 1. Let $u$ be a left eigenvector of eigenvalue 1. By the triangle inequality, $|u(x)|=|\sum_yu(y)p(y,x)|\leq\sum_y|u(y)|p(y,x)$. Now
$$\sum_{z\in X}|u(z)|\leq \sum_{z\in X}\left(\sum_{y\in X}|u(y)|p(y,z)\right)=\sum_{y\in X}|u(y)|\underbrace{\left(\sum_{z\in X}p(y,z)\right)}_{=1}=\sum_{y\in X}|u(y)|$$
Hence the inequality is an equality so $\sum_z\left(\sum_y|u(y)|p(y,z)-|u(z)|\right)=0$ is a sum of non-negative terms. Hence $|u|P=|u|$, and by a scaling, $\pi(x):=|u(x)|/\sum_y |u(y)|$, is a stationary distribution.</p>
<p>Therefore a left eigenvector of eigenvalue 1 exists and if it is positive and of weight 1 it is a stationary distribution.</p>
<p>How many stationary distributions exist? Consider Markov Chains $\xi$ and $\zeta$ on disjoint finite sets $X$ and $Y$, with stochastic operators $P$ and $Q$. The block matrix
$$ R=\left(\begin{array}{cc}P & 0\\0 & Q\end{array}\right)$$
is a stochastic operator on $X\cup Y$. If $\pi$ and $\theta$ are stationary distributions for $P$ and $Q$ then
$$\phi_c=(c\pi,(1-c)\theta)\,,\,\,\,\,c\in[0,1]$$
is an infinite family of stationary distributions for $R$. The dynamics of this walk are that if the particle is in $X$ it stays in $X$, and vice versa for $Y$ (the graph of $R$ has two disconnected components). This example shows that, in general, the stationary distribution need not be unique. Rosenthal shows that a sufficient condition for uniqueness is that the Markov chain $\xi$ has the property that every point is accessible from any other point; i.e. for all $\,x,y\in X$, there exists $r(x,y)\in\N$ such that $p^{(r(x,y))}(x,y)>0$. A Markov chain satisfying this property is said to be <em>irreducible</em>.</p>
<p>So for the existence of a unique, stationary distribution it may be sufficient that the Markov chain is both aperiodic and irreducible. Call a stochastic operator $P$ <em>ergodic</em> if there exists $n_0\in \N$ such that
$$p^{(n_0)}(x,y)>0\,,\,\,\forall x,y\in X$$
In fact, ergodicity is equivalent to aperiodicity plus irreducibility, and the following theorem asserts that it is both a necessary and sufficient condition for the existence of a strict distribution for '$\xi_\infty$'. These preceding remarks suggest the distribution of `$\xi_\infty$' is in fact stationary and unique, and indeed this will be seen to be the case. A nice, non-standard proof of this well-known theorem is to be found in F. Ceccherini-Silberstein, T. Scarabotti and F. Tolli. Harmonic Analysis on
Finite Groups. Cambridge University Press, New York, 2008, which also discusses bipartite graphs specifically.</p>
<p><strong>Markov Ergodic Theorem</strong></p>
<blockquote>
<p><em>A stochastic operator $P$ is ergodic if and only if there exists a strict $\pi\in M_p(X)$ such that</em> $$\lim_{n\raw\infty} p^{(n)}(x,y)=\pi (y)\,,\,\,\forall x,\,y\in X$$ In this case $\pi$ is
the unique stationary distribution for $P$.</p>
</blockquote>
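<p>(A concrete illustration of the bipartite obstruction, using my own small example of the simple random walk on the 4-cycle; this is not part of the theorem above.)</p>

```python
import numpy as np

# Simple random walk on the 4-cycle, which is bipartite:
# {0, 2} is one class of the bipartition, {1, 3} the other.
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])

eigs = np.sort(np.linalg.eigvals(P).real)
print(eigs)   # [-1.  0.  0.  1.]  -- the eigenvalue -1 reflects bipartiteness

# Powers of P do not converge: started at vertex 0, the walk sits on the
# even class at even times and on the odd class at odd times.
mu0 = np.array([1.0, 0.0, 0.0, 0.0])
print(mu0 @ np.linalg.matrix_power(P, 10))   # mass only on {0, 2}
print(mu0 @ np.linalg.matrix_power(P, 11))   # mass only on {1, 3}

# The lazy walk (I + P)/2 is aperiodic (hence ergodic) with the same
# stationary distribution, and its powers do converge to it.
L = (np.eye(4) + P) / 2
print(mu0 @ np.linalg.matrix_power(L, 100))  # ~ [0.25 0.25 0.25 0.25]
```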
|
3,389,659 | <p>If <span class="math-container">$S_n=\{a_1,a_2,a_3,\ldots,a_{2n}\}$</span>, where <span class="math-container">$a_1,a_2,a_3,\ldots,a_{2n}$</span> are all distinct integers, denote by <span class="math-container">$T$</span> the product
<span class="math-container">$$T=\prod_{1\le i<j\le 2n}(a_i-a_j)$$</span> Prove that <span class="math-container">$2^{(n^2-n+2[\frac{n}{3}])}\times \text{lcm}(1,3,5,...,(2n-1)) $</span> divides <span class="math-container">$T$</span> (where <span class="math-container">$[\,\cdot\,]$</span> is the floor function).</p>
<p>I have tried many approaches. I tried using the fact that either the number of odd or the number of even integers is <span class="math-container">$\ge n+1$</span>, and that in <span class="math-container">$T$</span> every integer is paired with every other integer exactly once. Since even-even and odd-odd differences are both even, <span class="math-container">$2^{\frac{n(n+1)}{2}}$</span> divides <span class="math-container">$T$</span>. As for the <span class="math-container">$\operatorname{lcm}(1,3,5,\ldots,(2n-1))$</span> I have no idea what to do. Please help; I am quite new to NT. Thank you.</p>
| antkam | 546,005 | <p>HINT:</p>
<p>You're not quite correct for the powers of <span class="math-container">$2$</span>. There are <span class="math-container">$2n$</span> numbers total, so there can in fact be exactly <span class="math-container">$n$</span> odds and <span class="math-container">$n$</span> evens (i.e. neither has to be <span class="math-container">$\ge n+1$</span>.)</p>
<p>Instead: Let there be <span class="math-container">$k$</span> odds, and <span class="math-container">$2n-k$</span> evens. This gives you <span class="math-container">${k \choose 2}$</span> factors of <span class="math-container">$2$</span> from the (odd - odd) terms and <span class="math-container">${2n-k \choose 2}$</span> factors of <span class="math-container">$2$</span> from the (even - even) terms.</p>
<ul>
<li><p>First you can show that their sum is minimized at <span class="math-container">$k=n$</span> and that sum is <span class="math-container">$n(n-1) = n^2 - n$</span>.</p></li>
<li><p>So now you need another <span class="math-container">$2[{n \over 3}]$</span> factors of <span class="math-container">$2$</span>. These come from splitting evens further into <span class="math-container">$4k$</span> vs <span class="math-container">$4k+2$</span>, and the odds into <span class="math-container">$4k+1$</span> vs <span class="math-container">$4k+3$</span>, because some of the differences will provide a factor of <span class="math-container">$4$</span>, i.e. an <em>additional</em> factor of <span class="math-container">$2$</span> in addition to the factor of <span class="math-container">$2$</span> you already counted.</p>
<ul>
<li>Proof sketch: e.g. suppose there are <span class="math-container">$n$</span> evens, and for simplicity of example let's say <span class="math-container">$n$</span> is itself even. In the worst case split, exactly <span class="math-container">$n/2$</span> will be multiples of <span class="math-container">$4$</span>, which gives another <span class="math-container">$\frac12 {\frac n 2}({\frac n 2}-1)$</span> factors of <span class="math-container">$2$</span> from these numbers of form <span class="math-container">$4k$</span>, and a similar thing happens with the <span class="math-container">$4k+1$</span>'s, the <span class="math-container">$4k+2$</span>'s, and the <span class="math-container">$4k+3$</span>'s. The funny thing is that if you add up everything, <span class="math-container">$n({\frac n 2}-1) \ge 2[{\frac n 3}]$</span> (you can try it, and you will need to prove it). In other words <span class="math-container">$2[{\frac n 3}]$</span> is not a tight bound at all, but rather a short-hand that whoever wrote the question settled on just to make you think about the splits into <span class="math-container">$4k + j$</span>. In fact a really tight bound would involve thinking about splits into <span class="math-container">$8k+j, 16k+j$</span> etc. </li>
</ul></li>
<li><p>As for <span class="math-container">$lcm(1, 3, 5, \dots, 2n-1)$</span>, first note that the <span class="math-container">$lcm$</span> dividing <span class="math-container">$T$</span> just means that each of the odd numbers divides <span class="math-container">$T$</span>. You can prove this by the pigeonhole principle. </p>
<ul>
<li>Further explanation: E.g. consider <span class="math-container">$lcm(3, 9) = 9$</span>, so this lcm divides <span class="math-container">$T$</span> if both odd numbers divide <span class="math-container">$T$</span>. In general, the lcm can be factorized into <span class="math-container">$3^{n_3} 5^{n_5} 7^{n_7} 11^{n_{11}} \dots$</span> and it divides <span class="math-container">$T$</span> if every term <span class="math-container">$p^{n_p}$</span> divides <span class="math-container">$T$</span>. but in your case <span class="math-container">$p^{n_p}$</span> itself must be one of the numbers in the list <span class="math-container">$(1, 3, 5, \dots, 2n-1)$</span> or else the lcm wouldn't contain that many factors of <span class="math-container">$p$</span>.</li>
</ul></li>
</ul>
<p>Can you finish from here?</p>
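<p>(Not part of the proof, but a brute-force spot-check of the claimed divisor on random sets is easy and reassuring; note that <code>math.lcm</code> and <code>math.prod</code> need Python 3.9+.)</p>

```python
import math
import random
from itertools import combinations

def divisor_holds(n, trials=20):
    """Check 2^(n^2 - n + 2*floor(n/3)) * lcm(1,3,...,2n-1) | T
    for random sets of 2n distinct integers."""
    L = math.lcm(*range(1, 2 * n, 2))          # lcm(1, 3, ..., 2n-1)
    d = 2 ** (n * n - n + 2 * (n // 3)) * L
    for _ in range(trials):
        a = random.sample(range(-50, 50), 2 * n)
        T = math.prod(x - y for x, y in combinations(a, 2))
        if T % d != 0:
            return False
    return True

random.seed(0)
print(all(divisor_holds(n) for n in (1, 2, 3, 4)))   # True
```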
|
986,754 | <p>So I'm kind of stuck on this question and I don't exactly know how to describe this on the title header and I apologize... </p>
<blockquote>
<p>For some values of $x$, the assignment statement $y := 1-\cos(x)$ involves a difficulty. What is the difficulty? What values of $x$ are involved? What remedy do you propose to resolve this difficulty?</p>
</blockquote>
<p>I know that this question does seem bleak and looks confusing, but any help I get would be appreciated! Thank you!</p>
| lhf | 589 | <p>The point is that when $x$ is small, $\cos(x) \approx 1$ and so you can expect loss of precision in $y$.</p>
<p>One remedy is this:</p>
<blockquote class="spoiler">
<p> $$1-\cos(x) = (1-\cos(x)) \dfrac{1+\cos(x)}{1+\cos(x)} = \dfrac{1-\cos^2(x)}{1+\cos(x)}=\dfrac{\sin^2(x)}{1+\cos(x)}$$<br>
For $x$ really small, you may as well take $\cos(x) \approx 1 - \frac{x^2}{2}$, which gives $1-\cos(x) \approx \frac{x^2}{2}$; this is consistent with the expression above, since $\sin(x)\approx x$ and $\cos(x)\approx 1$, but is much simpler.</p>
</blockquote>
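<p>(A numerical illustration of both the problem and the remedy; the helper names are mine.)</p>

```python
import math

def one_minus_cos_naive(x):
    # Subtracts two nearly equal numbers: catastrophic cancellation.
    return 1.0 - math.cos(x)

def one_minus_cos_stable(x):
    # 1 - cos(x) = sin(x)^2 / (1 + cos(x)): no subtraction, no cancellation.
    return math.sin(x) ** 2 / (1.0 + math.cos(x))

x = 1e-8
print(one_minus_cos_naive(x))    # essentially all digits lost (typically 0.0)
print(one_minus_cos_stable(x))   # close to x**2 / 2 = 5e-17
```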
|
413,108 | <p>Given a commutative ring <span class="math-container">$ R $</span> and a multiplicatively closed subset <span class="math-container">$ S $</span> of <span class="math-container">$ R $</span>, there are two ways to consturct <span class="math-container">$ S^{-1}R $</span>:</p>
<ol>
<li><p>define an equivalence relation <span class="math-container">$ \sim $</span> on <span class="math-container">$ R\times S $</span> and then take <span class="math-container">$ S^{-1}R := (R\times S)/\sim $</span>.</p>
</li>
<li><p>Let <span class="math-container">$ R[S] $</span> be the free commutative algebra over <span class="math-container">$ R $</span> generated by <span class="math-container">$ S $</span> and <span class="math-container">$ i:S\to R[S] $</span> the canonical embedding and <span class="math-container">$ I $</span> the ideal of <span class="math-container">$ R[S] $</span> generated by <span class="math-container">$ i(s)s-1 $</span> for each <span class="math-container">$ s $</span> in <span class="math-container">$ S $</span> and then take <span class="math-container">$ S^{-1}R := R[S]/I $</span>.</p>
</li>
</ol>
<p>Suppose <span class="math-container">$ S $</span> doesn't contain zero divisors. If one works in the first way, it's easy to prove that the canonical map <span class="math-container">$ j:R\to S^{-1}R $</span> is injective. But if one works in the second way, then this is equivalent to stating that <span class="math-container">$ I \cap R = \{ 0 \}$</span>. Can we directly prove this without considering <span class="math-container">$ (R\times S)/\sim $</span> ?</p>
| Johannes Huisman | 85,592 | <p>Maybe these are the arguments you're looking for: First reduce to the case of <span class="math-container">$S$</span> being finitely generated as a multiplicative subset of <span class="math-container">$R$</span>. Next, reduce to the case where <span class="math-container">$S$</span> is generated by a single element <span class="math-container">$s$</span>. The morphism
<span class="math-container">$$
R\rightarrow R[T]/(sT-1)
$$</span>
is injective for <span class="math-container">$s$</span> a nonzero divisor. As you said, it suffices to prove that
<span class="math-container">$$
(sT-1)\cap R=\{0\}.
$$</span>
Since <span class="math-container">$s$</span> is a nonzero divisor,
<span class="math-container">$$
\deg((sT-1)F)=\deg(sT-1)+\deg(F)= 1+\deg(F)
$$</span>
for all <span class="math-container">$F\in R[T]$</span>. It follows that <span class="math-container">$(sT-1)F\in R$</span> only if <span class="math-container">$\deg(F)=-\infty$</span>, i.e., <span class="math-container">$F=0$</span>.</p>
|
4,246,719 | <p>Consider two random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, both distributed as a <a href="https://en.wikipedia.org/wiki/Gumbel_distribution" rel="nofollow noreferrer">Gumbel</a> with location 0 and scale 1.</p>
<p>Let <span class="math-container">$Z\equiv X-Y$</span>.</p>
<p>We know that if the two variables are <strong>independent</strong>, then <span class="math-container">$Z$</span> is Logistic with location 0 and scale 1. Hence,</p>
<p><span class="math-container">$$
\Pr(Z\leq z)=\frac{1}{1+\exp(-z)}
$$</span></p>
<p>Suppose now that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are <strong>correlated</strong> with correlation parameter <span class="math-container">$\rho$</span>. Can we still write down a closed form expression for <span class="math-container">$
\Pr(Z\leq z)$</span>?</p>
| Ben Grossmann | 81,360 | <p>What you have computed is really <span class="math-container">$\frac{\delta \operatorname{vec}(Y^TY)}{\delta \operatorname{vec}(Y)}$</span>; I will assume this is what you're really after. I will also assume that you are using the <a href="https://en.wikipedia.org/wiki/Row-_and_column-major_order" rel="nofollow noreferrer">column-major</a> vectorization operator. To correct your mistake, we have the following:
<span class="math-container">$$
\begin{align}
\delta\operatorname{vec}(Y^TY)
&=\operatorname{vec}(\delta Y^TY) + \operatorname{vec}(Y^T\delta Y)
\\
&= (Y^T \otimes I)\operatorname{vec}(\delta Y^T) + (I \otimes Y^T)\operatorname{vec}(\delta Y)
\\ &=
(Y^T \otimes I)K\operatorname{vec}(\delta Y) + (I \otimes Y^T )\operatorname{vec}(\delta Y)
\\ &=
[(Y^T \otimes I)K + (I \otimes Y^T)]\operatorname{vec}(\delta Y),
\end{align}
$$</span>
where <span class="math-container">$K$</span> is the <a href="https://en.wikipedia.org/wiki/Commutation_matrix" rel="nofollow noreferrer">commutation matrix</a> of the correct size. With that, we find that
<span class="math-container">$$
\frac{\delta \operatorname{vec}(Y^TY)}{\delta \operatorname{vec}(Y)}= (Y^T \otimes I)K + (I \otimes Y^T).
$$</span></p>
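<p>(A quick numerical check of this Jacobian; the shape <span class="math-container">$m\times n$</span> and the test matrices are arbitrary choices of mine. Since the differential is linear in <span class="math-container">$\delta Y$</span>, applying the Jacobian to <span class="math-container">$\operatorname{vec}(\delta Y)$</span> must reproduce <span class="math-container">$\operatorname{vec}(\delta Y^TY+Y^T\delta Y)$</span> up to roundoff.)</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
Y = rng.standard_normal((m, n))
dY = rng.standard_normal((m, n))

def vec(A):
    # Column-major vectorization.
    return A.reshape(-1, order="F")

# Commutation matrix K with K @ vec(A) = vec(A.T) for A of shape (m, n).
K = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        K[j + i * n, i + j * m] = 1.0

J = np.kron(Y.T, np.eye(n)) @ K + np.kron(np.eye(n), Y.T)

print(np.allclose(K @ vec(dY), vec(dY.T)))                   # True
print(np.allclose(J @ vec(dY), vec(dY.T @ Y + Y.T @ dY)))    # True
```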
|
15,237 | <p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
<p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra long time ago; I did poorly in the course and now I could hardly recall anything from this course. If I invest substantial among of time studying this subject well now, will it help me in my calculus course in any sense?</p>
| Nick C | 470 | <p><em>I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus.</em></p>
<p>I agree with this. I work with a few colleagues who came to teach at the community college after working in the high school, and their credential is a Master's of Science for Teachers (MST). They have never taken graduate (theory) courses in algebra, analysis, geometry, etc, but they are excellent and dedicated teachers/colleagues -- attributes I chalk up to their teaching experience. This experience speaks a lot to me about their potential, more so than a graduate degree.</p>
<p><em>If I invest substantial among of time studying this subject well now, will it help me in my calculus course in any sense?</em></p>
<p>In short -- no. While it is good to have a depth of knowledge in a topic so you can engage students when they ask something "out of the box" (like the <em>three</em> students asking me about fractional derivatives in the past two weeks alone), its use has its limits, and the law of diminishing returns applies. It is a reminder that a teacher is not there to simply have knowledge to spew forth, but to help students take on new information. I believe a "good teacher" should be able to take something just out of reach of their own understanding and still help a student learn it. [Of course, they should be willing to learn it alongside that student.]</p>
<p><em>...how does having a master's or doctorate degree help one's teaching in calculus, if any at all</em></p>
<p>Simply put, a graduate degree is just a minimum qualification adopted by institutions of higher ed for their faculty. It (and a good curriculum vitae) will get you in the door for an interview, and then you must show your skills as an educator. If this is hard to hear, consider the perennial student complaint "when am I <em>ever</em> going to use this". The answer for them is the same as for the math PhD who wants to teach beginning algebra: "Perhaps you won't."</p>
|
15,237 | <p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
<p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra long time ago; I did poorly in the course and now I could hardly recall anything from this course. If I invest substantial among of time studying this subject well now, will it help me in my calculus course in any sense?</p>
| Benoît Kloeckner | 187 | <p>An aspect that does not show up much in others' answers is: exercise & test design. To construct an exercise, it is very often quite useful to have more advanced knowledge, either to be able to choose the values correctly, or to know what will echo in later courses.</p>
<p>Some example:</p>
<ul>
<li>knowing finite fields helps understanding that the coincidence between equality of polynomials and equality of their associated polynomial functions is not obvious, and needs a proof (it does not make it so much easy to convey, though),</li>
<li>having learned quite some differential geometry, I know that constructing a smooth function that is zero on <span class="math-container">$(-\infty,0]$</span> and positive on <span class="math-container">$(0,+\infty)$</span> is not just for practice, but turns out to be very useful (much) later on,</li>
<li>when I give a family of vectors for students to decide whether it is free, and whether it is generating, I am happy to have a set of available tools (e.g. dimension theory) that helps me knowing some of the answers in advance, and checking theirs,</li>
<li>Knowing the partial fraction decomposition of a rational function helps designing exercises on sums, integrals, and systems of linear equation that will put these subject closer together.</li>
</ul>
<p>Additional examples on the already discussed fact that knowing more gives you better perspective (and warns you from some misconception, and help you answer student's questions):</p>
<ul>
<li>knowing Weierstrass continuous nowhere differentiable functions helps me understand better the difference between continuous and differentiable functions; so does knowing that generic continuous functions are nowhere differentiable,</li>
<li>knowing Taylor series helps understanding derivatives, in particular proving the formula <span class="math-container">$(fg)'=f'g+fg'$</span> is much more transparent¹ when one defines <span class="math-container">$f'(x)$</span> as the number (when it exists) such that <span class="math-container">$f(x+h)=f(x)+f'(x)h+o_{h\to 0}(h)$</span>,</li>
<li>knowing that the antiderivatives of <span class="math-container">$x\mapsto e^{x^2}$</span> cannot be expressed with usual functions helps warning students that finding explicit anti-derivatives (or solutions of differential equations) is not always possible, and helps explaining why we define new function names, and what math is really about.</li>
</ul>
<p>¹ <strong>Added in edit.</strong> I was asked by email to give more details on this point, as well doing it here. With the first-order Taylor series definition of the derivative, whenever <span class="math-container">$f,g$</span> are differentiable at <span class="math-container">$x$</span> we get
<span class="math-container">\begin{align*}
(fg)(x+h) &= \big(f(x) + f'(x) h +o(h)\big) \big(g(x)+g'(x)h+o(h)\big) \\
&= f(x)g(x) + \big(f'(x)g(x)+f(x)g'(x)\big)h + f'(x)g'(x)h^2 + o(h) \\
&= (fg)(x) + (f'g+fg')(x) h + o(h)
\end{align*}</span>
so that <span class="math-container">$fg$</span> is differentiable at <span class="math-container">$x$</span>, with the derivative as we know it. This is much more transparent than the usual trick of computing the ratio by adding and subtracting some quantity in order to factorize, as the cross products <span class="math-container">$f'g$</span> and <span class="math-container">$fg'$</span> appear naturally when expanding, and the product <span class="math-container">$f'g'$</span> that students might think should be the answer appears clearly, but as a second-order term. (Of course, beware that the higher-order Taylor formulas do not entail high differentiability, a misconception that this approach could unfortunately reinforce.)</p>
|
285,227 | <p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p>
<p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series in one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{k=0}^\infty d_n(x-a)^n$ then
$$
f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n
$$
with
$$
e_n = \sum_{m=0}^n c_md_{n-m}
$$</p>
| salaku | 388,854 | <p>$$
\begin{align*}
& \exp(x+y)=\sum_{n=0}^{\infty}\frac{1}{n!}(x+y)^n = \sum_{n=0}^{\infty}\frac{1}{n!}\sum_{k=0}^{n}\frac{n!}{(n-k)!k!}
x^ky^{n-k} \\
=& \sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{x^k}{k!}\frac{y^{n-k}}{(n-k)!} = \lim_{N \to \infty}\sum_{n=0}^{N}\sum_{k=0}^{n} \frac{x^k}{k!}\frac{y^{n-k}}{(n-k)!}\\
& \mathrm{\quad make \; a \; bijective \;map\; between} \left\{(n,k):0 \le n \le N,0 \le k\le n\right\} \\&\mathrm{\; and } \left\{(n,k):0 \le k \le N,k \le n\le N\right\} \\
=& \lim_{N \to \infty}\sum_{n=0}^{N}\sum_{k=0}^{n} \frac{x^k}{k!}\frac{y^{n-k}}{(n-k)!}= \lim_{N \to \infty}\sum_{k=0}^{N}\sum_{n=k}^{N}\frac{x^k}{k!}\frac{y^{n-k}}{(n-k)!} \\
= &\lim_{N \to \infty} \sum_{k=0}^{N}\frac{x^k}{k!}\sum_{n=k}^{N}\frac{y^{n-k}}{(n-k)!} = \lim_{N \to \infty} \sum_{k=0}^{N}\frac{x^k}{k!}\sum_{n=0}^{N-k}\frac{y^{n}}{n!} = \sum_{k=0}^{\infty}\frac{x^k}{k!}\sum_{n=0}^{\infty}\frac{y^{n}}{n!} \\
=&\sum_{k=0}^{\infty}\frac{x^k}{k!}\sum_{n=0}^{\infty}\frac{y^{n}}{n!}=\sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\frac{x^k}{k!}\frac{y^{n}}{n!} = \exp(x)\exp(y)
\end{align*}
$$</p>
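<p>(Not needed for the proof, but the key combinatorial step, namely that the Cauchy coefficient <span class="math-container">$e_n=\sum_{k=0}^n \frac{x^k}{k!}\frac{y^{n-k}}{(n-k)!}$</span> equals <span class="math-container">$\frac{(x+y)^n}{n!}$</span>, is easy to confirm in exact rational arithmetic.)</p>

```python
from fractions import Fraction
from math import factorial

def cauchy_coeff(n, x, y):
    """e_n = sum_k x^k/k! * y^(n-k)/(n-k)! as an exact rational."""
    return sum(Fraction(x ** k, factorial(k)) * Fraction(y ** (n - k), factorial(n - k))
               for k in range(n + 1))

x, y = 3, -5
print(all(cauchy_coeff(n, x, y) == Fraction((x + y) ** n, factorial(n))
          for n in range(12)))   # True, by the binomial theorem
```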
|
599,602 | <p>Please help with this calculus question. I'm asked to solve
$$(1+y^2) \,\mathrm{d}x = (\tan^{-1}y - x)\,\mathrm{d}y.$$</p>
| mainak | 114,270 | <p>We can write the given equation as
$$\frac{\mathrm{d}x}{\mathrm{d}y} = \frac{\tan^{-1}y - x}{1 + y^{2}},
\qquad\text{i.e.}\qquad
\frac{\mathrm{d}x}{\mathrm{d}y} + \frac{x}{1 + y^{2}} = \frac{\tan^{-1}y}{1 + y^{2}}.$$
This is a linear differential equation in $x$, so the integrating factor is
$$\exp\left(\int \frac{\mathrm{d}y}{1 + y^{2}}\right) = e^{\tan^{-1}y}.$$
Hence we get the solution as
$$x\,e^{\tan^{-1}y} = \int e^{\tan^{-1}y}\,\frac{\tan^{-1}y}{1 + y^{2}}\,\mathrm{d}y + C
= e^{\tan^{-1}y}\left(\tan^{-1}y - 1\right) + C$$
(substitute $t=\tan^{-1}y$ and integrate $\int t e^{t}\,\mathrm{d}t$ by parts), that is,
$$\left(x - \tan^{-1}y + 1\right)e^{\tan^{-1}y} = C,$$
which is the required solution.</p>
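<p>(One can verify the integrated solution with sympy: solving the final relation for <span class="math-container">$x$</span> gives <span class="math-container">$x=\tan^{-1}y-1+Ce^{-\tan^{-1}y}$</span>, equivalently <span class="math-container">$(x-\tan^{-1}y+1)e^{\tan^{-1}y}=C$</span>; note the minus sign in front of <span class="math-container">$\tan^{-1}y$</span>. Substituting back into the original equation:)</p>

```python
import sympy as sp

y, C = sp.symbols("y C")

# Candidate general solution x(y).
x = sp.atan(y) - 1 + C * sp.exp(-sp.atan(y))

# The ODE reads (1 + y^2) dx/dy = atan(y) - x.
residual = (1 + y**2) * sp.diff(x, y) - (sp.atan(y) - x)
print(sp.simplify(residual))   # 0
```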
|
141,346 | <p>I am not sure where one looks up this type of fact. Google was not very helpful.</p>
| Daniel Schäppi | 1,649 | <p>Yes, any Grothendieck abelian category has enough injectives. I believe this goes back to Grothendieck's Tohoku paper. The category in question is Grothendieck abelian since it is equivalent to a category of additive presheaves. The domain category has objects the integers, and morphisms generated by $d_n: n \rightarrow n+1$ (or the other way around depending on your grading conventions), subject to the relations $d_{n+1} \circ d_n=0$.</p>
|
1,850,069 | <p>Let the incircle (with center $I$) of $\triangle{ABC}$ touch the side $BC$ at $X$, and let $A'$ be the midpoint of this side. Then prove that line $A'I$ (extended) bisects $AX$.<a href="https://i.stack.imgur.com/pd7Di.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pd7Di.png" alt="enter image description here"></a></p>
| Jack D'Aurizio | 44,121 | <p>That is an almost trivial exercise in <a href="http://mathworld.wolfram.com/BarycentricCoordinates.html" rel="nofollow noreferrer">barycentric coordinates</a>: you just have to check that
<span class="math-container">$$ \det\left(\frac{A+X}{2};I;A'\right) = 0 \tag{1} $$</span>
given <span class="math-container">$I=\frac{aA+bB+cC}{a+b+c}$</span>, <span class="math-container">$A'=\frac{B+C}{2}$</span> and <span class="math-container">$X=\frac{(p-c)B+(p-b)C}{a}$</span>, but:</p>
<p><span class="math-container">$$ (a+b+c) I = aA+bB+cC = a(X+A) + (p-a)(B+C).\tag{2}$$</span></p>
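<p>(The determinant identity <span class="math-container">$(1)$</span> can also be checked symbolically; below is a sympy sketch of mine with the points written in normalized barycentric coordinates.)</p>

```python
import sympy as sp

a, b, c = sp.symbols("a b c", positive=True)
p = (a + b + c) / 2   # semiperimeter

A  = sp.Matrix([[1, 0, 0]])
B  = sp.Matrix([[0, 1, 0]])
C_ = sp.Matrix([[0, 0, 1]])

I  = (a * A + b * B + c * C_) / (a + b + c)   # incenter
Ap = (B + C_) / 2                             # midpoint of BC
X  = ((p - c) * B + (p - b) * C_) / a         # touch point of the incircle

M = sp.Matrix.vstack((A + X) / 2, I, Ap)
print(sp.simplify(M.det()))   # 0: the midpoint of AX, I and A' are collinear
```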
|
3,080,005 | <p>So, I was trying to understand the "Group action" theory. I read the definition of <span class="math-container">$Stab_G$</span> and I was trying to solve some basic questions.</p>
<p>I came across with the following question: </p>
<blockquote>
<p>Let <span class="math-container">$S_7$</span> be a group that acts on itself by <span class="math-container">$x\cdot y = xyx^{-1}$</span>. Calculate <span class="math-container">$|Stab_{S_7}((1 \ 2 \ 3)(4 \ 5 \ 6))|$</span>.</p>
</blockquote>
<p>Firstly, I don't understand what "acts on itself by <span class="math-container">$x\cdot y = xyx^{-1}$</span>" means. Secondly, I would like to see how to calculate it formally so I can work out the other parts of the question.</p>
| user289143 | 289,143 | <p>A group action on a set <span class="math-container">$X$</span> is defined as a map <span class="math-container">$\varphi : G \times X \rightarrow X$</span> such that <span class="math-container">$g \cdot x=\varphi (g,x)$</span>. In this case <span class="math-container">$X=G$</span> and <span class="math-container">$\varphi (x,y)=xyx^{-1}$</span> <br>
Now <span class="math-container">$$|Stab_{S_7}((1 \ 2 \ 3)(4 \ 5 \ 6))|=\frac{|G|}{|G((123)(456))|}$$</span>
where <span class="math-container">$G((123)(456))$</span> is the orbit of <span class="math-container">$(123)(456)$</span> under the action of <span class="math-container">$G$</span> <br>
The orbit of an element in <span class="math-container">$S_7$</span> is determined by its cycle type: in this case it is <span class="math-container">$(3,3,1)$</span>, so <span class="math-container">$|G((123)(456))|$</span> is the number of <span class="math-container">$(3,3,1)$</span> cycles in <span class="math-container">$S_7$</span>. The order of its conjugacy class is given by the formula in the link: <a href="https://groupprops.subwiki.org/wiki/Conjugacy_class_size_formula_in_symmetric_group" rel="nofollow noreferrer">https://groupprops.subwiki.org/wiki/Conjugacy_class_size_formula_in_symmetric_group</a> <br>
In our case <span class="math-container">$$|G((123)(456))|=\frac{7!}{3^2 2!}=280$$</span>
and the order of the stabilizer is <span class="math-container">$7! \cdot \frac{3^2\, 2!}{7!}=18$</span></p>
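<p>(The count can be confirmed by brute force over all of <span class="math-container">$S_7$</span>; a small Python sketch of mine, writing permutations 0-indexed as tuples:)</p>

```python
from itertools import permutations

# sigma = (1 2 3)(4 5 6), written 0-indexed: 0->1->2->0 and 3->4->5->3.
sigma = (1, 2, 0, 4, 5, 3, 6)

def conjugate(g, s):
    """g s g^{-1}, with tuples acting as functions i -> t[i]."""
    g_inv = [0] * len(g)
    for i, gi in enumerate(g):
        g_inv[gi] = i
    return tuple(g[s[g_inv[i]]] for i in range(len(g)))

stab_size = sum(conjugate(g, sigma) == sigma for g in permutations(range(7)))
print(stab_size)   # 18
```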
|
342,064 | <p>We know that if an estimator, say $\widehat{\theta}$, is an unbiased estimator of $\theta$ and if its variance tends to 0 as n tends to infinity then it is a consistent estimator for $\theta$. But this is a sufficient and not a necessary condition. I am looking for an example of an estimator which is consistent but whose variance does not tend to 0 as n tends to infinity. Any suggestions? Thank you in advance for your time.</p>
| gerw | 58,577 | <p>Just take an estimator $\hat\theta_n$ which has just two values:
\begin{align*}P(\hat\theta_n = \theta) &= p_n\\P(\hat\theta_n = \delta_n) &= 1-p_n\end{align*}
Now, choose the sequences $\delta_n$ and $p_n$ appropriately and you get a consistent estimator ($p_n \to 1$) whose variance does not converge to zero (take $\delta_n \to \infty$ fast enough relative to $1-p_n$).</p>
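<p>(A concrete instance, with my choice <span class="math-container">$P(\hat\theta_n=\theta)=1-\frac1n$</span>, <span class="math-container">$\delta_n = n$</span> and <span class="math-container">$\theta = 0$</span>: the miss probability <span class="math-container">$\frac1n\to0$</span> gives consistency, while the variance equals <span class="math-container">$n-1\to\infty$</span>.)</p>

```python
def miss_and_var(n, theta=0.0):
    """Miss probability and variance of the two-point estimator with
    P(theta_hat = theta) = 1 - 1/n and P(theta_hat = n) = 1/n."""
    p_n = 1 - 1 / n
    delta_n = float(n)
    miss = 1 - p_n                       # P(theta_hat != theta) -> 0
    mean = p_n * theta + (1 - p_n) * delta_n
    var = p_n * theta ** 2 + (1 - p_n) * delta_n ** 2 - mean ** 2   # = n - 1
    return miss, var

for n in (10, 100, 1000):
    print(n, miss_and_var(n))   # miss shrinks, variance grows
```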
|
4,374,307 | <p>Problem:<br />
Suppose there are <span class="math-container">$7$</span> chairs in a row. There are <span class="math-container">$6$</span> people that are going to randomly
sit in the chairs. There are <span class="math-container">$3$</span> females and <span class="math-container">$3$</span> males. What is the probability that
the first and last chairs have females sitting in them?</p>
<p>Answer:<br />
Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ {3 \choose 2 } 3(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(3)(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(4)(3)(2) } { 7(5)(4)(3)(2) } = \dfrac{ 3(3)(2) } { 7(5)(3)(2) } \\
p &= \dfrac{ 18 } { 35(3)(2) } \\
p &= \dfrac{ 3 } { 35 }
\end{align*}</span>
Am I right?
Here is an updated solution.</p>
<p>Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ 3(2) (5)(4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ (5)(4)(3)(2) } { 7(5)(4)(3)(2) } \\
p &= \dfrac{1}{7}
\end{align*}</span>
Now is my answer right?</p>
| Thomas Andrews | 7,933 | <p>An alternative approach is that you have the outer seats both occupied with probability <span class="math-container">$\frac57$</span> and the conditional probability that they both have a woman is <span class="math-container">$$\frac{\binom32}{\binom62}=\frac15.$$</span></p>
<p>So the probability is <span class="math-container">$\frac57\cdot \frac15=\frac17.$</span></p>
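<p>This value can also be confirmed by exhaustively enumerating all seatings, treating the empty chair as a seventh "person" (a brute-force check, not part of the argument):</p>

```python
from itertools import permutations
from fractions import Fraction

people = ['F', 'F', 'F', 'M', 'M', 'M', 'E']   # 'E' marks the empty chair
seatings = list(permutations(people))           # 7! ordered seatings
fav = sum(1 for s in seatings if s[0] == 'F' and s[-1] == 'F')
print(Fraction(fav, len(seatings)))             # 1/7
```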
|
4,374,307 | <p>Problem:<br />
Suppose there are <span class="math-container">$7$</span> chairs in a row. There are <span class="math-container">$6$</span> people that are going to randomly
sit in the chairs. There are <span class="math-container">$3$</span> females and <span class="math-container">$3$</span> males. What is the probability that
the first and last chairs have females sitting in them?</p>
<p>Answer:<br />
Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ {3 \choose 2 } 3(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(3)(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(4)(3)(2) } { 7(5)(4)(3)(2) } = \dfrac{ 3(3)(2) } { 7(5)(3)(2) } \\
p &= \dfrac{ 18 } { 35(3)(2) } \\
p &= \dfrac{ 3 } { 35 }
\end{align*}</span>
Am I right?
Here is an updated solution.</p>
<p>Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ 3(2) (5)(4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ (5)(4)(3)(2) } { 7(5)(4)(3)(2) } \\
p &= \dfrac{1}{7}
\end{align*}</span>
Now is my answer right?</p>
| Toby Mak | 285,313 | <p>Here is an approach that is closest to your thought process. The three females and the three males are all <em>indistinguishable</em>, so we use combinations instead of permutations. Picking any one sex first gives:</p>
<p><span class="math-container">$${7 \choose 3} \cdot {4 \choose 3}$$</span></p>
<p>total possibilities.</p>
<p>Now if two women are already sitting at the ends, then there is one woman and three men left to fill <span class="math-container">$5$</span> seats. Choosing the woman first we have:</p>
<p><span class="math-container">$$5 \cdot {4 \choose 3}$$</span></p>
<p>ways, and here we can see another, simpler approach: the 3 men sit in 4 of the remaining seats and the women sit in the other 3 seats.</p>
<p>Thus the probability is <span class="math-container">$\frac{5}{7 \choose 3} = \frac{1}{7}$</span>.</p>
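<p>The counts above are small enough to check directly with <code>math.comb</code> (a sanity check only):</p>

```python
from math import comb
from fractions import Fraction

total = comb(7, 3) * comb(4, 3)  # choose the women's seats, then the men's seats: 35 * 4
fav = 5 * comb(4, 3)             # ends taken by women: 5 seats for the third woman, then the men
print(Fraction(fav, total))      # 1/7
```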
|
3,011,080 | <p>I was reading:</p>
<blockquote>
<p>Take <span class="math-container">$N \in \mathbf N$</span> such that <span class="math-container">$a_i \leq N$</span> for all <span class="math-container">$i \leq n$</span> and N
is a multiple of every prime number ≤ n. We claim that then 1 + N, 1 +
2N, . . . , 1 + nN, 1 + (n + 1)N are pairwise relatively prime. To see
this, suppose p is a prime number such that p | 1+iN and p | 1+jN (1 ≤
i < j ≤ n+ 1); then p divides their difference (j − i)N, but p ≡ 1 mod
N, so p does not divide N, hence p | j − i ≤ n. But all prime numbers
≤ n divide N, and we have a contradiction</p>
</blockquote>
<p>however, it didn't really make sense to me.</p>
<p>My confusion is where:</p>
<p><span class="math-container">$$p \equiv 1 \pmod N$$</span></p>
<p>came from and how it contributed to the proof. Why does it matter that that if <span class="math-container">$p \equiv 1 \mod N$</span> is true then <span class="math-container">$p$</span> doesn't divide <span class="math-container">$N$</span>?</p>
<p>I feel this is supposed to be a pretty elementary proof, but it is escaping me. What is the simplest way to see this proof? Or what step is the proof skipping? I know this is supposed to be easy, but something is escaping me....</p>
<hr>
<p>context:</p>
<p><a href="https://faculty.math.illinois.edu/~vddries/main.pdf" rel="nofollow noreferrer">https://faculty.math.illinois.edu/~vddries/main.pdf</a></p>
| Will Jagy | 10,400 | <p>All you need is the conclusion that <span class="math-container">$p$</span> does not divide <span class="math-container">$N.$</span> It is enough to show that <span class="math-container">$\gcd(p,N) = 1.$</span> Now, as there is an integer <span class="math-container">$t$</span> such that <span class="math-container">$pt = 1 + jN,$</span> we arrive at the Bezout expression
<span class="math-container">$$ pt - jN = 1 $$</span>
which does the trick. See how, if <span class="math-container">$p|N,$</span> Bezout would force <span class="math-container">$p|1.$</span></p>
<p>Meanwhile, <span class="math-container">$N$</span> is divisible by all primes up to little <span class="math-container">$n.$</span> Since <span class="math-container">$p$</span> does not divide <span class="math-container">$N,$</span> we find that <span class="math-container">$$ p > n \; . $$</span>
This leads to a contradiction, since <span class="math-container">$p \mid (j-i)$</span> and <span class="math-container">$j - i \leq n$</span> would force <span class="math-container">$p \leq n$</span>.</p>
<p>I do not see any reason to conclude that <span class="math-container">$p \equiv 1 \pmod N.$</span> That sort of thing happens when <span class="math-container">$N$</span> divides something involving <span class="math-container">$p,$</span> not when <span class="math-container">$p $</span> divides something involving <span class="math-container">$N.$</span> </p>
|
3,011,080 | <p>I was reading:</p>
<blockquote>
<p>Take <span class="math-container">$N \in \mathbf N$</span> such that <span class="math-container">$a_i \leq N$</span> for all <span class="math-container">$i \leq n$</span> and N
is a multiple of every prime number ≤ n. We claim that then 1 + N, 1 +
2N, . . . , 1 + nN, 1 + (n + 1)N are pairwise relatively prime. To see
this, suppose p is a prime number such that p | 1+iN and p | 1+jN (1 ≤
i < j ≤ n+ 1); then p divides their difference (j − i)N, but p ≡ 1 mod
N, so p does not divide N, hence p | j − i ≤ n. But all prime numbers
≤ n divide N, and we have a contradiction</p>
</blockquote>
<p>however, it didn't really make sense to me.</p>
<p>My confusion is where:</p>
<p><span class="math-container">$$p \equiv 1 \pmod N$$</span></p>
<p>came from and how it contributed to the proof. Why does it matter that that if <span class="math-container">$p \equiv 1 \mod N$</span> is true then <span class="math-container">$p$</span> doesn't divide <span class="math-container">$N$</span>?</p>
<p>I feel this is supposed to be a pretty elementary proof, but it is escaping me. What is the simplest way to see this proof? Or what step is the proof skipping? I know this is supposed to be easy, but something is escaping me....</p>
<hr>
<p>context:</p>
<p><a href="https://faculty.math.illinois.edu/~vddries/main.pdf" rel="nofollow noreferrer">https://faculty.math.illinois.edu/~vddries/main.pdf</a></p>
| Charlie Parker | 118,359 | <p>We want to show:</p>
<p><span class="math-container">$$ 1+N, 1+2N, \dots, 1+(n+1)N$$</span></p>
<p>are pairwise coprime (i.e. share no common factors). It is sufficient to show that no two of them share a common <em>prime</em> factor: any common factor greater than <span class="math-container">$1$</span> has a prime divisor, and that prime would then divide both numbers in question.</p>
<p>Consider any pair <span class="math-container">$1 + iN,\ 1+jN$</span> (<span class="math-container">$1 \leq i \lt j \leq n+1$</span>) and, for the sake of contradiction, suppose some prime <span class="math-container">$p$</span> satisfies <span class="math-container">$p \mid 1+iN$</span> and <span class="math-container">$p \mid 1+jN$</span>. This means <span class="math-container">$1+iN = p t_i$</span> and <span class="math-container">$1+jN = p t_j$</span>. We also have that <span class="math-container">$p$</span> divides the difference:</p>
<p><span class="math-container">$$ p | N (i-j) $$</span></p>
<p>so <span class="math-container">$p \mid N$</span> or <span class="math-container">$p \mid j-i$</span> (since <span class="math-container">$p$</span> is prime, i.e. "atomic"). Notice from the earlier facts that <span class="math-container">$1 = p t_j - jN$</span>, so if <span class="math-container">$p \mid N$</span> we would get that <span class="math-container">$p$</span> divides <span class="math-container">$1$</span>, which is absurd because <span class="math-container">$p$</span> is prime. So <span class="math-container">$p \nmid N$</span>, and hence <span class="math-container">$p \mid j - i$</span>. Notice that <span class="math-container">$i,j$</span> are between <span class="math-container">$1$</span> and <span class="math-container">$n+1$</span>, so their difference is at most <span class="math-container">$n$</span>. So <span class="math-container">$p \mid j - i \leq n$</span>, which means <span class="math-container">$p$</span> must be at most <span class="math-container">$n$</span>. But that cannot be true either, because <span class="math-container">$N$</span> is a multiple of all primes at most <span class="math-container">$n$</span>, so if <span class="math-container">$p \leq n$</span> is prime then it must divide <span class="math-container">$N$</span>, contradicting the earlier fact we derived. Thus the numbers <span class="math-container">$ 1+N, 1+2N, \dots, 1+(n+1)N$</span> must be pairwise coprime (they share no common factor, in particular no prime factor, which is what we showed explicitly). </p>
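<p>The conclusion is easy to test numerically for a small case; here I take $N$ to be the least common multiple of $2, \dots, n$, which in particular is a multiple of every prime $\leq n$ (my own choice for illustration):</p>

```python
from math import gcd, lcm
from itertools import combinations

n = 6
N = lcm(*range(2, n + 1))  # N = 60, divisible by every prime <= n (math.lcm needs Python >= 3.9)
terms = [1 + i * N for i in range(1, n + 2)]
print(terms)
print(all(gcd(a, b) == 1 for a, b in combinations(terms, 2)))  # True: pairwise coprime
```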
|
2,127,679 | <p>I need to find $\frac{a}{b} \mod c$.<br>
This is equal to $(a\cdot b^{\phi(c)-1}\mod c)$, when $b,c$ are co-prime. But what if that's not the case?<br>
To be more clear, I need $$\frac{10^{a\cdot b}-1}{10^b-1}\mod P$$ </p>
| Bill Dubuque | 242 | <p>Update: you edited your question to include a specific example. Here there is some ambiguity depending on whether the division is intended in the integers, or in the integers mod $m$. Let's consider a simpler case, the fraction $\ x\equiv {6}/2\pmod{\!10}.\,$ If this denotes division in the integers then $\,6/2\,$ denotes $3$ so $\,x\equiv 3\pmod{\!10}.\,$ However, if it denotes division in the integers mod $10$ then we seek the solution of $\,2x\equiv 6\pmod{\!10},\,$ i.e. $\,2x = 6+10k\,$ $\iff$ $x = 3 + 5k$ $\iff x\equiv 3\pmod{5}$ $\iff x\equiv 3,8\pmod{\!10}.\,$ Usually one can determine which is intended from the ambient context.</p>
<hr>
<p>Generally let's consider the solution of $\ B\, x \equiv A\pmod {\!M}.\ $ If $\,d=(B,M)\,$ then $\, d\mid B,\:\ d\mid M\mid B\,x\!-\!A\,\Rightarrow\, d\mid A\ $ is a neccessary condition for a solution $\,x\,$ to exist.</p>
<p>If so then let $\ m, a, b \, =\, M/d,\, A/d,\, B/d.\ $ Then cancelling $\,d\,$ throughout yields</p>
<p>$$\ x\equiv \dfrac{A}B\!\!\!\pmod{\!M}\iff M\mid B\,x\!-\!A \overset{\rm\large cancel \ d}\iff\, m\mid b\,x\! -\! a \iff x\equiv \dfrac{a}b\!\!\!\pmod{\!m}$$</p>
<p>where the fraction $\ x\equiv a/b\pmod{\! m}\,$ denotes <em>all</em> solutions of $\,ax\equiv b\pmod{\!m},\: $ and similarly for the fraction $\ x\equiv A/B\pmod{\!M}.\: $ Note there may be zero, one, or multiple solutions.</p>
<p>The above implies that if solutions exist then we can compute them by cancelling $\,d = (B,M)\,$ from the numerator $\,A,\,$ the denominator $\,B\,$ <em>and</em> the modulus $\,M.\,$ In other words</p>
<p>$$ x\equiv \dfrac{ad}{bd}\!\!\!\pmod{\!md}\iff x\equiv \dfrac{a}b\!\!\!\pmod{\! m}$$</p>
<p>If $\, d>1\, $ the fraction $\, x\equiv A/B\pmod{\!M}\,$ is <em>multiple-valued,</em> denoting the $\,d\,$ solutions</p>
<p>$$\quad\ \begin{align} x \equiv a/b\!\!\pmod{\! m}\, &\equiv\, \{\, a/b + k\,m\}_{\,\large 0\le k<d}\!\!\!\pmod{\!M},\,\ M = md\\
&\equiv\, \{a/b,\,\ a/b\! +\! m,\,\ldots,\, a/b\! +\! (d\!-\!1)m\}\!\!\!\pmod{\! M}
\end{align}$$
${\rm for\ example}\ \overbrace{\dfrac{6}3\pmod{\!12}}^{{\rm\large cancel}\ \ \Large (3,12)\,=\,3}\!\!\!\!\equiv \dfrac{2}{1}\!\pmod{\!4}\equiv \!\!\!\!\!\!\!\!\!\!\!\!\!\overbrace{\{2,6,10\}}^{\qquad\ \ \Large\{ 2\,+\,4k\}_{\ \Large 0\le k< 3}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\pmod{12}$</p>
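<p>A small routine (my own sketch; the name <code>mod_div</code> is hypothetical) implementing exactly this cancellation: it returns the $d$ solutions mod $M$ when $d = \gcd(B,M)$ divides $A$, and no solutions otherwise:</p>

```python
from math import gcd

def mod_div(A, B, M):
    """All x in [0, M) with B*x ≡ A (mod M)."""
    d = gcd(B, M)
    if A % d:                       # necessary condition d | A fails: no solutions
        return []
    m, a, b = M // d, A // d, B // d
    x0 = (a * pow(b, -1, m)) % m    # b is invertible mod m since gcd(b, m) = 1
    return [x0 + k * m for k in range(d)]

print(mod_div(6, 3, 12))  # [2, 6, 10], matching the example above
```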
|
2,127,679 | <p>I need to find $\frac{a}{b} \mod c$.<br>
This is equal to $(a\cdot b^{\phi(c)-1}\mod c)$, when $b,c$ are co-prime. But what if that's not the case?<br>
To be more clear, I need $$\frac{10^{a\cdot b}-1}{10^b-1}\mod P$$ </p>
| Matt B | 111,938 | <p>In general, it doesn't make sense if $b$ is not coprime to $c$. However, in your case, the fraction is actually an integer so we can make sense of it.</p>
<p>Note that $\dfrac{x^n-1}{x-1}=x^{n-1}+x^{n-2}+ \cdots +1$, so setting $x=10^b$ and $n=a$, you get $\dfrac{10^{ab}-1}{10^b-1}=x^{a-1}+x^{a-2}+ \cdots +1$, and you now just have to compute what each term is $\mod{P}$.</p>
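<p>Following this expansion, the value mod $P$ can be computed as a sum of $a$ powers of $10^b$, with no modular division needed (a sketch; it works even when $10^b - 1$ and $P$ are not coprime):</p>

```python
def quotient_mod(a, b, P):
    """(10**(a*b) - 1) // (10**b - 1) reduced mod P, via x^(a-1) + ... + x + 1 with x = 10**b."""
    x = pow(10, b, P)
    total, term = 0, 1
    for _ in range(a):
        total = (total + term) % P
        term = (term * x) % P
    return total

a, b, P = 5, 3, 41
print(quotient_mod(a, b, P), ((10**(a * b) - 1) // (10**b - 1)) % P)  # the two agree
```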
|
235,430 | <p>Suppose that a bounded sequence of real numbers $s_i$ ($i\in\omega$) has a limit $\alpha$ along some ultrafilter $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$. Then given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, surely there exists some rearrangement $s_{r(i)}$ of $s_i$ that has the same limit $\alpha$.</p>
<p>One can easily extend this simple observation to a countable family of sequences. </p>
<p>Now given $s_{i;j}$ ( $i,j\in \omega$; values bounded for each fixed $j$) with limits $\alpha_j$ along a fixed $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$, and given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, there exists a <em>simultaneous</em> rearrangement $s_{r(i);j}$ having the same limits $\alpha_j$ along $\mu_2$. </p>
<p>All this fails if we pass to size $c=2^\omega$ families of sequences. Indeed $s_{i;j}$ could then enumerate all bounded sequences. But all the limits $\alpha_j$ together would determine $\mu_1$. Taking limits of a simultaneous rearrangement of all the sequences amounts, equivalently, to taking limits of the original sequences along an ultrafilter $\mu_2'$ in the orbit of $\mu_2$ under the action of the symmetric group of $\Bbb N$ extended to $\beta\Bbb N$. Equality of all those limits thus forces $\mu_1=\mu_2'$, and that places $\mu_1$ and $\mu_2$ in the same orbit of the symmetric group action, a severe restriction on $\mu_2$.</p>
<p><strong>Question</strong>: If CH fails, what happens for a size $\omega_1$ family of sequences?</p>
| Pushpendre | 29,887 | <p>If $Y$ is any diagonal matrix then the lower bound is achieved, so the lower bound is tight for diagonal matrices. There can't be any sharper bound defined in terms of eigenvalues alone, since for any given spectrum there exists a diagonal matrix with those eigenvalues. </p>
<p>Update after edit to the question: </p>
<p>If a bound of the type $c||y-z|| ||y + z||$ has to be better than the trace bound then it has to match the trace bound in the special case where $y,z$ are $[0, 1, \ldots ]$ and $[1, 0, \ldots]$ orthogonal basis vectors. In this case $Y$ would again be a diagonal matrix with only two non-zero entries. Moreover, $E[|x^TYx|]$ would be $E[|(x_1)^2 - (x_2)^2|] = 0$ and $||y-z|| = ||y + z|| = \sqrt{2}$. Which means $0 \ge c \times 2 \implies c = 0$. So this form of bound is not going to lead anywhere.</p>
<p>Thinking more geometrically what you need are constraints on the angle between $y,z$ and you need to quantify how much energy a hyperplane spanned by $y,z$ is allowed to receive. So the $c$ in your hypothesized bound can't be a constant but has to depend on $y,z$</p>
|
202,719 | <p><code>Reduce</code> often provides a much fuller solution than <code>Solve</code>. But it's always in the form of a true statement rather than functions or replacement rules, e.g.</p>
<p>Input:</p>
<pre><code>Reduce[Sin[x^2] + Cos[a] == 0 && -π/2 <= x <= π/2, x]
</code></pre>
<p>Output:</p>
<pre><code>(Cos[a] == -1 && (x == -Sqrt[(π/2)] || x == Sqrt[π/2])) ||
(-1 < Cos[a] <= Sin[1/4 (-4 π + π^2)] &&
(x == -Sqrt[π + ArcSin[Cos[a]]] || x == Sqrt[π + ArcSin[Cos[a]]])) ||
(Cos[a] == 0 && x == 0) ||
(-1 < Cos[a] < 0 && (x == -Sqrt[-ArcSin[Cos[a]]] || x == Sqrt[-ArcSin[Cos[a]]]))
</code></pre>
<p>What I need is a function that takes the variable solved for (i.e. <code>x</code> here) as its input and gives the corresponding value (or a list of values/separate functions for non-unique solutions) as the output. For example, the above output would be represented similarly to the following:</p>
<pre><code>solution[x_] = Piecewise[{
{{-Sqrt[Pi/2], Sqrt[Pi/2]}, Cos[a] == -1},
{{-Sqrt[Pi + ArcSin[Cos[a]]], Sqrt[Pi + ArcSin[Cos[a]]]},
-1 < Cos[a] <= Sin[(1/4)*(-4*Pi + Pi^2)]},
{0, Cos[a] == 0},
{{-Sqrt[-ArcSin[Cos[a]]], Sqrt[-ArcSin[Cos[a]]]},
-1 < Cos[a] < 0}}]
</code></pre>
<p>What is a good way to transform the expression returned by <code>Reduce</code> to such a function?</p>
| J. M.'s persistent exhaustion | 50 | <p>At least for the OP's specific case, here is one possibility:</p>
<pre><code>Piecewise[Append[
KeyValueMap[{#2, #1} &,
GroupBy[Cases[
BooleanConvert[Reduce[Sin[x^2] + Cos[a] == 0 &&
-π/2 <= x <= π/2, x], "DNF"],
x == expr_ && cond_ :> {expr, cond}, {1}],
Last -> First]], {Indeterminate, True}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/HsJ61.png" alt="result"></p>
|
769,504 | <p>It is mentioned in <a href="http://ac.els-cdn.com/0166864182900657/1-s2.0-0166864182900657-main.pdf?_tid=2ecebd88-ccce-11e3-ae74-00000aab0f6c&acdnat=1398467347_2b1c578dc3ae8c1a9107e7444203edb6" rel="nofollow noreferrer">this</a> article that the one-point compactification of an uncountable discrete space is a non-first-countable topological space in which ONE has a winning strategy in <a href="https://math.stackexchange.com/questions/765723/can-we-construct-from-0-omega-1-a-space-which-is-strictly-frechet-with-no-w">$G_{np}(q,E)$</a>.
I have tried to use the Alexandroff one point compactification of the <a href="https://dantopology.wordpress.com/tag/mrowka-space/" rel="nofollow noreferrer">Mrowka space</a>: </p>
<blockquote>
<p><strong>Mrówka’s space $\Psi$:</strong> Subsets of $\omega$ are said to be <em>almost disjoint</em> if their intersection is finite. Let $\mathscr{A}$ be a maximal almost disjoint family of subsets of $\omega$, and let $\Psi=\omega\cup\mathscr{A}$. Points of $\omega$ are isolated. Basic open nbhds of $A\in\mathscr{A}$ are sets of the form $\{A\}\cup(A\setminus F)$, where $F$ is any finite subset of $A$. $\Psi$ is not even countably compact, since $\mathscr{A}$ is an infinite (indeed uncountable) closed, discrete set in $\Psi$.</p>
</blockquote>
<p><strong>Claim:</strong> $X=\Psi \cup \{ \infty \}$ is a non-first-countable topological space in which ONE has a winning strategy in $G_{np}(q,E)$ .</p>
<p><strong>Proof</strong>: It is obvious that $X$ is not first countable, since every point in $\mathscr{A}$ has an uncountable local base. We will now show that it satisfies $G_{np}(q,E)$.</p>
<p>Let $q \in \overline A$. If $q \in \omega$, then it is a discrete point. So, suppose $q$ is not in $\omega$. If $q \neq \infty$, then every open neighborhood of $q$ is a cofinite subset of $q$. Since $q \in \mathscr{A}$ contains only points from $\omega$, this means that every infinite sequence of points $\{q_n\} \subset q$ from $\omega$ converges to $q$.</p>
<p>If $q = \infty$, then every open neighbourhood of $q$ is the complement of a compact set in $X$. Again, since an infinite set of points from $\omega$ or from $\mathscr{A}$ is not compact, every sequence of points that will be picked by TWO will converge to $q$. </p>
<p>What do you think? Is my proof ok?</p>
<p>Thank you!</p>
| topsi | 60,164 | <p>I think that if we take $\mathbb R$ with the discrete topology and
$X = \mathbb R \cup \{ \infty \}$ to be the Alexandroff one point compactification of $\mathbb R$, we get a space which is not first countable in which ONE has a winning strategy in the game $G_{np}(\infty,X)$.</p>
<p><strong>Proof:</strong> Suppose we take $\mathbb R$ with the discrete topology and add the point $\infty$. Then every open neighborhood of $\infty$ is the complement of a finite set. So, take $\mathcal U$ a countable set of open sets $U_n=\mathbb R \setminus A_n$, where $A_n$ is finite for each $n$. Then, for every $x \notin \bigcup A_n$, $(\mathbb R \setminus \{ x \})$ is an open neighborhood of $\infty$, and for each $n \in \mathbb N$, $(\mathbb R \setminus \{ x \}) \nsubseteq U_n$. So, to me, it seems that $\mathbb R \cup \{ \infty \}$ is not first countable.</p>
<p>To show that ONE does not have a winning strategy, I think that it is enough to point out that, every infinite sequence of points from $\mathbb R$ is converging to $\infty$.</p>
<p>What do you think? am I right?</p>
<p>Thank you!</p>
|
440,439 | <p>I was working further on the topic of my previous question, when I needed to know whether the following statement is true in order to circumvent the "exception" caused by division by singular matrices; again, long story short, the statement follows:</p>
<p>If two singular matrices <span class="math-container">$A, B$</span> exist s.t. the determinant of <span class="math-container">$EA-B$</span> is identically zero for all real matrices <span class="math-container">$E$</span>, then either <span class="math-container">$A=YB$</span> or <span class="math-container">$B=ZA$</span>, <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> being undetermined matrices.</p>
<p>Is it true (vacuously or not) in general?</p>
| Alexandre Eremenko | 25,510 | <p>Every bounded analytic function <span class="math-container">$h$</span> in the disk has the representation
<span class="math-container">$$h(z)=B(z)\exp(-P(z)),$$</span>
where <span class="math-container">$B$</span> is a Blaschke product and <span class="math-container">$P$</span> has positive imaginary part. Applying this to <span class="math-container">$h=f^3=g^2$</span>, we conclude that every factor in the Blaschke product must occur <span class="math-container">$6n$</span> times. Therefore the Blaschke product <span class="math-container">$B$</span> has a 6-th root <span class="math-container">$B_0$</span> which is also a Blaschke product, and
<span class="math-container">$h_0(z)=B_0(z)\exp(-P(z)/6)$</span> satisfies <span class="math-container">$h=h_0^6$</span>,
so <span class="math-container">$f=c_3h_0^2$</span> and <span class="math-container">$g=c_2h_0^3$</span>, where <span class="math-container">$c_k$</span> are some <span class="math-container">$k$</span>-th roots of unity. Multiplying <span class="math-container">$h_0$</span> by an appropriate <span class="math-container">$6$</span>-th root of unity we obtain the requested function.</p>
|
62,177 | <p>One of the most mind boggling results in my opinion is, with the axiom of choice/well-ordering principle, there exist such things as uncountable well-ordered sets $(A,\leq)$. </p>
<p>With this in mind, does there exist some well ordered set $(B,\leq)$ with some special element $b$ such that the set of all elements smaller than $b$ is uncountable, but for any element besides $b$, the set of all elements smaller is countable (by countable I include finite too)? </p>
<p>More formally stated, how can one show the existence of a well ordered set $(B,\leq)$ such that there exists a $b\in B$ such that $\{a\in B\mid a\lt b\}$ is uncountable, but $\{a\in B\mid a\lt c\}$ is countable for all $c\neq b$?</p>
<p>It seems like this $b$ would have to "be at the very end of the order." </p>
| Andrés E. Caicedo | 462 | <p>There is no need to use the axiom of choice here. Suppose $X$ is an infinite well-orderable set. We argue that there is a well-ordered set $(Y,<)$ with $Y$ of strictly larger cardinality than $X$. </p>
<p>For this, consider the set $A$ of all binary relations $R\subseteq X\times X$ such that $R$ is a well-ordering of a subset of $X$. Let's call $X_R$ this unique subset.</p>
<p>We introduce an equivalence relation on $A$ by setting that $R_1\sim R_2$ iff $(X_{R_1},R_1)$ and $(X_{R_2},R_2)$ are order isomorphic. Let $Y$ be the set of equivalence classes. </p>
<p>We can well-order $Y$ by saying that $[R_1]<[R_2]$ iff there is an order isomorphism from $(X_{R_1},R_1)$ to a proper initial segment of $(X_{R_2},R_2)$. One easily verifies that $<$ is well-defined. This means that if $R_1\sim R_3$ and $R_2\sim R_4$, then $(X_{R_1},R_1)$ is isomorphic to a proper initial segment of $(X_{R_2},R_2)$ iff $(X_{R_3},R_3)$ is isomorphic to a proper initial segment of $(X_{R_4},R_4)$. One also verifies easily that $<$ is a well-ordering. </p>
<p>Finally, $Y$ has size strictly larger than $X$. To see this, note first that $X$ injects naturally into $Y$, namely, given a well-ordering $\prec$ of $X$ and any two initial segments $X_1$ and $X_2$ of $X$ under $\prec$, with $X_1$ a proper initial segment of $X_2$, $[\prec\upharpoonright X_1]<[\prec\upharpoonright X_2]$. But each point of $X$ determines an initial segment of $X$. </p>
<p>Now, if there were an injection $f$ of $Y$ into $X$, then the range $Z$ of $f$ would be well-orderable in a way isomorphic to $(Y,<)$ by using $f$ to copy the well-ordering $<$ of $Y$: Simply set $f(a) R f(b)$ iff $a<b$ for any classes $a,b\in Y$. We then have a copy of $(Y,<)$ as a proper initial segment of $(Y,<)$ (again, looking at initial segments of $(Z,R)$), contradicting that $<$ is a well-ordering.</p>
<p>The above may seem complicated but it is simple: All it is saying is that a well-ordering is less than another if it is an initial segment, and this is a "well-ordering of well-orderings". (And we have to use equivalence classes because different well-orderings may actually be isomorphic.)</p>
<p>When $X$ is a countably infinite set, the resulting set $Y$ is well-ordered and uncountable, but any initial segment of $Y$ corresponds to a well-ordering of $X$, so it is countable. If you want a set $B$ as required, simply set $B=Y\cup\{*\}$ where $*$ is some point not in $Y$, ordered by simply making $*$ larger than all the elements of $Y$. Then $\{a\in B\mid a<*\}$ is uncountable, but $\{c\in B\mid c<d\}$ is countable for any $d\in Y$, i.e., for any $d\in B$ with $d\ne *$.</p>
<p>Once you are familiar with the construction of ordinals, the above can be streamlined a bit: Rather than using an equivalence class, we simply use the ordinal isomorphic to any representative of the class. The argument above gives us that the collection of countable ordinals is actually a set, and its union is an (in fact, the first) uncountable ordinal. As a matter of fact, there is not even the need to take a union. The set of countable ordinals is already an uncountable ordinal.</p>
<p>(The argument above shows that given any well-ordered set there is a larger well-ordered set. A similar argument gives that if we have a family of well-orders, we can paste them together to get a well-ordering larger than all the ones in the family.)</p>
|
863,364 | <p><img src="https://i.stack.imgur.com/w6R2g.jpg" alt="enter image description here"></p>
<p>I am missing the 3D graph for the equation $x^2+2z^2=1$.</p>
| angryavian | 43,949 | <p>I think that as long as everything is well defined (for example, being in the <a href="http://en.wikipedia.org/wiki/Schwartz_space" rel="nofollow">Schwartz space</a>), then the result is what you think it should be. For example, $$F\left(\frac{\partial^2}{\partial x \partial y}\right) u(x,y,t) = (ix)(iy) \hat{u}(x,y,t)$$ (assuming you are doing the transform on $x$ and $y$).</p>
<hr>
<p>This is stated in Stein's book on Fourier Analysis in Chapter 6:</p>
<blockquote>
<p>Let $f \in \mathcal{S}(\mathbb{R}^d)$. Then $$\left(\frac{\partial}{\partial x}\right)^\alpha f(x) \longrightarrow (i\xi)^\alpha \hat{f}(\xi).$$
(Here, the $x$ and $\xi$ are tuples in $\mathbb{R}^d$, and $\alpha$ is a multi-index. For example, $x=(x_1, x_2)$, and $\alpha=(1,2)$ gives $\left(\frac{\partial}{\partial x}\right)^\alpha:= \frac{\partial^3}{\partial x_1 \partial^2 x_2}$ and $x^\alpha:= x_1 x_2^2$.)</p>
</blockquote>
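<p>For a concrete sanity check, one can approximate the one-dimensional transform numerically, here with the convention $\hat f(\xi) = \int f(x) e^{-i\xi x}\,dx$ (other normalizations insert factors of $2\pi$). The helper <code>ft</code> below is my own crude Riemann-sum sketch, not a library routine:</p>

```python
import cmath
import math

def ft(f, xi, L=20.0, n=4000):
    """Riemann-sum approximation of the Fourier transform  ∫ f(x) e^(-i*xi*x) dx  on [-L, L]."""
    h = 2 * L / n
    return h * sum(f(-L + k * h) * cmath.exp(-1j * xi * (-L + k * h)) for k in range(n))

f = lambda x: math.exp(-x * x / 2)            # a Schwartz function (Gaussian)
f_prime = lambda x: -x * math.exp(-x * x / 2)  # its derivative
xi = 1.7
print(abs(ft(f_prime, xi) - 1j * xi * ft(f, xi)))  # essentially 0: F(f')(xi) = i*xi*F(f)(xi)
```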
|
1,175,297 | <p>Note: The following definitions from my book, Discrete Mathematics and Its Applications [7th ed, 598].</p>
<p>This is my book's definition for a reflexive relation
<img src="https://i.stack.imgur.com/og5wE.png" alt="enter image description here"></p>
<p>This is my book's definition for an antisymmetric relation
<img src="https://i.stack.imgur.com/OaZGk.png" alt="enter image description here"></p>
<p>Is a reflexive relation just the same as an antisymmetric relation? From what I've seen, the only way to meet that antisymmetric requirement is to have the same ordered pair, say an element a from Set A, (a,a). If you have anything other than the same ordered pair, (1,2) and (2,1), it will not meet the antisymmetric requirement. But the overall definition of a reflexive relation is that it's the same ordered pair. Are they just two ways of saying the same thing? Is it possible to have one and not the other?</p>
| Graham Kemp | 135,106 | <p>Reflexivity <em>requires</em> all elements to be both way related with themselves. However, this does <em>not</em> prohibit non-equal elements from being both way related with each other.</p>
<p>That is: $\qquad\forall a\in A: (a,a)\in R$ from which we can prove: $$\forall a\in A\;\forall b\in A:\Big(a=b \;\to\; (a,b)\in R \wedge (b,a)\in R\Big)$$</p>
<p>Antisymmetry <em>requires</em> any two elements which are both way related with each other to be equal, which means it <em>prohibits</em> non-equal elements from being both-way related to each other; however, this does <em>not</em> require all elements to be both way related with themselves.
$$\forall a\in A, \forall b\in A:\Big((a,b)\in R \wedge (b,a)\in R \;\to\; a=b\Big)$$
Or via contraposition: $\qquad\forall a\in A, \forall b\in A:\Big(a\neq b \;\to\; (a,b)\notin R \vee (b,a)\notin R\Big)$</p>
<p>Thus they are not the same thing.</p>
<hr>
<p>Examples are easy to generate. From the set $A=\{0, 1\}$, the following relations are:</p>
<p>$$\begin{array}{c|c|c}
R & \text{Reflexive} & \text{Antisymmetric}
\\ \hline
\{(0,0), (1,1)\} & \text{Yes} & \text{Yes}
\\
\{(0,0), (0,1), (1,0), (1,1)\} & \text{Yes} & \text{No}
\\
\{(0,0), (0,1), (1,0)\} & \text{No} & \text{No}
\\
\{(0,0), (0,1)\} & \text{No} & \text{Yes}
\end{array}$$</p>
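<p>The table can be reproduced mechanically by encoding the two definitions verbatim (a quick check in Python):</p>

```python
def reflexive(R, A):
    # every element is related to itself
    return all((a, a) in R for a in A)

def antisymmetric(R, A):
    # any two elements related in both directions must be equal
    return all(a == b for a in A for b in A if (a, b) in R and (b, a) in R)

A = {0, 1}
examples = [
    {(0, 0), (1, 1)},
    {(0, 0), (0, 1), (1, 0), (1, 1)},
    {(0, 0), (0, 1), (1, 0)},
    {(0, 0), (0, 1)},
]
for R in examples:
    print(sorted(R), reflexive(R, A), antisymmetric(R, A))  # matches the table above
```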
|
868,935 | <p>I saw a proof that
$$
\lim_{x\to 0} \ln|x|\cdot x = 0
$$
where it is argued that for $x \in (0,1)$ we have
$$
| \ln(x) x | = \left| \int_1^x x/t ~\mathrm d t \right| = \left| \int_x^1 x/t ~\mathrm d t \right| \le \left|\int_x^1 1 dt\right| = |1 - x| \le 1
$$
and therefore the result follows, but why should the fact that $|\ln(x)x|$ is bounded imply it converges to zero?</p>
| Matt B. | 164,029 | <p>$x \ln(x)$ is monotone as well (decreasing near $0$), so being bounded it must have a finite limit as $x \to 0^+$. In fact the bound above finishes the job: writing $x\ln x = 2\sqrt{x}\,\big(\sqrt{x}\ln\sqrt{x}\big)$ and applying the bound to $\sqrt{x}$, i.e. $|\sqrt{x}\ln\sqrt{x}| \le 1$, gives $|x\ln x| \le 2\sqrt{x} \to 0$.</p>
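<p>Both the uniform bound and the decay to $0$ are easy to observe numerically; a quick sanity check (not a proof):</p>

```python
from math import log

# sample points x = 10^(-k) approaching 0 from the right
xs = [10.0 ** -k for k in range(1, 12)]
vals = [abs(x * log(x)) for x in xs]

print(all(v <= 1 for v in vals))                   # True: the bound |x ln x| <= 1 holds on (0,1)
print(all(b < a for a, b in zip(vals, vals[1:])))  # True: the values decrease toward 0
```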
|
2,860,321 | <p>Suppose $L/K$ is a Galois extension of local fields with Galois group $G = \operatorname{Gal}(L/K)$. Let $K'$ be the maximal unramified Extension of $K$ in $L$.</p>
<p>The Definition of the <strong>inertia group</strong> of $L/K$ is given by $I = I_{L/K} = \operatorname{Gal}(L/K')$ which I understand.</p>
<p>In some notes I found that this is equivalent to</p>
<p>$$ I = \{ \sigma \in G : \sigma \text{ maps to } \iota \text{ in } \operatorname{Gal}(k_L/k_K) \}$$</p>
<p>or $$I = \{ \sigma \in G : \sigma x \equiv x \text{ (mod }M) \: \forall x \in \mathcal{O}_L \}.
$$</p>
<p>I know that $k_L$ and $k_K$ are the residue fields of $L$ and $K$.
Could you please explain me what $\iota$ and $M$ are supposed to be? If possible, could you also explain why all the conditions above are indeed equivalent then?</p>
<p>Thank you.</p>
| D_S | 28,556 | <p>Assume $L/K$ are local fields. Let $\mathfrak p$ (resp. $\mathfrak P$) be the unique maximal ideal of $\mathcal O_K$ (resp. $\mathcal O_L$). Let $\kappa(\mathfrak p) = \mathcal O_K/\mathfrak p$ and $\kappa(\mathfrak P) = \mathcal O_L/\mathfrak P$. The inclusion of $\mathcal O_K$ into $\mathcal O_L$ induces an inclusion of $\kappa(\mathfrak p)$ into $\kappa(\mathfrak P)$.</p>
<p>Then for all $\sigma \in G = \operatorname{Gal}(L/K)$, we have $\sigma \mathfrak P = \mathfrak P$. Then the ring isomorphism $\sigma: \mathcal O_L \rightarrow \mathcal O_L$, fixing $\mathcal O_K$ pointwise, induces a field isomorphism $\bar{\sigma}: \kappa(\mathfrak P) \rightarrow \kappa(\mathfrak P)$ fixing $\kappa(\mathfrak p)$ pointwise. Thus we have a homomorphism</p>
<p>$$G \rightarrow \operatorname{Gal}(\kappa(\mathfrak P)/\kappa(\mathfrak p)), \sigma \mapsto \bar{\sigma}$$</p>
<p>It can be shown to be surjective. Let $I$ be the kernel of this homomorphism. By definition,</p>
<p>$$I = \{ \sigma \in G : \bar{\sigma} = 1_{\kappa(\mathfrak P)}\}$$</p>
<p>If $x = a + \mathfrak P$ is an element of $\kappa(\mathfrak P) = \mathcal O_L/\mathfrak P$, then by definition, $\bar{\sigma}(x) = \sigma(a) + \mathfrak P$. Hence $\bar{\sigma} = 1_{\kappa(\mathfrak P)}$ if and only if for all $a \in \mathcal O_L$, we have $\sigma(a) + \mathfrak P = a + \mathfrak P$. This shows that your two definitions of $I$ are equivalent.</p>
<p>Now $L/K$ is unramified, if and only if $[L : K] = [\kappa(\mathfrak P) : \kappa(\mathfrak p)]$, if and only if $I$ is trivial. If $K'$ is the fixed field of $I$, then $G/I \cong \operatorname{Gal}(K'/K)$, and the corresponding "$I$" for $L/K'$ is trivial. Hence $K'/K$ is unramified. I forgot how to show it is maximal unramified, but it should not be too hard. </p>
|
2,860,321 | <p>Suppose $L/K$ is a Galois extension of local fields with Galois group $G = \operatorname{Gal}(L/K)$. Let $K'$ be the maximal unramified extension of $K$ in $L$.</p>
<p>The Definition of the <strong>inertia group</strong> of $L/K$ is given by $I = I_{L/K} = \operatorname{Gal}(L/K')$ which I understand.</p>
<p>In some notes I found that this is equivalent to</p>
<p>$$ I = \{ \sigma \in G : \sigma \text{ maps to } \iota \text{ in } \operatorname{Gal}(k_L/k_K) \}$$</p>
<p>or $$I = \{ \sigma \in G : \sigma x \equiv x \text{ (mod }M) \: \forall x \in \mathcal{O}_L \}.
$$</p>
<p>I know that $k_L$ and $k_K$ are the residue fields of $L$ and $K$.
Could you please explain to me what $\iota$ and $M$ are supposed to be? If possible, could you also explain why all the conditions above are indeed equivalent?</p>
<p>Thank you.</p>
| nguyen quang do | 300,700 | <p>Your question makes no sense if you don't specify what your fields $K, L$ are. Since you introduce the maximal unramified subextension $K'/K$ of $L/K$, there is only one possible interpretation: $K$ is a local field, i.e. it is complete w.r.t. a non-archimedean discrete valuation $v$. The ring of integers of $K$ is the valuation ring $O_K=\{x\in K : v(x) \ge 0\}$, which is a local ring with principal maximal ideal $P_K=\{x\in K : v(x) \ge 1\}$, with residual field $k_K=O_K/P_K$. The same notations carry over to $L$. The maximal unramified subextension $K'/K$ of $L/K$ is the compositum of all the unramified subextensions of $L/K$, and because of the maximality property, $K'/K$ is Galois if $L/K$ is. Recall that the definition of "unramified" then requires that the residual extension $k_K'/k_K$ is also Galois (see Cassels-Fröhlich, p.21).</p>
<p>In the latter case, denote $G=Gal(L/K), G_0=Gal(K'/K), \bar G=Gal(k_K'/k_K)$. Since the action of $G$ respects the valuation, it is obvious that the passage to residual fields $s \in G \to \bar s \in \bar G$ s.t. $\bar s (\bar x)=\bar {s(x)}$ for $x\in O_L$, induces a homomorphism $r:G \to \bar G$ whose kernel is the inertia subgroup $I = \{s\in G : v_L(s(x)-x)\ge 1 \text{ for all } x \in O_L\}$, as you noticed. It remains to show that $G_0=I$. Obviously $I$ contains $G_0$ because $P_L=P_{K'}$ (total ramification). The main point now is the <em>surjectivity</em> of $r$, which is a consequence of Hensel's lemma (op. cit., lemma 1, p.27), so that $\bar G\cong G/I$ and $\mid \bar G\mid$= the residual degree $f$. But $\mid G/G_0 \mid$= $f$ because of the classical relation $n=ef$ (op. cit., prop. 3, p.9), hence finally $G_0=I$.</p>
|
4,072,769 | <blockquote>
<p>How to evaluate this?
<span class="math-container">$$\prod_{k=1}^m \tan \frac{k\pi}{2m+1}$$</span></p>
</blockquote>
<p>My work</p>
<p>I couldn't figure out a method to solve this product.
I thought that this identity could help.
<span class="math-container">$$\frac{e^{i\theta}-1}{e^{i\theta}+1}=i\tan \frac{\theta}{2}$$</span></p>
<p>By supposing <span class="math-container">$z=e^{\frac{2k\pi}{2m+1}i}$</span>, then,
<span class="math-container">$$i\tan \frac{k\pi}{2m+1}=\frac{z-1}{z+1}$$</span>
So, the product
<span class="math-container">$$\displaystyle\prod_{k=1}^m \tan\frac{k\pi}{2m+1}=\displaystyle\prod_{k=1}^m \frac{z-1}{i(z+1)}=\displaystyle\prod_{k=1}^m \frac{e^{\frac{2k\pi}{2m+1}i}-1}{i(e^{\frac{2k\pi}{2m+1}i}+1)}$$</span></p>
<p>which is getting more complicated.</p>
<p>Answer is <span class="math-container">$\sqrt{2m+1}$</span>. Any help is appreciated.</p>
| mathlover123 | 761,688 | <p>Start by considering the related product over <span class="math-container">$k=1,\dots,2m$</span> and splitting it into sines and cosines:
<span class="math-container">$$\prod_{k=1}^{2m}\tan{(\frac{k\pi}{2m+1})}=\prod_{k=1}^{2m}\sin{(\frac{k\pi}{2m+1})}\prod_{k=1}^{2m}\frac{1}{\cos{(\frac{k\pi}{2m+1}})}$$</span>
Now :
<span class="math-container">$$\prod_{k=1}^{2m}\cos{(\frac{k\pi}{2m+1})}=\frac{(-1)^{m}}{2^{2m}}$$</span>
Proof:
<span class="math-container">$$\cos{(\frac{k\pi}{2m+1})}=\exp{(\frac{ki\pi}{2m+1})}\frac{(1+\exp{(\frac{-2ik\pi}{2m+1}}))}{2}$$</span>
If we have polynomial <span class="math-container">$z^{2m+1}-1$</span> then its <span class="math-container">$2m+1$</span> roots of unity are <span class="math-container">$\exp{(\frac{-2ik\pi}{2m+1}}):1\le k \le 2m+1$</span>
Thus :
<span class="math-container">$$z^{2m+1}-1=-\prod_{k=1}^{2m+1}(-z+\exp{(\frac{-2ik\pi}{2m+1}}))$$</span>
<span class="math-container">$$\therefore \frac{(-1)^{2m+1}-1}{2}=-1=-\prod_{k=1}^{2m}(1+\exp{(\frac{-2ik\pi}{2m+1}}))$$</span>
<span class="math-container">$$\therefore \prod_{k=1}^{2m}\cos{(\frac{k\pi}{2m+1})}=\frac{1}{2^{2m}}\exp(\frac{i(2m)(2m+1)\pi}{2(2m+1)})=\frac{(-1)^{m}}{2^{2m}}$$</span>
Now we have to evaluate : <span class="math-container">$$\prod_{k=1}^{2m}\sin{(\frac{k\pi}{2m+1})}$$</span>
Using similar process like one above we get:
<span class="math-container">$$\prod_{k=1}^{2m}\sin{(\frac{k\pi}{2m+1})}=(-1)^{m}\lim_{z\to1}\frac{z^{2m+1}-1}{(z-1)(2i)^{2m}}=\frac{(2m+1)(-1)^{m}}{(2i)^{2m}}$$</span>
Okay, we have these two products, but how do they relate to the original problem? Well,
<span class="math-container">$$\tan{(\frac{(2m+1-k)\pi}{2m+1})}=\tan{(\pi-\frac{k\pi}{2m+1})}=-\tan{(\frac{k\pi}{2m+1})}$$</span>
thus:
<span class="math-container">$$\prod_{k=1}^{2m}\tan{(\frac{k\pi}{2m+1})}=\prod_{k=1}^{m}\tan{(\frac{k\pi}{2m+1})}\prod_{k=m+1}^{2m}\tan{(\frac{k\pi}{2m+1})}=(-1)^{m}(\prod_{k=1}^{m}\tan{(\frac{k\pi}{2m+1})})^{2}$$</span>
<span class="math-container">$$\prod_{k=1}^{2m}\tan{(\frac{k\pi}{2m+1})}=\frac{(2m+1)}{(2i)^{2m}}2^{2m}=(-1)^{m}(2m+1)$$</span>
<span class="math-container">$$\therefore (-1)^{m}(\prod_{k=1}^{m}\tan{(\frac{k\pi}{2m+1})})^{2}=(-1)^{m}(2m+1)$$</span>
<span class="math-container">$$\therefore \prod_{k=1}^{m}\tan{(\frac{k\pi}{2m+1})}=\sqrt{2m+1} $$</span></p>
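<p>As a quick numerical sanity check (not a proof), one can evaluate the product for small <span class="math-container">$m$</span> in Python and compare it with <span class="math-container">$\sqrt{2m+1}$</span>:</p>

```python
import math

def tan_product(m):
    # prod_{k=1}^{m} tan(k*pi / (2m+1))
    p = 1.0
    for k in range(1, m + 1):
        p *= math.tan(k * math.pi / (2 * m + 1))
    return p

# compare against the claimed closed form sqrt(2m+1)
checks = {m: (tan_product(m), math.sqrt(2 * m + 1)) for m in range(1, 11)}
```

<p>For <span class="math-container">$m=1$</span> this is just <span class="math-container">$\tan(\pi/3)=\sqrt{3}$</span>, and the agreement persists for larger <span class="math-container">$m$</span>.</p>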
|
1,115,545 | <p>In my lecture notes we have the following:</p>
<p>The set <span class="math-container">$$\mathbb{P}^2(K)=\{[x, y, z] | (x, y, z) \in (K^3)^{\star}\}$$</span> is called projective plane over <span class="math-container">$K$</span>.</p>
<p>There are the following cases:</p>
<ul>
<li><p><span class="math-container">$z \neq 0$</span>
<span class="math-container">$$\left [x, y, z\right ]=\left [\frac{x}{z}, \frac{y}{z}, 1\right ]$$</span>
<span class="math-container">$$\mathbb{P}^2(K) \ni \left [x, y, z\right ] \to \left (\frac{x}{z}, \frac{y}{z}\right ) \in A^2(K)$$</span>
These points are the finite points of <span class="math-container">$\mathbb{P}^2(K)$</span>.</p>
</li>
<li><p><span class="math-container">$z=0$</span></p>
<p>The points <span class="math-container">$\left [x, y, 0\right ]$</span> are called points at infinity of <span class="math-container">$\mathbb{P}^2(K)$</span>.</p>
</li>
</ul>
<p>The finite points <span class="math-container">$\left [x, y, 1\right ] \in \mathbb{P}^2(K)$</span> geometrically correspond to the intersection points of the lines from <span class="math-container">$(0,0,0)$</span> with the plane <span class="math-container">$z=1$</span>.</p>
<p>The points at infinity <span class="math-container">$\left [x, y, 0\right ]$</span> geometrically correspond to the lines of the plane <span class="math-container">$x0y$</span> that pass through <span class="math-container">$(0, 0, 0)$</span>.</p>
<p>We define as line in <span class="math-container">$\mathbb{P}^2(K)$</span> each equation of the form <span class="math-container">$$ax+by+cz=0, (a,b,c) \neq (0,0,0)$$</span> that means the set <span class="math-container">$$E=\{\left [x, y, z\right ] \in \mathbb{P}^2(K) | ax+by+cz=0, (a, b, c) \neq (0,0,0) \}$$</span></p>
<p>If <span class="math-container">$z \neq 0$</span> : <span class="math-container">$\left [x, y, z\right ]=\left [x, y, 1\right ]$</span>, that means that the equation is written <span class="math-container">$$ax+by+c=0$$</span> (and if <span class="math-container">$ab \neq 0$</span> then it is the known affine line)</p>
<p>Can you explain to me what <span class="math-container">$$\mathbb{P}^2(K) \ni \left [x, y, z\right ] \to \left (\frac{x}{z}, \frac{y}{z}\right ) \in A^2(K)$$</span> means?</p>
<p>Also, at the end, why is it true that <span class="math-container">$\left [x, y, z\right ]=\left [x, y, 1\right ]$</span>? At the beginning didn't we have that <span class="math-container">$\left [x, y, z\right ]=\left [\frac{x}{z}, \frac{y}{z}, 1\right ]$</span>?</p>
| sciona | 195,458 | <p>Use Cauchy-Schwarz Inequality: $$(a+b+b+c+c+d+d+a)\left(\frac{a^2}{a+b}+\frac{d^2}{a+d}+\frac{b^2}{b+c}+\frac{c^2}{c+d}\right) \ge (a+b+c+d)^2$$</p>
<p>Alternative approach: $$\sum\limits_{cyc} \frac{a^2}{a+b} = \sum\limits_{cyc} \left(\frac{a^2}{a+b} - (a-b)\right) = \sum\limits_{cyc} \frac{b^2}{a+b}$$</p>
<p>Therefore, $$\sum\limits_{cyc} \frac{a^2}{a+b} = \frac{1}{2}\sum\limits_{cyc} \frac{a^2+b^2}{a+b} \ge \frac{1}{4}\sum\limits_{cyc} \frac{(a+b)^2}{a+b} = \frac{1}{2}\sum\limits_{cyc} a$$</p>
|
1,115,545 | <p>In my lecture notes we have the following:</p>
<p>The set <span class="math-container">$$\mathbb{P}^2(K)=\{[x, y, z] | (x, y, z) \in (K^3)^{\star}\}$$</span> is called projective plane over <span class="math-container">$K$</span>.</p>
<p>There are the following cases:</p>
<ul>
<li><p><span class="math-container">$z \neq 0$</span>
<span class="math-container">$$\left [x, y, z\right ]=\left [\frac{x}{z}, \frac{y}{z}, 1\right ]$$</span>
<span class="math-container">$$\mathbb{P}^2(K) \ni \left [x, y, z\right ] \to \left (\frac{x}{z}, \frac{y}{z}\right ) \in A^2(K)$$</span>
These points are the finite points of <span class="math-container">$\mathbb{P}^2(K)$</span>.</p>
</li>
<li><p><span class="math-container">$z=0$</span></p>
<p>The points <span class="math-container">$\left [x, y, 0\right ]$</span> are called points at infinity of <span class="math-container">$\mathbb{P}^2(K)$</span>.</p>
</li>
</ul>
<p>The finite points <span class="math-container">$\left [x, y, 1\right ] \in \mathbb{P}^2(K)$</span> geometrically correspond to the intersection points of the lines from <span class="math-container">$(0,0,0)$</span> with the plane <span class="math-container">$z=1$</span>.</p>
<p>The points at infinity <span class="math-container">$\left [x, y, 0\right ]$</span> geometrically correspond to the lines of the plane <span class="math-container">$x0y$</span> that pass through <span class="math-container">$(0, 0, 0)$</span>.</p>
<p>We define as line in <span class="math-container">$\mathbb{P}^2(K)$</span> each equation of the form <span class="math-container">$$ax+by+cz=0, (a,b,c) \neq (0,0,0)$$</span> that means the set <span class="math-container">$$E=\{\left [x, y, z\right ] \in \mathbb{P}^2(K) | ax+by+cz=0, (a, b, c) \neq (0,0,0) \}$$</span></p>
<p>If <span class="math-container">$z \neq 0$</span> : <span class="math-container">$\left [x, y, z\right ]=\left [x, y, 1\right ]$</span>, that means that the equation is written <span class="math-container">$$ax+by+c=0$$</span> (and if <span class="math-container">$ab \neq 0$</span> then it is the known affine line)</p>
<p>Can you explain to me what <span class="math-container">$$\mathbb{P}^2(K) \ni \left [x, y, z\right ] \to \left (\frac{x}{z}, \frac{y}{z}\right ) \in A^2(K)$$</span> means?</p>
<p>Also, at the end, why is it true that <span class="math-container">$\left [x, y, z\right ]=\left [x, y, 1\right ]$</span>? At the beginning didn't we have that <span class="math-container">$\left [x, y, z\right ]=\left [\frac{x}{z}, \frac{y}{z}, 1\right ]$</span>?</p>
| user2345215 | 131,872 | <p>Another approach using $4ab\le(a+b)^2$:
\begin{align*}\frac{a^2}{a+b}+\frac{b^2}{b+c}+\frac{c^2}{c+d}+\frac{d^2}{d+a}&=\left(a-\frac{ab}{a+b}\right)+\left(b-\frac{bc}{b+c}\right)+\left(c-\frac{cd}{c+d}\right)+\left(d-\frac{da}{d+a}\right)\\
&=1-\left(\frac{ab}{a+b}+\frac{bc}{b+c}+\frac{cd}{c+d}+\frac{da}{d+a}\right)\\
&\ge1-\frac14\Bigl((a+b)+(b+c)+(c+d)+(d+a)\Bigr)=\frac12\end{align*}</p>
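<p>A randomized numerical check (not a proof) of the bound, under the normalization <span class="math-container">$a+b+c+d=1$</span> implicitly used above:</p>

```python
import random

random.seed(0)

def cyclic_sum(a, b, c, d):
    return a*a/(a + b) + b*b/(b + c) + c*c/(c + d) + d*d/(d + a)

# sample random positive tuples, normalized so that a + b + c + d = 1
min_seen = float("inf")
for _ in range(10000):
    v = [random.random() + 1e-9 for _ in range(4)]
    s = sum(v)
    a, b, c, d = (x / s for x in v)
    min_seen = min(min_seen, cyclic_sum(a, b, c, d))

# equality case: all four variables equal
equal_case = cyclic_sum(0.25, 0.25, 0.25, 0.25)
```

<p>The minimum over all samples stays at or above <span class="math-container">$1/2$</span>, and the equal case hits it exactly.</p>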
|
4,492,930 | <p>Let <span class="math-container">$\mathfrak{c}$</span> denote the cardinality of the continuum. I sketch an intuitive but non-rigorous argument that <span class="math-container">$|\mathbb{R}^\mathbb{N}| = \mathfrak{c}$</span>, with the question:</p>
<p><strong>Question</strong>: can this argument be made rigorous?</p>
<hr />
<p><strong>Sketch "proof"</strong>: Let <span class="math-container">$f : \mathbb{R} \to \mathbb{R}, a \in \mathbb{R}$</span>. It is a standard result that <span class="math-container">$f$</span> is continuous at <span class="math-container">$a$</span> iff:</p>
<ol>
<li><span class="math-container">$\forall\varepsilon>0 \; \exists \delta>0$</span> s.t. <span class="math-container">$|x-a| < \delta \implies |f(x)-f(a)| < \varepsilon$</span></li>
<li><span class="math-container">$\forall (x_n)$</span> s.t. <span class="math-container">$x_n \to a$</span>, <span class="math-container">$f(x_n) \to f(a)$</span></li>
</ol>
<p>(1) requires something to be true for every element of a set of cardinality <span class="math-container">$\mathfrak{c}$</span>, while (2) requires something to be true for every element of:</p>
<p><span class="math-container">$$S := \{(x_n) | x_n \to a\}$$</span></p>
<p>But since (1) and (2) are equivalent, we may deduce that <span class="math-container">$|S| = \mathfrak{c}$</span>. It follows that <span class="math-container">$|\mathbb{R}^\mathbb{N}| = \mathfrak{c}$</span>.</p>
<p>(This argument has inaccuracies, at least some of which can be fixed with observations made below.)</p>
<hr />
<p><strong>General idea</strong>: If two logical statements are equivalent and each "depends"(?) on sets of cardinality <span class="math-container">$C$</span> and <span class="math-container">$C'$</span> respectively, then we would expect that <span class="math-container">$C = C'$</span>.</p>
<p>I outline two potential issues that I spotted with this type of reasoning below.</p>
<hr />
<p><strong>Objection A</strong>: Suppose we let <span class="math-container">$T$</span> = <span class="math-container">$\{S\}$</span> i.e a set with <span class="math-container">$S$</span> as an element. Then, we can rephrase (2) as:</p>
<ol start="3">
<li>For every <span class="math-container">$s \in t$</span> of every <span class="math-container">$t \in T$</span>, <span class="math-container">$f(s) \to f(a)$</span> (where <span class="math-container">$f(s)$</span> is interpreted in the obvious way).</li>
</ol>
<p>But then <span class="math-container">$T$</span> is a set of cardinality <span class="math-container">$1$</span>, and there's an issue because <span class="math-container">$\mathfrak{c} \neq 1$</span>. I think this issue could be remedied with a more rigorous approach, but I don't know any of the set theory which I expect is needed to do so.</p>
<hr />
<p><strong>Objection B</strong>: Consider the following two statements. We have that <span class="math-container">$x \le 0$</span> iff:</p>
<ol start="4">
<li><span class="math-container">$x \le a$</span>, <span class="math-container">$\; \forall a \in A_1$</span> where <span class="math-container">$A_1 = \{0\}$</span></li>
<li><span class="math-container">$x \le a$</span>, <span class="math-container">$\; \forall a \in A_2$</span> where <span class="math-container">$A_2 = \mathbb{R}^+ \cup \{0\}$</span></li>
</ol>
<p>Then, (4),(5) are equivalent, but depend on sets of wildly different cardinalities <span class="math-container">$1 \neq \mathfrak{c}$</span> again.</p>
<p>The solution to this is a bit more obvious. It is clear that most of <span class="math-container">$A_2$</span> is redundant, so we could argue that it has "effective cardinality" <span class="math-container">$1$</span>, since it suffices to know simply whether or not <span class="math-container">$x \le a$</span> for <span class="math-container">$a=0 \in A_2$</span>. But that doesn't solve the issue of (6):</p>
<ol start="6">
<li><span class="math-container">$x < a$</span>, <span class="math-container">$\; \forall a \in A_3$</span> where <span class="math-container">$A_3 = \mathbb{R}^+$</span></li>
</ol>
<p>which is also equivalent to (4),(5) but appears to have countably infinite "effective cardinality". (Notably, <span class="math-container">$\mathbb{R}^+$</span> is not closed; I think that is key here.)</p>
<hr />
<p>I expect that for both (A) and (B), there would need to be some kind of condition on what "types" of elements are allowed for the sets in question, and also the types of sets allowed.</p>
<p>Is there a way to reconcile all of these issues and make this a valid direction of argument? Is there any truth to the general idea described above, and if not, is there a clear error that can be pinpointed?</p>
| Sourav Ghosh | 977,780 | <p>Yes, since the inner product is additive in each component. You do have to be careful with homogeneity, because a complex inner product is conjugate-linear in one slot; a real inner product, however, is linear in both components, i.e. it is a bilinear map.</p>
<p>Here we only need additivity, so it's fine for any inner product space.</p>
<p><span class="math-container">$\langle \lambda x,x\rangle= \langle Ax+ Bx,x
\rangle=\langle Ax, x\rangle+\langle Bx,x
\rangle$</span></p>
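<p>A tiny numerical illustration of the additivity being used, with small complex matrices (a hypothetical example; the convention assumed here is linearity in the first slot and conjugate-linearity in the second):</p>

```python
# Inner product <u, v> = sum_i u_i * conj(v_i).
def inner(u, v):
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1 + 2j, 0.5 + 0j], [-1j, 3 + 0j]]
B = [[0.25 + 0j, 1 - 1j], [2 + 0j, -0.5j]]
x = [1 - 1j, 2 + 0.5j]

# <(A+B)x, x> should equal <Ax, x> + <Bx, x> by additivity
lhs = inner(matvec(matadd(A, B), x), x)
rhs = inner(matvec(A, x), x) + inner(matvec(B, x), x)
```
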
|
1,579,781 | <blockquote>
<p>If $x+y+z=6$ and $xyz=2$, then find the value of $$\cfrac{1}{xy}
+\cfrac{1}{yz}+\cfrac{1}{zx}$$</p>
</blockquote>
<p>I've started by simply looking for a form which involves the given known quantities ,so:</p>
<p>$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}$$</p>
<p>Now this might look nice since I know the value of the denominator but if I continue to work on the numerator I get looped :</p>
<p>$$\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\cfrac{4\left(\cfrac{1}{xy}+\cfrac{1}{yz}+\cfrac{1}{zx}\right)}{(xyz)^2}=\cfrac{4\left(\cfrac{(\cdots)}{(xyz)^2}\right)}{(xyz)^2}$$</p>
<p>How do I break out of this circular computation?</p>
| Pieter Rousseau | 286,128 | <p>$$x+y+z=6$$
Divide both sides with $xyz$
$$\frac{x+y+z}{xyz}=\frac{6}{xyz}$$
$$\frac{x}{xyz}+\frac{y}{xyz}+\frac{z}{xyz}=\frac{6}{2}$$
$$\frac{1}{yz}+\frac{1}{xz}+\frac{1}{xy}=3$$</p>
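<p>One can also sanity-check this numerically: for any $x \neq 0$, choosing $y, z$ as the (possibly complex) roots of $t^2-(6-x)t+2/x=0$ forces $x+y+z=6$ and $xyz=2$, and the sum of reciprocals of pairwise products should always come out to $3$. A sketch:</p>

```python
import cmath

def reciprocal_pair_sum(x):
    # choose y, z so that x + y + z = 6 and xyz = 2
    s, p = 6 - x, 2 / x              # y + z and y * z
    disc = cmath.sqrt(s * s - 4 * p)
    y, z = (s + disc) / 2, (s - disc) / 2
    return 1/(x*y) + 1/(y*z) + 1/(z*x)

values = [reciprocal_pair_sum(x) for x in (0.5, 1.0, 2.0, 3.7, -1.25)]
```
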
|
90,876 | <p>$$2x-\dfrac{x+1}{2} + \dfrac{1}{3}(x+3)= \dfrac{7}{3}$$</p>
<p>When I solve this I always end up with 11x = 5, which is wrong, no matter which way I solve it. Does anyone know how to solve it? Steps? (Because I know the answer should be x=1)</p>
| The Chaz 2.0 | 7,850 | <p>Multiply by $6$ to clear fractions:</p>
<p>$12x - 3(x +1) +2(x +3) = 14$</p>
<p>Eventually you'll get</p>
<p>$11x = 11$</p>
|
81,982 | <p>I am beginning to do some work with cubical sets and thought that I should have an understanding of various extra structures that one may put on cubical sets (for purposes of this question, connections). I know that cubical sets behave more nicely when one has an extra set of degeneracies called connections. The question is: Why these particular relations? Why do they show up? Precise references would be greatly appreciated.</p>
| Dany Majard | 5,375 | <p>As far as higher cubical categories are concerned, a connection will allow you to literally rotate a face, i.e. turn a face of one type into a face of another type, in an invertible way.
In short it materializes an equivalence between the different types of faces into special degenerate cubes.</p>
<p>The 2d case for example is fairly simple as one can either turn horizontal arrows into vertical arrows or vice versa.</p>
<p>One advantage of a connection is therefore that it allows one to speak of commutative n-cubes in an n-tuple category with connection. To do so, you take an n-cube and apply connections until you only have non-trivial faces of one type, then check whether the obtained cube is an identity or not. It turns out that the outcome does not depend on the way you chose to apply the connections: if your cube gives an identity cube with one face rearrangement, it will with another. This is, to my understanding, the essence of Brown and Al-Agl's equivalence between cubical categories with connections and globular categories.</p>
<p>So for cubical categories it is very restrictive, which is also why they are so friendly. But I am not sure about the impact on cubical sets. You surely will find good material in Tim and Ronnie's references.</p>
|
3,023,726 | <p>I'm trying to solve a problem that I can't seem to work out.</p>
<blockquote>
<p><span class="math-container">$f$</span> is an entire function. Prove that <span class="math-container">$|f^{(n)}(0)|< n!n^n$</span> for at least one <span class="math-container">$n$</span>. </p>
</blockquote>
<p>I've been thinking to use the Cauchy estimates somehow but there's no reason for me to believe that <span class="math-container">$f$</span> is bounded from above.</p>
<p>Any help is appreciated.</p>
| Community | -1 | <p><strong>Hint:</strong> Try proving it by contradiction. Suppose <span class="math-container">$|f^{(n)}(0)| \geq n! n^n$</span> for all <span class="math-container">$n \in \Bbb{N}$</span>. You are given that <span class="math-container">$f$</span> is entire. Can you say anything about the radius of convergence of <span class="math-container">$f$</span> around <span class="math-container">$0$</span>?</p>
|
2,237,441 | <p>Let $n$ be a natural number.</p>
<p>I need to prove that $9 \mid 4^n-3n-1$</p>
<p>Could anyone give me some hints how to prove it without using induction.</p>
| Guy | 206,544 | <p>Note that
$$4^3=64\equiv 1\mod 9$$</p>
<p>Now split into three cases, depending on $n \bmod 3$.</p>
|
2,237,441 | <p>Let $n$ be a natural number.</p>
<p>I need to prove that $9 \mid 4^n-3n-1$</p>
<p>Could anyone give me some hints how to prove it without using induction.</p>
| Maximal Ideal | 352,912 | <p>Another approach: consider the fact that
$$(4^{n}-1)-3n = (4-1)(1+4+\cdots+4^{n-1})-3n = 3[1+4+\cdots+4^{n-1} - n].$$</p>
|
2,403,201 | <p>How do I solve for $x$:</p>
<p>$$\log\left(\frac{1.07^x}{1050-2.5x}\right)=\log\left(\frac{1.2}{828}\right)$$</p>
<p>If I exponentiate both sides (raising $10$ to each side to remove the logs), I get:
$\dfrac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p>
<p>Then I'm stuck. How do I solve this ?</p>
<p>As suggest by @Kevin, I have decided to add my take here:</p>
<p>One way I could solve this is using Linear Interpolation Approximation.</p>
<p>We have,</p>
<p>$\frac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p>
<p>$1-690\frac{1.07^x}{1050-2.5x}=0$</p>
<p>We need to get the LHS as close to $0$ as possible.</p>
<p>At $x=5(A)$,</p>
<p>LHS $\simeq$ 0.067219 (a)</p>
<p>Since LHS at $x=5$ is greater than $0$, we try at $x=7(B)$</p>
<p>LHS $\simeq$ -0.07311 (b)</p>
<p>Since LHS at $x=7$ is less than $0$, </p>
<p>$5<x<7$</p>
<p>Thus by interpolation,</p>
<p>$x=[A+\frac{a}{a-b}(B-A)]=[5+\frac{0.067219}{0.067219-(-0.07311)}(7-5)]\simeq5.958$</p>
| Community | -1 | <p>After seeing the plot of the function, the orders of magnitude are such that in a first approximation the $x$ at the denominator can be ignored, and you get</p>
<p>$$x\approx\log_{1.07}\frac{1050}{690}=6.2054\cdots.$$</p>
<p>As said by Claude, the next approximations are given by Newton's method, and two or three iterations should be enough.</p>
<hr>
<p>After simplification (writing the equation as $A^x + Bx - C = 0$, with $A = 1.07$, $B = 2.5/690$, $C = 1050/690$), the second approximation is</p>
<p>$$\frac{C\ln C}{C\ln A+B}=5.99452\cdots.$$</p>
<p>Not too bad.</p>
<hr>
<p>The next approximation is probably good enough for practical applications, but is a little less "sexy" when expanded:</p>
<p>$$\frac{\left(\dfrac{C\ln C\ln A}{C\ln A+B}-1\right)A^{\frac{C\ln C}{C\ln A+B}}+C}{A^\frac{C\ln C}{C\ln A+B}\ln A+B}=5.99305338\cdots$$</p>
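<p>These approximations are easy to verify numerically. The sketch below assumes the rewriting $A^x + Bx - C = 0$ with $A=1.07$, $B=2.5/690$, $C=1050/690$ (my reading of the simplification above), and then iterates Newton's method:</p>

```python
import math

# assumed constants from rewriting 1.07**x / (1050 - 2.5x) = 1/690
# as A**x + B*x - C = 0
A, B, C = 1.07, 2.5 / 690, 1050 / 690

f  = lambda x: A**x + B*x - C
df = lambda x: A**x * math.log(A) + B

x0 = math.log(C) / math.log(A)                  # first approximation
x1 = C * math.log(C) / (C * math.log(A) + B)    # second approximation

x = x0
for _ in range(8):                              # Newton's method
    x -= f(x) / df(x)
```

<p>The converged value agrees with the third approximation quoted above to the digits shown.</p>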
|
3,454,514 | <blockquote>
<p>How to change the integration order in the given integral?
<span class="math-container">$$
\int\limits_0^1dx\int\limits_0^1dy\int\limits_0^{x^2+y^2}fdz\rightarrow
\int\limits_?^?dz\int\limits_?^?dy\int\limits_?^?fdx
$$</span></p>
</blockquote>
<p>I tried to make a graphical interpretation, but it seemed rather complex and didn't clarify things. Moreover, I might not have much time if I had to solve this kind of problem on a test.
So, I would be really grateful if someone could explain how to solve this problem efficiently.</p>
| Hans Lundmark | 1,242 | <p>The iterated integral on the left equals the triple integral
<span class="math-container">$$
I = \iiint_D f \, dxdydz
,
$$</span>
where
<span class="math-container">$$
D = \{ (x,y,z) \in \mathbf{R}^3 : 0 \le x \le 1, \, 0 \le y \le 1, \, 0 \le z \le x^2+y^2 \}
.
$$</span>
To write this as an iterated integral with <span class="math-container">$z$</span> outermost, we first figure out that the largest possible <span class="math-container">$z$</span> coordinate for a point in <span class="math-container">$D$</span> is <span class="math-container">$2$</span>, namely for the point <span class="math-container">$(x,y,z)=(1,1,2)$</span>. So the range of <span class="math-container">$z$</span> will be <span class="math-container">$0 \le z \le 2$</span>:
<span class="math-container">$$
I = \int_{z=0}^{2} \left( \iint_{D_z} f \, dxdy \right) dz
,
$$</span>
where <span class="math-container">$D_z$</span> is the region in <span class="math-container">$\mathbf{R}^2$</span> obtained by slicing through <span class="math-container">$D$</span> at a fixed height <span class="math-container">$z \in [0,2]$</span>:
<span class="math-container">$$
D_z = \{ (x,y) \in \mathbf{R}^2 : 0 \le x \le 1, \, 0 \le y \le 1, \, x^2+y^2 \ge z \}
.
$$</span>
Geometrically this is what you get when you take the unit square and remove a (quarter) disk with radius <span class="math-container">$\sqrt{z}$</span>. Now writing a double integral over this region <span class="math-container">$D_z$</span> is slightly tricky, since it looks different depending on whether <span class="math-container">$z$</span> is less than or greater than <span class="math-container">$1$</span> (see <a href="https://www.wolframalpha.com/input/?i=plot+0%3Cx%3C1%2C0%3Cy%3C1%2C+x%5E2%2By%5E2+%3E+1%2F2" rel="nofollow noreferrer">case 1</a>
and
<a href="https://www.wolframalpha.com/input/?i=plot+0%3Cx%3C1%2C0%3Cy%3C1%2C+x%5E2%2By%5E2+%3E+11%2F10" rel="nofollow noreferrer">case 2</a>
on Wolfram Alpha).
So you'll need to split it,
<span class="math-container">$$
I=
\int_{z=0}^{1} \left( \iint_{D_z} f \, dxdy \right) dz
+ \int_{z=1}^{2} \left( \iint_{D_z} f \, dxdy \right) dz
,
$$</span>
and then write
<span class="math-container">$$
I=
\int_{z=0}^{1} \left( \int_{y=0}^{1} \left( \int_{x=g(y)}^{1} f \, dx \right) \, dy \right) dz
+ \int_{z=1}^{2} \left( \int_{y=\sqrt{z-1}}^{1} \left( \int_{x=\sqrt{z-y^2}}^{1} f \, dx \right) \, dy \right) dz
,
$$</span>
where the curve <span class="math-container">$x=g(y)$</span> describes the left boundary of <span class="math-container">$D_z$</span> in case 1 (when <span class="math-container">$0\le z \le 1$</span>):
<span class="math-container">$$
g(y) =
\begin{cases}
\square, 0 \le y \le \square
,\\
\square, \square \le y \le 1
.
\end{cases}
$$</span>
(I'm leaving it for you to figure out how to fill in the blanks in this final step, so that I don't spoil the <em>whole</em> exercise for you!)</p>
|
103,540 | <p>Suppose you have a triangular chessboard of size $n$, whose "squares" are ordered triples $(x,y,z)$ of nonnegative integers that add up to $n$. A rook can move to any other point that agrees with it in one coordinate -- for example, if you are on $(3,1,4)$ then you can move to $(2,2,4)$ or to $(6,1,1)$, but not to $(4,3,1)$.</p>
<p>What is the maximum number of mutually non-attacking rooks that can be placed on this chessboard?</p>
<p>More generally, is anything known about the graph whose vertices are these ordered triples and whose edges are rook moves?</p>
| Cristi Stoica | 10,095 | <p>Here is a paper about this problem: <a href="http://www.cin.ufpe.br/~pcp/nonattacking_queens.pdf" rel="noreferrer">"Non-attacking queens on a triangle"</a>.</p>
<p>And here's another one <a href="http://arxiv.org/abs/0910.4325" rel="noreferrer">"Putting Dots in Triangles"</a></p>
|
103,540 | <p>Suppose you have a triangular chessboard of size $n$, whose "squares" are ordered triples $(x,y,z)$ of nonnegative integers that add up to $n$. A rook can move to any other point that agrees with it in one coordinate -- for example, if you are on $(3,1,4)$ then you can move to $(2,2,4)$ or to $(6,1,1)$, but not to $(4,3,1)$.</p>
<p>What is the maximum number of mutually non-attacking rooks that can be placed on this chessboard?</p>
<p>More generally, is anything known about the graph whose vertices are these ordered triples and whose edges are rook moves?</p>
| Gerhard Paseman | 3,402 | <p>This has a connection with additive permutations, asked about here:
<a href="https://mathoverflow.net/q/211276">Are there enough additive permutations?</a> .</p>
<p><a href="http://oeis.org/A002047" rel="nofollow noreferrer">http://oeis.org/A002047</a> has references to a similar problem involving a hexagonal board and counting the number of placements. (The Bennett and Potts reference calls the piece a 'brook'.) This number also counts the permutations. A recent ArXiv paper <a href="http://arxiv.org/abs/1510.05987" rel="nofollow noreferrer">http://arxiv.org/abs/1510.05987</a> appears to provide an asymptotic estimate. Note that placement on a hexagonal board gives one for the triangle, but there may be placements in a triangle corner that are not realizable in the hexagon.</p>
<p>Gerhard "Cutting Corners For More Results" Paseman, 2017.03.13</p>
|
1,397,190 | <p>Find the sum of the following series:</p>
<p>$$1 + \cos \theta + \frac{1}{2!}\cos 2\theta + \cdots$$</p>
<p>where $\theta \in \mathbb R$.</p>
<p>My attempt: I need a hint to start.</p>
| Michael Galuza | 240,002 | <p>Hint:
$$
1+\cos x + \frac{1}{2!}\cos 2x + \ldots = \Re(e^{0ix} + e^{1ix} + \frac{1}{2!}e^{2ix} + \ldots)=\Re e^{e^{ix}}
$$</p>
<p>$$
e^{ix}=\cos x+i\sin x\\\Longrightarrow e^{e^{ix}} = e^{\cos x}e^{i\sin x}=e^{\cos x}(\cos(\sin x)+i\sin(\sin x))
$$
Your sum is
$$
e^{\cos x}\cos(\sin x)
$$</p>
|
3,917,912 | <p>I am reading an article where the author seems to use a known relationship between the sum of a finite sequence of real positive numbers <span class="math-container">$a_1 +a_2 +... +a_n = m$</span> and the sum of their reciprocals. In particular, I suspect that
<span class="math-container">\begin{equation}
\sum_{i=1}^n \frac{1}{a_i} \geq \frac{n^2}{m}
\end{equation}</span><br />
with equality when <span class="math-container">$a_i = \frac{m}{n} \forall i$</span>. Are there any references or known theorems where this inequality is proven?</p>
<p><a href="https://math.stackexchange.com/a/1857918/852233">This</a> interesting answer provides a different lower bound. However, I am doing some experimental evaluations where the bound is working perfectly (varying <span class="math-container">$n$</span> and using <span class="math-container">$10^7$</span> uniformly distributed random numbers).</p>
| TurlocTheRed | 397,318 | <p>This follows from Lagrange multipliers.</p>
<p>Let <span class="math-container">$S_1=\sum_{i=1}^n a_i=m$</span>. Minimize <span class="math-container">$S_2=\sum_{i=1}^n\frac{1}{a_i}$</span>.</p>
<p>Treat each <span class="math-container">$a_i$</span> as an independent variable.</p>
<p><span class="math-container">$\lambda\frac{\partial S_1}{\partial a_i}=\frac{\partial S_2}{\partial a_i}\implies \lambda=\frac{-1}{a_i^2}\implies a_i^2=a_j^2\ \forall i,j$</span>. That they are all positive means they are all equal. They all sum to <span class="math-container">$m$</span>, so they must all be <span class="math-container">$m/n$</span>.</p>
<p>Then sum their reciprocals to get <span class="math-container">$\frac{n^2}{m}$</span> as the minimum.</p>
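<p>The bound is also easy to check empirically (my addition; plain Python, random positive inputs):</p>

```python
import random

# empirical check of sum(1/a_i) >= n^2 / m for positive a_i with sum m
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 20)
    a = [random.uniform(0.01, 100.0) for _ in range(n)]
    m = sum(a)
    assert sum(1.0 / x for x in a) >= n * n / m

# equality exactly when all a_i = m/n
n, m = 7, 3.5
a = [m / n] * n
assert abs(sum(1.0 / x for x in a) - n * n / m) < 1e-9
print("bound holds on all random trials")
```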
|
3,981,983 | <p>I had an interesting math problem presented to me some time ago by a friend (he stated it in non-mathematical terms). At what angle would you launch a projectile from a spaceship/satellite such that it left that object and went on to hit another orbiting object? Then as a supplemental question he asked at what angle would you launch that projectile to hit the other orbiting object in the least amount of time?</p>
<p>I assumed that the objects were only acted upon by a single spherically symmetric mass distribution such that I could treat it as a one-body problem for each object. Further, I assumed it all took place in the plane with a polar coordinate system so that I ended up with this simple system of nonlinear autonomous ODE's,</p>
<p><span class="math-container">$$ \begin{bmatrix} \frac{d \theta}{dt} \\ \frac{dv}{dt}\\ \frac{dr}{dt} \end{bmatrix} = \begin{bmatrix} \frac{h}{r^{2}} \\ \frac{h^{2}}{r^{3}} -\frac{\mu}{r^{2}} \\ v \end{bmatrix}.$$</span></p>
<p>Where the inital conditions for the projectile would be <span class="math-container">$\{ \theta_{i} , r_{i}, v_{\beta}\cos(\phi - \theta_{i}) + v_{i}\}$</span> with an <span class="math-container">$h_{\phi} = r_{i} (v_{\beta}\sin(\phi - \theta_{i}) + v_{\theta i})$</span> and the target objects' initial conditions are <span class="math-container">$\{ \theta_{i}^{'} , r_{i}^{'}, v_{i}^{'}\}$</span> with <span class="math-container">$h'= r^{'}_{i} v_{\theta i}^{'}$</span>. Where <span class="math-container">$\phi$</span> is the launch angle from the polar axis while <span class="math-container">$v_{\beta}$</span> is the magnitude of the projectiles velocity (which I assume to not change, only its launch direction) and <span class="math-container">$\mu$</span> is a constant relating to the gravitational field strength of the attracting object. Below is a picture depicting the general initial and final conditions.</p>
<p><a href="https://i.stack.imgur.com/63WHz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63WHz.png" alt="The picture depicting the general initial and final conditions." /></a>
<a href="https://i.stack.imgur.com/KZsRS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZsRS.png" alt="enter image description here" /></a></p>
<p>After all these preliminaries, I'm basically asking if there is a variational-calculus or other simpler approach to solving this problem, perhaps as a boundary-value problem mixed with an initial-value ODE problem. That is, something other than computationally pouring through thousands of trajectories with minutely differing <span class="math-container">$\phi$</span>'s and then numerically guessing at the approximate launch angle or angles that answer my question(s).</p>
<p>That isn't what I want to do; I'd like to know if there is an equation, or a single ODE or system of ODEs, that I could solve directly for the launch angle giving the least time of travel, or for the launch angles that would lead to a hit (irrespective of travel time). Any help would be much appreciated. I'm a sophomore college student with little experience solving ODEs robustly or programming solvers for them.</p>
| Brian M. Scott | 12,042 | <p>If <span class="math-container">$x\in X\setminus A$</span>, then <span class="math-container">$X\setminus A$</span> is an open nbhd of <span class="math-container">$x$</span> disjoint from <span class="math-container">$A$</span>. If <span class="math-container">$a\in A$</span>, then <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span>, so by the definition of <em>limit point</em> <span class="math-container">$a$</span> has an open nbhd <span class="math-container">$U_a$</span> that contains no other point of <span class="math-container">$A$</span>. In other words, <span class="math-container">$U_a\cap A=\{a\}$</span>.</p>
|
18,413 | <p>The thought came from the following problem:</p>
<p>Let $V$ be a Euclidean space. Let $T$ be an inner product on $V$. Let $f$ be a linear transformation $f:V \to V$ such that $T(x,f(y))=T(f(x),y)$ for $x,y\in V$. Let $v_1,\dots,v_n$ be an orthonormal basis, and let $A=(a_{ij})$ be the matrix of $f$ with respect to this basis.</p>
<p>The goal here is to prove that the $A$ is symmetric. I can prove this easily enough by saying:</p>
<p>Since $T$ is an inner product, $T(v_i,v_j)=\delta_{ij}$.</p>
<p>\begin{align*}
T(A v_j,v_i)&=T(\sum_{k=1}^n a_{kj} v_k,v_i)\\
&=T(a_{1j} v_1,v_i) + \dots + T(a_{nj} v_n,v_i)\\
&=a_{1j} T(v_1,v_i) + \dots + a_{nj} T(v_n,v_i)\tag{bilinearity}\\
&=a_{ij}\tag{$T(v_i,v_j)=\delta_{ij}$}\\
\end{align*}</p>
<p>By the same logic,</p>
<p>\begin{align*}
T(A v_j,v_i)&=T(v_j,A v_i)\\
&=T(v_j,\sum_{k=1}^n a_{ki} v_k)\\
&=T(v_j,a_{1i} v_1)+\dots+T(v_j,a_{ni} v_n)\\
&=a_{1i} T(v_j,v_1)+\dots+a_{ni} T(v_j,v_n)\\
&= a_{ji}\\
\end{align*}</p>
<p>By hypothesis, $T(A v_j,v_i)=T(v_j,A v_i)$, therefore $a_{ij}=T(A v_j,v_i)=T(v_j,T v_i)=a_{ji}$.</p>
<p>I had this other idea though, that since $T$ is an inner product, its matrix is positive definite.</p>
<p>$T(x,f(y))=T(f(x),y)$ in matrix notation is $x^T T A y=(A x)^T T y$</p>
<p>\begin{align*}
x^T T A y &= (A x)^T T y\\
&=x^T A^T T y\\
TA &= A^T T\\
(TA)^T &= (A^T T)^T\\
A^T T^T &= T^T A\\
TA &= A^T T^T\tag{T is symmetric}\\
&= (TA)^T\tag{transpose of matrix product}\\
\end{align*}</p>
<p>This is where I got stuck. We know that $T$ and $TA$ are both symmetric matrices. Clearly $T^{-1}$ is symmetric. If it can be shown that $T^{-1}$ and $AT$ commute, that would show it.</p>
| Chris Card | 1,470 | <p>It's not true in general, e.g. </p>
<p>$A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$
$B = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}$</p>
<p>$AB = \begin{pmatrix} 4 & 3 \\ 3 & 2 \end{pmatrix}$</p>
<p>(with thanks to Rahul for formatting help)</p>
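<p>A quick script confirming the product in this counterexample (my addition; plain Python lists, no external libraries):</p>

```python
A = [[2, 1], [1, 1]]
B = [[1, 1], [2, 1]]

# 2x2 matrix product
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

assert AB == [[4, 3], [3, 2]]
assert AB[0][1] == AB[1][0]   # AB happens to be symmetric here
assert B[0][1] != B[1][0]     # even though B itself is not
print(AB)
```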
|
2,515,765 | <p>The following question is from an intermediate calculus book I am going through: </p>
<p>Find two sets in $\mathbb R^2$ that have the same interior, but whose complements have different interiors. </p>
<p>This seems like the kind of question that should be fairly straightforward, but I just can't think of an answer. I have tried, for instance, taking the first set to be $\mathbb R^2$ minus an open disc and comparing with $\mathbb R^2$ minus a closed disc. However the complements have the same interior. I have also tried the same with removing single points, squares etc, but nothing seems to work. If anyone can shed any light on this I'd be very grateful.<br>
Thanks in advance. </p>
| Carlos Jiménez | 356,536 | <p>Consider $A=\mathbb{Q}\times\mathbb{Q}$ and $B=\emptyset$. Clearly, $\text{Int}(A)=\text{Int}(B)=\emptyset$ but $\text{Int}(\mathbb{R}^2\setminus A)=\emptyset$ and $\text{Int}(\mathbb{R}^2\setminus B)=\text{Int}(\mathbb{R}^2)=\mathbb{R}^2$ </p>
|
3,000,862 | <p>I can name at least <span class="math-container">$4$</span> different ways of defining the <span class="math-container">$\exp$</span> function:</p>
<ol>
<li>Taylor series: For <span class="math-container">$x \in \mathbb{R}, \exp(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}$</span>.</li>
<li>Differential equation: <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> differentiable with <span class="math-container">$f'(x) = f(x)$</span> and <span class="math-container">$f(0)=1$</span>.</li>
<li>Inverse function of <span class="math-container">$\ln(x) = \int_1^x \frac{dt}{t}$</span> for <span class="math-container">$x>0$</span>.</li>
<li>Exponent: The number <span class="math-container">$e$</span> (defined e.g. as <span class="math-container">$\sum_{k=0}^{\infty} \frac{1}{k!}$</span>) raised to the power <span class="math-container">$x$</span>, for <span class="math-container">$x \in \mathbb{R}$</span></li>
</ol>
<p>I managed to proved equivalence among the first <span class="math-container">$3$</span> but I am a bit puzzled by <span class="math-container">$4.$</span>. </p>
<p>An easy way would be to look at <span class="math-container">$a^b = \exp(b \ln(a))$</span>. But I am not sure that is meaningful. </p>
<p>Is there any other way of defining <span class="math-container">$a^b$</span> without involving <span class="math-container">$\exp$</span> that would give a more meaningful answer? Or how would you approach proving that 4. is equivalent to 1-3?</p>
| Oscar Lanzi | 248,217 | <p>You may have a sign error. I render <span class="math-container">$s^3$</span> as <span class="math-container">$-(1-i\sqrt{3})/16$</span> but you seem to have <span class="math-container">$+(1-i\sqrt{3})/16$</span>. But even with the sign correction (which probably does get you a good answer), your approach is not best. See below.</p>
<p>When you have an expression with two different complex cube roots there are nine choices for this combination. Any of three roots or the first cube root could be paired with any of three cube roots for the second. Yet, of course, only three of the combinations can be correct for the cubic equation you started with. If your calculator/computer program does not choose the "principal values" of the cube roots in the right way for this particular equation, you go wrong (even without other errors).</p>
<p>The solution is easy. When you get a root for <span class="math-container">$t$</span>, do not solve independently for <span class="math-container">$s$</span>. Use the fact that <span class="math-container">$st=-(1/4), s=-(1/4t)$</span> to get an expression for <span class="math-container">$s$</span> that has the same cube root radical as the one for <span class="math-container">$t$</span>. Now your intended difference <span class="math-container">$s-t$</span> contains only a single cube root radical, and its three possible values correspond properly to the three roots of the cubic equation.</p>
|
668,291 | <p>If $h$ and $k$ are any two distinct integers, then $h^n-k^n$ is divisible by $h-k$.</p>
<p>Let's start with the basis. Let $n=1$, then
$h^1-k^1 = h-k$</p>
<p>Now for the induction, I can't use $k$ because I don't want to be confused. So let $P(r)$ for $h^n-k^n$ and that's $h^r-k^r$</p>
<p>$h^r-k^r = h-k$</p>
<p>$h^r = h-k +k^r$</p>
<p>So, for $P(r+1)$</p>
<p>$h^{r+1}-k^{r+1}$</p>
<p>$h^r * h^1 - k^r * k^1$</p>
<p>$ (h-k +k^r) * h -k^r *k $</p>
<p>This is the point where I'm not certain if I should distribute the $h $ all over the place...so here it is</p>
<p>$ (h*h-k*h +k^r*h) -k^r *k $</p>
<p>$ (h*h)+(-k*h) +(k^r*h) -k^r *k $</p>
<p>$ (h)*(h-k) + (k^r)*(h-k)$</p>
<p>$(h-k) * (h+k^r)$</p>
| Pedro | 23,350 | <p>Another option is to write $$h^{r+1}-k^{r+1}=hh^{r}-kk^{r}=(h+k)(h^r-k^r)-kh(h^{r-1}-k^{r-1})$$</p>
<p>in a similar fashion to what we do to prove that if $\alpha,\beta$ are the roots of $f[X]\in\Bbb Z[X]$, $\alpha^n+\beta^n\in\Bbb Z$ for every $n\in\Bbb Z$. Indeed, $\alpha+\beta,\alpha\beta\in\Bbb Z$ by Vieta and</p>
<p>$$\alpha^{n+1}+\beta^{n+1}=(\alpha+\beta)(\alpha^n+\beta^n)-\alpha\beta(\alpha^{n-1}+\beta^{n-1})$$</p>
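<p>The divisibility claim itself is easy to spot-check numerically (my addition):</p>

```python
def divisible(h, k, n):
    # check that h - k divides h^n - k^n
    return (h**n - k**n) % (h - k) == 0

for h in range(-10, 11):
    for k in range(-10, 11):
        if h == k:
            continue
        for n in range(1, 8):
            assert divisible(h, k, n)
print("h - k divides h^n - k^n for all tested h, k, n")
```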
|
1,710,523 | <p>I'm asked to find the arc length of:</p>
<p>$$
\int_{-2}^{x}\sqrt{3w^4-1}dw
$$
where $x$ is between $-2$ and $-1$.</p>
<p>Do I find the integral just as I would normally find it and then find the arc length of that? I'm a little confused about the notation, I guess.</p>
| Jan Eerland | 226,665 | <p>Use the formula for the arc length:</p>
<p>$$\int_{-2}^{-1}\sqrt{1+\frac{\partial}{\partial x}\left[\int_{-2}^{x}\sqrt{3w^4-1}\space\text{d}w\right]^2}\space\text{d}x=$$</p>
<hr>
<p>Using the fundamental theorem of calculus (note that $3x^4-1\ge 0$ on $[-2,-1]$):</p>
<p>$$\frac{\partial}{\partial x}\left[\int_{-2}^{x}\sqrt{3w^4-1}\space\text{d}w\right]=\sqrt{3x^4-1}$$</p>
<hr>
<p>$$\int_{-2}^{-1}\sqrt{1+\left[\sqrt{3x^4-1}\right]^2}\space\text{d}x=\int_{-2}^{-1}\sqrt{1+3x^4-1}\space\text{d}x=$$
$$\int_{-2}^{-1}\sqrt{3x^4}\space\text{d}x=\sqrt{3}\int_{-2}^{-1}\sqrt{x^4}\space\text{d}x=\sqrt{3}\int_{-2}^{-1}x^2\space\text{d}x=$$
$$\sqrt{3}\left[\frac{x^3}{3}\right]_{-2}^{-1}=\frac{\sqrt{3}}{3}\left[x^3\right]_{-2}^{-1}=\frac{\sqrt{3}}{3}\left((-1)^3-(-2)^3\right)=$$
$$\frac{\sqrt{3}}{3}\left(-1-(-8)\right)=\frac{\sqrt{3}}{3}\left(-1+8\right)=\frac{7\sqrt{3}}{3}=\frac{7}{\sqrt{3}}$$</p>
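<p>The result can be confirmed numerically (my addition; a simple trapezoidal rule in plain Python):</p>

```python
import math

# arc-length integrand sqrt(1 + f'(x)^2) with f'(x) = sqrt(3x^4 - 1);
# on [-2, -1] we have 3x^4 - 1 >= 0, so this equals sqrt(3) * x^2
def integrand(x):
    return math.sqrt(1 + (3 * x**4 - 1))

N = 100_000
a, b = -2.0, -1.0
h = (b - a) / N
total = 0.5 * (integrand(a) + integrand(b))
total += sum(integrand(a + i * h) for i in range(1, N))
total *= h

assert abs(total - 7 / math.sqrt(3)) < 1e-6
print(total)  # close to 7/sqrt(3)
```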
|
19,305 | <p>If I compute the eigenvalues and eigenvectors using <code>numpy.linalg.eig</code> (from Python), the eigenvalues returned seem to be all over the place. Using, for example, <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data" rel="nofollow">the Iris dataset</a>, the normalized Eigenvalues are <code>[2.9108 0.9212 0.1474 0.0206]</code>, but the ones I currently have are <code>[9206.53059607 314.10307292 12.03601935 3.53031167]</code>.</p>
<p>The problem I'm facing is that I want to find out what percentage of the variance each component explains, but with the current eigenvalues I don't have the right values.</p>
<p>So, how can I transform my eigenvalues so that they can give me the correct proportion of variance?</p>
<p>Edit: Just in case it wasn't clear, I'm computing the <code>eig</code> of the covariance matrix (The process is called Principal Component Analysis).</p>
| user72308 | 72,308 | <p>I replicated the dataset in your question and got the normalized eigenvalues [2.9108 0.9212 0.1474 0.0206].</p>
<p>To get this answer you need to normalize the data first: 1) subtract the mean of each column (each variable over time), 2) divide each demeaned column by its standard deviation (computed after demeaning), 3) then do PCA, which will give you the answer you want: [2.9108 0.9212 0.1474 0.0206].</p>
<p>Following is my MATLAB code for how I carry out PCA on your dataset:</p>
<pre><code>clear all;
% data    - MxN matrix of input data (M dimensions, N trials)
% signals - MxN matrix of projected data
% PC      - each column is a PC
% V       - Mx1 matrix of variances
filename = 'iris.xls'; sheet = 1; xlRange = 'A:D';
x = xlsread(filename, sheet, xlRange);

data = x';
[M, N] = size(data);

% subtract off the mean for each dimension
mn = mean(data, 2);
data = data - repmat(mn, 1, N);

% divide each dimension by its standard deviation
for kk = 1:M
    data(kk, :) = data(kk, :) / sqrt(var(data(kk, :)));
end

% calculate the covariance matrix
covariance = 1 / (N - 1) * (data * data');
% (alternative normalization, unused)
% covariance = M / sum(diag(covariance)) * covariance;

% find the eigenvectors and eigenvalues
[PC, V] = eig(covariance);

% extract diagonal of matrix as vector
V = diag(V);

% sort the variances in decreasing order
[junk, rindices] = sort(-1 * V);
V = V(rindices);
PC = PC(:, rindices);

% project the original data set
signals = PC' * data;

VV(:, 1) = V;
VV(:, 2) = V / M;
disp(VV);
</code></pre>
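<p>The key point, that the proportions come from dividing each eigenvalue by the eigenvalue sum, can be illustrated in plain Python with a synthetic two-variable example (my addition; the Iris file isn't used here, and I rely on the fact that the 2x2 correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + r and 1 - r):</p>

```python
import math, random

random.seed(1)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.8 * xi + 0.6 * random.gauss(0, 1) for xi in x]

def standardize(v):
    mu = sum(v) / len(v)
    sd = math.sqrt(sum((t - mu) ** 2 for t in v) / (len(v) - 1))
    return [(t - mu) / sd for t in v]

xs, ys = standardize(x), standardize(y)
r = sum(a * b for a, b in zip(xs, ys)) / (n - 1)  # sample correlation

# covariance matrix of standardized data = correlation matrix [[1, r], [r, 1]];
# its eigenvalues are 1 + r and 1 - r, and they sum to the number of variables
lams = [1 + r, 1 - r]
props = [l / sum(lams) for l in lams]  # proportion of variance per component
assert abs(sum(lams) - 2) < 1e-9
assert abs(sum(props) - 1) < 1e-9
print(props)
```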
|
3,441,346 | <p>I was asked to prove that a set <span class="math-container">$X$</span> is closed if and only if it contains all its limit points. I proceeded like so:</p>
<p>Let <span class="math-container">$X^\dagger=\partial X \cap X´$</span> and <span class="math-container">$X^\ast=\partial X \backslash X´$</span> with <span class="math-container">$X´$</span> being the derived set of <span class="math-container">$X$</span>. If <span class="math-container">$X$</span> is closed then:
<span class="math-container">$$X=int(X) \,\cup \,\partial X =int(X) \, \cup \,(X^\dagger \,\cup\,X^\ast)= $$</span>
<span class="math-container">$$=(int(X)\,\cup\, X^\dagger)\, \cup \, X^\ast= $$</span>
<span class="math-container">$$=X´ \, \cup \, X^\ast$$</span>
Therefore if <span class="math-container">$X$</span> is closed then it contains all its limit points. I.e, <span class="math-container">$X$</span> is closed if and only if <span class="math-container">$X´\subseteq X$</span>.</p>
<p>Is this correct, and if so, what are some better ways to prove this?</p>
| copper.hat | 27,978 | <p>There is no need to drag the boundary and derived sets into the proof.</p>
<p>Suppose <span class="math-container">$X$</span> is closed and <span class="math-container">$p$</span> is a limit point of <span class="math-container">$X$</span>. Suppose <span class="math-container">$p \notin X$</span>. Then <span class="math-container">$X^c$</span> is an open neighbourhood of <span class="math-container">$p$</span>, and since every neighbourhood of a limit point of <span class="math-container">$X$</span> must meet <span class="math-container">$X$</span>, we would need <span class="math-container">$X^c \cap X$</span> to be non-empty, which is impossible. Hence <span class="math-container">$p \in X$</span>.</p>
<p>Now suppose <span class="math-container">$X$</span> contains all its limit points. Suppose <span class="math-container">$p \notin X$</span>. Then there must be some neighbourhood <span class="math-container">$U$</span> containing <span class="math-container">$p$</span> that does not intersect <span class="math-container">$X$</span> (otherwise <span class="math-container">$p$</span> would be a limit point). Hence <span class="math-container">$X^c$</span> is open and so <span class="math-container">$X$</span> is closed.</p>
|
2,853,668 | <blockquote>
<p>Show that $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{x^n}$$ converges for every $x>1$.</p>
</blockquote>
<p>Let $a(x)$ be the sum of the series. Is $a$ continuous at $x=2$? Differentiable?</p>
<p>I guess the first part uses the Leibniz (alternating series) test, but I am not sure about it.</p>
| Mostafa Ayaz | 518,023 | <p>Hint:</p>
<p>What if you use the ratio test?$$\lim_{n\to\infty}|\dfrac{a_{n+1}}{a_n}|$$</p>
|
2,484,004 | <p>We know the fact that <span class="math-container">$M^*(a,b)=b-a$</span>, but how can we prove that closed intervals are Lebesgue measurable? I tried a proof using <span class="math-container">$\bigcap_n \left(a-\frac1n,\ b+\frac1n\right)$</span>, but I am totally stuck :( Please help me, guys.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>HINT: plug in $$y^2=1-4x^2$$ in your function, then it containes only the variable $x$</p>
|
15,871 | <p>I would like to state something about the existence of solutions $x_1,x_2,\dots,x_n \in \mathbb{R}$ to the set of equations</p>
<p>$\sum_{j=1}^n x_j^k = np_k$, $k=1,2,\dots,m$</p>
<p>for suitable constants $p_k$. By "suitable", I mean that there are some basic requirements that the $p_k$ clearly need to satisfy for there to be any solutions at all ($p_{2k} \ge p_k^2$, e.g.).</p>
<p>There are many ways to view this question: find the coordinates $(x_1,\dots,x_n)$ in $n$-space where all these geometric structures (hyperplane, hypersphere, etc.) intersect. Or, one can see this as determining the $x_j$ necessary to generate the truncated Vandermonde matrix $V$ (without the row of 1's) such that $V{\bf 1} = np$ where ${\bf 1} = (1,1,\dots,1)^T$ and $p = (p_1,\dots,p_m)^T$.</p>
<p>I'm not convinced one way or the other that there has to be a solution when one has $m$ degrees of freedom $x_1,\dots,x_m$ (same as number of equations). In fact, it would be interesting to even be able to prove that for finite number $m$ equations $k=1,2,\dots,m$ that one could find $x_1,\dots,x_n$ for bounded $n$ (that is, the number of data points required does not blow up).</p>
<p>A follow on question would be to ask if requiring ordered solutions, i.e. $x_1 \le x_2 \le \dots \le x_n$, makes the solution unique for the cases when there is a solution.</p>
<p>Note: $m=2$ is easy. There is at least one solution = the point(s) where a line intersects a circle given that $p_2 \ge p_1^2$. </p>
<p>Any pointers on this topic would be helpful -- especially names of problems resembling it.</p>
| Joel David Hamkins | 1,946 | <p>First, let me mention that one must be careful when asserting that an implication fails. Taken literally, the assertion "D(x) does not imply P(x)" is logically equivalent to the assertion that D(x) is true and P(x) is false. This meaning of material implication that is used in mathematics is not the same as the natural language interpretation of if-then. For example, if the professor says to a student "It is not true that if you pass the final, then you pass the class", most people would not want the students to deduce logically that he or she will pass the final, but fail the class. But this does follow logically from the mathematical usage of material implication. So your definition of "defect" may not be what you intend. </p>
<p>To be sure, mathematicians are often sloppy about this. One often hears people say that such-and-such condition does not imply another condition. What they mean is that it does not necessarily imply the other condition. For example, suppose I have a function f, and someone says "its not true that if f is continuous, then f is differentiable." This statement is logically equivalent to the assertion that f is indeed continuous and not differentiable. What they meant to say, of course, was that "not every continuous function is differentiable". </p>
<p>In your case, you assert two implication failures: one if the definition of defect and another in the definition of minimal. When you clarify exactly what you mean more precisely, you will be led to the conclusion that the only sensible (minimal) defect is simply the assertion D(x), asserting that "either P(x) holds, or Q(x) fails". This statement does not imply P(x), except for those values of x for which Q(x) already implies P(x), and also if D(x) ∧ Q(x), then P(x) follows immediately. If D'(x) is any other statement such that D'(x) ∧ Q(x) implies P(x), then D'(x) implies that either Q(x) fails or P(x) holds, and so D'(x) implies D(x).</p>
|
1,291,107 | <p>Let $X$ be random variable and $f$ it's density. How can one calculate $E(X\vert X<a)$?</p>
<p>From definition we have:</p>
<p>$$E(X\vert X<a)=\frac{E\left(X \mathbb{1}_{\{X<a\}}\right)}{P(X<a)}$$</p>
<p>Is this equal to:</p>
<p>$$\frac{\int_{\{X<a\}}xf(x)dx}{P(X<a)}$$</p>
<p>? If yes, then how does one justify it? Thanks. I'm a conditional-expectation noob.</p>
<p>Also, what is $E(X|X=x_0)$? In discrete case it is $x_0$...</p>
| Martigan | 146,393 | <p>Euh... I think you overcomplicated things here...</p>
<p>$(1-x)(x-5)^3=x-1$ is equivalent to $(1-x)[(x-5)^3+1]=0$</p>
<p>Either $x=1$ or $(x-5)^3=-1$...</p>
|
135,675 | <p>Let $D$ be the Dirac-Operator on $\mathbb{R}^n$ or more generally the Dirac spinor bundle $\mathcal{S}\to M$ of a (semi-)Riemannian spin manifold $M$. Then we consider $D$ as an unbouded Operator on $\mathcal{H}=L^2(\mathbb{R}^n)$ with domain $C^\infty_c(\mathbb{R}^n,\mathbb{C}^N)$. Then it is said that the operator $f\langle D\rangle^{-n}$ is compact, where $f\in C^\infty_c(\mathbb{R}^n,\mathbb{C})$ is considered as a multiplication operator on $\mathcal{H}$ and $\langle D\rangle:=\sqrt{D^\dagger D+ DD^\dagger}$.</p>
<p>Since I am not really an expert in functional analysis, it is not even obvious to me how exactly $\langle D\rangle$ works. I suspect that the operator $D^\dagger D+DD^\dagger$ is (essentially) self-adjoint and that the spectral theorem is then used to define $\langle D\rangle$ and its powers $\langle D\rangle^{-n}$.</p>
<p>But what is even more mysterious to me is the claim that $f\langle D\rangle^{-n}$ is actually compact (note that $f$ has compact support, however). Why is this true?</p>
| Paul Siegel | 4,362 | <p>As discussed in the comments, the statement probably needs to be modified in order for $\langle D \rangle^{-n}$ to be defined. I'm guessing that the correct statement should fit into the following framework:</p>
<hr>
<p>Proposition: Let $D$ be an essentially self-adjoint first order elliptic operator on a possibly non-compact manifold $M$, let $f \in C_c^\infty(M)$, and let $g \in C_0(\mathbb{R})$. Then $m(f) g(D)$ is compact where $m(f)$ is the multiplication operator by $f$.</p>
<p>I'm not going to be able to drudge up all of the gory details, but in the end the proof can be pieced together using standard elliptic analysis. First consider the resolvent function $g(t) = (i + t)^{-1}$. Let $K$ denote the support of $f$ and let $v$ be a vector in the domain of the closure $\overline{D}$ of $D$, so that $v' = m(f)v$ is in the Sobolev space $L_1^2(K)$. Garding's inequality estimates the Sobolev $1$-norm of $v'$ in terms of the $L^2$ norm of $v'$ and of $\overline{D}v'$; from this it follows that $m(f)(i + \overline{D})^{-1}$ maps $L^2(M)$ continuously into $L_1^2(K)$. But the Rellich lemma asserts that the inclusion of $L_1^2(K)$ into $L^2(M)$ is compact, so the result is proved for the specific $g$ above. Now, the set of all $g$ for which the result is true is closed under linear combinations, pointwise multiplication, complex conjugation, and uniform limits, so by the Stone-Weierstrass theorem the result is true for any $g \in C_0(\mathbb{R})$. Notice, however, that the proposition does not apply to $g(t) = t^{-n}$, hence my concerns in the comments.</p>
<hr>
<p>Now, the Dirac operator on a complete Riemannian manifold is essentially self adjoint and therefore fits into the proposition above. This ultimately follows from the fact that its symbol is the Clifford multiplication endomorphism which is not only invertible (away from the $0$ section) but bounded in norm on the unit cosphere bundle. In other words, it has "finite propagation speed". I'm not quite sure how this works out in the semi-Riemannian case, but my guess is that it does as long as you have some counterpart of the completeness assumption: note that the Dirac operator on $(0,1)$ is not essentially self-adjoint.</p>
|
1,176,615 | <p>I am invited to calculate the minimum of the following set:</p>
<p>$\big\{ \lfloor xy + \frac{1}{xy} \rfloor \,\Big|\, (x+1)(y+1)=2 ,\, 0<x,y \in \mathbb{R} \big\}$.</p>
<p>Is there any idea?</p>
<p>(The question changed because there is no maximum for the set (as proved in the following answers), and I assume that the source made a mistake.)</p>
| Christian Blatter | 1,303 | <p>From $xy+x+y=1$ and $x>0$, $y>0$ it follows that $xy<1$. Since $t\mapsto t+{1\over t}$ is decreasing when $t<1$ we conclude that we have to make
$xy$ is as large as possible. Let $x+y=:s$. Then
$$1-s=xy\leq{s^2\over4}\ .$$
The largest possible $xy$ goes with the smallest admissible $s>0$, and the latter satisfies $1-s={\displaystyle{s^2\over4}}$. This leads then to $$x=y={s\over2}=\sqrt{2}-1\ ,$$ and finally to
$$xy+{1\over xy}=6\ ,$$
which is already an integer.</p>
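<p>A numerical check of this minimum (my addition; a simple grid search in plain Python using $y=\frac{2}{x+1}-1$ from the constraint):</p>

```python
import math

def value(x):
    y = 2.0 / (x + 1.0) - 1.0   # from (x+1)(y+1) = 2
    t = x * y
    return t + 1.0 / t

# both x and y are positive only for 0 < x < 1
best = min(value(i / 100_000.0) for i in range(1, 100_000))
assert best >= 6 - 1e-6

x_star = math.sqrt(2) - 1       # the minimizer x = y = sqrt(2) - 1
assert abs(value(x_star) - 6) < 1e-9
print(best)
```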
|
75,880 | <p>Say $f:X\rightarrow Y$ and $g:Y\rightarrow X$ are functions where $g\circ f:X\rightarrow X$ is the identity. Which of $f$ and $g$ is onto, and which is one-to-one?</p>
| karmic_mishap | 17,529 | <p>I think $f$ should be one-to-one and $g$ should be onto, since $g$ has to cover all of $X$ in its range and $f$ has to make $X$ correspond in a one-to-one fashion with $Y$. It seems that $g$ could be not one-to-one if it is an inverse of $f$ that discards some of the information that being a member of Y conveys. For example, if $X = \mathbb{R}^{+}$, $Y = \mathbb{R}$, $f(x) = x$, $g(x) = |x|$, $g$ is not one-to-one, but it is onto, and f is one-to-one, but clearly not onto. Hopefully this example is valid and helps you out.</p>
|
2,362,790 | <p>I'm interested in the problem linked with <a href="https://math.stackexchange.com/questions/770117/determinant-of-circulant-matrix/2362754#2362754">this answer</a>. </p>
<p>Let $ f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1} $ be polynomial with distinct $a_i$ which are <strong>primes</strong>. </p>
<p>(Polynomials like that for $n= 4 \ \ \ \ f(x) = 7 + 11 x + 17 x^2 + 19 x^3 $)</p>
<ul>
<li>Is it possible, for some <span class="math-container">$n$</span>, that <span class="math-container">$x^n-1$</span> and <span class="math-container">$f(x)$</span> have a common divisor?</li>
</ul>
<p>(A negative answer would mean that it is possible to generate non-singular circulant matrices from any prime numbers.)</p>
<p>In other words </p>
<ul>
<li>The roots of $x^n-1$ lie (as complex vectors) symmetrically on the unit circle in the complex plane; can such a root also be a root of $f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1}$ in the general case where the $a_i$ are constrained as above?</li>
</ul>
| Gerry Myerson | 8,269 | <p>For $n=4$, $x^4-1$ and $13+11x+17x^2+19x^3$ both have the root $x=-1$. Any number of similar examples can be produced. </p>
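<p>A one-line verification of the common root (my addition):</p>

```python
def p(x):
    return 13 + 11 * x + 17 * x**2 + 19 * x**3

assert p(-1) == 0           # 13 - 11 + 17 - 19 = 0
assert (-1)**4 - 1 == 0     # x = -1 is also a root of x^4 - 1
print("x = -1 is a common root")
```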
|
879,640 | <p>Does a matrix have only one inverse matrix (like the inverse of an element in a field)? If so, does this mean that</p>
<p>$A,B \text{ have the same inverse matrix} \iff A=B$?</p>
| Disintegrating By Parts | 112,478 | <p>If $A$, $B$ are square matrices with same inverse $C$, then $AC=CA=I$ and $BC=CB=I$. Therefore,
$$
A =AI= A(CB)= (AC)B = IB = B.
$$
The odd thing about matrices is this: If $A$, $B$ are $n\times n$ matrices over a field, then $AB=I$ iff $BA=I$. This is a direct consequence of the fact that the $n\times n$ matrices form a finite-dimensional linear space.</p>
|
3,243,503 | <p>If <span class="math-container">$x + y = 2c$</span>, find minimum value of
<span class="math-container">$ \sec x +\sec y $</span> if <span class="math-container">$x,y\in(0,\pi/2)$</span>, in terms of <span class="math-container">$c$</span>.</p>
<p>I was able to solve it by differentiating the equation and got the answer <span class="math-container">$2\sec c$</span>.
But I would like to see a solution based on trigonometry, without differentiating the equation.</p>
| Ma Joad | 516,814 | <p><span class="math-container">$$\frac{d}{dx}(\sec x+\sec y)=\frac{\sin x}{\cos^2x}+\frac{\sin y}{\cos^2y}\frac{dy}{dx}=0,\\
\frac{dy}{dx}=-1,\\
\frac{\sin x}{\cos^2x}=\frac{\sin y}{\cos^2y}\\
\Rightarrow x=y=c.$$</span>
Minimum value: <span class="math-container">$2\sec c.$</span></p>
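<p>A numerical check of the claimed minimum (my addition; <code>c = 0.6</code> is an assumed sample value, and the grid covers the $x$ for which both $x$ and $y=2c-x$ stay in $(0,\pi/2)$):</p>

```python
import math

c = 0.6
lo = max(0.0, 2 * c - math.pi / 2)
hi = min(2 * c, math.pi / 2)
xs = [lo + (hi - lo) * i / 100_000 for i in range(1, 100_000)]

# sec(x) + sec(2c - x) over the feasible grid
best = min(1 / math.cos(x) + 1 / math.cos(2 * c - x) for x in xs)

assert best >= 2 / math.cos(c) - 1e-9   # minimum is 2*sec(c), at x = y = c
print(best, 2 / math.cos(c))
```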
|
885,778 | <ol>
<li><p>Is there any group of order 36 with no subgroup of order 6?</p></li>
<li><p>Is there any group of order $p^2q^2$ with no subgroup of order $pq$?</p></li>
<li><p>Is there any group of order $p^{2m}q^{2m}$ with no subgroup of order $p^mq^m$?</p></li>
<li><p>Is there any group of order $p^{2m}q^{2n}$ with no subgroup of order $p^mq^n$?</p></li>
</ol>
<p>($p,q$ are distinct primes)</p>
| Geoff Robinson | 13,147 | <p>For question 2, I believe the answer is yes, which also settles question $3$ and $4.$ Take $p = 149$ and $q = 5.$ Then a cyclic group of order $25$ acts irreducibly and faithfully on an elementary Abelian $p$-group of order $p^{2}.$ Furthermore, in the resulting semidirect product, each element of order $5$ acts irreducibly on the normal Sylow $p$-subgroup, so there is no subgroup of order $745 = pq.$ </p>
<p>The general strategy is to find primes odd primes $p,q$ with $p \equiv -1$ mod $q^{2},$ and then a similar construction works.</p>
|
1,450,476 | <p>I'm in number theory and I currently have these problems assigned as homework. I've looked through the sections containing these problems and I've solved/proved most of the other problems, but I can't figure these ones out.</p>
<ol>
<li>For $n>1$, show that every prime divisor of $n!+1$ is an odd integer that is greater than $n$.</li>
<li><p>Assuming that $p_n$ is the $n$th prime number, show that the sum $\frac{1}{p_1}+\frac{1}{p_2}+...+\frac{1}{p_n}$ is never an integer.</p></li>
<li><p>In how many zeroes does 1,111! end?</p></li>
</ol>
<p>Thanks in advance!</p>
| Eric Auld | 76,333 | <p>It is certainly not bijective. Showing it is a monomorphism is trivial (why?) Showing it is an epimorphism is almost as easy; think about two functions $f,g: \mathbb{Q} \to R$ that disagree on some value. Can it be that $f \circ i = g \circ i$?</p>
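<p>For part 3 of the question above (the number of zeroes ending 1,111!), Legendre's formula counts the powers of $5$; a short sketch:</p>

```python
def trailing_zeros(n: int) -> int:
    """Trailing zeros of n!, i.e. the exponent of 5 in n! (Legendre's formula);
    factors of 2 always outnumber factors of 5."""
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

# 1111//5 + 1111//25 + 1111//125 + 1111//625 = 222 + 44 + 8 + 1 = 275
print(trailing_zeros(1111))
```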
|
1,949,874 | <blockquote>
<p><span class="math-container">$P_2(R)$</span> is the set of polynomials of degree two or lower.</p>
<p>Show that there is a unique basis <span class="math-container">$\{p_1, p_2, p_3\}$</span> of <span class="math-container">$P_2(R)$</span> with the property that <span class="math-container">$p_1(0) = 1, p_1(1) = p_1(2) = 0, p_2(1) = 1, p_2(0) = p_2(2) = 0$</span> and <span class="math-container">$p_3(2) = 1, p_3(0) = p_3(1) = 0$</span>.</p>
</blockquote>
<p>Stuck here. I know a proof might look something like "Suppose there were two different bases..." and then showing they must be the same, but I'm actually stuck on part of finding what one basis would be in the first place.</p>
| Paul Sinclair | 258,282 | <p>You are asked to prove two things:</p>
<ul>
<li>Existence: there is such a basis, and</li>
<li>Uniqueness: there is at most one such basis.</li>
</ul>
<p>The proof you describe is only for proving uniqueness. Uniqueness is almost always easier to prove than existence.</p>
<p>To prove such a basis exists, you need to show</p>
<ol>
<li>That there are $3$ polynomials $p_1, p_2, p_3$ in $P_2(\Bbb R)$ such that $$\begin{array} {ccc} p_1(0) = 1 & p_1(1) = 0 & p_1(2) = 0\\p_2(0) = 0 & p_2(1) = 1 & p_2(2) = 0\\p_3(0) = 0 & p_3(1) = 0 & p_3(2) = 1\end{array}$$</li>
<li>That $\{p_1, p_2, p_3\}$ is linearly independent</li>
<li>That the span of $\{p_1, p_2, p_3\}$ is all of $P_2(\Bbb R)$.</li>
</ol>
<p>So start with this: if $p \in P_2(\Bbb R)$, then $p(x) = ax^2 + bx + c$ for some $a, b, c$. If $p(0) = r, p(1) = s, p(2) = t$, then we have the following linear system of equations in the three unknowns $a, b, c$:
$$p(0) = a\cdot 0 + b \cdot 0 + c = r\\p(1) = a\cdot 1 + b \cdot 1 + c = s\\p(2) = a\cdot 4 + b \cdot 2 + c = t$$
This system is non-degenerate and so has a unique solution for any given $r, s, t$, which you should be able to easily produce.</p>
<p>This gives you directly the existence of the three polynomials $p_1, p_2, p_3$. It also shows that every polynomial in $P_2(\Bbb R)$ is in their span, for if $q \in P_2(\Bbb R)$, then $q$ is the unique polynomial $p$ above when $r = q(0), s = q(1), t = q(2)$. But note that $q(0)p_1 + q(1)p_2 + q(2)p_3$ is also a polynomial in $P_2(\Bbb R)$ with the same values at $x = 0,1,2$. Hence it must be $q$.</p>
<p>That $p_1, p_2, p_3$ are linearly independent is obvious, as any linear combination of $p_2$ and $p_3$ must be $0$ at $x = 0$ and so cannot be $p_1$. Similarly $x = 1$ and $x = 2$ show that $p_2$ and $p_3$ cannot be written as linear combinations of the other two either.</p>
<p>Lastly, if $q_1, q_2, q_3$ were another such basis, then note that the polynomials $q_1 - p_1, q_2 - p_2, q_3 - p_3$ each have the property that their values at $0, 1,$ and $2$ are all $0$. Apply the result above again to see that each of these polynomials must be uniformly $0$, which proves the uniqueness.</p>
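<p>The $3\times 3$ system above can be solved once and for all; a short sketch in exact rational arithmetic (the helper names are mine) that produces $p_1, p_2, p_3$ and checks the defining values:</p>

```python
from fractions import Fraction

def interpolate(r, s, t):
    """Coefficients (a, b, c) of p(x) = a x^2 + b x + c with
    p(0) = r, p(1) = s, p(2) = t, solved from the system in the text:
    c = r,  a + b + c = s,  4a + 2b + c = t."""
    r, s, t = Fraction(r), Fraction(s), Fraction(t)
    a = (r - 2 * s + t) / 2
    b = (-3 * r + 4 * s - t) / 2
    return a, b, r

def value(p, x):
    a, b, c = p
    return a * x * x + b * x + c

p1 = interpolate(1, 0, 0)   # (x - 1)(x - 2)/2
p2 = interpolate(0, 1, 0)   # -x(x - 2)
p3 = interpolate(0, 0, 1)   # x(x - 1)/2

for p, want in [(p1, (1, 0, 0)), (p2, (0, 1, 0)), (p3, (0, 0, 1))]:
    assert tuple(value(p, x) for x in (0, 1, 2)) == want
print(p1, p2, p3)
```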
|
1,949,874 | <blockquote>
<p><span class="math-container">$P_2(R)$</span> is the set of polynomials of degree two or lower.</p>
<p>Show that there is a unique basis <span class="math-container">$\{p_1, p_2, p_3\}$</span> of <span class="math-container">$P_2(R)$</span> with the property that <span class="math-container">$p_1(0) = 1, p_1(1) = p_1(2) = 0, p_2(1) = 1, p_2(0) = p_2(2) = 0$</span> and <span class="math-container">$p_3(2) = 1, p_3(0) = p_3(1) = 0$</span>.</p>
</blockquote>
<p>Stuck here. I know a proof might look something like "Suppose there were two different bases..." and then showing they must be the same, but I'm actually stuck on part of finding what one basis would be in the first place.</p>
| Gerry Myerson | 8,269 | <p>Paul Sinclair's solution is fine, but if you want to find $p_1,p_2,p_3$, you can do it this way: </p>
<p>First, you know that if $p(x)$ is a polynomial and $p(a)=0$ then $x-a$ is a factor. So $p_1$ must be $m_1(x-1)(x-2)$ for some constant $m_1$, and you can evaluate that constant by considering $p_1(0)$. Similarly, $p_2=m_2x(x-2)$, and $p_3=m_3x(x-1)$, for some easily evaluated constants $m_2$ and $m_3$. </p>
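<p>Evaluating those constants explicitly (a small sketch in exact arithmetic):</p>

```python
from fractions import Fraction

# p1(x) = m1 (x-1)(x-2) with p1(0) = 1  =>  m1 = 1/((0-1)(0-2)) = 1/2
# p2(x) = m2 x(x-2)     with p2(1) = 1  =>  m2 = 1/(1*(1-2))    = -1
# p3(x) = m3 x(x-1)     with p3(2) = 1  =>  m3 = 1/(2*(2-1))    = 1/2
m1 = Fraction(1, (0 - 1) * (0 - 2))
m2 = Fraction(1, 1 * (1 - 2))
m3 = Fraction(1, 2 * (2 - 1))

p1 = lambda x: m1 * (x - 1) * (x - 2)
p2 = lambda x: m2 * x * (x - 2)
p3 = lambda x: m3 * x * (x - 1)

assert [p1(x) for x in (0, 1, 2)] == [1, 0, 0]
assert [p2(x) for x in (0, 1, 2)] == [0, 1, 0]
assert [p3(x) for x in (0, 1, 2)] == [0, 0, 1]
print(m1, m2, m3)    # 1/2 -1 1/2
```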
|
1,507,526 | <p>Let $S$ be the portion of the sphere $x^2+y^2+z^2=9$, where $1\leq x^2+y^2\leq4$ and $z\geq0$. Calculate the surface area of $S$.</p>
<p>OK, I'm really confused with this one. I know I have to apply the surface area formula, possibly with spherical coordinates, but I can't seem to get the integral out.</p>
<p>I thought of using a spherical coordinate system, but after trying it I ended up with a three-coordinate system. I'm not even sure how to begin with this one.</p>
| Simon | 282,296 | <p>Cylindrical coordinates are the way to go!
Recall that if you have a surface S the surface integral is equal to
\begin{equation}
\int\int f(x,y,z) dS
\end{equation}
Well, you can represent z as a function of x and y.
Also recall that dS stands for the surface-area element at that point.
Therefore, you can rewrite the integral as
\begin{equation}
\int\int f(x,y,z(x,y))\sqrt{(\frac{\partial{z}}{\partial{x}})^{2} + (\frac{\partial{z}}{\partial{y}})^{2} + 1} dA
\end{equation}
Now, convert to cylindrical coordinates. You know that $\theta$ ranges from $0$ to $2\pi$ and, since $1\leq x^2+y^2\leq 4$, $r$ ranges from $1$ to $2$.</p>
<p>I believe you can go from there.</p>
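<p>Carrying this through (a sketch): on the upper sphere $z=\sqrt{9-r^2}$ one gets $\sqrt{z_x^2+z_y^2+1}=3/\sqrt{9-r^2}$, and with $r$ from $1$ to $2$ (since $1\leq x^2+y^2\leq 4$) the area is $2\pi\int_1^2 \frac{3r}{\sqrt{9-r^2}}\,dr = 6\pi\left(2\sqrt{2}-\sqrt{5}\right)$. A numerical check with Simpson's rule:</p>

```python
import math

def integrand(r):
    # sqrt(z_x^2 + z_y^2 + 1) * r on z = sqrt(9 - r^2) equals 3r / sqrt(9 - r^2)
    return 3.0 * r / math.sqrt(9.0 - r * r)

# Composite Simpson's rule over r in [1, 2]; theta contributes a factor 2*pi.
n = 1000                                  # even number of subintervals
h = 1.0 / n
acc = integrand(1.0) + integrand(2.0)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * integrand(1.0 + i * h)
area = 2.0 * math.pi * acc * h / 3.0

exact = 6.0 * math.pi * (2.0 * math.sqrt(2.0) - math.sqrt(5.0))
print(area, exact)                        # the two values should agree closely
```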
|
1,598,545 | <p>Maybe I am not well versed with the actual definition of mean, but I have a doubt. On most resources, people say that arithmetic mean is the sum of $n$ observations divided by n. So my first question: </p>
<blockquote>
<p>How does this formula work? Is there any derivation to it? If not,
then while creating this definition, what was the creator thinking?</p>
</blockquote>
<p>Okay, so using my intuition, I thought that it is the value that lies in the centre. And it worked for some cases, like the mean of $1$, $2$ and $3$ is $2$, which is the central value. But let's imagine a number line from $0$ to $9$. Now, I choose $3$ numbers, say $1$, $8$ and $9$. By the formula, I get that the mean is equal to $6$. But if the mean really is a central value, shouldn't it be $5$ (I know we call $5$ the median in this case)? It seems like the mean is getting closer to $8$ and $9$, which means it is not central. So my final question:</p>
<blockquote>
<p>Have I imagined the mean incorrectly? What kind of central value is the mean, really?</p>
</blockquote>
| Ross Millikan | 1,827 | <p>The arithmetic mean formula is a definition of the term, so there is no derivation. The mean does not involve the range that the numbers might be over, like your $1$ to $9$ example. It only involves the actual numbers. You can define the term central value to take a set of numbers and return half the sum of the max and the min. That seems to be what you are doing with your example of $1,8,9$. That is a fine definition. Whether it is useful or not is yet to be seen. Given $1,8,9$, the mean of $6$ reflects the fact that two of the numbers are high. It is also the expected value: if you draw many times with replacement from $1,8,9$, the sum will be close to $6$ times the number of draws. </p>
<p>There are several statistics that can be used similarly. You are trying to take a distribution of numbers and report one number that "gives a feel" for the set. The arithmetic mean is one. Others are the median, the mode, the geometric mean, etc. You need to look at your purpose and choose the one that suits the need, then be careful to say which you have used.</p>
|
1,598,545 | <p>Maybe I am not well versed with the actual definition of mean, but I have a doubt. On most resources, people say that arithmetic mean is the sum of $n$ observations divided by n. So my first question: </p>
<blockquote>
<p>How does this formula work? Is there any derivation to it? If not,
then while creating this definition, what was the creator thinking?</p>
</blockquote>
<p>Okay, so using my intuition, I thought that it is the value that lies in the centre. And it worked for some cases, like the mean of $1$, $2$ and $3$ is $2$, which is the central value. But let's imagine a number line from $0$ to $9$. Now, I choose $3$ numbers, say $1$, $8$ and $9$. By the formula, I get that the mean is equal to $6$. But if the mean really is a central value, shouldn't it be $5$ (I know we call $5$ the median in this case)? It seems like the mean is getting closer to $8$ and $9$, which means it is not central. So my final question:</p>
<blockquote>
<p>Have I imagined the mean incorrectly? What kind of central value is the mean, really?</p>
</blockquote>
| Ian | 83,396 | <p>I really don't think that people came up with the formula for the arithmetic mean by starting with a nice property and deriving the formula to obtain that property. However, one useful characterization of the mean is that it is the unique minimizer of the function $f(m)=\sum_{i=1}^n (x_i-m)^2$. That is, it is the number closest to the sample vector in the sense of the ordinary Euclidean distance. From this property it follows that $\sum_{i=1}^n (x_i-m)=0$, so that the signed deviations add up to zero.</p>
<p>The median has a similar characterization: it is a minimizer (in general not the only one) of the function $g(m)=\sum_{i=1}^n |x_i-m|$. So this is the closest to the sample vector in the sense of absolute distance. From this property we get $\sum_{i=1}^n \operatorname{sign}(x_i-m)=0$, so that there are an equal number of samples larger than the median and smaller than the median.</p>
<p>By the way, the metric defining the median is bad for a number of reasons. One emerges if you consider the linear regression problem for minimizing the sum of $p$th powers of the residuals where $p \geq 1$. When $p=1$ you have the metric which defines the median. In this case, if you have $N-1$ points on the line and $1$ point not on the line, one of the lines minimizing the sum of the absolute residuals is the line through the $N-1$ points regardless of how far away the outlier point is from the line. This is a bad property, the system should respond to that outlier at least a little bit.</p>
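<p>Both characterizations are easy to see numerically; a small sketch using the sample $1, 8, 9$ from the question, with a grid search standing in for the exact minimization:</p>

```python
data = [1, 8, 9]

def sum_squares(m):
    return sum((x - m) ** 2 for x in data)

def sum_abs(m):
    return sum(abs(x - m) for x in data)

grid = [i / 100 for i in range(0, 1001)]      # candidate centers 0.00 .. 10.00

mean_hat = min(grid, key=sum_squares)         # minimizer of squared deviations
median_hat = min(grid, key=sum_abs)           # minimizer of absolute deviations

print(mean_hat, median_hat)                   # 6.0 (the mean) and 8.0 (the median)
```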
|
761,286 | <p>Let $G$ be an infinite group of the form $G_1 \oplus G_2 \oplus \dots \oplus G_n$ where each $G_i$ is a <strong>non trivial</strong> group and $n>1$. Prove that $G$ is not cyclic.</p>
<p><strong>Attempt</strong> : Let $G = G_1 \oplus G_2 \oplus \dots \oplus G_n$ be cyclic.</p>
<p>then there exists $g =(g_1,g_2,\dots,g_n)$ that generates $G$, and such a $g$ is a generator if and only if:</p>
<p>$(i)~~ g_1$ generates $G_1$, $g_2$ generates $G_2$, and so on;</p>
<p>$(ii)~\gcd(|g_i|,|g_j|) =1$ whenever $i \neq j$.</p>
<p>Since a group of prime order is cyclic, if we take each $G_i$ with $|G_i|=p_i$, where the $p_i$ are distinct primes (so that $\gcd(p_i,p_j) =1$ for $i \neq j$), and let $g_i$ be a generator of prime order of $G_i$, shouldn't $G$ turn out to be cyclic?</p>
<p>Where could I be making a mistake? Thank you for the help.</p>
| ajotatxe | 132,456 | <p>It seems that the worst (and the best) scenario will give $n^2\sqrt n$ times.</p>
|
1,726,187 | <p>Recently in class our teacher told us about evaluating the sum of reciprocals of squares, that is $\sum_{n=1}^{\infty}\frac{1}{n^2}$. We began by proving that the partial sums satisfy $\sum_{k=1}^{n}\frac{1}{k^2}<2$ by induction. However, we actually proved a stronger result, namely that $\sum_{k=1}^{n}\frac{1}{k^2}\leq 2-\frac{1}{n}$, because we were told that it is much easier to prove the stronger result than the weaker one. My question is: why exactly is it easier? Is it because our induction hypothesis is stronger? And how would one prove specifically the weaker result using induction, without actually evaluating the sum to be $\frac{\pi^2}{6}$?</p>
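<p>For reference, the induction step we carried out in class (writing $S_n$ for the $n$-th partial sum) reduces to a one-line estimate:</p>

```latex
% Strengthened claim: S_n := \sum_{k=1}^{n} \frac{1}{k^2} \le 2 - \frac{1}{n}.
% Induction step:
S_{n+1} = S_n + \frac{1}{(n+1)^2}
        \le \Bigl(2 - \frac{1}{n}\Bigr) + \frac{1}{(n+1)^2}
        \le 2 - \frac{1}{n+1},
\quad\text{since}\quad
\frac{1}{(n+1)^2} \le \frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}.
```

<p>I can see that the extra $-\frac{1}{n}$ is exactly what makes this step go through; what I don't see is how to run the induction on the weaker bound directly.</p>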
| Jack D'Aurizio | 44,121 | <p>You may use Euler's acceleration method or just an iterated trick like my $(1)$ <a href="https://math.stackexchange.com/a/1409131/44121">here</a> to get:
$$ \zeta(2) = \sum_{n\geq 1}\frac{1}{n^2} = \color{red}{\sum_{n\geq 1}\frac{3}{n^2\binom{2n}{n}}}\tag{A}$$
and the last series converges pretty fast. Then you may notice that the last series comes out from a <a href="https://math.stackexchange.com/questions/878477/a-closed-form-of-sum-k-0-infty-frac-1k1k-gamma2-left-frack2">squared arcsine</a>. Oh, wait, that just gives <a href="https://math.stackexchange.com/a/617710/44121">another proof of</a>:
$$ \zeta(2)=\frac{\pi^2}{6}. \tag{B}$$</p>
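<p>A quick sketch of how fast the central-binomial series in $(\mathrm{A})$ converges compared with the defining series:</p>

```python
import math

target = math.pi ** 2 / 6

# 25 terms of the accelerated central-binomial series (A)
fast = sum(3.0 / (n * n * math.comb(2 * n, n)) for n in range(1, 26))

# 25 terms of the plain series sum 1/n^2, for comparison
slow = sum(1.0 / (n * n) for n in range(1, 26))

print(abs(fast - target))   # tiny: essentially machine precision
print(abs(slow - target))   # still around 0.04 after the same 25 terms
```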
|
1,859,810 | <p>Consider the function
$$
f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}\, .
$$
This is a bizarre and fascinating function. A few properties of this function that SEEM to be true:</p>
<p>1) $f(x)$ is $2\pi$-periodic and odd around $\pi$.</p>
<p>2) $\lim_{x\rightarrow \pi_-} f(x) = \ln 2$. (Can be proven by letting $x = \pi-\epsilon$, expanding the sine function, and taking the limit as $\epsilon\rightarrow 0$.)</p>
<p>3) $\int_0^{\pi}dx\, f(x) = \frac{\pi^3}{8}$ (Can be "proven" by integrating each $\mathrm{sign}\left(\sin(nx)\right)$ term separately. Side question: Is such a procedure on this jumpy function even meaningful?)</p>
<p>All of this despite the fact that I can't really prove that this function converges anywhere other than when $x$ is a multiple of $\pi$!</p>
<p>A graph of this function (e.g. in Mathematica) reveals an amazing fractal shape. My question: What is the fractal dimension of the graph of this function? Does the answer depend on which definition of fractal dimension we use (box dimension, similarity dimension, ...)?</p>
<p>This question doesn't come from anywhere other than from my desire to see if an answer exists.</p>
<p>As requested, a plot of this function on the range $x\in[0,2\pi]$:</p>
<p><a href="https://i.stack.imgur.com/LHvgb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LHvgb.png" alt="Graph of fractal function"></a>
Edited to add:</p>
<p>Other, perhaps more immediate, questions about this function:</p>
<p>1) Does it converge? Conjecture: It converges whenever $x/\pi$ is irrational, but doesn't necessarily diverge if $x/\pi$ is rational. See, e.g., $x = \pi$, where it converges to zero, and apparently to $\pm \ln 2$ on either side of $x = \pi$.</p>
<p>2) I would guess that it diverges as $x\rightarrow 0_+$. How does it diverge there? If this really is a fractal function, I would suppose that the set of points where it diverges is dense. For instance, it appears to have a divergence at $x = 2\pi/3$.</p>
<p>Edit 2:</p>
<p>Another thing that's pretty straightforward to prove is that:
$$
\lim_{x\rightarrow {\frac{\pi}{2}}_-} f(x) = \frac{\pi}{4} + \frac{\ln 2}{2}
$$
and
$$
\lim_{x\rightarrow {\frac{\pi}{2}}_+} f(x) = \frac{\pi}{4} - \frac{\ln 2}{2}
$$</p>
<p>Final Edit:</p>
<p>I realize now that the initial question about this function - what is its fractal dimension - is (to use a technical term) very silly. There are much more immediate and relevant questions, e.g. about convergence. I've already selected one of the answers below as answering a number of these questions.</p>
<p>One final point, for anyone who stumbles on this post in the future. The term $\mathrm{sign}(\sin(nx))$ is actually a <a href="https://en.wikipedia.org/wiki/Square_wave" rel="noreferrer">square wave</a>, and so we can use the usual Fourier series of a square wave to derive an alternate way of expressing this function:
$$
f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}
= \frac{4}{\pi}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}
\frac{\sin(n(2m-1)x)}{n(2m-1)}
$$
By switching the order of the sums and doing the $n$ sum first, this could also be represented as a weighted sum of <a href="https://en.wikipedia.org/wiki/Sawtooth_wave" rel="noreferrer">sawtooth waves</a>.</p>
| Marco Cantarini | 171,547 | <p>We can prove the convergence almost everywhere using the following </p>
<blockquote>
<p><strong>Theorem:</strong> Let $\varphi$ be a function such that $$\varphi\left(x\right)\in L^{2}\left(-\pi,\pi\right),\,\varphi\left(x+2\pi\right)=\varphi\left(x\right),\,\int_{0}^{2\pi}\varphi\left(x\right)dx=0.$$ Assume that $$\sup_{0<h\leq\delta}\left(\int_{0}^{2\pi}\left|\varphi\left(x+h\right)-\varphi\left(x\right)\right|^{2}dx\right)^{1/2}=O\left(\delta^{1/2}\right)$$ and let $\left(a_{n}\right)_{n}$ be a sequence of real numbers such that $$\sum_{n\geq2}a_{n}^{2}\log^{\gamma}\left(n\right)<\infty,\,\gamma>3.$$ Then the series $$\sum_{n\geq1}a_{n}\varphi\left(nx\right)$$ converges almost everywhere.</p>
</blockquote>
<p>(For a reference see V. F. Gaposhkin, “On series relative to the system $\left\{ \varphi\left(nx\right)\right\} $”, Mat. Sb. (N.S.), $69(111)$:$3$ ($1966$), $328–353$) </p>
<p>So it is sufficient to note that $\varphi\left(x\right)=\textrm{sign}\left(\sin\left(x\right)\right)$ verifies the conditions and $$\sum_{n\geq2}\frac{\log^{\gamma}\left(n\right)}{n^{2}}<\infty$$ for all $\gamma>0$. So we have that $$\sum_{n\geq1}\frac{\textrm{sign}\left(\sin\left(nx\right)\right)}{n}$$ converges almost everywhere.</p>
<p>At zero, a heuristic argument: for every $M>0$ there exists some $\delta>0$ such that, if $0<x<\delta$, we have $$M<\sum_{n\geq1}\frac{\textrm{sign}\left(\sin\left(nx\right)\right)}{n},$$ because if $x$ is “small” we have $\textrm{sign}\left(\sin\left(nx\right)\right)=1$ for a long initial stretch of $n$, and so, since $\sum_{n\geq1}\frac{1}{n}$ diverges, we can find a sufficiently small $\delta$ such that the series is bigger than every fixed $M$, while the negative contributions are too “small” for a significant cancellation.</p>
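<p>This heuristic is easy to see in a computation; a short sketch (choosing $N$ with $Nx<\pi$ guarantees every sign below is $+1$):</p>

```python
import math

def partial_sum(x, N):
    """N-term partial sum of sum_n sign(sin(n x)) / n."""
    s = 0.0
    for n in range(1, N + 1):
        v = math.sin(n * x)
        s += (1.0 if v > 0 else -1.0 if v < 0 else 0.0) / n
    return s

x = 1e-3
N = 3000                      # N * x = 3 < pi, so sign(sin(n x)) = +1 for all n <= N
s = partial_sum(x, N)
print(s)                      # the harmonic number H_3000, roughly log(3000) + 0.577
```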
|
1,859,810 | <p>Consider the function
$$
f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}\, .
$$
This is a bizarre and fascinating function. A few properties of this function that SEEM to be true:</p>
<p>1) $f(x)$ is $2\pi$-periodic and odd around $\pi$.</p>
<p>2) $\lim_{x\rightarrow \pi_-} f(x) = \ln 2$. (Can be proven by letting $x = \pi-\epsilon$, expanding the sine function, and taking the limit as $\epsilon\rightarrow 0$.)</p>
<p>3) $\int_0^{\pi}dx\, f(x) = \frac{\pi^3}{8}$ (Can be "proven" by integrating each $\mathrm{sign}\left(\sin(nx)\right)$ term separately. Side question: Is such a procedure on this jumpy function even meaningful?)</p>
<p>All of this despite the fact that I can't really prove that this function converges anywhere other than when $x$ is a multiple of $\pi$!</p>
<p>A graph of this function (e.g. in Mathematica) reveals an amazing fractal shape. My question: What is the fractal dimension of the graph of this function? Does the answer depend on which definition of fractal dimension we use (box dimension, similarity dimension, ...)?</p>
<p>This question doesn't come from anywhere other than from my desire to see if an answer exists.</p>
<p>As requested, a plot of this function on the range $x\in[0,2\pi]$:</p>
<p><a href="https://i.stack.imgur.com/LHvgb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LHvgb.png" alt="Graph of fractal function"></a>
Edited to add:</p>
<p>Other, perhaps more immediate, questions about this function:</p>
<p>1) Does it converge? Conjecture: It converges whenever $x/\pi$ is irrational, but doesn't necessarily diverge if $x/\pi$ is rational. See, e.g., $x = \pi$, where it converges to zero, and apparently to $\pm \ln 2$ on either side of $x = \pi$.</p>
<p>2) I would guess that it diverges as $x\rightarrow 0_+$. How does it diverge there? If this really is a fractal function, I would suppose that the set of points where it diverges is dense. For instance, it appears to have a divergence at $x = 2\pi/3$.</p>
<p>Edit 2:</p>
<p>Another thing that's pretty straightforward to prove is that:
$$
\lim_{x\rightarrow {\frac{\pi}{2}}_-} f(x) = \frac{\pi}{4} + \frac{\ln 2}{2}
$$
and
$$
\lim_{x\rightarrow {\frac{\pi}{2}}_+} f(x) = \frac{\pi}{4} - \frac{\ln 2}{2}
$$</p>
<p>Final Edit:</p>
<p>I realize now that the initial question about this function - what is its fractal dimension - is (to use a technical term) very silly. There are much more immediate and relevant questions, e.g. about convergence. I've already selected one of the answers below as answering a number of these questions.</p>
<p>One final point, for anyone who stumbles on this post in the future. The term $\mathrm{sign}(\sin(nx))$ is actually a <a href="https://en.wikipedia.org/wiki/Square_wave" rel="noreferrer">square wave</a>, and so we can use the usual Fourier series of a square wave to derive an alternate way of expressing this function:
$$
f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}
= \frac{4}{\pi}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}
\frac{\sin(n(2m-1)x)}{n(2m-1)}
$$
By switching the order of the sums and doing the $n$ sum first, this could also be represented as a weighted sum of <a href="https://en.wikipedia.org/wiki/Sawtooth_wave" rel="noreferrer">sawtooth waves</a>.</p>
| Mark McClure | 21,361 | <p>The series certainly converges when $x$ is a rational multiple of $\pi$, say $x=p\pi/q$ where $p/q$ is in lowest terms. Here is an outline of a proof of this using <a href="https://en.wikipedia.org/wiki/Dirichlet's_test" rel="noreferrer">Dirichlet's test</a>.</p>
<p>The numbers
$$\alpha_k = k\,\pi\frac{p}{q} \mod 2\pi$$
form a group under addition. In particular, each element $\alpha$ has an additive inverse $\beta$ with the property that $\sin(\alpha)=-\sin(\beta)$. As a result, the sequence $(\alpha_k)_k$ is periodic with the same number of positive and negative ones in a period. Thus, if we define
$$b_n = \text{sign}(\sin(n\pi p/q)),$$
it's easy to see that the sequence
$$S_N = \sum_{n=1}^N b_n$$
is bounded. We can now apply Dirichlet's test (as stated in the link above) using $a_n=1/n$.</p>
<p>I suspect that a similar argument will work for irrational multiples of $\pi$ using uniform distribution of $nx\mod 2\pi$. I also suspect that such an argument might be very hard and depend on how well approximable $x/\pi$ is by rationals.</p>
<p>Returning to rational multiples of $\pi$, these types of sequences can be computed in closed form using various tricks involving transforms of the harmonic sequence. At $x=\pi/2$, for example, the series reduces to
$$\sum_{n=0}^{\infty} \frac{(-1)^{n}}{2n+1}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots = \frac{\pi}{4}.$$
At $x=2\pi/3$, the sum is
$$\sum _{n=0}^{\infty } \left(\frac{1}{3 n+1}-\frac{1}{3 n+2}\right) = \frac{\pi }{3 \sqrt{3}}.$$
There is a key difference between these two series, however. For the first we have
$$\left\{\text{sign}(\sin(n\pi/2))\right\}_{n=1,2,3,4} = \{1,0,-1,0\},$$
which has two zeros. Numbers close to $\pi/2$ but just a little less will start with the pattern $\{1,1,-1,-1\}$. Numbers <em>very</em> close to and less than $\pi/2$ will maintain this pattern in the sum for a long time and result in a sum that's close to
$$\sum _{n=0}^{\infty } \left(\frac{1}{4 n+1}+\frac{1}{4 n+2}-\frac{1}{4 n+3}-\frac{1}{4 n+4}\right) = \frac{1}{4} (\pi +\log (4)) \approx 1.13197.$$
Numbers close to $\pi/2$ and just <em>above</em> will yield a sum close to
$$\sum _{n=0}^{\infty } \left(\frac{1}{4 n+1}-\frac{1}{4 n+2}-\frac{1}{4 n+3}+\frac{1}{4 n+4}\right) = \frac{1}{4} (\pi -\log (4)) \approx 0.438825.$$
This leads to the appearance of a jump discontinuity at $\pi/2$ where the actual value is, in fact, between the two.</p>
<p>For $x=2\pi/3$, by contrast, we have
$$\left\{\text{sign}(\sin(2n\pi/3))\right\}_{n=1,2,3} = \{1,-1,0\},$$
which has just one zero. For values of $x$ close to $2\pi/3$ but just a little less, the sequence starts $\{1,-1,-1\}$ and we can make the partial sums as large in absolute value as we like (but negative) by choosing $x$ to be sufficiently close to $2\pi/3$. On the other side of $2\pi/3$ the sequence is $\{1,-1,1\}$ and we have similar but positive behavior. This leads to the appearance of an asymptote like feature at $2\pi/3$.</p>
<p>Note that these arguments are not complete. The argument near $2\pi/3$ is very much like Marco's argument near zero. Detailed analysis for irrational multiples of $\pi$ is likely to be quite sensitive and require a strong knowledge of uniform distribution of sequences mod $2\pi$. I also suspect that both behaviors are genuine and dense in the real line.</p>
<p>Finally, I don't think that fractal dimension is a reasonable concept to apply here. While certain parts of the graph look like other parts, self-similar sets and their generalizations are compact sets. This set is neither closed nor bounded. Neither similarity dimension nor box-counting dimension will be well-defined here. Hausdorff dimension will, I suppose, be defined. I don't think it's likely to be easy to compute, though. The Hausdorff dimension of Weierstrass's function is an open question.</p>
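<p>The closed forms quoted above are easy to confirm numerically; a sketch that sums the grouped series directly (avoiding floating-point signs of $\sin$ altogether):</p>

```python
import math

def grouped_sum(period, signs, groups):
    """Sum over k < groups of sum_j signs[j] / (k * period + j + 1)."""
    s = 0.0
    for k in range(groups):
        base = k * period
        for j, sg in enumerate(signs):
            if sg:
                s += sg / (base + j + 1)
    return s

G = 100_000
left  = grouped_sum(4, (1, 1, -1, -1), G)   # limit from the left of pi/2
right = grouped_sum(4, (1, -1, -1, 1), G)   # limit from the right of pi/2
mid   = grouped_sum(3, (1, -1, 0), G)       # value at x = 2*pi/3

print(left,  (math.pi + math.log(4)) / 4)   # both near 1.13197
print(right, (math.pi - math.log(4)) / 4)   # both near 0.438825
print(mid,   math.pi / (3 * math.sqrt(3)))  # both near 0.604600
```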
|
535,757 | <p>The exercise is to give an example for two sets $M$ and $N$, and functions $f$ and $g$, for which $f \circ g = id_M$, but $g \circ f \ne id_N$.</p>
<p>My idea is a bit based on my computer programming background, where with integer division <code>(x/2)*2</code> need not equal <code>x</code> (for example, <code>(1/2)*2</code> is <code>0</code>). Here it is:</p>
<p>$$M=N=\mathbb{N_0}.$$
$$f: \mathbb{N_0} \rightarrow \mathbb{N_0}, g: \mathbb{N_0} \rightarrow \mathbb{N_0}.$$
$$f(x) = 2x, g(x) = x \div 2.$$</p>
<p>Thus,</p>
<p>$$(g\circ f)(x) = g(f(x)) = g(2x) = 2x\div 2 = x = id_\mathbb{N_0}$$
$$(f\circ g)(x) = f(g(x)) = f(x\div2) = (x\div2)\cdot2 \ne id_\mathbb{N_0}$$</p>
<p>Because $f(g(1)) = f(1\div 2) = f(0) = 2\cdot 0 = 0$, and $0\ne 1$, it follows that $(f\circ g) \ne id_\mathbb{N_0}$.</p>
<p>Is my example valid?</p>
| Peter LeFanu Lumsdaine | 2,439 | <p><strong>Yes, your example is valid, and your overall argument for it is good.</strong> You have a few minor errors in details and notation, though.
$\newcommand{\id}{\mathit{id}} \newcommand{\N}{\mathbb{N}}$</p>
<ul>
<li><p>In your line “$(g \circ f)(x) = \cdots = x = \id_\mathbb{N}$”, the last item should be not $\id_\mathbb{N}$, but $\id_\mathbb{N}(x)$. All the earlier terms in this sequence of equalities are <em>values</em> of the function; $\id_\N$ refers to the whole function, not just the individual value.</p></li>
<li><p>In the line “$(f \circ g)(x) = \cdots \neq \id_\N$”, you again need to be more careful distinguishing between whole functions and individual values. At the level of individual values, it’s not always true that $(f \circ g)(x) \neq \id_\N(x)$; there are some values of $x$, e.g. $x = 2$, that make them equal. The key point is that there are some values, e.g. $x = 3$, that make them non-equal — $(f \circ g)(3) = 2 \neq \id_\N(3)$ — which implies that as functions, $(f \circ g) \neq \id_\N$.</p></li>
<li><p>As others have said, your function $x \div 2$ would usually be called $\lfloor \frac{x}{2} \rfloor$. Since you explicitly define what you mean by $x \div 2$, though, I wouldn’t really consider this an error, just an issue of style.</p></li>
</ul>
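<p>For what it's worth, the example is also trivial to machine-check (a sketch, with integer floor division <code>//</code> playing the role of $x \div 2$):</p>

```python
f = lambda x: 2 * x      # doubling
g = lambda x: x // 2     # integer "halving" (floor division)

# g composed after f is the identity on the natural numbers...
assert all(g(f(x)) == x for x in range(1000))

# ...but f composed after g is not: it disagrees with the identity at every odd x.
assert f(g(1)) == 0
assert all(f(g(x)) != x for x in range(1, 1000, 2))
print("g after f is the identity, f after g is not")
```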
|