qid | question | author | author_id | answer |
|---|---|---|---|---|
1,070,008 | <p>Is being $T_1$ a topological invariant?
Is being first-countable a topological invariant?
I need a little hint as to whether or not these properties are topological invariants.</p>
| Sultan of Swing | 144,369 | <p>Let $X$ be $T_{1}$. Let $f:X\rightarrow Y$ be a homeomorphism. This tells us that $f$ is an open mapping (i.e., open sets map to open sets). Let $x,y \in X$ be distinct, and let $B$ be an open set containing $x$ but not $y$. Since $f$ is an open map, $f(B)$ will be an open set in $Y$. Can we be certain that the open set $f(B)$ contains $f(x)$ but does not contain $f(y)$? If so, then it follows that $Y$ is also a $T_{1}$ space, and thus being $T_1$ is a topological invariant.</p>
<p>Secondly, regarding first-countable spaces, let $X$ be a first-countable space, and let $x$ be any element of $X$. Then $x$ has a countable neighborhood basis. Since $f$ is an open (closed) map, it will map every open (closed) neighborhood around $x$ (in $X$) to one around $f(x)$ (in $Y$). Thus, it follows that $f(x)$ has a countable neighborhood basis, and thus $Y$ is first-countable.</p>
|
134,455 | <p>I have an expression which consists of terms with undefined function calls <code>a[n]</code>:</p>
<pre><code>example = 1 - c^2 + c a[1] a[2] + 1/2 c^2 a[1]^2 a[2]^2 + c a[1] a[3]
</code></pre>
<p>Now I want to transform each term with individual <code>a</code>s into a different function <code>v[m1,m2,m3]</code>, such that <code>m_i</code> is the exponent of <code>a[i]</code>. In addition, I want the coefficient of each term to be multiplied by (exponent of <code>a[i]</code> + 1) for every <code>i</code>:</p>
<pre><code>fct[example]
(* example -> (1 - c^2)*(0+1)*(0+1)*(0+1)*v[0,0,0] + c*(1+1)*(1+1)*(0+1)*v[1,1,0] +
+ 1/2 c^2*(2+1)*(2+1)*(0+1)*v[2,2,0] + c*(1+1)*(0+1)*(1+1)*v[1,0,1]
= (1 - c^2)*v[0,0,0] + c*4*v[1,1,0] +
+ 1/2 c^2*9*v[2,2,0] + c*4*v[1,0,1]
*)
</code></pre>
<hr>
<p>I found one solution, but it is annoyingly slow, and I was hoping that somebody can find a faster method. The idea is to multiply <code>example</code> by <code>a[1]^2 * a[2]^2 * a[3]^2</code> (so that every term has the form <code>coeff * a[1]^n1*a[2]^n2*a[3]^n3</code>), then use a replacement rule:</p>
<pre><code>fct[expr_] :=
Expand[expr*a[1]^2*a[2]^2*a[3]^2] /.
{a[1]^n1_*a[2]^n2_*a[3]^n3_ -> (n1-1)*(n2-1)*(n3-1)*v[n1-2, n2-2, n3-2]}
fct[example] (* v[0, 0, 0] - c^2 v[0, 0, 0] + 4 c v[1, 0, 1] + 4 c v[1, 1, 0] +
9/2 c^2 v[2, 2, 0] *)
</code></pre>
<hr>
<p>My specific questions:</p>
<ol>
<li><p>How can I make the algorithm general for arbitrary <code>a[n]</code> with n>3?
(For the way I did it here, I don't know how to access the exponents in a more general way.)</p></li>
<li><p>How can I make the algorithm faster? (It already takes ~5 sec for n=8 and roughly 80 terms of <code>a[1]^n1*a[2]^n2*a[3]^n3</code> in <code>example</code>.)</p></li>
</ol>
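The requested mapping can be prototyped outside Mathematica; here is a plain-Python sketch over a hand-encoded term list (the dict encoding of <code>example</code> below is my own illustration, not part of the question):

```python
from fractions import Fraction

# Hand-encoded terms of example = 1 - c^2 + c a1 a2 + 1/2 c^2 a1^2 a2^2 + c a1 a3,
# keyed by (power of c, (exponent of a1, a2, a3)) -> rational coefficient.
example = {
    (0, (0, 0, 0)): Fraction(1),
    (2, (0, 0, 0)): Fraction(-1),
    (1, (1, 1, 0)): Fraction(1),
    (2, (2, 2, 0)): Fraction(1, 2),
    (1, (1, 0, 1)): Fraction(1),
}

def transform(poly):
    """Multiply each coefficient by prod(exponent_i + 1); the exponent
    tuple itself becomes the index of v[m1, m2, m3]."""
    out = {}
    for (cpow, exps), coeff in poly.items():
        weight = 1
        for e in exps:
            weight *= e + 1
        out[(cpow, exps)] = out.get((cpow, exps), 0) + coeff * weight
    return out

result = transform(example)
```

This runs in time linear in the number of terms, so it also suggests that the quadratic blow-up in the multiply-then-replace approach is avoidable.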
| BoLe | 6,555 | <pre><code>v /: v[x__] v[y__] := Apply[v, {x} + {y}]
transform[expr_] :=
Module[{max, temp, free},
max = Max@Cases[expr, a[i_] :> i, Infinity];
temp = expr /. a[i_]^p_. :> (p + 1)*(v @@ (p*UnitVector[max, i]));
free = Cases[temp, x_ /; FreeQ[x, v]];
With[{t = Total[free]},
temp - t + (v @@ ConstantArray[0, max])*t]]
transform[example]
</code></pre>
<blockquote>
<p><code>(1 - c^2) v[0, 0, 0] + 4 c v[1, 0, 1] + 4 c v[1, 1, 0] + 9/2 c^2 v[2, 2, 0]</code></p>
</blockquote>
|
85,126 | <p>Does anyone have an implementation for <code>AnglePath</code> (see <a href="http://reference.wolfram.com/language/ref/AnglePath.html" rel="nofollow"><code>AnglePath</code> Documentation</a> and <a href="http://blog.wolfram.com/2015/05/21/new-in-the-wolfram-language-anglepath/" rel="nofollow">example usage</a>) in <em>Mathematica</em> 10.0?</p>
| KennyColnago | 3,246 | <p>For the first usage, with an input list <code>t</code> of angles, I used:</p>
<pre><code>anglePath[t_?VectorQ] :=
With[{a = Accumulate[t]},
Join[{{0., 0.}}, Accumulate[Transpose[{Cos[a], Sin[a]}]]]]
</code></pre>
<p>For the second usage, with an input matrix of <code>{r,t}</code> pairs, I used:</p>
<pre><code>anglePath[t_?MatrixQ] :=
With[{a = Accumulate[t[[All, 2]]]},
Join[{{0., 0.}},
Accumulate[t[[All, 1]]*Transpose[{Cos[a], Sin[a]}]]]
]
</code></pre>
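The cumulative-sum recurrence is easy to cross-check outside <em>Mathematica</em>; here is a plain-Python sketch of the vector case (my own illustration, not part of the answer):

```python
import math
from itertools import accumulate

def angle_path(turns):
    """Turtle walk: each entry of `turns` is added to the running heading,
    then a unit step is taken; the path starts at the origin."""
    points = [(0.0, 0.0)]
    for heading in accumulate(turns):
        x, y = points[-1]
        points.append((x + math.cos(heading), y + math.sin(heading)))
    return points

path = angle_path([math.pi / 2, math.pi / 2, math.pi / 2])
```

Three quarter-turns trace the points $(0,0)\to(0,1)\to(-1,1)\to(-1,0)$, matching what `AnglePath` would return.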
<p>Borrowing from the blog post:</p>
<pre><code>Graphics[
{Thick,
MapIndexed[{ColorData["SandyTerrain", First[#2]/110], Line[#]} &,
Partition[anglePath[Table[{r,119.4*Degree},{r,0,1.1,0.01}]],
2, 1]]},
Background -> Black]
</code></pre>
<p><img src="https://i.stack.imgur.com/TGoGt.png" alt="angle path graphic"></p>
|
2,011,754 | <p>Can somebody help me to solve this equation?</p>
<p>$$(\frac{iz}{2+i})^3=-8$$ ?
I'm translating this into</p>
<p>$(\frac{iz}{2+i})=-2$</p>
<p>But I reckon it's wrong ...</p>
| Fred | 380,717 | <p>Determine the solutions $w_1,w_2,w_3$ of the equation $w^3=-8$.</p>
<p>Then solve $\frac{iz}{2+i}=w_j $ for j=1,2,3</p>
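Fred's recipe can be checked numerically; a small sketch (my own) using <code>cmath</code>:

```python
import cmath

# The three cube roots of -8: w_k = 2 exp(i*pi*(2k+1)/3) for k = 0, 1, 2.
roots = [2 * cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]

# Solving i z / (2+i) = w_j gives z_j = w_j (2+i) / i.
solutions = [w * (2 + 1j) / 1j for w in roots]
```

In particular $w=-2$ gives $z=-2(2+i)/i=-2+4i$, which is why the single-root reduction in the question misses two solutions.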
|
2,585,466 | <p>I have two growth curve data sets, A (Martians) and B (Venusians): data point sets of age (0 (birth) - 250 months, X axis) against height (0 - 200 centimeters, Y axis). The first set (A) contains 67 X-Y point pairs; the second set (B) contains 27 point pairs. I have fit both data sets to my favorite version of the Logistic Equation using NonlinearModelFit. NonlinearModelFit returns estimates for my two independent variables: Increment (N0) and Time Coefficient (k). I then invoked "ParameterTable", calculating (1) Standard Errors, (2) t-Statistics, and (3) P-Values for both of the curve fitting exercises, Martians and Venusians. Of these three, which parameter indicates a better fit to an energy-conservative logistic equilibrium: Standard Errors on the calculated Time Coefficients (k), or t-Statistics on the calculated Time Coefficients (k)? Is growth on Mars more of an energy-conservative mechanical process than growth on Venus? Are data sets with different numbers of point pairs directly comparable on Standard Errors, t-Statistics, and P-Values? </p>
| prog_SAHIL | 307,383 | <p>$$y^2=4ax$$ $$xy=c^2$$</p>
<p>As Arthur pointed, we need to find their point of intersection and calculate $dy\over{dx}$ of both curves at this point.</p>
<p>(Note that this is the angle between the tangents at that point; the angle between two curves is defined as the angle between their tangents at the intersection.)</p>
<p>Put $x$ from first equation into the other.</p>
<p>We get, $$\frac{y^3}{4a}=c^2 $$ Now use the given relation, $c^4=32a^4$</p>
<p>you get, $$\frac{y^6}{16a^2}=32a^4$$
$$y^6=512a^6$$
$$y=2^{3/2}a$$ </p>
<p>Using this we get, $x=2a$</p>
<p>Calculating derivatives, for the first curve: $\frac{dy}{dx}=\frac{1}{2}\sqrt{\frac{4a}{x}}$</p>
<p>Substituting $x=2a$: $\frac{dy}{dx}=\frac{1}{\sqrt{2}}$ </p>
<p>For the other equation: $\frac{dy}{dx}=-\frac{c^2}{x^2}$</p>
<p>Substituting $x=2a$ (and $c^2=4\sqrt{2}\,a^2$ from the given relation $c^4=32a^4$): $\frac{dy}{dx}=-\sqrt{2}$</p>
<p>Their product is $-1$, <strong>hence proved.</strong></p>
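With a concrete value of $a$ the computed intersection and slopes can be sanity-checked numerically; a small sketch (my own, taking $a=1$):

```python
import math

a = 1.0
c2 = math.sqrt(32) * a**2          # c^2, from the given relation c^4 = 32 a^4
x = 2 * a                          # intersection abscissa found above
y = 2**1.5 * a                     # y = 2^(3/2) a

slope_parabola = 0.5 * math.sqrt(4 * a / x)   # dy/dx of y^2 = 4ax
slope_hyperbola = -c2 / x**2                  # dy/dx of xy = c^2
```

The point lies on both curves and the slopes multiply to $-1$, confirming the tangents are perpendicular.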
|
815,661 | <p>Let $m$ be the product of the first $n$ primes ($n > 1$), in the following expression:</p>
<p>$$m=2⋅3\cdots p_n$$</p>
<p>I want to prove that $(m-1)$ is not a perfect square.</p>
<p>I found two ways that might prove this. My problem is with the SECOND way.</p>
<p><strong>First solution (seems to be working) :</strong> </p>
<p>The first way that I used is this : </p>
<p>Proof by contradiction: assume that $m-1$ is a perfect square, i.e. $m-1 = x^2$; then</p>
<p>$m=x^2+1=x^2-(-1)=(x-(-1))(x+(-1))=(x+1)(x-1)$</p>
<p>So we have either : </p>
<ol>
<li><p>$(x+1)$ is even and $(x-1)$ is even </p></li>
<li><p>$(x+1)$ is even and $(x-1)$ is odd</p></li>
<li><p>$(x-1)$ is even and $(x+1)$ is odd</p></li>
</ol>
<p>First case : $(x+1)$ is even and $(x-1)$ is even , then $m$ looks like this : </p>
<p>$m=2⋅otherNumbersA⋅2⋅otherNumbersB$ </p>
<p>If we disregard $2$ then $m$ is a multiplication of $n-1$ prime numbers , then </p>
<p>$m$ is a multiplication of : $2 \cdot bigPrimeNumber$ . Contradiction . </p>
<p>The other two cases are just the same .</p>
<p><strong>Second solution (my problem) :</strong></p>
<p>What I'm interested in is the following solution (that I'm stuck in) :</p>
<p>Proof by contradiction: assume that $m-1 = x^2$. Since $m=2⋅3\cdots p_n$, $m$ is divisible by $3$, so we can write $m-1\equiv 2 \pmod 3$, which means that:</p>
<p>$m-1\equiv 2 \pmod 3 \implies (m-1)-2=3q,\ q\in \mathbb{N} \implies m-3=3q \implies m=3(1+q)$</p>
<p>Meaning : </p>
<p>$m-1=x^2$</p>
<p>$m-1\equiv 2 \pmod 3$</p>
<p>$x^2\equiv 2 \pmod 3$</p>
<p>How do I continue from here? How can I use $x^2\equiv 2 \pmod 3$ to reach a contradiction?</p>
<p>Thanks</p>
| Mark Fischler | 150,362 | <p>The follow-on to the previous answer is that since $x^2 \equiv 2 \mod 3$ has no solution,
and $m-1 \equiv 2 \mod 3$, there cannot be a number $x$ such that $m-1 = x^2$. </p>
<p>It is worth pointing out that the first "solution" given is completely bogus because it contains a mistake in the first equation, which ends up reading $[x^2 + 1 ]= m = (x+1)(x-1) [= x^2 - 1]$.</p>
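Both observations are easy to confirm computationally; a quick sketch (my own, not part of the answer):

```python
from math import isqrt

# Squares modulo 3 only take the values 0 and 1, so x^2 = 2 (mod 3) is impossible.
square_residues = sorted({x * x % 3 for x in range(3)})

# Spot-check: m - 1 is not a perfect square for the first few primorials m (n > 1).
primorials = [6, 30, 210, 2310, 30030, 510510]
not_squares = [isqrt(m - 1) ** 2 != m - 1 for m in primorials]
```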
|
1,508,863 | <p>I have this homework problem assigned but I'm confused as to how to solve it:</p>
<p>For $n>2$ and $a\in\mathbb{Z}$ with $\gcd(a,n)=1$, show that $o_n(a)=m$ is odd $\implies o_n(-a)=2m$.</p>
<p>(where $o_n(a)=m$ means that $a$ has order $m$ modulo $n$).</p>
<p>We were also given this hint: Helpful to consider when $o_p(-a)$ is odd and when it is even.</p>
<p>Thanks for any help!</p>
| Thomas Andrews | 7,933 | <p>Note that:</p>
<p>$$e^{-1}=\sum_{k=0}^\infty \frac{(-1)^k}{k!}$$</p>
<p>Then:</p>
<p>$$\frac{n!}e=n!e^{-1} = \left(\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\right) + \sum_{k=n+1}^{\infty} (-1)^{k}\frac{n!}{k!}$$</p>
<p>Show that if $a_n=\sum_{k=n+1}^{\infty} (-1)^{k}\frac{n!}{k!}$ then $0<|a_{n}|<1$ and $a_n>0$ if and only if $n$ is odd.</p>
<p>So when $n$ is odd, the value is:
$$\left\lfloor\frac{n!}{e}\right\rfloor=\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\tag{1}$$
When $n$ is even it is one less:
$$\left\lfloor\frac{n!}{e}\right\rfloor=-1+\sum_{k=0}^{n} (-1)^k\frac{n!}{k!}\tag{2}$$</p>
<p>Now, almost all of these terms are even. The last term $n!/n!=1$ is odd. When $n$ is odd, the second-to-last term $n!/(n-1)!$ is odd, also. But all other terms are even.</p>
<p>So for $n$ odd, there are two odd terms in the sum, $k=n,n-1$.</p>
<p>For $n$ even, there are two odd terms in the sum, $-1$ and $k=n.$</p>
<hr>
<p>The trick, then, is to show that the $a_n$ has these properties:
$$\begin{align}
&0<|a_n|<1\\
&a_n>0\iff n\text{ is odd}
\end{align}$$</p>
<p>To show these, we note that $\frac{n!}{k!}$ is strictly decreasing for $k>n$ and $(-1)^k\frac{n!}{k!}$ is alternating. In general, any alternating sum of a decreasing series converges to a value strictly between $0$ and the first term of the sequence, which in this case is $\frac{(-1)^{n+1}}{n+1}.$</p>
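Formulas (1) and (2) can be verified directly for small $n$; a sketch (my own):

```python
import math

def floor_n_factorial_over_e(n):
    """Alternating-sum formulas (1)/(2): the sum itself when n is odd,
    one less when n is even."""
    s = sum((-1) ** k * math.factorial(n) // math.factorial(k) for k in range(n + 1))
    return s if n % 2 == 1 else s - 1

values = [floor_n_factorial_over_e(n) for n in range(1, 15)]
```

(Floating-point comparison against $n!/e$ is safe in this range because the fractional part $|a_n|$ stays well away from $0$ and $1$.)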
|
1,508,863 | <p>I have this homework problem assigned but I'm confused as to how to solve it:</p>
<p>For $n>2$ and $a\in\mathbb{Z}$ with $\gcd(a,n)=1$, show that $o_n(a)=m$ is odd $\implies o_n(-a)=2m$.</p>
<p>(where $o_n(a)=m$ means that $a$ has order $m$ modulo $n$).</p>
<p>We were also given this hint: Helpful to consider when $o_p(-a)$ is odd and when it is even.</p>
<p>Thanks for any help!</p>
| Micah | 30,836 | <p>Following @Vladimir's comment, I can show that $a=3e$ has this property. I don't find the proof very enlightening, though...</p>
<p>We have</p>
<p>$$
\frac{n!}{3e} = \sum_{k=0}^n \frac{1}{3}\frac{n!}{k!}(-1)^k + E
$$
where $E$ is an error term that is less than $1$ in absolute value and also small by comparison with the other terms — so it won't affect the parity of the floor except in the case where it's negative and the initial sum is an integer.</p>
<p>In that initial sum, all but the last three individual terms will be even integers, as they will be multiples of $\frac{n(n-1)(n-2)}{3}=2\binom{n}{3}$. So we can neglect them.</p>
<p>The last three terms will take the form
$$
(-1)^n \frac{1-n+n(n-1)}{3}=(-1)^n \frac{(n-1)^2}{3}
$$</p>
<p>What this all boils down to is that we need $\left\lfloor (-1)^n \frac{(n-1)^2}{3}\right \rfloor$ to be even except when $\frac{(n-1)^2}{3}$ is actually an integer and also $n$ is even (which is the case where the error term is negative): that is, we want to check that $\left\lfloor (-1)^n \frac{(n-1)^2}{3}\right \rfloor$ is odd when $n \equiv 4 \pmod{6}$ and even otherwise. Which... it is, but it's not clear what the unifying principle is here, if any. And doing this for $11e$ seems possible, but horrifying. (One easier approach to proving this for $a=11e$ would be to just notice that, by this kind of argument, we need only consider the congruence class of $n$ modulo $22$, and then explicitly checking that $\lfloor 0!/(11e)\rfloor, \lfloor 1!/(11e)\rfloor,\dots,\lfloor 21!/(11e)\rfloor$ are all even. But that's if anything even less enlightening.)</p>
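Assuming the property under discussion is that $\lfloor n!/(3e)\rfloor$ is always even, it can at least be spot-checked with exact rational arithmetic; a sketch (my own, using a truncated series for $e$):

```python
from fractions import Fraction
from math import factorial

# Rational approximation of e, accurate to roughly 1/60! -- far more than needed here.
e_approx = sum(Fraction(1, factorial(k)) for k in range(60))

def floor_quotient(n):
    """floor(n! / (3e)), computed with the rational approximation of e."""
    return int(Fraction(factorial(n)) / (3 * e_approx))

parities = [floor_quotient(n) % 2 for n in range(21)]
```

The truncation error of `e_approx` is many orders of magnitude smaller than the distance from $n!/(3e)$ to the nearest integer in this range, so the floors are exact.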
|
2,545,226 | <p>Suppose $a_n$ is a positive sequence but not necessarily monotonic. </p>
<p>For the series $\sum_{n=1}^\infty \frac{1}{a_n}$ and $\sum_{n=1}^\infty \frac{a_n}{n^2}$ I can find examples where both diverge: $a_n = n$, and where one converges and the other diverges: $a_n = n^2$.</p>
<p>Can we find example where both converges?</p>
| Reiner Martin | 248,912 | <p>No, by the Cauchy-Schwarz inequality we have
$$
+\infty=\sum_{n=1}^\infty \frac{1}{n} = \sum_{n=1}^\infty \frac{1}{\sqrt{a_n}} \cdot \frac{\sqrt{a_n}}{n} \le \sqrt{\sum_{n=1}^\infty \frac{1}{a_n} \cdot \sum_{n=1}^\infty \frac{a_n}{n^2}}.
$$</p>
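The same inequality holds for partial sums, which makes it easy to illustrate numerically; a sketch (my own, with the sample sequence $a_n = n^2$):

```python
import math

N = 1000
a = [n * n for n in range(1, N + 1)]        # sample positive sequence a_n = n^2

lhs = sum(1.0 / n for n in range(1, N + 1))             # partial harmonic sum
s1 = sum(1.0 / an for an in a)                          # partial sum of 1/a_n
s2 = sum(an / n**2 for n, an in enumerate(a, start=1))  # partial sum of a_n/n^2
rhs = math.sqrt(s1 * s2)
```

Since the harmonic sum on the left diverges, at least one of the two series on the right must diverge as well.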
|
2,072,347 | <p>I was trying to solve this problem, but couldn't figure it out. The solution goes like this:</p>
<p><a href="https://i.stack.imgur.com/1KSWH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KSWH.png" alt="http://www.tkiryl.com/Calculus/Problems/Section%201.4/Calculating%20Limits/Solutions/Calc_S_59.png (I don't have the reputation to post the image)"></a></p>
<p>I don't understand the first step. Why is the limit multiplied by $\frac{4x}{5x}$? and $\frac{5}{4}$ ? </p>
| StephanCasey | 157,220 | <p>They were using the Squeeze theorem (via the standard limit $\lim_{x\to 0}\frac{\sin x}{x}=1$), but I don't think it is necessary here (<a href="https://www.khanacademy.org/math/differential-calculus/limits-from-equations-dc/squeeze-theorem-dc/v/proof-lim-sin-x-x" rel="nofollow noreferrer">https://www.khanacademy.org/math/differential-calculus/limits-from-equations-dc/squeeze-theorem-dc/v/proof-lim-sin-x-x</a>)</p>
<p>$$\lim_{x \to 0} \frac{\sin5x}{\sin4x}$$</p>
<p>$$= \frac{\lim_{x \to 0} \sin5x}{\lim_{x \to 0} \sin4x}$$</p>
<p>which is valid only if $\lim_{x \to 0} \sin4x$ is not equal to $0$.</p>
<p>But here it is equal to zero, so we can't do this; instead we can use L'Hopitals Rule, where you differentiate top and bottom. We will be using the chain rule here as well, because $\frac{d}{dx}\sin5x = \cos5x \times \frac{d}{dx}5x = 5\cos 5x$</p>
<p>$$\lim_{x \to 0} \frac{\sin5x}{\sin4x}$$</p>
<p>$$=^{L'H} \lim_{x \to 0} \frac{5\cos5x}{4\cos4x}$$</p>
<p>Now just substitute zero</p>
<p>$$= \frac{5\cos(5(0))}{4\cos(4(0))}$$</p>
<p>$$=\frac{5(1)}{4(1)}$$</p>
<p>$$=\frac{5}{4}$$</p>
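The limit $5/4$ can be confirmed numerically; a quick sketch (my own):

```python
import math

def ratio(x):
    return math.sin(5 * x) / math.sin(4 * x)

# evaluate at x = 1e-3, 1e-4, ..., 1e-7
samples = [ratio(10.0 ** (-k)) for k in range(3, 8)]
```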
|
2,306,895 | <p>I want to find $Hom_{\mathtt{Grp}}(\mathbb{C}^\ast,\mathbb{Z})$, where $\mathbb{C}^\ast$ is the multiplicative group, and $\mathbb{Z}$ is additive.
$\mathbb{C}$ is the additive group of complex numbers. We have the following map: </p>
<p>$\large{\mathbb{C} \xrightarrow{exp} \mathbb{C}^\ast \xrightarrow{?} \mathbb{Z}}$</p>
<p>where the fiber of $exp$ is $\mathbb{Z}$</p>
<p>And I don't know if this can help, any hint?</p>
| Joshua Ruiter | 399,014 | <p>This isn't a full answer, but I suspect that this Hom group may be the trivial group. Suppose $\phi:\mathbb{C}^* \to \mathbb{Z}$ is a group homomorphism. We know that $\phi(1) = 0$ since $1$ is the identity. Then
$$
\phi\left((-1)^2\right) = \phi(1) = 0 \implies 2 \phi(-1) = 0 \implies \phi(-1) = 0
$$
By a similar argument, any complex number $e^{2\pi i/k}$ where $k$ is a positive integer should go to zero, since
$$
\phi\left((e^{2\pi i/k})^{k}\right) = \phi(e^{2\pi i}) = \phi(1) = 0 \implies k \phi(e^{2\pi i/k}) = 0 \implies \phi(e^{2\pi i/k}) = 0
$$
So we have a dense subset of the unit circle that all must get sent to zero. I don't quite know how to use this, but it seems likely to me that this will force $\phi$ to send everything to zero.</p>
|
4,077,917 | <p>If you have
<span class="math-container">$$
\int_0^2 \int_0^{\sqrt{4 - x^2}} e^{-(x^2 + y^2)} dy \, dx
$$</span>
and you convert to polar coordinates, you integrate from <span class="math-container">$0$</span> to <span class="math-container">$\pi/2$</span>) with respect to theta.</p>
<p>But, if you have
<span class="math-container">$$
\int_{-6}^6 \int_0^{\sqrt{36-x^2}} \sin(x^2+y^2) \, dy \, dx
$$</span>
and you convert to polar coordinates, you integrate from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span> with respect to theta. Can someone explain to me why the bounds of integration with respect to theta are different in these two problems? I'm having a hard time figuring it out. It would be a lot of help. Thanks.</p>
| zkutch | 775,801 | <p>Let's construct a formal proof. In the first case we have the set
<span class="math-container">$$\left\lbrace \begin{array}{l}0 \leqslant x \leqslant 2 \\
0 \leqslant y \leqslant \sqrt{4-x^2}
\end{array}\right\rbrace$$</span>
considering polar coordinates <span class="math-container">$x=r\cos \theta, y=r\sin \theta$</span>, from the first inequalities we will have <span class="math-container">$0 \leqslant r\cos \theta \leqslant 2 $</span> and <span class="math-container">$0 \leqslant r\sin \theta \leqslant \sqrt{4-(r\cos \theta)^2}$</span>. From here we have for <span class="math-container">$\theta $</span> the inequalities <span class="math-container">$0 \leqslant \sin \theta$</span> and <span class="math-container">$0 \leqslant \cos \theta$</span>, which give <span class="math-container">$\theta \in \left[0, \frac{\pi}{2}\right]$</span>. After analyzing the inequalities <span class="math-container">$r \leqslant \frac{2}{\cos \theta}$</span> and <span class="math-container">$r \leqslant 2$</span> we have the set</p>
<p><span class="math-container">$$\left\lbrace \begin{array}{l}0 \leqslant \theta \leqslant \frac{\pi}{2} \\
0 \leqslant r \leqslant 2
\end{array}\right\rbrace$$</span>
For the second case an analogous analysis gives</p>
<p><span class="math-container">$$\left\lbrace \begin{array}{l}-6 \leqslant x \leqslant 6 \\
0 \leqslant y \leqslant \sqrt{36-x^2}
\end{array}\right\rbrace \to \left\lbrace \begin{array}{l}0 \leqslant \theta \leqslant \pi \\
0 \leqslant r \leqslant 6
\end{array}\right\rbrace$$</span>
As is cleverly written in the adjacent answer from David G. Stork, "Sometimes a picture is worth 1000 words". <strong>But</strong> even the best picture is not a mathematical proof; a good picture helps us to construct a correct mathematical proof.</p>
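The change of variables in the first case can also be sanity-checked numerically: the polar bounds give the closed form $\int_0^{\pi/2}\!\int_0^2 e^{-r^2}\,r\,dr\,d\theta = \frac{\pi}{4}\left(1-e^{-4}\right)$, which a crude Cartesian midpoint sum should reproduce. A sketch (my own):

```python
import math

exact = math.pi * (1 - math.exp(-4)) / 4   # value from the polar form

# Midpoint Riemann sum over the Cartesian region 0 <= x <= 2, 0 <= y <= sqrt(4-x^2);
# each x-column is subdivided exactly up to its own height ymax.
n = 400
h = 2 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    ymax = math.sqrt(4 - x * x)
    m = max(1, int(ymax / h))
    k = ymax / m
    for j in range(m):
        y = (j + 0.5) * k
        total += math.exp(-(x * x + y * y)) * h * k
```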
|
2,354,609 | <p>I have to approximate $\sqrt2$ using Taylor expansion with an error $<10^{-2}$.</p>
<p>I noticed that I can do MacLaurin expansion of $\sqrt{x+1}$ then put $x=1$</p>
<p>So: $$\sqrt{x+1}=1 + \dfrac{x}{2} - \dfrac{x^2}{8} + \dfrac{x^3}{16} + {{\frac1{2}}\choose{n+1}}x^{n+1}(1+\xi)^{-\frac1{2}-n}$$</p>
<p>I have to find the order of the polynomial at which </p>
<p>$\sqrt2 -($<strong>the value I found with the polynomial</strong>$)<10^{-2}$</p>
<p>I can check the value of the polynomial in $x=1$ order by order until I find that </p>
<p>$\sqrt2 -($<strong>the value I found with Taylor</strong>$)<10^{-2}$</p>
<p>Or is there a faster way to find the desired order?</p>
| Christian Blatter | 1,303 | <p>The Taylor series of the function $f(x):=(1+x)^{1/2}$ just barely converges for $x:=1$. Evaluate the Taylor expansion of $g(x):=(1+x)^{-1/2}$ at $x:=-{1\over2}$ instead.</p>
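This suggestion is easy to test directly: partial sums of the binomial series for $(1+x)^{-1/2}$ at $x=-\tfrac12$ converge to $\sqrt2$, and exact rational arithmetic shows which order first achieves an error below $10^{-2}$. A sketch (my own):

```python
import math
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n+1 terms of the binomial series for (1+x)^(-1/2)
    at x = -1/2, using exact rational arithmetic."""
    alpha = Fraction(-1, 2)
    x = Fraction(-1, 2)
    total = Fraction(0)
    coeff = Fraction(1)            # generalized binomial coefficient C(alpha, k)
    for k in range(n + 1):
        total += coeff * x ** k
        coeff = coeff * (alpha - k) / (k + 1)
    return total

errors = [abs(float(partial_sum(n)) - math.sqrt(2)) for n in range(8)]
```

All the terms are positive here, so the partial sums increase monotonically to $\sqrt2$; order $5$ already has error below $10^{-2}$.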
|
3,389,542 | <blockquote>
<p><strong>Proposition.</strong> If <span class="math-container">$\text{Ran}(R) \subseteq \text{Dom}(S)$</span>, then <span class="math-container">$\text{Dom}(S \circ R) = \text{Dom}(R)$</span></p>
</blockquote>
<p>My attempt:</p>
<p>Suppose <span class="math-container">$\text{Ran}(R) \subseteq \text{Dom}(S)$</span></p>
<p>We need to show:</p>
<p><span class="math-container">$(\rightarrow) $</span> <span class="math-container">$\text{Dom}(S \circ R) \subseteq \text{Dom}(R)$</span></p>
<p><span class="math-container">$(\leftarrow)$</span> <span class="math-container">$\text{Dom}(R) \subseteq \text{Dom}(S \circ R) $</span></p>
<hr>
<p><span class="math-container">$(\rightarrow)$</span></p>
<p>Consider arbitrary element <span class="math-container">$a$</span>, where <span class="math-container">$a \in \text{Dom}(S \circ R)$</span>. Then there must be some element <span class="math-container">$p$</span>, where <span class="math-container">$p = (a,x)$</span> and <span class="math-container">$p \in R$</span>. It implies that <span class="math-container">$a \in Dom(R)$</span>. Since <span class="math-container">$a$</span> was arbitrary, we have <span class="math-container">$\text{Dom}(S \circ R) \subseteq \text{Dom}(R)$</span></p>
<p><span class="math-container">$(\leftarrow$</span>)</p>
<p>Consider element <span class="math-container">$(x,y)$</span> where <span class="math-container">$(x,y) \in R$</span>. Since <span class="math-container">$y \in Ran(R)$</span> and <span class="math-container">$\text{Ran}(R) \subseteq \text{Dom}(S)$</span>, it follows that <span class="math-container">$y \in Dom(S)$</span>, which means that there must be some element <span class="math-container">$(y,p)$</span> such that <span class="math-container">$(y,p) \in S$</span>. </p>
<p>Since <span class="math-container">$(y,p) \in S$</span> and <span class="math-container">$(x,y) \in R$</span>, we have <span class="math-container">$(x,p) \in S \circ R$</span>, and it means that <span class="math-container">$x \in Dom(S \circ R)$</span>.</p>
<p>Since we've considered arbitrary element, we have <span class="math-container">$\text{Dom}(R) \subseteq \text{Dom}(S \circ R)$</span></p>
<p>We've shown both sides, hence <span class="math-container">$\text{Dom}(S \circ R) = \text{Dom}(R)$</span>. <span class="math-container">$\Box$</span></p>
<hr>
<p>Is it correct?</p>
<p>If it is, are there better ways to prove <span class="math-container">$(\rightarrow)$</span>?</p>
| Trishan Mondal | 685,504 | <p><br>
I don't know whether it is true or not.
We can construct a complete bipartite graph that is as balanced as possible.</p>
<p>Such a construction is <em>per Tarun's construction</em>, so maximising $x(G)$ can be done by finding a maximal triangle-free set of vertices.</p>
<p><em>Case 1:</em> if $n$ is even, then we can construct a bipartite graph with parts of size <span class="math-container">$n/2$</span>; this creates <span class="math-container">$n^2/4$</span> vertices.
<em>Observation:</em> two vertices on the same side are not adjacent, and we can't color any 3 points with the same color, so each needs a distinct color. Hence
<span class="math-container">$X(G)\ge n^2/4$</span>.</p>
<p><em>Case 2:</em>
if $n$ is odd, then there are <span class="math-container">$(n+1)(n-1)/4$</span> vertices with partition sizes <span class="math-container">$(n+1)/2$</span> and <span class="math-container">$(n-1)/2$</span>,
so <span class="math-container">$x(G)\ge (n^2-1)/4$</span>.</p>
<p>So <span class="math-container">$\max\{x(G)\} = n^2/4$</span> for even $n$,
and <span class="math-container">$(n^2-1)/4$</span> for odd $n$.</p>
|
4,065,797 | <p>Just to give a simple numerical example but in general the variables <span class="math-container">$x,y,z,u,v$</span> are not equal.</p>
<p><span class="math-container">$113= 2*4^2 + 2*4^2 +2*4^2 + 4^2 +1^2$</span></p>
<p>I am looking for a general method to solve this type of equation or a piece of software to do the same. I already looked in this site for methods that could help but could not find anything dealing with this kind of case.</p>
<p><strong>Question 2</strong> It is also useful to know if there is a test that can tell if the equation does not have a solution.</p>
| Quanto | 686,284 | <p>Integrate by parts to obtain a recursive formula as follows</p>
<p><span class="math-container">\begin{align}
I_n&=\int_0^{\pi} \sin^{n}x \ln(\sin x) dx\\
&= -\int_0^{\pi} \sin^{n-1}x \ln(\sin x)\> d(\cos x)\>\\
& =\int_0^{\pi}((n-1) \sin^{n-2}x \cos^2x\ln(\sin x)+ \sin^{n-2}x\cos^2x)dx\\
&= (n-1) (I_{n-2}-I_n)+ \frac1{n-1}\int_0^{\pi}\sin^{n}x\>dx
\end{align}</span>
Thus
<span class="math-container">$$I_n = \frac{n-1}n I_{n-2} +\frac1{n(n-1)} \int_0^{\pi}\sin^{n}x\>dx
$$</span>
with <span class="math-container">$I_0 = -\pi\ln2$</span> and <span class="math-container">$I_1= 2(\ln2 -1)$</span>. (See <a href="https://en.wikipedia.org/wiki/Wallis%27_integrals" rel="nofollow noreferrer">here</a> for evaluating <span class="math-container">$\int_0^{\pi/2}\sin^{n}x\>dx$</span>.)</p>
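The recursion and the seeds can be cross-checked by crude numerical integration; a sketch (my own):

```python
import math

STEPS = 20000
H = math.pi / STEPS
MIDS = [(i + 0.5) * H for i in range(STEPS)]

def J(n):
    """Midpoint approximation of the integral of sin^n x over [0, pi]."""
    return sum(math.sin(x) ** n for x in MIDS) * H

def I(n):
    """Midpoint approximation of the integral of sin^n x * ln(sin x) over [0, pi]."""
    return sum(math.sin(x) ** n * math.log(math.sin(x)) for x in MIDS) * H
```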
|
4,065,797 | <p>Just to give a simple numerical example but in general the variables <span class="math-container">$x,y,z,u,v$</span> are not equal.</p>
<p><span class="math-container">$113= 2*4^2 + 2*4^2 +2*4^2 + 4^2 +1^2$</span></p>
<p>I am looking for a general method to solve this type of equation or a piece of software to do the same. I already looked in this site for methods that could help but could not find anything dealing with this kind of case.</p>
<p><strong>Question 2</strong> It is also useful to know if there is a test that can tell if the equation does not have a solution.</p>
| Igor Rivin | 109,865 | <p>Mathematica says:</p>
<p><span class="math-container">$$\fbox{$\frac{\sqrt{\pi } \left(H_{\frac{n-1}{2}}-H_{\frac{n}{2}}\right) \Gamma
\left(\frac{n+1}{2}\right)}{n \Gamma \left(\frac{n}{2}\right)}\text{ if }\Re(n)>-1$}.$$</span></p>
|
2,233,138 | <p>Let ${x_n}$ be defined by </p>
<p>$$x_n : = \begin{cases} \frac{n+1}{n}, &\text{if } n \text{ is odd}\\
0,&\text{if } n \text{ is even}.
\end{cases}$$</p>
<p>I am pretty sure that $\liminf_{n\to\infty} x_n = 0$,</p>
<p>because $x_1 = 2$, $x_2 = 0$, $x_3 = 4/3$, $x_4 = 0$, and so on, so $\liminf_{n\to\infty} x_n = 0$.</p>
<p>But about sup </p>
<p>$$\sup\{x_k : k \geq n\}=\begin{cases}
\frac{n+1}{n}, &\text{if } n \text{ is odd}\\
\frac{n+2}{n+1}, &\text{if } n \text{ is even}.
\end{cases}$$</p>
<p>I understand the odd case but don't understand the case when $n$ is even.</p>
<p>Why is it not $0$ when $n$ is even?</p>
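The displayed formula for the tail suprema can be checked numerically on a long finite tail; a sketch (my own illustration):

```python
def x(n):
    """x_n = (n+1)/n for odd n, 0 for even n."""
    return (n + 1) / n if n % 2 == 1 else 0.0

N = 400                      # finite tail long enough for these checks
tail_sup = {n: max(x(k) for k in range(n, N)) for n in range(1, 30)}
tail_inf = {n: min(x(k) for k in range(n, N)) for n in range(1, 30)}
```

The tail supremum is never $0$: even when the tail starts at an even index $n$, it still contains the odd-indexed term $x_{n+1} = (n+2)/(n+1) > 1$.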
| Bernard | 202,857 | <p><strong>Hint:</strong></p>
<p>Consider the vectors $\underbrace{(1,1,\dots,1)}_{n\;1\text{s}}$ and $\;(a_1, a_2,\dots,a_n)$.</p>
|
4,327,537 | <p>I have a question which states that
"In a group of 23 people what is the probability that there are two people with the same birthday? Assume there are 365 days in a year. Ignore leap years and such complications. Assume there is an equal probability of a person being born on each day of the year.". I solved it using the complement. I first computed the number of ways in which we can assign the birthdays to 23 people out of 365 days (without replacement). That gave 365 * (365-1).. (365-k+1). Then I divided this by 365^k. Then I subtracted the result from 1. But, the probability which I have now got may also contain 3 people having the same birthday or 4 people having the same birthday, etc. I want to know the probability of exactly two people having the same birthday. In short, what I have computer is, " what's the probability that AT LEAST TWO PEOPLE HAVE SAME BIRTHDAY" and what I'm looking for is "WHAT IS THE PROBABILITY OF EXACTLY TWO PEOPLE HAVING THE SAME BIRTHDAY".How do I compute that probability?</p>
| Suzane | 901,114 | <p>A level curve of <span class="math-container">$f$</span> is a set of points <span class="math-container">$(x,y)$</span> satisfying <span class="math-container">$f(x,y)=c$</span> for some constant <span class="math-container">$c$</span>; here <span class="math-container">$c=0$</span>, but it could be any other value and the implicit function theorem would still hold. At each point <span class="math-container">$(x_0,y_0)$</span> of a level curve, the gradient vector of <span class="math-container">$f$</span>, <span class="math-container">$\left(\frac{\partial f}{\partial x}(x_0,y_0), \frac{\partial f}{\partial y}(x_0,y_0)\right)$</span>, is perpendicular to the level curve (meaning, it is perpendicular to the tangent line to the level curve). If <span class="math-container">$\frac{\partial f}{\partial y}(x_0,y_0)=0$</span>, we have a horizontal gradient vector in the plane; that means the tangent to the level curve <span class="math-container">$f(x,y)=c$</span> (which we'd hope is the graph of <span class="math-container">$y$</span> as a function of <span class="math-container">$x$</span>) is vertical at <span class="math-container">$(x_0,y_0)$</span>. When this happens we cannot guarantee <span class="math-container">$y$</span> is locally a <span class="math-container">$C^1$</span> implicit function of <span class="math-container">$x$</span>. Two examples of what can go wrong: 1) <span class="math-container">$f(x,y)=x-y^2$</span> at <span class="math-container">$(0,0)$</span> (<span class="math-container">$y$</span> is not even a function of <span class="math-container">$x$</span>, though <span class="math-container">$x=y^2$</span>), 2) <span class="math-container">$f(x,y)=x-y^3$</span> at <span class="math-container">$(0,0)$</span> (<span class="math-container">$y$</span> is a function of <span class="math-container">$x$</span> but not differentiable due to a vertical slope in its graph). 
Draw out these two examples and you will understand it.</p>
|
139,105 | <p>Can a (finite) collection of disjoint circle arcs in $\mathbb{R}^3$ be interlocked in the sense in that they cannot be separated, i.e. each moved arbitrarily far from one another while remaining disjoint (or at least never crossing) throughout?
(Imagine the arcs are made of rigid steel, but infinitely thin.)
The arcs may have different radii; each spans strictly less than $2 \pi$ in angle, so each has a positive "gap" through which arcs may pass:
<br /> <img src="https://i.stack.imgur.com/hd2l0.jpg" alt="Arcs4"><br />
Of course, if one could prove that in any such collection, one arc can be removed to infinity, the result would follow by induction.
But an impediment to that approach is that sometimes there is no arc that can be removed while all the others remain fixed.</p>
<p>Another approach would be to reduce the <em>piercing number</em> of the configuration:
the number of intersections of an arc with the disks on whose boundary the arcs lie. If the piercing number could always be reduced in any configuration, then it would "only" remain to prove that if there are no disk-arc piercings at all, the configuration can be separated.</p>
<p>Intuitively it seems that no such collection can interlock, but I am not seeing a proof.
I'd appreciate any proof ideas—or interlocked configurations!</p>
| John Pardon | 35,353 | <p>I believe there is no such locked configuration. The proof is by induction, as you suggest.</p>
<p>Pick any arc and imagine moving it to infinity. Of course, to do this, it will have to pass through some other arcs, and thus this is not a valid motion. We can, however, by picking our motion "generically", ensure that there are just finitely many times when our arc passes through another arc, and that at each of these times, it passes through exactly one other arc at exactly one point. But now if we rotate (in the plane of the circle) the arc during the motion, we can ensure that it's "gap" is moved to each of the points where it used to pass through another arc. Thus we have turned our invalid motion into a valid one.</p>
|
3,712,256 | <p>I am trying to prove that: </p>
<blockquote>
<p>For nonempty subsets of the positive reals <span class="math-container">$A,B$</span>, both of which are bounded above, define
<span class="math-container">$$A \cdot B = \{ab \mid a \in A, \; b \in B\}.$$</span>
Prove that <span class="math-container">$\sup(A \cdot B) = \sup A \cdot \sup B$</span>.</p>
</blockquote>
<p>Here is what I have so far. </p>
<blockquote>
<p>Let <span class="math-container">$A, B \subset \mathbb{R}^+$</span> be nonempty and bounded above, so <span class="math-container">$\sup A$</span> and <span class="math-container">$\sup B$</span> exist by the least-upper-bound property of <span class="math-container">$\mathbb{R}$</span>. For any <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span>, we have
<span class="math-container">$$ab \leq \sup A \cdot b \leq \sup A \cdot \sup B.$$</span>
Hence, <span class="math-container">$A \cdot B$</span> is bounded above by <span class="math-container">$\sup A \cdot \sup B$</span>. Since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are nonempty, <span class="math-container">$A \cdot B$</span> is nonempty by construction, so <span class="math-container">$\sup(A \cdot B)$</span> exists. Furthermore, since <span class="math-container">$\sup A \cdot \sup B$</span> is an upper bound of <span class="math-container">$A \cdot B$</span>, by the definition of the supremum, we have
<span class="math-container">$$\sup(A \cdot B) \leq \sup A \cdot \sup B.$$</span>
It suffices to prove that <span class="math-container">$\sup(A \cdot B) \geq \sup A \cdot \sup B$</span>. </p>
</blockquote>
<p>I cannot figure out the other half of this. A trick involving considering <span class="math-container">$\sup A - \epsilon$</span> and <span class="math-container">$\sup B - \epsilon$</span> for some <span class="math-container">$\epsilon > 0$</span> and establishing that <span class="math-container">$\sup(A \cdot B) < \sup A \cdot \sup B + \epsilon$</span> did not seem to work, though it did in the additive variant of this proof. I haven't anywhere used the assumption that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are contained in the <strong>positive</strong> real numbers, and it seems to me that this assumption must be important, probably as it pertains to inequality sign, so I assume that at some point I will need to multiply inequalities by some positive number. I cannot seem to get a good start on this, though. A hint on how to get started on this second half would be very much appreciated. </p>
| Calum Gilhooley | 213,690 | <p><strong>Hint:</strong></p>
<p>Rather than <span class="math-container">$\sup A - \varepsilon$</span> and <span class="math-container">$\sup B - \varepsilon,$</span> subtract appropriate multiples of <span class="math-container">$\varepsilon$</span> from <span class="math-container">$\sup A, \sup B$</span> respectively. You'll need to assume that <span class="math-container">$\varepsilon$</span> isn't too big.</p>
<p><strong>Full proof:</strong></p>
<p>[I'm sorry, I can't get the wretched spoiler mechanism to work, so I'm afraid you'll have to avert your eyes!]</p>
<p>Let <span class="math-container">$s = \sup A > 0,$</span> and <span class="math-container">$t = \sup B > 0.$</span></p>
<p>You have already proved that <span class="math-container">$\sup AB \leqslant st.$</span></p>
<p>For every <span class="math-container">$\varepsilon$</span> such that <span class="math-container">$\varepsilon > 0$</span> and <span class="math-container">$\varepsilon < 2st,$</span> there exist <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span> such that
<span class="math-container">\begin{align*}
a & > s - \frac\varepsilon{2t} > 0, \\
b & > t - \frac\varepsilon{2s} > 0.
\end{align*}</span>
Therefore
<span class="math-container">$$
ab > \left(s - \frac\varepsilon{2t}\right)\left(t - \frac\varepsilon{2s}\right) =
st - \frac\varepsilon2 - \frac\varepsilon2 + \frac{\varepsilon^2}{4st} > st - \varepsilon.
$$</span>
Therefore <span class="math-container">$\sup AB \geqslant st,$</span> therefore <span class="math-container">$\sup AB = st = (\sup A)(\sup B).$</span></p>
|
2,117,054 | <p>Find all prime solutions of the equation $5x^2-7x+1=y^2.$</p>
<p>It is easy to see that
$y^2+2x^2=1 \mod 7.$ Since the quadratic residues $\bmod 7$ are $1,2,4$, it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$, i.e. $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ </p>
<p>In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$, or $y^2=-1 \mod 5$ and $x=1 \mod 5.$</p>
<p>How can I put the two cases together?</p>

<p>A computer search finds two prime solutions: $(3,5)$ and $(11,23).$</p>
| 1Emax | 324,326 | <p>Try working mod $3$ and mod $8$. Assuming $x, y>3$, we have $x,y = \pm 1$ mod $3$. Since $x, y$ are odd we have $x^2, y^2=1$ mod $8$, so
$$x^2, y^2 = 1 \text{ mod } 24.$$
Substituting in the equation gives $$x = 24k+11 $$ for some integer $k$.
Rearranging the original equation we get
$$x(5x-7)=(y-1)(y+1), \tag{1}$$
therefore $x |y-1$ or $x|y+1$, since $x$ is a prime number.</p>
<p>Solving for $x$ gives
$$ x = \frac{7}{10} + \frac{1}{10}\sqrt{20y^2+29}>\frac{1}{3}(y+1).$$
Note that $x$ is odd and $y \pm 1$ is even, so $x \ne y\pm1$. This forces $x = \frac{1}{2} (y \pm1)$, or
$$y = 2x \pm 1 = 48k + 22 \pm 1 \Rightarrow y = 48k+23.$$
(The alternative $y=48k+21$ is rejected, being divisible by $3$.)
Plugging these in $(1)$ gives the solution $k=0$ or
$$x = 11, \space y = 23.$$</p>
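<p>For what it's worth, a brute-force search in Python (over $x$ up to an arbitrary bound of $10^5$) confirms that these are the only prime solutions in that range:</p>

```python
from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

solutions = []
for x in range(2, 10**5):
    y2 = 5 * x * x - 7 * x + 1   # positive for every x >= 2
    y = isqrt(y2)
    if y * y == y2 and is_prime(x) and is_prime(y):
        solutions.append((x, y))
```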
|
3,377,353 | <p>Given a purely real, rational integer <span class="math-container">$p$</span> that is prime in <span class="math-container">$\mathbb{Z}$</span>, we know very well that it ramifies in <span class="math-container">$\mathbb{Q}(\sqrt{pm})$</span> (where <span class="math-container">$m$</span> is a nonzero integer coprime to <span class="math-container">$p$</span>), it is inert in some of the other quadratic rings and it splits in the others.</p>
<p>In a ring of degree <span class="math-container">$4$</span>, things are of course a bit more complicated than that. For example, in <span class="math-container">$\mathcal{O}_{\mathbb{Q}(\zeta_8)}$</span>, we see that <span class="math-container">$2$</span> is ramified, since it's ramified in each of the three intermediate fields (<span class="math-container">$\mathbb{Q}(i)$</span>, <span class="math-container">$\mathbb{Q}(\sqrt{-2})$</span> and <span class="math-container">$\mathbb{Q}(\sqrt{2})$</span>).</p>
<p>Furthermore, we see that <span class="math-container">$(1 - \zeta_8)(1 + \zeta_8) = 1 - i$</span> and <span class="math-container">$(1 - {\zeta_8}^3)(1 + {\zeta_8}^3) = 1 + i$</span>. Is this what they call "ramifies completely"?</p>
<p>Turning our attention to <span class="math-container">$3$</span>, we see that it is not prime in <span class="math-container">$\mathcal{O}_{\mathbb{Q}(\zeta_8)}$</span>, because, although it does not split in two of the intermediate fields, it does split in <span class="math-container">$\mathbb{Z}[\sqrt{-2}]$</span>. I may have overlooked something, but as far as I can tell, the equation <span class="math-container">$x^4 + b x^3 + c x^2 + d x \pm 3 = 0$</span> has no solutions in <span class="math-container">$\mathcal{O}_{\mathbb{Q}(\zeta_8)}$</span>.</p>
<p>If I'm right, this would mean that <span class="math-container">$3$</span> does not split as "completely" as <span class="math-container">$2$</span> ramifies. Assuming I'm correct in these assertions, am I using the correct terminology? And if not, what is the correct terminology?</p>
| Daniel Hast | 41,415 | <p>Here's the general situation for number fields: Let <span class="math-container">$L/K$</span> be a degree <span class="math-container">$n$</span> extension of number fields, let <span class="math-container">$\newcommand{\OO}{\mathcal{O}}\OO_K$</span> and <span class="math-container">$\OO_L$</span> be their rings of integers, and let <span class="math-container">$P$</span> be a nonzero prime ideal of <span class="math-container">$\OO_K$</span>. <em>Ramification theory</em> concerns the factorization of the ideal <span class="math-container">$P \OO_L$</span> (and its implications for the structure of the Galois group); a good reference for this is chapter 1 of Neukirch's <em>Algebraic Number Theory</em>.</p>
<p>Since <span class="math-container">$\OO_L$</span> is a Dedekind domain, we can uniquely factor <span class="math-container">$P \OO_L$</span> as a product of prime ideals
<span class="math-container">$$P \OO_L = Q_1^{e_1} \cdot \ldots \cdot Q_r^{e_r}$$</span>
for some distinct nonzero prime ideals <span class="math-container">$Q_1, \dots, Q_r$</span> of <span class="math-container">$\OO_L$</span> and some positive integers <span class="math-container">$e_1, \dots, e_r$</span>. (This factorization of <em>ideals</em> is unique up to ordering, even if <span class="math-container">$\OO_L$</span> doesn't have unique prime factorization of <em>elements</em>.)</p>
<p>The exponent <span class="math-container">$e_i$</span> is called the <em>ramification index</em> of <span class="math-container">$Q_i$</span>. We also define the <em>inertia degree</em> of <span class="math-container">$Q_i$</span> to be <span class="math-container">$f_i = [\OO_L/Q_i : \OO_K/P]$</span>, the degree of the extension of residue fields. We have
<span class="math-container">$$ n = e_1 f_1 + \dots + e_r f_r.$$</span></p>
<p>(If <span class="math-container">$L$</span> is a Galois extension of <span class="math-container">$K$</span>, then <span class="math-container">$f_i = f_j$</span> and <span class="math-container">$e_i = e_j$</span> for all <span class="math-container">$i, j$</span>. In this case, we usually refer to them as the "inertia degree <span class="math-container">$f$</span> of <span class="math-container">$P$</span> in <span class="math-container">$L/K$</span>" and the "ramification index <span class="math-container">$e$</span> of <span class="math-container">$P$</span> in <span class="math-container">$L/K$</span>", and we have <span class="math-container">$n = efr$</span>.)</p>
<p>There are three extreme cases:</p>
<ol>
<li>If <span class="math-container">$e = n$</span> (that is, <span class="math-container">$P \OO_L = Q^e$</span>), then we say <span class="math-container">$P$</span> is <em>totally ramified</em>.</li>
<li>If <span class="math-container">$f = n$</span> (that is, <span class="math-container">$P \OO_L$</span> is already a prime ideal), then we say <span class="math-container">$P$</span> is <em>inert</em>.</li>
<li>If <span class="math-container">$r = n$</span> (that is, <span class="math-container">$P \OO_L = Q_1 \cdot \ldots \cdot Q_n$</span> for distinct <span class="math-container">$Q_1, \dots, Q_n$</span>), then we say <span class="math-container">$P$</span> <em>splits completely</em>.</li>
</ol>
<p>There are also various intermediate cases:</p>
<ol start="4">
<li>If <span class="math-container">$e_i > 1$</span> for some <span class="math-container">$i$</span>, then we say <span class="math-container">$Q_i$</span> and <span class="math-container">$P$</span> are <em>ramified</em> in <span class="math-container">$L/K$</span>.</li>
<li>If <span class="math-container">$e_i = 1$</span> for all <span class="math-container">$i$</span>, then we say <span class="math-container">$Q_i$</span> and <span class="math-container">$P$</span> are <em>unramified</em>.</li>
<li>If <span class="math-container">$1 < r < n$</span>, then I don't think there's a universally accepted term for it, but I'd personally say something like "splits partially". (<span class="math-container">$P$</span> could be ramified or unramified in this case.)</li>
</ol>
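<p>For the cyclotomic example in the question this can be made concrete: since $3$ does not divide $\operatorname{disc}(\mathbb{Q}(\zeta_8))=2^8$, the splitting of $3$ mirrors the factorization of $x^4+1$ over $\mathbb{F}_3$ (Kummer–Dedekind). A brute-force Python sketch (searching only for quadratic factors is enough here, since $x^4+1$ has no roots mod $3$):</p>

```python
# Factor x^4 + 1 over F_3 by trying all pairs of monic quadratics
# x^2 + b*x + c; coefficient lists are stored lowest degree first.
p = 3

def poly_mul(f, g, p):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

target = [1, 0, 0, 0, 1]  # x^4 + 1
factorizations = set()
for b1 in range(p):
    for c1 in range(p):
        for b2 in range(p):
            for c2 in range(p):
                if poly_mul([c1, b1, 1], [c2, b2, 1], p) == target:
                    factorizations.add(frozenset([(c1, b1), (c2, b2)]))
```

<p>This finds the single factorization $x^4+1 \equiv (x^2+x+2)(x^2+2x+2) \pmod 3$, so $3\,\mathcal{O}_L = Q_1 Q_2$ with $r=2$, $e=1$, $f=2$: the "partial splitting" observed in the question.</p>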
|
3,225,553 | <p>Show that <span class="math-container">$4x^2+6x+3$</span> is a unit in <span class="math-container">$\mathbb{Z}_8[x]$</span>.</p>
<p>Once you have found the inverse like <a href="https://math.stackexchange.com/questions/3172556/show-that-4x26x3-is-a-unit-in-mathbbz-8x">here</a>, the verification is trivial. But how do you come up with such an inverse. Do I just try with general polynomials of all degrees and see what restrictions RHS = <span class="math-container">$1$</span> imposes on the coefficients until I get lucky? Also is there a general method to show an element in a ring is a unit?</p>
| Wuestenfux | 417,848 | <p>Hint: As in the hinted paper, a possible ansatz would be</p>
<p><span class="math-container">$(4x^2+6x+3) (ax+b) = 4ax^3+(4b+6a)x^2+ (6b+3a)x+3b=1$</span>.</p>
<p>This requires <span class="math-container">$4a\equiv 0\mod 8$</span> (so <span class="math-container">$a$</span> must be even), <span class="math-container">$4b+6a\equiv 0\mod 8$</span>, and <span class="math-container">$6b+3a\equiv 0\mod 8$</span> and <span class="math-container">$3b\equiv 1\mod 8$</span> (so <span class="math-container">$b=3$</span>).</p>
<p>The cases left are <span class="math-container">$a$</span> even with <span class="math-container">$b=3$</span>.</p>
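<p>Following this ansatz, a small brute-force search in Python over linear polynomials $ax+b$ in $\mathbb{Z}_8[x]$ finds the inverse directly:</p>

```python
# Search for a linear inverse a*x + b of 4x^2 + 6x + 3 in Z_8[x]:
# expand (4x^2 + 6x + 3)(a*x + b) and require it to equal 1 mod 8.
inverses = []
for a in range(8):
    for b in range(8):
        coeffs = (4 * a % 8,             # x^3 coefficient
                  (4 * b + 6 * a) % 8,   # x^2 coefficient
                  (6 * b + 3 * a) % 8,   # x coefficient
                  3 * b % 8)             # constant term
        if coeffs == (0, 0, 0, 1):
            inverses.append((a, b))
```

<p>So the inverse is $2x+3$: indeed $(4x^2+6x+3)(2x+3)=8x^3+24x^2+24x+9\equiv 1 \pmod 8$.</p>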
|
1,473,318 | <blockquote>
<p>How many numbers can by formed by using the digits $1,2,3,4$ and $5$ without repetition which are divisible by $6$?</p>
</blockquote>
<p><strong>My Approach:</strong></p>
<p>$3$ digit numbers formed using $1,2,3,4,5$ divisible by $6$ </p>
<p>unit digit should be $2/4$ </p>
<p>No. can be $XY2$ & $XY4$</p>
<p>$X+Y+2 = 6,9$ & $X+Y+4 = 9,12$</p>
<p>$X+Y = 4,7$ & $X+Y = 5,8$</p>
<p>$(X,Y)= (1,3),(3,1),(2,5),(5,2)$ & </p>
<p>$(X,Y)= (2,3),(3,2),(3,5),(5,3)$</p>
<p>Therefore, in total $8$ three-digit numbers without repetition.</p>
<blockquote>
<p>But I am confused here: how do I find the total count of such numbers?</p>
</blockquote>
| cr001 | 254,175 | <p>You have three sets {1,4}{2,5}{3}. For the exactly one from each set case, you already have the answer which is ${2\choose1}{1\choose1}2!+{2\choose1}{1\choose1}2!=8$.</p>
<p>For the one from first and one from second case, you have ${2\choose1}+{2\choose1}=4$</p>
<p>For the two from first and two from second case, you have $3!+3!=12$</p>
<p>For the five numbers case, you have $4!+4!=48$</p>
<p>These are all the possible cases, hence totally 72 numbers.</p>
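<p>This case count can be double-checked by brute force in Python, enumerating every arrangement without repetition:</p>

```python
from itertools import permutations

# Count all numbers formed from distinct digits of {1,...,5}
# (any length from 1 to 5) that are divisible by 6.
digits = "12345"
count = 0
for r in range(1, 6):
    for perm in permutations(digits, r):
        if int("".join(perm)) % 6 == 0:
            count += 1
```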
|
2,378,508 | <p>I am reading about Arithmetic mean and Harmonic mean. From <a href="https://en.wikipedia.org/wiki/Harmonic_mean#In_physics" rel="nofollow noreferrer">wikipedia</a>
I got this comparision about them:</p>
<blockquote>
<p>In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average. For instance, if a vehicle travels a certain distance at a speed x (e.g., 60 kilometres per hour - km/h) and then the same distance again at a speed y (e.g., 40 km/h), then its average speed is the harmonic mean of x and y (48 km/h), and its total travel time is the same as if it had traveled the whole distance at that average speed. However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 50 kilometres per hour. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. </p>
</blockquote>
<pre><code> distance time velocity remark
1st section d/2 t1 60
2nd section d/2 t2 40
1st + 2nd section d (t1+t2) v use harmonic mean to calculate v
1st section d1 t/2 60
2nd section d2 t/2 40
1st + 2nd section d1+d2 t v use arithmetic mean to calculate v
</code></pre>
<p>How <code>distance</code> and <code>time</code> are pushing us to compute harmonic mean and arithmetic mean respectively for computing "average <code>v</code>" in this case?</p>
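<p>A small numerical illustration of the two scenarios (Python; the particular values of <code>d</code> and <code>t</code> below are arbitrary choices):</p>

```python
# Scenario 1: equal distances d at speeds x and y -> harmonic mean.
x, y = 60.0, 40.0
d = 120.0                        # each section is 120 km (arbitrary)
total_time = d / x + d / y       # 2 h + 3 h
avg_speed_equal_dist = 2 * d / total_time
harmonic_mean = 2 / (1 / x + 1 / y)

# Scenario 2: equal times t at speeds x and y -> arithmetic mean.
t = 1.0                          # each section lasts 1 h (arbitrary)
total_dist = x * t + y * t
avg_speed_equal_time = total_dist / (2 * t)
arithmetic_mean = (x + y) / 2
```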
| Crazy | 449,016 | <p>I think you mixed up the degree of the differential equation and the degree of the polynomials.</p>
<p>Example,</p>
<p>$$\frac{d^3y}{dx^3}+\frac{dy}{dx}+y=0$$</p>
<p>is called a third order differential equation. The highest derivative inside the differential equation is $3$. So, it is a third order.</p>
<p>Consider this</p>
<p>$$(\frac{d^3y}{dx^3})^2+y=0$$</p>
<p>This is still considered a third order differential equation.</p>
<p>$$(\frac{dy}{dx})^{10}+y=0$$</p>
<p>This is called the first order differential equation despite that it is raised to the power of 10.</p>
<p>It is very different from the polynomial </p>
<p>Like,</p>
<p>$$y=x^2+x+3$$</p>
<p>This is called a second degree polynomial.</p>
<p>Conclusion:</p>
<p>The highest derivative that exists inside the differential equation let's say 2 is known as the '2' order differential equation.</p>
<p>The highest power that exists inside the polynomial say 10 $(x^{10}) $is called the 10th degree polynomial.</p>
|
2,735,001 | <p>I was asked to find the corresponding series for the function $\ln(x^2+4)$</p>
<p>The obvious solution to me was to use the well known fact $$\ln(1+x)=\sum_{n=1}^\infty (-1)^{n-1}\frac{x^n}{n}$$
And substituting $x^2+3$ for $x$
$$\ln(1+(x^2+3))=\sum_{n=1}^\infty (-1)^{n-1}\frac{(x^2+3)^n}{n}$$
Using binomial theorem on the $(x^2+3)^n$ on the inside gives us the nested summation
$$\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\sum_{m=0}^n {n\choose{m}}x^{2m}3^{n-m}=S_1$$</p>
<p>However, the answer key gives the series as
$$S_2=\ln 4+\sum_{n=1}^\infty (-1)^n \frac{x^{2n+2}}{2^{2n+2}(n+1)}$$</p>
<p>Question: Is $S_1=S_2$? If so, how do we prove this? If not, where is the error in this reasoning?</p>
<p>Thanks</p>
| Angina Seng | 436,618 | <p>The series
$$\ln(1+t)=\sum_{n=1}^\infty(-1)^{n-1}\frac{t^n}n$$
is only valid for $|t|<1$. You apply it for $t=x^2+3$. I don't
think $|x^2+3|<1$.</p>
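<p>As a side check: the answer key's $S_2$ does converge to $\ln(x^2+4)$ for $|x|<2$, provided its index starts at $n=0$ (as quoted, starting at $n=1$, the $x^2/4$ term is missing, presumably a typo). A quick Python verification:</p>

```python
from math import log

def S2(x, N=200):
    # ln 4 + sum_{n>=0} (-1)^n x^(2n+2) / (2^(2n+2) (n+1)),
    # i.e. ln 4 + ln(1 + x^2/4); note the sum starts at n = 0.
    total = log(4)
    for n in range(N):
        total += (-1) ** n * x ** (2 * n + 2) / (2 ** (2 * n + 2) * (n + 1))
    return total

err = max(abs(S2(x) - log(x * x + 4)) for x in (0.0, 0.5, 1.0, -1.5))
```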
|
1,557,015 | <p>This one looks simple, but apparently there is something more to it.
$$f{(x)=x^x}$$
I read somewhere that the domain is $\Bbb R_+$, a friend said that $x\lt-1, x\gt0$... </p>
<p>I'm really confused, because I don't understand why the domain isn't just all the real numbers.
According to any online grapher, the domain is $\Bbb R_+$.
Any thoughts on the matter?</p>
<p>Can someone explain what am I missing?</p>
| Kamil Jarosz | 183,840 | <p>Split it into cases:</p>
<ol>
<li>When $x=p/q$ where $p\in \mathbb Z,q\in\mathbb N_{>1},p\ne0,\gcd(p,q)=1$, then:
$$x^x=\left(\frac{p}{q}\right)^\frac{p}{q}=\sqrt[q]{\left(\frac{p}{q}\right)^p}$$
<ul>
<li>when $p<0$ then
$$x^x=\sqrt[q]{\left(-\frac{q}{|p|}\right)^{|p|}}$$
if $p$ is even, then $\left(-\frac{q}{|p|}\right)^{|p|}$ is positive, otherwise it's negative and the root doesn't exist for even $q$.</li>
<li>when $p>0$ then
$$x^x=\sqrt[q]{\left(\frac{|p|}{q}\right)^{|p|}}$$
and $\left(\frac{|p|}{q}\right)^{|p|}$ is always positive.</li>
</ul></li>
<li>When $x\in\mathbb Z$ the value $x^x$ always exist except $x=0$.</li>
<li>When $x$ is irrational then the only way to define $x^x$ is $$x^x=\exp(x\ln x)$$ and for real numbers we have $x>0$.</li>
</ol>
<p>Summarizing, $x^x$ exist for all</p>
<ul>
<li>$x\in\mathbb R_+$</li>
<li>$x\in\mathbb Z_-$</li>
<li>$x\in\left\{ -\frac{p}{q}\in \mathbb Q\colon p,q\in\mathbb N_+ \land \gcd(p,q)=1\land q\text{ is odd}\right\}$</li>
</ul>
<p><strong>Why we don't see the negative part of the plot</strong></p>
<ol>
<li>Technical reason: $x^x$ in programs is usually defined as <code>exp(x*log(x))</code> and the function <code>log(x)</code> is not defined for negative <code>x</code>.</li>
<li>Mathematical reason: set of negative $x$ which $x^x$ exists for is countable. Countable many points is not enough to form a curve.</li>
</ol>
<p>This function may be <a href="https://www.desmos.com/calculator/45004tcvxg" rel="nofollow">plotted with points for negative $x$</a>.</p>
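<p>The case analysis can be turned into a small Python sketch for rational inputs (the helper <code>x_to_x</code> is of course just illustrative):</p>

```python
from fractions import Fraction

def x_to_x(x):
    """x^x for rational x, following the case analysis above.
    Returns a float, or None when no real value exists."""
    x = Fraction(x)
    if x > 0:
        return float(x) ** float(x)
    if x == 0:
        return None
    p, q = x.numerator, x.denominator   # p < 0, q >= 1, gcd(|p|, q) = 1
    base = Fraction(q, p) ** (-p)       # x^x = ((q/p)^{|p|})^{1/q}
    if base < 0:
        if q % 2 == 0:
            return None                 # even root of a negative number
        return -((-float(base)) ** (1.0 / q))
    return float(base) ** (1.0 / q)
```

<p>For example, $(-1/3)^{-1/3} = -\sqrt[3]{3}$, while $(-1/2)^{-1/2}$ has no real value.</p>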
|
270,985 | <p>A graph $G=(V,E)$ is said to be <em>vertex-critical</em> if removing a vertex $v\in V$ reduces the chromatic number $\chi(\cdot)$. <em>Edge-criticality</em> is defined in a similar manner. Moreover, $G$ is called <em>contraction-critical</em> if contracting any edge reduces the chromatic number.</p>
<p><em>Questions.</em></p>
<p>1) Are edge- and vertex-criticality equivalent?</p>
<p>2) What is an example of a graph $G$ such that $\omega(G) < \chi(G)$ (where $\omega(G)$ denotes the clique number), and $G$ is vertex-critical, but not contraction-critical?</p>
| Abdelmalek Abdesselam | 7,410 | <p>I think the graph studied in my article <a href="http://www.sciencedirect.com/science/article/pii/S0021869315005657" rel="nofollow noreferrer">"16,051 formulas for Ottaviani's invariant of cubic threefolds"</a> with Christian Ikenmeyer and Gordon Royle fits the bill. In the paper we considered a hypergraph on a set $V$ of 15 vertices where the hyperedges are 5-subsets of $V$. Let $G$ be the collinearity graph, namely the obtained by replacing each hyperedge by a complete graph $K_5$. See Section 4 of the article for an explicit description which shows that $\chi(G)=8$ while $\omega(G)=7$. This graph is vertex-critical and I suspect it is not contraction-critical but I did not check this last property.</p>
|
270,985 | <p>A graph $G=(V,E)$ is said to be <em>vertex-critical</em> if removing a vertex $v\in V$ reduces the chromatic number $\chi(\cdot)$. <em>Edge-criticality</em> is defined in a similar manner. Moreover, $G$ is called <em>contraction-critical</em> if contracting any edge reduces the chromatic number.</p>
<p><em>Questions.</em></p>
<p>1) Are edge- and vertex-criticality equivalent?</p>
<p>2) What is an example of a graph $G$ such that $\omega(G) < \chi(G)$ (where $\omega(G)$ denotes the clique number), and $G$ is vertex-critical, but not contraction-critical?</p>
| user1272680 | 90,417 | <p>Concerning your first question, every edge-critical graph without isolated vertices must be vertex-critical, but not vice versa. For instance, the complement of a $7$-cycle is vertex-critical but not edge-critical.</p>
<p>Concerning your second question, every vertex-critical graph must be contraction-critical as well. Suppose we are contracting an edge $uv$ of a $k$-vertex-critical graph $G$. Since $G$ is $k$-vertex-critical, there is a $k$-colouring $c$ of $G$ in which $u$ is the only vertex coloured $k$. This is also a proper colouring of the graph with the edge $uv$ contracted.</p>
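<p>The complement-of-$C_7$ example in the first paragraph can be verified by brute force (a Python sketch; the chromatic number is computed by trying all colourings, which is fine at $7$ vertices):</p>

```python
from itertools import combinations, product

def chromatic_number(vertices, edges):
    # Smallest k admitting a proper k-colouring (brute force).
    verts = sorted(vertices)
    for k in range(1, len(verts) + 1):
        for colours in product(range(k), repeat=len(verts)):
            c = dict(zip(verts, colours))
            if all(c[u] != c[v] for u, v in edges):
                return k
    return len(verts)

n = 7
cycle_edges = {frozenset({i, (i + 1) % n}) for i in range(n)}
comp_edges = [tuple(sorted(e)) for e in combinations(range(n), 2)
              if frozenset(e) not in cycle_edges]

chi = chromatic_number(range(n), comp_edges)
vertex_critical = all(
    chromatic_number([u for u in range(n) if u != v],
                     [e for e in comp_edges if v not in e]) < chi
    for v in range(n))
edge_critical = all(
    chromatic_number(range(n), [f for f in comp_edges if f != e]) < chi
    for e in comp_edges)
```

<p>This confirms the example: the graph is $4$-chromatic and vertex-critical (every vertex deletion drops $\chi$ to $3$), but not edge-critical.</p>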
|
4,554,831 | <blockquote>
<p>Let <span class="math-container">$(X,d)$</span> be a metric space. Prove that if the point <span class="math-container">$x$</span> is on the boundary of the open ball <span class="math-container">$B(x_0,r)$</span> then <span class="math-container">$d(x_0,x)=r$</span>.</p>
</blockquote>
<p>I find this difficult because it seems intuitive yet not easy to prove. By definition, if <span class="math-container">$A\subset X$</span> then a point <span class="math-container">$x$</span> is on the boundary if for all <span class="math-container">$\epsilon>0$</span> we have <span class="math-container">$B(x,\epsilon)\cap A\ne\emptyset$</span> and also <span class="math-container">$(X\setminus A)\cap B(x,\epsilon)\ne\emptyset$</span>. However I don't know how to use this definition in any meaningful way.</p>
| Martin R | 42,969 | <p>Using the definition of a boundary point: For all <span class="math-container">$\epsilon > 0$</span> is <span class="math-container">$B(x,\epsilon)\cap B(x_0,r)\ne\emptyset$</span>, i.e. there is an <span class="math-container">$y \in X$</span> with
<span class="math-container">$$ \tag{$1$}
d(y, x) < \epsilon \text{ and } d(y, x_0) < r \, ,
$$</span>
and also <span class="math-container">$(X\setminus B(x_0,r))\cap B(x,\epsilon)\ne\emptyset$</span>, i.e. there is a <span class="math-container">$z \in X$</span> with
<span class="math-container">$$ \tag{$2$}
d(z, x) < \epsilon \text{ and } d(z, x_0) \ge r \, .
$$</span></p>
<p>It follows from <span class="math-container">$(1)$</span> and the triangle inequality that
<span class="math-container">$$
d(x, x_0) \le d(x, y) + d(y, x_0) < \epsilon + r
\implies \boxed{d(x, x_0) < r+ \epsilon} \, .
$$</span>
and from <span class="math-container">$(2)$</span> and the triangle inequality that
<span class="math-container">$$
r \le d(z, x_0) \le d(z, x) + d(x, x_0) < \epsilon + d(x, x_0)
\implies \boxed{d(x, x_0) > r- \epsilon} \, .
$$</span></p>
<p>So we have
<span class="math-container">$$
r - \epsilon < d(x, x_0) < r+ \epsilon
$$</span>
for all <span class="math-container">$\epsilon > 0$</span>, and that implies <span class="math-container">$d(x, x_0) = r$</span>.</p>
<p><em>Remark:</em> Note that the reverse implication <span class="math-container">$d(x, x_0) = r \implies x \in \partial B(x_0, r)$</span> does not necessarily hold in a metric space. For counterexample, see
<a href="https://math.stackexchange.com/questions/3154978/is-it-true-that-the-boundary-of-an-open-ball-is-equal-to-the-boundary-of-a-close/3155018#3155018">Is it true that the boundary of an open ball is equal to the boundary of a closed ball, in an arbitrary metric space?</a></p>
|
3,080,124 | <p>Let <span class="math-container">$X$</span> be a topological space. Let <span class="math-container">$a\in X$</span>. Is it always true that <span class="math-container">$a$</span> is contained in a proper open set of <span class="math-container">$X$</span>? I don't know how to derive it directly by the axioms of a topological space.</p>
| Henno Brandsma | 4,280 | <p>No, this need not be the case: if <span class="math-container">$X$</span> is a set and <span class="math-container">$p \in X$</span> then the following defines a topology on <span class="math-container">$X$</span> (th excluded point topology w.r.t. <span class="math-container">$p$</span>):</p>
<p><span class="math-container">$$\mathcal{T}= \{A \subseteq X: p \notin A\} \cup \{X\}$$</span></p>
<p>It's easy to check this satisfies the axioms of a topology. And it's also clear that the only open set that contains <span class="math-container">$p$</span> is <span class="math-container">$X$</span> itself. </p>
<p>But very often additional assumptions on <span class="math-container">$X$</span> exist, e.g. <span class="math-container">$T_1$</span> (for every pair <span class="math-container">$x \neq y$</span> of points in <span class="math-container">$X$</span> there is an open set <span class="math-container">$O$</span> such that <span class="math-container">$x \in O$</span>, and <span class="math-container">$y \notin O$</span>. This guarantees that there are enough open sets so that every point is contained in a proper open set. The above example is merely <span class="math-container">$T_0$</span>, not <span class="math-container">$T_1$</span>. Where <span class="math-container">$T_0$</span> means that for every <span class="math-container">$x \neq y$</span> we have an open set <span class="math-container">$O$</span> such that (<span class="math-container">$x \in O$</span> and <span class="math-container">$y \notin O$</span>) <em>or</em> (<span class="math-container">$x \notin O$</span> and <span class="math-container">$y \in O$</span>), which does hold as one of the <span class="math-container">$x$</span> or <span class="math-container">$y$</span> is unequal to <span class="math-container">$p$</span> (say <span class="math-container">$x$</span>) and we then use <span class="math-container">$O=\{x\}$</span>. So <span class="math-container">$T_0$</span> does not guarantee this property, and the stronger <span class="math-container">$T_1$</span> does. So if you desire such a property for a space <span class="math-container">$X$</span>, assume such "separation axioms" on it.</p>
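<p>For a concrete feel, the axioms can be brute-force checked in Python on a tiny example, say <span class="math-container">$X=\{0,1,2\}$</span> and <span class="math-container">$p=0$</span> (on a finite set, closure under pairwise unions and intersections suffices):</p>

```python
from itertools import combinations

X = frozenset({0, 1, 2})
p = 0

subsets = [frozenset(s) for r in range(len(X) + 1)
           for s in combinations(sorted(X), r)]
# The excluded point topology: all sets avoiding p, plus X itself.
T = {A for A in subsets if p not in A} | {X}

has_empty_and_X = frozenset() in T and X in T
unions_closed = all(A | B in T for A in T for B in T)
intersections_closed = all(A & B in T for A in T for B in T)
only_X_contains_p = all(A == X for A in T if p in A)
```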
|
2,005,555 | <p>When I was solving a DE problem I was able to reduce it to </p>
<p>$$e^x \sin(2x)=a\cdot e^{(1+2i)x}+b\cdot e^{(1−2i)x}.$$ </p>
<p>For complex $a,b$. Getting one solution is easy $(\frac{1}{2i},-\frac{1}{2i})$ but I was wondering what are all the values for complex $a,b$ that satisfy the equation. </p>
| snoram | 103,861 | <p>Going through it step by step without the use (of frankly very useful) shortcuts.</p>
<p>Start with:
$$e^x \sin(2x)=a\cdot e^{(1+2i)x}+b\cdot e^{(1−2i)x}$$</p>
<p>Clean up a bit and divide by $e^x$ on both sides:
$$ e^x \sin(2x)=a\cdot e^{x} e^{2xi}+b\cdot e^{x} e^{-2xi}$$
$$\sin(2x)=a\cdot e^{2xi}+b\cdot e^{-2xi}$$</p>
<p>Throw in Euler's Formula: $e^{ix} = \cos(x) + i \sin(x)$
$$\implies \sin(2x)=a\cdot \left(\cos(2x) + i \sin(2x)\right)+b\cdot \left(\cos(-2x) + i \sin(-2x)\right)$$</p>
<p>Note that: $\cos(x) = \cos(-x)$ and that $\sin(a) = -\sin(-a)$</p>
<p>$$\implies \sin(2x)=a\cdot \left(\cos(2x) + i \sin(2x)\right)+b\cdot \left(\cos(2x) - i \sin(2x)\right)$$
$$\implies \sin(2x)= \left( a+b \right)\cos(2x) +\left(a-b\right) i \sin(2x) $$</p>
<p>We are thus left with
$$\begin{cases} a+ b = 0 \\ \left(a-b\right) i = 1\end{cases}$$</p>
<p>Solving, I get only one solution (which is complex).
$$\boxed{a = \frac{-i}{2}} \text{ and } \boxed{ b = \frac{i}{2}}$$</p>
<p>This is the same as your solution, so I conclude no other (complex or not) solutions exist.</p>
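<p>A numerical spot-check (Python) that these coefficients reproduce $e^x\sin(2x)$:</p>

```python
import cmath
import math

a = -1j / 2
b = 1j / 2

def rhs(x):
    # a*e^{(1+2i)x} + b*e^{(1-2i)x} with the coefficients found above
    return a * cmath.exp((1 + 2j) * x) + b * cmath.exp((1 - 2j) * x)

err = max(abs(rhs(x) - math.exp(x) * math.sin(2 * x))
          for x in (-1.0, 0.0, 0.3, 1.7))
```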
|
2,301,368 | <p>Below you see the <a href="https://en.wikipedia.org/wiki/Rhombicuboctahedron" rel="noreferrer">Rhombicuboctahedron</a>. If you put an additional point in the blue triangle, you make three blue triangles out of one. Now you connect a yellow square with two adjacent small blue triangle and you end up with a blue-yellow hexagon.</p>
<p>$\hskip2.7in$<a href="https://i.stack.imgur.com/zN7T8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zN7T8.png" alt="enter image description here"></a></p>
<p>Drawn in the plane this would look like this:</p>
<p>$\hskip2.1in$<a href="https://i.stack.imgur.com/kJSFvm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kJSFvm.png" alt="enter image description here"></a></p>
<p>What is the resulting 3D object called, if it has a name at all...</p>
| zwim | 399,263 | <p>I think this has to be this since it has $12$ hexagons and $6$ squares as requested.</p>
<p><a href="https://en.wikipedia.org/wiki/Chamfer_(geometry)" rel="noreferrer">https://en.wikipedia.org/wiki/Chamfer_(geometry)</a></p>
<p><a href="https://i.stack.imgur.com/JGdGi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JGdGi.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Kha3B.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Kha3B.png" alt="enter image description here"></a></p>
<p>They call it truncated rhombic dodecahedron on this page.</p>
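<p>As a quick sanity check on the face counts ($6$ squares and $12$ hexagons), and assuming every vertex of the solid is $3$-valent (as the pictures suggest), Euler's formula $V-E+F=2$ works out (Python sketch):</p>

```python
# Face counts of the chamfered cube: 6 squares and 12 hexagons.
faces = {4: 6, 6: 12}                          # {sides: number of faces}
F = sum(faces.values())
E = sum(k * m for k, m in faces.items()) // 2  # each edge borders 2 faces
V = 2 * E // 3                                 # assuming 3 edges per vertex
euler_characteristic = V - E + F
```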
|
2,301,368 | <p>Below you see the <a href="https://en.wikipedia.org/wiki/Rhombicuboctahedron" rel="noreferrer">Rhombicuboctahedron</a>. If you put an additional point in the blue triangle, you make three blue triangles out of one. Now you connect a yellow square with two adjacent small blue triangle and you end up with a blue-yellow hexagon.</p>
<p>$\hskip2.7in$<a href="https://i.stack.imgur.com/zN7T8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zN7T8.png" alt="enter image description here"></a></p>
<p>Drawn in the plane this would look like this:</p>
<p>$\hskip2.1in$<a href="https://i.stack.imgur.com/kJSFvm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kJSFvm.png" alt="enter image description here"></a></p>
<p>What is the resulting 3D object called, if it has a name at all...</p>
| lesath82 | 430,906 | <p>Here is your solid:</p>
<p><a href="https://i.stack.imgur.com/2wW66.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2wW66.png" alt="enter image description here"></a></p>
<p>It's part of the family of the chamfered cubes, but I don't think it has a name on its own.</p>
|
2,881,914 | <p>Using a computer I found the double sum</p>
<p>$$S(n)= \sum_{j=1}^n\sum_{k=1}^n \frac{j^2 + jk + k^2}{j^2(j+k)^2k^2}$$
has values</p>
<p>$$S(10) \quad\quad= 1.881427206538142 \\ S(1000) \quad= 2.161366028875634 \\S(100000) = 2.164613524212465\\$$</p>
<p>As a guess I compared with fractions $\pi^p/q$ where $p,q$ are positive integers and it appears </p>
<p>$$\lim_{n \to \infty} S(n) = \frac{\pi^4}{45} = 2\zeta(4) \approx 2.164646467422276 $$</p>
<p>I'd be interested in seeing a proof if true. </p>
| skbmoore | 321,120 | <p>$$S(\infty)=\sum_{j=1}^\infty\,\sum_{k=1}^\infty \frac{(j+k)^2 - jk}{j^2(j+k)^2k^2} = \underbrace{\Big(\sum_{k=1}^\infty \frac{1}{k^2}\Big)^2}_{=\zeta(2)^2} -
\underbrace{\sum_{j=1}^\infty\,\sum_{k=1}^\infty \frac{1}{j\,k}\int_0^\infty dt \,t \,e^{-t(j+k)}}_{:=U},$$
where the first step is algebra and the second is use of the Euler representation of the $\Gamma$ function. Interchange sums and integral and sum in terms of $\log$ to find
$$U=\int_0^\infty dt \,t \, \log^2(1-e^{-t}) = -\int_0^1 \frac{du}{u} \log\,u \log^2{(1-u)} =$$
$$-\frac{\partial}{\partial s} \frac{\partial^2}{\partial v^2} \int_0^1 u^{s-1} (1-u)^{v-1} \, du \Big\vert_{s=0,v=1}= -\frac{\partial}{\partial s} \frac{\partial^2}{\partial v^2}\frac{\Gamma(s) \Gamma(v)}{\Gamma(s+v)}\Big\vert_{s=0,v=1}$$
where the first step follows from a simple substitution $u=e^{-t}$ and the second is writing the integral in terms of something that is known, the beta integral. Use your favorite CAS to do the partial derivatives to get $U=\pi^4/180.$ Combine with $\zeta(2)^2 = \pi^4/36$ to finish the proof of the OP's hypothesis.</p>
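As a quick numerical sanity check (an illustrative Python sketch, separate from the argument above), the partial sums do approach $\pi^4/45$:

```python
import math

def S(n):
    # Partial double sum from the question.
    return sum((j*j + j*k + k*k) / (j*j * (j + k)**2 * k*k)
               for j in range(1, n + 1) for k in range(1, n + 1))

print(S(10))             # ~1.8814, as quoted in the question
print(math.pi**4 / 45)   # conjectured limit, 2*zeta(4) ~ 2.1646
```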
|
2,881,914 | <p>Using a computer I found the double sum</p>
<p>$$S(n)= \sum_{j=1}^n\sum_{k=1}^n \frac{j^2 + jk + k^2}{j^2(j+k)^2k^2}$$
has values</p>
<p>$$S(10) \quad\quad= 1.881427206538142 \\ S(1000) \quad= 2.161366028875634 \\S(100000) = 2.164613524212465\\$$</p>
<p>As a guess I compared with fractions $\pi^p/q$ where $p,q$ are positive integers and it appears </p>
<p>$$\lim_{n \to \infty} S(n) = \frac{\pi^4}{45} = 2\zeta(4) \approx 2.164646467422276 $$</p>
<p>I'd be interested in seeing a proof if true. </p>
| Jack D'Aurizio | 44,121 | <p>An alternative approach:</p>
<p>$$ S = \lim_{n\to +\infty}S(n) = \sum_{j,k\geq 1}\frac{1}{j^2 k^2}-\sum_{k,j\geq 1}\frac{1}{jk(j+k)^2}=\zeta(2)^2-\sum_{k,j\geq 1}\int_{0}^{+\infty}\frac{e^{-(j+k)x}}{jk}\,x\,dx $$
leads to
$$S = \zeta(2)^2-\int_{0}^{+\infty}x\log^2(1-e^{-x})\,dx=\frac{\pi^4}{36}+\int_{0}^{1}\frac{\log^2(1-x)\log(x)}{x}\,dx$$
or to
$$ S = \frac{\pi^4}{36}+\int_{0}^{1}\frac{\log(1-x)}{1-x}\log^2(x)\,dx = \frac{\pi^4}{36}-\sum_{n\geq 1}\frac{2H_n}{(n+1)^3}$$
since $\frac{-\log(1-x)}{1-x}=\sum_{n\geq 1}H_n x^n$ and $\int_{0}^{1}x^n\log^2(x)\,dx = \frac{2}{(n+1)^3}$. Rearranging
$$ S = \frac{\pi^4}{36}-2\sum_{n\geq 1}\frac{H_{n}}{n^3}+2\,\zeta(4) = 2\,\zeta(4) = \frac{\pi^4}{45}$$
since the middle term is a linear Euler sum, which can be computed from the Theorem 2.2 <a href="http://algo.inria.fr/flajolet/Publications/FlSa98.pdf" rel="nofollow noreferrer">here</a> (Flajolet and Salvy, a masterpiece).</p>
|
2,303,163 | <blockquote>
<p>Let $T$ be a linear operator on the finite-dimensional space $V.$ Suppose there is a linear operator $U$ on $V$ such that $TU=I.$ Prove that $T$ is invertible and $U=T^{-1}.$</p>
</blockquote>
<p>Attempt: Let $\dim V=n$ and $\{\alpha_i\}_{i=1}^n$ a basis for $V$. We claim that $\{U(\alpha_i)_{i=1}^n\}$ is a basis for $V.$ If not then there exists scalars $c_i$'s $\in F$ not all zero such that $\sum c_iU(\alpha_i)=0.$ Applying $T$ on both sides we get $\sum c_iTU(\alpha_i)=\sum c_i\alpha_i=0,$ a contradiction. Thus $\{U(\alpha_i)_{i=1}^n\}$ is a basis for $V.$</p>
<p>Now we observe that $T[U(\alpha_i)]=\alpha_i$ since $TU=I.$ We infer that $T$ and $U$ are invertible since they map basis vectors to basis vectors. It remains to show that $U=T^{-1}.$</p>
<p><strong>Can I say that $U=T^{-1}$ using $TU=I$?</strong>
I am not sure if I can use this since I don't know if inverses are unique.</p>
| Ken Duna | 318,831 | <p>I suppose at this point you just need to show that $UT = I$ as well. You can use $TU = I$ and the fact that $T^{-1}$ exists for this:</p>
<p>\begin{align*}
TU &= I \\
TUT &= IT = T \\
T^{-1}TUT &= T^{-1}T \\
UT &= I
\end{align*}</p>
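To see the algebra in action, here is a small numeric illustration (a Python sketch; the matrices are example values chosen for the purpose, not part of the proof):

```python
def matmul(A, B):
    # Plain 2x2 matrix product, enough for the illustration.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[2.0, 1.0], [1.0, 1.0]]      # an invertible operator (det = 1)
U = [[1.0, -1.0], [-1.0, 2.0]]    # satisfies TU = I

I2 = [[1.0, 0.0], [0.0, 1.0]]
print(matmul(T, U))   # [[1.0, 0.0], [0.0, 1.0]]
print(matmul(U, T))   # [[1.0, 0.0], [0.0, 1.0]] -- UT = I as well
```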
|
1,684,124 | <p>Here is my attempt:</p>
<p>$$ \frac{2x}{x^2 +2x+1}= \frac{2x}{(x+1)^2 } = \frac{2}{x+1}-\frac{2}{(x+1)^2 }$$</p>
<p>Then I tried to integrate it, and I got $2\ln(x+1)+\frac{2}{x+1}+C$ as my answer. Am I right? Please correct me if I'm wrong.</p>
| Travis Willse | 155,629 | <p><strong>Hint</strong> The factorization $(x + 1)^2$ of the denominator of the integrand suggests that we can rewrite the integral using the substitution $u := x + 1$, $du = dx$:
$$\int \frac{2 (u - 1)}{u^2} du = 2 \left(\int \frac{du}{u} - \int \frac{du}{u^2} \right) .$$</p>
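A quick numerical check (an illustrative Python sketch) confirms that the asker's antiderivative $2\ln(x+1)+\frac{2}{x+1}+C$ does differentiate back to the integrand:

```python
import math

def F(x):
    # Proposed antiderivative.
    return 2 * math.log(x + 1) + 2 / (x + 1)

def f(x):
    # Original integrand 2x/(x^2 + 2x + 1) = 2x/(x+1)^2.
    return 2 * x / (x + 1)**2

h = 1e-6
for x in (0.5, 1.0, 3.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    print(x, numeric, f(x))                     # the two values agree
```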
|
511,304 | <p>Given the ODE: </p>
<p>$2(x+1)y' = y$</p>
<p>How can I solve that using Power Series? I started to think about it:</p>
<p>$
\\2(x+1)\sum_{n=1}^{\infty}{nc_nx^{n-1}}-\sum_{n=0}^{\infty}{c_nx^n}=0
\\2\sum_{n=1}^{\infty}{nc_nx^{n}}+2\sum_{n=1}^{\infty}{nc_nx^{n-1}}-\sum_{n=0}^{\infty}{c_nx^n}=0
\\\sum_{n=0}^{\infty}{2nc_nx^{n}}+\sum_{n=0}^{\infty}{2(n+1)c_{n+1}x^{n}}-\sum_{n=0}^{\infty}{c_nx^n} = 0
\\\sum_{n=0}^{\infty}{[2nc_n + 2(n+1)c_{n+1} - c_n]x^n} = 0
$</p>
<p>Then:</p>
<p>$
\\2nc_{n}+2(n+1)c_{n+1}-c_n=0
\\c_{n+1}=\frac{c_n(1-2n)}{2(n+1)}
$</p>
<p>Now, I should know what the generic formula of $c_n$ is, but I cannot see the pattern by assigning values to $n$. How can I proceed?</p>
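For what it's worth, the pattern does have a closed form: the recurrence reproduces the binomial-series coefficients of $\sqrt{1+x}$, i.e. $c_n = c_0\binom{1/2}{n}$, consistent with the exact solution $y = c_0\sqrt{1+x}$ of $2(x+1)y' = y$. A Python sketch checking this numerically:

```python
import math

def c(n):
    # Coefficients generated by the recurrence, with c_0 = 1.
    val = 1.0
    for m in range(n):
        val *= (1 - 2 * m) / (2 * (m + 1))
    return val

# Partial sum of the series at a sample point versus sqrt(1 + x).
x = 0.3
series = sum(c(n) * x**n for n in range(40))
print(series, math.sqrt(1 + x))   # the two values agree
```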
| Sangchul Lee | 9,340 | <p>Note that</p>
<p>$$ \tanh x = 1 - \frac{2e^{-2x}}{1 + e^{-2x}} = 1 + O\left(e^{-2x}\right). $$</p>
<p>Thus </p>
<p>$$ \arctan\left(C^{-1} \tanh x \right) = \arctan\left( C^{-1} + O\left(e^{-2x}\right) \right) = \arctan(C^{-1}) + O\left(e^{-2x}\right). $$</p>
<p>This shows that</p>
<p>$$ \int_{0}^{\lambda} \arctan\left(C^{-1} \tanh x \right) \, dx = \lambda \arctan(C^{-1}) + O(1). $$</p>
<p>Therefore</p>
<p>$$ \lim_{\lambda \to \infty} \left( -\lambda + \int_{0}^{\lambda} \arctan\left(C^{-1} \tanh x \right) \, dx \right)
= \begin{cases}
+\infty, & C^{-1} > \tan 1 \\
\text{converges}, & C^{-1} = \tan 1 \\
-\infty, & C^{-1} < \tan 1
\end{cases} $$</p>
<p>When $C^{-1} = \tan 1$, Mathematica says that the limit will be approximately</p>
<p>$$ -0.50560901153910564220\cdots. $$</p>
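The borderline behaviour can be checked numerically (an illustrative Python sketch using plain trapezoidal quadrature; only modest accuracy is claimed):

```python
import math

def integrand(x, inv_c):
    return math.atan(inv_c * math.tanh(x))

def F(lam, inv_c, steps=20000):
    # Trapezoidal approximation of  -lam + \int_0^lam arctan(C^{-1} tanh x) dx.
    h = lam / steps
    total = 0.5 * (integrand(0.0, inv_c) + integrand(lam, inv_c))
    for i in range(1, steps):
        total += integrand(i * h, inv_c)
    return total * h - lam

inv_c = math.tan(1.0)     # the borderline case C^{-1} = tan 1
print(F(10.0, inv_c))     # already close to -0.505609...
print(F(20.0, inv_c))     # essentially unchanged: the limit converges
```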
|
197,877 | <p>According to answer of Denis Serre to <a href="https://mathoverflow.net/questions/197773/a-geometric-property-of-singular-matrices">this question</a>, the manifold of singular matrices in $M_{n}(\mathbb{R})$ is defined as follows:
$$M=\{A\in M_{n}(\mathbb{R})\mid \text{rank}(A)=n-1\}$$</p>
<p>So we define a line bundle over this manifold:
$$\{(A,v)\in M\times\mathbb{R}^{n}\mid Av=0\}.$$</p>
<blockquote>
<p>Is it a trivial line bundle?</p>
</blockquote>
| Alex Degtyarev | 44,953 | <p>No, this bundle is not trivial (starting from dimension $2$). Introduce a metric, consider projector to a hyperplane, and rotate this hyperplane through $\pi$ about an axis. You get an orientation reversing loop.</p>
|
90,112 | <p>When reading "Chebyshev centers and uniform convexity" by Dan Amir I encountered the following result which is apparently "known and easy to prove". I'm sure it is, but I can't find a proof and am failing to prove it myself.</p>
<p>The result (slightly simplified) is</p>
<p>If $X$ is a uniformly convex space (i.e. for each $\epsilon > 0$ there exists $\delta(\epsilon) > 0$ such that $||x|| = ||y|| = 1$ and $||x - y|| \geq \epsilon$ imply $||\frac{x + y}{2}|| \leq 1 - \delta(\epsilon)$) then for any $x, y$ with $||x|| \leq 1$, $||y|| \leq 1$, and $||x - y|| \geq \epsilon$, $||\frac{x + y}{2}|| \leq 1 - \delta(\epsilon)$.</p>
<p>Part of the problem is that I think this isn't true without making some additional restrictions to reduce the value of $\delta(\epsilon)$. e.g. by considering $||x|| = 1$ and $y = (1 - \epsilon) x$ you can see that this requires that $\delta(\epsilon) \leq \frac{1}{2} \epsilon$. So I think the true result is probably just that you can choose $\delta$ so that this is true.</p>
<p>I'm sure this should be easy and I'm just missing an obvious trick, but oh well.</p>
| Sergei Ivanov | 4,354 | <p>If the second $\delta(\varepsilon)$ is allowed to differ from the first one, then there is a simple implicit argument: Suppose the contrary, then there is a sequence $X_n$ of 2-dimensional normed spaces satisfying the definition with the same function $\delta(\varepsilon)$ and points $x_n,y_n\in X_n$ with $\|x_n\|\le 1$, $\|y_n\|\le 1$, $\|x_n-y_n\|\ge\varepsilon$ but $\|(x_n+y_n)/2\|\ge 1-\delta_n$ where $\delta_n\to 0$. Since the Banach--Mazur compactum is compact, there is a converging subsequence, and the limit space satisfies the definition for the same $\delta(\varepsilon)$ but contains two points $x,y$ with $\|x\|\le 1$, $\|y\|\le 1$, $\|x-y\|\ge\varepsilon$ and $\|(x+y)/2\|\ge 1$, a contradiction.</p>
<p>In fact, you can always choose the same $\delta(\varepsilon)$ in the second case provided that $\dim X\ge 2$. Suppose the contrary, then there are points $x,y\in X$ such that $\|x\|\le 1$, $\|y\|\le 1$, $\|x-y\|\ge\varepsilon$ but $\|(x+y)/2\|=1-\delta_1$ where $\delta_1<\delta=\delta(\varepsilon)$. We may assume that $X$ is 2-dimensional (otherwise restrict to a 2-dimensional subspace containing $x$ and $y$). Fix $\delta_1$ and from all such pairs $x,y$ choose one that minimize $\big|\|x\|-\|y\|\big|$. I claim that this minimizing pair satisfies $\|x\|=\|y\|$.</p>
<p>Suppose the contrary: let $\|x\|>\|y\|$. Denote $z=(x+y)/2$, $v=(x-y)/2$. If $v$ is proportional to $x$, choose any $v'$ with $\|v'\|=\|v\|$ such that $\|z+v\|\ne \|z\|\pm\|v\|$. Then the points $x'=z+v'$ and $y'=z-v'$ show that $x$ and $y$ did not minimize $\big|\|x\|-\|y\|\big|$. If $v$ is not proportional to $x$, choose a vector $w$ parallel to a supporting line to the unit sphere of $\|\cdot\|$ at the point $v/\|v\|$. Note that $w$ cannot be parallel to a supporting line at $x/\|x\|$, so either $\|x+tw\|<\|x\|$ or $\|x-tw\|<\|x\|$ for a sufficiently small $t>0$. Hence the points $x'=x+tw$ and $y'=y-tw$ or $x'=x-tw$ and $y'=y+tw$ provide a counter-example with $\big|\|x'\|-\|y'\|\big|<\big|\|x\|-\|y\|\big|$.</p>
<p>Thus the minimizing pair satisfies $\|x\|=\|y\|$. Multiplying by $\|x\|^{-1}$ we get a counter-example with $\|x\|=\|y\|=1$.</p>
|
1,159,599 | <p>Can someone give me a hint on how to calculate this integral?</p>
<p>$\int _0^{\frac{1}{3}} \frac{e^{-x^2}}{\sqrt{1-x^2}}dx$</p>
<p>Thanks so much!</p>
| Harry Peter | 83,346 | <p><span class="math-container">$\int_0^\frac{1}{3}\dfrac{e^{-x^2}}{\sqrt{1-x^2}}~dx$</span></p>
<p><span class="math-container">$=\int_0^{\sin^{-1}\frac{1}{3}}\dfrac{e^{-\sin^2x}}{\sqrt{1-\sin^2x}}~d(\sin x)$</span></p>
<p><span class="math-container">$=\int_0^{\sin^{-1}\frac{1}{3}}e^\frac{\cos2x-1}{2}~dx$</span></p>
<p><span class="math-container">$=e^{-\frac{1}{2}}\int_0^{2\sin^{-1}\frac{1}{3}}e^\frac{\cos x}{2}~d\left(\dfrac{x}{2}\right)$</span></p>
<p><span class="math-container">$=\dfrac{e^{-\frac{1}{2}}}{2}\int_0^{2\sin^{-1}\frac{1}{3}}e^\frac{\cos x}{2}~dx$</span></p>
<p>Which can be expressed in terms of <a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9C572E5CE44E9E0DE8630755DF99ABAC/S0013091505000490a.pdf/incomplete-bessel-functions-i.pdf" rel="nofollow noreferrer">Incomplete Bessel Functions</a>.</p>
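The chain of substitutions can be sanity-checked numerically (an illustrative Python sketch; `trap` is a throwaway trapezoidal integrator written just for this check):

```python
import math

def trap(f, a, b, n=20000):
    # Simple trapezoidal rule, adequate for smooth integrands.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

lhs = trap(lambda x: math.exp(-x * x) / math.sqrt(1 - x * x), 0.0, 1.0 / 3.0)
a = 2 * math.asin(1.0 / 3.0)
rhs = (math.exp(-0.5) / 2) * trap(lambda x: math.exp(math.cos(x) / 2), 0.0, a)
print(lhs, rhs)   # the two values agree
```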
|
400,838 | <p>I need to find $$\lim_{x\to 1} \frac{2-\sqrt{3+x}}{x-1}$$</p>
<p>I tried and tried... friends of mine tried as well and we don't know how to get out of:</p>
<p>$$\lim_{x\to 1} \frac{x+1}{(x-1)(2+\sqrt{3+x})}$$</p>
<p>(this is what we get after multiplying by the conjugate of $2 + \sqrt{3+x}$)</p>
<p>How to proceed? Maybe some hints, we really tried to figure it out, it may happen to be simple (probably, actually) but I'm not able to see it. Also, I know the answer is $-\frac{1}{4}$ and when using l'Hôpital's rule I am able to get the correct answer from it.</p>
| Euler....IS_ALIVE | 38,265 | <p>Multiplying by the conjugate does indeed work. You just forgot to carry the negative sign throughout. After multiplying by the conjugate, the correct expression is $\frac{1-x}{(x-1)(2+\sqrt{3+x})}$</p>
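Numerically the corrected simplification checks out (an illustrative Python sketch): after cancelling $x-1$, the expression is $-\frac{1}{2+\sqrt{3+x}}$, which tends to $-\frac14$ as $x\to 1$:

```python
import math

def g(x):
    # Original expression.
    return (2 - math.sqrt(3 + x)) / (x - 1)

def simplified(x):
    # (1-x)/((x-1)(2+sqrt(3+x))) = -1/(2+sqrt(3+x))
    return -1 / (2 + math.sqrt(3 + x))

for x in (1.1, 1.01, 1.001):
    print(x, g(x), simplified(x))   # both approach -0.25

print(simplified(1.0))   # exactly -0.25 at x = 1
```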
|
1,775,649 | <p>True or false and explain why?: a matrix with characteristic polynomial $\lambda^3 -3\lambda^2+2\lambda$ must be diagonalizable.</p>
<p>First I found the lambda's that make this zero (eigenvalues) and got $0, 1, 2$ but I don't know if having $0$ as an eigenvalue means that the matrix is not diagonalizable? I know that a matrix has $0$ as an eigenvalue if it is not invertible, but I don't know if a matrix needs to be invertible to be diagonalizable? Also if a matrix has complex eigenvalues does that also mean it cannot be diagonalizable?</p>
| Patrick Abraham | 337,503 | <p>Edit (better wording)</p>
<p>Having $0$ as an eigenvalue doesn't hinder diagonalization; in fact, no particular eigenvalue does.</p>
<p>That doesn't mean that every matrix is diagonalizable, but the values of the eigenvalues have no influence, at least over $\mathbb{C}$.</p>
<p>For instance, a diagonal matrix $A=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ is always diagonalizable, no matter what its eigenvalues are.</p>
<hr>
<p>If a real matrix $A$ has a non-real eigenvalue $\lambda \in \mathbb{C}\setminus\mathbb{R}$, you have a problem, since you can't diagonalize it over $\mathbb{R}$ without passing to $\mathbb{C}$.</p>
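To tie this back to the question: the given characteristic polynomial factors as $\lambda(\lambda-1)(\lambda-2)$, with three distinct roots, and a $3\times 3$ matrix with three distinct eigenvalues is always diagonalizable (eigenvectors for distinct eigenvalues are linearly independent); the eigenvalue $0$ only says the matrix is singular. A tiny Python check of the factorization:

```python
def p(lam):
    # Characteristic polynomial from the question.
    return lam**3 - 3 * lam**2 + 2 * lam

# Integer roots in a small search window.
roots = [x for x in range(-5, 6) if p(x) == 0]
print(roots)   # [0, 1, 2] -- three distinct eigenvalues
```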
|
174,655 | <p>So I have 2 lists of 10000+ lists of 3 numbers, e.g.</p>
<pre><code>{{1,2,3},{4,5,6},{7,8,9},...}
{{2,1,3},{4,5,6},{41,2,0},...}
</code></pre>
<p>Wanting a result like </p>
<pre><code>{2,...}
</code></pre>
<p>Getting some sort of list of <code>True</code>/<code>False</code> is also probably enough, like this:</p>
<pre><code>{False,True,False,...}
</code></pre>
<p>I guess I could use <code>Position</code> once I've done that.</p>
<p>I tried to use <code>Thread</code>, as below:</p>
<pre><code>Thread[{{a, b}, {c, d}, {e, f}} == {{a, b}, {d, e}, {f, e}}]
</code></pre>
<p>which gives the <code>True</code>/<code>False</code> output</p>
<pre><code>{True, {c, d} == {d, e}, {e, f} == {f, e}}
</code></pre>
<p>But as soon as there are actual numbers in place, it doesn't work:</p>
<pre><code>Thread[{{1, 2}, {2, 3}, {4, 5}} == {{1, 3}, {2, 3}, {4, 5}}]
</code></pre>
<p>Returns</p>
<pre><code>False
</code></pre>
<p>I'd really appreciate any help you could give.</p>
<p>Thanks,</p>
<p>H</p>
| kglr | 125 | <pre><code>a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {1, 2, 3}};
b = {{2, 1, 3}, {4, 5, 6}, {41, 2, 0}, {1, 2, 3}};
Pick[Range@Length@a, Total /@ Unitize[Subtract[a, b]], 0]
</code></pre>
<blockquote>
<p>{2,4}</p>
</blockquote>
<p>Or using @Henrik's idea of using <code>Dot</code> in in the second argument of <code>Pick</code>:</p>
<pre><code>Pick[Range @ Length @ a,
Unitize[Subtract[a, b]].ConstantArray[1, Dimensions[a][[2]]], 0]
</code></pre>
<blockquote>
<p>{2, 4}</p>
</blockquote>
|
174,655 | <p>So I have 2 lists of 10000+ lists of 3 numbers, e.g.</p>
<pre><code>{{1,2,3},{4,5,6},{7,8,9},...}
{{2,1,3},{4,5,6},{41,2,0},...}
</code></pre>
<p>Wanting a result like </p>
<pre><code>{2,...}
</code></pre>
<p>Getting some sort of list of <code>True</code>/<code>False</code> is also probably enough, like this:</p>
<pre><code>{False,True,False,...}
</code></pre>
<p>I guess I could use <code>Position</code> once I've done that.</p>
<p>I tried to use <code>Thread</code>, as below:</p>
<pre><code>Thread[{{a, b}, {c, d}, {e, f}} == {{a, b}, {d, e}, {f, e}}]
</code></pre>
<p>which gives the <code>True</code>/<code>False</code> output</p>
<pre><code>{True, {c, d} == {d, e}, {e, f} == {f, e}}
</code></pre>
<p>But as soon as there are actual numbers in place, it doesn't work:</p>
<pre><code>Thread[{{1, 2}, {2, 3}, {4, 5}} == {{1, 3}, {2, 3}, {4, 5}}]
</code></pre>
<p>Returns</p>
<pre><code>False
</code></pre>
<p>I'd really appreciate any help you could give.</p>
<p>Thanks,</p>
<p>H</p>
| jkuczm | 14,303 | <p>If you need speed, you could use ugly compiled function:</p>
<pre><code>equalPosInt2 = Last@Compile[{{a, _Integer, 2}, {b, _Integer, 2}},
Module[{result, dimA, dimB, n, m, eq},
result = Internal`Bag@Most@{0};
dimA = Dimensions@a;
dimB = Dimensions@b;
n = Min[Compile`GetElement[dimA, 1], Compile`GetElement[dimB, 1]];
m = Min[Compile`GetElement[dimA, 2], Compile`GetElement[dimB, 2]];
Do[
eq = True;
Do[
If[Compile`GetElement[a, i, j] =!= Compile`GetElement[b, i, j],
eq = False;
Break[];
];
,
{j, m}
];
If[eq, Internal`StuffBag[result, i]];
,
{i, n}
];
Internal`BagPart[result, All]
],
CompilationTarget -> "C", RuntimeOptions -> "Speed"
];
</code></pre>
<p>Basic test:</p>
<pre><code>a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {1, 2, 3}};
b = {{2, 1, 3}, {4, 5, 6}, {41, 2, 0}, {1, 2, 3}};
equalPosInt2[a, b]
(* {2, 4} *)
</code></pre>
<p>Timings:</p>
<pre><code>fraccalo[a_, b_] := Position[MapThread[Equal, {a, b}], True]
f2[a_, b_] := Position[Unitize[Subtract[a, b]].ConstantArray[1, Dimensions[a][[2]]], 0, 1]
f3[a_, b_] := SparseArray[Unitize[Unitize[Subtract[a, b]].ConstantArray[1, Dimensions[a][[2]]]], {Length[a]}, 1]["NonzeroPositions"]
kglr1[a_, b_] := Pick[Range@Length@a, Total /@ Unitize[Subtract[a, b]], 0]
kglr2[a_, b_] := Pick[Range@Length@a, Unitize[Subtract[a, b]].ConstantArray[1, Dimensions[a][[2]]], 0]
SeedRandom@12345;
{a, b} = RandomInteger[{1, 9}, {2, 1000000, 3}];
r1 = fraccalo[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.691, 280000672}*)
r2 = f2[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.0803, 64000928}*)
r3 = f3[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.048, 64001120}*)
r4 = kglr1[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.072, 56000904}*)
r5 = kglr2[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.0495, 72001232}*)
r6 = equalPosInt2[a, b]; // MaxMemoryUsed // RepeatedTiming (*{0.0054, 29928}*)
r1[[All, 1]] === r2[[All, 1]] === r3[[All, 1]] === r4 === r5 === r6
(* True *)
</code></pre>
|
73,375 | <p>The example cells in the documentation each have a count of the cells inside their section:</p>
<pre><code> Cell[TextData[{"Basic Examples", " ", Cell["(4)", "ExampleCount"]}],
"ExampleSection", "ExampleSection"]
</code></pre>
<p>But this is static content, how exactly would this work dynamically? I'd like to make my own cell style where the cell dingbat counts the number of cells inside the cell group it contains (and updates itself dynamically of course). </p>
<p>I've looked at the stylesheet for outline-styled notebooks and then tried using the <code>Counter*</code> options, but these are for dynamic tallying, not content counting and there's not much documentation on these esoteric front-end things like </p>
<pre><code>CounterBoxOptions->{CounterFunction:>CapitalRomanNumeral}]
</code></pre>
<p>Any help would be appreciated.</p>
<p><img src="https://i.stack.imgur.com/Yujf7.png" alt="enter image description here"></p>
| Kuba | 5,478 | <p>I think if it's done each time you save the notebook, it should be nice enough :)</p>
<pre><code>SetOptions[
EvaluationNotebook[],
NotebookEventActions -> {
{"MenuCommand", "Save"} :> (Scan[
Module[{nr},
SelectionMove[#, All, CellGroup, AutoScroll -> False];
nr = Length @ Select[
SelectedCells[],
Experimental`CellStyleNames[#] === "Input" & (*1*)
];
SetOptions[#, CellDingbat -> "(" <> ToString[nr] <> ")"];
] &
,
Cells[CellStyle -> "Section"] (*2*)
]),
PassEventsDown -> True
}
]
</code></pre>
<p>Ad 1. Cell style to count</p>
<p>Ad 2. Cell style whose parent group end "resets the counter"</p>
<p>You can use it in stylesheets too.</p>
<p><a href="https://i.stack.imgur.com/xPOrl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xPOrl.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Update from Question's Author:</strong></p>
<p>As Kuba's comment fixes the raggedness:</p>
<pre><code>SetOptions[EvaluationNotebook[],
NotebookEventActions -> {{"MenuCommand",
"Save"} :> (Scan[
Module[{nr},
SelectionMove[#, All, CellGroup, AutoScroll -> False];
nr = Length@
Select[SelectedCells[],
Experimental`CellStyleNames[#] ===
"ItemNumbered" & (*1*)];
SetOptions[#,
CellDingbat ->
Cell[BoxData[
PaneBox[
StyleBox[ToString[nr] <> " ",
RGBColor[0.5, 0.5, 0.67, 0.81],
FontFamily -> "Continuum Light", 15],
Alignment -> Right, ImageSize -> 40]],
Background -> White]];] &,
Cells[CellStyle -> "Subsection"] (*2*)]),
PassEventsDown -> True}]
</code></pre>
<p><a href="https://i.stack.imgur.com/P6hXV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6hXV.png" alt="enter image description here"></a></p>
|
129,287 | <p>Suppose $p(x_1, x_2, \cdots, x_n)$ is a symmetric polynomial. Given any univariate polynomial $u$, we can define a new polynomial $q(x_1, x_2, \cdots, x_{n+1})$ as</p>
<p>$q(x_1, x_2, \cdots, x_{n+1}) = u(x_1)p(x_2, x_3, \cdots, x_{n+1}) + u(x_2)p(x_1, x_3, \cdots, x_{n+1}) + \cdots \\ \phantom{q(x_1, x_2, \cdots, x_{n+1}) = } \qquad + u(x_{n+1})p(x_1, x_2, \cdots, x_n).$</p>
<p>It is easy to verify that $q$ is a symmetric polynomial. My question is: Is there a name already defined for such a mapping from $(p, u)$ to $q$? Thanks.</p>
| Abdelmalek Abdesselam | 7,410 | <p>I don't know if the operation has a name in the context of the classical theory of symmetric functions. However, in mathematical physics this is essentially what is called a creation operator in a Boson Fock space.
See, e.g., Reed and Simon "Methods of Modern Mathematical Physics" vol 2, page 209, 1975 edition.</p>
|
3,210,295 | <p>I wondered if anybody knew how to calculate a percentage loss/gain of a process over time?</p>
<p>Suppose for example Factory A conducted activity over 6 periods.</p>
<p>In t-5, utilisation of resources was: 80%
t-4: 70%
t-3: 80%
t-2: 100%
t-1: 90%
t: 75%</p>
<p>Therefore, with the exception of two periods ago, when utilisation was 100%, there has been a utilisation loss in every period.</p>
<p>Is it possible to calculate cumulative utilisation loss over this period?</p>
<p>Any help would be appreciated, </p>
<p>Best,</p>
<p>Andrew</p>
| Ross Millikan | 1,827 | <p>For each period, the loss is <span class="math-container">$100\%$</span> minus the utilization, so your losses are <span class="math-container">$20\%, 30\%, 20\%, 0\%, 10\%, 25\%$</span>. The total of these is <span class="math-container">$105\%$</span>, which means that in the six periods you have lost just over one period of utilization. If you average them, you get <span class="math-container">$17.5\%$</span>, which means that you have lost that percentage of the possible utilization of the six periods.</p>
|
13,889 | <p><strong>Question:</strong> Are there intuitive ways to introduce cohomology? Pretend you're talking to a high school student; how could we use pictures and easy (even trivial!) examples to illustrate cohomology?</p>
<p><strong>Why do I care:</strong> For a number of math kids I know, doing algebraic topology is fine until we get to homology, and then it begins to get a bit hazy: why does all this quotienting out work, why do we make spaces up from other spaces, how do we define attaching maps, etc, etc. I try to help my peers do basic homological calculations through a sequence of easy examples (much like the ones Hatcher begins with: taking a circle and showing how "filling it in" with a disk will make the "hole" disappear --- ) and then begin talking about what kinds of axioms would be nice to have in a theory like this. I have attempted to begin studying co-homology through "From Calculus to Cohomology" and Hatcher's text, but I cannot see the "picture" or imagine easy examples of cohomology to start with. </p>
| Paul VanKoughnett | 2,215 | <p>For simplicial/cellular cohomology, one way to think of it is in terms of dual cell structures: if $a_1,a_2,\dotsc$ are your $k$-simplices or $k$-cells, then they generate the $k$th chain group, and the $k$th cochain group is generated by their duals $\alpha_1,\alpha_2,\dotsc$, where $\alpha_i(a_j)=\delta_{ij}$. You can then draw a dual cell structure which has an $n-k$-cell for every $k$-cell in your original space, with each cell representing a cochain, and with that cochain sending the chains it intersects to $1$ and the rest to $0$ (and extending linearly). So if your space is a surface, you put a vertex inside every face, draw an edge between two of those vertices if there's an edge between the corresponding faces, and add a face between a set of edges if the dual edges all intersect in a vertex.</p>
<p>Then the homology of the new cell structure is the cohomology of the original structure. With field coefficients on a manifold, at least. But it does allow you to visualize, for example, the coboundary map: in the case of our surface, it sends $C^1$ to $C^2$, but $C^2$ "looks 0-dimensional", and so the coboundary map "looks like a boundary map," which is something your students are hopefully familiar with.</p>
<p>Also, if you do it with Platonic polyhedra, you get other Platonic polyhedra. A cube becomes an octahedron, etc. Of course, they all have the same (co)homology but they have different (co)chain groups so they're nice trivial examples, and more interesting than just an arbitrary sphere.</p>
<p>Hatcher goes over this very briefly in the beginning of his chapter on cohomology. He also gives a thing you can do with $\mathbb{Z}$ coefficients that's similar, though in this case you can run into torsion, so I don't know if it's as good an example.</p>
<p>The other thing you can do is just take the no-nonsense algebraic tack. We like cohomology because sometimes we want maps going the wrong direction. For example, its ring structure is easier to work with than homology's coalgebra structure (if your kids are familiar with homology, at this point you show them the coalgebra structure induced by the diagonal map and how it's hard to work with). And you get so much information for free just by knowing that certain maps are ring homomorphisms rather than graded abelian group homomorphisms. I can't think of a good example off the top of my head, but I know there is one.</p>
<p>Oh, I was taught de Rham cohomology before I even knew what homology ways. I think it's pretty easy to understand. That's another option.</p>
|
4,444,504 | <p>We have measure theory this semester. I found the statement of Lusin's theorem on the internet to be:</p>
<blockquote>
<p>Let <span class="math-container">$f:\mathbb{R\to R}$</span> be a Lebesgue measurable function. Then for each <span class="math-container">$\epsilon>0$</span> there exists a closed set <span class="math-container">$F_\epsilon\subset \mathbb R$</span> such that <span class="math-container">$f|_{F_{\epsilon}}$</span> is continuous and <span class="math-container">$|\mathbb R\setminus F_{\epsilon}|<\epsilon$</span>.</p>
</blockquote>
<p>But in another book I saw the following version:</p>
<blockquote>
<p>Let <span class="math-container">$f:\mathbb{R\to R}$</span> be a Lebesgue measurable function. Then for each <span class="math-container">$\epsilon>0$</span> there exists a compact set <span class="math-container">$K_{\epsilon}\subset \mathbb R$</span> such that <span class="math-container">$f|_{K_{\epsilon}}$</span> is continuous and <span class="math-container">$|\mathbb R-K_\epsilon|<\epsilon$</span>.</p>
</blockquote>
<p>Things got even worse when our instructor told us the following version of Lusin's theorem:</p>
<blockquote>
<p>Any continuous function on <span class="math-container">$\mathbb R$</span> with compact support is Lebesgue integrable.</p>
</blockquote>
<p>Now I am really confused. I cannot understand why these are all equivalent. I also tried to prove these results but couldn't. In the book by Sheldon Axler I found a proof, but that proof is given for Borel measurable functions, not Lebesgue measurable functions. How can I prove these results, and how can I show they are indeed the same?</p>
| Kishalay Sarkar | 691,776 | <p>Dave L. Renfro's comments and hints made me answer this question. First we prove the following:</p>
<blockquote>
<p>Let <span class="math-container">$f:\mathbb{R\to R}$</span> be measurable and <span class="math-container">$\epsilon>0$</span>, then there exists <span class="math-container">$E\subset \mathbb R$</span> measurable such that <span class="math-container">$\lambda(\mathbb R\setminus E)<\epsilon$</span> and <span class="math-container">$f|_E$</span> is continuous.</p>
</blockquote>
<p>If we could prove this, then we have a measurable set <span class="math-container">$E$</span> such that <span class="math-container">$\lambda(\mathbb R\setminus E)<\epsilon/2$</span>. Now <span class="math-container">$E$</span> can be expressed as a disjoint union <span class="math-container">$E=K\cup L$</span> where <span class="math-container">$K$</span> is closed (by inner regularity) and <span class="math-container">$\lambda(L)<\epsilon/2$</span>; since <span class="math-container">$f|_E$</span> is continuous, so is <span class="math-container">$f|_K$</span>, and <span class="math-container">$\mathbb R\setminus K=(\mathbb R\setminus E)\cup L$</span>, so that <span class="math-container">$\lambda(\mathbb R\setminus K)<\epsilon$</span>.</p>
<p>Having said that, let us prove the statement given above:</p>
<p>Let <span class="math-container">$\epsilon>0$</span> and let <span class="math-container">$(I_n)$</span> be an enumeration of the open intervals in <span class="math-container">$\mathbb R$</span> with rational endpoints.</p>
<p><span class="math-container">$f^{-1}(I_n)$</span> is measurable as <span class="math-container">$I_n$</span> is Borel and <span class="math-container">$f$</span> measurable.Then <span class="math-container">$\exists G_n$</span> open such that <span class="math-container">$f^{-1}(I_n)\subset G_n$</span> and <span class="math-container">$\lambda(G_n\setminus f^{-1}(I_n))<\epsilon/2^n$</span>.</p>
<p>Let <span class="math-container">$E$</span> be the complement of <span class="math-container">$\bigcup\limits_{n=1}^\infty (G_n\setminus f^{-1}(I_n))$</span>, then <span class="math-container">$E$</span> is measurable and <span class="math-container">$\lambda(\mathbb R\setminus E)<\epsilon$</span>. Now it can be shown that <span class="math-container">$(f|_E)^{-1}(I_n)=f^{-1}(I_n)\cap E=E\cap G_n$</span>, which is open in <span class="math-container">$E$</span>, and since <span class="math-container">$\{I_n\}$</span> is a basis of <span class="math-container">$\mathbb R$</span> with the usual topology, <span class="math-container">$f|_E$</span> is continuous.</p>
|
2,482,669 | <p>Find the sum of the expression
$$x^n+x^{n-1}y+x^{n-2}y^2+x^{n-3}y^3+\dots+xy^{n-1}+y^n$$
where $x,y$ are real numbers and $n$ is a natural number.</p>
| farruhota | 425,072 | <p>It is:
$$\frac{x^{n+1}-y^{n+1}}{x-y}\qquad (x\ne y);$$
for $x=y$ the sum equals $(n+1)x^{n}$.</p>
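A quick numerical check (an illustrative Python sketch; note the closed form assumes $x \ne y$, the sum being $(n+1)x^n$ when $x = y$):

```python
def lhs(x, y, n):
    # The sum x^n + x^(n-1) y + ... + y^n written out directly.
    return sum(x**(n - k) * y**k for k in range(n + 1))

def rhs(x, y, n):
    # Closed form, valid for x != y.
    return (x**(n + 1) - y**(n + 1)) / (x - y)

for x, y, n in [(2.0, 3.0, 4), (1.5, -0.5, 7)]:
    print(lhs(x, y, n), rhs(x, y, n))   # the two columns agree
```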
|
2,656,909 | <p>I recently begun to read Walter Rudin magnum upos "Principles of Mathematical Analysis" and i'm having a little trouble in understanding the proof of the the following statement:</p>
<p>2.41 Theorem: if a set $E$ in $R^k$ has one of the following three properties then it has the other two:</p>
<p>(a) $E$ is closed and bounded; </p>
<p>(b) $E$ is compact; </p>
<p>(c) Every infinite subset of $E$ has a limit point in $ E$.</p>
<p>He proves that (a) implies (b) and that (b) implies (c), remaining only to show that (c) implies (a). He started as follow:</p>
<p>If $E$ is not bounded, then $E$ contains points $x_n$ with</p>
<p>$ |x_n| > n ( n = 1,2,3...) $</p>
<p>The set $S$ consisting of these points $x_n$ is infinite and clearly has no limit points in $R^k...$</p>
<p>My question is why this set $ S $ has no limit points in $R^k $?</p>
<p>Thanks in advance.</p>
| DonAntonio | 31,254 | <p>Answer to question: because $\;|x_n|>n\;$ forces $\;|x_n|\to\infty\;$, so every point of $\;\Bbb R^k\;$ has a bounded neighborhood containing only finitely many of the $\;x_n\;$. Hence no point of $\;\Bbb R^k\;$ can be a limit point of $S$.</p>
|
2,656,909 | <p>I recently begun to read Walter Rudin magnum upos "Principles of Mathematical Analysis" and i'm having a little trouble in understanding the proof of the the following statement:</p>
<p>2.41 Theorem: if a set $E$ in $R^k$ has one of the following three properties then it has the other two:</p>
<p>(a) $E$ is closed and bounded; </p>
<p>(b) $E$ is compact; </p>
<p>(c) Every infinite subset of $E$ has a limit point in $ E$.</p>
<p>He proves that (a) implies (b) and that (b) implies (c), remaining only to show that (c) implies (a). He started as follow:</p>
<p>If $E$ is not bounded, then $E$ contains points $x_n$ with</p>
<p>$ |x_n| > n ( n = 1,2,3...) $</p>
<p>The set $S$ consisting of these points $x_n$ is infinite and clearly has no limit points in $R^k...$</p>
<p>My question is why this set $ S $ has no limit points in $R^k $?</p>
<p>Thanks in advance.</p>
| CopyPasteIt | 432,081 | <p>We have our set $S = \{x_n \;|\; n \ge 1\}$ with $|x_n| \gt n$.</p>
<p>Let $x$ be any point in $\Bbb R^k$ . Then $x$ is not a limit point of the set $S$.</p>
<p>To show this, first find an integer $N \ge 1$ so that $x$ is an interior point of the closed ball $B_N$ of radius $N$ about the origin (zero coordinates). There are at most $N - 1$ points of $S$ that can also be contained in this ball. We can form an open ball about $x$ contained in $B_N$ that excludes any one of these points, except perhaps for $x$ itself. But then since the finite intersection of open sets is open, we can find an open set containing $x$ that excludes all points of $S$, except, of course, for $x$ itself if it is in $S$. </p>
|
1,385,936 | <p><em>I was wondering how to approximate $\sqrt{1+\frac{1}{n}}$ by $1+\frac{1}{2n}$ without using Laurent Series.</em></p>
<p>The reason why I ask is that, using this approximation, we can show that the sequence $(\cos(\pi{\sqrt{n^{2}-n}}))_{n=1}^{\infty}$ converges to $0$. This is done using a mean-value theorem or Lipschitz (bounded derivative) argument, where</p>
<p>$$
|\cos(\pi{\sqrt{n^{2}-n}})-\cos(\pi{n}-\pi/2)|=|\cos(\pi{\sqrt{n^{2}-n}})|
\leq \pi\left|\sqrt{n^{2}-n}-n+1/2\right| = \pi \left|\frac{-1/4}{\sqrt{n^{2}-n}+n-1/2}\right|
$$</p>
<p>I looked up $\sqrt{1+\frac{1}{n}}$ and saw that this approximation can be obtained using Laurent series at $x=\infty$. I am not familiar with Laurent series since I have not had any complex analysis yet, but I was wondering if there was another naive way to see this?</p>
| Alex R. | 22,064 | <p>This requires nothing more than just the definition of a derivative. The function $f(x)=\sqrt{1+x}$ has derivative $f'(x)=\frac{1}{2\sqrt{1+x}}$ for $x\geq 0$. By the fundamental definition of derivatives:</p>
<p>$$f(x)-f(0)=f'(0)x+\epsilon(x),$$</p>
<p>where $\lim_{x\rightarrow 0^+}\epsilon(x)/x=0$ and $\lim_{x\rightarrow 0^+} \epsilon(x)=0$. If you're unfamiliar with this fact, divide both sides by $x$ and take the limit, invoking the definition of the derivative on the left side. To be precise, $\epsilon(x):=f(x)-f(0)-f'(0)x$, which <em>defines</em> $\epsilon(x)$.
It follows that:</p>
<p>$$f(x)=1+\frac{x}{2}+\epsilon(x).$$</p>
<p>Now plug in $x=1/n$ and observe that $\epsilon(1/n)/(1/n)$ is going to zero, which means that $\epsilon(1/n)$ must be considerably smaller than $(1/n)$ as $n$ gets larger. It follows that:</p>
<p>$$\sqrt{1+1/n}=1+\frac{1}{2n}+(\mbox{something much smaller than }1/n),$$</p>
<p>an approximation which will tend to get better as $n$ gets larger.</p>
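As a quick numerical sanity check (a Python sketch added for illustration; not part of the original answer), the error of the approximation $1+\frac{1}{2n}$ is indeed far smaller than $1/n$:

```python
import math

n = 1000
approx = 1 + 1 / (2 * n)        # the linear approximation 1 + 1/(2n)
exact = math.sqrt(1 + 1 / n)    # the true value sqrt(1 + 1/n)

# the error term epsilon(1/n) is of order 1/(8 n^2), far below 1/n
error = abs(exact - approx)
print(error)                    # roughly 1.25e-07
assert error < 1 / (4 * n * n)
assert error < (1 / n) / 100
```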
|
1,385,936 | <p><em>I was wondering how to approximate $\sqrt{1+\frac{1}{n}}$ by $1+\frac{1}{2n}$ without using Laurent Series.</em></p>
<p>The reason why I ask is that, using this approximation, we can show that the sequence $(\cos(\pi{\sqrt{n^{2}-n}}))_{n=1}^{\infty}$ converges to $0$. This is done using a mean-value theorem or Lipschitz (bounded derivative) argument, where</p>
<p>$$
|\cos(\pi{\sqrt{n^{2}-n}})-\cos(\pi{n}-\pi/2)|=|\cos(\pi{\sqrt{n^{2}-n}})|
\leq \pi\left|\sqrt{n^{2}-n}-n+1/2\right| = \pi \left|\frac{-1/4}{\sqrt{n^{2}-n}+n-1/2}\right|
$$</p>
<p>I looked up $\sqrt{1+\frac{1}{n}}$ and saw that this approximation can be obtained using Laurent series at $x=\infty$. I am not familiar with Laurent series since I have not had any complex analysis yet, but I was wondering if there was another naive way to see this?</p>
| Jack D'Aurizio | 44,121 | <p>For any $x>0$, $\sqrt{1+x}\leq 1+\frac{x}{2}$ is trivial by squaring. On the other hand:
$$ 1+\frac{x}{2}-\sqrt{1+x} = \frac{\frac{x^2}{4}}{1+\frac{x}{2}+\sqrt{1+x}}\leq\frac{x^2}{8+2x} $$
gives:</p>
<blockquote>
<p>$$ 1+\frac{x}{2}-\frac{x^2}{8+2x}\leq \sqrt{1+x} \leq 1+\frac{x}{2}-\frac{x^2}{8+4x}.$$</p>
</blockquote>
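A small Python check (my addition, not from the answer) confirming the boxed sandwich bounds at a few sample points:

```python
import math

# verify: 1 + x/2 - x^2/(8+2x) <= sqrt(1+x) <= 1 + x/2 - x^2/(8+4x)
for x in (0.1, 0.5, 1.0, 3.0):
    lower = 1 + x / 2 - x * x / (8 + 2 * x)
    upper = 1 + x / 2 - x * x / (8 + 4 * x)
    s = math.sqrt(1 + x)
    assert lower <= s <= upper
print("bounds verified")
```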
|
1,385,936 | <p><em>I was wondering how to approximate $\sqrt{1+\frac{1}{n}}$ by $1+\frac{1}{2n}$ without using Laurent Series.</em></p>
<p>The reason why I ask is that, using this approximation, we can show that the sequence $(\cos(\pi{\sqrt{n^{2}-n}}))_{n=1}^{\infty}$ converges to $0$. This is done using a mean-value theorem or Lipschitz (bounded derivative) argument, where</p>
<p>$$
|\cos(\pi{\sqrt{n^{2}-n}})-\cos(\pi{n}-\pi/2)|=|\cos(\pi{\sqrt{n^{2}-n}})|
\leq \pi\left|\sqrt{n^{2}-n}-n+1/2\right| = \pi \left|\frac{-1/4}{\sqrt{n^{2}-n}+n-1/2}\right|
$$</p>
<p>I looked up $\sqrt{1+\frac{1}{n}}$ and saw that this approximation can be obtained using Laurent series at $x=\infty$. I am not familiar with Laurent series since I have not had any complex analysis yet, but I was wondering if there was another naive way to see this?</p>
| Steven Alexis Gregory | 75,410 | <p>Try</p>
<p>$\sqrt{1+x} \approx 1 + \alpha x + \beta x^2 + O(x^3)$</p>
<p>$1 + x = 1 + 2 \alpha x + (\alpha^2 + 2 \beta)x^2 + O(x^3)$</p>
<p>$\alpha = \frac 12$, $\quad \beta = -\frac 18$</p>
|
208,883 | <p>Let $\bar{\rho}: G_K\to PGL_n(\mathbb{C})$ be projective representation of the absolute Galois group of a number field $K$ and $\varphi\in Aut(G_K)$.</p>
<p>A theorem of Tate tells us that we can always lift $\bar{\rho}$ to some $\rho: G_K \to GL_n(\mathbb{C})$. I am wondering if there is a lift $\rho$ whose kernel is preserved by $\varphi$, i.e. $\varphi(\ker\rho)=\ker\rho$.</p>
<p><strong>Edit. A better question would be</strong>: Do you have any idea about how to determine necessary and sufficient conditions for the existence of a lift with kernel stable under the automorphism $\varphi$?</p>
| Jeremy Kahn | 8,252 | <p>I believe the problem is exactly this. A composition of $K$-quasiconformal maps is not necessarily $K$-quasiconformal, which makes them difficult to work with. And a locally quasiconformal map is not necessarily globally quasiconformal. Normally when you define a type of manifold in terms of a class of permitted overlap maps the class of maps should be defined in terms of a local property and closed under composition. There's no way to do that so as to get structures that are then related by global quasiconformal maps.</p>
|
657,047 | <p>So I have $a^n = b$. When I know $a$ and $b$, how can I find $n$?</p>
<p>Thanks in advance!</p>
| DryEraseMarker | 124,645 | <p>$$ a^n = b $$
$$ \log_{a}b = n $$</p>
<p>Because the easily accessible <em>log</em> button on your calculator is probably <em>base 10</em> and <em>not base a</em>, you have to punch it in this way:</p>
<p>$$\frac {\log b} {\log a}$$</p>
<p>which will result in your answer, $n$.</p>
<p>If you have a TI-89 Titanium, <em>Diamond 7</em> is the way to quickly access the <em>log</em> function (it took me a long time to find this). </p>
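The change-of-base computation described above can be sketched in Python (an illustrative addition; the values of $a$ and $b$ are made up):

```python
import math

a, b = 3, 81                     # hypothetical example values with a**n = b
n = math.log(b) / math.log(a)    # log_a(b) via the change-of-base formula
print(n)                         # approximately 4, since 3**4 = 81
```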
|
289,923 | <p>As far as i know, both differential and gradient are vectors where their dot product with a unit vector give directional derivative with the direction of the unit vector. So what are the differences?</p>
| notmyname | 294,646 | <p>Essentially, and in an informal sense, it is the difference between the projection of the gradient onto the plane below the surface (this is the normal "gradient"), and a "risen" gradient which is embedded in the 3D surface.</p>
<p><strong>Note</strong>: Technically, the differential and gradient reduce to the same thing in the case of the map from $R^2\rightarrow R $. That is, they are both row vectors consisting of the partial derivatives of f.</p>
<p>However, as the differential is used in higher dimensional cases usually, being a generalization of the 1D gradient described above, one might informally interpret "the differential of f" as the result of converting f into the surface it implicitly represents using some map <strong>x</strong>, and taking the differential of <strong>x</strong> (a map from a plane to a surface).</p>
<p>What is the difference between the two, then, if we take this informal interpretation of "differential of f"? </p>
<p>I will explain:</p>
<p>Examine what happens when we map the gradient vector of f using the differential of <strong>x</strong>: The 2D gradient vector $(u_x,u_y)$ becomes $(1,1,u_x+u_y)$, i.e. it "rises" and becomes embedded 3-dimensionally in the surface, whereas before it was only a projection on the plane beneath the surface. Note that this 3D vector is the analog of the "amount of change" in the gradient direction $||\nabla f||$, a scalar, in the 1D case, and is the result of taking the differential of <strong>x</strong> at a point p=(u,v) in the direction of the gradient of f. </p>
<p>Technically, the differential itself (at the point) is the matrix operator which implemented this transformation, just as in the 1D case of f the gradient was the vector which dotted with the direction gave the amount of change in that direction.</p>
|
868,943 | <p>Can you please tell me the sum of the series</p>
<p>$ \frac {1}{10} + \frac {3}{100} + \frac {6}{1000} + \frac {10}{10000} + \frac {15}{100000} + \cdots $ </p>
<p>where the numerator is the series of triangular numbers?</p>
<p>Is there a simple way to find the sum?</p>
<p>Thank you.</p>
| Gerry Myerson | 8,269 | <p>$$S={1\over10}+{3\over100}+{6\over1000}+{10\over10000}+\cdots$$ $${S\over10}={1\over100}+{3\over1000}+{6\over10000}+\cdots$$ Subtracting, $${9S\over10}={1\over10}+{2\over100}+{3\over1000}+{4\over10000}+\cdots$$ Now do the same thing again, that is, divide by $10$ and subtract, to get $${81S\over100}={1\over10}+{1\over100}+{1\over1000}+\cdots={1\over9}$$</p>
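From $\frac{81S}{100}=\frac19$ we get $S=\frac{100}{729}$, which a quick numeric partial sum confirms (a Python sketch added for illustration):

```python
# partial sum of T_k / 10^k with triangular numbers T_k = k(k+1)/2;
# the tail beyond k = 29 is negligibly small
S = sum(k * (k + 1) // 2 / 10**k for k in range(1, 30))
assert abs(S - 100 / 729) < 1e-12
print(S)  # approximately 0.1371742...
```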
|
868,943 | <p>Can you please tell me the sum of the series</p>
<p>$ \frac {1}{10} + \frac {3}{100} + \frac {6}{1000} + \frac {10}{10000} + \frac {15}{100000} + \cdots $ </p>
<p>where the numerator is the series of triangular numbers?</p>
<p>Is there a simple way to find the sum?</p>
<p>Thank you.</p>
| Mustafa Saad | 164,692 | <p>I thought I might add another derivation <em>(devised by me)</em>. This one is long and involves dissecting the sequence into its simplest terms.</p>
<blockquote>
<p>$1/10 + 3/100 + 6/1000 + \ldots$</p>
<p>$= 1/10 + (1+2)/100 + (1+2+3)/1000 + \ldots$ (from the definition of
triangular numbers.)</p>
<p>$= 1/10 + 1/100 + 2/100 + 1/1000 + 2/1000 + 3/1000 + \ldots$</p>
<p>(by grouping terms with similar numerator together)<br>
$= (1/10 + 1/100 + 1/1000 + \ldots) + (2/100 + 2/1000 + \ldots) + (3/1000 +
\ldots) + \ldots$ $= 1/9 + 2/90 + 3/900 + \ldots$</p>
<p>($1/9$ is a common factor)</p>
<p>$= 1/9 [ 1 + 2/10 + 3/100 + \ldots]$ $= 1/9 [ 1 + 1/10 + 1/10 + 1/100
+ 1/100 + 1/100 + \ldots ]$</p>
<p>(after rearranging the terms)</p>
<p>$= 1/9 [ 1 + (1/10 + 1/100 + 1/100 + \ldots) + (1/10 + 1/100 + 1/100 +
\ldots) + (1/100 + \ldots) + \ldots ] $ $= 1/9 [ 1 + 1/9 + (1/9 + 1/90
+ 1/900 + 1/900 + \ldots) ]$</p>
<p>(the terms between the parentheses represent a geometric series
whose sum is $10/81$)</p>
<p>$= 1/9 [ 1 + 1/9 + 10/81 ]$
$= 1/9 \times 100/81$
$= 100/729$</p>
</blockquote>
|
1,774,084 | <p>I think it is convergent to $1$ because as $n$ tends to $\infty$, $1/\sqrt{n}$ tends to $0$. Is this true?</p>
<p>Thanks!</p>
| Will Jagy | 10,400 | <p>$$ n^{\left( \frac{1}{\log n} \right)} = e $$
$$ \lim_{n \rightarrow \infty} n^{\left( \frac{1}{ \log \log n} \right)} = \infty $$</p>
|
3,632,431 | <blockquote>
<p>Consider the function <span class="math-container">$f: \mathbb{N} \to \mathbb{N}$</span> defined by <span class="math-container">$f(x)=\frac{x(x+1)}{2}$</span>. Show that <span class="math-container">$f$</span> is injective but not surjective.</p>
</blockquote>
<p>So I started by assuming that <span class="math-container">$f(a)=f(b)$</span> for some <span class="math-container">$a,b \in \mathbb{N}$</span>.
I want to show that <span class="math-container">$a=b$</span>.</p>
<p><span class="math-container">$$\Rightarrow \frac{a(a+1)}{2} = \frac{b(b+1)}{2}\\
\Rightarrow a(a+1)=b(b+1) \\
\Rightarrow a^2+a=b^2+b$$</span></p>
<p>I don't know where to go from here.</p>
| PrincessEev | 597,568 | <p>Suppose <span class="math-container">$a \ne b$</span> at that point. Then, without loss of generality, <span class="math-container">$b > a$</span>. But then <span class="math-container">$b^2 + b > a^2 + a$</span>. Why? Because squaring the inequality gives you <span class="math-container">$b^2 > a^2$</span> (which holds since <span class="math-container">$a,b \ge 1$</span>, or <span class="math-container">$0$</span> depending on your convention for <span class="math-container">$\Bbb N$</span>), and you can add the original <span class="math-container">$b > a$</span> inequality to this one and maintain it.</p>
<p>Thus, you have to have equality.</p>
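A brute-force check of both claims over a small range (my addition; the range of $n$ is arbitrary):

```python
# triangular numbers T_n = n(n+1)/2 for n = 1..200
tri = [n * (n + 1) // 2 for n in range(1, 201)]

# injective: no two distinct n give the same value on this range
assert len(set(tri)) == len(tri)

# not surjective: e.g. 2 is never attained (T_1 = 1, T_2 = 3)
assert 2 not in set(tri)
print("checks passed")
```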
|
3,392,171 | <p>We have a partial fraction equation:
<span class="math-container">$$\frac{1}{x-5} +\frac{1}{x+5}=\frac{2x+1}{x^2-25}$$</span></p>
<p>I multiplied the equation by the common denominator <span class="math-container">$(x+5)(x-5)$</span> and got <span class="math-container">$0=1$</span>. Is this correct?</p>
| user | 505,767 | <p>Yes it is equivalent to</p>
<p><span class="math-container">$$\frac{2x}{x^2-25}=\frac{2x+1}{x^2-25}\iff 2x=2x+1$$</span></p>
<p>which indeed has no solutions for <span class="math-container">$x\in \mathbb R$</span>.</p>
|
223,955 | <p>How can we convert a list to an integer correctly? </p>
<p><strong>{5, 22, 4, 5} -> 52245?</strong></p>
<p>When I use the command <code>FromDigits</code> in Mathematica </p>
<pre><code>FromDigits[{5, 22, 4, 5}]
</code></pre>
<p>The result is incorrect, namely <strong>7245</strong></p>
| Mr.Wizard | 121 | <pre><code>FromDigits @ ToString @ Row @ {5, 22, 4, 5}
Head[%]
</code></pre>
<blockquote>
<pre><code>52245
Integer
</code></pre>
</blockquote>
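For comparison, the same concatenate-then-parse idea outside Mathematica, as a Python sketch I am adding (not part of the original answer):

```python
digits = [5, 22, 4, 5]
# concatenate the decimal representations, then parse back to an integer
number = int("".join(str(d) for d in digits))
print(number)  # 52245
```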
|
1,043,090 | <p>A rectangle $ABCD$, which measures $9$ ft by $12$ ft, is folded once perpendicular to diagonal $AC$ so that the opposite vertices $A$ and $C$ coincide. Find the length of the fold. I tried folding a rectangular paper, but there are spare edges. The gray line is my fold, and I'm not sure whether it passes through the middle of my diagonal $AC$. If it does, I then need to work out the distance from $A$ to the center and from the center onward, which is where I'm confused. <img src="https://i.stack.imgur.com/Ksk6t.png" alt="enter image description here"></p>
| CiaPan | 152,299 | <p>Your <em>folding line</em> is perpendicular to the rectangle's diagonal, which is a hypotenuse of a right triangle with legs 9 and 12 feet, so the folding line itself is a hypotenuse of a right triangle with legs $9\times\frac 9{12}$ and $12\times\frac 9{12}$ — so its length is $$\sqrt{\left(9\times\frac 9{12}\right)^2 + \left(12\times\frac 9{12}\right)^2} = \frac9{12}\sqrt{81 + 144} = \frac9{12}\times 15= \frac{45}4$$</p>
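The arithmetic can be double-checked in Python (an illustrative sketch I am adding):

```python
import math

scale = 9 / 12                   # similarity ratio between the two right triangles
legs = (9 * scale, 12 * scale)   # 6.75 and 9.0
fold = math.hypot(*legs)         # hypotenuse of the scaled triangle
print(fold)  # 11.25, i.e. 45/4 feet
```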
|
1,324,062 | <p>Evaluate: </p>
<blockquote>
<p>$$\lim_{h \rightarrow 0} \frac{e^{2h}-1}{h}$$</p>
</blockquote>
<p>Now one way would be using the Maclaurin expansion for $e^{2x}$</p>
<p>However, can we solve it using the definition of the derivative (perhaps considering $f(x)=e^x$)? Many thanks for your help!</p>
<p>EDIT: I forgot to mention: please do not use L'Hopital's Rule. Using it, the problem becomes trivial and loses all chance of a beautiful solution.</p>
| Ivo Terek | 118,056 | <p>$$\lim_{h \rightarrow 0} \frac{e^{2h}-1}{h} = 2 \lim_{h \to 0}\frac{e^{2h}-1}{2h} = 2\lim_{x \to 0}\frac{e^x-1}{x} = 2\cdot 1 = 2,$$ where I made the substitution $x = 2h$ just to make things easier for you to visualize. I used one of the fundamental limits along with the fact that $h \to 0 \iff x \to 0$.</p>
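Numerically the limit behaves as claimed (a Python sketch, my addition):

```python
import math

# the difference quotient (e^{2h} - 1)/h approaches 2 as h shrinks
for h in (1e-2, 1e-4, 1e-6):
    print((math.exp(2 * h) - 1) / h)
```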
|
4,042,250 | <p>My idea is to use disjoint events and calculate the probability of getting at least two heads for each number rolled. For example, if I roll a 3, I would calculate the probability with the expression <span class="math-container">$(\frac{1}{6}) (\frac{1}{2})^3 \binom{3}{2} + (\frac{1}{6}) (\frac{1}{2})^3\binom{3}{3} = \frac{1}{12}$</span> and then add up the probabilities of getting at least two heads for each roll, since the events are disjoint, summing to <span class="math-container">$\frac{67}{128}$</span>. Is this a valid solution? Is there a better approach to solving this problem?</p>
| user | 293,846 | <p>You can compute this much more simply. The probability that you get not more than one head out of <span class="math-container">$n$</span> flips is <span class="math-container">$\frac{n+1}{2^n}$</span>. Therefore the probability in question is:
<span class="math-container">$$
\frac16\sum_{n=1}^6\left(1-\frac{n+1}{2^n}\right).
$$</span></p>
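This closed form reproduces the asker's value $\frac{67}{128}$ exactly; here is a check with exact rationals in Python (my addition):

```python
from fractions import Fraction

# (1/6) * sum over die outcomes n of P(at least two heads in n flips)
p = sum(Fraction(1, 6) * (1 - Fraction(n + 1, 2**n)) for n in range(1, 7))
print(p)  # 67/128
```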
|
3,516,189 | <p>I've been struggling with the following exercise for quite some time already:</p>
<blockquote>
<p>Consider a linear space <span class="math-container">$\mathbb{V} = \mathcal{C}\left(\left[a, b\right]\right)$</span> and let <span class="math-container">$f_{1},\ldots, f_{n}$</span> be linearly independent functions in <span class="math-container">$\mathbb{V}$</span>. Prove there exist numbers <span class="math-container">$a \leq x_{1} < \cdots < x_{n} \leq b$</span> such that <span class="math-container">$$ \det \begin{bmatrix}
f_{1}(x_{1}) & f_{1}(x_{2}) & \cdots & f_{1}(x_{n})\\
f_{2}(x_{1}) & f_{2}(x_{2}) & \cdots & f_{2}(x_{n}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{n}(x_{1}) & f_{n}(x_{2}) & \cdots & f_{n}(x_{n})
\end{bmatrix} \neq 0.$$</span></p>
</blockquote>
<p>The statement is extremely easy to prove by means of induction. However, I'm interested if there's another (and more elegant) proof which <em>doesn't involve induction</em>. </p>
<p>Any hints appreciated.</p>
| Ben Grossmann | 81,360 | <p>Proceed by contrapositive. We suppose that for all <span class="math-container">$a\leq x_1 < \cdots < x_n \leq b$</span>,
<span class="math-container">$$
\det \begin{bmatrix}
f_{1}(x_{1}) & f_{1}(x_{2}) & \cdots & f_{1}(x_{n})\\
f_{2}(x_{1}) & f_{2}(x_{2}) & \cdots & f_{2}(x_{n}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{n}(x_{1}) & f_{n}(x_{2}) & \cdots & f_{n}(x_{n})
\end{bmatrix} = 0.
$$</span>
Equivalently, the above holds for all choices of <span class="math-container">$x_1,\dots,x_n \in [a,b]$</span>. Consider the subspace of <span class="math-container">$\Bbb R^n$</span> defined by
<span class="math-container">$$
U = \operatorname{span}\{(f_1(x),\dots,f_n(x)) : x \in [a,b]\}.
$$</span>
Suppose for the purpose of contradiction that <span class="math-container">$U = \Bbb R^n$</span>. It follows that there exist vectors <span class="math-container">$v_1,\dots,v_n \in U$</span> that span <span class="math-container">$\Bbb R^n$</span>. If we take these vectors as the columns of a matrix, then we end up with an <span class="math-container">$n \times n$</span> matrix of the form above; this matrix has linearly independent columns, which means that its determinant is non-zero. This contradicts our assumption.</p>
<p>So, <span class="math-container">$U$</span> is necessarily a proper subspace of <span class="math-container">$\Bbb R^n$</span>. Select any non-zero <span class="math-container">$c = (c_1,\dots,c_n) \in U^\perp$</span>. By definition, we have <span class="math-container">$c^Tv = 0$</span> for all <span class="math-container">$v \in U$</span>. That is, for every <span class="math-container">$x \in [a,b]$</span> we have
<span class="math-container">$$
c_1 f_1(x) + \cdots + c_n f_n(x) = 0.
$$</span>
That is, the functions <span class="math-container">$f_1,\dots,f_n$</span> are linearly dependent.</p>
<p>The conclusion follows.</p>
<hr>
<p>The proof by induction, since I was curious. Reduce from the <span class="math-container">$n$</span>-case to the <span class="math-container">$(n-1)$</span>-case by noting that</p>
<p><span class="math-container">$$
\det \pmatrix{
f_{1}(x_{1}) & f_{1}(x_{2}) & \cdots & f_{1}(x_{n})\\
f_{2}(x_{1}) & f_{2}(x_{2}) & \cdots & f_{2}(x_{n}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{n}(x_{1}) & f_{n}(x_{2}) & \cdots & f_{n}(x_{n})
} = \\
\det\pmatrix{
f_{1}(x_{1}) & f_{1}(x_{2}) & \cdots & f_{1}(x_{n})\\
0 & f_{2}(x_{2}) - \frac{f_2(x_1)}{f_1(x_1)} f_1(x_2) & \cdots & f_{2}(x_{n}) - \frac{f_2(x_1)}{f_1(x_1)}f_1(x_n) \\
\vdots & \vdots & \ddots & \vdots \\
0 & f_{n}(x_{2}) - \frac{f_n(x_1)}{f_1(x_1)} f_1(x_2) & \cdots & f_{n}(x_{n}) - \frac{f_n(x_1)}{f_1(x_1)}f_1(x_n)
} = \\
f_1(x_1) \det\pmatrix{
f_{2}(x_{2}) - \frac{f_2(x_1)}{f_1(x_1)} f_1(x_2) & \cdots & f_{2}(x_{n}) - \frac{f_2(x_1)}{f_1(x_1)}f_1(x_n) \\
\vdots & \ddots & \vdots \\
f_{n}(x_{2}) - \frac{f_n(x_1)}{f_1(x_1)} f_1(x_2) & \cdots & f_{n}(x_{n}) - \frac{f_n(x_1)}{f_1(x_1)}f_1(x_n)
}
$$</span>
and defining <span class="math-container">$g_j(x) = f_{j+1}(x) - \frac{f_{j+1}(x_1)}{f_1(x_1)}f_1(x)$</span> for <span class="math-container">$j = 1,\dots,n-1$</span>.</p>
|
3,853,980 | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n×n$</span> complex matrix such that the three matrices <span class="math-container">$A+I$</span>, <span class="math-container">$A^2+I$</span>, <span class="math-container">$A^3+I$</span> are all unitary. Prove that <span class="math-container">$A$</span> is the zero matrix.</p>
<p>I try to show that</p>
<p><span class="math-container">$Trace( A^{\theta}A) =0$</span> where <span class="math-container">$A^{\theta }$</span> is the conjugate transpose of the matrix <span class="math-container">$A$</span></p>
<p><span class="math-container">$\because $</span>
<span class="math-container">$Trace( A^{\theta}A)$</span> = <span class="math-container">$|a_{11}|^2 + |a_{12}|^2+\cdots+|a_{nn}|^2$</span></p>
<p><span class="math-container">$A+I$</span> is unitary, so</p>
<p><span class="math-container">$(A+I)^{\theta}(A+I)= I $</span></p>
<p><span class="math-container">$\implies (A^ {\theta}+I)(A +I) =I $</span></p>
<p><span class="math-container">$A^ {\theta}A+ A^ {\theta}+A = 0$</span></p>
<p><span class="math-container">$Trace( A^{\theta}A)= -Trace( A^{\theta}+A)$</span>
<span class="math-container">$\implies Trace( A^{\theta}A)=-2\times{}$</span>(sum of the real parts of the diagonal entries of <span class="math-container">$A$</span>).</p>
<p>I don't know how to proceed further. Please help.</p>
| StubbornAtom | 321,264 | <p>I am not sure of your logic for calculating <span class="math-container">$\operatorname E\left[X_1\max(X_1,X_2)\right]$</span>.</p>
<p>By definition, this is equal to</p>
<p><span class="math-container">\begin{align}
\operatorname E\left[X_1\max(X_1,X_2)\right]&=\iint x\max(x,y)f_{X_1,X_2}(x,y)\,\mathrm dx\,\mathrm dy
\\&=\iint x\max(x,y)\mathbf1_{0<x,y<1}\,\mathrm dx\,\mathrm dy
\\&=\iint x^2\mathbf1_{0<y<x<1}\,\mathrm dx\,\mathrm dy+\iint xy\,\mathbf1_{0<x<y<1}\,\mathrm dx\,\mathrm dy
\\&=\int_0^1\int_y^1 x^2\,\mathrm dx\,\mathrm dy+\int_0^1 y\int_0^y x\,\mathrm dx\,\mathrm dy
\end{align}</span></p>
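The two iterated integrals evaluate to $1/4$ and $1/8$, so $\operatorname E\left[X_1\max(X_1,X_2)\right]=3/8$; a midpoint-rule cross-check in Python (my addition):

```python
N = 400  # grid resolution for a midpoint-rule double integral over the unit square
total = 0.0
for i in range(N):
    for j in range(N):
        x = (i + 0.5) / N
        y = (j + 0.5) / N
        total += x * max(x, y)
estimate = total / (N * N)

# analytic value: 1/4 + 1/8 = 3/8
print(estimate)  # close to 0.375
```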
|
3,853,980 | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n×n$</span> complex matrix such that the three matrices <span class="math-container">$A+I$</span>, <span class="math-container">$A^2+I$</span>, <span class="math-container">$A^3+I$</span> are all unitary. Prove that <span class="math-container">$A$</span> is the zero matrix.</p>
<p>I try to show that</p>
<p><span class="math-container">$Trace( A^{\theta}A) =0$</span> where <span class="math-container">$A^{\theta }$</span> is the conjugate transpose of the matrix <span class="math-container">$A$</span></p>
<p><span class="math-container">$\because $</span>
<span class="math-container">$Trace( A^{\theta}A)$</span> = <span class="math-container">$|a_{11}|^2 + |a_{12}|^2+\cdots+|a_{nn}|^2$</span></p>
<p><span class="math-container">$A+I$</span> is unitary, so</p>
<p><span class="math-container">$(A+I)^{\theta}(A+I)= I $</span></p>
<p><span class="math-container">$\implies (A^ {\theta}+I)(A +I) =I $</span></p>
<p><span class="math-container">$A^ {\theta}A+ A^ {\theta}+A = 0$</span></p>
<p><span class="math-container">$Trace( A^{\theta}A)= -Trace( A^{\theta}+A)$</span>
<span class="math-container">$\implies Trace( A^{\theta}A)=-2\times{}$</span>(sum of the real parts of the diagonal entries of <span class="math-container">$A$</span>).</p>
<p>I don't know how to proceed further. Please help.</p>
| G Cab | 317,234 | <p>A geometric approach (considering only the half square <span class="math-container">$0 \le X_1 \le X_2 \le 1$</span> because of symmetry)</p>
<p><a href="https://i.stack.imgur.com/Hof1j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hof1j.png" alt="Unif_max&sum_1" /></a></p>
<p>clearly shows that the joint pdf is
<span class="math-container">$$
p(m,s) = 2\left[ {m \le s \le 2m} \right]
$$</span>
where <span class="math-container">$[P]$</span> denotes the <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer"><em>Iverson bracket</em></a>
and which in fact gives
<span class="math-container">$$
\eqalign{
& \int_{m = 0}^1 {\int_{s = 0}^2 {p(m,s)\,dm\,ds} } = 2\int_{m = 0}^1 {\int_{s = m}^{2m} {\,dm\,ds} } = \cr
& = 2\int_{m = 0}^1 {mdm} = 1 \cr}
$$</span></p>
<p>Then
<span class="math-container">$$
\eqalign{
& \overline m = 2\int_{m = 0}^1 {m^{\,2} dm} = {2 \over 3} \cr
& \overline s = 2\int_{m = 0}^1 {\int_{s = m}^{2m} {\,dm\,sds} } = 3\int_{m = 0}^1 {m^{\,2} dm} = 1 \cr}
$$</span>
and
<span class="math-container">$$
\eqalign{
& 2\int_{m = 0}^1 {\int_{s = m}^{2m} {\,\left( {m - 2/3} \right)\left( {s - 1} \right)dm\,ds} } = \cr
& = 2\int_{m = 0}^1 {\left( {m - 2/3} \right)dm\int_{s = m - 1}^{2m - 1} {\,s\,ds} } = \cr
& = \int_{m = 0}^1 {\left( {m - 2/3} \right)\left( {3m^{\,2} - 2m} \right)dm} = \cr
& = \int_{m = 0}^1 {\left( {3m^{\,3} - 4m^{\,2} + 4/3m} \right)dm} = \cr
& = {3 \over 4} - {4 \over 3} + {4 \over 6} = {1 \over {12}} \cr}
$$</span></p>
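As a cross-check of the final value $\operatorname{Cov}(M,S)=\frac1{12}\approx0.0833$ for $M=\max(X_1,X_2)$ and $S=X_1+X_2$, a midpoint-rule computation in Python (my addition, independent of the geometric argument):

```python
N = 400  # grid resolution for midpoint-rule integration over the unit square
e_ms = e_m = e_s = 0.0
for i in range(N):
    for j in range(N):
        x = (i + 0.5) / N
        y = (j + 0.5) / N
        m, s = max(x, y), x + y   # M = max(X1, X2), S = X1 + X2
        e_ms += m * s
        e_m += m
        e_s += s
n2 = N * N
cov = e_ms / n2 - (e_m / n2) * (e_s / n2)
print(cov)  # close to 1/12
```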
|
56,103 | <p>A referee asked me to include a reference or proof for the following classical fact. It's not hard to prove, but I'd prefer to just give a reference -- does anyone know one?</p>
<p>Let $X$ be a nice space (eg a smooth manifold, or more generally a CW complex). The topological Picard group $Pic(X)$ is the set of isomorphism classes of $1$-dimensional complex vector bundles on $X$. The set $Pic(X)$ is an abelian group with group operation the fiberwise tensor product, and the first Chern class map</p>
<p>$$c_1 : Pic(X) \longrightarrow H^2(X;\mathbb{Z})$$</p>
<p>is an isomorphism of abelian groups.</p>
<p>Now make the assumption that $H_1(X;\mathbb{Z})$ is a finite abelian group. One nice construction of elements of $Pic(X)$ is as follows. Consider $\phi \in Hom(H_1(X;\mathbb{Z}),\mathbb{Q}/\mathbb{Z})$. Let $\tilde{X}$ be the universal cover, so $\pi_1(X)$ acts on $\tilde{X}$ and $X = \tilde{X} / \pi_1(X)$. Let $\psi : \pi_1(X) \rightarrow \mathbb{Q}/\mathbb{Z}$ be the composition of $\phi$ with the natural map $\pi_1(X) \rightarrow H_1(X;\mathbb{Z})$. Define an action of $\pi_1(X)$ on $\tilde{X} \times \mathbb{C}$ by the formula</p>
<p>$$g(p,z) = (g(p),e^{2 \pi i \psi(g)}z) \quad \quad \text{for $g \in \pi_1(X)$ and $(p,z) \in \tilde{X} \times \mathbb{C}$}.$$</p>
<p>Observe that this makes sense since $\psi(g) \in \mathbb{Q} /\mathbb{Z}$. Define $E_\phi = (\tilde{X} \times \mathbb{C}) / \pi_1(X)$. The projection onto the first factor induces a map $E_{\phi} \rightarrow X$ which is easily seen to be a complex line bundle. The line bundle $E_{\phi}$ is known as the flat line bundle on $X$ with monodromy $\phi$.</p>
<p>Now, the universal coefficient theorem says that we have a short exact sequence</p>
<p>$$0 \longrightarrow Ext(H_1(X;\mathbb{Z}),\mathbb{Z}) \longrightarrow H^2(X;\mathbb{Z}) \longrightarrow Hom(H_2(X;\mathbb{Z}),\mathbb{Z}) \longrightarrow 0.$$</p>
<p>Since $H_1(X;\mathbb{Z})$ is a finite abelian group, there is a natural isomorphism $\rho : Hom(H_1(X;\mathbb{Z}),\mathbb{Q}/\mathbb{Z}) \rightarrow Ext(H_1(X;\mathbb{Z}),\mathbb{Z}) $. We can finally state the fact for which I am looking for a reference :</p>
<p>$$c_1(E_{\phi}) = \rho(\phi).$$</p>
| Andy Putman | 317 | <p>I noticed that someone voted this up today. Since this might indicate that someone else is interested in the answer, I thought I'd remark that Oscar Randal-Williams and I worked out a proof of this when I visited him earlier this year. A version of this proof can be found in Section 2.2 of my paper</p>
<p>The Picard group of the moduli space of curves with level structures,
to appear in Duke Math. J.</p>
<p>which is available on my webpage <a href="http://www.nd.edu/~andyp/papers/" rel="nofollow">here</a>.</p>
<p>(marked community wiki since it feels weird to get reputation for answering my own question)</p>
|
35,220 | <p>It is a basic result of group cohomology that the extensions with a given abelian normal subgroup <em>A</em> and a given quotient <em>G</em> acting on it via an action $\varphi$ are given by the second cohomology group $H^2_\varphi(G,A)$. In particular, when the action is trivial (so the extension is a central extension), this is the second cohomology group $H^2(G,A)$ for the trivial action. In the special case where <em>G</em> is also abelian, we classify all the class two groups with <em>A</em> inside the center and <em>G</em> as the quotient group.</p>
<p>I am interested in the following: given a sequence of abelian groups $A_1, A_2, \dots, A_n$, what would classify (up to the usual notion of equivalence via commutative diagrams) the following: a group <em>E</em> with an ascending chain of subgroups:</p>
<p>$$1 = K_0 \le K_1 \le K_2 \le \dots \le K_n = E$$</p>
<p>such that the $K_i$s form a central series (i.e., $[E,K_i] \subseteq K_{i-1}$ for all <em>i</em>) and $K_i/K_{i-1} \cong A_i$?</p>
<p>The case $n = 2$ reduces to the second cohomology group as detailed in the first paragraph, so I am hoping that some suitable generalization involving cohomology would help describe these extensions.</p>
<p>Note: As is the case with the second cohomology group, I expect the object to classify, not isomorphism classes of possibilities of the big group, but a notion of equivalence class under a congruence notion that generalizes the notion of congruence of extensions. Then, using the actions of various automorphism groups, we can use orbits under the action to classify extensions under more generous notion of equivalence.</p>
<p>Note 2: The crude approach that I am aware of involves building the extension step by step, giving something like a group of groups of groups of groups of ... For instance, in the case $n = 3$:</p>
<p>$$1 = K_0 \le K_1 \le K_2 \le K_3 = G$$</p>
<p>with quotients $A_i \cong K_i/K_{i-1}$, I can first consider $H^2(A_3,A_2)$ as the set of possibilities for $K_3/K_1$ (up to congruence). For each of these possibilities <em>P</em>, there is a group $H^2(P,A_1)$ and the total set of possibilities seems to be:</p>
<p>$$\bigsqcup_{P \in H^2(A_3,A_2)} H^2(P,A_1)$$</p>
<p>Here the $\in$ notation is being abused somewhat by identifying an element of a cohomology group with the corresponding extension's middle group.</p>
<p>What I really want is some algebraic way of thinking of this unwieldy disjoint union as a single object, or some information or ideas about its properties or behavior.</p>
| Torsten Ekedahl | 4,008 | <p>This looks like a (slightly) non-additive version of Grothendieck's theory of
"extensions panachées" (SGA 7/I, IX.9.3). There he considers objects (in some
abelian category) $X$ together with a filtration $0\subseteq X_1\subseteq
X_2\subseteq X_3=X$. In the first version he also fixes (just as one does for
extensions) isomorphisms $P\rightarrow X_1$, $Q\rightarrow X_2/X_1$ and
$R\rightarrow X_3/X_2$. However, in the next version he fixes the isomorphism
class of the two extensions $0\rightarrow P\rightarrow X_2\rightarrow
Q\rightarrow0$ and $0\rightarrow Q\rightarrow X_3/X_1\rightarrow R\rightarrow0$
so that if $E$ is an extension of $P$ by $Q$ and $F$ is an extension of $Q$ by
$R$, then the category $\mathrm{EXTP}(F,E)$ has as objects filtered objects $X$
as above together with fixed isomorphisms of extensions $E\rightarrow X_2$ and
$F\rightarrow X_3/X_1$ and whose morphisms are morphisms of $X$'s preserving
the given structures. The morphisms of $\mathrm{EXTP}(F,E)$ are necessarily
isomorphisms so we are dealing with a groupoid. Similarly for objects $A$ and
$B$ $\mathrm{EXT}(B,A)$ is the groupoid of extensions of $B$ by $A$.
Grothendieck then shows that $\mathrm{EXTP}(F,E)$ is a torsor over
$\mathrm{EXT}(R,P)$ (in the category of torsors, Grothendieck had previously
defined this notion). The action on objects of an extension $0\rightarrow
P\rightarrow G\rightarrow R\rightarrow0$ is given by first taking the pullback
of it under the map $X/X_1\rightarrow R$ and then using the obtained action by
addition on extensions of $P$ by $F$. To more or less complete the picture,
there is an obstruction to the existence of an object of $\mathrm{EXTP}(F,E)$:
We have that $E$ gives an element of $\mathrm{Ext}^1(Q,P)$ and $F$ one of
$\mathrm{Ext}^1(R,Q)$ and their Yoneda product gives an obstruction in
$\mathrm{Ext}^2(R,P)$.</p>
<p>The case at hand is similar (staying at the case of $n=3$ and with the caveat
that I haven't properly checked everything): We choose fixed isomorphisms with
$K_2$ and a given central extension and with $K_3/K_1$ and another given central
extension (assuming that we have three groups $P$, $Q$ and $R$ as before)
getting a category $\mathrm{CEXTP}(F,E)$ of central extensions. We shall shortly
modify it but to motivate that modification it seems a good idea to start with
this. We get as before an action of $\mathrm{CEXT}(R,P)$ on
$\mathrm{CEXTP}(F,E)$ as we can pull back central extensions just as before. It
turns out, however, that the action is not transitive. In fact we can analyse both the
difference between two elements of $\mathrm{CEXTP}(F,E)$ and the obstructions
for the non-emptiness of it by using the Hochschild-Serre spectral sequence. To
make it easier to understand I use a more generic notation. Hence we have a
central extension $1\rightarrow K\rightarrow G\rightarrow G/K\rightarrow1$ and
an abelian group $M$ with trivial $G$-action. There is then a succession of two
obstructions for the condition that a given central extension of $M$ by $G/K$
extend to a central extension of $M$ by $G$. The first is $d_2\colon
H^2(G/K,M)\rightarrow H^2(G/K,H^1(K,M))$, the $d_2$-differential of the H-S
s.s. Now, we always have a map $H^2(G/K,M)\rightarrow H^2(G/K,H^1(K,M))$ given
by pushout of $1\rightarrow K\rightarrow G\rightarrow G/K\rightarrow1$ along the map
$K\rightarrow \mathrm{Hom}(K,M)=H^1(K,M)$ given by the action by conjugation of
$K$ on the given central extension of $M$ by $K$ (equivalently this map is given
by the commutator map in that extension). It is easy to compute and identify
$d_2$ but I just claim that it is equal to that map by an appeal to the What Else
Can It Be-principle (which works quite well for the beginnings of spectral
sequences with the usual proviso that the WECIB-principle only works up to a
sign).</p>
<p>This means that we can cut down on the number of obstructions by redefining
$\mathrm{CEXTP}(F,E)$. We add as data a group homomorphism $\varphi\colon
K_3/K_1\rightarrow\mathrm{Hom}(Q,P)$ that extends $Q\rightarrow
\mathrm{Hom}(Q,P)$ which describes the conjugation action on $K_2$, and only look at
the elements of $\mathrm{CEXTP}(F,E)$ for which the action is the given
$\varphi$ to form $\mathrm{CEXTP}(F,E;\varphi)$. Now the action of
$\mathrm{CEXT}(R,P)$ on $\mathrm{CEXTP}(F,E;\varphi)$ should make
$\mathrm{CEXTP}(F,E;\varphi)$ a
$\mathrm{CEXT}(R,P)$-(pseudo)torsor. Furthermore, there is now only a single
obstruction for non-emptiness, which is given by $d_3\colon H^2(R,M)\rightarrow
H^3(P,M)$.</p>
<p>Going to higher lengths there are two ways of proceeding in the original
Grothendieck situation: Either one can look at the two extensions of one
length lower, one ending with the next to last layer (i.e., $X_{n-1}$) and the
other being $X/X_1$. This reduces the problem directly to the original case
(i.e., we look at filtrations of length $n-2$ on $Q$). One could instead look at
the successive two-step extensions and then look at how adjacent ones build up
three-step extensions and so on. This is essentially an obstruction theory point
of view and quickly becomes quite messy. An interesting thing is however the
following: We saw that in the original situation the obstruction for getting a
three-step extension was that $ab=0$ for the Yoneda product of the two two-step
filtrations. If we have a sequence of three two-step extensions whose three-step
extensions exist then we have $ab=bc=0$. The obstruction for the existence of
the full four-step extension is then essentially a Massey product $\langle
a,b,c\rangle$ (defined up to the usual ambiguity). The messiness of such an
iterated approach is well-known, it becomes more and more difficult to keep
track of the ambiguities of higher Massey products. The modern way of handling
that problem is to use an $A_\infty$-structure and it is quite possible (maybe
even likely) that such a structure is involved.</p>
<p>If we turn to the current situation and arbitrary $n$ then the first approach
has problems in that the midlayer won't be abelian anymore and I haven't looked
into what one could do. As for the second approach I haven't even looked into
what the higher obstructions would look like (the definition of the first
obstruction in terms of $d_3$ is very asymmetric).</p>
|
4,348,455 | <p>Each digit of the decimal expansion of the integer <span class="math-container">$2022$</span> (this year) is <span class="math-container">$0$</span> or <span class="math-container">$2$</span>, and likewise each digit of the ternary expansion of the same integer <span class="math-container">$2022$</span> (which is <span class="math-container">$2202220_3$</span>) is <span class="math-container">$0$</span> or <span class="math-container">$2$</span>. I wonder if there are infinitely many such integers.</p>
| Empy2 | 81,790 | <p>I think they will run out after a while. This is not a proof, just heuristics.<br />
There are <span class="math-container">$2^n$</span> of these numbers of length <span class="math-container">$n+1$</span>. One approach is to think of the base 3 version as a random set of digits. There will be about <span class="math-container">$(n+1)\log_{3}10$</span> base 3 digits, each has <span class="math-container">$2/3$</span> chance of being 0 or 2.<br />
The chance of any of the <span class="math-container">$n+1$</span> digit decimal numbers being just 0s and 2s in ternary would be
<span class="math-container">$$\approx C\left(2\left(2\over3\right)^{\log_310}\right)^n\\
\approx C(0.855^n)$$</span>
This has a finite sum, so I expect finitely many of these numbers.</p>
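<p>The heuristic can also be checked by brute force. The sketch below (mine, not part of the original answer) enumerates every positive integer of up to 12 decimal digits drawn from $\{0,2\}$ and keeps those whose ternary digits are also in $\{0,2\}$:</p>

```python
from itertools import product

def ternary_digits(n):
    """Return the base-3 digits of n (least significant first)."""
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(r)
    return digits

found = []
for length in range(1, 13):
    # decimal numbers whose digits are all 0 or 2 (leading digit must be 2)
    for tail in product("02", repeat=length - 1):
        n = int("2" + "".join(tail))
        if set(ternary_digits(n)) <= {0, 2}:
            found.append(n)

print(found[:5])  # [2, 20, 222, 2000, 2022]
```

<p>In this range the hits ($2=2_3$, $20=202_3$, $222=22020_3$, $2000=2202002_3$, $2022=2202220_3$) are indeed sparse, consistent with the heuristic count above.</p>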
|
3,852,362 | <p>Let <span class="math-container">$~X = \{(x,y)\in\mathbb{R}^2 : |x| \le 1,~|y|\le 1\}$</span> and function <span class="math-container">$f : X \to \mathbb{R}$</span> defined by <span class="math-container">$$f(x,y)=\dfrac{x\cos x + y \sin y}{x^2+y^2+\alpha}$$</span> where <span class="math-container">$\alpha\gt0$</span>, then the range of <span class="math-container">$f(x,y)$</span> is<br>
<span class="math-container">$a)~~$</span> not compact set<br>
<span class="math-container">$b)~~$</span> bounded open set<br>
<span class="math-container">$c)~~$</span> connected open set<br>
<span class="math-container">$d)~~$</span> connected closed set</p>
<p>How does one find the range of <span class="math-container">$f(x,y)$</span> and then settle the options above?</p>
<p>Clearly, since <span class="math-container">$\alpha\gt0$</span>, for any value of <span class="math-container">$(x,y),~~x^2+y^2+\alpha\ne0$</span> and therefore <span class="math-container">$f(x,y)$</span> is defined for all values of <span class="math-container">$(x,y)$</span>. Now how to proceed further ?</p>
| Kwin van der Veen | 76,466 | <p>The zero order hold discretization is easiest done in state space. The continuous state space model can be written as</p>
<p><span class="math-container">$$
\dot{x}(t) = A\,x(t) + B\,u(t-d), \tag{1}
$$</span></p>
<p>with <span class="math-container">$x$</span> the state, <span class="math-container">$u$</span> the input delayed by <span class="math-container">$d$</span> time units and the matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> given by</p>
<p><span class="math-container">$$
A = -5, \quad B = 5.
$$</span></p>
<p>The entire state space model can be completed using <span class="math-container">$C=1$</span> and <span class="math-container">$D=0$</span>, with the output of the state space model defined as <span class="math-container">$y(t) = C\,x(t) + D\,u(t-d)$</span>.</p>
<p>For zero order hold discretization it is assumed that the input is held constant during one sample time, where the sample time in your case is given as <span class="math-container">$T=0.5$</span>, so</p>
<p><span class="math-container">$$
u(t) = u_k\, \forall\ k\,T \leq t < (k+1)\,T,\, \forall\ k \in \mathbb{Z} \tag{2}
$$</span></p>
<p>similarly the state at the discrete sample <span class="math-container">$k$</span> is denoted with <span class="math-container">$x_k$</span>.</p>
<p>The zero order hold discretization can now be derived using the convolution integral (obtained by adding the delay to equation 19 on page 5 from <a href="http://web.mit.edu/2.14/www/Handouts/StateSpaceResponse.pdf" rel="nofollow noreferrer">these handouts</a>)</p>
<p><span class="math-container">$$
x(t) = e^{A\,t} x(0) + \int_0^t e^{A\,(t-\tau)} B\,u(\tau-d)\,d\tau. \tag{3}
$$</span></p>
<p>Time shifting <span class="math-container">$(3)$</span> from <span class="math-container">$0$</span> to <span class="math-container">$k\,T$</span> and evaluating it at <span class="math-container">$t=(k+1)\,T$</span> yields</p>
<p><span class="math-container">$$
x((k+1)\,T) = e^{A\,T} x(k\,T) + \int_{k\,T}^{(k+1)\,T} e^{A\,((k+1)\,T-\tau)} B\,u(\tau-d)\,d\tau, \tag{4}
$$</span></p>
<p>which using the definition of the discrete sampled state is equivalent to</p>
<p><span class="math-container">$$
x_{k+1} = e^{A\,T} x_k + \int_{k\,T}^{(k+1)\,T} e^{A\,((k+1)\,T-\tau)} B\,u(\tau-d)\,d\tau. \tag{5}
$$</span></p>
<p>If the delay is not a whole multiple of the sample time, then substituting <span class="math-container">$(2)$</span> into <span class="math-container">$(5)$</span> allows one to split the integral into two parts, such that each partial integral is only a function of one of the discrete sampled inputs and thus can be factored out of the integral. If the delay is a whole multiple of the sample time, then the integral does not have to be split in order to factor out the input.</p>
<p>For example when substituting in the values from your question, so <span class="math-container">$A = -5$</span>, <span class="math-container">$B = 5$</span>, <span class="math-container">$d=5.8$</span> and <span class="math-container">$T=0.5$</span>, yields</p>
<p><span class="math-container">\begin{align}
x_{k+1} &= e^{-2.5} x_k + \int_{k\,0.5}^{(k+1)\,0.5} e^{-5\,((k+1)\,0.5-\tau)} 5\,u(\tau-5.8)\,d\tau, \tag{6a} \\
&= e^{-2.5} x_k + \int_{k\,0.5}^{k\,0.5+0.3} e^{-5\,((k+1)\,0.5-\tau)} 5\,u_{k-12}\,d\tau + \int_{k\,0.5+0.3}^{k\,0.5+0.5} e^{-5\,((k+1)\,0.5-\tau)} 5\,u_{k-11}\,d\tau, \tag{6b} \\
&= e^{-2.5} x_k + \int_{0}^{0.3} e^{-5\,(0.5-\tau)} d\tau\,5\,u_{k-12} + \int_{0.3}^{0.5} e^{-5\,(0.5-\tau)} d\tau\,5\,u_{k-11}, \tag{6c} \\
&\approx 0.0820850\,x_k + 0.285794\,u_{k-12} + 0.632121\,u_{k-11}. \tag{6d}
\end{align}</span></p>
<p>Transforming the difference equation from <span class="math-container">$(6d)$</span> into the Z-transform yields</p>
<p><span class="math-container">$$
z\,X(z) = 0.0820850\,X(z) + 0.285794\,z^{-12}\,U(z) + 0.632121\,z^{-11}\,U(z). \tag{7}
$$</span></p>
<p>Since the output <span class="math-container">$y(t)$</span> is identical to the state <span class="math-container">$x(t)$</span> means that <span class="math-container">$Y(z) = X(z)$</span> and thus the zero order hold discretization transfer function can be obtained using</p>
<p><span class="math-container">$$
G(z) = \frac{Y(z)}{U(z)} = \frac{X(z)}{U(z)}, \tag{8}
$$</span></p>
<p>which when substituting in <span class="math-container">$(7)$</span> yields</p>
<p><span class="math-container">$$
G(z) = \frac{0.632121\,z + 0.285794}{z^{12} (z - 0.0820850)}. \tag{9}
$$</span></p>
|
283,360 | <p>Let $M$ be a simply connected topological 4-manifold with intersection form given by the E8 lattice. Does anyone know of examples of continuous self-maps of $M$ of degree 2 or 3? Or of degree any other prime for that matter?</p>
| Oscar Randal-Williams | 318 | <p>Such a map $f : M \to M$ of degree $d >0$ satisfies, with respect to the cup-product pairing $\langle -, - \rangle$,
$$\langle f^*(x), f^*(y) \rangle = d \langle x, y \rangle.$$</p>
<p>Conversely, I claim that any integer matrix $A$ satisfying
$$A^T E_8 A = d E_8$$
arises as $A = f^*$ for a map $f : M \to M$, necessarily of degree $d$. To see this, build $M$ as a CW-complex from $\bigvee_8 S^2$ by attaching a 4-cell along the map $g : S^3 \to\bigvee_8 S^2$ dictated by the $E_8$ form. The composition
$$S^3 \overset{g}\to \bigvee_8 S^2 \overset{A}\to \bigvee_8 S^2$$
is the map dictated by the form $A^T E_8 A = d E_8$, so is $d \cdot g$. In particular it becomes nullhomotopic when composed with $\bigvee_8 S^2 \subset M$, giving a map
$$f: M = (\bigvee_8 S^2) \cup_g D^4 \to M$$
which induces $A$ on second cohomology.</p>
<p>Taking e.g. $A=\lambda \cdot \mathrm{Id}$ yields a self-map of degree $d=\lambda^2$. I am not sure whether it is possible to produce non-square degrees.</p>
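<p>The last remark is easy to verify on an explicit Gram matrix. The sketch below (mine, not part of the answer) builds the $E_8$ Cartan matrix in the Bourbaki labelling, checks that it is unimodular, and checks that $A=2\cdot\mathrm{Id}$ satisfies $A^T E_8 A = 4E_8$, hence induces a degree-$4$ self-map:</p>

```python
from fractions import Fraction

# E8 Cartan matrix: chain 1-3-4-5-6-7-8 with node 2 attached to node 4
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]
C = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for a, b in edges:
    C[a - 1][b - 1] = C[b - 1][a - 1] = -1

def det(M):
    """Fraction-exact determinant by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

assert det(C) == 1  # the E8 lattice is unimodular

# A = 2*Id satisfies A^T C A = 4 C
A = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
ATA = [[sum(A[k][i] * C[k][l] * A[l][j] for k in range(8) for l in range(8))
        for j in range(8)] for i in range(8)]
assert ATA == [[4 * x for x in row] for row in C]
```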
|
283,360 | <p>Let $M$ be a simply connected topological 4-manifold with intersection form given by the E8 lattice. Does anyone know of examples of continuous self-maps of $M$ of degree 2 or 3? Or of degree any other prime for that matter?</p>
| Will Sawin | 18,060 | <p>There are plenty of integer matrices $A$ with $A^T E_8 A = d E_8$, which give plenty of maps, as in Oscar's answer.</p>
<p>First, for $d=1$, these are the automorphisms of the $E_8$ lattice. There are <a href="https://en.wikipedia.org/wiki/E8_lattice#Symmetry_group" rel="nofollow noreferrer">696729600</a> of these. </p>
<p>Second, for any odd prime $p$, the $E_8$ quadratic form splits mod $p$, hence we can
<p>In fact the number of such subspaces should be $2(p^6+p^5+p^4+2p^3+p^2+p+1)(p^4+p^3+2p^2+p+1)(p^2+2p+1)$, so this gives $696729600\cdot 2(p^6+p^5+p^4+2p^3+p^2+p+1)(p^4+p^3+2p^2+p+1)(p^2+2p+1)$ self-maps of degree $p$.</p>
<p>I think something similar can be done as well to produce degree $2$ endomorphisms, just with a little more care.</p>
|
623,709 | <p>I make the following conjecture: the function
$$
d(x, y):=\frac{||x-y||}{\max(||x||, ||y||)}
$$
is a distance on $H$, where $H$ is a normed vector space or a Hilbert space, and $x, y \in H$ (the function $d$ is defined to be $0$ in the case $x=y=0$). Note that $d$ is scale invariant, i.e., $d(\lambda x, \lambda y)=d(x, y)$ for $0 \neq \lambda \in \mathbb{R}$. The property of $d$ which needs to be explicitly proved or disproved is the triangle inequality
$$
d(x, y) \leq d(x,z)+d(z,y).
$$
The triangle inequality (TI) can be easily proved for $H=\mathbb{R}$; moreover, due to scale invariance, it is sufficient to prove it for $||x||,||y||, ||z|| \leq 1$. The TI has been numerically tested by a program which has generated $10^8$ triples of random points uniformly distributed in $[-1, 1]^3$, and the same number in $[-1,1]^6$: all the generated triples satisfied the TI. This test supports therefore the conjecture for $H=\mathbb{R}^3$ and $H=\mathbb{R}^6$. Since the subspace generated by three linearly independent vectors of a real (complex) Hilbert space is isometrically isomorphic to $\mathbb{R}^3$ ($\mathbb{R}^6$), the numerical test supports the conjecture also for a generic Hilbert space*. </p>
<p>Does somebody know if this conjecture has been already proved, or is able to prove (or disprove) it?</p>
<ul>
<li>Before posting this question, I exchanged some e-mail with prof. Egor Makimenko, of the Instituto Politécnico Nacional, México. I did by myself a program for the numerical test of the TI, but the test cited above has been performed by a program that prof. Maximenko sent me. Moreover, the generalization from $\mathbb{R}^3$-$\mathbb{R}^6$ to a generic Hilbert space is due to prof. Maximenko.</li>
</ul>
| Egor Maximenko | 118,806 | <p>The inequality proposed by BGA fails also in $(\mathbb{R}^2,\|\cdot\|_\infty)$,
but it seems to be true for the norms induced by inner products.</p>
<p>Hypothesis I: If $X$ is an inner product space and
$\|\cdot\|$ is the norm induced by the inner product,
then for every $x,y,z\in X$
$$\|x-y\|\,\|z\| \le \|x-z\|\,\|y\| + \|z-y\|\,\|x\|.$$</p>
<p>I don't know how to prove this hypothesis in general situation,
but it can be easily proved in $X=\mathbb{R}^1$ and it passes some
<a href="http://esfm.egormaximenko.com/test_one_inequality.html" rel="nofollow">numerical tests</a> in $X=\mathbb{R}^3$.</p>
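<p>For what it is worth, here is a small random test of Hypothesis I in $\mathbb{R}^3$ (my own throwaway sketch, in the spirit of the numerical tests linked above); it samples random triples and checks the inequality up to floating-point tolerance:</p>

```python
import math
import random

random.seed(0)

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

violations = 0
for _ in range(100_000):
    x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))
    lhs = norm(sub(x, y)) * norm(z)
    rhs = norm(sub(x, z)) * norm(y) + norm(sub(z, y)) * norm(x)
    if lhs > rhs + 1e-12:
        violations += 1

print(violations)  # 0
```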
<p>The difficult case ($\|z\|\ge\|x\|\ge\|y\|$) of the discussed triangle inequality $d(x,y)\le d(x,z)+d(z,y)$ follows from Hypothesis I.</p>
|
1,770,804 | <p>I am a high school student. My maths teacher said that if $\,ax+b=cx+d\,$ for all $\,x,\,$ then $\,a=c\,$ and $\,b=d.\,$ Can someone give me a proof of this?</p>
| Community | -1 | <p>Let $x=0$, then $b=d$. So $ax+b=cx+b$. So $ax=cx$. Then let $x=1$ to get $a=c$</p>
|
2,342,051 | <p>I am totally new to statistics. I'm learning the basics.</p>
<p>I came upon this question while solving Erwin Kreyszig's exercise on statistics.
The problem is simple. It asks to calculate standard deviation after removing outliers from the dataset.</p>
<p>The dataset is as follows: 1, 2, 3, 4, 10.
What I did is, I found out q<sub>m</sub> = 3. Then $q$<sub>l</sub> $= \frac{1+2}{2} = 1.5$ and $q$<sub>u</sub> $= \frac{4+10}{2} = 7$.</p>
<p>Now, $IQR = 7-1.5 = 5.5$ and $1.5*IQR = 8.25$</p>
<p>So, we can say numbers beyond $1.5 - 5.5 = -4$ and $7 + 5.5 = 12.5$ will be an outlier.</p>
<p>Since there is no outlier, I found out the Standard Deviation of the set which is 3.53.</p>
<p>But, the answer provided is 1.29 which is different from the standard deviation of the set.</p>
<p>Can anyone help me see what I missed? </p>
<p>Also, I have another question - we can see with plain eyes 10 is an outlier. But it is not detected here - why? </p>
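<p>For reference, here is a quick check of both numbers with Python's standard library (this sketch is mine, not from the exercise): the sample standard deviation of the full data set is $\approx 3.54$, while the book's $1.29$ is the sample standard deviation after discarding $10$. Note also that with $q_l=1.5$, $q_u=7$ the fences $q_l-1.5\cdot IQR=-6.75$ and $q_u+1.5\cdot IQR=15.25$ do not flag $10$ under this quartile convention:</p>

```python
import statistics

data = [1, 2, 3, 4, 10]

q_l = (1 + 2) / 2            # lower quartile, as computed in the question
q_u = (4 + 10) / 2           # upper quartile
iqr = q_u - q_l              # 5.5
low, high = q_l - 1.5 * iqr, q_u + 1.5 * iqr   # fences: -6.75, 15.25

outliers = [x for x in data if not (low <= x <= high)]

sd_all = statistics.stdev(data)           # ~3.5355 (the "3.53" above)
sd_trim = statistics.stdev([1, 2, 3, 4])  # ~1.2910 (the book's 1.29)
print(outliers, sd_all, sd_trim)
```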
| Sahiba Arora | 266,110 | <p>$C$ is closed therefore $\mathbb R \setminus C$ is open. Let $x \in \mathbb R \setminus C$. Then there exists an open set $U$ such that $U \subseteq \mathbb R \setminus C$. Now, $f \equiv 0$ on U. Hence $f$ is differentiable at $x$. </p>
|
1,003,096 | <p>Let $G=(\mathbb{Q}-\{0\},*)$ and $H=\{\frac{a}{b}\mid a,b\text{ are odd integers}\}$.</p>
<ol>
<li>Show $H$ is a normal subgroup of $G$.</li>
<li>Show that $G/H \cong (\mathbb{Z},+)$</li>
</ol>
<p>I know that there are multiple definitions for normal subgroup and I am having a hard time to develop the proof for these particular sets. </p>
<p>For part 2. I need help developing a function from $G/H \to (\mathbb{Z},+)$.</p>
| Username Unknown | 62,874 | <p>$\textbf{Show that $G/H \cong (\mathbb{Z},+)$.}$</p>
<p>$\textbf{Proof:}$ We can construct a function $f:G/H \to (\mathbb{Z},+)$ to show that $G/H \cong (\mathbb{Z},+)$. In fact $f$ can be defined on cosets by $f(2^kH)=k$. Why does this make sense?</p>
<p>Note that $\mathbb{Q}=\{\frac{r}{s} | r,s \in \mathbb{Z}, s\neq 0\}$. So $\mathbb{Q}-\{0\}=\{\frac{r}{s} | r,s \in \mathbb{Z}, r,s\neq 0\}$.</p>
<p>Let $\frac{r}{s} \in G$; we can write $r$ and $s$ in their prime factorizations. </p>
<p>(i.e. $\frac{r}{s}=\frac{2^{n_1}3^{n_2}\cdots p_m^{n_m}}{2^{t_1}3^{t_2}\cdots p_m^{t_m}}=\frac{2^{n_1}}{2^{t_1}} \cdot \frac{3^{n_2}\cdots p_m^{n_m}}{3^{t_2}\cdots p_m^{t_m}}$) </p>
<p>Note that the only prime number that is not odd is 2. So we can break the fraction above into the product of two fractions, and since the elements of $H$ are fractions with odd numerators and denominators, $\frac{r}{s}H=2^{n_1-t_1}H$</p>
<p>We eventually want to use the first isomorphism theorem, $G/\ker f \cong \operatorname{Im} f$, applied to the map $f\colon G \to \mathbb{Z}$ sending $\frac{r}{s}$ to the exponent $n_1-t_1$ of $2$ above. So we need to show that $H=\ker f$ and $(\mathbb{Z},+)=\operatorname{Im} f$. Indeed $\ker f=\{\frac{r}{s} : n_1-t_1=0\}=H$, because exponent $0$ means the fraction $\frac{r}{s}$ is comprised of odd integers in both the numerator and denominator, and $\operatorname{Im} f=(\mathbb{Z},+)$ since $f(2^k)=k$ for every $k$. On cosets this gives $f(2^kH)=k$.</p>
<p>To finally use the conclusion above, we need to show that (1) $f$ is a homomorphism, and (2) onto.$$f(2^{k_1}H2^{k_2}H)=f(2^{k_1+k_2}H)=k_1+k_2=f(2^{k_1}H)+f(2^{k_2}H)$$ Hence $f$ is a homomorphism. And lastly, we need to show that this homomorphism is onto (i.e. surjective). For all $k$ in $(\mathbb{Z},+)$, $f(2^kH)=k$ so $f$ is surjective. Since $f$ is an onto homomorphism we have $G/H \cong (\mathbb{Z},+)$</p>
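<p>The homomorphism in this argument is just "the exponent of $2$" in a reduced fraction, and its defining properties are easy to check mechanically. The Python sketch below (mine, for illustration) computes $f$ with the standard <code>fractions</code> module:</p>

```python
from fractions import Fraction
import random

def v2(n):
    """2-adic valuation of a nonzero integer."""
    n, k = abs(n), 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def f(q):
    """f(2^k * a/b) = k for odd a, b; q a nonzero rational."""
    return v2(q.numerator) - v2(q.denominator)

assert f(Fraction(3, 5)) == 0      # odd/odd lies in H = ker f
assert f(Fraction(12, 5)) == 2     # 12/5 = 2^2 * 3/5
assert f(Fraction(3, 8)) == -3

random.seed(1)
for _ in range(1000):              # f(q1*q2) = f(q1) + f(q2)
    q1 = Fraction(random.randint(1, 999), random.randint(1, 999))
    q2 = Fraction(random.randint(1, 999), random.randint(1, 999))
    assert f(q1 * q2) == f(q1) + f(q2)
```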
|
3,106,574 | <p>Let <span class="math-container">$(a_n) _{n\ge 0}$</span> be a sequence satisfying <span class="math-container">$a_{n+2}^3+a_{n+2}=a_{n+1}+a_n$</span> for all <span class="math-container">$n\ge 0$</span>, with <span class="math-container">$a_0,a_1 \ge 1$</span>. Prove that <span class="math-container">$(a_n) _{n\ge 0}$</span> is convergent.<br>
I could prove that <span class="math-container">$a_n \ge 1$</span> by mathematical induction, but here I am stuck. </p>
| Jean Marie | 305,862 | <p>Initial remarks : </p>
<p>a) In case of convergence to a limit <span class="math-container">$L$</span>, we would have <span class="math-container">$L^3+L=L+L$</span>, with solutions <span class="math-container">$L=-1,0,1$</span>. </p>
<p>b) We assume that, up to a switching operation, <span class="math-container">$a_1 \leq a_0.$</span></p>
<blockquote>
<p>We are going to show that <span class="math-container">$a_n$</span> is convergent with limit <span class="math-container">$L=1.$</span></p>
</blockquote>
<p>First step : the recurrence "definition" </p>
<p><span class="math-container">$$a_{n+2}^3+a_{n+2}=a_{n+1}+a_n$$</span></p>
<p>of sequence <span class="math-container">$a_n$</span>, rewritten under the form</p>
<p><span class="math-container">$$ f(a_{n+2})=a_{n+1}+a_n \ \ \text{where} \ \ f(x):=x^3+x, \tag{0}$$</span></p>
<p>cannot be considered as a "definition" unless it has been proved
that <span class="math-container">$a_{n+2}$</span> is determined in a unique way by (1). This will be the consequence of </p>
<p><strong>Lemma <span class="math-container">$0$</span></strong> : <span class="math-container">$f$</span> is a bijection.</p>
<p>Proof : <span class="math-container">$f'(x)=3x^2+1\geq 1>0$</span>. Thus function <span class="math-container">$f$</span> is strictly increasing, therefore is bijective. <span class="math-container">$\square$</span>.</p>
<p>Let us write (0) under the form :</p>
<p><span class="math-container">$$a_{n+2}=f^{-1}(a_{n+1}+a_n)\tag{1}$$</span></p>
<p><strong>Lemma <span class="math-container">$1$</span></strong> : <span class="math-container">$a_n \geq 1$</span> whatever <span class="math-container">$n$</span>.</p>
<p>Proof : by recurrence, using the fact that $f^{-1}([2,+\infty))=[1,+\infty)$. $\ \square$</p>
<p><strong>Lemma <span class="math-container">$2$</span></strong> : for any <span class="math-container">$x \geq 0, f(x) \geq 2x^2$</span></p>
<p>Proof by observing that <span class="math-container">$x^3+x-2x^2=x(x-1)^2 \geq 0. \ \square$</span></p>
<p>As a consequence of lemma <span class="math-container">$2$</span>, </p>
<p><span class="math-container">$$f^{-1}(u) \leq \sqrt{\tfrac{u}{2}}.\tag{2}$$</span></p>
<p>Thus, for any <span class="math-container">$n$</span>, </p>
<p><span class="math-container">$$a_{n+2} \leq \sqrt{\tfrac{a_{n+1}+a_n}{2}}\tag{3}.$$</span></p>
<p>Let us define an auxiliary sequence <span class="math-container">$b_n$</span> by </p>
<p><span class="math-container">$$b_n=a_n-1\tag{4}$$</span> </p>
<p>Our objective is thus to prove that <span class="math-container">$b_n \to 0$</span>. (3) becomes :</p>
<p><span class="math-container">$$b_{n+2} \leq \sqrt{1+\tfrac{b_{n+1}+b_n}{2}}-1\tag{5}.$$</span></p>
<p><strong>Lemma <span class="math-container">$3$</span></strong> : <span class="math-container">$b_n$</span> is a positive decreasing sequence bounded from below by <span class="math-container">$0$</span>.</p>
<p>Proof : By recurrence. True (see initial remark b)) for the first two elements. For the general case, use in (5) the (classical) result :</p>
<p><span class="math-container">$$\text{for all} \ x>0, \ \ \ 1 \leq \sqrt{1+x}\leq 1+\tfrac{x}{2}.\tag{6}$$</span> </p>
<p><span class="math-container">$\ \square$</span></p>
<p>Lemma <span class="math-container">$3$</span> allows us to conclude that <span class="math-container">$a_n$</span> is a positive decreasing sequence bounded from below by <span class="math-container">$1$</span>. Thus <span class="math-container">$a_n$</span> converges to a limit which is necessarily <span class="math-container">$L=1$</span> (see initial remark a)).</p>
<p>Remark about the rate of convergence : I have observed that sequence <span class="math-container">$b_n$</span> behaves asymptotically as a geometrical sequence with ratio <span class="math-container">$r=0.640388...$</span> whatever the initial values <span class="math-container">$a_0$</span> and <span class="math-container">$a_1$</span>. This value is in fact equal to the value <span class="math-container">$(1+\sqrt{17})/8$</span> found for one of the roots of characteristic equation found by @maxmilgram. I have no true proof of this fact.</p>
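<p>Both the convergence and the observed geometric rate are easy to reproduce numerically. The sketch below (mine, not part of the answer) inverts $f(x)=x^3+x$ with Newton's method and checks that $b_n=a_n-1$ decays with ratio close to $(1+\sqrt{17})/8$, the positive root of the characteristic equation $4x^2=x+1$ of the linearised recurrence $b_{n+2}\approx (b_{n+1}+b_n)/4$:</p>

```python
import math

def f_inv(c):
    """Solve x^3 + x = c (c >= 2) for the unique real root, by Newton's method."""
    x = 1.0
    for _ in range(100):
        x -= (x**3 + x - c) / (3 * x**2 + 1)
    return x

a = [2.0, 2.0]                       # any a0, a1 >= 1
for n in range(40):
    a.append(f_inv(a[-1] + a[-2]))

b = [t - 1.0 for t in a]
ratio = b[-1] / b[-2]

print(a[-1], ratio)  # a_n -> 1, ratio -> (1+sqrt(17))/8 ~ 0.640388
assert abs(a[-1] - 1.0) < 1e-6
assert abs(ratio - (1 + math.sqrt(17)) / 8) < 1e-3
```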
|
1,295,453 | <p>In my assignment I have to calculate to following limit. I wanted to know if my solution is correct. Your help is appreciated:</p>
<p>$$\lim_{n \to \infty}n\cos\frac{\pi n} {n+1} $$</p>
<p>Here's my solution:</p>
<p>$$=\lim_{n \to \infty}n\cos \pi \frac{n} {n+1} $$</p>
<p>Since $\frac {n} {n+1}\to 1 $ and $\cos \pi \to (-1)$ we can use the "infinity times a number" rule, since $n \to \infty$. </p>
<p>Therefore, the limit will be $(- \infty) $</p>
<p>Did I get it right? </p>
<p>Thanks, </p>
<p>Alan </p>
| Jan Eerland | 226,665 | <p>$$\lim_{n \to \infty}n\cos\left(\frac{\pi n} {n+1}\right) =$$</p>
<p>$$\lim_{n \to \infty}n\lim_{n \to \infty}\cos\left(\frac{\pi n} {n+1}\right) =$$
$$\lim_{n \to \infty}n\cos\left(\lim_{n \to \infty}\frac{\pi n} {n+1}\right)=$$
$$\lim_{n \to \infty}n\cos\left(\lim_{n \to \infty}\frac{\pi } {1+\frac{1}{n}}\right)=$$
$$\lim_{n \to \infty}n\cos(\pi)=-1\left(\lim_{n \to \infty}n\right)=-\infty$$</p>
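<p>Numerically the divergence to $-\infty$ is visible immediately; since $\cos\frac{\pi n}{n+1}=-\cos\frac{\pi}{n+1}\to-1$, the sequence behaves like $-n$. A quick check (mine, not part of the answer):</p>

```python
import math

def term(n):
    """n * cos(pi*n/(n+1)), the sequence from the question."""
    return n * math.cos(math.pi * n / (n + 1))

print(term(10), term(100), term(1000))
assert term(100) < -99
assert term(1000) < -999
assert term(1000) < term(100)  # still heading downward
```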
|
73,277 | <p>Let $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_m)$ be a vector of real numbers in $[-\pi,\pi]$. For $t\ge 0$, define
$$ f(t,\boldsymbol{\theta}) = \binom{m+t-1}{t}^{-1}
\sum_{j_1+\cdots+j_m=t} \exp(ij_1\theta_1+\cdots+ij_m\theta_m),$$
where the sum is over non-negative integers $j_1,\ldots,j_m$ with sum $t$.
Note that the number of terms in the sum is $\binom{m+t-1}{t}$, so
$|f(t,\boldsymbol{\theta})|\le 1$ with equality occurring when all the $\theta_j$s
are equal.</p>
<p>For a problem in asymptotic combinatorics, we need a bound on
$|f(t,\boldsymbol{\theta})|$ that decreases rapidly as the $\theta_j$s move apart and
is valid for all $\boldsymbol{\theta}$.
Surely this problem has been studied before?</p>
<p>Note that $\binom{m+t-1}{t}f(t,\boldsymbol{\theta})$ is the coefficient of $x^t$ in
$$\prod_{j=1}^m (1-xe^{i\theta_j})^{-1},$$
which suggests some sort of contour integral approach.</p>
| Lucia | 38,624 | <p>We shall prove that
$$
f(t,{\theta}) \le \frac{m-1}{t+m-1} \min_{1\le j,k \le m} \frac{1}{|\sin((\theta_j-\theta_k)/2)|}.
$$
This shows that if the angles are not too close to each other, then the sum does get
small. </p>
<p>Suppose without loss of generality that the minimum in our bound occurs for $\theta_1$ and $\theta_2$
(so these are the angles that are furthest apart). Then writing $j_1+j_2 =\ell$ we have
$$
|f(t,{ \theta})| \le \binom{m+t-1}{t}^{-1} \sum_{\ell=0}^{t} \sum_{j_3+\ldots+j_m=t-\ell}
\Big| \sum_{j_1+j_2 =\ell} \exp(ij_1 \theta_1 + ij_2 \theta_2)\Big|.
$$
Now the inner sum over $j_1$ (and $j_2=\ell -j_1$) is simply a geometric progression, and
so
\begin{align*}
\Big| \sum_{j_1+j_2=\ell} \exp(ij_1 \theta_1 + i j_2 \theta_2) \Big| &=
\Big| \sum_{j=0}^{\ell} \exp(ij (\theta_1-\theta_2))\Big| =
\Big|\frac{\exp(i(\ell+1)(\theta_1-\theta_2))-1}{\exp(i(\theta_1-\theta_2))-1}\Big|
\\
&\le \frac{2}{|\exp(i(\theta_1-\theta_2))-1|} = \frac{1}{|\sin((\theta_1-\theta_2)/2)|}.
\end{align*}
Therefore
$$
|f(t,{\theta})| \le \frac{1}{|\sin((\theta_1-\theta_2)/2)|}\binom{m+t-1}{t}^{-1} \sum_{\ell+j_3+\ldots+j_m=t} 1,
$$
which proves our claimed bound.</p>
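<p>The bound is easy to test numerically for small $m$ and $t$. The Python sketch below (mine) evaluates $f(t,\theta)$ by direct enumeration for $m=3$ and compares it with $\frac{m-1}{t+m-1}\min_{j\neq k}|\sin((\theta_j-\theta_k)/2)|^{-1}$:</p>

```python
import cmath
import math
import random
from itertools import combinations

def f(t, theta):
    """f(t, theta) for m = 3, by enumerating j1 + j2 + j3 = t."""
    m = len(theta)
    total, count = 0j, 0
    for j1 in range(t + 1):
        for j2 in range(t - j1 + 1):
            j3 = t - j1 - j2
            total += cmath.exp(1j * (j1*theta[0] + j2*theta[1] + j3*theta[2]))
            count += 1
    assert count == math.comb(m + t - 1, t)
    return total / count

random.seed(2)
for _ in range(200):
    theta = [random.uniform(-math.pi, math.pi) for _ in range(3)]
    t = random.randint(1, 8)
    bound = (2 / (t + 2)) * min(1 / abs(math.sin((a - b) / 2))
                                for a, b in combinations(theta, 2))
    assert abs(f(t, theta)) <= min(1.0, bound) + 1e-9

# equality case: all angles equal gives |f| = 1
assert abs(abs(f(5, [0.7, 0.7, 0.7])) - 1.0) < 1e-9
```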
<p>As the original question suggests, one would be able to prove better bounds
using contour integrals. The key would be to integrate on the circle centered at
the origin and with radius $r=t/(m+t)$. One should be able to get good bounds
in terms of
$$
\sum_{j,k} \sin^2 \Big(\frac{\theta_j-\theta_k}{2}\Big),
$$
but I have not worked this out carefully.</p>
|
3,910,345 | <p>Recently a lecturer used this notation, which I assume is a sort of twisted form of Leibniz notation:</p>
<p><span class="math-container">$$y\,\mathrm{d}x - x\,\mathrm{d}y \equiv -x^2\,\mathrm{d}\left(\frac{y}{x}\right)$$</span></p>
<p>The logic here was that this could be used as:</p>
<p><span class="math-container">$$\begin{align}
-x^2\,\mathrm{d}\left(\frac{y}{x}\right) &\equiv -x^2\,\left(\frac{\mathrm{d}y}{x} -\frac{y}{x^2}\,\mathrm{d}x\right)\\
&\equiv y\mathrm{d}x - x\mathrm{d}y
\end{align}
$$</span></p>
<p>Why is this legal?</p>
<p>I can see some kind of differentiation going on with the second term in the above equivalence, producing the <span class="math-container">$\frac{1}{x^2}$</span>, but having the single <span class="math-container">$\mathrm{d}$</span> seems like a really weird abuse of notation, and I don't quite follow why it splits the single <span class="math-container">$\frac{y}{x}$</span> fraction into two parts.</p>
| littleO | 40,119 | <p>Such arguments can always be rephrased to avoid treating <span class="math-container">$dx$</span> and <span class="math-container">$dy$</span>, etc, as individual "infinitesimal" quantities. (On the other hand, "infinitesimal intuition" is a powerful and intuitive way to derive calculus formulas, so I can see why physicists are drawn to it.)</p>
<p>I'll assume that <span class="math-container">$y$</span> is a function of <span class="math-container">$x$</span>. Let <span class="math-container">$h(x) = y(x)/x$</span>. Then
<span class="math-container">$$
h'(x) = \frac{x y'(x) - y(x)}{x^2}
$$</span>
so
<span class="math-container">$$
\tag{1} -x^2 h'(x) = y(x) - x y'(x).
$$</span>
That's probably how I would write it, because the meaning is perfectly clear.</p>
<p>We could also write (1) using Leibniz notation:
<span class="math-container">$$
-x^2 \frac{dh}{dx} = y - x \frac{dy}{dx}.
$$</span>
If we then "multiply through by <span class="math-container">$dx$</span>", we obtain
<span class="math-container">$$
- x^2 dh = y dx - x dy
$$</span>
which is what your lecturer wrote.</p>
<p>I can imagine that some people think the version using infinitesimal notation is more beautiful or more intuitive.</p>
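<p>The identity is also easy to sanity-check numerically for a concrete $y(x)$. The sketch below (mine) takes $y(x)=x^3+2x$ and compares $y - x\,y'$ against $-x^2\,\frac{d}{dx}\!\left(\frac{y}{x}\right)$, the latter via a central finite difference:</p>

```python
def y(x):  return x**3 + 2*x
def yp(x): return 3*x**2 + 2      # exact derivative of y

def g(x):  return y(x) / x        # the quotient y/x

h = 1e-6
for x in (0.5, 1.0, 2.0, -1.5):
    lhs = y(x) - x * yp(x)                   # "y dx - x dy" per unit dx
    dg = (g(x + h) - g(x - h)) / (2 * h)     # d(y/x)/dx, numerically
    rhs = -x**2 * dg
    assert abs(lhs - rhs) < 1e-4
```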
|
2,280,203 | <p>How to transform the integral </p>
<p>$$\int _{0}^{\pi }\sin ^{2}\left( \psi \right) \sin \left( m\psi \right) d\psi $$</p>
<p>to </p>
<p>$$\int _{0}^{\pi }\left( \dfrac {1} {2}-\dfrac {1} {2}\cos 2\psi \right) \sin m\psi d\psi $$</p>
<p>What is the general method you need to solve trig questions like this? How do you know which identities to use, and which ones should you always have memorised to derive this?</p>
<p>$\sin(2x)=2 \sin (x)\cos (x)$<br>
$\cos(2x)=1-\cos^2(x)=1-2\sin^2(x)$<br>
and the most elementary one: $1=\sin^2(x)+\cos^2(x)$, </p>
<p>Trigonometric products to sums and identities for higher-exponent trigonometric functions are also handy, but harder to memorize. These can be found from math-tables.</p>
<p>Also good rules to remember when integrating trig. functions:<br>
$$\frac{d}{dx}(\sin^n(x))=n\cos(x)\sin^{n-1}(x)$$
(or generally)
$$\int f'f^n=\frac{f^{n+1}}{n+1}+C,(n \neq-1)$$<br>
$$\int \frac{f'}{f}=\ln|f|+C$$</p>
<p>So in many cases the trick is to transform the integrand (with trigonometric identities) into a form where these rules can be applied.</p>
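<p>As a quick check that the rewriting in the question is nothing but the identity $\sin^2\psi=\frac12-\frac12\cos 2\psi$, one can compare the two integrands pointwise (a throwaway sketch, not part of the answer):</p>

```python
import math

m = 3
for k in range(1, 100):
    psi = k * math.pi / 100
    original = math.sin(psi)**2 * math.sin(m * psi)
    rewritten = (0.5 - 0.5 * math.cos(2 * psi)) * math.sin(m * psi)
    assert abs(original - rewritten) < 1e-12
```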
|
3,995,986 | <p>Need help integrating:
<span class="math-container">$$\int _0^{\infty }\:\:\frac{6}{\theta}xe^{-\frac{2x}{\theta }}\left(1-e^{-\frac{x}{\theta }}\right)dx$$</span></p>
<p>I think I should multiply the <span class="math-container">$$xe^{-\frac{2x}{\theta }}$$</span> out and then use integration by parts but it is not really working for me?</p>
| Raffaele | 83,382 | <p><span class="math-container">$$I=\int_0^{\infty } \frac{6 x}{t} e^{-\frac{2 x}{t}} \left(1-e^{-\frac{x}{t}}\right) \, dx$$</span>
Set <span class="math-container">$$e^{-\frac{x}{t}}=u\to x=-t\log u;\;dx=-\frac{t}{u}\,du$$</span> (writing $t$ for $\theta$).
<span class="math-container">$$I=\int_1^0-6 (1-u) u^2 \log u\left(-\frac{t}{u}\right)\,du=$$</span>
<span class="math-container">$$=6t\int_0^1 (u-1) u \log u\,du=6t[-\frac{u^3}{9}+\frac{1}{3} u^3 \log u+\frac{u^2}{4}-\frac{1}{2} u^2 \log u]_0^1=\frac{5 }{6}t$$</span>
As <span class="math-container">$$\underset{u\to 0}{\text{lim}}\left(-\frac{u^3}{9}+\frac{1}{3} u^3 \log u+\frac{u^2}{4}-\frac{1}{2} u^2 \log u\right)=0$$</span></p>
<p>TL;DR <span class="math-container">$I=\frac{5 }{6}t$</span></p>
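<p>A quick numerical sanity check of the closed form $I=\frac{5t}{6}$ (my own addition; the quadrature scheme, cutoff, and sample values of $t$ are arbitrary choices):</p>

```python
import math

def integrand(x, t):
    # (6x/t) * exp(-2x/t) * (1 - exp(-x/t)), as in the question
    return (6 * x / t) * math.exp(-2 * x / t) * (1 - math.exp(-x / t))

def integral(t, n=100000):
    # midpoint rule on [0, 30t]; the integrand beyond 30t is negligibly small
    upper = 30 * t
    h = upper / n
    return sum(integrand((k + 0.5) * h, t) for k in range(n)) * h

for t in (0.5, 1.0, 2.0, 7.3):
    assert abs(integral(t) / (5 * t / 6) - 1) < 1e-4
```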
|
351,030 | <p>for positive integer $n$, how can we show</p>
<p>$$ \sum_{d | n} \mu(d) d(d) = (-1)^{\omega(n)} $$</p>
<p>where $d(n)$ is the number of positive divisors of $n$ and $\mu(n)$ is $(-1)^{\omega(n)}$ if $n$ is square free, and $0$ otherwise. Also, what is</p>
<p>$$ \sum_{d | n} \mu(d) \sigma (d) $$ where $\sigma(n)$ is the sum of positive divisors of $n$</p>
| Norbert | 19,538 | <p>Since $L_\infty(\Omega)=L_1^*(\Omega)$ you need to show that for all $f\in L_1(\Omega)$
$$
\lim\limits_{n\to\infty}\langle A_n, f\rangle = \langle A, f\rangle\tag{1}
$$
where $A$ is the desired limit. In fact it is enough to check $(1)$ only for some functions $f\in S$, where $L_1(\Omega)=\overline{\mathrm{span}S}$. Now consider
$$
S=\left\{\chi_{[a,b)}:[a,b)\subset [0,1)\right\}
$$
Then for all $f\in S$ you have
$$
\lim\limits_{n\to\infty}\langle A_n,f\rangle=\frac{\alpha+\beta}{2}\int\limits_{(0,1)}f(t)d\mu(t)
$$
Now you can suggest what is $A$.</p>
|
133,418 | <p>Let $\langle R,0,1,+,\cdot,<\rangle$ be the standard model for R, and let S be a countable model of R (satisfying all true first-order statements in R). Is it true that the set 1,1+1,1+1+1,… is bounded in S? My intuition says "no", but I have yet to find a counterexample. I read something about rational functions, but I cannot verify that it is, indeed, a non-standard model of R.</p>
| André Nicolas | 6,312 | <p>The (first-order) theory of real-closed fields is complete. So any real-closed field that has the desired properties (countable, non-Archimedean) will do. We can use devices from Model Theory. However, an <em>algebraically</em> natural approach is to start with the rational functions in $x$ with real algebraic coefficients, and the standard lexicographic ordering. Then we extend this to a real-closed field. </p>
<p>This yields the field of <a href="http://en.wikipedia.org/wiki/Puiseux_series" rel="nofollow">Puiseux series</a> with real algebraic coefficients. It is real-closed, so elementarily equivalent to the field $\mathbb{R}$. And it is not Archimedean, since $x>1$, $x>1+1$, $x>1+1+1$, and so on. To get infinitely many non-isomorphic such fields, we can add $n$ transcendentals to the base field, for some positive integer $n$, or countably many transcendentals, and again form the field of Puiseux series. </p>
<p>Your question did not ask for countable <em>Archimedean</em> fields that are elementarily equivalent to $\mathbb{R}$. But they are easy to find. The simplest is the field of real algebraic numbers. </p>
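<p>As a toy illustration (my own sketch, not part of the answer): already in the ring of polynomials in $x$ — a small piece of the rational-function field, ordered by leading coefficients so that $x$ is infinitely large — the element $x$ exceeds every finite sum $1+1+\cdots+1$:</p>

```python
from fractions import Fraction

def leading_sign(p):
    # sign of a polynomial (coefficient list, constant term first) under the
    # ordering in which x is infinitely large: the sign of its top coefficient
    for c in reversed(p):
        if c != 0:
            return 1 if c > 0 else -1
    return 0

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def greater(p, q):
    # p > q iff p - q has positive leading coefficient
    return leading_sign(poly_sub(p, q)) == 1

x = [Fraction(0), Fraction(1)]  # the polynomial x
for n in (1, 10, 10 ** 6):
    assert greater(x, [Fraction(n)])  # x > 1 + 1 + ... + 1 (n times)
```

<p>The field of Puiseux series in the answer is a much larger (real-closed) extension of this ordered ring, but the non-Archimedean phenomenon is exactly the one above.</p>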
|
2,622,583 | <blockquote>
<p>Prove that if $f:\mathbb R \to \mathbb R$ is a measurable function and $f(x)=f(x+1)$ almost everywhere, then there exists a measurable function $g:\mathbb R \to \mathbb R$ with $f=g$ almost everywhere and $g(x)=g(x+1)$ for every $x \in \mathbb R$</p>
</blockquote>
<p>I'm trying to prove this by construction.
We know that $A= \{x \in \mathbb R :f(x) \not = f(x+1) \}$ is measurable and $m(A)=0$, so I thought $g$ should be something like:</p>
<p>$ g(x) = \left\{
\begin{array}{ll}
f(x) & \mathrm{if\ } x \notin A \\
?? & \mathrm{if\ } x \in A
\end{array}
\right.$</p>
<p>And this way,we would get that $f=g$ almost everywhere, and $g$ would be measurable... But using this I haven't been able to find a way to make $g(x)=g(x+1)$ for every $x \in \mathbb R$</p>
| ncmathsadist | 4,154 | <p>Let $E = \{x \mid f(x) \not= f(x+1)\}$; this has measure zero. Now let $Q$ be the union of all integer translates of $E$; this also has measure zero, and $x \in Q$ iff $x+1 \in Q$. Now define $g(x) = f(x)$ for $x \notin Q$ and $g(x) = 0$ for $x \in Q$. </p>
|
306,461 | <p>Let $A = \{(x,y) \in\mathbb{R}^2: a \leq (x-c)^2+(y-d)^2 \leq b\}$ for given $a,b,c, d$ real numbers. I want to show that $A$ is path-connected.</p>
<p>How can I do that?</p>
<p>I know that every open subset of $\mathbb R^2$ that is connected is path connected. But this is obviously not open so I cannot use that. Then I thought of multiple cases. If we take arbitrary $x$ and $y$ and draw the line between them and they do not intersect with the circle centred at $(c,d)$ then we can obviously draw a line between the points which is still in the set, so we can then define the function. I am stuck on the other case. </p>
| Seirios | 36,434 | <p><strong>Hint:</strong> Translate $A$ so that you can suppose $c=d=0$ and use polar coordinates.</p>
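<p>A numerical illustration of the hint (my own sketch; the values of $a,b,c,d$ and the two endpoints are arbitrary): linearly interpolating the polar coordinates of two points of $A$ about $(c,d)$ gives a path whose squared radius stays between those of the endpoints, hence inside $A$:</p>

```python
import math

# Annulus A = {(x, y) : a <= (x-c)^2 + (y-d)^2 <= b}; sample values are mine
a, b, c, d = 1.0, 4.0, 0.0, 0.0

def in_A(x, y):
    r2 = (x - c) ** 2 + (y - d) ** 2
    return a <= r2 <= b

def path(p, q, s):
    # point at parameter s in [0, 1] on the path that linearly interpolates
    # the polar coordinates (radius, angle) of p and q about (c, d)
    r1, th1 = math.hypot(p[0] - c, p[1] - d), math.atan2(p[1] - d, p[0] - c)
    r2, th2 = math.hypot(q[0] - c, q[1] - d), math.atan2(q[1] - d, q[0] - c)
    r = r1 + s * (r2 - r1)
    th = th1 + s * (th2 - th1)
    return (c + r * math.cos(th), d + r * math.sin(th))

p = (1.1 * math.cos(0.3), 1.1 * math.sin(0.3))   # radius 1.1, inside A
q = (1.9 * math.cos(2.5), 1.9 * math.sin(2.5))   # radius 1.9, inside A
assert in_A(*p) and in_A(*q)
assert all(in_A(*path(p, q, k / 1000)) for k in range(1001))
```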
|
29,255 | <p>Sorry! I am not clear on these questions:</p>
<ol>
<li><p>why an empty set is open as well as closed?</p></li>
<li><p>why the set of all real numbers is open as well as closed?</p></li>
</ol>
| XGS | 164,057 | <p>This should be pretty obvious. Take $\mathbb{R}$ (together with its usual topology) for example. We have:</p>
<ol>
<li>Since the intersection of two open sets is open, it follows that $(1, 2) \cap (3, 4) = \emptyset$ must be open; the complement of $\emptyset$, which is $\mathbb{R}$, must then be closed.</li>
<li>Since any union of open sets is open, it follows that $(-\infty, 1) \cup (-1, +\infty) = \mathbb{R}$ is open; by the same complement rule, the complement of $\mathbb{R}$, which is $\emptyset$, must be closed.</li>
</ol>
<p>Hope this helps.</p>
|
2,310,441 | <p>I consider the sequence of composite odd integers: 9, 15, 21, 25, 27, 33, 35, 39, ...</p>
<p>I observe that there are certain large gaps between the composite odd integers and this may contribute towards the solution.</p>
<p>So I start by considering some sums first:</p>
<p>9 + 9 + 9 = 27, 9 + 9 + 15 = 33. So this means that 31 is potentially such a prime number.</p>
<p>Then I consider other sums and manage to obtain 39, 43 and 45. So now 41 becomes the potential contender.</p>
<p>But this method is clearly just trial and error. Is there a more elegant method?</p>
| Barry Cipra | 86,747 | <p>For every prime $p\gt3$, either $p-25$ or $p-35$ is divisible by $6$. (Both numbers are even, and $p-25\equiv p-1$ mod $3$ while $p-35\equiv p-2$ mod $3$.). If $p\ge53$, the difference ($p-25$ or $p-35$) is at least $18$, hence can be written in the form $6(a+b+1)=3(2a+1)+3(2b+1)$, with $a,b\ge1$. Thus every prime $p\gt47$ can be written as the sum of two odd multiples of $3$ and either $25$ or $35$.</p>
<p>Finally, $p=47$ cannot be written as a sum of three composite odd numbers: If $47=x+y+z$ with $x\le y\le z$ from the list of odd composites, then, since $47/3\lt16$, $x$ must be either $9$ or $15$. If $x=9$, then $y+z=38$, implying $y$ is also either $9$ or $15$ (since $38/2=19$), neither of which works. If $x=15$, then $y+z=32$, again implying $y$ is either $9$ or $15$, neither of which works. (Alternatively, since $25$ and $35$ are the only odd composites less than $47$ that are not multiples of $3$, and since $25+25\gt47$, we would have to write $47=x+y+35$. But $x+y=12$ has no solutions in odd composites.)</p>
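<p>As a sanity check (my own brute-force script, not part of the argument above), a computer search confirms that among primes below $500$ the largest one that is not a sum of three composite odd numbers is indeed $47$:</p>

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 500
primes = primes_up_to(N)
prime_set = set(primes)
odd_composites = [k for k in range(9, N, 2) if k not in prime_set]
oc_set = set(odd_composites)

def representable(p):
    # is p a sum of three (not necessarily distinct) composite odd numbers?
    return any((p - x - y) in oc_set
               for x in odd_composites
               for y in odd_composites
               if x + y < p)

non_rep = [p for p in primes if not representable(p)]
# every prime below 27 = 9+9+9 fails trivially; above that only five primes fail
assert max(non_rep) == 47
print([p for p in non_rep if p > 27])  # → [29, 31, 37, 41, 47]
```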
|
480,727 | <p>If $$2^x=3^y=6^{-z}$$ and $x,y,z \neq 0 $ then prove that:$$ \frac{1}{x}+\frac{1}{y}+\frac{1}{z}=0$$</p>
<p>I have tried starting with taking logarithms, but that gives just some more equations.</p>
<p>Any specific way to solve these type of problems?</p>
<p>Any help will be appreciated.</p>
| Aroonalok | 142,139 | <p>My answer does not solve the problem at hand (the existing solutions work fine) but it addresses an issue related to the condition $x,y,z \neq 0$.<br>
It may seem that $x = y = z = 0$ is the only solution to the above system of equations, and that by imposing the condition $x,y,z \neq 0$ we rule even that out, leaving no real solution. But that is not true.</p>
<p><a href="https://i.stack.imgur.com/BrXVw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BrXVw.png" alt="enter image description here"></a>
If $x,y,z \neq 0$, then too, this system of equations has infinitely many solutions. </p>
<p>In the above figure, three distinct values of $x, y, z$ are marked as a solution where $2^x = 3^y = 6^{-z} =v$. We can slide the line of $v$ up or down thereby getting infinite solutions. </p>
|
1,029,650 | <p>In Four-dimensional space, the Levi-Civita symbol is defined as:</p>
<p>$$ \varepsilon_{ijkl} = \begin{cases}
+1 & \text{if }(i,j,k,l) \text{ is an even permutation of } (1,2,3,4) \\
-1 & \text{if }(i,j,k,l) \text{ is an odd permutation of } (1,2,3,4) \\
0 & \text{otherwise}
\end{cases} $$</p>
<p>Let's suppose that I fix the last index ( l=4 for example). I guess that the 4-indices symbol can now be replaced with a 3-indices one:</p>
<p>$$ \varepsilon_{ijk} = \begin{cases}
+1 & \text{if } (i,j,k) \text{ is } (1,2,3), (2,3,1) \text{ or } (3,1,2), \\
-1 & \text{if } (i,j,k) \text{ is } (3,2,1), (1,3,2) \text{ or } (2,1,3), \\
0 & \text{if } i=j \text{ or } j=k \text{ or } k=i
\end{cases} $$</p>
<p>My doubt is the following: is $$ \varepsilon_{ijk4} A^{jk} = \varepsilon_{ij4k} A^{jk} $$ true? (In the sense that both 4-index symbols can be replaced by the same 3-index symbol; I'm using the Einstein convention, so repeated indices are summed.) Or do they give two 3-index symbols with different signs?</p>
| Oscar Cunningham | 1,149 | <p>If you fix one of the indices of $\varepsilon_{ijkl}$ to be $4$ you get $\pm\varepsilon_{ijk}$ depending on whether you fix an odd- or an even-positioned index. So $\varepsilon_{ijk4}=\varepsilon_{ijk}$ but $\varepsilon_{ij4k}=-\varepsilon_{ijk}$. To see why the signs come out this way, notice that when you substitute $1,2,3$ for $i,j,k$ you get: $\varepsilon_{1234}=1=\varepsilon_{123}$, but $\varepsilon_{1243}=-1=-\varepsilon_{123}$. So in fact
$$\varepsilon_{ijk4}A^{jk}=-\varepsilon_{ij4k}A^{jk}.$$</p>
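<p>The sign pattern is easy to confirm by brute force (my own check, computing each symbol as the sign of a permutation via its inversion count):</p>

```python
def sign(perm):
    # sign of a permutation tuple via its inversion count; 0 on repeated entries
    if len(set(perm)) != len(perm):
        return 0
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def eps4(i, j, k, l):
    return sign((i, j, k, l))  # Levi-Civita symbol on (1,2,3,4)

def eps3(i, j, k):
    return sign((i, j, k))     # Levi-Civita symbol on (1,2,3)

for i in range(1, 4):
    for j in range(1, 4):
        for k in range(1, 4):
            assert eps4(i, j, k, 4) == eps3(i, j, k)    # fixing the 4th slot
            assert eps4(i, j, 4, k) == -eps3(i, j, k)   # fixing the 3rd slot
```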
|
386,649 | <p>If you were working in a number system where there was a one-to-one and onto mapping from each natural to a symbol in the system, what would it mean to have a representation in the system that involved more than one digit?</p>
<p>For example, if we let $a_0$ represent $0$, and $a_n$ represent the number $n$ for any $n$ in $\mathbb{N}$, would '$a_1$$a_0$' represent a number?</p>
<p>Is such a system well defined or useful for anything?</p>
| Foo Barrigno | 73,411 | <p>The <a href="http://en.wikipedia.org/wiki/Factorial_number_system" rel="nofollow">factorial number system</a> is one such system. Each place value has one more digit than the previous one. It also has the wonderful property that all rational numbers have a terminating factorial system representation.</p>
<p>In general, any mixed-radix system where the number of values represented by each digit is unique is a numeral system with an infinite number of digit values.</p>
|
565,046 | <blockquote>
<p>The center of $D_6$ is isomorphic to $\mathbb{Z}_2$.</p>
</blockquote>
<p>I have that
$$D_6=\left< a,b \mid a^6=b^2=e,\, ba=a^{-1}b\right>$$
$$\Rightarrow D_6=\{e,a,a^2,a^3,a^4,a^5,b,ab,a^2b,a^3b,a^4b,a^5b\}.$$
My method for trying to do this has been just checking elements that could be candidates. I've whittled it down to the fact that the only elements that commute with all of $D_6$ must be $\{e,a^3\}$, but I got there by finding a pair of elements that didn't commute for all other elements, and I still haven't shown that $a^3$ commutes with everything. For example, I have been trying to show that
$$a^3b=ba^3$$
and haven't gotten very far yet. If I had to answer a question like this on the exam, I feel it would be difficult; is there any kind of trick or hint, other than brute force with the relations, for showing that $a^3$ commutes with everything?</p>
<p>For the solution: once I know the center of $D_6$ is what I think it is, then since there is only one group of order $2$ up to isomorphism, it must be isomorphic to $\mathbb{Z}_2$.</p>
<p>Ideally a way that doesn't appeal to $D_6$ as symmetries of the hexagon if that seems possible. </p>
| Ben West | 37,097 | <p>I wrote up a general classification for the centers of $D_n$, (the dihedral group of order $2n$, not $n$) just the other week. Perhaps it will be useful to read:</p>
<p>If $n=1,2$, then $D_n$ is of order $2$ or $4$, hence abelian, and $Z(D_n)=D_n$. Suppose $n\geq 3$. We have the presentation
$$
D_n=\langle x,y:x^2=y^n=1,\; xyx=y^{-1}\rangle.
$$
Then $yx=xy^{-1}$ implies the reduction $y^kx=xy^{-k}$. An element is in the center iff it commutes with $x$ and $y$, since $x$ and $y$ generate $D_n$. Let $z=x^iy^j$ be in the center. From $zy=yz$ we see
$$
x^iy^{j+1}=yx^iy^j\implies x^iy=yx^i.
$$
But $i\neq 1$, else we have $xy=yx=xy^{-1}$, so $y^2=1$, a contradiction since $n\geq 3$. So $i=0$, and $z=y^j$. Then from the equation $zx=xz$, we have
$$
y^jx=xy^j=xy^{-j}
$$
which implies $y^{2j}=1$. Thus $j=0$ or $j=n/2$. If $n$ is odd, we must necessarily have $j=0$, and $z=1$. If $n$ is even, either possibility works. But $y^{n/2}$ is indeed in the center as it clearly commutes with $y$, as well as with $x$ since $y^{n/2}x=xy^{-n/2}=x(y^{n/2})^{-1}=xy^{n/2}$. Summarizing, we have, for $n\geq 3$,
$$
Z(D_n)=\begin{cases}
\{1,y^{n/2}\} & \text{if }n\equiv 0\pmod{2},\\
\{1\} & \text{if }n\equiv 1\pmod{2}.
\end{cases}
$$</p>
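<p>The classification above can be confirmed for $D_6$ by brute force (my own sketch, using the presentation from the question: $a^i b^s$ encoded as the pair $(i,s)$ and multiplied via $ba=a^{-1}b$):</p>

```python
# D_6 = <a, b | a^6 = b^2 = e, ba = a^{-1}b>; the relation gives
# (a^i b^s)(a^j b^t) = a^{i + (-1)^s j} b^{s+t}
def mul(g, h):
    i, s = g
    j, t = h
    return ((i + (-1) ** s * j) % 6, (s + t) % 2)

elements = [(i, s) for i in range(6) for s in range(2)]
center = [g for g in elements if all(mul(g, h) == mul(h, g) for h in elements)]
print(center)  # → [(0, 0), (3, 0)], i.e. Z(D_6) = {e, a^3}
```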
|
565,046 | <blockquote>
<p>The center of $D_6$ is isomorphic to $\mathbb{Z}_2$.</p>
</blockquote>
<p>I have that
$$D_6=\left< a,b \mid a^6=b^2=e,\, ba=a^{-1}b\right>$$
$$\Rightarrow D_6=\{e,a,a^2,a^3,a^4,a^5,b,ab,a^2b,a^3b,a^4b,a^5b\}.$$
My method for trying to do this has been just checking elements that could be candidates. I've whittled it down to the fact that the only elements that commute with all of $D_6$ must be $\{e,a^3\}$, but I got there by finding a pair of elements that didn't commute for all other elements, and I still haven't shown that $a^3$ commutes with everything. For example, I have been trying to show that
$$a^3b=ba^3$$
and haven't gotten very far yet. If I had to answer a question like this on the exam, I feel it would be difficult; is there any kind of trick or hint, other than brute force with the relations, for showing that $a^3$ commutes with everything?</p>
<p>For the solution: once I know the center of $D_6$ is what I think it is, then since there is only one group of order $2$ up to isomorphism, it must be isomorphic to $\mathbb{Z}_2$.</p>
<p>Ideally a way that doesn't appeal to $D_6$ as symmetries of the hexagon if that seems possible. </p>
| Kevin Maguire | 336,007 | <p>It does not give proofs, but given the tone of the original question a good graphical tool for the OP would be Group Explorer. It does a lot of the donkey work, and can show you various visualisations, including the multiplication tables in helpful ways.</p>
<p><a href="http://groupexplorer.sourceforge.net/" rel="nofollow noreferrer">http://groupexplorer.sourceforge.net/</a></p>
<p>For $D_6$, that $Z_2$ is the center is (to me at least) kinda obvious by just looking at the pictures of the multiplication table sorted by its various subgroups. E.g. below is $D_6$ with the required subgroup shown in the top left - note the first 2 rows and 2 columns match exactly, whereas other rows/columns don't.</p>
<p><a href="https://i.stack.imgur.com/tgPHw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgPHw.jpg" alt="Center of D6"></a></p>
|
1,373,103 | <p>I was wondering if $|f(x)g(x)| = |f(x)| |(g(x)|$ is true all the time as in the case of real numbers.</p>
<p>I was not convinced enough that that was true.</p>
<p>But I can't think of any counterexample.</p>
<p>Thank you.</p>
| Bernard | 202,857 | <p>No, you have to solve $\;\lvert T_5(x)-\cos x\rvert\le 0.003406$.</p>
<p><em>Hint:</em> $\cos x$ is defined by an alternating series, so you have information on the error when you truncate the series at a given order.</p>
|