qid | question | author | author_id | answer |
|---|---|---|---|---|
3,449,826 | <p><span class="math-container">$C$</span> is any closed curve encompassing the whole branch cut.
The approach to this problem would involve using the residue theorem:</p>
<p>1) We first want to find the residue at infinity, so we rewrite the integrand into a form where we are able to perform a series expansion.</p>
<p>2) Obtain the coefficients of <span class="math-container">$\frac{1}{z}$</span>.</p>
<p>3) Plug the residues back into the formula: <span class="math-container">$\oint\limits_{C}\mathrm{d}z \, f(z)=2\pi i \, \mathrm{Res}(f,\infty)$</span></p>
<p>So for the integral: <span class="math-container">$\oint\limits_{C}\mathrm{d}z \,\sqrt{\frac{z-a}{z-b}}$</span> </p>
<p>I would like some help in obtaining the form which allows for an expansion. If this approach is bad for this integral, then please inform me of a better solution.</p>
<p>A similar problem has been posted before <a href="https://math.stackexchange.com/questions/2288715/integral-int-z-r-sqrtz-az-bdz-a-neq-b-rmaxa-b-z-in-c">here</a>, for which <span class="math-container">$\oint\limits_{C}\mathrm{d}z \,\sqrt{(z-a)(z-b)} = \dfrac{\pi i}{4}(a-b)^2$</span>. I am just referencing this because the problem is so similar as with our approach utilizing residues at infinity.</p>
| Pedro Juan Soto | 601,282 | <p>We have that <span class="math-container">$$\sqrt{\frac{z-a}{z-b}} = 1 + \frac{b-a}{2}\frac{1}{z} + O\bigg(\frac{1}{z^2}\bigg)$$</span> therefore using Cauchy's integral formula we have that <span class="math-container">$$\oint \sqrt{\frac{z-a}{z-b}}dz = 2 \pi i \frac{b-a}{2} = (b-a)\pi i .$$</span></p>
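<p>A quick numerical sanity check of this value (my own sketch, assuming the contour is a large counterclockwise circle; the values of <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$R$</span> are arbitrary choices):</p>

```python
import cmath
import math

# Midpoint-rule evaluation of the contour integral on a large circle
# |z| = R.  For |z| large, (z-a)/(z-b) stays close to 1, so cmath's
# principal square root is continuous along the whole contour and no
# branch tracking is needed.
a, b, R, N = 1.0, 3.0, 50.0, 20000
total = 0j
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    z = R * cmath.exp(1j * t)
    total += cmath.sqrt((z - a) / (z - b)) * 1j * z * (2 * math.pi / N)
expected = (b - a) * math.pi * 1j      # the answer's value, here 2*pi*i
assert abs(total - expected) < 1e-6
```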
|
1,210,018 | <p>$$ \begin{bmatrix}
1 & 1 \\
1 & 1 \\
\end{bmatrix}
\begin{Bmatrix}
v_1 \\
v_2 \\
\end{Bmatrix}=
\begin{Bmatrix}
0 \\
0 \\
\end{Bmatrix}$$</p>
<p>How can I solve this?</p>
<p>I found $$v_1+v_2=0$$ $$v_1+v_2=0$$.</p>
<p>So I can't solve it for $v_1$ and $v_2$.</p>
| Jordan Glen | 225,803 | <p>You essentially have $1$ equation in two unknowns. Let $v_2= t$, where $t\in \mathbb R$. Then $v_1 = -v_2 = -t$.</p>
<p>So there are infinitely many solutions, all of which can be represented by $$V = \begin{pmatrix} -t\\t\end{pmatrix} = t\begin{pmatrix} -1 \\ 1\end{pmatrix}$$ again, where $t$ can take any value in the reals (assuming that's the field on which $V$ is defined.)</p>
<p>NOTE: You could just as easily set $t = v_1 \implies v_2 = -t = -v_1$.</p>
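<p>A quick numeric illustration of the one-parameter solution family (a sketch using NumPy):</p>

```python
import numpy as np

# Every vector t*(-1, 1) solves the system, so the solution set is a
# whole line through the origin rather than a single point.
A = np.array([[1.0, 1.0], [1.0, 1.0]])
for t in (-2.0, 0.5, 3.0):          # any real t gives a solution
    v = t * np.array([-1.0, 1.0])
    assert np.allclose(A @ v, 0.0)
```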
|
3,047,670 | <blockquote>
<p>Prove or disprove that <span class="math-container">$$\lim_{n\to \infty} \left(\frac{x_{n+1}-l}{x_n-l}\right)=\lim_{n\to\infty}\left(\frac{x_{n+1}}{x_n}\right)$$</span> where <span class="math-container">$l=\lim_{n\to \infty} x_n$</span></p>
</blockquote>
<p>I think that the above result is true, but I am not really sure how to prove it. If anyone has a counterexample I am looking forward to it.<br>
EDIT1 : <span class="math-container">$x_n$</span> is any real sequence which is not constant.<br>
EDIT2: What if we add the additional constraint that <span class="math-container">$l \in \mathbb{R}$</span>?</p>
| Love Invariants | 551,019 | <p>Take <span class="math-container">$x_n=n$</span>.<br>
Then <span class="math-container">$l=\infty$</span>, so the LHS is not even defined, while the RHS is <span class="math-container">$\lim_{n\to\infty}\frac{n+1}{n}=1$</span>.<br>
So the two sides need not be equal.</p>
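<p>The EDIT2 case <span class="math-container">$l \in \mathbb{R}$</span> also fails in general; here is a machine-checked counterexample (my own, not from the post above): with <span class="math-container">$x_n = 1 + 2^{-n}$</span> we get <span class="math-container">$l = 1$</span>, the LHS equals <span class="math-container">$\frac12$</span>, and the RHS equals <span class="math-container">$1$</span>.</p>

```python
# Counterexample for l in R: take x_n = 1 + 2^(-n), so l = 1.
# Then (x_{n+1} - l)/(x_n - l) = 1/2 for every n, while x_{n+1}/x_n -> 1,
# so the two limits differ.
xs = [1 + 2.0 ** (-n) for n in range(1, 60)]
lhs = [(xs[i + 1] - 1) / (xs[i] - 1) for i in range(40)]
rhs = [xs[i + 1] / xs[i] for i in range(40)]
assert all(abs(v - 0.5) < 1e-9 for v in lhs)
assert abs(rhs[-1] - 1.0) < 1e-9
```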
|
3,587,891 | <blockquote>
<p>Suppose <span class="math-container">$f(x, y, z): \mathbb{R}^{3} \rightarrow \mathbb{R}$</span> is a <span class="math-container">$C^{2}$</span> harmonic function, that is, it satisfies <span class="math-container">$f_{x x}+f_{y y}+f_{z z}=0 .$</span> Let <span class="math-container">$E \subset \mathbb{R}^{3}$</span> be a region to which Gauss's Theorem can be applied. I want to show that
<span class="math-container">$$
\iint_{\partial E} f \nabla f \cdot \vec{n} d \sigma=\iiint_{E}|\nabla f|^{2}\mathrm{d}(x,y,z)
$$</span>
where, as usual, <span class="math-container">$\nabla f=\left(f_{x}, f_{y}, f_{z}\right)$</span> is the gradient.</p>
</blockquote>
<p>Can you help me show this?</p>
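<p>One standard route: since <span class="math-container">$\operatorname{div}(f\nabla f)=|\nabla f|^2+f\,\Delta f$</span>, the identity is Gauss's theorem applied to the field <span class="math-container">$f\nabla f$</span> once <span class="math-container">$\Delta f=0$</span>. A quick symbolic check of that product rule (a sketch using SymPy):</p>

```python
import sympy as sp

# Pointwise product rule behind the identity:
#   div(f grad f) = |grad f|^2 + f * (laplacian f),
# so Gauss's theorem applied to f*grad(f) gives the surface/volume
# identity when f is harmonic.
x, y, z = sp.symbols("x y z")
f = sp.Function("f")(x, y, z)
grad = sp.Matrix([f.diff(x), f.diff(y), f.diff(z)])
field = f * grad                                  # the vector field f*grad(f)
div = sum(field[i].diff(v) for i, v in enumerate((x, y, z)))
lap = f.diff(x, 2) + f.diff(y, 2) + f.diff(z, 2)
assert sp.simplify(div - grad.dot(grad) - f * lap) == 0
```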
<h1>StayAtHome</h1>
| Juan L. | 646,880 | <p>Let <span class="math-container">$(s_p)_{p \in U} \in \prod_{p \in U} \mathscr F_p$</span> be a compatible germ. By definition, there exists a cover <span class="math-container">$\{U_i\}_{i \in I}$</span> for <span class="math-container">$U$</span>, and elements <span class="math-container">$f_i \in \mathscr F(U_i)$</span> such that for all <span class="math-container">$q \in U_i$</span> we have <span class="math-container">$s_q = [f_i, U_i] \in \mathscr F_q$</span>.</p>
<p>If <span class="math-container">$q \in U_i \cap U_j$</span>, then <span class="math-container">$[f_i, U_i]=s_q=[f_j, U_j]$</span> in <span class="math-container">$\mathscr F_q$</span>, hence <span class="math-container">$$[f_i|_{U_i \cap U_j}, U_i \cap U_j] = [f_j|_{U_i \cap U_j}, U_i \cap U_j] \in \mathscr F_q.$$</span></p>
<p>Since <span class="math-container">$q$</span> was arbitrary, this holds on all of <span class="math-container">$U_i \cap U_j$</span>. Since <span class="math-container">$\mathscr F$</span> is a sheaf, the map <span class="math-container">$\mathscr F(U_i \cap U_j) \to \prod_{p \in U_i \cap U_j} \mathscr F_p$</span> is injective and thus we have <span class="math-container">$f_i|_{U_i \cap U_j}=f_j|_{U_i \cap U_j}$</span>.</p>
<p>Since <span class="math-container">$\mathscr F$</span> is a sheaf, there exists a unique <span class="math-container">$f \in \mathscr F(U)$</span> such that <span class="math-container">$f|_{U_i} = f_i$</span>. Therefore, the map <span class="math-container">$\mathscr F(U) \to \prod_{p \in U} \mathscr F_p$</span> takes <span class="math-container">$$f \mapsto ([f, U])_{p \in U} = ([f_i, U_i])_{p \in U} = (s_p)_{p \in U}.$$</span></p>
|
1,159 | <p>A lot of times I see theorems stated for local rings, but usually they are also true for "graded local rings", i.e., graded rings with a unique homogeneous maximal ideal (like the polynomial ring). For example, the Hilbert syzygy theorem, the Auslander-Buchsbaum formula, statements related to local cohomology, etc.</p>
<p>But it's not entirely clear to me how tight this analogy is. I certainly don't expect all statements about local rings to extend to graded local rings, so I'd like to know about some "pitfalls" in case I ever decide to make an "oh yes, this obviously extends" fallacy. What are some examples of statements which are true for local rings whose graded analogues are not necessarily true? Or another related question: what kind of intuition should I have when I want to conclude that statements have graded versions?</p>
<p>There is a notion of "generalized local ring" due to Goto and Watanabe which includes graded local rings and local rings: a positively graded ring that is finitely generated as an algebra over its zeroth degree part, and its zeroth degree part is a local ring, so one possibility is just to see if this weaker definition is enough to prove the statement. Of course the trouble comes when the proofs cite other sources, and become unmanageable to trace back to first principles.</p>
| Greg Stevenson | 310 | <p>One small thing I know of which changes is that if one has a Z-graded-commutative noetherian ring (where Z is the integers) Matlis' classification of indecomposable injective modules goes through but with one small hiccup.</p>
<p>Every indecomposable injective is isomorphic to E(R/p)[n] for some unique homogeneous prime ideal p but the integer shift n is not necessarily unique although under the hypotheses I think you are interested in one probably gets uniqueness. I can't think of an example where this really causes much of a problem though.</p>
<p>Having thought about this some more I think that non-negative integer graded-local noetherian rings, in particular those generated in degree 1 such that the maximal homogeneous ideal is also maximal if one forgets the grading, are incredibly well behaved and the analogy with local rings is very good. In fact, there is even a version of Nakayama's lemma for such rings (maybe one needs a little more) which is stronger than the usual one in the sense that one can drop the finiteness condition on the module. There are also no problems with graded versions of prime avoidance etc... in general.</p>
<p>I'd recommend section 1.5 of Cohen-Macaulay Rings by Bruns and Herzog where they prove that a bunch of standard facts still go through and one can see what does and doesn't change in the proofs.</p>
<p>As I mentioned in the comment I think one has to be most careful when considering rings graded by things like monoids which aren't as nice as the non-negative integers. In particular, if the grading is not positive (i.e. some elements of the monoid are invertible) and/or if the monoid is not cancellative at the identity (i.e. a+b = a does not imply b is the identity). I think in the non-cancellative case one can construct a counterexample to Nakayama's lemma but I am not 100% sure on this.</p>
|
2,937,671 | <p>Definition <span class="math-container">$\{A_i\}_{i\in I}$</span> be an indexed family of classes; Let
<span class="math-container">$$A=\bigcup_{i\in I} A_i.$$</span></p>
<p>The <em>product</em> of the classes <span class="math-container">$A_i$</span> is defined to be the class
<span class="math-container">$$\prod_{i\in I}A_i=\{f : f:I\rightarrow A \text{ is a function, and } f(i)\in A_i\ \forall i\in I \}$$</span></p>
<p>And I want to prove </p>
<p>Let <span class="math-container">$\{A_i\}_{i\in I}$</span> and <span class="math-container">$\{B_j\}_{j\in J}$</span> be families of classes. Prove the following:
<span class="math-container">$$(\prod_{i\in I}A_i)\cap(\prod_{j\in J}B_j)=\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$$</span></p>
<p>provided that <span class="math-container">$$\bigcup_{i\in I} A_i=\bigcup_{j\in J} B_j=X$$</span>
is satisfied.
But I don't know the meaning of <span class="math-container">$\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$</span>.</p>
<p>If someone knows the meaning, can you explain it using the example
<span class="math-container">$I=\{a,b\},J=\{x,y,z\},A_a=\{1,2\},A_b=\{2,3,4\},B_x=\{1,4\},B_y=\{1,3\},B_z=\{1,2\}.$</span></p>
<p>And, if you can, please explain</p>
<p>why <span class="math-container">$$(\prod_{i\in I}A_i)\cap(\prod_{j\in J}B_j)=\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$$</span> holds.</p>
| Paul Frost | 349,785 | <p><span class="math-container">$I$</span> and <span class="math-container">$J$</span> are sets. Define a family <span class="math-container">$\{ X_k \}_{k \in \{ 1, 2 \}}$</span> by <span class="math-container">$X_1 = I, X_2 = J$</span> and form the product <span class="math-container">$P = \prod_{k \in \{ 1, 2 \}} X_k$</span>. We write <span class="math-container">$P = I \times J$</span>. Then <span class="math-container">$I \times J$</span> is again a set and it is easy to see that it is nothing other than the "naive" construct <span class="math-container">$\{ (i,j) \mid i \in I, j \in J \}$</span> (each function <span class="math-container">$f : \{ 1, 2 \} \to X_1 \cup X_2 = I \cup J$</span> such that <span class="math-container">$f(1) \in X_1 = I, f(2) \in X_2 = J$</span> can be identified with the pair <span class="math-container">$(f(1),f(2))$</span>).</p>
<p>For any <span class="math-container">$(i,j) \in I \times J$</span> you obtain the intersection class <span class="math-container">$C_{(i,j)} = A_i \cap B_j$</span> and you can form the product</p>
<p><span class="math-container">$$\prod_{(i,j) \in I \times J} C_{(i,j)} = \prod_{(i,j) \in I \times J} (A_i \cap B_j) .$$</span></p>
<p>This shows that writing</p>
<p><span class="math-container">$$(\prod_{i\in I}A_i)\cap(\prod_{j\in J}B_j)=\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$$</span></p>
<p>is incorrect because <span class="math-container">$\prod_{i\in I}A_i$</span> consists of functions defined on <span class="math-container">$I$</span>, <span class="math-container">$\prod_{j\in J}B_j$</span> of functions defined on <span class="math-container">$J$</span> and <span class="math-container">$\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$</span> of functions defined on <span class="math-container">$I \times J$</span>. The intersection on the left side is empty if <span class="math-container">$I \ne J$</span>. But even if <span class="math-container">$I = J$</span>, we can never have an equation between the left and the right side. However, the range of all the functions is <span class="math-container">$X$</span>. For the first two products this is the assumption in your question, and for the third product you have to verify that <span class="math-container">$\bigcup_{(i,j)\in{I\times J}} (A_i \cap B_j) = X$</span>. This is an easy exercise.</p>
<p>Let us consider the case that <span class="math-container">$I = J$</span> in which <span class="math-container">$\prod_{i\in I}A_i, \prod_{i\in I}B_i$</span> are both subsets of the set of all functions <span class="math-container">$I \to X$</span>. We shall show that in general there does not even exist a <em>bijection</em></p>
<p><span class="math-container">$$\beta : (\prod_{i\in I}A_i)\cap(\prod_{i\in I}B_i) \to \prod_{(i,j)\in{I\times I}}(A_i\cap B_j) .$$</span></p>
<p>Let <span class="math-container">$X$</span> be a set and <span class="math-container">$A_i = B_i = X$</span> for all <span class="math-container">$i \in I$</span>. Then <span class="math-container">$\prod_{i\in I}A_i = \prod_{i\in I}B_i = \prod_{i\in I}X $</span>, hence <span class="math-container">$(\prod_{i\in I}A_i)\cap(\prod_{i\in I}B_i) = \prod_{i\in I}X $</span>. But <span class="math-container">$\prod_{(i,j)\in{I\times I}}(A_i\cap B_j) = \prod_{(i,j)\in{I\times I}}X$</span>. It is clear that if <span class="math-container">$I, X$</span> are finite sets with <span class="math-container">$n, m$</span> elements, respectively, such that <span class="math-container">$n, m > 1$</span>, then <span class="math-container">$\prod_{i\in I}X $</span> has <span class="math-container">$m^n$</span> elements whereas <span class="math-container">$\prod_{(i,j)\in{I\times I}}X$</span> has <span class="math-container">$m^{n^2}$</span> elements.</p>
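<p>The cardinality count above is easy to check by brute force for small sets (an illustrative sketch; the concrete values of <span class="math-container">$n$</span> and <span class="math-container">$m$</span> are my own):</p>

```python
from itertools import product

# With |I| = n = 2 and |X| = m = 3, the set of functions I -> X has
# m**n = 9 elements, while the set of functions IxI -> X has
# m**(n**2) = 81, so no bijection between them is possible.
I, X = [0, 1], ["a", "b", "c"]
n, m = len(I), len(X)
left = list(product(X, repeat=n))        # tuples encoding functions I -> X
right = list(product(X, repeat=n * n))   # tuples encoding functions IxI -> X
assert len(left) == m ** n == 9
assert len(right) == m ** (n ** 2) == 81
```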
<p>We have seen that the statement in your question is wrong. So what can be said positively? The only thing which makes sense is this:</p>
<p>If <span class="math-container">$A'_i \subset A_i$</span>, then we have a natural injection <span class="math-container">$\iota : \prod_{i\in I}A'_i \to \prod_{i\in I}A_i$</span> so that we can write by a little abuse of notation <span class="math-container">$\prod_{i\in I}A'_i \subset \prod_{i\in I}A_i$</span>. In fact, each element of <span class="math-container">$\prod_{i\in I}A'_i$</span> is a function <span class="math-container">$f : I \to \bigcup_{i \in I}A'_i$</span>, the latter being a subset of <span class="math-container">$\bigcup_{i \in I}A_i$</span> so that we can identify <span class="math-container">$f$</span> with <span class="math-container">$\iota(f) : I \stackrel{f}{\rightarrow} \bigcup_{i \in I}A'_i \hookrightarrow \bigcup_{i \in I}A_i$</span>.</p>
<p>With <span class="math-container">$C_i = A_i \cup B_i$</span> we therefore obtain</p>
<p><span class="math-container">$$\prod_{i\in I}A_i, \prod_{i\in I}B_i, \prod_{i\in I}(A_i \cap B_i) \subset \prod_{i\in I}C_i$$</span></p>
<p>and for these subsets we get</p>
<p><span class="math-container">$$(\prod_{i\in I}A_i) \cap (\prod_{i\in I}B_i) = \prod_{i\in I}(A_i \cap B_i) .$$</span></p>
|
704,073 | <p>I encountered something interesting when trying to differentiate $F(x) = c$.</p>
<p>Consider: $\lim_{x→0}\frac0x$. </p>
<p>I understand that for any $x$, no matter how incredibly small, we will have $0$ as the quotient. But don't things change when one takes matters to infinitesimals?
I.e., why is the function $f(x) = \frac0x$ not undefined at $x=0$?</p>
<p>I would appreciate a strong logical argument for why the limit stays at $0$. </p>
| AlexR | 86,940 | <p>Note that writing $f(x) = \frac0x$ results in $f(0) = \frac00$, which <em>is</em> undefined. However, the singularity of $f$ is nice in the way that $f$ can be continuously extended by defining $f(0) := 0$ (note the colon for <em>defining</em> the value). A limit is exactly this concept: what is the value of $f(x)$ when $x$ comes arbitrarily close to $0$, but <strong>not</strong> equal to $0$? The statement
$$\lim_{x\to 0} \frac0x = 0$$
means exactly that, and not, as you might think, $\frac00 = 0$. The misconception is to assume
$$\lim_{x\to x_0} \frac{f(x)}{g(x)} = \frac{\lim_{x\to x_0} f(x)}{\lim_{x\to x_0} g(x)}$$
for general $f,g$ (even if both are continuous!). This only works when $\lim_{x\to x_0} g(x) \neq 0$, which is not the case in your example.</p>
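<p>A small executable illustration of the distinction (a sketch; plain Python floats happen to raise an error at the undefined point rather than returning a value):</p>

```python
# f(x) = 0/x is 0 for every x != 0, no matter how small; f(0) itself is
# genuinely undefined, and the limit records the first fact, not a value
# at 0.
for x in (1.0, 1e-10, -1e-300):
    assert 0.0 / x == 0.0
undefined = False
try:
    0.0 / 0.0
except ZeroDivisionError:
    undefined = True
assert undefined
```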
|
3,046,083 | <p>Is it true that the intersection of the closures of sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> is equal to the closure of their intersection?
<span class="math-container">$ cl(A)\cap{cl(B)}=cl(A\cap{B})$</span> ?</p>
| Henno Brandsma | 4,280 | <p>No, the rationals and the irrationals (in the reals) are disjoint so <span class="math-container">$\operatorname{cl}(A \cap B) = \operatorname{cl}{\emptyset}=\emptyset$</span> while <span class="math-container">$\operatorname{cl}(A) = \operatorname{cl}(B) = \mathbb{R}$</span></p>
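<p>The same failure can be machine-checked with a different concrete example, two open intervals touching at a point (my own sketch using SymPy; the rationals/irrationals example itself is not directly computable this way):</p>

```python
import sympy as sp

# A = (0,1) and B = (1,2) are disjoint open intervals, so
# cl(A n B) = cl(empty) = empty, yet cl(A) n cl(B) = {1}.
A, B = sp.Interval.open(0, 1), sp.Interval.open(1, 2)
assert A.intersect(B) == sp.S.EmptySet
C = A.closure.intersect(B.closure)
assert 1 in C
assert sp.Rational(1, 2) not in C
```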
|
865,598 | <p>How can I calculate this value?</p>
<p>$$\cot\left(\sin^{-1}\left(-\frac12\right)\right)$$</p>
| Rene Schipperus | 149,912 | <p>$$\cot x =\frac{\cos x}{\sin x}$$
Here $x=\sin^{-1}\left(-\frac12\right)$ lies in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, the range of the principal inverse sine, where $\cos x\geq 0$, so $\cos x=+\sqrt{1-\sin^2x}=\sqrt{\frac34}$.
Hence $$\cot x=\frac{\sqrt{\frac{3}{4}}}{-\frac{1}{2}}=-\sqrt{3},$$
and the sign is determined by the principal range.</p>
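<p>If one uses the principal value of <span class="math-container">$\sin^{-1}$</span>, whose range is <span class="math-container">$[-\frac{\pi}{2},\frac{\pi}{2}]$</span>, the sign comes out determined; a quick numeric check:</p>

```python
import math

# arcsin(-1/2) = -pi/6 lies in [-pi/2, pi/2], where cosine is
# non-negative, so the sign of the cotangent is forced: the value is -sqrt(3).
t = math.asin(-0.5)
assert abs(t + math.pi / 6) < 1e-12
cot = math.cos(t) / math.sin(t)
assert abs(cot + math.sqrt(3)) < 1e-12
```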
|
3,773,856 | <p>I'm having trouble with part of a question on Cardano's method for solving cubic polynomial equations. This is a multi-part question, and I have been able to answer most of it. But I am having trouble with the last part. I think I'll just post here the part of the question that I'm having trouble with.</p>
<p>We have the depressed cubic equation :
<span class="math-container">\begin{equation}
f(t) = t^{3} + pt + q = 0
\end{equation}</span>
We also have what I believe is the negative of the discriminant :
<span class="math-container">\begin{equation}
D = 27 q^{2} + 4p^{3}
\end{equation}</span>
We assume <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are both real and <span class="math-container">$D < 0$</span>. We also have the following polynomial in two variables (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) that results from a variable transformation <span class="math-container">$t = u+v$</span> :
<span class="math-container">\begin{equation}
u^{3} + v^{3} + (3uv + p)(u+v) + q = 0
\end{equation}</span>
You also have the quadratic polynomial equation :
<span class="math-container">\begin{equation}
x^{2} + qx - \frac{p^{3}}{27} = 0
\end{equation}</span>
The solutions to the 2-variable polynomial equation satisfy the following constraints :
<span class="math-container">\begin{equation}
u^{3} + v^{3} = -q
\end{equation}</span>
<span class="math-container">\begin{equation}
uv = -\frac{p}{3}
\end{equation}</span>
The first section of this part of the larger question asks to prove that the solutions of the quadratic equation are non-real complex conjugates. Here the solutions to the quadratic are equal to <span class="math-container">$u^{3}$</span> and <span class="math-container">$v^{3}$</span> (this relationship between the quadratic polynomial and the polynomial in two variables was proven in an earlier part of the question). I was able to do this part. The second part of this sub-question is what I'm having trouble with.</p>
<p>The question says, let :
<span class="math-container">\begin{equation}
u = r\cos(\theta) + ir\sin(\theta)
\end{equation}</span>
<span class="math-container">\begin{equation}
v = r\cos(\theta) - ir\sin(\theta)
\end{equation}</span>
The question then asks the reader to prove that the depressed cubic equation has three real roots :
<span class="math-container">\begin{equation}
2r\cos(\theta) \text{ , } 2r\cos\left( \theta + \frac{2\pi}{3} \right) \text{ , } 2r\cos\left( \theta + \frac{4\pi}{3} \right)
\end{equation}</span>
In an earlier part of the question they had the reader prove that given :
<span class="math-container">\begin{equation}
\omega = \frac{-1 + i\sqrt{3}}{2}
\end{equation}</span>
s.t. :
<span class="math-container">\begin{equation}
\omega^{2} = \frac{-1 - i\sqrt{3}}{2}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
\omega^{3} = 1
\end{equation}</span>
that if <span class="math-container">$(u,v)$</span> is a root of the polynomial in two variables then so are :
<span class="math-container">$(u\omega,v\omega^{2})$</span> and <span class="math-container">$(u\omega^{2},v\omega)$</span>. I think that the part of the question I'm having trouble with is similar. I suspect that :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{2\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{1}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{4\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{2}
\end{equation}</span>
I have derived that :
<span class="math-container">\begin{equation}
\omega = \cos(\phi) + i\sin(\phi)
\end{equation}</span>
where <span class="math-container">$\phi = \frac{2\pi}{3}$</span>. Also :
<span class="math-container">\begin{equation}
\omega^{2} = \cos(2\phi) + i \sin(2\phi)
\end{equation}</span>
So that the goal of the question may be to prove equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>. I have tried to do this but haven't been able to.</p>
<p>Am I approaching this question in the correct way ? If I am approaching it the right way can someone show me how to use trigonometric identities to prove equations #1 and #2 ?</p>
| Community | -1 | <p>You can "brute force" the solution of the quadratic,</p>
<p><span class="math-container">$$x^2+qx-\frac{p^3}{27}=0,$$</span></p>
<p>giving two roots</p>
<p><span class="math-container">$$u^3,v^3=\frac{-q\pm\sqrt{q^2+\dfrac{4p^3}{27}}}2$$</span> which are complex conjugates, because <span class="math-container">$D=27q^2+4p^3<0$</span>. In polar form, their common modulus is</p>
<p><span class="math-container">$$\rho=|u^3|=\sqrt{-\frac{p^3}{27}}$$</span> and their arguments are <span class="math-container">$\pm\theta$</span>, where <span class="math-container">$$\cos\theta=\frac{-q}{2\rho},\qquad\sin\theta=\frac{1}{2\rho}\sqrt{-q^2-\frac{4p^3}{27}}.$$</span></p>
<p>Now after taking the cube roots,</p>
<p><span class="math-container">$$u+v=\sqrt[3]\rho\left(\cos\frac{\theta+2k\pi}3+i\sin\frac{\theta+2k\pi}3+\cos\frac{\theta+2k\pi}3-i\sin\frac{\theta+2k\pi}3\right)=2\sqrt[3]\rho\cos\frac{\theta+2k\pi}3$$</span> for <span class="math-container">$k=0,1,2$</span>.</p>
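<p>A quick numeric check of the three-real-roots recipe, taking the modulus as <span class="math-container">$\rho=|u^3|$</span> (the example values <span class="math-container">$p=-3$</span>, <span class="math-container">$q=1$</span> are my own):</p>

```python
import cmath
import math

# Example depressed cubic with D < 0: t^3 + p*t + q with p = -3, q = 1
# (so D = 27q^2 + 4p^3 = -81).  The three values
# 2 * rho**(1/3) * cos((theta + 2k*pi)/3) should all be real roots.
p, q = -3.0, 1.0
disc = q * q + 4 * p ** 3 / 27          # negative exactly when D < 0
u3 = (-q + cmath.sqrt(complex(disc))) / 2
rho, theta = abs(u3), cmath.phase(u3)
for k in range(3):
    t = 2 * rho ** (1 / 3) * math.cos((theta + 2 * math.pi * k) / 3)
    assert abs(t ** 3 + p * t + q) < 1e-9
```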
|
3,051,176 | <blockquote>
<p>True or False: If <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are odd positive integers, then <span class="math-container">$n^2+m^2$</span> is not a perfect square.</p>
</blockquote>
<p>Anyway it is already appear <a href="https://math.stackexchange.com/questions/683070/if-a-and-b-are-odd-then-a2b2-is-not-a-perfect-square">here</a>,but I want check my solution!</p>
<p>The statement is true. To see why, suppose <span class="math-container">$$n^2+m^2=k^2$$</span>
Then <span class="math-container">$n^2=k^2-m^2=(k-m)(k+m)$</span>. Here divisors of <span class="math-container">$n^2$</span> are <span class="math-container">$1,n,n^2$</span>, so either</p>
<ul>
<li><span class="math-container">$k-m=1$</span> and <span class="math-container">$k+m=n^2$</span></li>
<li><span class="math-container">$k-m=n$</span> and <span class="math-container">$k+m=n$</span></li>
<li><span class="math-container">$k-m=n^2$</span> and <span class="math-container">$k+m=1$</span></li>
</ul>
<p>Suppose the first bullet is true. Then <span class="math-container">$m=\frac{(n-1)(n+1)}{2}$</span>, an even number, since <span class="math-container">$n-1$</span> and <span class="math-container">$n+1$</span> are even. This contradicts the fact that <span class="math-container">$m$</span> is odd. Similarly we get contradictions for the latter two. Hence the statement is true.</p>
<p>Is this correct? If not,what I'm doing wrong ?</p>
<p>Edit: I realize my mistake. If <span class="math-container">$n$</span> is prime, then my count is correct. Kindly add other information about this to your answer if you wish.</p>
| José Carlos Santos | 446,262 | <p>The line that you mentioned is the line defined by the two points at which the tangent lines touch the circle. So, if you want to actually get those two points, compute the intersection between the line and the circle. Then, the tangent lines will be the lines passing through <span class="math-container">$(x_1,y_1)$</span> and each of the touching points.</p>
|
3,051,176 | <blockquote>
<p>True or False: If <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are odd positive integers, then <span class="math-container">$n^2+m^2$</span> is not a perfect square.</p>
</blockquote>
<p>Anyway it is already appear <a href="https://math.stackexchange.com/questions/683070/if-a-and-b-are-odd-then-a2b2-is-not-a-perfect-square">here</a>,but I want check my solution!</p>
<p>The statement is true. To see why, suppose <span class="math-container">$$n^2+m^2=k^2$$</span>
Then <span class="math-container">$n^2=k^2-m^2=(k-m)(k+m)$</span>. Here divisors of <span class="math-container">$n^2$</span> are <span class="math-container">$1,n,n^2$</span>, so either</p>
<ul>
<li><span class="math-container">$k-m=1$</span> and <span class="math-container">$k+m=n^2$</span></li>
<li><span class="math-container">$k-m=n$</span> and <span class="math-container">$k+m=n$</span></li>
<li><span class="math-container">$k-m=n^2$</span> and <span class="math-container">$k+m=1$</span></li>
</ul>
<p>Suppose the first bullet is true. Then <span class="math-container">$m=\frac{(n-1)(n+1)}{2}$</span>, an even number, since <span class="math-container">$n-1$</span> and <span class="math-container">$n+1$</span> are even. This contradicts the fact that <span class="math-container">$m$</span> is odd. Similarly we get contradictions for the latter two. Hence the statement is true.</p>
<p>Is this correct? If not,what I'm doing wrong ?</p>
<p>Edit: I realize my mistake. If <span class="math-container">$n$</span> is prime, then my count is correct. Kindly add other information about this to your answer if you wish.</p>
| Mick | 42,351 | <p>I think what you have found in the text should be:-</p>
<p>If <span class="math-container">$P(x_1, y_1)$</span> is a point <strong>on</strong> the circle <span class="math-container">$C: x^2 + y^2 +2gx + 2fy + c = 0$</span>, then the equation of the tangent <strong>touching circle C at P</strong> is<br>
<span class="math-container">$$xx_1+yy_1+g(x+x_1)+f(y+y_1)+c=0$$</span></p>
<hr>
<p>If <span class="math-container">$P(x_0, y_0)$</span> is external to <span class="math-container">$C$</span>, the equation(s) of tangents <strong>from P to C</strong> will not be a nice-looking simplified form. However, they can still be derived through the following steps.</p>
<ol>
<li><p>By the midpoint formula, find M, the midpoint of OP, where O is the centre of circle C. </p></li>
<li><p>By distance formula, find <span class="math-container">$R = \dfrac {OP}{2}$</span>.</p></li>
<li><p>Setup the equation of the new circle (centered at M and radius = R)</p></li>
<li><p>Solve circle C and .circle M to get <span class="math-container">$H(x_1, y_1)$</span> and <span class="math-container">$K(x_2, y_2)$</span>.</p></li>
<li><p>Use two-point form to find the equation of <span class="math-container">$PH$</span> and <span class="math-container">$PK$</span>, which are the required tangents.</p></li>
</ol>
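<p>The five steps above can also be carried out in closed form; here is a sketch (the function name and the circle-intersection algebra are my own, assuming O is the centre of C and the circle is given by centre and radius):</p>

```python
import math

# Tangent points from an external point P to the circle centred at O
# with radius r, via intersecting C with the Thales circle on diameter
# OP (steps 1-4 of the construction above).
def tangent_points(ox, oy, r, px, py):
    dx, dy = px - ox, py - oy
    d = math.hypot(dx, dy)
    if d <= r:
        raise ValueError("P must lie outside the circle")
    a = r * r / d                    # distance from O to the chord HK along OP
    h = math.sqrt(r * r - a * a)     # half-length of the chord HK
    ux, uy = dx / d, dy / d          # unit vector from O towards P
    return ((ox + a * ux - h * uy, oy + a * uy + h * ux),
            (ox + a * ux + h * uy, oy + a * uy - h * ux))

# Unit circle, external point P = (2, 0): tangent points (1/2, +-sqrt(3)/2).
H, K = tangent_points(0, 0, 1, 2, 0)
for tx, ty in (H, K):
    # tangency means OT is perpendicular to TP at each tangent point T
    assert abs(tx * (2 - tx) + ty * (0 - ty)) < 1e-12
```

Step 5 (the lines PH and PK themselves) then follows from the two-point form.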
|
2,412,783 | <p>I'm very new to linear algebra, and I have a homework problem that hasn't been covered in the book or by the professor. It seems like I have a fundamental misunderstanding of what matrices represent, but I can't find a good article or answer.</p>
<blockquote>
<p>Do the three lines $x_1 - 4x_2 = 1$, $2x_1 - x_2 = -3$, and $-x_1 - 3x_2 = 4$ have a common point of intersection? Explain.</p>
</blockquote>
<p>I assumed that the solution set of the matrix would represent how many intersections there were. I solved the echelon form and got:
$$\begin{bmatrix}1 & -4 & & 1\\2& -1 & & -3\\-1 & -3 & & 4\end{bmatrix} \rightarrow \begin{bmatrix}1 & -4 & & 1\\0& 1 & & -\frac{5}{7}\\0 & 0 & & 0\end{bmatrix}$$</p>
<p>Since this has infinite solutions, I would have thought it meant there were infinite intersections, or rather two equivalent lines, but that obviously isn't true. Is there any relationship between the solution set of a matrix and its original equations/lines? What is the matrix actually representing?</p>
| Randall | 464,495 | <p>You can reduce it further. When you do, the first row is $(1,0,-13/7)$ which says that your simultaneous solution is $(-13/7, -5/7)$. So, there is one point of intersection.</p>
<p>Since you have only two variables but three equations, you would need an extra 0-row in order to get infinitely many solutions, and this would imply that two of your lines were the same to begin with. </p>
<p>You may have temporarily forgotten that your matrix was augmented? It's not really "square."</p>
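<p>For completeness, a quick check of the intersection point with NumPy (a sketch; it solves the first two lines and then tests the third):</p>

```python
import numpy as np

# Solve the first two lines for their intersection, then verify the
# third line passes through the same point, (-13/7, -5/7).
A = np.array([[1.0, -4.0], [2.0, -1.0]])
b = np.array([1.0, -3.0])
p = np.linalg.solve(A, b)
assert np.allclose(p, [-13 / 7, -5 / 7])
assert np.isclose(-p[0] - 3 * p[1], 4.0)   # third line: -x1 - 3x2 = 4
```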
|
2,412,783 | <p>I'm very new to linear algebra, and I have a homework problem that hasn't been covered in the book or by the professor. It seems like I have a fundamental misunderstanding of what matrices represent, but I can't find a good article or answer.</p>
<blockquote>
<p>Do the three lines $x_1 - 4x_2 = 1$, $2x_1 - x_2 = -3$, and $-x_1 - 3x_2 = 4$ have a common point of intersection? Explain.</p>
</blockquote>
<p>I assumed that the solution set of the matrix would represent how many intersections there were. I solved the echelon form and got:
$$\begin{bmatrix}1 & -4 & & 1\\2& -1 & & -3\\-1 & -3 & & 4\end{bmatrix} \rightarrow \begin{bmatrix}1 & -4 & & 1\\0& 1 & & -\frac{5}{7}\\0 & 0 & & 0\end{bmatrix}$$</p>
<p>Since this has infinite solutions, I would have thought it meant there were infinite intersections, or rather two equivalent lines, but that obviously isn't true. Is there any relationship between the solution set of a matrix and its original equations/lines? What is the matrix actually representing?</p>
| Atique M | 686,251 | <p>If your augmented matrix is of the form <span class="math-container">$[A | C]$</span> then the system has a unique solution if <span class="math-container">$\operatorname{rank}(A)=\operatorname{rank}([A|C])$</span> = number of unknowns.
Here, <span class="math-container">$\operatorname{rank}(A)=\operatorname{rank}([A|C])$</span> = number of unknowns <span class="math-container">$= 2$</span>.
So there is a unique solution, i.e. a single intersection point.</p>
|
3,795,234 | <p>(disclaimer: I am not well versed in mathematics so please excuse my poor notation / explanation)</p>
<p>Given a hexagon grid that defines its "neighbours" via offsets on the axes <span class="math-container">$q$</span> & <span class="math-container">$r$</span> like this:
<a href="https://i.stack.imgur.com/AWkKH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AWkKH.png" alt="Image credit to: https://www.redblobgames.com/grids/hexagons/#coordinates" /></a></p>
<p>(the hexagon containing just <span class="math-container">$q$</span> & <span class="math-container">$r$</span> is the <span class="math-container">$(0,0)$</span> node)</p>
<p>I am looking for a function <span class="math-container">$f$</span> that calculates a unique natural number (like an identifier) for a given <span class="math-container">$(q,r)$</span> offset and a given max neighbour distance <span class="math-container">$d$</span>. The generated natural numbers should be between 1 and the neighbour count with respect to the maximum distance <span class="math-container">$d$</span><sup>[2]</sup> while also being "gapless": for <span class="math-container">$d = 1$</span> the numbers 1 to 6 should be assigned to the neighbours, while <span class="math-container">$d = 2$</span> implies the numbers from 1 to 18 are assigned.</p>
<p>I know that the value pairs of the <span class="math-container">$(q,r)$</span> offsets are unique but struggle to find a way to map them onto the wanted value range. A classic square grid based algorithm like <span class="math-container">$row*columncount + column$</span> is a good starter but of course has gaps in a hexagon based grid.</p>
<p>[2]: <span class="math-container">$3(d^2+d)$</span>,d > 0</p>
| roookeee | 817,286 | <p>(disclaimer: I had a really hard time explaining how this works and will describe some stuff that just "comes into existence" without explaining how I got there because I really can't explain it)</p>
<p>Given the following image:</p>
<p><a href="https://i.stack.imgur.com/I0dJE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I0dJE.png" alt="enter image description here" /></a></p>
<p>The following observations can be made that are the basis of that image:</p>
<ul>
<li>for a fixed <span class="math-container">$q$</span> there is a diagonal</li>
<li>the amount of "smaller" cells than <span class="math-container">$(0,0)$</span> is the same as the amount of cells "greater" than <span class="math-container">$(0,0)$</span>. Thus <span class="math-container">$(0,0)$</span> can trivially be assigned <span class="math-container">$\frac{neighbourcount}{2}$</span></li>
<li>the diagonal for <span class="math-container">$q = 0$</span> has 7 cells which equals <span class="math-container">$2d + 1$</span> wherein <span class="math-container">$d$</span> is the maximum neighbour distance</li>
<li>for every increment of <span class="math-container">$q$</span> for <span class="math-container">$q > 0$</span> the field count per diagonal decreases by 1, making the total preceding field count for a given column's <span class="math-container">$r = 0$</span> cell: <span class="math-container">$\frac{neighbourcount}{2} + \sum_{n=0}^{2d+1}n - \sum_{n=0}^{2d+1-q}n$</span>. For the given image's diagonal for <span class="math-container">$q = 2$</span> that equals <span class="math-container">$18 + 28 - 15 = 31$</span></li>
<li>the value of the other cells in a given diagonal are simply the value of the <span class="math-container">$r = 0$</span> cell with the cells <span class="math-container">$r$</span> value added</li>
<li>this outlined procedure can be applied to the "left" side by doing the same steps with the absolute value of <span class="math-container">$q$</span> but subtracting it from <span class="math-container">$\frac{neighbourcount}{2}$</span> instead of adding it</li>
</ul>
<p>Thus the following function arises:</p>
<p><span class="math-container">$signum(v) := \begin{cases}0&,v = 0\\ 1&,v > 0\\-1&, v < 0\end{cases}$</span></p>
<p><span class="math-container">$neighbour\_count(distance) := 3\cdot(distance^2+distance)$</span></p>
<p><span class="math-container">$index(cell, distance) := \frac{neighbour\_count(distance)}{2} + signum(cell_q)\cdot (\sum_{n=0}^{2d+1}n - \sum_{n=0}^{2d+1-|cell_q|}n + cell_r)$</span></p>
<p>Some context for this question in general: I wanted to employ perfect hashing for my hex-based game. Employing the explained function / algorithm yielded a significant performance gain compared to the general purpose hashing variant.</p>
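<p>A direct transcription of the formula above into Python (the helper names <code>tri</code> and <code>sign</code> are mine, not from the original); it reproduces the worked example 18 + 28 - 15 = 31 for the cell (q, r) = (2, 0) at distance d = 3:</p>

```python
def neighbour_count(d):
    # number of cells within distance d, excluding the centre: 3(d^2 + d)
    return 3 * (d * d + d)

def tri(k):
    # triangular number: sum of 0..k
    return k * (k + 1) // 2

def sign(v):
    return (v > 0) - (v < 0)

def index(q, r, d):
    # literal transcription of the index formula from the answer
    half = neighbour_count(d) // 2
    return half + sign(q) * (tri(2 * d + 1) - tri(2 * d + 1 - abs(q)) + r)
```

<p>For example, <code>index(2, 0, 3)</code> gives 31 and the centre cell <code>index(0, 0, 3)</code> gives 18, matching the figure.</p>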
|
1,767,682 | <p>I was thinking about sequences, and my mind came to one defined like this:</p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, 1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next n terms of the sequence are 1, followed by -1, and so on. Which led me to perhaps a stronger example, </p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next $2^{n-1}$ terms of the sequence are 1, followed by -1, and so on.</p>
<p>By the definition of convergence or by Cauchy's criterion, the sequence does not converge, as any N one may choose to define will have an occurrence of -1 after it, which must occur within the next N terms (and certainly this bound could be decreased)</p>
<p>However, due to the decreasing frequency of -1 in the sequence, I would be tempted to say that there is some intuitive way in which this sequence converges to 1. Is there a different notion of convergence that captures the way in which this sequence behaves?</p>
| Eric Towers | 123,905 | <p>Most methods of <a href="https://en.wikipedia.org/wiki/Divergent_series" rel="nofollow">summing divergent series</a> can be adapted to making divergent sequences converge. For instance, Cesaro summation replaces the $n^\text{th}$ term of a series with the mean of the first $n$ terms. You could do the same: replace the $n^\text{th}$ term of your sequence with the mean of the first $n$ terms. Then you would be able to show convergence to $1$ (after this replacement) in the normal sense.</p>
<p><a href="https://en.wikipedia.org/wiki/Abel%27s_theorem" rel="nofollow">Abel summation</a> would be adapted to replace the $N^\text{th}$ term in your series with $\lim_{z \rightarrow 1^-} \sum_{n=0}^N a_n z^n$.</p>
<p>And so on for other methods.</p>
<p>Hardy's book, <em>Divergent Series</em> is a fun read with a lot of interesting stuff.</p>
|
2,810,008 | <p>Can I investigate this limit and if yes, how? $${i^∞}$$</p>
<p>I am at a loss of ideas and maybe it is undefined?</p>
| ncmathsadist | 4,154 | <p>You can't. The powers of $i$ form a periodic progression among four values. Convergence means you have some complex number for which the following is true. For any neighborhood of the number, the sequence eventually enters that neighborhood and never re-exits.</p>
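<p>A few lines of Python make the periodicity concrete (computed by repeated multiplication, which is exact for these values):</p>

```python
# successive powers of i cycle through i, -1, -i, 1 with period 4,
# so i^n has no limit as n grows
powers = []
z = 1 + 0j
for _ in range(8):
    z *= 1j
    powers.append(z)
```

<p>The sequence visits exactly four values forever, so it never enters and stays inside a neighborhood of any single point.</p>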
|
3,815,898 | <p>In my stats lecture, my professor introduced this theorem, however I don't quite understand what the theorem means and how he got from step 3 to 4. Also what does the prime symbol mean? Could someone paraphrase what the theorem means or point me to some online resource where I could learn more about this theorem? Thanks</p>
<p>Theorem: If <span class="math-container">$X$</span> is a continuous random variable with cumulative distribution function (CDF) <span class="math-container">$F$</span>, then <span class="math-container">$Y = F(X) \sim U[0,1]$</span></p>
<p>Proof:
Let CDF of <span class="math-container">$Y$</span> = <span class="math-container">$F_y$</span>
<span class="math-container">$$F_y=P(Y \leq y)$$</span>
<span class="math-container">$$=P(F(X) \leq y)$$</span>
<span class="math-container">$$=P(X \leq F^{-1}(y))$$</span>
<span class="math-container">$$=F(X^{'})$$</span>
<span class="math-container">$$=F(F^{-1}(y))=y$$</span></p>
<p><span class="math-container">$$Y \sim U[0,1]$$</span></p>
| tommik | 791,458 | <p>This theorem is known as the "integral transformation theorem" (also called the probability integral transform).</p>
<p>As you can see after few passages you get</p>
<p><span class="math-container">$$F_Y(y)=F_X[F_X^{-1}(y)]$$</span></p>
<p>now it is evident that applying both <span class="math-container">$F$</span> and <span class="math-container">$F^{-1}$</span> to any <span class="math-container">$y$</span> makes them cancel each other (just as <span class="math-container">$e^{\log y}=y$</span>), thus you get</p>
<p><span class="math-container">$$F_Y(y)=y$$</span></p>
<p>But <span class="math-container">$F=y$</span> is the CDF of the uniform distribution over <span class="math-container">$[0;1]$</span></p>
<p>Differentiate it to get</p>
<p><span class="math-container">$$f_Y(y)=\mathbb{1}_{[0;1]}(y)$$</span></p>
<p>This theorem is very useful for many purposes, for example using it backwards you can generate a random sample from any continuous distribution starting from a random sample extracted by a <span class="math-container">$[0;1]$</span> uniform distribution.</p>
<p><strong>Example</strong></p>
<p>Suppose you have to generate a random sample from an Exponential <span class="math-container">$Exp(1)$</span> distribution, say</p>
<p><span class="math-container">$$F_X(x)=1-e^{-x}$$</span></p>
<p>By integral transformation theorem you know that</p>
<p><span class="math-container">$$y=1-e^{-x}\sim U[0;1]$$</span></p>
<p>and thus being</p>
<p><span class="math-container">$$x=-log(1-y)$$</span></p>
<p>you can obtain your random number <span class="math-container">$x$</span> from the uniform random number <span class="math-container">$y$</span> that is automatically generated by any computer, e.g. in EXCEL, and often also by a pocket calculator.</p>
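<p>The Exp(1) example can be checked in a few lines of Python (a sketch; the sample size and seed are arbitrary choices of mine): transform uniform draws and compare the sample mean with the theoretical mean of 1.</p>

```python
import math, random

random.seed(0)
n = 100_000
# y ~ U[0,1)  ->  x = -log(1 - y) ~ Exp(1), using the theorem backwards
sample = [-math.log(1.0 - random.random()) for _ in range(n)]
mean = sum(sample) / n   # should be close to E[X] = 1
```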
|
4,250,845 | <p>Considering a Quadrilateral <span class="math-container">$ABCD$</span> where <span class="math-container">$A(0,0,0), B(2,0,2), C(2,2\sqrt 2,2), D(0,2\sqrt2,0)$</span>. Basically I have to find the <strong>Area</strong> of <strong>projection</strong> of quadrilateral <span class="math-container">$ABCD$</span> on the plane <span class="math-container">$x+y-z=3$</span>.</p>
<p>I have tried to first find the projection of the points <span class="math-container">$A,B,C,D$</span> <em><strong>individually</strong></em> on the plane and then using the <strong>projected points</strong> find the vectors <span class="math-container">$\vec{AB}$</span> and <span class="math-container">$\vec{BC}$</span> and then using <span class="math-container">$|\vec{AB}\times \vec{BC}|$</span>, but I was unable to find the projected points.</p>
<p>Is it the correct approach? If it is not I would highly appreciate a correct approach for the problem.</p>
| Glorious Nathalie | 948,761 | <p>As indicated by Intelligenti pauca in the above comments, the best way to go is to find the area of the quadrilateral, then multiply the area found by the absolute value of the cosine of the angle between the two planes, which is the same as the angle between the normals to the planes (or its supplement).</p>
<p><span class="math-container">$\begin{equation} \begin{split}
\text{Area} &= \frac{1}{2} ( | AB \times AC | + | AC \times AD | ) \\
&= \frac{1}{2} ( | (2,0,2) \times (2, 2\sqrt{2}, 2) | + | (2, 2 \sqrt{2}, 2) \times (0, 2 \sqrt{2}, 0) | ) \\
&= 2 ( | (- \sqrt{2}, 0, \sqrt{2} ) | + | (-\sqrt{2}, 0, \sqrt{2} ) | )\\
& = 2 ( 2 + 2 ) = 8 \end{split} \end{equation}$</span></p>
<p>The normal to the plane of the quadrilateral is along <span class="math-container">$(-1, 0, 1)$</span> and the normal to the projection plane is along <span class="math-container">$(1, 1, -1)$</span>, therefore, if <span class="math-container">$\theta$</span> is the angle between these two normals, then,</p>
<p><span class="math-container">$\cos \theta = \dfrac{ (-1, 0, 1) \cdot (1, 1, -1) }{\sqrt{2}\sqrt{3} } = -\sqrt{\dfrac{2}{3}} $</span></p>
<p>Hence, the projected area is equal to <span class="math-container">$8\sqrt{\dfrac{2}{3}}$</span></p>
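<p>A numerical cross-check in plain Python (helper names mine): project the four vertices orthogonally onto the plane and compute the projected area directly; it agrees with 8·sqrt(2/3).</p>

```python
import math

def sub(p, q): return [p[i] - q[i] for i in range(3)]
def dot(p, q): return sum(p[i] * q[i] for i in range(3))
def cross(p, q):
    return [p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0]]
def norm(p): return math.sqrt(dot(p, p))

s = 2 * math.sqrt(2)
A, B, C, D = [0, 0, 0], [2, 0, 2], [2, s, 2], [0, s, 0]

n = [1, 1, -1]                       # normal of the plane x + y - z = 3
u = [c / norm(n) for c in n]         # unit normal

def project(p):
    # orthogonal projection of p onto the plane
    t = dot(u, p) - 3 / norm(n)      # signed distance from p to the plane
    return [p[i] - t * u[i] for i in range(3)]

Ap, Bp, Cp, Dp = map(project, (A, B, C, D))
area = 0.5 * (norm(cross(sub(Bp, Ap), sub(Cp, Ap)))
              + norm(cross(sub(Cp, Ap), sub(Dp, Ap))))
```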
|
1,778,098 | <p>Let $f,g: M^{k} \to N$ ($M$ and $N$ with out boundary ) such that they are homotopic then for $\omega$ a $k$-form on $N$ do we have that </p>
<p>$$ \int_M f^{\ast} \omega = \int_M g^{\ast} \omega$$ </p>
<p>as conclusion? I can't figure out a proof so I am starting to think that it is not true. I can't use Homology or Cohomology and all that fancy stuff . I think is just a matter of rearranging singular cubes but I can't see how. </p>
<p>The way I was trying to approach this is the following: Consider first
the case where support of $\omega$ is in the interior of the image of a singular cube $f \circ c$, where $c$ is a singular cube into $M$. </p>
<p>Then we look at the new singular cube $f \circ c : I \to N$. But I don't know what to do from here and how to approach the general case. </p>
<blockquote>
<p><strong>Attempt</strong></p>
</blockquote>
<p>Using @TedShifrin's answer I've managed the following </p>
<p>$$0=\int_{M \times [0,1]}d(H^{*}\omega)=\int_{\partial(M \times [0,1])}H^{*}\omega = \int_{M}\left(g^{\ast}\omega-f^{\ast}\omega\right)$$</p>
<p>but I am not sure.</p>
<p>Thanks a lot in advance.</p>
| Ted Shifrin | 71,348 | <p>You had better assume $M$ is a compact manifold without boundary and $\omega$ is a closed $k$-form on $N$. (The latter is automatic if $\dim N = k$ as well.) Then you can consider the homotopy mapping $H\colon M\times [0,1]\to N$ and apply Stokes's Theorem to $H^*\omega$. </p>
|
655,261 | <p>Let meagre subsets be defined as:<br>
$A\text{ meagre}\iff A=\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ with }\overline{A_k}°=\varnothing$<br>
Then it satisfies:<br>
$B\subseteq A\text{ meagre}\Rightarrow B\text{ meagre}$<br>
$A_k\text{ meagre}\Rightarrow\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ meagre}$</p>
<p><em>Flipping this around, is it possible to define meagre subsets by these properties?</em></p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>The property "being a set" (not very demanding!) verifies:</p>
<p>$B\subset A$ is a set $\Rightarrow$ $B$ is a set.</p>
<p>$A_k\text{ is a set}\Rightarrow\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ is a set}$</p>
|
655,261 | <p>Let meagre subsets be defined as:<br>
$A\text{ meagre}\iff A=\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ with }\overline{A_k}°=\varnothing$<br>
Then it satisfies:<br>
$B\subseteq A\text{ meagre}\Rightarrow B\text{ meagre}$<br>
$A_k\text{ meagre}\Rightarrow\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ meagre}$</p>
<p><em>Flipping this around, is it possible to define meagre subsets by these properties?</em></p>
| Community | -1 | <p>If $A$ is meager, then there is a sequence of nowhere dense subsets $\{A_n\}$ such that $A=\bigcup_{n=1}^\infty A_n$. Now $B=B\cap A=\bigcup_{n=1}^\infty(B\cap A_n)$. We just need show that each $B\cap A_n$ is nowhere dense, and this will finish the proof for the first.</p>
<p>But we can show something a little more general. Let $A$ be nowhere dense and $B\subseteq A$. Then $B$ is nowhere dense. Proof: $B\subseteq A$ implies $\overline{B}\subseteq\overline{A}$. And this implies $\overline{B}^\circ\subseteq\overline{A}^\circ=\varnothing$. Thus
$\overline{B}^\circ=\varnothing$. Thus $B$ is nowhere dense.</p>
|
2,278,991 | <p>(NOTE: I am new to proof construction. Don't panic if your heart beat increase.)</p>
<p>Proof 1:</p>
<p>suppose $x$ is even and prime, then there is $k$ in $\mathbb N$ such that </p>
<p>$x = 2k$</p>
<p>But x is only divisible by itself and not 2.</p>
<p>$\frac{x}{2} \neq k$</p>
<p>$x$ can not be even.</p>
<hr>
<p>Proof 2:
$x$ is prime</p>
<p>$x = pq$ then either $p=1$ or $q=1$</p>
<p>$x$ is even</p>
<p>$x = 2k$ </p>
<p>$pq = 2k$</p>
<p>$(2)\frac{k}{pq} = 1$ </p>
<p>which is false </p>
| Henrik supports the community | 193,386 | <p>What?</p>
<p>Whatever you're trying to do, it gets completely lost because you're not explaining what you're trying to accomplish, but just stating some things.</p>
<p>And for the mathematical content, I gave up after:</p>
<blockquote>
<p>If $x>n$ then $\frac{x}{n}$ does not belong to $\mathbb N$.</p>
</blockquote>
<p>That's simply not true: $4>2$ and $\frac{4}{2}=2\in\mathbb N$.</p>
|
172,366 | <blockquote>
<p>What will be the value of $P(12)+P(-8)$ if $P(x)=x^{4}+ax^{3}+bx^{2}+cx+d$
provided that $P(1)=10$, $P(2)=20$, $P(3)=30$?</p>
</blockquote>
<p>I put these values and got three simultaneous equations in $a, b, c, d$. What is the smarter way to approach these problems?</p>
| Cocopuffs | 32,943 | <p>I'm not sure this is the smartest way, but here's one way.</p>
<p>Define $Q(x) = P(x) - x^4$ which satisfies the conditions $$Q(1) = 9, Q(2) = 4, Q(3) = -51.$$</p>
<p>So $a,b,c,d$ satisfy the matrix identity</p>
<p>$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \\ 1 & 3 & 9 & 27 \end{pmatrix} \begin{pmatrix} d \\ c \\ b \\ a \end{pmatrix} = \begin{pmatrix} 9 \\ 4 \\ -51 \end{pmatrix}.$$ To solve this, find the kernel of the matrix (a bit of linear algebra), which turns out to be $$<\begin{pmatrix} -6 \\ 11 \\ -6 \\ 1 \end{pmatrix}>$$ and add a particular solution, for example where $a = 0$: $b = -25, c = 70, d = -36$.</p>
<p>So your polynomial is given by $$P(x) = x^4 + ax^3 + (-25-6a)x^2 + (70 + 11a)x + (-36 - 6a)$$ for some $a$ which we can't determine.</p>
<p>You then get $$P(12) + P(-8) = 17940 + 990a + 1900 - 990a = 19840.$$</p>
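<p>This is easy to sanity-check in Python. Note the linear coefficient is 70 + 11a (the particular solution contributes c = 70 and the kernel vector contributes 11a); with that, the interpolation conditions hold for every a and the a-terms cancel in P(12) + P(-8):</p>

```python
def P(x, a):
    # family of quartics through (1,10), (2,20), (3,30); a is the free parameter
    return x**4 + a * x**3 + (-25 - 6*a) * x**2 + (70 + 11*a) * x - 36 - 6*a

total = P(12, 5) + P(-8, 5)   # 19840, independent of the choice a = 5
```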
|
67,460 | <p>Denote the system in $GF(2)$ as $Ax=b$, where:
$$
\begin{align}
A=&(A_{ij})_{m\times m}\\
A_{ij}=&
\begin{cases}
(1)_{n\times n}&\text{if }i=j\quad\text{(a matrix where entries are all 1's)}\\
I_n&\text{if }i\ne j\quad\text{(the identity matrix)}
\end{cases}
\end{align}
$$
that is, $A$ is a square matrix of order $mn$ (an $m\times m$ array of $n\times n$ blocks), and $b$ is a 0-1 vector of length $mn$. Now what is the solution of this system, if any, for a general pair of $m$ and $n$?</p>
<p>Example: For $m=2,n=3$ and $b=(0, 1, 0, 0, 1, 0)^T$, we have
$$
A=
\begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}
$$
then one solution is $x=(1, 0, 1, 0, 1, 0)^T$</p>
<p>I know Gaussian elimination. I am trying but find it not very easy when dealing with a general case.</p>
| Robert Israel | 8,508 | <p>An empirical formula: it looks to me like $\det(A) = (-1)^{(m-1)(n-1)} (n+m-1)(n-1)^{m-1}(m-1)^{n-1}$. In particular, $\det(A) \equiv 0 \mod 2$ when $m$ or $n$ is odd (except in the case $m=n=1$) so, as Henning found, $A$ will not be invertible over GF(2) in those cases.
However, it will be invertible when $m$ and $n$ are both even.</p>
<p>EDIT: the situation over $GF(2)$ for even $m$ and $n$ is simpler than I thought. Consider $A^2$. Let $U$ be the $n \times n$ matrix of all 1's. Note that $U^2 = n U$. The $(i,i)$ block of $A^2$ is $U^2 + (m-1) I = n U + (m-1) I$. The $(i,j)$ block for $i \ne j$ is $2 U + (m-2) I$. If $m$ and $n$ are even, the $(i,i)$ block mod 2 is $I$ and the $(i,j)$ block mod 2 is $0$ (i.e. the inverse of $A$ over $GF(2)$ is $A$). </p>
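<p>The mod-2 claim is easy to verify by brute force in Python (a sketch; function names mine). For even m and n, the square of A reduces to the identity mod 2, so A is its own inverse over GF(2):</p>

```python
def build_A(m, n):
    # m x m block matrix: all-ones n x n blocks on the diagonal,
    # n x n identity blocks off the diagonal
    size = m * n
    A = [[0] * size for _ in range(size)]
    for bi in range(m):
        for bj in range(m):
            for i in range(n):
                for j in range(n):
                    if (bi == bj) or (bi != bj and i == j):
                        A[bi * n + i][bj * n + j] = 1
    return A

def is_self_inverse_mod2(A):
    # check A*A == I over GF(2)
    size = len(A)
    return all(
        sum(A[i][k] * A[k][j] for k in range(size)) % 2 == (i == j)
        for i in range(size) for j in range(size)
    )
```

<p>The example from the question also checks out: build_A(2, 3) applied to (1, 0, 1, 0, 1, 0) gives (0, 1, 0, 0, 1, 0) mod 2.</p>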
|
487,171 | <p>Now I tried tackling this question from different sizes and perspectives (and already asked a couple of questions here and there), but perhaps only now can I formulate it well and ask you (since I have no good ideas).</p>
<p>Let there be $k, n \in\mathbb{Z_+}$. These are fixed.</p>
<p>Consider a set of $k$ integers $S=\{0, 1, 2, ... k-1\}$.</p>
<p>We form a sequence $a_1, a_2, ..., a_n$ by picking numbers from $S$ at random with equal probability $1/k$.</p>
<p>The question is - what is the probability of that sequence to be sorted ascending, i.e. $a_1 \leq a_2 \leq ... \leq a_n$? </p>
<p>Case $k \to \infty$:</p>
<p>This allows us to assume (with probability tending to $1$) that all elements $a_1, ..., a_n$ are different. It means that only one ordering out of $n!$ possible is sorted ascending. </p>
<p>And since all orderings are equally likely (not sure why though), the probability of the sequence to be sorted is</p>
<p>$$\frac{1}{n!}.$$</p>
<p>Case k = 2:</p>
<p>Now we have zeroes and ones which come to the resulting sequence with probability $0.5$ each. So the probability of any particular n-sequence is $\frac{1}{2^n}$. </p>
<p>Let us count the number of possible sorted sequences:</p>
<p>$$0, 0, 0, \ldots, 0, 0$$
$$0, 0, 0, \ldots, 0, 1$$
$$0, 0, 0, \ldots, 1, 1$$
$$\ldots$$
$$0, 0, 1, \ldots, 1, 1$$
$$0, 1, 1, \ldots, 1, 1$$
$$1, 1, 1, \ldots, 1, 1$$</p>
<p>These total to $(n+1)$ possible sequences. Now again, any sequence is equally likely, so the probability of the sequence to be sorted is </p>
<p>$$ \frac{n+1}{2^n}. $$</p>
<p>Question:</p>
<p>I have no idea how to generalize it well for arbitrary $k, n$. Maybe we can tackle it together since my mathematical skills aren't really that high. </p>
| Marc van Leeuwen | 18,880 | <p>The number of possible sequences is $k^n$. The number of favourable outcomes, namely weakly increasing sequences, is $\binom{n+k-1}{k-1}=\binom{n+k-1}n$. The probability of finding the random sequence to be increasing is
$$
\frac{\binom{n+k-1}{k-1}}{k^n} =
\frac{\binom{n+k-1}n}{k^n} = \prod_{i=1}^n\frac{k+i-1}{ik}.
$$
To see why the binomial coefficient gives the number of weakly increasing sequences, imagine drawing a bar chart for the function, and tracing a path from position $(0,1)$ (at height $1$ to the left of the first bar) to $(n,k)$ (at height $k$ to the right of the last bar) across the tops of the bars. The path has $n+k-1$ steps, of which $n$ are horizontal across the top of a bar, and $k-1$ are vertical to increase the height.</p>
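<p>The count (and hence the probability) can be confirmed by brute force for small k and n (Python; names mine):</p>

```python
from itertools import product
from math import comb

def count_sorted(k, n):
    # number of weakly increasing sequences among all k^n draws
    return sum(1 for s in product(range(k), repeat=n)
               if all(s[i] <= s[i + 1] for i in range(n - 1)))
```

<p>For instance count_sorted(2, n) returns n + 1, matching the k = 2 case worked out in the question.</p>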
|
3,243,406 | <p>I know that the function <span class="math-container">$f(x)=|x(x-1)^3|$</span> is not differentiable at <span class="math-container">$x=0$</span>, but why is it differentiable at <span class="math-container">$x=1$</span>?</p>
| Sri-Amirthan Theivendran | 302,692 | <p>Note that <span class="math-container">$f(x)=|x|(x-1)^2|x-1|$</span> whence</p>
<p><span class="math-container">$$
f'(1)=\lim_{x\to1}\frac{f(x)-f(1)}{x-1}=\lim_{x\to 1} \frac{f(x)}{x-1}=\lim_{x\to1} |x|(x-1)|x-1|=0.
$$</span>
since <span class="math-container">$|x|(x-1)|x-1|$</span> is a product of continuous functions.</p>
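<p>The two points behave differently in a quick numerical check (Python): the difference quotient at x = 1 vanishes, while at x = 0 the one-sided slopes approach +1 and -1.</p>

```python
def f(x):
    return abs(x * (x - 1)**3)

h = 1e-6
q1 = (f(1 + h) - f(1)) / h           # -> 0 : differentiable at x = 1
q0_right = (f(h) - f(0)) / h         # -> +1
q0_left = (f(-h) - f(0)) / (-h)      # -> -1 : kink at x = 0
```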
|
295,597 | <p>I'm trying to solve this simple integral:</p>
<p>$$\frac12 \int \frac{x^2}{\sqrt{x + 1}} dx$$</p>
<p>Here's what I have done so far:</p>
<ol>
<li><p>$\displaystyle t = \sqrt{x + 1} \Leftrightarrow x = t^2 - 1 \Rightarrow dx = 2t dt$</p></li>
<li><p>$\displaystyle \frac12 \int \frac{x^2}{\sqrt{x + 1}} dx = \int \frac{t (t^2 - 1)^2}t dt$</p></li>
<li><p>$\displaystyle \int (t^2 - 1)^2 dt = \frac15 t^5 - \frac23 t^3 + t + C$</p></li>
<li><p>$\displaystyle \frac15 t^5 - \frac23 t^3 + t + C = \frac15 \sqrt{(x + 1)^5} - \frac23 \sqrt{(x + 1)^3} + \sqrt{x + 1} + C$</p></li>
</ol>
<p>WolframAlpha tells me steps 1 and 3 are right so the mistake must be somewhere in steps 2 and 4, but I really can't see it.</p>
| Thomas Andrews | 7,933 | <p>$$\begin{align}\frac{x^2}{\sqrt{x+1}} &= \frac{x^2-1}{\sqrt{x+1}} + \frac{1}{\sqrt{x+1}}\\&=(x-1)\sqrt{x+1} + \frac{1}{\sqrt{x+1}}\\&=(x+1)^{3/2} - 2\sqrt{x+1} + \frac{1}{\sqrt{x+1}}\end{align}$$</p>
<p>I think you can integrate each of these terms.</p>
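<p>A numerical spot-check (Python, composite Simpson's rule; the step count is my arbitrary choice) confirms that the antiderivative obtained by the substitution t = sqrt(x + 1) is correct on, say, [0, 3]:</p>

```python
import math

def F(x):
    # antiderivative found via the substitution t = sqrt(x + 1)
    t = math.sqrt(x + 1)
    return t**5 / 5 - 2 * t**3 / 3 + t

def integrand(x):
    return 0.5 * x**2 / math.sqrt(x + 1)

# composite Simpson's rule on [0, 3]
a, b, n = 0.0, 3.0, 2000
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
numeric = s * h / 3
```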
|
2,921,927 | <p>We have the following function : <span class="math-container">$$f(z)=\frac{z^2}{1-\cos z}$$</span> where <span class="math-container">$z_0=0$</span> is a removable singularity since the limit as <span class="math-container">$z$</span> goes to <span class="math-container">$0$</span> is <span class="math-container">$2$</span>. </p>
<p>In such cases, in order to find the residue I proceed by trying to find the Laurent series around the singularity and checking the coefficient of the <span class="math-container">$z^{-1}$</span> term. However, the cosine is in the denominator and I can't properly find the first negative power of <span class="math-container">$z$</span>. Will I have to resort to a polynomial division to get my negative <span class="math-container">$z$</span> powers?</p>
| Mark | 470,733 | <p>The residue of a removable singularity is always $0$. It simply follows from the fact that if the singularity is removable then the Laurent series is actually Taylor series. </p>
|
3,998,098 | <p>I was asked to determine the locus of the equation
<span class="math-container">$$b^2-2x^2=2xy+y^2$$</span></p>
<p>This is my work:</p>
<blockquote>
<p>Add <span class="math-container">$x^2$</span> to both sides:
<span class="math-container">$$\begin{align}
b^2-x^2 &=2xy+y^2+x^2\\
b^2-x^2 &=\left(x+y\right)^2
\end{align}$$</span></p>
</blockquote>
<p>I see that this is similar to the equation of a circle. How can I find the locus of this expression?</p>
| Vajra | 759,757 | <p>The matrix associated to the conic is
<span class="math-container">$$\begin{pmatrix}-2 & -1 & 0\\-1 & -1 & 0\\0 & 0 & b^2 \end{pmatrix}\implies\Bigg|\begin{pmatrix}-2 & -1\\-1 & -1 \end{pmatrix}\Bigg|=2-1=1>0$$</span>
and this shows that the conic is an ellipse.</p>
|
3,998,098 | <p>I was asked to determine the locus of the equation
<span class="math-container">$$b^2-2x^2=2xy+y^2$$</span></p>
<p>This is my work:</p>
<blockquote>
<p>Add <span class="math-container">$x^2$</span> to both sides:
<span class="math-container">$$\begin{align}
b^2-x^2 &=2xy+y^2+x^2\\
b^2-x^2 &=\left(x+y\right)^2
\end{align}$$</span></p>
</blockquote>
<p>I see that this is similar to the equation of a circle. How can I find the locus of this expression?</p>
| Amanuel Getachew | 669,545 | <p>The equation
<span class="math-container">$$Ax^2 + Bxy+Cy^2 +Dx + Ey + F = 0$$</span>
is:</p>
<ul>
<li><p>an ellipse if <span class="math-container">$B^2 - 4AC < 0$</span> (could be circle or a point if <span class="math-container">$A = C$</span>, <span class="math-container">$F > 0$</span> and <span class="math-container">$B = 0$</span>)</p>
</li>
<li><p>a parabola if <span class="math-container">$B^2 -4AC = 0$</span></p>
</li>
<li><p>a hyperbola if <span class="math-container">$B^2 - 4AC > 0$</span></p>
</li>
</ul>
<p>In this case, <span class="math-container">$B^2-4AC = -4 <0$</span>. Hence it is an ellipse.</p>
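<p>The discriminant test is easy to wrap in a small helper (Python; a sketch that ignores degenerate cases such as empty loci or line pairs):</p>

```python
def classify_conic(A, B, C):
    # discriminant test for Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
    disc = B * B - 4 * A * C
    if disc < 0:
        return "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

# b^2 - 2x^2 = 2xy + y^2 rearranges to -2x^2 - 2xy - y^2 + b^2 = 0
kind = classify_conic(-2, -2, -1)   # "ellipse"
```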
|
4,386,087 | <p>There exists an elevator which starts off containing <span class="math-container">$p$</span> passengers.</p>
<p>There are <span class="math-container">$F$</span> floors.</p>
<p><em><span class="math-container">$\forall i: P_i = $</span>P(the <span class="math-container">$i$</span>-th passenger exits on any given floor)</em> = <span class="math-container">$1/F$</span>.</p>
<p>The passengers exit independently.</p>
<p>What's the probability that the elevator door opens on all floors?</p>
<p><em>Reasonable assumption: elevator stops on at least one of the floors</em>.</p>
<p>Number of all configurations: <span class="math-container">$F^{p}$</span>.</p>
<p>It arises as the implication that any of the passengers can
exit on any of the floors.</p>
<p>Number of favorable configurations will be the difference between the number of all configurations and the number of ways that at least one of the floors are skipped.</p>
<p>Therefore the probability in accordance with the principle of inclusion-exclusion:</p>
<p><span class="math-container">$\frac{F^{p} - \binom{F}{1}\cdot \left( F - 1\right)^{p} + \binom{F}{2}\cdot \left( F - 2\right)^{p} - ... + \binom{F}{F - 1}\cdot 1^{p}}{F^{p}}$</span>.</p>
<p>Now how to proceed from this point?</p>
| true blue anil | 22,388 | <p>I prefer the approach you adopted, as you do not need recourse to a Stirling table, and in a way the formula you use is more basic.</p>
<p>You could, however, learn to condense your expression to</p>
<p><span class="math-container">$$\left(\dfrac1{F^P}\right)\sum_{i=0}^{F}(-1)^i\dbinom{F}{i}(F-i)^P$$</span></p>
<p>although it may appear more intimidating !</p>
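<p>For small F and p the inclusion-exclusion count agrees with a brute-force enumeration (Python; names mine), which is a reassuring check of the formula:</p>

```python
from itertools import product
from math import comb

def favorable_bruteforce(F, p):
    # exit patterns in which every floor is used at least once
    return sum(1 for w in product(range(F), repeat=p) if len(set(w)) == F)

def favorable_formula(F, p):
    # the inclusion-exclusion sum from the question
    return sum((-1)**i * comb(F, i) * (F - i)**p for i in range(F + 1))
```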
|
257,821 | <p>The Kullback-Leibler divergence between two distributions with pdfs $f(x)$ and $g(x)$ is defined
by
$$\mathrm{KL}(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$$</p>
<p>Compute the Kullback-Leibler divergence when $F$ is the standard normal distribution and $G$
is the normal distribution with mean $\mu$ and variance $1$. For what value of $\mu$ is the divergence
minimized?</p>
<p>I was never instructed on this kind of divergence so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log but my guess is that I should wait until after I take the integral. Any help is appreciated.</p>
| VCZ | 49,092 | <p>The pdf of the standard normal distribution is $\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}$. Similarly, $g(x) = \frac{1}{\sqrt{2\pi}}e^{-(x-\mu)^2/2}$. Therefore, $D_{KL}=\int_{-\infty}^{\infty} \frac{\mu^{2}-2\mu x}{2}\cdot\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\, dx$. Since the $-2\mu x$ term integrates to $-2\mu\,\mathbb{E}[X]=0$, this is equivalent to $\frac{{\mu}^2}{2}$.</p>
<p>This can be easily generalized to any two normal distributions with means $\mu_1, \mu_2$ and variances ${\sigma_1}^2, {\sigma_2}^2$.</p>
<p>The K-L divergence is obviously minimized at $\mu=0$ - where the distributions are the same!</p>
<p>EDIT: Sorry, I misread the mean as 1 originally - I have corrected it. </p>
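<p>The closed form mu^2/2 can be checked by Monte Carlo (Python; the sample size and seed are my arbitrary choices): average log(f(x)/g(x)) = (mu^2 - 2*mu*x)/2 over draws x ~ N(0, 1).</p>

```python
import random

random.seed(1)
mu = 2.0
n = 200_000
# E_f[ log f(X)/g(X) ] with X ~ N(0,1); the integrand is (mu^2 - 2*mu*x)/2
est = sum((mu**2 - 2 * mu * random.gauss(0, 1)) / 2 for _ in range(n)) / n
```

<p>With mu = 2 the estimate lands near mu^2/2 = 2.</p>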
|
257,821 | <p>The Kullback-Leibler divergence between two distributions with pdfs $f(x)$ and $g(x)$ is defined
by
$$\mathrm{KL}(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$$</p>
<p>Compute the Kullback-Leibler divergence when $F$ is the standard normal distribution and $G$
is the normal distribution with mean $\mu$ and variance $1$. For what value of $\mu$ is the divergence
minimized?</p>
<p>I was never instructed on this kind of divergence so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log but my guess is that I should wait until after I take the integral. Any help is appreciated.</p>
| R S | 48,725 | <p>I cannot comment (not enough reputation). </p>
<p>Vincent: You have the wrong pdf for $g(x)$, you have a normal distribution with mean 1 and variance 1, not mean $\mu$. </p>
<p>Hint: You don't need to solve any integrals. You should be able to write this as pdf's and their expected values, so you never need to integrate. </p>
<p>Outline: Firstly, $ \log({f(x) \over g(x) })=\left\{ -{1 \over 2} \left( x^2 - (x-\mu )^2 \right) \right\} $ . Expand and simplify. Don't even write out the other $f(x)$ and see where that takes you. </p>
|
1,284,388 | <p>Solving for variable $d$:</p>
<p>$v = \frac{1}{2}hd^2 + 9.9$</p>
<p>$-2(v - 9.9) = hd^2$</p>
<p>$-2v + 19.8 = hd^2$</p>
<p>$d = \sqrt{\frac{-2v + 19.8}{h}}$</p>
<p>The correct answer is:</p>
<p>$d = \pm\sqrt{\frac{2v - 19.8}{h}}$</p>
| Olivier Oloa | 118,798 | <p>From $$V = \frac{1}{2}hd^2 + 9.9$$ you may deduce, with appropriate conditions:
$$
2V = hd^2 + 2\times9.9
$$ $$
2V-19.8 = hd^2
$$$$
\frac{2V-19.8}h = d^2
$$$$
\pm\sqrt{\frac{2V-19.8}h }= d.
$$</p>
|
419,625 | <p>Please help me, I have two functions:</p>
<blockquote>
<pre><code>a := x^3+x^2-x;
b := 20*sin(x^2)-5;
</code></pre>
</blockquote>
<p>and I would like to change a background color and fill the areas between two curves. I filled areas but I dont know how can I change the background, any idea?</p>
<blockquote>
<pre><code>plots[display]([plottools[transform]((x, y)->
[x, y+x^3+x^2+x])(plot(20*sin(x^2)-5-
x^3-x^2-x, x = o .. s, filled = true, color = red)),
plot([x^3+x^2+x, 20*sin(x^2)-5],
x = -3 .. 3, y = -30 .. 30, color = black)]);
</code></pre>
</blockquote>
| Mikasa | 8,581 | <p>The way in which @acer colored the area between two curves is really elegant, so I don't have any additional points about it for you. Just to say that: when you made the area between two curves colored (following the acer's way); I think the following line could make the background colored as well:</p>
<pre><code>> plot([50, -50], x = -3 .. 3, y = -50 .. 50, filled = true, color = blue);
</code></pre>
<p><img src="https://i.stack.imgur.com/UvDFH.png" alt="enter image description here"></p>
|
419,625 | <p>Please help me, I have two functions:</p>
<blockquote>
<pre><code>a := x^3+x^2-x;
b := 20*sin(x^2)-5;
</code></pre>
</blockquote>
<p>and I would like to change a background color and fill the areas between two curves. I filled areas but I dont know how can I change the background, any idea?</p>
<blockquote>
<pre><code>plots[display]([plottools[transform]((x, y)->
[x, y+x^3+x^2+x])(plot(20*sin(x^2)-5-
x^3-x^2-x, x = o .. s, filled = true, color = red)),
plot([x^3+x^2+x, 20*sin(x^2)-5],
x = -3 .. 3, y = -30 .. 30, color = black)]);
</code></pre>
</blockquote>
| Software | 62,172 | <p><em>Thanks to @Babak S. and @acer</em> $\Huge\color{green}{✔}^\color{red}{+}$</p>
<p>I Just wanted to say that:</p>
<blockquote>
<p>Right click on the background color -> select "<strong>color</strong>" -> You can also use "the default colors".</p>
</blockquote>
<p><img src="https://i.stack.imgur.com/8IcID.jpg" alt="enter image description here"></p>
|
4,280,424 | <p>The PDE:
<span class="math-container">$$\frac1D C_t-Q=\frac2rC_r+C_{rr}$$</span></p>
<p>on the domain <span class="math-container">$r \in [0,\bar{R}]$</span> and <span class="math-container">$t \in [0,+\infty]$</span> and where <span class="math-container">$D$</span> and <span class="math-container">$Q$</span> are Real constants. We're looking for a function <span class="math-container">$C(r,t)$</span>.</p>
<p>The BC:
<span class="math-container">$$C(0,t)=f(t)$$</span>
<span class="math-container">$$C_r(\bar{R},t)=0$$</span>
The IC:
<span class="math-container">$$C(r,0)=C_0$$</span>
If <span class="math-container">$f(t)=0$</span> then I know the solution. Assume:
<span class="math-container">$$C(r,t)=C_E(r)+v(r,t)$$</span>
<span class="math-container">$$-Q=\frac2rC_r+C_{rr}$$</span>
<span class="math-container">$$-Q=\frac2rC_E'(r)+C_E''(r)$$</span>
<span class="math-container">$$rC_E''+2C_E'+Qr=0$$</span>
where <span class="math-container">$C_E(r)$</span> is the steady-state solution (<span class="math-container">$t \to \infty$</span>).
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}+\frac{c_1}{r}+c_2$$</span>
<span class="math-container">$$\text{because } C_E(r) \text{ must remain finite as } r\to 0, \text{ we need } c_1=0$$</span>
But since as <span class="math-container">$f(t) \neq 0$</span>, <span class="math-container">$c_2$</span> cannot be determined.</p>
<p>All help will be appreciated.</p>
<hr>
<p><strong>Edit.</strong> In the case of <span class="math-container">$f(t)=0$</span> the solution, summarised, becomes:</p>
<p><span class="math-container">$$c_2=0$$</span>
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}$$</span>
<span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+v(r,t)$$</span>
Compute partial derivatives:
<span class="math-container">$$C_t=v_t$$</span>
<span class="math-container">$$C_r=-\frac{Qr}{3}+v_r$$</span>
<span class="math-container">$$C_{rr}=-\frac{Q}{3}+v_{rr}$$</span>
Inserting in the PDE then gives the homogeneous PDE in <span class="math-container">$v(r,t)$</span>:
<span class="math-container">$$\frac1D v_t=\frac2r v_r+v_{rr}$$</span>
Ansatz: <span class="math-container">$v(r,t)=R(r)T(t)$</span>, then separation of variables yields the ODE solutions, with <span class="math-container">$-m^2$</span> a separation constant:
<span class="math-container">$$T(t)=c_3\exp(-m^2 D t)$$</span>
<span class="math-container">$$R(r)=c_4\frac{\sin mr}{r}$$</span>
BCs:
<span class="math-container">$$R(0)=0$$</span>
<span class="math-container">$$R'(\bar{R})=0$$</span>
<span class="math-container">$$R'=c_4\frac{mr\cos mr-\sin mr}{r^2}$$</span>
<span class="math-container">$$R'(\bar{R})=c_4\frac{m\bar{R}\cos m\bar{R}-\sin m\bar{R}}{\bar{R}^2}=0$$</span>
The eigenvalues <span class="math-container">$m_i$</span> are the solutions to the transcendental equation:
<span class="math-container">$$m_i\bar{R}=\tan m_i\bar{R}$$</span>
So we have:
<span class="math-container">$$v(r,t)=\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span>
Determine the <span class="math-container">$A_i$</span> the usual way with the IC and the Fourier series.</p>
<p>So we have:</p>
<p><span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span></p>
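<p>The eigenvalue condition above is transcendental, so the <span class="math-container">$m_i$</span> have to be found numerically. The sketch below is my own illustration (assuming <span class="math-container">$\bar R = 1$</span>, so the condition reads <span class="math-container">$m_i = \tan m_i$</span>); it brackets and bisects the first two nonzero roots:</p>

```python
import math

def g(m):
    # m = tan(m)  <=>  sin(m) - m*cos(m) = 0  (this form avoids the poles of tan)
    return math.sin(m) - m * math.cos(m)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    for _ in range(200):
        mid = 0.5 * (a + b)
        if b - a < tol:
            return mid
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# the first two nonzero roots lie in (pi, 3*pi/2) and (2*pi, 5*pi/2)
m1 = bisect(g, math.pi, 1.5 * math.pi)
m2 = bisect(g, 2.0 * math.pi, 2.5 * math.pi)
```

<p>For a general <span class="math-container">$\bar R$</span> the same roots rescale to <span class="math-container">$m_i = (\text{root})/\bar R$</span>.</p>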
| Mark | 470,733 | <p>The closure has to contain <span class="math-container">$A$</span>, so there are only two options here: <span class="math-container">$\mathbb{R}$</span> or <span class="math-container">$\mathbb{R}\setminus\{p\}$</span>. Since <span class="math-container">$\mathbb{R}\setminus\{p\}$</span> is not a closed set (its complement is clearly not open in this topology), it can't be the closure. So it has to be <span class="math-container">$\mathbb{R}$</span>.</p>
|
2,677,134 | <p>Given the definition of Big-O, prove that $f(n) = n^2 - n$ is $O(n^2)$.</p>
<p>When I use the given definition, I get $n^2 - n \leq n^2 - n^2$ which means that $n^2 -n \leq 0$, which is not true. Is there some step I'm missing?</p>
| an4s | 533,556 | <p>The definition that you have stated is incorrect. $f(n)$ is $\mathcal{O}(g(n))$ $\textit{iff}$ there exists a constant $c > 0$ and there exists a value for n, $n_0 > 0$, such that $|f(n)|\leq c\cdot|g(n)|~\forall n \geq n_0$.</p>
<p>In your case, $f(n) = n^2 - n$ and $g(n) = n^2$. Let's use the definition above.
$$\begin{align*}
|f(n)| &\leq c\cdot|g(n)| \\
|n^2 - n| &\leq c\cdot|n^2| \\
\left|1 - \dfrac{1}{n}\right| &\leq c \\
\end{align*}$$
For $c = 1$ and $n_0 = 1$, the above inequality holds. That means that for any value of $n\geq1$ while $c = 1$, $f(n) = n^2 - n$ is $\mathcal{O}(g(n)) = \mathcal{O}(n^2)$.</p>
<p>Another method to solve this is by using limits. $f(n)$ is $\mathcal{O}(g(n))$ if $\lim_{n\to\infty}\dfrac{f(n)}{g(n)}$ is a finite constant. Let's calculate that for the given $f(n)$ and $g(n)$.
$$\lim_{n\to\infty}\dfrac{n^2 - n}{n^2}$$
Using L'Hospital's Rule,
$$\lim_{n\to\infty}\dfrac{n^2 - n}{n^2} = \lim_{n\to\infty}\dfrac{2n - 1}{2n} = \lim_{n\to\infty}\dfrac{2}{2} = 1, \text{ a finite constant}$$
Therefore, again, $f(n) = n^2 - n$ is $\mathcal{O}(g(n)) = \mathcal{O}(n^2)$.</p>
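<p>Both arguments are easy to spot-check numerically. A small sketch, using the witness constants $c = 1$ and $n_0 = 1$ from the proof above:</p>

```python
def f(n):
    return n * n - n

def g(n):
    return n * n

# the definition's inequality |f(n)| <= c*|g(n)| with c = 1 holds for all n >= n0 = 1
bound_holds = all(abs(f(n)) <= 1 * abs(g(n)) for n in range(1, 10_000))

# the ratio f(n)/g(n) = 1 - 1/n tends to the finite constant 1
ratio = f(10**9) / g(10**9)
```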
|
2,389,782 | <p>Other than <a href="http://rads.stackoverflow.com/amzn/click/3885380064" rel="nofollow noreferrer">Engelking's General Topology</a>, I have also come across other graduate general topology texts, such as <a href="http://rads.stackoverflow.com/amzn/click/0697068897" rel="nofollow noreferrer">Dugundji</a> and <a href="http://www.springer.com/gp/book/9780387901251" rel="nofollow noreferrer">Kelley</a>, which I also find interesting. </p>
<p>However, I find Engelking's book more comprehensive compared with the two other graduate texts. </p>
<blockquote>
<p>Question: Does there exist a general topology book which is more
comprehensive than Engelking? If yes, what is the title?</p>
</blockquote>
| Peter Elias | 392,689 | <p>When looking for a result, Engelking is my first choice. But, as <a href="https://math.stackexchange.com/questions/2265141/is-mathbbr-mathord-sim-hausdorff-if-x-y-x-sim-y-is-a-closed-su/2392314#2392314">my recent experience</a> shows, one should also not miss Bourbaki.</p>
|
2,389,782 | <p>Other than <a href="http://rads.stackoverflow.com/amzn/click/3885380064" rel="nofollow noreferrer">Engelking's General Topology</a>, I have also come across other graduate general topology texts, such as <a href="http://rads.stackoverflow.com/amzn/click/0697068897" rel="nofollow noreferrer">Dugundji</a> and <a href="http://www.springer.com/gp/book/9780387901251" rel="nofollow noreferrer">Kelley</a>, which I also find interesting. </p>
<p>However, I find Engelking's book more comprehensive compared with the two other graduate texts. </p>
<blockquote>
<p>Question: Does there exist a general topology book which is more
comprehensive than Engelking? If yes, what is the title?</p>
</blockquote>
| Henno Brandsma | 4,280 | <p>I'm a fan of "Handbook of Set-theoretic Topology", which I often use, and Engelking as well. It contains many things that are more recent than Engelking and is more research oriented.</p>
<p>Nagata's "Modern General Topology" is also nice. Also in the Japanese school "Topics in General Topology", by Morita and Nagata. The latter book is more like the Handbook. The first more of a text book, like Kelley or Dugundji (which are good as well).</p>
|
31,158 | <p>To generate a 3D mesh, <a href="http://reference.wolfram.com/mathematica/TetGenLink/tutorial/UsingTetGenLink.html#167310445" rel="nofollow noreferrer">TetGen</a> can easily be used. Are there similar functions (or a way to use TetGen) to generate a 2D mesh? I know that such functionality can be <a href="https://mathematica.stackexchange.com/questions/22244/creating-a-2d-meshing-algorithm-in-mathematica">easily implemented</a>, but I would like to use a function provided by Mathematica, as I need to experiment with the number of nodes in elements and so on. I just want to solve a PDE using FEM, not to play around with mesh generation.</p>
| Mark | 9,085 | <p>You can use TetGen and just add a z coordinate which is set to 0 (or any other constant). The issue with DelaunayTriangulation (if you want to generate a mesh from a list of points) is that it returns an adjacency list of the edges, which is very hard to turn into the polygons. <a href="https://mathematica.stackexchange.com/questions/277/how-to-get-actual-triangles-from-delaunaytriangulation">This</a> thread describes the issue with it.</p>
<p>Adding a z dimension of 0 is simply:</p>
<pre><code>pts3d = Map[Append[#, 0]&, pts2d]
</code></pre>
<p>And then to turn the 3d points back into 2d:</p>
<pre><code>newpts2d = pts3d[[All,1;;2]]
</code></pre>
|
2,146,457 | <p>Simplify the ring $\mathbb Z[\sqrt{-13}]/(2)$. I have so far:</p>
<p>$$\mathbb Z[\sqrt{-13}]/(2) \cong \mathbb Z[x]/(2, x^2 + 13) \cong \mathbb Z_2[x] / (x^2 + 1)$$</p>
<p>Now how do I simplify it further? I know that $x^2 + 1 = (x + 1)^2$ in $\mathbb Z_2[x]$, is this useful?</p>
| Mustafa | 400,050 | <p>$x^2+1=0 \Rightarrow x^2=-1 \equiv 1(mod 2) \Rightarrow x^3=x$</p>
<p>$f(x)=a_0+a_1x+a_2x^2+..+a_nx^n=a_0+a_1x+a_2(-1)+a_3(x)+...=a_0+a_1x(mod 2)$</p>
<p>$\Rightarrow \mathbb Z_2[x]/(x^2+1) =\{I,1+I, x+I,(1+x)+I \}; I=(x^2+1)$</p>
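<p>The arithmetic in this quotient can be checked mechanically. A small sketch (my own encoding: the coset of $a_0 + a_1x$ is the pair $(a_0, a_1)$ over $\mathbb{Z}_2$, with $x^2 \equiv 1$ used in the product). Note that $(1+x)^2 \equiv 0$, so the ring contains a nonzero nilpotent:</p>

```python
from itertools import product

def add(p, q):
    # coset addition in Z_2[x]/(x^2 + 1), cosets encoded as (a0, a1)
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

def mul(p, q):
    # (a0 + a1 x)(b0 + b1 x) = (a0 b0 + a1 b1) + (a0 b1 + a1 b0) x, using x^2 = 1
    a0, a1 = p
    b0, b1 = q
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0) % 2)

elements = list(product(range(2), repeat=2))   # the four cosets
square_of_1_plus_x = mul((1, 1), (1, 1))       # (1 + x)^2
```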
|
1,130,487 | <p>Jessica is playing a game where there are 4 blue markers and 6 red markers in a box. She is going to pick 3 markers without replacement.
If she picks all 3 red markers, she will win a total of 500 dollars. If the first marker she picks is red but not all 3 markers are red, she will win a total of 100 dollars. Under any other outcome, she will win 0 dollars. </p>
<p><strong>Solution</strong>
The probability of Jessica picking 3 consecutive red markers is: $\left(\frac16\right)$</p>
<p>The probability of Jessica's first marker being red, but not picking 3 consecutive red markers is:<br/>$\left(\frac35\right)-\left(\frac16\right)=\left(\frac{13}{30}\right)$
<br/>
So I am a bit stuck here.<br/></p>
<p><strong>What I think:</strong> it shouldn't be that complex; it should be as simple as the
chance of Jessica's first marker being red, i.e. the chance of getting red one time:
$P(\text{first marker red})=\left(\frac{6}{10}\right)$.
Can anyone explain why the probability of the first marker being red but not all three being red is $\left(\frac{13}{30}\right)$?</p>
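<p>The two probabilities quoted in the solution, and the $\frac{6}{10}$ above, can all be confirmed by exact enumeration of the ordered draws (a sketch using rational arithmetic):</p>

```python
from fractions import Fraction
from itertools import permutations

markers = ['R'] * 6 + ['B'] * 4            # 6 red, 4 blue
draws = list(permutations(range(10), 3))   # all ordered 3-marker draws, equally likely

def prob(event):
    hits = sum(1 for d in draws if event([markers[i] for i in d]))
    return Fraction(hits, len(draws))

p_all_red = prob(lambda d: d == ['R', 'R', 'R'])
p_first_red = prob(lambda d: d[0] == 'R')
p_first_red_not_all = prob(lambda d: d[0] == 'R' and d != ['R', 'R', 'R'])
```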
| agha | 118,032 | <p>Remember that:</p>
<p>$$\int \frac{2}{2x+2}dx \neq \ln(2x+2)$$</p>
<p>But:</p>
<p>$$\int \frac{2}{2x+2}dx = \ln(2x+2)+C$$</p>
<p>where $C$ is constant, so:</p>
<p>$$\int \frac{2}{2x+2}dx = \ln(2x+2)+C=\ln(2(x+1))+C=\ln(x+1)+\ln 2+C=\ln(x+1)+(\ln 2+C)=\ln(x+1)+C_1$$</p>
<p>So both calculations are almost correct, but you forgot about the constant.</p>
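<p>The point about the constant is easy to see numerically: the two antiderivatives differ by exactly $\ln 2$ everywhere (sketch):</p>

```python
import math

def F1(x):
    return math.log(2 * x + 2)   # one antiderivative of 2/(2x+2)

def F2(x):
    return math.log(x + 1)       # another antiderivative of the same function

diffs = [F1(x) - F2(x) for x in (0.0, 1.0, 5.0, 100.0)]
```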
|
386,172 | <p>The expression was simplified in the answer to <a href="https://math.stackexchange.com/questions/384592/finding-markov-chain-transition-matrix-using-mathematical-induction">this question</a>. I'm trying to simplify it but I got stuck. Multiplying all the factors and regrouping didn't work, but maybe I'm doing the wrong regrouping.</p>
<p>Also, I don't understand why the $(2p-1)^n$ dropped the exponent in the second line of the same solution.</p>
| Maazul | 74,820 | <p>Just take $\frac{1}{2}(2p-1)^n$ common and simplify.</p>
<p>$(p)(\frac{1}{2}(2p-1)^n) + (1-p)(-\frac{1}{2}(2p-1)^n)$</p>
<p>Taking $\frac{1}{2}(2p-1)^n$ common, we have</p>
<p>$\frac{1}{2}(2p-1)^n\left(p-(1-p)\right)$</p>
<p>$\frac{1}{2}(2p-1)^n\left(2p-1\right)$</p>
<p>$\frac{1}{2}(2p-1)^{n+1}$</p>
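<p>A quick exact check of the simplification over a grid of values (sketch, using rational arithmetic so there is no rounding):</p>

```python
from fractions import Fraction

def lhs(p, n):
    return p * Fraction(1, 2) * (2 * p - 1) ** n \
         + (1 - p) * (-Fraction(1, 2)) * (2 * p - 1) ** n

def rhs(p, n):
    return Fraction(1, 2) * (2 * p - 1) ** (n + 1)

identity_holds = all(
    lhs(Fraction(k, 10), n) == rhs(Fraction(k, 10), n)
    for k in range(11) for n in range(6)
)
```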
|
3,652,102 | <p>Let <span class="math-container">$(P,\le)$</span> be the poset.<br>
I have begun to solve this in the following way:
Note that, <span class="math-container">$rs-r\le rs-s\iff r\ge s$</span><br>
So, without loss of generality assume that <span class="math-container">$r\ge s$</span>, then <span class="math-container">$\operatorname{min}(rs-r,rs-s)=r(s-1)$</span><br>
As per the question, <span class="math-container">$P$</span> has more than <span class="math-container">$r(s-1)$</span> elements.<br>
So, let number of elements of <span class="math-container">$P$</span> is <span class="math-container">$r(s-1)+n$</span> where <span class="math-container">$n\in\Bbb{N}$</span>.<br>
Let us assume on contrary, <span class="math-container">$P$</span> neither has an anti-chain of size <span class="math-container">$r$</span> nor a chain of size <span class="math-container">$s$</span><br>
i.e. for any subset <span class="math-container">$A$</span> of <span class="math-container">$P$</span> with <span class="math-container">$r$</span> elements, <span class="math-container">$\exists a,b\in A$</span> such that either <span class="math-container">$a\le b$</span> or <span class="math-container">$b\le a$</span>.<br>
And for any subset <span class="math-container">$C$</span> of <span class="math-container">$P$</span> with <span class="math-container">$s$</span> elements, <span class="math-container">$\exists x,y\in C$</span> such that neither <span class="math-container">$x\le y$</span> nor <span class="math-container">$y\le x$</span>.<br>
Now, I cannot use the number of elements of <span class="math-container">$P$</span> to get a contradiction from the above assumption.<br>
Can anybody help me with this? Thanks for assistance in advance.</p>
| Clive Newstead | 19,542 | <p>Assume that <span class="math-container">$P$</span> has neither an antichain of size <span class="math-container">$r$</span> nor a chain of size <span class="math-container">$s$</span>.</p>
<p>Let <span class="math-container">$\{ a_1, a_2, \dots, a_k \}$</span> be a maximal antichain in <span class="math-container">$P$</span>.</p>
<p>Define <span class="math-container">$C_1$</span> to be a maximal chain in <span class="math-container">$P$</span> containing <span class="math-container">$a_1$</span>, and for each <span class="math-container">$1 \le i < k$</span>, let <span class="math-container">$C_{i+1}$</span> be a maximal chain in <span class="math-container">$P \setminus (C_1 \cup \cdots \cup C_i)$</span> containing <span class="math-container">$a_{i+1}$</span>.</p>
<p>You can verify that <span class="math-container">$k < r$</span>, that <span class="math-container">$|C_i| < s$</span> for each <span class="math-container">$i$</span>, that <span class="math-container">$C_1 \cup \cdots \cup C_k = P$</span>, and that the <span class="math-container">$C_i$</span> are all disjoint. It then follows that
<span class="math-container">$$|P| = \sum_{i=1}^k |C_i| \le \sum_{i=1}^k (s-1) = k(s-1) \le (r-1)(s-1)$$</span>
But
<span class="math-container">$$(r-1)(s-1) = (rs-r)+(1-s) = (rs-s)+(1-r)$$</span>
so <span class="math-container">$|P| \le \mathrm{min} \{ rs-r, rs-s \}$</span>, as required.</p>
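<p>The bound <span class="math-container">$|P| \le (\text{longest chain})\cdot(\text{largest antichain})$</span> underlying this argument can be sanity-checked by brute force on a small concrete poset. The sketch below uses my own example, divisibility on <span class="math-container">$\{1,\dots,12\}$</span>, and finds the longest chain and largest antichain exhaustively:</p>

```python
from itertools import combinations

P = list(range(1, 13))
leq = lambda a, b: b % a == 0   # partial order: a <= b iff a divides b

def is_chain(s):
    return all(leq(a, b) or leq(b, a) for a, b in combinations(s, 2))

def is_antichain(s):
    return all(not leq(a, b) and not leq(b, a) for a, b in combinations(s, 2))

subsets = [c for k in range(1, len(P) + 1) for c in combinations(P, k)]
longest_chain = max(len(s) for s in subsets if is_chain(s))
largest_antichain = max(len(s) for s in subsets if is_antichain(s))
```

<p>Here the longest chain is <span class="math-container">$1\mid 2\mid 4\mid 8$</span>, a largest antichain is <span class="math-container">$\{7,8,9,10,11,12\}$</span>, and indeed <span class="math-container">$12 \le 4\cdot 6$</span>.</p>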
|
3,982,937 | <p>To avoid typos, please see my screen captures below, and the red underline. The question says <span class="math-container">$h \rightarrow 0$</span>, thus why <span class="math-container">$|h|$</span> in the solution? Mustn't that <span class="math-container">$|h|$</span> be <span class="math-container">$h$</span>?</p>
<p><img src="https://i.stack.imgur.com/rweRh.jpg" alt="enter image description here" /></p>
<p>Spivak, <em>Calculus</em>, 4th edn, 2008. <a href="https://mathpop.com/" rel="nofollow noreferrer">His website's errata</a> lists no errata for these pages.</p>
| Vishu | 751,311 | <p><span class="math-container">$h$</span> might tend to zero from either side, so the statement <span class="math-container">$$-\delta \lt h \lt \delta \iff 0\lt |h| \lt \delta$$</span> is required.</p>
|
802,848 | <p>I am reading this book, <em>Gödel's Proof</em>, by James R. Newman, at location 117 (Kindle), it says,</p>
<blockquote>
<p>For <strong>various reasons</strong>, this axiom, (through a point outside a given line only one parallel to the line can be drawn), did not appear "self-evident" to the ancients.</p>
</blockquote>
<p>Any idea what the <strong>various reasons</strong> might be? It's self-evident enough to me.</p>
<p><strong>Edit</strong></p>
<p>Sorry, my bad, right after the sentence, (the above quote), there is a footnote, says that:</p>
<blockquote>
<p>The chief reason for this alleged lack of self-evidence seems to have been the fact that the parallel axiom makes an assertion about infinitely remote regions of space. Euclid defines parallel lines as straight lines in a plane that, "being produced indefinitely in both directions," do not meet. Accordingly, to say that two lines are parallel is to make the claim that the two lines will not meet even "at infinity." But the ancients were familiar with lines that, though they do not intersect each other in any finite region of the plane, do meet "at infinity." Such lines are said to be "asymptotic." Thus, a hyperbola is asymptotic to its axes. It was therefore not intuitively evident to the ancient geometers that from a point outside a given straight line only one straight line can be drawn that will not meet the given line even at infinity.</p>
</blockquote>
| Mauro ALLEGRANZA | 108,274 | <p>I think that it is not correct to say that "ancient hate the parallel postulate".</p>
<p>For sure, it is not so "self-evident" as others [but please, think at <a href="http://aleph0.clarku.edu/%7Edjoyce/java/elements/bookI/cn.html" rel="nofollow noreferrer">Common notion</a> n°5 : "The whole is greater than the part"; until Cantor it was "absolutely" self-evident].</p>
<p>The possible explanation, as per Gerry's comment, is that it involves the <em>infinite</em>, and the <em>infinite</em> is not so easy to manage ...</p>
<p>According to Boris Rosenfeld, <a href="https://rads.stackoverflow.com/amzn/click/com/0387964584" rel="nofollow noreferrer" rel="nofollow noreferrer">A History of Non-euclidean Geometry</a> (original ed.1976), page 36, Euclid was "aware" of this :</p>
<blockquote>
<p>Euclid tries to prove as many theorems as possible without using the fifth postulate. The first 28 propositions of Book I are so proved.</p>
</blockquote>
<p>According to Rosenfeld [page 40] :</p>
<blockquote>
<p>it seems that the first work devoted to this question was Archimedes' lost treatise <em>On parallel lines</em> that appeared a few decades after Euclid's <em>Elements</em>.</p>
</blockquote>
<p>The title of this work in known only through the list of Archimedes' works by ibn al-Nadim (ca.990), and</p>
<blockquote>
<p>it is possible that one of Ibn Qurra's preserved treatises on parallel lines represents an edited version of Archimedes' treatise.</p>
<p>[...] it is very likely that Archimedes used a definition of parallel lines different from Euclid's. [...] it is possible that Archimedes based his definition of parallel lines on distance.</p>
</blockquote>
<p><em>Added</em></p>
<p>As finely remarked by mau, the original definition and postulate are [see Thomas Heath, <a href="https://rads.stackoverflow.com/amzn/click/com/0486600882" rel="nofollow noreferrer" rel="nofollow noreferrer">The Thirteen Books of Euclid's Elements . Volume 1 : Introduction & Books I and II</a> (1908 - Dover reprint) ] :</p>
<blockquote>
<p><strong>Def 23</strong>: <em>Parallel</em> straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction.[page 154]</p>
<p><strong>Postulate 5</strong>: That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.[page 155]</p>
</blockquote>
<p>Heath's edition comments at length on the definitions and postulates: the commentary on P5 spans pages 202 to 220, with a great deal of information about the recorded attempts to prove it, from Proclus on.</p>
<p>Page 220 lists the most common alternatives to Euclid's version of the postulate; among them :</p>
<blockquote>
<p><strong>(I)</strong> <em>Through a given point only one parallel can be drawn to a given straight line</em> or, <em>Two straight lines which intersect one another cannot both be parallel to one and the same straight line</em>.</p>
<p>This is commonly known as "<a href="http://en.wikipedia.org/wiki/Playfair%27s_axiom" rel="nofollow noreferrer">Playfair's Axiom</a>" - from <a href="http://en.wikipedia.org/wiki/John_Playfair" rel="nofollow noreferrer">John Playfair</a> (10 March 1748 – 20 July 1819) - , but it was of course not a new discovery. It is distinctly stated in Proclus' note to Eucl.I.31.</p>
</blockquote>
|
876,763 | <p>Let $R$ be a commutative ring with identity. Assume that for any two principal ideals $Ra$ and $Rb$ we have either $Ra\subseteq Rb$ or $Rb\subseteq Ra$. Show that for any two ideals $I$ and $J$ in $R$, we have either $I\subseteq J$ or $J\subseteq I$.</p>
<p>Initially i thought that if i could show that any ideal in the ring is principal then i am done. But could not show what i thought of. Is my assumption to solve the problem correct? How can i proceed? Any hints would be highly appreciated. Thank you.</p>
| Community | -1 | <p>Suppose $I\not\subseteq J$ and $J\not\subseteq I$; then you have $x\in I\setminus J$ and $y\in J\setminus I$.</p>
<p>Now consider the principal ideals $Rx$ and $Ry$. By hypothesis either $Rx\subseteq Ry$ or $Ry\subseteq Rx$; in the first case $x\in Ry\subseteq J$, and in the second $y\in Rx\subseteq I$, a contradiction either way.</p>
|
2,134,167 | <p>So we finished studying chapter 5 of Rudin on differentiation (Mean value theorem, Taylor's theorem etc) and this was given as a homework problem:</p>
<p>Let $ f(x) $ be continuously differentiable on $ [0, \infty) $ such that $ f $ satisfies $ f'(x) = \cos(x^2)f(x) $ for all $ x \geq 0 $, with $ f(0) = 1 $. Prove that $ e^{-x} \leq f(x) \leq e^x $ for all $ x \geq 0 $.</p>
<p>Clearly, $ x = 0$ then the result is trivial. I tried to use Taylor's theorem to note that if $ x > 0 $, then there exists $ x_1 \in (0,x) $ such that $ f(x) = 1 + xf'(x_1) = 1 + x \cos(x_1^2)f(x_1) $. This is where I'm stuck, since I don't know what to do with the cosine function. Any hint/help/comment is greatly appreciated.</p>
| ztefelina | 326,591 | <p>$cos(x^2)\in [-1,1]$, so $f'(x)\geq-f(x)$ and $f'(x) \leq f(x)$. Multiplying the first relation with $e^x$ and the second with $e^{-x}$ you get $(f(x)e^x)' \geq 0$ and ($f(x)e^{-x})'\leq 0 $, so $f(x)e^x$ is increasing and $f(x)e^{-x}$ is decreasing. <br>
So $ f(x)e^x \geq f(0)e^0=1$, hence $f(x) \geq e^{-x}$.<br>
Also, $f(x)e^{-x} \leq 1$, hence $f(x) \leq e^x$.</p>
|
2,134,167 | <p>So we finished studying chapter 5 of Rudin on differentiation (Mean value theorem, Taylor's theorem etc) and this was given as a homework problem:</p>
<p>Let $ f(x) $ be continuously differentiable on $ [0, \infty) $ such that $ f $ satisfies $ f'(x) = \cos(x^2)f(x) $ for all $ x \geq 0 $, with $ f(0) = 1 $. Prove that $ e^{-x} \leq f(x) \leq e^x $ for all $ x \geq 0 $.</p>
<p>Clearly, $ x = 0$ then the result is trivial. I tried to use Taylor's theorem to note that if $ x > 0 $, then there exists $ x_1 \in (0,x) $ such that $ f(x) = 1 + xf'(x_1) = 1 + x \cos(x_1^2)f(x_1) $. This is where I'm stuck, since I don't know what to do with the cosine function. Any hint/help/comment is greatly appreciated.</p>
| victoria | 412,473 | <p>For all real x, $ -1 \leq $cos$ (x) \leq 1$</p>
<p>so $0 \leq $cos$^2 (x) \leq 1 \ \forall x \in R$</p>
<p>Given $f'(x) = $cos$^2 x \ f(x) \ \ \forall x \geq 0$, then</p>
<p>$|f'(x)| \leq |f(x)| \ \ \forall x\geq 0$ </p>
<p>We know if $f'(x) = f(x) $on$ \ R$ with $f(0) = 1$, then $f(x) = e^x$</p>
<p>You should be able to complete the proof from those two facts. </p>
|
2,697,729 | <p>Suppose that the probability that you will drop a penny on the ground is 1/5, and the probability that you will find a penny on the ground today is 1/4. If the two events are independent, what is the probability that at least one of the two events will occur?</p>
<p>First I tried simply $1/5+1/4$, which was incorrect. Then I tried $( 1/5 \cdot 100)+ (1/4 \cdot 100)$, which was incorrect as well. The correct answer is $2/5$ or $40\%$. </p>
| N. F. Taussig | 173,070 | <blockquote>
<p>First, I tried simply $1/5 + 1/4$, which was incorrect. </p>
</blockquote>
<p>Let's see why. </p>
<p>Suppose we want to find $\Pr(A \cup B)$. </p>
<p><a href="https://i.stack.imgur.com/kNcXq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kNcXq.jpg" alt="Venn_diagram_for_two_sets"></a></p>
<p>If we simply add $\Pr(A)$ and $\Pr(B)$, we will have added $\Pr(A \cap B)$ twice, once for each set in which the intersection is contained. We only want to include $\Pr(A \cap B)$ once, so we must subtract it from $\Pr(A) + \Pr(B)$ to find $\Pr(A \cup B)$, that is
$$\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$$</p>
<p>You did not subtract the probability that both events occurred from the sum of the probabilities that the individual events occurred.</p>
<blockquote>
<p>Then I tried $\frac{1}{5} \cdot 100 + \frac{1}{4} \cdot 100$, which was incorrect as well. </p>
</blockquote>
<p>You made two mistakes here. The first is described above. The other is that you meant to multiply by $100\%$ rather than $100$. Multiplying by $100$ gives you a probability greater than $1$. </p>
<blockquote>
<p>The probability that you will drop a penny on the ground is $1/5$; the probability that you will find a penny on the ground today is $1/4$. If the two events are independent, what is the probability that at least one of the two events will occur?</p>
</blockquote>
<p><strong>Method 1:</strong> Let $D$ be the event that you drop a penny; let $F$ be the event that you find a penny. Then
$$\Pr(D \cup F) = \Pr(D) + \Pr(F) - \Pr(D \cap F)$$
Since events $D$ and $F$ are independent, $\Pr(D \cap F) = \Pr(D)\Pr(F)$. Therefore,
\begin{align*}
\Pr(D \cup F) & = \Pr(D) + \Pr(F) - \Pr(D \cap F)\\
& = \Pr(D) + \Pr(F) - \Pr(D)\Pr(F)\\
& = \frac{1}{5} + \frac{1}{4} - \left(\frac{1}{5}\right)\left(\frac{1}{4}\right)\\
& = \frac{1}{5} + \frac{1}{4} - \frac{1}{20}\\
& = \frac{4}{20} + \frac{5}{20} - \frac{1}{20}\\
& = \frac{8}{20}\\
& = \frac{2}{5}
\end{align*}</p>
<p><strong>Method 2:</strong> We subtract the probability that neither event occurs from $1$. Let $D$ and $F$ be defined as above.
\begin{align*}
\Pr(D^C \cap F^C) & = 1 - \Pr(D \cup F)\\
& = 1 - [\Pr(D) + \Pr(F) - \Pr(D \cap F)]\\
& = 1 - \Pr(D) - \Pr(F) + \Pr(D \cap F)\\
& = 1 - \Pr(D) - \Pr(F) + \Pr(D)\Pr(F) && \text{since $D$ and $F$ are independent}\\
& = 1 - \Pr(D) - \Pr(F)[1 - \Pr(D)]\\
& = [1 - \Pr(D)][1 - \Pr(F)]\\
& = \Pr(D^C)\Pr(F^C)
\end{align*}
Hence, the independence of events $D$ and $F$ implies the independence of their complements. Thus,
\begin{align*}
\Pr(D^C \cap F^C) & = \Pr(D^C)\Pr(F^C)\\
& = [1 - \Pr(D)][1 - \Pr(F)]\\
& = \left[1 - \frac{1}{5}\right]\left[1 - \frac{1}{4}\right]\\
& = \left(\frac{4}{5}\right)\left(\frac{3}{4}\right)\\
& = \frac{3}{5}
\end{align*}
Therefore,
\begin{align*}
P(D \cup F) & = 1 - P(D^C \cap F^C)\\
& = 1 - \frac{3}{5}\\
& = \frac{2}{5}
\end{align*}</p>
<p><strong>Method 3:</strong> We justify the solution you provided in the comments. Let $D$ and $F$ be as above.<br>
$$\Pr(D \cup F) = \Pr(D \cap F^C) + \Pr(D^C \cap F) + P(D \cap F)$$
Since $D$ and $F$ are independent, $\Pr(D \cap F) = \Pr(D)\Pr(F)$. Therefore,
\begin{align*}
\Pr(D)\Pr(F^C) & = \Pr(D)[1 - \Pr(F)]\\
& = \Pr(D) - \Pr(D)\Pr(F)\\
& = \Pr(D) - \Pr(D \cap F) && \text{since $D$ and $F$ are independent}\\
& = \Pr(D \cap F^C)
\end{align*}
Hence, the independence of events $D$ and $F$ implies the independence of $D$ and $F^C$. Interchanging the roles of $D$ and $F$ in the above argument shows the independence of $D^C$ and $F$. Hence,
\begin{align*}
\Pr(D \cup F) & = \Pr(D \cap F^C) + \Pr(D^C \cap F) + \Pr(D \cap F)\\
& = \Pr(D)\Pr(F^C) + \Pr(D^C)\Pr(F) + \Pr(D)\Pr(F)\\
& = \Pr(D)[1 - \Pr(F)] + [1 - \Pr(D)]\Pr(F) + \Pr(D)\Pr(F)\\
& = \left(\frac{1}{5}\right)\left[1 - \frac{1}{4}\right] + \left[1 - \frac{1}{5}\right]\left(\frac{1}{4}\right) + \left(\frac{1}{5}\right)\left(\frac{1}{4}\right)\\
& = \left(\frac{1}{5}\right)\left(\frac{3}{4}\right) + \left(\frac{4}{5}\right)\left(\frac{1}{4}\right) + \left(\frac{1}{5}\right)\left(\frac{1}{4}\right)\\
& = \frac{3}{20} + \frac{4}{20} + \frac{1}{20}\\
& = \frac{8}{20}\\
& = \frac{2}{5}
\end{align*}</p>
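<p>All three methods come down to the same exact arithmetic, which a few lines of rational computation confirm (sketch):</p>

```python
from fractions import Fraction

p_drop = Fraction(1, 5)   # P(D), dropping a penny
p_find = Fraction(1, 4)   # P(F), finding a penny
p_both = p_drop * p_find  # P(D and F), by independence

m1 = p_drop + p_find - p_both                                # Method 1: inclusion-exclusion
m2 = 1 - (1 - p_drop) * (1 - p_find)                         # Method 2: complement of "neither"
m3 = p_drop * (1 - p_find) + (1 - p_drop) * p_find + p_both  # Method 3: three disjoint cases
```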
|
131,322 | <p>A knot in S^3 is small if its complement does not contain a closed incompressible surface. Is it a generic property for knots, meaning that among all knots with less than $n$ crossings, the proportion of small knots goes to 1 when $n$ goes to infinity?</p>
| Robert Bruner | 6,872 | <p>Pardon me for not editing my previous answer, but this is really a different answer, not an improvement on my previous answer, which addresses different aspects of the (neighborhood of) the question.</p>
<p>The fact that $Z[x]/(fg) \longrightarrow Z[x]/(f) \times Z[x]/(g)$ is not iso here makes the attempt to replace $Z[x]/(1+x+\cdots+x^{n-1})$ by $\prod_d Z[x]/(\Phi_d(x))$ fail!</p>
<p>Take the simplest case, n=4. Let $R = Z[x]/(1+x+x^2+x^3)$, $R_1=Z[x]/(1+x)$ and $R_2 = Z[x]/(1+x^2)$. The map $1-x : R \longrightarrow R$ and the map $1-x : R_1 \times R_2 \longrightarrow R_1 \times R_2$ do not have the same cokernel! The cokernel of the former is Z/4, but the coker of the latter is $Z/2 \times Z/2$ ! The cokernel of $R \longrightarrow R_1 \times R_2$ is $Z/2$, but the map induced on this coker by 1-x is trivial. The snake lemma's ker-coker sequence is amusing here, and if I knew how to make mathJax display commutative diagrams, I'd have used fewer words to say all this.</p>
<p>This shows that Smith Normal Form is not invariant under monomorphisms! Lovely example.</p>
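<p>The two cokernels can also be told apart by machine. The sketch below is my own computation (basis $\{1,x,x^2\}$ for $R$, columns of $M$ the images of the basis under multiplication by $1-x$, using $x^3 = -(1+x+x^2)$): both cokernels have order $4 = |\det|$, but $2M^{-1}$ is not integral, so the cokernel on $R$ has an element of order 4 (it is $Z/4$), while for $R_2 = Z[x]/(1+x^2)$ the matrix $M_2$ has $2M_2^{-1}$ integral, so that cokernel, together with the $Z/2$ coming from $R_1$, is killed by 2:</p>

```python
from fractions import Fraction as Fr

# multiplication by (1 - x) on Z[x]/(1 + x + x^2 + x^3), basis {1, x, x^2};
# (1 - x)*x^2 = x^2 - x^3 = 1 + x + 2x^2 since x^3 = -(1 + x + x^2)
M = [[1, 0, 1],
     [-1, 1, 1],
     [0, -1, 2]]  # column j = image of the j-th basis vector

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def inv3(A):
    d = Fr(det3(A))
    # cyclic-index cofactor formula: absorbs the checkerboard signs for 3x3
    cof = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
            - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[Fr(cof[j][i]) / d for j in range(3)] for i in range(3)]  # adj^T / det

def is_integral(A):
    return all(f.denominator == 1 for row in A for f in row)

two_Minv_integral = is_integral([[2 * e for e in row] for row in inv3(M)])

# multiplication by (1 - x) on Z[x]/(1 + x^2), basis {1, x}: (1 - x)*x = 1 + x
M2 = [[1, 1], [-1, 1]]
det_M2 = M2[0][0] * M2[1][1] - M2[0][1] * M2[1][0]
M2_inv = [[Fr(M2[1][1], det_M2), Fr(-M2[0][1], det_M2)],
          [Fr(-M2[1][0], det_M2), Fr(M2[0][0], det_M2)]]
two_M2inv_integral = is_integral([[2 * e for e in row] for row in M2_inv])
```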
|
2,957,315 | <p><span class="math-container">$j_{1,1}$</span> denotes the first zero of the first Bessel function of the first kind. (That's a lot of firsts!) It's approximately equal to <span class="math-container">$3.83$</span>. My question is, is there any closed form expression for its value? Even a infinite series or infinite product that yields it would be good.</p>
<p>I ask because this value is used in physics, in the context of diffraction of light through a circular aperture, and students often make the mistake of thinking that the number just pops out of nowhere.</p>
| Geremia | 128,568 | <p>The series expansion is:</p>
<p><span class="math-container">$$J_{1}\left(x\right)=\sum_{n=0}^\infty\frac{(-1)^n\,x^{2n+1}}{2^{2n+1}\, n!\, (n+1)!}$$</span>
<span class="math-container">$$=\frac{x}{2} - \frac{x^{3}}{16} + \frac{x^{5}}{384} - \frac{x^{7}}{18432} + …$$</span></p>
<p>The denominators are Sloane sequence <a href="https://oeis.org/A002474" rel="nofollow noreferrer">A002474</a>.</p>
<p>Sloane sequence <a href="https://oeis.org/A115369" rel="nofollow noreferrer">A115369</a> is the first zero.</p>
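<p>With the alternating sign in place, the truncated series is accurate enough near $x \approx 3.83$ to locate $j_{1,1}$ by bisection (sketch; the truncation order and bracket are my own choices):</p>

```python
import math

def j1(x, terms=40):
    # truncated alternating series for J_1(x)
    return sum((-1) ** n * x ** (2 * n + 1)
               / (2 ** (2 * n + 1) * math.factorial(n) * math.factorial(n + 1))
               for n in range(terms))

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

j11 = bisect(j1, 3.0, 4.5)  # J_1(3) > 0 > J_1(4.5)
```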
|
259,734 | <p>Basically, what the title says. </p>
<p>Presumably, one could use the fact that monoidal categories (resp. strict monoidal categories) are one-object bicategories (resp. 2-categories) and use the Lack model structure on those, but I am unsure if this would work or not.</p>
| john | 8,751 | <p>There is a model structure on the category of monoidal categories and strict monoidal functors. A strict monoidal functor is a fibration / weak equivalence just when its underlying functor is a fibration / weak equivalence in Cat.</p>
<p>More generally for T a 2-monad with rank on Cat you can lift the model structure on Cat to the category of strict T-algebras and strict T-algebra morphisms: this generalises the above example. </p>
<p>For these results, see the discussion in Section 1.7 and Theorem 4.5 of Steve Lack's paper "Homotopy theoretic aspects of 2-monads": <a href="https://arxiv.org/abs/math/0607646" rel="nofollow noreferrer">https://arxiv.org/abs/math/0607646</a></p>
|
1,476,946 | <p>So, I'm just starting to peruse "Categories for the Working Mathematician", and there's one thing I'm uncertain on. Lets say I have three objects, $X,Y,Z$ and two arrows $f,g$ such that $X\overset {f} {\to}Y\overset {g} {\to}Z$. Does this necessitate the composition arrow exist so the diagram commutes, i.e must I have an $X\overset {h} {\to} Z$ such that $h=g\circ f$, or is it just that IF such an arrow $h$ exists, then it commutes? </p>
<p>The question came up when the book defined preorders, saying that they were transitive since we could associate arrows...I just wanted to make sure association of arrows actually mandates the creation of the direct arrow.</p>
| BrianO | 277,043 | <p>By the category axioms, every category is closed under composition of arrows where it's defined. So if $X\overset {f} {\to}Y\overset {g} {\to}Z$ exist, then the category also contains $X\overset {g \dot f}{\to}Z$.</p>
|
3,430,066 | <p><strong>Question:</strong></p>
<p>Calculate the integral </p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2}$$</span></p>
<p><strong>Attempted solution:</strong></p>
<p>I initially had two approaches. First was recognizing that the denominator looks like a quadratic equation. Perhaps we can factor it.</p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \int_0^1 \frac{dx}{e^{-2x}(e^x+1)(e^x+e^{2x}-1)}$$</span></p>
<p>To me, this does not appear productive. I also tried factoring out <span class="math-container">$e^x$</span> with a similar unproductive result.</p>
<p>The second was trying to make it into a partial fraction. To get to a place where this can efficiently be done, I need to do a variable substitution:</p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \Big[ u = e^x; du = e^x \, dx\Big] = \int_1^e \frac{u}{u^3+2u^2 - 1} \, du$$</span></p>
<p>This looks like partial fractions might work. However, the question is from a single variable calculus book and the only partial fraction cases that are covered are denominators of the types <span class="math-container">$(x+a), (x+a)^n, (ax^2+bx +c), (ax^2+bx +c)^n$</span>, but polynomials with a power of 3 is not covered at all. Thus, it appears to be a "too difficult" approach.</p>
<p>A third approach might be to factor the new denominator before doing partial fractions:</p>
<p><span class="math-container">$$\int_1^e \frac{u}{u^3+2u^2 - 1} \, du = \int_1^e \frac{u}{u(u^2+2u - \frac{1}{u})} \, du$$</span></p>
<p>However, even this third approach does not have a denominator that is suitable for partial fractions, since it lacks a u-free term.</p>

<p>What are some productive approaches that can get me to the end without resorting to partial fractions with polynomials of degree higher than <span class="math-container">$2$</span>?</p>
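As a sanity check on the substitution, both forms can be integrated numerically; a plain-Python Simpson sketch comparing them:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Original integrand on [0, 1]
I1 = simpson(lambda x: 1.0 / (math.exp(x) - math.exp(-2 * x) + 2), 0.0, 1.0)
# After u = e^x: integrand u / (u^3 + 2u^2 - 1) on [1, e]
I2 = simpson(lambda u: u / (u**3 + 2 * u**2 - 1), 1.0, math.e)

print(I1, I2)  # the two values agree (about 0.32)
```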
| zwim | 399,263 | <p>The change in <span class="math-container">$u=e^x$</span> </p>
<p>leads to a denominator of degree <span class="math-container">$3$</span>:</p>
<p><span class="math-container">$\displaystyle\int\dfrac{u}{u^3+2u^2-1}\mathop{du}=-\frac 12\ln\big|u^2+u-1\big|-\frac{3\sqrt{5}}{5}\tanh^{-1}\left(\frac{\sqrt{5}}5(2u+1)\right)+\ln\big|u+1\big|+C$</span></p>
<p><br>
It is possible to do slightly better, while considering <span class="math-container">$\tanh$</span>. </p>
<p>Since this function is symmetrical in <span class="math-container">$\pm x$</span>, we take the middle point from <span class="math-container">$e^x,e^{-2x}$</span> which is <span class="math-container">$e^{-x/2}$</span>.</p>
<p><br>
The change <span class="math-container">$u=\tanh(-x/2)$</span> </p>
<p>leads to a denominator of degree only <span class="math-container">$2$</span> which is simpler:</p>
<p><span class="math-container">$\displaystyle\int-\dfrac{u-1}{u^2+4u-1}\mathop{du}=-\frac 12\ln\big|u^2+4u-1\big|-\frac {3\sqrt{5}}5\tanh^{-1}\left(\frac{\sqrt{5}}5(u+2)\right)+C$</span></p>
|
3,757,864 | <p>Given a diagram like this,
<a href="https://i.stack.imgur.com/Xwum0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xwum0.png" alt="enter image description here" /></a></p>
<p>Where <span class="math-container">$O$</span> is the center and <span class="math-container">$OA = \sqrt{50}$</span>, <span class="math-container">$AB = 6$</span>, and <span class="math-container">$BC = 2$</span>. The question was to find the length of <span class="math-container">$OB$</span>. <span class="math-container">$\angle ABC = 90^o$</span></p>
<p>What I've done is so far:</p>
<p>I made the triangle <span class="math-container">$ABC$</span> and named <span class="math-container">$\angle BAC = \alpha$</span> . By trigonometry, I have the values for <span class="math-container">$\sin{\alpha}$</span> and <span class="math-container">$\cos{\alpha}$</span>. I get <span class="math-container">$\cos{\alpha}=\frac{6}{\sqrt{40}}$</span>.</p>
<p>Then I made the triangle <span class="math-container">$OCA$</span> and named <span class="math-container">$\angle OAB = \beta$</span> so <span class="math-container">$\angle OAC = \alpha + \beta$</span>. By using the cosine rule, I have <span class="math-container">$\cos(\alpha + \beta) = \frac{1}{\sqrt{5}}$</span>.</p>
<p>Using the formula, <span class="math-container">$\cos(\alpha + \beta) = \cos{\alpha}.\cos{\beta} - \sin{\alpha}.\sin{\beta}$</span> and making <span class="math-container">$\sin{\beta} = \sqrt{1 -\cos^2{\beta}}$</span> I finally get that <span class="math-container">$\cos{\angle OAB} = \frac{1}{\sqrt{2}}$</span>.</p>
<p>Finally, by using the cosine rule on the triangle <span class="math-container">$AOB$</span> I get <span class="math-container">$OB = \sqrt{26}$</span>.</p>
<p>My only problem is this takes me way too long! I am interested in a quicker way to do this (i.e. I now know that <span class="math-container">$\angle OAB = 45^o$</span> from trigonometry, but is there a quicker way to recognize it?)</p>
| Jaap Scherphuis | 362,967 | <p>Assuming <span class="math-container">$\angle ABC=90^o$</span> is given.</p>
<p>You can get there slightly quicker:<br />
By Pythagoras, <span class="math-container">$|AC|=\sqrt{40}$</span>.<br />
<span class="math-container">$OAC$</span> is isosceles, with <span class="math-container">$|OA|=|OC|=\sqrt{50}$</span>.<br />
You can then immediately get <span class="math-container">$\cos(\angle OAC)=\frac{|AC|/2}{|OA|} = \frac{\sqrt{40}/2}{\sqrt{50}}= \frac{1}{\sqrt{5}}$</span>.<br />
I don't yet see a way to shortcut the rest.</p>
<p>You could do it completely differently, by algebra. Use a coordinate system, centred on <span class="math-container">$B$</span>, and let <span class="math-container">$O$</span> be the point <span class="math-container">$(x,y)$</span>. Then we get two equations from the fact that <span class="math-container">$|OA|=|OC|=\sqrt{50}$</span>.</p>
<p><span class="math-container">$$x^2+(6-y)^2=50\\
(2-x)^2+y^2=50$$</span></p>
<p>There are fairly easily solved to give <span class="math-container">$y=1$</span>, <span class="math-container">$x=-5$</span>, from which you get <span class="math-container">$|OB|=\sqrt{26}$</span>.</p>
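The algebraic route can be sketched in a few lines of plain Python (substitute $x = 3y-8$ from the difference of the two equations, then solve the resulting quadratic):

```python
import math

# Circle equations with B at the origin, A = (0, 6), C = (2, 0):
#   x^2 + (6 - y)^2 = 50   and   (2 - x)^2 + y^2 = 50
# Subtracting them gives 4x - 12y + 32 = 0, i.e. x = 3y - 8.
# Substituting back: 10y^2 - 60y + 50 = 0.
a, b, c = 10.0, -60.0, 50.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]  # y = 1.0 and y = 5.0

for y in roots:
    x = 3 * y - 8
    print(y, x, math.hypot(x, y))  # |OB|, since B is the origin
# Both candidate centres are equidistant from A and C; the one matching
# the diagram is (x, y) = (-5, 1), giving |OB| = sqrt(26).
```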
|
83,167 | <p>Let $X$ be a compact complex surface of general type which a ball quotient. Is it true that $\pi_{1}(X)$ can not contain ${\mathbb{Z}}^{2}$ as a subgroup? What kind of infinite abelian groups can occur as a subgroup of $\pi_{1}(X)$?</p>
| Community | -1 | <p>I don't know what a ball quotient is, but the decisive value is the genus of $X$.
If the genus is zero, $X$ is the projective line; if the genus is one, $X$ is a torus and the fundamental group is ${\mathbb Z}^2$; if the genus is $\ge 2$, then $X$ is a quotient of the upper half plane and the fundamental group $\pi$ is a uniform torsion-free lattice in $G=PSL_2({\mathbb R})$. Therefore every element is semisimple and the centralizer of any element $g\ne 1$ in $G$ is a torus, so the centralizer in $\pi$ is isomorphic to $\mathbb Z$. This means that in the case of genus $\ge 2$ the answer is $\mathbb Z$ only.</p>
|
3,464,247 | <p>I'm wondering if my reasoning is justified when determining if a vector is in the span of of a set of vectors.</p>
<p><span class="math-container">$$T = \{(1, 1, 0), (-1, 3, 1)\}$$</span></p>
<p>For which <span class="math-container">$a$</span> is <span class="math-container">$(a^2, a+2, 2) \in span(T)$</span></p>
<p>I've come to the conclusion that there DNE an "<span class="math-container">$a$</span>" which could make the vector be in the span of set <span class="math-container">$T$</span>. However I'm not sure if my reasoning is justified. </p>
<p>For vector <span class="math-container">$V$</span> to be in the span of set <span class="math-container">$T$</span>,it would have to be a linear combination of the two vectors in set <span class="math-container">$T$</span>, a scalar of the vectors in set <span class="math-container">$T$</span>, or combination of both. </p>
<p>In order to satisfy the <span class="math-container">$2$</span> in the third entry of vector <span class="math-container">$V$</span>, we must have <span class="math-container">$2(-1,3,1)$</span>. While we have our <span class="math-container">$2$</span> in the third entry, it does not fit the 1st and 2nd entries of vector <span class="math-container">$V$</span>, so I would need to add/subtract a scalar multiple of the first vector <span class="math-container">$(1,1,0)$</span>. But I've realized that it can never fit the requirements of vector <span class="math-container">$V$</span>, so I've come to the conclusion that an "<span class="math-container">$a$</span>" that would make vector <span class="math-container">$V$</span> be in the span of set <span class="math-container">$T$</span> DNE.</p>
| Yuki.F | 336,662 | <p>Uh yes, the answer should be <span class="math-container">$DNE$</span> if the <span class="math-container">$a$</span> must be real.</p>
<p>You mentioned that <span class="math-container">$T = \{(1, 1, 0), (-1, 3, 1)\}$</span> and the required vectors is <span class="math-container">$(a^2, a+2, 2)$</span>. Assume there are real numbers <span class="math-container">$u, v$</span> such that <span class="math-container">$u(1, 1, 0) + v(-1, 3, 1) = (a^2, a + 2, 2)$</span>, i.e. <span class="math-container">\begin{align}u - v &= a^2, \\ u + 3v &= a + 2, \\ v &= 2.\end{align}</span>
Then it follows that <span class="math-container">$u = a - 4$</span> and the <span class="math-container">$1^{\text{st}}$</span> equation becomes <span class="math-container">$a^2 - a + 6 = 0$</span> which doesn't have a real solution.</p>
<p>And your thinking starts correctly. You just have to show, in each case, how/why setting the coefficients to any values won't work.</p>
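A one-line check that $a^2 - a + 6 = 0$ has no real root (negative discriminant):

```python
# Discriminant of a^2 - a + 6 = 0: b^2 - 4ac with a=1, b=-1, c=6
disc = (-1) ** 2 - 4 * 1 * 6
print(disc)  # -23 < 0, hence no real solution
```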
|
3,290,514 | <p>I need to make the navigation and guidance of a vehicle (a quadcopter) in a platform. This platform can be seen like this:</p>
<p><a href="https://i.stack.imgur.com/jeJ34.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeJ34.png" alt="enter image description here"></a></p>
<p>where the blue dots are the center of each square, and the <span class="math-container">$x$</span> distances are all the same, and the <span class="math-container">$y$</span> distances are all the same.</p>
<p>I need the distance between each blue dot to the center (the blue dot of the <span class="math-container">$(2;2)$</span>), but that distance depends on the <span class="math-container">$yaw$</span> angle. For example, if <span class="math-container">$yaw=0^\circ$</span>, the situation is like this:</p>
<p><a href="https://i.stack.imgur.com/QGwlE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QGwlE.png" alt="enter image description here"></a></p>
<p>and the distances are:</p>
<p><span class="math-container">$$d_{1;1} = (-d_x; -d_y)$$</span>
<span class="math-container">$$d_{1;2} = (-d_x; 0)$$</span>
<span class="math-container">$$d_{1;3} = (-d_x; d_y)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; -d_y)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; d_y)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (d_x; -d_y)$$</span>
<span class="math-container">$$d_{3;2} = (d_x; 0)$$</span>
<span class="math-container">$$d_{3;3} = (d_x; d_y)$$</span></p>
<p>If the situation is with <span class="math-container">$yaw=180^\circ$</span>:</p>
<p><a href="https://i.stack.imgur.com/Y0A6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0A6P.png" alt="enter image description here"></a></p>
<p>the distances are the same but with the opposite sign, i.e,</p>
<p><span class="math-container">$$d_{1;1} = (d_x; d_y)$$</span>
<span class="math-container">$$d_{1;2} = (d_x; 0)$$</span>
<span class="math-container">$$d_{1;3} = (d_x; -d_y)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; d_y)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; -d_y)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (-d_x; d_y)$$</span>
<span class="math-container">$$d_{3;2} = (-d_x; 0)$$</span>
<span class="math-container">$$d_{3;3} = (-d_x; -d_y)$$</span></p>
<p>If <span class="math-container">$yaw=90^\circ$</span>, the situation is like this:</p>
<p><a href="https://i.stack.imgur.com/B6a8b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B6a8b.png" alt="enter image description here"></a></p>
<p>and the distances (see the difference between <span class="math-container">$d_x$</span> and <span class="math-container">$d_y$</span>) would be:</p>
<p><span class="math-container">$$d_{1;1} = (-d_y; -d_x)$$</span>
<span class="math-container">$$d_{1;2} = (-d_y; 0)$$</span>
<span class="math-container">$$d_{1;3} = (-d_y; d_x)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; -d_x)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; d_x)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (d_y; -d_x)$$</span>
<span class="math-container">$$d_{3;2} = (d_y; 0)$$</span>
<span class="math-container">$$d_{3;3} = (d_y; d_x)$$</span></p>
<p>If <span class="math-container">$yaw = -90^\circ$</span>:</p>
<p><a href="https://i.stack.imgur.com/6Zk2f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Zk2f.png" alt="enter image description here"></a></p>
<p>the distances would be:</p>
<p><span class="math-container">$$d_{1;1} = (d_y; d_x)$$</span>
<span class="math-container">$$d_{1;2} = (d_y; 0)$$</span>
<span class="math-container">$$d_{1;3} = (d_y; -d_x)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; d_x)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; -d_x)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (-d_y; d_x)$$</span>
<span class="math-container">$$d_{3;2} = (-d_y; 0)$$</span>
<span class="math-container">$$d_{3;3} = (-d_y; -d_x)$$</span></p>
<p>I need to write a matrix that uses the information of the <span class="math-container">$yaw$</span> angle and returns the distances from each angle (not just 0, 90, -90 and 180, but also 1, 2, 3, ...)</p>
<p>I tried to write it but I couldn't find the solution.</p>
<p>Thank you very much. I really need this help</p>
<p>Edit: please note that the coordinate frame moves with the quadcopter, like in this image:</p>
<p><a href="https://i.stack.imgur.com/Vge0h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vge0h.png" alt="enter image description here"></a></p>
<p>Edit 2: for example, if <span class="math-container">$yaw=45^\circ$</span>, then the distance from <span class="math-container">$(3;3)$</span> to <span class="math-container">$(2;2)$</span> is <span class="math-container">$\sqrt{d_x^2+d_y^2}$</span> in <span class="math-container">$x$</span> and <span class="math-container">$0$</span> in <span class="math-container">$y$</span>.</p>
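One common way to encode this is to rotate the world-frame offset of each cell by $-yaw$ to get the body-frame distance. This is an assumption about the intended convention: it reproduces the $yaw=0$ and $yaw=180^\circ$ tables above exactly, but the $yaw=\pm 90^\circ$ tables additionally swap the roles of $d_x$ and $d_y$, which coincides with a rotation only when $d_x=d_y$ (consistent with Edit 2). A minimal sketch:

```python
import math

def body_distance(i, j, yaw_deg, dx, dy):
    """Body-frame offset of cell (i, j) from the centre cell (2, 2),
    assuming the body-frame vector is the world offset rotated by -yaw."""
    wx, wy = (i - 2) * dx, (j - 2) * dy          # world-frame offset
    t = math.radians(-yaw_deg)
    return (wx * math.cos(t) - wy * math.sin(t),
            wx * math.sin(t) + wy * math.cos(t))

# Reproduces the yaw = 0 and yaw = 180 tables above:
print(body_distance(1, 1, 0, 1.0, 2.0))    # approx (-dx, -dy)
print(body_distance(1, 1, 180, 1.0, 2.0))  # approx (dx, dy)
```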
| Claude Leibovici | 82,404 | <p>If I properly understand, you have $n$ data points $(x_i,y_i)$ $(i=1,2,\cdots,n)$ and you want to find the coordinates $(a,b)$ and the radius $r$ of the circle.</p>
<p>Before speaking about the function you could minimize, you can have reasonable <em>estimates</em> writing for each data point <span class="math-container">$i$</span>
<span class="math-container">$$f_i=(x_i-a)^2+(y_i-b)^2-r^2=0$$</span> Now, write for all possible <span class="math-container">$\frac {n(n-1)}2$</span> combinations [<span class="math-container">$i=1,2,\cdots,n-1$</span> and <span class="math-container">$j=i+1,\cdots,n$</span>]
<span class="math-container">$$f_j-f_i=0 \implies 2(x_i-x_j)a+2(y_i-y_j)b=(x_i^2+y_i^2)-(x_j^2+y_j^2)$$</span> and a linear regression will give <span class="math-container">$(a_*,b_*)$</span>. These being known, an <em>estimate</em> of <span class="math-container">$r$</span> could be obtained using
<span class="math-container">$$r^2_*=\frac 1 n\sum_{i=1}^n \big[(x_i-a_*)^2+(y_i-b_*)^2\big]$$</span> Now, comes the problem of the objective function <span class="math-container">$\Phi(a,b,r)$</span> that you really want to minimize. It could be
<span class="math-container">$$\Phi_1(a,b,r)=\sum_{i=1}^n\big[(x_i-a)^2+(y_i-b)^2-r^2\big]^2$$</span>
<span class="math-container">$$\Phi_2(a,b,r)=\sum_{i=1}^n\big[\sqrt{(x_i-a)^2+(y_i-b)^2}-r \big]^2$$</span>
This can easily be done using nonlinear regression or optimization; since you have good estimates, the calculations would converge quite fast. You could even use Newton-Raphson method for solving the equations
<span class="math-container">$$\frac{\partial \Phi} {\partial a}=0 \qquad \qquad\frac{\partial \Phi} {\partial b}=0 \qquad\qquad\frac{\partial \Phi} {\partial r}=0 $$</span></p>
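The pairwise-difference estimate can be sketched in a few lines of plain Python. The data here are hypothetical (points generated on an exact circle, so the estimate should recover the true parameters), and the $2\times 2$ normal equations are solved by hand:

```python
import math

# Hypothetical data: points on the circle with centre (3, -2), radius 5
pts = [(3 + 5 * math.cos(t), -2 + 5 * math.sin(t))
       for t in (0.1, 0.9, 1.7, 2.8, 4.0, 5.5)]

# Overdetermined linear system 2(xi-xj)a + 2(yi-yj)b = ri - rj,
# with ri = xi^2 + yi^2, over all pairs i < j.
rows, rhs = [], []
n = len(pts)
for i in range(n):
    for j in range(i + 1, n):
        (xi, yi), (xj, yj) = pts[i], pts[j]
        rows.append((2 * (xi - xj), 2 * (yi - yj)))
        rhs.append((xi**2 + yi**2) - (xj**2 + yj**2))

# Least squares via the 2x2 normal equations A^T A [a b]^T = A^T rhs
s11 = sum(u * u for u, v in rows)
s12 = sum(u * v for u, v in rows)
s22 = sum(v * v for u, v in rows)
t1 = sum(u * r for (u, v), r in zip(rows, rhs))
t2 = sum(v * r for (u, v), r in zip(rows, rhs))
det = s11 * s22 - s12 * s12
a = (s22 * t1 - s12 * t2) / det
b = (s11 * t2 - s12 * t1) / det
r = math.sqrt(sum((x - a)**2 + (y - b)**2 for x, y in pts) / n)
print(a, b, r)  # recovers (3, -2) and 5 for exact data
```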
|
1,239,806 | <p>I begin by contradiction. Assume that $n$ can be expressed as the sum of three squares. That is $n = a^2 + b^2 + c^2$. Now since $n \equiv 7 \pmod 8$ then $8 \mid n - 7$ so $8 \mid a^2 + b^2 + c^2 - 7$. But then I don't know how to proceed from here. Any ideas</p>
| Elaqqad | 204,937 | <p>Work modulo $8$: for every integer $x$ we have $x^2\equiv 0,1,4 \mod 8$, so if $n=a^2+b^2+c^2$ then we would need $a^2+b^2+c^2\equiv 7 \mod 8$.</p>
<p>Can three numbers from $0,1,4$ sum to $7$? </p>
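The last step can be verified by brute force over residues:

```python
squares = {x * x % 8 for x in range(8)}   # {0, 1, 4}
sums = {(p + q + r) % 8 for p in squares for q in squares for r in squares}
print(sorted(squares), sorted(sums))      # 7 never appears among the sums
```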
|
1,239,806 | <p>I begin by contradiction. Assume that $n$ can be expressed as the sum of three squares. That is $n = a^2 + b^2 + c^2$. Now since $n \equiv 7 \pmod 8$ then $8 \mid n - 7$ so $8 \mid a^2 + b^2 + c^2 - 7$. But then I don't know how to proceed from here. Any ideas</p>
| Adhvaitha | 228,265 | <p>This means $n$ is odd. Hence, if $n=a^2+b^2+c^2$, we have two possibilities.</p>
<ol>
<li><p>All of $a,b,c$ are odd. This means that $a^2\equiv1\pmod8$, $b^2\equiv1\pmod8$ and $c^2\equiv1\pmod8$. Hence,
$$n = a^2+b^2+c^2 \equiv 3\pmod8 \not\equiv 7\pmod8$$</p></li>
<li><p>Two of $a,b,c$ are even and the other is odd. Without loss of generality, assume $a,b$ are even and $c$ is odd. This means that $a^2\equiv0\pmod4$, $b^2\equiv0\pmod4$ and $c^2\equiv1\pmod4$. Hence,
$$n = a^2+b^2+c^2 \equiv 1\pmod4 \not\equiv 7\pmod8$$</p></li>
</ol>
|
4,112,942 | <p>Found this exercise in Serge Lang's <em>Introduction to Linear Algebra</em>:</p>
<blockquote>
<p>Find the rank of the matrix <span class="math-container">$$\begin{pmatrix} 1 &1 &0 &1 \\ 1 &2 &2 &1 \\ 3 &4 &2 &3 \end{pmatrix}$$</span></p>
</blockquote>
<hr />
<p>So my process to solve it is as follows. First, I set a system</p>
<p><span class="math-container">$$ x \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \end{pmatrix} + y \begin{pmatrix} 1 \\ 2 \\ 2 \\ 1 \end{pmatrix} + z \begin{pmatrix} 3 \\ 4 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} $$</span></p>
<p><span class="math-container">$$\begin{cases}
x + y + 3z = 0 \\
x + 2y + 4z = 0 \\
2y + 2z = 0 \\
x + y +3z = 0
\end{cases}$$</span></p>
<p>Immediately we see that we can ignore the last equation. Then we can subtract the first one from the second one so we get</p>
<p><span class="math-container">$$\begin{cases}
x + y +3z = 0 \\
y + z = 0 \\
2y + 2z = 0
\end{cases}$$</span></p>
<p>The third one is <span class="math-container">$2$</span> times the second one so we can remove it. Finally we have
<span class="math-container">$$\begin{cases}
x + y +3z = 0 \\
y + z = 0 \\
\end{cases}$$</span></p>
<p>Since this is a system of two equations in three unknowns, it has a non-trivial solution and thus, both are linearly dependent. Therefore, the rank is <span class="math-container">$1$</span></p>
<p>I must have made some mistake since the answers at the back of the book state that the solution is <span class="math-container">$2$</span> but I don't see where I'm wrong</p>
| Haf | 911,000 | <p>Thanks to the comments I found that I just have to reduce it to row echelon form, I guess I misunderstood ranks, the answer is</p>
<p><span class="math-container">$$\begin{aligned}
\begin{pmatrix}
1 &1 &0 &1 \\
1 &2 &2 &1 \\
3 &4 &2 &3 \end{pmatrix} &\overset{(2) - (1)}{\implies} \\
\begin{pmatrix}
1 &1 &0 &1 \\
0 &1 &2 &0 \\
3 &4 &2 &3
\end{pmatrix} &\overset{(3)-3\cdot (1)}{\implies} \\
\begin{pmatrix}
1 &1 &0 &1 \\
0 &1 &2 &0 \\
0 &1 &2 &0
\end{pmatrix} &\overset{(3) - (2)}{\implies} \\
\begin{pmatrix}
1 &1 &0 &1 \\
0 &1 &2 &0 \\
0 &0 &0 &0
\end{pmatrix}
\end{aligned}$$</span></p>
<p>Then, since it's in row echelon form, it's clear that there are two linearly independent rows, so the rank is 2.</p>
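The reduction can be double-checked mechanically; a small sketch computing the rank with exact rational arithmetic:

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank via Gaussian elimination over the rationals (no rounding)."""
    m = [[Fraction(x) for x in row] for row in mat]
    nrows, ncols = len(m), len(m[0])
    r = 0  # number of pivots found so far
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(nrows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(matrix_rank([[1, 1, 0, 1], [1, 2, 2, 1], [3, 4, 2, 3]]))  # 2
```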
|
4,112,942 | <p>Found this exercise in Serge Lang's <em>Introduction to Linear Algebra</em>:</p>
<blockquote>
<p>Find the rank of the matrix <span class="math-container">$$\begin{pmatrix} 1 &1 &0 &1 \\ 1 &2 &2 &1 \\ 3 &4 &2 &3 \end{pmatrix}$$</span></p>
</blockquote>
<hr />
<p>So my process to solve it is as follows. First, I set a system</p>
<p><span class="math-container">$$ x \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \end{pmatrix} + y \begin{pmatrix} 1 \\ 2 \\ 2 \\ 1 \end{pmatrix} + z \begin{pmatrix} 3 \\ 4 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} $$</span></p>
<p><span class="math-container">$$\begin{cases}
x + y + 3z = 0 \\
x + 2y + 4z = 0 \\
2y + 2z = 0 \\
x + y +3z = 0
\end{cases}$$</span></p>
<p>Immediately we see that we can ignore the last equation. Then we can subtract the first one from the second one so we get</p>
<p><span class="math-container">$$\begin{cases}
x + y +3z = 0 \\
y + z = 0 \\
2y + 2z = 0
\end{cases}$$</span></p>
<p>The third one is <span class="math-container">$2$</span> times the second one so we can remove it. Finally we have
<span class="math-container">$$\begin{cases}
x + y +3z = 0 \\
y + z = 0 \\
\end{cases}$$</span></p>
<p>Since this is a system of two equations in three unknowns, it has a non-trivial solution and thus, both are linearly dependent. Therefore, the rank is <span class="math-container">$1$</span></p>
<p>I must have made some mistake since the answers at the back of the book state that the solution is <span class="math-container">$2$</span> but I don't see where I'm wrong</p>
| Community | -1 | <p>The first two rows are independent (by inspection: they're not multiples of each other).</p>
<p>Then <span class="math-container">$r_3=2r_1+r_2$</span>, as you can see.</p>
<p>So the row rank is <span class="math-container">$2$</span>. So the rank is <span class="math-container">$2$</span>.</p>
|
4,515 | <p>I've been using the sentence:</p>
<blockquote>
<p>If a series converges then the limit of the sequence is zero</p>
</blockquote>
<p>as a criterion to prove that a series diverges (when $\lim \neq 0$) and I can understand the rationale behind it, but I can't find a <strong>formal proof</strong>.</p>
<p>Can you help me?</p>
| Gadi A | 1,818 | <p>If we know that the sequence converges and merely wish to show it converges to zero, then a proof by contradiction gives a little more intuition here (although the direct proofs are simple and beautiful). Assume $a_n\to a$ with $a>0$, then for all $n>N$ for some large enough $N$ we have $a_n > a/2$ (take $\varepsilon = a/2$ in the definition of the limit). Now the sum diverges: $\sum_{n>N}a_n > \sum_{n>N}a/2 = \infty$. A similar argument works when $a<0$.</p>
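For reference, the direct proof alluded to above is a one-liner once the partial sums are written explicitly:

```latex
\text{Let } s_n = \sum_{k=1}^{n} a_k.\ \text{If } s_n \to s, \text{ then}
\qquad a_n = s_n - s_{n-1} \;\longrightarrow\; s - s = 0 \quad (n \to \infty).
```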
|
463,239 | <p>Integrate $$\int{x^2(8x^3+27)^{2/3}}dx$$</p>
<p>I'm just wondering, what should I make $u$ equal to?</p>
<p>I tried to make $u=8x^3$, but it's not working. </p>
<p>Can I see a detailed answer?</p>
| GeoffDS | 8,671 | <p>Now, $u = 8x^3 + 27$ is a choice that makes less work for you. But, one thing that students should understand is even if you don't make the best choice, it still might work. You tried $u = 8x^3$. In that case $du = 24x^2 \,dx$ and your integral becomes
$$\int{x^2(8x^3+27)^{2/3}}dx = \frac{1}{24} \int (u+27)^{2/3} \,du.$$
You might think to yourself, well, I can't do anything with that. But, I am telling you that you can. What if that was your original integral? Might you not try a substitution of $v = u + 27$ with $dv = du$? That would give you
$$\frac{1}{24} \int v^{2/3} \,dv = \frac{1}{24} \cdot \frac{3}{5} v^{5/3} + C.$$
At this point, you can undo both substitutions.</p>
<p>This method, in the end, is basically the same thing as doing $u = 8x^3 + 27$ in the first place but it's okay to make a choice that's not the best in the first place. Just try to keep going from there. If you can make the integral simpler at each step, then you are getting closer to solving the problem.</p>
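The antiderivative $\frac{1}{40}(8x^3+27)^{5/3}$ obtained by undoing both substitutions can be checked numerically (plain-Python Simpson's rule):

```python
import math

f = lambda x: x**2 * (8 * x**3 + 27) ** (2 / 3)   # integrand
F = lambda x: (8 * x**3 + 27) ** (5 / 3) / 40     # antiderivative found above

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

print(simpson(f, 0, 1), F(1) - F(0))  # the two values agree (about 3.29)
```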
|
877,646 | <p>Friends,I have a set of matrices of dimension $3\times3$ called $A_i$. ,</p>
<p>Following are the given conditions</p>
<p>a) each $A_i$ is non invertible <strong>except $A_0$</strong> because their determinant is zero.</p>
<p>b) $\sum_{i=0}^\infty A_i$ is invertible and its determinant is not zero</p>
<p>c) </p>
<ol>
<li><p>This is the recursion available for $A_i$,
$ A_{n}=\frac{1}{n} \{C_1* A_{n-1} +C_2 * A_{n-2}\} \tag 1$, where $A_0$ = Constant matrix ,$A_1$ =Constant matrix </p></li>
<li><p>$C_1,C_2 $ are constant matrices. $A_1$ and $A_0$ are initial values.
$A_0,A_1,C_1,C_2,A_n $ have dimension $3\times 3$</p></li>
<li><p>$C_1,C_2,C_1+C_2$ etc. are skew-symmetric matrices, non-commutative, and with zero diagonals </p></li>
<li><p>$A_n$ are converging series. Means last terms will be approaching to zero or very very small values</p></li>
<li><p>Determinant of $C_1*A_{n-1}$ and $C_2*A_{n-2}$ both are zero {Logic : det($C_1A_{n-1}$)=det($C_1$)det($A_{n-1}$),=0*det($A_{n-1}$),$=0 $ }</p></li>
<li><p>Given that SUM= $ \sum_{n=0}^{\infty} A_n \ne 0 $.</p></li>
<li><p>Let $S(x) = \sum_{n=0}^\infty A_nx^n$, $SUM=S(1)$. <strong><em>Given that $S(1)$ is invertible</em></strong>. Remember we still have not proved that $S(x)$ is invertible; all we know from the given conditions is that $S(1)$ is invertible. </p></li>
</ol>
<p><strong>Question</strong>
From the given conditions, can we say that $S(x)=\sum_{n=0}^\infty A_nx^n$ is invertible? If so, how do we prove it? ($x$ is not a matrix, it is just a variable)</p>
| Urgje | 95,681 | <p>Write the sin-function as the difference of two exponentials and obtain the corresponding integrals by contour integration. This gives the result:
\begin{eqnarray*}
I &=&\int_{-\infty }^{+\infty }dx\frac{1}{x-i}\sin x=\int_{-\infty
}^{+\infty }dx\frac{1}{x-i}\frac{1}{2i}\left\{ e^{ix}-e^{-ix}\right\} \\
\int_{-\infty }^{+\infty }dx\frac{1}{x-i}e^{ix} &=&2\pi ie^{-1} \\
\int_{-\infty }^{+\infty }dx\frac{1}{x-i}e^{-ix} &=&0 \\
I &=&\frac{1}{2i}2\pi ie^{-1}=\frac{\pi }{e}
\end{eqnarray*}</p>
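A numerical check of the value $\pi/e$: writing $\frac{1}{x-i}=\frac{x+i}{x^2+1}$, the imaginary part of the integrand is odd, so only $\int x\sin x/(x^2+1)\,dx$ contributes. The tail decays slowly, so the truncation point $R$ below is chosen large (an arbitrary choice):

```python
import math

def simpson(g, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

# Real part of sin(x)/(x - i) is x*sin(x)/(x^2 + 1); the imaginary part is odd.
g = lambda x: x * math.sin(x) / (x * x + 1)
R = 200 * math.pi                  # truncation of the slowly decaying tail
I = simpson(g, -R, R, 200_000)
print(I, math.pi / math.e)         # approximately equal
```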
|
1,380,697 | <p>I am currently an undergraduate and thinking about applying to graduate school for math. The problem is that I don't know what field I want to go into. Taking graduate classes confuses me even more, because the more I learn the less I know what specifically I want to do. My question is: where can I find information about different fields of mathematics? Maybe you can recommend some good journals with overviews of top or popular areas of math. I already spoke with my professors and asked graduate students about their histories, but I think my knowledge of math in a broad sense grows more slowly than I would like. </p>
<p>Maybe there is some good website with people chatting about different fields of research? What about conferences: is there a conference available for undergrads about top trends in mathematics? </p>
<p>All sources and all answers are welcome. </p>
<p>I am mostly interested in pure math, but I also like applied math. </p>
| Sinister Cutlass | 235,860 | <p>If you are in the U.S., you could try attending AMS Sectional Meetings. These sorts of conferences happen often, and they feature talks in a huge number of different topics in current mathematics. Otherwise, I'm sure that in your locale, there are probably analogous conferences.</p>
<p>Also, you could sign up for email notifications from arXiv mathematics. You will get daily notifications of new papers in your choice of mathematical subfields.</p>
|
1,634,520 | <p>I <a href="https://math.stackexchange.com/questions/1632455/mathematical-meaning-of-certain-integrals-in-physics/1633320#1633320">have been told</a> that the Helmholtz decomposition theorem says that</p>
<blockquote>
<p>every <em>smooth</em> vector field $\boldsymbol{F}$ [where I am not sure what precise assumptions are needed on $\boldsymbol{F}$] on an opportune
region $V\subset\mathbb{R}^3$ [satisfying certain conditions for whose precisation I would be very grateful to any answerer] can be expressed as $$ \boldsymbol{F}(\boldsymbol{x})=-\nabla\left[\int_{V}\frac{\nabla'\cdot \boldsymbol{F}(\boldsymbol{x}')}{4\pi\|\boldsymbol{x}-\boldsymbol{x}'\|}dV'-\oint_{\partial V}\frac{\boldsymbol{F}(\boldsymbol{x}')\cdot\hat{\boldsymbol{n}}(\boldsymbol{x}')}{4\pi\|\boldsymbol{x}-\boldsymbol{x}'\|}dS'\right]$$ $$+\nabla\times\left[\int_{V}\frac{\nabla'\times \boldsymbol{F}(\boldsymbol{x}')}{4\pi\|\boldsymbol{x}-\boldsymbol{x}'\|}dV'+\oint_{\partial V}\frac{\boldsymbol{F}(\boldsymbol{x}')\times\hat{\boldsymbol{n}}(\boldsymbol{x}')}{4\pi\|\boldsymbol{x}-\boldsymbol{x}'\|}dS'\right]$$[where I suppose that the $\int_V$ integrals are intended as limits of Riemann integrals or Lebesgue integrals].</p>
</blockquote>
<p>I think I have been able to prove it (<a href="https://math.stackexchange.com/questions/1634520/helmholtz-theorem/1698958#1698958">below</a>), interpreting the integrals as Lebesgue integrals with $dV'=d\mu'$ where $\mu'$ is the usual three-dimensional Lebesgue measure, for a compactly supported $\boldsymbol{F}\in C^2(\mathbb{R}^3)$ with $\boldsymbol{x}\in \mathring{V}$ and $V$ satisfying the hypothesis of Gauss's divergence theorem.</p>
<p>Nevertheless, I am also interested in proofs of it under less strict assumptions on $\boldsymbol{F}$. What are the usual assumptions -I am particularly interested in the assumptions done in physics- on $\boldsymbol{F}$ and how can the theorem proved in that case? I heartily thank any answerer.</p>
<p>I think that it would be interesting to generalise it to some space containing $C_c^2(\mathbb{R}^3)$ whose functions have "smoothness" properties usually considered true in physics, where the Helmholtz decomposition is much used, but I am not able to find such a space and prove the desired generalisation.</p>
| Self-teaching worker | 111,138 | <p>Let $\boldsymbol{F}:\mathbb{R}^3\to \mathbb{R}^3$, $\boldsymbol{F}\in C^2( \mathbb{R}^3)$ compactly supported and let $V\subset\mathbb{R}^3$ be a region satisfying the hypothesis of Gauss's divergence theorem, with $\boldsymbol{x}\in \mathring{V}$ contained in its interior.</p>
<p>By using <a href="https://math.stackexchange.com/questions/1675487/a-differentiation-under-the-integral-sign/1675841#1675841">this result</a> with $f(z)=\|z\|^{-1}$ and $g=F_i$, $i=1,2,3$, and taking the validity of <a href="https://math.stackexchange.com/questions/1625017/diracs-delta-in-3-dimensions-proof-of-nabla2-boldsymbolx-boldsymbolx/1625086#1625086">this</a> for $\varphi\in C_c^2(\mathbb{R}^3)$ into account, we see that $$\boldsymbol{F}(\boldsymbol{x})=-\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla'^2\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'=-\frac{1}{4\pi}\nabla^2\int_{\mathbb{R}^3} \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$which, according to a known <a href="https://en.wikipedia.org/wiki/Vector_calculus_identities#Curl_of_the_curl" rel="nofollow noreferrer">identity</a> valid for $C^2$ vector fields, equates $$\frac{1}{4\pi} \nabla\times\left[\nabla\times\int_{\mathbb{R}^3} \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'\right]-\frac{1}{4\pi}\nabla\left[\nabla\cdot \int_{\mathbb{R}^3} \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'\right]$$which, according to <a href="https://math.stackexchange.com/questions/1675487/a-differentiation-under-the-integral-sign/1675841#1675841">this</a> again, is$$\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3} \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_{\mathbb{R}^3} \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$ $$=\frac{1}{4\pi} \nabla\times\int_V \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_V \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$ $$+\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} 
\frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_{\mathbb{R}^3\setminus V} \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'\quad\quad (1)$$ where the existence of the curls and divergences of the last member of the equality is guaranteed for example by the following lemma:</p>
<blockquote>
<p>Let $\varphi:V\subset\mathbb{R}^3\to\mathbb{R}$ be bounded and $\mu_{\boldsymbol{y}}$-measurable, with $\mu_{\boldsymbol{y}}$ as the usual $3$-dimensional Lebesgue measure, where $V$ is bounded and measurable (according to the same measure). Let us define, for all $\boldsymbol{x}\in\mathbb{R}^3$, $$\Phi(\boldsymbol{x}):=\int_V \frac{\varphi(\boldsymbol{y})}{\|\boldsymbol{x}-\boldsymbol{y}\|}d\mu_{\boldsymbol{y}}$$then $\Phi\in C^1(\mathbb{R}^3)$ and, for $k=1,2,3$, $$\forall\boldsymbol{x}\in\mathbb{R}^3\quad\quad\frac{\partial \Phi(\boldsymbol{x})}{\partial x_k}=\int_V\frac{\partial}{\partial x_k} \left[\frac{\varphi(\boldsymbol{y})}{\|\boldsymbol{x}-\boldsymbol{y}\|}\right]d\mu_{\boldsymbol{y}}=\int_V \varphi(\boldsymbol{y})\frac{y_k-x_k}{\|\boldsymbol{x}-\boldsymbol{y}\|^3}d\mu_{\boldsymbol{y}}$$</p>
</blockquote>
<p>which is proved <a href="https://www.physicsforums.com/threads/commutation-integral-derivative-in-deriving-amperes-law.857283/#post-5380956" rel="nofollow noreferrer">here</a>. The integrand functions of the integrals above <a href="https://en.wikipedia.org/wiki/Vector_calculus_identities#Divergence_2" rel="nofollow noreferrer">are</a>$$ \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}=\nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]-\nabla'\left[\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \times\boldsymbol{F}(\boldsymbol{x}')$$$$=\nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]+\nabla\left[\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \times\boldsymbol{F}(\boldsymbol{x}')$$$$=\nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]+\nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]$$ and $$ \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}=\nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]-\nabla'\left[\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \cdot\boldsymbol{F}(\boldsymbol{x}')$$$$=\nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]+\nabla\left[\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \cdot\boldsymbol{F}(\boldsymbol{x}')$$$$=\nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]+\nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] $$
and therefore $$\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_{\mathbb{R}^3\setminus V} \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$ $$=\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] + \nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$$$-\frac{1}{4\pi}\nabla\int_{\mathbb{R}^3\setminus V} \nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] + \nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'.$$
Let us choose a parallelepiped (or sphere) $R$ such that $\boldsymbol{F}$ and its derivatives vanish outside its interior and such that $\bar{V}\subset \mathring {R}$, and let us call $E:=\overline{R\setminus V}$. By using <a href="http://planetmath.org/differentiationundertheintegralsign" rel="nofollow noreferrer">Leibniz's rule</a> we see that
$$\int_E \nabla\times\left[\nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]\right] d^3x'=\nabla\times\int_E \nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d^3x'$$$$= \nabla\times\int_{\mathbb{R}^3\setminus V} \nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$and
$$\int_E \nabla\left[\nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \right]d^3x'=\nabla\int_E \nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d^3x'$$$$=\nabla\int_{\mathbb{R}^3\setminus V} \nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$and their difference is
$$\nabla\times\int_{\mathbb{R}^3\setminus V} \nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'-\nabla\int_{\mathbb{R}^3\setminus V} \nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$$$=\int_E \nabla\times\left[\nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]\right] - \nabla\left[\nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] \right]d^3x'$$$$=\int_E-\nabla^2\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d^3x'=\mathbf{0}$$ and so
$$\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_{\mathbb{R}^3\setminus V} \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$ $$=\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]d\mu' +\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \nabla\times\left[\frac{ \boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$$$-\frac{1}{4\pi}\nabla\int_{\mathbb{R}^3\setminus V} \nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu' -\frac{1}{4\pi} \nabla\int_{\mathbb{R}^3\setminus V} \nabla\cdot\left[\frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu'$$$$=\frac{1}{4\pi} \nabla\times\int_{\mathbb{R}^3\setminus V} \nabla'\times\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right]d\mu' -\frac{1}{4\pi}\nabla\int_{\mathbb{R}^3\setminus V} \nabla'\cdot\left[ \frac{\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}\right] d\mu' $$which, by using Gauss's theorem of divergence and the derived $\int_D\nabla\times\mathbf{F}d^3x=\int_{\partial D} \hat{\mathbf{n}}\times\mathbf{F}d\sigma$ identity, we can see to equate$$\frac{1}{4\pi} \nabla\times\int_{\partial E} \frac{\hat{\boldsymbol{n}}(\boldsymbol{x'})\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\sigma' -\frac{1}{4\pi}\nabla\int_{\partial E} \frac{\boldsymbol{F}(\boldsymbol{x}')\cdot\hat{\boldsymbol{n}}(\boldsymbol{x'})}{\|\boldsymbol{x}-\boldsymbol{x}'\|} d\sigma' $$$$=-\frac{1}{4\pi} \nabla\times\int_{\partial V} \frac{\hat{\boldsymbol{n}}(\boldsymbol{x'})\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\sigma' 
+\frac{1}{4\pi}\nabla\int_{\partial V} \frac{\boldsymbol{F}(\boldsymbol{x}')\cdot\hat{\boldsymbol{n}}(\boldsymbol{x'})}{\|\boldsymbol{x}-\boldsymbol{x}'\|} d\sigma' $$where the last member is derived by taking the fact that the normal on $\partial V$ is the opposite of the normal to the internal frontier surface of $E$ into account, as well as the nullity of $\boldsymbol{F}$ on the external surface. By substituting this in the expression $(1)$ we conclude that $$\boldsymbol{F}(\boldsymbol{x})=\frac{1}{4\pi} \nabla\times\int_V \frac{\nabla'\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu' -\frac{1}{4\pi}\nabla \int_V \frac{\nabla'\cdot\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\mu'$$$$-\frac{1}{4\pi} \nabla\times\int_{\partial V} \frac{\hat{\boldsymbol{n}}(\boldsymbol{x'})\times\boldsymbol{F}(\boldsymbol{x}')}{\|\boldsymbol{x}-\boldsymbol{x}'\|}d\sigma' +\frac{1}{4\pi}\nabla\int_{\partial V} \frac{\boldsymbol{F}(\boldsymbol{x}')\cdot\hat{\boldsymbol{n}}(\boldsymbol{x'})}{\|\boldsymbol{x}-\boldsymbol{x}'\|} d\sigma' .$$I do not accept this answer of mine because, since many statements usually found in texts of physics do not specify particular assumptions, I suspect that much more relaxed assumptions on $\boldsymbol{F}$ might be intended.</p>
|
1,910,109 | <p>$$\int \frac{1}{\sqrt{x} (1 - 3\sqrt{x})}$$</p>
<p>I tried with the substitution $u = 1-3\sqrt{x}$</p>
<p>I am confused with how to finish this problem I know I am supposed to substitute $u$ and $\text{d}u$ in but I am not sure how to finish it.</p>
| windircurse | 331,921 | <p><strong>Hint:</strong> Try $u=\sqrt x$, $du=\frac{1}{2\sqrt x}\,dx$</p>
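<p>Carrying the hint out, the integral becomes $\int \frac{2}{1-3u}\,du = -\frac{2}{3}\ln|1-3u|+C$ with $u=\sqrt x$. A numerical sanity check of the resulting antiderivative (a sketch; the sample points and tolerance are arbitrary choices):</p>

```python
import math

# Antiderivative obtained from u = sqrt(x): integral of 2/(1-3u) du = -(2/3) ln|1-3u| + C
def integrand(x):
    return 1.0 / (math.sqrt(x) * (1 - 3 * math.sqrt(x)))

def antiderivative(x):
    return -(2.0 / 3.0) * math.log(abs(1 - 3 * math.sqrt(x)))

# central-difference check that the antiderivative differentiates back to the integrand
for x in (0.5, 2.0, 7.0):          # sample points away from x = 1/9
    h = 1e-6
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5
print("ok")
```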
|
1,057,819 | <p>The number $128$ can be written as $2^n$ with integer $n$, and so can its every individual digit. Is this the only number with this property, apart from the one-digit numbers $1$, $2$, $4$ and $8$? </p>
<p>I have checked a lot, but I don't know how to prove or disprove it. </p>
| Michal Paszkiewicz | 177,915 | <p>No value greater than 128 and less than 2^30,000 follows this pattern, as I have found with this tool:</p>
<p><a href="http://www.michalpaszkiewicz.co.uk/maths/series/powers-of-two.html" rel="nofollow">http://www.michalpaszkiewicz.co.uk/maths/series/powers-of-two.html</a></p>
<p>You can use this tool to try and find such a value, but it would probably be easier to prove no such number exists using number theory.</p>
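<p>A short Python sketch of the same brute-force search (the exponent cutoff of 2000 is an arbitrary choice, far smaller than the 30,000 checked above):</p>

```python
# Scan powers of 2 and keep those whose every decimal digit is itself
# a power of 2 (i.e. one of 1, 2, 4, 8).
good_digits = set("1248")
hits = [2**n for n in range(2000) if set(str(2**n)) <= good_digits]
print(hits)  # -> [1, 2, 4, 8, 128]
```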
|
1,057,819 | <p>The number $128$ can be written as $2^n$ with integer $n$, and so can its every individual digit. Is this the only number with this property, apart from the one-digit numbers $1$, $2$, $4$ and $8$? </p>
<p>I have checked a lot, but I don't know how to prove or disprove it. </p>
| Fractalic | 197,946 | <p>Here's some empirical evidence (not a proof!).</p>
<p>Here are the first powers which increase the length of 1-2-4-8 runs in the least significant digits (last digits):</p>
<pre>
Power 0: 1 digits. ...0:1
Power 7: 3 digits. ...0:128
Power 18: 4 digits. ...6:2144
Power 19: 5 digits. ...5:24288
Power 90: 6 digits. ...9:124224
Power 91: 7 digits. ...9:8248448
Power 271: 8 digits. ...0:41422848
Power 1751: 9 digits. ...3:242421248
Power 18807: 10 digits. ...9:8228144128
Power 56589: 14 digits. ...3:21142244442112
Power 899791: 16 digits. ...9:8112118821224448
Power 2814790: 17 digits. ...6:42441488812212224
Power 7635171: 19 digits. ...5:2288212218148814848
Power 39727671: 20 digits. ...6:48844421142411214848
Power 99530619: 21 digits. ...6:142118882828812812288
Power 233093807: 25 digits: ...0:2821144412811214484144128
Power 22587288091 : 26 digits: ...9:81282288882824141181288448
</pre>
<p>As is easy to see, this run length grows very slowly: only $25$ of the $233093807\cdot\log_{10}2 \approx 70168227$ decimal digits are powers of 2. A run of 25 is nowhere near 70168227.</p>
<p>Let's look at it deeper. Consider $2^k$ having $n$ decimal digits (obviously $n \le k$). Let's say $2^k \mod 5^n = a$. Then by CRT we can recover $2^k \mod 10^n$ (note that $2^k \equiv 0 \pmod {2^n}$):</p>
<p>$$f(a) \equiv 2^k \equiv a \cdot 2^n \cdot (2^{-n})_{\mod 5^n} \pmod{10^n}$$</p>
<p>Now <strong>let's assume</strong> that $2^k$ randomly goes over $\mod 5^n$. How many elements $x \in \{0,\dots,5^n-1\}$ are there such that $f(x) \mod 10^n$ consists only of the digits 1-2-4-8? There are at most $4^n$ such values (all possible combinations of 1-2-4-8 in $n$ positions) out of $5^n$ (remark: we should consider only numbers coprime with 5; there are $5^{n-1}\cdot 4$ of those). So if $2^k$ <em>acts randomly</em>, then the probability of getting a 1-2-4-8 value is bounded by $(4/5)^{n-1}$, which decreases exponentially with $n$ (and $k$).</p>
<p>Now how many powers of 2 do we have for each $n$? A constant amount, 3 or 4! So, if for some small $n$ there are no such values, then very probably it is also true for all larger numbers.</p>
<p>Remark: this may seem like a mouthful; nearly the same explanation could be given modulo $10^n$. But in my opinion it's easier to believe that $2^k$ goes randomly over residues $\mod 5^n$ than $\mod 10^n$. <strong>EDIT</strong>: Also, $2$ is a primitive root modulo $5^n$ for any $n$, so it indeed runs over all $5^{n-1}\cdot 4$ accounted values.</p>
<p>Remark 2: the exact amount of $x$, such that $f(x) \mod 10^n$ consist from digits 1-2-4-8, from experiments:</p>
<pre>
...
n=15: 54411 / 30517578125 ~= 2^-19.0973107004
n=16: 108655 / 152587890625 ~= 2^-20.4214544789
n=17: 216803 / 762939453125 ~= 2^-21.7467524186
n=18: 433285 / 3814697265625 ~= 2^-23.0697489411
n=19: 866677 / 19073486328125 ~= 2^-24.3914989097
n=20: 1731421 / 95367431640625 ~= 2^-25.7150367656
...
</pre>
<p><strong>UPD:</strong></p>
<p>The fact that $2$ is a primitive root modulo $5^n$ is quite important.</p>
<ol>
<li><p>We can use it to optimize search for first powers of 2 which increase the 1-2-4-8 run length (first data in this post). For example, for $n=3$ only $13$ of $5^2*4=100$ values correspond to 1-2-4-8 3-digit endings. For $\mod 1000$, period for powers of 2 is equal to order of the group, namely $100$. It means we need to check only 13 of each 100 values. I managed to build the table for $n=20$ which speeds up the computation roughly by $2^{25}$. Sadly each next $n$ for the table is much harder to compute, so this approach does not scale efficiently.</p></li>
<li><p>For arbitrary $n$ we can quite efficiently find some $k$ such that the last $n$ digits of $2^k$ are all powers of two.<br>
Let's assume that for some $n$ we know such a $k_0$. Consider $a = 2^{k_0} \mod 10^n$. We want to construct $k'$ for $n+1$. Let's look at $a \mod 2^{n+1}$. It's either $0$ or $2^{n}$. If it's $0$, we can set the $(n+1)$-th digit to any of $0,2,4,6,8$; in particular $2,4,8$ will fit our goal. If it's $2^{n}$, then we need to set the $(n+1)$-th digit to any of $1,3,5,7,9$; in particular $1$ will be fine for us.<br>
After setting the new digit we have some value $a' < 10^{n+1}$. We now find $k'$ by calculating the discrete logarithm of $a'$ base $2$ in the group $(\mathbb{Z}/5^{n+1}\mathbb{Z})^*$. It must exist, because $2$ is a primitive root modulo $5^{n+1}$ and $a'$ is not divisible by five (meaning that it falls into the multiplicative group).</p></li>
</ol>
<p>Summing up, it's possible to extend any existing 1-2-4-8 tail either with $1$, or with any of $2,4,8$. For example, tail of length 1000, consisting only of 1 and 2:</p>
<pre><code>sage: k
615518781133550927510466866560333743905276469414384071264775976996005261582333424319169262744569107203933909361398767112744041716987032548389396865721149211989700277761744700088281526981978660685104481509534097152083798849174934153949598921020219299137483196605381824600377210207048355959118105502326334547495384673753323864726827644650703466356156319492521379682428275201262134907960967634887658195264018797348236155773958687977059474419550906257366056229915615067527218040720408353328787880060032847746927391316869927283585312014157952623949696812057481086276896651244409107902992111507870787820359137244857060839675634572294938878098506151681269336043213294287160464665102314138635395739226878089
sage: print pow(2, k, 10**1000)
1112221212111222212111211212211112121221111221112211221212222212222111211212111122221111222222211112222211112122111222122222212222111221111112211122121122221212111122212112211122121121212211211221122111111111111121211111211212222212112121222221221122111222221222222122212221212111121111112111211222111111211222222222112222212112211212121122212122222211111121112122122112112122222212121121222221112121221222221121122221121222121112111121221221212211121221122121122122122112112112111222212111111221121211211122222122211122211211222122122211112121121111211222211211212211112111212121212111222221212221211212222121122221211112222211221121221211211221222211112121221222122112122221221221221221222211122222222222222222222111121122221121121212111222211112122112112222112221212111112121221221121211221111121212111111121212222212211222122122212112211221221112222121221212121121112111222221122221221121111212121211211211221121211211121122122211212221112122111122212112212121112121121122111112111211111212122112
</code></pre>
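<p>The table of record run lengths at the top of this answer can be reproduced with a short script (a sketch; the scan range of 2000 is an arbitrary cutoff covering the first eight records):</p>

```python
# Length of the trailing run of digits 1/2/4/8 in the decimal expansion of 2^k.
def run_length(k):
    s = str(2**k)
    n = 0
    while n < len(s) and s[-1 - n] in "1248":
        n += 1
    return n

# Record exponents: the first k achieving each new maximum run length.
best, records = 0, []
for k in range(2000):
    length = run_length(k)
    if length > best:
        best = length
        records.append((k, length))
print(records)
# -> [(0, 1), (7, 3), (18, 4), (19, 5), (90, 6), (91, 7), (271, 8), (1751, 9)]
```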
|
345,065 | <p>Let f and g be functions of one real variable and define $F(x,y)=f[x+g(y)]$. Find formulas for all the partial derivatives of F of first and second order.</p>
<p>For the first order, I think we have:</p>
<p>$\frac{\partial F}{\partial x}=\frac{\partial f}{\partial x}+ \frac{\partial f}{\partial y}$</p>
<p>$\frac{\partial F}{\partial y}=\frac{\partial f}{\partial x}g'(x)+ \frac{\partial f}{\partial y}g'(y)$</p>
<p>Is it correct? What are the second order derivatives?</p>
<p>Thank you</p>
| user27182 | 22,020 | <p>you need to differentiate $f$ by its argument, then differentiate the argument by $x$ or $y$</p>
<p>Setting $\xi = x + g(y)$,</p>
<p>$\frac{d F}{d x}=\frac{df}{d \xi} \frac{d \xi}{dx} = \frac{df}{d \xi}$</p>
<p>and </p>
<p>$\frac{d F}{d y}=\frac{df}{d \xi} \frac{d \xi}{dy} = \frac{df}{d \xi} g'(y)$</p>
<p>You seem to have looked up the chain rule, but just didn't notice that $f$ has one argument only, so you can probably do the $2^{nd}$ order ones ok.</p>
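<p>For the second order the same pattern gives $\frac{\partial^2 F}{\partial x^2}=f''(\xi)$, $\frac{\partial^2 F}{\partial x\,\partial y}=f''(\xi)\,g'(y)$ and $\frac{\partial^2 F}{\partial y^2}=f''(\xi)\,g'(y)^2+f'(\xi)\,g''(y)$. A finite-difference sanity check, with $f=\sin$ and $g(y)=y^2$ chosen purely for illustration:</p>

```python
import math

# Concrete (assumed) choice: f(t) = sin(t), g(y) = y^2, so F(x, y) = sin(x + y^2)
f, df, d2f = math.sin, math.cos, lambda t: -math.sin(t)
g, dg, d2g = lambda y: y * y, lambda y: 2 * y, lambda y: 2.0

F = lambda x, y: f(x + g(y))
x, y, h = 0.7, -1.3, 1e-5
xi = x + g(y)

# first order: F_x = f'(xi), F_y = f'(xi) g'(y)
assert abs((F(x + h, y) - F(x - h, y)) / (2 * h) - df(xi)) < 1e-6
assert abs((F(x, y + h) - F(x, y - h)) / (2 * h) - df(xi) * dg(y)) < 1e-6

# second order: F_xx = f''(xi), F_yy = f''(xi) g'(y)^2 + f'(xi) g''(y)
assert abs((F(x + h, y) - 2 * F(x, y) + F(x - h, y)) / h**2 - d2f(xi)) < 1e-3
assert abs((F(x, y + h) - 2 * F(x, y) + F(x, y - h)) / h**2
           - (d2f(xi) * dg(y)**2 + df(xi) * d2g(y))) < 1e-3
print("ok")
```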
|
4,622,956 | <p>I think <span class="math-container">$\,9\!\cdot\!10^n+4\,$</span> can be a perfect square, since it is <span class="math-container">$0 \pmod 4$</span> (a quadratic residue modulo <span class="math-container">$4$</span>), and <span class="math-container">$1 \pmod 3$</span> (also a quadratic residue modulo <span class="math-container">$3$</span>).<br />
But when I tried to find if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> is a perfect square, I didn’t succeed. Can someone help me see if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> can be a perfect square ?</p>
| sirous | 346,566 | <p>Comment:</p>
<p>Let's try constructing such a number. Suppose we have:</p>
<p><span class="math-container">$k^2-9\times 10^n=4$</span></p>
<p>We use the following well-known Pell's equation:</p>
<p><span class="math-container">$x^2-Dy^2=1$</span></p>
<p>For <span class="math-container">$D=10$</span> we have <span class="math-container">$x=19$</span> and <span class="math-container">$y=6$</span> such that:</p>
<p><span class="math-container">$19^2-10\times 6^2=1$</span></p>
<p>multiplying both sides by <span class="math-container">$2^2$</span> we get:</p>
<p><span class="math-container">$38^2-10\times 12^2=4$</span></p>
<p>we rewrite this as:</p>
<p><span class="math-container">$38^2-9\times(4^2\times 10)=4$</span></p>
<p>multiplying both sides by <span class="math-container">$25^2$</span> we get:</p>
<p><span class="math-container">$(25\times 38)^2-9\times 10^5=4\times 25^2=2500=2496+4$</span></p>
<p>Or:</p>
<p><span class="math-container">$[(25\times 38)^2-2496]-9\times 10^5=4$</span></p>
<p>Or generally:</p>
<p><span class="math-container">$A=[(25\times 38)\times 10^m-(25\times 10^{2m}+96)]$</span></p>
<p>Here, in the equation <span class="math-container">$k^2-9\times 10^n=4$</span>, we have <span class="math-container">$n=2m+1$</span>.</p>
<p><span class="math-container">$A$</span> must be a perfect square. Maybe by brute force we can find such a number.</p>
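<p>A quick check in Python is consistent with no solution existing. In fact, modulo $11$ one has $9\times 10^n+4\equiv 2$ for even $n$ and $\equiv 6$ for odd $n$, and neither $2$ nor $6$ is a quadratic residue mod $11$, so $9\times 10^n+4$ is never a perfect square. A brute-force sketch of the identities above and of this non-square property (the cutoff of 200 is arbitrary):</p>

```python
import math

# Verify the Pell identities used above, then brute-force that 9*10^n + 4
# is not a perfect square for n up to 200 (consistent with the mod-11
# argument: 9*10^n + 4 is 2 or 6 mod 11, neither a quadratic residue).
assert 19**2 - 10 * 6**2 == 1
assert 38**2 - 10 * 12**2 == 4
assert {(9 * 10**n + 4) % 11 for n in range(1, 50)} == {2, 6}

for n in range(1, 201):
    v = 9 * 10**n + 4
    r = math.isqrt(v)
    assert r * r != v   # v falls strictly between consecutive squares
print("ok")
```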
|
831,763 | <p>The following equation $$e^{i+z}e^{iz}=1$$ is to be solved for $z$. I have tried
$$
\begin{eqnarray}
e^{i+z+iz} = 1\\
i+z+iz=0\\
z= -{i \over 1+i} = -{i(1+i)\over 2} = \frac12-i\frac12
\end{eqnarray}
$$
However, I am not at all sure that's correct. Somehow I suspect trigonometry should creep into the answer.</p>
| Rene Schipperus | 149,912 | <p>You have $$e^{i+(1+i)z}=1$$ this means that
$$i+(1+i)z=2\pi i n$$</p>
<p>So $$z=\frac{(2\pi n -1)i}{1+i}=\left(\pi n -\frac{1}{2}\right) (1+i)$$</p>
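<p>A quick numerical check of this solution family, sketched with Python's <code>cmath</code> (the range of $n$ is arbitrary):</p>

```python
import cmath
import math

# z = (pi*n - 1/2)(1 + i) should satisfy e^{i+z} e^{iz} = 1 for every integer n
for n in (-2, -1, 0, 1, 2):
    z = (math.pi * n - 0.5) * (1 + 1j)
    value = cmath.exp(1j + z) * cmath.exp(1j * z)
    assert abs(value - 1) < 1e-9
print("ok")
```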
|
2,904,603 | <p>I'm working on the following question:</p>
<blockquote>
<p>Show that $G$ is a group if and only if, for every $a, b \in G$,
the equations $xa = b$ and $ay = b$ have solutions $x, y \in G$.</p>
</blockquote>
<p>I'm having trouble getting started because I'm not understanding what it means for "the equations $xa = b$ and $ay = b$ have solutions $x, y \in G$". Do they mean, there's only 1 left multiplier that takes $a$ to $b$ and one right multiplier that takes $a$ to $b$?</p>
| Robert Lewis | 67,071 | <p>I think our colleagues stressed out and ACTOH have made clear what we're looking for here. </p>
<p>I would phrase it like this: </p>
<p>Let $G \ne \emptyset$ be a set with an associative binary operation; then $G$ is a group if and only if for any $a, b \in G$ there exist $x, y \in G$ such that $xa = b$ and $ay = b$. </p>
<p>The details of the proof of this little proposition are in and of themselves worth seeing; basic and classic stuff.</p>
<p>The "only if" direction is pretty obvious, so I'll focus on the "if" direction here.</p>
<p>So suppose</p>
<p>$xa = b \tag 1$always has a solution. Then taking</p>
<p>$b = a \tag 2$</p>
<p>we have some $e_l$ such that</p>
<p>$e_l a = a; \tag 3$</p>
<p>I claim that in fact</p>
<p>$\forall b \in G, \; e_l b = b; \tag 4$</p>
<p>for since </p>
<p>$\exists y \in G, \; ay = b, \tag 5$</p>
<p>we have</p>
<p>$e_l b = e_l (ay) = (e_la)y = ay = b; \tag 6$</p>
<p>likewise, by reversing more or less the roles played by the equations $xa = b$ and $ay = b$, we find the existence of an $e_r \in G$ such that</p>
<p>$\forall b \in G, \; be_r = b; \tag 7$</p>
<p>then</p>
<p>$e_l = e_l e_r = e_r; \tag 8$</p>
<p>taking</p>
<p>$e = e_l = e_r, \tag 9$</p>
<p>we have our multiplicative identity for $G$. We proceed: taking $b = e$ in the equation $xa = b$; then there is an $a' \in G$ with</p>
<p>$a'a = e; \tag{10}$</p>
<p>likewise, from $ay = b$, again with $b = e$, we have $a'' \in G$ such that</p>
<p>$aa'' = e; \tag{11}$</p>
<p>thus,</p>
<p>$a'' = ea'' = (a'a)a'' = a'(aa'') = a'e = a'; \tag{12}$</p>
<p>we may now, in the light of (10)-(12), take</p>
<p>$a^{-1} = a' = a'', \tag{13}$</p>
<p>and we have found the inverse for $a \in G$.</p>
<p>Since we now have the associative binary operation on $G$ is possessed of an identity $e \in G$, and that every $a \in G$ is possessed of an inverse element $a^{-1}$, we conclude that $G$ is indeed a group.</p>
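<p>The "if" direction can also be sanity-checked by brute force on a small finite example. Here $\Bbb Z_4$ under addition (an arbitrary choice) plays the role of $G$, and the identity and inverses are recovered exactly as in the proof:</p>

```python
from itertools import product

# Z_4 under addition mod 4: a finite set with an associative operation.
n = 4
op = lambda a, b: (a + b) % n
G = range(n)

# solvability: xa = b and ay = b always have solutions
assert all(any(op(x, a) == b for x in G) for a, b in product(G, G))
assert all(any(op(a, y) == b for y in G) for a, b in product(G, G))

# the identity produced by the proof: solve e*a = a for one fixed a ...
a = 1
e = next(x for x in G if op(x, a) == a)
# ... and check it works for every element, as in (4) and (7)
assert all(op(e, b) == b and op(b, e) == b for b in G)

# inverses produced by the proof: solve x*a = e, as in (10)-(13)
for a in G:
    inv = next(x for x in G if op(x, a) == e)
    assert op(inv, a) == e and op(a, inv) == e
print("ok")
```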
|
3,085,591 | <p>I'm trying to prepare for an exam and came across the following question:</p>
<blockquote>
<p>Given <span class="math-container">$n \times n$</span> matrix <em>A</em>, let <em>U</em> represent row space and <em>W</em>
represent column space. </p>
<p>A) Prove: <span class="math-container">$W \subseteq$</span> <span class="math-container">$U^{\perp} $</span> if and only if <span class="math-container">$A^2=0$</span></p>
<p>B) Prove that if <span class="math-container">$dim(U)>n/2$</span>, then <span class="math-container">$A^2 \ne 0$</span>.</p>
</blockquote>
<p>I was able to do part A pretty easily as follows: </p>
<p><span class="math-container">$\Rightarrow$</span>: Let rows of <em>A</em> be {<span class="math-container">$ \vec{r}_1, \vec{r}_2 ,...,\vec{r}_n $</span>} and columns be {<span class="math-container">$ \vec{c}_1, \vec{c}_2 ,...,\vec{c}_n $</span>}. Assume <span class="math-container">$W \subseteq$</span> <span class="math-container">$U^{\perp} $</span>. Entry <span class="math-container">$a_{ij}$</span> (in row <em>i</em>, column <em>j</em> of matrix <span class="math-container">$A^2=AA$</span>) equals the dot product of <span class="math-container">$\vec{r}_i \cdot \vec{c}_j $</span>, and by assumption <span class="math-container">$\vec{c}_j$</span> is orthogonal to <span class="math-container">$ \vec{r}_1, \vec{r}_2 ,...,\vec{r}_n $</span>, so it follows that <span class="math-container">$a_{ij}=0$</span> for all <em>i,j</em>. Thus <span class="math-container">$A^2=0$</span></p>
<p><span class="math-container">$\Leftarrow$</span>: Assume <span class="math-container">$A^2=0$</span>. We can represent the columns of <span class="math-container">$A^2$</span> as </p>
<p><span class="math-container">$$ \pmatrix{\vec{r}_1 \\ \vec{r}_2 \\ \vdots \\ \vec{r}_n} \cdot \vec{c}_1 , \pmatrix{\vec{r}_1 \\ \vec{r}_2 \\ \vdots \\ \vec{r}_n} \cdot \vec{c}_2 , ... , \pmatrix{\vec{r}_1 \\ \vec{r}_2 \\ \vdots \\ \vec{r}_n} \cdot \vec{c}_n ,$$</span>
But by assumption, <span class="math-container">$A^2=0$</span>, meaning all columns of <span class="math-container">$A^2$</span> are zero and all the above dot products are zero. Hence, <span class="math-container">$\vec{c}_i$</span> is orthogonal to all vectors in {<span class="math-container">$\vec{r}_1, \vec{r}_2 ,...,\vec{r}_n $</span>}, and thus to all linear combinations of those vectors. Since this applies to all <span class="math-container">$\vec{c}_i$</span>, it follows that <span class="math-container">$W \subseteq$</span> <span class="math-container">$U^{\perp} $</span>.</p>
<p>As for Part B, I'm having trouble and not sure exactly where to begin. Presumably, I would use the results from part A, meaning that if <span class="math-container">$A^2 \ne 0$</span>, then <span class="math-container">$W \not\subseteq$</span> <span class="math-container">$U^{\perp} $</span>, that is, there is some vector in column space that is not orthogonal to every vector in row space.</p>
<p>Some properties I'm aware of that may be useful but not sure if or how:</p>
<ul>
<li><span class="math-container">$dim(U)=dim(W)$</span> </li>
<li>The orthogonal complement of the row space is null space</li>
<li>the dimension of the row space and nullspace equals <span class="math-container">$n$</span></li>
<li>dimension of column space of <em>A</em> equals dimension of row space of <span class="math-container">$A^T$</span></li>
</ul>
<p>Anyway, any tips would be greatly appreciated!</p>
| P Vanchinathan | 28,915 | <p>Let us regard a matrix as a linear transformation. </p>
<p><span class="math-container">$A^2=0$</span> means the range of <span class="math-container">$A$</span> is contained in the kernel of <span class="math-container">$A$</span>
(<span class="math-container">$A^2v=0$</span> means <span class="math-container">$A(Av)=0$</span>, i.e. <span class="math-container">$Av\in \ker A$</span>).
This is not possible, as the dimension of the range of <span class="math-container">$A$</span> is the dimension of the column (row) space of <span class="math-container">$A$</span>, which is given to be <span class="math-container">$>n/2$</span>, while the kernel dimension, by the rank-nullity theorem, is <span class="math-container">$<n/2$</span>.</p>
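<p>The bound in this argument is tight: a minimal sketch (pure Python, with <span class="math-container">$n=2$</span>) of a matrix whose row space has dimension exactly <span class="math-container">$n/2$</span> and whose square vanishes:</p>

```python
# A = [[0, 1], [0, 0]] has rank 1 = n/2 and A^2 = 0, so the hypothesis
# dim(U) > n/2 in part B cannot be weakened to dim(U) >= n/2.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [0, 0]]
assert matmul(A, A) == [[0, 0], [0, 0]]
print("ok")
```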
|
2,533,960 | <p>So, the given expression is
$$\binom{2n}{2} = 2\binom{n}{2}+n^2$$</p>
<p>The task is to give a combinatorial proof for it.</p>
<p>The left side of the identity is obviously equal to the number of ways of choosing 2 elements out of a set of cardinality $2n$.</p>
<p>What troubles me is that I can't think of any way to split that count into two disjoint cases with $2\binom{n}{2}$ and $n^2$ options respectively (which is, I believe, what is meant to happen).</p>
<p>Any hints would be helpful.</p>
| Especially Lime | 341,019 | <p>Hint: split the set into two halves and consider whether the selected elements are in the same half.</p>
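<p>Splitting the $2n$-set into two halves, pairs inside the halves give $2\binom{n}{2}$ choices and pairs straddling the halves give $n^2$. The identity itself is easy to confirm numerically (a quick sketch using <code>math.comb</code>, Python 3.8+):</p>

```python
import math

# C(2n, 2) = 2*C(n, 2) + n^2: same-half pairs plus cross-half pairs
for n in range(1, 200):
    assert math.comb(2 * n, 2) == 2 * math.comb(n, 2) + n * n
print("ok")
```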
|
66,068 | <p>I have a list like this. </p>
<pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175,1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., Failed, 50., 25., "serial"}}
</code></pre>
<p>In the end, I want to compile a new list by dropping any rows that have Failed in the third column. </p>
<pre><code>datalistfunc[input_] :=
Module[{cell, cell2, celltable, celllist},
i = 1;
celllist = {};
While[i < Length@cdatalist + 1,
cell =
Select[cdatalist[[i]][[1 ;; 3]],
Head[cdatalist[[i]][[3]]] == Real &];
i = If[i < Length@cdatalist + 1, i + 1, Length@cdatalist + 1];
celllist = AppendTo[celllist, cell2];
Print[cell2]
]
]
datalist = datalistfunc[cdata];
</code></pre>
<p>My list looks like this after filtering. </p>
<pre><code>{{},{}}
{{1.,1.31175,1.},{}}
{{},{}}
{{1.,19.6628,0.990079},{}}
{{},{}}
{{},{}}
{{},{}}
{{1.,40.6208,0.980588},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{15.,1.31344,1.},{}}
</code></pre>
<p>Instead, I want my list to look like this. </p>
<pre><code>{{1.,1.31175,1.},
{1.,19.6628,0.990079},
{1.,40.6208,0.980588},
{15.,1.31344,1.}}
</code></pre>
| alancalvitti | 801 | <pre><code>cdatalist // Dataset // Query[Select[#[[3]] != "Failed" &], 1 ;; 3] // Normal
</code></pre>
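<p>For illustration, the same select-then-slice logic sketched in Python, with the Failed entries modeled as the string <code>"Failed"</code> and only a subset of the data shown:</p>

```python
# Keep only rows whose third entry is a number, then take the first three columns.
cdatalist = [
    [1.0, 0.898785, "Failed", "Failed", 50.0, 25.0, "serial"],
    [1.0, 1.31175, 1.0, "Failed", 50.0, 25.0, "serial"],
    [1.0, 19.6628, 0.990079, "Failed", 50.0, 25.0, "serial"],
    [1.0, 40.6208, 0.980588, "Failed", 50.0, 25.0, "serial"],
    [15.0, 1.31344, 1.0, "Failed", 50.0, 25.0, "serial"],
]
datalist = [row[:3] for row in cdatalist if row[2] != "Failed"]
print(datalist)
# -> [[1.0, 1.31175, 1.0], [1.0, 19.6628, 0.990079],
#     [1.0, 40.6208, 0.980588], [15.0, 1.31344, 1.0]]
```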
|
2,252,090 | <p>Someone posed this question to me on a forum, and I have yet to figure it out. If $a,b,c,d$ are the zeroes of:</p>
<p>$$x^4-7x^3+2x^2+5x-1=0$$
Then what is the value of $$ \frac1a +\frac1b +\frac1c +\frac1d $$</p>
<p>I can figure out the zeroes, but they are wildly complex. I'm sure there must be an easier way. </p>
| Semiclassical | 137,524 | <p>Hint: If $f(x)$ has roots $a,b,c,d$ then $f(x)=(x-a)(x-b)(x-c)(x-d)$. Expand this out and compare coefficients with the given quartic. (In particular, the coefficients of $x^3$ and $x^0$.)</p>
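<p>Working the hint through: matching coefficients in $x^4-7x^3+2x^2+5x-1=(x-a)(x-b)(x-c)(x-d)$ gives $abc+abd+acd+bcd=-5$ and $abcd=-1$, so $\frac1a+\frac1b+\frac1c+\frac1d=\frac{abc+abd+acd+bcd}{abcd}=\frac{-5}{-1}=5$, with no need to compute the roots. A sketch of the sign bookkeeping:</p>

```python
# For a monic quartic x^4 + c3 x^3 + c2 x^2 + c1 x + c0 with roots a, b, c, d:
#   abc + abd + acd + bcd = -c1   and   abcd = c0,
# so the sum of reciprocals of the roots is (-c1) / c0.
c3, c2, c1, c0 = -7, 2, 5, -1
answer = -c1 / c0
print(answer)  # -> 5.0
```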
|
998,769 | <p>A random variable $X$ is uniformly distributed over the interval $[0, 2\pi]$. Find:</p>
<p>a) the pdf of $X$</p>
<p>b) the cdf of $X$</p>
<p>c) $P(\frac{\pi}{6} \leq X \leq \frac{\pi}{2})$</p>
<p>d) $P(-\frac{\pi}{6} \leq X \leq \frac{\pi}{2})$</p>
<p>my answers:</p>
<p>a) pdf of $X$ is $f(x) = \begin{cases}\frac{1}{2\pi},& 0 \leq x \leq 2\pi, \\
0, & \text{otherwise.}\end{cases}$</p>
<p>b) cdf of $X$ is $F(x) = \begin{cases}0,& x < 0, \\ \frac{x}{2\pi}, &0 \leq x \leq 2\pi, \\ 1,& x > 2\pi.\end{cases}$</p>
<p>c) For this one, do I just do $F(\frac{\pi}{6}) - F(\frac{\pi}{2})$?</p>
<p>d) For this question, it looks like I have to do something else, because it's asking for the probability between
$F(-\frac{\pi}{6})$ and $F(\frac{\pi}{2})$, but when $x < 0$ the probability should be $0$ as well? I'm not sure if I'm looking at this the wrong way. </p>
<p>Could someone also kindly explain why, in a CDF, the probability is always $0$ when $x$ is less than the start of the interval, but always $1$ when $x$ is greater than the end of the interval? I don't know if it's a coincidence or if I haven't properly understood a CDF, but every CDF I've seen so far has probability $1$ when $x$ is greater than the end of the interval. </p>
| Rey | 73,712 | <p>You want to go from the definition for that one too. For <a href="http://en.wikipedia.org/wiki/Surjective_function" rel="nofollow">surjective</a> you need to show:
$ \forall y\in R , \; \exists x\in R, f(x)=y$ </p>
<p>For example for $h$, we can show that for each $y$, we can set $x = sign(y) \sqrt{y}$, and then we get $h(x) =y$</p>
|
1,579,616 | <p>So I know it's true for $n = 5$, and I assumed it true for some $n = k$ where $k$ is an integer greater than or equal to $5$.</p>
<p>For $n = k + 1$ I get into a bit of a kerfuffle.</p>
<p>I get down to $(k+1)^2 + 1 < 2^k + 2^k$, or equivalently:</p>
<p>$(k + 1)^2 + 1 < 2^k \cdot 2$.</p>
<p>I'm a bit stuck on how to proceed at this point.</p>
| barak manos | 131,263 | <p><strong>First, show that this is true for $n=5$:</strong></p>
<p>$5^2+1<2^5$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$n^2+1<2^n$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$(n+1)^2+1=$</p>
<p>$n^2+\color\green{2}\cdot{n}+1+1<$</p>
<p>$n^2+\color\green{n}\cdot{n}+1+1=$</p>
<p>$n^2+n^2+1+1=$</p>
<p>$2\cdot(\color\red{n^2+1})<$</p>
<p>$2\cdot(\color\red{2^n})=$</p>
<p>$2^{n+1}$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red; the step marked green replaces $2$ by $n$, which is valid since $n\geq 5>2$.</p>
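<p>As a quick sanity check (not part of the induction proof), both the inequality and the green step can be verified numerically; a minimal Python sketch:</p>

```python
# Sanity check (not a proof): n^2 + 1 < 2^n for the first several n >= 5.
for n in range(5, 51):
    assert n**2 + 1 < 2**n, f"inequality fails at n = {n}"

# The green step of the induction: 2*n < n*n once n > 2,
# hence (n+1)^2 + 1 < 2*(n^2 + 1) for n >= 5.
for n in range(5, 51):
    assert 2 * n < n * n
    assert (n + 1)**2 + 1 < 2 * (n**2 + 1)
print("checked n = 5..50")
```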
|
4,059,489 | <blockquote>
<p>Let <span class="math-container">$ A, B \in M_n (\mathbb{C})$</span> such that <span class="math-container">$(A-B)^2 = A -B$</span>. Then <span class="math-container">$\mathrm{rank}(A^2 - B^2) \geq \mathrm{rank}( AB -BA)$</span>.</p>
</blockquote>
<p>I tried to apply the basic inequalities without results. How to start? Thank you.</p>
| Aderinsola Joshua | 395,530 | <p>The easiest but most absurd way to do this is by squaring and squaring and squaring... until we can write out a polynomial equation in <span class="math-container">$x$</span>:</p>
<p><span class="math-container">$$\sqrt{\frac{x-8}{1388}}+\sqrt{\frac{x-7}{1389}}+\sqrt{\frac{x-6}{1390}}-\sqrt{\frac{x-1388}{8}}-\sqrt{\frac{x-1389}{7}}-\sqrt{\frac{x-1390}{6}} = 0$$</span></p>
<p>With the help of a CAS, we would then have to inspect the polynomial in <span class="math-container">$x$</span> to see how many real roots it has:</p>
<p><span class="math-container">$$( 166271366055421398492452275393603941230766862954455137380647248140804636783475714327462432993480052899307496588105274122048146156780410637021361 \cdot x^6 - 1420140036782905319437561361639895711607700448706531408943829995316272533767501640030686352288245638342331131237991828557619781758027571151457168696 \cdot x^5 + 5053362485747062046204356842284861702806221919004319273920353418806545439134535332974351841718762407835957590697696948539773086891685978542298184835440 \cdot x^4 - 9589047098047637215433435077525811816830831747049585209634085291233160531678815568240751343497601637440011806189213827409633807381969946623370920034699520 \cdot x^3 + 10233876003972506215993189306355354430848302521410622385212634701164151435860483711557613487835971006731199476091003849691449303854798610976036031554449411840 \cdot x^2 - 5824396568911083719726988788037339915692680410889570986360670278090638276801032058096056401851613752306491371113890184533047015490065483972816831803846901282816 \cdot x + 1381013855402990033245846225695823841737429380881009697772601168066108621988351698959635167325016829640478323081178371168331795718011254016938645598239579277398016 )\cdot (x-1396)= 0 $$</span></p>
<p>The polynomial is <span class="math-container">$7$</span>th degree, and has <span class="math-container">$4$</span> real roots</p>
<p>EDIT:</p>
<p>But squaring an equation can introduce extraneous roots, so all candidate roots must be tested against the original equation; of these, only <span class="math-container">$x-1396 = 0$</span> gives a root that actually satisfies the equation</p>
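<p>The testing step is easy to carry out directly: substituting $x=1396$ makes every radicand equal to $1$, so the three positive square roots cancel the three negative ones. A quick numerical check:</p>

```python
from math import sqrt

# Check that x = 1396 satisfies the original radical equation:
# every fraction under a square root becomes exactly 1.
x = 1396
lhs = (sqrt((x - 8) / 1388) + sqrt((x - 7) / 1389) + sqrt((x - 6) / 1390)
       - sqrt((x - 1388) / 8) - sqrt((x - 1389) / 7) - sqrt((x - 1390) / 6))
print(lhs)  # 0.0
assert abs(lhs) < 1e-12
```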
|
160,161 | <p>Is there an unlabeled locally-finite graph which is a Cayley graph of an infinitely many non-isomorphic groups with respect to suitably chosen generating sets?</p>
| YCor | 14,094 | <p>Yes. You can even have uncountably many. One recipe is as follows: consider a group $G$ with finite generating subset $S$ and an extension $1\to F\to G'\stackrel{\pi}\to G\to 1$, with $F$ finite. Endow $G'$ with the generating subset $S'=\pi^{-1}(S)$. Then the Cayley graph of $(G',S')$ only depends on the Cayley graph of $(G,S)$ and of the cardinal of $F$: indeed it is obtained by replacing any vertex by a complete graph on $|F|$ vertices and replacing any edge by the corresponding bipartite graph.</p>
<p>Therefore what you need is to find $G$ and infinitely many non-isomorphic $G'$ with $F$ of fixed size. One way to do so is when $G$ admits a finitely generated central extension $\tilde{G}$ with center an infinite-dimensional vector space $Z$ over $\mathbf{Z}/p\mathbf{Z}$ for some prime $p$: then $Z$ admits continuum many hyperplanes $H$ and there are still (*) continuum many non-isomorphic groups among the $G'=\tilde{G}/H$.</p>
<p>(*) I use that if $G$ is a finitely generated group and $\mathcal{N}(G)$ is the set of its normal subgroups, and if we write $N\sim N'$ if $G/N$ and $G/N'$ are isomorphic groups, then the equivalence relation $\sim$ on $\mathcal{N}(G)$ has countable classes. (Standard exercise) </p>
<p><b>Edit</b>: an application of this is that <b>having a solvable word problem is not invariant under quasi-isometry (QI)</b> among finitely generated groups, and also <b>having a recursive presentation on finitely many generators is not a QI-invariant </b>. Indeed if we choose $G$ to be the first Grigorchuk group, which has a solvable word problem, then among the uncountably many groups $G'$ obtained, only countably many have a solvable word problem (and more generally only countably many are recursively presented). In contrast, being finitely presented is a QI invariant, and being finitely presented with solvable word problem is a QI invariant as well, because it is equivalent to having the Dehn function bounded above by a recursive function (and the equivalence class of Dehn function is a QI invariant).</p>
|
160,161 | <p>Is there an unlabeled locally-finite graph which is a Cayley graph of an infinitely many non-isomorphic groups with respect to suitably chosen generating sets?</p>
| AGenevois | 122,026 | <p>Here is a related construction, but which I find more elementary. The idea is to start from the lamplighter group
<span class="math-container">$$L_2 := \langle a,t \mid a^2=1, [t^nat^{-n},a]=1 \ (n \in \mathbb{N}) \rangle,$$</span>
to fix an arbitrary subset <span class="math-container">$I \subset \mathbb{N}$</span>, and to define the new group
<span class="math-container">$$G_I:= \left\langle a,t,z \mid a^2=z^2=[z,a]=[z,t]=1, [t^nat^{-n},a] = \left\{ \begin{array}{cl} 1 & \text{if } n \in I \\ z & \text{if } n \notin I \end{array} \right. \right\rangle.$$</span>
In other word, we add a central element of order two and we use it to "twist" the commutator relations of <span class="math-container">$L_2$</span>. By killing <span class="math-container">$z$</span>, we recover <span class="math-container">$L_2$</span> as a quotient, and the corresponding kernel is <span class="math-container">$\langle z \rangle$</span>. Observe that <span class="math-container">$z$</span> has order two in <span class="math-container">$G_I$</span> since, by killing <span class="math-container">$a$</span> and <span class="math-container">$t$</span>, it is sent to a non-trivial element in the quotient <span class="math-container">$\mathbb{Z}/2 \mathbb{Z}$</span>. In other word, we have a central extension
<span class="math-container">$$1 \to \mathbb{Z}/2\mathbb{Z} \to G_I \to L_2 \to 1.$$</span>
As explained by Yves in his answer, this implies that all the <span class="math-container">$G_I$</span> share a common Cayley graph. It remains to show that there exist uncountably many distinct groups among the <span class="math-container">$G_I$</span>.</p>
<p><strong>Lemma:</strong> <em>The groups <span class="math-container">$G_I$</span> and <span class="math-container">$G_J$</span> are isomorphic iff <span class="math-container">$I=J$</span>.</em></p>
<p><strong>Sketch of proof.</strong> First, we need to know that the automorphism group of <span class="math-container">$L_2$</span> is generated by the inner automorphisms, the inversion of <span class="math-container">$t$</span>, and the "transvections" induced by
<span class="math-container">$$\left\{ \begin{array}{ccc} a \mapsto a \\ t \mapsto gt \end{array} \right., \ g \in \langle \langle a \rangle \rangle.$$</span>
Next, we observe that they all extend to automorphisms of <span class="math-container">$G_I$</span>. This implies that, if there exists an isomorphism <span class="math-container">$\varphi : G_J \to G_I$</span>, then, up to post-composing with an automorphism of <span class="math-container">$G_I$</span>, we can suppose without loss of generality that <span class="math-container">$\varphi$</span> induces the identity <span class="math-container">$L_2 \to L_2$</span> when we quotient by the centers of <span class="math-container">$G_I,G_J$</span>. Consequently, there exist <span class="math-container">$\epsilon,\eta \in \{0,1\}$</span> such that
<span class="math-container">$$\varphi : \left\{ \begin{array}{ccc} z_J & \mapsto & z_I \\ a_J & \mapsto & z_I^\epsilon a_I \\ t_J & \mapsto & z_I^\eta t_I \end{array} \right..$$</span>
It follows that <span class="math-container">$\varphi \left( [t_J^na_Jt_J^{-n},a_J] \right) = [t_I^na_It_I^{-n},a_I]$</span> for every <span class="math-container">$n \in \mathbb{N}$</span>, hence <span class="math-container">$I=J$</span>. <span class="math-container">$\square$</span></p>
<p><strong>Remark:</strong> Following Yves' edition of his answer, we deduce that the lamplighter group <span class="math-container">$L_2$</span>, which has solvable word problem, is quasi-isometric a group with unsolvable word problem. More explicitly, if <span class="math-container">$I \subset \mathbb{N}$</span> is not a computable subset, then <span class="math-container">$G_I$</span> is a finitely generated group that is quasi-isometric to <span class="math-container">$L_2$</span> but whose word problem is unsolvable.</p>
|
3,166,999 | <p>I'm reading Kechris' book "Classical Descriptive Set Theory" and the author gives the following definition (pp. <span class="math-container">$49$</span>, row <span class="math-container">$3$</span>):</p>
<blockquote>
<p>A <strong>weak basis</strong> of a topological space <span class="math-container">$X$</span> is a collection of nonempty open sets s.t. every nonempty open set contains one of them.</p>
</blockquote>
<p>My question is: is this definition equivalent to that of a basis for a topology?</p>
<p>The fact that the author gives a specific name to such a family suggests that it is not, but for every <span class="math-container">$x\in X$</span> and for every open nhbd <span class="math-container">$U(x)$</span> there exists <span class="math-container">$V(x)$</span> in the weak basis contained in <span class="math-container">$U$</span>. This means that a weak basis is also a covering and hence satisfies the conditions for being a basis.</p>
<p>Any comment is appreciated.
Thank you in advance for your help.</p>
| William Elliot | 426,203 | <p>A base covers the space.<br>
A weak base may not cover the space.<br>
The set of all nonempty open subsets of $\mathbb R$ that exclude $0$ is a weak base that is not a base. </p>
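<p>The distinction can be checked mechanically on a toy finite example (my own, not from the answer): on $X=\{a,b\}$ with topology $\{\emptyset,\{a\},X\}$, the family $\{\{a\}\}$ is a weak base, yet it does not cover $X$ and hence is not a base.</p>

```python
# Toy illustration: a weak base need not be a base.
X = frozenset({"a", "b"})
opens = [frozenset(), frozenset({"a"}), X]   # a topology on X
family = [frozenset({"a"})]

# Weak-base property: every nonempty open set contains a member of the family.
is_weak_base = all(any(b <= u for b in family) for u in opens if u)
# A base must additionally cover the space.
covers = frozenset().union(*family) == X

print(is_weak_base, covers)  # True False
assert is_weak_base and not covers
```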
|
2,150,085 | <p>I've always been puzzled by the value $0^0$. I remembered that one of my professor claimed that it was truely equal to $1$. However I think that most people would say, from an analytic point of view, that it is indeterminate which I agree with. Translating $0^0$ into $e^{0 \ln(0)}$ makes the problem appear explicitly. </p>
<p>I also know the classic limit on $\mathbb{R}$, $\lim_{x\to 0^+} x^x = 1$, which could make us define by convention that $0^0$ is indeed $1$. However I once stumbled upon this <a href="https://youtu.be/BRRolKTlF6Q?t=9m1s" rel="nofollow noreferrer">Numberphile video</a> in which (around 9:00) Matt Parker claims that even though the two-sided limits $\lim_{x\to 0^{\pm}} x^x = 1$, when we consider the whole complex plane, this doesn't hold anymore and that's why we cannot say that $0^0 = 1$. Curious, I tried to explicitly prove that the limit doesn't exist. I don't have a good knowledge of complex analysis but here is what I tried:</p>
<p>Let $z \in \mathbb{C},\ z = r\ e^{i \theta}$ with $r \in \mathbb{R^+},\ \theta \in [0,2 \pi[$.
$$
\lim_{|z| \to 0} z^z = \lim_{r\to 0^+} (r\cdot e^{i\theta})^{r\cdot e^{i\theta}} = \lim_{r\to 0^+} \overbrace{r^{re^{i\theta}}}^{(A)}\cdot \overbrace{e^{i\theta\cdot r e^{i\theta}}}^{(B)}
$$
We can cut the limit of product assuming both limits exist :
$$
(A) = \lim_{r\to 0^+} e^{\ln(r) r e^{i\theta}} = e^{\lim_{r \to 0^+} \ln(r) r e^{i\theta}} = e^0 = 1 \\
(B) = e^{\lim_{r\to 0^+} i\theta\cdot r e^{i\theta}} = e^0 = 1
$$
Therefore $$
\lim_{|z| \to 0} z^z = 1
$$
independently of $\theta$, which proves that the limit is $1$ everywhere in the complex plane. Is my reasoning wrong? Is the video wrong? Thank you!</p>
<hr>
<p>Expanding my question a bit, since it seems that my reasoning is indeed wrong: could you also provide an example of two paths in the complex plane which lead to different limit values?</p>
| M. Winter | 415,941 | <p>I have good and bad news for you. The good news: in the most applied scenarios, $z^z$ indeed converges to $1$ as $z$ converges to $0$. The bad news: it is not so easy!</p>
<p>In your question you demonstrated that approaching zero along straight lines gives you $1$. But you should not forget the context in which you made this observation. To begin with, for complex numbers even the expression $z^w$ is not as easy as it seems. You have to use the definition</p>
<p>$$z^w=\exp( w\log(z)).$$</p>
<p>This involves using the complex logarithm which cannot be defined on the whole complex plane $\mathbb C$ at the same time (if we are interested in $\log(z)$ being analytic). Instead, one cuts out a curve that connects $0$ and $\infty$. The remaining set $\mathbb C^*$ has an analytic definition of $\log(z)$, unique up to a constant term $2\pi k i,k\in\mathbb Z$. So choosing such a cut plane and such a term will also give you a unique definition of $\log(z)$ and $z^w$ or $-$ important in our case $-$ of $z^z$. Finally, for such a unique definition there is a unique continuous function $\phi:\mathbb C^*\rightarrow \mathbb R$ with</p>
<p>$$
\log(z)=\log|z|+i\phi(z).
$$</p>
<p>Note that studying the behavior of $z^z$ in $z=0$ is essentially the same as studying $z\log(z)$ ($="\!\log(z^z)\!"$) in $z=0$. The latter one converges for $z\rightarrow0$, if and only if $z^z$ converges and the limit $z^*$ of $z\log(z)$ gives a limit $\exp(z^*)$ for $z^z$. So the question is: $\lim_{z\rightarrow 0}{z\log(z)}=0$?</p>
<p>The answer turns out to be dependent <em>only on the cut</em> we made in order to define $\log(z)$. You, most probably, chose the principal branch of the complex logarithm, which is defined on $\mathbb C$ minus the non-positive real line. In this case $z\log(z)\rightarrow0$ (see below). But you have to exclude your path $r \exp(i\phi)$ with $\phi=\pi$ as this path is located directly inside the cut of $\mathbb C^*$. </p>
<p>Lets look at approaching the zero in general with a curve $z_t$. We write $z_t=r_t\exp(i\phi(z_t))$, so $r_t\rightarrow 0$ with $t\rightarrow \infty$. We then have</p>
<p>\begin{align}
z_t\log(z_t) &= r_t\exp(i\phi(z_t))\cdot(\log r_t+i \phi(z_t))\\
&=\underbrace{r_t \log r_t}_{\rightarrow \,0} \cdot \exp(i\phi(z_t))+ r_t \phi(z_t) \cdot\exp(i\phi(z_t)).
\end{align}</p>
<p>So the behavior when approaching zero only depends on the second term (the first one vanishes; the behavior of $z\log(z)$ on the positive real line is known). In the limit, $r_t \phi(z_t)$ determines the absolute value of $z^*$ and $\phi(z_t)$ determines the argument of $z^*$. The following cases are possible:</p>
<ol>
<li>$z_t$ more or less directly approaches zero without winding itself around the origin, i.e. $\phi(z_t)$ is bounded. Then $r_t \phi(z_t)\rightarrow 0$, independent of the convergence of $\phi(z_t)$. This gives the usual result $z\log(z)\rightarrow 0$.</li>
<li>$z_t$ slowly spirals around the origin, i.e. $\phi(z_t)$ is unbounded but in $\mathrm o(r_t^{-1})$. So still $r_t\phi_t\rightarrow 0$ and we have the usual result.</li>
<li>$z_t$ spirals around the origin faster, i.e. $\phi(z_t)$ is unbounded and in $\Theta(r_t^{-1})$. But now, even if $r_t\phi_t$ converges, the limit $z_t\log(z_t)$ does not exist as the argument does not converge. Also, $z_t\log(z_t)$ is not divergent, but oscillates.</li>
<li>$z_t$ spirals around the origin even faster, i.e. $\phi(z_t)$ is unbounded and in $\Omega(r_t^{-1})$. Now, $r_t\phi_t\rightarrow\pm\infty$. $z_t\log(z_t)$ diverges to complex infinity (converges to the northpole of the Riemann sphere).</li>
</ol>
<p>As you can see, none of these case gives convergence to a limit other than $0$. Finally, note that the number of turns (and the pace of turning) of $z_t$ around the origin is determined by the cut of $\mathbb C^*$. If the cut only turns around the origin a finite number of times, so will $z_t$. </p>
<p><a href="https://i.stack.imgur.com/ODovk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ODovk.png" alt="enter image description here"></a></p>
<p>So the reason why you could only find $z\log(z)\rightarrow 0$ was that you used the principal branch of the complex logarithm, where the cut is a straight line from the origin to the left. But there are indeed cuts giving you either no convergence or even divergence.</p>
<hr>
<p>Note that even if there is no curve $z_t\rightarrow 0$ that will give you any other suggestion than $z \log (z)=0$, there are indeed sequences $z_n\rightarrow 0$ that will give you any result you want, say $z_n\log(z_n)\rightarrow z^*=r^*\exp(i\phi^*)$. Choose $z_n$ and an appropriate cut so that</p>
<p>\begin{align}
\phi(z_n)&=2\pi n+\phi^*,\\
r_n&=\frac{r^*}{2\pi n}.
\end{align}</p>
<p>Then $r_n\phi(z_n)\rightarrow r^*$ and the arguments are at all times equivalent to $\phi^*$.</p>
|
2,820,464 | <p>Find the limit of the sequence {$a_{n}$}, given by$$ a_{1}=0,a_{2}=\dfrac {1}{2},a_{n+1}=\dfrac {1}{3}(1+a_{n}+a^{3}_{n-1}), \ for \ n \ > \ 1$$</p>
<p>My try:</p>
<p>$ a_{1}=0,a_{2}=\dfrac {1}{2},a_{3}=\dfrac {1}{2},a_{4}=0.54$, that is, the sequence is increasing and each term is positive. Let the limit of the sequence be $x$.
Then $ \lim _{n\rightarrow \infty }a_{n+1}=\lim _{n\rightarrow \infty }a_{n}=x$
$$ \lim _{n\rightarrow \infty }a_{n+1}= \lim _{n\rightarrow \infty }\dfrac {1}{3}\left(1+a_{n}+a^{3}_{n-1}\right)$$</p>
<p>$\Rightarrow x=\dfrac {1}{3}( 1+x+x^3)$</p>
<p>$\Rightarrow x^3-2x+1=0$</p>
<p>and this equation has three roots $x=\dfrac {-1\pm \sqrt {5}}{2},1$ </p>
<p>So the limit of the sequence is $\dfrac {-1 + \sqrt {5}}{2}$.</p>
<p><strong>how can i say that the limit is</strong>
$\dfrac {-1 + \sqrt {5}}{2}$?</p>
| Tsemo Aristide | 280,301 | <p>Because $f(x)={1\over 3}(1+x+x^3)$ is a strictly increasing function ($f'(x)={1\over 3}(1+3x^2)>0$), show recursively that $0<a_n$, and remark that if $a_{n-1},a_{n-2}$ are strictly less than $1$, then $a_n\leq f(\max(a_{n-1},a_{n-2}))<f(1)=1$. The same induction with $1$ replaced by the fixed point $\frac{\sqrt{5}-1}{2}$ of $f$ gives $a_n\leq\frac{\sqrt{5}-1}{2}$ for all $n$; checking recursively that $(a_n)$ is nondecreasing, the sequence converges and its limit must be this fixed point.</p>
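<p>A quick numerical iteration (a sanity check, not a proof) shows the sequence climbing toward the fixed point $\frac{\sqrt 5-1}{2}\approx 0.6180$:</p>

```python
from math import sqrt

# Iterate a_{n+1} = (1 + a_n + a_{n-1}^3)/3 from a_1 = 0, a_2 = 1/2.
prev, cur = 0.0, 0.5
for _ in range(200):
    prev, cur = cur, (1 + cur + prev**3) / 3

limit = (sqrt(5) - 1) / 2   # the root of x^3 - 2x + 1 = 0 lying in (0, 1)
print(cur)                  # ~0.61803398...
assert abs(cur - limit) < 1e-12
```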
|
2,714,450 | <p>Suppose $A$ and $B$ are two square matrices so that $e^{At}=e^{Bt}$ for infinite (countable or uncountable) values of $t$ where $t$ is positive.</p>
<p>Do you think that $A$ <strong>has to be equal to</strong> $B$?</p>
<p>Thanks,
Trung Dung.</p>
<hr>
<p>Maybe I do not state clearly or correctly.</p>
<p>I mean that the equality holds for all $t\in (0, T)$ where $T>0$ or $T=+\infty$, i.e. for uncountable $t$. In this case I think some of the counter-examples above do not work because it is correct for countable $t$.</p>
| Jens Schwaiger | 532,419 | <p>Note that $t\mapsto e^{tA}-e^{tB}$ is a matrix of power series $c_{ij}(t)=\sum_{l=0}^\infty c_{ij}^{(l)} t^{l}$ with $\infty$ as its radius of convergence.
By the <a href="https://math.stackexchange.com/questions/1577032/identity-theorem-for-power-series">Identity theorem for power series</a> these series are identically zero if the set $N=\{t\,\vert \,c_{ij}(t)=0\}$ has a limit point.</p>
<p>Thus all $c_{ij}^{(l)}=0$ provided for instance that $c_{ij}(\frac1n)=0$ for all $n\in\mathbb{N}$. But then $A-B=0$ since $A-B=(c_{ij}^{(1)})_{1\leq i,j\leq n}$.</p>
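<p>Concretely, the function $t\mapsto e^{tA}$ near $t=0$ already determines $A$, since $(e^{hA}-I)/h\to A$; so equality of $e^{tA}$ and $e^{tB}$ on any sequence accumulating at $0$ forces $A=B$. A self-contained numerical illustration of the recovery step (my own sketch, using a truncated Taylor series for the matrix exponential of a $2\times 2$ matrix):</p>

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    # Truncated Taylor series I + A + A^2/2! + ... (fine for small arguments).
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        term = mat_mul(term, A)
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.3, -1.2], [0.7, 0.1]]
h = 1e-6
E = expm([[h * a for a in row] for row in A])
# The difference quotient (e^{hA} - I)/h recovers A up to O(h):
recovered = [[(E[i][j] - (1.0 if i == j else 0.0)) / h for j in range(2)]
             for i in range(2)]
print(recovered)
assert all(abs(recovered[i][j] - A[i][j]) < 1e-4
           for i in range(2) for j in range(2))
```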
|
2,057,813 | <p>How can I prove that for $s \in \mathbb{C}$, with real part of $s$ being equal to 1,
\begin{equation}
\sum_{n=1}^{\infty}\frac{1}{n^{s}}
\end{equation}
diverges?</p>
<p>Thanks a lot!</p>
| reuns | 276,986 | <p>$$\left|n^{-s}-\int_n^{n+1} x^{-s}dx\right| = \left|\int_n^{n+1} \int_n^x s t^{-s-1}dtdx\right| <\int_n^{n+1} \int_n^x |s \, n^{-s-1}|dtdx = \left|\frac{s}{2}n^{-s-1}\right|$$</p>
<p>therefore</p>
<p>$$\left|\sum_{n=1}^{N-1} n^{-s}-\frac{1-N^{1-s}}{s-1}\right| = \left|\sum_{n=1}^{N-1} n^{-s}-\int_n^{n+1} x^{-s}dx\right| < \frac{|s|}{2} \sum_{n=1}^{N-1} |n^{-s-1}|$$
and hence : $\displaystyle\sum_{n=1}^N n^{-s}$ converges as $N \to \infty$ if and only if $\displaystyle\frac{1-N^{1-s}}{s-1}$ converges.</p>
<p>$$\boxed{\text{but for } Re(s) = 1, s \ne 1 : \quad \lim_{N \to \infty} N^{1-s} \quad \text{fails to exist}}$$</p>
|
2,057,813 | <p>How can I prove that for $s \in \mathbb{C}$, with real part of $s$ being equal to 1,
\begin{equation}
\sum_{n=1}^{\infty}\frac{1}{n^{s}}
\end{equation}
diverges?</p>
<p>Thanks a lot!</p>
| robjohn | 13,854 | <p>Using formula $(10)$ from <a href="https://math.stackexchange.com/a/2027588">this answer</a>,
$$
\zeta(s)=\lim_{n\to\infty}\left[\sum_{k=1}^n\frac1{k^s}-\frac1{1-s}n^{1-s}+\frac12n^{-s}\right]
$$
converges for $\mathrm{Re}(s)\gt-1$.</p>
<p>For $s=1+it$, where $t\in\mathbb{R}$,
$$
\sum_{k=1}^n\frac1{k^s}=\zeta(s)-\frac1{it}e^{-it\log(n)}+O\!\left(\frac1n\right)
$$
Because $\lim\limits_{n\to\infty}e^{-it\log(n)}$ does not exist for $t\ne0$, the series on the left does not converge. In fact, it orbits $\zeta(s)$ at a distance of approximately $\frac1{|1-s|}$.</p>
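<p>This orbiting is easy to see numerically. Taking $s=1+i$ (so $t=1$), the formula above gives $\sum_{k\le N}k^{-s}\approx\zeta(s)-\frac1i e^{-i\log N}$, so choosing $N/M\approx e^{\pi}$ rotates the oscillating term by a half-turn and the two partial sums stay about $2$ apart, no matter how large $M$ is. A hedged sketch:</p>

```python
import math

def partial_sum(N, s):
    return sum(k ** (-s) for k in range(1, N + 1))

s = 1 + 1j                        # t = 1
M = 2000
N = round(M * math.e ** math.pi)  # log N - log M ~ pi: half-turn of e^{-i log n}

gap = abs(partial_sum(N, s) - partial_sum(M, s))
print(gap)  # ~2: the partial sums orbit zeta(1+i) instead of settling down
assert 1.5 < gap < 2.5
```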
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now doubling majoring in applied math and CS. </p>
<p>My question is, how do I know I'm not making a mistake? There seems to be so many people doing math competitions, research, independent studies, etc, while I just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| Luke Willis | 87,726 | <p>Computer Science with Math is a wonderful combination that opens up a lot of opportunities. You could very easily do something for either path and use a fair bit from both along the way.</p>
<p>I had a similar experience with math growing up, though not quite to the degree that you expressed. I don't think I am one of the "enlightened few" of Mathematics either, but I enjoyed it and thus stuck with it. I graduated from University earlier this year with a double major in Computer Science and Mathematics.</p>
<p>I am presently employed doing mostly Computer Science related activities, though I still find that my math background has strengthened my problem solving abilities. Math can be very valuable to your understanding of Data Structures and Algorithms.</p>
<p>I would say that if you enjoy both of your areas of study, your current path will open up many career possibilities.</p>
|
2,207,848 | <p>I'm not very familiar with contraposition and so I am having some difficulties proving the statement. </p>
<blockquote>
<p>If $n$ is a positive integer such that $n \equiv 2 \pmod{4}$ or $n \equiv 3 \pmod{4}$, then $n$ is not a perfect square.</p>
</blockquote>
<p>What would be a good way to prove this?<br>
Need help please.</p>
| Louis | 393,349 | <p>To prove $A\implies B$ by contrapositive you need to prove $\bar B \implies \bar A$. Let's prove that if $n$ is a perfect square then $n \equiv 2 [ 4]$ or $n \equiv 3 [4]$ is not true.</p>
<p>Suppose $n$ is a perfect square, say $n=k^2,\ k\in \mathbb Z$. If $k \equiv 3[4]$, then $\exists q \in \mathbb Z$ such that $n=k^2=(4q+3)^2=16q^2+24q+9=4(4q^2+6q+2)+1\equiv 1[4]$. A similar computation for $k\equiv 2[4]$ gives $n \equiv 0[4]$, and the remaining cases give $k\equiv 0[4]\implies n \equiv 0[4]$ and $k \equiv 1[4]\implies n \equiv 1[4]$. In every case $n\equiv 0[4]$ or $n\equiv 1[4]$, so $n$ is not congruent to $2$ or $3$ modulo $4$, which is what we needed to show.</p>
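<p>The case analysis boils down to the fact that squares only realize the residues $0$ and $1$ modulo $4$; since $k^2 \bmod 4$ depends only on $k \bmod 4$, a check over one period settles it (a sanity check, not a replacement for the proof):</p>

```python
# k^2 mod 4 depends only on k mod 4; check all four residues.
print(sorted({(k * k) % 4 for k in range(4)}))  # [0, 1]
assert {(k * k) % 4 for k in range(4)} == {0, 1}

# Hence no perfect square is congruent to 2 or 3 modulo 4.
assert all((k * k) % 4 in (0, 1) for k in range(1, 1000))
```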
|
3,383,206 | <p><strong>Question</strong>: Can <span class="math-container">$\int_0^\infty \frac{\sqrt{x}}{(1+x)^2} dx$</span> be computed with residue calculus?</p>
<p>The integral comes from computing <span class="math-container">$\mathbb{E}(\sqrt{X})$</span> where <span class="math-container">$X=U/(1-U)$</span> and <span class="math-container">$U$</span> is uniformly distributed in the unit interval. One can see that <span class="math-container">$\mathbb{E}(X)=\infty$</span> while wolfram computes the expectation of the radical as <span class="math-container">$\pi/2$</span> and I confirmed this numerically with Monte Carlo simulations in the r programming language.</p>
<p>It’s been awhile since I’ve done residue calculus so I consulted Ahlfors’ text. Ahlfors treats integrals of the form <span class="math-container">$\int_0^\infty x^\alpha R(x) dx$</span> for some rational function (i.e. ratio of polynomials) <span class="math-container">$R(x)$</span> and <span class="math-container">$0<\alpha<1$</span> which is this case with <span class="math-container">$\alpha=1/2$</span> and <span class="math-container">$R(x)=1/(1+x)^2$</span> but then states for convergence <span class="math-container">$R(x)$</span> must have a zero of at least order 2 at <span class="math-container">$\infty$</span> and at most a simple pole at the origin. But the latter is not satisfied here, there is only a zero at the origin not a pole, so this cannot be applied plus we have the pole of order 2 at <span class="math-container">$a=-1$</span> to deal with, right?</p>
<p>My idea before reviewing was to try a semi-circle of radius <span class="math-container">$R>1$</span> with indented semi-circle about <span class="math-container">$a=-1$</span>. The residue of <span class="math-container">$f$</span> at <span class="math-container">$a=-1$</span> I have computed as <span class="math-container">$-i/2$</span> and I recall indented estimation lemma resulting in the indented contour integral tending towards <span class="math-container">$i\pi Res(f,a)$</span> as the radius shrinks, which with the correct (negative) orientation would give <span class="math-container">$-\pi/2$</span> so if the entire contour integral is zero, we can add this to other side and (hopefully) show the rest vanishes except for <span class="math-container">$\int_0^\infty$</span> region but I’m obviously handwaving here. This seems problematic because it appears we’d be left with <span class="math-container">$\int_{-\infty}^{\infty}$</span> as the larger semi-circle radius grows and the smaller one about <span class="math-container">$a=-1$</span> shrinks instead of getting the integral just over the positive half line.</p>
<p>(If this can be done with real methods too, I certainly am not opposed to that answer, this is just for fun)</p>
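<p>For reference, the value $\pi/2$ can also be confirmed by deterministic quadrature, independently of the residue approach asked about: the substitution $x=\tan^2\theta$ (my own reduction) turns the integral into $\int_0^{\pi/2}2\sin^2\theta\,d\theta$:</p>

```python
from math import sin, pi

# Midpoint rule on int_0^{pi/2} 2*sin(theta)^2 dtheta, which equals the
# original integral after the substitution x = tan(theta)^2.
m = 100_000
h = (pi / 2) / m
approx = sum(2 * sin((j + 0.5) * h) ** 2 for j in range(m)) * h
print(approx)  # ~1.5707963 = pi/2
assert abs(approx - pi / 2) < 1e-8
```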
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$x\in\mathbb R\setminus\mathbb N$</span>, let <span class="math-container">$r$</span> be the distance from <span class="math-container">$x$</span> to the closest natural number. Then <span class="math-container">$(x-r,x+r)\cap\mathbb N=\emptyset$</span> and <span class="math-container">$(x-r,x+r)$</span> is a neighborhood of <span class="math-container">$x$</span>.</p>
|
1,216,392 | <blockquote>
<p>$P,Q$ are polynomials with real coefficients and for every real $x$ satisfy $P(P(P(x)))=Q(Q(Q(x)))$. Prove that $P=Q$.</p>
</blockquote>
<p>I see only that these polynomials are of the same degree.</p>
| zhw. | 228,045 | <p>Slade, I had the same approach but I'm not sure how you got the constant terms equal so quickly. But if the degree is $1$ that part is easy and you're done. Otherwise, if $p\ne q,$ then $|p_2(x)-q_2(x)|$ blasts off to $\infty$ as $x\to \infty.$ Because the leading coefficients of $p$ and $q$ agree, this would imply $|p_3(x)-q_3(x)|\to \infty,$ contradiction.</p>
|
2,088,487 | <p>If $α, β$ are the roots of the equation $x^2 +px+1=0$, and $γ, δ$ are the roots of the equation $x^2+qx+1=0$, then
$( α-γ)(β+ δ)( δ+ α)( β- γ) = $ ?</p>
| Community | -1 | <p>We have $$P= (α - γ)(β + δ) (β - γ)(α + δ)
= (αβ + αδ - βγ - γδ) (αβ + βδ - αγ - γδ)$$</p>
<p>We know that $αβ = γδ = 1$. Using these, we'll have</p>
<p>$$P = (1 + αδ - βγ - 1) (1 + βδ - αγ - 1)$$
$$ = (αδ - βγ) (βδ - αγ)$$
$$= αβδ^2 - γδα^2 - γδβ^2 + αβγ^2$$</p>
<p>Again, using $αβ = γδ = 1$, we get,</p>
<p>$$P = δ^2 - α^2 - β^2 + γ^2
= γ^2 + δ^2 - (α^2 + β^2)$$</p>
<p>Now, 'completing the square', we have,</p>
<p>$$P = [(γ + δ)^2 - 2γδ] - [(α + β)^2 - 2αβ]$$</p>
<p>And using $α + β = -p , γ + δ = -q$ and $αβ = γδ = 1$, we get,</p>
<p>$$P = (q^2 - 2) - (p^2 - 2)= q^2 - p^2$$ Hope it helps. </p>
|
2,088,487 | <p>If $α, β$ are the roots of the equation $x^2 +px+1=0$, and $γ, δ$ are the roots of the equation $x^2+qx+1=0$, then
$( α-γ)(β+ δ)( δ+ α)( β- γ) = $ ?</p>
| Hari Shankar | 351,559 | <p>Write the product as $(\gamma - \alpha)(\gamma - \beta)(-\delta - \alpha)(-\delta - \beta) = P(\gamma) P(-\delta)$</p>
<p>$P(\gamma) = \gamma^2+p \gamma+1 = p\gamma - q\gamma = \gamma (p-q)$, since $\gamma^2+1 = -q \gamma$</p>
<p>Similarly $P(-\delta) = \delta^2 - p\delta + 1 = -(q+p) \delta$</p>
<p>Hence $P(\gamma) P(-\delta) = \gamma(p-q)\cdot\big(-(q+p)\delta\big) = (q^2-p^2)\gamma \delta = q^2-p^2 $</p>
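<p>A numeric spot-check of the identity, with $p,q$ chosen arbitrarily by me:</p>

```python
import cmath

def roots(b):
    # Roots of x^2 + b*x + 1 = 0.
    d = cmath.sqrt(b * b - 4)
    return (-b + d) / 2, (-b - d) / 2

p, q = 3.0, 5.0
alpha, beta = roots(p)
gamma, delta = roots(q)

prod = (alpha - gamma) * (beta + delta) * (delta + alpha) * (beta - gamma)
print(prod.real)  # ~16.0 = q^2 - p^2
assert abs(prod - (q * q - p * p)) < 1e-9
```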
|
2,227,280 | <p>For every positive number there exists a corresponding negative number. Would that imply that the number of positive numbers is "equal" to the number of negative numbers? (Are they incomparable because they both approach infinity?)</p>
| badjohn | 332,763 | <p>The word "infinity" is used in many places in maths and the definitions are not necessarily the same. Different symbols are used but there are still more definitions than symbols. </p>
<p>The most familiar symbol for infinity, $\infty$, is commonly used in calculus but it is more of a suggestive shorthand than an actual infinity. It is not normally used when discussing sizes of sets. </p>
<p>When discussing the size of sets, the term cardinality is usual. Clearly, normal counting won't work for infinite sets. However, as others have said, it is possible to say whether or not two sets <em>have the same cardinality</em> / <em>are the same size</em>. If a bijection (one to one and onto map) between them exists then they have the same cardinality. One counter-intuitive property of infinite sets is that it is possible that a map which is one to one and not onto exists as well one that is one to one and onto. So, the existence of a map that is one to one but not onto does not prove that one set is smaller. For that, you would also need to prove that no other map that is one to one and onto exists. The simplest example of this is the set of all natural numbers and just the even ones. Intuitively, the set of even numbers is smaller and there is an obvious map from the even natural numbers to a subset of the natural numbers. However, there is also a map between them which is one to one and onto hence they actually have the same cardinality. </p>
<p>By definition, a set which has a one to one and onto map to the natural numbers is called "countable". This is not countable in the day to day sense, but it does mean that you could name one element per second and, although you would never finish (*), you would name any particular element within a finite time. </p>
<p>(*) Assuming that you and the universe were immortal. </p>
<p>Many sets of intuitively different sizes are countable, for example the integers $\Bbb{Z}$, the rational numbers $\Bbb{Q}$ and the algebraic numbers $\Bbb{A}$. This cardinality / size is named countable and the symbol $\aleph_0$ is used.
This is called "aleph null", "aleph naught", or "aleph zero". Aleph is the first letter of the Hebrew alphabet. </p>
<p>However, even though many sets which are intuitively bigger turn out to be the same size, bigger sets exist. It can be proved that there is no one to one and onto map from the natural numbers to the set of real numbers $\Bbb{R}$ so that it is bigger. Again, some apparently bigger sets are not actually bigger. For example $\Bbb{C}$, $\Bbb{R^2}$, and $\Bbb{R^3}$ are the same cardinality as $\Bbb{R}$. </p>
<p>Even bigger sets exist, the set of all subsets of $\Bbb{R}$ is bigger than $\Bbb{R}$. There is no biggest set. </p>
<p>A particularly interesting question is whether $\Bbb{R}$ is the next biggest cardinality after the countable infinity of $\Bbb{N}$. This is called the "Continuum Hypothesis". My answer is already long enough so I won't talk about that but if you are interested in this subject then you should look it up. </p>
<p>Finally, there is yet another type of infinity called "ordinal numbers". In this sense, the first infinity is usually written as $\omega$ and, unlike the cardinal infinities, $\omega + 1$ is different. </p>
|
2,894,606 | <p>Please could you help me with the question below: demonstrate that $O(\log n^k) = O(\log n)$.</p>
| Claude Leibovici | 82,404 | <p>Except for very few specific cases, you cannot get explicit solutions and you need numerical methods.</p>
<p>Consider that you look for the zero(s) of function
$$f(B)=B^S-\frac{B-1}{R}-1$$
$$f'(B)=S B^{S-1}-\frac{1}{R}$$
$$f''(B)=(S-1) S B^{S-2}$$
The first derivative cancels at a point
$$B_*=\left(\frac{1}{R S}\right)^{\frac{1}{S-1}}$$ and the second derivative is always positive if $ 1 < S <2$ (I do not consider the cases $S=1$ or $S=2$ for which the problem is simple). Since we can bound the function, the solution is always between $\left(\frac 1R -1\right)$ and $1$.</p>
<p>You can also notice that $B=1$ is a trivial solution for any $R,S$ in the provided ranges. Since $f(0)=\frac 1R >0$, if $f(B_*) <0$, then the solution is $> B_*$.</p>
<p>So, what we do is compute $f(k B_*)$ for $k=2,3,4,\cdots$ until we find the smallest $k$ such that $f(k B_*)>0$. At this point, let $B_0=k_{min}B_*$ and start Newton's method.</p>
<p>Let us take one example using $S=1.234$ and $R=0.567$; this gives $B_*\approx 4.6$. Using the simplistic procedure, we find $k_{min}=2$; so start using $B_0=9.2$ and get the following iterates
$$\left(
\begin{array}{cc}
n & B_n \\
0 & 9.201489213 \\
1 & 9.194700041 \\
2 & 9.194696121
\end{array}
\right)$$</p>
<p>Let us repeat using $S=1.357$ and $R=0.246$; this gives $B_*\approx 21.6$. Using the simplistic procedure, we find $k_{min}=3$; so start using $B_0=64.8$ and get the following iterates
$$\left(
\begin{array}{cc}
n & B_n \\
0 & 64.83555879 \\
1 & 51.00444906 \\
2 & 48.72198422 \\
3 & 48.64769013 \\
4 & 48.64760965
\end{array}
\right)$$ </p>
<p>If you want something more sophisticated, we could approximate $B_0$ expanding the function as a truncated Taylor series around $B_*$. This would give
$$B_0=B_*+ \sqrt{-2\frac{f(B_*) }{f''(B_*) }}$$ For the worked examples, this would give as starting values $8.76$ and $46.05$.</p>
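<p>A minimal Python sketch of the whole procedure (the function name <code>solve</code> and its structure are my own; the formulas for $f$, $f'$ and $B_*$ are the ones above):</p>

```python
def solve(S, R, tol=1e-10, max_iter=50):
    """Nontrivial root of f(B) = B^S - (B-1)/R - 1 (B = 1 is the trivial one)."""
    f = lambda B: B**S - (B - 1) / R - 1
    fp = lambda B: S * B**(S - 1) - 1 / R
    # Minimum of f, where f'(B_*) = 0
    B_star = (1.0 / (R * S))**(1.0 / (S - 1))
    # Walk k = 2, 3, ... until f(k B_*) > 0
    k = 2
    while f(k * B_star) <= 0:
        k += 1
    B = k * B_star
    # Newton iterations from B_0 = k_min B_*
    for _ in range(max_iter):
        step = f(B) / fp(B)
        B -= step
        if abs(step) < tol:
            break
    return B

print(round(solve(1.234, 0.567), 6))  # 9.194696, matching the first table
print(round(solve(1.357, 0.246), 5))  # 48.64761, matching the second table
```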
|