| qid | question | author | author_id | answer |
|---|---|---|---|---|
4,614,853 | <p>I'm trying to prove that the BinPacking problem is NP-hard, given that the partition problem <em>is NP-hard</em>: given a set E of positive integers, can I split it into two subsets such that the sums of the integers in both subsets are equal?</p>
<p>The polynomial reduction I found would be the following:</p>
<ul>
<li>My objects O are defined as what's in E (the weight of each object is the integer value and |E| is the number of objects)</li>
<li>Number of bags = 2</li>
<li>Capacity of each bag is half of the sum of everything in E</li>
</ul>
<p>Is this wrong because my polynomial reduction only reduces to BinPacking with <strong>2 bags</strong> whose <strong>capacities are the same</strong>?</p>
<p>I'm skeptical because if I carry out this proof and find a polynomial-time algorithm for the partition problem, I will only have proved that BinPacking with 2 bags of equal capacity is solvable in polynomial time.</p>
| Paul DUBOIS | 1,138,237 | <p>It is not wrong; you effectively reduce to only some of the BinPacking problem instances.
You will only have an <em>explicit</em> solution for BinPacking with 2 bags; this is correct as well.</p>
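<p>A minimal sketch in Python of the reduction described above (the instance and helper names are illustrative, not from the question), together with a brute-force feasibility check on a small example:</p>

```python
from itertools import product

def partition_to_binpacking(E):
    """Map a Partition instance E to a BinPacking instance, as in the
    reduction above: one object per integer, 2 bins, capacity = half the sum."""
    total = sum(E)
    objects = list(E)
    num_bins = 2
    capacity = total // 2
    return objects, num_bins, capacity

def binpacking_feasible(objects, num_bins, capacity):
    """Brute force: does some assignment of objects to bins respect capacity?"""
    for assign in product(range(num_bins), repeat=len(objects)):
        loads = [0] * num_bins
        for obj, bag in zip(objects, assign):
            loads[bag] += obj
        if all(load <= capacity for load in loads):
            return True
    return False

E = [3, 1, 1, 2, 2, 1]   # sums to 10; {3, 2} vs {1, 1, 2, 1} both sum to 5
objs, k, cap = partition_to_binpacking(E)
print(binpacking_feasible(objs, k, cap))  # True: E can be split evenly
```

Note the brute-force checker is exponential; it only serves to illustrate that a "yes" Partition instance maps to a "yes" BinPacking instance and vice versa.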
|
78,311 | <p>Let $\mu$ be standard Gaussian measure on $\mathbb{R}^n$, i.e. $d\mu = (2\pi)^{-n/2} e^{-|x|^2/2} dx$, and define the Gaussian Sobolev space $H^1(\mu)$ to be the completion of $C_c^\infty(\mathbb{R}^n)$ under the inner product
$$\langle f,g \rangle_{H^1(\mu)} := \int f g\, d\mu + \int \nabla f \cdot \nabla g\, d\mu.$$</p>
<p>It is easy to see that polynomials are in $H^1(\mu)$. Do they form a dense set?</p>
<p>I am quite sure the answer must be yes, but can't find or construct a proof in general. I do have a proof for $n=1$, which I can post if anyone wants. It may be useful to know that the polynomials are dense in $L^2(\mu)$.</p>
<p><strong>Edit</strong>: Here is a proof for $n=1$.</p>
<p>It is sufficient to show that any $f \in C^\infty_c(\mathbb{R})$ can be approximated by polynomials. We know polynomials are dense in $L^2(\mu)$, so choose a sequence of polynomials $q_n \to f'$ in $L^2(\mu)$. Set $p_n(x) = \int_0^x q_n(y)\,dy + f(0)$; $p_n$ is also a polynomial. By construction we have $p_n' \to f'$ in $L^2(\mu)$; it remains to show $p_n \to f$ in $L^2(\mu)$. Now we have
$$ \begin{align*} \int_0^\infty |p_n(x) - f(x)|^2 e^{-x^2/2} dx &= \int_0^\infty \left(\int_0^x (q_n(y) - f'(y)) dy \right)^2 e^{-x^2/2} dx \\
&\le \int_0^\infty \int_0^x (q_n(y) - f'(y))^2\,dy \,x e^{-x^2/2} dx \\
&= \int_0^\infty (q_n(x) - f'(x))^2 e^{-x^2/2} dx \to 0 \end{align*}$$
where we used Cauchy-Schwarz in the second line and integration by parts in the third. The $\int_{-\infty}^0$ term can be handled the same with appropriate minus signs.</p>
<p>The problem with $n > 1$ is I don't see how to use the fundamental theorem of calculus in the same way.</p>
| Nate Eldredge | 822 | <p>Byron's paper, which he linked in his (accepted) answer, has a proof in a more general setting (where the Gaussian measure can be replaced by any measure with exponentially decaying tails). Here is a specialization of it to the Gaussian case, which I wrote up to include in some lecture notes. I guess I was on the right track, the trick was to differentiate more times.</p>
<p>$\newcommand{\R}{\mathbb{R}}$
For continuous $\psi : \R^k \to \R$, let
\begin{equation*}
I_i \psi(x_1, \dots, x_k) = \int_0^{x_i} \psi(x_1, \dots, x_{i-1},
y, x_{i+1}, \dots, x_k)dy.
\end{equation*}
By Fubini's theorem, all operators $I_i, 1 \le i \le k$ commute. If
$\psi \in L^2(\mu)$ is continuous, then $I_i \psi$ is also
continuous, and $\partial_i I_i \psi = \psi$. Moreover,
$$\begin{align*}
\int_{0}^\infty |I_i \psi (x_1, \dots, x_k)|^2 e^{-x_i^2/2} dx_i &=
\int_{0}^\infty \big\lvert\int_0^{x_i} \psi(\dots, y,\dots)\,dy\big\rvert^2
e^{-x_i^2/2} dx_i \\
&\le \int_0^\infty \int_0^{x_i} |\psi(\dots, y, \dots)|^2 \,dy x_i
e^{-x_i^2/2}\,dx_i && \text{Cauchy--Schwarz} \\
&= \int_0^\infty |\psi(\dots, x_i, \dots)|^2 e^{-x_i^2/2} dx_i
\end{align*}$$
where in the last line we integrated by parts. We can make the same
argument for the integral from $-\infty$ to $0$, adjusting signs as
needed, so we have
\begin{equation*}
\int_\R |I_i \psi(x)|^2 e^{-x_i^2/2} dx_i \le \int_\R |\psi(x)|^2
e^{-x_i^2/2} dx_i.
\end{equation*}
Integrating out the remaining $x_j$ with respect to $e^{-x_j^2/2}$
shows
$$
||{I_i \psi}||_{L^2(\mu)}^2 \le ||{\psi}||_{L^2(\mu)}^2,
$$
i.e. $I_i$ is a contraction on $L^2(\mu)$.</p>
<p>Now for $\phi \in C_c^\infty(\R^k)$, we can approximate $\partial_1
\dots \partial_k \phi$ in $L^2(\mu)$ norm by polynomials $q_n$.
If we let $p_n = I_1 \dots I_k q_n$, then $p_n$ is again a
polynomial, and $p_n \to I_1 \dots I_k \partial_1 \dots \partial_k
\phi = \phi$ in $L^2(\mu)$. Moreover,
$\partial_i p_n = I_1 \dots I_{i-1} I_{i+1} \dots I_k q_n \to I_1
\dots I_{i-1} I_{i+1} \dots I_k \partial_1 \dots \partial_k \phi =
\partial_i \phi$ in $L^2(\mu)$ also.</p>
|
78,311 | <p>Let $\mu$ be standard Gaussian measure on $\mathbb{R}^n$, i.e. $d\mu = (2\pi)^{-n/2} e^{-|x|^2/2} dx$, and define the Gaussian Sobolev space $H^1(\mu)$ to be the completion of $C_c^\infty(\mathbb{R}^n)$ under the inner product
$$\langle f,g \rangle_{H^1(\mu)} := \int f g\, d\mu + \int \nabla f \cdot \nabla g\, d\mu.$$</p>
<p>It is easy to see that polynomials are in $H^1(\mu)$. Do they form a dense set?</p>
<p>I am quite sure the answer must be yes, but can't find or construct a proof in general. I do have a proof for $n=1$, which I can post if anyone wants. It may be useful to know that the polynomials are dense in $L^2(\mu)$.</p>
<p><strong>Edit</strong>: Here is a proof for $n=1$.</p>
<p>It is sufficient to show that any $f \in C^\infty_c(\mathbb{R})$ can be approximated by polynomials. We know polynomials are dense in $L^2(\mu)$, so choose a sequence of polynomials $q_n \to f'$ in $L^2(\mu)$. Set $p_n(x) = \int_0^x q_n(y)\,dy + f(0)$; $p_n$ is also a polynomial. By construction we have $p_n' \to f'$ in $L^2(\mu)$; it remains to show $p_n \to f$ in $L^2(\mu)$. Now we have
$$ \begin{align*} \int_0^\infty |p_n(x) - f(x)|^2 e^{-x^2/2} dx &= \int_0^\infty \left(\int_0^x (q_n(y) - f'(y)) dy \right)^2 e^{-x^2/2} dx \\
&\le \int_0^\infty \int_0^x (q_n(y) - f'(y))^2\,dy \,x e^{-x^2/2} dx \\
&= \int_0^\infty (q_n(x) - f'(x))^2 e^{-x^2/2} dx \to 0 \end{align*}$$
where we used Cauchy-Schwarz in the second line and integration by parts in the third. The $\int_{-\infty}^0$ term can be handled the same with appropriate minus signs.</p>
<p>The problem with $n > 1$ is I don't see how to use the fundamental theorem of calculus in the same way.</p>
| JT_NL | 1,120 | <p>I think I have a different proof. Let $\gamma$ be the <em>Gaussian measure</em>, that is, $\gamma$ is given by the <em>Radon-Nikodym density</em>,
$$\mathrm{d}\gamma(x) = \frac{\mathrm{e}^{-x^2}}{\sqrt{\pi}} \mathrm{d}x.$$
Also, consider the <em>Ornstein-Uhlenbeck operator</em> given as,
$$L := -\frac12 \Delta + x \cdot \nabla.$$
We can verify that for $u$ and $v$ in $C_{\mathrm{c}}^\infty(\mathbf R^d)$ we have the symmetry
$$\int_{\mathbf{R}^d} u L v \, \gamma(\mathrm{d}x) = \int_{\mathbf{R}^d} \nabla u \cdot \nabla v \, \gamma(\textrm{d}x).$$
The nice thing about this is that the <em>Hermite polynomials</em> are an orthogonal basis for the Gaussian Hilbert space, that is, $L^2(\gamma)$. These can be define using the Rodrigues' formula, that is,
$$H_n(x) = (-1)^n \mathrm{e}^{x^2} \partial_x^n \mathrm{e}^{-x^2}.$$
Furthermore, we have,
$$L H_n = n H_n.$$
Also, we have,
$$H_n' = 2n H_{n - 1}.$$
Proving that the polynomials form a basis for $L^2(\gamma)$ is not hard, just consider the entire function
$$F(z) = \int_{-\infty}^\infty \mathrm e^{zx - x^2} \, \frac{\mathrm{d}x}{\sqrt\pi}.$$
Hence the Hermite polynomials form a basis as well, since every monomial $(x \mapsto x^n)$ is a linear combination of them. Higher-order Hermite polynomials are the canonical tensor extensions. So, given an $f$ in $L^2$ we can write $f$ in the form
$$f = \lim_N f_N = \lim_N \sum_{n = 0}^N a_n \frac{H_n}{\sqrt{n! 2^n}},$$
so the partial sums $f_N$ converge to $f$.</p>
<p>Also, recall the orthogonality (after scaling),
$$\int_{\mathbf{R}^d} h_n h_m \, \gamma(\mathrm{d}x) = \delta_{nm},$$
where
$$h_n = \frac{H_n}{\sqrt{n! 2^n}}.$$</p>
<p>We can rewrite the inner product on the Sobolev space as
$$\langle u, v \rangle_{H^1} = \langle u, (L + 1) v \rangle_{L^2(\gamma)}.$$
We only have to care about the bilinear form
$$\mathcal E(u, v) = \int_{\mathbf{R}^d} u L v \, \gamma(\mathrm{d}x).$$
Now, picking $u = v = f_N - f$, we have, after noting that
$$g_N := f - f_N = \sum_{n = N + 1}^\infty a_n \frac{H_n}{\sqrt{n! 2^n}},$$
$$
\begin{align}
\mathcal E(u, v) &= \int_{\mathbf{R}^d} f_N L f_N \, \gamma(\mathrm{d}x)\\
&\quad + \int_{\mathbf{R}^d} f L f \, \gamma(\mathrm{d}x)\\
&\quad - 2\int_{\mathbf{R}^d} f_N L f \, \gamma(\mathrm{d}x)\\
&= \sum_{n = 0}^\infty n \frac{|a_n|^2}{n! 2^n} - \sum_{n = 0}^N n \frac{|a_n|^2}{n! 2^n}\\
&= \sum_{n = N + 1}^\infty n \frac{|a_n|^2}{n! 2^n}.
\end{align}
$$
And as $\sum |a_n|^2$ converges due to the $L^2$ density, so should this.</p>
<p>I hope I did not make any mistakes, just occurred to me while I was biking...</p>
|
1,600,054 | <p>The graph of $y=x^x$ looks like this:</p>
<p><a href="https://i.stack.imgur.com/JdbSv.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JdbSv.gif" alt="Graph of y=x^x."></a></p>
<p>As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.</p>
<p>I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that $x^x$ attains its minimum at $x=1/e$. <strong>Is this just pure coincidence, or could someone provide an intuitive explanation</strong> (i.e. more than just a proof) <strong>of why this is?</strong></p>
| sirfoga | 83,083 | <p><strong>Hint</strong>: you are actually looking for a local/global minimum, so look at the derivative of the function $f(x) = x^x$:
$$f'(x) = x^x (\log (x)+1),$$
which equals $0 \iff x = \frac{1}{e}$.</p>
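<p>A quick numerical check of the hint (a sketch):</p>

```python
import math

def f(x):
    return x ** x                         # x^x = exp(x * ln x) for x > 0

def fprime(x):
    return x ** x * (math.log(x) + 1)     # the derivative from the hint

c = 1 / math.e
print(fprime(c))                          # ~0: c is the critical point
# values on either side are larger, so it is a minimum
print(f(c) < f(c - 0.01), f(c) < f(c + 0.01))
```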
|
1,600,054 | <p>The graph of $y=x^x$ looks like this:</p>
<p><a href="https://i.stack.imgur.com/JdbSv.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JdbSv.gif" alt="Graph of y=x^x."></a></p>
<p>As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.</p>
<p>I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that $x^x$ attains its minimum at $x=1/e$. <strong>Is this just pure coincidence, or could someone provide an intuitive explanation</strong> (i.e. more than just a proof) <strong>of why this is?</strong></p>
| Brenton | 226,184 | <p>Hint: Note that $x^x = e^{x\log x}$.</p>
<p>So minimizing $x^x$ is the same as minimizing $x\log x$.</p>
|
1,988,563 | <blockquote>
<p>Use the formal definition to prove the given limit:
$$\lim_{x\to\frac13^+}\sqrt{\frac{3x-1}2}=0$$</p>
</blockquote>
<p>Not sure how to deal with $\sqrt\cdot$. Appreciate a hint.</p>
| Doug M | 317,176 | <p>$\forall \epsilon>0,\exists \delta>0\text{ such that } 0<(x-\frac 13)<\delta \implies |\sqrt{\frac{3x-1}{2}}|<\epsilon$</p>
<p>$\sqrt{\frac{3x-1}{2}} < \sqrt{\frac 32} \sqrt \delta$</p>
<p>$\delta \le \frac 23 \epsilon^2\implies|\sqrt{\frac{3x-1}{2}}|<\epsilon$</p>
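<p>A numerical sanity check (a sketch) of the choice $\delta \le \frac23\epsilon^2$ above:</p>

```python
import math
import random

def works(eps, delta, trials=1000):
    """Check: 0 < x - 1/3 < delta implies sqrt((3x-1)/2) < eps."""
    for _ in range(trials):
        # sample x strictly inside the delta-window to the right of 1/3
        x = 1 / 3 + random.uniform(0, delta) * 0.999999
        if math.sqrt((3 * x - 1) / 2) >= eps:
            return False
    return True

for eps in (0.5, 0.1, 0.01):
    delta = (2 / 3) * eps ** 2
    print(eps, works(eps, delta))   # True for each epsilon
```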
|
1,988,563 | <blockquote>
<p>Use the formal definition to prove the given limit:
$$\lim_{x\to\frac13^+}\sqrt{\frac{3x-1}2}=0$$</p>
</blockquote>
<p>Not sure how to deal with $\sqrt\cdot$. Appreciate a hint.</p>
| Masacroso | 173,262 | <p>By the order axioms of a field, we know that multiplying both sides of an inequality by a positive quantity does not change the inequality; then</p>
<p>$$\left|\sqrt{\frac{3x-1}2}\right|<\epsilon\iff
\left|\frac{3x-1}2\right|<\epsilon\cdot \left|\sqrt{\frac{3x-1}2}\right|<\epsilon^2$$</p>
<p>where the last inequality comes from the application again of the LHS. Then we finally have that</p>
<p>$$\left|\frac{3x-1}2\right|<\epsilon^2\iff |3x-1|<2\epsilon^2$$</p>
<p>From here it is easy to apply the definition of limit and see that $\lim_{x\to 1/3^+} (3x-1)=0$ (the expression $2\epsilon^2$ depends only on $\epsilon$, which can be chosen arbitrarily).</p>
<p>And then because the function $f(x)=\sqrt x$ is continuous at zero we have that</p>
<p>$$\lim_{x\to 1/3^+}\sqrt{\frac{3x-1}2}=\sqrt{\frac {\lim_{x\to 1/3^+} (3x-1)}2}=\sqrt{\frac02}=0$$</p>
|
3,422,830 | <blockquote>
<p>In the polynomial
<span class="math-container">$$
(x-1)(x^2-2)(x^3-3) \ldots (x^{11}-11)
$$</span>
what is the coefficient of <span class="math-container">$x^{60}$</span>? </p>
</blockquote>
<p>I've been trying to solve this question for a long time but I couldn't. I don't know whether expanding the brackets would help, because that is really a mess. I have run out of ideas. Would someone please help me solve this question? </p>
| amir bahadory | 204,172 | <p>Hint: <span class="math-container">$1+2+3 +\dots+11= \frac {11\times 12}{2} =66$</span>, so we must find how we can construct the number <span class="math-container">$6=6+0=5+1=4+2=3+3=1+2+3$</span>, and note that <span class="math-container">$3+3$</span> is impossible. </p>
|
3,422,830 | <blockquote>
<p>In the polynomial
<span class="math-container">$$
(x-1)(x^2-2)(x^3-3) \ldots (x^{11}-11)
$$</span>
what is the coefficient of <span class="math-container">$x^{60}$</span>? </p>
</blockquote>
<p>I've been trying to solve this question for a long time but I couldn't. I don't know whether expanding the brackets would help, because that is really a mess. I have run out of ideas. Would someone please help me solve this question? </p>
| A.J. | 654,406 | <p>Expanding the brackets is definitely not the way to go, but thinking about what would happen if you did is helpful. Every term in the expanded polynomial will come from multiplying one term from each bracket (e.g. the highest degree term will come from multiplying <span class="math-container">$\,x \cdot x^2 \cdot x^3 \cdot x^4 \dots \cdot x^{11}\,$</span>), and since each bracket is a binomial that doesn't leave many options.</p>
<p>The first thing to notice is that the aforementioned highest-degree term will be <span class="math-container">$\,x^{66};\,$</span> thus, to get a term with <span class="math-container">$\,x^{60},\,$</span> we will need to use MOST BUT NOT ALL of the <span class="math-container">$\,x$</span>'s in the factors. We need to reduce the maximum degree by <span class="math-container">$\,6\,$</span>; one obvious way to do that is to not use the <span class="math-container">$\,x^6\,$</span> in the sixth factor, so instead we use the <span class="math-container">$\,-6\,$</span> from that bracket. However, there are other ways, e.g. not using the <span class="math-container">$\,x^2\,$</span> or the <span class="math-container">$\,x^4\,$</span> from the second and fourth factors, respectively. Hopefully by now you're getting the sense that what we need to do is come up with all the different ways to make <span class="math-container">$\,x^6\,$</span> with any of the powers available to us, and then for each way find the corresponding coefficient.</p>
<p>There are not that many ways of making <span class="math-container">$\,x^6;\,$</span> note that the most we can use is three powers of <span class="math-container">$\,x\,$</span>, as the minimum power would then be <span class="math-container">$\,x\cdot x^2\cdot x^3=x^6.\,$</span> The list of ways is: <span class="math-container">$\,x^6, x\cdot x^5, x^2 \cdot x^4,\,$</span> and <span class="math-container">$\,x\cdot x^2\cdot x^3.\,$</span></p>
<p>Now we just need to find the coefficient of each term that EXCLUDES one of the above combinations. As an example, let's consider the term that excludes <span class="math-container">$\,x^2\,$</span> and <span class="math-container">$\,x^4.\,$</span> Written out in detail, this term would be</p>
<p><span class="math-container">$$(x)(-2)(x^3)(-4)(x^5)(x^6)(x^7)(x^8)(x^9)(x^{10})(x^{11}) = 8x^{60}$$</span></p>
<p>Similarly, it should be easy to see that the coefficients for the terms excluding the other combinations are:</p>
<ul>
<li>For <span class="math-container">$\,x^6,\,$</span> the coefficient will just be <span class="math-container">$\,-6\,$</span>;</li>
<li>For <span class="math-container">$\,x\cdot x^5,\,$</span> the coefficient will be <span class="math-container">$\,(-1)(-5)=5\,$</span>;</li>
<li>For <span class="math-container">$\,\,x\cdot x^2\cdot x^3,\,$</span> the coefficient will be <span class="math-container">$\,(-1)(-2)(-3)=-6.\,$</span></li>
</ul>
<p>Thus the four terms that will have <span class="math-container">$\,x^{60}\,$</span> are <span class="math-container">$\,-6x^{60}, 5x^{60}, 8x^{60}\,$</span> and <span class="math-container">$\,-6x^{60}.\,$</span> The sum of these will be just <span class="math-container">$\,x^{60},\,$</span> so the answer to the question is <span class="math-container">$\,\boxed{1}.\,$</span></p>
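<p>The count above can be verified by brute force; a short sketch that expands the whole product as a dictionary of coefficients:</p>

```python
def poly_mul(p, q):
    """Multiply polynomials given as {degree: coefficient} dicts."""
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, 0) + c1 * c2
    return r

prod = {0: 1}
for k in range(1, 12):
    prod = poly_mul(prod, {k: 1, 0: -k})   # the factor (x^k - k)

print(prod[60])   # 1, matching the sum -6 + 5 + 8 - 6 above
```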
|
3,380,998 | <p>Is it possible to express the cube root of "i" without using "i" itself?</p>
<p>If this is possible can you show me how to arrive at it?</p>
<p>thanks</p>
| Henno Brandsma | 4,280 | <p>No. <span class="math-container">$i$</span> has 3 cube roots in the complex numbers and none of them can of course be real (i.e. have no imaginary part; a real number has a real third power, not <span class="math-container">$i$</span>). They are </p>
<p><span class="math-container">$$e^{i\frac{\pi}{6}}, e^{i\frac{5\pi}{6}}, e^{i\frac{3\pi}{2}} $$</span> </p>
<p>which you can write out using Euler's <span class="math-container">$e^{it} = \cos(t) + i\sin(t)$</span> as usual.</p>
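<p>A quick numerical check (a sketch) that these three numbers are indeed cube roots of $i$ and that none of them is real:</p>

```python
import cmath

roots = [cmath.exp(1j * t)
         for t in (cmath.pi / 6, 5 * cmath.pi / 6, 3 * cmath.pi / 2)]
for z in roots:
    print(z, z ** 3)   # each cube is (numerically) i
    # each root has a nonzero imaginary part, so none is real
```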
|
104,297 | <p>How would I go about solving</p>
<p>$(1+i)^n = (1+\sqrt{3}i)^m$ for integer $m$ and $n$?</p>
<p>I have tried </p>
<pre><code>Solve[(1+I)^n == (1+Sqrt[3] I)^m && n ∈ Integers && m ∈ Integers, {n, m}]
</code></pre>
<p>but this does not give the answer in the 'correct' form.</p>
| bbgodfrey | 1,063 | <p>As stated in the question and also the comment above by <a href="https://mathematica.stackexchange.com/users/12/szabolcs">Szabolcs</a>, Mathematica does not seem to be able to solve the equation directly. For instance, neither <code>Solve</code> nor <code>Reduce</code> produces the desired result. However, as I suggested in a comment above, the equation can be decomposed into expressions for its amplitude and phase, and each solved to obtain the answer. Begin with the amplitude.</p>
<pre><code>Abs[(1 + I)]^n == Abs[(1 + Sqrt[3] I)]^m
(* 2^(n/2) == 2^m *)
</code></pre>
<p>Thus, <code>n</code> is twice <code>m</code>. Insert this into the expression for the phases, modulo <code>2 π</code>.</p>
<pre><code>Solve[(Mod[Arg[(1 + I)] n, 2 π] ==
Mod[Arg[(1 + Sqrt[3] I)] m, 2 π]) /. n -> 2 m, m, Integers]
(* {{m -> ConditionalExpression[12 C[1], C[1] ∈ Integers]}} *)
</code></pre>
<p>Thus, <code>m</code> is any integer, positive or negative, multiplied by <code>12</code>, and <code>n</code> is <code>2 m</code>.</p>
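<p>The conclusion can be cross-checked by brute force outside Mathematica; a Python sketch (floating-point, so matches are tested with a relative tolerance):</p>

```python
import math

sols = []
for m in range(1, 30):
    n = 2 * m                               # forced by the amplitude equation
    lhs = (1 + 1j) ** n
    rhs = (1 + math.sqrt(3) * 1j) ** m
    if abs(lhs - rhs) < 1e-6 * abs(rhs):    # both moduli are 2^m, so only the
        sols.append(m)                      # phases can disagree
print(sols)   # [12, 24]: the multiples of 12 in range
```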
|
2,239,192 | <p>Let $P_n$ be the polynomials of degree no more than n with basis $Z_n=(1, x, x^2,\dotsc,x^n)$. The derivative transformation $D$ goes from $P_n$ to $P_{n-1}$. Write out the matrix for $D$ from $(P_4, Z_4)$ to $(P_3, Z_3)$.</p>
<p>I haven't done a problem similar to this so I'm not sure how to go about doing this. Thanks</p>
| Brian Fitzpatrick | 56,960 | <p>We have bases
\begin{align*} \alpha &= \{1,x,x^2,x^3,x^4\} & \beta &= \{1,x,x^2,x^3\}
\end{align*} for $P_4$ and $P_3$ respectively.</p>
<p>Our map $D:P_4\to P_3$ is given by $D(f)=f^\prime$. To compute
$[D]_\alpha^\beta$, we must evaluate $D$ on each basis element in $\alpha$ and
write the output in terms of the basis elements in $\beta$. Doing so gives
$$
\begin{array}{rcrcrcrcrcrcrcrcrcrcr} D(1) & = & 0 & = & \color{blue}{0}\cdot 1 &
+ & \color{purple}{0}\cdot x & + & \color{darkcyan}{0}\cdot x^2 & + &
\color{darkorange}{0}\cdot x^3 \\ D(x) & = & 1 & = & \color{blue}{1}\cdot 1 & +
& \color{purple}{0}\cdot x & + & \color{darkcyan}{0}\cdot x^2 & + &
\color{darkorange}{0}\cdot x^3 \\ D(x^2) & = & 2\,x & = & \color{blue}{0}\cdot 1
& + & \color{purple}{2}\cdot x & + & \color{darkcyan}{0}\cdot x^2 & + &
\color{darkorange}{0}\cdot x^3 \\ D(x^3) & = & 3\,x^2 & = & \color{blue}{0}\cdot
1 & + & \color{purple}{0}\cdot x & + & \color{darkcyan}{3}\cdot x^2 & + &
\color{darkorange}{0}\cdot x^3 \\ D(x^4) & = & 4\,x^3 & = & \color{blue}{0}\cdot
1 & + & \color{purple}{0}\cdot x & + & \color{darkcyan}{0}\cdot x^2 & + &
\color{darkorange}{4}\cdot x^3
\end{array}
$$
Hence
$$
[D]_\alpha^\beta= \left[\begin{array}{rrrrr} \color{blue}{0} & \color{blue}{1} &
\color{blue}{0} & \color{blue}{0} & \color{blue}{0} \\ \color{purple}{0} &
\color{purple}{0} & \color{purple}{2} & \color{purple}{0} & \color{purple}{0} \\
\color{darkcyan}{0} & \color{darkcyan}{0} & \color{darkcyan}{0} &
\color{darkcyan}{3} & \color{darkcyan}{0} \\ \color{darkorange}{0} &
\color{darkorange}{0} & \color{darkorange}{0} & \color{darkorange}{0} &
\color{darkorange}{4}
\end{array}\right]
$$</p>
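<p>A quick check (a sketch) that the resulting $4\times 5$ matrix really differentiates coefficient vectors; the test polynomial is illustrative:</p>

```python
# rows are the beta-coordinates of D applied to each alpha basis vector
D = [
    [0, 1, 0, 0, 0],
    [0, 0, 2, 0, 0],
    [0, 0, 0, 3, 0],
    [0, 0, 0, 0, 4],
]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# p(x) = 5 + 4x + 3x^2 + 2x^3 + x^4, so p'(x) = 4 + 6x + 6x^2 + 4x^3
p = [5, 4, 3, 2, 1]
print(mat_vec(D, p))   # [4, 6, 6, 4]
```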
|
310,930 | <p>Let $U$ be the subspace of $\mathbb{R}^3$ spanned by $\{(1,1,0), (0,1,1)\}$. Find a subspace $W$ of $\Bbb R^3$ such that $\mathbb{R}^3 = U \oplus W$.</p>
<p>As I am having an examination tomorrow, it would be really helpful if one could explain the methodology for doing this problem. I am mostly interested in the methodology, rather than the result. </p>
<p>Thank you very much in advance.</p>
| Jim | 56,747 | <p>In general, if you have a subspace $U \subseteq V$ and you want a complement $W$ so that $U \oplus W = V$. Then first find a basis $\{u_1, \ldots, u_n\}$ of $U$. Then extend this to a basis $\{u_1, \ldots, u_n, w_1, \ldots, w_m\}$ of $V$. The complement is the subspace spanned by those additional vectors,
$$W = \mathrm{span}\{w_1, \ldots, w_m\}.$$</p>
<p>So for the specific problem above you need to extend $\{(1, 1, 0), (0, 1, 1)\}$ to a basis of $\mathbb R^3$, i.e., you just need to find a single vector not contained in $U$. Then $W$ will be the line spanned by that vector.</p>
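<p>A small computational check of the method (a sketch): adding, say, $(1,0,0)$ to the two given vectors yields a nonzero determinant, so it lies outside $U$ and spans a complement:</p>

```python
def det3(rows):
    # determinant of a 3x3 matrix given as three row tuples
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

u1, u2 = (1, 1, 0), (0, 1, 1)
w = (1, 0, 0)                    # candidate vector, chosen by inspection
print(det3([u1, u2, w]))         # nonzero => {u1, u2, w} is a basis of R^3
```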
|
3,226,320 | <p>I have some data points that need to be fit to the curve defined by</p>
<p><span class="math-container">$$y(x)=\frac{k}{(x+a)^2} - b$$</span></p>
<p>I have considered that it can be done by the least squares method. However, the analytical solution gives me a negative <span class="math-container">$a$</span>, so it puts the first point on the left branch of this hyperbola and I need all the points to fit to the right branch, thus <span class="math-container">$a$</span> must be positive. All my points have positive <span class="math-container">$x$</span> and <span class="math-container">$y$</span> is non-increasing.</p>
<p>Is there any way to add this type of constraint to analytical solution?</p>
<p>I would also kindly appreciate any links to related and/or useful information on iterative numerical solution. I need to program everything manually for my mobile app, so I can't use any external software or libraries.</p>
| Claude Leibovici | 82,404 | <p>In any case, your model is nonlinear with respect to its parameters. So, why not rewrite it as
<span class="math-container">$$y(x)=\frac{k}{(x+\alpha^2)^2} - b$$</span></p>
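<p>A library-free sketch of this idea (the data, the grid, and all names are illustrative): searching over $\alpha$ instead of $a$ keeps $a=\alpha^2\ge 0$ by construction. A real implementation would refine the grid or run a descent method instead of this crude search.</p>

```python
def model(x, k, alpha, b):
    # a = alpha**2 >= 0 is enforced by construction
    return k / (x + alpha ** 2) ** 2 - b

# synthetic noise-free data from known parameters k=4, a=1 (alpha=1), b=0.5
xs = [0.5, 1.0, 2.0, 3.0, 5.0]
ys = [model(x, 4.0, 1.0, 0.5) for x in xs]

def sse(k, alpha, b):
    return sum((model(x, k, alpha, b) - y) ** 2 for x, y in zip(xs, ys))

# crude grid search over (k, alpha, b)
best = min(
    ((k, alpha, b) for k in (3.0, 4.0, 5.0)
                   for alpha in (0.5, 1.0, 1.5)
                   for b in (0.0, 0.5, 1.0)),
    key=lambda p: sse(*p),
)
print(best)   # (4.0, 1.0, 0.5); the fitted a = best[1]**2 is nonnegative
```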
|
1,408,467 | <p>I've found a few papers that deal with removing redundant inequality constraints for linear programs, but I'm only trying to find the non-redundant constraints that define a feasible region (i.e. I have no objective function), given a set of possibly redundant inequality constraints.</p>
<p>For instance, if I have:</p>
<p>$$
0x_1 + x_2 \leq -1\\
0x_1 - x_2 \leq -1\\
-x_1 + 0x_2 \leq -2\\
x_1 + 0x_2 \leq -2\\
x_1 + 0x_2 \leq -6
$$</p>
<p>Is there a robust technique that could detect that the last constraint is redundant?</p>
| tomi | 215,986 | <p>Suppose you have four non-redundant constraints. These define the feasible region (some sort of quadrilateral).</p>
<p>The fifth constraint is redundant if it does not intersect the feasible region; if it is non-redundant, then adding it to the problem will trim off some part of the feasible region.</p>
<p>For each of the earlier constraints, find where the fifth constraint would intersect the line. Test this point (against the other three constraints) to see if it is on the border of the feasible region. If it isn't for any of the earlier constraints, then it is redundant.</p>
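<p>One easy special case of such a check can be automated without any LP machinery: a constraint that is a positive multiple of another constraint with a weaker right-hand side is certainly redundant. (The general test solves one LP per constraint, maximizing $a_i\cdot x$ subject to the others. Note that for the system in the question, under $\le$ it is $x_1\le -2$, the weaker of the two parallel constraints, that is implied by $x_1\le -6$.) A sketch:</p>

```python
def dominated(A, b, tol=1e-9):
    """Indices of constraints a_i . x <= b_i implied by a parallel, tighter
    one: if a_i = t * a_j with t > 0 and b_i >= t * b_j, constraint i is
    redundant.  Only this simple pattern is caught here."""
    redundant = set()
    for i in range(len(A)):
        for j in range(len(A)):
            if i == j:
                continue
            t = None
            parallel = True
            for ai, aj in zip(A[i], A[j]):
                if abs(aj) < tol:
                    if abs(ai) >= tol:
                        parallel = False
                        break
                else:
                    r = ai / aj
                    if t is None:
                        t = r
                    elif abs(r - t) >= tol:
                        parallel = False
                        break
            if parallel and t is not None and t > 0 and b[i] >= t * b[j]:
                redundant.add(i)
    return redundant

# the system from the question: coefficients of (x1, x2), then right-hand sides
A = [[0, 1], [0, -1], [-1, 0], [1, 0], [1, 0]]
b = [-1, -1, -2, -2, -6]
print(sorted(dominated(A, b)))   # [3]: x1 <= -2 is implied by x1 <= -6
```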
|
89,000 | <p>Let $f:I \rightarrow \mathbb{R}$, where $I\subset \mathbb{R}$ is an interval, be midconvex, that is
$$f\left(\frac{x+y}{2}\right) \leq \frac{f(x)+f(y)}{2}$$ for all $x,y \in I$.
Assume that for some $x_0, y_0 \in I$ with $x_0 < y_0$ the equality
$$f\left(\frac{x_0+y_0}{2}\right)= \frac{f(x_0)+f(y_0)}{2}$$
holds.
What can we say about $f$ restricted to $[x_0, y_0]$ ? Maybe $f|_{[x_0,y_0]}$ is
a sum of some additive function and some constant?</p>
<p>Thanks.</p>
<p>Added.</p>
<p>Maybe then $f(qx_0+(1-q)y_0)=qf(x_0)+(1-q)f(y_0)$ for $q=\frac{k}{2^n}$, where $n \in \mathbb{N}$, $k=0,1,...,2^n$ ?</p>
| Martin Sleziak | 8,297 | <p>If I understand your new question correctly (after the addition in
your question and some clarification in the comments) you want to
know, whether the following is true:</p>
<blockquote>
<p>Let $f$ be a midpoint convex (a.k.a. Jensen convex) function on
$[x_0,y_0]$, i.e. for any $x,y\in[x_0,y_0]$
$$ f\left(\frac{x+y}2\right)\le \frac{f(x)+f(y)}2 \qquad (1)$$
holds. Suppose that, moreover,
$$ f\left(\frac{x_0+y_0}2\right)=\frac{f(x_0)+f(y_0)}2 \qquad (2).$$
Then $f(qx_0+(1-q)y_0)=qf(x_0)+(1-q)f(y_0)$ for $q=\frac{k}{2^n}$,
where $n \in \mathbb{N}$, $k=0,1,...,2^n$.</p>
</blockquote>
<hr>
<p>This can be shown by induction on $n$.</p>
<p><strong>For $n=1$</strong> the claim is equivalent to (1).</p>
<p><strong>For $n=2$</strong> we want to show that
$f\left(\frac{x_0+3y_0}4\right)=\frac{f(x_0)+3f(y_0)}4$ and
$f\left(\frac{3x_0+y_0}4\right)=\frac{3f(x_0)+f(y_0)}4$. Using (1)
for the points $\frac{x_0+3y_0}4$ and $\frac{3x_0+y_0}4$ we get
$$2f\left(\frac{x_0+y_0}2\right) \le f\left(\frac{x_0+3y_0}4\right) + f\left(\frac{3x_0+y_0}4\right). \qquad (3)$$
We also get from (1) that
$$2f\left(\frac{x_0+3y_0}4\right) \le f\left(\frac{x_0+y_0}2\right) +
f(y_0) \qquad (4)$$ and
$$2f\left(\frac{3x_0+y_0}4\right) \le f(x_0)+f\left(\frac{x_0+y_0}2\right)
\qquad (5)
$$
Combining (3), (4), (5) we get that
$$4f\left(\frac{x_0+y_0}2\right) \le 2f\left(\frac{x_0+3y_0}4\right) + 2f\left(\frac{3x_0+y_0}4\right) \le f(x_0)+2f\left(\frac{x_0+y_0}2\right)+f(y_0).$$
But since from (2) we know that in fact the equality holds for the
leftmost and rightmost expression, we get that equality holds in
(4) and (5), too. This implies our claim for $n=2$.</p>
<p><strong>Suppose that the claim is true for $n$, we will show it for
$n+1$:</strong> Let $q=\frac{k}{2^{n+1}}$. If $k$ is even, the claim
follows from induction hypothesis. If $k$ is odd, we can use what
we proved in the case $n=2$, using either $\frac{k-1}{2^{n+1}}$
and $\frac{k+3}{2^{n+1}}$ or $\frac{k-3}{2^{n+1}}$ and
$\frac{k+1}{2^{n+1}}$ instead of $x_0$ and $y_0$. (We can choose
whichever pair belongs to the original interval. For these point
the claim is true, since $k\pm1$, $k\pm 3$ are even.)</p>
<hr>
<p>The proof is basically a modification of the proof that every
midpoint convex function is rationally convex, see this question:
<a href="https://math.stackexchange.com/questions/83383/showing-that-f-is-convex">Midpoint-Convex and Continuous Implies Convex</a>
(You can try whether other proofs mentioned there can be modified to
your situation too.)
|
3,571,047 | <p>Here's what I have so far:</p>
<p><span class="math-container">$$\frac{\partial f}{\partial y}|_{(a,b)} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2 + 2tb + t^2) - \sin(a^2 + b^2)}{t} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2)[\cos(2tb + t^2) - 1] + \cos(a^2 + b^2)\sin(2tb + t^2)}{t}$$</span>
I can see that the term left of the plus sign might go to $0$, and the term on the right would probably go to $\cos(a^2+b^2)$, but I'm missing a $2b$ multiplying my solution!</p>
| Boka Peer | 304,326 | <p>You said <span class="math-container">$t = 1 + \sqrt{x}$</span> gives <span class="math-container">$dt = x + 2x^{3/2}/3 + C,$</span> which is not correct. It seems that instead of taking the derivative, you took the integral. Here are some hints. </p>
<p>Suppose <span class="math-container">$t = 1 + \sqrt{x}$</span>. Then <span class="math-container">$x = (t-1)^2.$</span> Hence, <span class="math-container">$dx = 2(t-1)dt.$</span> </p>
<p>Therefore, the integral becomes <span class="math-container">$ \int \dfrac{(t-1)^2 . 2(t-1)}{t} dt.$</span></p>
<p>Note that <span class="math-container">$(t-1)^3 = t^3 -3t^2 +3t -1.$</span> Hence, we essentially have the following: </p>
<p><span class="math-container">$ \int 2(\dfrac {t^3 -3t^2 +3t -1}{t}) dt$</span></p>
<p>= <span class="math-container">$2\int( t^2 -3t +3 - \frac{1}{t})dt.$</span></p>
<p>Do the rest of the steps by yourself.</p>
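<p>Assuming the original integral was $\int \frac{x}{1+\sqrt x}\,dx$ (which is what the substitution work above implies), the resulting antiderivative can be sanity-checked numerically; a sketch:</p>

```python
import math

def antiderivative(x):
    # F(x) = 2*(t^3/3 - 3t^2/2 + 3t - ln t) with t = 1 + sqrt(x),
    # obtained by finishing the integration above
    t = 1 + math.sqrt(x)
    return 2 * (t ** 3 / 3 - 3 * t ** 2 / 2 + 3 * t - math.log(t))

def numeric_integral(g, a, b, n=200000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x / (1 + math.sqrt(x))
print(numeric_integral(f, 1.0, 4.0))
print(antiderivative(4.0) - antiderivative(1.0))   # should agree closely
```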
|
<p>I heard this example was given in Whitehead's paper A CERTAIN OPEN MANIFOLD WHOSE GROUP IS UNITY.( <a href="http://qjmath.oxfordjournals.org/content/os-6/1/268.full.pdf" rel="nofollow">http://qjmath.oxfordjournals.org/content/os-6/1/268.full.pdf</a> ) But I was confused by his terminology, so I'm looking for an explanation of this example in more standard terms.</p>
<p>But since my aim is to know about an example of this kind, any alternative will do either.</p>
| Autumn Kent | 1,335 | <p>The manifold is the <a href="http://en.wikipedia.org/wiki/Whitehead_manifold" rel="nofollow">Whitehead manifold</a>.</p>
|
1,364,430 | <p><strong>Problem</strong></p>
<p>How many of the numbers in $A=\{1!,2!,...,2015!\}$ are square numbers?</p>
<p><strong>My thoughts</strong></p>
<p>I have no idea where to begin. I see no immediate connection between a factorial and a possible square. Much less for such ridiculously high numbers as $2015!$.</p>
<p>Thus, the only one I can immediately see is $1! = 1^2$, which is trivial to say the least.</p>
| ajotatxe | 132,456 | <p>Only $1!$. For $n>1$, let $p$ be the greatest prime with $p\le n$. <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate">Between $p$ and $2p$ there is another prime</a>, so $2p>n$. Therefore, $p$ occurs only once in the factorization of $n!$ and hence, $n!$ is not a square.</p>
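<p>The argument can be checked computationally for small $n$ via Legendre's formula for the exponent of a prime in $n!$; a sketch:</p>

```python
def prime_exponent_in_factorial(p, n):
    """Exponent of prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def primes_up_to(n):
    sieve = [True] * (n + 1)
    ps = []
    for i in range(2, n + 1):
        if sieve[i]:
            ps.append(i)
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return ps

def factorial_is_square(n):
    # n! is a square iff every prime exponent in its factorization is even
    return all(prime_exponent_in_factorial(p, n) % 2 == 0
               for p in primes_up_to(n))

print([n for n in range(1, 200) if factorial_is_square(n)])   # [1]
```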
|
117,619 | <p>I need to evaluate the following real convergent improper integral using residue theory (it is vital that I use residue theory, so other methods are not needed here).
I also need to use the following contour (specifically a keyhole contour to exclude the branch cut):</p>
<p><a href="https://i.stack.imgur.com/4wwwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4wwwj.png" alt=""></a></p>
<p>$$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$</p>
| Robert Israel | 8,508 | <p>Consider the integral of $\sqrt{z}/(z^3+1)$ around the given contour, using a branch of $\sqrt{z}$ with branch cut on the positive real axis. This can be evaluated using residues.
Note that (in the appropriate limit) the integrals over $L_1$ and $L_2$ both approach $\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ dx$ (for $L_2$ you're going from right to left, but also $\sqrt{z}$ approaches $-\sqrt{x}$ as $z$ approaches $x$ from below the branch cut). The integrals over the circular arcs should both go to $0$.</p>
|
2,873,520 | <p>I want to find out how the interference of two sine waves affects the phase of the resulting wave. </p>
<p>Consider two waves,</p>
<p>$$ E_1 = \sin(x) \\
E_2 = 2 \sin{(x + \delta)}
$$</p>
<p>First off, I don't know how to prove it, but I can see visually (plotting numerically) that the sum of these waves looks like a new sine wave. </p>
<p>I want to find out what the phase of $E_1 + E_2 $ looks like. First I tried using functions like ArcSin() and Ln(), but ran into trouble with both methods. For example, when I try ArcSin(Sin[x] + 2*Sin(x - $\delta$)), I get answers that disagree with my numerical answers. </p>
<p>Numerically, I solve for zeros and find the one with a positive derivative in a 2-pi region. </p>
<p>Now I plot the phase-shift of $E_3$ as a function of $\delta$ (in blue) and compare it to $E_2$ (in purple): </p>
<p><a href="https://i.stack.imgur.com/P6Jsc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6Jsc.png" alt="enter image description here"></a></p>
<p>Is there a "formula" I can use to find this answer without having to trace through one of the zeros? I think the key when using something like ArcSin is using the right normalization (I think it only works for sine of amplitude 1), but I'm not sure exactly the proper way of doing it. </p>
| Chickenmancer | 385,781 | <p>There is a rich study of so-called "differential algebra."</p>
<p><a href="https://en.wikipedia.org/wiki/Differential_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Differential_algebra</a></p>
<p>However, what you're realizing is the connection between linear algebra and differential equations. </p>
<p>Namely, if $D: C^\infty(X) \longrightarrow C^\infty(X)$ is the derivative operator, and $1$ represents the identity map, then the vectors $y$ which satisfy the following polynomial equation</p>
<p>$$(a_n D^n + \cdots+ a_1D + a_01)y=0$$
are said to be solutions to the differential equation </p>
<p>$$a_n y^{(n)} + \cdots+ a_2y''+ a_1y' + a_0y=0.$$</p>
<p>The missing connection would be the Cayley-Hamilton theorem, which says that if $T:V\longrightarrow V$ is a linear operator with characteristic equation
$$a_n \lambda^n + \cdots+ a_1\lambda + a_0=0$$</p>
<p>Then $T$ satisfies this characteristic equation
$$a_n T^nv + \cdots+ a_1Tv + a_0v=0$$
for all $v\in V.$</p>
|
743,988 | <p>If we formally exponentiate the derivative operator $\frac{d}{dx}$ on $\mathbb{R}$, we get</p>
<p>$$e^\frac{d}{dx} = I+\frac{d}{dx}+\frac{1}{2!}\frac{d^2}{dx^2}+\frac{1}{3!}\frac{d^3}{dx^3}+ \cdots$$</p>
<p>Applying this operator to a real analytic function, we have</p>
<p>$$\begin{align*}e^\frac{d}{dx} f(x) &= f(x)+f'(x)+\frac{1}{2!}f''(x)+\cdots\\
&=f(x)+f'(x)((x+1)-x)+\frac{1}{2!}f''(x)((x+1)-x)^2+\cdots\\
&=f(x+1)
\end{align*}$$</p>
<p>Does anyone have an explanation of why this should "morally" be true? I do not have a very good intuition for the matrix exponential which is probably holding me back here...</p>
| Stephen Montgomery-Smith | 22,016 | <p>So you have suggested <span class="math-container">$e^{t \frac d{dx}} f(x) = f(x+t)$</span>. This is to be expected, because you would formally expect (i) <span class="math-container">$e^{0 \frac d{dx}}$</span> to be the identity operator, (ii) <span class="math-container">$\frac d{dt} [e^{t \frac d{dx}} f(x)] \big|_{t=0} = f'(x)$</span>, and (iii) <span class="math-container">$e^{t \frac d{dx}} e^{s \frac d{dx}} = e^{(s+t) \frac d{dx}}$</span>. And look, the formula you propose works.</p>
<p>Look here for something more formal: <a href="http://en.wikipedia.org/wiki/C0-semigroup" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/C0-semigroup</a></p>
|
1,441,349 | <p>Given is that $a∈ℤ_n^*$ and $d|ord(a)$.</p>
<p>I need to show that $ord(a^d)= ord(a)/d$.</p>
<p>I started with the following:</p>
<p>$ord(a^d) = e$, such that $(a^d)^e\equiv 1\pmod n$</p>
<p>$ord(a)/d =f/d$ where $ord(a)=f$, such that $a^f\equiv 1\pmod n$</p>
<p>Now I want to prove that $e=f/d$. </p>
<p>I have tried multiplying and dividing the formulas, but I am not able to prove it. How do I do this?</p>
| Mauro ALLEGRANZA | 108,274 | <p>We say that :</p>
<blockquote>
<p>$\Gamma$ <em>tautologically implies</em> (or <em>logically implies</em> or <em>semantically entails</em>) $\varphi$ (written $\Gamma \vDash \varphi$) iff every truth assignment for the sentence symbols in $\Gamma$ and $\varphi$ that satisfies every member of $\Gamma$ also satisfies $\varphi$.</p>
</blockquote>
<p>We say that :</p>
<blockquote>
<p>$\varphi$ is a <em>tautology</em> (written $\vDash \varphi$), when $\emptyset \vDash \varphi$, i.e. every truth assignment (for the sentence symbols in $\varphi$) satisfies $\varphi$.</p>
</blockquote>
<hr>
<p>In order to appreciate the first definition, we can transform it into a sort of "procedure" : </p>
<p><em>(i)</em> consider a valuation $v$; </p>
<p><em>(ii)</em> if $v$ does not satisfy some formula in $\Gamma$, throw it away;</p>
<p><em>(iii)</em> if $v$ does, then check whether it also satisfies $\varphi$.</p>
<p>Now it must be more evident the role of $\Gamma$ : it acts as a "filter", selecting from the set of <strong>all</strong> <em>valuations</em> a subset to be used for checking the satisfiability of $\varphi$.</p>
<p>What happens when $\Gamma = \emptyset$ ? Being empty, the emptyset cannot be used to "filter" any valuations, and thus the above "procedure" boils down to :</p>
<p><em>(i)</em> consider a valuation $v$; </p>
<p><em>(ii)</em> check whether it satisfies $\varphi$,</p>
<p>and this is exactly what the second definition amounts to.</p>
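<p>The "procedure" above can be turned into a brute-force entailment checker that enumerates all truth assignments. This is my own illustrative sketch (the representation of formulas as predicates on an assignment dict is an assumption of the example, not standard notation):</p>
<pre><code>from itertools import product

def entails(gamma, phi, symbols):
    """Gamma |= phi: every assignment satisfying all of gamma satisfies phi.
    Formulas are functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(g(v) for g in gamma):   # the "filter" step (ii)
            if not phi(v):             # the "check" step (iii)
                return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
impl = lambda a, b: (lambda v: (not a(v)) or b(v))

print(entails([p, impl(p, q)], q, ["p", "q"]))          # modus ponens: True
print(entails([], lambda v: p(v) or not p(v), ["p"]))   # empty Gamma = tautology check: True
</code></pre>
<p>With $\Gamma=\emptyset$ nothing is filtered out, so the call degenerates to checking that $\varphi$ holds under every valuation, exactly as in the second definition.</p>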
|
366,654 | <p>Find all values of real number p or which the series converges:</p>
<p>$$\sum \limits_{k=2}^{\infty} \frac{1}{\sqrt{k} (k^{p} - 1)}$$ </p>
<p>I tried using the root test and the ratio test, but I got stuck on both. </p>
| André Nicolas | 6,312 | <p>Things will look nicer if we let $y=w^2$. Then $\frac{dy}{dx}=2w\frac{dw}{dx}$ and we end up with
$$2w\frac{dw}{dx}+\frac{w^2}{x-2}=5(x-2)w.$$
There is the solution $w=0$. For others, cancel. We get a nice linear equation. The $x-2$ is slightly annoying, at least for typing, so let $t=x-2$. We have arrived at
$$2\frac{dw}{dt}+\frac{w}{t}=5t.\tag{$1$}$$</p>
<p>The homogeneous equation $\frac{2dw}{dt}+\frac{w}{t}=0$ is easy to solve. For a particular solution of $(1)$, look for a solution of shape $at^2$. </p>
|
3,460,426 | <p>I tried to take the <span class="math-container">$Log$</span> of <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} = \prod _{m\ge 1} Z_m$</span>, which gives </p>
<p><span class="math-container">$$Log \prod_{m\ge 1} Z_m = \sum_{m \ge 1} Log (Z_m) = \sum_{m \ge 1} \ln |Z_m| + i \sum_{m \ge 1} \Theta_m,$$</span></p>
<p>where <span class="math-container">$\Theta_m$</span> is the principal argument of <span class="math-container">$Z_m$</span>.</p>
<p><span class="math-container">$|Z_m| = \left[\frac{1}{2} \left(1 + \cos\frac{2\pi}{3^m}\right)\right]^{1/2}$</span>
has the range <span class="math-container">$[0,1]$</span>, so <span class="math-container">$\ln |Z_m| \le 0$</span>. And, since there are infinitely many <span class="math-container">$m$</span>'s such that <span class="math-container">$\ln|Z_m| \not = 0$</span>, <span class="math-container">$\sum_{m \ge 1} \ln |Z_m| \to -\infty$</span>.</p>
<p>Then, <span class="math-container">$$\exp\left({Log \prod_{m\ge 1} Z_m}\right) = \exp\left(\sum_{m \ge 1} \ln |Z_m| \right)\exp\left(i \sum_{m \ge 1} \Theta_m\right) = 0.$$</span></p>
<p>I want to show that <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} $</span> is non-zero. What is wrong in the above reasonings?</p>
| joriki | 6,622 | <p>The problem is ill-posed. You answered one possible interpretation of it, but apparently another interpretation is intended. To say that a child is chosen “randomly” tells us nothing without specifying the distribution according to which it is chosen. Typically, when no distribution is specified, the intention is that a uniform distribution is implied. This becomes problematic if there is more than one candidate uniform distribution.</p>
<p>The distribution that you seem to be assuming is one where first a family is chosen uniformly randomly among all families, and then a child in that family is chosen uniformly randomly. (I gather that you're assuming this distribution from your use of <span class="math-container">$X=Y+1$</span>.)</p>
<p>The intended distribution (which to my mind seems closer to the actual wording) is likely the uniform distribution over all children.</p>
|
1,059,427 | <p>What is a good method to number of ways to distribute $n=30$ distinct books to $m=6$ students so that each student receives at most $r=7$ books?</p>
<p>My observation is: If student $S_i$ receives $n_i$ books, the number of ways
is: $\binom{n}{n_1,n_2,\cdots,n_m}$.</p>
<p>So answer is coefficient of $x^n$ in $n!(1+x+\frac{x^2}{2!}+\cdots+\frac{x^r}{r!})^m$.</p>
<p>For this case it means computing the coefficient of $x^{30}$ in $30!(1+x+\frac{x^2}{2!}+\cdots+\frac{x^7}{7!})^6$.</p>
<p>However, it's quite tedious to compute this coefficient without using the exponential function. Also, if we rewrite $(...)$ as $e^x-(\frac{x^8}{8!}+\frac{x^9}{9!}+\cdots)$, how can we handle the $(\frac{x^8}{8!}+\frac{x^9}{9!}+\cdots)$ term?</p>
<p>Is there any good idea to handle this term for easier calculation?</p>
| leonbloy | 312 | <p>Not very easy, but calling $S(b,s,m)$ the number of distributions ($b$ books, $s$ students, $m$ maximum allowed for each) one could write:</p>
<p>$$S(b,s,m)= \sum_{k=0}^{\min(\lfloor b/m \rfloor,s)} {s \choose k} \frac{b!}{(m!)^k(b-m \,k)!} S(b-mk,s-k,m-1) $$</p>
<p>and compute recursively the values, with $S(b,s,m) = s^b$ for $s\le m$</p>
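<p>As an independent cross-check of the generating-function count, here is a direct dynamic program over students (my own sketch, not the recursion above; names are illustrative):</p>
<pre><code>from math import comb

def count(n, m, r):
    """Ways to hand n distinct books to m students, each getting at most r."""
    dp = [1] + [0] * n          # dp[b]: ways to distribute b books among 0 students
    for _ in range(m):
        new = [0] * (n + 1)
        for b in range(n + 1):
            # the new student takes j of the b remaining books
            for j in range(min(r, b) + 1):
                new[b] += comb(b, j) * dp[b - j]
        dp = new
    return dp[n]

assert count(2, 2, 1) == 2        # each student gets exactly one book
assert count(4, 2, 4) == 2**4     # cap not binding: 2 choices per book
print(count(30, 6, 7))            # the coefficient of x^30 in 30!(...)^6
</code></pre>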
|
102,814 | <p>Is it possible to construct a nontrivial homomorphism from $C_6$ to $A_3$? I have tried to construct one but failed. Is there a good way to see when there will be a homomorphism?</p>
| Community | -1 | <p>I have the following recipe in mind:</p>
<p><strong>Proposition:</strong></p>
<p>The number of homomorphisms from $\mathbb Z_m$ to $\mathbb Z_n$ is $(m,n)$.</p>
<p><em>Proof.</em> </p>
<p>Observe that a homomorphism from a cyclic group is fixed by fixing the image of a generator, $1$. </p>
<p>$$\varphi(1)=a \implies \varphi (2)=\varphi(1)+\varphi(1)=a+a~~~\text{and so on} \cdots$$</p>
<p>And, the order of the element divides the order of the image of that element. These facts when cooked appropriately should prove the fact. $\blacksquare$</p>
<p><strong>Explicit Maps:</strong></p>
<p>Set $(m,n)=d$.</p>
<p>$$\phi: \mathbb Z_m \to \mathbb Z_n$$ Every map defined by $$\phi([x]_m)=[k\frac{n}{d}x]_n~~~~\text{$k=0,1,2,3,\cdots d-1$}$$ is a homomorphism and conversely, any homomorphism is of this form.</p>
<hr>
<p>Note that $A_3 \cong C_3$. This means that the number of homomorphisms is $GCD(6,3)=3$.</p>
<hr>
<p>Another way of looking at this problem is to consider the universal property of quotients. This is a very useful fact and an often recurring theme in algebra.</p>
<p>$$\begin{array}{ccccccccc}
\mathbb Z \\
\downarrow & \searrow \\
\mathbb Z_m & \xrightarrow{\varphi} & \mathbb Z_n
\end{array}$$</p>
<p>Call the map from $\mathbb Z$ to $\mathbb Z_m$ by $\Pi_m$. In keeping with the standard language, $\Pi_n$ factors through the quotient by $\operatorname{Ker} \Pi_m=m\mathbb Z$, and $\varphi$ is the induced homomorphism on $\mathbb Z_m$.</p>
<p>Let's recall that, generally, when our maps depend on coset representatives, we require that the map does not change when the representative of the coset changes, i.e., the maps must be well defined. We can prove, in a more general setup, that this is always the case when $\operatorname{Ker} \Pi_m \subseteq \operatorname{Ker} \Pi_n$. In this case, it would just mean $m \mathbb Z \subseteq n\mathbb Z \implies n|m$. </p>
<p>This is a useful criterion for figuring out whether a non-trivial homomorphism exists. </p>
<p>One should note that the set of all homomorphisms between two groups is never empty, thanks to the trivial homomorphism that always exists. </p>
<p>Further, between finite groups, it helps to think of the first isomorphism theorem. $$\dfrac{|G|}{|\operatorname{Ker} \theta|}=|Im \theta|$$
How does this help? This helps when you consider the fact that $Im \theta$ is a subgroup of the co-domain and by asking for those cardinalities of the $\operatorname{Ker} \theta$ that permit such values. </p>
<p>One common mistake I have seen people make is the assumption that a homomorphism is onto! No, the first isomorphism theorem says that <em>if</em> the map is onto, then going modulo the kernel, you have an isomorphic copy of the range! </p>
<p>I hope this helps!</p>
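<p>The Proposition can also be checked by brute force (a sketch with my own helper name): a homomorphism $\mathbb Z_m \to \mathbb Z_n$ is determined by $a=\varphi(1)$, and $a$ works precisely when $ma\equiv 0 \pmod n$.</p>
<pre><code>from math import gcd

def num_homs(m, n):
    # phi(1) = a determines phi; we need m*a == 0 in Z_n
    return sum(1 for a in range(n) if (m * a) % n == 0)

assert num_homs(6, 3) == gcd(6, 3) == 3   # Z_6 -> A_3 (since A_3 is cyclic of order 3)
assert all(num_homs(m, n) == gcd(m, n)
           for m in range(1, 12) for n in range(1, 12))
print("number of homs Z_m -> Z_n equals gcd(m, n)")
</code></pre>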
|
1,960,911 | <p>I am trying to evaluate this limit for an assignment.
$$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$</p>
<p>I have tried to rationalize the function:
$$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$</p>
<p>$$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$</p>
<p>Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$</p>
<p>Leading to </p>
<p>$$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$</p>
<p>Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that?</p>
| adjan | 219,722 | <p>Your error is here:
$$\frac{\sqrt{x^2-6x +1}+x}{x}=\sqrt{1-\frac{6}{x}+\frac{1}{x^2}}+1$$</p>
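<p>A numeric sketch (mine, purely illustrative) of why the denominator matters: the correctly simplified denominator tends to $2$, so the limit is $-6/2=-3$, not $-6$. The rationalized form below is also numerically stable:</p>
<pre><code>import math

def g(x):
    # rationalized form of sqrt(x^2 - 6x + 1) - x; avoids catastrophic cancellation
    return (-6 * x + 1) / (math.sqrt(x * x - 6 * x + 1) + x)

for x in (1e3, 1e6, 1e9):
    print(x, g(x))   # tends to -6/2 = -3, not -6
</code></pre>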
|
3,322,049 | <p>I could have solved this by substitution, but the ‘n’ is confusing me. How should I proceed?</p>
| J.G. | 56,861 | <p>If the question really means <span class="math-container">$\int\frac{dx}{x\ln n}$</span> with <span class="math-container">$n$</span> a constant with respect to <span class="math-container">$x$</span>, you get <span class="math-container">$\frac{\ln |x|}{\ln n}+C=\log_n |x|+C$</span>. If it's a typo for <span class="math-container">$\int\frac{dx}{x\ln x}$</span>, the substitution <span class="math-container">$u=\ln x$</span> gets us to the result <span class="math-container">$\ln|\ln x|+C$</span>. (Note that in both cases the <span class="math-container">$C$</span> is <em>locally</em> constant.)</p>
|
3,322,049 | <p>I could have solved this by substitution, but the ‘n’ is confusing me. How should I proceed?</p>
| st.math | 645,735 | <p>If you meant '<span class="math-container">$x$</span>' instead of '<span class="math-container">$n$</span>', then substituting <span class="math-container">$u=\ln x$</span> gives <span class="math-container">$\mathrm du=\frac1x\,\mathrm dx$</span>, and thus
<span class="math-container">$$\int\frac{1}{x\ln x}\mathrm dx=\int\frac1u\mathrm du=\ln|u|+C=\ln|\ln(x)|+C.$$</span></p>
|
253,966 | <p>Just took my final exam and I wanted to see if I answered this correctly:</p>
<p>If $A$ is a Abelian group generated by $\left\{x,y,z\right\}$ and $\left\{x,y,z\right\}$
have the following relations:</p>
<p>$7x +5y +2z=0; \;\;\;\; 3x +3y =0; \;\;\;\; 13x +11y +2z=0$</p>
<p>does it follow that $A \cong Z_{3} \times Z_{3} \times Z_{6}$ ?</p>
<p>I know if we set $x=(1,0,2)$, $y=(0,1,0)$ and $z=(2,1,5)$ then this is consistent with the relations and with $A \cong Z_{3} \times Z_{3} \times Z_{6}$ </p>
| DonAntonio | 31,254 | <p>Have you studied The Smith Normal Form of an (integer) square matrix? Well, if you form the matrix of coefficients of your relations you get:</p>
<p>$$A:=\begin{pmatrix}7&5&2\\3&3&0\\13&11&2\end{pmatrix}\Longrightarrow \det A=0$$</p>
<p>Thus, if $\,G:=\{x,y,z\}\,$ is the free abelian group on $\,\{x,y,z\}\,$ and $\,N\leq G\,$ is the (free abelian, of course) subgroup generated by the same letters but <em>subject to the given relations</em>, the quotient $\,G/N\,$ is finite iff $\,\operatorname{rank} N=3\Longleftrightarrow \det A\neq 0\,$.</p>
<p>From the above it follows your group cannot be what you wrote.</p>
<p>If you haven't studied the above then try to: it is very nice and spectacular stuff, though apparently you should reach the result otherwise.</p>
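<p>For this particular matrix, the Smith normal form data can be checked with a short self-contained script (helper names are mine): the invariant factors are the successive quotients of $D_k$, the gcd of all $k\times k$ minors.</p>
<pre><code>from itertools import combinations
from math import gcd
from functools import reduce

A = [[7, 5, 2], [3, 3, 0], [13, 11, 2]]

def det(m):
    # cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def dk(m, k):
    # gcd of all k x k minors
    n = len(m)
    minors = [det([[m[r][c] for c in cols] for r in rows])
              for rows in combinations(range(n), k)
              for cols in combinations(range(n), k)]
    return reduce(gcd, (abs(x) for x in minors))

print(det(A))              # 0, so the quotient is infinite
print(dk(A, 1), dk(A, 2))  # 1 and 6
</code></pre>
<p>The invariant factors are $1$, $6/1=6$, and (since $\det A=0$) a zero, so $G/N\cong\mathbb Z_6\oplus\mathbb Z$ is infinite, matching the conclusion above.</p>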
|
119,876 | <pre><code>Module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>gives</p>
<pre><code>{x$17312, x$17312_, x_ -> x, x_ :> x}
f[x_]=x
p[x_]:=x
</code></pre>
<p>but I'd like to get</p>
<pre><code>{x$17312, x$17312_, x$17312_ -> x$17312, x$17312_ :> x$17312}
f[x$17312_]=x$17312
p[x$17312_]:=x$17312
</code></pre>
<p>I thought <code>Module[{x}, body_]</code> operates something like the following, which would do what I want:</p>
<pre><code>module[{x_Symbol}, body_] := ReleaseHold[Hold@body /. x -> Unique@x];
SetAttributes[module, HoldAll];
module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>I guess there are some cases with nested scoping constructs that need to be considered for special treatment, but why can't it do the replacement in <code>Set, SetDelayed, Rule, RuleDelayed</code>?</p>
<hr>
<p>Motivation</p>
<p>I want to use<code>f@x_ = Integrate[y^2, {y, 0, x}]</code> instead of <code>f@x_ := Evaluate@Integrate[y^2, {y, 0, x}]</code> and to be safe I want to scope the variable/pattern label <code>x</code> to something unique.</p>
<p>See also <a href="https://mathematica.stackexchange.com/questions/119878/why-does-syntax-highlighting-in-set-and-rule-not-color-pattern-names-on-the">Why does syntax highlighting in `Set` and `Rule` not color pattern names on the RHS?</a></p>
| Alexey Popkov | 280 | <p>It looks like what you need is the <a href="http://library.wolfram.com/infocenter/MathSource/425/" rel="nofollow"><code>LocalPatterns`</code></a> package by Ted Ersek. It introduces new special symbols <code>DotEqual</code> and <code>LongRightArrow</code> which works like <code>Set</code> and <code>Rule</code> but provide variable scoping during evaluation of the RHS:</p>
<pre><code><< LocalPatterns.m
?DotEqual
</code></pre>
<blockquote>
<p><code>lhs\[DotEqual]rhs</code> evaluates <code>rhs</code> using a local environment for any variable used as a pattern in <code>lhs</code>. From then on <code>lhs</code> is replaced by the result of evaluating <code>rhs</code> whenever <code>lhs</code> appears. The infix operator <code>\[DotEqual]</code> is entered as <code>\[DotEqual]</code>. The expression <code>lhs\[DotEqual]rhs</code>, has an equivalent form <code>LocalSet[lhs,rhs]</code>.</p>
</blockquote>
<pre><code>Clear[f];
x = 53.54;
f[x_] \[DotEqual] Integrate[Log[Sqrt[x] + 1], x]
Definition[f]
</code></pre>
<blockquote>
<p>f[x_] := Sqrt[x] - x/2 + (-1 + x) Log[1 + Sqrt[x]]</p>
</blockquote>
<pre><code>?LongRightArrow
</code></pre>
<blockquote>
<p><code>lhs → rhs</code> represents a rule where <code>rhs</code> is evaluated immediately using a local environment for any variable used as a pattern in <code>lhs</code>. The infix operator <code>→</code> is entered as <code>\[LongRightArrow]</code>. The expression <code>lhs → rhs</code> has an equivalent form <code>LocalRule[lhs,rhs]</code>.</p>
</blockquote>
<pre><code>x = 34/7;
g2 = g[x_]\[LongRightArrow]Integrate[Log[Sqrt[x] + 1], x]
</code></pre>
<blockquote>
<pre><code>g[x_] :> Sqrt[x] - x/2 + (-1 + x) Log[1 + Sqrt[x]]
</code></pre>
</blockquote>
|
402,802 | <p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p>
<p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p>
<p>Please tell what is correct.</p>
| Matt L. | 70,664 | <p><strong>Hint:</strong> Note that $\sin(x+\pi/2) = \cos(x)$ and $\cos(x+\pi/2)=-\sin(x)$.</p>
|
402,802 | <p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p>
<p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p>
<p>Please tell what is correct.</p>
| mez | 59,360 | <p>You should try this and figure it out. It is not that hard, and it is a bad habbit to give up on something that you can do. Wolfram alpha is not always correct, it's written by humans.</p>
<p><img src="https://i.stack.imgur.com/NHOcv.jpg" alt="enter image description here"></p>
<p>It's true that this function is $\pi$ periodic, just that its <a href="http://en.wikipedia.org/wiki/Periodic_function" rel="nofollow noreferrer">prime period</a> is $\frac{\pi}{2}$.</p>
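<p>A quick numeric sketch (mine) reconciling the two claims: $\pi$ is indeed a period, but $\pi/2$ is a smaller one, while $\pi/4$ is not, so the fundamental (prime) period is $\pi/2$.</p>
<pre><code>import math

f = lambda x: abs(math.sin(x)) + abs(math.cos(x))

xs = [0.1 * k for k in range(200)]
# pi/2 is a period (hence so is pi) ...
assert all(abs(f(x + math.pi / 2) - f(x)) < 1e-12 for x in xs)
# ... but pi/4 is not, so pi/2 is the fundamental period
assert any(abs(f(x + math.pi / 4) - f(x)) > 1e-3 for x in xs)
print("pi/2 is a period; pi/4 is not")
</code></pre>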
|
481,834 | <p>Let $A=(a_{ij})$ be a square matrix of order $n$. Verify that the determinant of the matrix</p>
<p>$\left( \begin{array}{cccc}
a_{11}+x & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22}+x & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}+x \end{array} \right)$,</p>
<p>can be represented as the polynomial $x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_{n-1}x+a_n$, where each coefficient $a_k$ is the sum of the principal minors of order $k$ of the matrix $A$.</p>
<p>I tried to use the definition of determinant by cofactor expansion but it's very long, I was wondering if there's a shorter way to show this. </p>
| Eurakarte | 92,668 | <p>Simply note that a determinant is computed as the sum and product of finitely many polynomial terms, which results in a polynomial. You can show that with an induction argument, although I don't really think that's necessary.</p>
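<p>The full claim (with principal minors; the statement fails for arbitrary minors, since the coefficient of $x^{n-1}$ is the trace, not the sum of all entries) can be verified numerically for a concrete $3\times 3$ matrix: $\det(A+xI)=x^3+a_1x^2+a_2x+a_3$, where $a_k$ is the sum of the $k\times k$ principal minors. The matrix and helper names below are my own illustrative choices:</p>
<pre><code>def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[2, 1, 0], [1, 3, 4], [5, 0, 6]]

a1 = A[0][0] + A[1][1] + A[2][2]                 # 1x1 principal minors (trace)
a2 = sum(A[i][i]*A[j][j] - A[i][j]*A[j][i]       # 2x2 principal minors
         for i in range(3) for j in range(i + 1, 3))
a3 = det3(A)                                     # the single 3x3 principal minor

for x in (-2, 0, 1, 7):
    AxI = [[A[i][j] + (x if i == j else 0) for j in range(3)] for i in range(3)]
    assert det3(AxI) == x**3 + a1*x**2 + a2*x + a3
print("det(A + xI) matches the principal-minor expansion")
</code></pre>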
|
2,906,917 | <p>I have this problem but I don't know how to continue.<br>
Here it is:
Compute $\int \sin(x) \left( \frac{1}{\cos(x) + \sin(x)} + \frac{1}{\cos(x) - \sin(x)} \right)\,dx.$<br>
So I can antidifferentiate the sin x to be cos x, but I am unsure where to go from there for the fraction. I don't want to multiply the fractions to create a big and messy function, and I don't quite understand how to do partial fraction decomposition. I'm guessing I will have to do a substitution?</p>
<p>Anyways, thank you for any help!</p>
| YiFan | 496,634 | <p>Multiplying the fractions is actually the way to go and everything cancels out!
$$\int\sin x\left(\frac{2\cos x}{\cos^2x-\sin^2x}\right)dx = \int\frac{\sin 2x}{\cos 2x}dx = \int\tan 2x\; dx$$
Now, you just need to find the antiderivative of $\tan$. </p>
<p>By the way: partial fraction decomposition is only done for functions with polynomials as numerators and denominators. Unless you can make a genius substitution which transforms the integral to a rational function, partial fractions is irrelevant. Usually, when confronted with completely trigonometric integrals, the first thing you should try is to simplify the integrand using trigonometric identities; only then do you want to consider substitutions or integration by parts.</p>
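<p>Completing the last step, $\int \tan 2x\,dx = -\tfrac12\ln|\cos 2x| + C$. A numeric derivative check (purely illustrative, my own sketch) that this antiderivative matches the original integrand:</p>
<pre><code>import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return s * (1 / (c + s) + 1 / (c - s))

def F(x):
    return -0.5 * math.log(abs(math.cos(2 * x)))   # antiderivative of tan(2x)

h = 1e-6
for x in (0.2, 0.5, 1.0):                  # avoid x = pi/4, where cos x = sin x
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(dF - integrand(x)) < 1e-5
print("F' matches the integrand")
</code></pre>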
|
2,648,516 | <p>I am studying Fourier analysis from the text "Stein and Shakarchi" and there is this thing on Dirichlet Kernel. It's fine to define it as a trigonometric poylnomial of degree $n$ , but what is the mathematical intuition behind calling it a Kernel ? I have also thought of Kernel as being a set of zeroes of sum function. Is there a relation between both the terminologies?</p>
| Disintegrating By Parts | 112,478 | <p>The "kernel" of something is the essential part of it, the germ of it, a whole seed. That's a definition that makes sense in terms of the linear operator $T$, because the essential part of it is the function $K$. To me, it makes no sense how the null space came to be called "kernel." I would guess that the use of kernel in terms of the integral operator came before the abstract definition of the null space of a linear operator; integral operators were around before general linear operators.</p>
|
2,179,317 | <p>We know that, if $\mathcal D$ is a domain containing the origin $(0,0,0)$, then</p>
<p>$$\int_{\mathcal D} \delta(\vec r) d \vec r= \int_{\mathcal D} \delta(x) \delta(y) \delta(z) dx dy dz=1$$</p>
<p>However, <a href="http://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">we also know that</a> the delta distribution can be expressed in spherical coordinates as</p>
<p>$$\delta(r,\theta,\phi)=\frac{\delta(r)}{2 \pi r^2}$$</p>
<p>If we take $\mathcal D = \mathbb R^3$, we would then have</p>
<p>$$\int_{\mathbb R^3} \delta(\vec r) d\vec r =\int_0^\infty 4 \pi r^2 \frac{\delta(r)}{2 \pi r^2} dr= \int_0^\infty 2 \delta(r) dr = 1$$</p>
<p>That is to say,</p>
<p>$$ \int_0^\infty \delta(r) dr = \frac 1 2$$</p>
<p>Now, this seems very odd, but maybe it can have some sense. Indeed, we know that for every $\epsilon >0$</p>
<p>$$\int_{-\epsilon}^{\epsilon} \delta(x) dx = 1$$</p>
<p>So it is possible that we can say (maybe in a not very rigorous way...), being $\delta(x)$ even, that</p>
<p>$$\int_{0}^{\epsilon} \delta(x) dx = \int_{-\epsilon}^{0} \delta(x) dx = \frac 1 2$$</p>
<p>Does this have sense? If so, can we make it rigorous, i.e. showing that every succession of function converging to $\delta$ has this property? </p>
<p>And if not, can we nevertheless give some sense to</p>
<p>$$ \int_0^\infty \delta(r) dr = \frac 1 2 \ \ ?$$</p>
<p><strong>Update</strong></p>
<p><a href="http://www.fen.bilkent.edu.tr/~ercelebi/mp03.pdf" rel="nofollow noreferrer">Here</a>, I found another formula for the delta distribution in spherical coordinates, that is to say:</p>
<p>$$\delta(\vec r ) = \frac{\delta (r)}{4 \pi r^2}$$</p>
<p>This seems to make much more sense, because we would have </p>
<p>$$\int_0^\infty \delta(r) dr = 1$$</p>
<p>However, there are two issues at this point:</p>
<ol>
<li>Which one is the correct form for the delta in spherical coordinates?</li>
<li>Is the integral $\int_0^\infty \delta(r) dr$ well defined? (See also Ruslan's answer).</li>
</ol>
| Stefano | 387,021 | <p>I would be inclined to say that your intuition is wrong, mostly because $\delta$ is not a function. </p>
<p>I'll explain. Take the function $\delta_n$ defined by $\delta_n(x) = n$ if $x \in (-1/n,0)$ and $\delta_n(x) =0$ otherwise. Clearly $\delta_n(x) = 0$ for $x \ge 0$, and so
$$
\int_0^\infty \delta_n(x)\,dx = 0
$$
for all $n$.</p>
<p>Nevertheless, $\delta_n \to \delta$ in the sense of distributions. In fact, take a test function $\varphi$. We have</p>
<p>$$ \int_{-\infty}^\infty \varphi(x) \delta_n(x)dx = n \int_{-1/n}^0\varphi(x)dx = \varphi(\xi_n),$$</p>
<p>where $\xi_n$ is between $-1/n$ and zero. Therefore by the continuity of $\varphi$,</p>
<p>$$\int_{-\infty}^\infty \varphi(x) \delta_n(x)dx \to \varphi(0), $$</p>
<p>and so $\delta_n \to \delta$.</p>
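<p>To contrast with the one-sided $\delta_n$ above: a symmetric approximating sequence (a narrow Gaussian, my own choice for this sketch) puts exactly half its mass on $[0,\infty)$, which is the sense in which $\int_0^\infty \delta = \tfrac12$ can be defended, while the one-sided sequence gives $0$. So the value depends on the approximating sequence.</p>
<pre><code>import math

def gaussian(x, s):
    # density of N(0, s^2); integrates to 1 over the whole line
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def mass_on_half_line(s, n=200000, upper=10.0):
    # midpoint rule on [0, upper]; upper >> s captures essentially all the mass
    h = upper / n
    return sum(gaussian((k + 0.5) * h, s) for k in range(n)) * h

print(mass_on_half_line(0.01))   # -> 0.5 for every s, by symmetry
</code></pre>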
|
1,507,710 | <p>I'm trying to get my head around group theory as I've never studied it before.
As far as the general linear group, I think I've ascertained that it's a group of matrices and so the 4 axioms hold?
The question I'm trying to figure out is why $(GL_n(\mathbb{Z}),\cdot)$ does not form a group.
I think I read somewhere that it's because it doesn't have an inverse and I understand why this would not be a group, but I don't understand why it wouldn't have an inverse. </p>
| Geoff Robinson | 13,147 | <p>As your title suggests, ${\rm GL}(n, \mathbb{Z})$ is indeed a group. It consists of those integer matrices with non-zero determinant whose inverses are also integer matrices ( and such matrices all have determinant $\pm 1$, as others have pointed out). </p>
<p>What is not a group is the set of $n \times n$ integer matrices of non-zero determinant. These have inverses with rational entries, but not usually integer entries.</p>
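<p>The dividing line can be illustrated concretely (the example matrices are hypothetical): an integer matrix has an integer inverse precisely when $\det = \pm 1$, because the inverse is the adjugate (which has integer entries) divided by the determinant.</p>
<pre><code>from fractions import Fraction

def inv2(m):
    # inverse of a 2x2 matrix via adjugate / determinant, with exact arithmetic
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    det = a * d - b * c
    adj = [[d, -b], [-c, a]]
    return [[Fraction(x, det) for x in row] for row in adj]

unimodular = inv2([[2, 1], [1, 1]])       # det = 1: inverse is integer
assert all(x.denominator == 1 for row in unimodular for x in row)

not_unimodular = inv2([[2, 0], [0, 1]])   # det = 2: inverse has 1/2
assert any(x.denominator != 1 for row in not_unimodular for x in row)
print(unimodular, not_unimodular)
</code></pre>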
|
1,507,710 | <p>I'm trying to get my head around group theory as I've never studied it before.
As far as the general linear group, I think I've ascertained that it's a group of matrices and so the 4 axioms hold?
The question I'm trying to figure out is why $(GL_n(\mathbb{Z}),\cdot)$ does not form a group.
I think I read somewhere that it's because it doesn't have an inverse and I understand why this would not be a group, but I don't understand why it wouldn't have an inverse. </p>
| testman | 286,006 | <p>Integers with multiplication do not form a group. For example, the $1\times 1$ matrix $(2)$ has inverse $(1/2)$, which is not an integer.</p>
|
66,199 | <p>Say I have the following lists of rules:</p>
<pre><code>case1 = {a -> 1, b -> 3, c -> 4, e -> 5}
case2 = {c -> 3, a -> 1, w -> 2}
case3 = {x -> 5, y -> 2, z -> 0, c -> 2}
</code></pre>
<p>How do I write a function <code>myfun[]</code>, to select the value of "c" in each case?</p>
<p>I want</p>
<pre><code>myfun[case1]
</code></pre>
<p>to return 4;</p>
<pre><code>myfun[case2]
</code></pre>
<p>to return 3;</p>
<pre><code>myfun[case3]
</code></pre>
<p>to return 2.</p>
| mgamer | 19,726 | <p>In such a case, with Mathematica 10, one can easily use associations:</p>
<pre><code>case1 = {a -> 1, b -> 3, c -> 4, e -> 5}
case2 = {c -> 3, a -> 1, w -> 2}
case3 = {x -> 5, y -> 2, z -> 0, c -> 2}
</code></pre>
<p>then: </p>
<pre><code>ac1 = Association@case1;
ac2 = Association@case2;
ac3 = Association@case3
</code></pre>
<p>One gets the associated value for the key "c" in the following way: </p>
<pre><code> ac2[Key[c]]
</code></pre>
<p>which yields the associated value 3. The whole thing is easier when the keys are strings; in that case you can simply write <code>ac2["c"]</code>.</p>
|
3,410,150 | <p>If we try solving this by finding <span class="math-container">$f''(x)$</span> directly, it is very long and difficult, so my teacher suggested another way: find the nature of all the roots of <span class="math-container">$f(x) =f'(x)$</span>. On finding the nature of the roots, we got them to be real (but not all distinct), and then he said that since all the roots of <span class="math-container">$f(x) = f'(x)$</span> are real, all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct. I did not understand how to prove that if all the roots of <span class="math-container">$f(x) = f'(x)$</span> are real, then all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct. Can anyone please help me prove this?</p>
<p>Is this statement (if all roots of <span class="math-container">$f(x) = f'(x)$</span> are real, then all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct) true only for this question, or is it true in general for every function <span class="math-container">$f(x)$</span> all of whose roots of <span class="math-container">$f(x) = 0$</span> are real?</p>
<p>If instead of <span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span> we had <span class="math-container">$f(x) = (x-a)^4(x-b)^4$</span>, would we then say that since all roots of <span class="math-container">$f(x) = f'(x)$</span> are real, all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct, or rather would we say only that since all roots of <span class="math-container">$f(x) = f'(x)$</span> are real, all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real?</p>
<p>If anyone has any other way of solving this question (for <span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span>, what is the nature of the roots of <span class="math-container">$f''(x) = f'(x)$</span>?), please share it.</p>
| Community | -1 | <p><strong>Answer to subsidiary question</strong></p>
<p>When you say <span class="math-container">$f'(x)=f(x)$</span> you simply mean that they are equal for some specific roots. That does not mean their derivatives are equal.</p>
|
528,591 | <p>I need to prove there are zero divisors in $\mathbb{Z}_n$ if and only if $n$ is not prime.
What should I consider first? </p>
| esoteric-elliptic | 425,395 | <p>Here's a slightly different way to go about this: to prove that <span class="math-container">$\mathbb Z_n$</span> has zero divisors if and only if <span class="math-container">$n$</span> is not prime, is the same as showing that <span class="math-container">$\mathbb Z_n$</span> has no zero divisors if and only if <span class="math-container">$n$</span> is prime. Here, <span class="math-container">$\mathbb Z_n = \{0,1,2,...,n-1\}$</span>.</p>
<ol>
<li><p>Suppose <span class="math-container">$\mathbb Z_n$</span> has no zero divisors. Then, if <span class="math-container">$a,b\in\mathbb Z_n$</span>, <span class="math-container">$a\ne 0$</span>, <span class="math-container">$b\ne 0$</span>, we must have <span class="math-container">$ab\ne 0$</span>. Suppose <span class="math-container">$n$</span> is not prime. Then, <span class="math-container">$n = pq$</span> for some integers <span class="math-container">$p,q$</span> such that <span class="math-container">$p < n$</span> and <span class="math-container">$q < n$</span> where <span class="math-container">$p,q\ne 0$</span>. However, <span class="math-container">$pq \equiv 0 \bmod n$</span>. This is a contradiction, so <span class="math-container">$n$</span> must be prime.</p>
</li>
<li><p>Suppose <span class="math-container">$n$</span> is prime. Suppose there exist <span class="math-container">$a,b\in\mathbb Z_n$</span> such that <span class="math-container">$ab = 0$</span> (same as saying <span class="math-container">$ab \equiv 0 \bmod n$</span>) where <span class="math-container">$a\ne 0$</span> and <span class="math-container">$b\ne 0$</span>. So, <span class="math-container">$n | ab$</span>. Since <span class="math-container">$n$</span> is prime, <span class="math-container">$n|ab \implies n|a$</span> or <span class="math-container">$n|b$</span>. None of these can happen, since <span class="math-container">$a < n$</span> and <span class="math-container">$b < n$</span>. So, <span class="math-container">$\mathbb Z_n$</span> has no zero divisors.</p>
</li>
</ol>
<p>Done!</p>
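<p>Not part of the proof, but the equivalence is easy to sanity-check by brute force for small <span class="math-container">$n$</span> (a throwaway script, not required for the argument):</p>

```python
# Brute-force check: Z_n = {0, ..., n-1} has a zero divisor
# exactly when n is composite.
def has_zero_divisor(n):
    return any(a * b % n == 0 for a in range(1, n) for b in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, n))

for n in range(2, 50):
    assert has_zero_divisor(n) == (not is_prime(n))
print("verified for n = 2..49")  # → verified for n = 2..49
```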
|
458 | <p>If you go to the bottom of any page in the SE network (e.g. this one!), you'll see a list of SE sites. In particular there's a link to MathOverflow, that is potentially seen by a large number of people (many of whom are outside of our target audience).</p>
<p>When you put your cursor over that link, there's a hover popup reading "mathematicians". If you try this with many of the other sites you'll find more a more detailed description.</p>
<p>We should improve this!</p>
<blockquote>
<p>I'll provide a few samples as answers; please vote for the one you like, and we'll get it fixed.</p>
</blockquote>
| Kaveh | 7,507 | <p>Professional Researchers in Mathematics</p>
|
1,383,956 | <p>Having two points <span class="math-container">$A(xa, ya)$</span> and <span class="math-container">$B(xb, yb)$</span> and knowing a value <span class="math-container">$k$</span> representing the length of a perpendicular segment in the middle of <span class="math-container">$[AB]$</span>, how can I find the other point of the segment?</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/fy8KZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fy8KZ.png" alt="" /></a></p>
<p>The known values are <span class="math-container">$xa$</span>, <span class="math-container">$ya$</span>, <span class="math-container">$xb$</span>, <span class="math-container">$yb$</span>. Also, it's obvious that <span class="math-container">$xm = \frac{(xa + xb)}{2}$</span> and <span class="math-container">$ym = \frac{(ya + yb)}{2}$</span></p>
</blockquote>
<p>How to find <span class="math-container">$N(xn, yn)$</span>?</p>
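<p>To make the setup concrete, here is the construction I am trying to express, as a small Python sketch (the helper name is my own): move from the midpoint <span class="math-container">$M$</span> by the distance <span class="math-container">$k$</span> along a unit vector perpendicular to <span class="math-container">$AB$</span>.</p>

```python
import math

# Sketch (helper name is hypothetical): N is the midpoint M of AB moved
# a distance k along a unit vector perpendicular to AB.
# Negate (nx, ny) to get the point on the other side of AB.
def perpendicular_point(xa, ya, xb, yb, k):
    xm, ym = (xa + xb) / 2, (ya + yb) / 2
    dx, dy = xb - xa, yb - ya
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length   # unit normal to AB
    return xm + k * nx, ym + k * ny

xn, yn = perpendicular_point(0, 0, 4, 0, 2)
print(xn, yn)  # → 2.0 2.0
```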
| 3SAT | 203,577 | <p>After using @Hetebrij answer</p>
<p>$\color{blue}{(4)}$</p>
<p>$P(X> 0.5)=\displaystyle\int_{0.5}^{\ln(2)}=0.351\;,P(X\geq 0.3)=0.651$</p>
<p>$P(X\geq 0.5\big|X\geq 0.3)=\boxed{\frac{0.351}{0.651}\approx 0.539}$</p>
<hr>
<p>$P(X>0.7)=\boxed0$ because that the max height is $\ln(2)\approx 0.69$</p>
|
441,374 | <p>Let $K_{\alpha}(z)$ be the <a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I.CE.B1_.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the second kind of order $\alpha$</a>.</p>
<p>I need to compute the following integral:</p>
<p>$$\int_0^\infty\;\;K_0\left(\sqrt{a(k^2+b)}\right)dk$$ </p>
<p>where $a>0$ and $b>0$. </p>
<p>I have tried several substitutions and played around a lot in Mathematica, and can't seem to solve this. Perhaps an integral representation of $K_{0}(z)$ would be helpful here.</p>
<p>Even if this can't be done exactly, a sensible approximation strategy would also be useful. </p>
<p>Any advice would be greatly appreciated. Thanks in advance for your time!</p>
| Kirill | 11,268 | <p>Using some of the ideas mentioned in Ron Gordon's answer we can evaluate this integral. Change the variable to $x=\sqrt{a} k$, so that the integral becomes
$$ \int_0^\infty K_0(\sqrt{ak^2+ab})\,dk = \frac1{\sqrt{a}} \int_0^\infty K_0(\sqrt{x^2+ab})\,dx, $$
and introduce the function
$$ I(b) = \int_0^\infty K_0(\sqrt{x^2+b^2})\,dx, $$
so that the integral is $I(\sqrt{ab})/\sqrt{a}$.</p>
<p>First, making the substitution $x=\sqrt{s^2-b^2}$, we get that
$$ I(b) = \int_b^\infty K_0(s)\frac{s\,ds}{\sqrt{s^2-b^2}}, $$
and we can use $K_0'(s) = -K_1(s)$ to write $K_0(s)=\int_s^\infty K_1(u)\,du$, so that
$$ I(b) = \int_b^\infty \frac{s\,ds}{\sqrt{s^2-b^2}}\int_s^\infty K_1(u)\,du. $$
The integral has the range $b<s<u<\infty$, and we can do the integral over $s$ explicitly, giving
$$ I(b) = \int_b^\infty\sqrt{u^2-b^2}K_1(u)\,du. $$</p>
<p>Second, using the formula
$$ K_0(u) = \int_0^\infty e^{-u\cosh t}\,dt, $$
we can write
$$ I(b) = \int_b^\infty \frac{s\,ds}{\sqrt{s^2-b^2}}\int_0^\infty e^{-s\cosh t}\,dt = \int_0^\infty b K_1(b\cosh t)\,dt, $$
where the integral over $s$ can be done in closed form in Mathematica. Substituting $t = \text{arccosh}(u/b)$ we get
$$ I(b) = \int_b^\infty \frac{b\,du}{\sqrt{u^2-b^2}}\,K_1(u). $$</p>
<p>From the two expressions above we can easily see that
$$ \frac{dI}{db} = -I(b), $$
and Mathematica will tell us that
$$ I(0) = \int_0^\infty K_0(x)\,dx = \frac\pi2. $$
Therefore,
$$I(b) = \frac\pi2 e^{-b}, $$
and
$$ \int_0^\infty K_0(\sqrt{a x^2+a b})\,dx = \frac{\pi}{2\sqrt{a}}e^{-\sqrt{a b}}. $$
This matches Ron Gordon's answer, even though his was only an asymptotic calculation.</p>
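<p>As a numerical sanity check (a quick SciPy script, not part of the derivation), direct quadrature of the original integral agrees with the closed form for sample values of $a$ and $b$:</p>

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# Compare the integral of K_0(sqrt(a(k^2+b))) over (0, inf)
# with the closed form pi/(2 sqrt(a)) * exp(-sqrt(a b)).
a, b = 2.0, 3.0
numeric, _ = quad(lambda k: k0(np.sqrt(a * (k**2 + b))), 0, np.inf)
closed = np.pi / (2 * np.sqrt(a)) * np.exp(-np.sqrt(a * b))
print(abs(numeric - closed) < 1e-8)  # → True
```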
|
1,837,807 | <p>Let $\mathbb{N}$ denote the set of natural numbers, then a subbasis on $\mathbb{N}$ is </p>
<p>$$S = \{(-\infty, b), b \in \mathbb{N}\} \cup \{(a,\infty), a \in \mathbb{N}\}$$</p>
<p>Let $\leq$ be the relation on $\mathbb{N}$ identified with "less or equal to"</p>
<p>Then I saw a claim that says: (the order topology) $(\mathbb{N}, \leq)$ is a discrete space</p>
<p><strong>Proof:</strong> Let $x \in \mathbb{N}, \{x\} = (x-1,x+1) = (-\infty,x+1) \cap (x-1, \infty) \quad\quad\square$</p>
<p>I have trouble with the last equivalent. If $(-\infty,x+1)$ and $(x-1, \infty)$ are subbasic elements i.e. open sets, then intersection of opens are open and I have no problem with that. But my confusion is whether $(-\infty,x+1)$ and $(x-1, \infty)$ are subbasic open sets to begin with.</p>
<p>The confusion stems from definition of the subbasis which is $$S = \{(-\infty, b), b \in \mathbb{N}\} \cup \{(a,\infty), a \in \mathbb{N}\}$$</p>
<p>Then every element in the subbasis has to be like $(-\infty, b)\cup (a,\infty)$. So it doesn't seem we can get rid of the $(a,\infty)$ part to get a pure subbasic element of the form $(-\infty, b)$. So $(-\infty, b)$ is not a subbasic element.</p>
<p>Can someone help?</p>
| Community | -1 | <p>You are confusing</p>
<p>$$\{(-\infty, b) \mid b \in \mathbb{N}\} \cup \{(a,\infty) \mid a \in \mathbb{N}\}$$</p>
<p>with</p>
<p>$$ \{ (-\infty, b) \cup (a, \infty) \mid b \in \mathbb{N}, a \in \mathbb{N} \}$$</p>
|
3,644,710 | <p>I wish to classify the Galois group of <span class="math-container">$\mathbb{Q}(e^{i\pi/4})/\mathbb{Q}$</span>. Let me denote the eighth root of unity as <span class="math-container">$\epsilon$</span>. I see that <span class="math-container">$1, \epsilon, \epsilon^2, \epsilon^3$</span> are linearly independent over <span class="math-container">$\mathbb{Q}$</span>, although I do not know how to rigorously prove this (can anyone show me?). Since <span class="math-container">$\{\epsilon, \epsilon^2, \epsilon^3\}$</span> are linearly independent does that mean <span class="math-container">$[\mathbb{Q}(e^{i\pi/4}):\mathbb{Q}] = 8$</span>, or is it some other value (if it is please explain why)? Are my observations correct so far? </p>
| Itachi | 655,021 | <p>Hint :
Firstly, go for m>n and use <span class="math-container">$|x_m - x_n| \le |x_m - x_{m-1}|+|x_{m-1} - x_{m-2}|+....+|x_{n+1} - x_n|$</span>.
Now plug in all the values.</p>
|
413,719 | <p>Would I be correct in saying that they correspond to all points in $\mathbb{R}^3$? Or a line in $\mathbb{R}^3$?</p>
| Sujaan Kunalan | 77,862 | <p>They are just vectors in $\mathbb{R}^3$. They do not correspond to all points nor do they make a line.</p>
|
1,010 | <p>For periodic/symmetric tilings, it seems somewhat "obvious" to me that it just comes down to working out the right group of symmetries for each of the relevant shapes/tiles, but its not clear to me if that carries over in any nice algebraic way for more complicated objects such as a <a href="http://en.wikipedia.org/wiki/Penrose_tiling" rel="noreferrer">penrose tiling</a>
and just following <a href="http://en.wikipedia.org/wiki/Quasicrystal" rel="noreferrer">wikipedia</a> just leads to the statement that groupoids come into play, but with no references to example constructions! Moreover, at least naively thinking about, it seems any such algebraic approach should naturally also apply to fractals.</p>
<ol>
<li>what references am I somehow not able to find that do a good job talking about this further?</li>
<li>is my "intuition" that the mathematical structure for at least some classes of fractals and quasicrystals being equivalent correct?</li>
</ol>
| Michael Lugo | 143 | <p>This reminds me of a talk that I saw by <a href="http://www.ma.utexas.edu/~sadun" rel="nofollow">Lorenzo Sadun</a> a couple years ago, although I'm not sure exactly why. (When I think about tilings I'm thinking more combinatorially than it sounds like you are.) You might look at <a href="http://www.ma.utexas.edu/users/sadun/publist.html" rel="nofollow">some of Sadun's papers</a> or his recently published lecture notes <a href="http://rads.stackoverflow.com/amzn/click/0821847279" rel="nofollow"><i>Topology of Tiling Spaces</i></a>.</p>
|
1,010 | <p>For periodic/symmetric tilings, it seems somewhat "obvious" to me that it just comes down to working out the right group of symmetries for each of the relevant shapes/tiles, but its not clear to me if that carries over in any nice algebraic way for more complicated objects such as a <a href="http://en.wikipedia.org/wiki/Penrose_tiling" rel="noreferrer">penrose tiling</a>
and just following <a href="http://en.wikipedia.org/wiki/Quasicrystal" rel="noreferrer">wikipedia</a> just leads to the statement that groupoids come into play, but with no references to example constructions! Moreover, at least naively thinking about, it seems any such algebraic approach should naturally also apply to fractals.</p>
<ol>
<li>what references am I somehow not able to find that do a good job talking about this further?</li>
<li>is my "intuition" that the mathematical structure for at least some classes of fractals and quasicrystals being equivalent correct?</li>
</ol>
| Emily Peters | 699 | <p>In the answers to "what is a groupoid" I was pointed to
<a href="http://www.ams.org/notices/199607/weinstein.pdf">Alan
Weinstein's very nice notices article</a>.</p>
<p>The first example he gives (before the definition even of a groupoid)
is about tilings, and how the groupoid contains more information than
the group of automorphisms. Which solves your problem
of Wikipedia not giving any examples, at least. </p>
|
3,689,627 | <p>First of all let me tell you that the answer to this question is likely to confirm a not-so-minor error in a very popular (and excellent) textbook on optimization, as you'll see below. </p>
<h3>Background</h3>
<p>Suppose that we have a real-valued function <span class="math-container">$f(X)$</span> whose domain is the set of <span class="math-container">$n\times n$</span> nonsingular symmetric matrices. Clearly, <span class="math-container">$X$</span> does not have <span class="math-container">$n^2$</span> independent variables; it has <span class="math-container">$n(n+1)/2$</span> independent variables as it's symmetric. As is well known, an important use of Taylor expansion is to find the derivative of a function by finding the optimal first-order approximation. That is, if one can find a matrix <span class="math-container">$D \in \mathbb{R}^{n\times n}$</span> that is a function of <span class="math-container">$X$</span> and satisfies</p>
<p><span class="math-container">$$f(X+V) = f(X) + \langle D, V \rangle + \text{h.o.t.}, $$</span>
where <span class="math-container">$\text{h.o.t.}$</span> stands for higher-order terms and <span class="math-container">$\langle \cdot, \cdot \rangle$</span> is inner product, then the matrix <span class="math-container">$D$</span> is the derivative of <span class="math-container">$f$</span> w.r.t. <span class="math-container">$X$</span>. </p>
<h3>Question</h3>
<p>Now my question is: What is the right inner product <span class="math-container">$\langle \cdot, \cdot \rangle$</span> to use here if the matrix is symmetric? I know that if the entries of <span class="math-container">$X$</span> were independent (i.e., not symmetric), then the <span class="math-container">$\text{trace}$</span> operator would be the correct inner product. But I suspect that this is not true in general for a symmetric matrix. More specifically, my guess is that even if the <span class="math-container">$\text{trace}$</span> operator would lead to the correct expansion in the equation above, the <span class="math-container">$D$</span> matrix that comes as a result won't give the correct derivative. Here is why I think this is the case. </p>
<p>A while ago, I asked <a href="https://math.stackexchange.com/questions/3667029/what-is-the-derivative-of-log-det-x-when-x-is-symmetric">a question</a> about the derivative of the <span class="math-container">$\log\det X$</span> function, because I suspected that the formula in the book Convex Optimization of Boyd & Vandenberghe is wrong. The formula indeed seems to be wrong as the <a href="https://math.stackexchange.com/a/3667352/567560">accepted answer</a> made it clear. I tried to understand what went wrong in the proof in the Convex Optimization book. The approach that is used in the book is precisely the approach that I outlined above in Background. The authors show that the first-order Taylor approximation of <span class="math-container">$f(X)=\log\det X$</span> for symmetric <span class="math-container">$X$</span> is
<span class="math-container">$$ f(X+V) \approx f(X)+\text{trace}(X^{-1}V). $$</span></p>
<p>The authors prove this approximation by using decomposition specific to symmetric matrices (proof in Appenix A.4.1; book is <a href="https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf" rel="nofollow noreferrer">publicly available</a>). Now this approximation is correct but <span class="math-container">$X^{-1}$</span> is not the correct derivative of <span class="math-container">$\log\det X$</span> for symmetric <span class="math-container">$X$</span>; the <a href="https://math.stackexchange.com/a/3667352/567560">correct derivative is</a> <span class="math-container">$2X^{-1}-\text{diag}(\text{diag}(X^{-1}))$</span>. Interestingly, the same approximation in the formula above holds for nonsymmetric invertible matrices too (can be shown with SVD decomposition), and in this case it <em>does</em> give the right derivative because the derivative of <span class="math-container">$\log\det X$</span> is indeed <span class="math-container">$X^{-T}$</span> for a matrix with <span class="math-container">$n^2$</span> independent entries. Therefore I suspect that <span class="math-container">$\text{trace}$</span> is not the right inner product <span class="math-container">$\langle \cdot, \cdot \rangle$</span> for symmetric matrices, as it ignores the fact that the entries of <span class="math-container">$X$</span> are not independent. Can anyone shed light on this question?</p>
<h3>Added: A simpler question</h3>
<p>Based on a comment, I understand that the general answer to my question may be difficult, so let me ask a simpler question. The answer to this question may be sufficient to show what went wrong in the proof in the Convex Optimization book.</p>
<p>Suppose <span class="math-container">$g(X)$</span> is a function <span class="math-container">$g: \mathbb{R}^{n\times n} \to \mathbb R$</span>. Is it true that the first-order Taylor approximaton with trace as inner product, i.e.,</p>
<p><span class="math-container">$$g(X+V) \approx g(X) + \text{trace}\left( \nabla g (X)^T V \right), $$</span></p>
<p>implicitly assumes that the entries of <span class="math-container">$X$</span> are independent? In other words, is it true that this approximation may not hold if entries of <span class="math-container">$X$</span> are not independent (e.g., if <span class="math-container">$X$</span> is symmetric)?</p>
| greg | 357,854 | <p>Consider a pair matrices with elements given by
<span class="math-container">$$\eqalign{
M_{ij} &= \begin{cases} 1 &\text{if }(i=j) \\
\frac{1}{2} & \text{otherwise}\end{cases} \\
W_{ij} &= \begin{cases} 1 &\text{if }(i=j) \\
2 & \text{otherwise}\end{cases} \\
}$$</span>
which are Hadamard inverses of each other, i.e. <span class="math-container">$\;M\odot W={\tt1}$</span></p>
<p>Suppose that you have been given a function, and by hard work you have calculated its gradient <span class="math-container">$G$</span> and its Taylor expansion
<span class="math-container">$$f(X+dX) \approx f(X) + G:dX$$</span>
where the colon denotes the Frobenius inner product <span class="math-container">$\;A:B={\rm Tr}(A^TB)$</span></p>
<p>Everything looks great until someone points out that your problem has a symmetry constraint
<span class="math-container">$$X={\rm Sym}(X)\doteq\tfrac{1}{2}\left(X+X^T\right)$$</span>
The constraint implies <span class="math-container">$(X,G)$</span> are symmetric,
so you might think the constrained gradient is
<span class="math-container">$$\eqalign{
H &= {\rm Sym}(G) \\
}$$</span>
but this is not correct.
Fortunately, there <em>is</em> a way to calculate <span class="math-container">$H$</span> from <span class="math-container">$G$</span>
<span class="math-container">$$\eqalign{
H &= W\odot{\rm Sym}(G) = W\odot G \quad\implies\quad G = M\odot H \\
}$$</span>
Substituting this into the Taylor expansion yields
<span class="math-container">$$\eqalign{
f(X) + G:dX &= f(X) + (M\odot H):dX \\
&= f(X) + H:(M\odot dX) \\
&= f(X) + (\sqrt{M}\odot H):(\sqrt{M}\odot dX) \\
}$$</span>
<strong>NB:</strong> These matrices are symmetric with only
<span class="math-container">$\left(\frac{n(n+1)}{2}\right)$</span> independent components.</p>
<p>You might think of the last expansion formula as the standard inner product after each factor has been projected using the elementwise square root of the <span class="math-container">$M$</span> matrix.</p>
<p>The Frobenius <span class="math-container">$\times$</span> Hadamard product
generates a scalar triple product, i.e.
<span class="math-container">$$A:B\odot C = \sum_i\sum_j A_{ij}B_{ij}C_{ij}$$</span>
The order of the three matrices does not affect the value of this product.</p>
<p>Interestingly, if you had to enforce a <em>skew</em> constraint, i.e.
<span class="math-container">$$X={\rm Skw}(X)\doteq\tfrac{1}{2}\left(X-X^T\right)$$</span>
then the constrained gradient would satisfy your intuition<br>
<span class="math-container">$$H={\rm Skw}(G)$$</span>
with <span class="math-container">$\left(\frac{n(n-1)}{2}\right)$</span> independent components.</p>
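<p>A small numerical illustration of the claim (a sketch assuming NumPy, with <span class="math-container">$f(X)=\log\det X$</span>, whose unconstrained gradient is <span class="math-container">$G=X^{-1}$</span>): differentiating along symmetric perturbations reproduces <span class="math-container">$H = W\odot G = 2X^{-1}-\mathrm{diag}(\mathrm{diag}(X^{-1}))$</span>, not <span class="math-container">$G$</span> itself.</p>

```python
import numpy as np

# Finite-difference gradient of f(X) = log det X over the n(n+1)/2
# independent entries of a symmetric X, compared with H = W ⊙ G,
# i.e. 2*X^{-1} - diag(diag(X^{-1})).
n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)            # symmetric positive definite

Xinv = np.linalg.inv(X)
H = 2 * Xinv - np.diag(np.diag(Xinv))  # claimed constrained gradient

eps = 1e-6
num = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        E = np.zeros((n, n))
        E[i, j] = E[j, i] = 1.0        # perturb entry (i,j), keeping symmetry
        d = (np.log(np.linalg.det(X + eps * E))
             - np.log(np.linalg.det(X - eps * E))) / (2 * eps)
        num[i, j] = num[j, i] = d

print(np.allclose(num, H, atol=1e-5))  # → True
```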
|
3,689,627 | <p>First of all let me tell you that the answer to this question is likely to confirm a not-so-minor error in a very popular (and excellent) textbook on optimization, as you'll see below. </p>
<h3>Background</h3>
<p>Suppose that we have a real-valued function <span class="math-container">$f(X)$</span> whose domain is the set of <span class="math-container">$n\times n$</span> nonsingular symmetric matrices. Clearly, <span class="math-container">$X$</span> does not have <span class="math-container">$n^2$</span> independent variables; it has <span class="math-container">$n(n+1)/2$</span> independent variables as it's symmetric. As is well known, an important use of Taylor expansion is to find the derivative of a function by finding the optimal first-order approximation. That is, if one can find a matrix <span class="math-container">$D \in \mathbb{R}^{n\times n}$</span> that is a function of <span class="math-container">$X$</span> and satisfies</p>
<p><span class="math-container">$$f(X+V) = f(X) + \langle D, V \rangle + \text{h.o.t.}, $$</span>
where <span class="math-container">$\text{h.o.t.}$</span> stands for higher-order terms and <span class="math-container">$\langle \cdot, \cdot \rangle$</span> is inner product, then the matrix <span class="math-container">$D$</span> is the derivative of <span class="math-container">$f$</span> w.r.t. <span class="math-container">$X$</span>. </p>
<h3>Question</h3>
<p>Now my question is: What is the right inner product <span class="math-container">$\langle \cdot, \cdot \rangle$</span> to use here if the matrix is symmetric? I know that if the entries of <span class="math-container">$X$</span> were independent (i.e., not symmetric), then the <span class="math-container">$\text{trace}$</span> operator would be the correct inner product. But I suspect that this is not true in general for a symmetric matrix. More specifically, my guess is that even if the <span class="math-container">$\text{trace}$</span> operator would lead to the correct expansion in the equation above, the <span class="math-container">$D$</span> matrix that comes as a result won't give the correct derivative. Here is why I think this is the case. </p>
<p>A while ago, I asked <a href="https://math.stackexchange.com/questions/3667029/what-is-the-derivative-of-log-det-x-when-x-is-symmetric">a question</a> about the derivative of the <span class="math-container">$\log\det X$</span> function, because I suspected that the formula in the book Convex Optimization of Boyd & Vandenberghe is wrong. The formula indeed seems to be wrong as the <a href="https://math.stackexchange.com/a/3667352/567560">accepted answer</a> made it clear. I tried to understand what went wrong in the proof in the Convex Optimization book. The approach that is used in the book is precisely the approach that I outlined above in Background. The authors show that the first-order Taylor approximation of <span class="math-container">$f(X)=\log\det X$</span> for symmetric <span class="math-container">$X$</span> is
<span class="math-container">$$ f(X+V) \approx f(X)+\text{trace}(X^{-1}V). $$</span></p>
<p>The authors prove this approximation by using decomposition specific to symmetric matrices (proof in Appenix A.4.1; book is <a href="https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf" rel="nofollow noreferrer">publicly available</a>). Now this approximation is correct but <span class="math-container">$X^{-1}$</span> is not the correct derivative of <span class="math-container">$\log\det X$</span> for symmetric <span class="math-container">$X$</span>; the <a href="https://math.stackexchange.com/a/3667352/567560">correct derivative is</a> <span class="math-container">$2X^{-1}-\text{diag}(\text{diag}(X^{-1}))$</span>. Interestingly, the same approximation in the formula above holds for nonsymmetric invertible matrices too (can be shown with SVD decomposition), and in this case it <em>does</em> give the right derivative because the derivative of <span class="math-container">$\log\det X$</span> is indeed <span class="math-container">$X^{-T}$</span> for a matrix with <span class="math-container">$n^2$</span> independent entries. Therefore I suspect that <span class="math-container">$\text{trace}$</span> is not the right inner product <span class="math-container">$\langle \cdot, \cdot \rangle$</span> for symmetric matrices, as it ignores the fact that the entries of <span class="math-container">$X$</span> are not independent. Can anyone shed light on this question?</p>
<h3>Added: A simpler question</h3>
<p>Based on a comment, I understand that the general answer to my question may be difficult, so let me ask a simpler question. The answer to this question may be sufficient to show what went wrong in the proof in the Convex Optimization book.</p>
<p>Suppose <span class="math-container">$g(X)$</span> is a function <span class="math-container">$g: \mathbb{R}^{n\times n} \to \mathbb R$</span>. Is it true that the first-order Taylor approximaton with trace as inner product, i.e.,</p>
<p><span class="math-container">$$g(X+V) \approx g(X) + \text{trace}\left( \nabla g (X)^T V \right), $$</span></p>
<p>implicitly assumes that the entries of <span class="math-container">$X$</span> are independent? In other words, is it true that this approximation may not hold if entries of <span class="math-container">$X$</span> are not independent (e.g., if <span class="math-container">$X$</span> is symmetric)?</p>
| Miguel | 259,671 | <p>I think that the key problem is that such differential on "sets of matrices with dependent components" is not defined.</p>
<p>If <span class="math-container">$f:\mathbb{R}^m \rightarrow \mathbb{R}$</span> is differentiable, then the first order approximation in the direction of <span class="math-container">$v$</span> is:
<span class="math-container">$$f(x+v)\approx f(x)+\nabla_f(x)\cdot v $$</span>
with the usual dot product:
<span class="math-container">$$\nabla_f(x)\cdot v=\sum_i \frac{\partial f}{\partial x_i}\,v_i $$</span></p>
<p>Now, if <span class="math-container">$m=n^2$</span> and you fancy reshaping vectors as square matrices and writing everything in uppercase, this is the same as:
<span class="math-container">$$f(X+V)\approx f(X)+tr(D(X)^\top\, V )$$</span>
where the <span class="math-container">$ij$</span> component of matrix <span class="math-container">$D(X)$</span> is <span class="math-container">$\frac{\partial\, f}{\partial\, X_{ij}}$</span>
because the trace reproduces the usual dot product:
<span class="math-container">$$tr(D(X)^\top\, V ) = \sum_i\sum_j D(X)_{ij}\,V_{ij}=\frac{\partial\, f}{\partial\, X_{ij}}\,V_{ij}$$</span></p>
<p>All of this is well known and I have only recalled it to have some notation at hand for the case where the components of <span class="math-container">$X$</span> are not "independent". One way to explain the problem in this case is that the domain is no longer <span class="math-container">$\mathbb{R}^m$</span> and you have to rewrite the function definition. </p>
<p>I will try to do this rewriting. For instance, let <span class="math-container">$X=\begin{pmatrix} a& b\\b & c\end{pmatrix}$</span> and you consider your function as <span class="math-container">$f:\mathbb{R}^3\to\mathbb{R}$</span> so that <span class="math-container">$f(X)=f(a,b,c)$</span> and <span class="math-container">$\nabla f=\left(\frac{\partial f}{\partial a},\frac{\partial f}{\partial b},\frac{\partial f}{\partial c}\right)$</span>. But now the gradient cannot be cast into a square matrix. If you just repeat the derivative with respect to <span class="math-container">$b$</span> and place it twice on the matrix, then the trace does not recover the dot product but introduces an extra term.</p>
<p>Another way to see what is happening is to note that not every perturbation <span class="math-container">$V$</span> is valid, since <span class="math-container">$X+V$</span> may not be symmetric.</p>
<p>To summarize, you have to introduce a novel concept of differentiation on a set that is <strong>not</strong> a linear space, because the differential as such is not defined on such weird sets. <em>(Spoiler alert: manifolds)</em></p>
<p>You can visualize the problem with a simpler example. Consider the function <span class="math-container">$f: \mathbb{R}^2 \to \mathbb{R}$</span>, <span class="math-container">$f(x,y)=\frac{1}{2}(x^2+y^2)$</span>. Then the gradient is <span class="math-container">$\nabla f(x,y)=(x,y)$</span>. But imagine that an external influence forces the points to remain on the circle: <span class="math-container">$\mathcal{S}^1=\{(x,y)\in\mathbb{R}^2:x^2+y^2=1\}$</span>, so the components <span class="math-container">$x,y$</span> are not "independent". (You can think of a centripetal force in physics or a constraint in optimization). Then, it is obvious that your function is constant, so the gradient must vanish.</p>
<p>And then all the differential geometry of manifolds starts...</p>
<p>Edit: Maybe I have not answered your question. You try to blame on the dot product, and it is true that you have to think a way to rewrite the dot product in matrix form. But I think the issue is more fundamental: it is the derivative itself that must be redefined. I am sure B&V know the rigourous formalism, but they tried to keep their text at a more elementary level. BTW, if your topic is optimization, maybe you can have a look at Absil's excellent book: <a href="https://dl.acm.org/doi/book/10.5555/1557548" rel="nofollow noreferrer">Optimization Algorithms on Matrix Manifolds</a> but, again, differential geometry is required.</p>
|
799,183 | <p>I'm trying to work out what the transformation $T:z \rightarrow -\frac{1}{z}$ does (eg reflection in a line, rotation around a point etc). Any help on how to do this would be greatly appreciated! I've tried seeing what it does to $1$ and $i$ but is hasn't helped me. Thanks!</p>
| Alice Ryhl | 132,791 | <p>It's a <a href="http://en.wikipedia.org/wiki/Inversive_geometry#Circle_inversion" rel="nofollow">circle inversion</a>, followed by a reflection over the y axis.</p>
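<p>A quick numerical check of this description (a throwaway script; inversion in the unit circle is $z \mapsto z/|z|^2$ and reflection over the $y$ axis is $w \mapsto -\bar w$):</p>

```python
import cmath

# For several sample points, apply circle inversion (z -> z/|z|^2,
# i.e. 1/conj(z)), then reflect over the imaginary axis (w -> -conj(w)),
# and compare with -1/z.
for z in [1 + 2j, -0.5 + 0.1j, 3j, -2 - 2j]:
    inverted = z / abs(z) ** 2           # inversion in the unit circle
    reflected = -inverted.conjugate()    # reflection over the y axis
    assert cmath.isclose(reflected, -1 / z)
print("matches -1/z")  # → matches -1/z
```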
|
1,477,325 | <p>It is known that an integrable function is a.e. finite. Is an a.e. finite function integrable? What if the measure is finite?</p>
| Paul | 202,111 | <p>No. A characteristic function of a non measurable set is everywhere finite, but not integrable.</p>
|
3,368,402 | <p>I am utilizing set identities to prove <span class="math-container">$(A-B)-C = (A-C)-(B-C)$</span>.</p>
<p><span class="math-container">$\begin{array}{|l}(A−B)− C = \{ x | x \in ((x\in (A \cap \bar{B})) \cap \bar{C}\} \quad \text{Def. of Set Minus}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{B})) \wedge (x\in\bar{C})\} \quad \text{Def. of intersection}
\\ \quad \quad \quad \quad \quad =\{ x | (A\wedge\overline{C}\wedge\overline{B})\vee(\overline{C}\wedge\overline{B}\wedge C)\} \quad \text{Association Law}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{C})) \wedge ((x\in \bar{B}) \wedge (x\in\bar{C}))\} \quad \text{Idempotent Law}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap\bar{C})) \cap (x\in (\bar{B} \cap\bar{C})))\} \quad \text{Def. of union}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap \bar{C})) \cap \overline{(x\in (B\cup C)))} \} \quad \text{DeMorgan's Law}
\\
\quad \quad \quad \quad \quad =\{ x | x \in (A - C) - (B \cup C) \} \quad \text{Def. Set Minus}
\\
=(A-C)-(B-C)
\end{array}$</span></p>
<p>So it looks like I screwed up on the final step. Is there something that I am forgetting to do properly or where am I supposed to go from that final step? </p>
| user0102 | 322,814 | <p><span class="math-container">\begin{align*}
(A-C)-(B-C) & = (A\cap\overline{C})-(B\cap\overline{C}) = (A\cap\overline{C})\cap\overline{(B\cap\overline{C})}\\\\
& = (A\cap\overline{C})\cap(\overline{B}\cup C) = (A\cap\overline{C}\cap\overline{B})\cup(A\cap\overline{C}\cap C)\\\\
& = A\cap\overline{C}\cap\overline{B} = (A\cap\overline{B})\cap\overline{C} = (A-B)-C
\end{align*}</span></p>
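<p>The identity can also be checked mechanically: since only an element's membership pattern in <span class="math-container">$A,B,C$</span> matters, enumerating all subsets of a small universe covers every case (a quick brute-force script, not part of the proof):</p>

```python
from itertools import combinations

# Enumerate all subsets of a 3-element universe and check
# (A - C) - (B - C) == (A - B) - C for every triple (A, B, C).
U = [1, 2, 3]
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

assert all((A - C) - (B - C) == (A - B) - C
           for A in subsets for B in subsets for C in subsets)
print("identity holds for all triples")  # → identity holds for all triples
```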
|
3,492,376 | <p>Can anyone explain to me why the below expression:</p>
<p><span class="math-container">$$\int\frac{2\cos x}{{(4-4\sin^2x})^{3/2}}\:dx$$</span></p>
<p>is equal to this:</p>
<p><span class="math-container">$$\frac{2}{8}\int\frac{\cos x}{{(1-\sin^2x})^{3/2}}\:dx$$</span></p>
<p>a) Why is the constant <span class="math-container">$2/8$</span> outside the integral, and not <span class="math-container">$2/4$</span>?</p>
<p>b) And how do you arrive at the following?</p>
<p><span class="math-container">$$\int\frac{1}{4\cos^2x}\:dx$$</span></p>
<p>Thank you.</p>
| Michael Hardy | 11,667 | <p><span class="math-container">$$
4^{3/2} = \left(\sqrt 4\right)^3 = 2^3 = 8.
$$</span></p>
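<p>For part (b), the same observation finishes the simplification (assuming an interval where <span class="math-container">$\cos x > 0$</span>, so that <span class="math-container">$(\cos^2 x)^{3/2} = \cos^3 x$</span>):</p>

```latex
(4-4\sin^2 x)^{3/2} = \bigl(4\cos^2 x\bigr)^{3/2} = 8\cos^3 x,
\qquad
\frac{2\cos x}{8\cos^3 x} = \frac{1}{4\cos^2 x}.
```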
|
114,371 | <p>A sector $P_1OP_2$ of an ellipse is given by angles $\theta_1$ and $\theta_2$. </p>
<p><img src="https://i.stack.imgur.com/mdmq2.png" alt="A sector of an ellipse"></p>
<p>Could you please explain me how to find the area of a sector of an ellipse?</p>
| Riccardo.Alestra | 24,089 | <p>Using polar elliptical coordinates: $$x=a\rho \cos(\theta)$$
$$y=b \rho \sin(\theta)$$
the region enclosed by the ellipse is
$$\{(\rho,\theta):0\le \rho \le 1,\ \theta\in(0,2\pi)\}$$
the Jacobian of the inverse transform is:
$$J=\begin{bmatrix}a \cos(\theta) & -a \rho \sin(\theta) \\ b \sin(\theta) & b \rho \cos(\theta)\end{bmatrix}$$
with $\det J = ab\rho$, so the area of a sector is:
$$\int_{\theta_1}^{\theta_2}d\theta \int_0^1 ab\rho \,d\rho$$</p>

<p>Because the inner integral $\int_0^1 ab\rho \,d\rho$ equals $\frac{1}{2}ab$,
the result is $$\frac{1}{2}ab(\theta_2-\theta_1)$$ (note that here $\theta$ is the angle of the elliptical coordinates, i.e. the parameter of $(a\cos\theta, b\sin\theta)$, not the geometric polar angle of the sector).</p>
|
114,371 | <p>A sector $P_1OP_2$ of an ellipse is given by angles $\theta_1$ and $\theta_2$. </p>
<p><img src="https://i.stack.imgur.com/mdmq2.png" alt="A sector of an ellipse"></p>
<p>Could you please explain me how to find the area of a sector of an ellipse?</p>
| Community | -1 | <p>As I show in <a href="https://math.stackexchange.com/questions/493104">Evaluating $\int_a^b \frac12 r^2\ \mathrm d\theta$ to find the area of an ellipse</a> the area
of an ellipse given its central angle is: </p>
<p><span class="math-container">$A(\theta) = \frac{1}{2} a b \tan ^{-1}\left(\frac{a \tan (\theta )}{b}\right)$</span></p>
<p>for <span class="math-container">$0\leq\theta\leq2\pi$</span>, so the area of the sector in question is: </p>
<p><span class="math-container">$A\left(\theta _2\right) - A\left(\theta _1\right)$</span> </p>
<p>or: </p>
<p><span class="math-container">$\frac{1}{2} a b \left(\tan ^{-1}\left(\frac{a \tan \left(\theta
_2\right)}{b}\right)-\tan ^{-1}\left(\frac{a \tan \left(\theta
_1\right)}{b}\right)\right)$</span> </p>
<p>which sadly does not appear to simplify any further. </p>
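<p>The closed form can be checked against direct polar integration of <span class="math-container">$\frac12 r(\theta)^2\,d\theta$</span> with <span class="math-container">$r(\theta)=ab/\sqrt{b^2\cos^2\theta+a^2\sin^2\theta}$</span>, for angles inside <span class="math-container">$(0,\pi/2)$</span> where the arctangent needs no branch adjustment; the sample values are arbitrary:</p>

```python
import math

# Compare A(θ) = (1/2) a b arctan(a tan θ / b) with a midpoint-rule
# integration of (1/2) r(θ)^2 dθ over [t1, t2] ⊂ (0, π/2).
a, b = 3.0, 2.0
t1, t2 = 0.2, 1.0

def closed_form(t):
    return 0.5 * a * b * math.atan(a * math.tan(t) / b)

n = 200_000
h = (t2 - t1) / n
numeric = sum(0.5 * (a * b) ** 2 /
              (b ** 2 * math.cos(t1 + (k + 0.5) * h) ** 2 +
               a ** 2 * math.sin(t1 + (k + 0.5) * h) ** 2) * h
              for k in range(n))
print(abs(numeric - (closed_form(t2) - closed_form(t1))) < 1e-8)  # True
```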
|
65,886 | <p>It is clear that the Sylow theorems are an essential tool for the classification of finite groups.
I recently read an article by Marcel Wild, <em>The Groups of Order Sixteen Made Easy</em>, where he gives a complete classification of the groups of order $16$ based on
elementary facts; in particular, he does not use the Sylow theorems.</p>
<p>Has anyone encountered a complete classification of the groups of order $12$ that does not use the Sylow theorems?
What about order 24? (I'm less optimistic there, but who knows.)</p>
| Panurge | 72,877 | <p>Thanks to user641 for this concise proof. If I'm not wrong, it is a variant of Cole's proof (1893), which can be found here :</p>
<p><a href="http://www.jstor.org/stable/2369516?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">http://www.jstor.org/stable/2369516?seq=1#page_scan_tab_contents</a></p>
<p>Perhaps the average student can find some difficulties in the proof given by user641, so I take leave to write a second answer, only to make things more explicit.</p>
<p>"Then the conjugation action of $G$ on these ten Sylows gives an embedding of $G$ into $A_{10}$."</p>
<p>Perhaps it should be stressed that $G$ embeds in $A_{10}$ not only as a group, but as an operating group. I don't know if this fact is mentioned in many textbooks, so I will give a proof. If this proof is too complicate, please say it.</p>
<p><strong>Lemma 1.</strong> Let $G$ be a group, let $N$ be a normal subgroup of $G$ and $S$ a simple subgroup of $G$. Then $S$ is contained in $N$ or isomorphic to a subgroup of $G/N$.</p>
<p>Proof. $N \cap S$ is normal in $S$. Since $S$ is simple, $N \cap S$ is thus equal to $S$ or to $1$. In the first case, $S$ is contained in $N$ and the statement is true. In the second case, $S$ is isomorphic to $SN/N$ (second isomorphism theorem), which is a subgroup of $G/N$, so the statement is also true in the second case.</p>
<p><strong>Lemma 2.</strong> Let $E$ a finite set. If $G$ is a simple subgroup of the symmetric group $S_{E}$, if $\vert G \vert > 2$, then $G$ is contained in the alternating group $A_{E}$.</p>
<p>Proof. $A_{E}$ is normal in $S_{E}$. Thus, in view of lemma 1, $G$ is contained in $A_{E}$ or isomorphic to a subgroup of $S_{E}/A_{E}$. But the latter is impossible, since $S_{E}/A_{E}$ has order $\leq 2$ and $\vert G \vert > 2$ by hypothesis.</p>
<p>Definition. If $\cdot : G \times X \rightarrow X$ denotes an action of a group $G$ on a set $X$, if $\star$ denotes an action of a group $H$ on a set $Y$, let us define an isomorphism from the first action onto the second action as an ordered pair $(f, \sigma)$, where $f$ is a bijection from $X$ onto $Y$ and $\sigma$ a group isomorphism from $G$ onto $H$ such that, for each $x$ in $X$ and for each $g$ in $G$,</p>
<p>$f(g \cdot x) = (\sigma (g) ) \star f(x)$.</p>
<p><strong>Lemma 3.</strong> Let $\cdot$ denote an action of a group $G$ on a set $X$, let $\star$ denote an action of a group $H$ on a set $Y$, let $(f, \sigma)$ be an isomorphism from the first action onto the second action. Then an element $g$ of $G$ fixes a point $x$ of $X$ (for the action $\cdot$) if and only $\sigma(g)$ fixes $f(x)$ (for the action $\star$).</p>
<p>Proof. Easy.</p>
<p><strong>Definition.</strong> Let $\cdot$ denote an action of a group $G$ on a set $X$, let $\star$ denote an action of a group $H$ on a set $Y$. If there exists an isomorphism from $\cdot$ onto $\star$, we say that these actions are isomorphic. (Some authors say "equivalent". Aschbacher, Finite Group Theory, 2d edition, p. 9, says "quasiequivalent".)</p>
<p><strong>Lemma 4.</strong> Let $G$ be a simple group, let $\cdot$ be a nontrivial action of $G$ on a set $E$, let $\varphi$ denote the homomorphism from $G$ to $S_{E}$ corresponding to this action. Then $\varphi$ is injective (in other words, the action is faithful) and the action $\cdot$ of $G$ on $E$ is isomorphic to the natural action of the permutation group $\varphi (G)$. If $E$ is finite and $\vert G \vert > 2$, then $\varphi$ takes its values in $A_{E}$.</p>
<p>Proof. For the injectivity of $\varphi$, note that the kernel of $\varphi$ is a normal subgroup of the simple group $G$ and this kernel is not the whole $G$, since $G$ acts nontrivially. For an isomorphism from the action $\cdot$ onto the natural action of $\varphi (G)$, take the ordered pair $(f, \psi)$, where $f$ is the identity bijection from $E$ onto itself and where $\psi$ is the group isomorphism $G \rightarrow \varphi(G) : g \mapsto \varphi(g)$ from $G$ onto $\varphi (G)$. For the last statement, use lemma 2.</p>
<p><strong>Lemma 5.</strong> Let $G$ be a finite nonabelian simple group. (This amounts to say : let $G$ be a finite simple group whose order is not a prime number.) Let $p$ be a prime factor of $\vert G \vert$. Let $E$ denote the set of all Sylow $p$-subgroups of $G$, let $n$ denote the number $\vert E \vert $ of Sylow $p$-subgroups of $G$. Then the action of $G$ on $E$ by conjugation is isomorphic to the natural operation of a subgroup of $A_{n}$.</p>
<p>Proof. Since $G$ is a finite nonabelian simple group, it has more than one Sylow $p$-subgroup, thus the (transitive) action of $G$ on $E$ is nontrivial. In view of the preceding lemmas, the action of $G$ on $E$ by conjugation is isomorphic to the natural operation of a subgroup of $A_{E}$. Now, if $X$ and $Y$ are equipotent finite sets, the natural action of a subgroup of $A_{X}$ is isomorphic to the natural action of a subgroup of $A_{Y}$.</p>
<p><strong>Lemma 6.</strong> Let $G$ be a finite group, let $p$ be a prime number. If $P$ is a Sylow $p$-subgroup of $G$, if $g$ is an element of $G$ whose order is a power of $p$ and which normalizes $P$, then $g$ is in $P$. If $P$ and $Q$ are Sylow $p$-subgroups of $G$, if $P$ normalizes $Q$, then $P = Q$.</p>
<p>Proof. Classical. (Since $g$ normalizes $P$, the order of the subgroup $<P, g>$ generated by $P$ and $g$ is a power of $p$, thus $<P, g>$ is equal to $P$ by maximality of Sylow $p$-subgroups.)</p>
<p><strong>Lemma 7.</strong> Let $G$ be a finite group, let $p$ be a prime number. Let $E$ denote the set of all Sylow $p$-subgroups of $G$. The action of $G$ by conjugation on $E$ has the following properties :</p>
<p>1° for each Sylow $p$-subgroup $P$ of the operating group, there is one and only one point in the set $E$ that is fixed by every element of $P$;</p>
<p>2° for every point in the set $E$, there is one and only one Sylow $p$-subgroup $P$ of the operating group such that every element of $P$ fixes this point;</p>
<p>3° if $P$ is a Sylow $p$-subgroup of the operating group, if $x$ denotes the only point in $E$ that is fixed by each element of $P$, then the stabilizer of $x$ in $G$ is $N_{G}(P)$;</p>
<p>4° if it is moreover assumed that two different Sylow $p$-subgroups of $G$ always intersect trivially, then every nontrivial $p$-element of the operating group (I mean by "$p$- element" an element whose order is a power of $p$) fixes one and only one point of $E$.</p>
<p>Proof. Use Lemma 6. (In the statement of Lemma 7, I made a distinction between a Sylow $p$-subgroup of $G$ as a point of $E$ and as a subgroup of $G$, in order to forget what is not essential.)</p>
<p><strong>Definition</strong> (nonstandard). Let us define a Cole group as a simple subgroup $G$ of order 360 of $A_{10}$ with the following properties :</p>
<p>1° for each Sylow $3$-subgroup $P$ of $G$, there is one and only one point in $\{1, \ldots , 10 \}$ that is fixed (for the natural operation) by every element of $P$;</p>
<p>2° for every point in the set $\{1, \ldots , 10 \}$, there is one and only one Sylow $3$-subgroup $P$ of $G$ such that every element of $P$ fixes this point;</p>
<p>3° if $P$ is a Sylow $3$-subgroup of $G$, if $x$ denotes the only point in $E$ that is fixed by each element of $P$, then the stabilizer of $x$ in $G$ is $N_{G}(P)$;</p>
<p>4° every nontrivial $3$-element of the operating group (I mean by "$3$-element" an element whose order is a power of $3$) fixes one and only one point of $\{1, \ldots , 10 \}$.</p>
<p><strong>Lemma 8.</strong> Every simple group of order 360 is isomorphic to a Cole group.</p>
<p>Proof. Use Lemmas 3, 5 and 7. (Recall that user641 has proved that a simple group $G$ of order 360 has exactly $10$ Sylow $3$-subgroups and that two distinct Sylow $3$-subgroups of $G$ always intersect trivially.)</p>
<p>Now, I think that user641's statement: "Note that $N_G(P)$ (...) is a point stabilizer in $G$" should be clear for the average student. (Again, if this proof is too complicated, please say it.)</p>
<p>If nobody has objections, I will write other answers in order to make other arguments from the proof more explicit.</p>
|
815,065 | <p>$$\int^{\pi /2}_{0} \frac{\ln(\sin x)}{\sqrt x}dx$$</p>
<p>Should I split the integral into pieces? The $\sqrt x$ in the denominator is zero at $x=0$, and $\ln\sin x$ tends to $-\infty$ there. </p>
| Lucian | 93,448 | <p>We know that $\displaystyle\underbrace{\int_0^\tfrac\pi2\frac{\ln x}{\sqrt x}dx}_A=\sqrt{2\pi}\bigg(\ln\frac\pi2-2\bigg)$. This can easily be proven using integration by </p>
<p>parts. Now, let's show that $\displaystyle\int_0^\tfrac\pi2\frac{\ln(\sin x)}{\sqrt x}dx$ is bounded : $\displaystyle\int_0^\tfrac\pi2\frac{\ln(\sin x)}{\sqrt x}dx-\int_0^\tfrac\pi2\frac{\ln x}{\sqrt x}dx=$</p>
<p>$=\displaystyle\int_0^\tfrac\pi2\frac{\ln(\sin x)-\ln x}{\sqrt x}dx=\int_0^\tfrac\pi2\frac{\ln\dfrac{\sin x}x}{\sqrt x}dx$. But $\dfrac{\sin x}x$ is strictly decreasing on this interval, </p>
<p>being bounded between $1$ and $\dfrac2\pi$ . So our last integral is also bounded in between $\displaystyle\int_0^\tfrac\pi2\frac{\ln1}{\sqrt x}dx=$</p>
<p>$=0$, and $\displaystyle\int_0^\tfrac\pi2\frac{\ln\dfrac2\pi}{\sqrt x}dx=\bigg(\ln\frac2\pi\bigg)\bigg[2\sqrt x\bigg]_0^\frac\pi2=\underbrace{\sqrt{2\pi}\cdot\ln\frac2\pi}_B$ , since the logarithm is monotonous </p>
<p>on the interval $\bigg[\dfrac2\pi,1\bigg]$. Therefore, our initial integral is also bounded in between $A+0=A$, and </p>
<p>$A+B=-2\sqrt{2\pi}$ . Then, since the integrand is monotonous on the interval $\bigg[0,\dfrac\pi2\bigg]$, convergence </p>
<p>immediately follows.</p>
|
372,548 | <p><span class="math-container">$f:\mathbb R\to\mathbb R$</span> is a convex continuous function. We have a finite or a countable set of triples: <span class="math-container">$\{(x_n,f(x_n),D_n)\}_{n\in N}$</span>, where <span class="math-container">$D_n$</span> is the slope of a tangent line <span class="math-container">$L_n$</span> at <span class="math-container">$x_n$</span> (if at a point <span class="math-container">$f$</span> is not differentiable, then multiple lines can be tangents; <span class="math-container">$L_n$</span> is just one of those lines).</p>
<p>Assuming that, for any <span class="math-container">$n,m,k$</span>, the intersection of <span class="math-container">$L_n$</span> and <span class="math-container">$L_m$</span> cannot be the point <span class="math-container">$(x_k, f(x_k))$</span>, then we want to prove that there exists a smooth function <span class="math-container">$g$</span> such that <span class="math-container">$g(x_n)=f(x_n)$</span> and <span class="math-container">$g'(x_n)=D_n$</span> for any <span class="math-container">$n$</span>.</p>
<hr />
<p>The original problem that I am trying to solve involves multi-dimensional manifolds, but I think it is easy to generalize the 2-dimensional case.</p>
<p>By the mollification theorem, a smooth function approximating $f$ must exist, but can its graph contain a set of points that corresponds precisely to the given points on the graph of $f$?</p>
| Willie Wong | 3,948 | <p>The local-in-time regularity of the Navier-Stokes solution is pretty well-studied.</p>
<ol>
<li>The classic paper of Foias and Temam <a href="https://www.sciencedirect.com/science/article/pii/0022123689900153" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/0022123689900153</a> proves that, when dimension = 3, with initial data in energy space the solution will be, for at least a short time, be in some Gevrey class. Furthermore, as long as the energy remains bounded the solution will remain in the Gevrey class.</li>
<li>Grujić and Kukavica used a different interpolation from Foias and Temam in <a href="https://www.sciencedirect.com/science/article/pii/S0022123697931670" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/S0022123697931670</a> and proved that in dimension 2 or above, for initial data in <span class="math-container">$L^p$</span>, the solution will be real analytic for a short time.</li>
</ol>
<p>These are just two of the more well-known results in this area. As you can see the analyticity of the solution, at least for short time, is automatic and does not depend on the initial data being real analytic (or band limited). That this is so is due to the smoothing effect of the viscosity. Ignoring the nonlinearity the smoothing effect is well-known for the heat equation. For short times the nonlinearity does not have enough time to kick in and cause problems.</p>
|
690,465 | <p>So we are learning trigonometry in school and I would like to ask for a little help with these. I would really appreciate if somebody can explain me how I can solve such equations :)</p>
<ul>
<li><p>$\sin 3x \cdot \cos 3x = \sin 2x$</p></li>
<li><p>$2( 1 + \sin^6 x + \cos^6 x ) - 3(\sin^4 x + \cos^4 x) - \cos x = 0$</p></li>
<li><p>$3 \sin^2 x - 4 \sin x \cdot \cos x + 5 \cos^2 x = 2$</p></li>
<li><p>$\sin^2 x - \sin^4 x + \cos^4 x = 1$</p></li>
</ul>
<p>In our student's book they're poorly explained in 2 pages; I tried to find a solution on the web, but still couldn't find similar examples. All we got from our teacher was a paper with a few formulas, and we basically have no idea when to use them. I would show what I've tried, but the problem is that I have no idea how to even start solving such equations. </p>
| Henry | 6,460 | <p>Are your formulae things like $\sin^2 x + \cos^2 x = 1$?</p>
<p>If so then you need to spot where you can apply them.</p>
<p>For example $$\sin^2 x - \sin^4 x + \cos^4 x = 1$$
$$\sin^2 x (1- \sin^2 x) + \cos^4 x = 1$$
$$\sin^2 x \cos^2 x + \cos^4 x = 1 $$
$$(\sin^2 x + \cos^2 x) \cos^2 x = 1 $$
$$ \cos^2 x = 1 $$
$$ \cos x = \pm 1 $$</p>
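<p>The chain of equalities above amounts to the identity <span class="math-container">$\sin^2 x - \sin^4 x + \cos^4 x = \cos^2 x$</span>, which can be spot-checked numerically (the sample points are arbitrary):</p>

```python
import math

# The algebra shows sin²x − sin⁴x + cos⁴x simplifies to cos²x,
# so the equation reduces to cos²x = 1, i.e. x = kπ.
for x in [0.0, 0.3, 1.0, 2.2, -0.7]:
    lhs = math.sin(x) ** 2 - math.sin(x) ** 4 + math.cos(x) ** 4
    assert abs(lhs - math.cos(x) ** 2) < 1e-12
print("identity holds; solutions are x with cos x = ±1, i.e. x = kπ")
```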
|
690,465 | <p>So we are learning trigonometry in school and I would like to ask for a little help with these. I would really appreciate if somebody can explain me how I can solve such equations :)</p>
<ul>
<li><p>$\sin 3x \cdot \cos 3x = \sin 2x$</p></li>
<li><p>$2( 1 + \sin^6 x + \cos^6 x ) - 3(\sin^4 x + \cos^4 x) - \cos x = 0$</p></li>
<li><p>$3 \sin^2 x - 4 \sin x \cdot \cos x + 5 \cos^2 x = 2$</p></li>
<li><p>$\sin^2 x - \sin^4 x + \cos^4 x = 1$</p></li>
</ul>
<p>In our student's book they're poorly explained in 2 pages; I tried to find a solution on the web, but still couldn't find similar examples. All we got from our teacher was a paper with a few formulas, and we basically have no idea when to use them. I would show what I've tried, but the problem is that I have no idea how to even start solving such equations. </p>
| Raven | 130,125 | <p>For the third one
$$3\sin^2x-4\sin x\cos x+5\cos^2x=2\\
\Rightarrow\sin^2x-4\sin x\cos x+4\cos^2x=2-2\sin^2x-\cos^2x\\
\Rightarrow(\sin x-2\cos x)^2=2(1-\sin^2x)-\cos^2x\\
\Rightarrow(\sin x-2\cos x)^2=2\cos^2x-\cos^2x\\
\Rightarrow(\sin x-2\cos x)^2=\cos^2x\\
\Rightarrow(\sin x-2\cos x)^2-\cos^2x=0\\
\Rightarrow(\sin x-2\cos x+\cos x)(\sin x-2\cos x-\cos x)=0\\
\Rightarrow(\sin x-\cos x)(\sin x-3\cos x)=0$$
Then
$$\sin x-\cos x=0\Rightarrow\tan x=1$$ and $$\sin x-3\cos x=0\Rightarrow\tan x=3$$</p>
<p>For the first one I don't know how far your book is covering. I have used the following two rules for that $$\sin 2x=2\sin x\cos x$$ and $$\sin 3x=3\sin x-4\sin^3x$$ So it goes like this
$$\sin 3x .\cos 3x=\sin2x\\
\Rightarrow 2\sin 3x .\cos 3x=2\sin2x\\
\Rightarrow\sin6x=2\sin2x\\
\Rightarrow\sin3(2x)=2\sin2x\\
\Rightarrow 3\sin2x-4\sin^32x=2\sin2x\\
\Rightarrow 3\sin2x-4\sin^32x-2\sin2x=0\\
\Rightarrow \sin2x-4\sin^32x=0\\
\Rightarrow \sin2x(1-4\sin^22x)=0\\
$$
Then $$\sin 2x=0$$ and $$\sin2x=\pm \frac{1}{2}$$
Hope it helps.</p>
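<p>Both solution sets above can be verified numerically: <span class="math-container">$\tan x\in\{1,3\}$</span> for the third equation, and <span class="math-container">$\sin 2x\in\{0,\pm\frac12\}$</span> for the first:</p>

```python
import math

# Third equation: tan x = 1 or tan x = 3.
for x in [math.atan(1), math.atan(3)]:
    v = 3 * math.sin(x) ** 2 - 4 * math.sin(x) * math.cos(x) + 5 * math.cos(x) ** 2
    assert abs(v - 2) < 1e-12

# First equation: sin 2x ∈ {0, ±1/2}; sample representatives of each case.
for x in [0.0, math.pi / 2, math.asin(0.5) / 2, -math.asin(0.5) / 2]:
    assert abs(math.sin(3 * x) * math.cos(3 * x) - math.sin(2 * x)) < 1e-12
print("both solution sets check out")
```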
|
4,485,550 | <p>I'm interested in playing with nonwell-founded variants of set theory and weaker/different axioms of induction/extensionality.</p>
<p>I have a hunch coalgebraic methods could better handle weirdness like modelling homotopy type theory.</p>
<p>I also have just been interested in the idea of coinduction as primitive as opposed to induction.</p>
<p>Weak limits like weak function spaces are also useful in Computer Science for higher order abstract syntax. So set-theory minus induction/extensionality could lead to cleaner encodings of stuff like the lambda calculus.</p>
<p>But I really don't know where to start with nonwell-founded set theory in the first place. Unfortunately nonwell-founded set theory seems a bit esoteric.</p>
| Greg Nisbet | 128,599 | <p>Here is one entrypoint to a particular family of non-well-founded set theories, New Foundations and its descendants.</p>
<p>All variants of New Foundations have a universal set <span class="math-container">$U$</span> and have <span class="math-container">$U \in U$</span> as a theorem, so none of them are well-founded.</p>
<p>Versions of New Foundations without urelements have the same extensionality axiom as ZFC. Adding urelements forces us to restrict extensionality to sets with at least one element, but this is not a large modification of the axiom extensionality. It is the same modification one would make to add atoms to ZFC.</p>
<p>Here's the Wikipedia page on <a href="https://en.wikipedia.org/wiki/New_Foundations" rel="nofollow noreferrer">New Foundations</a>. It starts with Russellian type theory, which does <em>not</em> take the key step of erasing types outside of comprehension contexts and so isn't in the New Foundations family per se.</p>
<p>Randall Holmes has a web page describing <a href="https://en.wikipedia.org/wiki/New_Foundations" rel="nofollow noreferrer">New Foundations</a>.</p>
<p>Metamath has a <a href="http://us.metamath.org/nfeuni/mmnf.html" rel="nofollow noreferrer">proof database for New Foundations</a>.</p>
|
2,202,382 | <p>When $A^TA = I$, I am told it is orthogonal. What does that mean?</p>
<p>$A = \begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{bmatrix}, \quad A^T = \begin{bmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}$</p>
| PM. | 416,252 | <p>Perhaps the term is <em>suggestive</em> of matrix $\mathbf{A}$ being orthogonal to another matrix $\mathbf{B}$ in some sense like $$\mathbf{A}.\mathbf{B}=0$$ as it would be if $A$ and $B$ were orthogonal vectors. However this is <em>not</em> what it means when considering matrices.</p>
<p>For matrices it refers to the set of columns being orthogonal (actually orthonormal) in the usual sense for vectors:</p>
<p>$$\mathbf{c_i}.\mathbf{c_j}=\delta_{ij}$$ The same is true for the set of rows. </p>
<p>Because the columns are orthogonal (and rows) the matrix itself is <em>called</em> orthogonal. These matrices have the property you stated.</p>
<p>(Try it with the columns or rows of your 2x2 matrix.)</p>
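<p>A small numerical illustration (not a proof) of the property for the rotation matrix in the question:</p>

```python
import math

# Verify AᵀA = I for the 2×2 rotation matrix at several angles,
# which is exactly the statement that its columns are orthonormal.
for th in [0.0, 0.5, 1.2, 3.0]:
    c, s = math.cos(th), math.sin(th)
    A = [[c, -s], [s, c]]
    At = [[A[j][i] for j in range(2)] for i in range(2)]  # transpose
    P = [[sum(At[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
               for i in range(2) for j in range(2))
print("A^T A = I for every angle tested")
```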
|
1,939,937 | <p>$(172195)(572167)=985242x6565$</p>
<p>Obviously the answer is 9 if you have a calculator, but how can you find x without redoing the multiplication?</p>
<p>The book says to use congruences, but I don't see how that is very helpful. </p>
| fleablood | 280,126 | <p>Old trick.</p>
<p>$X = \sum_{i=0}^n a_i 10^i = \sum_{i=0}^n a_i (9 + 1)^i \equiv \sum_{i=0}^n a_i \mod 9$.</p>
<p>This is why adding up the digits of a number will give you the remainder of the number when dividing by 9[*].</p>
<p>So $172195 \equiv 1+7+2+1+9+5 \equiv 7 \mod 9$</p>
<p>And $572167 \equiv 5+7+2+1+6+7 \equiv 28 \equiv 1 \mod 9$</p>
<p>So $172195*572167 \equiv 1*7 = 7 \mod 9$.</p>
<p>$985242x6565 \equiv 9 + 8 + 5+2+4+2+x+6+5+6+5 = 7+x \mod 9$</p>
<p>So $x + 7 \equiv 7 \mod 9$ so $x \equiv 0 \mod 9$.</p>
<p>So $x = 0$ or $9$.</p>
<p>Bugger.</p>
<p>Okay. Bigger guns.</p>
<p>$X = \sum_{i=0}^na_i 10^i = \sum_{i=0}^n a_i (11 - 1)^i \equiv \sum_{i=0}^n a_i(-1)^i \mod 11$.</p>
<p>This is why if you add every other digit and subtract every other digit you get the remainder when dividing by $11$[*].</p>
<p>$172195 \equiv 5 -9 + 1 - 2 + 7 - 1 \equiv 1 \mod 11$</p>
<p>$572167 \equiv -5+7-2+1-6+7 \equiv 2 \mod 11$</p>
<p>So $172195*572167 \equiv 2 \mod 11$.</p>
<p>And $985242x6565 \equiv 9-8+5-2+4-2+x-6+5-6+5 \equiv 4 + x \mod 11$</p>
<p>So $x+4 \equiv 2 \mod 11$</p>
<p>$x \equiv -2 \equiv 9 \mod 11$.</p>
<p>As $0 \le x \le 9$ we have $x = 9$.</p>
<p>=====</p>
<p>[*] well, you get the remainder when you repeat enough times. Most people know the rule as "a number is divisible by 9 if when you add the digits you get a number that is divisible by 9". But we can take it a step further and realize even if the number is not divisible by 9, the results will have the same remainder.</p>
<p>Likewise, most people know the rule "a number is divisible by 11 if the sum of the odd-position digits equals the sum of the even-position digits" (although it <em>should</em> include the possibility of the sums differing by a multiple of 11).</p>
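<p>The whole argument can be checked mechanically with exact integer arithmetic (a verification of the congruences above, not a new method):</p>

```python
# Mod 9 leaves x ∈ {0, 9}; mod 11 pins x = 9; the full product confirms it.
lhs = 172195 * 572167
assert lhs == 98524296565                  # the missing digit is indeed 9

digit_sum = sum(int(d) for d in str(lhs))
assert digit_sum % 9 == (7 * 1) % 9        # 172195 ≡ 7, 572167 ≡ 1 (mod 9)

alt = sum((-1) ** i * int(d) for i, d in enumerate(reversed(str(lhs))))
assert alt % 11 == (1 * 2) % 11            # 172195 ≡ 1, 572167 ≡ 2 (mod 11)
print("x = 9")
```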
|
4,804 | <p>Using the Unanswered tab, I was surprised to find a large number of questions that were already answered <em>in the answer box</em>, correctly, and thoroughly. But if the answer(s) is never upvoted, the question remains in the tab where it does not belong. To make things worse, the Community user occasionally bumps such already-answered questions to the front page, wasting the screen real estate. </p>
<p>By now I've read a number of such answers (to make sure they are indeed correct) and subsequently upvoted: this is by far the easiest way to reduce the number of Unanswered questions on the site (currently 8409). Of course, 30 upvotes per day only go so far, which is why I write this, hoping that more users will find a little time for this kind of clean-up. </p>
<p>Why so many question owners do not upvote correct answers to their questions, I have no idea. </p>
<p>As an aside (but in line with the topic), I'd like to recognize J.M. as the first MSE user to vote 10000 times.</p>
<p><img src="https://i.stack.imgur.com/LLUqk.png" alt="10K votes"></p>
| Amitesh Datta | 10,467 | <p>The reason that a user might not upvote good answers to his/her question is usually that he/she does not know how to, forgets to do so, or does not notice the answers. Of course, this most often happens when the user is new/unregistered.</p>
<p>However, you may also ask why other users do not upvote good answers. The main "problem" is that voting and reputation are highly opportunistic. You need to answer the right questions at the right time and in the right place in order to maximize your upvote yield. You can "hit gold" with many upvotes for a very good but not great answer if your timing is perfect. On the other hand, you can get few or no upvotes for an excellent answer simply because no-one notices it or those who notice it do not understand it. For example, if some users lived in different time zones to the ones they do, then their reputation would probably be (much) higher than it is currently. Of course, if you answer a bounty question, then even if you are not awarded the bounty, you gain a reasonable amount of reputation points simply because the question is highlighted and more people are likely to view your answer (and upvote it). </p>
|
4,804 | <p>Using the Unanswered tab, I was surprised to find a large number of questions that were already answered <em>in the answer box</em>, correctly, and thoroughly. But if the answer(s) is never upvoted, the question remains in the tab where it does not belong. To make things worse, the Community user occasionally bumps such already-answered questions to the front page, wasting the screen real estate. </p>
<p>By now I've read a number of such answers (to make sure they are indeed correct) and subsequently upvoted: this is by far the easiest way to reduce the number of Unanswered questions on the site (currently 8409). Of course, 30 upvotes per day only go so far, which is why I write this, hoping that more users will find a little time for this kind of clean-up. </p>
<p>Why so many question owners do not upvote correct answers to their questions, I have no idea. </p>
<p>As an aside (but in line with the topic), I'd like to recognize J.M. as the first MSE user to vote 10000 times.</p>
<p><img src="https://i.stack.imgur.com/LLUqk.png" alt="10K votes"></p>
| 75064 | 75,064 | <p>Users interested in reducing the backlog of Unanswered questions by upvoting helpful answers may find the following queries useful:</p>
<ul>
<li><p><a href="http://data.stackexchange.com/mathematics/query/114659/thank-you-comment-by-op-without-upvote-or-accept" rel="nofollow">Thank-you comment by OP on the sole answer, without upvote or accept</a>. This lists the answers on which OP left a thank-you comment, but neither upvoted nor accepted (despite the answer being the only one given). </p></li>
<li><p><a href="http://data.stackexchange.com/mathematics/query/114522/thank-you-comment-by-op-without-upvote" rel="nofollow">Thank-you comment by OP without upvote</a>. This is a more general query: it lists the answers on which OP left a thank-you comment, but did not upvote. (It may be one of several answers given, and it may have been accepted.)</p></li>
</ul>
<p>Warning #1: the data is not real-time. If you look at the very top rows, you may find that the answers were already upvoted by another user. This is less likely if you scroll down to a random position in the table. </p>
<p>Warning #2: the queries do not guarantee that the answer is correct. That's still up to the voter to decide. </p>
|
1,753,719 | <p>Definition of rapidly decreasing function</p>
<p>$$\sup_{x\in\mathbb{R}} |x|^k |f^{(l)}(x)| < \infty$$ for every $k,l\ge 0$.</p>
<p>Given the Gaussian function $f(x) = e^{-x^2}$, I know that its derivatives will always be in form of $P(x)e^{-x^2}$ where $P(x)$ is a polynomial of degree, say, $n$. Then $|x|^k |f^{(l)}(x)|$ will be $Q(x) e^{-x^2}$ where $Q(x)$ is of degree $n+k$. $e^{-x^2}$ is bounded apparently. But how could I "immediately" argue this whole thing is bounded?</p>
| Robert Israel | 8,508 | <p>An exponential always beats a polynomial in the end...</p>
<p>$|x| \le e^{|x|}$ so $|x|^n < e^{n |x|} < e^{x^2}$ if $|x| > n$, therefore $\left|x^n e^{-x^2}\right| < 1$ there. </p>
<p>Since the continuous function $x^n e^{-x^2}$ is also
bounded on the finite interval $[-n,n]$, we conclude that $x^n e^{-x^2}$ is bounded on $\mathbb R$. </p>
<p>Take a linear combination of
those and you have your $Q(x) e^{-x^2}$.</p>
|
97,946 | <p>I want to prove the following:</p>
<p>Let $G$ be a finite abelian $p$-group that is not cyclic.
Let $L \ne {1}$ be a subgroup of $G$ and $U$ be a maximal subgroup of L then there exists a maximal subgroup $M$ of $G$ such that $U \leq M$ and $L \nleq M$.</p>
<p>Proof.
If $L=G$ then we are done. Suppose $L \ne G$. Let $|G|=p^{n}$; then $|L|=p^{n-i}$ and $|U|=p^{n-i-1}$ for some $0< i < n$. There is $x_{1} \in G$ such that $x_{1} \notin L$. Thus $|U\langle x_{1}\rangle|=p^{n-i}$ and $U\langle x_{1}\rangle$ does not contain $L$. There is $x_{2} \in G$ such that $x_{2} \notin L$ and $x_{2} \notin U\langle x_{1}\rangle$. Thus $|U\langle x_{1}\rangle\langle x_{2}\rangle|=p^{n-i+1}$. Continuing like this, we get $|U\langle x_{1}\rangle \langle x_{2}\rangle\cdots \langle x_{i}\rangle|=p^{n-1}$, so it is a maximal subgroup of $G$. The problem is, I am not sure that this subgroup does not contain $L$.</p>
<p>Thanks in advance.</p>
| yakov | 92,209 | <p>As noted by Prof. Robinson, this is false. However, it is true iff $L\not\le M\Phi(G)$. Indeed, let $\bar G=G/M\Phi(G)$; then $\bar L$ is a direct factor of $\bar G$ of order $p$. Write $\bar G=\bar L\times\bar U$; then $U$ is a maximal subgroup of $G$ and $U\cap L=M$.</p>
|
1,443,680 | <p>In Quantum Mechanics one often deals with wavefunctions of particles. In that case, it is natural to consider as the space of states the space $L^2(\mathbb{R}^3)$. On the other hand, in the book I'm reading, there's a construction which is quite elegant and general; however, it is not rigorous. For those interested in seeing the book, it's "Quantum Mechanics" by Cohen-Tannoudji.</p>
<p>The book proceeds as follows: the first postulate of Quantum Mechanics states that for every quantum system there is one Hilbert space $\mathcal{H}$ whose elements describe the possible states of the system. The idea then is that $\mathcal{H}$ is not necessarily a space of functions.</p>
<p>Indeed, Cohen defines (or doesn't define) $\mathcal{H}$ as the space of kets $|\psi\rangle\in \mathcal{H}$, the kets being just vectors encoding the states of the system.</p>
<p>The second postulate states that for each physically observable quantity there is associated one hermitian operator $A$ such that the only possible values to be measured are the eigenvalues of $A$ and such that</p>
<ol>
<li><p>If $A$ has discrete spectrum, with eigenvectors $\{|\psi_n\rangle : n \in \mathbb{N}\}$, then the probability of measuring the eigenvalue $a_n$ on the state $|\psi\rangle$ is $|\langle \psi_n | \psi\rangle|^2$, assuming that $|\psi\rangle$ is normalized.</p></li>
<li><p>If $A$ has continuous spectrum, with (generalized) eigenvectors $\{|\psi_{\lambda}\rangle : \lambda \in \Lambda\}$, then the probability density on the state $|\psi\rangle$ for the possible eigenvalues is $\lambda \mapsto |\langle \psi_\lambda | \psi\rangle|^2$</p></li>
</ol>
<p>If, for example, the position operator $X$ for a particle in one dimension exists, and if its eigenvectors are $|x\rangle$ with eigenvalues $x$ for each $x\in \mathbb{R}$, then the position amplitude is $x \mapsto \langle x |\psi\rangle$, which is a function $\mathbb{R}\to \mathbb{C}$, and we recover the wavefunction (its squared modulus $|\langle x|\psi\rangle|^2$ is the position probability density).</p>
<p>This formulation, though, seems to be more general. In that case, wavefunction is just the information about one possible kind of measurement which we can obtain from the postulates. There is nothing special with it.</p>
<p>Now, although quite elegant and simple, this is not at all rigorous. For example: the position operator hasn't been defined! It is just "the operator associated to position with continuous spectrum", but this doesn't define the operator. In the book, it is defined on the basis $\{|x\rangle\}$, but this set is itself defined in terms of the operator, so the definition is circular.</p>
<p>Another problem is that usually we are dealing with unbounded operators which are not defined on the whole of $\mathcal{H}$. And an even greater problem is that $\mathcal{H}$ was never defined!</p>
<p>I've been trying to find out how to make this rigorous, but couldn't find anything useful. Many people simply say that the right way is to always consider $L^2(\mathbb{R}^3)$, so that all of this talk is nonsense. But I disagree; I find it quite natural to consider this generalized version.</p>
<p>The only thing I've found was the idea of rigged Hilbert spaces, also known as Gel'fand triples. I haven't found much material about it, and anyway, I didn't understand how it can be used to make this rigorous.</p>
<p>In that case, how does one make this idea of space of states, or space of kets, fully rigorous, overcoming the problems I found out, and possibly any others that may exist? Is it through the Gel'fand triple? If so, how is it done?</p>
| Physics Footnotes | 348,696 | <p>There are two mathematically rigorous (and quite general) Hilbert Space formalisms you might be looking for. Both can be seen as attempts to salvage the engine of Dirac's original bra-ket algorithm, while avoiding its mathematical embarrassments.</p>
<p><strong>The first</strong> - created by von Neumann - replaces Dirac's <em>kets</em> by <em>vectors</em> in an abstract Hilbert space $\mathscr{H}$, and replaces his <em>transformations</em> with <em>linear operators</em> on this space. To avoid the need of Dirac's (in)famous delta functions, von Neumann rejects the fundamental status of <em>eigenvectors</em> in Quantum Mechanics. Instead, the central player in von Neumann's game is <em>spectral decomposition</em> (specifically, of self-adjoint operators, although this can be generalized). This approach reduces to eigenvectors and eigenvalues when the operators happen to be finite dimensional (or, in the infinite dimensional case, <em>compact</em>), but handles the continuous case without batting an eyelid.</p>
<p>In the end, however, physicists were reluctant to abandon the highly intuitive eigenvector-based formalism of Dirac, and definitely did not want to muck around with all the subtleties of operator-domains inherent in von Neumann's approach. </p>
<p>And that is why the next fellow entered the fray...</p>
<p><strong>The second</strong> - created by Gelfand et al - attempts to preserve more of Dirac's legacy, while straying as little as possible from von Neumann's Hilbert Space. Specifically, they proceed by starting with a Hilbert Space $\mathscr{H}$ and then constructing two auxiliary spaces from it. First a dense inner-space $\Phi$, and then an outer-space $\Phi^*$ (being the dual of the inner-space). Taken together, this framework is referred to as a <em>Rigged Hilbert Space</em> or <em>Gelfand Triple</em>:
$$\Phi\subset\mathscr{H}\subset\Phi^*$$</p>
<p>Very briefly, the reasons they do this are:</p>
<ul>
<li>The inner-space $\Phi$ supports operator algebra without all the hassle of domains inherent with von Neumann's approach.</li>
<li>The continuous spectrum can be handled in a rigorous eigenvector-eigenvalue fashion because the Dirackian delta 'functions', which don't live in the Hilbert Space, can be found in the dual $\Phi^*$ (technically, as <em>distributions</em> rather than functions).</li>
</ul>
<p><strong>Where does this leave the poor mathematical physicist with a desire for rigor?</strong></p>
<p>Well, the truth is, nobody really uses Gelfand triples. Just to construct one triple you need to construct an infinite sequence of topologies! Not only that, but the whole construction depends on the specific form of the Hamiltonian, so it is nowhere near as general as von Neumann's approach. (Indeed, finding a Gelfand triple for a specific quantum system would easily qualify you for a PhD!)</p>
<p>To me, the Rigged Hilbert Space approach is reminiscent of the non-standard analysis discovered in the 1960s. Nobody actually uses it, but it makes them feel better about cancelling the $du$ in:
$$\frac{dy}{du}\frac{du}{dx}$$
But let's be honest, you were going to do that anyway right? </p>
<p><strong>Bottom line?</strong> </p>
<p>Learn von Neumann's rigorous Hilbert Space quantum mechanics.</p>
<p>There are plenty of excellent books out there. Even von Neumann's seminal <em>Mathematical Foundations of Quantum Mechanics</em> (1932) is a fantastic resource (although the old-fashioned pre-LaTeX notation is a bit intimidating). There were lots of excellent modern books published in the 1970s and 80s on the subject (Prugovecki's <em>Quantum Mechanics in Hilbert Space</em> comes to mind). For a superb up-to-the-minute treatment of Quantum Mechanics in the thorough style von Neumann would salute, see Valter Moretti's <em>Spectral Theory and Quantum Mechanics</em> (which I happen to be reading at the moment!).</p>
<p><strong>Note on the issue of generality...</strong></p>
<p>Much of your question concerns the issue of generality. Von Neumann's approach has the virtue of not committing to any specific $L^2$ space, and as a result produces theorems that hold for <em>all</em> quantum mechanical systems. But that is not always advantageous. Sometimes you want to find results about specific (types of) quantum mechanical systems. In this case, you should choose a specific Hilbert Space (often, but not always, an $L^2$ space). But even then, you know you can rely on all the tools of the trade you learned/developed in a general Hilbert Space.</p>
|
19,962 | <p><a href="http://en.wikipedia.org/wiki/Covariance_matrix" rel="nofollow">http://en.wikipedia.org/wiki/Covariance_matrix</a></p>
<pre><code>Cov(Xi,Xj) = E((Xi-Mi)(Xj-Mj))
</code></pre>
<p>Is the above equivalent to:</p>
<pre><code>(Xi-Mi)(Xj-Mj)
</code></pre>
<p>I don't understand why the expectation of (Xi-Mi)(Xj-Mj) would be different from just (Xi-Mi)(Xj-Mj).</p>
<p>Addendum:</p>
<p>Let's say I have two sets of data:</p>
<p>Set 1: 1,2,3 avg: 2</p>
<p>Set 2: 4,5 avg: 4.5</p>
<p>Is the following a covariance matrix? </p>
<pre><code>(1-2)*(4-4.5) , (2-2)*(4-4.5) , (3-2)*(4-4.5)
(1-2)*(5-4.5) , (2-2)*(5-4.5) , (3-2)*(5-4.5)
</code></pre>
<p>I'm reading online and it seems like a covariance matrix composed of two data sets should be a 2x2 matrix, but in this case, I have a 2x3 matrix. How do I go from what I have to the correct 2x2 matrix?</p>
| leonbloy | 312 | <p>It's like asking if $X$ is equivalent to $E(X)$. </p>
<p>$X$ is (assumed to be) a random variable; it can take several values according to some probability law. In contrast, $E(X)$ is the expected value of $X$ - so it's not a random variable which takes different values in different tries; it's a constant number. So they are conceptually different entities (they coincide only in the degenerate-trivial case in which the random variable takes only one value).</p>
<p>The same applies to the covariance matrix. In your formula $X_i$ is a random variable, so the product is also a random var (because it's a function of random variables!). The expectation operator "takes out" the randomness, and so the covariance matrix is a constant matrix.</p>
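<p>To see the distinction numerically, here is a small illustration (my own example, not from the original answer): each sampled product $(X_i-M_i)(X_j-M_j)$ is itself random, but averaging many of them estimates the constant number $Cov(X_i,X_j)$.</p>

```python
import numpy as np

# Two correlated variables with Cov(X, Y) = 0.5 by construction.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)

# Each centered product is a different random value...
products = (x - x.mean()) * (y - y.mean())
print(products[:3])

# ...but their average (the empirical expectation) is close to the
# single constant Cov(X, Y) = 0.5.
print(products.mean())
```

<p>The first print shows three different random values; the second is close to $0.5$, illustrating how the expectation "takes out" the randomness.</p>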
|
2,772,541 | <p>I have a given function:$(x_i, y_i) = p(x_a, y_a) + (1-p)(x_b, y_b)$</p>
<p>From this function I need to get $p$, which should be a value between 0 and 1. </p>
<p>In an attempt to solve this I found: </p>
<p>$(x_i, y_i) = p(x_a, y_a) + (1-p)(x_b, y_b)$</p>
<p>$(x_i, y_i) - (x_b, y_b) = p(x_a, y_a) - p(x_b, y_b)$</p>
<p>$(x_i, y_i) - (x_b, y_b) = p((x_a, y_a) - (x_b, y_b))$</p>
<p>$\frac{(x_i, y_i) - (x_b, y_b)}{(x_a, y_a) - (x_b, y_b)} = p$</p>
<p>$\frac{(x_i - x_b, y_i - y_b)}{(x_a - x_b, y_a - y_b)} = p$</p>
<p>Here I have no idea how to continue. My logical follow-up would be to divide the x and y components separately, like this:
$(\frac{x_i - x_b}{x_a - x_b},\frac{y_i - y_b}{y_a - y_b}) = p$</p>
<p>This makes no sense though, since I would get a coordinate as a result instead of a number. </p>
| Hagen von Eitzen | 39,174 | <p>Let $u,v\in(a,b)$ be zeroes of $f$ and assume that $g$ has no zeroes in $(u,v)$.
From $f(u)=f(v)=0$ and the given inequality, we obtain that $g(u)$, $g(v)$ are non-zero, hence also positive.
This allows us to consider $h(x)=\frac{f(x)}{g(x)}$ on $[u,v]$.
We have $h(u)=h(v)=0$. By Rolle $h'(x)=0$ for some $x\in(u,v)$. But $h'(x)=\frac{f'(x)g(x)-f(x)g'(x)}{g(x)^2}\ne0$ for all $x\in (u,v)$.</p>
|
2,772,541 | <p>I have a given function:$(x_i, y_i) = p(x_a, y_a) + (1-p)(x_b, y_b)$</p>
<p>From this function I need to get $p$, which should be a value between 0 and 1. </p>
<p>In an attempt to solve this I found: </p>
<p>$(x_i, y_i) = p(x_a, y_a) + (1-p)(x_b, y_b)$</p>
<p>$(x_i, y_i) - (x_b, y_b) = p(x_a, y_a) - p(x_b, y_b)$</p>
<p>$(x_i, y_i) - (x_b, y_b) = p((x_a, y_a) - (x_b, y_b))$</p>
<p>$\frac{(x_i, y_i) - (x_b, y_b)}{(x_a, y_a) - (x_b, y_b)} = p$</p>
<p>$\frac{(x_i - x_b, y_i - y_b)}{(x_a - x_b, y_a - y_b)} = p$</p>
<p>Here I have no idea how to continue. My logical follow-up would be to divide the x and y components separately, like this:
$(\frac{x_i - x_b}{x_a - x_b},\frac{y_i - y_b}{y_a - y_b}) = p$</p>
<p>This makes no sense though, since I would get a coordinate as a result instead of a number. </p>
| Chappers | 221,811 | <p>Let $W = f' g - g' f$. Then $W \neq 0$ by the condition, and if $x_1,x_2$ are successive zeros of $f$, then
$$ W(x_1) = f'(x_1) g(x_1), \qquad W(x_2) = f'(x_2) g(x_2), $$
and these both have the same sign. But $f'(x_1)f'(x_2)<0$ because otherwise the mean value theorem and the intermediate value theorem would imply that $f$ has another zero between $x_1$ and $x_2$. Hence we also have $g(x_1)g(x_2)<0$, so the intermediate value theorem implies that $g$ has a zero between $x_1$ and $x_2$.</p>
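<p>A concrete sanity check of this argument (the functions here are my own choice, not part of the question): take $f=\sin$ and $g=\cos$, for which $W=f'g-g'f=\cos^2+\sin^2=1$ never vanishes, and $g$ indeed has a zero between the successive zeros $0$ and $\pi$ of $f$.</p>

```python
import math

# W = f'g - g'f with f = sin, g = cos: W = cos^2 + sin^2 = 1 everywhere.
def W(x):
    return math.cos(x) * math.cos(x) - (-math.sin(x)) * math.sin(x)

x1, x2 = 0.0, math.pi      # successive zeros of f = sin
print(W(x1), W(x2))        # both 1.0: same sign, as used in the proof

# g(x1) g(x2) < 0, so by the intermediate value theorem g has a zero
# in (x1, x2) -- here it is pi/2.
print(math.cos(x1) * math.cos(x2))
```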
|
2,156,331 | <p>Consider the discrete topology $\tau$ on $X:= \{ a,b,c, d,e \}$. Find subbasis for $\tau$ which does not contain any singleton sets.</p>
<p>The definition of subbasis is as follows: </p>
<blockquote>
<p><strong>Definition:</strong> A <em>subbasis</em> $S$ for a topology on $X$ is a collection of subsets of $X$ whose union is $X$.</p>
</blockquote>
<p>So let $S$ be equal to the collection of $\{a,b\}$, $\{c,d\}$ and $\{d,e\}$. </p>
<p>Clearly union of these three elements is $X$. </p>
<p>So should $S$, as defined, be taken as a subbasis? Please check the answer I posted in a comment.</p>
| MPW | 113,214 | <p><strong>Hint:</strong> You can write $\{a\}$ as $\{a,b\}\cap\{a,c\}$. Do the same with each of the elements of $X$.</p>
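<p>A quick brute-force check of the hint (the particular two-element sets below are my own choice): finite intersections of suitable two-element sets recover every singleton, so such a family is a subbasis for the discrete topology.</p>

```python
from itertools import combinations

X = {'a', 'b', 'c', 'd', 'e'}
S = [{'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'d', 'e'}, {'d', 'a'}, {'e', 'a'}]

# Collect every singleton that arises as an intersection of two members.
singletons = set()
for U, V in combinations(S, 2):
    I = U & V
    if len(I) == 1:
        singletons.add(next(iter(I)))

print(sorted(singletons))   # ['a', 'b', 'c', 'd', 'e'] -- all of X
```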
|
1,256,460 | <p>I want to solve the following problem: </p>
<p>$$u_{xx}(x,y)+u_{yy}(x,y)=0, 0<x<\pi, y>0 \\ u(0,y)=u(\pi, y)=0, y>0 \\ u(x,0)=\sin x +\sin^3 x, 0<x<\pi$$ </p>
<p>$u$ bounded </p>
<p>I have done the following: </p>
<p>$$u(x,y)=X(x)Y(y)$$ </p>
<p>We get the following two problems: </p>
<p>$$X''(x)+\lambda X(x)=0 \ \ \ \ \ (1) \ X(0)=X(\pi)=0$$ </p>
<p>$$Y''(y)-\lambda Y(y)=0 \ \ \ \ \ (2)$$ </p>
<p>To solve the problem $(1)$ we do the following: </p>
<p>The characteristic equation is $\mu^2 +\lambda =0$. </p>
<ul>
<li><p>$\lambda <0$: $\mu=\pm \sqrt{-\lambda}$</p>
<p>$X(x)=c_1e^{\sqrt{-\lambda}x}+c_2e^{-\sqrt{-\lambda}x}$ </p>
<p>$X(0)=0 \Rightarrow c_1+c_2=0 \Rightarrow c_1=-c_2$ </p>
<p>$X(\pi)=0 \Rightarrow c_1e^{\sqrt{-\lambda}\pi}+c_2 e^{-\sqrt{-\lambda}\pi}=0 \Rightarrow c_2(-e^{\sqrt{-\lambda}\pi}+e^{-\sqrt{-\lambda}\pi})=0 \Rightarrow c_1=c_2=0$ </p></li>
<li><p>$\lambda=0$: </p>
<p>$X(x)=c_1 x+c_2$ </p>
<p>$X(0)=0 \Rightarrow c_2=0 \Rightarrow X(x)=c_1x$ </p>
<p>$X(\pi)=0 \Rightarrow c_1 \pi=0 \Rightarrow c_1=0$ </p></li>
<li><p>$\lambda >0$ : </p>
<p>$X(x)=c_1 \cos (\sqrt{\lambda} x)+c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(0)=0 \Rightarrow c_1=0 \Rightarrow X(x)=c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(\pi)=0 \Rightarrow \sin (\sqrt{\lambda}\pi)=0 \Rightarrow \sqrt{\lambda}\pi=k\pi \Rightarrow \lambda=k^2$ </p></li>
</ul>
<p>For the problem $(2)$ we have the following: </p>
<p>$Y(y)=c_1 e^{ky}+c_2 e^{-ky}$ </p>
<p>The general solution is the following: </p>
<p>$$u(x,y)=\sum_{k=1}^{\infty}a_k( e^{ky}+ e^{-ky}) \sin (kx) $$ </p>
<p>$$u(x,0)=\sin x+\sin^3 x=\sin x+\frac{3}{4}\sin x-\frac{1}{4}\sin (3x)=\frac{7}{4}\sin x-\frac{1}{4}\sin (3x) \\ \Rightarrow \frac{7}{4}\sin x-\frac{1}{4}\sin (3x)=\sum_{k=1}^{\infty}2a_k\sin (kx) \\ \Rightarrow 2a_1=\frac{7}{4} \Rightarrow a_1=\frac{7}{8}, \quad 2a_3=-\frac{1}{4} \Rightarrow a_3=-\frac{1}{8}, \quad a_k=0 \text{ for } k=2,4,5,6,7, 8, \dots $$ </p>
<p>Is this correct?? </p>
| Valentin | 223,814 | <p>Basically, just use the definition of the average:</p>
<p>$T_{avg}=\frac{1}{30}\int_0^{30} { T(t) dt }$ </p>
|
908,196 | <blockquote>
<p>Solve $x^2-1=2$</p>
</blockquote>
<p>I have no idea how to do this can somebody please help me? I have tried working it out and I could never get the answer.</p>
| Ahaan S. Rungta | 85,039 | <p>$ x^2 - 1 = 2 \implies x^2 = 3 \implies x = \pm \sqrt{3} $</p>
<p>We have two solutions because both solutions result in the same square. We simply added $1$ to both sides and took the square root. Both steps are correct, because we are making the same transformation to both sides of the equation. So if they are equal before the transformation, they have to be equal after the transformation. </p>
|
908,196 | <blockquote>
<p>Solve $x^2-1=2$</p>
</blockquote>
<p>I have no idea how to do this can somebody please help me? I have tried working it out and I could never get the answer.</p>
| John Joy | 140,156 | <p>$$\begin{array}{llll}
x^2-1&=&2 &(\text{given})\\
(x^2-1)+1&=&2+1 &(\text{additive axiom of equality})\\
x^2+(-1+1)&=&3 &(\text{associative field axiom})\\
x^2+0&=&3 &(\text{additive inverse field axiom})\\
x^2&=&3 &(\text{additive identity})
\end{array}$$</p>
<p>now I'm stuck. :(</p>
|
745,436 | <p>I'm reading this pdf: <a href="http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf" rel="nofollow">http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf</a>. I understand some of the expressions used in it, but I don't understand the part $(m,n) = 1$.</p>
<p>Is this a Cartesian coordinate or some sort of operation?</p>
| Bill Dubuque | 242 | <p>The notation $\,(a,b) := \gcd(a,b)\,$ is widely used in number theory. Similarly, but less frequently, authors use $\,\ \ [a,b]\, := {\rm lcm}(a,b).\,$ Hence $\,(a,b) = 1\,$ means $\,a,b\,$ are <em>coprime:</em> $\,c\mid a,b\,\Rightarrow\,c\mid 1.$</p>
<p>Here, as often, one can uniquely infer the meaning from its use. The first use of the notation is in Theorem $1$ where it is claimed that $\,(a,b)=1\iff $ every positive divisor of $\,ab\,$ can be written <em>uniquely</em> in the form $\,cd\,$ where $\,c\mid a,\ d\mid b,\,\ c,d\in\Bbb N.\,$ This implies that $\,a,b\,$ are coprime since, if not, they have a common divisor $\,n> 1$ which has more than one such rep: $\,c,d = n,1\,$ and $\,1,n.$</p>
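<p>A small computational illustration of the notation (the example values are mine): $(8,15)=1$, and coprimality is exactly what makes every divisor of $ab$ split uniquely as a product of a divisor of $a$ and a divisor of $b$, as in Theorem $1$.</p>

```python
from math import gcd

a, b = 8, 15
print(gcd(a, b))   # 1, i.e. (8, 15) = 1: the numbers are coprime

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# When (a, b) = 1, the products (divisor of a) * (divisor of b) are
# pairwise distinct and give exactly the divisors of a * b.
prods = sorted(da * db for da in divisors(a) for db in divisors(b))
print(prods == divisors(a * b))   # True
```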
<p>One of the reasons that the gcd tuple notation is so widely used is that it helps to emphasize analogies between gcds and <em>ideals</em>, which use the same tuple notation. They share many of the same arithmetical laws, e.g. associative, distributive, and $\,(a,b) = (a)\,$ if $\,a\mid b,\,$ so by using a common notation one can write proofs that work for both gcds and ideals. In PIDs like $\,\Bbb Z\,$ they are essentially arithmetically equivalent, since for ideals $\ (a,b) = (c) \iff c = \gcd(a,b),\,$ so one can read $\,(a,b)\,$ either as a gcd or an ideal. The analogies between ideals and gcds are clarified when one studies <em>divisor theory</em> (the modern version of Kronecker's alternative to Dedekind's ideal-theoretic approach to factorization in rings of algebraic integers). An introduction to divisor theory can be found in Borevich and I. R. Shafarevich, <em>Number theory</em>. See also </p>
<p>Friedemann Lucius. <a href="http://www.digizeitschriften.de/dms/img/?PPN=GDZPPN002239620" rel="noreferrer">Rings with a theory of greatest common divisors.</a><br>
manuscripta math. 95, 117-36 (1998). </p>
<p>Olaf Neumann. <a href="http://link.springer.com/article/10.1007%2Fs591-002-8140-8#" rel="noreferrer">Was sollen und was sind Divisoren?</a><br>
(What are divisors and what are they good for?) Math. Semesterber, 48, 2, 139-192 (2001).</p>
<p>Because there is little written in English on divisor theory, it would be very valuable to translate Neumann's very nice German exposition into English. If anyone is interested in doing so please contact me.</p>
|
886,003 | <p>I have two questions:</p>
<p><strong>A)</strong> Suppose that we have $$Z=c\sum_i (X_i-a)(Y_i-b)$$ where the $X_i$s and $Y_i$s are independent exponential random variables with means equal to $\mu_{X}$ and $\mu_{Y}$ (for $1\le i\le n$). That is, the $X_i$s are i.i.d. random variables and so are the $Y_i$s. Besides, $a,b$ and $c$ are real numbers. I want to find the distribution of $Z$ for large enough $n$. </p>
<p>I used the central limit theorem (CLT) and found the distribution of $Z$. I calculated the mean and variance as follows:
$$E(Z)=\sum_i c(E(X_i)-a)(E(Y_i)-b)$$
Using the delta method, I estimated the variance as follows:
$$Var(Z)=c^2\sum_i (E(X_i)-a)^2 Var(Y_i)+Var(X_i)(E(Y_i)-b)^2$$
To check if it is correct, I used MATLAB. I considered $n$ to be $200$. I varied $\mu_{X}$ and $\mu_{Y}$ from $1$ to $4$ $({1,2,3,4})$. To simplify my calculation I considered $\mu_{X}=\mu_{Y}$. For each random variable, I created $1,000,000$ samples and calculated the PDF of $Z$. But when I compare this PDF with the one I found using CLT, they are different! I cannot understand where I made a mistake! </p>
<p><strong>B)</strong> Another question is that if we have $$Z=c\sum_i (a_i+X_i-a)(b_i+Y_i-b)$$ where $a_i$ and $b_i$ are real numbers. Can I still use CLT to find the PDF of Z?
$$E(Z)=\sum_i c(E(X_i)+a_i-a)(E(Y_i)+b_i-b)$$
Using the delta method, I estimated the variance as follows:
$$Var(Z)=c^2\sum_i (E(X_i)+a_i-a)^2 Var(Y_i)+Var(X_i)(E(Y_i)+b_i-b)^2$$ </p>
<p>(again I tested the correctness using MATLAB and faced the same problem!)</p>
<p>I would appreciate if you could help me.</p>
| wolfies | 74,360 | <p>In answer to your first question ... </p>
<p>Given <span class="math-container">$X \sim Exponential(\lambda_1)$</span> with <span class="math-container">$E[X] =\lambda_1 $</span>, and <span class="math-container">$Y \sim Exponential(\lambda_2)$</span> with <span class="math-container">$E[Y] =\lambda_2 $</span>, where <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent. Let: </p>
<p><span class="math-container">$$W_i =c (X_i-a) (Y_i-b) \quad \text{and} \quad Z_n = \sum_{i=1}^n W_i$$</span> </p>
<p>Then, by the Lindeberg-Levy version of the Central Limit Theorem:</p>
<p><span class="math-container">$$Z_n\overset{a} {\sim }N\big( n E[W], n Var(W)\big)$$</span></p>
<p>We immediately have: <span class="math-container">$$E[W] = c \left(\lambda _1-a\right) \left(\lambda _2-b\right)$$</span></p>
<p><strong>Variance of <span class="math-container">$W$</span></strong></p>
<p>The OP attempts to approximate the variance - this is not necessary and causes errors. </p>
<p>By independence, the joint pdf of <span class="math-container">$(X,Y)$</span> is <span class="math-container">$f(x,y)$</span>:</p>
<p><a href="https://i.stack.imgur.com/9TJHd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9TJHd.png" alt=""></a></p>
<p>Then, <span class="math-container">$Var[W]$</span> is:</p>
<p><a href="https://i.stack.imgur.com/a8k4b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a8k4b.png" alt=""></a></p>
<p>where I am using the <code>Var</code> function from the <em>mathStatica</em> package for <em>Mathematica</em> to do the nitty-gritties. All done. </p>
<hr>
<p><strong>Central Limit Theorem approximation</strong></p>
<p>Here are <span class="math-container">$100000$</span> pseudo-random drawings of <span class="math-container">$Z$</span> generated in <em>Mathematica</em>, given <span class="math-container">$n = 200, \lambda_1= 3, \lambda_2 =2,a=2.2,b=4$</span> ...</p>
<pre><code>zdata = Table[
xdata = RandomVariate[ExponentialDistribution[1/3], {200}];
ydata = RandomVariate[ExponentialDistribution[1/2], {200}];
Total @@ {(xdata - 2.2) (ydata - 4)}, {i, 1, 100000}];
</code></pre>
<p>The CLT Normal approximation <span class="math-container">$N\big(\mu, \sigma^2\big)$</span> has parameters <span class="math-container">$\mu = n E[W]$</span> and <span class="math-container">$\sigma = \sqrt{n Var(W)}$</span>:</p>
<p><a href="https://i.stack.imgur.com/apdfh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/apdfh.png" alt=""></a></p>
<p>Here, the squiggly BLUE curve is the empirical pdf (from the Monte Carlo data), and the dashed red curve is the Central Limit Theorem Normal approximation. It works very nicely WHEN THE CORRECT variance derivation is used, even with a sample of size <span class="math-container">$n = 200$</span>.</p>
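<p>For readers without <em>Mathematica</em>, the same experiment can be re-run in Python/NumPy (a sketch of mine; it takes $c=1$ and computes $Var(W)$ by hand from the exponential moments, rather than with the <code>Var</code> function used above):</p>

```python
import numpy as np

lam1, lam2, a, b, c, n = 3.0, 2.0, 2.2, 4.0, 1.0, 200

# Exact moments via independence of X and Y:
#   E[W]   = c (lam1 - a)(lam2 - b)
#   E[W^2] = c^2 E[(X-a)^2] E[(Y-b)^2], with E[(X-a)^2] = lam1^2 + (lam1-a)^2
EW = c * (lam1 - a) * (lam2 - b)
EW2 = c**2 * (lam1**2 + (lam1 - a)**2) * (lam2**2 + (lam2 - b)**2)
VarW = EW2 - EW**2
print(n * EW, n * VarW)    # CLT parameters: mean and variance of Z_n

# Monte Carlo check (20000 replications of Z_n):
rng = np.random.default_rng(1)
reps = 20_000
x = rng.exponential(lam1, (reps, n))   # NumPy's scale parameter = mean
y = rng.exponential(lam2, (reps, n))
z = (c * (x - a) * (y - b)).sum(axis=1)
print(z.mean(), z.var())   # close to n*E[W] = -320 and n*Var(W) = 14912
```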
<hr>
<p><strong>Central Limit Theorem fit using OP's approximated variance</strong></p>
<p>By contrast, if we use the OP's <em>approximation of Var(Z)</em> to calculate <span class="math-container">$\sigma$</span>, then the CLT 'fit' is not good at all:</p>
<p><a href="https://i.stack.imgur.com/vJcya.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vJcya.png" alt=""></a></p>
<p><strong>Notes</strong></p>
<ol>
<li>As disclosure, I should add that I am one of the authors of the software used above.</li>
</ol>
|
499,652 | <p>I have seen this a lot in physics textbooks, but today I am curious about it and want to know if anyone can show me a formal mathematical proof of this statement. Thanks!</p>
| Daniel Robert-Nicoud | 60,713 | <p>As already expressed in another answer, this can be formulated in a formal way by taking the Taylor expansion of $\tan\alpha$ around $0$ to get:
$$\tan\alpha = \alpha + \frac{1}{3}\alpha^3 + \ldots = \alpha + O(\alpha^3)$$
where $O(\alpha^3)$ denotes some function that goes to zero approximately at the same rate as $\alpha^3$ does. Formally, a function $f(x)$ is $O(g(x))$ if $\limsup_{x\rightarrow 0}\left|\frac{f(x)}{g(x)}\right|<\infty$.</p>
<p>In general, looking at the Taylor expansion of a function around some point is a very good way to get an approximation of the function near that point. For example we have:
$$\sin(x) = x+O(x^3)$$
$$\cos(x)=1+O(x^2)$$
$$e^x=1+x+\frac{1}{2}x^2+O(x^3)$$
around zero. This is very useful in various areas of mathematics and physics.</p>
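<p>These approximations are easy to check numerically; as the remainder terms predict, the error of $\tan\alpha\approx\alpha$ shrinks like $\alpha^3$ (with coefficient $\tfrac13$), the error of $\sin$ like $x^3$, and that of $\cos$ like $x^2$:</p>

```python
import math

for a in (0.1, 0.01):
    err = math.tan(a) - a
    print(a, err, err / a**3)   # err / a^3 approaches the coefficient 1/3

print(math.sin(0.01) - 0.01)    # about -a^3/6: an O(x^3) error
print(math.cos(0.01) - 1.0)     # about -a^2/2: an O(x^2) error
```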
|
2,761,151 | <p>In the formula below, where does the $\frac{4}{3}$ come from and what happened to the $3$? How did they get the far right answer? Taken from Stewart Early Transcendentals Calculus textbook.</p>
<p>$$\sum^\infty_{n=1} 2^{2n}3^{1-n}=\sum^\infty_{n=1}(2^2)^{n}3^{-(n-1)}=\sum^\infty_{n=1}\frac{4^n}{3^{n-1}}=\sum_{n=1}^\infty4\left(\frac{4}{3}\right)^{n-1}$$</p>
| Thomas | 26,188 | <p>You have
$$
2^{2n}3^{1-n} = (2^2)^n3^{-(n-1)} = 4^n3^{-(n-1)} = \frac{4^n}{3^{n-1}} = \frac{4\cdot4^{n-1}}{3^{n-1}} = 4\left(\frac{4}{3}\right)^{n-1}
$$
Here we have used the formulas</p>
<ol>
<li>$$
a^ma^k = a^{m+k}
$$</li>
<li>$$
a^{-m} = \frac{1}{a^m}
$$</li>
<li>$$
(a^m)^k = a^{mk}
$$</li>
<li>$$
\frac{a^m}{b^m} = \left(\frac{a}{b}\right)^{m}
$$
So</li>
</ol>
<blockquote>
<p>In the formula below, where does the $\frac{4}{3}$ come from</p>
</blockquote>
<p>It came from moving the exponent outside using formula 4 above.</p>
<blockquote>
<p>and what happened to the 3? </p>
</blockquote>
<p>Nothing happened to the $3$. </p>
<blockquote>
<p>How did they get the far right answer?</p>
</blockquote>
<p>The last step is using the formula 4 from above.</p>
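<p>A quick numeric spot-check of the whole rewriting, confirming that $2^{2n}3^{1-n}$ and $4\left(\frac{4}{3}\right)^{n-1}$ agree term by term:</p>

```python
# Verify 2^(2n) * 3^(1-n) == 4 * (4/3)^(n-1) for the first few n.
for n in range(1, 8):
    lhs = 2**(2 * n) * 3**(1 - n)
    rhs = 4 * (4 / 3)**(n - 1)
    print(n, lhs, rhs)
    assert abs(lhs - rhs) < 1e-9 * rhs
```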
|
2,836,552 | <blockquote>
<p>Let $R$ be a commutative ring with identity that contains a field $K$ as a subring. If $R$ is a finite-dimensional vector space over the field $K$, prove that every prime ideal in $R$ is maximal.</p>
</blockquote>
<p>My idea was to prove that the integral domain $R/p$ (where $p$ is a prime ideal) is a field, since for any ideal $P$, $R/P$ is a field iff $P$ is maximal in $R$. But how can I use the fact that $R$ is finite-dimensional over $K$? I don't understand.</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Note that $R/\mathfrak p$ has finite dimension over $K$. You have to prove that any non-zero element $x$ in $R/\mathfrak p$ is invertible. Consider the map
$$R/\mathfrak p\xrightarrow {\enspace{}\times x\enspace\:}R/\mathfrak p.$$
This is an endomorphism of the $K$-vector space $R/\mathfrak p$, and it is injective since $R/\mathfrak p$ is an integral domain.</p>
<p><em>Note</em>: The following more general result is often used:</p>
<blockquote>
<p>Let $A\subset B$ two integral domains, such that $B$ is a finitely generated $A$-module. Then $A$ is a field if and only if $B$ is.</p>
</blockquote>
|
2,836,552 | <blockquote>
<p>Let $R$ be a commutative ring with identity that contains a field $K$ as a subring. If $R$ is a finite-dimensional vector space over the field $K$, prove that every prime ideal in $R$ is maximal.</p>
</blockquote>
<p>My idea was to prove that the integral domain $R/p$ (where $p$ is a prime ideal) is a field, since for any ideal $P$, $R/P$ is a field iff $P$ is maximal in $R$. But how can I use the fact that $R$ is finite-dimensional over $K$? I don't understand.</p>
| Berci | 41,488 | <p>$A:=R/p$ is also finite dimensional and has no zero divisors. <br>
Take any nonzero $a\in A$ and consider its powers $1,a, a^2,\dots$; these are linearly dependent since $A$ is finite dimensional. Take the least $n\in\Bbb N$ giving $\lambda_na^n+\dots+\lambda_0=0$. </p>
<p>Since we can cancel out $a$, the constant term $\lambda_0$ is nonzero. <br>
Then dividing by $-\lambda_0$ and pulling out $a$, we arrive at $a\cdot f(a)=1$ for a specific polynomial $f$, yielding an inverse for $a$. </p>
|
1,206,460 | <p>This is the question:
Prove that the set of all the words in the English language is countable (the set's cardinality is $\aleph_0$).
A word is defined as a finite sequence of letters in the English language.</p>
<p>I'm not really sure how to start this. I know that a finite union of countable sets is countable, and I think this is the way to start.</p>
<p>Thanks in advance !</p>
| IanF1 | 187,796 | <p>The easiest way to show a set is countable is to provide a way of counting it, i.e. a rule to determine the position of any member within the set.</p>
<p>In this case we can start with all 1-letter "words", from a to z - there are 26 of these. Then we can continue with the two-letter words aa, ab, ... az, ba, bb .... zy, zz. There are 26^2 of these. And so on. </p>
<p>Any finite-length word will be assigned a unique position in this sequence; therefore the set of words is countable.</p>
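<p>The counting rule above can be written down explicitly (an illustrative implementation of mine): within each length words are ordered alphabetically, which is exactly bijective base-26 numbering.</p>

```python
# Position of a word in the sequence a, b, ..., z, aa, ab, ..., zz, aaa, ...
# (bijective base-26: letters act as the "digits" 1..26).
def position(word):
    pos = 0
    for ch in word:
        pos = pos * 26 + (ord(ch) - ord('a') + 1)
    return pos

print(position('a'), position('z'))    # 1, 26
print(position('aa'), position('zz'))  # 27, 702 = 26 + 26**2
print(position('cat'))                 # 2074: a unique finite index
```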
|
2,943,973 | <p>I'm trying to prove there is some <span class="math-container">$N$</span> such that for all <span class="math-container">$n > N$</span>, it is the case that <span class="math-container">$$2n^{3/4} + 2(n-\sqrt{n})^{3/2} + n - 2n^{3/2} \leq 0$$</span></p>
<p>I know that this is true, since I graphed this function on Wolfram. It has one real root, and beyond that root the function is negative. How would I prove this analytically if possible? It seems like the expression is to messy to work with, but perhaps there is a simplification, or argument I am missing that makes the problem easy.</p>
| Will Jagy | 10,400 | <p><span class="math-container">$$ \sqrt{n - \sqrt n} < \sqrt n - \frac{1}{2} $$</span>
<span class="math-container">$$ \left( n - \sqrt n \right)^{\frac{3}{2}} < n^{\frac{3}{2}} - \frac{3}{2} n + \frac{3}{4} \sqrt n - \frac{1}{8} $$</span></p>
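<p>The hint can be verified numerically (a check of mine): squaring the first line gives $n-\sqrt n < n-\sqrt n+\tfrac14$, which holds for every $n\ge 1$, and cubing the first line yields the second, which makes the original expression negative for large $n$.</p>

```python
import math

# First inequality: sqrt(n - sqrt(n)) < sqrt(n) - 1/2 for n >= 1.
for n in (1, 2, 10, 100, 10_000):
    assert math.sqrt(n - math.sqrt(n)) < math.sqrt(n) - 0.5

# The expression from the question, which the hint bounds above by 0:
def expr(n):
    return 2 * n**0.75 + 2 * (n - math.sqrt(n))**1.5 + n - 2 * n**1.5

print(expr(10), expr(100))   # both negative
```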
|
1,895,323 | <p>Recently, I had a mock-test of a Mathematics Olympiad. There was a question which not only I but my friends too were not able to solve. The question goes like this: </p>
<p>If,
$$ \frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{1}{a+b+c} $$<br>
Then what is the value of<br>
$$ \frac{1}{a^5} + \frac{1}{b^5} + \frac{1}{c^5} $$ </p>
<p>To solve this question, I tried a variety of approaches, like:<br>
1) transposing variables in the first equation, and<br>
2) raising both sides of the first equation to the fifth power. But I was unable to find the solution. </p>
<p>The options were -- (a) 1 , (b) 0 , (c) $ \frac{1}{a^5 + b^5 + c^5} $ , (d) None of them. </p>
<p>So, I require any possible help. And, a complete answer would be most welcome. Thanks in advance.</p>
| Asinomás | 33,907 | <p>Options (a) and (b) are wrong: take $a=1,b=-1,c=-1$.</p>
<p>Now notice that for non-zero real numbers $a,b,c$ we have:</p>
<p>$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=\frac{1}{a+b+c}\overbrace{\iff}^{\text{multiply by a+b+c}} 3+2(ab+bc+ac)=1\iff ab+bc+ac=-1$.</p>
<p>Analogously we have $\frac{1}{a^5}+\frac{1}{b^5}+\frac{1}{c^5}=\frac{1}{a^5+b^5+c^5}\iff a^5b^5+b^5c^5+a^5c^5=-1$.</p>
<p>Taking $a=2,b=2,c=\frac{-5}{4}$ satisfies $ab+bc+ac=-1$ but not $a^5b^5+b^5c^5+a^5c^5=-1$.</p>
<p>So the answer is $d$.</p>
|
3,057,819 | <p>I'm giving this question a second try. Hopefully with a better problem definition.</p>
<p>I have a circle inscribed inside a square and would like to know the point the radius touches when extended. In the figure, we have calculated the angle <code>θ</code>, the center <code>C</code>, and <code>D</code> and <code>E</code>. How do I calculate the <code>(x,y)</code> of <code>A</code> and <code>B</code>? </p>
<p><a href="https://i.stack.imgur.com/Y0st7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0st7.png" alt="enter image description here"></a></p>
| Patricio | 601,766 | <p>In the case you've drawn, you already know the <span class="math-container">$x$</span> value, assuming the circle has center in <span class="math-container">$(C_x,C_y)$</span> and radius <span class="math-container">$r$</span>, <span class="math-container">$A_x=B_x=C_x+r.$</span> As for the <span class="math-container">$y,$</span> a little trigonometry helps: <span class="math-container">$A_y=C_y+r·\tan \theta.$</span></p>
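<p>In code (a sketch with my own variable names), for the case drawn, where the extended radius meets the right-hand side of the square:</p>

```python
import math

# Right side of the square is at x = Cx + r; the radius at angle theta
# (measured from the horizontal through C) meets it r*tan(theta) above C.
def side_point(Cx, Cy, r, theta):
    return (Cx + r, Cy + r * math.tan(theta))

A = side_point(0.0, 0.0, 1.0, math.pi / 6)
print(A)   # (1.0, 0.577...): tan(30 degrees) = 1/sqrt(3)
```

<p>Since $A_x=B_x=C_x+r$, the same function gives <code>B</code> with the appropriate angle.</p>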
|
2,101,756 | <p>From the power series definition of the polylogarithm and from the integral representation of the Gamma function it is easy to show that:
\begin{equation}
Li_{s}(z) := \sum\limits_{k=1}^\infty k^{-s} z^k = \frac{z}{\Gamma(s)} \int\limits_0^\infty \frac{\theta^{s-1}}{e^\theta-z} d \theta
\end{equation}
The identity holds whenever $Re(s) > 0$. Now my question is twofold. </p>
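<p>(For $Re(s)>0$ the identity is easy to confirm numerically; here is a spot-check of mine at $s=2$, $z=\tfrac12$, where $Li_2(\tfrac12)=\frac{\pi^2}{12}-\frac{\ln^2 2}{2}$.)</p>

```python
import math

s, z = 2.0, 0.5

# Left side: the truncated series (terms decay like z^k).
series = sum(k**-s * z**k for k in range(1, 200))

# Right side: composite Simpson's rule on [0, 40]; the integrand decays
# like e^{-t}, so truncating the infinite range costs essentially nothing.
def f(t):
    return t**(s - 1) / (math.exp(t) - z)

m, lo, hi = 8000, 0.0, 40.0
h = (hi - lo) / m
tot = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * h) for i in range(1, m))
integral = z / math.gamma(s) * tot * h / 3

print(series, integral)   # both ~0.58224 = pi^2/12 - ln(2)^2/2
```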
<p>Firstly, how do we analytically continue that function to the area $Re(s) <0$? Clearly this must be possible because it was already Riemann who found a corresponding reflection formula by deforming the integration contour to the complex plane and evaluating that integral both in a clock-wise and in a anti-clockwise direction.</p>
<p>My second question would be how do we compute two dimensional functions of that kind. To be precise I am interested in quantities like this:</p>
<p>\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) := \sum\limits_{1 \le k_1 < k_2 < \infty }(k_1+\xi_1)^{-s_1} (k_2+\xi_2)^{-s_2} z_1^{k_1} z_2^{k_2-k_1}
\end{equation}
Clearly if both $Re(s_1) >0$ and $Re(s_2) >0$, the quantity above has the following integral representation:
\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) = \frac{z_1 z_2}{\Gamma(s_1) \Gamma(s_2)} \int\limits_{{\mathbb R}_+^2} \frac{\theta_1^{s_1-1} \theta_2^{s_2-1} e^{-\theta_1 \xi_1-\theta_2 \xi_2}}{\left(e^{\theta_1+\theta_2}-z_1\right)\left(e^{\theta_2}-z_2\right)} d\theta_1 \, d\theta_2
\end{equation}
However how do I compute the quantity if any of the real parts of the $s$-parameters becomes negative?</p>
| user247327 | 247,327 | <p>First, your "Full Question" is <strong>not</strong> the same as the first equation you post. Do you intend one of those "=" to be "-"? Second, is $\log_3(y)^5$ intended to be $\log_3(y^5)$? That would fit your "$5\log_3(y)$" in your "Full Question". If you mean $6+ \log_3(y)= \log_3(y^5)$, that is the same as $6+ \log_3(y)= 5\log_3(y)$, so that $4\log_3(y)= 6$. Then $\log_3(y^4)= 6$ and $y^4= 3^6$, so that $y= 3^{6/4}= 3^{3/2}$. </p>
|
3,115,347 | <p>Let <span class="math-container">$f:(0,\infty) \to \mathbb R$</span> be a differentiable function and <span class="math-container">$F$</span> on of its primitives. Prove that if <span class="math-container">$f$</span> is bounded and <span class="math-container">$\lim_{x \to \infty}F(x)=0$</span>, then <span class="math-container">$\lim_{x\to\infty}f(x)=0$</span>.</p>
<p>I've seen this problem on a Facebook page yesterday. Can anybody give me some tips to solve it, please? It looks pretty interesting and I have no idea of a proof now.</p>
| Peter Foreman | 631,494 | <p>a) The probability that a die does not fall higher than <span class="math-container">$3$</span> is given by
<span class="math-container">$$P(X\le3)=\frac{3}{6}=\frac{1}{2}$$</span>
So as each event is independent we can find the probability that no die falls higher than three by raising this result to the power of <span class="math-container">$4$</span>
<span class="math-container">$$(P(X\le3))^4=\frac{1}{2^4}=\frac{1}{16}$$</span></p>
<p>b) Similarly the probability that a die does not fall higher than <span class="math-container">$4$</span> is given by
<span class="math-container">$$P(X\le4)=\frac{4}{6}=\frac{2}{3}$$</span>
So the final probability is
<span class="math-container">$$(P(X\le4))^4=\frac{2^4}{3^4}=\frac{16}{81}$$</span></p>
<p>c) If <span class="math-container">$4$</span> is the highest rolled value then every die rolled has a value <span class="math-container">$\le 4$</span>. But we also need at least one <span class="math-container">$4$</span> to be rolled - so we need to subtract the probability of rolling every die <span class="math-container">$\le3$</span>. The answer is then
<span class="math-container">$$(P(X\le4))^4-(P(X\le3))^4=\frac{16}{81}-\frac{1}{16}=\frac{175}{1296}$$</span>
which is the difference of the answers from a) and b).</p>
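<p>As a quick sanity check (a Python sketch, not part of the original answer), one can enumerate all $6^4 = 1296$ equally likely outcomes and count directly:</p>

```python
from fractions import Fraction
from itertools import product

# Brute-force check: enumerate all 6^4 = 1296 equally likely outcomes of
# rolling four dice and count the events from parts a), b), c) directly.
outcomes = list(product(range(1, 7), repeat=4))
total = len(outcomes)

p_le3 = Fraction(sum(max(o) <= 3 for o in outcomes), total)   # part a)
p_le4 = Fraction(sum(max(o) <= 4 for o in outcomes), total)   # part b)
p_max4 = Fraction(sum(max(o) == 4 for o in outcomes), total)  # part c)

print(p_le3, p_le4, p_max4)  # 1/16 16/81 175/1296
```

<p>The counts agree with the three closed-form answers above.</p>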
|
1,064,115 | <p><strong>UPDATE:</strong> Thanks to those who replied saying I have to calculate the probabilities explicitly. Could someone clarify if this is the form I should end up with:</p>
<p>$G_X(x) = P(X=0) + P(X=1)\,x + P(X=2)\,x^2 + P(X=3)\,x^3$</p>
<p>Then I find the first and second derivative in order to calculate the expected value and variance?</p>
<p>Thanks!</p>
<p><strong>ORIGINAL POST:</strong> We have a probability question which has stumped all of us for a while and we really cannot figure out what to do. The question is:</p>
<p>An urn contains 4 red and 3 green balls. Balls will be drawn from the urn in sequence until the first red ball is drawn (i.e. without replacement). Let X denote the number of green balls drawn in this sequence.</p>
<p>(i) Find $G_X$(x), the probability generating function of X.</p>
<p>(ii) Use $G_X$(x) to find E(X), the expected value of X.</p>
<p>(iii) Use $G_X$(x) and E(X) to find $σ^2$(X), the variance of X.</p>
<p>It appears to me from looking in various places online that this would be a hypergeometric distribution, as it is without replacement. However, we have not covered that type of distribution in our course and it seems the lecturer wishes for us to use a different method. We have only covered binomial, geometric and Poisson. I have tried to figure out an alternative way of finding the probability generating function and hence the expected value and variance (just using the derivatives), but I have not been successful. Would anyone be able to assist?</p>
<p>Thanks! :)
Helen</p>
| d125q | 112,944 | <p>This would not be a hypergeometric distribution. You can think of hypergeometric as binomial without replacement, not geometric without replacement (even though the name might suggest otherwise). In other words, hypergeometric doesn't care at which spot the red ball is drawn.</p>
<p>Well, it should be relatively easy to find the probability mass function. Observe that, for example, $$\mathrm{P}(X = 2) = \color{green}{\frac37} \cdot \color{green}{\frac26} \cdot \color{red}{\frac45}$$</p>
<p>You can generalize this in the following manner:</p>
<p>$$
\mathrm{P}(X = x) = p_{X}(x) = \begin{cases}
\displaystyle \color{green}{\frac{\frac{3!}{(3 - x)!}}{\frac{7!}{(7 - x)!}}} \cdot \color{red}{\frac{4}{7 - x}} && x \in \{0, 1, 2, 3\} \\
0 && \text{otherwise}
\end{cases}
$$</p>
<p>Now you can use this to find the probability-generating function by definition.</p>
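<p>As a sketch (not part of the original answer, and using Python rather than the PGF algebraically), one can build the pmf from the sequential draws and read $E(X)$ and $\sigma^2(X)$ off the factorial moments $E(X) = G'(1)$ and $\operatorname{Var}(X) = G''(1) + G'(1) - G'(1)^2$:</p>

```python
from fractions import Fraction

# Build the pmf of X from the sequential draws without replacement, then
# compute E(X) = G'(1) and Var(X) = G''(1) + G'(1) - G'(1)^2.
def pmf(x):
    """P(X = x): draw x green balls in a row, then a red one."""
    p = Fraction(1)
    for i in range(x):
        p *= Fraction(3 - i, 7 - i)   # another green ball
    return p * Fraction(4, 7 - x)     # then a red ball

probs = {x: pmf(x) for x in range(4)}
G1 = sum(x * p for x, p in probs.items())             # G'(1) = E(X)
G2 = sum(x * (x - 1) * p for x, p in probs.items())   # G''(1)
var = G2 + G1 - G1 ** 2

print(G1, var)  # 3/5 16/25
```

<p>The probabilities sum to $1$, confirming the pmf above.</p>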
|
1,785,444 | <p>The question says to 'Express the last equation of each system as a sum of multiples of the first two equations." </p>
<p>System in question being: </p>
<p>$ x_1+x_2+x_3=1 $</p>
<p>$ 2x_1-x_2+3x_3=3 $</p>
<p>$ x_1-2x_2+2x_3=2 $</p>
<p>The question gives a hint saying "Label the equations, use the gaussian algorithm" and the answer is 'Eqn 3 = Eqn 2 - Eqn 1' but short of eye-balling it, I'm not sure how they deduce that after row-reducing to REF.</p>
| Noble Mushtak | 307,483 | <p>NOTE: $r_i$ is the original $i^{th}$ equation as stated in your question above.</p>
<p>Well, let's go through the process of finding the extended echelon form using Gauss-Jordan elimination. Here's the matrix:
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 2 & -1 & 3 & 3 \\ 1 & -2 & 2 & 2\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{matrix}\right]$$
<p>First, we subtract the second row by twice the first row and the third row by the first row:
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 0 & -3 & 1 & 1 \\ 0 & -3 & 1 & 1\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1\end{matrix}\right]$$
<p>Now, we subtract the third row by the second row (we don't really care about the first row at this point since we just want to know the numbers in the third row):
$$\left[\begin{matrix}1 & 1 & 1 & 1 \\ 0 & -3 & 1 & 1 \\ 0 & 0 & 0 & 0\end{matrix}\right]\left[\begin{matrix}1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & -1 & 1\end{matrix}\right]$$
<p>Thus, since the third row in the matrix to the left is $\mathbf 0$ and third row in the matrix to the right is $1 \ -1 \ 1$, we have that $r_1-r_2+r_3=\mathbf 0$, or that $r_3=r_2-r_1$. Therefore, the third equation is the second equation minus the first.</p>
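<p>A quick numerical check (a sketch, not part of the original answer) using the augmented rows $[a, b, c \mid d]$ of the system:</p>

```python
# Verify that Eqn 3 = Eqn 2 - Eqn 1 for the augmented rows [a, b, c | d].
r1 = [1, 1, 1, 1]    #  x1 +  x2 +  x3 = 1
r2 = [2, -1, 3, 3]   # 2x1 -  x2 + 3x3 = 3
r3 = [1, -2, 2, 2]   #  x1 - 2x2 + 2x3 = 2

diff = [b - a for a, b in zip(r1, r2)]
print(diff == r3)  # True

# Equivalently, the combination r1 - r2 + r3 read off from the right-hand
# block of the extended echelon form is the zero row:
combo = [a - b + c for a, b, c in zip(r1, r2, r3)]
print(combo)  # [0, 0, 0, 0]
```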
|
3,991,691 | <p>I'm having some trouble proving the following:</p>
<blockquote>
<p>Let <span class="math-container">$d$</span> be the smallest positive integer such that <span class="math-container">$a^d \equiv 1 \pmod m$</span>, for <span class="math-container">$a \in \mathbb Z$</span> and <span class="math-container">$m \in \mathbb N$</span> and with <span class="math-container">$\gcd(a,m) = 1$</span>. Prove that, if <span class="math-container">$a^n \equiv 1 \pmod m$</span> then <span class="math-container">$d\mid n$</span>.</p>
</blockquote>
<p>The first thing that came to my mind was Euler's theorem but I couldn't conclude anything because I'm not very skilled when it comes to using Euler's totient function. Can someone give me any tips or show me how to solve this?</p>
| Will Jagy | 10,400 | <p>This sort of thing has a clear description in terms of Conway's topograph; I find it more convenient to use the equivalent form <span class="math-container">$u^2 + uv - 5 v^2.$</span> The outcome for the original problem is a pair of sequences (note that the rule relates every other element). For instance,
<span class="math-container">$5\cdot 9 - 2 = 43$</span> and <span class="math-container">$5 \cdot 14 - 3 = 67$</span></p>
<p><span class="math-container">$$ x_{n+4}= 5 x_{n+2} - x_n,$$</span>
<span class="math-container">$$ y_{n+4}= 5 y_{n+2} - y_n,$$</span></p>
<p>There are two interleaved subsequences.</p>
<p><span class="math-container">$$
\begin{array}{c}
1&1&2&3&9&14&43&67&206&321&987 &1538&4729& \ldots \\
3&2&1&1&2&3&9&14&43&67&206&321&987&\ldots \\
\end{array}
$$</span>
Running the indices backwards leads to different solutions, but they are just transpositions of the ones above.</p>
<p>Let's see, given <span class="math-container">$u^2 + uv - 5 v^2 = -5$</span> and <span class="math-container">$x=u+3v, y=v$</span> gives <span class="math-container">$x^2 - 5 xy + y^2 = -5$</span> and the reverse holds as well</p>
<p><a href="https://i.stack.imgur.com/fIgKP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fIgKP.jpg" alt="enter image description here" /></a>
Apparently I drew one of these in 2016
<a href="https://i.stack.imgur.com/QtjIu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QtjIu.jpg" alt="enter image description here" /></a></p>
<p>Examples in previous answers:</p>
<p><a href="http://math.stackexchange.com/questions/342284/generate-solutions-of-quadratic-diophantine-equation/346821#346821">Generate solutions of Quadratic Diophantine Equation</a>
diagrams</p>
<p><a href="http://math.stackexchange.com/questions/81917/another-quadratic-diophantine-equation-how-do-i-proceed/144794#144794">Another quadratic Diophantine equation: How do I proceed?</a></p>
<p><a href="http://math.stackexchange.com/questions/228356/how-to-find-solutions-of-x2-3y2-2/228405#228405">How to find solutions of $x^2-3y^2=-2$?</a></p>
<p><a href="http://math.stackexchange.com/questions/342284/generate-solutions-of-quadratic-diophantine-equation/345128#345128">Generate solutions of Quadratic Diophantine Equation</a></p>
<p><a href="http://math.stackexchange.com/questions/487051/why-cant-the-alpertron-solve-this-pell-like-equation/487063#487063">Why can't the Alpertron solve this Pell-like equation?</a></p>
<p><a href="http://math.stackexchange.com/questions/512621/finding-all-solutions-of-the-pell-type-equation-x2-5y2-4/512649#512649">Finding all solutions of the Pell-type equation $x^2-5y^2 = -4$</a></p>
<p><a href="http://math.stackexchange.com/questions/680972/if-m-n-in-mathbb-z-2-satisfies-3m2m-4n2n-then-m-n-is-a-perfect-square/686351#686351">If $(m,n)\in\mathbb Z_+^2$ satisfies $3m^2+m = 4n^2+n$ then $(m-n)$ is a perfect square.</a></p>
<p><a href="http://math.stackexchange.com/questions/739752/how-to-solve-binary-form-ax2bxycy2-m-for-integer-and-rational-x-y/739765#739765">how to solve binary form $ax^2+bxy+cy^2=m$, for integer and rational $ (x,y)$</a> :::: 69 55</p>
<p><a href="http://math.stackexchange.com/questions/742181/find-all-integer-solutions-for-the-equation-5x2-y2-4/756972#756972">Find all integer solutions for the equation $|5x^2 - y^2| = 4$</a></p>
<p><a href="http://math.stackexchange.com/questions/822503/positive-integer-n-such-that-2n1-3n1-are-both-perfect-squares/822517#822517">Positive integer $n$ such that $2n+1$ , $3n+1$ are both perfect squares</a></p>
<p><a href="http://math.stackexchange.com/questions/1078450/maps-of-primitive-vectors-and-conways-river-has-anyone-built-this-in-sage/1078979#1078979">Maps of primitive vectors and Conway's river, has anyone built this in SAGE?</a></p>
<p><a href="http://math.stackexchange.com/questions/1091310/infinitely-many-systems-of-23-consecutive-integers/1093382#1093382">Infinitely many systems of $23$ consecutive integers</a></p>
<p><a href="http://math.stackexchange.com/questions/1132187/solve-the-following-equation-for-x-and-y/1132347#1132347">Solve the following equation for x and y:</a> <1,-1,-1></p>
<p><a href="http://math.stackexchange.com/questions/1132799/finding-integers-of-the-form-3x2-xy-5y2-where-x-and-y-are-integers">Finding integers of the form $3x^2 + xy - 5y^2$ where $x$ and $y$ are integers, using diagram via arithmetic progression</a></p>
<p><a href="http://math.stackexchange.com/questions/1221178/small-integral-representation-as-x2-2y2-in-pells-equation/1221280#1221280">Small integral representation as $x^2-2y^2$ in Pell's equation</a></p>
<p><a href="http://math.stackexchange.com/questions/1404023/solving-the-equation-x2-7y2-3-over-integers/1404126#1404126">Solving the equation $ x^2-7y^2=-3 $ over integers</a></p>
<p><a href="http://math.stackexchange.com/questions/1599211/solutions-to-diophantine-equations/1600010#1600010">Solutions to Diophantine Equations</a></p>
<p><a href="http://math.stackexchange.com/questions/1667323/how-to-prove-that-the-roots-of-this-equation-are-integers/1667380#1667380">How to prove that the roots of this equation are integers?</a></p>
<p><a href="http://math.stackexchange.com/questions/1719280/does-the-pell-like-equation-x2-dy2-k-have-a-simple-recursion-like-x2-dy2">Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$?</a></p>
<p><a href="http://math.stackexchange.com/questions/1737385/if-d1-is-a-squarefree-integer-show-that-x2-dy2-c-gives-some-bounds-i/1737824#1737824">http://math.stackexchange.com/questions/1737385/if-d1-is-a-squarefree-integer-show-that-x2-dy2-c-gives-some-bounds-i/1737824#1737824</a> "seeds"</p>
<p><a href="http://math.stackexchange.com/questions/1772594/find-all-natural-numbers-n-such-that-21n2-20-is-a-perfect-square/1773319#1773319">Find all natural numbers $n$ such that $21n^2-20$ is a perfect square.</a></p>
<p><a href="https://math.stackexchange.com/questions/2549380/is-there-a-simple-proof-that-if-b-aba-ab-1-then-a-b-must-be-fibon/2549440#2549440">Is there a simple proof that if $(b-a)(b+a) = ab - 1$, then $a, b$ must be Fibonacci numbers?</a> 1,1,-1; 1,11</p>
<p><a href="https://math.stackexchange.com/questions/2579293/to-find-all-integral-solutions-of-3x2-4y2-11/2579305#2579305">To find all integral solutions of $3x^2 - 4y^2 = 11$</a></p>
<p><a href="https://math.stackexchange.com/questions/3778825/how-do-we-solve-pell-like-equations/3778953#3778953">How do we solve pell-like equations?</a></p>
<p><a href="https://math.stackexchange.com/questions/3804117/diophantine-equation-x2-xy-%e2%88%92-3y2-17/3804236#3804236">Diophantine equation $x^2 + xy − 3y^2 = 17$</a> <1,1,-3></p>
|
3,054,321 | <p>I'm looking for a closed form for this sequence,</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^{\infty}\left(\sum_{k=1}^{n}\frac{1}{(25k^2+25k+4)(n-k+1)^3} \right)$$</span></p>
</blockquote>
<p>I applied convergence test. The series converges.I want to know if the series is expressed with any mathematical constant. How can we do that?</p>
| Robert Israel | 8,508 | <p>Change the order of summation, so it's <span class="math-container">$\sum_{k=1}^\infty \sum_{n=k}^\infty$</span>.
Then I get
<span class="math-container">$$ {\frac {\zeta \left( 3 \right) \left( 4\,\pi\,\cot \left( \pi/5
\right) -15 \right) }{60}}
$$</span>
You could also write <span class="math-container">$$\cot(\pi/5) = \frac{\sqrt{2}}{20} (5 + \sqrt{5})^{3/2}$$</span></p>
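<p>A numeric check (a sketch, not part of the original answer): since <span class="math-container">$25k^2+25k+4 = (5k+1)(5k+4)$</span> and the inner sum over <span class="math-container">$n$</span> collapses to <span class="math-container">$\zeta(3)$</span> after swapping the order, the double sum equals <span class="math-container">$\zeta(3)\sum_{k\ge1} \frac{1}{(5k+1)(5k+4)}$</span>, which can be compared with the closed form:</p>

```python
import math

# Compare a truncation of the series with the claimed closed form
# zeta(3) * (4*pi*cot(pi/5) - 15) / 60.
N = 200_000
zeta3 = sum(1.0 / n**3 for n in range(1, N))
series = zeta3 * sum(1.0 / ((5 * k + 1) * (5 * k + 4)) for k in range(1, N))

closed = zeta3 * (4 * math.pi / math.tan(math.pi / 5) - 15) / 60

print(abs(series - closed))  # small (truncation tail only)
```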
|
1,645,361 | <p>I am aware that the union of subspaces does not necessarily yield a subspace. However, I am confused about the following question: </p>
<blockquote>
<p>(i) Let $U, U'$ be subspaces of a vector space $V$ (both not equal to $V$). Prove that the union of $U$ and $U'$ does not equal $V$.<br>
(ii) Find an example of $V$ and $U,U',U''$ contained in $V$ (all not equal to $V$) such that the union of $U,U'$ and $U''$ is equal to $V$. </p>
</blockquote>
<p>Thank you.</p>
| AnalysisStudent0414 | 97,327 | <p>(i) Suppose $V = U \cup U'$. Since the inclusions aren't strict, there is at least one vector in $U'$ not in $U$, and viceversa. Call those $u' \in U'\setminus U$ and $u \in U\setminus U'$. Then, $u+u' \not \in U \cup U'$, but this is absurd since $u+u' \in V$.</p>
<p>(ii) If we take three subspaces, the above reasoning doesn't hold: for example, if we take $V=\mathbb{F}_2^2$ the union of the subspaces $(1,0)$ and $(0,1)$ and $(1,1)$ is the entire $V$.</p>
|
1,645,361 | <p>I am aware that the union of subspaces does not necessarily yield a subspace. However, I am confused about the following question: </p>
<blockquote>
<p>(i) Let $U, U'$ be subspaces of a vector space $V$ (both not equal to $V$). Prove that the union of $U$ and $U'$ does not equal $V$.<br>
(ii) Find an example of $V$ and $U,U',U''$ contained in $V$ (all not equal to $V$) such that the union of $U,U'$ and $U''$ is equal to $V$. </p>
</blockquote>
<p>Thank you.</p>
| Takirion | 299,952 | <p>(i) Let's assume $U,U',V$ as you said, but $U\cup U'=V$. Obviously we have neither $U\subseteq U'$ nor $U'\subseteq U$, since otherwise we would have $V=U\cup U'=U$ or $V=U\cup U'=U'$.
So we can find $v\in U\setminus U'$ and $v'\in U'\setminus U$. Since $U\cup U'=V$, we have $v+v'\in U\cup U'$.
If $v+v'\in U$ we have $v'=(v+v')-v\in U$, contradiction.
If $v+v'\in U'$ we have $v=(v+v')-v'\in U'$, contradiction.</p>
<p>(ii) But let's assume the $\mathbb{F}_2$-space $\mathbb{F}_2^2$ and the three subspaces $U=\lbrace 0, (1,0)\rbrace$, $U'=\lbrace 0, (0,1)\rbrace$, $U''=\lbrace 0, (1,1)\rbrace$. Then $\mathbb{F}_2^2 = U\cup U'\cup U''$.</p>
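<p>An exhaustive check of this example (a Python sketch, not part of the original answer), with <code>U1</code> and <code>U2</code> standing for $U'$ and $U''$:</p>

```python
from itertools import product

# In V = F_2^2, the union of the three proper subspaces covers all of V.
V = set(product((0, 1), repeat=2))
U = {(0, 0), (1, 0)}
U1 = {(0, 0), (0, 1)}
U2 = {(0, 0), (1, 1)}

print(U | U1 | U2 == V, all(S != V for S in (U, U1, U2)))  # True True
```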
|
1,657,557 | <p>For example, how would I enter y^(IV) - 16y = 0? </p>
<p>typing out fourth derivative, and putting four ' marks does not seem to work. </p>
| mvw | 86,776 | <p>The expression</p>
<pre><code>solve y'''' - 16 y = 0 for y
</code></pre>
<p>seems to work.</p>
<p><a href="http://www.wolframalpha.com/input/?i=solve+y%27%27%27%27+-+16+y+%3D+0+for+y" rel="nofollow">Link to result</a>.</p>
|
2,129,086 | <p>I know that the total number of choosing without constraint is </p>
<p>$\binom{3+11−1}{11}= \binom{13}{11}= \frac{13·12}{2} =78$</p>
<p>Then with x1 ≥ 1, x2 ≥ 2, and x3 ≥ 3. </p>
<p>the textbook has the following solution </p>
<p>$\binom{3+5−1}{5}=\binom{7}{5}=21$. I can't figure out where the 5 is coming from.</p>
<p>Is the reason for choosing 5 that the constraints add up to 6, so 11 − 6 = 5?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\bracks{z^{11}}\sum_{x_{1} = 1}^{\infty}z^{x_{1}}
\sum_{x_{2} = 2}^{\infty}z^{x_{2}}\sum_{x_{3} = 3}^{\infty}z^{x_{3}} & =
\bracks{z^{11}}{z \over 1 - z}\,{z^{2} \over 1 - z}\,{z^{3} \over 1 - z} =
\bracks{z^{\color{#f00}{5}}}\pars{1 - z}^{-3}
\\[5mm] & = \bracks{z^{5}}\sum_{i = 0}^{\infty}{-3 \choose i}\pars{-z}^{i} =
-{-3 \choose 5} = {7 \choose 5} = {7 \times 6 \over 2} = \bbx{\ds{21}}
\end{align}</p>
<blockquote>
<p>Note that $\ds{\color{#f00}{5} = 11 - 1 - 2 - 3}$.</p>
</blockquote>
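<p>A brute-force count (a Python sketch, not part of the original answer) confirming the coefficient extraction, compared against the stars-and-bars value $\binom{7}{5}$ obtained after substituting away the lower bounds:</p>

```python
import math
from itertools import product

# Count solutions of x1 + x2 + x3 = 11 with x1 >= 1, x2 >= 2, x3 >= 3.
count = sum(1
            for x1, x2, x3 in product(range(12), repeat=3)
            if x1 + x2 + x3 == 11 and x1 >= 1 and x2 >= 2 and x3 >= 3)

print(count, math.comb(7, 5))  # 21 21
```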
|
897,756 | <p>How can I solve the following trigonometric inequation?</p>
<p>$$\sin\left(x\right)\ne \sin\left(y\right)\>,\>x,y\in \mathbb{R}$$</p>
<p>Why I'm asking this question... I was doing my calculus homework, trying to plot the domain of the function $f\left(x,y\right)=\frac{x-y}{\sin\left(x\right)-\sin\left(y\right)}$ and figured out I'd have to solve the inequation $\sin\left(x\right)\ne\sin\left(y\right)$... I was able to come to the answer $y\ne x +2\cdot k\cdot \pi \>,\>k \in \mathbb{N}$. However, the answer in the textbook also includes $y\ne -x +2\cdot k\cdot \pi + \pi \>,\>k \in \mathbb{N}$, so I thought that I was probably doing something wrong while solving that inequation.</p>
| Yiorgos S. Smyrlis | 57,021 | <p>$$
\sin(x+h)=\sin x\cos h+\sin h\cos x,
$$
and hence
$$
\frac{\sin(x+h)-\sin x}{h}=\frac{\sin x\cos h+\sin h\cos x-\sin x}{h}=\cos x+\sin x\frac{\cos h -1}{h} \\= \cos x-\sin x\frac{2\sin^2(h/2)}{h}=\cos x-\sin x \cdot \frac{h}{2} \cdot\left(\frac{\sin(h/2)}{h/2}\right)^2 \to \cos x-\sin x\cdot 0 \cdot 1\\=\cos x
$$</p>
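<p>A numeric illustration of the limit (a sketch, not part of the proof): the difference quotient of $\sin$ approaches $\cos x$ as $h \to 0$, at a few sample points.</p>

```python
import math

# The difference quotient of sin tends to cos x as h -> 0.
def diff_quotient(x, h):
    return (math.sin(x + h) - math.sin(x)) / h

h = 1e-6
errors = [abs(diff_quotient(x, h) - math.cos(x)) for x in (0.0, 0.7, 1.5, 3.0)]
print(max(errors))  # on the order of h
```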
|