| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,424,720 | <p>I want to calculate the limit <span class="math-container">$\lim_{x \to 0} \frac{\cos(\alpha + x) - \cos(\alpha)}{x}$</span>. Using SageMath, I already know that the solution is going to be <span class="math-container">$-\sin(\alpha)$</span>; however, I fail to see how to reach this conclusion.</p>
<h2>My ideas</h2>
<p>I've tried transforming the term in such a way that the limit is easier to find:
<span class="math-container">\begin{align}
\frac{\cos(\alpha + x) - \cos(\alpha)}{x}
&= \frac{\cos(x)\cos(\alpha)-\sin(x)\sin(\alpha)-\cos(\alpha)}{x} & (1) \\
&= \frac{\cos(x)\cos(\alpha)-\cos(\alpha)}{x} - \frac{\sin(x)\sin(\alpha)}{x} & (2) \\
&= \frac{\cos(\alpha)(\cos(x)-1)}{x} - \frac{\sin(x)}{x}\sin(\alpha) & (3) \\
\end{align}</span></p>
<p>However, I'm not sure whether term <span class="math-container">$(3)$</span> is enough to solve the problem. Surely, for <span class="math-container">$x \to 0$</span>, this evaluates to <span class="math-container">$\frac{0}{0} - \sin(\alpha)$</span>, which is not enough to determine the limit.</p>
<p>Another try was to transform the term in such a way that I can use <span class="math-container">$\lim_{x \to 0} \frac{\sin x}{x} = 1$</span>. For instance, I found that
<span class="math-container">$$
\frac{\cos(\alpha + x) - \cos(\alpha)}{x} = \frac{\sin(\alpha + 90^\circ + x) - \sin(\alpha+90^\circ)}{x} \qquad (4)
$$</span></p>
<p>However, this only seems to lead to more complicated term manipulations that I did not manage to bring to a useful point.</p>
<p>What can I do to solve this problem?</p>
| Ian | 83,396 | <p>The derivation of the derivative of <span class="math-container">$\cos(x)$</span> from geometry goes the way you went, but now you need to provide some justification for the equality <span class="math-container">$\lim_{x \to 0} \frac{\cos(x)-1}{x}=0$</span>, which can also be done using geometry.</p>
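<p>As a quick sanity check (not a substitute for the geometric justification), both limits can be verified symbolically; here is a minimal SymPy sketch:</p>

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
# the full difference quotient from the question
full = sp.limit((sp.cos(alpha + x) - sp.cos(alpha)) / x, x, 0)
# the remaining piece whose limit needs geometric justification
piece = sp.limit((sp.cos(x) - 1) / x, x, 0)
print(full)   # -sin(alpha)
print(piece)  # 0
```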
|
80,783 | <p>How do you convert $(12.0251)_6$ (in base 6) into fractions?</p>
<p>I know how to convert a fraction into base $x$ by repeatedly multiplying the fraction by $x$ and simplifying, but I'm not sure how to go the other way.</p>
| André Nicolas | 6,312 | <p>I assume you want decimal notation in your fraction. We have
$$(12.0251)_6=(1\times 6^1) + (2\times 6^0) +\frac{0}{6^1}+\frac{2}{6^2}+\frac{5}{6^3}+\frac{1}{6^4}.$$</p>
<p>Bring the right-hand side to the common denominator $6^4$, and calculate. Equivalently, multiply by $6$ often enough that you get an integer $N$ ($4$ times) and divide by that power of $6$. After a while, we get that $N=10471$. Since $6^4=1296$,
$$(12.0251)_6=\frac{10471}{1296}.$$</p>
<p><strong>Another way:</strong> Note that $(12.0251)_6=\dfrac{(120251)_6}{(10000)_6}$. Convert numerator and
denominator to base $10$. This looks slicker, but the computational details are the same.</p>
<p><strong>Comment:</strong> There is a nice trick for making the calculation of the numerator easier. It goes back almost two thousand years in China, and is sometimes called <em>Horner's Method</em>, after an early 19th century British schoolmaster. We work from the left. Calculate in turn</p>
<p>$(1\times 6)+2=a$</p>
<p>$(a\times 6)+0=b$</p>
<p>$(b\times 6)+2=c$</p>
<p>$(c\times 6)+5=d$</p>
<p>$(d\times 6)+1=e$</p>
<p>Our numerator is $e$. We find that $e=10471$. </p>
<p>Horner's Method does not really speed up things in this small calculation. But with longer calculations, there is substantial gain. Horner's Method is a useful tool when we want to evaluate a high degree polynomial $P(x)=a_0x^n+a_1x^{n-1}+ \cdots +a_n$ at a particular numerical value of $x$. With a bit of practice, it is even a handy tool for evaluation of polynomials with a calculator. There is no need to ever "store" intermediate results.</p>
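<p>For the curious, Horner's scheme above is easy to mechanize; this small Python sketch (the function name is mine) reproduces the numerator and denominator of the answer:</p>

```python
def horner_base(digits, base):
    # fold the digits left to right: value -> value * base + digit,
    # exactly the a, b, c, d, e chain described above
    value = 0
    for d in digits:
        value = value * base + d
    return value

num = horner_base([1, 2, 0, 2, 5, 1], 6)   # numerator of (12.0251)_6 as an integer
den = 6 ** 4                               # one power of 6 per fractional digit
print(num, den)  # 10471 1296
```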
|
856,756 | <p>Let $A_1 = \{x_1,x_2,x_3,x_4\}$ be a set of four positive real numbers.
The sets $A_i, i\geq 2$ are made up of four real numbers defined from the arithmetic mean, geometric mean, harmonic mean and root mean square, as shown below.</p>
<p>Let $A_2 = \{ \text{AM} (A_1),\text{GM}(A_1),\text{HM}(A_1),\text{RMS}(A_1) \}$</p>
<p>Let $A_3 = \{ \text{AM} (A_2),\text{GM}(A_2),\text{HM}(A_2),\text{RMS}(A_2) \}$</p>
<p>$ \vdots$</p>
<p>Let $A_{i+1} = \{ \text{AM} (A_i),\text{GM}(A_i),\text{HM}(A_i),\text{RMS}(A_i) \}$</p>
<p>I was wondering if there was anything special as this process is repeated. Do the elements of the set converge towards a specific number? I was thinking it would converge to SOME number between the AM and GM since $RMS \geq AM \geq GM \geq HM$, but this is as far as I could go.</p>
| Daccache | 79,416 | <p>I can't give a complete answer, but here's what I did:<br>
(note that in your question, you put $AM \geqslant GM \geqslant HM \geqslant RMS$, but the RMS is actually supposed to be the greatest, as in $RMS \geqslant AM \geqslant GM \geqslant HM$)<br>
To experiment, I took 3 sets of numbers $A_1 = \{2, 6, 11, 27\}, B_1 = \{1, 2, 3, 4\}$, and $C_1 = \{182.26, 5\sqrt{2}, 192403, 12\sin(1)\}$, put them in Excel, and crunched the numbers.<br>
As it turns out, the numbers <em>do</em> converge to a specific number unique to each set. Some converge faster than others, but they all converge, and relatively quickly at that. Here's a picture for clarity: </p>
<p><img src="https://i.stack.imgur.com/CiG5E.png" alt="An excel table showing the three sets"></p>
<p>So, the three sets do converge to a single number, highlighted in red when all the values reached an agreement to 9 decimal places. As you can see, the number is always between the AM and GM, not the GM and HM. (Although I suspect your reasoning was correct in thinking that it would converge to a number in the 'middle' of the inequalities.)<br>
I don't have any concrete ideas on how to give a proof of this fact, but it might come from this reasoning: </p>
<ol>
<li>Given a set of 4 positive numbers in a set $A_1$, we compute their AM, GM, HM, and RMS, and let them be the four values in the set $A_2$.</li>
<li>Since the four numbers in $A_2$ satisfy the RMS-AM-GM-HM inequality, and every average must be in between its greatest and least values (the largest and smallest elements in $A_1$), then $\max{A_1} \geqslant 1stRMS \geqslant 1stAM \geqslant 1stGM \geqslant 1stHM \geqslant \min{A_1}$.</li>
<li>In $A_3$, by the same reasoning as in step 2, we have that the maximum of $A_2$, which is the 1stRMS, will be greater than the 2ndRMS, which is in turn greater than the 2ndAM... which is larger than the minimum of $A_2$, which is the 1stHM.</li>
<li>We can keep on 'nesting' the inequalities in each other with each repetition, and in doing this we are squeezing the means in between increasingly close numbers, which will make it converge.</li>
</ol>
<p>Cheers!</p>
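<p>The Excel experiment is easy to reproduce in code; a minimal Python sketch (function name mine) iterating the four means on the first set:</p>

```python
from math import prod, sqrt
from statistics import fmean

def next_means(xs):
    # one step: replace the set by its four classical means
    n = len(xs)
    am = fmean(xs)
    gm = prod(xs) ** (1 / n)
    hm = n / sum(1 / x for x in xs)
    rms = sqrt(fmean([x * x for x in xs]))
    return [rms, am, gm, hm]

xs = [2.0, 6.0, 11.0, 27.0]
for _ in range(40):
    xs = next_means(xs)
print(max(xs) - min(xs) < 1e-9)  # True: the four means have collapsed to one value
```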
|
856,756 | <p>Let $A_1 = \{x_1,x_2,x_3,x_4\}$ be a set of four positive real numbers.
The sets $A_i, i\geq 2$ are made up of four real numbers defined from the arithmetic mean, geometric mean, harmonic mean and root mean square, as shown below.</p>
<p>Let $A_2 = \{ \text{AM} (A_1),\text{GM}(A_1),\text{HM}(A_1),\text{RMS}(A_1) \}$</p>
<p>Let $A_3 = \{ \text{AM} (A_2),\text{GM}(A_2),\text{HM}(A_2),\text{RMS}(A_2) \}$</p>
<p>$ \vdots$</p>
<p>Let $A_{i+1} = \{ \text{AM} (A_i),\text{GM}(A_i),\text{HM}(A_i),\text{RMS}(A_i) \}$</p>
<p>I was wondering if there was anything special as this process is repeated. Do the elements of the set converge towards a specific number? I was thinking it would converge to SOME number between the AM and GM since $RMS \geq AM \geq GM \geq HM$, but this is as far as I could go.</p>
| Dirk | 3,148 | <p>Late for the party but here goes: yes, there is convergence. For just the arithmetic and geometric mean this is classical (check the <a href="https://en.wikipedia.org/wiki/AGM_method" rel="nofollow">AGM method</a>). For more means there is an article by Krause</p>
<p>Krause, Ulrich,
<a href="http://dx.doi.org/10.4171/EM/109" rel="nofollow">Compromise, consensus, and the iteration of means</a>.
Elem. Math. 64 (2009), no. 1, 1–8. </p>
<p>which proves convergence in a wider setting. You can even change the used means from iteration to iteration and still obtain convergence as my brother and me proved here</p>
<p>Lorenz, Jan; Lorenz, Dirk A.,
<a href="http://dx.doi.org/10.1109/TAC.2010.2046086" rel="nofollow">On conditions for convergence to consensus.</a>
IEEE Trans. Automat. Control 55 (2010), no. 7, 1651–1656. </p>
|
102,624 | <p>I recently became interested in Maass cusp forms and heard people mentioning a "multiplicity one conjecture". As far as I understood it, it says that the dimension of the space of Maass cusp forms for a fixed eigenvalue should be at most one.</p>
<p>Since Maass cusp forms are always defined for a Fuchsian lattice, I wonder:</p>
<ol>
<li>for which lattices has this been conjectured?</li>
<li>what is the motivation for this conjecture?</li>
<li>to whom is this conjecture due?</li>
<li>is it published somewhere?</li>
<li>is it proven in some cases?</li>
</ol>
| GH from MO | 11,919 | <p>This conjecture is usually stated for $\mathrm{SL}_2(\mathbb{Z})$, and it is widely open. I think it is folklore, and is stated in several papers, e.g. in Luo: Nonvanishing of $L$-values and the Weyl law (before (3)).</p>
<p>The motivation, I think, is similar as with the conjecture for the multiplicity of the Riemann zeta zeros. The belief is that there is no "accidental" algebraic independence among the eigenvalues of the Laplacian or the zeros of an automorphic $L$-function. For example, the Laplacian eigenvalue $1/4$ is expected to "come from" an even Galois representation, while the zero $1/2$ is expected to "come from" rational points of infinite order on an abelian variety. For $\mathrm{SL}_2(\mathbb{Z})$ or $\zeta(s)$ we don't know of any object that would "impose" any algebraic independence on the data, hence we believe that in those cases the data is entirely transcendental.</p>
<p>For congruence subgroups $\Gamma_0(N)$ the "multiplicity one conjecture" is false, because the eigenvalue $1/4$ is known to occur with multiplicity for some $N$'s. The known examples come from even Galois representations. I think it is safe to believe that the multiplicities are bounded for any $N$.</p>
|
1,569,936 | <p>Given $a_0, a_1,...,a_{n-1} \in \mathbb{C}$, I am trying to understand the following calculation of the determinant of the matrix below:
$$
\text{det}
\begin{bmatrix}
x & 0 & 0 & ... & 0 & a_0 \\
-1 & x & 0 & ... & 0 & a_1 \\
0 & -1 & x & ... & 0 & a_2 \\
. \\
. \\
. \\
0 & 0 & 0 & ... & -1 & x + a_{n-1} \\
\end{bmatrix}
\\ =
(x) \text{ det}
\begin{bmatrix}
x & 0 & 0 & ... & 0 & a_1 \\
-1 & x & 0 & ... & 0 & a_2 \\
0 & -1 & x & ... & 0 & a_3 \\
. \\
. \\
. \\
0 & 0 & 0 & ... & -1 & x + a_{n-1} \\
\end{bmatrix}
+
\text{det}
\begin{bmatrix}
0 & 0 & ... & 0 & a_0 \\
-1 & x & ... & 0 & a_2 \\
. \\
. \\
. \\
0 & 0 & ... & -1 & x + a_{n-1} \\
\end{bmatrix}
\\ = x(x^{n-1} + a_{n-1}x^{n-2}+...+a_1) + (-1)^{n-1}\text{det}
\begin{bmatrix}
-1 & x & ... & 0 & a_2 \\
. \\
. \\
. \\
0 & 0 & ... & -1 & x + a_{n-1} \\
0 & 0 & ... & 0 & a_0 \\
\end{bmatrix}
$$</p>
<p>I do not understand:
(1) how the determinant can be broken up into the sum of the determinants of the 2 smaller matrices, and (2) how the determinants of the 2 smaller matrices are what they are.</p>
| stity | 285,341 | <p>If $A=(a_{i,j}) \in M_n(\mathbb{C})$, $$\det(A) = \sum_{k=1}^{n} (-1)^{1+k}a_{1,k} \det(\Delta_{1,k}) $$
where $\Delta_{1,k}$ is $A$ with the row and column of $a_{1,k}$ removed.</p>
<p>Here the sum only has 2 nonzero terms, because the other entries of the first row are equal to $0$.</p>
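<p>A hedged SymPy sketch (the construction and symbol names are mine) that builds the matrix for the instance $n=4$ and confirms the determinant comes out to $x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0$:</p>

```python
import sympy as sp

n = 4
x = sp.symbols('x')
a = sp.symbols('a0:4')            # a0, a1, a2, a3
M = sp.zeros(n)
for i in range(n):
    M[i, i] = x                   # x on the diagonal
    if i > 0:
        M[i, i - 1] = -1          # -1 on the subdiagonal
    M[i, n - 1] += a[i]           # last column carries a0..a_{n-1}
poly = sp.expand(M.det())
print(poly)  # equals x**4 + a3*x**3 + a2*x**2 + a1*x + a0
```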
|
96,657 | <p>I'm trying to understand an alternative proof of the idea that if $E$ is a dense subset of a metric space $X$, and $f\colon E\to\mathbb{R}$ is uniformly continuous, then $f$ has a uniformly continuous extension to $X$.</p>
<p>I think I know how to do this using Cauchy sequences, but there is this suggested alternative. For each $p\in X$, let $V_n(p)$ be the set of $q\in E$ such that $d(p,q)<\frac{1}{n}$. Then prove that the intersections of the closures
$$
A=\bigcap_{n=1}^\infty\overline{f(V_n(p))}
$$
consists of a single point, $g(p)$, and so $g$ is the desired continuous extension of $f$. Why is this intersection a single point, and why is $g$ continuous?</p>
<hr>
<p>This is what I did so far. Since $f$ is uniformly continuous, for given $\epsilon>0$, there is $\delta>0$ such that $\text{diam }f(V)<\epsilon$ whenever $\text{diam }V<\delta$. Since $V_n(p)$ has diameter at most $\frac{2}{n}$, taking $n>2/\delta$ would imply
$$
\text{diam }f(V_n(p))=\text{diam }\overline{f(V_n(p))}<\epsilon
$$
So I think $\lim_{n\to\infty}\text{diam }\overline{f(V_n(p))}=0$, which would imply $A$ consists of at most one point. I noticed that the closures form a descending sequence of closed sets, but I couldn't tell if they are bounded since $X$ is an arbitrary metric space, in order to conclude that the intersection is nonempty, and hence a single point.</p>
<p>Lastly, why is $g$ continuous at points $p\in X\setminus E$? I was trying to think of an argument with sequences converging to $p$, since $p$ is a limit point of $E$, but got stumped on how to show $g$ is actually continuous. Thanks.</p>
| Levon Haykazyan | 11,753 | <p>As Srivatsan notes, any set with a finite diameter is bounded. You have shown that for a fixed $\varepsilon$, ${\rm diam} \overline {f(V_n(p))} < \varepsilon$, starting from some $n$. So starting from some $n$, $\overline {f(V_n(p))}$ are all bounded and hence compact. Furthermore the image of a <em>real</em> uniformly continuous function on the bounded set is bounded. This is Exercise 4.8 in Rudin.</p>
<p>For continuity on $p \in X$ you do the following. For $\varepsilon > 0$ you essentially want a $\delta = {1 \over n}$ so small that ${\rm diam} \overline {f(V_n(p))} < \varepsilon$. Now if $q \in X$ is such that $d(p, q) < {1 \over n}$, then you can pick $m$ so large that $V_m(q) \subseteq V_n(p)$. Then $\overline {f(V_m(q))} \subseteq \overline {f(V_n(p))}$ and so $g(p), g(q) \in \overline {f(V_n(p))}$. This implies that $d(g(p), g(q)) < \varepsilon$. By uniform continuity of $f$ you can pick $\delta$ independent of $p$ and this gives you uniform continuity.</p>
|
96,657 | <p>I'm trying to understand an alternative proof of the idea that if $E$ is a dense subset of a metric space $X$, and $f\colon E\to\mathbb{R}$ is uniformly continuous, then $f$ has a uniformly continuous extension to $X$.</p>
<p>I think I know how to do this using Cauchy sequences, but there is this suggested alternative. For each $p\in X$, let $V_n(p)$ be the set of $q\in E$ such that $d(p,q)<\frac{1}{n}$. Then prove that the intersections of the closures
$$
A=\bigcap_{n=1}^\infty\overline{f(V_n(p))}
$$
consists of a single point, $g(p)$, and so $g$ is the desired continuous extension of $f$. Why is this intersection a single point, and why is $g$ continuous?</p>
<hr>
<p>This is what I did so far. Since $f$ is uniformly continuous, for given $\epsilon>0$, there is $\delta>0$ such that $\text{diam }f(V)<\epsilon$ whenever $\text{diam }V<\delta$. Since $V_n(p)$ has diameter at most $\frac{2}{n}$, taking $n>2/\delta$ would imply
$$
\text{diam }f(V_n(p))=\text{diam }\overline{f(V_n(p))}<\epsilon
$$
So I think $\lim_{n\to\infty}\text{diam }\overline{f(V_n(p))}=0$, which would imply $A$ consists of at most one point. I noticed that the closures form a descending sequence of closed sets, but I couldn't tell if they are bounded since $X$ is an arbitrary metric space, in order to conclude that the intersection is nonempty, and hence a single point.</p>
<p>Lastly, why is $g$ continuous at points $p\in X\setminus E$? I was trying to think of an argument with sequences converging to $p$, since $p$ is a limit point of $E$, but got stumped on how to show $g$ is actually continuous. Thanks.</p>
| chenu | 868,600 | <p>Let, <span class="math-container">$p\in X$</span>, we define <span class="math-container">$V_n(p)=\big\{q\in E: d(p,q)<\frac{1}{n}\big\}$</span><br />
We have that, <span class="math-container">$V_n(p)\neq\phi\,\forall\,n\in\Bbb N$</span> (Since, <span class="math-container">$\overline E=X$</span>)<br />
Also note that,
<span class="math-container">\begin{equation}\tag{1}
\lim_{n\to\infty}{\mbox{diam}} (V_n(p))=0
\end{equation}</span></p>
<p>Now, <span class="math-container">$f:E\to Y$</span> is uniformly continuous.<br />
Hence, <span class="math-container">$\forall\,\epsilon>0\,\exists\,\delta>0$</span>, such that whenever <span class="math-container">$F\subset E$</span> and diam<span class="math-container">$(F)<\delta$</span> then diam<span class="math-container">$(f(F))<\epsilon$</span><br />
Due to (1), we have <span class="math-container">$k\in\Bbb N$</span> such that diam<span class="math-container">$(V_k(p))<\delta\implies$</span> diam<span class="math-container">$(f(V_k(p)))<\epsilon$</span><br />
<span class="math-container">$\implies$</span> diam<span class="math-container">$(f(V_n(p)))<\epsilon\,\forall\,n\geq k$</span> (Since, <span class="math-container">$V_1(p)\supset V_2(p)\supset V_3(p)\dots$</span>)<br />
<span class="math-container">$$\implies \lim_{n\to\infty}{\mbox{diam$(f(V_n(p)))=0$}}$$</span>
<span class="math-container">\begin{equation}\tag{2}
\implies \lim_{n\to\infty}{\mbox{diam$\overline{(f(V_n(p)))}=0$}}
\end{equation}</span>
<span class="math-container">$Y$</span> is complete, <span class="math-container">$\overline{f(V_n(p))}\neq\phi\,\forall\,n\in\Bbb N$</span> and forms the sequence of nested closed sets (by (2)), hence by Cantor's intersection theorem we have:<span class="math-container">$$\bigcap_{n=1}^\infty\overline{f(V_n(p))}=\{y\}$$</span>
Now, we define <span class="math-container">$g:X\to Y$</span> as:
<span class="math-container">\begin{equation}\tag{3}
g(x)=
\begin{cases}
f(x) & x\in E\\
\bigcap_{n=1}^\infty\overline{f(V_n(x))} & x\in X\backslash E
\end{cases}
\end{equation}</span>
Now, we let <span class="math-container">$x\in X\backslash E$</span>. <br />
From (2) we have, <span class="math-container">$\forall\,\epsilon>0\,\exists\,k_1\in\Bbb N$</span> such that, diam<span class="math-container">$\overline{(f(V_{k_1}(x))}<\epsilon$</span><br />
<span class="math-container">$\implies$</span> diam<span class="math-container">$(g(\overline{V_{k_1}(x)}))<$</span> diam<span class="math-container">$\overline{(g(V_{k_1}(x)))}<\epsilon\implies d'(g(a),g(x))<\epsilon\,\forall\,a\in\overline{V_{k_1}(x)}$</span><br />
Now, we easily can verify that, <span class="math-container">$B_{\frac{1}{k_1}}(x)\subseteq\overline{V_{k_1}(x)}$</span><br />
Hence, <span class="math-container">$g$</span> is continuous at <span class="math-container">$x$</span>.<br />
Now, if <span class="math-container">$x\in E$</span>, then also <span class="math-container">$g$</span> is continuous at <span class="math-container">$x$</span> (By continuity of <span class="math-container">$f$</span>).<br />
Therefore, <span class="math-container">$g:X\to Y$</span> is continuous extension of <span class="math-container">$f$</span>. <span class="math-container">$\qquad\qquad\qquad\square$</span></p>
|
4,568,221 | <p><span class="math-container">$$\iint\limits_{D} \left(x^2+y^2\right)\mathrm{d}x \mathrm{d}y$$</span>
where <span class="math-container">$D$</span> is the region bounded by the curves
<span class="math-container">$x^2-y^2=1,\hspace{0.5cm} x^2-y^2=9,\hspace{0.5cm} xy=2,\hspace{0.5cm} xy=4$</span></p>
<p>I tried to use the polar coordinate transformation
<span class="math-container">\begin{cases}
x &= \rho \cos \theta \\
y &= \rho \sin \theta
\end{cases}</span>
but I don’t know how to find the range of <span class="math-container">$\rho$</span> and <span class="math-container">$\theta$</span></p>
| Robert Z | 299,698 | <p>I don't see a simple way to evaluate the integral in polar coordinates. In my opinion it is better to consider the transformation <span class="math-container">$(u,v)=(x^2-y^2,xy)$</span>; then the Jacobian is equal to
<span class="math-container">$$\left|\frac{\partial(u,v)}{\partial(x,y)}\right|=\left|\det\left(\begin{bmatrix}2x&-2y\\y &x \end{bmatrix}\right)\right|=2(x^2+y^2).$$</span>
Therefore
<span class="math-container">$$\begin{align*}
\iint\limits_{D} \left(x^2+y^2\right)dx dy&=\iint\limits_{D_+} 2\left(x^2+y^2\right)dx dy
=\iint\limits_{D^+}\left|\frac{\partial(u,v)}{\partial(x,y)}\right|dx dy\\
&=\iint\limits_{[1,9]\times[2,4]}dudv=(9-1)\cdot(4-2)=16
\end{align*}$$</span>
where <span class="math-container">$D=\{(x,y): 1\le x^2-y^2\le 9, 2\le xy\le 4\}=D_-\cup D_+$</span>
with <span class="math-container">$D_+=D\cap (\mathbb{R}^+)^2$</span> and <span class="math-container">$D_-=D\cap (\mathbb{R}^-)^2=-D_+$</span>.</p>
<p>Below a picture of <span class="math-container">$D_+$</span>.
<a href="https://i.stack.imgur.com/4XbQx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4XbQx.png" alt="enter image description here" /></a></p>
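<p>A rough Monte Carlo cross-check of the value $16$ (the bounding box for $D_+$ is my own estimate; the factor $2$ accounts for $D_-$):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
# rough bounding box for D_+ (my own estimate of the x > 0, y > 0 branch)
x = rng.uniform(1.4, 3.4, N)
y = rng.uniform(0.5, 2.0, N)
inside = (1 <= x**2 - y**2) & (x**2 - y**2 <= 9) & (2 <= x * y) & (x * y <= 4)
box_area = (3.4 - 1.4) * (2.0 - 0.5)
# the factor 2 accounts for the mirror-image component D_- in the third quadrant
estimate = 2 * box_area * np.mean(np.where(inside, x**2 + y**2, 0.0))
print(round(estimate))  # 16
```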
|
1,042,375 | <p><strong>Question:</strong></p>
<blockquote>
<p>Let $f: \mathbb{R} \to \mathbb{R}$ be a continuous function. Prove that $\{ x \in \mathbb{R} \mid f(x) > 0\}$ is an open subset of $\mathbb{R}$.</p>
</blockquote>
<p>At first I thought this was quite obvious, but then I came up with a counterexample. What if $f(x)=1$ for all $x$? Then the set becomes $\{ x \in \mathbb{R} \mid f(x) = 1\}$, a proper subset of the set in question, which is a closed subset.</p>
| 5xum | 112,884 | <p>The set $\{x\in\mathbb R\mid f(x) = 1\}$, for the function defined by $f(x) = 1$ for every $x$, is $\mathbb R$, the whole set of real numbers. This is an open set, so no, you did not find a contradiction.</p>
<hr>
<p>In case you don't believe me:</p>
<p>By definition, a set $X\subseteq \mathbb R$ is open if, for every $x\in X$, there exists an interval $(a,b)$ with $x\in(a,b)\subseteq X$. In the case of $X=\mathbb R$ you can, for any $x\in\mathbb R$, find such an interval (e.g. $(x-1,x+1)$), meaning $\mathbb R$ is an open subset of $\mathbb R$.</p>
<hr>
<p>You may say "but $\mathbb R$ is a closed subset of $\mathbb R$!". Well, yes it is. But there is nothing in the definition of closed and open sets which demands that a set cannot be both closed and open. In fact, in topology, sets which are both closed and open ("clopen" sets) are nothing strange; for instance, a space is called connected precisely when its only clopen subsets are the empty set and the whole space.</p>
|
59,965 | <p>If I have a function $f(x,y)$, is it always true that I can write it as a product of single-variable functions, $u(x)v(y)$?</p>
<p>Thanks.</p>
| Ross Millikan | 1,827 | <p>No, it is not. It is unusual that you can do so. For example, $f(x,y)=x+y$ cannot be.</p>
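<p>One way to see this concretely: if $f(x,y)=u(x)v(y)$, then $f(x_1,y_1)f(x_2,y_2)=f(x_1,y_2)f(x_2,y_1)$ for all sample points, and $f(x,y)=x+y$ violates this. A small sketch (the function name is mine, and this checks only a necessary condition on a finite grid):</p>

```python
import math

def looks_separable(f, xs, ys, tol=1e-9):
    # necessary condition for f(x, y) = u(x) * v(y):
    # f(x1, y1) * f(x2, y2) == f(x1, y2) * f(x2, y1) at all sample points
    return all(
        abs(f(x1, y1) * f(x2, y2) - f(x1, y2) * f(x2, y1)) < tol
        for x1 in xs for x2 in xs for y1 in ys for y2 in ys
    )

print(looks_separable(lambda x, y: x + y, [1, 2, 3], [1, 2, 3]))            # False
print(looks_separable(lambda x, y: x * math.exp(y), [1, 2, 3], [1, 2, 3]))  # True
```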
|
449,296 | <p>I am trying to deduce how mathematicians decide on what axioms to use and why not other axioms, I mean surely there is an infinite amount of available axioms. What I am trying to get at is surely if mathematicians choose the axioms then we are inventing our own maths, surely that is what it is but as it has been practiced and built on for so long then it is too late to change all this so we have had to adapt and create new rules?? I'm not sure how clear what I am asking is or whether it is even understandable but would appreciate any answers or comments, thanks.</p>
| Ittay Weiss | 30,953 | <p>Mathematics does not exist in a vacuum. It is strongly related, via applications, to the world around us. Mathematicians choose axioms according to what works well when we try to use the insights and results flowing from these axioms to better understand problems (usually from science) that we care about. </p>
<p>To draw an analogy with painting, a painter can surely mix colours in endless combinations and spread paint on canvas in equally endless possibilities. But, artists don't just randomly spread paint on canvas. The reason is that their art does not exist in a vacuum. It is strongly related to human culture, the physical world around us, and the predispositions of the human brain. These dictate what is considered good art, and so guide the artist in the creation of a good painting.</p>
|
2,311,848 | <p>$X$ and $Y$ are independent r.v.'s and we know $F_X(x)$ and $F_Y(y)$. Let $Z=max(X,Y)$. Find $F_Z(z)$.</p>
<p>Here's my reasoning: </p>
<p>$F_Z(z)=P(Z\leq z)=P(max(X,Y)\leq z)$. </p>
<p>I claim that we have 2 cases here: </p>
<p>1) $max(X,Y)=X$. If $X<z$, we are guaranteed that $Y<z$, so $F_Z(z)=P(Z\leq z)=P(X<z)=F_X(z)$</p>
<p>2) $max(X,Y)=Y$. Similarly, $F_Z(z)=P(Z\leq z)=P(Y<z)=F_Y(z)$</p>
<p>Since we're interested in either case #1 or #2, </p>
<p>$F_Z(z)=F_X(z)+F_Y(z)-F_X(z)*F_Y(z)$</p>
<p>However, it's wrong and I know it. But I would like to know where the flaw in my reasoning is. I <strong><em>know the answer</em></strong> to this problem, I just want to know at what moment my reasoning fails.</p>
| mlc | 360,141 | <p>Everything works until you combine the two cases with "either... or..." instead of "and".</p>
<p>In practice, your last formula computes $P(X \le z \text{ or } Y \le z)$, the probability that <em>at least one</em> of the two is $\le z$; by inclusion-exclusion this is $F_X(z)+F_Y(z)-F_X(z)F_Y(z)$, which is the CDF of $\min(X,Y)$, not of $\max(X,Y)$. For the maximum you need <em>both</em> $X \le z$ <em>and</em> $Y \le z$, and by independence $F_Z(z)=P(X\le z, Y\le z)=F_X(z)F_Y(z)$.</p>
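<p>A quick simulation (my own illustration, using uniform variables so that $F_X(z)=z$ on $[0,1]$) makes the distinction vivid: the product $F_X F_Y$ matches the maximum, while the inclusion-exclusion expression matches the minimum:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.uniform(size=n)            # F_X(z) = z on [0, 1]
Y = rng.uniform(size=n)
z = 0.7

emp_max = np.mean(np.maximum(X, Y) <= z)   # needs X <= z AND Y <= z
emp_min = np.mean(np.minimum(X, Y) <= z)   # needs X <= z OR Y <= z
print(emp_max)  # close to F_X(z) * F_Y(z) = 0.49
print(emp_min)  # close to F_X + F_Y - F_X * F_Y = 0.91
```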
|
61,316 | <p>Hi all,</p>
<p>I heard a claim that if I have a matrix $A\in\mathbb R^{n\times n}$ such that $A^k \to 0 \ (\text{for }k\to\infty )$
(that is, every entry of $A^k$ converges to $0$ as $k\to \infty$),
then $I-A$ is invertible.</p>
<p>Does anyone know if there is a name for such a matrix, or how (for general knowledge) to prove this?</p>
| Per Alexandersson | 1,056 | <p>It is quite easy:</p>
<p>Consider the sum $\sum_{n=0}^\infty A^n$.
Your condition makes sure that this converges. At the same time, pretend that this is a usual
geometric series: then the sum is given by $1/(1-A)$ or, if you wish, the multiplicative inverse of $I-A$. To make this precise, note the telescoping identity $(I-A)\sum_{n=0}^N A^n = I - A^{N+1} \to I$ as $N\to\infty$.</p>
<p>So in short, $I-A$ has an inverse, and it is given by the converging sum $\sum_{n=0}^\infty A^n$.</p>
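<p>A small numerical illustration of the Neumann series (the matrix is my own choice, with spectral radius below one so that its powers tend to zero):</p>

```python
import numpy as np

# a matrix whose powers tend to zero (spectral radius is about 0.48)
A = np.array([[0.2, 0.5],
              [0.1, 0.3]])
series = sum(np.linalg.matrix_power(A, k) for k in range(200))
inv_direct = np.linalg.inv(np.eye(2) - A)
print(np.allclose(series, inv_direct))  # True
```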
|
238,970 | <p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how tetrations can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iterated operations (the first three being addition, multiplication, and exponentiation, while the operation after tetration is pentation).</p>
<p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p>
<p>My question is simply, or maybe not so simply, what is the value of a number "raised" to a fractional tetration number. What would the value of 3 with a tetration number of 4/3 be?</p>
<p>Thanks for anyone's insight</p>
| Gottfried Helms | 1,714 | <p>This is not an answer, just an extended comment to the answer of @MarkHunter .
The <a href="https://ariwatch.com/VS/Algorithms/TheFourthOperation.htm" rel="nofollow noreferrer">linked article</a> refers to many common topics, but also to something of G. Szekeres and it seems it follows a similar idea as I have seen it in an article of Peter Walker (1970/80).
I just implemented that method of Peter Walker (I hope I got it all right) and compared the results with @SheldonL's Kneser method.</p>
<p>Just to get an impression of how well the two methods approximate each other, I drew the following picture. Starting at <span class="math-container">$x_0=0.1$</span>, I computed the iterates with heights <span class="math-container">$h=0..4 $</span> in steps of <span class="math-container">$1/40$</span>. The numbers from the two methods overlay well (blue & green); however, the small differences might still be systematic (pink, y-scale on the right-hand side). The genuine differences are those in the interval <span class="math-container">$h=0..1$</span>, because the other intervals are computed via the functional equation (at least in my version of P. Walker's idea).<br />
I've no further analysis so far.</p>
<p><a href="https://i.stack.imgur.com/E1mWY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1mWY.png" alt="image" /></a></p>
|
2,832,614 | <blockquote>
<p>Prove by induction that
$$\lim_{x \to a} \frac{x^n-a^n}{x-a}=na^{n-1}.$$</p>
</blockquote>
<p>I did a strange proof using two initial results: we know the result is true for $n=1$ and $n=2$. Assuming the result is true for $n=k-1$ and $n=k$, I can prove the result for $n=k+1$. For this I used my assumption for the case of $n=k$, and multiplied by a convenient factor of $1$ to get the desired result:
$$\lim_{x \to a} \frac{x^{k}-a^{k}}{x-a}·\frac{x+a}{x+a}.$$
During the simplification I had to use my $n=k-1$ case result as well.</p>
<p>Is this proof OK in terms of induction principle? I can show my whole working if needed. </p>
<p>Thank you.</p>
<p><strong>Edit</strong></p>
<p>whole proof:</p>
<p>$\lim_{x \to a} \frac{x^{k}-a^{k}}{x-a}·\frac{x+a}{x+a}=k.a^{k-1}$ (I'll omit limit notation for clarity)</p>
<p>$$\frac{x^{k}-a^{k}}{x-a}·\frac{x+a}{x+a}=k.a^{k-1}$$</p>
<p>$$\frac{1}{2a}\frac{x^{k+1}+ax^k-a^kx-a^{k+1}}{x-a}=k.a^{k-1}$$</p>
<p>$$\frac{1}{2a}(\frac{x^{k+1}-a^{k+1}}{x-a}+\frac{ax(x^{k-1}-a^{k-1})}{x-a})=k.a^{k-1}$$</p>
<p>$$\frac{1}{2a}[\frac{x^{k+1}-a^{k+1}}{x-a}+{a^2(k-1).a^{k-2}}]=k.a^{k-1}$$</p>
<p>Hence, $$\lim_{x \to a} \frac{x^{k+1}-a^{k+1}}{x-a}=(k+1)a^{k}.$$</p>
| N. S. | 9,176 | <p><strong>Hint</strong>
$$\frac{x^{n+1}-a^{n+1}}{x-a}=\frac{x^{n+1}-ax^n}{x-a}+\frac{ax^n-a^{n+1}}{x-a}$$</p>
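<p>Not a substitute for the induction, but the claimed limit can be machine-checked for small $n$; a SymPy sketch:</p>

```python
import sympy as sp

x, a = sp.symbols('x a')
for n in range(1, 6):
    # cancel the common factor (x - a), then substitute x = a
    q = sp.cancel((x**n - a**n) / (x - a))
    assert sp.simplify(q.subs(x, a) - n * a**(n - 1)) == 0
print("limit formula verified for n = 1..5")
```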
|
77,136 | <p>I have a list of multidimensional points, for example {{x1,y1},{x2,y2},...}. I have duplicate values for the 'x' coordinates, and for the duplicate 'x' elements I need to find the corresponding minimum 'y'. A sample of this list is the following:</p>
<pre><code>l1={{1, 1.43E-46}, {21, 2.79E-48}, {41, 3.22E-45}, {41,1.74E-46}, {81, 2.77E-46}, {121,9.97E-48}, {161, 1.24E-45}, {181,1.19E-45}}
</code></pre>
<p>I need to rewrite the list keeping, for duplicate 'x' coordinates, only the element with the minimum value of 'y'; for the list above this would be:</p>
<pre><code>l1={{1, 1.43E-46}, {21, 2.79E-48}, {41,1.74E-46}, {81, 2.77E-46}, {121,9.97E-48}, {161, 1.24E-45}, {181,1.19E-45}}
</code></pre>
<p>Since from the duplicate coordinates 'x', the minimum 'y' is:</p>
<pre><code> {41,1.74E-46}
</code></pre>
| kglr | 125 | <pre><code>l1 = {{1, 1.43 10^-46}, {21, 2.79 10^-48}, {41, 3.22 10^-45}, {41,1.74 10^-46},
{81, 2.77 10^-46}, {121, 9.97 10^-48}, {161, 1.24 10^-45}, {181, 1.19 10^-45}};
</code></pre>
<p>In version 10, you can also use <code>MinimalBy</code>:</p>
<pre><code> Join@@MinimalBy[Last]/@GatherBy[l1, First]
(* {{1, 1.43*10^-46}, {21, 2.79*10^-48}, {41, 1.74*10^-46}, {81, 2.77*10^-46},
{121, 9.97*10^-48}, {161, 1.24*10^-45}, {181, 1.19*10^-45}} *)
</code></pre>
<p>or <code>Merge</code>:</p>
<pre><code>List@@@Normal@Merge[#->#2&@@@l1, Min]
(* {{1, 1.43*10^-46}, {21, 2.79*10^-48}, {41, 1.74*10^-46}, {81, 2.77*10^-46},
{121, 9.97*10^-48}, {161, 1.24*10^-45}, {181, 1.19*10^-45}} *)
</code></pre>
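<p>For readers outside Mathematica, the same reduction is a one-pass dictionary fold; a Python sketch of the equivalent logic (not the original code):</p>

```python
l1 = [(1, 1.43e-46), (21, 2.79e-48), (41, 3.22e-45), (41, 1.74e-46),
      (81, 2.77e-46), (121, 9.97e-48), (161, 1.24e-45), (181, 1.19e-45)]

best = {}
for x, y in l1:
    best[x] = min(best.get(x, float('inf')), y)   # keep the smallest y per x
result = sorted(best.items())
print(result[2])  # (41, 1.74e-46): the smaller of the two x == 41 entries survives
```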
|
3,436,891 | <p>I have the following stupid question in my mind while I am studying for exams:
does <span class="math-container">$X<\infty \ a.s.$</span> imply that <span class="math-container">$\mathbb E(X)<\infty$</span>?</p>
<p>Further on this, is the converse of the above statement true? Please give me a brief summary on this. Thanks very much.</p>
<p>I thought this was true until I realized the following example: consider a simple symmetric random walk; we know that each state is null-recurrent. Let <span class="math-container">$\tau_L$</span> be the stopping time at which the walk first hits <span class="math-container">$L$</span>, started from <span class="math-container">$0$</span>. Then
<span class="math-container">$$\mathbb P(\tau_L<\infty)=1$$</span>
so <span class="math-container">$\tau_L<\infty \ a.s.$</span>
but we also know that
<span class="math-container">$$\mathbb E(\tau_L)=\infty$$</span>
Is this a counter-example? Thanks, I am a bit weak on measure theory.</p>
| Surb | 154,545 | <p>Take <span class="math-container">$(\Omega ,\mathcal F,\mathbb P)=([0,1], \mathcal B([0,1]), m)$</span> where <span class="math-container">$m$</span> denote the Lebesgue measure on <span class="math-container">$[0,1]$</span>. Consider <span class="math-container">$X(\omega )=\frac{1}{\omega }$</span>. </p>
<p>Then, <span class="math-container">$X(\omega )<\infty $</span> a.s. but <span class="math-container">$\mathbb E[X]=\infty $</span>.</p>
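<p>A quick numeric sanity check (my own illustration, not part of the original answer): here <span class="math-container">$\mathbb E[X]=\int_0^1 \omega^{-1}\,d\omega$</span>, and truncating the integral at <span class="math-container">$\varepsilon$</span> gives <span class="math-container">$\ln(1/\varepsilon)$</span>, which grows without bound even though <span class="math-container">$X(\omega)$</span> is finite for every <span class="math-container">$\omega>0$</span>:</p>

```python
import math

# int_eps^1 (1/w) dw = ln(1/eps): unbounded as eps -> 0+
truncated = [math.log(1 / eps) for eps in (1e-1, 1e-3, 1e-6, 1e-12)]
print(truncated)  # each truncation is larger than the last, without bound
```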
|
563,927 | <p>Show that $\mathbb{R}$
is not a simple extension of $\mathbb{Q}$
as follows:</p>
<p>a. $\mathbb{Q}$
is countable.</p>
<p>b. Any simple extension of a countable field is countable.</p>
<p>c. $\mathbb{R}$
is not countable.</p>
<p>I've done a. and c. Can anyone give me a hint to prove b.?</p>
| Prahlad Vaidyanathan | 89,789 | <p>Let $F$ be a countable field, then the collection of all polynomials of degree $\leq n$ is countable. Hence, $F[x]$ is countable, being the countable union of countable sets. Hence, $F[x] \times F[x]\setminus\{0\}$ is countable. There is a surjective function $F[x]\times F[x]\setminus\{0\} \to F(x)$ by
$$
(f(x), g(x)) \mapsto \frac{f(x)}{g(x)}
$$
Hence, $F(x)$ is countable. If $a$ is transcendental over $F$, then $F(x) \cong F(a)$, which is thus countable.</p>
|
1,820,690 | <p>Find two numbers, $A$ and $B$, both smaller than $100$, that have a lowest common multiple of $450$ and a highest common factor of $15$.</p>
<p>I know that this involves the formula</p>
<p>$A \times B = \mathrm{LCM} \times \mathrm{HCF}$</p>
<p>But I don't quite understand the above formula, so I just memorised it, and that is why I can't apply it now. Can anyone explain how this formula is derived? Thanks a lot in advance!</p>
| Siddharth Bhat | 261,373 | <p>Replace $z^5 \to t$. Hence, this gives us $t^2 - t - 992 = 0$. Solving for the roots (use the quadratic equation) we get that the roots are</p>
<p>$t = 32, -31$</p>
<p>Hence, $z^5 = 32, -31$</p>
<p>For </p>
<p>$$
z^5 = 32 \\
z^5 = e^{i 2\pi n}2^5, n \in \mathbb{N} \\
z = 2 \cdot e^{\frac{i 2 \pi n}{5}}, n \in \mathbb{N}
$$</p>
<p>Hence, we need to think about the roots laid out in the complex plane with an angle of $\frac{360^\circ}{5} = 72^\circ$ between them. The roots with negative real part will lie between $90^\circ \leq \theta \leq 270^\circ$ - that is, the roots at $144^\circ$ and $216^\circ$.</p>
<p>Similarly, analyze
$$
z^5 = -31
$$</p>
<p>By symmetry, since the exponent is the same as the previous case ($5$), we will have the same angular distance - $72^\circ$. However, in this case, it is flipped about the $y$-axis (the imaginary axis) since we have a $-1$ factor. Hence, this time, we will have $3$ negative roots (since last time we had $3$ positive roots).</p>
<p>So, the total is $3 + 2 = 5$ roots with a negative real part</p>
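<p>The count can be confirmed numerically. Assuming the original equation was $z^{10} - z^5 - 992 = 0$ (consistent with the substitution above), here is a short Python check (my own addition):</p>

```python
import cmath

roots = []
for t in (32, -31):                       # solutions of t^2 - t - 992 = 0
    r = abs(t) ** (1 / 5)                 # modulus of each fifth root
    theta0 = 0.0 if t > 0 else cmath.pi   # argument of t
    roots.extend(cmath.rect(r, (theta0 + 2 * cmath.pi * k) / 5) for k in range(5))

# every candidate really solves z^10 - z^5 - 992 = 0
assert all(abs(z**10 - z**5 - 992) < 1e-6 for z in roots)

negative = [z for z in roots if z.real < 0]
print(len(negative))  # 5
```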
|
1,424,913 | <p>I am trying to solve the following problem.</p>
<blockquote>
<p>Let $G$ be a group. If $M, N \subset G$ are such that $x^{-1} M x = M$
and $x^{-1} N x = N$ for all $x \in G$ and $M \cap N = \{1\}$, prove
that $m n = n m$ for all $m \in M, n \in N$.</p>
</blockquote>
<p>I have already proven it for the specific case in which $M, N$ are subgroups of $G$ (see proof below). However, I can't prove it without this hypothesis. I think $M, N$ must be subgroups for the result to hold, but I can't find a counterexample to show this either.</p>
<p>Any help with a proof or counterexample is much appreciated.</p>
<hr>
<p>Current proof outline:</p>
<blockquote>
<p>$$ (m \cdot n)\cdot (n \cdot m)^{-1} = (m \cdot n)\cdot (m^{-1}
\cdot n^{-1}) =: k$$
$ k = (m \cdot n \cdot m^{-1}) \cdot n^{-1}
\in N$, because $(m \cdot n \cdot m^{-1}) \in N $ and $n^{-1} \in
N$ (and $N$ is subgroup).</p>
<p>$ k = m \cdot (n \cdot m^{-1} \cdot n^{-1}) \in M$, because $m \in
M $ and $(n \cdot m^{-1} \cdot n^{-1}) \in M$ (and $M$ is subgroup).</p>
<p>So $k \in M \cap N = \{1\}$, which implies $k = 1$ and
$$ (m \cdot n)\cdot (n \cdot m)^{-1} = 1 $$
$$ (m \cdot n)\cdot (n \cdot m)^{-1}\cdot (n \cdot m) = 1 \cdot (n \cdot m) $$
$$ m \cdot n = n \cdot m $$</p>
</blockquote>
| Sammy Black | 6,509 | <p>Note: I am assuming that the quantities in your equations are real numbers.</p>
<p>Since $\cosh u \ge 1$, with the minimum only occurring at $u = 0$, you're complicating the calculation too much. You can conclude immediately that $xy = 0$, so either $x = 0$ or $y = 0$ or both. (EDITED, to correct silly error.)</p>
<hr>
<p>By the way, since $\cosh$ is an even function, meaning that $\cosh(-u) = \cosh u$ for all $u \in \Bbb{R}$, it is <strong>not</strong> one-to-one, hence cannot have an inverse. However, if you artificially restrict its domain, say to $u \ge 0$ (so $\sinh u \ge 0$), then you can solve for the inverse:
$$
\cosh^2 u - \sinh^2 u = 1
\quad\Longrightarrow\quad
\sinh^2 u = \cosh^2 u - 1
\quad\Longrightarrow\quad \sinh u = \sqrt{\cosh^2 u - 1}
$$
Now,
\begin{align}
\cosh u + \sinh u &= e^u \\
\cosh u + \sqrt{\cosh^2 u - 1} &= e^u \\
\ln \bigl( \cosh u + \sqrt{\cosh^2 u - 1} \bigr) &= u \\
\end{align}
so if $x = \cosh u$ and $u \ge 0$, then
$$
u = \ln \bigl( x + \sqrt{x^2 - 1} \bigr),
$$
and this defines the restricted inverse. <a href="https://www.desmos.com/calculator/k4zpwirf5w" rel="nofollow">Here</a>'s a picture.</p>
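<p>A small numeric check of the restricted inverse (my own addition, in Python): for $u \ge 0$, $\ln\bigl(x+\sqrt{x^2-1}\bigr)$ with $x=\cosh u$ recovers $u$, and agrees with the library inverse:</p>

```python
import math

errors = []
for u in (0.0, 0.5, 1.0, 2.0, 5.0):
    x = math.cosh(u)
    recovered = math.log(x + math.sqrt(x * x - 1))
    errors.append(abs(recovered - u))
    assert abs(recovered - math.acosh(x)) < 1e-9   # agrees with the library inverse
print(max(errors))  # tiny: the formula inverts cosh on u >= 0
```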
|
115,269 | <p>I'm studying the remainder class $\mathbb{Z}_n$, I've grabbed something, but something else is unfocused. Let
$$20x \equiv 4\pmod{34}$$
then GCD(20,34)=2 so I rewrite as:
$$10x \equiv 2\pmod{17}$$
and successively:
$$10x \equiv 1\pmod{17}$$
Now I know $\gcd(10, 17)=1$</p>
<blockquote>
<p>Question 1: Why? Is this cause I've divided both $20$ and $34$ for the
same $\gcd=2$? If $d$ is a $\gcd$ for $a$ and $b$, then $\gcd(a/d, b/d)=1$?</p>
</blockquote>
<p>At this point I can:
$$1=10\alpha + 17\beta$$
that will be:
$$1=10(-5)+17\cdot 3$$</p>
<blockquote>
<p>Question 2: I know that $-5$ is the solution of equation, but why I have to choose $-5$ more than $3$?</p>
</blockquote>
<p>Now $-5$ is a solution for:
$$10x \equiv 1\pmod{17}$$
and $-5\cdot2$ is a solution for:
$$10x \equiv 2\pmod{17}$$
$[-10]_{17}$ is class, the entire set is $\{-10+17k: k\in\mathbb{Z}\}$.</p>
<blockquote>
<p>Question 3: This set will contain all invertible elements. Right? Now
If I took $[8]_{17}$ and $[7]_{17}$ why the class $[56]_{17}$ is not
in the same class of $[1]_{17}$ since it is invertible, matter of
fact, $\gcd(56,17)=1$? Is $\gcd(x,y)=1$ the only way to test if $[x]_y$ is
invertible? I know that element $a$ was invertible if $ba=ab=1$, but this
contradicts my knowledge.</p>
</blockquote>
| Community | -1 | <p>Verify the following equations. Note that the index runs only up to a finite stage, because this is false in many infinite cases (as I indicate at the end!)</p>
<p>$$\sum_{i=1}^n (a_i+b_i)=\sum_{i=1}^n a_i+\sum_{i=1}^n b_i\\ \sum_{i=1}^n c\cdot a_i=c\cdot\left(\sum_{i=1}^n a_i\right)$$</p>
<p>where $c$ is a constant independent of the running index.</p>
<p>So, the desired sum becomes, $$\sum_{i=1}^k (i^2+3i)=\sum_{i=1}^ki^2+3\sum_{i=1}^k i$$</p>
<p>As you have noted, you can now use, $\displaystyle\sum_{i=1}^ki=\dfrac{k(k+1)}{2}$ and $\displaystyle\sum_{i=1}^ki^2=\dfrac{k(k+1)(2k+1)}{6}$ to get the sum.</p>
<hr>
<p>Here is what I mean when I talk about the infinite series:</p>
<p>$$\sum_{i=1}^\infty (i-i)=\sum_{i=1}^\infty 0=0$$ but when you expand the sum as I wrote down above, you'll get $\infty-\infty$ which clearly refuses to make sense!</p>
<hr>
<p>Since you call this complicated, I am inclined to give you some references which, if you're mathematically inclined, you'll love. </p>
<p>Suggested Reading:</p>
<ul>
<li><p>Terry Tao -- Analysis, Volume I, TRIM series, Hindustan Book Agency.<a href="http://terrytao.wordpress.com/books/analysis-i/" rel="nofollow noreferrer">(here)</a> for errata and some links</p></li>
<li><p>Knopp -- Infinite sequence and Series, Dover Books on Mathematics <a href="http://rads.stackoverflow.com/amzn/click/0486601536" rel="nofollow noreferrer">(here)</a> </p></li>
<li><p>Thomas Bromwich -- An Introduction to the theory of Infinite Series <a href="http://www.archive.org/details/introductiontoth00bromuoft" rel="nofollow noreferrer">(archive.com link)</a></p></li>
</ul>
<p>You may like <a href="https://math.stackexchange.com/a/92998/21436">this</a> answer of mine in connection with the last reference.</p>
|
115,269 | <p>I'm studying the remainder class $\mathbb{Z}_n$, I've grabbed something, but something else is unfocused. Let
$$20x \equiv 4\pmod{34}$$
then GCD(20,34)=2 so I rewrite as:
$$10x \equiv 2\pmod{17}$$
and successively:
$$10x \equiv 1\pmod{17}$$
Now I know $\gcd(10, 17)=1$</p>
<blockquote>
<p>Question 1: Why? Is this cause I've divided both $20$ and $34$ for the
same $\gcd=2$? If $d$ is a $\gcd$ for $a$ and $b$, then $\gcd(a/d, b/d)=1$?</p>
</blockquote>
<p>At this point I can:
$$1=10\alpha + 17\beta$$
that will be:
$$1=10(-5)+17\cdot 3$$</p>
<blockquote>
<p>Question 2: I know that $-5$ is the solution of equation, but why I have to choose $-5$ more than $3$?</p>
</blockquote>
<p>Now $-5$ is a solution for:
$$10x \equiv 1\pmod{17}$$
and $-5\cdot2$ is a solution for:
$$10x \equiv 2\pmod{17}$$
$[-10]_{17}$ is class, the entire set is $\{-10+17k: k\in\mathbb{Z}\}$.</p>
<blockquote>
<p>Question 3: This set will contain all invertible elements. Right? Now
If I took $[8]_{17}$ and $[7]_{17}$ why the class $[56]_{17}$ is not
in the same class of $[1]_{17}$ since it is invertible, matter of
fact, $\gcd(56,17)=1$? Is $\gcd(x,y)=1$ the only way to test if $[x]_y$ is
invertible? I know that element $a$ was invertible if $ba=ab=1$, but this
contradicts my knowledge.</p>
</blockquote>
| Shaun Ault | 13,074 | <p>There are properties of series that can be used. Specifically, series are <em>linear</em>, which means
$$ \sum_{i=1}^k (ca_i + b_i) = c\sum_{i=1}^k a_i + \sum_{i=1}^k b_i, $$
for constants $c$. Thus,
$$ \sum_{i=1}^k (i^2 + 3i) = \sum_{i=1}^k i^2 + 3\sum_{i=1}^k i.$$
At that point, you need the formulas,
$$ \sum_{i=1}^k i^2 = \frac{k(k+1)(2k+1)}{6}, \qquad \sum_{i=1}^k i =
\frac{k(k+1)}{2}.$$
So we obtain:
$$ \sum_{i=1}^k (i^2 + 3i) = \ldots = \frac{k(k+1)(2k+1)}{6} + \frac{3k(k+1)}{2}.$$
Now what if we didn't have those two formulas? Well the sum of consecutive integers can be thought of as a "averaging" formula. Since $1, 2, 3, \ldots, k$ is an arithmetic sequence,
$$1+2+3+ \cdots + k = (\textrm{average of}\; 1, 2, 3, \ldots, k) \cdot k.$$
(convince yourself this is true by looking at small $k$ values.)
Furthermore, the average is the "middle" number of the arithmetic sequence, if $k$ is odd, or the average of the two middles if $k$ is even. Either way, the value $\frac{k+1}{2}$ is the average (it balances out the values on the left and right of it). So that
$$1+2+3+ \cdots + k = \frac{k+1}{2} \cdot k = \frac{k(k+1)}{2}.$$
Now the sum of consecutive squares can't be handled quite the same way. However, you can prove inductively that
$$ \sum_{i=1}^k i^p = (\textrm{polynomial in $k$, of degree $p+1$}). $$
So $\sum_{i=1}^k i^2 = ak^3 + bk^2 + ck + d$ for some constants $a, b, c, d$. Then use the first four values, $\sum_{i=1}^1 i^2 = 1$, $\sum_{i=1}^2 i^2 = 5$, etc. to determine $a, b, c, d.$ Anyway, that is how one can find the formula $\sum_{i=1}^k i^2= \frac{k(k+1)(2k+1)}{6}$, which we then simply memorize for future use.</p>
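<p>The closed form can be checked mechanically; the snippet below (my own verification, in Python) compares the brute-force sum against the two formulas combined:</p>

```python
def closed_form(k):
    # k(k+1)(2k+1)/6 + 3*k(k+1)/2, using integer arithmetic throughout
    return k * (k + 1) * (2 * k + 1) // 6 + 3 * k * (k + 1) // 2

checks = {k: sum(i * i + 3 * i for i in range(1, k + 1)) for k in (1, 2, 10, 100)}
assert all(closed_form(k) == v for k, v in checks.items())
print(checks[10])  # 550
```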
<p>Hope this helps!</p>
|
4,057,255 | <p>Suppose we have 10 items that we will randomly place into 6 bins, each with equal probability. I want you to determine the probability that we will do this in such a way that no bin is empty. For the analytical solution, you might find it easiest to think of the problem in terms of six events <span class="math-container">$A_i$</span>, <span class="math-container">$i = 1, \ldots, 6$</span>, where <span class="math-container">$A_i$</span> represents the event that bin <span class="math-container">$i$</span> is empty, and calculate <span class="math-container">$P(A_1^c \cap\cdots\cap A_6^c)$</span> using De Morgan's law.</p>
<p>Analytical= 1- P(bin one empty) + P(bin 2 empty) + P(bin 3 empty) + P(bin 4 empty) + P(bin 5 empty) + P(bin 6 empty) – P(1 and 2 empty) – P(1 and 3 empty) – P(1 and 4 empty) – P(1 and 5 empty) – P(1 and 6 empty) – P(2 and 3 empty) – P(2 and 4 empty) – P(2 and 5 empty) – P(2 and 6 empty) – P(3 and 4 empty) – P(3 and 5 empty) – P(3 and 6 empty) – P(4 and 5 empty) – P(4 and 6 empty) – P(5 and 6 empty) + P( 1 2 3 empty) + P(1 2 4 empty) + P (1 2 5) empty + P (1 2 6 empty) + P( 1 3 4 empty) + P(1 3 5 empty) + P( 1 3 6 empty) + P(1 4 5 empty) + P(1 4 6 empty) + P(1 5 6 empty) – P(1 2 3 4 empty) – P(1235 empty) – P(1236 empty) – P(2345 empty) – P(2346 empty) – P(3456 empty) + P(12345 empty) + P(23456 empty) + P(34561 empty) + P(45612 empty) – P(123456 empty)</p>
<p>1 - (6*P(1 specific empty bin) - 15*P(2 specific empty bins) + 20*P(3 specific empty bins) - 15*P(4 specific empty bins) + 6*P(5 specific empty bins))</p>
<p>This is what I have so far but I don't know how to determine the individual probabilities above.</p>
| true blue anil | 22,388 | <p>Another way is to use Stirling numbers of the second kind, which can be conceived of as the number of ways of putting <em>n</em> labelled objects (throws <span class="math-container">$1$</span> through <span class="math-container">$10$</span>)
into <em>k</em> identical boxes with no box left empty.
If we multiply <span class="math-container">$S2(10,6)$</span> by <span class="math-container">$6!$</span> to restore the identity of boxes, we immediately get the answer as <span class="math-container">$\dfrac{S2(10,6)\cdot 6!}{6^{10}} \approx 0.2718$</span></p>
<hr />
<p><strong>Addendum</strong></p>
<p>@user2661923 has added a method for manually
working out S2(10,6). Here is another way which is totally mechanical, without bothering about double counting. There are <span class="math-container">$5$</span> possible templates, viz <span class="math-container">$511111\;421111\;331111\;322111\;222211$</span> and the answer is</p>
<p><span class="math-container">$10!\left[\dfrac 1 {5!\;5!} + \dfrac 1 {4!2!\;4!}+\dfrac 1 {3!3!\;2!4!}+\dfrac1 {3!2!2!\;2!3!}+\dfrac1 {2!2!2!2!\;4!2!}\right]$</span></p>
<p>To explain, say, the third template: to get the arrangements you want <span class="math-container">$\binom{10}{3,3,1,1,1,1}\frac{6!}{2!4!}$</span> which simplifies to <span class="math-container">$\frac{10!6!}{3!3!1!1!1!1!2!4!}$</span> but we need to eliminate the <span class="math-container">$6!$</span> from the numerator and don't need to pad with <span class="math-container">$1!$</span></p>
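<p>Both routes can be checked with a few lines of code. The sketch below is my own verification (not from the answer), computing <span class="math-container">$S2(10,6)$</span> by the standard recurrence <span class="math-container">$S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$</span> and comparing it with the template sum:</p>

```python
from math import factorial

def stirling2(n, k):
    # S(n,k) = k*S(n-1,k) + S(n-1,k-1), with S(0,0) = 1
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

# Template sum: one term per shape 511111, 421111, 331111, 322111, 222211
f = factorial
templates = f(10) * (1 / (f(5) * f(5)) + 1 / (f(4) * f(2) * f(4))
                     + 1 / (f(3) * f(3) * f(2) * f(4))
                     + 1 / (f(3) * f(2) * f(2) * f(2) * f(3))
                     + 1 / (f(2) ** 4 * f(4) * f(2)))

prob = stirling2(10, 6) * f(6) / 6 ** 10
print(stirling2(10, 6), round(templates), round(prob, 4))  # 22827 22827 0.2718
```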
|
550,441 | <p>Say I roll a 6-sided die until its sum exceeds $X$. What is E(rolls)?</p>
| LeeNeverGup | 104,910 | <p>Let $X$ be a random variable with discrete uniform distribution on $\{1, \dots, 6\}$, and let $n$ be a number. If $m$ is the number of trials needed to exceed $n$, we can conclude:</p>
<p>- the sum of the first $m$ trials is more than $n$.</p>
<p>- the sum of the first $m-1$ trials is at most $n$.</p>
<p>So the question is: Let $Y_m$ be the sum of $m$ independent variables distributed as $X$. What is the probability that $Y_m<r$ for a given $r$?</p>
<p>I assume it can be solved, but the solution might involve scary integrals I don't like. Maybe someone else can continue from here. </p>
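<p>While the exact distribution is delicate, a Monte Carlo sketch (my own addition, in Python) gives a feel for the answer; by Wald's identity the mean number of rolls is roughly $(n + \text{mean overshoot})/3.5$:</p>

```python
import random

def mean_rolls_to_exceed(n, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = rolls = 0
        while s <= n:                 # stop once the running sum exceeds n
            s += rng.randint(1, 6)
            rolls += 1
        total += rolls
    return total / trials

estimate = mean_rolls_to_exceed(20)
print(estimate)  # close to (20 + 2.67) / 3.5, i.e. roughly 6.5
```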
|
3,480,123 | <p>In my master thesis, I'm trying to prove the following limit:
<span class="math-container">$$\lim_{\epsilon \to 0^+}\int_0^1 \frac{\left(\ln\left(\frac{\epsilon}{1-x}+1\right)\right)^\alpha}{x^\beta(1-x)^\gamma}\,\mathrm{d}x=0,$$</span>
where <span class="math-container">$\alpha, \beta, \gamma \in (0,1)$</span>.</p>
<p>Assigning some numerical values to <span class="math-container">$\alpha, \beta, \gamma$</span> and <span class="math-container">$\epsilon$</span> in the Wolfram, I could perceive such convergence does indeed seem to happen.</p>
<p>Appreciate any help.</p>
| Vasily Mitch | 398,967 | <p>You can encode the vector <span class="math-container">$PA$</span> as an algebraic sum of vectors <span class="math-container">$BA$</span>, <span class="math-container">$CA$</span> and <span class="math-container">$BA\times CA$</span>.</p>
|
64,977 | <p>Suppose I had a complete bipartite graph with edges each given some numerical "cost" value. Is there a way to select a subset of those edges such that each vertex on each side of the graph is mapped to each vertex on the other (one to one) and the total "cost" is maximized (or minimized)?</p>
<p>Has anyone ever formulated something equivalent to this?</p>
| Peter Sarkoci | 14,763 | <p>I like the question. You are actually asking, what is the smallest finite probability space (in size of $\Omega'$) on which one can have $d$ distinct pairwise independent (but not necessarily jointly independent) events of probability $\frac{1}{2}$. Just to explain the reformulation: once the $i$-th event is defined to be
$$E_{i}=\lbrace\omega\in\Omega'\;|\;\omega(i)=1\rbrace,$$
the "efficient substitute" criterion amounts to $\mathrm{Pr}(E_{i})=\frac{1}{2}$ and $\mathrm{Pr}(E_{i}\cap E_{j})=\frac{1}{4}$ for every $i,j$ with $1\le i,j\le d$ and $i\not=j$. The example for $d=3$ is a <a href="http://en.wikipedia.org/wiki/Pairwise_independence" rel="nofollow">well known case</a> of this situation.</p>
<p>Now, every such space satisfies $|\Omega'|=4n$ for some natural $n$ (the reason being obvious). Knowing this, we could ask an inverse question: given a $4n$-set $\Omega'$, what is the maximum size of a family $\mathcal{F}$ of $2n$-subsets of $\Omega'$ such that every two of them have intersection of size $n$. Knowing a precise answer to this, the original problem is solved as well: given a $d$ take the smallest $4n$ such that the maximum size of $\mathcal{F}$ is at least $d$.</p>
<p>There is a <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WDY-4K5SSV9-2&_user=10&_coverDate=11%252F30%252F2006&_alid=1751979224&_rdoc=3&_fmt=high&_orig=search&_origin=search&_zone=rslt_list_item&_cdi=6779&_sort=r&_st=13&_docanchor=&view=c&_ct=11541&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=f7d789c702d869ebb39ba6aed9d790e2&searchtype=a" rel="nofollow">paper</a> titled "Pairwise intersections and forbidden configurations". Let me quote from the abstract:</p>
<blockquote>
<p>Let $f_{m}(a,b,c,d)$ denote the maximum size of the family $\mathcal{F}$ of subsets of an $m$-element set for which there is <strong>no</strong> pair of subsets $A,B\in\mathcal{F}$ with $|A\cap B|\ge a$, $|A^{c}\cap B|\ge b$, $|A\cap B^{c}|\ge c$, and $|A^{c}\cap B^{c}|\ge d$.</p>
</blockquote>
<p>The count we are looking for is exactly $f_{4n}(n+1,n+1,n+1,n+1)$. Besides other interesting things, the paper gives also asymptotic estimates for this count; one of them gives $\Theta(4n^{2n-1})$ in our case.</p>
<p>Edit: unfortunately, I was too quick with the asymptotic estimate. The paper gives an estimate only for fixed $a,b,c,d$ as $m$ tends to $\infty$.</p>
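<p>The well-known $d=3$ case mentioned above can be verified directly (my own addition, in Python): on a four-point space with uniform measure, three events of probability $\frac{1}{2}$ are pairwise, but not jointly, independent:</p>

```python
from itertools import combinations

omega = {1, 2, 3, 4}                       # uniform probability 1/4 on each point
events = [{1, 2}, {1, 3}, {1, 4}]          # the classic d = 3 construction
p = lambda A: len(A) / len(omega)

assert all(p(E) == 0.5 for E in events)
for E, F in combinations(events, 2):
    assert p(E & F) == p(E) * p(F)         # = 1/4: pairwise independent

triple = events[0] & events[1] & events[2]
print(p(triple), 0.5 ** 3)  # 0.25 vs 0.125: not jointly independent
```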
|
3,827,449 | <p>For example, <span class="math-container">$(1,1,1)$</span> is such a point. The sphere must contain all points that satisfy the condition.</p>
<p>So, I've been milling over this question on and off for the past few days and just can't seem to figure it out. I think I would have to use the distance formula from some point to (0,0,0) or (3,3,3) but can not wrap my head around how to set it up, or what point to use. Any help would be greatly appreciated.</p>
| 2'5 9'2 | 11,123 | <p>Following your lead, here is how to get started:</p>
<p><span class="math-container">$$\left(\text{distance to }(3,3,3)\right)^2=\left(2\cdot{}\left(\text{distance to }(0,0,0)\right)\right)^2$$</span>
<span class="math-container">$$(x-3)^2+(y-3)^2+(z-3)^2=4\left(x^2+y^2+z^2\right)$$</span></p>
<p>A few more steps of algebra and you can expand this, regroup terms, normalize, complete the square, and end up with a sphere equation in standard form.</p>
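<p>Carrying that algebra out gives the sphere <span class="math-container">$(x+1)^2+(y+1)^2+(z+1)^2=12$</span>, with center <span class="math-container">$(-1,-1,-1)$</span> and radius <span class="math-container">$2\sqrt 3$</span>. As a sanity check (my own addition, in Python), the snippet below samples points on that sphere and confirms each is twice as far from <span class="math-container">$(3,3,3)$</span> as from the origin:</p>

```python
import math
import random

rng = random.Random(1)
radius = math.sqrt(12)
worst = 0.0
for _ in range(1000):
    # uniform random direction, scaled onto the candidate sphere
    u = [rng.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in u))
    x, y, z = (-1 + radius * c / norm for c in u)
    d_origin = math.sqrt(x * x + y * y + z * z)
    d_apex = math.sqrt((x - 3) ** 2 + (y - 3) ** 2 + (z - 3) ** 2)
    worst = max(worst, abs(d_apex - 2 * d_origin))
print(worst)  # tiny: every sampled point is twice as far from (3,3,3) as from the origin
```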
|
65,166 | <p>For a graph $G$, let its Laplacian be $\Delta = I - D^{-1/2}AD^{-1/2}$, where $A$ is the adjacency matrix, $I$ is the identity matrix and $D$ is the diagonal matrix with vertex degrees. I'm interested in the spectral gap of $G$, i.e. the first nonzero eigenvalue of $\Delta$, denoted by $\lambda_{1}(G)$.</p>
<p>Is it true that a randomly chosen (with uniform distribution) $d$-regular bipartite graph on $(n, n)$ vertices (with multiple edges allowed) has, with probability approaching $1$ as $n \to \infty$, $\lambda_1$ arbitrarily close to $1$ (i.e. we can make it arbitrarily close by taking $d$ large enough)?</p>
<p>If yes, is there a reference for this fact?</p>
<p>Proofs of expanding properties for random regular graphs which I have found in the literature usually bound the probability only from below by a constant, e.g. $1/2$, although I imagine that almost all random graphs actually have a good spectral gap.</p>
<p>Note: by $d$-regular bipartite graph I mean a graph in which each vertex (on the left and on the right) has degree $d$.</p>
| Alain Valette | 14,497 | <p>In the paper "On the second eigenvalue and random walks in random d-regular graphs" (Combinatorica 11 (1991), no. 4, 331–362), Joel Friedman considers a model of $2d$-regular random graphs on $n$ vertices, by selecting randomly and uniformly $d$ permutations from the symmetric group $S_n$, and looking at the undirected Schreier graph associated with these permutations and their inverses. He proves (using the trace method) that random graphs are close to being Ramanujan. Now his model naturally gives random $d$-regular bipartite graphs on $(n,n)$ vertices. Maybe his method can be adapted to that situation?</p>
|
65,166 | <p>For a graph $G$, let its Laplacian be $\Delta = I - D^{-1/2}AD^{-1/2}$, where $A$ is the adjacency matrix, $I$ is the identity matrix and $D$ is the diagonal matrix with vertex degrees. I'm interested in the spectral gap of $G$, i.e. the first nonzero eigenvalue of $\Delta$, denoted by $\lambda_{1}(G)$.</p>
<p>Is it true that a randomly chosen (with uniform distribution) $d$-regular bipartite graph on $(n, n)$ vertices (with multiple edges allowed) has, with probability approaching $1$ as $n \to \infty$, $\lambda_1$ arbitrarily close to $1$ (i.e. we can make it arbitrarily close by taking $d$ large enough)?</p>
<p>If yes, is there a reference for this fact?</p>
<p>Proofs of expanding properties for random regular graphs which I have found in the literature usually bound the probability only from below by a constant, e.g. $1/2$, although I imagine that almost all random graphs actually have a good spectral gap.</p>
<p>Note: by $d$-regular bipartite graph I mean a graph in which each vertex (on the left and on the right) has degree $d$.</p>
| Doron | 31,917 | <p>Check out the new paper <a href="http://arxiv.org/abs/1212.5216" rel="nofollow">http://arxiv.org/abs/1212.5216</a> (Cor 1.6).</p>
<p>It is proven there that a random $d$-regular bipartite graph has its largest non-trivial eigenvalue at most $2\sqrt{d-1}+0.84$.</p>
|
947,290 | <p>In a cyclic group of order 8, show that every element has a cube root. So for each $a\in G$ there is an element $x \in G$ with $x^3=a.$</p>
<p>Also show in general that if $G=\langle a\rangle$ is a cyclic group of order $m$ and $(k,m)=1$, then each element of $G$ has a $k$th root. What element will $a^k$ generate? Use this to express any element as a $k$th power.</p>
<p>Where do I begin? For the first one, is it just through closure essentially? And I'm stuck on the second one. Where do I begin? I know that $\gcd(k,m)=1$, so $kx+my=1$ with $x,y\in \mathbb Z$. Thank you.</p>
| Nour | 380,871 | <p>For the second part of your question, $a$ is a generator of $G$ (i.e. $G=\langle a\rangle$).</p>
<p>Since $(k,m)=1$, $a^k$ is also a generator of $G$ (as $\langle a^k \rangle = \langle a^{\gcd(m,k)}\rangle = \langle a\rangle$).</p>
<p>Therefore, for $x\in G$, $x= a^{sk} = (a^s)^k$ for some $s$, so $a^s$ is a $k$th root of $x$.</p>
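<p>A brute-force check of the claim (my own addition, in Python), writing the cyclic group $\mathbb Z_m$ additively so that a $k$th root of $a$ is an $x$ with $kx \equiv a \pmod m$:</p>

```python
from math import gcd

def every_element_has_kth_root(m, k):
    return all(any(k * x % m == a for x in range(m)) for a in range(m))

assert every_element_has_kth_root(8, 3)          # the cube-root case of the question
for m in range(2, 30):
    for k in range(1, 12):
        if gcd(k, m) == 1:
            assert every_element_has_kth_root(m, k)
print("gcd(k, m) = 1 guarantees k-th roots in Z_m on all tested cases")
```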
|
112,137 | <p>I'm guessing the answer to this question is well-known:</p>
<p>Suppose that $Y:C \to P$ and $F:C \to D$ are functors with $D$ cocomplete. Then one can define the point-wise Kan extension $\mathbf{Lan}_Y\left(F\right).$ Under what conditions does $\mathbf{Lan}_Y\left(F\right)$ preserve colimits? Notice that if $C=P$ and $Y=id_C,$ then $\mathbf{Lan}_Y\left(F\right)=F,$ so this is not true in general. Would $F$ preserving colimits imply this?</p>
<p>Dually, under what conditions does a right Kan extension preserve limits?</p>
<p>Thank you.</p>
| Buschi Sergio | 6,262 | <p>Let $(a_i: A_i\to A)_{i\in I}$ with $I\in Cat$ be a universal cocone in a category $\mathcal{A}$, and let $H: \mathcal{B}\to \mathcal{A}$ be a functor.</p>
<p><strong><em>We ask when:</em></strong></p>
<p>for any $F: \mathcal{B}\to \mathcal{C}$ such that:</p>
<p>$L:=Lan_H F$ exists pointwise (or at least exists at the objects $A,\ A_i$ of the above diagram, i.e. there exist $L(A_i):=\varinjlim_{(B, b)\in H\downarrow A_i} F(B)$ and $L(A):=\varinjlim_{(B, b)\in H\downarrow A} F(B)$).</p>
<p>we have that $L(A)=\varinjlim_i L(A_i)$.</p>
<p>Consider the colimit category $\widehat{HA}:=\varinjlim_i H\downarrow A_i$; the functors $F\circ \pi_{A_i}:H\downarrow A_i\to \mathcal{B}\to \mathcal{C}$ induce a functor $\hat{F}: \varinjlim_i H\downarrow A_i \to \mathcal{C}$, and it is not hard to verify that $\varinjlim_i L(A_i)=\varinjlim_i \varinjlim_{(B, b)\in H\downarrow A_i} F(B)= \varinjlim_{(B, b)\in \widehat{HA}} F(B)= \varinjlim \widehat{F}$.</p>
<p>Then the natural morphism $\phi: \varinjlim_i L(A_i)\to L(A)$ is induced by the natural functor $\Phi: \widehat{HA}\to H\downarrow A$, and $\phi$ is an isomorphism (for any $F$ as above) iff the functor $\Phi$ is final, i.e. iff each morphism $H(B)\to A$ factors through some $H(B')\to A_i\to A$ (via a morphism $B\to B'$) and any two such factorizations are connected in $H\downarrow A$. </p>
|
1,693,387 | <p>It is clear from the change of basis formula that the matrices $A$ and $B$ representing the same linear map in different bases are equivalent: there exists invertible matrixes $Q$ and $P$ such that $A=Q^{-1}BP$.</p>
<p>My question is about the other way, I mean given two equivalent matrices $A$ and $B$ in $M_{n,p}(\mathbb K)$ why they must represent the same linear map in different bases ? thank you for your help!</p>
| Marcel | 68,145 | <p>The matrix elements $\{\langle i|A|j\rangle,1\le i\le n,1\le j\le p\}$ of $A$ with respect to given basis sets $\{\langle i|\}$ and $\{|j\rangle\}$ are the same as $\{\langle i|Q^{-1}BP|j\rangle,1\le i\le n,1\le j\le p\}$, which are the matrix elements of $B$ with respect to new basis sets given by $\{P|j\rangle,1\le j\le p\}$ and $\{\langle i|Q^{-1},1\le i\le n\}$.</p>
|
1,693,387 | <p>It is clear from the change of basis formula that the matrices $A$ and $B$ representing the same linear map in different bases are equivalent: there exists invertible matrixes $Q$ and $P$ such that $A=Q^{-1}BP$.</p>
<p>My question is about the other way, I mean given two equivalent matrices $A$ and $B$ in $M_{n,p}(\mathbb K)$ why they must represent the same linear map in different bases ? thank you for your help!</p>
| Ted Shifrin | 71,348 | <p>This is just a simple generalization of similarity (where you consider only $n\times n$ matrices with the <em>same</em> basis in domain and range). Here $P$ is the change-of-basis matrix in the domain ($\Bbb K^p$) and $Q$ is the change-of-basis matrix in the range ($\Bbb K^n$).</p>
|
1,693,387 | <p>It is clear from the change of basis formula that the matrices $A$ and $B$ representing the same linear map in different bases are equivalent: there exists invertible matrixes $Q$ and $P$ such that $A=Q^{-1}BP$.</p>
<p>My question is about the other way, I mean given two equivalent matrices $A$ and $B$ in $M_{n,p}(\mathbb K)$ why they must represent the same linear map in different bases ? thank you for your help!</p>
| Open Season | 99,428 | <p>Clearly $B$ being an $m \times n$ matrix represents a transformation $\Bbb R^n \rightarrow \Bbb R^m$ where we take the standard basis in each space. If $A = Q^{-1}BP$, then take a basis for $\Bbb R^n$ made up of the columns of $P$ and a basis for $\Bbb R^m$ made up of the columns of $Q$. Then $A$ represents the transformation with respect to these new bases.</p>
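<p>A concrete numeric sanity check (my own example, in Python): the construction is equivalent to $QA = BP$, i.e. the $j$-th column of $A$ holds the $Q$-coordinates of $B$ applied to the $j$-th column of $P$:</p>

```python
def matmul(X, Y):
    # plain 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[1.0, 2.0], [3.0, 4.0]]
P = [[1.0, 1.0], [0.0, 1.0]]   # columns: a new basis of the domain
Q = [[2.0, 0.0], [1.0, 1.0]]   # columns: a new basis of the codomain
A = matmul(inv2(Q), matmul(B, P))

lhs, rhs = matmul(Q, A), matmul(B, P)
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)  # True: A is the matrix of the same map in the new bases
```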
|
317,753 | <p>I am taking real analysis in university. I find it difficult to prove certain statements. What I want to ask is:</p>
<ul>
<li>How do we come up with a proof? Do we use some intuitive idea first and then write it down formally?</li>
<li>What books do you recommend for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems? </li>
</ul>
| Mikasa | 8,581 | <p>I have been teaching for about 13 years in college, so I have seen many books and texts on Analysis written by, for example, <em>Rudin</em>, <em>Bartle</em>, <em>Apostol</em> and <em>Aliprantis</em>. But the ones that have been useful for me, or for my students, are those with topological or graphical approaches. Rudin's is a great one, but it does not have as many examples as you find throughout Apostol's. Some of Bartle's chapters include figures, but the last two have only a few. Aliprantis's is full of problems, and because of that I prefer it. Just a piece of advice: if you are new to any field and want to become more familiar with its concepts, try to select books that have many solved problems. I prefer to teach <strong>through practice</strong>. Sorry if my written English is not as good as others'.</p>
|
223,631 | <p>I'm using NeumannValue boundary conditions for a 3d FEA using NDSolveValue. In one area I have positive flux and in another area I have negative flux. In theory these should balance out (I set the flux inversely proportional to their relative areas) to a net flux of 0, but because of mesh and numerical inaccuracies they don't. Is there a way to constrain total flux = 0 and just set a constant flux for one of my areas?</p>
<p>Edit:
here are my boundary conditions:</p>
<pre><code>Subscript[Γ, 1] =
NeumannValue[-1, (Abs[x] - 1)^2 + (Abs[y] - 1)^2 < (650/1000)^2 &&
z < -0.199 ];
Subscript[Γ, 2] =
NeumannValue[4, x^2 + y^2 + (z + 1/5)^2 < (650/1000/2)^2 ];
</code></pre>
<p>and my equations:</p>
<pre><code>Dcof = 9000
ufun3d = NDSolveValue[
{D[u[t, x, y, z], t] - Dcof Laplacian[u[t, x, y, z], {x, y, z}] ==
Subscript[Γ, 1] + Subscript[Γ, 2],
u[0, x, y, z] == 0},
u, {t, 0, 10 }, {x, y, z} ∈ em];
</code></pre>
<p>and my element mesh:</p>
<pre><code>a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
r = RegionUnion[a,b,c,d,e,f];
boundingbox = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, -1/5, 1}}];
r2 = RegionIntersection[r,boundingbox]
em = ToElementMesh[r2];
</code></pre>
<p>And this is what my mesh looks like from the bottom up. </p>
<p><a href="https://i.stack.imgur.com/Q66fl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q66fl.png" alt="enter image description here"></a>
Edit 2:
I figured I should add a plot of what I think is "wrong" too.<br>
Plotting the diagonal cross section, I'd expect the values to be centered around 0, but they're all negative.</p>
<pre><code>ContourPlot[ufun3d[5, xy, xy, z], {xy, -1 , 1 }, {z, -0.2, 1},
ClippingStyle -> Automatic, PlotLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/SfwIa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SfwIa.png" alt="enter image description here"></a></p>
| user21 | 18,437 | <p>Too long for a comment. An easy way to generate a high quality mesh is to replace the <code>ImplicitRegion</code> with <code>Cuboid</code> and make use of the <a href="https://reference.wolfram.com/language/FEMDocumentation/ref/ToBoundaryMesh.html#1039792269" rel="noreferrer">OpenCascade boundary mesh generator</a>:</p>
<pre><code>Needs["NDSolve`FEM`"]
(*a=ImplicitRegion[True,{{x,-1,1},{y,-1,1},{z,0,1}}];*)
a = Cuboid[{-1, -1, 0}, {1, 1, 1}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
r = RegionUnion[a, b, c, d, e, f];
(*boundingbox=ImplicitRegion[True,{{x,-1,1},{y,-1,1},{z,-1/5,1}}];*)
boundingbox = Cuboid[{-1, -1, -1}, {1, 1, 1}];
r2 = RegionIntersection[r, boundingbox];
mesh = ToElementMesh[r2, "BoundaryMeshGenerator" -> {"OpenCascade"}];
groups = mesh["BoundaryElementMarkerUnion"];
temp = Most[Range[0, 1, 1/(Length[groups])]];
colors = ColorData["BrightBands"][#] & /@ temp;
mesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]]
</code></pre>
<p><a href="https://i.stack.imgur.com/AkyRU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AkyRU.png" alt="enter image description here"></a></p>
|
1,638,757 | <p>Given $2$ points in 2-dimensional space $(x_s,y_s)$ and $(x_d,y_d)$, our task is to find whether $(x_d,y_d)$ can be reached from $(x_s,y_s)$ by making a sequence of zero or more operations.
From a given point $(x, y)$, the operations possible are:</p>
<pre><code>a) Move to point (y, x)
b) Move to point (x, -y)
c) Move to point (x+y, y)
d) Move to point (2*x, y)
</code></pre>
<p>Now I am pretty sure this has something to do with gcds, but I'm not able to formalize my approach. Can someone intuitively explain how we can figure out whether a particular state is reachable from the current state?</p>
| Jack's wasted life | 117,135 | <p>Gcd is invariant under operations a,b,c. Gcd may double under operation d. So the transition is possible if either $\gcd(x_s,y_s)=\gcd(x_d,y_d)$ or $2^n\gcd(x_s,y_s)=\gcd(x_d,y_d)$ for some $n\in\mathbb N$ where we assume $\gcd(x,y)=\gcd(|x|,|y|)$.</p>
<p>Note that $$\gcd(x,y)|ax+by\;\forall\;a,b\in\mathbb Z$$ We can justify our claim that under c gcd is invariant using <a href="https://proofwiki.org/wiki/B%C3%A9zout's_Lemma" rel="nofollow">Bezout's lemma</a>.
If $\gcd(x,y)=d$, then $$\exists a,b\in\mathbb Z: ax+by=d\implies a(x+y)+(b-a)y=d\implies\gcd(x+y,y)\mid d$$
Again $d|x+y,y\implies d|\gcd(x+y,y)$. As both are positive $d=\gcd(x+y,y)$. </p>
<p>For operation d, note that $x/d,y/d$ can't have any common factor other than $1$, as that would violate the fact that $d=\gcd(x,y)$. So if $y/d$ is odd, $\gcd(2x,y)=d\gcd(2x/d,y/d)=d$. If $y/d$ is even, $\gcd(2x,y)=d\gcd(2x/d,y/d)=2d$. </p>
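<p>(A brute-force sanity check of these invariance claims, as a small Python sketch; the grid of starting points is an arbitrary choice:)</p>

```python
from math import gcd

def ops(x, y):
    # the four moves from the question
    return [(y, x), (x, -y), (x + y, y), (2 * x, y)]

def g(x, y):
    # gcd on absolute values, as assumed in the answer
    return gcd(abs(x), abs(y))

# operations a, b, c preserve the gcd; operation d preserves or doubles it
for x in range(-8, 9):
    for y in range(-8, 9):
        if x == 0 and y == 0:
            continue
        d0 = g(x, y)
        a, b, c, d = ops(x, y)
        assert g(*a) == d0 and g(*b) == d0 and g(*c) == d0
        assert g(*d) in (d0, 2 * d0)
```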
|
526,837 | <p>Let $(\Omega, {\cal B}, P )$ be a probability space, $( \mathbb{R}, {\cal R} )$ the usual
measurable space of reals and its Borel $\sigma$- algebra, and $X : \Omega \rightarrow \mathbb{R}$ a random variable.</p>
<p>The meaning of $ P( X = a) $ is intuitive when $X$ is a discrete random variable, because it's the definition of the probability mass function. I am not sure if my question makes sense, but how should I think of $ P( X = a) $ when $X$ is a continuous random variable? </p>
| Hagen von Eitzen | 39,174 | <p>You can check the result by computing the derivative of $-2x+\ln(1+2x)$. It is the same as that of $-1-2x+\ln(1+2x)$. In general, if $F$ is an antiderivative of $f$ then so is $F+c$ for any constant $c$.</p>
|
4,218,943 | <p>Let <span class="math-container">$A_n$</span> be a sequence of <span class="math-container">$d\times d$</span> symmetric matrices, let <span class="math-container">$A$</span> be a <span class="math-container">$d\times d$</span> symmetric positive definite matrix (matrix entries are assumed to be real numbers). Assume that each element of <span class="math-container">$A_n$</span> converges to the corresponding element of <span class="math-container">$A$</span> as <span class="math-container">$n\to \infty$</span>. Can we conclude that for some <span class="math-container">$\epsilon>0$</span> there exists <span class="math-container">$N_\epsilon$</span> such that the smallest eigenvalue of <span class="math-container">$A_n$</span> is larger than <span class="math-container">$\epsilon$</span>, for all <span class="math-container">$n \geq n_\epsilon$</span>?</p>
| Asinomás | 33,907 | <p>Let <span class="math-container">$\lambda_1 \geq \dots \geq \lambda_n$</span> be the eigenvalues of <span class="math-container">$A$</span>. Any <span class="math-container">$e< \lambda_n$</span> works.</p>
<p>Suppose <span class="math-container">$\lambda < e$</span> is an eigenvalue of <span class="math-container">$A_n$</span>. Take an orthonormal basis of eigenvectors of <span class="math-container">$A$</span>, take a unit eigenvector <span class="math-container">$v$</span> of <span class="math-container">$A_n$</span> for <span class="math-container">$\lambda$</span>, and let its coordinates in that basis be <span class="math-container">$(a_1,\dots,a_n)$</span>.</p>
<p>Note <span class="math-container">$A_nv = (\lambda a_1,\dots, \lambda a_n)$</span> and <span class="math-container">$Av = (\lambda_1a_1, \dots,\lambda_n a_n)$</span> so that the distance between the two vectors is <span class="math-container">$\sqrt{ \sum \limits_{i=1}^n (\lambda_i - \lambda)^2a_i^2 } \geq \lambda_n - \lambda$</span></p>
<p>By the equivalence of norms, entrywise convergence implies that <span class="math-container">$A_n$</span> converges to <span class="math-container">$A$</span> in the operator norm, which is the supremum of the distances between <span class="math-container">$Av$</span> and <span class="math-container">$A_nv$</span> for <span class="math-container">$v$</span> in the unit sphere. It follows that <span class="math-container">$A_n$</span> cannot have an eigenvalue less than <span class="math-container">$e$</span> for sufficiently large <span class="math-container">$n$</span>.</p>
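<p>(A small numeric illustration in Python for the <span class="math-container">$2\times 2$</span> case; the matrix and perturbations are illustrative choices, not from the question:)</p>

```python
from math import sqrt

def eig_min_2x2(a, b, c):
    # smallest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]
    return (a + c) / 2 - sqrt(((a - c) / 2) ** 2 + b ** 2)

# A = [[2, 1], [1, 2]] is symmetric positive definite with eigenvalues 1 and 3
lam_min_A = eig_min_2x2(2.0, 1.0, 2.0)
assert abs(lam_min_A - 1.0) < 1e-12

# entrywise perturbations A_n -> A; pick any e below lam_min_A, say e = 0.5
e = 0.5
for n in range(10, 200):
    An = (2.0 + 1.0 / n, 1.0 - 2.0 / n, 2.0 + 3.0 / n)
    # for n >= 10 the smallest eigenvalue already stays above e
    assert eig_min_2x2(*An) > e
```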
|
878,115 | <p>Question 1:
I found 30 boxes. In 10 boxes I found 15 balls; in 20 boxes I found 0 balls.
After I collected all 15 balls, I put them back randomly inside the boxes.</p>
<p>What is the chance that all balls end up in only 10 boxes or fewer?</p>
<p>Question 2:
I found 30 boxes. In 10 boxes I found 15 balls; in 20 boxes I found 0 balls. In two of the boxes I found 3 balls each. (So one of the remaining boxes has to contain 2 balls and the other seven boxes 1 ball each.)
After I collected all 15 balls, I put them back randomly inside the boxes.</p>
<p>What is the chance that I find 6 balls or more in only 2 boxes?</p>
<p>I wrote a C# program and tried it 1 million times.
My result was: with a chance of 12.4694%, all balls are in 10 boxes or fewer.</p>
| ant11 | 110,047 | <p>I will assume the balls and boxes are indistinguishable.</p>
<p>The first problem is: If I distribute $15$ balls among $30$ boxes, what is the probability that at most $10$ boxes contain a ball?</p>
<p>First, the fact that there are $30$ boxes does not matter, since they are indistinguishable. So we only need to consider the problem as if there were $15$ boxes.</p>
<p>Second, we'll solve the problem by finding the probability that <em>more</em> than $10$ boxes contain a ball, and subtract that number from $1$.</p>
<p>There is exactly $1$ way to occupy $15$ boxes with $15$ balls: $\{1,1,1\cdots\}$</p>
<p>There is also exactly $1$ way to occupy $14$ boxes with $15$ balls, since, again, the boxes are indistinguishable: $\{2,1,1\cdots\}$</p>
<p>There are $2$ ways to occupy $13$ boxes: $\{3,1,1\cdots\}$ or $\{2,2,1,1\cdots\}$</p>
<p>There are $3$ ways to occupy $12$ boxes, and $5$ ways to occupy $11$ (you can and should verify these numbers).</p>
<p>Finally, how many ways could we distribute $15$ balls over $15$ boxes? This number is exactly equal to $p(15)=176$, where $p(n)$ is the <a href="https://oeis.org/A000041" rel="nofollow">partition function</a>.</p>
<p>So the answer to problem 1 is $1-(1+1+2+3+5)/176=164/176\approx93.2\%$</p>
<p>EDIT: I believe this answer is drastically different from the one obtained by your program for the following reason: you might have meant, in your problem statement, "what is the probability all $15$ balls are contained in $10$ <em>particular</em> boxes?" This significantly changes the question; in particular, the total number of boxes being $30$ is now relevant. Let me know if this is the misunderstanding.</p>
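<p>(Editor's check, not part of the original answer: the counts above, and $p(15)=176$, are easy to verify by brute force with a standard recursive partition generator in Python:)</p>

```python
from fractions import Fraction

def partitions(n, max_part=None):
    # generate all partitions of n as non-increasing tuples of parts <= max_part
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

parts = list(partitions(15))
assert len(parts) == 176  # p(15) = 176

# partitions of 15 with exactly k parts, i.e. exactly k occupied boxes
by_len = {}
for p in parts:
    by_len[len(p)] = by_len.get(len(p), 0) + 1

# 15, 14, 13, 12, 11 occupied boxes give 1, 1, 2, 3, 5 partitions
assert [by_len[k] for k in (15, 14, 13, 12, 11)] == [1, 1, 2, 3, 5]

# probability that at most 10 boxes are occupied, under this model
prob = 1 - Fraction(1 + 1 + 2 + 3 + 5, 176)
assert prob == Fraction(164, 176)
```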
|
3,705,580 | <p><span class="math-container">$\mathbf {The \ Problem \ is}:$</span> Let, <span class="math-container">$f,g,h$</span> be three functions defined from <span class="math-container">$(0,\infty)$</span> to <span class="math-container">$(0,\infty)$</span> satisfying the given relation <span class="math-container">$f(x)g(y) = h\big(\sqrt{x^2+y^2}\big)$</span> for all <span class="math-container">$x,y \in (0,\infty)$</span>, then show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> and <span class="math-container">$\frac{g(x)}{h(x)}$</span> are constant.</p>
<p><span class="math-container">$\mathbf {My \ approach} :$</span> Actually, by putting <span class="math-container">$x$</span> in place of <span class="math-container">$y$</span> and vice-versa, we can show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> is a constant, let it be <span class="math-container">$c .$</span> Then, I tried that <span class="math-container">$g(x_i)g(y_i)=g(x_j)g(y_j)$</span> whenever <span class="math-container">$(x_i,y_i)$</span>, <span class="math-container">$(x_j,y_j)$</span> satisfies <span class="math-container">$x^2+y^2 =k^2$</span> for every <span class="math-container">$k \in (0,\infty)$</span> .
But, I can't approach further.</p>
<p>Any help would be greatly appreciated .</p>
| I was suspended for talking | 474,690 | <p>Write <span class="math-container">$W = (X_1,X_2)$</span> and <span class="math-container">$Z = (X_3,X_4)$</span>, which are i.i.d. random vectors (with standard normal coordinates). Then, using your notation, <span class="math-container">$U = W\cdot Z$</span> and <span class="math-container">$V = \|Z\|^2$</span>, so <span class="math-container">$T= (W\cdot Z)/\|Z\|^2$</span>. So using that <span class="math-container">$W$</span> and <span class="math-container">$Z$</span> are independent and the law of total probability, we have for any Borel set <span class="math-container">$A$</span> that
<span class="math-container">\begin{align*}
P(T\in A) &= \int P\left(\frac{W\cdot z}{\|z\|^2}\in A\mid Z = z\right)P(Z=z)dz \\
&=
\int P\left(\frac{W\cdot z}{\|z\|^2}\in A\right)P(Z=z)dz.
\end{align*}</span>
Since you know the distributions of <span class="math-container">$W$</span> and <span class="math-container">$Z$</span> individually, you should be able to work out the distribution. (I haven't checked it myself, so let me know if there are any difficulties.)</p>
|
3,705,580 | <p><span class="math-container">$\mathbf {The \ Problem \ is}:$</span> Let, <span class="math-container">$f,g,h$</span> be three functions defined from <span class="math-container">$(0,\infty)$</span> to <span class="math-container">$(0,\infty)$</span> satisfying the given relation <span class="math-container">$f(x)g(y) = h\big(\sqrt{x^2+y^2}\big)$</span> for all <span class="math-container">$x,y \in (0,\infty)$</span>, then show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> and <span class="math-container">$\frac{g(x)}{h(x)}$</span> are constant.</p>
<p><span class="math-container">$\mathbf {My \ approach} :$</span> Actually, by putting <span class="math-container">$x$</span> in place of <span class="math-container">$y$</span> and vice-versa, we can show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> is a constant, let it be <span class="math-container">$c .$</span> Then, I tried that <span class="math-container">$g(x_i)g(y_i)=g(x_j)g(y_j)$</span> whenever <span class="math-container">$(x_i,y_i)$</span>, <span class="math-container">$(x_j,y_j)$</span> satisfies <span class="math-container">$x^2+y^2 =k^2$</span> for every <span class="math-container">$k \in (0,\infty)$</span> .
But, I can't approach further.</p>
<p>Any help would be greatly appreciated .</p>
| StubbornAtom | 321,264 | <p>Suppose <span class="math-container">$$Z=\frac{X_1X_3+X_2X_4}{\sqrt{X_3^2+X_4^2}}$$</span></p>
<p>Now conditioning <span class="math-container">$X_3=a,X_4=b$</span>, we find that the above has a standard normal distribution, since</p>
<p><span class="math-container">$$\frac{aX_1+bX_2}{\sqrt{a^2+b^2}}\sim N(0,1)$$</span></p>
<p>for any real <span class="math-container">$a,b$</span>. As the conditional distribution does not depend on <span class="math-container">$a,b$</span>, the unconditional distribution remains the same. That is, <span class="math-container">$Z\sim N(0,1)$</span>.</p>
<p>Relating <span class="math-container">$Z$</span> and <span class="math-container">$T$</span> we have <span class="math-container">$$T=\frac{Z}{\sqrt{X_3^2+X_4^2}}=\frac{Z}{\sqrt 2\sqrt{(X_3^2+X_4^2)/2}}$$</span></p>
<p>As the distribution of <span class="math-container">$Z$</span> is independent of <span class="math-container">$X_3,X_4$</span>, we can see that <span class="math-container">$$\frac{Z}{\sqrt{(X_3^2+X_4^2)/2}}\sim t_2$$</span></p>
<p>So I think <span class="math-container">$T$</span> is distributed as <span class="math-container">$\frac1{\sqrt 2}$</span> times a <span class="math-container">$t$</span> distributed variable with <span class="math-container">$2$</span> degrees of freedom.</p>
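<p>(A seeded Monte Carlo sketch, Python standard library only, is consistent with this conclusion; the sample size and tolerance are arbitrary choices. It uses the closed-form CDF <span class="math-container">$F_{t_2}(t)=\frac12+\frac{t}{2\sqrt{2+t^2}}$</span> of the <span class="math-container">$t$</span> distribution with <span class="math-container">$2$</span> degrees of freedom.)</p>

```python
import random
from math import sqrt

random.seed(0)

def sample_T():
    x1, x2, x3, x4 = (random.gauss(0.0, 1.0) for _ in range(4))
    return (x1 * x3 + x2 * x4) / (x3 * x3 + x4 * x4)

def cdf_t2(t):
    # closed-form CDF of Student's t with 2 degrees of freedom
    return 0.5 + t / (2.0 * sqrt(2.0 + t * t))

N = 200_000
samples = [sample_T() for _ in range(N)]

# if T = Z/sqrt(2) with Z ~ t_2, then P(T <= c) = F_{t_2}(c*sqrt(2))
for c in (-1.0, -0.3, 0.0, 0.3, 1.0):
    emp = sum(s <= c for s in samples) / N
    assert abs(emp - cdf_t2(c * sqrt(2))) < 0.01
```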
|
89,197 | <p>I am working on a problem where I have to generate a table of components while each component of the table has 18 entries. Six of the indices among 18 run from 0 to 1 while the other 12 can take values between 0 to 3. After doing that I have to select some of the entries which follow a certain criterion (sum of all values in each component should be three). I have done this for smaller sized entry tables but for this one <em>Mathematica</em> gives up very fast saying <code>General::nomem: The current computation was aborted because there was insufficient memory available to complete the computation</code>. I don't have a larger memory computer available. Can somebody help me with this please? The commands I am using are:</p>
<pre><code>list =
Table[{i, j, k, l, m, n, o, p, q, r, s, u, v, x, y, z, a, b},
{i, 0, 1}, {j, 0, 3}, {k, 0, 3}, {l, 0, 1}, {m, 0, 3}, {n, 0, 3}, {o, 0, 1},
{p, 0, 3}, {q, 0, 3}, {r, 0, 1}, {s, 0, 3}, {u, 0, 3}, {v, 0, 1}, {x, 0, 3},
{y, 0, 3}, {z, 0, 1}, {a, 0, 3}, {b, 0, 3}] // Flatten
list1 = Partition[list, 18];
f1 = Total[#] < 4 &;
f2 = Total[#] > 2 &;
list2 = Select[list1, f1];
list3 = Select[list1, f2];
list4 = Intersection[list2, list3];
</code></pre>
| ubpdqn | 1,997 | <p>I think this can be done as follows:</p>
<pre><code>pol = {1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3};
ip = Join @@ (Permutations /@ PadRight[IntegerPartitions[3, 3]]);
subs = Subsets[Range[18], {3}];
rl = Flatten[Map[Function[u, Thread[u -> #] & /@ ip], subs], 1];
cand = ReplacePart[ConstantArray[0, 18], #] & /@ rl;
ex = Position[pol, 1]
pck = Pick[cand, Max[Extract[#, ex]] <= 1 & /@ cand];
</code></pre>
<p>As I understand this aims to find vectors of length 18 with restrictions described whose sum of components is 3.</p>
<p><code>cand</code> just finds vectors with elements {0,1,2,3}. <code>pck</code> picks out those that comply with condition {0,1} for positions in <code>ex</code>.
There are 5712 cases.</p>
<p>Apologies if I have misunderstood.</p>
<p>A sample (20) is presented below:</p>
<p><a href="https://i.stack.imgur.com/SNfvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SNfvU.png" alt="enter image description here"></a></p>
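<p>(For reference, the same count can be reproduced in Python, standard library only. Note that the 5712 complying candidates appear to contain repeats, since the same vector can arise from several subset/placement pairs; deduplicating leaves 1032 distinct vectors, so a <code>DeleteDuplicates</code> step may be wanted if distinct solutions are required.)</p>

```python
from itertools import combinations, permutations

CAPS = [1, 3, 3] * 6  # per-position caps, matching pol above
PARTS = [(3, 0, 0), (2, 1, 0), (1, 1, 1)]  # padded partitions of 3

# the 10 distinct ordered placements used above
placements = sorted({q for part in PARTS for q in permutations(part)})
assert len(placements) == 10

candidates = []
for support in combinations(range(18), 3):
    for vals in placements:
        v = [0] * 18
        for pos, val in zip(support, vals):
            v[pos] = val
        candidates.append(tuple(v))

# keep candidates respecting the {0,1} cap on the six restricted slots
ok = [v for v in candidates if all(v[i] <= CAPS[i] for i in range(18))]
assert len(ok) == 5712         # matches the count above
assert len(set(ok)) == 1032    # distinct admissible vectors
```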
|
3,460,595 | <p>I am given the following sequence:</p>
<p><span class="math-container">$$a_n = 1^9 + 2^9 + ... + n^9 - an^{10}$$</span></p>
<p>Where <span class="math-container">$a \in \mathbb{R}$</span>. I have to find the value of <span class="math-container">$a$</span> for which the sequence <span class="math-container">$a_n$</span> is convergent (Or conclude that there is no such value of <span class="math-container">$a$</span>).</p>
<p>How can I find this value (or that there is no such value)? I don't know how to approach something like this at all.</p>
| 2'5 9'2 | 11,123 | <p>You have</p>
<p><span class="math-container">$$
\begin{align}
b_n:=a_{n+1}-a_n&=(n+1)^9-a(n+1)^{10}+an^{10}\\
&=\sum_{k=0}^9\binom{9}{k}n^k-a\sum_{k=0}^9\binom{10}{k}n^k\\
&=\sum_{k=0}^9n^k\left(\binom{9}{k}-a\binom{10}{k}\right)
\end{align}
$$</span></p>
<p>If <span class="math-container">$a\neq\frac{1}{10}$</span>, the coefficient of <span class="math-container">$n^9$</span> is <span class="math-container">$1-10a\neq 0$</span>, so this is a polynomial in <span class="math-container">$n$</span> of degree <span class="math-container">$9$</span>. So as <span class="math-container">$n\to\infty$</span>, <span class="math-container">$\{b_n\}$</span> will diverge. </p>
<p>If <span class="math-container">$a=\frac{1}{10}$</span>, this is <span class="math-container">$\sum_{k=0}^8n^k\left(\binom{9}{k}-\frac{1}{10}\binom{10}{k}\right)$</span>, a polynomial in <span class="math-container">$n$</span> of degree <span class="math-container">$8$</span>, since the coefficient of <span class="math-container">$n^8$</span> is <span class="math-container">$\binom{9}{8}-\frac{1}{10}\binom{10}{8}=\frac{9}{2}\neq 0$</span>. So again, as <span class="math-container">$n\to\infty$</span>, <span class="math-container">$\{b_n\}$</span> will diverge. </p>
<p>In order for the sequence <span class="math-container">$\{a_n\}$</span> to converge, it is necessary for the successive differences <span class="math-container">$\{b_n\}$</span> to converge to <span class="math-container">$0$</span>. Therefore <span class="math-container">$\{a_n\}$</span> diverges no matter what <span class="math-container">$a$</span> is.</p>
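<p>(A quick numeric cross-check in Python: the coefficient of <span class="math-container">$n^k$</span> in <span class="math-container">$b_n$</span> is <span class="math-container">$\binom{9}{k}-a\binom{10}{k}$</span>, the <span class="math-container">$n^9$</span> term survives exactly when <span class="math-container">$a\neq\frac{1}{10}$</span>, and for <span class="math-container">$a=\frac{1}{10}$</span> the difference still grows like <span class="math-container">$n^8$</span>:)</p>

```python
from fractions import Fraction
from math import comb

def b(n, a):
    # b_n = a_{n+1} - a_n with a_n = 1^9 + ... + n^9 - a*n^10
    return (n + 1) ** 9 - a * (n + 1) ** 10 + a * n ** 10

def b_from_coeffs(n, a):
    # coefficient of n^k in b_n is C(9, k) - a*C(10, k)
    return sum((Fraction(comb(9, k)) - a * comb(10, k)) * n ** k for k in range(10))

a = Fraction(1, 7)
for n in range(1, 20):
    assert b(n, a) == b_from_coeffs(n, a)

# a = 1/10 kills the n^9 term but leaves a nonzero n^8 term, so b_n
# still diverges and therefore so does a_n
a = Fraction(1, 10)
assert Fraction(comb(9, 9)) - a * comb(10, 9) == 0
assert Fraction(comb(9, 8)) - a * comb(10, 8) == Fraction(9, 2)
assert b(10**6, a) > 10**40  # grows like (9/2)*n^8
```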
|
1,793,182 | <p>My task was to find the directional derivative of the function:<br>
$$z = y^2 - \sin(xy)$$ at the point $(0, -1)$ in the direction of the vector $u = (-1, 10)$. </p>
<p>The result I found was $-21/\sqrt{101}$. But I can't figure out what the interpretation of this result is. </p>
<p>Does it mean that the function grows fastest at that rate, or does it mean something else?</p>
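<p>(For what it's worth, the value $-21/\sqrt{101}$ can be sanity-checked with a finite difference along the unit vector $u/\|u\|$; a small Python sketch:)</p>

```python
from math import sin, sqrt

def z(x, y):
    return y * y - sin(x * y)

# unit vector in the direction of u = (-1, 10)
norm = sqrt((-1) ** 2 + 10 ** 2)
ux, uy = -1 / norm, 10 / norm

h = 1e-6
x0, y0 = 0.0, -1.0
numeric = (z(x0 + h * ux, y0 + h * uy) - z(x0, y0)) / h
exact = -21 / sqrt(101)
assert abs(numeric - exact) < 1e-4
```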
| Martin Sleziak | 8,297 | <p>I will write down two versions of my attempt to prove the above claim. Both are based on the same idea. (It is more-or-less the same proof, written down from a slightly different perspective.)$\newcommand{\intrv}[2]{[#1,#2]}\newcommand{\limti}[1]{\lim\limits_{#1\to\infty}}$</p>
<hr>
<p>Let $\Delta$ denotes the set of all partitions of the interval $\intrv ab$.</p>
<p><strong>Observation.</strong> Let us fix some partition $P\in\Delta$ given by the points $a=x_0 < x_1 < \dots < x_k=b$. Then the function
$$\varphi_P \colon (y_0,\dots,y_k) \mapsto \sum_{i=1}^k \sqrt{(x_i-x_{i-1})^2+(y_i-y_{i-1})^2}$$
from $\mathbb R^{k+1}$ to $\mathbb R$ is <em>continuous</em> and $L_P(f)$ is obtained by plugging the values $(f(x_0),\dots,f(x_k))$ into this function.</p>
<h2>Elementary version</h2>
<p>Using the above observation we get that $\limti n L_P(f_n) = L_P(g)$ holds for any partition $P\in\Delta$.</p>
<p>By definition,
$$L(g) = \sup_{P\in\Delta} L_P(g)$$
which implies that
$$(\forall \varepsilon>0) (\exists P\in\Delta) L(g)-\varepsilon < L_P(g)$$
Now if we use that $\limti n L_P(f_n) = L_P(g)$, then we get
$$(\forall \varepsilon>0) (\exists P\in\Delta) (\exists n_0) (\forall n\ge n_0) L(g)-\varepsilon < L_P(f_n).$$
But we also have $L_P(f_n) \le L(f_n)$ for every $n$, which implies
$$(\forall \varepsilon>0) (\exists P\in\Delta) (\exists n_0) (\forall n\ge n_0) L(g)-\varepsilon < L(f_n).$$
But since neither of the expressions $L(g)-\varepsilon$ and $L(f_n)$ depend on $P$ we can simply write:
\begin{align*}
(\forall \varepsilon>0) (\exists n_0) (\forall n\ge n_0) L(g)-\varepsilon &< L(f_n)\\
(\forall \varepsilon>0) (\exists n_0) L(g)-\varepsilon &\le \inf_{n\ge n_0} L(f_n)\\
(\forall \varepsilon>0) L(g)-\varepsilon &\le \sup_{n_0\in\mathbb N}\inf_{n\ge n_0} L(f_n)\\
(\forall \varepsilon>0) L(g)-\varepsilon &\le \liminf_{n\to\infty} L(f_n)\\
L(g) &\le \liminf_{n\to\infty} L(f_n)
\end{align*}</p>
<p>It is worth mentioning that the above proof does not deal with the case $L(g)=+\infty$. But in this case we can make a very similar argument, simply replacing $L(g)-\varepsilon$ (for arbitrary $\varepsilon>0$) by an arbitrarily large real number $M$.</p>
<h2>Using topology of the pointwise convergence</h2>
<p>This is a different viewpoint for basically the same proof.</p>
<p>Let us consider the set $\mathbb R^{\intrv ab}$ of all functions from $\intrv ab$ to $\mathbb R$. (To be more precise, we should work with the functions to $\mathbb R\cup\{\infty\}$, since we allow $L(f)=\infty$.) We can endow this set with the topology of pointwise convergence, i.e., the product topology.</p>
<p>Now we get that (for a fixed partition $P\in\Delta$) the function $f\mapsto L_P(f)$ is continuous. (Since it is obtained from the continuous function $\varphi_P$ and the projections mapping $f\mapsto f(x_i)$.)</p>
<p>Now since $L(f)=\sup L_P(f)$, we get that
$$f\mapsto L(f)$$ is a supremum of lower semicontinuous functions and therefore it is lower semicontinuous. </p>
<p>Lower semicontinuity of $L$ implies that if $\limti n f_n=g$ then
$$L(g)\le\liminf\limits_{n\to\infty}L(f_n).$$</p>
|
2,943,461 | <p>I'm stumped on a math puzzle and I can't find an answer to it anywhere!</p>
<p>A man is filling a pool from 3 hoses. Hose A could fill it in 2 hours, hose B could fill it in 3 hours, and hose C could fill it in 6 hours. However, there is a blockage in hose A, so the man starts by using hoses B and C. When the blockage in hose A has been cleared, hoses B and C are turned off and hose A starts being used. How long does the pool take to fill?</p>
<p>Any help would be strongly appreciated :)</p>
| MRobinson | 587,882 | <p>If the total volume of the pool is <span class="math-container">$x$</span>, we can denote the rates as:</p>
<p><span class="math-container">$r_A = \frac{x}{2}, r_B = \frac{x}{3}, r_C = \frac{x}{6}.$</span></p>
<p>From here you can see that:</p>
<p><span class="math-container">$r_{B+C} = r_B + r_C = \frac{x}{3} + \frac{x}{6} = \frac{3x}{6} = \frac{x}{2} = r_A.$</span></p>
<p>So the two hoses together fill at exactly the same rate as hose <span class="math-container">$A$</span>. The fill rate is therefore <span class="math-container">$\frac{x}{2}$</span> per hour at every moment, regardless of when the switch happens, so your answer is simply 2 hours.</p>
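<p>(Not in the original answer: the rate arithmetic, and the fact that the switching time does not matter, can be checked with exact fractions in Python:)</p>

```python
from fractions import Fraction

pool = Fraction(1)        # one pool volume
rate_A = pool / 2         # pools per hour
rate_B = pool / 3
rate_C = pool / 6

# hoses B and C together match hose A exactly
assert rate_B + rate_C == rate_A

# so whatever the switching time t is, after T = 2 hours the pool is full
T = Fraction(2)
for t in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 2)):
    filled = (rate_B + rate_C) * t + rate_A * (T - t)
    assert filled == pool
```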
|
3,660,652 | <p>To which of the seventeen standard quadrics (<a href="https://mathworld.wolfram.com/QuadraticSurface.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/QuadraticSurface.html</a>) do these two equations reduce?
<span class="math-container">\begin{equation}
Q_1^2+3 Q_2 Q_1+\left(3
Q_2+Q_3\right){}^2 = 3 Q_2+2 Q_1 Q_3.
\end{equation}</span>
<span class="math-container">\begin{equation}
-9 Q_2-6 Q_3+3 \left(Q_1^2+\left(3 Q_2+4 Q_3-1\right) Q_1+9 Q_2^2+4 Q_3^2+6 Q_2
Q_3\right) = 0.
\end{equation}</span>
Further, what are the associated transformations needed to accomplish the reductions?</p>
<p>This is a "distilled" form of a previous more expansive question <a href="https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces">https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces</a></p>
| eeen | 646,577 | <p>This is a line with a rational slope, namely <span class="math-container">$256/27$</span> (on the <span class="math-container">$dx$</span>-plane). Hence there are an infinite number of solutions <span class="math-container">$(d,x)$</span> of the form </p>
<p><span class="math-container">$$(26 + 27t, 253 + 256t),$$</span></p>
<p>for integers <span class="math-container">$t$</span>. See if you can prove why.</p>
|
3,833,767 | <p>I am trying to brush up on calculus and picked up Peter Lax's Calculus with Applications and Computing Vol 1 (1976) and I am trying to solve exercise 5.2 a) in the first chapter (page 29):</p>
<blockquote>
<p>How large does <span class="math-container">$n$</span> have to be in order for</p>
<p><span class="math-container">$$ S_n = \sum_{j = 1}^n \frac{1}{j^2}$$</span></p>
<p>to be within <span class="math-container">$\frac{1}{10}$</span> of the infinite sum? within <span class="math-container">$\frac{1}{100}$</span>? within <span class="math-container">$\frac{1}{1000}$</span>? Calculate the first, second and third digit after the decimal point of <span class="math-container">$ \sum_{j = 1}^\infty \frac{1}{j^2}$</span></p>
</blockquote>
<p>Ok so the first part is easy and is derived from the chapter's text:</p>
<p><span class="math-container">$ \forall j \geq 1 $</span> we have <span class="math-container">$\frac{1}{j^2} \leq \frac{2}{j(j+1)}$</span> and therefore:</p>
<p><span class="math-container">\begin{equation}
\begin{aligned}
\forall n \geq 1,\quad \forall N \geq n +1 \quad S_N - S_n &\leq 2\sum_{k = n+1}^N \frac{1}{k(k+1)}\\
&= 2\sum_{k = n+1}^N \left\{ \frac{1}{k}- \frac{1}{k+1}\right\}\\
&= 2 \left[ \frac{1}{n+1} - \frac{1}{N+1}\right]
\end{aligned}
\end{equation}</span></p>
<p>Now, because we know <span class="math-container">$S_N$</span> converges to a limit <span class="math-container">$S$</span> from below, by the rules of arithmetic for convergent sequences we have:</p>
<p><span class="math-container">$$ 0 \leq S - S_n \leq \frac{2}{n+1}$$</span></p>
<p>So if we want <span class="math-container">$S_n$</span> to be within <span class="math-container">$\frac{1}{10^k}$</span> of <span class="math-container">$S$</span> it suffices to have:</p>
<p><span class="math-container">$$ n \geq N_{k} = 2\times10^k -1$$</span></p>
<p>But the second part of the question puzzles me. I would like to say that computing <span class="math-container">$S_{N_{k}}$</span> is enough to have the first <span class="math-container">$k$</span> decimal points of <span class="math-container">$S$</span>. But earlier in the chapter (on page 9), there is a theorem that states:</p>
<blockquote>
<p>if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> have the same integer parts and the same digits up to the <span class="math-container">$m$</span>-th, then they differ by less than <span class="math-container">$10^{-m}$</span>,
<span class="math-container">$$ |a - b | < 10^{-m}$$</span>
and the converse is <em>not</em> true.</p>
</blockquote>
<p>And the example of <span class="math-container">$a = 0.29999...$</span> and <span class="math-container">$b = 0.30000...$</span> indeed shows that two numbers can differ by less than <span class="math-container">$2\times 10^{-5}$</span> and yet have all different first digits.</p>
<p>So I think there is something missing in my "demonstration" above. How to show that I indeed "catch" the first <span class="math-container">$k$</span> digits of <span class="math-container">$S$</span> by computing <span class="math-container">$S_{N_k}$</span>?</p>
<p>Thanks!</p>
| Claude Leibovici | 82,404 | <p>I shall cheat a little and assume that you know the value of the infinite sum.</p>
<p>So for your example
<span class="math-container">$$\sum_{n=1}^{p}\frac1{n^2}=H_p^{(2)}$$</span> and you want to know <span class="math-container">$p$</span> such that
<span class="math-container">$$\frac {\pi^2}6-H_{p+1}^{(2)} \leq 10^{-k}$$</span>
Using the asymptotics of the generalized harmonic numbers, this becomes
<span class="math-container">$$\frac 1{(p+1)}-\frac 1{2(p+1)^2}+\frac 1{6(p+1)^3}+\cdots\leq 10^{-k}$$</span> For sure, the larger <span class="math-container">$k$</span> is, the fewer terms of the expansion we shall need.</p>
<p>To stay with equations we know how to solve, let us keep only the terms up to the cubic in <span class="math-container">$x=\frac 1{(p+1)} $</span> and solve
<span class="math-container">$$x-\frac 1 2 x^2+\frac 16 x^3 - \epsilon=0 \qquad \text{where} \qquad \epsilon=10^{-k}$$</span></p>
<p>We have
<span class="math-container">$$\Delta =-\frac{5}{12}+\epsilon -\frac{3 }{4}\epsilon ^2$$</span> which very quickly becomes negative, so there is only one real root. Using the hyperbolic method, we shall find
<span class="math-container">$$x=1-2 \sinh \left(\frac{1}{3} \sinh ^{-1}(2-3 \epsilon )\right)$$</span> that is to say
<span class="math-container">$$p=-\frac{2 \sinh \left(\frac{1}{3} \sinh ^{-1}(2-3 \epsilon )\right)}{1-2 \sinh
\left(\frac{1}{3} \sinh ^{-1}(2-3 \epsilon )\right)}$$</span> Since <span class="math-container">$\epsilon$</span> is small, a Taylor expansion will give
<span class="math-container">$$p=\frac{1}{\epsilon }-\frac{3}{2}-\frac{\epsilon }{12}+O\left(\epsilon ^3\right)$$</span> Back to <span class="math-container">$k$</span>
<span class="math-container">$$p \sim 10^k-\frac 32$$</span> If you want a difference of <span class="math-container">$10^{-6}$</span>, you will need "almost" one million of terms.</p>
<p>If we make <span class="math-container">$p=10^6$</span>, the difference is <span class="math-container">$0.99999950 \times 10^{-6}$</span></p>
|
319,725 | <p>I am trying to prove the following inequality concerning the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="noreferrer">Beta Function</a>:
<span class="math-container">$$
\alpha x^\alpha B(\alpha, x\alpha) \geq 1 \quad \forall 0 < \alpha \leq 1, \ x > 0,
$$</span>
where as usual <span class="math-container">$B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}dt$</span>.</p>
<p>In fact, I only need this inequality when <span class="math-container">$x$</span> is large enough, but it empirically seems to be true for all <span class="math-container">$x$</span>.</p>
<p>The main reason why I'm confident that the result is true is that it is very easy to plot, and I've experimentally checked it for reasonable values of <span class="math-container">$x$</span> (say between 0 and <span class="math-container">$10^{10}$</span>). For example, for <span class="math-container">$x=100$</span>, the plot is:</p>
<p><a href="https://i.stack.imgur.com/UiRCf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UiRCf.png" alt="Plot of the function to be proven greater than 1"></a></p>
<p>Varying <span class="math-container">$x$</span>, it seems that the inequality is rather sharp, namely I was not able to find a point where that product is larger than around <span class="math-container">$1.5$</span> (but I do not need any such reverse inequality).</p>
<p>I know very little about Beta functions, therefore I apologize in advance if such a result is already known in the literature. I've tried looking around, but I always ended on inequalities trying to link <span class="math-container">$B(a,b)$</span> with <span class="math-container">$\frac{1}{ab}$</span>, which is quite different from what I am looking for, and also only holds true when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are smaller than 1, which is not my setting.</p>
<p>I have tried the following to prove it, but without success: the inequality is well-known to be an equality when <span class="math-container">$\alpha = 1$</span>, and the limit for <span class="math-container">$\alpha \to 0$</span> should be equal to 1, too. Therefore, it would be enough to prove that there exists at most one <span class="math-container">$0 < \alpha < 1$</span> where the derivative of the expression to be bounded vanishes. This derivative can be written explicitly in terms of the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="noreferrer">digamma function</a> <span class="math-container">$\psi$</span> as:
<span class="math-container">$$
x^\alpha B(\alpha, x\alpha) \Big(\alpha \psi(\alpha) - (x+1)\alpha\psi((x+1)\alpha) + x\alpha \psi(x\alpha) + 1 + \alpha \log x \Big).
$$</span>
Dividing by <span class="math-container">$x^\alpha B(\alpha, x\alpha) \alpha$</span>, this becomes
<span class="math-container">$$
-f(\alpha) + \frac{1}{\alpha} + \log x,
$$</span>
where <span class="math-container">$f(\alpha) = -\psi(\alpha) + (x+1)\psi((x+1)\alpha) - x \psi(x\alpha)$</span> is, as proven <a href="http://web.math.ku.dk/~berg/manus/alzberg2.pdf" rel="noreferrer">by Alzer and Berg</a>, Theorem 4.1, a completely monotonic function. Unfortunately, the difference of two completely monotonic functions (such as <span class="math-container">$f(\alpha)$</span> and <span class="math-container">$\frac{1}{\alpha} + C$</span>) can vanish in arbitrarily many points, therefore this does not allow to conclude.</p>
<p>Many thanks in advance for any hint on how to get such a bound!</p>
<p>[EDIT]: As pointed out in the comments, the link to the paper of Alzer and Berg pointed to the wrong version, I have corrected the link.</p>
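<p>(For reproducibility, the empirical check described above can be done with just the log-Gamma function, via <span class="math-container">$B(a,b)=\exp(\ln\Gamma(a)+\ln\Gamma(b)-\ln\Gamma(a+b))$</span>; the grid below is an arbitrary choice, not exhaustive, and the tolerance allows for floating-point error in <code>lgamma</code> at large arguments:)</p>

```python
from math import exp, lgamma, log

def beta(a, b):
    # B(a, b) via log-Gamma to avoid overflow
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def F(alpha, x):
    # the product alpha * x^alpha * B(alpha, x*alpha)
    return alpha * exp(alpha * log(x)) * beta(alpha, x * alpha)

# grid check of alpha * x^alpha * B(alpha, x*alpha) >= 1
for alpha in (0.01, 0.1, 0.3, 0.5, 0.9, 1.0):
    for x in (0.01, 0.5, 1.0, 10.0, 1e4, 1e8):
        assert F(alpha, x) > 1.0 - 1e-5
```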
| fedja | 1,131 | <p>You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:</p>
<p>Let <span class="math-container">$\mu,\nu$</span> be non-negative measures and <span class="math-container">$f,g$</span> be non-negative functions such that there exists <span class="math-container">$s_0>0$</span> with the property that <span class="math-container">$\mu\{f>s\}\ge \nu\{g>s\}$</span> for <span class="math-container">$s\le s_0$</span> and the reverse inequality holds for <span class="math-container">$s\ge s_0$</span>. Suppose also that <span class="math-container">$\int f^q\,d\mu=\int g^q\,d\nu<+\infty$</span> for some <span class="math-container">$q>0$</span>. Then, as long as the integrals in question are finite, we have <span class="math-container">$\int f^p\,d\mu\ge \int g^p\,d\nu$</span> for <span class="math-container">$0<p\le q$</span> and the reverse inequality holds for <span class="math-container">$p\ge q$</span>.</p>
<p>The proof of the lemma is rather straightforward. Let <span class="math-container">$p\le q$</span> (that is the case you are really interested in)
<span class="math-container">$$
\int f^p\,d\mu-\int g^p\,d\nu=p\int_0^\infty s^p[\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s
\\
=p\int_0^\infty [s^p-s_0^{p-q}s^q][\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s\ge 0\,.
$$</span> </p>
<p>Now we use it with <span class="math-container">$f(t)=t(1-t)^x$</span>, <span class="math-container">$d\mu=\frac{dt}{t(1-t)}$</span> on <span class="math-container">$(0,1)$</span>, <span class="math-container">$g(t)=t$</span>, <span class="math-container">$d\nu=\frac{dt}{t}$</span> on <span class="math-container">$(0,\frac1x)$</span>. Since the maximum of <span class="math-container">$t(1-t)^x$</span> is attained at <span class="math-container">$t=\frac{1}{x+1}$</span>, we see that the function <span class="math-container">$s\mapsto \mu\{f>s\}$</span> drops to <span class="math-container">$0$</span> before the function <span class="math-container">$s\mapsto \nu\{g>s\}$</span>. Also, the first function has larger in absolute value negative derivative than the second one for each value of <span class="math-container">$s$</span> where it is still positive. To see it, notice that the set where <span class="math-container">$f>s$</span> is an interval <span class="math-container">$(u,v)=(u(s),v(s))$</span> that shrinks as <span class="math-container">$s$</span> increases and the left end <span class="math-container">$u$</span> of this interval satisfies
<span class="math-container">$$
du\left(\frac 1u-\frac x{1-u}\right)=\frac{ds}s\,,
$$</span>
so trivially
<span class="math-container">$$
\frac{du}{u(1-u)}\ge \frac{du}u>\frac {ds}s
$$</span>
The right end moving to the left can only increase the decay speed. Finally, for <span class="math-container">$q=1$</span>, the integrals are equal (which also shows that the graphs of the distribution functions must indeed intersect), so for <span class="math-container">$0<p\le 1$</span> (which plays the role of <span class="math-container">$\alpha$</span>), we have the desired inequality.</p>
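<p>Specializing the lemma to these choices with <span class="math-container">$q=1$</span>: <span class="math-container">$\int_0^1 t^{p-1}(1-t)^{px-1}\,dt=B(p,px)$</span> and <span class="math-container">$\int_0^{1/x}t^{p-1}\,dt=x^{-p}/p$</span>, so for <span class="math-container">$0<p\le 1$</span> the conclusion reads <span class="math-container">$p\,x^p B(p,px)\ge 1$</span>, with equality at <span class="math-container">$p=1$</span>. A quick numerical sanity check (not a proof), computing the Beta function via <span class="math-container">$B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$</span>:</p>

```python
import math

def beta(a, b):
    # Euler Beta function via Gamma: B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

for x in (0.5, 1.0, 2.0, 10.0):
    for p in (0.1, 0.3, 0.7, 1.0):
        val = p * x**p * beta(p, p * x)   # should be >= 1 for 0 < p <= 1
        assert val >= 1 - 1e-12, (x, p, val)

# equality in the borderline case p = q = 1, since B(1, x) = 1/x
assert abs(1 * 5.0 * beta(1, 5.0) - 1) < 1e-12
```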
|
2,171,237 | <p>$f(x)$ is continuous on $[0,\pi]$ and $\int_0^\pi{f(x)\sin xdx} = \int_0^\pi{f(x)\cos xdx} = 1.$</p>
<p>Find $\min\int_0^\pi {f^2(x)dx}.$</p>
<p>I try to solve this problem by this:
$$\begin{array}{l}
{\left( {\int\limits_0^\pi {f(x)\sin xdx} } \right)^2} \le \left( {\int\limits_0^\pi {{f^2}(x){{\sin }^2}xdx} } \right)\left( {\int\limits_0^\pi {dx} } \right) \le \pi \int\limits_0^\pi {{f^2}(x){{\sin }^2}xdx} \\
{\left( {\int\limits_0^\pi {f(x)\cos xdx} } \right)^2} \le \pi \int\limits_0^\pi {{f^2}(x){{\cos }^2}xdx} \\
\Rightarrow \pi \int\limits_0^\pi {{f^2}(x)\left( {{{\sin }^2}x + {{\cos }^2}x} \right)dx} \ge 1 + 1 = 2
\end{array}$$
The thing is I can't find $f(x)$ to let the equation happens. Any help? Thank you in advance.</p>
| Jacky Chong | 369,395 | <p>Let us consider the Fourier series of $f$
\begin{align}
f(x) = \frac{1}{2}a_0+\sum^\infty_{n=1} a_n \cos nx+ \sum^\infty_{n=1} b_n \sin nx
\end{align}
then that means
\begin{align}
\int^\pi_0 f^2(x)\ dx= \frac{\pi}{4}a_0^2+\frac{\pi}{2}\sum^\infty_{n=1}(a_n^2+b_n^2).
\end{align}
Since
\begin{align}
\int^\pi_0 f(x) \cos x\ dx = 1
\end{align}
then this means $a_1= 2/\pi$ and likewise $b_1 = 2/\pi$. It follows
\begin{align}
\int^\pi_0 f^2(x)\ dx \geq \frac{4}{\pi}= \frac{\pi}{2}(a_1^2+b_1^2).
\end{align}</p>
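<p>To close the gap the asker noticed: the bound $\frac4\pi$ is attained by $f_0(x)=\frac2\pi(\sin x+\cos x)$. Indeed $\int_0^\pi\sin^2x\,dx=\int_0^\pi\cos^2x\,dx=\frac\pi2$ and $\int_0^\pi\sin x\cos x\,dx=0$, so $f_0$ meets both constraints, and writing $f=f_0+g$ with $\int_0^\pi g\sin x\,dx=\int_0^\pi g\cos x\,dx=0$ gives $\int_0^\pi f^2=\frac4\pi+\int_0^\pi g^2\ge\frac4\pi$. A numerical check with a simple midpoint rule:</p>

```python
import math

def midpoint(h, a, b, n=20_000):
    # midpoint-rule quadrature of h over [a, b]
    dx = (b - a) / n
    return dx * sum(h(a + (k + 0.5) * dx) for k in range(n))

f0 = lambda x: (2 / math.pi) * (math.sin(x) + math.cos(x))

# f0 satisfies both integral constraints ...
assert abs(midpoint(lambda x: f0(x) * math.sin(x), 0, math.pi) - 1) < 1e-6
assert abs(midpoint(lambda x: f0(x) * math.cos(x), 0, math.pi) - 1) < 1e-6
# ... and attains the claimed minimum 4/pi
assert abs(midpoint(lambda x: f0(x) ** 2, 0, math.pi) - 4 / math.pi) < 1e-6
```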
|
3,101,286 | <p>I would like to get this text translated from Dutch to English:</p>
<p><a href="https://i.stack.imgur.com/0IWlQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IWlQ.png" alt="enter image description here"></a></p>
<p>I tried using Google translator but the result is confusing me:</p>
<p><a href="https://i.stack.imgur.com/4SSrE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4SSrE.png" alt="enter image description here"></a></p>
| StackTD | 159,845 | <p><a href="https://i.stack.imgur.com/0IWlQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IWlQ.png" alt="enter image description here"></a></p>
<ul>
<li><p>Given a bounded sequence <span class="math-container">$(y_n)_n$</span> in <span class="math-container">$\mathbb{C}$</span>. Show that for every sequence <span class="math-container">$(x_n)_n$</span> in <span class="math-container">$\mathbb{C}$</span> for which the series <span class="math-container">$\sum_n x_n$</span> converges absolutely, the series <span class="math-container">$\sum_n \left(x_ny_n\right)$</span> also converges absolutely.</p></li>
<li><p>Suppose <span class="math-container">$(y_n)_n$</span> is a sequence in <span class="math-container">$\mathbb{C}$</span> with the following property: for each sequence <span class="math-container">$(x_n)_n$</span> in <span class="math-container">$\mathbb{C}$</span> for which the series <span class="math-container">$\sum_n x_n$</span> converges absolutely, the series <span class="math-container">$\sum_n \left(x_ny_n\right)$</span> also converges absolutely. Can you then conclude that <span class="math-container">$(y_n)_n$</span> is a bounded sequence? Explain!</p></li>
</ul>
|
2,657,053 | <blockquote>
<p>Suppose I know that
$$\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}\,\,\,\, \tag{1} $$
How can I prove the the following?
$$
\sum_{i=0}^{n-1} i^2=\frac{n(n-1)(2n-1)}{6}
$$</p>
</blockquote>
<hr>
<p>I have looked up the solution to the other problem but it seems to be a bit confusing to me. Is it possible to find a solution derived from equation 1 if you did <strong>NOT</strong> know this part:
$$
\frac{n(n-1)(2n-1)}{6}
$$</p>
| sku | 341,324 | <p>The OP asked for how to get the RHS if you didn't know about it. One way to do this is through rising factorials, which work like ordinary integration. </p>
<p>define $$n^{\bar 1} = n$$
$$n^{\bar 2} = n(n+1)$$ and so on</p>
<p>We can rewrite $n^2 = n^{\bar 2} - n^{\bar 1}$</p>
<p>$$\sum_{i=0}^{n} i^2= \sum_{i=0}^{n} \left(i^{\bar 2} - i^{\bar 1}\right) = \frac{n^{\bar 3}}{3} - \frac{n^{\bar 2}}{2} + C$$ </p>

<p>We can show $C = 0$ by checking the case $n = 0$.</p>

<p>$$\sum_{i=0}^{n} i^2 = \frac{n(n+1)(n+2)}{3} - \frac{n(n+1)}{2} = \frac{n(n+1)(2n+1)}{6}$$</p>
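<p>Both summation rules for rising factorials (and the resulting closed forms, including the shifted one from the question) are easy to confirm by brute force:</p>

```python
def rising(n, k):
    # rising factorial n^(k bar) = n (n+1) ... (n+k-1)
    out = 1
    for j in range(k):
        out *= n + j
    return out

for n in range(50):
    # summation rules for rising factorials (discrete analogue of integration)
    assert sum(rising(i, 2) for i in range(n + 1)) == rising(n, 3) // 3
    assert sum(rising(i, 1) for i in range(n + 1)) == rising(n, 2) // 2
    # the closed form itself, and the shifted version asked about
    assert sum(i * i for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(i * i for i in range(n)) == n * (n - 1) * (2 * n - 1) // 6
```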
|
2,054,949 | <p>From what I understand, these three concepts all describe the points where the function is not continuous. How to tell them apart? Thanks!</p>
| DanielWainfleet | 254,665 | <p>$z_0$ is a pole of $f$ iff $f$ is analytic on $\{z: 0<|z-z_0|<r\}$ for some $r>0,$ and $f(z_0)$ cannot be defined in such a way that $f$ is analytic on $\{z:|z-z_0|<r\},$ but also that $f(z)=(z-z_0)^ng(z)$ for $0<|z-z_0|<r$ for some $n\in \mathbb N,$ where $g$ is analytic on $\{z:|z-z_0|<r\}.$ For example if $f(z)=1/z+2/(z-1)^3$ then $f$ has a pole at $0$ and at $1.$</p>
<p>$z_0$ is a removable singularity of $f$ iff $f$ is analytic on $\{z:0<|z-z_0|<r\}$ and either (i) $f(z_0)$ is not defined, or (ii) $f(z_0)$ is defined but $f$ is not continuous at $z_0;$ but there is a (unique) value that be be assigned (or re-assigned) as $f(z_0)$ so that $f$ is analytic on $\{z: |z-z_0|<r\}.$</p>
<p>$z_0$ is an essential singularity iff $f$ is analytic on $\{z:0<|z-z_0|<r\}$ and $z_0$ is not a pole or a removable singularity. For example if $f(z)=e^{1/z}$ for $z\ne 0$ then $0$ is an essential singularity of $f.$</p>
<p>A zero of $f$ is any $z$ such that $f(z)=0.$ </p>
|
26,162 | <p>I'm the math department chair at a small university. Our general education program is non-traditional. The university is split into three areas. Students are expected to complete a major in one of the areas and earn minors in the other two. Overall, this is good because students get more depth in the chosen minor areas, but is detrimental to elementary education majors because they need a more general education. I would like to put together a basic math minor to help elementary education majors get the math they need to be effective teachers. This should be easy enough that elementary education majors would choose it over psychology and political science, yet challenging enough that anyone who earns the minor and graduates would be an elementary teacher math specialist. Does the following list of courses sound reasonable or do we need more?
</p>
<ul>
<li>Intermediate Algebra</li>
<li>College Algebra</li>
<li>Quantitative Reasoning (logic, financial math, basic statistics, etc.)</li>
<li>Math for Elementary Education Majors (standard course that covers topics on the PRAXIS)</li>
</ul>
| guest troll | 21,066 | <p>[Caveat, I'm just a citizen, not a teacher.]</p>
<p>I like it. Call it a "math ed minor" to clarify it is different from say a chemist getting a "math minor".</p>
<p>I think the two remedial algebra classes are fine. Kind of gives them some understanding of where the kids are headed after arithmetic. And there are aspects of elementary school arithmetic that are somewhat algebraic (long division and multiplication, fractions, even "r*t=d" problems). I'm OK with leaving out geometry (90% of the "race to calculus" in high school is algebra...you have to prioritize). I'm also fine leaving out calculus and higher classes. After all, these are primary school teachers we are talking about. I do quite like the quantitative reasoning class and the math ed class.</p>
<p>Keep it simple. Rack up a win. Don't let people screw it up by making it longer (more classes) or sticking in Rudin. 30% of a loaf is better than none. More sugar, less medicine. Present this very much as "kinder, gentler" (thanks George HW). [Nothing is stopping any of these ed majors from taking any math class they want...but let's be real, the demand for true majors courses is minuscule.]</p>
<p>And do something good for your department: your previous question asked about the PRAXIS class being taken out of your department and if you listen to the tough, tough crowd then the ed majors will just all take psych. I did notice your 09FEB comment saying you'd been able to save it by being more customer centric. Good job. There's a long history of math departments losing classes (e.g. stats) because they were not customer centric enough...wanted to teach math major stuff and were unsympathetic towards the needs of non math majors (often even ignorant and unwilling to learn about application and non math major careers).</p>
|
26,162 | <p>I'm the math department chair at a small university. Our general education program is non-traditional. The university is split into three areas. Students are expected to complete a major in one of the areas and earn minors in the other two. Overall, this is good because students get more depth in the chosen minor areas, but is detrimental to elementary education majors because they need a more general education. I would like to put together a basic math minor to help elementary education majors get the math they need to be effective teachers. This should be easy enough that elementary education majors would choose it over psychology and political science, yet challenging enough that anyone who earns the minor and graduates would be an elementary teacher math specialist. Does the following list of courses sound reasonable or do we need more?
</p>
<ul>
<li>Intermediate Algebra</li>
<li>College Algebra</li>
<li>Quantitative Reasoning (logic, financial math, basic statistics, etc.)</li>
<li>Math for Elementary Education Majors (standard course that covers topics on the PRAXIS)</li>
</ul>
| Justin Hancock | 20,719 | <p>The top priority should be ensuring that elementary education majors have a deep and mathematically-sound understanding of elementary school mathematics. They need to (re)learn the core mathematics that they will be teaching—whole number and fraction arithmetic—so that they can make sense of it for themselves and for their students. (Liping Ma's book <a href="https://www.taylorfrancis.com/books/mono/10.4324/9781003009443/knowing-teaching-elementary-mathematics-liping-ma" rel="noreferrer"><em>Knowing and Teaching Elementary Mathematics</em></a> which was recommended in <a href="https://matheducators.stackexchange.com/a/26068/20719">this answer to your previous question</a> gives a sobering account of U.S. elementary teachers' dismal knowledge of fractions.)</p>
<p>If the elementary education program at your institution is already doing a good job of this in their mathematics methods courses, then the courses you've suggested seem appropriate for a minor, if a bit light, as others have noted. I might suggest including a problem-solving or inquiry-based course that you can market as demonstrating how to implement student-centered pedagogies (Euclidean geometry lends itself well to this type of course), or some sort of a maker or constructionist course, or a modeling course, or a history of math course—something that pre-service elementary teachers will view as relevant to their teaching, since you're competing with psychology courses.</p>
<p>If elementary education majors are not already getting the math they need from their education program, which is what it sounds like, then courses that are focused specifically on elementary school mathematics would be much more useful for them, and you would probably see higher enrollment. As guest troll suggested, it might be more appropriate to call this a "math education minor" or a "math for teaching minor."</p>
<p>I highly recommend talking a look at some of <a href="https://math.berkeley.edu/%7Ewu/" rel="noreferrer">Hung-Hsi Wu's writings</a>. He's spent several decades thinking about <a href="https://math.berkeley.edu/%7Ewu/ICMtalk.pdf" rel="noreferrer">how mathematicians should be involved in mathematics teacher education</a>, and he's written a series of textbooks for mathematics teachers. You could start with this Notices of the AMS article, <a href="https://math.berkeley.edu/%7Ewu/NoticesAMS2011.pdf" rel="noreferrer">"The Mis-Education of Mathematics Teachers."</a></p>
<p>Here are what I think are some of the important points that he's made.</p>
<ul>
<li>School mathematics is not and should not be university mathematics. Defining a rational number as an equivalence class of ordered pairs of integers is appropriate for the mathematician's purposes, but not for the elementary teacher's purposes.</li>
<li>Nevertheless, school mathematics should still be rigorous mathematics. Fractions need to be defined in order to reason about them clearly. Thus, the elementary teacher should work with a definition like</li>
</ul>
<blockquote>
<p>Given a positive integer <span class="math-container">$n$</span>, divide the interval <span class="math-container">$[0,1]$</span> on the number line into <span class="math-container">$n$</span> parts of equal length. The fraction <span class="math-container">$\frac{1}{n}$</span> is the number located at the first division point to the right of <span class="math-container">$0$</span>. Given a positive integer <span class="math-container">$m$</span>, the fraction <span class="math-container">$\frac{m}{n}$</span> is the number to the right of <span class="math-container">$0$</span> by a distance that is <span class="math-container">$m$</span> times the distance between <span class="math-container">$\frac{1}{n}$</span> and <span class="math-container">$0$</span>.</p>
</blockquote>
<ul>
<li>The mathematics that has been taught in U.S. schools for the past several decades, as codified in textbooks, is not sound mathematics. In lieu of definitions, it gives confounding statements like</li>
</ul>
<blockquote>
<p>A fraction can be a <em>part of a whole</em>, a <em>ratio</em>, or a <em>quotient</em>. The fraction <span class="math-container">$\frac{3}{4}$</span> can be <span class="math-container">$3$</span> parts when the whole is divided into <span class="math-container">$4$</span> parts, such as three-quarters of a pie. It can represent a "ratio situation," such as "there are <span class="math-container">$3$</span> boys for every <span class="math-container">$4$</span> girls." It is also the result of dividing <span class="math-container">$3$</span> by <span class="math-container">$4$</span>, which can arise from a "partitioning situation" such as sharing <span class="math-container">$3$</span> cookies fairly between <span class="math-container">$4$</span> people.</p>
</blockquote>
<ul>
<li>Teachers are a product of the education system, so they have likely learned unsound mathematics and will likely teach unsound mathematics. Taking university mathematics courses will not help them unlearn and relearn the mathematics they need in order to be effective teachers. To break this cycle, mathematicians need to</li>
</ul>
<blockquote>
<p>consult with education colleagues, help design new mathematics courses for teachers, teach those courses, and offer constructive criticisms in every phase of this reorientation in preservice professional development.</p>
</blockquote>
|
2,777,982 | <p>I was asked to describe the surface described by</p>
<p>$${\bf r}^\top {\bf A} {\bf r} + {\bf b}^\top {\bf r} = 1,$$</p>
<p>where $3 \times 3$ positive definite matrix ${\bf A}$ and vector $\bf b$ are given.</p>
<p>My intuition tells me that it is a rotated ellipsoid with a centre that is off the origin. However, I am told to show this via the substitution ${\bf r} = {\bf x} + {\bf a}$, with $\bf a$ being a constant vector, and dictate the conditions on this vector to obtain a new quadric surface ${\bf x}^\top{\bf A}{\bf x} = C$. However, upon substitution, I get a ridiculously messy answer involving combinations of position and constant vectors. Is there a trick I am missing out on? Thank you!</p>
| Community | -1 | <p>Note: for convenience we use $2b$ instead of $b$.</p>
<p>$$(x+a)^TA(x+a)+2b^T(x+a)=x^TAx+a^TAx+x^TAa+a^TAa+2b^Tx+2b^Ta.$$</p>
<p>Notice that by symmetry of $A$, $x^TAa=a^TAx$. Collecting all the $x$ terms,</p>
<p>$$(2a^TA+2b^T)x$$ can be cancelled with the choice</p>
<p>$$a=-A^{-1}b.$$</p>
<p>Then</p>
<p>$$C=1-a^TAa-2b^Ta=1+b^TA^{-1}b\geq 1,$$ with equality iff $b=0$.</p>
<p>(As $A$ is positive definite, it is invertible. Note that this is a matrix version of the "complete the square" paradigm.)</p>
<hr>
<p>Going further, you can diagonalize the matrix and write, in the basis defined by the Eigenvectors</p>
<p>$$y^T\Lambda y=C$$</p>
<p>or</p>
<p>$$\lambda_0u^2+\lambda_1v^2+\lambda_2w_2=C.$$</p>
<p>As the three Eigenvalues are positive,</p>
<p>$$(\sqrt{\lambda_0}\,u)^2+(\sqrt{\lambda_1}\,v)^2+(\sqrt{\lambda_2}\,w)^2=C$$ describes a stretched sphere (an ellipsoid).</p>
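<p>A small numerical illustration, using the $2b$ convention from this answer; the matrix and vector below are made-up test values, with $A$ diagonal so that $A^{-1}$ is immediate. With $a=-A^{-1}b$ the cross terms cancel, and $r^\top Ar+2b^\top r=x^\top Ax-b^\top A^{-1}b$ for $r=x+a$, consistent with the value of $C$ found above:</p>

```python
import random

# diagonal positive definite A and a vector b (arbitrary test values)
A = [1.0, 2.0, 3.0]          # stands for A = diag(1, 2, 3)
b = [0.5, -1.0, 2.0]
a = [-b[i] / A[i] for i in range(3)]          # a = -A^{-1} b

def quad(r):
    # r^T A r + 2 b^T r
    return sum(A[i] * r[i] ** 2 + 2 * b[i] * r[i] for i in range(3))

bAinvb = sum(b[i] ** 2 / A[i] for i in range(3))  # b^T A^{-1} b

for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(3)]
    r = [x[i] + a[i] for i in range(3)]
    # shifting by a removes the linear term entirely
    assert abs(quad(r) - (sum(A[i] * x[i] ** 2 for i in range(3)) - bAinvb)) < 1e-9
```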
|
194 | <p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p>
<p><strong>How do we help students overcome their math anxieties?</strong></p>
| André Souza Lemos | 5,038 | <p>To my view, the most effective strategies deal with anxiety as a collective - and thus political and cultural - issue. The answers provided by adamblan and Mandy Jansen are quite to the point, in that regard, and what I'll add here is just a complement.</p>
<p>Anxiety can be defined as overreaction to falsely perceived <strong>risk</strong>. Risk taking is socially divided, just like labor is. Risk avoidance behavior is commonly present in children. It can obviously be reinforced positively or negatively, depending on what kind of parental guidance is given, and these decisions are heavily informed by the cultural background of the parents and/or caregivers. It has been argued that biological characteristics are relevant to determining what degree of exposure to danger can a person tolerate (so there would be inclinations based on gender), but anthropological research tells us that human behavior is extremely plastic, so universal truths are scarce here.</p>
<p>That said, what kind of risk is presented to a child in learning mathematics? There is, of course, the social risk of being outcast by failing to fit properly into the assigned stereotype. This fear has been brilliantly addressed by previous answers, so I'll skip it here.</p>
<p>There is possibly, though, a different kind of risk perception, more subtle, and with deeper roots than social prejudice. To put it in simple terms: the human brain consumes a lot of energy, and streams of thought that are (considered) energy efficient tend to be strongly preferred. <em>This also has a social signature</em>: anti-intellectualism is the way societies have to indicate to the individual that to be "lost in thought" is dangerous to herself and to the larger group. Being considered the way of thinking that is more liberated from experiential constraints, mathematics may have been historically associated with the biggest of threats. </p>
<p>In other words, if thinking "complicated" thoughts has been considered, <em>in itself</em>, wasteful and possibly dangerous to the cohesion of the group - and thus risky - then thinking mathematically has to be extremely problematic. This kind of prejudice affects more directly, obviously, those that are not in a position of power. In this case, though, it won't be enough to address cultural stereotypes, because the anti-mathematical prejudice is largely unconscious, and even more pervasive. I'd say that it lies outside of what can be dealt with in a pedagogical environment. </p>
<p>This predisposition can cause enormous suffering, inside and outside of the classroom. It is a clinical problem, and a social one. The issue is particularly urgent on our present time, when the ability to think mathematically is, maybe for the first time in human history, not an accidental feature (virtue or menace), but the most valuable resource that we have, for our continued survival on this planet. </p>
|
250,364 | <blockquote>
<p><strong>Problem</strong> Prove that $$\log(1 + \sqrt{1+x^2})$$ is uniformly continuous.</p>
</blockquote>
<p>My idea is to consider $|x - y| < \delta$, then show that
$$|\log(1 + \sqrt{1+x^2}) - \log(1 + \sqrt{1+y^2})|
= \bigg|\log\bigg(\dfrac{1 + \sqrt{1+x^2}}{1 + \sqrt{1+y^2}}\bigg)\bigg| < \epsilon$$
But I couldn't find a choice for $x, y$ that could implies the above expression is true. Completing the square doesn't seem to help at all. Any idea?</p>
| Martin Argerami | 22,857 | <p>Note that $t\mapsto \log t$ is uniformly continuous on $[1,\infty)$ (proven below). Note also that $t\mapsto 1+\sqrt{1+t^2}$ is uniformly continuous on $\mathbb R$ (also proven below). As the composition of uniformly continuous functions is uniformly continuous, the result follows.</p>
<p>To see that $\log$ is uniformly continuous on $[1,\infty)$, fix $\varepsilon>0$. Assume that $1\leq x<y$, $y-x<\delta$. Then $y/x<1+\delta/x\leq1+\delta$, and so
$$
\log y-\log x=\log \frac y x\leq\log(1+\delta);
$$
if we choose $\delta$ small enough so that $\log(1+\delta)<\varepsilon$, we are done. </p>
<p>For the uniform continuity of $g:t\mapsto 1+\sqrt{1+t^2}$, fix $\varepsilon>0$. Choose $x_0>1$ such that $\sqrt{1+t^2}>3/\varepsilon$ whenever $|t|\geq x_0-1$. Since $g$ is continuous on the compact set $[-x_0-1,x_0+1]$, it is uniformly continuous there. So there exists $\delta_1>0$ such that $x,y\in[-x_0-1,x_0+1]$ with $|y-x|<\delta_1$ implies $|g(y)-g(x)|<\varepsilon$. </p>

<p>Now let $\delta=\min\{\delta_1,\varepsilon/3,1\}$. Suppose that $|x-y|<\delta$. If $|x|<x_0$, then $|y|<x_0+1$ and so $|g(y)-g(x)|<\varepsilon$ by the uniform continuity on the compact set. If $|x|\geq x_0$, then $|y|>x_0-1$ as well, and</p>
<p>$$
$$
|g(y)-g(x)|=|\sqrt{1+y^2}-\sqrt{1+x^2}|\leq|\sqrt{1+y^2}-|y||+||y|-|x||+||x|-\sqrt{1+x^2}|\\
\leq\frac1{|y|+\sqrt{1+y^2}}+|y-x|+\frac1{|x|+\sqrt{1+x^2}}<\frac1{\sqrt{1+y^2}}+\frac1{\sqrt{1+x^2}}+\frac\varepsilon3\\
<\frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3=\varepsilon.
$$</p>
|
2,130,911 | <p>I'm unsure how to compute the following : 3^1000 (mod13)</p>
<p>I tried working through an example below,</p>
<p>ie) Compute $3^{100,000} \bmod 7$
$$
3^{100,000}=3^{16,666\cdot 6+4}=(3^6)^{16,666}\cdot 3^4\equiv 1^{16,666}\cdot 9^2\equiv 2^2=4 \pmod 7
$$</p>
<p>but I don't understand why they divide 100,000 by 6 to get 16,666. Where did 6 come from? </p>
| Maczinga | 411,133 | <p>Just use Fermat's Little Theorem</p>
<p>$a^{p}\equiv a\mod p$</p>
<p>with $p=7$ in your example and $p=13$ in your former question.</p>
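<p>Concretely, Fermat's little theorem lets you reduce the exponent modulo $p-1$ (valid here since $\gcd(3,p)=1$): modulo $7$ that is $6$ — which is where the $6$ in the worked example comes from — and modulo $13$ it is $12$, so $3^{1000}\equiv 3^{1000\bmod 12}=3^4=81\equiv 3\pmod{13}$. Python's three-argument <code>pow</code> confirms both:</p>

```python
# 3^1000 (mod 13): reduce the exponent mod p-1 = 12 (since gcd(3, 13) = 1)
assert pow(3, 1000, 13) == pow(3, 1000 % 12, 13) == 81 % 13 == 3

# the worked example: 3^100000 (mod 7), exponent reduced mod p-1 = 6
assert pow(3, 100_000, 7) == pow(3, 100_000 % 6, 7) == 4
```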
|
1,413,363 | <p>The question:</p>
<p>Find values of $a,b,c.$ if $\displaystyle \frac{x^2+1}{x^2+3x+2} = \frac{a}{x+2}+\frac{bx+c}{x+1}$</p>
<p>My working so far:</p>
<p><a href="https://i.imgur.com/VegifVa.jpg" rel="nofollow noreferrer">http://i.imgur.com/VegifVa.jpg</a></p>
<p>How do I isolate $a$, $b$ and $c$?</p>
| juantheron | 14,311 | <p>Given $$\displaystyle \frac{x^2+1}{x^2+3x+2} = \frac{a}{x+2}+\frac{bx+c}{x+1} = \frac{a(x+1)+(bx+c)(x+2)}{x^2+3x+2}$$</p>
<p>So $$x^2+1 = bx^2+(a+2b+c)x+(a+2c)$$</p>
<p>Now equating Coefficients, we get $b=1$ and $(a+2b+c) = 0$ and $a+2c=1$</p>
<p>So Put $b=1$ in $a+2b+c=0\Rightarrow a+c=-2$ and above $a+2c=1$</p>
<p>so we get $c=3$ and $a=-5$</p>
<p>So we get $(a,b,c) = (-5,1,3)$</p>
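<p>A quick numerical spot-check of $(a,b,c)=(-5,1,3)$ at a few sample points away from the poles $x=-1,-2$:</p>

```python
a, b, c = -5, 1, 3

for x in (0.0, 0.5, 2.0, -3.0, 10.0):   # avoid the poles x = -1, -2
    lhs = (x**2 + 1) / (x**2 + 3 * x + 2)
    rhs = a / (x + 2) + (b * x + c) / (x + 1)
    assert abs(lhs - rhs) < 1e-12
```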
|
887,327 | <p>I am confused as to how to evaluate the infinite series $$\sum_{n=1}^\infty \frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n^2+n}}.$$<br>
I tried splitting the fraction into two parts, i.e. $\frac{\sqrt{n+1}}{\sqrt{n^2+n}}$ and $\frac{\sqrt{n}}{\sqrt{n^2+n}}$, but we know the two individual infinite series diverge. Now how do I proceed?</p>
| amrik | 167,970 | <p>Since $\sqrt{n^2+n}=\sqrt{n}\,\sqrt{n+1}$, each term of your series equals
$$\frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n}\,\sqrt{n+1}}=\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}},$$
so the partial sums telescope:
\begin{align}
& \lim_{N\to\infty}\sum_{n=1}^{N}\left({\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}}}\right) =
\lim_{N\to\infty}\left[\left(1-\frac{1}{\sqrt{2}}\right)+\left(\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{3}}\right)+\cdots+\left(\frac{1}{\sqrt{N}}-\frac{1}{\sqrt{N+1}}\right)\right] \\
& \hspace{5mm} =
\lim_{N\to\infty}\left(1-\frac{1}{\sqrt{N+1}}\right) = 1
\end{align}</p>
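<p>Numerically, the partial sums of the original series agree with the telescoped closed form $1-\frac{1}{\sqrt{N+1}}$ and approach the limit $1$:</p>

```python
import math

N = 100_000
s = sum((math.sqrt(n + 1) - math.sqrt(n)) / math.sqrt(n * n + n)
        for n in range(1, N + 1))

# partial sum matches the telescoped closed form 1 - 1/sqrt(N+1)
assert abs(s - (1 - 1 / math.sqrt(N + 1))) < 1e-9
assert abs(s - 1) < 0.01   # and is already close to the limit 1
```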
|
2,426,897 | <p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p>
<p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p>
<p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
| Bram28 | 256,001 | <p>For a rough estimate, I'd first divide by $100$ and think about $20.17$, for $\sqrt{2017} = 10 \sqrt{20.17}$. In fact, I would just consider $\sqrt{20}$: </p>
<p>You know $4^2=16$ and $5^2 =25$, so $\sqrt{20}$ is between $4$ and $5$, and is in fact close to the middle of them, i.e. close to $4.5$. Hence, $\sqrt{2017}$ will be close to $45$.</p>
<p>So, see what $45^2$ is ... and proceed from there.</p>
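<p>Carrying that out: $44^2=1936<2017<2025=45^2$, so $\sqrt{2017}$ is a little below $45$ — about $44.91$:</p>

```python
import math

assert 44**2 < 2017 < 45**2      # so 44 < sqrt(2017) < 45
assert math.isqrt(2017) == 44    # integer part of the square root
assert abs(math.sqrt(2017) - 44.91) < 0.01
```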
|
883,972 | <p>Let:</p>
<p>$$f(n) = n(n+1)(n+2)/(n+3)$$</p>
<p>Therefore :</p>
<p>$$f∈O(n^2)$$</p>
<p>However, I don't understand how it could be $n^2$, shouldn't it be $n^3$? If I expand the top we get $$n^3 + 3n^2 + 2n$$ and the biggest is $n^3$ not $n^2$.</p>
| amWhy | 9,003 | <p>But when you <strong><em>divide</em></strong> a degree-three polynomial, $\,n^3 + 3n^2 + 2n,\,$ by a degree-one polynomial, $\,n+3,\,$ you end up with a degree <strong><em>two</em></strong> polyonomial $n^2 + 2\;$ with remainder of $\quad \frac{-6}{n+3}$</p>
|
3,282,400 | <p>I would like to illustrate my confusion about this topic by building up the issue from more or less first principles. Let <span class="math-container">$U \subseteq \mathbb{R}^n$</span> be an open subset and let <span class="math-container">$f:U \to \mathbb{R}^m$</span>. We say that <span class="math-container">$f$</span> is totally differentiable at <span class="math-container">$a \in U$</span> if there exists a linear function <span class="math-container">$D_{a}f:\mathbb{R}^n \to \mathbb{R}^m$</span> such that the following holds.</p>
<p><span class="math-container">$\lim_{x \to a} \frac{||f(x)-f(a)-D_{a}f(x-a)||}{||x-a||} = 0$</span></p>
<p>We call <span class="math-container">$D_{a}f$</span> the total derivative of <span class="math-container">$f$</span> at <span class="math-container">$a$</span>. My question concerns how we then define the derivative function. For the familiar case in which <span class="math-container">$n=m=1$</span>, this is simple since the total derivative reduces to just</p>
<p><span class="math-container">$(D_{a}f)(b) = \big(\frac{df}{dx}\big{\rvert}_a\big)b$</span></p>
<p>for all <span class="math-container">$b \in \mathbb{R}$</span>. Here, <span class="math-container">$\frac{df}{dx}\big{\rvert}_a$</span> denotes the usual definition of the derivative of <span class="math-container">$f$</span> at <span class="math-container">$a$</span>. We then define the derivative function <span class="math-container">$f':U \to \mathbb{R}$</span> as</p>
<p><span class="math-container">$f'(y) := \frac{df}{dx}\big{\rvert}_y$</span></p>
<p>for all <span class="math-container">$y \in U$</span>. Let us now consider the case when <span class="math-container">$m=1$</span>. Representing the action of <span class="math-container">$D_{a}f$</span> via the Jacobian matrix, we have</p>
<p><span class="math-container">$(D_{a}f)(b) = \begin{bmatrix}
\frac{\partial{f}}{\partial{x_1}}\big{\rvert}_a & \cdots & \frac{\partial{f}}{\partial{x_n}}\big{\rvert}_a
\end{bmatrix}
\begin{bmatrix}
b_1 \\\
\vdots \\\
b_n
\end{bmatrix}$</span></p>
<p>for all <span class="math-container">$b=(b_1,\cdots,b_n)\in\mathbb{R}^n$</span>. This suggests that we define the derivative function <span class="math-container">$Df:U \to \mathbb{R}^n$</span> as</p>
<p><span class="math-container">$(Df)(y) := \big(\frac{\partial{f}}{\partial{x_1}}\big{\rvert}_y, \cdots, \frac{\partial{f}}{\partial{x_n}}\big{\rvert}_y\big)$</span></p>
<p>for all <span class="math-container">$y \in U$</span>. We could perhaps make a similar argument for the case in which <span class="math-container">$n=1$</span>. How does this extend to cases in which the Jacobian matrix is not a simple column or row vector? What definition of the derivative function is most natural in those cases? By natural, I mean that the definition should allow important identities like the product rule and chain rule to retain their obvious forms.</p>
| Fabio Lucchini | 54,738 | <p>Statement (b) can be proved by applying long homology sequence.
For consider the complexes of abelian groups:
<span class="math-container">\begin{align}
&A:\ldots\to A\xrightarrow f A\xrightarrow g A\xrightarrow f\ldots\\
&B:\ldots\to B\xrightarrow f B\xrightarrow g B\xrightarrow f\ldots\\
&A/B:\ldots\to A/B\xrightarrow f A/B\xrightarrow g A/B\xrightarrow f\ldots\\
\end{align}</span>
Then we get an exact sequence of complexes
<span class="math-container">$$\{0\}\to B\to A\to A/B\to\{0\}$$</span>
giving rise to the long homology sequence
<span class="math-container">$$H_1(A/B)\xrightarrow{\delta_1}H_0(B)\xrightarrow{\kappa_0}H_0(A)\xrightarrow{\pi_0}H_0(A/B)\xrightarrow{\delta_0}H_1(B)\xrightarrow{\kappa_1}H_1(A)\xrightarrow{\pi_1}H_1(A/B)\tag 1$$</span>
where we noted that
<span class="math-container">$$H_n(A)=\begin{cases}A_g/A^f&n\equiv 0\pmod 2\\A_f/A^g&n\equiv 1\pmod 2\end{cases}$$</span>
so that <span class="math-container">$q(A)=|H_1(A)|/|H_0(A)|$</span>.
The equation <span class="math-container">$q(A)=q(B)q(A/B)$</span> then follows from the exactness of <span class="math-container">$(1)$</span> for:<span class="math-container">$\newcommand\Ker{\operatorname{Ker}}\renewcommand\Im{\operatorname{Im}}$</span>
<span class="math-container">\begin{align}
q(B)(A/B)
&=\frac{|H_1(B)|}{|H_0(B)|}\frac{|H_1(A/B)|}{|H_0(A/B)|}\\
&=\frac{|\Ker\kappa_1||\Im\kappa_1|}{|\Ker\kappa_0||\Im\kappa_0|}
\frac{|\Ker\delta_1||\Im\delta_1|}{|\Ker\delta_0||\Im\delta_0|}\\
&=\frac{|\Im\kappa_1|}{|\Im\kappa_0|}
\frac{|\Ker\delta_1|}{|\Ker\delta_0|}\\
&=\frac{|\Ker\pi_1|}{|\Ker\pi_0|}
\frac{|\Im\pi_1|}{|\Im\pi_0|}\\
&=\frac{|H_1(A)|}{|H_0(A)|}\\
&=q(A)
\end{align}</span></p>
|
327,750 | <p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty (A_{1}^c \cap\cdots\cap A_{n-1}^c \cap A_n)$$</p>
<p>The results is obvious enough, but how to prove this</p>
| PeptideChain | 322,402 | <p>$$\bigcup_{n=1}^\infty (A_{1}^c \cap\cdots\cap A_{n-1}^c \cap A_n)=\bigcup_{n=1}^\infty A_n\cap(A_{1}^c \cap\cdots\cap A_{n-1}^c )\\=\bigcup_{n=1}^\infty A_n\cap(A_{1} \cup\cdots\cup A_{n-1} )^c=\bigcup_{n=1}^\infty A_n\setminus(A_{1} \cup\cdots\cup A_{n-1} )= \bigcup_{n=1}^\infty A_n $$</p>
<p>Using $A\cap B^c=A\setminus B$ and the fact that the excluded elements in the $n$th element of the union were already included in the previous $n-1$.</p>
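<p>A quick computational sanity check of the identity on randomly generated finite sets (a sketch; the helper name is my own):</p>

```python
import random

random.seed(0)

def disjointified_union(sets):
    """Union of A_1^c ∩ ... ∩ A_{n-1}^c ∩ A_n over n, i.e. of A_n \ (A_1 ∪ ... ∪ A_{n-1})."""
    seen = set()      # A_1 ∪ ... ∪ A_{n-1} accumulated so far
    result = set()
    for A in sets:
        result |= A - seen
        seen |= A
    return result

for _ in range(100):
    sets = [set(random.sample(range(20), random.randint(0, 10))) for _ in range(5)]
    assert disjointified_union(sets) == set().union(*sets)
```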
|
201,381 | <p>I have basic training in Fourier and harmonic analysis, and I want to enter and work in an area of number theory close to analysis (one that is of some interest to current researchers). </p>
<blockquote>
<p>Can you suggest some fundamental papers(or books); so after reading these I can have, hopefully(probably), I will have some thing to work on(I mean, chance of discovering something new)?</p>
</blockquote>
| Joe Silverman | 11,926 | <p>You could try the short book by Hugh Montgomery, which focuses closely on the interactions of harmonic analysis and number theory.</p>
<p><em>Ten Lectures on the Interface between Analytic Number Theory and Harmonic Analysis</em>
by Hugh L. Montgomery
Series: CBMS Regional Conference Series in Mathematics (Book 84)
Paperback: 220 pages
Publisher: American Mathematical Society (October 11, 1994)
ISBN-10: 0821807374</p>
|
328,670 | <p>Suppose I is an ideal of a ring R and J is an ideal of I. Is there a counterexample showing that J need not be an ideal of R? The hint given in the book is to consider a polynomial ring with coefficients from a field. Thanks.</p>
| Atom | 673,223 | <p>Take <span class="math-container">$R := \mathbb Q[x]$</span>, <span class="math-container">$I := x\, \mathbb Q[x]$</span> and <span class="math-container">$J := \{a_1 x + a_2 x^2 + \cdots + a_n x^n : n \ge 1,\ a_1\in\mathbb Z,\ a_2,\dots,a_n\in\mathbb Q\}$</span>. Then <span class="math-container">$I$</span> is an ideal of <span class="math-container">$R$</span> and <span class="math-container">$J$</span> is an ideal of <span class="math-container">$I$</span>. But <span class="math-container">$J$</span> is not an ideal of <span class="math-container">$R$</span>: for instance, <span class="math-container">$x \in J$</span> while <span class="math-container">$\tfrac12 x \notin J$</span>.</p>
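<p>A small mechanical illustration of this counterexample (my own sketch, not part of the answer): polynomials over <span class="math-container">$\mathbb Q$</span> as degree-to-coefficient dicts. The point is that multiplying an element of <span class="math-container">$J$</span> by a polynomial with zero constant term can never put a non-integer in the <span class="math-container">$x$</span>-coefficient, while multiplying by the scalar <span class="math-container">$\tfrac12 \in R$</span> can.</p>

```python
import random
from fractions import Fraction as F

def mul(p, q):
    """Multiply two polynomials given as {degree: Fraction coefficient} dicts."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, F(0)) + a * b
    return {d: c for d, c in out.items() if c != 0}

def in_I(p):   # I = x·Q[x]: zero constant term
    return p.get(0, F(0)) == 0

def in_J(p):   # J: zero constant term and integer x-coefficient
    return in_I(p) and p.get(1, F(0)).denominator == 1

x = {1: F(1)}
half = {0: F(1, 2)}                 # the scalar 1/2 in R = Q[x]

assert in_J(x)
assert not in_J(mul(half, x))       # (1/2)·x is not in J, so J is not an ideal of R

# But I·J ⊆ J: a product p·j with p ∈ I, j ∈ J has zero x-coefficient.
random.seed(1)
for _ in range(100):
    p = {d: F(random.randint(-9, 9), random.randint(1, 9)) for d in range(1, 4)}
    j = {1: F(random.randint(-9, 9)),
         2: F(random.randint(-9, 9), random.randint(1, 9)),
         3: F(random.randint(-9, 9), random.randint(1, 9))}
    assert in_I(p) and in_J(j) and in_J(mul(p, j))
```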
|
3,364,016 | <p>Can the following expression be written as the factorial of <span class="math-container">$m$</span>?</p>
<p><span class="math-container">$m(m-1)(m-2) \dots {m-(n-1)}$</span></p>
| azif00 | 680,927 | <p><span class="math-container">$$m(m-1)\cdots (m-n+1) = \frac{m(m-1)\cdots (m-n+1)(m-n)!}{(m-n)!} = \frac{m!}{(m-n)!}$$</span></p>
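<p>A quick numerical check of the identity <span class="math-container">$m(m-1)\cdots(m-n+1) = m!/(m-n)!$</span> (a sketch; the function name is my own):</p>

```python
from math import factorial

def falling(m, n):
    """The falling factorial m(m-1)...(m-n+1), a product of n factors."""
    out = 1
    for k in range(n):
        out *= m - k
    return out

for m in range(1, 10):
    for n in range(0, m + 1):
        assert falling(m, n) == factorial(m) // factorial(m - n)
```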
|
853,774 | <blockquote>
<p>If $(G,*)$ is a group and $(a * b)^2 = a^2 * b^2$ for all $a,b \in G$, then $(G, *)$ is abelian.</p>
</blockquote>
<p>I know that I have to show $G$ is commutative, i.e. $a * b = b * a$.</p>
<p>I have done this by first using $a^{-1}$ on the left, then $b^{-1}$ on the right, and I end up with an expression $ab = b * a$. Am I mixing up the multiplication and $*$ somehow?</p>
<p>Thanks</p>
| Doug Spoonwood | 11,300 | <p>Here's the proof that I got from Prover9:</p>
<pre><code>1 G(x,y) = G(y,x) # label(non_clause) # label(goal). [goal].
2 G(x,G(y,z)) = G(G(x,y),z). [assumption].
3 G(G(x,y),z) = G(x,G(y,z)). [copy(2),flip(a)].
4 G(x,N(x)) = 1. [assumption].
5 G(x,1) = x. [assumption].
6 G(G(x,y),G(x,y)) = G(G(x,x),G(y,y)). [assumption].
7 G(x,G(y,G(x,y))) = G(x,G(x,G(y,y))). [copy(6),rewrite([3(3),3(6)])].
8 G(c2,c1) != G(c1,c2). [deny(1)].
9 G(x,G(N(x),y)) = G(1,y). [para(4(a,1),3(a,1,1)),flip(a)].
12 G(x,G(y,G(x,G(y,z)))) = G(x,G(x,G(y,G(y,z)))). [para(7(a,1),3(a,1,1)),rewrite([3(4),3(3),3(2),3(7),3(6)]),flip(a)].
19 G(1,N(N(x))) = x. [para(4(a,1),9(a,1,2)),rewrite([5(2)]),flip(a)].
24 G(x,N(N(y))) = G(x,y). [para(19(a,1),3(a,2,2)),rewrite([5(2)])].
26 G(1,x) = x. [para(19(a,1),9(a,2)),rewrite([24(4),9(3)])].
31 G(x,G(N(x),y)) = y. [back_rewrite(9),rewrite([26(5)])].
58 G(x,G(y,x)) = G(x,G(x,y)). [para(4(a,1),12(a,1,2,2,2)),rewrite([5(2),4(4),5(4)])].
90 G(x,y) = G(y,x). [para(31(a,1),58(a,2,2)),rewrite([3(3),31(4)])].
91 $F. [resolve(90,a,8,a)].
</code></pre>
|
612,681 | <p>I am a little confused about the decidability of validity of first-order logic formulas. I have a textbook that seems to have two contradictory statements:</p>
<ol>
<li>Church's theorem: The validity of a first-order logic statement is undecidable. (Which I presume means one cannot prove whether a formula is valid or not.)</li>
<li>If A is valid then the semantic tableau of not(A) closes. (A is a first-order logic statement.)</li>
</ol>
<p>It seems to me that 2 contradicts 1, since you would then be able to just apply the semantic tableau algorithm to A and prove its validity.</p>
<p>I am sure I am missing something, but what?</p>
| André Nicolas | 6,312 | <p>What about if the sentence is not valid? Then the algorithm will not terminate. </p>
<p>Alternatively, for any of the usual proof systems, we can enumerate all sequences of sentences, and check mechanically for each sequence whether it is a proof of $\varphi$. Not very useful if $\varphi$ is not a theorem. </p>
<p><strong>Remark:</strong> We can get something useful out of the idea. Suppose that $T$ is a recursively axiomatized theory which is <strong>complete</strong> (for any $\varphi$, either $\varphi$ is a theorem of $T$, or $\lnot\varphi$ is a theorem of $T$). Then there is indeed an algorithm that, for any input $\varphi$, will determine whether or not $\varphi$ is a theorem of $T$. </p>
|
11,518 | <p>How does one prove that $\mathcal{O}_{\sqrt[3]{3}}$ is a Euclidean domain? I heard that one should prove the following, but why is it enough?</p>
<p>For any $ a,b,c\in\mathbb{R}$, prove that there are $ x,y,z\in\mathbb{R}$ such that $ x-a,y-b,z-c\in\mathbb{Z}$ and that
$$-1\leq x^3+3y^3+9z^3-9xyz\leq 1.$$</p>
| Bill Dubuque | 242 | <p>As Alex mentioned, the proof that you have in mind amounts to showing that your cubic field is Euclidean with respect to the norm. In fact it is known that there are only three pure cubic fields $\rm\ Q(\sqrt[3] m)\ $ that are norm Euclidean, viz. $\rm\ \mathbb Q(\sqrt[3] 2),\ \mathbb Q(\sqrt[3] 3),\ \mathbb Q(\sqrt[3] {10})\:.\ $ You can find the proofs in Cioffari: <a href="http://www.ams.org/journals/mcom/1979-33-145/S0025-5718-1979-0514835-5/S0025-5718-1979-0514835-5.pdf" rel="nofollow">The Euclidean Condition in Pure Cubic and Complex Quartic Fields.</a></p>
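<p>The condition quoted in the question says that the norm form $N(x,y,z) = x^3+3y^3+9z^3-9xyz$ (the norm of $x + y\sqrt[3]3 + z\sqrt[3]9$) can always be made at most $1$ in absolute value by integer translation, which is exactly norm-Euclideanity. A brute-force search sketch of my own, illustrating that condition numerically (the search radius is a practical choice, not part of any proof):</p>

```python
from itertools import product

def norm(x, y, z):
    """Norm of x + y*3^(1/3) + z*3^(2/3) over Q."""
    return x**3 + 3*y**3 + 9*z**3 - 9*x*y*z

def find_translate(a, b, c, radius=3):
    """Look for x, y, z with x-a, y-b, z-c integers and |N(x,y,z)| <= 1."""
    for i, j, k in product(range(-radius, radius + 1), repeat=3):
        x, y, z = a + i, b + j, c + k
        if abs(norm(x, y, z)) <= 1:
            return x, y, z
    return None

assert norm(1, 0, 0) == 1
assert norm(0.5, 0.5, 0.5) == 0.5   # so (1/2, 1/2, 1/2) already works for a=b=c=1/2
sol = find_translate(0.9, 0.1, 0.2)
assert sol is not None and abs(norm(*sol)) <= 1
```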
|
1,648,587 | <blockquote>
<p><strong>Problem.</strong> Consider two arcs <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> embedded in <span class="math-container">$D^2\times I$</span> as shown in the figure. The loop <span class="math-container">$\gamma$</span> is obviously nullhomotopic in <span class="math-container">$D^2\times I$</span>, but show that there is no nullhomotopy of <span class="math-container">$\gamma$</span> in the complement of <span class="math-container">$\alpha\cup \beta$</span>.<br />
<a href="https://i.stack.imgur.com/0GU17.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GU17.png" alt="enter image description here" /></a></p>
</blockquote>
<p>I tried to use van Kampen's theorem to find the fundamental group of <span class="math-container">$X=D^2\times I-\alpha\cup \beta$</span>. Let <span class="math-container">$A=D^2\times I-\alpha$</span> and <span class="math-container">$B=D^2\times I-\beta$</span>. To find the fundamental group of <span class="math-container">$A$</span>, we note that after a homeomorphism, <span class="math-container">$A$</span> looks like a cylinder with its axis removed. The fundamental group of <span class="math-container">$A$</span> is thus <span class="math-container">$\mathbf Z$</span>. Similarly for <span class="math-container">$B$</span>. Now we need to find the normal subgroup in <span class="math-container">$\pi_1(A)\sqcup \pi_1(B)$</span> generated by words of the form <span class="math-container">$i_{AB}(\omega)i_{BA}(\omega)^{-1}$</span> and quotient by it. Here <span class="math-container">$i_{AB}:A\cap B\to A$</span> and <span class="math-container">$i_{BA}:A\cap B\to B$</span> are the inclusion maps and <span class="math-container">$\omega$</span> is a loop in <span class="math-container">$A\cap B$</span>. Intuitively, the only loops in <span class="math-container">$A\cap B$</span> whose image in <span class="math-container">$A$</span> is nontrivial in <span class="math-container">$A$</span> are the ones which link with <span class="math-container">$\alpha$</span>. I am not sure how to say this precisely and I will be grateful if someone can help me with this. Similarly for <span class="math-container">$B$</span>. I am not able to make progress from here.</p>
| amir bahadory | 204,172 | <p>Let <span class="math-container">$X$</span> denote the complement of <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> in <span class="math-container">$D^{2} \times I$</span>. Since the two arcs <span class="math-container">$\alpha, \beta$</span> can be deformed to two parallel lines, we see that <span class="math-container">$X$</span> is homotopy equivalent to a disk minus two distinct points, i.e. <span class="math-container">$X \simeq D^{2}-\{a, b\}$</span>. The loop <span class="math-container">$\gamma$</span> is just the boundary of the disk, and <span class="math-container">$X$</span> deformation retracts to a wedge of two circles (a figure eight). Hence the fundamental group of <span class="math-container">$X$</span> is free on two generators,
i.e. <span class="math-container">$\pi_{1}(X) \cong \mathbb{Z} * \mathbb{Z}$</span>. Since <span class="math-container">$\gamma$</span> is the boundary circle, it represents the commutator of the two generators, which is a nontrivial element of the free group. Therefore there is no nullhomotopy of <span class="math-container">$\gamma$</span> in <span class="math-container">$X$</span>.</p>
|
181,110 | <p>On a Euclidean plane, the shortest distance between any two distinct points is the line segment joining them. How can I see why this is true? </p>
| Rahul Shah | 621,670 | <p>Think of all possible paths as geometrical figures. In a triangle, the sum of any two sides is greater than the third side. Hence if you go by any path other than the straight line, you travel more.</p>
|
4,304 | <p>I am trying to understand <a href="http://en.wikipedia.org/wiki/All-pairs_testing" rel="nofollow noreferrer"><strong>pairwise testing</strong></a>.</p>
<p>How many combinations of tests would be there for example, if</p>
<blockquote>
<p><code>a</code> can take values from 1 to m</p>
<p><code>b</code> can take values from 1 to n</p>
<p><code>c</code> can take values from 1 to p</p>
</blockquote>
<p><code>a</code>, <code>b</code> and <code>c</code> can take m, n and p distinct values respectively. <strong>What is the total number of pairwise combinations possible?</strong></p>
<hr />
<p>With a pairwise testing tool that I am testing, I am getting 40 results for m = n = p = 6. I am trying to mathematically understand how I get 40 values.</p>
| Douglas S. Stones | 139 | <p>Pairwise testing tests for all possible 2-way interactions efficiently -- I gave a quick overview here: <a href="https://cstheory.stackexchange.com/questions/891/">https://cstheory.stackexchange.com/questions/891/</a></p>
<p>You are looking for strength 2 <a href="http://math.nist.gov/coveringarrays/coveringarray.html" rel="nofollow noreferrer">covering arrays</a>. In each pair of columns every pair of symbols occur -- this ensures all 2-way interactions are observed in some way. Here's a very simple example of a covering array of strength 2 with 2 columns:</p>
<pre><code>11
12
21
22
12
</code></pre>
<p>What castel has drawn is essentially the <a href="http://en.wikipedia.org/wiki/Latin_square" rel="nofollow noreferrer">Latin square</a>:</p>
<pre><code>123456
612345
561234
456123
345612
234561
</code></pre>
<p>If you look at each entry and write the list (r,c,s), where r is the row index, c is the column index, and s is the symbol, you will construct an orthogonal array (as depicted below) -- a covering array of strength 2 with the minimum number of rows (36).</p>
<pre><code>111
122
133
...
661
</code></pre>
<p>In fact, Latin squares exist for all orders n. So if you have three columns (e.g. three variables) and n symbols for each variable, then you can always find a strength 2 covering array with n<sup>2</sup> rows.</p>
<p>Many combinatorial designs give rise to particularly efficient covering arrays. Strength 2 covering arrays with more than three columns and n<sup>2</sup> rows are equivalent to sets of <a href="http://designtheory.org/library/encyc/mols/a/" rel="nofollow noreferrer">mutually orthogonal Latin squares</a> (the reference shows the construction).</p>
<p>In your case, if you have 40 results, then you are not using the most efficient covering array.</p>
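<p>The construction described above — reading off one (r, c, s) triple per cell of a Latin square — can be sketched and checked mechanically. A rough Python illustration of my own (the cyclic square here is shifted differently from the one drawn above, but serves the same purpose):</p>

```python
from itertools import combinations, product

n = 6
# cyclic Latin square of order n: entry (r, c) is r + c mod n
square = [[(r + c) % n for c in range(n)] for r in range(n)]

# orthogonal array: one row (r, c, s) per cell; n^2 rows, 3 columns
rows = [(r, c, square[r][c]) for r in range(n) for c in range(n)]
assert len(rows) == n * n   # 36 rows, the minimum for strength 2 with 6 symbols

# strength 2: every pair of the 3 columns covers every pair of symbols
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in rows}
    assert pairs == set(product(range(n), repeat=2))
```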
|
2,735,984 | <p>I tried to solve this recurrence by taking out $n+1$ as a common factor on the RHS, but I still have $n \cdot a_n$ and $a_n$</p>
| Kiryl Pesotski | 417,344 | <p>$$a_{n}=\Big(\frac{1}{n}+n\Big)a_{n-1}+3(n+1)$$
Divide by
$$\prod_{k=1}^{n}\Big(\frac{1}{k}+k\Big)$$
To give
$$\frac{a_{n}}{\prod_{k=1}^{n}\big(\frac{1}{k}+k\big)}=\frac{a_{n-1}}{\prod_{k=1}^{n-1}\big(\frac{1}{k}+k\big)}+\frac{3(n+1)}{\prod_{k=1}^{n}\big(\frac{1}{k}+k\big)}$$
Let
$$A_{n}=\frac{a_{n}}{\prod_{k=1}^{n}\big(\frac{1}{k}+k\big)}$$
To give
$$A_{n}-A_{n-1}=f(n)$$
Where $f(n)=\frac{3(n+1)}{\prod_{k=1}^{n}\big(\frac{1}{k}+k\big)}$. Now sum both sides from $2$ to $n$
$$\sum_{k=2}^{n}(A_{k}-A_{k-1})=\sum_{k=2}^{n}f(k)$$
This is
$$A_{n}-A_{1}=\sum_{k=2}^{n}f(k)$$
Thus
$$A_{n}=A_{1}+\sum_{m=2}^{n}\frac{3(m+1)}{\prod_{k=1}^{m}\big(\frac{1}{k}+k\big)}$$
and
$$a_{n}=\Big(\prod_{l=1}^{n}\big(\frac{1}{l}+l\big)\Big)\Big(\frac{a_{1}}{2}+\sum_{m=2}^{n}\frac{3(m+1)}{\prod_{k=1}^{m}\big(\frac{1}{k}+k\big)}\Big)$$</p>
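<p>A quick check of this closed form against the recurrence, using exact rational arithmetic (a sketch; the function names and the sample values of $a_1$ are my own choices):</p>

```python
from fractions import Fraction as F

def a_rec(n, a1):
    """Iterate the recurrence a_k = (1/k + k) a_{k-1} + 3(k+1)."""
    a = F(a1)
    for k in range(2, n + 1):
        a = (F(1, k) + k) * a + 3 * (k + 1)
    return a

def a_closed(n, a1):
    """The closed form derived above."""
    s = F(a1) / 2                 # A_1 = a_1 / (1/1 + 1)
    partial = F(2)                # prod_{k=1}^{1} (1/k + k)
    for m in range(2, n + 1):
        partial *= F(1, m) + m    # now prod_{k=1}^{m} (1/k + k)
        s += F(3 * (m + 1)) / partial
    prod = F(1)
    for l in range(1, n + 1):
        prod *= F(1, l) + l
    return prod * s

for a1 in (0, 1, F(7, 3)):
    for n in range(1, 12):
        assert a_rec(n, a1) == a_closed(n, a1)
```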
|
4,030,359 | <p>Consider the abstract von Neumann algebra
<span class="math-container">$$M:= \ell^\infty-\bigoplus_{i \in I} B(H_i)$$</span>
which consists of elements <span class="math-container">$(x_i)_i$</span> with <span class="math-container">$\sup_i \|x_i\| < \infty$</span> and <span class="math-container">$x_i \in B(H_i)$</span>.</p>
<p>Let <span class="math-container">$N$</span> be the algebraic direct sum <span class="math-container">$\bigoplus_i B(H_i)$</span>, thus it consists of elements <span class="math-container">$(x_i)_i$</span> such that only finitely many <span class="math-container">$x_i$</span> are non-zero. I want to show that <span class="math-container">$N$</span> is <span class="math-container">$\sigma$</span>-weakly dense in <span class="math-container">$M$</span>.</p>
<p>I guess my main problem is that I don't understand the <span class="math-container">$\sigma$</span>-weak topology on <span class="math-container">$M$</span>. By a result of Sakai, it is the unique topology on <span class="math-container">$M$</span> coming from a weak<span class="math-container">$^*$</span>-topology when we realise <span class="math-container">$M$</span> as the dual of some other Banach space.</p>
<p>Maybe, we can do the following
<span class="math-container">$$M \cong \ell^\infty-\bigoplus_{i \in I} T(H_i)^* \cong \left(\ell^1-\bigoplus_{i \in I} T(H_i)\right)^*$$</span>
But this does not seem very practical! Any insight into the matter will be appreciated!</p>
| Calvin Khor | 80,734 | <p>Just pointing out that the example in the first linked post is easily modified to produce a function that is non-differentiable at <span class="math-container">$0$</span>, not convex at <span class="math-container">$0$</span>, and <span class="math-container">$f(0)=0$</span> is a strict minimum (i.e. the smoothness was supposed to make it "harder" to prove):</p>
<p><span class="math-container">$$f(x)=\begin{cases}x^2 + \max(0,x)^2\sin^2 \frac1x & x\neq 0\\ 0 & x=0\end{cases}$$</span>
The <span class="math-container">$x^2$</span> makes it so that the unique zero of <span class="math-container">$f$</span> is <span class="math-container">$x=0$</span>, and <span class="math-container">$f>0$</span> otherwise. It's not convex to the right of <span class="math-container">$0$</span> because <span class="math-container">$f$</span> "jumps" between the <span class="math-container">$x^2$</span> curve and the <span class="math-container">$2x^2$</span> curve infinitely often. That it is not differentiable at <span class="math-container">$0$</span> is more or less standard.</p>
|
1,255,970 | <p>What is
$$\int_{K} e^{a \cdot x+ b \cdot y} \mu(x,y)$$
where $K$ is the Koch curve and $\mu(x,y)$ is a uniform measure <a href="http://wwwf.imperial.ac.uk/~jswlamb/M345PA46/%5BB%5D%20chap%20IX.pdf" rel="nofollow noreferrer">look here</a>.</p>
<p><strong>Attempt:</strong>
I can evaluate the integral numerically, and I have derived a method to integrate $e^x$ over some Cantor sets, <a href="https://math.stackexchange.com/q/1248373/219489">look here</a>. When I tried using that method to integrate over the Koch curve, I ended up unable to express the integral directly in terms of itself. <a href="https://math.stackexchange.com/questions/1256554/integral-of-a-function-over-the-koch-curve-is-it-rigourous-enough">Here's</a> a proof that integration can be done over the Koch curve...</p>
<p><strong>Information:</strong> I'd like a symbolic answer if it's available, but infinite series/products for this integral are great too. If there's a reference that actually handles <strong>this</strong> specific function over fractals and derives a symbolic result, that's good too. Also feel free to change $K$ to any other (non-trivial of course ;) ) variant of the Koch curve if that makes it easier to compute. I warn only that because the goal is to integrate over <em>any</em> fractal rather than just one or two special examples, you shouldn't pick needlessly trivial examples...</p>
<p><strong>Motivation:</strong> The derivation of this result allows for integration over a fractal; however, the actual reason this is useful is the usefulness of the exponential function. For instance, the concept of average temperature over a fractal is a very interesting one. $e^x$-type functions allow for rudimentary temperature fields to be constructed and theoretically integrated over fractals. $e^x$-type functions are useful for many kinds of problems, but they seem to be difficult to integrate over fractals. In addition, developing a theory for integrals over fractals requires a large library of results, and $e^x$ should definitely be included in that list of integrable functions.</p>
| Mark McClure | 21,361 | <p>This "answer" is in response to your comment that you'd be interested in seeing series/product solutions. As I'm sure you know, it's not difficult (in principle) to compute the integral of $x^p$ or $y^p$ with respect to a self-similar measure. (I have Mathematica code that automates the procedure.) Thus, we can get an approximation by simply writing
$$
e^{ax+by}=e^{ax}e^{by},
$$
replacing the exponential expressions with a finite sum approximation, and then integrating. The result is:
$$
\left(1+\frac{a}{2}+\frac{19a^2}{120}+\frac{3 a^3}{80}+\frac{92983 a^4}{13023360}+\frac{5935 a^5}{5209344}+\frac{618497323 a^6}{3948161817600}+\cdots\right)\times \\
\left(1+\frac{b}{6 \sqrt{3}}+\frac{b^2}{120}+\frac{b^3}{1008\sqrt{3}}+\frac{83b^4}{2604672}+\frac{601 b^5}{234420480 \sqrt{3}}+\frac{2095657 b^6}{35533456358400}+\cdots\right)
$$
Unfortunately, I see no significant simplification beyond this. In particular, I am not able to find closed form expressions for the integrals of the power functions - only exact expressions for specific integers.</p>
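<p>Since the OP mentions numerical evaluation: for the uniform self-similar measure one common approach is the chaos game, iterating a random sequence of the four Koch similarity maps and averaging the integrand along the orbit. A rough Monte Carlo sketch of my own (the parametrization of the curve over $[0,1]$ with peak height $\sqrt{3}/6$, and all names, are my choices):</p>

```python
import cmath
import random
from math import exp, pi, sqrt

# The four similarity maps of the Koch curve over [0,1], written as complex maps.
w = cmath.exp(1j * pi / 3)  # rotation by 60 degrees
maps = [
    lambda z: z / 3,
    lambda z: w * z / 3 + 1 / 3,
    lambda z: w.conjugate() * z / 3 + 0.5 + 1j * sqrt(3) / 6,
    lambda z: z / 3 + 2 / 3,
]

def koch_integral(a, b, n=200_000, seed=0):
    """Chaos-game estimate of the integral of exp(a*x + b*y) d(mu),
    where mu is the uniform (equal-weight) self-similar measure."""
    rng = random.Random(seed)
    z = 0j
    for _ in range(1_000):  # burn-in: move the point onto the attractor
        z = maps[rng.randrange(4)](z)
    total = 0.0
    for _ in range(n):
        z = maps[rng.randrange(4)](z)
        total += exp(a * z.real + b * z.imag)
    return total / n

# With a = b = 0 the integrand is constant 1, so the estimate is exactly 1.
assert koch_integral(0, 0) == 1.0
```

<p>For small $a$ the estimate should track the series above to first order, e.g. $\int e^{ax}\,d\mu \approx 1 + a/2$, since $\int x\,d\mu = 1/2$ by the left-right symmetry of the curve.</p>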
|
2,005,798 | <p>I have the following equality:
$$
\lim_{k \to \infty}\int_{0}^{k}
x^{n}\left(1 - {x \over k}\right)^{k}\,\mathrm{d}x = n!
$$</p>
<p>What I think is that after taking the limit inside the integral (maybe with the help of Fatou's Lemma; I don't know how I should do that yet), I then get</p>
<p>$$
\int_{0}^{\infty}\lim_{k \to \infty}\left[\,%
x^{n}\left(1 - {x \over k}\right)^{k}\,\right]\mathrm{d}x =
\int_{0}^{\infty}x^{n}\,\mathrm{e}^{-x}\,\mathrm{d}x =
\Gamma\left(n + 1\right) = n!
$$</p>
<p>How can I give a clear proof?</p>
| Jimmy R. | 128,037 | <p>Just to elaborate on the slick hint of @Arthur, because the OP seems confused in the comments: Since $(-1)^k=-1$ if $k$ is odd and $(-1)^k=1$ if $k$ is even, you have:
\begin{align}0=(1+(-1))^n=\sum_{k=0}^n(-1)^k\dbinom{n}{k}&=\sum_{k\text{ even}}\dbinom{n}{k}-\sum_{k\text{ odd}}\dbinom{n}{k}\\[0.2cm]2^n=(1+1)^n=\sum_{k=0}^n\dbinom{n}{k}&=\sum_{k\text{ even}}\dbinom{n}{k}+\sum_{k\text{ odd}}\dbinom{n}{k}\end{align} Hence, adding these two you get: $$2^n+0=\sum_{k\text{ even}}\dbinom{n}{k}+\sum_{k\text{ odd}}\dbinom{n}{k}+\sum_{k\text{ even}}\dbinom{n}{k}-\sum_{k\text{ odd}}\dbinom{n}{k}=2\sum_{k\text{ even}}\dbinom{n}{k}$$ which implies that $$2^n=2\sum_{k\text{ even}}\dbinom{n}{k} \implies \sum_{k\text{ even}}\dbinom{n}{k}=2^{n-1}$$ Hence $$\frac1{2^n}\sum_{k=0}^{n/2}\dbinom{n}{2k}=\frac1{2^n}\sum_{k\text{ even}}\dbinom{n}{k}=\frac{1}{2^n}2^{n-1}=\frac12$$</p>
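<p>A quick mechanical check of the even/odd binomial sums used above (a sketch, separate from the argument):</p>

```python
from math import comb

for n in range(1, 15):
    even = sum(comb(n, k) for k in range(0, n + 1, 2))
    odd = sum(comb(n, k) for k in range(1, n + 1, 2))
    assert even + odd == 2**n      # (1+1)^n
    assert even - odd == 0         # (1+(-1))^n, valid for n >= 1
    assert even == 2**(n - 1)
```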
|
572,316 | <p>I am reading a bit about calculus of variations, and I've encountered the following.</p>
<blockquote>
<p>Suppose the given function <span class="math-container">$F(\cdot,\cdot,\cdot)$</span> is twice continuously differentiable with respect to all of its arguments. Among all functions/paths <span class="math-container">$y=y(x)$</span>, which are twice continuously differentiable on the interval <span class="math-container">$[a,b]$</span> with <span class="math-container">$y(a)$</span> and <span class="math-container">$y(b)$</span> specified, find the one which extremizes the functional defined by <span class="math-container">$$J(y) := \int_a^b F(x, y, y_x) \, {\rm d} x$$</span></p>
</blockquote>
<hr />
<p>I have a bit of trouble understanding what this means exactly, as a lot of the lingo is quite new. As I currently understand it, the function <span class="math-container">$F$</span> is a function whose second derivative is constant (like, for example, <span class="math-container">$y=x^2$</span>), and it looks as if <span class="math-container">$F$</span> is a function of 3 variables? I'm not too sure what <span class="math-container">$F(x,y, y_x)$</span> means, since <span class="math-container">$y_x$</span> was earlier specified to mean <span class="math-container">$y'(x)$</span>.</p>
<p>Also, what is this 'functional' intuitively? I have some trouble understanding it, mainly because of the aforementioned confusion.</p>
| Jas Ter | 38,653 | <p>Consider the expression
$$ J(y) := \int_a^b F(x,y,y_x) dx. $$
Here, $y$ is a function: $y:[a,b]\rightarrow \mathbb{R}$.
Note: $y_x$ is a common notation for the partial derivative. Since $y$ depends only on one variable, $y_x \equiv y'$. The above expression abuses notation (in a very common way), and a better version is
$$ J(y) := \int_a^b F(x,y(x),y_x(x)) dx. $$
Note that $F$ is not required to have vanishing derivatives of any order; it is just required to be sufficiently smooth. The function $F : \mathbb{R}^3\rightarrow \mathbb{R}$, and is to be evaluated at $x$, $y(x)$, and $y_x(x)$. The integrand is a well-defined function whose integral is guaranteed to exist. </p>
<p>Thus, $J$ takes a function $y : [a,b] \rightarrow \mathbb{R}$ and computes a number $J(y)$ by the right-hand side. Physicists often call such a "function of a function" by the name "functional". When analyzed mathematically, $y$ is often described as an element in an abstract vector space of infinite dimension, call it $X$. Thus, think of $J$ as a map that takes a vector $y$ in $X$ (visualize a finite dimension if you like) and produces a number.</p>
<p>The regularity assumption on $F$ and $y$ lets you compute the change in $J(y)$ when $y$ changes by a "small amount", meaning that a small function is added to $y$. Typically, physicists think intuitively here, but analysts turn to the space $X$, which comes equipped with a rigorous way to measure smallness, for example a norm. </p>
<p>In physics notation, $y(x) \rightarrow y(x) + \epsilon \eta(x)$ for such a small change, and then compute the Taylor expansion:
$$ J(y + \epsilon \eta) \approx \int F(x,y+\epsilon\eta,y_x + \epsilon \eta_x) dx = \int F(x,y,y_x) dx + \int F_2(x,y,y_x)\epsilon \eta(x)dx + \int F_3(x,y,y_x)\epsilon \eta_x(x) dx, $$
where $F_i$ is the partial derivative with respect to argument number $i$ of $F$. Physicists work from this expression and derive the Euler--Lagrange equations.</p>
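<p>For completeness, here is a sketch of the step the last sentence alludes to (using $\eta(a)=\eta(b)=0$, which holds because the endpoint values of $y$ are prescribed): integrating the last term by parts gives</p>
<p><span class="math-container">$$ \int_a^b \left( F_2\,\eta + F_3\,\eta_x \right) dx = \int_a^b \left( F_2 - \frac{d}{dx}F_3 \right)\eta\, dx + \Big[ F_3\,\eta \Big]_a^b = \int_a^b \left( F_2 - \frac{d}{dx}F_3 \right)\eta\, dx. $$</span>
Since $\eta$ is otherwise arbitrary, stationarity of $J$ forces the Euler--Lagrange equation
<span class="math-container">$$ \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y_x} = 0. $$</span></p>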
|
109,754 | <p>Please help me get started on this problem:</p>
<blockquote>
<p>Let <span class="math-container">$V = R^3$</span>, and define <span class="math-container">$f_1, f_2, f_3 ∈ V^*$</span> as follows:<br />
<span class="math-container">$f_1(x,y,z) = x - 2y$</span><br />
<span class="math-container">$f_2(x,y,z) = x + y + z$</span><br />
<span class="math-container">$f_3(x,y,z) = y-3z$</span></p>
<p>Prove that <span class="math-container">$\{f_1,f_2,f_3\}$</span> is a basis for <span class="math-container">$V^*$</span>, and then find a basis for <span class="math-container">$V$</span> for which it is the dual basis.</p>
</blockquote>
| azarel | 20,998 | <p>$\bf Hint:$ Suppose that $a f_1(x,y,z)+b f_2(x,y,z)+c f_3(x,y,z)=0$. Try different values for $(x,y,z)$; for example, substitute $(2,1,-3)$. We obtain $f_1(2,1,-3)=0$, $f_2(2,1,-3)=0$ and $f_3(2,1,-3)=10$, hence $cf_3(2,1,-3)=10c$, which implies $c=0$.</p>
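<p>Carrying the hint to the second half of the problem: the dual basis vectors $v_j$ are characterized by $f_i(v_j)=\delta_{ij}$, i.e. they are the columns of the inverse of the matrix whose rows are the coefficient vectors of $f_1, f_2, f_3$. A small exact computation of my own (the Gaussian-elimination helper is just one way to do it):</p>

```python
from fractions import Fraction as F

A = [[F(1), F(-2), F(0)],   # f1 = x - 2y
     [F(1), F(1),  F(1)],   # f2 = x + y + z
     [F(0), F(1),  F(-3)]]  # f3 = y - 3z

def solve(M, b):
    """Solve M v = b by Gauss-Jordan elimination with exact rationals."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [aug[r][n] for r in range(n)]

e = [[F(int(i == j)) for j in range(3)] for i in range(3)]
dual = [solve(A, e[j]) for j in range(3)]   # v_j = j-th column of A^{-1}

# f_i(v_j) = delta_ij
for i in range(3):
    for j in range(3):
        assert sum(A[i][k] * dual[j][k] for k in range(3)) == (1 if i == j else 0)

assert dual[0] == [F(2, 5), F(-3, 10), F(-1, 10)]   # v_1 = (2/5, -3/10, -1/10)
```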
|
4,106,933 | <p>I tried <span class="math-container">$\left \vert \frac{\sin x}{x^2} - \frac{\sin c}{c^2}\right \vert \leq \frac{1}{x^2} + \frac{1}{c^2} < \epsilon$</span>, but it doesn't help me much with <span class="math-container">$\vert x - c \vert < \delta$</span>. How can I prove this?</p>
| razixxmaster | 905,366 | <p><span class="math-container">$|f'(x)| = \left|\frac{\cos(x)}{x^2} - \frac{2\sin(x)}{x^3}\right| \leqslant M+C$</span>, so the derivative is bounded on <span class="math-container">$[x_\beta, \infty)$</span> for any starting point <span class="math-container">$x_\beta > 0$</span>. Try to continue from there.
I think the function isn't uniformly continuous because the limit as <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span> doesn't exist.</p>
|
144,545 | <p>I have this weird integral to find. I am actually trying to find the volume that is described by these two equations.</p>
<p>$$x^2+y^2=4$$ and</p>
<p>$$x^2+z^2=4$$ for</p>
<p>$$x\geq0, y\geq0, z\geq0$$</p>
<p>It is a weird object that has the plane $z=y$ as a divider for the two cylinders. My problems is that I can't find the integration limits.</p>
<p>I can't even draw this thing properly.</p>
| DonAntonio | 31,254 | <p>1.- Try to prove that $y\in G\,,\,yy=y\Longrightarrow y=1$</p>
<p>2.- Now prove, using your notation, that $xg\cdot xg=xg$</p>
<p>DonAntonio</p>
|
144,545 | <p>I have this weird integral to find. I am actually trying to find the volume that is described by these two equations.</p>
<p>$$x^2+y^2=4$$ and</p>
<p>$$x^2+z^2=4$$ for</p>
<p>$$x\geq0, y\geq0, z\geq0$$</p>
<p>It is a weird object that has the plane $z=y$ as a divider for the two cylinders. My problems is that I can't find the integration limits.</p>
<p>I can't even draw this thing properly.</p>
| Dustan Levenstein | 18,966 | <p>The statement is that every $g \in G$ has a right inverse $x$, i.e., $gx = 1$. Now the same statement holds in turn for $x$: let $g'$ (suggestively named) be a right inverse for $x$, so that $xg' = 1$. Then on the one hand, using associativity, $gxg' = (gx)g'= 1\cdot g' = g'$, but on the other hand, $gxg' = g(xg') = g\cdot 1 = g$. So $g = g'$, and $x g = x g' = 1$.</p>
<p>An equivalent conceptual way of thinking of this is as follows: The statement that $x$ is a right inverse of $g$ is identical to the statement that $g$ is a left inverse of $x$. So $x$ has a left inverse, and is assumed as always to have a right inverse. Therefore it must simply have a two-sided inverse.</p>
|
1,850,418 | <p>An argument has two parts, the set of all premises, and the conclusion drawn from said premise. Now since there's only 1 conclusion, it would be weird to choose a name for the 'second' part of the argument. However, what is the first part called? I used to think that this was actually called the premise, however that turns out not to be the case. Anyway, what <em>is</em> it called? Because I feel like it needs a name.</p>
<pre><code>Argument
Premise 1¯|
Premise 2 | <--- What's this called?
Premise 3_|
Conclusion
</code></pre>
<p>What is the part containing premise 1, 2, and 3?</p>
<p><strong>Edit:</strong>
I just realized that perhaps exactly one conclusion doesn't necessarily need to be drawn; perhaps there can be several conclusions. If so, what is the set of all conclusions called?</p>
| Maxandre Jacqueline | 437,343 | <p>In Model Theory, if you assume $\mathfrak{A}$ to be a model of a language $\mathcal{L}$, you can define a set of sentences $\mathcal{T}$ that is called a theory by letting</p>
<p>$$ \mathcal{T} = \left\{\text{sentence(s) } \sigma \text{ of } \mathcal{L} \text{ such that } \mathfrak{A} \models \sigma \right\}. $$</p>
<p>In other words, the theory is all sentences of the language which are true under the interpretation of the model. </p>
<p>Then, if you use $\mathcal{T}$ as a set of premises for your proof, you can call this set of premises a theory. If you want to use a smaller set of premises for your proof, you can reduce $\mathcal{T}$ to a potentially smaller set of sentences by defining $\Sigma$ to be the set of sentences that entails (has as consequences) all sentences in $\mathcal{T}$. This $\Sigma$ is a set of axioms for our theory, and you can also use it as a set of premises for your proof.</p>
<p>Best !</p>
<p>Reference : <a href="http://www.math.toronto.edu/weiss/model_theory.pdf" rel="nofollow noreferrer">http://www.math.toronto.edu/weiss/model_theory.pdf</a></p>
|
3,236,205 | <p>I'm studying for a qualifying exam in algebra and I've come across the following problem:</p>
<blockquote>
<p>Let <span class="math-container">$G$</span> be a finite group with a subgroup <span class="math-container">$N$</span>. Let <span class="math-container">$Aut(G)$</span> be the group of automorphisms of <span class="math-container">$G$</span>. Prove that if <span class="math-container">$|Aut(G)|$</span> and <span class="math-container">$|N|$</span> are relatively prime, then <span class="math-container">$N$</span> is contained in the center of <span class="math-container">$G$</span>.</p>
</blockquote>
<p>I'm struggling to find a good way to approach this. I'm able to prove a similar result that if <span class="math-container">$|G|$</span> and <span class="math-container">$|Aut(G)|$</span> are relatively prime, then <span class="math-container">$G$</span> is abelian:</p>
<p>Let <span class="math-container">$\theta$</span> be a homomorphism from <span class="math-container">$G$</span> to the inner automorphism group of <span class="math-container">$G$</span>, denoted <span class="math-container">$\theta:G\rightarrow Inn(G)$</span>. Then because inner automorphisms are conjugations by fixed elements, the kernel of <span class="math-container">$\theta$</span> is the center of <span class="math-container">$G$</span>, denoted <span class="math-container">$ker(\theta)=Z(G)$</span>. By the first isomorphism theorem, it follows that <span class="math-container">$G/Z(G)\cong Inn(G)\subseteq Aut(G)$</span>. Hence, by Lagrange's theorem, <span class="math-container">$|G/Z(G)|$</span> divides <span class="math-container">$|Aut(G)|$</span>. Similarly, as a consequence of Lagrange's theorem, <span class="math-container">$|G/Z(G)||Z(G)|=|G|$</span>, as <span class="math-container">$Z(G)$</span> is a normal subgroup of <span class="math-container">$G$</span>. Since <span class="math-container">$|G/Z(G)|$</span> divides <span class="math-container">$|G|$</span> and <span class="math-container">$|Aut(G)|$</span>, and <span class="math-container">$|G|$</span> and <span class="math-container">$|Aut(G)|$</span> are relatively prime, it follows that <span class="math-container">$|G/Z(G)|=1$</span>, implying that <span class="math-container">$G=Z(G)$</span>. The desired result follows. <span class="math-container">$\blacksquare$</span></p>
<p>It's not obvious to me how to transform this proof into one of the desired problem (or if there even is a good way to simply modify what I already have). If I knew that <span class="math-container">$N$</span> were a normal subgroup, I could potentially apply the same kind of argument using Lagrange's theorem, but I don't have that assumption. Without more information on the structure of <span class="math-container">$G$</span>, it doesn't seem likely that I'll be able to use an element argument to show that <span class="math-container">$N\subseteq Z(G)$</span>.</p>
| Alexander Gruber | 12,952 | <p><span class="math-container">$N/Z(G)\leq G/Z(G)$</span>. If <span class="math-container">$N\not\leq Z(G)$</span>, what can you say about the order of <span class="math-container">$N/Z(G)$</span>?</p>
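<p><em>Editorial aside (an illustrative sketch, not part of either argument above):</em> the step <span class="math-container">$G/Z(G)\cong Inn(G)$</span> from the quoted proof can be sanity-checked numerically for a small group such as <span class="math-container">$S_3$</span>, modeled as permutation tuples. All names in the snippet are ad hoc.</p>

```python
from itertools import permutations

# Elements of S3 as tuples: p[i] is the image of i under the permutation.
G = list(permutations(range(3)))

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Center Z(G): elements commuting with every g.
center = [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]

# Inner automorphisms g -> (h -> g h g^{-1}), each stored as its value table.
inner = {tuple(compose(compose(g, h), inverse(g)) for h in G) for g in G}

# |Inn(G)| = |G| / |Z(G)|, since the kernel of the conjugation map is Z(G).
assert len(inner) == len(G) // len(center)
print(len(G), len(center), len(inner))  # 6 1 6
```

<p>For <span class="math-container">$S_3$</span> the center is trivial, so all six conjugation maps are distinct.</p>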
|
1,974,114 | <p>Let R be a integral domain with a finite number of elements. Prove that R is a field.</p>
<p>Let a ∈ R \ {0}, and consider the set aR = {ar : r ∈ R}. </p>
<p>Guessing i will have to show that |aR| = R, and deduce that there exists r ∈ R such that ar = 1 but don't know what to do?</p>
| Brian M. Scott | 12,042 | <p>HINT: If $aR\ne R$, there must be distinct $r,s\in R$ such that $ar=as$. (Why?) But then $a(r-s)=\ldots\;?$</p>
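<p><em>Editorial aside (a sketch):</em> the hinted argument can be watched in action in <span class="math-container">$\mathbb{Z}/7\mathbb{Z}$</span>, a finite integral domain: for each nonzero <span class="math-container">$a$</span> the map <span class="math-container">$r\mapsto ar$</span> is injective, hence surjective, so <span class="math-container">$1$</span> is hit. The modulus <span class="math-container">$7$</span> is just an example.</p>

```python
# For every nonzero a in Z/7, the map r -> a*r mod 7 is injective,
# hence surjective, so some r satisfies a*r = 1.
p = 7
R = list(range(p))

for a in range(1, p):
    image = {(a * r) % p for r in R}
    assert len(image) == p    # injective, so |aR| = |R|
    assert 1 in image         # hence a has a multiplicative inverse
    inv = next(r for r in R if (a * r) % p == 1)
    print(a, inv)             # a * inv = 1 (mod 7)
```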
|
186,726 | <p>Just a soft-question that has been bugging me for a long time:</p>
<p>How does one deal with mental fatigue when studying math?</p>
<p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p>
<p>I would wish to continue studying, but these circumstances force me to take a break. It is truly a case of "the spirit is willing but the brain is weak"?</p>
<p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p>
<p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
| Thomas | 26,188 | <p>I used to think that the best way and the only correct way was to sit down for hours get really concentrated. Staying up late at night. Never thinking about anything but math. It was my opinion that only when you get in this zone can you really produce some good thoughts, when you really start to live the problem.</p>
<p>But then I realized that while this sometimes works just fine (when, for example, one just can't stop going), often it didn't. Pursuing this method often led me to get frustrated. I just couldn't do it. I am sure that for some (like Erdős?) this is how it works, but not for mere mortals.</p>
<p>So as suggested above, one way to deal with mental fatigue is simply by taking breaks. But I would suggest that you actually schedule those breaks. So you could, for example, work intensely for 50 min and then take a 10 min break, or you could work for 2 hours and take a 30 min break. I am guessing that how exactly one schedules breaks depends on the person. It is my experience that if one doesn't schedule breaks, then one either forgets to take breaks, or the breaks end up becoming much longer.</p>
<p>Also (as alluded to in other answers) I think that it is important to keep the mind working on other things than math. Even though you want to "mainly" be thinking about the problem ahead, I think that it can be beneficial to have other projects to work on. Often this might just be another math problem than the one you are currently working on, but it could also be something completely different. Having some other project to work on will also help with the feeling of hopelessness when no progress is made.</p>
|
186,726 | <p>Just a soft-question that has been bugging me for a long time:</p>
<p>How does one deal with mental fatigue when studying math?</p>
<p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p>
<p>I wish to continue studying, but these circumstances force me to take a break. Is it truly a case of "the spirit is willing but the brain is weak"?</p>
<p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p>
<p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
| 000 | 22,144 | <p>I'm not a mathematician. I'm actually too young.</p>
<p>However, I wholeheartedly do not recommend coffee. Coffee leads to emotional instability (in my experience and perhaps others') as well as increased stress levels. There is also a huge negative effect on the quality of your sleep.</p>
<p>I used to spend a lot of nights working on mathematics with the aid of coffee and various forms of caffeine. This was fun; I can't deny that. However, I don't think it's productive in the long run. If you do this, you deny yourself the opportunities outside of mathematics and ways to improve yourself as a person. As you probably know, it also destroys your sleeping patterns. </p>
<p>Did I mention that it's hard to maintain a romantic relationship when you're not on the same sleep schedule as she or he is and that you're tired whenever they're awake?</p>
<p>I now spend my days trying to live on a fairly regular sleep schedule and without caffeine. (I still haven't got adjusted to waking up at 6 a.m. on weekends, but I will eventually.) I feel that there are more than enough hours in the day and you do not need to strain yourself by staying up late at night and working diligently on a problem. The best idea, in my mind, is to work from 6 p.m. to whenever you are comfortable. This allows you to dedicate yourself to a problem if need be; that is, if you become obsessed with it, you have enough time to work on it for hours. Obsession, I feel, is inevitable in mathematics.</p>
<p>With all that said, you should take a step back from time to time and feel proud of what you've accomplished. I am sure that you have done things that are atypical and worth feeling proud for. Revel in those accomplishments from time to time if you're down on yourself. It reminds you that you're not nearly as terrible as your mind wants you to think. Also, I highly recommend having an emotionally positive romantic relationship. Personally, it causes me to have a much more stable self-image.</p>
<p>Good luck on your mathematics!</p>
|
186,726 | <p>Just a soft-question that has been bugging me for a long time:</p>
<p>How does one deal with mental fatigue when studying math?</p>
<p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p>
<p>I wish to continue studying, but these circumstances force me to take a break. Is it truly a case of "the spirit is willing but the brain is weak"?</p>
<p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p>
<p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
| Joachim | 36,074 | <p>I know this is an old question, but i still wanted to add my bit to the topic..</p>
<p>There's a studying method, i forgot the name but i'm sure you can find it with google's help.. It goes like this: study 25 mins, then take a 5 min break. Repeat this 4 times, then take a break of a full half hour.</p>
<p>When you take a break, the method also suggests you go out of the room where you're studying and do some light physical exercise, like taking a walk.</p>
<p>It doesn't work that well for me, i must admit. What does work for me when i'm really tired is the following:
Study an hour, take an hour off, and keep repeating this.</p>
<p>During the break i do nothing, preferably i just sit in a chair staring at the wall or relax with my eyes closed. Sometimes i listen to music. If i actually do something my head doesn't seem to rest properly, so i really try to do nothing, just relax and drink some tea.</p>
<p>Taking such long breaks seems ridiculous, but this way i get way more information in my head and it stays there for a longer time. However i must say such long breaks are only necessary when im really tired.</p>
<p>After studying like this for a day or two, i notice i am way more effective and my head gets fresher. I then reduce the breaks to whatever i feel like, normally studying between an hour and two and taking a break of something like 20-30 mins.</p>
<p>Hopefully this still helps.</p>
|
<p>Find the number of real solutions of the equation $\sin x+2\sin 2x-\sin 3x = 3,$ where $x\in (0,\pi)$.</p>
<p>$\bf{My\; Try::}$ Given $\left(\sin x-\sin 3x\right)+2\sin 2x = 3$</p>
<p>$\Rightarrow -2\cos 2x\cdot \sin x+2\sin 2x = 3\Rightarrow -2\cos 2x\cdot \sin x+4\sin x\cdot \cos x = 3$</p>
<p>$\Rightarrow 2\sin x\cdot \left(-\cos 2x+2\cos x\right)=3$</p>
<p>Now I do not understand how to solve it from here.</p>
<p>Help me</p>
<p>Thanks</p>
| juantheron | 14,311 | <p>Given $\sin x+2\sin 2x-\sin 3x = 3\Rightarrow (\sin 2x-1)^2+(\cos 2x+\sin x)^2+(\cos x)^2=0\; \forall x\in (0,\pi)$</p>
<p>So either $\sin 2x = 1$ and $\cos 2x = -\sin x$ and $\cos x=0$</p>
<p>So from above no common value of $x\in (0,\pi)$ for which above equation is satisfied.</p>
<p>So no real values of $x$</p>
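<p><em>Editorial aside (a numerical sketch; the grid size is arbitrary):</em> sampling <span class="math-container">$f(x)=\sin x+2\sin 2x-\sin 3x$</span> on <span class="math-container">$(0,\pi)$</span> confirms its maximum stays well below <span class="math-container">$3$</span>.</p>

```python
from math import sin, pi

# Sample f on a fine grid strictly inside (0, pi).
f = lambda x: sin(x) + 2 * sin(2 * x) - sin(3 * x)
m = max(f(pi * k / 100000) for k in range(1, 100000))
print(m)        # about 2.73, comfortably below 3
assert m < 3
```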
|
<p>How would you decide whether the homogeneous system $Ax=0$ (i.e. $Ax=b$ with $b$ the zero vector) has only the trivial solution when $A$ is a tridiagonal matrix with all ones on its three diagonals?</p>
<p>Edit: So, a general solution to an n by n matrix of the following appearance:</p>
<p><span class="math-container">$\begin{bmatrix}
1& 1& 0& 0\\
1& 1& 1& 0\\
0& 1& 1& 1\\
0& 0& 1& 1\\
\end{bmatrix}$</span></p>
| Robert Z | 299,698 | <p>Notice that for <span class="math-container">$\alpha\not=1,2$</span>,
<span class="math-container">$$\int _0^{1-x}\frac{1}{\left(x+y\right)^{\alpha}}dy=\left[\frac{(x+y)^{1-\alpha}}{1-\alpha}\right]_{0^+}^{1-x}
=\frac{1-x^{1-\alpha}}{1-\alpha},$$</span>
and therefore (notice that the integral is <em>improper</em> for <span class="math-container">$\alpha>0$</span>)
<span class="math-container">$$\iint_{D}\frac{1}{(x+y)^{\alpha}}dxdy=\int_0^1 \frac{1-x^{1-\alpha}}{1-\alpha}\, dx=\frac{1}{1-\alpha}\left[x-\frac{x^{2-\alpha}}{2-\alpha}\right]_{0^+}^1\\
=\frac{1}{2-\alpha}+\frac{\lim_{x\to 0^+}x^{2-\alpha}}{(2-\alpha)(1-\alpha)}.$$</span>
The limit on the right-hand side gives the final result: for
<span class="math-container">$\alpha>2$</span>, <span class="math-container">$\lim_{x\to 0^+}x^{2-\alpha}=+\infty$</span> and the integral is divergent, otherwise, for
<span class="math-container">$\alpha<2$</span> and <span class="math-container">$\alpha\not=1$</span>, the limit is zero and the integral is equal to <span class="math-container">$\frac{1}{2-\alpha}>0$</span>.
The special cases <span class="math-container">$\alpha=1,2$</span> are easy to handle: the given integral is equal to <span class="math-container">$1$</span> for <span class="math-container">$\alpha=1$</span> and it is divergent for <span class="math-container">$\alpha=2$</span>.</p>
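<p><em>Editorial aside (a cross-check sketch; the midpoint rule and grid size are my own choices):</em> the closed form <span class="math-container">$\frac{1}{2-\alpha}$</span> can be verified numerically over the triangle <span class="math-container">$x,y>0$</span>, <span class="math-container">$x+y<1$</span>; midpoints keep the quadrature away from the singularity at the origin.</p>

```python
def triangle_integral(alpha, n=800):
    """Midpoint rule for the double integral of (x+y)^(-alpha)
    over the triangle 0 < x, 0 < y, x + y < 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if x + y < 1.0:
                total += (x + y) ** (-alpha)
    return total * h * h

print(triangle_integral(1.0))   # close to 1, the alpha = 1 value
print(triangle_integral(0.5))   # close to 1/(2 - 1/2) = 2/3
```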
|
<p>What would be good books for learning differential equations for a student who likes to learn things rigorously and has a good background in analysis and topology?</p>
| Gerald Edgar | 127 | <p>A classic book is Coddington & Levinson, <em>Theory of Ordinary Differential Equations</em>. Probably tough going for most students.</p>
<p>Coddington, <em>An Introduction to Ordinary Differential Equations.</em> Much shorter, suited to undergraduates, but still rigorous. There is an inexpensive reprinting from Dover.</p>
|
176,691 | <p>Let $A'$ denotes the complement of A with respect to $ \mathbb{R}$ and $A,B,T$ are subsets of $\mathbb{R}$. I am trying to prove $A' \cap (A' \cup B') \cap T= A' \cap T$, but I got some problems along the way.</p>
<p>$A' \cap (A' \cup B') \cap T= (A' \cap A') \cup (A' \cap B') \cap T= A' \cup (A \cup B) \cap T =(A' \cup A)\cup B \cap T= \mathbb{R} \cup B \cap T = B \cap T$ Something wrong?</p>
| Ink | 34,881 | <p>Let $x \in A' \cap (A' \cup B) \cap T$. Then $x \in A'$ and $x \in T$, so $x \in A' \cap T$. This proves that $A' \cap (A' \cup B) \cap T \subset A' \cap T$. The reverse inclusion is similar.</p>
|
176,691 | <p>Let $A'$ denotes the complement of A with respect to $ \mathbb{R}$ and $A,B,T$ are subsets of $\mathbb{R}$. I am trying to prove $A' \cap (A' \cup B') \cap T= A' \cap T$, but I got some problems along the way.</p>
<p>$A' \cap (A' \cup B') \cap T= (A' \cap A') \cup (A' \cap B') \cap T= A' \cup (A \cup B) \cap T =(A' \cup A)\cup B \cap T= \mathbb{R} \cup B \cap T = B \cap T$ Something wrong?</p>
| Eugene Shvarts | 34,515 | <p>Dilip's comments point out the flaw re: DeMorgan's Laws.</p>
<p>To avoid that mess altogether, a both-directions-subset proof works concisely. On the one hand, it is clear that $A' \cap (A' \cup B') \cap T \subset A' \cap T$, since the LHS intersects all of the RHS's elements with another set. </p>
<p>On the other hand, expanding the LHS yields $A' \cap ((A' \cap T) \cup(B' \cap T)) = (A' \cap T)\cup(A' \cap B' \cap T)$ . Certainly, $A' \cap T \subset (A' \cap T)\cup(A' \cap B' \cap T)$ .</p>
|
2,954,929 | <p>What happens when <span class="math-container">$x < -2$</span> ? Does the whole square root term just "disappear" which leaves us with 1 which is positive and thus the answer to the question is <span class="math-container">$x\le-1$</span>? Or do we have to constrain the domain of <span class="math-container">$x$</span> to: <span class="math-container">$(-2\le x\le-1)$</span>?</p>
| Community | -1 | <p>Let <span class="math-container">$\mathcal{H}$</span> denote the Hilbert space of wave functions <span class="math-container">$\psi:\mathbb{R}^3\to\mathbb{C}$</span>. Recall that
<span class="math-container">$$\langle u|v\rangle =\iiint_{\mathbb{R}^3}\bar{u}(x,y,z)\ v(x,y,z)\ dx\ dy\ dz$$</span>
for all <span class="math-container">$u,v\in \mathcal{H}$</span>. Write <span class="math-container">$h$</span> for the operator <span class="math-container">$(-i\nabla -A)^2$</span>. Observe that
<span class="math-container">\begin{align}\langle hu|v\rangle &=\iiint_{\mathbb{R}^3}\overline{hu}(x,y,z)\ v(x,y,z)\ dx\ dy\ dz
\\&=\iiint_{\mathbb{R}^3}(i\nabla -A)^2\bar{u}(x,y,z)\ v(x,y,z)\ dx\ dy\ dz
\\&=\iiint_{\mathbb{R}^3}(i\nabla -A)\cdot \overline{\Phi u}(x,y,z)\ v(x,y,z)\ dx\ dy\ dz,
\end{align}</span>
where <span class="math-container">$\Phi=-i\nabla-A$</span>. That is,
<span class="math-container">\begin{align}\langle hu|v\rangle &=i\iiint_{\mathbb{R}^3}\nabla\cdot \overline{\Phi u}(x,y,z)\ v(x,y,z)\ dx\ dy\ dz-\iiint_{\mathbb{R}^3} \overline{\Phi u}(x,y,z)\cdot Av(x,y,z)\ dx\ dy\ dz
\\&=-i\iiint_{\mathbb{R}^3}\overline{\Phi u}(x,y,z)\cdot \nabla v(x,y,z)\ dx\ dy\ dz
-\iiint_{\mathbb{R}^3}\overline{\Phi u}(x,y,z)\cdot Av(x,y,z)\ dx\ dy\ dz,\end{align}</span>
where we apply <a href="https://en.wikipedia.org/wiki/Integration_by_parts#Higher_dimensions" rel="nofollow noreferrer">integration by parts in higher dimension</a>, assuming that <span class="math-container">$u(x,y,z)$</span> and <span class="math-container">$v(x,y,z)$</span> vanish quickly when <span class="math-container">$(x,y,z)$</span> is large. That is,
<span class="math-container">\begin{align}\langle hu|v\rangle &=\iiint_{\mathbb{R}^3}\overline{\Phi u}(x,y,z)\cdot(-i\nabla-A)v(x,y,z)\ dx\ dy\ dz
\\&=\iiint_{\mathbb{R}^3}\overline{\Phi u}(x,y,z)\cdot\Phi v(x,y,z)\ dx\ dy\ dz.
\end{align}</span>
In particular,
<span class="math-container">\begin{align}\langle hu|u\rangle &=\int_{\mathbb{R}^3}\overline{\Phi u}(x,y,z)\cdot\Phi u(x,y,z)\ dx\ dy\ dz\\&=\int_{\mathbb{R}^3}\big\Vert\Phi u(x,y,z)\big\Vert^2\ dx\ dy\ dz\geq 0\end{align}</span></p>
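<p><em>Editorial aside (a sketch):</em> the conclusion <span class="math-container">$\langle hu|u\rangle=\Vert\Phi u\Vert^2\ge 0$</span> has a finite-dimensional shadow that is easy to test: on a periodic 1D grid, take <span class="math-container">$\Phi=-iD-\mathrm{diag}(A)$</span> with <span class="math-container">$D$</span> an antisymmetric central difference and <span class="math-container">$A$</span> real, so <span class="math-container">$\Phi$</span> is Hermitian and <span class="math-container">$h=\Phi^2$</span> is positive semidefinite. The grid size and random data below are my own choices, not anything from the answer.</p>

```python
from random import Random

rng = Random(1)
n = 16

# Antisymmetric central difference on a periodic grid (spacing 1).
D = [[0.0] * n for _ in range(n)]
for i in range(n):
    D[i][(i + 1) % n] = 0.5
    D[i][(i - 1) % n] = -0.5

A = [rng.uniform(-1, 1) for _ in range(n)]    # real "vector potential"

# Phi = -iD - diag(A): Hermitian, a discrete analog of -i*grad - A.
Phi = [[-1j * D[i][j] - (A[i] if i == j else 0.0) for j in range(n)]
       for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def inner(u, v):
    return sum(u[i].conjugate() * v[i] for i in range(n))

u = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
hu = matvec(Phi, matvec(Phi, u))              # h u with h = Phi^2

lhs = inner(u, hu)                            # <u | h u>
rhs = inner(matvec(Phi, u), matvec(Phi, u))   # ||Phi u||^2

assert abs(lhs - rhs) < 1e-9
assert lhs.real >= 0 and abs(lhs.imag) < 1e-9
print(lhs.real)
```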
|
145,306 | <p>I had a problem on a program of mine that I could avoid by developing the code through other ways. On the other hand, I still do not know how to solve the simple problem below:</p>
<p>Consider these two definitions:</p>
<p>f = p;
p = 2;</p>
<p>One can use Clear[p] to clear the value of p, which will lead the output of f to be p, instead of 2.</p>
<p>Is there a way to clear the value of p through the definition of the variable f? That is, without explicit writing the variable p. </p>
<p>By using Definition[f], the output is "f=p", but if I try to pick the variable "p" from the definition, through Definition[f][[1]], I cannot apply Clear on it, since p is automatically evaluated to 2, so the application of Clear in the last case leads to "Clear[2]", instead of "Clear[p]".</p>
<p>Context on where this problem may appear: Consider a larger program in which the variable name p was automatically built and depends on the input of the user (in this case the name of the variable will typically be larger, but let us stick with the name p). If the name of the variable was generated automatically, and depends on the user input, I cannot explicitly type in the code "Clear[p]". One can use Clear[f], but this will not clear the value of p. To solve the issue, I simply avoided using variables whose names depend on the user input; nonetheless, I would like to know a solution for the problem above.</p>
<p>Thanks.</p>
| Eisbär | 32,508 | <p>Try this</p>
<pre><code>p = 2;
With[{a = p}, f = a]
</code></pre>
|
3,678,417 | <p>I understand:
<span class="math-container">$$\sum\limits^n_{i=1} i = \frac{n(n+1)}{2}$$</span>
what happens when we restrict the range such that:
<span class="math-container">$$\sum\limits^n_{i=n/2} i = ??$$</span></p>
<p>Originally I thought we might just have <span class="math-container">$\frac{n(n+1)}{2}/2$</span> but I know that's not correct since starting the summation at n/2 would be the larger values of the numbers between <span class="math-container">$1..n$</span></p>
| JonathanZ supports MonicaC | 275,313 | <p>No matter what sequence you're adding up (i.e. no matter what <span class="math-container">$a_i$</span> is), so long as <span class="math-container">$m \lt n$</span> we know that
<span class="math-container">$$\sum^{m-1}_{i = 1} a_i + \sum_{i = m}^n a_i = \sum_{i = 1}^n a_i$$</span></p>
<p>so we can bring the first term over to the right and side and get</p>
<p><span class="math-container">$$\sum_{i = m}^n a_i = \sum_{i = 1}^n a_i - \sum^{m-1}_{i = 1} a_i$$</span></p>
<p>Can you figure out how to apply this to your situation?</p>
|
754,301 | <p>Say I have the following maximization.</p>
<p><span class="math-container">$$ \max_{R: R^T R=I_n} \operatorname{Tr}(RZ),$$</span>
where <span class="math-container">$R$</span> is an <span class="math-container">$n\times n$</span> orthogonal transformation, and the SVD of <span class="math-container">$Z$</span> is written as <span class="math-container">$Z = USV^T$</span>. </p>
<p>I'm trying to find the optimal <span class="math-container">$R^*$</span> which intuitively I know is equal to <span class="math-container">$VU^T$</span> where
<span class="math-container">$$\operatorname{Tr}(RZ)=\operatorname{Tr}(VU^T USV^T)=\operatorname{Tr}(S).$$</span>
I know this is the max since it is the sum of all the singular values of <span class="math-container">$Z$</span>. However, I'm having trouble coming up with a mathematical proof justifying my intuition.</p>
<p>Any thoughts? </p>
| Omran Kouba | 140,450 | <p>Let $A=\sqrt{S}$, and equip the space of $n\times n$ real matrices with the usual Euclidean scalar product. Then
$$\hbox{Tr}(RZ)= \hbox{Tr}(RUA^2V^T)=\hbox{Tr}((RUA)(VA)^T)=\langle RUA,VA\rangle$$
By the Cauchy-Schwarz inequality, we get
$$\hbox{Tr}(RZ)\leq \Vert RUA \Vert_2 \Vert VA \Vert_2= \Vert A \Vert_2 \Vert A \Vert_2
=\hbox{Tr}(AA^T)=\hbox{Tr}(S)$$
where we used the invariance of the $\Vert \cdot \Vert_2 $ under orthogonal transformations.
The converse inequality is proved by choosing $R=VU^T$, and we are done.</p>
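<p><em>Editorial aside (a numerical illustration in the <span class="math-container">$2\times 2$</span> case; the particular angles and singular values are arbitrary):</em> sampled rotations <span class="math-container">$R(\theta)$</span> never beat <span class="math-container">$\operatorname{Tr}(S)=s_1+s_2$</span>, while <span class="math-container">$R=VU^T$</span> attains it.</p>

```python
from math import cos, sin, pi

def rot(t):
    return [[cos(t), -sin(t)], [sin(t), cos(t)]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

# Z = U S V^T with rotations U, V and singular values s1 >= s2 > 0.
s1, s2 = 3.0, 1.0
U, V = rot(0.7), rot(-1.1)
Z = mul(mul(U, [[s1, 0.0], [0.0, s2]]), transpose(V))

# Sampled rotations never exceed Tr(S) = s1 + s2 ...
best = max(trace(mul(rot(2 * pi * k / 10000), Z)) for k in range(10000))
assert best <= s1 + s2 + 1e-9

# ... and R = V U^T attains the bound exactly.
assert abs(trace(mul(mul(V, transpose(U)), Z)) - (s1 + s2)) < 1e-9
print(best)   # just under s1 + s2 = 4
```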
|
1,177,605 | <p>Since the problem uses $y^2=x$, I first assumed that the element must be horizontal (parallel to the $x$-axis). However, the bounded region has all $y$ values greater than $0$, so I could also use a vertical element. This problem has me stumped; I know how to set up the integral but for the shell method I need to find the radius (element to axis of rotation) and the height of the element.</p>
<p>What is the best way to approach this problem?</p>
| Analogue Multiplexer | 241,272 | <p>$\bullet \space \mathbf{SO(3) / SO(2) \simeq S^2}:$</p>
<p>Consider a fundamental representation of the Lie group $G := SO(3)$. Any element $M$ of $G$ can be written as a linear map $M : \mathbb{R}^3 \rightarrow \mathbb{R}^3$ such that $M^{-1} = M^T$ and $\det(M) = 1$. We can easily restrict to $M : S^2 \rightarrow S^2$. For any arbitrary $x \in S^2 \subset \mathbb{R}^3$ we write $x = (x_1,x_2,x_3)$ and $-x = (-x_1,-x_2,-x_3)$, so that $x_1^2 + x_2^2 + x_3^2 = 1$.</p>
<p>Let now $\iota : SO(2) \rightarrow SO(3)$ be some embedding such that $\iota(SO(2))$ is a subgroup of $SO(3)$. Note that there is some $x \in S^2$ such that $\iota(SO(2)) (x) = x$ and $\iota(SO(2)) (-x) = -x$. Thus $\iota(SO(2))$ is a stabilizer $G_x = G_{-x} \subset G$, so that $G_x x = x$ and $G_{-x} (-x) = -x$. Let now $g \in G - G_x$, thus $g \in SO(3)$ but $g \not \in \iota(SO(2))$. Then $g G_x \subset G$ is a left coset of $G_x$, so that $g G_x \cap G_x = \emptyset$. Then
\begin{equation}
y := g x = g G_x x = (g G_x g^{-1}) g x = G_y y.
\end{equation}</p>
<p>Note that $g G_x$ is a subset of $G$ but not a subgroup. But it should be clear that $G_y$ is some conjugate of $G_x$. Then $G_y \simeq G_x$ and $G_x \cap G_y = e$ if $y \not \in \{x,-x\}$, where $e$ is the identity element of $SO(3)$. Also note that $g G_x g^{-1} = G_{-x} = G_x$ for any $g \in SO(3)$ such that $g x = -x$. Then $g^2 = e$, so that $g^{-1} = g$ and $g h g^{-1} = g h g = h^{-1}$ for any $h \in G_x$.</p>
<p>For any $y \in S^2$ there exists an element $g \in G$ such that $y = g x$. Now it should be clear that the left coset space (i.e. the smooth set of left cosets of $G_x$) is isomorphic to $S^2$. Then we can say that there is a principal fiber bundle $(SO(3),S^2,\pi,SO(2))$ with surjective map $\pi : SO(3) \rightarrow S^2$, with a short exact sequence:
\begin{equation}
1 \rightarrow SO(2) \rightarrow SO(3) \rightarrow SO(3) / \iota(SO(2)) \simeq S^2 \rightarrow 0.
\end{equation}</p>
<p>(This is similar to the principal fiber bundle $(SU(2),S^2,\pi,U(1))$.) Now note that any $x \in S^2$ induces a pair $\{x,-x\} \subset S^2$, so that $\{x,-x\} \in \mathbb{R} P^2$. Then it is straight forward to see that
\begin{equation}
SO(3) = G = \cup_{\{x,-x\} \in \mathbb{R} P^2} G_x.
\end{equation}</p>
<p>$\bullet \space \mathbf{SO(3) / O(2) \simeq \mathbb{R} P^2}:$</p>
<p>There is no proper embedding of $O(2)$ into $SO(3)$ with a fundamental representation. Consider a projective representation $SO(3) : \mathbb{R} P^2 \rightarrow \mathbb{R} P^2$.</p>
<p>Let $L \in O(2)$ and $l := \det(L)$, so that $l \in \{1,-1\}$. Let now $\iota : O(2) \rightarrow SO(3)$ be some embedding such that $\iota(O(2))$ is a subgroup of $SO(3)$. Now define $M := \iota(L) \in \iota(O(2))$ such that $\det(M) = 1$:
\begin{equation}
L =
\left(
\begin{array}{cc}
L_{1 1} & L_{1 2} \\
L_{2 1} & L_{2 2}
\end{array}
\right)
\Rightarrow M =
\left(
\begin{array}{ccc}
L_{1 1} & L_{1 2} & 0 \\
L_{2 1} & L_{2 2} & 0 \\
0 & 0 & l
\end{array}
\right)
.
\end{equation}
Note that this is just an arbitrary embedding; there is no canonical one. As discussed: for any $x \in S^2$ there exists an element $g$ such that $g x = -x$, thus also $g (-x) = x$. There is a projection $S^2 \rightarrow \mathbb{R} P^2$ so that this action turns into $g \{x,-x\} = \{-x,x\} = \{x,-x\}$. This shows that in this case $g$ is also an element of the stabilizer. All these $g$ generate an extension to the stabilizer we already constructed, related to the fundamental representation. This extended stabilizer can really be regarded as a proper embedding from $O(2)$ to $SO(3)$. Thus:
\begin{equation}
\iota(O(2)) = G_{\{x,-x\}} = G_{\{-x,x\}} \simeq O(2).
\end{equation}</p>
<p>It should be clear that if $l = 1$, then $M$ acts like $SO(2)$ and we may assume that $g = e$. If $l = -1$, then $g$ generates an axis $a(g) \in \mathbb{R} P^2$ which is perpendicular to the ${\{x,-x\}}$ axis. This axis $a(g)$ generates the direction of a mirror. There is a principal fiber bundle $(SO(3),\mathbb{R} P^2,\pi,O(2))$ with surjective map $\pi : SO(3) \rightarrow \mathbb{R} P^2$, with a short exact sequence:
\begin{equation}
1 \rightarrow O(2) \rightarrow SO(3) \rightarrow SO(3) / \iota(O(2)) \simeq \mathbb{R} P^2 \rightarrow 0.
\end{equation}</p>
|