| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,045,920 | <p>Let T: $M_{2\times 2} \rightarrow M_{2\times 2}$ be defined by</p>
<p>T(A) = $\begin{bmatrix}
1 & 3 \\[0.3em]
-1 & 1 \\[0.3em]
\end{bmatrix} A $</p>
<p>Is T a linear transformation?</p>
<p>I want to show that the defining properties of a linear transformation hold. However, I am not sure if I am on the right track for showing that the transformation preserves addition. Let A and B be 2x2 matrices with components in $\Re$.</p>
<p>Let A = $\begin{bmatrix}
a & b \\[0.3em]
c & d \\[0.3em]
\end{bmatrix}$ and B = $\begin{bmatrix}
e & f \\[0.3em]
g & h \\[0.3em]
\end{bmatrix}$.</p>
<p>T(A + B) = $T\begin{bmatrix}
a & b \\[0.3em]
c & d \\[0.3em]
\end{bmatrix}$ + $T\begin{bmatrix}
e & f \\[0.3em]
g & h \\[0.3em]
\end{bmatrix}$ = $T\begin{bmatrix}
a+e & b+f \\[0.3em]
c+g & d+h \\[0.3em]
\end{bmatrix}$. Not sure what to do at this point. </p>
<p>T($\alpha$A)= $\alpha$ $\begin{bmatrix}
1 & 3 \\[0.3em]
-1 & 1 \\[0.3em]
\end{bmatrix}A$ = $\begin{bmatrix}
\alpha & 3\alpha \\[0.3em]
-\alpha & \alpha \\[0.3em]
\end{bmatrix}A$ = $\alpha T(A)$</p>
| Ben Grossmann | 81,360 | <p>It should be given in your text at some point that matrix multiplication satisfies the following properties (among some other important ones): for compatible matrices $A,B,C$ and scalar $\alpha$:</p>
<ul>
<li>$A(B + C) = AB + AC$</li>
<li>$A(\alpha B) = (\alpha A)B = \alpha(AB)$</li>
</ul>
<p>That should be enough to prove what you need to prove.</p>
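As a sanity check, the two properties above can be verified numerically for this particular map; a minimal sketch with numpy (assumed tooling, not part of the original answer):

```python
import numpy as np

# T(A) = M A, with M the fixed 2x2 matrix from the question.
M = np.array([[1.0, 3.0], [-1.0, 1.0]])
T = lambda A: M @ A

rng = np.random.default_rng(0)
A, B = rng.random((2, 2)), rng.random((2, 2))
alpha = 2.5

assert np.allclose(T(A + B), T(A) + T(B))       # additivity: M(A+B) = MA + MB
assert np.allclose(T(alpha * A), alpha * T(A))  # homogeneity: M(aA) = a(MA)
```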
|
1,045,920 | <p>Let T: $M_{2\times 2} \rightarrow M_{2\times 2}$ be defined by</p>
<p>T(A) = $\begin{bmatrix}
1 & 3 \\[0.3em]
-1 & 1 \\[0.3em]
\end{bmatrix} A $</p>
<p>Is T a linear transformation?</p>
<p>I want to show that the defining properties of a linear transformation hold. However, I am not sure if I am on the right track for showing that the transformation preserves addition. Let A and B be 2x2 matrices with components in $\Re$.</p>
<p>Let A = $\begin{bmatrix}
a & b \\[0.3em]
c & d \\[0.3em]
\end{bmatrix}$ and B = $\begin{bmatrix}
e & f \\[0.3em]
g & h \\[0.3em]
\end{bmatrix}$.</p>
<p>T(A + B) = $T\begin{bmatrix}
a & b \\[0.3em]
c & d \\[0.3em]
\end{bmatrix}$ + $T\begin{bmatrix}
e & f \\[0.3em]
g & h \\[0.3em]
\end{bmatrix}$ = $T\begin{bmatrix}
a+e & b+f \\[0.3em]
c+g & d+h \\[0.3em]
\end{bmatrix}$. Not sure what to do at this point. </p>
<p>T($\alpha$A)= $\alpha$ $\begin{bmatrix}
1 & 3 \\[0.3em]
-1 & 1 \\[0.3em]
\end{bmatrix}A$ = $\begin{bmatrix}
\alpha & 3\alpha \\[0.3em]
-\alpha & \alpha \\[0.3em]
\end{bmatrix}A$ = $\alpha T(A)$</p>
| Milo Brandt | 174,927 | <p>It looks like you've already proved everything you desire to; you've got it in the wrong order though; you ought to write out $T(A+B)$ as the sum of two matrices (as you do on the rightmost expression) and <em>then</em> note that this equals $T(A)+T(B)$ (as you do in the middle expression). (Your proof that $T$ preserves scalar multiplication is fine)</p>
|
3,075,987 | <p>So I am currently studying Calculus and Linear Algebra, and I keep coming across the same concept being applied in a lot of the proofs that I read, but I am not capable of fully understanding it.</p>
<p><strong>Claim:</strong> There exists a <span class="math-container">$\lambda \in \mathbb{R}$</span> such that <span class="math-container">$\nabla F(a)=\lambda \nabla G(a)$</span></p>
<p>So the claim above is one of the theorems in calculus known as the Lagrange multiplier theorem. It is the tool used to solve the "constrained optimization" problem, where <span class="math-container">$F$</span> is the function we try to maximize/minimize subject to the constraint <span class="math-container">$G=0$</span>.</p>
<p>In the proofs that I read, it finished it off like this:</p>
<p>"<span class="math-container">$\nabla F(a) \cdot u=\lambda (\nabla G(a) \cdot u)$</span> where <span class="math-container">$u$</span> is an arbitrary unit vector."</p>
<p><strong>Question:</strong> My question is: how does this finish off the proof when we still have the <span class="math-container">$u$</span> vector sticking out? The text even mentions that it's because <span class="math-container">$u$</span> is an arbitrary unit vector so it does not matter, or something along those lines, but I cannot comprehend it. Any clarification will be appreciated, and thanks in advance.</p>
<p>Once again, a lot of the proofs are finished off like that, but it doesn't feel complete to me because we still have that <span class="math-container">$u$</span> sticking out. A simpler example through which I can understand this concept would also be appreciated.</p>
| Bijayan Ray | 627,419 | <p>One hint to understand it:</p>
<p>in a finite-dimensional inner product space, the zero vector is the only vector orthogonal to all vectors.</p>
<p>So, from the last equation of your proof, take everything to one side and use the statement above.</p>
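A small numeric illustration of the hint in $\mathbb{R}^3$ (my example, not part of the answer): a vector orthogonal to every standard basis vector has all components zero, so it must be the zero vector.

```python
import numpy as np

basis = np.eye(3)                       # the standard basis of R^3
v = np.array([1.0, -2.0, 0.5])          # a sample nonzero vector

# basis @ v recovers the components of v, so "orthogonal to every basis
# vector" forces every component to vanish.
assert np.any(basis @ v != 0)           # nonzero v fails the test
assert np.all(basis @ np.zeros(3) == 0) # only the zero vector passes
```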
|
1,582,742 | <blockquote>
<p>I have this so easy question, yet it is giving me a headache</p>
<p>Consider the following two subsets of <span class="math-container">$R^3$</span></p>
<p><span class="math-container">$S_2 = [(1,2,3),(4,5,6),(7,8,9),(10,11,12)]$</span></p>
<p><span class="math-container">$S_3 = [(1,0,-1),(1,0,1)]$</span></p>
<p>which of the sets <span class="math-container">$S_i$</span> spans <span class="math-container">$R^3$</span> and which of the sets <span class="math-container">$S_i$</span> are linearly independent?</p>
</blockquote>
<p>Here's what i know for 100%</p>
<p><span class="math-container">$S_3$</span> does not span <span class="math-container">$R^3$</span> since it has only 2 vectors and 2 < 3 = dim <span class="math-container">$R^3$</span></p>
<p><span class="math-container">$S_3$</span> is linearly independent since neither of the two vectors in <span class="math-container">$S_3$</span> is a multiple of each other</p>
<p><span class="math-container">$S_2$</span> is not linearly independent since <span class="math-container">$S_2$</span> has 4 vectors and 4 > 3 = dim <span class="math-container">$R^3$</span></p>
<p>Here's what i'm not quite sure about:</p>
<p>does <span class="math-container">$S_2$</span> span <span class="math-container">$R^3$</span>? I know it doesn't if the number of vectors, call it <span class="math-container">$n$</span>, is smaller than dim <span class="math-container">$R^3$</span>, but what if there are more vectors?</p>
| Javier | 241,291 | <p>You have correctly deduced that $S_2$ consists of dependent vectors. That means that one of those vectors (let's say $v_4$) can be expressed as a linear combination of $v_1,v_2,v_3$. In that case the space that $\left[ v_1,v_2,v_3,v_4 \right]$ spans is the same as the space that $\left[ v_1,v_2,v_3 \right]$ spans: every vector in the former space can be expressed as a combination of the four vectors, and therefore the first three vectors are enough.</p>
<p>Now you only have to see whether $\left[ v_1,v_2,v_3 \right]$ spans $\mathbb{R}^3$ or not. You can repeat the argument if you need it in the general case.</p>
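The remaining check can be done with a rank computation; a quick sketch using numpy (assumed approach, not spelled out in the answer):

```python
import numpy as np

# The four vectors of S_2 as rows; the span's dimension is the matrix rank.
S2 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
rank = np.linalg.matrix_rank(S2)
assert rank == 2   # rank 2 < 3, so S_2 does not span R^3
```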
|
1,582,742 | <blockquote>
<p>I have this so easy question, yet it is giving me a headache</p>
<p>Consider the following two subsets of <span class="math-container">$R^3$</span></p>
<p><span class="math-container">$S_2 = [(1,2,3),(4,5,6),(7,8,9),(10,11,12)]$</span></p>
<p><span class="math-container">$S_3 = [(1,0,-1),(1,0,1)]$</span></p>
<p>which of the sets <span class="math-container">$S_i$</span> spans <span class="math-container">$R^3$</span> and which of the sets <span class="math-container">$S_i$</span> are linearly independent?</p>
</blockquote>
<p>Here's what i know for 100%</p>
<p><span class="math-container">$S_3$</span> does not span <span class="math-container">$R^3$</span> since it has only 2 vectors and 2 < 3 = dim <span class="math-container">$R^3$</span></p>
<p><span class="math-container">$S_3$</span> is linearly independent since neither of the two vectors in <span class="math-container">$S_3$</span> is a multiple of each other</p>
<p><span class="math-container">$S_2$</span> is not linearly independent since <span class="math-container">$S_2$</span> has 4 vectors and 4 > 3 = dim <span class="math-container">$R^3$</span></p>
<p>Here's what i'm not quite sure about:</p>
<p>does <span class="math-container">$S_2$</span> span <span class="math-container">$R^3$</span>? I know it doesn't if the number of vectors, call it <span class="math-container">$n$</span>, is smaller than dim <span class="math-container">$R^3$</span>, but what if there are more vectors?</p>
| David G. Stork | 210,401 | <p>All your vectors in $S_2$ are of the form $(1,2,3) + i\ (3,3,3)$ for $i = 0,1,2,3$. Thus $(1,2,3)$ and the single vector $(3,3,3)$ generate your whole set, and thus form all linear combinations of your set. Hence the intrinsic dimensionality is $2$ (not $3$). </p>
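The claimed pattern can be checked in a couple of lines (my illustration):

```python
import numpy as np

# Every vector of S_2 equals (1,2,3) + i*(3,3,3) for i = 0, 1, 2, 3.
base, step = np.array([1, 2, 3]), np.array([3, 3, 3])
S2 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
for i, v in enumerate(S2):
    assert np.array_equal(v, base + i * step)
```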
|
1,619,208 | <blockquote>
<p>A person walks downhill at 10 km/h, uphill at 6 km/h and on the plane at 7.5 km/h. If the person takes 3 hours to go from a place A to another place B, and 1 hour on the way back, the distance between A and B is
$\begin{array}{ll} 1.\ 15\text{ km} & 2.\ 23.5\text{ km} \\ 3.\ 16\text{ km} & 4.\ \text{Given data is insufficient to calculate the distance}\end{array}$</p>
</blockquote>
<p>Attempt: If he walked $X$ km downhill, $Y$ km uphill, and $Z$ km on the plane (with all speeds in km/h), we get
\begin{equation}
\frac{X}{10}+\frac{Y}{6}+\frac{Z}{7.5}=3\tag{1}
\end{equation} and
\begin{equation}
\frac{X}{6}+\frac{Y}{10}+\frac{Z}{7.5}=1\tag{2}
\end{equation}</p>
<p>We are supposed to find distance between A and B, that is, $X+Y+Z$. Adding those equations and simplifying, we get $2(X+Y)+Z=30$ and I am stuck.</p>
<p>I also tried:</p>
<ol>
<li>$2(X+Y)+Z=(X+Y)+(X+Y+Z)$ and substituting each option for $X+Y+Z$, and try solve for $X,Y,Z$, and only option 1 fits the above equations (and for 2,3 it gives negative value for $X$ and/or $Y$.)</li>
<li>Guessing $Z=0$, it suggests option 1 again.</li>
</ol>
<p>But both are not the "right" way to solve it.</p>
<p>Can someone please help?</p>
| sinbadh | 277,566 | <p>Let us suppose that you know $r$ and want to find $k$ such that $\binom{k}{r}=c$, with $c\in \mathbb{N}$. Let us name $(*)$ this equation.</p>
<p>For $r=0$, $\binom{k}{r}=1$, and you can easily find the solution of $(*)$ if it exists.</p>
<p>If $r=1,2,3,4$ you can solve $(*)$ relatively easily, because $\binom{k}{2}=\frac{k(k-1)}{2}$, $\binom{k}{3}=\frac{k(k-1)(k-2)}{3!}$ and $\binom{k}{4}=\frac{k(k-1)(k-2)(k-3)}{4!}$, and each one induces a polynomial equation of degree 2, 3, 4, respectively.</p>
<p>But, for $r\ge 5$, the resulting equation doesn't have a general formula because its degree is $\ge 5$, so you must work with the particular values of $k$ and $r$ in each case. </p>
<p><strong>Conclusion:</strong> For equation $(*)$, only a few cases can be solved in a general way. In particular, your original equation becomes $\dfrac{k(k-1)}{2}=45$, so $k=10$</p>
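For small cases a direct search settles the equation immediately; a sketch (my addition) for the concluding example $\binom{k}{2}=45$:

```python
from math import comb

# Search for all k with C(k, 2) = 45; the quadratic k(k-1)/2 = 45 has
# exactly one positive integer root.
solutions = [k for k in range(2, 100) if comb(k, 2) == 45]
assert solutions == [10]
```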
|
2,856,568 | <p>In the example below I get the first part. May I know why they're dividing by 2 in the second part? I'm asking because I feel the answer to the second part should also be $\binom{10}{5}$. </p>
<p>In the first part simply by choosing 5 people, we're already splitting the squad into two teams. So I don't really see a difference between parts 1 & 2. Help?</p>
<h1>Example</h1>
<p>There are ten people in a basketball squad. Find how many ways:</p>
<ol>
<li>the starting five can be chosen from the squad</li>
<li>the squad can be split into two teams of five.</li>
</ol>
<h2>Solution</h2>
<ol>
<li>There are $\binom{10}{5} = \frac{10\times9\times8\times7\times6}{5\times4\times3\times2\times1} = 252$ ways of choosing the starting five.</li>
<li>The number of ways of dividing the squad into two teams of five is $\frac{252}{2} = 126$.</li>
</ol>
| Community | -1 | <p>This is from Sheldon Ross's book a First Course in Probability, I remember that example confusing me. Sometimes it helped me to think of it this way - assume that the first 5 picked will be on the red jersey team and the remaining will be on the yellow jersey team for option 1. Then a team of A,B,C,D,E in red jerseys is different than a team A,B,C,D,E in yellow jerseys, so they are both counted. In option 2, this is irrelevant so we divide by 2 to take out this distinction.</p>
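Both counts can be reproduced by brute force, which makes the division by 2 visible; a sketch with Python's itertools (my illustration, not from the book):

```python
from math import comb
from itertools import combinations

squad = range(10)

# Part 1: every 5-element subset is a distinct starting five.
starting_fives = list(combinations(squad, 5))
assert len(starting_fives) == comb(10, 5) == 252

# Part 2: a split {team, complement} is counted twice among the 252
# subsets (once per team), so deduplicating halves the count.
splits = set()
for t in starting_fives:
    comp = tuple(sorted(set(squad) - set(t)))
    splits.add(frozenset((t, comp)))
assert len(splits) == 126
```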
|
3,403,464 | <p>Suppose <span class="math-container">$f\in L^2(X\times Y)$</span> where <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are finite measure spaces and we have a product measure <span class="math-container">$\mu$</span>.</p>
<p>Suppose further that <span class="math-container">$g\in L^2(X)$</span>, and <span class="math-container">$h:Y\to \mathbb{R}$</span> is defined by <span class="math-container">$h(y)=\int f(x,y)\overline{g(x)}dx$</span>.</p>
<p>Can we say that <span class="math-container">$h$</span> is measurable? I want to show that <span class="math-container">$h\in L^2(Y)$</span>. It is not too hard to show that the <span class="math-container">$L^2$</span> norm is finite. But of course this only makes sense if <span class="math-container">$h$</span> is measurable...</p>
| Aloizio Macedo | 59,234 | <p>This is mostly a pointwise statement. That <span class="math-container">$\omega$</span> is a volume form means that <span class="math-container">$\omega_p(X_1,\cdots,X_n) \neq 0$</span> if and only if <span class="math-container">$X_1,\cdots,X_n$</span> is a basis of <span class="math-container">$T_pM$</span>. (<span class="math-container">$\omega_p$</span> is a non-zero "determinant".) </p>
<p>The map
<span class="math-container">\begin{align*}
\iota_p:T_pM &\to \Lambda^{n-1}_p(T_pM) \\
X &\mapsto \iota_X\omega
\end{align*}</span>
is then injective, since <span class="math-container">$\omega(X_1,\cdot,\cdot,\cdots)=\omega(X_2,\cdot,\cdot,\cdots)$</span> implies <span class="math-container">$\omega(X_1-X_2,\cdot,\cdot,\cdots)=0$</span>, and thus <span class="math-container">$X_1-X_2$</span> can't be completed to a basis. Therefore, it must be zero. By dimensional reasons, the map is then an isomorphism.</p>
<p>There is still the issue of smoothness when we go from the pointwise statement to the differential one. By taking local charts and trivializations, if <span class="math-container">$\omega=gdx_1\cdots dx_n$</span> and we are taking a differential form <span class="math-container">$\eta=\sum f_i dx_1 \cdots \widehat{dx_i}\cdots dx_n$</span>, we are getting <span class="math-container">$X=\sum_i (-1)^{i+1}\frac{f_i}{g}\partial_i$</span> as our output, which is smooth.</p>
|
1,905,397 | <p>I want to know if the popular Sudoku puzzle is a Cayley table for a group. </p>
<p>Methods I've looked at: someone I've spoken to told me they're not, because if you count the number of puzzle solutions against the number of Cayley tables (up to permutations of elements, rows and columns), there are more solutions than tables. But I can't see why, since I don't know how to count the different tables for a group of order 9 and then permute the rows, columns and elements in different ways. I also believe rotations/reflections will matter when comparing these numbers. It would also be nice if there were a way to know whether the operation is associative just from the table.</p>
| flawr | 109,451 | <p>Another approach: We are considering a group of order $9 = 3^2$, a prime squared. <a href="https://math.stackexchange.com/questions/64371/showing-non-cyclic-group-with-p2-elements-is-abelian">It is well known</a> that groups of order $p^2$ must be abelian (abelian = commutative). Now a Sudoku can never represent an abelian group, as its table is never symmetric (symmetric as for matrices, that is, with respect to the diagonal): in a symmetric grid the entries at positions $(1,2)$ and $(2,1)$ would be equal, repeating a value inside the top-left $3\times 3$ block.</p>
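To illustrate the symmetry claim, here is a quick check (my example) that the Cayley table of the cyclic group $\mathbb{Z}_9$, an abelian group of order 9, is symmetric, while each row and column is still a permutation of the 9 elements:

```python
n = 9
# Cayley table of Z_9 under addition mod 9.
table = [[(i + j) % n for j in range(n)] for i in range(n)]

# Abelian => the table is symmetric about the diagonal.
assert all(table[i][j] == table[j][i] for i in range(n) for j in range(n))
# Latin-square property: every row is a permutation of 0..8.
assert all(sorted(row) == list(range(n)) for row in table)
```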
|
3,620,409 | <blockquote>
<p>If a matrix and its transpose both have the same eigenvectors, is it necessarily symmetric?</p>
</blockquote>
<p>It's easy to see that if <span class="math-container">$A$</span> = <span class="math-container">$A^T$</span>, they have the same eigenvectors, but is it the only way? And how would you show it?</p>
| Batominovski | 72,152 | <blockquote>
<p><strong>Proposition.</strong> Let <span class="math-container">$A\in\text{Mat}_{n\times n}(\mathbb{R})$</span> be such that <span class="math-container">$A$</span> is diagonalizable over <span class="math-container">$\mathbb{R}$</span>. Then, <span class="math-container">$A$</span> and <span class="math-container">$A^{\top}$</span> have the same set of <span class="math-container">$\mathbb{R}$</span>-eigenspaces if and only if <span class="math-container">$A$</span> is a symmetric matrix.</p>
</blockquote>
<p><strong>Proof.</strong> One direction is trivial, so we prove the more difficult direction. Let <span class="math-container">$v_1,v_2,\ldots,v_n\in\mathbb{R}^n$</span> be linearly independent eigenvectors of <span class="math-container">$A$</span>, with the corresponding eigenvalues <span class="math-container">$\lambda_1,\lambda_2,\ldots,\lambda_n\in\mathbb{R}$</span>, respectively. Then, there exists a permutation <span class="math-container">$\sigma$</span> on the set of indices <span class="math-container">$\{1,2,\ldots,n\}$</span> such that <span class="math-container">$v_1,v_2,\ldots,v_n$</span> are eigenvectors of <span class="math-container">$A^\top$</span> associated to the eigenvalues <span class="math-container">$\lambda_{\sigma(1)},\lambda_{\sigma(2)},\ldots,\lambda_{\sigma(n)}$</span>, respectively. Thus,
<span class="math-container">$$AA^\top\,v_k=A\,(A^\top\,v_k)=A\,(\lambda_{\sigma(k)}\,v_k)=\lambda_{\sigma(k)}\,(A\,v_k)=\lambda_{\sigma(k)}\,(\lambda_k\,v_k)\,.$$</span>
Similarly,
<span class="math-container">$$A^\top A\,v_k=A^\top\,(A\,v_k)=A^\top\,(\lambda_kv_k)=\lambda_{k}\,(A^\top\,v_k)=\lambda_{k}\,(\lambda_{\sigma(k)}\,v_k)\,.$$</span>
Therefore,
<span class="math-container">$$AA^\top\,v_k=\lambda_k\lambda_{\sigma(k)}\,v_k=A^\top A\,v_k$$</span>
for <span class="math-container">$k=1,2,\ldots,n$</span>. Since <span class="math-container">$\mathbb{R}^n$</span> is spanned by <span class="math-container">$\{v_1,v_2,\ldots,v_n\}$</span>, this proves that <span class="math-container">$AA^\top=A^\top A$</span>, whence <span class="math-container">$A$</span> is normal. Therefore, <span class="math-container">$A$</span> can be diagonalized using an orthogonal matrix.</p>
<p>Let now <span class="math-container">$A=Q\Lambda Q^\top$</span> be a diagonalization of <span class="math-container">$A$</span> via an orthogonal matrix <span class="math-container">$Q\in\text{Mat}_{n\times n}(\mathbb{R})$</span>, where <span class="math-container">$\Lambda$</span> is the diagonal matrix
<span class="math-container">$\text{diag}_n\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)$</span>. This means <span class="math-container">$$A^\top =(Q\Lambda Q^\top)^\top=(Q^\top)^\top\Lambda^\top Q^\top =Q\Lambda Q^\top=A\,,$$</span><br>
as <span class="math-container">$\Lambda^\top=\Lambda$</span>. Therefore, <span class="math-container">$A$</span> is symmetric.</p>
<p><br></p>
<p><br></p>
<blockquote>
<p><strong>Corollary 1.</strong> Let <span class="math-container">$A\in\text{Mat}_{n\times n}(\mathbb{R})$</span> be such that <span class="math-container">$A$</span> is diagonalizable over <span class="math-container">$\mathbb{C}$</span>. Then, <span class="math-container">$A$</span> and <span class="math-container">$A^{\top}$</span> have the same set of <span class="math-container">$\mathbb{C}$</span>-eigenspaces if and only if <span class="math-container">$A$</span> is a normal matrix.</p>
</blockquote>
<p><br></p>
<blockquote>
<p><strong>Corollary 2.</strong> Let <span class="math-container">$A\in\text{Mat}_{n\times n}(\mathbb{C})$</span> be such that <span class="math-container">$A$</span> is diagonalizable over <span class="math-container">$\mathbb{C}$</span>. Then, <span class="math-container">$A$</span> and <span class="math-container">$A^{\dagger}$</span> have the same set of <span class="math-container">$\mathbb{C}$</span>-eigenspaces if and only if <span class="math-container">$A$</span> is a normal matrix. Here, <span class="math-container">$(\_)^\dagger$</span> represents the Hermitian conjugate operator.</p>
</blockquote>
<p><br></p>
<p><strong>Remark.</strong> Let <span class="math-container">$\mathbb{K}$</span> be a field and suppose that <span class="math-container">$A\in\text{Mat}_{n\times n}(\mathbb{K})$</span> is diagonalizable over <span class="math-container">$\mathbb{K}$</span>. It is only known that, if <span class="math-container">$A$</span> and <span class="math-container">$A^\top$</span> have the same set of <span class="math-container">$\mathbb{K}$</span>-eigenspaces, then <span class="math-container">$AA^\top=A^\top A$</span>. <s>I do not think that the converse holds for all <span class="math-container">$\mathbb{K}$</span>.</s> See also my question <a href="https://math.stackexchange.com/questions/3620718/which-matrices-a-in-textmat-n-times-n-mathbbk-are-orthogonally-diagona">here</a>.</p>
<p><strong><em>Update.</em></strong> I forgot that diagonalizable matrices commute if and only if they can be simultaneously diagonalized. Since <span class="math-container">$A$</span> is diagonalizable over <span class="math-container">$\mathbb{K}$</span>, <span class="math-container">$A^\top$</span> is also diagonalizable over <span class="math-container">$\mathbb{K}$</span>. Therefore, <span class="math-container">$A$</span> and <span class="math-container">$A^\top$</span> commute (i.e., <span class="math-container">$AA^\top=A^\top A$</span>) if and only if <span class="math-container">$A$</span> and <span class="math-container">$A^\top$</span> can be simultaneously diagonalized, which is equivalent to the condition that <span class="math-container">$A$</span> and <span class="math-container">$A^\top$</span> have the same <span class="math-container">$\mathbb{K}$</span>-eigenspaces. Therefore, we have the following theorem.</p>
<blockquote>
<p><strong>Theorem.</strong> Let <span class="math-container">$\mathbb{K}$</span> be a field and <span class="math-container">$n$</span> a positive integer. Suppose that a matrix <span class="math-container">$A\in\text{Mat}_{n\times n}(\mathbb{K})$</span> is diagonalizable over <span class="math-container">$\mathbb{K}$</span>. Then, <span class="math-container">$A$</span> and <span class="math-container">$A^\top$</span> have the same <span class="math-container">$\mathbb{K}$</span>-eigenspaces if and only if <span class="math-container">$$AA^\top=A^\top A\,.$$</span></p>
</blockquote>
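A small numeric illustration of the two directions (my sketch, not part of the proof): a symmetric matrix is normal, while a generic non-symmetric diagonalizable matrix is not.

```python
import numpy as np

# Non-symmetric but diagonalizable (distinct eigenvalues 2 and 3):
A = np.array([[2.0, 1.0], [0.0, 3.0]])
assert not np.allclose(A @ A.T, A.T @ A)   # A A^T != A^T A: not normal

# Symmetric matrix: trivially normal, so it commutes with its transpose.
S = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(S @ S.T, S.T @ S)
```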
|
194,569 | <blockquote>
<p>How can we find an integer $n$ such that $\mathbb{Z}[\frac{1}{20},\frac{1}{32}]=\mathbb{Z}[\frac{1}{n}]$?</p>
</blockquote>
<p>Thanks in advance!</p>
| i. m. soloveichik | 32,940 | <p>Let $n=\operatorname{lcm}(32,20)=160$. Then $R=\mathbb{Z}[1/32,1/20]\subseteq \mathbb{Z}[1/n]$, since $1/32=5/160$ and $1/20=8/160$.
On the other hand, $1/160=4\cdot 1/32\cdot 1/20\in R$, so $\mathbb{Z}[1/160]\subseteq R$. So $n=160$ is a valid answer.</p>
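The arithmetic behind both inclusions checks out with exact fractions (my verification sketch):

```python
from math import lcm
from fractions import Fraction

n = lcm(32, 20)
assert n == 160

# The generators of R live in Z[1/160]:
assert Fraction(1, 32) == Fraction(5, 160)
assert Fraction(1, 20) == Fraction(8, 160)
# And 1/160 is a product of elements of R:
assert 4 * Fraction(1, 32) * Fraction(1, 20) == Fraction(1, 160)
```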
|
3,141,004 | <p>I am just a high school student, so please excuse me if this question sounds silly to you.</p>
<p>How do I graph this: <span class="math-container">$\dfrac{9}{x} - \dfrac{4}{y} = 8$</span></p>
<p>I could do trial and error, but is there a more systematic way to graph this equation?</p>
| José Carlos Santos | 446,262 | <p>Yes, there is. Note that<span class="math-container">\begin{align}\frac9x-\frac4y=8&\iff\frac4y=\frac9x-8\\&\iff y=\frac4{\frac9x-8}=\frac{4x}{9-8x}=-\frac12+\frac9{18-16x}.\end{align}</span>So, graph the function <span class="math-container">$x\mapsto-\dfrac12+\dfrac9{18-16x}$</span>.</p>
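A numeric spot-check of the algebra above (my addition, not part of the answer): at sample points, $y = 4x/(9-8x)$ satisfies the original relation and agrees with the shifted form.

```python
# For each sample x (avoiding x = 0 and x = 9/8), verify both identities.
for x in [0.5, 1.0, 2.0, -3.0]:
    y = 4 * x / (9 - 8 * x)
    assert abs(9 / x - 4 / y - 8) < 1e-12                 # original relation
    assert abs(y - (-0.5 + 9 / (18 - 16 * x))) < 1e-12    # shifted form
```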
|
1,161,089 | <p>Sorry if this may seem trivial - I just started studying Group Theory. This is the problem:</p>
<blockquote>
<p>Prove that <span class="math-container">$(g,h) \rightarrow hg$</span> does not define a group action with <span class="math-container">$g$</span> acting on <span class="math-container">$h$</span>. Prove instead that this defines an action of <span class="math-container">$G^\text{op} $</span> on <span class="math-container">$G$</span>.</p>
</blockquote>
<p>Here is my attempt to prove the problem:</p>
<blockquote>
<p>To show the first part, note that <span class="math-container">$(g,h) \rightarrow hg$</span> does not satisfy the criterion <span class="math-container">$g\cdot(h\cdot x)=(gh)\cdot x$</span> for <span class="math-container">$x\in X$</span>, where <span class="math-container">$X$</span> is the set acted upon. This is because we would need <span class="math-container">$xhg=xgh$</span>, which is not necessarily true.</p>
<p>The second part is true because the operation <span class="math-container">$*'$</span> in <span class="math-container">$G^\text{op}$</span> is defined as <span class="math-container">$h*'g=g*h$</span>, where <span class="math-container">$*$</span> is the operation in <span class="math-container">$G$</span>. This then allows the mapping to satisfy the criteria of a group action. We have <span class="math-container">$(g,(h,x))=(g, xh)=xhg$</span> and <span class="math-container">$((g*'h),x)=((h*g),x)=xhg$</span>; thus the map obeys the associativity criterion. The identity axiom, <span class="math-container">$(e,x)=x$</span>, is direct.</p>
</blockquote>
<p>My problem is that I do not fully comprehend the original problem.</p>
<p>a) I do not know what it means by <span class="math-container">$hg$</span>, whether it is with respect to the operation <span class="math-container">$*$</span> in <span class="math-container">$G$</span> or <span class="math-container">$*'$</span> in <span class="math-container">$G^\text{op}$</span>.</p>
<p>b) I am not sure if my understanding of <span class="math-container">$G^\text{op}$</span> is correct, and if I have used the operations correctly.</p>
<p>May someone please explain if I have any mistakes?</p>
<p>Thank you so much!</p>
| Uncountable | 215,630 | <p>You throw the coin $x$ times. Now the probability that the first $y$ are heads and the second $x-y$ are tails is:
$$\frac{1}{2^{y}}\frac{1}{2^{x-y}}=\frac{1}{2^x}$$
But we can choose the $y$ heads out of the $x$ throws in ${x\choose y}$ ways. So:
$$P(y\text{ heads in } x\text{ coin tosses})={x\choose y}\frac{1}{2^x}$$</p>
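The formula above can be packaged as a small function and sanity-checked (my sketch):

```python
from math import comb

def p_heads(y, x):
    """Probability of exactly y heads in x fair coin tosses: C(x,y)/2^x."""
    return comb(x, y) / 2**x

# e.g. 3 heads in 5 tosses: C(5,3)/32 = 10/32
assert p_heads(3, 5) == 10 / 32
# The probabilities over all possible head counts sum to 1.
assert abs(sum(p_heads(y, 5) for y in range(6)) - 1) < 1e-12
```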
|
2,304,168 | <p>Let's say we have a random point inside a circle. My intuition tells me that if I draw vectors from all the circumference points to that point and find their sum, the result will be zero.</p>
<p>Am I correct ? How can we prove this ?</p>
<p>Edit: I am asking about the sum of the 'vectors'</p>
| William Elliot | 426,203 | <p>$\operatorname{bd} A = \operatorname{cl} A \setminus \operatorname{int} A$, and $\operatorname{int} A \subseteq A$. Thus
$$\operatorname{cl} A \setminus A \subseteq \operatorname{cl} A \setminus \operatorname{int} A = \operatorname{bd} A.$$</p>
|
3,306,110 | <p>The expression <span class="math-container">$x(x-9)$</span> expands as
<span class="math-container">$$x(x-9)=x^2-9x$$</span>
but why not like this?
<span class="math-container">$$x(x-9)=x^2-9$$</span>
Where does the <span class="math-container">$9x$</span> come from?</p>
| Ethan Bolker | 72,858 | <p>Remember the distributive law
<span class="math-container">$$
a(b+c) = ab + ac .
$$</span></p>
<p>In your expression, <span class="math-container">$a=x$</span>, <span class="math-container">$b=x$</span> and <span class="math-container">$c = -9$</span>.</p>
<p>Then <span class="math-container">$x$</span> times <span class="math-container">$-9$</span> is rewritten as <span class="math-container">$-9x$</span>.</p>
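A quick numeric check of the distributive law for this example (my addition): $x(x-9)$ agrees with $x^2-9x$ everywhere, while $x^2-9$ does not.

```python
# x*(x - 9) equals x**2 - 9*x at every sample value...
for x in [-2.0, 0.0, 2.0, 3.5, 10.0]:
    assert x * (x - 9) == x**2 - 9 * x

# ...but dropping the x gives a different polynomial:
# at x = 2, 2*(2-9) = -14 while 2**2 - 9 = -5.
assert 2 * (2 - 9) != 2**2 - 9
```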
|
4,508,532 | <blockquote>
<p>Solve the equation <span class="math-container">$a \circ x \circ b = c$</span> in the group <span class="math-container">$S_4$</span> when <span class="math-container">$$a=\begin{pmatrix}1&2&3&4\\ 2&3&1&4\end{pmatrix}, b= \begin{pmatrix}1&2&3&4\\ 4&1&2&3\end{pmatrix}, c=\begin{pmatrix}1&2&3&4\\ 1&3&4&2\end{pmatrix}.$$</span></p>
</blockquote>
<p>So going from right to left in <span class="math-container">$a \circ x \circ b$</span> we want <span class="math-container">$$\begin{align*} 1 &\to 4 \to 3 \to 1 \\ 2 &\to 1 \to 2 \to 3 \\ 3 &\to 2 \to 4 \to 4 \\ 4 &\to 3 \to 1 \to 2\end{align*}$$</span> This gave me <span class="math-container">$x =\begin{pmatrix}1&2&3&4\\ 2&4&1&3\end{pmatrix}$</span>, but I'm not sure how I can verify this.</p>
<p>Using cycle notation this seems to be false as <span class="math-container">$$(123)\circ x \circ (1432)$$</span> would be then <span class="math-container">$$(123)(1243)(1432)=(13)(24)$$</span> and not <span class="math-container">$(234)$</span>.</p>
| José Carlos Santos | 446,262 | <p>Your answer is correct and, actually, we don't have <span class="math-container">$$(1\ \ 2\ \ 3)\circ(1\ \ 2\ \ 4\ \ 3)\circ(1\ \ 4\ \ 3\ \ 2)=(1\ \ 3)\circ(2\ \ 4).\tag1\label{1}$$</span> For instance, the LHS of \eqref{1} maps <span class="math-container">$1$</span> into <span class="math-container">$1$</span>, since:</p>
<ul>
<li><span class="math-container">$(1\ \ 4\ \ 3\ \ 2)$</span> maps <span class="math-container">$1$</span> into <span class="math-container">$4$</span>;</li>
<li><span class="math-container">$(1\ \ 2\ \ 4\ \ 3)$</span> maps <span class="math-container">$4$</span> into <span class="math-container">$3$</span>;</li>
<li><span class="math-container">$(1\ \ 2\ \ 3)$</span> maps <span class="math-container">$3$</span> into <span class="math-container">$1$</span>.</li>
</ul>
<p>Therefore, \eqref{1} is false. And, in fact, <span class="math-container">$$(1\ \ 2\ \ 3)\circ(1\ \ 2\ \ 4\ \ 3)\circ(1\ \ 4\ \ 3\ \ 2)=(2\ \ 3\ \ 4)=c.$$</span></p>
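The whole verification can also be done mechanically, with the permutations written as Python dicts (my sketch): composing right-to-left, $a(x(b(k)))$ must equal $c(k)$ for every $k$.

```python
# One-line notation from the question, as maps on {1, 2, 3, 4}.
a = {1: 2, 2: 3, 3: 1, 4: 4}
b = {1: 4, 2: 1, 3: 2, 4: 3}
c = {1: 1, 2: 3, 3: 4, 4: 2}
x = {1: 2, 2: 4, 3: 1, 4: 3}   # the OP's candidate solution

# a ∘ x ∘ b applied right-to-left agrees with c everywhere.
assert all(a[x[b[k]]] == c[k] for k in (1, 2, 3, 4))
```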
|
1,904,786 | <p>How can I get a good approximation of the sum</p>
<p>$$\sum_{s=1}^\infty x^s\ln(s)$$</p>
<p>by hand?</p>
<p>If I only consider the part of the function where it is strictly decreasing, I can bound the sum by integrals. But how can I approximate the sum up to this point?</p>
| Simply Beautiful Art | 272,831 | <p>$|x|<1$</p>
<p>$$f(x):=\sum_{s=2}^\infty x^s\ln(s)$$</p>
<p>$$f(x)+f(-x)=2\sum_{s=1}^\infty x^{2s}\ln(2s)$$</p>

<p>$$=2\sum_{s=1}^\infty x^{2s}\left(\ln(2)+\ln(s)\right)$$</p>

<p>Since the $s=1$ term of the $\ln(s)$ part vanishes,
$$f(x)+f(-x)=2\ln(2)\frac{x^2}{1-x^2}+2f(x^2)$$</p>
<p>So as long as we can approximate it for $x>0$, this relationship allows us to calculate for $x<0$, and vice versa.</p>
<p>Note that for all $0<x<1$, since $\ln(s)\ge\ln(2)$ for every $s\ge 2$,</p>

<p>$$\ln(2)\frac{x^2}{1-x}=\ln(2)\sum_{s=2}^\infty x^s\le f(x)$$</p>

<p>This lower bound also seems to be a fairly good and easy approximation. Try it for a few values?</p>
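A numeric sanity check of the functional equation, taking it in the form $f(x)+f(-x)=2\ln(2)\,x^2/(1-x^2)+2f(x^2)$ (the indexing assumed by this check), using truncated partial sums:

```python
import math

def f(x, terms=2000):
    # Partial sum of f(x) = sum_{s>=2} x^s ln(s); for |x| <= 0.5 the tail
    # beyond 2000 terms is far below double precision.
    return sum(x**s * math.log(s) for s in range(2, terms))

x = 0.5
lhs = f(x) + f(-x)
rhs = 2 * math.log(2) * x**2 / (1 - x**2) + 2 * f(x**2)
assert abs(lhs - rhs) < 1e-9
```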
|
4,097,772 | <p>Consider <span class="math-container">$\mathbb{D}_{R} = \{z \in \mathbb{C}: |z| < R\}$</span>. For given <span class="math-container">$f \in \mathscr{O}(\mathbb{D}_{R})$</span>, the space of holomorphic functions on the open disk, denote by <span class="math-container">$c_n$</span> the <span class="math-container">$n$</span>-th Taylor coefficient of <span class="math-container">$f$</span> at <span class="math-container">$0$</span>. How to show that the topology of compact convergence on <span class="math-container">$\mathscr{O}(\mathbb{D}_{R})$</span> is generated by the family of seminorms <span class="math-container">$\{\|\;.\|_{r, \infty}: 0 <r < R\}$</span>, where <span class="math-container">$\|f\|_{r, \infty} = \sup_{n \geq0}|c_n|r^n$</span>? It suffices to show that each norm from one family can be bounded by finitely many norms from the other family and vice versa, but I failed to write the inequalities.</p>
| Community | -1 | <p>It suffices to prove that, for every <span class="math-container">$R>r>\delta>0$</span>, if <span class="math-container">$||f_\lambda-f||_{r, \infty}\to 0$</span> then <span class="math-container">$f_\lambda$</span> converges uniformly to <span class="math-container">$f$</span> on <span class="math-container">$\{z:|z|\le r-\delta\}$</span> and that if it converges uniformly on <span class="math-container">$\{z:|z|\le r\}$</span> then <span class="math-container">$||f_\lambda-f||_{r,\infty}\to 0$</span> (where <span class="math-container">$f_\lambda$</span> is a net).</p>
<p>Let us first prove that uniform convergence implies <span class="math-container">$||f_\lambda-f||\to 0$</span>.
Recall that, by Cauchy's inequalities we have
<span class="math-container">$$|c_n|r^n\le \max_{|z|=r}|f(z)|$$</span>
So if <span class="math-container">$f_\lambda\to f$</span> uniformly on <span class="math-container">$\{z:|z|\le r\}$</span>, then <span class="math-container">$||f_\lambda-f||_{r,\infty}=\sup_n |c_n-c_{n,\lambda}|r^n
\le \max_{|z|=r}|f(z)-f_\lambda(z)|\to 0$</span>, where <span class="math-container">$c_{n,\lambda}$</span> denotes the <span class="math-container">$n$</span>-th Taylor coefficient of <span class="math-container">$f_\lambda$</span>.</p>
<p>On the other hand, suppose that <span class="math-container">$||f_\lambda-f||_{r,\infty}\to 0$</span>. Then <span class="math-container">$\sup_k|c_k-c_{k,\lambda}|r^k\to 0$</span>. Take <span class="math-container">$N$</span> big enough so that <span class="math-container">$\forall \lambda\ge N$</span> we have <span class="math-container">$|c_{k}-c_{k,\lambda}|r^k<\varepsilon$</span> for all <span class="math-container">$k$</span>. Then for every <span class="math-container">$z:|z|\le r(1-\delta)$</span> we have
<span class="math-container">$$|f(z)-f_\lambda(z)|\le \sum |c_k-c_{k,\lambda}||z|^k\le \sum \varepsilon(1-\delta)^k=
\frac{\varepsilon}{\delta},$$</span>
using <span class="math-container">$|z|^k\le r^k(1-\delta)^k$</span>.
Thus, for each fixed <span class="math-container">$\delta\in(0,1)$</span>, we have uniform convergence on <span class="math-container">$\overline{\mathbb{D}_{r(1-\delta)}}$</span>.</p>
<p>This implies the result: it's clear that a net converges in the topology iff it converges uniformly on compact subsets of <span class="math-container">$\mathbb{D}_R$</span>. Actually, the topology is clearly metrizable, as one can take <span class="math-container">$r\in \mathbb{Q}\cap [0,R]$</span> to define a countable family of seminorms generating the same topology, and so one can dispense with nets, using sequences instead.</p>
|
1,038,219 | <ol>
<li>(n) people put their name in a hat. </li>
<li>Each person picks a name out of the hat to buy a gift for. </li>
<li>If a person picks out themselves they put the name back into the hat.</li>
<li>If the last person can only pick themselves then the loop is invalid and either<br>
. start again<br>
. or step back until a valid loop can be reached.</li>
</ol>
<p>What is the probability that the chain creates a perfect loop if n is 33?</p>
<p>An example of a perfect loop where n is 4: </p>
<ul>
<li>A gives to B </li>
<li>B gives to C </li>
<li>C gives to D. </li>
<li>D gives to A. </li>
</ul>
<p>An example of a valid but not perfect loop where n is 4:</p>
<ul>
<li>A gives to B </li>
<li>B gives to A </li>
<li>C gives to D. </li>
<li>D gives to C. </li>
</ul>
| Ross Millikan | 1,827 | <p>You are asking for the chance of a single cycle given that you have a <a href="http://en.wikipedia.org/wiki/Derangement" rel="nofollow">derangement</a>. For $n$ people, the number of derangements is the closest integer to $\frac {n!}e$ To have a cycle, person $1$ has $n-1$ choices, then that person has $n-2$ choices, then that person has $n-3$, etc. So there are $(n-1)!$ cycles. The odds are then (just about) $\frac e{n}$</p>
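The counts in this answer are easy to check exactly (an illustrative script, not part of the answer; the recurrence used for derangements is the standard one):

```python
import math

def derangements(n):
    # standard recurrence D(n) = (n-1) * (D(n-1) + D(n-2)), with D(0) = 1, D(1) = 0
    a, b = 1, 0
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return a if n == 0 else b

n = 33
# single n-cycles (everyone gives to someone else, one big loop): (n-1)! of them
p = math.factorial(n - 1) / derangements(n)
print(p)   # about e/33 ≈ 0.0824
```

The ratio matches the answer's estimate $e/n$ extremely closely, since $D(n)$ is the nearest integer to $n!/e$.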
|
251,607 | <p>The simplest Laguerre polynomials are
$$
L_k(x)=(\frac{d}{dx}-1)^k\left(\frac{x^k}{k!}\right).
$$
I would like to find a simple reference for proving or disproving the following assertions.</p>
<p>(1) All the $k$ zeroes of $L_k$ are simple and located on the positive half-line.</p>
<p>(2) The largest zero of $L_k$ is bounded above by $k^2$.</p>
| john mangual | 1,358 | <p>Hmm, derivatives interlace zeros... However, $ a^+=1- d/dx$ behaves nicely on polynomials. It should preserve positivity of the roots and shift the leading zero by $+n$ with a small correction term, where $n = \deg p$. Here we have $x=0$ as a root with multiplicity $k$, and the zeros should hop forward a bit.</p>
|
1,177,111 | <p>Let $(a_n)$ be defined by $a_1=1$, $a_{n+1}=a_n+{1\over a_n}$. Prove there exists $\lim_\limits{n\to \infty}{a_n}$ and find $\lim_\limits{n\to \infty}{a_n\over n}$. </p>
<p>I tried many things but they wouldn't work. I would appreciate your help.</p>
| Clement C. | 75,808 | <ul>
<li>By induction, prove $a_n \geq 1$ for all $n$.</li>
<li>Use this to show $(a_n)_n$ is strictly increasing.</li>
<li>By monotonicity, it must converge in $\mathbb{R}\cup\{+\infty\}$.</li>
<li>By continuity of the function $f\colon x > 0 \mapsto x+\frac{1}{x}$, if the limit $\ell$ exists it must satisfy $\ell = f(\ell)$. What is the only possible limit?</li>
</ul>
<p>Then, once you known $a_n\xrightarrow[n\to\infty]{} \infty$, you can consider
$$\ln a_n = \ln a_n - \ln a_1 = \sum_{k=1}^{n-1} \ln\frac{a_{k+1}}{a_k} = \sum_{k=1}^{n-1} \ln\!\left(1+\frac{1}{a_k^2}\right)$$.</p>
<p>You know that the LHS goes to $\infty$, so the RHS must be a divergent sequence (to infinity). Since $a_k\to \infty$, $\ln\!\left(1+\frac{1}{a_k^2}\right)\sim_{k\to\infty} \frac{1}{a_k^2}$, and by comparison of divergent series this implies that the series
$$
\sum \frac{1}{a_k^2}
$$
diverges as well. Why does that imply that the positive sequence $\frac{a_n}{n}$ cannot have any non-zero limit? And why does that imply that it must then have limit $0$?</p>
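Not part of the answer, but the behaviour is easy to see numerically (the number of iterations below is an arbitrary choice):

```python
# iterate a_{n+1} = a_n + 1/a_n starting from a_1 = 1
a = 1.0
N = 10**6   # illustrative number of steps
for _ in range(N):
    a += 1.0 / a
print(a, a / N)   # roughly 1414 and 0.0014: a_N diverges while a_N / N -> 0
```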
|
3,724,594 | <p>Let <span class="math-container">$g: \mathbb{R}^2 \to \mathbb{R}$</span>, <span class="math-container">$g(x,y) = |xy|$</span>. Find all the <span class="math-container">$(x,y) \in \mathbb{R}^2$</span> where <span class="math-container">$g$</span> is differentiable.</p>
<p>I tried to compute the partial derivatives:</p>
<p><span class="math-container">$|\frac{g(x+h,y) - g(x,y)}{h}| = |\frac{|(x+h)y|-|xy|}{h}| \leq |\frac{(x+h)y-xy}{h}|=|y|$</span>.</p>
<p>So the partial derivative <span class="math-container">$\frac{\partial g}{\partial x}= |y|$</span>. Computing in the same way <span class="math-container">$\frac{\partial g}{\partial y}= |x|$</span>.</p>
<p>Both of the partial derivatives are continous. Therefore the function itself is differentiable.</p>
<p>According to the solutions this is not correct. There are a few "special cases" where this doesn't hold. Where is my mistake?</p>
| Allawonder | 145,126 | <blockquote>
<p>So what if I always choose <span class="math-container">$\epsilon=\infty$</span>? Then it is guaranteed that the distance between <span class="math-container">$f(x)$</span> and <span class="math-container">$L$</span> is less than <span class="math-container">$\epsilon$</span>, and, as a bonus, <em><span class="math-container">$L$</span> can literally be anything, which means that the limit can be any value you like.</em> Which is obviously absurd. What am I missing here?</p>
</blockquote>
<p>The first part of your statement is right. The problem is italicised. If your tolerance is infinitely large, then any number we choose approximates the <em>fixed</em> limit <span class="math-container">$L$</span> with enough accuracy. There is nothing here that says <span class="math-container">$L$</span> can be anything, in the sense that we have said that <span class="math-container">$f$</span> does have a limit. The symbol <span class="math-container">$L$</span> is only arbitrary in the sense that we are talking in general, for any <em>given <span class="math-container">$L$</span>.</em> However, this is the case also when <span class="math-container">$\epsilon\ne \infty.$</span></p>
<p>Hope this helps.</p>
|
2,229,473 | <p>I am trying to solve the following question:</p>
<blockquote>
<p>If $K$ is a field and $R=K[x]/(x^n)$.and $r=a_0+a_1x +\dots+a_{n-1}x^{n-1}$ is an element in $R$ with $a_0\neq 0$, prove that $r$ is a unit and find its inverse.</p>
</blockquote>
<p>To prove it is a unit, I know that $\gcd(r,x^n)=1$, so there exist $a,b \in K[x]$ such that $ra+x^nb=1$, $ra=1$ in $R$.</p>
<p>But how do I construct the inverse? I know that $a_0$ has an inverse in $K$. Can someone give me a hint?</p>
<p>I also need to show that every zero divisor in $R$ is nilpotent.
For nonzero $a,b \in R$, if $ab=0$, then $x^n$ divides $ab$; does this imply that $a$ and $b$ both have no constant terms?</p>
| egreg | 62,967 | <p>I'd prefer to write $\xi$ for the image of $x$ in $K[x]/(x^n)$, just to be sure we're working in the right context.</p>
<p>What you want to prove is that $r=a_0+a_1\xi+\dots+a_{n-1}\xi^{n-1}$ is invertible in $R$.</p>
<p>Hints:</p>
<ol>
<li>$a\xi^k$ is nilpotent, for every $a\in K$ and every positive integer $k$;</li>
<li>the sum of two nilpotent elements is nilpotent;</li>
<li>if $A$ is a commutative ring, $a\in A$ is invertible and $t\in A$ is nilpotent, then $a+t$ is invertible.</li>
</ol>
<p>Finally, what are the nilpotent elements in $R$?</p>
|
2,229,473 | <p>I am trying to solve the following question:</p>
<blockquote>
<p>If $K$ is a field and $R=K[x]/(x^n)$.and $r=a_0+a_1x +\dots+a_{n-1}x^{n-1}$ is an element in $R$ with $a_0\neq 0$, prove that $r$ is a unit and find its inverse.</p>
</blockquote>
<p>To prove it is a unit, I know that $\gcd(r,x^n)=1$, so there exist $a,b \in K[x]$ such that $ra+x^nb=1$, $ra=1$ in $R$.</p>
<p>But how do I construct the inverse? I know that $a_0$ has an inverse in $K$. Can someone give me a hint?</p>
<p>I also need to show that every zero divisor in $R$ is nilpotent.
For nonzero $a,b \in R$, if $ab=0$, then $x^n$ divides $ab$; does this imply that $a$ and $b$ both have no constant terms?</p>
| ancient mathematician | 414,424 | <p>Of course @egreg has given the best solution (and I gave it +1) by putting it all in a general context, but I can never resist the "idiot" answer to this question. It is clear from the Binomial Theorem that the inverse of $a_0+a_1x +\dots+a_{n-1}x^{n-1}+(x^n)$ is
$$
\sum_{k=0}^\infty (-1)^k a_0^{-k-1}(a_1x +\dots+a_{n-1}x^{n-1})^k +(x^n)
$$
which happily (as we are trying to do algebra) is really a finite sum, every term after the $(n-1)$-th vanishing into $(x^n)$, as it has a factor $x^n$. 
<p>So not only is it a unit, we've calculated the inverse. </p>
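The same inverse can be found coefficient by coefficient, by matching coefficients in $\bigl(\sum_i a_i x^i\bigr)\bigl(\sum_j b_j x^j\bigr)=1 \pmod{x^n}$. Here is an illustrative sketch over $\mathbb{Q}$ (the function name and the use of Python's `Fraction` are my own choices, not from the answer):

```python
from fractions import Fraction

def inverse_mod_xn(coeffs, n):
    """Invert a polynomial with nonzero constant term in Q[x]/(x^n)."""
    a = [Fraction(c) for c in coeffs] + [Fraction(0)] * n   # pad with zeros
    inv = [Fraction(0)] * n
    inv[0] = 1 / a[0]
    # coefficient of x^k in the product must be 0 for k = 1, ..., n-1
    for k in range(1, n):
        s = sum(a[i] * inv[k - i] for i in range(1, k + 1))
        inv[k] = -s / a[0]
    return inv

# example: (1 + x)^{-1} = 1 - x + x^2 - x^3  in Q[x]/(x^4)
print(inverse_mod_xn([1, 1], 4))   # coefficients 1, -1, 1, -1
```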
|
10,352 | <p>I teach high school geometry and see many of my students fall into the trap of "it looks like it, so it's true" -- a <a href="https://en.wikipedia.org/wiki/Van_Hiele_model" rel="noreferrer">Van Hiele Level 0 to 1</a> thought process. For instance, when talking about parallel line angle relationships, once they realize that all pairs of angles (when you have 2 parallel lines and a transversal) are either congruent or supplementary, their justification for why the angles are congruent is "They're parallel and both look acute." </p>
<p>I continually remind them that explaining their ideas based off of earlier ideas is BETTER than using visual information, and provide not-to-scale diagrams to emphasize that fact, but I'd like to create a poster to point to to say something like "It's good that you noticed they look the same, but we're looking for at least a level 2 explanation here." I'd like to make explicit to my students my goal of moving them "up the scale" to a higher level of thinking. </p>
<p>For ease, the Van Hiele levels as presented to educators can be phrased as follows: </p>
<p><a href="https://i.stack.imgur.com/wqSB7.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/wqSB7.jpg" alt="Van Hiele Levels"></a>
(<a href="http://www.nctm.org/Publications/teaching-children-mathematics/2014/Vol21/Issue5/Linking-the-Van-Hiele-Theory-to-Instruction/" rel="noreferrer">Source</a>)</p>
<p>How can these be rephrased concisely and accurately in student-friendly language? For instance, I'm not sure how I feel about the rephrasing seen even in the table I provided. I have some ideas of my own, but would prefer to collaborate. </p>
<p>Is communicating this goal with students a good idea, in your opinion?</p>
<p>At what level do higher-level educators expect their students to be by the time they reach their college level courses, so I could possibly add this to the poster?</p>
| Jasper | 1,147 | <p>This model is really describing three different concepts.</p>
<p>1-2: What does it look like? This is "basic" shape recognition.</p>
<p>2-3: Practical geometry. How many sides does a shape have? What kinds of angles does the shape have? Can shapes be put next to each other without any gaps?</p>
<p>4-5: Proofs, for the sake of proofs. The examples might have to do with geometric shapes. But the exercise of doing the proofs could be done just as well with algebra, or with matrices, or with sets.</p>
<p>The model is missing a step between 3 and 4: Constructive geometry, like the methods used by <a href="http://rads.stackoverflow.com/amzn/click/0486281388" rel="nofollow">origami masters</a>, old-fashioned <a href="http://rads.stackoverflow.com/amzn/click/0022321500" rel="nofollow">hand draftsmen</a>, and ancient Greek compass-and-straightedge geometers. These are practical exercises, which someone at level 2 can see the use of, and someone at level 4 can check the accuracy of.</p>
<p><strong>A translation</strong></p>
<ol>
<li>Visual. Name that shape. What does it kinda look like?</li>
<li>Descriptive. Tell me about that shape. How many sides does it have? Are the sides straight? What kinds of angles does it have?</li>
<li>Categorizing. What do these shapes have in common? Can they be joined together? What gaps would they have?</li>
</ol>
<p>3.5 Construction. Can you modify a shape to have particular properties? Can you make drawings of a 3-D object carefully enough that someone can make another object based on your drawings? What are the tolerances?</p>
<ol start="4">
<li>Proofs. Can you prove things about the shape? If you say something is true about kind(s) of shapes, can you be sure that nobody can find an exception?</li>
<li>Creative proofs. What rules can you change about the proofs? How does this change what you can prove?</li>
</ol>
<p><strong>Real-world applications</strong></p>
<ul>
<li>Most human beings need to be at least at level 1. There is a reason that babies can recognize shapes before they can talk.</li>
<li>Most hands-on workers need to be at least at level 3.</li>
<li>Designers of physical things (including tailors, seamstresses, drafters, engineers, and architects) need to be at least at level 3.5. (The missing "constructive geometry" level.)</li>
<li>Students studying mathematics, science, computer programming, or engineering in a college-preparatory or college program need to be at least at level 4.</li>
</ul>
|
629,756 | <p>Consider the eigenvalue problem
\begin{equation}
\left\{ \begin{array}{l}
\Phi \in C^{2}(\mathbb{R}) \ \text{and bounded }\\
-\Phi^{''}(x)=\lambda\Phi(x), \ x\in \mathbb{R}.
\end{array} \right.
\end{equation}</p>
<p>and assume that the solution is given by $\Phi(x)=Ae^{i\sqrt{\lambda}x}+Be^{-i\sqrt{\lambda}x}$.</p>
<p>How can one prove that $\lambda$ is real and not negative?</p>
| Community | -1 | <p>If it is not true that $\lambda \geq 0$, then $\operatorname{Im} \sqrt{\lambda} \neq 0$, i.e. $\sqrt{\lambda} = a + ib$ with $b \neq 0$. Thus
$$C\exp(i \sqrt{\lambda} x) = C \exp(iax)\exp(-bx)$$
is unbounded as $x \to \infty$ or as $x \to -\infty$ depending on the sign of $b$. Thus if a solution of the form you propose is bounded, then either $\lambda \geq 0$ or $A = B = 0$.</p>
<p>(Note that it is not possible for one term to sufficiently cancel out the other unless $b = -b$ i.e. $\lambda \geq 0$).</p>
|
3,490,416 | <p>Problem:</p>
<blockquote>
<p>From any fixed point in the axis of a general conic, a line is drawn perpendicular to the tangent at <span class="math-container">$P$</span> (on the conic) meeting <span class="math-container">$SP$</span> in <span class="math-container">$R$</span> where <span class="math-container">$S$</span> is the focus. Show that the locus of <span class="math-container">$R$</span> is a circle.</p>
</blockquote>
<p>I don't really know where to start from, I drew the diagram and now I am stuck. My attempt was to consider four points on conic and show that the four corresponding points <span class="math-container">$R$</span> are cyclic but I fail to do so. </p>
| Intelligenti pauca | 255,730 | <p>I'll show the proof for an ellipse, the other cases are analogous.</p>
<p>In an ellipse, the bisector <span class="math-container">$PH$</span> of <span class="math-container">$\angle SPS'$</span> (where <span class="math-container">$S'$</span> is the second focus) is perpendicular to the tangent at <span class="math-container">$P$</span>. It follows that triangles <span class="math-container">$SPH$</span> and <span class="math-container">$SDR$</span> are similar (<span class="math-container">$D$</span> is the fixed point on axis <span class="math-container">$SS'$</span>) and <span class="math-container">$SR:SP=SD:SH$</span>, that is:
<span class="math-container">$$
SR={SP\over SH}SD.
$$</span>
On the other hand, from the angle bisector theorem we have
<span class="math-container">$SP:S'P=SH:S'H$</span>, that is:
<span class="math-container">$$
{SP\over SH}={SP+S'P\over SH+S'H}={a\over c},
$$</span>
where <span class="math-container">$2a$</span> is the ellipse major axis and <span class="math-container">$2c=SS'$</span>.
Inserting this result into the previous equation we obtain that <span class="math-container">$SR$</span> is constant, hence <span class="math-container">$R$</span> belongs to a circle of centre <span class="math-container">$S$</span> and radius <span class="math-container">${a\over c}SD$</span>.</p>
<p><a href="https://i.stack.imgur.com/nMIGD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nMIGD.png" alt="enter image description here"></a></p>
|
857,207 | <p>I need help to confirm my answer for the following question</p>
<p>"There is an alphabet of size 40 and this alphabet is used for forming messages in a communication system. If 10 of these alphabets can be used only as the first or last symbols in the message and the rest 30 can be used anywhere, how many messages can be formed if repetition is allowed, The length of the messages is 25 symbols?"</p>
<p>I came up with the following solution for this question = 30^(23) * 1600</p>
| johannesvalks | 155,865 | <p>Only 10 for the first and the last and 30 for the inner part, thus</p>
<p>$$
10 + 10 \times 10 + 10 \times 30 \times 10 + 10 \times 30^2 \times 10 + 10 \times 30^3 \times 10 + \cdots + 10 \times 30^8 \times 10
$$</p>
<p>Thus</p>
<p>$$
10 + 100 \sum_{k=0}^{8} 30^k = 10 + 100 \frac{30^9 - 30}{30-1}.
$$</p>
<hr>
<p>For $25$ symbols we get</p>
<p>$$
10 \times 30^{23} \times 10
$$</p>
|
190,914 | <p>In <a href="https://math.stackexchange.com/questions/38350/n-lines-cannot-divide-a-plane-region-into-x-regions-finding-x-for-n">this</a> post it is mentioned that $n$ straight lines can divide the plane into a maximum number of $(n^{2}+n+2)/2$ different regions. </p>
<p>What happens if we use circles instead of lines? That is, what is the maximum number of regions into which n circles can divide the plane?</p>
<p>After some exploration it seems to me that in order to get maximum division the circles must intersect pairwise, with no two of them tangent, none of them being inside another and no three of them concurrent (That is no three intersecting at a point).</p>
<p>The answer seems to me to be affirmative, as the number I obtain is $n^{2}-n+2$ different regions. Is that correct?</p>
| hmakholm left over Monica | 14,366 | <p>A qualitative approach:</p>
<p>Consider a group of $n$ <em>lines</em> that divide the plane into maximally many regions. The intersections between the lines all fall within a bounded area of the plane. Now replace each of the lines with a huge circle, so large that within the bounded area-of-intersections it deviates from a straight line by much less than the distance between two points of intersection.</p>
<p>Then you still have all the regions you had with lines. However, if there are at least three circles, you <em>also</em> get some new regions that are far away from the central bounded area.</p>
|
3,990,110 | <p>I was looking the other day for intuition about the Hölder inequality, and more precisely the geometric intuition behind the relation <span class="math-container">$\frac{1}{p} + \frac{1}{q} = 1$</span>.</p>
<p>I found <a href="http://www.tricki.org/article/Geometric_view_of_H%C3%B6lders_inequality" rel="nofollow noreferrer">this interpretation</a> of the Hölder inequality.</p>
<p>It says that we are trying to find an estimate of :</p>
<p><span class="math-container">$$ \int f^a g^b$$</span> with the knowledge of <span class="math-container">$\int f^p$</span> and <span class="math-container">$\int g^q$</span>.</p>
<p>For <span class="math-container">$(a, b) = (1,1)$</span>, we need the relationship <span class="math-container">$\frac{1}{p} + \frac{1}{q} = 1$</span> so that the points <span class="math-container">$(p, 0), (1,1), (0, q)$</span> lie on the same line. We can now use interpolation to get the Hölder inequality, which gives an estimate at the point <span class="math-container">$(1, 1)$</span>.</p>
<p>But I feel like an argument is missing here. Why does the estimate use the geometric mean of the functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>? Why don't we have something like (using the arithmetic mean):
<span class="math-container">$$
\int \mid fg \mid \leq \frac{1}{p} \int f^p + \frac{1}{q}\int g^q \;\;?
$$</span>
Moreover, the article says we can generalize this approach to approximate <span class="math-container">$\int f^ag^b$</span>, but in this case we need three points such that <span class="math-container">$(a,b)$</span> is in the convex hull of these three points. Why are two points needed for <span class="math-container">$(1, 1)$</span>, while for general <span class="math-container">$(a, b)$</span> we need three?</p>
| epi163sqrt | 132,007 | <p>At first we look at the geometrical situation of the convex hull of three points in general position in <span class="math-container">$\mathbb{R^2}$</span>.</p>
<p><strong>Convex hull of three points with <span class="math-container">$(a,b)$</span> inside:</strong></p>
<p>In OPs referenced article we have a triangle with points <span class="math-container">$(u_j,v_j),1\leq j\leq 3$</span> and a point <span class="math-container">$(a,b)$</span> <em>inside</em> the triangle as shown in the graphic below:</p>
<p><span class="math-container">$\qquad\qquad\quad$</span><a href="https://i.stack.imgur.com/e3727.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e3727.png" alt="enter image description here" /></a></p>
<p>We can write <span class="math-container">$(a,b)$</span> as convex linear combination of <span class="math-container">$(u_j,v_j)$</span> according the graphic as follows:
<span class="math-container">\begin{align*}
\binom{a^{\prime}}{b^{\prime}}&=\binom{u_2}{v_2}+\lambda\left[\binom{u_3}{v_3}-\binom{u_2}{v_2}\right]\\
&=(1-\lambda)\binom{u_2}{v_2}+\lambda\binom{u_3}{v_3}\\
\\
\color{blue}{\binom{a}{b}}&=\binom{u_1}{v_1}+\mu\left[\binom{a^{\prime}}{b^{\prime}}-\binom{u_1}{v_1}\right]\\
&=(1-\mu)\binom{u_1}{v_1}+\mu\binom{a^{\prime}}{b^{\prime}}\\
&=(1-\mu)\binom{u_1}{v_1}+\mu\left[(1-\lambda)\binom{u_2}{v_2}+\lambda\binom{u_3}{v_3}\right]\\
&\,\,\color{blue}{=(1-\mu)\binom{u_1}{v_1}+\mu(1-\lambda)\binom{u_2}{v_2}+\lambda\mu\binom{u_3}{v_3}}\tag{1}
\end{align*}</span></p>
<p>Note we have in (1) a <em>convex combination</em> of the points <span class="math-container">$(u_j,v_j)$</span> since
<span class="math-container">\begin{align*}
(1-\mu)+\mu(1-\lambda)+\lambda\mu=1
\end{align*}</span></p>
<p><strong>Convex hull of three points with <span class="math-container">$(a,b)=(1,1)$</span> at the boundary:</strong></p>
<p>Now we consider the triangle with the three points <span class="math-container">$(0,0), (p,0), (0,q)$</span>, <span class="math-container">$p>1, q>1$</span> and the point <span class="math-container">$(a,b)=(1,1)$</span> at the line segment from <span class="math-container">$(p,0)$</span> to <span class="math-container">$(0,q)$</span>.</p>
<p><span class="math-container">$\qquad\qquad\qquad$</span><a href="https://i.stack.imgur.com/txUhQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/txUhQ.png" alt="enter image description here" /></a></p>
<p>We calculate similarly as above:
<span class="math-container">\begin{align*}
\color{blue}{\binom{1}{1}}&=\binom{p}{0}+\nu\left[\binom{0}{q}-\binom{p}{0}\right]\\
&\,\,\color{blue}{=(1-\nu)\binom{p}{0}+\nu\binom{0}{q}}
\end{align*}</span></p>
<p>We observe in this special case where <span class="math-container">$(a,b)=(1,1)$</span> is at the boundary of the convex hull, we need only <em>two points</em> <span class="math-container">$(p,0)$</span> and <span class="math-container">$(0,q)$</span> to obtain a convex combination of the triangle for <span class="math-container">$(1,1)$</span>.</p>
<p><strong>Hölder's inequality:</strong></p>
<p>A few words to Hölder's inequality which might be helpful. We have for positive real numbers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and <span class="math-container">$p>1, q>1$</span> real numbers with <span class="math-container">$\frac{1}{p}+\frac{1}{q}=1$</span> the arithmetic-geometric means inequality:
<span class="math-container">\begin{align*}
x^{\frac{1}{p}}y^{\frac{1}{q}}\leq \frac{1}{p}x+\frac{1}{q}y\tag{2}
\end{align*}</span>
We define for a <span class="math-container">$p$</span>-integrable measurable function <span class="math-container">$f$</span>:
<span class="math-container">\begin{align*}
N_p(f):=\left(\int|f|^p\,d\mu\right)^{\frac{1}{p}}
\end{align*}</span>
and take
<span class="math-container">\begin{align*}
x:=\left(\frac{|f(\omega)|}{N_p(f)}\right)^p,\quad y:=\left(\frac{|g(\omega)|}{N_q(g)}\right)^q\tag{3}
\end{align*}</span></p>
<blockquote>
<p>We obtain from (2) and (3):
<span class="math-container">\begin{align*}
\frac{|f\,g|}{N_p(f)N_q(g)}&\leq \frac{1}{p\left(N_p(f)\right)^p}|f|^p+\frac{1}{q\left(N_q(g)\right)^q}|g|^q\tag{4}\\
\frac{1}{N_p(f)N_q(g)}\int|f\,g|\,d\mu
&\leq \frac{1}{p\left(N_p(f)\right)^p}\int|f|^p\,d\mu+\frac{1}{q\left(N_q(g)\right)^q}\int|g|^q\,d\mu\\
&=\frac{1}{p}+\frac{1}{q}\\
&=1\\
\color{blue}{N_1(fg)}&\color{blue}{\leq N_p(f)N_q(g) }\tag{5}
\end{align*}</span>
and (5) gives Hölder's inequality.</p>
</blockquote>
<p>Note the transformation from the arithmetic mean in (4) to the multiplication of <span class="math-container">$N_p(f)$</span> with <span class="math-container">$N_q(g)$</span> in (5).</p>
|
3,990,110 | <p>I was looking the other day for intuition about the Hölder inequality, and more precisely the geometric intuition behind the relation <span class="math-container">$\frac{1}{p} + \frac{1}{q} = 1$</span>.</p>
<p>I found <a href="http://www.tricki.org/article/Geometric_view_of_H%C3%B6lders_inequality" rel="nofollow noreferrer">this interpretation</a> of the Hölder inequality.</p>
<p>It says that we are trying to find an estimate of :</p>
<p><span class="math-container">$$ \int f^a g^b$$</span> with the knowledge of <span class="math-container">$\int f^p$</span> and <span class="math-container">$\int g^q$</span>.</p>
<p>For <span class="math-container">$(a, b) = (1,1)$</span>, we need the relationship <span class="math-container">$\frac{1}{p} + \frac{1}{q} = 1$</span> so that the points <span class="math-container">$(p, 0), (1,1), (0, q)$</span> lie on the same line. We can now use interpolation to get the Hölder inequality, which gives an estimate at the point <span class="math-container">$(1, 1)$</span>.</p>
<p>But I feel like an argument is missing here. Why does the estimate use the geometric mean of the functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>? Why don't we have something like (using the arithmetic mean):
<span class="math-container">$$
\int \mid fg \mid \leq \frac{1}{p} \int f^p + \frac{1}{q}\int g^q \;\;?
$$</span>
Moreover, the article says we can generalize this approach to approximate <span class="math-container">$\int f^ag^b$</span>, but in this case we need three points such that <span class="math-container">$(a,b)$</span> is in the convex hull of these three points. Why are two points needed for <span class="math-container">$(1, 1)$</span>, while for general <span class="math-container">$(a, b)$</span> we need three?</p>
| mlg4080 | 102,585 | <p>Focusing more on intuition, and landing on the geometry of how integrals <em>scale</em>, let us check out a simple case.</p>
<p>Consider <span class="math-container">$\int_{0}^{r} 1 dx = r$</span>. Applying Hölder's inequality we would get on the opposite side <span class="math-container">$\left( \int_{0}^{r} 1 dx \right)^{\frac{1}{p}} \left( \int_{0}^{r} 1 dx \right)^{\frac{1}{q}}$</span>. So, what can we say to relate <span class="math-container">$r$</span> to <span class="math-container">$r^{\frac{1}{p} + \frac{1}{q}}$</span>?</p>
<p><strong>Case 1:</strong> (<span class="math-container">$\frac{1}{p} + \frac{1}{q} = 1$</span>)</p>
<p>Evidently if <span class="math-container">$\frac{1}{p} +\frac{1}{q} =1$</span>, then we have the equality that <span class="math-container">$r = r^{\frac{1}{p} + \frac{1}{q}}$</span></p>
<p><strong>Case 2:</strong> (<span class="math-container">$\frac{1}{p} + \frac{1}{q} < 1$</span>)</p>
<p>If <span class="math-container">$\frac{1}{p} + \frac{1}{q} < 1$</span> then for <span class="math-container">$r < 1$</span> we have <span class="math-container">$r < r^{\frac{1}{p} +\frac{1}{q}}$</span> and for <span class="math-container">$r > 1$</span> we have <span class="math-container">$r > r^{\frac{1}{p} + \frac{1}{q}}$</span>.</p>
<p><strong>Case 3:</strong> (<span class="math-container">$\frac{1}{p} + \frac{1}{q} > 1$</span>)</p>
<p>In this case, similar to Case 2, you can quickly rule out an inequality that holds for all values of <span class="math-container">$r$</span>.</p>
<p><strong>Conclusion:</strong>
If an inequality like Hölder's is to hold for all <span class="math-container">$r$</span>, the only possible option is <span class="math-container">$\frac{1}{p} + \frac{1}{q}=1$</span>.</p>
|
3,672,378 | <p>Consider the following Primal LP </p>
<p><span class="math-container">$$
\min_{x_1,x_2,x_3} x_1+x_2+x_3 $$</span>
subject to constraints,
<span class="math-container">$$
x_1+x_2 \ge a \\
x_1+x_3 \ge b \\
x_2+x_3 \ge c\\
x_1 \ge d \\
x_2 \ge e \\
x_3 \ge f
$$</span></p>
<p>Here <span class="math-container">$$a,b,c,d,e,f \quad \text{are constants} $$</span>
There is no restriction on the variables <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> (free variables). How do I establish that this primal problem is feasible and bounded? Could someone please help? </p>
<p>Also, in general, how does one establish that an optimisation problem is feasible and bounded?</p>
| Jean Marie | 305,862 | <p>Add the first 3 inequalities, then add the last 3: you get</p>
<p><span class="math-container">$$x_1+x_2+x_3 \ge M \ \ \ \ \text{with} \ \ \ \ M:=\max((a+b+c)/2,d+e+f)$$</span></p>
<p>Therefore the objective is bounded below by <span class="math-container">$M$</span>. For feasibility, take all <span class="math-container">$x_k$</span> equal to <span class="math-container">$K:=\max(a,b,c,d,e,f,0)$</span>: the single-variable constraints hold because <span class="math-container">$K\ge d,e,f$</span>, and the pairwise constraints hold because <span class="math-container">$2K\ge a,b,c$</span> (clear when the constant is <span class="math-container">$\le 0$</span>, since <span class="math-container">$K\ge 0$</span>, and otherwise <span class="math-container">$2K\ge 2\cdot\text{constant}\ge\text{constant}$</span>). Hence the primal problem is feasible and bounded.</p>
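As a sanity check (illustrative; the constant tuples below are arbitrary samples), one can verify numerically that the point with all three coordinates equal to $\max(a,b,c,d,e,f,0)$ satisfies every constraint:

```python
samples = [(4, 2, 6, 1, -3, 5), (-1, -1, -1, -1, -1, -1), (10, -5, 3, 0, 2, -7)]
for a, b, c, d, e, f in samples:
    K = max(a, b, c, d, e, f, 0)
    x1 = x2 = x3 = K
    # pairwise constraints
    assert x1 + x2 >= a and x1 + x3 >= b and x2 + x3 >= c
    # single-variable constraints
    assert x1 >= d and x2 >= e and x3 >= f
print("all sample instances feasible")
```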
|
1,083,301 | <p>I'm reading Knuth/Graham/Patashnik's Concrete Mathematics:</p>
<blockquote>
<p><img src="https://i.imgur.com/YpjYJwd.png" alt="enter image description here"></p>
</blockquote>
<p>I don't understand how he goes from $(r-k){r \choose r-k}$ to $r{r-1 \choose r-k-1}$ using $(5.6)$. The mentioned property has a $k$ on one side and an $r$ on the other. I'm confused about what I should do. I'm using the RHS of $(5.6)$, where $r$ is $r$; then what would be the $k$ on the other side, and why?</p>
| Red Banana | 25,805 | <ul>
<li>Using the symmetry:</li>
</ul>
<p>$\displaystyle(r-k){r\choose k}=(r-k){r\choose r-k}$</p>
<ul>
<li>Now using $(5.6)$:</li>
</ul>
<p>$\begin{eqnarray*}
{(r-k){r\choose r-k}}&=&{(r-k) \frac{r}{r-k}{n-1 \choose n-k-1}} \\
{}&&{}\\
{}&&{}\\
{}&=&{\frac{(r-k)r}{(r-k)} {n-1\choose n-k-1}} \\
{}&&{}\\
{}&&{}\\
{}&=&{r{n-1\choose n-k-1}}
\end{eqnarray*}$</p>
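Not part of the answer, but the resulting identity $(r-k)\binom{r}{k}=r\binom{r-1}{r-k-1}$ is easy to test numerically for small integer $r$ (in the book the identity also holds for general real $r$; this check only covers the integer case):

```python
from math import comb

for r in range(1, 12):
    for k in range(r):   # keep 0 <= k <= r-1 so all binomial indices are valid
        assert (r - k) * comb(r, k) == r * comb(r - 1, r - k - 1)
print("identity verified for small integer r")
```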
|
3,408,320 | <p>Let <span class="math-container">$T : R^{3} \to R^{3}$</span> be the linear transformation defined by <span class="math-container">$T(x,y,z) = (x+3y+2z,3x+4y+z,2x+y-z)$</span>. Find the rank of <span class="math-container">$T^{2}$</span> and <span class="math-container">$T^{3}$</span>.</p>
<p>I formed the matrix of linear transformation for <span class="math-container">$T$</span> and squared it, and found the rank which is <span class="math-container">$2$</span>. For <span class="math-container">$T^{3}$</span> I found the matrix by multiplying matrix <span class="math-container">$T$</span> by <span class="math-container">$T^{2}$</span>.
I need to calculate the rank of this matrix.</p>
<p>Is this the only way to calculate the rank?</p>
<p>This question is asked as a multiple-choice problem, which should not take much time.
But finding the rank in this way is really time-consuming. Is there any other way?</p>
| Graham Kemp | 135,106 | <p>Because the order of the groups is not important you must count <em>fewer</em> of the arrangements.</p>
|
1,783,841 | <p>If a function $f(x)$ has a derivative $f'(x)$ then where $f'(x_0) = 0$ there is an extreme point at $x=x_0$.
And where $f''(x_0)=0$ there is an inflection point at $x=x_0$.</p>
<p>I am asking: is there any significance to, or application of, the case where $f(x_0)=f'(x_0)$ at some point $x=x_0$?</p>
<p>I am referring to a case like $f(x)=x^2$ and $f'(x)=2x$, they intersect at $(0,0)$ and $(2,4)$ what is special or can be derived from these points?</p>
| J.G | 293,121 | <p>If I'm interpreting this correctly, I believe you are asking if there is anything special about this happening locally, i.e at a certain point $x_0$, does the fact that $f(x_0)=f'(x_0)$ mean anything? I would say no; note that by just translating the function appropriately, you can make this happen at any point. To see this, for any differentiable $f$, and any point $x_0$ define $g_{x_0}(x)=f(x)+(f'(x_0)-f(x_0))$, and this function will have that property at the point $x_0$. </p>
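<p>The answer's construction is easy to check numerically (an added sketch, not part of the original answer): with $f=\sin$ and the arbitrarily chosen point $x_0=1$, the shifted function $g(x)=f(x)+(f'(x_0)-f(x_0))$ satisfies $g(x_0)=g'(x_0)$.</p>

```python
from math import sin, cos

# The answer's construction: g(x) = f(x) + (f'(x0) - f(x0)).
# Here f = sin (so f' = cos) and x0 = 1, both chosen arbitrarily.
f, fprime, x0 = sin, cos, 1.0
shift = fprime(x0) - f(x0)
g = lambda x: f(x) + shift

# g(x0) should equal g'(x0); estimate g' by a central difference.
h = 1e-6
g_prime_x0 = (g(x0 + h) - g(x0 - h)) / (2 * h)
print(abs(g(x0) - g_prime_x0) < 1e-6)  # → True
```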
|
1,254,981 | <p>The undergrad course is called intro the applied math, and it covers: "The unit introduces some of the principal mathematical techniques such as difference equations, differential equations and partial differential equations."</p>
<p>We are not given a textbook and I really want to study from one, the closer it matches to the course the better. I think it's definitely based on a textbook out there. </p>
<p>We are about halfway through the course. Here are some course pdfs <a href="https://www.dropbox.com/sh/u3ek0rjs7jczbhj/AAC86lV0wep07b-cua3dz00Oa?dl=0" rel="nofollow">https://www.dropbox.com/sh/u3ek0rjs7jczbhj/AAC86lV0wep07b-cua3dz00Oa?dl=0</a>. BonusQ.pdf gives a good overview of the content</p>
| Skipe | 234,395 | <p>You could try these books:</p>
<ul>
<li>An Introduction to Ordinary Differential Equations by James C Robinson</li>
<li>Differential Equations by Richard Bronson and Gabriel Costa</li>
</ul>
<p>These are the books I used to use and they are pretty good, and your course, from your description, looks a lot like my first year differential equations course (I tried looking at your dropbox folder however for some reason it couldn't load).</p>
<p>My lecturer's lecture notes are also very helpful, perhaps yours are too?</p>
<p>But yeah just a suggestion.</p>
|
1,254,981 | <p>The undergrad course is called intro the applied math, and it covers: "The unit introduces some of the principal mathematical techniques such as difference equations, differential equations and partial differential equations."</p>
<p>We are not given a textbook and I really want to study from one, the closer it matches to the course the better. I think it's definitely based on a textbook out there. </p>
<p>We are about halfway through the course. Here are some course pdfs <a href="https://www.dropbox.com/sh/u3ek0rjs7jczbhj/AAC86lV0wep07b-cua3dz00Oa?dl=0" rel="nofollow">https://www.dropbox.com/sh/u3ek0rjs7jczbhj/AAC86lV0wep07b-cua3dz00Oa?dl=0</a>. BonusQ.pdf gives a good overview of the content</p>
| Hans Lundmark | 1,242 | <p>It's hard to tell exactly how well it fits your course, but a really great book in general is
<a href="http://rads.stackoverflow.com/amzn/click/0738204536" rel="nofollow">Nonlinear Dynamics And Chaos</a> by Steven Strogatz. (I wholeheartedly agree with all the rave reviews on Amazon. Besides, <a href="http://www.wired.com/2012/07/spider-man-has-good-taste-in-textbooks/" rel="nofollow">Spider-Man uses it</a>!)</p>
<p>That book is definitely relevant for most of the stuff in BonusQ.pdf. However, it doesn't contain any material about PDEs.</p>
|
375,348 | <p>Prove that $\mathbb Z_{m}\times\mathbb Z_{n} \cong \mathbb Z_{mn}$ implies $\gcd(m,n)=1$.</p>
<p>This is the converse of the Chinese remainder theorem in abstract algebra. Any help would be appreciated.</p>
<p>Thanks!</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> Every element of $\mathbb{Z}_m\times \mathbb{Z}_n$ has order $\le$ the lcm of $m$ and $n$.</p>
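<p>A concrete check of the hint (an added illustration, not part of the original hint): in $\mathbb{Z}_4\times\mathbb{Z}_6$ every element has order at most $\operatorname{lcm}(4,6)=12<24$, so the group cannot be cyclic of order $24$.</p>

```python
from math import gcd

def element_order(a, b, m, n):
    """Order of (a, b) in Z_m x Z_n: smallest t >= 1 with t*a = 0 mod m and t*b = 0 mod n."""
    t = 1
    while (t * a) % m != 0 or (t * b) % n != 0:
        t += 1
    return t

m, n = 4, 6
lcm = m * n // gcd(m, n)
max_order = max(element_order(a, b, m, n) for a in range(m) for b in range(n))
print(max_order, lcm, m * n)  # → 12 12 24
```

Since every element order is at most $12$ while the group has $24$ elements, no element generates the group.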
|
3,808,774 | <p>I have this limit I've been trying to solve, and so far I can only solve it using the property of geometric series.</p>
<p>So we get
<span class="math-container">$$\lim_{n \to \infty}\frac{1}{3^2}+\frac{1}{3^3}+...+\frac{1}{3^n}=\sum_{n=2}^{\infty}\left(\frac{1}{3}\right)^n=\frac{1}{1-\frac{1}{3}}=\frac{3}{2}$$</span></p>
<p>Is everything alright with this solution and is there any other way using the basic limit solving techniques to solve this limit?</p>
| fleablood | 280,126 | <blockquote>
<p><span class="math-container">$\lim_{n \to \infty}\frac{1}{3^2}+\frac{1}{3^3}+...+\frac{1}{3^n}=\sum_{n=2}^{\infty}\left(\frac{1}{3}\right)^n=\frac{1}{1-\frac{1}{3}}=\frac{3}{2}$</span></p>
</blockquote>
<p>This is not correct as <span class="math-container">$\sum_{n=\color{red}0}^{\infty} (\frac 13)^n = \frac 1{1-\frac 13}$</span> and <span class="math-container">$\sum_{n=\color{red}0}^{\infty}\left(\frac{1}{3}\right)^n\ne\sum_{n=\color{red}2}^{\infty} (\frac 13)^n$</span>.</p>
<p>But there are two ways to correct this:</p>
<ol>
<li><p><span class="math-container">$\sum_{n=\color{red}2}^{\infty} (\frac 13)^n = \sum_{n=\color{red}0}^{\infty} (\frac 13)^n - \sum_{n=\color{red}0}^{\color{blue}1} (\frac 13)^n = [\frac 32] - [(\frac 13)^0 + (\frac 13)^1]= \frac 32 - \frac 43=\frac 16$</span></p>
</li>
<li><p><span class="math-container">$\sum_{n=\color{red}2}^{\infty} (\frac 13)^n=\sum_{n=\color{red}2}^{\infty} [\color{blue}{\frac 1 {3^2}}(\frac 13)^{n\color{blue}{-2}}]=\frac 19\sum_{n=\color{red}2}^{\infty} (\frac 13)^{n\color{blue}{-2}}=\frac 19\sum_{\color{purple}{m=0}; (m=n-2)}^{\infty} (\frac 13)^m=\frac 19\cdot \frac32 = \frac 16$</span></p>
</li>
</ol>
<p>I don't think there is any way to solve this in any way that doesn't reinvent the wheel and use geometric series.</p>
<p>But <span class="math-container">$\lim_{n\to \infty}\sum_{k=2}^n (\frac 13)^k =\frac 16$</span> means that for any <span class="math-container">$\epsilon > 0$</span> there is an <span class="math-container">$N_\epsilon$</span> so that for all <span class="math-container">$n>N_\epsilon$</span> then <span class="math-container">$|\frac 16 - \sum_{k=2}^n (\frac 13)^k| < \epsilon$</span>.</p>
<p>Now we can prove by induction if <span class="math-container">$\sum_{k=2}^n (\frac 13)^k = x$</span> then <span class="math-container">$9x = \sum_{k=2}^n 9(\frac 13)^k=\sum_{k=2}^n (\frac 13)^{k-2}=\sum_{m=0}^{n-2}(\frac 13)^m$</span>. And <span class="math-container">$9x(1-\frac 13)=(1-\frac 13)\sum_{m=0}^{n-2}(\frac 13)^m= 1-(\frac 13)^{n-1}$</span>. So <span class="math-container">$x = \frac {1-(\frac 13)^{n-1}}{9(1-\frac 13)}=\frac {1-(\frac 13)^{n-1}}6$</span>.</p>
<p>(In other words we just reinvented the wheel to derive the basic formula for finite sums of geometric series).</p>
<p>And we can use the standard <span class="math-container">$\epsilon$</span> proof to prove <span class="math-container">$\lim_{n\to \infty} \frac {1-(\frac 13)^{n-1}}6 = \frac 16$</span>.</p>
<p><span class="math-container">$|\frac 16 - \frac {1-(\frac 13)^{n-1}}6|=\frac 1{6\cdot 3^{n-1}}$</span> so if <span class="math-container">$n>\log_3{\frac 1{6\epsilon}}+1$</span> then <span class="math-container">$|\frac 16 -\sum_{k=2}^n(\frac 13)^k| < \epsilon$</span>.</p>
<p>(In other words we just reinvented the wheel to derive the formula for converging infinite sums of geometric series.)</p>
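<p>A quick numerical check of the value $\frac16$ and of the finite-sum formula (added, not part of the original answer):</p>

```python
# Partial sums of sum_{k=2}^n (1/3)^k versus the closed form (1 - (1/3)^(n-1)) / 6.
def partial_sum(n):
    return sum((1 / 3) ** k for k in range(2, n + 1))

closed = lambda n: (1 - (1 / 3) ** (n - 1)) / 6

for n in (2, 5, 10, 30):
    assert abs(partial_sum(n) - closed(n)) < 1e-12

print(abs(partial_sum(30) - 1 / 6) < 1e-12)  # → True
```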
|
197,470 | <p>I'm new in Mathematica software, and I need help with a simple exercise. </p>
<p>Given the list</p>
<pre><code>list = {x, 3.2, y, 1, 2.679, 3, z, 4, -7.9};
</code></pre>
<p>I need to make some rules to generate a new list with only three elements.</p>
<ul>
<li>first element is the product of its symbols </li>
<li>second element the product of its integers</li>
<li>third element the product of its real numbers </li>
</ul>
<p>Thus</p>
<pre><code>newlist = {x y z, 12, -67.72512}
</code></pre>
<p>I hope you can help me.</p>
| kglr | 125 | <p>You can use <a href="https://reference.wolfram.com/language/ref/GroupBy.html" rel="noreferrer"><code>GroupBy</code></a> in several ways:</p>
<pre><code>GroupBy[list, Head, Apply[Times]] /@ {Symbol, Integer, Real}
</code></pre>
<blockquote>
<p>{x y z, 12, -67.7251} </p>
</blockquote>
<p>or using a combination of <code>GroupBy</code> and<a href="https://reference.wolfram.com/language/ref/Lookup.html" rel="noreferrer"><code>Lookup</code></a> (thanks: CarlWoll):</p>
<pre><code>Lookup[{Symbol, Integer, Real}] @ GroupBy[list, Head, Apply[Times]]
</code></pre>
<blockquote>
<p>same result</p>
</blockquote>
<pre><code>RotateRight @ Values @ GroupBy[SortBy[Head] @ list, Head, Apply[Times]]
</code></pre>
<blockquote>
<p>same result</p>
</blockquote>
<p>Alternatively, you can use <a href="https://reference.wolfram.com/language/ref/GatherBy.html" rel="noreferrer"><code>GatherBy</code></a>:</p>
<pre><code>RotateRight[Times @@@ GatherBy[SortBy[Head] @ list, Head]]
</code></pre>
<blockquote>
<p>{x y z, 12, -67.7251} </p>
</blockquote>
<p>Also, <code>Replace</code>:</p>
<pre><code>Times @@@ Replace[list, {Except @ Blank @ # -> 1} & /@ {Symbol, Integer, Real}, {1}]
</code></pre>
<p>and <code>DeleteCases</code>:</p>
<pre><code>Times @@@ (DeleteCases[list, Except@Blank@#] & /@ {Symbol, Integer, Real})
</code></pre>
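<p>For readers without Mathematica, here is a rough Python analog of the same group-by-head idea (an added sketch, not equivalent Wolfram Language code), with the symbols modelled as strings:</p>

```python
from math import prod

# Group list elements by their "head" (here: Python type) and multiply each group.
lst = ["x", 3.2, "y", 1, 2.679, 3, "z", 4, -7.9]

symbols = [e for e in lst if isinstance(e, str)]
ints    = [e for e in lst if isinstance(e, int)]
reals   = [e for e in lst if isinstance(e, float)]

newlist = [" ".join(symbols), prod(ints), prod(reals)]
print(newlist)  # symbolic product "x y z", 12, and roughly -67.72512
```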
|
2,330,368 | <p>I have calculate the sum of the following power series: $$\sum_{n=0}^{\infty} (-1)^n(n+1)^2x^n$$</p>
<p>I've determined the convergence radius $R = 1$, the convergence interval $I_c = (-1,1)$, and the convergence set $A = (-1,1)$ for this power series. Furthermore, I've tried to bring it to a form which can be easily computed, namely $$\sum_{n=0}^{\infty}x^n$$ or $$\sum_{n=0}^{\infty}(-1)^n x^n$$ (I only guess that these are the closest nicest forms that can be reached).</p>
<p>I tried to create a new function $g(x)=\int f(x) \, dx = \sum_{n=0}^{\infty}(-1)^n(n+1)x^{(n+1)}$, where $f(x)$ is the function in the power series at the very beginning of this question.</p>
<p>And this is where I get stuck. Any help would be much appreciated. </p>
| Claude Leibovici | 82,404 | <p>First, let $x=-y$ which makes $$S=\sum_{n=0}^{\infty} (-1)^n(n+1)^2x^n=\sum_{n=0}^{\infty} (n+1)^2y^n=\sum_{n=0}^{\infty} (n^2+2n+1)\,y^n$$ Now, use $n^2=n(n-1)+n$ which makes $$S=\sum_{n=0}^{\infty} (n(n-1)+3n+1)\,y^n$$ that is to say $$S=\sum_{n=0}^{\infty} n(n-1)\,y^n+3\sum_{n=0}^{\infty} n\,y^n+\sum_{n=0}^{\infty} y^n$$
$$S=y^2\sum_{n=0}^{\infty} n(n-1)\,y^{n-2}+3y\sum_{n=0}^{\infty} ny^{n-1}+\sum_{n=0}^{\infty} y^n$$ $$S=y^2\left(\sum_{n=0}^{\infty} y^n \right)''+3y\left(\sum_{n=0}^{\infty} y^n \right)'+\left(\sum_{n=0}^{\infty} y^n \right)$$ I am sure that you can take it from here.</p>
|
505,804 | <p>Let $X$ be a metric space and let $G$ be a group of homeomorphisms $X \to X$ acting on $X$. We say $G$'s action is properly discontinuous in case for every $x \in X$ and compact $K \subseteq X$, there are at most finitely many $g \in G$ such that $g(x) \in K$. Equivalently (and this is not hard to show), $G \cdot x$ is discrete and $G_x$ finite for any $x$.</p>
<p>Why is it the case that $G$ acts properly discontinuously if and only if for any compact $K$, $g(K) \cap K \ne \emptyset$ for only finitely many $g$? One direction is relatively easy, but I just cannot seem to prove the "only if". The best I've been able to do is for finite $K$ (which is kind of the next best thing when you're stumped on proving something for compact sets, I guess).</p>
| lessthanepsilon | 129,626 | <p>see the errata posted here: <a href="http://www.personal.psu.edu/sxk37/errata.pdf" rel="noreferrer">http://www.personal.psu.edu/sxk37/errata.pdf</a></p>
<p>where katok writes:</p>
<p>p.27 l.9: Replace “metric” by “locally compact metric”</p>
<p>p.27 l.10: Replace “homeomorphisms” by “isometries”.</p>
<p>p.27 l.-5-l.-3 Replace “It is clear from the definition that a group $G$ acts properly discontinuously on $X$ if and only if each orbit is discrete and the stabilizer of each point is finite.” by “Since $X$ is locally compact, a group $G$ acts properly discontinuously on $X$ if and only if each orbit has no accumulation point in $X$, and the order of the stabilizer of each point is finite. The first condition, however, is equivalent to the fact that each orbit of $G$ is discrete. For, if $g_n(x) \to s \in X$, then for any $\varepsilon > 0$, $\rho(g_n(x),\,g_{n+1}(x))< \varepsilon$ for sufficiently large $n$, but since $g_n$ is an isometry, we have $\rho(g^{-1}_ng_{n+1}(x),\,x)< \varepsilon$, which implies that $x$ is an accumulation point for its orbit $Gx$, i.e. $Gx$ is not discrete.”</p>
|
1,642,247 | <p>How to prove that any regular map $\phi : \Bbb P^1 \to \Bbb A^n(\Bbb C)$ maps $\Bbb P^1$ to point. Now $\phi=(F_1/G_1,...,F_n/G_n)$ where $F_i/G_i$ is a regular function.
Now how do I conclude? </p>
| Marra | 27,071 | <p>If $\phi$ is regular, by continuity its image is a bounded set in $\mathbb{A}^n(\mathbb{C})$. Using any chart of $\mathbb{P}^1$, you can obtain a holomorphic mapping from $\mathbb{C}^1$ to $\mathbb{A}^n(\mathbb{C})$ which is globally holomorphic and bounded. Therefore it is constant, that is, its image is only one point. It extends to the entire $\mathbb{P}^1$ by continuity.</p>
|
1,642,247 | <p>How to prove that any regular map $\phi : \Bbb P^1 \to \Bbb A^n(\Bbb C)$ maps $\Bbb P^1$ to point. Now $\phi=(F_1/G_1,...,F_n/G_n)$ where $F_i/G_i$ is a regular function.
Now how do I conclude? </p>
| basket | 294,706 | <p>Let $k$ be algebraically closed. A morphism $f: \mathbb{P}^r \to \mathbb{A}^n$ induces a $k$-homomorphism $\varphi: \mathcal{O}(\mathbb{A}^n) = k[x_1,\dots,x_n] \to \mathcal{O}(\mathbb{P}^r) = k$. As a $k$-homomorphism, $\varphi$ is surjective and so it must be the quotient map by a maximal ideal $\mathfrak{m}_P$. Then $f$ simply maps $\mathbb{P}^r$ to $P$.</p>
|
1,769,297 | <p>I have been reading <em>Representations and Characters of Groups</em> by Gordon James and Martin Liebeck. I encountered the following construction of an $\mathbb{R}G$-module from a $\mathbb{C}G$-module.</p>
<p><a href="https://i.stack.imgur.com/gXuGt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gXuGt.png" alt="enter image description here"></a></p>
<p>Next, there's a proposition related to this on the next page.</p>
<p><a href="https://i.stack.imgur.com/QXoDw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QXoDw.png" alt="enter image description here"></a></p>
<p>The proof of Proposition 23.6(1) is clear from the first image. I don't understand the proof of Proposition 23.6(2). How does the author break down $V_{\mathbb{R}}$ into a direct sum of $U$ and $W$? Are we using the fact that $V$ is irreducible as a $\mathbb{C}G$-module?</p>
| guest | 187,930 | <p>First of all, to make things more comfortable for you I will do all calculations on $\mathbb{C}G$-modules using the following observation: if $V_0$ is an $\mathbb{R}G$-module of real dimension $n$ with character $\chi$, the complexification $\mathbb{C}\otimes_\mathbb{R} V_0=V$ is a $\mathbb{C}G$-module ($G$ acting only on the right factor) of complex dimension $n$ with the same character $\chi$ (a basis of $V$ over $\mathbb{C}$ is $1\otimes v_i$ for $v_i$ an $\mathbb{R}$-basis of $V_0$ and $G$ acts trivially on the left factor).</p>
<p>Thus suppose your $\mathbb{R}G$-module $V_R$ splits as $V_R = U\oplus W$ and let $\phi$ and $\psi$ be the characters of $U$ and $W$ respectively. Then by distributivity $$\mathbb{C}\otimes V_R = (\mathbb{C}\otimes U)\oplus (\mathbb{C}\otimes W)$$ and denoting $\mathbb{C}U$, $\mathbb{C}W$ the corresponding $\mathbb{C}G$-modules, we get $\mathbb{C}V_R = \mathbb{C}U\oplus \mathbb{C}W$.</p>
<p>By the first paragraph, the characters of these $\mathbb{C}G$-modules did not change, so we still have the equality of characters of $\mathbb{C}G$-modules $$\chi+\overline{\chi}=\phi + \psi.$$ But now we are in that comfortable place of algebra of characters of complex representations, so we can feel free to use the orthogonality relations. Since $V$ is an irreducible $\mathbb{C}G$-module, $\chi$ is an irreducible character, and similarly for $\overline{\chi}$.</p>
<p>Now characters of irreducible representations form an orthonormal basis. Assume $\chi\neq \overline{\chi}$ for a contradiction and expand $\phi = a\chi +b\overline{\chi} + S_\phi$ and $\psi = a'\chi + b'\overline{\chi}+S_\psi$, where the $S$s are sums of irreducible characters orthogonal to those two. Thus we have $$\chi+\overline{\chi} = (a+a')\chi+(b+b')\overline{\chi} + S_\phi + S_\psi.$$ Taking inner products with $\chi$ and using $||\chi||^2=1$ we get $1= a+ a'$. Similarly, taking inner products with $\overline{\chi}$ we get $1 = b+b'$. Taking inner products with all characters involved in $S_\phi$ and $S_\psi$ we conclude $S_\phi = S_\psi = 0$. Finally, since $a,a',b,b'$ are non-negative integers, we have without loss of generality $a=1$,$a'=0$, $b=0,b'=1$ and $S_\phi = S_\psi = 0$.</p>
<p>This shows that $\chi = \phi$ and $\overline{\chi} = \psi$; in particular, $\chi=\overline{\chi}$ since $\phi$ is real and we have a contradiction. Thus $\chi=\overline{\chi}$ and we can still use the orthogonality relations in the equation $2\chi = \phi+\psi$ to conclude as before that $\phi = \psi = \chi$.</p>
|
1,938,341 | <p>Prove, that $\forall n \in \mathbb{N}$ the following identity holds:
$$a^n - b^n = (a-b)\sum_{k=0}^{n-1} a^k b^{n-k-1}.$$</p>
<hr>
<p>For $n = 1$ we get $a^1 - b^1 = a - b$ on LHS, and on RHS we get $(a-b)\sum_{k=0}^{0}a^k b^{n-k-1} = a-b.$ Both sides are equal, so identity hold for $n = 1$.</p>
<hr>
<p>Assume that the identity holds for some $m \in \mathbb{N}$, i.e. for $n = m$. For $n = m+1$ we get:
$$(a-b)\sum_{k=0}^{m}a^k b^{m-k}.$$
Multiplying and distributing we get:
$$\sum_{k=0}^{m}a^{k+1}b^{m-k} - \sum_{k =0}^{m}a^k b^{m-k+1}.$$</p>
<p>What shall I do after the last step? Is there some rule/identity, which I can use?</p>
| B. Goddard | 362,009 | <p>You can re-index the first sum in your last expression. If you decrease $k$ by one in the summand, you can pay for it by increasing the limits. So you'd get
$$\sum_{k=1}^{m+1} a^kb^{m-k+1} - \sum_{k=0}^{m} a^kb^{m-k+1}.$$</p>
<p>Now if you separate the last term of the first sum and the first term of the
second sum you have:</p>
<p>$$a^{m+1}b^0+ \sum_{k=1}^{m} a^kb^{m-k+1} - \sum_{k=1}^{m} a^kb^{m-k+1}-a^0b^{m+1}.$$</p>
<p>And the two summations cancel.</p>
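<p>The identity itself is easy to test numerically (an added check, not part of the original answer):</p>

```python
# Check a^n - b^n == (a - b) * sum_{k=0}^{n-1} a^k b^(n-k-1) for small integers.
def rhs(a, b, n):
    return (a - b) * sum(a ** k * b ** (n - k - 1) for k in range(n))

ok = all(
    a ** n - b ** n == rhs(a, b, n)
    for a in range(-3, 4)
    for b in range(-3, 4)
    for n in range(1, 8)
)
print(ok)  # → True
```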
|
2,014,156 | <p><a href="http://www.wolframalpha.com/input/?i=lim%20(sum%201%2F(a%2Bn)%5E2,%20n%20%3D%201%20to%20infinity),%20a%20to%20infinity" rel="nofollow noreferrer">WolframAlpha</a> says that the sequence
$$
S_n = \sum\limits_{i=1}^{\infty}\frac{1}{(n+i)^2}
$$
tends to $0$. The proof given by the site uses notations unfamiliar to me, and was wondering if the following proves this.</p>
<p>Given $n$, we have that for all $N$,
$$
\sum\limits_{i=1}^N\frac{1}{(n+i)^2} = \sum\limits_{i = n+1}^{n+N}\frac{1}{i^2}\underset{N\to\infty}{\longrightarrow}\sum\limits_{i=n+1}^{\infty}\frac{1}{i^2}
$$
which means that
$$
S_n = \sum\limits_{i=n+1}^{\infty}\frac{1}{i^2}\underset{n\to\infty}{\longrightarrow}0
$$</p>
<p>Is this proof valid? Is the order of taking limits not problematic?</p>
| marty cohen | 13,079 | <p>The proof can be rewritten
to make it simpler and clearer.</p>
<p>$\begin{array}{ll}
\sum\limits_{i=1}^N\frac{1}{(n+i)^2}
&= \sum\limits_{i = n+1}^{n+N}\frac{1}{i^2}\\
&< \sum\limits_{i = n+1}^{n+N}\frac{1}{i(i-1)}
\qquad\text{since } i(i-1) < i^2\\
&= \sum\limits_{i = n+1}^{n+N}\left( \frac1{i-1}-\frac1{i}\right)\\
&=\frac1{n}-\frac1{n+N}\\
&< \frac1{n}\\
& \to 0 \text{ as } n \to \infty\\
\end{array}
$</p>
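<p>Numerically (an added check, not from the original answer), the tail $\sum_{i=n+1}^{n+N}\frac{1}{i^2}$ indeed stays below the telescoping bound $\frac1n$:</p>

```python
# Tail of sum 1/i^2 from n+1 to n+N, compared with the telescoping bound 1/n.
def tail(n, N):
    return sum(1 / i ** 2 for i in range(n + 1, n + N + 1))

for n in (1, 5, 50, 500):
    assert tail(n, 10000) < 1 / n

print(tail(500, 10000))  # close to 1/500 but strictly smaller
```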
|
2,108,245 | <p>While studying Fourier analysis last semester, I saw an interesting identity:</p>
<p>$$\sum_{n=1}^{\infty}\frac{1}{n^2-\alpha^2}=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}$$
whenever $\alpha \in \mathbb{C}\setminus \mathbb{Z}$, which I learned two proofs using Fourier series and residue calculus.</p>
<p>More explicitly, we can deduce the theorem using Fourier series of $f(\theta)=e^{i(\pi - \theta)\alpha}$ on $[0,2\pi]$ or contour integral of the function $g(z)=\frac{\pi}{(z^2-\alpha^2)\tan\pi z}$ along large circles.</p>
<p>But these techniques, as long as I know, wasn't fully developed at Euler's time. </p>
<p>So what was Euler's method to prove this identity? Is there any proof at elementary level?</p>
| Renascence_5. | 286,825 | <p>For the partial fraction decomposition of the cotangent
$$\pi \cot \pi z = \frac{1}{z} + \sum_{n\ge 1}\left( \frac{1}{z-n} + \frac{1}{z+n}\right) = \lim_{k\to\infty} \sum_{n=-k}^k \frac{1}{z-n}$$
hence
\begin{align*}
\sum_{n=1}^\infty \frac{1}{n^2-\alpha ^2}
&= \lim_{k\to\infty}\sum_{n=1}^k \frac{1}{n^2-\alpha^2}\\
&= -\frac{1}{2\alpha}\lim_{k\to\infty} \sum_{n=1}^k \left(\frac{1}{\alpha-n} + \frac{1}{\alpha+n}\right)\\
&= -\frac{1}{2\alpha}\lim_{k\to\infty} \left(-\frac{1}{\alpha} +\sum_{n=-k}^k \frac{1}{\alpha-n}\right)\\
&= -\frac{1}{2\alpha}\left(\pi \cot \pi \alpha - \frac{1}{\alpha}\right)\\
&= \frac{1-\pi \alpha\cot \pi \alpha}{2\alpha^2}\\
&=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}
\end{align*}</p>
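<p>A numerical sanity check of the final identity for a sample value $\alpha=0.3$ (added, not part of the original answer):</p>

```python
from math import pi, tan

# Compare partial sums of sum 1/(n^2 - a^2) with 1/(2 a^2) - pi/(2 a tan(pi a)).
a = 0.3
N = 200000
partial = sum(1 / (n * n - a * a) for n in range(1, N + 1))
closed = 1 / (2 * a * a) - pi / (2 * a * tan(pi * a))

print(abs(partial - closed) < 1e-4)  # tail of the series is about 1/N
```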
|
2,500,723 | <blockquote>
<p>A price $p$ (in dollars) and demand $x$ for a product are related by $$\left(2x^2\right)-2xp+50p^2 = 20600$$ If the price is increasing at a rate of $2$ dollars per month when the price is $20$ dollars, find the rate of change of the demand.</p>
</blockquote>
<p>I was a little confused on how to proceed with this question. Am I supposed to use implicit differentiation (with the $x$ serving the same purpose as a $y$) and then find the derivative of $x$?</p>
<p>This is the implicit differentiation I tried:</p>
<p>$$4x\frac{dx}{dp}-2\frac{dx}{dp}+100p = 0$$</p>
<p>$$4x\frac{dx}{dp}-2\frac{dx}{dp} = -100p$$</p>
<p>$$\frac{dx}{dp}(4x-2) = -100p$$</p>
<p>$$\frac{dx}{dp} = \frac{-100p}{4x-2}$$</p>
<p>I believe this is the derivative I am looking for (thought not entirely sure) but I am not sure what values of $p$ and $x$ to input, as I am supposed to get a numerical final answer.</p>
<p>Any help?</p>
| John Alexiou | 3,301 | <p>Blindly follow the rule that $\boxed{{\rm d}f = \frac{\partial f}{\partial x} {\rm d}x + \frac{\partial f}{\partial p} {\rm d}p}$.</p>
<p>In this case:</p>
<p>$$ 2x^2-2xp+50p^2 = 20600 $$</p>
<p>$$ {\rm d} \left( 2x^2-2xp+50p^2 = 20600 \right) $$</p>
<p>$$ \frac{\partial}{\partial x} \left( 2x^2-2xp+50p^2 = 20600 \right) {\rm d}x + \frac{\partial}{\partial p} \left( 2x^2-2xp+50p^2 = 20600 \right) {\rm d}p$$</p>
<p>$$ \left( 4x -2p \right) {\rm d}x + \left(100p - 2x \right) {\rm d} p = 0 $$</p>
<p>$$ \frac{{\rm d}x}{{\rm d}p} = \frac{x-50 p}{2 x -p} $$</p>
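<p>To get the numbers the question asks for (this completion is mine, not part of the original answer): at $p=20$, the constraint $2x^2-2xp+50p^2=20600$ reduces to $x^2-20x-300=0$, so $x=30$ (taking the positive root, since demand is positive), and then $\frac{dx}{dt}=\frac{dx}{dp}\cdot\frac{dp}{dt}$:</p>

```python
from math import sqrt

# At p = 20: 2x^2 - 40x + 20000 = 20600  =>  x^2 - 20x - 300 = 0.
p = 20.0
# Quadratic formula; keep the positive root (demand must be positive).
x = (20 + sqrt(20 ** 2 + 4 * 300)) / 2

dxdp = (x - 50 * p) / (2 * x - p)   # from the implicit differentiation above
dpdt = 2.0                          # price rises 2 dollars per month
dxdt = dxdp * dpdt

print(x, dxdp, dxdt)  # → 30.0 -24.25 -48.5
```

So demand is falling at $48.5$ units per month.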
|
3,735,388 | <blockquote>
<p>They ask me to solve
<span class="math-container">$$y' +2y + \int_{0}^{x} y(t)dt = f(x)$$</span> with <span class="math-container">$y(0)=0$</span> and
<span class="math-container">$$
f(x) =
\begin{cases}
0, & x < 5 \\
2, & x \geq 5
\end{cases}
$$</span></p>
</blockquote>
<p>I don't know how to do it.</p>
| Sameer Baheti | 567,070 | <blockquote>
<p>Solve the following differential equation:
<span class="math-container">$$y^\prime +2y + \int_{0}^{x} y(t)dt = f(x)\tag{1}$$</span>
where <span class="math-container">$y(0)=0$</span>, and
<span class="math-container">$$f(x)= \begin{cases} 0 & x < 5 \\ 2 & x \geq 5 \end{cases}$$</span></p>
</blockquote>
<p>Solution:</p>
<p>Assuming <span class="math-container">$t^\prime=\frac{dt}{dx}$</span> for any function <span class="math-container">$t$</span>!</p>
<p>Substitute
<span class="math-container">\begin{align*}
u&=y+\int_{0}^{x} y(t)dt\\
u^\prime&=y^\prime+y(x)\frac{dx}{dx}-y(0)\frac{d(0)}{dx}\\
u^\prime&=y^\prime+y\\
\end{align*}</span>
to get
<span class="math-container">\begin{align*}
u^\prime+u&=0& \forall\ x<5\tag{2}\\
u^\prime+u&=2& \forall\ x\ge5\tag{3}\\
\end{align*}</span>
Solving equation <span class="math-container">$(2)$</span>:
<span class="math-container">\begin{align*}
\frac{du}{dx}&=-u\\
-\frac{du}{u}&=dx\\
-\int\frac{du}{u}&=\int dx\\
\ln u&=-x+c_1\\
u&=e^{-x+c_1}\\
u^\prime&=-e^{-x+c_1}\\
y^\prime+y&=-e^{-x+c_1}\\
y^\prime+y&=-e^{-x+\displaystyle\lim_{c_1\to-\infty}c_1}&(\because y^\prime(0)=y(0)=0\text{ from equation }1)\\
y^\prime+y&=0\\
y&=e^{-x+c_2}\\
y&=e^{-x+\displaystyle\lim_{c_2\to-\infty}c_2}&(\because y(0)=0)\\
\Rightarrow y(x)&=0\ \forall\ x<5\tag{4}
\end{align*}</span>
Solving equation <span class="math-container">$(3)$</span>:
<span class="math-container">\begin{align*}
\frac{du}{dx}&=-u+2\\
-\frac{du}{u-2}&=dx\\
-\int\frac{du}{u-2}&=\int dx\\
\ln (u-2)&=-x+c_3\\
u&=e^{-x+c_3}+2\\
u^\prime&=-e^{-x+c_3}\\
y^\prime+y&=-e^{-x+c_3}\\
y^\prime(5)+y(5)&=-e^{-5+c_3}\\
\frac{d}{dx}(e^xy)&=-e^{c_3}\\
ye^x&=-e^{c_3}x+c_4\\
y&=-e^{c_3}xe^{-x}+c_4e^{-x}\\
y^\prime&=e^{c_3}e^{-x}(x-1)-c_4e^{-x}\\
\text{We know that }\qquad\qquad\qquad\qquad y^\prime(5)+2y(5)+\int_0^5 y(t)dt&=2\\
y^\prime(5)+2y(5)&=2&
\end{align*}</span>
which is just one equation to obtain two variables <span class="math-container">$c_3$</span> and <span class="math-container">$c_4$</span>.</p>
<p><strong>The question seems incomplete or I am mistaken.</strong> Please comment!</p>
|
1,029,515 | <p>Show that $\gcd(x,x+2)$ is $1$ if $x$ is odd and $2$ if $x$ is even.</p>
<p>I am looking for a much simpler proof beside the one which I have posted.</p>
| lab bhattacharjee | 33,337 | <p>If an integer $d$ divides both $x$ and $x+2$, then $d$ must divide $(x+2)-x=2$.</p>
<p>Observe that $x$ and $x+2$ have the same parity, since their difference $(x+2)-x=2$ is even.</p>
<p>Now if $x$ is even (equivalently, $x+2$ is even), then clearly $2$ divides both, hence $(2a,2a+2)=2$.</p>
<p>If $x$ is odd, $2$ cannot divide $x$, so $(2b+1,2b+3)=1$.</p>
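<p>A one-line check in Python (added, not part of the original answer):</p>

```python
from math import gcd

# gcd(x, x + 2) is 1 for odd x and 2 for even x.
ok = all(gcd(x, x + 2) == (2 if x % 2 == 0 else 1) for x in range(1, 1000))
print(ok)  # → True
```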
|
1,627,830 | <p>I have to calcuate this limit: $$\lim _{x\to 0}\left(\frac{1-\frac{\left(1+x\right)^{\frac{1}{x}}}{e}}{x}\right)$$</p>
<p>I have no idea how to start, maybe taylor series?</p>
<p>Thanks.</p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>Substitute $y=\frac{1}{x}$, so your limit becomes
$$
\lim_{y \to \infty}y\left(1-\frac{\left(1+\frac{1}{y} \right)^y}{e} \right)
$$
now use the series expansion at $\infty$
$$
\left(1+\frac{1}{y}\right)^y=e-\frac{e}{2y} + O \left(\frac{1}{y^2}\right)
$$
and you have the result .</p>
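<p>Following the hint, the limit comes out to $\frac12$ (this value is my evaluation, not stated in the answer); a quick numeric check:</p>

```python
from math import e

# Evaluate (1 - (1 + x)^(1/x) / e) / x for small x; it should approach 1/2.
def f(x):
    return (1 - (1 + x) ** (1 / x) / e) / x

vals = [f(x) for x in (1e-2, 1e-3, 1e-4)]
print(vals)  # values approaching 0.5
```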
|
60,486 | <p>I have noted several instances <a href="https://mathematica.stackexchange.com/a/51377/7167">here</a> and <a href="https://mathematica.stackexchange.com/a/55412/7167">here</a> where <code>ConvexHullMesh</code>, introduced in v10 has been used to greatly simplify some geometry problems. Determining the volume of a region encompassing a set of 3D points, <code>pts</code> is now simply:</p>
<pre><code>Volume@ConvexHullMesh@pts
</code></pre>
<p>Is there an equally simplified method for determining the surface area of this convex hull?</p>
<p>Note, the <a href="http://reference.wolfram.com/language/ref/ConvexHullMesh.html" rel="nofollow noreferrer">documentation</a> for <code>ConvexHullMesh</code> suggests that the area can be obtained, using <code>RegionMeasure</code>, however I suspect this is a typo, since <code>RegionMeasure</code> gives an area <em>only if the points have a dimension of 2</em>.</p>
| RunnyKine | 5,709 | <p>For fun, here is an approach that uses the graphics generated from the mesh. Lets generate some points:</p>
<pre><code>SeedRandom[0]
pts = RandomReal[5, {50, 3}];
chull = ConvexHullMesh @ pts;
</code></pre>
<p>Create a graphics object from it and discretize it:</p>
<pre><code>gr = Graphics3D[GraphicsComplex[MeshCoordinates[chull], {MeshCells[chull, 2]}]];
</code></pre>
<p>And here is the area:</p>
<pre><code>Area @ DiscretizeGraphics @ gr
</code></pre>
<blockquote>
<p>86.8845076</p>
</blockquote>
<p>For comparison:</p>
<pre><code>Area @ RegionBoundary @ chull
</code></pre>
<blockquote>
<p>86.8845076</p>
</blockquote>
|
404,372 | <p>There are $7$ empty seats on a bus and four people get on. How many different ways can they be seated?</p>
<p>Would it be ${}^7C_{4}$ or ${}^7P_{4}$?</p>
| Michiel Van Couwenberghe | 79,783 | <p>I think the answer is $$ 4! C_7^4 = 7 * 6 * 5 *4 $$ Where the C is a notation for the combination of 4 out of 7. Explanation: first you pick 4 out of 7 seats which will be taken and then you determine the person for each seat.</p>
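<p>A quick check of this count (added, not part of the original answer): choosing 4 of the 7 seats and then arranging the 4 people on them agrees with the permutation count ${}^7P_4$.</p>

```python
from math import comb, factorial, perm

# Choose 4 of the 7 seats, then arrange the 4 people on them.
n_ways = factorial(4) * comb(7, 4)
print(n_ways, perm(7, 4))  # → 840 840
```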
|
127,645 | <p>In Lay's Linear Algebra and Its Applications textbook, he defines the matrix
$$A=\begin{bmatrix}
4 & 11 & 14\\8 & 7 & -2
\end{bmatrix}$$
and claims that the transformation $T(x)=A x$ maps the sphere to an ellipse.</p>
<p><a href="https://i.stack.imgur.com/grGkv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/grGkv.png" alt="enter image description here"></a></p>
<p>I'm still struggling with the mathematics on why this is true, but I would like to be able to come up with a way in Mathematica to demonstrate the transformation of the unit sphere to this ellipse to my students.</p>
<p><strong>Update:</strong> There is some wonderful support on this page. Thanks to all, but I'd also like to see if it is possible to use Mathematica to produce the equation of the ellipse that is the border of the region.</p>
| corey979 | 22,013 | <p>Generating equidistributed points on a sphere <a href="https://mathematica.stackexchange.com/a/119341/22013">according to this post</a>:</p>
<pre><code>pts = With[{points = 200, samples = 40000, iterations = 20},
Nest[With[{randoms = Join[#, RandomPoint[Sphere[], samples]]},
Normalize @ Mean @ randoms[[#]] & /@
Values @ PositionIndex @ Nearest[#, randoms]] &,
RandomPoint[Sphere[], points], iterations]];
ListPointPlot3D[#, BoxRatios -> 1]& @ pts
</code></pre>
<p><a href="https://i.stack.imgur.com/NIVvf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NIVvf.jpg" alt="enter image description here"></a></p>
<pre><code>A = {{4, 11, 14}, {8, 7, -2}};
ellipse = Dot[A, #]& /@ pts;
(* which is the same as pts.Transpose[A] *)
ell = ListPlot[ellipse, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/53crb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/53crb.jpg" alt="enter image description here"></a></p>
<hr>
<p>The points on a unit sphere above are equidistributed. To make a uniform random distribution of points one needs to generate the spherical angles $b$ and $l$ <a href="http://mathworld.wolfram.com/SpherePointPicking.html" rel="nofollow noreferrer">according to</a></p>
<p>$$b=\arccos(2u-1)$$
$$l=2\pi v$$</p>
<p>where $u,v$ are $\mathcal{U}[0,1)$ iid:</p>
<pre><code>ang = Table[{ArcCos[2 RandomReal[] - 1], 2 \[Pi] RandomReal[]}, {i, 1000}];
</code></pre>
<p>Then transform it to Cartesian coordinates $(x,y,z)$:</p>
<pre><code>sph = {Sin[#1] Cos[#2], Sin[#1] Sin[#2], Cos[#1]}& @@@ ang;
ListPointPlot3D[#, BoxRatios -> 1]& @ sph
</code></pre>
<p><a href="https://i.stack.imgur.com/UeNJC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UeNJC.jpg" alt="enter image description here"></a></p>
<p>(<em>EDIT: As <a href="https://mathematica.stackexchange.com/questions/127645/transform-sphere-to-an-ellipse-in-r2/127647?noredirect=1#comment345414_127647">Michael E2 pointed out in the comment</a>, <code>sph</code> can be obtained with a built-in function: <code>sph = RandomPoint[Sphere[], 1000]</code>.</em>)</p>
<pre><code>ellipse = Dot[A, #]& /@ sph;
pl2 = RegionPlot[ConvexHullMesh[ellipse], AspectRatio -> 1/GoldenRatio]
</code></pre>
<p><a href="https://i.stack.imgur.com/PUcHV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PUcHV.jpg" alt="enter image description here"></a></p>
<p>Thanks to <a href="https://mathematica.stackexchange.com/questions/127645/transform-sphere-to-an-ellipse-in-r2#comment345221_127647">Bob Hanlon</a> for pointing out the <code>RegionPlot</code>.</p>
|
127,645 | <p>In Lay's Linear Algebra and Its Applications textbook, he defines the matrix
$$A=\begin{bmatrix}
4 & 11 & 14\\8 & 7 & -2
\end{bmatrix}$$
and claims that the transformation $T(x)=A x$ maps the sphere to an ellipse.</p>
<p><a href="https://i.stack.imgur.com/grGkv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/grGkv.png" alt="enter image description here"></a></p>
<p>I'm still struggling with the mathematics on why this is true, but I would like to be able to come up with a way in Mathematica to demonstrate the transformation of the unit sphere to this ellipse to my students.</p>
<p><strong>Update:</strong> There is some wonderful support on this page. Thanks to all, but I'd also like to see if it is possible to use Mathematica to produce the equation of the ellipse that is the border of the region.</p>
| yarchik | 9,469 | <pre><code>x = Cos[ϕ] Sin[θ];
y = Sin[ϕ] Sin[θ];
z = Cos[θ];
A = {{4, 11, 14}, {8, 7, -2}};
ParametricPlot[
A.{x, y, z}, {θ, 0, Pi}, {ϕ, 0, 2 Pi},
MeshStyle -> {Red, Blue}, ColorFunction -> "MintColors"]
</code></pre>
<p><a href="https://i.stack.imgur.com/JCOSp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JCOSp.png" alt="enter image description here"></a></p>
|
2,055,695 | <p>Use mathematical induction to prove each of the following statements.
let <span class="math-container">$$g(n) = 1^3 + 2^3 + 3^3 + ... + n^3$$</span></p>
<p>Show that the function <span class="math-container">$$g(n)= \frac{n^2(n+1)^2}{4}$$</span> for all n in <strong>N</strong></p>
<p>so the base case is just g(1) right? so the answer for the base case is 1, because 4/4 = 1</p>
<p>then for g(2) is it replace all of the n's with n + 1 and see if there is a concrete answer?</p>
| K Split X | 381,431 | <p>Base Case:</p>
<p>Show that $g(n)$ holds for $n=1$.</p>
<p>(You've done this already).</p>
<hr>
<p>Inductive Hypothesis:</p>
<p>Assume that $g(n)$ holds for an arbitrary $k \in \mathbb{N}$, where $k \geq 1$ (since you have already proved the base case).</p>
<p>Thus we have that:</p>
<p>$$g(k) = \frac{k^2(k+1)^2}{4}$$</p>
<hr>
<p>Inductive Step:</p>
<p>Show that $g(k) \to g(k+1)$</p>
<p>$$g(k+1) = \frac{(k+1)^2((k+1)+1)^2}{4} = \frac{(k+1)^2(k+2)^2}{4}$$</p>
<p>Notice that if we expand $g(k+1)$, we get:
$$\frac{\left(k^4+6k^3+13k^2+12k+4\right)}{4}$$</p>
<p>Now for the proof:</p>
<p>We know that $g(k+1) = g(k) + (k+1)^3$</p>
<p>We can assume $g(k)$ holds via our inductive hypothesis.</p>
<p>Thus we have:</p>
<p>$$g(k+1) = g(k) + (k+1)^3$$
$$g(k+1) = \frac{k^2(k+1)^2}{4} + (k+1)^3$$
$$g(k+1) = \frac{k^2(k+1)^2}{4} + \frac{4(k+1)^3}{4}$$</p>
<p>After you simplify and collect like terms, you will end up with $g(k+1)$ (this is why I expanded the original equation for $g(k+1)$, so we don't have to factor anything hard).</p>
<p>QED.</p>
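<p>As a quick sanity check on the closed form (an illustrative aside), one can compare $1^3+2^3+\cdots+n^3$ against $n^2(n+1)^2/4$ for many values of $n$:</p>

```python
def g(n):
    # direct sum of cubes 1^3 + 2^3 + ... + n^3
    return sum(i**3 for i in range(1, n + 1))

def closed_form(n):
    # the formula proved by induction; n^2*(n+1)^2 is always divisible
    # by 4 since one of n, n+1 is even, so integer division is exact
    return n**2 * (n + 1)**2 // 4

assert all(g(n) == closed_form(n) for n in range(1, 200))
```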
|
2,055,695 | <p>Use mathematical induction to prove each of the following statements.
let <span class="math-container">$$g(n) = 1^3 + 2^3 + 3^3 + ... + n^3$$</span></p>
<p>Show that the function <span class="math-container">$$g(n)= \frac{n^2(n+1)^2}{4}$$</span> for all n in <strong>N</strong></p>
<p>so the base case is just g(1) right? so the answer for the base case is 1, because 4/4 = 1</p>
<p>then for g(2) is it replace all of the n's with n + 1 and see if there is a concrete answer?</p>
| prokaryoticeukaryote | 217,977 | <p>Assuming you are asking what the inductive step is, you simply need to assume that $g(n) =\frac{n^2(n+1)^2}{4}$ and then use it to evaluate g(n+1) like so:
\begin{equation}
g(n+1) = g(n) + (n+1)^3
\end{equation}
\begin{equation}
g(n+1) = \frac{n^2(n+1)^2}{4} + (n+1)^3
\end{equation}
\begin{equation}
g(n+1) = \frac{4(n+1)^3 + n^2(n+1)^2}{4}
\end{equation}
\begin{equation}
g(n+1) = \frac{(n+1)^2(4(n+1) + n^2)}{4}
\end{equation}
\begin{equation}
g(n+1) = \frac{(n+1)^2(n^2 + 4n + 4)}{4}
\end{equation}
\begin{equation}
g(n+1) = \frac{(n+1)^2(n+2)^2}{4}
\end{equation}</p>
<p>Q.E.D.</p>
|
31,113 | <p>Zagier has a very short proof (<a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1041893" rel="noreferrer">MR1041893</a>, <a href="https://www.jstor.org/stable/2323918" rel="noreferrer">JSTOR</a>) for the fact that every prime number $p$ of the form $4k+1$ is the sum of two squares.
The proof defines an involution of the set $S= \lbrace (x,y,z) \in N^3: x^2+4yz=p \rbrace $ which is easily seen to have exactly one fixed point. This shows that the involution that swaps $y$ and $z$ has a fixed point too, implying the theorem.</p>
<p>This simple proof has always been quite mysterious to me. Looking at a precursor of this proof by Heath-Brown did not make it easier to see what, if anything, is going behind the scenes.
There are similar proofs for the representation of primes using some other quadratic forms, with much more involved involutions.</p>
<p>Now, my question is: is there any way to see where these involutions come from and to what extent they can be used to prove similar statements?</p>
| yoyo | 129,839 | <p>[This is more of a comment to Franz Lemmermeyer's answer.]</p>
<p>Let <span class="math-container">$F(x,y)=ax^2+bxy+cy^2=[a,b,c]$</span> be an integral (<span class="math-container">$a,b,c\in\mathbb{Z}$</span>) binary quadratic form of discriminant <span class="math-container">$\Delta=b^2-4ac$</span>, and let <span class="math-container">$\mathcal{Q}=\mathcal{Q}(\Delta)$</span> be the collection of all such forms. <span class="math-container">$GL_2(\mathbb{Z})\times\langle i\rangle$</span> acts on <span class="math-container">$\mathcal{Q}$</span> by change of variable, preserving the discriminant
<span class="math-container">$$
g=\left(\begin{array}{cc}\alpha&\beta\\\gamma&\delta\end{array}\right), \ F^g=F(\alpha x+\beta y,\gamma x+\delta y).
$$</span>
where
<span class="math-container">$$
i=\left(\begin{array}{cc}\sqrt{-1}&0\\0&\sqrt{-1}\\\end{array}\right)
$$</span>
(which has determinant <span class="math-container">$-1$</span> and preserves integrality <span class="math-container">$[a,b,c]^i=[-a,-b,-c]$</span>).</p>
<p>Consider the actions of
<span class="math-container">$$
s=\left(\begin{array}{cc}0&-\sqrt{-1}\\\sqrt{-1}&0\\\end{array}\right), \ [a,b,c]^s=[-c,b,-a], \ s^2=1
$$</span>
and
<span class="math-container">$$
f=\left(\begin{array}{cc}1&1\\0&-1\\\end{array}\right), \ [a,b,c]^{f}=[a,2a-b,a-b+c], \ f^2=1.
$$</span>
If <span class="math-container">$\mathcal{Q}$</span> contains a fixed point for the action of <span class="math-container">$s$</span>, (i.e. <span class="math-container">$[a,b,-a]$</span>), then <span class="math-container">$\Delta=b^2+(2a)^2$</span> is a sum of two squares, and if <span class="math-container">$\mathcal{Q}$</span> contains a fixed point for the action of <span class="math-container">$f$</span>, (i.e. <span class="math-container">$[a,a,c]$</span>), then <span class="math-container">$\Delta=a(a-4c)$</span> factors.</p>
<p>One interesting aspect of these involutions is that they go outside <span class="math-container">$SL_2(\mathbb{Z})$</span> to the broader orthogonal group <span class="math-container">$O(\Delta;\mathbb{Z})$</span> preserving the discriminant,
<span class="math-container">$$
\Delta(a,b,c)=\left(a \ b \ c\right)
\left(
\begin{array}{ccc}
0&0&-2\\
0&1&0\\
-2&0&0\\
\end{array}
\right)
\left(
\begin{array}{c}
a\\
b\\
c\\
\end{array}
\right).
$$</span></p>
<p>Other than that, they are among the "simplest" involutions available.</p>
<p>Now, if <span class="math-container">$\Delta=p>0$</span> is prime, then <span class="math-container">$f$</span> has fixed points only when <span class="math-container">$p\equiv 1\bmod 4$</span>, namely
<span class="math-container">$$
\pm\left[1,1,\frac{1-p}{4}\right], \ \pm\left[p,p,\frac{p-1}{4}\right].
$$</span>
One might hope to finagle a finite set of "reduced" forms that contains only one of these fixed points and is closed under both involutions, but this seems somewhat complicated (or impossible?). [For instance, looking at <span class="math-container">$s$</span> with uniqueness in mind, one might consider those forms with <span class="math-container">$a>0>c$</span>. Considering what happens with these under <span class="math-container">$f$</span>, we get the condition <span class="math-container">$a+c<b$</span>. Looking at this condition under <span class="math-container">$s$</span>, we get <span class="math-container">$|a+c|<b$</span> (so <span class="math-container">$b>0$</span>) and <span class="math-container">$b>0$</span> implies <span class="math-container">$b<2a$</span> for stability under <span class="math-container">$f$</span>, etc.]
<p>The other two elements from <span class="math-container">$O(\Delta)$</span> (acting from the left on the column <span class="math-container">$(a,b,c)$</span>, from my own prejudices) in the "one-sentence" proof aren't involutions; they're of order 6 and inverse to one another:
<span class="math-container">$$
\left(
\begin{array}{ccc}
0&0&-1\\
0&1&-2\\
-1&1&-1\\
\end{array}
\right)\leftrightarrow
\left(\begin{array}{cc}0&\sqrt{-1}\\-\sqrt{-1}&-\sqrt{-1}\\\end{array}\right)=t,
$$</span>
<span class="math-container">$$
\left(
\begin{array}{ccc}
-1&1&-1\\
-2&1&0\\
-1&0&0\\
\end{array}
\right)\leftrightarrow
\left(\begin{array}{cc}\sqrt{-1}&-\sqrt{-1}\\-\sqrt{-1}&0\\\end{array}\right)=t^{-1}
$$</span>
i.e.
<span class="math-container">\begin{align*}
[a,b,c]^t&=[-c,b-2c,-a+b-c],\\
[a,b,c]^{t^{-1}}&=[-a+b-c,-2a+b,-a],
\end{align*}</span>
under the association
<span class="math-container">$$
F^g\longleftrightarrow
\left(
\begin{array}{ccc}
\alpha^2&\alpha\gamma&\gamma^2\\
2\alpha\beta&\alpha\delta+\beta\gamma&2\gamma\delta\\
\beta^2&\beta\delta&\delta^2\\
\end{array}
\right)
\left(
\begin{array}{c}
a\\
b\\
c\\
\end{array}
\right).
$$</span>
Again, these are some of the simpler finite order <span class="math-container">$O(\Delta)$</span> elements (from the <span class="math-container">$2\times2$</span> matrix standpoint).</p>
<p>Now, what happens to the finite set of "reduced" forms
<span class="math-container">$$
\mathcal{Q}_0=\{[a,b,c]\in\mathcal{Q} : a>0>c, b>0\}
$$</span>
under <span class="math-container">$t$</span> and <span class="math-container">$t^{-1}$</span>? We have
<span class="math-container">\begin{align*}
[a,b,c]^t&\in\mathcal{Q} \Longleftrightarrow b<a+c,\\
[a,b,c]^{t^{-1}}&\in\mathcal{Q} \Longleftrightarrow b>2a \text{ and } b>a+c.
\end{align*}</span>
These conditions seem to start matching up with the earlier attempt to find reduced forms stabilized by both <span class="math-container">$s$</span> and <span class="math-container">$f$</span>. So much so that the map
<span class="math-container">$$
[a,b,c]\mapsto\left\{
\begin{array}{cc}
[a,b,c]^t & b<a+c\\
[a,b,c]^f& a+c<b<2a\\
[a,b,c]^{t^{-1}} & b>2a\\
\end{array}
\right.
$$</span>
is an involution on <span class="math-container">$\mathcal{Q}_0$</span> giving us what we want!</p>
|
31,113 | <p>Zagier has a very short proof (<a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1041893" rel="noreferrer">MR1041893</a>, <a href="https://www.jstor.org/stable/2323918" rel="noreferrer">JSTOR</a>) for the fact that every prime number $p$ of the form $4k+1$ is the sum of two squares.
The proof defines an involution of the set $S= \lbrace (x,y,z) \in N^3: x^2+4yz=p \rbrace $ which is easily seen to have exactly one fixed point. This shows that the involution that swaps $y$ and $z$ has a fixed point too, implying the theorem.</p>
<p>This simple proof has always been quite mysterious to me. Looking at a precursor of this proof by Heath-Brown did not make it easier to see what, if anything, is going behind the scenes.
There are similar proofs for the representation of primes using some other quadratic forms, with much more involved involutions.</p>
<p>Now, my question is: is there any way to see where these involutions come from and to what extent they can be used to prove similar statements?</p>
| David Leep | 476,997 | <p>A recent article gives another very elegant proof of the two-square theorem in the Zagier style, but is easier. See</p>
<ul>
<li>Stan Dolan, <em>A very simple proof of the two-squares theorem</em>,
The Mathematical Gazette, Volume 105, Issue 564 (2021) p. 511. doi:<a href="https://doi.org/10.1017/mag.2021.120" rel="noreferrer">10.1017/mag.2021.120</a> (the whole article is visible in the free preview)</li>
</ul>
<p>The proof is not one sentence but it is still well worth reading. One starts with the set <span class="math-container">$S = \{(x,y,z) \mid (x+y+z)^2 = p + 4xz, \text{ and }
x,y,z \in \mathbb{Z}_{>0} \}$</span>. Short, clever arguments show that <span class="math-container">$S$</span> is finite,
<span class="math-container">$|S|$</span> is odd, and that there is a positive integer solution with <span class="math-container">$y=z$</span>.</p>
|
2,907,487 | <p>Trying to better understand how an "interpretation" works in logic: <a href="https://en.wikipedia.org/wiki/Interpretation_(logic)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Interpretation_(logic)</a></p>
<p>For the most part it seems to be the idea that we have some syntactical symbol and we can assign it a semantic meaning of our choosing.</p>
<p>To my understanding, and also based on reading this Wiki article, that would mean assigning syntactic symbols like $p, q, r$, etc, with semantic meanings like "true" or "false" or I suppose "unknown" or "$67$% true" or whatever you'd want to do.</p>
<p>For this reason I get a little confused when we talk about certain concepts/requirements (such as validity) that something is true "under every interpretation" since it seems conceivable we could make up an interpretation that has nothing to do with true/false at all. </p>
<p>Or is an interpretation strictly an assignment of true/false? What about other logics? Does it even make sense to talk about logics that have nothing to do with true/false at all? Or is the concept of true/false a necessary component in any logic system for it to even be a logic system?</p>
| Mauro ALLEGRANZA | 108,274 | <p>We may follow Wiki's entry carefully [<em>emphasis added</em>]:</p>
<blockquote>
<p>An interpretation is an assignment of <em>meaning</em> to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.</p>
<p>The <em>most commonly</em> studied formal logics are [<em>classical</em>] propositional logic, predicate logic and their modal analogs, and for these there are <em>standard ways</em> of presenting an interpretation. In these contexts an interpretation is a function that provides the extension of symbols and strings of symbols of an object language.</p>
<p>An interpretation <em>often</em> (but not always) provides a way to determine the truth values of sentences in a language. If a given interpretation assigns the value True to a sentence or theory, the interpretation is called a model of that sentence or theory.</p>
</blockquote>
<p>Things are different for the so-called <em>non-classical</em> logic.</p>
<p>See e.g. <a href="https://plato.stanford.edu/entries/logic-intuitionistic/#BasSem" rel="nofollow noreferrer">Intuitionistic Logic</a> :</p>
<blockquote>
<p>Intuitionistic systems have inspired a variety of interpretations, including Beth’s tableaux, Rasiowa and Sikorski’s topological models, Heyting algebras, formulas-as-types, Kleene’s recursive realizabilities, the Kleene and Aczel slashes, and models based on sheafs and topoi. Of all these interpretations Kripke’s possible-world semantics, with respect to which intuitionistic predicate logic is complete and consistent, most resembles classical model theory.</p>
</blockquote>
<p>According, for example, to <a href="https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov_interpretation" rel="nofollow noreferrer">Brouwer–Heyting–Kolmogorov interpretation</a> the "correct" interpretation of Intuitionistic logic is in term of <em>proof</em> (instead of truth-value).</p>
<hr />
<p>As discussed before, in the context of <a href="https://en.wikipedia.org/wiki/Interpretation_(logic)#Interpretations_for_propositional_logic" rel="nofollow noreferrer"><em>classical propositional logic</em></a>,</p>
<blockquote>
<p>where the language consists of formulas built up from propositional symbols (or variables) and logical connectives, [...] the <em>standard</em> kind of interpretation is a function that maps each propositional symbol to one of the truth values true and false.</p>
<p>This function is known as a <em>truth assignment</em> or <em>valuation function</em>.</p>
<p>Given any truth assignment for a set of propositional symbols, there is a unique extension to an interpretation for all the propositional formulas built up from those variables. This extended interpretation is defined inductively, using the truth-table definitions of the logical connectives discussed above.</p>
</blockquote>
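<p>The inductive extension just quoted is mechanical; here is a minimal Python sketch (an illustrative addition — the tuple encoding of formulas is my own choice) that extends a truth assignment on propositional symbols to all formulas built with negation, conjunction and disjunction:</p>

```python
def evaluate(formula, valuation):
    """Extend a truth assignment `valuation` (symbol -> bool) to a formula
    given as nested tuples: ('not', f), ('and', f, g), ('or', f, g),
    or a bare symbol string."""
    if isinstance(formula, str):
        return valuation[formula]       # base case: a propositional symbol
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], valuation)
    if op == 'and':
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == 'or':
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    raise ValueError(op)

v = {'p': True, 'q': False}
# q OR NOT(p AND q) evaluates to True under this valuation
assert evaluate(('or', 'q', ('not', ('and', 'p', 'q'))), v) is True
```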
<p>But also for classical logic we may have different kind of interpretations, like the <a href="https://en.wikipedia.org/wiki/Algebraic_logic#Algebras_as_models_of_logics" rel="nofollow noreferrer">algebraic models</a>.</p>
<hr />
<p><em>In conclusion</em> :</p>
<blockquote>
<p>is an interpretation strictly an assignment of true/false?</p>
</blockquote>
<p>NO.</p>
|
4,295,884 | <p>The challenge is to solve this equation <span class="math-container">$2^{x}+7^{y}=9^{z}$</span> in nonnegative integers. The obvious solution is <span class="math-container">$x=y=z=1$</span>. Using brute force, I found <span class="math-container">$3$</span> possible solutions:
<span class="math-container">\begin{eqnarray*}
(x_1,y_1,z_1)&=&(3,0,1),\\
(x_2,y_2,z_2)&=&(1,1,1)\\
(x_3,y_3,z_3)&=&(5,2,2).\\
\end{eqnarray*}</span>
There are no other natural solutions for <span class="math-container">$z≤10000$</span>.</p>
<p>It seems that the equation <span class="math-container">$2^{x}+7^{y}=9^{z}$</span> has no other solutions in natural numbers. How can this be proven?</p>
| Mike | 544,150 | <p>This is a PARTIAL answer. What remains to show is that there are not an infinite number of solutions for <span class="math-container">$x=1$</span>. Assume <span class="math-container">$x\ge 3$</span>. Then as in Conner's answer,
<span class="math-container">$$2^x = (3^z-7^v)(3^z+7^v).$$</span></p>
<p>This gives for some positive integer <span class="math-container">$a$</span> the equation
<span class="math-container">$$2^a(3^z-7^v)=(3^z+7^v).$$</span></p>
<p>Rearranging terms gives
<span class="math-container">$$(2^a-1)3^z = (2^a+1)7^v,$$</span></p>
<p>Note that this implies that <span class="math-container">$a$</span> must also be at least 3. So this gives <span class="math-container">$2^a+1=3^z$</span>. As <span class="math-container">$a$</span> is at least <span class="math-container">$3$</span> [because one can check that there are no solutions to the above equation for <span class="math-container">$a \in \{1,2\}$</span>], it follows that <span class="math-container">$2^a+1=3^z$</span> and thus in particular <span class="math-container">$3^z$</span> must be <span class="math-container">$1$</span> mod <span class="math-container">$8$</span>, and so it follows that <span class="math-container">$z$</span> must be even, which gives</p>
<p><span class="math-container">$$(3^{z/2}-1)(3^{z/2}+1) = 2^a,$$</span>
which implies both <span class="math-container">$3^{z/2}-1$</span>, <span class="math-container">$3^{z/2}+1$</span>, must be powers of <span class="math-container">$2$</span>. As they differ by only <span class="math-container">$2$</span>, this is possible only if <span class="math-container">$z=2$</span> and <span class="math-container">$a=3$</span>. So if <span class="math-container">$x$</span> is at least <span class="math-container">$3$</span>, then <span class="math-container">$v$</span> must be <span class="math-container">$1$</span> and thus <span class="math-container">$y$</span> must be <span class="math-container">$2$</span>. Thus <span class="math-container">$x$</span> must be <span class="math-container">$5$</span> and <span class="math-container">$z=2$</span>.</p>
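<p>The exhaustive search mentioned in the question is easy to reproduce; this Python sketch (an illustrative addition, with a search box of my own choosing) finds exactly the three known solutions:</p>

```python
# search for 2^x + 7^y = 9^z with x, y >= 0 and z >= 1 in a finite box
solutions = [(x, y, z)
             for z in range(1, 30)
             for y in range(0, 40)
             for x in range(0, 100)
             if 2**x + 7**y == 9**z]

assert solutions == [(3, 0, 1), (1, 1, 1), (5, 2, 2)]
```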
|
275,558 | <p>I recently had a homework problem asking to prove that the direct product of rings (or rings with identity) is still a ring (with identity), and it seemed really silly to go through all the steps in order to conclude what was extremely trivial. </p>
<p>My question is, are there instances when this is not the case? Are there certain properties, or certain algebraic structures whereupon taking their direct products doesn't maintain the original algebraic structure/properties?</p>
| Dietrich Burde | 83,966 | <p>Some properties <em>by definition</em> do not necessarily "maintain the original algebraic structure" for the direct product, such as, say, being "irreducible".
Perhaps the question gets more focus if we stay with groups (or rings, modules and algebras). <br>
The direct product of simple groups need not be simple, the direct product of groups of a square-free order is not of square-free order, the direct product of cyclic groups need not be cyclic, and the direct product of <a href="https://en.wikipedia.org/wiki/Complete_group" rel="nofollow">complete groups</a> need not be complete.</p>
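<p>One of these failures is easy to see concretely: $\mathbb{Z}_2\times\mathbb{Z}_2$ is a direct product of cyclic groups but is not itself cyclic, since no single element generates it. A quick Python sketch (an illustrative addition):</p>

```python
from itertools import product

def add(a, b):
    # componentwise addition mod 2 in Z2 x Z2
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

elements = list(product([0, 1], repeat=2))

def generated(g):
    # the cyclic subgroup generated by g
    seen, cur = set(), (0, 0)
    while cur not in seen:
        seen.add(cur)
        cur = add(cur, g)
    return seen

# no single element generates all four elements, so Z2 x Z2 is not cyclic
assert all(len(generated(g)) < 4 for g in elements)
```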
|
935,358 | <p>What is an example of a six-element group that is not abelian?</p>
<p>I can't think of any. It is very possible that I am overthinking this. Thank you for any help.</p>
| N. F. Taussig | 173,070 | <p>The symmetric group, $S_3$, on the vertices of an equilateral triangle is non-abelian. It is generated by a 120 degree counterclockwise rotation and a reflection over the perpendicular bisector of one of the sides. As you can check, it is a non-abelian group with six elements since the reflection and rotation do not commute.</p>
|
155,143 | <p>Let $X$ be a Banach space and let $(e_n)_{n=1}^{\infty}$ be a Schauder basis of $X$. If we denote the sequence of the natural projections associated with $(e_n)_{n=1}^{\infty}$ by $(S_n)_{n=1}^{\infty}$, then by the Uniform Boundedness Principle we can observe that $$\sup_n||S_n||< \infty. $$</p>
<p>And the number $K=\sup_n ||S_n||$ is called basis constant.</p>
<p>At this point, I have a question. What is the importance of the basis constant $K$? What happens when $K=1$?</p>
| alpha | 45,681 | <p>I know one situation where the basis constant plays a role. This is the Paley-Wiener theorem on the perturbation of bases (nota bene, not the more famous one on the characterisation of the Fourier transform of $L^2$ functions). It is a quantitative version of the intuitively obvious result that the isomorphic character of a Banach space is stable under small perturbations. As such it played an important role in the initial explosion of the isomorphic theory of Banach spaces in the 70's (structure of $\ell^p$-spaces and their subspaces, resp. quotient spaces).</p>
|
2,998,479 | <p>Solving the integration problem:</p>
<p><span class="math-container">$$\int \frac{4^x-5^x}{7^x} \, dx$$</span></p>
<p>Not sure how to solve this problem, I don't see a way to incorporate integration by parts into this problem or even some kind of substitution.</p>
<p>Seems that taking the natural log of each element will help to simplify the equation but I don't think it is valid mathematically.</p>
| Tianlalu | 394,456 | <p><strong>HINT:</strong>
<span class="math-container">$$\frac{4^x-5^x}{7^x}=\left(\frac47\right)^x-\left(\frac57\right)^x,$$</span>
and you have the formula (by putting <span class="math-container">$a=4/7$</span> and <span class="math-container">$5/7$</span>)
<span class="math-container">$$\int a^x~\mathrm dx=?$$</span></p>
|
834,451 | <p>Simple question from a calc 3 beginner. Visually I cannot imagine the span of two vectors, what does this necessarily mean? For example my text mentions if two vectors are parallel their span is a line, otherwise a plane. Can anyone elaborate?</p>
| Hayden | 27,496 | <p>Assuming it makes sense that the span of a single vector is a line, we can imagine the two vectors in 3-space. Because the span of each vector lies within the space of each of them, we can draw the two lines that are in the direction of these two vectors:</p>
<ul>
<li><p>if the two lines are equal, then this is all of the span.</p></li>
<li><p>otherwise, we can imagine picking one of the lines and sliding it along the other line such that it stays parallel to its original placement. </p></li>
</ul>
<p>To make the above intuition precise, the point is that the span of a single vector is the result of performing scalar multiplication on that single vector: that is, contracting it, shrinking it, or even flipping its orientation (making the direction go the other way). Then we can get any other point of the plane by finding the projection of the point onto the first line, and then translating by the projection of the point onto the second line; this is why when the lines are the same we do not leave the line.</p>
<p>As far as the formal definition of the span goes, the span of a set $S=\{v_1,\ldots, v_n\}$ of vectors is given by the set
$$\mathrm{span}(S)=\left\{ \sum_{i=1}^n{c_iv_i} \mid c_i\in \mathbb{F}, v_i\in S\right\}$$ where $\mathbb{F}$ is the field that you're working over (likely the real numbers $\mathbb{R}$).</p>
<p>In the case where $S=\{v_1,v_2\}$, we're looking at the set of vectors of the form $c_1v_1+c_2v_2$. Now, if $v_1$ and $v_2$ are in the same direction, then there is $c$ such that $v_1=cv_2$, so we can rewrite this as $c_1v_1+c_2v_2=c_1v_1+c_2cv_1=(c_1+c_2c)v_1$, which is just any element in the span of $\{v_1\}$. This is why it is possible for the span to be a line.</p>
<p>Now, when they aren't in the same direction, they lie precisely in a unique plane. Any point in the line that goes along the direction of the vector $v_1$ can be written as $c_1v_1$, and any point in the line that goes along the direction of the vector $v_2$ can be written as $c_2v_2$. Every point in the plane will then be of the form $c_1v_1+c_2v_2$ by projecting the point onto $v_1$ and $v_2$ to find $c_1$ and $c_2$, respectively. </p>
|
834,451 | <p>Simple question from a calc 3 beginner. Visually I cannot imagine the span of two vectors, what does this necessarily mean? For example my text mentions if two vectors are parallel their span is a line, otherwise a plane. Can anyone elaborate?</p>
| Tobias Kildetoft | 2,538 | <p>Here is a visual way of thinking about this (at least in small enough dimension that you can visualize it).</p>
<p>Say you have some collection of vectors and you place yourself at the origin. Each vector determines a direction (the span does not care if you replace a vector by some non-zero scalar multiple), and the span is everywhere you are able to go by only going in the specified directions (though you are allowed to go both forwards and backwards).</p>
<p>So, if you were in $3$ dimensions and had the vectors $(1,0,0)$ and $(0,1,0)$ then you could for example go to $(-1,2,0)$ by first going backwards $1$ unit in the direction of the first vector and then going forwards $2$ units in the direction of the second vector.</p>
|
2,410,118 | <p>Edit: Excuse me, the numerator of the given solution should've been a rising factorial. That said, I still don't understand where it comes from?</p>
<p>The question is as follows :</p>
<blockquote>
<p>n friends visit k exhibitions, each person visits only 1 exhibition. Find the number of possibilities if</p>
<p>b) Only the number of friends who goes to each exhibition matters. Neither the order nor who goes where matters.</p>
</blockquote>
<p>Now, I have the solution which would be <span class="math-container">$$\frac{[n+1]^{k-1}}{(k-1)!} $$</span></p>
<p>But I don't really understand why that is so.
If I understand that correctly, that would be a <em>stars and bars</em> problem, but I don't seem to grasp the concept of it.</p>
<p>Any help would be greatly appreciated.</p>
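<p>For what it's worth, the rising-factorial formula $\frac{(n+1)(n+2)\cdots(n+k-1)}{(k-1)!}$ equals the stars-and-bars count $\binom{n+k-1}{k-1}$, and both can be checked against direct enumeration (an illustrative Python aside):</p>

```python
from itertools import product
from math import comb, factorial

def by_formula(n, k):
    # rising factorial (n+1)(n+2)...(n+k-1) divided by (k-1)!
    num = 1
    for i in range(1, k):
        num *= n + i
    return num // factorial(k - 1)

def by_enumeration(n, k):
    # count tuples (n1, ..., nk) of nonnegative integers with sum n
    return sum(1 for t in product(range(n + 1), repeat=k) if sum(t) == n)

for n in range(0, 7):
    for k in range(1, 5):
        assert by_formula(n, k) == comb(n + k - 1, k - 1) == by_enumeration(n, k)
```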
| Hagen von Eitzen | 39,174 | <p>We can try something of the form
$$a_n=\alpha n+\beta (-1)^nn+\gamma.$$
From $$\begin{align}\alpha(2n+1)+2\gamma&=a_n+a_{n+1}\\&=(k+1, 2k+1,3k+1,4k+1, 5k+1,\ldots )\\&=nk+1,\end{align}$$
we find $\alpha=\frac k2$, $\gamma=\frac12-\frac k4$. From this we infer $\beta=\frac k4-\frac12$, and - hooray! - that works.</p>
|
2,322,938 | <p>The powerset of the empty set is {$\emptyset$}. Attempt:</p>
<p>The power set of the powerset of the empty set is { { {$\emptyset$} } }? </p>
<p>Next I want to find the powerset of the power set of the powerset of the empty set P(P(P($\emptyset$))). </p>
| Akiva Weinberger | 166,353 | <p>Here's a visualization of $\emptyset$ (called $V_0$ here), $\mathcal P(\emptyset)$ (called $V_1$ here), $\mathcal P(\mathcal P(\emptyset))$ (called $V_2$ here), etc. An empty box represents the empty set. A box containing only an empty box represents the set containing only the empty set, and so forth.</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/8/83/Von_Neumann_universe_4.png" rel="noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/8/83/Von_Neumann_universe_4.png" alt="enter image description here"></a></p>
<p>Thus:</p>
<ul>
<li>$V_0=\{\}$</li>
<li>$V_1=\{\{\}\}$</li>
<li>$V_2=\{\{\},\{\{\}\}\}$ (this is the powerset of the powerset)</li>
<li>$V_3=\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\}\}$</li>
<li>etc.</li>
</ul>
<p>$V_5$, not pictured, has $2^{2^{\large 2^{2}}}=65,\!536$ elements.</p>
<p>EDIT:</p>
<p>To be explicit, the definition is $V_0=\emptyset$ and $V_{n+1}=\mathcal P(V_n)$. There are two properties of these you should know:</p>
<ul>
<li><em>Every</em> "hereditarily finite set" (that is, a set you could write with a finite number of curly braces) is contained in one of those $V$ sets.</li>
<li>$V_n\subset V_{n+1}$ for all $n$. That is, each is a subset of the next one.</li>
</ul>
<p>Can you see why these are true?</p>
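<p>The doubling-exponential growth of the $V_n$ is easy to compute directly; this Python sketch (an illustrative addition) builds iterated powersets with frozensets and confirms the sizes $0, 1, 2, 4, 16, 65536$ and the nesting $V_n\subset V_{n+1}$:</p>

```python
from itertools import chain, combinations

def powerset(s):
    # all subsets of s, each returned as a frozenset
    items = list(s)
    return frozenset(
        frozenset(c)
        for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

V = [frozenset()]              # V_0 = empty set
for _ in range(5):
    V.append(powerset(V[-1]))  # V_{n+1} = P(V_n)

assert [len(v) for v in V] == [0, 1, 2, 4, 16, 65536]
assert all(V[n] <= V[n + 1] for n in range(5))  # each V_n sits inside V_{n+1}
```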
|
1,702,139 | <p>I am reading through the <a href="http://www.computervisionmodels.com/" rel="nofollow noreferrer">Computer Vision: Models, Learning, and Inference</a> book to get an understanding of computer vision. The author describes the high-level steps taken to arrive at one of the derivations, but I'm stuck in working through the details.</p>
<p>An excerpt from the book (available at above link), describing the problem:</p>
<p><strong>"Problem 4.3</strong> Taking equation 4.29 as a starting point, show that the maximum likelihood parameters for the categorical distribution are given by
$$
\hat{\lambda_k} = \frac{N_k}{\sum_{m=1}^{6}N_m}
$$
where $N_k$ is the number of times that category $K$ was observed in the training data.</p>
<p><strong>Equation 4.29</strong>
$$
L = \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg)
$$
where the second term uses the Lagrange multiplier $v$ to enforce the constraint on the parameters $\sum_{k=1}^{6}\lambda_k=1$".</p>
<p>The high-level description of how to solve this is given:</p>
<p>"We differentiate $L$ with respect to $\lambda_k$ and $v$, set the derivatives to zero, and solve for $\lambda_k$"</p>
<p>I think some of my confusion comes in how to take the partial derivative when it comes to within a summation. I have found some questions which seem similar, but I'm not sure how to apply them to this equation: <a href="https://math.stackexchange.com/questions/217831/differentiation-with-summation-symbol?rq=1">1</a>, <a href="https://math.stackexchange.com/questions/429269/complicated-derivative-with-nested-summations?rq=1">2</a>. I'll now list out the steps I took so far and hope to be pointed in the right direction.</p>
<p>Partial derivative of $L$ with respect to $\lambda_k$:
$$
\frac{\partial L}{\partial \lambda_k} = \frac{\partial}{\partial \lambda_k} \Bigg[ \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + \frac{\partial}{\partial \lambda_k} \Bigg[ v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + \frac{\partial}{\partial \lambda_k} \Bigg[ v \sum_{k=1}^{6}\lambda_k - v \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + v \sum_{k=1}^{6}1
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + 6v
$$</p>
<p>Partial derivative of $L$ with respect to $v$:
$$
\frac{\partial L}{\partial v} = \frac{\partial}{\partial v} \Bigg[ \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$
$$
= \frac{\partial}{\partial v} \Bigg[ v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$
$$
= \frac{\partial}{\partial v} v\sum_{k=1}^{6}\lambda_k - v
$$
$$
= \sum_{k=1}^{6}\lambda_k - 1
$$</p>
<p>At this point, I imagine I would set both equations equal to zero and solve for $\lambda_k$. However, I don't know that I did the differentiation correctly and if so, I'm not sure how to proceed with solving for $\lambda_k$ (removing it from the summation?).</p>
| Robert Israel | 8,508 | <p>Hint: what kind of singularity can such a map have at $0$? </p>
|
1,702,139 | <p>I am reading through the <a href="http://www.computervisionmodels.com/" rel="nofollow noreferrer">Computer Vision: Models, Learning, and Inference</a> book to get an understanding of computer vision. The author describes the high-level steps taken to arrive at one of the derivations, but I'm stuck in working through the details.</p>
<p>An excerpt from the book (available at above link), describing the problem:</p>
<p><strong>"Problem 4.3</strong> Taking equation 4.29 as a starting point, show that the maximum likelihood parameters for the categorical distribution are given by
$$
\hat{\lambda_k} = \frac{N_k}{\sum_{m=1}^{6}N_m}
$$
where $N_k$ is the number of times that category $K$ was observed in the training data.</p>
<p><strong>Equation 4.29</strong>
$$
L = \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg)
$$
where the second term uses the Lagrange multiplier $v$ to enforce the constraint on the parameters $\sum_{k=1}^{6}\lambda_k=1$".</p>
<p>The high-level description of how to solve this is given:</p>
<p>"We differentiate $L$ with respect to $\lambda_k$ and $v$, set the derivatives to zero, and solve for $\lambda_k$"</p>
<p>I think some of my confusion comes in how to take the partial derivative when it comes to within a summation. I have found some questions which seem similar, but I'm not sure how to apply them to this equation: <a href="https://math.stackexchange.com/questions/217831/differentiation-with-summation-symbol?rq=1">1</a>, <a href="https://math.stackexchange.com/questions/429269/complicated-derivative-with-nested-summations?rq=1">2</a>. I'll now list out the steps I took so far and hope to be pointed in the right direction.</p>
<p>Partial derivative of $L$ with respect to $\lambda_k$:
$$
\frac{\partial L}{\partial \lambda_k} = \frac{\partial}{\partial \lambda_k} \Bigg[ \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + \frac{\partial}{\partial \lambda_k} \Bigg[ v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + \frac{\partial}{\partial \lambda_k} \Bigg[ v \sum_{k=1}^{6}\lambda_k - v \Bigg]
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + v \sum_{k=1}^{6}1
$$</p>
<p>$$
= \sum_{k=1}^{6} N_k \frac{1}{\lambda_k} + 6v
$$</p>
<p>Partial derivative of $L$ with respect to $v$:
$$
\frac{\partial L}{\partial v} = \frac{\partial}{\partial v} \Bigg[ \sum_{k=1}^{6} N_k \log[\lambda_k] + v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$
$$
= \frac{\partial}{\partial v} \Bigg[ v\Bigg(\sum_{k=1}^{6}\lambda_k - 1 \Bigg) \Bigg]
$$
$$
= \frac{\partial}{\partial v} v\sum_{k=1}^{6}\lambda_k - v
$$
$$
= \sum_{k=1}^{6}\lambda_k - 1
$$</p>
<p>At this point, I imagine I would set both equations equal to zero and solve for $\lambda_k$. However, I don't know that I did the differentiation correctly and if so, I'm not sure how to proceed with solving for $\lambda_k$ (removing it from the summation?).</p>
| Yiorgos S. Smyrlis | 57,021 | <p><strong>Answer.</strong> $\,f(z)=az\,$ or $\,f(z)=a/z$, for some $a\ne 0$.</p>
<p>Hint. At $z=0$ the function $f$ has an isolated singularity. It can only be a removable one or a simple pole. If it is removable, $f$ should vanish there.</p>
|
973,771 | <p>I am trying to find </p>
<p>$$ \lim_{x\to 0} \frac{e^{\alpha x} -e^{\beta x}}{\sin(\alpha x) + \sin(\beta x) }$$</p>
<p>but dont have a clue where to start, could someone give me a hint please ?</p>
| user184394 | 184,394 | <p>Using Wolfram Alpha:</p>
<p>lim (e^(alpha x)-e^(beta x))/(sin(alpha x)+sin(\beta x)) as x->0</p>
<p>Answer:</p>
<p>(alpha-beta)/(alpha+beta)</p>
<p><a href="http://www.wolframalpha.com/input/?i=lim+%28e%5E%28alpha+x%29-e%5E%28beta+x%29%29%2F%28sin%28alpha+x%29%2Bsin%28%5Cbeta+x%29%29+as+x-%3E0" rel="nofollow">http://www.wolframalpha.com/input/?i=lim+%28e%5E%28alpha+x%29-e%5E%28beta+x%29%29%2F%28sin%28alpha+x%29%2Bsin%28%5Cbeta+x%29%29+as+x-%3E0</a></p>
|
2,046,729 | <p>In my book there is a solved question . <a href="https://i.stack.imgur.com/O4LpL.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O4LpL.gif" alt="enter image description here" /></a></p>
<p>In that question I could not understand where the <span class="math-container">$(3\lambda + 2)$</span> comes from.</p>
<p>And is there any other method to solve it?</p>
| Jorge Medina | 279,252 | <p>Your objective here is to figure out where the terms of both sequences are the same. In general you do this by expressing one variable (n) in terms of the other (m) and then expressing both in terms of a third variable (λ). This way you can see which terms in Tn are the same as which terms in Tm, just by plugging in successive values of λ.</p>
<p>In your example n = (7m + 4)/3</p>
<p>Because n can only be an integer, we know that 7m + 4 must be some integer multiple of 3. Thus:</p>
<p>7m + 4</p>
<p>Must be of the form: </p>
<p>3(a + b) = 3a +3b</p>
<p>So m must:</p>
<ul>
<li>be composed of a variable that, when multiplied by 7, gives a multiple of 3</li>
<li>be composed of a constant that, when multiplied by 7 and added to 4, gives a multiple of 3</li>
</ul>
<p>It's natural then to say m = 3λ + 2.</p>
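A quick enumeration (my addition, plain Python, not part of the original answer) confirms that the values of m making 7m + 4 divisible by 3 are exactly those of the form 3λ + 2:

```python
# Values of m for which (7m + 4)/3 is an integer, i.e. 7m + 4 ≡ 0 (mod 3).
ms = [m for m in range(30) if (7 * m + 4) % 3 == 0]
# Every such m is of the form 3*lam + 2: 2, 5, 8, 11, ...
```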
|
219,793 | <p>I have the following function </p>
<p><code>f[z_] := Piecewise[{{Cos[z] - 17 Sin[z]/z, z > 0}, {Cosh[z] - 17 Sinh[z]/z, z <= 0}}]</code></p>
<p>I am interested in the regions where <span class="math-container">$\vert f(z)\vert \leq 1$</span>, I tried to shade that region using <code>Filling</code>, but my results were not the expected, how should I write the code? Here is my attempt: </p>
<pre><code>Plot[{f[z], -1, 1}, {z, -5, 20}, PlotStyle -> {Automatic, Dashed, Dashed}, Filling -> {1 -> {3}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/V7hnH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V7hnH.png" alt="enter image description here"></a></p>
<p>To be more precise: I want to shade the region <span class="math-container">$D=\{(x,y):\vert f(x) \vert<1\text{ and } \vert y \vert <1 \}$</span></p>
| kglr | 125 | <p>Assuming (1) possible codes are triples with distinct elements and (2) "in the wrong position" means <em>not necessarily in the correct position</em>:</p>
<pre><code>ClearAll[containsOneCP, containsOne, containsTwo, containsNone]
containsOneCP[lst_] := Module[{abc = MapThread[Equal, {#, lst}]},
BooleanCountingFunction[{1}, 3][## & @@ abc]] &;
containsOne[lst_] := Length[Intersection[#, lst]] == 1 &;
containsTwo[lst_] := Length[Intersection[#, lst]] == 2 &;
containsNone[lst_] := ContainsNone[lst];
conditions = {condition1, condition2, condition3, condition4, condition5};
funcs = {containsOneCP, containsOne, containsTwo, containsNone, containsOne};
flist = MapThread[# @ #2 &, {funcs, conditions}];
Select[And @@ Through[flist @ #] &] @ Permutations[Range[0, 9], {3}]
</code></pre>
<blockquote>
<pre><code>{{0, 1, 2},{0, 4, 2},{1, 0, 2},{4, 0, 2},{6, 0, 5},{6, 0, 9},{6, 5, 0},{6, 9, 0}}
</code></pre>
</blockquote>
<p><strong>Update:</strong> If "a true number / two true numbers, however, in the wrong position" means the one digit is (the two digits are) <strong>not</strong> in their correct positions, we need to add the condition <code>(And @@ MapThread[Unequal, {#, lst}])</code> to <code>containsOne</code> and <code>containsTwo</code>:</p>
<pre><code>containsOneWP[lst_] := (And @@ MapThread[Unequal, {#, lst}]) &&
Length[Intersection[#, lst]] == 1 &;
containsTwoWP[lst_] := (And @@ MapThread[Unequal, {#, lst}]) &&
Length[Intersection[#, lst]] == 2 &;
</code></pre>
<p>With this modification we get a unique result that matches OP's manually obtained result:</p>
<pre><code>funcs2 = {containsOneCP, containsOneWP, containsTwoWP, containsNone, containsOneWP};
flist2 = MapThread[#@#2 &, {funcs2, conditions}];
Select[And @@ Through[flist2@#] &]@Permutations[Range[0, 9], {3}]
</code></pre>
<blockquote>
<pre><code> {{0, 4, 2}}
</code></pre>
</blockquote>
|
3,322,760 | <p>I am trying to solve the non linear ODE</p>
<p><span class="math-container">$$\displaystyle \frac{d^2y} {dx^2}=a^2(y+y^3)$$</span></p>
<p>With the boundary conditions that it vanish at <span class="math-container">$\pm\infty$</span>.
I am thinking that it might be better to deal with it as an integral equation.</p>
<p><span class="math-container">$$\displaystyle \left(\frac{d^2} {dx^2}-a^2 \right) y =a^2y^3$$</span></p>
<p>The Green's function will be that of the modified Helmholtz equation in 1D, and after adding on the solution of the homogeneous equation (following the approach of the Lippmann–Schwinger equation)
I can write it in integral form as</p>
<p><span class="math-container">$$\displaystyle y(x)=e^{-a|x|}-\frac{1}{2a}\int_{-\infty}^{\infty}e^{-a|x-x_{1}|}y^{3}(x_{1})dx_{1}$$</span></p>
<p>Now I have no idea how to solve this integral equation. For the Lippmann Schwinger equation Neumann series can be used should I do the same with this one or is there some better method for these kind of integral equations?</p>
| Kavi Rama Murthy | 142,385 | <p>In (A) we cannot write a rational number <span class="math-container">$x$</span> as <span class="math-container">$y+z$</span> with <span class="math-container">$x$</span> rational and <span class="math-container">$y$</span> irrational because <span class="math-container">$z=x-y$</span> would then be rational. In fact <span class="math-container">$X+Y$</span> consist only of irrational numbers. </p>
<p>Your answer for (B) is correct. </p>
<p>(C) and (D) are both incorrect. Given any real number <span class="math-container">$x$</span> there is a 'large' prime number <span class="math-container">$p$</span> such that <span class="math-container">$x-p \leq 100$</span>. Hence <span class="math-container">$x=(x-p)+p \in (-\infty, 100] + P$</span> where <span class="math-container">$P$</span> is the set of all prime numbers. </p>
|
57,261 | <p>In Wellin's <a href="http://rads.stackoverflow.com/amzn/click/1107009464" rel="nofollow noreferrer">Programming with Mathematica</a> book, here's one of his implementations of the Newton method, where the iteration runs until the error tolerance is reached.</p>
<p><img src="https://i.stack.imgur.com/EnGoj.png" alt="Mathematica graphics"></p>
<p><img src="https://i.stack.imgur.com/i4rj2.png" alt="Mathematica graphics"></p>
<pre><code>Clear[findRoot]
findRoot[expr_, {var_, init_}, ϵ_] :=
Module[{xi = init, fun = Function[fvar, expr]},
While[Abs[fun[xi]] > ϵ,
xi = N[xi - fun[xi]/fun'[xi]]];
{var -> xi}]
findRoot[x^3 - 2, {x, 2.0}, 0.0001]
(* {x -> 2.} *)
</code></pre>
<p>As you could see the result is clearly wrong. I think it's because of the presence of <code>fvar</code> in the body of <code>Function</code>, which was never defined. I think he meant to use <code>var</code>. I tried that, and it works, but there was a warning that <code>"The variable name has been used twice in a nested scoping construct, in a way that is likely to be an error"</code>, with <code>var</code> highlighted in red.</p>
<p><img src="https://i.stack.imgur.com/pU2zC.png" alt="Mathematica graphics"></p>
<p>Should I be concerned about this warning? In what circumstances would that be an issue? Please feel free to come up with examples of your own.</p>
<p><strong>Edit</strong>: Here's a way that I found that avoid the scoping warning:</p>
<p><img src="https://i.stack.imgur.com/RpuBw.png" alt="Mathematica graphics"></p>
<p>Is this a better way? I think the main reason of using <code>Function</code> in Wellin's case and defining a function within the <code>Module</code> here is to make a regular expression (which can't take in variable directly) become a function that could take in a value (<code>xi</code> in this function). What's the best way to do it? This is closely related to this question <a href="https://mathematica.stackexchange.com/questions/57190/defining-a-function-with-parameters-as-variables/">here</a>.</p>
| eldo | 14,254 | <p>"Is this a better way?" </p>
<p>Yes, but I think there are alternatives:</p>
<p><a href="http://reference.wolfram.com/language/ref/Nest.html" rel="nofollow noreferrer">Nest</a></p>
<pre><code>newton1[fun_, xi_, n_] := With[{f = fun/D[fun, x]}, Nest[# - f /. x -> # &, 2., n]]
newton1[x^3 - 2, 2., 10]
</code></pre>
<blockquote>
<p>1.25992</p>
</blockquote>
<p><a href="http://reference.wolfram.com/language/ref/NestList.html" rel="nofollow noreferrer">NestList</a></p>
<pre><code>newton2[fun_, xi_, n_] := With[{f = fun/D[fun, x]}, NestList[# - f /. x -> # &, 2., n]]
ListLinePlot[newton2[x^3 - 2, 2., 10], Mesh -> All, MeshStyle -> Directive[PointSize[Medium], Red]]
</code></pre>
<p><img src="https://i.stack.imgur.com/9AeMp.jpg" alt="enter image description here"></p>
<p>For the next functions we define</p>
<pre><code> f = # / D[#, x] & [x^3 - 2];
</code></pre>
<p><a href="http://reference.wolfram.com/language/ref/NestWhileList.html" rel="nofollow noreferrer">NestWhileList</a></p>
<pre><code>NestWhileList[# - f /. x -> # &, 2., Abs[#1 - #2] > 0.0001 &, 2]
</code></pre>
<blockquote>
<p>{2., 1.5, 1.2963, 1.26093, 1.25992, 1.25992}</p>
</blockquote>
<p><a href="http://reference.wolfram.com/language/ref/NestWhile.html" rel="nofollow noreferrer">NestWhile</a></p>
<pre><code>NestWhile[# - f /. x -> # &, 2., Abs[#1 - #2] > 0.0001 &, 2]
</code></pre>
<blockquote>
<p>1.25992</p>
</blockquote>
<p><a href="http://reference.wolfram.com/language/ref/FixedPointList.html" rel="nofollow noreferrer">FixedPointList</a></p>
<pre><code>FixedPointList[# - f /. x -> # &, 2.]
</code></pre>
<blockquote>
<p>{2., 1.5, 1.2963, 1.26093, 1.25992, 1.25992, 1.25992, 1.25992}</p>
</blockquote>
<p><a href="http://reference.wolfram.com/language/ref/FixedPoint.html" rel="nofollow noreferrer">FixedPoint</a></p>
<pre><code>FixedPoint[# - f /. x -> # &, 2.]
</code></pre>
<blockquote>
<p>1.25992</p>
</blockquote>
<p><a href="http://reference.wolfram.com/language/ref/Fold.html" rel="nofollow noreferrer">Fold</a> (or FoldList)</p>
<pre><code>Fold[# - f /. x -> # &, 2., Range@10]
</code></pre>
<blockquote>
<p>1.25992</p>
</blockquote>
<p>Your last solution could also be written as follows:</p>
<pre><code>findRoot[expr_, {var_, init_}, e_] :=
Module[{xi = init, fun},
fun[x_] := expr / D[expr, var] /. var :> x;
While[ Abs @ fun @ xi > e, xi -= fun @ xi ];
{var -> xi}]
</code></pre>
<p>Here is a timetable computed over 1000 runs with initial value 0.1:</p>
<p><img src="https://i.stack.imgur.com/dbZAR.jpg" alt="enter image description here"></p>
|
43,043 | <p>I'm working through Carl Bender's <a href="http://youtu.be/LYNOGk3ZjFM">Mathematical Physics lectures</a> on YouTube (which are great fun), and I'd like <em>Mathematica</em>'s help solving terms in the perturbation series. It would be convenient if expressions like</p>
<pre><code>SeriesCoefficient[Sum[a[n] b^n, {n, 0, ∞}], {b, 0, 3}]
</code></pre>
<p>gave <code>a[3]</code> as a result. Replacing $\infty$ with an integer greater than 3 is fine, but easy to lose track of in my notes. Is it possible to add an assumption that the series converges, and would that solve my problem? Other solutions?</p>
<p><strong>Edit</strong></p>
<p>For an example of how I hope to use this functionality, consider solving $x^5 + x = 1$ by inserting the perturbation parameter $b$ in front of $x$ (but not $x^5$). I want to apply <code>SeriesCoefficient</code> to the result of</p>
<pre><code>x^5 + b x - 1 /. x -> Sum[a[n] b^n, {n, 0, ∞}]
</code></pre>
<p>and build up a list of coefficients that equal zero.</p>
| Michael E2 | 4,999 | <p>How funny - I was just doing this the other day (different equation, with a series in <code>x^(1/2)</code>). Here is what I came up with (actually, I'm following Isaac Newton's method, I believe):</p>
<pre><code>Clear[sc];
(* series coefficient of x in f[b, x] == 0 *)
sc[f_, n_] := Nest[
Append[#, a /. First @ Solve[
SeriesCoefficient[
f[b, #.b^(Range @ Length @ # - 1) + a b^(Length @ #)],
{b, 0, Length @ #}] == 0,
a]] &,
{a /. First @ Solve[SeriesCoefficient[f[b, a], {b, 0, 0}] == 0, a]},
n]
</code></pre>
<p>OP's example:</p>
<pre><code>ClearAll[f];
f[b_, x_] := x^5 + b x - 1
sc[f, 5]
(*
{1, -(1/5), -(1/25), -(1/125), 0, 21/15625}
*)
sc[f, 25] // Total // N
(*
0.754878
*)
NSolve[f[1, x] == 0, Reals]
(*
{{x -> 0.754878}}
*)
</code></pre>
<hr>
<p>There are imaginary roots too:</p>
<pre><code>NRoots[f[1, x] == 0, x]
(*
x == -0.877439 - 0.744862 I || x == -0.877439 + 0.744862 I ||
x == 0.5 - 0.866025 I || x == 0.5 + 0.866025 I || x == 0.754878
*)
</code></pre>
<p>To get all the roots in OP's example:</p>
<pre><code>Clear[sc];
sc[f_, n_] := Nest[
Map[Join[#,
{a} /. First@Solve[SeriesCoefficient[f[b, #.b^(Range@Length@# - 1) + a b^(Length@#)],
{b, 0, Length@#}] == 0, a]] &, #] &,
Transpose[{a /. Solve[SeriesCoefficient[f[b, a], {b, 0, 0}] == 0, a]}],
n]
Total[sc[f,5],{2}]//N
(*
{0.753344, -0.877796 - 0.745447 I, 0.501124 + 0.865898 I,
0.501124 - 0.865898 I, -0.877796 + 0.745447 I}
*)
</code></pre>
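As a sanity check on the coefficients {1, -1/5, -1/25, -1/125, 0, 21/15625}, one can compare the truncated series against the actual root of x^5 + b x - 1 = 0 (my addition, plain Python; bisection is used only because it needs no libraries):

```python
# Perturbation-series coefficients a_n of the real branch, as computed above.
a = [1, -1/5, -1/25, -1/125, 0, 21/15625]

def series(b):
    # Truncated series sum a_n * b^n.
    return sum(c * b**n for n, c in enumerate(a))

def exact_root(b):
    # Bisection for the real root of g(x) = x^5 + b*x - 1 on [0, 2].
    g = lambda x: x**5 + b * x - 1
    lo, hi = 0.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2
```

For small b the truncated series tracks the root closely; at b = 1 it still lands near 0.7533, as the `sc[f, 5]` total above suggests.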
|
2,164,338 | <p>I am stuck on this simple problem related to averages. Please help me out by explaining the complete solution to this problem.</p>
<blockquote>
<p>Consider a class of <span class="math-container">$40$</span> students whose average weight is <span class="math-container">$40\ \mathrm{kg}$</span>. <span class="math-container">$m$</span> new students join this class whose average weight is <span class="math-container">$n\ \mathrm{kg}$</span>. If it is known that <span class="math-container">$m + n = 50$</span>, what is the maximum possible average weight of the class now?</p>
</blockquote>
| just_curious | 420,614 | <p>Hint: Find the new total weight (which you claim you can find), divide by $40+m$, find the $m$ maximizing the expression!</p>
|
2,164,338 | <p>I am stuck on this simple problem related to averages. Please help me out by explaining the complete solution to this problem.</p>
<blockquote>
<p>Consider a class of <span class="math-container">$40$</span> students whose average weight is <span class="math-container">$40\ \mathrm{kg}$</span>. <span class="math-container">$m$</span> new students join this class whose average weight is <span class="math-container">$n\ \mathrm{kg}$</span>. If it is known that <span class="math-container">$m + n = 50$</span>, what is the maximum possible average weight of the class now?</p>
</blockquote>
| Juned Hashmi | 489,322 | <p>My simple approach to the question:</p>
<p>Each new student should contribute as much as possible above the old average of $40$ kg. Since $m+n=50$, each of the $m$ new students weighs $n = 50-m$ kg on average, so the total excess weight is $m(n-40) = m(10-m)$ kg.</p>
<p>For example, if $1$ student joins, the excess is $1 \times 9 = 9$ kg.</p>
<p>A product of two factors with a fixed sum is maximum when the factors are equal, i.e., $5 \times 5 = 25$.</p>
<p>Therefore maximum average weight $= 40 + \frac{25}{45} \approx 40.56$ kg</p>
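A brute-force check of the maximization (my addition, plain Python): over all integers m with 1 ≤ m ≤ 49 and n = 50 - m, the class average peaks at m = 5:

```python
def class_average(m):
    # 40 students at 40 kg plus m new students averaging n = 50 - m kg.
    n = 50 - m
    return (40 * 40 + m * n) / (40 + m)

best_m = max(range(1, 50), key=class_average)
```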
|
871,221 | <blockquote>
<p>Let $F$ be a field. Show that the field of quotients of $F$ is ring isomorphic to $F$.</p>
</blockquote>
<p><strong>Attempt:</strong> Let $F'$ be the field of Quotients of the field $F$. </p>
<p>Let $\Phi:F \rightarrow F'$ such that $\Phi(x)=x/1$ </p>
<p>$x/1$ refers to the equivalence class containing the element $(x,1) ~;~x,1 \in F $ . The equivalence relations satisfies $a/b = c/d$ iff $ad=bc$</p>
<p><strong>Operation Preservation</strong> : Then, $\Phi(x+y)= (x/1) + (y/1) = \Phi(x) + \Phi(y)$ </p>
<p>$\Phi(xy) = (xy)/1 = (x/1) . (y/1)$</p>
<p>These operations are consistent with the definitions of addition and multiplication for the elements of the field of quotients, and hence both addition and multiplication are preserved.</p>
<p><strong>One - One nature</strong> : If $x/1$ denotes the equivalence class containing $(x,1)$. Now, how do I prove the one-one nature. I have read about equivalence classes, but, don't possess much expert intuition. So, here's my attempt : </p>
<p>$(x,1) \in x/1 \implies (1,x) \in x/1 \implies (x,x) \in x/1$</p>
<blockquote>
<p>Also, the equivalence classes are disjoint, $\implies$ No other equivalence class shall contain elements of the form $(x,1)$ or $(1,x)$ or $(x,x) \implies x/1 =y/1~~ \forall ~~x,y \in F$ . How does this $\implies x=y$ which is necessary for the one-one condition?</p>
</blockquote>
<p><strong>Onto Nature</strong>: Corresponding to any element $(x/1)$ in the field of quotients, we can find $x \in F$. Hence, the mapping is onto.</p>
<p><strong>Is my attempt correct?</strong></p>
<p>Thank you for your help.</p>
| symplectomorphic | 23,611 | <p>You're confusing two things: the set of solutions to the normal equations, and the set of vectors over which you are searching for the least squared error. The second set is just the set of all vectors $x$ in whatever vector space you're working on. The squared error varies as you choose different vectors from the second set. The goal of the least squares problem is to find the vector(s) that minimize the squared error.</p>
<p>The claim is that if you get a solution to the normal equations, then it also solves the least square problem, in the following sense: if you chose any other vector from the second set besides the solution to the normal equations, you would get a squared error at least as large as the squared error you get using the solution to the normal equations.</p>
|
415,743 | <p><strong>Given Problem</strong> is to solve this separable differential equation:</p>
<p>$$y^{\prime}=\frac{y}{4x-x^2}.$$</p>
<p><strong>My approach</strong> was to build the integral of $y^{\prime}$:</p>
<p>$$\int y^{\prime} = \int \frac{y}{4x-x^2}dy = \frac{y^2}{2(4x-x^2)}.$$</p>
<p>But now I am stuck: what would be the next step? And what would the solution look like? Or is this already the solution? I doubt it.</p>
<p><strong>P.S.</strong> edits were only made to improve language and latex</p>
| Alex Wertheim | 73,817 | <p>You seem to be slightly confused - where did the $dy$ come from? Why did you make that choice? You should apply separation of variables to solve this problem. </p>
<p>That is, given </p>
<p>$$ y' = \frac{y}{4x-x^{2}}$$</p>
<p>Write this as: </p>
<p>$$ \frac{dy}{dx} = \frac{y}{4x-x^{2}}$$</p>
<p>Separating the variables, we have:</p>
<p>$$ \frac{dy}{y} = \frac{dx}{4x-x^{2}}$$</p>
<p>NOW we can integrate:</p>
<p>$$\int \frac{dy}{y} = \int \frac{dx}{4x-x^{2}}$$</p>
<p>$$\implies \ln y = \int \frac{dx}{4x-x^{2}}$$</p>
<p>From here, I recommend factoring the expression on the denominator of the right hand side and using partial fractions decomposition. I think you can take it from here, but feel free to post if you are still lost.</p>
|
3,051,452 | <p>How can I prove that <span class="math-container">$\mathbb Q \times \mathbb Q$</span> is connected? I was trying to prove that it isn't because <span class="math-container">$\mathbb Q$</span> is not connected but I can't find two open <span class="math-container">$A,B$</span> to prove it.</p>
| José Carlos Santos | 446,262 | <p>You know that <span class="math-container">$\mathbb Q$</span> is not connected. That means that there is a set <span class="math-container">$A\subset\mathbb Q$</span> such that <span class="math-container">$A$</span> is open and closed in <span class="math-container">$\mathbb Q$</span> and that furthermore <span class="math-container">$A\neq\emptyset,\mathbb Q$</span>. So, consider the set <span class="math-container">$A\times A$</span>.</p>
|
30,339 | <p>I need to construct a vector similar to:</p>
<pre><code>v[x_]:={0, 0, x, 0, 0, 2*x, 0, 0, 3*x, ....., 0, 0, n*x};
</code></pre>
<p>where <code>n=10^9</code>. How can I make such a vector wisely?</p>
| Pankaj Sejwal | 1,561 | <p>Just one more way,but is not packed array,</p>
<pre><code>Table[{0, 0, i x} /. List -> Sequence, {i, 1, 4}]
</code></pre>
<blockquote>
<p>{0, 0, x, 0, 0, 2 x, 0, 0, 3 x, 0, 0, 4 x}</p>
</blockquote>
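Outside Mathematica, a plain-Python sketch of the same strided construction (my addition, not part of the original answer); note that at n = 10^9 the dense vector of length 3n needs tens of gigabytes, so a formula view (element at index 3k - 1 equals k·x, all others 0) may be preferable to materializing it:

```python
def triples(n, x=1.0):
    # Build [0, 0, x, 0, 0, 2x, ..., 0, 0, n*x] via strided slice assignment.
    v = [0.0] * (3 * n)
    v[2::3] = [x * k for k in range(1, n + 1)]
    return v
```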
|
1,763,377 | <p>Let $T:V \to V$ be a linear operator such that $T^5 \neq 0,$ but $T^6 = 0.$ Suppose $V$ is isomorphic to $\mathbb{R}^6.$ Prove that there does not exist an $S:V \to V$ such that $S^2 = T.$ Does the answer change if $V$ is isomorphic to $\mathbb{R}^{12}?$</p>
<p>My thoughts for the first part: Suppose such an $S$ did exist, then $T^6 = S^{12} = 0.$ but if $S^{12} = 0,$ then $S^{6} = 0,$ which implies that $T^3 = 0$ and therefore $T^5 = 0,$ contradicting the given information.<br>
I am just starting with these type of problems so I am not sure how to formalize these ideas, but I have a feeling that theorems involving nilpotence should be used. If anyone has a suggestion on how to make this argument more rigorous, it would be of much help.</p>
<p>For the second part, $S^{12} = 0$ does not imply any lower power of $S$ would be the zero matrix, so then an $S$ could exist such that $S^2=T,$ or $S^10 = T^5,$ while still satisfying $T^6 = S^{12} = 0.$</p>
<p>Any suggestions on how to formalize this (or better ways to do it) would be appreciated.</p>
| Bernard | 202,857 | <p><strong>Hint:</strong></p>
<p>This is a consequence of <em>Hamilton-Cayley's theorem</em>.</p>
|
240,894 | <p>I am importing an excel sheet into Mathematica. The result is a rectangular array. The array contains sequences of commas. Whenever a cell is empty in the Excel sheet consecutive commas appear in the imported array. How to get rid of them? So I am looking for a function <span class="math-container">$f$</span> such that for example</p>
<pre><code>f({1,2,3,2,1,,,,6,5,4,3,,,,8})={1,2,3,2,1,6,5,4,3,8}
</code></pre>
<p>It doesn't look easy to me because it looks like reinventing the wheel.</p>
<p>Any help will be much appreciated.</p>
| A little mouse on the pampas | 42,417 | <pre><code>DeleteCases[{1, 2, 3, 2, 1, , , , 6, 5, 4, 3, , , , 8}, Null]
</code></pre>
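The same cleanup outside Mathematica, e.g. in Python, where empty spreadsheet cells typically come in as None (my sketch, not from the answer):

```python
data = [1, 2, 3, 2, 1, None, None, None, 6, 5, 4, 3, None, None, None, 8]
# Keep only the non-empty entries, preserving order.
cleaned = [x for x in data if x is not None]
```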
|
368,745 | <p>Here $\mu$ is a probability measure. Another similar question is: is it a $\sigma$-compact space? Thank you in advance!</p>
| naaah | 73,716 | <p>This very rarely happens.</p>
<p>A real or complex topological vector space is locally compact if and only if it is finite-dimensional. A Banach space is $\sigma$-compact if and only if it is finite-dimensional. For $L^\infty(\mu)$ of a probability measure this happens if and only if $\mu$ has finitely many atoms and no continuous part.</p>
|
1,463,152 | <p>Let $$f(x)= \begin{cases}
e^{-1/x} & x>0 \\
0 & x\le 0
\end{cases}
$$
I want to show that $f^{(n)}(0)=0$ for every $n\in\mathbb{N}$.</p>
<p>My idea is to do this by induction on $n$. I can compute the first derivative by taking the limit of $$\frac{f(x)-f(0)}{x-0}=\frac{e^{-1/x}}{x}$$ as $x\rightarrow 0^+$. I'm not sure how to show this is $0$, which is why the induction step is unclear to me. If the statement is true for some $n$, then I can compute $f^{(n+1)}(0)$ by taking the limit $$\frac{f^{(n)}(x)-f^{(n)}(0)}{x-0}=\frac{f^{(n)}(x)}{x}$$ as $x\rightarrow 0^+$ and show that this is also $0$. Can anyone help?</p>
| vonbrand | 43,946 | <p>Use generating functions. To simplify later work, use the recurrence backwards to get <span class="math-container">$a_0 = -5/9$</span>. Define <span class="math-container">$A(z) = \sum_{n \ge 0} a_n z^n$</span>, shift the recurrence by 2, multiply by <span class="math-container">$z^n$</span> and sum over <span class="math-container">$n \ge 0$</span>, recognize some sums:</p>
<p><span class="math-container">$\begin{align*}
\sum_{n \ge 0} a_{n + 2} z^n
&= 6 \sum_{n \ge 0} a_{n + 1} z^n - 9 \sum_{n \ge 0} a_n z^n \\
\frac{A(z) - a_0 - a_1 z}{z^2}
&= 6 \frac{A(z) - a_0}{z} - 9 A(z)
\end{align*}$</span></p>
<p>Solve for <span class="math-container">$A(z)$</span>, as partial fractions:</p>
<p><span class="math-container">$\begin{align*}
A(z)
&= \frac{7 - 33 z}{9 - 54 z + 81 z^2} \\
&= \frac{4}{9 (1 - 3 z)^2} - \frac{11}{9 (1 - 3 z)}
\end{align*}$</span></p>
<p>Use the generalized binomial theorem:</p>
<p><span class="math-container">$\begin{align*}
(1 - u)^{-m}
&= \sum_{n \ge 0} (-1)^n \binom{-m}{n} u^n \\
&= \sum_{n \ge 0} \binom{n + m - 1}{m - 1} u^n
\end{align*}$</span></p>
<p>and read off the coefficients:</p>
<p><span class="math-container">$\begin{align*}
a_n
&= [z^n] A(z) \\
&= \frac{4}{9} \binom{n + 2 - 1}{2 - 1} \cdot 3^n - \frac{11}{9} \cdot 3^n \\
&= (4 n - 7) \cdot 3^{n - 2}
\end{align*}$</span></p>
<p>The given solution is definitely wrong.</p>
|
1,358,140 | <p>I haven't been able to solve this limit, so help would be appreciated.
L'Hopital is not allowed.
Thanks in advance.</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Set $x-1=h$</p>
<p>$$\lim_{h\to0}\dfrac{(h+1)e^{(1+h)^2}-e}h=\lim_{h\to0}e^{(1+h)^2}+e\cdot\lim_{h\to0}\dfrac{e^{2h+h^2}-1}h$$</p>
<p>Now $$\lim_{h\to0}\dfrac{e^{2h+h^2}-1}{2h+h^2}\cdot\lim_{h\to0}\dfrac{2h+h^2}h$$</p>
|
1,358,140 | <p>I haven't been able to solve this limit, so help would be appreciated.
L'Hopital is not allowed.
Thanks in advance.</p>
| Mark Viola | 218,419 | <p>The best way to proceed has already been presented. I thought that it might be instructive to see another approach. To that end, we write</p>
<p>$$e^{x^2}=e+e(x^2-1)+O(x^2-1)^2$$</p>
<p>so that</p>
<p>$$\begin{align}
\frac{xe^{x^2}-e}{x-1}&=\frac{e(x-1)+e(x^2-1)^2+O(x^2-1)^2}{x-1}\\\\
&=3e+O(x-1)\\\\
&\to 3e\,\,\text{as}\,\,x\to 1
\end{align}$$</p>
|
1,358,140 | <p>I haven't been able to solve this limit, so help would be appreciated.
L'Hopital is not allowed.
Thanks in advance.</p>
| Paramanand Singh | 72,031 | <p>\begin{align}
L &= \lim_{x \to 1}\frac{xe^{x^{2}} - e}{x - 1}\notag\\
&= \lim_{x \to 1}\frac{xe^{x^{2}} - e}{x^{2} - 1}\cdot\frac{x^{2} - 1}{x - 1}\notag\\
&= \lim_{x \to 1}\frac{xe^{x^{2}} - e}{x^{2} - 1}\cdot(x + 1)\notag\\
&= 2\lim_{x \to 1}\frac{xe^{x^{2}} - xe + xe - e}{x^{2} - 1}\notag\\
&= 2\lim_{x \to 1}\left(xe\cdot\frac{e^{x^{2} - 1} - 1}{x^{2} - 1} + e\cdot\frac{x - 1}{x^{2} - 1}\right)\notag\\
&= 2\lim_{x \to 1}\left(xe\cdot\frac{e^{x^{2} - 1} - 1}{x^{2} - 1} + \frac{e}{x + 1}\right)\notag\\
&= e + 2e\lim_{x \to 1}\frac{e^{x^{2} - 1} - 1}{x^{2} - 1}\notag\\
&= e + 2e\lim_{t \to 0}\frac{e^{t} - 1}{t}\text{ (by putting }t = x^{2} - 1)\notag\\
&= e + 2e = 3e\notag
\end{align}</p>
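A numerical spot-check of the value 3e (my addition, plain Python): evaluating the difference quotient just off x = 1,

```python
import math

f = lambda x: (x * math.exp(x**2) - math.e) / (x - 1)
approx = f(1 + 1e-7)   # should be close to 3e ≈ 8.15485
```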
|
167,463 | <p>How many permutations of the letters of the word "DEFEATED" are there in which the E's are separated from each other?</p>
<p>I relied on the fact that: (The number of permutations of the letters in this word with NO restriction) - (The number of permutations of the letters in this word with the 3 E's are sticking together) = The required number of permutations.</p>
<p>The number of permutations of the letters in this word with NO restriction <span class="math-container">$=\displaystyle\frac{8!}{2!3!}=3360$</span>.</p>
<p>The number of permutations of the letters in this word with the 3 E's are sticking together <span class="math-container">$=\displaystyle\frac{6!}{2!}=360$</span> (in here I regarded the 3 E's as 1 item as a whole so (EEE)DFATD are 6 items in total with 2 identical letters (D) so the result is <span class="math-container">$\displaystyle\frac{6!}{2!}$</span>).</p>
<p>Now, doing the subtraction gives 3000 which should be the answer to the question but this is not correct according to the answer given in the book. What might be wrong in my method?!</p>
| Old John | 32,441 | <p>My preferred method for this goes as follows:</p>
<p>First work out the number of permutations of the letters which are not E: this is just $5!/2$ (divide by 2 for the number of ways of rearranging the identical Ds).</p>
<p>Now imagine these letters written something like this: .D.F.A.T.D.</p>
<p>If the Es are to be separated, then we have to choose 3 of the dots to be the places where the Es can go.</p>
<p>If you work out the number of ways of choosing 3 from 6, and then multiply by $5!/2$, you should get the right answer.</p>
<p>(As for your method: subtracting only the arrangements in which all three E's form a single block still leaves arrangements where exactly two E's are adjacent, so $3360-360$ overcounts.)</p>
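<p>If you'd like to confirm the count by brute force, a short Python check (hypothetical, just for verification) enumerates the distinct arrangements of DEFEATED with no two E's adjacent and compares with $\binom{6}{3}\cdot 5!/2$; both come out to $1200$:</p>

```python
from itertools import permutations
from math import comb, factorial

word = "DEFEATED"
# distinct arrangements (as tuples) with no two E's adjacent
good = {p for p in permutations(word) if "EE" not in "".join(p)}

print(len(good))                       # brute-force count: 1200
print(comb(6, 3) * factorial(5) // 2)  # gap method: 20 * 60 = 1200
```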
|
1,531,488 | <p>I would like to find the singularities and poles of $\frac{z^2 e^z}{1+e^{2z}}$.</p>
<p>I try $\frac{z^2 e^z}{1+e^{2z}} = \frac{z^2}{e^{-z}+e^{z}}$ and observe that the denominator looks like a cosine function. So I think the singularities are $i(2n+1)\pi/2$. But when I try to evaluate the poles, I fail, and different values of $n$ have different residues. </p>
<p>Could the residues be imaginary?</p>
| zahbaz | 176,922 | <p>Since $\frac{z^2 e^z}{1+e^{2z}} = \frac{z^2}{2\cosh z}$, let's start with</p>
<p>$$f(z)=\frac{z^2}{2\cos iz}$$</p>
<p>Now both $z^2$ and $\cos iz$ are entire. The function can be singular only where $\cos iz = 0$, that is, at $z_{0_n} = i\,\frac{(2n+1)\pi}{2}$, and since the zeroes of the denominator are first order, we have</p>
<p>$$\begin{align*}
Res f(z_{0_n}) & =\frac{z^2}{(2\cos iz)'}\bigg|_{z_{0_n}} =\frac{z_{0_n}^2}{-2i\sin (iz_{0_n})} \\ \\
& =\frac{-\left(\frac{(2n+1)\pi}{2}\right)^2}{2i\sin \left(\frac{(2n+1)\pi}{2}\right)} \\ \\
& =\frac{i\pi^2\left(2n+1\right)^2}{8\sin \left(\frac{\pi}{2} +\pi n\right)} \\ \\
& =\frac{i\pi^2\left(2n+1\right)^2}{8\cos\left(\pi n\right)} \\ \\
\end{align*}
$$</p>
<p>$$\boxed{Res f(z_{0_n}) =(-1)^n\frac{i\pi^2}{8}\left(2n+1\right)^2}$$</p>
<p>Compare with <a href="http://www.wolframalpha.com/input/?i=residues%20z%5E2%20%2F%20%282cos%28iz%29%29" rel="nofollow">WolframAlpha</a>...</p>
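<p>You can also cross-check the $n=0$ residue numerically by evaluating $(z-z_0)f(z)$ close to the pole $z_0=i\pi/2$ (a rough Python approximation, illustrative only):</p>

```python
import cmath
from math import pi

def f(z):
    # the original function z^2 e^z / (1 + e^{2z})
    return z**2 * cmath.exp(z) / (1 + cmath.exp(2 * z))

z0 = 1j * pi / 2            # the n = 0 pole
eps = 1e-6
approx = eps * f(z0 + eps)  # (z - z0) * f(z), close to the residue
exact = 1j * pi**2 / 8      # the n = 0 residue
print(approx, exact)        # the two values agree to several digits
```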
|
285,965 | <p>I was told to assume $f(x)$ is a polynomial with degree $d\geq 1$ with integer coefficients and positive leading coefficient.</p>
<p>(i) I need to show that there are infinitely many $x$ such that $f(x)$ isn't prime.</p>
<p>(ii) I also need to show that if $f(x_0) = m$, where $m>0$, then $f(x) \equiv 0 \pmod m$ whenever $x\equiv x_0 \pmod m$.</p>
<p>I tried (ii) and so far I have<br>
If $f(x_0) = m \neq 0 \pmod m$, we have $x \equiv x_0 \pmod m$ so then $f(x) \equiv 0 \pmod m$.</p>
<p>For (i) I am not sure where to start can someone give me an idea?</p>
| Math Gems | 75,092 | <p><strong>Hint</strong> $\ $ For some $\,x_0\,$ we have $\,f(x_0)= m\ne \pm1\: $ (see the note below for one proof). </p>
<p>It follows that $\,m\mid f(mn+x_0)\,$ since $\,{\rm mod\ } m\!:\ f(mn+x_0)\equiv f(x_0)\equiv m \equiv 0.\:$ </p>
<p><strong>Note</strong> $\ $ If $\,f(x) = \pm 1\,$ for all $\,x\in \Bbb Z\,$ then $\,f\!-\!1\,$ or $\,f\!+\!1\,$ would have infinitely many roots, impossible for a nonzero polynomial over a domain. Thus we do not need to use any order properties to deduce that $\,f\,$ takes a nonunit value.</p>
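<p>To see the hint in action, here is a small illustrative Python check using the (arbitrarily chosen) polynomial $f(x)=x^2+x+41$:</p>

```python
def f(x):
    # arbitrary sample polynomial with integer coefficients
    return x * x + x + 41

x0 = 1
m = f(x0)  # m = 43, so f(43 n + 1) is divisible by 43 for every n
for n in range(1, 6):
    v = f(m * n + x0)
    print(n, v, v % m)  # the remainder is always 0, so these values are composite
```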
|
2,210,560 | <p>As I understand it, <strong>partial orders</strong> are binary relations that are:</p>
<ul>
<li>Reflexive</li>
<li>Anti-symmetric</li>
<li>Transitive</li>
</ul>
<p>An example would be $\subseteq$ for sets</p>
<p>And if we add totality to this, we get a total (or linear) order, so a <strong>total order</strong> is</p>
<ul>
<li>Reflexive (this one is implied by totality, so can be removed from definition)</li>
<li>Anti-symmetric</li>
<li>Transitive</li>
<li>Total</li>
</ul>
<p>An example would be $\leq$ for numbers</p>
<p>But we also have <strong>strict linear orders</strong>, which are:</p>
<ul>
<li>Irreflexive (implied by asymmetry)</li>
<li>Asymmetric (implied by transitivity + irreflexivity)</li>
<li>Transitive</li>
<li>Connex (for any $a \not = b$: either $aRb$ or $bRa$)</li>
</ul>
<p>An example would be $<$ for numbers</p>
<p>So (first question), is there likewise something called a <strong>strict partial order</strong>, that would be:</p>
<ul>
<li>Irreflexive (implied by asymmetry)</li>
<li>Asymmetric (implied by transitivity + irreflexivity)</li>
<li>Transitive</li>
</ul>
<p>an example of which would be $\subset$ for sets? I can't find any reference for a such a term ...</p>
<p>But this also leads me to my second and main question. I do see references that say that 'order' is just short-hand for 'partial order' and that, as such, <em>could</em> be a total order. But if an 'order' has to be a partial order, then it has to be reflexive, and hence cannot be a strict total order. ... which is weird, because you'd think a strict total order would still be considered some kind of 'order' ...</p>
<p>I know there is such a thing as a 'preorder', which is reflexive and transitive, but without being anti-symmetric or asymmetric it doesn't really feel like an 'order'. In fact, if symmetric, this would be an equivalence relation, which doesn't feel like it has any 'ordering' at all. Indeed, as the name implies, a 'preorder' seems to fall short of being an 'order'.</p>
<p>OK, but isn't there an obvious candidate for defining an 'order' (whether partial or linear/total) as any binary relation that is:</p>
<ul>
<li>Anti-symmetric </li>
<li>Transitive</li>
</ul>
<p>Interestingly, if we want to make this a 'strict order' by changing anti-symmetry into the stronger asymmetry:</p>
<ul>
<li>Asymmetric (and thus also anti-symmetric)</li>
<li>Transitive</li>
</ul>
<p>we obtain the 'strict partial order' from earlier, since asymmetry and transitivity imply irreflexivity. But the more general 'order' is not the same as a partial order, as an 'order' would not insist on reflexivity ... nor irreflexivity ... indeed it would merely indicate that there is an 'ordering' between the different objects, i.e. how an object relates to itself a general 'order' wouldn't care about.</p>
<p>So, is there anyone that does this? Or are we implicitly doing this (but then: what about the references that say 'order' means 'partial order'?). Or is there a good reason not to do this?</p>
| Fabio Somenzi | 123,852 | <p>Transitivity is the fundamental property of all relations that we call "something something" order. Of course, an equivalence relation is also transitive, and in fact is also a <em>preorder</em>. </p>
<p>So, maybe, one can start from transitive relations, split them according to whether they are reflexive, irreflexive, or neither. (Obviously, there's nothing new in this taxonomy.) On the irreflexive branch one gets exactly the strict partial orders. On the reflexive branch one gets preorders and their specializations, namely, partial orders and equivalence relations.</p>
<p>On the third branch we find the riff-raff transitive relations, and I'm not sure anybody calls them orders. There are also preorders that are neither partial orders nor equivalence relations, of course. So, maybe one could adopt the definition that an ordering relation is a binary relation that is transitive and either reflexive and antisymmetric or irreflexive.</p>
<p>The only main difference from the definition you consider is that a relation that is transitive and antisymmetric, but neither reflexive nor irreflexive, is not considered an order relation.</p>
<p>Totality (linearity) can be specified by saying that for all $a$ and $b$, if $a \neq b$, then either $a R b$ or $b R a$. This works for both reflexive and irreflexive relations. (Thanks to @mlc for reminding me to cover this detail.)</p>
|
2,210,560 | <p>As I understand it, <strong>partial orders</strong> are binary relations that are:</p>
<ul>
<li>Reflexive</li>
<li>Anti-symmetric</li>
<li>Transitive</li>
</ul>
<p>An example would be $\subseteq$ for sets</p>
<p>And if we add totality to this, we get a total (or linear) order, so a <strong>total order</strong> is</p>
<ul>
<li>Reflexive (this one is implied by totality, so can be removed from definition)</li>
<li>Anti-symmetric</li>
<li>Transitive</li>
<li>Total</li>
</ul>
<p>An example would be $\leq$ for numbers</p>
<p>But we also have <strong>strict linear orders</strong>, which are:</p>
<ul>
<li>Irreflexive (implied by asymmetry)</li>
<li>Asymmetric (implied by transitivity + irreflexivity)</li>
<li>Transitive</li>
<li>Connex (for any $a \not = b$: either $aRb$ or $bRa$)</li>
</ul>
<p>An example would be $<$ for numbers</p>
<p>So (first question), is there likewise something called a <strong>strict partial order</strong>, that would be:</p>
<ul>
<li>Irreflexive (implied by asymmetry)</li>
<li>Asymmetric (implied by transitivity + irreflexivity)</li>
<li>Transitive</li>
</ul>
<p>an example of which would be $\subset$ for sets? I can't find any reference for a such a term ...</p>
<p>But this also leads me to my second and main question. I do see references that say that 'order' is just short-hand for 'partial order' and that, as such, <em>could</em> be a total order. But if an 'order' has to be a partial order, then it has to be reflexive, and hence cannot be a strict total order. ... which is weird, because you'd think a strict total order would still be considered some kind of 'order' ...</p>
<p>I know there is such a thing as a 'preorder', which is reflexive and transitive, but without being anti-symmetric or asymmetric it doesn't really feel like an 'order'. In fact, if symmetric, this would be an equivalence relation, which doesn't feel like it has any 'ordering' at all. Indeed, as the name implies, a 'preorder' seems to fall short of being an 'order'.</p>
<p>OK, but isn't there an obvious candidate for defining an 'order' (whether partial or linear/total) as any binary relation that is:</p>
<ul>
<li>Anti-symmetric </li>
<li>Transitive</li>
</ul>
<p>Interestingly, if we want to make this a 'strict order' by changing anti-symmetry into the stronger asymmetry:</p>
<ul>
<li>Asymmetric (and thus also anti-symmetric)</li>
<li>Transitive</li>
</ul>
<p>we obtain the 'strict partial order' from earlier, since asymmetry and transitivity imply irreflexivity. But the more general 'order' is not the same as a partial order, as an 'order' would not insist on reflexivity ... nor irreflexivity ... indeed it would merely indicate that there is an 'ordering' between the different objects, i.e. how an object relates to itself a general 'order' wouldn't care about.</p>
<p>So, is there anyone that does this? Or are we implicitly doing this (but then: what about the references that say 'order' means 'partial order'?). Or is there a good reason not to do this?</p>
| egreg | 62,967 | <p>A <em>strict partial order</em> is a relation that's irreflexive and transitive (asymmetric is a consequence). This is the most common definition.</p>
<p>Actually, this notion is completely equivalent to the notion of <em>partial order</em> (a reflexive, antisymmetric and transitive relation).</p>
<p>Indeed, if $X$ is a set and $\Delta_X=\{(x,x):x\in X\}$, we have that</p>
<ul>
<li><p>if $S$ is a strict partial order on $X$, then $S^+=S\cup\Delta_X$ is a partial order;</p></li>
<li><p>if $R$ is a partial order on $X$, then $R^{-}=R\setminus\Delta_X$ is a strict partial order on $X$;</p></li>
<li><p>if $S$ is a strict partial order on $X$, then $S=(S^+)^-$;</p></li>
<li><p>if $R$ is a partial order on $X$, then $R=(R^-)^+$.</p></li>
</ul>
<p>You can try your hand in proving the statements.</p>
<p>So any strict partial order determines a unique partial order and conversely. Passing from $S$ to $S^+$ is essentially the same we do when passing from $<$ on numbers to $\le$.</p>
<p>The property of being a linear (or total) order can be expressed by</p>
<blockquote>
<p>for all $a,b\in X$, if $a\ne b$, then either $a\mathrel{T}b$ or $b\mathrel{T}a$</p>
</blockquote>
<p>where $T$ is a (strict) partial order.</p>
<p>Are strict partial orders useful? Yes. If you compare the two definitions, you see that equality is not necessary in the definition of a strict partial order (not for linear ones, though), which makes them attractive for certain logic frameworks where equality has no particular status with respect to other predicates.</p>
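<p>The four identities above can be checked mechanically on a small example; here is an illustrative Python sketch (relations encoded as sets of pairs, names chosen for this example only):</p>

```python
X = {1, 2, 3}
delta = {(x, x) for x in X}  # the diagonal of X

S = {(a, b) for a in X for b in X if a < b}  # strict partial order: <
R = S | delta                               # associated partial order: <=

def plus(s):   # S -> S+ : adjoin the diagonal
    return s | delta

def minus(r):  # R -> R- : remove the diagonal
    return r - delta

print(minus(plus(S)) == S)  # True: (S+)- recovers S
print(plus(minus(R)) == R)  # True: (R-)+ recovers R
```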
|
1,385,197 | <p>For $n\in\mathbb N$,
$$a_{n+1}=2a_n+\frac{1}{a_n},\quad a_1=1.
$$
Can anyone give an explicit formula for all $a_n$? If such an explicit general formula doesn't exist, please explain why. I've tried to figure out the $n$-th iterate $f^{(n)}$ where $f(x)=2x+1/x$, and even $f(\tan(t))$, but in both cases I failed. Since the recurrence is neither linear nor homogeneous, the generating function method doesn't apply here.</p>
| vonbrand | 43,946 | <p>Take the recurrence, and square:</p>
<p>$$
a^2_{n + 1} = 4 a^2_n + 4 + \frac{1}{a^2_n}
$$</p>
<p>We know that $a_n \to \infty$, so as a first approximation you have for $b_n = a^2_n$:</p>
<p>$$
b_{n + 1} = 4 b_n + 4
$$</p>
<p>For $b_1 = 1$ (i.e. $a_1 = 1$) this one has solution:</p>
<p>$$
b_n = \frac{7}{3} \cdot 4^{n - 1} - \frac{4}{3}
$$</p>
<p>So to first order $a_n \sim \sqrt{7/3}\, 2^{n - 1}$, i.e. $a_n$ grows like $C \cdot 2^n$ for some constant $C$; the neglected $1/b_n$ terms only shift the constant. Replacing this in the recurrence would allow you to get tighter asymptotics.</p>
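<p>A quick numerical experiment (illustrative Python, floating point only) supports the geometric growth: the ratio $a_n/2^n$ settles to a constant.</p>

```python
a = 1.0      # a_1
ratios = []
for n in range(1, 31):
    ratios.append(a / 2**n)  # track a_n / 2^n
    a = 2 * a + 1 / a        # the recurrence

print(ratios[-3:])  # the ratio has settled to a constant near 0.80
```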
|
3,454,810 | <blockquote>
<p>Study the following sequence of numbers:
<span class="math-container">$$\forall n\in\mathbb{N}, u_n=\sum_{k=n}^{2n}\frac{k}{\sqrt{n^2+k^2}}$$</span></p>
</blockquote>
<p>I tried to calculate <span class="math-container">$u_{n+1}-u_n$</span>, but I couldn't simplify the expression.</p>
<p>Plotting the sequence shows arithmetic (or seems to be an) progression.</p>
| Khosrotash | 104,171 | <p><span class="math-container">$$\forall n\in\mathbb{N}, u_n=\sum_{k=n}^{2n}\frac{k}{\sqrt{n^2+k^2}}\\ u_{n+1}=\sum_{k=n+1}^{2n+2}\frac{k}{\sqrt{(n+1)^2+k^2}}\\u_{n+1}-u_{n}=\underbrace{\sum_{k=n+1}^{2n+2}\frac{k}{\sqrt{(n+1)^2+k^2}}}_{n+2 \space terms}-\underbrace{\sum_{k=n}^{2n}\frac{k}{\sqrt{n^2+k^2}}}_{n+1 \space terms}\\$$</span></p>
<p>Each summand lies between $\frac{1}{\sqrt2}$ (at $k=n$) and $\frac{2}{\sqrt5}$ (at $k=2n$), since $\frac{k}{\sqrt{n^2+k^2}}$ is increasing in $k$; hence $\frac{n+1}{\sqrt2}\le u_n\le \frac{2(n+1)}{\sqrt5}$, which already shows that $u_n$ grows linearly in $n$.</p>
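<p>A quick numerical check (illustrative Python) also shows the successive differences $u_{n+1}-u_n$ settling near $\sqrt5-\sqrt2\approx 0.822$, consistent with the near-arithmetic behaviour you observed:</p>

```python
from math import sqrt

def u(n):
    # u_n = sum_{k=n}^{2n} k / sqrt(n^2 + k^2)
    return sum(k / sqrt(n * n + k * k) for k in range(n, 2 * n + 1))

for n in (10, 100, 1000):
    print(n, u(n + 1) - u(n))
print(sqrt(5) - sqrt(2))  # limiting difference, about 0.822
```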
|
121,852 | <p>I am creating a Java game with collisions. I found myself stuck on the following problem.</p>
<p>I have got two known lines: $y$ and $i.$ $i$ is the inbound direction and $o$ the outbound direction, therefore both angles are the same. From $y$ and $i,$ I have calculated $\alpha.$ I also know the coordinates of the intersection point $(x,y).$ $y$ is not the horizontal axis. $y$ and $i$ can be any line of the format $y = ax + b.$</p>
<p>Could anyone help me out on getting the equation $y = ax + b$ for line $o$?</p>
<p><img src="https://i.stack.imgur.com/wBE0N.png" width="300" height="300"></p>
| Américo Tavares | 752 | <p>This is an approach using standard methods from analytic geometry and trigonometry.</p>
<p><img src="https://i.stack.imgur.com/xFi62.jpg" alt="enter image description here"></p>
<p>For convenience instead of $(x,y)$ I call the coordinates of the point of intersection $(h,k)$. Since the line $o$ passes through $(h,k)$ its equation is of the form $$y=a(x-h)+k=ax-ah+k=ax+b,\tag{0}$$ where $a$ is the slope and $b=k-ah$. We need only to find $a$. By the same reason the equation of the lines $y$ and $i$ are of a similar form. Let the equation of $y$ be
$$y=m(x-h)+k=mx-mh+k=mx+b_{y},\quad b_y=k-mh\tag{A}$$ </p>
<p>and the equation of $i$</p>
<p>$$y=m'(x-h)+k=m'x-m'h+k=m'x+b_{i},\quad b_i=k-m'h.\tag{B}$$</p>
<p>Let $\theta _{y}$ and $\theta _{i}$ be the angles between a horizontal line $\ell $ and the lines, respectivelly, $y$ and $i$. The least angle $\alpha $ between lines $y$ and $i$ is $\alpha =\theta _{y}-\theta _{i}$ or $\alpha =\pi -\left( \theta _{y}-\theta
_{i}\right) $ (see picture). Therefore $\tan \alpha =\tan \left( \theta _{y}-\theta _{i}\right) $ or $
\tan \alpha =-\tan \left( \theta _{y}-\theta _{i}\right) $.
Applying the formula of tangent of the difference of angles
$$
\begin{equation}
\tan \left( \theta _{y}-\theta _{i}\right) =\frac{\tan \theta _{y}-\tan
\theta _{i}}{1+\tan \theta _{y}\cdot\tan\theta _{i}}
\end{equation}\tag{1}$$
we get
$$
\begin{equation}
\tan \alpha =\left\vert \frac{m-m'}{1+mm'}\right\vert ,
\end{equation}\tag{2}$$
where $m=\tan \theta _{y}$ and $m'=\tan \theta _{i}$. The least
angle between $y$ and $o$ is equal to $\alpha$. Similarly it verifies the equation
$$
\begin{equation}
\tan \alpha=\left\vert \frac{m-a}{1+ma}\right\vert ,
\end{equation}\tag{3}$$
where $a=\tan \theta _{o}$ is the slope of line $o$. From $(2)$ and $(3)$ we have
$$
\begin{equation}
\frac{m-m'}{1+mm'} =\pm \frac{m-a}{1+ma}.
\end{equation}\tag{4}$$
Assuming that $\alpha \leq \pi /2$ and solving for $a$ we get two possible solutions, depending on the sign. The positive sign is excluded because the respective solution is $a=m'$. For the negative sign we get</p>
<p>$$\begin{equation}a=\frac{2m-m^{\prime }+m^{2}m^{\prime }}{1-m^{2}+2mm^{\prime }}.\end{equation}\tag{5}
$$</p>
<p>Consequently the equation of the line $o$ is given by
$$\begin{equation}
y=\frac{2m-m^{\prime }+m^{2}m^{\prime }}{1-m^{2}+2mm^{\prime }}x-\frac{2m-m^{\prime }+m^{2}m^{\prime }}{1-m^{2}+2mm^{\prime }}h+k.
\end{equation}\tag{6}$$</p>
|
79,991 | <p>I'm attempting to MapThread a function of two lists that requires the index of the list values. For example,</p>
<pre><code>MapThread[#1*i+#2*j&,{{a,b,c},{e,f,g}}]
</code></pre>
<p>Where #1 represents the value from list 1, #2 the value from list 2, i the index of list 1, and j the index of list 2. The expected output is</p>
<pre><code>{a+e,2b+2f,3c+3g}
</code></pre>
<p>This would presumably be accomplished by an "IndexedMapThread" function, but I'm not sure something like that exists in Mathematica currently. </p>
<p>Any suggestions on how to do this simply? </p>
| kglr | 125 | <p>Update:</p>
<pre><code>ClearAll[imtF]
imtF[foo_] := Module[{i = 1}, foo[#, i++] & /@ Transpose@#] &
</code></pre>
<p>Examples:</p>
<pre><code>imtF[#2 (Plus @@ #) &][{{a, b, c}, {e, f, g}}]
(* {a + e, 2 (b + f), 3 (c + g)} *)
xx = {{a, b, c}, {e, f, g}, {x, y, z}};
imtF[#2 (Plus @@ #) &][xx]
(* {a + e + x, 2 (b + f + y), 3 (c + g + z)} *)
imtF[Plus @@ Times@## &][xx]
(* {a + e + x, 2 b + 2 f + 2 y, 3 c + 3 g + 3 z} *)
fn[x_, i_] := #*i + #2*i + #3^i & @@ x (*Mr.W's example modified *)
imtF[fn][xx]
(* {a + e + x, 2 b + 2 f + y^2, 3 c + 3 g + z^3} *)
</code></pre>
<hr>
<pre><code>Range[Length@#] Thread@+## &[{a, b, c}, {e, f, g}]
MapIndexed[# #2[[1]] &, Thread[Plus[##]]] &[{a, b, c}, {e, f, g}]
MapIndexed[# #2[[1]] &, +## & @@@ ({##}\[Transpose])] &[{a, b, c}, {e, f, g}]
MapIndexed[+(## & @@ # ) #2[[1]] &, #\[Transpose]] &@{{a, b, c}, {e, f, g}}
</code></pre>
<p>all give</p>
<pre><code>(* {a + e, 2 (b + f), 3 (c + g)} *)
</code></pre>
|