| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,196,463 | <blockquote>
<p>Prove that if <span class="math-container">$C \subset B$</span> where <span class="math-container">$B$</span> is a bounded subset of a metric space <span class="math-container">$(X, d)$</span>, then <span class="math-container">$C$</span> is bounded and <span class="math-container">$\operatorname{diam} C \leq \operatorname{diam} B$</span></p>
</blockquote>
<p><em>My Attempted Proof</em></p>
<p>Since <span class="math-container">$B$</span> is bounded, <span class="math-container">$\exists \delta > 0$</span> and <span class="math-container">$x \in X$</span> such that <span class="math-container">$B \subseteq B_{(X, d)} (x, \delta)$</span></p>
<p>Given <span class="math-container">$C \subseteq B$</span> suppose that <span class="math-container">$\operatorname{diam} C > \operatorname{diam} B$</span>. Trivially we have <span class="math-container">$\operatorname{diam} B \leq \delta$</span>, hence <span class="math-container">$d(b_1, b_2) \le \delta$</span> for all <span class="math-container">$b_1, b_2 \in B$</span></p>
<p>If <span class="math-container">$\operatorname{diam} C > \operatorname{diam} B$</span>, then there must exist <span class="math-container">$c_1, c_2 \in C$</span> such that <span class="math-container">$d(c_1, c_2) > d(b_1, b_2)$</span> for all <span class="math-container">$ b_1, b_2 \in B$</span>, but since <span class="math-container">$C \subseteq B$</span> we have <span class="math-container">$c_1, c_2 \in B$</span> reaching a contradiction.</p>
<p>Therefore it follows that <span class="math-container">$\operatorname{diam} C \leq \operatorname{diam} B$</span></p>
<p>Since <span class="math-container">$B$</span> is bounded we have <span class="math-container">$C \subseteq B \subseteq B_{(X, d)}(x, \delta)$</span> and hence <span class="math-container">$C$</span> is also bounded. <span class="math-container">$\ \ \square$</span></p>
<hr />
<p>Is my proof correct? If so how rigorous is it? Are there points where it can be improved? Any comments on my proof style and writing are greatly appreciated.</p>
| Henno Brandsma | 4,280 | <p>It can be simplified: the diameter proof need not be done by contradiction. </p>
<p>$B$ is bounded so $B \subseteq B(x, R)$ for some $x \in X, R>0$. (No need for $(X,d)$ in the subscript, as this is clear from context.)
Then also $C \subset B$ so the same $x$ and $R$ work to show $C$ is bounded.</p>
<p>Let $c_1, c_2 \in C$; then $d(c_1, c_2) \le \operatorname{diam}(B)$, as the latter number is by definition an upper bound for the set $\{d(b_1, b_2): b_1, b_2 \in B\}$ and $d(c_1, c_2)$ is one of those numbers (as $C \subset B$).
So $\operatorname{diam}(B)$ is an upper bound for $\{d(c_1,c_2): c_1,c_2 \in C\}$,
and $\operatorname{diam}(C)$ is by definition the smallest of these upper bounds, so $\operatorname{diam}(C) \le \operatorname{diam}(B)$.</p>
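As a numeric sanity check of this monotonicity (not a substitute for the proof; the finite point sets and the Euclidean metric are arbitrary choices of mine):

```python
import itertools
import math
import random

def diam(points):
    # sup of pairwise distances; for a finite set this is just the max
    return max((math.dist(p, q) for p, q in itertools.product(points, points)),
               default=0.0)

random.seed(0)
B = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
C = B[:20]  # any subset of B

# C's distance set is a subset of B's, so its sup (the diameter)
# cannot be larger -- exactly the argument in the answer above.
assert diam(C) <= diam(B)
```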
|
2,196,463 | <blockquote>
<p>Prove that if <span class="math-container">$C \subset B$</span> where <span class="math-container">$B$</span> is a bounded subset of a metric space <span class="math-container">$(X, d)$</span>, then <span class="math-container">$C$</span> is bounded and <span class="math-container">$\operatorname{diam} C \leq \operatorname{diam} B$</span></p>
</blockquote>
<p><em>My Attempted Proof</em></p>
<p>Since <span class="math-container">$B$</span> is bounded, <span class="math-container">$\exists \delta > 0$</span> and <span class="math-container">$x \in X$</span> such that <span class="math-container">$B \subseteq B_{(X, d)} (x, \delta)$</span></p>
<p>Given <span class="math-container">$C \subseteq B$</span> suppose that <span class="math-container">$\operatorname{diam} C > \operatorname{diam} B$</span>. Trivially we have <span class="math-container">$\operatorname{diam} B \leq \delta$</span>, hence <span class="math-container">$d(b_1, b_2) \le \delta$</span> for all <span class="math-container">$b_1, b_2 \in B$</span></p>
<p>If <span class="math-container">$\operatorname{diam} C > \operatorname{diam} B$</span>, then there must exist <span class="math-container">$c_1, c_2 \in C$</span> such that <span class="math-container">$d(c_1, c_2) > d(b_1, b_2)$</span> for all <span class="math-container">$ b_1, b_2 \in B$</span>, but since <span class="math-container">$C \subseteq B$</span> we have <span class="math-container">$c_1, c_2 \in B$</span> reaching a contradiction.</p>
<p>Therefore it follows that <span class="math-container">$\operatorname{diam} C \leq \operatorname{diam} B$</span></p>
<p>Since <span class="math-container">$B$</span> is bounded we have <span class="math-container">$C \subseteq B \subseteq B_{(X, d)}(x, \delta)$</span> and hence <span class="math-container">$C$</span> is also bounded. <span class="math-container">$\ \ \square$</span></p>
<hr />
<p>Is my proof correct? If so how rigorous is it? Are there points where it can be improved? Any comments on my proof style and writing are greatly appreciated.</p>
| Michelle Osorio | 605,830 | <p>You can try to show that </p>
<ol>
<li>if <span class="math-container">$A\subset B$</span> then <span class="math-container">$\sup A\leq \sup B$</span></li>
<li>Show that <span class="math-container">$\{d(x,y)|x,y\in A\}\subset \{d(x',y')|x',y'\in B\}$</span></li>
<li>Note that <span class="math-container">$\operatorname{diam}$</span> is the <span class="math-container">$\sup$</span> of the corresponding set of distances, and use (1.) to show that <span class="math-container">$\operatorname{diam}(A) \leq \operatorname{diam}(B)$</span></li>
</ol>
|
898,543 | <p>I have the random vector $(X,Y)$ with density function $8x^{2}y$ for $0 < x < 1$, $0 < y < \sqrt{x}$ I am trying to find the marginal distributions of $X$ and $Y$. For $X$ this seems to be simply the integral $\int_{0}^{\sqrt{x}}8x^{2}y = 4x^{3}$, which is also the given solution, and follows the general formula I've gotten, where you find marginal distributions of a variable by integrating the joint PDF of all other variables over their supports. However, this seems to fail in the case of $Y$, where I try the integral $\int_{0}^{1}8x^{2}y = \frac{8y}{3}$, conflicting with the given answer of $\frac{8y}{3}(1-y^{6})$. What am I misunderstanding here? This seems painfully simple, and I have never had issues finding a marginal distribution like this before.</p>
| drhab | 75,923 | <p><strong>Hint:</strong></p>
<p>$P\left[L\mid B\right]=\frac{1}{7}$ i.e. $P\left[L\cap B\right]=\frac{1}{7}P\left[B\right]$</p>
<p>$P\left[B\mid L\right]=\frac{1}{3}$ i.e. $P\left[L\cap B\right]=\frac{1}{3}P\left[L\right]$</p>
<p>$1-P\left[L\cup B\right]=\frac{4}{5}$ </p>
<p>These equations are enough to find $P\left[L\cap B\right]$</p>
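Spelling out the hint with exact rational arithmetic (the intermediate algebraic steps in the comments are my own, not stated in the answer):

```python
from fractions import Fraction as F

# Write t = P[L ∩ B].  The two conditional probabilities give
#   P[B] = 7t   (from P[L|B] = 1/7)
#   P[L] = 3t   (from P[B|L] = 1/3)
# and 1 - P[L ∪ B] = 4/5 gives P[L ∪ B] = 1/5.
# Inclusion-exclusion: P[L] + P[B] - t = P[L ∪ B], i.e. 9t = 1/5.
t = F(1, 5) / 9
P_L, P_B = 3 * t, 7 * t

assert P_L + P_B - t == F(1, 5)   # inclusion-exclusion holds
assert t / P_B == F(1, 7)         # P[L|B] = 1/7
assert t / P_L == F(1, 3)         # P[B|L] = 1/3
print(t)  # 1/45
```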
|
72,537 | <blockquote>
<p>Let $A\in M_{n}$ have Jordan canonical form $J_{n_1}(\lambda_{1})\oplus\cdots\oplus J_{n_k}(\lambda_{k})$. If $A$ is non-singular ($\lambda_i\neq 0$), what is the Jordan canonical form of $A^{2}$?</p>
</blockquote>
<p>I can prove that if the eigenvalues of $A$ are $\sigma(A)=\{\lambda_{1},\dots, \lambda_{n} \}$ then $\sigma(A^{2})=\{\lambda_{1}^{2},\dots, \lambda_{n}^{2} \}$, for this reason I have been trying to attack this problem using this fact, but I am getting nowhere. How should I proceed?</p>
| Mariano Suárez-Álvarez | 274 | <p>Everything works blockwise, so you can simply assume that $A$ is one Jordan block...</p>
<p>So let $A=J_n(\lambda)$, which we can write as $\lambda I+N$ with $N=J_n(0)$. Then $A^2=\lambda^2I+2\lambda N+N^2$. The matrix $N'=2\lambda N+N^2$ is nilpotent and (because $\lambda\neq0$) has rank $n-1$, so it is conjugate to $N$. It follows that $A^2$ is conjugate to $\lambda^2I+N=J_n(\lambda^2)$.</p>
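The rank claim for $N' = 2\lambda N + N^2$ can be checked computationally; here is a sketch with exact rational arithmetic (the matrix size and the value of $\lambda$ are arbitrary choices of mine):

```python
from fractions import Fraction as F

def rank(M):
    # Gaussian elimination over the rationals
    M = [[F(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def jordan_nilpotent(n):
    # N = J_n(0): ones on the superdiagonal
    return [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

n, lam = 5, F(3)  # any size, any NONZERO eigenvalue
N = jordan_nilpotent(n)
Nsq = [[1 if j == i + 2 else 0 for j in range(n)] for i in range(n)]  # N^2
Nprime = [[2 * lam * N[i][j] + Nsq[i][j] for j in range(n)] for i in range(n)]

# rank n-1, so N' is conjugate to the single nilpotent block N
assert rank(Nprime) == n - 1
```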
|
82,254 | <p>Consider the standard form polyhedron, and assume that the rows of the matrix A are linearly independent.</p>
<p>$$ \left \{ x | Ax = b, x \geq 0 \right \} $$</p>
<p>(a) Suppose that two different bases lead to the same basic solution. Show that the basic solution is degenerate (has fewer than $m$ non-zero entries).</p>
<p>(b) Consider a degenerate basic solution. Is it true that it corresponds to two or more distinct bases? Prove or give a counterexample.</p>
<p>(c) Suppose that a basic solution is degenerate. Is it true that there exists an adjacent basic solution which is degenerate? Prove or give a counterexample.</p>
<p><strong>Solution</strong></p>
<p>(a) I think it's obvious, but how do I build the proof? Two different bases lead to the same basic solution when the last entering variable cannot be increased at all because its $b$ value equals $0$; therefore as a result we have the same basic solution. But how to prove that?</p>
<p>(b) No, a degenerate basic solution can correspond to one basis only as well. But how to prove that?</p>
<p><strong>Addendum</strong></p>
<p>I found great description of (a) and (b), but level of this text is much higher than I can apprehend. I will appreciate if someone could shed light on this explanation. </p>
<p>(a) every basic feasible solution is equivalent to an extreme point. However, there may exist more than one basis corresponding to the same basic feasible solution or extreme point. The case of degeneracy corresponds to that of an extreme point at which some $r > p \equiv n- m $ defining hyperplanes from $x\geq 0$ are binding. Hence, for any associated basis, $(r-p)$ of the $X_{B}$-variables are also zero. Consequently, the number of positive variables is $q = m-(r-p)<m$. In this case, each possible choice of a basis $B$ that includes the columns of these $q$ positive variables represents this point. Clearly, if there exists more than one basis representing an extreme point, then this extreme point is degenerate</p>
<p>(b) Consider example
$$x_{1} + x_{2} + x_{3} = 1$$</p>
<p>$$-x_{1} + x_{2} + x_{3} = 1$$</p>
<p>$$x_{1}, x_{2}, x_{3} \geq 0$$</p>
<p>Consider the solution $\bar{x}=(0,1,0)$. Observe that this is an extreme point or a basic feasible solution with a corresponding basis having $x_{1}$ and $x_{2}$ as basic variables. Moreover, this is a degenerate extreme point. There are four defining hyperplanes binding at $\bar{x}$. Moreover, there are three ways of choosing three linearly independent hyperplanes from this set that yield $\bar{x}$ as the (unique) solution. However, the basis associated with $\bar{x}$ is unique.
Consider a degenerate basic variable ${x_{B}}_{r}$ (with $\bar{b}_{r}=0$), which is such that $Ax=b$ does not necessarily imply that ${x_{B}}_{r}=0$. Given that such a variable exists, we will construct another basis representing this point. Let $x_{k}$ be some component of $x_{N}$ that has a nonzero coefficient $\theta_{r}$ in the row corresponding to ${x_{B}}_r$. Note that $x_{k}$ exists. Then consider a new choice of $(n-m)$ nonbasic variables given by ${x_{B}}_{r}$ and $x_{N-k}$, where $x_{N-k}$ represents the components of $x_{N}$ other than $x_{k}$. Putting ${x_{B}}_{r}= 0$ and $x_{N-k}=0$ above uniquely gives $x_{k}=\frac{\bar{b}_{r}}{\theta_{r}}=0$ from row $r$, and so ${x_{B}}_{i} = \bar{b}_{i}$ is obtained as before from the other rows. Hence, this corresponds to an alternative basis that represents the same extreme point. Finally, note that if no degenerate basic variable ${x_{B}}_{r}$ of this type exists, then there is only one basis that represents this extreme point.</p>
| Apurv | 240,799 | <p>I have a slightly different proof for part (a).
If the bases, $B$ and $B'$ are distinct, but correspond to the same basic feasible solution $x_b$ ($x_b$ corresponds to the vector of basic variables), then, by definition $Bx_b=b$ and $B'x_b=b$. Hence, $(B-B')x_b=0$. Since $B,B'$ are distinct, $dim(B-B') \geq 1$. Therefore, by rank-nullity theorem, $dim(x_b) \leq (m-1)$, which implies that at least one of the components of $x_b$ is zero. </p>
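The two-constraint example from the question's addendum can also be enumerated exhaustively; a hypothetical sketch (my own cross-check, not part of either proof) using Cramer's rule over the rationals:

```python
from fractions import Fraction as F
from itertools import combinations

# Standard-form system from the addendum: Ax = b, x >= 0, with m = 2 rows.
A = [[F(1), F(1), F(1)],
     [F(-1), F(1), F(1)]]
b = [F(1), F(1)]

basic_solutions = {}
for j1, j2 in combinations(range(3), 2):
    # 2x2 basis matrix from columns j1, j2, solved by Cramer's rule
    a, c = A[0][j1], A[0][j2]
    d, e = A[1][j1], A[1][j2]
    det = a * e - c * d
    if det == 0:
        continue  # columns are dependent: not a basis
    x = [F(0), F(0), F(0)]
    x[j1] = (b[0] * e - c * b[1]) / det
    x[j2] = (a * b[1] - b[0] * d) / det
    basic_solutions.setdefault(tuple(x), []).append((j1, j2))

# Both basic solutions are degenerate (fewer than m = 2 nonzero entries),
# and each arises from exactly ONE basis, supporting the "no" in part (b).
assert basic_solutions == {
    (F(0), F(1), F(0)): [(0, 1)],
    (F(0), F(0), F(1)): [(0, 2)],
}
```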
|
4,357,484 | <p>Suppose the following series:
<span class="math-container">\begin{eqnarray}
\sum_{k'}k'f_{k'}
\end{eqnarray}</span>
where <span class="math-container">$f_{k'}$</span> are some Fourier coefficients that result from a periodic function <span class="math-container">$f(t+T)=f(t)$</span>:
<span class="math-container">\begin{eqnarray}
f_{k'}=\frac{1}{T}\int_{0}^{T}dt e^{ik'2\pi t/T}f(t).
\end{eqnarray}</span>
Is it true then that:
<span class="math-container">\begin{eqnarray}
\sum_{k'}k'f_{k'}=\frac{1}{T}\int_{0}^{T}dt\, f(t)\left(\sum_{k'}k'e^{ik'2\pi t/T}\right)=0,
\end{eqnarray}</span>
that is, given that:
<span class="math-container">\begin{eqnarray}
g(t)=\left(\sum_{k'}k'e^{ik'2\pi t/T}\right)=2i\sum_{k'=1}^{+\infty}k'\sin(2\pi k' t/T)=0
\end{eqnarray}</span>
Is the above identity correct?</p>
<p><strong>EDIT:</strong> Wolfram Alpha gives the wrong answer for the sum <span class="math-container">$g(t)$</span>. So I would like to know how to determine <span class="math-container">$g(t)$</span> in closed form, if possible.</p>
| tomasz | 30,222 | <p>It's not correct. <span class="math-container">$A$</span> is only isomorphic to <span class="math-container">$\bigoplus_{i=1}^r(K)_i$</span> <em>as a <span class="math-container">$K$</span>-vector space</em>, not as a ring. It is also easy to find counterexamples: take <span class="math-container">$A=\mathbf Q[\sqrt 2]$</span>, <span class="math-container">$K=\mathbf Q$</span>.</p>
<p>To prove the result, notice that for any basis <span class="math-container">$\alpha_1,\alpha_2,\ldots,\alpha_n$</span> of <span class="math-container">$A$</span> over <span class="math-container">$K$</span>, you have <span class="math-container">$A=K[\alpha_1,\alpha_2,\ldots,\alpha_n]$</span>, so (by induction) it is enough to consider the case when <span class="math-container">$A=K[\alpha]$</span> for some <span class="math-container">$\alpha\in A$</span>. Then show that <span class="math-container">$K[\alpha]$</span> is a field if it's finite dimensional.</p>
|
3,394,378 | <p>I am stuck with this Precalculus problem about polynomial functions. The problem:</p>
<blockquote>
<p>Consider <span class="math-container">$f(x)=x^2+ax+b$</span> with <span class="math-container">$a^2-4b>0$</span>. Let <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> be the roots of <span class="math-container">$f$</span>. Assume that <span class="math-container">$f(x)$</span> divides <span class="math-container">$f(x+2)f(x-2)$</span>. Then</p>
<ol>
<li><p>Show that <span class="math-container">$\alpha=\beta+2$</span> or <span class="math-container">$\alpha=\beta-2$</span></p>
</li>
<li><p>Using <span class="math-container">$1.$</span>, find the minimum value of <span class="math-container">$f(x)$</span>.</p>
</li>
</ol>
</blockquote>
<p>Part <span class="math-container">$1$</span> is easy: write <span class="math-container">$f(x+2)f(x-2)=f(x)g(x)$</span> and substitute <span class="math-container">$x=\alpha$</span> to obtain <span class="math-container">$f(\alpha+2)f(\alpha-2)=0$</span>, so <span class="math-container">$\alpha+2$</span> or <span class="math-container">$\alpha-2$</span> is a root of <span class="math-container">$f$</span>. This root cannot be <span class="math-container">$\alpha$</span>, so <span class="math-container">$\alpha+2=\beta$</span> or <span class="math-container">$\alpha-2=\beta$</span>.</p>
<p>I am stuck with part <span class="math-container">$2$</span>. <strong>Any idea</strong>?</p>
<p>The minimum is obtained at <span class="math-container">$-\frac{a}{2}=\frac{\alpha+\beta}{2}=\beta+1$</span> or <span class="math-container">$\beta-1$</span> but I don't know how to continue.</p>
| Steven Alexis Gregory | 75,410 | <p>If the roots are <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, then the minimum occurs at
<span class="math-container">$x_{min} = \dfrac{\alpha + \beta}{2}$</span>.</p>
<p><span class="math-container">$$\begin{align}
f(x_{min})
&= (x_{min} - \alpha)(x_{min} - \beta) \\
&= -\dfrac{(\alpha-\beta)^2}{4}
\end{align}$$</span></p>
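Since part 1 forces $(\alpha-\beta)^2 = 4$ either way, the formula above gives a minimum value of $-1$ regardless of $\beta$; a small numeric sketch:

```python
# With alpha = beta + 2 (or beta - 2), (alpha - beta)^2 = 4, so the
# minimum of f is -(alpha - beta)^2 / 4 = -1, independent of beta.
def f_min(beta, shift):
    alpha = beta + shift
    x_min = (alpha + beta) / 2
    return (x_min - alpha) * (x_min - beta)

for beta in (-3.0, 0.0, 1.5, 10.0):
    for shift in (2, -2):
        assert f_min(beta, shift) == -1.0
```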
|
1,482,776 | <blockquote>
<p>Let $(X_t)$ be a continuous nonnegative supermartingale and $T = \inf\{t\geq 0 \colon X_t = 0 \}$ then $X_t = 0$ for every $t\geq T$.</p>
</blockquote>
<p>Idea of solution:</p>
<p>Since $T$ is a stopping time, by Doob's theorem:
$$E(X_{T+q} 1_{T < \infty} | F_T) \leq X_T 1_{T < \infty} =0 $$
for every $q \in \mathbb{Q}$</p>
<p>Then taking expectation we have that $E(X_{T+q}1_{T < \infty}) \leq 0$
. Since $X_{T+q}1_{T < \infty} $ is positive $X_{T+q}1_{T < \infty} = 0$ a.e.
(on a set $\Omega_q$ with $\mathbb{P}(\Omega_q)=1$). Taking $\Omega'=\cap_{q \in \mathbb{Q}^+} \Omega_q$ we will have that $$X_{T+t}1_{T < \infty} = 0$$
on $\Omega'$ for every $t\geq 0$.</p>
<p>The only problem is we are not allowed to use Doob's theorem since $T$ is not bounded and $X$ is not U.I. I tried to use $T \wedge k $ to make the stopping time bounded but I couldn't take the limit properly.</p>
| Math-fun | 195,344 | <p>We have $cos x= \sum_{j=0}^{\infty}\frac{(-1)^j}{(2j)!}x^{2j}$ hence \begin{align}
\frac{x^n}{\cos \sin x -\cos x}&= \frac{x^n}{\sum_{j=0}^{\infty}\frac{(-1)^j}{(2j)!}(\sin^{2j}x-x^{2j})}\\
&= \frac{x^n}{\sum_{j=1}^{\infty}\frac{(-1)^j}{(2j)!}(\sin^{2j}x-x^{2j})}\\
&=\frac{1}{\color{blue}{\frac{(-1)^1}{2!}}\frac{\sin^{2}x-x^{2}}{x^n}+\frac{1}{4!}\frac{\sin^{4}x-x^{4}}{x^n}+...}\\
\end{align}
Now note that $$\sin^2x-x^2=\color{red}{-\frac13}x^4+\frac{2 x^6}{45}+O\left(x^8\right)$$ and $$\sin^4x-x^4=-\frac{2 x^6}{3}+\frac{x^8}{5}+O\left(x^{10}\right)$$ This indicates that choosing $n=4$ we will have a leading non-zero term in the denominator (coming from $(\sin^2x-x^2)/x^4$), making for a limit of $6$. Choosing $n>4$, say $n=6$, gives the second term a nonzero limit but the first term will be diverging, making for a zero limit. Hence your only choice is $n=4$, and in this case the limit is $$\frac{1}{\color{blue}{\frac{(-1)^1}{2}}\color{red}{\frac{-1}{3}}}$$</p>
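The value $6$ for the $n=4$ case can be confirmed numerically (a quick sketch; the sample points are arbitrary small values):

```python
import math

# Numerically confirm lim_{x->0} x^4 / (cos(sin x) - cos x) = 6,
# matching 1 / ((-1/2)(-1/3)) from the series argument above.
def ratio(x):
    return x**4 / (math.cos(math.sin(x)) - math.cos(x))

assert abs(ratio(0.05) - 6) < 0.01
assert abs(ratio(0.01) - 6) < 0.01
```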
|
3,154,316 | <p>With regard to this curve:
<span class="math-container">$$3xy=x^3+y^3$$</span>
I understand that <span class="math-container">$\frac{dy}{dx}$</span> is not defined at <span class="math-container">$(0,0)$</span>, but, there must be some more information right as there are <span class="math-container">$2$</span> tangent lines. I know my question is not very specific but if anyone can elaborate on the derivative as we approach <span class="math-container">$(0,0)$</span>. Thanks</p>
| Community | -1 | <p>If I understand your question correctly, you understand that the Folium of Descarte has certain 'tangent-like' directions at the origin <span class="math-container">$(0,0)$</span>. However (since there are two directions), we cannot identify them by computing the derivative <span class="math-container">$\frac{dy}{dx}$</span>, so it would be nice to have some other way of finding them.</p>
<p>One way is to use polar coordinates. In polar coordinates, <span class="math-container">$x = r\cos \theta$</span> and <span class="math-container">$y = r\sin \theta$</span>. There, our problem comes into focus: the origin <span class="math-container">$(x,y) = (0,0)$</span> can be represented as <span class="math-container">$r = 0$</span> with no restriction on <span class="math-container">$\theta$</span>. However, if we write down the equation in polar, we find
<span class="math-container">$$r^2 \sin \theta \cos \theta = r^3 \left(\sin^3 \theta + \cos^3 \theta\right)$$</span>
Notice that the left-hand side has two powers of <span class="math-container">$r$</span>, while the right-hand side has three. This means that, to keep the equation balanced, as <span class="math-container">$r \to 0$</span>, <span class="math-container">$\sin \theta \cos \theta$</span> must go to <span class="math-container">$0$</span>. The only way for this to happen is if <span class="math-container">$\theta = \frac{\pi}{2} n$</span> for some natural number <span class="math-container">$n$</span>. From this, we can conclude that the 'tangent-like' directions at <span class="math-container">$(0,0)$</span> are vertical and horizontal, which agrees with the image Wikipedia has of the curve.</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/4/48/Kartesisches-Blatt.svg/800px-Kartesisches-Blatt.svg.png" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/48/Kartesisches-Blatt.svg/800px-Kartesisches-Blatt.svg.png" alt="enter image description here"></a></p>
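The horizontal and vertical tangent directions can also be corroborated numerically via the folium's standard rational parametrization (an assumption I am introducing; it is not stated in the post): $x = 3t/(1+t^3)$, $y = 3t^2/(1+t^3)$.

```python
# Standard rational parametrization of the folium (an assumption,
# not stated in the post): x = 3t/(1+t^3), y = 3t^2/(1+t^3).
def point(t):
    return 3 * t / (1 + t**3), 3 * t**2 / (1 + t**3)

for t in (1e-3, 0.5, 2.0, 1e3):
    x, y = point(t)
    assert abs(3 * x * y - (x**3 + y**3)) < 1e-9   # lies on the curve

# Near the origin the two branches (t -> 0 and t -> infinity) have
# slopes y/x = t -> 0 and x/y = 1/t -> 0: horizontal and vertical.
x0, y0 = point(1e-6)
assert abs(y0 / x0) < 1e-5   # horizontal branch
x1, y1 = point(1e6)
assert abs(x1 / y1) < 1e-5   # vertical branch
```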
|
3,154,316 | <p>With regard to this curve:
<span class="math-container">$$3xy=x^3+y^3$$</span>
I understand that <span class="math-container">$\frac{dy}{dx}$</span> is not defined at <span class="math-container">$(0,0)$</span>, but, there must be some more information right as there are <span class="math-container">$2$</span> tangent lines. I know my question is not very specific but if anyone can elaborate on the derivative as we approach <span class="math-container">$(0,0)$</span>. Thanks</p>
| Ted Shifrin | 71,348 | <p>Here's an alternative approach, which usually shows up in differential and algebraic geometry. It's called "blowing up" the origin. </p>
<p>Introduce the <em>slope</em> coordinate <span class="math-container">$m$</span> by <span class="math-container">$y=mx$</span> and rewrite the equation. You have
<span class="math-container">$$3xy=x^3+y^3 \iff 3x(mx) = x^3+(mx)^3 \iff 3mx^2 = x^3(1+m^3).$$</span>
Dividing out the <span class="math-container">$x^2$</span>, we obtain <span class="math-container">$3m = x(1+m^3)$</span>. When <span class="math-container">$x=0$</span> we get <span class="math-container">$m=0$</span> and we see that one branch of the curve comes in with slope <span class="math-container">$0$</span>. Now make the reciprocal substitution (to see what happens with infinite slope) <span class="math-container">$x=\ell y$</span>. Similarly we end up with
<span class="math-container">$3\ell = y(1+\ell^3)$</span>, and, when <span class="math-container">$y=0$</span>, we find that <span class="math-container">$\ell=0$</span>, so there is also a branch of the curve at the origin with infinite slope.</p>
<p><strong>Comment</strong>: By the way, polar coordinates is itself a blow-up of the origin, as we get the whole circle of possible directions <span class="math-container">$\theta\in [0,2\pi)$</span> when <span class="math-container">$r=0$</span>.</p>
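The factorization step $3x(mx)-x^3-(mx)^3 = x^2\,(3m - x(1+m^3))$ behind the blow-up can be spot-checked at random points:

```python
import random

# The blow-up substitution y = m*x turns the defining polynomial into
# x^2 * (3m - x(1 + m^3)); check this identity at random points.
random.seed(1)
for _ in range(100):
    x = random.uniform(-5, 5)
    m = random.uniform(-5, 5)
    lhs = 3 * x * (m * x) - x**3 - (m * x)**3
    rhs = x**2 * (3 * m - x * (1 + m**3))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```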
|
1,003,379 | <p>I've been working problems all day so maybe I'm just confusing myself but in order to do this. I have to the take the integral along each contour $C_1-C_4$. My issue is how to convert to parametric functions in order to this so that I can integrate</p>
<p><img src="https://i.stack.imgur.com/HWRoM.jpg" alt="enter image description here"></p>
| dustin | 78,317 | <p>Parametric equations for the square going counter clockwise:
\begin{alignat}{2}
\gamma_1 &= 2 + 2i(2t-1)&&{}\quad 0\leq t\leq 1\\
\gamma_2 &= 2i + 2(3-2t)&&{}\quad 1\leq t\leq 2\\
\gamma_3 &= -2 + 2i(5-2t)&&{}\quad 2\leq t\leq 3\\
\gamma_4 &= -2i + 2(2t - 7)&&{}\quad 3\leq t\leq 4
\end{alignat}</p>
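A quick way to verify these parametrizations is to check that consecutive pieces share endpoints, so the four segments trace the closed square:

```python
# Each corner of the square matches: gamma_k ends where gamma_{k+1}
# starts, and gamma_4 returns to gamma_1's start, so the contour closes.
g1 = lambda t: 2 + 2j * (2 * t - 1)
g2 = lambda t: 2j + 2 * (3 - 2 * t)
g3 = lambda t: -2 + 2j * (5 - 2 * t)
g4 = lambda t: -2j + 2 * (2 * t - 7)

assert g1(1) == g2(1) == 2 + 2j     # top-right corner
assert g2(2) == g3(2) == -2 + 2j    # top-left corner
assert g3(3) == g4(3) == -2 - 2j    # bottom-left corner
assert g4(4) == g1(0) == 2 - 2j     # bottom-right corner: contour closes
```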
|
3,581,724 | <p>I suspect a simple wooden toy "lead screw" was made by advancing a cylindrical rotary cutting tool ( <em>Cylindrical End Mill Cutter</em>) along the surface of the rotating wooden dowel (base cylinder), resulting in a helical cut (the axes of the cylinders are orthogonal (<em>skew</em>).</p>
<p><a href="https://i.stack.imgur.com/VVTRg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VVTRg.png" alt="enter image description here"></a></p>
<p>Videos of the manufacturing process close to what I suspect:</p>
<ul>
<li><a href="https://youtu.be/5U9lJAgU1oE?t=31" rel="nofollow noreferrer">https://youtu.be/5U9lJAgU1oE?t=31</a> (but: spherical cutter. radial, not tangent end mill)</li>
<li><a href="https://youtu.be/y5DOQWiexOQ?t=314" rel="nofollow noreferrer">https://youtu.be/y5DOQWiexOQ?t=314</a> (cutter radial)</li>
<li><a href="https://youtu.be/pbaRRsG3BN4?t=9" rel="nofollow noreferrer">https://youtu.be/pbaRRsG3BN4?t=9</a> (not radial, but "tilted")</li>
</ul>
<p>I have tried to visualise/emulate the resulting geometry using multiple <code>difference</code> operations for cylinder primitives in <a href="https://openjscad.org/" rel="nofollow noreferrer">(Open)JSCAD</a> (see code at end of post) and adjusted the view manually:</p>
<p><a href="https://i.stack.imgur.com/PQIUU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQIUU.png" alt="image of helical track approximated by multiple cylinder (*End mill*)-cuts"></a></p>
<p>What is the equivalent (elliptical?) shape that is the cross-section of the helical path?</p>
<p><hr/>
And: what is the contact surface/line/point of another, slightly smaller cylinder that is used as "lead screw nut" (having the same orientation as the cutting cylinder, i.e. orthogonal to the base cylinder) - a point contact on one of the helical edges?</p>
<p>Code for JSCAD</p>
<pre><code>function main () {
let main = cylinder({r: 3, h:10, center: true, fn: 64 });
for (let i=0; i<36; i++) {
let cut = cylinder({r: 0.2, h:10, center: true});
cut = translate([0,-3,0],cut);
cut = rotate([0,90,i*3],cut);
cut = translate([0,0,i*0.1],cut);
main = difference(main, cut);
}
return main;
}
</code></pre>
<p><s>I think the underlying question may be about the surface created by a straight line moved along a spiral ( or helix):</p>
<p><a href="https://i.stack.imgur.com/4R595.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4R595.png" alt="blender screenshot"></a>
(created with Blender: a mesh edge with Screw modifier)</p>
<p>Or the surface created by a helix that has been rotated (spin):</s></p>
<p><a href="https://i.stack.imgur.com/wJwjf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wJwjf.png" alt="enter image description here"></a></p>
<p>The cross-section of the "cutting" cylinder (<em>End mill</em>) is a circle of course, <s>which is what an infinite number of cuts "converge" to (a cylinder with zero length).</p>
<p>Then the cross-section along the helix should be an ellipse (intersection of the hypothetical "cutting" cylinder (<em>End mill cutter</em>) and the plane orthogonal to the helix).</s></p>
<hr/>
<p>It's not the same as moving a circle along the helix; to illustrate, I've reduced the cylinder's length:
<a href="https://i.stack.imgur.com/2BRYO.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2BRYO.gif" alt="enter image description here"></a></p>
<p>My "straight line" theory does not apply either, I think these "lines" might be helices created by the intersection of the translated and rotated "cutting" cylinders.</p>
<p>So it seems this might be much more involved than I anticipated -- please don't spend too much time on this on my account. I was just curious to see whether the "cut" could be better created in 3D by "lofting" the equivalent cross-section along a helix.</p>
| Narasimham | 95,860 | <p>Trying to understand the motions. At first I imagined that you were referring to a simple twisted tube of <span class="math-container">$(x,y,z)$</span> parametrization:</p>
<p><a href="https://i.stack.imgur.com/Zwx4Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zwx4Y.png" alt="Tube here"></a></p>
<p>however I have no clue about where the straight-line generators come from. So this is discarded. What appears to me is that a rotating milling cutter is offset a distance <span class="math-container">$b$</span> normal to the cylinder axis and skewed at an angle <span class="math-container">$\alpha$</span> with respect to the axis of the vertical wooden cylinder. It is mounted on a stationary tool post and mills out a one-sheeted hyperboloid of revolution. If the tool post in addition moves with a helical pitch, then <span class="math-container">$ p= c\, \theta$</span> is the added torsion component around the vertical axis of the cylinder. Required is a parametrization of the generated ruled surface.</p>
<p><em>Hyperboloid of one sheet</em> is seen in parametrization when <span class="math-container">$c=0.$</span> When <span class="math-container">$z$</span> motion is imparted to the rotating milling cutter we are adding as pitch <span class="math-container">$p= 2 \pi c $</span> for each turn of the lathe.</p>
<p><span class="math-container">$$r(u)= \sqrt{(u \cos \alpha)^2+b^2}$$</span></p>
<p><span class="math-container">$$ (x,y,z)=(r(u) \cos \theta,r(u) \sin \theta,u \sin \alpha + c\, \theta). $$</span></p>
<p>EDIT1;</p>
<p><a href="https://i.stack.imgur.com/FuRDw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FuRDw.png" alt="enter image description here"></a></p>
<p>(In view of clarification)..The following surface is a helical channel groove made by a standard end-mill or router that can be CNC programmed:</p>
<p><a href="https://i.stack.imgur.com/YKp0v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YKp0v.png" alt="Helical Channel CNC"></a></p>
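Taking $c=0$ in the parametrization above, one can spot-check that it traces the one-sheeted hyperboloid $x^2+y^2-z^2\cot^2\alpha=b^2$ (a hypothetical sketch; $b$, $\alpha$ and the sample values are arbitrary choices of mine):

```python
import math

# With c = 0, the parametrization should satisfy
# x^2 + y^2 - z^2 * cot(alpha)^2 = b^2 for all u and theta.
b, alpha = 0.7, 0.6
cot2 = (math.cos(alpha) / math.sin(alpha)) ** 2

for u in (-2.0, -0.5, 0.3, 1.7):
    for theta in (0.0, 1.0, 2.5):
        r = math.sqrt((u * math.cos(alpha))**2 + b**2)
        x, y, z = r * math.cos(theta), r * math.sin(theta), u * math.sin(alpha)
        assert abs(x**2 + y**2 - z**2 * cot2 - b**2) < 1e-12
```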
|
690,621 | <p>Consider the Quotient ring $\mathbb{Z}[x]/(x^2+3,5)$. </p>
<p>Solution: I first tried to take care of $(5)$ in the above ring. Therefore we can consider $\mathbb{Z_5}[x]/(x^2+3)$. Now an interesting point to note here is $(5) \subset (x^2+3)$. So, we can consider $\mathbb{Z_5}[x]/(5)$. But this is just $\mathbb{Z_5}[x]$. Thus, am I on the right track? Is there a rigorous way to prove the above?</p>
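As a quick computational check of the first reduction (my own sketch, not part of the original post): a degree-2 polynomial over a field is irreducible iff it has no root, and $x^2+3$ has no root mod $5$, so $\mathbb{F}_5[x]/(x^2+3)$ is a field with $25$ elements. (Note also that $5 \notin (x^2+3)$ in $\mathbb{Z}[x]$, since every nonzero multiple of $x^2+3$ has degree at least $2$.)

```python
# x^2 + 3 has no root mod 5, hence is irreducible over F_5,
# so F_5[x]/(x^2+3) is the field with 25 elements.
assert all((a * a + 3) % 5 != 0 for a in range(5))

# For contrast: x^2 + 1 DOES factor mod 5 (roots 2 and 3).
assert sorted(a for a in range(5) if (a * a + 1) % 5 == 0) == [2, 3]
```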
| janmarqz | 74,166 | <p>The equivalent classes are pieces (chunks) of the set where is the equivalence relation, these pieces include the elements which are interrelated. The set of all these pieces is an example of a partition of the set where the equivalence relation is defined. </p>
<p>For example:</p>
<p>In $\Bbb{Z}$ we define $a\sim b$ if $4$ divides $a-b$, then is easy to prove that this gives an equivalence relation in $\Bbb{Z}$ and that the equivalence classes are:
$$[0]=\{0,4,-4,8,-8,12,-12,...\},$$
$$[1]=\{1,5,-3,9,-7,13,-11,...\},$$
$$[2]=\{2,6,-2,10,-6,14,-10,...\},$$
$$[3]=\{3,7,-1,11,-5,15,-9,...\}.$$</p>
<p>You can see that these subsets (chunks) are mutually disjoint and
$${\Bbb{Z}}=[0]\cup[1]\cup[2]\cup[3].$$ </p>
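The example can be reproduced mechanically; a small sketch grouping a window of integers by residue mod $4$ (the window bounds are an arbitrary choice):

```python
# Group a sample window of integers by the relation 4 | (a - b);
# the resulting chunks are exactly the four classes listed above.
window = range(-12, 16)
classes = {}
for a in window:
    classes.setdefault(a % 4, set()).add(a)

assert len(classes) == 4                           # a partition into 4 chunks
assert {0, 4, -4, 8, -8, 12, -12} <= classes[0]
assert {1, 5, -3, 9, -7, 13, -11} <= classes[1]
assert {2, 6, -2, 10, -6, 14, -10} <= classes[2]
assert {3, 7, -1, 11, -5, 15, -9} <= classes[3]
# the chunks are pairwise disjoint and cover the window
assert sum(len(c) for c in classes.values()) == len(window)
```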
|
2,381,406 | <p>Somewhere I saw that </p>
<blockquote>
<p>To show that $x^2-y^3$ is irreducible in $k[x,y]$ it suffices to show that $x^2-y^3$ is irreducible in $k(y)[x]$.</p>
</blockquote>
<p>My question is what is the relation between $k[x,y]$ and $k(y)[x]$ ?</p>
<p>Also there is a confusion that if $k(y)$ is the smallest field containing $y$ and $k$ (by definition) then what will be the inverse of $y?$ Is it $1/y$ ?</p>
| Henno Brandsma | 4,280 | <p>No, you cannot make a sequence with all the neighbourhoods of a point $x$, e.g. there are as many neighbourhoods of $0$ in the real line as there are real numbers, for every $r > 0$, we have $O_r = (-r,r)$ which is a neighbourhood of $0$. And Cantor's diagonal argument shows that we cannot put the real numbers in a sequence.</p>
<p>The whole point of being first-countable is that there is for every $x \in X$ a <em>fixed</em> (you cannot change it, and add your $N$!), pre-given, (countable!) sequence of neighbourhoods $N_1, N_2, N_3, \ldots$ of $x$ such that every arbitrary neighbourhood of $x$ contains one of them. This allows us to essentially understand all neighbourhoods of a point from only countably many of them. This allows you to work with sequences and sequential continuity. </p>
<p>All metric spaces are first-countable because we can use $B(x, \frac{1}{n})_{n \in \mathbb{N}}$ as a countable local base for $x$. So the standard examples are first countable, but there are many non-first-countable spaces.</p>
|
3,816,041 | <blockquote>
<p>How many ways <span class="math-container">$5$</span> identical green balls and <span class="math-container">$6$</span> identical red balls can be arranged into <span class="math-container">$3$</span> distinct boxes such that no box is empty?</p>
</blockquote>
<p>My attempt :</p>
<p>Finding the coefficient of <span class="math-container">$x^{11}$</span> in the expansion of <span class="math-container">$$( x + x^2 + x^3 + x^4 + x^5+x^6 )^3 ( x + x^2 + x^3 + x^4 + x^5 )^ 3$$</span> and arranging them, which turned out to be wrong when I inspected it</p>
<p>Please help me out</p>
| Qiaochu Yuan | 232 | <p>Recall that if <span class="math-container">$f(x) \in \mathbb{F}_p[x]$</span> is any polynomial, the Frobenius map <span class="math-container">$F : x \mapsto x^p$</span> generates the Galois group of its splitting field, and hence to compute its Galois group it suffices to compute the cycle structure of Frobenius acting on its roots; the Galois group will be cyclic of order the lcm of the sizes of the cycles.</p>
<p>When <span class="math-container">$f(x) = x^n - 1$</span> and <span class="math-container">$\gcd(p, n) = 1$</span> the roots are precisely powers of a primitive <span class="math-container">$n^{th}$</span> root of unity <span class="math-container">$\zeta_n \in \overline{\mathbb{F}_p}$</span> and so we can be very explicit about the action of Frobenius on it: we have <span class="math-container">$F(\zeta_n) = \zeta_n^p$</span> and so <span class="math-container">$F^k(\zeta_n) = \zeta_n^{p^k}$</span>, meaning that the orbit of <span class="math-container">$\zeta_n$</span> has size the least positive integer <span class="math-container">$k$</span> such that <span class="math-container">$\zeta_n^{p^k} = \zeta_n$</span>, or equivalently such that</p>
<p><span class="math-container">$$p^k \equiv 1 \bmod n.$$</span></p>
<p>This is exactly the multiplicative order <span class="math-container">$\text{ord}_n(p)$</span> of <span class="math-container">$p \bmod n$</span>. The other roots of <span class="math-container">$f(x)$</span> are the other <span class="math-container">$n^{th}$</span> roots of unity <span class="math-container">$\zeta_n^k$</span>, which are primitive <span class="math-container">$\frac{n}{\gcd(n, k)}$</span> roots of unity, and hence which have orbits of size <span class="math-container">$\text{ord}_{\frac{n}{\gcd(n, k)}}(p)$</span>, which in particular divides the size of this largest orbit we found above.</p>
<p>Hence the Galois group has order <span class="math-container">$\text{ord}_n(p)$</span>, but the analysis above even reveals the exact cycle structure of Frobenius, and furthermore reveals it on each of the irreducible factors</p>
<p><span class="math-container">$$x^n - 1 = \prod_{d | n} \Phi_d(x)$$</span></p>
<p>of <span class="math-container">$x^n - 1$</span> over <span class="math-container">$\mathbb{Q}$</span> (the <a href="https://en.wikipedia.org/wiki/Cyclotomic_polynomial" rel="nofollow noreferrer">cyclotomic polynomials</a>).</p>
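<p>To make the cycle-structure computation concrete, here is a small Python sketch (the helper names <code>mult_order</code> and <code>frobenius_cycles</code> are my own) computing the multiplicative order <span class="math-container">$\text{ord}_n(p)$</span> and the orbit sizes of Frobenius on the <span class="math-container">$n^{th}$</span> roots of unity:</p>

```python
from math import gcd

def mult_order(p, n):
    """Least k >= 1 with p^k congruent to 1 (mod n); assumes gcd(p, n) == 1, n > 1."""
    k, pk = 1, p % n
    while pk != 1:
        pk = (pk * p) % n
        k += 1
    return k

def frobenius_cycles(p, n):
    """Orbit size of Frobenius z -> z^p for each root zeta_n^k, k = 0..n-1."""
    sizes = []
    for k in range(n):
        d = n // gcd(n, k) if k else 1   # zeta_n^k is a primitive d-th root of unity
        sizes.append(mult_order(p, d) if d > 1 else 1)
    return sizes

# x^7 - 1 over F_2: Frobenius fixes z = 1 and acts in two 3-cycles on the rest,
# so the Galois group is cyclic of order ord_7(2) = 3
print(frobenius_cycles(2, 7))   # [1, 3, 3, 3, 3, 3, 3]
```

<p>The lcm of the printed cycle lengths recovers the order <span class="math-container">$\text{ord}_n(p)$</span> of the Galois group described above.</p>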
|
1,074,177 | <p>Suppose a problem
$$\min_{x \in \mathbb{R}^{n}} f(x)$$</p>
<p>subject to $x \in \Omega$ which is a closed and convex set. If $\nabla f(x)$ is Lipschitz continuous in $\Omega$, then prove that</p>
<p>$$e(x) = x - P_{\Omega}(x- \nabla f(x))$$</p>
<p>is also Lipschitz continuous in $\Omega$.</p>
<p>Thanks in advance.</p>
| megas | 191,170 | <p>The key is that projection onto a convex set is non-expansive, that is, for any two points $x, y$,
$$
\| P_{\Omega}(x) - P_{\Omega}(y)\| \le \|x-y\|.
$$
Now, we assume that $\nabla f(y)$ is Lipschitz continuous on $\Omega$, <em>i.e.</em>, there exists some constant $L$ such that
$$
\| \nabla f(x) - \nabla f(y)\| \le L\cdot \|x-y\|
$$
for any $x, y \in \Omega$.
Then, for any two points $x,y \in \Omega$, we have
\begin{align}
\|e(x)-e(y)\|
&= \|x - P_{\Omega}(x- \nabla f(x)) - y + P_{\Omega}(y- \nabla f(y))\|\\
&= \|x - y + P_{\Omega}(y- \nabla f(y)) - P_{\Omega}(x- \nabla f(x))\|\\
&\le \|x - y\| + \| P_{\Omega}(y- \nabla f(y)) - P_{\Omega}(x- \nabla f(x))\|\\
&\le \|x - y\| + \| y- \nabla f(y) - x+ \nabla f(x)\|\\
&\le \|x - y\| + \| y- x \| +\|\nabla f(x) - \nabla f(y)\|\\
&\le \|x - y\| + \| y- x \| + L\|x-y\|\\
&= (2+L) \cdot \|x-y\|,
\end{align}
where we have repeatedly applied triangle inequality, and exploited the non-expansiveness of the projection onto a convex set, as well as the Lipschitz continuity of $\nabla f(x)$.</p>
<p>The above implies that $e(x)$ is Lipschitz continuous on $\Omega$ with constant $L+2$.</p>
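<p>As a quick numerical sanity check of the <span class="math-container">$(2+L)$</span> bound (a Python sketch, not part of the proof; the box <span class="math-container">$\Omega=[0,1]^2$</span> and the quadratic <span class="math-container">$f$</span> below are arbitrary illustrative choices):</p>

```python
import random

L = 3.0                                  # Lipschitz constant of the gradient below

def proj_box(y):
    # Euclidean projection onto the closed convex box [0, 1]^2
    return [min(1.0, max(0.0, yi)) for yi in y]

def grad(x):
    # gradient of f(x) = (3/2)(x0 - 0.5)^2 + (1/2)(x1 - 0.2)^2, Lipschitz with L = 3
    return [3.0 * (x[0] - 0.5), x[1] - 0.2]

def e(x):
    p = proj_box([xi - gi for xi, gi in zip(x, grad(x))])
    return [xi - pi for xi, pi in zip(x, p)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

random.seed(0)
worst = 0.0
for _ in range(10000):
    x = [random.random(), random.random()]
    y = [random.random(), random.random()]
    d = dist(x, y)
    if d > 1e-12:
        worst = max(worst, dist(e(x), e(y)) / d)

print(worst)          # empirically stays below the proven bound 2 + L = 5
assert worst <= 2 + L + 1e-9
```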
|
291,729 | <p>How to show that $\large 3^{3^{3^3}}$ is larger than a googol ($\large 10^{100}$) but smaller than a googolplex ($\large 10^{10^{100}}$).</p>
<p>Thanks much in advance!!!</p>
| user1551 | 1,551 | <p>\begin{align}
&\color{red}{100 \log_3 10} < 100\times3 < 729 = 3^6 < \color{red}{3^{3^3}} = 3^{27} < 10^{100} < \color{red}{10^{100} \log_3 10}\\
\Rightarrow&10^{100} < 3^{3^{3^3}} < 10^{10^{100}}.
\end{align}</p>
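<p>The red chain of exponents can be sanity-checked numerically (a Python sketch using floating-point logarithms; not needed for the proof):</p>

```python
from math import log

log3_10 = log(10) / log(3)   # log_3(10) is about 2.0959, in particular < 3

# the chain of exponents highlighted in red above:
assert 100 * log3_10 < 100 * 3 < 729 == 3**6 < 3**27 < 10**100 < 10**100 * log3_10

# raising 3 to the outer exponents then gives 10^100 < 3^(3^27) < 10^(10^100)
```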
|
291,729 | <p>How to show that $\large 3^{3^{3^3}}$ is larger than a googol ($\large 10^{100}$) but smaller than a googolplex ($\large 10^{10^{100}}$).</p>
<p>Thanks much in advance!!!</p>
| Dave L. Renfro | 13,130 | <p>I know others have beat me by nearly a day, but here's something I came up with just now that seems more straightforward. Each of the inequalities makes frequent use of monotonicity (increasing), either in the base or in the exponent, of an exponentiated expression. (After writing this up, I noticed that my estimates in carrying out the googolplex part are the same as what user58512 has.)</p>
<p>$$10^{100} \; < \; {\left( 3^3 \right)}^{100} \; < \; {\left( 3^3 \right)}^{\left( 3^5\right)} \; = \; 3^{\left( 3 \cdot 3^5\right)} \; = \; 3^{3^{6}} \; < \; 3^{3^{3^3}} $$</p>
<p>In the strict inequalities above, I first made use of $10 < 3^3,$ then $100 < 3^5,$ and lastly $6 < 3^3.$</p>
<p>$$3^{3^{3^3}} \; < \, 10^{3^{3^3}} \; < \; 10^{10^{3^3}} \; = \; 10^{10^{27}} \; < \; 10^{10^{100}}$$</p>
<p>In the strict inequalities above, I first made use of $3 < 10,$ then $3 < 10,$ and lastly $27 < 100.$</p>
|
206,227 | <p>I was given the following problem:</p>
<p>Let $V_1, V_2, \dots$ be an infinite sequence of Boolean variables. For each natural number $n$, define a proposition $F_n$ according to the following rules: </p>
<p>$$\begin{align*}
F_0 &= \text{False}\\
F_n &= (F_{n-1} \ne V_n)\;.
\end{align*}$$</p>
<p>Use induction to prove that for all $n$, $F_n$ is $\text{True}$ if and only if an odd number of the variables $V_k \;( k \le n)$ are $\text{True}$.</p>
<p>Can anyone help me out with at least beginning this problem? I'm not even entirely sure what it is asking.</p>
| hmakholm left over Monica | 14,366 | <p>A quicker way to see intuitively that this works is to notice that if we represent "False" by the number $0$ and "True" by the number $1$, then $\neq$ corresponds exactly to addition modulo $2$.</p>
<p>Therefore $F_n$ is represented by the sum of $V_1$ up to $V_n$ modulo 2, which is $1$ exactly if an <em>odd</em> number of the $V_i$s are $1$.</p>
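<p>The correspondence with addition mod 2 (XOR) is easy to check exhaustively for small $n$ — a Python sketch, not part of the proof:</p>

```python
import itertools

def F(vs):
    """Evaluate F_n via the recurrence F_0 = False, F_n = (F_{n-1} != V_n)."""
    f = False
    for v in vs:
        f = (f != v)          # "!=" on booleans is exactly addition mod 2 (XOR)
    return f

# exhaustive check of the claim for all assignments of up to 10 variables
for n in range(11):
    for vs in itertools.product([False, True], repeat=n):
        assert F(vs) == (sum(vs) % 2 == 1)
```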
|
34,959 | <p>$F(x) = \int_{x-1}^{x+1}f(t)dt$ for $x$ an element of the reals.</p>
<p>Show that $F$ is differentiable on Reals, and compute $F^\prime$.</p>
<p>I am unsure about how to showing $F$ is differentiable. I know that I need to use the fundamental theorem of calculus, but can someone please explain how to do so?</p>
| Martin Sleziak | 8,297 | <p>You can simply use the definition of the derivative.</p>
<p>You have
$$F(x)=\int_{x-1}^{x+1} f(t) dt.$$</p>
<p>$$F(x+h)-F(x)=\int_{x+h-1}^{x+h+1} f(t) dt-\int_{x-1}^{x+1} f(t) dt=
\int_{x+1}^{x+h+1} f(t) dt - \int_{x-1}^{x+h-1} f(t) dt$$</p>
<p>$$\frac{F(x+h)-F(x)}{h}=\frac{\int_{x+1}^{x+h+1} f(t) dt}{h} - \frac{\int_{x-1}^{x+h-1} f(t) dt}h$$</p>
<p>$$
\min_{c\in\langle x+1,x+1+h\rangle} f(c)-\max_{c\in\langle x-1,x-1+h\rangle} f(c) \le
\frac{F(x+h)-F(x)}{h}
\le \max_{c\in\langle x+1,x+1+h\rangle} f(c)-\min_{c\in\langle x-1,x-1+h\rangle} f(c)$$
(From the continuity we know that the minima and maxima exist.)</p>
<p>Now (also from the continuity) both expression converge to $f(x+1)-f(x-1)$.</p>
<p>However, you might want to have a look at a general version of this <a href="http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign" rel="nofollow">http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign</a></p>
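<p>A quick numerical illustration of the result (a Python sketch; here I take $f = \sin$, for which $F$ has the closed form $F(x) = \cos(x-1) - \cos(x+1)$, and compare a finite-difference derivative of $F$ against $f(x+1)-f(x-1)$):</p>

```python
from math import sin, cos

# take f = sin; then F(x) = integral of sin(t) over [x-1, x+1] = cos(x-1) - cos(x+1)
def F(x):
    return cos(x - 1) - cos(x + 1)

x, h = 0.7, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative of F
exact = sin(x + 1) - sin(x - 1)             # the claimed value f(x+1) - f(x-1)
assert abs(numeric - exact) < 1e-8
```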
|
3,805,745 | <p>I am working my way through a linear algebra book and would appreciate some help verifying my proof.</p>
<p><strong>Prove that <span class="math-container">$|u \cdot v| = |u | |v |$</span> if and only if one vector is a scalar multiple of the
other.</strong></p>
<p><strong>PROOF:</strong></p>
<p>Let <span class="math-container">$k ∈ ℝ$</span> and <span class="math-container">$u ,v \in\mathbb R^n$</span> and <span class="math-container">$~u =k~v$</span></p>
<p>ASSUME: <span class="math-container">$|u\cdot v| = |u | |v |$</span></p>
<p>our assumption holds IFF <span class="math-container">$|kv \cdot v| = |kv | |v |$</span></p>
<p>which again holds IFF <span class="math-container">$k|v \cdot v| = k|v | |v |$</span></p>
<p>and, by definition of the dot product, holds IFF <span class="math-container">$k|v|^2 = k|v |^2$</span></p>
<p>Q.E.D.</p>
| J.G. | 56,861 | <p>Let's make the proof of CS explicit to show @CSquared's answer doesn't require circularity. In fact, it's simpler to run through the proof of CS rather than invoking it, as we don't need to check two directions separately.</p>
<p>Write <span class="math-container">$f(k):=u-kv$</span> so<span class="math-container">$$0\le|f(k)|^2=f(k)\cdot f(k)=|u|^2+k^2|v|^2-2ku\cdot v,$$</span>with equality iff <span class="math-container">$f(k)=0$</span> i.e. <span class="math-container">$u=kv$</span>. (You can see where this is going: it involves the convention for <span class="math-container">$k$</span> used in the OP, which CSquared reverses.) The special case <span class="math-container">$k:=v\cdot u/|v|^2$</span> gives <span class="math-container">$0\le|u|^2-|u\cdot v|^2/|v|^2$</span>, which rearranges to <span class="math-container">$|u\cdot v|^2\le|u|^2|v|^2$</span>, again with equality iff <span class="math-container">$u=kv$</span>. Now just take the square root.</p>
<p>(The above proof actually works even on complex spaces, due to the careful use of <span class="math-container">$v\cdot u$</span> at one point instead of <span class="math-container">$u\cdot v$</span>, and of <span class="math-container">$|u\cdot v|^2$</span> instead of <span class="math-container">$(u\cdot v)^2$</span>.)</p>
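<p>A tiny numerical illustration of the equality case (a plain-Python sketch; the vectors are arbitrary):</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

u = [2.0, -1.0, 0.5]
v = [-4.0, 2.0, -1.0]      # v = -2u, i.e. a scalar multiple of u
w = [1.0, 0.0, 0.0]        # not a scalar multiple of u

# equality |u.v| = |u||v| holds for the parallel pair, strict inequality otherwise
assert abs(abs(dot(u, v)) - norm(u) * norm(v)) < 1e-12
assert abs(dot(u, w)) < norm(u) * norm(w)
```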
|
637,061 | <p>I have a problem:</p>
<blockquote>
<p>For a system of linear equations:
$$x_i=\sum_{j=1}^{n}a_{ij}x_j+b_i,\ \ i=1,2, \ldots , n \tag 1$$
Prove that, if<br>
$$\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}^2 \le q<1$$
then $(1)$ has a unique solution.</p>
</blockquote>
<p>==================================</p>
<p>My teacher said that We need to use the <a href="http://en.wikipedia.org/wiki/Banach%27s_contraction_principle" rel="nofollow">Banach's Contraction Principle</a>, but I have trouble when I do it...</p>
<p>Any help will be appreciated! Thanks!</p>
| Nox | 121,022 | <p>The inequality you have derived holds in a given norm.<br>
You do not have to show explicitly that $\| A\| \leq \sum_{i,j} a_{ij}^2$ holds. The double sum you are given is (the square of) a specific matrix norm (the Hilbert–Schmidt, or Frobenius, norm), and what you essentially still have to show to finish the task is that it is consistent with the Euclidean norm for vectors.</p>
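<p>Here is a minimal Python sketch of the contraction argument in action: when $\sum_{i,j} a_{ij}^2 < 1$, iterating $x \mapsto Ax + b$ converges to the unique solution of the system (the matrix and vector below are arbitrary illustrative values):</p>

```python
# Solve x = Ax + b by fixed-point iteration; the Frobenius condition below
# guarantees the map is a contraction, hence a unique fixed point exists.
A = [[0.3, -0.2],
     [0.1,  0.4]]
b = [1.0, -2.0]

fro2 = sum(a * a for row in A for a in row)   # sum of a_ij^2
assert fro2 < 1                                # the contraction condition

x = [0.0, 0.0]
for _ in range(200):
    x = [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

# residual of x = Ax + b is now negligible
res = max(abs(x[i] - (sum(A[i][j] * x[j] for j in range(2)) + b[i]))
          for i in range(2))
assert res < 1e-12
```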
|
209,420 | <p>For instance, when trying to compute <span class="math-container">$\mathbb{E}[\sum_{i=1}^{10}X_i]$</span> where <span class="math-container">$X_i \sim N(0,1)$</span>, I input into Mathematica:</p>
<pre><code>Expectation[Sum[x[i],{i, 1, 10}], x[i] \[Distributed] NormalDistribution[]]
</code></pre>
<p>but, instead of getting 0, I get:</p>
<pre><code>x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8]+x[9]+x[10]
</code></pre>
<p>Why is it not simplifying to 0?</p>
| Bob Hanlon | 9,362 | <pre><code>Clear["Global`*"]
</code></pre>
<p>Assuming that the variates are i.i.d., the distribution of the sum is normal</p>
<pre><code>distSum[μ_, σ_, n_Integer?Positive] := Assuming[σ > 0,
TransformedDistribution[
Total[Array[x, n]],
Array[x[#] \[Distributed] NormalDistribution[μ, σ] &, n]] //
Simplify]
distSum[μ, σ, 10]
(* NormalDistribution[10 μ, Sqrt[10] σ] *)
</code></pre>
<p>The mean and variance are as expected</p>
<pre><code>#[distSum[μ, σ, 10]] & /@ {Mean, Variance}
(* {10 μ, 10 σ^2} *)
</code></pre>
<p>When the <code>x[i]</code> are standard normal</p>
<pre><code>#[distSum[0, 1, 10]] & /@ {Mean, Variance}
(* {0, 10} *)
</code></pre>
<p>For a general <code>n</code>, start by generating a sequence of means and variances</p>
<pre><code>seq = Table[{Mean[distSum[μ, σ, n]],
Variance[distSum[μ, σ, n]]},
{n, 1, 10}];
</code></pre>
<p>Use <a href="https://reference.wolfram.com/language/ref/FindSequenceFunction.html" rel="nofollow noreferrer"><code>FindSequenceFunction</code></a> to generalize from the sequence</p>
<pre><code>mean[n_] = FindSequenceFunction[seq[[All, 1]], n]
(* n μ *)
var[n_] = FindSequenceFunction[seq[[All, 2]], n]
(* n σ^2 *)
stdDev[n_] = Assuming[σ > 0, Sqrt[var[n]] // Simplify]
(* Sqrt[n] σ *)
</code></pre>
|
3,873,138 | <p>Since we have variable coefficients we will use the Cauchy–Euler method to solve this DE. First we substitute <span class="math-container">$y=x^m$</span> into our given DE. This then gives</p>
<p><span class="math-container">$9x(m(m-1)x^{m-2}) + 9mx^{m-1} = 0$</span></p>
<p>Note that:</p>
<p><span class="math-container">$x^{m-2} = x^{m-1}x^{-1}$</span></p>
<p>Then</p>
<p><span class="math-container">$9x(m(m-1)x^{m-1}x^{-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9(m(m-1)x^{m-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9mx^{m-1}((m-1)) + 9mx^{m-1} = 0 \Rightarrow m-1=0$</span></p>
<p><span class="math-container">$m_{1} = 1$</span> so <span class="math-container">$y_{1}=c_{1}x$</span> is our solution and using reduction of order we get our second solution which is <span class="math-container">$y_{2}=c_{2}x\ln(x)$</span> and by superposition of homogenous equations we get our general solution</p>
<p><span class="math-container">$y = c_{1}x + c_{2}x\ln(x)$</span></p>
<p>However, I am told that this is wrong and the answer is <span class="math-container">$y=c_{1} + c_{2}\ln(x)$</span></p>
<p>What happened to the factor x?</p>
| user577215664 | 475,762 | <p>Since you have that <span class="math-container">$m^2=0 \implies m=0$</span> is a double root you have to multiply by <span class="math-container">$\ln |x|$</span> in order to find the second solution:
<span class="math-container">$$y_1=x^0=1$$</span>
<span class="math-container">$$\implies y_2= 1 \times \ln |x|$$</span>
<span class="math-container">$$y(x)=C_1 y_1+C_2 y_2 =C_1+C_2 \ln |x|$$</span></p>
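<p>Assuming the original ODE was <span class="math-container">$9xy'' + 9y' = 0$</span> (which is what the substitution shown in the question suggests), one can verify numerically that <span class="math-container">$y = C_1 + C_2\ln|x|$</span> solves it — a Python sketch with arbitrary constants:</p>

```python
from math import log

c1, c2 = 2.0, -3.0          # arbitrary constants

def y(x):
    return c1 + c2 * log(x)

def residual(x, h=1e-4):
    # central-difference approximations of y' and y''
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return 9 * x * ypp + 9 * yp   # should vanish if y solves 9xy'' + 9y' = 0

for x in (0.5, 1.0, 3.0):
    assert abs(residual(x)) < 1e-4
```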
|
2,871,655 | <p>I was trying some Cambridge past papers and it said to first separate into partial fractions and then find the sum of the sequence; however, after splitting into partial fractions I'm not getting the terms to cancel out like I normally do with these questions. Is there something I'm missing? I've been trying to manipulate the three sets of terms but can't seem to get it. Thanks</p>
<p>$$\sum_{r=1}^n\frac4{r(r+1)(r+2)}$$</p>
| Robert Israel | 8,508 | <p>You can write this as
$$ -\frac{i}2 \left( {\it polylog} \left( s,i \right) -{\it polylog} \left( s,-
i \right) \right)
$$</p>
<p>EDIT: It does seem that for odd $s$, $f(s)$ is a rational multiple of $\pi^s$.
See <a href="https://oeis.org/A053005" rel="nofollow noreferrer">OEIS sequence A053005</a> and <a href="https://oeis.org/A046976" rel="nofollow noreferrer">A046976</a> and references there.</p>
|
646,779 | <p>Prove that if $p$ and $q$ are polynomials over the field $F$, then the degree of their sum is less than or equal to whichever polynomial's degree is larger</p>
<p>$$\deg(p+q)\leq \max \left\{\deg(p),\deg(q) \right\}$$</p>
<p>Currently, I am taking it case by case, but I was curious if there was a way to do a proof by contradiction. What would it mean if I could add $2$ polynomials the result would be of larger degree than either of them.</p>
| Denis | 66,241 | <p>Hint: write polynomials in their general forms, and look at what happens when you sum them. It is impossible to create a nonzero coefficient where the coefficient was zero before, so in particular you cannot augment the maximal degree.</p>
|
304,259 | <p>I am stuck on this problem and I'm not sure how to approach it. Can anyone help me out with figuring how to approach the proof?</p>
<p>My task is to:</p>
<blockquote>
<p>Prove that it is impossible to find integers $\,x,\, y\,$ such that $\;2^x = 4y + 3$. </p>
</blockquote>
<p>I assumed a proof by cases would be the way to go?</p>
<p>Any input? Thanks in advance!</p>
| amWhy | 9,003 | <p><strong>Proof-By-Cases - Sketch:</strong> </p>
<p>We consider $x \in \mathbb{Z}$. For all $x \in \mathbb{Z}$:</p>
<ol>
<li>$x > 0$</li>
<li>$x = 0,\;$ or</li>
<li>$x < 0$</li>
</ol>
<p>$(1)$ For positive integers $x$ $(x >0)$: show the left-hand side will always be even, while the right-hand side will always be odd, regardless of the integer value of $y$.
(I.e. all <em>positive</em> integral powers of $2$ are even, but $4y+3 = 2\cdot 2 y + 2 + 1 = 2(2y+1) + 1$ must be odd, regardless of the value of $y$.)</p>
<p>$(2)$ Then consider the case $x = 0$: $\;2^0 = 1 \neq 4y+3 = 2(2y+1) + 1$, whatever the <em>integer</em> value of $y$.</p>
<p>$(3)$ For negative integers $x (x < 0):$ the left-hand side will <em>not be an integer</em> $\left(\text{e.g.,}\;\; 2^{-2} = \dfrac 14\right),\;$ while the right hand side will <em>always</em> be an integer, regardless of the value of integer $y$. Hence the equation has no solution in integers in this case, either.</p>
<hr>
<p>And hence we conclude there are no integer solutions for $x, y$ satisfying the equation: $$2^x = 4y + 3$$</p>
|
3,068,534 | <p>Let <span class="math-container">$R$</span> be the ring of algebraic integers of a quadratic imaginary number field <span class="math-container">$\mathbb Q[\sqrt{d}]$</span> for a negative square-free integer <span class="math-container">$d$</span>. For a prime integer <span class="math-container">$p$</span>, <span class="math-container">$(p)$</span> is a prime ideal or is the product <span class="math-container">$P \overline P$</span> of some prime ideal <span class="math-container">$P$</span> and <span class="math-container">$\overline P$</span>, the ideal consisting of the complex conjugates of elements of <span class="math-container">$P$</span>. Why does this mean if <span class="math-container">$(p)$</span> is a proper subset of a proper ideal <span class="math-container">$I$</span> of <span class="math-container">$R$</span>, then <span class="math-container">$I$</span> is prime?</p>
<ul>
<li><p>If <span class="math-container">$(p)$</span> is a prime ideal, then <span class="math-container">$(p)$</span> is a maximal ideal so <span class="math-container">$(p)=I$</span>.</p></li>
<li><p>I don't know how to say <span class="math-container">$(p)=P \overline P \subset I \subset R$</span> implies <span class="math-container">$I$</span> is a prime ideal.</p></li>
<li><p>Our definition of a prime ideal <span class="math-container">$P$</span> is that <span class="math-container">$P$</span> is nonzero and if the product <span class="math-container">$CD$</span> of two ideals <span class="math-container">$C$</span> and <span class="math-container">$D$</span> is a subset of <span class="math-container">$P$</span>, then <span class="math-container">$C$</span> or <span class="math-container">$D$</span> is a subset of <span class="math-container">$P$</span>. </p></li>
</ul>
<p>Thanks in advance!</p>
| nowhere dense | 124,875 | <p>I will focus on the case you still don't solve.</p>
<p>Suppose that <span class="math-container">$I$</span> is a proper ideal such that <span class="math-container">$P\overline P\subsetneq I$</span> (notice that the inclusion should be strict, otherwise <span class="math-container">$P\overline P=I$</span> is a counterexample). If <span class="math-container">$M$</span> is a maximal ideal containing <span class="math-container">$I$</span> then <span class="math-container">$P\overline P\subsetneq M$</span> implies either <span class="math-container">$P\subsetneq M$</span> or <span class="math-container">$\overline{P}\subsetneq M$</span>. WLOG we have <span class="math-container">$P\subsetneq M$</span> and hence <span class="math-container">$P$</span> is a prime ideal which is neither <span class="math-container">$(0)$</span> or maximal. This contradicts the fact that the ring of integers have <em>dimension <span class="math-container">$1$</span></em> (i.e., every nonzero prime ideal is maximal).</p>
<p>A more elucidative proof would be using the properties of the ideal factorization in number fields. If <span class="math-container">$P\overline{P}\subsetneq I$</span> then we have that <span class="math-container">$I$</span> properly divides <span class="math-container">$P\overline{P}$</span> and hence either <span class="math-container">$I=P$</span> or <span class="math-container">$I=\overline{P}$</span>.</p>
|
3,068,534 | <p>Let <span class="math-container">$R$</span> be the ring of algebraic integers of a quadratic imaginary number field <span class="math-container">$\mathbb Q[\sqrt{d}]$</span> for a negative square-free integer <span class="math-container">$d$</span>. For a prime integer <span class="math-container">$p$</span>, <span class="math-container">$(p)$</span> is a prime ideal or is the product <span class="math-container">$P \overline P$</span> of some prime ideal <span class="math-container">$P$</span> and <span class="math-container">$\overline P$</span>, the ideal consisting of the complex conjugates of elements of <span class="math-container">$P$</span>. Why does this mean if <span class="math-container">$(p)$</span> is a proper subset of a proper ideal <span class="math-container">$I$</span> of <span class="math-container">$R$</span>, then <span class="math-container">$I$</span> is prime?</p>
<ul>
<li><p>If <span class="math-container">$(p)$</span> is a prime ideal, then <span class="math-container">$(p)$</span> is a maximal ideal so <span class="math-container">$(p)=I$</span>.</p></li>
<li><p>I don't know how to say <span class="math-container">$(p)=P \overline P \subset I \subset R$</span> implies <span class="math-container">$I$</span> is a prime ideal.</p></li>
<li><p>Our definition of a prime ideal <span class="math-container">$P$</span> is that <span class="math-container">$P$</span> is nonzero and if the product <span class="math-container">$CD$</span> of two ideals <span class="math-container">$C$</span> and <span class="math-container">$D$</span> is a subset of <span class="math-container">$P$</span>, then <span class="math-container">$C$</span> or <span class="math-container">$D$</span> is a subset of <span class="math-container">$P$</span>. </p></li>
</ul>
<p>Thanks in advance!</p>
| Wojowu | 127,263 | <p>Here is a straightforward proof. Since we are in a quadratic field, it's not hard to see that <span class="math-container">$R/(p)$</span> has <span class="math-container">$p^2$</span> elements (since, as a group, <span class="math-container">$R$</span> is free abelian on two generators). If <span class="math-container">$I$</span> is a proper ideal properly containing <span class="math-container">$(p)$</span>, then the quotient <span class="math-container">$R/I$</span> is isomorphic to a quotient of <span class="math-container">$R/(p)$</span> by the image of <span class="math-container">$I$</span> modulo <span class="math-container">$(p)$</span>. From there it's clear <span class="math-container">$R/I$</span> has <span class="math-container">$p$</span> elements, so is a field, implying <span class="math-container">$I$</span> is maximal, hence prime.</p>
|
3,209,722 | <p>I saw in another post on the website a simple proof that <span class="math-container">$$\lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n = \lim_{m\to\infty} \left( 1+\frac{1}{m} \right)^{mx}$$</span></p>
<p>which consists of substituting <span class="math-container">$n$</span> by <span class="math-container">$mx$</span>. I can see how the equality then holds for positive real numbers <span class="math-container">$x$</span>, yet it isn't obvious to me why it holds for negative <span class="math-container">$x$</span>.</p>
| Community | -1 | <p><strong>Hint:</strong></p>
<p><span class="math-container">$$ \lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n \lim_{n\to\infty} \left( 1-\frac{x}{n} \right)^n = \lim_{n\to\infty} \left( 1-\frac{x^2}{n^2} \right)^n = \lim_{n\to\infty} \left( 1-\frac{x^2}{n^2} \right)^{n^2/n }=1.$$</span></p>
|
145,612 | <p>Why are isosceles triangles called that — or called anything? Why is their class given a name? Why did they find their way into the <em>Elements</em> and every single elementary geometry text and course ever since? Did no one ever ask himself, "What use is this, or why is it interesting?"?</p>
<p>Here are some facts about isosceles triangles which you might think would serve as valid answers to the above question, and I will attempt to show that they do not:</p>
<ul>
<li><em>A triangle has two equal sides iff it has two equal angles.</em> But that's of interest only because we're already looking at the one class (triangles with two equal sides) or the other (those with two equal angles). And, in any event, the statement of the theorem is not more interesting than its generalization, that the larger a side in a triangle, the greater the angle opposite it.</li>
<li><em>Various facts about the isosceles right triangle.</em> Fine, I'll grant that the isosceles right triangle is interesting. But that's insufficient reason to give the much broader class of isosceles triangles a name.</li>
<li><em>Any triangle can be partitioned into $n$ isosceles triangles $\forall n>4$ — and various other recent results.</em> Very nice, but isosceles triangles are, of course, in Euclid, so these don't really answer the question.</li>
</ul>
| J. David Taylor | 30,850 | <p>I believe that one of the reasons why isosceles triangles are discussed in the <em>Elements</em> is because Euclid's construction of the regular pentagon hinges on the construction of an isosceles triangle with a nice (will edit with more specifics later) relationship between the lengths of its sides. </p>
<p>The Greeks were interested in constructing regular polygons. A regular $n$-gon is constructible if and only if $n$ factors into a power of $2$ and a product of distinct Fermat primes (Gauss). So the regular pentagon was the largest 'building block' for constructing regular polygons that anyone discovered until Gauss. This is one of the reasons why it was significant, and as a result so were isosceles triangles.</p>
<p>On another note, reflecting a triangle over one of its sides is common in elementary geometry proofs. This yields an isosceles triangle. This happens often enough to warrant giving isosceles triangles a name to reference the particular properties they have that don't hold for triangles in general. (In other words, giving them a name makes many elementary geometry proofs shorter, even if the thing being proved isn't even about triangles.)</p>
|
4,196,868 | <p><span class="math-container">$(f_n)$</span> is a sequence of continuous, real valued functions on a metric space <span class="math-container">$M$</span>.</p>
<p>It converges pointwise to a <strong>continuous</strong> function <span class="math-container">$f$</span>.</p>
<p>Suppose that <span class="math-container">$(y_m)$</span> is a sequence of points in <span class="math-container">$M$</span>, and it converges to point <span class="math-container">$y\in M$</span>.</p>
<p>Then, <span class="math-container">$\lim_{m\to\infty}\left[\lim_{n\to\infty} f_n(y_m)\right] = \lim_{n\to\infty}\left[\lim_{m\to\infty} f_n(y_m)\right] = f(y)$</span></p>
<p>I think it follows from continuity of <span class="math-container">$f$</span> and the functions <span class="math-container">$f_n$</span>, and the definitions of those limits.</p>
<p>Is this right?</p>
| IV_ | 292,527 | <p>First check <span class="math-container">$0$</span> and <span class="math-container">$1$</span>: <span class="math-container">$0$</span> is a solution of your first equation.</p>
<p>Neither of your two equations can be solved any further by <a href="https://en.wikipedia.org/wiki/Elementary_function" rel="nofollow noreferrer">elementary functions</a> alone or by elementary functions and the <a href="https://en.wikipedia.org/wiki/Special_functions" rel="nofollow noreferrer">special function</a> <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert W</a> alone.</p>
<p>Let's take for example your equation from the title of your question:</p>
<p><span class="math-container">$$\sin(x)=x\cos(x).$$</span></p>
<p>Rearrange the equation to have all functions of the solution variable on one side of the equation:</p>
<p><span class="math-container">$$\sin(x)-x\cos(x)=0$$</span></p>
<p>Your equation is in dependence of <span class="math-container">$x$</span> and the transcendental functions <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> of <span class="math-container">$x$</span>, and all coefficients of the equation are algebraic numbers. That means, the equation is an <em>algebraic equation over the algebraic numbers</em> in dependence <em>of the solution variable and at least one transcendental function of the solution variable</em>. Such types of equations often cannot be solved by elementary functions alone.</p>
<p>The left-hand side of the latter equation is an elementary function. The elementary functions can be generated by applying only <span class="math-container">$\exp$</span>, <span class="math-container">$\ln$</span> and/or unary or multiary algebraic functions.<br />
Each of the elementary standard functions (sin, cos, tan, cot, sec, csc, sinh, cosh, tanh, coth, sech, csch, arcsin, arccos, arctan, arccot, arcsec, arccsc, arcsinh, arccosh, arctanh, arccoth, arcsech, arccsch) can be brought to this expln-form. See e.g. the Wikipedia articles for the single functions or [Abramowitz/Stegun 1970]:</p>
<p><span class="math-container">$$\sin(x)=-\frac{1}{2}i\left(e^{ix}-e^{-ix}\right),$$</span></p>
<p><span class="math-container">$$\cos(x)=\frac{1}{2}\left(e^{ix}+e^{-ix}\right).$$</span></p>
<p><span class="math-container">$$-\frac{1}{2}i\left(e^{ix}-e^{-ix}\right)-\frac{1}{2}x\left(e^{ix}+e^{-ix}\right)=0$$</span></p>
<p><span class="math-container">$$-\frac{1}{2}x\left(e^{ix}\right)^2-\frac{1}{2}i\left(e^{ix}\right)^2-\frac{1}{2}x+\frac{1}{2}i=0$$</span></p>
<p><span class="math-container">$$-x\left(e^{ix}\right)^2-i\left(e^{ix}\right)^2-x+i=0$$</span></p>
<p><span class="math-container">$x\rightarrow\frac{t}{i}:$</span></p>
<p><span class="math-container">$$it\left(e^{t}\right)^2-i\left(e^t\right)^2+it+i=0$$</span></p>
<p><span class="math-container">$$t(e^t)^2-(e^t)^2+t+1=0\tag1$$</span></p>
<p>The equation is now in the expln-form.</p>
<p>[Abramowitz/Stegun 1970] Abramowitz, M.; Stegun, I.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standard 1970</p>
<p><strong>1.) Elementary inverses / elementary numbers</strong></p>
<p>The elementary function on the left-hand side of this equation is an algebraic function over the algebraic numbers in dependence of <span class="math-container">$t$</span> and <span class="math-container">$e^t$</span>. This algebraic function is therefore a <em>multiary</em> algebraic function over the algebraic numbers.<br />
The theorem in [Ritt 1925], which is also proved in [Risch 1979], implies that bijective compositions of <span class="math-container">$\exp$</span>, <span class="math-container">$\ln$</span> and/or <strong>unary</strong> algebraic functions are the only elementary functions that are invertible by an elementary function. The elementary function on the left-hand side of equation 1 is not bijective because it is not injective, but you can split it into bijective pieces by restricting its domain and then look for the partial inverses. However, the elementary function on the left-hand side of equation 1 is not in the form that Ritt's theorem requires. If we cannot find a representation of the elementary function in this form, we cannot solve the equation merely by rearranging it, applying only elementary functions that can be read off from the equation.
<p><a href="https://www.jstor.org/stable/2373917?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">[Risch 1979] Risch, R. H.: Algebraic Properties of the Elementary Functions of Analysis. Amer. J. Math. 101 (1979) (4) 743-759</a><br />
<a href="http://www.ams.org/journals/tran/1925-027-01/S0002-9947-1925-1501299-9/" rel="nofollow noreferrer">[Ritt 1925] Ritt, J. F.: Elementary functions and their inverses. Trans. Amer. Math. Soc. 27 (1925) (1) 68-90</a></p>
<p><strong>2.) Elementary numbers</strong></p>
<p>The <a href="https://en.wikipedia.org/wiki/Elementary_number" rel="nofollow noreferrer">elementary numbers</a> are the numbers that are generated from the rational numbers by applying only elementary functions (or rather <span class="math-container">$\exp$</span>, <span class="math-container">$\ln$</span> and/or unary or multiary algebraic functions).</p>
<p>Equation 1 is an irreducible polynomial equation in dependence of the solution variable <span class="math-container">$t$</span> and <span class="math-container">$e^t$</span> with only algebraic coefficients. The main theorem in [Lin 1983] says, assuming <a href="https://en.wikipedia.org/wiki/Schanuel%27s_conjecture" rel="nofollow noreferrer">Schanuel's conjecture</a>, that equations of this type cannot be solved by elementary numbers except <span class="math-container">$0$</span>. [Chow 1999] proves, assuming Schanuel's conjecture, that equations of this type cannot be solved by <em>explicit</em> elementary numbers except <span class="math-container">$0$</span>.</p>
<p>Although neither of your two equations is solvable by elementary numbers, products of solutions of these equations could perhaps be an elementary number like <span class="math-container">$4\pi$</span> is.</p>
<p><a href="http://timothychow.net/closedform.pdf" rel="nofollow noreferrer">[Chow 1999] Chow, T.: What is a closed-form number. Am. Math. Monthly 106 (1999) (5) 440-448</a><br />
<a href="https://www.jstor.org/stable/43836165" rel="nofollow noreferrer">[Lin 1983] Ferng-Ching Lin: Schanuel's Conjecture Implies Ritt's Conjectures. Chin. J. Math. 11 (1983) (1) 41-50</a></p>
<p><strong>3.) Lambert W</strong></p>
<p>For applying only Lambert W and elementary functions, equation 1 in dependence of <span class="math-container">$t$</span> and <span class="math-container">$e^t$</span> should be transformable to the form</p>
<p><span class="math-container">$$f_1(a_1+a_2f_2(t)^{a_3}e^{b_1+b_2f_2(t)^{b_3}})=c\tag2,$$</span></p>
<p>where <span class="math-container">$a_1,a_2,a_3,b_1,b_2,b_3,c$</span> are constants and <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are elementary functions with suitable elementary partial inverses. Recall that Ritt's theorem, mentioned above, determines which types of elementary functions have elementary partial inverses.<br />
But your equation cannot be brought to this form.</p>
<p><strong>4.) Generalized Lambert W</strong></p>
<p>Your equations are solvable in terms of Generalized Lambert W.</p>
<p><span class="math-container">$$\sin(t)=t\cos(t)$$</span>
<span class="math-container">$$-\frac{1}{2}i(e^{it}-e^{-it})=\frac{1}{2}t(e^{it}+e^{-it})$$</span>
<span class="math-container">$t\to\frac{x}{2i}$</span>:
<span class="math-container">$$e^x=-\frac{x+2}{x-2}$$</span>
<span class="math-container">$$\frac{x-2}{x+2}e^x=-1$$</span>
<span class="math-container">$$x=W\left(^{+2}_{-2};-1\right)$$</span>
<span class="math-container">$$t=-\frac{1}{2}i\ W\left(^{+2}_{-2};-1\right)$$</span></p>
<p><span class="math-container">$\ $</span></p>
<p><span class="math-container">$$\cos(t)=-t\sin(t)$$</span>
<span class="math-container">$$\frac{1}{2}(e^{it}+e^{-it})=\frac{1}{2}it(e^{it}-e^{-it})$$</span>
<span class="math-container">$t\to\frac{x}{2i}$</span>:
<span class="math-container">$$e^x=\frac{x+2}{x-2}$$</span>
<span class="math-container">$$\frac{x-2}{x+2}e^x=1$$</span>
<span class="math-container">$$x=W\left(^{+2}_{-2};1\right)$$</span>
<span class="math-container">$$t=-\frac{1}{2}i\ W\left(^{+2}_{-2};+1\right)$$</span></p>
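<p>As a numeric sanity check of the first derivation (a sketch, not part of the argument), one can compute the smallest positive root of <span class="math-container">$\tan(t)=t$</span> and confirm that <span class="math-container">$x=2it$</span> satisfies the transformed equation <span class="math-container">$e^x=-\frac{x+2}{x-2}$</span>:</p>

```python
import cmath
import math

# Smallest positive root of sin(t) = t*cos(t), i.e. tan(t) = t, by bisection.
# tan(t) - t is continuous on (4.0, 4.6) and changes sign there.
lo, hi = 4.0, 4.6
for _ in range(100):
    mid = (lo + hi) / 2
    if math.tan(mid) - mid > 0:
        hi = mid
    else:
        lo = mid
t_root = (lo + hi) / 2      # ~4.4934...

# The substitution t -> x/(2i) means x = 2*i*t; the derivation above
# then claims e^x = -(x+2)/(x-2).
x = 2j * t_root
residual = abs(cmath.exp(x) + (x + 2) / (x - 2))
print(t_root, residual)
```

<p>The residual is at machine-precision level, consistent with the derivation.</p>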
<p>[Mezö 2017] Mezö, I.: On the structure of the solution set of a generalized Euler-Lambert equation. J. Math. Anal. Appl. 455 (2017) (1) 538-553<br />
[Mezö/Baricz 2017] Mezö, I.; Baricz, A.: On the generalization of the Lambert W function. Transact. Amer. Math. Soc. 369 (2017) (11) 7917–7934 <a href="https://arxiv.org/abs/1408.3999" rel="nofollow noreferrer">(On the generalization of the Lambert W function with applications in theoretical physics. 2015)</a><br />
<a href="https://arxiv.org/abs/1801.09904" rel="nofollow noreferrer">[Castle 2018] Castle, P.: Taylor series for generalized Lambert W functions. 2018</a><br />
<a href="https://arxiv.org/abs/2207.00707" rel="nofollow noreferrer">[Stoutemyer 2022] Stoutemyer, D. R.: Inverse spherical Bessel functions generalize Lambert W and solve similar equations containing trigonometric or hyperbolic subexpressions or their inverses. 2022</a></p>
|
390,532 | <p>I'm trying to solve (for $x$) some problems such as $\arctan(0)=x$, $\arcsin(-\frac{\sqrt{3}}{{2}})=x$, etc.</p>
<p>What is the best way to go about this? So far, I have been trying to solve the problems intuitively (e.g. I ask myself <em>what value of sine will give me $-\frac{\sqrt{3}}{{2}}$?</em>), maybe drawing a triangle to help. Is there a better way to solve these problems?</p>
| orion | 137,195 | <p>In other words, you know the values of $\arcsin x$/$\arctan x$/$\arccos x$ for some specific values of $x$. That's just fine. Inverse trigonometric functions are transcendental functions and with exceptions of a few well-known values, the result is not nicely expressible with elementary functions (you can use a calculator or any number of "approximations by hand" to get a numerical value for the angle, but that's not the same).</p>
<p>In addition to the classical angles of multiples of $30^\circ$, which have known values of trigonometric functions, you can use the half-angle formulas and addition theorems to get other angles (inverse functions can be computed by recognizing the half-angle and angle addition expressions and reducing the calculation to a simpler expression).</p>
<p>In fact, the angles you can construct by adding and halving the elementary angles ($45^\circ$ and $60^\circ$) are precisely the angles you can construct with a compass and a straightedge (constructible angles). Such constructions also mean that all the angles for which you can get the trigonometric function without a calculator (or solve the inverse problem) have a nice geometric representation of how to carry out the calculation. Construction with similar triangles and circles is thus a good idea if you have a more complicated expression that you think you can get analytically.</p>
<p>Otherwise, there's nothing wrong about knowing a few special values. We do that all the time.</p>
|
11,916 | <p>In <a href="https://mathoverflow.net/questions/11845/theory-mainly-concerned-with-lambda-calculus/11861#11861">Theory mainly concerned with lambda-calculus?</a>, F. G. Dorais wrote, of the idea that the lambda-calulus defines a domain of mathematics:</p>
<blockquote>
<p>That would never stick unless there's another good reason. Besides, the schism between cs and math is very recent, I would contend that "functional programming" is actually a math term, historically speaking. More importantly, it would be wrong to use a term different than those who use it most, namely theoretical computer scientists, who are very competent mathematicians by the way. </p>
</blockquote>
<p>The idea, I think, is that the overlap between the kind of constructive mathematics that follows the formulae-as-types correspondence, and pure functional programming is so substantial that the core of the two topics is essentially the same subject.</p>
<p>Is this true?</p>
| Adam | 2,361 | <p>I think most people here would agree that Category Theory is part of mathematics.</p>
<p>The study of strongly-typed functional programming languages is really just the study of cartesian closed categories, so I think that this particular part of functional programming is legitimate mathematics. And Domain Theory is the study of the category of complete partial orders with bottom, so I would include that too.</p>
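<p>For illustration, the exponential objects that make a category cartesian closed show up on the programming side as currying — the natural bijection hom(A×B, C) ≅ hom(A, C^B). A minimal sketch (Python standing in here for a typed functional language):</p>

```python
# Currying/uncurrying: the programming face of cartesian closedness,
# witnessing hom(A x B, C) ~ hom(A, C^B).
def curry(f):
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
curried_add = curry(add)
print(curried_add(2)(3))           # 5
print(uncurry(curried_add)(2, 3))  # 5
```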
<p>I don't think I would extend that to untyped or dynamically-typed languages (LISP). Also, I'd probably pick a term other than "functional programming" since subfields of math are rarely named with gerunds ("strongly typed functional languages" is probably the most accurate, but a bit verbose).</p>
|
1,397,576 | <p>To me there is a hierarchy where vectors $\subset$ sequences $\subset$ functions $\subset$ operators</p>
<ul>
<li><p>All vectors are sequences, but not all sequences are vectors because
sequences are infinite dimensional</p></li>
<li><p>All sequences are functions, but not all functions are sequences
because functions can do more than just map $\mathbb{N} \to A$ where
$A$ is some set</p></li>
<li><p>All functions are operators, but not all operators are functions
because an operator can map functions to functions but function can only map numbers to numbers</p></li>
</ul>
<p>Can someone check if my ideas are reasonable? Does there exist such a hierarchy?</p>
| Paul Sinclair | 258,282 | <p>Vectors are not sequences. They can be represented in some cases by finite sequences (as Omnomnomnom has pointed out). But in general a vector is any elements of a vector space, and a vector space is any set where you can add and multiply by scalars. All sequences are vectors, because they form a vector space. All functions from a fixed set into a field also form a vector space over that field, so they are also vectors. Operators are just functions from the cross-product of a set with itself into the same set (binary operators that is). If that set is a field, then they too are vectors in a vector space.</p>
<p>But if the sets in question are not fields, then you don't get vectors. The relationships here are much richer than your schema comprehends.</p>
|
1,397,576 | <p>To me there is a hierarchy where vectors $\subset$ sequences $\subset$ functions $\subset$ operators</p>
<ul>
<li><p>All vectors are sequences, but not all sequences are vectors because
sequences are infinite dimensional</p></li>
<li><p>All sequences are functions, but not all functions are sequences
because functions can do more than just map $\mathbb{N} \to A$ where
$A$ is some set</p></li>
<li><p>All functions are operators, but not all operators are functions
because an operator can map functions to functions but function can only map numbers to numbers</p></li>
</ul>
<p>Can someone check if my ideas are reasonable? Does there exist such a hierarchy?</p>
| ASCII Advocate | 260,903 | <p>It's better to think of each as its own type of object, but where some types can be naturally converted into others (similar to type conversion in computer programming). Some of the conversions faithfully translate all of the information, some are reversible, and others lose part of the information. </p>
|
450,410 | <p>I'm trying to teach myself how to do $\epsilon$-$\delta$ proofs and would like to know if I solved this proof correctly. The answer given (Spivak, but in the solutions book) was very different.</p>
<hr>
<p><strong>Exercise:</strong> Prove $\lim_{x \to 1} \sqrt{x} = 1$ using $\epsilon$-$\delta$.</p>
<p><strong>My Proof:</strong></p>
<p>We have that $0 < |x-1| < \delta $.</p>
<p>Also, $|x - 1| = \bigl|(\sqrt{x}-1)(\sqrt{x}+1)\bigr| = |\sqrt{x}-1||\sqrt{x}+1| < \delta$.</p>
<p>$\therefore |\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|}$</p>
<p>Now we let $\delta = 1$. Then
\begin{array}{l}
-1<x-1<1 \\
\therefore 0 < x < 2 \\
\therefore 1 < \sqrt{x} + 1<\sqrt{2} + 1 \\
\therefore \frac{1}{\sqrt{x} + 1}<1.
\end{array}</p>
<p>We had that $$|\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|} \therefore |\sqrt{x}-1|<\delta$$</p>
<p>By letting $\delta=\min(1, \epsilon)$, we get that $|\sqrt{x}-1|<\epsilon$ if $0 < |x-1| < \delta $.</p>
<p>Thus, $\lim_{x \to 1} \sqrt{x} = 1$.</p>
<hr>
<p>Is my proof correct? Is there a better way to do it (still using $\epsilon-\delta$)?</p>
| Emanuele Paolini | 59,304 | <p>The proof is correct but can be simplified. You don't need the part "Now let $\delta=1$...". In fact it is always true that
$$
\frac{1}{\sqrt x + 1} \le 1
$$
since $\sqrt x \ge 0$.</p>
<p>Also, a matter of style. In the first line you don't <em>have</em> $0 < |x-1|<\delta$ but you <em>suppose</em> it (this is because $\delta$ has not been given yet; it still has to be found). The same when you write "let $\delta = 1$": you should write "if $\delta \le 1$ ..."</p>
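<p>A randomized spot-check of the final claim (not a proof, just a numeric sanity check that $\delta=\min(1,\epsilon)$ works):</p>

```python
import math
import random

# For random eps, take delta = min(1, eps) and random x with |x-1| < delta;
# record the worst ratio |sqrt(x)-1| / eps over all samples.
random.seed(0)
worst_ratio = 0.0
for _ in range(10_000):
    eps = random.uniform(1e-6, 2.0)
    delta = min(1.0, eps)
    x = 1.0 + random.uniform(-0.999, 0.999) * delta   # so |x-1| < delta
    worst_ratio = max(worst_ratio, abs(math.sqrt(x) - 1.0) / eps)
print(worst_ratio)   # stays below 1, as the proof predicts
```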
|
920,050 | <p>The answer is $\frac1{500}$ but I don't understand why that is so. </p>
<p>I am given the fact that the summation of $x^{n}$ from $n=0$ to infinity is $\frac1{1-x}$. So if that's the case then I have that $x=\frac15$ and plugging in the values I have $\frac1{1-(\frac15)}= \frac54$.</p>
| Aldo | 171,035 | <p>$$S_n = (1/5)^4+...+(1/5)^n\ \ \ \ \ (i)$$ </p>
<p>$$-(1/5)S_n = -(1/5)^5-...-(1/5)^n-(1/5)^{n+1}\ \ \ \ (ii)$$</p>
<p>$(i)+(ii)$ $$S_n(1-1/5) = (1/5)^4 - (1/5)^{n+1} \Rightarrow (4/5)S_n = 1/625 - (1/5)^{n+1}$$</p>
<p>$\Rightarrow S_n = 1/500 - (5/4)(1/5)^{n+1}$</p>
<p>but</p>
<p>$(1/5)^n \rightarrow 0$</p>
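<p>A quick numeric check of both the partial-sum formula and the limit $1/500$ (a sketch):</p>

```python
# Partial sums S_n = (1/5)^4 + ... + (1/5)^n, compared with the closed form
# S_n = 1/500 - (5/4)*(1/5)^(n+1) derived above; S_n -> 1/500 as n grows.
def S(n):
    return sum((1/5) ** k for k in range(4, n + 1))

for n in (4, 7, 30):
    assert abs(S(n) - (1/500 - (5/4) * (1/5) ** (n + 1))) < 1e-12

print(S(60))   # ~0.002 = 1/500
```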
|
2,871,105 | <p>I am trying to prove the following statement, but starting to doubt its correctness.</p>
<p>Suppose that $H$ is a Hausdorff topological space (I am formulating this generally, though my specific case is $H=S'(\mathbb{R})$ - the space of tempered distributions). </p>
<p>Suppose I have a family of nested subsets $\Omega_i \subseteq H$ with $\Omega_i \supseteq \Omega_{i+1}$, and by $\overline{\Omega}$ we denote the sequential closure of $\Omega\subseteq H$. Is it true that:</p>
<p>$x\in \cap_{i=1}^\infty \overline{\Omega_i}$ if and only if there is sequence $\{x_i\}$ such that $x_i \rightarrow x, x_i\in \Omega_i$. </p>
<p>The fact that from $x_i \rightarrow x, x_i\in \Omega_i$ we can deduce $x\in \cap_{i=1}^\infty \overline{\Omega_i}$ is obvious. The opposite is problematic.</p>
<p>I tried to prove the opposite statement via reducing to Cantor's intersection theorem. Suppose that $x\in \cap_{i=1}^\infty \overline{\Omega_i}$ is fixed. Then I define
$R_i = \{\{x_n\}| \exists N, \forall n>N: x_n \in \Omega_i, x_n \rightarrow x\}$ (the set of sequences that tend to $x$ and lie in $\Omega_i$ from some index on). It is easy to see that the $R_i$ are also nested: $R_i\supseteq R_{i+1}$.</p>
<p>Then the wanted statement is equivalent to $\cap_{i} R_i \ne \emptyset$. </p>
<p>The problem now is that I need compactness of $R_i$ in order to apply Cantor's theorem, but I am stuck at this step.</p>
| Daniel Schepler | 337,888 | <p>For a counterexample using sequential closures, consider the topological space whose underlying set is $\mathbb{N}^2 \sqcup \{ x_0 \}$, and with the topology such that $U$ is open if and only if $x_0 \notin U$ or for some function $f : \mathbb{N} \to \mathbb{N}$, $\{ (x, y) \in \mathbb{N}^2 \mid y > f(x) \} \subseteq U$.</p>
<p>Now, let $\Omega_n := \{ (x, y) \in \mathbb{N}^2 \mid x \ge n \}$. Then $x_0$ is in the sequential closure of $\Omega_n$ for each $n$ since $(n, m) \to x_0$ as $m \to \infty$. On the other hand, if we have any sequence $(x_n, y_n) \in \Omega_n$, then $x_n \to \infty$ as $n \to \infty$. Using this, it is possible to construct a function $f : \mathbb{N} \to \mathbb{N}$ such that $y_n < f(x_n)$ for each $n$. It follows that $(x_n, y_n) \not\to x_0$ as $n \to \infty$ since the corresponding neighborhood of $x_0$ for this $f$ does not contain any element of the sequence.</p>
|
2,535,933 | <p>let assume i have a position function in 1 dimension with constant acceleration.</p>
<p>$$ x(t) = x_0 + v_0t + \frac{1}{2}at^2 $$</p>
<p>then its first derivative is a velocity function:
$$ \frac{dx}{dt} = v(t) = v_0 + at $$</p>
<p>then its second derivative is an acceleration function:</p>
<p>$$ \frac{dv}{dt} = a(t) = a $$</p>
<p>So in conclusion: if we have a position function $x(t)$ and take its first derivative, we get a velocity function, and if we take its second derivative we get an acceleration function. This is what everyone knows.</p>
<p>Now I see a lecture video that says:</p>
<p>$$ \frac{dv}{dt} = \frac{dv}{dx} * \frac{dx}{dt} $$</p>
<p>If this is true, then if I calculate $\frac{dv}{dx}$ and multiply by $\frac{dx}{dt}$ I will also get $a(t)$. But I don't know how to do it: when we apply the chain rule we need to determine which is the inner function and which is the outer function, but here there is only one function, $x(t)$. How do I find $\frac{dv}{dx}$? </p>
<p>Can someone rewrite the position function into an inner part and an outer part,
or what is the valid way to do the calculation?</p>
<p>I'm very new to calculus and physics, so please explain step by step with an easy, simple example.</p>
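<p>For concreteness, here is a numerical check (with arbitrary made-up values) that the identity does seem to hold for this motion, even though I don't see how to derive it:</p>

```python
# Numeric check that dv/dt = (dv/dx)*(dx/dt) for the constant-acceleration
# motion above. While v > 0, v can be written as a function of x via
# v(x) = sqrt(v0^2 + 2*a*(x - x0)). The values x0, v0, a, t are arbitrary.
x0, v0, a = 1.0, 2.0, 3.0
h = 1e-6
t = 0.7

x = lambda t: x0 + v0 * t + 0.5 * a * t * t
v = lambda t: v0 + a * t
v_of_x = lambda X: (v0 * v0 + 2 * a * (X - x0)) ** 0.5

dv_dt = (v(t + h) - v(t - h)) / (2 * h)                 # = a
dx_dt = (x(t + h) - x(t - h)) / (2 * h)                 # = v(t)
dv_dx = (v_of_x(x(t) + h) - v_of_x(x(t) - h)) / (2 * h)

print(dv_dt, dv_dx * dx_dt)    # both ~3.0, the acceleration a
```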
| Falrach | 506,310 | <p>Shorter:</p>
<p><span class="math-container">$(X-\Bbb{E}[X])^2 \geq 0$</span> by definition. So it follows directly from <span class="math-container">$\Bbb{E}[(X-\Bbb{E}[X])^2] = \operatorname{Var}(X) = 0$</span>, that <span class="math-container">$(X-\Bbb{E}[X])^2 = 0$</span> almost surely. We conclude <span class="math-container">$X-\Bbb{E}[X] = 0$</span> almost surely. That is what you wanted to show.</p>
|
2,477,137 | <p>$\left(1+3+5...+(2n+1)\right ) + \left(3.5+5+6.5+...+(\frac{7+3n}{2})\right)=105$ </p>
<p>This is the equation, and I do not understand how to find $n$.</p>
| Disintegrating By Parts | 112,478 | <p>If $\mu$ is a Borel measure on $\mathbb{R}$ with no atoms, then $m(\lambda)=\mu(-\infty,\lambda]$ is continuous, and
$$
\int_{-\infty}^{\infty}\lambda^2 dm(\lambda)=\int_{0}^{\infty}\lambda d(m(\sqrt{\lambda})-m(-\sqrt{\lambda})).
$$
However, there are problems in the case that $m$ has atoms, if you want a normalization such that $m(\lambda)$ and $m(\sqrt{\lambda})-m(-\sqrt{\lambda})$ are both continuous from the right (or left.) So, to avoid renormalization issues, the problem was stated in such a way that the spectral measures have no atoms.</p>
|
107,915 | <p>I randomly place $k$ rooks on an (arbitrarily sized) $N$ by $M$ chessboard. Until only one rook remains, for each of $P$ time intervals we move the pieces as follows:</p>
<p>(1) We choose one of the $k$ rooks on the board with uniform probability. </p>
<p>(2) We choose a direction for the rook, $(N, W, E, S)$, with uniform probability. </p>
<p>(3) We choose a number of squares in which to move the rook along the direction chosen in [2] with uniform probability over the interval consisting of the rook's current position to the edge of the board.</p>
<p>(4) If the rook being moved collides with another piece while being translated in [3], just as in regular chess it will annihilate that piece and remain at the piece's former position.</p>
<p>NOTE - An alternative way of stating [2], [3], and [4] would be to say that the chosen rook samples all possible sets of moves, with uniform probability, and is unable to bypass other rooks without annihilating them and stopping at their former positions.</p>
<p>NOTE 2 - Gerhard Paseman is correct in suggesting that the original formulation for [2] and [3] will bias the rook towards shorter path lengths. This is in part due to the choice of direction in [2] not being weighted by the resulting possible number of choices in [3], and also the over-counting of positions in [3] due to the lack of consideration that there may be a collision. There are also problems with [2] near the board's boundaries where a direction can be chosen in which no move can take place. Instead of [2] and [3], I'll suggest that a better method would be to number all possible position that the chosen rook from [1] can occupy (keeping the collision constraint from [4] in mind), and then use a PRNG to select the next position. </p>
<p>What does the distribution look like for the number of time intervals, $P$, necessary for only a single rook to remain on the board?</p>
| Per Alexandersson | 1,056 | <p>So, I did some numerical experiments on 4 rooks on a k times k board.
Each data point is the mean of 500 runs.</p>
<p><a href="https://i.stack.imgur.com/k00FG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k00FG.png" alt=""></a>
</p>
<p>The x axis is the width/height of the board, the y axis is the number of iterations needed for it to be only one rook left.</p>
<p>EDITED:
So, I did some changes and now my data conforms with the others:</p>
<ol>
<li>Choose a rook.</li>
<li>Choose a direction.</li>
<li>If no move is possible in the chosen direction, go to 2.</li>
<li>Move 1, 2, ..., or k steps in the chosen direction, where k steps reaches a boundary square.</li>
</ol>
<p>I.e. this does not count non-moves (which the image above does).</p>
<p>The image below shows mean of 500 runs, k rooks on a k*k board, starting at k=1.
<a href="https://i.stack.imgur.com/55bVR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/55bVR.png" alt=""></a>
</p>
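<p>For reproducibility, here is a minimal Monte-Carlo sketch of the process. Note it implements the NOTE-2 formulation from the question (every reachable square equally likely, captures blocking further sliding), which is not quite the direction-then-distance rule used above; all names and parameters are illustrative.</p>

```python
import random

def legal_targets(pos, others, k):
    """Squares the rook at pos can reach on a k x k board: it slides in each
    of the four directions and must stop at (capturing) the first rook met."""
    r, c = pos
    targets = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < k and 0 <= cc < k:
            targets.append((rr, cc))
            if (rr, cc) in others:      # capture square: cannot slide past
                break
            rr, cc = rr + dr, cc + dc
    return targets

def simulate(n_rooks, k, rng=random):
    """Run the process until one rook survives; return the number of moves."""
    cells = [(r, c) for r in range(k) for c in range(k)]
    rooks = set(rng.sample(cells, n_rooks))
    moves = 0
    while len(rooks) > 1:
        mover = rng.choice(sorted(rooks))                    # rule [1]
        rooks.remove(mover)
        target = rng.choice(legal_targets(mover, rooks, k))  # NOTE-2 move
        rooks.discard(target)                                # rule [4]
        rooks.add(target)
        moves += 1
    return moves

trials = [simulate(4, 10) for _ in range(200)]
print(sum(trials) / len(trials))   # mean number of moves until one rook remains
```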
|
3,239,185 | <p>Let <span class="math-container">$f,g$</span> be two analytic functions on the domain <span class="math-container">$\Omega$</span> such that <span class="math-container">$|f(z)|=|g(z)|$</span> throughout <span class="math-container">$\Omega$</span>.</p>
<p>I believe <span class="math-container">$h(z)=f/g$</span> only has removable singularities (can't really prove it...), for the following reasons. If <span class="math-container">$g(z_0)=0$</span>, then <span class="math-container">$f(z_0)=0$</span>, and
<span class="math-container">$$\lim_{z\to z_0}h(z)=\lim_{z\to z_0}\frac{f(z)}{g(z)}=\lim_{z\to z_0}\frac{|f(z)|e^{i\arg f(z)}}{|f(z)|e^{i\arg g(z)}}\\
=\lim_{z\to z_0}\frac{e^{i\arg f(z)}}{e^{i\arg g(z)}}=e^{i(\arg f(z_0)-\arg g(z_0))}.$$</span>
So, we define <span class="math-container">$h(z_0)$</span> to be this value (EDIT: this value is undefined :( ). Also,
<span class="math-container">$$
\lim_{z\to z_0}h'(z)=\lim_{z\to z_0}\frac{(f'g-g'f)(z)}{g(z)^2}\\
=\lim_{z\to z_0}(\frac{f'}{g}-\frac{g'}{g}\cdot\frac{f}{g})\\
=\lim_{z\to z_0}\frac{f'-hg'}{g}=\ldots?
$$</span>
Now I cannot proceed to prove that <span class="math-container">$h'(z)$</span> exist at <span class="math-container">$z=z_0$</span>.</p>
<p><strong>How can I make <span class="math-container">$h$</span> analytic?</strong></p>
<p>PS: if <span class="math-container">$h$</span> is made analytic, I can prove by integration that <span class="math-container">$f(z)=e^{\alpha i}g(z)$</span> for some fixed <span class="math-container">$\alpha\in \mathbb R$</span>.</p>
<p>Any help with the problem?</p>
| Martin R | 42,969 | <p>The problem with your approach is that <span class="math-container">$\arg f(z_0) = \arg 0$</span> and <span class="math-container">$\arg g(z_0) = \arg 0$</span> are not defined, and actually <span class="math-container">$w \mapsto \arg w$</span> <em>cannot</em> be defined as a continuous function in the neighborhood of <span class="math-container">$w=0$</span>.</p>
<p>But your assumption that <span class="math-container">$h = f/g$</span> has only removable singularities is correct. It follows directly from <a href="https://en.wikipedia.org/wiki/Removable_singularity#Riemann%27s_theorem" rel="nofollow noreferrer">Riemann's theorem on removable singularities</a> because <span class="math-container">$h$</span> is <em>bounded.</em> Therefore <span class="math-container">$h$</span> can be extended to a holomorphic function on <span class="math-container">$\Omega$</span>.</p>
<p>Finally, <span class="math-container">$|h(z)| \equiv 1$</span> implies that <span class="math-container">$h$</span> is constant because of the maximum modulus (or open mapping) theorem.</p>
|
4,528,489 | <p>I hope I could get clarification on a minor detail in the proof to theorem 3.11 (b) in Rudin's Principles of Mathematical Analysis. The theorem and proof are as follows.
<br>Theorem:</p>
<blockquote>
<p>If X is a compact metric space and if {<span class="math-container">$p_n$</span>} is a Cauchy sequence in X, then {<span class="math-container">$p_n$</span>} converges to some point of X.</p>
</blockquote>
<p>Proof:</p>
<blockquote>
<p>Let {<span class="math-container">$p_n$</span>} be a Cauchy sequence in the compact space X. For N = 1,2,3..., let <span class="math-container">$E_N$</span> be the set consisting of <span class="math-container">$p_N$</span>, <span class="math-container">$p_{N+1}$</span>, <span class="math-container">$p_{N+2}$</span>, ... Then <span class="math-container">$$ \lim \limits_{N \to \infty} diam \overline {E_N} = 0$$</span>, by Definition 3.9 and Theorem 3.10 (a). Being a closed subset of the compact space X , each <span class="math-container">$\overline {E_N}$</span> is compact (Theorem 2.35). Also <span class="math-container">$E_N \supset E_{N+1}$</span>, so that <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span>...</p>
</blockquote>
<p>Here I am trying to find the reasoning behind the statement: if <span class="math-container">$E_N \supset E_{N+1}$</span>, then <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span>. My reasoning behind <span class="math-container">$E_N \supset E_{N+1}$</span> is that since <span class="math-container">$ \lim \limits_{N \to \infty} diam E_N = 0$</span>, and since the diameter of <span class="math-container">$E_N$</span> captures how big the set is, the set becomes smaller as <span class="math-container">$N$</span> gets bigger. And the reason why <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span> is that <span class="math-container">$diam \overline{E_N} = diam E_N$</span> (Theorem 3.10 a). Are my reasonings correct?</p>
| Anne Bauval | 386,889 | <p>It is relatively elementary to prove that <span class="math-container">$f$</span> is bijective on the four following (invariant) subsets, which form a partition of <span class="math-container">$\mathbb Z/(pq\mathbb Z):$</span>
<span class="math-container">$$\{0\},\{(pk)\bmod{pq}:q\nmid k\},\{(qk)\bmod{pq}:p\nmid k\},
\{k\bmod{pq}:p\nmid k,q\nmid k\}.$$</span></p>
|
<p>Find the supremum and infimum of the set
$B=\left\{ \frac{x}{1+ \mid x \mid } : x\in \mathbb{R}\right\}$.
It seems clear that they are $1$ and $-1$ respectively, but how do I prove it properly?</p>
| Ruben | 153,329 | <p>For all $x\in \mathbb R$ we have $\frac{x}{1+|x|} < 1$, hence the supremum is less than or equal to 1. Suppose the supremum is smaller than 1, then we can write it as $1-\epsilon$ for some $\epsilon > 0$. Can you find an $x\in \mathbb R$ such that $\frac{x}{1+|x|} \in (1-\epsilon, 1)$?</p>
|
75,862 | <blockquote>
<p>In quadilateral $ABCD$ (usual clockwise or anticlockwise naming), $AB=16\sqrt{2}$ cm, $CD=10$ cm, $DA=8.5$ cm, $\angle D = 120^\circ $ and $\angle ACB = 45^\circ$. How to find $\angle ABC$?</p>
</blockquote>
<p><a href="http://testfunda.com/examprep/learningresources/smsqod/cat-sms-question-of-the-day.htm?assetid=905f5662-87e9-47f0-ae19-8cde6e204a4f" rel="nofollow">Problem source</a>.</p>
<p><strong>ADDED:</strong></p>
<p>As stated in one of the answer, the obvious approach, utilizing the law of cosines and sines gives a very ugly form for a problem that is intended for pencil-paper calculation. I was wondering if there is any alternative approach to avoid doing the messy parts? </p>
| robjohn | 13,854 | <p>Using the Law of Cosines, I get that $|AC|^2=8.5^2+10^2+85=257.25$ since $\cos(ADC)=-\frac{1}{2}$. Next, $\sin^2(ACB)=\frac{1}{2}$ and $|AB|^2=512$. Law of Sines says that
$$
\frac{\sin^2(ACB)}{|AB|^2}=\frac{\sin^2(ABC)}{|AC|^2}
$$
Therefore,
$$
\sin^2(ABC)=\frac{1}{2}\frac{257.25}{512}\approx\frac{1}{4}
$$
Thus, $ABC$ must be about $30^\circ$. The hardest thing to do was square $8.5$.</p>
|
1,707,675 | <p>How can I find the indefinite integral which is $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
<p>I tried to use substitution by assigning $$\ln(1-x)\text{d}x = \text{d}v $$ and $$\frac{1}{x}=u$$ but, it is meaningless but true, the only thing I came up from integration by part is that $$\int \frac{\ln(1-x)}{x^2}\text{d}x = foo $$ and that has no help for me to find the integration $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
| Enrico M. | 266,764 | <p>This integral has no primitive. Indeed the result is a so well known Special function called Logarithm Integral:</p>
<p>$$\int\frac{\ln(1-x)}{x}\ \text{d}x = -\text{Li}_2(x)$$</p>
<p>More here</p>
<p><a href="https://en.wikipedia.org/wiki/Logarithmic_integral_function" rel="nofollow">https://en.wikipedia.org/wiki/Logarithmic_integral_function</a></p>
|
4,253,564 | <p>I am having trouble with the following integral</p>
<blockquote>
<p>Prove that <span class="math-container">$$ \int_0^1\frac{x\ln(x)}{1+x^2+x^4}dx=\frac{1}{36}\Big(\psi^{(1)}(2/3)-\psi^{(1)}(1/3)\Big)$$</span></p>
</blockquote>
<p><span class="math-container">$$I=\int_0^1\frac{x\ln(x)}{1+x^2+x^4}dx=\int_0^1\frac{\ln(u^2)}{2(1+u+u^2)}du=\int_0^1\frac{\ln(u)}{(1+u+u^2)}du$$</span>
let <span class="math-container">$x^2=u\rightarrow \frac{du}{dx}=2x$</span></p>
<p>How does one proceed from here? Is my approach correct? Thank you for your time</p>
| projectilemotion | 323,432 | <p>Firstly, note that after your substitution it should be
<span class="math-container">$$I=\int_0^1\frac{x\ln(x)}{1+x^2+x^4}~dx=\color{red}{\frac{1}{4}}\int_0^1\frac{\ln(u)}{1+u+u^2}~du.$$</span>
To evaluate the latter integral, the geometric series shows that
<span class="math-container">$$\begin{align*} \int_0^1\frac{\ln(u)}{1+u+u^2}~du&=\int_0^1 \frac{(1-u)\ln(u)}{1-u^3}~du\\&=\int_0^1 \sum_{k=0}^{\infty} (1-u)\ln(u)u^{3k}~du\\&=\sum_{k=0}^{\infty} \left[\int_0^1 u^{3k}\ln(u)~du-\int_0^1 u^{3k+1}\ln(u)~du\right]. \end{align*}$$</span>
By differentiating the integral <span class="math-container">$\int_0^1 x^{\alpha}~dx$</span> with respect to <span class="math-container">$\alpha\in \mathbb{R}\setminus \{-1\}$</span>, one obtains that
<span class="math-container">$$\int_0^1 x^{\alpha}\ln(x)~dx=-\frac{1}{(\alpha+1)^2}.$$</span>
Therefore, one has that
<span class="math-container">$$\begin{align*} \int_0^1\frac{\ln(u)}{1+u+u^2}~du&=\sum_{k=0}^{\infty} \left[\frac{1}{(3k+2)^2}-\frac{1}{(3k+1)^2}\right]\\&=\frac{1}{9}\sum_{k=0}^{\infty} \left[\frac{1}{(k+2/3)^2}-\frac{1}{(k+1/3)^2}\right]\\&=\frac{1}{9}(\psi^{(1)}(2/3)-\psi^{(1)}(1/3)), \end{align*}$$</span>
where we have used the series representation of the <a href="https://en.wikipedia.org/wiki/Trigamma_function" rel="noreferrer">trigamma function</a>
<span class="math-container">$$\psi^{(1)}(z)=\sum_{k=0}^{\infty} \frac{1}{(z+k)^2}. \tag{1}$$</span>
If this is not your definition of the trigamma function (if you define <span class="math-container">$\psi^{(1)}(z):=\frac{d^2}{dz^2}\ln(\Gamma(z))$</span>), then you can prove <span class="math-container">$(1)$</span> using the <a href="https://en.wikipedia.org/wiki/Gamma_function#Weierstrass%27s_definition" rel="noreferrer">Weierstrass's definition</a> of the <span class="math-container">$\Gamma$</span> function:
<span class="math-container">$$\Gamma(z)=\frac{e^{-\gamma z}}{z}\prod_{n=1}^{\infty} \left(1+\frac{z}{n}\right)^{-1}e^{z/n}.$$</span></p>
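<p>As a brute-force cross-check of the manipulations above, one can compare a numeric quadrature of the original integral against a partial sum of the series (a sketch; the truncation points and tolerances are ad hoc):</p>

```python
import math

# Midpoint-rule quadrature of the original integral vs. the series
# (1/4) * sum_k [1/(3k+2)^2 - 1/(3k+1)^2] derived above.
def integrand(x):
    return x * math.log(x) / (1 + x**2 + x**4)

N = 200_000
integral = sum(integrand((i + 0.5) / N) for i in range(N)) / N
series = 0.25 * sum(1/(3*k + 2)**2 - 1/(3*k + 1)**2 for k in range(N))

print(integral, series)   # two nearly equal values near -0.195
```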
|
<p>The sequence $f_n(x)=x^n-x^{2n}$ converges to $f(x)=0$ on $(-1,1]$. Intuitively the convergence is not uniform on $(-1,1]$. How can I prove it?
I tried using the definition $\lim \limits_{n\to\infty}\sup \limits_{ x\in (-1,1]}|f_n(x)-f(x)|$. The function is continuously differentiable on $[-1,1]$ and $x=0,(\frac 1 2 )^{\frac 1 n}$ are the roots of the derivative. I found that the second derivative is negative at the second point. Then $\sup=1/4$ and the function does not converge uniformly?</p>
| DonAntonio | 31,254 | <p>An idea: say for $\,n>2\,$</p>
<p>$$f_n(x)=x^n-x^{2n}\implies f'_n(x)=nx^{n-1}-2nx^{2n-1}=nx^{n-1}\left(1-2x^{n}\right)=0\iff$$</p>
<p>$$x=0\,,\,\frac1{\sqrt[n]2}$$</p>
<p>$$f_n''(x)=n(n-1)x^{n-2}-2n(2n-1)x^{2n-2}\implies\begin{cases}f''(0)=0\\{}\\f''\left(\frac1{\sqrt[n]2}\right)=\frac{n\sqrt[n]4}4\left(-2n\right)<0\end{cases}\;\implies$$</p>
<p>$$\implies\text{at}\;\;\left(x=\frac1{\sqrt[n]2}\;,\;y=f_n\left(\frac1{\sqrt[n]2}\right)=\frac14\right)\;\text{we have a maximum for}\;\;f_n(x)\;,\;\forall\,n>2 .$$</p>
<p>And from here it follows at once that since $\,\displaystyle{f(x):=0=\lim_{n\to\infty}f_n(x)}\;$ , then</p>
<p>$$ \lim_{n\to\infty}\sup_{x\in(-1,1]}|f_n(x)-f(x)|=\lim_{n\to\infty}\frac14=\frac14\neq 0$$</p>
<p>and thus the convergence isn't uniform.</p>
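<p>The value $1/4$ is easy to confirm numerically (a sketch in Python, my own addition; it checks the maximum at $x=1/\sqrt[n]2$ and that a grid search over $(-1,1]$ never exceeds $1/4$):</p>

```python
# sup over x of |f_n(x)| stays at 1/4: the maximum of
# f_n(x) = x^n - x^(2n) on (-1, 1] is attained at x = 2^(-1/n)
def f(n, x):
    return x**n - x**(2 * n)

for n in [3, 5, 10, 50]:
    x_star = 0.5 ** (1.0 / n)          # the critical point found above
    assert abs(f(n, x_star) - 0.25) < 1e-12
    # crude grid search over (-1, 1] never exceeds 1/4 (up to rounding)
    grid_max = max(f(n, -1 + k / 5000.0) for k in range(1, 10001))
    assert grid_max <= 0.25 + 1e-12
print("sup |f_n| = 1/4 for every n checked")
```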
|
401,898 | <p>The function $f_n(x)=x^n-x^{2n}$ converges to $f(x)=0$ on $(-1,1]$. Intuitively the sequence does not converge uniformly on $(-1,1]$. How can I prove it?
I tried using the definition $\lim \limits_{n\to\infty}\sup \limits_{ x\in (-1,1]}|f_n(x)-f(x)|$. The function is continuously differentiable on $[-1,1]$ and $x=0,(\frac 1 2 )^{\frac 1 n}$ are the roots of the derivative. I found that the second derivative is negative at the second point, so $\sup=1/4$ and the sequence does not converge uniformly?</p>
| Tim | 74,128 | <p>Choose an arbitrarily large <em>odd</em> value of $n$. There exists some $0<x<1$ such that $x^n>\dfrac 12$.</p>
<p>Then $$\begin{array}{rl}f_n(-x) &= (-x)^n - (-x)^{2n}
\\ &= -\left(x^n + x^{2n}\right) \\ &\leq -\frac 34 \end{array}$$</p>
<p>So $f_n$ does not converge uniformly on $(-1,1]$.</p>
|
3,163,580 | <p>I'm having troubles to show that if <span class="math-container">$0<|\alpha|<1$</span> then the elements <span class="math-container">$f_k=\lbrace 1, \alpha^k, \alpha^{2k}, \alpha^{3k}, \cdots \rbrace$</span> span <span class="math-container">$\ell^2$</span> for <span class="math-container">$k \geq 1$</span>. I know I should to use the Vandermonde matrix and its properties, however I don't really know how to proceed.</p>
<p>Can you provide me with some hints?</p>
| anomaly | 156,999 | <p>To clarify, not every element of <span class="math-container">$\ell^2$</span> is a finite linear combination of the <span class="math-container">$f_k$</span>; take an element of <span class="math-container">$\ell^2$</span> that does not decay as <span class="math-container">$e^{-tn}$</span> for some <span class="math-container">$t$</span>, for example. It is true, however, that the closed span of the <span class="math-container">$f_k$</span> (i.e., the closure of the vector space they generate) is <span class="math-container">$\ell^2$</span> itself.</p>
<p>Fix <span class="math-container">$x = (x_0, x_1, \dots)\in \ell^2$</span>, and assume without loss of generality that each <span class="math-container">$x_i$</span> lies in <span class="math-container">$X = [-1, 1]$</span>. Since <span class="math-container">$x_i \to 0$</span>, there exists (e.g., by the Tietze extension theorem) some continuous <span class="math-container">$f:X \to X$</span> with <span class="math-container">$f(\alpha^n) = x_n$</span> for each <span class="math-container">$n$</span>. Fix <span class="math-container">$\epsilon > 0$</span>, and choose <span class="math-container">$N$</span> such that
<span class="math-container">$$\sum_{n > N} |x_n|^2 < \epsilon.$$</span>
By the Stone-Weierstrass theorem, there exists some polynomial <span class="math-container">$g(z) = \sum a_n z^n$</span> with <span class="math-container">$|g - f| < \epsilon$</span> on <span class="math-container">$X$</span>. Then <span class="math-container">$\xi = \sum a_k f_k\in \ell^2$</span> has
<span class="math-container">$$|\xi_n - x_n| = \left|\sum_k a_k \alpha^{nk} - x_n\right| = |g(\alpha^n) - x_n| < \epsilon$$</span>
for all <span class="math-container">$n$</span>. Now bound the <span class="math-container">$\ell^2$</span>-norm of <span class="math-container">$\xi - x$</span>, using the fact that the <span class="math-container">$(f_k)_n$</span> decay exponentially in <span class="math-container">$n$</span>.</p>
|
633,799 | <p>I am a little confused about the basic definition of inclusion.</p>
<p>I understand that, for example, $\{4\}\subset\{4\}$.</p>
<p>I also understand that $4\in\{4\}$, and that it is false to say that $\{4\}\in\{4\}$.</p>
<p>However, is it possible to say that $4\subset\{4\}$?</p>
| dani_s | 119,524 | <p>Technically it depends on the definition of 4 and the axioms of set theory you are using. With the standard definitions it is false. (Note though that $4 \subset \{4\}$ is a valid statement)</p>
|
633,799 | <p>I am a little confused about the basic definition of inclusion.</p>
<p>I understand that, for example, $\{4\}\subset\{4\}$.</p>
<p>I also understand that $4\in\{4\}$, and that it is false to say that $\{4\}\in\{4\}$.</p>
<p>However, is it possible to say that $4\subset\{4\}$?</p>
| Asaf Karagila | 622 | <p>First of all, sets <em>can</em> be elements of other sets too. For example if $X$ is a set then $\mathcal P(X)$ is the power set of $X$, and it is a set whose elements are all sets. But now that's out of the way, let us focus on the question whether or not $4\subseteq\{4\}$ makes sense.</p>
<p>It is possible if you interpret $4$ as a set. In naive set theory we often work under assumptions closer to type theory. There are real numbers, and there are vectors, and functions, and there are sets and there are other sort of type of mathematical objects.</p>
<p>In modern set theory we often work under the assumption that everything is a set. We construct surrogate sets to interpret other concepts such as the integers, or the real numbers, as sets.</p>
<p>For example, one of the mainstream ways to interpret the ordered pair $\langle x,y\rangle$ is by considering the set $\{\{x\},\{x,y\}\}$. Even though ordered pairs are "not sets", we can represent them using sets.</p>
<p>Similarly for integers, we can represent them as sets too. Often we choose the following encoding, $0=\varnothing$ and $n+1=n\cup\{n\}=\{0,\ldots,n\}$. In that case $4=\{0,1,2,3\}$. Clearly under this interpretation $4\nsubseteq\{4\}$. But under this interpretation, $0\subseteq\{0\}=1$.</p>
|
3,665,879 | <p>We all are familiar with the sum and difference formulas for <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span>, but is there an analogue for the sum and difference formulas for secant and cosecant? That is, </p>
<p><span class="math-container">$$\csc (A\pm B) = ?$$</span> and <span class="math-container">$$\sec (A \pm B) = ?$$</span></p>
<p>I tried a variation of the sum and difference formulas, but they were incorrect. Can it be derived geometrically?</p>
| Community | -1 | <p>Let <span class="math-container">$x \in X$</span> and let <span class="math-container">$B_1(x)$</span> be the open ball around <span class="math-container">$x$</span> with radius <span class="math-container">$1$</span> with respect to the metric <span class="math-container">$d$</span>. Obviously, <span class="math-container">$X= \cup_{x \in X}B_1(x)$</span>. As <span class="math-container">$X$</span> is compact, <span class="math-container">$X$</span> is covered by a finite union <span class="math-container">$B_1(x_1) \cup ... \cup B_1(x_n)$</span>. Each <span class="math-container">$B_1(x_i)$</span> is bounded, and a finite union of bounded sets is also bounded.</p>
|
97,131 | <p>I have the following problem:</p>
<p>I have a convex hull $\Omega$ defined by a set of n-dimensional hyperplanes $S = [(n_1,d_1), (n_2,d_2),...,(n_k,d_k)]$ such that a point $p \in \Omega$ if $n_i^T p \geq d_i \quad \forall (n_i,d_i) \in S $. Now I have a "joining" hyperplane $(n_{k+1},d_{k+1})$ and I want to know if this hyperplane "modifies the shape" of the convex hull and in that case, which hyperplanes of $S \bigcup (n_{k+1},d_{k+1})$ are not necessary anymore because they become redundant.</p>
<p>Trivial example with one dimension:</p>
<p>My convex hull is described by the inequality $ 3 \leq x \leq 5$ so </p>
<p>$S = [(1,3),(-1,-5)]$</p>
<p>The joining hyperplane is the inequality $ x \geq 4$ so the resulting convex hull should be
$ 4 \leq x \leq 5$</p>
<p>$S = [(1,4),(-1,-5)]$</p>
<p>returning $(1,3)$.</p>
<p>Now I would like the same thing generalized for n-dimensions. I can get algorithms till 3 dimensions but they are not generalized.</p>
<p>Do you have any hints or pointers on how I can find a solution to this problem?</p>
<p>p.s. I apologize for the sloppy description, I am not a mathematician. Please feel free to ask for more details.</p>
<p>Kind regards.</p>
| Joseph O'Rourke | 6,094 | <p>If you search for <em>detection of redundant constraints in linear programming</em> you will find many hits, including one to an MO question, "<a href="https://mathoverflow.net/questions/69662/">Detection of Redundant Constraints</a>."
One source paper is</p>
<blockquote>
<p>J. Gondzio. Presolve analysis of linear programs prior to applying an interior point method. <em>INFORMS Journal on Computing</em>, 9(1):73–91, 1997.</p>
</blockquote>
<p>Google Scholar will permit you to locate the 77 papers that cite this.
Detecting redundant constraints in linear programming is by now a standard topic,
and is in fact incorporated into most LP solvers.</p>
<p>To respond to Gerhard's comment:</p>
<blockquote>
<p>However, it is known
that the problem of determining whether or not a linear matrix inequality constraint is redundant or not is NP-complete, in general.</p>
</blockquote>
<p>This from "Identifying Redundant Linear Constraints in Systems of Linear Matrix
Inequality Constraints,"
Jibrin & Stover, 2007. <a href="http://www.cefns.nau.edu/Academic/Math/researchInterests/students/sdpred_may31_07.pdf" rel="nofollow noreferrer">PDF download</a></p>
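<p>To make the LP idea concrete in the smallest nontrivial case, here is a sketch (my own illustration, not from the papers above) for two-dimensional systems $n_i^T p \geq d_i$: constraint $i$ is redundant iff the minimum of $n_i^T p$ over the region cut out by the other constraints is still at least $d_i$. For a bounded planar region that minimum is attained at a vertex, so brute-force vertex enumeration suffices; real presolvers answer the same question with an LP solve, which works in any dimension.</p>

```python
from itertools import combinations

TOL = 1e-9

def vertices(constraints):
    # intersection points of boundary-line pairs that satisfy every
    # n.p >= d, i.e. the vertices of the feasible polygon
    verts = []
    for (n1, d1), (n2, d2) in combinations(constraints, 2):
        det = n1[0] * n2[1] - n1[1] * n2[0]
        if abs(det) < TOL:
            continue                      # parallel boundary lines
        x = (d1 * n2[1] - d2 * n1[1]) / det
        y = (n1[0] * d2 - n2[0] * d1) / det
        if all(n[0] * x + n[1] * y >= d - TOL for n, d in constraints):
            verts.append((x, y))
    return verts

def is_redundant(constraints, i):
    # constraint i is redundant iff the min of n_i.p over the region of the
    # remaining constraints is >= d_i (assumes that region is bounded, nonempty)
    rest = constraints[:i] + constraints[i + 1:]
    n, d = constraints[i]
    vs = vertices(rest)
    return bool(vs) and min(n[0] * x + n[1] * y for x, y in vs) >= d - TOL

# the square [-1,1]^2 written as n.p >= d, plus the tightening cut x >= -0.5
S = [((1, 0), -1), ((-1, 0), -1), ((0, 1), -1), ((0, -1), -1), ((1, 0), -0.5)]
print(is_redundant(S, 0), is_redundant(S, 4))  # x >= -1 is redundant, x >= -0.5 is not
```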
|
2,604,206 | <p>Can anyone provide links to a concrete proof? Intuitively, the two-dimensional real space is infinite. so there should be infinitely many subspaces. But how do I go about a proof?</p>
| asdq | 466,346 | <p>Take $v_\epsilon=(\epsilon,1)$, for $\epsilon \in [0,1]$. Then $\lambda v_\epsilon + \mu v_\nu=0$ implies $\lambda \epsilon + \mu \nu =0$ and $\lambda + \mu =0$, hence $\lambda (\epsilon -\nu)=0$. This shows that for $\epsilon \neq \nu$, $v_\epsilon$ and $v_\nu$ are linearly independent and therefore span different subspaces in $\mathbb{R}^2$. Hence we get an injective map from $[0,1]$ into the set of all subspaces of $\mathbb{R}^2$. Since $[0,1]$ is clearly infinite, we must have infinitely many subspaces in $\mathbb{R}^2$.</p>
|
1,811,109 | <p>How can we cause this relation to be true?</p>
<blockquote>
<p>$$x \sin\theta + y \cos\theta = \sqrt{ x^2 + y^2 } \tag{$\star$}$$</p>
</blockquote>
<p>I know the identity</p>
<p>$$x \sin\theta + y \cos\theta = \sqrt{x^2+y^2}\; \sin\left(\theta + \operatorname{atan}\frac{y}{x}\right)$$
What can make the sine part "$1$" (or just approximately "$1$") so that $(\star)$ holds?</p>
| Prasanna Venkatesan | 320,861 | <p>First of all, you should try this yourself and include the details of your effort.
Anyway, it is not an identity. Here is the working:</p>
<p>set: </p>
<p>$$ x=r \cos(\alpha)~~ \text{and}~~ y=r \sin(\alpha)$$</p>
<p>Substituting back in the equation we get</p>
<p>$$ r \cos(\alpha) \sin(\theta) + r \sin(\alpha) \cos(\theta)=r$$</p>
<p>$$ \cos(\alpha) \sin(\theta) + \sin(\alpha) \cos(\theta)=1$$</p>
<p>$$ \sin(\alpha + \theta) = 1$$</p>
<p>$$ \alpha + \theta = n\pi + (-1)^n \pi/2 ~~~~\text{ - (*)}$$</p>
<p>The last equation gives a relation between $\theta$ and $\alpha$, but both are arbitrary variables.</p>

<p>Therefore, since the equation $x \sin\theta + y \cos\theta = \sqrt { x^2 + y^2 }$ holds only for the particular values of $\theta$ related to $\alpha$ by $(*)$, and not for arbitrary $\theta$, it is not an identity.</p>
|
2,600,679 | <p>Provided two real number sequences: $a_1,a_2,...,a_n$;$b_1,b_2,...,b_n$, define their means respectively:
$$\bar a=\frac{1}{n}\sum_{i=1}^n a_i,\bar b=\frac{1}{n}\sum_{i=1}^n b_i$$
and define their variances and covariance respectively:
$$var(a)=\frac{1}{n}\sum_{i=1}^n (a_i-\bar a)^2,var(b)=\frac{1}{n}\sum_{i=1}^n (b_i-\bar b)^2,cov(a,b)=\frac{1}{n}\sum_{i=1}^n (a_i-\bar a)(b_i-\bar b)$$
naturally leads to the definition of normalized cross correlation:
$$NCC=\frac{cov(a,b)}{\sqrt{var(a)var(b)}}=\frac{\sum_{i=1}^n(a_i-\bar a)(b_i-\bar b)}{\sqrt{\sum_{i=1}^n (b_i-\bar b)^2 \sum_{i=1}^n (a_i-\bar a)^2}}$$
Now how to show that $NCC$ lies in $[-1,1]$?</p>
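<p>As a sanity check I verified the bound numerically (a sketch; note the common factors $1/n$ cancel in $NCC$, so they are omitted, and the bound itself should follow from the Cauchy–Schwarz inequality applied to the centered sequences):</p>

```python
import random

def ncc(a, b):
    # the 1/n factors cancel between cov and the variances, so omit them
    n = len(a)
    am, bm = sum(a) / n, sum(b) / n
    cov = sum((x - am) * (y - bm) for x, y in zip(a, b))
    va = sum((x - am) ** 2 for x in a)
    vb = sum((y - bm) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
for _ in range(1000):
    a = [random.uniform(-5, 5) for _ in range(20)]
    b = [random.uniform(-5, 5) for _ in range(20)]
    assert -1.0 - 1e-12 <= ncc(a, b) <= 1.0 + 1e-12

# |NCC| = 1 is attained when one sequence is an increasing affine image of the other
assert abs(ncc([1, 2, 3, 4], [2 * x + 3 for x in [1, 2, 3, 4]]) - 1.0) < 1e-12
print("NCC stayed in [-1, 1] in every trial")
```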
| Enrico M. | 266,764 | <p>Yes.</p>
<p>$$\sum_{k = 0}^{+\infty} \frac{x^k}{(k!)^2} = I_0\left(2 \sqrt{x}\right)$$</p>
<p>Where $I_0$ is the modified Bessel Function of the first kind.</p>
<p><a href="http://mathworld.wolfram.com/ModifiedBesselFunctionoftheFirstKind.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/ModifiedBesselFunctionoftheFirstKind.html</a></p>
|
619,040 | <p>An exponential object $B^{A}$ is defined to be the representing object of the functor $$\mathcal{C}\left(- \times A,B\right): \mathcal{C} \rightarrow Set$$
or equivalently, as the terminal object of $\left(-\times A \downarrow B\right)$. The dual concept is of the co-exponential object which is the initial object of the $\left(B\downarrow -\times A \right)$. </p>
<blockquote>
<p>Is co-exponential object as useful as exponential object? What is the notation for them and what are the interesting examples of co-exponential objects? What is right (or left) adjoint of the functor which send any object to the co-exponential (with a fixed base)?</p>
</blockquote>
<p>Thanks</p>
| Henry Story | 253,728 | <p>I have given <a href="https://math.stackexchange.com/questions/3621660/examples-of-co-implication-a-k-a-co-exponential/3624965#3624965">an explanation with an example</a> and many references on how to think of co-exponentials.
I have not yet worked out for myself how useful they are under that name.</p>

<p>But a very powerful argument for taking them seriously is that reasoning requires the ability to think about how to falsify arguments put forward by someone else. So when someone puts forward an argument in constructive logic to the effect that <span class="math-container">$\Delta \vdash \alpha$</span>, it can be questioned by refuting <span class="math-container">$\alpha$</span>, which, if one accepts the sequent, requires one to also find one or more statements in <span class="math-container">$\Delta$</span> that one rejects. This falsificationist reasoning is co-constructive, and that is where co-implications appear too; co-implication is another word for co-exponential. </p>
<p>It has been forcefully argued that Scientific Reasoning is falsificationist by <a href="https://de.wikipedia.org/wiki/Karl_Popper" rel="nofollow noreferrer">Karl Popper</a>. In which case it would be very important.</p>
|
304,209 | <p>I am trying to learn weak derivatives. In that, we call <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span> functions as test functions and we use these functions in weak derivatives. I want to understand why these are called <em>test functions</em> and why the functions with these properties are needed. I have some idea about these but couldn't understand them properly.</p>
<p>Also, I'll be happy if any one can suggest some good reference on this topic and Sobolev spaces.</p>
| nicomezi | 316,579 | <p>To understand why they are called test functions, we have to understand what distributions are and where they come from.</p>

<p>Usually, to evaluate a function, we compute its value at the point where we want to know it. But remember that there are spaces of functions (or of equivalence classes of functions), such as <span class="math-container">$L^p$</span> spaces, where the value at a point is not a good representation of the underlying function (it may even make no sense at all). So why not try to evaluate the function as some sort of weighted mean? An integral of <span class="math-container">$f$</span> against a wisely chosen function can be seen as exactly such a weighted mean.</p>
<p>We define :</p>
<p><span class="math-container">$$T_f(\phi)=\int_{\mathbb{R}}f(x) \phi(x) dx.$$</span></p>
<p>In fact, test functions are just a new way to "know" a function. Since we have the choice of the space from which we take our test functions, why not take one with very good properties? So we choose <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span>. By doing so, we can integrate a large variety of functions, and we also get linearity and continuity (with respect to the topology of <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span>) of the mapping <span class="math-container">$T_f$</span> whenever it is well defined. We can also define many useful operations by transferring every difficulty one would meet with <span class="math-container">$f$</span> onto the test functions. </p>

<p>We have just constructed a continuous linear map from <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span> to <span class="math-container">$\mathbb{R}$</span>. Such maps are known as linear forms. So we define a distribution to be a continuous linear form from <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span> to <span class="math-container">$\mathbb{R}$</span>.</p>

<p>This way, we can take <span class="math-container">$\phi$</span> narrower and narrower around the point we are considering, with constant area, to probe the "value" of <span class="math-container">$f$</span> near it.</p>

<p>Note that this is just the tip of the iceberg, and the OP may already understand why they are called this way. But I wanted to share this point of view here.</p>
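<p>A small numerical illustration of this pairing (a sketch; the function $f$ and the grid are my own choices): for $f(x)=|x-0.3|$ and a standard bump test function $\phi$, the integral $\int f\phi'$ agrees with $-\int f'\phi$ where $f'=\operatorname{sign}(x-0.3)$, which is exactly how a weak derivative is detected through test functions.</p>

```python
import math

def phi(x):
    # the classic C_c^infinity bump, supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def phi_prime(x):
    # derivative of the bump, by the chain rule
    if abs(x) >= 1:
        return 0.0
    return phi(x) * (-2.0 * x) / (1.0 - x * x) ** 2

f = lambda x: abs(x - 0.3)                        # kink at 0.3
f_weak = lambda x: 1.0 if x > 0.3 else -1.0       # its weak derivative

# midpoint rule on (-1, 1); phi vanishes to all orders at the endpoints
N = 100_000
h = 2.0 / N
xs = [-1.0 + (k + 0.5) * h for k in range(N)]
lhs = sum(f(x) * phi_prime(x) for x in xs) * h    # integral of f * phi'
rhs = -sum(f_weak(x) * phi(x) for x in xs) * h    # minus integral of f' * phi
print(lhs, rhs)   # the two values agree to several digits
```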
|
987,054 | <p>Prove that the sequence
$$b_n=\left(1+\frac{1}{n}\right)^{n+1}$$
Is decreasing.</p>
<p>I have calculated $b_n/b_{n-1}$ and obtained:
$$\left(1-\frac{1}{n^2}\right)^n \left(1+\frac{1}{n}\right)$$
But I can't go on.</p>
<p>Any suggestions please?</p>
| Community | -1 | <p>$$y=\left(1+\frac{1}{x}\right)^{x+1}$$</p>
<p>$$\ln y=({x+1})\cdot\ln\left(1+\frac{1}{x}\right)$$</p>
<p>$$y'\frac{1}{y}=\ln\left(1+\frac{1}{x}\right)+(x+1)\cdot \frac{1}{1+\frac{1}{x}}\cdot\left(-\frac{1}{x^2}\right)$$</p>
<p>$$y'=\left(\ln\left(1+\frac{1}{x}\right)-\frac{1}{x}\right)\cdot\left(1+\frac{1}{x}\right)^{x+1}$$</p>
<p>$$\Rightarrow y'<0,$$</p>

<p>since $\ln(1+t)<t$ for every $t>0$. Hence $y$ is decreasing.</p>
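<p>A quick numerical check of both the monotonicity and the inequality $\ln(1+1/x)<1/x$ that drives it (a sketch, independent of the proof):</p>

```python
import math

# b_n = (1 + 1/n)^(n+1) computed for n = 1, ..., 2000
b = [(1 + 1 / n) ** (n + 1) for n in range(1, 2001)]
assert all(b[i] > b[i + 1] for i in range(len(b) - 1))   # strictly decreasing

# the key inequality behind y' < 0: ln(1 + 1/x) < 1/x for x > 0
for x in [0.5, 1.0, 2.0, 10.0, 1e6]:
    assert math.log(1 + 1 / x) < 1 / x

print(b[0], b[-1])   # 4.0 at n = 1, approaching e = 2.71828... from above
```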
|
372,211 | <p>I'm trying to write an <a href="http://developer.android.com/reference/android/view/animation/Interpolator.html" rel="nofollow noreferrer">interpolator</a> for a translate animation, and I'm stuck. The animation passes a single value to the function. This value maps a value representing the elapsed fraction of an animation to a value that represents the interpolated fraction. The value starts at 0 and goes to 1 when the animation completes. So for instance, if I wanted a linear translation (constant velocity) my function would look like:</p>
<pre><code>function(input) {
return input
}
</code></pre>
<p>What I need is for the velocity to remain constant for half the animation, then decelerate rapidly to zero. What this essentially means is that the values I return must equal the input values for the first half of the animation (until 0.5); after that, the output must still increase from 0.5 to 1.0, but with a smaller change between calls than the change in the input values.</p>
<p><hr>
EDIT (straight from the Android docs):</p>
<p>The following table represents the approximate values that are calculated by example interpolators for an animation that lasts 1000ms:</p>
<p><img src="https://i.stack.imgur.com/UQrow.png" alt="enter image description here"></p>
<p>As the table shows, the LinearInterpolator changes the values at the same speed, .2 for every 200ms that passes. The AccelerateDecelerateInterpolator changes the values faster than LinearInterpolator between 200ms and 600ms and slower between 600ms and 1000ms.</p>
<p><hr>
EDIT 2:</p>
<p>I thought I should provide an example of something that works, just not the way I want it to. The function for a decelerate interpolation provided with the Android framework is exactly:</p>
<pre><code>function(input) {
return (1 - (1 - input) * (1 - input))
}
</code></pre>
| bubba | 31,744 | <p>Let $s(t)$ denote the distance moved at time $t$. Choose some value $h$ with $0 \le h \le 1$. Then the required function is:</p>
<p>$$s(t) = \frac{2t}{1+h} \quad \text{for } 0 \le t \le h $$</p>
<p>$$s(t) = \frac{t^2 - 2t + h^2}{h^2 - 1} \quad \text{for } h \le t \le 1 $$</p>
<p>The first part of the curve (where $0 \le t \le h$) is a straight line, obviously.</p>
<p>The second part (where $h \le t \le 1$) is a piece of parabola. At $t=h$, the line and the parabola join smoothly. </p>
<p>The speed of the particle varies as follows:</p>
<p>From $t=0$ to $t=h$, the speed is constant, $2/(1+h)$.</p>
<p>From $t=h$ to $t=1$, the speed gradually decreases.</p>
<p>At $t=1$, the speed is zero.</p>
<p>You can adjust $h$ to get the behaviour you want. Any value with $0 \le h \le 1$ will work.</p>
<p>If you choose $h=0$, you get the Android function.</p>
<p>If you choose $h=1$, you get purely linear motion.</p>
<p>Here's what the composite curve looks like when we choose $h=0.5$:
<img src="https://i.stack.imgur.com/1Y7ru.jpg" alt="enter image description here"></p>
|
1,990,670 | <blockquote>
<p>Assume that $0 < \theta < \pi$. Solve the following equation for $\theta$. $$\frac{1}{(\cos \theta)^2} = 2\sqrt{3}\tan\theta - 2$$ </p>
</blockquote>
<p><a href="https://i.stack.imgur.com/SoU8A.png" rel="nofollow noreferrer">Question and Answer</a></p>
<p>Regarding the attached image, which shows the question and the answer:</p>
<p>How could I solve this question and what are the steps to follow to reach the answer?</p>
| EnlightenedFunky | 372,659 | <p>HINT: Write $\frac{1}{\cos^2(x)}$ as $\sec^2(x)$ then look up Pythagorean identities. And possibly solutions to quadratic equations.</p>
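<p>Carrying the hint through (a sketch): with $\sec^2\theta = 1+\tan^2\theta$ the equation becomes $\tan^2\theta - 2\sqrt3\tan\theta + 3 = 0$, i.e. $(\tan\theta-\sqrt3)^2=0$, so $\tan\theta=\sqrt3$ and $\theta=\pi/3$ on $(0,\pi)$. A numerical confirmation:</p>

```python
import math

theta = math.pi / 3          # from tan(theta) = sqrt(3) on (0, pi)
lhs = 1.0 / math.cos(theta) ** 2
rhs = 2.0 * math.sqrt(3) * math.tan(theta) - 2.0
assert abs(lhs - rhs) < 1e-9
assert abs(math.tan(theta) - math.sqrt(3)) < 1e-12

# and the quadratic in t = tan(theta) has the double root sqrt(3)
t = math.sqrt(3)
assert abs(t * t - 2 * math.sqrt(3) * t + 3) < 1e-12
print("theta = pi/3 checks out")
```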
|
3,807,708 | <p>I was asked to prove the following identity (starting from the left-hand side):
<span class="math-container">$$(a+b)³(a⁵+b⁵)+5ab(a+b)²(a⁴+b⁴)+15a²b²(a+b)(a³+b³)+35a³b³(a²+b²)+70a⁴b⁴=(a+b)^8.$$</span>
I'm trying to solve it by a sort of "inspection", but I haven't succeeded yet. Of course I could expand the left-hand polynomial and reduce it to the recognizable form of <span class="math-container">$(a+b)^8$</span>, but that would be the hard way (assuming that there is an easy one).</p>
<p>As an example of why I am talking of "inspection" I can state a similar problem:</p>
<p>Show that <span class="math-container">$$(x+\frac{5}{2}a)⁴-10a(x+\frac{5}{2}a)³+35a²(x+\frac{5}{2}a)²-50a³(x+\frac{5}{2}a)+24a⁴=(x²-\frac{1}{4}a²)(x²-\frac{9}{4}a²).$$</span>
Here by "inspection" we can deduce that the left-hand side of the identity is equivalent to <span class="math-container">$$[(x+\frac{5}{2}a)-a][(x+\frac{5}{2}a)-2a][(x+\frac{5}{2}a)-3a][(x+\frac{5}{2}a)-4a]$$</span> and then after a few steps come to the the desire result.</p>
<p>I would appreciate any help you could give me.</p>
| user | 505,767 | <p>Since the expression is symmetric and in decreasing order, it suffices to consider only the expansion for <span class="math-container">$a$</span> that is</p>
<p><span class="math-container">$$a^8+3a^7+3a^6+a^5+5a^7+10a^6+5a^5+15a^6+15a^5+35a^5+70a^4$$</span></p>
<p><span class="math-container">$$a^8+8a^7+28a^6+56a^5+70a^4$$</span></p>
<p>which agrees with the row of <a href="https://en.wikipedia.org/wiki/Pascal%27s_triangle" rel="nofollow noreferrer">Pascal's triangle</a> for <span class="math-container">$n=8$</span> that is</p>
<p><span class="math-container">$$(a+b)^8=\sum_{k=0}^8 \binom 8 k a^kb^{8-k}$$</span></p>
|
4,565,584 | <blockquote>
<p>Let <span class="math-container">$X = (-1,1)^{\Bbb N}$</span> have the product topology. Is the subset <span class="math-container">$(0,1)^{\Bbb N}$</span> open?</p>
</blockquote>
<p>To consider whether <span class="math-container">$(0,1)^{\Bbb N}$</span> is open I know that it is if I can find a basic open set that is a subset of this set, but I'm tangled up with the definitions. Is it that</p>
<blockquote>
<p><span class="math-container">$(0,1)^{\Bbb N}$</span> is open if one can find a basic open set <span class="math-container">$\prod_{n \in \Bbb N} V_n \subset (0,1)^{\Bbb N}$</span> such that <span class="math-container">$\color{red}{V_n = (0,1)^\Bbb N}$</span> for all but finitely many <span class="math-container">$n$</span>.</p>
</blockquote>
<p>or is it that</p>
<blockquote>
<p><span class="math-container">$(0,1)^{\Bbb N}$</span> is open if one can find a basic open set <span class="math-container">$\prod_{n \in \Bbb N} V_n \subset (0,1)^{\Bbb N}$</span> such that <span class="math-container">$\color{red}{V_n = (-1,1)^\Bbb N}$</span> for all but finitely many <span class="math-container">$n$</span>.</p>
</blockquote>
<p>I have some confusion with this since we're dealing with a subspace of <span class="math-container">$(-1,1)^\Bbb N$</span>.</p>
| Theo Bendit | 248,286 | <p>Neither, but the second is closest.</p>
<p>Simply finding a basic open subset is insufficient to show that a set is open (e.g. on the real line, <span class="math-container">$(0, 1]$</span> contains the basic open set <span class="math-container">$(0, 1)$</span>, but <span class="math-container">$(0, 1]$</span> is not open). In order to establish that a set <span class="math-container">$A$</span> is open with respect to the topology generated by a basis <span class="math-container">$\mathcal{B}$</span>, you need to show that, for all <span class="math-container">$x \in A$</span>, there exists a <span class="math-container">$U \in \mathcal{B}$</span> such that <span class="math-container">$x \in U \subseteq A$</span>. That is, every point in the set is contained in a basic open set (again, which is true of most points of <span class="math-container">$(0, 1]$</span>, but not for <span class="math-container">$x = 1$</span>).</p>
<p>On the other hand, this means that any non-empty open set must contain at least one basic open set! While this is isn't sufficient to be non-empty and open, it is definitely necessary. This necessary condition matches your second statement. If you can show that this condition is false, then <span class="math-container">$(0, 1)^\Bbb{N}$</span> is definitely not open (but the converse is not true in general).</p>
<p>The (usual) basis of the product topology consists of products of open subsets of <span class="math-container">$(-1, 1)$</span>, taking the form <span class="math-container">$\prod_{n \in \Bbb{N}} U_n$</span>, such that all but finitely many <span class="math-container">$\mathcal{U}_n$</span> are equal to the full space <span class="math-container">$(-1, 1)$</span>. Does <span class="math-container">$(0, 1)^\Bbb{N}$</span> contain any such basic open sets? No it doesn't; any such <span class="math-container">$\prod_{n \in \Bbb{N}} U_n$</span> contains points with <span class="math-container">$0$</span> coordinates (in <span class="math-container">$n$</span>th position, where <span class="math-container">$n$</span> is such that <span class="math-container">$\mathcal{U}_n = (-1, 1)$</span>), and <span class="math-container">$(0, 1)^\Bbb{N}$</span> contains no such points. Thus, the necessary condition for openness has failed, and <span class="math-container">$(0, 1)^\Bbb{N}$</span> is not open in the product topology.</p>
|
4,565,584 | <blockquote>
<p>Let <span class="math-container">$X = (-1,1)^{\Bbb N}$</span> have the product topology. Is the subset <span class="math-container">$(0,1)^{\Bbb N}$</span> open?</p>
</blockquote>
<p>To consider whether <span class="math-container">$(0,1)^{\Bbb N}$</span> is open I know that it is if I can find a basic open set that is a subset of this set, but I'm tangled up with the definitions. Is it that</p>
<blockquote>
<p><span class="math-container">$(0,1)^{\Bbb N}$</span> is open if one can find a basic open set <span class="math-container">$\prod_{n \in \Bbb N} V_n \subset (0,1)^{\Bbb N}$</span> such that <span class="math-container">$\color{red}{V_n = (0,1)^\Bbb N}$</span> for all but finitely many <span class="math-container">$n$</span>.</p>
</blockquote>
<p>or is it that</p>
<blockquote>
<p><span class="math-container">$(0,1)^{\Bbb N}$</span> is open if one can find a basic open set <span class="math-container">$\prod_{n \in \Bbb N} V_n \subset (0,1)^{\Bbb N}$</span> such that <span class="math-container">$\color{red}{V_n = (-1,1)^\Bbb N}$</span> for all but finitely many <span class="math-container">$n$</span>.</p>
</blockquote>
<p>I have some confusion with this since we're dealing with a subspace of <span class="math-container">$(-1,1)^\Bbb N$</span>.</p>
| Jakobian | 476,484 | <p>For any product of topological spaces <span class="math-container">$X_i$</span>, if <span class="math-container">$U\subseteq \prod_{i\in I} X_i$</span> is open and non-empty, and <span class="math-container">$\pi_j:\prod_{i\in I} X_i\to X_j$</span> is the projection onto the <span class="math-container">$j$</span>th factor, then necessarily <span class="math-container">$\pi_j(U) = X_j$</span> for all but finitely many <span class="math-container">$j$</span>.</p>
<p>This is because if we take <span class="math-container">$x\in U$</span>, then there is some basic open set <span class="math-container">$V$</span> with <span class="math-container">$x\in V\subseteq U$</span> (that is, <span class="math-container">$V = \bigcap_{k=1}^m \pi_{j_k}^{-1}(U_k)$</span> for some <span class="math-container">$j_1, ..., j_m\in I$</span> and <span class="math-container">$U_k\subseteq X_{j_k}$</span> open), and then we have the inclusions <span class="math-container">$\pi_j(U)\supseteq \pi_j(V)$</span>, and since <span class="math-container">$\pi_j(V) = X_j$</span> for all but finitely many <span class="math-container">$j$</span> (it holds for all <span class="math-container">$j\in I\setminus\{j_1, ..., j_m\}$</span>), the same must hold for <span class="math-container">$\pi_j(U)$</span>.</p>
<p>In particular, <span class="math-container">$\pi_n((0, 1)^\mathbb{N}) = (0, 1)\neq (-1, 1)$</span> for all <span class="math-container">$n\in\mathbb{N}$</span>, and the above condition doesn't hold. The set cannot be open.</p>
|
508,059 | <p>How it is possible to considerably shorten the list of properties that define a vector space by using definitions from abstract algebra?</p>
| DonAntonio | 31,254 | <p>A vector space is a module over a division ring.</p>
<p>Less short? The pair $\;(V,\Bbb F)\;$ is a vector space if $\;V\;$ is an additive (abelian) group and $\;V\;$ is a module over $\;\Bbb F\;$.</p>
|
508,059 | <p>How it is possible to considerably shorten the list of properties that define a vector space by using definitions from abstract algebra?</p>
| Matemáticos Chibchas | 52,816 | <p>An $R$-module ($R$ a commutative ring with unity) is an abelian group $M$ endowed with a ring (with unity) homomorphism $R\to\mathrm{End}(M,M)$.</p>
|
89,845 | <p>First, I think we can avoid set theory in building first-order logic, by working with operations on finite strings. But I have the following questions:</p>
<p>How does "meta-logic" work. I don't really know this stuff yet, but from what I can see right now, meta-logic proves things about formal languages and logics in general. But does it use some logic to do so? Like if I want to prove that two formal languages are equivalent in some respect, aren't I presupposing a "background" formal language? And won't my choice of a "background" (meta) language affect what I can and can't demonstrate? For example, what logic was Godel using when he proved his famous theorems? Was it a bivalent one? A three valued logic? etc</p>
<p>In short,I'm still not sure how reasoning about all possible formal languages work. For example, suppose I say something of the form "for all formal theories, F, if F has property X, then F must have property Y". If I wanted to prove something like that, how does such very general reasoning work? What I mean is that in such a proof, what kind of logic would be employed (for example, would it be a two valued logic?), and does the choice of logic affect the outcome? Do logicians agree on some kind of meta-meta logic, which they use to reason about absolutely everything? Or do they just choose their favorite one?</p>
<p>If metalogic is just predicate logic, it seems circular to me! We build the theory of predicate logic by using predicate logic? For example, in proving some theorem in the object language we seem to assume that it is already correct (in the metalanguage). Or in defining some connective in the object language, we use that connective in the metalanguage to do so. It's like they're saying "Alright guys! We are going to prove a bunch of stuff about logic! Oh, by the way, you have to take all this stuff we are about to prove for granted, but don't worry, that's just the "metalanguage"." Something about this seems wrong to me. Maybe I have misunderstood?</p>
| Community | -1 | <p>There are two roles for metalanguage: First, to avoid contradictions like the liar paradox, because in the liar paradox we have a statement that speaks about itself, so it does not respect the hierarchy of language and metalanguage.
The other role is to allow us to speak freely and to use theorems of the language as meta-theorems. So if we use the scheme of deduction using the principle of excluded middle, we know that we are using a meta-theorem, but this is just a way to use the corresponding theorem in the object language without repetition. So for example the metatheorem:
"If the negation of a proposition A does not hold, then A holds"
can be replaced by the theorem in the object language " $\neg \neg A \Rightarrow A$".</p>
|
293,921 | <p>The problem I am working on is:</p>
<p>An ATM personal identification number (PIN) consists of four digits, each a 0, 1, 2, . . . 8, or 9, in succession.</p>
<p>a.How many different possible PINs are there if there are no restrictions on the choice of digits?</p>
<p>b.According to a representative at the author’s local branch
of Chase Bank, there are in fact restrictions on the choice
of digits. The following choices are prohibited: (i) all four
digits identical (ii) sequences of consecutive ascending or
descending digits, such as 6543 (iii) any sequence starting with 19 (birth years are too easy to guess). So if one of the PINs in (a) is randomly selected, what is the probability that it will be a legitimate PIN (that is, not be one of the prohibited sequences)?</p>
<p>c. Someone has stolen an ATM card and knows that the first
and last digits of the PIN are 8 and 1, respectively. He has
three tries before the card is retained by the ATM (but
does not realize that). So he randomly selects the $2^{nd}$ and $3^{rd}$
digits for the first try, then randomly selects a different pair of digits for the second try, and yet another randomly selected pair of digits for the third try (the
individual knows about the restrictions described in (b)
so selects only from the legitimate possibilities). What is
the probability that the individual gains access to the
account?</p>
<p>d.Recalculate the probability in (c) if the first and last digits are 1 and 1, respectively. </p>
<hr />
<p>For part a): The total number of pins without restrictions is $10,000$</p>
<p>For part b): The number of pins in either ascending or descending order is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known, then the three other spots containing digits are already spoken for. The number of pins where each slot contains the same digit is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known there is only one option left to the rest of the slots. The number of pins that have their first and second slot occupied by 1 and 9, respectively, is $1 \cdot 1 \cdot 10 \cdot 10$. So, if R is the set that contains these restricted pins, then $|R| = 130$; and if N is the set that contains the non-restricted ones, meaning R and N are complementary sets, then $|N| = 10,000 - 130$. <strong>Hence, the probability is then $P(N) = 9870/10000 = 0.9870.$ However, the answer is $0.9876$. What did I do wrong?</strong></p>
<p>For part c): The sample space, containing all of the outcomes of the experiment that will take place, is $|N|=9870$. When it says that the thief won't use the same pair of digits in each try, does that not allow him trying the pin 8 <strong>5 2</strong> 1 in one try and the pin 8 <strong>2 5</strong> 1 in another try?</p>
| joriki | 6,622 | <p>For b): Which is the descending sequence starting with $1$ that you counted?</p>
<p>For c): Good question; the problem is badly worded in that regard. Taking it literally, I'd tend to interpret it as referring to unordered pairs, but since it makes little sense to couple two different PINs in this manner, I suspect that they actually mean ordered pairs. However, note that the answer doesn't depend on this.</p>
<p>I understand neither why the question says that the thief knows the restrictions, nor why you say that the sample space has size $9870$. The thief knows that the first and last digits are $8$ and $1$, respectively; that's not compatible with any of the sequences excluded by the restrictions, and it doesn't allow for $9870$ possibilities.</p>
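<p>Since the counts in part (b) are small, the book's value can also be confirmed by brute force. A sketch in Python (the helper name <code>legitimate</code> is ours, not from the text):</p>

```python
from itertools import product

# Apply the three restrictions from part (b) to a 4-tuple of digits.
def legitimate(d):
    if len(set(d)) == 1:                              # (i) all four identical
        return False
    if all(d[i+1] - d[i] == 1 for i in range(3)):     # (ii) consecutive ascending
        return False
    if all(d[i] - d[i+1] == 1 for i in range(3)):     # (ii) consecutive descending
        return False
    if d[0] == 1 and d[1] == 9:                       # (iii) starts with 19
        return False
    return True

count = sum(legitimate(d) for d in product(range(10), repeat=4))
print(count, count / 10**4)   # prints: 9876 0.9876
```

<p>This confirms the answer $0.9876$: there are only $7$ ascending and $7$ descending runs (not $10$ of each), which is exactly the point of the hint about the "descending sequence starting with $1$".</p>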
|
386,799 | <blockquote>
<p>P1086: For a closed surface, the positive orientation is the one for which the normal vectors point outward from the surface, and inward-pointing normals give the negative orientation.</p>
<p>P1087: If <span class="math-container">$S$</span> is a smooth orientable surface given in parametric form by a vector function <span class="math-container">$\mathbf{r}(u,v)$</span>, then it is automatically supplied with the orientation of the unit normal <span class="math-container">$\mathbf{n} = \cfrac{\partial_u\mathbf{r} \times \partial_v\mathbf{r}}{\vert \partial_u\mathbf{r} \times \partial_v \mathbf{r} \vert} $</span>...</p>
<p>P1093: The orientation of a surface S induces the positive orientation of the boundary curve C shown in the figure. This means that if one walks in the positive direction around the curve with one's head pointing in the direction of <span class="math-container">$\mathbf{n}$</span>, then the surface is always on one's left.</p>
</blockquote>
<p>How does one determine whether <span class="math-container">$\partial_{\huge{u}}\mathbf{r} \times \partial_{\huge{v}}\mathbf{r} \quad \text{ or } \quad \partial_{\huge{v}}\mathbf{r} \times \partial_{\huge{u}}\mathbf{r} \quad $</span> (negatives of each other) matches the desired orientation?</p>
<p>Since a surface may be hard to sketch (especially under exam conditions), I was hoping for an argument that isn't geometric or visual. But if geometry and visualisation are the easiest, would you please provide pictures for your explanations?</p>
<hr />
<blockquote>
<p>P1091 16.7.<span class="math-container">$23 \text{ generalised.}$</span> <span class="math-container">$\mathbf{F} = (x,-z,y)$</span> and <span class="math-container">$S$</span> is the part of <span class="math-container">$x^2 + y^2 + z^2 = p$</span> in the first octant and oriented towards <span class="math-container">$(0,0,0)$</span>. Evaluate the surface integral <span class="math-container">$\iint_S \mathbf{F} \cdot d\mathbf{S}$</span>. For closed surfaces, use the positive (outward) orientation.</p>
</blockquote>
<p><strong>Solution:</strong> Since <span class="math-container">$S$</span> is a sphere, parameterize with <span class="math-container">$r(\theta, \phi) = (p\sin \phi \cos \theta, p \sin \phi \sin \theta, p \cos \phi)$</span>.<br />
Then <span class="math-container">$\mathbf{F[r(\theta, \phi)]} \cdot \color{red}{(\partial_{\theta} r \times \ \partial_{\phi} r )} = p^3 \sin^3 \phi \cos^2 \theta \qquad (♦)$</span><br />
Then <span class="math-container">$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F} \cdot (\partial_{\theta} r \times \ \partial_{\phi} r ) \, dA = p^3\int^{2\pi}_0 \cos^2 \theta \, d\theta \int^{\pi/2}_0 \sin^3 \phi \, d\phi = ... = p^3 \quad \pi \quad 1/3.$</span>
The answer is given as <span class="math-container">$ -p^\color{red}{2} \quad \pi \quad 1/3 $</span>.</p>
<p>How would've one determined that the cross product in (♦) coloured in red is wrong,<br />
and that it should've been <span class="math-container">$\color{green}{ \partial_{\large{\phi}} r \times \partial_{\large{\theta}} r }$</span> ?</p>
<p>Predicated on user Dan's Answer: <img src="https://i.stack.imgur.com/2Qq3a.jpg" alt="enter image description here" /></p>
| Christian Blatter | 1,303 | <p>A surface $S$ as such is a "two-dimensional smooth set of points" embedded in three-space. At each point $p\in S$ we have a tangent plane $T_p(\sim{\mathbb R}^2)$ with origin at $p$, and this tangent plane has a well defined orthogonal complement $N:=T_p^\perp$, a line through $p$ with origin at
$p$. On this line one can measure lengths, but a-priori it does <strong>not</strong> have a positive, let alone: correct, sense of direction.</p>
<p>For certain purposes, in particular when it comes to the computation of flows, one would like to single out one of the two possible senses as <em>positive</em>, and this in a manner depending continuously on $p$. This can be done in various ways, either by words like "outward", "inward", "upward", "to the right" referring to the implied $(x,y,z)$-coordinate system, or by saying that for a certain parametrization $(u,v)\mapsto{\bf r}(u,v)$ the orientation induced by ${\bf r}_u\times{\bf r}_v$ (in this order!) is positive. Such an explicit directive <em>has to be given by the person that hands you the surface</em> and tells you to do something with it; it does not come out of thin air.</p>
<p>When you are given a geometric description of $S$ (e.g., "the piece of an ellipsoidal surface with axes $\ldots$ bounded by $\ldots$" ), together with the intended orientation, and you look up a parametric representation of $S$ in a catalogue then you have to <strong>verify using geometric visualization</strong> (even in an exam situation) whether ${\bf r}_u\times{\bf r}_v$ induces the intended orientation or not.</p>
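<p>For what it's worth, when computation is allowed, the visual check can be replaced by a numeric one: evaluate ${\bf r}_\theta\times{\bf r}_\phi$ at one convenient point and dot it with a vector whose outward/inward sense you already know. A sketch in Python for the unit sphere (function names are ours):</p>

```python
import math

# Unit sphere parameterized by r(θ, φ) = (sinφ cosθ, sinφ sinθ, cosφ).
def r(theta, phi):
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def d_theta(theta, phi, h=1e-6):      # central-difference approximation of ∂r/∂θ
    a, b = r(theta + h, phi), r(theta - h, phi)
    return tuple((x - y) / (2*h) for x, y in zip(a, b))

def d_phi(theta, phi, h=1e-6):        # central-difference approximation of ∂r/∂φ
    a, b = r(theta, phi + h), r(theta, phi - h)
    return tuple((x - y) / (2*h) for x, y in zip(a, b))

theta, phi = 0.7, 0.9                 # a sample point in the first octant
n = cross(d_theta(theta, phi), d_phi(theta, phi))
outward = r(theta, phi)               # on a sphere, the position vector is outward
dot = sum(a*b for a, b in zip(n, outward))
print("r_theta x r_phi points", "inward" if dot < 0 else "outward")
```

<p>The sign of the dot product with the known outward direction settles the question: with this $(\theta,\phi)$ ordering, ${\bf r}_\theta\times{\bf r}_\phi$ points toward the origin, so the outward orientation requires ${\bf r}_\phi\times{\bf r}_\theta$.</p>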
|
4,285,426 | <p>Intuitively it is quite easy to see why <span class="math-container">$$a \equiv (a \bmod m) \pmod m.$$</span></p>
<p>When you divide a by m you get a remainder in the range <span class="math-container">$0, \dots, m-1.$</span> When you divide the remainder by m again, you get the same number again as the remainder, except that this time the quotient is 0.</p>
<p>I get that.</p>
<p>The question is how to prove it formally.</p>
| Thomas Andrews | 7,933 | <p>At heart, you need this version of the division algorithm to even define <span class="math-container">$(a\bmod m).$</span></p>
<blockquote>
<p>Let <span class="math-container">$a,m\in\mathbb Z, m\neq 0.$</span> Then there is a unique pair <span class="math-container">$q,r\in \mathbb Z$</span> such that <span class="math-container">$a=mq+r$</span> and <span class="math-container">$0\leq r<|m|.$</span></p>
</blockquote>
<p>Often, division algorithm does not include uniqueness. Depending on whether you’ve already proven division algorithm this way, you might have to prove a corollary to get uniqueness.</p>
<p>From that theorem, you define <span class="math-container">$(a\bmod m):=r,$</span> since <span class="math-container">$r$</span> is unique. Without uniqueness, you can’t even define <span class="math-container">$a\bmod m.$</span></p>
<p>But then <span class="math-container">$a-(a\bmod m)=a-r=mq$</span> is divisible by <span class="math-container">$m,$</span> so <span class="math-container">$$a\equiv (a\bmod m)\pmod m,$$</span> by definition.</p>
<hr />
<p>That assumes you’ve defined congruence the easiest and most usual way:</p>
<blockquote>
<p><span class="math-container">$x\equiv y\pmod m$</span> iff <span class="math-container">$x-y$</span> is divisible by <span class="math-container">$m.$</span></p>
</blockquote>
<p>Your approach would seem to indicate a slightly harder definition:</p>
<blockquote>
<p><span class="math-container">$x\equiv y\pmod m$</span> iff <span class="math-container">$(x\bmod m)=(y\bmod m).$</span></p>
</blockquote>
<p>Then you would want to show: <span class="math-container">$$((a\bmod m)\bmod m)=(a\bmod m).\tag1$$</span></p>
<p>That follows from:</p>
<blockquote>
<p><strong>Lemma:</strong> If <span class="math-container">$0\leq c<|m|,$</span> then <span class="math-container">$(c\bmod m)=c.$</span><br>
<strong>Proof:</strong> Since <span class="math-container">$c=m\cdot 0+c$</span> satisfies the division algorithm condition, with <span class="math-container">$q=0, r=c,$</span> you are done.</p>
</blockquote>
<p>Then, by definition of <span class="math-container">$a\bmod m,$</span> you’d have <span class="math-container">$$0\leq (a\bmod m)<|m|,$$</span> and hence you can conclude <span class="math-container">$(1)$</span> from the Lemma.</p>
<p>As you can see, all the work is really in the definitions, and which definitions you choose.</p>
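<p>The definitions above are easy to machine-check on small cases. A sketch in Python (the helper <code>mod</code> is ours, chosen to return the canonical remainder $0\leq r<|m|$ even for negative $m$):</p>

```python
def mod(a, m):
    # the unique r with a = m*q + r and 0 <= r < |m| (m nonzero)
    return a % abs(m)

for a in range(-100, 101):
    for m in [x for x in range(-7, 8) if x != 0]:
        r = mod(a, m)
        assert 0 <= r < abs(m)          # the division-algorithm range condition
        assert (a - r) % m == 0         # m divides a - r, i.e. a ≡ r (mod m)
        assert mod(r, m) == r           # the Lemma: (a mod m) mod m = a mod m
print("all cases check out")
```

<p>The second assertion is exactly the "easiest" definition of congruence; the third is equation $(1)$.</p>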
<hr />
<p>In first order logic formalism, we can’t even define new terminology. We have to replace all terms like “<span class="math-container">$a\bmod m$</span>“ and “<span class="math-container">$x\equiv y\pmod m$</span>” and “<span class="math-container">$|m|$</span>” and “<span class="math-container">$k$</span> is divisible by <span class="math-container">$m$</span>“ by their definitions.</p>
<p>In Peano Axioms for the natural numbers (non-negative integers,) we can’t even talk about “<span class="math-container">$x-y$</span>” in general.</p>
<p>Nobody wants to do that.</p>
|
4,114,034 | <p>In Linear Algebra Done Right by Axler, there are two sentences he uses to describe the uniqueness of Linear Maps (3.5) which I cannot reconcile. Namely, whether the uniqueness of Linear Maps is determined by the choice of 1) <em>basis</em> or 2) <em>subspace</em>. These two seem like very different statements to me given there can be a many-to-one relationship between basis and subspace. In otherwords, saying a Linear Map is "unique on a subspace" seems like a stronger statement than saying it is "unique on a basis".</p>
<p>This first sentence he writes before proving the theorem (3.5):</p>
<blockquote>
<p>The uniqueness part of the next result means that a linear map is completely determined by its
values on a <strong>basis</strong>.</p>
</blockquote>
<p>This second sentence he writes at the end after proving the uniqueness of a linear map:</p>
<blockquote>
<p>Thus <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$span(v_1, \dots, v_n)$</span> by the equation above.
Because <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span>, this implies that <span class="math-container">$T$</span> is <strong>uniquely determined
on <span class="math-container">$V$</span></strong>.</p>
</blockquote>
<p>My question is, if <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$V$</span>, doesn't that imply that the choice of basis for <span class="math-container">$V$</span> doesn't matter (since each basis of <span class="math-container">$V$</span> spans <span class="math-container">$V$</span>)? But if so, a key part of the theorem requires explicitly choosing <span class="math-container">$T$</span> such that <span class="math-container">$T(v_j)=w_j$</span>, meaning if we choose a different basis <span class="math-container">$a_1, \dots, a_n$</span> of <span class="math-container">$V$</span> and then select <span class="math-container">$T$</span> such that <span class="math-container">$T(a_j)=w_j$</span>, we would get a different <span class="math-container">$T$</span>.</p>
<p>I've found these questions <a href="https://math.stackexchange.com/questions/3263742/proving-a-linear-transformation-is-unique">here</a> and <a href="https://math.stackexchange.com/questions/1272797/insights-about-tv-j-w-j-the-linear-maps-and-basis-of-domain?rq=1">here</a> that are tangentially related but don't address my question specifically. On the other hand the questions <a href="https://math.stackexchange.com/questions/1873360/linear-maps-uniqueness-proof-difference-between-uniquely-determined-on-spanv">here</a> and <a href="https://math.stackexchange.com/questions/3059263/linear-map-uniquely-determined-by-span-of-basis">here</a> get a little closer, but the answer in the first suggests that the choice of basis is arbitrary where as the answer in the second suggests the basis must be the same.</p>
<p>For more information, I've included the complete statement of the theorem plus the last paragraph of the proof.</p>
<p><strong>Theorem 3.5 Linear maps and basis of domain</strong></p>
<blockquote>
<p>Suppose <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span> and <span class="math-container">$w_1, \dots, w_n \in W$</span>. Then there exists a unique linear map <span class="math-container">$T: V \to W$</span> such that
<span class="math-container">$Tv_j=w_j$</span> for each <span class="math-container">$j = 1, \dots, n$</span>.</p>
</blockquote>
<p>Last paragraph of proof:</p>
<blockquote>
<p>To prove uniqueness, now suppose that <span class="math-container">$T \in \mathcal{L}(V,W)$</span>; and that <span class="math-container">$Tv_j=w_j$</span> for each <span class="math-container">$j = 1, \dots ,n$</span>. Let <span class="math-container">$c_1, \dots, c_n \in F$</span>. The homogeneity of <span class="math-container">$T$</span> implies that <span class="math-container">$T(c_jv_j) = c_jw_j$</span> for each <span class="math-container">$j=1, \dots, n$</span>. The additivity of <span class="math-container">$T$</span> now implies that <span class="math-container">$T(c_1v_1 + \cdots + c_nv_n) = c_1w_1 + \cdots + c_nw_n$</span>. Thus <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$span(v_1, \dots, v_n)$</span> by the equation above. Because <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span>, this implies that <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$V$</span>.</p>
</blockquote>
| DonAntonio | 31,254 | <p>Simple, direct meaning of definition: for any function <span class="math-container">$\;f\;$</span> , a point <span class="math-container">$\;a\;$</span> in its domain of definition is a fixed point if <span class="math-container">$\;f(a)=a\;$</span> . For your function, this means that we must have</p>
<p><span class="math-container">$$f(x)=x\iff \left(f(x)=\right)2x^2=x\iff x(2x-1)=0\iff x=0,\,x=\frac12$$</span></p>
<p>and there you have two points in <span class="math-container">$\;[-1,1]\;$</span> which are fixed by your function...</p>
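<p>The computation above is easy to confirm with exact rational arithmetic; a small sketch in Python (names ours), scanning a grid of rationals for fixed points of $f(x)=2x^2$:</p>

```python
from fractions import Fraction as F

def f(x):
    return 2 * x * x          # the function from the answer above

# Exact rational scan: f(x) = x forces x(2x - 1) = 0, so only 0 and 1/2 survive.
grid = (F(k, 100) for k in range(-200, 201))
fixed = [x for x in grid if f(x) == x]
print(fixed)                  # [Fraction(0, 1), Fraction(1, 2)]
```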
|
2,392,114 | <p>It is possible to rewrite the equation $x^3+ax^2+bx+c=0$ as $y^3+3hy+k=0$ by setting $y=x+a/3$</p>
<p>How do you find the coefficient h in the equation $y^3+3hy+k=0$?</p>
| Mark Bennet | 2,906 | <p>You put $$x^3+ax^2+bx+c=\left(y-\frac a3\right)^3+a\left(y-\frac a3\right)^2+b\left(y-\frac a3\right)+c=$$$$=y^3-ay^2+\frac {a^2}3y-\frac {a^3}{27}+ay^2-\frac {2a^2}3y+\frac {a^3}9+by-\frac {ab}3+c=$$and collect terms$$=y^3+3\left(\frac {3b-a^2}9\right)y+\frac{2a^3-9ab+27c}{27}$$</p>
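<p>A quick exact-arithmetic check of the collected coefficients (in Python, names ours): note that the constant term works out to $k=\frac{2a^3-9ab+27c}{27}$, and $h=\frac{3b-a^2}{9}$.</p>

```python
from fractions import Fraction as F

def cubic(x, a, b, c):
    return x**3 + a*x**2 + b*x + c

a, b, c = F(3), F(2), F(1)            # arbitrary sample coefficients
h = (3*b - a*a) / 9
k = (2*a**3 - 9*a*b + 27*c) / 27

# y = x + a/3, i.e. x = y - a/3, should turn the cubic into y^3 + 3hy + k
for y in (F(-2), F(0), F(1), F(7, 2)):
    x = y - a / 3
    assert cubic(x, a, b, c) == y**3 + 3*h*y + k
print("substitution verified")
```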
|
150,472 | <p>Let $h\in C_0([a,b])$ arbitrary, that is $h$ is continuous and vanishes on the boundary.
I want to show that
$\int\limits_a^b h(x)\sin(nx)dx \rightarrow 0$.</p>
<p>If $h\in C^1$, integration by parts immediately yields the claim, since $h'$ is continuous and thence bounded on the compact interval, using also the zero boundary condition.</p>
<p>However, I believe the statement is also true for all $h\in C_0([a,b])$. My idea is to approximate $h$ by functions $h_m \in C_0^1([a,b])$. Then for all $m$,</p>
<p>$$\begin{equation*}
\lim_{n \to \infty} \int h_m(x) \sin(nx) dx = 0.
\end{equation*}$$</p>
<p>$$\begin{align*}
\Rightarrow ~~~ \lim_{n \to \infty} \int h(x)\sin(nx) dx &= \lim_{n \to \infty} \int \lim_{m \to \infty} h_m(x)\sin(nx) dx\\ &= \lim_{m \to \infty}(\lim_{n \to \infty} \int h_m(x)\sin(nx) dx)\\ &= \lim 0 = 0.
\end{align*}$$</p>
<p>This is fine iff the second equality is. In fact, this is two different steps, as three limiting processes are involved. Hence the questions:</p>
<p>First, can I make sure that I can interchange the $m$-limit with the integral sign? (Can I assume that $h_m$ converges uniformly? Or use some sort of Dominated Convergence Theorem?)</p>
<p>And second, may I swap the $n$-limit for the $m$-limit? (The $n$-limit is in fact $C/n \to 0$)</p>
<p>I hope it's not too messy. Many thanks for any kind of help!</p>
| Pedro | 23,350 | <p>I think this is from Apostol. It is an informal approach to the following Lemma, if I'm not recalling wrongly:</p>
<p>Let $f$ be integrable in $[a,b]$. Then</p>
<p>$$\lim \limits_{\lambda \to \infty } \int\limits_a^b f\left( x \right)\sin \lambda x\,dx = 0$$</p>
<p>$(1)$ Let $f$ be constant. Then </p>
<p>$$\lim \limits_{\lambda \to \infty } \int\limits_a^b k\sin \lambda x\,dx = \left.-k\frac{\cos \lambda x}{\lambda}\right]_a^b=0$$</p>
<p>$(2)$ Let $f$ be a step function over $[a,b]$, viz</p>
<p>$$f(x) = \begin{cases} k:a< x\leq a_1 \cr k_1: a_1<x \leq a_2 &\cr \cdots \cr k_n :a_n<x\leq b \end{cases}$$</p>
<p>Then by the last result,</p>
<p>$$\lim \limits_{\lambda \to \infty } \int\limits_a^b f\left( x \right)\sin \lambda x\,dx = 0$$</p>
<p>$(3)$ Since for an integrable $f$ there exist two step functions such that $$\int_a^b |f(x)-s(x)|dx<\epsilon$$</p>
<p>$$\int_a^b |s_1(x)-f(x)|dx<\epsilon$$</p>
<p>we can "conclude".</p>
<p><strong>IMPORTANT</strong>: If anyone can make this more detailed, precise and formal, please, do so. </p>
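<p>As a numerical illustration of step $(1)$: the antiderivative gives the explicit bound $\left|\int_a^b k\sin \lambda x\,dx\right| \le \frac{2|k|}{\lambda}$, so the integral decays like $1/\lambda$. A sketch in Python (names ours):</p>

```python
import math

def I(k, a, b, lam):
    # exact value of the integral of k sin(λx) over [a, b],
    # from the antiderivative -k cos(λx)/λ
    return -k * (math.cos(lam * b) - math.cos(lam * a)) / lam

k, a, b = 3.0, 0.0, 2.0
for lam in (1.0, 10.0, 100.0, 1000.0):
    val = I(k, a, b, lam)
    assert abs(val) <= 2 * abs(k) / lam + 1e-12   # the O(1/λ) bound
    print(lam, val)
```

<p>Steps $(2)$ and $(3)$ then extend this decay from constants to step functions and finally to integrable $f$.</p>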
|
162,836 | <p>I would like to find the surface normal for a point on a 3D filled shape in Mathematica. </p>
<p>I know how to calculate the normal of a parametric surface using the cross product but this method will not work for a shape like <code>Cone[]</code> or <code>Ball[]</code>.</p>
<ol>
<li>Is there some sort of <code>RegionNormal</code> option? There is an option to
find <code>VertexNormals</code> <a href="http://reference.wolfram.com/language/ref/VertexNormals.html" rel="noreferrer">here</a>, but this is something to with
shading and seems unhelpful. </li>
<li>Is there a method I can use to convert the region into a parametric expression and use the normal cross product method? </li>
</ol>
<p>The plan is to take an arbitrary line and find the angle of intersection between the line and the surface of the shape. </p>
| Michael E2 | 4,999 | <pre><code>(* put inequality into u ≤ 0 form, return just u *)
standardize[a_ <= b_] := a - b;
standardize[a_ >= b_] := b - a;
regnormal[reg_, {x_, y_, z_}] := Module[{impl},
impl = LogicalExpand@ Simplify[RegionMember[reg, {x, y, z}], {x, y, z} ∈ Reals];
If[Head@impl === Or,
impl = List @@ impl,
impl = List@impl];
impl = Replace[impl, {Verbatim[And][a___] :> {a}, e_ :> {e}}, 1];
Piecewise[
Flatten[
Function[{component},
Table[{
D[standardize[component[[i]]], {{x, y, z}}],
Simplify[
(And @@ Drop[component, {i}] /. {LessEqual -> Less, GreaterEqual -> Greater}) &&
(component[[i]] /. {LessEqual -> Equal, GreaterEqual -> Equal}),
TransformationFunctions -> {Automatic,
Reduce[#, {}, Reals] &}]
}, {i, Length@component}]
] /@ impl,
1],
Indeterminate]
];
</code></pre>
<p>Examples:</p>
<pre><code>regnormal[Cone[{{0, 0, 0}, {1, 1, 1}}, 1/2], {x, y, z}]
</code></pre>
<p><img src="https://i.stack.imgur.com/tBmsw.png" alt="Mathematica graphics"></p>
<pre><code>regnormal[Ball[{1, 2, 3}, 4], {x, y, z}]
</code></pre>
<p><img src="https://i.stack.imgur.com/axxSm.png" alt="Mathematica graphics"></p>
<pre><code>regnormal[RegionUnion[Ball[], Cone[{{0, 0, 0}, {1, 1, 1}}, 1/2]], {x, y, z}]
</code></pre>
<p><img src="https://i.stack.imgur.com/bljj3.png" alt="Mathematica graphics"></p>
<pre><code>regnormal[Cylinder[{{1, 1, 1}, {2, 3, 1}}], {x, y, z}]
</code></pre>
<p><img src="https://i.stack.imgur.com/SpmyG.png" alt="Mathematica graphics"></p>
<p>It assumes that the <code>RegionMember</code> expression can be computed (which is not always the case) and that it will be a union (via <code>Or</code>) of intersections (via <code>And</code>). It also assumes that the <code>RegionMember</code> expression includes the boundary. Thus, it is probably not very robust, but it handles the OP's examples.</p>
<p>Also, if this is used in numerical applications, which seems to be the case for the OP, one should worry about the exact conditions in the <code>Piecewise</code> expressions returned. It's unlikely the numerical calculations will be accurate enough to satisfy <code>Equal</code>. So either change the conditions or possibly change <code>Internal`$EqualTolerance</code>:</p>
<pre><code>Block[{Internal`$EqualTolerance = Log10[2.^28]}, (* ~single-precision FP equality *)
<evaluate regnormal[...] expression>
]
</code></pre>
|
499,044 | <p>I "know" that $\mathbb{C} \otimes_\mathbb{R} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ as rings, but I don't really know it, what I mean with this is that I don't know any explicit isomorphism $f: \mathbb{C} \otimes_\mathbb{R} \mathbb{C} \rightarrow \mathbb{C} \oplus \mathbb{C}$. I suspect that such an isomorphism should be easy to find, but I am really not finding anyone. Could anyone please help me?</p>
| Martin Brandenburg | 1,650 | <p>You don't mean the direct sum of rings (or algebras), you mean the direct product of rings. See also <a href="https://math.stackexchange.com/questions/345501">here</a>.</p>
<p>In general, if $K/k$ is a field extension and $f \in k[x]$ is a polynomial which splits over $K$ into $n$ distinct linear factors $x-\alpha_i$, then there is an isomorphism of $K$-algebras</p>
<p>$$k[x]/(f) \otimes_k K \cong K[x]/(f) \cong \prod_i K[x]/(x-\alpha_i) \cong \prod_i K = K^n.$$
It is given by mapping $x \otimes 1$ to $(\alpha_1,\dotsc,\alpha_n)$, the rest is given by the information that it is an $K$-algebra isomorphism. Explicitly, we have $p \otimes \lambda \mapsto (\lambda p(\alpha_1),\dotsc,\lambda p(\alpha_n))$ for $\lambda \in K$ and $p \in k[x]$.</p>
<p>The inverse can be found just by looking at the proof of the Chinese Remainder Theorem which we have used above: Since $\alpha_1,\dotsc,\alpha_n$ are pairwise distinct, there are $p_j \in K[x]$ such that $p_j(\alpha_i)=\delta_{ij}$ (e.g. Lagrange polynomial). Then $\overline{p_j} \in K[x]/(f)$ gets mapped to the unit vector $e_j \in K^n$. Hence, the inverse map $K^n \to K[x]/(f)$ is given by $(\lambda_1,\dotsc,\lambda_n) \mapsto \sum_j \lambda_j p_j$.</p>
<p>Example: If $f = x^2+1 \in k[x]$ is irreducible and $\mathrm{char}(k) \neq 2$, we get $k(i) \otimes_k k(i) \cong k(i) \times k(i)$ as $k(i)$-algebras (where $k(i)$ acts on the <em>right</em> tensor factor) given by $i \otimes 1 \mapsto (i,-i)$ (and hence $1 \otimes i \mapsto (i,i)$ and $i \otimes i \mapsto (-1,1)$). We compute $p_1=\dfrac{x+i}{2i}$ and $p_2 = \overline{p_1} = \dfrac{i-x}{2i}$ in $k(i)[x]$. The images in $k(i)[x]/(x^2+1) \cong k(i) \otimes_k k(i)$ are $p_1 = 1 \otimes \frac{i}{2i} + i \otimes \frac{1}{2i} = \frac{1}{2} (1 \otimes 1 - i \otimes i)$ and $p_2 = 1 \otimes \frac{i}{2i} -i \otimes \frac{1}{2i} = \frac{1}{2} (1 \otimes 1 + i \otimes i)$.</p>
<p>These are the two orthogonal idempotents mentioned in the other answers; as you can see you don't have to guess them etc., you can compute them following a general algorithm. It is useful that many theorems have <em>constructive</em> proofs, such as here the Chinese Remainder Theorem.</p>
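<p>These idempotents can be sanity-checked with complex arithmetic: under the isomorphism $p\otimes\lambda\mapsto(\lambda p(i),\lambda p(-i))$, the images of $p_1$ and $p_2$ should be the two unit vectors. A sketch in Python (names ours), representing a class in $\mathbb C[x]/(x^2+1)$ by its coefficient list:</p>

```python
def image(coeffs):
    # image of p ⊗ 1 under p ⊗ λ ↦ (λ p(i), λ p(-i)); coeffs[n] multiplies x^n
    u = sum(c * (1j) ** n for n, c in enumerate(coeffs))
    v = sum(c * (-1j) ** n for n, c in enumerate(coeffs))
    return (u, v)

p1 = [1j / (2j), 1 / (2j)]          # (i + x)/(2i)
p2 = [1j / (2j), -1 / (2j)]         # (i - x)/(2i)

assert image(p1) == (1, 0)          # e1 maps to (1, 0)
assert image(p2) == (0, 1)          # e2 maps to (0, 1)
assert image([0, 1]) == (1j, -1j)   # x ⊗ 1 maps to (i, -i), as stated above
print("idempotents map to the unit vectors")
```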
<p>Let me just indicate what happens in characteristic $2$. Here we have $k(i) \otimes_k k(i) \cong k(i)[x]/(x+i)^2 \cong k[a,b]/(a^2,b^2)$ with $b=x+i$ and $a=i+1$.</p>
|
2,658,563 | <p><strong>(Brazil National Olympiad)</strong></p>
<p><em>Let $n$ be a positive integer. In how many ways can we distribute $n+1$ toys to $n$ kids, such that each kid gets at least one toy?</em></p>
<p><strong><em>My approach</em></strong>:</p>
<p>For each child we can assign a number $k$ to it, representing the toy it will get. So we have ${n + 1} \choose {n}$ choices for choosing the toys, then $n!$ ways to choose the assignment. Since we have already chosen the leftover toy (by choosing the ones that were not left over), we now only have to choose which of the $n$ children is getting it. So the final answer should be: </p>
<p>$(n+1)!n$</p>
<p>But the answer is: $\frac{(n+1)!n}{2}$</p>
<p><strong>Can someone explain what was my mistake?</strong></p>
| Badam Baplan | 164,860 | <p>A standard combinatorial way to think about it would be to recognize the role of the <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling Numbers of the Second Kind</a>. We proceed as follows:</p>
<p>(1) partition the set of $n+1$ toys into $n$ nonempty subsets. This is a Stirling number of the second kind, ${{n+1}\brace{n}}$, which is here simply choosing two toys to group together (${{n+1}\brace{n}} = {{n+1}\choose{2}}$). Then</p>
<p>(2) Bijectively assign the $n$ children to the $n$ elements of the partition in $n!$ ways.</p>
<p>Moreover, this generalizes immediately to the problem: How many ways are there to assign $m$ toys to $n$ children so that every child gets at least one toy? It's just ${{m}\brace{n}}n!$.</p>
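<p>The closed form ${{n+1}\choose{2}}\,n! = \frac{(n+1)!\,n}{2}$ is easy to verify by brute force for small $n$; a sketch in Python (names ours):</p>

```python
from itertools import product
from math import factorial

def distributions(n):
    # Assign each of n+1 distinct toys to one of n kids, counting only the
    # assignments in which every kid gets at least one toy.
    return sum(len(set(assign)) == n
               for assign in product(range(n), repeat=n + 1))

counts = [distributions(n) for n in range(1, 6)]
print(counts)                              # → [1, 6, 36, 240, 1800]
for n in range(1, 6):
    assert distributions(n) == factorial(n + 1) * n // 2
```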
|
123,018 | <p>I'm doing some homework for a computer science class. It's been so long since I've done math, I have a question that assumes math knowledge that confuses me.</p>
<p>Given:
<em>Whether a diophantine polynomial in a single variable has integer roots.</em></p>
<p>With the given question I need to determine if that question is solvable using computers. I know how to do that, but I don't know the math required to answer this question.</p>
<p>So I understand "Whether a .... polynomial in a single variable has integer ...?"</p>
<p>My question:</p>
<ul>
<li>What does diophantine mean</li>
<li>What is an integer root</li>
<li>How do you determine if a diophantine polynomial in a single variable has integer roots?</li>
</ul>
<p>The math behind this question is assumed to be known, once I know the answers to those three questions (really just the last one) I can answer my homework.</p>
<p>Note: it may or may not be obvious that this is computable; since I don't know enough about the math to say, I will just say that a lot of things that seem computable are not, unless their input and output can be represented with finite precision.</p>
| Ross Millikan | 1,827 | <p><a href="http://en.wikipedia.org/wiki/Diophantine_equation" rel="nofollow">A Diophantine equation</a> is one where the variables are required to take integer values. A root of a <a href="http://en.wikipedia.org/wiki/Polynomial" rel="nofollow">polynomial</a> is a value of the variable(s) that gives the polynomial the value $0$.</p>
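<p>To connect this to computability: for a one-variable polynomial with integer coefficients, any integer root must divide the constant term (once factors of $x$ are stripped), so only finitely many candidates need testing, and the problem is decidable. A sketch in Python (function name ours; assumes the polynomial is not identically zero):</p>

```python
def integer_roots(coeffs):
    # coeffs[k] is the coefficient of x^k; assumes some coefficient is nonzero
    roots = set()
    while coeffs[0] == 0:            # x = 0 is a root while the constant term is 0
        roots.add(0)
        coeffs = coeffs[1:]
    c0 = abs(coeffs[0])
    for d in range(1, c0 + 1):
        if c0 % d:
            continue
        for cand in (d, -d):         # any integer root divides the constant term
            if sum(c * cand**k for k, c in enumerate(coeffs)) == 0:
                roots.add(cand)
    return roots

assert integer_roots([-6, 11, -6, 1]) == {1, 2, 3}   # (x-1)(x-2)(x-3)
assert integer_roots([2, 0, 1]) == set()             # x^2 + 2 has no integer roots
print("decidable by finite search")
```

<p>(The undecidable version of this question, Hilbert's tenth problem, is about polynomials in several variables.)</p>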
|
1,420,277 | <p>I have to solve this:</p>
<p>$$[(\nabla \times \nabla)\cdot \nabla](x^2 + y^2 + z^2)$$</p>
<p>But I am really drowning in the sand..</p>
<p>Can anybody help me please?</p>
| MathAdam | 266,049 | <p>Let <strong><em>E</em></strong> = number of Estrada supporters<br>
Let <strong><em>A</em></strong> = number of Arrayo supporters </p>
<p>Then <strong><em>E</em></strong> + <strong><em>A</em></strong> = <strong>8600</strong> </p>
<p>Estrada's majority -- we don't know what it is yet -- is <strong><em>E</em></strong> - <strong><em>A</em></strong> </p>
<p>In the second scenario, 1/3 of Estrada's stay home, so he gets <strong>2/3</strong> <strong><em>E</em></strong> votes.<br>
Likewise, 1/2 of Arrayo's fans are at home watching Dr. Who, so he's left with <strong>1/2</strong> <strong><em>A</em></strong> votes.</p>
<p>The new spread, still a majority for Estrada is this: <strong>2/3</strong> <strong><em>E</em></strong> - <strong>1/2</strong> <strong><em>A</em></strong> </p>
<p>We don't know what Estrada's majority is here, either. But we do know it's decreased by 200:</p>
<p><strong><em>E</em></strong> - <strong><em>A</em></strong> - <strong>200</strong> = <strong>2/3</strong> <strong><em>E</em></strong> - <strong>1/2</strong> <strong><em>A</em></strong> </p>
<p>Group like terms and simplify: </p>
<p><strong>200</strong> = <strong>1/3</strong> <strong><em>E</em></strong> - <strong>1/2</strong> <strong><em>A</em></strong> </p>
<p>Now we have 2 equations in two unknowns:</p>
<p><strong>200</strong> = <strong><em>1/3 E</em></strong> - <strong><em>1/2 A</em></strong><br>
<strong>8600</strong> = <strong><em>E</em></strong> + <strong><em>A</em></strong> </p>
<p>Can you solve a system of two equations and two unknowns? Doing so will provide you with values for <strong><em>E</em></strong> and <strong><em>A</em></strong> which you should then go back and plug into the other equations to see if they give the predicted values.</p>
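As a sanity check on the setup above, here is a short sketch (my addition, in Python, which is not part of the original answer) that solves the two equations exactly with rationals:

```python
from fractions import Fraction

# The two equations derived above:
#   (1/3)E - (1/2)A = 200
#   E + A = 8600
# Substitute A = 8600 - E into the first:
#   (1/3)E - (1/2)(8600 - E) = 200  =>  (5/6)E = 4500
E = Fraction(4500) / Fraction(5, 6)
A = 8600 - E

# Plug back into the scenarios: the full-turnout majority E - A should
# exceed the reduced-turnout majority (2/3)E - (1/2)A by exactly 200.
majority_full = E - A
majority_reduced = Fraction(2, 3) * E - Fraction(1, 2) * A
```

This gives 5400 Estrada supporters and 3200 Arrayo supporters, with majorities 2200 and 2000 in the two scenarios.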
|
3,368,655 | <p>I came across a problem that asked if it is possible for a function to be Riemann integrable on <span class="math-container">$[0,+\infty)$</span> while also satisfying <span class="math-container">$|f(x)|\geq 1$</span> for all <span class="math-container">$x\geq 0$</span>. </p>
<p>At first I thought it was impossible, but I realized that only holds for continuous functions, because they would have to be either positive or negative, and then they would have to go to 0 at infinity. </p>
<p>I have an idea of what the function would have to be like, with alternating signs but a convergent integral, but I haven't been able to find any, so I'm starting to think it is impossible. </p>
<p>I would like some help finding this function, or disproving it, as I don't know many tools for working with functions without a constant sign.</p>
| MathematicsStudent1122 | 238,417 | <blockquote>
<p>I came across a problem that asked if it is possible for a function to be Riemann integrable on <span class="math-container">$[0,+\infty)$</span> while also satisfying <span class="math-container">$|f(x)|\geq 1$</span> for all <span class="math-container">$x\geq 0$</span>. </p>
</blockquote>
<p>Yes, although the integral will be improper. For each <span class="math-container">$n \in \mathbb{Z}_{\geq 1}$</span>, divide the interval <span class="math-container">$[n-1, n]$</span> into <span class="math-container">$2^n$</span> subintervals <span class="math-container">$I_{k,n} \stackrel{\text{def}}{=}\left[n-1+\frac{k-1}{2^n}, n-1+\frac{k}{2^n}\right]$</span> for each <span class="math-container">$1 \leq k \leq 2^n$</span>. Define <span class="math-container">$f_n:[n-1, n] \to \mathbb{R}$</span> by <span class="math-container">$f_n(x) = 1$</span> if <span class="math-container">$x \in I_{k,n}$</span> for <span class="math-container">$k$</span> odd, and <span class="math-container">$f_n(x) = -1$</span> if <span class="math-container">$x \in I_{k,n}$</span> for <span class="math-container">$k$</span> even. It is clear that <span class="math-container">$\int_{n-1}^{n} f_n = 0$</span>. </p>
<p>Now extend this to <span class="math-container">$[0, \infty)$</span> in the natural way by defining <span class="math-container">$f:[0, \infty) \to \mathbb{R}$</span> such that <span class="math-container">$f(x) = f_n(x)$</span> for all <span class="math-container">$x \in [n-1, n]$</span>. A careful argument shows that for any <span class="math-container">$x \in \mathbb{R}$</span>, <span class="math-container">$\left|\int_{0}^{x} f\right| \leq 2^{-\lceil x \rceil}$</span>, and hence <span class="math-container">$\left|\int_{0}^{\infty} f\right| = 0$</span></p>
<p>One can use a similar argument to find a function <span class="math-container">$g$</span> such that <span class="math-container">$\lim_{x \to \infty} |g(x)| = \infty$</span>, but <span class="math-container">$\int_{\mathbb{R}^{+}} g = 0$</span>. </p>
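For readers who want to see the bound concretely, here is a small sketch (my own addition, not part of the answer) that evaluates the partial integral of this step function exactly with rational arithmetic and checks <span class="math-container">$\left|\int_0^x f\right| \leq 2^{-\lceil x \rceil}$</span> on a grid of sample points:

```python
from fractions import Fraction
import math

def partial_integral(x):
    """Exact value of int_0^x f for the step function above: on [n-1, n],
    f alternates +1, -1 on 2^n subintervals of width 2^(-n)."""
    if x <= 0:
        return Fraction(0)
    # The integral over each completed interval [n-1, n] is 0, so only the
    # tail inside [n-1, n] with n = ceil(x) contributes.
    n = math.ceil(x)
    w = Fraction(1, 2 ** n)      # subinterval width
    p = x - (n - 1)              # progress into [n-1, n]
    m = p // w                   # completed subintervals, signs +, -, +, -, ...
    r = p - m * w                # leftover inside the current subinterval
    completed = w if m % 2 == 1 else Fraction(0)
    current = r if m % 2 == 0 else -r
    return completed + current

# Check the bound |int_0^x f| <= 2^(-ceil(x)) on a grid of rational points.
bound_ok = all(
    abs(partial_integral(Fraction(k, 64))) <= Fraction(1, 2 ** math.ceil(Fraction(k, 64)))
    for k in range(1, 257)
)
```

The completed subintervals cancel in pairs, so the partial integral is always trapped in <span class="math-container">$[0, 2^{-n}]$</span>, which is exactly the bound in the answer.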
|
292,331 | <p>Suppose that $(n_k)_{k\in \mathbb{N}}$ is a given increasing sequence of positive integers. </p>
<p>Does there exist an (irrational) number $a$ such that
$\{an_k\}:=(a n_k)\text{mod }1 \rightarrow 1/2$ as $k \rightarrow \infty$? </p>
| GH from MO | 11,919 | <p>The answer is no in general. For many increasing sequences $(n_k)_{k\in \mathbb{N}}$ of positive integers, it happens for every irrational number $a$ that $\{an_k\}$ is dense or even equidistributed in the unit circle. See <a href="https://en.wikipedia.org/wiki/Equidistribution_theorem" rel="nofollow noreferrer">this Wikipedia article</a> for some examples.</p>
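As a quick numerical illustration (not a proof, and my own addition): take the simplest sequence $n_k = k$ with $a = \sqrt{2}$; by Weyl's equidistribution theorem the fractional parts spread over all of $[0,1)$ rather than converging to $1/2$:

```python
import math

# Simplest sequence n_k = k with a = sqrt(2): by Weyl's equidistribution
# theorem the fractional parts {a k} fill [0, 1) instead of converging to 1/2.
a = math.sqrt(2)
frac = [(a * k) % 1.0 for k in range(1, 101)]
```

Already among the first hundred terms the fractional parts visit both ends of the unit interval.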
|
2,168,906 | <blockquote>
<p>The task is to find necessary and sufficient condition on <span class="math-container">$b$</span> and <span class="math-container">$c$</span> for the equation <span class="math-container">$x^3-3b^2x+c=0$</span> to have three distinct real roots.</p>
</blockquote>
<p>Are there any formulas (such as <span class="math-container">$x_1x_2=c/a$</span> and <span class="math-container">$x_1+x_2=-b/a$</span> for roots in <span class="math-container">$ax^2+bx+c=0$</span>), but for equations of 3rd power?</p>
| Jonathaniui | 420,908 | <p>Note that $\ln (y) = x^x$, not $x \ln (e^x)$.</p>
<p>This is because in $\ln (e^{x^x})$ the logarithm cancels the $e$, leaving the exponent $x^x$; from there differentiate as normal to obtain the result.</p>
|
300,105 | <p>I want to find the proof of the spectrum of the hypercube</p>
| Mariano Suárez-Álvarez | 274 | <p>This is just a data point:</p>
<p>Computing characteristic polynomials of their adjacency matrices, one finds the roots for the $d$-dimensional hypercube are $d$, $d-2$, $d-4$, $\dots$, $-d$, with multiplicities $\binom{d}{0}$, $\binom{d}{1}$, $\binom{d}{2}$, $\dots$, $\binom{d}{d}$.</p>
<p>Since this is extraordinarily regular, it suggests that one find a recursive formula. If $A_n$ is the adjacency matrix of hypercube on $2^{n-1}$ vertices, then $A_n=\begin{pmatrix}A_{n-1}&I_{2^{n-2}}\\I_{2^{n-2}}&A_{n-1}\end{pmatrix}$ so we have what to work with.</p>
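The data point can be confirmed without computing characteristic polynomials: for each subset $S$ of the $d$ coordinates, the character vector $v_S(x) = (-1)^{|S \cap x|}$ is an eigenvector of the adjacency matrix with eigenvalue $d - 2|S|$, which yields exactly the multiplicities $\binom{d}{k}$. A plain-Python check (my addition, standard library only):

```python
from itertools import combinations

def hypercube_spectrum(d):
    """Return {eigenvalue: multiplicity} for the d-cube adjacency matrix,
    verifying each character vector v_S(x) = (-1)^popcount(x & S) on the way."""
    vertices = range(2 ** d)
    # Two vertices are adjacent iff their labels differ in exactly one bit.
    neighbors = {x: [x ^ (1 << i) for i in range(d)] for x in vertices}
    mult = {}
    for k in range(d + 1):
        for S in combinations(range(d), k):
            mask = sum(1 << i for i in S)
            v = [(-1) ** bin(x & mask).count("1") for x in vertices]
            Av = [sum(v[y] for y in neighbors[x]) for x in vertices]
            assert Av == [(d - 2 * k) * vx for vx in v]   # eigenvector check
            mult[d - 2 * k] = mult.get(d - 2 * k, 0) + 1
    return mult

mult3 = hypercube_spectrum(3)
```

The $2^d$ character vectors are linearly independent, so this accounts for the whole spectrum.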
|
1,101,371 | <p>Any book that I find on abstract algebra is somehow advanced and not OK for self-learning. I am high-school student with high-school math knowledge. Please someone tell me a book can be fine on abstract algebra? Thanks a lot. </p>
| Community | -1 | <p><em>Abstract Algebra: Theory and Applications</em> by Thomas W. Judson.</p>
<p>You can also find it online in <a href="http://abstract.ups.edu/download/aata-20130816.pdf" rel="nofollow">here.</a> </p>
|
1,101,371 | <p>Any book that I find on abstract algebra is somehow advanced and not OK for self-learning. I am high-school student with high-school math knowledge. Please someone tell me a book can be fine on abstract algebra? Thanks a lot. </p>
| Dave L. Renfro | 13,130 | <p>Three books I know of that <strong>really are high school level</strong> are listed below. Although the books thus far listed (Gallian, Herstein, Fraleigh, Pinter, etc.) are fine texts, these are standard upper undergraduate college level textbooks, not books <em>specifically written</em> for good (but not necessarily near genius level) high school students.</p>
<p>Irving Adler, <a href="http://rads.stackoverflow.com/amzn/click/B000JBZYV0"><strong>Groups in the New Mathematics. An Elementary Introduction to Mathematical Groups Through Familiar Examples</strong></a>, The John Day Company, 1967, 274 pages.</p>
<p>Francis [Frank] James Budden, <a href="http://rads.stackoverflow.com/amzn/click/0521080169"><strong>The Fascination of Groups</strong></a>, Cambridge University Press, 1972, xviii + 596 pages.</p>
<p>Israel Grossman and Wilhelm Magnus, <a href="http://rads.stackoverflow.com/amzn/click/088385614X"><strong>Groups and Their Graphs</strong></a>, New Mathematical Library #4, Random House, 1964, viii + 195 pages.</p>
|
701,122 | <p>Part 1
Let $f(x) = ax^n$, where $a$ is any real number. Prove that $f$ is even if $n$ is an even integer. (Integers can be negative too)</p>
<p>Part 2
Prove that if you add any two even functions, you get an even function</p>
<p>I'm confused as to how you would prove adding two even functions would get you an even function.</p>
| Ishfaaq | 109,161 | <p>The first part is simple. Suppose $f(x) = ax^n$for an even integer $n$. $f(-x) = a(-x)^n = ax^n = f(x)$ since $n$ is even. </p>
<p>As for the second part, suppose $h(x) = f(x) + g(x)$ for each $x$ in the domain of $h$, where $f$ and $g$ are even functions. Then, $h(-x) = f(-x) + g(-x) = f(x) + g(x) = h(x) $</p>
<p>If you are unsure about the definition of an even function then check <a href="http://mathworld.wolfram.com/EvenFunction.html" rel="nofollow">this</a> out. </p>
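If a numerical sanity check helps, here is a tiny sketch (my addition, in Python) testing $f(-x) = f(x)$ at sample points for a power with even exponent, a negative even exponent, and their sum:

```python
def is_even_on_samples(f, samples):
    """Numerically check f(-x) == f(x) at nonzero sample points."""
    return all(abs(f(-x) - f(x)) < 1e-9 for x in samples)

samples = [0.5, 1.0, 2.0, 3.5]
f = lambda x: 3 * x ** 4       # a x^n with even n
g = lambda x: x ** (-2)        # even n may be negative too
h = lambda x: f(x) + g(x)      # sum of two even functions
```

An odd power such as $x^3$ fails the same check, as expected.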
|
1,007,309 | <p><img src="https://i.stack.imgur.com/xJKtU.png" alt="enter image description here"></p>
<p>I cannot understand part ii) in this solution. I cannot see the significance of arbitrarily close to 0 points for which $|sin(\frac{1}{x_n})|=1$</p>
| Ben Grossmann | 81,360 | <p>Our goal is to show that $g^{-1}(B)$ fails to be open because, although $0 \in g^{-1}(B)$, no open neighborhood of $0$ lies inside the set $g^{-1}(B)$.</p>
<p>So, we want to show that for any $\delta > 0$, the set $(-\delta,\delta)$ (the neighborhood of $0$ of radius $\delta$) is not a subset of $g^{-1}(B)$. That is, for every such $\delta$, there exists some $x \in (-\delta,\delta)$ for which $g(x) \notin (-1/2,1/2)$.</p>
<p>In particular, for any such $\delta$, we can choose an $x = x_n$ with $|x_n| < \delta$. So, $x_n \in (-\delta,\delta)$, but $|g(x_n)| = 1$, so $g(x_n) \notin (-1/2,1/2)$, as desired.</p>
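The screenshot is not reproduced here, but the standard example behind this argument is $g(x) = \sin(1/x)$ with the points $x_n = \frac{1}{(n + 1/2)\pi}$, for which $|\sin(1/x_n)| = 1$. Assuming that is the function in the exercise, a quick numeric check (my addition) that such a point exists inside any $(-\delta, \delta)$:

```python
import math

# Assumption: the function in the exercise is g(x) = sin(1/x) (the screenshot
# is not available here). The points x_n = 1/((n + 1/2) * pi) satisfy
# |sin(1/x_n)| = 1 and x_n -> 0, so every (-delta, delta) contains a point
# mapped outside (-1/2, 1/2).
def x_n(n):
    return 1.0 / ((n + 0.5) * math.pi)

delta = 1e-3
n = 0
while x_n(n) >= delta:   # walk down the sequence until it enters (-delta, delta)
    n += 1
witness = x_n(n)
value = math.sin(1.0 / witness)
```

The `witness` point lies inside $(-\delta, \delta)$ yet its image has absolute value $1$, so it is outside $(-1/2, 1/2)$.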
|
962,287 | <p>I am trying to isolate x in the equation $$(x-20)^{2} = -(y-40)^{2} - 525.$$ How can I do it?</p>
| Jasser | 170,011 | <p>$x=20\pm i\sqrt {(y-40)^2+525}$ where $i=\sqrt {-1}$.</p>
<p>If the domain of $x$ is the real numbers, then there is no solution, since the right-hand side $-(y-40)^2-525$ is always negative.</p>
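A quick check of the complex solutions with `cmath` (my addition; the sample value of $y$ and the variable names are mine):

```python
import cmath

# Assumed sample value for y; 'root' and 'solutions' are my own names.
y = 10.0
root = 1j * cmath.sqrt((y - 40) ** 2 + 525)
solutions = [20 + root, 20 - root]

# Each solution should satisfy (x - 20)^2 = -(y - 40)^2 - 525.
residuals = [abs((x - 20) ** 2 - (-(y - 40) ** 2 - 525)) for x in solutions]
```

Both residuals vanish up to floating-point rounding, confirming the two conjugate roots.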
|
87,583 | <p>Next task to complete:</p>
<ul>
<li><p>Count the <code>*</code> symbols in an expression such as <code>a + s^2*b - c/y + o^3 + n*m*u</code> (in this case the count of <code>*</code> should be 6)</p></li>
<li><p>Powers such as $o^3$ should be expanded to $o*o*o$</p></li>
</ul>
<p>I try, but my code is pretty ugly.</p>
<p><img src="https://i.stack.imgur.com/f66j0.png" alt="enter image description here"></p>
| m_goldberg | 3,066 | <p>I think this is easier to do by working with strings.</p>
<p>First write a function that will expand strings of the form <code>"Power(x,k)"</code>, where <code>k</code> is an integer, into <code>"x*x*...*x"</code> with k - 1 <code>"*"</code>s.</p>
<pre><code>f[x_, k_] :=
Module[{i = Abs[ToExpression[k]] - 1},
Nest[StringJoin[#, "*" <> x] &, x, i]]
</code></pre>
<p>A couple of tests for <code>f</code>.</p>
<pre><code>f["s", 2]
</code></pre>
<blockquote>
<pre><code>"s*s"
</code></pre>
</blockquote>
<pre><code>f["ab", "-3"]
</code></pre>
<blockquote>
<pre><code>"ab*ab*ab"
</code></pre>
</blockquote>
<p>Next write a function that will use <code>f</code> to transform powers and will count the stars in the expression after <code>f</code> has done its transformation.</p>
<pre><code>starCount[expr_] :=
StringCount[
StringReplace[
expr // CForm // ToString,
"Power(" ~~ v : WordCharacter .. ~~ "," ~~ k : NumberString ~~ ")" :> f[v, k]],
"*"]
starCount[a + s^2*b - c/y + o^3 + n*m*u]
</code></pre>
<blockquote>
<pre><code>6
</code></pre>
</blockquote>
<pre><code>starCount[1/(b s^3) + 1/t^4]
</code></pre>
<blockquote>
<pre><code>6
</code></pre>
</blockquote>
<p>Note: I use <code>CForm</code> to recover the <code>/</code>s that represent division, which the OP apparently wants to preserve as a distinct operator.</p>
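For comparison only (this is my addition, not part of the Mathematica answer): the same expand-then-count idea can be sketched in Python with a regular expression, provided multiplication is written explicitly with `*`:

```python
import re

def star_count(expr):
    """Expand integer powers like o^3 into o*o*o, then count the '*'s.
    Multiplication must already be explicit (b*s^3, not b s^3)."""
    def expand(m):
        base, k = m.group(1), int(m.group(2))
        return "*".join([base] * k)
    expanded = re.sub(r"([A-Za-z]\w*)\^(\d+)", expand, expr)
    return expanded.count("*")

n1 = star_count("a + s^2*b - c/y + o^3 + n*m*u")
n2 = star_count("1/(b*s^3) + 1/t^4")
```

Unlike the Mathematica version, this sketch does not parse implicit multiplication or negative powers; it only illustrates the string-rewriting step.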
|
2,666,772 | <blockquote>
<p>$W$ = $\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}$. Use W to build an 8x8 matrix encoding an orthonormal basis in $R^8$ by scaling A = $\begin{bmatrix} W & W \\ W & -W \end{bmatrix}$ in the right way.</p>
</blockquote>
<p>Am I wrong here, but is it not just this matrix beside each other 4 times? Doesn't that work?</p>
| Pietro Paparella | 414,530 | <p>The matrix $W$ above is a <a href="https://en.wikipedia.org/wiki/Hadamard_matrix" rel="nofollow noreferrer">Hadamard matrix</a> of order four and therefore satisfies $WW^\top = W^\top W = 4I_4$.</p>
<p>If
$$
A = \frac{1}{\sqrt{8}}\begin{bmatrix}
W & W \\
W & -W
\end{bmatrix},
$$
then $A$ is an orthogonal matrix.</p>
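Since the scaling factor $\frac{1}{\sqrt 8}$ is irrational, the cleanest machine check is on the unscaled integer matrix: $A$ is orthogonal iff $MM^\top = 8I$ for the integer block matrix $M$. A standard-library sketch (my addition):

```python
# The 4x4 Hadamard matrix W from the problem, with integer entries.
W = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

# M = [[W, W], [W, -W]]; A = M / sqrt(8) is orthogonal iff M M^T = 8 I.
M = [row + row for row in W] + [row + [-x for x in row] for row in W]

MMt = [[sum(M[i][k] * M[j][k] for k in range(8)) for j in range(8)]
       for i in range(8)]
expected = [[8 if i == j else 0 for j in range(8)] for i in range(8)]
```

The check is exact integer arithmetic, so no floating-point tolerance is needed.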
|
34,724 | <h3>Overview</h3>
<p>For integers n ≥ 1, let T(n) = {0,1,...,n}<sup>n</sup> and B(n)= {0,1}<sup>n</sup>. Note that |T(n)|=(n+1)<sup>n</sup> and |B(n)| = 2<sup>n</sup>.
A certain set S(n) ⊂ T(n), defined below, contains B(n). The question is about the growth rate of |S(n)|. Does it grow exponentially, like |B(n)|, so that |S(n)| ~ c<sup>n</sup> for some c, or does it grow superexponentially, so that c<sup>n</sup>/|S(n)| approaches 0 for all c> 0?</p>
<h3>Definition</h3>
<p>The set S(n) is defined as follows: an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) is in S(n) if and only if t<sub>i+j</sub> ≤ j whenever 1 ≤ j < t<sub>i</sub>. For example, if t ∈ T(10) with t<sub>4</sub>=5, then t<sub>5</sub> can be at most 1, t<sub>6</sub> can be at most 2, t<sub>7</sub> can be at most 3, and t<sub>8</sub> at most 4, but there is no restriction (at least not due to the value of t<sub>4</sub>) on t<sub>9</sub> or t<sub>10</sub>; t<sub>9</sub> and t<sub>10</sub> can have any values in {0,1,...,10}.</p>
<h3>Alternate formulation (counting triangles)</h3>
<p>The elements of S(n) can be put into one-to-one correspondence with certain configurations of n right isosceles triangles, so that |S(n)| counts the number of such configurations. </p>
<p>For integers k>0 (size) and v≥0 (vertical position), let Δ <sub>k,v</sub> be the triangle with vertices (0,v), (k,k+v), and (k,v). (Δ<sub>0,v</sub> is the degenerate triangle with all three vertices at (0,v).)</p>
<p>Now associate with an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) the set D<sub>t</sub> = $\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace$. (That's "\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace," if you can't read it.) The set D<sub> t</sub> contains n isosceles right triangles that extend to the right of the y-axis, one triangle at each of the points (0,k) for 1 ≤ k ≤ n.</p>
<p>The tuple t is in S(n) if and only if the triangles in D<sub> t</sub> have disjoint interiors. (This isn't hard to show, and if it is, I've probably made a mistake in my definitions, so let me know.) Thus |S(n)| counts the number of ways one can arrange n isosceles right triangles of various sizes (between size zero and size n) at n consecutive integer points on the y-axis so the triangle can extend to the right and up without overlapping. Triangles of the same size are indistiguishable for the purpose of counting the number of arrangements. (It may help to think of right isosceles pennants attached at an acute-angle corner to a flagpole in a stiff wind.)</p>
<h3>Question</h3>
<p>Does |S(n)| grow exponentially with n, or faster?</p>
<h3>Calculations</h3>
<p>If I’ve counted correctly, the first few terms of the sequence {|S(n)|} beginning with n=1 are 2, 8, 38, 184, 904, and 4384. This sequence (and the sequences resulting from minor variations of the problem) fails to match anything in the On-Line Encyclopedia of Integer Sequences.</p>
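The first terms are easy to reproduce by brute force (a sketch of mine, not part of the question); the defining condition is checked directly on every tuple in T(n):

```python
from itertools import product

def S_size(n):
    """Brute-force |S(n)|: count t in {0,...,n}^n with t_{i+j} <= j
    whenever 1 <= j < t_i (1-based indices, as in the definition)."""
    def ok(t):
        for i in range(n):                  # 0-based position of t_{i+1}
            for j in range(1, t[i]):        # 1 <= j < t_i
                if i + j < n and t[i + j] > j:
                    return False
        return True
    return sum(1 for t in product(range(n + 1), repeat=n) if ok(t))

counts = [S_size(n) for n in range(1, 4)]   # -> [2, 8, 38]
```

The later terms listed above can be compared against the same brute force by extending the range; the search spaces stay tiny for n up to 6 or so.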
<p>Links to similar counting problems mentioned or solved in the literature would help. </p>
<p>Thanks!</p>
| David E Speyer | 297 | <p>Your sequence is bounded by $(125+\epsilon)^n$. Obviously, this isn't close to a good bound, but it answers the question.</p>
<p>We start by bounding a different question: Let $\Gamma_n$ be the convex hull of $(0,0)$, $(0,n)$ and $(n,n)$. (So $\Gamma$ is rotated $180^{\circ}$ with respect to your $\Delta$.) Let $q_n$ be the number of ways to pack non-overlapping triangles into $\Gamma_n$. </p>
<p>Given any packing of triangles in $\Gamma_n$, which uses at least one triangle, let the largest triangle be of size $k$ and have a vertex at $(0,r)$. (If there is more than one largest triangle, make an arbitrary choice; this will just lead to a larger bound in the end.) So all the other triangles must fit into one of two trapezoids: one with base $r$ and height $k$ and the other with base $n-r$ and height $k$. In any case, these two trapezoids fit into translations of $\Gamma_r$ and $\Gamma_{n-r}$. So we obtain the inequality
$$q_n \leq \sum_{r=1}^{n-1} q_r q_{n-r} + 1,$$
where the $+1$ is because we have to remember the possibility that there might be no triangles in the packing. If we take $q_0=0$ for convenience, we get that $\sum q_n z^n$ is term by term dominated by the solution of
$$Q(z) = Q(z)^2 + \frac{z}{1-z}.$$
Solving the quadratic,
$$Q(z) = \frac{1}{2} \left( 1 - \sqrt{1-\frac{4z}{1-z}} \right).$$
Notice that $Q(z)$ has radius of convergence $1/5$ so $q_n \leq (5+\epsilon)^n$.</p>
<p>I previously had an argument here that didn't work, so here is something even more sloppy. All the triangles you are considering fit inside $\Gamma_{3n}$. So your quantity is bounded by $q_{3n}$, and hence by $(125+\epsilon)^n$.</p>
<p>I suspect that $5^n$ may be pretty close to the right rate of growth, especially given Roland Bacher's computation.</p>
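To see the $5^n$ rate numerically (my addition): iterate the equality version of the recurrence, $b_n = \sum_{r=1}^{n-1} b_r b_{n-r} + 1$ with $b_0 = 0$, whose generating function is exactly $Q(z)$; the consecutive ratios slowly approach $5$:

```python
# Equality version of the recurrence bounding q_n, with b_0 = 0:
#   b_n = sum_{r=1}^{n-1} b_r b_{n-r} + 1
# Its generating function is exactly Q(z), which has radius of convergence 1/5.
N = 20
b = [0] * (N + 1)
for n in range(1, N + 1):
    b[n] = sum(b[r] * b[n - r] for r in range(1, n)) + 1

# Consecutive ratios b_n / b_{n-1} slowly approach 5.
ratios = [b[n] / b[n - 1] for n in range(2, N + 1)]
```

The first coefficients are 1, 2, 5, 15, 51, 188, ..., consistent with growth of order $5^n$ up to subexponential factors.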
|
441,962 | <p>I am looking for a proof of the following theorem, but I have not found one yet.</p>
<p>A link, or even a sketch of how the proof goes, would be much appreciated.</p>
<p>A linear map is self-adjoint</p>
<p>iff</p>
<p>its matrix representation with respect to an orthonormal basis is self-adjoint.</p>
<p>By the way, is that not true for every self-adjoint matrix, and not just for matrix representations with respect to an orthonormal basis?</p>
<p>Thanks in advance!</p>
| al-Hwarizmi | 68,686 | <p>Let me try to address what you are really asking for. If I understand correctly, you are looking for a theorem in the literature that covers your case. To my understanding this is the spectral theorem <a href="https://en.wikipedia.org/wiki/Spectral_theorem" rel="nofollow">>>> here</a>. From either the Cauchy or the von Neumann version it should be possible to deduce your case directly. All this provided I understood your question well.</p>
|
438,070 | <p>I stumbled across this question and I cannot figure out how to use the value of $\cos(\sin 60^\circ)$ which would be $\sin 0.5$ and $\cos 0.5$ seems to be a value that you can only calculate using a calculator or estimate at the very best.</p>
| Community | -1 | <p>To show that $F_k$ is a subspace of $\mathbb R^\mathbb N$ you should verify that $F_k$ is a <strong>non empty set</strong> and <strong>any linear combination of two elements of $F_k$ remains in $F_k$</strong>.</p>
<p>Let's show an example:</p>
<p>Clearly $F_1$ is a non empty set since the zero sequence is bounded.</p>
<p>Let $(x_n)$ and $(y_n)$ two bounded sequences so there's $M,N$ such that
$$|x_n|\leq M\quad\text{and}\quad |y_n|\leq N\quad\forall n\in\mathbb N$$
and let $a,b\in \mathbb R$ so
$$|ax_n+by_n|\leq |a||x_n|+|b||y_n|\leq |a|M+|b|N\quad\forall n\in\mathbb N$$
so the sequence $(ax_n+by_n)$ is bounded and then $F_1$ is a subspace of $\mathbb R^\mathbb N$.</p>
|
2,771,059 | <p>This question is similar to a question I posted earlier.<br/>
<span class="math-container">$$z=\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}$$</span>
<br/>
This time I have to do the sum <span class="math-container">$z^4+z$</span><br/>
<br/>
I have used the approach I was shown in my previous question. Here is what I've done:
<span class="math-container">$$\left(\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}\right)^4+\left(\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}\right)$$</span>
<span class="math-container">$$\cos\frac{4 \pi}{3}+j\sin\frac{4 \pi}{3}+\cos\frac{\pi}{3}+j\sin\frac{\pi}{3}$$</span>
collecting like terms...
<span class="math-container">$$\cos\frac{5\pi}{3}+2j\sin\frac{5\pi}{3}$$</span>
I verified this with wolframalpha but the answer it gave was zero. Is this approach I'm using appropriate for this problem?</p>
| Rhys Hughes | 487,658 | <p>Via de Moivre's Theorem, <span class="math-container">$z^4=\cos\big(\frac{4\pi}{3}\big)+j\sin\big(\frac{4\pi}{3}
\big)=-\frac{1}{2} -\frac{\sqrt{3}}{2}j$</span>
<span class="math-container">$$\cos{\frac{\pi}{3}}+j\sin{\frac{\pi}{3}}=\frac{1}{2}+\frac{\sqrt{3}}{2}j$$</span></p>
<p>Adding those together yields <span class="math-container">$0$</span>.</p>
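A one-line numeric confirmation (my addition) that the two terms cancel:

```python
import cmath

z = cmath.exp(1j * cmath.pi / 3)   # cos(pi/3) + j sin(pi/3)
total = z ** 4 + z                 # z^4 = e^(i 4pi/3) = -z, so the sum is 0
```

This matches the WolframAlpha result of zero that puzzled the asker: the two terms are negatives of each other, so they cannot be collected as like terms.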
|
148,032 | <p>What is the larger of the two numbers?</p>
<p>$$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$
I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks!</p>
| Robert Israel | 8,508 | <p>Hint: If $a$ and $b$ are positive numbers, $a^b < b^a$ if and only if $\dfrac{\ln a}{a} < \dfrac{\ln b}{b}$. Find intervals on which $\dfrac{\ln x}{x}$ is increasing or decreasing.</p>
|
148,032 | <p>What is the larger of the two numbers?</p>
<p>$$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$
I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks!</p>
| checkmath | 25,077 | <p>Hint: Use the Logarithm function.</p>
|
148,032 | <p>What is the larger of the two numbers?</p>
<p>$$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$
I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks!</p>
| Jeff | 10,832 | <p>We have $\sqrt{2}>1$ and $\sqrt{3}>1$, so raising either of these to powers $>1$ makes them larger.</p>
<p>Call $x=\sqrt{2}^\sqrt{3}$ and $y=\sqrt{3}^\sqrt{2}$.</p>
<p>We have $x^{2\sqrt{3}}=8$ and $y^{2\sqrt{2}}=9.$</p>

<p>Since $x>1$ and $2\sqrt{2} < 2\sqrt{3}$, we have $x^{2\sqrt{2}} < x^{2\sqrt{3}} = 8 < 9 = y^{2\sqrt{2}}$, and we conclude $y>x$.</p>
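A numeric restatement of this argument (my addition):

```python
import math

x = math.sqrt(2) ** math.sqrt(3)
y = math.sqrt(3) ** math.sqrt(2)

# Raising both to suitable powers turns them into integers:
x_pow = x ** (2 * math.sqrt(3))   # (2^(sqrt(3)/2))^(2 sqrt(3)) = 2^3 = 8
y_pow = y ** (2 * math.sqrt(2))   # (3^(sqrt(2)/2))^(2 sqrt(2)) = 3^2 = 9
```

Numerically $x \approx 1.82$ and $y \approx 2.17$, agreeing with the comparison $9 > 8$.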
|
88,469 | <p>These vectors form a basis on $\mathbb R^3$: $$\begin{bmatrix}1\\0\\-1\\\end{bmatrix},\begin{bmatrix}2\\-1\\0\\\end{bmatrix} ,\begin{bmatrix}1\\2\\1\\\end{bmatrix}$$</p>
<p>Can someone show how to use the Gram-Schmidt process to generate an orthonormal basis of $\mathbb R^3$?</p>
| Paul | 16,158 | <p>Let $u_1=\begin{bmatrix}1\\0\\-1\\\end{bmatrix} ,u_2=\begin{bmatrix}2\\-1\\0\\\end{bmatrix} ,u_3=\begin{bmatrix}1\\2\\1\\\end{bmatrix}$. To find the required orthonormal basis $\{w_1,w_1,w_3\}$, first we have
$$w_1=\frac{u_1}{\|u_1\|}=\begin{bmatrix}\frac{1}{\sqrt{2}}\\0\\-\frac{1}{\sqrt{2}}\\\end{bmatrix}.$$</p>
<p>Second, find $u_2-(w_1\cdot u_2)w_1$ as follows:
$$u_2-(w_1\cdot u_2)w_1=\begin{bmatrix}2\\-1\\0\\\end{bmatrix}-\sqrt{2}\begin{bmatrix}\frac{1}{\sqrt{2}}\\0\\-\frac{1}{\sqrt{2}}\\\end{bmatrix}=\begin{bmatrix}1\\-1\\1\\\end{bmatrix}.$$
By taking the dot product, you can see that $w_1$ is orthogonal to the above vector:
$$w_1\cdot[u_2-(w_1\cdot u_2)w_1]=w_1\cdot u_2-(w_1\cdot u_2)w_1\cdot w_1=0$$
since $w_1$ is an unit vector. So we can take
$$w_2=\frac{u_2-(w_1\cdot u_2)w_1}{\|u_2-(w_1\cdot u_2)w_1\|}=\begin{bmatrix}\frac{1}{\sqrt3}\\-\frac{1}{\sqrt3}\\\frac{1}{\sqrt3}\\\end{bmatrix}.$$</p>
<p>Finally, find $u_3-(w_1\cdot u_3)w_1-(w_2\cdot u_3)w_2$ as follows:
$$u_3-(w_1\cdot u_3)w_1-(w_2\cdot u_3)w_2=\begin{bmatrix}1\\2\\1\\\end{bmatrix}-0\cdot\begin{bmatrix}\frac{1}{\sqrt{2}}\\0\\-\frac{1}{\sqrt{2}}\\\end{bmatrix}-0\cdot\begin{bmatrix}\frac{1}{\sqrt3}\\-\frac{1}{\sqrt3}\\\frac{1}{\sqrt3}\\\end{bmatrix}=\begin{bmatrix}1\\2\\1\\\end{bmatrix}.$$
By taking the dot product, you can again see that $w_1$ and $w_2$ are orthogonal to the above vector. So we can take
$$w_3=\frac{u_3-(w_1\cdot u_3)w_1-(w_2\cdot u_3)w_2}{\|u_3-(w_1\cdot u_3)w_1-(w_2\cdot u_3)w_2\|}=\begin{bmatrix}\frac{1}{\sqrt6}\\\frac{2}{\sqrt6}\\\frac{1}{\sqrt6}\\\end{bmatrix}.$$</p>
|
1,794,221 | <p>I am asked to show that the tangent space of $M$={ $(x,y,z)\in \mathbb{R}^3 : x^{2}+y^{2}=z^{2}$} at the point p=(0,0,0) is equal to $M$ itself.</p>
<p>I have that $f(x,y,z)=x^{2}+y^{2}-z^{2}$ but as i calculate $<gradf_p,v>$ i get zero for any vector.Where am i making a disastrous error?</p>
| Mathematician 42 | 155,917 | <p>Suppose $\phi$ is injective, then $\phi(g)=e_H$ implies that $g=e_G$ since $\phi$ is injective and $\phi(e_G)=e_H$. Hence $\ker(\phi)$ is trivial. Conversely, suppose that the kernel is trivial, then $\phi(g)=\phi(h)$ implies that $\phi(gh^{-1})=e_H$, hence $gh^{-1}\in \ker(\phi)$. Since this kernel is trivial, it follows that $gh^{-1}=e_G$, or equivalently $g=h$, hence $\phi$ is injective.</p>
|