| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,438,999 | <p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p>
<p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
| mathreadler | 213,607 | <p>$$(x-a)^2 = (x+a)^2$$
Expand squares:
$$x^2 - 2ax + a^2 = x^2 + 2ax + a^2$$
Subtract $x^2+a^2$ from both sides:
$$-2ax = 2ax$$
add $2ax$ and divide by $4$:
$$0 = ax$$</p>
<p>So either 1) $a = 0$ or 2) $x = 0$.</p>
<ol>
<li>$$(x-0)^2 = (x+0)^2 \Leftrightarrow x^2 = x^2$$
Which is true for all $x$.</li>
<li>$$(0-a)^2 = (0+a)^2 \Leftrightarrow (-a)^2 = (a)^2$$
Which is true for all $a$.</li>
</ol>
<p>So either $a=0$ and $x$ can be anything or $x=0$ and $a$ can be anything.</p>
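<p>The algebra above can be checked symbolically (a quick sketch using sympy; not part of the original answer):</p>

```python
import sympy as sp

x, a = sp.symbols('x a')

# The difference of the two sides should reduce to the cross terms -4*a*x.
diff = sp.expand((x - a)**2 - (x + a)**2)
assert diff == -4*a*x

# Solving for a with x treated as a generic (nonzero) parameter forces a = 0.
assert sp.solve(sp.Eq(diff, 0), a) == [0]
```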
|
1,438,999 | <p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p>
<p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
| Narasimham | 95,860 | <p>Simplifying, after expansion of squares you get</p>
<p>$$ a \cdot x = 0 $$</p>
<p>either or both of them can be zero.</p>
<p>However you are specifically given that <em>for all values of $x$</em>... So <em>do not put</em> a particular $ x=0. $</p>
<p>Only choice is $ a=0. $</p>
<p>Also for any function if $ f(x+a) = f(x-a)$, then $ a =0.$</p>
<p>Like if $\, e^ { x+a } \sin ( x+a) = e^ { x-a } \sin ( x-a) $, then also, $ a=0$</p>
|
102,304 | <p>I have here a complex equation:</p>
<p>$$z^2 - (7+j)z + 24 +j7 = 0$$</p>
<p>How do we get the roots of this equation? I started using the quadratic formula $\frac{-b \pm \sqrt{b^2-4ac}}{2a}$, but it yielded too much complexity. Is there any way to directly attack this? Thanks.</p>
| Did | 6,179 | <p>One can <a href="http://en.wikipedia.org/wiki/Completing_the_square" rel="nofollow">complete the square</a>, that is, write $z^2-(7+j)z$ as the beginning of the expansion of
$$
\left(z-\frac12(7+j)\right)^2.
$$
This yields
$$
z^2-(7+j)z+24+7j=\left(z-\tfrac12(7+j)\right)^2-u,
$$
with
$$
u=\tfrac14(7+j)^2-24-7j.
$$
But $u=v^2$ for some complex number $v$, hence the equation to solve is equivalent to
$$
\left(z-\tfrac12(7+j)\right)^2-v^2=0,
$$
that is,
$$
\left(z-\tfrac12(7+j)-v\right)\cdot\left(z-\tfrac12(7+j)+v\right)=0,
$$
which yields the two solutions
$$
z=\tfrac12(7+j)\pm v.
$$
It remains to compute $v$...</p>
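<p>The remaining computation can be sketched numerically (with $j$ denoting the imaginary unit, as in the question; not part of the original answer):</p>

```python
import cmath

j = 1j  # the question uses j for the imaginary unit

# Complete the square: z**2 - (7+j)z + 24 + 7j = (z - (7+j)/2)**2 - u.
u = (7 + j)**2 / 4 - (24 + 7*j)
v = cmath.sqrt(u)

roots = [(7 + j) / 2 + v, (7 + j) / 2 - v]
residuals = [abs(z**2 - (7 + j)*z + 24 + 7*j) for z in roots]
print(roots, residuals)  # residuals are ~0
```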
|
235,661 | <p>Is this sufficient? Also, any good books/other suggestions regarding the subject will be very helpful.</p>
<p>Find min, max, inf, sup (if they exist):</p>
<p>$$B=\left\{\frac{m}{m+n}:m,n\in\mathbb{N}\right\}$$</p>
<p>Showing B has an upper bound:
Let $M=1$, we need to find $m,n$ fulfilling:$$\frac{m}{m+n}>1$$
As $n\in\mathbb{N}$ appears only in the denominator, the smaller its value, the greater $b$ will be. Therefore, let us choose $n=1$ (smallest possible value).$$\frac{m}{m+1}>1\,\,\,\,\,\leftrightarrow\,\,\,\,\,\,m>m+1$$</p>
<p>We got a contradiction, thus $M$ is an upper bound of $B$.</p>
<p>Showing $M=\sup B$: Let $\epsilon>0$, we need to find $b\in B$ fulfilling:$$\frac{m}{m+n}>1-\epsilon$$
Again, we'll choose $n=1$ to get the biggest $b$ possible:
$$\begin{align}
\frac{m}{m+1}&>1-\epsilon\\ m&>m+1-m\epsilon -\epsilon\\m&>\frac{1-\epsilon}{\epsilon}
\end{align}$$
Therefore for every $\epsilon$ we can choose $n=1,m>\frac{1-\epsilon}{\epsilon}$, which means $\sup B=1$.</p>
<p>Edit: Since $m,n \in\mathbb{N}$, $B>0$.</p>
<p>Showing $0=\inf B$: Let $\epsilon>0$, we need to find $b\in B$ fulfilling:
$$\frac{m}{m+n}<0+\epsilon$$
Choosing $m=1$ to make $b$ as small as possible:
$$1<\epsilon+n\epsilon\\n>\frac{1-\epsilon}{\epsilon}$$</p>
<p>We have shown that such $b$ exists for every $\epsilon$. Therefore, $\inf B = 0$.</p>
| DonAntonio | 31,254 | <p>If you take $\,\Bbb N=\{1,2,3,...\}\,$, then I think you'll agree with</p>
<p>$$\forall\,\,m,n,\in\Bbb N\,\,\,,\,\,\frac{m}{m+n}>0\Longrightarrow 0\,\,\text{is a lower bound for}\,\,M\,...$$</p>
<p>I think it'd be a good idea to try to prove that zero is actually the infimum of $\,M\,$</p>
|
235,661 | <p>Is this sufficient? Also, any good books/other suggestions regarding the subject will be very helpful.</p>
<p>Find min, max, inf, sup (if they exist):</p>
<p>$$B=\left\{\frac{m}{m+n}:m,n\in\mathbb{N}\right\}$$</p>
<p>Showing B has an upper bound:
Let $M=1$, we need to find $m,n$ fulfilling:$$\frac{m}{m+n}>1$$
As $n\in\mathbb{N}$ appears only in the denominator, the smaller its value, the greater $b$ will be. Therefore, let us choose $n=1$ (smallest possible value).$$\frac{m}{m+1}>1\,\,\,\,\,\leftrightarrow\,\,\,\,\,\,m>m+1$$</p>
<p>We got a contradiction, thus $M$ is an upper bound of $B$.</p>
<p>Showing $M=\sup B$: Let $\epsilon>0$, we need to find $b\in B$ fulfilling:$$\frac{m}{m+n}>1-\epsilon$$
Again, we'll choose $n=1$ to get the biggest $b$ possible:
$$\begin{align}
\frac{m}{m+1}&>1-\epsilon\\ m&>m+1-m\epsilon -\epsilon\\m&>\frac{1-\epsilon}{\epsilon}
\end{align}$$
Therefore for every $\epsilon$ we can choose $n=1,m>\frac{1-\epsilon}{\epsilon}$, which means $\sup B=1$.</p>
<p>Edit: Since $m,n \in\mathbb{N}$, $B>0$.</p>
<p>Showing $0=\inf B$: Let $\epsilon>0$, we need to find $b\in B$ fulfilling:
$$\frac{m}{m+n}<0+\epsilon$$
Choosing $m=1$ to make $b$ as small as possible:
$$1<\epsilon+n\epsilon\\n>\frac{1-\epsilon}{\epsilon}$$</p>
<p>We have shown that such $b$ exists for every $\epsilon$. Therefore, $\inf B = 0$.</p>
| ackshooairy | 47,570 | <p>Consider first case where the value m is much larger than the value of n. $n << m$</p>
<p>Then consider the case where the value n is much larger than the value m. $m <<n$ </p>
<p>Write out a few iterations and you'll see where each one is headed. That will give you the supremum and infimum.</p>
<p>I should point out like the other posts that $m,n \in \mathbb{N}$ which means $m,n > 0$.</p>
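<p>The two limiting cases can be tabulated with a short script (a sketch, not from the original answer):</p>

```python
from fractions import Fraction

# m >> n: b = m/(m+n) approaches 1 from below (take n = 1).
big_m = [Fraction(m, m + 1) for m in (1, 10, 100, 1000)]

# n >> m: b = m/(m+n) approaches 0 from above (take m = 1).
big_n = [Fraction(1, 1 + n) for n in (1, 10, 100, 1000)]

print([float(b) for b in big_m])  # 0.5, 0.909..., 0.990..., 0.999...
print([float(b) for b in big_n])  # 0.5, 0.0909..., 0.00990..., 0.000999...

# Every element of B lies strictly between 0 and 1.
assert all(0 < b < 1 for b in big_m + big_n)
```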
|
1,085,279 | <p>There is a <a href="https://math.stackexchange.com/questions/265619/meaning-of-normalization">question</a> already asked here about this. But I know almost nothing of algebraic geometry, nothing fancy to understand the answer. So I would highly appreciate an elementary explanation to my question.</p>
<p>I encountered the term <em>normalization</em> while I was trying to understand that a particular algebraic curve is smooth. My questions are: </p>
<p>1) What is the meaning of normalization? </p>
<p>2) Why do we perform it? </p>
<p>3) How is it related to smoothness of algebraic curves? To singularities of curves? </p>
<p>4) Is normalization canonical? If so, how? </p>
| Georges Elencwajg | 3,217 | <p>0) Recall that a domain $A$ is said to be normal if it is integrally closed in its fraction field $K=Frac(A)$.<br>
This means that any element $q\in K$ killed by a monic polynomial in $A[T]$, i.e. such that for some $n\gt 0, a_i\in A$ one has $$q^n+a_1q^{n-1}+\cdots+a_n=0$$ already satisfies $ q\in A$ .<br>
A variety $V$ is said to be normal if it can be covered by open affines $V_i\subset V$ whose associated rings of functions $A_i=\mathcal O(V_i)$ are normal.</p>
<p>1) The normalization of an irreducible variety $X$ is a morphism $n:\tilde X\to X$ such that $\tilde X$ is a normal variety and there exists a closed subvariety $Y\subsetneq X$ such that $n|(\tilde X\setminus n^{-1}(Y))\stackrel {\cong}{\to}X\setminus Y$ is an isomorphism. </p>
<p>2) We perform normalization because normal varieties have better properties than arbitrary ones.<br>
For example in normal varieties regular functions defined outside a closed subvariety of codimension $\geq 2$ can be extended to regular functions defined everywhere ("Hartogs phenomenon") .</p>
<p>3) A curve is non-singular (=smooth if the base field is algebraically closed ) if and only if it is normal, so that normalization=desingularization for curves.<br>
In higher dimensions normal varieties, alas, may have singularities.<br>
Getting rid of these is tremendously difficult in characteristic zero (Hironaka) and is an unsolved challenge in positive characteristics. </p>
<p>4) Yes, normalization of $X$ is canonical in the sense that if $n': X'\to X$ is another normalization we have an isomorphism $j:\tilde X \stackrel {\cong}{\to} X'$ commuting with the normalization morphisms, namely $n'\circ j=n$ .<br>
At the basis of this canonicity is the fact that there is a (trivial) canonical procedure for enlarging a domain to its integral closure in its fraction field. </p>
|
1,085,279 | <p>There is a <a href="https://math.stackexchange.com/questions/265619/meaning-of-normalization">question</a> already asked here about this. But I know almost nothing of algebraic geometry, nothing fancy to understand the answer. So I would highly appreciate an elementary explanation to my question.</p>
<p>I encountered the term <em>normalization</em> while I was trying to understand that a particular algebraic curve is smooth. My questions are: </p>
<p>1) What is the meaning of normalization? </p>
<p>2) Why do we perform it? </p>
<p>3) How is it related to smoothness of algebraic curves? To singularities of curves? </p>
<p>4) Is normalization canonical? If so, how? </p>
| Takumi Murayama | 116,766 | <p>I like Georges Elencwajg's answer, but I think it's useful to see some topological intuition for what normalization does over <span class="math-container">$\mathbf{C}$</span>.</p>
<p>Note we say a variety is <strong>normal</strong> if its local rings are integrally closed in their fraction field.</p>
<h2>Riemann Extension Theorem</h2>
<p>This fleshes out 2) in Georges Elencwajg's answer. Most of what follows is from Kollár's article "<a href="http://www.ams.org/journals/bull/1987-17-02/S0273-0979-1987-15548-0/" rel="noreferrer">The structure of algebraic threefolds</a>". I also enjoy the discussion around p. 391 in Brieskorn and Knörrer's book <em>Plane algebraic curves</em>.</p>
<p>In complex analysis, you learn about the <strong>Riemann extension theorem</strong>, which says a bounded meromorphic function on any open set <span class="math-container">$U \subset \mathbf{C}$</span> that is holomorphic on <span class="math-container">$U \setminus \{p\}$</span> is in fact holomorphic on <span class="math-container">$U$</span>. In (complex) algebraic geometry, we want something similar to hold (let's say, for curves): that a bounded rational function that is regular on <span class="math-container">$U \setminus \{p\}$</span> is in fact regular on <span class="math-container">$U$</span>.</p>
<p>This fails in general, however:</p>
<p><strong>Example</strong> (cuspidal cubic)<strong>.</strong> Let <span class="math-container">$V = \{x^2 - y^3 = 0\} \subset \mathbf{C}^2$</span>, and let <span class="math-container">$f = (x/y)\rvert_V$</span>. <span class="math-container">$f$</span> is a rational function on <span class="math-container">$V$</span>, regular away from <span class="math-container">$0$</span>. You can of course demand <span class="math-container">$f(0,0) = 0$</span> to make <span class="math-container">$f$</span> continuous at <span class="math-container">$(0,0)$</span>, but this does not make <span class="math-container">$f$</span> regular. For, suppose <span class="math-container">$x/y = a(x,y)/b(x,y)$</span> for some polynomials <span class="math-container">$a,b$</span> such that <span class="math-container">$b(0,0) \ne 0$</span>. Then, <span class="math-container">$xb(x,y) - ya(x,y) = 0$</span> on <span class="math-container">$V$</span>, so <span class="math-container">$x^2 - y^3$</span> divides it. But there is a nonzero constant term in <span class="math-container">$b(x,y)$</span> which contributes a nonzero coefficient for <span class="math-container">$x$</span> in <span class="math-container">$xb(x,y) - ya(x,y)$</span>, so it can't be zero. Note, though, that <span class="math-container">$(x/y)^2 = y$</span> <em>is</em> regular on <span class="math-container">$V$</span>, which shows <span class="math-container">$V$</span> is not normal.</p>
<p>The question then becomes: can we modify the curve <span class="math-container">$V$</span> so that the Riemann extension theorem <em>does</em> hold? The answer is that yes, the normalization in fact does this for us: it gives another variety <span class="math-container">$\tilde{V}$</span> such that the rational functions on <span class="math-container">$V$</span> and <span class="math-container">$\tilde{V}$</span> agree, but an extension property like the one above holds. This extension property is the content of</p>
<p><strong>Hartogs's Theorem.</strong> Let <span class="math-container">$V$</span> be a normal variety and let <span class="math-container">$W \subset V$</span> be a subvariety such that <span class="math-container">$\dim W \le \dim V - 2$</span>. Let <span class="math-container">$f$</span> be a regular function on <span class="math-container">$V - W$</span>. Then <span class="math-container">$f$</span> extends to a regular function on <span class="math-container">$V$</span>.</p>
<p>But returning to our example: the map <span class="math-container">$\mathbf{C} \to V$</span> sending <span class="math-container">$z \mapsto (z^3,z^2)$</span> is in fact a normalization. The function <span class="math-container">$x/y$</span> then pulls back to <span class="math-container">$z$</span> on <span class="math-container">$\mathbf{C}$</span>, which is obviously regular!</p>
<p><em>Remark.</em> It is possible to define normality as saying every rational function that is bounded in a neighborhood <span class="math-container">$U$</span> of a point <span class="math-container">$p$</span> is in fact regular on <span class="math-container">$U$</span>, in direct analogy to the Riemann extension theorem. But the equivalence of these definitions is hard: see Kollár, <em>Lectures on resolution of singularities</em>, §1.4, especially Rem. 1.28.</p>
<h2>Separating Branches</h2>
<p>What follows is from Mumford's <em>The red book of varieties and schemes,</em> III.9.</p>
<p>Normality can be understood as a way to separate the "branches" of an algebraic variety at a singular point. Consider the following</p>
<p><strong>Example</strong> (nodal cubic)<strong>.</strong> Let <span class="math-container">$V = \{x^2(x+1) - y^2\} \subset \mathbf{C}^2$</span>. It is not normal at <span class="math-container">$(0,0)$</span> since it's singular there. Consider a small analytic neighborhood
<span class="math-container">$U = \{(x,y) \mid \lvert x \rvert < \epsilon,\ \lvert y \rvert < \epsilon\}$</span>.
Points in <span class="math-container">$U \cap V$</span> satisfy <span class="math-container">$\lvert x - y \rvert \lvert x + y \rvert = \lvert x \rvert^3 < \epsilon \lvert x \rvert^2$</span> hence <span class="math-container">$\lvert x - y \rvert < \sqrt{\epsilon} \lvert x \rvert$</span> or <span class="math-container">$\lvert x + y \rvert < \sqrt{\epsilon} \lvert x \rvert$</span>, but both can't occur simultaneously for small enough <span class="math-container">$\epsilon$</span>. Thus, near the origin <span class="math-container">$V$</span> splits into two "branches" containing points satisfying <span class="math-container">$\lvert x - y \rvert \ll \lvert x \rvert$</span> and <span class="math-container">$\lvert x + y \rvert \ll \lvert x \rvert$</span>. Each piece is connected, but there is no <em>algebraic</em> way to separate each branch.</p>
<p>The normalization <span class="math-container">$\pi\colon \tilde{V} \to V$</span> ends up fixing this, in the following way: for each point <span class="math-container">$p \in V$</span>, the inverse image <span class="math-container">$\pi^{-1}(p)$</span> is in 1-1 correspondence with the set of branches at <span class="math-container">$p$</span>. In our particular example, it is given by <span class="math-container">$\mathbf{C} \to V$</span> where <span class="math-container">$z \mapsto (z^2-1,z(z^2-1))$</span>; the two branches correspond to <span class="math-container">$z=\pm1$</span>.</p>
<p>So perhaps a variety <span class="math-container">$V$</span> is normal if and only if at every point <span class="math-container">$p \in V$</span>, there is only one branch. The forward direction is essentially the content of Zariski's main theorem; see pp. 288–289 in Mumford. But the converse is false: the cuspidal cubic only has one branch but is not normal.</p>
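<p>Both normalization maps above can be verified symbolically (a sketch using sympy; not part of the original answer):</p>

```python
import sympy as sp

z = sp.symbols('z')

# Cuspidal cubic x**2 - y**3 = 0, normalized by z -> (z**3, z**2).
x, y = z**3, z**2
assert sp.expand(x**2 - y**3) == 0
assert sp.cancel(x / y) == z  # the non-regular x/y pulls back to plain z

# Nodal cubic x**2*(x+1) - y**2 = 0, normalized by z -> (z**2 - 1, z*(z**2 - 1)).
x, y = z**2 - 1, z*(z**2 - 1)
assert sp.expand(x**2*(x + 1) - y**2) == 0

# The two branches at the node (0, 0) correspond to z = 1 and z = -1.
assert (x.subs(z, 1), y.subs(z, 1)) == (0, 0)
assert (x.subs(z, -1), y.subs(z, -1)) == (0, 0)
```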
|
2,275,951 | <p>The parabola y=x² is parameterized by x(t) = t and y(t) = t². At the point <strong>A</strong> (t,t²) a line segment <strong>AP</strong> 1 unit long is drawn normal to the parabola extending inward. Find the parametric equations of the curve traced by the point <strong>P</strong> as <strong>A</strong> moves along the parabola.</p>
<p><a href="https://www.desmos.com/calculator/kp5wxcbpnt" rel="nofollow noreferrer">Best picture i could come up with</a></p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\int_{0}^{\pi/2}\expo{-\pi\tan\pars{t}/2}\,\dd t &
\,\,\,\stackrel{x\ =\ \tan\pars{t}}{=}\,\,\,
\int_{0}^{\infty}{\expo{-\pi x/2} \over x^{2} + 1}\,\dd x =
\Im\int_{0}^{\infty}{\expo{-\pi x/2} \over x - \ic}\,\dd x
\\[5mm] & =
\Im\int_{-\ic}^{\infty - \ic}{\expo{-\pi\ic/2}\expo{-\pi x/2} \over x}\,\dd x =
-\,\Re\int_{-\ic}^{\infty - \ic}{\expo{-\pi x/2} \over x}\,\dd x
\\[5mm] & \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\sim}
\Re\int_{\infty}^{\epsilon}{\expo{-\pi x/2} \over x}\,\dd x +
\Re\int_{0}^{-\pi/2}
{\epsilon\expo{\ic\theta}\ic\,\dd\theta \over \epsilon\expo{\ic\theta}} +
\Re\int_{-\epsilon}^{-1}{\expo{-\pi\ic y/2} \over \ic y}\,\ic\,\dd y
\\[5mm] & \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\sim}
-\int_{\pi\epsilon/2}^{\infty}{\expo{-x} \over x}\,\dd x -
\int^{\pi\epsilon/2}_{1}{\cos\pars{x} \over x}\,\dd x
\\[5mm] & \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\to}\,\,\,
\int^{\infty}_{\pi\epsilon/2}{\cos\pars{x} \over x}\,\dd x -
\int_{\pi\epsilon/2}^{\infty}{\expo{-x} \over x}\,\dd x -
\int^{\infty}_{\pi/2}{\cos\pars{x} \over x}\,\dd x
\\[5mm] & =
-\,\mrm{Ci}\pars{{\pi \over 2}\,\epsilon} -
\,\mrm{Ei}\pars{{\pi \over 2}\,\epsilon} + \,\mrm{Ci}\pars{\pi \over 2}
\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>$\ds{\mrm{Ei}}$ is the
<a href="http://dlmf.nist.gov/6.2.E1" rel="nofollow noreferrer">Exponential Integral Function</a>.
<a href="http://dlmf.nist.gov/6.6" rel="nofollow noreferrer">Note that</a>, as $\ds{z \to 0}$,
$\ds{\,\mrm{Ci}\pars{z} \sim \gamma + \ln\pars{z} + \,\mrm{O}\pars{z^{2}}}$ and
$\ds{\,\mrm{Ei}\pars{z} \sim -\gamma - \ln\pars{z} + \,\mrm{O}\pars{z^{1}}}$ </p>
</blockquote>
<p>such that \eqref{1} becomes
$$
\bbx{\int_{0}^{\pi/2}\expo{-\pi\tan\pars{t}/2}\,\dd t =
\,\mrm{Ci}\pars{\pi \over 2}}
$$</p>
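<p>The final identity can be checked numerically (a sketch using scipy; not part of the original answer):</p>

```python
import math
from scipy.integrate import quad
from scipy.special import sici

# Left-hand side: direct quadrature of the original integral.
lhs, _ = quad(lambda t: math.exp(-math.pi * math.tan(t) / 2), 0, math.pi / 2)

# Right-hand side: Ci(pi/2); scipy's sici returns the pair (Si(x), Ci(x)).
_, rhs = sici(math.pi / 2)

print(lhs, rhs)  # both ~0.47
```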
|
4,146,629 | <p>I'm reading D. E. Knuth's book "Surreal Numbers". And I'm completely stuck in chap. 6 (The Third Day) because there is a proof I don't understand. Alice says</p>
<blockquote>
<p>Suppose at the end of <span class="math-container">$n$</span> days, the numbers are <span class="math-container">$$x_1<x_2<\dots<x_m$$</span></p>
</blockquote>
<p>She demonstrates that <span class="math-container">$x_i \equiv (\{x_{i-1}\},\{x_{i+1}\})$</span> and she begins the proof by saying</p>
<blockquote>
<p>Look, each element of <span class="math-container">$X_{iL}$</span> is <span class="math-container">$\le x_{i-1}$</span>, and each element of
<span class="math-container">$X_{iR}$</span> is <span class="math-container">$\ge x_{i+1}$</span>.</p>
</blockquote>
<p>That first step of the proof is the one I don't understand. Can someone show me how to demonstrate that statement?</p>
| peawormsworth | 603,214 | <p>An ordered list of numbers in the universe after each day:</p>
<pre><code>Day 0: empty
Day 1: 0
Day 2: -1, 0, 1
Day 3: -2, -1, -1/2, 0, 1/2, 1, 2
</code></pre>
<p>New numbers:</p>
<pre><code>Day 1: 0
Day 2: -1, 1
Day 3: -2, -1/2, 1/2, 2
</code></pre>
<p>On any given day the universe can be sorted:</p>
<p><span class="math-container">$$x_1<x_2<\dots<x_m$$</span></p>
<p>Sorted list of number on day 3:</p>
<pre><code>-2 < -1 < -1/2 < 0 < 1/2 < 1 < 2
</code></pre>
<p>which we assign to be <span class="math-container">$x_1<x_2<x_3<x_4<x_5<x_6<x_7$</span></p>
<p>The new numbers on day 3 being:</p>
<pre><code>-2, -1/2, 1/2, 2
</code></pre>
<p>which are the values of elements x1, x3, x5, x7 from our sorted list.</p>
<p>The book says: <span class="math-container">$x_i \equiv (\{x_{i-1}\},\{x_{i+1}\})$</span></p>
<p>For day 3, it means:</p>
<pre><code>x1 = { {},{x2}}
x3 = {{x2},{x4}}
x5 = {{x4},{x6}}
x7 = {{x6},{} }
</code></pre>
<p>Which can be written with values as:</p>
<pre><code>-2 = {|-1}
-1/2 = {-1|0}
1/2 = {0|1}
2 = {1|}
</code></pre>
<p>Each element of <span class="math-container">$X_{iL}$</span> is <span class="math-container">$\le x_{i-1}$</span>, and each element of <span class="math-container">$X_{iR}$</span> is <span class="math-container">$\ge x_{i+1}$</span>.</p>
<p>This says that there is a longer form for writing these left and right sets:</p>
<pre><code>x1 = {{},{x2,x4,x6}}
x3 = {{x2},{x4,x6}}
x5 = {{x2,x4},{x6}}
x7 = {{x2,x4,x6},{}}
</code></pre>
<p>Which can be written with values as:</p>
<pre><code>-2 = {|-1,0,1}
-1/2 = {-1|0,1}
1/2 = {-1,0|1}
2 = {-1,0,1|}
</code></pre>
<p>I don't have the book, but I think the conclusion it is getting to is that each finite surreal number has a short representation with only one number in the left set and one number in the right set.</p>
<p>This means for example that we could write the finite surreal number -1/2 as:</p>
<pre><code>-1/2 = {-1|0,1}
</code></pre>
<p>but that we only need to write:</p>
<pre><code>-1/2 = {-1|0}
</code></pre>
<p>Any finite surreal value is fully defined by the single greatest number from its left set and the single smallest number from its right set.</p>
<p>Since the entire universe of numbers available on the previous day is placed into either the left or right set of each new number, the full representations become very large. And since all mathematical operations on the shortened versions work the same as on the longer versions, there is an incentive to use this short form while operating with finite surreal numbers.</p>
<p>For example I could write a surreal representation:</p>
<pre><code>3/256 = { 1/128 | 1/64 }
</code></pre>
<p>Where left and right are the decrement and increment of the numerator of the original number:</p>
<pre><code>3/256 = { (3-1)/256 | (3+1)/256 }
</code></pre>
<p>The long form, by contrast, would involve writing 1023 numbers instead of two.</p>
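<p>The day-by-day lists at the top can be generated programmatically (a sketch; in this construction each new day adds one integer beyond each end plus the midpoint of each adjacent pair, and the helper name is made up):</p>

```python
from fractions import Fraction

def next_day(numbers):
    """From one day's sorted numbers, build the next day's sorted list:
    one new integer beyond each end, and the midpoint of each adjacent pair."""
    if not numbers:
        return [Fraction(0)]
    out = [numbers[0] - 1]
    for a, b in zip(numbers, numbers[1:]):
        out += [a, (a + b) / 2]
    out += [numbers[-1], numbers[-1] + 1]
    return out

day = []
for _ in range(3):
    day = next_day(day)

print([str(q) for q in day])  # ['-2', '-1', '-1/2', '0', '1/2', '1', '2']
```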
|
2,946,379 | <p>The question posed is the following: Let <span class="math-container">$X$</span> be a Banach Space and let <span class="math-container">$T:X\to X$</span> be a Lipschitz-Continuous map. Show that, for <span class="math-container">$\mu$</span> sufficiently large, the equation
<span class="math-container">\begin{equation}
Tx+\mu x=y
\end{equation}</span>
has, for any <span class="math-container">$y\in X$</span>, a unique solution.</p>
<p>Note that <span class="math-container">$x,y$</span> are vectors, since our book (<em>Mathematical Analysis</em> by Mariano Giaquinta and Giuseppe Modica) generally ignores vector indicators, since it's all multivariable.</p>
<p>My proof is based on the Banach Fixed Point Theorem:
Since <span class="math-container">$T$</span> is Lipschitz-continuous, we have <span class="math-container">$\|Tx\|\leq k\|x\|$</span> for <span class="math-container">$0<k\leq1$</span>. So
<span class="math-container">$\|Tx-\mu x\|\leq k\|x\| - \mu \|x\|$</span>.</p>
<p>Then we can say</p>
<p><span class="math-container">\begin{equation}
\|Tx-\mu x\|\leq (k-\mu)\|x\|
\end{equation}</span></p>
<p>So, if <span class="math-container">$\mu$</span> is large enough that <span class="math-container">$|k-\mu|<1$</span>, we have a contractive map, and by the Banach Fixed Point theorem, there exists a unique fixed point <span class="math-container">$x_0$</span> for <span class="math-container">$(T-\mu)x$</span>. Then, <span class="math-container">$Tx-\mu x=y$</span> has a unique solution, namely, <span class="math-container">$x_0$</span>.</p>
<p>My question is whether this is a valid proof. I'm mostly foggy on if I applied the theorem correctly, and if I am allowed to say <span class="math-container">$Tx-\mu x=(T-\mu)x$</span>, since <span class="math-container">$T$</span> is a map and <span class="math-container">$\mu$</span> is a constant (I think).</p>
| J.G. | 56,861 | <p>Since <span class="math-container">$dx=dt/t$</span>, you need to divide the whole integrand by <span class="math-container">$t$</span>.</p>
|
2,946,379 | <p>The question posed is the following: Let <span class="math-container">$X$</span> be a Banach Space and let <span class="math-container">$T:X\to X$</span> be a Lipschitz-Continuous map. Show that, for <span class="math-container">$\mu$</span> sufficiently large, the equation
<span class="math-container">\begin{equation}
Tx+\mu x=y
\end{equation}</span>
has, for any <span class="math-container">$y\in X$</span>, a unique solution.</p>
<p>Note that <span class="math-container">$x,y$</span> are vectors, since our book (<em>Mathematical Analysis</em> by Mariano Giaquinta and Giuseppe Modica) generally ignores vector indicators, since it's all multivariable.</p>
<p>My proof is based on the Banach Fixed Point Theorem:
Since <span class="math-container">$T$</span> is Lipschitz-continuous, we have <span class="math-container">$\|Tx\|\leq k\|x\|$</span> for <span class="math-container">$0<k\leq1$</span>. So
<span class="math-container">$\|Tx-\mu x\|\leq k\|x\| - \mu \|x\|$</span>.</p>
<p>Then we can say</p>
<p><span class="math-container">\begin{equation}
\|Tx-\mu x\|\leq (k-\mu)\|x\|
\end{equation}</span></p>
<p>So, if <span class="math-container">$\mu$</span> is large enough that <span class="math-container">$|k-\mu|<1$</span>, we have a contractive map, and by the Banach Fixed Point theorem, there exists a unique fixed point <span class="math-container">$x_0$</span> for <span class="math-container">$(T-\mu)x$</span>. Then, <span class="math-container">$Tx-\mu x=y$</span> has a unique solution, namely, <span class="math-container">$x_0$</span>.</p>
<p>My question is whether this is a valid proof. I'm mostly foggy on if I applied the theorem correctly, and if I am allowed to say <span class="math-container">$Tx-\mu x=(T-\mu)x$</span>, since <span class="math-container">$T$</span> is a map and <span class="math-container">$\mu$</span> is a constant (I think).</p>
| Community | -1 | <p>There are a ton of mistakes here, unfortunately. The key issue is that you've got something like</p>
<p><span class="math-container">$$\frac{t + t^3}{t - t^5} = 1 + t^2 - \frac{1}{t^4} - \frac{1}{t^2}$$</span></p>
<p>where you've just mixed-and-matched all four terms. This is a (very) incorrect manipulation of the fractions. One way that you can tell the two sides are unrelated is that the left hand side tends to <span class="math-container">$0$</span> as <span class="math-container">$t \to \infty$</span>, while the right hand side blows up.</p>
<p>The second and third issues, as pointed out in the other answers, are that you're missing <span class="math-container">$dt/t = dx$</span> from the substitution, and that you didn't change the bounds to <span class="math-container">$[e, \infty)$</span>. </p>
|
2,264,791 | <p>I have a problem that I'm having trouble figuring out the distribution with given condition.</p>
<p>It is given that 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1.</p>
<blockquote>
<p><strong>Original Problem:</strong></p>
<p>What is the distribution of 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1?</p>
</blockquote>
<p>With parameter 1, <span class="math-container">$X$</span> can be written as <span class="math-container">$e^{-x}$</span>, and after plug in the given function, I got <span class="math-container">$$\frac{1}{e^{-x}+1} = \frac{e^{x}}{e^{x}+1}$$</span></p>
<p>What type of distribution is this?</p>
| Graham Kemp | 135,106 | <p>$X$ is <em>not</em> "written" as $e^{-x}$. The probability density function of $X$, called $f_X(x)$, is equal to $e^{-x}~\big[x\geqslant 0\big]$.</p>
<p>The cumulative distribution function of $X$ is: $$\begin{align}F_X(x) ~&=~ \mathsf P(X\leqslant x) \\[1ex] &=~ (1-e^{-x}) ~\big[x\geqslant 0\big]\end{align}$$</p>
<p>Let $Y:=1/(1+X)$. The cumulative distribution function, and therefore probability density function, of $Y$ is</p>
<p>$$\begin{align}F_Y(y) ~&=~ \mathsf P(1/(1+X)\leqslant y) \\[1ex] &=~ \mathsf P(X\geqslant (1/y)-1) \\[1ex] &=~ 1- F_X(\tfrac 1y-1)\\[0ex] &~~\vdots\\[3ex] f_Y(y) ~&=~ \dfrac{\mathrm d ~F_Y(y)}{\mathrm d~y\qquad}\\[0ex] &~~\vdots\end{align}$$</p>
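<p>Completing the elided step gives $F_Y(y) = e^{1-1/y}$ for $0 < y \le 1$; a Monte Carlo sketch (not part of the original answer) agrees:</p>

```python
import math
import random

random.seed(0)
n = 200_000
# X ~ Exp(1); Y = 1/(1+X) takes values in (0, 1].
samples = [1 / (1 + random.expovariate(1)) for _ in range(n)]

def cdf_y(y):
    # F_Y(y) = 1 - F_X(1/y - 1) = exp(1 - 1/y) for 0 < y <= 1
    return math.exp(1 - 1/y)

errors = []
for y in (0.2, 0.5, 0.8):
    empirical = sum(s <= y for s in samples) / n
    errors.append(abs(empirical - cdf_y(y)))
    print(y, empirical, cdf_y(y))
```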
|
345,844 | <p>Should be simple enough, yet I can't show that there are no monomorphisms $\mathbb{Z}^3\rightarrow \mathbb{Z}^2$. (It is true, right?)</p>
| Jim | 56,747 | <p><strong>Hint:</strong> Show that given any $x, y, z \in \mathbb Z^2$ there exist <em>non-zero</em> integers $a, b, c \in \mathbb Z$ such that
$$ax + by + cz = (0, 0)$$
(You should be able to give an explicit formula for $a, b, c$ in terms of the entries of $x, y, z$)</p>
<p>Next show that if you had a monomorphism $\phi\colon\mathbb Z^3 \to \mathbb Z^2$ then there would be no such $a, b, c$ for the vectors $\phi(0, 0, 1), \phi(0, 1, 0), \phi(0, 0, 1)$.</p>
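<p>One explicit choice for the coefficients in the hint (a sketch; the determinant formula below gives nonzero $a, b, c$ whenever no two of the vectors are parallel, and the degenerate cases are even easier):</p>

```python
def det(u, v):
    # 2x2 determinant of two vectors in Z^2
    return u[0]*v[1] - u[1]*v[0]

def coefficients(x, y, z):
    """Return (a, b, c) with a*x + b*y + c*z = (0, 0), via a Cramer-style identity."""
    return det(y, z), det(z, x), det(x, y)

x, y, z = (2, 5), (3, -1), (4, 7)
a, b, c = coefficients(x, y, z)
combo = (a*x[0] + b*y[0] + c*z[0], a*x[1] + b*y[1] + c*z[1])
print(a, b, c, combo)  # 25 6 -17 (0, 0)
```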
|
<p>I am following a proof in the text <em>Optimization Theory and Methods</em> (Springer) by Wenyu Sun and Ya-Xiang Yuan. I come across what seems obvious: for a column vector $v$, with dimension $n\times 1$, $$\biggl\|I-\frac{vv^T}{v^Tv}\biggr\|=1,$$ where $I$ is the $n\times n$ identity matrix, and $\|\cdot\|$ is a matrix norm.</p>
<hr>
<p>I try to verify it by considering a Frobenius norm, that is </p>
<p>\begin{equation*}
\begin{split}
\biggl\|I-\frac{vv^T}{v^Tv}\biggr\|_F& = \biggl (tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)^T\biggl(I-\frac{vv^T}{v^Tv}\biggr)\biggl)^{\frac{1}{2}} \\
& = \biggl (tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)^2\biggr)^{\frac{1}{2}}\\
& =tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)\\
& =tr\bigl(I\bigr)-tr\biggr(\frac{vv^T}{v^Tv}\biggr)\\
& =n-\frac{1}{\|v\|^2} \|v\|^2\\
& =n-1
\end{split}
\end{equation*} </p>
<hr>
<p>So, I do not what is the problem. Because in the text no specification of norm is given. May be I have to change another matrix norm.</p>
<hr>
<p>NOTE: A Frobenius matrix norm for any matrix $A$ is defined by
\begin{equation*}
\begin{split}
\|A\|_F & = \biggl( \sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^2\biggr)^\frac{1}{2}\\
& = \biggl(tr(A^TA)\biggr)^\frac{1}{2}
\end{split}
\end{equation*}</p>
| dineshdileep | 41,541 | <p>I think you should check the notations of that book first. The authors are probably talking about the spectral-norm. Note that spectral norm is completely different from the frobenius norm. For any square matrix $P$
\begin{align}
||P||_{\text{spectral norm}}=||P||_2=\max_{||x||_2 = 1}||Px||_2=\sigma_{1} &&\{\mbox{spectral norm, $\sigma_1$ is highest singular value}\} \\
||P||_{Frob}=\sqrt{\sum_{i,j}|P_{i,j}|^2}=\sqrt{\mathop{trace}\{P^TP\}}=\sqrt{\sum_{i=1}^{r}\sigma_{i}^2} && \{\mbox{frobenius norm, $r$ is rank of $P$}\}
\end{align}</p>
<p>Let $v_1=\frac{v}{||v||_2}$ Let $v_1,v_2,\dots,v_{n}$ form a orthonormal basis for $n$ dimensional space so that they are orthogonal to each other and are of unit norm. Let $V=[v_1,\dots,v_n]$ be a $n\times n$ orthonormal matrix. Convince yourself that $$I=VV^T=v_1v_1^T+\dots+v_nv_n^T$$ Note that the matrix you are interested in is $$P=I-v_1v_1^T=v_2v_2^T+\dots+v_nv_n^T$$ Now try to obtain the above results in terms of singular values. </p>
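<p>A quick numerical comparison of the two norms for $P = I - vv^T/v^Tv$ (a sketch using numpy; not part of the original answer) shows the spectral norm is $1$ while the Frobenius norm is $\sqrt{n-1}$:</p>

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 6
v = rng.standard_normal(n)

# P is the orthogonal projector onto the complement of span(v):
# its eigenvalues are 1 (multiplicity n-1) and 0 (once).
P = np.eye(n) - np.outer(v, v) / (v @ v)

spectral = np.linalg.norm(P, 2)       # largest singular value
frobenius = np.linalg.norm(P, 'fro')  # sqrt of the sum of squared singular values

print(spectral, frobenius, math.sqrt(n - 1))
```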
|
200,658 | <p>What is the value of :</p>
<p>$$\sum_{n=1}^{\infty}\frac{n^2+n+1}{3^n}$$</p>
| Ayush Khemka | 42,108 | <p>I don't know the exact steps of how to get that, but I figured out that this comes out to be
$\frac 11+\frac 79+\frac {13}{27}+\ldots$</p>
<p>and this link gives the solution $\frac {11}4$; see the
<a href="http://www.wolframalpha.com/input/?i=sum%20from%201%20to%20infinity%20%28%28n%5E2%2bn%2b1%29/3%5En%29" rel="nofollow">WolframAlpha</a> computation.</p>
|
200,658 | <p>What is the value of :</p>
<p>$$\sum_{n=1}^{\infty}\frac{n^2+n+1}{3^n}$$</p>
| Jakub Konieczny | 10,674 | <p>In most practical applications, you can just ask Mathematica, and it will tell you it's $\frac{11}{4}$.</p>
<p>If you want to arrive at the formula in a more rigorous way, you can do the following: </p>
<p>Consider the function $f$: $$f(x) = \frac{1}{1 - x} = \sum_{n=0}^\infty x^n$$</p>
<p>An initial observation is that $f(\frac{1}{3}) = \sum_{n=0}^\infty \frac{1}{3^n}$ so it looks a little like your sum. Now, consider $f'$:
$$f'(x) = \frac{1}{(1 - x)^2} = \sum_{n=1}^\infty n x^{n-1} = \sum_{n=0}^\infty (n+1) x^{n} $$</p>
<p>When you plug in $\frac{1}{3}$ again, you find $f'(\frac{1}{3}) = \sum_{n=0}^\infty \frac{n+1}{3^n}$. Finally, consider $f''$:
$$f''(x) = \frac{2}{(1 - x)^3} = \sum_{n=0}^\infty (n+2)(n+1) x^{n} = \sum_{n=0}^\infty (n^2 + 3n + 2) x^{n}$$
When you plug in $\frac{1}{3}$ once more, you find $f''(\frac{1}{3}) = \sum_{n=0}^\infty \frac{n^2 + 3n + 2}{3^n}$. Now, all you have to do is to express $n^2 + n +1$ as:
$$ n^2 + n + 1 = (n^2 + 3n + 2) - 2 (n+1) + 1 $$
so you get that the sought sum $S$ is:
$$ S = f''\left(\frac{1}{3}\right) - 2 f'\left(\frac{1}{3}\right) + f\left(\frac{1}{3}\right)$$
The remaining computation is not pleasant, but it is definitely doable, and does not involve any new creative ideas.</p>
<hr>
<p>I was a little too careless: the computation I did was for a sum ranging from $n=0$ rather than $n=1$ as in the problem. It is however easy to mend - just subtract the initial term, which is the $n=0$ term $\frac{0^2+0+1}{3^0}=1$.</p>
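<p>As a numerical sanity check (a short Python sketch added here, not part of the original derivation), the partial sums do approach $\frac{11}{4}=2.75$:</p>

```python
# Partial sums of sum_{n>=1} (n^2 + n + 1) / 3^n; the tail beyond n=200 is negligible.
s = sum((n * n + n + 1) / 3**n for n in range(1, 200))
print(s)  # 2.75 = 11/4 (to machine precision)
```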
|
1,712,457 | <blockquote>
<p>Assume $f$ is differentiable over an open interval $I$. Suppose $a<b$ are two numbers in $I$ with $f'(a) < f'(b)$. Show that if $f'(a) < 0 <f'(b)$, then neither $f(a)$ nor $f(b)$ can be the minimum value of $f$ over $[a,b]$.</p>
</blockquote>
<p>Intuitively this makes sense: $f'$ must change sign on $[a,b]$ and thus $f$ will have a relative minimum point where the first derivative is $0$. Since $f$ decreases just after $a$ and increases just before $b$, neither $f(a)$ nor $f(b)$ can be the minimum on the interval.</p>
<p>Is this reasoning fine or do I need to be more mathematical?</p>
| Jared | 138,018 | <p>You have the following picture where $a$ is the left and $b$ is the right:
<a href="https://i.stack.imgur.com/Vcpzz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vcpzz.jpg" alt="enter image description here"></a></p>
<p>$f'(a) < 0$ (given) therefore the function decreases as you move <em>immediately</em> to the right of $x = a$. Therefore there are values of $f(x)$ immediately after $f(a)$ which must be smaller--<strong>$f(a)$ cannot be a minimum</strong>.</p>
<p>$f'(b) > 0$ (given) therefore the function decreases as you move <em>immediately</em> to the <em>left</em> (running it backwards) from $x = b$. Therefore there are values of $f(x)$ immediately <em>before</em> $f(b)$ which must be smaller--<strong>$f(b)$ cannot be a minimum</strong>.</p>
<p>This is <em>not</em> a rigorous proof--rather it is an <em>intuitive</em> proof (a proof sketch). To be fully rigorous you need to show that these points I speak of (that are smaller) exist via an epsilon-delta proof (this is not an "easy" proof and I'm not completely sure how you would go about showing it).</p>
|
359,742 | <p>I have a mathematical problem that leads me to a particular necessity. I need to calculate the convolution of a function for itself for a certain amount of times. </p>
<p>So consider a generic function $f : \mathbb{R} \mapsto \mathbb{R}$ and consider these hypothesis:</p>
<ul>
<li>$f$ is continuous in $\mathbb{R}$.</li>
<li>$f$ is bound, so: $\exists A \in \mathbb{R} : |f(x)| \leq A, \forall x \in \mathbb{R}$.</li>
<li>$f$ is integral-defined, so its area is a real number: $\exists \int_a^bf(x)\mathrm{d}x < \infty, \forall a,b \in \mathbb{R}$. Which implies that such a function tends to zero at infinity.</li>
</ul>
<p><strong>Probability density functions:</strong> Such functions fit the constraints given before. So it might be easier for you to think of $f$ as the pdf of some continuous r.v.</p>
<p>Consider the convolution operation: $a(x) \ast b(x) = c(x)$. I name the variable always $x$.</p>
<p>Consider now the following function:</p>
<p>$$
F^{(n)}(x) = f(x) \ast f(x) \ast \dots \ast f(x), \text{for n times}
$$</p>
<p>I want to evaluate $F^{(\infty)}(x)$. And I would like to know whether there is a generic final result given a function like $f$.</p>
<h3>My trials</h3>
<p>I tried a little in Mathematica using the Gaussian distribution. What happens is that, as $n$ increases, the bell stretches and its peak gets lower and lower until the function almost lies flat on the x axis. It seems like $F^{(\infty)}(x)$ tends to the zero function $y=0$...</p>
<p><img src="https://i.stack.imgur.com/8FDFH.png" alt="Trials in Mathematica"></p>
<p>As $n$ increases, the curves gets lower and lower. </p>
| Bragadeesh | 203,169 | <p>I had a similar question for years. Only recently I was able to solve. So here you go.</p>
<p>As you have mentioned, you can assume $f$ as a pdf of a random variable multiplied by a scaling factor, since it satisfies all the required properties you've mentioned.</p>
<p>So following the approach, let me first consider a function $f(x)$, which is a pdf of a random variable $X$.
Also consider a sequence of $n$ random variables, $X_1 , X_2 , X_3 , \dots , X_n $ that are iid ( Independent and Identically Distributed RVs ) with a pdf $f(x)$.</p>
<p>Now <a href="https://en.wikipedia.org/wiki/Central_limit_theorem" rel="nofollow noreferrer">Central Limit Theorem</a> says that
\begin{equation}
Y = \frac{1}{\sqrt n} \sum\limits_{i=1}^{n} X_i
\end{equation}
converges in distribution to a normal distribution as $n$ approaches $\infty$. But by the <a href="https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf" rel="nofollow noreferrer">sum property of random variables</a>, the pdf of $\sum_i X_i$ is the $n$-fold convolution $f(x)*f(x)*\dots*f(x)$, and the pdf of $Y$ is this convolution rescaled in its argument by $\sqrt n$.</p>
<p>This means that in your case $F^{(n)}(x)$ behaves like $\frac{a^n}{\sqrt n}$ times a normal density in a suitably rescaled variable, which tends to $0$ as $n$ tends to $\infty$ if $|a| \leq 1$, where $a$ is the scaling factor required to normalize the area under the curve of the equivalent pdf to $1$. This is the reason why your function becomes flatter and flatter with increasing $n$. Now try the same experiment after dividing the function by $\frac{a^n}{\sqrt n}$; you should get a smooth bell curve. Hope it helps.</p>
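<p>The flattening can be seen numerically; the following NumPy sketch (added for illustration, with an arbitrary grid) repeatedly convolves a discretized standard normal density with itself, and the peak decays like $1/\sqrt{2\pi n}$:</p>

```python
import numpy as np

# Discretize a standard normal pdf and convolve it with itself repeatedly.
x = np.linspace(-50, 50, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

g = f.copy()
peaks = [g.max()]
for _ in range(5):
    g = np.convolve(g, f, mode="same") * dx   # density of the sum of one more iid copy
    peaks.append(g.max())

print(peaks)  # strictly decreasing, approximately 1/sqrt(2*pi*n)
```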
|
2,520,768 | <p>How would I approach this problem? </p>
<p>Let $(a, b, c) \in \mathbb{Z^3}$ with $a^2 + b^2 = c^2$. Show that:
$$
60 \,\mid\, abc
$$</p>
| John Lou | 404,782 | <p>\begin{align}
n^2 \mod 3 &\equiv 0 \text{ or } 1\\
a^2 + b^2 &\equiv c^2 \mod 3 \quad\text{the following two cases are the only possibilities}\\
0+0 &\equiv 0 \mod 3 \\
0+1 &\equiv 1 \mod 3\\
\end{align}
Regardless, at least one number in the triple is divisible by $3$.</p>
<p>\begin{align}
n^2 \mod 5 &\equiv 0 \text{ or } 1 \text{ or } 4\\
a^2 + b^2 &\equiv c^2 \mod 5 \quad\text{the following four cases are the only possibilities}\\
0+0 &\equiv 0 \mod 5 \\
0+1 &\equiv 1 \mod 5\\
0+4 &\equiv 4 \mod 5\\
4+1 &\equiv 0 \mod 5\\
\end{align}
Regardless, at least one number in the triple is divisible by $5$.</p>
<p>$4\mid abc$ holds if the product $abc$ contains at least two factors of $2$. Note that even if $n^2$ is divisible by $4$, $n$ need not have a factor of $4$: $n$ could be of the form $2 \cdot p_1^{q_1}\cdot p_2^{q_2}\cdots p_n^{q_n}$ with the $p_i$ odd, which is divisible by $2$ but not by $4$. Therefore we must check whether $n^2$ has a factor of $16$. However, we can use $8$ instead, because if $8\mid n^2$ then $16\mid n^2$ (since $8$ is not a perfect square and $n^2$ is).</p>
<p>\begin{align}
\text{if } n &\equiv 1 \mod 2\\
n^2 \mod 8 &\equiv 1 \\
\text{if } n &\equiv 0 \mod 2\\
n^2 \mod 8 &\equiv 0 \text{ or } 4 \\
a^2 + b^2 &\equiv c^2 \mod 8 \quad\text{the following four cases are the only possibilities}\\
0+0 &\equiv 0 \mod 8 \\
0+1 &\equiv 1 \mod 8\\
0+4 &\equiv 4 \mod 8\\
4+4 &\equiv 0 \mod 8\\
\end{align}</p>
<p>Now we can see that the product $abc$ is divisible by $3$, by $4$, and by $5$, hence by $3\cdot 4 \cdot 5 = 60$.</p>
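<p>A brute-force check (a Python sketch added for illustration, using the standard parametrization $a=m^2-n^2$, $b=2mn$, $c=m^2+n^2$, which covers every triple up to order and scaling) supports the claim:</p>

```python
# Every Pythagorean triple is k*(m^2-n^2, 2mn, m^2+n^2) up to order;
# scaling by k preserves divisibility of abc, so checking the generators suffices.
for m in range(2, 30):
    for n in range(1, m):
        a, b, c = m*m - n*n, 2*m*n, m*m + n*n
        assert a*a + b*b == c*c
        assert (a * b * c) % 60 == 0
print("60 divides abc for all generated triples")
```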
|
3,190,828 | <p>Let <span class="math-container">$A\in M_n(\mathbb{C})$</span> be a matrix such that <span class="math-container">$A^n=aA$</span>,where <span class="math-container">$a\in \mathbb{R}-\{0,1\}$</span>.<br>
I wanted to find <span class="math-container">$A$</span>'s eigenvalues and I thought that they are the roots of the polynomial equation <span class="math-container">$x^n=ax$</span>. Is this correct?</p>
| Fred | 380,717 | <p>It is correct: if <span class="math-container">$ \mu $</span> is an eigenvalue of <span class="math-container">$A$</span> with corresponding eigenvector <span class="math-container">$x$</span>, then <span class="math-container">$A^nx= \mu^n x$</span>, hence <span class="math-container">$\mu^nx=a Ax=a \mu x$</span>. Since <span class="math-container">$x \ne 0$</span>, we get <span class="math-container">$ \mu^n=a \mu.$</span></p>
|
3,190,828 | <p>Let <span class="math-container">$A\in M_n(\mathbb{C})$</span> be a matrix such that <span class="math-container">$A^n=aA$</span>,where <span class="math-container">$a\in \mathbb{R}-\{0,1\}$</span>.<br>
I wanted to find <span class="math-container">$A$</span>'s eigenvalues and I thought that they are the roots of the polynomial equation <span class="math-container">$x^n=ax$</span>. Is this correct?</p>
| trancelocation | 467,003 | <p>You have</p>
<ul>
<li><span class="math-container">$A^n-aA=O_{n\times n}$</span></li>
</ul>
<p>So, by the defining property of the minimal polynomial (cf. the <a href="https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton_theorem" rel="nofollow noreferrer">Cayley–Hamilton theorem</a>), a matrix <span class="math-container">$A$</span> satisfies the above equation iff its minimal polynomial <span class="math-container">$m_A(\lambda)$</span> satisfies</p>
<ul>
<li><span class="math-container">$m_A(\lambda)|\lambda(\lambda^{n-1}-a)$</span></li>
</ul>
|
2,080,716 | <p>I have the quadratic form
$$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p>
<p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
| Atul Mishra | 396,163 | <p>If you are familiar with unions and intersections of sets , then it is not a difficult problem.</p>
<p><strong>Your answer should be:</strong> Total sum-(sum of multiple of $3 +$ sum of multiple of $7 -$ sum of multiple of $21)$</p>
<p>Since $21$ is LCM of $3$ & $7$</p>
|
125,116 | <p>Is there a rotation representation that can also represent "turns", instead of collapsing coincident rotations into the same representation?</p>
<p>In 2D, a simple angle satisfies this, as it can have additional multiples of $2\pi$. For example, rotating by a turn and a half would be $3\pi$.</p>
<p>Is there something similar for 3D rotations? Does the concept even make sense there? Quaternions don't work for this since they only have two representations of any given rotation. Rotation vectors ($\theta\hat{e}$) seem to work, though they are very hard to work with.</p>
<p>EDIT: My objective with this is to extend quaternion spherical interpolation to rotations of more than 180° in terms of beginning and end "orientation with turns" objects, so you could, for example, interpolate over an entire revolution using the same machinery as you would with normal small rotation interpolation.</p>
| P Vanchinathan | 22,878 | <p>The resolution in 2D that you suggested may also be viewed as
going from the circle to its universal covering space:
$\mathbb{R}\to S^1$.</p>
<p>So the same trick should work: take the universal cover of SO(3).</p>
|
203,464 | <p>I would like to exclude the point <code>{x=0,y=0}</code> in the function definition</p>
<pre><code>f = Function[{x, y}, {x/(x^2 + y^2), -(y/(x^2 + y^2))}]
</code></pre>
<p>So far I tried <code>ConditionalExpression</code>and <code>/;</code> without success.</p>
<p>Thanks!</p>
| N.J.Evans | 11,777 | <p>As with other solutions, you have to do some cleaning up afterward, but you can use <code>Table</code> with lists defining the iterators:</p>
<pre><code>Flatten[Table[{i + j}, {i, {a, b, c}}, {j, {d, e, f}}], 1]
</code></pre>
<p>If you really want to map it onto the lists, you can use the following, but the result is the same:</p>
<pre><code>Flatten[Table[{i + j}, {i, #1}, {j, #2}], 1] & @@ {{a, b, c}, {d, e, f}}
</code></pre>
<p>either option outputs:</p>
<pre><code>{{a + d}, {a + e}, {a + f}, {b + d}, {b + e}, {b + f}, {c + d}, {c + e}, {c + f}}
</code></pre>
|
12,949 | <p>Let $\kappa$ be an infinite cardinal. Then there exists at least one <a href="http://en.wikipedia.org/wiki/Real-closed_field">real-closed field</a> of cardinality $\kappa$ (e.g. <a href="http://en.wikipedia.org/wiki/Lowenheim-Skolem">Lowenheim-Skolem</a>; or, start with a function field over $\mathbb{Q}$ in $\kappa$ indeterminates, choose an ordering and a real-closure). </p>
<p>But I think there are many more, namely $2^{\kappa}$ pairwise nonisomorphic real-closed fields of cardinality $\kappa$. This is equal to the number of binary operations on a set of infinite cardinality $\kappa$, so is the largest conceivable number.</p>
<p>As for motivation -- what can I tell you, mathematical curiosity is a powerful thing. One application of this which I find interesting is that there would then be $2^{2^{\aleph_0}}$ conjugacy classes of order $2$ subgroups of the automorphism group of the field $\mathbb{C}$. </p>
<p><b>Addendum</b>: Bonus points (so to speak) if you can give a general model-theoretic criterion for a theory to have the largest possible number of models which yields this result as a special case.</p>
| Joel David Hamkins | 1,946 | <p>In the countable case, the bound of 2<sup>ω</sup> is realized, since any countable real-closed field will contain the rational numbers and fill at most countably many cuts in the rationals with LUBs. But we can arrange that any given cut is filled by a real closed subfield of R containing that real. So there must be 2<sup>ω</sup> many non-isomorphic countaable real closed fields.</p>
<p>In the general case, because the models have an order, you can easily make this order have different cofinalities, by building elementary chains of different lengths. That is, just use your Lowenheim Skolem construction to add another point on top of the previous model, and continue for δ steps. This will produce an elementary extension of size κ whose order has cofinality δ, for any regular δ up to κ.
So this gives many more models, but doesn't quite answer your 2<sup>κ</sup> question. I'm inclined to agree with you and expect that it must be the maximal number for all κ on general grounds.</p>
|
2,301,198 | <p>Solve the initial value problem for the sequence $\left \{ u_{n}| n \in \mathbb{N} \right \}$ satisfying the recurrence relation:
$u_n - 5u_{n-1} + 6u_{n-2} = 0$ with $u_0 = 1$ and $u_1 = 1$.</p>
<p>Ive gotten the general solution to be $u_n = A(2)^n + B(3)^n$. </p>
<p>Once I sub the initial values: </p>
<p>$u_0 = A + B = 1$</p>
<p>$u_1 = 2A + 3B = 1$</p>
<p>And I'm unsure how to solve this system. Any help appreciated, thanks. </p>
| Michael Rozenberg | 190,319 | <p>$$u_n-2u_{n-1}=3(u_{n-1}-2u_{n-2}),$$
which says that
$u_{n}-2u_{n-1}=-3^{n-1}$.</p>
<p>Thus,
$$u_n-2u_{n-1}=-3^{n-1}$$
$$2^1u_{n-1}-2^2u_{n-2}=-2^13^{n-2}...$$
$$2^{n-1}u_1-2^nu_0=-2^{n-1}3^0,$$
which after summing gives:
$$u_n-2^nu_0=-3^{n-1}-2\cdot3^{n-2}-...-2^{n-1}$$ or
$$u_n=2^n-\frac{3^n-2^n}{3-2}$$ or
$$u_n=2^{n+1}-3^n.$$
Done!</p>
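<p>The closed form can be checked directly against the recurrence (a short Python sketch added for verification):</p>

```python
# Verify u_n = 2^(n+1) - 3^n against u_n = 5 u_{n-1} - 6 u_{n-2}, u_0 = u_1 = 1.
u = [1, 1]
for n in range(2, 20):
    u.append(5 * u[-1] - 6 * u[-2])

closed = [2**(n + 1) - 3**n for n in range(20)]
print(u == closed)  # True
```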
|
188,336 | <p>Let $|\cdot|_1$ and $|\cdot|_2$ be two norms on a field $\mathbb F$. We call the two norms equivalent if every Cauchy-sequence with respect to $|\cdot|_1$ is also a Cauchy-sequence with respect to $|\cdot|_2$. Prove the following statement:</p>
<p>$$|\cdot|_1\sim|\cdot|_2\quad\Leftrightarrow\quad\exists \alpha\in\mathbb{R}_{>0}: \forall x\in\mathbb F: |x|_1=|x|_2^\alpha.$$</p>
<p>The direction "$\Leftarrow$" is straightforward: Let there be an $\alpha$ with the property above, let $(a_i)_{i\in\mathbb N}$ be a Cauchy sequence with respect to $|\cdot|_1$ and let $\varepsilon_2\in\mathbb{R}_{>0}$ be arbitrary. Set $\varepsilon_1:=\varepsilon_2^\alpha$. Since $(a_i)_{i\in\mathbb N}$ is a Cauchy sequence, there exists an $N\in\mathbb N$ such that
$$\forall n,m>N: |a_m-a_n|_1<\varepsilon_1.$$
Since $|x|_1=|x|_2^\alpha$ and by the definition of $\varepsilon_1$ we get that
$$\forall n,m>N: |a_m-a_n|_2^\alpha<\varepsilon_2^\alpha$$
which is equivalent to
$$\forall n,m>N: |a_m-a_n|_2<\varepsilon_2$$
which means that $(a_i)_{i\in\mathbb N}$ is a Cauchy sequence with respect to $|\cdot|_2$. </p>
<p>For the other direction (which is probably harder) we may assume that the being a Cauchy sequence is the same property for both norms but I don't see how I can construct such an $\alpha$ from this fact.</p>
<p>Remark 1: This is an exercise number 5 on page 7 in the book "p-adic Numbers, p-adic Analysis and Zeta-Functions" (Second Edition) by Neal Koblitz. </p>
<p>Remark 2: The right side is the notion of norm-equivalence that I am familiar with, but in this book it is explicitly defined in the way from this post. </p>
| user29999 | 29,999 | <p>Hint:
\begin{equation}
|a_m-a_n|_1 < \varepsilon \Leftrightarrow |a_m-a_n|_{2}^{\alpha}<\varepsilon \Leftrightarrow |a_m-a_n|_{2}< \varepsilon^{1/\alpha}.
\end{equation}</p>
|
188,336 | <p>Let $|\cdot|_1$ and $|\cdot|_2$ be two norms on a field $\mathbb F$. We call the two norms equivalent if every Cauchy-sequence with respect to $|\cdot|_1$ is also a Cauchy-sequence with respect to $|\cdot|_2$. Prove the following statement:</p>
<p>$$|\cdot|_1\sim|\cdot|_2\quad\Leftrightarrow\quad\exists \alpha\in\mathbb{R}_{>0}: \forall x\in\mathbb F: |x|_1=|x|_2^\alpha.$$</p>
<p>The direction "$\Leftarrow$" is straightforward: Let there be an $\alpha$ with the property above, let $(a_i)_{i\in\mathbb N}$ be a Cauchy sequence with respect to $|\cdot|_1$ and let $\varepsilon_2\in\mathbb{R}_{>0}$ be arbitrary. Set $\varepsilon_1:=\varepsilon_2^\alpha$. Since $(a_i)_{i\in\mathbb N}$ is a Cauchy sequence, there exists an $N\in\mathbb N$ such that
$$\forall n,m>N: |a_m-a_n|_1<\varepsilon_1.$$
Since $|x|_1=|x|_2^\alpha$ and by the definition of $\varepsilon_1$ we get that
$$\forall n,m>N: |a_m-a_n|_2^\alpha<\varepsilon_2^\alpha$$
which is equivalent to
$$\forall n,m>N: |a_m-a_n|_2<\varepsilon_2$$
which means that $(a_i)_{i\in\mathbb N}$ is a Cauchy sequence with respect to $|\cdot|_2$. </p>
<p>For the other direction (which is probably harder) we may assume that the being a Cauchy sequence is the same property for both norms but I don't see how I can construct such an $\alpha$ from this fact.</p>
<p>Remark 1: This is an exercise number 5 on page 7 in the book "p-adic Numbers, p-adic Analysis and Zeta-Functions" (Second Edition) by Neal Koblitz. </p>
<p>Remark 2: The right side is the notion of norm-equivalence that I am familiar with, but in this book it is explicitly defined in the way from this post. </p>
| Sangchul Lee | 9,340 | <p>To prove the direction in question, assume otherwise. That is, there exists $x, y \in \Bbb{F}^{\times}$ such that</p>
<p>$$ \frac{\log|x|_1}{\log|x|_2} = \alpha(x) \neq \alpha(y) = \frac{\log|y|_1}{\log|y|_2}. \tag{1} $$</p>
<p>The naive idea of the proof is to exaggerate this difference in a deliberate way.</p>
<p>The condition $(1)$ means that two vectors</p>
<p>$$ v_x = (\log|x|_1, \log|x|_2) \quad \text{and} \quad v_y = (\log|y|_1, \log|y|_2) $$</p>
<p>are linearly independent, forming a basis of $\Bbb{R}^2$. In particular,</p>
<p>$$L = v_x \Bbb{Z} \oplus v_y \Bbb{Z} = \{ m v_x + n v_y : m, n \in \Bbb{Z}\}$$</p>
<p>is a lattice on $\Bbb{R}^2$. Then for any vector $v = pv_x + qv_y \in \Bbb{R}^2$, the lattice point $w = [p]v_x + [q]v_y \in L$ satisfies</p>
<p>$$ \|v - w\| \leq \|v_x\| + \|v_y\| =: R.$$</p>
<p>In particular, any closed ball of radius $R$ contains at least one lattice point of $L$. Thus for each $k = 1, 2, 3, \cdots$, we can find some a sequence of pairs of integers $(m_k, n_k)$ such that </p>
<p>$$ m_k v_x + n_k v_y \in [2kR, 2(k+1)R] \times (-\infty, -kR]. $$</p>
<p>Then for</p>
<p>$$z_k = x^{m_k}y^{n_k} \in \Bbb{F}$$</p>
<p>we have</p>
<p>$$ e^{2kR} \leq |z_k|_1 \leq e^{2(k+1)R} \quad \text{and} \quad |z_k|_2 \leq e^{-kR}. $$</p>
<p>Now, for any $l+2 \leq k$ we have</p>
<p>$$|z_k - z_l|_1 \geq |z_k|_1 - |z_l|_1 \geq e^{2Rk} - e^{2R(l+1)} \geq e^{2R(l+2)} - e^{2R(l+1)} \geq e^{2R} - 1 > 0$$</p>
<p>and $(z_k)$ is not Cauchy in $|\cdot|_1$. But clearly $(z_k)$ is Cauchy in $|\cdot|_2$. Thus two norms are not equivalent and therefore the proof is completed.</p>
|
295,517 | <p>My math is not incredibly strong and perhaps I have just not been searching for the right terms, but I have a summation that is part of an algorithm I've been working on and would really like to reduce it to just a formula, but am really struggling to find a solution (if one exists).</p>
<p>$\sum_{i=1}^{n}\frac{5}{i^{0.35}}$</p>
<p>Can anyone point me in the right direction as to how to approach this, or is likely not possible to reduce down to just a formula? Thanks very much in advance.</p>
| GEdgar | 442 | <p>If you are interested in how it behaves for large $n$, you could try an approximation like
$$
\sum_{i=1}^n \frac{5}{i^{0.35}} \approx \int_{1/2}^{n+1/2}\frac{5\;dx}{x^{0.35}}
$$
For example,
$$
\sum_{i=1}^{100} \frac{5}{i^{0.35}} \approx 148.93,\qquad
\int_{1/2}^{100.5}\frac{5\;dx}{x^{0.35}}\approx 149.08 .
$$</p>
|
1,120,013 | <p>Let $X$ and $Y$ be two random variables (say real numbers, or vectors in some vector space). It seems to me that the following is true:</p>
<p>$$E[X \mid E[X \mid Y]] = E[X \mid Y]$$</p>
<p>Note that $E[X \mid Y]$ is a random variable in its own right. Also note that equality here is point-wise, for every point in the sample space of the joint distribution on $(X,Y)$. My question, assuming I'm not missing something and the above is true, is whether this law has a name, or is written down / proved somewhere.</p>
| pre-kidney | 34,662 | <p>Let $Z=E[X\ | \ Y]$. Your equation states: $E[X \ | \ Z]=Z$. This follows from the following fact.</p>
<p><strong>Tower Property of Conditional Expectation:</strong></p>
<p>$$E[E[X\ | \ \mathcal{F}]\ | \ \mathcal{G}]=E[X\ | \ \mathcal{G}],\text{ whenever }\mathcal{G}\subset \mathcal{F}.$$</p>
<p><strong>Proof of your equation:</strong></p>
<p>We apply the tower property with $\mathcal{G}=\sigma(Z)$ and $\mathcal{F}=\sigma(Y)$. Note that $\sigma(Z)\subset \sigma(Y)$ follows from the construction of $Z$ as a conditional expectation w.r.t. $Y$.</p>
<p>Plugging in to the tower property,
$$
\begin{align*}
E[E[X\ | \ \sigma(Y)]\ | \ \sigma(Z)]&=E[X\ | \ \sigma(Z)]\\
\implies E[Z\ | \ Z]&=E[X\ | \ Z]\\
\implies Z&=E[X\ | \ Z].
\end{align*}$$</p>
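<p>For a concrete finite check (an illustrative Python sketch with a made-up discrete distribution, chosen so that $Z=E[X\mid Y]$ takes the same value for two different values of $Y$):</p>

```python
from collections import defaultdict
from fractions import Fraction as F

# Outcomes: (probability, x, y), chosen so E[X|Y=0] = E[X|Y=1] = 1 and E[X|Y=2] = 5.
outcomes = [(F(1, 6), 0, 0), (F(1, 6), 2, 0), (F(1, 3), 1, 1), (F(1, 3), 5, 2)]

def cond_exp(key):
    """E[X | key] as a dict: value of key -> conditional expectation."""
    num, den = defaultdict(F), defaultdict(F)
    for p, x, y in outcomes:
        k = key(x, y)
        num[k] += p * x
        den[k] += p
    return {k: num[k] / den[k] for k in num}

EXY = cond_exp(lambda x, y: y)        # Z = E[X|Y], as a function of y
EXZ = cond_exp(lambda x, y: EXY[y])   # E[X|Z], as a function of the value of Z

print(EXY)  # {0: Fraction(1, 1), 1: Fraction(1, 1), 2: Fraction(5, 1)}
print(EXZ)  # {Fraction(1, 1): Fraction(1, 1), Fraction(5, 1): Fraction(5, 1)}, i.e. E[X|Z] = Z
```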
|
308,520 | <p>The DE is $y' = -y + ty^{\frac{1}{2}}$. </p>
<p>$2 \le t \le 3$</p>
<p>$y(2) = 2$</p>
<p>I tried to see if it was in the <a href="http://www.sosmath.com/diffeq/first/lineareq/lineareq.html" rel="nofollow">linear form</a>. I got:</p>
<p>$$\frac{dy}{dt} + y = ty^{\frac{1}{2}}$$</p>
<p>The RHS is not a function of <code>t</code> alone. I also tried separation of variables, but I couldn't isolate the <code>y</code> from the term $ty^{\frac{1}{2}}$. Any hints?</p>
| Ron Gordon | 53,268 | <p>Let $y=u^2$, then you can cancel a factor of $u$ and get</p>
<p>$$2 u' + u = t$$</p>
<p>for which you can apply an integrating factor of $e^{t/2}$ to both sides and get</p>
<p>$$\frac{d}{dt} [u e^{t/2}] = \frac{t}{2} e^{t/2}$$</p>
<p>Integrating both sides, we get the general solution:</p>
<p>$$u(t) = t-2 + C e^{-t/2}$$</p>
<p>where $C$ is a constant of integration. The solution is then $y(t)=u(t)^2$.</p>
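<p>With the initial condition $y(2)=2$ one gets $C=\sqrt{2}\,e$ (from $u(2)=\sqrt 2$), and the solution can be verified numerically (a sketch added for illustration):</p>

```python
import numpy as np

C = np.sqrt(2) * np.e                      # from y(2) = 2, i.e. u(2) = sqrt(2)
u = lambda t: t - 2 + C * np.exp(-t / 2)
y = lambda t: u(t) ** 2                    # u > 0 on [2, 3], so sqrt(y) = u

t = np.linspace(2, 3, 101)
h = 1e-6
dy = (y(t + h) - y(t - h)) / (2 * h)       # central-difference derivative
residual = dy - (-y(t) + t * np.sqrt(y(t)))

print(np.max(np.abs(residual)))  # tiny: y solves y' = -y + t*sqrt(y)
```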
|
308,520 | <p>The DE is $y' = -y + ty^{\frac{1}{2}}$. </p>
<p>$2 \le t \le 3$</p>
<p>$y(2) = 2$</p>
<p>I tried to see if it was in the <a href="http://www.sosmath.com/diffeq/first/lineareq/lineareq.html" rel="nofollow">linear form</a>. I got:</p>
<p>$$\frac{dy}{dt} + y = ty^{\frac{1}{2}}$$</p>
<p>The RHS is not a function of <code>t</code> alone. I also tried separation of variables, but I couldn't isolate the <code>y</code> from the term $ty^{\frac{1}{2}}$. Any hints?</p>
| Kaster | 49,333 | <p>I'll just add that an integrating factor is not necessary. From the equation
$$
2z'+z = t
$$
you can assume that the particular solution is linear, $z^p = At+B$, and substitute it into the ODE
$$
2A+At+B=t
$$
from which you can easily find that $A = 1, B = -2$, so $z^p = t - 2$. The general solution of the inhomogeneous problem is the sum of the general solution of the homogeneous problem and a particular solution of the inhomogeneous problem. The homogeneous one can be easily solved, giving $z_0 = Ce^{-\frac 12 t}$, so $z = t-2+Ce^{-\frac 12 t}$</p>
|
490,064 | <p>Solve the Cauchy problem, $\forall t \in \mathbb{R}$,
$$ \begin{cases}
u''(t) + u(t) = |t|\\
u(0)=1, \quad u'(0) = -1
\end{cases} $$</p>
<p>The solution to the homogeneous equation is $A\cos(t) + B \sin(t)$. Empirically, $|t|$ is "more or less" a particular solution, however it is not differentiable in $0$... What is the fastest way to find a particular solution two times differentiable?</p>
| Anthony Carapetis | 28,513 | <p>This isn't the easiest or most systematic way to get a solution, but I had a bit of fun finding it so I'll post it anyway. </p>
<p>Let's look for solutions of the form $u(t) = |t|f(t)$ where $f$ is twice differentiable with $f(0)=0$. Such a function is twice differentiable everywhere, with second derivative $$ u''(t) = \operatorname{sign}(t)(2 f'(t) + tf''(t))$$ where we use the convention that the sign of 0 is 0. The differential equation is then </p>
<p>$$\operatorname{sign}(t)(2 f'(t) + tf''(t) + tf(t)) = \operatorname{sign}(t) t,$$ so a solution of $2 f'(t) + tf''(t) + tf(t)=t$ would suffice. The substitution $a(t) = t(f(t)-1)$ transforms this to $$a''(t) + a(t) = 0$$ with initial conditions $a(0) = 0, a'(0) = -1$; which has solution $a(t) = -\sin(t)$. Thus we have $f(t) = 1 - \operatorname{sinc}(t)$, giving a particular solution $$u(t) = |t| - \operatorname{sign}(t)\sin(t).$$</p>
|
3,858,362 | <p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span>
We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4>0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
| José Carlos Santos | 446,262 | <p>Use the fact that<span class="math-container">\begin{align}x^3-4x^2-4x+16=0&\iff x(x^2-4)-4(x^2-4)=0\\&\iff(x-4)(x^2-4)=0.\end{align}</span></p>
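<p>Numerically (a Python sketch added for illustration): the cubic's roots are $4$, $2$, $-2$, and only $x=-2$ lies in the domain $(-\infty;1)\cup(4;+\infty)$:</p>

```python
import numpy as np

roots = np.roots([1, -4, -4, 16])   # coefficients of x^3 - 4x^2 - 4x + 16
# keep real roots lying strictly inside the domain x^2 - 5x + 4 > 0 (x < 1 or x > 4);
# the small tolerance guards against floating-point noise at the excluded roots 2 and 4
in_domain = [r.real for r in roots
             if abs(r.imag) < 1e-9 and r.real**2 - 5*r.real + 4 > 1e-6]

print(sorted(round(r) for r in in_domain))  # [-2]
```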
|
3,008,162 | <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be well-ordered sets, and suppose <span class="math-container">$f:A\to B$</span> is an
order-reversing function. Prove that the image of <span class="math-container">$f$</span> is finite.</p>
<p>I started by supposing not. Then we must have that the image of <span class="math-container">$f$</span>, or the set <span class="math-container">$\{f(x)\in B:x\in A\}$</span>, has infinite cardinality. If this is the case then we must have that <span class="math-container">$\vert{\{f(x)\in B:x\in A\}}\vert\geq \aleph_0$</span> which also means there exists a strictly order-preserving function <span class="math-container">$g:\mathbb{N}\to \{f(x)\in B:x\in A\}$</span>. </p>
<p>The contradiction I am trying to reach is that this would imply that there exists an order-reversing function from <span class="math-container">$\mathbb{N}$</span> to an infinite image which is a subset of a well-ordered set which can't happen but I don't know how to close the gap in the argument. </p>
| Dante Grevino | 616,680 | <p>Let <span class="math-container">$C=f(A)$</span> the image of <span class="math-container">$f$</span> with the order induced by <span class="math-container">$B$</span>. Every non-empty subset of <span class="math-container">$C$</span> has minimum and maximum. This implies that every element in <span class="math-container">$C$</span> distinct from the minimum has immediate predecessor and every element distinct from the maximum has immediate successor. Let <span class="math-container">$c_1$</span> the minimum of <span class="math-container">$C$</span>. For every natural number <span class="math-container">$n$</span>, we choose <span class="math-container">$c_{n+1}$</span> in <span class="math-container">$C$</span> as the immediate succesor of <span class="math-container">$c_n$</span> if <span class="math-container">$c_n$</span> is distinct from the maximum of <span class="math-container">$C$</span>. If this process stop we are done. Otherwise we have a a strictly order preserving function <span class="math-container">$g:\mathbb{N}\to C$</span> wich is not suryective. Let <span class="math-container">$c$</span> be the minimum of <span class="math-container">$C\setminus g(\mathbb{N})$</span>. And let <span class="math-container">$d$</span> the immediate predecessor of <span class="math-container">$c$</span> in <span class="math-container">$C$</span>. By election of <span class="math-container">$c$</span>, there exists a natural number <span class="math-container">$k$</span> such that <span class="math-container">$d=c_k$</span>. Then <span class="math-container">$c=c_{k+1}$</span> is in <span class="math-container">$g(\mathbb{N})$</span>. A contradiction.</p>
<p>EDIT:
We can also follow your approach in this way: For every natural number <span class="math-container">$n$</span>, let <span class="math-container">$a_n$</span> in <span class="math-container">$A$</span> be such that <span class="math-container">$f(a_n)=c_n=g(n)$</span>. Let <span class="math-container">$D=(a_n)_{n\in\mathbb{N}}$</span> and consider the restriction <span class="math-container">$f:D\to f(D)$</span>. Then the bijection <span class="math-container">$f^{-1}\circ g:\mathbb{N}\to D$</span> is a strictly order reversing function and <span class="math-container">$D$</span> is a well-ordered set. This is impossible by the argument above.</p>
|
300,163 | <p>I need to integrate $z/\bar z$ (where $\bar z$ is the conjugate of $z$) counterclockwise in the upper half ($y>0$) of a donut-shaped ring. The outer circle is $|z|=4$ and the inner circle is $|z|=2$. </p>
<p><strong>My method:</strong></p>
<p>$z/\bar z = e^{2i\theta}$ - which is entire over the complex plane.
So with respect to $d\theta$, we get the integral $re^{3i\theta}\,d\theta$, which we can then evaluate at $r=4$ (from $\pi$ to $0$) and at $r=2$ (from $0$ to $\pi$).</p>
<p><strong>Two questions:</strong></p>
<p>1) As integrating in the counterclockwise direction, surely I shouldn't be getting a negative number?</p>
<p>2) Via the deformation theorem, as the function is holomorphic on both circles and the region between them, should I not be getting 0? </p>
| Daniel Mckenzie | 60,074 | <p>1) $e^{2i\theta}$ is not holomorphic, and therefore not entire. There are many ways to check this, but it suffices to observe that $\frac{\partial}{\partial \bar{z}}\frac{z}{\bar{z}} = -\frac{z}{\bar{z}^2}\neq 0$. See the discussion of the Wirtinger derivative in the definition section here: <a href="http://en.wikipedia.org/wiki/Holomorphic_function" rel="nofollow">wikipedia</a>.</p>
<p>2) The deformation theorem you refer to is about integrating holomorphic functions over contours. It looks like you are trying to evaluate the area integral:
\begin{equation}
\int_{r=2}^{4}\int_{\theta=0}^{\pi}\frac{z}{z^{*}}\,r\,dr\,d\theta
\end{equation}
So even if the function were holomorphic, you would not necessarily get zero.</p>
|
3,102,905 | <p>I have the following sequence <span class="math-container">$$(x_{n})_{n\geq 1}, \ x_{n}=ac+(a+ab)c^{2}+...+(a+ab+...+ab^{n})c^{n+1}$$</span>
Also I know that <span class="math-container">$a,b,c\in \mathbb{R}$</span> and <span class="math-container">$|c|<1,\ b\neq 1, \ |bc|<1$</span>
I need to find the limit of <span class="math-container">$x_{n}$</span>.</p>
<p>The result should be <span class="math-container">$\frac{ac}{(1-bc)(1-c)}$</span>
I am missing something with these two sums, which are geometric progressions. Each sum should start with <span class="math-container">$1$</span>, but why? If k starts from 0, the first terms are <span class="math-container">$bc$</span> and <span class="math-container">$c$</span>, right?</p>
<p>My attempt:
<span class="math-container">$x_{n}=a(c+c^{2}(1+b)+...+c^{n+1}(1+b+...+b^{n}))$</span></p>
<p><span class="math-container">$1+b+...+b^{n}=\frac{b^{n+1}-1}{b-1}$</span> so <span class="math-container">$$x_{n}=a\sum_{k=0}^{n}c^{k+1}\cdot \frac{b^{k+1}-1}{b-1}\Rightarrow x_{n}=\frac{a}{b-1}\sum_{k=0}^{n}c^{k+1}\cdot (b^{k+1}-1)=\frac{a}{b-1}(\sum_{k=0}^{n}c^{k+1}\cdot b^{k+1}-\sum_{k=0}^{n}c^{k+1})$$</span></p>
<p>Now I take separately each sum to calculate.</p>
<p><span class="math-container">$\sum_{k=0}^{n}(bc)^{k+1}=bc+b^2c^2+...+b^{n+1}c^{n+1}$</span></p>
<p>It's a geometric progression with <span class="math-container">$r=bc$</span>, right? But if I calculate the sum, in the end I don't get the right answer. I get the right answer if this progression starts with <span class="math-container">$1$</span> as its first term. Why?</p>
<p>The same thing happens with the second sum. If the first term is <span class="math-container">$1$</span>, I'll get the right answer.</p>
<p>Why do I need to add/subtract a <span class="math-container">$1$</span> to get the answer? Why don't I get the correct answer just by solving the progressions with the first terms <span class="math-container">$bc$</span> and <span class="math-container">$c$</span>?</p>
| John Omielan | 602,049 | <p>You seem to be doing everything correctly. Using your final value for <span class="math-container">$x_n$</span>, and taking the limit as <span class="math-container">$n \to \infty$</span>, I get, using the sum of an infinite geometric series being <span class="math-container">$\frac{a}{1-r}$</span>, where <span class="math-container">$a$</span> is the first term and <span class="math-container">$r$</span> is the common ratio where <span class="math-container">$\left|r\right| \lt 1$</span>, of</p>
<p><span class="math-container">$$\cfrac{a}{b-1}\left(\cfrac{bc}{1-bc} - \cfrac{c}{1-c}\right) \tag{1}\label{eq1}$$</span></p>
<p>For the part inside the brackets, multiply the first term's numerator & denominator by <span class="math-container">$1-c$</span> and the second term's numerator & denominator by <span class="math-container">$1-bc$</span>, to get a common denominator, with this then becoming</p>
<p><span class="math-container">$$\cfrac{bc - bc^2 - c + bc^2}{\left(1-bc\right)\left(1-c\right)}$$</span>
<span class="math-container">$$\cfrac{c\left(b-1\right)}{\left(1-bc\right)\left(1-c\right)} \tag{2}\label{eq2}$$</span></p>
<p>Substituting this into \eqref{eq1}, then removing the common factor of <span class="math-container">$b - 1$</span> (as <span class="math-container">$b \neq 1$</span>) gives your expected result of</p>
<p><span class="math-container">$$\cfrac{ac}{\left(1-bc\right)\left(1-c\right)} \tag{3}\label{eq3}$$</span></p>
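<p>As a quick numerical sanity check of this limit (a sketch using arbitrary sample values for $a$, $b$, $c$ satisfying the stated conditions), one can compare partial sums of the series against the closed form:</p>

```python
from math import isclose

a, b, c = 2.0, 0.5, 0.3        # sample values with |c| < 1, |bc| < 1, b != 1

# partial sum x_n = sum_{k=0}^{n} (a + ab + ... + a*b^k) * c^(k+1)
inner, x_n = 0.0, 0.0
for k in range(200):
    inner += a * b**k          # inner = a + ab + ... + a*b^k
    x_n += inner * c**(k + 1)

limit = a * c / ((1 - b*c) * (1 - c))
assert isclose(x_n, limit, rel_tol=1e-12)
```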
|
2,292,324 | <p>I know what the answer to this question is, but I am not sure how the answer was reached and I would really like to understand it! I am omitting the base case because it is not relevant for my question.</p>
<p>Inductive hypothesis:</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{n(n+1)} = \frac{n}{n+1}$$ is true when $n = k$ and $k > 1$</p>
<p>Therefore: $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1}$$</p>
<p>Inductive step:</p>
<p>Prove that $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+1+1} = \frac{k+1}{k+2}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \left[\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)}\right] + \frac{1}{(k+1)(k+2)}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1} + \frac{1}{(k+1)(k+2)}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+2}$$</p>
<p>What I am confused about is where the $\frac{1}{(k+1)(k+2)}$ comes from in the first line of the inductive step. Can someone please explain this in a little more detail? The source of the answer explains it as "break last term from sum", but I am unclear on what that means.</p>
| John | 7,163 | <p>You want to show</p>
<p>$$\sum_{j=1}^n \frac{1}{j(j+1)} = \frac{n}{n+1}$$</p>
<p>The inductive step involves assuming it holds for $n=k$ and then showing that it also holds for $n=k+1$. So you assume</p>
<p>$$\sum_{j=1}^k \frac{1}{j(j+1)} = \frac{k}{k+1}$$</p>
<p>and show</p>
<p>$$\sum_{j=1}^{k+1} \frac{1}{j(j+1)} = \frac{k+1}{(k+1)+1}.$$</p>
<p>The left side of the sum above can also be written like this:</p>
<p>$$\sum_{j=1}^{k+1} \frac{1}{j(j+1)} = \left[\sum_{j=1}^{k} \frac{1}{j(j+1)}\right] + \frac{1}{(k+1)(k+2)}.$$</p>
<p>This is breaking the last term from the sum. Now you can substitute in the inductive assumption for the sum in square brackets:</p>
<p>$$\sum_{j=1}^{k+1} \frac{1}{j(j+1)} = \left[\frac{k}{k+1}\right] + \frac{1}{(k+1)(k+2)}.$$</p>
<p>Now you need to show that</p>
<p>$$\frac{k}{k+1} + \frac{1}{(k+1)(k+2)} = \frac{k+1}{k+2},$$</p>
<p>and you're done.</p>
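<p>The closed form itself can also be spot-checked exactly with rational arithmetic (a quick sketch, independent of the induction):</p>

```python
from fractions import Fraction

for n in range(1, 60):
    s = sum(Fraction(1, j*(j + 1)) for j in range(1, n + 1))
    assert s == Fraction(n, n + 1)   # the sum telescopes to n/(n+1)
```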
|
2,292,324 | <p>I know what the answer to this question is, but I am not sure how the answer was reached and I would really like to understand it! I am omitting the base case because it is not relevant for my question.</p>
<p>Inductive hypothesis:</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{n(n+1)} = \frac{n}{n+1}$$ is true when $n = k$ and $k > 1$</p>
<p>Therefore: $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1}$$</p>
<p>Inductive step:</p>
<p>Prove that $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+1+1} = \frac{k+1}{k+2}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \left[\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)}\right] + \frac{1}{(k+1)(k+2)}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1} + \frac{1}{(k+1)(k+2)}$$</p>
<p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+2}$$</p>
<p>What I am confused about is where the $\frac{1}{(k+1)(k+2)}$ comes from in the first line of the inductive step. Can someone please explain this in a little more detail? The source of the answer explains it as "break last term from sum", but I am unclear on what that means.</p>
| Angina Seng | 436,618 | <p>The inductive hypothesis is
$$\frac1{1\cdot2}+\frac1{2\cdot3}+\cdots+\frac{1}{k(k+1)}
=\frac{k}{k+1}.\tag1$$
You need to prove that (1) implies the statement got from (1) by replacing
$k$ by $k+1$. This is
$$\frac1{1\cdot2}+\frac1{2\cdot3}+\cdots+\frac{1}{(k+1)(k+2)}
=\frac{k+1}{k+2}\tag2$$
but instead of (2) you have
$$\frac1{1\cdot2}+\frac1{2\cdot3}+\cdots+\frac{1}{k(k+1)}
=\frac{k+1}{k+2}$$
which is wrong, and this is causing your confusion. The inductive
proof starts by recognising that
$$\frac1{1\cdot2}+\frac1{2\cdot3}+\cdots+\frac{1}{(k+1)(k+2)}
=\left[\frac1{1\cdot2}+\frac1{2\cdot3}+\cdots+\frac{1}{k(k+1)}
\right]+\frac1{(k+1)(k+2)}.$$</p>
|
2,241,100 | <p>Please someone help me solve the following equation in terms of $y$:</p>
<blockquote>
<p><strong>$\frac{y^2}{2}+y = \frac{x^3}{3}+\frac{x^2}{2}+c_1$</strong></p>
</blockquote>
<p>The calculator gives me:</p>
<blockquote>
<p>$y = \frac{1}{3}(\sqrt{3}\sqrt{c_1+2x^3+3x^2+3}-3), -\frac{1}{3}(\sqrt{3}\sqrt{c_1+2x^3+3x^2+3}-3)$</p>
</blockquote>
<p>I do not know the procedure to get to the answer. Somebody help please. Thank you.</p>
| Ahmed S. Attaalla | 229,023 | <p>Multiply both sides by $6$ to make things a little nicer.</p>
<p>$$3y^2+6y=2x^3+3x^2+6c_1$$</p>
<p>$$3y^2+6y-2x^3-3x^2-6c_1=0$$</p>
<p>At this point you should realize that the variable $y$, what we are trying to solve for, is quadratic in the above equation. Although it looks quite messy, the equation is really algebraically in the form:</p>
<p>$$ay^2+by+c=0$$</p>
<p>We know how to deal with quadratics, one option is utilizing the quadratic formula with,</p>
<p>$$a=3$$</p>
<p>$$b=6$$</p>
<p>$$c=-2x^3-3x^2-6c_1$$</p>
<p>Another is to complete the square.</p>
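<p>Carrying the quadratic formula through gives $y = -1 \pm \sqrt{1 + \frac{2x^3+3x^2+6c_1}{3}}$ wherever the expression under the root is nonnegative. A quick numerical check (a sketch with sample values) that both roots satisfy the original relation:</p>

```python
import math

def solve_y(x, c1):
    # roots of 3y^2 + 6y - (2x^3 + 3x^2 + 6*c1) = 0 via the quadratic formula
    C = 2*x**3 + 3*x**2 + 6*c1
    r = math.sqrt(1 + C/3)       # assumes the discriminant is nonnegative
    return (-1 + r, -1 - r)

x, c1 = 1.0, 0.5
for y in solve_y(x, c1):
    # each root satisfies y^2/2 + y = x^3/3 + x^2/2 + c1
    assert math.isclose(y*y/2 + y, x**3/3 + x**2/2 + c1)
```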
|
1,255,311 | <p><img src="https://i.stack.imgur.com/5V9e0.png" alt="enter image description here"></p>
<p>I understand inner product space with vectors, but the conversion to functions is throwing me off. Also why do they use an integral here, I've always seen summations. I think I'm missing something with notation here. Any help/hints would be appreciated. </p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Write</p>
<p>$z=x+iy$</p>
<p>and use Euler Formula $e^{iw}=\cos w+i\sin w$</p>
<p>and equate the real & the imaginary parts </p>
|
1,662,398 | <p>I am currently studying for my upcoming midterm and I am stumped on this example provided in the slides. Basically here is the question:</p>
<blockquote>
<p>Given 35 computers, what is the probability that more than 10 computers are in use(active)? We are told that each computer is only active 10% of the time. The answer given in the slide is .0004</p>
</blockquote>
<p>Here is my following attempt to reproduce that answer:</p>
<p>$$1-{35 \choose 10} \cdot (0.10)^{10} \cdot (0.90)^{25}$$ </p>
<p>First I got the probability of exactly 10 computers being active out of 35 and then I subtracted it from 1 to get the probability of more than 10 computers. </p>
<p>EDIT: I have solved this now with the following new work!</p>
<p>$1 - \sum_{k=0}^{10} \binom{35}{k} (0.1)^k (0.9)^{35-k}$</p>
| Graham Kemp | 135,106 | <p>You have <em>correctly</em> identified this count as having a <strong>Binomial Distribution</strong>.</p>
<p>So far, so good. However, what happened next was not okay.</p>
<p>The complement of having more than $10$ computers active is <strong>not</strong> of having <em>exactly</em> 10 computers active. It is of having $10$ <em>or fewer</em> computers active.</p>
<p>$$\mathsf P(X>10) = 1- \mathsf P(X\leq 10) \\ = 1-\sum_{k=0}^{10} \dbinom{35}{k} 0.10^k~0.90^{35-k}$$</p>
<p>That's a sufficient answer for an exam. It's a little awkward to calculate the answer.</p>
<blockquote class="spoiler">
 <p> $$\mathsf P(X\leq 10) = 0.999~575~702~404~549~174~279~490~538~848~808~6,$$ so $\mathsf P(X>10)\approx 0.000424$, matching the slide's $0.0004$.</p>
</blockquote>
<hr>
<p>Alternatively you could use the Normal approximation to Binomial and lookup Z-tables if they are provided.</p>
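<p>The sum is also easy to evaluate directly with exact binomial coefficients (a quick sketch):</p>

```python
from math import comb

# X ~ Binomial(n = 35, p = 0.1)
p_at_most_10 = sum(comb(35, k) * 0.1**k * 0.9**(35 - k) for k in range(11))
p_more_than_10 = 1 - p_at_most_10

# p_at_most_10 is approximately 0.9996, so p_more_than_10 is approximately 0.0004
assert 0.0003 < p_more_than_10 < 0.0006
```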
|
3,392,871 | <blockquote>
<p>Let <span class="math-container">$k>1$</span> and define a sequence <span class="math-container">$\left\{a_{n}\right\}$</span> by <span class="math-container">$a_{1}=1$</span> and <span class="math-container">$$a_{n+1}=\frac{k\left(1+a_{n}\right) }{\left(k+a_{n}\right)}$$</span>
(a) Show that <span class="math-container">$\left\{a_{n}\right\}$</span> is monotonic increasing. </p>
</blockquote>
<p>Assume <span class="math-container">$a_n \geq a_{n-1}$</span>. Then,</p>
<p><span class="math-container">$$a_{n+1} = \frac{k(1+a_n)}{k+a_n} \geq \frac{k(1+a_{n-1})}{k+a_n}....$$</span></p>
<p>But I get hung up on the <span class="math-container">$a_n$</span> in the denominator. I cannot replace it with <span class="math-container">$a_{n-1}$</span> since <span class="math-container">$a_n \geq a_{n-1}$</span>. Is there a trick to get around this?</p>
| YiFan | 496,634 | <p>In general, to show that a sequence defined by the recurrence <span class="math-container">$a_{n+1}=f(a_n)$</span> is monotonically increasing, what you want to do is to show <span class="math-container">$a_{n+1}>a_n$</span>, which converts to <span class="math-container">$f(a_n)>a_n$</span>. Then you consider this inequality with the explicit <span class="math-container">$f$</span> given to you, and solve the inequality for the range of <span class="math-container">$a_n$</span> for which the inequality is true. If, in fact, your <span class="math-container">$a_n$</span> are guaranteed to lie in the range so that the inequality always holds, then you're done with the proof.</p>
<p>In this case, what you want to be doing is to show
<span class="math-container">$$\frac{k(1+a_n)}{k+a_n}\geq a_n\iff k(1+a_n)(k+a_n)\geq a_n(k+a_n)^2 $$</span>
which factorises easily by taking out the <span class="math-container">$(k+a_n)$</span> term, after which the problem essentially becomes one of finding the range of solutions to a cubic inequality, which I assume you know how to do. The details are well-explained in Theo Bendit's answer, of course.</p>
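<p>Iterating the recurrence numerically illustrates the monotone increase (a sketch; the sequence in fact climbs toward the fixed point $\sqrt{k}$, here with the sample value $k=4$):</p>

```python
import math

k, a = 4.0, 1.0
values = [a]
for _ in range(20):
    a = k * (1 + a) / (k + a)
    values.append(a)

assert all(x < y for x, y in zip(values, values[1:]))        # strictly increasing
assert math.isclose(values[-1], math.sqrt(k), rel_tol=1e-8)  # near the limit sqrt(k)
```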
|
618,986 | <p>I'm having trouble with this question, I'd like someone to point me in the right direction.</p>
<p>let $A$ be a n by n matrix with real values.
show that there is another n by n real matrix $B$ such that $B^3=A$, and that $B$ is symmetric. Are there more matrices like this $B$ or is it the only one?</p>
<p>What I was thinking:</p>
<p>I don't have a clear way to solve it. I think we need to use the fact that if a real matrix is symmetric, then it is normal, and so has an orthonormal basis of eigenvectors...Other then that I don't really know anything.</p>
| Oria Gruber | 76,802 | <p>I would first like to thank Mariano Suarez-Alvarez in advance for pointing me in the right direction.</p>
<p>if $A$ is symmetric over $\mathbb R$, then it is diagonalizable:</p>
<p>$A=PDP^{-1}$ such that $D$ is diagonal.</p>
<p>let $B = PD_2P^{-1}$, where $D_2$ is the diagonal matrix whose diagonal entries are the real cube roots of the corresponding diagonal entries of $D$.</p>
<p>so we get $B^3 = PD_2^{3}P^{-1} = PDP^{-1}=A$</p>
|
618,986 | <p>I'm having trouble with this question, I'd like someone to point me in the right direction.</p>
<p>let $A$ be a n by n matrix with real values.
show that there is another n by n real matrix $B$ such that $B^3=A$, and that $B$ is symmetric. Are there more matrices like this $B$ or is it the only one?</p>
<p>What I was thinking:</p>
<p>I don't have a clear way to solve it. I think we need to use the fact that if a real matrix is symmetric, then it is normal, and so has an orthonormal basis of eigenvectors...Other then that I don't really know anything.</p>
| Yiorgos S. Smyrlis | 57,021 | <p>Note that, every symmetric $A\in\mathbb R^{n\times n}$ matrix is diagonalisable, it has real eigenvalues $d_1,\ldots,d_n$, and its diagonalization is realised with an orthogonal matrix $U$, i.e.,
$$
A=U^TDU,
$$
where $D=\mathrm{diag}(d_1,\ldots,d_n)$, and $U^TU=I$. Now let
$$
B=U^T\mathrm{diag}(d_1^{1/3},\ldots,d_n^{1/3})U.
$$
Clearly $B^3=A$ and
$$
B^T=\big(U^T\mathrm{diag}(d_1^{1/3},\ldots,d_n^{1/3})U\big)^T=U^T\mathrm{diag}(d_1^{1/3},\ldots,d_n^{1/3})U=B.
$$
Hence $B$ is symmetric.</p>
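<p>A small numerical illustration of this construction (a sketch for a sample $2\times 2$ symmetric matrix, diagonalized with an explicit rotation rather than a library call):</p>

```python
import math

# A = [[a, b], [b, c]] is symmetric; a rotation by theta diagonalizes it
a, b, c = 2.0, 1.0, 3.0
theta = 0.5 * math.atan2(2*b, a - c)
ct, st = math.cos(theta), math.sin(theta)

# eigenvalues (the rotated basis vectors are the eigenvectors)
d1 = a*ct*ct + 2*b*st*ct + c*st*st
d2 = a*st*st - 2*b*st*ct + c*ct*ct

# B = U diag(d1^(1/3), d2^(1/3)) U^T, using real cube roots
r1 = math.copysign(abs(d1) ** (1/3), d1)
r2 = math.copysign(abs(d2) ** (1/3), d2)
B = [[r1*ct*ct + r2*st*st, (r1 - r2)*st*ct],
     [(r1 - r2)*st*ct,     r1*st*st + r2*ct*ct]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B3 = matmul(B, matmul(B, B))
for i in range(2):
    for j in range(2):
        assert math.isclose(B3[i][j], [[a, b], [b, c]][i][j], abs_tol=1e-9)
assert B[0][1] == B[1][0]   # B is symmetric by construction
```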
|
618,986 | <p>I'm having trouble with this question, I'd like someone to point me in the right direction.</p>
<p>let $A$ be a n by n matrix with real values.
show that there is another n by n real matrix $B$ such that $B^3=A$, and that $B$ is symmetric. Are there more matrices like this $B$ or is it the only one?</p>
<p>What I was thinking:</p>
<p>I don't have a clear way to solve it. I think we need to use the fact that if a real matrix is symmetric, then it is normal, and so has an orthonormal basis of eigenvectors...Other then that I don't really know anything.</p>
| mjw | 655,367 | <p>If <span class="math-container">$B$</span> satisfies <span class="math-container">$B^3=A$</span>, then <span class="math-container">$\{ B, \alpha B, \overline{\alpha} B \}$</span> are solutions, where <span class="math-container">$\alpha = \exp \left(\frac{2\pi i}{3}\right)$</span> and where <span class="math-container">$\overline{\alpha} = \exp \left(-\frac{2\pi i}{3}\right)$</span> is the conjugate of <span class="math-container">$\alpha.$</span></p>
|
830,977 | <p>I'm having some real trouble with lebesgue integration this evening and help is very much appreciated.</p>
<p>I'm trying to show that $f(x) = \dfrac{e^x + e^{-x}}{e^{2x} + e^{-2x}}$ is integrable over $(0,\infty)$.</p>
<p>My first thought was to write the integral as $f(x) = \frac{\cosh(x)}{\cosh(2x)}$ and then note $f(x) = \frac{\cosh(x)}{\sinh(x)^2 + \cosh(x)^2}$ so that $|f(x)| \le \frac{\cosh(x)}{\cosh(x)^2}$. These all seemed like sensible steps to me at this point, and I know the integral on the right hand side exists (wolfram alpha), but I'm having trouble showing it and am wondering if I have made more of a mess by introducing trigonometric functions.</p>
<p>Thanks</p>
| Zarrax | 3,035 | <p>For the numerator, observe that $e^x$ dominates $e^{-x}$ as $x \rightarrow \infty$. </p>
<p>For the denominator, observe that $e^{2x}$ dominates $e^{-2x} $ as $x \rightarrow \infty$. </p>
<p>So the integrand will decay like ${e^x \over e^{2x}}$ = $e^{-x}$ as $x \rightarrow \infty$ and the integral will converge. To get a formal proof, use the limit comparison test with $e^{-x}$.</p>
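<p>The comparison can be made fully explicit: since $\cosh x \le e^x$ and $\cosh 2x \ge \tfrac12 e^{2x}$ for $x\ge 0$, the integrand is bounded by $2e^{-x}$, which is integrable on $(0,\infty)$. A quick numeric spot check of this bound:</p>

```python
import math

def f(x):
    # the integrand (e^x + e^-x) / (e^{2x} + e^{-2x}) = cosh(x) / cosh(2x)
    return math.cosh(x) / math.cosh(2*x)

for x in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    assert f(x) <= 2 * math.exp(-x)   # cosh x <= e^x, cosh 2x >= e^{2x}/2
```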
|
293,245 | <p>Most true statements independent of PA that I know of is equivalent to some consistency statement. For example</p>
<ul>
<li>Con(PA), Con(PA + Con(PA)), Con(PA + Con(PA) + Con (PA + Con(PA)), $\dots$</li>
<li>Goodstein's theorem is equivalent to Con(PA)</li>
<li>Any conjunction or disjunction of the above.</li>
</ul>
<p>Is every true statement independent of PA equivalent to some consistency statement?</p>
<p>By "equivalent to some consistency statement", I mean that $PA \vdash S \iff Con(T)$, for some theory $T$. Also, $T$ should be either finite, or specified by a Turing machine that outputs its axioms (and such that PA proves that the Turing machine never stops outputting statements), so that the description of $T$ doesn't throw PA off.</p>
<p>EDIT: In particular, are there are $\Pi^0_1$ examples?</p>
| Payam Seraji | 65,878 | <p>$1$-consistency of $PA$ is a true $\Pi_3$ sentence which is not provable in $PA$+{all true $\Pi_1$ sentences} (see this <a href="https://academic.oup.com/jigpal/advance-article-abstract/doi/10.1093/jigpal/jzx061/4792773?redirectedFrom=fulltext" rel="noreferrer">article</a>). Simple (iterated) consistency statements (as you mentioned above) are all (true) $\Pi_1$ sentences, so it is not equivalent to any $\Pi_1$ sentence. </p>
|
1,401,516 | <p>Given is the unit circle in the plane. Choose randomly point in it, such that $P(\left(x,y\right)\in A)$ is proportional to area of $A$, where $A$ is measurable set in plane. Find density function of random variable $X$ which represents the $x$ coordinate of this point.</p>
<p>My idea was to find $P(X\leq x)$ and then differentiate, but I'm struggling with determining area of subset of a circle where all x-coordinates are less or equal to given $x$ while $x$ varies in $\left[-1,1\right]$. Attached is the figure for fixed $x$.<a href="https://i.stack.imgur.com/89o8Z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/89o8Z.jpg" alt="enter image description here"></a></p>
| georg | 144,937 | <p>I would say, if you do not integrate, then from the area of the unit circle subtract the area of a circular segment with radius $1$ and angle $\alpha$:</p>
<p><a href="https://i.stack.imgur.com/NFUkn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NFUkn.png" alt="enter image description here"></a></p>
<p>$A=\pi-\left(\frac{1^2}{2}\cdot 2\alpha-x\cdot y\right)=\pi-\arccos x+x\sqrt{1-x^2}$</p>
<p>$\displaystyle \Rightarrow P(X\le x)=\frac{1}{\pi}(\pi-\arccos x+x\sqrt{1-x^2})$</p>
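<p>Differentiating this CDF then yields the density the question asks for, the semicircle law $f_X(x)=\frac{2}{\pi}\sqrt{1-x^2}$ on $[-1,1]$; a quick numerical check of this derivative (a sketch):</p>

```python
import math

def cdf(x):
    # P(X <= x) = (pi - arccos x + x*sqrt(1 - x^2)) / pi
    return (math.pi - math.acos(x) + x*math.sqrt(1 - x*x)) / math.pi

def pdf(x):
    # claimed density: 2*sqrt(1 - x^2) / pi
    return 2 * math.sqrt(1 - x*x) / math.pi

h = 1e-6
for x in (-0.9, -0.3, 0.0, 0.4, 0.8):
    numeric = (cdf(x + h) - cdf(x - h)) / (2*h)   # central difference
    assert abs(numeric - pdf(x)) < 1e-5
```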
|
244,214 | <p>One major approach to the theory of forcing is to assume that ZFC has a countable <em>transitive</em> model $M \in V$ (where $V$ is the "real" universe). In this approach, one takes a poset $\mathbb{P} \in M$, uses the fact that $M$ is <em>countable</em> to prove that there exists a generic set $G \in V$, then defines $M[G]$ as an actual set inside $V$ and proves it is a model of ZFC.</p>
<p>The downside to this approach is that a countable transitive model may not exist. For example, it is possible that $V = L$ and $V$ is a minimal model of ZFC, so that any smaller model $M \in V$ is non-standard. However, if we only want a <em>countable</em> model of ZFC, there is no problem. First, Gödel's completeness theorem shows that (assuming ZFC is consistent, of course!) there is some model $M_0 \in V$ of ZFC. Then, the Löwenheim-Skolem theorem guarantees that there is a elementary substructure $M \subseteq M_0$ which is countable in $V$. So $M$ is a countable model of ZFC, and therefore, there is a generic filter $G \in V$.</p>
<p>Can we continue the proof of forcing along these lines? Of course, transitivity is convenient for many reasons (such as showing that various formulas are absolute), but is it <em>possible</em> to go without it? Perhaps we would need to modify the construction of the $\mathbb{P}$-names and $M[G]$ by only considering elements that are actually in $M$. </p>
<p><strong>EDIT</strong> To be a bit more clear, I believe that forcing can be done without a countable model at all, using either the syntactic approach or an approach via Boolean-valued models. My question is more humble. The arguments of forcing are very intuitive when $M$ is a countable transitive model; why don't <em>the same</em> (up to relativizing formulas to $M$) arguments work when $M$ is just countable? </p>
| Joel David Hamkins | 1,946 | <p>Yes, one can undertake forcing without the transitivity assumption,
and even the countability of the model is not important.</p>
<p>One of the standard ways to do this is with the Boolean-valued
model quotient construction, which has been described in many places. Basically, given a forcing notion $B$,
a complete Boolean algebra (take the completion if you have only a
partial order), form the class of all $B$-names, and then define
the Boolean values $[\![\varphi]\!]^B$. This can be done internally
to any model $M$. If $U\subset B$ is any ultrafilter — no need for
genericity of any kind, and even $U$ inside $M$ is fine — then you define the
quotient $M^B/U$ by the equivalence relation $$\sigma =_U\tau\quad\iff\quad[\![\sigma=\tau]\!]\in U,$$ which is a congruence with
respect to the relation $$\sigma\in_U\tau\iff
[\![\sigma\in\tau]\!]\in U.$$ One then verifies the Łoś theorem property
that $M^B/U\models\varphi\iff[\![\varphi]\!]\in U$, and so $M^B/U$
is a model of any statement whose Boolean value is in $U$.</p>
<p>To construct a model of ZFC+$\neg$CH, for example, start with any model $M\models\text{ZFC}$, and let $U\subset B$ be any ultrafilter in the Boolean algebra arising from Cohen's forcing to add $\aleph_2$ many Cohen reals. The model $M^B/U$ will satisfy ZFC+$\neg$CH. No need for $M$ to be countable or transitive!</p>
<p>You can find further extensive details in my paper, <a href="http://jdh.hamkins.org/boolean-ultrapowers/" rel="noreferrer">Well-founded
Boolean ultrapower as large cardinal
embeddings</a>, including
a discussion of what I call the <em>naturalist account</em> of forcing,
which describes how one can take the common set-theorist's talk of
"forcing over $V$" at face value.</p>
<p><strong>Update.</strong> Let me respond to your comment and clarified question. You want to know where the argument commonly used with countable transitive models goes wrong without those assumptions. So let me explain. </p>
<ul>
<li><p>Countability is clearly used in order to find the generic filter. Strictly speaking, one doesn't need that the entire model is countable, but rather only that the model $M$ has only countably many dense subsets of the forcing notion $P$ being used for the forcing. For example, one can easily find generic filters for any forcing notion in an $\omega_1$-like model, which is an uncountable model all of whose rank initial segments are countable. More generally, it suffices if there are only countably many maximal antichains for the forcing, since meeting these suffices for genericity. One can relax this a bit in certain cases. For example, if you have Martin's axiom or some other forcing axiom, and if $M$ has fewer than continuum many open dense sets for a forcing notion P that happens to be ccc in the ambient universe, then it will be an instance of the forcing axiom to know that there is an $M$-generic filter. And similarly with proper forcing and PFA and so on. So these are some ways in which you can dispense with the countability assumption and still get a generic filter. </p></li>
<li><p>Transitivity. The use of transitivity in the CTM approach to forcing is used critically in the definition of what $M[G]$ is. Namely, one usually defines $M[G]$ to consists of all the interpretations of names $\tau$ in $M$ by $G$, defining the value $$\newcommand\val{\text{val}}\val(\tau,G)=\{\val(\sigma,G)\mid\exists p\in G\ \langle\sigma,p\rangle\in \tau\}.$$ This definition takes place by $\in$-induction in a realm where both $M$ and $G$ exist. One cannot seem to carry out this induction on names if the model is not $\in$-standard or at least well-founded, and so this is the main point of failure with the usual CTM approach to forcing. Without transitivity or at least well-foundedness, we don't seem to know exactly what $M[G]$ should mean. This problem is addressed in the Boolean-valued model quotient construction by defining $M[G]$ as a quotient by an equivalence relation, rather than by the $\in$-inductive value procedure. Furthermore, if $G$ is $M$-generic for a non-well-founded model $M$, then indeed after forming the model $M[G]$ by the quotient construction $M^B/G$, then inside $M[G]$ one can see that it arises internally via the values-of-names construction. But the point is that without having first provided an alternative definition of what $M[G]$ is, one doesn't seem at first able to carry out that name-value process, because the induction takes place in context with both $M$ and $G$ already available. (And this subtle point, I believe, seems to be the answer to your question as updated by the revision.)</p></li>
</ul>
|
3,830,204 | <p>Working through <em>Spivak's Calculus</em> and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral <span class="math-container">$$\int \frac{1}{x^{2}+x+1} dx$$</span></p>
<p>Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:</p>
<p><span class="math-container">$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n> 1$$</span></p>
<p>In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.</p>
<p>Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?</p>
| jasmine | 557,708 | <p>Completing the square, $x^2+x+1=\left(x+\frac12\right)^2+\frac34$, so $\int \frac{1}{(x+1/2)^2 + 3/4} dx= \frac{2}{\sqrt3}\tan^{-1}\frac{2x+1}{\sqrt3} +c$</p>
|
3,830,204 | <p>Working through <em>Spivak's Calculus</em> and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral <span class="math-container">$$\int \frac{1}{x^{2}+x+1} dx$$</span></p>
<p>Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:</p>
<p><span class="math-container">$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n> 1$$</span></p>
<p>In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.</p>
<p>Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?</p>
| Robby the Belgian | 19,298 | <p>I'll take a different approach from the answers so far. For this approach, you'll have to be at least a bit comfortable working with complex numbers.</p>
<p>We can factor <span class="math-container">$x^2 + x + 1$</span> as <span class="math-container">$\left(x - \frac{-1 + \sqrt{3}i}{2}\right)\left(x - \frac{-1 - \sqrt{3}i}{2}\right)$</span>.</p>
<p>That means we can rewrite the integral using partial fractions:
<span class="math-container">$\int \frac{dx}{x^2+x+1} = \int \frac{A dx}{x - \frac{-1 +\sqrt{3}i}{2}} + \int \frac{B dx}{x - \frac{-1 -\sqrt{3}i}{2}}$</span> for some <span class="math-container">$A, B \in \mathbb{C}$</span>.
We can find <span class="math-container">$A$</span> and <span class="math-container">$B$</span> easily:
<span class="math-container">$Ax - A \frac{-1 -\sqrt{3}i}{2} + Bx - B \frac{-1 +\sqrt{3}i}{2} = 1$</span>.
This gives us that <span class="math-container">$A = -B$</span>, and <span class="math-container">$A = -\frac{\sqrt{3}i}{3}$</span>.</p>
<p>These integrals are of the form <span class="math-container">$\int \frac{Kdx}{x-L}$</span>, and are easy enough to solve: just use the substitution <span class="math-container">$u = x - \frac{-1 +\sqrt{3}i}{2}$</span> for the first one, and <span class="math-container">$u = x - \frac{-1 -\sqrt{3}i}{2}$</span> for the second one.</p>
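<p>With the roots $\frac{-1\pm\sqrt{3}i}{2}$ of $x^2+x+1$, the decomposition is easy to sanity-check numerically (a quick sketch using complex floats):</p>

```python
sqrt3 = 3 ** 0.5
r1 = complex(-1,  sqrt3) / 2     # roots of x^2 + x + 1 = 0
r2 = complex(-1, -sqrt3) / 2
A = -1j * sqrt3 / 3              # A = 1/(r1 - r2)
B = -A

for x in (0.0, 1.5, -2.0, 10.0):
    assert abs(A/(x - r1) + B/(x - r2) - 1/(x*x + x + 1)) < 1e-12
```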
|
3,552,915 | <p>Determine the point on the plane <span class="math-container">$4x-2y+z=1$</span> that is closest to the point <span class="math-container">$(-2, -1, 5)$</span>. This question is from Pauls's Online Math Notes. He starts by defining a distance function: </p>
<p><span class="math-container">$z = 1 - 4x + 2y$</span></p>
<p><span class="math-container">$d(x, y) = \sqrt{(x + 2)^2 + (y + 1)^2 + (-4 -4x + 2y)^2}$</span></p>
<p>However, at this point, to make the calculus simpler he finds the partial derivatives of <span class="math-container">$d^2$</span> instead of <span class="math-container">$d$</span>. Why does this give you the same answer? </p>
| Bernard | 202,857 | <p>Here is a simple solution using tools from middle school for the computation:</p>
<p>Denote by <span class="math-container">$x,y,z$</span> the coordinates of the orthogonal projection of the point <span class="math-container">$(-2,-1,5)$</span> onto the plane.
It satisfies the equations of proportionality:
<span class="math-container">$$\frac{x+2}4=\frac{y+1}{-2}=\frac{z-5}1$$</span>
This common ratio is also equal to
<span class="math-container">$$\frac{4(x+2)-2(y+1)+1(z-5)}{4^2+(-2)^2+1^2}=\frac{(4x-2y+z)+1}{21}=\frac{2}{21}$$</span>
whence the solution
<span class="math-container">\begin{cases}
x=-2+\dfrac 8{21}=-\dfrac{34}{21},\\[1ex]
y=-1-\dfrac 4{21}=-\dfrac{25}{21}, \\[1ex]
z= 5+\dfrac 2{21}=\dfrac{107}{21}.
\end{cases}</span>
With these elements, the distance squared is
<span class="math-container">$$d^2=(x+2)^2+(y+1)^2+(z-5)^2=\frac{8^2+(-4)^2+2^2}{21^2}=\frac{84}{21^2}=\frac{4}{21}$$</span>
and finally <span class="math-container">$\;d=\dfrac 2{\sqrt{21}}.$</span></p>
<p>As to your exact question, the differential of <span class="math-container">$d^2$</span> is <span class="math-container">$2d D(d)$</span>, hence the critical values are obtained at the same points, and as <span class="math-container">$d>0$</span>, <span class="math-container">$d^2$</span> and <span class="math-container">$d$</span> both increase or both decrease.</p>
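<p>The same computation, phrased as an orthogonal projection onto the plane, can be verified numerically (a quick sketch):</p>

```python
import math

n = (4.0, -2.0, 1.0)             # normal vector of the plane 4x - 2y + z = 1
p = (-2.0, -1.0, 5.0)

t = (sum(ni*pi for ni, pi in zip(n, p)) - 1) / sum(ni*ni for ni in n)
closest = tuple(pi - t*ni for pi, ni in zip(p, n))   # foot of the perpendicular

assert all(math.isclose(u, v) for u, v in zip(closest, (-34/21, -25/21, 107/21)))
assert math.isclose(math.dist(p, closest), 2/math.sqrt(21))
```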
|
1,284,039 | <p>What function satisfies $f(x)+f(−x)=f(x^2)$?</p>
<p>$f(x)=0$ is obviously a solution to the above functional equation.</p>
<p>We can assume f is continuous or differentiable or similar (if needed).</p>
| grube300 | 240,897 | <p>Give $f(x) = \ln(|x|)$ a try in your equation: for $x \neq 0$, $$f(x) + f(-x) = \ln|x| + \ln|-x| = 2\ln|x| = \ln(x^2) = f(x^2).$$</p>
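<p>A quick numeric sanity check of this candidate (a Python sketch; the identity $\ln|x| + \ln|-x| = 2\ln|x| = \ln(x^2)$ holds for every $x \neq 0$):</p>

```python
import math

def f(x):
    # the candidate solution f(x) = ln(|x|), defined for x != 0
    return math.log(abs(x))

for x in [0.5, 1.0, 2.0, -3.7, 10.0]:
    # f(x) + f(-x) should equal f(x^2)
    assert math.isclose(f(x) + f(-x), f(x * x), abs_tol=1e-12)
print("f(x) = ln|x| satisfies f(x) + f(-x) = f(x^2)")
```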
|
423,159 | <p>What do you call a linear map of the form <span class="math-container">$\alpha X$</span>, where <span class="math-container">$\alpha\in\Bbb R$</span> and <span class="math-container">$X\in\mathrm O(V)$</span> is an orthogonal map (<span class="math-container">$V$</span> being some linear space with inner product)? Are there established names, historical names, some naming attempts that haven't caught on?</p>
<ul>
<li><p>"<a href="https://en.wikipedia.org/wiki/Conformal_map" rel="nofollow noreferrer">Conformal</a>" aka. "angle-preserving" feels rather close, but I believe these terms are more commonly used in the sense of "locally angle-preserving" (i.e. it is not implicitly understood to be linear). Also, <span class="math-container">$\alpha=0$</span> is explicitly allowed in my context, which is not quite angle-preserving.</p>
</li>
<li><p>I first thought "<a href="https://en.wikipedia.org/wiki/Homothety" rel="nofollow noreferrer">homotheties</a>" are what I am looking for, but these only capture the scaling part, not the rotation part.</p>
</li>
<li><p>Roto-scaling or scale-rotation is apparently also already taken and is more general than what I need (see the comment by Carlo).</p>
</li>
</ul>
<p>At the risk of letting this become too "opinion-based", let me also say that I am open to suggestions.</p>
| Vladimir Dotsenko | 1,306 | <p>Wikipedia suggests "conformal orthogonal group" for the group of all such maps; see the articles</p>
<p><a href="https://en.wikipedia.org/wiki/Conformal_group" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Conformal_group</a>
<a href="https://en.wikipedia.org/wiki/Orthogonal_group#Conformal_group" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Orthogonal_group#Conformal_group</a></p>
<p>The same term is used in Magma handbook:</p>
<p><a href="http://magma.maths.usyd.edu.au/magma/handbook/text/317" rel="nofollow noreferrer">http://magma.maths.usyd.edu.au/magma/handbook/text/317</a></p>
<p>and in quite a few other reputable places, e.g.</p>
<p><a href="https://people.maths.bris.ac.uk/%7Ematyd/GroupNames/linear.html" rel="nofollow noreferrer">https://people.maths.bris.ac.uk/~matyd/GroupNames/linear.html</a></p>
<p>so it appears that "conformal orthogonal transformation" is, even if slightly tautological when taken literally, the way to go.</p>
|
1,943,351 | <p>Good day,</p>
<p>In class we said that if a random variable <span class="math-container">$X-Y$</span> is independent of random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then <span class="math-container">$X-Y$</span> is almost sure constant, i.e. there exists a <span class="math-container">$c \in \mathbb{R}$</span> such that <span class="math-container">$P(X-Y=c)=1$</span>.</p>
<p>First, I don't know exactly how to prove this. I know that <span class="math-container">$X$</span> is constant if it is independent of itself. Therefore I could prove that <span class="math-container">$X-Y$</span> is independent of itself (but the other direction doesn't hold, I suppose). Do I know that <span class="math-container">$X-Y$</span> is independent of itself?</p>
<blockquote>
<p>Is it correct to say: If <span class="math-container">$Z$</span> is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then it is independent of <span class="math-container">$g(X,Y)$</span> where <span class="math-container">$g$</span> is a measurable function.</p>
</blockquote>
<p>I don't think so. The definition of independence doesn't give this property.</p>
<p>Then how do I prove that <span class="math-container">$X-Y$</span> is almost sure constant? Another approach through expectations:</p>
<p><span class="math-container">$$E(X-Y|X)=E(X-Y|Y)=E(X-Y)=EX-EY $$</span></p>
<p>But it seems not to lead me to the goal.</p>
<blockquote>
<p>So: Why is the random variable <span class="math-container">$X-Y$</span> almost sure constant if it is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>?</p>
<p>Is this valid for a general random variable <span class="math-container">$f(X,Y)$</span> (where <span class="math-container">$f$</span> is measurable for example)? i.e. <span class="math-container">$f(X,Y)$</span> is almost sure constant it it is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>?</p>
</blockquote>
<p>If not I would ask for a counterexample.</p>
<p>Thanks a lot for your help,
Marvin</p>
| Landon Carter | 136,523 | <p>Let $\phi_A(t)$ be the characteristic function of the random variable $A$. Then you know that if $A,B$ are independent, then $\phi_{A+B}(t)=\phi_A(t)\phi_B(t)$.</p>
<p>You have that $X-Y$ is independent of $X$ and $Y$. Noting that $X=(X-Y)+Y$, you have that $\phi_X(t)=\phi_{(X-Y)+Y}(t)=\phi_{X-Y}(t)\phi_Y(t)$ [using the fact that $X-Y$ and $Y$ are independent] for every $t\in\mathbb R$. Thus $\phi_{X-Y}(t)=\dfrac{\phi_X(t)}{\phi_Y(t)}$.</p>
<p>Also you have, similarly, that $\phi_Y(t)=\phi_{Y-X}(t)\phi_X(t)$ implying $\phi_{Y-X}(t)=\dfrac{\phi_Y(t)}{\phi_X(t)}$.</p>
<p>Let us call $Z:=X-Y$ for brevity of notation. Then the above two give that $\phi_Z(t)=\dfrac{\phi_X(t)}{\phi_Y(t)}$ and $\phi_{-Z}(t)=\dfrac{\phi_Y(t)}{\phi_X(t)}$.</p>
<p>Hence $\phi_Z(t)\phi_{-Z}(t)=1$, for every $t\in\mathbb R$.</p>
<p>Using the fact that $\phi_{-Z}(t)=\overline{\phi_{Z}(t)}$, we have that $|\phi_Z(t)|=1$ for all $t\in\mathbb R$.</p>
<p>Thus, $\phi_Z(t)=e^{ig(t)}$ for some function $g$, for all $t\in\mathbb R$.</p>
<p>Now observe that for all $t$, $e^{ig(t)}=E(e^{itZ})$ implies $E(e^{i(tZ-g(t))})=1$ for all $t$. Noting that $|e^{i(tZ-g(t))}|=1$ we must have that $tZ-g(t)=2k(t)\pi$ for an integer valued function $k$, almost surely.</p>
<p>Thus almost surely, $Z=\dfrac{g(t)+2k(t)\pi}{t}$ for all $t\in\mathbb R$.</p>
<p>Since the LHS is independent of $t$, we may choose any $t$, say $t=1$. Then $Z=g(1)+2k(1)\pi$, which is, after all, a constant, almost surely.</p>
<p>Hence $X-Y$ is constant almost surely.</p>
<p>Now let us see your other questions. You want to know if $Z$ is independent of $X$ and $Y$ then is it true that $Z$ is independent of $g(X,Y)$ for any measurable function $g$?</p>
<p>So consider for Borel sets $A,B$ the following: $P(Z\in A, g(X,Y)\in B)=P[Z\in A, (X,Y)\in g^{-1}(B)]$. This can be written as the product $P[Z\in A]P[(X,Y)\in g^{-1}(B)]$ if and only if $Z$ is JOINTLY INDEPENDENT with $X,Y$. There exist examples where $Z$ is independent of each $X,Y$ but maybe not jointly. Here's one:</p>
<p>Throw two dice independently. Define $A$ to be the event that $7$ is obtained as the sum of the two throws, $B$ be the event that $3$ is obtained on first throw and $C$ be the event that $4$ is obtained on second throw. Then you can check that $A,B,C$ are pairwise independent but not jointly independent ($A$ is NOT jointly independent with $B$ and $C$.) For example with random variables, take $X=1_B,Y=1_C,Z=1_A$.</p>
<p>As you see, we crucially used the structure of $f(X,Y)=X-Y$. I cannot say right now if it can always be said that if $f(X,Y)$ is independent of both $X$ and $Y$ then it is a.s. constant.</p>
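<p>The dice example at the end can be checked by brute force (a Python sketch enumerating all 36 equally likely outcomes; $A$ = sum is $7$, $B$ = first die shows $3$, $C$ = second die shows $4$):</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] + o[1] == 7   # sum of the two throws is 7
B = lambda o: o[0] == 3          # first throw shows 3
C = lambda o: o[1] == 4          # second throw shows 4

# pairwise independent:
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
assert prob(lambda o: A(o) and C(o)) == prob(A) * prob(C)
assert prob(lambda o: B(o) and C(o)) == prob(B) * prob(C)

# ... but not jointly independent:
assert prob(lambda o: A(o) and B(o) and C(o)) != prob(A) * prob(B) * prob(C)
print("pairwise independent, but not jointly independent")
```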
|
803,335 | <p>Note: this is particularly aimed at high-school/entry level college problems </p>
<p>When I'm learning a new topic:</p>
<p>1) I read the theory given in the textbook at the start of each topic</p>
<p>2) proceed to read the solved example problems which the textbook provides (usually 3-5 with full solutions)</p>
<p>3) I then proceed onto answering every question within each exercise.</p>
<p>My problem is that I still forget concepts, for instance, let's say a week later (or even a few days later sometimes).</p>
<p>What am I doing wrong?</p>
| Dan Christensen | 3,515 | <p>You don't say anything about writing notes or summaries. For each type of problem, while it is still fresh in your mind, write a detailed reminder to yourself on how to do it -- maybe a particularly good example. Include key definitions, theorems with examples and non-examples. Keep these summary notes separate from your lecture notes so you can quickly review them. </p>
|
<p>We have $\rho(A) \leq \|A\|$,
where $\rho(A)$ denotes the spectral radius of $A$.</p>
<p>Now there is a corollary
that $\rho(A) < 1$ iff $\|A\|<1$.
It is clear that when $\|A\|<1$ then $\rho(A)<1$.</p>
<p>But how to show that if $\rho(A)<1$ then $\|A\|<1$?
Perhaps it is because of the following:</p>
<p>$\|A\| = \sup_{x\neq 0}(\frac{\|Ax\|}{\|x\|})$
and $\|Ax\| = \|\lambda x\| = |\lambda| \|x\|$
and
hence $\|A\| = \sup(|\lambda|)$</p>
<p>$\|A\| = \rho(A)<1$
so $\|A\|<1$</p>
<p><strong>EDIT:</strong></p>
<p>I see this but only for $||A||_{2} = \sqrt{\rho(A^{*}A)}$, where $A^{*}$ is the conjugate transpose of $A$. So in the case of, say, a real symmetric matrix $A$, we have $A^{*} = A^{T}$, so $||A||_{2} = \sqrt{\rho(A^{2})} = \sqrt{(\rho(A))^{2}} < 1$ since $\rho(A)<1$, implying $||A||_{2}<1$. But what about its natural norm, that is, $\|A\| = \sup_{x\neq 0}(\frac{\|Ax\|}{\|x\|})$?</p>
| user1551 | 1,551 | <p>The "iff", or your so-called "corollary", are wrong. Counterexample: when $A$ is the $2\times2$ Jordan block for the eigenvalue $1-\epsilon$ for some small $\epsilon>0$ we have $\rho(A)=1-\epsilon<1$ but $\|A\|_2\ge\|(1,0)\,A\|_2=\|(1-\epsilon,1)\|_2>1$.</p>
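<p>This counterexample is easy to verify numerically (a pure-Python sketch; for a triangular matrix the spectral radius is the largest diagonal modulus, and the first row gives a lower bound on the induced 2-norm, exactly the inequality used in the answer):</p>

```python
import math

eps = 0.01
# 2x2 Jordan block for the eigenvalue 1 - eps (upper triangular):
A = [[1 - eps, 1.0],
     [0.0, 1 - eps]]

# For a triangular matrix the eigenvalues are the diagonal entries, so:
spectral_radius = max(abs(A[0][0]), abs(A[1][1]))   # = 1 - eps

# Lower bound on the induced 2-norm: ||A||_2 >= ||(1,0) A||_2, the norm
# of the first row of A.
row_norm = math.hypot(A[0][0], A[0][1])

print(spectral_radius, row_norm)   # 0.99 and about 1.407
assert spectral_radius < 1 < row_norm
```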
|
1,473,513 | <p>The motion of a pendulum is described by the differential equation</p>
<p><span class="math-container">$$ \ddot\theta +\frac gl \sin \theta = 0$$</span></p>
<p>if we integrate this equation with respect to <span class="math-container">$\theta$</span> we obtain</p>
<p><span class="math-container">$$ \frac 12 \dot \theta ^2 - \frac gl \cos \theta = C $$</span></p>
<p>Would anyone please shed some light on how to integrate the first term? It seems that:
<span class="math-container">$$\int \ddot \theta\,d\theta = \frac 12 \dot \theta ^2$$</span></p>
<p>Or in other words<br>
<span class="math-container">$$\int{\frac{d^2\theta}{dt^2}}\,d\theta =\frac{1}{2}\left( \frac{d\theta}{dt} \right) ^2$$</span></p>
<p>I don't really buy it</p>
| Spencer | 71,045 | <p>It follows from the chain rule,</p>
<p>$$ \frac{d}{dt} = \frac{d\theta}{dt} \frac{d}{d\theta} = \dot{\theta} \frac{d}{d\theta},$$</p>
<p>$$ \ddot{\theta} = \dot{\theta}\frac{d}{d\theta} \dot{\theta} = \frac12 \frac{d}{d\theta} \left( \dot{\theta}^2 \right). $$</p>
<p>I didn't like the above too much as an undergraduate because it looks like an abuse of notation. One way to think about it is: if the path is monotonic, then I can parameterize the derivative in terms of the value of $\theta$, i.e., it is possible to write $\dot{\theta}=g(\theta)$. </p>
<hr>
<p>Another way of thinking about it is a change of variable in integration. We can change $t\rightarrow \theta(t)$ so long as $\theta$ is monotonic. </p>
<p>$$ \int \ddot{\theta} dt \rightarrow \int \ddot{\theta} \dot{\theta} d\theta = \int \frac12 \frac{d}{dt}(\dot{\theta})^2 d\theta$$</p>
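<p>The integrated quantity $\frac12\dot\theta^2 - \frac{g}{l}\cos\theta$ is conserved along solutions, and this can be observed numerically (a Python sketch with a classical RK4 integrator; the initial condition and step size are arbitrary choices of mine):</p>

```python
import math

g_over_l = 9.81          # g/l for a pendulum of length 1 m (arbitrary choice)

def deriv(theta, omega):
    # theta' = omega,  omega' = -(g/l) sin(theta)
    return omega, -g_over_l * math.sin(theta)

def rk4_step(theta, omega, dt):
    # one classical Runge-Kutta 4 step for the pendulum system
    k1t, k1w = deriv(theta, omega)
    k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
    k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
    k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w)
    return (theta + dt / 6 * (k1t + 2 * k2t + 2 * k3t + k4t),
            omega + dt / 6 * (k1w + 2 * k2w + 2 * k3w + k4w))

def energy(theta, omega):
    # the integrated quantity: 1/2 theta'^2 - (g/l) cos(theta)
    return 0.5 * omega ** 2 - g_over_l * math.cos(theta)

theta, omega = 1.0, 0.0      # released from 1 rad at rest
e0 = energy(theta, omega)
dt = 1e-3
for _ in range(10_000):      # integrate 10 seconds
    theta, omega = rk4_step(theta, omega, dt)

drift = abs(energy(theta, omega) - e0)
print(drift)                 # tiny numerical drift: the quantity is conserved
assert drift < 1e-6
```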
|
1,981,928 | <p>While I was studying properties of limits and sequences, I found a theorem that says: if {$s_n$}, {$t_n$} are convergent sequences, then $s_n \le t_n$ for all $n \in \mathbb N$ implies that $$\lim_{n\rightarrow \infty} s_n \le \lim_{n\rightarrow \infty} t_n$$
This proof is quite easy to construct, as you can say:</p>
<blockquote>
<p>Given $\epsilon>0$ choose $N \in \mathbb{N}$ such that $|s_n-S| < \epsilon/2$ and $|t_n-T| < \epsilon/2$ then $S-T = (S-s_n) + (t_n-T) + s_n - t_n$ and use triangle inequality to finish</p>
</blockquote>
<p>but I heard that following is NOT TRUE, and I don't know why? </p>
<p><strong>if $s_n<t_n$, can we say $\lim_{n\rightarrow \infty} s_n < \lim_{n\rightarrow \infty} t_n$?</strong></p>
| kobe | 190,421 | <p>No, it's not true. Let $s_n = 1 $ and $t_n = 1 + 1/n$, for all $n\in \Bbb N$. Then $s_n < t_n$ for every $n$, but $\lim_{n\to \infty} s_n = 1 = \lim_{n\to \infty} t_n$.</p>
|
141,484 | <p><strong>Bug introduced in 10.4.1 or earlier and fixed in 11.1.1</strong></p>
<hr>
<p>I recently installed MMA v11.1 and encountered an issue with the memory usage of the LinearModelFit[] command. It appears that when mixing numeric and nominal variables, the LinearModelFit[] command uses a very large block of memory. I first noticed this issue on a large Linux server, where the MMA kernel kept crashing after consuming all 256 GB of memory for a relatively small problem. </p>
<p>I created a simple example in an attempt to illustrate the problem:</p>
<pre><code>In[44]:= $Version
Out[44]= "11.1.0 for Microsoft Windows (64-bit) (March 13, 2017)"
</code></pre>
<p>First create a simple data matrix of size <em>n</em>:</p>
<pre><code>regDat[n_] := Transpose[{
RandomChoice[{"Yes", "No"}, n], (* nominal variable *)
RandomReal[{1, 10}, n], (* numeric #1 *)
RandomReal[{1, 10}, n], (* numeric #2 *)
RandomReal[{1, 10}, n] (* dependent variable *)
}
];
</code></pre>
<p>Now run a regression with <em>n</em> = 25,000, using only the numeric variables:</p>
<pre><code>mem1 = MaxMemoryUsed[
LinearModelFit[regDat[25000], {v2, v3}, {v1Nom, v2, v3},
NominalVariables -> {v1Nom}]] // AbsoluteTiming
{0.10582, 8813008}
</code></pre>
<p>This uses about 8.8 MB of memory. Now run the same regression, except add the nominal variable (which has values {"Yes","No"}):</p>
<pre><code>mem2 = MaxMemoryUsed[
LinearModelFit[regDat[25000], {v1Nom, v2, v3}, {v1Nom, v2, v3},
NominalVariables -> {v1Nom}]] // AbsoluteTiming
{6.92887, 4120980912}
</code></pre>
<p>This new regression takes 65x longer and uses 4.12 GB of memory. </p>
<p>I've confirmed this behavior on v11.1 on Windows and Linux. My original problem had <em>n</em>=257,000 observations, with 6 numeric variables and 4 nominal variables, and was unable to run due to excessive memory usage. But the same code ran without issue on v10 and v11. </p>
<p>(Note: The only time I've encountered memory issues using the LinearModelFit[] command is when I've inadvertently treated a numeric value as a nominal one. I'm speculating that perhaps v11.1 is treating all variables as nominal when a regression has both types.)</p>
<p>Can anyone else confirm this behavior?</p>
<p>Thanks,</p>
<p>Mark</p>
| Szabolcs | 12 | <p>You could use something like this if your data is not too large:</p>
<pre><code>terp = Interpreter[
DelimitedSequence[
DelimitedSequence["Number", {"[", Whitespace, "]"}],
{"[", Whitespace, "]"}
]
]
terp["[[1 2] [3 4]]"]
(* {{1, 2}, {3, 4}} *)
</code></pre>
<p>You can add another layer of <code>DelimitedSequence</code> if you have several such expressions separated by commas.</p>
<p>Unfortunately, this method of parsing is quite slow.</p>
|
141,484 | <p><strong>Bug introduced in 10.4.1 or earlier and fixed in 11.1.1</strong></p>
<hr>
<p>I recently installed MMA v11.1 and encountered an issue with the memory usage of the LinearModelFit[] command. It appears that when mixing numeric and nominal variables, the LinearModelFit[] command uses a very large block of memory. I first noticed this issue on a large Linux server, where the MMA kernel kept crashing after consuming all 256 GB of memory for a relatively small problem. </p>
<p>I created a simple example in an attempt to illustrate the problem:</p>
<pre><code>In[44]:= $Version
Out[44]= "11.1.0 for Microsoft Windows (64-bit) (March 13, 2017)"
</code></pre>
<p>First create a simple data matrix of size <em>n</em>:</p>
<pre><code>regDat[n_] := Transpose[{
RandomChoice[{"Yes", "No"}, n], (* nominal variable *)
RandomReal[{1, 10}, n], (* numeric #1 *)
RandomReal[{1, 10}, n], (* numeric #2 *)
RandomReal[{1, 10}, n] (* dependent variable *)
}
];
</code></pre>
<p>Now run a regression with <em>n</em> = 25,000, using only the numeric variables:</p>
<pre><code>mem1 = MaxMemoryUsed[
LinearModelFit[regDat[25000], {v2, v3}, {v1Nom, v2, v3},
NominalVariables -> {v1Nom}]] // AbsoluteTiming
{0.10582, 8813008}
</code></pre>
<p>This uses about 8.8 MB of memory. Now run the same regression, except add the nominal variable (which has values {"Yes","No"}):</p>
<pre><code>mem2 = MaxMemoryUsed[
LinearModelFit[regDat[25000], {v1Nom, v2, v3}, {v1Nom, v2, v3},
NominalVariables -> {v1Nom}]] // AbsoluteTiming
{6.92887, 4120980912}
</code></pre>
<p>This new regression takes 65x longer and uses 4.12 GB of memory. </p>
<p>I've confirmed this behavior on v11.1 on Windows and Linux. My original problem had <em>n</em>=257,000 observations, with 6 numeric variables and 4 nominal variables, and was unable to run due to excessive memory usage. But the same code ran without issue on v10 and v11. </p>
<p>(Note: The only time I've encountered memory issues using the LinearModelFit[] command is when I've inadvertently treated a numeric value as a nominal one. I'm speculating that perhaps v11.1 is treating all variables as nominal when a regression has both types.)</p>
<p>Can anyone else confirm this behavior?</p>
<p>Thanks,</p>
<p>Mark</p>
| Ali Hashmi | 27,331 | <pre><code>ToExpression[
StringReplace["[[1 4 5 6 2] [9 8 7 4 7]]", {" " -> ",", "[" -> "{", "]" -> "}"}]]
(* {{1, 4, 5, 6, 2}, {9, 8, 7, 4, 7}} *)
</code></pre>
|
267,971 | <p>I want the inside of an integral to be evaluated after some replacement, but at the same time to keep the integral itself unevaluated.</p>
<p>I start with:</p>
<pre><code>int=HoldForm[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}]]
</code></pre>
<p>Output as desired:
<span class="math-container">$$\int_0^1 \frac{x^n}{(x+1)^{n+1}} \, dx$$</span></p>
<p>When I replace <code>n</code> with some number I get output as expected:</p>
<pre><code>int /. n -> 3
</code></pre>
<p><span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{3+1}} \, dx$$</span></p>
<p>But then I want to evaluate the inside of the integral and keep the integral itself unevaluated.</p>
<p>So I tried instead:</p>
<pre><code>int = HoldForm[Integrate[Evaluate[x^n/(x + 1)^(n + 1)], {x, 0, 1}]]
</code></pre>
<p><span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^n}{(x+1)^{n+1}}\right] \, dx$$</span></p>
<pre><code>int /. n -> 3
</code></pre>
<p>output not as I wanted:
<span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^3}{(x+1)^{3+1}}\right] \, dx$$</span></p>
<p>I wanted:
<span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{4}} \, dx$$</span></p>
<p>Any ideas how to do it?</p>
| azerbajdzan | 53,172 | <p>I found a way, but does the code really have to be so ridiculous for such a simple task?</p>
<pre><code>int = HoldForm[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}]]
HoldForm[a] /.
HoldPattern[
a] -> (int /. n -> 3 /. Integrate -> Evaluate /.
HoldForm -> integr) /. integr -> Integrate
</code></pre>
<p><span class="math-container">$$\int_0^1 \frac{x^n}{(x+1)^{n+1}} \, dx$$</span>
<span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^4} \, dx$$</span></p>
|
267,971 | <p>I want the inside of an integral to be evaluated after some replacement, but at the same time to keep the integral itself unevaluated.</p>
<p>I start with:</p>
<pre><code>int=HoldForm[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}]]
</code></pre>
<p>Output as desired:
<span class="math-container">$$\int_0^1 \frac{x^n}{(x+1)^{n+1}} \, dx$$</span></p>
<p>When I replace <code>n</code> with some number I get output as expected:</p>
<pre><code>int /. n -> 3
</code></pre>
<p><span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{3+1}} \, dx$$</span></p>
<p>But then I want to evaluate the inside of the integral and keep the integral itself unevaluated.</p>
<p>So I tried instead:</p>
<pre><code>int = HoldForm[Integrate[Evaluate[x^n/(x + 1)^(n + 1)], {x, 0, 1}]]
</code></pre>
<p><span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^n}{(x+1)^{n+1}}\right] \, dx$$</span></p>
<pre><code>int /. n -> 3
</code></pre>
<p>output not as I wanted:
<span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^3}{(x+1)^{3+1}}\right] \, dx$$</span></p>
<p>I wanted:
<span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{4}} \, dx$$</span></p>
<p>Any ideas how to do it?</p>
| Michael E2 | 4,999 | <p>You can use Trott-Strzebonski or <code>RuleCondition</code> or controlled evaluation; see <a href="https://mathematica.stackexchange.com/questions/29317/replacement-inside-held-expression">Replacement inside held expression</a>, which might be considered a duplicate.</p>
<p>Variations:</p>
<pre><code>int = HoldForm[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}]];
int /. e_Times :> Block[{n = 3}, e /; True]
int /. e_Times :> Block[{n = 3}, RuleCondition[e, True]]
(* HoldForm[Integrate[x^3/(1 + x)^4, {x, 0, 1}]] *)
</code></pre>
<p>But not:</p>
<pre><code>int /. e_Times :> Block[{n = 3}, e]
(*
HoldForm[Integrate[
Block[{n = 3}, x^n/(x + 1)^(n + 1)],
{x, 0, 1}]]
*)
</code></pre>
<p>These also give the desired result:</p>
<pre><code>int /. e_Times :>
With[{i = e /. n -> 3}, RuleCondition[i, True]]
int /. e : Times[n, __] | Plus[n, __] | Power[_, n] :>
With[{i = e /. n -> 3}, RuleCondition[i, True]]
int /. HoldForm[Integrate[i_, rest___]] :>
With[{e = i /. n -> 3}, HoldForm[Integrate[e, rest]]]
int /. HoldForm[f_[args___]] :>
Block[{n = 3}, HoldForm[f[##]] &[args]]
int /. HoldForm[f_[args___]] :>
(HoldForm[f[##]] & @@ ({args} /. n -> 3))
(* HoldForm[Integrate[x^3/(1 + x)^4, {x, 0, 1}]] *)
</code></pre>
<p>Note that the very first variation assumes all the instances of <code>n</code> occur inside a <code>Times</code>, which is true in the OP's example. The pattern <code>e : Times[n, __] | Plus[n, __] | Power[_, n]</code> comprises other forms, but not all possible forms (e.g. not <code>Sin[n] x</code>).
The last three variations are more general. The third to last allows the integrand to be evaluated; the last two allow all arguments to be evaluated, should <code>n</code> appear in the limits of integration, say.</p>
<p>There is a difference between <code>ReplaceAll</code> (<code>... /. n -> 3</code>) and <code>Block[{n = 3},...]</code> if <code>n</code> appears in another function that holds its arguments, which does not occur in the OP's example. This applies to any of the variations above. In <code>ReplaceAll</code>, the symbol <code>n</code> will be replaced by <code>3</code> but not evaluated inside a function that holds its arguments. In <code>Block</code>, since <code>n</code> is not evaluated inside such a function, it won't be replaced by <code>3</code>.</p>
|
123,918 | <p>Someone <a href="https://stackoverflow.com/questions/9851628/minimal-positive-number-divisible-to-n">asked this question</a> in SO:</p>
<blockquote>
<p><span class="math-container">$1\le N\le 1000$</span></p>
<p>How to find the minimal positive number, that is divisible by N, and
its digit sum should be equal to N.</p>
</blockquote>
<p>I'm wondering: for every integer <span class="math-container">$N$</span>, can we always find a positive number <span class="math-container">$q$</span> such that it is divisible by <span class="math-container">$N$</span> and the sum of its digits is <span class="math-container">$N$</span>?</p>
| marlu | 26,204 | <p>For every $N$ there is a number $X$ such that $N$ divides $X$ and the sum of digits of $X$ equals $N$. </p>
<p><strong>Proof:</strong> Write $N = RM$ where $M$ is coprime to $10$ and $R$ contains only the prime factors $2$ and $5$. Then, by Euler's theorem, $10^{\varphi(M)} \equiv 1 \pmod M$. Consider $X' := \sum_{i=1}^{N} 10^{i\varphi(M)}$. It is a sum of $N$ numbers each of which is congruent to $1$ modulo $M$, so $X' \equiv N\cdot 1 \equiv 0 \pmod M$. Furthermore, the decimal representation of $X'$ contains exactly $N$ ones, all other digits are $0$, so the sum of digits of $X'$ is $N$. Multiply $X'$ by a high power of ten to get a multiple of $R$, call the result $X$. Then $X$ is divisible by $M$ and $R$, hence by $N$, and it has the same digit sum as $X'$ which is $N$.</p>
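<p>The construction in this proof can be carried out directly (a Python sketch, names mine; it returns <em>some</em> multiple of $N$ with digit sum $N$, not the minimal one asked for in the linked question):</p>

```python
from math import gcd

def euler_phi(m):
    # naive Euler totient; fine for the small moduli used here
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def digit_sum_multiple(n):
    """Return some positive X with n | X and digit sum of X equal to n."""
    # Split n = R * M with M coprime to 10 and R = 2^a * 5^b.
    m, a, b = n, 0, 0
    while m % 2 == 0:
        m //= 2
        a += 1
    while m % 5 == 0:
        m //= 5
        b += 1
    phi = euler_phi(m)
    # X' = sum_{i=1}^{n} 10^(i*phi) has exactly n ones as digits, and
    # X' == n == 0 (mod M) because each term is 1 (mod M).
    x = sum(10 ** (i * phi) for i in range(1, n + 1))
    # A high enough power of ten absorbs the factor R = 2^a * 5^b.
    return x * 10 ** max(a, b)

for n in [1, 7, 12, 25, 99]:
    x = digit_sum_multiple(n)
    assert x % n == 0 and sum(map(int, str(x))) == n
print("construction verified")
```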
|
1,970,305 | <p>I have just begun reading through Section 3.2 of Hatcher's Algebraic Topology. While I reasonably understood the computations relating to the cup product, I was unsure of the purpose of the cup product. From what I knew, it does not help us to compute cohomology groups, given that we need the cohomology groups to compute the cup product. </p>
<p>In a nutshell, why do we care about the cup product? </p>
| Dean C Wills | 366,201 | <p>$x=-2,y=1,z=5$ gives $6(-2) + 15(1) + 10(5) = 53$, by inspection. I'm trying to think of the general rule. </p>
|
2,280,052 | <p>Wolfram Alpha says:
$$i\lim_{x \to \infty} x = i\infty$$</p>
<p>I'm having a bit of trouble understanding what $i\infty$ means. In the long run, it seems that whatever gets multiplied by $\infty$ doesn't really matter. $\infty$ sort of takes over, and the magnitude of whatever is being multiplied is irrelevant. I.e., $\forall a \gt 0$:</p>
<p>$$a\lim_{x \to \infty} x = \infty, -a\lim_{x \to \infty} x = -\infty$$</p>
<p>What's so special about imaginary numbers? Why doesn't $\infty$ take over when it gets multiplied by $i$? Thanks.</p>
| murray | 32,337 | <p>In <em>Mathematica</em>, evaluating</p>
<pre><code>Limit[x, x -> Infinity]
</code></pre>
<p>gives (the usual shorthand symbol for the built-in entity) <code>Infinity</code>. No problem there. And also in <em>Mathematica</em>, evaluating</p>
<pre><code>I Infinity
</code></pre>
<p>must return as output the same thing you entered (albeit with the shorthand symbol for <code>Infinity</code> and the stylized <code>i</code> representing that complex number).</p>
<p>What this means is that no further evaluation is possible: <em>Mathematica</em> knows no further rules that would allow it to simplify the result further.</p>
|
1,427,595 | <blockquote>
<p>The <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow">Cayley table</a> tells us whether a group is abelian. Because the group operation of an abelian group is commutative, a group is abelian if and only if its Cayley table is symmetric along its diagonal axis.</p>
</blockquote>
<p>Sorry, but why is this true?</p>
| Nikos M. | 139,391 | <p>The <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow">Cayley table</a> records the group operation ("multiplication") for every pair of elements: the entry in row $a$, column $b$ is $a \circ b$. If the operation is commutative (abelian), then $a \circ b = b \circ a$, so the entry in row $a$, column $b$ equals the entry in row $b$, column $a$ — which is exactly symmetry along the diagonal. Conversely, a symmetric table means every pair of elements commutes.</p>
<p>If seen as a matrix $T$ with $T_{ij}$ the product of the $i$-th and $j$-th elements, this says $T_{ij} = T_{ji}$ (a symmetric matrix).</p>
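<p>A small computational illustration (a Python sketch; $\mathbb{Z}_4$ under addition mod $4$ is abelian, while $S_3$, the permutations of three symbols under composition, is not):</p>

```python
from itertools import permutations

def cayley_table(elements, op):
    return [[op(a, b) for b in elements] for a in elements]

def is_symmetric(table):
    m = len(table)
    return all(table[i][j] == table[j][i] for i in range(m) for j in range(m))

# Z_4 under addition mod 4 is abelian, so its Cayley table is symmetric:
z4 = list(range(4))
assert is_symmetric(cayley_table(z4, lambda a, b: (a + b) % 4))

# S_3 (permutations of {0,1,2} under composition) is non-abelian,
# so its Cayley table is not symmetric:
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
assert not is_symmetric(cayley_table(s3, compose))
print("Z_4 table symmetric; S_3 table not symmetric")
```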
|
980,818 | <p>I'm working on a problem that involves the following summation:
$$y=\sum_{i=0}^{x}i2^i$$
I need to determine the largest value of $x$ such that $y$ is less than or equal to some integer K. Currently I'm using a lookup table approach which is fine, but I would really like to find and understand a solution that would allow calculation of $x$.</p>
<p>Thank you!</p>
| Peter | 82,961 | <p>Use the identity</p>
<p>$$\sum_{i=0}^x i2^i=2(2^xx-2^x+1)$$</p>
<p>and calculate the value $x$ with binary search, for example.</p>
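<p>With this closed form, the largest $x$ with $y \le K$ can then be found by doubling plus binary search (a Python sketch; names are mine, and a brute-force comparison double-checks the identity):</p>

```python
def y_closed_form(x):
    # sum_{i=0}^{x} i * 2^i  =  2 * (x * 2^x - 2^x + 1)
    return 2 * (x * 2 ** x - 2 ** x + 1)

def largest_x(k):
    """Largest x >= 0 with y_closed_form(x) <= k (assumes k >= 0)."""
    hi = 1
    while y_closed_form(hi) <= k:   # find an upper bound by doubling
        hi *= 2
    lo = 0                          # y_closed_form(0) == 0 <= k
    while lo < hi - 1:              # invariant: y(lo) <= k < y(hi)
        mid = (lo + hi) // 2
        if y_closed_form(mid) <= k:
            lo = mid
        else:
            hi = mid
    return lo

# double-check the identity against the raw summation
assert all(y_closed_form(x) == sum(i * 2 ** i for i in range(x + 1))
           for x in range(50))
print(largest_x(257))   # y(4) = 98 <= 257 < 258 = y(5), so 4
```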
|
919,562 | <p>I need to prove that:</p>
<p>$$\inf\{\frac{1}{3}+\frac{3n+1}{6n^2} \Big| n\in\mathbb N\}=\frac{1}{3}$$</p>
<p>I get stuck with my proof, I'll write it down.</p>
<p>$$n\geq1$$
$$3n\geq3$$
$$3n+1\geq4$$
$$\frac{1}{3}+3n+1\geq4+\frac{1}{3}$$</p>
<p>Now I'm having a problem with the $6n^2$: if I divide by $6n^2$, I'll get a variable in the bound $4+\frac{1}{3}$.</p>
<p>Any ideas?, Thanks!</p>
| Adriano | 76,987 | <p>Let:
$$
S =
\left\{\frac{1}{3} + \frac{3n + 1}{6n^2} ~\middle|~ n \in \mathbb N\right\}
$$
Notice that since $3n + 1 > 0$ and $6n^2 > 0$, we know that $\frac{3n + 1}{6n^2} > 0$ so that $\frac{1}{3}$ is a lower bound for $S$. It remains to show that $\frac{1}{3}$ is the <strong>greatest</strong> lower bound.</p>
<p>To this end, choose any $\epsilon > 0$. Now recall that, by the Archimedean property, there is some $N \in \mathbb N$ with $N > 1$ such that $N > \frac{2}{3\epsilon}$. But then since $\frac{1}{3} + \frac{3N + 1}{6N^2} \in S$ and:
\begin{align*}
\frac{1}{3} + \frac{3N + 1}{6N^2}
&< \frac{1}{3} + \frac{3N + N}{6N^2} &\text{since }N > 1 \\
&= \frac{1}{3} + \frac{2}{3N} \\
&< \frac{1}{3} + \epsilon &\text{since }N > \frac{2}{3\epsilon} \iff \epsilon > \frac{2}{3N} \\
\end{align*}
it follows that $\frac{1}{3} = \inf S$, as desired. $~~\blacksquare$</p>
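<p>The chain of inequalities can be illustrated numerically (a Python sketch; the choice of $\epsilon$ is arbitrary, and $N$ is taken larger than both $1$ and $\frac{2}{3\epsilon}$):</p>

```python
def a(n):
    # the element of S indexed by n
    return 1 / 3 + (3 * n + 1) / (6 * n ** 2)

# 1/3 is a lower bound for S ...
assert all(a(n) > 1 / 3 for n in range(1, 10_000))

# ... and no larger number is: for any eps > 0 pick N > max(1, 2/(3*eps));
# the chain of inequalities from the proof then puts a(N) below 1/3 + eps.
eps = 1e-4
N = int(2 / (3 * eps)) + 2
assert a(N) < 1 / 3 + 2 / (3 * N) < 1 / 3 + eps
print(N, a(N))
```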
|
3,478,700 | <p>As you may know, the equation of the tangent line to a differentiable function at any point <span class="math-container">$a$</span> is given by:
<span class="math-container">$$y = f(a) + f'(a)(x-a)$$</span></p>
<blockquote>
<p>However how can I interpret this equation?
<span class="math-container">$$y = f(a) + f'(a)(x-a) + f''(a)(x-a)^2$$</span></p>
</blockquote>
<p>This would be very useful to me. It looks like a Taylor expansion at the point <span class="math-container">$a$</span>; however, I can't see it geometrically.</p>
<p>If this doesn't have an answer, is there any geometric meaning to the third derivative of a function?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>You mean <span class="math-container">$$y = f(a) + f'(a)(x-a) + \frac {f''(a)}{2}(x-a)^2$$</span></p>
<p>which is a quadratic approximation to the function around the point <span class="math-container">$(a,f(a))$</span> instead of the linear approximation which is the tangent line.</p>
<p>This is a better approximation due to the second derivative at the point <span class="math-container">$(a,f(a))$</span>, which captures the concavity of the graph in addition to the slope at the point of tangency.</p>
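<p>The improvement over the tangent line is easy to see numerically (a Python sketch with <span class="math-container">$f=\cos$</span> at <span class="math-container">$a=0$</span>, an arbitrary example of mine: near <span class="math-container">$a$</span> the linear error shrinks like <span class="math-container">$(x-a)^2$</span>, while the quadratic error shrinks faster):</p>

```python
import math

a = 0.0                       # expansion point; example function f = cos
f = math.cos
fp = lambda x: -math.sin(x)   # f'
fpp = lambda x: -math.cos(x)  # f''

def linear(x):
    # tangent line: f(a) + f'(a)(x - a)
    return f(a) + fp(a) * (x - a)

def quadratic(x):
    # quadratic approximation: adds the concavity term f''(a)/2 (x - a)^2
    return f(a) + fp(a) * (x - a) + fpp(a) / 2 * (x - a) ** 2

x = 0.1
err_lin = abs(f(x) - linear(x))       # about 5.0e-3
err_quad = abs(f(x) - quadratic(x))   # about 4.2e-6, far smaller
print(err_lin, err_quad)
assert err_quad < err_lin
```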
|
112,021 | <p>Let $n$ be a positive integer.
The $n$ by $n$ Fourier matrix may be defined as follows:</p>
<p>$$
F^{*} = (1/\sqrt{n}) (w^{(i-1)(j-1)})
$$</p>
<p>where </p>
<p>$$
w = e^{2 i \pi /n}
$$</p>
<p>is the complex $n$-th root of unity with smaller positive argument
and $*$ means transpose -conjugate.</p>
<p>It is well known that $F$ is diagonalizable with eigenvalues $1,-1,i,-i$</p>
<p>where $i^2 =-1.$</p>
<p>It is also known that $F$ has real eigenvectors:</p>
<p>COMMENT:
(I was unable to get this paper)</p>
<p>McClellan, James H.; Parks, Thomas W.
Eigenvalue and eigenvector decomposition of the discrete Fourier transform.
IEEE Trans. Audio Electroacoust. AU-20 (1972), no. 1, 66--74.
END of COMMENT</p>
<p>QUESTION:</p>
<p>Is there some simple way to get just one of these
real eigenvectors?</p>
<p>For example, how can one get a real vector with an odd number
$n=2k+1$ of coordinates and such that</p>
<p>$$
F(x) =x.
$$</p>
| paul garrett | 15,629 | <p>This has a little number-theoretic content, having to do with real-valued <em>characters</em> modulo $n=2k+1$. For example, for $n=p$ an odd prime number, there are exactly two such functions (up to scalar multiples), the function that is $1$ for non-zero-mod-$p$ inputs, and the quadratic character $\chi$ mod $p$, which is $\chi(0)=0$, $\chi(j)=+1$ for $j$ a square modulo $p$, and $\chi(j)=-1$ for $j$ a non-square mod $p$.</p>
<p>For odd $n=p_1...p_k$ a product of distinct primes, products of the trivial characters and/or the quadratic characters modulo the various $p_i$ are the $2^k$ real-valued eigenvectors for the Fourier matrix. </p>
<p>Modulo higher powers $n=p^m$ of a prime, the non-trivial character $\chi(j)$ still just depends on $j$ mod $p$ and whether it's a square or not, or is $0$, and these can be combined multiplicatively as in the previous example.</p>
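<p>For an explicit instance of this (a pure-Python sketch for $n=5$; here $\chi$ is the quadratic character mod $5$, with values $+1$ on the nonzero squares $1,4$ and $-1$ on $2,3$, and since the Gauss sum $\sum_k \chi(k)w^k$ equals $\sqrt 5$ for $p\equiv 1 \pmod 4$, the real vector $\chi$ satisfies $F\chi = \chi$, exactly as requested in the question):</p>

```python
import cmath
import math

n = 5
w = cmath.exp(2j * math.pi / n)

# F as in the question: F* = (1/sqrt n)(w^{(i-1)(j-1)}), so F itself has
# entries conj(w)^{jk} / sqrt(n), indexing rows and columns from 0.
F = [[w.conjugate() ** (j * k) / math.sqrt(n) for k in range(n)]
     for j in range(n)]

# Quadratic character mod 5: chi(0) = 0, chi = +1 on the squares {1, 4},
# chi = -1 on the non-squares {2, 3}.
chi = [0, 1, -1, -1, 1]

Fchi = [sum(F[j][k] * chi[k] for k in range(n)) for j in range(n)]

# The Gauss sum for p = 5 (p = 1 mod 4) equals sqrt(5), hence F(chi) = chi.
assert all(abs(Fchi[j] - chi[j]) < 1e-12 for j in range(n))
print("F(chi) = chi: a real vector fixed by the Fourier matrix")
```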
|
112,021 | <p>Let $n$ be a positive integer.
The $n$ by $n$ Fourier matrix may be defined as follows:</p>
<p>$$
F^{*} = (1/\sqrt{n}) (w^{(i-1)(j-1)})
$$</p>
<p>where </p>
<p>$$
w = e^{2 i \pi /n}
$$</p>
<p>is the complex $n$-th root of unity with smaller positive argument
and $*$ means transpose -conjugate.</p>
<p>It is well known that $F$ is diagonalizable with eigenvalues $1,-1,i,-i$</p>
<p>where $i^2 =-1.$</p>
<p>It is also known that $F$ has real eigenvectors:</p>
<p>COMMENT:
(I was unable to get this paper)</p>
<p>McClellan, James H.; Parks, Thomas W.
Eigenvalue and eigenvector decomposition of the discrete Fourier transform.
IEEE Trans. Audio Electroacoust. AU-20 (1972), no. 1, 66--74.
END of COMMENT</p>
<p>QUESTION:</p>
<p>Is there some simple manner to get just one of these
real eigenvectors?</p>
<p>For example, how does one get a real vector with an odd number
$n=2k+1$ of coordinates such that</p>
<p>$$
F(x) =x.
$$</p>
| Alexey Ustinov | 5,712 | <p>There is a full and simple description of all eigenvectors in the article </p>
<p>Morton, P. On the eigenvectors of Schur's matrix. J. Number Theory, 1980, 12, 122-127 <a href="http://deepblue.lib.umich.edu/bitstream/2027.42/23371/1/0000315.pdf" rel="nofollow">http://deepblue.lib.umich.edu/bitstream/2027.42/23371/1/0000315.pdf</a></p>
|
3,324,647 | <p>Say you have the following matrix A in <span class="math-container">$R^2 \rightarrow R^2$</span>:</p>
<p><span class="math-container">$
\begin{bmatrix}
7 & -10 \\
5 & -8
\end{bmatrix}
$</span></p>
<p>Thus the eigenvalues/eigenvectors are: 2 <span class="math-container">$\begin{bmatrix} 2 \\ 1 \end{bmatrix}$</span> and -3 <span class="math-container">$\begin{bmatrix} 1 \\ 1 \end{bmatrix}$</span>.</p>
<p>Thus the eigenspace matrix is <span class="math-container">$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$</span>. </p>
<p>Say you have the vector $v = (x,y) = (2,3)$; then $Av = [-16, -14]$. </p>
<p>I'm confused as to how the eigenspace and eigenvalues allow me to easily see what A is doing to the vector (2,3). </p>
<p>How do I apply the eigenvalues/eigenspace on vector v(2,3) to see what A is doing to it?</p>
| D.B. | 530,972 | <p>Keep in mind that <span class="math-container">$A(2,1) = 2(2,1)$</span> and <span class="math-container">$A(1,1) = -3(1,1)$</span> by def of eigenvalues and eigenvectors. Can you write the desired vector <span class="math-container">$(2,3)$</span> as a linear combination of <span class="math-container">$(2,1)$</span> and <span class="math-container">$(1,1)$</span>? Note that <span class="math-container">$-1(2,1)+4(1,1) = (2,3)$</span>.</p>
<p>Hence, you can simplify the calculation to
<span class="math-container">$$A(2,3) = -1*2*(2,1)+4*(-3)*(1,1) = (-4,-2)-(12,12) = (-16,-14).$$</span></p>
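<p>A quick numerical sanity check of this decomposition, in plain Python with no external libraries (my addition):</p>

```python
def matvec(M, v):
    """Multiply a 2x2 matrix (given as a list of rows) by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[7, -10], [5, -8]]

# The eigenpairs: A(2,1) = 2*(2,1) and A(1,1) = -3*(1,1).
assert matvec(A, [2, 1]) == [4, 2]
assert matvec(A, [1, 1]) == [-3, -3]

# Decompose (2,3) = -1*(2,1) + 4*(1,1), then scale each piece by its eigenvalue.
c1, c2 = -1, 4
result = [c1 * 2 * 2 + c2 * (-3) * 1,
          c1 * 2 * 1 + c2 * (-3) * 1]
print(result)             # [-16, -14]
print(matvec(A, [2, 3]))  # [-16, -14]
```

<p>This makes visible what $A$ does: along the direction $(2,1)$ it stretches by $2$, along $(1,1)$ it scales by $-3$, and any input vector is just a mix of those two behaviours.</p>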
|
1,440,470 | <p>Given two real valued independent random variables $X$ and $Y$, write their ratio as $R = \frac{X}{Y}$</p>
<p>I know various other ways of finding a formula for the distribution of $R$, but I'm specifically interested in understanding why the following derivation does not yield the correct result.</p>
<p>$$
P(R = r) = \int_{\mathbb{R}} P(\frac{X}{Y} = r | X = x)P(X = x)dx \\
= \int_{\mathbb{R}}P(Y = \frac{x}{r})P(X = x)dx \\
= \int_{\mathbb{R}}f_Y(\frac{x}{r})f_X(x)dx
$$</p>
<p>I can't see how this is wrong.</p>
| Michael | 155,065 | <p>Your posted computations seem to blur the distinctions between $Pr[X=x]$ and $f_X(x)$. For example, $Pr[X=x]$ is a number in $[0,1]$, while $f_X(x)$ can be larger than 1 for some values of $x$. </p>
<hr>
<p>A correct way of obtaining the probability of an event by conditioning is: </p>
<p>$$ Pr[R\leq r] = \int_{-\infty}^{\infty} Pr[R \leq r|X=x]f_X(x)dx \quad (Eq 1)$$</p>
<p>For a similar equation with densities, you can take a derivative of (Eq 1) with respect to $r$: </p>
<p>$$ f_R(r) = \int_{-\infty}^{\infty} f_{R|X=x}(r|X=x)f_X(x)dx $$</p>
<p>The conditional density $f_{R|X=x}(r|X=x)$ is not the same as $f_Y(x/r)$. You can compute $f_{R|X=x}(r|X=x)$ by working with $Pr[R \leq r|X=x]$ before taking derivatives. </p>
<hr>
<p>Homework questions for you (can you post answers to them?): </p>
<p>1) Give an example of a random variable $W$ with a density $f_W(w)$ that can be larger than 1 for certain values of $w$. </p>
<p>2) Obtain an expression for $Pr[R\leq r]$ similar to (Eq 1), but this time conditioning on $Y=y$. </p>
<p>3) Assume $Y$ is a positive random variable. Compute $f_{R|Y=y}(r|Y=y)$ by starting with $Pr[R\leq r|Y=y]$, manipulating this, and taking a derivative. </p>
<p>4) Assume $Y$ is a positive random variable. Compute $f_{R|X=x}(r|X=x)$ by starting with $Pr[R\leq r|X=x]$. (Remember that multiplying by negative numbers flips inequalities. If it helps, at first you can assume that $r>0$.) </p>
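<p>For completeness (my addition): carrying out these homework steps leads to the standard ratio-density formula $f_R(r) = \int_{-\infty}^{\infty} |y|\, f_X(ry)\, f_Y(y)\, dy$; the Jacobian factor $|y|$ is exactly what the flawed derivation in the question misses. When $X$ and $Y$ are independent standard normals this must give the standard Cauchy density, which a numerical check confirms:</p>

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def ratio_density(r, h=0.001, L=10.0):
    """Trapezoid approximation of f_R(r) = int |y| phi(r*y) phi(y) dy."""
    n = int(2 * L / h)
    total = 0.0
    for i in range(n + 1):
        y = -L + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt * abs(y) * phi(r * y) * phi(y)
    return total * h

for r in (0.0, 0.5, 1.0, 2.0):
    cauchy = 1.0 / (math.pi * (1 + r * r))
    print(r, ratio_density(r), cauchy)  # the last two columns agree closely
```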
|
2,751,819 | <p>I need some help solving this.
I have tried:</p>
<p>$$
\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
=\frac{1}{\operatorname{det}A}\cdot \begin{bmatrix}
d & -b \\
-c & a \\
\end{bmatrix}$$
I ended up with $$a=\frac{d}{\operatorname{det}A},$$
and
$$d=\frac{a}{\operatorname{det}A}.$$
Then
$$\operatorname{tr}(A)=a+d=\frac{a+d}{\operatorname{det}A},$$
but I don't really think it works.</p>
| Naweed G. Seldon | 395,669 | <p>You have $A^{-1} = A \implies A^2 = I$. So we just calculate that for the matrix you have $$\begin{pmatrix} 1 & 0 \\ 0& 1 \end{pmatrix} = \begin{pmatrix} a & b \\ c& d \end{pmatrix}\cdot\begin{pmatrix} a & b \\ c& d \end{pmatrix} = \begin{pmatrix} a^2 + bc & (a+d)\cdot b \\ (a+d)\cdot c & bc + d^2 \end{pmatrix}$$</p>
<p>Therefore, you get $(a+d)\cdot b = (a+d)\cdot c = 0$. </p>
<p>Case 1: $a+d = 0$, we're done.</p>
<p>Case 2: If $a+d \neq 0$, then $b = c = 0$, so $a^2 = d^2 = 1 \implies a^2 - d^2 = 0 \implies a = \pm d$. </p>
<p>If $a = -d \implies a+d = 0$</p>
<p>If $a = d$, then $A = aI$, and $A^2 = I \implies a = \pm 1 \implies A = \pm I$</p>
<p>Therefore, for $A^2 = I$, we have $\text{tr} A = 0, 2, -2$</p>
<hr>
<p><strong>Continuing from your chain of thought</strong></p>
<p>In your calculations, you obtained the following, </p>
<p>$$a+d = \frac{a+d}{\det A}$$
Now, as we know if $A^2 = I$, then $\det A = \pm1$. </p>
<p>Therefore, for $\det A = -1$. You get $$a+d = -(a+d) \implies \text{tr} A = 0$$</p>
<p>For $\det A = 1$, you get $$\begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} d & -b\\ -c & a \end{pmatrix}$$</p>
<p>$\implies b = c = 0, a = d$. Therefore $A = aI$, and $\text{tr} A = 2a$
$$A^2 = I \implies (aI)^2 = I \implies a^2 = 1 \implies a = \pm 1$$</p>
<p>Hence, $\text{tr} A = \pm 2$</p>
<p><strong>Comment:</strong> For a more 'clean' answer, please refer to Omnomnomnom's answer/response. I provided an elementary answer to demonstrate one exists, but the better and faster way, in my opinion, is still using eigenvalues. </p>
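<p>A small brute-force check of the conclusion (my addition): searching all $2\times2$ integer involutions with entries in $\{-3,\dots,3\}$ turns up exactly the traces $0$, $2$, and $-2$:</p>

```python
import itertools

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

I = [[1, 0], [0, 1]]
traces = set()
for a, b, c, d in itertools.product(range(-3, 4), repeat=4):
    A = [[a, b], [c, d]]
    if matmul2(A, A) == I:  # A is its own inverse
        traces.add(a + d)
print(sorted(traces))  # [-2, 0, 2]
```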
|
83,945 | <p>I've got a uniform random variable $X\sim\mathcal{U}(-a,a)$ and a normal random variable $Y\sim\mathcal{N}(0,\sigma^2)$. I am interested in their sum $Z=X+Y$. Using the convolution integral, one can derive the p.d.f. for $Z$:</p>
<p>$$f_Z(x)=\frac{1}{2a\sqrt{2\pi}}\int_{x-a}^{x+a}e^{-\frac{u^2}{2\sigma^2}}du=\frac{1}{2a}\left[\Phi\left(\frac{x+a}{\sigma}\right)-\Phi\left(\frac{x-a}{\sigma}\right)\right]$$
where $\Phi(\cdot)$ is the normalized Gaussian cdf.</p>
<p>I am trying to evaluate $\int_{-\infty}^{\infty} f_Z^2(x)dx$ and $\int_{-\infty}^{\infty} f_Z^3(x)dx$. Are there bounds on these expressions in terms of elementary functions? Can they be expressed in terms of a finite sum involving $\Phi(\cdot)$?</p>
| Dilip Sarwate | 15,941 | <p>I don't have a complete answer but just a suggestion for part of your question.</p>
<p>$f_Z$ is the convolution of a uniform density and a Gaussian density. Thus,
its Fourier transform or characteristic function $\Psi_Z$ is the product of
a Gaussian function and a sinc function. Parseval's theorem then gives us
that
$$
\int (f_Z)^2 = \int |\Psi_Z|^2
$$
(there may be a $2\pi$ or something similar that needs to be included in that
equation depending on how the Fourier transform is defined).
The integrand on the right hand side is the product of <em>another</em> Gaussian
and a $\text{sinc}^2$ function, and the integral on the right might even have a
closed form that is known already, or be more amenable to numerical
integration than something that requires multiple computations of values of
$\Phi(x)$. </p>
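<p>Carrying out this suggestion numerically (my addition): here $\Psi_Z(t) = \frac{\sin(at)}{at}\,e^{-\sigma^2t^2/2}$, and with the convention $\hat f(t) = \int f(x)e^{itx}\,dx$ Parseval's theorem reads $\int f_Z^2 = \frac{1}{2\pi}\int|\Psi_Z|^2$. A check for $a=\sigma=1$:</p>

```python
import math

a, sigma = 1.0, 1.0

def Phi(x):
    """Standard normal cdf via math.erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_Z(x):
    """Density of Z = X + Y from the question."""
    return (Phi((x + a) / sigma) - Phi((x - a) / sigma)) / (2 * a)

def psi_sq(t):
    """|Psi_Z(t)|^2, the squared characteristic function of Z."""
    sinc = 1.0 if t == 0 else math.sin(a * t) / (a * t)
    return (sinc * math.exp(-sigma ** 2 * t ** 2 / 2)) ** 2

def trapezoid(g, lo, hi, n=40000):
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n))
    return s * h

lhs = trapezoid(lambda x: f_Z(x) ** 2, -12, 12)
rhs = trapezoid(psi_sq, -12, 12) / (2 * math.pi)
print(lhs, rhs)  # both come out to about 0.2430
```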
|
83,945 | <p>I've got a uniform random variable $X\sim\mathcal{U}(-a,a)$ and a normal random variable $Y\sim\mathcal{N}(0,\sigma^2)$. I am interested in their sum $Z=X+Y$. Using the convolution integral, one can derive the p.d.f. for $Z$:</p>
<p>$$f_Z(x)=\frac{1}{2a\sqrt{2\pi}}\int_{x-a}^{x+a}e^{-\frac{u^2}{2\sigma^2}}du=\frac{1}{2a}\left[\Phi\left(\frac{x+a}{\sigma}\right)-\Phi\left(\frac{x-a}{\sigma}\right)\right]$$
where $\Phi(\cdot)$ is the normalized Gaussian cdf.</p>
<p>I am trying to evaluate $\int_{-\infty}^{\infty} f_Z^2(x)dx$ and $\int_{-\infty}^{\infty} f_Z^3(x)dx$. Are there bounds on these expressions in terms of elementary functions? Can they be expressed in terms of a finite sum involving $\Phi(\cdot)$?</p>
| Robert Israel | 8,508 | <p>It looks to me like $$ \int_{-\infty}^\infty f_Z(x)^2\, dx = -\frac{\sigma}{2a^2\sqrt{\pi}}+\frac{1}{2a}\operatorname{erf}\left(\frac{a}{\sigma}\right)+ \frac{\sigma}{2a^2 \sqrt{\pi}}\, e^{-a^2/\sigma^2}
$$</p>
<p>EDIT: OK, here's the proof.</p>
<p>For convenience, scale distances so that $\sigma = 1$. We're looking at</p>
<p>$$J =\frac{1}{8 \pi a^2} \int_{-\infty}^\infty dx \int_{x-a}^{x+a} ds \int_{x-a}^{x+a} dt\ e^{-(s^2+t^2)/2} $$ </p>
<p>Interchange the order of integration so this becomes</p>
<p>$$\eqalign{ J &= \frac{1}{8 \pi a^2} \int_{-\infty}^\infty ds \int_{s-2a}^{s+2a} dt \int_{\max(s,t)-a}^{\min(s,t)+a} dx\ e^{-(s^2+t^2)/2}\cr
&= \frac{1}{8 \pi a^2} \int_{-\infty}^\infty ds \int_{s-2a}^{s+2a} dt\ (2a + \min(s,t) -\max(s,t)) e^{-(s^2+t^2)/2} \cr}$$</p>
<p>Break this up into two pieces, one where $s<t$ and the other where $s>t$.</p>
<p>$$ \eqalign{J_1 &= \frac{1}{8 \pi a^2} \int_{-\infty}^\infty ds \int_s^{s+2a} dt\ (2a + s-t) e^{-(s^2+t^2)/2}\cr
J_2 &= \frac{1}{8 \pi a^2} \int_{-\infty}^\infty ds \int_{s-2a}^{s} dt\ (2a + t-s) e^{-(s^2+t^2)/2}\cr}$$</p>
<p>In $J_1$, take $t = s+u$ (so that $-(s^2+t^2)/2 = -s^2 - us - t^2/2$); in $J_2$, take $t = s - u$ (so that $-(s^2+t^2)/2 = -s^2 +us - t^2/2$). With these changes of variables we recombine the integrals:</p>
<p>$$ J = \frac{1}{8 \pi a^2} \int_{-\infty}^\infty ds \int_0^{2a} du\ (2a - u) e^{-s^2} (e^{-us} + e^{us}) e^{-u^2/2} $$</p>
<p>Interchange the order of integration again</p>
<p>$$ \eqalign{J &= \frac{1}{8 \pi a^2} \int_0^{2a} du \int_{-\infty}^\infty ds\ (2a-u) e^{-s^2} (e^{-us}+e^{us}) e^{-u^2/2} \cr
&= \frac{1}{2\sqrt{\pi} a^2} \int_0^{2a} du\ (a-u/2) e^{-u^2/4}\cr
&= \frac{1}{2a} \text{erf}(a) + \frac{1}{2 \sqrt{\pi} a^2} e^{-a^2} - \frac{1}{2 \sqrt{\pi} a^2}\cr}$$ </p>
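<p>A numerical cross-check of this closed form (my addition), comparing it against direct integration of $f_Z^2$ for several values of $a$; the scaling in the proof sets $\sigma = 1$, but the formula is stated for general $\sigma$, so the check keeps $\sigma$ as a parameter:</p>

```python
import math

def Phi(x):
    """Standard normal cdf via math.erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def closed_form(a, sigma=1.0):
    """The proposed closed form for the integral of f_Z(x)^2."""
    sp = math.sqrt(math.pi)
    return (-sigma / (2 * a ** 2 * sp)
            + math.erf(a / sigma) / (2 * a)
            + sigma * math.exp(-(a / sigma) ** 2) / (2 * a ** 2 * sp))

def numeric(a, sigma=1.0, L=15.0, n=60000):
    """Trapezoid approximation of the integral of f_Z(x)^2 over [-L, L]."""
    h = 2 * L / n
    def f(x):
        return (Phi((x + a) / sigma) - Phi((x - a) / sigma)) / (2 * a)
    s = 0.5 * (f(-L) ** 2 + f(L) ** 2) + sum(f(-L + i * h) ** 2 for i in range(1, n))
    return s * h

for a in (0.5, 1.0, 2.0):
    print(a, closed_form(a), numeric(a))  # the two columns agree closely
```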
|
2,779,429 | <blockquote>
<p>Evaluate $$\int \frac {dx}{\sin \frac x2\sqrt {\cos^3 \frac x2}}$$</p>
</blockquote>
<p>My try </p>
<p>Write $t=\frac x2$ and hence $dx=2dt$</p>
<p>To change the integral to $$\int \frac {\csc t dt}{\cos^{\frac 32} t}$$</p>
<p>Multiplying both bottom and top by $\csc t$ and then using $\csc^2 t=1+\cot^2 t$ in the numerator the problem simplifies to $$2\int (\sin t)(\cos ^{\frac {-3}{2}} t) dt+2\int \frac {\cot^3 t dt}{\sqrt {\cos t}}$$</p>
<p>Now the first integral is easy to do, but I am not getting any idea for the second one. Any help would be very beneficial. New methods are also welcome. </p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong>\begin{align}\int\frac{\mathrm dx}{\sin\left(\frac x2\right)\sqrt{\cos^3\left(\frac x2\right)}}&=\int\frac{\sin\left(\frac x2\right)}{\left(1-\cos^2\left(\frac x2\right)\right)\sqrt{\cos^3\left(\frac x2\right)}}\,\mathrm dx\\&=-2\int\frac{\mathrm dt}{(1-t^2)t\sqrt t}.\end{align}</p>
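<p>The substitution $t = \cos\frac{x}{2}$ behind this hint can be spot-checked numerically (my addition): since $dt = -\frac12\sin\frac{x}{2}\,dx$, the transformed integrand times $\frac{dt}{dx}$ must reproduce the original integrand at every $x \in (0, \pi)$:</p>

```python
import math

def original(x):
    """Original integrand 1 / (sin(x/2) * cos(x/2)^(3/2))."""
    return 1.0 / (math.sin(x / 2) * math.cos(x / 2) ** 1.5)

def transformed(x):
    """G(t) * dt/dx with t = cos(x/2) and G(t) = -2 / ((1 - t^2) t sqrt(t))."""
    t = math.cos(x / 2)
    G = -2.0 / ((1 - t * t) * t * math.sqrt(t))
    dtdx = -0.5 * math.sin(x / 2)
    return G * dtdx

for x in (0.3, 1.0, 2.0, 3.0):
    print(abs(original(x) - transformed(x)) < 1e-9)  # True each time
```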
|
2,939,163 | <p>I want to find a certain <span class="math-container">$x$</span> that belongs to <span class="math-container">$\mathbb R$</span> so that </p>
<p><span class="math-container">$$\left|\begin{array}{ccc}1&x&1\\x&1&0\\0&1&x\end{array}\right|=1$$</span></p>
<p>This should be easy enough. I apply the Laplace expansion on the third row so I get</p>
<p><span class="math-container">$$0-\left|\begin{array}{cc}1 & 1\\x&0\end{array}\right|+x\left|\begin{array}{cc}1&x\\x &1\end{array}\right|=1$$</span></p>
<p>So we have</p>
<p><span class="math-container">$$-(0-x)+x(1-x^2)=1\implies x+x-x^3=1\implies x^3-2x+1=0$$</span></p>
<p>I'm kind of stuck because I'm not entirely familiar with solving cubic functions. I don't think there's a way to refactor this. Perhaps I should have found another way to solve this. <span class="math-container">$x=1$</span> is definitely a solution, but there's another one that I'm missing. Any hints?</p>
| say era | 470,734 | <p>As said before, <span class="math-container">$x=1$</span> is a solution.</p>
<p>You can then factor your polynomial as <span class="math-container">$x^3-2x+1=(x-1)(x^2+x-1)$</span>.</p>
<p>So you can solve the second degree polynomial and get the two other solutions.</p>
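<p>Putting the pieces together (my addition): the three roots are $x = 1$ and $x = \frac{-1 \pm \sqrt 5}{2}$, and each of them does make the determinant equal to $1$:</p>

```python
import math

def det3(x):
    """Determinant of [[1, x, 1], [x, 1, 0], [0, 1, x]], expanded along row 1."""
    return (1 * (1 * x - 0 * 1)
            - x * (x * x - 0 * 0)
            + 1 * (x * 1 - 1 * 0))   # simplifies to 2x - x^3

roots = [1.0, (-1 + math.sqrt(5)) / 2, (-1 - math.sqrt(5)) / 2]
for r in roots:
    print(abs(det3(r) - 1) < 1e-9)  # True for each root
```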
|
1,077,284 | <p>I am trying to find the equation of a 3D surface as illustrated below. The boundary of this surface is composed of two planar elliptical arcs $AB$ and $AC$, as well as a 3D arc $BC$, which is a curve on an elliptical surface described nicely in <a href="https://math.stackexchange.com/a/1075515/62050">this post</a>. Could someone kindly explain how this surface bounded by $AB$, $AC$, and $BC$ can be put into an equation? Thanks in advance.</p>
<p><img src="https://i.stack.imgur.com/wG2XK.png" alt="enter image description here"></p>
| JimmyK4542 | 155,509 | <p>Since $R$ is a simply connected region bounded by the curve $g$, Green's Theorem tells you that $$\displaystyle\iint\limits_{R}\left[\dfrac{\partial Q}{\partial x}-\dfrac{\partial P}{\partial y}\right]\,dx\,dy = \oint\limits_{g}P\,dx+Q\,dy$$</p>
<p>for functions $P(x,y)$ and $Q(x,y)$. </p>
<p>You want to compute the area of $R$, which is given by $\displaystyle\iint\limits_{R}1\,dx\,dy$. </p>
<p>To make the formula for Green's Theorem useful for calculating the area of $R$, you should pick functions $P(x,y)$ and $Q(x,y)$ such that $\dfrac{\partial Q}{\partial x}-\dfrac{\partial P}{\partial y} = 1$. One such choice is $P(x,y) = 0$ and $Q(x,y) = x$. Another choice is $P(x,y) = -y$ and $Q(x,y) = 0$. Yet another choice is $P(x,y) = -\frac{1}{2}y$ and $Q(x,y) = \frac{1}{2}x$. </p>
<p>This gives you the following formulas for the area of $R$:
$$\displaystyle\iint\limits_{R}1\,dx\,dy = \oint\limits_{g}x\,dy = -\oint\limits_{g}y\,dx = \dfrac{1}{2}\oint\limits_{g}x\,dy-\dfrac{1}{2}\oint\limits_{g}y\,dx$$ </p>
<p>Any one of those last three line integrals can be used to compute the area of $R$. </p>
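<p>As a concrete illustration (my addition), here is the last formula applied to the ellipse $x = a\cos t$, $y = b\sin t$, whose area should come out to $\pi a b$:</p>

```python
import math

def ellipse_area(a, b, n=100000):
    """Area via (1/2) * the closed line integral of (x dy - y dx), discretized."""
    total = 0.0
    for i in range(n):
        t0, t1 = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
        x0, y0 = a * math.cos(t0), b * math.sin(t0)
        x1, y1 = a * math.cos(t1), b * math.sin(t1)
        # increment of (1/2)(x dy - y dx) along one small step of the boundary
        total += 0.5 * (x0 * (y1 - y0) - y0 * (x1 - x0))
    return total

print(ellipse_area(3, 2), math.pi * 3 * 2)  # both about 18.8496
```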
|
1,554,603 | <p>Let $\theta \in \mathbb R$, and let $T\in\mathcal L(\mathbb C^2)$ have canonical matrix</p>
<p>$M(T)$ = $$
\left(
\begin{matrix}
1 & e^{i\theta} \\
e^{-i\theta} & -1 \\
\end{matrix}
\right)
$$
(a) Find the eigenvalues of $T$.</p>
<p>(b) Find an orthonormal basis for $\mathbb C^2$ that consists of eigenvectors for $T$.</p>
<p>I can get the eigenvalues of $T$, and they are $\sqrt 2$ and $-\sqrt 2$. However, I cannot get the eigenvector corresponding to each eigenvalue. I know how to get eigenvectors by calculating the null space of $(T - \lambda I)$, but it looks like this is not a proper method to solve this problem. So, can anyone help? Thank you! </p>
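<p>(Added aside: the claimed eigenvalues are easy to confirm numerically for any $\theta$, since the trace is $0$ and the determinant is $-1 - e^{i\theta}e^{-i\theta} = -2$, so the characteristic polynomial is $\lambda^2 - 2$ regardless of $\theta$.)</p>

```python
import cmath
import math

def eigenvalues(theta):
    """Eigenvalues of [[1, e^{i theta}], [e^{-i theta}, -1]] via the quadratic formula."""
    a, d = 1.0, -1.0
    b = cmath.exp(1j * theta)
    c = cmath.exp(-1j * theta)
    tr, det = a + d, a * d - b * c       # trace 0, determinant -2
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

for theta in (0.0, 0.7, math.pi / 3, 2.5):
    l1, l2 = eigenvalues(theta)
    print(abs(l1 - math.sqrt(2)) < 1e-9, abs(l2 + math.sqrt(2)) < 1e-9)  # True True
```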
| Mark | 94,840 | <p>As the other answer list, the number of ideals is actually $12$. One other way to show this is to use the Chinese Remainder Theorem, which gives an isomorphism
$$\mathbb Z\diagup60\mathbb Z \xrightarrow{\sim}
\left(\mathbb Z\diagup4\mathbb Z\right) \times
\left(\mathbb Z\diagup3\mathbb Z\right) \times
\left(\mathbb Z\diagup5 \mathbb Z\right)$$</p>
<p>Hence, the number of ideals in $\mathbb Z\diagup60\mathbb Z$ is the product of the number of
ideals of the three factors. Since $\mathbb Z\diagup3\mathbb Z$ and $\mathbb
Z\diagup 5\mathbb Z$ are fields, they only have $2$ ideals (the zero ideal and the
unit ideal). $\mathbb Z\diagup4\mathbb Z$ additionally has the ideal $(2)$; $(3)$ is easily seen to be identical to the unit ideal. So
the result is $3 \cdot 2 \cdot 2 = 12$.</p>
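<p>Since the ideals of $\mathbb Z\diagup n\mathbb Z$ correspond to the divisors of $n$, the count of $12$ can also be confirmed directly (my addition):</p>

```python
n = 60
# Ideals of Z/nZ are exactly (d) for d dividing n, so count the divisors of n.
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(len(divisors))  # 12
print(divisors)       # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```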
|
46,905 | <p>I need to draw a set of curves (characteristic equations) on one graph. As you can see, they have exchanged x and y axes. My goal is to plot all of those curves together. Is there a way to do that? </p>
<pre><code>f[t_, t0_] := -(2 - 4/Pi*ArcTan[2])*Exp[-t]*(t - t0);
g[x_, x0_] := (x - x0)/(-(2 - 4/Pi*ArcTan[x + 2]));
Show[Table[Plot[f[t, t0], {t, 0, 1},
PlotRange -> {0, -0.3},
AxesLabel -> {t, x}], {t0, 0, 1, 0.1}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/6BDw9.jpg" alt="enter image description here"></p>
<pre><code>Show[
Table[
Plot[g[x, x0], {x, 0, -0.3}, PlotRange -> {0, 1}, AxesLabel -> {x, t}],
{x0, 0, -0.3, -0.05}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/F9h7Q.jpg" alt="enter image description here"></p>
| Kuba | 5,478 | <p>The problem boils down to "how to plot an inverse function without an explicit formula". You can use <code>ParametricPlot[{h[y],y},{y...]</code>:</p>
<pre><code>Show[
Plot[Table[f[t, t0], {t0, 0, 1, .1}], {t, 0, 1},
Evaluated -> True, PlotStyle -> Blue],
ParametricPlot[Table[{g[x, x0], x}, {x0, -0.3, 0, 0.05}], {x, -.3, 0},
Evaluated -> True, PlotStyle -> Red]
,
PlotRange -> {{0, 1}, {-.3, 0}}, Frame -> True, FrameLabel -> {"t", "x"},
BaseStyle -> {18, Bold}]
</code></pre>
<blockquote>
<p><img src="https://i.stack.imgur.com/7r25U.png" alt="enter image description here"></p>
</blockquote>
|
419,370 | <p>When defining a term it seems common to use 'if' when the stronger 'iff' is also true. For instance:</p>
<p>Definition 1: A set $A$ is <em>open</em> in $(X,d)$ if $\forall x \in A$, $\exists \epsilon \gt 0$ such that $ B(x,\epsilon) \subseteq A$.</p>
<p>Since this is a definition, there are obviously no cases when the reverse conditional fails so it would be true to write 'iff' instead. But it seems strange to me that it's not common to write the formally stronger statement. I suppose the reasoning is (a) the lack of ambiguity mentioned above (b) potentially writing 'iff' might look as though one were stating an equivalent condition that should <em>not</em> be taken as the definition, e.g.</p>
<p>Observation 2: A set $A$ is open in $(X,d)$ iff $X\setminus A$ is closed in $(X,d)$.</p>
<p>Am I right that this is the convention? Is it acceptable/understandable to write 'iff' for definitions? Apologies if this is not a well-enough-formed question for the local standards.</p>
<hr>
<p>It's also occurred to me that there might be space in the notation to adapt the definitional '$:=$' to give '$:\!\mathrm{iff}$' to be used in such cases, e.g.</p>
<p>Definition 3: A set $A$ is <em>open</em> in $(X,d)$ :iff $\forall x \in A$, $\exists \epsilon \gt 0$ such that $ B(x,\epsilon) \subseteq A$.</p>
<p>Or indeed:</p>
<p>Let $A := \{1,2,3\}$ and $B:=\{1,2\}$. Then each $b \in B$ is also in $A$. Now $$\forall b \in B, b \in A \quad \mathrm{iff\!:} \quad B \subseteq A$$ so $B \subseteq A$ by definition.</p>
<p>Has this been used? Would it be sensible usage? Can I claim it as a great notational victory and tell people about it at parties?</p>
| rschwieb | 29,335 | <p>Sidestepping the philosophical stuff that's about to ensue, let me say this. Since "if" in a definition is correct already, it would be unattractive to replace it with a more restrictive condition "iff." In mathematics, a rule of thumb is to not overcomplicate something by using a stronger thing when a weaker thing already suffices. Basically, you have nothing to gain but dirty looks from those who believe "iff" is incorrect :)</p>
<hr>
<p>No, it is <em>conventionally</em> not really right to use the biconditional when first defining terms. I finally managed to dig up <a href="http://en.wikipedia.org/wiki/Wikipedia_talk%3aWikiProject_Mathematics/Archive/2011/Jul#Use_of_.22iff.22_in_articles_with_definitions" rel="nofollow">this exchange at the math wikiproject</a> which contains some insights on preferring "if". I am aware of another exchange on the topic in 2006 where an editor vehemently advocated "iff," but I don't think that author or his arguments matched (the expertise of) the ones given in this more recent discussion I am linking. (Even at the 2006 discussion, Ryan Reich showed up to weigh in on preferring "if".)</p>
<p>I think the links I provided have ample evidence to show that the most popular <em>convention</em> is to use "if" and not "iff." One very experienced mathematician at the math wikiproject went so far as to say that the use of iff in definition is "a hallmark of amateurish mathematical writing that almost never appears in quality publications."</p>
<p>(Incidentally Wikipedia also has <a href="http://en.wikipedia.org/wiki/Iff#Definitions" rel="nofollow">a little bit</a> addressing this, and I know that the mathematics project Manual of Style includes lines about not using iff in definitions.)</p>
<hr>
<p>It is fine to use the biconditional when showing that another condition is equivalent to the condition you used <em>when first defining your term</em>.</p>
<p>When you make a definition, you are relabeling a (potentially complex) set of conditions with a simpler name. I don't think it is really a logical "if", it is more of a definitional linguistic "if". Some logician may show up and blow me out of the water by saying that there really is no difference, but I'll still go out on a limb and try to describe why using "iff" sounds fishy to me.</p>
<p>It's tempting to conflate the logical biconditional with the linguistic relation of being "synonymous." However, you have to remember that when we are writing biconditionals we are within the framework of some logical calculus. The terms that are referred to in this calculus have to be defined before we can incorporate them in logical statements.</p>
<p>Another thing to realize is that you don't really need "if" to write definitions. You can say things like "we define a square to be a simple polygon which satisfies (conditions)." Or: "There are seven days in the week Monday, ... Friday. The two days Saturday and Sunday are defined to be <em>weekend days</em>.</p>
<p>There isn't really any "if A, B" or "A if B" going on here: the act of defining takes place just outside of the logical framework.</p>
|
419,370 | <p>When defining a term it seems common to use 'if' when the stronger 'iff' is also true. For instance:</p>
<p>Definition 1: A set $A$ is <em>open</em> in $(X,d)$ if $\forall x \in A$, $\exists \epsilon \gt 0$ such that $ B(x,\epsilon) \subseteq A$.</p>
<p>Since this is a definition, there are obviously no cases when the reverse conditional fails so it would be true to write 'iff' instead. But it seems strange to me that it's not common to write the formally stronger statement. I suppose the reasoning is (a) the lack of ambiguity mentioned above (b) potentially writing 'iff' might look as though one were stating an equivalent condition that should <em>not</em> be taken as the definition, e.g.</p>
<p>Observation 2: A set $A$ is open in $(X,d)$ iff $X\setminus A$ is closed in $(X,d)$.</p>
<p>Am I right that this is the convention? Is it acceptable/understandable to write 'iff' for definitions? Apologies if this is not a well-enough-formed question for the local standards.</p>
<hr>
<p>It's also occurred to me that there might be space in the notation to adapt the definitional '$:=$' to give '$:\!\mathrm{iff}$' to be used in such cases, e.g.</p>
<p>Definition 3: A set $A$ is <em>open</em> in $(X,d)$ :iff $\forall x \in A$, $\exists \epsilon \gt 0$ such that $ B(x,\epsilon) \subseteq A$.</p>
<p>Or indeed:</p>
<p>Let $A := \{1,2,3\}$ and $B:=\{1,2\}$. Then each $b \in B$ is also in $A$. Now $$\forall b \in B, b \in A \quad \mathrm{iff\!:} \quad B \subseteq A$$ so $B \subseteq A$ by definition.</p>
<p>Has this been used? Would it be sensible usage? Can I claim it as a great notational victory and tell people about it at parties?</p>
| wendy.krieger | 78,024 | <p>The distinction between <strong>if</strong> and <strong>iff</strong> is that <strong>if</strong> can be a subset relation, while <strong>iff</strong> is a set equality relation.</p>
<p>The role of a definition is to bring things into view of a theory, so it needs to deal with failure in the theory as well. Correspondingly, while a definition might be inspired by the theory, it makes no assumptions that are left to the theory.</p>
<p>Here we shall use quotes to mark out what is being defined. For example, '1 foot = 12 inches' is a statement. Writing '1 foot = 12 "inches" ' is the definition of an inch as 1/12 foot. When quotes are set around a relation, the items are elsewhere defined, and the relation is said to be true: '1 foot "=" 12 inches' supposes the foot and inch are separately defined (eg as fractions of a yard), and the equality is said to be true. It can be set around a number, when one wants to discuss that relationship, eg 'One metre = "39.37" inches'.</p>
<p>A statement 'A iff B' equates to '(A if B) = (B if A)'. This is a relationship which can not be used as a definition. One can use statements like</p>
<ul>
<li>("A" if B) and (B if "A") requires both to be true.</li>
<li>("A1" if B) and (B if "A2") leaves 'A1 = A2' to be set by the theory</li>
<li>("A" if B), leaving B if A to theory.</li>
</ul>
<p>The reason that the equality fails is that it is not the role of definitions to make assumptions about either A or B. Instead, a definition must assume that the sets A and B are not identical, and 'A if B' suggests that A is a subset of B.</p>
<p>Defining both parts of the relation means that some X can only become A if both B arises from it and it arises from B. But it is well within the scope of the theory to find B from A if A comes from B. So the first definition is actually redundant, and the second part might be discarded.</p>
<p>The definition by A1 and A2 is useful for testing whether the variations of A are identical. There is a test of whether <em>inertial mass</em> ($F = ma$) and <em>gravitational mass</em> ($a = GM/r^2$) are identical; they are known to agree to 14 places. This experiment could be written as A1"="A2.</p>
<p>In practice, one either defines ("A" if B) or defines (B if "A"), and lets the theory set the other value. </p>
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
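<p>(A quick numerical spot-check of $(1)$, added as an aside: $g$ does send points of the unit circle other than $1$ to purely imaginary numbers.)</p>

```python
import cmath

def g(z):
    """The Moebius transformation (1 + z) / (1 - z)."""
    return (1 + z) / (1 - z)

for t in (0.5, 1.0, 2.0, 3.0, 4.5, 6.0):
    w = g(cmath.exp(1j * t))    # a point on the unit circle, not equal to 1
    print(abs(w.real) < 1e-12)  # True: the image lies on the imaginary axis
```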
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then G' is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually shown only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of <span class="math-container">$G$</span> where <span class="math-container">$f=1$</span>.</p>
| Angina Seng | 436,618 | <p>The Cauchy-Riemann equations have a geometric interpretation. Let $f$
be holomorphic at $a$ and let $f'(a)\ne0$. Consider the horizontal
line through $a$ consisting of points $a+s$ for real $s$, and
also the vertical line through $a$, that is the points $a+it$ for
$t$ real.
Then these are mapped by $f$ into two curves $C_1$ and $C_2$ meeting
at $f(a)$. Cauchy-Riemann implies these meet at right angles there.</p>
<p>But if the image of $f$ lay within a one-dimensional set such as
the unit circle, then $C_1$ and $C_2$ would be confined to it
too, which means they could not intersect orthogonally. The only
way out of this impasse is for $f'(a)=0$. This must happen for all $a$.</p>
<p>An introductory book that makes much of such geometric interpretations
is Needham's <em>Visual Complex Analysis</em> (OUP).</p>
<p>If you really don't like geometry, write $f(x+iy)=u+iv$ in the usual
way. If $f$ maps to the unit circle, then $u^2+v^2=1$. Differentiating
gives $uu_x+vv_x=uu_y+vv_y=0$. Cauchy-Riemann gives
$-uv_x+vu_x=0$. Then
$$u_x=u^2u_x+v^2u_x=u(uu_x+vv_x)+v(-uv_x+vu_x)=0$$
and similarly $v_x=0$. Therefore $f'=0$.</p>
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then G' is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually shown only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of <span class="math-container">$G$</span> where <span class="math-container">$f=1$</span>.</p>
| Didier | 788,724 | <p>Here is an idea that uses a weaker version of the open mapping theorem:</p>
<ul>
<li>if <span class="math-container">$f$</span> is constant, then there is nothing to prove</li>
<li>if <span class="math-container">$f$</span> is non-constant, <span class="math-container">$f'$</span> is not identically zero: let <span class="math-container">$z_0$</span> such that <span class="math-container">$f'(z_0) \neq 0$</span>. By the inverse function theorem, there exists an open neighbourhood of <span class="math-container">$z_0$</span>, say <span class="math-container">$V$</span>, such that <span class="math-container">$f(V)$</span> is open and <span class="math-container">$f : V \to f(V)$</span> is bijective with smooth inverse. Hence <span class="math-container">$f(V) \subset \mathrm{Im}(f)$</span>, and the image of <span class="math-container">$f$</span> contains an open subset of <span class="math-container">$\mathbb{C}$</span>: it cannot fit in the circle, which has empty interior.</li>
</ul>
|
2,994,296 | <p>I'm trying to figure out how to prove, that <span class="math-container">$$\lim_{n\to \infty} \frac{n^{4n}}{(4n)!} = 0$$</span>
The problem is, that <span class="math-container">$$\lim_{n\to \infty} \frac{n^{n}}{n!} = \infty$$</span>
and I have no idea how to prove the first limit equals <span class="math-container">$0$</span>. </p>
| Anurag A | 68,092 | <p>Using Stirling's approximation: <span class="math-container">$n! \approx c \sqrt{n}(n/e)^n$</span>, we get
<span class="math-container">$$\frac{n^{4n}}{(4n)!} \approx \frac{n^{4n}}{c\sqrt{4n}(4n/e)^{4n}} \approx \left(\frac{e}{4}\right)^{4n}\frac{1}{c\sqrt{4n}} \overbrace{\longrightarrow}^{\because \frac{e}{4}<1} 0.$$</span></p>
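<p>As a numerical sanity check of this argument (a sketch; working with <code>math.lgamma</code> keeps everything in log space, so nothing overflows):</p>

```python
import math

def log_term(n: int) -> float:
    # log of n^(4n) / (4n)!, i.e. 4n*log(n) - log((4n)!)
    return 4 * n * math.log(n) - math.lgamma(4 * n + 1)

def stirling_log(n: int) -> float:
    # log of (e/4)^(4n) / sqrt(8*pi*n), the Stirling estimate above
    return 4 * n * (1 - math.log(4)) - 0.5 * math.log(8 * math.pi * n)

for n in (10, 50, 200):
    print(n, log_term(n), stirling_log(n))
```

<p>The two columns agree to several digits and race to $-\infty$, confirming that the term itself goes to $0$.</p>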
|
3,844,448 | <p>Find all values of <span class="math-container">$h$</span> such that rank(<span class="math-container">$A$</span>) = <span class="math-container">$2$</span>.</p>
<p><span class="math-container">$A$</span> = <span class="math-container">$\begin{bmatrix}
1 & h & -1\\
3 & -1 & 0\\
-4 & 1 & 3
\end{bmatrix} $</span></p>
<p>I used row transformations to get</p>
<p><span class="math-container">$A$</span> = <span class="math-container">$\begin{bmatrix}
1 & h & -1\\
0 & -1-3h & 3\\
0 & 1+4h & -1
\end{bmatrix} $</span></p>
<p>But how do I solve to get the rank? I know the general idea is that rank(<span class="math-container">$A$</span>) = <span class="math-container">$2$</span> when dim(col(<span class="math-container">$A$</span>)) = dim(row(<span class="math-container">$A$</span>)) = <span class="math-container">$2$</span></p>
| Cade Reinberger | 450,991 | <p>Well, you know that the column space has at least two linearly independent vectors. So, you just want the middle vector in your transformed matrix to be in the span of the other 2. Well, you can always add the left vector in the column space (<span class="math-container">$\hat{i}$</span>, you might call it) so that the top component of the middle vector will correspond to some linear combination of the other two columns. So you need the ratio of the second two elements in the middle column to correspond to the ratio of the second two elements in the right column. That is, you want <span class="math-container">$$\frac{-1-3h}{1+4h} = \frac{3}{-1} $$</span> This gives <span class="math-container">$$ 3h+1 = 12h+3$$</span> thus <span class="math-container">$h= \frac{-2}{9}$</span>.</p>
<p>Notice, also, that this equation only holds for <span class="math-container">$h=-\frac{2}{9}$</span>. If you put in another value of <span class="math-container">$h$</span>, this equation won't hold and so the projection of the middle vector to the <span class="math-container">$yz$</span> plane, that is, the second two components, won't be some scalar multiple of any two components of the vector in the right column. So if I take a linear combination of the vector in the left column and the right column, and look at the <span class="math-container">$y$</span> and <span class="math-container">$z$</span> components (the projection to the <span class="math-container">$yz$</span> plane), the left vector scaled will have no contribution, since it doesn't have a <span class="math-container">$y$</span> or <span class="math-container">$z$</span> term, and the <span class="math-container">$yz$</span> component won't be a scalar multiple of the same component for the vector on the right, because when the <span class="math-container">$h$</span> is wrong the ratio won't work out for that to happen. So this is the only such <span class="math-container">$h$</span>.</p>
<p>So then you get that the middle column will look like <span class="math-container">$\begin{bmatrix} \frac{-2}{9} & \frac{-1}{3} & {\frac{1}{9}}\end{bmatrix}^\top$</span>. You can see that this vector is simply <span class="math-container">$-\frac{1}{9}$</span> times the vector in the right column plus <span class="math-container">$-\frac{1}{3}$</span> times the vector on the left, which works out as we hope.</p>
<p>So it's <span class="math-container">$$\boxed{h = - \frac{2}{9}}$$</span></p>
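<p>Since the first two rows of $A$ are independent for every $h$, here rank$(A)=2$ is equivalent to $\det A = 0$; a quick exact check in rational arithmetic (a sketch, not part of the original argument):</p>

```python
from fractions import Fraction

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, p, q) = m
    return a * (e * q - f * p) - b * (d * q - f * g) + c * (d * p - e * g)

def A(h):
    return [[Fraction(1), Fraction(h), Fraction(-1)],
            [Fraction(3), Fraction(-1), Fraction(0)],
            [Fraction(-4), Fraction(1), Fraction(3)]]

# det A = -2 - 9h, which vanishes exactly at h = -2/9
print(det3(A(Fraction(-2, 9))))  # 0
print(det3(A(0)))                # -2
```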
|
2,771,034 | <p>$\frac{a_n}{b_n} \rightarrow 1$ and $\sum_{n=1}^\infty b_n$ converges, can it be concluded that $\sum_{n=1}^\infty a_n$ converges?<br>
My attempt at an answer to this question: since $\sum_{n=1}^\infty b_n$ converges, $b_n \rightarrow 0$. Because of this, $a_n \rightarrow 0$ equally fast. However, I'm well aware that this does not imply that $\sum_{n=1}^\infty a_n$ converges. I'm stuck at that point, though, as I'm not sure what other conclusions can be drawn. Could anyone help me out?</p>
| SK19 | 509,159 | <p>$$a_n = \frac{a_n}{b_n}\cdot b_n$$</p>
<p>(given that $b_n\neq 0$) The intuition now tells us that from a certain $N$ on, $\frac{a_n}{b_n}$ will be so close to $1$ that each $a_n$ contributes essentially no more than the corresponding $b_n$, so if $\sum b_n$ converges, so will $\sum a_n$.</p>
<p>But as the other answers have raised concern if intuition is not enough in this case, so let's see what exactly we need. We can safely assume that $b_n\neq 0$ eventually, e.g. starting from $N_1\in\mathbb{N}$ (else the limit of $\frac{a_n}{b_n}$ couldn't be defined). Set $d=\lim_{n\to\infty}\frac{a_n}{b_n}$ ($d=1$ in our case). Let $c$ be any positive real (especially $0<c<|d|$), then we have $d-c < \frac{a_n}{b_n} < d+c$ eventually, e.g. starting from $N_2$. Then we have
$$(d-c)b_n < a_n < (d+c)b_n$$
So with $N=\max\{N_1,N_2\}$ in mind, we can say that $(d-c)b_n < a_n < (d+c)b_n$ holds eventually.
Here are some facts:</p>
<ul>
<li>given $t\in\mathbb{R}$ with $t\neq 0$: $\sum b_n$ converges iff $\sum tb_n$ converges</li>
<li>given that $|a_n|<(d+c)b_n$ eventually, $\sum(d+c)b_n$ converges implies $\sum a_n$ converges</li>
<li>given on the other hand $a_n > (d-c)b_n \geq 0$ eventually, $\sum(d-c)b_n$ diverges implies $\sum a_n$ diverges</li>
</ul>
<p>So if $d$ is positive (or even $1$ in our case) and if $b_n$ is eventually positive, then we can say that $\sum a_n$ converges if and only if $\sum b_n$ converges. If $b_n$ is not eventually positive, counterexamples can be constructed.</p>
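<p>For instance (my construction, just to make the last remark concrete): take $b_n = (-1)^n/\sqrt{n}$ and $a_n = b_n + 1/n$. Then $a_n/b_n = 1 + (-1)^n/\sqrt{n} \to 1$ and $\sum b_n$ converges by the alternating series test, yet $\sum a_n$ diverges, since it differs from $\sum b_n$ by the harmonic series. A quick numeric illustration:</p>

```python
import math

def partial_sums(N: int):
    # partial sums of b_n = (-1)^n / sqrt(n) and a_n = b_n + 1/n
    sb = sa = 0.0
    for n in range(1, N + 1):
        b = (-1) ** n / math.sqrt(n)
        sb += b
        sa += b + 1.0 / n
    return sb, sa

for N in (10**3, 10**4, 10**5):
    print(N, partial_sums(N))
```

<p>The $b$-sums settle down while the $a$-sums keep growing like $\log N$.</p>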
|
888,319 | <p><span class="math-container">$ABC$</span> is an acute angled triangle, where <span class="math-container">$P$</span> is the orthocenter, and <span class="math-container">$R$</span> is the circumradius. I want to show that <span class="math-container">$PA+PB+PC\le 3R$</span> geometrically, that is without using trigonometry. I have a trig solution, but I want to know whether we can do it by pure geometry.</p>
<p><img src="https://i.stack.imgur.com/jqIKE.png" alt="enter image description here" /></p>
<p>Note: In the image, the direction of the inequalities should be the opposite.</p>
| DeepSea | 101,504 | <p><strong>Hint:</strong> the identity $a^2 + b^2 + c^2 + PA^2 + PB^2 + PC^2 = 12R^2$ is useful.</p>
|
479,594 | <p>I was wondering what is the best way to generate various combinations of $x_i$ such that $$\sum\limits_{i=1}^7 x_i = 1.0$$</p>
<p>where $ x_i \in \{0.0, 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0\}$</p>
<p>I can generate these using brute force, i.e. checking through all $11^7$ combinations and keeping only those that satisfy our constraint, but I am interested to know if there is another approach. Any ideas?</p>
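<p>The brute-force idea can be sped up slightly by enumerating only the first six values and solving for the seventh (working in integer tenths avoids floating-point equality issues); a sketch:</p>

```python
from itertools import product

# work in integer tenths: x_i = k_i / 10, so we need k_1 + ... + k_7 = 10
solutions = []
for head in product(range(11), repeat=6):
    last = 10 - sum(head)
    if 0 <= last <= 10:
        solutions.append(head + (last,))

print(len(solutions))  # 8008, the stars-and-bars count C(16, 6)
```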
| bkarpuz | 53,441 | <p>Let $b_{n}:=\sum_{k=1}^{n-1}a_{k}$ for $n=2,3,\cdots$,
then $b_{n+1}-b_{n}=a_{n}$ for $n=2,3,\cdots$.
Then, the equation reads as $b_{n+1}-b_{n}=-\frac{1}{2}b_{n}+\frac{n}{4}$ for $n=2,3,\cdots$ with $b_{2}=a_{1}=\frac{1}{4}$. Rearraging the terms, we get $$\begin{cases}b_{n+1}-\frac{1}{2}b_{n}=\frac{n}{4},{\quad}n=2,3,\cdots,\\ b_{2}=\frac{1}{4}.\end{cases}$$
Let $\mu_{n}:=2^{n}$ for $n=2,3,\cdots$.
Multiplying both sides of the equation by $\mu_{n+1}$, we get
$$\mu_{n+1}b_{n+1}-\mu_{n}b_{n}=\mu_{n}\frac{n}{2}.$$
Summing this from $2$ to $(n-1)$ for $n=2,3,\cdots$, we get
$$\begin{aligned}&\underbrace{\sum_{k=2}^{n-1}[\mu_{k+1}b_{k+1}-\mu_{k}b_{k}]}_{\text{telescoping sum}}=\frac{1}{2}\sum_{k=2}^{n-1}k2^{k}\\ &{\implies}\mu_{n}b_{n}-\mu_{2}b_{2}=\frac{1}{2}2^{n}(n-2)\\ &{\implies}b_{n}=\frac{1}{2^{n}}+\frac{1}{2}(n-2),\end{aligned}$$
where we have used the fact that $\mu_{2}b_{2}=1$.
Then, the desired solution is $$a_{n}=b_{n+1}-b_{n}=\frac{1}{2}\bigg(1-\frac{1}{2^{n}}\bigg),{\quad}n=2,3,\cdots.\tag*{$\blacksquare$}$$</p>
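<p>An exact check of this solution (a sketch; I am reading the original equation off the derivation as $a_n = -\frac{1}{2}\sum_{k=1}^{n-1}a_k + \frac{n}{4}$ with $a_1 = \frac14$):</p>

```python
from fractions import Fraction

def a(n: int) -> Fraction:
    # the closed form a_n = (1/2)(1 - 1/2^n)
    return Fraction(1, 2) * (1 - Fraction(1, 2 ** n))

prefix = Fraction(0)  # running value of sum_{k=1}^{n-1} a_k
for n in range(1, 30):
    assert a(n) == -Fraction(1, 2) * prefix + Fraction(n, 4)
    prefix += a(n)
print("closed form verified for n = 1..29")
```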
|
1,812,675 | <p>Is there a recurrence solution to $a_n=\frac{n}{a_{n-1}}$? I'm wondering if it could be done in the form of an alternating series partial to $n$ or as a trigonometric function.</p>
| Clement C. | 75,808 | <p><strong>Hint:</strong></p>
<p>Set $b_n = \ln a_n$. Then
$b_n = - b_{n-1} + \ln n$, and we can write
$$\begin{align}
b_n &= - b_{n-1} + \ln n
= b_{n-2} - \ln (n-1) + \ln n\\
&= -b_{n-3} + \ln(n-2) - \ln (n-1) + \ln n \\
&\vdots \\
&= (-1)^{n-1} b_1 + \sum_{k=2}^n (-1)^{n-k} \ln k
\end{align}$$</p>
<p>Can you continue?</p>
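<p>Continuing numerically (a sketch; note that the unrolling attaches the sign $(-1)^{n-1}$ to $b_1$, as the case $b_2 = -b_1 + \ln 2$ shows):</p>

```python
import math

def b_direct(n: int, b1: float) -> float:
    # iterate b_k = -b_{k-1} + ln k
    b = b1
    for k in range(2, n + 1):
        b = -b + math.log(k)
    return b

def b_unrolled(n: int, b1: float) -> float:
    # (-1)^(n-1) * b_1 + sum_{k=2}^n (-1)^(n-k) * ln k
    return (-1) ** (n - 1) * b1 + sum((-1) ** (n - k) * math.log(k)
                                      for k in range(2, n + 1))

for n in (2, 5, 10):
    print(n, b_direct(n, 0.7), b_unrolled(n, 0.7))
```

<p>Exponentiating $b_n$ then recovers $a_n$ for the original recurrence $a_n = n/a_{n-1}$ (with $a_1 = e^{b_1} > 0$).</p>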
|
2,458,863 | <p>I tried to find the critical points of the function</p>
<p>$$f(x,y) = x^2y-2xy + \arctan y $$</p>
<p>And I found that is $P(1,0)$, the problem is that the Hessian is null, and I don't know how to procede to determine the nature of that point.
Can you help me ?</p>
<p><strong>Update:</strong> Thanks you all, and I tried to study the sign of the function, the problem is that I don't know how to proceed , since I have $Δf(x,y)=x^2y-2xy + \arctan y $ and I don't know how to study the sign locally around $1,0$.</p>
| Tsemo Aristide | 280,301 | <p>If the class of $[p]$ is invertible mod $m$, $a'p=am+1$ or $am-1$, you have
$(am+1)(bm+1)=m(mab+a+b)+1, (am+1)(bm-1)=m(abm-a+b)-1$</p>
<p>$ (am-1)(bm-1)=m(abm-a-b)+1$. This shows the product of two invertible numbers mod $m$ is $1$ mod $m$ or $-1$ mod $m$.</p>
|
2,278,431 | <p>"Apply Green's Theorem to evaluate the line integral of F around positively oriented boundary"</p>
<p>$$F(x,y)=x^2yi+xyj$$</p>
<p>C: The region bounded by $y=x^2$ and $y=4x+5$</p>
| farruhota | 425,072 | <p>If $n!\sim \left(\frac{n}{e}\right)^n\sqrt{2\pi n}$, then:</p>
<p>$$\frac{n^n}{n!e^n}=\frac{1}{n!} \cdot \left(\frac{n}{e}\right)^n\sim \left(\frac{e}{n}\right)^n \frac{1}{\sqrt{2\pi n}} \cdot \left(\frac{n}{e}\right)^n = \frac{1}{\sqrt{2\pi n}}$$
which is decreasing.</p>
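<p>A quick numeric check of both claims (a sketch; computing in log space via <code>lgamma</code> avoids overflow):</p>

```python
import math

def ratio(n: int) -> float:
    # n^n / (n! * e^n), evaluated as exp(n*log(n) - log(n!) - n)
    return math.exp(n * math.log(n) - math.lgamma(n + 1) - n)

for n in (10, 100, 1000):
    print(n, ratio(n), 1 / math.sqrt(2 * math.pi * n))
```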
|
3,443,094 | <blockquote>
<p>If <span class="math-container">$$\lim_{x\to 0}\frac{ae^x-b}{x}=2$$</span> then find <span class="math-container">$a,b$</span>.</p>
</blockquote>
<p><span class="math-container">$$
\lim_{x\to 0}\frac{ae^x-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)+a-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)}{x}+\lim_{x\to 0}\frac{a-b}{x}=\boxed{a+\lim_{x\to 0}\frac{a-b}{x}=2}\\
\lim_{x\to 0}\frac{a-b}{x} \text{ must be finite}\implies \boxed{a=b}\\
$$</span>
Now I think I am stuck, how do I proceed ?</p>
| Ross Millikan | 1,827 | <p>You can use the Taylor series, <span class="math-container">$e^x=1+x+$</span> terms of order <span class="math-container">$x^2$</span> and higher. Plug that in. The fact that <span class="math-container">$a=b$</span> cancels the <span class="math-container">$1$</span> and you will be working with the <span class="math-container">$x$</span> term.</p>
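<p>Following the hint, $ae^x - b = (a-b) + ax + O(x^2)$, so the limit is finite only when $a=b$, and then it equals $a$; hence $a=b=2$. A numeric spot check of that conclusion:</p>

```python
import math

a = b = 2.0
for x in (1e-2, 1e-4, 1e-6):
    print(x, (a * math.exp(x) - b) / x)  # approaches 2 as x -> 0
```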
|
3,443,094 | <blockquote>
<p>If <span class="math-container">$$\lim_{x\to 0}\frac{ae^x-b}{x}=2$$</span> then find <span class="math-container">$a,b$</span>.</p>
</blockquote>
<p><span class="math-container">$$
\lim_{x\to 0}\frac{ae^x-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)+a-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)}{x}+\lim_{x\to 0}\frac{a-b}{x}=\boxed{a+\lim_{x\to 0}\frac{a-b}{x}=2}\\
\lim_{x\to 0}\frac{a-b}{x} \text{ must be finite}\implies \boxed{a=b}\\
$$</span>
Now I think I am stuck, how do I proceed ?</p>
| Peter Szilas | 408,605 | <p>Option:</p>
<p>1) <span class="math-container">$\lim_{x \rightarrow 0}\dfrac{ae^x-b}{x}=2$</span>;</p>
<p><span class="math-container">$\lim_{x \rightarrow 0}(ae^x-b)=$</span></p>
<p><span class="math-container">$\lim_{x \rightarrow \infty}((\dfrac {ae^x-b}{x})\cdot x)=$</span></p>
<p><span class="math-container">$\lim_{x \rightarrow 0}\dfrac{ae^x-b}{x} \cdot \lim_{x \rightarrow 0} x=$</span></p>
<p><span class="math-container">$2 \cdot 0=0$</span>; </p>
<p><span class="math-container">$\rightarrow a=b$</span>;</p>
<p>2) <span class="math-container">$\lim_{x \rightarrow 0} a(\dfrac{e^x-1}{x})=2$</span>;</p>
<p><span class="math-container">$a=?$</span></p>
|
1,246,356 | <p>Let $A,B \in {M_n}$. Suppose $A$ is a normal matrix with distinct eigenvalues and $AB=0$. Why is $B$ a normal matrix?</p>
| robjohn | 13,854 | <p>The <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow">Euler-Maclaurin Summation Formula</a> says
$$
\begin{align}
\sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)}
&=\frac12\log(\log(m))^2+C+O\left(\frac{\log(\log(m))}{m\log(m)}\right)
\end{align}
$$
Therefore,
$$
\begin{align}
&\sum_{k=5}^{\sqrt{n}}\frac{\log(\log(k))}{k\log(k)}\\
&=\tfrac12\log(\log(\sqrt{n}))^2+C+O\left(\frac{\log(\log(n))}{\sqrt{n}\log(n)}\right)\\
&=\tfrac12\log\left(\tfrac12\log(n)\right)^2+C+O\left(\frac{\log(\log(n))}{\sqrt{n}\log(n)}\right)\\
&=\tfrac12\log(\log(n))^2-\log(2)\log(\log(n))+\tfrac12\log(2)^2+C+O\left(\frac{\log(\log(n))}{\sqrt{n}\log(n)}\right)
\end{align}
$$
where $C\doteq-0.08334404437765197472024727705275296252855$.</p>
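<p>One can probe this numerically (a sketch; by the stated error term, the discrepancy below should have stabilized to about $C$ well before $m=10^5$):</p>

```python
import math

def discrepancy(m: int) -> float:
    # partial sum minus the leading asymptotic term (1/2) * log(log m)^2
    s = sum(math.log(math.log(k)) / (k * math.log(k)) for k in range(5, m + 1))
    return s - 0.5 * math.log(math.log(m)) ** 2

print(discrepancy(10**5), discrepancy(2 * 10**5))
```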
|
4,480,905 | <p>When <span class="math-container">$T$</span> is any linear operator acting on a vector space <span class="math-container">$V$</span>, and <span class="math-container">$n$</span> is a natural number, <span class="math-container">$T^n$</span> means <span class="math-container">$T$</span> applied <span class="math-container">$n$</span> times (composition) and that is also a linear operator. That is clear.</p>
<p>When <span class="math-container">$T$</span> is a nonzero linear operator acting on a vector space <span class="math-container">$V$</span>, then <span class="math-container">$T^0$</span> is the identity operator <span class="math-container">$T^0 = I$</span>. But I think that should also be true (true by definition), when <span class="math-container">$T$</span> is the zero operator i.e. the operator which sends all vectors to the zero vector.</p>
<p>Why? Because <span class="math-container">$T^0$</span> means that we are not applying any operator. So it makes sense to say: OK, all vectors stay unchanged when "applying" <span class="math-container">$T^0$</span> even when <span class="math-container">$T$</span> is the zero operator. I say "applying" because we're not actually applying anything.</p>
<p>Is that indeed so?</p>
<p>I am asking because this kind of disagrees with what we have for real numbers where <span class="math-container">$0^0$</span> is usually left undefined.</p>
<p><strong>EDIT:</strong><br />
What's the context of this question? I was reading a proof for the uniqueness of the Jordan Normal Form. There this expression comes up <span class="math-container">$2d(\phi^p) - d(\phi^{p-1}) - d(\phi^{p+1})$</span>, where <span class="math-container">$p$</span> is a positive integer, and <span class="math-container">$d$</span> is the defect of the linear operator in the brackets. The proof is very nice but convoluted and eventually it boils down to proving the uniqueness for a special linear operator which has only <span class="math-container">$0$</span> as a characteristic root (as an eigenvalue). So I had some doubts what happens exactly with the expression <span class="math-container">$\phi^{p-1}$</span> when <span class="math-container">$p = 1$</span>, and if we need to put some restrictions on the linear operator <span class="math-container">$\phi$</span>.</p>
| Gerald | 167,701 | <p><span class="math-container">$0^0$</span> is an indeterminate form. Consider the two limits:</p>
<p><span class="math-container">$$\lim_{x \rightarrow 0}\lim_{y \rightarrow 0}\ x^y
$$</span></p>
<p><span class="math-container">$$\lim_{y \rightarrow 0}\lim_{x \rightarrow 0}\ x^y
$$</span></p>
<p>For the first limit, we get <span class="math-container">$x^0$</span> under the limit as <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span>. For all <span class="math-container">$x$</span>, except <span class="math-container">$x=0$</span>, <span class="math-container">$x^0=1$</span>. Thus <span class="math-container">$\lim_{x \rightarrow 0}\ x^0 =\lim_{x \rightarrow 0}\ 1=1$</span>. However for the second limit, as <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span> we get <span class="math-container">$0^y$</span>. This is equal to <span class="math-container">$0$</span> for all <span class="math-container">$y$</span>, except <span class="math-container">$y=0$</span>. Thus we obtain <span class="math-container">$\lim_{y \rightarrow 0}\ 0^y= \lim_{y \rightarrow 0}\ 0 = 0$</span>. Therefore there is a discontinuity at <span class="math-container">$(x,y)=(0,0)$</span>. So we let <span class="math-container">$0^0$</span> be undefined and can only talk about <span class="math-container">$0^0$</span> when we specify a direction from which we approach it: from the x direction, from the y direction, or somewhere in between.</p>
<p>Let <span class="math-container">$T = a\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}=\begin{bmatrix}a&0&0\\0&a&0\\0&0&a\end{bmatrix}$</span>, for an arbitrary constant <span class="math-container">$a$</span>. Let <span class="math-container">$\bf{x}$</span> be a vector in <span class="math-container">$\mathbb{R}^3$</span>.</p>
<p>Consider two limits:</p>
<p><span class="math-container">$$\lim_{a \rightarrow 0}\lim_{n \rightarrow 0}\ T^n {\bf x}
$$</span></p>
<p><span class="math-container">$$\lim_{n \rightarrow 0}\lim_{a \rightarrow 0}\ T^n {\bf x}
$$</span></p>
<p>Clearly the first limit, the limit as <span class="math-container">$n$</span> goes to 0 gives <span class="math-container">$(T^0=I)\forall a$</span>, but in the second limit, the limit as <span class="math-container">$a$</span> goes to zero gives <span class="math-container">$(T^n=0)\ \forall n>0$</span>. Clearly there is a discontinuity at <span class="math-container">$(n,a)=(0,0)$</span>.</p>
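<p>The order-of-limits discontinuity is easy to see numerically even in the scalar case (a minimal sketch):</p>

```python
xs = [10.0 ** -k for k in range(1, 8)]
along_x_axis = [x ** 0.0 for x in xs]  # y -> 0 first: x^0 = 1 for x != 0
along_y_axis = [0.0 ** y for y in xs]  # x -> 0 first: 0^y = 0 for y > 0
print(along_x_axis[-1], along_y_axis[-1])  # 1.0 0.0
```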
|
3,583,879 | <blockquote>
<p>a) $P_5=11$$</p>
<p>b) <span class="math-container">$P_1+P_2+P_3+P_4+P_5 =26$</span></p>
</blockquote>
<p>For the first part
<span class="math-container">$$\alpha^5+\beta ^5$$</span>
<span class="math-container">$$=(\alpha^3+\beta ^3)^2-2(\alpha \beta )^3$$</span></p>
<p>I found the value of <span class="math-container">$\alpha^3+\beta^3=4$</span></p>
<p>So <span class="math-container">$$16-2(-1)=18$$</span> which doesn’t match.</p>
<p>The second part depends on the value obtained from part 1, so I need to get that cleared up.</p>
<p>I checked the computation many times, but it might end up being just that. Also, is there a more efficient way to do this?</p>
| trancelocation | 467,003 | <p>You can make life easier by realizing that</p>
<p><span class="math-container">$$P_k = \alpha^k+\beta^k$$</span> </p>
<p>is the solution of the linear recurrence</p>
<p><span class="math-container">$$a_{k+2}=a_{k+1}+a_k \text{ with } a_1 = \alpha + \beta = 1 $$</span>
<span class="math-container">$$\text{ and } a_2 = \alpha^2 + \beta^2 = (\alpha + \beta)^2-2\alpha\beta = 1+2=3$$</span></p>
<p>Hence,</p>
<p><span class="math-container">$$a_3= 4, a_4 = 7, a_5 = 11$$</span></p>
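<p>Both requested values can be verified directly from this recurrence, and cross-checked against the roots of $x^2 = x + 1$ (consistent with $\alpha+\beta=1$ and $\alpha\beta=-1$ used above):</p>

```python
# P_k via the recurrence P_{k+2} = P_{k+1} + P_k with P_1 = 1, P_2 = 3
P = {1: 1, 2: 3}
for k in range(3, 6):
    P[k] = P[k - 1] + P[k - 2]

# cross-check against the actual roots of x^2 = x + 1
alpha = (1 + 5 ** 0.5) / 2
beta = (1 - 5 ** 0.5) / 2
assert all(abs(alpha ** k + beta ** k - P[k]) < 1e-9 for k in P)
print(P[5], sum(P.values()))  # 11 26
```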
|