| qid (int64) | question (string) | author (string) | author_id (int64) | answer (string) |
|---|---|---|---|---|
200,876 | <p>Is there a topological space $(C,\tau_C)$ and two points $c_0\neq c_1\in C$ such that the following holds?</p>
<blockquote>
<blockquote>
<p>A space $(X,\tau)$ is connected if and only if for all $x,y\in X$ there is a continuous map $f:C\to X$ such that $f(c_0) = x$ and $f(c_1) = y$.</p>
</blockquote>
</blockquote>
<p>Is there also a Hausdorff space satisfying the above?</p>
| Włodzimierz Holsztyński | 8,385 | <blockquote>
<p>I hope that my example below is as elegant as the continuous long line provided by Goldstern above, while being less expected. Also, while the long line is simpler <strong>in itself</strong>, the <strong>proof</strong> is simpler in my case. Finally, perhaps logicians will find some advantages (I'll do a little of this--I am not confident I can do it well).</p>
</blockquote>
<p>Let $\ A\ $ be an arbitrary set. The ordered triple $\ (\mathbf S_A\ \mathbf 0\ \mathbf 1)\ $, where $\ \mathbf S_A:=(S_A\ T_A)\ $ is a topological space--call it a skeleton--and $\ \mathbf {0\ 1}\in S_A,\ $ is to be defined below; but first (ahead of time) let's formulate</p>
<p><strong>THEOREM</strong> For every connected subset $\ X\subseteq S_A,\ $ such that $\ \mathbf {0\ 1}\in X,\ $ the inequality of cardinalities $\ |X|\ge|A|\ $ holds.</p>
<p>This instantly gives a simple negative answer to <strong><em>the question of this thread</em></strong> posed by Dominic.</p>
<p><strong>DEFINITION</strong></p>
<ul>
<li>$\ S_A\ :=\ \{(x_a)_{a\in A}\in[0;1]^A\ :\ \forall_{a\ b\in A} [x_a\ x_b\in(0;1)\ \Rightarrow\ a=b]\ \}$</li>
<li>$\ \mathbf 0\ :=\ (0)_{a\in A}\ $ and $\ \mathbf 1\ :=\ (1)_{a\in A}$</li>
<li>$\ T_A\ $ is the topology in $\ S_A\ $ induced by the Tikhonov topology in the cube $\ [0;1]^A$</li>
</ul>
<p><strong>PROOF (of the theorem)</strong> The connected component of $\ \mathbf 0\ $ in $\ S_A\ $ is dense in $\ S_A,\ $ which means that its closure, i.e. space $\ S_A\ $ itself, is connected too. Next, let: </p>
<p>$$H^a\ :=\ \{x\in[0;1]^A\ :\ x_a=\frac 12\}$$</p>
<p>Let $\ X\subseteq S_A\ $ be a connected subset such that $\ \mathbf {0\ 1}\in X.\ $ Then $\ p_a(X)=[0;1]\ $ (a connected subset of $\ [0;1]\ $ containing both $\ 0\ $ and $\ 1$), hence $\ H\,^a\cap X\ne \emptyset.\ $ Thus</p>
<p>$$ |X|\ \ge\ \left|\left\{H^a\ :\ a\in A\ \right\}\right|\ =\ |A|$$</p>
<p>Indeed, within $\ S_A\ $ the sets $\ H^a\ $ are pairwise disjoint (hence different), since a point of $\ S_A\ $ has at most one coordinate in $\ (0;1)$. <strong>END of Proof</strong></p>
<blockquote>
<p> G E N E R A L I Z A T I O N</p>
</blockquote>
<p>We may replace the topological interval $\ [0;1],\ $ and its three points $\ 0\ \frac 12\ 1,\ $ by an arbitrary connected space $\ S\ $, and its three points $\ a\ h\ b,\ $ such that $\ h\ $ separates $\ a\ b\ $ (meaning that there are open sets $\ G\ $ and $\ H:=(S\setminus\{h\})\setminus G\ $ of $\ S\ $ such that $\ a\in G\ $ and $\ b\in H$). Etc. The theorem still holds.</p>
<blockquote>
<p>Logical considerations</p>
</blockquote>
<p>I am not using ordinal numbers. My construction is free of any complications, especially when $\ S\ $ of the generalization is a proper 3-point space $\ \{a\ h\ b\}.\ $ Thus the only axioms to worry about are the axiom of choice, the continuum hypothesis, and the like, in their relation to the cartesian product and to the ordinary $\ [0;1]\ $ of my main example.</p>
<blockquote>
<p>EXTRA. Compactness. Another connectedness proof.</p>
</blockquote>
<p>Space $\ \mathbf S_A\ $ is compact: it is a closed subset of the Tikhonov cube $\ [0;1]^A.\ $ <strong>Indeed</strong>, let $\ x\in [0;1]^A\setminus S_A.\ $ Then there exist two <strong>different</strong> indices $\ a\ b\ \in\ A\ $ such that $\ (x_a\ x_b)\in (0;1)^{\{a\ b\}}.\ $ Thus the inverse image of this open square under the canonical projection $\ p_{a\ b} : [0;1]^A\rightarrow[0;1]^{\{a\ b\}}\ $ is an open neighborhood of $\ x\ $ disjoint from $\ S_A\ $ (one could say that $\ [0;1]^A\setminus S_A\ $ is open because it is a union of the inverse images of the open squares). Thus indeed $\ S_A\ $ is compact.</p>
<p>Now $\ \mathbf S_A\ $ is connected because it is an inverse limit of spaces $\ \mathbf S_B\ $ for all finite $\ B\subseteq A,\ $ under the canonical projections. (One could also use some other similar arguments.) This inverse limit nature of $\ \mathbf S_A\ $ shows its covering 1-dimensionality:</p>
<p>$$\dim (\mathbf S_A)\ =\ 1$$</p>
|
312,602 | <p>I am writing a paper right now, and part of the paper makes use of a (trivial) generalization of a number of really nice theorems and constructions from a paper that was never made public. The author has left pure mathematics and has no intention of publishing the paper, but I received a copy directly from him several years ago. </p>
<p>The results and constructions are crucial to my paper, and since I am working with a minor generalization, I think I do need to include the proofs, especially since they aren't available. I don't want it to look like I'm taking credit for the results or plagiarizing, but some of the proofs are basically copies. I have a bunch of disclaimers at the top of the section and continually remind the reader that all of these results are in the original paper. </p>
<p>Do I need to not only give credit at the top of the section, but also give credit for every observation, statement, lemma, and diagram? </p>
<p>PS It seems like the community wiki checkbox is gone. I'd appreciate if a moderator could do that for me.</p>
<p>Edit: I contacted the author, and he agreed to put it on the arXiv himself. Going to leave the question up as a community wiki for anyone else who runs into this problem.</p>
| Alexandre Eremenko | 25,510 | <p>First of all, if you received a paper from the author privately, and it is unpublished, you need his/her permission to use the result, and permission to mention his/her work.</p>
<p>When you ask for permission, you may also ask whether the author is willing to post the paper on the arXiv, and you may offer to do this for him/her yourself.</p>
<p>If permission is granted but the rest does not work, you may include the paper in your reference list, marking it as "private communication". But this is a measure of last resort (if the author cannot be contacted and/or gives you no permission to publish his paper).</p>
<p>Remark. I have some results which I consider important, and which I do not publish. The situation is exactly the same: I build on an unpublished result.
The author gave me his manuscript but
refuses to publish it, and does not give me permission to publish his result.
Myself, I stick to the following rule: everything I use in a published paper must be available (in principle) to every reader.</p>
|
312,602 | <p>I am writing a paper right now, and part of the paper makes use of a (trivial) generalization of a number of really nice theorems and constructions from a paper that was never made public. The author has left pure mathematics and has no intention of publishing the paper, but I received a copy directly from him several years ago. </p>
<p>The results and constructions are crucial to my paper, and since I am working with a minor generalization, I think I do need to include the proofs, especially since they aren't available. I don't want it to look like I'm taking credit for the results or plagiarizing, but some of the proofs are basically copies. I have a bunch of disclaimers at the top of the section and continually remind the reader that all of these results are in the original paper. </p>
<p>Do I need to not only give credit at the top of the section, but also give credit for every observation, statement, lemma, and diagram? </p>
<p>PS It seems like the community wiki checkbox is gone. I'd appreciate if a moderator could do that for me.</p>
<p>Edit: I contacted the author, and he agreed to put it on the arXiv himself. Going to leave the question up as a community wiki for anyone else who runs into this problem.</p>
| T. Amdeberhan | 66,131 | <p>Why not approach the author and ask to join forces and write as co-authors, if the other person is willing to do so? That could be a fine resolution, fair to both parties.</p>
|
1,687,336 | <p>I've been searching through the internet and through SE to find something to help me understand generating functions, but I haven't found anything that would solve my problem with them.</p>
<p>I understand that </p>
<p>$$\frac1{1-x}=\sum_{n\ge 0}x^n\;,\tag{1}$$</p>
<p>gives the sequence $(1, 1, 1, 1,...) $ because $(1)$ </p>
<p>is just another way of writing the sum $1+x+x^2+x^3+x^4+...$ and the coefficients of each term are 1, and thus the sequence is $1, 1, 1, 1,...$.</p>
<p>What I don't get is how does </p>
<p>$$\begin{align*}
\frac4{1-x^3}&=\sum_{n\ge 0}4x^{3n}
\end{align*}\tag{3}$$
equal</p>
<p>$$4x^0+0x^1+0x^2+4x^3+0x^4+0x^5+4x^6+0x^7+0x^8+\ldots$$</p>
<p>and thus the sequence $(4, 0, 0, 4, 0, 0, 4, 0, 0,...)$.</p>
<p>I would say that $(3)$</p>
<p>is equal to $$4x^{3\times0}+4x^{3\times1}+4x^{3\times2}+4x^{3\times3}...=4+4x^3+4x^6+4x^9\ldots$$</p>
<p>There is something that I'm completely not understanding and I would like to know what that something is.</p>
| anonymouse | 316,033 | <p>$$4x^0+0x^1+0x^2+4x^3+0x^4+0x^5+4x^6+0x^7+0x^8+4x^9+\ldots$$</p>
<p>and</p>
<p>$$4+4x^3+4x^6+4x^9+\ldots$$</p>
<p>are the same power series. Each has a constant term $4$, each has a $4x^3$ term, et cetera.</p>
<p>So both can be represented by the sequence $(4, 0, 0, 4, 0, 0, 4, ...)$</p>
<p>The first term of the sequence is the coefficient of the constant term, the second term is the coefficient of the linear term, et cetera.</p>
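<p>As a quick sanity check (a sympy sketch, assuming sympy is available), one can expand the series and read off exactly these coefficients:</p>
<pre><code># Sketch: the coefficients of 4/(1 - x^3) are 4, 0, 0, 4, 0, 0, ...
from sympy import symbols, series

x = symbols('x')
expansion = series(4 / (1 - x**3), x, 0, 9).removeO()
print([expansion.coeff(x, n) for n in range(9)])  # [4, 0, 0, 4, 0, 0, 4, 0, 0]
</code></pre>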
|
2,130,776 | <p>Let $(L,R)$ be a pair of adjoint functor. </p>
<p>How to show that the commutativity of the left diagram induces the commutativity of the right one?</p>
<p><a href="https://i.stack.imgur.com/lOffl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lOffl.jpg" alt="diagram"></a></p>
| Geoff | 100,192 | <p>Let $(\eta,\epsilon):F \dashv G:\mathfrak{C} \to \mathfrak{D}$ be an adjoint pair of functors and assume you have the commuting diagram
$$
\begin{array}{ccc}
A & \xrightarrow{f} & GX \\
h \downarrow & & \downarrow G(k) \\
B & \xrightarrow{g} & GY
\end{array}
$$
in $\mathfrak{C}$. Then applying the functor $F$ to the diagram gives the commuting diagram
$$
\begin{array}{ccc}
FA & \xrightarrow{F(f)} & F(GX) \\
F(h) \downarrow & & \downarrow F(G(k)) \\
FB & \xrightarrow{F(g)} & F(GY)
\end{array}
$$
in $\mathfrak{D}$. Using the counit allows us to produce the commuting diagram
$$
\begin{array}{ccccc}
FA & \xrightarrow{F(f)} & F(GX) & \xrightarrow{\epsilon_{X}} & X \\
F(h) \downarrow & & \downarrow F(G(k)) & & \downarrow k\\
FB & \xrightarrow{F(g)} & F(GY) & \xrightarrow{\epsilon_Y} & Y
\end{array}
$$
which contracts to the commuting diagram
$$
\begin{array}{ccc}
FA & \xrightarrow{\epsilon_X \circ F(f)} & X \\
F(h) \downarrow & & \downarrow k \\
FB & \xrightarrow{\epsilon_Y \circ F(g)} & Y
\end{array}
$$
in $\mathfrak{D}$. There is some work to check that the middle rectangle actually commutes, but this is exactly the naturality of the counit $\epsilon$ applied to $k$.</p>
|
3,835,514 | <p>For some c > <span class="math-container">$0$</span>, the cumulative distribution function of a continuous random variable X is given by:</p>
<p><span class="math-container">$$
F_X(x) = \begin{cases} 0 & \text{if } x \le0 \\ cx(x+1) & \text{if } 0 \lt x <1 \\ 1 & \text{if } x \ge 1\end{cases}
$$</span></p>
<p>Show that <span class="math-container">$c = 1/2 $</span></p>
<p>I know that by differentiating <span class="math-container">$cx(x+1)$</span> and equating to 1, I obtain the cumulative distribution function but I don't know how to eliminate <span class="math-container">$x$</span> after differentiating.</p>
| VIVID | 752,069 | <p>For a quadratic expression given as <span class="math-container">$$rx^2 + px + q$$</span> whose roots are, say, <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, it can be factorised as <span class="math-container">$$rx^2 + px + q = r(x-m)(x-n)$$</span></p>
<p>So now, you should find the solutions to your equation. Ask if you cannot.</p>
|
2,166,897 | <blockquote>
<p>Let X be a complex Banach space. Let <span class="math-container">$T\in B(X)$</span> be a bounded linear operator on <span class="math-container">$X$</span>. Let <span class="math-container">$T^*\in B(X^*)$</span> be the adjoint of <span class="math-container">$T$</span>.</p>
<p>Prove: If <span class="math-container">$T^*$</span> is invertible, then for all elements <span class="math-container">$x\in X$</span>,
<span class="math-container">$$ \|Tx \| \geq \| (T^*)^{-1}\|^{-1}\| x \|$$</span>
and use the inequality to prove that <span class="math-container">$T$</span> is invertible</p>
</blockquote>
| Vincent Boelens | 94,696 | <p>Let $x\in X$. By Hahn-Banach there is $f\in X^\ast$ with $\|f\|=1$ and
$|f(x)|=\|x\|$. Then we obtain
$$\begin{align}
\|x\|&=|f(x)| \\
&=|(T^\ast)^{-1}(T^\ast(f))(x)|\\
&\le \|(T^\ast)^{-1}\||(T^\ast(f))(x)|\\
&=\|(T^\ast)^{-1}\||(f\circ T)(x)|\\
&\le\|(T^\ast)^{-1}\|\|T(x)\|,
\end{align} $$
which is equivalent to the inequality of the statement. This immediately implies that $T$ is injective and that if $(Tx_n)$ is a Cauchy sequence, also $(x_n)$ must be Cauchy. Thus, $T(X)$ is complete. </p>
<p>It remains to be shown that $T(X)=X$. Suppose this is not the case. Then choose $x\in X\setminus T(X)$. Again by Hahn-Banach and the fact that $T(X)$ is closed, there is a functional $f\in X^\ast$ such that $f$ is zero on $T(X)$, but $f(x)\neq 0$. This is a contradiction, since $T^\ast$ is injective, but $T^\ast(f) = f\circ T = 0$.</p>
|
281,504 | <p>Is there a trick for easily solving a matrix polynomial like
$$
p(A) = \left( 7\cdot A^4 - 4\cdot A^3 + 6\cdot A - 5\cdot E \right)
, A = \left(\begin{matrix}2 & -1 \\ 3 & 5\end{matrix}\right)
$$
or is it really just step-by-step calculation of
$$
7\cdot A\cdot A\cdot A\cdot A-4\cdot A\cdot A\cdot A+6\cdot A-5\cdot E
$$</p>
| Felix | 463,163 | <p>Given a line <span class="math-container">$\overline r+t\cdot\overline{v}$</span> and a point to be reflected <span class="math-container">$\overline p$</span>, we must first find the closest point on the line to <span class="math-container">$\overline{p}$</span>. Let <span class="math-container">$t_0$</span> correspond to that point. Its value is:</p>
<p><span class="math-container">$$t_0 = \dfrac{\overline{p}\cdot\overline{v}-\overline{r}\cdot\overline{v}}{\overline{v}\cdot\overline{v}}$$</span></p>
<p>The intersection on the line is hence <span class="math-container">$\overline{s} = \overline{r}+t_0\cdot\overline{v}$</span>, and the reflected point <span class="math-container">$\overline{p}_2 = 2\overline{s}-\overline{p}$</span>.</p>
<p>About the solution: one has to find a line perpendicular to the given line, but also through the point to be reflected. A vector <span class="math-container">$\overline{d}$</span> perpendicular to <span class="math-container">$\overline{v}$</span> is given by the fact that if <span class="math-container">$\overline{d}\bot \overline{v}$</span>, then <span class="math-container">$\overline{d}\cdot\overline{v}=0$</span>. On the other hand, we must arrive to the same point through two different routes, <span class="math-container">$\overline{r}+t\cdot\overline{v}$</span> and <span class="math-container">$\overline{p}+\overline{d}$</span> so we set them equal. We now have a system of two vector equations to solve.</p>
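<p>A minimal numeric sketch of this recipe in plain Python (the particular $\overline r$, $\overline v$, $\overline p$ below are illustrative values, not from the original problem):</p>
<pre><code># Sketch: reflect p across the line r + t*v using the formulas above.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

r, v, p = (0.0, 1.0), (1.0, 1.0), (3.0, 0.0)     # line y = x + 1, point (3, 0)

t0 = (dot(p, v) - dot(r, v)) / dot(v, v)         # parameter of the closest point
s = tuple(ri + t0 * vi for ri, vi in zip(r, v))  # foot of the perpendicular
p2 = tuple(2 * si - pi for si, pi in zip(s, p))  # reflected point
print(s, p2)  # (1.0, 2.0) (-1.0, 4.0)
</code></pre>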
<p>With all due respect, I don't think the answer Ross provided here was quite right. Using my notation, he suggested that <span class="math-container">$t_0 = -\dfrac{\overline{p}\cdot\overline{r}}{\overline{p}\cdot\overline{v}}$</span>, which is not the right solution.</p>
|
2,491,818 | <p>My professor showed us that the Cauchy distribution does not have an expected value, that is, the integral $\int_{-\infty}^{\infty} x f(x) \text{d} x$ is not defined ($f(x)$ is the p.d.f. of the Cauchy distribution). I find that very counterintuitive. What does it actually mean, in the context of probability, to not have a defined expected value?</p>
| Ken Wei | 243,183 | <p>The Law of Large Numbers fails for distributions without an expected value. So empirically, if you were to average many samples from the Cauchy distribution, you won't be able to say that the average converges to zero (as one might think).</p>
<p>A property of the Cauchy distribution is that if $X_1,...,X_n$ and $Z$ are i.i.d. Cauchy random variables, then $\frac{1}{n}\sum_{i=1}^nX_i \ \stackrel{d}{=} \ Z$.</p>
<p>With a standard normal random variable, the distribution of the LHS would be $\frac{1}{\sqrt n}$ times as 'spread out' as a single random variable, whereas in this case, averaging a bunch of i.i.d. Cauchy variables changes nothing about their distribution.</p>
<p>Our intuitive interpretation of the expected value is tied to the law of large numbers. In this case, it doesn't apply.</p>
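<p>A small simulation illustrates this failure of the law of large numbers (a Python sketch; the printed values vary from run to run):</p>
<pre><code># Sketch: running means of Cauchy samples keep wandering instead of converging.
import math, random

random.seed(0)
total = 0.0
for n in range(1, 100_001):
    total += math.tan(math.pi * (random.random() - 0.5))  # inverse-CDF Cauchy sample
    if n % 20_000 == 0:
        print(n, total / n)  # no settling down, unlike i.i.d. normal samples
</code></pre>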
|
2,617,235 | <p>Given a triangle $\Delta$ABC, how to draw any inscribed equilateral triangle whose vertices lie on different sides of $\Delta$ABC?</p>
| Christian Blatter | 1,303 | <p>Assume $A=(0,0)$, $B=(b,0)$, $C=(c,h)$ with $b>0$, $0<c<b$, and $h>0$. We shall construct an equilateral triangle with one side horizontal as follows: Draw a horizontal line $y=h'$, where $h'$ is determined by the condition
$$h'={\sqrt{3}\over2}\>{h-h'\over h}\>b\ .\tag{1}$$
This line intersects the two legs of the triangle in two points $P$ and $Q$. Let $M=(m, h')$ be the midpoint of $PQ$. Then $P$, $Q$, and $R:=(m,0)$ form an equilateral triangle.</p>
<p>The condition $(1)$ ensures that $h'={\sqrt{3}\over2}\>|PQ|$. Solving for $h'$ one obtains
$$h'={h\, b\over{2\over\sqrt{3}}h+b}\ .$$</p>
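<p>A numeric sanity check of this construction (a Python sketch; the triangle $A=(0,0)$, $B=(4,0)$, $C=(1,3)$ is an arbitrary illustrative choice):</p>
<pre><code># Sketch: verify that P, Q, R from the construction form an equilateral triangle.
import math

b, c, h = 4.0, 1.0, 3.0                    # A=(0,0), B=(b,0), C=(c,h)
hp = h * b / (2 / math.sqrt(3) * h + b)    # h' from the formula above

P = (c * hp / h, hp)                       # y = h' meets leg AC
Q = (b + (c - b) * hp / h, hp)             # y = h' meets leg BC
R = ((P[0] + Q[0]) / 2, 0.0)               # point below the midpoint M of PQ

dist = lambda u, w: math.hypot(u[0] - w[0], u[1] - w[1])
print(dist(P, Q), dist(P, R), dist(Q, R))  # three equal side lengths
</code></pre>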
|
1,482,776 | <blockquote>
<p>Let $(X_t)$ be a continuous nonnegative supermartingale and $T = \inf\{t\geq 0 \colon X_t = 0 \}$ then $X_t = 0$ for every $t\geq T$.</p>
</blockquote>
<p>Idea of solution:</p>
<p>Since $T$ is stopping time, by Doob theorem:
$$E(X_{T+q} 1_{T < \infty} | F_T) \leq X_T 1_{T < \infty} =0 $$
for every $q \in \mathbb{Q}$</p>
<p>Then taking expectation we have that $E(X_{T+q}1_{T < \infty}) \leq 0$
. Since $X_{T+q}1_{T < \infty}$ is nonnegative, $X_{T+q}1_{T < \infty} = 0$ a.e.
(on a set $\Omega_q$ with $\mathbb{P}(\Omega_q)=1$). Taking $\Omega'=\cap_{q \in \mathbb{Q}^+} \Omega_q$ we will have that $$X_{T+t}1_{T < \infty} = 0$$
on $\Omega'$ for every $t\geq 0$.</p>
<p>The only problem is that we are not allowed to use Doob's theorem, since $T$ is not bounded and $X$ is not U.I. I tried to use $T \wedge k$ to make the stopping time bounded, but I couldn't take the limit properly.</p>
| Christian Blatter | 1,303 | <p>In order to simplify matters we are going to look at the reciprocal of the given expression. The denominator can be developed into a series as follows: From
$$\cos x=1-{1\over2}x^2+{1\over24}x^4+?x^6$$
and
$$\eqalign{\cos(\sin x)
&=1-{1\over2}x^2\left(1-{1\over 6}x^2+?x^4\right)^2+{1\over24}x^4(1+?x^2)^4+?x^6\cr
&=1-{1\over2}x^2+\left({1\over6}+{1\over24}\right)x^4+?x^6\cr}$$
it follows that
$$\cos(\sin x)-\cos x={1\over6}x^4+?x^6\ .$$
This implies
$$\lim_{x\to0}{\cos(\sin x)-\cos x\over x^4}={1\over6}\ ,$$
respectively:
$$\lim_{x\to0}{x^4\over \cos(\sin x)-\cos x}={6}\ ,$$
and $n=4$ is the only exponent leading to a limit $\ne0, \infty$.</p>
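<p>The series computation can be double-checked symbolically (a sympy sketch, assuming sympy is available):</p>
<pre><code># Sketch: confirm cos(sin x) - cos x = x^4/6 + O(x^6) and the resulting limit.
from sympy import symbols, sin, cos, series, limit

x = symbols('x')
print(series(cos(sin(x)) - cos(x), x, 0, 6))       # x**4/6 + O(x**6)
print(limit((cos(sin(x)) - cos(x)) / x**4, x, 0))  # 1/6
</code></pre>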
|
1,242,760 | <p>Now proving by induction is fairly simple. However, this is a multiple choice problem whose answers don't make any sense to me. The actual problem goes as follows:</p>
<p><em>To prove by induction that "$n^2 - 7n - 2$ is divisible by $2$" is true for all positive integers $n$, we assume "$k^2 - 7k - 2$ is divisible by $2$" is true for some positive integer $k$, and we show that $k^2 - 7k - 2 + A$ is divisible by $2$, where $A$ is:</em></p>
<p>Now I would've assumed that $A$ would be $(k+1)^2 - 7(k + 1) - 2$, but I checked the answer and it is actually $2(k-3)$ which makes no sense to me. I tried to factor, reduce, and anything I could think of but I can't get $(k+1)^2 - 7(k + 1) - 2$. Does anyone know where I am going wrong?</p>
| ncmathsadist | 4,154 | <p>Use the fact that
$$ n^2 - 7n - 2 - \left( (n-1)^2 - 7(n-1) - 2\right)
= 2n - 1 - 7 = 2(n-4).$$</p>
|
1,242,760 | <p>Now proving by induction is fairly simple. However, this is a multiple choice problem whose answers don't make any sense to me. The actual problem goes as follows:</p>
<p><em>To prove by induction that "$n^2 - 7n - 2$ is divisible by $2$" is true for all positive integers $n$, we assume "$k^2 - 7k - 2$ is divisible by $2$" is true for some positive integer $k$, and we show that $k^2 - 7k - 2 + A$ is divisible by $2$, where $A$ is:</em></p>
<p>Now I would've assumed that $A$ would be $(k+1)^2 - 7(k + 1) - 2$, but I checked the answer and it is actually $2(k-3)$ which makes no sense to me. I tried to factor, reduce, and anything I could think of but I can't get $(k+1)^2 - 7(k + 1) - 2$. Does anyone know where I am going wrong?</p>
| John Joy | 140,156 | <p>In the problem statement <em>"we show that $k^2 - 7k - 2 + A$ is divisible by $2\dots$"</em>. I think that what is intended here is that $k^2 - 7k - 2 + A$ is the next case. In other words
$$k^2 - 7k - 2 + A = (k+1)^2 -7(k+1)-2$$
Do the math, and I'm sure that you'll come up with
$$A = 2(k-3)$$
To finish off the problem, show that
$$k^2 - 7k - 2 + \color{blue}{2(k-3)} = (k+1)^2 -7(k+1)-2$$</p>
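<p>The algebra is easy to confirm with sympy (a one-line sketch, assuming sympy is available):</p>
<pre><code># Sketch: check that k^2 - 7k - 2 + 2(k - 3) equals the (k+1) case.
from sympy import symbols, expand

k = symbols('k')
print(expand(k**2 - 7*k - 2 + 2*(k - 3) - ((k + 1)**2 - 7*(k + 1) - 2)))  # 0
</code></pre>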
|
345,589 | <p>I am interested in proving that the family of functions
$$\{f_{\omega}: \mathbb{C}^n\rightarrow\mathbb{C}, f_\omega(z) = \exp(i\langle \omega, z \rangle): \omega \in \mathbb{C}^n\},$$
where $\langle \cdot,\cdot\rangle$ is the usual hermitian dot product, is $\mathbb{C}$-linearly independent.</p>
<p>In the case $n=1$ an expeditious argument consists in remarking that these functions are eigenfunctions with distinct eigenvalues of the complex derivation operator.</p>
<p>Is there a somewhat similar argument, or a simple way to prove the result in dimension $n$ ?</p>
| Kofi | 17,439 | <p>Yes, you just need to make a slightly elaborate argument.</p>
<p>The function $E_\omega \in C(\mathbb{C}^n, \mathbb{C})$ defined by $z \mapsto \exp(i \langle \omega, z \rangle)$ is an eigenvector for each of the partial derivative operators $\partial/\partial z_j$, $j = 1, \dots, n$, and the corresponding eigenvalues are $\omega_j$.</p>
<p>Now assume that
$$ \lambda_1 E_{\omega^1} + \dots + \lambda_N E_{\omega^N} \equiv 0.$$
Inserting $z = (z_1, 0, \dots, 0)$, this becomes a statement about the one-dimensional case, where you can apply your argument. In particular, you conclude that some of the $\omega^k$ have the same first entry or all $\lambda_k$ are zero. This way you reduce the equation above to a number of equations of the same kind, but with smaller $N$ and the property that all $\omega^k$ have the same first entry.</p>
<p>Proceeding this way for $j= 1, \dots, n$, you finally get that each $\lambda_k$ is zero, unless some of the $\omega^k$ coincide.</p>
|
2,553,175 | <p>How can I verify that
$$1-2\sin^2x=2\cos^2x-1$$
Is true for all $x$?</p>
<p>It can be proved through a couple of messy steps using the fact that $\sin^2x+\cos^2x=1$, solving for one of the trigonemtric functions and then substituting, but the way I did it gets very messy very quickly and you end up with a bunch of factoring, etc.</p>
<p>What's the simplest way to solve this?</p>
| user284331 | 284,331 | <p>It should be very nice: $1-2\sin^{2}\theta=1-2(1-\cos^{2}\theta)=1-2+2\cos^{2}\theta=2\cos^{2}\theta-1$.</p>
|
35,987 | <p><em>cross post in <a href="https://stackoverflow.com/questions/3513660/multivariate-bisection-method">StackOverflow</a></em></p>
<p>I need an algorithm to perform a 2D bisection method for solving a $2\times 2$ non-linear problem. Example: two equations $f(x,y)=0$ and $g(x,y)=0$ which I want to solve simultaneously. I am very familiar with the 1D bisection (as well as other numerical methods). Assume I already know the solution lies between the bounds $x_1 < x < x_2$ and $y_1 < y < y_2$.</p>
<p>In a grid the starting bounds are:</p>
<pre><code> ^
| C D
y2 -+ o-------o
| | |
| | |
| | |
y1 -+ o-------o
| A B
o--+------+---->
x1 x2
</code></pre>
<p>and I know the values at $f(A)$, $f(B)$, $f(C)$ and $f(D)$ as well as $g(A)$, $g(B)$, $g(C)$ and $g(D)$. I might even know for which edges $f=0$ and for which $g=0$.</p>
<p>To start the bisection I guess we need to divide the points out along the edges as well as the middle.</p>
<pre><code> ^
| C F D
y2 -+ o---o---o
| | |
|G o o M o H
| | |
y1 -+ o---o---o
| A E B
o--+------+---->
x1 x2
</code></pre>
<p>Now considering the possibilities of combinations such as checking if $f(G)*f(M)<0$ <code>AND</code> $g(G)*g(M)<0$ seems overwhelming. Maybe I am making this a little too complicated, but I think there should be a multidimensional version of the Bisection, just as Newton-Raphson can easily be extended to multiple dimensions using gradient operators.</p>
<p>Any clues, comments, or links are welcomed.</p>
| KalEl | 8,602 | <ol>
<li><p>Check the pair of <em>opposite</em> corners to determine if zeroes lie within each of the four subdivided rectangles (zeroes can be there in more than one of them). E.g. if f(M)>0 and f(A)<0, then AEMG contains zeroes of f. The same is true if f(G)>0 and f(E)<0.</p></li>
<li><p>Do this for all four sub-rectangles, and for both f and g.</p></li>
<li><p>There will be at least one which contains zeroes for both f and g. Zoom into that and repeat; a sketch of this scheme follows below.</p></li>
</ol>
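<p>For concreteness, here is a minimal Python sketch of the subdivision scheme above (the diagonal sign test and the example system $f=x^2+y^2-1$, $g=y-x$ are my own illustrative choices; the test is a heuristic that can in principle miss zeroes or keep spurious boxes):</p>
<pre><code># Sketch: recursive 2D bisection, keeping rectangles in which both f and g
# change sign across a diagonal (a heuristic zero-containment test).
def sign_change(F, a, b, c, d):
    # True if F changes sign across either diagonal of [a,b] x [c,d].
    return F(a, c) * F(b, d) <= 0 or F(a, d) * F(b, c) <= 0

def solve2d(f, g, x1, x2, y1, y2, tol=1e-9):
    if max(x2 - x1, y2 - y1) < tol:
        return (x1 + x2) / 2, (y1 + y2) / 2
    xm, ym = (x1 + x2) / 2, (y1 + y2) / 2
    for a, b, c, d in [(x1, xm, y1, ym), (xm, x2, y1, ym),
                       (x1, xm, ym, y2), (xm, x2, ym, y2)]:
        if sign_change(f, a, b, c, d) and sign_change(g, a, b, c, d):
            root = solve2d(f, g, a, b, c, d, tol)
            if root is not None:
                return root
    return None

# Illustrative system: circle f = x^2 + y^2 - 1 and line g = y - x;
# their common zero in [0,1]^2 is (1/sqrt(2), 1/sqrt(2)).
print(solve2d(lambda x, y: x*x + y*y - 1, lambda x, y: y - x, 0.0, 1.0, 0.0, 1.0))
</code></pre>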
|
35,987 | <p><em>cross post in <a href="https://stackoverflow.com/questions/3513660/multivariate-bisection-method">StackOverflow</a></em></p>
<p>I need an algorithm to perform a 2D bisection method for solving a $2\times 2$ non-linear problem. Example: two equations $f(x,y)=0$ and $g(x,y)=0$ which I want to solve simultaneously. I am very familiar with the 1D bisection (as well as other numerical methods). Assume I already know the solution lies between the bounds $x_1 < x < x_2$ and $y_1 < y < y_2$.</p>
<p>In a grid the starting bounds are:</p>
<pre><code> ^
| C D
y2 -+ o-------o
| | |
| | |
| | |
y1 -+ o-------o
| A B
o--+------+---->
x1 x2
</code></pre>
<p>and I know the values at $f(A)$, $f(B)$, $f(C)$ and $f(D)$ as well as $g(A)$, $g(B)$, $g(C)$ and $g(D)$. I might even know for which edges $f=0$ and for which $g=0$.</p>
<p>To start the bisection I guess we need to divide the points out along the edges as well as the middle.</p>
<pre><code> ^
| C F D
y2 -+ o---o---o
| | |
|G o o M o H
| | |
y1 -+ o---o---o
| A E B
o--+------+---->
x1 x2
</code></pre>
<p>Now considering the possibilities of combinations such as checking if $f(G)*f(M)<0$ <code>AND</code> $g(G)*g(M)<0$ seems overwhelming. Maybe I am making this a little too complicated, but I think there should be a multidimensional version of the Bisection, just as Newton-Raphson can easily be extended to multiple dimensions using gradient operators.</p>
<p>Any clues, comments, or links are welcomed.</p>
| dranxo | 6,360 | <p>You might want to consider the vector field</p>
<p>$ \vec{F}(x,y) = (f(x,y), g(x,y)) $</p>
<p>and look for sources and sinks of $\vec{F}$. I think this could be done by recursively dividing up the plane into squares and calculating the winding number of each square. If it is nonzero then you have a critical point within that square (cf Thm 2 <a href="http://www.mpi-inf.mpg.de/~ag4-gm/handouts/06gm_top3.pdf" rel="nofollow">this paper</a>) and should divide further.</p>
|
1,791,837 | <p>Recently I have been very fascinated by the claim and impact of Gödel's incompleteness theorem. To understand the proof given by Gödel, I felt the need to read an introductory book in logic to begin with. I have started reading the book titled "A Mathematical Introduction to Logic" by Herbert Enderton. As mentioned by the author, the aim of this book is to formalize the concepts of <em>logical reasoning, truth, proof</em>, etc. In the first chapter of this book the author introduces Sentential Logic. Further, to prove a few results in this chapter, the author provides proofs which are based on basic logical deductions. </p>
<p>My doubt is the following: our aim in introducing sentential logic was to formalize certain aspects of logical reasoning; however, in proving a few results about this new system (sentential logic) we use intuitive logic itself. Can we use a circular argument like this in a proof?</p>
<p>Please let me know as to where my understanding about sentential logic lacks. I'm just starting with the field of mathematical logic and am currently stuck at this philosophical doubt, hence any help would be very welcome.</p>
| R Mary | 309,615 | <p>I think the function you are looking for is $f(x)=\int_{\gamma_x} \alpha$ where $\gamma_x$ is a path from some chosen base point $x_0$ to $x$. But then for this to be well defined you have to say that all paths between $x$ and $x_0$ are homotopic, i.e. that your sphere is simply connected, so the whole argument gets a bit circular...</p>
<p>P.S. Trying to write things out in coordinates will not get you far, because a closed form not being exact is a "global" thing, not a local one.</p>
|
3,816,041 | <blockquote>
<p>In how many ways can <span class="math-container">$5$</span> identical green balls and <span class="math-container">$6$</span> identical red balls be arranged into <span class="math-container">$3$</span> distinct boxes such that no box is empty?</p>
</blockquote>
<p>My attempt :</p>
<p>Finding the coefficient of <span class="math-container">$x^{11}$</span> in the expansion of <span class="math-container">$$( x + x^2 + x^3 + x^4 + x^5+x^6 )^3 ( x + x^2 + x^3 + x^4 + x^5 )^3$$</span> and arranging them, which turned out to be wrong when inspected.</p>
<p>Please help me out</p>
| Dietrich Burde | 83,966 | <p>The polynomial <span class="math-container">$f=x^4+x^3+x^2+x+1$</span> is irreducible over <span class="math-container">$\Bbb F_p$</span> if <span class="math-container">$p\not\equiv \pm 1\bmod 5$</span> with <span class="math-container">$p\neq 5$</span>. The proof goes as follows. Note that
<span class="math-container">$$
x^5 −1 = (x−1)(x^4 +x^3 +x^2 +x+1).
$$</span>
Any root of <span class="math-container">$f=0$</span> has order <span class="math-container">$5$</span> or <span class="math-container">$1$</span> (in a splitting field). Only the identity has order <span class="math-container">$1$</span>. If <span class="math-container">$f$</span> had a linear factor in <span class="math-container">$\Bbb F_p[x]$</span>, then <span class="math-container">$f=0$</span> had a root in <span class="math-container">$\Bbb F_p$</span>. Since <span class="math-container">$1+1+1+1+1\neq 0$</span>, <span class="math-container">$1$</span> is not a root, so that any possible root must have order <span class="math-container">$5$</span>. But the order of <span class="math-container">$\Bbb F_p^*$</span> is <span class="math-container">$p-1$</span>,
which is not divisible by <span class="math-container">$5$</span>, so there is no root in the base field <span class="math-container">$\Bbb F_p$</span>.</p>
<p>If <span class="math-container">$f$</span> had an irreducible quadratic factor <span class="math-container">$q$</span> in <span class="math-container">$\Bbb F_p[x]$</span>, then <span class="math-container">$f=0$</span> would have a root in a quadratic
extension <span class="math-container">$K$</span> of <span class="math-container">$\Bbb F_p$</span>. Since <span class="math-container">$[K : \Bbb F_p] = 2$</span>, the field <span class="math-container">$K$</span> has <span class="math-container">$p^2$</span> elements, and <span class="math-container">$K^*$</span> has <span class="math-container">$p^2 −1=(p−1)(p+1)$</span> elements.
By Lagrange, the order of any element of <span class="math-container">$K^*$</span> is a divisor of <span class="math-container">$p^2 − 1$</span>, but <span class="math-container">$5$</span> does not divide <span class="math-container">$p^2 − 1$</span>, so there is no element in <span class="math-container">$K$</span> of order 5. That is, there is no quadratic irreducible factor.</p>
<p>It follows that <span class="math-container">$f$</span> is irreducible.</p>
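<p>A quick spot check with sympy (assumed available) matches the criterion, since $7\equiv2$, $11\equiv1$, and $19\equiv-1 \pmod 5$:</p>
<pre><code># Sketch: factor x^4 + x^3 + x^2 + x + 1 over GF(p) for a few primes p.
from sympy import symbols, factor

x = symbols('x')
f = x**4 + x**3 + x**2 + x + 1
for p in (7, 11, 19):
    # expect: irreducible, four linear factors, two quadratic factors
    print(p, factor(f, modulus=p))
</code></pre>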
|
1,241,864 | <p>I would like to know if there is formula to calculate sum of series of square roots $\sqrt{1} + \sqrt{2}+\dotsb+ \sqrt{n}$ like the one for the series $1 + 2 +\ldots+ n = \frac{n(n+1)}{2}$.</p>
<p>Thanks in advance.</p>
| Community | -1 | <p>For the sum of the integer parts $\lfloor\sqrt r\rfloor$, one should note that there are runs of equal values, of increasing lengths:</p>
<p>$$1,1,1,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4\dots$$</p>
<p>For every integer $i$ there are $(i+1)^2-i^2=2i+1$ replicas, and by the Faulhaber formulas</p>
<p>$$\sum_{i=1}^m i(2i+1)=2\frac{2m^3+3m^2+m}6+\frac{m^2+m}2=\frac{4m^3+9m^2+5m}{6}.$$</p>
<p>When $n$ is a perfect square minus $1$, all runs are complete and the above formula applies, with $m=\sqrt{n+1}-1$.</p>
<p>Otherwise, the last run is incomplete and has $n-\left(\lfloor\sqrt n\rfloor\right)^2+1$ elements.</p>
<p>Hence, with $m=\lfloor\sqrt n\rfloor$,</p>
<p>$$S_n=\frac{4(m-1)^3+9(m-1)^2+5(m-1)}{6}+m\left(n-m^2+1\right)\\
=m\left(n-\frac{2m^2+3m-5}6\right).$$</p>
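<p>A brute-force check of this closed form (a small Python sketch using <code>math.isqrt</code>):</p>
<pre><code># Sketch: verify S_n = m*(6n - 2m^2 - 3m + 5)/6 with m = floor(sqrt(n)).
from math import isqrt

def closed_form(n):
    m = isqrt(n)
    return m * (6 * n - 2 * m * m - 3 * m + 5) // 6  # always an exact division

assert all(closed_form(n) == sum(isqrt(r) for r in range(1, n + 1))
           for n in range(1, 2000))
print("ok")
</code></pre>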
|
1,241,864 | <p>I would like to know if there is formula to calculate sum of series of square roots $\sqrt{1} + \sqrt{2}+\dotsb+ \sqrt{n}$ like the one for the series $1 + 2 +\ldots+ n = \frac{n(n+1)}{2}$.</p>
<p>Thanks in advance.</p>
| Simply Beautiful Art | 272,831 | <p>For a better upper bound than Alex's answer,</p>
<p>$$\sum_{n=1}^xn^{1/2}\le\frac23\left(x+\frac12\right)^{3/2}$$</p>
<p>And if you want to improve upon that,</p>
<p>$$\sum_{n=1}^xn^{1/2}\approx\frac23\left(x+\frac12\right)^{3/2}\underbrace{-0.2078862\ldots}_{\zeta(-1/2)}$$</p>
|
96,789 | <p>I've been working on understanding limits thoroughly, so I'm rewriting how I understand the chain rule. Please help me fill in my gaps in understanding.</p>
<p>$f$ is some function. Then</p>
<p>$f'(x) = \lim\limits_{h \to 0}\frac{f(x+h)-f(x)}{h}$</p>
<p>Now I might want to evaluate something like </p>
<p>$$\left(f(g(x))\right)' = \lim\limits_{h \to 0}\frac{f(g(x+h))-f(g(x))}{h}$$</p>
<p>Evaluating this is tricky, so we need a way to do it. </p>
<p>$\left(f(g(x))\right)' = \lim\limits_{h \to 0}\left(\frac{f(g(x+h))-f(g(x))}{g(x+h)-g(x)}\cdot\frac{g(x+h)-g(x)}{h}\right)$</p>
<p>$\left(f(g(x))\right)' = (\lim\limits_{h \to 0}\frac{f(g(x+h))-f(g(x))}{g(x+h)-g(x)})\cdot\lim\limits_{h \to 0}(\frac{g(x+h)-g(x)}{h})$</p>
<p>if $k=g(x+h)-g(x)$, then</p>
<p>$g(x+h)=k+g(x)$, so</p>
<p>$\left(f(g(x))\right)' = (\lim\limits_{h \to 0}\frac{f(g(x)+k)-f(g(x))}{k})\cdot(\lim\limits_{h \to 0}\frac{g(x+h)-g(x)}{h})$</p>
<p>and, assuming I can go ahead and just change the limit variable on the left term, then</p>
<p>$=(\lim\limits_{k \to 0}\frac{f(g(x)+k)-f(g(x))}{k}) \cdot (\lim\limits_{h \to 0}\frac{g(x+h)-g(x)}{h})$</p>
<p>$=f'(g(x))\cdot g'(x)$</p>
<p>Which is easier to figure out, and is also the chain rule.</p>
<p>Is that correct?</p>
| Michael Hardy | 11,667 | <p>This is correct except that it fails to address the fact that $k$ may be $0$ when $h$ is not. If there is some neighborhood of $0$ such that when $h$ is in that neighborhood and $h\ne0$, then $k\ne0$, then everything above works fine. If every open neighborhood of $0$ contains some value of $h\ne0$ for which $k=0$, then your argument fails to deal with that situation. This is about the behavior of the function $g$, not about $f$.</p>
<p>The fraction
$$
\frac{f(g(x)+k)-f(g(x))}{k} \tag{1}
$$
is undefined when $k=0$, which may happen in some cases even when $h\ne0$. Now consider the piecewise defined function
$$
k \mapsto \begin{cases} \frac{f(g(x)+k)-f(g(x))}{k} & \text{if }k\ne 0, \\ \\
f'(g(x)) & \text{if } k = 0.\end{cases} \tag{2}
$$
I think working with (2) in place of (1) can fill in the gap. You still probably have to worry about the details of the logic, but that's the main idea.</p>
|
96,789 | <p>I've been working on understanding limits thoroughly, so I'm rewriting how I understand the chain rule. Please help me fill in my gaps in understanding.</p>
<p>$f$ is some function. Then</p>
<p>$f'(x) = \lim\limits_{h \to 0}\frac{f(x+h)-f(x)}{h}$</p>
<p>Now I might want to evaluate something like </p>
<p>$$\left(f(g(x))\right)' = \lim\limits_{h \to 0}\frac{f(g(x+h))-f(g(x))}{h}$$</p>
<p>Evaluating this is tricky, so we need a way to do it. </p>
<p>$\left(f(g(x))\right)' = (\lim\limits_{h \to 0}\frac{f(g(x+h))-f(g(x))}{g(x+h)-g(x)}\cdot\frac{g(x+h)-g(x)}{h}$)</p>
<p>$\left(f(g(x))\right)' = (\lim\limits_{h \to 0}\frac{f(g(x+h))-f(g(x))}{g(x+h)-g(x)})\cdot\lim\limits_{h \to 0}(\frac{g(x+h)-g(x)}{h})$</p>
<p>if $k=g(x+h)-g(x)$, then</p>
<p>$g(x+h)=k+g(x)$, so</p>
<p>$\left(f(g(x))\right)' = (\lim\limits_{h \to 0}\frac{f(g(x)+k)-f(g(x))}{k})\cdot(\lim\limits_{h \to 0}\frac{g(x+h)-g(x)}{h})$</p>
<p>and, assuming I can go ahead and just change the limit variable on the left term, then</p>
<p>$=(\lim\limits_{k \to 0}\frac{f(g(x)+k)-f(g(x))}{k}) \cdot (\lim\limits_{h \to 0}\frac{g(x+h)-g(x)}{h})$</p>
<p>$=f'(g(x))\cdot g'(x)$</p>
<p>Which is easier to figure out, and is also the chain rule.</p>
<p>Is that correct?</p>
| Arturo Magidin | 742 | <p>The final idea is generally correct, as Michael Hardy points out, modulo some refinements to deal with tricky situations.</p>
<p>However, I wanted to point out that some of the arguments leading up to that final part are incorrect:</p>
<ul>
<li><p>You assume that
$$b\lim_{h\to 0}\frac{a}{h} = \lim_{h\to 0}\frac{ab}{h}.$$
This will work if $b$ <em>does not</em> depend on $h$ (in the sense that either both sides exist and are equal, or both sides do not exist), but cannot hold if $b$ <em>does</em> depend on $h$. To see that it cannot hold when $b$ depends on $h$, note that the right hand side will <em>not</em> depend on $h$ (because it's the value of a limit over $h$), but the left hand side <em>does</em> depend on $h$ (since $b$ depends on $h$ and is outside the limit, so the left hand side will be a function of $h$, namely $b$, times a constant, namely the value of the limit).</p></li>
<li><p>For the same reason, when you write:
$$
\left(\lim\limits_{h \to 0}\frac{f(g(x)+k)-f(g(x))}{h}\right) \cdot \frac{k}{k}
=\left(\lim\limits_{h \to 0}\frac{f(g(x)+k)-f(g(x))}{k}\right) \cdot \frac{k}{h}$$
you run into trouble, because you cannot just pull the $h$ out of the limit that is a limit over $h$, and because $k$ depends on $h$, so you cannot move it in and out of a limit over $h$ as you would a constant.</p></li>
</ul>
<p>What you need in both cases is the "product rule for limits": if
$$\lim_{h\to 0}\;k\quad\text{and}\quad \lim_{h\to 0}\;m\quad\text{both exist, then }\lim_{h\to 0}\;km = \left(\lim_{h\to 0}\;k\right)\left(\lim_{h\to 0}\;m\right),$$
which is what you use in the final paragraphs. </p>
<hr/>
<p>We want to find
$$\lim_{h\to 0}\frac{f(g(x+h)) - f(g(x))}{h}.$$</p>
<p>You propose defining $k(h)$ (and it's important to keep the dependence on $h$ explicit, to prevent the errors you fell into) by
$$k(h) = g(x+h)-g(x)$$
so that we can rewrite
$$\frac{f(g(x+h)) - f(g(x))}{h} = \frac{f(g(x)+k(h)) - f(g(x))}{h}.$$
Nothing wrong here. Intuitively, what we want to then do is rewrite again into
$$\begin{align*}
\frac{f(g(x)+k(h)) - f(g(x))}{h} &= \frac{f(g(x)+k(h)) -f(g(x))}{k(h)}\cdot\frac{k(h)}{h}\\
&= \frac{f(g(x)+k(h)) - f(g(x))}{k(h)}\cdot \frac{g(x+h)-g(x)}{h},
\end{align*}$$
and then use the fact that a limit of a product is the product of the limits (if they both exist), and a change of variable in the first limit from $h$ to $k$, to deduce the Chain Rule.</p>
<p>The problem with this is that the rewriting is only valid if $k(h)\neq 0$ when $h\neq 0$; otherwise, the two functions are not equal, since they don't have the same domain. That is, we know that $k(h)=g(x+h)-g(x)$ is $0$ when $h=0$, but it's possible for $k(h)$ to equal zero for <em>other</em> values of $h$; and at those values, $\frac{f(g(x)+k(h))-f(g(x))}{h}$ is defined, but $\frac{f(g(x)+k(h))-f(g(x))}{k(h)}\cdot\frac{g(x+h)-g(x)}{h}$ is not, and we don't have equality.</p>
<p>If you think about it, though, at the places where $k(h)=0$, we have that
$$\frac{f(g(x)+k(h))-f(g(x))}{h}=0.$$
So the simple way to make this work is to use a <em>different</em> function instead of
$$\frac{f(g(x)+k(h))-f(g(x))}{k(h)}$$
on the right hand side, one that is equal to $f'(g(x))$ when $k(h)=0$ (the value we want to get), and is equal to the old value when $k(h)\neq 0$.</p>
<p>So we define a function $G(h)$ as follows:
$$G(h) = \left\{\begin{array}{ll}
\frac{f(g(x)+k(h))- f(g(x))}{k(h)} &\text{if }k(h)\neq 0\\
f'(g(x))&\text{if }k(h)=0.
\end{array}\right.$$
With this function, we <em>do</em> have that
$$\frac{f(g(x+h)) - f(g(x))}{h} = G(h)\cdot\frac{g(x+h)-g(x)}{h}$$
for all $h$. Since they take the same values at all values of $h$ (except $h=0$, where they are both undefined), the two have the same limit as $h\to 0$:
$$\lim_{h\to 0}\left(\frac{f(g(x+h))-f(g(x))}{h} \right)= \lim_{h\to 0}\left(G(h)\cdot\frac{g(x+h)-g(x)}{h}\right).$$
Now,
$$\lim_{h\to0}\frac{g(x+h)-g(x)}{h} = g'(x),$$
so if $\lim\limits_{h\to 0}G(h)$ exists, then we'll be set.</p>
<p>Now, $k(h) = g(x+h)-g(x)$; as $h\to 0$, we have $k(h)\to 0$ because $g$ is continuous at $x$ (by virtue of being differentiable at $x$). </p>
<p>If $k(h)=0$ for all values of $h$ near $0$, then
$$\lim\limits_{h\to 0}G(h) = \lim_{h\to 0}f'(g(x)) = f'(g(x)).$$</p>
<p>If $k(h)\neq 0$ for all $h$ near $0$ (except at $h=0$), then we can do a change of variable: since $\lim\limits_{h\to 0}k(h) = 0$, we have:
$$\begin{align*}
\lim_{h\to 0}G(h) &= \lim_{h\to0}\frac{f(g(x)+k(h)) - f(g(x))}{k(h)}\\
&= \lim_{k(h)\to 0}\frac{f(g(x)+k(h)) - f(g(x))}{k(h)}\\
&= f'(g(x)).
\end{align*}$$
The justification for this formally requires epsilon and deltas, but the idea is: we can make $k(h)$ arbitrarily close to $0$ by making $h$ arbitrarily close to $0$, so taking the limit as $h$ approaches $0$ amounts to the same as taking the limit as $k(h)$ approaches zero.</p>
<p>What if $k(h)$ is not constant zero, but takes the value $0$ at arbitrarily small values of $h$ (that is, for every $\delta\gt 0$ we can find $h$ in $(-\delta,\delta)$ where $k(h)=0$, and we can find $h'$ in $(-\delta,\delta)$ where $k(h)\neq 0$)? Then one needs to argue carefully that the limit
$$\lim_{h\to 0}G(h)$$
is still equal to $f'(g(x))$; this is difficult to do informally, and there are no ready "limit rules" to help us. You would need to see the proof with epsilon and deltas. One can think of this as a rather "extreme and almost pathological" case, and the easy cases above follow the intuition you had fairly well, once you fix the problems with your manipulation of limits. </p>
<p>Taking this for granted, we have:
$$\begin{align*}
\lim_{h\to 0}\frac{f(g(x+h))-f(g(x))}{h} &= \lim_{h\to 0}\frac{f(g(x)+k(h)) - f(g(x))}{h}\\
&= \lim_{h\to 0}\left(G(h)\cdot\frac{g(x+h)-g(x)}{h}\right)\\
&= \left(\lim_{h\to 0}G(h)\right)\left(\lim_{h\to 0}\frac{g(x+h)-g(x)}{h}\right)\\
&= f'(g(x))g'(x).
\end{align*}$$</p>
|
3,482,138 | <p><a href="https://i.stack.imgur.com/OUlV2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OUlV2.png" alt="enter image description here"></a></p>
<ol>
<li>I have found two planes through the origin that meet the given plane
at right angles.</li>
</ol>
<p>I found three points in the plane, got the vectors between those points, and used that as a normal vector for the equation of the plane I am looking to find. </p>
<p>I don't understand how I am to describe all of these planes though. Would finding a basis of all vectors lying in the plane and using an arbitrary linear combination of these as our normal to the plane we are looking for be the right way? </p>
<ol start="2">
<li>I took <span class="math-container">$(4, -1, 1)$</span> and <span class="math-container">$(1, -2, -3)$</span> as normal vectors to each plane.
Their cross product is <span class="math-container">$(5, 13, -7)$</span> is in the direction of the line
of intersection of the two planes, thus parallel to both.</li>
</ol>
<p><span class="math-container">$(x, y, z) = (x_0 + 5t, y_0 + 13t, z_0 -7t)$</span> is a parametric equation for all lines parallel to both. Is this correct? And what do they mean by refining this such that each line is listed once?</p>
<p>Any help is greatly appreciated!</p>
| bjorn93 | 570,684 | <p>It should be clear that <span class="math-container">$\triangle BEC$</span> is equilateral. Denote <span class="math-container">$\angle BFD=\alpha$</span> and <span class="math-container">$\angle AFD=\beta$</span>. We need to find <span class="math-container">$\alpha+\beta$</span>. You can show that <span class="math-container">$\angle ADF=60^{\circ}-\alpha$</span> through some angle chasing. Sine law for <span class="math-container">$\triangle ADF$</span> gives you
<span class="math-container">$$\frac{AD}{AF}=\frac{\sin\beta}{\sin(60^{\circ}-\alpha)} $$</span>
and the sine law for <span class="math-container">$\triangle ABF$</span> gives you
<span class="math-container">$$\frac{AB}{AF}=\frac{\sin(\alpha+\beta)}{\sin(2(\alpha+\beta))}=\frac{1}{2\cos(\alpha+\beta)} $$</span>
Since <span class="math-container">$AB=AD$</span>,
<span class="math-container">$$\frac{\sin\beta}{\sin(60^{\circ}-\alpha)}=\frac{1}{2\cos(\alpha+\beta)} $$</span>
Can you prove that the last equality implies <span class="math-container">$\beta=30^{\circ}$</span> or <span class="math-container">$\alpha+\beta=60^{\circ}$</span> and that <span class="math-container">$\beta=30^{\circ}$</span> is a contradiction?</p>
|
2,708,071 | <p>Question: Suppose $|x_n - x_k| \le n/k^2$ for all $n$ and $k$.Show that $\{x_n\}_{n=1}^{\infty}$ is cauchy. </p>
<p>Attempt: To prove this, I have to find $M \in \mathbb{N}$ such that for $\varepsilon >0$, $n/k^2 < \varepsilon$ for $n,k \ge M$. </p>
<p>Let $\varepsilon > 1/M$. </p>
<p>Then, $n/k^2 \le M/M^2$ (#) $= 1/M < \varepsilon$ for $n,k \ge M$.</p>
<p>I feel (#) is not necessarily true. Is there any way to show (#) is correct? or could you give me some any hint regarding this question?</p>
| Aweygan | 234,668 | <p>No, $(\#)$ is not necessarily true. $M=1$, $n=5$, $k=2$ provides a counterexample. </p>
<p>Note that since $|x_n-x_k|=|x_k-x_n|$, we have $|x_n-x_k|\leq\min\{n/k^2,k/n^2\}$. Given $\varepsilon>0$, choose $M\in\mathbb N$ such that $\varepsilon>1/M$. Suppose $n,k\geq M$ and $k\geq n$. Then we have
$$|x_n-x_k|\leq\frac{n}{k^2}\leq\frac{1}{k}\leq\frac{1}{M}<\varepsilon.$$</p>
|
196,303 | <p>All models of space that I know from physics use real or complex manifolds. I was just wondering if this is still the case at the Planck scale. In string theory, physicists still use strings (circles) in an 11-dimensional manifold in order to model particles. Do they do this because there are no mathematical alternatives, or because the nature (mathematical essence) of space at the Planck scale is still not yet discovered? </p>
| Community | -1 | <p>This paper by Carlip <a href="http://arxiv.org/abs/gr-qc/0108040" rel="nofollow">http://arxiv.org/abs/gr-qc/0108040</a> is a good, relatively nontechnical explanation of why it's hard to reconcile quantum mechanics (QM) with general relativity (GR).</p>
<p>GR says that spacetime is a real manifold with a semi-Riemannian metric. QM says that the possible states of a system form a complex vector space.</p>
<p>If you naively try to combine these two ideas, it's hard to make sense of the result. Given one manifold-with-metric $M_1$ and another one $M_2$, what would it even mean to talk about the linear combination $c_1M_1+c_2M_2$, where $c_1$ and $c_2$ are complex numbers? The spacetimes $M_1$ and $M_2$ do not have any built-in way of matching up points in one with points in the other. The two spacetimes don't even need to have the same topology. In quantum mechanics, we would also have the Born rule, which says that $|c_1|^2$ and $|c_2|^2$ have interpretations as the probabilities of outcomes of measurements. It's not clear what these probabilities would mean in this context.</p>
<p>So should spacetime be described at the Planck scale as a real manifold, or if not, then what? Straightforward application of the fundamental principles of the two theories seems to lead to nonsense answers. We really don't know.</p>
|
2,871,655 | <p>I was trying some Cambridge past papers and it said to first separate into partial fractions and then find the sum of the sequence; however, after splitting into partial fractions I'm not getting the terms to cancel out like I normally do with these questions. Is there something I'm missing? I've been trying to manipulate the 3 sets of terms but can't seem to get it. Thanks.</p>
<p>$$\sum_{r=1}^n\frac4{r(r+1)(r+2)}$$</p>
| Simply Beautiful Art | 272,831 | <p>As you've noticed, we have</p>
<p>$$4^sf(s)=\sum_{n=0}^\infty\left(\frac1{(n+1/4)^s}-\frac1{(n+3/4)^s}\right)$$</p>
<p>which, for odd $s$, can be written as</p>
<blockquote>
<p>$$\begin{align}
&(s-1)!4^sf(s)\\
&=\lim_{x\to1/4}\frac{d^{s-1}}{dx^{s-1}}\sum_{n=0}^\infty\left(\frac1{n+x}+\frac1{x-1-n}\right)\\
&=\lim_{x\to1/4}\frac{d^{s-1}}{dx^{s-1}}\sum_{n=-\infty}^\infty\frac1{n+x}
\\
&=\lim_{x\to1/4}\frac{d^{s-1}}{dx^{s-1}}\pi\cot(\pi x)
\end{align}$$</p>
</blockquote>
<p>which gives exact values for $f(s)$ when $s$ is odd.</p>
<p>It also turns out this is the <a href="https://en.wikipedia.org/wiki/Dirichlet_beta_function" rel="nofollow noreferrer">Dirichlet beta function</a> with the special values of</p>
<p>$$f(s)=\beta(s)=\frac{(-1)^kE_{2k}}{4^{k+1}(2k)!}\pi^{2k+1}$$</p>
<p>where $s=2k+1$ and $E_k$ are the <a href="https://en.wikipedia.org/wiki/Euler_number" rel="nofollow noreferrer">Euler numbers</a>.</p>
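<p>A numeric spot check (an mpmath sketch, assuming mpmath is available): for $s=3$, i.e. $k=1$ and $E_2=-1$, the formula gives $\beta(3)=\pi^3/32$:</p>
<pre><code># Sketch: confirm beta(3) = pi^3/32 numerically with mpmath.
from mpmath import mp, nsum, inf, pi

mp.dps = 25
beta3 = nsum(lambda n: (-1)**n / (2*n + 1)**3, [0, inf])
print(beta3)       # ~0.968946146...
print(pi**3 / 32)  # the same value
</code></pre>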
|
2,452,084 | <ul>
<li><em>I'm having trouble understanding why the arbitrariness of $\epsilon$ allows us to conclude that $d(p,p')=0$. It seems we could just as well conclude a value such as $\frac {\epsilon}{100}$, couldn't we? The other idea that would normally work is the limit (as $n$ approaches $\infty$, $p_n$ approaches $p'$), but that would mean we are working with another sequence, which would have the same problem. Thanks in advance.</em></li>
</ul>
<p>Definition of Convergence</p>
<blockquote>
<p>A sequence $\{ p_n \}$ in a metric space $X$ is said to converge if there is a point $p \in X$ with the following property: For every $ \epsilon>0$ there is an integer $N$ such that $n \ge N$ implies $d(p_n, p) < \epsilon$, where $d$ is the distance function.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/tDMkx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDMkx.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/TIFil.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TIFil.png" alt="enter image description here"></a></p>
| Alekos Robotis | 252,284 | <p>The moral is that the only $x\ge 0$ satisfying $x< \epsilon$ for all $\epsilon>0$ is $0$. For, suppose that $x\ne 0$, then $x=\delta>0$. Choose $\epsilon = \frac{\delta}{2}$. Then $x\not< \epsilon$, contrary to assumption. </p>
<p>Because $\epsilon$ is arbitrary, the statement is true for all $\epsilon>0$, and thus we are done.</p>
|
1,956,338 | <p>Let $f\in C_b(\mathbb{R})$ and $g$ be any continuous function on $\mathbb{R}$ (or maybe there has to be a restriction on $g$?). Now let $K$ be a compact subset of $\mathbb{R}$ and $U$ an open subset of $\mathbb{R}$. Then I want to show that the following set $$\Phi:=\{t\in\mathbb{R}_+:(e^{t\cdot g}f)(K)\subseteq U\}$$ is an open set. Hence I have to find for every $t\in\Phi$ an $\varepsilon>0$ such that $s\in\Phi$ whenever $|s-t|<\varepsilon$. Why is this true? Maybe the argument is simple, but I don't see it. Thank you in advance.</p>
| user60589 | 60,589 | <p>For a compact set $C$ which is contained in an open set $U$ there is an open ball around $1$ such that $B_{\varepsilon}(1) \cdot C \subset U$.</p>
<p>Then you have
$$ e^{B_{\delta}(t)\cdot g(K) }f(K) = e^{B_{\delta}(0)\cdot g(K)}e^{t\cdot g(K)}f(K)\subseteq B_{\varepsilon}(1) \cdot C \subset U $$
where $\varepsilon\to 0$ as $\delta\to 0$ (using that $g(K)$ is bounded) and $e^{t\cdot g(K)}f(K) =C$.</p>
<p>How to prove the first assertion? Assume $(\Bbb R \setminus U)\cap (\overline{ B_{\varepsilon}(1)} \cdot C)\ne \emptyset$ for every $\varepsilon>0$; these sets are bounded and closed, thus compact, so their intersection (take $\varepsilon = \frac{1}{n}$ for $n>0$) contains an element which has distance zero to $C$, which is a contradiction.</p>
|
561,921 | <p>So far I have,
$$
\lim_{x\to 1} \frac{\frac{x}{\sqrt{x^2+1}} - \frac{1}{\sqrt{1^2+1}}}{x-1}=\lim_{x\to 1} \frac{\frac{x}{\sqrt{x^2+1}} - \frac{1}{\sqrt{2}}}{x-1}
$$</p>
<p>I have no idea how to keep going with this; every way I try, I get stuck and can't do anything with it. </p>
| marty cohen | 13,079 | <p>More generally,
consider
$$\lim_{x\to a} \frac{\frac{x}{\sqrt{x^2+1}} - \frac{1}{\sqrt{a^2+1}}}{x-a}.$$</p>
<p>I will use just algebra.</p>
<p>If $x \ne a$,</p>
<p>$\begin{align}
\frac{\frac{x}{\sqrt{x^2+1}} - \frac{a}{\sqrt{a^2+1}}}{x-a}
&=\frac{x\sqrt{a^2+1}-a\sqrt{x^2+1}}{\sqrt{x^2+1}\sqrt{a^2+1}(x-a)}\\
&=\left(\frac{x\sqrt{a^2+1}-a\sqrt{x^2+1}}{\sqrt{x^2+1}\sqrt{a^2+1}(x-a)}\right)
\left(\frac{x\sqrt{a^2+1}+a\sqrt{x^2+1}}{x\sqrt{a^2+1}+a\sqrt{x^2+1}}\right)\\
&=\frac{x^2(a^2+1)-a^2(x^2+1)}
{\sqrt{x^2+1}\sqrt{a^2+1}(x-a)(x\sqrt{a^2+1}+a\sqrt{x^2+1})}\\
&=\frac{x^2a^2+x^2-a^2x^2-a^2}
{\sqrt{x^2+1}\sqrt{a^2+1}(x-a)(x\sqrt{a^2+1}+a\sqrt{x^2+1})}\\
&=\frac{x^2-a^2}
{(x-a)\sqrt{x^2+1}\sqrt{a^2+1}(x\sqrt{a^2+1}+a\sqrt{x^2+1})}\\
&=\frac{x+a}
{\sqrt{x^2+1}\sqrt{a^2+1}(x\sqrt{a^2+1}+a\sqrt{x^2+1})}\\
\end{align}
$</p>
<p>Letting $x \to a$, this becomes $\frac{2a}{(a^2+1)(2a\sqrt{a^2+1})}=\frac{1}{(a^2+1)^{3/2}}$.</p>
<p>As a derivative,</p>
<p>$\begin{align}
\left(\frac{x}{\sqrt{x^2+1}}\right)'
&=\frac{\sqrt{x^2+1}-x(1/2)(2x)(x^2+1)^{-1/2}}{x^2+1}\\
&=\frac{\sqrt{x^2+1}-x^2(x^2+1)^{-1/2}}{x^2+1}\\
&=\frac{(x^2+1)-x^2}{(x^2+1)^{3/2}}\\
&=\frac{1}{(x^2+1)^{3/2}}\\
\end{align}
$</p>
<p>which is comforting (and much easier).</p>
<p>Note that the "$1$" in $x^2+1$ and $a^2+1$ can be any value - it is just carried along - and, if the expression is $\frac{x}{\sqrt{x^2+b}}$, the result is $\frac{b}{(x^2+b)^{3/2}}$.</p>
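<p>A one-line sympy check agrees with this (assuming sympy is available):</p>
<pre><code># Sketch: confirm d/dx [x / sqrt(x^2 + b)] = b / (x^2 + b)^(3/2).
from sympy import symbols, sqrt, diff, simplify

x, b = symbols('x b', positive=True)
print(simplify(diff(x / sqrt(x**2 + b), x)))  # expected: b/(b + x**2)**(3/2)
</code></pre>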
|
561,921 | <p>So far I have,
$$
\lim_{x\to 1} \frac{\frac{x}{\sqrt{x^2+1}} - \frac{1}{\sqrt{1^2+1}}}{x-1}=\lim_{x\to 1} \frac{\frac{x}{\sqrt{x^2+1}} - \frac{1}{\sqrt{2}}}{x-1}
$$</p>
<p>I have no idea how to keep going with this, every way I try I get stuck and can't do anything with it. </p>
| Claude Leibovici | 82,404 | <p>Just as for aaa, L'Hopital's rule is, at least to me too, the simplest way to solve your problem. There is another one (which is usable if you know about Taylor series). Around x = 1, the first terms of the Taylor series of x / Sqrt[x^2 + 1] are 1 / Sqrt[2] + (x -1) / (2 Sqrt[2]) +... Then, replace the numerator by this expansion and simplify.</p>
|
3,209,722 | <p>I saw in another post on the website a simple proof that <span class="math-container">$$\lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n = \lim_{m\to\infty} \left( 1+\frac{1}{m} \right)^{mx}$$</span></p>
<p>which consists of substituting <span class="math-container">$n$</span> by <span class="math-container">$mx$</span>. I can see how the equality then holds for positive real numbers <span class="math-container">$x$</span>, yet it isn't obvious to me why it holds for negative <span class="math-container">$x$</span>.</p>
| DINEDINE | 506,164 | <p>If <span class="math-container">$x<0$</span> then <span class="math-container">$-x>0$</span>. Now apply it for <span class="math-container">$-x$</span> and make the substitution </p>
|
1,969,625 | <p>Which of the following statements is/are true?</p>
<p>$A.$ $\sin(\cos x)=x$ has exactly one root in $[0,\pi/2]$</p>
<p>$B.$ $\cos(\sin x)=x$ has exactly one root in $[0,\pi/2]$</p>
<p>$C.$ Both $A$ and $B$ are true.</p>
<p>$D.$ Both $A$ and $B$ are false.</p>
<p>I tried this: $\cos(\sin x):[0,\pi/2]\rightarrow [0,\pi/2]$ and $|(\cos(\sin x))'|=|\sin(\sin x) \cos x|<1$,
so it has a unique fixed point. But I am confused about fixed points of $\sin(\cos x).$ Please tell me about the uniqueness of fixed points of $\sin(\cos x).$ Thanks a lot.</p>
| Mihir | 378,189 | <p>Use the following property:</p>
<ol>
<li>Find the range of the function; suppose it comes out to be $[a,b]$.</li>
<li>Then find $f(a)\cdot f(b)$.</li>
<li>If it comes out to be negative (think precisely), then the function will have at least one solution in $[a,b]$.</li>
</ol>
<p>I'd go with options A, B & C. I'll be glad if you do it yourself; otherwise I will post an image of the solution. Try yourself first...</p>
|
1,969,625 | <p>Which of the following statements is/are true?</p>
<p>$A.$ $\sin(\cos x)=x$ has exactly one root in $[0,\pi/2]$</p>
<p>$B.$ $\cos(\sin x)=x$ has exactly one root in $[0,\pi/2]$</p>
<p>$C.$ Both $A$ and $B$ are true.</p>
<p>$D.$ Both $A$ and $B$ are false.</p>
<p>I tried this: $\cos(\sin x):[0,\pi/2]\rightarrow [0,\pi/2]$ and $|(\cos(\sin x))'|=|\sin(\sin x) \cos x|<1$,
so it has a unique fixed point. But I am confused about fixed points of $\sin(\cos x).$ Please tell me about the uniqueness of fixed points of $\sin(\cos x).$ Thanks a lot.</p>
| Vik78 | 304,290 | <p>By the intermediate value theorem there is at least one root (since $0 < \sin(\cos(0))$ and $1 > \sin(\cos(1))$). Since the derivative of $\sin(\cos(x))$ has absolute value less than one everywhere, the fundamental theorem of calculus shows this root is unique: if $a<b$ were two roots, then $|\sin(\cos b)-\sin(\cos a)| = \left|\int_a^b (\sin(\cos x))'\,dx\right| < b-a$, contradicting $\sin(\cos b)-\sin(\cos a)=b-a$.</p>
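<p>For the skeptical reader, a short numerical experiment (my own sketch; bisection is just one convenient method, not the argument above) locates the unique root of each equation in $[0,\pi/2]$:</p>
<pre><code>from math import sin, cos, pi

def bisect(g, lo, hi, tol=1e-12):
    # assumes g(lo) and g(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(bisect(lambda x: sin(cos(x)) - x, 0, pi / 2))  # ~0.6948
print(bisect(lambda x: cos(sin(x)) - x, 0, pi / 2))  # ~0.7682
</code></pre>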
|
1,793,318 |
<p>Let $x_{k}$ be a sequence of strictly positive real numbers with $\lim \limits_{k \to \infty}\dfrac{x_{k}}{x_{k+ 1}} >1$. Prove that $x_{k}$ is convergent and calculate $\lim \limits_{k \to \infty} x_{k}$.</p>
<p><strong>Attempted answer attached as picture.</strong></p>
<p>I am not sure if I'm actually answering the question properly. Also would I do the same steps to prove that limit is less than 1 and divergent? </p>
<p>Thank you in advance for any help. </p>
| Zhanxiong | 192,408 | <p>You are on the right track. For clarity, let's denote the partial sum $\sum_{n = 1}^N \frac{x}{1 + n^2x^2}$ by $S_N(x)$. One way to show the result is to investigate $\sup_{x \in [0, 1]}|S_{2N}(x) - S_{N}(x)|$:
\begin{align}
\sup_{x \in [0, 1]} |S_{2N}(x) - S_N(x)| & = \sup_{x \in [0, 1]}\sum_{n = N + 1}^{2N} \frac{x}{1 + n^2x^2} \geq \sum_{n = N + 1}^{2N} \frac{1/N}{1 + n^2/N^2} \\
& \geq \frac{1}{N} \times N \times \frac{1}{1 + (2N)^2/N^2} = \frac{1}{5}
\end{align}
which doesn't converge to $0$ as $N \to \infty$. Thus we do not have uniform convergence (if $\{S_N(x)\}$ converged uniformly, then the above quantity would have to converge to $0$ as $N \to \infty$, in view of Cauchy's criterion).</p>
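<p>One can also watch the lower bound $1/5$ numerically (my own sketch, not part of the proof): the supremum of $|S_{2N}(x)-S_N(x)|$ over a grid in $(0,1]$ never drops below $0.2$.</p>
<pre><code>def S(N, x):
    return sum(x / (1 + n * n * x * x) for n in range(1, N + 1))

for N in (10, 100, 1000):
    grid = [k / 1000 for k in range(1, 1001)]          # sample points in (0, 1]
    sup = max(abs(S(2 * N, x) - S(N, x)) for x in grid)
    print(N, sup)   # stays above 0.2 for every N, so no uniform convergence
</code></pre>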
|
1,397,576 | <p>To me there is a hierarchy where vectors $\subset$ sequences $\subset$ functions $\subset$ operators</p>
<ul>
<li><p>All vectors are sequences, but not all sequences are vectors because
sequences are infinite dimensional</p></li>
<li><p>All sequences are functions, but not all functions are sequences
because functions can do more than just map $\mathbb{N} \to A$ where
$A$ is some set</p></li>
<li><p>All functions are operators, but not all operators are functions
because an operator can map functions to functions but function can only map numbers to numbers</p></li>
</ul>
<p>Can someone check if my ideas are reasonable? Does there exist such a hierarchy?</p>
| Race Bannon | 188,877 | <p>A vector is an object of a vector space that obeys the usual vector space properties. </p>
<p>A sequence is a function from the natural numbers to some set.</p>
<p>If $X$ is a set and $Y$ is a set, a function is a subset $f$ of $X \times Y$ in which every $x \in X$ appears in exactly one pair; in particular, whenever $(x,y), (x,y') \in f$, we have $y = y'$.</p>
<p>An operator is a function from a vector space to another vector space.</p>
|
2,535,933 | <p>let assume i have a position function in 1 dimension with constant acceleration.</p>
<p>$$ x(t) = x_0 + v_0t + \frac{1}{2}at^2 $$</p>
<p>then it's first derivative is a velocity function:
$$ \frac{dx}{dt} = v(t) = v_0 + at $$</p>
<p>then it's second derivative is an acceleration function:</p>
<p>$$ \frac{dv}{dt} = a(t) = a $$</p>
<p>So in conclusion, if we have a position function x(t) and take its first derivative, we get a velocity function, and if we take its second derivative, we get an acceleration function. This is what everyone knows.</p>
<p>Now I see some lecture video that says:</p>
<p>$$ \frac{dv}{dt} = \frac{dv}{dx} * \frac{dx}{dt} $$</p>
<p>If this is true, then if I calculate $\frac{dv}{dx}$ and multiply by $\frac{dx}{dt}$, I will also get $a(t)$. But I don't know how to do it, because when we apply the chain rule we need to determine what is the inner function and what is the outer function, but here there is only one function, which is x(t). How do I find $\frac{dv}{dx}$? </p>
<p>Can someone rewrite the position function into an inner part and an outer part?
Or what is the valid way to do the calculation?</p>
<p>I'm very new to calculus and physics; please explain step by step with an easy, simple example.</p>
| angryavian | 43,949 | <p>Suppose for sake of contradiction $P(X \ne E[X]) > 0$.
Then there exists some $\epsilon > 0$ such that $$P(|X - E[X]| > \epsilon) > 0.$$
But then
$$\text{Var}(X) = E[(X - E[X])^2] \ge E[(X - E[X])^2 \mathbf{1}_{|X - E[X]| > \epsilon}] > \epsilon^2 P(|X - E[X]| > \epsilon) > 0,$$
a contradiction.</p>
<hr>
<p>If the above is hard to understand, considering the case of a discrete random variable $X$ can be helpful for intuition. If $P(X \ne E[X]) > 0$, then there is some value $c \ne E[X]$ such that $P(X=c) > 0$. Then
$$\text{Var}(X) = E[(X - E[X])^2] \ge (c - E[X])^2 P(X=c) > 0.$$
This argument does not work for continuous random variables, though.</p>
|
29,177 | <p>I had a quick look around here and google before, and didn't find any answer to this particular question, and it's beginning to really irritate me now, so I'm asking here:</p>
<p>How is one supposed to write l (little L), 1 (one) and | (pipe) so that they don't all look the same? One of my teachers draws them all as vertical lines and I have seen things like |l| written as 3 vertical lines.</p>
<p>I tried writing a cursive l, but they always look like e when I write them, which is a whole new problem.</p>
<p>So is there a "best way" to write all of these on paper, or a "least confusing" way?</p>
<p>Thanks in advance guys ^_^</p>
| Kevin Seifert | 65,059 | <p>Another approach is to write the 1 serifs backwards (mirror image). That way, the top serif can be exaggerated, but doesn't look anything like a 2 or 7. Just make it pointy so it doesn't look like a C.</p>
|
997,634 | <p>Evaluate
$$\int_0^R\int_0^\sqrt{R^2-x^2} e^{-(x^2+y^2)} \,dy\,dx$$ </p>
<p>using polar coordinates.</p>
<p>My answer is $-\frac{1}{2}R(e^{-R^2+x^2}-1)$ but I want to confirm if that's correct</p>
<p>And also, when I change from $dy\,dx$ to $dr \,d\theta$ ...how do I know if it should be $dr\,d\theta$ or $d\theta \,dr$?</p>
| Gahawar | 129,839 | <blockquote>
<p>$$\int\limits_0^R \int\limits_0^{\sqrt{R^2-x^2}} e^{-x^2-y^2} \; dy \; dx = \int\limits_0^{\pi/2} \int\limits_0^{R} e^{-\rho^2} \rho \; d\rho \; d\theta = \frac{\pi}{4} \left(1 - e^{-R^2}\right).$$</p>
</blockquote>
<p>The domain of integration is a quarter circle of radius $R$, so when one converts to polar coordinates one sees that $\rho$ goes from zero to $R$ and $\theta$ ranges from $0$ to $\pi/2$. It is useful to note that the area element $d\mathrm{A} = \rho \; d\rho \; d\theta \neq d\rho \; d\theta$. I suppose that one could change the order of integration in polar coordinates, however I do not see why one would do so.</p>
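<p>If in doubt, the closed form is easy to confirm numerically (a quick sketch of mine; using scipy's <code>dblquad</code> is an assumption about available tooling):</p>
<pre><code>import numpy as np
from scipy.integrate import dblquad

R = 2.0
# integrate e^{-(x^2+y^2)} over the quarter disc, inner variable y
val, err = dblquad(lambda y, x: np.exp(-(x * x + y * y)),
                   0, R,
                   lambda x: 0.0,
                   lambda x: np.sqrt(R * R - x * x))
print(val, np.pi / 4 * (1 - np.exp(-R * R)))   # both ~0.77101
</code></pre>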
|
44,226 | <p>Let $D$ denote the unit complex 1-dimensional disc, together with the hyperbolic metric $h_D=\frac{4dz\wedge d\bar{z}}{(1-|z|^2)^2}$of curvature $-1$. By Nash's embedding theorem, we can always embed the disc $D$ real-analytically and isometrically into real Euclidean space ${\mathbb{R}}^n$ for some large $n$. (I think $n=5=2\dim_{\mathbb{R}}D+1$ is sufficient.)</p>
<p>I'm wondering whether we have a complex-analytic embeddeding of the disc $D$ into complex Euclidean space. Consider complex Euclidean space ${\mathbb{C}}^n$ with the standard Euclidean metric $h_{\mathbb{C}^n}=dz_1\wedge d\bar{z_1}+\cdots+dz_n\wedge d\bar{z_n}$. Does there exist a holomorphic map $f:D\to {\mathbb{C}}^n$ for some large $n$ that is at the same time an isometry, i.e. $f^*h_{\mathbb{C}^n}=h_D$?</p>
<p>If we write the map $f$ in terms of coordinates $f=(f_1,\ldots, f_n)$, this question has an equivalent formulation solely in terms of holomorphic functions. This question is asking whether there are holomorphic functions $f_i:D\to {\mathbb{C}}$ on the disc, such that $\frac{4}{(1-|z|^2)^2}=f_1(z)\overline{f_1(z)}+\cdots + f_n(z)\overline{f_n(z)}$ for all $z\in D$.</p>
| Robert Bryant | 13,972 | <p>The answer is 'no', there is no holomorphic curve in $\mathbb{C}^n$ (for any $n$) such that the induced metric has constant negative curvature. To my knowledge, this was first proved by E. Calabi many many years ago, essentially using the structure equations for holomorphic curves in $\mathbb{C}^n$. The proof is easy, but it involves knowing something about the structure equations, and I don't want to try to explain that here. A reasonable source for this is Blaine Lawson's Lectures on Minimal Submanifolds, Chapter IV (see Theorem 11).</p>
<p>There is a more general fact, namely that the only minimal surface (let alone holomorphic curve) in $\mathbb{C}^n$ that has constant Gaussian curvature is a flat plane. This is due to M. Pinl, Minimalfl\"achen fester Gau{\ss}chen Kr\"ummung, Math. Ann. 136 (1958), 34-40.</p>
|
708,596 | <p>Suppose that $U$ and $V$ are vector spaces, and that $f:V \to W$ is a linear map. Suppose also that $u$ and $v$ are vectors in $V$ such that $f(u)=f(v)$. Show that there is a vector $w \in \ker f $ such that $v=u+w$.</p>
<p>I roughly understand what is kernel and its definition but I have no idea how to apply it to this question particularly the $f(u)=f(v)$ and $v=u+w$ part. Is it something to do with zero vectors? I don't understand how to show this.</p>
| mookid | 131,738 | <p>We necessarily have
$w = v-u$; let us check that it is in $\ker f$.</p>
<p>Using linearity:
$f(w) = f(v-u) = f(v) - f(u) = 0.$
Then $v = u + w$, and that's it.</p>
|
4,261,763 | <p>I was working with this problem:</p>
<p><a href="https://i.stack.imgur.com/AgVpu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AgVpu.png" alt="enter image description here" /></a></p>
<p><span class="math-container">$f(x) = 1, x \in(-\infty,0)\cup(0,\infty)\\
f(x) = -1, x = 0 $</span></p>
<p>The absolute value would yield us the function: <span class="math-container">$|f(x)| = 1, \forall x\in(-\infty,\infty)$</span></p>
<p>Hence, it'd become continous everywhere.</p>
<p>Thank you for any feedback. And as a side question, is there a way to find more such functions, in an easy way?</p>
| L F | 221,357 | <p>What about
<span class="math-container">$$f(x)=\begin{cases}x & \text{if }x\in\mathbb{Q} \\ -x & \text{if }x\in\mathbb{R}-\mathbb{Q} \end{cases}$$</span>
Just separate the points of the domain into rationals and irrationals and give them opposite additive functions; this way you can build a lot of functions with the properties you ask for.</p>
|
976,617 | <p>Find supremum and infimum of the set:
$$B=\left\{ \frac{x}{1+ \lvert x \rvert } : x\in \mathbb{R}\right\}$$
To me it is clear that they will be $1$ and $-1$ respectively, but how does one prove it properly?</p>
| pitchounet | 61,409 | <p>Let :</p>
<p>$$ f(x) = \frac{x}{1+|x|}, \qquad B = \left\{ f(x), \; x \in \mathbb{R} \right\}. $$</p>
<p>First, note that $B$ is bounded and not empty, which ensures that $\inf(B)$ and $\sup(B)$ exist. In order to prove that $\sup(B) = 1$, you need to prove that : $\forall y \in B, \, y \leq 1$ and either :</p>
<ul>
<li>$1 \in B$ and in this case, $\sup(B) = \max(B) = 1$.</li>
<li>OR : find a sequence $(u_{n})_{n \in \mathbb{N}}$ of elements of $B$ which converges to $1$. </li>
</ul>
<p>Here, $1 \notin B$. The sequence $\displaystyle \Big( \frac{n}{1+n} \Big)_{n \in \mathbb{N}} = \big( f(n) \big)_{n \in \mathbb{N}}$ converges to $1$, which proves that $\sup(B) = 1$. </p>
<p>You can do the same to prove that $\inf(B) = -1$.</p>
|
3,496,594 | <p>I have to factorize the polynomial <span class="math-container">$P(X)=X^5-1$</span> into irreducible factors in <span class="math-container">$\mathbb{C}$</span> and in <span class="math-container">$\mathbb{R}$</span>, this factorisation happens with the <span class="math-container">$5$</span>th roots of the unity. </p>
<p>In <span class="math-container">$\mathbb{C}[X]$</span> we have <span class="math-container">$P(X)=\prod_{k=0}^4 (X-e^\tfrac{2ki\pi}{5})$</span>.</p>
<p>In <span class="math-container">$\mathbb{R}[X]$</span> the solution states that by gathering all complex conjugate roots we find that <span class="math-container">$P(X)=(X-1)(X^2-2\cos(\frac{2\pi}{5})X+1)(X^2-2\cos(\frac{4\pi}{5})X+1)$</span>, but I can't figure out how. Another problem I ran into was trying to figure out where the <span class="math-container">$2\cos(\frac{2\pi}{5})$</span> and <span class="math-container">$2\cos(\frac{4\pi}{5})$</span> come from so I tried these two methods:
The sum of the roots of unity is zero so we have:
<span class="math-container">$1+e^\tfrac{2i\pi}{5}+e^\tfrac{4i\pi}{5}+e^\tfrac{6i\pi}{5}+e^\tfrac{8i\pi}{5}=0$</span></p>
<p>In the <span class="math-container">$5$</span>th roots of unity circle P3 and P4 are images according to the x-axis the same goes to P2 and P5, therefore <span class="math-container">$e^\tfrac{6i\pi}{5}=e^\tfrac{-4i\pi}{5}$</span> and <span class="math-container">$e^\tfrac{8i\pi}{5}=e^\tfrac{-2i\pi}{5}$</span> afterwards by using Euler's formula we find <span class="math-container">$1+2\cos(\frac{2\pi}{5})+2\cos(\frac{4\pi}{5})=0$</span>.</p>
<p>Another method is that <span class="math-container">$\cos(6\pi/5) = \cos(-6\pi/5) = \cos(-6\pi/5 + 2\pi) = \cos(4\pi/5)$</span> <span class="math-container">$\cos(8\pi/5) = \cos(-8\pi/5) = \cos(-8\pi/5 + 2\pi) = \cos(2\pi/5)$</span> </p>
<p>therefore <span class="math-container">$1 + \cos(2\pi/5) + \cos(4\pi/5) + \cos(4\pi/5) + \cos(2\pi/5) = 0$</span> and we find <span class="math-container">$1+2\cos(\frac{2\pi}{5})+2\cos(\frac{4\pi}{5})=0$</span></p>
<p>I don't know if both of these methods are correct on their own and I don't know if they will help in the factorisation, since I don't know how to go from there and find <span class="math-container">$P(X)=(X-1)(X^2-2\cos(\frac{2\pi}{5})X+1)(X^2-2\cos(\frac{4\pi}{5})X+1)$</span>.</p>
| Bernard | 202,857 | <p>A way to obtain an explicit factorisation using <em>Chebyshev polynomials</em>:</p>
<p>Using the recurrence relation:
<span class="math-container">$$P_{n+1}(t)=2tP_n(t)-P_{n-1}(t),\qquad P_0(t)=1,\;P_1(t)=t,$$</span>
we readily obtain
<span class="math-container">$$P_5(t)=2t\, P_4(t)-P_3(t)=16t^5-20t^3+5t,$$</span>
so that, for <span class="math-container">$t=\cos\frac\pi{10}$</span>, we obtain the equation
<span class="math-container">$$\cos5\Bigl(\frac{\pi}{10}\Bigr)=0=16t^5-20t^3+5t,$$</span>
and as <span class="math-container">$\cos\frac{\pi}{10}>0$</span>, it is a root of the biquadratic equation <span class="math-container">$\;16t^4-20t^2+5=0$</span>. So <span class="math-container">$\cos^2\frac\pi{10}$</span> is a root of the quadratic equation
<span class="math-container">$$ f(u)=16u^2 -20 u+5=0. $$</span>
To determine <em>which</em> root it is, we need to find a number which separates the roots <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span>. </p>
<p>Now <span class="math-container">$\;\cos^2\dfrac\pi 6=\dfrac34 <\cos^2\dfrac\pi{10}$</span>, and it happens that <span class="math-container">$f(3/4)=-1<0$</span>, so <span class="math-container">$u_1<\dfrac 34<u_2$</span>, and finally the standard formulæ for the solutions of a quadratic equation yield
<span class="math-container">$$\cos^2\frac\pi{10}=u_2=\frac{5+\sqrt 5}8\quad\text{whence }\;\cos\frac\pi5=2\cos^2\frac\pi{10}-1=\frac{1+\sqrt 5}4$$</span>
Last step, with the same duplication formula
<span class="math-container">\begin{align}
\cos\frac{2\pi}5&=2\biggl(\frac{1+\sqrt 5}4\biggr)^2-1=\frac{-1+\sqrt 5}4 \\
\cos\frac{4\pi}5&=2\biggl(\frac{-1+\sqrt 5}4\biggr)^2-1=\frac{-1-\sqrt 5}4,
\end{align}</span>
we obtain the factorisation:
<span class="math-container">$$X^5-1=(X-1)\Bigl(X^2+\frac{1-\sqrt 5}2 X+1\Bigr)\Bigl(X^2+\frac{1+\sqrt 5}2 X+1\Bigr).$$</span></p>
|
1,707,675 | <p>How can I find the indefinite integral which is $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
<p>I tried to use substitution by assigning $$\ln(1-x)\text{d}x = \text{d}v $$ and $$\frac{1}{x}=u$$ but, it is meaningless but true, the only thing I came up from integration by part is that $$\int \frac{\ln(1-x)}{x^2}\text{d}x = foo $$ and that has no help for me to find the integration $$\int \frac{\ln(1-x)}{x}\text{d}x$$</p>
| Jack D'Aurizio | 44,121 | <p>The primitive is not an elementary function. Since in a neighbourhood of the origin we have:
$$ -\log(1-x) = \sum_{n\geq 1}\frac{x^n}{n} $$
it happens that:
$$ \int_{0}^{x}\frac{\log(1-z)}{z}\,dz = -\sum_{n\geq 1}\frac{x^n}{n^2} = \color{red}{-\text{Li}_2(x)}$$
for any $x\in(-1,1)$.</p>
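<p>The identity is easy to test numerically with mpmath (a sketch of my own; <code>polylog(2, x)</code> is mpmath's dilogarithm):</p>
<pre><code>from mpmath import mp, quad, log, polylog

mp.dps = 30
x = mp.mpf('0.7')
lhs = quad(lambda z: log(1 - z) / z, [0, x])   # the integral above
rhs = -polylog(2, x)                           # -Li_2(x)
print(lhs, rhs)                                # both ~ -0.8894
</code></pre>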
|
654,239 | <p>Let $\phi_{\alpha}(z)=\frac{z-\alpha}{1-\bar{\alpha}z}$ for $0<|\alpha|<1$</p>
<p>Find all the line $L$ in the complex plane such that $\phi_{\alpha} (L)=L$</p>
<p>Can you help me?</p>
| user149088 | 149,088 | <p>$\newcommand{\CC}{\mathcal{C}}$
In this case $f$ is a 2-cycle (i.e. $f\circ f=Id$; we take $f=-\Phi_\alpha$). Let $a$ be a fixed point of $f$ ($a^2=\frac \alpha{\bar \alpha}$). Let $\CC$ be the circle of center $\frac1{\bar \alpha}$ and passing through $a$, $L_1$ the straight line $(a,\alpha)$, $L_2$ the straight line orthogonal to $L_1$ and passing through $\frac1{\bar \alpha}$, and $b$ the point symmetric to $a$ with respect to $L_1$.</p>
<p>Then the only invariant lines by $f$ are $L_1$, $L_2$, and the only circles invariant by $f$ are:</p>
<p>a) The circles which passes through $a$ and $b$.</p>
<p>b) The circles which are symmetric with respect to $L_1$ and orthogonal to $\CC$.</p>
<p>In general, we have the following Theorem (cf: <a href="http://dx.doi.org/10.1016/j.ajmsc.2012.03.005" rel="nofollow">http://dx.doi.org/10.1016/j.ajmsc.2012.03.005</a>).</p>
<blockquote>
<p>Let $f$ be a $2-$cycle such that $\infty$ is not a fixed point. Let
$\alpha\in{\Bbb C}$ such that $f(\alpha)=\infty$. Let $a$ be a fixed
point of $f$, $\CC$ the circle of center $\alpha$ and passing through
$a$, $L_1$ the straight line $(a,\alpha)$, $L_2$ the straight line
orthogonal to $L_1$ and passing through $\alpha$, and $b$ is symmetric
to $a$ with respect to $L_1$.</p>
<p>Then the only invariant lines by $f$ are $L_1$, $L_2$, and the only
circles invariant by $f$ are:</p>
<ul>
<li>The circles which passes through $a$ and $b$.</li>
<li>The circles which are symmetric with respect to $L_1$ and
orthogonal to $\CC$.</li>
</ul>
</blockquote>
|
4,253,564 | <p>I am having trouble with the following integral</p>
<blockquote>
<p>Prove that <span class="math-container">$$ \int_0^1\frac{x\ln(x)}{1+x^2+x^4}dx=\frac{1}{36}\Big(\psi^{(1)}(2/3)-\psi^{(1)}(1/3)\Big)$$</span></p>
</blockquote>
<p><span class="math-container">$$I=\int_0^1\frac{x\ln(x)}{1+x^2+x^4}dx=\int_0^1\frac{\ln(u^2)}{2(1+u+u^2)}du=\int_0^1\frac{\ln(u)}{(1+u+u^2)}du$$</span>
let <span class="math-container">$x^2=u\rightarrow \frac{du}{dx}=2x$</span></p>
<p>How does one proceed from here? Is my approach correct? Thank you for your time</p>
| Quanto | 686,284 | <p>Alternatively</p>
<p><span class="math-container">\begin{align} \int_0^1\frac{x\ln x}{1+x^2+x^4}dx
= &\int_0^1\frac{x\ln x}{(e^{i\frac\pi3}+x^2)(e^{-i\frac\pi3}+x^2)}dx \\
=& \>\frac1{\sin\frac\pi3 }\>\Im \int_0^1 \frac{e^{i\frac\pi3}x\ln x}{1+ e^{i\frac\pi3}x^2 }dx
\overset{x^2\to x} =\frac{1}{2\sqrt3}\Im \text{Li}_2(- e^{i\frac\pi3})
\end{align}</span></p>
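<p>A numerical cross-check of the claimed closed form (my own sketch, using mpmath's polygamma function <code>psi(1, ·)</code> for $\psi^{(1)}$):</p>
<pre><code>from mpmath import mp, quad, log, psi

mp.dps = 30
integral = quad(lambda x: x * log(x) / (1 + x**2 + x**4), [0, 1])
closed = (psi(1, mp.mpf(2) / 3) - psi(1, mp.mpf(1) / 3)) / 36
print(integral, closed)   # both ~ -0.1953
</code></pre>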
|
401,898 | <p>The functions $f_n(x)=x^n-x^{2n}$ converge to $f(x)=0$ on $(-1,1]$. Intuitively the sequence does not converge uniformly on $(-1,1]$. How can I prove it?
I tried using the definition $\lim \limits_{n\to\infty}\sup \limits_{ x\in (-1,1]}|f_n(x)-f(x)|$. Each $f_n$ is continuous and differentiable on $[-1,1]$, and $x=0,(\frac 1 2 )^{\frac 1 n}$ are the roots of the derivative. I found that the second derivative is negative at the second point, so $\sup=1/4$ and the sequence does not converge uniformly?</p>
| Community | -1 | <p>$(f_n)$ doesn't converge uniformly on $(-1,1]$ since
$$\lim_{x\to-1^+}\lim_{n\to\infty}f_n(x)=0\neq\lim_{n\to\infty}\lim_{x\to-1^+}f_n(x)=\lim_{n\to\infty}\left((-1)^n-1\right),$$
and the right-hand limit does not even exist; uniform convergence of continuous functions would allow the two limits to be interchanged.</p>
|
2,163,948 | <p><strong>Question:</strong></p>
<blockquote>
<p>Does there exist a Riemannian manifold, with a point $p \in M$, and <strong>infinitely many</strong> points $q \in M$ such that there is <strong>more than one</strong> minimizing geodesic from $p$ to $q$?</p>
</blockquote>
<p><strong>Edit:</strong></p>
<p>As demonstrated in Jack Lee's answer, one can construct many exmaples in the following way:</p>
<p>Take $X$ to be a manifold which has a pair of points $p,q$, with more than one minimizing geodesic connecting them. Take $Y$ to be any geodesically convex (Riemannian) manifold. Then $X \times Y$ satisfies the requirement:</p>
<p>Indeed, let $\alpha,\beta$ be two different geodesics in $X$ from $p$ to $q$.</p>
<p>Fix $y_0 \in Y$, and let $y \in Y$ be arbitrary. Let $\gamma_y$ be a minimizing geodesic in $Y$ from $y_0$ to $y$. Then $\alpha \times \gamma_Y,\beta \times \gamma_Y$ are minimizing from $(p,y_0)$ to $(q,y)$.</p>
<p>Hence, if $Y$ is positive-dimensional (hence infinite), we are done.</p>
<p><em>"Open" question: Are there examples which are not products? (This is probably hard, I am not even sure what obstructions exist for a manifold to be a topological product of manifolds)</em></p>
<hr>
<p>Note that for <strong>any</strong> $p$, the set $$\{q \in M \,| \, \text{there is more than one minimizing geodesic from $p$ to $q$} \}$$</p>
<p>is of measure zero. </p>
<p>Indeed, let $M$ be a connected Riemannian manifold, and let $p \in M$.</p>
<p>The distance function from $p$, $d_p$ is $1$-Lipschitz, hence (by Rademacher's theorem) differentiable almost everywhere.</p>
<p>It is easy to see that if there are (at least) two different length minimizing geodesics from $p$ to $q$, then $d_p$ is not differentiable at $q$. (We have two "natural candidates" for the gradients).</p>
| HK Lee | 37,116 | <p>I will introduce two examples.</p>
<p>(1) Consider a torus in $\mathbb{R}^3$.</p>
<p>(2) Consider a two-dimensional regular triangle $T$ in
$\mathbb{R}^2\subset X=\mathbb{R}^3$. If $U$ is a <em>suitable</em> tubular
neighborhood of $T$ in $X$, then consider $\partial U$, which is
homeomorphic to $S^2$.</p>
<p>There are three points $p_i$ in $\partial U$ at which the Gaussian curvature attains a local maximum. Then the cut locus of $p_1$, ${\rm Cut}\ (p_1)$, is a curve $c:[0,1]
\rightarrow \partial U$ between
$p_2$ and $p_3$, and the interior points $c(t),\ 0<t<1$, have <em>multiplicity</em> $2$, i.e. there are exactly two minimizing geodesics from $p_1$ to $c(t)$.</p>
<p>(3) (As far as I know) Generally, in a Riemannian manifold $M$, if
${\rm Cut}\ (p)$ is not a point set, then the points in ${\rm Cut}\ (p)$
of multiplicity $\geq 2$ are dense.</p>
<p>(4) Another high-dimensional example is $\mathbb{C}P^2$.</p>
|
3,163,580 | <p>I'm having troubles to show that if <span class="math-container">$0<|\alpha|<1$</span> then the elements <span class="math-container">$f_k=\lbrace 1, \alpha^k, \alpha^{2k}, \alpha^{3k}, \cdots \rbrace$</span> span <span class="math-container">$\ell^2$</span> for <span class="math-container">$k \geq 1$</span>. I know I should to use the Vandermonde matrix and its properties, however I don't really know how to proceed.</p>
<p>Can you provide me with some hints?</p>
| Fnacool | 318,321 | <p>Recall that the closed linear span of a set <span class="math-container">$S\subseteq \ell^2$</span> is all of <span class="math-container">$\ell^2$</span> if and only if <span class="math-container">$({\bf c},s)=0$</span> for all <span class="math-container">$s\in S$</span> implies <span class="math-container">${\bf c}=0$</span>.</p>
<p>Fix <span class="math-container">$\alpha$</span> such that <span class="math-container">$0<|\alpha|<1$</span> and let
<span class="math-container">$$S=((1,\alpha^k,\alpha^{2k},\dots):k=1,2\dots)\subset \ell^2.$$</span> </p>
<p>Let <span class="math-container">${\bf c}\in \ell^2$</span> be such that <span class="math-container">$({\bf c},s)=0$</span> for all <span class="math-container">$s\in S$</span> and define </p>
<p><span class="math-container">$$f(x) = ({\bf c},(1,x,x^2,\dots)) = \sum_{j=0}^\infty c_j x^j.$$</span> </p>
<p>Then <span class="math-container">$f$</span> is analytic on <span class="math-container">$(-1,1)$</span> (and by extension on the open unit ball in the complex plane). Since by assumption <span class="math-container">$f(\alpha^k)=0$</span> for all <span class="math-container">$k=1,2,...$</span>, and <span class="math-container">$\alpha^k \to 0$</span> as <span class="math-container">$k\to\infty$</span>, it follows from the uniqueness theorem for analytic functions that <span class="math-container">$f$</span> is identically zero. Therefore, <span class="math-container">$c_k = \frac{f^{(k)}(0)}{k!}=0$</span>, proving that <span class="math-container">${\bf c}=0$</span>. Thus <span class="math-container">$S$</span> is dense. </p>
|
1,718,380 | <p>Simply: How do I solve this equation for a given $n \in \mathbb Z$?</p>
<p>$x^x = n$</p>
<p>I mean, of course $2^2=4$ and $3^3=27$ and so on. But I don't understand how to calculate the reverse of this, to get from a given $n$ to $x$. </p>
| marty cohen | 13,079 | <p>Simpler:</p>
<p>If $x^x = n$, then $x\ln(x) = \ln(n) = y$.</p>
<p>Let $f(x) = x\ln(x)-y$, so that $f'(x) = \ln(x)+1$.</p>
<p>Applying Newton's iteration, starting with $x = \frac{y}{\ln y}$,
$$x_{new} = x-\frac{f(x)}{f'(x)} = x-\frac{x\ln(x)-y}{\ln(x)+1} = \frac{x\ln(x)+x-x\ln(x)+y}{\ln(x)+1} = \frac{x+y}{\ln(x)+1}.$$</p>
<p>Iterate until cooked.</p>
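<p>For concreteness, here is the iteration as a short Python routine (a minimal sketch; the starting-guess handling and the stopping rule are my own implementation choices, not part of the derivation above):</p>
<pre><code>from math import log

def solve_xx(n, tol=1e-14):
    """Solve x**x = n for n >= 1 via Newton's method on x*log(x) = log(n)."""
    y = log(n)
    x = y / log(y) if y > 1 else 1.5     # crude starting guess
    while True:
        x_new = (x + y) / (log(x) + 1)   # the Newton step derived above
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

print(solve_xx(27))    # 3.0
print(solve_xx(100))   # ~3.59728502354
</code></pre>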
|
2,604,206 | <p>Can anyone provide links to a concrete proof? Intuitively, the two-dimensional real space is infinite. so there should be infinitely many subspaces. But how do I go about a proof?</p>
| Alekos Robotis | 252,284 | <p>Let $V$ be a $2-$dimensional vector space over $\mathbf{R}$. Fix a basis $v_1,v_2$ for $V$ so that $V$ can be identified with $\mathbf{R}^2$. Then we can see that the subspaces
$$X_\theta=\{\lambda(\cos\theta, \sin\theta):\lambda \in \mathbf{R}\}$$
are distinct $1-$dimensional vector spaces for each $0\le \theta< \pi$. Visually, each of the vectors $(\cos\theta,\sin\theta)$ describes a unit vector in the unit circle of $\mathbf{R}^2$. The subspaces $X_\theta$ are then the lines through the origin making a counter-clockwise angle of $\theta$ with the positive $x-$axis. In particular, there are uncountably many such subspaces.</p>
|
1,811,109 | <p>How can we cause this relation to be true?</p>
<blockquote>
<p>$$x \sin\theta + y \cos\theta = \sqrt{ x^2 + y^2 } \tag{$\star$}$$</p>
</blockquote>
<p>I know the identity</p>
<p>$$x \sin\theta + y \cos\theta = \sqrt{x^2+y^2}\; \sin\left(\theta + \operatorname{atan}\frac{y}{x}\right)$$
What can make the sine part "$1$" (or just approximately "$1$") so that $(\star)$ holds?</p>
| Noam | 344,143 | <p>This identity is not correct. Take for example $x=2,\,y=3,\,\theta=0^\circ.$
Then $\sqrt{x^2+y^2} = \sqrt{4+9}=3.605\neq 3=x\sin \theta + y \cos \theta.$</p>
|
1,811,109 | <p>How can we cause this relation to be true?</p>
<blockquote>
<p>$$x \sin\theta + y \cos\theta = \sqrt{ x^2 + y^2 } \tag{$\star$}$$</p>
</blockquote>
<p>I know the identity</p>
<p>$$x \sin\theta + y \cos\theta = \sqrt{x^2+y^2}\; \sin\left(\theta + \operatorname{atan}\frac{y}{x}\right)$$
What can make the sine part "$1$" (or just approximately "$1$") so that $(\star)$ holds?</p>
| Tom-Tom | 116,182 | <p>In this answer, I will suppose that either $\theta$ is fixed and solve the equation for $(x,y)$ or that $x$ and/or $y$ are fixed and solve for $\theta$. But first,
let us write $r=\sqrt{x^2+y^2}$ and consider the angle $\alpha$ such that
$x=r\sin\alpha$ and $y=r\cos\alpha$. This is always possible. (Use for instance $\alpha=2\operatorname{arctan}\frac{x}{y+r}$.) The equation becomes
$$r\sin\theta\sin\alpha+r\cos\theta\cos\alpha=r\cos(\theta-\alpha)=r.\tag 1$$</p>
<ul>
<li>Case 1 : $\theta$ is given. </li>
</ul>
<p>This equation (1) is verified as soon as $\alpha=\theta+k\,2\pi$ with $k\in\mathbb Z$. The choice of $r$ is arbitrary. The solutions's ensemble is therefore
$$\left\{(r\sin\theta,r\cos\theta)\mid\,r\in\mathbb R\right\}.$$</p>
<ul>
<li><p>Case 2 : $x$ and $y$ are given.
Then $r$ and $\alpha$ are also given and the solutions of equation (1) are the angles $\theta$ such that $\cos(\theta-\alpha)=1$. The ensemble of solutions is
$$\alpha+2\pi\mathbb Z=\{\alpha+k2\pi\,\mid\,k\in\mathbb Z\}.$$</p></li>
<li><p>Case 3 : $x$ is given.
In this case, we have much more latitude. If $x=0$ the ensemble of solutions $(y,\theta)$ is
$$\{(y,\theta) \mid \cos\theta=\operatorname{sgn}(y) \}=\mathbb{R}^+\times2\pi\mathbb Z\cup \mathbb{R}^-\times\left(\pi+2\pi\mathbb Z\right).$$
If $x\neq 0$ the ensemble of solutions is
$$\left\{\left(y,2\operatorname{arctan}\frac{x}{y+\sqrt{x^2+y^2}}+k2\pi\right)\,\Bigg|\, y\in\mathbb R,\,k\in\mathbb Z\right\}.$$
The case where $y$ is given is obtained by swapping $x$ and $y$ in this case.</p></li>
</ul>
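<p>A quick numeric check of Case 2 (my own sketch; note it assumes $y+r\neq0$): with $\theta=2\operatorname{arctan}\frac{x}{y+r}$ the equation holds to machine precision.</p>
<pre><code>from math import sin, cos, atan, hypot

x, y = 3.0, -2.0
r = hypot(x, y)                      # sqrt(x^2 + y^2)
theta = 2 * atan(x / (y + r))        # the solution alpha from Case 2
print(x * sin(theta) + y * cos(theta), r)   # both ~3.6055512754639896
</code></pre>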
|
2,058,939 | <p>Let $f:\mathbb{R^3}$ $\rightarrow$ $\mathbb{R}$, defined as: </p>
<p>$$f(x,y,z)=\begin{cases} \left(x^2+y^2+z^2\right)^p \exp\left(\frac{1}{\sqrt{x^2+y^2+z^2}}\right)& ,\,\text{if }\quad(x,y,z) \ne (0,0,0)\quad \\
0 &,\,\text{o.w}
\end{cases}$$</p>
<p>Here $\,p\in \mathbb{R}$. Is this function continuous?</p>
| Community | -1 | <p>$f$ is continuous in $\mathbb{R}^3\setminus\{(0,0,0)\}$, but not at the point $(0,0,0)$, since the limit of $f(x,y,z)$ as $(x,y,z)\to(0,0,0)$ does not exist (it is infinite). To see this, recall that
$$
e^{t}=1+t+\frac{t^2}{2!}+\frac{t^3}{3!}+\frac{t^4}{4!}+\cdots,
$$</p>
<p>so for $t>0$ we have
$$
e^{t}\geq\frac{t^{2p+2}}{(2p+2)!}.
$$</p>
<p>Plugging $t=\sqrt{\frac{1}{x^2+y^2+z^2}}$ and translating in terms of $f$ this estimate gives
$$
f(x,y,z)\geq\frac{(x^2+y^2+z^2)^p}{(x^2+y^2+z^2)^{p+1}(2p+2)!}=\frac{1}{(x^2+y^2+z^2)(2p+2)!}\overset{(x,y,z)\to0}{\to}\infty.
$$</p>
|
3,418,810 | <p>So my question is what it means to be <span class="math-container">$0$</span> in <span class="math-container">$S^{-1} M$</span>, where <span class="math-container">$S$</span> is a multiplicatively closed subset of a ring <span class="math-container">$A$</span>, and <span class="math-container">$M$</span>, let's assume, is a finitely generated <span class="math-container">$A$</span>-module.</p>
<p>I was reading Atiyah–Macdonald's book on commutative algebra. From what I gather, <span class="math-container">$S^{-1} M$</span> is a set of fractions of the form <span class="math-container">$\frac{m}{s}$</span>. So I was wondering what the <span class="math-container">$0$</span> fraction, <span class="math-container">$``\frac{0}{s}"$</span>, looks like. I tried going back to the definition of his construction, but can't really get a good idea.</p>
<p>Any help or insight is deeply appreciated.</p>
| Angina Seng | 436,618 | <p>In <span class="math-container">$S^{-1}M$</span> an element <span class="math-container">$m/s$</span> (with <span class="math-container">$s\in S$</span> and <span class="math-container">$m\in M$</span>) is zero iff
<span class="math-container">$tm=0$</span> for some <span class="math-container">$t\in S$</span>, that is iff <span class="math-container">$S\cap\text{Ann}(m)$</span> is non-empty.</p>
|
194,191 | <p>Test the convergence of $\int_{0}^{1}\frac{\sin(1/x)}{\sqrt{x}}dx$</p>
<p><strong>What I did</strong></p>
<ol>
<li>Expanded sin (1/x) as per Maclaurin Series</li>
<li>Divided by $\sqrt{x}$</li>
<li>Integrate</li>
<li>Putting the limits of 1 and h, where h tends to zero</li>
</ol>
<p>So after step 3, I get something like this:</p>
<p>$S= \frac{-2}{\sqrt{x}}+\frac{2}{5\cdot 3! x^{5/2}}- \frac{2}{9 \cdot 5!x^{9/2}}+\frac{2}{13\cdot 7!x^{13/2}}-...$
Putting Limits:
$I=S(1)-S(0)$
But I am stuck at calculating $S(0)$</p>
| Sasha | 11,069 | <p>Change variables $u = \frac{1}{x}$. Then:
$$
\int_0^1 \frac{\sin(1/x)}{\sqrt{x}} \mathrm{d}x= \int_1^\infty \sqrt{u} \sin(u) \frac{\mathrm{d}u}{u^2} =\int_1^\infty \frac{\sin(u)}{u^{3/2}}\mathrm{d}u
$$
The latter integral is absolutely convergent.</p>
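<p>mpmath's <code>quadosc</code> handles the oscillatory tail directly and confirms convergence numerically (my own check):</p>
<pre><code>from mpmath import mp, quadosc, sin, pi, inf, mpf

mp.dps = 20
val = quadosc(lambda u: sin(u) / u**mpf('1.5'), [1, inf], period=2 * pi)
print(val)   # ~0.571, a finite value, as expected
</code></pre>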
|
987,054 | <p>Prove that the sequence
$$b_n=\left(1+\frac{1}{n}\right)^{n+1}$$
Is decreasing.</p>
<p>I have calculated $b_n/b_{n-1}$ and obtain:
$$\left(1-\frac{1}{n^2}\right)^n \left(1+\frac{1}{n}\right)$$
But I can't go on.
<p>Any suggestions please?</p>
| QmmmmLiu | 186,644 | <p>Why don't you change your goal to prove that $b_n/b_{n+1}>1$?</p>
<p>$$\frac{b_n}{b_{n+1}}=\frac{(1+1/n)^{n+1}}{(1+1/(n+1))^{n+2}}=\frac{(n+1)^{2n+3}}{n^{n+1}(n+2)^{n+2}}=\left(\frac{(n+1)^2}{n(n+2)}\right)^{n+2}\cdot\frac{n}{n+1}=\left(1+\frac{1}{n(n+2)}\right)^{n+2}\cdot\frac{n}{n+1}$$</p>
<p>By Bernoulli's inequality (strict here, since the exponent $n+2\ge 3$), we have $\left(1+\frac{1}{n(n+2)}\right)^{n+2}>1+\frac{n+2}{n(n+2)}=1+\frac1n=\frac{n+1}{n}$.</p>
<p>Then you will have your $b_n/b_{n+1}>1$, i.e., $b_n> b_{n+1}$, meaning that $b_n$ is decreasing.</p>
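<p>A quick numerical sanity check of the monotonicity (my addition):</p>
<pre><code>b = [(1 + 1 / n) ** (n + 1) for n in range(1, 20)]
print(all(s > t for s, t in zip(b, b[1:])))   # True: strictly decreasing
print(b[0], b[9], b[18])                      # 4.0, ~2.8531, ~2.7893
</code></pre>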
|
1,425,519 | <p>I'm trying to solve <a href="http://poj.org/problem?id=2140" rel="nofollow">this problem</a> on POJ and I thought that I had it. Since I can't figure out what's wrong with my code, I'd like to test it against a huge list of correct answers. This will make my code much easier to debug.</p>
<p>If you don't want to go to the page linked above, or figure out what exactly the problem is asking, here's a concise version:</p>
<blockquote>
<p>Given a positive integer $N$, determine $x$, the number of sets of consecutive <strong>positive</strong> integers that sum to $N$.</p>
</blockquote>
<p>For example, suppose $N = 15$. Then $x = 4$, since, by brute force, the only possible solutions are </p>
<p>$\{15\}$, $\{7, 8\}$, $\{4, 5, 6\}$, and $\{1, 2, 3, 4, 5\}$</p>
<hr>
<p>My algorithm runs in $O(n)$ time, although it uses the $sqrt(x)$ function, which is quite expensive. Here's some pseudocode:</p>
<pre><code>import math

n = int(input())
if n == 1 or n == 2:
    print(1)
else:
    count = 1                                  # the trivial set {n}
    i = n // 2 + 1
    while i * (i + 1) // 2 >= n:
        j = int(math.sqrt(i * (i + 1) - 2 * n))
        if (i * (i + 1) - j * (j + 1) == 2 * n
                or i * (i + 1) - (j + 1) * (j + 2) == 2 * n):
            count += 1
        i -= 1
    print(count)
</code></pre>
<p>Here's some English explaining the loop: </p>
<p>Start with the greatest number, $i$, of a possible set. Disregarding the trivial set $\{N\}$, a set with at least two elements ending at $i$ sums to at least $i + (i - 1) = 2i - 1$, so we need $2i - 1 \le N$, i.e. $i \le (N+1)/2 \le N/2 + 1$. So start with $i = \lfloor N/2 \rfloor + 1$ as the upper bound of $i$, as in the code. To test each possible set, loop $i$ from this upper bound down until $i (i + 1) / 2 < N$. Clearly, if the sum of all positive integers from $1$ to $i$ is less than $N$, then no set of consecutive integers whose greatest number is $i$ could sum to $N$. This establishes the lower bound of $i$.</p>
<p>To test if a set could end in $i$, I use the following mathematics:</p>
<p>Let </p>
<p>$$S(x) = \sum_{j=1}^{x} j$$</p>
<p>Starting with a set composed solely of $i$, add a set which most closely sums (including $i$) to $N$. Mathematically, </p>
<p>$$N = i + S(i - 1) - S(x)$$</p>
<p>$S(i - 1) - S(x)$ will generate some set of consecutive integers whose greatest number is $i - 1$ and whose least number is between $1$ and $i - 1$, inclusively. To rewrite,</p>
<p>$$N = i + \frac{(i-1)i}{2} - \frac{x(x + 1)}{2}$$
$$2N = 2i + i(i - 1) - x(x + 1)$$
$$2N = i(i + 1) - x(x + 1)$$</p>
<p>Therefore, $i(i + 1) - 2N = x(x+1)$ for some $x \in \mathbb{R}$. If $x \in \mathbb{N} \cup \{0\}$, then a solution has been found, generating the set of consecutive integers from $x + 1$ to $i$, inclusive. To quickly determine whether $x$ is a nonnegative integer, I square-root $i (i + 1) - 2N$ and round down to some integer $t$. I plug $t$ back into the equation for $x$, and check to see if it works. If $i(i + 1) - 2N \not = t(t+1)$ then I increment $t$ by one and check again. If neither of the cases works, then a valid set could not possibly have a greatest value of $i$, so I decrement $i$, continuing the loop.</p>
<hr>
<p>My algorithm has worked for hundreds of test cases, but when I submit, POJ gives me <code>WRONG ANSWER</code>. <strong>This is why I'd like to have a list of correct answers to compare against my program.</strong></p>
| Dominik | 259,493 | <p>The other answers already answered how to calculate the number more efficiently, but since you've asked for a list of values I will provide one. If you don't count a "sum" of $1$ number as a sum, the wanted value is called <a href="https://en.wikipedia.org/wiki/Polite_number#Politeness" rel="nofollow">politeness of the number</a> and can be found in <a href="https://oeis.org/A069283" rel="nofollow">OEIS/A069283</a>. </p>
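<p>Following the OEIS pointer: the count in the question (including the trivial singleton $\{N\}$) equals the number of odd divisors of $N$, the standard characterization for sums of consecutive positive integers. That gives an easy way to generate the reference answers the question asks for (my own helper, offered as a cross-check):</p>
<pre><code>def consecutive_sum_count(n):
    """Number of sets of consecutive positive integers summing to n,
    counting the singleton {n}: the number of odd divisors of n."""
    while n % 2 == 0:      # factors of 2 do not affect odd divisors
        n //= 2
    count, d = 1, 3
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        count *= e + 1
        d += 2
    if n > 1:              # one leftover odd prime factor
        count *= 2
    return count

print(consecutive_sum_count(15))   # 4, matching the worked example
</code></pre>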
|
2,356,813 | <p>Let $f:[0,\infty)\to\mathbb R$ be a function in $C^2$ such that
$\lim_{x\to\infty} (f(x)+f'(x)+f''(x)) = a.$
Prove that $\lim_{x\to\infty} f(x)=a$</p>
| RRL | 148,510 | <p>Note that with $\alpha = e^{i \pi/3}$ and $\beta = e^{-i \pi/3}$ we have $\alpha \beta = 1$ and $\alpha + \beta = 1$ and, therefore,</p>
<p>$$\tag{1}f(x) + f'(x) + f''(x) = \alpha\beta f(x) + ( \alpha + \beta)f'(x) + f''(x) \\ =\alpha\, [ \, \beta f(x) + f'(x) \, ] + [ \, \beta f(x) + f'(x) \, ]'$$</p>
<p>One can prove the lemma (when the real part of $\gamma$ is positive):</p>
<blockquote>
<p>$$\gamma f(x) + f'(x) \to \delta \implies f(x) \to \delta/\gamma$$</p>
</blockquote>
<p>To prove the lemma use the Hardy - L'Hospital trick</p>
<p>$$\lim_{x \to \infty}f(x) = \lim_{x \to \infty}\frac{e^{\gamma x}f(x)}{e^{\gamma x}} = \lim_{x \to \infty}\frac{e^{\gamma x}(\gamma f(x) + f'(x))}{\gamma e^{\gamma x}} = \frac{\delta}{\gamma}.$$ </p>
<p>Note that at this stage to appy L'Hospital's rule we don't need to assume anything about the existence of the limit of $f(x)$ in the numerator, only that the limit of the denominator is $+\infty.$</p>
<p>Now by (1) and the lemma we have </p>
<p>$$f(x) + f'(x) + f''(x) \to a \implies \beta f(x) + f'(x) \to a/\alpha,$$</p>
<p>and using the lemma again, </p>
<p>$$f(x) \to a/(\alpha \beta) = a$$</p>
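<p>A concrete instance to see the theorem in action (my own illustration, not part of the proof): $f(x)=a+e^{-x/2}\cos\bigl(\tfrac{\sqrt3}{2}x\bigr)$ satisfies $f+f'+f''=a$ identically, and $f(x)\to a$.</p>
<pre><code>import sympy as sp

x, a = sp.symbols('x a')
f = a + sp.exp(-x / 2) * sp.cos(sp.sqrt(3) / 2 * x)
print(sp.simplify(f + sp.diff(f, x) + sp.diff(f, x, 2)))   # a
print(f.subs([(a, 1), (x, 50.0)]))   # ~1.0: f is already close to its limit
</code></pre>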
|
70,582 | <p>For which $n$ can $a^{2}+(a+n)^{2}=c^{2}$ be solved, where $a,c,n$ are positive integers?
I have found solutions for $n=1,7,17,23,31,41,47,79,89$ and for multiples of $7,17,23$...
Are there infinitely many prime $n$ for which it is solvable? </p>
| Thomas Andrews | 7,933 | <p>The general primitive solution to $x^2+y^2 = z^2$ is given by: $x=u^2-v^2$, $y=2uv$, $z=u^2+v^2$, with $u,v$ relatively prime and not both odd.</p>
<p>For $(a,a+n,z)$ to be a primitive triple, we'd have to have a $(u,v)$ such that: $|u^2 - v^2 - 2uv| = n$. We can rewrite that as: $(u-v)^2 - 2v^2 = \pm n$</p>
<p>So, setting $w = u-v$, we want to find $(w,v)$ which are relatively prime and $w$ is odd, with:</p>
<p>$$w^2-2v^2 = \pm n$$</p>
<p>This means that $n$ must be odd.</p>
<p>In fact, we can use unique factorization in $\mathbb{Z}[\sqrt{2}]$ to show that $n$ can be any product of primes of the form $8k\pm 1$. Since there are infinitely many primes of the form $8k\pm 1$, the answer to your question is, "yes."</p>
<p>(Oh, and once you find one solution $(w,v)$ for a particular $n$, you can find infinitely many solutions for that $n$.)</p>
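<p>A small brute-force search over this Pell-type equation (my own sketch; the search bound is arbitrary) recovers a concrete triple for a given $n$ via $u=w+v$:</p>
<pre><code>def find_triple(n, bound=200):
    """Search w^2 - 2*v^2 = +-n with w odd; build (a, a+n, c) from u = w+v."""
    for v in range(1, bound):
        for w in range(1, bound, 2):            # w must be odd
            if abs(w * w - 2 * v * v) == n:
                u = w + v
                legs = sorted((abs(u * u - v * v), 2 * u * v))
                return legs[0], legs[1], u * u + v * v

print(find_triple(7))   # (5, 12, 13): 5^2 + 12^2 = 13^2 and 12 - 5 = 7
</code></pre>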
|
1,860,615 | <p>I have a simple quadratic (with $x^2$) equation; $x$ can be complex too:</p>
<p>$$x^2+x+1=0$$</p>
<p>But it could be any equation; the equation above is just an example. I need to compute $x_1^{10}+x_2^{10}$, but it could have other exponents (e.g. $x_1^{50}+x_2^{50}$).</p>
<p>I need to know, in the general case, how to find $x_1^n+x_2^n$ for $n\in\mathbb{N}$, where $x_1,x_2$ are the roots of $ax^2+bx+c=0$ with $a\neq 0$.</p>
<p>I ask this because I have to create software which computes this (the user enters the equation and the number $n$ = exponent) and I can't always find the roots, because sometimes they are complex. I think I should make use of Vieta's formulas, but I don't know how to compute $x_1^n+x_2^n$.</p>
<p>Thank you very much!!</p>
| Brian Tung | 224,454 | <p>One approach is to represent the roots in polar form, and take advantage of their symmetry. For the sake of ease of exposition, let the quadratic be monic; that is, $a = 1$. If it's not already in that form, it's trivial to convert it to a monic form.</p>
<p>If the roots are real—if $b^2-4c \geq 0$—I assume you know how to handle that. So we'll just consider the case where they're not real. In that case, $b^2-4c < 0$, and the roots are given by</p>
<p>$$
x_{1, 2} = \frac{-b \pm \sqrt{b^2-4c}}{2}
= \frac{-b}{2} \pm \frac{\sqrt{4c-b^2}}{2}i
$$</p>
<p>Note that $c > 0$ necessarily if the roots are not real. We can see, using the Pythagorean theorem, that $|x_1| = |x_2| = \sqrt{c}$. Now, find $\theta, 0 \leq \theta \leq \pi$ such that</p>
<p>$$
\cos\theta = -\frac{b}{2\sqrt{c}}
$$
$$
\sin\theta = \sqrt{1-\frac{b^2}{4c}}
$$</p>
<p>This can be done using the atan2 function in many programming languages. Then $x_{1, 2}$ can be represented as $\sqrt{c} \text{ cis } (\pm\theta) \equiv \sqrt{c} \left[\cos(\pm\theta)+i\sin(\pm\theta)\right]$. Then</p>
<p>$$
x_1^n = \sqrt{c^n} \text{ cis }(n \theta)
$$
$$
x_2^n = \sqrt{c^n} \text{ cis } (-n\theta)
$$</p>
<p>Since $x_1^n$ and $x_2^n$ form a conjugate pair (just as $x_1$ and $x_2$ do), we therefore have</p>
<p>$$
x_1^n+x_2^n = 2\sqrt{c^n}\cos (n\theta)
$$</p>
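<p>For the software side of the question, here is the above recipe as a short Python routine (a minimal sketch of mine; the real-root branch and the use of <code>atan2</code> for $\theta$ are implementation choices):</p>
<pre><code>from math import sqrt, atan2, cos

def power_sum(b, c, n):
    """x1**n + x2**n for the roots of the monic quadratic x^2 + b x + c.
    For a general a x^2 + b x + c, divide b and c by a first."""
    disc = b * b - 4 * c
    if disc >= 0:                            # real roots: compute directly
        x1 = (-b + sqrt(disc)) / 2
        x2 = (-b - sqrt(disc)) / 2
        return x1 ** n + x2 ** n
    theta = atan2(sqrt(-disc) / 2, -b / 2)   # angle of x1 = sqrt(c) cis(theta)
    return 2 * c ** (n / 2) * cos(n * theta)

print(power_sum(1, 1, 10))   # roots of x^2 + x + 1: the answer is -1
</code></pre>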
|
1,860,615 | <p>I have a simple quadratic (with $x^2$) equation; $x$ can be complex too:</p>
<p>$$x^2+x+1=0$$</p>
<p>But it could be any equation; the equation above is just an example. I need to compute $x_1^{10}+x_2^{10}$, but it could have other exponents (e.g. $x_1^{50}+x_2^{50}$).</p>
<p>I need to know, in the general case, how to find $x_1^n+x_2^n$ for $n\in\mathbb{N}$, where $x_1,x_2$ are the roots of $ax^2+bx+c=0$ with $a\neq 0$.</p>
<p>I ask this because I have to create software which computes this (the user enters the equation and the number $n$ = exponent) and I can't always find the roots, because sometimes they are complex. I think I should make use of Vieta's formulas, but I don't know how to compute $x_1^n+x_2^n$.</p>
<p>Thank you very much!!</p>
| Ng Chung Tak | 299,599 | <p>\begin{align*}
\alpha+\beta &= -\frac{b}{a} \\
\alpha \beta &= \frac{c}{a} \\
\alpha^n+\beta^n &=
(\alpha+\beta)^{n}+\sum_{k=1}^{\left \lfloor \frac{n}{2} \right \rfloor}
\binom{n-k}{k} \frac{n(-\alpha \beta)^{k} (\alpha+\beta)^{n-2k}}{n-k} \\
&=
(\alpha+\beta)^{n}-n\alpha \beta (\alpha+\beta)^{n-2}+
\frac{n(n-3)}{2!} \alpha^2 \beta^2 (\alpha+\beta)^{n-4}-\ldots \\
\end{align*}</p>
|
1,774,294 | <p>If you have $$y^2=2x^2+C$$</p>
<p>why is this not equivalent to</p>
<p>$$y=\sqrt{2x^2}+C$$</p>
| lab bhattacharjee | 33,337 | <p>$$y=C+\sqrt{2x^2}\implies y-C=\sqrt2\,|x|$$</p>
<p>Squaring, we get $$(y-C)^2=2x^2$$</p>
<p>But the first equation says $$2x^2=y^2-C,$$ and $(y-C)^2\neq y^2-C$ in general, so the two equations are not equivalent.</p>
|
3,270,944 | <p>Let <span class="math-container">$A$</span> be a bounded linear operator on a separable Hilbert space <span class="math-container">${\cal H}$</span>, and suppose that <span class="math-container">$A$</span> is distinct from its adjoint <span class="math-container">$A^*$</span>. </p>
<p><strong>Question:</strong> Can the double commutant of <span class="math-container">$A$</span> be distinct from the double commutant of <span class="math-container">$\{A,A^*\}$</span>? If so, is there a simple example?</p>
<hr>
<p>This question was inspired by the following statement from section 3.3 in Vaughan Jones (2009), <em>Von Neumann Algebras</em> (<a href="https://math.berkeley.edu/~vfr/VonNeumann2009.pdf" rel="nofollow noreferrer">https://math.berkeley.edu/~vfr/VonNeumann2009.pdf</a>):</p>
<blockquote>
<p>If <span class="math-container">$S \subseteq {\cal B(H)}$</span>, we call <span class="math-container">$(S \cup S^*)''$</span> the von Neumann algebra generated by <span class="math-container">$S$</span>.</p>
</blockquote>
<p>I don't know if this was meant to be the most <em>efficient</em> definition, and that's exactly what prompted my question. Maybe the Hilbert space wasn't assumed to be separable in that context, but I am interested in the separable case (if it matters).</p>
| Mohammad Riazi-Kermani | 514,496 | <p><span class="math-container">$$\frac{2(x+h)^2+1-(2x^2+1)}{(x + h) - (x)} =\frac{2(x^2+2xh+h^2)+1-(2x^2+1)}{(x + h) - (x)}=$$</span></p>
<p><span class="math-container">$$\frac{4xh+2h^2}{ h}=4x+2h$$</span></p>
|
751,138 | <p>Show that if $3\mid(a^2+1)$ then $3$ does not divide $(a+1)$. </p>
<p>using proof by contradiction.</p>
<p>Can someone prove this using the contradiction method, please?</p>
| user2345215 | 131,872 | <p>Prove the contrapositive:</p>
<blockquote>
<p>If $3\mid a+1$, then $3\nmid a^2+1$.</p>
</blockquote>
<p>Which is easy, because then $a=3k-1$ and $a^2+1=9k^2-6k+2$, which isn't divisible by $3$.</p>
|
386,799 | <blockquote>
<p>P1086: For a closed surface, the positive orientation is the one for which the normal vectors point outward from the surface, and inward-pointing normals give the negative orientation.</p>
<p>P1087: If <span class="math-container">$S$</span> is a smooth orientable surface given in parametric form by a vector function <span class="math-container">$\mathbf{r}(u,v)$</span>, then it is automatically supplied with the orientation of the unit normal <span class="math-container">$\mathbf{n} = \cfrac{\partial_u\mathbf{r} \times \partial_v\mathbf{r}}{\vert \partial_u\mathbf{r} \times \partial_v \mathbf{r} \vert} $</span>...</p>
<p>P1093: The orientation of a surface S induces the positive orientation of the boundary curve C shown in the figure. This means that if one walks in the positive direction around the curve with one's head pointing in the direction of <span class="math-container">$\mathbf{n}$</span>, then the surface is always on one's left.</p>
</blockquote>
<p>How does one determine whether <span class="math-container">$\partial_{\huge{u}}\mathbf{r} \times \partial_{\huge{v}}\mathbf{r} \quad \text{ or } \quad \partial_{\huge{v}}\mathbf{r} \times \partial_{\huge{u}}\mathbf{r} \quad $</span> (negatives of each other) matches the desired orientation?</p>
<p>Since a surface may be hard to sketch (especially under exam conditions), I was hoping for an argument that isn't geometric or visual. But if geometry and visualisation are the easiest, would you please provide pictures for your explanations?</p>
<hr />
<blockquote>
<p>P1091 16.7.<span class="math-container">$23 \text{ generalised.}$</span> <span class="math-container">$\mathbf{F} = (x,-z,y)$</span> and <span class="math-container">$S$</span> is the part of <span class="math-container">$x^2 + y^2 + z^2 = p$</span> in the first octant and oriented towards <span class="math-container">$(0,0,0)$</span>. Evaluate the surface integral <span class="math-container">$\iint_S \mathbf{F} \cdot d\mathbf{S}$</span>. For closed surfaces, use the positive (outward) orientation.</p>
</blockquote>
<p><strong>Solution:</strong> Since <span class="math-container">$S$</span> is a sphere, parameterize with <span class="math-container">$r(\theta, \phi) = (p\sin \phi \cos \theta, p \sin \phi \sin \theta, p \cos \phi)$</span>.<br />
Then <span class="math-container">$\mathbf{F[r(\theta, \phi)]} \cdot \color{red}{(\partial_{\theta} r \times \ \partial_{\phi} r )} = p^3 \sin^3 \phi \cos^2 \theta \qquad (♦)$</span><br />
Then <span class="math-container">$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iint_D \mathbf{F} \cdot (\partial_{\theta} r \times \ \partial_{\phi} r ) \, dA = p^3\int^{2\pi}_0 \cos^2 \theta \, d\theta \int^{\pi/2}_0 \sin^3 \phi \, d\phi = ... = p^3 \quad \pi \quad 1/3.$</span>
The answer is given as <span class="math-container">$ -p^\color{red}{2} \quad \pi \quad 1/3 $</span>.</p>
<p>How would one have determined that the cross product in (♦) coloured in red is wrong,<br />
and that it should've been <span class="math-container">$\color{green}{ \partial_{\large{\phi}} r \times \partial_{\large{\theta}} r }$</span> ?</p>
<p>Predicated on user Dan's Answer: <img src="https://i.stack.imgur.com/2Qq3a.jpg" alt="enter image description here" /></p>
| Ted | 15,012 | <p>There's no fixed answer to this, because whichever one you choose, if you were to reverse $u$ and $v$, you would have to choose the other one. It depends on how the $u$ and $v$ coordinates are oriented on the surface.</p>
<p>However, "$\color{brown}{\text{if the positive $u$ and $v$ tangent vectors at a point are oriented such that,}}$ when looking from the outside of the surface at the point, $\color{brown}{\text{the direction from $u$ to $v$ is counterclockwise, then $\partial u \times \partial v$ is the correct choice.}}$" </p>
|
2,353,193 | <p>I've recently been learning some homological algebra, mainly out of Northcott and some other sources, and I'm having trouble with the notion of projective dimension. In particular, I have a question (not from Northcott) that says</p>
<blockquote>
<p>Let $R = k[x,y]$ for a field $k$ and $M$ a finitely generated $R$-module. Then $M$ has projective dimension $2$ if and only if $\text{Hom}_R(k,M) \neq 0$, where we consider $k$ as an $R$-module with the ideal $\mathfrak m=(x,y)$ acting as $0$ on $k$ (i.e. $k = R/\mathfrak m$).</p>
</blockquote>
<p>I have attempted the problem but I don't see any way of linking the notion of projective dimension to the Hom-set. What I have so far:</p>
<p>We have a projective resolution $$0\rightarrow P_2\rightarrow P_1\rightarrow P_0\rightarrow M\rightarrow 0$$ of $M$ and so we have a long exact sequence $$0\rightarrow \text{Hom}(k,P_2)\rightarrow \text{Hom}(k,P_1)\rightarrow \text{Hom}(k,P_0)\rightarrow \text{Hom}(k,M)\rightarrow \text{Ext}^1(k,P_2)\rightarrow \dots$$</p>
<p>We also have the exact sequence $$0\rightarrow \mathfrak m\rightarrow R\rightarrow k\rightarrow 0$$ which gives rise to the long exact sequence $$0\rightarrow \text{Hom}(k,M)\rightarrow \text{Hom}(R,M)\rightarrow \text{Hom}(\mathfrak m, M)\rightarrow \text{Ext}^1(k,M)\rightarrow \text{Ext}^1(R,M)\dots$$</p>
<p>Of these two long exact sequences I think the second one is more useful because we don't know anything about the $P_i$'s from the first one. Also $\text{Ext}^1(R,M) = 0$ since $R$ is projective, so we have an exact sequence with just four nonzero terms if we ignore everything past that.</p>
<p>However I have no idea how to include the projective resolution of $M$ which I imagine is necessary since the projective dimension of $M$ is a hypothesis. Also not sure how to use the finitely generated assumption.</p>
<p>So, I'd like a hint or two to proving this particular claim, and also if possible some general tips on proving things about projective dimension and using long exact sequences in general.</p>
| Mariano Suárez-Álvarez | 274 | <p>A simpler way to see that the claim is false in one direction is to notice that the automorphism group of $R$ acts transitively on the set of modules of the form $R/(x-a,y-b)$, so that they all have the same projective dimension. As there are no nonzero maps between them, the claim is false.</p>
<hr>
<p>Let's do the graded case. First, show that $\def\Ext{\operatorname{Ext}}\def\Tor{\operatorname{Tor}}\Ext^p(k,M)\cong \Tor_{2-p}(k,M)$ for all modules $M$, so that $\hom(k,M)=0$ implies that $\Tor_2(k,M)=0$. Now pick a projective resolution $0\to P_2\to P_1\to P_0\to M$ of $M$, and since $M$ is finitely generated, we can pick it so that it is <em>minimal</em>, that is, the image of each map $P_{i+1}\to P_i$ is in $R_+P_i$. As $\Tor(k,M)$ is the homology of the complex $0\to k\otimes P_2\to k\otimes P_1\to k\otimes P_0$ and the minimality of the resolution implies that all the maps here are zero, we see that $k\otimes P_2=0$ and this implies that $P_2=0$, by the graded Nakayama. It follows that $M$ has projective dimension at most $1$.</p>
<p>(Here the hypothesis is that $\hom(k,M)=0$, but that is an ungraded $\hom$: if we want to use a graded $\hom$ we have to have $\hom(k(\ell),M)=0$ for all shifts $\ell$.)</p>
<hr>
<p>To check that $\Ext^p(k,M)\cong \Tor_{2-p}(k,M)$ notice that $k$ has the Koszul resolution $0\to R\otimes\Lambda^2V\to R\otimes V\to R\to k$, with $V$ the vector space with basis $\{x,y\}$. Use it to construct complexes which compute $\Ext(k,M)$ and $\Tor(k,M)$ and notice that you get the same complex</p>
|
162,836 | <p>I would like to find the surface normal for a point on a 3D filled shape in Mathematica. </p>
<p>I know how to calculate the normal of a parametric surface using the cross product but this method will not work for a shape like <code>Cone[]</code> or <code>Ball[]</code>.</p>
<ol>
<li>Is there some sort of <code>RegionNormal</code> option? There is an option to
find <code>VertexNormals</code> <a href="http://reference.wolfram.com/language/ref/VertexNormals.html" rel="noreferrer">here</a>, but this is something to with
shading and seems unhelpful. </li>
<li>Is there a method I can use to convert the region into a parametric expression and use the normal cross product method? </li>
</ol>
<p>The plan is to take an arbitrary line and find the angle of intersection between the line and the surface of the shape. </p>
| Bill Watts | 53,121 | <p>If you know the equation of your surface, you can do this. For instance, a sphere centered at the origin of radius 5.</p>
<pre><code>f[x_,y_,z_]=x^2+y^2+z^2-5^2;
grad[x_,y_,z_]=Grad[f[x,y,z],{x,y,z}]
</code></pre>
<p>The unit normal vector on that surface is</p>
<pre><code>normal[x_,y_,z_]=Simplify[grad[x,y,z]/Sqrt[grad[x,y,z].grad[x,y,z]]]
</code></pre>
<p>Simple case</p>
<pre><code>normal[0,0,5]
(*{0,0,1}*)
normal[2,3,2 Sqrt[3]]
(*{2/5,3/5,(2 Sqrt[3])/5}*)
</code></pre>
<p>The hard part is getting the surface into the equation.</p>
|
162,836 | <p>I would like to find the surface normal for a point on a 3D filled shape in Mathematica. </p>
<p>I know how to calculate the normal of a parametric surface using the cross product but this method will not work for a shape like <code>Cone[]</code> or <code>Ball[]</code>.</p>
<ol>
<li>Is there some sort of <code>RegionNormal</code> option? There is an option to
find <code>VertexNormals</code> <a href="http://reference.wolfram.com/language/ref/VertexNormals.html" rel="noreferrer">here</a>, but this is something to with
shading and seems unhelpful. </li>
<li>Is there a method I can use to convert the region into a parametric expression and use the normal cross product method? </li>
</ol>
<p>The plan is to take an arbitrary line and find the angle of intersection between the line and the surface of the shape. </p>
| Jesse Wilson | 83,195 | <p>I just came across the UnitNormal[] function that might be applicable:
<a href="https://resources.wolframcloud.com/FunctionRepository/resources/UnitNormal" rel="nofollow noreferrer">https://resources.wolframcloud.com/FunctionRepository/resources/UnitNormal</a></p>
<p>Also, SurfaceData[surface,"NormalVector"] might work:
<a href="https://reference.wolfram.com/language/ref/SurfaceData.html" rel="nofollow noreferrer">https://reference.wolfram.com/language/ref/SurfaceData.html</a></p>
|
2,698,098 | <p>The question:</p>
<blockquote>
<p>Suppose <span class="math-container">$0< \delta < \pi$</span>, <span class="math-container">$f(x) = 1$</span> if <span class="math-container">$|x| \leq \delta$</span>, <span class="math-container">$f(x) = 0$</span> if <span class="math-container">$\delta < |x| \leq \pi$</span>, and <span class="math-container">$f(x + 2 \pi) = f(x)$</span> for all <span class="math-container">$x$</span>.</p>
<p>(a) Compute the Fourier Coefficients for <span class="math-container">$f$</span>.</p>
<p>(b) Conclude that <span class="math-container">$$\sum_{n=1}^\infty \frac{\sin(n \delta)}{n} = \frac{\pi - \delta}{2}$$</span></p>
</blockquote>
<p>I've found the coefficients as <span class="math-container">$c_n = \frac{1}{2 \pi} \int^\pi_{-\pi} f(x) e^{-inx}\,dx = \frac{\sin (n \delta)}{i \pi n}$</span>. But I can't seem to see what I should be looking for to prove (b). Parseval's Theorem gives it for <span class="math-container">$|c_n|^2$</span> and even then it doesn't give the answer I'm looking for. A hint would be appreciated.</p>
| Kavi Rama Murthy | 142,385 | <p>There is a small mistake in the calculation of $\hat {f} (n)$. Also, your calculation does not hold for $n=0$. Calculate $\hat {f} (0)$ separately and use the fact that $\sum \hat {f} (n) =f(0)$. You will also have to change $n$ to $-n$ when you sum over negative values of $n$.</p>
|
3,368,655 | <p>I came across a problem that asked if it is possible for a function to be Riemann integrable on <span class="math-container">$[0,+\infty)$</span> but also satisfy <span class="math-container">$|f(x)|\geq 1$</span> for all <span class="math-container">$x\geq 0$</span>. </p>
<p>At first I thought it was impossible, but I realized that only holds for continuous functions, because they would have to be either positive or negative, and then they would have to go to 0 at infinity. </p>
<p>I have an idea of what the function would have to be like, with alternating signs but a convergent integral, but I haven't been able to find any, so I'm starting to think it is impossible. </p>
<p>I would like some help finding this function, or disproving it, as I don't know many tools for working with functions without a constant sign.</p>
| Chris Culter | 87,023 | <p>The <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">Fresnel integral</a>:
<span class="math-container">$$\int_0^\infty e^{ix^2}dx=\frac{(1+i)\sqrt{2\pi}}{4}$$</span>
(The problem statement doesn't say <span class="math-container">$f$</span> has to be real!)</p>
|
292,331 | <p>Suppose that $(n_k)_{k\in \mathbb{N}}$ is a given increasing sequence of positive integers. </p>
<p>Does there exist an (irrational) number $a$ such that
$\{an_k\}:=(a n_k)\text{mod }1 \rightarrow 1/2$ as $k \rightarrow \infty$? </p>
| Nick S | 11,552 | <p>The sequence $n_{2k}=k^2, n_{2k+1}=k^2+1$ is a counterexample.</p>
<p>Indeed, if $\{ n_k a \} \to \frac{1}{2}$ then $\{ n_{2k}a \} \to \frac{1}{2}$ and $\{ n_{2k+1} a\} \to \frac{1}{2}$.</p>
<p>This implies that $a= n_{2k+1}a-n_{2k} a= \lfloor n_{2k+1}a\rfloor+ \{ n_{2k+1}a \} - \lfloor n_{2k}a\rfloor- \{ n_{2k}a \} $ and hence
$$ a \pmod{1} \equiv \{ n_{2k+1}a \} - \{ n_{2k}a \} \to 0 \pmod{1} $$</p>
<p>This shows that $a \in \mathbb Z$, which contradicts $\{ n_k a \} \to \frac{1}{2}$.</p>
<p>The same is true for any subsequence containing infinitely many pairs of consecutive integers.</p>
<p><strong>P.S.</strong> On the other hand, if $a$ is an irrational number, it follows from the denseness of $\{ na \}$ that there exists a subsequence $(n_k)$ such that $\{ n_ka \} \to \frac{1}{2}$.</p>
<p>This shows that there are many subsequences with this property, and I think one can argue that there are uncountably many such sequences.</p>
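<p>To illustrate the <strong>P.S.</strong>, here is a small greedy search in Python (the choice $a=\sqrt 2$ and the tolerances are arbitrary) that produces an increasing subsequence with $\{ n_k a \}$ approaching $\frac{1}{2}$:</p>
<pre><code>import math

a = math.sqrt(2)
eps, found, n = 0.1, [], 1
while len(found) < 8:
    if abs(n * a % 1.0 - 0.5) < eps:
        found.append(n)
        eps /= 2          # tighten the tolerance for the next term
    n += 1
print(found)              # an increasing n_k with {n_k a} -> 1/2
</code></pre>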
|
3,055,649 | <p>Can there be more than four different types of polygons meeting at a vertex? How?
(The polygons must be convex, regular and different)</p>
<p>There are two ways to fit 5 regular polygons around a vertex, what are they?
(The polygons must be regular, they may not be of different types)</p>
| user630708 | 630,708 | <p>This is easy using Residue Theory:</p>
<p>Note that by symmetry <span class="math-container">$\int_{0}^{\pi/2}...dx=1/4\int_{-\pi}^{\pi}…dx$</span> (use parity and the substitution $y=\pi-x$ to show that).</p>
<p>Employing <span class="math-container">$z=e^{ix}$</span> we get</p>
<p><span class="math-container">$$
4 I_{n,\beta}=\oint_C \left[\frac{z^{4n+2}-1}{z^2-1}\right]^{\beta}\frac{dz}{i z^{2\beta n+1}}
$$</span></p>
<p>where <span class="math-container">$C$</span> denotes the unit circle in the complex plane. By the residue theorem there is one pole inside the contour, at <span class="math-container">$z=0$</span> (using e.g. the geometric series you can show that the points <span class="math-container">$z=\pm i$</span> are removable singularities). We have</p>
<p><span class="math-container">$$
4 I_{n,\beta}=2\pi \text{Res}(\left[\frac{z^{4n+2}-1}{z^2-1}\right]^{\beta}\frac{1}{z^{2\beta n+1}}
,z=0)
$$</span></p>
<p>Using <span class="math-container">$\beta!(z^2-1)^{-\beta}=((2z)^{-1}\partial_z)^{\beta-1}(z^2-1)^{-1}$</span> we get</p>
<p><span class="math-container">$$
(1-z^2)^{-\beta}=\frac{1}{2^{\beta-1}}\sum_{m\geq0}\binom{m+\beta-1}{\beta-1}z^{2m}\\
(1-z^{4n+2})^{\beta}=z^{2\beta}\sum_{k\geq0}(-1)^k\binom{\beta}{k}z^{4k}
$$</span></p>
<p>which means that we have the condition <span class="math-container">$4k+2(m+\beta)-2\beta n-1=-1$</span> (since we are interested in the <span class="math-container">$a_{-1}$</span> coefficient of the Laurent expansion), which essentially kills one of the sums, and we end up with</p>
<p><span class="math-container">$$
I_{n,\beta}=\frac{\pi}{2^{\beta}}\sum_{m\geq0}(-1)^{\beta n /2-(\beta+m)/2}\binom{m+\beta-1}{\beta-1}\binom{\beta}{\beta n /2-(\beta+m)/2}
$$</span></p>
<p>which is a finite sum, since the second binomial becomes zero when <span class="math-container">$m$</span> is large enough (<span class="math-container">$m> \beta (n-1)$</span>)</p>
|
315,457 | <p>I am trying to evaluate $\cos(x)$ at the point $x=3$ with $7$ decimal places to be correct. There is no requirement to be the most efficient but only evaluate at this point.</p>
<p>Currently, I am thinking to first write $x=\pi+x'$ where $x'=-0.14159265358979312$, then use the Taylor series $\cos(x)=\sum_{n=0}^\infty(-1)^n\frac{x^{2n}}{(2n)!}$ together with the error bound $\frac{1}{(n+1)!}$ for $\cos(x)$ when $x\in[-1,1]$ to decide the best $n$. Using Wolfram Alpha I got $n=11$. Thus I need to use the first $11$ terms of the Taylor series of $\cos(x)$. Does this seem a reasonable approach?</p>
<p>If I am using some programming languages which don't contain $\pi$ as a constant, should I just define $\pi$ first and use the above method? Is there any other approach to this?</p>
<p>If I want to evaluate $\sin(\cos(x))$ at the point $x=3$, should I use above method to evaluate $\cos(x)$ first and then $\sin(\cos(x))$? Is there any other approach to this?</p>
| user1551 | 1,551 | <p>A square matrix is reducible iff the associated directed graph has smaller <a href="http://en.wikipedia.org/wiki/Strongly_connected_component" rel="noreferrer">strongly connected components</a>. So you may use a <a href="http://en.wikipedia.org/wiki/Path-based_strong_component_algorithm" rel="noreferrer">strong component algorithm</a> to solve your problem.</p>
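<p>A sketch of that check in Python, using SciPy's strong-component routine (the example matrices are mine):</p>
<pre><code>import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A):
    # A is irreducible iff its directed graph (edge i -> j when A[i, j] != 0)
    # is strongly connected, i.e. has exactly one strong component.
    graph = csr_matrix((np.asarray(A) != 0).astype(int))
    n_comp, _ = connected_components(graph, directed=True, connection='strong')
    return n_comp == 1

print(is_irreducible([[0, 1], [0, 0]]))   # False: no path from node 1 back to 0
print(is_irreducible([[0, 1], [1, 0]]))   # True
</code></pre>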
|
1,101,371 | <p>Any book that I find on abstract algebra is somehow advanced and not OK for self-learning. I am a high-school student with high-school math knowledge. Could someone please suggest a book on abstract algebra that would be suitable? Thanks a lot. </p>
| Sophie Clad | 190,787 | <p>'Abstract Algebra' by John Fraleigh, and 'Contemporary Abstract Algebra' by J. A. Gallian. </p>
|
1,101,371 | <p>Any book that I find on abstract algebra is somehow advanced and not OK for self-learning. I am a high-school student with high-school math knowledge. Could someone please suggest a book on abstract algebra that would be suitable? Thanks a lot. </p>
| Gridley Quayle | 202,118 | <p>I am also in high school and two books I've used and found very accessible are:</p>
<p>1) 'Visual Group Theory' by Nathan Carter (the diagrams and illustrations are excellent, a bit pricey though) </p>
<p>2) 'Book of Abstract Algebra' by Charles C. Pinter (the 'Dover books on Mathematics' series of mathematics books are all worth a look) </p>
<p>I can attach the contents if you want.</p>
|
3,882,261 | <p>I have the following question. It's basically my first day doing complex numbers, so I am absolutely lost here. <a href="https://i.stack.imgur.com/JTebK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTebK.png" alt="enter image description here" /></a>
I have read that the modulus-arg form is
<span class="math-container">$$ z = r(\cos\theta + i \sin\theta)$$</span>
Now, in this case, I tried expanding the equation given (I'm only on part i right now) and got:</p>
<p><span class="math-container">$$z - i = 2\cos\theta - 2i\sin\theta $$</span>
What do I do now?
Yes, I can factor the 2 out, but my issue is that I was told that the value of r and the signs of cos and sin must be positive for the mod-arg form. I'm not sure what to do.</p>
| Karan Elangovan | 497,101 | <p>I believe a valid parametrisation would be:</p>
<p><span class="math-container">$$ x = a * cosh(t)$$</span>
<span class="math-container">$$ y = b * sinh(t)$$</span>
<span class="math-container">$$ t \in \mathbb{R} $$</span></p>
|
3,882,261 | <p>I have the following question. It's basically my first day doing complex numbers, so I am absolutely lost here. <a href="https://i.stack.imgur.com/JTebK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTebK.png" alt="enter image description here" /></a>
I have read that the modulus-arg form is
<span class="math-container">$$ z = r(\cos\theta + i \sin\theta)$$</span>
Now, in this case, I tried expanding the equation given (I'm only on part i right now) and got:</p>
<p><span class="math-container">$$z - i = 2\cos\theta - 2i\sin\theta $$</span>
What do I do now?
Yes, I can factor the 2 out, but my issue is that I was told that the value of r and the signs of cos and sin must be positive for the mod-arg form. I'm not sure what to do.</p>
| Intelligenti pauca | 255,730 | <p>Your equation can be factored as
<span class="math-container">$$
\left({x\over a}+{y\over b}\right)\left({x\over a}-{y\over b}\right)=1.
$$</span>
Set then:
<span class="math-container">$$
{x\over a}+{y\over b}=t
\quad\text{and}\quad
{x\over a}-{y\over b}={1\over t}.
$$</span>
Solving this pair for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> yields the parametrisation <span class="math-container">$x=\frac a2\left(t+\frac1t\right)$</span>, <span class="math-container">$y=\frac b2\left(t-\frac1t\right)$</span>, <span class="math-container">$t\neq 0$</span>.</p>
|
1,453,010 | <p>A certain biased coin is flipped until it shows heads for the first time. If the probability of getting heads on a given flip is $5/11$ and $X$ is a random variable corresponding to the number of flips it will take to get heads for the first time, the expected value of $X$ is:
$$E[X] = \sum_{x=1}^\infty x\cdot\frac{5}{11}\left(\frac{6}{11}\right)^{x-1}$$
I'm not sure how to find an exact value for $E[X]$. I tried thinking about it in terms of a summation of an infinite geometric series but I don't see how that formula can be applied. </p>
| Math1000 | 38,584 | <p>In general, if $X\sim\operatorname{Geo}(p)$, i.e. $\mathbb P(X=n)=p(1-p)^{n-1}$, $n=1,2,\ldots$, then
\begin{align}
\mathbb E[X] &= \sum_{n=1}^\infty np(1-p)^{n-1}\\
&= -p\sum_{n=1}^\infty \frac{\mathsf d}{\mathsf dp}\left[ (1-p)^n\right]\\
&= -p\frac{\mathsf d}{\mathsf dp}\left[\sum_{n=1}^\infty (1-p)^n\right]\\
&= -p\frac{\mathsf d}{\mathsf dp}\left[\frac {1-p}{p}\right]\\
&= -p\left(-\frac1{p^2} \right)\\
&= \frac1p.
\end{align}
Here we have $p=\frac 5{11}$ so $\mathbb E[X]=\frac{11}5$.</p>
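<p>For what it's worth, SymPy reproduces this value directly for $p=\frac 5{11}$ (a mechanical check, not a different argument):</p>
<pre><code>import sympy as sp

n = sp.symbols('n', integer=True, positive=True)
p = sp.Rational(5, 11)
E = sp.summation(n * p * (1 - p)**(n - 1), (n, 1, sp.oo))
print(E)   # 11/5
</code></pre>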
|
2,666,772 | <blockquote>
<p>$W$ = $\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}$. Use W to build an 8x8 matrix encoding an orthonormal basis in $R^8$ by scaling A = $\begin{bmatrix} W & W \\ W & -W \end{bmatrix}$ in the right way.</p>
</blockquote>
<p>Am I wrong here, or is it not just this matrix placed beside itself 4 times? Doesn't that work?</p>
| José Carlos Santos | 446,262 | <p>No. First of all, $W$ is not orthonormal: its columns don't have norm $1$. Normalize it first. Call it $U$. Then consider $\left[\begin{smallmatrix}U&0\\0&U\end{smallmatrix}\right]$.</p>
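<p>A quick numeric check of this construction (the scaling factor $\frac12$ is the reciprocal of the common column norm of $W$):</p>
<pre><code>import numpy as np

W = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])
U = W / 2.0                               # each column of W has norm 2
B = np.block([[U, np.zeros((4, 4))],
              [np.zeros((4, 4)), U]])
print(np.allclose(B.T @ B, np.eye(8)))    # True: columns are orthonormal
</code></pre>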
|
2,090,790 | <p>We are given $g(x)=\frac{x \sin x}{x+1}$, and as I said we need to show it has no maxima in $(0,\infty)$.</p>
<p><strong>My attempt</strong>: assume there is some $x_0>0$ that yields a maximum. Then for all $x$</p>
<p>$$-1+\frac{1}{x+1}\leq \frac{x \sin x}{x+1}\leq \frac{x_0 \sin x_0}{x_0+1}\leq 1-\frac{1}{x_0+1}$$
and we can find some $x$ for which this isn't satisfied (like $\frac{1}{2-\frac{1}{x_0+1}}$?). This feels very unnecessary (plus I assume things about $x_0$), but I don't know what good way there is...</p>
<p>Note that I saw the thread here showing $\sup_{x>0} g(x)=1$ but one solution is not on my level, and the other one doesn't actually show its the $\sup$, rather than some sort of "partial limit". Even so, I still don't know how to show that there is no $x$ such that $g(x)=1$, and I don't think it's needed here.</p>
<p>Any help is appreciated in advance!</p>
| Dr. Sonnhard Graubner | 175,066 | <p>For the first derivative we get $$\frac{(x^2+x)\cos(x)+\sin(x)}{(x+1)^2}$$
but this derivative has zeros in $0<x<\infty$.</p>
|
2,090,790 | <p>We are given $g(x)=\frac{x \sin x}{x+1}$, and as I said we need to show it has no maxima in $(0,\infty)$.</p>
<p><strong>My attempt</strong>: assume there is some $x_0>0$ that yields a maximum. Then for all $x$</p>
<p>$$-1+\frac{1}{x+1}\leq \frac{x \sin x}{x+1}\leq \frac{x_0 \sin x_0}{x_0+1}\leq 1-\frac{1}{x_0+1}$$
and we can find some $x$ for which this isn't satisfied (like $\frac{1}{2-\frac{1}{x_0+1}}$?). This feels very unnecessary (plus I assume things about $x_0$), but I don't know what good way there is...</p>
<p>Note that I saw the thread here showing $\sup_{x>0} g(x)=1$ but one solution is not on my level, and the other one doesn't actually show its the $\sup$, rather than some sort of "partial limit". Even so, I still don't know how to show that there is no $x$ such that $g(x)=1$, and I don't think it's needed here.</p>
<p>Any help is appreciated in advance!</p>
| Maverick | 171,392 | <p>Consider the interval $x\in [2n\pi,(2n+1)\pi]$</p>
<p>$f(2n\pi)=f((2n+1)\pi)=0$</p>
<p>Invoking Rolle's theorem, it is clear that there exists at least one $x=c$ for which $f'(c)=0$, where $c\in(2n\pi,(2n+1)\pi)$.</p>
<p>Also, on this interval $f(x)$ is clearly positive, and since $f(x)$ is continuous and differentiable, $f(x)$ must achieve at least one maximum.</p>
|
3,504,422 | <blockquote>
<p>Find: <span class="math-container">$$\displaystyle\lim_{x\to
\infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln x}$$</span></p>
</blockquote>
<p>My attempt:</p>
<p><span class="math-container">$\displaystyle\lim_{x\to\infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\lim_{x\to\infty}\left(1+\frac{\ln(x^2+3x+4)-\ln(x^2+2x+3)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\\\displaystyle\lim_{x\to\infty}\left(1+\frac{\ln\left(\frac{x^2+3x+4}{x^2+2x+3}\right)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\lim_{x\to\infty}\left(1+\frac{\ln\left(1+\frac{x+1}{x^2+2x+3}\right)}{\ln(x^2+2x+3)}\right)^{x\ln x}$</span></p>
<p>What I used:
<span class="math-container">$\displaystyle\lim_{x\to\infty}\ln\left(1+\frac{x+1}{x^2+2x+3}\right)=0\;\;\&\;\;\lim_{x\to\infty}\ln(x^2+2x+3)=+\infty$</span></p>
<p>In the end, I got an indeterminate form: <span class="math-container">$\displaystyle\lim_{x\to\infty}1^{x\ln x}=1^{\infty}$</span></p>
<p>Have I made a mistake anywhere? It seems suspicious.</p>
<p>Added: replacing <span class="math-container">$\frac{x+1}{x^2+2x+3}$</span> with <span class="math-container">$\frac{1}{x}$</span> wasn't appealing either.</p>
<p>Would:<span class="math-container">$$\lim_{x\to\infty}\Big(\Big(1+\frac{1}{x}\Big)^x\Big)^{\ln x}=x=\infty$$</span>
be wrong?</p>
<p>// A few days after users had provided hints and answered the question, we discussed this with our assistant and he suggested a standard (textbook) formula that can also be applied (essentially the last step in the methods provided in the answers I received): if <span class="math-container">$$\lim_{x\to c}f(x)=1\;\&\;\lim_{x\to c}g(x)=\pm\infty$$</span> then <span class="math-container">$$\lim_{x\to c}f(x)^{g(x)}=e^{\lim_{x\to c}(f(x)-1)g(x)} //$$</span></p>
| bjorn93 | 570,684 | <p>Use <span class="math-container">$\lim_{t\to 1}\frac{\ln(t)}{t-1}=1$</span> twice: first for <span class="math-container">$t=\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}$</span> and then for <span class="math-container">$t=\frac{x^2+3x+4}{x^2+2x+3}$</span>. The rest is simplification.
<span class="math-container">$$\begin{align}
\lim_{x\to\infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln x} &=
\exp\left(\lim_{x\to\infty}\ln\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)x\ln x\right) \\
&= \exp\left(\lim_{x\to\infty}\frac{\ln\frac{x^2+3x+4}{x^2+2x+3}}{\ln(x^2+2x+3)}x\ln x\right) \\
&=\exp\left(\lim_{x\to\infty}\frac{(x+1)x}{x^2+2x+3}\frac{\ln x}{\ln(x^2+2x+3)}\right)
\end{align} $$</span>
We have
<span class="math-container">$$\frac{(x+1)x}{x^2+2x+3}=\frac{x^2+x}{x^2+2x+3}\to 1 $$</span>
For the other fraction, use that <span class="math-container">$\ln(x^2+2x+3)=\ln(x^2(1+2/x+3/x^2))=2\ln x+\ln(1+2/x+3/x^2)$</span>
<span class="math-container">$$\frac{\ln x}{\ln(x^2+2x+3)}=\frac{\ln x}{2\ln x+\ln(1+2/x+3/x^2)}=\frac{1}{2+\frac{\ln(1+2/x+3/x^2)}{\ln x}}\to \frac 12 $$</span>
So the limit is <span class="math-container">$e^{\frac 12 \cdot 1}=\sqrt{e}$</span>.</p>
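<p>As a numeric cross-check (just an illustration of the convergence, with arbitrary sample points), the expression settles toward <span class="math-container">$\sqrt e\approx 1.6487$</span>:</p>
<pre><code>from mpmath import mp, mpf, log, sqrt, e

mp.dps = 30
def F(x):
    x = mpf(x)
    return (log(x**2 + 3*x + 4) / log(x**2 + 2*x + 3))**(x * log(x))

for x in (10**3, 10**6, 10**9):
    print(x, F(x))
print(sqrt(e))   # 1.6487...
</code></pre>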
|
979,144 | <p>I am searching for a formula of sum of binomial coefficients $^{n}C_{k}$ where $k$ is fixed but $n$ varies in a given range? Does any such formula exist?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
With $\ds{n_{1} \geq n_{0}}$ and the identity
$\ds{{m \choose s}
\equiv
\oint_{\verts{z}\ =\ 1\ }{\pars{1 + z}^{m} \over z^{s + 1}}
\,{\dd z \over 2\pi\ic}\,,\quad s = 0,1,2,3,\ldots}$:</p>
<blockquote>
<p>\begin{align}
&\color{#66f}{\large\sum_{n\ =\ n_{0}}^{n_{1}}{n \choose k}}
=\sum_{n\ =\ n_{0}}^{n_{1}}\oint_{\verts{z}\ =\ 1\ }{\pars{1 + z}^{n} \over z^{k + 1}}\,{\dd z \over 2\pi\ic}
=\oint_{\verts{z}\ =\ 1\ }{1 \over z^{k + 1}}\sum_{n\ =\ n_{0}}^{n_{1}}\pars{1 + z}^{n}\,{\dd z \over 2\pi\ic}
\\[5mm]&=\oint_{\verts{z}\ =\ 1\ }{1 \over z^{k + 1}}
\pars{1 + z}^{n_{0}}\,{\pars{1 + z}^{n_{1} - n_{0} + 1} - 1 \over \pars{1 + z} - 1}\,{\dd z \over 2\pi\ic}
\\[5mm]&=\oint_{\verts{z}\ =\ 1\ }{\pars{1 + z}^{n_{1} + 1} \over z^{k + 2}}
\,{\dd z \over 2\pi\ic}
-\oint_{\verts{z}\ =\ 1\ }{\pars{1 + z}^{n_{0}} \over z^{k + 2}}
\,{\dd z \over 2\pi\ic}
\end{align}</p>
</blockquote>
<p>$$
\color{#66f}{\large\sum_{n\ =\ n_{0}}^{n_{1}}{n \choose k}}
=\color{#66f}{\large{n_{1} + 1 \choose k + 1} - {n_{0} \choose k + 1}}
$$</p>
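<p>A brute-force confirmation of the identity in Python, with arbitrary small parameters:</p>
<pre><code>from math import comb

n0, n1, k = 4, 20, 3
lhs = sum(comb(n, k) for n in range(n0, n1 + 1))
rhs = comb(n1 + 1, k + 1) - comb(n0, k + 1)
print(lhs, rhs, lhs == rhs)   # 5984 5984 True
</code></pre>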
|
1,891,496 | <p>For example, can we say: $\infty=\lim\limits_{n\rightarrow\infty} n < \aleph_0$?</p>
<p>These are two different types of structures. The limit being like the length, extension, or just generic magnitude and the other being cardinality of a set. Can we compare magnitude to cardinality?</p>
<p>Intuitively, we can reach $\aleph_0$ by counting the natural numbers on the number line and in the process will be approaching $\infty$. Which leads me to believe $\infty\leq\aleph_0$. But I can't see why it should be a strict inequality. I feel like they should be of equal magnitude.</p>
<p>I saw in a recent comment that $2^\infty=\infty$, but are those infinities really the same? It seems not to me. Of course we (usually) have that $2^{\aleph_0}=\aleph_1$ where ${\aleph_0}$ and $\aleph_1$ are clearly two very distinct infinities, countable vs uncountable at least. Maybe one might argue that as far as the concept of magnitude is concerned, all infinities are "equal".</p>
| jdods | 212,426 | <p>To summarize information I've gotten from the existing answers and discussion in comments:</p>
<ul>
<li>Cardinals and real numbers are not comparable with the standard relations for real numbers nor with those for cardinals (e.g. the usual $<$ for real numbers is not the same relation as the $<$ for cardinals).</li>
<li>$\infty$ is not a cardinal either and so isn't comparable to cardinals</li>
<li>One could probably define any arbitrary relation they want between $\aleph_0$ and $\infty$ and it would be of no consequence to mathematics.</li>
</ul>
<p>===</p>
<p>Now, how about the following:</p>
<p>Let $\infty$ represent the length of the real line. We have that $1=\mu\left([n,n+1]\right)$ the length of each segment between consecutive integers for $\mu$ the standard length measure. </p>
<p>Thus the length of the real line is $\infty=\displaystyle\sum_{i\in\mathbb Z}1$. </p>
<p>Since there are exactly $\aleph_0$ unit length intervals for consecutive integers (and exactly $\aleph_0$ consecutive intervals of any finite length, of course), then to get the length of the real line, we just count these unit intervals, hence the length of the real line would be $\aleph_0$ if we were to allow $\aleph_0$ to represent a spatial magnitude.</p>
<p>So the only reasonable/natural comparison would be $\infty=\aleph_0$ if one were to make a comparison. <strong>NOTE:</strong> The $=$ used here is not the equals sign used to show identity of real numbers! Nor is it the equals sign used to equate cardinals! </p>
|
1,243,661 | <p>Let $\Theta$ be an unknown random variable with mean $1$ and variance $2$. Let $W$ be another unknown random variable with mean $3$ and variance $5$. $\Theta$ and $W$ are independent.</p>
<p>Let: $X_1=\Theta+W$ and $X_2=2\Theta+3W$. We pick measurement $X$ at random, each having probability $\frac{1}{2}$ of being chosen. This choice is independent of everything else.</p>
<p>How does one calculate $Var(X)$ in this case?
Is </p>
<p>$$
Var(X)\;\; = \;\; \frac{1}{2}(Var(\Theta)+Var(W))+\frac{1}{2}(Var(2\Theta)+Var(3W)) \;\; =\;\; \frac{1}{2}(5Var(\Theta)+10Var(W))?
$$</p>
| grand_chat | 215,011 | <p>The choice between $X_1$ and $X_2$ is a coin toss independent of everything else. Let $Y$ equal 1 if $X_1$ is chosen, and 0 if $X_2$ is chosen. Then
$$
X = X_1 Y + X_2(1-Y)\;.
$$
Calculate
$$E(X) = E(X_1Y)+ E[X_2(1-Y)] = 0.5 (EX_1+EX_2)
$$
and
$$E(X^2) = E(X_1^2Y^2) + 2E(X_1X_2Y(1-Y)) + E[X_2^2(1-Y)^2] = 0.5(EX_1^2+EX_2^2)
$$
using independence between $Y$ and everything else, and the facts $Y^2=Y$, $Y(1-Y)=0$, $(1-Y)^2=1-Y$, and $EY=E(1-Y)=0.5$.</p>
<p>Finally calculate the variance using
$$\text{var}(X)=E(X^2)-[E(X)]^2\;.$$</p>
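<p>Plugging the stated moments into these formulas gives $E(X)=7.5$, $E(X^2)=98.5$, hence $\text{var}(X)=42.25$. A Monte Carlo sanity check (the normal distributions are an arbitrary choice; only the stated moments matter for the variance):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
N = 10**7
theta = rng.normal(1, np.sqrt(2), N)      # mean 1, variance 2
w = rng.normal(3, np.sqrt(5), N)          # mean 3, variance 5
y = rng.integers(0, 2, N)                 # the independent fair coin Y
x = np.where(y == 1, theta + w, 2*theta + 3*w)
print(x.var())                            # about 42.25
</code></pre>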
|
3,852,877 | <p>Is it possible to solve the eigenvalue problem of <span class="math-container">$$y''(x) - 2\gamma\, y'(x) + [\lambda^2 + \gamma^2 - (\frac{x^2}{2}+\alpha)^2 + x]\, y(x)=0$$</span> where <span class="math-container">$\lambda$</span> is the eigenvalue and <span class="math-container">$\alpha,\gamma$</span> are parameters. The boundary condition is <span class="math-container">$y(\pm\infty)=0$</span>.</p>
<p>Or instead of the eigenvalue problem, can I just solve it with freely running <span class="math-container">$\lambda$</span>? Then probably I can tackle the eigenproblem by imposing the boundary condition. Hopefully, it is related to or can be transformed into some classical form with special function?</p>
| doraemonpaul | 30,938 | <p><span class="math-container">$y''(x)-2\gamma y'(x)+\left(\lambda^2+\gamma^2-\left(\dfrac{x^2}{2}+\alpha\right)^2+x\right)y(x)=0$</span></p>
<p><span class="math-container">$y''(x)-2\gamma y'(x)-\left(\dfrac{x^4}{4}+\alpha x^2-x+\alpha^2-\lambda^2-\gamma^2\right)y(x)=0$</span></p>
<p>Let <span class="math-container">$y(x)=e^{nx^3}u(x)$</span> ,</p>
<p>Then <span class="math-container">$y'(x)=e^{nx^3}u'(x)+3nx^2e^{nx^3}u(x)$</span></p>
<p><span class="math-container">$y''(x)=e^{nx^3}u''(x)+3nx^2e^{nx^3}u'(x)+3nx^2e^{nx^3}u'(x)+(9n^2x^4+6nx)e^{nx^3}u(x)=e^{nx^3}u''(x)+6nx^2e^{nx^3}u'(x)+(9n^2x^4+6nx)e^{nx^3}u(x)$</span></p>
<p><span class="math-container">$\therefore e^{nx^3}u''(x)+6nx^2e^{nx^3}u'(x)+(9n^2x^4+6nx)e^{nx^3}u(x)-2\gamma(e^{nx^3}u'(x)+3nx^2e^{nx^3}u(x))-\left(\dfrac{x^4}{4}+\alpha x^2-x+\alpha^2-\lambda^2-\gamma^2\right)e^{nx^3}u(x)=0$</span></p>
<p><span class="math-container">$u''(x)+(6nx^2-2\gamma)u'(x)+\left(\dfrac{(36n^2-1)x^4}{4}-(6\gamma n+\alpha)x^2+(6n+1)x-\alpha^2+\lambda^2+\gamma^2\right)u(x)=0$</span></p>
<p>Choose <span class="math-container">$36n^2-1=0$</span> , i.e. <span class="math-container">$n=-\dfrac{1}{6}$</span> , the ODE becomes</p>
<p><span class="math-container">$u''(x)-(x^2+2\gamma)u'(x)-((\alpha-\gamma)x^2+\alpha^2-\lambda^2-\gamma^2)u(x)=0$</span></p>
<p>Let <span class="math-container">$u(x)=e^{kx}v(x)$</span> ,</p>
<p>Then <span class="math-container">$u'(x)=e^{kx}v'(x)+ke^{kx}v(x)$</span></p>
<p><span class="math-container">$u''(x)=e^{kx}v''(x)+ke^{kx}v'(x)+ke^{kx}v'(x)+k^2e^{kx}v(x)=e^{kx}v''(x)+2ke^{kx}v'(x)+k^2e^{kx}v(x)$</span></p>
<p><span class="math-container">$\therefore e^{kx}v''(x)+2ke^{kx}v'(x)+k^2e^{kx}v(x)-(x^2+2\gamma)(e^{kx}v'(x)+ke^{kx}v(x))-((\alpha-\gamma)x^2+\alpha^2-\lambda^2-\gamma^2)e^{kx}v(x)=0$</span></p>
<p><span class="math-container">$v''(x)-(x^2+2\gamma-2k)v'(x)-((\alpha-\gamma+k)x^2+\alpha^2-\lambda^2-\gamma^2-k^2+2\gamma k)v(x)=0$</span></p>
<p>Choose <span class="math-container">$k=\gamma-\alpha$</span> , the ODE becomes</p>
<p><span class="math-container">$v''(x)-(x^2+2\alpha)v'(x)+\lambda^2v(x)=0$</span></p>
<p>Which relates to <a href="http://dlmf.nist.gov/31.12#E4" rel="nofollow noreferrer">Heun's Triconfluent Equation</a>.</p>
<p>Alternatively, choose <span class="math-container">$n=\dfrac{1}{6}$</span> , the ODE becomes</p>
<p><span class="math-container">$u''(x)+(x^2-2\gamma)u'(x)-((\alpha+\gamma)x^2-2x+\alpha^2-\lambda^2-\gamma^2)u(x)=0$</span></p>
<p>Choose another <span class="math-container">$k=\alpha+\gamma$</span>, and the ODE simplifies to</p>
<p><span class="math-container">$v''(x)+(x^2+2\alpha)v'(x)+(2x+\lambda^2)v(x)=0$</span></p>
<p>Which relates to <a href="http://dlmf.nist.gov/31.12#E4" rel="nofollow noreferrer">Heun's Triconfluent Equation</a>.</p>
|
3,852,877 | <p>Is it possible to solve the eigenvalue problem of <span class="math-container">$$y''(x) - 2\gamma\, y'(x) + [\lambda^2 + \gamma^2 - (\frac{x^2}{2}+\alpha)^2 + x]\, y(x)=0$$</span> where <span class="math-container">$\lambda$</span> is the eigenvalue and <span class="math-container">$\alpha,\gamma$</span> are parameters. The boundary condition is <span class="math-container">$y(\pm\infty)=0$</span>.</p>
<p>Or instead of the eigenvalue problem, can I just solve it with freely running <span class="math-container">$\lambda$</span>? Then probably I can tackle the eigenproblem by imposing the boundary condition. Hopefully, it is related to or can be transformed into some classical form with special function?</p>
| Disintegrating By Parts | 112,478 | <p>Starting with
<span class="math-container">$$
y''(x) - 2\gamma\, y'(x) + [\lambda^2 + \gamma^2 - (\frac{x^2}{2}+\alpha)^2 + x]\, y(x)=0,
$$</span>
let <span class="math-container">$y = e^{\gamma x} f$</span>. Then the above is reduced to potential form
<span class="math-container">$$
(e^{\gamma x}f''+2\gamma e^{\gamma x}f'+\gamma^2e^{\gamma x}f)
-2\gamma(e^{\gamma x}f'+\gamma e^{\gamma x}f)
+[\lambda^2 + \gamma^2 - (\frac{x^2}{2}+\alpha)^2 + x]e^{\gamma x}f=0 \\
f''+(\lambda^2-(\frac{x^2}{2}+\alpha)^2+x)f=0.
$$</span>
This may be written as an eigenfunction problem in standard potential form:
<span class="math-container">$$
-f''+\left((\frac{x^2}{2}+\alpha)^2-x\right)f=\lambda^2 f.
$$</span></p>
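<p>One can let SymPy confirm the reduction (a mechanical check of the substitution, nothing more):</p>
<pre><code>import sympy as sp

x, lam, gam, alpha = sp.symbols('x lambda gamma alpha')
f = sp.Function('f')
y = sp.exp(gam * x) * f(x)
ode = (sp.diff(y, x, 2) - 2*gam*sp.diff(y, x)
       + (lam**2 + gam**2 - (x**2/2 + alpha)**2 + x) * y)
print(sp.simplify(sp.expand(ode) / sp.exp(gam * x)))
# all gamma terms cancel, leaving f'' + (lambda^2 - (x^2/2 + alpha)^2 + x) f
</code></pre>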
|
41,883 | <p>Let $G$ be an abelian group, $A$ a trivial $G$-module. We know that $\text{Ext}(G,A)$ classifies abelian extensions of $G$ by $A$, whereas $H^2(G,A)$ classifies central extensions of $G$ by $A$. So we have a canonical inclusion $\text{Ext}(G,A)\hookrightarrow H^2(G,A)$. Is there some naturally arising exact sequence/spectral sequence which realizes this injection?</p>
<p>Usually this kind of thing can be explained by constructing a clever short exact sequence, but here I have no idea how one might compare $R^1\text{Hom}_\mathbb{Z}(G,\underline{\quad})$ with $R^2\text{Hom}_G(\mathbb{Z},\underline{\quad})$.</p>
| Torsten Ekedahl | 4,008 | <p>You get a description from the universal coefficient theorem which gives a (split) exact sequence
$$
0\to \mathrm{Ext}(H_1(G),A) \to H^2(G,A) \to \mathrm{Hom}(H_2(G),A) \to 0
$$
and the fact that $H_1(G)=G$. We have that $H_2(G)=\Lambda^2G$ and the map $H^2(G,A) \to \mathrm{Hom}(H_2(G),A)$ associates to an extension its commutator map.</p>
|
3,963,479 | <p>In a quadrilateral <span class="math-container">$ABCD$</span>, there is an inscribed circle centered at <span class="math-container">$O$</span>. Let <span class="math-container">$F,N,E,M$</span> be the points on the circle that touch the quadrilateral, such that <span class="math-container">$F$</span> is on <span class="math-container">$AB$</span>, <span class="math-container">$N$</span> is on <span class="math-container">$BC$</span>, and so on. It is known that <span class="math-container">$AF=5$</span> and <span class="math-container">$EC=3$</span>. Let <span class="math-container">$P$</span> be the intersection of <span class="math-container">$AC$</span> and <span class="math-container">$MN$</span>. Find the ratio <span class="math-container">$AP:PC$</span>.</p>
<p>I know that <span class="math-container">$AM=AF=5$</span> and <span class="math-container">$CN=CE=3.$</span> The answer is equivalent to the ratio of the areas <span class="math-container">$[ADP]:[DPC]$</span>. I cannot continue on from this point. Would anyone please help?<a href="https://i.stack.imgur.com/LXdXy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LXdXy.png" alt="enter image description here" /></a></p>
| jlammy | 304,635 | <p>Here's a ridiculously high-powered solution, because why not :)</p>
<p>By Brianchon's theorem on the "hexagon" <span class="math-container">$ABCEDM$</span>, we get that <span class="math-container">$AE, BD, CM$</span> concur. So by Ceva's theorem,
<span class="math-container">$$\frac{AP}{PC}\cdot\frac{CE}{ED}\cdot\frac{DM}{MA}=1\implies\frac{AP}{PC}=\frac{MA}{CE}=\frac{5}{3}.$$</span></p>
|
362,854 | <blockquote>
<p>Show that every subgroup of $Q_8$ is normal.</p>
</blockquote>
<p>Is there any sophisticated way to do this? I mean, without needing to calculate everything out.</p>
| Cameron Buie | 28,900 | <p>Here, I consider $$Q_8=\langle -1,i,j,k\mid (-1)^2=1,i^2=j^2=k^2=ijk=-1\rangle.$$ Note that $-1$ commutes with everything, and that all other non-identity elements have order $4$, so their cyclic subgroups have index $2$, and are <a href="https://math.stackexchange.com/a/84663/28900">therefore</a> normal subgroups. Trivial subgroups are always normal, so since the non-trivial subgroups of $Q_8$ are $\langle -1\rangle,$ $\langle i\rangle,$ $\langle j\rangle,$ and $\langle k\rangle,$ then we're done.</p>
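<p>One can also verify this mechanically with the $2\times 2$ matrix realization $i=\operatorname{diag}(\mathrm{i},-\mathrm{i})$, $j=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, $k=ij$; a brute-force Python check (not a replacement for the argument above):</p>
<pre><code>import numpy as np

I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j
Q8 = [s * m for m in (I2, i, j, k) for s in (1, -1)]

def is_normal(sub):
    # check g h g^{-1} lands back in sub, for every g in Q8 and h in sub
    return all(any(np.allclose(g @ h @ np.linalg.inv(g), s) for s in sub)
               for g in Q8 for h in sub)

# the nontrivial proper subgroups, generated by -1, i, j and k respectively
subs = [[I2, -I2]] + [[I2, m, -I2, -m] for m in (i, j, k)]
print([is_normal(s) for s in subs])   # [True, True, True, True]
</code></pre>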
|
747,789 | <p>I've been reading some basic classical algebraic geometry, and some authors choose to define the more general algebraic sets as the locus of points in affine/projective space satisfying a finite collection of polynomials $f_1, \dots, f_m$ in $n$ variables without any more restrictions. Then they define an algebraic variety as an algebraic set where $(f_1, \dots, f_m)$ is a prime ideal in $k[x_1, \dots, x_n]$. </p>
<p>My question has two parts: </p>
<ol>
<li><p>I'm guessing the distinction is like any other area of math where you try to break things up into the "irreducible" case and deduce the general case from patching those together. How does that happen with varieties and algebraic sets? Is it correct to conclude that every algebraic set is somehow built from algebraic varieties since the ideal $(f_1, \dots, f_m)$ is contained in some prime (maximal) ideal? </p></li>
<li><p>How can one tell whether or not an algebraic set is a variety intuitively? I know formally you'd have to prove $(f_1, \dots, f_m)$ is prime (or perhaps there are some useful theorems out there?), but many times in texts the author simply states something is a variety without any justification. Is there a way to sort of "eye-ball" varieties in the sense that there are tell-tale signs of algebraic sets which are not varieties? </p></li>
</ol>
<p>Perhaps this is all a moot discussion since modern algebraic geometry is done with schemes and this is perhaps a petty discussion in light of that, but nonetheless, I'd like to understand the foundations before pursuing that.</p>
<p>Thanks. </p>
| Georges Elencwajg | 3,217 | <p>Here are some personal reflections on the role of schemes in algebraic geometry, commenting on your very interesting remark: "Perhaps this is all a moot discussion since modern algebraic geometry is done with schemes".</p>
<p>Grothendieck introduced scheme theory in the late nineteen fifties and the high level of abstraction of that theory had a discouraging effect on even great mathematicians like Néron.<br />
Soon however the incredible power of these new tools allowed people like Deligne, Grothendieck and Faltings to solve problems untouchable by classical methods: Grothendieck and Deligne solved all of the Weil conjectures and Faltings solved conjectures of Mordell, of Shafarevich and of Tate.<br />
Fields medals were of course awarded to Grothendieck, Deligne and Faltings and several other medals were won by mathematicians whose work involved in a fundamental way scheme theory: Lafforgue, Mumford, Ngô, Voevodski,...</p>
<p>For us lesser mortals scheme theory has now become quite accessible thanks to the didactic efforts of pioneers like Mumford and Hartshorne and then thanks to the excellent textbooks by Eisenbud-Harris, Liu, Görtz-Wedhorn,... and on-line notes like Vakil's splendid <a href="http://math.stanford.edu/%7Evakil/216blog/FOAGjun1113public.pdf" rel="nofollow noreferrer"><em>Foundations of Algebraic Geometry</em></a>.</p>
<p>Algebraic varieties retain an enormous place in contemporary algebraic geometry.<br />
However an overwhelming part of the results in that field are obtained with the help of scheme theory.<br />
So every algebraic geometer must at some time learn scheme theory, but the best way to approach algebraic geometry is through one of the many introductory books written in the classical language by experts (who of course also master scheme theory!) like Fulton, Hulek, Perrin, Reid, ...</p>
|
3,112,682 | <p>I was looking at</p>
<blockquote>
<p><em>Izzo, Alexander J.</em>, <a href="http://dx.doi.org/10.2307/2159282" rel="nofollow noreferrer"><strong>A functional analysis proof of the existence of Haar measure on locally compact Abelian groups</strong></a>, Proc. Am. Math. Soc. 115, No. 2, 581-583 (1992). <a href="https://zbmath.org/?q=an:0777.28006" rel="nofollow noreferrer">ZBL0777.28006</a>.</p>
</blockquote>
<p>which proves existence of the Haar-measure for locally compact abelian groups using the Markov-Kakutani theorem. </p>
<p>What I find strange is that the Haar measure is constructed as an element of the dual of <span class="math-container">$C_c(X)$</span>. But for noncompact <span class="math-container">$X$</span> (such as <span class="math-container">$X$</span> being the real numbers <span class="math-container">$\Bbb R$</span>) this must be an unbounded functional (as the Lebesgue-measure on <span class="math-container">$\Bbb R$</span> is not finite). It seems like the author has no problem with this, and (without mentioning it further) goes on to define a weak-* topology for this case and even uses Banach-Alaoglu.</p>
<p>I have not seen this being done this way before, am I misunderstanding something or can one define a weak-* topology on the algebraic dual of a TVS without any problems?</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> <span class="math-container">$2\cos\theta=e^{i\theta}+e^{-i\theta}.$</span></p>
|
273,219 | <p>I have been studying for the AP BC Calculus exam (see this <a href="https://math.stackexchange.com/questions/272628/how-know-which-direction-a-particle-is-moving-on-a-polar-curve">previous question</a>) and most of the questions that deal with the first derivative in polar coordinates say that if ${dr\over d\theta}<0$ and $r>0$, then the graph (in polar coordinates) is moving closer to the origin. </p>
<p>What about $r = 4-\theta$, which has $\frac{\mathrm{d}r}{\mathrm{d}\theta} = -1$? </p>
<p>Here is the graph: </p>
<p><img src="https://i.stack.imgur.com/4L1Ct.gif" alt="wolfram alpha"></p>
<p>Does this disprove the statement?</p>
| Alexander Gruber | 12,952 | <p>It's pretty hard to answer this question when 'familiar' isn't defined further.</p>
<p>Every finite group (and thus every permutation group) has a composition series, which is <a href="http://mathworld.wolfram.com/Jordan-HoelderTheorem.html" rel="nofollow noreferrer">unique</a> in the sense that the length and composition factors of any two composition series are the same up to permutation and isomorphism. If you define familiar groups to be simple groups, then these quotients are the familiar pieces you're looking for.</p>
<p>Defining familiar as simple is a stretch, though. You'd be hard pressed to find someone who found <a href="http://en.wikipedia.org/wiki/Finite_simple_group#Held_group_He" rel="nofollow noreferrer">the Held group</a> especially recognizable.</p>
<p>The other problem is that two nonisomorphic groups can have the same composition factors. The factors do help you break the group into smaller, more recognizable chunks, but they don't tell you how those chunks interact.</p>
<p>There are so many variations on combinations of smaller groups that it is difficult to imagine how we could recognize them all without group presentations. Another way of breaking down a group into recognizable groups is by looking at its Sylow subgroups and seeing how they interact (this is <em>local group theory</em>). But for this, we need to understand $p$-groups enough to call them recognizable. My answer to <a href="https://math.stackexchange.com/a/241381/12952">this question</a> should give you an idea of the magnitude of what we're dealing with just in the world of $p$-groups.</p>
<p>Take for example <a href="http://groupprops.subwiki.org/wiki/SmallGroup%2816,3%29" rel="nofollow noreferrer">$\text{SmallGroup}(16,3)$</a>. There is no other name for that group, as far as I know. It is isomorphic to $(\mathbb{Z}_4\times\mathbb{Z}_2)\rtimes \mathbb{Z}_2$, which shows us it can be split into those recognizable pieces. This decomposition uses semidirect products, however, which is basically the same as the relations / presentation concept we are trying to avoid.</p>
<p>So in summary, there are many a lot of different ways to divide a group up into pieces that are as familiar as possible, to gain understanding about it and how it works. However, to achieve a full description of the group, most of the time we need to use specific relations in a group presentation to show the way those pieces fit together.</p>
|
221,729 | <p>Till now, I have proved followings;</p>
<p>Suppose $X,Y$ are metric spaces and $E$ is dense in $X$ and $f:E\rightarrow Y$ is uniformly continuous. Then,</p>
<ol>
<li><p>$Y=\mathbb{R}^k \Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is compact $\Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is complete $\Rightarrow \exists$ a continuous extension. (AC$_\omega$)</p></li>
<li><p>$E$ is countable & $Y$ is complete $\Rightarrow \exists$ a continuous extension.</p></li>
</ol>
<p>Which of these remain true and which become false if $f$ is assumed merely continuous, not uniformly continuous?</p>
| Hagen von Eitzen | 39,174 | <p>Big-Oh is not completely determined by derivatives. For example $\sin(x^2)\in O(1)$ but the derivative $2x\cos(x^2)$ is unbounded. </p>
<p>The claim that $f(n)\le g(n)$ implies $f(n)\in O(g(n))$ is false: Consider $g(n)=n$, $f(n)=-n^2$. But if you replace the condition with $|f(n)|\le g(n)$ then the claim is easy: That is almost the definition of $f(n)\in O(g(n))$. And of course trivially $g(n)\in O(g(n))$. Then since Big-Ohs are closed under addition, also $f(n)+g(n)\in O(g(n))$.</p>
|
221,729 | <p>Till now, I have proved followings;</p>
<p>Suppose $X,Y$ are metric spaces and $E$ is dense in $X$ and $f:E\rightarrow Y$ is uniformly continuous. Then,</p>
<ol>
<li><p>$Y=\mathbb{R}^k \Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is compact $\Rightarrow \exists$ a continuous extension.</p></li>
<li><p>$Y$ is complete $\Rightarrow \exists$ a continuous extension. (AC$_\omega$)</p></li>
<li><p>$E$ is countable & $Y$ is complete $\Rightarrow \exists$ a continuous extension.</p></li>
</ol>
<p>Which of these remain true and which become false if $f$ is assumed merely continuous, not uniformly continuous?</p>
| Alex | 38,873 | <p>One more easy way of looking at this type of problem is noticing that if $f(n) \leq g(n)$ then
$$
f(n)+g(n) \leq g(n) +g(n)=2g(n)= O(g(n))
$$
In fact it is easy to show that this sum is $\Theta(g(n))$</p>
|
4,120,827 | <p>Let's assume <span class="math-container">$P_1=(x_1, y_1)$</span> and <span class="math-container">$P_2=(x_2, y_2)$</span> and <span class="math-container">$P_3=(x_3, y_3)$</span>.</p>
<p>How to find the closest distance between <span class="math-container">$P_3$</span> and the line segment between <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>?</p>
<p>I tried using the formula <span class="math-container">$\frac{area(P_1, P_2, P_3)}{distance(P_1, P_2)}$</span>:
<span class="math-container">$$\operatorname{distance}(P_1, P_2, P_3) = \frac{|(x_2-x_1)(y_1-y_3)-(x_1-x_3)(y_2-y_1)|}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}$$</span></p>
<p>Sadly it seems like it only works for infinite lines, not for line segments like in my case.</p>
| Cewein | 921,323 | <p>There are three regions to take into account:</p>
<p><a href="https://i.stack.imgur.com/uutOo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uutOo.png" alt="enter image description here" /></a></p>
<p>For <span class="math-container">$P_1$</span> the distance is <span class="math-container">$|P_1-B|$</span>. <br>
For <span class="math-container">$P_3$</span> the distance is <span class="math-container">$|P_3-A|$</span>.</p>
<p>To summarise: when the point <span class="math-container">$P$</span> is not between the two perpendicular lines passing through <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, it is just the distance between two points.</p>
<p>The hardest part is <span class="math-container">$P_2$</span>:</p>
<p>Since it lies between the two lines, the distance to the segment is the distance to a point <span class="math-container">$C$</span> on the segment <span class="math-container">$AB$</span>. We can locate <span class="math-container">$C$</span> with a parameter <span class="math-container">$h$</span> (the dotted line) because of how a segment is parameterised:</p>
<p><span class="math-container">$$C = (hx_B + (1-h)x_A,\ hy_B + (1-h)y_A)$$</span></p>
<p>If <span class="math-container">$C$</span> is at <span class="math-container">$B$</span> then <span class="math-container">$h=1$</span>, and <span class="math-container">$h=0$</span> if <span class="math-container">$C$</span> is at <span class="math-container">$A$</span>; in the example <span class="math-container">$h$</span> might be about <span class="math-container">$0.75$</span> or <span class="math-container">$0.8$</span>.</p>
<p>To find <span class="math-container">$h$</span> we compute:</p>
<p><span class="math-container">$$h = \frac{\langle P_2-A,\ B-A\rangle}{|B-A|^2}$$</span></p>
<p>This is the projection of the vector <span class="math-container">$\overrightarrow{AP_2}$</span> onto <span class="math-container">$\overrightarrow{AB}$</span>: we use a dot product for that, and we divide by the length of <span class="math-container">$\overrightarrow{AB}$</span> twice (once to get the projected length, once to normalise <span class="math-container">$h$</span>), hence the power of <span class="math-container">$2$</span>.</p>
<p>Then, to find the distance <span class="math-container">$|P_2-C|$</span>, we compute:</p>
<p><span class="math-container">$$|P_2 - (A + h(B-A))|$$</span></p>
<p>To finish, we can handle all three cases at once by clamping <span class="math-container">$h$</span>, which leads us to:</p>
<p><span class="math-container">$$h = \min\left(1,\max\left(0,\frac{\langle P-A,\ B-A\rangle}{|B-A|^2}\right)\right)$$</span>
<span class="math-container">$$d = |P - A - h(B-A)|$$</span></p>
|
546,701 | <p>Find the number of positive integers $n < 9{,}999{,}999$ for which the sum of the digits of $n$ equals $42$.</p>
<p>Can anyone give me any hints on how to solve this?</p>
| Christian Blatter | 1,303 | <p>Since $0$ has no weight we can consider all numbers as $7$-place decimals. It follows that we have to count the solutions of
$$\sum_{k=1}^7 d_k =42,\qquad 0\leq d_k\leq 9\quad(1\leq k\leq 7)\ .$$
Forgetting about the condition $d_k\leq9$ we have a standard stars-and-bars problem, which has ${42+7-1\choose 7-1}={48\choose6}$ solutions.</p>
<p>Among these solutions there are some with, say, $d_1\geq10$. To count these we count the nonnegative solutions of $\sum_{k=1}^7 d_k=32$ (with the idea to replace in each of these solutions the $d_1$ by $d_1':=d_1+10$ afterwards).
There are ${38\choose6}$ such solutions.</p>
<p>This number of "forbidden solutions" has to be multiplied by $7$, since each one of the $d_k$ could be $\geq10$. But in this way we count the cases where two $d_k$s are $\geq10$ twice, and it becomes apparent that we run into an inclusion-exclusion scheme. It follows that the total number $N$ we are looking for is given by
$$N={48\choose6}-{7\choose1}{38\choose6}+{7\choose2}{28\choose6}-{7\choose3}{18\choose6}+{7\choose4}{8\choose6}=209525\ .$$
(It is impossible that more than $4$ of the $d_k$ are $\geq10$.)</p>
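<p>A brute-force count in Python (slow but direct) confirms the inclusion-exclusion value:</p>
<pre><code>count = sum(1 for n in range(1, 9_999_999)
            if sum(map(int, str(n))) == 42)
print(count)   # 209525
</code></pre>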
|
1,229,467 | <p>I tried substituting the $4$ into the $3x-5$ equation, so my slope would be represented as $3(4)-5 = 7$. Then my equation for the line would be $y-3 = 7(x-4)$. That means the equation of the line would be $y = 7x - 25$. However, I'm trying to submit this answer online and it comes back as incorrect. Any help would be appreciated, thank you.</p>
| Rolf Hoyer | 228,612 | <p>Your equation only satisfies $y'(x) = 3x-5$ at the point $(4,3)$, rather than everywhere. You need to take the antiderivative to get the family of curves $y = (3/2)x^2 - 5x + C$ that satisfy the given differential equation everywhere. Solve for $C$ by plugging in $(4,3)$ to this family, instead.</p>
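<p>If it is of any use, SymPy carries out exactly these steps (here $C=-1$):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), 3*x - 5), y(x), ics={y(4): 3})
print(sol)   # Eq(y(x), 3*x**2/2 - 5*x - 1)
</code></pre>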
|