| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,676,347 | <p>Suppose that $V$ and $W$ are vector spaces, $g:V\rightarrow W$ a linear map. Show that g is surjective $\iff$ For any vector space $U$ the map $g^{\ast}:Hom(W,U) \rightarrow Hom(V,U)$ defined by $g^{\ast}(f) = f\circ g$ is injective.</p>
<p>I've failed too much trying to solve this problem. Any hint could be useful, thank you.</p>
| Bernard | 202,857 | <p>One way is easy: if $g$ is surjective, $g^*$ is injective – in other words, for any $v\colon W\to U$, if $v\circ g=0$, then $v=0$ (since $v$ vanishes on $g(V)=W$).</p>
<p>Conversely, suppose $g^*$ is injective, and consider the canonical linear map $p\colon W \longrightarrow \operatorname{coker} g=W/g(V)$.</p>
<p>By definition, $g^*(p)=p\circ g=0$, hence $p=0$, which means $W=g(V)$, i.e. $g$ is surjective.</p>
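For readers who want a concrete sanity check of the equivalence in the finite-dimensional case (this numerical sketch is my own addition, not part of the answer above): represent $g$ by a matrix $G$, so that $g^{\ast}$ sends $F$ to $FG$; both conditions then become rank conditions.

```python
import numpy as np

def is_surjective(G):
    # g(v) = G @ v maps onto R^m exactly when G has full row rank
    return np.linalg.matrix_rank(G) == G.shape[0]

def pullback_is_injective(G, k=2):
    # Represent g* : F |-> F @ G (with F of shape k x m) as one big linear
    # map on vec(F), via vec(F @ G) = kron(G.T, I_k) @ vec(F); then g* is
    # injective iff this matrix has trivial null space, i.e. full rank k*m.
    m = G.shape[0]
    M = np.kron(G.T, np.eye(k))
    return np.linalg.matrix_rank(M) == k * m

G_onto = np.array([[1., 0., 1.],
                   [0., 1., 2.]])  # rank 2 = dim W, so g is surjective
G_not = np.array([[1., 2., 3.],
                  [2., 4., 6.]])   # rank 1 < 2, so g is not surjective

print(is_surjective(G_onto), pullback_is_injective(G_onto))  # True True
print(is_surjective(G_not), pullback_is_injective(G_not))    # False False
```

The check works because $\operatorname{rank}(G^\top\otimes I_k)=k\,\operatorname{rank}(G)$, so the two conditions agree regardless of which target space $U=\mathbb{R}^k$ is used.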
|
1,883,459 | <p>I am trying to make a block diagonal matrix from a given matrix by multiplying the given matrix to some other matrices.
Say $A$ is an $N \times N$ matrix; I want to make an $A^\prime$ matrix of size $kN \times kN$ such that $A^\prime$ has $A$ as its diagonal block $k$ times. In fact $A^\prime$ is the direct sum of $A$: $A^\prime = \bigoplus_{i =1}^{k} A$.</p>
<p>What I am looking for is what the elements of $B$ and $C$ should be to have
$$ BAC =A^\prime . $$</p>
<p><strong>Use case:</strong> Using $A^\prime$ in a linear optimization problem if $A$ can be transformed to $A^\prime$ linearly.</p>
| Med | 261,160 | <p>It is not possible to find such matrices $B$ and $C$ in general. For a simple counterexample, take $N=1$ and $A=(1)$, so that $A^\prime$ is the $k\times k$ identity matrix $I$. We would then need</p>
<p>$B_{k×1}\cdot 1_{1×1}\cdot C_{1×k}=I_{k×k}$</p>
<p>But here $B$ is a column vector and $C$ is a row vector, so $BC$ has rank at most $1$, whereas the identity matrix has full rank $k$. More generally, $\operatorname{rank}(BAC)\le\operatorname{rank}(A)$ while $\operatorname{rank}(A^\prime)=k\operatorname{rank}(A)$, so $BAC=A^\prime$ is impossible whenever $A\ne 0$ and $k>1$. </p>
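As a numerical illustration of the rank obstruction (my own sketch, not part of the original answer), note that multiplying by fixed matrices can never raise rank:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 3, 4
A = rng.standard_normal((N, N))      # a generic (invertible) A
B = rng.standard_normal((k * N, N))  # arbitrary B of the required shape
C = rng.standard_normal((N, k * N))  # arbitrary C of the required shape

A_block = np.kron(np.eye(k), A)      # the direct sum A' = A ⊕ ... ⊕ A, k copies

# rank(B A C) <= rank(A) = N, while rank(A') = kN, so BAC can never equal A'
print(np.linalg.matrix_rank(B @ A @ C) <= N)      # True
print(np.linalg.matrix_rank(A_block) == k * N)    # True, since A is invertible
```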
|
2,463,565 | <p>I want to use the fact that for a $(n \times n)$ nilpotent matrix $A$, we have that $A^n=0$, but we haven't yet introduced the minimal polynomials -if we had, I know how to prove this.</p>
<p>The definition for a nilpotent matrix is that there exists some $k\in \mathbb{N}$ such that $A^k=0$.</p>
<p>Any ideas?</p>
| Ben Grossmann | 81,360 | <p>Note that if $\operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1})$, then $\operatorname{rank}(A^j) = \operatorname{rank}(A^k)$ for all $j \geq k$. To see that this is the case, note that if $\operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1})$, then the restriction of $A$ to the image of $A^{k}$ is an invertible map.</p>
<p>Thus, if $A$ is nilpotent and if $A^{n-1} \neq 0$, we must have
$$
n > \operatorname{rank}(A) > \operatorname{rank}(A^2) > \cdots > \operatorname{rank}(A^n)
$$
and of course, the rank of any matrix is a nonnegative integer, so this chain of $n$ strict inequalities starting below $n$ forces $\operatorname{rank}(A^n)=0$, i.e. $A^n=0$.</p>
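The strictly decreasing chain of ranks is easy to watch numerically; here is a small sketch (my own addition) using the $5\times 5$ nilpotent Jordan block:

```python
import numpy as np

n = 5
A = np.diag(np.ones(n - 1), k=1)   # the n x n nilpotent Jordan block (shift matrix)

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A, j))
         for j in range(1, n + 1)]
print(ranks)   # [4, 3, 2, 1, 0]: strictly decreasing until it reaches 0 at j = n
```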
|
1,555,429 | <p>Hi, I am trying to find the sum of this series:</p>
<p>$$
11 + 2 + \frac 4 {11} + \frac 8 {121} + \cdots
$$</p>
<p>I know it's a geometric series, but I cannot find the pattern. </p>
| Olivier Oloa | 118,798 | <ul>
<li>If your series is</li>
</ul>
<p>$$
S_1=11+11\times\sum_{n=1}^{\infty}\frac{2^n}{11^{n}}
$$
then you may use </p>
<blockquote>
<p>$$
x+x^2+\cdots+x^n+\cdots=\frac{x}{1-x}, \quad |x|<1, \tag1
$$ </p>
</blockquote>
<p>with $x=\dfrac2{11}$.</p>
<ul>
<li>If your series is</li>
</ul>
<p>$$
S_2=11+2+\sum_{n=1}^{\infty}\frac{4n}{11^{n}}
$$
then differentiating $(1)$ termwise and multiplying by $x$ you get</p>
<blockquote>
<p>$$
x+2x^2+3x^3+\cdots+nx^n+\cdots=\frac{x}{(1-x)^2}, \quad |x|<1, \tag2
$$ </p>
</blockquote>
<p>giving, with $x=\dfrac1{11}$,
$$
\sum_{n=1}^{\infty}\frac{n}{11^{n}}=\frac{11}{100}
$$ </p>
<p>It is now pretty easy to obtain $S_2$.</p>
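Both closed forms can be sanity-checked against partial sums; this quick numerical check is my own addition. Under the first reading, $S_1=11+\frac{11\cdot 2/11}{1-2/11}=\frac{121}{9}$; under the second, $S_2=13+4\cdot\frac{11}{100}=13.44$.

```python
# Closed forms from (1) and (2) versus direct partial sums
S1_closed = 11 + 11 * (2 / 11) / (1 - 2 / 11)        # = 11 + 22/9 = 121/9
S1_partial = 11 + 11 * sum((2 / 11) ** n for n in range(1, 50))

S2_closed = 13 + 4 * (11 / 100)                      # = 13.44, using (2) at x = 1/11
S2_partial = 13 + sum(4 * n / 11 ** n for n in range(1, 50))

print(S1_closed, abs(S1_closed - S1_partial) < 1e-9)   # 13.444... True
print(S2_closed, abs(S2_closed - S2_partial) < 1e-9)   # 13.44 True
```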
|
3,361,153 | <p>I am given two boolean expressions<br>
1) <span class="math-container">$x_1 \wedge x_2 \wedge x_3$</span><br>
2) <span class="math-container">$(x_1 \wedge x_2) \vee (x_3 \wedge x_4)$</span> </p>
<p>Now I need to know which expression is trivial and which is non-trivial. What is the procedure for determining this?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Note that <span class="math-container">$$n\equiv 0,1,2\mod 3$$</span> so <span class="math-container">$$n^2\equiv 0,1 \mod 3$$</span></p>
|
3,361,153 | <p>I am given two boolean expressions<br>
1) <span class="math-container">$x_1 \wedge x_2 \wedge x_3$</span><br>
2) <span class="math-container">$(x_1 \wedge x_2) \vee (x_3 \wedge x_4)$</span> </p>
<p>Now I need to know which expression is trivial and which is non-trivial. What is the procedure for determining this?</p>
| Anurag A | 68,092 | <p>By division algorithm, <span class="math-container">$n=3q+r$</span>, where <span class="math-container">$r \in \{0,1,2\}$</span>. Thus
<span class="math-container">$$n^2=9q^2+6qr+r^2=3(3q^2+2qr)+r^2.$$</span>
Since <span class="math-container">$3 \mid n^2$</span>, this implies <span class="math-container">$3 \mid r^2$</span>. But <span class="math-container">$r^2 \in \{0,1,4\}$</span> and the only number satisfying this property is when <span class="math-container">$r^2=0$</span>, same as saying <span class="math-container">$r=0$</span>. Thus <span class="math-container">$n=3q$</span> for some <span class="math-container">$q \in \Bbb{Z}$</span>.</p>
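The congruence facts used in both answers are easy to confirm by brute force (this quick check is my own addition):

```python
# n^2 mod 3 is only ever 0 or 1, and 3 | n^2 forces 3 | n
for n in range(10_000):
    assert (n * n) % 3 in (0, 1)    # never 2 mod 3
    if (n * n) % 3 == 0:
        assert n % 3 == 0           # 3 | n^2  =>  3 | n
print("ok")
```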
|
3,989,591 | <p>I read the following question in a book, but the book does not give a proof. I doubt the correctness of the result.</p>
<p>Let <span class="math-container">$p>3$</span> be a prime number. Prove or disprove:
<span class="math-container">$$(x+1)^{2p^2}\equiv x^{2p^2}+\binom{2p^2}{p^2}x^{p^2}+1\pmod {p^2}\tag{1}$$</span></p>
<p>I think we can use the binomial theorem:
it would suffice to show <span class="math-container">$$\binom{2p^2}{k}\equiv 0\pmod {p^2},\quad k=1,2,\cdots,p^2-1,p^2+1,\cdots,2p^2-1\tag{2}$$</span>
but on the other hand I think (2) is not right, because for <span class="math-container">$p=5,k=5$</span>
we have <span class="math-container">$$\binom{2p^2}{k}=\binom{50}{5}=\dfrac{50\cdot 49\cdot 48\cdot 47\cdot 46}{5\cdot 4\cdot 3\cdot 2\cdot 1}\not\equiv 0\pmod {25},$$</span> so I think <span class="math-container">$(1)$</span> is not right.</p>
| Abhijeet Vats | 426,261 | <p>Yea, so we are partitioning <span class="math-container">$[0,2]$</span> into <span class="math-container">$n$</span> equal pieces each of length <span class="math-container">$\frac{2}{n}$</span>. In particular:</p>
<p><span class="math-container">$$x_i = 0+\frac{2i}{n} = \frac{2i}{n}$$</span></p>
<p>Then, we pick our evaluation set <span class="math-container">$T = \{x_i: i \in \{1,2,\ldots,n\}\}$</span>. So, consider the Riemann Sum:</p>
<p><span class="math-container">$$R(f,P,T) = \sum_{i=1}^{n} f(x_i) \Delta x_i = \sum_{i=1}^{n} e^{\frac{2i}{n}} \frac{2}{n}$$</span></p>
<p>Observe that <span class="math-container">$\sum_{i=1}^{n} \left(e^{\frac{2}{n}} \right)^i$</span> is a geometric sum with common ratio <span class="math-container">$e^{\frac{2}{n}}$</span>. So, we have:</p>
<p><span class="math-container">$$\sum_{i=1}^{n} \left(e^{\frac{2}{n}} \right)^i = \frac{e^{\frac{2}{n}}\left(1-\left(e^{\frac{2}{n}}\right)^{n}\right)}{1-e^{\frac{2}{n}}} = \frac{e^{\frac{2}{n}}\left(1-e^{2}\right)}{1-e^{\frac{2}{n}}}$$</span></p>
<p>Observe that the numerator of this expression goes to <span class="math-container">$1-e^2$</span> as <span class="math-container">$n \to \infty$</span>, since <span class="math-container">$e^{\frac{2}{n}} \to 1$</span>. Then, study the expression:</p>
<p><span class="math-container">$$\frac{\frac{2}{n}}{1-e^{\frac{2}{n}}}$$</span></p>
<p>It is relatively easy to see that as <span class="math-container">$n \to \infty$</span>, this expression goes to <span class="math-container">$-1$</span> (try proving this). So, it is the case that:</p>
<p><span class="math-container">$$\lim_{n \to \infty} R(f,P,T) = e^2-1$$</span></p>
<p>and we are done. Now, alternatively, if you don't want to go that route with it, then there's a faster way to do the problem without having to do this sort of gymnastics.</p>
<p>Since <span class="math-container">$x \mapsto e^x$</span> is continuous on <span class="math-container">$[0,2]$</span>, it is Riemann Integrable there. Let <span class="math-container">$P = \{0 = x_0 < x_1 < \ldots < x_{n-1} < x_n = 2\}$</span> be any partition of <span class="math-container">$[0,2]$</span>. Since <span class="math-container">$x \mapsto e^x$</span> is continuous and differentiable on every interval of <span class="math-container">$\mathbb{R}$</span>:</p>
<p><span class="math-container">$$\forall i: \exists t_i \in (x_{i-1},x_i): e^{t_i} = \frac{e^{x_i}-e^{x_{i-1}}}{\Delta x_i}$$</span></p>
<p>This is just a consequence of the MVT. Then, pick <span class="math-container">$T = \{t_i: i \in \{1,2,\ldots,n\}\}$</span> as your evaluation set and compute the Riemann Sum:</p>
<p><span class="math-container">$$R(f,P,T) = \sum_{i=1}^{n} f(t_i) \Delta x_i = \sum_{i=1}^{n} \left(e^{x_i}-e^{x_{i-1}} \right) = e^2-e^0 = e^2-1$$</span></p>
<p>where we have made use of the fact that we have a telescoping sum. This approach is nice because it actually generalizes and gives us a pretty neat proof of the Fundamental Theorem of Calculus :D</p>
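Both halves of the argument can be checked numerically; this sketch is my own addition. The right-endpoint sum converges to $e^2-1$ as $n$ grows, while the MVT tags make the sum telescope to $e^2-1$ exactly for every $n$:

```python
import math

def riemann_right(n):
    # the right-endpoint Riemann sum from the first half of the answer
    return sum(math.exp(2 * i / n) * (2 / n) for i in range(1, n + 1))

def riemann_mvt(n):
    # the MVT tags turn the sum into a telescoping sum, exact for every n
    xs = [2 * i / n for i in range(n + 1)]
    return sum(math.exp(xs[i]) - math.exp(xs[i - 1]) for i in range(1, n + 1))

target = math.e ** 2 - 1
print(abs(riemann_right(100_000) - target) < 1e-3)   # True: converges as n grows
print(abs(riemann_mvt(10) - target) < 1e-12)         # True: exact for any n
```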
|
3,715,987 | <p>The domain is: <span class="math-container">$\mathbb{R}\smallsetminus\{-1\}$</span></p>
<p>The range is: first we find the inverse of <span class="math-container">$f$</span>:
<span class="math-container">$$x=\frac{y+2}{y^2+2y+1} $$</span>
<span class="math-container">$$x\cdot(y+1)^2-1=y+2$$</span>
<span class="math-container">$$x\cdot(y+1)^2-y=3 $$</span>
<span class="math-container">$$y\left(\frac{(y+1)^2}{y}-\frac{1}{x}\right)=\frac{3}{x} $$</span>
I can't find the inverse... my idea is to find the domain of the inverse, and that would then be the range of the function. How to show otherwise what is the range here?</p>
| Vishu | 751,311 | <p>How about using calculus?</p>
<p><span class="math-container">$$f(x)=\frac{x+2}{(x+1)^2}=\frac{1}{x+1}+\frac{1}{(x+1)^2} \\ f’(x) =\frac{-1}{(x+1)^2}-\frac{2}{(x+1)^3} =0 \\ \implies x=-3$$</span> which you can see clearly corresponds to a minimum. We see <span class="math-container">$f(-3) =-\frac 14$</span>. Now, <span class="math-container">$\lim_{x\to\pm\infty}f(x)=0$</span> and also <span class="math-container">$\lim_{x\to -1} f(x) =+\infty$</span> which means that <span class="math-container">$-\frac 14$</span> is the global minimum. There is no upper bound and hence the range is <span class="math-container">$$\left[-\frac 14,\infty\right)$$</span></p>
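A quick numerical confirmation of the minimum (my own addition, not a substitute for the calculus argument):

```python
# f(-3) = -1/4 is the claimed global minimum; sample widely to check no
# value of f falls below it (avoiding the pole at x = -1)
f = lambda x: (x + 2) / (x + 1) ** 2

print(f(-3))   # -0.25

samples = [x / 100 for x in range(-50_000, 50_000) if abs(x / 100 + 1) > 1e-3]
print(min(f(x) for x in samples) >= -0.25 - 1e-12)   # True
```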
|
3,715,987 | <p>The domain is: <span class="math-container">$\mathbb{R}\smallsetminus\{-1\}$</span></p>
<p>The range is: first we find the inverse of <span class="math-container">$f$</span>:
<span class="math-container">$$x=\frac{y+2}{y^2+2y+1} $$</span>
<span class="math-container">$$x\cdot(y+1)^2-1=y+2$$</span>
<span class="math-container">$$x\cdot(y+1)^2-y=3 $$</span>
<span class="math-container">$$y\left(\frac{(y+1)^2}{y}-\frac{1}{x}\right)=\frac{3}{x} $$</span>
I can't find the inverse... my idea is to find the domain of the inverse, and that would then be the range of the function. How to show otherwise what is the range here?</p>
| Michael Hardy | 11,667 | <p><span class="math-container">$$\text{You wrote: } x=\frac{y+2}{y^2+2y+1} $$</span>
<span class="math-container">$$x(y^2+2y+1)=y+2$$</span>
<span class="math-container">$$
xy^2 + ( 2x-1 )y +(x-2)=0 \tag 1
$$</span>
<span class="math-container">$$
ay^2 + by + c = 0, \quad\text{where } a=x,\, b=(2x-1), \, c = x-2 \tag 2
$$</span>
<span class="math-container">$$
y = \frac{-b\pm\sqrt{b^2-4ac}}{2a} = \frac{1-2x\pm \sqrt{(2x-1)^2 -4x(x-2)}}{2x} \tag 3
$$</span>
except that <span class="math-container">$(3)$</span> can be a valid solution of <span class="math-container">$(1)$</span> only when <span class="math-container">$a=x\ne 0.$</span> When <span class="math-container">$x=0,$</span> equation <span class="math-container">$(1)$</span> becomes
<span class="math-container">$$
y+2=0, \text{ so } y=-2.
$$</span>
Thus the domain of the function defined on line <span class="math-container">$(3)$</span> excludes only those values of <span class="math-container">$x$</span> for which the expression under the radical is negative. So we want
<span class="math-container">$$
(2x-1)^2 - 4x(x-2)\ge0.
$$</span>
<span class="math-container">$$
4x^2-4x+1-4x^2+8x = 4x+1\ge0
$$</span>
<span class="math-container">$$
x \ge -\frac{1}{4}.
$$</span></p>
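The derivation above can be checked numerically (my own sketch): for each $x\ge-\frac14$, the formula on line $(3)$ should produce real values $y$ with $f(y)=x$, and for $x<-\frac14$ there should be none.

```python
import math

f = lambda y: (y + 2) / (y + 1) ** 2

def preimages(x):
    # solve x*y^2 + (2x-1)*y + (x-2) = 0 for y, as on line (3)
    if x == 0:
        return [-2.0]                       # equation (1) degenerates to y + 2 = 0
    disc = (2 * x - 1) ** 2 - 4 * x * (x - 2)   # simplifies to 4x + 1
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [(1 - 2 * x + r) / (2 * x), (1 - 2 * x - r) / (2 * x)]

for x in (-0.25, -0.1, 0.0, 0.5, 3.0):      # all in the claimed range
    ys = preimages(x)
    assert ys and all(abs(f(y) - x) < 1e-9 for y in ys)
print(preimages(-0.3))   # []: values below -1/4 are never attained
```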
|
688,168 | <p>$\int\limits_{y=0}^{3}\int\limits_{x=y}^{\sqrt{18-y^2}} 7x + 3y$ $dxdy$</p>
<p>Okay so I converted this into polar form because I was told to do so
I got the integral of $(7r\cos\theta + 3r\sin\theta)rdrd\theta$ where $0\le \theta \le \pi/4$ and $0\le r \le \sqrt{18}$</p>
<p>I think I'm making a mistake solving this integral. I keep getting $72$ which is incorrect.</p>
<p>Work:
Taking the first integral I get $7r^3\cos\theta/3 + r^3\sin\theta$ from $0$ to $\sqrt{18}$
then I get $7(\sqrt{18})^3 \cos\theta/ 3 + \sqrt {18}^3\sin\theta$
Then I took the second integral and got $7(\sqrt{18})^3 \sin\theta/ 3 - (\sqrt {18})^3\cos{\theta}$
I plugged in the values of $\pi/4$ and $0$ and got</p>
<p>$126 - 54 + (\sqrt{18}^3) = 148.367532368147$</p>
<p>Sorry, while typing the work I found my mistake. Thanks everyone for trying to help me out :)</p>
| walkar | 98,077 | <p>I'm not sure about polar form, but you can evaluate the integral as such:</p>
<p>$\int\limits_{y=0}^{3}\int\limits_{x=y}^{\sqrt{18-y^2}} 7x + 3y$ $dxdy$</p>
<p>$\int\limits_{0}^{3}(\frac{7}{2}x^2 + 3y\cdot x)\big|_{x=y}^{x=\sqrt{18-y^2}}$ $dy$</p>
<p>$\int\limits_{0}^{3}\left(\frac{7}{2}(18-y^2) + 3y\cdot \sqrt{18-y^2}\right)-\left(\frac{7}{2}y^2 + 3y^2\right)$ $dy$</p>
<p>$\int\limits_{0}^{3} 63 + 3y\cdot \sqrt{18-y^2} - 10y^2$ $dy$</p>
<p>It should be relatively easy to solve from there using more basic, one-variable methods. </p>
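One caveat when carrying this through: the evaluation must be taken as upper limit minus lower limit, i.e. at $x=\sqrt{18-y^2}$ minus at $x=y$, which leaves $\int_0^3\bigl(63+3y\sqrt{18-y^2}-10y^2\bigr)\,dy$. A quick midpoint-rule check (my own addition) confirms this matches the polar value $72+54\sqrt2\approx 148.37$:

```python
import math

# one-variable integrand left after the inner x-integration
g = lambda y: 63 + 3 * y * math.sqrt(18 - y * y) - 10 * y * y

n = 100_000
h = 3 / n
midpoint = sum(g((i + 0.5) * h) for i in range(n)) * h   # midpoint rule on [0, 3]

exact = 72 + 54 * math.sqrt(2)   # value from the polar computation
print(abs(midpoint - exact) < 1e-6)   # True
```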
|
688,168 | <p>$\int\limits_{y=0}^{3}\int\limits_{x=y}^{\sqrt{18-y^2}} 7x + 3y$ $dxdy$</p>
<p>Okay so I converted this into polar form because I was told to do so
I got the integral of $(7r\cos\theta + 3r\sin\theta)rdrd\theta$ where $0\le \theta \le \pi/4$ and $0\le r \le \sqrt{18}$</p>
<p>I think I'm making a mistake solving this integral. I keep getting $72$ which is incorrect.</p>
<p>Work:
Taking the first integral I get $7r^3\cos\theta/3 + r^3\sin\theta$ from $0$ to $\sqrt{18}$
then I get $7(\sqrt{18})^3 \cos\theta/ 3 + \sqrt {18}^3\sin\theta$
Then I took the second integral and got $7(\sqrt{18})^3 \sin\theta/ 3 - (\sqrt {18})^3\cos{\theta}$
I plugged in the values of pi/4 and o and got</p>
<p>$126 - 54 + (\sqrt{18}^3) = 148.367532368147$</p>
<p>Sorry, while typing the work I found my mistake. Thanks everyone for trying to help me out :)</p>
| Lion | 125,746 | <p>Let $I$ denote the integral value. By calculating the integral by the polar form, we have:
\begin{equation}
I=\int_0^{\frac{\pi}{4}}\int_0^\sqrt{18}7r^2\cos(\theta)+3r^2\sin(\theta)drd\theta\\
=\int_0^{\frac{\pi}{4}}\int_0^\sqrt{18}7r^2\cos(\theta)drd\theta+\int_0^{\frac{\pi}{4}}\int_0^\sqrt{18}3r^2\sin(\theta)drd\theta\\
=\int_0^{\frac{\pi}{4}}\cos(\theta)d\theta\int_0^\sqrt{18}7r^2dr+\int_0^{\frac{\pi}{4}}\sin(\theta)d\theta\int_0^\sqrt{18}3r^2dr\\
=\frac{\sqrt{2}}{2}\times126\sqrt{2}+(1-\frac{\sqrt{2}}{2})\times54\sqrt{2}\\
=72+54\sqrt{2}
\end{equation}</p>
|
3,629,186 | <p>Assume that <span class="math-container">$x=x(t)$</span> and <span class="math-container">$y=y(t)$</span>. Find <span class="math-container">$dx/dt$</span> given the other information.</p>
<p><span class="math-container">$x^2−2xy−y^2=7$</span>; <span class="math-container">$\frac{dy}{dt} = -1$</span> when <span class="math-container">$x=2$</span> and <span class="math-container">$y=-1$</span></p>
<p>I am trying to figure this problem out. My book does not give one example similar to it.</p>
<p>I'm assuming that the first step is to find the derivative, to which I get</p>
<p><span class="math-container">$2x\left(\frac{dy}{dt}\right)-2\left[x(\frac{dy}{dt})+y(\frac{dx}{dt})\right]-2y\left(\frac{dx}{dt}\right) = 0$</span></p>
<p>I'm not sure if this is correct, and I'm not sure what to do after this. Do I just plug in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>?</p>
| TravorLZH | 748,964 | <p><span class="math-container">$$x^2-2xy-y^2=7$$</span>
Now, differentiate the both side of the equation:</p>
<p><span class="math-container">$$2x{\mathrm{d}x\over\mathrm{d}t}-2y{\mathrm{d}x\over\mathrm{d}t}-2x{\mathrm{d}y\over\mathrm{d}t}-2y{\mathrm{d}y\over\mathrm{d}t}=0$$</span></p>
<p>Rearranging this equation will give us:</p>
<p><span class="math-container">$$-2(x+y){\mathrm{d}y\over\mathrm{d}t}=-2(x-y){\mathrm{d}x\over\mathrm{d}t}$$</span></p>
<p><span class="math-container">$${\mathrm{d}x\over\mathrm{d}t}={x+y\over x-y}{\mathrm{d}y\over\mathrm{d}t}$$</span></p>
<p>Now, you can plug <span class="math-container">$x,y,$</span> and <span class="math-container">$\mathrm{d}y\over\mathrm{d}t$</span> to get <span class="math-container">$\mathrm{d}x\over\mathrm{d}t$</span>:</p>
<p><span class="math-container">$$
{\mathrm{d}x\over\mathrm{d}t}={2+(-1)\over 2-(-1)}(-1)=-{1\over3}
$$</span></p>
<p>To me, it seems more organized to solve for <span class="math-container">$\mathrm{d}x\over\mathrm{d}t$</span> before plugging in values.</p>
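The result can be double-checked numerically (my own sketch): near $(2,-1)$ the constraint can be solved explicitly for $x$ on the relevant branch, and a finite difference in $t$ reproduces $-\frac13$:

```python
import math

# On the branch through (2, -1), x^2 - 2xy - y^2 = 7 solves to
# x = y + sqrt(7 + 2 y^2);  check: y = -1 gives x = -1 + 3 = 2
x_of_y = lambda y: y + math.sqrt(7 + 2 * y * y)
print(abs(x_of_y(-1) - 2) < 1e-12)   # True

# with y(t) = -1 - t (so dy/dt = -1), central difference for dx/dt at t = 0
h = 1e-6
dxdt = (x_of_y(-1 - h) - x_of_y(-1 + h)) / (2 * h)
print(abs(dxdt - (-1 / 3)) < 1e-6)   # True: matches (x+y)/(x-y) * dy/dt
```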
|
101,526 | <p>Is there a notion of <i>"smooth bundle of Hilbert spaces"</i> (the base is a smooth finite dimensional manifold, and the fibers are Hilbert spaces) such that:</p>
<blockquote>
<p><b>1•</b> A smooth bundle of Hilbert spaces over a point is the same thing as a Hilbert space.</p>
<p><b>2•</b> If <span class="math-container">$E\to M$</span> is a smooth fiber bundle of orientable manifolds (say with compact fibers) equipped with a vertical volume form, then taking fiberwise <span class="math-container">$L^2$</span>-functions produces a smooth bundle of Hilbert spaces over <span class="math-container">$M$</span>.</p>
<p><b>3•</b> If the Hilbert space is finite dimensional, then this specializes to the usual notion of smooth vector bundle (with fiberwise inner product).</p>
</blockquote>
<p>I suspect that the answer is "no", because I couldn't figure out how it might work...<br>
If the answer is indeed no, then what is/are the best notion/s of smooth bundle of Hilbert spaces?</p>
| TaQ | 12,643 | <p><strong>This is not an answer</strong> but rather a comment to Peter Michor's answer. Anyway, I post it as an answer to get more flexibility in text formulation and to get more visibility. Namely, I think there is a crucial error which completely breaks down the argument so that generally <em>it is not possible</em> to perform the construction (<b>2•</b> of OP) of associating a fiberwise $L^2$ bundle over a given fiber bundle. The error lies in the following passage:</p>
<blockquote> That it is smooth $U\times L^2(F, vol(g)) \to U\times L^2(F,vol(g))$ is seen as follows:
It suffices to show that $(x,f)\mapsto \langle f\circ \rho_x, \lambda\rangle_{L^2}$ is smooth for all $\lambda$ in a subset $\subset L^2$ of linear functionals which together recognize bounded sets.
We may take $C^\infty(F)\subset L^2(F,vol(g))$ as this set. By one of the two smooth uniform boundedness theorems from the book below it suffices to show that for each fixed $f\in L^2$ the function $F\to \mathbb R$ given by
$$x\mapsto \langle f\circ \rho_x, \lambda\rangle_{L^2} = \int_F f(\rho_x(u))\lambda(u)\,vol(g)(u)= \int f(v) \lambda(\rho_x^{-1}(v) ((\rho_x^{-1})^*vol(g))(v)$$
is smooth.
But this now obvious since $\lambda$ and $vol(g)$ are smooth.
</blockquote>
<p>Specifically, it is <em>not</em> sufficient to check "scalar smoothness" against the set of smooth (not even continuous) functions since it (generally) <em>does not</em> "recognize" bounded sets in $L^2$ . To give an explicit counterexample, understanding $\mathbb S^1$ as $\mathbb R$ mod $1$ , consider the map $f:\mathbb R\times L^2(\mathbb S^1)\to L^2(\mathbb S^1)$ defined by $(t,[\,x\,])\mapsto[\,\langle\,x(t+s):s\in\mathbb R\,\rangle\,]$ . If the argument in Peter Michor's answer were correct, then for any fixed $x$ in $L^2$ the map $c:t\mapsto f(t,[\,x\,])$ should be smooth $\mathbb R\to L^2(\mathbb S^1)$ . However, it is easily seen that it is not even once differentiable if for example one takes $x$ defined by $x(s)=1$ for $|\,s-n\,|\le\frac 14$ and $n\in\mathbb Z$ , and $x(s)=0$ otherwise, since then $\lim_{\,t\to 0\,}\|\,t^{-1}(c(t)-c(0))\,\|_{L^2}=+\infty$ .</p>
|
615,067 | <p>I heard that Weil proved the Riemann hypothesis for curves over finite fields. Where can I find the details of the proof? I found the following sketch but I was unable to fill in the details: </p>
<p>Motivation: I am trying to understand the elementary theory of finite fields, but I'm not an expert in algebraic geometry; it would be nice to get some hints about what I should study before I can understand schemes well enough to follow the proof.</p>
<p>Let $C,E$ be two proper smooth curves over a field $k$, and $f:C\to E$ a finite morphism. Let us set $X=C\times_{\operatorname{Spec}k}E$. Let us consider the graph $\Gamma_f\subseteq X$ of $f$ endowed with the reduced closed subscheme structure.</p>
<p>(a) Let $p_1:X\to C$ and $p_2:X\to E$ denote the projections. Then $p_1$ induces an isomorphism $\varphi:\Gamma_f\simeq C$. Show that $\omega_{X/K}\simeq p_1^*\omega_{C/k}\otimes p_2^*\omega_{E/k}$ and that $\omega_{X/k}|_ {\Gamma_F}\simeq \varphi^*\omega_{C/k}\otimes \varphi^*f^*\omega_{E/k}$.</p>
<p>(b) Show that</p>
<p>$$\operatorname{deg}_k\omega_{X/k}\mid_{\Gamma_f}=2g(C)-2+(\operatorname{deg} f)(2g(E)-2).$$</p>
<p>Deduce from this that $\Gamma_f^2=(\operatorname{deg} f)(2-2g(E))$.</p>
<p>(c) Let us henceforth suppose that $C=E$. Let $\Delta\subset X$ denote the diagonal. Show that $\Delta^2=2-2g(C)$.</p>
<p>(d) Let us suppose that $f\ne \operatorname{Id}_C$. Let $x\in X(k)\cap \Delta\cap \Gamma_f$, let $y=p_1(x)$, and let $t$ be a uniformizing parameter for $\mathcal{O}_{C,y}$. Show that</p>
<p>$$i_x(\Gamma_f,\Delta)=\operatorname{length}\mathcal{O}_{C,y}/(\sigma(t)-t),$$</p>
<p>where $\sigma$ is the automorphism of $\mathcal{O}_{C,y}$ induced by $f$.</p>
<p>(e) Let us take a finite field $k=\mathbb{F}_{p^r}$ of characteristic $p>0$, and let $f:C\to C$ be the Frobenius $F_C^r$. Show that the divisors $\Gamma_f,\Delta$ meet transversally and that $\Gamma_f\cap\Delta\subseteq X(k)$. Deduce from this that the cardinal $N$ of $C(k)$ is given by $N=\Gamma_f\cdot \Delta$.</p>
| Dietrich Burde | 83,966 | <p>Very nice articles with many historical details are the survey articles $1,2,3,4$ by Peter Roquette, <a href="http://www.rzuser.uni-heidelberg.de/~ci3/rv4.pdf" rel="nofollow"><em>On the Riemann hypothesis in characteristic $p$, its origin and development</em></a>. The link is for the last part; parts $1,2,3$ are also available.</p>
|
3,414,208 | <blockquote>
<p>In the beginning A=0. Every time you toss a coin, if you get head, you increase A by 1, otherwise decrease A by 1. Once you tossed the coin 7 times or A=3, you stop. How many different sequences of coin tosses are there?</p>
</blockquote>
<p>The tricky part of this problem is the combination of the two stopping conditions, so it seems that recursion could be useful. If <span class="math-container">$P_n$</span> is the number of sequences of $n$ flips that have not yet reached A=3, I'm not sure of the recurrence. Of course, this problem could also be solved with a tree, but I'm looking for a cleaner solution.</p>
| JMoravitz | 179,297 | <p>Continue flipping even after having reached <span class="math-container">$A=3$</span>.</p>
<p>There are <span class="math-container">$2^7$</span> different sequences of seven flips.</p>
<p>All sequences of the form <code>HHHxxxx</code> should have been counted as one and the same game. There are <span class="math-container">$2^4$</span> such sequences that we wanted to count only one time, so we subtract <span class="math-container">$2^4-1$</span> to correct the count so that these contribute exactly once.</p>
<p>All sequences of the form <code>THHHHxx</code> should likewise have been counted as one game. There are <span class="math-container">$2^2$</span> such sequences...</p>
<p>The same goes for all sequences of the form <code>HTHHHxx</code>.</p>
<p>And for all sequences of the form <code>HHTHHxx</code>.</p>
<p>Note, we do not bother with <code>HHHTHxx</code> or <code>HHHHTxx</code> since these both fall into the first case as they start with three heads.</p>
<p>Also, no special consideration needs to be taken for having reached <span class="math-container">$A=3$</span> on the seventh flip, and it is impossible for <span class="math-container">$A=3$</span> to occur on any flip other than the third, fifth, or seventh, since <span class="math-container">$A$</span> has the same parity as the number of flips taken.</p>
<p>We get then a total of:</p>
<p><span class="math-container">$$2^7-(2^4-1)-3\times (2^2-1)$$</span></p>
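The count can be verified by enumerating the game tree directly; this short check is my own addition:

```python
def count_games(A=0, tosses=0):
    # each leaf of this tree is one distinct sequence of coin tosses:
    # the game ends once A = 3 or 7 tosses have been used
    if A == 3 or tosses == 7:
        return 1
    return count_games(A + 1, tosses + 1) + count_games(A - 1, tosses + 1)

print(count_games())   # 104, matching 2^7 - (2^4 - 1) - 3*(2^2 - 1)
```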
|
4,569,458 | <p>I wonder if there is any trick in this problem. The following graph is a regular hexagon with center <span class="math-container">$C$</span> and one of its vertices labelled <span class="math-container">$A$</span>. There are <span class="math-container">$6$</span> vertices and a center on the graph, and we perform the symmetric random walk on it, started from <span class="math-container">$A$</span>. When the process is at one of the vertices, it moves to <span class="math-container">$C$</span> with probability <span class="math-container">$\frac{1}{3}$</span>, and to each of its two neighboring vertices with probability <span class="math-container">$\frac{1}{3}$</span>. We want to compute the probability that the process, started from <span class="math-container">$A$</span>, eventually returns to <span class="math-container">$A$</span> without passing through <span class="math-container">$C$</span> before its first arrival back at <span class="math-container">$A$</span>.</p>
<p><a href="https://i.stack.imgur.com/j0quC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j0quC.png" alt="enter image description here" /></a></p>
<p>Therefore, basically <span class="math-container">$A$</span> and <span class="math-container">$C$</span> are two absorbing states. Denote this discrete random walk as <span class="math-container">$\left\{ X_{t} \right\}_{t\geq 0}$</span>, and I want to compute:</p>
<p><span class="math-container">$$
P(X_{t} = A, ~ X_{s} \notin \left\{A, C \right\} \mbox{ for } \forall ~ 0 < s < t ~ | ~ X_{0} = A)
$$</span></p>
<p>The following are my thoughts:</p>
<p>My first thought was to treat this as a DTMC, regarding <span class="math-container">$A$</span> and <span class="math-container">$C$</span> as two absorbing states, with the process eventually absorbed at <span class="math-container">$A$</span>. This requires writing down a probability transition matrix and then solving linear equations. However, this is actually an interview question, so I suppose there is a much easier way to do it.</p>
<p>Also, I suspect that constructing a stopping time might help, but I am stuck there.</p>
| HackR | 1,088,726 | <p>We can look at this graph as an electric network where every edge <span class="math-container">$xy$</span> has conductance <span class="math-container">$c(xy)=1$</span> (and hence resistance <span class="math-container">$1$</span> as well). We label the hexagon with <span class="math-container">$ABCDEF$</span>, and the center of the hexagon as <span class="math-container">$O$</span>. Then we want to look at <span class="math-container">$$\mathbb P(\tau_A^+<\tau_O)$$</span> Note that <span class="math-container">$\mathbb P(\tau_A^+<\tau_O)+\mathbb P(\tau_A^+>\tau_O)=1$</span>, so we try and find <span class="math-container">$$\mathbb P(\tau_A^+>\tau_O)$$</span>
<p>This is an escape problem! We are trying to calculate the probability that a random walk starting at <span class="math-container">$A$</span> 'escapes' to <span class="math-container">$O$</span> before coming back to <span class="math-container">$A$</span>. This is precisely where electric networks shine. By a standard theorem, we have that</p>
<p><span class="math-container">$$\mathbb P(\tau_A^+>\tau_O)=\frac{r(A)}{R_{eff}^{AO}}$$</span></p>
<p>where <span class="math-container">$r(A)=\frac{1}{c(A)}=\frac{1}{\sum_{x|x\sim A}c(Ax)}=\frac1{1+1+1}=\frac13$</span> and <span class="math-container">$R_{eff}^{AO}$</span> is the effective resistance between <span class="math-container">$A$</span> and <span class="math-container">$O$</span>. This effective resistance is what we want to find out.</p>
<p>Here is the circuit (apologies for the bad diagram)</p>
<p><a href="https://i.stack.imgur.com/WIDda.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WIDda.png" alt="Circuit 1" /></a></p>
<p>We exploit the symmetry in the figure to make lives simpler. Since current enters through <span class="math-container">$A$</span> (<span class="math-container">$A$</span> is the source), by symmetry <span class="math-container">$B$</span> and <span class="math-container">$F$</span> are at equal voltages. Similarly <span class="math-container">$C$</span> and <span class="math-container">$E$</span> are at same voltages. Thus the following pairs of resistances are in parallel</p>
<p><span class="math-container">$$(AF,AB),\ (FE,BC),\ (ED,CD)$$</span></p>
<p>This automatically makes the following pairs parallel as well (check the potentials at their ends and note that they are the same)</p>
<p><span class="math-container">$$(OF,OB),\ (OE,OC)$$</span></p>
<p>Gluing all the same voltage points together, and using the parallel law, we get the following circuit</p>
<p><a href="https://i.stack.imgur.com/hKA6X.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hKA6X.png" alt="Circuit 2" /></a></p>
<p>where the <span class="math-container">$500m$</span> just means <span class="math-container">$1/2$</span> units of resistance (the applet I used to draw these diagrams measures these in ohms and milliohms, hence the m). This circuit, drawn in a better way, looks like</p>
<p><a href="https://i.stack.imgur.com/5QpKF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5QpKF.png" alt="Circuit 3" /></a></p>
<p>Now note that the resistances at the extreme right are in series (the <span class="math-container">$1$</span> and <span class="math-container">$1/2$</span> unit ones at the end), and thus you can replace them with a <span class="math-container">$1+1/2=3/2=1.5$</span> unit resistance using the series law. This gets us</p>
<p><a href="https://i.stack.imgur.com/Z2a7R.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z2a7R.png" alt="Circuit 4" /></a></p>
<p>Now again note that the <span class="math-container">$1.5$</span> unit resistance and the <span class="math-container">$1/2$</span> unit resistance at the extreme end are in parallel, so using the parallel law, you can replace them with a resistance of unit <span class="math-container">$\frac{1}{1/(1.5)+1/(1/2)}=3/8=0.375$</span>, giving us the circuit</p>
<p><a href="https://i.stack.imgur.com/prY4Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/prY4Q.png" alt="Circuit 5" /></a></p>
<p>Continuing similarly, you finally end up with a resistance of <span class="math-container">$9/20$</span> units, which is the effective resistance <span class="math-container">$R_{eff}^{AO}$</span> we were looking for. Plugging this in, we get</p>
<p><span class="math-container">$$\mathbb P(\tau_A^+>\tau_O)=\frac13\frac{20}9=20/27$$</span></p>
<p>Finally, putting this in the equation for total probability gives us</p>
<p><span class="math-container">$$\mathbb P(\tau_A^+<\tau_O)=1-\mathbb P(\tau_A^+>\tau_O)=1-\frac{20}{27}=\frac7{27}$$</span></p>
<p>and we are done!</p>
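As a cross-check (my own addition), one can bypass the network reduction and solve the hitting-probability equations exactly, using the same $B\sim F$, $C\sim E$ symmetry:

```python
from fractions import Fraction

# h(x) = P(reach A before O | walk currently at x); by symmetry h(B) = h(F)
# and h(C) = h(E), leaving three unknowns:
#   h(B) = 1/3 * 1 + 1/3 * 0 + 1/3 * h(C)
#   h(C) = 1/3 * h(B) + 1/3 * 0 + 1/3 * h(D)
#   h(D) = 1/3 * h(C) + 1/3 * h(E) + 1/3 * 0 = 2/3 * h(C)
t = Fraction(1, 3)
hB = Fraction(7, 18)                # obtained by substitution
hC = Fraction(3, 7) * hB
hD = 2 * t * hC
assert hB == t + t * hC and hC == t * hB + t * hD   # the equations hold

answer = 2 * t * hB                 # the first step from A must go to B or F
print(answer)                       # 7/27
```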
|
2,405,505 | <p>How can one prove that the infinite product $\prod_{n=1}^{+\infty} \left(1-\frac{1}{2n^2}\right)$ is positive?</p>
<p>Thanks</p>
| Raffaele | 83,382 | <p>As you can see <a href="http://functions.wolfram.com/ElementaryFunctions/Sin/08/0001/" rel="nofollow noreferrer">here</a> <span class="math-container">$\sin z$</span> can be expressed by an infinite product, namely</p>
<p><span class="math-container">$$\sin z=z \prod _{n=1}^{\infty } \left(1-\frac{z^2}{\pi ^2 n^2}\right)$$</span>
Thus for <span class="math-container">$z=\dfrac{\pi }{\sqrt{2}}$</span> we get
<span class="math-container">$$\sin\left(\dfrac{\pi }{\sqrt{2}}\right)=\dfrac{\pi }{\sqrt{2}}\,\prod _{n=1}^{\infty } \left(1-\frac{1}{2 n^2}\right)$$</span>
hence <span class="math-container">$$\prod _{n=1}^{\infty } \left(1-\frac{1}{2 n^2}\right)=\dfrac{\sin\left(\dfrac{\pi }{\sqrt{2}}\right)}{\dfrac{\pi }{\sqrt{2}}}\approx 0.358$$</span></p>
<p>Hope this helps</p>
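A numerical check of this closed form (the truncation point below is my own choice): the partial products decrease towards $\sin(\pi/\sqrt2)/(\pi/\sqrt2)$, which is positive, so the infinite product is positive.

```python
import math

# Compare a long partial product against the closed form from the answer.
z = math.pi / math.sqrt(2)
target = math.sin(z) / z   # ~ 0.358

prod = 1.0
for n in range(1, 200_000):
    prod *= 1.0 - 1.0 / (2.0 * n * n)
```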
|
4,008,987 | <p>I am reading a math book and in it, it says, "Let <span class="math-container">$V$</span> be the set of all functions <span class="math-container">$f: \mathbb{Z^n_2} \rightarrow \mathbb{R}.$</span> I know that <span class="math-container">$\mathbb{Z^n_2}$</span> is just the cyclic group of order <span class="math-container">$2$</span> taken to the <span class="math-container">$n$</span>th power, but I don't get what the actual function means.</p>
<p>How would such a function, <span class="math-container">$f$</span>, work? Could someone please give an example?</p>
| Asker | 201,024 | <blockquote>
<p>[...] I don't get what the actual function means.</p>
</blockquote>
<p><span class="math-container">$f:\bf{Z}_2^n \to R$</span> simply assigns to each element of <span class="math-container">$\bf{Z}_2^n$</span> an element of <span class="math-container">$\bf{R}$</span>.</p>
<blockquote>
<p>How would such a function, f , work?</p>
</blockquote>
<p>Each element of <span class="math-container">$\bf{Z}_2^n$</span> is assigned an element of <span class="math-container">$\bf{R}$</span>.</p>
<blockquote>
<p>Could someone please give an example?</p>
</blockquote>
<p>Say <span class="math-container">$n=2$</span>. Define <span class="math-container">$f:\bf{Z}_2^n \to R$</span> such that</p>
<p><span class="math-container">$$f((0, 0)) = 1$$</span>
<span class="math-container">$$f((0, 1)) = 2$$</span>
<span class="math-container">$$f((1, 0)) = 3.14159$$</span>
<span class="math-container">$$f((1, 1)) = 1738$$</span>
That's all a function does - it assigns elements of one set to elements of another. That's it.</p>
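One concrete way to store such an $f$ for $n=2$ (the dict-as-lookup-table encoding is just one convenient choice of mine) is a plain table whose four entries mirror the example above:

```python
# A function Z_2^n -> R for n = 2, modelled as a lookup table keyed by
# tuples of bits; the values are exactly those in the example above.
f = {
    (0, 0): 1,
    (0, 1): 2,
    (1, 0): 3.14159,
    (1, 1): 1738,
}
n = 2  # the domain Z_2^n has 2**n elements, so f is a table of 2**n reals
```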
|
403,184 | <p>A (non-mathematical) friend recently asked me the following question:</p>
<blockquote>
<p>Does the golden ratio play any role in contemporary mathematics?</p>
</blockquote>
<p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p>
<p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p>
<p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
| Wojowu | 30,186 | <p>Golden ratio is relevant in most places where you consider Fibonacci numbers. One occurrence where it is particularly visible is the proof by Bugeaud, Mignotte and Siksek of the fact that the largest perfect power in the Fibonacci series is 144 (arXiv version <a href="https://arxiv.org/abs/math/0403046" rel="noreferrer">here</a>).</p>
<p>The key is, perhaps unsurprisingly, the Binet formula <span class="math-container">$F_n=\frac{\phi^n-(-1)^n\phi^{-n}}{\sqrt{5}}$</span>. Equating it to a perfect power <span class="math-container">$y^p$</span> gives us an exponential Diophantine equation which we have to solve. The solution is a combination of methods in which <span class="math-container">$\phi$</span> occurs prominently: it occurs in determining various congruences on the numbers involved which are further crucial in applying modularity results (like in the proof of Fermat's last theorem), as well as in estimates on linear forms in logarithms using Baker's method.</p>
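A quick sanity check (my own, not from the answer): the Binet formula, with the $(-1)^n$ sign on the conjugate root made explicit, reproduces the Fibonacci numbers, and $F_{12}=144=12^2$ is the perfect power quoted above.

```python
import math

phi = (1 + math.sqrt(5)) / 2

def fib(n):
    # Fibonacci numbers by the recurrence, F_1 = F_2 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    # Binet's formula; note the (-1)^n sign on the conjugate root
    return (phi ** n - (-1) ** n * phi ** (-n)) / math.sqrt(5)

f12 = fib(12)   # 144, the largest perfect power in the sequence
```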
|
403,184 | <p>A (non-mathematical) friend recently asked me the following question:</p>
<blockquote>
<p>Does the golden ratio play any role in contemporary mathematics?</p>
</blockquote>
<p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p>
<p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p>
<p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
| Jakub Konieczny | 14,988 | <p>The golden ratio is closely related to the <a href="https://en.wikipedia.org/wiki/Zeckendorf%27s_theorem" rel="noreferrer">Zeckendorf representation</a> which is one of the simplest examples of a numeration system (other than the usual base-<span class="math-container">$b$</span> represenations). As such, it's an important testing ground for new ideas. For instance, in <a href="https://arxiv.org/pdf/1706.09680.pdf" rel="noreferrer">this paper</a> Drmota, Müllner and Spiegelhofer showed that the parity of the sum of digits in the Zeckendorf representation does not correlate with the Möbius function. Such considerations are interesting for their own sake, and are also connected with <a href="https://en.wikipedia.org/wiki/Morphic_word" rel="noreferrer">morphic sequences</a> thanks to the work of Rigo: One can think of a morphic sequence as an <a href="https://en.wikipedia.org/wiki/Automatic_sequence" rel="noreferrer">automatic sequence</a> in a non-standard numeration system.</p>
|
403,184 | <p>A (non-mathematical) friend recently asked me the following question:</p>
<blockquote>
<p>Does the golden ratio play any role in contemporary mathematics?</p>
</blockquote>
<p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p>
<p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p>
<p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
| Denis Serre | 8,799 | <p>In numerical analysis, a basic problem consists in approaching a solution <span class="math-container">$x^*$</span> of <span class="math-container">$f(x)=0$</span>, where <span class="math-container">$f:{\mathbb R}\rightarrow{\mathbb R}$</span> is at least continuous and more often <span class="math-container">$C^2$</span>-smooth. The basic methods are those of dichotomy, secant and Newton. Dichotomy is more than elementary (based upon the intermediate value theorem) and its convergence is of order <span class="math-container">$1$</span>, meaning that the error <span class="math-container">$\lvert x-x^*\rvert\sim2^{-n}$</span> decays exponentially. Newton's method is much faster — order <span class="math-container">$2$</span> if <span class="math-container">$x^*$</span> is non-degenerate, that is <span class="math-container">$f'(x^*)\ne0$</span>, which means <span class="math-container">$\lvert x-x^*\rvert=O(\rho^{2^n})$</span> for some <span class="math-container">$\rho<1$</span>. But it requires the knowledge of <span class="math-container">$f'$</span>, which can be questionable in real world applications, where values of <span class="math-container">$f$</span> are given by measurements. A good compromise is the secant method, which resembles Newton a lot, but the tangent to the graph at <span class="math-container">$x_n$</span> is replaced by the chord of the graph determined by the abscissæ <span class="math-container">$x_{n-1}$</span> and <span class="math-container">$x_n$</span>. Then its intersection with the horizontal axis determines <span class="math-container">$x_{n+1}$</span>.</p>
<p>The secant method turns out to be of order <span class="math-container">$\phi$</span> whenever <span class="math-container">$x^*$</span> is non-degenerate: <span class="math-container">$\lvert x-x^*\rvert=O(\rho^{\phi^n})$</span> for some <span class="math-container">$\rho<1$</span>.</p>
<p>The main drawback of the secant method is that its extension to vector fields <span class="math-container">$f:{\mathbb R}^n\rightarrow{\mathbb R}^n$</span>, which can be defined easily, behaves badly when <span class="math-container">$n\ge2$</span>. Notice that the extension of the dichotomy, which involves triangulations and the Sperner Lemma, becomes very slow. Only the Newton method admits an efficient generalization (then called Newton–Raphson).</p>
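A minimal secant iteration (the test function $x^2-2$ and the starting points are my own choices) illustrating the method described above: each step replaces Newton's tangent by the chord through the last two iterates, and the convergence order is the golden ratio $\phi$.

```python
# Secant iteration with a guard against a flat chord near convergence.
def secant(f, x0, x1, tol=1e-12, max_steps=50):
    for _ in range(max_steps):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:              # flat chord: cannot divide, stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```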
|
3,667,798 | <p>Find values of <span class="math-container">$a$</span> for which the integral <span class="math-container">$$\int^{\infty}_{0}e^{-at}\sin(7t)dt$$</span> converges</p>
<p>What i try</p>
<p><span class="math-container">$$\int^{\infty}_{0}e^{-at}\sin(7t)dt$$</span></p>
<p><span class="math-container">$$=\frac{1}{a^2+49}\bigg(-e^{-at}a\sin(7t)-7e^{-at}\cos(7t\bigg)\bigg|^{\infty}_{0}=\frac{7}{a^2+49}$$</span></p>
<p>The integral converges for all real <span class="math-container">$a$</span>.</p>
<p>What i have done above is right. If not then please tell me how do i solve it. Thanks </p>
| J.G. | 56,861 | <p>For <span class="math-container">$z\in\Bbb C$</span>,<span class="math-container">$$\int_0^\infty e^{-zt}dt=\frac1z(1-\lim_{t\to\infty}e^{-zt}),$$</span> which converges iff<span class="math-container">$$0=\lim_{t\to\infty}|e^{-zt}|=\lim_{t\to\infty}e^{-(\Re z)t},$$</span>i.e. <span class="math-container">$\Re z>0$</span>. So <span class="math-container">$\Im\int_0^\infty e^{-(a-7i)t}dt$</span> converges iff <span class="math-container">$\Re a>0$</span>. If <span class="math-container">$a\in\Bbb R$</span>, this simplifies to <span class="math-container">$a>0$</span>.</p>
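A numerical cross-check of this conclusion (the step size and cutoff $T$ are my own choices): for $a>0$ the integral converges to $7/(a^2+49)$, which for $a=1$ is $7/50$.

```python
import math

def damped_sine_integral(a, b=7.0, T=40.0, steps=200_000):
    """Trapezoidal estimate of the integral of e^{-a t} sin(b t) over [0, T];
    for a > 0 the tail beyond T is negligible."""
    h = T / steps
    total = 0.5 * (math.sin(0.0) + math.exp(-a * T) * math.sin(b * T))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-a * t) * math.sin(b * t)
    return total * h

approx = damped_sine_integral(1.0)
exact = 7.0 / (1.0 ** 2 + 49.0)    # b/(a^2 + b^2) with a = 1, b = 7
```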
|
1,336,424 | <p>Find the minimum distance between point $M(0,-2)$ and points $(x,y)$ such that: $y=\frac{16}{\sqrt{3}\,x^{3}}-2$ for $x>0$ .</p>
<p>I used the formula for distance between two points in a plane to get: $$d=\sqrt{x^{2}+\frac{256}{3x^{6}}}$$ And this is where I cannot figure out how to proceed. I tried calculus, but the first derivative of $d(x)$ is a fairly ugly expression...
A few techniques on how to handle problems on maxima and minima with(out) using calculus would be really useful.</p>
| georg | 144,937 | <p>HINT:</p>
<p>Searching a minimum value for $d^2$. I would say - in point minimum for $d^2$ will be minimized too 'd'.</p>
|
101,972 | <p>I have a grammatically computable function $f$, which means that a grammar $G = (V,\Sigma,P,S)$ exists, so that</p>
<p>$SwS \rightarrow v \iff v = f(w)$.</p>
<p>Now I have to show that, given a grammatically computable function $f$, a Turing Machine $M$ can be constructed, so that $M$ computes $f$.</p>
<p>Here is my idea:</p>
<ol>
<li>The Turing Machine writes the string $SwS$ to its tape.</li>
<li>It nondeterministically chooses one of the productions in $P$.</li>
<li>For the chosen production rule, say $x \rightarrow y$, it scans the tape from left to right and nondeterministically stops at a symbol and checks whether the next $|x|$ symbols match the left side of the production rule. If that's the case, it replaces $x$ with $y$ and chooses the next production rule.</li>
</ol>
<p>After the computation, the tape contains the string $f(w) = v$.</p>
<p><strong>Here is the question:</strong></p>
<p>Because the TM is nondeterministic, there are many branches in the computation and each branch has its own "version" of the tape with its own content.
But in the end, the starting string can only derive to a single string $v$, because the TM computes a function.</p>
<p>Does this mean that the computation branches will all have the same tape content when they arrive at their leaves?</p>
| Dilip Sarwate | 15,941 | <p>I am not sure what <em>Euclid's algorithm</em> in the title of the question is referring to, but as Marc van Leeuwen says, polynomial long division is the way to go. Since doing it <em>by hand</em> is asked about, I will suggest the following based on long experience of doing such divisions by hand and making many mistakes along the way by not being careful.</p>
<blockquote>
<p>Use ruled paper for your work, turning it through a right angle so that the lines are vertical rather than horizontal. Then, write one symbol per column
to keep things aligned properly. The calculation is exactly as shown in ego's
answer but has vertical ruled lines between the bits so that you can keep things properly aligned. Yes, the problem can be solved easily without this artifice in this instance, but the simple trick is very helpful when dealing with polynomials of larger degree.</p>
</blockquote>
|
426,306 | <p>If <span class="math-container">$K = \mathbb{Q}(\sqrt{d})$</span> is a real quadratic field, then any unit <span class="math-container">$u \in \mathcal{O}_K^\times$</span> with <span class="math-container">$u > 1$</span> must not be too small: indeed, such a <span class="math-container">$u = u_1 + u_2 \sqrt{d}$</span> with <span class="math-container">$u_1, u_2 > 0$</span> must satisfy <span class="math-container">$u_1^2 - d u_2^2 = \pm 1$</span>, so <span class="math-container">$u_1 \gg \sqrt{d}$</span>, say. Thus the gap between the smallest unit <span class="math-container">$u \in \mathcal{O}_K^\times$</span> and <span class="math-container">$1$</span> must be quite large.</p>
<p>This seems a particular quirk of fields whose unit group is 1-generated. For real cubic fields, whose unit group is 2-generated, there seems to be substantial room for cancellation, and it even seems possible for the smallest unit <span class="math-container">$u > 1$</span> to be arbitrarily close to <span class="math-container">$1$</span> as the fields vary.</p>
<p>My question is this: let <span class="math-container">$K$</span> be a cyclic cubic field (in particular, necessarily totally real) having discriminant <span class="math-container">$d_K = c_K^2$</span>. Let <span class="math-container">$u_K \in \mathcal{O}_K^\times$</span> be the smallest (in terms of usual archimedean valuation) element satisfying <span class="math-container">$u_K > 1$</span>. Write</p>
<p><span class="math-container">$$\displaystyle u_K = 1 + \kappa_K.$$</span></p>
<p>Can one effectively bound <span class="math-container">$\kappa_K$</span> in terms of <span class="math-container">$d_K$</span> (or equivalently, <span class="math-container">$c_K$</span>)?</p>
| Wojowu | 30,186 | <p>There will not be a smallest one. As you say, <span class="math-container">$O_K^\times$</span> has rank <span class="math-container">$2$</span>, meaning it has two multiplicatively independent generators <span class="math-container">$u_1,u_2$</span>, which we may take to be positive. This is equivalent to saying <span class="math-container">$\log u_1,\log u_2$</span> are linearly independent over <span class="math-container">$\mathbb Q$</span>, and that implies the linear combinations of them will be dense in <span class="math-container">$\mathbb R$</span>. Hence the subgroup of <span class="math-container">$O_K^\times$</span> generated by <span class="math-container">$u_1,u_2$</span> will be dense in <span class="math-container">$(0,\infty)$</span>, and thus <span class="math-container">$O_K^\times$</span> will be dense in <span class="math-container">$\mathbb R$</span>.</p>
|
3,803,360 | <p>Convergence of <span class="math-container">$\sum\sum_{k, n=1}^\infty\frac{1}{(n+3)^{2k}}$</span>.</p>
<p>What I tried:</p>
<p>For the iterated summation, <span class="math-container">$\sum_{n=1}^\infty(\sum_{k=1}^\infty\frac{1}{(n+3)^{2k}})=\sum_{n=1}^\infty\lim_{k\to\infty}\frac{1-(\frac{1}{n+3})^{2k}}{1-(\frac{1}{n+3})^2}=\sum_{n=1}^\infty\frac{1}{1-(\frac{1}{n+3})^2}$</span>.</p>
<p>But when <span class="math-container">$n\to\infty$</span>, <span class="math-container">$\frac{1}{1-(\frac{1}{n+3})^2}\to 1\neq 0$</span>, so the double summation diverges.</p>
<p>Is this proof right? And for the a general double series to converge, is it necessary that the iterated summation also converges?</p>
| Eric Towers | 123,905 | <p>The <a href="https://en.wikipedia.org/wiki/Geometric_series" rel="nofollow noreferrer">geometric series</a> doesn't start at <span class="math-container">$k = 0$</span>, so the numerator is not <span class="math-container">$1$</span>. (That is, the numerator is the first term in the series, which is not <span class="math-container">$1$</span>.)</p>
<p><span class="math-container">\begin{align*}
\sum_{k=1}^\infty \frac{1}{(n+3)^{2k}} &= \sum_{k=1}^\infty \left( (n+3)^{-2} \right)^k \\
&= \frac{(n+3)^{-2}}{1 - (n+3)^{-2}} \\
&= \frac{1}{n^2+6n+8} \text{,}
\end{align*}</span>
so your argument is not quite right.</p>
<p>Hopefully, you can use the comparison test to resolve the convergence of the resulting series in <span class="math-container">$n$</span>.</p>
<p>For this particular series, since all the terms are positive, you are free to rearrange the series as you like, so turning it into an iterated series is fine. (The collection of sums of finite subsets of the set of terms is indifferent to what order you imagine you are summing them.) If the multiple series is not <a href="https://en.wikipedia.org/wiki/Absolute_convergence" rel="nofollow noreferrer">absolutely convergent</a>, the question of rearranging it into an iterated series is a little more delicate. (This should not be a surprise. Contrast absolutely convergent and <a href="https://en.wikipedia.org/wiki/Conditional_convergence" rel="nofollow noreferrer">conditionally convergent</a> and think about the <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem" rel="nofollow noreferrer">Riemann rearrangement theorem</a>.)</p>
<p><a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="nofollow noreferrer">Fubini's theorem</a> is the usual tool to justify such a rearrangement. And when use the <a href="https://en.wikipedia.org/wiki/Counting_measure" rel="nofollow noreferrer">counting measure</a> to switch back and forth between sums and integrals, the Fubini hypotheses are equivalent to requiring absolute convergence of the multiple series.</p>
|
684,755 | <p>In section 10 of <em>Topology</em> by Munkres, the minimal uncountable well-ordered set $S_{\Omega}$ is introduced. Furthermore, it is remarked that,</p>
<blockquote>
<p>Note that $S_{\Omega}$ is an uncountable well-ordered set every section of which is countable. Its order type is in fact uniquely determined by this condition.</p>
</blockquote>
<p>However, how to justify its uniqueness?</p>
| user642796 | 8,348 | <p>This stems from the fact that given two well-ordered sets $X, Y$ exactly one of the following holds (<a href="https://math.stackexchange.com/q/332793/8348">see this previous question</a>):</p>
<ol>
<li>$X$ is order-isomorphic to $Y$;</li>
<li>There is a (unique) $a \in X$ such that $\{ x \in X : x < a \}$ is order-isomorphic to $Y$; or</li>
<li>There is a (unique) $b \in Y$ such that $\{ y \in Y : y < b \}$ is order-isomorphic to $X$.</li>
</ol>
<p>So suppose $X$ is another well-ordered set which is <em>not</em> order-isomorphic to $S_\Omega$. Then either</p>
<ul>
<li><p>there is a (unique) $b \in S_\Omega$ such that $\{ y \in S_\Omega : y < b \}$ is order-isomorphic to $X$, and since $\{ y \in S_\Omega : y < b \}$ is countable, it follows that $X$ is also countable.</p></li>
<li><p>there is a (unique) $a \in X$ such that $\{ x \in X : x < a \}$ is order-isomorphic to $S_\Omega$. But then $\{ x \in X : x <_X a \}$ is itself an initial section of $X$ which is uncountable.</p></li>
</ul>
<p>So any well-ordered set which is not order-isomorphic to $S_\Omega$ is either countable, or has an uncountable proper initial section.</p>
|
2,985,172 | <p>Let <span class="math-container">$f(x) = \dfrac{1}{3}x^3 - x^2 - 3x.$</span> Part of the graph <span class="math-container">$f$</span> is shown below. There is a maximum point at <span class="math-container">$A$</span> and a minimum point at <span class="math-container">$B(3,-9)$</span>. </p>
<p><a href="https://i.stack.imgur.com/RtFSQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RtFSQ.jpg" alt="graph here"></a> </p>
<p>a) Find the coordinates of <span class="math-container">$A$</span>. </p>
<p>I already did this part and found it to be <span class="math-container">$(-1, 5/3)$</span>. </p>
<p>b) Write down the coordinates of:<br>
i. the image of <span class="math-container">$B$</span> after reflection in the <span class="math-container">$y-axis$</span>. </p>
<p>would this simply be the opposite? Like, (-3, -9). </p>
<p>ii. the image of B after translation by the vector (-2, 5). </p>
<p>Would this be (1, -4)? </p>
<p>iii. the image of B after reflection in the x-axis followed by a horizontal stretch with a scale factor of 1/2. </p>
<p>Would this be <span class="math-container">$(3, 9)$</span> multiplied by 1/2?</p>
| Parcly Taxel | 357,390 | <p>All your stated answers (to the problems before (b)(iii)) are correct. For (b)(iii), do the calculation in steps:</p>
<ul>
<li>Reflect <span class="math-container">$B$</span> in the <span class="math-container">$x$</span>-axis. This sends <span class="math-container">$(3,-9)$</span> to the point <span class="math-container">$(3,9)$</span>, negating the <span class="math-container">$y$</span>-coordinate.</li>
<li>Stretch horizontally with a scale factor of <span class="math-container">$\frac12$</span>. This corresponds to multiplying the <span class="math-container">$x$</span>-coordinate of the point we obtained after the first transformation by that scale factor of <span class="math-container">$\frac12$</span>, so we get the final answer of <span class="math-container">$\left(\frac32,9\right)$</span>.</li>
</ul>
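The three transformations can be written as tiny functions (the names are mine), reproducing the coordinates worked out above for $B=(3,-9)$:

```python
def reflect_y(p):       # reflection in the y-axis
    return (-p[0], p[1])

def reflect_x(p):       # reflection in the x-axis
    return (p[0], -p[1])

def translate(p, v):    # translation by the vector v
    return (p[0] + v[0], p[1] + v[1])

def stretch_x(p, k):    # horizontal stretch with scale factor k
    return (k * p[0], p[1])

B = (3, -9)
```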
|
3,143,649 | <p>I am reading the book <em>Random Perturbations of Dynamical Systems</em> by Freidlin and Wentzell (2nd edition). On page 20, they define a Markov process as follows:</p>
<blockquote>
<p>Let <span class="math-container">$(\Omega ,\mathcal F,\mathbb P)$</span> a probability space and <span class="math-container">$(X,\mathcal B)$</span> the state space. Let <span class="math-container">$(\mathcal F_t)$</span> a filtration. Let <span class="math-container">$(\mathbb P_x)_{x\in X}$</span> a familly of probability measure. Define the function <span class="math-container">$p$</span> as <span class="math-container">$$p(t,x,\Gamma )=\mathbb P_x\{X_t\in \Gamma \},\quad \Gamma \in \mathcal B, t\in [0,T],x\in X.$$</span> Then <span class="math-container">$X=(X_t)_{t\leq T}$</span> is a Markov process in <span class="math-container">$X$</span> if:<br>
a) <span class="math-container">$X$</span> is adapted to the filtration.<br>
b) <span class="math-container">$x\mapsto p(t,x,\Gamma )$</span> is measurable wrt <span class="math-container">$\mathcal B$</span>.<br>
c) <span class="math-container">$p(0,x, X\setminus \{x\})=0$</span>.<br>
d) <span class="math-container">$\mathbb P_x\{X_u\in \Gamma \mid \mathcal F_t\}=p(u-t,X_t,\Gamma )$</span> for all <span class="math-container">$t,u\in [0,T]$</span>, <span class="math-container">$t\leq u$</span>, <span class="math-container">$x\in X$</span> and <span class="math-container">$\Gamma \in \mathcal B$</span>.</p>
</blockquote>
<p>I am not sure how to interpret c) and d). Would these be correct? </p>
<p><strong>Q1)</strong> For c), is it <span class="math-container">$\mathbb P_x\{X_0=x\}=1$</span>? </p>
<p><strong>Q2)</strong> For d), is it <span class="math-container">$$\mathbb P_{X_0=0}\{X_{t+h}\in \Gamma \mid X_t=k\}=\mathbb P_{X_0=k}\{X_h\in \Gamma \}?$$</span></p>
<p>But I don't really know how to interpret it. </p>
| Gautam Shenoy | 35,983 | <p>To put it in simple words,
Q1: You begin in state <span class="math-container">$x$</span> with probability 1.</p>
<p>Q2: (the Markov property:) Given the entire history up to time <span class="math-container">$t$</span> (that's what <span class="math-container">$|\mathcal{F_t}$</span> means), the present sample <span class="math-container">$X_u$</span> depends only on the value of the most recent sample <span class="math-container">$X_t$</span>.</p>
<p>Also in d: I think you wrote <span class="math-container">$\mathcal{G}$</span> instead of <span class="math-container">$\Gamma$</span>.</p>
|
4,563,707 | <p>Sequence given: 6, 66, 666, 6666. Find <span class="math-container">$S_n$</span> in terms of n.</p>
<p>The common ratio of a geometric progression can be found from <span class="math-container">$\frac{T_n}{T_{n-1}} = r$</span>, where r is the common ratio and n is the term number.</p>
<p>When plugging in 66 as <span class="math-container">$T_n$</span> and 6 as <span class="math-container">$T_{n-1}$</span>, I got the following ratio: <span class="math-container">$ \frac {66}{6} = 11$</span>.</p>
<p>However, when I plugged in 666 as <span class="math-container">$T_n$</span> and 66 as <span class="math-container">$T_{n-1}$</span>, I got: <span class="math-container">$\frac {666}{66} = 10.09$</span>.</p>
<p>And when I plugged in 6666 and 666: <span class="math-container">$ \frac {6666}{666} = 10.009$</span>.</p>
<p>It's clear to me that the ratio is slowly decreasing, and seems to be approaching 10. Alas, this is about as far as I have gotten.</p>
<p>Looking at the answer scheme, the final answer is <span class="math-container">$ \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)}.$</span></p>
<p>The answer scheme does include a few steps, but frankly I couldn't understand the reasoning behind them; I guess I should post them anyway.</p>
<p><span class="math-container">$$ \frac {2}{3}[9 + 99 + 999 + 9999] $$</span></p>
<p><span class="math-container">$$ S_n = \frac {2}{3}{(10-1)} + (10^2-1) + ...(10^n-1) $$</span></p>
<p><span class="math-container">$$ = \frac {2}{3}[10^1+10^2+10^3...10^n] + \frac {2}{3}[-1-1-1-1...-1] $$</span></p>
<p><span class="math-container">$$ = \frac {2}{3}(10)\left(\frac {10^n-1}{10-1}\right) - \frac {2}{3}(n) $$</span>
<span class="math-container">$$ \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)} $$</span></p>
<p>I'm sorry if I did something stupid, but I have no idea where that <span class="math-container">$\frac {2}{3}$</span> came from, and even if I did, I don't understand the reasoning or the explanation behind it. I already asked my teacher, as well as my mother, both of whom yielded little in understanding the logic behind the solutions given in the answer scheme.</p>
<p>If anybody could offer an explanation, it would be greatly appreciated</p>
| insipidintegrator | 1,062,486 | <p>Let’s look at it another way:
<span class="math-container">$$6666=6\times 1111=6\times \dfrac{9999}{9}$$</span><span class="math-container">$$=\dfrac69\cdot(10^4-1)=\dfrac23(10^4-1).$$</span>
Thus, the general term is <span class="math-container">$$T_n=\underbrace{6666…6}_{n \ \text{times}\ 6}=6\times\dfrac{\overbrace{9999…9}^{n\ \text{times}\ 9}}{9}$$</span><span class="math-container">$$=\dfrac69\times (10^n-1)=\dfrac23(10^n-1)$$</span></p>
<p>So <span class="math-container">$$S_n=\sum_{i=1}^n T_i= \sum_{i=1}^n \dfrac23(10^i-1) = \dfrac23 \sum_{i=1}^n(10^i-1)$$</span><span class="math-container">$$=\dfrac23\bigg(\sum_{i=1}^n(10^i)\bigg)-\frac23 n$$</span> Can you take it from here?</p>
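An exact check of this closed form (the verification is my own, using exact rational arithmetic so no rounding can hide an error): the direct sum of $6, 66, 666, \dots$ matches $\frac{20}{27}(10^n-1)-\frac23 n$ for every $n$ tried.

```python
from fractions import Fraction

def S(n):
    """Partial sum 6 + 66 + ... + (n sixes), computed directly."""
    term, total = 0, 0
    for _ in range(n):
        term = 10 * term + 6   # 6, 66, 666, ...
        total += term
    return total

def S_formula(n):
    """The closed form from the answer scheme, evaluated exactly."""
    return Fraction(20, 27) * (10 ** n - 1) - Fraction(2, 3) * n
```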
|
1,903,263 | <p>I am developing an algorithm that approximates a curve using a series of linear* segments. The plot below is an example, with the blue being the original curve, and the red and yellow being an example of a two segment approximation. The x-axis is time and the y-axis is attenuation in dB. The ultimate goal is to use the least amount of segments while keeping the maximum error below a certain value. Clearly, since some portions of the curve are more linear than others, some segments can be longer than others.</p>
<p><a href="https://i.stack.imgur.com/yJoT6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yJoT6.png" alt="fitting segments to curve"></a></p>
<p>*The line segments are not actually linear, they are logarithmic. I am actually operating in log (dB) space, while the segments are linear in voltage.</p>
<p>My question is, how do I approach such an optimization problem? The method I am currently considering is to start at one end and select the longest segment possible, then move to the next segment and do the same. This will be good enough, in that it will keep me under my max error, but it may not be the best method. If it matters, I am working in Matlab.</p>
| Kuifje | 273,220 | <p>Your problem reminds me of a similar one developed <a href="http://dspace.mit.edu/bitstream/handle/1721.1/5097/OR-300-94-26970351.pdf" rel="nofollow">here</a>, in Chapter 4. </p>
<p>It is solved with a graph theory approach, more specifically with a shortest path algorithm that gives you the optimal set of knots.</p>
<p>It is pretty well detailed, take a look!</p>
|
7,080 | <p>What is the right definition of the symmetric algebra over a graded vector space V over a field k?</p>
<p>More generally: What is the right definition of the symmetric algebra over an object in a symmetric monoidal category (which is suitably (co-)complete)?</p>
<p>Two possible definitions come to my mind:</p>
<p>1) Take the tensor algebra over V and identify those tensors which differ only by an element of the symmetric group, i.e. take the coinvariants wrt. the symmetric group. The resulting algebra A is then the universal algebra together with a map V -> A such that the product of elements in V is commutative.</p>
<p>2) Take the tensor algebra over V and divide out the ideal generated by antisymmetric two-tensors. In this case, the resulting algebra A is the universal algebra together with a map V -> A such that the product of A vanishes on all antisymmetric two-tensors (one could say that all commutators of A vanish).</p>
<p>The definition 1) looks more natural and gives, for example, the polynomial ring in case V is of degree 0.</p>
<p>The definition 2) applied to a vector space shifted by degree 1 gives (up to degree shift) the exterior algebra over the unshifted vector space. However, in characteristic 2 for example, one doesn't get the polynomial ring if one starts with a vector space of degree 0.</p>
<p>Finally, both definitions have a shortcoming in that they don't commute well with base change.</p>
| Qiaochu Yuan | 290 | <p>My understanding is that the "right" way to define the symmetric algebra comes from a braiding that tells you how the symmetric group acts on tensor products. And an easy and general way to get such a braiding is to consider the category of representations of a <a href="https://mathoverflow.net/questions/4640/are-supervector-spaces-the-representations-of-a-hopf-algebra">quasi-triangular Hopf algebra</a>; this is a short way to define the category of supervector spaces and recover the usual notion of graded-commutativity there.</p>
<p>The problem is that when you say "graded" (say, with respect to a group) you're only specifying at best a Hopf algebra, not a quasi-triangular structure. So the answer depends on what possible quasi-triangular structures are floating around (in the $\mathbb{Z}/2\mathbb{Z}$ case on the group algebra of $\mathbb{Z}/2\mathbb{Z}$) and of course for a group of order $|G|$ terrible things are going to happen in characteristic dividing $|G|$.</p>
|
417,181 | <p>We have to prove that if the difference between two prime numbers greater than two is another prime, then that prime is $2$.
It can be proved in the following way.</p>
<p>1) $Odd - odd = even$.</p>
<p>Therefore the difference will always be even.</p>
<p>2) The only even prime number is $2$. Therefore the difference will be $2$ if the difference between primes is another prime.</p>
<p>I am looking for more proofs of this theorem. Any help will be appreciated.</p>
| A.S | 24,829 | <p>The series $$\sum_{n=4}^\infty \frac{1}{n\log n \log\log n}$$ does not converge. Use the integral test.</p>
|
69,658 | <p>Given a fiber bundle $f: E\rightarrow M$ with connected fibers we call the image $f^*(\Omega^k(M))\subset \Omega^k(E)$ the subspace of basic forms. Clearly, for any vertical vector field $X$ on $E$ we have that the interior product $i_X(f^*\omega)$ and the Lie derivative $L_X(f^*\omega)$ vanish for all $\omega \in \Omega^k(M)$. Is the converse true? That is, if $\alpha \in \Omega^k(E)$ is a form such that $i_X(\alpha)=0$ and $L_X(\alpha)=0$ for all vertical vector fields $X$ on $E$, is it true that $\alpha$ is a basic form? I believe so, but I am not sure how to prove it. Thanks for your help.</p>
| Willie Wong | 1,543 | <p>If you added a connectedness assumption (which is essential, as I mentioned in my comment to your main question), what you desire follows from </p>
<p><strong>Prop</strong> Let $f:M\to N$ be a submersion such that for every $y\in N$, $f^{-1}(y)$ is a connected submanifold of $M$. Then if $\mu\in\Omega^k(M)$ is a differential form satisfying $i_Y\mu = 0$ and $i_Y d\mu = 0$ for any $Y\in \ker df$, then there is a unique $\nu\in\Omega^k(N)$ such that $\mu = f^*\nu$. </p>
<p><em>Sketch of proof</em> It suffices to show that the differential form defined by $\nu(f_*X_1,\ldots ,f_*X_k) = \mu(X_1,\ldots, X_k)$ is well-defined. By the assumption that $i_Y\mu = 0$ if $Y\in \ker df$, at a fixed base point $x\in M$ you have that if $f_*X_1 = f_*X'_1$, then $\mu(X_1,\ldots, X_k) = \mu(X'_1, X_2,\ldots,X_k)$. Now, it is clear that if $X$ is a vector field on $M$ such that its pushforward is well-defined (in the sense that there exists a vector field $Z$ on $N$ such that if $x_1,x_2\in f^{-1}(y)$, $df(X)|_{x_1} = df(X)|_{x_2} = Z|_y$) and if $Y$ is a vector field in $\ker df$, we have that $[X,Y] \in \ker df$. This can be easily checked by letting $[X,Y]$ act on the pullback of functions defined on $N$. From this you can conclude, using $i_Yd\mu = 0$, that for any vector fields $X_1, \ldots, X_k$ such that the pushforward is well-defined, $\mu(X_1,\ldots, X_k)$ is constant along connected components of $f^{-1}(y)$. q.e.d.</p>
<hr>
<p><em>Addendum to address comments</em>: Consider the scalar $\mu(X_1,\ldots, X_k)$ on $M$. Let $Y$ be a vertical vector field, then we have that
$$ Y(\mu(X_1,\ldots,X_k)) = (\mathcal{L}_Y\mu)(X_1,\ldots,X_k) + \mu([Y,X_1],X_2,\ldots,X_k) + \cdots + \mu(X_1,\ldots, X_{k-1}, [Y,X_k]) $$
using Leibniz rule for Lie derivatives. So local constancy of the scalar field needs that $\mathcal{L}_Y\mu \equiv 0$ and that $[Y,X_l]$ is vertical. </p>
|
2,601,601 | <p>Consider the complex matrix $$A=\begin{pmatrix}i+1&2\\2&1\end{pmatrix}$$ and the linear map $$f:M(2,\mathbb{C})\to M(2,\mathbb{C}),\qquad X\mapsto XA-AX.$$</p>
<p>I want to find a basis of $\ker f$.</p>
<p>I already know the canonical basis $\{E_{11},E_{12},E_{21},E_{22}\}$ and computed $$f(E_{11})=\begin{pmatrix}0&2\\-2&0\end{pmatrix},f(E_{12})=\begin{pmatrix}2&0\\0&-2\end{pmatrix},f(E_{21})=\begin{pmatrix}-2&0\\0&2\end{pmatrix},f(E_{22})=\begin{pmatrix}0&-2\\2&0\end{pmatrix}$$</p>
<p>Does this help to find the basis?</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>From your computations, we know that the range of $f$ has dimension two, so we just need two linearly independent matrices $K_1$ and $K_2$ so that $f(K_1)=f(K_2)=0$.</p>
<p>Set $K_1=(E_{11}+E_{22})/2$ and $K_2=(E_{12}+E_{21})/2$. From the question's computation results, it's easy to see that $f(K_1)=f(K_2)=0$. Finally, observe that $K_1$ and $K_2$ are linearly independent, so $\mathrm{ker} f=\mathrm{span}\{K_1,K_2\}=\left\{\begin{pmatrix}a&b\\b&a\end{pmatrix}:a,b\in\Bbb{C}\right\}$.</p>
|
2,601,601 | <p>Consider the complex matrix $$A=\begin{pmatrix}i+1&2\\2&1\end{pmatrix}$$ and the linear map $$f:M(2,\mathbb{C})\to M(2,\mathbb{C}),\qquad X\mapsto XA-AX.$$</p>
<p>I want to find a basis of $\ker f$.</p>
<p>I already know the canonical basis $\{E_{11},E_{12},E_{21},E_{22}\}$ and computed $$f(E_{11})=\begin{pmatrix}0&2\\-2&0\end{pmatrix},f(E_{12})=\begin{pmatrix}2&0\\0&-2\end{pmatrix},f(E_{21})=\begin{pmatrix}-2&0\\0&2\end{pmatrix},f(E_{22})=\begin{pmatrix}0&-2\\2&0\end{pmatrix}$$</p>
<p>Does this help to find the basis?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>You have already found $$f(E_{11})=\begin{pmatrix}0&2\\-2&0\end{pmatrix},f(E_{12})=\begin{pmatrix}2&0\\0&-2\end{pmatrix},f(E_{21})=\begin{pmatrix}-2&0\\0&2\end{pmatrix},f(E_{22})=\begin{pmatrix}0&-2\\2&0\end{pmatrix}$$Note that $$f(E_{11})+f(E_{22})=\begin{pmatrix}0&0\\0&0\end{pmatrix}$$ Similarly $$ f(E_{12})+f(E_{21})=\begin{pmatrix}0&0\\0&0\end{pmatrix}$$ Thus $E_{11}+E_{22}$ and $E_{12}+E_{21}$ belong to $\ker f$.</p>
<p>Therefore, $$\{E_{11}+E_{22},\,E_{12}+E_{21}\} = \left\{ \begin{pmatrix}1&0\\0&1\end{pmatrix}, \begin{pmatrix}0&1\\1&0\end{pmatrix}\right\}$$ is a basis for $\ker f$.</p>
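Taking the images of the canonical basis listed in the question at face value, the two cancellations used by both answers can be verified numerically (this checks only the stated table, by linearity, not the original matrix computation):

```python
# images of the canonical basis, as stated in the question
f_E11 = [[0, 2], [-2, 0]]
f_E12 = [[2, 0], [0, -2]]
f_E21 = [[-2, 0], [0, 2]]
f_E22 = [[0, -2], [2, 0]]

def madd(A, B):
    # entrywise sum of two 2x2 matrices
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

zero = [[0, 0], [0, 0]]
# by linearity, f(E11 + E22) and f(E12 + E21) are both zero
print(madd(f_E11, f_E22) == zero, madd(f_E12, f_E21) == zero)  # True True
```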
|
2,886,973 | <p>The Wikipedia article gives an <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem#Interpretation_and_significance" rel="nofollow noreferrer">interesting example</a> of the Gauss-Bonnet theorem:</p>
<blockquote>
<p>As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. ... It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0.</p>
</blockquote>
<p>But there are of course other ways of closing up the square by identifying points on its boundary. If we identify opposite sides with one pair's orientation flipped, we get the Klein bottle, which also has Euler characteristic 0. But if we flip both pairs' orientations then we get the real projective plane, which has Euler characteristic 1. And if we identify the entire boundary together then <a href="https://math.stackexchange.com/questions/24785/the-n-disk-dn-quotiented-by-its-boundary-sn-1-gives-sn">we get</a> the sphere, with Euler characteristic 2. These are all closed surfaces, so the Gauss-Bonnet theorem would naively imply that their total curvature equals $2\pi$ times their Euler characteristic, but this only works for the torus and Klein bottle identifications, not for the real projective plane or sphere identifications. Why?</p>
<p>For concreteness, consider the manifold $M$, the closed square (or unit disk) quotiented by its boundary. I think that $M$ has the topological structure of $S^2$ but not the differential structure - i.e. it's homeomorphic but not diffeomorphic to $S^2$. Is this correct? But $M$ can't be an <a href="https://en.wikipedia.org/wiki/Exotic_sphere" rel="nofollow noreferrer">exotic sphere</a> because they don't exist in two dimensions, so it must not be a differentiable manifold at all. If I'm correct, then the identification process means that $M$ is not actually differentiable at the point which is the identified boundary - it's perfectly homogeneous as a topological manifold, but fails to be a differentiable manifold because there's one problematic point at which the Gaussian curvature is undefined. Similarly, I guess the identified-edge square is homeomorphic but not diffeomorphic to the real projective plane? Then are the squares with the torus and Klein bottle identifications differentiable at the identified edges and corners? If so, why are the torus- and Klein-bottle-identified squares differentiable at their boundary but not the RPP- and sphere-identified squares?</p>
| Julian Rosen | 28,372 | <p>There is a version of the Gauss-Bonnet formula that works for manifolds that are not smooth.</p>
<p>Gaussian curvature is a curvature density, and its integral over a region gives the total curvature in that region. There is also a notion of point curvature: if you have a bunch of angles summing to $\theta$ that meet at a point, the point curvature (also called the angular defect) there is defined to be $2\pi-\theta$. If you have a closed manifold that is smooth away from finitely many points, then the total curvature is defined to be the integral of the Gaussian curvature where it is defined, <em>plus</em> the sum of the point curvatures at the non-smooth points. Then the Gauss-Bonnet formula holds with this modified definition of total curvature.</p>
<p>To illustrate, consider the square with opposite edges identified with a flip. The curvature density is $0$ everywhere in the interior of the square. However, opposite vertices get identified, and at each pair of vertices, we only see $\pi/2+\pi/2=\pi$ total angle. This means the point curvature at each of the non-smooth points of the resulting surface is $2\pi-\pi=\pi$. So we can compute that the total curvature is $\pi+\pi=2\pi$, which indeed equals $2\pi\cdot\chi(\mathbb{RP}^2)$.</p>
<p>I don't know if this theorem can be made to apply to the square with the boundary collapsed to a point. However, there is another way to obtain $S^2$ from the square, by identifying edges in the following way:
<a href="https://i.stack.imgur.com/2dbEh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2dbEh.png" alt="enter image description here"></a></p>
<p>There are three non-smooth points: the head of the single arrows, the head of the double arrows, and the common tail of all arrows. These points have curvature $3\pi/2$, $3\pi/2$, and $\pi$, respectively, and the total curvature $3\pi/2+3\pi/2+\pi=4\pi$. Here again, this agrees with $2\pi\cdot\chi(S^2)$.</p>
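The angular-defect bookkeeping in both examples is easy to tally; the defects below are taken from the answer and compared with $2\pi\chi$:

```python
import math

two_pi = 2 * math.pi

# RP^2 from the square: two singular points, each seeing total angle pi
rp2_defects = [two_pi - math.pi, two_pi - math.pi]
assert math.isclose(sum(rp2_defects), two_pi * 1)  # chi(RP^2) = 1

# S^2 from the square: three singular points seeing angles pi/2, pi/2, pi
s2_defects = [two_pi - math.pi / 2, two_pi - math.pi / 2, two_pi - math.pi]
assert math.isclose(sum(s2_defects), two_pi * 2)  # chi(S^2) = 2

print("both totals equal 2*pi*chi")
```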
|
253,271 | <p>I recently found that <code>\[Gradient]</code> and <code>\[InlinePart]</code> both expand (contract) to special symbols in MMA.</p>
<p>So far as I can tell (see <a href="https://mathematica.stackexchange.com/questions/134506/inlinepart-what-is-it-and-what-happened-to-it">InlinePart. What is it and what happened to it?</a>) they have no built-in meaning. Further, I can't find any Wolfram documentation of them, even in <a href="https://reference.wolfram.com/language/guide/ListingOfNamedCharacters.html" rel="nofollow noreferrer">https://reference.wolfram.com/language/guide/ListingOfNamedCharacters.html</a>.</p>
<p>Is there some introspection that can list all special symbols that expand like <code>\[stuff]</code>? The same goes for symbols entered as <code>\[AliasDelimiter]stuff\[AliasDelimiter]</code>.</p>
<hr />
<p>Here's what the symbols look like</p>
<p><a href="https://i.stack.imgur.com/ZCKJZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZCKJZ.png" alt="symbols" /></a></p>
<hr />
<p>OK the second part of the question concerns <code>InputAlias</code>s, which don't seem to be collected in any one place, but are sprinkled throughout the filesystem, for instance, in Mathematica/SystemFiles/FrontEnd/TextResources/CommonFrontEndInit.tr</p>
<p>This question, then, amounts to basically lamenting that no method so far lists <em>all</em> of the existing glyphs. So far there's</p>
<ul>
<li>the Wolfram ListingOfNamedCharacters, missing many</li>
<li>UnicodeCharacters.tr, missing a bit fewer glyphs</li>
<li>Brute forcing with <code>"PrintableASCII"</code> export of every char code, missing only a couple</li>
</ul>
| Adam | 74,641 | <p>I copy pasted the input strings from the Listing of (not all) Named Characters, excluding the unicode characters and dashes. There are 1009 elements in that <code>list</code>.</p>
<p>I went grepping for the text <code>\[Gradient]</code>.</p>
<ul>
<li>Mathematica/SystemFiles/Libraries/Linux-x86-64/libWolframEngine.so has it, compiled in for some reason</li>
<li>Mathematica/SystemFiles/FrontEnd/TextResources/UnicodeCharacters.tr has it, bingo!</li>
</ul>
<p>I don't know how to read the .tr files, but sure enough the symbols, categorized and with unicode codes, are there. There are 972 of them after taking</p>
<pre><code>Cases[list,x_/;StringStartsQ[x,"\\["]\[And]StringEndsQ[x,"]"]\[And]StringLength@x>3]
</code></pre>
<p>and let's put them in <code>unlisted</code>. Now</p>
<pre><code>DeleteCases[unlisted,Alternatives@@list]
</code></pre>
<p>yields</p>
<pre><code>{"\\[Rupee]", "\\[VectorGreater]", "\\[VectorGreaterEqual]",
"\\[VectorLess]", "\\[VectorLessEqual]", "\\[Limit]", "\\[MaxLimit]",
"\\[MinLimit]", "\\[CubeRoot]", "\\[Minus]", "\\[DivisionSlash]",
"\\[Laplacian]", "\\[Divergence]", "\\[Curl]", "\\[ProbabilityPr]",
"\\[ExpectationE]", "\\[Shah]", "\\[ContinuedFractionK]",
"\\[Gradient]", "\\[InlinePart]", "\\[Villa]", "\\[Akuz]",
"\\[Andy]", "\\[Spooky]", "\\[UnknownGlyph]",
"\\[COMPATIBILITYNoBreak]", "\\[KeyBar]", "\\[ShiftKey]",
"\\[Mod1Key]", "\\[Mod2Key]", "\\[PageBreakAbove]",
"\\[PageBreakBelow]", "\\[NumberComma]", "\\[FreeformPrompt]",
"\\[WolframAlphaPrompt]"}
</code></pre>
<p>which is pretty cool. No respect for India in the docs! Also I'm disappointed that <code>\[Spooky]</code> just shows up as the rectangle with x on my system.</p>
<p>Unfortunately, I've been beaten to the punch: <a href="https://mathematica.stackexchange.com/questions/135805/which-operators-are-missing-from-the-official-precedence-table">Which operators are missing from the official precedence table?</a> already notes UnicodeCharacters.tr.</p>
|
384,450 | <p>I don't know what this double-arrow $\twoheadrightarrow$ means!</p>
| FiveLemon | 76,591 | <p>This is usually used in category theory to denote an <a href="http://en.wikipedia.org/wiki/Epimorphism" rel="nofollow noreferrer">epimorphism</a>. </p>
<p>Related question:
<a href="https://math.stackexchange.com/questions/20015/special-arrows-for-notation-of-morphisms">Special arrows for notation of morphisms</a></p>
|
29,703 | <p>For an <a href="http://www.bekirdizdaroglu.com/ceng/Downloads/ISCE10.pdf">image denoising problem</a>, the author has a functional $E$ defined </p>
<p>$$E(u) = \iint_\Omega F \;\mathrm d\Omega$$</p>
<p>which he wants to minimize. $F$ is defined as </p>
<p>$$F = \|\nabla u \|^2 = u_x^2 + u_y^2$$</p>
<p>Then, the E-L equations are derived:</p>
<p>$$\frac{\partial E}{\partial u} = \frac{\partial F}{\partial u} -
\frac{\mathrm d}{\mathrm dx} \frac{\partial F}{\partial u_x} -
\frac{\mathrm d}{\mathrm dy} \frac{\partial F}{\partial u_y} = 0$$</p>
<p>Then it is mentioned that gradient descent method is used to minimize the functional $E$ by using </p>
<p>$$\frac{\partial u}{\partial t} = u_{xx} + u_{yy}$$</p>
<p>which is the heat equation. I understand both equations, and have solved the heat equation numerically before. I have also worked with functionals. I do not understand, however, how the author jumps from the E-L equations to the gradient descent method. How is the time variable $t$ included? Any detailed derivation or proof of this relation would be welcome. I found some papers on the Net; the one by Colding <em>et al.</em> looked promising.</p>
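One way to see the connection concretely is to discretize: stepping $u$ by the discrete Laplacian (explicit Euler on the heat equation, with a stable time step) decreases the Dirichlet energy $E(u)$ monotonically, which is exactly what "gradient descent on $E$" means. A minimal sketch; the grid size, time step, and random initial data are arbitrary choices:

```python
import random

random.seed(0)
N, dt = 20, 0.1  # grid size and a stable explicit-Euler time step
u = [[random.random() for _ in range(N)] for _ in range(N)]

def energy(u):
    # discrete Dirichlet energy: sum of squared forward differences
    e = 0.0
    for i in range(N - 1):
        for j in range(N - 1):
            e += (u[i + 1][j] - u[i][j]) ** 2 + (u[i][j + 1] - u[i][j]) ** 2
    return e

def step(u):
    # explicit Euler for u_t = u_xx + u_yy on interior points
    v = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            lap = u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1] - 4 * u[i][j]
            v[i][j] = u[i][j] + dt * lap
    return v

energies = [energy(u)]
for _ in range(50):
    u = step(u)
    energies.append(energy(u))
print(energies[0] > energies[-1])  # True: the heat flow decreases E
```

Each step is $u \mapsto u + \Delta t\,\Delta u$, i.e. a gradient-descent step on $E$ with respect to the interior values, which is the sense in which the heat equation minimizes the functional.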
<p>References:</p>
<p><a href="http://arxiv.org/pdf/1102.1411">http://arxiv.org/pdf/1102.1411</a> (Colding <em>et al.</em>)</p>
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.1675&rep=rep1&type=pdf">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.1675&rep=rep1&type=pdf</a></p>
<p><a href="http://dl.dropbox.com/u/1570604/tmp/functional-grad-descent.pdf">http://dl.dropbox.com/u/1570604/tmp/functional-grad-descent.pdf</a></p>
<p><a href="http://dl.dropbox.com/u/1570604/tmp/gelfand_var_time.ps">http://dl.dropbox.com/u/1570604/tmp/gelfand_var_time.ps</a> (Gelfand and Romin)</p>
| Glen Wheeler | 4,322 | <p>This is essentially a matter of definitions. The steepest descent gradient flow of a functional $F$ in an inner product space $S(M,N)$ (for example) is a family $u:M\times [0,T)\rightarrow N$ which satisfies
$$
\partial_t F = - \lVert u_t \rVert^2.
$$
For example, suppose $\Sigma$ is a surface immersed in $\mathbb{R}^3$ (for simplicity) via an immersion $f:M\rightarrow\mathbb{R}^3$ and consider the Willmore functional
$$
\mathcal{W}(f) = \frac{1}{2}\int_M H^2 d\mu,
$$
where $H$ is the mean curvature of $M$. We wish to compute from this functional, the Willmore flow, which is the steepest descent gradient flow in $L^2(M,\mathbb{R}^3)$. To do this, one computes the first variation of $\mathcal{W}$ along normal variations of $f$ (the Willmore functional is invariant under tangential diffeomorphisms, (among other things) which are essentially reparametrisations).</p>
<p>Now, any critical point of the functional will have zero first variation. This is a simple fact from basic calculus. The equation "first-variation = 0" is the Euler-Lagrange equation. It is a necessary condition that all minimal points of the functional must satisfy, although it is not in general sufficient.</p>
<p>The Euler-Lagrange equation is
$$
\Delta H + H|A^o|^2 = 0,
$$
where $A^o$ is the tracefree second fundamental form. A detailed explanation of how one derives this equation can be found in the back of Riemannian Geometry by Willmore. Any immersion satisfying this equation is a critical point of the Willmore functional and is called a Willmore surface.</p>
<p>Finally, suppose we have a one-parameter family of immersions $f:M\times[0,T)\rightarrow\mathbb{R}^3$ satisfying
$$
\partial_t^\perp f = \Delta H + H|A^o|^2.
$$
Along this family of immersions we have
$$
\partial_t\mathcal{W} = -\int_M |\Delta H + H|A^o|^2|^2 d\mu,
$$
and thus it is by definition the steepest descent gradient flow of $\mathcal{W}$ in $L^2$. Usually one doesn't bother writing all that (or any of it) and just goes directly from the Euler-Lagrange operator in some function space to the gradient flow, since it is quite straightforward.</p>
|
3,276,877 | <blockquote>
<p>Insert <span class="math-container">$13$</span> real numbers between the roots of the equation <span class="math-container">$x^2 +x−12 = 0$</span> in such a way that these <span class="math-container">$13$</span> numbers together with the roots of the equation form the first <span class="math-container">$15$</span> elements of a sequence. Write down in explicit form the general ($n$th) element of the resulting sequence.</p>
</blockquote>
<p>Both roots of <span class="math-container">$x^2 +x−12 = 0$</span> are real, since the discriminant is <span class="math-container">$D= 49>0$</span>; they are:
<span class="math-container">$x = \frac{-1 \pm 7}{2}= 3, -4$</span>.</p>
<p>i) Form an arithmetic sequence, i.e. the distance between the terms is the same. <br>Insert <span class="math-container">$13$</span> reals between these two in an equidistant manner.<br>
Since the total distance is <span class="math-container">$7$</span>, the common difference must be <span class="math-container">$\frac {7}{14}=\frac12$</span>.<br>
So, the first term is at <span class="math-container">$-4$</span>, the next at <span class="math-container">$-4+\frac {7}{14}=-\frac {49}{14}$</span>, & so on.</p>
<p>ii) Make the distance double with each next point, i.e. the gaps form a geometric progression.<br>Let the first gap be <span class="math-container">$a$</span>, the common ratio be <span class="math-container">$r=2$</span>, & <span class="math-container">$\,ar^{14}=2^{14}a\,$</span> the maximum gap between consecutive terms.<br>
The sum of the geometric series is given by:
<span class="math-container">$a+ar+ar^2+\cdots+ar^{14}$</span>, or<br>
<span class="math-container">$a+2a+4a+8a+16a+\cdots+2^{14}a = a\frac{2^{15}-1}{2-1}=a(2^{15}-1)$</span></p>
<p>The last term <span class="math-container">$\,ar^{14}=3\implies a= \frac{3}{r^{14}} = \frac{3}{2^{14}}.$</span></p>
<p>So, the series starts at the second point (i.e., the one after <span class="math-container">$-4$</span>).<br>
This second (starting) point is at : <span class="math-container">$-4+\frac{3}{2^{14}}$</span>, third point at : <span class="math-container">$-4+3\frac{3}{2^{14}}$</span>, fourth point at : <span class="math-container">$-4+7\frac{3}{2^{14}}$</span>, <br>
The last point should act as a check, as its value is <span class="math-container">$\,3\,$</span>, giving us <span class="math-container">$-4+\frac{3}{2^{14}}(2^{15}-1)$</span>, which should equal <span class="math-container">$3$</span>, but it does not.</p>
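For part i), the arithmetic construction checks out numerically: starting at $-4$ with common difference $\frac{7}{14}=\frac12$, the 15th term is $3$, and the general term is $a_n = -4 + \frac{n-1}{2}$. A quick verification with exact rationals:

```python
from fractions import Fraction

a1, d, n_terms = Fraction(-4), Fraction(7, 14), 15
terms = [a1 + k * d for k in range(n_terms)]

print(len(terms), terms[0], terms[-1])  # 15 -4 3
# the 13 inserted numbers lie strictly between the roots -4 and 3
assert all(-4 < t < 3 for t in terms[1:-1])
```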
| PM. | 416,252 | <p>Exterior angle of triangle = sum of 2 interior opposite angles
<span class="math-container">$$
\angle ACB = 2 \angle EAC \\
\angle ABC = 2 \angle DAB \\
$$</span>
<span class="math-container">$$
\Rightarrow \angle EAC + \angle DAB = 0.5(\angle ACB + \angle ABC) = 0.5(180-\alpha)
$$</span>
<span class="math-container">$$
\angle DAE = 90-0.5\alpha + \alpha
$$</span></p>
|
37,052 | <p>This is my first question on MathOverflow, so I hope my etiquette is up to par here.</p>
<p>My question is regarding a <span class="math-container">$3\times3$</span> magic square constructed using the la Loubère method (see <a href="http://en.wikipedia.org/wiki/Magic_square#Method_for_constructing_a_magic_square_of_odd_order" rel="nofollow noreferrer">la Loubère method</a>)</p>
<p>Using the method, I have constructed a magic square and several semimagic squares (where one or both of the diagonals do not add up to a magic sum) with a program written on my graphing calculator. After playing around with the program, I was shocked that the determinants of these <span class="math-container">$3\times3$</span> magic squares are all the same (specifically -360). Why is this so? (I am still an undergraduate so please go easy on the math :] )</p>
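For reference, the phenomenon is easy to reproduce on the standard $3\times3$ square produced by the Siamese (la Loubère) method:

```python
def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# 3x3 magic square from the la Loubere (Siamese) method
M = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]

assert all(sum(row) == 15 for row in M)  # magic row sums
print(det3(M))  # -360
```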
| Marcel Bischoff | 10,718 | <p>One should be able to obtain the formula from the appendix of:</p>
<ul>
<li>Hellmut Baumgärtel, Matthias Jurke, Fernando Lledó, <em>Twisted duality of the CAR-Algebra</em>, J.Math.Phys. 43 (2002) 4158-4179, <a href="https://doi.org/10.1063/1.1483376" rel="nofollow noreferrer">https://doi.org/10.1063/1.1483376</a>, <a href="http://arxiv.org/abs/math-ph/0204029" rel="nofollow noreferrer">http://arxiv.org/abs/math-ph/0204029</a></li>
</ul>
<p>They have a formula for all vectors; for the vacuum expectation only the summand with <span class="math-container">$2p=n$</span> contributes. They use Araki's self-dual CAR algebra, and if you consider <span class="math-container">$a(f)$</span> for <span class="math-container">$f=\Gamma f$</span> it should equal your <span class="math-container">$c(f)+c(f)^\ast$</span>.</p>
|
2,445,023 | <p>I have trouble calculating which function grows faster. </p>
<p>$f(n) = 3\log_4 n + \sqrt{n} + 3 \\
g(n) = 4\log_3 n + \log n + 200$</p>
<p>Can someone let me know how to solve this? </p>
| Penguino | 90,137 | <p>You can ignore the constants at the end because they don't affect the growth rate of the functions. If you then compare $e^{f(n)}$ and $e^{g(n)}$, you will see that $e^{g(n)}$ is of the form $c\,n^{a}$ (a polynomial in $n$), while $e^{f(n)}$ is of the form $c\,n^{b}e^{\sqrt{n}}$. Can you tell which of these grows faster?</p>
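Numerically the hint is easy to confirm: the $\sqrt{n}$ term in $f$ eventually dominates every logarithmic term, so the ratio $f(n)/g(n)$ grows without bound:

```python
import math

def f(n):
    return 3 * math.log(n, 4) + math.sqrt(n) + 3

def g(n):
    return 4 * math.log(n, 3) + math.log(n) + 200

# the ratio f/g increases and eventually exceeds 1
ratios = [f(10 ** k) / g(10 ** k) for k in (2, 4, 6, 8)]
print(ratios)
```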
|
2,107,837 | <p>Let $f:\mathbb{R}^2\to\mathbb{R}^2$ be a continuous map such that $\forall \ x : |x-f(x)|<5777$. </p>
<p>Show that $f$ is surjective.</p>
| John | 253,426 | <p>The notation is not very precise, but $X\cap Y$ is the event "both $X$ and $Y$ happen". This means "I do not get number 3 on the first throw" AND "I do not get number 3 on the first four throws", which is equivalent to saying "I do not get number 3 on the first four throws", which is $Y$.</p>
|
2,107,837 | <p>Let $f:\mathbb{R}^2\to\mathbb{R}^2$ be a continuous map such that $\forall \ x : |x-f(x)|<5777$. </p>
<p>Show that $f$ is surjective.</p>
| Community | -1 | <p>Note that $X\cap Y=Y$ if and only if $Y\subset X$. </p>
|
3,669,937 | <p>I had this problem where I had the map $\varphi: \mathbb Z[i] \to \mathbb Z/(2)$ where $\varphi(a+bi)=\bar{a}+\bar{b}$. I had to find the kernel and prove that it is an ideal. I proved that the kernel is formed by all the Gaussian integers $a+bi$ such that $a+b$ is even, but I need to find a generator of the ideal.</p>
| Pedro Juan Soto | 601,282 | <p>Since <span class="math-container">$\mathbb{Z}[i]$</span> is a <a href="https://en.wikipedia.org/wiki/Gaussian_integer#Euclidean_division" rel="nofollow noreferrer">Euclidean domain</a>, which implies that it is a <a href="https://en.wikipedia.org/wiki/Gaussian_integer#Principal_ideals" rel="nofollow noreferrer">PID</a>, we have that any ideal <span class="math-container">$I \subset \mathbb{Z}[i]$</span> is generated by any nonzero element of <span class="math-container">$I$</span> with the smallest norm, where the norm is defined by <span class="math-container">$a+bi \mapsto a^2 + b^2$</span>. Therefore we need to find a nonzero element <span class="math-container">$a+bi$</span> that satisfies <span class="math-container">$a+b \equiv 0 \ (\text{mod } 2)$</span> and has the smallest possible <span class="math-container">$a^2+b^2$</span>. </p>
<blockquote>
<p>Therefore <span class="math-container">$1+i$</span> works; <span class="math-container">$1-i$</span>, <span class="math-container">$-1+i$</span>, and <span class="math-container">$-1-i$</span> also work. </p>
</blockquote>
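The claim can be checked directly: $\frac{a+bi}{1+i} = \frac{(a+b)+(b-a)i}{2}$, which lies in $\mathbb{Z}[i]$ exactly when $a+b$ is even (note that $a+b$ and $b-a$ always have the same parity). A small exhaustive check:

```python
def divisible_by_1_plus_i(a, b):
    # (a+bi)/(1+i) = ((a+b) + (b-a)i)/2 is a Gaussian integer
    # iff both real and imaginary parts are integers
    return (a + b) % 2 == 0 and (b - a) % 2 == 0

for a in range(-5, 6):
    for b in range(-5, 6):
        # divisibility by 1+i is equivalent to a+b being even
        assert divisible_by_1_plus_i(a, b) == ((a + b) % 2 == 0)

print("kernel elements are exactly the multiples of 1+i on this range")
```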
|
167,262 | <p>I make a circle with a given radius as below</p>
<pre><code>Ctest = Table[{0.05*Cos[Theta*Degree], 0.05*Sin[Theta*Degree]}, {Theta, 1, 360}] // N;
</code></pre>
<p>And herewith is my list of data points</p>
<pre><code>pts = {{0., 0.}, {0.00493604, -0.00994539}, {0.00987001, -0.0198918}, {0.0148019, -0.0298392}, {0.0197318, -0.0397877}, {0.0246596, -0.0497372}, {0.0295853, -0.0596877}, {0.0345089, -0.0696392}, {0.0394305, -0.0795918}, {0.04435, -0.0895453}, {0.0492675, -0.0994999}, {0.0541829, -0.109456}, {0.0590962, -0.119412}, {0.0640075, -0.12937}, {0.0689166, -0.139328}, {0.0738238, -0.149288}, {0.0787288, -0.159249}, {0.0836318, -0.169211}, {0.0885327, -0.179173}, {0.0934316, -0.189137}, {0.0983284, -0.199102}, {0.103223, -0.209068}, {0.108116, -0.219034}, {0.113006, -0.229002}, {0.117895, -0.238971}, {0.122781, -0.248941}, {0.127666, -0.258912}};
</code></pre>
<p>I would like to find the intersection between the circle and the list of data points, as shown in the figure below. How can I make the program do this automatically? I mean that if one day I would like to change the radius of the circle, the program should still work.</p>
<p><a href="https://i.stack.imgur.com/ckZuP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ckZuP.jpg" alt="enter image description here"></a></p>
| jkuczm | 14,303 | <p>You could use symmetry of your equation to "factorize" problem of finding solutions. Since some variables have same coefficients, your equation is invariant with respect to permutations of those variables. You could group them, and replace with lower number of variables, representing sums of original variables.</p>
<p>$n_i$ variables associated with $i$-th coefficient naively contribute $2^{n_i}$ combinations of $0$ and $1$ to possible solutions. One variable representing their sum contribute only $n_i+1$ possible values. Splitting value of combined variable to original variables, after finding solution to reduced equation, is trivial.</p>
<p>With your equation we have:</p>
<pre><code>eq = 2 x11 + 8 x12 + 6 x13 - 4 x14 + 7 x15 + 3 x16 - 5 x17 + 4 x18 +
2 x19 + 2 x20 + 10 x21 + 10 x22 + 4 x23 + 3 x24 + 2 x25 + 2 x26 -
5 x27 + 9 x28 + 6 x29 + 9 x30 + 7 x31 - x32 - 4 x33 - 3 x34 +
x35 + 2 x36 + x37 - 2 x38 - x39 + x40 == 39;
coeffToVars = KeySort@GroupBy[
Replace[List @@ eq[[1]], c_. x_ :> {c, x}, {1}],
First -> Last
]
varGroups = Values@coeffToVars;
(* <|-5 -> {x17, x27}, -4 -> {x14, x33}, -3 -> {x34}, -2 -> {x38},
-1 -> {x32, x39}, 1 -> {x35, x37, x40}, 2 -> {x11, x19, x20, x25, x26, x36},
3 -> {x16, x24}, 4 -> {x18, x23}, 6 -> {x13, x29}, 7 -> {x15, x31},
8 -> {x12}, 9 -> {x28, x30}, 10 -> {x21, x22}|> *)
</code></pre>
<p>Naively there are <code>2^Length@Variables@eq[[1]] (* 1 073 741 824 *)</code> possible combinations. After grouping variables we are down to <code>Times @@ (Length /@ varGroups + 1) (* 4 408 992 *)</code>. Reduced problem is small enough that 1 GB of memory is enough to perform brute-force search of solutions.</p>
<pre><code>possibleSols = Tuples[Range[0, Length@#]& /@ varGroups]; // MaxMemoryUsed
(* 493 810 136 *)
possibleSols // Length
(* 4 408 992 *)
sols = Pick[possibleSols, possibleSols.Keys@coeffToVars, eq[[2]]]; // MaxMemoryUsed
(* 564 351 608 *)
sols // Length
(* 105 929 *)
</code></pre>
<p>There are $105\,929$ solutions of reduced problem.</p>
<p>To recover solutions in terms of original variables we construct tuples of permutations of lists with $k_i$ ones and $n_i - k_i$ zeros, where $n_i$ is number of variables associated with $i$-th coefficient, and $k_i$ is value of $i$-th variable of reduced problem in given solution.</p>
<pre><code>ungroupedSolutions // ClearAll
ungroupedSolutions[varGroups_ : {__List}] := With[
{
lengths = Length /@ varGroups,
ord = Ordering[Join @@ varGroups]
},
Developer`ToPackedArray[Join @@@ Tuples[
Permutations /@ MapThread[
Join[ConstantArray[1, #1], ConstantArray[0, #2 - #1]]&,
{#, lengths}
]
]][[All, ord]]&
]
ungroupedSolutionsNumber // ClearAll
ungroupedSolutionsNumber[varGroups_ : {__List}] := With[
{lengths = Length /@ varGroups},
Times @@ Binomial[lengths, #]&
]
</code></pre>
<p>Let's choose one of solutions of reduced problem and construct all corresponding solutions to original problem:</p>
<pre><code>sols[[13784]]
(* {0, 1, 0, 0, 2, 3, 4, 0, 2, 0, 1, 0, 1, 1} *)
tmpSols = ungroupedSolutions[varGroups]@%;
tmpSols // Length
(* 240 *)
</code></pre>
<p>above reduced solution corresponds to $240$ solutions of original problem. Let's check that they all solve original equation:</p>
<pre><code>And @@ (eq /. (Thread[Sort@Variables@eq[[1]] -> #] & /@ tmpSols))
(* True *)
</code></pre>
<p>Let's count all solutions of original equation:</p>
<pre><code>ungroupedSolutionsNumber@varGroups /@ sols // Total
(* 30 174 150 *)
</code></pre>
<p>there are $30\,174\,150$ of them.</p>
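The coefficient-grouping trick is not specific to Mathematica; here is a small Python sketch of the same idea on a toy 0/1 equation, comparing the grouped, binomial-weighted count with naive brute force (the toy coefficients and target are arbitrary):

```python
from itertools import product
from math import comb, prod

coeffs = [2, 2, 2, -1, -1, 3]  # toy instance with repeated coefficients
target = 4

# naive brute force over all 2^6 assignments
naive = sum(
    1
    for xs in product((0, 1), repeat=len(coeffs))
    if sum(c * x for c, x in zip(coeffs, xs)) == target
)

# grouped: one variable k_i per distinct coefficient c_i, each reduced
# solution standing for binomial(n_i, k_i) original solutions
groups = {}
for c in coeffs:
    groups[c] = groups.get(c, 0) + 1
cs, ns = list(groups), list(groups.values())
grouped = sum(
    prod(comb(n, k) for n, k in zip(ns, ks))
    for ks in product(*[range(n + 1) for n in ns])
    if sum(c * k for c, k in zip(cs, ks)) == target
)
print(naive, grouped)  # 10 10
```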
|
1,743,482 | <p>I was doing this question on convergence of improper integrals where in our book they have used the fact that $2+ \cos(t) \ge1$. Can somebody prove this?</p>
| MooS | 211,913 | <p>On $(\mathbb Z/p\mathbb Z)^*=\{1,2, \dotsc, p-1\}$ consider the equivalence relation, defined by $x \sim y$ iff $x=-y$ or $x=y^{-1}$ or $x=-y^{-1}$.</p>
<p>Since $p$ is odd, we have $x \neq -x$, i.e. any equivalence class has $2$ or $4$ elements:</p>
<ul>
<li>If $x^2=1$, we have $[x] = \{x,-x\}$</li>
<li>If $x^2=-1$, we have $[x] = \{x,-x=x^{-1}\}$</li>
<li>In any other case, we have $[x] = \{x,-x,x^{-1},(-x)^{-1}\}$</li>
</ul>
<p>We certainly have the equivalence class $\{1,p-1=-1\}$.</p>
<p>$p-1 \equiv 0 \pmod 4$ yields that there will be another equivalence class with $2$ elements. This equivalence class must come from the second case, since the first case occurs only once ($x^2=1$ implies $x=\pm 1$ over a field). This shows that $x^2=-1$ is solvable.</p>
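The conclusion is easy to confirm computationally: for odd primes in a range, $x^2 \equiv -1 \pmod p$ is solvable exactly when $p \equiv 1 \pmod 4$.

```python
def is_prime(n):
    # trial division, fine for this small range
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def has_sqrt_of_minus_one(p):
    # does x^2 = -1 (mod p) have a solution?
    return any(x * x % p == p - 1 for x in range(1, p))

for p in range(3, 200, 2):
    if is_prime(p):
        assert has_sqrt_of_minus_one(p) == (p % 4 == 1)

print("x^2 = -1 is solvable mod p iff p = 1 (mod 4), for odd primes < 200")
```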
|
423,718 | <p>I have a general question that deals with how one can find out whether a particular curve (in $\mathbb{C}$) is positively oriented. Take e.g. $y(t)=a+re^{it}$. Obviously this one is positively oriented, but is there a fast general method to prove this?</p>
| Fly by Night | 38,495 | <p>Orientation only really applies to simple, closed curves. </p>
<p>For a general curve, you might find the winding number to be useful.</p>
<p>See: <a href="http://mathworld.wolfram.com/ContourWindingNumber.html" rel="nofollow">http://mathworld.wolfram.com/ContourWindingNumber.html</a></p>
|
1,443,335 | <p>The rank of a linear transformation from V into W is defined:</p>
<blockquote>
<p>If V is finite-dimensional, the <em>rank</em> of T is the dimension of the range of T and ...</p>
</blockquote>
<p>However, there is no guarantee the range of T is finite-dimensional, in which case the dimension of it cannot be defined.</p>
| user133281 | 133,281 | <p>Note that $m$ is an integer iff $15-2n$ is a divisor of $17+n$. Now, $15 -2n \mid 17+n$ if and only if $$15-2n \mid 2(17+n)+(15-2n) = 49$$ (note that $15-2n$ is odd). So we find an integral value of $m$ if and only if $15-2n$ is one of the divisors $-49$, $-7$, $-1$, $1$, $7$ or $49$ of $49$. This corresponds to $n$ being equal to one of $32$, $11$, $8$, $7$, $4$ or $-17$, with $m$ being equal to $-1$, $-4$, $-25$, $24$, $3$ or $0$. </p>
<p>The full solution set is
$$
(m,n) \in \{ (-25,8), (-4,11), (-1,32), (0,-17), (3,4), (24,7) \}.
$$</p>
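<p>As a sanity check, a brute-force scan over an (arbitrarily chosen) window of integers recovers exactly these six pairs:</p>

```python
# Brute-force cross-check: scan a window of integers n and keep those for which
# m = (17 + n) / (15 - 2n) is an integer. The window [-100, 100] is arbitrary;
# by the divisor argument above, no solutions exist outside it anyway.
solutions = sorted(((17 + n) // (15 - 2 * n), n)
                   for n in range(-100, 101)
                   if 15 - 2 * n != 0 and (17 + n) % (15 - 2 * n) == 0)
print(solutions)
# [(-25, 8), (-4, 11), (-1, 32), (0, -17), (3, 4), (24, 7)]
```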
|
361,862 | <p>I would like you to expose and explain briefly some examples of theorems having some hypothesis that are (as far as we know) actually necessary in their proofs but whose uses in the arguments are extremely subtle and difficult to note at a first sight. I am looking for hypothesis or conditions that appear to be almost absent from the proof but which are actually hidden behind some really abstract or technical argument. It would be even more interesting if this unnoticed hypothesis was not noted at first but later it had to be added in another paper or publication not because the proof of the theorem were wrong but because the author did not notice that this or that condition was actually playing a role behind the scene and needed to be added. And, finally, an extra point if this hidden hypothesis led to some important development or advance in the area around the theorem in the sense that it opened new questions or new paths of research. This question might be related with this <a href="https://mathoverflow.net/questions/352249/nontrivially-fillable-gaps-in-published-proofs-of-major-theorems">other</a> but notice that it is not the same as I am speaking about subtleties in proof that were not exactly incorrect but incomplete in the sense of not mentioning that some object or result had to be use maybe in a highly tangential way.</p>
<p>In order to put some order in the possible answers and make this post useful for other people I would like you to give references and to at least explain the subtleties that helps the hypothesis to hide at a first sight, expose how they relate with the actual proof or method of proving, and tell the main steps that were made by the community until this hidden condition was found, i.e., you can write in fact a short history about the evolution of our understanding of the subtleties and nuances surrounding the result you want to mention.</p>
<p>A very well known and classic example of this phenomenon is the theory of classical Greek geometry: although correctly developed in the famous work of Euclid, it was later found to be incompletely axiomatized, as there were some axioms that Euclid used but <a href="https://en.wikipedia.org/wiki/Euclidean_geometry#cite_note-6" rel="noreferrer">did not state</a> as such, mainly because these manipulations are so intuitive that it was not easy to recognize that they were being used in an argument. Happily, a better understanding of these axioms and their internal logical relations, through a period of study and research lasting millennia, led to the realization that these axioms, though not explicitly mentioned, were necessary, and to the development of new kinds of geometry and different geometrical worlds.</p>
<p>Maybe this one is (because of being the most classic and expanded through so many centuries and pages of research) the most known, important and famous example of the phenomena I am looking for. However, I am also interested in other small and more humble examples of this phenomena appearing and happening in some more recent papers, theorems, lemmas and results in general.</p>
<p>Note: I vote for doing this community wiki as it seems that this is the best way of dealing with this kind of questions.</p>
| Timothy Chow | 3,106 | <p>This is not a perfect example because the subtle hypotheses in question were not "unnoticed"; nevertheless I think it fulfills several of your other criteria. Let us define the "Strong Fubini theorem" to be the following statement:</p>
<blockquote>
<p>If <span class="math-container">$f:\mathbb{R}^2 \to \mathbb{R}$</span> is nonnegative and the iterated integrals <span class="math-container">$\iint f\,dx\,dy$</span> and <span class="math-container">$\iint f\,dy\,dx$</span> exist, then they are equal.</p>
</blockquote>
<p>The Strong Fubini theorem looks innocent enough, but without any measurability hypotheses, it is independent of ZFC. For example, Sierpinski showed that Strong Fubini is false if the continuum hypothesis holds.</p>
<p>In the other direction, a <a href="https://doi.org/10.1090/S0002-9947-1990-1025758-0" rel="noreferrer">paper by Joe Shipman</a> investigates a variety of interesting hypotheses that imply Strong Fubini, e.g., RVM ("the continuum is real-valued measurable"), which is equiconsistent with the existence of a measurable cardinal.
Here's another one: Let <span class="math-container">$\kappa$</span> denote the minimum cardinality of a nonmeasurable set, and let <span class="math-container">$\lambda$</span> denote the cardinality of the smallest union of measure-zero sets which covers <span class="math-container">$\mathbb{R}$</span>. Then the assertion that <span class="math-container">$\kappa < \lambda$</span> implies Strong Fubini.</p>
|
1,231,365 | <p>I am from a non-English speaking country. Should we say monotonous function or monotonic function?</p>
| Unit | 196,668 | <p>"Monotonic" or "monotone", but not "monotonous" (boring).</p>
|
1,231,365 | <p>I am from a non-English speaking country. Should we say monotonous function or monotonic function?</p>
| dilpreet | 230,441 | <p>"Monotonic" is the word to use. A monotonic function is one that is either entirely non-decreasing or entirely non-increasing. If it is <em>strictly</em> monotonic, then each value of the function is attained for exactly one $x$, so no value repeats.</p>
|
381,011 | <p>I should prove this claim:</p>
<blockquote>
<p>Every undirected graph with n vertices and $2n$ edges is connected.</p>
</blockquote>
<p>If it is false I should find a counterexample.
I was thinking of considering the complete graph with $n$ vertices. Such a graph is connected and contains $\frac{n(n-1)}{2}$ edges. Considering that $2n > \frac{n(n-1)}{2}\implies$ my graph is connected too. But I'm not sure this could be a solution, because even if my graph has $2n$ edges it doesn't have to be complete.
Can anybody help me?</p>
| ccorn | 75,794 | <p>Consider two disjoint copies of a graph with $m$ vertices and $2m$ edges, e.g. of $K_5$, the complete graph on $m=5$ vertices. With two copies, you have $n=2m$ vertices and $2n$ edges, but the graph is not connected. Therefore the claim is false.</p>
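<p>A quick computational sanity check of this construction, in plain Python with no graph library assumed:</p>

```python
# Two disjoint copies of K_5: vertices 0-4 and 5-9. This gives n = 10 vertices
# and 2n = 20 edges, yet the graph is disconnected.
from itertools import combinations

edges = [(u, v) for block in (range(0, 5), range(5, 10))
         for u, v in combinations(block, 2)]

def is_connected(n, edge_list):
    """Depth-first search from vertex 0 over an undirected edge list."""
    adj = {i: set() for i in range(n)}
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

print(len(edges), is_connected(10, edges))  # 20 False
```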
|
381,011 | <p>I should prove this claim:</p>
<blockquote>
<p>Every undirected graph with n vertices and $2n$ edges is connected.</p>
</blockquote>
<p>If it is false I should find a counterexample.
I was thinking of considering the complete graph with $n$ vertices. Such a graph is connected and contains $\frac{n(n-1)}{2}$ edges. Considering that $2n > \frac{n(n-1)}{2}\implies$ my graph is connected too. But I'm not sure this could be a solution, because even if my graph has $2n$ edges it doesn't have to be complete.
Can anybody help me?</p>
| Douglas S. Stones | 139 | <p>The claim is true for up to $6$ vertices. The smallest counterexample (unique up to isomorphism) is this graph on $7$ vertices:</p>
<p><img src="https://i.stack.imgur.com/nxUhB.png" alt="Smallest counterexample"></p>
<p>The vertices are ascribed their degree, so we can easily verify there are $14$ edges via the <a href="http://en.wikipedia.org/wiki/Handshaking_lemma" rel="nofollow noreferrer">Handshaking Lemma</a>.</p>
<p>For $n \geq 8$ there always exist counterexamples; we add $n-7$ vertices and connect them to the two blue vertices above, thereby adding $2(n-7)$ edges.</p>
|
3,628,358 | <p>As stated, I need to prove that, up to isomorphism, the only simple group of order <span class="math-container">$p^2 q r$</span>, where <span class="math-container">$p, q, r$</span> are distinct primes, is <span class="math-container">$A_5$</span> (the alternating group of degree 5).</p>
<p>Now I know the following: if <span class="math-container">$G$</span> is a simple group and <span class="math-container">$|G| = 60$</span>, then <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$A_5$</span>. However, I don't even know how to begin the proof that <span class="math-container">$|G| = 60$</span>, or anything similar.</p>
| Derek Holt | 2,820 | <p>Here is a sketch solution. I can give more detail, but it depends on which results you are familiar with.</p>
<p>Let <span class="math-container">$G$</span> be simple of order <span class="math-container">$p^2qr$</span>.
By Burnside's Transfer Theorem, <span class="math-container">$p$</span> must be the smallest of the three primes because, if for example <span class="math-container">$q$</span> was smallest then <span class="math-container">$G$</span> would have a normal <span class="math-container">$q$</span>-complement so would not be simple.</p>
<p>Let <span class="math-container">$P \in {\rm Syl}_p(G)$</span>. Then <span class="math-container">$P$</span> must be properly contained in its normalizer, since otherwise there would be a normal <span class="math-container">$p$</span>-complement by Burnside's Transfer Theorem. So we can assume that <span class="math-container">$|N_G(P)| = p^2q$</span>. Let <span class="math-container">$Q \in {\rm Syl}_q(N_G(P))$</span>, so <span class="math-container">$|Q|=q$</span>.</p>
<p>We cannot have <span class="math-container">$Q < C_G(P)$</span> or again there would be a normal <span class="math-container">$p$</span>-complement, so <span class="math-container">$|Q|$</span> must divide <span class="math-container">$|{\rm Aut}(P)|$</span>, which is equal to <span class="math-container">$p(p-1)$</span> if <span class="math-container">$P$</span> is cyclic and <span class="math-container">$p(p^2-1)$</span> if it is <span class="math-container">$C_p \times C_p$</span>.</p>
<p>But since <span class="math-container">$q$</span> is prime and <span class="math-container">$p<q$</span>, the only possibility is <span class="math-container">$p=2$</span>, <span class="math-container">$q=3$</span>, and <span class="math-container">$PQ \cong A_4$</span>.</p>
<p>But now <span class="math-container">$|{\rm Syl}_r(G)|$</span> must divide <span class="math-container">$|G:R|=12$</span>, where <span class="math-container">$R \in {\rm Syl}_r(G)$</span>, and also be congruent to <span class="math-container">$1$</span> mod <span class="math-container">$r$</span>. We cannot have <span class="math-container">$|{\rm Syl}_r(G)|=12$</span>, or <span class="math-container">$G$</span> would have a normal <span class="math-container">$r$</span>-complement, so the only possibility is <span class="math-container">$|{\rm Syl}_r(G)| = 6$</span> and <span class="math-container">$r=5$</span>.</p>
|
54,878 | <p>Consider the 2 parameter family of linear systems </p>
<p>$$\frac{DY(t)}{Dt} = \begin{pmatrix}
a & 1 \\
b & 1 \end{pmatrix} Y(t)
$$</p>
<p>In the ab plane, identify all regions where this system posseses a saddle, a sink, a spiral sink, and so on. </p>
<p>I was able to get the eigenvalues as $$\lambda = \frac{a+1}{2} \pm \frac{\sqrt{(a+1)^2 - 4(a-b)}}{2}$$</p>
<p>but need help in finding the sink and source.</p>
<p>I got the spiral sink as: if $a \lt -1$</p>
<p>spiral source if $a \gt -1$</p>
<p>and center if $a = -1$</p>
<p>Can someone check this?</p>
| 40 votes | 85,506 | <p>Summarizing the comments: the best way to begin is to look at determinant $a-b$ and trace $a+1$: </p>
<ul>
<li>$a-b<0$: saddle</li>
<li>$a-b> 0$ and $a+1=0$: stable center</li>
<li>$a-b> 0$ and $a+1<0$: stable node or spiral, depending on $(a+1)^2-4(a-b)$ being positive or negative</li>
<li>$a-b> 0$ and $a+1>0$: unstable node or spiral, depending on $(a+1)^2-4(a-b)$ being positive or negative</li>
</ul>
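<p>This classification is easy to mechanize; here is an illustrative sketch (the function and its labels are made up for this answer, and borderline cases with $a-b=0$ are simply lumped into "degenerate"):</p>

```python
# Illustrative classifier for the equilibrium of Y' = [[a, 1], [b, 1]] Y,
# following the trace/determinant table above.
def classify(a, b):
    tr = a + 1          # trace
    det = a - b         # determinant
    disc = tr * tr - 4 * det
    if det < 0:
        return "saddle"
    if det > 0:
        if tr == 0:
            return "center"
        kind = "spiral" if disc < 0 else "node"
        return ("stable " if tr < 0 else "unstable ") + kind
    return "degenerate"

print(classify(0, 1))    # det = -1           -> "saddle"
print(classify(-1, -2))  # tr = 0, det = 1    -> "center"
print(classify(-3, -4))  # tr = -2, disc = 0  -> "stable node"
```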
|
697,984 | <p>I want to check whether the position operator $A$, where $Af(x)=xf(x)$ , is self-adjoint. For this to be true it has to be Hermitian and also the domains of it and its adjoint must be equal. The Hilbert space I'm working with is of course $L^2(\mathbb{R}) $ with the natural inner product. The problem I'm having is with checking the domains, the definition of the adjoint domain is extremely unwieldy. Here are the definitions I'm using.</p>
<p>The domain of a linear operator is defined thusly:
$$D(A) =\{ f \in H : Af \in H\}.$$</p>
<p>The domain of its adjoint (and subsequently the adjoint itself) is defined through:
$$D(A^*) = \{ f \in H : \exists f_1 : \forall g \in H, \ (f,Ag)=(f_1,g) \}.$$
The adjoint is then given by $A^*f = f_1$.</p>
<p>I'm not very comfortable with this. Verifying that my operator is Hermitian boils down to just writing down an integral, but now I have no idea how to go about comparing the two domains to establish that $D(A)=D(A^*)$. </p>
<p>The domain of $A$ I've found to be the set of all functions such that $\int_{\mathbb{R}}dx \ x^2\lvert f(x)\rvert^2$ exists. But writing down the definition for the domain of the adjoint gives me:</p>
<p>$$D(A^*)=\{ f \in L^2(\mathbb{R}) : \exists f_1 :\forall g : \int_{\mathbb{R}}dx \ x f^*(x)g(x) = \int_{\mathbb{R}}dx \ f_1^*(x)g(x) \ \}.$$</p>
<p>(here $f_1$ and $g$ also belong to $L^2(\mathbb{R})$).</p>
<p>I'm supposed to conclude that the two sets are equal but I don't know how. Any help would be greatly appreciated.</p>
| Jimmy R. | 128,037 | <p>The event on the LHS can be described in English as "the event that either A but not B, or B but not A, occurs". </p>
<p>This event is the union of two disjoint sets $A \cap B^c$ and $A^c \cap B$. They are disjoint because $$(A\cap B^c) \cap(A^c \cap B)=(A\cap A^c)\cap (B\cap B^c)=\emptyset\cap\emptyset=\emptyset$$The probability of the union of two disjoint sets can be written as the sum of their probabilities, that is, eq$(1)$: $$P[(A \cap B^c) \cup (A^c \cap B)] = P(A\cap B^c) + P(A^c\cap B)$$ Now, since $$P(A)=P(A\cap B^c)+P(A\cap B)$$ we have that, eq$(2a)$: $$P(A\cap B^c)=P(A)-P(A\cap B)$$ Similarly, since $$P(B)=P(A^c\cap B)+P(A\cap B)$$ we have that, eq$(2b)$: $$P(A^c\cap B)=P(B)-P(A\cap B)$$ Combining equations $(2a)$ and $(2b)$ and substituting into equation $(1)$ we get the required result, that is $$\begin{align*}P((A\cap B^c) \cup(A^c \cap B))&=P(A)-P(A\cap B)+P(B)-P(A\cap B)=\\&=P(A)+P(B)-2P(A\cap B)\end{align*}$$</p>
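<p>The identity can also be confirmed exhaustively on a small uniform sample space (a sketch; the 8-point space is an arbitrary choice):</p>

```python
# Exhaustive check of P[(A ∩ B^c) ∪ (A^c ∩ B)] = P(A) + P(B) - 2 P(A ∩ B)
# over every pair of events in a uniform 8-point sample space. All probabilities
# are multiples of 1/8, so the float comparisons below are exact.
from itertools import combinations

omega = frozenset(range(8))
events = [frozenset(c) for r in range(len(omega) + 1)
          for c in combinations(omega, r)]

def prob(event):
    return len(event) / len(omega)

ok = all(prob((A - B) | (B - A)) == prob(A) + prob(B) - 2 * prob(A & B)
         for A in events for B in events)
print(ok)  # True
```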
|
1,531,291 | <p>I want to find the radius of convergence of </p>
<p><span class="math-container">$$\sum_{k = 0}^{\infty}\frac{ k^{2 k + 5} \ln^{10} k \ln \ln k}{\left(k!\right)^2} \,x^k$$</span></p>
<p>I know formulae
<span class="math-container">$$R=\dfrac{1}{\displaystyle\limsup_{k\to\infty} \sqrt[k]{\left\lvert a_k\right\rvert}}.$$</span></p>
<p>For this power series
<span class="math-container">$$R= \dfrac{1}{\limsup_{k\to\infty}{\displaystyle\sqrt[k]{\dfrac{ k^{2 k + 5} \ln^{10} k \ln \ln k}{\left(k!\right)^2}}}}.$$</span></p>
<p>But I don't know how calculate <span class="math-container">$\;\displaystyle\limsup_{k\to\infty} \sqrt[k]{\frac{ k^{2 k + 5} \ln^{10} k \ln \ln k}{\left(k!\right)^2}}$</span></p>
<p>Thank you for any help!</p>
| xpaul | 66,420 | <p>You can use
<span class="math-container">$$ R=\lim_{k\to\infty}\frac{a_k}{a_{k+1}}. $$</span>
In fact,
<span class="math-container">\begin{eqnarray}
R&=&\lim_{k\to\infty}\frac{ k^{2 k + 5} \ln^{10} k \ln \ln k}{\left(k!\right)^2}\frac{\left((k+1)!\right)^2}{ (k+1)^{2 (k+1) + 5} \ln^{10} (k+1) \ln \ln (k+1)}\\
&=&\lim_{k\to\infty}\frac{k^{2 k + 5}}{(k+1)^{2 (k+1) + 5}}\bigg[\frac{\ln k}{\ln(k+1)}\bigg]^{10}\frac{\ln\ln k}{\ln\ln(k+1)}\bigg[\frac{(k+1)!}{k!}\bigg]^2\\
&=&\lim_{k\to\infty}\frac{k^{2 k + 5}}{(k+1)^{2 k + 5}}\\
&=&\lim_{k\to\infty}\bigg(\frac{k}{k+1}\bigg)^{2k+5}\\
&=&\frac{1}{e^2}.
\end{eqnarray}</span></p>
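<p>A quick numerical check (not needed for the proof): evaluating $a_k/a_{k+1}$ at large $k$ in log space does approach $1/e^2 \approx 0.1353$.</p>

```python
# Numerical sanity check that a_k / a_{k+1} -> 1/e^2. Work with log a_k to
# avoid overflow (k^(2k+5) and (k!)^2 are astronomically large), using
# lgamma(k + 1) = log(k!).
from math import lgamma, log, exp, e

def log_a(k):
    return ((2*k + 5) * log(k) + 10 * log(log(k)) + log(log(log(k)))
            - 2 * lgamma(k + 1))

k = 10**6
ratio = exp(log_a(k) - log_a(k + 1))
print(ratio, e**-2)  # both approximately 0.13533...
```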
|
189,380 | <p>How can I solve this ODE:</p>
<p>$y(x)+Ay'(x)+Bxy'(x)+Cy''(x)+Dx^{2}y''(x)=0$</p>
<p>Can you please also show the derivation.</p>
| Tunococ | 12,594 | <p><a href="http://en.wikipedia.org/wiki/Frobenius_method" rel="nofollow">Frobenius method</a> is the most general method I know for this case. Assume your solution is of the form $y = x^r\sum_{n=0}^\infty a_n x^n$, plug it in, and solve for $r$ and $a_n$. You should get two $r$, say $r_1$ and $r_2$. If $r_1 - r_2$ is not an integer, you already have two linearly independent solutions. If $r_1 - r_2$ is an integer, then there are several ways to get two solutions. The simplest theoretical method would be <a href="http://en.wikipedia.org/wiki/Reduction_of_order" rel="nofollow">reduction of order</a>. There are, however, special ways tailored for Frobenius method. <a href="http://www.math.ualberta.ca/~xinweiyu/334.1.10f/DE_series_sol.pdf" rel="nofollow">This</a> seems like a good reference.</p>
|
3,541,897 | <p>While searching for non-isomorphic groups of order <span class="math-container">$2012$</span> I just encountered something which I want to understand. Obviously I looked for abelian groups first and found <span class="math-container">$2012=2^2\cdot 503$</span>, so we have the groups
<span class="math-container">$$
\mathbb{Z}/2^2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z},\mathbb{Z}/2\mathbb{Z} \times
\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z}
$$</span></p>
<p>Now I want to understand why those two are not isomorphic. I know that for two groups, <span class="math-container">$\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z} \cong\mathbb{Z}/(nm)\mathbb{Z}$</span> holds if and only if <span class="math-container">$\gcd(n,m)=1$</span>. But I don't understand how we can compare groups written as a product of two factors with groups written as a product of three factors as above; how does that work? And I think this goes in the same direction: how is it then at the same time that
<span class="math-container">$$
\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z} \cong \mathbb{Z}/2012\mathbb{Z} \ncong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z} \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/1006\mathbb{Z}
$$</span>
because <span class="math-container">$\gcd (4,2012)\neq 1, \gcd (2,2)\neq 1, \gcd (503,1006)\neq 1 $</span>. I don't understand the difference from the first comparison.</p>
| Olivier Roche | 649,615 | <p>Here's where the hypothesis <span class="math-container">$\gcd (n, m) = 1$</span> plays a role: if <span class="math-container">$d := \gcd(n, m) \neq 1$</span>, then <span class="math-container">$\mathbb{Z} /nm \mathbb{Z}$</span> has an element of order <span class="math-container">$nm$</span>, but every element of <span class="math-container">$\mathbb{Z} / n \mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$</span> has order dividing <span class="math-container">$\operatorname{lcm}(n,m) = nm/d < nm$</span>.</p>
<p>In your example, <span class="math-container">$\mathbb{Z}/ 4 \mathbb{Z} \times \mathbb{Z} / 503 \mathbb{Z}$</span> has an element of order 4 (namely <span class="math-container">$(1, 0)$</span>), but <span class="math-container">$\mathbb{Z}/ 2 \mathbb{Z} \times \mathbb{Z} / 1006 \mathbb{Z}$</span> has no element of order <span class="math-container">$4$</span>.</p>
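<p>To see the order argument concretely, here is a small computational sketch (pure Python; the helper names are made up) comparing <span class="math-container">$\mathbb{Z}/8\mathbb{Z}$</span> with <span class="math-container">$\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$</span>, where <span class="math-container">$\gcd(4,2)=2$</span>:</p>

```python
# Smallest instance of the phenomenon: n = 4, m = 2, so d = gcd(4, 2) = 2.
# Z/8Z contains an element of order 8, while every element of Z/4Z x Z/2Z
# has order dividing lcm(4, 2) = 4.
from math import gcd

def element_orders(n, m):
    """Set of orders of elements (x, y) in Z/nZ x Z/mZ (additive groups)."""
    def order(x, k):
        return k // gcd(x, k)   # additive order of x in Z/kZ
    def lcm(p, q):
        return p * q // gcd(p, q)
    return {lcm(order(x, n), order(y, m)) for x in range(n) for y in range(m)}

print(max(element_orders(8, 1)))  # 8
print(max(element_orders(4, 2)))  # 4
```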
|
2,878,508 | <p>How can I determine $ f(x)$ if $f(1-f(x))=x$ for all real $x$?
I have already recognized one problem caused by this: it follows that $ f(f(x))=1-x $, which forces $f$ to be discontinuous (the equation makes $f$ injective, and a continuous injection is strictly monotonic, so $f\circ f$ would be increasing, while $1-x$ is decreasing). So how can I construct such a function $f(x)$?</p>
<p>Best regards and thanks,
John</p>
| Adrian Keister | 30,813 | <p>This is a partial answer.</p>
<p>We know that $f(x)$ is invertible, because $f^{-1}(x)=1-f(x),$ from the original; from here we get the very interesting relationship of $f(x)+f^{-1}(x)=1.$ Suppose we try to find out what $f(0)$ is (set it equal to $a$). By repeated alternating applications of $f$ and the equation $f^{-1}(x)=1-f(x),$ we wind up with the following interesting table:
$$
\begin{array}{c|c|c}
x &f(x) &f^{-1}(x) \\ \hline
0 &a &1-a \\ \hline
1-a &0 &1 \\ \hline
1 &1-a &a \\ \hline
a &1 &0
\end{array}
$$
One more step gets you where you started. In studying this table, we see that if $f$ and $f^{-1}$ are to be well-defined, we cannot have $a=0, 1/2,$ or $1$. We get a similar table if we start off with $x=-1:$
$$
\begin{array}{c|c|c}
x &f(x) &f^{-1}(x) \\ \hline
-1 &b &1-b \\ \hline
1-b &-1 &2 \\ \hline
2 &1-b &b \\ \hline
b &2 &-1
\end{array}
$$
From here we find that $b\not=-1, -3, 2, 1/2.$ Yet another table generates when we start with $x=-2:$
$$
\begin{array}{c|c|c}
x &f(x) &f^{-1}(x) \\ \hline
-2 &c &1-c \\ \hline
1-c &-2 &3 \\ \hline
3 &1-c &c \\ \hline
c &3 &-2
\end{array}
$$
From this we get that $c\not=3, 1/2, -2.$ This generalizes to the following table:
$$
\begin{array}{c|c|c}
x &f(x) &f^{-1}(x) \\ \hline
1-n &m &1-m \\ \hline
1-m &1-n &n \\ \hline
n &1-m &m \\ \hline
m &n &1-n
\end{array}
$$</p>
<p>From here, we can see that $n=1/2$ forces $m=1/2,$ which would be consistent in this table. So $f(1/2)=1/2.$</p>
<p>Moving on, we can see that the following are true:
\begin{align*}
f(1-x)&=y \\
f(1-y)&=1-x \\
f(x)&=1-y \\
f(y)&=x.
\end{align*}
Combining two of these equations yields $f(1-x)=1-f(x)$. Differentiating yields $f'(1-x)=f'(x).$ </p>
<p>These are mostly negative results, obviously. My hope is that perhaps these ideas might spur someone else on to a solution.</p>
|
184,266 | <p>Let $a,b,c$ and $d$ be positive real numbers such that $a+b+c+d=4.$ </p>
<p>Prove the inequality </p>
<blockquote>
<p>$$a^2bc+b^2cd+c^2da+d^2ab \leq 4 .$$ </p>
</blockquote>
<p>Thanks :) </p>
| Michael Rozenberg | 190,319 | <p>Let $\{a,b,c,d\}=\{x,y,z,t\}$, where $x\geq y\geq z\geq t$.</p>
<p>Hence, since $(x,y,z,t)$ and $(xyz,xyt,xzt,yzt)$ are the same ordered,</p>
<p>by Rearrangement and AM-GM we obtain:
$$a^2bc+b^2cd+c^2da+d^2ab=a\cdot abc+b\cdot bcd+c\cdot cda+d\cdot dab\leq$$
$$\leq x\cdot xyz+y\cdot xyt+z\cdot xzt+t\cdot yzt=xy(xz+yt)+zt(xz+yt)=$$
$$=(xy+zt)(xz+yt)\leq\left(\frac{xy+xz+zt+yt}{2}\right)^2=$$
$$=\left(\frac{(x+t)(y+z)}{2}\right)^2\leq\left(\frac{\left(\frac{x+y+z+t}{2}\right)^2}{2}\right)^2=4.$$
Done!</p>
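<p>Not a proof, of course, but a quick randomized sanity check of the inequality (sample count and seed are arbitrary):</p>

```python
# Randomized check: sample positive a, b, c, d, rescale so a + b + c + d = 4,
# and track the largest value of the left-hand side; it should stay below 4.
import random

random.seed(0)
worst = 0.0
for _ in range(100_000):
    vals = [random.random() + 1e-12 for _ in range(4)]   # keep strictly positive
    scale = sum(vals) / 4
    a, b, c, d = (v / scale for v in vals)
    worst = max(worst, a*a*b*c + b*b*c*d + c*c*d*a + d*d*a*b)

print(worst)  # below 4; equality would need a = b = c = d = 1
```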
|
510,151 | <p>Prove by induction that $2k(k+1) + 1 < 2^{k+1} - 1$ for $ k > 4$.
Can someone please help me with this?</p>
<p>I reformulated like this</p>
<p>$ 2k(k+1) + 1 < 2^{k+1} - 1 $</p>
<p>$ 2k^2+2k+2<2^{k+1}$</p>
<p>and for the inductive step I replaced $k$ by $k+1$:</p>
<p>$ 2^{k+2} -1 > 2(k+1)(k+2) + 1 $</p>
<p>$2^{k+2} > 2(k+1)(k+2) + 2$</p>
<p>$ 2^{k+2} > 2k^2+2k+2 +4k+4$</p>
<p>I dont know how to proceed further</p>
<p>Please help me.</p>
| kedrigern | 97,299 | <p>Sequence $1/n$ converges to $0$ iff for every $\varepsilon>0$ there is $n\in\mathbb{N}$ so that for every $k > n$ $$d(\frac1k,0) < \varepsilon.$$ Since $d$ is the discrete metric and $\frac1k > 0$, $d(\frac1k,0) = 1$. So if you take for example $\varepsilon=\frac12$ there is no $n$ as required above.</p>
|
2,878,814 | <ol>
<li>If a function $f(x)$ is continuous and increasing at a point $x=a,$ then there is a nbhd $(a-\delta,a+\delta),\delta>0$ where the function is also increasing.</li>
<li>if $f' (x_0)$ is positive, then for $x$ nearby but smaller than
$x_0$ the values $f(x)$ will be less than $f(x_0)$, but for $x$
nearby but larger than $x_0$, the values of $f(x)$ will be larger
than $f(x_0)$. This says something like $f$ is an increasing
function near $x_0$, but not quite.</li>
</ol>
| WW1 | 88,679 | <p>Let $AB = AB'\equiv x$
then
$$ BC = x\cos\theta
\\AC = x\sin\theta $$</p>
<p>And
$$ AC'=x \sin(\theta-\alpha)
\\ \implies AC'=x\sin\theta \cos\alpha -x\cos\theta \sin \alpha
\\AC'= AC \cos\alpha - BC\sin\alpha
$$</p>
|
3,084,934 | <p>I want to prove or disprove that the Fourier transform <span class="math-container">$\mathcal F \colon (\mathcal S(\mathbb R^d), \lVert \cdot \rVert_1) \to L^1(\mathbb R^d)$</span> is unbounded, where <span class="math-container">$\lVert\cdot \rVert_1$</span> denotes the <span class="math-container">$L^1(\mathbb R^d)$</span>-norm. </p>
<p>Having thought about this for a moment, I believe it is indeed unbounded. So I tried to find a sequence of Schwartz functions <span class="math-container">$(f_n)_{n\in \mathbb N} \subseteq \mathcal S(\mathbb R^d)$</span> with <span class="math-container">$\forall n: \lVert f_n \rVert_1 = 1$</span> and <span class="math-container">$$\lVert \mathcal F f_n \rVert_1 \to +\infty.$$</span>
Of course I first thought about Gaussians but couldn't quite find a suitable sequence. Any help appreciated!</p>
| Locally unskillful | 494,915 | <p>We say that <span class="math-container">$\lim_{x\to\ a} f(x) = \infty$</span>, if <span class="math-container">$\forall M>0, \exists δ>0$</span> s.t. whenever <span class="math-container">$\vert x-a \vert < δ$</span>, then <span class="math-container">$f(x) > M $</span>.</p>
<p>We say that <span class="math-container">$\lim_{x\to\ a}\frac{1}{f(x)} = 0$</span>, if <span class="math-container">$\forall ε>0, \exists δ>0$</span> s.t. whenever <span class="math-container">$\vert x-a \vert < δ$</span>, then <span class="math-container">$\left|\frac{1}{f(x)} - 0 \right| <\epsilon$</span>.</p>
<p>Given <span class="math-container">$\epsilon>0$</span>, apply the first definition with <span class="math-container">$M = \frac{1}{\epsilon}$</span> to obtain <span class="math-container">$δ>0$</span>; then whenever <span class="math-container">$\vert x-a \vert < δ$</span>, we have <span class="math-container">$f(x) > \frac{1}{\epsilon} > 0$</span>, and hence <span class="math-container">$\left|\frac{1}{f(x)} - 0 \right| <\epsilon$</span>.</p>
|
34,487 | <p>A few years ago Lance Fortnow listed his favorite theorems in complexity theory:
<a href="http://blog.computationalcomplexity.org/2005/12/favorite-theorems-first-decade-recap.html" rel="nofollow">(1965-1974)</a>
<a href="http://blog.computationalcomplexity.org/2006/12/favorite-theorems-second-decade-recap.html" rel="nofollow">(1975-1984)</a>
<a href="http://eccc.hpi-web.de/eccc-reports/1994/TR94-021/index.html" rel="nofollow">(1985-1994)</a>
<a href="http://blog.computationalcomplexity.org/2004/12/favorite-theorems-recap.html" rel="nofollow">(1995-2004)</a>
But he restricted himself (check the third one) and his last post is now 6 years old. An updated and more comprehensive list can be helpful.</p>
<blockquote>
<p>What are the most important results (and papers) in complexity theory that every one should know? What are your favorites?</p>
</blockquote>
| Suresh Venkat | 972 | <p>There's the <a href="http://lucatrevisan.wordpress.com/2008/08/06/bounded-independence-and-dnfs/" rel="nofollow">Bazzi/Razborov</a>/<a href="http://www.cs.toronto.edu/~mbraverm/Papers/FoolAC0v7.pdf" rel="nofollow">Braverman</a> sequence on fooling AC0 circuits. </p>
|
704,921 | <p>This is the question:
$$
\frac{(2^{3n+4})(8^{2n})(4^{n+1})}{(2^{n+5})(4^{8+n})} = 2
$$
I've tried several times but I can't get the answer by working out.I know $n =2$, can someone please give me some guidance? Usually I turn all the bases to 2, and then work with the powers, but I probaby make the same mistake every time, unfortunately I don't know what that is. Thank you in advance.</p>
<p><em>EDIT</em></p>
<p>This is what I simplified it to in the beginning of every attempt.</p>
<p>$$
\frac{(2^{3n} *16)(2^{6n})(2^{2n}*4)}{(2^{n}*32)(2^{n}*2^{16})} = 2
$$</p>
<p>Therefore</p>
<p>$$
\frac{64(2^{3n+6n+2n})}{(2^{16}*32)(2^{2n})} = 2
$$
<br> I simplified further:
$$
\frac{2^{11n}}{32768(2^{2n})} =2
$$
<br>
$$
2^{11n} = (2^{2n+1})*32768 \\
$$
$$
\frac{2^{11n}}{2^{2n+1}} = 32768
$$</p>
<p>$$
\frac{2^{11n}}{2*2^{2n}} = 32768
$$</p>
<p>And this is the furthest I get, what do i do now?</p>
| Cookie | 111,793 | <p>We have
\begin{align}
\frac{(2^{3n+4})(8^{2n})(4^{n+1})}{(2^{n+5})(4^{8+n})} &= \frac{(2^{3n+4})(2^{3})^{2n}(2^2)^{n+1}}{(2^{n+5})(2^2)^{8+n}} \\
&=\frac{2^{3n+4+6n+2n+2}}{2^{n+5+16+2n}} \\
&=\frac{2^{11n+6}}{2^{3n+21}} \\
&=2^{(11n+6)-(3n+21)} \\
&=2^{8n-15}
\end{align}</p>
<p>Also from the original problem,
\begin{align}
\frac{(2^{3n+4})(8^{2n})(4^{n+1})}{(2^{n+5})(4^{8+n})} &= 2
\end{align}</p>
<p>Therefore,
\begin{align}
2^{8n-15} &= 2^1
\end{align}</p>
<p>Drop the base $2$ on both sides, and we will get
\begin{align}
8n-15 = 1
\end{align}
Thus, $n=2$.</p>
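<p>A one-line check in exact rational arithmetic (so no rounding doubts) confirms that $n=2$ satisfies the original equation:</p>

```python
# Plug n = 2 into the left-hand side of the original equation; Fraction keeps
# the arithmetic exact. The value should be exactly 2.
from fractions import Fraction

def lhs(n):
    numerator = 2**(3*n + 4) * 8**(2*n) * 4**(n + 1)
    denominator = 2**(n + 5) * 4**(8 + n)
    return Fraction(numerator, denominator)

print(lhs(2))  # 2
```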
|
1,114,502 | <p>I attempted the following solution to the birthday "paradox" problem. It is not correct, but I'd like to know where I went wrong.</p>
<p>Where $P(N)$ is the probability of any two people in a group of $N$ people having the same birthday, I consider the first few values.</p>
<p>For two people, the probability that they share a birthday is simply $1/365$, not counting leap years. For three people, it is the probability of every combination of two of them "ored" together, which is simply the sum of the probabilities of every combination of two people. Thus,</p>
<p>$$
P(2)=P(AB)=\frac{1}{365}
$$
$$
P(3)=P(AB)+P(AC)+P(BC)=3\times P(2)
$$
$$
P(4)=P(AB)+P(AC)+P(AD)+P(BC)+P(BD)+P(CD)=6\times P(2)
$$</p>
<p>Where $P(XY)$ is used to denote the probability of persons $X$ and $Y$ sharing a birthday. You can see pretty clearly that the coefficients are binomial.</p>
<p>$$
P(N)=\binom{N}{2}\times P(2)=\frac{N!}{2!(N-2)!}\cdot\frac{1}{365}=\frac{N(N-1)}{730}
$$</p>
<p>Now according to the pigeonhole principle, we should have $P(366)=1$, which this expression clearly violates (instead it gives $P(366)=183$). So clearly I'm doing <em>something</em> wrong.</p>
| David | 119,775 | <p>$\def\hor{\ \hbox{or}\ }$Continuing your notation, write $P(ABC)$ for the probability that $A,B$ and $C$ share a birthday. We can also write something like $P(ABBC)$, but it is the same as $P(ABC)$. By the principle of inclusion/exclusion we have
$$\eqalign{P(3)
&=P(AB\hor AC\hor BC)\cr
&=P(AB)+P(AC)+P(BC)-P(ABAC)-P(ABBC)-P(ACBC)+P(ABACBC)\cr
&=P(AB)+P(AC)+P(BC)-P(ABC)-P(ABC)-P(ABC)+P(ABC)\cr
&=P(AB)+P(AC)+P(BC)-2P(ABC)\cr
&\ne3P(2)\ .\cr}$$
Formulae for $P(4)$ and so on can be worked out in a similar way, although, as pointed out by others, this is not the easiest way to solve the problem.</p>
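<p>The correct probabilities are easy to compute exactly through the complementary event (no shared birthday), which makes the overlap visible: $P(3)$ falls short of $3\,P(2)$ by exactly $2\,P(ABC)=2/365^2$, matching the inclusion/exclusion above.</p>

```python
# Exact birthday probabilities via the complement (all birthdays distinct):
# P(N) = 1 - (365/365)(364/365)...((365-N+1)/365), with exact fractions.
from fractions import Fraction

def p_collision(n):
    no_collision = Fraction(1)
    for i in range(n):
        no_collision *= Fraction(365 - i, 365)
    return 1 - no_collision

print(float(p_collision(2)))                       # 0.00274 (= 1/365)
print(float(p_collision(3)))                       # 0.00820
print(float(3 * p_collision(2) - p_collision(3)))  # 1.5e-05 (= 2/365^2)
print(p_collision(366) == 1)                       # True (pigeonhole)
```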
|
428,843 | <p>Consider the lines in the image below:</p>
<p><img src="https://i.stack.imgur.com/AWmrd.png" alt="enter image description here"></p>
<p>Given a set of arbitrary points $p1$ and $p2$, where the direction of travel is from the former to the latter, I want to be able to draw directional arrow marks as in the image above.</p>
<p>I got as far as calculating the mid-points of the lines, but could not figure out how to cater to the various combinations of $x1<x2$, $x1>x2$, etc. Is there a direct way to calculate these points? <strong>EDIT</strong>: By direct, I mean in one step, without conditioning on where the points lie with respect to each other.</p>
<p>$f1(p1, p2) = $ get the line coordinates of the left directional marker.<br>
$f2(p1, p2) = $ get the line coordinates of the right directional marker.</p>
| Hagen von Eitzen | 39,174 | <p>a) Note that $0<u<v$ implies $0<\sqrt u<\sqrt v$. This allows you to show the claim by starting from $0<n<n+\sqrt {n+1}$ and walking your way to the outer $\sqrt{}$.</p>
<p>b) Follow the hint</p>
<p>c) By induction: $0<x_1<2$ and $0<x_n<2$ implies $1+\sqrt 2 x_n<1+2\sqrt 2<4$</p>
|
1,116,009 | <p>Suppose $\alpha$, $a$, $b$ are integers and $b\neq-1$. Show that if $\alpha$ satisfies the equation $x^2+ax+b+1=0$, then $a^2+b^2$ is composite.</p>
<p>I am just starting a course on polynomials and am finding it very difficult. Please help me with this question. Thanks in advance!</p>
| Muphrid | 45,296 | <p>I know of no algebra in which 3d arrays of numbers can be manipulated with the same ease as matrices. Part of that has to do with how any linear map from one space to another space can be represented with a matrix. You can chain such maps together sensibly (and really, only in one way up to the order of how you compose those operations together), whereas chaining general tensors together has considerably more freedom: when you're chaining operations together, on which arguments do they act? And so on.</p>
<p>Further, I would object to considering matrices to be "geometrical" in any such sense. Yes, you can <em>write them down</em> as a 2d array, but that arrangement of entries in the matrix doesn't have anything to do with the actual geometry of the underlying manifold or vector space.</p>
|
41,155 | <p>Lauren has 20 coins in her piggy bank, all dimes and quarters. The total amount of money is $3.05. How many of each coin does she have?</p>
| Murta | 2,266 | <p>Here is a starting point. First, let's import your data:</p>
<pre><code>SetDirectory@NotebookDirectory[]
data=Import["data.csv"]
</code></pre>
<p>Now <code>data</code> should be something like this:</p>
<pre><code>{{ ,ABC,DIA,ACE,SJ,KARMA,NOVICE},{ABC,0,2,3,1,2,1},{DIA,1,0,3,1,2,1},{ACE,2,1,0,3,1,2},{SJ,2,3,1,0,1,3},{KARMA,1,2,3,2,0,1},{NOVICE,1,1,2,3,2,0}}
</code></pre>
<p>Now you can use <code>WeightedAdjacencyGraph</code> as:</p>
<pre><code>g=WeightedAdjacencyGraph[data[[2;;,2;;]]/.(0-> ∞),VertexLabels->MapIndexed[#2[[1]]-> #1&,data[[1,2;;]]]]
</code></pre>
<p>to get:</p>
<p><img src="https://i.stack.imgur.com/3LJ7C.png" alt="Example Graph"></p>
<h2>Update</h2>
<p>Here a function to color it (as suggested by @ubpdqn), and some additional formatting.</p>
<pre><code>color[w_]:=Switch[w,1,Directive[Thick,Red],2,Directive[Thick,Darker@Green],3,Directive[Thick,Blue]];
edgeFormat=(#-> color@PropertyValue[{g, #}, EdgeWeight])&/@EdgeList[g];
g=WeightedAdjacencyGraph[data[[2;;,2;;]]/.(0-> ∞)
,VertexLabels->MapIndexed[#2[[1]]-> Placed[#1,Center]&,data[[1,2;;]]]
,VertexSize->0.27
,VertexStyle->White
,ImagePadding -> 20
,EdgeStyle->edgeFormat
]
</code></pre>
<p><img src="https://i.stack.imgur.com/8kiuE.png" alt="enter image description here"></p>
|
69,476 | <p>Hello everybody !</p>
<p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that if the result is exact this may not be the computationally fastest way possible to do it.</p>
<p>Hence, the following problem: if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it?</p>
<p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiply them by their corresponding factor.</p>
<p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p>
<p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start! I had no idea how to look for the corresponding literature, and so I am asking for your help on this one :-)</p>
<p>Thank you!</p>
<p>Nathann</p>
<p>P.S. : <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
| Emil Jeřábek | 12,705 | <p>If the polynomial is given as $\alpha_0x^0+\dots+\alpha_nx^n$ and you do not know a priori anything about the $\alpha_i$’s, then you can’t do better than <a href="http://en.wikipedia.org/wiki/Horner_scheme">Horner’s scheme</a> (which takes $n$ additions and multiplications). If you know that the polynomial is sparse and you are given a list of nonzero coefficients, you can evaluate the individual terms using <a href="http://en.wikipedia.org/wiki/Exponentiation_by_squaring">repeated squaring</a> (this takes about $k$ additions and $O(k\log n)$ multiplications, where $k$ is the number of nonzero terms). Other information about the polynomial may also help in principle, such as some sort of symmetries in the coefficient list.</p>
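<p>(Not part of the original answer: a minimal Python sketch of the two schemes described above, with function names of my own choosing.)</p>

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n from coeffs = [a_0, ..., a_n].
    Horner's scheme: n multiplications and n additions."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result


def sparse_eval(terms, x):
    """Evaluate a sparse polynomial given as {exponent: coefficient}.
    Python's integer pow() uses repeated squaring, so each term costs
    O(log exponent) multiplications."""
    return sum(a * pow(x, e) for e, a in terms.items())


# p(x) = 1 + 2x + 3x^2 at x = 5: 1 + 10 + 75 = 86
assert horner([1, 2, 3], 5) == 86
# q(x) = x^1024 - 7 at x = 2, without computing 1024 separate products
assert sparse_eval({1024: 1, 0: -7}, 2) == 2**1024 - 7
```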
|
43,886 | <p>Has work been done on looking at what happens to the exponents of the prime factorization of a number $n$ as compared to $n+1$? I am looking for published material or otherwise. For example, let $n=9=2^0\cdot{}3^2$, then,</p>
<p>$$
9 \;\xrightarrow{+1}\; 10
$$</p>
<p>$$
2^0\cdot{}3^2 \;\xrightarrow{+1}\; 2^1\cdot{}3^0\cdot{}5^1
$$</p>
<p>or looking just at the exponents,</p>
<p>$$
[0,2,0,0,...] \;\xrightarrow{+1}\; [1,0,1,0,...]
$$</p>
<p>I realize the canonical way of reaching the latter is generating the prime factorization of $n$ and of $n+1$ separately, but has there been any research into manipulating the exponents directly instead (short-cutting around the factorization)?</p>
<p>For anyone who can't answer but still wants to see something interesting, check out <a href="http://scienceblogs.com/goodmath/2006/10/prime_number_pathology_fractra.php" rel="nofollow">FRACTRAN</a>.</p>
| Edison | 11,857 | <p>If we could deduce the prime factorization of a number $n+1$ from the prime factorization of the number $n$, then by induction we could in particular arrive at the prime factorization of all numbers. By the way, this would allow us to find new prime numbers. Since prime numbers seem to appear at random in the set of integers, I doubt this method would work.</p>
|
1,336,937 | <p>I think: <em>A function $f$, as long as it is measurable, though Lebesgue integrable or not, always has Lebesgue integral on any domain $E$.</em></p>
<p>However Royden & Fitzpatrick’s book "Real Analysis" (4th ed) seems to say implicitly that “a function could be integrable without being Lebesgue measurable”. In particular, theorem 7 page 103 says: </p>
<p><strong>“If function $f$ is bounded on set $E$ of finite measure, then $f$ is Lebesgue integrable over $E$ if and only if $f$ is measurable”.</strong> </p>
<p>The book spends a half page to prove the direction “$f$ is integrable implies $f$ is measurable”! Even the book “Real Analysis: Measure Theory, Integration, And Hilbert Spaces” of Elias M. Stein & Rami Shakarchi does the same job!</p>
<p>This makes me think there is possibly a function that is not bounded, not measurable but Lebesgue integrable on a set of infinite measure?</p>
<p>===
Update: Read the answer of smnoren and me below about the motivation behind the approaches to define Lebesgue integrals.
Final conclusion: The starting statement above is still true and doesn't contradict the approach of Royden and Stein.</p>
| srnoren | 99,255 | <p>On pg. 73 of Royden & Fitzpatrick, Lebesgue integrability is defined for bounded functions on domains of finite measure, without the assumption of measurability. However, this theorem that you have reveals that functions of this type can't be integrable unless they are measurable. Hence, the definition of integrable on pg. 73 is consistent with all the other definitions in the book that assume the function to be measurable.</p>
|
3,898,818 | <p>A (UK sixth form; final year of high school) student of mine raised the interesting question of how to prove that the total angle in the Spiral of Theodorus (formed by constructing successive right-angled triangles with hypotenuses of <span class="math-container">$\sqrt{n}$</span>), diverges.</p>
<p>He identified that this is equivalent to proving the divergence of the series <span class="math-container">$$\sum_{r=1}^\infty \arctan \left(\frac{1}{\sqrt r}\right)$$</span> and came up with an interesting proof attempt which didn't conceptually work (although it was very nicely thought of).</p>
<p>The best I could offer by way of intuition is that <span class="math-container">$\arctan\left(\frac{1}{\sqrt n}\right) \approx \frac{1}{\sqrt n}$</span>, and the latter series diverges by comparison with <span class="math-container">$\frac{1}{n}$</span>. But the <span class="math-container">$\arctan$</span> value is strictly lesser, so that doesn't convert into a precise proof as far as I can see.</p>
<p>He hasn't been taught formal convergence tests (and it's a while since I was taught them!) although I'm sure he'd be very open to learning. However, I can't shake the feeling there ought to be a nice geometrical demonstration that the spiral does indeed keep winding around the starting point.</p>
| PM 2Ring | 207,316 | <p>Rather than concentrating on the angle, consider the arc length of the spiral.</p>
<p>At the <span class="math-container">$n$</span>th step, we add a right triangle of base <span class="math-container">$\sqrt n$</span> and hypotenuse <span class="math-container">$\sqrt{n+1}$</span>, with the outer side (of course) of length 1 perpendicular to the base. So we add a new arc to the spiral which must be greater than 1. Hence the arc length at <span class="math-container">$n$</span> steps must be greater than <span class="math-container">$n$</span>. Thus the arc length diverges, and so must the angle.</p>
<p>(FWIW, a decade or so ago, I spent some time trying to come up with a good approximation of <span class="math-container">$n$</span> given a total angle of <span class="math-container">$\theta$</span>. It gets rather messy...).</p>
<p><a href="https://en.wikipedia.org/wiki/Spiral_of_Theodorus" rel="nofollow noreferrer">Wikipedia</a> has some info, including an analytic continuation of the spiral published by Davis in 2001. It also has this nice diagram.</p>
<p><a href="https://i.stack.imgur.com/NlXy5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NlXy5.png" alt="Spiral of Theodorus" /></a></p>
<p>And here's a link to their <a href="https://upload.wikimedia.org/wikipedia/commons/9/9f/Spiral_of_Theodorus.svg" rel="nofollow noreferrer">SVG version</a>. (I'm sure it'd be easy to make a much smaller SVG).</p>
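<p>(A numerical illustration of my own, not part of the original answer.) The accumulated angle keeps growing without bound, consistent with the arc-length argument:</p>

```python
import math

def total_angle(n):
    """Total turning angle after n triangles: sum of arctan(1/sqrt(r))."""
    return sum(math.atan(1 / math.sqrt(r)) for r in range(1, n + 1))

# The partial sums keep growing (roughly like 2*sqrt(n)), so the spiral
# keeps winding: after 10000 triangles it has turned over 30 full circles.
assert total_angle(100) < total_angle(1000) < total_angle(10000)
assert total_angle(10000) > 30 * 2 * math.pi
```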
|
3,226,028 | <h2>Problem</h2>
<p>I want to know how to solve the differential equation
<span class="math-container">$$ \dot{x} + a\cdot x - b\cdot \sqrt{x} = 0 $$</span> for <span class="math-container">$a>0$</span> and both situations: for <span class="math-container">$b > 0$</span> and <span class="math-container">$b < 0$</span>. </p>
<h2>My work</h2>
<p>One can separate the variables to obtain:
<span class="math-container">$$ \frac{dx}{b\cdot \sqrt{x} - a\cdot x} = dt$$</span> but I do not know how to proceed ...
<a href="https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+x%27(t)%2Bax(t)-bsqrt(x(t))+%3D+0</a> it seems to have an explicit solution ... </p>
<h2>Context</h2>
<p>This problem occurs in the following context:
<span class="math-container">$$ \ddot{X} + a \cdot \dot{X} = f(X)$$</span> then multiplying both sides by <span class="math-container">$2\dot{X}^T$</span> one obtains:
<span class="math-container">$$ (\dot{X}^T\dot{X})' + 2a\cdot \dot{X}^T\dot{X} = 2\dot{X}^T f(X)$$</span> Let <span class="math-container">$v= \dot{X}^T \dot{X}$</span> and the above differential equation arises ... </p>
| D.B. | 530,972 | <p>One way of working this out is to make the substitution <span class="math-container">$y = \sqrt{x}$</span>. Then,
<span class="math-container">$$\frac{dx}{b\sqrt{x}-ax} \rightarrow \frac{2ydy}{by-ay^2} = \frac{2ydy}{y(b-ay)}.$$</span>
You can treat the integral in <span class="math-container">$y$</span> with partial fractions.</p>
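<p>(A continuation of my own, for completeness.) In fact the <span class="math-container">$y$</span> in the numerator cancels, so no partial fractions are needed:
<span class="math-container">$$\int\frac{2y\,dy}{y(b-ay)}=\int\frac{2\,dy}{b-ay}=-\frac{2}{a}\ln|b-ay|+C=t+C.$$</span>
Exponentiating gives <span class="math-container">$b-ay=C_1e^{-at/2}$</span>, i.e.
<span class="math-container">$$\sqrt{x}=y=\frac{b}{a}+C_2e^{-at/2},\qquad x=\left(\frac{b}{a}+C_2e^{-at/2}\right)^2,$$</span>
which one can check directly against <span class="math-container">$\dot{y}=\tfrac{1}{2}(b-ay)$</span>, the equation obtained from the substitution.</p>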
|
1,639,568 | <p>The above applies $\forall x,y \in \mathbb{R}$</p>
<p>I've tried: $x + y \ge 0$</p>
<p>$$x + y \ge x$$</p>
<p>$$ (x + y)^2 \ge 2xy$$</p>
<p>$$\frac{(x + y)^2}{2} \ge xy$$</p>
<p>But the closest I get is $\dfrac{x+y}{\sqrt{2}} \ge \sqrt{xy}$</p>
<p>Any ideas?</p>
| AsdrubalBeltran | 62,547 | <p>Note that:
$$(x-y)^2\ge 0\implies x^2+y^2\ge2xy\implies x^2+2xy+y^2\ge4xy\implies(x+y)^2\ge4xy$$</p>
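<p>(Added for completeness, not part of the original answer.) For <span class="math-container">$x,y\ge 0$</span> both sides of <span class="math-container">$(x+y)^2\ge 4xy$</span> are nonnegative, so taking square roots is legitimate and gives
<span class="math-container">$$x+y\ge 2\sqrt{xy},\qquad\text{i.e.}\qquad \frac{x+y}{2}\ge\sqrt{xy}.$$</span>
(Some nonnegativity assumption is needed: <span class="math-container">$\sqrt{xy}$</span> is only real when <span class="math-container">$xy\ge 0$</span>, and for <span class="math-container">$x=y=-1$</span> the inequality <span class="math-container">$x+y\ge 2\sqrt{xy}$</span> fails.)</p>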
|
1,014,303 | <blockquote>
<p>Is
$$\sum^\infty_{n=4}\frac{3^{2n}}{(-10)^n}$$
Convergent or Divergent? Explain why.</p>
</blockquote>
<p>I know I can do:
$$\sum^\infty_{n=4}\frac{9^{n}}{(-10)^n} \Rightarrow \sum^\infty_{n=4}\bigg(\frac{9}{-10}\bigg)^n$$
But I'm not sure where to go from here. The negative denominator is really throwing me off.</p>
| user26857 | 121,097 | <p>$I$ not prime (because it is maximal among the <em>non-f.g.</em> ideals, and all prime ideals are f.g. by hypothesis) implies that there are two ideals $I_1,I_2$ such that $I_1I_2\subseteq I$, but $I_1\nsubseteq I$, and $I_2\nsubseteq I$. Now take $J_i=I_i+I$ and observe that $J_1J_2\subseteq I$, and $I\subsetneq J_i$, so $J_i$ are f.g.</p>
|
1,014,303 | <blockquote>
<p>Is
$$\sum^\infty_{n=4}\frac{3^{2n}}{(-10)^n}$$
Convergent or Divergent? Explain why.</p>
</blockquote>
<p>I know I can do:
$$\sum^\infty_{n=4}\frac{9^{n}}{(-10)^n} \Rightarrow \sum^\infty_{n=4}\bigg(\frac{9}{-10}\bigg)^n$$
But I'm not sure where to go from here. The negative denominator is really throwing me off.</p>
| Patrick Da Silva | 10,704 | <p>The ideal $I$ cannot be prime, because by (a) $I$ is not finitely generated (the ideal $I$ is maximal in the collection of non-finitely generated ideals, but it is not necessarily a maximal ideal!). If $I$ were prime, $I$ would be finitely generated by assumption on $R$, a contradiction. Therefore $I$ is not prime ; this means there exists two principal ideals $J_1 = (x_1)$, $J_2 = (x_2)$ such that $x_1 x_2 \equiv 0 \pmod{I}$, i.e. $J_1 J_2 \subseteq I$. This is what D&F meant by "Observe that $I$ is not a prime ideal", and it is a correct observation.</p>
<p>P.S. : You are proving a statement by contradiction, so of course you might encounter statements which are false in general, but that is precisely the point.</p>
<p>Hope that helps,</p>
|
88,122 | <p>For the easiest case, assume that $L/E$ is Galois and $E/K$ is Galois. Under what conditions can we conclude that $L/K$ is Galois? I guess the general case can be a bit tricky, but are there some "sufficiently general" cases that are interesting and for which the question can be answered?</p>
<p>EDIT: Since Jyrki's reply seems to suggest that there is no general criterion on the groups. Can we say something if we put criterions on the fields? Assume say that $K=\mathbb{Q}$ or $K=\mathbb{Q}_p$?</p>
| Keenan Kidwell | 628 | <p>This isn't an answer, nor is it very general, just an illustration with an example. Let $K$ be a number field, $E/K$ finite Galois, and $L/E$ the Hilbert class field of $E$ (the maximal unramified abelian extension of $E$, which I'll take to be inside some algebraic closure $\overline{K}$ of $K$). Then $L/E$ is Galois by definition, and using its characterizing property, it can be proved that $L/K$ is Galois. Namely, let $\sigma:L\rightarrow\overline{K}$ be a $K$-monomorphism. Then $\sigma(E)\subseteq E$ because $E/K$ is Galois, so $\sigma(E)=E$. Thus $E\subseteq \sigma(L)$, and $\sigma$ sets up an isomorphism between the extensions $L/E$ and $\sigma(L)/E$. I don't mean that $L$ and $\sigma(L)$ are $E$-isomorphic, because, while $\sigma(E)=E$, it needn't be the case that $\sigma$ fixes $E$ pointwise. But, for example, $\sigma(L)/E$ must be Galois, and we have an isomorphism $\mathrm{Gal}(L/E)\cong\mathrm{Gal}(\sigma(L)/E)$ induced by $\sigma$. One can also verify that $\sigma(L)/E$ is unramified because $L/E$ is. So $\sigma(L)/E$ is abelian and unramified. This means $L\sigma(L)$ (compositum inside $\overline{K}$) is abelian and unramified over $E$. By maximality, $L\sigma(L)=L$, which implies that $\sigma(L)=L$. So we can conclude that $L/K$ is Galois.</p>
<p>I don't really know if there is a way to make this formal. Similar arguments can be used to prove that various class fields of $E$ are Galois over $K$. I guess the informal idea is that $L/E$ should be maximal (inside $\overline{K}$) with respect to some properties which are preserved by $K$-embeddings of $L$ into $\overline{K}$ (which necessarily send $E$ onto itself) and which are preserved under compositum. I admit this is vague, but I'm not sure how to make it more precise. </p>
|
3,780,575 | <p>We know that if <span class="math-container">$f$</span> is continuous on [a,b] and <span class="math-container">$f:[a,b] \to \mathbb{R}$</span>, then there exists <span class="math-container">$c \in [a,b]$</span> with <span class="math-container">$f(c)(b-a) = \int_a^bf(x)dx$</span></p>
<p>If we change ''f is continuous on [a,b]'' to ''f is Riemann integrable'', does the mean value theorem for integral still holds? If not, Can you give me a counter-example?</p>
<p>I know that the first Mean-Value Theorem for Riemann-Stieltjes does not requires continuity, but that is still different from this statement.</p>
<p>reference:<a href="http://mathonline.wikidot.com/the-first-mean-value-theorem-for-riemann-stieltjes-integrals" rel="nofollow noreferrer">http://mathonline.wikidot.com/the-first-mean-value-theorem-for-riemann-stieltjes-integrals</a></p>
| user10354138 | 592,552 | <p>First of all, it is considered very bad style to just throw random equations around. Write down in English what exactly you are doing --- is it an assumption you make? A given condition? Some logical deduction from earlier? And every sentence should start with an English word, not an equation, unless you absolutely have to.</p>
<p>Note that it suffices to consider <span class="math-container">$a,b,c$</span> coprime (otherwise you get a smaller triangle by scaling). Now <span class="math-container">$a^2=b^2+bc$</span> is equivalent to <span class="math-container">$c^2+4a^2=(2b+c)^2$</span>. If <span class="math-container">$c$</span> is even we have <span class="math-container">$a^2+(\frac12c)^2=(b+\frac12c)^2$</span>, and we must have <span class="math-container">$a$</span> odd (otherwise <span class="math-container">$a,c$</span> even gives <span class="math-container">$b$</span> also even, and so not primitive). So applying the classification of primitive Pythagorean triples, we have:
<span class="math-container">\begin{align*}
(c,2a,2b+c)&=(m^2-n^2,2mn,m^2+n^2)\text{ or }\\
(\tfrac12c,a,b+\tfrac12c)&=(2mn,m^2-n^2,m^2+n^2)\\
\end{align*}</span>
for some <span class="math-container">$m,n$</span> coprime, <span class="math-container">$m>n$</span> opposite parity. Now analyse each case separately:</p>
<p><strong>Case 1</strong>: <span class="math-container">$(c,2a,2b+c)=(m^2-n^2,2mn,m^2+n^2)$</span>, so <span class="math-container">$(a,b,c)=(mn,n^2,m^2-n^2)$</span>. We want <span class="math-container">$c^2>a^2+b^2$</span>, so <span class="math-container">$m^2>3n^2$</span>. Also triangle inequality <span class="math-container">$c<a+b$</span> gives <span class="math-container">$(m+n)(m-2n)<0$</span>. So seek <span class="math-container">$\sqrt3<\frac{m}n<2$</span> (it is probably not a good idea to bring in <span class="math-container">$\sqrt3$</span> but you know what I mean) and we will have perimeter <span class="math-container">$m(m+n)$</span>. <span class="math-container">$\frac{m}n=\frac74$</span> is obviously the best candidate here with least <span class="math-container">$n$</span> and least <span class="math-container">$m$</span>. So we have <span class="math-container">$(a,b,c)=(28,16,33)$</span> and perimeter <span class="math-container">$77$</span>.</p>
<p><strong>Case 2</strong>: <span class="math-container">$(\tfrac12c,a,b+\tfrac12c)=(2mn,m^2-n^2,m^2+n^2)$</span> so <span class="math-container">$(a,b,c)=(m^2-n^2,(m-n)^2,4mn)$</span>. Then <span class="math-container">$c^2>a^2+b^2$</span> gives <span class="math-container">$16m^2n^2>(m+n)^2(m-n)^2+(m-n)^4$</span>, i.e., <span class="math-container">$(m+n)^2 (m^2-4mn+n^2)<0$</span> and <span class="math-container">$c<a+b$</span> gives <span class="math-container">$m>3n$</span>, so <span class="math-container">$3<\frac{m}{n}<2+\sqrt3$</span> and the perimeter is <span class="math-container">$2m(m+n)$</span>. The best candidate here is the choice <span class="math-container">$\frac{m}n=\frac72$</span>, which gives <span class="math-container">$(a,b,c)=(45,25,56)$</span> and perimeter <span class="math-container">$126$</span>.</p>
<p>So the least perimeter is 77.</p>
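<p>(A brute-force sanity check of my own, not part of the answer; the search bound 80 is arbitrary but large enough to cover the two candidate triples above.)</p>

```python
import math

# Integer triangles satisfying a^2 = b^2 + b*c, obtuse (c^2 > a^2 + b^2,
# which forces c to be the largest side), and the triangle inequality a + b > c.
solutions = []
for b in range(1, 80):
    for c in range(1, 80):
        a2 = b * (b + c)              # a^2 = b^2 + b*c
        a = math.isqrt(a2)
        if a * a != a2:
            continue
        if c * c > a * a + b * b and a + b > c:
            solutions.append((a, b, c))

assert (28, 16, 33) in solutions
assert min(sum(t) for t in solutions) == 77
```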
|
2,077,958 | <p>Or more abstractly, let $T \in \mathcal{L}(U,V)$ be a linear map over finite dimensional vector spaces, I need to prove that $T^*$ and $T^* T$ have the same range.</p>
<p>The direction $v \in range(T^*T) \rightarrow v \in range(T^*)$ is obvious. I'm stuck on the other direction.
Suppose $u\in range(T^*)$, then there exists $v \in V$ such that $u = T^*v$. Now how do I show $u \in range(T^*T)$? (I've proved that $T$ and $T^*T$ have the same nullspace, but that doesn't seem helpful here)</p>
| Noble Mushtak | 307,483 | <p>$A$ and $A^TA$ have the same null space. Therefore, the orthogonal complements of their null spaces are the same. It is well-known that the orthogonal complement of the null space of any matrix $M$ is the column space of $M^T$, so $A^T$ and $(A^TA)^T=A^TA$ have the same column space.</p>
|
344,345 | <p>Are there any relationship between the scalar curvature and the simplicial volume? </p>
<p>The simplicial volume is zero (positive) on Torus (Hyperbolic manifold) and those manifolds does not admit a Riemannian metric with positive scalar curvature. What do we know about the simplicial volume of a Riemannian manifold with positive scalar curvature?</p>
| Paul Siegel | 4,362 | <p>In a <a href="https://www.ihes.fr/~gromov/wp-content/uploads/2018/08/101-problemsOct1-2017.pdf" rel="noreferrer">preliminary version</a> of what would become Gromov's "A Dozen Problems, Questions and Conjectures about Positive Scalar Curvature", he writes on page 88:</p>
<blockquote>
<p>Neither is one able to prove (or disprove) that manifolds with positive scalar
curvatures have zero simplicial volumes.
Possibly, these conjectures need significant modifications to become realistic.</p>
</blockquote>
<p>Simplicial volume didn't make it into the published version of these notes, maybe because he thought his conjectural relationship between scalar curvature and simplicial volume was too hopeless to be worth mentioning, but at any rate this is reasonable evidence that this question was an open problem in 2017, and I haven't seen any evidence of progress since then.</p>
|
194,218 | <blockquote>
<p>Let A, B be two sets. Prove that <span class="math-container">$A \subset B \iff A \cup B = B$</span></p>
</blockquote>
<p>I'm thinking of using disjunctive syllogism by showing that <span class="math-container">$\neg \forall Y(Y \in A).$</span> However, I'm not sure how the proving steps should proceed such that it leads me to that premise.</p>
<p>Edit: Thanks for the input. FYI, I need to prove this using predicate logic.</p>
| rschwieb | 29,335 | <p>Here are the (very straightforward) first steps you should have thought of beginning with:</p>
<p>In one direction, suppose $A\subseteq B$: then $A\cup B\subseteq B\cup B\dots$</p>
<p>In the other direction, suppose $A\cup B=B$: then $A\subseteq A\cup B\subseteq\dots$</p>
|
3,599,893 | <p>I had this idea to build a model of Earth in Minecraft. In this game, everything is built on a 2D plane of infinite length and width. But, I wanted to make a world such that someone exploring it could think that they could possibly be walking on a very large sphere. (Stretching or shrinking of different places is OK.) </p>
<p>What I first thought about doing was building a finite rectangular model of the world as like a mercator projection, and tessellating this model infinitely throughout the plane. </p>
<p><a href="https://i.stack.imgur.com/bzdjA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bzdjA.png" alt="enter image description here"></a></p>
<p>Someone starting in the US could swim eastwards in a straight line across the Atlantic, walk across Africa and Asia, continue through the Pacific and return to the US. This would certainly create a sense of 3D-ness. However, if you travel north from the North Pole, you would wind up immediately at the South Pole. That wouldn't be right.</p>
<p>After thinking about it, I hypothesized that an explorer of this model might conclude that they were walking on a donut-shaped world, since that would be the shape of a map where the left was looped around to the right (making a cylinder), and then the top was looped to the bottom. For some reason, by simply tessellating the map, I was creating a hole in the world.</p>
<p>Anyway, to solve this issue, I thought about where one ends up after travelling north from various parts of the world. Going north from Canada, and continuing to go in that direction, you end up in Russia and you face south. The opposite is true as well: going north from Russia, you end up in Canada pointing south. Thus, I started to modify the tessellation to properly connect opposing parts of Earth at the poles. </p>
<p>When going north of a map of Earth, the next (duplicate) map would have to be rotated 180 degrees to reflect the fact that one facing south after traversing the north pole. This was OK. However, to properly connect everything, the map also had to be <em>flipped</em> about the vertical axis. On a globe, if Alice starts east of Bob and they together walk North and cross the North Pole, Alice still remains east of Bob. So, going north from a map, the next map must be flipped to preserve the east/west directions that would have been otherwise rotated into the wrong direction.</p>
<p><a href="https://i.stack.imgur.com/U5n9t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U5n9t.png" alt="enter image description here"></a></p>
<p>Now the situation is hopeless. After an explorer walks across the North Pole in this Minecraft world, he finds himself in a mirrored world. If the world were completely flat, it would feel as if walking North will take you from the outside of a 3D object to its inside.</p>
<p>Although I now think that it is impossible to trick an explorer walking on infinite plane into thinking he is on a sphere-like world, a part of me remains unconvinced. Is it really impossible? Also, how come a naive tessellation introduces a hole? And finally, if an explorer were to roam the world where crossing a pole flips everything, what would he conclude the shape of the world to be?</p>
| Captain Lama | 318,467 | <p>What you want to do is not possible because there is no flat sphere. That is, there is no way to put a metric on a topological sphere such that the curvature is everywhere zero. This can be shown using the <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem" rel="noreferrer">Gauss-Bonnet theorem</a>: the global curvature (by which I mean the integral of the curvature on the whole sphere) is equal to (<span class="math-container">$2\pi$</span> times) the Euler characteristic, which for a sphere is <span class="math-container">$2$</span> (and not <span class="math-container">$0$</span>).</p>
<p>On the other hand, it is very well-known to gamers that there are flat tori: you just teleport on the other side when you hit a wall. This is illustrated by the fact that the Euler characteristic of a torus is <span class="math-container">$0$</span>, so there can be a flat metric on a torus (and indeed you can define one by expressing the torus as a quotient of the plane).</p>
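<p>(Restating the cited theorem in symbols, for reference.) Gauss-Bonnet for a closed surface <span class="math-container">$S$</span> says
<span class="math-container">$$\int_S K\,dA = 2\pi\,\chi(S),$$</span>
so a flat metric (<span class="math-container">$K\equiv 0$</span>) forces <span class="math-container">$\chi(S)=0$</span>. The sphere has <span class="math-container">$\chi=2$</span>, ruling it out, while the torus has <span class="math-container">$\chi=0$</span>, consistent with the wrap-around world described above.</p>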
|
3,223,618 | <p>I have this system of linear equations with parameter:</p>
<p><span class="math-container">$ ax + 4y + z =0 $</span></p>
<p><span class="math-container">$2y + 3z = 1$</span> </p>
<p><span class="math-container">$3x -cz=-2$</span></p>
<p>What I did was to put those equations into a matrix and transform that matrix it into a triangular matrix. Then I got these results on the right side:
<span class="math-container">$ \frac{-c+10}{ac-15}$</span>, <span class="math-container">$\frac{3+(c-6)a}{2ac-30}$</span>, <span class="math-container">$\frac{2a-6}{ac-15}$</span>. However, I don't know what to do now. </p>
<p>Thanks for any help!</p>
| Brian Fitzpatrick | 56,960 | <p>This is a situation where <a href="https://en.wikipedia.org/wiki/Cramer%27s_rule" rel="nofollow noreferrer">Cramer's rule</a> works quite well. </p>
<p>Our system is of the form <span class="math-container">$A\vec{x}=\vec{b}$</span> where
<span class="math-container">\begin{align*}
A &= \left[\begin{array}{rrr}
a & 4 & 1 \\
0 & 2 & 3 \\
3 & 0 & -c
\end{array}\right] & \vec{x} &= \left[\begin{array}{r}
x \\
y \\
z
\end{array}\right] & \vec{b} &= \left[\begin{array}{r}
0 \\
1 \\
-2
\end{array}\right]
\end{align*}</span>
Note that
<span class="math-container">$$
\det(A)=2\cdot(15-ac)
$$</span>
This means that the system has a unique solution if and only if <span class="math-container">$ac\neq 15$</span>. </p>
<p>Now, assuming <span class="math-container">$ac\neq 15$</span>, we define
<span class="math-container">\begin{align*}
A_1 &= \left[\begin{array}{rrr}
0 & 4 & 1 \\
1 & 2 & 3 \\
-2 & 0 & -c
\end{array}\right] & A_2 &= \left[\begin{array}{rrr}
a & 0 & 1 \\
0 & 1 & 3 \\
3 & -2 & -c
\end{array}\right] & A_3 &= \left[\begin{array}{rrr}
a & 4 & 0 \\
0 & 2 & 1 \\
3 & 0 & -2
\end{array}\right]
\end{align*}</span>
Note that <span class="math-container">$A_i$</span> is <span class="math-container">$A$</span> with the <span class="math-container">$i$</span>th column replaced by <span class="math-container">$\vec{b}$</span>. According to Cramer's rule, the solution to the system is given by
<span class="math-container">\begin{align*}
x &= \frac{\det(A_1)}{\det(A)} = \frac{2 \, {\left(c - 5\right)}}{15-ac} & y &= \frac{\det(A_2)}{\det(A)} = \frac{a {\left(c - 6\right)} + 3}{2 \, {\left(a c - 15\right)}} & z &= \frac{\det(A_3)}{\det(A)} = \frac{2 \, {\left(a - 3\right)}}{a c - 15}
\end{align*}</span></p>
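<p>(A quick sanity check of my own in exact rational arithmetic, with the arbitrary sample values <span class="math-container">$a=1$</span>, <span class="math-container">$c=2$</span>.)</p>

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, rhs):
    """Solve the 3x3 system A x = rhs by Cramer's rule (requires det(A) != 0)."""
    d = det3(A)
    out = []
    for j in range(3):
        Aj = [row[:] for row in A]   # A with column j replaced by rhs
        for i in range(3):
            Aj[i][j] = rhs[i]
        out.append(Fraction(det3(Aj), d))
    return out

a, c = 1, 2                          # any sample with a*c != 15
A = [[a, 4, 1], [0, 2, 3], [3, 0, -c]]
x, y, z = cramer3(A, [0, 1, -2])

# Agrees with the closed forms derived above:
assert x == Fraction(2 * (c - 5), 15 - a * c)
assert y == Fraction(a * (c - 6) + 3, 2 * (a * c - 15))
assert z == Fraction(2 * (a - 3), a * c - 15)
```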
|
3,881,390 | <p>I tried multiplying both sides by 4a,
which leads to <span class="math-container">$(6x+4)^2=40 \pmod{372}$</span>;
now I'm stuck on how to find a square root modulo 372.</p>
| fleablood | 280,126 | <p>Might be easier to factor or use the quadratic formula.</p>
<p><span class="math-container">$3x^2 + 4x - 2\equiv 0\pmod {31}$</span> so abusing notation where <span class="math-container">$\sqrt {k}$</span> will mean the congruence <span class="math-container">$a$</span> where <span class="math-container">$a^2 \equiv k \pmod {31}$</span> and <span class="math-container">$\frac 1{m}=m^{-1}$</span> is the congruence where <span class="math-container">$m(m^{-1})\equiv 1 \pmod {31}$</span> then</p>
<p><span class="math-container">$x \equiv \frac {-4\pm \sqrt{16 +24}}{6}\equiv$</span></p>
<p><span class="math-container">$(-4 \pm \sqrt{40})\cdot 6^{-1}\equiv$</span></p>
<p><span class="math-container">$(27\pm \sqrt{9})\cdot 6^{-1}\equiv $</span></p>
<p><span class="math-container">$(27\pm 3)\cdot 6^{-1}\equiv $</span></p>
<p><span class="math-container">$\begin{cases}30\cdot 6^{-1}\equiv 5\cdot 6\cdot 6^{-1}\equiv 5\\ 24\cdot 6^{-1}\equiv4\cdot 6\cdot 6^{-1}\equiv 4\end{cases}\pmod{31}$</span></p>
<p>So <span class="math-container">$x \equiv 4\pmod {31}$</span> or <span class="math-container">$x\equiv 5\pmod{31}$</span></p>
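<p>(A brute-force check of my own over the 31 residues, agreeing with the two roots.)</p>

```python
# All x (mod 31) with 3x^2 + 4x - 2 congruent to 0 (mod 31)
roots = [x for x in range(31) if (3 * x * x + 4 * x - 2) % 31 == 0]
assert roots == [4, 5]
```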
|
3,891,749 | <p>I got a pretty good idea for the proof but it feels like it's missing some details.</p>
<p>Proof:
<span class="math-container">$$X \times Y \Leftrightarrow a\in X \land b \in Y \Leftrightarrow a\in X \land b\in Z$$</span></p>
<p>Since <span class="math-container">$b\in Y \Leftrightarrow b\in Z$</span>, then <span class="math-container">$Y = Z \blacksquare$</span></p>
<p>I feel like the jump to <span class="math-container">$b\in Y \Leftrightarrow b\in Z$</span> is flawed is some way, missing some details. Can anyone help me justify it more or tell me that it's good?</p>
| Derek Luna | 567,882 | <p>Let <span class="math-container">$y \in Y$</span>. Since <span class="math-container">$X \neq \emptyset $</span> , let <span class="math-container">$x \in X$</span>. Consider <span class="math-container">$(x,y) \in X \times Y = X \times Z$</span> which implies <span class="math-container">$y \in Z$</span>. The other way is proven similarly.</p>
|
2,828,487 | <p>If $\mathcal{R}$ is a von Neumann algebra acting on Hilbert space $H$, and $v \in H$ is a cyclical and separating vector for $\mathcal{R}$ (hence also for its commutant $\mathcal{R}'$), and $P \in \mathcal{R}, Q \in \mathcal{R}'$ are nonzero projections, can we have $PQv = 0$?</p>
<p>[note i had briefly edited this to a reformulated version of the question, but am rolling it back to align with the answer below.]</p>
| Community | -1 | <p>In my undergrad, I took two one-semester courses following Royden's <em>Real Analysis</em>. I liked this book because Royden (generally) has a sufficient amount of detail. I found it a good grounding in the fundamentals of measure theory and metric spaces.</p>
<p>After this, I took a course in Fourier analysis that used many textbooks (among which my favourite was Katznelson's <em>An Introduction to Harmonic Analysis</em>) and a course in functional analysis that used MacCluer's <em>Elementary Functional Analysis</em> (an introductory text but very well presented - I imagine very appropriate for self-study).</p>
<p>It was during this time (the last year of my undergrad) that my advisor lent me her copy of Rudin's <em>Real and Complex Analysis</em>. This changed my outlook on analysis. Rudin writes in a way that I find simultaneously enlightening and challenging. The overall effect is stimulating. I feel like this book was important in training as an analyst and still exerts an influence on the way I think.</p>
<p>I found that these texts were sufficient for me to acquire strong foundations. In fact, these four texts are among those that can be found at my desk for quick access. For getting to research-level analysis, my experience is that each area has its own set of knowledge and techniques at people's fingertips. Reading specialised textbooks and well written papers in a particular area is, in my opinion, the best way to approach research. An advisor or other specialist can be a great source of suggestions to find these things. </p>
<p>For undergraduate summer programmes, definitely speak to people in your department (repeating @Thomas, above). </p>
|
231,479 | <p>Is there a function that can create hexagonal grid?</p>
<p>We have a square grid graph, where we can specify <code>m*n</code> dimensions:</p>
<pre><code>GridGraph[{m, n}]
</code></pre>
<p>We have a triangular grid graph (which, for unknown reasons, works only for argument <code>n</code> up to 10):</p>
<pre><code>GraphData[{"TriangularGrid", n}, "Graph"]
</code></pre>
<p>I cannot find a function that would generate a hexagonal grid graph. I would like it to work like <code>GridGraph</code>, something like <code>HexagonalGridGraph[{m,n,o}]</code> where <code>m,n,o</code> are the dimensions <code>m*n*o</code> of the planar graph - or, put another way, the "lengths" of the sides of the graph.</p>
<p>I can write my own code; I am asking just in case an implemented function already exists.</p>
<p><strong>UPDATE:</strong></p>
<p>What I mean by <code>m*n*o</code> hexagonal grid is for example this <code>3*5*7</code> hexagonal grid:</p>
<p><a href="https://i.stack.imgur.com/r8yTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r8yTS.png" alt="enter image description here" /></a></p>
<p>My code for producing it is very long and cumbersome so I will not upload it unless I can make it simpler.</p>
| Szabolcs | 12 | <p>With <a href="http://szhorvat.net/mathematica/IGraphM" rel="noreferrer">IGraph/M</a>:</p>
<pre><code>IGMeshGraph@IGLatticeMesh["Hexagonal", {6, 4}]
</code></pre>
<p><a href="https://i.stack.imgur.com/mWWa5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mWWa5.png" alt="enter image description here" /></a></p>
<p>We can also crop it to a hexagon:</p>
<pre><code>IGMeshGraph@IGLatticeMesh["Hexagonal", Polygon@CirclePoints[10, 6]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gu5YC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gu5YC.png" alt="enter image description here" /></a></p>
<p>It can also generate many other kinds of lattices, not just hexagonal.</p>
|
231,479 | <p>Is there a function that can create hexagonal grid?</p>
<p>We have a square grid graph, where we can specify <code>m*n</code> dimensions:</p>
<pre><code>GridGraph[{m, n}]
</code></pre>
<p>We have a triangular grid graph (which, for unknown reasons, works only for argument <code>n</code> up to 10):</p>
<pre><code>GraphData[{"TriangularGrid", n}, "Graph"]
</code></pre>
<p>I cannot find a function that would generate a hexagonal grid graph. I would like it to work like <code>GridGraph</code>, something like <code>HexagonalGridGraph[{m,n,o}]</code> where <code>m,n,o</code> are the dimensions <code>m*n*o</code> of the planar graph - or, put another way, the "lengths" of the sides of the graph.</p>
<p>I can write my own code; I am asking just in case an implemented function already exists.</p>
<p><strong>UPDATE:</strong></p>
<p>What I mean by <code>m*n*o</code> hexagonal grid is for example this <code>3*5*7</code> hexagonal grid:</p>
<p><a href="https://i.stack.imgur.com/r8yTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r8yTS.png" alt="enter image description here" /></a></p>
<p>My code for producing it is very long and cumbersome so I will not upload it unless I can make it simpler.</p>
| cvgmt | 72,111 | <p><strong>Edit-4</strong></p>
<p>Besides the type <code>{m,n,o}</code>, here we want to find the type <code>{n[1],n[2],n[3],n[4],n[5],n[6]}</code>. A simple calculation, for example:</p>
<pre><code>Solve[Array[n, 6] . CirclePoints[{0, 0}, {1, 0}, 6] == 0,
Array[n, 6], PositiveIntegers]
</code></pre>
<p>We find that it satisfies two equations.</p>
<pre><code>{n[1] - n[4] + n[2] - n[5] == 0, n[2] - n[5] + n[3] - n[6] == 0}
</code></pre>
<p>(so <code>{m,n,o,m,n,o}</code> always satisfies this relation)</p>
<pre><code>e[1] = AngleVector[0];
e[3] = AngleVector[2 π/3];
e[2] = e[1] + e[3];
e[4] = -e[1];
e[5] = -e[2];
e[6] = -e[3];
sol = Simplify[
SolveValues[Array[n, 6] . Array[e, 6] == {0, 0}, Array[n, 6],
PositiveIntegers][[1]], Array[C, 5] ∈ PositiveIntegers];
type = sol /. Thread[Array[C, 5] -> RandomInteger[{1, 10}, 5]]
bd = Accumulate@
Catenate[MapThread[ConstantArray, {Array[e, 6], type - 1}]];
reg = BoundaryMeshRegion[bd,
Line /@ {##, #1} & @@ Partition[Range@Length@bd, 2, 1, 1]];
allpts = Tuples[Range[0, 2 Max@type], 2] . {e[1], e[3]};
pts = Pick[allpts, RegionMember[reg]@allpts];
Graphics[{EdgeForm[Blue], FaceForm[],
RegularPolygon[#, {1/Sqrt[3], π/2}, 6] & /@ pts, Red, Point@bd}]
</code></pre>
<blockquote>
<p><code>{12, 10, 11, 15, 7, 14}</code></p>
</blockquote>
<p><a href="https://i.stack.imgur.com/k4jPu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k4jPu.png" alt="enter image description here" /></a></p>
<p><strong>Edit-3</strong></p>
<pre><code>{m, n, o} = {3, 5, 7};
Graphics[{EdgeForm[Blue], FaceForm[],
RegularPolygon[#, {1, 0},
6] & /@ (Sqrt[
3] SolveValues[{0 <= x <= o - 1, 0 <= y <= n - 1,
0 <= z <= m - 1, x == 0 || y == 0 || z == 0}, {x, y, z},
Integers] . CirclePoints[{1, Pi/6}, 3])}]
</code></pre>
<pre><code>{x, y, z} = {7, 5, 3};
bases = CirclePoints[{1, 30 Degree}, 3];
coordinates =
Catenate[{Tuples[{Range[0, x - 1], Range[0, y - 1], {0}}],
Tuples[{Range[1, x - 1], {0}, Range[1, z - 1]}],
Tuples[{{0}, Range[0, y - 1], Range[1, z - 1]}]}];
Graphics[{EdgeForm[Blue], FaceForm[],
RegularPolygon[#, {1, 0}, 6] & /@ (Sqrt[3]*coordinates . bases)}]
</code></pre>
<p><a href="https://i.stack.imgur.com/3QJ8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3QJ8t.png" alt="enter image description here" /></a></p>
<p><strong>Edit-2</strong></p>
<p>The idea comes from 3D.</p>
<pre><code>Graphics3D[Cuboid[], BoxRatios -> {5, 7, 3}, Boxed -> False,
ViewProjection -> "Orthographic", ViewPoint -> {2.0, -1.7, 2.0}]
</code></pre>
<p><a href="https://i.stack.imgur.com/AHl3T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AHl3T.png" alt="enter image description here" /></a></p>
<pre><code>{eX, eY, eZ} = CirclePoints[{1, 30 Degree}, 3];
{x, y, z} = {7, 5, 3};
(*{x,y,z}={8,8,8};*)
pXY = Sqrt[3] Tuples[{Range[x] - 1, Range[y - 1]}] . {eX, eY};
pYZ = Sqrt[3] Tuples[{Range[y] - 1, Range[z - 1]}] . {eY, eZ};
pZX = Sqrt[3] Tuples[{Range[z] - 1, Range[x - 1]}] . {eZ, eX};
Graphics[{EdgeForm[White], Red, RegularPolygon[#, {1, 0}, 6] & /@ pXY,
Green, RegularPolygon[#, {1, 0}, 6] & /@ pYZ, Blue,
RegularPolygon[#, {1, 0}, 6] & /@ pZX, Black, Point[pXY],
PointSize[Medium], Point[pYZ], PointSize[Large], Point[pZX]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/QZ9v1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QZ9v1.png" alt="enter image description here" /></a></p>
<p><strong>Edit-1</strong></p>
<p>If we introduce three coordinates <code>{x1,y1,z1}</code> and three bases <code>e1,e2,e3</code> instead of just two coordinates and two bases, the construction of the type <code>{m, n, o} = {3, 5, 7}</code> is relatively easy.</p>
<pre><code>{m, n, o} = {3, 5, 7};
eM = AngleVector[90 Degree];
eN = AngleVector[150 Degree];
eO = AngleVector[30 Degree];
pts = Sqrt[3] Tuples[Range /@ {m, n, o}] . {eM, eN, eO} // Union;
Graphics[{EdgeForm[White],
Table[RegularPolygon[p, {1, 0}, 6], {p, pts}], Red, Point[pts]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/0r7Q2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0r7Q2.png" alt="enter image description here" /></a></p>
<pre><code>Graphics[{EdgeForm[Blue], FaceForm[],
Table[RegularPolygon[p, {1, 0}, 6], {p, pts}], Red, Point[pts],
Riffle[{Red, Green, Blue}, Arrow[{{0, 0}, 2 #}] & /@ {eM, eN, eO}]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/xjhYH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xjhYH.png" alt="enter image description here" /></a></p>
|
117,285 | <p>Let $R \subseteq A \times A$ and $S \subseteq A \times A$ be two arbitary equivalence relations.
Prove or disprove that $R \cup S$ is an equivalence relation.</p>
<p>Reflexivity: Let $(x,x) \in R$ or $(x,x) \in S \rightarrow (x,x) \in R \cup S$</p>
<p>Now I still have to prove or disprove that $R \cup S$ is symmetric and transitive. How can I do that?</p>
<p>My guess for symmetry is: $R$ and $S$ are equivalence relations, so whenever $(x,y) \in R$ we also have $(y,x) \in R$, and likewise for $S$; hence $(x,y) \in R \cup S$ implies $(y,x) \in R \cup S$. Is that correct?</p>
<p>Transitivity: ?</p>
| Ashish kakran | 541,198 | <p>I don't know if I'm right or wrong, but here is how I think of it.</p>
<p>Let's say R and S are two equivalence relations on a nonempty set A.</p>
<p>To answer whether R union S is an equivalence relation, consider the fact that R forms a partition of A and S also forms a partition.</p>
<p>My intuition: taking the union of R and S is equivalent to taking the union of their partitions (I haven't proved this yet).</p>
<p>case 1: if the partitions of R and S overlap imperfectly, the blocks formed after the union are not disjoint, hence R union S can't be an equivalence relation</p>
<p>case 2: if the partitions of R are contained perfectly in those of S, which happens when one of the relations is a subset of the other, then in their union one set of blocks can be dropped, the remaining blocks are disjoint, and their union covers all of A; thus R union S must be an equivalence relation in this case</p>
<p>case 3: if the partitions don't overlap at all (can't happen)</p>
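Case 1 can be made concrete with a small counterexample; a Python sketch (the three-element set and the helper names are illustrative):

```python
def equiv_from_partition(blocks):
    """Equivalence relation generated by a partition: all within-block pairs."""
    return {(a, b) for block in blocks for a in block for b in block}

def is_transitive(rel):
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

R = equiv_from_partition([{1, 2}, {3}])   # blocks of R: 1 ~ 2
S = equiv_from_partition([{1}, {2, 3}])   # blocks of S: 2 ~ 3

union = R | S
# (1,2) and (2,3) are in the union but (1,3) is not: transitivity fails,
# so R union S is not an equivalence relation here.
assert (1, 2) in union and (2, 3) in union and (1, 3) not in union
assert not is_transitive(union)
```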
|
1,936,043 | <p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p>
<p>My thoughts: I know $(-1)^n$ is divergent, but is $n$ raised to the power of a divergent sequence still divergent? I am not sure how to give a proper proof; please help!</p>
| user159517 | 159,517 | <p>The argument that "n to the power of a divergent sequence is divergent" does not make sense (consider $n^{-n}$ for example.). Regarding your sequence: if this sequence were convergent against some limit value $a$, then every subsequence would have to converge against the same value $a$. Now look at the subsequences for $n$ even and odd, respectively.</p>
|
1,441,905 | <blockquote>
<p>Find the range of values of $p$ for which the line $ y=-4-px$ does not intersect the curve $y=x^{2}+2x+2p$</p>
</blockquote>
<p>I think I probably have to find the discriminant of the curve but I don't get how that would help.</p>
| D.L. | 95,150 | <p>The two equations give $x^2+(2+p)x+2p+4=0$, so you have to calculate the discriminant of this quadratic. You get $(2+p)^2-4(2p+4)=p^2-4p-12=(p-6)(p+2)$, which is negative exactly when $-2<p<6$; these are the values of $p$ for which the line does not intersect the curve. </p>
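The sign of the discriminant can be double-checked in plain Python (`discriminant` here is an illustrative helper encoding the quadratic obtained by substituting the line into the curve):

```python
def discriminant(p):
    # Quadratic from equating y = -4 - p*x with y = x^2 + 2x + 2p:
    # x^2 + (2 + p) x + (2p + 4) = 0
    return (2 + p) ** 2 - 4 * (2 * p + 4)

# No intersection exactly when the discriminant is negative, i.e. -2 < p < 6.
assert all(discriminant(p) < 0 for p in [-1, 0, 3, 5])
assert all(discriminant(p) > 0 for p in [-3, 7, 10])
assert discriminant(-2) == 0 and discriminant(6) == 0  # tangency endpoints
```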
|
4,327,729 | <p>In this question, I would like to investigate the location of the absolute value in the arcsecant integral.</p>
<p>Following <a href="https://math.stackexchange.com/questions/3735966/why-the-derivative-of-inverse-secant-has-an-absolute-value">this answer</a> and <a href="https://math.stackexchange.com/questions/1449228/deriving-the-derivative-formula-for-arcsecant-correctly?rq=1">this answer</a>, we know the following is true:
<span class="math-container">$$
\frac{d}{dx}\sec^{-1}(x)=\frac{1}{|x|\sqrt{x^2-1}}.
$$</span>
Taking indefinite integrals of both sides, we get
<span class="math-container">$$
\sec^{-1}(x)+C=\int \frac{1}{|x|\sqrt{x^2-1}} dx
$$</span>
How can one use this result to deduce that
<span class="math-container">$$
\sec^{-1}(|x|)+C=\int \frac{1}{x\sqrt{x^2-1}} dx?
$$</span>
(Notice that the absolute values used to be in the denominator of the right hand side, and now they are in the argument of the <span class="math-container">$\sec^{-1}$</span> function.)</p>
<p>What logical rule of deduction allows us to interchange the location of the absolute values on opposite sides of this equation?</p>
| John Wayland Bales | 246,513 | <p>Given that</p>
<p><span class="math-container">$$ \sec^{-1}(u)+C=\int \frac{1}{|u|\sqrt{u^2-1}} du $$</span></p>
<p>let <span class="math-container">$u=|x|$</span>. Then <span class="math-container">$du=\dfrac{|x|}{x}dx$</span> and</p>
<p><span class="math-container">\begin{eqnarray}
\sec^{-1}(|x|) +C&=&\int\frac{1}{|x|\sqrt{|x|^2-1}}\cdot \frac{|x|}{x}\,dx\\
&=& \int \frac{1}{x\sqrt{x^2-1}} dx.
\end{eqnarray}</span></p>
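Since sec^{-1}(u) = arccos(1/u) for |u| ≥ 1, the sign bookkeeping can be verified numerically for both signs of x; a Python sketch (helper names are illustrative):

```python
import math

def arcsec_abs(x):
    # sec^{-1}(|x|), computed as arccos(1/|x|), valid for |x| >= 1
    return math.acos(1.0 / abs(x))

def numeric_derivative(f, x, h=1e-6):
    # symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2.0 * h)

# d/dx sec^{-1}(|x|) should equal 1/(x sqrt(x^2 - 1)) on both branches.
for x in [1.5, 2.0, 5.0, -1.5, -2.0, -5.0]:
    expected = 1.0 / (x * math.sqrt(x * x - 1.0))
    assert abs(numeric_derivative(arcsec_abs, x) - expected) < 1e-5
```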
|
2,011,181 | <blockquote>
<p><strong>Question:</strong> Find the area of the shaded region given $EB=2,CD=3,BC=10$ and $\angle EBC=\angle BCD=90^{\circ}$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/BFf2h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BFf2h.jpg" alt="Diagram"></a></p>
<p>I first dropped an altitude from $A$ to $BC$ forming two cases of similar triangles. Let the point where the altitude meets $BC$ be $X$. Thus, we have$$\triangle BAX\sim\triangle BDC\\\triangle CAX\sim\triangle CEB$$
Using the proportions, we get$$\frac {BA}{BD}=\frac {AX}{CD}=\frac {BX}{BC}\\\frac {CA}{CE}=\frac {AX}{EB}=\frac {CX}{CB}$$
But I'm not too sure what to do next from here. I feel like I'm <em>very</em> close, but I just can't figure out $AX$.</p>
| David Quinn | 187,299 | <p>You are almost there. Just add the right hand pairs of equations you have so you get $$\frac{AX}{CD}+\frac{AX}{EB}=1$$ </p>
<p>Substituting the values, you get $AX=\frac 65$ so the required area is...?</p>
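Placing coordinates B = (0,0), C = (10,0), E = (0,2), D = (10,3) lets a few lines of Python confirm the value of AX and finish the area computation (assuming the shaded region is triangle ABC on base BC):

```python
# EB and CD are perpendicular to BC, so take
# B = (0, 0), C = (10, 0), E = (0, 2), D = (10, 3); A is the
# intersection of lines BD and CE.
slope_bd = 3.0 / 10.0                 # line BD: y = 0.3 x
slope_ce = 2.0 / 10.0                 # line CE: y = 2 - 0.2 x
x = 2.0 / (slope_bd + slope_ce)       # solve 0.3 x = 2 - 0.2 x
ax_height = slope_bd * x              # the altitude AX from A to BC

assert abs(ax_height - 6 / 5) < 1e-9
area = 0.5 * 10 * ax_height           # triangle ABC on base BC
assert abs(area - 6.0) < 1e-9
```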
|
19,356 | <p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p>
<p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p>
<p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p>
<p>Some other guesses:</p>
<ol>
<li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li>
<li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li>
</ol>
<p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p>
<p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
| Timothy Chow | 3,106 | <p>I believe that one shift is that "what every mathematician should know" is nowadays much less a specific body of mathematical facts and much more a facility with navigating the ocean of mathematical knowledge.</p>
<p>For example, I might not need to have advanced computer programming skills, but I do need to have some sense of what kinds of computations are feasible and when it is appropriate for me to do a computation.</p>
<p>I might not need to hold in my head everything that is known about a certain topic, even if that topic is close to my area of specialization, but I definitely need to have the ability to search the literature, assess what is in a certain paper that my search turns up, and know when I should ask an expert and how to formulate a targeted question to ask.</p>
<p>Similarly, I might not need detailed knowledge of fields (seemingly) distant from my own, but I do need to be able to discern when those distant fields might provide relevant tools for my own work.</p>
<p>So far I have been focusing on what a mathematician needs to know in order to be an effective researcher. However, the phrase "what every mathematician should know" carries overtones of what one should know <i>if one wants to earn a reputation for being an educated, knowledgeable, respectable, and attractive representative of the profession</i>. In my opinion this is quite a different question. For this, you need to be fluent in the language of the hot topics <i>du jour</i>, and <i>au courant</i> with flashy announcements of big breakthroughs in all areas of mathematics. While there's some correlation between this kind of knowledge and the knowledge I discussed above, I find it questionable whether, literally speaking, every mathematician should have it.</p>
|
1,995,663 | <p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p>
<p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p>
<p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt=""five color" graph"></a></p>
| dxiv | 291,201 | <p>One possible 4-coloring of such a map.</p>
<p><a href="https://i.stack.imgur.com/z23kq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/z23kq.png" alt="enter image description here"></a></p>
|
79,041 | <p>Let <span class="math-container">$\mathfrak{g}$</span> be the Lie algebra of a Lie group <span class="math-container">$G$</span> which acts on a manifold <span class="math-container">$M$</span>.
It is quite standard that the basic forms in <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> form a model for the singular equivariant cohomology of <span class="math-container">$M$</span>. However, I have never seen a proof and it is not straightforward to me. Could someone give a sketch or a reference of the proof of this fact? It is probably in one of Cartan's papers but I haven't been able to find it.</p>
<hr>
<p>Here goes some background:</p>
<p>We define its Weil algebra by <span class="math-container">$W^*(\mathfrak{g}^*)=S^*(\mathfrak{g}^*) \otimes \wedge^*(\mathfrak{g}^*)$</span>; there is also a natural differential operator <span class="math-container">$d_W$</span> which makes <span class="math-container">$W^*(\mathfrak{g}^*)$</span> into a complex. We define <span class="math-container">$d_W$</span> as follows:</p>
<p>Choose a basis <span class="math-container">$e_1,...,e_n$</span> for <span class="math-container">$\mathfrak{g}$</span> and let <span class="math-container">$e^*_1,...e^*_n$</span> its dual basis in <span class="math-container">$\mathfrak{g}^*$</span>. Let <span class="math-container">$\theta_1,...,\theta_n$</span> be the image of <span class="math-container">$e^*_1,...e^*_n$</span> in <span class="math-container">$\wedge(\mathfrak{g}^*)$</span> and let <span class="math-container">$\Omega_1,...,\Omega_n$</span> be the image of <span class="math-container">$e^*_1,...e^*_n$</span> in <span class="math-container">$S(\mathfrak{g}^*)$</span>. Let <span class="math-container">$c_{jk}^i$</span> be the structure constants of <span class="math-container">$\mathfrak{g}$</span>, that is <span class="math-container">$[e_j,e_k]=\sum_{i=1}^nc_{jk}^ie_i$</span>. Define <span class="math-container">$d_W$</span> by
<span class="math-container">\begin{eqnarray}
d_W\theta_i=\Omega_i- \frac{1}{2}\sum_{j,k} c_{jk}^i \theta_j \wedge \theta_k
\end{eqnarray}</span>
and
<span class="math-container">\begin{eqnarray}
d_W\Omega_i=\sum_{j,k}c_{jk}^i\theta_j \Omega_k
\end{eqnarray}</span>
and extending <span class="math-container">$d_W$</span> to <span class="math-container">$W(\mathfrak{g}^*)$</span> as a derivation.</p>
<p>We can also define interior multiplication <span class="math-container">$i_X$</span> on <span class="math-container">$W(\mathfrak{g}^*)$</span> for any <span class="math-container">$X \in \mathfrak{g}$</span> by
<span class="math-container">\begin{eqnarray}
i_{e_r}(\theta_s)=\delta^r_s, i_{e_r}(\Omega_s)=0
\end{eqnarray}</span>
for all <span class="math-container">$r,s=1,...,n$</span> and extending by linearity and as a derivation. </p>
<p>Now consider <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> as a complex. Using this definition of interior multiplication, together with the usual definition of interior multiplication on forms, we define the basic complex of <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak g^*)$</span>:</p>
<p>We call <span class="math-container">$\alpha \in \Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> a basic element if <span class="math-container">$i_X(\alpha)=0$</span> and <span class="math-container">$i_X(d \alpha)=0$</span>. Basic elements in <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> form a subcomplex which we denote by <span class="math-container">$\Omega^*_G (M)$</span>.</p>
<p>The claim is that <span class="math-container">$H^*(\Omega^*_G (M))=H^*(M \times_G EG)$</span> where the right hand side denotes the singular equivariant cohomology of <span class="math-container">$M$</span>.</p>
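For orientation, the smallest nontrivial case G = S¹ can be written out explicitly (a sketch; sign conventions for the Cartan differential vary):

```latex
% Example: G = S^1, so \mathfrak{g} is abelian and all c^i_{jk} = 0.
\[
  W(\mathfrak{g}^*) = \mathbb{R}[\Omega]\otimes\Lambda(\theta),
  \qquad d_W\theta = \Omega, \qquad d_W\Omega = 0 .
\]
% A horizontal element (one killed by i_X) of \Omega^*(M) \otimes W(\mathfrak{g}^*)
% has the form \alpha = \sum_k \alpha_k \otimes \Omega^k with i_X\alpha_k = 0;
% imposing i_X(d\alpha) = 0 as well identifies the basic subcomplex with the
% Cartan model
\[
  \Omega^*_{S^1}(M)\;\cong\;\bigl(\,\Omega^*(M)^{S^1}[\Omega],\;
  d_\Omega = d - \Omega\, i_X \,\bigr),
\]
% whose cohomology is H^*(M \times_{S^1} ES^1).
```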
| Praphulla Koushik | 118,688 | <p>As mentioned by the user SGP, the book <a href="https://www.springer.com/gp/book/9783540647973" rel="nofollow noreferrer">Supersymmetry and Equivariant de Rham Theory</a> by Victor W Guillemin and Shlomo Sternberg discusses the Cartan model.
One of the intentions is to prepare the reader to understand Cartan's papers:</p>
<ul>
<li><em>Notions d'algèbre différentielle; application aux groupes de Lie et aux variétés où opère un groupe de Lie</em>, Colloque de Topologie, C.B.R.M., Bruxelles 15-27 (1950)</li>
<li><em>La transgression dans un groupe de Lie et dans un espace
fibré principal</em>, Colloque de Topologie, C.B.R.M., Bruxelles 57-71 (1950)</li>
</ul>
<p>Preface of the book : </p>
<blockquote>
<p>This is the second volume of the Springer collection Mathematics Past
and Present. In the first volume, we republished Hörmander's
fundamental papers <em>Fourier integral operators</em> together with a brief
introduction written from the perspective of 1991. The composition of
the second volume is somewhat different: the two papers of Cartan
which are reproduced here have a total length of less than thirty
pages, and the 220 page introduction which precedes them is intended
not only as a commentary on these papers but as a textbook of its own,
on a fascinating area of mathematics in which a lot of exciting
innovations have occurred in the last few years. Thus, in this second
volume the roles of the reprinted text and its commentary are
reversed. The seminal ideas outlined in Cartan's two papers are taken
as the point of departure for a full modern treatment of equivariant
de Rham theory which does not yet exist in the literature.</p>
</blockquote>
<p>Introduction : </p>
<blockquote>
<p>The year 2000 will be the fiftieth anniversary of the publication of Henri Cartan's two fundamental papers on equivariant De Rham theory "Notions d'algèbre différentielle; applications aux groupes de Lie et aux variétés où opère un groupe de Lie" and "La transgression dans un groupe de Lie et dans un espace fibré principal." The aim of this monograph is to give an updated account of the material contained in these papers and to describe a few of the more exciting developments that have occurred in this area in the five decades since their appearance. </p>
</blockquote>
|