title
string
question_body
string
answer_body
string
tags
string
accepted
int64
Proving that when gf is injective, f is surjective
For functions $f: A \rightarrow B$ and $g: B \rightarrow C$ ($A$, $B$, $C$ nonempty sets) I have started off by presuming $g \circ f$ is injective (given): $\forall x_1, x_2 \in A$, $g \circ f (x_1) = g \circ f (x_2) \implies x_1 = x_2$ . I have tried to prove this by assuming $f$ is not surjective, i.e. $\exists b \in B$ such that $b \not\in f(A)$ . Then I tried to say that since $g \circ f$ is injective, $g \circ f(a) \in C$ . Not sure where to progress from here; any ideas would be appreciated.
The statement you are trying to prove is false. Take $A=\{a\}$ , $B=\{b_1, b_2\}$ and $C=\{c_1, c_2\}$ . Further, define $f(a)=b_1$ , and define $g(b_1)=c_1, g(b_2)=c_2$ . Then $g\circ f$ is injective (its domain has only one element). However, $f$ is not surjective: nothing maps to $b_2$ .
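The counterexample can be sanity-checked mechanically; a small sketch, with the finite functions encoded as dictionaries:

```python
# Counterexample: A = {a}, B = {b1, b2}, C = {c1, c2}
f = {"a": "b1"}                   # f : A -> B
g = {"b1": "c1", "b2": "c2"}      # g : B -> C

gf = {x: g[f[x]] for x in f}      # the composite g∘f : A -> C

# g∘f is injective: distinct inputs give distinct outputs
assert len(set(gf.values())) == len(gf)

# f is not surjective: b2 is never hit
assert "b2" not in f.values()
```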
|functions|
0
Proof critique - Glueing local isomorphism
I apologize first for such a frequently asked question. I am not confident about my approach to this problem, which is possibly related to glueing local isomorphisms. I am aware that local isomorphisms do not always glue to global ones - otherwise all locally free sheaves of rank 1 would be isomorphic. So I guess one needs to have a global morphism first to ensure the compatibility of local information. Problem. Let $\pi:\operatorname{Spec}A\to \operatorname{Spec}B$ be a morphism of affine schemes and $M$ an $A$ -module, hence $\widetilde{M}$ is a quasi-coherent sheaf on $\operatorname{Spec}A$ . Give an isomorphism $\pi_*\widetilde{M}\cong \widetilde{M_B}$ where $M_B$ is simply regarding $M$ as a $B$ -module via the ring map $\varphi:B\to A$ corresponding to the affine scheme morphism. My approach. On the base $\{D(f)\}_{f\in B}$ of $\operatorname{Spec}B$ , we have $$\alpha_{D(f)}:\pi_*\widetilde{M}(D(f))=\widetilde{M}(\pi^{-1}(D(f)))=\widetilde{M}(D(\varphi(f)))\cong M_{\varphi(f)
I have time now to write up an answer to your question; in short, your proof is correct. In more detail, let $X$ be a topological space, $\mathcal B$ a basis for $X$ , and $\mathcal F$ a sheaf on $X$ . Then the sheaf $\mathcal F_\mathcal B$ on $\mathcal B$ defined by $U\mapsto \mathcal F(U)$ for all $U$ in $\mathcal B$ induces a sheaf on $X$ which is uniquely isomorphic to $\mathcal F$ . In particular, let $\theta^U_V$ denote the restriction maps for both sheaves and sheaves on a basis, and $\psi_U$ denote the isomorphisms $\mathcal{F}(U)\rightarrow \mathcal F_\mathcal B(U)$ ; then $\mathcal F$ satisfies the following universal property: For any sheaf $\mathcal G$ on $X$ and any collection of set/group/ring morphisms $\phi_U:\mathcal{F}_B(U)\rightarrow \mathcal G(U)$ satisfying $\theta^U_V\circ\phi_U=\phi_V\circ \theta^U_V$ for all $V\subset U\in \mathcal{B}$ , there exists a unique sheaf morphism $F:\mathcal F\rightarrow \mathcal G$ , such that for all $U\in\mathcal{B}$ : $$F_U=\p
|algebraic-geometry|sheaf-theory|schemes|affine-schemes|
1
Find the exact time
A teacher leaves his home every day at 8:00 A.M. to come to school. If he walks at 95 meters per minute, he will arrive one minute late for his class. If he walks at 105 meters per minute, he will arrive one minute early. At what speed should he walk to arrive exactly on time? Is this the correct way to approach the problem: $D=95(t+1)$ , $D=105(t-1)$ ; then $95(t+1)=105(t-1)$ gives $t=20$ ; then just substitute the found value of $t$ into one of the equations to find the value of $D$ , then use the basic formula distance over time to find the solution. Doesn't the unit (meters per minute) cause a problem at all?
No, the units of speed won't cause any issue: when you write $D = \text{Speed} \cdot \text{Time}$ , the unit of time is minutes and that of speed is $\text{metre} \cdot \text{minute}^{-1}$ . Multiplication of the two physical quantities gives a unit of distance. Hence no issue.
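The arithmetic itself can be checked in a few lines; a sketch of the computation described in the question:

```python
# 95(t+1) = 105(t-1)  =>  t = 20 minutes (time to arrive exactly on time)
t = (95 + 105) / (105 - 95)
assert t == 20

D = 95 * (t + 1)           # distance to school, in metres
assert D == 105 * (t - 1)  # both expressions agree: D = 1995 m

speed = D / t              # required speed in metres per minute
print(speed)               # 99.75
```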
|algebra-precalculus|
1
Non commutative field of characteristic $\neq 0$
Does there exist a non-commutative field of a characteristic different from zero?
Usually we call "non-commutative fields" division algebras (the terminology "skew field" also exists, but "division algebra" is really the modern term). And yes, they exist in every characteristic: if $K$ is any field, the so-called Brauer group $\operatorname{Br}(K)$ gives the set of isomorphism classes of finite-dimensional central division $K$ -algebras. If $K$ has characteristic $p$ , then so do those algebras. You might have asked this question because if $K$ is finite then $\operatorname{Br}(K)$ is trivial, so our favorite fields of characteristic $p$ (the finite fields) do not yield non-commutative division algebras. But in general $\operatorname{Br}(K)$ is far from trivial. For instance, $\operatorname{Br}(\mathbb{F}_p(X))$ has a nice description in terms of the points of the projective line $\mathbb{P}^1(\mathbb{F}_p)$ . Here is a concrete example of a division algebra in characteristic $p\neq 2$ : define $Q$ as an $\mathbb{F}_p(X,Y)$ -algebra generated by two elements $i$ and $j
|field-theory|noncommutative-algebra|
0
How can I find $f(2)$?
Let $f$ be a real function with $f(0)=0$ , differentiable for $x \neq0 $ and $f'(x)= \frac{e^x}{3f^2(x)+2f(x)+1}$ . How can I show that there exists an $x_0$ , $0<x_0<2$ , s.t. $f(x_0)=1$ ? I've already shown that $f^3(x)+f^2(x)+f(x)=e^x-1$ by multiplying the given fraction by the denominator and integrating. Also I have shown that $f$ is differentiable at $x=0$ with $f'(0)=1$ . Now to prove that there exists an $x_0$ , $0<x_0<2$ , with $f(x_0)=1$ : we already know $f(0)=0 $ , and I tried to find $f(2)$ so I can use the intermediate value theorem but got nowhere. Any help would be appreciated, thanks in advance.
I guess this might be the way, using your conclusion: $f^3(x) + f^2(x) + f(x) + 1 = e^x$ . Suppose there is some $x = x_o $ such that $f(x_o) = 1$ . Putting $x =x_o$ in the above equation we get $e^{x_o} = 4$ . Now since $e^x$ is a strictly increasing function and $e^0 = 1 < 4 < e^2$ , we get $0 < x_o < 2$ . (For existence: $x_o=\ln 4$ works, since then $f^3+f^2+f=3$ , and $t=1$ is the only real root of $t^3+t^2+t-3=(t-1)(t^2+2t+3)$ , so $f(\ln 4)=1$ .)
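A quick numerical check of the two facts used here (that $\ln 4$ lies in $(0,2)$ and that $t=1$ is the only real root of the cubic); a sketch:

```python
import math

x0 = math.log(4)                      # e^{x0} = 4
assert 0 < x0 < 2                     # x0 ≈ 1.386

p = lambda t: t ** 3 + t ** 2 + t + 1
assert p(1) == 4                      # t = 1 solves f^3 + f^2 + f + 1 = 4

# uniqueness: t^3 + t^2 + t - 3 = (t - 1)(t^2 + 2t + 3), and the
# quadratic factor has negative discriminant, hence no real roots
for t in range(-5, 6):
    assert (t - 1) * (t * t + 2 * t + 3) == t ** 3 + t ** 2 + t - 3
assert 2 ** 2 - 4 * 1 * 3 < 0
```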
|real-analysis|calculus|
0
Zero set of continuous and monotone function is convex
Suppose that $F:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is continuous and monotone, i.e., \begin{align} (\forall x,y \in \mathbb{R}^{n}) \quad \langle F(x) - F(y), x-y \rangle \geq 0. \end{align} Show that \begin{align} \text{zer }F := \{x\in\mathbb{R}^{n}\mid F(x) = 0\} \end{align} is convex. Let $x,y\in \text{zer }F$ and $\theta\in(0,1)$ . Let $z=\theta x + (1-\theta) y$ and $\alpha>0$ . Then \begin{align} \langle F(z) - F(x \mp \alpha(1-\theta) F(z)), y-x \pm \alpha F(z) \rangle &\geq 0, \\ \langle F(z) - F(y \mp \alpha \theta F(z)), x-y \pm \alpha F(z) \rangle &\geq 0. \end{align} This feels close. What about a proof that does not use the Minty trick as below?
Here is a proof based on the so-called Minty trick for maximal monotone operators. It is based on the equivalence $$ F(x) =0 \quad \Leftrightarrow \quad \langle F(y), y-x \rangle \ge0 \quad \forall y. $$ The direction $\Rightarrow$ follows from monotonicity. Now assume the condition on the right. Take $w$ and set $y:=x+tw$ with $t>0$ . Then $$ \langle F(x+tw), tw \rangle \ge0 . $$ Dividing by $t$ and passing to the limit $t\searrow0$ proves $$ \langle F(x),w\rangle \ge0 \quad \forall w, $$ and replacing $w$ by $-w$ gives equality. This uses continuity. Hence $F(x)=0$ . And the equivalence is proven. Then $$ \operatorname{zer}(F) = \{x :\ F(x) =0 \} = \bigcap_{y} \{x :\ \langle F(y), y-x\rangle \ge0 \} $$ is an intersection of convex sets, hence convex.
|real-analysis|convex-analysis|
1
Convergent sum of operator norms of iterates of kernel operator corresponding to stepping of graphon $W$
Suppose that we are given a possibly non-symmetric graphon $W:[0,1]^2\to[0,K]$ , where $K>0$ is some fixed, known constant. Given such a graphon, we consider the kernel operator $T_W:L^1[0,1]\to L^1[0,1]$ defined by $$(T_Wf)(\cdot)=\int_0^1W(\cdot,y)f(y)\ \mathrm dy;$$ see $\S$ 7.5 of Lovasz - Large networks and graph limits . Now assume that $W$ is such that $\rho(T_W)<1$ , i.e. the spectral radius of the operator $T_W$ is smaller than one. By a straightforward application of Gelfand's formula, we can prove that $$\sum_{n\geq0}\|T_W^n\|<\infty,$$ using the exponential decay of $\|T_W^n\|$ . Now suppose that we are given a sequence of partitions $(\mathcal P^d)_{d\in\mathbb N}$ of $[0,1]$ into consecutive intervals $I_1,\ldots,I_d$ , in such a way that $\mathrm{mesh}(\mathcal P^d)\to0$ as $d\to\infty$ . I'm willing to assume that $\mathcal P^{d+1}$ is a refinement of $\mathcal P^d$ , but I don't think this is of much help. Now consider the stepping of $W$ , denoted by $W_{\mathcal P^d}$ , o
The sums are indeed uniformly bounded. For legibility I'll write $T_d$ instead of $T_{W_{\mathcal P^d}}.$ I'll use the notation $\|X\|_{p\to q}$ for $\sup\{\|X(f)\|_q:\|f\|_p=1\}$ with $p,q\in\{1,\infty\}.$ Here are the main ideas/observations: by submultiplicativity + bounding by a geometric series, it suffices to show $\|T_d^n\|\to\|T^n\|$ for each fixed $n$ $\|T_d^n\|\leq \|T_d^{n-1}T_W\|$ $\|T_d-T_W\|_{\infty\to 1}\to 0$ $\|T_d\|_{p\to q},\|T_W\|_{p\to q}\leq K$ for all $p,q\in\{1,\infty\}$ In more detail: there exists $\epsilon>0$ and $N$ such that $\|T^N\|\leq 1-\epsilon.$ Assume we can show that $\|T_d^n\|\leq \|T^n\|+\epsilon/2$ for $n\leq N$ and sufficiently large $d.$ Then $\|T_d^N\|\leq 1-\epsilon/2$ , so \begin{align*} \sum_{n\geq 0}\|T_d^n\| = \sum_{m\geq 0}\sum_{n<N}\|T_d^{mN+n}\| \leq \sum_{m\geq 0}\|T_d^N\|^m\sum_{n<N}\|T_d^n\| \leq \frac{2}{\epsilon}\sum_{n<N}\left(\|T^n\|+\epsilon/2\right), \end{align*} which is finite and independent of $d.$ we can write $T_d=S_dT_WS_d$ where $S_d$ is the "stepping operator" $L^1\to L^1$ : the conditional expectation operator onto the $\sigma$ -algebra generated by $\mathcal P_d.$ Since $S_d$ is a contract
|real-analysis|functional-analysis|graph-theory|spectral-theory|compact-operators|
1
Show convergence of sets
Consider the following sets: $$ \begin{aligned} & A_n\equiv \Big\{ x\in X: d\big(p_n, [\ell(x), u(x) ] \big)\leq \delta_n\Big\}\\ & A \equiv \Big\{ x\in X: \lim_{n\rightarrow \infty} d\big(p_n, [\ell(x), u(x)] \big)= 0\Big\} \end{aligned} $$ where: $X\subseteq \mathbb{R}$ . $(p_n)_n$ is a sequence of numbers taking values in $[0,1]$ . $\ell(\cdot)$ and $u(\cdot)$ are real functions taking values in $[0,1]$ . $(\delta_n)_n$ is a sequence of positive numbers going to $0$ as $n\rightarrow \infty$ . In particular, $\delta_n\equiv \sqrt{\frac{1}{2n}\log(\frac{2}{\alpha})} $ with $\alpha\in (0,1)$ . $d\left(a, E\right)\equiv \inf \left\{|a - y| : y \in E\right\}$ . Assume that $A$ is non-empty. I would like to prove (or, disprove) that $d_H(A_n, A)\rightarrow 0$ as $n\rightarrow \infty$ , where $d_H$ is the Hausdorff distance . Comments: I know that the claim is wrong when $\delta_n\equiv 0$ for each $n$ . See here for a counterexample. More generally, I am trying to understand under which co
As Amit said in his comment, it seems like you need to demand more, even if $A$ is non-empty. For a simple example, take $\ell(x)=\max\{x-0.01,0\}$ and $u(x)=\min\{x+0.01,1\}$ with $X=[0,1]$ . Then $A_n= \{ x: \vert x-p_n\vert \leq \delta_n+0.01 \}$ if $0.1<p_n<0.9$ . If you take $p_n= \frac{1}{2}+(-1)^n \cdot 0.01$ , then $A_{2n}=[\frac{1}{2}-\delta_n, \frac{1}{2}+0.02 +\delta_n]$ and $A_{2n+1}=[\frac{1}{2}-\delta_n-0.02, \frac{1}{2} +\delta_n]$ . If I am not mistaken, then $A=\{ \frac{1}{2} \}$ , $A_{2n}\supseteq[\frac{1}{2}, \frac{1}{2}+0.02]$ and $A_{2n+1}\supseteq [\frac{1}{2}-0.02, \frac{1}{2}]$ , which means that $A_n$ is not a Cauchy sequence with respect to the Hausdorff metric. So I think if $p_n$ does not converge and $\lim_{n\to \infty} \ell(p_n) \neq \lim_{n\to \infty} u(p_n)$ , you can always have this case. To avoid pathologies, you should probably assume that $u$ and $\ell$ are continuous and maybe also that $p_n\to p_\infty \in [0,1]$ . Without assuming these there are probably
|sequences-and-series|general-topology|limits|elementary-set-theory|hausdorff-distance|
1
Figuring out the right proportion that resolves to a quadratic equation
Question: Two travelers A and B set out from two places, Γ and Δ, and at the same time; A from Γ with a design to pass through Δ, and B from Δ to travel the same way: after A had overtaken B, they found on computing their travels that they had both together traveled 30 miles; that A had passed through Δ four days before, and that B, at his rate of traveling, was a journey of nine days distant from Γ. Required the distance between the places Γ and Δ. Answer: 6 miles. My thoughts: Considering previous problems in this chapter of the book, I need to make the right proportion that would lead me to a quadratic equation where $x=6$ is the required distance between the places. I found that $x=30-2B$ , where $B$ is the distance traveler B traveled. Then, $B=\frac{30-x}{2}$ miles. $A$ traveled $30 - \frac{30-x}{2}=\frac{30+x}{2}$ miles. The speed of $B=\frac{30+x}{2\times9}=\frac{30+x}{18}$ , the speed of $A=\frac{30-x}{2\times4}=\frac{30-x}{8}$ At this point I need to make the right proportion
Using your notation, we have that $$x=B\cdot \left(9-\frac{30}{A+B}\right),\tag1$$ because $\frac{30}{A+B}$ is the time they travelled, so $9-\frac{30}{A+B}$ days were spent by the second traveler to get from $\Gamma$ to $\Delta$ . Putting $A=\frac{30-x}8$ and $B=\frac{30+x}{18}$ in $(1)$ , we get $$x= \frac{30+x}{18}\cdot \left(9-\frac{30}{\frac{30-x}8+ \frac{30+x}{18}}\right).$$ I'll leave it to you to find the roots of this equation; there are two of them: $6$ and $150$ . Another question for you: why isn't $150$ a second solution to the problem? All that being said, I would solve the problem differently. I would denote the time from the start till the meeting as $t$ . Then the following three equations hold: $$\begin{cases} t(A+B)=30\\ 4A=tB\\ A(t-4)=B(9-t)\\ \end{cases}$$ Three equations, three unknowns. Here is the answer: Wolfram Alpha . Again, there are two solutions. Why is only one of them valid? Then $x=A(t-4)=6.$
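Both roots can be verified numerically, along with the reason $150$ must be rejected; a quick sketch, with $A$ and $B$ denoting the speeds as above:

```python
def residual(x):
    # speeds as functions of the distance x between Γ and Δ
    A = (30 - x) / 8       # A's speed, miles per day
    B = (30 + x) / 18      # B's speed, miles per day
    t = 30 / (A + B)       # days until they meet
    return B * (9 - t) - x

assert abs(residual(6)) < 1e-9     # the valid answer
assert abs(residual(150)) < 1e-9   # also a root of the equation...

# ...but x = 150 must be rejected: A's speed (30-150)/8 is negative
assert (30 - 150) / 8 < 0
```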
|algebra-precalculus|
1
A problem about the incircle and excircle
I have tried many ways, including denoting a tangent point $F'$ and proving $F'\equiv F$ , or denoting the points at which the incircle touches the sides, but I can't go further. Can someone help me with this problem? The incircle (its center is point $I$ ) of $\Delta ABC$ touches the side $AB$ at point $D$ . Point $E$ is the reflection of point $D$ with respect to point $I$ . Let $AB \cap CE = \{F\}$ . Prove that $F$ is the tangent point of the side $AB$ and the $C$ -excircle of $\Delta ABC$ .
Are you sure about your statement? I've tried to draw the situation, and in my case the point $F$ is located somewhere between points $A$ and $B$ , so it can't be tangent to a circle containing those points.
|geometry|
0
the existence of a holomorphic function
Let $f$ be holomorphic on $\mathbb{C}$ . Prove that there exists a holomorphic function $g$ on $\mathbb{C}$ s.t. $g'=f$ . My attempt isn't much, but at least I have tried: If $f$ is bounded on $\mathbb{C}$ , then according to Liouville's theorem, $f$ is constant. Let $f=a\in\mathbb{C}$ . We can choose $g(z)=az$ as a holomorphic function on $\mathbb{C}$ s.t. $g'=f$ . I'm stuck on the case where $f$ is not bounded on $\mathbb{C}$ . Could someone help me? Thanks in advance.
Every entire function $f$ admits a power series expansion $$f(z)=\sum_{n=0}^\infty a_nz^n, $$ which holds for all $z\in\mathbb{C}$ . Thus $$g(z)=\sum_{n=0}^\infty\frac{a_n}{n+1}z^{n+1} $$ will be the desired function: since $|a_n|/(n+1)\le |a_n|$ , this series also has infinite radius of convergence, so $g$ is entire, and term-by-term differentiation gives $g'=f$ .
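As a sketch, the construction can be checked numerically on a truncated series, e.g. for $f(z)=e^z$ (so $a_n=1/n!$): differentiating the built antiderivative should recover $f$.

```python
import math

N = 30
a = [1 / math.factorial(n) for n in range(N)]   # f(z) = e^z, a_n = 1/n!
b = [a[n] / (n + 1) for n in range(N)]          # coefficients of g

def g(z):
    # g(z) = sum_n a_n/(n+1) * z^(n+1), truncated at N terms
    return sum(b[n] * z ** (n + 1) for n in range(N))

# numerical derivative of g at z = 1 should match f(1) = e
h = 1e-6
approx = (g(1 + h) - g(1 - h)) / (2 * h)
assert abs(approx - math.e) < 1e-5
```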
|complex-analysis|
1
What is a complete geometric interpretation of the eigendecomposition of matrices?
For reference, eigendecomposition of a matrix $A$ $\in R^{n \times n}$ is defined as: $A = P \Lambda P^{-1}$ where $P$ is a matrix whose columns are the eigenvectors of $A$ , and $\Lambda$ is a diagonal matrix whose entries are the corresponding eigenvalues. This is only possible when $P$ is invertible, which is true when the eigenvectors of $A$ form a basis of $R^n$ . What is a complete geometric interpretation of the eigendecomposition of matrix $A$ ? Many interpretations that I see on the internet and in lectures narrow down on a special case where $A$ is symmetric. I understand that when $A$ is symmetric, its eigenvectors form an orthonormal basis of the space, so $P$ is an orthogonal matrix signifying rotations (maybe flips too). And in this special case, $P \Lambda P^{-1}$ can be interpreted as a transformation where: You first rotate the space ( $P^{-1}$ ). Then, you scale the rotated space along the axes ( $\Lambda$ ). Finally, you rotate it back ( $P$ ). But eigendecomposition
There are basically two interpretations of matrices. Either as transformation of vectors or as a change of basis . Your description for symmetric matrices ("rotation of space") corresponds to the latter. For a symmetric matrix $A$ the eigenvalue decomposition yields rotations ( $P$ ) and scalings ( $\Lambda$ ) of an arbitrary vector $x$ . For an arbitrary matrix, it consists of rotation-and-stretching and scalings. Basis change In terms of a basis change, we can interpret $A x = P \Lambda P^{-1} x$ as follows: $\hat x = P^{-1} x$ : Express $x= \sum_i x_i e_i$ in the eigenbasis { $p_i$ }, i.e. find the $\hat x_i$ which yield $x=\sum_i \hat x_i p_i$ . This can also be understood as a transformation of the underlying space. Since the eigenbasis need not be orthonormal (e.g. $p_1=(1,0), p_2=(1,1)$ could be a valid basis), the underlying space cannot only be rotated, but must also be stretched in some directions. $\tilde x = \Lambda \hat x$ : Scaling along the $p_i$ . $P \tilde x$ : Transform
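The three steps can be traced concretely with the non-orthogonal eigenbasis $p_1=(1,0)$, $p_2=(1,1)$ mentioned above; a small NumPy sketch (the eigenvalues $2$ and $3$ are arbitrary choices for illustration):

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # columns are the eigenvectors p1, p2
L = np.diag([2.0, 3.0])             # assumed eigenvalues
A = P @ L @ np.linalg.inv(P)        # the matrix being decomposed

x = np.array([5.0, 2.0])
x_hat = np.linalg.solve(P, x)       # coordinates of x in the eigenbasis
x_tilde = L @ x_hat                 # scale along each eigenvector
assert np.allclose(P @ x_tilde, A @ x)   # back to standard coordinates
```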
|linear-algebra|matrices|eigenvalues-eigenvectors|matrix-decomposition|
0
A problem about the incircle and excircle
I have tried many ways, including denoting a tangent point $F'$ and proving $F'\equiv F$ , or denoting the points at which the incircle touches the sides, but I can't go further. Can someone help me with this problem? The incircle (its center is point $I$ ) of $\Delta ABC$ touches the side $AB$ at point $D$ . Point $E$ is the reflection of point $D$ with respect to point $I$ . Let $AB \cap CE = \{F\}$ . Prove that $F$ is the tangent point of the side $AB$ and the $C$ -excircle of $\Delta ABC$ .
Let the center of $C$ -excircle be $J$ . Then $C$ , $I$ , $J$ all belong to one line (the bisector of angle $C$ ). The point $E$ , being symmetric to $D$ with respect to $I$ , belongs to the incircle. Consider the homothety with center in $C$ and coefficient equal to the ratio of $C$ -excircle and incircle radii. The tangent line to the incircle, going through $E$ , becomes $AB$ . So $E$ becomes $F$ . The incircle becomes the $C$ -excircle. Hence $F$ is indeed the tangent point of the $C$ -excircle. —- The same reasoning in terms of similar triangles. We can see that $CI:CJ = r:R$ , where $R$ is the radius of the $C$ -excircle and $r$ the radius of the incircle. Just drop perpendiculars $II’$ and $JJ’$ from $I$ and $J$ to the line $CB$ and consider similar triangles $CII’$ and $CJJ’$ . As we already know, $E$ is the point of the incircle. Let’s draw a tangent line through it. $B’$ will be the intersection of it and $BC$ . The following holds: $CB’:CB = CI:CJ$ . Indeed, $FB \parallel EB
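The conclusion can also be checked numerically on an assumed triangle, using the standard tangent-length formula $AF=s-b$ for the $C$-excircle ($s$ the semiperimeter, $b=CA$); a sketch:

```python
import math

# an assumed scalene triangle with AB on the x-axis
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
s = (a + b + c) / 2
r = math.sqrt((s - a) * (s - b) * (s - c) / s)   # inradius, via Heron

D = (s - a, 0.0)          # incircle touch point on AB (AD = s - a)
E = (s - a, 2 * r)        # reflection of D over the incenter I = (s-a, r)

# F = intersection of line CE with AB (the x-axis)
t = C[1] / (C[1] - E[1])
F = (C[0] + t * (E[0] - C[0]), 0.0)

# the C-excircle touches AB at distance s - b from A
assert abs(F[0] - (s - b)) < 1e-9
```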
|geometry|
1
Translation morphism of Algebraic Groups .
My question is about a point on page 18 of J.S.Milne's "Algebraic Groups- The theory of group schemes of finite type over a field." and specifically about the fact that the translation map induced on an algebraic group by a rational point is an isomorphism. To be more precise the statement is : Let $(G,m)$ be an algebraic group over k. For each $a∈G(k)$ , there is a translation map $l_a:G\stackrel{\sim}{\to}\{a\}\times G\to G$ where the second map is the group multiplication $m$ of the group scheme G. For $a,b\in G(k)$ $l_a\circ l_b=l_{ab}$ . From this "composition law " argument and the identity element property of the distinguished rational point the fact that the translation is an isomorphism follows easily. While this is very clear intuitively, I struggle to show it using the categorical definition of group schemes, i.e the commutativity of the diagrams representing associativity, identity element and inversion. I would really appreciate any help !
$\DeclareMathOperator{\id}{id}$ Associativity is expressed as the commutativity of the diagram $$\require{AMScd} \begin{CD} G \times G \times G @>{m \times \id}>> G \times G\\ @V\id \times mVV @VVmV \\ G \times G @>{m}>> G \end{CD}$$ Now if you restrict to $\{a\}\times \{b\} \times G$ , you get $$\require{AMScd} \begin{CD} \{a\} \times \{b\} \times G @>{m \times \id}>> \{ab\} \times G\\ @V\id \times mVV @VVmV \\ \{a\} \times G @>{m}>> G \end{CD}$$ Identifying $G \cong \{a\}\times\{b\}\times G$ as before, the "upper way" in this diagram is $m \circ (m \times \id) = l_{ab}$ , whereas the "lower way" is $m \circ (\id \times m) = l_a \circ l_b$ . By commutativity you get $$l_{ab} = l_a \circ l_b.$$
|algebraic-geometry|group-schemes|
1
summing binomial coefficiens related
If $s_n=\sum_{k=0}^{n}(-4)^k\binom{n+k}{2k}$ , how do I prove $s_{n+1}+2s_n+s_{n-1}=0$ ? One of my students had this question on his exam. Honestly speaking, I couldn't get a single idea of how to even start. I know some strategies for finding binomial sums, but none of them helped. It would be great if someone could help.
Let us prove a stronger result: for $n\ge1$ $$ s_n=\sum_{k=0}^{n}(-4)^k\binom{n+k}{2k}=(-1)^n(2n+1),\qquad c_n=\sum_{k=0}^{n}(-4)^k\binom{n+k-1}{2k-1}=(-1)^n 4n.\tag1 $$ Indeed for $n=1$ the formula $(1)$ is valid, as can be easily checked. Assume it is valid for some $n\ge1$ . Then it is valid for $n+1$ as well: $$ \begin{align} c_{n+1}&=\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1}\\ &=\sum_{k=0}^{n+1}(-4)^k\left[\binom{n+k-1}{2k-1}+\binom{n+k-1}{2k-2}\right]\\ &=c_n-4s_n\\& =(-1)^n4n-(-1)^{n}4(2n+1)\\ &=(-1)^{n+1}4(n+1);\\ \vphantom{X}\\ s_{n+1}&=\sum_{k=0}^{n+1}(-4)^k\binom{n+1+k}{2k}\\ &=\sum_{k=0}^{n+1}(-4)^k\left[\binom{n+k}{2k}+\binom{n+k}{2k-1}\right]\\ &=s_n+c_{n+1}\\& =(-1)^n(2n+1)+(-1)^{n+1}4(n+1)\\ &=(-1)^{n+1}(2(n+1)+1). \end{align}$$ Thus by induction the formula $(1)$ is proved. The original statement is a trivial corollary.
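Both the closed form $(1)$ and the original recurrence can be verified numerically for small $n$; a quick sketch:

```python
from math import comb

def s(n):
    return sum((-4) ** k * comb(n + k, 2 * k) for k in range(n + 1))

for n in range(1, 15):
    assert s(n) == (-1) ** n * (2 * n + 1)          # closed form (1)

for n in range(1, 15):
    assert s(n + 1) + 2 * s(n) + s(n - 1) == 0      # original identity
```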
|algebra-precalculus|
0
Simple way to show that standard basis on $\ell^p$ is weakly pre-compact?
Consider the sequence $e_n = (0,0,\ldots,0,1,0,\ldots)$ in $\ell^1$ , which is weakly convergent to zero in $\ell^p$ for all $1\leq p \leq \infty.$ It is then obvious, from the theorem that sequences completely characterise weak convergence, that $\{e_n\}\cup\{ 0\}$ is weakly compact. However, I remember somehow that this non-trivial theorem is not needed, and there is a more obvious answer (a proof that $\{e_n\}\cup\{ 0\}$ is weakly compact). Does anyone have a solution that uses the weak topology directly, without using sequential characterisations?
Let $S=\{e_n\}_{n=1}^\infty$ . Then $S$ is weakly closed and weakly non-compact in $\ell^1$ . Let $F_n\in(\ell^1)^*$ be the functional $F_n(x)=x_n$ and $F_\infty\in(\ell^1)^*$ be the functional $F_\infty(x)=\sum_{k=1}^\infty x_k$ . They are all bounded $$ \begin{align} |F_n(x)|&=|x_n|\le\sum_{k=1}^\infty|x_k|=\|x\|_1\\ |F_\infty(x)|&=\left|\sum_{k=1}^\infty x_k\right|\le\sum_{k=1}^\infty|x_k|=\|x\|_1 \end{align} $$ Since $(\ell^1)^*$ separates points on $\ell_1$ , the weak topology on $\ell^1$ is $\text{T}_1$ (i.e. every point is weakly-closed.) Let $a\in\ell^1$ . If $a\ne 0$ there is some $m$ such that $a_m\ne 0$ . The set $U=F_m^{-1}(\Bbb{C}\setminus\{0\})$ is weakly-open in $\ell^1$ , $a\in U$ and for all $n\ne m$ , $e_n\not\in U$ because $F_m(e_n)=0$ . Therefore $U\cap S\subseteq \{e_m\}$ . If $a=e_m$ then $U\cap S=\{a\}=\{e_m\}$ showing that $a=e_m$ is a weakly isolated point of $S$ . If $a\ne e_m$ then $V=U\cap(\ell^1\setminus\{e_m\})$ is a weak open neighborhood of $a$ such that
|functional-analysis|alternative-proof|weak-topology|
0
Assume that $\lim_{x \to 0} f(x) = L.$ Prove that $\lim_{x \to 0} f(x^3) = L.$ Is this proof sufficient?
Assume that $\lim_{x \to 0} f(x) = L.$ Prove that $\lim_{x \to 0} f(x^3) = L$ . Here's my attempt using an $\epsilon-\delta$ argument: Since $\lim_{x\to0}f(x) = L$ , we have that $\forall \epsilon > 0, \exists \delta: 0<|x|<\delta \implies |f(x)-L|<\epsilon$ . Let $x = a^3$ and we have $\forall \epsilon > 0, \exists \delta: 0<|a^3|<\delta \implies |f(a^3)-L|<\epsilon$ . Thus, there exists $\delta' = \sqrt[3]{\delta}$ such that $0<|a|<\delta' \implies 0<|a^3|<\delta \implies |f(a^3)-L|<\epsilon$ for all $\epsilon > 0$ . It follows that $\lim_{x\to0} f(x^3) = \lim_{a\to0} f(a^3) = L.$ Here's my other attempt that I'm not sure is acceptable. As $x$ tends to $0$ , $x^3$ tends to $0$ , so $\lim_{x\to0} f(x^3) = \lim_{x^3\to0} f(x^3)$ . Let $a = x^3$ and we have $\lim_{x^3\to0} f(x^3) = \lim_{a\to0} f(a) = L$ , so $\lim_{x\to0} f(x^3) = L$ . For some reason, I'm not confident about my 2nd argument. Please let me know if it's acceptable. Thanks!
Your first argument is good, but perhaps a more 'elegant' $\delta'$ to choose, as @StéphaneJaouen points out, is $\delta' = \min \{1, \delta\}$ , where $\delta > 0$ is chosen such that $\forall x$ : $0<|x|<\delta \implies |f(x)-L|<\varepsilon$ . Then when $0<|x|<\delta'$ , we have both $|x|<1$ (hence $|x^3| \leq |x|$ ) and $|x|<\delta$ , and so $$0<|x^3|\leq|x|<\delta,$$ which you know implies that $|f(x^3) - L|<\varepsilon$ . This is something that often occurs in an $\varepsilon$ - $\delta$ proof: if you know something is true about 'small enough' $|x-a|$ (like less than 1) you can force $\delta'$ to be at most $1$ using such a minimum. Do not be afraid to use it! The second argument works if you understand why. If $\lim_{a \to 0} g(a) = 0$ , and $\lim_{x \to 0} f(x) = L$ , then you should be able to prove using the $\epsilon$ - $\delta$ definitions that $\lim_{x \to 0} f(g(x)) = L$ . Try this! (Here we assume $g(a) \neq 0$ for all $a \neq 0$ sufficiently small.)
|calculus|limits|
1
Difficulty understanding the conditional expectation
I'm currently having a confusion dealing with the conditional expectation. Let's recall the definition first: Let $X$ be an integrable function (or a random variable) defined on a probability space $(\Omega,\mathfrak{M},\mu)$ and let $\mathfrak{A}$ be a $\sigma$ -subalgebra of $\mathfrak{M}$ . The conditional expectation of $X$ on $\mathfrak{A}$ is then defined to be an $\mathfrak{A}$ -measurable function $\mathbb{E}(X|\mathfrak{A})$ which satisfies the identity $$\int_AXd\mu=\int_A\mathbb{E}(X|\mathfrak{A})d\mu $$ for any $A\in\mathfrak{A}$ . The existence and uniqueness follow from the Radon-Nikodym theorem. Two special cases are as follows: (1) If $X=\chi_{\omega}$ for some $\omega\in\mathfrak{M}$ , $\mathbb{E}(\chi_{\omega}|\mathfrak{A})$ is called the conditional probability of $\omega$ (relative to $\mathfrak{A}$ ). (2) If $\mathfrak{A}$ is the $\sigma$ -algebra generated by a function $Y$ , we write $\mathbb{E}(X|\mathfrak{A})=\mathbb{E}(X|Y)$ . Q1. Why is the conditional "proba
First, since conditional probabilities are conditional expectations of indicator functions, you can forget about that concept as a separate thing. Moving on: for intuition, it is helpful to think about finite probability spaces with no nonempty null sets. This makes some statements that are "morally true" in the general case be literally true. Consider for example $\Omega=\{ 1,2,3 \}$ with the underlying $\sigma$ -algebra being $\mathcal{F}=P(\Omega)$ and then think about $\mathcal{G}=\sigma(\{ \{ 1 \} \})=\{ \Omega,\emptyset,\{ 1 \},\{ 2,3 \} \}$ . Say $P(\{ i \})=p_i>0$ for all $i$ . Conditional expectation of some rv $X$ with respect to this $\mathcal{G}$ intuitively means you have an instrument that can distinguish whether $\omega=1$ or not, but it can't tell you whether $\omega=2$ or $\omega=3$ . Then you're asked to use your instrument to give a "best guess" of $X$ for each $\omega$ . This will depend on $\omega$ : if $\omega=1$ then it will be $X(1)$ but otherwise it should be somewh
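In this three-point example the "best guess" can be computed explicitly: on the atom $\{2,3\}$ it is the probability-weighted average of $X(2)$ and $X(3)$. A sketch, with assumed values for the $p_i$ and $X$:

```python
# Ω = {1, 2, 3}; G is generated by {1}, so its atoms are {1} and {2, 3}
p = {1: 0.2, 2: 0.3, 3: 0.5}      # assumed probabilities, p_i > 0
X = {1: 10.0, 2: 4.0, 3: 8.0}     # assumed random variable

def cond_exp(w):
    """E(X | G)(w): average of X over the atom containing w."""
    atom = {1} if w == 1 else {2, 3}
    mass = sum(p[v] for v in atom)
    return sum(p[v] * X[v] for v in atom) / mass

assert cond_exp(1) == X[1]               # on {1} the guess is X(1) itself
assert cond_exp(2) == cond_exp(3)        # G-measurable: constant on {2,3}
# defining property: ∫_A X dμ = ∫_A E(X|G) dμ for A = {2, 3}
assert abs(p[2]*X[2] + p[3]*X[3] - (p[2] + p[3]) * cond_exp(2)) < 1e-12
```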
|probability|probability-theory|conditional-probability|conditional-expectation|
1
How can I find numerically nice-to-compute upper limits of nCr?
How can I find nicely behaved functions which are easy to compute and which fit well to the upper limits of the 2-logarithm of the nCr function? What I am interested in in practice is to be able to model "how many binary digits (bits) will a digital representation of nCr(a,b) have?" A guess of 1 or 2 too many bits is acceptable, but a guess of 1 or 2 too few bits is not. So the problem is a bit asymmetric. Errors in one direction are catastrophically problematic (unable to represent the number) but errors in the other direction "only" cause an unnecessary extra percentage of bits stored. Actually calculating factorials will not be considered easy enough. On the other hand, the simplest of approximations like $nCr(a,b) \leq 2^a$ will be considered too sloppy. Own work: What I have tried on my own is to calculate the number of bits of all numbers, then fit low order polynomials to the bit count, then round the estimation to the closest integer. In the case of too few bits I have upda
From this answer : $$\binom{a}{b}\le \frac{1}{\sqrt{2 \pi }}\sqrt{\frac{1}{b}+\frac{1}{a-b}}\,\frac{a^{a} }{b^b\, (a-b)^{a-b} }$$ Hence $$\log_2 \binom{a}{b}\le a \log_2 a - b \log_2 b -(a-b) \log_2 (a-b) + \frac12 \log_2 \left(\frac{1}{b}+\frac{1}{a-b}\right)-1.32574$$ Or even tighter, using this : $$\log_2 \binom{a}{b}\le \left(a+\frac12\right) \log_2 a - \left(b+\frac12\right) \log_2 b - \left(a-b+\frac12\right) \log_2 (a-b) +\\+ \left( \frac{1}{12 a} - \frac{1}{12 b +1} - \frac{1}{12(a-b)+1}\right)\log_2(e) -\frac12 \log_2(2 \pi)$$
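Taking base-2 logarithms of the first displayed inequality gives $\log_2 \binom{a}{b}\le a \log_2 a - b \log_2 b -(a-b) \log_2 (a-b) + \frac12 \log_2 \left(\frac{1}{b}+\frac{1}{a-b}\right)-\frac12\log_2(2\pi)$. A sketch checking, over a modest range, that this never undercounts the true value (which is exactly what the bit-count application needs):

```python
from math import comb, log2, pi

def log2_binom_upper(a, b):
    """Upper bound on log2(C(a, b)), valid for 0 < b < a."""
    return (a * log2(a) - b * log2(b) - (a - b) * log2(a - b)
            + 0.5 * log2(1 / b + 1 / (a - b)) - 0.5 * log2(2 * pi))

for a in range(2, 200):
    for b in range(1, a):
        assert log2_binom_upper(a, b) >= log2(comb(a, b))
```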
|linear-algebra|combinatorics|computer-science|information-theory|
1
Understanding the proof of the uncomputability of the "productivity function"
I'm not following this proof of what Boolos, Burgess, and Jeffrey call the productivity function . The proof states that the value of the productivity function is not computable. They call it $s$ , and it computes a score for a $k$ -state Turing machine, $s(k)$ . They construct a new machine with one more state ( $k + 1$ states), that computes another value $t$ . But then they say the machine to compute $t$ only has $k$ states. This is confusing because it was just defined as having one more than $k$ states. It seems like the machine to compute $t$ should have $k+1$ states (by definition), which isn't a contradiction, and also doesn't prove anything. What am I not understanding? Consider a $k$ -state Turing machine, that is, a machine with $k$ states (not counting the halted state). Start it with input $k$ , that is, start it in its initial state on the leftmost of a block of $k$ strokes on an otherwise blank tape. If the machine never halts, or halts in nonstandard position, give it a scor
Maybe it helps to picture what is going on. First, let's define $[k]$ to be the tape/head configuration that is used to represent a number $k$ , which is a block of $k$ consecutive strokes on an otherwise blank tape, and with the read/write head on the leftmost stroke. Below is a picture of what $[5]$ looks like (I use $1$ for strokes and $0$ for blanks): With this representation convention, we can define what it means for a Turing-machine to compute some function. In particular, let's say that a Turing-machine computes a function $f(k)$ iff for any $k$ , the machine, when started on $[k]$ , eventually halts on $[f(k)]$ . OK, so now we'll do the proof by contradiction to prove that the scoring function $s(k)$ is not computable. So let's assume it is computable. This means that we have a Turing-machine $S$ that has the following input/output behavior: Note that we are not saying that machine $S$ has $k$ states (I think that's a point of confusion you have). It has some number of states,
|proof-explanation|computability|turing-machines|
1
Statistics problem (some trouble with understanding)
I got this problem in my home exercise at the University. I think I solved it, but I am not sure, that my solution is right. Can somebody check it? Let $A$ and $B$ be two events on a probability space $(\Omega, \mathbb{P})$ , with $\mathbb{P}(A)=\frac{1}{3}$ and $\mathbb{P}(B)=\frac{1}{4}.$ Determine the value of $\mathbb{P}(A \cap B^c)$ if $A$ and $B$ are disjoint. My answer : $\mathbb{P}(A \cap B^c)$ = $\mathbb{P}(A)=\frac{1}{3}$ , because since $A$ and $B$ are disjoint we have $A \cap B=\emptyset$ and if $x \in A$ then $x \in B^c$ . So that will mean $A \cap B^c=A$ and now we can apply probability on the both sides. Am I right?
The OP has the answer and is looking for verification. The question is also tagged set theory, so here is a verification using Venn diagrams. Draw a standard Venn diagram and note that "disjoint" means the intersection has probability $0$. Hence draw a second Venn diagram in which there is no intersection. Observe that $B^c$ totally covers $A$, hence $A \cap B^c = A$. Finally, draw the last diagram with the probability values inserted. [[ In case the outer area had a negative value ( $A$ and $B$ totalling more than $1$ ), the data would be inconsistent; luckily, that is not the case here. ]] Hence $P(A \cap B^c)=P(A)=1/3$
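A quick sanity check of the identity $P(A\cap B^c)=P(A)-P(A\cap B)$ with exact rationals (a sketch; the numbers come from the question):

```python
from fractions import Fraction

# P(A ∩ B^c) = P(A) - P(A ∩ B); disjointness forces P(A ∩ B) = 0.
p_a = Fraction(1, 3)
p_a_and_b = Fraction(0)       # A and B are disjoint

p_a_and_not_b = p_a - p_a_and_b
print(p_a_and_not_b)          # → 1/3
```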
|probability|elementary-set-theory|solution-verification|
1
How to construct a hexagon that has sides of two different specific lengths in the order ababab, with opposing sides parallel
I'm building a three-legged sculpture stand. I would like to construct a hexagon in which: Every other side is of equal specific lengths (Lengths of sides: a b a b a b ). Opposing sides are parallel. I hope the above summarises in words what I want to do. It should look approximately like this: But the short respectively long sides should be of specific lengths, in my case the short sides should be 45 and the long sides 280. How can I construct that? I am drawing in a cad-program and have constructed this in the way below, but I'm also interested in construction on paper. My research so far The angle sum of a hexagon is 720°, so each interior angle is 120°. In the cad-program I can easily draw the figure by starting with a line of length 45, then draw a connected line of length 280 at angle 120° to the first line and so on. But on paper, using a goniometer, this wouldn't be very exact.
One approach: Start with an equilateral triangle of side length $45+280+45$ and chop off 3 equilateral triangles of side length 45 from the vertices.
|geometry|geometric-construction|
1
Are the sylow subgroups of multiplicative group of a finite field unique?
Let $\mathbb{F}_q$ be a finite field with $q=p^w$ elements for some prime $p$ . Let $l$ be another prime which divides $q-1$ . My question is if there is a unique Sylow $l$ -subgroup of $\mathbb{F}_q^*$ . Here is my proof attempt: Write $q-1$ as $q-1=l^el'$ where $\gcd(l,l')=1$ . Sylow Theorem 1 guarantees that there exists a Sylow $l$ -subgroup of order $l^e$ . Furthermore, it is cyclic and generated by an element $a$ of order $l^e$ because $\mathbb{F}_q^*$ is cyclic. Sylow Theorem 2 says that all the Sylow $l$ -subgroups are conjugate. Since $\mathbb{F}_q^*$ is abelian, they are all equal, hence there is only one.
You are correct.
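For a concrete instance, the claim can be checked computationally in $\mathbb{F}_{13}^*$ (a sketch; $q=13$, $l=3$, $e=1$ are illustrative choices):

```python
# In F_13^*, q - 1 = 12 = 2^2 * 3.  Any Sylow 3-subgroup consists of
# elements of order dividing 3, i.e. solutions of x^3 = 1; counting them
# shows there is room for exactly one subgroup of order 3.
p, l, e = 13, 3, 1
roots = sorted(x for x in range(1, p) if pow(x, l**e, p) == 1)
print(roots)                   # → [1, 3, 9]
assert len(roots) == l**e      # a unique Sylow 3-subgroup
# and it really is closed under multiplication:
assert all((x * y) % p in roots for x in roots for y in roots)
```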
|solution-verification|finite-fields|
1
Prove that $4(a+b+c)\ge 3(abc+3); \forall a,b,c > 0: a^2+b^2+c^2=a^2b^2+b^2c^2+c^2a^2.$
Prove that $$4(a+b+c)\ge 3(abc+3); \forall a,b,c> 0: a^2+b^2+c^2=a^2b^2+b^2c^2+c^2a^2.$$ I've tried $pqr$ method. Let $p=a+b+c;q=ab+bc+ca;r=abc.$ The condition gives $$r=\frac{q^2+2q-p^2}{2p}$$ Now, we rewrite the inequality as $$4p\ge 9+3\cdot \frac{q^2+2q-p^2}{2p}$$ which is reduced to $$11p^2\ge 18p+3q^2+6q $$ I didn't know how to continue. Could you please help me? Thank you
We use the pqr method. Let $p = a + b + c, q = ab + bc + ca, r = abc$ . The condition is written as $$p^2 - 2q = q^2 - 2pr,$$ or $$r = \frac{q^2 + 2q - p^2}{2p}. \tag{1}$$ We need to prove that $$4p \ge 3(r + 3),$$ or (using (1)) $$11p^2 - 18p - 3q^2 - 6q \ge 0. \tag{2}$$ Using (1) and $q^2 \ge 3pr$ , we have $q^2 \ge 3p \cdot \frac{q^2 + 2q - p^2}{2p}$ which results in $$p \ge \sqrt{\frac{q^2 + 6q}{3}}. \tag{3}$$ Using (1) and $r \ge \frac{4pq - p^3}{9}$ (degree-three Schur inequality), we have $\frac{q^2 + 2q - p^2}{2p} \ge \frac{4pq - p^3}{9}$ which results in $$2p^4 - (8q + 9)p^2 + 9q^2 + 18q \ge 0. \tag{4}$$ We split into two cases. Case 1. $q \ge 3$ . Using (2) and (3), it suffices to prove that $$11\left(\sqrt{\frac{q^2 + 6q}{3}}\right)^2 - 18\cdot \sqrt{\frac{q^2 + 6q}{3}} - 3q^2 - 6q \ge 0$$ which is true. ( Note : We use the fact that $x \mapsto 11x^2 - 18x$ is strictly increasing on $x \ge 2$ .) Case 2. $q < 3$ . From (3) and (4), we have $$p \ge \sqrt{2q + \frac94 + \frac14\sqrt{81
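Before trusting the case analysis, the claimed inequality can be stress-tested numerically on the constraint surface (a sketch: given $a,b$, the constraint $a^2+b^2+c^2=a^2b^2+b^2c^2+c^2a^2$ is solved for $c^2$, and pairs with no positive real $c$ are skipped):

```python
import math
import random

# Spot-check 4(a+b+c) >= 3(abc+3) on the constraint surface.  From the
# constraint, c^2 (1 - a^2 - b^2) = a^2 b^2 - a^2 - b^2.
random.seed(1)
checked = 0
while checked < 1000:
    a, b = random.uniform(0.1, 3.0), random.uniform(0.1, 3.0)
    num = a*a*b*b - a*a - b*b
    den = 1.0 - a*a - b*b
    if den == 0.0 or num / den <= 0.0:
        continue                      # no positive real c for this (a, b)
    c = math.sqrt(num / den)
    assert 4*(a + b + c) >= 3*(a*b*c + 3) - 1e-9
    checked += 1
```

Equality is attained at $a=b=c=1$, which the tolerance accounts for.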
|inequality|
0
Can the first N eigenvalues have the same real value?
Essentially the subject line: Suppose there is a matrix of shape $[M,M]$ where $M \ge N$ . We perform eigen decomposition and observe that the leading $N$ eigenvalues have the same, real value. Is this hypothetical scenario possible? In my anecdotal experience, the leading eigenvector (largest eigenvalue) dominates and the effect of trailing eigenvectors (lesser eigenvalues) vanishes. And this seems sensible; if you performed some transformation on an arbitrary vector with some matrix once or twice, you'd probably observe the effects of all eigenvectors to some degree. But if this transformation was recursively applied an infinite number of times, the leading eigenvector would dominate the others in the long run. (Which is why the possibility of equal & maximum eigenvalues is so interesting to me.) I suspect the answer is "no" (perhaps equal if imaginary.)
The answer is yes. Take $I$ to be the identity matrix; then all eigenvalues of $I$ are equal to 1. You can also force the bound $N$ to be tight: simply take the diagonal matrix $\operatorname{diag}(1, \ldots, 1, 1/2, \ldots, 1/2)$ where there are $N$ entries of value 1 and $M - N$ entries of value $1 / 2$ .
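The power-iteration intuition from the question can be illustrated on this diagonal example in pure Python (a sketch): with tied leading eigenvalues, the iterates converge to the leading eigenspace, but not to one distinguished eigenvector inside it.

```python
# Power iteration on diag(1, 1, 1/2): the two tied leading eigenvalues
# mean the iterates converge to the *plane* spanned by the first two
# coordinates, and the initial mix of those two coordinates is frozen.
v = [1.0, 2.0, 3.0]
eigs = [1.0, 1.0, 0.5]
for _ in range(100):
    v = [lam * x for lam, x in zip(eigs, v)]
    norm = max(abs(x) for x in v)
    v = [x / norm for x in v]

assert abs(v[2]) < 1e-12                 # trailing eigendirection dies out
assert abs(v[0] / v[1] - 0.5) < 1e-12    # the 1:2 start ratio survives
```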
|eigenvalues-eigenvectors|
1
A sort of extension of Cauchy-Schwarz inequality?
Let $a_{i}$ and $b_{i}$ , $i = 1,\dotsb,n$ be real numbers. Denote $S_{pq}:=\sum_{i=1}^{n}a_{i}^{p}b_{i}^{q}$ . I want to show that, $$ (S_{20}S_{02} - S_{11}^2)S_{00} - (S_{10}S_{02} - S_{01}S_{11})S_{10} + (S_{10}S_{11} - S_{01}S_{20})S_{01} \geq 0,$$ and find the condition on $a$ and $b$ for this inequality to become an equality. I thought about it in several ways, and one of them is that the left-hand side(LHS) of the inequality is the determinant of the matrix defined below: $$ \begin{bmatrix} S_{00} & S_{10} & S_{01} \\ S_{10} & S_{20} & S_{11} \\ S_{01} & S_{11} & S_{02} \end{bmatrix} $$ Also, I have a mere feeling that it is somewhat connected to Cauchy-Schwarz inequality. A high-dimensional version of it, maybe? In addition, I found out that if $n=2$ , then (LHS) is always 0. This made me think of typical version of Cauchy-Schwarz, where $$n\sum_{i=1}^{n}a_{i}^{2} - (\sum_{i=1}^{n}a_{i})^{2} \geq 0 $$ Here, if $n=1$ then it becomes an equality. This is one of the reasons that
We have $$ S = \begin{pmatrix} S_{00} & S_{10} & S_{01} \\ S_{10} & S_{20} & S_{11} \\ S_{01} & S_{11} & S_{02} \end{pmatrix} = A^T A $$ where $A$ is the $n \times 3$ matrix $$ A = \begin{pmatrix} 1 & a_1 & b_1 \\ 1 & a_2 & b_2 \\ \vdots & \vdots & \vdots \\ 1 & a_n & b_n \end{pmatrix} \, . $$ It follows that $$ x^T S x = \Vert A x \Vert^2 \ge 0 $$ for all $x \in \Bbb R^3$ , i.e. $S$ is positive semidefinite. It follows that the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $S$ are (real and) nonnegative and $\det(S) = \lambda_1\lambda_2\lambda_3 \ge 0$ . If the columns of $A$ are linearly dependent then $Sx = A^TAx = 0$ for some nonzero vector $x$ , so that $0$ is an eigenvalue of $S$ and $\det(S) = 0$ . This is always the case if $n \le 2$ . If the columns of $A$ are linearly independent then $x^T S x > 0$ for all nonzero vectors $x$ , so that $S$ is positive definite and $\det(S) > 0$ .
|determinant|cauchy-schwarz-inequality|positive-semidefinite|
1
Fourier Transform Duality Theorem scaling problem?
From the duality theorem if: $$ FT(x(t)) = X(w) $$ Then $$ FT(X(t)) = 2\pi x(-w) $$ If I choose $X(t) = 1$ , then $X(w) = 1$ thus $x(t) = \delta(t)$ . Now plugging back into the second equation: $$ FT(1) = 2\pi \delta(-w) = 2\pi \delta(w) $$ Which is wrong, the general shape of the result is correct, indeed a constant in time domain will be an impulse at zero in frequency domain, but why is it scaled by $2\pi$ ? this $2\pi$ has to be removed somehow. Now I usually use the duality theorem for finding inverse fourier transform, but I was trying to find a way to find fourier transform of $sinc(t)$ and I expect it to be a square pulse but I can't get the scaling right.
I am now taking a communications course and have successfully got to know the issue, simply speaking Fourier Transform is defined as $$ G(f) = \int_{-\infty}^{\infty} g(t) e^{-j2\pi ft} \,dt $$ Which can be re-written in terms of $\omega$ as following $$ G(\omega) = G(j\omega) = G(e^{j\omega}) = \int_{-\infty}^{\infty} g(t) e^{-j\omega t} \,dt $$ Now the Inverse-Fourier Transform is defined as $$ g(t) = \int_{-\infty}^{\infty} G(f) e^{j2\pi ft} \,df $$ And from a quite simple observation by adjusting exponential to have a negative power and swapping variables $t$ and $f$ $$ g(-t) = \int_{-\infty}^{\infty} G(f) e^{-j2\pi ft} \,df $$ $$ g(-f) = \int_{-\infty}^{\infty} G(t) e^{-j2\pi ft} \,dt $$ This in a simple form means $$ \text{If} ~ g(t)\Leftrightarrow G(f) ~ \text{, then} ~ G(t) \Leftrightarrow g(-f) $$ Now in my example I choose $G(t) = 1$ which by comparing to the following and plugging back into the theorem means $$ g(t) = \delta (t) ~\text{then} ~ G(f) = 1 \\ G(t) = 1 ~ \text{th
|fourier-transform|
0
Maximization problem
I've been trying to solve the following problem from Stewart's Calculus Textbook for a while without any success. My answer makes sense, but I'm looking for a way to solve it analytically. The problem concerns a pulley that is attached to the ceiling of a room at a point C by a rope of length r . At another point B on the ceiling, at a distance d from C (where d > r ), a rope of length l is attached and passed through the pulley at F and connected to a weight W . The weight is released and comes to rest at its equilibrium position D . This happens when the distance | ED | is maximized. Show that when the system reaches equilibrium, the value of x is: $$\frac{r}{4d}(r+\sqrt{r^2+8d^2})$$ Here is what I've done. First, I expressed | DE | as a function of x $$|DE|(x)={a}_{2}+{a}_{3}=l-{a}_{1}+\sqrt{{r}^{2}-{x}^{2}}=l-\sqrt{{a}_{3}^2+y^2}+\sqrt{r^2-x^2}$$ from what follows that $$|DE|(x)=l-\sqrt{r^2+d^2-2xd}+\sqrt{r^2-x^2}$$ defined for $$0\leq x \leq r$$ ...and it works since $$|DE|(0)=l+r
From $$2dx^3-(r^2+2d^2)x^2+r^2d^2=0$$ we can proceed as follows: $$2d(x-d)x^2 = r^2(x^2-d^2)$$ $$2d(x-d)x^2 = r^2(x+d)(x-d)$$ $$(x-d)[2dx^2 - r^2(x+d)] = 0$$ The solution $x = d$ is excluded by the description of the problem: $d > r$ and obviously $r \geq x$ , so $x < d$ . We remain with the quadratic equation $$2dx^2 - r^2x - r^2d = 0$$ whose solutions are $$x_1 = \frac{r^2 - r\sqrt{r^2 + 8d^2}}{4d}$$ $$x_2 = \frac{r^2 + r\sqrt{r^2 + 8d^2}}{4d}$$ Only $x_2$ is greater than $0$ , so this is the solution as long as $|DE|''(x_2) < 0$ . The related computations are tedious and I omit them, assuming that the author is right.
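The closed form can also be confirmed by brute-force maximization of $|DE|$ (a sketch; $r=1$, $d=2$, $l=10$ are illustrative values satisfying $d>r$):

```python
import math

r, d, l = 1.0, 2.0, 10.0   # illustrative values with d > r

def DE(x):
    # |DE|(x) = l - sqrt(r^2 + d^2 - 2xd) + sqrt(r^2 - x^2), from the question
    return l - math.sqrt(r*r + d*d - 2*x*d) + math.sqrt(r*r - x*x)

# grid search over [0, r]
xs = [r * k / 100_000 for k in range(100_001)]
x_grid = max(xs, key=DE)

# closed form x_2 = (r / 4d) (r + sqrt(r^2 + 8 d^2))
x_closed = (r / (4*d)) * (r + math.sqrt(r*r + 8*d*d))

assert abs(x_grid - x_closed) < 1e-4
```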
|calculus|
0
Finding $\sum_{0\le i<j\le n} {n \choose i}$
Let $$S=\sum_{0\le i<j\le n} {n \choose i} \tag{*}$$ Consider the summation: $$\sum_{j=0}^{n}\sum_{i=0}^{n} A_i B_j=\sum_{i=0}^n A_i B_i+\sum_{0\le i<j\le n} A_i B_j+\sum_{n\ge i>j\ge 0} A_i B_j$$ If $B_j=1$ , then the second and the third sums in the RHS above are equal, so $$\sum_{0\le i<j\le n} A_i=\frac{1}{2}\left[(n+1)\sum_{i=0}^{n}A_i-\sum_{i=0}^{n}A_i\right]=\frac{n}{2}\sum_{i=0}^{n}A_i$$ Finally, taking $A_i={n \choose i}$ and $\sum_{i=0}^{n} {n \choose i} =2^n$ , we get $S=n2^{n-1}$ . The question is: how else can this sum (*) be evaluated? Edit: It requires good steps to show that $S=\sum_{k=0}^{n} k {n \choose k}$ and hence this question cannot be taken as a duplicate. See the positive comments of @jjagmath and @Phicar below.
This sum is actually a double sum $$S=\sum_{j=0}^{n} \sum_{i=0}^{j-1} {n \choose i}$$ This can be expanded as $$S={n \choose 0}+\left[{n \choose 0}+{n \choose 1}\right]+\left[{n \choose 0}+{n\choose 1}+{n \choose 2}\right]+\left[{n \choose 0}+{n \choose 1}+{n \choose 2}+{n \choose 3}\right]+\cdots+[0]$$ This is as though the frequency of ${n \choose i}$ is $(n-i)$ , then $$=\sum_{i=0}^{n} (n-i) {n \choose i}=\sum_{k=0}^n k {n \choose k}=n 2^{n-1}$$
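The frequency-counting argument is easy to confirm by brute force (a sketch):

```python
from math import comb

# Brute-force S = sum over 0 <= i < j <= n of C(n, i), compare n * 2^(n-1).
for n in range(1, 13):
    S = sum(comb(n, i) for j in range(n + 1) for i in range(j))
    assert S == n * 2**(n - 1)
```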
|sequences-and-series|summation|binomial-coefficients|
1
$\frac{f(x)}{f(y)}=\frac{g(x)}{g(y)}$ iff $f(x)=a g(x)$?
We have two continuous, nonzero functions $f,g:\mathbb{R}\to\mathbb{R}$ , and we are interested in whether or not $$\frac{f(x)}{f(y)}=\frac{g(x)}{g(y)}$$ for all $x$ and $y$ . It is obvious that a sufficient condition is that $f(x)=a g(x)$ for some $a\in\mathbb{R}$ . But is this condition also necessary? It would not be necessary without the continuity assumption, but here both $f$ and $g$ are continuous.
Given that $f,g:\mathbb R\to\mathbb R$ are nonzero functions and that $$ \forall x,y\in \mathbb R. \frac{f(x)}{f(y)}=\frac{g(x)}{g(y)}. $$ Since $f$ and $g$ are never zero, so we can rearrange the equation: $$ \forall x,y\in \mathbb R. \frac{f(x)}{g(x)}=\frac{f(y)}{g(y)}. $$ In particular, if we fix $y = 0$ , $$ \forall x\in \mathbb R. \frac{f(x)}{g(x)}=\frac{f(0)}{g(0)}. $$ This should address the "necessary" direction of implication. By the way, I think this contradicts the idea that $f(x)=ag(x)$ would not be necessary without the continuity assumption.
|real-analysis|
1
How to prove $e^{-x}-\frac{1}{n}< \left (1-\frac{x}{n} \right )^{n}$?
I was looking at the solution given by Simply Beautiful Art to the following problem: https://math.stackexchange.com/a/2029616/952348 In a part of the solution the author claims that How can I prove the left side of the inequality? I have tried using the Taylor expansion of the exponential function but I did not get anything.
Denote $x=nu$ and consider for $0\leq u\leq 1$ the following function $$f(u)=((1-u)e^u)^n+\frac{e^{nu}}{n}-1.$$ Then $f(0)=1/n>0$ and (*) $$f'(u)=e^{nu}(-nu(1-u)^{n-1}+1)\geq 0$$ which implies that $f$ is positive on $(0,1),$ a fact equivalent to the desired inequality. Edit: (*) thanks to user48203, who points out a mistake; observe $$nu(1-u)^{n-1}\leq \sum_{k=0}^nC^k_nu^k(1-u)^{n-k}=1$$
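The inequality $e^{-x}-\frac1n<\left(1-\frac xn\right)^n$ can also be spot-checked on a grid before working through the proof (a sketch):

```python
import math

# Check e^(-x) - 1/n < (1 - x/n)^n for x in [0, n] and several n.
for n in range(1, 20):
    for k in range(1000):
        x = n * k / 999
        assert math.exp(-x) - 1/n < (1 - x/n)**n + 1e-12
```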
|real-analysis|inequality|exponential-function|
0
Shortest distance between two affine subspaces through orthogonal projection
I'm trying to show the following: Let $V$ be a finite dimensional euclidean vector space, with two vector subspaces $S_1 \subset V$ and $S_2 \subset V$ . Suppose that $X, Y$ are affine subspaces with $X = x + S_1$ and $Y = y + S_2$ , that $P_U$ is the orthogonal projection onto a vector subspace $U$ , and finally that $$ d(X, Y) = \min \{d(x,y) \mid x \in X,\, y \in Y\}. $$ Show that the following is true: $$ d(X, Y) = \lVert P_{S_1 + S_2}(x - y) + y - x \rVert. $$ I've learned and used a similar formula before: $$ d(v, U) = \inf\{\lVert v - u \rVert\ \mid u \in U\} = \lVert P_{U^{\perp}}(v) \rVert $$ with $U$ being a vector subspace of the vector space $V$ . In this case there was also the condition that $V = U \oplus U^{\perp}$ . I guess my problem is that I can understand the orthogonal projection visually if I project a vector onto, for example, a plane, but I can't understand the projection onto two subspaces. Edit: Thank you @Stéphane Jaouen for the picture, it was invaluable whe
Before we go any further, I think it's helpful to follow the advice of @Ted Shifrin. $ x,y \in \mathbb R^3; \color{green}{S_1}, \color{red}{S_2}$ two vector lines; $S_1\oplus S_2$ is represented in $\color{grey}{\text{grey}}$ ; $P:=P_{\color{grey}{S_1\oplus S_2}}(x-y)$ ; $V:=x-y=\vec{OP}+\vec{PV}$ ; $\vec{OP}=\color{green}{\vec{OP_1}}+\color{red}{\vec{OP_2}}; x_1:=x-\color{green}{P_1}; y_1:=y+\color{red}{P_2};\color{Green} {X}:=x+\color{DarkGreen} {S_1}; \color{Orange} {Y}:=y+\color{red} {S_2}$ $$d(\color{green}X,\color{orange}Y)=||x_1-y_1||=||\vec{PV}||=\lVert x-y-P_{S_1 + S_2}(x - y) \rVert$$ Regarding the proof, you have $$\forall \xi\in X=x_1+S_1, \forall \eta \in Y=y_1+S_2$$ $$\xi=x_1+u_1, \eta=y_1+u_2$$ with $u_i\in S_i(i=1,2)\subset S_1+S_2$ Then $||\xi-\eta||^2=||x_1-y_1||^2+||u_1+u_2||^2$ (Pythagora's theorem) So $||\xi-\eta||^2\geq ||x_1-y_1||^2$ and finally $$||\xi-\eta||\geq||x_1-y_1||$$ So, $$d(X,Y)=||x_1-y_1||=||\vec{PV}||=\lVert x-y-P_{S_1 + S_2}(x - y) \rVert.\square$$
|linear-algebra|metric-spaces|orthogonality|projection|affine-geometry|
1
Does the direct sum of two real line bundles admit a non-vanishing section?
Let $M$ be a smooth manifold, and $L$ be a smooth real (non-trivial) line bundle over $M$ ; is it then true that $L\oplus L$ admits a non-vanishing section? Intuitively, I feel like the answer should be yes, since I think I should be able to take a global section $s$ of $L$ , and then obtain a new section $t$ by "pushing the zeroes of $s$ around" so that $s$ and $t$ don't vanish at the same point. The section $(s,t)$ would then not vanish. I am having trouble making this precise; I also suspect that I might actually need compactness or something so that I can make sure the zeros of $s$ are isolated and can actually push them off. My motivation for this is that I'd like to treat $L\oplus L$ as a trivial complex line bundle (I am pretty sure I can equip $L\oplus L$ with a complex structure since its structure group should be $GL_1(\mathbb R)\times GL_1(\mathbb R)$ , which I believe embeds in $GL_1(\mathbb C)$ , but maybe I am also messing that up).
$L \oplus L$ need not admit a non-vanishing section. To see this, note that the Stiefel-Whitney classes of $L \oplus L$ are given by \begin{align} \omega_1(L \oplus L) &= 2\omega_1(L) \omega_0(L) = 0 \\ \omega_2(L \oplus L) &= \omega_1(L)^2 \end{align} with all higher classes zero for degree reasons. Now if $L \oplus L$ has a nonvanishing section, it splits as $L \oplus L \cong L' \oplus \epsilon^1$ where $L'$ is a line bundle, and consequently we compute \begin{align} \omega_1(L \oplus L) = \omega_1(L' \oplus \epsilon^1) &= \omega_1(L') \\ \omega_2(L \oplus L) = \omega_2(L' \oplus \epsilon^1) &= 0 \end{align} so it suffices to find a line bundle $L$ over some manifold $M$ such that $\omega_1(L)^2 \neq 0$ , and for this pulling back the universal bundle $\gamma^1$ over $\mathbb{R}\mathrm{P}^\infty$ to some $\mathbb{R}\mathrm{P}^n$ , $n \geq 2$ will do.
|differential-topology|vector-bundles|line-bundles|
1
Polynomial iterations II: Strange behaviour of $x^2-7x+5$ under iteration
About a month ago, I asked a similar question about a certain class of polynomials that seem to defy the odds of "eventual divisibility", i.e. iterating an "eventually divisible" polynomial we can guarantee that any input will yield a number that is divisible by a certain prime $p$ at least once (and thus infinitely often). Please refer to that question for a thorough explanation of the idea. After some careful case elimination, I was left with (i.e. could not prove/disprove the property for) three quadratic polynomials where linear and constant coefficients are absolutely smaller than $10$ : $(1)$ $x^2-7x-7$ (which is checked up to $p \leq 4 \cdot 10^6$ by Mike Daas with no hits) $(2)$ $x^2-7x+5$ (which is checked up to $p \leq 6881261$ by Mike Daas with no hits) $(3)$ $x^2+5x+7$ (which was shown by Oscar Lanzi to be provably non-eventually divisible) Since the property seems to hold except for trivial exceptions (see other question), it seems highly unusual that these polynomials jus
I will post further developments as an answer to not clutter the exposition in the question too much. To reduce the amount of case checking we have to do, we can put the ideas about higher iterates mentioned in the question to the test: Define $q(x):=x^2-7x+5$ . Since $$q(x)-x = \underbrace{x^2-8x+5}_{\Delta_1=44}$$ $$q^{(2)}(x)-x = \underbrace{(x^2-8x+5)}_{\Delta_1}\underbrace{(x^2-6x-1)}_{\Delta_2 = 40}$$ $$q^{(3)}(x)-x = (x^2-8x+5)(x^3-13x^2+40x+13)(x^3-7x^2+4x+1) \stackrel{t = x+\frac{13}{3},s=x+\frac{7}{3}}{=} \underbrace{(x^2-8x+5)}_{\Delta_1}\underbrace{(t^3-\frac{49}{3}t+\frac{637}{27})}_{-\frac{\Delta_t}{108} = -\frac{2401}{108}}\underbrace{(s^3-\frac{37}{3}s-\frac{407}{27})}_{-\frac{\Delta_s}{108} = -\frac{1369}{108}}$$ we get additional conditions based on if one of the discriminants of these factors is a quadratic residue (refer to the quadratic and cubic formula if this is not clear): $$\left(\frac{\Delta_1}{p}\right) = \left(\frac{44}{p}\right) = \left(\frac{11}{p}\right)
|number-theory|polynomials|graph-theory|prime-numbers|recreational-mathematics|
0
Example of Complex Pythagorean Triples
I am looking for example of a Pythagorean Triple with Gaussian Integers. I followed the links and looked at followings : Relation to Gaussian integers in https://en.m.wikipedia.org/wiki/Pythagorean_triple Links and Google searches mentioned in: Generating all the Pythagorean triples by factorizing using complex numbers And References for generating Pythagorean triple using complex number Just looking for an example of a Pythagorean Triple as Gaussian integers, must be missing something obvious in all the above and their mentioned links to not come up with Just one example.
Have you tried working through an algebraic derivation of the primitive Pythagorean triple parametric formula in $\mathbf Z$ to get an analogue in $\mathbf Z[i]$ ? Because $-1$ is a square, the terms in $\alpha^2 + \beta^2 = \gamma^2$ are essentially symmetric in $\alpha$ , $\beta$ , and $\gamma$ , so we can assume without loss of generality that $\beta$ is divisible by $1+i$ while $\alpha$ and $\gamma$ are not. Then it turns out that $$ \alpha = u(z^2 - w^2), \ \beta = \pm 2uzw, \ \gamma = u(z^2+w^2) $$ where $u$ is a unit in the Gaussian integers, $z$ and $w$ are relatively prime in the Gaussian integers, and $z \not\equiv w \bmod 1+i$ .
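One concrete triple, following the parametrization above with $u=1$, $z=2+i$, $w=1+i$ (note $z-w=1$ is not divisible by $1+i$, and the norms $5$ and $2$ are coprime):

```python
# alpha = z^2 - w^2, beta = 2 z w, gamma = z^2 + w^2, with u = 1.
# Small Gaussian integers are represented exactly by Python complexes.
z, w = 2 + 1j, 1 + 1j
alpha = z*z - w*w      # (3+4i) - 2i = 3+2i
beta = 2*z*w           # 2+6i
gamma = z*z + w*w      # 3+6i

assert alpha**2 + beta**2 == gamma**2
print(alpha, beta, gamma)   # → (3+2j) (2+6j) (3+6j)
```

So $(3+2i)^2 + (2+6i)^2 = (3+6i)^2 = -27+36i$ is an explicit Gaussian-integer Pythagorean triple.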
|reference-request|complex-numbers|soft-question|examples-counterexamples|pythagorean-triples|
0
Prove: $L^p(X)$ is not contained in $L^q(X)$ iff $X$ contains sets of arbitrarily small positive measure.
Let $(X,M,\mu)$ be a measure space and $0<p<q\le\infty$. My work: I proved the forward direction. For the backward direction, I handled the case $q<\infty$. Now for the case $q=\infty$, how can I find a function $g$ so that $g\in L^p$ and $g\notin L^\infty$? Can anybody please give me a hint? EDIT: Will $g=\sum_{n=1}^\infty \chi_{E_n}$ work? I can prove that $g\in L^p$, but I am struggling to prove that $g\notin L^\infty$. Any ideas?
Suppose $L^p\not\subset L^q$ , then there is some $f \in L^p \setminus L^q$ . Put $A_t=\{x: \vert f(x) \vert \geq t\}$ . Then $$\vert \vert f \vert \vert_p^p = \int \vert f \vert^p \geq t^p\mu(A_t) .$$ That is, $$\mu(A_t) \leq \vert \vert f \vert \vert_p^p \frac{1}{t^p} \to 0 \ \text{as $t \to \infty$}.$$ However, if $\mu(A_t)=0$ for some $t$ , then $\vert \vert f \vert \vert_\infty \leq t$ , and then $$\vert \vert f \vert \vert_q^q=\int \vert f \vert^p \vert f \vert^{q-p} \leq \int \vert f \vert^p \, \vert \vert f \vert \vert_\infty^{q-p} < \infty,$$ contradicting $f \not\in L^q$ . Suppose $X$ contains sets of arbitrarily small positive measure. Choose a measurable set $A_1$ with $\mu(A_1) \in (0,1]$ and inductively choose $A_{n+1}$ measurable with $\mu(A_{n+1}) \in (0,\frac{1}{3}\mu(A_n)]$ so that $\mu(A_n) \in (0,\frac{1}{3^{n-1}}]$ and moreover, $$\mu(A_{n+k}) \in (0,\frac{1}{3^k}\mu(A_n)].$$ Then put $E_n=A_n \setminus\bigcup_{k=1}^\infty A_{n+k}$ . Then the $E_n$ have positive measure and are disjoint
|measure-theory|lp-spaces|
0
What formula could calculate the speed of music playback in a variable length of time?
I am trying to create a patch in Max that records sound over a portion of a given length of time, then plays back the recording to fill out the remaining portion of time with the speed of playback adjusted, such that the complete recording of the first portion is heard within the second portion. The total length is given as an integer of minutes 1-180. I have a timer set up that records the length of the recording in milliseconds. I converted the minutes to seconds by multiplying that number by 60, and the milliseconds to minutes by multiplying that number by .001 (please correct me if I have that wrong!) I know I can determine the length of the playback (of the remaining portion ) by subtracting the length of the recording (of the first portion ) from the total length . However, I don't know how to calculate the speed at which the recording needs to play in order to fill up the remaining portion of time. (I also don't know how to write this as math formulas). Abstractly, I understand
First please observe that speed refers to units of the type something per unit of time ; for instance kilometers per hour , liters per second , etc. Here you are looking for a unit whose type is seconds per second , in which case I guess a more appropriate name would be a speed ratio . Let $T$ be the total length, $D$ the duration of the original recording, and $\lambda$ the speed ratio at which you want to play the recording the second time to fill the total duration. So you basically have $T = D + \lambda D$ . As you mentioned, if $D = T/2$ then the second recording is played at the same speed as the first one, i.e. $\lambda=1$ . By solving the above equation for $\lambda$ you find: $$\lambda = (T-D)/D$$ Note that I have not mentioned the units of time (seconds, minutes, etc.), but you need to make sure they are all in the same units.
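As a sketch in code (the function and variable names are mine, not Max's; if your playback object instead takes a rate where 2 means twice as fast, use the reciprocal, recorded / (total - recorded)):

```python
def playback_ratio(total_ms, recording_ms):
    """Seconds of playback per second of recording so that a recording of
    length recording_ms exactly fills the remaining total_ms - recording_ms."""
    return (total_ms - recording_ms) / recording_ms

total = 3 * 60 * 1000      # a 3-minute total length, in milliseconds
recorded = 60 * 1000       # a 1-minute recording, in milliseconds
lam = playback_ratio(total, recorded)
print(lam)                 # → 2.0: each recorded second is stretched to two
```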
|related-rates|music-theory|
0
Range of $I-T$ is closed for $T$ being a compact operator in a Banach Space
I know from this question that this is true for Hilbert spaces. But is the result true for Banach spaces as well? So suppose $x_{n}$ is a bounded sequence and $(I-T)(x_{n})$ converges. Then up to a subsequence, we have that $Tx_{n}$ converges, and hence $x_{n}$ converges to some $x$ . Thus, we have that $(I-T)(x_{n})\to (I-T)(x)$ . This would be enough to show that the range is closed but for the fact that we assumed $x_{n}$ to be bounded in the first place. So what if $(T-I)(x_{n})$ converges and $x_{n}$ is unbounded? Is the result true? And if it is, then how do I prove it?
Just realized this: We can follow the steps of the linked question. Firstly, by Fredholm theory, we have that $\dim(\ker(T-I))<\infty$ , and a finite dimensional subspace is always complemented in a Banach space (actually true in any normed linear space). Thus we have that $X=\ker(T-I)\oplus Y$ for some closed subspace $Y$ . We can now follow the exact steps in the linked question and conclude that if $||(T-I)(u_{n})||\to 0$ for some sequence of unit vectors in $Y$ , then $u_{n}\to 0$ , which contradicts the unit norm of $u_{n}$ . Thus $T-I$ as an operator from the Banach space $Y$ will be bounded below, and hence its range will be closed.
|linear-algebra|functional-analysis|operator-theory|banach-spaces|compact-operators|
1
Related to random variables
I am working on research project related to wireless communication wherein there is extensive use of concept of random variables. I have the following equation: $X = h_0h_{1_{n}}$ ---(1) where, $h_0$ is Nakagami- $m$ random variable with mean power $\Omega_0$ and $h_{1_{n}}$ is Gaussian noise with mean power $\sigma^2$ . My query is what will be variance of $X$ of equation (1). Any help in this regard will be highly appreciated.
I try to reformulate the variables at play to make sure I got it right: $$ h_0 \sim \text{Nak}(m, \Omega_0) \\ h_1 \sim \mathcal{N}(0,\sigma^2) $$ Assuming the variables $h_0$ and $h_1$ are independent (and hence we can factor the expectation of the product) we have: $$ \mathbb{E}(X)=\mathbb{E}(h_0 h_1) = \mathbb{E}(h_0)\mathbb{E}(h_1)=0 \\ \mathbb{E}(X^2)=\mathbb{E}(h_0^2 h_1^2) = \mathbb{E}(h_0^2)\mathbb{E}(h_1^2)=\Omega_0 \sigma^2 \\ \text{Var}(X)=\mathbb{E}(X^2)-[\mathbb{E}(X)]^2=\Omega_0 \sigma^2 $$ where for the second moment we used the fact that for $Z\sim \text{Nak}(m, \Omega)$ we have $\mathbb{E}(Z^2)=\Omega$
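A Monte Carlo sanity check of $\mathrm{Var}(X)=\Omega_0\sigma^2$ (a sketch; the parameter values are illustrative, and $h_0$ is sampled via $h_0^2\sim\Gamma(\text{shape}=m,\ \text{scale}=\Omega_0/m)$, which is the standard way to draw Nakagami variates):

```python
import random

random.seed(0)
m, omega0, sigma = 1.5, 2.0, 0.5
n = 200_000

def sample_x():
    # h0 is Nakagami-m: the square of h0 is Gamma(m, omega0/m)
    h0 = random.gammavariate(m, omega0 / m) ** 0.5
    h1 = random.gauss(0.0, sigma)   # zero-mean Gaussian, std sigma
    return h0 * h1

xs = [sample_x() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean)**2 for x in xs) / n

assert abs(var - omega0 * sigma**2) < 0.05   # theory predicts 0.5
```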
|probability-distributions|random-variables|density-function|
1
Is the set of infinite well founded trees coanalytic complete?
I know that the set $WF$ of well founded trees in $\omega^\omega$ is a coanalytic complete set and I was wondering if I restrict to the set of all inifinite (as sets) trees, the set of all infinite well founded trees would also be coanalytic complete. I've managed to prove that the set $\mathcal{T}_i$ of infinite trees in $\omega^\omega$ is a $G_\delta$ set of the set of trees (that's because its complement, the set of finite trees, is countable so a $F_\sigma$ set) so it is a polish space. Now, as the set of well founded finite trees is equal to the set of finite trees, $\mathcal{T}_f$ , we obtain that $WF_i$ cannot be Borel in $\mathcal{T}$ (the set of all trees in $\omega^\omega$ ), because $WF$ is not Borel in $\mathcal{T}$ and $$WF=WF_i \cup WF_f = WF_i \cup \mathcal{T}_f$$ Furthermore, if $WF_i$ is Borel in $\mathcal{T}_i$ , then there exists some Borel subset $A \subset \mathcal{T}$ such that $$WF_i=A \cap \mathcal{T}_i$$ but as $\mathcal{T}_i$ is $G_\delta$ in $\mathcal{T}$ thi
Yes, it's still coanalytic complete. In my opinion, the fastest way to see this is via an explicit "translation" map: given a tree $T\subseteq \omega^{<\omega}$ , let $$T'=\{\sigma\in\omega^{<\omega}:\dots\}$$ Basically, we "infinitize" $T$ without adding any new branches. The map $f:T\mapsto T'$ is continuous, preserves/reflects well-foundedness, and only outputs infinite trees, so we're done: given a coanalytic set $Y$ and a reduction $r$ of $Y$ to $WF$ , the map $f\circ r$ is a reduction of $Y$ to $WF_i$ . (More generally it is indeed true that throwing away a countable set from a set of "high complexity" - say, worse than $F_\sigma$ - won't change the complexity of that set. In particular, if $X\subseteq\omega^\omega$ is coanalytic-complete and $C$ is countable then $X\setminus C$ is also coanalytic-complete.)
|descriptive-set-theory|
1
Find a conformal mapping $f$ that maps $\{z \in \mathbb{C}: Re(z) > 0\}$ onto itself that maps $f(1)=2$
Find a conformal mapping $f$ that maps $\{z \in \mathbb{C}: Re(z) > 0\}$ onto itself that maps $f(1)=2$ . So we need to ensure $f(1)=2$ and that the right half plane maps to the right half plane. Well, we can choose a Mobius mapping. Notice that the mapping $$f(z) = \frac{z+1}{\frac{1}{2}z+\frac{1}{2}}$$ will satisfy $f(1) =2$ but it's not obvious to me that I can check that the right half plane maps onto the right half plane. In fact, my choice of $f$ above is only one of infinitely many choices that satisfy the constraint $a+b = 2(c+d)$ . How can I choose a mapping in such a way that the right half plane maps onto itself?
The correct order of matrix multiplication for this is $$ \left( \begin{array}{rr} -i & 0 \\ 0 & 1 \\ \end{array} \right) \left( \begin{array}{rr} a & b \\ c & d \\ \end{array} \right) \left( \begin{array}{rr} i & 0 \\ 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rr} a & -bi \\ ci & d \\ \end{array} \right) $$ Here we demand $a,b,c,d$ real and $ad-bc > 0.$ Your condition that $1$ map to $2$ by $$ \frac{az-bi}{ciz +d} $$ becomes $ a-bi = 2 (ci +d)$ with real variables, so that $a-2d = (b+2c)i $ gives the restrictions $a = 2d$ and $b = -2c$ (both sides must be real), leading to $$ \frac{2dz + 2ci}{ciz +d} $$ from the matrix $$ \left( \begin{array}{rr} 2d & 2ci \\ ci & d \\ \end{array} \right) $$ of determinant $2(c^2 + d^2)$
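The family found above can be checked numerically: every member fixes $f(1)=2$ and maps the right half-plane into itself (a sketch):

```python
import random

random.seed(0)
for _ in range(50):
    c = random.uniform(-5.0, 5.0)
    d = random.uniform(0.1, 5.0)

    def f(z):
        # (2d z + 2c i) / (c i z + d): real c, d, determinant 2(c^2+d^2) > 0
        return (2*d*z + 2j*c) / (1j*c*z + d)

    assert abs(f(1) - 2) < 1e-9            # f(1) = 2 for every such c, d
    for _ in range(50):
        z = complex(random.uniform(0.1, 10.0), random.uniform(-10.0, 10.0))
        assert f(z).real > 0               # right half-plane is preserved
```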
|complex-analysis|mobius-transformation|
1
Continuous analogue of the discrete simple continued fraction
Background The classical Riemann integral of a function $f : [a,b] \to \mathbb{R}$ can be defined by setting $$\int_{a}^{b} f(x) \ dx := \lim_{\Delta x \to 0} \sum f(x_{i}) \ \Delta x. $$ Here, the limit is taken over all partitions of the interval $[a,b]$ whose norms approach zero. We can do something roughly similar with product integrals . They take the limit over a product instead of a sum, and can be interpreted as continuous analogues of discrete products. There are multiple types of product integrals. Type I is often refered to as Volterra's integral . It is defined as follows: \begin{align*} \prod_{a}^{b} \left(1+f(x) \ dx \right) &:= \lim_{\Delta x \to 0} \left(1 + f(x_{i}) \ \Delta x \right) \newline &= \exp \left( \int_{a}^{b} f(x) \ dx \right). \tag{1} \label{1} \end{align*} However, this is not a multiplicative operator. As an alternative, there is also Type II, the geometric integral. It is defined as \begin{align*} \prod_{a}^{b} f(x)^{dx} &:= \lim_{\Delta x \to 0} \prod
I don't think this is likely to work. An important feature of both sums and products is that they're commutative and associative, so that when considering adding/multiplying more and more nearby numbers, the order of addition/multiplication doesn't matter. On the other hand, continued fractions make use of division as well as addition, and division is neither commutative nor associative. This isn't just an abstract complaint: iterated division associates the even-numbered terms with the numerator more than with the denominator—the overall expression is increasing in those parameters while decreasing in the odd-numbered parameters. So I don't see any reasonable way of doing that "more and more" with nearby points without radically changing the dependence on individual values.
|integration|analysis|definition|products|continued-fractions|
0
Counterexample on a definite integral
Is it possible to have an example where $$\int_{-2}^2f(x)dx=4 \text{, and $f$ is an even function}$$ but $\int_0^2f(x)dx$ does not exist?
If the integral doesn't exist on the interval $[0,2]$, then the integral on the interval $[-2,2]$ does not exist either, since $$\int_{-2}^2f(x)\,dx= \int_{-2}^0f(x)\,dx + \int_0^2f(x)\,dx$$ and the sum of an existing term and a non-existing term yields a non-existing result.
|calculus|integration|
0
How probable is that a randomly typed 47 digit odd integer is a prime?
So, I have been playing around with prime numbers, I have installed gmp and gmpy2 gmpy2 has a function gmpy2.is_prime for primality testing (non deterministic) which uses the Miller-Rabin primality test. Now to test the speed of gmpy2.is_prime I typed some random digits '1245268798719487981976914598618498569816481948' and added a '3' to the end so that the number is not even . It took it milliseconds to get the result and to my surprise, In [18]: gmpy2.is_prime(12452687987194879819769145986184985698164819483) Out[18]: True What? really? the number I randomly typed is a prime? To make sure is_prime wasn't returning true for every other number I added a few digits and expectedly In [25]: gmpy2.is_prime(124526879871948798197691459861849856981648139483) Out[25]: False In [26]: gmpy2.is_prime(12452687987194879819769145986555184985698164819483) Out[26]: False I was blown away, but now, I am curious. What is the probability of this happening? More specifically, What is the probability that a
According to the Prime number theorem there are approximately $$10^{47}/\ln(10^{47}) - 10^{46}/\ln(10^{46}) \approx 8.296\cdot10^{44}$$ 47-digit primes and $$9\cdot10^{46}/2$$ odd 47-digit numbers. So the probability is approximately $\frac{8.296\cdot10^{44}}{4.5\cdot10^{46}} \approx 0.018$. Excluding numbers with 5 as last digit, the probability is approximately $\frac{8.296\cdot10^{44}}{3.6\cdot10^{46}} \approx 0.023$. Edit: The values with the somewhat better approximation $1/\ln(x)$ for the probability that a random integer not greater than $x$ is prime are $0.01868$ for odd numbers in the given range and $0.02335$ for odd numbers not ending with 5.
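The estimate above can be reproduced in a few lines; this is a sketch relying only on the prime number theorem approximation already used in the answer:

```python
import math

# approximate count of 47-digit primes, via the prime number theorem
primes_47 = 10**47 / math.log(10**47) - 10**46 / math.log(10**46)

odd_47 = 9 * 10**46 / 2        # odd 47-digit integers
odd_not5_47 = 9 * 4 * 10**45   # last digit restricted to {1, 3, 7, 9}

p_odd = primes_47 / odd_47            # ≈ 0.0184
p_odd_not5 = primes_47 / odd_not5_47  # ≈ 0.0230
```

So roughly one random odd 47-digit number in 54 is prime, and one in 43 if the last digit is also coprime to 10.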
|probability|prime-numbers|primality-test|
0
Application of integrals (energy/work)
I'm not sure about the integral associated with the example below. I see different options and I don't know exactly which one is the right one. My found options are the following: $$ \int_{0}^{100} (1000+60x)*10*4 \,dx $$ or $$ \int_{0}^{400} (1000+60x)*10 \,dx $$ I'm not sure about my limits. Maybe it should be a maximum of 99 instead of 100? I also doubt whether my integral takes into account the fact that passengers only board from floor 1. Is it correct that my integral assumes that passengers board from floor 0? If so, how can I adjust this? The question: The Petronas Twin Towers is a skyscraper in Kuala Lumpur. In the ground floor entrance hall there is a mega-sized elevator with a weight of 1000 that can transport visitors from the ground floor (floor 0) to floor 100. Each floor is 4 meters high. We are looking at a situation in which the elevator will travel from the ground floor (floor 0) to floor 100. On floor 1, the first person (weighing 60 ) steps into the elevator. An ad
An integral is continuous, whilst $\sum$ is discrete. You don't want $1.00000000000000000001$ passengers on the lift, do you?
|integration|
0
Application of integrals (energy/work)
I'm not sure about the integral associated with the example below. I see different options and I don't know exactly which one is the right one. My found options are the following: $$ \int_{0}^{100} (1000+60x)*10*4 \,dx $$ or $$ \int_{0}^{400} (1000+60x)*10 \,dx $$ I'm not sure about my limits. Maybe it should be a maximum of 99 instead of 100? I also doubt whether my integral takes into account the fact that passengers only board from floor 1. Is it correct that my integral assumes that passengers board from floor 0? If so, how can I adjust this? The question: The Petronas Twin Towers is a skyscraper in Kuala Lumpur. In the ground floor entrance hall there is a mega-sized elevator with a weight of 1000 that can transport visitors from the ground floor (floor 0) to floor 100. Each floor is 4 meters high. We are looking at a situation in which the elevator will travel from the ground floor (floor 0) to floor 100. On floor 1, the first person (weighing 60 ) steps into the elevator. An ad
Since the weights change in discrete periods, I would use a summation and not an integral. $\sum\limits_{n=0}^{99} (1000 + 60n)(4\text{ meters})(10 \frac {\text {meters}}{\text {seconds}^2})$ $= \left[10^5 + (60)\frac {(99)(100)}{2}\right] (40 \text{ Nm})$ Nonetheless, the setup for the integral vs. the summation is identical. What is the difference between: $\int_{0}^{100} (1000 + 60x)(40)\ dx$ and $\int_{0}^{400} (1000 + 60x)(10)\ dx$ ? The second integral would indicate that the mass increases 60 kilos per meter that the elevator moves vs. 60 kg per floor.
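The sum above is small enough to evaluate directly; a sketch (masses in kg and $g = 10\ \mathrm{m/s^2}$, as in the problem):

```python
# total work: elevator (1000 kg) plus n boarded passengers of 60 kg each,
# over 100 floor-to-floor trips of 4 m, with g = 10 m/s^2
work_sum = sum((1000 + 60 * n) * 4 * 10 for n in range(100))  # n = 0, ..., 99

# the first integral in the question, evaluated exactly:
# ∫_0^100 (1000 + 60x)·40 dx = 40·(1000·100 + 30·100²)
work_int = 40 * (1000 * 100 + 30 * 100**2)

print(work_sum, work_int)  # 15880000 16000000
```

The discrete sum and the continuous approximation differ by 120 000 units of work, i.e. under one percent here.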
|integration|
0
Question About Differentiability Of Principal Branch Of log(z).
Theorem 1. Let the function $$f (z) = u(r, θ) + iv(r, θ)$$ be defined throughout some neighborhood of a nonzero point $z_0 = r_0 \exp(iθ_0)$ and suppose that (a) The first-order partial derivatives of the functions u and v with respect to r and θ exist everywhere in the neighborhood; (b) Those partial derivatives are continuous at $(r_0, θ_0)$ and satisfy the polar form $$ru_r = v_θ , u_θ = −rv_r$$ of the Cauchy–Riemann equations at $(r_0, θ_0)$ . Then $f$ is differentiable at $z_0$ . Now consider $$\text{Log}(r\exp(i\theta))=\ln r+i\theta$$ for $(r, θ)\in \mathbb R^+\times (-\pi,\pi].$ Then $u_{\theta}=v_r=0$ and $u_r=\frac 1 r$ , $v_{\theta}=1$ . So clearly all hypotheses of Theorem 1 are satisfied for $(r, θ)\in \mathbb R^+\times (-\pi,\pi]$, so $\text{Log}(z)$ is differentiable everywhere except the origin, which is false since it is not even continuous on the negative real axis. So what am I missing here? Edit: Given $$f(z) = u(r, \theta) + iv(r, \theta)$$ for $(r,\theta)\in\underbrace{(0,\infty) \tim
The point is that in Theorem 1, you must choose $\theta$ close to $\theta_0$ and $\theta$ is not unique. In your definition of the log, you choose $\theta \in (-\pi,\pi]$ . If you take for example $z_0 = -1$ (or any other negative number), you have $\theta_0 = \pi$ and for all $\varepsilon > 0$ close to $0$ , the argument of $-1 - \varepsilon i$ is close to $-\pi$ . Therefore, the argument is not continuous, hence not differentiable. One way to solve the problem is to exclude the negative numbers. In this case, you can choose $\theta$ in $(-\pi,\pi)$ and the argument function becomes continuous (and differentiable). In Theorem 1, it is assumed (but not explicitly said) that you always choose $\theta$ close to $\theta_0$ . If you take $\theta \in [0,2\pi)$ instead, you have a log which is differentiable at the negative numbers but not continuous at the positive numbers. If you take it in $(-\pi/2,3\pi/2]$ , you have to exclude the purely imaginary numbers with negative imaginary part.
|complex-analysis|analytic-functions|
0
Property of Outer Measure on $\mathbb{R}$
Question From - Axler Measure Theory - Problem 3 - Section 2A Throughout: For $A \subset \mathbb{R},$ $|A|$ denotes the outer measure of $A$ and is defined $|A|=\inf\left\{\sum_{k=1}^{\infty}\ell(I_k): I_1, I_2, \cdots \text{ open intervals}, A \subset \cup_{k=1}^{\infty} I_k\right\}$ Let $A,B \subset \mathbb{R}$ with $|A| < \infty$. Show that $|B|-|A| \leq |B \setminus A|$ I was able to show the result for when $A \subset B$ because in this case we have $$|B|=|(B \setminus A) \cup A| \leq |B \setminus A|+|A|$$ However, the general case I am unable to show. I really think it is just a matter of the right set identity and properties of outer measures, but I cannot find the right one. I am aware that $B \setminus A = B \cap A^c.$ Thank you for the help.
This is a good start. You've shown that you have the desired inequality when $A \subseteq B$ , so next you should ask yourself "is the case $A \nsubseteq B$ any harder?" Imagine we start with $A \subseteq B$ , and then enlarge $A$ by adding elements from $B^c$ to get a set $A'$ with $A \subseteq A' \nsubseteq B$ . Because we only added elements which were not in $B$ , we have $B \setminus A' = B \setminus A$ , so $\lvert B \setminus A' \rvert = \lvert B \setminus A \rvert$ . On the other hand, $\lvert A' \rvert \geq \lvert A \rvert$ , so we get $\lvert B \rvert - \lvert A' \rvert \leq \lvert B \rvert - \lvert A \rvert$ . Overall, we get $\lvert B \rvert - \lvert A' \rvert \leq \lvert B \rvert - \lvert A \rvert \leq \lvert B \setminus A \rvert = \lvert B \setminus A' \rvert$ . In other words, adding elements from $B^c$ to $A$ makes the inequality easier to show! The other good news is that every subset of $\mathbb{R}$ can be produced by starting with a subset of $B$ and then adding in elements of $B^c$.
|real-analysis|measure-theory|outer-measure|
0
Second derivatives of 1/r
Let $r = \sqrt{x^2 + y^2 + z^2}$ . From the fact that $\nabla^2 r^{-1} = -4\pi \delta^{(3)}(\vec{r})$ , is it correct to say that $$ \frac{\partial^2}{\partial x^2}(r^{-1}) = \frac{3x^2 - r^2}{r^5} - \frac{4\pi}{3} \delta^{(3)}(\vec{r}) \\ \frac{\partial^2}{\partial x \partial y}(r^{-1}) = \frac{3xy}{r^5} $$ The question is, is it justified to split the Dirac delta evenly in all three directions? This seems like the most straightforward way. $$ \delta = \frac{\delta}{3} + \frac{\delta}{3} + \frac{\delta}{3} $$ Or maybe there are other ways to distribute, while still respecting symmetry, like $$ \delta = \frac{x^2 \delta}{r^2} + \frac{y^2 \delta}{r^2} + \frac{z^2 \delta}{r^2} $$
Let $\psi$ be the distribution $\psi(r)=\frac1r$ . Then, for $\phi \in C_C^\infty$ , we have $$\begin{align} \langle \partial_{ij}\psi, \phi\rangle &=\langle \psi, \partial_{ij}\phi\rangle\\\\ &=\int_{\mathbb{R}^3}\frac1r \frac{\partial^2\phi(\vec r)}{\partial x_i\partial x_j}\,d^3\vec r\\\\ &=\underbrace{\int_{\mathbb{R}^3}\frac{\partial}{\partial x_i}\left(\frac1r \frac{\partial\phi(\vec r)}{\partial x_j}\right)\,d^3\vec r}_{=0\,\,\text{since}\,\,\phi\in C_C^\infty}-\int_{\mathbb{R}^3}\frac{\partial}{\partial x_i}\left(\frac1r \right)\left(\frac{\partial\phi(\vec r)}{\partial x_j}\right)\,d^3\vec r\\\\ &=-\lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^3\setminus B(0,\varepsilon)}\frac{\partial}{\partial x_i}\left(\frac1r \right)\left(\frac{\partial\phi(\vec r)}{\partial x_j}\right)\,d^3 \vec r\tag1 \end{align}$$ where $B(\vec r_c,R)$ is a sphere of radius $R$ centered at $\vec r_C$ . Now, integrating by parts again, we find that $$\begin{align} \langle \partial_{ij}\psi, \phi\rangle&=-\li
|derivatives|dirac-delta|
1
Solving The Quasi-Symmetric Quartic Equation
This is a self-answered question for the following problem: Solve the equation: $$4x^4 - 36x^3 + 61x^2 + 90x + 25 = 0.$$ See my answer.
Solve the equation: $$4x^4 - 36x^3 + 61x^2 + 90x + 25 = 0.$$ This problem was lifted from this Youtube Video . I didn't particularly like the video's approach, so I used my own approach, which I have never seen discussed anywhere. To understand the approach, first see this answer . Assuming that $~u = x - \dfrac{1}{x},~$ the foundation of the linked answer is that in a symmetric quartic equation, you can (for example) express $~\displaystyle x^2 + \frac{1}{x^2}~$ as $~u^2 + 2.~$ This implies that a symmetric quartic equation like $$a_4x^4 + a_3x^3 + a_2x^2 - a_3x + a_4 = 0$$ can be converted into a quadratic equation in $~u.$ The point of this self-answer posting is that this method may be extended beyond symmetric quartic equations to quasi-symmetric quartic equations (a description that I made up). Suppose (for example) you have the quartic equation $$a_4x^4 + a_3x^3 + a_2x^2 - a_1x + a_0 = 0$$ where $$\left| ~\frac{a_1}{a_3} ~\right|^2 = \frac{a_0}{a_4}.$$ I refer to such a quartic equation
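To make the outline above concrete on the given equation (this completion is my own reconstruction of the method described): dividing by $x^2$ gives $4\left(x^2+\frac{25/4}{x^2}\right)-36\left(x-\frac{5/2}{x}\right)+61=0$; with $u=x-\frac{5/2}{x}$ the first bracket is $u^2+5$, so the equation collapses to $4u^2-36u+81=(2u-9)^2=0$, i.e. $u=\frac92$, and then $2x^2-9x-5=0$ gives $x=5$ and $x=-\frac12$. A quick numerical check:

```python
def p(x):
    # the quartic from the question
    return 4 * x**4 - 36 * x**3 + 61 * x**2 + 90 * x + 25

def u(x):
    # the quasi-symmetric substitution u = x - (5/2)/x
    return x - 2.5 / x

# both roots of 2x^2 - 9x - 5 = 0 solve the quartic, and both map to u = 9/2
for root in (5.0, -0.5):
    assert abs(p(root)) < 1e-9
    assert abs(u(root) - 4.5) < 1e-12
```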
|polynomials|quartics|
0
$\sup$ and $\inf$ of $\{a - \frac1n|n\in \mathbb N^*\}$
I have looked at similar questions on here but still cannot reach a conclusion for this particular one. I have gathered that as $n$ increases, $\frac1n$ tends to $0$ , meaning $a - \frac1n$ tends to $a$ . So the set must be bounded below by $a - 1$ , and bounded above by $a$ . To prove $\inf(A) = a - 1$ , I started by assuming that $\inf(A) = l$ for some $l > a - 1$ . So $a - l < 1$. Set $a - l = ε$ . By the Archimedean Property, there exists some $ñ$ such that $ñε > 1$ . So $ñ > \frac1{\epsilon} \implies a - l > \frac1ñ$ This seems to imply $l < a - \frac1ñ$ , so I did not arrive at a contradiction. Does this mean there is another possible value for the infimum? I can't think of what it could be.
You have to do things in order. First, manipulate inequalities to exhibit a minorant and a majorant; and then prove that they are the $\inf$ and the $\sup$. $$\text{let }1\leq n$$ $$0 < \frac1n \leq 1$$ $$-1\leq -\frac1n < 0$$ $$\color{red}a-1\leq \color{red}a-\frac1{\color{green}n} < \color{red}a$$ Then, as @hff1 and @fleablood said, $a-1=a-\frac1{\color{green}1}$ belongs to the set, so it is the $\inf.$ What you still have to prove is that $$\forall \epsilon>0, a-\epsilon \text{ is not a majorant}$$ to prove that $a$ is the smallest of the majorants, i.e. the $\sup.$ And the arguments you have cited are used as follows : $$\text{Let }\epsilon >0$$ $$\text{Let }\widetilde{n}>0 \text{ s.t. } \widetilde{n}\epsilon>1$$ $$\epsilon>\frac1{\widetilde{n}}$$ $$-\epsilon < -\frac1{\widetilde{n}}$$ $$\color{red}a-\epsilon < \color{red}a-\frac1{\widetilde{n}}$$ so $a-\epsilon$ is smaller than an element of the set, hence not a majorant.
|real-analysis|supremum-and-infimum|
0
Proof about a property of $\mathbb{R}$
The result that I want to prove is the following: Let's associate to each $x\in \mathbb{R}$ a number $r_{x}>0$ . Then there is $M\in \mathbb{N}$ and a sequence $ \left( x_{n} \right)_{n\in \mathbb{N}}\subseteq \mathbb{R} $ of distinct points of $\mathbb{R}$ such that $$r_{x_{n}}>\frac{1}{M}, \ \ \forall n\in \mathbb{N}.$$ It is clear that the result is true if the function $r$ is continuous, but I can't find a way to prove it in a general way. I know that I have to use the fact that $\mathbb{R}$ is uncountable. Can someone help me with a hint on how to attack this problem?
For each $M \in \mathbb N$ define $$R_M = \bigl\{x \in \mathbb R \mid r_x > \frac{1}{M} \bigr\} $$ Your statement is equivalent to saying that there exists $M \in \mathbb N$ such that $R_M$ is infinite. If not, then $R_M$ is finite for all $M$ . Using the sequence of inclusions $$R_1 \subset R_2 \subset R_3 \subset \cdots \subset R_m \subset \cdots $$ it follows, using finiteness of each $R_m$ , that the set $\bigcup_{m \in \mathbb N} R_m$ is at most countable. But also $\bigcup_{m \in \mathbb N} R_m = \{x \in \mathbb R \mid r_x > 0\} = \mathbb R$ , which is uncountable.
|real-analysis|sequences-and-series|second-countable|
0
Evaluation of $\prod\limits_{n=1}^\infty(ne^{-n}+1)$
We know that: $$\prod_{n=1}^\infty(e^{-n}+1)=\left(-1,\frac1e\right)_\infty$$ with the q-Pochhammer symbol $(a,q)_k$ , but what if we made it: $$P=\prod_{n=1}^\infty(ne^{-n}+1)= 2.27396845235252995…?$$ which does not appear in OEIS. Here are some other forms of the constant: $$\prod_{n=1}^\infty(ne^{-n}+1)=\lim_{k\to\infty}\frac{\prod\limits_{n=1}^k (e^n+n)}{e^\frac{k(k+1)}2}=\exp\left(\sum_{n=1}^\infty\ln(ne^{-n}+1)\right)=\exp\left(\lim_{k\to \infty}-\frac{k(k+1)}2+\sum_{n=1}^k \ln(e^n+n) \right)$$ If it helps, multiply over the product’s argument’s inverse’s range using the $-1$ st branch of Lambert W : $$\prod_{n=1}^\infty(ne^{-n}+1) = \prod_{-\text W_{-1}(n-1)=1}^\infty n$$ Using an integral : $$\sum_{n=1}^\infty\ln(ne^{-n}+1)=-\int_0^\infty \lfloor x\rfloor d(\ln(xe^{-x}+1))=\int_0^\infty \frac{x\lfloor x\rfloor}{e^x+x}-\frac{\lfloor x\rfloor}{e^x+x} dx=0.8215265…$$ Additionally, expanding $\ln(x+1)$ and switching sums gives the polylogarithm function : $$\ln(P)=-\sum_{n=1}^\inf
Not as much of an evaluation as a number-theoretic interpretation. Here is a very general formula. Given two countable sets $A,B$ , and sufficiently nice $x_{a,b}$ , we can write $$\prod_{b\in B}\left(\sum_{a\in A}x_{a,b}\right)=\sum_{\pi\in P(A,B)}\prod_{(a,b)\in\pi}x_{a,b},\tag1$$ where $P(A,B)$ is the set of all collections of points $\left\{(a_b,b):b\in B\right\}$ such that $a_b\in A$ for each $b\in B$ . Here, $(x_{a,b})_{a\in A, b\in B}$ is "sufficiently nice" when the sums on the LHS of $(1)$ converge absolutely. This formula looks bizarre, but it actually encodes a lot of information about partitions. Anyway, setting $A=\{0,1\}$ , $B=\Bbb Z_{\ge1}$ , and $x_{a,b}=(be^{-b})^a$ , we have $$P=\prod_{b\ge1}(1+be^{-b})=\sum_{\pi\in X}\prod_{(a,b)\in \pi}(be^{-b})^a,$$ where $X=P(\{0,1\},\Bbb Z_{\ge1})$ . We can then see that $$\begin{align} P&=\sum_{\pi\in X}\prod_{(a,b)\in \pi}(be^{-b})^a\\ &=\sum_{\pi\in X}\left(\prod_{(a,b)\in \pi}e^{-ab}\right)\left(\prod_{(a,b)\in \pi}b^a\right)
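As a numerical sanity check of the constant quoted in the question (the factors tend to $1$ geometrically, so a short truncation already converges to double precision):

```python
import math

# partial product of P = ∏_{n≥1} (1 + n e^{-n}); the tail beyond n ≈ 50
# contributes far less than machine epsilon
P = 1.0
for n in range(1, 60):
    P *= 1.0 + n * math.exp(-n)

# agrees with the quoted value 2.27396845235252995…
assert abs(P - 2.2739684) < 1e-5
```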
|calculus|recreational-mathematics|infinite-product|lambert-w|constants|
0
Proving a linear bounded operator invertible
I am not familiar with functional analysis at all. I apologize beforehand if there is any conceptual mistake. The situation is as follows: Let $H$ be a real Hilbert space, and suppose I have a bounded linear operator $T: H \to H$ such that there exists $m > 0$ satisfying $ \lvert \langle Tx, x \rangle \rvert ≥ m\lVert x \rVert ^2$ (*) for any $x \in H$ . Show that $T$ is invertible. I have two outlines of proof in my hand: (i) We wish to find an operator $S$ such that $S \circ T = id$ . So a natural idea is to plug in $Sx$ and try it out. One obtains $ \lvert \langle Tx, Sx \rangle \rvert ≥ m\langle Sx, Sx \rangle. $ On the other hand, one uses the boundedness property and plugs in $Sx$ again, obtaining $ \lVert TSx \rVert \le M \lVert Sx \rVert$ . Comparing the equations, one forces $M = m = 1$ . (ii) One could prove $T$ is in fact a bijection. The injection part is easy by the inequality in the assumption. I haven't found a good way to prove surjection. Now, may I ask: Are both (i) and (ii) valid?
If $y$ is orthogonal to the range, then $\langle Ty,y\rangle=0.$ Hence $y=0$ , which implies that the range is dense. The assumption implies $$\|Tx\|\|x\|\ge|\langle Tx,x\rangle |\ge m\|x\|^2\quad $$ Therefore $$\|Tx\|\ge m\|x\|\quad (*)$$ Hence the range is closed. Thus the operator is surjective. It is also injective by $(*).$ Next, $(*)$ gives $$ m\|T^{-1}y\| \le \|TT^{-1}y\|=\|y\|$$ The latter implies that the inverse is bounded with norm less than or equal to $m^{-1}.$
|functional-analysis|
1
Solving The Quasi-Symmetric Quartic Equation
This is a self-answered question for the following problem: Solve the equation: $$4x^4 - 36x^3 + 61x^2 + 90x + 25 = 0.$$ See my answer.
The rational root theorem gives $$ 4x^4 - 36x^3 + 61x^2 + 90x + 25=(2x + 1)^2(x - 5)^2. $$ Of course, there is some work to do for it, but it seems to me the first thing to try. So the polynomial factors into linear factors. One could also compare coefficients with $(2x+a)(2x+b)(x+c)(x+d)$ and would find this factorization, or first consider $(4x^2+ax+b)(x^2+cx+d)$ .
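The claimed factorization is easy to verify mechanically; since two degree-4 polynomials that agree at five points are identical, exact integer checks suffice:

```python
from fractions import Fraction

def p(x):
    return 4 * x**4 - 36 * x**3 + 61 * x**2 + 90 * x + 25

def q(x):
    return (2 * x + 1)**2 * (x - 5)**2

# agreement at 5 points forces equality of two quartics
assert all(p(x) == q(x) for x in range(-2, 3))

# the double roots found via the rational root theorem
assert p(Fraction(-1, 2)) == 0 and p(5) == 0
```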
|polynomials|quartics|
0
What's the relationship between $(X_i)$ and $(X_i - \mathbb{E}[X_i | X_{<i}])$?
Let $X_1,\cdots,X_n$ be $n$ random variables on the same probability space $(\Omega, F, P)$ , all with expectation $0$ . Define $Y_i=X_i - \mathbb{E}[X_i | X_1, \cdots, X_{i-1}]$ . Is it true that for any $X = a_1 X_1 + \cdots + a_n X_n$ , there exist $b_1,\cdots,b_n$ such that $X = b_1 Y_1 + \cdots + b_n Y_n$ , and vice versa? If it is true, is there a textbook reference that I can cite? If it is not true, is there something similar that is true? I feel that $X_i$ and $Y_i$ must have some strong connections. EDIT: The above statement is false. For example, let $X_2=X_1^3$ , then $Y_1 = X_1$ and $Y_2 = 0$ . In terms of the relationship between $(X_i)$ and $(Y_i)$ , I only know that they generate the same $\sigma$ -algebra. If someone can point out some other relationships, for example from an information theoretical perspective, then this would be really helpful.
Take $X_1=\cdots=X_n\neq0$ . Then $Y_i=0$ . So if $a_1+\cdots+a_n\neq0$ then $X=(a_1+\cdots+a_n)X_1\neq0=b_1Y_1+\cdots+b_nY_n$ . EDIT: as the author pointed out, this is not a counterexample, since we would have $Y_1=X_1$ . I leave it here for the moment until we get a proper answer...
|probability|probability-theory|measure-theory|conditional-expectation|
0
Transforming partial derivatives to polar coordinates
I have to convert the following expression $V(x,y) = x\dfrac{\partial f}{\partial y} - y\dfrac{\partial f}{\partial x}$ to polar coordinates. How do I express the partial derivatives in terms of $r$ and $\theta$ ? The answer should be something like $V(r,\theta) = \dfrac{\partial F}{\partial \theta}$ , where $F(r,\theta) = f(r\cos\theta, r\sin\theta)$ .
Write $x=r\cos\theta$ and $y=r\sin\theta$ and apply the chain rule \begin{equation}\frac{\partial f}{\partial \theta}= \frac{\partial f}{\partial x}\frac{\partial x}{\partial \theta}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial \theta}\end{equation} We can substitute $\frac{\partial x}{\partial \theta}=-r\sin\theta= -y$ and $\frac{\partial y}{\partial \theta}=r\cos\theta=x$ Therefore \begin{equation}\frac{\partial f}{\partial \theta}=-y\frac{\partial f}{\partial x}+x\frac{\partial f}{\partial y}=V(x,y)\end{equation}
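A quick numerical sanity check of this identity, using the hypothetical sample function $f(x,y)=x^2y$ (my choice, not from the question) and a central difference for $\partial F/\partial\theta$:

```python
import math

# sample function and its exact partials (assumption: f(x, y) = x^2 y)
f  = lambda x, y: x**2 * y
fx = lambda x, y: 2 * x * y
fy = lambda x, y: x**2

r, th = 1.3, 0.7
x, y = r * math.cos(th), r * math.sin(th)

V = x * fy(x, y) - y * fx(x, y)  # x f_y - y f_x

# F(r, θ) = f(r cos θ, r sin θ); central difference in θ
h = 1e-6
F = lambda t: f(r * math.cos(t), r * math.sin(t))
dF_dtheta = (F(th + h) - F(th - h)) / (2 * h)

assert abs(V - dF_dtheta) < 1e-8
```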
|derivatives|partial-derivative|polar-coordinates|
0
Showing inequality involving log
I want to show that for all $x_i > 0$ : $$\sum_{i=1}^{n}\dfrac{x_i}{x_1 + \ldots+x_n}\;\log(x_i) \geq \log\left(\dfrac{x_1 + \ldots + x_n}{n}\right)$$ I thought of Jensen's inequality but since $\log$ is a concave function, this seems to give me the opposite as for $x_i > 0, a_i \geq 0$ , it holds: $a_1\log(x_1) + \ldots + a_n\log(x_n) \leq \log(a_1x_1 + \ldots + a_nx_n)$ . Or is there some algebraic manipulation we can do on the original inequality such that we can use Jensen (or any other inequality for that matter)?
The inequality can be rewritten to $$ \frac{x_1 + \ldots + x_n}{n} \cdot \log\left(\frac{x_1 + \ldots + x_n}{n}\right) \le \frac 1n \sum_{i=1}^n x_i \log(x_i) \, , $$ and that is true because the function $f(x) = x \log(x)$ is convex.
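As a sanity check on random inputs (a sketch; the convexity argument above is the actual proof):

```python
import math
import random

random.seed(1)
for _ in range(1000):
    xs = [random.uniform(0.01, 100.0) for _ in range(random.randint(2, 8))]
    s = sum(xs)
    lhs = sum(x / s * math.log(x) for x in xs)
    rhs = math.log(s / len(xs))
    # weighted log-mean dominates log of the average (tolerance for rounding)
    assert lhs >= rhs - 1e-12
```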
|inequality|cauchy-schwarz-inequality|jensen-inequality|
1
Show that Mat$_{n\times k}^k\to G_k(\mathbb{R}^n)$ is a fibration
I am trying to show that Mat $_{n\times k}^k\to G_k(\mathbb{R}^n)$ is a fibration where Mat $_{n\times k}^k$ denotes the full-rank matrices of rank $k$ . I know that $V_k(\mathbb{R}^n)\to G_k(\mathbb{R}^n)$ is a fibration and composition of fibrations is a fibration by Composition and Product of fibrations , so how can we show that Mat $_{n\times k}^k\to V_k(\mathbb{R}^n)$ is a fibration? Also, by Ehresmann's Theorem, every proper submersion is a fibration. The map is indeed a submersion, but I do not think it is proper, so the theorem is not applicable here.
I cannot comprehend this in terms of matrices. Therefore, let $V$ be an $n$ -dimensional $\mathbb R$ -vector space and let $W\subset V$ be a $k$ -dimensional sub-space. Recall that $\mathop{Grass}(k,V)$ is the homogeneous space $\mathop{GL}(V)/H$ with respect to the smooth and transitive action of $\mathop{GL}(V)$ on $\mathop{Grass}(k,V)$ taking sub-spaces to their image under the respective automorphism; i.e., $$H=\mathop{stab}\nolimits_{\mathop{GL}(V)}([W])=\{f\in \mathop{GL}(V)\mid f(W)=W\}.$$ In particular, $\mathop{GL}(V)\to \mathop{Grass}(k,V)$ is a principal $H$ -bundle. The idea is to use the fact that for any closed normal sub-group $H'\trianglelefteq H$ , the associated $H/H'$ -bundle $\mathop{GL}(V)\times_H(H/H')\to\mathop{Grass}(k,V)$ is, by construction, a fibre-bundle. We just have to find the correct $H'$ . Note that $\mathop{GL}(V)$ acts smoothly and transitively on the space of injective linear maps $\mathop{Inj}(W,V)$ (which is the coordinate-free version of $\mathrm{Mat}^k_{n\times k}$ ).
|differential-topology|grassmannian|fibration|
0
Only $x_{6n}, y_{6n}$ doesn't have the same simplest denominator
I'm studying the sequence \begin{align*} a_1=\mathrm{i}, \quad a_{n+1}=\mathrm{i} + \frac{\mathrm{i}}{a_n}, \end{align*} where $\mathrm{i}$ is the imaginary unit. In order to see the structure of this sequence clearly, I separate the real part and the imaginary part, as follows. \begin{align*} x_{n+1} = \frac{y_n}{x_n^2+y_n^2}&, \quad y_{n+1}=\frac{x_n}{x_n^2+y_n^2}+1, \\ &a_n = x_n + y_n\mathrm{i}, \end{align*} with $x_1=0, y_1=1$ . When I try to find the first few patterns, I found: only $x_{6n}, y_{6n}$ don't have the same simplest denominator. The following are $x_n$ , $y_n$ for the first 20 items ( $x_n$ is on the left and $y_n$ is on the right), and this has undergone preliminary programming verification on a larger scale. But I don't really know how to prove this. Can anyone give me an idea or prove this wrong? Additional Information: If we consider a more complex situation, given two nonnegative integers X and Y, make the following sequence: \begin{align*} x_{n+1} =
First of all, it is never necessary (in my experience) to separate a recurrence relation into its real and imaginary parts. Admittedly, however, you might not have made the observation about the denominators otherwise. In this answer, I'll show how I constructed a closed-form solution to this problem, while ignoring the fact that the solution is complex. To avoid confusion, I'm going to express the problem as follows $$ f_n=a+\frac{b}{f_{n-1}},\quad f_0 \text{ given} $$ Now, assume a solution of the form $f_n=q_n/p_n$ , so that $$ \frac{q_n}{p_n}=a+\frac{bp_{n-1}}{q_{n-1}} $$ Finally, let $p_n=q_{n-1}$ so that $$ q_n=aq_{n-1}+bq_{n-2} $$ which is a standard Fibonacci-type second-order recurrence. I have previously given the solution to this recurrence with arbitrary $q_0, q_1, a, b$ in this previous post of mine . The solution can be written as $$ q_n=\left(q_1-\frac{aq_0}{2}\right) \frac{\alpha^n-\beta^n}{\alpha-\beta}+a\frac{q_0}{2} \frac{\alpha^n+\beta^n}{\alpha+\beta} $$ where $\al
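The denominator observation can be checked exactly with rational arithmetic; a sketch using Python's `fractions` on the real/imaginary recurrence from the question:

```python
from fractions import Fraction

# a_1 = i, a_{n+1} = i + i/a_n, written as x_{n+1} = y_n/(x_n² + y_n²),
# y_{n+1} = x_n/(x_n² + y_n²) + 1, where a_n = x_n + y_n i
x, y = Fraction(0), Fraction(1)
denoms = {}
for n in range(2, 14):
    d = x * x + y * y
    x, y = y / d, x / d + 1
    denoms[n] = (x.denominator, y.denominator)  # Fraction auto-reduces

# x_n and y_n share the same reduced denominator except at multiples of 6
assert denoms[6][0] != denoms[6][1]
assert denoms[12][0] != denoms[12][1]
assert all(denoms[n][0] == denoms[n][1] for n in range(2, 14) if n % 6 != 0)
```

For instance, $a_6 = \frac35 + \frac{13}{10}\mathrm{i}$ and $a_{12} = \frac58 + \frac{13}{10}\mathrm{i}$, with denominator pairs $(5,10)$ and $(8,10)$.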
|sequences-and-series|algebra-precalculus|discrete-mathematics|recurrence-relations|
0
how do I find the intersection points of two curves?
I need to calculate the area using a double integral (converting to the polar system) of the region bounded by the following curves: $(x^2+y^2)^2=2a^2(x^2-y^2)$ and $x^2+y^2 \ge a^2$ . So I plotted both of these curves, assuming that $a = 1$. I converted to the polar system and found $r=\pm\sqrt{2a^2\cos 2\theta}$ . But I can't find $\theta$ using an algebraic method. On the graph, I see that it can be equal to $\frac{\pi}{6}$ . Then my solution could be like this (the answer is correct): $$ S=\iint\limits_{S} \,dx \,dy=4\int_{0}^{\pi\over6}d\theta\int_{a}^{\sqrt{2a^2\cos 2\theta}}r\,dr$$ I tried to find the intersection point by doing this: $\sqrt{2a^2(x^2-y^2)}=a^2$ . But I think that's wrong. So basically my question is, how can I find this $\frac{\pi}{6}$ by calculating it? How do I find this intersection point?
COMMENT AS A HINT. Calculation of some points in the figure of the circle and the lemniscate is easy, and you know the polar coordinates, say $r=f_1(\theta)$ and $r=f_2(\theta)$ , for each curve. A way you can finish is the following: ►Calculate the angles $\theta$ corresponding to points $A,B,C$ in the attached figure; say these are $\alpha,\beta,0$ . ►►The shaded area, $A_1$ , is given by $$\frac12\int_{\alpha}^{\beta}(f_1(\theta))^2d\theta-\frac12\int_{\alpha}^{\beta}(f_2(\theta))^2d\theta$$ ►►►You can finish by multiplying by $4$ the area given by $$\frac{\pi}{4}+A_1+\frac12\int_{0}^{\alpha}(f_1(\theta))^2d\theta-\frac{\sqrt{-2+\sqrt5}}{2}$$
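For the asker's specific question about $\pi/6$: on the circle $r=a$ and the lemniscate $r^2=2a^2\cos 2\theta$, intersection means $2a^2\cos 2\theta = a^2$, i.e. $\cos 2\theta = \tfrac12$, so $2\theta = \pm\tfrac{\pi}{3}$ and $\theta = \pm\tfrac{\pi}{6}$. A one-line numerical confirmation:

```python
import math

# intersection of r = a with r² = 2a² cos 2θ:  cos 2θ = 1/2
theta = 0.5 * math.acos(0.5)
assert abs(theta - math.pi / 6) < 1e-12
```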
|integration|polar-coordinates|
1
Is every quasi-finite field algebraically closed?
It is known that Ax axiomatized the pseudofinite field F as follows: F is quasifinite [https://en.wikipedia.org/wiki/Quasi-finite_field] and F is PAC [https://en.wikipedia.org/wiki/Pseudo_algebraically_closed_field]. I read Zilber's article about how ACF (algebraically closed fields) is pseudofinite in Zariski language. It is also not difficult to notice that the equation $\exists x (x^2+1=0)$ has a solution in a finite field. I think that after all, an algebraically closed field is pseudofinite. It is clear that any PAC is ACF. A natural question arises: is any quasi-finite field an ACF? I found one good problem, from the book [Serre, Jean-Pierre (1979), Local Fields, Graduate Texts in Mathematics, vol. 67, translated by Greenberg, Marvin Jay, Springer-Verlag, ISBN 0-387-90424-7, MR 0554237, Zbl 0423.12016], page 192, exercise 3d: "Deduce that every ACF $\Omega$ admits a quasi-finite subfield E having $\Omega$ as its algebraic closure."
"I think that after all, an algebraically closed field is pseudofinite." Certainly not. Let $\varphi$ be the sentence asserting that the definable function $x\mapsto x^2$ is surjective but not injective: $(\forall y\, \exists x\, x^2 = y)\land (\exists z\exists w\, (z\neq w\land z^2 = w^2))$ . The sentence $\varphi$ is true in every algebraically closed field of characteristic $\neq 2$ , but it has no finite model, because in a finite structure $M$ , every surjective function $M\to M$ is injective. "It is clear that any PAC is ACF." No, the implication goes the other way. Algebraically closed fields are PAC, but there are PAC fields which are not algebraically closed. "A natural question arises: is any quasi-finite field an ACF?" No. In fact, the negative answer is immediate from the definition of quasi-finite. A field $K$ is quasi-finite if its absolute Galois group is isomorphic to $\widehat{\mathbb{Z}}$ , the profinite completion of the integers. But the absolute Galois group of an algebraically closed field is trivial.
|field-theory|model-theory|
0
When can a homotopy lift to its compactification?
Let $h_t:X\to Y$ be a homotopy; we assume both spaces are locally compact and Hausdorff, and each $h_t$ is proper. When can we lift it to a homotopy of one-point compactifications $\bar X\to \bar Y$ ? I think this is true when $h$ is an isotopy, or at least when $h_t$ is a homeomorphism for each $t$ ? Besides, if $X,Y$ are smooth manifolds, is the story different in the smooth category? Remark: the motivation of this question is that the isotopy class of knots in $\mathbb R^3$ is the same as in the 3-sphere.
Let $Z$ be a locally compact Hausdorff topological space. Recall that the 1-point compactification of $Z$ , denoted $Z^+$ or $Z\cup \infty$ is the disjoint union of $Z$ and a singleton $\{\infty\}$ , where the neighborhoods of $\infty$ are complements to compact subsets in $Z$ , while $Z$ is topologically embedded in $Z^+$ . A sequence in $Z$ is said to be divergent if it converges to $\infty$ in $Z^+$ . A (continuous) map of two Hausdorff spaces is called proper if the preimage of every compact is again compact. A continuous map $f: X\to Y$ of two locally compact Hausdorff spaces is proper if and only if it extends to a continuous map $f: X^+\to Y^+$ by sending $X^+\setminus X$ to $Y^+\setminus Y$ . A homotopy $H: X\times I\to Y$ is said to be proper if it is a proper map. Suppose that $X, Y$ are manifolds. Then it is easy to see that a homotopy $H: X\times I\to Y$ is proper if and only if the following holds: For every sequence $t_n\to t_0\in I=[0,1]$ and every divergent sequence $x_
|general-topology|algebraic-topology|homotopy-theory|compactification|
1
Solving The Quasi-Symmetric Quartic Equation
This is a self-answered question for the following problem: Solve the equation: $$4x^4 - 36x^3 + 61x^2 + 90x + 25 = 0.$$ See my answer.
I am aware of a technique for reducing quartic polynomials using the substitution $y=x+\frac{a_2}{4a_1}$ . Applying this substitution leads to $4y^4-\frac{121}{2}y^2+\frac{121^2}{64}$ , which, despite somewhat scary coefficients, is easy to solve directly or factor. Reference: Factoring Quartic Polynomials
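To make the substitution concrete, here is a quick sanity check in exact rational arithmetic (a sketch; the helper functions and the explicit roots $5$ and $-1/2$ are my own working, not part of the answer above):

```python
from fractions import Fraction as F
from math import comb

# Coefficients of 4x^4 - 36x^3 + 61x^2 + 90x + 25, highest degree first.
coeffs = [F(4), F(-36), F(61), F(90), F(25)]

def poly_eval(cs, x):
    """Horner evaluation of a polynomial given high-to-low coefficients."""
    acc = F(0)
    for c in cs:
        acc = acc * x + c
    return acc

# The substitution y = x + a2/(4 a1) = x - 9/4, i.e. x = y + 9/4.
shift = F(9, 4)

def depressed_coeff(k, cs, s):
    """Coefficient of y^k after substituting x = y + s (binomial expansion)."""
    n = len(cs) - 1
    return sum(cs[j] * comb(n - j, k) * s ** (n - j - k)
               for j in range(n + 1) if k <= n - j)

new = [depressed_coeff(k, coeffs, shift) for k in range(5)]  # y^0 .. y^4

# The cubic term vanishes, and for this quasi-symmetric quartic so does
# the linear one: 4y^4 - (121/2) y^2 + 121^2/64, a biquadratic in y.
assert new[3] == 0 and new[1] == 0
assert new[4] == 4 and new[2] == F(-121, 2) and new[0] == F(121**2, 64)

# The biquadratic gives y = ±11/4; undoing the shift, x = 5 or x = -1/2,
# each a double root of the original quartic.
assert all(poly_eval(coeffs, r) == 0 for r in (F(5), F(-1, 2)))
```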
|polynomials|quartics|
0
Construct a linear transformation $T:\Bbb R^4 →\Bbb R^4$ such that $\ker T =\operatorname{im}T =\operatorname{span}\{(1,1,1,1) , (0,1,1,1)\}$
I need help for the following exercise: Construct a linear transformation $T:\Bbb R^4 →\Bbb R^4$ such that $$\ker T =\operatorname{im}T =\operatorname{span}\{(1,1,1,1) , (0,1,1,1)\}.$$ I know that $$T(1,1,1,1)=T(0,1,1,1)=\vec 0$$ and I completed these two vectors to a basis of $\Bbb R^4$ , so I took $$T(0,0,0,1)=(0,0,0,1)\quad\text{and}\quad T(0,1,0,0) = (1,1,1,1),$$ but I couldn't find a formula for $T(x,y,z,w)$ .
Your (implicit) claim that $$e_1:=(1,1,1,1),\quad e_2:=(0,1,1,1),\quad e_3:=(0,0,0,1),\quad e_4:=(0,1,0,0)$$ make a basis of $\Bbb R^4$ is correct (how would you prove it?), hence for any choice of four vectors $f_1,f_2,f_3,f_4\in\Bbb R^4$ , there is a unique linear transformation $T:\Bbb R^4\to\Bbb R^4$ that maps $e_i$ to $f_i$ for $i=1,2,3,4$ . (So, you won't need any `formula $T(x,y,z,w)$ '.) In order to get $\ker T=V:=\operatorname{span}\{e_1,e_2\}$ , you are right that $f_1=f_2=\vec0$ is necessary (but not sufficient on its own, since $\ker T$ might still be larger). In order to get moreover $\operatorname{im}T=V$ , a necessary condition is $\operatorname{span}\{f_3,f_4\}=V$ (but not sufficient on its own, since $\operatorname{im}T$ might be larger, depending on the choice of $f_1,f_2$ ). Your choice $f_3=e_3$ is therefore not convenient, since $e_3=(0,0,0,1)\notin\{(x,y,y,y)\mid x,y\in\Bbb R\}=V$ . If you replace it for instance with $f_3=e_2$ (keeping your $f_4=e_1$ and $f_1=f_2
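Carrying the answer's recipe to the end (with the suggested replacement $f_3=e_2$ , keeping $f_4=e_1$ and $f_1=f_2=\vec 0$ ) and rewriting the standard basis in terms of $e_1,\dots,e_4$ by hand yields one explicit formula; the formula and the checks below are my own derivation, offered as a sketch:

```python
def T(v):
    """T(x, y, z, w) = (y - z, y - 2z + w, y - 2z + w, y - 2z + w).

    Obtained by sending e1, e2 -> 0, e3 -> e2, e4 -> e1 (one valid choice)
    and expressing the standard basis vectors in the basis e1..e4.
    """
    x, y, z, w = v
    t = y - 2 * z + w
    return (y - z, t, t, t)

e1, e2 = (1, 1, 1, 1), (0, 1, 1, 1)
zero = (0, 0, 0, 0)

# e1 and e2 lie in the kernel ...
assert T(e1) == zero and T(e2) == zero

# ... and every value of T has the shape (s, t, t, t), i.e. lies in
# span{e1, e2}; moreover T(T(v)) = 0, so im T is contained in ker T.
samples = [(a, b, c, d) for a in range(-2, 3) for b in range(-2, 3)
           for c in range(-2, 3) for d in range(-2, 3)]
for v in samples:
    s, t, t2, t3 = T(v)
    assert t == t2 == t3
    assert T(T(v)) == zero

# Every (s, t, t, t) is hit: take v = (0, s, 0, t - s).
assert T((0, 3, 0, 2)) == (3, 5, 5, 5)
```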
|linear-algebra|
0
Euler class of a principal $SO(2)$-bundle over a lens space
Note that for a manifold $X$ , isomorphism classes of principal $SO(2)$ -bundles over $X$ are classified by their Euler classes in $H^2(X;\Bbb Z)$ . Now consider a lens space $L(p,q)$ ; we have $H^2(L(p,q);\Bbb Z)=\Bbb Z_p$ . Suppose there is a principal $SO(2)$ -bundle (or equivalently a principal $S^1$ -bundle) $P\to L(p,q)$ such that $P$ is homeomorphic to $S^3\times S^1$ . Then should the Euler class of the bundle be a unit (i.e. a generator) of $H^2(L(p,q);\Bbb Z)=\Bbb Z_p$ ? This seems true, according to p.338 (second paragraph) of this paper: https://bpb-us-e2.wpmucdn.com/faculty.sites.uci.edu/dist/3/246/files/2011/03/23_PseudoFreeOrbifolds.pdf , but I can't see why.
We may construct principal $S^1$ -bundles over $L(p,q)$ as follows. We have a $\Bbb Z_p$ action on $S^3\times S^1$ generated by $((z_1,z_2),z)\mapsto ((e^{i2\pi/p}z_1,e^{i2\pi q/p}z_2),e^{i2\pi k/p}z)$ , where $1\le k\le p-1$ . The projection $S^3\times S^1\to S^3$ induces $\tilde{p}:(S^3\times S^1)/\Bbb Z_p\to S^3/\Bbb Z_p=L(p,q)$ by passing to the orbit space, which is a $S^1$ -bundle. This bundle is principal, being the unit sphere bundle of a complex line bundle $(S^3\times \Bbb C)/\Bbb Z_p\to L(p,q)$ . To emphasize the importance of $k$ in this construction, we will use the notation $S^3\times_{\Bbb Z_p}\Bbb C(k)$ and $S^3\times_{\Bbb Z_p}S^1(k)$ to denote the total space of the complex line bundle and its unit sphere bundle respectively. We now investigate the total space. If $\gcd(k,p)=d\neq 1$ , then $S^3\times_{\Bbb Z_p}S^1(k)\not\cong S^3\times S^1$ . Consider the subgroup $\Bbb Z_d$ of $\Bbb Z_p$ acting on $S^3\times S^1$ by $$((z_1,z_2),z)\mapsto ((e^{i2\pi(p/d)/p}z_1,e^{
|algebraic-topology|homology-cohomology|vector-bundles|principal-bundles|characteristic-classes|
1
If $\{a,b\}$ and $\{\bar a,\bar b\}$ have equal sum and product then they are the roots of the same quadratic.
I've searched for solutions online and I found them, but there's still one solution that I can't understand. Here are some solutions to the problem; most of them start by showing that $k \geq 4$ is impossible. One in particular says: Assume $k \geq 4$ . Hence we get the chains $$n=a_1a_k=a_2a_{k-1}=\cdots$$ $$m=(a_1+1)(a_k+1)=(a_2+1)(a_{k-1}+1)=\cdots \implies u=a_1+a_k=a_2+a_{k-1}=\cdots$$ Hence, the roots of the quadratic $x^2-ux+n=0$ are all the pairs $\{a_1,a_k\},\{a_2,a_{k-1}\}, \cdots$ which is a contradiction as $a_1 < a_2 < \cdots < a_k$ . I've read all the other solutions, but I still don't understand where this quadratic $x^2-ux+n=0$ came from, and why its solutions must be $\{a_1,a_k\},\{a_2,a_{k-1}\}, \cdots$ . Thanks in advance to everyone reading this.
You have for each $i$ , $$\begin{align*} m &= (a_i + 1)(a_{k+1-i}+1) \\ &= a_i a_{k+1-i} + (a_i +a_{k+1-i}) + 1 \\ &= n + (a_i +a_{k+1-i}) + 1 \end{align*}$$ which gives you $\forall i,j \, a_i +a_{k+1-i} = a_j +a_{k+1-j} = u$ . But seeing the formula $(a_i + 1)(a_{k+1-i}+1) = (-1-a_i )(-1-a_{k+1-i})$ , you should recognize a polynomial evaluated at $-1$ with roots $a_i , \,a_{k+1-i}$ , namely $$p(x) = (a_i -x)(a_{k+1-i}-x) = (x-a_i )(x-a_{k+1-i}) = x^2 - ux + n$$ Written like that, it is obvious that the roots are the ones described in your proof. And this polynomial does not depend on $i$ , hence all the pairs $\{a_i,a_{k+1-i}\}$ are in the set of its roots.
|elementary-number-theory|
0
Is there an element outside a ring that maps an ideal of the ring to a subset?
Let $R$ be a commutative ring, $I \subset R$ a finitely generated proper ideal, and $K$ a field containing $R$ . Let $\alpha_i$ be an arbitrary set of generators of $I$ , and define the column matrix $v = [\alpha_1, \dots, \alpha_n]^\intercal$ . My question is, does there exist $\gamma \in K - R$ such that the entries of $\gamma v$ sum to an element of $I$ ? I tried to prove that there isn't such a $\gamma$ by the following argument. For every element $j \in I$ there is some diagonal matrix $M$ with entries in $R$ that maps $v$ to a column matrix representing $j$ , e.g. if $j = a_1 \alpha_1 + \cdots + a_n \alpha_n$ , then $Mv = [a_1\alpha_1, \dots, a_n \alpha_n]^\intercal$ . So if $\gamma v = Mv$ , then $\gamma$ must be an eigenvalue of $M$ . Since $M$ is diagonal, $\gamma$ must be an entry in $M$ , which implies $\gamma \in R$ , a contradiction. Can someone verify if this is true or if not, show which step in the proof is incorrect? For context, I'm reading Number Fields by Daniel Mar
Here is a counterexample to the original question: Consider $R=k[x,y]/(x^2-y^3)$ and the ideal $I=(x,y)$ . Notice: $\frac{x}{y}(x+y)=y^2+x\in I$ , whilst $\frac{x}{y}\in K-R$ .
|abstract-algebra|solution-verification|ring-theory|commutative-algebra|algebraic-number-theory|
1
Two group elements induce the same permutation on $A$ if and only if they are in the same coset of the kernel.
Page 113 - Dummit and Foote - Group actions Two group elements induce the same permutation on $A$ if and only if they are in the same coset of the kernel. What does this mean? "Two group elements induce the same permutation on $A$" looks like it means $\sigma_1 A = A' = \sigma_2 A$ , and then I don't really understand the right-hand side of the $\iff$ .
An action $G\curvearrowright A$ is equivalent to a homomorphism $\varphi \colon G\to\operatorname{Sym}(A)$ , via $\varphi_g(a):=g\cdot a$ . Therefore: \begin{alignat}{1} &\varphi_g=\varphi_h\iff \\ &\varphi_g\varphi_h^{-1}=Id_A\iff \\ &\varphi_g\varphi_{h^{-1}}=Id_A\iff \\ &\varphi_{gh^{-1}}=Id_A\iff \\ &gh^{-1}\in\ker\varphi\iff \\ &g\in(\ker\varphi)h \\ \end{alignat} And of course $h\in(\ker\varphi)h$ .
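A small finite illustration of the equivalence above (my own toy example, not from the book): take $G=\Bbb Z_6$ acting on $A=\Bbb Z_3$ by translation mod $3$ ; the kernel is $\{0,3\}$ , and two elements induce the same permutation exactly when they lie in the same coset of it.

```python
# G = Z_6 acting on A = {0, 1, 2} by g . a = (a + g) mod 3.
G = range(6)
A = range(3)

def perm(g):
    """The permutation of A induced by g, recorded as a tuple of images."""
    return tuple((a + g) % 3 for a in A)

# Elements acting trivially form the kernel of the action.
kernel = [g for g in G if perm(g) == tuple(A)]
assert kernel == [0, 3]

# g and h induce the same permutation  <=>  g - h lies in the kernel,
# i.e. g and h are in the same coset of the kernel.
for g in G:
    for h in G:
        same_perm = perm(g) == perm(h)
        same_coset = (g - h) % 6 in kernel
        assert same_perm == same_coset
```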
|abstract-algebra|group-theory|group-actions|
0
Prove the diff Bianchi identity $d^\nabla R^\nabla=0$ using that $(d^\nabla\circ d^\nabla)\circ d^\nabla = d^\nabla\circ (d^\nabla\circ d^\nabla)$
I found on page 25 of Arthur Besse’s “Einstein Manifolds” that the differential Bianchi identity $$d^\nabla R^\nabla=0$$ follows from the alternate definition of the Riemann curvature tensor as the second covariant exterior derivative $$R_{X,Y}^\nabla s = -d^\nabla(d^\nabla s) (X,Y)$$ and the fact that $(d^\nabla\circ d^\nabla)\circ d^\nabla = d^\nabla\circ (d^\nabla\circ d^\nabla)$ . However, the proof of the differential Bianchi identity that I know is the one given by Robert Wald on pp. 39 and 40 of his book “General Relativity” and is more involved (in particular, it invokes the algebraic Bianchi identity). How do I see that $d^\nabla R^\nabla=0$ only from $(d^\nabla\circ d^\nabla)\circ d^\nabla = d^\nabla\circ (d^\nabla\circ d^\nabla)$ ?
I don't know a lot about Riemannian geometry, I only work on complex manifolds, but I believe the idea is the same. A connection is a linear map $d^\nabla : \Omega^k(X,E) \rightarrow \Omega^{k + 1}(X,E)$ that satisfies a Leibniz rule. Here, $E$ is a vector bundle (maybe the tangent bundle in your case?). Then, we can prove that $d^\nabla \circ d^\nabla : \Omega^0(X,E) \rightarrow \Omega^2(X,E)$ is $\Omega^0(X,\mathbb{C})$ -linear, thus there is a unique $R^\nabla \in \Omega^2(X,\mathrm{End}(E))$ such that $d^\nabla(d^\nabla s) = R^\nabla s$ for all sections $s$ . This is the curvature. Therefore, by the Leibniz rule, for all smooth sections $s$ , $$ d^\nabla(d^\nabla(d^\nabla s)) = d^\nabla(R^\nabla s) = (d^\nabla R^\nabla)s + R^\nabla d^\nabla s = (d^\nabla R^\nabla)s + d^\nabla(d^\nabla(d^\nabla s)). $$ This holds for all $s$ , hence $d^\nabla R^\nabla = 0$ .
|differential-geometry|
1
How does making a spanning tree solution "strongly feasible" for the Network Simplex method assist in cycling prevention?
Question : How does making a spanning tree solution "strongly feasible" for the Network Simplex method assist in cycling prevention? To start, I was reading An Implementation of Network Simplex Method (Big-M) for solving Minimum Cost Flow Problem (Hassan and Ghalawingy) where they say on page $4$ : Strongly Feasible Trees Property : A feasible tree $T$ with corresponding flow vector $x$ is said to be strongly feasible if every arc $(i, j)$ of $T$ with $x_{ij}=0$ is oriented away from the root. The artificial starting solution described above generates a strongly feasible basis, for consider each node, $i\in N$ . This was presented as a definition rather than an explanation as to why this is beneficial, so I dug around in the references and located Theoretical Properties of the Network Simplex Method " (Cunningham), page $200$ : We define a strongly feasible tree to be a feasible tree $T$ whose associated tree solution $x^0$ satisfies: every edge $f \in T$ such that $x^0_f = 0$ is direc
For some reason, you have stopped reading Cunningham's "Theoretical Properties of the Network Simplex Method" just as it was about to answer your question. We read on: Lemma 1. Let $T, T'$ be strongly feasible trees, such that $T'$ is a successor of $T$ with entering edge $e$ and leaving edge $f$ , and suppose that the associated pivot is degenerate. Then $\pi_v(T') = \pi_v(T) + c_e(T)$ if $f$ is an edge of the path in $T$ from $r$ to $v$ , and $\pi_v(T') = \pi_v(T)$ otherwise. Any simplex method which maintains strongly feasible trees does not admit cycling, for it follows from Lemma 1 that $\sum(\pi_v(T) : v\in V)$ strictly decreases through each degenerate sequence, and so no tree can be repeated. In particular, this suggests that the pivoting rule mentioned earlier prevents cycling, because it ensures that our spanning tree solution remains strongly feasible. Some questions you may have: Q1: Why does this pivoting rule (i.e. the rule "Pick the first possible leaving edge on $C(T,e)
|graph-theory|trees|network-flow|simplex-method|
1
Solving a second order ODE using the Fourier transform.
I am having a really hard time trying to apply the Fourier transform to a second order ODE. The question asks to find a solution to the ODE with the following conditions, $$\frac{d^2y}{dx^2} -k^2y = f(x)$$ $$\lim_{|x|\to\infty}y(x)= 0$$ $$\lim_{|x|\to\infty}\frac{dy}{dx} = 0$$ where $k$ is an arbitrary real constant, and I have to solve for the cases $f(x) = 1$ and $f(x) = \delta(x-x_0)$ . This is for a PDE class that I am taking and we are using the Haberman book, so the transforms I am doing are based off of the tables provided in the text. For the case $f(x) = 1$ , I get the following transform, $$Y(\omega) = -\frac{\delta(\omega)}{(\omega^2 + k^2)}$$ However, looking at the table provided, I am having a hard time trying to find the inverse transform. I can get something that looks transformable by doing the following, $$Y(\omega) = -\delta(\omega)\frac{1}{2k}\frac{2k}{(\omega^2+k^2)}$$ Using convolution leaves me with an answer of, $$y(x) = \frac{-1}{2k}e^{-k|x|}$$ but I am unsure of this i
Long story short: For $f(x)=1$ , the ODE does simply not have a solution (with the given decay conditions). Disclaimer: I am a little bit confused which version of the Fourier transform you are using since the constants are not consistent, I am guessing the unitary angular frequency version? ( https://en.wikipedia.org/wiki/Fourier_transform#Tables_of_important_Fourier_transforms ) Following your idea with the convolution (you made a small error here) yields $$ f(x)=1\Rightarrow Y(\omega)=-\frac{\sqrt{2\pi}\delta(\omega)}{\omega^2+k^2}=-\frac{1}{2k}\cdot\sqrt{2\pi}\cdot\sqrt{2\pi}\delta(\omega)\cdot\sqrt{\frac{2}{\pi}}\frac{k}{\omega^2+k^2}\\ \Rightarrow y(x)=-\frac{1}{2k}\left(1*e^{-k|x|}\right)=-\frac{1}{2k}\int_\mathbb{R}e^{-k|x|}dx=-\frac{1}{k^2} $$ which is a solution of the differential equation by itself but does obviously not satisfy the first decay condition. (Since the ODE is linear and autonomous a more general class of solutions can be found easily which reads $f(x) = c_1 e^{
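For the $f(x)=\delta(x-x_0)$ case, the candidate $y(x)=-\frac{1}{2k}e^{-k|x-x_0|}$ can be checked numerically: away from $x_0$ it solves the homogeneous equation, and its first derivative jumps by exactly $1$ across $x_0$ , which is the distributional contribution of the delta. A sketch (the parameter values, step sizes, and tolerances are my choices):

```python
import math

k, x0 = 2.0, 0.5

def y(x):
    """Candidate Green's function for y'' - k^2 y = delta(x - x0)."""
    return -math.exp(-k * abs(x - x0)) / (2 * k)

def d2(f, x, h=1e-4):
    """Central second difference approximating f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Away from x0 the homogeneous equation y'' - k^2 y = 0 holds ...
for x in (-1.0, 0.0, 1.5, 3.0):
    assert abs(d2(y, x) - k**2 * y(x)) < 1e-4

# ... and the first derivative jumps by 1 at x0.
eps, h = 1e-6, 1e-9
yp_right = (y(x0 + eps + h) - y(x0 + eps - h)) / (2 * h)
yp_left = (y(x0 - eps + h) - y(x0 - eps - h)) / (2 * h)
assert abs((yp_right - yp_left) - 1.0) < 1e-4
```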
|ordinary-differential-equations|fourier-transform|
0
Transforming partial derivatives to polar coordinates
I have to convert the following expression $V(x,y) = x\dfrac{\partial f}{\partial y} - y\dfrac{\partial f}{\partial x}$ to polar coordinates. How do i express the partial derivatives in terms of $r$ and $\theta$ ? The answer should be something like $V(r,\theta) = \dfrac{\partial F}{\partial \theta}$ , where $F(r,\theta) = f(r\cos\theta, r\sin\theta)$ .
By the chain rule: $\frac{\partial f}{\partial \theta}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial \theta}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial \theta}$ $\frac{\partial f}{\partial r}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial r}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial r}$ Which can be expressed as a matrix. $\begin{pmatrix} \frac{\partial f}{\partial \theta} \\ \frac{\partial f}{\partial r} \end{pmatrix} = \begin{pmatrix} \frac{\partial x}{\partial \theta} & \frac{\partial y}{\partial \theta} \\ \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \end{pmatrix} \begin{pmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y}\end{pmatrix}$ The matrix's inverse can be used to do a reverse coordinate transform without having to reuse the chain rule. You can effectively bypass the coordinate transform or much calculation at all. $V(x,y)=x \frac{\partial f}{\partial y}-y\frac{\partial f}{\partial x }=\vec{r}
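A numerical spot-check of the resulting identity $V(x,y)=\partial F/\partial\theta$ , using central differences (a sketch; the test function and evaluation point are arbitrary choices of mine):

```python
import math

def f(x, y):
    """Arbitrary smooth test function."""
    return math.exp(x) * math.sin(y) + x * y**2

def F(r, theta):
    """f expressed in polar coordinates."""
    return f(r * math.cos(theta), r * math.sin(theta))

def partial(g, args, i, h=1e-6):
    """Central-difference partial derivative of g in its i-th argument."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

r, theta = 1.3, 0.7
x, y = r * math.cos(theta), r * math.sin(theta)

# V(x, y) = x f_y - y f_x should equal dF/dtheta at the matching point.
V = x * partial(f, (x, y), 1) - y * partial(f, (x, y), 0)
dF_dtheta = partial(F, (r, theta), 1)
assert abs(V - dF_dtheta) < 1e-6
```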
|derivatives|partial-derivative|polar-coordinates|
0
What’s the definition of iterated covariant derivative?
I’m confused by Arthur Besse’s definition of iterated covariant derivative on p. 25 of his book “Einstein Manifolds". He writes: 1.17 Let $\nabla$ be a linear connection on a vector bundle $E$ over $M$ . For any section $s$ in $\mathcal E$ , $\nabla s$ is a section of $T^*M\otimes E$ . Now let $D$ be a linear connection on $M$ . Then $D$ and $\nabla$ induce a linear connection (that we still denote by $\nabla$ ) on $T^*M\otimes E$ , so we may define $\nabla(\nabla s)$ and denote it by $\nabla^2 s$ . It is the section of $T^{(2,0)}M\otimes E$ defined by $$(\nabla^2 s)_{X,Y} = \nabla_X(\nabla_Ys)-\nabla_{(D_XY)}s$$ Here, I’m first perplexed because Besse says that he is denoting $\nabla(\nabla s)$ by $\nabla^2 s$ and then goes on to define $(\nabla^2)_{X,Y} s$ as $\nabla_X(\nabla_Y s)$ minus some other term. Have I missed something here, or did I find a minor mistake in his writing? In any case, he continues Now, using an obvious induction we may define the iterated covariant derivative
For $T$ a vector valued $(0,2)$ -tensor, we have $$ \nabla T(X,Y,Z) = \nabla_X(T(Y,Z)) - T(\nabla_XY,Z) - T(Y,\nabla_XZ). $$ If $T = \nabla^2s$ , note that $ \nabla^2s(Y,Z) = \nabla_Y(\nabla_Zs) - \nabla_{\nabla_YZ}s, $ so that \begin{align} \nabla^3s(X,Y,Z) &= \nabla_X(\nabla^2s(Y,Z)) - \nabla^2s(\nabla_XY,Z) - \nabla^2s(Y,\nabla_XZ)\\ &= \nabla_X\left(\nabla_Y(\nabla_Zs) - \nabla_{\nabla_YZ}s\right) \\ &\quad - \nabla_{\nabla_XY}(\nabla_Zs) + \nabla_{(\nabla_{\nabla_XY}Z)}s \\ &\quad - \nabla_Y(\nabla_{\nabla_XZ}s) + \nabla_{(\nabla_Y(\nabla_XZ))}s \\ &= \nabla_X(\nabla_Y(\nabla_Zs)) \\ &\quad -\nabla_X(\nabla_{\nabla_YZ}s) - \nabla_{\nabla_XY}(\nabla_Zs) - \nabla_Y(\nabla_{\nabla_XZ}s) \\ &\quad + \nabla_{(\nabla_{\nabla_XY}Z)}s + \nabla_{(\nabla_Y(\nabla_XZ))}s. \end{align}
|differential-geometry|
1
Helicopters problem: What is the probability of being heard but not seen?
Consider an observer who can hear helicopters at a distance of $s$ and can see helicopters at a distance of $r$ . Let $s \geq r$ and fix $s = 1$ . Given an observer can hear a helicopter flying in a straight path over them, determine a probability function $P(r)$ that an observer cannot see the helicopter. My current solution: We only consider helicopters that enter the audible radius, so we can define any path from the edge of the outer circle with radius $s$ through the circle. A tangent line to this circle is the boundary for a helicopter which enters the audible range. We define $\theta_{\text{out}}$ as the angle between a tangent of the outer circle (the audible range) and a tangent of the inner circle (the visible range) extending from the same point on the outer circle. This is the angle where helicopters are heard, but not seen. We define $\theta_{\text{in}}$ as the angle where helicopters are both heard and seen. Hence, our probability that a line goes through the audible range, but not the visible range is: $P(
The problem is similar to Bertrand paradox , in which an ill-posed problem has different answers of $\frac{1}{3}$ , $\frac{1}{4}$ , and $\frac{1}{2}$ , depending on how uniform selection is defined based on three selection methods of random endpoint, random midpoint, and random radius (radial point). Here, similarly you can have different solutions. Your answer is the same as the endpoint method. It implicitly assumes that the person is always there and can always hear when any helicopter enters the first circle. The answer $$1-\frac{r^2}{s^2}$$ obtained by dividing the areas is related to the midpoint method. It implicitly assumes that the person happens to be in a position to hear the sound of a helicopter entering the first circle, which depends on the position of the helicopter, not only its distance from the person. If you use the radius method, the answer is $$1-\frac{r}{s}.$$ Here, it is implicitly assumed that the person happens to be in a position to hear the sound of a helicopter e
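All three answers can be reproduced by Monte Carlo, one chord model per selection method (a sketch; the sample size, the fixed seed, and the choice $r=1/2$ , $s=1$ are mine; the endpoint method's limit works out to $2\arccos(r)/\pi$ ):

```python
import math
import random

random.seed(0)
r, N = 0.5, 200_000  # visible radius r, audible radius s = 1

# Endpoint method: chord through two uniform points on the outer circle;
# its distance from the center is |cos((b - a)/2)|, so it misses the
# inner circle iff that exceeds r.  Limit: 2*acos(r)/pi  (= 2/3 here).
p_end = sum(abs(math.cos((random.uniform(0, 2 * math.pi)
                          - random.uniform(0, 2 * math.pi)) / 2)) > r
            for _ in range(N)) / N

# Midpoint method: chord with midpoint uniform in the disk (radius
# sqrt(U) makes the point uniform in area).  Limit: 1 - r^2.
p_mid = sum(math.sqrt(random.random()) > r for _ in range(N)) / N

# Radius method: chord perpendicular to a radius at uniform distance d.
# Limit: 1 - r.
p_rad = sum(random.random() > r for _ in range(N)) / N

assert abs(p_end - 2 * math.acos(r) / math.pi) < 0.01
assert abs(p_mid - (1 - r**2)) < 0.01
assert abs(p_rad - (1 - r)) < 0.01
```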
|probability|geometry|
0
Prove/Disprove for $f:\mathbb N\to\mathbb N,$ if $f\circ f$ is periodic then $f$ is periodic
We define periodic according to the standard definition where $f(x)=f(x+p)$ ; this means that there exists $p$ so that $f(f(x)) = f(f(x+p))$ . However, I am unable to find a counterexample or construct a formal proof. Any insights are welcome
Hint: Start with $$f(x)=\begin{cases} 0, & \text {if $x$ is even}\\ 1, & \text {if $x$ is odd} \end{cases} $$ This has $f\circ f = f$ periodic. Now find a small change $f'$ to $f$ such that $f'\circ f' = f$ (so periodic), but $f'$ becomes "slightly" non-periodic.
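One concrete way to carry out this hint (my own realization of the "small change"; the hint leaves it as an exercise): alter $f$ at a single odd point, sending it to a different odd number, so the alteration is invisible after composing.

```python
def f(x):
    """Periodic with period 2: 0 on evens, 1 on odds; f o f = f."""
    return x % 2

def f2(x):
    """f with one value changed: f2(3) = 5.  Then f2 o f2 = f (periodic),
    yet f2 itself is not periodic: a period P would force
    f2(3) = f2(3 + P), but f2(3 + P) is 0 or 1 while f2(3) = 5."""
    return 5 if x == 3 else x % 2

N = 200
assert all(f(f(x)) == f(x) for x in range(N))      # f o f = f, periodic
assert all(f2(f2(x)) == f(x) for x in range(N))    # f2 o f2 = f, periodic
# f2 has no period: every candidate P produces a mismatch (at x = 3).
assert all(any(f2(x) != f2(x + P) for x in range(N))
           for P in range(1, 50))
```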
|functions|periodic-functions|
1
is there a closed solution to counting pairwise combinations of the elements of two sets without resampling?
This is the pairing algorithm I have written and it seems to work.

    numlist = (1, 2, 3, 4)
    letter_combinations = ("pa", "pb")
    i = 0
    for num1 in numlist:
        for num2 in numlist:
            if num1 != num2:
                used_combinations = set()
                for comb1 in letter_combinations:
                    for comb2 in letter_combinations:
                        if comb1 != comb2 and comb2 not in used_combinations:
                            print(f"({comb1}{num1},{comb2}{num2})", end=" ")
                            i += 1
                            used_combinations.add(comb1)
        print()
    print(i)

The number of pairwise combinations without resampling is 12. (pa1,pb2) (pa1,pb3) (pa1,pb4) (pa2,pb1) (pa2,pb3) (pa2,pb4) (pa3,pb1) (pa3,pb2) (pa3,pb4) (pa4,pb1) (pa4,pb2) (pa4,pb3) But is there a formula using the number of elements of each set to calculate the total number of pairs?
Let $N$ be the number of elements of num_list , and let $L$ be the number of elements of letter_combinations . Every combination in this list looks like $$ (n_1,\ell_1,n_2,\ell_2) $$ such that... $n_1,n_2$ are in num_list , and $\ell_1,\ell_2$ are in letter_combinations . $n_1\neq n_2$ , and $\ell_1\neq \ell_2$ . $\ell_1$ appears earlier in letter_combinations than $\ell_2$ . Let us count the number of ways to specify such a list, using simple combinatorics. There are $N$ choices for $n_1$ , since it can be anything in num_list . There are $N-1$ choices for $n_2$ , since it can be anything in num_list , except for $n_1$ . Choosing $\ell_1$ and $\ell_2$ requires a little more care. Naively, there are $L$ choices for $\ell_1$ (anything in letter_combinations ), and there are $L-1$ choices for $\ell_2$ (anything in letter_combinations besides $\ell_1$ ). However, exactly half of these choices are invalid; for each $\ell_1\neq \ell_2$ , only one of the pairs $(\ell_1,\ell_2)$ and $(\ell_2,
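Putting the count together gives the closed form $N(N-1)\cdot\frac{L(L-1)}{2}$ , which equals $4\cdot 3\cdot 1 = 12$ for the example. A quick check of the formula against a brute-force rerun of the original nested-loop logic (a sketch; the function names are mine):

```python
from itertools import permutations

def closed_form(N, L):
    """N(N-1) ordered number pairs times L(L-1)/2 letter pairs."""
    return N * (N - 1) * L * (L - 1) // 2

def brute_force(N, L):
    """Replay the original algorithm's loop structure and count outputs."""
    nums = range(N)
    letters = [f"p{i}" for i in range(L)]  # stand-ins for "pa", "pb", ...
    count = 0
    for n1, n2 in permutations(nums, 2):   # ordered pairs, n1 != n2
        used = set()
        for c1 in letters:
            for c2 in letters:
                if c1 != c2 and c2 not in used:
                    count += 1
                    used.add(c1)
    return count

assert closed_form(4, 2) == brute_force(4, 2) == 12
for N in range(1, 6):
    for L in range(1, 5):
        assert closed_form(N, L) == brute_force(N, L)
```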
|combinatorics|sampling|
1
Formalism of syntax in first order logic
I am attending an introductory model theory course this semester. The professor started talking about formulas and sentences without paying much attention to the formalism of the syntax. However, as the course progressed, I began to notice that we need to put the formulas in a set because Zorn's lemma is needed to prove theorems like the compactness theorem. So I was thinking that we would probably need to define formulas and sentences as elements of the free monoid generated by the formal symbols. This construction leads to many questions that were overlooked and that have to be checked, for example, the necessity of introducing parentheses. My thoughts are: Are these technical details that can somehow be avoided, or are they necessary as the foundation of first order logic?
The usual formal approach in textbooks is to treat syntactic classes such as formulas or terms as subsets of the set of all strings (lists) of symbols drawn from a small vocabulary (yes, the set of all lists is a free monoid, but that fact doesn't add much value in this context: you just need to know that you can construct new lists from old by prefixing or postfixing a list with a symbol or by concatenation of two lists). The subset is typically defined by saying that it is the smallest set closed under certain constructions. So, for example, we might have a vocabulary comprising variables $x_1, x_2, \ldots$ , a single (binary) operator symbol $+$ together with brackets and commas as punctuation symbols. We could then define the set $\cal T$ of all terms to be the smallest set of strings of these symbols that: Contains each string " $x_i$ " comprising a single variable. Contains " $(t_1+t_2)$ " whenever it contains $t_1$ and $t_2$ (here I have taken $t_1$ , prefixed it with "(", postfixe
|logic|first-order-logic|model-theory|
0
Given a measure of some subsets, how can you determine the number of sets in the sigma algebra?
I was given a question that I can't wrap my head around. Given a measure space ( $X, \Omega,\mu$ ), with $X$ = {1,2,3, ... 16 }, we have $C_1$ = {1,2,3,4,5,6,7,8 } $C_2$ = {9,10,11,12,13,14,15,16} $C_3$ = {1,2,5,6,9,10,13,14} $C_4$ = {3,4,7,8,11,12,15,16} $C$ denotes the field generated by { $C_1,C_2,C_3,C_4$ }. We then let $\Omega$ = $\sigma[C]$ . With $\mu(C_\mathscr i) = 1/2, 1\le\mathscr i\le4$ , and $\mu(C_1\cap C_3) = 1/4$ , how can I show that $\hat{\Omega_\mu} = \Omega$ , with $2^4 = 16$ sets? I have tried equating the measures of the subsets to 1, but I do not know how it will determine the number of sets in $\Omega$ . Additionally, I suppose that proving the completeness of $\Omega$ (hence it being equal to $\hat{\Omega_\mu}$ ) will come after I get the number of subsets in the sigma algebra? Please enlighten me. Thank you so much!
Let $\mathcal{C}$ be a family of subsets of some finite set $X$ , and let $\Omega$ be the $\sigma$ -algebra generated from $\mathcal{C}$ . Let $\mathcal{A}$ be the set of subsets $S \subseteq X$ that are "indivisible" in the sense that their elements always go together as a group, or not at all, in any $C \in \mathcal{C}$ , and that are as big as possible. More formally, define $\mathcal{A}$ to be the set of subsets $S$ such that: For any set $C \in \mathcal{C}$ , either $S \cap C = \emptyset$ or $S \cap C = S$ . There is no proper superset of $S$ that has property (1). I claim that $|\Omega| = 2^{|\mathcal{A}|}$ . I'll give an outline for the proof and leave the details to you. The reason is that we have a bijection $\mathcal{P}(\mathcal{A}) \rightarrow \Omega$ , where $\mathcal{P}(\mathcal{A})$ is the power set of $\mathcal{A}$ , given by $f(T) = \bigcup_{A \in T} A$ . To see this, we must show $f(T) \in \Omega$ for any $T \in \mathcal{P}(\mathcal{A})$ . To show this, first show that any $A \i
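For the concrete sets in the question, this procedure is easy to run: the membership signatures partition $X$ into four atoms, giving $|\Omega| = 2^4 = 16$ . A sketch (variable names are mine):

```python
from itertools import combinations

X = range(1, 17)
C1 = set(range(1, 9))
C2 = set(range(9, 17))
C3 = {1, 2, 5, 6, 9, 10, 13, 14}
C4 = set(X) - C3
generators = [C1, C2, C3, C4]

# Group points of X by their membership signature; each group is an atom.
groups = {}
for x in X:
    sig = tuple(x in C for C in generators)
    groups.setdefault(sig, set()).add(x)
atoms = list(groups.values())
assert len(atoms) == 4
assert sorted(map(sorted, atoms)) == [[1, 2, 5, 6], [3, 4, 7, 8],
                                      [9, 10, 13, 14], [11, 12, 15, 16]]

# The sigma-algebra is the set of all unions of atoms: 2^4 = 16 sets,
# closed under complement (and hence under union and intersection).
sigma = set()
for k in range(len(atoms) + 1):
    for combo in combinations(atoms, k):
        sigma.add(frozenset().union(*combo))
assert len(sigma) == 2 ** 4 == 16
assert all(frozenset(X) - S in sigma for S in sigma)
```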
|combinatorics|measure-theory|measurable-sets|
1
Untwisting is isomorphism
So I have come across the following claim in this paper by B. Poonen (Lemma 2.2): Let $X$ be a smooth quasi-projective scheme over $\mathbb{F}_q$ , $P$ a closed point, $\mathfrak{m}$ the ideal sheaf corresponding to $P$ and $Y$ the closed subscheme corresponding to $\mathfrak{m}^2$ . Then there is an isomorphism $$H^0(Y, \mathcal{O}_Y(d)) \overset{\sim}{\longrightarrow} H^0(Y, \mathcal{O}_Y),$$ by untwisting with a suitable coordinate. I don't see how this is an isomorphism as I would expect the left vector space to have a higher dimension than the right. Can anyone give me a hint why this map is indeed an isomorphism?
$Y$ is affine with reduction $\operatorname{Spec} k(P)$ , so any line bundle on $Y$ is in fact trivial. You can see this by taking $M$ a projective module of rank one and a module map $f:k[Y]\to M$ and showing that $f$ is an isomorphism iff $\overline{f}:k[Y]/\mathfrak{m}\to M/\mathfrak{m}M$ is, and identifying $k[Y]/\mathfrak{m}\cong M/\mathfrak{m}M\cong k(P)$ .
|algebraic-geometry|
1
Symmetry group of the unit circle in $\mathbb{R}^2$ versus $\mathbb{R}/\mathbb{Z}$.
I am studying a set of lecture notes on group theory, and I don't think I understand a point the author makes about the unit circle and its symmetry group in relation to $\mathbb{R}/\mathbb{Z}$ . Let $S^1$ denote the set of $(x,y) \in \mathbb{R}^2$ with $x^2 + y^2 = 1$ . (This could be complex, I believe, and we'd arrive at the same conclusion. That could be the missing link in my understanding.) The symmetry group of $S^1$ consists of rotations and reflections, which is infinite. I cannot fully visualize this, but my mental picture is that it takes a point $(x,y)$ on the circle and either moves it counterclockwise along the unit circle (in polar coordinates, the value of $\theta$ increases mod $2\pi$ ) or reflects it in any line through the origin. The author claims that the rotation of $S^1$ is isomorphic to $\mathbb{R}/\mathbb{Z}$ . I'm not certain I fully understand this. I believe the complex unit circle is isomorphic to $\mathbb{R}/\mathbb{Z}$ with isomorphism given by (denoting the
There's only one thing keeping the unit circle from being isomorphic to its rotation group, which is that the unit circle is not technically a group - just a set. There's no intrinsic binary operator on that set of points. But you can quite easily give it a reasonable group operation - identify the circle with the complex unit circle as you do in the question, and use complex multiplication as your operation. Under this operation, the circle is indeed isomorphic to its set of rotations, and you can easily construct the isomorphism: Just map every point $z$ to the unique rotation that takes the complex number $1$ to $z$ .
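This isomorphism is easy to check numerically (a sketch of my own; I represent each rotation by the $2\times 2$ matrix sending $(1,0)$ to $(\cos t,\sin t)$ ):

```python
import cmath
import math

def rotation(z):
    """The rotation matrix sending the point 1 (i.e. (1,0)) to z = e^{it}."""
    t = cmath.phase(z)
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

# Homomorphism property: rotation(z1 * z2) = rotation(z1) @ rotation(z2),
# i.e. complex multiplication on the circle corresponds to composing
# rotations; phase wrap-around does not matter since rotation by t and
# by t + 2*pi give the same matrix.
for a in (0.3, 1.7, 2.9):
    for b in (0.5, 2.2, 5.1):
        z1, z2 = cmath.exp(1j * a), cmath.exp(1j * b)
        assert close(rotation(z1 * z2), matmul(rotation(z1), rotation(z2)))
```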
|group-theory|symmetric-groups|group-isomorphism|
1
Show that $\int_0^\pi\arctan\left(\frac{a\cos x+b\sin x}{c\cos x+d\sin x}\right)dx=\pi\arctan\left(\frac{ac+bd}{|bc-ad|+c^2+d^2}\right)$
Given that $a,b,c,d\in\mathbb{R}$ and that $c$ and $d$ are not both $0$ , show that $$\int_0^\pi\arctan\left(\frac{a\cos x+b\sin x}{c\cos x+d\sin x}\right)\mathrm dx=\pi\arctan\left(\frac{ac+bd}{|bc-ad|+c^2+d^2}\right).$$ I was trying to answer a question and ended up with this integral. I don't think it helped me answer that other question, but I thought this integral is interesting by itself. I teased out the RHS expression by first noticing that $\tan \left(\frac{1}{\pi}\int_0^\pi\arctan\left(\frac{a\cos x+b\sin x}{c\cos x+d\sin x}\right)\mathrm dx\right)$ seemed to be always rational when $a,b,c,d\in \mathbb{Z}$ , then setting two of $a,b,c,d$ equal to $1$ and then looking for patterns in the value of the integral in terms of the other two. That's all the progress I've made so far.
This is not a $100\%$ complete answer, but if no mistakes have been made, then the calculus problem can be boiled down to an algebraic one, after taking an elaborate detour into the complex world. Shift the integration range down by $\dfrac\pi2$ , then substitute $x=\arctan y=\arctan\dfrac{b-dz}{a-cz}$ : $$\begin{align*} I &= \int_0^\pi \arctan \left(\frac{a \cos x + b \sin x}{c \cos x + d \sin x}\right) \, dx \\ &= \int_{-\tfrac\pi2}^\tfrac\pi2 \arctan \left(\frac{a\tan x-b}{c\tan x-d}\right) \, dx \\ &= \int_{-\infty}^\infty \frac{\arctan \left(\frac{ay-b}{cy-d}\right)}{1+y^2} \, dy \\ &= \int_{-\infty}^\infty \frac{(bc-ad) \arctan z}{a^2+b^2 - 2(ac+bd) z + \left(c^2+d^2\right) z^2} \, dz \tag{$*$} \end{align*}$$ See Zacky's comment for the fine print of changing variables to $z$ in $(*)$ . Provided the discriminant of the denominator $p(z)$ is negative, i.e. $$4(ab+cd)^2 - 4(a^2+c^2)(b^2+d^2) = -4(bc-ad)^2 < 0$$ (agreeing with the conjectured condition), we can rewrite $I$ in terms of $J(
|calculus|integration|trigonometry|definite-integrals|trigonometric-integrals|
0
Three players toss an unfair coin - Give an intuitive explanation of dependent
Three players toss an unfair coin each, independently of each other, with the same probability p = 1/3 that the coin shows heads. Let X = 1 if player #2 gets the same result as player #1 - Call this a success (otherwise X = 0). Let Y = 1 if player #3 gets the same result as player #1 - Call this a success (otherwise Y = 0). From calculations, I know that X and Y are dependent, and this is easy to show. But I find it difficult to give an intuitive explanation of why X and Y are dependent. Can someone help with an intuitive explanation for this.
The dependence of $X$ and $Y$ is intimately related to the value of $p$ , since if the coin is fair, $X$ and $Y$ are actually independent . To understand why, let the outcomes of each player's coin toss be denoted $T_1, T_2, T_3$ where $\Pr[T_i = 1] = p$ . We can calculate $$\begin{align} \Pr[Y = 1 \mid X = 1] &= \Pr[T_3 = T_1 \mid T_2 = T_1] \\ &= \frac{\Pr[T_3 = T_2 = T_1]}{\Pr[T_2 = T_1]} \\ &= \frac{p^3 + (1-p)^3}{p^2 + (1-p)^2}, \end{align}$$ and $$\Pr[Y = 1] = \Pr[T_3 = T_1] = p^2 + (1-p)^2.$$ Thus if $X$ and $Y$ are independent, we require $p \in \{0, 1/2, 1\}$ , of which only $p = 1/2$ admits a non-deterministic sample space. An intuitive way to see this is to note that if the coin is fair, then the event $T_1 = T_2 = 1$ has the same probability as $T_1 = T_2 = 0$ . The same is true for $T_1$ and $T_3$ . So whether the first two players' coin tosses match does not provide any information about whether the outcome of those coins were more likely to have shown two heads, or if th
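Exact enumeration of the eight outcomes makes the comparison concrete (a sketch using `fractions` for exact arithmetic; I only compare the $(1,1)$ cell, which already witnesses the dependence):

```python
from fractions import Fraction
from itertools import product

def joint_vs_product(p):
    """Return (P[X=1, Y=1], P[X=1] * P[Y=1]) for heads-probability p."""
    pr = lambda t: p if t == 1 else 1 - p
    P_X = P_Y = P_XY = Fraction(0)
    for t1, t2, t3 in product((0, 1), repeat=3):   # all 8 toss outcomes
        w = pr(t1) * pr(t2) * pr(t3)
        X, Y = int(t2 == t1), int(t3 == t1)
        P_X += w * X
        P_Y += w * Y
        P_XY += w * X * Y
    return P_XY, P_X * P_Y

# p = 1/3: P[X=1, Y=1] = 1/3 but P[X=1] P[Y=1] = 25/81, so X, Y dependent.
joint, prod = joint_vs_product(Fraction(1, 3))
assert (joint, prod) == (Fraction(1, 3), Fraction(25, 81))
assert joint != prod

# p = 1/2: the two coincide; for a fair coin X and Y are independent.
joint, prod = joint_vs_product(Fraction(1, 2))
assert joint == prod == Fraction(1, 4)
```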
|probability|independence|
0
Proving the Value of a Unique Infinitely Nested Radical
Math Stack Exchange! I am trying to figure out how to find value of the infinitely nested radical $$x = \sqrt{2^0 + \sqrt{2^2 + \sqrt{2^4 + \sqrt{2^8 + \ldots}}}}$$ I have already established that this expression converges based on Herschfeld's Convergence Theorem , but I am now interested in proving the specific value to which it converges. Here is the proof that it converges: Let $a_n = 2^{2^n}$ . Then we have $$ a_n^{2^{-n}} = (2^{2^n})^{2^{-n}} = 2^{2^n \cdot 2^{-n}} = 2^1 = 2 $$ In other words, for all $n$ , $ a_n^{2^{-n}} = 2 $ , which means the supremum of $ a_n^{2^{-n}} $ is 2 by Herschfeld's Convergence Theorem. Thank you for any help!!
$x^2=1+\sqrt{2^2+\sqrt{2^4+\sqrt{2^8+...}}}=1+2\sqrt{1+\sqrt{1+\sqrt{1+...}}}=1+2u$ (the second equality comes from repeatedly factoring a power of $2$ out of each radical, using $\left(2^{2^{n-1}}\right)^2 = 2^{2^n}$ ). Now notice that $u^2=1+u$ so by solving this quadratic (taking the positive root, since $u>0$ ) you get $u=\frac{1+\sqrt{5}}{2}$ and finally $x^2=1+2\cdot\frac{1+\sqrt{5}}{2}=2+\sqrt{5}$ so $x=\sqrt{2+\sqrt{5}}\approx2.06$
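Numerically, the closed form and a truncation of the original radical agree; a quick sketch (the truncation depth is limited so that $2^{2^n}$ stays within floating-point range):

```python
import math

# Golden-ratio inner radical u = sqrt(1 + sqrt(1 + ...)); converges fast.
u = 0.0
for _ in range(60):
    u = math.sqrt(1 + u)

x = math.sqrt(1 + 2 * u)               # the answer's relation x^2 = 1 + 2u
print(x, math.sqrt(2 + math.sqrt(5)))  # both ~2.0582

# Direct truncation of the original radical, innermost term first.
v = 0.0
for n in range(9, 0, -1):              # terms 2^(2^n) for n = 9, ..., 1
    v = math.sqrt(2 ** (2 ** n) + v)
v = math.sqrt(1 + v)                   # outermost term is 2^0 = 1
```

The direct truncation converges slowly (each outer square root roughly halves the relative error), but at depth 9 it already agrees with $\sqrt{2+\sqrt5}$ to a few decimal places.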
|sequences-and-series|nested-radicals|
0
Step 1 of Rudin theorem 7.32 proof
There is one part I don't get from the proof of Baby Rudin's Theorem 7.32: the reason we require the polynomial $\sum_{i=1}^n c_i y^i$ to vanish when $y=0$ is that we want this property to hold: if $f(x)=0$ for some $x\in K$ , then $g(x)=0$ as well. Is that correct?
To expand on the answer in the comments, let us suppose that the polynomial does not vanish at $0$ and see what goes wrong. Suppose that given $\varepsilon > 0$ , there are real numbers $c_{0}, \ldots , c_{n}$ such that for all $y\in [-a ,a]$ , $$\left| \sum_{i=0}^{n}c_{i} y^{i} - |y| \right| < \varepsilon,$$ where the convention is being used that $0^{0} = 1$ in this context. Then for all $x\in K$ , $f(x)\in [-a, a]$ , so it follows that for all $x\in K$ , $$ \left| \sum_{i=0}^{n}c_{i}(f(x))^{i} - |f(x)| \right| < \varepsilon. $$ If for each $i\in \{0, \ldots , n\}$ the function $f^{i}:K\to\mathbb{R}$ is defined by $f^{i}(x) := (f(x))^{i}$ , then by using that the component-wise product of two functions in $\mathscr{B}$ is again in $\mathscr{B}$ , it can be shown by a finite induction argument that $f^{i} \in \mathscr{B}$ for all $i \in \{1, \ldots , n\}$ . Then $c_{i}f^{i} \in \mathscr{B}$ for each $i\in \{1, \ldots , n\}$ , so it follows that $\sum_{i=1}^{n}c_{i}f^{i}\in \mathscr{B}$ . For the case $i=0$ , $f^{0}(x)
|real-analysis|
1
Sharing of profit in a partnership
I have two theories and I was pondering on what basis we are choosing one over the other. Let me take a specific example and describe the two theories; any discussion in this regard would be helpful to me. $\textbf{Question}$ There were two partners $A$ and $B$ who had invested $Rs. 50,000$ and $Rs. 80,000$ respectively. After four months $C$ joined and the total profit at the end of the year was $40000$ . Of this, the share of $C$ was $15,000$ . Then what was the amount invested by $C$ ? $\textbf{Common solution suggested}$ Profit is directly proportional to the investment amount times the time. If $X_A$ , $X_B$ , and $X_C$ represent the shares of profit of $A, B$ and $C$ respectively, then $$X_A : X_B : X_C = 50000\times 12: 80000\times 12 : I_C\times 8,$$ where $I_C$ is the amount invested by $C$ . The calculations would finally lead to the answer $I_C = 1,17,000$ . $\textbf{Another Approach}$ The profit for the year is $40000$ and hence the profit for the first four months is $\frac{40000}{3}$ . As
In your alternative approach, you have added the profits of A and B to their investments, which never happened. According to the question as framed, profits are reaped only at the end of the year. Therefore, you shouldn't add their profits to their investment mid-way. Everything else seems fine. Edit: Added Full Solution After the first four months, A and B share $\frac{40000}{3}$ in the ratio 5:8 as you've mentioned. Let's see what happens after this: at the year end C claims $15000$ as his share of profit. That is $\frac{15000}{\frac{80000}{3}}$ of the next 8 months' profit, which simplifies to $\frac{9}{16}$ of the 8-month profit. So, he would have invested $\frac{9}{16}$ of the total invested amount by A, B and C. So A and B must have invested the remaining $\frac{7}{16}$ of the total invested amount, and we know together A and B invested 130000. So, $$\frac{7x}{16} = 130000 \Rightarrow x = \frac{16 \cdot 130000}{7}$$ Amount Invested by C = $$\frac{9x}{16} \Rightarrow \frac{9}{16} \cdot \frac{16 \cdot 130000}{
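The common "investment times time" solution can be sanity-checked directly; a quick sketch (not part of the original answer; $Rs.\ 1,17,000$ in Indian digit grouping is $117000$):

```python
from fractions import Fraction

# Check the common solution: profit share proportional to investment x time.
inv_A, inv_B, months = 50000, 80000, 12
I_C, months_C = 117000, 8          # claimed answer Rs. 1,17,000

weights = [inv_A * months, inv_B * months, I_C * months_C]
total_profit = 40000
share_C = Fraction(total_profit) * weights[2] / sum(weights)
print(share_C)  # 15000, matching the given share of C
```

With $I_C = 117000$ the weights are $600000:960000:936000$, and $C$'s fraction $936000/2496000 = 3/8$ of $40000$ is exactly $15000$.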
|algorithms|arithmetic|word-problem|ratio|
0
Probability of occurrence of games in a football league
This question just came to me as I was watching a football game. There is a football league with 20 teams. Each team has to play every other team at home and away, which means each team will play a total of 38 games. All teams will play a game every weekend, and hence the season lasts for 38 weeks. Now, I am interested in following the 6 teams that are favored to win the league. What is the probability that on any given weekend during the season, there is at least one match involving 2 of my 6 favored teams? What is the generalized solution for the same problem in a league with N teams and n favorites?
As pointed out by FanOfFourier in a comment to the question, this is similar to the Birthday Problem: en.wikipedia.org/wiki/Birthday_problem
|probability|combinatorics|recreational-mathematics|
1
Can you efficiently determine the number of possible solutions for an arbitrary starting sudoku configuration?
I thought it would be fun to implement a solver which updates in real time to show how many possibilities remain as you fill in squares in any order. I'm able to find a number of resources explaining how to calculate the number of possible solutions that exist in total, and they go into detail about some reductions that can be made by enumerating the possibilities for the top three rows, but it's not clear to me if or how that analysis can be mapped to, say, a board with 7 digits filled in randomly. The more I skim around the more computationally infeasible this seems, but I have yet to find any literature directly addressing this question. It feels plausible it could be done with a sufficiently clever pre-computed lookup table.
Because of the way that the cells in a sudoku are tightly bound to one another, there is likely no easy way to count the number of possible solutions without explicitly finding them. To give a very simple demonstration of this effect, if we look at 4x4 sudoku (because it's fairly simple to enumerate all the solutions), look at how adding given digits in different arrangements changes the number of solutions: With this grid, there are 72 solutions. Putting a 2 here reduces the number of solutions to 24. Put the 2 here instead and now there are only 12 solutions. Put it here, and there are 18 solutions. Putting a 1 in this position instead leaves us with 36 solutions. Putting it here is 18 solutions. And of course putting it here reduces the solutions to zero. As a bit of a basic test, I took a random sudoku from online and put it into f-puzzles . The solve took basically no time at all (because it was mostly a case of placing single digits with a small number of pairs). Removing a singl
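The 4x4 case is small enough to enumerate directly. A minimal backtracking counter (a sketch of my own; the specific grids in the answer's images are not reproduced here) confirms the known total of 288 complete 4x4 sudoku grids, and that fixing any single cell's value leaves 288/4 = 72 solutions by relabeling symmetry:

```python
def count_solutions(grid):
    """Count completions of a 4x4 sudoku; grid is a list of 16 ints, 0 = empty."""
    def ok(pos, v):
        r, c = divmod(pos, 4)
        # row and column constraints
        if any(grid[r * 4 + i] == v or grid[i * 4 + c] == v for i in range(4)):
            return False
        br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left of the 2x2 box
        return all(grid[(br + i) * 4 + bc + j] != v
                   for i in range(2) for j in range(2))

    if 0 not in grid:
        return 1
    pos = grid.index(0)
    total = 0
    for v in range(1, 5):
        if ok(pos, v):
            grid[pos] = v
            total += count_solutions(grid)
            grid[pos] = 0
    return total

print(count_solutions([0] * 16))        # 288 complete 4x4 grids
print(count_solutions([1] + [0] * 15))  # 72: one given cuts it by a factor of 4
```

Re-running the counter after each placed digit is exactly the kind of solution-count tracking the question asks about, and already at 4x4 the count depends on which cell is filled, not just how many.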
|combinatorics|sudoku|
0
When does $\sigma(X_1,..,X_n)=\sigma(f(X_1,...,X_n))$
Given $n$ random variables $X_{1},\dots,X_{n}:(\Omega,\mathcal{F}) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ . We can consider $\sigma(X_{1},\dots,X_{n})$ which is the smallest sigma algebra on $\Omega$ to make $X_{1},\dots,X_{n}$ measurable. I think, considering $g:\Omega \to (\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n}))$ , $g(\omega)=(X_{1}(\omega ),\dots,X_{n}(\omega))$ , that this is also the smallest sigma algebra to make $g$ measurable, in other words, $\sigma(g) = \sigma(X_{1},\dots,X_{n})$ . Is this correct? If so, I think given any measurable map $f:\mathbb{R}^{n} \to (C, \mathscr{C})$ , where $f(X_{1},\dots,X_{n}) = f \circ g$ , we get $\sigma(f(X_{1},\dots,X_{n})) \subset \sigma(X_{1},\dots,X_{n})$ as $f\circ g$ is a composition of two measurable maps, thus measurable. But when does $\sigma(f(X_{1},\dots,X_{n})) = \sigma(X_{1},\dots,X_{n})$ ? For instance, if $C$ is the reals, the natural numbers, or some common sets, when is this true? Does strict monotonicity of $f$ suffice if $C =\mathbb{R}$
Instead of looking at the collection $X_1,...,X_n$ we can look at the vector $Y = (X_1,...,X_n)$ for which $\sigma(Y) = \sigma(X_1,...,X_n)$ , so it is enough to consider the case of a single random variable $Y$ (but taking values in some product space $\mathbb R^n$ or any other space). Thus, let us assume that $(\Omega,\mathcal F,\mathbb P)$ is some probability space, $(S,\mathcal S)$ is some measurable space and $Y:\Omega \to S$ is a $\mathcal F/\mathcal S$ measurable random variable $($ in your case $S=\mathbb R^d$ , $\mathcal S = \mathcal B(\mathbb R^d))$ . As you noticed, for any measurable space $(E,\mathcal E)$ and $\mathcal S/\mathcal E$ measurable map $f:S \to E$ it is true that $\sigma(f(Y)) \subset \sigma(Y)$ . Indeed, any $A \in \sigma(f(Y))$ is of the form $(f(Y))^{-1}[B] = Y^{-1}[f^{-1}[B]]$ for some $B \in \mathcal E$ , but since $f$ is $\mathcal S/\mathcal E$ measurable, then $f^{-1}[B] \in \mathcal S$ and hence $A = Y^{-1}[ f^{-1}[B]] \in \sigma(Y)$ . To talk about other inclusio
|real-analysis|probability-theory|measure-theory|random-variables|
1
Writing $\exp{tA}$ as finite sum
Let $A \in \mathcal{M}_{n\times n} (\mathbb{R})$ such that $A^2=\alpha A$ for some $\alpha \neq 0$ . Under this assumption, we have by induction that $A^{n}=\alpha^{n-1}A$ ; $n \geq2$ ; then: $$\exp(tA)=I+ tA+\frac{t^2A^2}{2!}+ \dots \ =I+At+\frac{A\alpha t^2}{2!} + \frac{A\alpha^2 t^3}{3!} + \dots = \\ = I+ \frac{1}{\alpha}A(1+ \alpha t+ \frac{\alpha^2 t^2}{2!} +\dots)- \frac{A}{\alpha}=I+\frac{1}{\alpha}A\exp({\alpha t})- \frac{A}{\alpha}=I+A(\frac{\exp(\alpha t)-1}{\alpha}) $$ Are these steps correct?
COMMENT.- It is known that the function $x\to e^x$ operates over the algebra $\mathcal{M}_{n\times n} (\mathbb{R})$ (I don't know how the French word opère is translated in the English of the Functional Analysis world). In simplest English: if $A\in\mathcal{M}_{n\times n} (\mathbb{R})$ , then the matrix $e^A\in\mathcal{M}_{n\times n}(\mathbb R)$ . Now $\exp(tA)=I+A\left(\dfrac{\exp(a t)-1}{a}\right)$ is true whenever $A=aI$ , in which case $\exp(tA)=\exp(at)I$ . Is it possible that $A\ne aI$ , say $A=aI+B$ for some non-zero $B\in\mathcal{M}_{n\times n}(\mathbb R)$ ? From $A^2=aA$ we can get $A=aI$ only if $A$ is not a divisor of zero. I tend to believe with @Somos above that the equality is true but I feel more comfortable not answering the question definitively for now.
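One can test the question's identity numerically on a concrete $A$ with $A^2=\alpha A$ that is not a multiple of $I$; a sketch using my own example $A=\begin{pmatrix}1&1\\1&1\end{pmatrix}$, for which $A^2=2A$:

```python
import math

A = [[1.0, 1.0], [1.0, 1.0]]       # A^2 = 2A, so alpha = 2
alpha, t = 2.0, 0.7

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# exp(tA) by truncated power series: sum_{k >= 0} (tA)^k / k!
expm = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 30):
    term = matmul(term, [[t * a / k for a in row] for row in A])
    expm = [[expm[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# closed form from the question: I + A (e^{alpha t} - 1) / alpha
coef = (math.exp(alpha * t) - 1) / alpha
closed = [[(1.0 if i == j else 0.0) + coef * A[i][j] for j in range(2)]
          for i in range(2)]
```

The two matrices agree to machine precision, consistent with the question's series manipulation (here $A/\alpha$ is idempotent, so the series collapses exactly as computed there).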
|ordinary-differential-equations|matrix-exponential|
0
Geometric series with indexed inequality
I wanted to complete the following sum: $$\sum_{0\leq i<j<k}a_ib_jc_k$$ where $a_i, b_j, c_k$ are all different geometric sequences with $|r|<1$ . My attempt was to break up the sum into what each index is bounded by: $$\sum_{0\leq i<j<k}a_ib_jc_k=\sum_{k=j+1}^{\infty}\left(\sum_{j=i+1}^{k-1}\left(\sum_{i=0}^{j-1}a_ib_jc_k\right)\right) \tag{1}$$ But I've also seen on stack exchange a discrete case, which if I extended it to this situation would make me believe the solution would be: $$\sum_{k=j+1}^{\infty}\left(\sum_{j=i+1}^{\infty}\left(\sum_{i=0}^{\infty}a_ib_jc_k\right)\right)\tag{2}$$ I'm not sure if these are equivalent nor that either are correct for this case. For practical purposes, 1 would suggest that I compute $a_i$ as a finite geometric sum, making the summand become $a_ib_jc_k \to d_jc_k$ and then again to $f_k$ and I sum that to infinity subtracting the sum from 0 to base index for k which then leaves a result still in terms of the index k (which seems wrong). And 2 would suggest that I could do something funky like separating out each series, eva
I do not think your (3) will work. To see this, $i<j<k$ is a more severe restriction than $0\leq i,\ 1\leq j,\ 2\leq k$ . Also (1) will not work, due to incorrect bounds. Actually, (2) is the correct approach, but with the reverse order of summations (we treat the inner sum first). First, it is enough to evaluate the sum for geometric sequences with ratios $a, b, c$ : $$\sum_{0\leq i<j<k} a^i b^j c^k.$$ The sum over $k$ runs over $k\geq j+1$ , so it is $\frac{c^{j+1}}{1-c}$ . Then take $\frac{c}{1-c}$ out of the sums; we are left with $$ \frac c{1-c}\sum_{0\leq i<j} a^i (bc)^j. $$ Then the sum over $j\geq i+1$ gives $$ \frac c{1-c}\cdot \frac{bc}{1-bc}\sum_{0\leq i} (abc)^i. $$ Thus, we have $$ \frac c{1-c}\cdot \frac{bc}{1-bc}\cdot \frac{1}{1-abc}. $$
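The closed form can be sanity-checked by brute force; an illustrative sketch (the ratios $a, b, c$ below are arbitrary choices with absolute value less than 1):

```python
# Brute-force the triple sum over 0 <= i < j < k against the closed form.
a, b, c = 0.5, 0.3, 0.2
N = 60  # truncation depth; the dropped tail is negligible at this size

brute = sum(a**i * b**j * c**k
            for i in range(N)
            for j in range(i + 1, N)
            for k in range(j + 1, N))

closed = (c / (1 - c)) * (b * c / (1 - b * c)) * (1 / (1 - a * b * c))
print(brute, closed)  # both ~0.01645
```

The agreement to machine precision reflects that summing innermost-first (over $k$, then $j$, then $i$) is exactly the order used in the answer.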
|sequences-and-series|geometric-series|index-notation|
1
Question About Differentiability Of Principal Branch Of log(z).
Theorem:1 Let the function $$f (z) = u(r, θ) + iv(r, θ)$$ be defined throughout some neighborhood of a nonzero point $z_0 = r_0 \exp(iθ_0)$ and suppose that (a) The first-order partial derivatives of the functions u and v with respect to r and θ exist everywhere in the neighborhood; (b) Those partial derivatives are continuous at $(r_0, θ_0)$ and satisfy the polar form $$ru_r = v_θ , u_θ = −rv_r$$ of the Cauchy–Riemann equations at $(r_0, θ_0)$ . Then $f$ is differentiable at $z_0$ . Now consider $$\text{Log}(r\exp(i\theta))=\ln r+i\theta$$ for $(r, θ)\in \mathbb R^+\times (-\pi,\pi].$ Then $u_{\theta}=v_r=0$ and $u_r=\frac 1 r$ , $v_{\theta}=1$ . So clearly all hypotheses of Theorem 1 are satisfied for $(r, θ)\in \mathbb R^+\times (-\pi,\pi]$ , so Log(z) is differentiable everywhere except the origin, which is false since it is not even continuous on the negative real axis. So what am I missing here? Edit: Given $$f(z) = u(r, \theta) + iv(r, \theta)$$ for $(r,\theta)\in\underbrace{(0,\infty) \tim
Working with polar coordinates means that we use the function $$p : (0,\infty) \times \mathbb R \to \mathbb C^* = \mathbb C \setminus \{0\}, p(r,\theta) = r\exp(i \theta) = r\cos \theta + i r\sin \theta$$ to parameterize the points of $\mathbb C^*$ . This function is surjective, but not injective. Actually we have $p(r,\theta) = p(r',\theta')$ iff $r' = r$ and $\theta' = \theta + 2k\pi$ for some $k \in \mathbb Z$ . $p$ is continuously differentiable in the sense of multivariable calculus and has non-vanishing Jacobian determinant. Thus it is an open map. The restrictions $$p_a : (0,\infty) \times (a,a+2\pi] \to \mathbb C^*$$ $$\tilde p_a : (0,\infty) \times [a,a+2\pi) \to \mathbb C^*$$ are bijective for all $a \in \mathbb R$ , but they are not homeomorphisms . In fact $p_a^{-1}$ and $\tilde p_a^{-1}$ are not continuous at the points of the ray $R_a = \{re^{ia} \mid r \in (0,\infty)\}$ . However, since $p$ is an open map, $$\bar p_a : (0,\infty) \times (a,a+2\pi) \to \mathbb C^*$$ is always
|complex-analysis|analytic-functions|
1
Is a Set of Infinitely Differentiable Functions Linearly Independent?
Let V be the following vector space. $V = \{ f : \mathbb{R} \longrightarrow \mathbb{R} : \text{f is infinitely differentiable at all x, } f^{(k)}(0) = 0 \;\;, \forall \;k \in \{0\} \cup \mathbb{N} \}$ How would one show that this vector space is infinite dimensional? To show that a vector space is infinite dimensional, one needs to show: Basis ( $V$ ) contains an infinite number of linearly independent vectors. I understand that if $f \in V$ , then: $f$ is infinitely differentiable $f^{(k)}(0) = 0, \;\;\; (k = 0, 1, 2, ... n, ...) $ This would make $\{f^{(1)}, f^{(2)}, f^{(3)}, ... f^{(n)}... \}$ be an infinite subset of $V$ , as each of the elements would also be infinitely differentiable and would pass through $(0,0)$ . But how would one show that $\{f, f^{(1)}, f^{(2)}, f^{(3)}, ... f^{(n)}... \}$ is linearly independent?
We prove that $\dim V = |\mathbb{R}|$ . The following is a set of functions in $V$ of cardinality $|\mathbb{R}|$ , any finite subset of which is linearly independent: $$S=\{ e^{-r/x^2} \ | \ r>1\},$$ where each function is extended by $0$ at $x=0$ (so that all of its derivatives vanish there). To show $S$ is linearly independent, it suffices to show (substituting $t=1/x^2$ ) that for any distinct $r_i>1$ , $i=1,\ldots k$ , $$ e^{-r_1 t}, \ldots, e^{-r_k t} $$ are linearly independent as functions of $t>0$ . This is clear from the Vandermonde determinant. Thus, $\dim V\geq |\mathbb{R}|$ . The reverse inequality is obvious from considering the cardinality of $V$ itself: $V$ is a subspace of the set of continuous functions, each of which is determined by its values on $\mathbb{Q}$ .
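The Vandermonde argument can be spot-checked numerically: sampling $e^{-r_i t}$ at distinct points $t_j$ gives a generalized Vandermonde matrix with nonzero determinant, so no nontrivial linear combination vanishes. A small sketch of my own (the exponents and sample points are arbitrary choices):

```python
import math

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

rs = [2.0, 3.0, 5.0]        # distinct exponents r_i > 1
ts = [0.1, 0.2, 0.3]        # distinct sample points t_j > 0
M = [[math.exp(-r * t) for t in ts] for r in rs]
print(det3(M))  # nonzero: the three exponentials are linearly independent
```

With equally spaced $t_j = j\,t_1$ the rows are literally $(x_i, x_i^2, x_i^3)$ for $x_i = e^{-r_i t_1}$, so the determinant factors through an ordinary Vandermonde determinant in distinct $x_i$, hence is nonzero.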
|linear-algebra|
0
How to show when the inequality holds?
If $f:[a,b]\to\mathbb{C}$ is continuous, $M=\sup\limits_{z\in[a,b]}|f(z)|$ then $$\left\vert\int\limits_a^bf(z)dz\right\vert\leq M(b-a).$$ It's easy to prove the inequality. But I don't know how to show the above inequality becomes equality if and only if $f$ is constant. I just can deduce that if $\leq$ becomes $=$ , $|f(z)|=M\quad\forall z\in [a,b]$ . Could someone help me?
If $f$ were real the result would follow immediately from the fact that equality in the integral implies $|f(z)|=M$ hence by continuity $f(z)=M$ or $f(z)=-M$ , so the trick is to force that. So let $|\int\limits_a^bf(z)dz|=M(b-a)$ hence $\int\limits_a^bf(z)dz=M(b-a)e^{i\alpha}$ and let's consider $g(x)=e^{-i\alpha}f(x), x \in [a,b]$ then $\int\limits_a^bg(x)dx=M(b-a)\ge 0$ and since we are on the segment $[a,b]$ we have that $dx$ is a real measure so $\int\limits_a^b \Re g(x)dx=M(b-a)$ and $\int\limits_a^b\Im g(x)dx=0$ . But $|\Re g(x)| \le |g(x)| \le M$ so the comments above imply that $|\Re g(x)|=M$ hence $\Re g$ is constant $M$ or $-M$ and then $\Im g=0$ since $M=|g(x)|=\sqrt {(\Re g)^2+(\Im g)^2}=\sqrt {M^2+(\Im g)^2}$ so $g=\Re g$ is constant and $f=e^{i\alpha}g$ is also constant.
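A quick numerical illustration of my own (not part of the original answer) that the inequality is strict for a non-constant $f$: with $f(x)=e^{ix}$ on $[0,\pi]$ we have $M=1$, and $\left|\int_0^\pi e^{ix}\,dx\right| = |2i| = 2 < \pi$:

```python
import cmath, math

a, b, n = 0.0, math.pi, 20_000
h = (b - a) / n
# Trapezoidal approximation of the integral of f(x) = exp(ix) over [0, pi]
integral = h * (sum(cmath.exp(1j * (a + k * h)) for k in range(1, n))
                + (cmath.exp(1j * a) + cmath.exp(1j * b)) / 2)

M = 1.0  # |exp(ix)| = 1 everywhere, so M(b - a) = pi
print(abs(integral), M * (b - a))  # ~2.0 < ~3.1416: strictly less
```

The exact value is $\int_0^\pi e^{ix}\,dx = (e^{i\pi}-1)/i = 2i$, consistent with the trapezoidal estimate.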
|calculus|complex-analysis|
0
Question about pseudo-euclidean spaces and manifolds
Is it possible to make a pseudo-euclidean space $ \mathbb{R}^{p, q} $ into a differentiable manifold? My intuition tells me that yes since in special relativity we deal with the flat Minkowski space $ \mathbb{R}^{1, 3} $ however i tried to approach the problem of proving that for any given pseudo-euclidean space we can make it a differentiable manifold and i am stuck. From what i understand, the way we prove this is by defining two charts $ (U, x) $ and $ (V, y) $ on local regions U and V in $M$ with two maps $ x: U \rightarrow A $ and $ y: V \rightarrow B $ where A and B are subsets of $ \mathbb{R}^{\dim(M)} $ and then looking at the transition maps $ y(x^{-1}(p)) $ to see if they are of class $ C^k $ and if they are for any pair of charts in M then M is a differentiable manifold I tried to do this by defining a subset of $ \mathbb{R}^{1,2} $ as a collection of intervals. $ U = ( (0;1); (1;2); (3; 4)) $ Which would mean my local region U would be from 0 to 1 in x, from 1 to 2 in y and
OK, let's try to make sense of your question. First of all, what is ${\mathbb R}^{p,q}$ ? Let $n=p+q$ . Then ${\mathbb R}^{p,q}$ is the vector space ${\mathbb R}^{n}$ equipped with the quadratic form $$ Q({\mathbf x})= x_1^2+...+x_p^2 - x_{p+1}^2- ... - x_n^2. $$ In particular, as a topological space, ${\mathbb R}^{p,q}$ is just ${\mathbb R}^{n}$ . (At this point it is critical that you know what the words "topological space" mean. I assume that you do.) What does it mean to make ${\mathbb R}^{p,q}$ into a differentiable manifold? You want to accomplish two things: Find a (maximal) smooth atlas on the topological space ${\mathbb R}^{n}$ such that the transition maps are smooth. (The "definition" that you wrote in your question is not quite the right one.) This will give ${\mathbb R}^{n}$ a structure of a differentiable manifold, I will call this manifold $M$ . Make sure that the quadratic form $Q$ becomes a semi-Riemannian metric on $M$ (of the signature $(p,q)$ ). (I assume that you k
|general-topology|ordinary-differential-equations|manifolds|differential-topology|smooth-manifolds|
0
How you evaluate $\lim_{n\to\infty} \frac{1}{n^2}\left(1 + \sqrt{3\cdot4} + \sqrt[3]{4\cdot5\cdot6} + \ldots + \sqrt[n]{(n+1)(n+2)\ldots(2n)}\right)$
How do you evaluate $\lim_{n\to\infty} \frac{1}{n^2}\left(1 + \sqrt{3\cdot4} + \sqrt[3]{4\cdot5\cdot6} + \ldots + \sqrt[n]{(n+1)(n+2)\ldots(2n)}\right)$ ? I have been working on this problem all day and I don't know how to solve it; I tried Stolz, but didn't get an answer. Computing it numerically I got 0.7362.
Using the Stolz theorem and a definite integral. Let: $$ \begin{align*} A&=\lim_{n \to \infty}\frac{1+\sqrt{3\cdot4}+\ldots+\sqrt[n]{(n+1)(n+2)\ldots(2n)}}{n^2} \\ A&=\lim_{n \to \infty}\frac{\sqrt[n]{(n+1)(n+2)\ldots(2n)}}{n^2-(n-1)^2}\\ A&=\lim_{n \to \infty}\frac{1}{2}\cdot\sqrt[n]{(1+\frac{1}{n})(1+\frac{2}{n})\ldots(1+\frac{n}{n})} \end{align*} $$ Let: $$ \begin{align*} B&=\sqrt[n]{(1+\frac{1}{n})(1+\frac{2}{n})\ldots(1+\frac{n}{n})}\\ \ln(B)&=\frac{1}{n}\sum_{i=1}^{n}\ln(1+\frac{i}{n})\\ C&=\lim_{n \to \infty}\ln (B)=\int_{0}^{1}\ln(1+x)dx\\ C&=(1+x)\ln(1+x)|_{0}^{1}-x|_{0}^{1}=2\ln(2)-1 \end{align*} $$ Hence: $$ \begin{align*} \lim_{n\to\infty}B&=e^{C}=e^{2\ln(2)-1}=\frac{4}{e}\\ A&=\frac{1}{2}\cdot\frac{4}{e}=\frac{2}{e} \end{align*} $$
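The closed form $2/e \approx 0.7358$ is consistent with the $0.7362$ computed in the question; a quick numerical sketch (using `math.lgamma` to evaluate the products stably, and taking the $k$-th term as $((k+1)\cdots(2k))^{1/k}$):

```python
import math

def partial_average(n):
    """(1/n^2) * sum_{k=1}^{n} ((k+1)(k+2)...(2k))^(1/k)."""
    total = 0.0
    for k in range(1, n + 1):
        # log[(k+1)...(2k)] = lgamma(2k+1) - lgamma(k+1)
        total += math.exp((math.lgamma(2 * k + 1) - math.lgamma(k + 1)) / k)
    return total / n**2

print(partial_average(2000), 2 / math.e)  # the partial average approaches 2/e
```

The convergence is slow (the correction is of order $1/n$), which explains why a moderate-$n$ numerical estimate lands slightly above $2/e$, as in the question.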
|real-analysis|calculus|
1
Understanding a step in this proof of the $L^4$-SLLN
Let $(X_n)_{n \ge 1}$ be independent with $E(X_k) = \mu$ and $E((X_k)^4) \le C$ for some $C>0$ . Prove that if $S_n = X_1 + ... + X_n$ then $\frac{S_n}{n} \to \mu$ almost surely as $n \to \infty$ . The proof begins by assuming $\mu = E(X_k) = 0$ , then claims $$E((S_n)^4) = E \left( \left( \sum_{k=1}^n X_k \right)^4 \right) = \sum_{i,j,k,l = 1}^n E(X_i X_j X_k X_l) = \sum_{k=1}^n E((X_k)^4) + 6 \sum_{1 \le i \lt j \le n} E((X_i)^2(X_j)^2)$$ and it is with this final equality that I'm having trouble. I understand that all the terms containing odd exponents of $X_k$ are zero because of independence and the assumption $\mu = 0$ , but it's the counting of the final term I'm struggling with. Why is the coefficient 6? My best explanation is that we have $n \times n$ possible arrangements of the $i$ 's and $j$ 's, but then remove the cases when $i=j$ because those are in the first term, so we have $n(n-1)$ arrangements. But we also have to consider the $4 \choose 2$ ways we can pick which indices of $i,j,k,l$ become our ne
For example, if $i=2$ and $j=5$ , we have $6$ cases for the index tuple $(i,j,k,l)$ : $2,2,5,5$ $2,5,2,5$ $2,5,5,2$ $5,2,2,5$ $5,2,5,2$ $5,5,2,2$ This is indeed $\binom 42$ , from choosing the spots for the $5$ 's. Apply this to all $1\leq i < j \leq n$ . Then by independence, the final term becomes $$ 6\binom n2 E(X_i^2)^2 = 3n(n-1)E(X_i^2)^2 $$ when the $X_i$ are identically distributed; in general it is $6\sum_{1\le i<j\le n} E(X_i^2)\,E(X_j^2)$ .
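The coefficient can be confirmed by brute-force enumeration of index tuples; a quick sketch (my choice of $n=4$ is arbitrary):

```python
from collections import Counter
from itertools import product
from math import comb

n = 4
# Count index tuples (i, j, k, l) whose values form the pattern {a, a, b, b},
# i.e. exactly two distinct indices, each appearing twice.
pattern_count = sum(
    1 for t in product(range(n), repeat=4)
    if sorted(Counter(t).values()) == [2, 2]
)
print(pattern_count, 6 * comb(n, 2))  # 36 == 36
```

This matches $\binom{4}{2}=6$ arrangements for each of the $\binom{n}{2}$ unordered pairs of distinct indices.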
|combinatorics|probability-theory|law-of-large-numbers|
1
Upscalling sample size for bootstrap during resampling
I started experimenting with bootstrapping and noticed that using a bigger sample size gives a tighter confidence interval, especially at very low sample sizes. I made a test and created a bootstrap function that upscales the sample size (during the resampling step). The value it gives is very close to the one without upscaling, but the distribution is less spread out. Is there a downside to upscaling the sample size in this way? Is the CI still valid in that case?
I commented that using bootstrap samples larger than the original sample size will tend to give you narrower intervals around the original sample mean, and these intervals will not cover the population distribution mean as often as you hope they do. There is a separate issue that the bootstrap methodology can be overoptimistic (i.e. confidence intervals tend to be too narrow), especially for small original sample sizes. As an illustration of these, let's try to find bootstrap $95\%$ confidence intervals sampling from a normal distribution, using the following R code. It is not time efficient for large resample sizes, but serves to illustrate the point. In each example, I take $1000$ resamples with replacement from each original sample from a normal distribution to suggest a $95\%$ confidence interval for the mean from that original sample and then see whether that confidence interval covers the population mean; I do this for $1000$ different original samples and so hope about $950$ con
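The R code referenced in the answer is not reproduced above, but the same effect can be sketched in Python (an illustration under my own simulation settings): percentile intervals from upscaled resamples are narrower and cover the true mean less often.

```python
import random, statistics

def coverage(resample_size_factor, n=10, boots=200, trials=300, seed=0):
    """Fraction of percentile bootstrap CIs that cover the true mean 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        m = n * resample_size_factor
        means = sorted(statistics.fmean(rng.choices(sample, k=m))
                       for _ in range(boots))
        lo, hi = means[int(0.025 * boots)], means[int(0.975 * boots) - 1]
        hits += lo <= 0 <= hi
    return hits / trials

plain = coverage(1)     # resamples of the original size n
upscaled = coverage(5)  # resamples of size 5n: narrower, worse coverage
print(plain, upscaled)
```

Resampling with size $mn$ shrinks the spread of the resample means by roughly $\sqrt{m}$, so the interval narrows without any extra information about the population, and nominal $95\%$ coverage is lost.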
|bootstrap-sampling|
1
How to formally write probability distribution?
Let $X$ be a random variable that takes values from 0 to 9 with equal probability $\frac{1}{10}$ . a) Find the probability distribution of the random variable $Y = X \mod 4$ . b) Find the probability distribution of the random variable $Y = 6 \mod (X + 1)$ . For instance part a - I know there are only four possible answers for these: anything mod 4 is either $0, 1, 2,$ or $3$ . What I don't know how to do is how to "find" the probability distribution. Is it a number? A chart? In my notes I have two ways of writing it; one is a table and another is " $P(X = x) = \{$ " followed by the distribution. This question is very different from the one in my notes and I'm a little confused on how to begin. edit: Is probability distribution the same as "PMF"? The term PMF has never been used in class before (yet?).
FullofDill's answer is perfectly acceptable, but I just wanted to challenge myself to create a non-piecewise PMF for these, because it seemed like a neat challenge! For Part a, obviously we start knowing that the denominator of our probability will be 10, and we can see that as long as $y$ is in range, the numerator is at least $2$ - so we can start with 2 as a constant in our numerator: $$P(Y=y)= \frac{2+?}{10}$$ We know that if $y$ is less than 2, we need to add 1 to the numerator. There are a few options to add a one if $y<2$ ; in this case the easier option was to use the floor function and some odd arithmetic to build an expression, $\left\lfloor \frac{y-2}{2} \right\rfloor$ , that equals $-1$ if $y<2$ and $0$ otherwise, and then subtract it to get what we want: $$P(Y=y)= \frac{2-\left \lfloor \frac{y-2}{2} \right \rfloor}{10}$$ This, of course, will only work if $0 \leq y \leq 3$ . Part b is much more complicated. The function needs to return $\frac{4}{10}$ if $y\in\{0,6\}$ , $\frac{1}{10}$ if $y\in\{1,2\}$ and 0 otherwise. I racked my head for an hour or so trying to th
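Both distributions can be checked against the formula by direct enumeration; a quick sketch (not part of the original answer):

```python
import math
from collections import Counter

# Part a: exact PMF of Y = X mod 4 for X uniform on {0, ..., 9},
# compared with the non-piecewise formula (2 - floor((y-2)/2)) / 10.
counts = Counter(x % 4 for x in range(10))
for y in range(4):
    formula = (2 - math.floor((y - 2) / 2)) / 10
    assert counts[y] / 10 == formula  # 3/10 for y in {0,1}, 2/10 for y in {2,3}

# Part b: Y = 6 mod (X + 1); tally the exact distribution by enumeration.
counts_b = Counter(6 % (x + 1) for x in range(10))
print(dict(counts_b))  # {0: 4, 2: 1, 1: 1, 6: 4}
```

The Part b tally confirms the target values stated above: $P(Y=0)=P(Y=6)=\frac{4}{10}$ and $P(Y=1)=P(Y=2)=\frac{1}{10}$.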
|probability-distributions|
0
Statement from Proposition 0.18 of Hatcher
On page $16$ of Hatcher's, the following proof is given: but I can't see the induced deformation retract's existence. My attempt is to use "passing to the quotient", but somehow at the end I can't guarantee the final step of my argument: $\require{AMScd}$ \begin{CD} {X_0\sqcup (X_1 \times I)} @>{\text{fix }X_0,\, D = D_2\circ D_1}>> X_0\sqcup (X_1 \times I)\\ @VVV @VVV\\ {X_0\sqcup_{F} (X_1 \times I)} @>{\exists ?}>> {X_0\sqcup_{F} (X_1 \times I)} \end{CD} Where $D_1$ is the deformation retraction mentioned in the picture onto $X_1\times\{0\} \cup A\times I$ , and $D_2$ deforms $X_1\times\{0\} \cup A\times I$ onto $X_1\times \{0\}$ in the natural way. To apply the passing-to-the-quotient argument, I must show (necessary, but not sufficient) that whenever $\pi((a,t)) = \pi((a',t'))$ for $a,a' \in A$ , we have $D((a,t)) = D((a',t'))$ . Now the condition using $\pi$ is equivalent to $F(a,t) = F(a',t')$ . The condition using $D$ is equivalent to $f(a) = f(a')$ . But we only have $f_t(a) = f_{t'}(a')$ . How to
$\require{AMScd}$ I don't quite buy your diagrams because you forget you must in fact have a map $(X_0\sqcup_F(X_1\times I))\times I\to X_0\sqcup_F(X_1\times I)$ to speak of a deformation retraction. Let $G:X_1\times I\times I\to X_1\times I$ be the deformation retraction onto $X_1\times\{0\}\cup A\times I$ . Luckily, $-\times I$ is cocontinuous ( $I$ is LCH) which is a miracle that makes a lot of stuff in topology work. Thus $(X_0\sqcup_F(X_1\times I))\times I$ is the pushout: $$\begin{CD}A\times I\times I@>>>X_1\times I\times I\\@VF\times1VV@VVV\\X_0\times I@>>>(X_0\sqcup_F(X_1\times I))\times I\end{CD}$$ To get a map into $X_0\sqcup_F(X_1\times I)$ again we specify the following components: the projection $X_0\times I\to X_0\to X_0\sqcup_F(X_1\times I)$ and $X_1\times I\times I\overset{G}{\to}X_1\times I\to X_0\sqcup_F(X_1\times I)$ . As $G$ is the identity on $A\times I$ this truly, by pushout properties, induces something $H:(X_0\sqcup_F(X_1\times I))\times I\to X_0\sqcup_F(X_1\times I
|algebraic-topology|homotopy-theory|quotient-spaces|retraction|
1