title
string
question_body
string
answer_body
string
tags
string
accepted
int64
Find coordinates for points on circle given R, 2 Points, and angle or 2 points and center?
I would like to find coordinates for points on a circle given: the radius of the circle, the coordinates of 2 points on the circle, and the angle formed by point 1, the center, and point 2. Ultimately, I would like to write a formula in Excel to calculate points on the circle to stake out coordinates for surveying. Thanks again for your help. Please let me know if I can provide any more information or rewrite the question using better terminology.
Your problem is overdetermined: you only need either the radius or the angle, since they are related by $R = \frac{d}{2\sin(\alpha/2)}$, where $d$ is the distance between the two points on the circle and $\alpha$ is the angle at the center. The position of the center of the circle is the solution of $$(x_C-x_A)^2 + (y_C-y_A)^2 = (x_C-x_B)^2 + (y_C-y_B)^2 = R^2$$ where $C$ is the center. Solving $(x_C-x_A)^2 + (y_C-y_A)^2 = (x_C-x_B)^2 + (y_C-y_B)^2$ gives you a linear relation between $x_C$ and $y_C$. Substitute $x_C$ or $y_C$ in $(x_C-x_A)^2 + (y_C-y_A)^2 = R^2$, then solve the resulting quadratic equation. This gives you two possible coordinates for $C$. Then you can use the standard parametrization of a circle with given center and radius, $(x,y) = (x_C + R\cos\theta,\ y_C + R\sin\theta)$.
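Before wiring this into Excel, it may help to prototype the computation. Below is a sketch in Python (the function names are my own): `circle_centers` returns the two candidate centers from the two known points and the radius, and `point_on_circle` then generates stakeout points.

```python
import math

def circle_centers(A, B, R):
    """Return the two possible centers of a circle of radius R
    passing through points A and B (assumes |AB| <= 2R)."""
    ax, ay = A
    bx, by = B
    d = math.hypot(bx - ax, by - ay)           # chord length
    mx, my = (ax + bx) / 2, (ay + by) / 2      # chord midpoint
    h = math.sqrt(R * R - (d / 2) ** 2)        # center-to-midpoint distance
    ux, uy = (by - ay) / d, -(bx - ax) / d     # unit normal to the chord
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

def point_on_circle(C, R, theta):
    """Standard parametrization of the circle with center C and radius R."""
    return (C[0] + R * math.cos(theta), C[1] + R * math.sin(theta))
```

Each formula above translates directly into an Excel cell formula (`SQRT`, `COS`, `SIN`).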
|circles|coordinate-systems|
0
Exterior derivative of 1-form and derivatives of sections of $T^*M$
Let $M$ be a compact manifold and consider a differential form $\alpha\in\Omega^1(M)$ , which we can think of as a map $\alpha:M\to T^*M$ . Since $T^*M$ is a smooth manifold we can compute the differential of this map $$D\alpha:TM\to T(T^*M)$$ How does this differential differ from the exterior derivative on 1-forms? I have a feeling that $D\alpha(X) = d(\alpha(X))$ , but I'm not sure how to show this.
As Ted points out, your guess isn't exactly right. I explain in this answer that $d\alpha$ is basically the "antisymmetric part" of $D \alpha$. Things are a little bit awkward to formulate viewing $\alpha$ as a map $M \to T^*M$, so let's view it as a fiberwise linear map $TM \to \mathbb{R}$ instead. Then the derivative is $D\alpha : TTM \to T\mathbb{R}$. There is a canonical identification $T \mathbb{R} = \mathbb{R} \times \mathbb{R}$, so that $D \alpha$ viewed this way just does $\alpha \circ \pi$ on the first factor, where $\pi : TTM \to TM$ is the bundle projection. Let $k : T\mathbb{R} \to \mathbb{R}$ be the projection onto the second factor. (If you want to be extremely fancy, $k$ is the connector of the canonical affine connection on the vector space $\mathbb{R}$.) Using $\operatorname{flip} : TTM \to TTM$ to denote the canonical flip on the double tangent bundle, the difference $k \circ D \alpha - k \circ D\alpha \circ \operatorname{flip} : TTM \to \mathbb{R}$ factors through the projection $(D\pi, \pi) : TTM \to TM \times_M TM$, and the induced fiberwise bilinear map $TM \times_M TM \to \mathbb{R}$ is exactly $d\alpha$ (up to sign convention).
|differential-geometry|differential-forms|
0
Number of functions with one element in the range when continuously composed
I have a question that goes: Find the number of functions $f: X \to X$ such that the range of the $n$-times-composed function $f^{(n)}$ has only one element, given that $|X| = n$, where $|X|$ is the number of elements in $X$. It is obvious that when $n = 1, 2$, the number of such functions is $1, 2$. I observed using Python that if $n \le 8$, the number of such functions is $n^{n-1}$. I tried to solve the question by counting "functions without a loop", where a loop is defined as a subset $L$ of $X$ such that $f(L) = L$. If the function has one or more loops (in the case of loops of size $1$, more than one), then no matter how many times it is composed, its range will never have one element. The number of loops of size $n$ is $(n-1)!$: if the number of loops of size $n$ is $a_n$, then $a_{n+1} = n a_n$, so $a_n = (n-1)!$. From here I am stuck on how to solve the original problem. Could anybody help?
Show that functions $f:X\to X$ for which the range of $f^{(n)}$ has exactly one element are in bijection with labeled rooted trees on the vertex set $X$. In this correspondence, the root vertex is the unique element $x$ of $X$ which is a fixed point of $f$, meaning $f(x)=x$. Then apply Cayley's formula, which says that the number of labeled unrooted trees on $n$ vertices is $n^{n-2}$. A labeled rooted tree is given by choosing a tree in $n^{n-2}$ ways and choosing one of the $n$ elements of $X$ to be the root, so the number of rooted labeled trees is $n\cdot n^{n-2}=n^{n-1}$.
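The $n^{n-1}$ count can be confirmed by brute force for small $n$ (a sketch; the encoding of $f$ as a tuple is my own choice):

```python
from itertools import product

def count_eventually_constant(n):
    """Count f: {0,...,n-1} -> {0,...,n-1} whose n-fold composite
    has a one-element range (exhaustive, so only for small n)."""
    count = 0
    for f in product(range(n), repeat=n):   # f[x] is the image of x
        image = set(range(n))
        for _ in range(n):                  # apply f n times to the full set
            image = {f[x] for x in image}
        if len(image) == 1:
            count += 1
    return count
```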
|combinatorics|
1
Positive definiteness of the function
Given is the following function $f(x)=q y+\sqrt{\frac{1}{2} y^2+g x}$ with $q=\frac{1}{\sqrt{2}} \frac{1-\gamma}{1+\gamma}$ where $x\geq 0$ , $y\leq 0$ , $g > 0$ and $\gamma \in[0,1)$ . Show that this function is positive definite. I am not sure how to do it. I thought about writing $\sqrt{\frac{1}{2} y^2+g x} > -q y$ and squaring, but not sure how to deal with then $\frac{1}{2} y^2+g x > (q y)^2$
Let $\gamma = \cos \theta$ with $0 < \theta \le \pi/2$, so that $\gamma \in [0,1)$. Then $$q=\frac{1}{\sqrt{2}}\,\frac{1-\cos \theta}{1+\cos\theta}=\frac{1}{\sqrt{2}} \tan^2 \frac{\theta}{2}.$$ Squaring the desired inequality $\sqrt{\tfrac12 y^2+gx} > -qy$ reduces the claim to $q^2y^2 < \tfrac12 y^2 + gx$, i.e. $(q^2-\tfrac12)y^2 < gx$. Here's the jump: $q^2 = \tfrac12 \tan^4(\theta/2)$, and $\tan^4(\theta/2) \le 1$ for $0 < \theta \le \pi/2$, so $q^2 \le \tfrac12$, hence $(q^2-\tfrac12)y^2 \le 0 \le gx$. I think that does it.
|linear-algebra|inequality|
0
Maths question regarding union of multiple sets
The question is as follows: Suppose $A_1, A_2, \ldots, A_{30}$ are thirty sets having 5 elements each and $B_1, B_2, \ldots, B_n$ are $n$ sets having 3 elements each. Let $$\bigcup_{i=1}^{30}A_{i} = \bigcup_{j=1}^{n}B_{j} = S$$ and suppose each element of $S$ belongs to exactly 10 of the $A_i$'s and exactly 9 of the $B_j$'s; find the value of $n$. In the solution, the total number of elements in $A_1, A_2, \ldots, A_{30}$, counted with multiplicity, is 150, and the following equation is given: $$|S| = \frac{30 \times 5}{10} = 15$$ Where does this come from, and what does the 15 even mean? Is it the number of elements in the set $S$? If so, why?
Here you are counting pairs of the form $(A_i,x)$ with $x\in A_i$, $i\in \{1,\cdots, 30\}$. You can count the number of such pairs in two ways: There are $30$ choices for $i$, and for each such choice there are $5$ choices for $x$. Thus there are $30\times 5$ such pairs. There are $|S|$ choices for $x$, and for each such choice there are $10$ choices for $i$. Thus there are $|S|\times 10$ such pairs. Equating the two gives the equation in the solution: $$30\times 5= |S|\times 10,$$ which rearranges to $$|S|=\frac{30\times 5}{10}=15$$ The reason this is stated without much explanation is that it is a fairly common trick when comparing the sizes of two sets, where an element of the first set and an element of the second may be in some sense "incident" to each other. In this case $A_i$ and $x$ are incident if $x\in A_i$. Note you can use exactly the same trick again to find $n$. I suggest you do this before looking at the rest of the solution.
|combinatorics|elementary-set-theory|
1
Maths question regarding union of multiple sets
The question is as follows: Suppose $A_1, A_2, \ldots, A_{30}$ are thirty sets having 5 elements each and $B_1, B_2, \ldots, B_n$ are $n$ sets having 3 elements each. Let $$\bigcup_{i=1}^{30}A_{i} = \bigcup_{j=1}^{n}B_{j} = S$$ and suppose each element of $S$ belongs to exactly 10 of the $A_i$'s and exactly 9 of the $B_j$'s; find the value of $n$. In the solution, the total number of elements in $A_1, A_2, \ldots, A_{30}$, counted with multiplicity, is 150, and the following equation is given: $$|S| = \frac{30 \times 5}{10} = 15$$ Where does this come from, and what does the 15 even mean? Is it the number of elements in the set $S$? If so, why?
How many total elements do you get from the sets $A_i$ if you do not consider equality of elements? The answer is $30\times 5=150$. Now, note that $$\bigcup_{i=1}^{30}A_i=S$$ and every element in $S$ occurs in exactly $10$ different $A_i$'s. We are interested in the size of $S$, so we look at the total elements we get from the sets $A_i$ but take into consideration uniqueness of these elements. We can form groups of size $10$ among those $150$ elements such that each group has the same element repeated $10$ times. Hence, the total number of unique elements in $S$ is $150/10=15$. Similarly for the sets $B_j$: the total number of elements without the uniqueness consideration is $3n$. With the uniqueness consideration, the total number of elements in $S$ is $\frac{3n}{9}$. You now have two different expressions for the number of unique elements in $S$. Equating them, you get $$\frac{30\times 5}{10}=\frac{3n}{9}$$
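The double count can be carried out mechanically; a trivial sketch of the arithmetic:

```python
# 30 sets of size 5; every element of S lies in exactly 10 of them.
size_S = 30 * 5 // 10        # |S| = 15
# n sets of size 3; every element of S lies in exactly 9 of them,
# so 3n = 9|S|.
n = 9 * size_S // 3          # n = 45
```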
|combinatorics|elementary-set-theory|
0
The linear relationship between two plane(or two subspace)
Suppose $V$ is a $3$-dimensional vector space, and suppose I have $2$ planes $S_1$ and $S_2$ going through the origin; then they must at least intersect in a line. Suppose $V$ is a $4$-dimensional vector space, and suppose I have $2$ planes $S_1$ and $S_2$ going through the origin; then they must at least intersect at the origin. The more general question is: if I have an $n$-dimensional vector space $V$ and two subspaces $S_1$ and $S_2$ of dimension $r$, does their intersection form a subspace of dimension at least $(n-r)$? I'm trying to prove this: first take $S_1$ and $S_2$ as $2$-dimensional subspaces (planes), choose linearly independent $v_1, v_2$ spanning $S_1$ and linearly independent $v_3, v_4$ spanning $S_2$; then their intersection satisfies $a_1v_1 + a_2v_2 = a_3v_3 + a_4v_4$, i.e. $a_1v_1 + a_2v_2 - a_3v_3 - a_4v_4 = 0$. If $V$ is an $n$-dimensional space, then there are $(n-r)$ redundant $v$'s here that can be expressed as linear combinations of the others.
This is not true. Take any two distinct lines in $\mathbb R^n$ for $n\geq 2$. Their intersection is $\{0\}$, so it has dimension $0$, which is less than $n-1$. (The correct general lower bound is $\dim(S_1\cap S_2)\ge \dim S_1+\dim S_2-n$, which follows from $\dim(S_1\cap S_2)=\dim S_1+\dim S_2-\dim(S_1+S_2)$ together with $\dim(S_1+S_2)\le n$.)
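Numerically, the identity $\dim(S_1\cap S_2)=\dim S_1+\dim S_2-\dim(S_1+S_2)$ makes such examples easy to check; a sketch with NumPy (the helper is my own):

```python
import numpy as np

def intersection_dim(S1, S2):
    """Dimension of the intersection of two subspaces, each given by a
    matrix whose columns span it, via dim(S1)+dim(S2)-dim(S1+S2)."""
    r1 = np.linalg.matrix_rank(S1)
    r2 = np.linalg.matrix_rank(S2)
    return r1 + r2 - np.linalg.matrix_rank(np.hstack([S1, S2]))

# Two distinct lines in R^3: intersection has dimension 0, not n-1 = 2.
L1 = np.array([[1.0], [0.0], [0.0]])
L2 = np.array([[0.0], [1.0], [0.0]])
# Two planes in R^3 sharing a line: intersection has dimension 1.
P1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
P2 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
```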
|vector-spaces|vectors|
0
Probability that a random bridge board does not contain a sequence?
Two cards with adjacent values and the same suit form a SEQUENCE. For example, the ten of hearts and the jack of hearts form a sequence. The order of the values in bridge is $$23456789TJQKA$$ What is the probability that a random bridge board contains no sequence in any hand? (That means, for example: if North holds the queen of spades, he holds neither the jack nor the king of spades.) A long time ago, I approached this problem by first determining all possible combinations not containing a sequence (for example KT742), and then calculating the number of boards from that. I remember that the probability is very low. But there should be a better method, perhaps inclusion-exclusion or something like that.
We first calculate, for each club distribution $(n_n,n_s,n_e,n_w)$ (distinguishing players but not ranks), how many explicit distributions $c$ (distinguishing ranks too) there are with no player having consecutive clubs. There are $4\cdot3^{12}$ explicit distributions in all, small enough to loop over and tabulate. Then associate with $(n_n,n_s,n_e,n_w)$ the monomial $c\,w^{n_n}x^{n_s}y^{n_e}z^{n_w}$:

```python
#!/usr/bin/env python3
from collections import Counter
from itertools import product, accumulate

distribs = Counter()
# First club goes to any of the 4 players; each later club goes to a
# player differing from the previous holder (offsets 1, 2, 3 mod 4).
for diffs in product(*([(0, 1, 2, 3)] + [(1, 2, 3)] * 12)):
    dist = list(accumulate(diffs, lambda x, y: (x + y) % 4))
    dist = tuple(dist.count(i) for i in (0, 1, 2, 3))
    distribs[dist] += 1
print(distribs)
for (d, c) in distribs.items():
    l = f"+{c}" + "".join(f"{v}{p if p > 1 else ''}"
                          for (v, p) in zip("wxyz", d) if p > 0)
    print(l, end="")
print()
```

The sum of monomials is the generating polynomial for admissible club distributions. The coefficient of $w^{13}x^{13}y^{13}z^{13}$ in the product of the four per-suit generating polynomials then counts the admissible boards.
|probability-theory|card-games|
1
Two coins, each with P(Head)$\to 0$ as $n\to\infty,$ where $n=n-$th coin toss. Under what conditions is P(both coins Head) infinite times $<1?$
Suppose we have two coins. The probability that the first coin lands on heads is $p_1(n)$ and depends on $n$, the $n$-th coin toss of this first coin. Similarly, the probability that the second coin lands on heads is $p_2(n)$ and depends on $n$, the $n$-th coin toss of this second coin. All coin tosses are mutually independent. If $\displaystyle\lim_{n\to\infty} p_1(n) > 0$ and $\displaystyle\lim_{n\to\infty} p_2(n) > 0$, then it is somewhat clear that if we toss the coins an infinite number of times, the probability that the number of tosses on which both coins land on heads (e.g. both coins land on heads on the third toss, both coins land on heads on the $17$th toss, etc.) is infinite, is $1$. Suppose $\displaystyle\lim_{n\to\infty} p_1(n) = 0 = \lim_{n\to\infty} p_2(n)$. It is (somewhat?) clear that if both functions converge to $0$ very slowly, then the result in the previous paragraph remains. How quickly do $p_1$ and $p_2$ have to tend to $0$ (as functions of $n$) before the probability of seeing infinitely many double heads drops below $1$?
I show the following satisfying result. CLAIM: Let $S=\sum_{n=0}^\infty p_1(n)p_2(n)$. If $S<\infty$, then the probability of having infinitely many tosses where both coins show heads is zero. If $S=\infty$, then the probability of having infinitely many tosses where both coins show heads is one. Let $E_n$ be the event that both the first and the second coin show heads in the $n$-th toss (of both coins simultaneously). Then we have $\mathbb P(E_n)=p_1(n)p_2(n)$. So, in the first case we have $\sum_{n=0}^\infty \mathbb P(E_n)<\infty$. By Borel-Cantelli, the probability that infinitely many of the $E_n$ occur is zero, meaning the probability that infinitely many tosses show two heads is zero. For the second case we have $\sum_{n=0}^\infty \mathbb P(E_n)=\infty$. Now, we use that the events $E_n$ are mutually independent and the second (or converse) Borel-Cantelli lemma to obtain that the probability that infinitely many of the $E_n$ occur is one, which means that the probability that infinitely many tosses show two heads is one.
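As a sanity check on where the dividing line sits, one can compare partial sums of $S$ for two hypothetical rates (my own example choices): $p_1(n)=p_2(n)=1/n$ gives $S=\sum 1/n^2<\infty$ (finitely many double heads a.s.), while $p_1(n)=p_2(n)=1/\sqrt n$ gives the divergent harmonic sum (infinitely many double heads a.s.):

```python
import math

def partial_S(p, N):
    """Partial sum of sum_n p1(n) p2(n) with p1 = p2 = p."""
    return sum(p(n) * p(n) for n in range(1, N + 1))

conv = partial_S(lambda n: 1 / n, 10**5)            # approaches pi^2/6
div = partial_S(lambda n: 1 / math.sqrt(n), 10**5)  # harmonic sum, unbounded
```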
|probability|limits|probability-distributions|markov-chains|probability-limit-theorems|
0
Proving $I(E(\Bbb C))=\left< y^2-x^3-ax-b\right>$
Let $E/\Bbb C : y^2=x^3+ax+b$ be an elliptic curve over $\Bbb C$. Let $$E(\Bbb C)=\{(x,y)\in\Bbb C^2:y^2=x^3+ax+b\}$$ be the affine variety generated by $E$. Finally, let $$I(E(\Bbb C))=\{f(x,y)\in\Bbb C[x,y]: f(p)=0,\, \forall p\in E(\Bbb C)\}.$$ I am trying to show that $I(E(\Bbb C))=I_E$ is the ideal generated by $E$, that is, $$I_E=\left\langle y^2-x^3-ax-b\right\rangle \subset\Bbb C[x,y].$$ My attempt so far has been the following. Suppose $f\in\Bbb C[x,y]$. Then we can divide by $y^2-x^3-ax-b$ and get $$f(x,y)=q(x,y)(y^2-x^3-ax-b)+r(x,y),$$ where the degree of $r\in\Bbb C[x,y]$ in $y$ is at most $1$. Suppose then that $f$ vanishes on $E(\Bbb C)$, that is, $f\in I(E(\Bbb C))$. From the uniformization theorem, there is a lattice $L\subset \Bbb C$ such that $a=-15G_4(L)$ and $b=-35G_6(L)$, where $G_4,G_6$ are Eisenstein series corresponding to $L$. Because of this, the point $(\wp(z,L),\tfrac12\wp'(z,L))$ is in $E(\Bbb C)$ for all $z\in\Bbb C$. Here, $\wp(z,L)$ is the Weierstrass elliptic function corresponding to the lattice $L$.
Suppose $p$ is a polynomial vanishing on $E$. Then after dividing $p$ by $y^2-(x^3+ax+b)$, we have $p=(y^2-(x^3+ax+b))q+r$, where $r$ has degree at most one in $y$. So we can write $r(x,y)=c(x)y+d(x)$, and $p$ vanishes on $E$ iff $r$ does (clearly $y^2-(x^3+ax+b)$ vanishes on $E$). Now let $\overline{r}=-c(x)y+d(x)$. Then $r\overline{r}$ vanishes on $E$, and therefore $$r\overline{r} = d(x)^2-c(x)^2y^2 = d(x)^2 - c(x)^2(x^3+ax+b)$$ is a polynomial in $x$ which vanishes for all $x\in\Bbb C$. But this implies it is the zero polynomial, and for reasons of degree (the first term has even degree, the second odd), this gives $c=d=0$. So $r=0$ and in fact $p$ is divisible by $y^2-(x^3+ax+b)$. One can get an awful lot of mileage out of this conjugation technique for varieties with equations of the form $z^2+pz+q=0$, just like one can get a lot of mileage out of conjugation for the complex numbers.
|complex-analysis|algebraic-geometry|ideals|elliptic-curves|
1
What's relationship between ST and TS's eigenvalues
This comes from a comment on this question: If $S$ and $T$ are $m\times n$ and $n\times m$ matrices, $ST$ and $TS$ have the same nonzero eigenvalues: $u$ is an eigenvector of $ST$ iff $Tu$ is an eigenvector of $TS$, with the same nonzero eigenvalue. I try to prove this. If $u$ is an eigenvector of $ST$, then $Tu$ is an eigenvector of $TS$: $\begin{align} STu &= \lambda u \tag{1} \end{align}$ Left-multiplying by $T$ gives $\begin{align} TSTu &= \lambda Tu \tag{2} \end{align}$ But I'm blocked on (2) $\Rightarrow$ (1): if $TSTu=\lambda Tu$, I can't multiply on the left by $T^{-1}$, since $T$ is not a square matrix; and even if $T$ were square, it still might not be invertible. Also, why does the OP in that thread say they have the same nonzero eigenvalues? From (1) $\Rightarrow$ (2) it seemed the zero eigenvalues also correspond.
Your argument works. If $\lambda\ne0$ and $STu=\lambda u$ for nonzero $u$, then $Tu\ne0$ and $TS(Tu)=\lambda Tu$. This shows that any nonzero eigenvalue of $ST$ is an eigenvalue of $TS$. As the roles can be exchanged, you get that $ST$ and $TS$ have the same nonzero eigenvalues. The same idea shows that the geometric multiplicity is preserved for nonzero $\lambda$. Indeed, if $STu=\lambda u$ and $STv=\lambda v$ for linearly independent $u,v$, and if $aTu+bTv=0$, then $$ au+bv=\frac1\lambda\,(aSTu+bSTv)=\frac1\lambda\,S (aTu+bTv)=0 $$ and so $a=b=0$. As for the algebraic multiplicity, the same ideas allow you to show that the generalized eigenspaces have the same dimension, so the algebraic multiplicity also agrees for nonzero $\lambda$. For an example of this, if $(ST-\lambda I)^2u=0$, then $$STSTu-2\lambda STu+\lambda^2u=0.$$ You cannot have $Tu=0$, for then you would have $u=0$. Multiplying by $T$ on the left as before, $(TS-\lambda I)^2Tu=0$, and arguing as above one can show that the dimensions of the generalized eigenspaces for $\lambda\ne0$ agree.
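A quick numerical illustration with a concrete non-square pair (my own example matrices): $TS$ has one extra eigenvalue, which is $0$, and the nonzero spectrum matches that of $ST$.

```python
import numpy as np

S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # 2 x 3
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # 3 x 2

# ST is 2x2 and TS is 3x3; here both happen to have real eigenvalues.
ev_ST = np.sort(np.linalg.eigvals(S @ T).real)   # 2 eigenvalues
ev_TS = np.sort(np.linalg.eigvals(T @ S).real)   # 3 eigenvalues
```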
|matrices|eigenvalues-eigenvectors|
0
How can I find the dimension of this vector space?
This question was asked in my assignment of linear algebra and I am not able to make any significant progress on this question. Question: Let $V$ be the space of all linear transformations from $\mathbb{R}^2 \to \mathbb{R}^2$ under usual addition and scalar multiplication. Then show that $V$ is a vector space of dimension $6$ . Attempt: I think $V$ is a vector space of dimension $4$ with basis as matrix $A,B,C,D$ and all of them are $2 \times 2$ matrix with $A$ has $a_{11}=1 $ and rest all terms zero and $B$ has $a_{12} =1$ and rest all zero and so on. But I don't know how answer is $6$ . Can you please help?
The space intended here is that of general inhomogeneous (affine) transformations, not just linear ones. Such a map between two spaces with bases $(v_i),(w_j)$ is fixed by the images of the basis vectors, $$L(v_i) = p + \sum_j q_{ij}\, w_j,$$ that is, a translation by a 2d vector plus multiplication by a $2\times2$ matrix: $2+4=6$ parameters in all, which is where the dimension $6$ comes from. The addition of two such maps is defined by separate parallel addition of the translation parts in $\mathbb R^2$ and of the matrices in $\mathbb R^{2\times2}$: $$ a\,L_1(p_1,m_1)\oplus b\,L_2(p_2,m_2) = L(a\,p_1+b\,p_2,\ a\,m_1+b\,m_2)$$
|linear-algebra|vector-spaces|
0
Does the series $\sum_{n=0}^\infty\arcsin(3^{-n})$ converge?
I have been working on this question for two days: does $\sum_{n=0}^\infty\arcsin(3^{-n})$ converge? I did this: $$\arcsin x < \frac{x}{\sqrt{1-x^2}} \quad (0<x<1)$$ Then I put $3^{-n}$ in place of $x$, with $3^{-n}\in (0,1)$ for $n\ge 1$: $$\arcsin(3^{-n}) < \frac{3^{-n}}{\sqrt{1-3^{-2n}}}$$ $$\arcsin(3^{-n}) < \frac{1}{\sqrt{3^{2n}-1}}$$ Then I used the limit comparison test: $$\lim_{n \to \infty} \frac{\frac{1}{\sqrt{3^{2n}-1}}}{\frac{1}{\sqrt{3^{2n}}}}=1$$ So $\sum_{n=1}^{\infty} \frac{1}{\sqrt{3^{2n}-1}}$ converges, since $\sum_{n=1}^{\infty} \frac{1}{\sqrt{3^{2n}}}$ converges as a geometric series. Since $\sum_{n=1}^{\infty} \frac{1}{\sqrt{3^{2n}-1}}$ converges, $\sum_{n=0}^{\infty} \arcsin\left(\frac{1}{3^n}\right)$ also converges. I have completed the assigned tasks; however, I am uncertain about the accuracy of all methodologies employed. Furthermore, I am seeking clarification on whether there exists a more straightforward approach to address the question.
For large enough $n$ you can see $$\arcsin\left(\frac 1{3^n}\right)\sim \frac 1{3^n},$$ but to make the comparison rigorous you can use, for example, the inequality $$0\leq \arcsin\left(\frac 1{3^n}\right)\leq \frac 2{3^n}$$ (valid since $\arcsin x \le 2x$ on $[0,1]$), together with $$\sum_{n=0}^\infty \frac 2{3^n}=\frac {2}{1-\frac 13}=3<\infty.$$
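A quick numerical check of the comparison (partial sums computed in Python): the $n=0$ term is $\arcsin(1)=\pi/2$, and the bound above caps the tail at $\sum_{n\ge1}2/3^n=1$.

```python
from math import asin

def partial(N):
    """Partial sum of sum_{n=0}^{N} arcsin(3^{-n})."""
    return sum(asin(3.0 ** -n) for n in range(N + 1))

s = partial(60)   # terms beyond n ~ 35 are below double precision
```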
|sequences-and-series|trigonometry|
0
Is it true that $10^n + 1$ is never a perfect square? If so, how can you prove it?
I'm working on a problem and right now I want to prove that $10^n + 1$ is never a perfect square. I know that for even values of $n$ the expression is not a perfect square, since two positive perfect squares cannot differ by $1$ and both $1$ and $10^{2m}$ are perfect squares. I just don't know how to prove it for odd $n$. Thanks in advance!
Note: in any base $b$, $10_{b}$ means $b$. For $n\ge 2$, $10^{n}$ is a perfect power, so if $10^{n}+1$ were a square number it would contradict Catalan's conjecture (now Mihăilescu's theorem), as the only non-trivial consecutive perfect powers are $8$ and $9$. The remaining case $n=1$ is immediate: $10+1=11$ is not a square.
|elementary-number-theory|
0
solving a Diophantine equation $17+2^m=n^2$
What is the method to solve $$17+2^m=n^2$$ for positive integers $m$ and $n$ . I tried using modular arithmetic which seems did not help.
As a hint: Take $m$ odd/even like below $$(1): \ m=2k \to 17+2^{2k}=n^2\\17=n^2-2^{2k}\\17=17*1 \to \\17*1=(n-2^k)(n+2^k)\\\begin{cases}n+2^k=17 \\n-2^k=1\end{cases}$$ for the odd $m $ $$(2): \ 17+2^{2k+1}=n^2\\2^{2k+1}+1=n^2-16\\2^{2k+1}+1\underbrace{\equiv}_{3} 0$$ so RHS must be multiple of $3$ or $$n^2-16 \underbrace{\equiv}_{3} 0$$
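The even case above pins down $(m,n)=(6,9)$: $17+2^6=81=9^2$. An exhaustive search over small $m$ is a useful sanity check on any case analysis here (a sketch; the bound $30$ is arbitrary):

```python
from math import isqrt

solutions = []
for m in range(1, 31):
    v = 17 + 2 ** m
    n = isqrt(v)
    if n * n == v:
        solutions.append((m, n))
# The search confirms 17 + 2^6 = 81, and also turns up odd-m solutions
# such as 17 + 2^3 = 25 and 17 + 2^5 = 49, so the congruence in case (2)
# is a constraint rather than a proof of impossibility.
```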
|number-theory|diophantine-equations|
0
Fermat's Two Square Theorem: How do you prove that an odd prime is the sum of two squares iff it is congruent to 1 mod 4?
It is a theorem in elementary number theory that if $p$ is a prime and congruent to 1 mod 4, then it is the sum of two squares. Apparently there is a trick involving arithmetic in the gaussian integers that lets you prove this quickly. Can anyone explain it?
Let $p=a^2+b^2$ be an odd prime. Since $p$ is odd, $a$ and $b$ can't both be even, nor can they both be odd. So one of them is odd and one is even. Without loss of generality, let $a=2k+1$ be odd and $b=2l$ be even. Then $$p=a^2+b^2=(2k+1)^2+(2l)^2=4(k^2+k+l^2)+1.$$ So clearly $p\equiv 1\pmod{4}$. Now suppose $p\equiv 1\pmod{4}$ is prime; we will show that there exist integers $a$ and $b$ such that $p=a^2+b^2$. Since $p\equiv 1\pmod{4}$, we have $p=4k+1$. First, we claim that there exists some $\alpha\in\mathbb{Z}$ such that $p\mid\alpha^2+1$. Note that $$\genfrac{(}{)}{}{}{-1}{p}=(-1)^{\frac{p-1}{2}}=(-1)^{2k}=1.$$ So $-1$ is a quadratic residue (mod $p$), and there exists $\alpha$ such that $\alpha^2\equiv -1\pmod{p}$, i.e. $p\mid\alpha^2+1$. Clearly $p\nmid \alpha$, otherwise $\alpha^2+1\equiv 1\pmod{p}$. Now, let $x$ and $y$ range over all the numbers $0,1,2,\dots,\left\lfloor\sqrt{p}\right\rfloor$. There are a total of $\left(\left\lfloor\sqrt{p}\right\rfloor+1\right)^2>p$ choices for the pair $(x,y)$, but only $p$ residues modulo $p$, so by pigeonhole two distinct pairs give the same value of $\alpha x - y \pmod{p}$. Taking differences $a=x_1-x_2$, $b=y_1-y_2$ gives $\alpha a\equiv b\pmod p$, hence $a^2+b^2\equiv a^2(1+\alpha^2)\equiv 0\pmod p$, while $0<a^2+b^2<2p$; therefore $a^2+b^2=p$ (this is Thue's lemma).
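A small brute-force illustration (not a proof) of the dichotomy, using a helper of my own:

```python
from math import isqrt

def two_squares(p):
    """Return (a, b) with a^2 + b^2 == p, or None if no such pair exists."""
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None
```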
|elementary-number-theory|prime-numbers|
0
Is there a closed-form expression for the series $\sum_{n=0}^\infty \frac{n x^n}{1 - \left( \frac{x}{1+x} \right)^n}$?
During a homework assignment, I ran into this series: $$S(x) = \sum_{n=0}^\infty \frac{n x^n}{1 - \left( \frac{x}{1+x} \right)^n}$$ Graphing $S(x)$ shows that it converges for some values of $x$ , mainly near $x=0$ and near $x=-1$ . I was wondering if $S$ has a closed form. Despite my best efforts, I was not able to find one online or through my own analysis. It should be noted that the summand is ill-defined for $n=0$ . It does, however, possess a limit. In particular: $$\lim_{n \to 0} \! \left ( \frac{n x^n}{1 - \left( \frac{x}{1+x} \right)^n} \right ) = \frac{1}{\ln \! \left( 1+\frac{1}{x} \right)}$$ This limit should be taken as the $n=0$ term.
This is not an answer, but the continuous case is interesting: $$ T(x) = \int_{1}^\infty \frac{n\, x^n}{1 - y^n}\,dn=\int_{1}^\infty\frac{n\, e^{a n}}{1-e^{b n}}\,dn$$ with $a=\log x$ and $b=\log y=\log\frac{x}{1+x}$, where $0<x<1$ so that $a,b<0$. Then $$\large\color{blue}{T(x)=\frac{e^a}{a\, b^2} \left(a\, \Phi \left(e^b,2,\frac{a}{b}\right)-b^2 \,\, _2F_1\left(1,\frac{a}{b};\frac{a+b}{b};e^b\right)\right)}$$ If we compare with $$ S(x) = \sum_{n=1}^\infty \frac{n\, x^n}{1 - y^n}$$ (for $0.05 \leq x \leq 0.95$), we have a correlation coefficient $R=0.999938$.
|sequences-and-series|closed-form|
0
Exchange case in proving interpolation theorem by induction on the length of proof tree
I'm trying to prove Craig's interpolation theorem of propositional logic using Maehara's method, by induction on the length of the proof tree in sequent calculus. The theorem is as stated below: $$ \forall \Gamma_1 \Gamma_2 \Delta_1 \Delta_2: \Gamma_1,\Gamma_2 \vdash \Delta_1,\Delta_2 $$ $$ \rightarrow \exists c: \Gamma_1 \vdash c,\Delta_1 \land c,\Gamma_2 \vdash \Delta_2$$ $$\land (atomsof(c) \subseteq atomsof(\Gamma_1,\Delta_1) \cap atomsof(\Gamma_2,\Delta_2))$$ and here I'm stuck in the case where the proof ends with an exchange rule like this: $$ \frac{\Gamma_1,\Gamma_2 \vdash \Delta,a,b,\Delta'}{\Gamma_1,\Gamma_2 \vdash \Delta,b,a,\Delta'}$$ Here the induction hypothesis gives us $$ \forall \Delta_1' \Delta_2': \Delta,a,b,\Delta' = \Delta_1',\Delta_2' $$ $$ \rightarrow \exists c': \Gamma_1 \vdash c',\Delta_1' \land c',\Gamma_2 \vdash \Delta_2'$$ $$\land (atomsof(c') \subseteq atomsof(\Gamma_1,\Delta_1') \cap atomsof(\Gamma_2,\Delta_2'))$$ and we have to deduce the corresponding statement $$ \forall \Delta_1 \Delta_2: \Delta,b,a,\Delta' = \Delta_1,\Delta_2 \rightarrow \exists c: \Gamma_1 \vdash c,\Delta_1 \land c,\Gamma_2 \vdash \Delta_2.$$
In order to prove Craig's interpolation theorem for propositional logic using Maehara's method, I would propose to modify the calculus. Consider a similar calculus where the left and right parts of sequents are finite multisets (instead of finite sequences). This modified calculus does not have explicit exchange rules. First, one can prove that the modified calculus is equivalent to the original one. After that, one can prove the theorem as you stated it (by induction). Another approach would be to use finite sequences, but modify the claim that we prove by induction. We would need to consider sequents of the form $\Gamma_1^1,\Gamma_2^1,\Gamma_1^2,\Gamma_2^2,\ldots,\Gamma_1^k,\Gamma_2^k \vdash \Delta_1^1,\Delta_2^1,\Delta_1^2,\Delta_2^2,\ldots,\Delta_1^k,\Delta_2^k$ and construct an interpolant $c$ such that $\Gamma_1^1,\Gamma_1^2,\ldots,\Gamma_1^k \vdash c,\Delta_1^1,\Delta_1^2,\ldots,\Delta_1^k$ and $c,\Gamma_2^1,\Gamma_2^2,\ldots,\Gamma_2^k \vdash \Delta_2^1,\Delta_2^2,\ldots,\Delta_2^k$.
|logic|interpolation|sequent-calculus|
0
On Tu's definition of tangent space at a point of $\mathbb{R}^n$
In Tu's book, the tangent vector at $p \in U \subseteq \mathbb{R}^n$ is defined as an arrow emanating from $p$. However, when passing to manifolds, such an approach cannot be used directly: we should fix a chart $(U,\phi)$ about $p$ and then decree that a tangent vector is an arrow emanating from $\phi(p)$. In such a case, changing the chart around $p$ (take for instance $(V,\psi)$), a priori the arrow emanating from $\psi(p)$ is different from the one emanating from $\phi(p)$, so the definition of tangent vector so given is ambiguous. My question is: $\mathbb{R}^n$ is itself a manifold, and on it we can take another atlas instead of the obvious one. Why does the previous definition of tangent space work? Also in this case one should have the same problem.
One of the main takeaways of linear algebra is that it doesn't make sense to speak of a vector by itself; vectors only make sense in the broader context of a vector space. So the definition that Tu makes is that you define the tangent space in the chart, and a tangent vector is a vector in that space. The definition indeed a priori depends on the chart, but you then show that it doesn't essentially depend on the chart. For this, you take the inverse of the first chart and compose it with the second chart (everything on their intersection, of course). This gives you a diffeomorphism from an open set in $\mathbb R^n$ to an open set in $\mathbb R^n$. Now the tangent spaces are identified by a linear isomorphism, which is the differential at the point $\phi(p)$ of this diffeomorphism. This definition has the advantage of giving an intuition for what the tangent space is, but the disadvantage that it makes the tangent space practically useless. For the purpose of usefulness, it is better to use one of the chart-independent definitions, for instance tangent vectors as derivations or as equivalence classes of curves.
|differential-geometry|
0
$(\sin(\frac{\pi }{3})+i \cos(\frac{\pi }{3}))^8$
Express $(\sin(\frac{\pi }{3})+i \cos(\frac{\pi }{3}))^8$ in $a+bi$ form. I used De Moivre's theorem, so I got $\sin\left(\frac{8\pi}{3}\right)+i \cos \left(\frac{8\pi}{3}\right)$; does that yield my answer upon evaluation?
You need to be careful: DeMoivre's theorem says $$(\color{red}{\cos} \theta + i \color{blue}{\sin} \theta)^n = \cos n \theta + i \sin n \theta.$$ Note that what you have instead is $$(\color{blue}{\sin} \theta + i \color{red}{\cos} \theta)^n.$$ So what you need to do first is find a way to switch the cosine and sine in your expression. One way to do this is to note that $$\cos\left(\tfrac{\pi}{2} - \theta\right) = \sin \theta,$$ and similarly, $$\sin\left(\tfrac{\pi}{2} - \theta\right) = \cos \theta.$$ So what does this do to your expression? Alternatively, you may observe $$(\sin \theta + i \cos \theta)^n = i^n (-i \sin \theta + \cos \theta)^n = i^n (\cos (-\theta) + i \sin (-\theta))^n.$$ In any case, after you perform the switch to get the correct form to apply DeMoivre's theorem, you will then want to evaluate $\cos n\theta$ and $\sin n\theta$ .
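A numeric cross-check of the final evaluation: since $z=\sin\frac\pi3+i\cos\frac\pi3=e^{i\pi/6}$, we get $z^8=e^{i4\pi/3}=-\tfrac12-\tfrac{\sqrt3}{2}i$.

```python
import math

z = complex(math.sin(math.pi / 3), math.cos(math.pi / 3))  # sin 60 deg + i cos 60 deg
w = z ** 8
expected = complex(-0.5, -math.sqrt(3) / 2)                # cos(4pi/3) + i sin(4pi/3)
```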
|complex-numbers|
0
Determining the colon ideal in a polynomial ring
I am a grad student having background in a Commutative Algebra. I need help with the following. Let $$ \begin{aligned} I = \langle &x_1^4,\, x_2^4,\, x_3^4,\, x_4^4, \\ &x_1^3 x_2^3,\, x_1^3 x_3^3,\, x_1^3 x_4^3,\, x_2^3 x_3^3,\, x_2^3 x_4^3,\, x_3^3 x_4^3, \\ &x_1^2 x_2^2 x_3^2,\, x_1^2 x_3^2 x_4^2,\, x_2^2 x_3^2 x_4^2,\, x_1^2 x_2^2 x_4^2, \\ &x_1 x_2 x_3 x_4 \rangle \end{aligned} $$ be an ideal in a polynomial ring $R=\mathbb{R}[x_1,x_2,x_3,x_4]$ . Question: Prove that there does not exist any $f\in R$ with $\mathrm{deg}(f)\leq 5$ such that $(I:f)=\langle x_1,x_2,x_3,x_4 \rangle$ . Here $(I:f)=\{r \in R: rf \subseteq I\}$ . Any help would be appreciated. Thank you.
We have $(I:f)\supsetneq\langle x_1,x_2,x_3,x_4 \rangle$ if and only if $1\in (I:f)$, which happens if and only if every monomial in $f$ is divisible by one of the listed monomial generators of $I$. Thus if $(I:f)= \langle x_1,x_2,x_3,x_4 \rangle$, one of the monomials $m$ of $f$ must not be divisible by any of the listed generators of $I$. On the other hand, increasing any of the exponents of $x_1,x_2,x_3,x_4$ in $m$ would result in a monomial divisible by one of the listed generators of $I$. Let $e_1, e_2, e_3,e_4$ denote these exponents respectively. We know that $e_{i_1}=0$ for some $i_1\in\{1,2,3,4\}$ (or $m$ would be divisible by $x_1x_2x_3x_4$). Now $mx_{i_1}$ must be divisible by one of the listed generators of $I$. Any such generator dividing $mx_{i_1}$ must be divisible by $x_{i_1}$, or else it would divide $m$. Thus the generator must have exponent exactly $1$ on $x_{i_1}$ (or it would not divide $mx_{i_1}$). We conclude the generator must be $x_1x_2x_3x_4$, so $e_j\geq 1$ for every $j\neq i_1$.
|abstract-algebra|polynomials|ring-theory|commutative-algebra|groebner-basis|
1
prove the existence of a fixed point
Prove that every holomorphic function $f$ on the closed disk $\overline{D}(0,1)$ with $|f(z)|<1$ for $z\in \overline{D}(0,1)$ has at least one fixed point in $D(0,1)$. My attempt: Since $f$ is holomorphic on $\mathbb{D}$, $f$ is either constant or attains its maximum modulus on the boundary. If $f$ is constant, we are done. If $f$ is not constant, then $|f|$ attains its maximum on $\partial\mathbb{D}$. According to the maximum modulus principle, we have $$|f(z)|\leq |f(z_0)|\quad\forall z\in\mathbb{D}\quad (\text{for some }|z_0|=1).$$ I don't know how to continue, and I haven't used the hypothesis $|f(z)|<1$. Could someone give me an idea how to solve this problem?
Consider $g(z)=-z$ . Then, in the unit circle, $|f(z)| . By Rouche's Theorem, $f(z)+g(z)=-z+f(z)$ has the same number of roots as $-z$ on $\overline{D}(0,1)$ . But the roots of $-z+f(z)$ are the fixed points of $f$ and $-z$ has one root in $\overline{D}(0,1)$ , so not only does $f$ have one fixed point, it is unique!
|complex-analysis|
0
prove the existence of a fixed point
Prove that every holomorphic function $f$ on the closed disk $\overline{D}(0,1)$ with $|f(z)|<1$ for $z\in \overline{D}(0,1)$ has at least one fixed point in $D(0,1)$. My attempt: Since $f$ is holomorphic on $\mathbb{D}$, $f$ is either constant or attains its maximum modulus on the boundary. If $f$ is constant, we are done. If $f$ is not constant, then $|f|$ attains its maximum on $\partial\mathbb{D}$. According to the maximum modulus principle, we have $$|f(z)|\leq |f(z_0)|\quad\forall z\in\mathbb{D}\quad (\text{for some }|z_0|=1).$$ I don't know how to continue, and I haven't used the hypothesis $|f(z)|<1$. Could someone give me an idea how to solve this problem?
Argument using Contraction Mapping Theorem: There is a point $z_0$ in the closed disk at which $|f|$ attains its maximum. If $r=|f(z_0)|$ then $r<1$ and $|f(z)| \le r$ for all $z$ . By Cauchy's estimates (Ref. Rudin's RCA) we get $|f'(z)| \le r$ for all $z$ and this makes $f$ a contraction mapping of the closed unit disk. Hence, there is a unique fixed point. [ $|f(u)-f(v)|=|\int_{[u,v]} f'(w)\,dw|\leq r|u-v|$ where $[u,v]$ is the line segment from $u$ to $v$ ].
|complex-analysis|
0
If A and B are two subsets of a set X, prove that A ⊆ B iff X-B ⊆ X-A.
If A and B are two subsets of a set X, prove that A ⊆ B iff X-B ⊆ X-A. My attempt: Let A ⊆ B and let $x \in X-B$ . Then $x \in X$ but $x \notin B$ ; since A ⊆ B, $x \notin A$ . Therefore $x \in X$ but $x \notin A$ , i.e., $x \in X-A$ , so X-B ⊆ X-A. I don't know how to prove the converse part; please help me.
If you have a proof that $A \subseteq B \implies X-B \subseteq X-A$ , then you can reuse it for the converse: apply it with $X-B$ in the role of $A$ and $X-A$ in the role of $B$ . This gives $X-B \subseteq X-A \implies X-(X-A) \subseteq X-(X-B)$ , and since $A$ and $B$ are subsets of $X$ we have $X-(X-A)=A$ and $X-(X-B)=B$ , so this says exactly $X-B \subseteq X-A \implies A \subseteq B$ .
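The biconditional can also be sanity-checked by brute force over all pairs of subsets of a small ambient set (a check, of course, not a proof):

```python
from itertools import combinations

X = {0, 1, 2}

def subsets(s):
    # all 2^|s| subsets of s
    items = list(s)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

# check A ⊆ B  ⇔  X−B ⊆ X−A for every pair of subsets of X
ok = all((A <= B) == ((X - B) <= (X - A))
         for A in subsets(X) for B in subsets(X))
print(ok)  # True
```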
|elementary-set-theory|
0
Why do you require connectedness in the open mapping theorem and maximum modulus principle?
I cannot understand why we require the connection in these two theorems (or at least I've been taught this way, and also in some books you have it). The first states that every non-constant analytic function defined from an open and connected subset $U$ of $\mathbb{C}$ to $\mathbb{C}$ is open. The second says that if the norm of an analytic function defined in an open and connected subset $U$ of the complex plane has a real maximum in a point internal to $U$, then the function is constant. I'll try to prove these two theorems without using connection (or maybe I use that without realising). First, I want to prove that for every analytic function $f$ defined in an open subset $U$ such that $f(z) \ne 0 \forall z \in U$ and $\forall n$ positive integers, there is an analytic function $g$ such that $f(z)=g(z)^{n}$.(Here the teacher supposed $U$ to be connected or maybe even simply connected, it isn't clear in my notes). You just have to consider a primitive $F$ of $\frac{f'}{f}$ and than i
When proving the Open Mapping Theorem , in order to use the Argument Principle , it's necessary to choose a certain disk with its center at some preimage of an open set's image point, ensuring careful selection of the radius of this disk to guarantee that there is only one preimage point within the disk. This fact relies on the property that the zeros of non-constant holomorphic functions on a connected open set do not accumulate. As for why this property of zeros not accumulating needs to be on a domain, it's because we establish this property by demonstrating that the zero set, being both open and closed, combined with connectedness, leads to the proof. And the Maximum Modulus Principle is a straightforward corollary of the Open Mapping Theorem, while the Local Maximum Principle requires the use of an additional Identity Theorem . The Identity Theorem also holds on connected open sets.
|complex-analysis|analyticity|holomorphic-functions|analytic-functions|
0
Does a Riemannian submersion map horizontal geodesics to geodesics, and a relevant question?
Setup: Let $\pi:(M,g)\to (N,h)$ be a surjective Riemannian submersion, i.e. $\forall p\in M, D\pi_p$ is surjective between the respective tangent spaces, $T_pM=H_pM \oplus V_pM$ ( $g_p$ -orthogonal direct sum), and $H_pM$ is isometric to $T_qN, q:=\pi(p)$ via $D\pi_p$ , i.e. $h_q(D\pi_p(v), D\pi_p(w))=g_p(v,w)\ \forall v,w \in H_pM$ (this defines the Riemannian submersion part, isometry of the horizontal part of the tangent space with the tangent space of the image/quotient). Questions: Is it true that $\pi$ maps horizontal geodesics in $M$ to geodesics in $N?$ Perhaps relevant is this MO question: initially horizontal geodesics are always horizontal . I can't help thinking of the way we show that an isometry $\phi$ maps geodesics to geodesics: the idea is to show for vector fields $X,Y$ that $\phi_{*}(\nabla^M_X {Y})= \nabla^N_{\phi_{*}X}{\phi_{*}Y}.$ (John Lee: Riemannian Manifolds: An Introduction to Curvature, p. 71). Should we show the same here for $X, Y$ horizontal vector fiel
Yes, the horizontal lifts of $N$ -geodesics are $M$ -geodesics if $\pi: M\to N$ is a Riemannian submersion. And yes, a Riemannian submersion commutes with the geodesic flow in horizontal directions.
|geometry|differential-geometry|riemannian-geometry|
0
How to approach a Hyperbolic Integral that doesn't appear to be solvable in closed form.
I'm interested in tackling the following integral: $$\int_{-\ln (2+\sqrt 5)}^{\ln (2+\sqrt 5)} \sqrt{4+\sinh^2(x)} dx$$ While I've attempted various techniques, it appears challenging to find a closed-form solution for this integral. I'm beginning to suspect that it might not have one. Do you have any insights into expressing it as an infinite series or in terms of special functions? Any guidance or suggestions on alternative approaches would be greatly appreciated. Thank you for your assistance!
$$I=\int\sqrt{a+\sinh^2(x)}\, dx$$ $$x=it \quad \implies \quad I=i\int \sqrt{a-\sin^2(t)}\, dt$$ So, we face a typical elliptic integral $$I=i\, \sqrt{a} \,E\left(t\left|\frac{1}{a}\right.\right)$$ Back to $x$ $$I=-i\, \sqrt{a}\, E\left(i x\left|\frac{1}{a}\right.\right)$$ $$J=\int_{-p}^{+p}\sqrt{a+\sinh^2(x)}\, dx=-2 i\, \sqrt{a}\, E\left(i p\left|\frac{1}{a}\right.\right)$$
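A numerical cross-check of the final formula, assuming mpmath's convention $E(\varphi\,|\,m)=\int_0^\varphi\sqrt{1-m\sin^2\theta}\,d\theta$ (which matches the one used here), with $a=4$ and $p=\ln(2+\sqrt5)$:

```python
from mpmath import mp, mpf, sqrt, sinh, log, quad, ellipe

mp.dps = 30
a = mpf(4)                   # the "a" in sqrt(a + sinh^2 x)
p = log(2 + sqrt(5))         # upper limit of integration

# direct numerical quadrature of the original integral
direct = quad(lambda x: sqrt(a + sinh(x) ** 2), [-p, p])

# the closed form J = -2i sqrt(a) E(ip | 1/a) from above
formula = -2j * sqrt(a) * ellipe(1j * p, 1 / a)

print(direct)    # the integral by quadrature
print(formula)   # should be the same real number (imaginary part ~ 0)
```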
|calculus|integration|hyperbolic-functions|
0
What is the name of this functor's property?
Assume there is a functor $L$ from a category $C$ to a category $D$ which satisfies the following property: for any objects $X,Y,Z$ from $C$ and morphisms $f\colon X\to Y, g\colon X\to Z$ such that $L(g)=\varphi\circ L(f)$ for some morphism $\varphi\colon L(Y)\to L(Z)$ there is a morphism $h\colon Y\to Z$ such that $\varphi=L(h)$ and $g=h\circ f$ . What is the name of this property?
In case we require this $h$ to be unique, then the property that you wrote down is saying that $f$ is an $L$ -cocartesian morphism (see Definition 2.1 here ), so $L$ is a functor for which every morphism in $C$ is $L$ -cocartesian. On its own, this does not bring us much (except that all fibers of $L$ are groupoids), but we could additionally ask for the following: for every morphism $a:A\to B$ in $D$ and every object $A'$ in $C$ such that $LA'=A$ , there exists a morphism $a'\colon A'\to B'$ in $C$ such that $La'=a$ . If this is also satisfied, then your functor $L$ is a Grothendieck opfibration fibered in groupoids , also called a left fibration in more modern higher category theory. (By the Grothendieck construction, this would mean that $L$ classifies a $(2,1)$ -functor $D\to\mathsf{Grpd}$ , where $\mathsf{Grpd}$ is the $(2,1)$ -category of groupoids. Informally, this functor is of the form $d\mapsto L^{-1}(d)$ .) If we do not require $h$ to be unique, then I am not aware of any de
|category-theory|terminology|functors|
1
Isometric copy of the quotient $N$ embedded in the domain $M$ when $\pi:M\to N$ is a surjective Riemannian submersion?
Let $\pi:(M,g)\to (N,h)$ be a surjective Riemannian submersion, i.e. $\forall p\in M, D\pi_p$ is surjective between the respective tangent spaces, $T_pM=H_pM \oplus V_pM$ ( $g_p$ -orthogonal direct sum), and $H_pM$ is isometric to $T_qN, q:=\pi(p)$ via $D\pi_p$ , i.e. $h_q(D\pi_p(v), D\pi_p(w))=g_p(v,w)\ \forall v,w \in H_pM$ (this defines the Riemannian submersion part, isometry of the horizontal part of the tangent space with the tangent space of the image/quotient). Define $S:=\exp_p(H_pM)\subset M.$ My questions are: Is $S$ an embedded totally geodesic submanifold of $M?$ It seems to me yes, since it has been defined via the exponential map at $p.$ But while attempting to prove it, I'm running into a problem: say $p':=\exp^M_p(v)\in S$ where $v\in H_pM,$ and then let $w\in T_{p'}S \subset T_{p'}M.$ How do I show that the $S$ -geodesic $t\mapsto \exp^S_{p'}tw$ is also an $M$ -geodesic, i.e. $\exists u\in T_{p'}M$ so that $\exp^S_{p'}tw = \exp^M_{p'}tu$ ? An illustrative diagram is be
As already pointed out in the comments, $S$ will generally not be totally geodesic (because in "transverse" directions it is not necessarily horizontal). You should keep some basic examples in mind when thinking of such questions. Thus, if you consider the Hopf fibration $S^3\to S^2$ , it should be obvious that the answer to your second question is negative.
|geometry|differential-geometry|riemannian-geometry|geodesic|
0
An integral question which i have never encountered before
I = $\int _{0}^{3}\left( 1+x^{2}\right) d[ x]$ $a)12$ $b)17$ $c)15$ $d)19$ where $[x]$ is the greatest integer less than or equal to $x$ . Shouldn't this integral be $0$ , since $d[x]$ is $0$ for all $x$ in the domain of the function? What is the correct explanation?
If you think of integration as a sum of areas of rectangles under the function, it will be easy. The breadth of a rectangle is generally $dx \to 0$ , but here $d[x]$ is $0$ everywhere except at the integers, where $[x]$ jumps by $1$ . So the sum $\sum f(x)\,d[x]$ only picks up the values of $f$ at the jump points $x = 1, 2, 3$ : it is $f(1)\cdot 1 + f(2)\cdot 1 + f(3)\cdot 1$ . So the answer becomes $2×1 + 5×1 + 10×1 = 17$ . I hope this might help you.
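A one-line check of the jump-sum reading of the integral ($[x]$ jumps by $1$ at each integer in $(0,3]$, so the Riemann–Stieltjes integral collects $f$ at those points):

```python
# f(x) = 1 + x^2; the measure d[x] places a unit jump at x = 1, 2, 3
f = lambda x: 1 + x ** 2
value = sum(f(k) for k in (1, 2, 3))
print(value)  # 17
```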
|calculus|integration|
0
Maximum Entropy and Minimum Divergence
Let random variable $X$ be defined over alphabet $X = \{-2, 0, 2\}$ . a) Find the distribution $p(x)$ that maximizes the entropy $H(X)$ while maintaining $E\{|X|\} = \theta$ , where $\theta \in [0, 2]$ . What is this maximum $H(X)$ ? b) Find distribution $q(x)$ that minimizes the divergence $D[q||p]$ , with respect to the entropy maximizing $p(x)$ of part a, subject to the constraint $E\{|X|\} = \phi$ , where $\phi \in [0, 2]$ . What is the entropy of $q(x)$ ? My attempt: Part(a): I used the Lagrangian method and found that $$p_i = \frac{e^{-\lambda|i|}}{\sum_k e^{-\lambda |k|}},$$ such that $i, k\in X$ . That is $p_i$ is Gibbs distribution. I am stuck at the point of imposing the average constraint. Part(b): I started with the Lagrangian method and found that $$q_i = \frac{p_i e^{-\lambda|i|}}{\sum_k p_k e^{-\lambda |k|}},$$ However, I need to impose now two constraints ( $\theta, \phi$ ), which I am not sure how to do. Any Help! Thanks in advance.
Well, for part $a)$ , you don't even really need to do the multiplier stuff. $|X|$ is supported either on $0$ or $2$ . There is only one binary law supported on $\{0,2\}$ with mean $\theta,$ namely $P(|X| = 2) = \theta/2.$ Then to maximise entropy the mass should be spread equally on $+2$ and $-2$ , giving you the law $p = (\theta/4, 1-\theta/2, \theta/4)$ . If you prefer the explicit multipliers approach, then you need to work out $\mathbb{E}_P[|X|]$ in terms of $\lambda,$ and solve for the constraint. In other words, you need to solve the equation $$ \sum |i| p_i = \theta. $$ This is straightforward to work out. $$ \sum |i|p_i = \frac{2 e^{-2\lambda} + 0 + 2 e^{-2\lambda}} {e^{-2\lambda} +1 + e^{-2\lambda}} = \theta\iff (4-2\theta)e^{-2\lambda} = \theta \iff e^{-2\lambda} = \frac{\theta }{4-2\theta}.$$ Then $2 e^{-2\lambda} + 1 = \frac{2\theta}{4-2\theta} + 1 = \frac{4}{4-2\theta},$ and so your law works out to $$ P(2) = P(-2) = \frac{e^{-2\lambda}}{2e^{-2\lambda} + 1} = \frac{\theta}
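A quick numerical check of part a) (a grid search, not a proof): among all laws on $\{-2,0,2\}$ with $E|X|=\theta$, entropy is maximized at the symmetric split $P(2)=P(-2)=\theta/4$. Here $\theta=1$:

```python
from math import log

def H(ps):
    # natural-log entropy, skipping zero masses
    return -sum(p * log(p) for p in ps if p > 0)

theta = 1.0
# the constraint E|X| = theta forces P(X=2) + P(X=-2) = theta/2;
# grid-search over how that mass is split between +2 and -2
grid = [i / 10000 * (theta / 2) for i in range(10001)]
best = max(grid, key=lambda a: H((a, 1 - theta / 2, theta / 2 - a)))
print(best)  # 0.25, i.e. theta/4: the symmetric split maximizes H
```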
|optimization|information-theory|entropy|
0
Smallest known unfactored composite number?
I'm trying to find examples of "small" numbers which are known to be composite, but for which no prime factors are known. According to this website the number $109!+1$ is a composite number of 177 digits, but no factors are known. However, I can't find anything more up-to-date; maybe that number has been factored now; maybe there is a smaller unfactored composite number. Anyway: does anybody know the smallest known composite number for which no prime factors are known? -- Addendum. With further exploration of the above pages I've found that the Wolstenholme number which is the numerator of $$\sum_{k=1}^{163}\frac{1}{k^2}$$ has 138 digits and is composite, and no factors are known, as of July 16, 2012. This is the smallest such number I've found so far. -- More: In the most recent (third) edition of the book of factorization of Cunningham numbers ($b^n\pm 1$) by Brillhart et al, the number $2^{1462}+1$ includes in its factorization a 130-digit composite number which at the time of publi
What about the following number consisting of 85 decimal digits found on http://factordb.com/listtype.php?t=3&mindig=85&perpage=100&start=100 as the third number on the list: $\dfrac{15019567^{17}-1}{33547074128942890979279691620109514914} = 3002765137097738438742260288410633997871343683756604808094241439840597239910460153959$ Put into https://www.alpertron.com.ar/ECM.HTM , it needs about 8 minutes to be factorised. When you go higher, you can find numbers consisting of 90 decimal digits that need about 40 minutes to be factorised.
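A compositeness check that exhibits no factor is easy to run for numbers like these; as a sketch, a single Fermat test on $109!+1$ (the 177-digit number mentioned in the question):

```python
from math import factorial

N = factorial(109) + 1   # 177-digit number, reported composite with no known factor
# Fermat test: if N were prime, Fermat's little theorem would force
# 2^(N-1) ≡ 1 (mod N); a residue ≠ 1 certifies compositeness
# without exhibiting any factor.
residue = pow(2, N - 1, N)
print(residue != 1)
```

This is essentially how such numbers are known to be composite in the first place: pseudoprime tests fail, even though no factor has been found.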
|reference-request|factoring|prime-factorization|
0
An Estimate in Calderon Zygmund for Periodic Function
I am reading the following paper of Calderon and Zygmund http://matwbn.icm.edu.pl/ksiazki/sm/sm14/sm14122.pdf On page 3 (252) they provide an estimate (2.3) which is following $$ |K(x-x_{\nu}) - K(-x_{\nu})| \leq \frac{1}{|x_{\nu}|^k} \omega\left(\frac{A}{|x_{\nu}|}\right) $$ where they have said earlier that A is a constant which depends at most on $\Omega$ but I am unable to get the following estimate, infact going through usual way to estimate i.e. writing $$ |K(x-x_{\nu}) - K(-x_{\nu})| \leq |\frac{\Omega\left(\frac{x-x_{\nu}}{|x-x_{\nu}|}\right) - \Omega\left(\frac{-x_{\nu}}{|x_{\nu}|}\right)}{|x-x_{\nu}|^k} | + |\Omega\left(-\frac{x_{\nu}}{|x_{\nu}|}\right)\left(\frac{1}{|x_{\nu}|^k} - \frac{1}{|x-x_{\nu}|^k} \right)| $$ I can make the first term of above form (with $A$ also depending on $x$ ) but the second term I am not able to, what I am getting is something of form $\frac{c|x|}{|x-x_{\nu}|^{k+1}}$ . Which can for $|x_{\nu}| > 2 |x|$ can be made in form $\frac{c|x|}{|x_{\nu}|^
In the second reference they have shown in a footnote that we may assume without loss of generality that $\omega(t) \geq t$ , and with this we can get such bounds.
|analysis|harmonic-analysis|singular-integrals|
1
Solve $\begin{cases} x^3+1=81(y^2+y)\\x^2+x=9(y^3+1) \end{cases}$
Solve $\begin{cases} x^3+1=81(y^2+y)\\x^2+x=9(y^3+1) \end{cases}$ $\Leftrightarrow \begin{cases} (x+1)(x^2-x+1)=81y(y+1)\\ x(x+1)=9(y+1)(y^2-y+1) \end{cases}$ Then I'm stuck. I'm not seeing any patterns here. Please help.
You could do $$ x^3 + 3x^2 + 3x + 1 = 27(y^3 + 3y^2 + 3y + 1) $$ by adding 3 times the second line to the first line, then this is just $$ (x+1)^3 = 27(y+1)^3 $$ So if $x,y \in \mathbb{R}$ , this implies $x+1 = 3(y+1)$ , i.e. $x = 3y+2$ . Substituting this back into the first equation gives $$ \begin{align} 27y^3 + 54y^2 + 36y + 9 = 81y^2 + 81y &\Leftrightarrow 27y^3 - 27y^2 - 45y + 9 = 0 \\ &\Leftrightarrow 3y^3 - 3y^2 - 5y + 1 = 0 \\ &\Leftrightarrow (y+1) \left(y-\left(1 - \sqrt{2/3}\right)\right) \left(y - \left(1+\sqrt{2/3}\right)\right) = 0 \\ \end{align} $$ So $(x,y) = (-1, -1)$ or $\left(5 - \sqrt{6}, 1 - \sqrt{2/3}\right)$ or $\left(5 + \sqrt{6}, 1 + \sqrt{2/3}\right)$ .
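The three solutions can be checked numerically against the original system:

```python
from math import sqrt

def residuals(x, y):
    # left minus right of both original equations; should be ~0 at a solution
    return (x ** 3 + 1 - 81 * (y ** 2 + y),
            x ** 2 + x - 9 * (y ** 3 + 1))

solutions = [
    (-1.0, -1.0),
    (5 - sqrt(6), 1 - sqrt(2 / 3)),
    (5 + sqrt(6), 1 + sqrt(2 / 3)),
]
for x, y in solutions:
    r1, r2 = residuals(x, y)
    print(f"x={x:.6f}, y={y:.6f}: residuals {r1:.2e}, {r2:.2e}")
```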
|algebra-precalculus|
1
What is a matrix's power if the exponent is real or complex?
This comes from a comment on this question, where the comment uses $A^{1/2}$ . I learned that for an integer $k$ th power of a matrix, if $A$ can be diagonalized as $XDX^{-1}$ then $A^k=XD^kX^{-1}$ , even when $k<0$ . So does the OP mean $A^{1/2}=XD^{1/2}X^{-1}$ ? What is a general definition of a matrix power when the exponent is real or complex?
The problem with square roots for matrices is the lack of uniqueness: the matrix ring is not an integral domain, hence the polynomial $X^2 -A$ can have infinitely many roots. For example $X^2 = 0$ in $\mathcal{M}_2(\mathbb{R})$ has all nilpotent matrices as solutions (and $0$ is diagonalisable). One way to achieve uniqueness is to work with symmetric positive definite matrices, which have a unique square root. Otherwise you could use, for $||| A-I||| < 1$ : $$A^{1/2} = (I + (A-I))^{1/2} = \sum_{k\geq 0} \binom{1/2}{k}(A-I)^k.$$ This gives you a (not unique) but canonical square root. Your solution would work and would be a canonical way to find a square root; to detail it, it is better to work with the eigenspaces: if you have $\lambda_1,\dots,\lambda_d$ as eigenvalues, you define $A^{1/2}_{|E_\lambda} = \lambda^{1/2} \,\mathrm{id}$ . But think about it: the uniqueness of the square root is not even verified in $\mathbb{C}$ and if we were to choose a canonical root, we would have to do it on $\mathbb{C} \setminus \mathbb{R
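As a numerical illustration of the binomial series (a sketch on a hypothetical symmetric matrix with $\|A-I\|<1$, so the series converges):

```python
import numpy as np

A = np.array([[1.2, 0.1],
              [0.1, 1.1]])         # hypothetical example with ||A - I|| < 1

B = A - np.eye(2)
S = np.eye(2)                      # k = 0 term of the series: binom(1/2, 0) I
term = np.eye(2)
coeff = 1.0
for k in range(1, 60):
    coeff *= (0.5 - (k - 1)) / k   # binom(1/2, k) from binom(1/2, k-1)
    term = term @ B                # (A - I)^k
    S += coeff * term

print(np.allclose(S @ S, A))       # True: S is a square root of A
```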
|matrices|
0
Probability distribution of the mean of a discrete random variable
While self-studying a probability book, I came up with a modified version of a problem that I don't know how to solve. In the original statement we have a salesman that visites houses selling books, and his probability of selling a book in a visit is 0.3. The problem asks what his sales average per day is, if he does a fixed amount of 20 visits per day. This is very easy, since the variable $X$ defined as the amount of successful sales on a day where he does 20 visits is distributed as a binomial, that is, $X\sim Bi(n=20, p=0.3)$ . So the average we are looking for is $\mu_X=np=20\cdot 0.3=6$ . However, what if we asked the following... The salesman will have to work on the weekends if his average of successful sales for the week is less than 10. What is the probability of him working on the weekend? I have no idea how to tackle this even though it sounds simple. Writing all the possible combinations of sales for the 5 days of the week that would result in an average of less than 5 wou
You initially have random variable $X$ , which is the amount of successful sales in a day . Next you need to define random variable $X'$ , which is the amount of successful sales in a week . Assuming that $X$ for each day is independent of the others, we can consider $X'$ as a sum of binomial distributions over 5 days. Thus: $$X' \sim Bi(\textstyle\sum n_i; p) \sim Bi(20 \cdot 5 = 100;\ 0.3)$$ From the cumulative distribution function we have: $$ P(X' \le 9) = \sum_{i=0}^{9} {100\choose i} \cdot0.3^i\cdot0.7^{100-i}$$ This doesn't give the average, but it definitely gives the probability of the salesman working at the weekend, kind of a lower bound.
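Following this answer's reading of the cutoff, the tail can be computed exactly with integer binomial coefficients:

```python
from math import comb

n, p = 100, 0.3
# exact tail P(X' <= 9) for X' ~ Bi(100, 0.3)
prob = sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(10))
print(prob)  # tiny: the weekly mean is n*p = 30 sales, far above 9
```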
|probability|
1
Can I always find two new linear independent vectors whose dotproduct with existing vector \ge0 in vector space $R^4$
Suppose $V$ is a 4-dimensional vector space, and $v_1$ and $v_2$ are linearly independent in $V$ . Now for an arbitrary plane $S$ that goes through the origin, can I find two new linearly independent vectors in the plane $S$ satisfying the condition: \begin{align} v_1 \cdot v_3 \ge 0 \ \& \ v_2 \cdot v_3 \ge0, & v_1 \cdot v_4 \ge0 \ \& \ v_2 \cdot v_4 \ge0, \tag{1} \end{align} (i.e. the new vectors' dot products with the existing vectors are always $\ge$ 0)? (I'm trying to be general, but basically I want $v_1$ , $v_2$ , $v_3$ , $v_4$ to be linearly independent; with general S, $v_3$ , $v_4$ should not be related to $v_1$ , $v_2$ , and $v_1$ , $v_2$ , $v_3$ , $v_4$ should span V. But say S1=span of { $v_1$ , $v_2$ }; when S=S1, $v_3$ , $v_4$ can only be in S1, which is linearly dependent on $v_1$ , $v_2$ , but under this special condition I still want to find linearly independent $v_3$ , $v_4$ that satisfy condition (1) so that the problem statement is a bit general.) If so, how to prove it? If not, is it because $v_1$ and $v_2$ have no cons
Take any vector $u$ orthogonal to both $v_1$ and $v_2$ and a vector $w$ in the plane spanned by $v_1$ and $v_2$ so that $w\cdot v_1<0$ and $w\cdot v_2>0$ . Now let $S$ be spanned by $u$ and $w$ . Any vector $au+bw$ in $S$ has $v_1\cdot (au+bw)=b\,v_1\cdot w<0$ and $v_2\cdot (au+bw)=b\,v_2\cdot w>0$ (if $b>0$ ; in the case $b<0$ the inequalities are the other way). This shows that $v_3$ and $v_4$ both need $b=0$ and then they are not linearly independent.
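The construction can be instantiated numerically with a hypothetical concrete choice of $u$ and $w$ (mine, for illustration): the two dot products of $au+bw$ with $v_1, v_2$ have absolute value $|b|$ and opposite signs, so condition (1) forces $b=0$.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0])
u = np.array([0.0, 0.0, 1.0, 0.0])   # orthogonal to v1 and v2
w = -v1 + v2                          # w·v1 = -1 < 0 < 1 = w·v2

# any s = a·u + b·w in S = span(u, w) has s·v1 = -b and s·v2 = b,
# so both dot products are >= 0 only when b = 0
samples = [(1.0, 0.5), (2.0, -1.0), (3.0, 0.0)]
dots = [((a * u + b * w) @ v1, (a * u + b * w) @ v2) for a, b in samples]
print(dots)  # each pair is (-b, b): opposite signs unless b = 0
```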
|vector-spaces|vectors|
0
Tangent bundle $TM\to M$ is an orientable bundle iff $M$ is orientable
This is Example 6.3 in Bott-Tu, which asserts a smooth manifold $M$ is orientable iff the tangent bundle $TM\to M$ is an orientable bundle. If $A=\{(U_\alpha,\psi_\alpha)\}$ is an atlas for $M$ , then for each $\alpha$ , there is a local trivialization $\phi_\alpha:TU_\alpha\to U_\alpha \times \Bbb R^n$ (where $n=\dim M$ ) given by $\sum_{i=1}^n a^i \dfrac{\partial }{\partial x^i}|_p$ where $\psi_\alpha=(x^1,\dots,x^n)$ . Clearly the transition function $g_{\alpha \beta}:U_\alpha\cap U_\beta \to GL_n(\Bbb R)$ equals the Jacobian $U_\alpha\cap U_\beta \to GL_n(\Bbb R)$ , $p\mapsto J(\psi_\alpha \circ \psi_\beta^{-1})(p)$ . Thus if $A$ is an oriented atlas, then the trivialization $\{(U_\alpha, \phi_\alpha)\}$ is oriented, and this proves one direction. But how does the opposite direction hold? (There is no explanation in the book)
Here is a sketchy idea. Consider the frame bundle of $TM$ , denoted by $F(TM)$ , whose fibre consists of bases of $T_xM$ above each point $x$ . Then take the pointwise quotient of $F(TM)$ by $GL^+_n(\Bbb R)$ (the matrices of positive determinant), and we get a bundle which has 2 points in each fibre. Show that this is diffeomorphic to the orientation double-cover of $M$ . An equivalent definition (which you can prove) of a bundle being orientable is that this cover (as a bundle) has a smooth section. Then it actually has two disjoint smooth sections and is diffeomorphic to $M\sqcup M$ . We can conclude $M$ is orientable.
|differential-geometry|smooth-manifolds|orientation|tangent-bundle|
0
Existence of ideal
Let $R$ be a ring, $I$ a finitely generated ideal of $R$ and $J$ an ideal of $R$ with $J \subset I$ . Show that $R$ has an ideal $M$ so that $J \subset M \subset I$ and for every ideal $N$ with $M \subset N \subset I$ $\Rightarrow$ $N=M$ or $N=I$ . If I choose $M:=I+J=\{a+b,a \in I, b \in J\}$ then the first part should be proved? For the second part I have to assume that if $N \neq M$ then it has to be equal to $I$ meaning there is at least one element in $N$ that is not in $I$ ? Thanks for any help!
There is a finite sequence $x_1,\cdots,x_n\in I$ such that $$I=J+(x_1)+\cdots+(x_n).$$ So among all such sequences there is one of minimal length, and so we may assume $x_1,\cdots,x_n$ has minimal length. Then $$L=J+(x_2)+\cdots+(x_n)$$ is a proper ideal of $I$ . Let $\mathscr{F} $ be the set of all proper ideals of $I$ that contain $L$ . Clearly, $\mathscr{F} $ is non-empty. Now an ideal $N$ that contains $L$ is in $\mathscr{F} $ if and only if $x_1\notin N$ . By Zorn's lemma, $\mathscr{F} $ has a maximal element, say $M$ . $M$ is the ideal satisfying the condition.
|linear-algebra|abstract-algebra|ring-theory|ideals|
1
A step in the proof of Fubini theorem (Theorem 2.36, Folland)
This is a first case of the proof of the Fubini-Tonelli theorem, given in Folland's Real Analysis. I'm confused with the line underlined in blue at the end (namely, 'the preceding argument applies to' part): $\newcommand{\blueunderline}[1]{\color{blue}{\underline{\color{black}{\text{#1}}}}}$ 2.36 Theorem. Suppose $(X, \mathcal{M}, \mu)$ and $(Y, \mathcal{N}, \nu)$ are $\sigma$ -finite measure spaces. If $E \in \mathcal{M} \otimes \mathcal{N},$ then the functions $x \mapsto \nu\left(E_{x}\right)$ and $y \mapsto \mu\left(E^{y}\right)$ are measurable on $X$ and $Y,$ respectively, and $$ \mu \times \nu(E)=\int \nu\left(E_{x}\right) d \mu(x)=\int \mu\left(E^{y}\right) d \nu(y) $$ Proof. First suppose that $\mu$ and $\nu$ are finite, and let $\mathcal{C}$ be the set of all $E \in$ $\mathcal{M} \otimes \mathcal{N}$ for which the conclusions of the theorem are true. If $E=A \times B$ , then $\nu\left(E_{x}\right)=\chi_{A}(x) \nu(B)$ and $\mu\left(E^{y}\right)=\mu(A) \chi_{B}(y),$ so clearly $E
Here's a proof that doesn't restrict the starting measure spaces: Given that $X \times Y$ can be written as the union of an increasing sequence $\left\{X_{j} \times Y_{j}\right\} $ of rectangles of finite measure, one can redefine the set $\mathcal{C}$ as the set of all $E \in$ $\mathcal{M} \otimes \mathcal{N}$ such that $E \cap\left(X_{j} \times Y_{j}\right) \in$ $\mathcal{M} \otimes \mathcal{N}$ satisfies the conclusions of the theorem. Then, following the same proof as for finite measure spaces, one proves that $\mathcal{C}$ is equal to $\mathcal{M} \otimes \mathcal{N}$ (i.e. for every $E \in$ $\mathcal{M} \otimes \mathcal{N}$ , $E \cap\left(X_{j} \times Y_{j}\right) \in$ $\mathcal{M} \otimes \mathcal{N}$ satisfies the conclusions of the theorem). In fact, analyzing the proof for finite measure spaces, Folland uses the fact that $\mu$ and $\nu$ are finite only to prove that the function $y \mapsto \mu\left(\left(E_{1}\right)^{y}\right)$ is in $L^{1}(\nu)$ ( $\implies$ $y \mapsto
|real-analysis|probability-theory|measure-theory|solution-verification|
0
How to Evaluate $1-5(\frac{1}{2})^3+9(\frac{(1)(3)}{(2)(4)})^3-13(\frac{(1)(3)(5)}{(2)(4)(6)})^3+...$
I want to evaluate $1-5(\frac{1}{2})^3+9(\frac{(1)(3)}{(2)(4)})^3-13(\frac{(1)(3)(5)}{(2)(4)(6)})^3+...$ I tried from the arcsin(x) series and got $\frac{1-z^4}{(1+z^4)^{\frac{2}{3}}}= 1-5(\frac{1}{2})z^4+9(\frac{(1)(3)}{(2)(4)})z^8-13(\frac{(1)(3)(5)}{(2)(4)(6)})z^{12}+...$ , but I got stuck with the $\frac{(2n-1)!!}{(2n)!!}$ terms and I don't know how to get $(\frac{(2n-1)!!}{(2n)!!})^3 $ . Can anyone help me evaluate this infinite sum? You can share your own way.
$$\frac{2}{\pi}=\sum_{n=0}^{\infty}(-1)^n\binom{2n}{n}^3\left(\frac{1+4n}{2^{6n}}\right)$$ One can prove this using Fourier-Legendre Expansions thereby completely bypassing the Elliptic Integral Pathway. First denote, $$P_s:=P_s(k)={_2F_1}\left[\begin{array}c-s,1+s\\1\end{array}\middle|\,k^2\right]$$ and $P'_s:=P_s(\sqrt{1-k^2})$ which represents the Alternate Elliptic Integrals for $$-s\in\left\{\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{6}\right\}$$ But that won't be the focus here, instead we will be using this when $s$ is an integer. Using Differential Equations, one may prove that $$\int_0^1 kP_a'P_b\ dk=\frac{\sin \pi b-\sin \pi a}{2\pi[b(b+1)-a(a+1)]}$$ and we can also prove that if $n$ is an integer then $P_n'=(-1)^nP_n$ And we also have the evaluation of $P_s$ at $k=1/\sqrt{2}$ $$P_s\left(\frac{1}{\sqrt{2}}\right)=\frac{\cos(\pi s/2)}{2^{s}}\binom{s}{s/2}$$ This results in for $a$ and $b$ integer, $$\int_0^1kP_aP_b\ dk=0,\quad a\neq b$$ $$\int_0^1kP^2_n\ dk=\frac{1}{2(2n+1)}
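The claimed identity $\sum_{n\ge0}(-1)^n(4n+1)\left(\frac{(2n-1)!!}{(2n)!!}\right)^3=\frac{2}{\pi}$ (Bauer's series, which is exactly the sum in the question) is easy to check numerically: the double-factorial ratio is updated by a recurrence, and averaging consecutive partial sums tames the slow alternating convergence:

```python
from math import pi

r = 1.0           # (2n-1)!!/(2n)!! = C(2n, n)/4^n, value at n = 0
partials = []
s = 0.0
for n in range(4001):
    s += (-1) ** n * (4 * n + 1) * r ** 3
    partials.append(s)
    r *= (2 * n + 1) / (2 * n + 2)   # advance the double-factorial ratio

# consecutive partial sums of this alternating series bracket the limit,
# so their average is a much better estimate
estimate = (partials[-1] + partials[-2]) / 2
print(estimate, 2 / pi)
```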
|sequences-and-series|gamma-function|ramanujan-summation|
0
Help with functional property
I am sorry, but I do not know how to search this. I apologize in advance. Assume I have a set $X$ and a map $f:X\rightarrow X$ ; there are $n$ elements $x_1,... x_n\in X$ such that $f(x_k)=x_{k+1}$ and $f(x_n)=x_1$ . Does this property have a name?
I don't think this property has a name. It may help to note that it is equivalent to saying that there exists a nonempty finite subset $Y \subseteq X$ such that $f(Y) = Y$ . If you have to make use of this condition very often, you can give it your own name! For example you could write Definition. We say that a function $f : X \to X$ has property P if there exists a nonempty finite $Y \subseteq X$ such that $f(Y) = Y$ .
|functions|
1
Leibniz integral rule help
Let $$I:=\frac{\partial}{\partial \epsilon} \left[\epsilon \int^{b(\epsilon)}_{a(\epsilon)} x f(x) dx \right]_{\epsilon = 0}.$$ My textbook claims that, as a consequence of the Leibniz integral rule: $$I= \int^{b(0)}_{a(0)} x f(x)dx + f(b(0)) \ b(0) \left( \frac{\partial}{\partial \epsilon} b(\epsilon) \right)_{\epsilon=0} - f(a(0)) \ a(0) \left( \frac{\partial}{\partial \epsilon} a(\epsilon) \right)_{\epsilon =0} .$$ I found that (by an application of the product rule): $$I= \int^{b(0)}_{a(0)} x f(x)dx + \left[ \epsilon \left(\frac{\partial}{\partial \epsilon} \int^{b(\epsilon)}_{a(\epsilon)} x f(x) dx \right) \right]_{\epsilon = 0} = \int^{b(0)}_{a(0)} x f(x)dx + 0$$ which appears to be in contradiction with the first claim... Am I missing something here ?
What you are given is suitable for the Product Rule, and you get the correct $I$ . It is not suitable for the Leibniz Integral Rule as written, due to the extra product term, which the textbook is wrongly ignoring. Let us rewrite it to make it suitable for the Leibniz Integral Rule: $$ I = \frac{\partial}{\partial \epsilon} \left[\int^{b(\epsilon)}_{a(\epsilon)} \epsilon x f(x) dx \right]_{\epsilon = 0} $$ The consequence of the Leibniz Integral Rule is now: $$ \begin{align} I &= \int^{b(0)}_{a(0)} x f(x)dx + \left( \epsilon f(b(0)) \ b(0) \frac{\partial}{\partial \epsilon} b(\epsilon) \right)_{\epsilon=0} - \left( \epsilon f(a(0)) \ a(0) \frac{\partial}{\partial \epsilon} a(\epsilon) \right)_{\epsilon =0} \\ &= \int^{b(0)}_{a(0)} x f(x)dx + 0 \end{align} $$ It will match with the Product Rule ...
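One can confirm this numerically on a hypothetical example (my choice of $f$, $a$, $b$, not from the book): with $f(x)=e^x$, $a(\epsilon)=\epsilon$, $b(\epsilon)=1+\epsilon$, both routes should give $I=\int_0^1 x e^x\,dx = 1$, the boundary terms vanishing because of the factor $\epsilon$.

```python
from math import exp

def inner(eps, n=20000):
    # trapezoid-rule value of the integral of x e^x over [a(eps), b(eps)]
    a, b = eps, 1 + eps
    h = (b - a) / n
    ys = [(a + i * h) * exp(a + i * h) for i in range(n + 1)]
    return h * (sum(ys) - (ys[0] + ys[-1]) / 2)

def F(eps):
    return eps * inner(eps)   # the bracketed quantity before differentiation

h = 1e-5
I = (F(h) - F(-h)) / (2 * h)  # central difference for dF/d(eps) at eps = 0
print(I)  # ≈ 1, the value of the integral of x e^x over [0, 1]
```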
|derivatives|leibniz-integral-rule|
0
How can I calculate Fibonacci retracements levels for stocks correctly -- not like 99% of the world does it?
If a stock travels from 40 to 100, a common belief is that a 50% retracement means that of the 60 move, half is given back, or 30. For a 38.2% retracement, prevalent thinking is that it's .382(60), or 22.92 that the stock should give back, or 100 - $22.92 = 77.08. But, this assumes arithmetic scaling of the y-axis (the price axis), and investing isn't arithmetic. For example, 40 to 100 is a big move of 150%. The 50% retracement back to 70 suggests that a move from 40 to 70 equates to a move from 70 to 100. Is that true? If you, the investor, could choose whether you get to own the stock from 40 to 70 or 70 to 100, which scenario would you choose? The correct answer is 40 to 70, which is a 75% increase. What is the increase from 70 to 100? It is 42.9%. Seventy is NOT the middle of 40 and 100. To find the middle, take the geometric mean, or square root of the product of 40 and 100, to get 63.25. That's the level where we can say that a move from 40 to 63.25 is the same as 63.25 to 100. Y
I faced the same issue and had to develop a custom solution. Posting it here for anyone it helps:

```python
import numpy as np

levels = [0, 0.236, 0.382, 0.5, 0.618, 0.786, 1]

def fibonacci_levels_log_based(high_low):
    high, low = high_low
    log_pricelow, log_pricehigh = np.log10(low), np.log10(high)
    log_price_ratio = log_pricehigh - log_pricelow
    # Calculate Fibonacci levels in logarithmic scale
    log_fib_levels = [log_pricelow + level * log_price_ratio for level in levels]
    # Convert logarithmic levels back to the original price scale
    fib_levels = [10 ** level for level in log_fib_levels]
    return fib_levels

values = fibonacci_levels_log_based([100, 40])
print(values)  # the 0.5 level is sqrt(40 * 100) ≈ 63.25, the geometric mean
```
|logarithms|fibonacci-numbers|
0
How to approach a Hyperbolic Integral that doesn't appear to be solvable in closed form.
I'm interested in tackling the following integral: $$\int_{-\ln (2+\sqrt 5)}^{\ln (2+\sqrt 5)} \sqrt{4+\sinh^2(x)} dx$$ While I've attempted various techniques, it appears challenging to find a closed-form solution for this integral. I'm beginning to suspect that it might not have one. Do you have any insights into expressing it as an infinite series or in terms of special functions? Any guidance or suggestions on alternative approaches would be greatly appreciated. Thank you for your assistance!
Observe that the value $a=\log(2+\sqrt 5)$ satisfies $\sinh a=\frac 12((\sqrt 5+2)-(\sqrt 5-2))=2$ . So the substitution $t=\sinh x$ with (formally) $dt=\cosh x\; dx$ , $\cosh x=\sqrt{1+\sinh^2 x}=\sqrt {1+t^2}$ , leads to $$ \begin{aligned} I &=\int_{-a}^a\sqrt{4+\sinh^2x}\; dx =2\int_0^a\sqrt{4+\sinh^2x}\; dx \\ &=2\int_0^2\sqrt{4+t^2}\; \frac{dt}{\sqrt {1+t^2}} =4\int_0^2\sqrt{\frac{1+\frac 14t^2}{1+t^2}}\; dt \\ &\qquad\text{ now consider the above as an integral on a path in $\Bbb C$, set $t=-iu$, $u=it$, $dt=-i\; du$} \\ &=-4i\int_0^{2i}\sqrt{\frac{1-\frac 14t^2}{1-t^2}}\; dt \ , \end{aligned} $$ and compare with 19.2.5, Legendre's integrals , which is explicitly $\displaystyle E(\varphi,k)=\int_0^{\sin\varphi}\frac{\sqrt{1-k^2t^2}}{\sqrt{1-t^2}}\; dt$ , note that $\sin^{-1}(2i)=i\sinh^{-1} 2=ia$ , to obtain the formula $$ I =-4iE\left(\varphi=ia,k= \frac 12\right) =-4iE\left(\varphi=ia,m= \frac 14\right) \ . $$ Here, the second argument of the elliptic integral $m$ is in some sourc
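The two key numerical facts here, $\sinh(\ln(2+\sqrt5))=2$ and the equality of the integral before and after the substitution $t=\sinh x$, are easy to check with a short script:

```python
from math import sinh, log, sqrt

a = log(2 + sqrt(5))
print(sinh(a))   # 2.0 (up to rounding): sinh a = ((√5+2) − (√5−2))/2 = 2

def simpson(g, lo, hi, n=2000):
    # composite Simpson rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

I1 = 2 * simpson(lambda x: sqrt(4 + sinh(x) ** 2), 0, a)            # original form
I2 = 2 * simpson(lambda t: sqrt(4 + t ** 2) / sqrt(1 + t ** 2), 0, 2)  # after t = sinh x
print(I1, I2)    # the two forms agree
```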
|calculus|integration|hyperbolic-functions|
0
What kind of isomorphisms preserve matrix group?
A matrix group is defined to be a closed subgroup of $GL_n(\mathbb K)$ where $\mathbb K=\mathbb R,\mathbb C,\mathbb H$ . In the book Matrix Groups: An Introduction to Lie Group Theory , the author identifies $A\in GL_n(\mathbb R)$ with the matrix $diag(A,1)\in GL_{n+1}(\mathbb R)$ and thus proves that $GL_n(\mathbb R)$ is a matrix group. I understand this process as: $GL_{n}(\mathbb R)$ is isomorphic to a subgroup of $GL_{n+1}(\mathbb R)$ which is actually a matrix group. And this is also applied to show $(\mathbb R^n,+)\cong Trans(\mathbb R^n)$ is a matrix group. However, the author later states in prop 1.48 that although by a homomorphism $\varphi : G \to H$ between matrix groups with $\varphi G$ closed in $H$ (which implies $\varphi G$ is a matrix group) we get an isomorphism $\bar \varphi: G/\ker\varphi\to \varphi G$ , we cannot expect $G/\ker\varphi$ to be a matrix group. I don't understand why the last isomorphism cannot ensure the left side to be a matrix group while the right side is. What's the difference?
Let's start from your definition. $\mathbf{GL}_n(\mathbb{R})$ being closed in itself, it is a matrix group; no need to embed it somewhere. Maybe the definition you are quoting isn't the right one? Being a closed subgroup of something isn't preserved by isomorphism: take any non-closed subgroup $H$ of a group $G$ . Then $H$ is closed in itself, but not in $G$ . Therefore, if you are quoting it correctly, the book you are reading seems self-contradictory. Are you sure you didn't forget anything? The right definition might be: a topological group is a matrix group if it is isomorphic (algebraically and topologically) to a closed subgroup of $\mathbf{GL}_n(\mathbb{K})$ . If this is the case, everything seems OK, and maybe what the book is saying at the end is that if the morphism $\phi$ isn't continuous, then the quotient might not be a matrix group.
|abstract-algebra|group-theory|lie-groups|topological-groups|
1
One dimensional $\ell$-adic representations of $\mathbb{Z}_p$
I am reading the paper "Lectures on the Langlands program and conformal field theory" by E.Frenkel. In page 28, he computes the one-dimensional continuous representations of $\mathbb{Z}_p$ over $\mathbb{Q}_\ell$ , here $\ell \neq p$ . He chooses any $\ell$ -adic number $\mu$ such that $\mu-1 \in \ell \mathbb{Z}_\ell$ . He claims that for any natural number $n$ , $\mu^{p^n}-1 \in \ell^{p^n}\mathbb{Z}_\ell$ so it gives a continuous group homomorphism $\mathbb{Z}_p \to \mathbb{Q}_\ell^\times$ . I do not know why it is true. I think that we only have $\mu^{p^n}-1 \in \ell\mathbb{Z}_\ell$ . So I want to know how to compute these one-dimensional representations actually?
Yes this is false as the example $\ell = 3$ , $\mu = 4$ , $p = 2$ , $n = 1$ shows. Firstly, any continuous representation has to actually map into $\mathbb{Z}_\ell^\times$ , because $\mathbb{Z}_p$ is compact. Now a one-dimensional representation is a continuous map of profinite groups $f: \mathbb{Z}_p \to \mathbb{Z}_\ell^\times$ . The preimage of the open subgroup $V^{(n)} = 1 + \ell^n \mathbb{Z}_\ell$ is an open subgroup $U^{(n)}$ of $\mathbb{Z}_p$ . Let $m = m(n)$ be minimal such that $p^m \mathbb{Z}_p \subseteq U^{(n)}$ . Then $f$ is the same as a system of maps $f_n: \mathbb{Z} / p^m \mathbb{Z} \to \mathbb{Z}_\ell^\times / V^{(n)} = (\mathbb{Z} / \ell^n \mathbb{Z})^\times$ which is compatible with the inverse limits on both sides. You can look up the structure of the group on the right-hand side, and note that the image of $f_n$ has to be contained in its $p^m$ -torsion. One sees that if $\ell = 2$ , the only possibility for $f$ is the zero map. If $\ell \neq 2$ $f$ maps into the d
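The counterexample in the first sentence can be checked mechanically; here is a small sketch (the helper name is mine) computing the $\ell$-adic valuation of $\mu^{p^n}-1$ for $\ell=3$, $\mu=4$, $p=2$, $n=1$:

```python
# With l = 3, mu = 4, p = 2, n = 1: mu**(p**n) - 1 = 15 = 3 * 5 has
# 3-adic valuation 1, not p**n = 2, so the claimed membership in
# l**(p**n) * Z_l fails; only valuation >= 1 holds.
def padic_valuation(n: int, p: int) -> int:
    """Largest k with p**k dividing n (n != 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

l, mu, p, n = 3, 4, 2, 1
v = padic_valuation(mu ** (p ** n) - 1, l)
print(v)
```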
|number-theory|representation-theory|local-field|
1
Confusion Over Distributive Property in Tensor and External Tensor Products
I've been delving into the properties of tensor ( $\otimes$ ) and external tensor products ( $\boxtimes$ ) within the context of coalgebra, particularly examining how the coproduct $\Delta$ applies to tensor products of elements from a vector space $V$ and its extension. The operation $\otimes$ is understood as the tensor product within the tensor algebra $T(V)$ , whereas $\boxtimes$ represents an external tensor product, combining elements from two separate spaces or algebras, $T(V) \boxtimes T(W)$ . According to a Wikipedia article on tensor algebra, the coproduct $\Delta: V \to V \boxtimes V$ is defined for $v \in V$ as $\Delta(v) = v \boxtimes 1 + 1 \boxtimes v$ , extending homomorphically over $T(V)$ . My inquiry focuses on the expansion of $\Delta(v \otimes w)$ for $v, w \in V$ , specifically through these steps: Starting from $\Delta(v\otimes w)$ , the definition of $\Delta$ is applied to both $v$ and $w$ : $$\Delta(v\otimes w) = (v\boxtimes 1 + 1\boxtimes v) \otimes (w\boxtimes
The confusion arises from misunderstanding how the tensor ( $\otimes$ ) and external tensor products ( $\boxtimes$ ) interact. The key is to define the product $\otimes_{T(V) \boxtimes T(V)}$ such that for pure tensors (elements that are not sums of other tensors), the operation behaves as follows: $$ (a \boxtimes b) \otimes_{T(V) \boxtimes T(V)} (c \boxtimes d) := (a \otimes_{T(V)} c) \boxtimes (b \otimes_{T(V)} d) $$ This is then extended bilinearly to non-pure tensors. This definition allows for the simplification seen in the last step. Essentially, when you have products like $(v \boxtimes 1) \otimes (w \boxtimes 1)$ , you're using the $\otimes_{T(V) \boxtimes T(V)}$ operation, which under our definition simplifies to $(v \otimes w) \boxtimes 1$ . Similarly, the terms $v \boxtimes w$ and $1 \boxtimes (v \otimes w)$ follow from the bilinear extension of our definition to non-pure tensors. It is possible to verify that the above product satisfies the axioms to make $T(V) \boxtimes T(V)$ an algebra.
|linear-algebra|abstract-algebra|tensor-products|tensors|coalgebras|
0
Find all $x \in \mathbb{R}$ such that $f: \mathbb{N} \rightarrow \mathbb{R}$, where $f(n) = \{2^n \cdot x\}$ is monotone on $\mathbb{N}$.
Find all $x \in \mathbb{R}$ such that $f: \mathbb{N} \rightarrow \mathbb{R}$ , where $f(n) = \{2^n \cdot x\}$ (where $\{x\}$ denotes the fractional part of $x$ ), is increasing/decreasing on $\mathbb{N}$ . My approach We have $f(n+1) = \{2f(n)\}$ , where $f(n+1)$ equals $2f(n)$ when $f(n) \in [0, \frac{1}{2})$ and equals $2f(n)-1$ when $f(n) \geq \frac{1}{2}$ . Thus, we have three cases: Case 1: When $f(0) = 0$ , implying $f(n) = 0$ for every $n$ . This means $x$ is an integer. Case 2: When $f(0) \in (0, \frac{1}{2})$ . Case 3: When $f(0) \geq \frac{1}{2}$ , we encounter a contradiction. However, it seems that $x$ can also be of the form $a - \frac{1}{2^m}$ , but I'm not sure how to obtain this solution. I may have done something wrong in case 1 or 3 that made me miss that solution. Any help is appreciated.
Since it's not mentioned how the fractional part is defined for $x<0$ , I'll solve for nonnegative $x$ first. Let $r = \{x\}$ be the fractional part of $x$ ; it's clear that $\{2^nx\} = \{2^nr\}$ . Notice that $\left \lfloor 2^nr \right \rfloor$ corresponds to the first $n$ fractional digits of $r$ in the binary system, and $\left \{ 2^nr \right \}$ corresponds to what's left. So for example if $r = 0.11101_2$ and $n = 2$ , then $\left \lfloor 2^nr \right \rfloor = 11_2$ and $\left \{ 2^nr \right \} = 0.101_2$ . Let $d_i(r)$ for $i \in \mathbb{N_+}$ be the $i$ -th fractional digit of $r$ , and let $\mathbf{0}(r) = \min \left\{k \in \mathbb{N_+} \mid d_k(r) = 0\right\}$ , $\mathbf{1}(r) = \min \left\{k \in \mathbb{N_+} \mid d_k(r) = 1\right\}$ be the index of the first $0$ or $1$ fractional digit respectively. If $r = 0$ , then $f(n) = \{2^nx\} = 0$ , so it's clearly monotonic. If $r > 0$ , let $k = \mathbf{1}(r)$ . There are two cases: If $f(n)$ is monotonically increasing, then $f(k-1) \geq 0.1_2$ . Since $f(n)$
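The binary-digit picture can be explored with exact rationals; here is a sketch (my own helper, valid for $x \ge 0$) checking that $x = 1 - 1/2^3 = 7/8$ gives a non-increasing sequence while $x = 1/3$, whose binary digits alternate, does not:

```python
# f(n) = {2^n x} computed exactly with fractions; 7/8 = 0.111_2 is
# non-increasing (it hits 0 after three doublings), 1/3 = 0.010101..._2
# alternates between 1/3 and 2/3 and is not monotone.
from fractions import Fraction

def f(x: Fraction, n: int) -> Fraction:
    v = (2 ** n) * x
    return v - (v.numerator // v.denominator)  # fractional part, x >= 0

seq1 = [f(Fraction(7, 8), n) for n in range(8)]
seq2 = [f(Fraction(1, 3), n) for n in range(8)]
dec1 = all(a >= b for a, b in zip(seq1, seq1[1:]))
dec2 = all(a >= b for a, b in zip(seq2, seq2[1:]))
print(seq1[:4], dec1, dec2)
```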
|sequences-and-series|functions|contest-math|monotone-functions|fractional-part|
1
$a^2-b^2=37$, evaluate $ a^2+b^2$
Given $a^2-b^2=37$ where $a$ and $b$ are integers, can we evaluate the possible values of $a^2+b^2$ ? Are there many, or just some? I found that $a^2+b^2$ can only be $685$ . But how to prove it? I just guessed, but can we somehow evaluate it?
To restate the question for clarity: $$ \boxed{ \text{Given }a^2-b^2=37\text{ and }a\text{ and }b\text{ are integers; Then, evaluate }a^2+b^2 .}\tag{Eq. 1}$$ Add $b^2$ to both the left and the right of Equation 1, and solve for $a^2$ in terms of $b^2$ , and then sum $a^2$ and $b^2$ so: $$ a^2=b^2+37 \underset{implies}\implies a^2+b^2=(b^2+37)+b^2 = 2b^2+37 \tag{Eqs. 2} $$ Alternatively, given a value of $a^2$ it is possible to determine also $a^2+b^2$ in terms of $a^2$ as follows. Subtract from the left of Equations 2 the value $37$ , so that: $$ b^2=a^2-37 \underset{implies}\implies a^2+b^2=a^2+(a^2-37)=2a^2-37 \tag{Eqs. 3} $$ So given any value for $a^2$ or $b^2$ , it is possible to find the value for $a^2+b^2$ (from Equations 2 and Equations 3) as: $$ \text{Given }a\text{ then }a^2+b^2=2a^2-37 \text{ or } \text{Given }b\text{ then }a^2+b^2=2b^2+37 \tag{Eqs. 4} $$ Now comes the restriction that $a$ and $b$ are integers. Given $a$ , Equations 4 on the left imply: $$ (b-a)(b+a)=-37
|algebra-precalculus|
0
Formalism of syntax in first order logic
I am attending an introductory model theory course this semester. The professor started talking about formulas and sentences without paying much attention to the formalism of the syntax. However, as the course progressed, I began to notice that we need to put the formulas in a set because Zorn's lemma is needed to prove theorems like the compactness theorem. So I was thinking we would probably need to define formulas and sentences as elements of the free monoid generated by the formal symbols. This construction leads to many questions that were overlooked and that have to be checked, for example, the necessity of introducing parentheses. My thoughts are: Are these technical details that can somehow be avoided, or are they necessary as the foundation of first order logic?
I'd argue the technical details are necessary but can be avoided. To say you have to study the syntactic details to understand what logic is about is similar to saying you have to learn a programming language to understand what an algorithm is. Or that you have to know PA to understand arithmetic. Or that you have to know how to construct real numbers from ZFC to study real analysis. Mathematical logic is arguably about syntax, otherwise it would not exist. It's a fundamental insight of Hilbert: No matter how you think about (a theorem or a proof), eventually it can be represented as a string of symbols. To study syntax, it's a basic tool to do induction on the length of symbols (to prove theorems about term , formula and proof ), so some weak theory of arithmetic (e.g. PRA ) is necessary (and sufficient). For example, Manin introduced parentheses and used basic arithmetic to prove the "unique reading lemma": there is a unique way of pairing the parentheses; But "the necessity of introducing parent
|logic|first-order-logic|model-theory|
0
Borel and analytic sets - Why did Jech do this?
I've read in Jech's chapter on Borel and analytic sets that there is a universal $\Sigma_{\alpha}^0$ set $U \subseteq \mathcal{N} \times \mathcal{N}$ such that for every $\Sigma_{\alpha}^0$ set $A$ in $\mathcal{N}$ , there exists some $a \in \mathcal{N}$ such that: $$ A = \{x : (x,a) \in U \} $$ Having constructed $U$ , we can construct a new universal set $V$ that is $\Sigma_{\alpha+1}^0$ . To construct $V$ , Jech uses some continuous mapping of $\mathcal{N}$ onto the product space $\mathcal{N}^{\omega}$ such that: $$ (x,y) \in V \text{ if and only if for some $n$, } (x,y_{(n)}) \not \in U $$ where $y_{(n)}$ represents the $n$ th coordinate of the image of $y$ in the product space $\mathcal{N}^{\omega}$ under the continuous mapping, which can be some continuous pairing function $\Gamma$ , i.e. for instance, we can have $y_{(n)}(k)=y(\Gamma(n,k))$ . I would like to get some help on the intuition behind having to use a continuous mapping to construct $V$ . From what I understand, $(x,y)
For $a\in \mathcal{N}$ and $X\subseteq \mathcal{N}\times \mathcal{N}$ , let's define $X_a = \{x\mid (x,a)\in X\}$ . In particular, $U_a$ is the $\Sigma^0_\alpha$ set coded by $a$ . Now let's follow your proposal and define $V' = (\mathcal{N}\times\mathcal{N})\setminus U$ . Note that $V'$ is the complement of a $\Sigma^0_\alpha$ set, so it is $\Pi^0_\alpha$ . For all $a\in \mathcal{N}$ , we have $b\in V'_a$ iff $(b,a)\in V'$ iff $(b,a)\notin U$ iff $b\notin U_a$ . So $V'_a = \mathcal{N}\setminus U_a$ . Since every $\Pi^0_\alpha$ set is the complement of some $\Sigma^0_\alpha$ set of the form $U_a$ , we see that $V'$ is a universal $\Pi^0_\alpha$ set. But the goal here is to find a universal $\Sigma^0_{\alpha+1}$ set, not a universal $\Pi^0_\alpha$ set! So we want to find a $\Sigma^0_{\alpha+1}$ set $V$ such that the sets of the form $V_a$ are arbitrary countable unions of sets of the form $V'_x$ . This is the motivation for mapping $\mathcal{N}\to \mathcal{N}^\omega$ , so that a single $
|set-theory|borel-sets|
1
How many bit strings of length $10$ contain at least three $1$'s and at least three $0$'s?
The given answer is ${10 \choose 3}+{10 \choose 4}+{10 \choose 5}+{10 \choose 6}+{10 \choose 7}=912$ . Why can't it be $2^4=16$ instead? I thought that the way to solve the above question might be similar to the question below: There are 4 choices $\{G,U,A,C\}$ for each element. How many $6$ -element sequences ending with $GU$ can we form? The answer is $4^4=256$ .
The problem with your solution ( $2^4$ ) is that you are assuming that the three $0$ s and the three $1$ s that are required will occupy the same position in every valid string. For example, $$1\;0\;\_\;\_\;0\;1\;1\;\_\;0\;\_$$ The four underscores represent the positions that are free to take $1$ or $0$ . But that's not correct, for two reasons. One , the six fixed digits can switch positions among themselves. One such rearrangement of the string shown above is $$0\;1\;\_\;\_\;0\;1\;1\;\_\;0\;\_$$ You could account for this by multiplying by $\dfrac{6!}{3! \cdot 3!}$ . Two , those six fixed digits are not required to always be in the same position. Again, for example, we could have $$\_\;\_\;1\;0\;0\;1\;1\;\_\;0\;\_$$ I don't see how we can compensate for this. If we just multiply by $10 \choose 6$ , that will lead to double counting.
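The stated count of 912 can be confirmed by brute force over all $2^{10}$ strings; a quick sketch:

```python
# Count length-10 bit strings with at least three 1s and at least three
# 0s, both directly and via C(10,3)+C(10,4)+C(10,5)+C(10,6)+C(10,7).
from itertools import product
from math import comb

brute = sum(
    1
    for bits in product((0, 1), repeat=10)
    if sum(bits) >= 3 and 10 - sum(bits) >= 3
)
formula = sum(comb(10, k) for k in range(3, 8))
print(brute, formula)
```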
|combinatorics|combinations|
1
What is this $\mathfrak{D} =\mathfrak{D}(\mathfrak{U})$ in differentiable topology
From the textbook: Introduction to Differential Topology by Th. Bröcker and K. Jänich. Definition 1.3: An atlas of a manifold is called differentiable if all its chart transformations are differentiable. Then a few lines below: If $\mathfrak{U}$ is a differentiable atlas on the manifold $M$ , then the atlas $\mathfrak{D} =\mathfrak{D}(\mathfrak{U})$ contains precisely those charts for which every chart transformation with a chart from $\mathfrak{U}$ is differentiable. I don't understand this at all. So a differentiable atlas is an atlas where every chart transformation is differentiable. OK. However, I don't understand what "precisely those charts for which every chart transformation with a chart from $\mathfrak{U}$ is differentiable" means. So what is $\mathfrak{D}(\mathfrak{U})$ and how is it different than $\mathfrak{U}$ ?
A chart on $M$ is any homeomorphism $\phi : U \to U'$ from an open subset $U \subset M$ to an open subset $U' \subset \mathbb R^n$ . Let us denote by $\operatorname{Ch}(M)$ the set of all charts on $M$ . An atlas on $M$ is a subset $\mathfrak A \subset \operatorname{Ch}(M)$ such that each $p \in M$ is contained in the domain $U$ of some chart $\phi : U \to U'$ in $\mathfrak A$ . Definition 1.3 introduces the concept of a differentiable atlas . This is a serious restriction; there exist atlases which are not differentiable. For each differentiable atlas $\mathfrak A$ the authors define $$\mathfrak D(\mathfrak A) = \\ \{ \psi \in \operatorname{Ch}(M) \mid \forall \phi \in \mathfrak A : \text{ The chart transformations between } \psi \text{ and } \phi \text{ are differentiable}\} \\ = \{ \psi \in \operatorname{Ch}(M) \mid \mathfrak A \cup \{ \psi\} \text{ is a differentiable atlas} \} .$$ Clearly $\mathfrak A \subset \mathfrak D(\mathfrak A)$ , but in general $\mathfrak D(\mathfrak A)$ is strictly larger than $\mathfrak A$ .
|smooth-manifolds|
1
Is it possible to solve a system of first-order linear PDE's through a matricial approach?
Consider the following ODE: $$P_n\frac{d f_n(x)}{d x} = \sum_{n' = 1}^NQ_{n,n'}f_{n'}(x),$$ for $n$ and $n' = 1, 2, ..., N$ , and in which $P_n$ and $Q_{n,n'}$ are real constant parameters. If we expand this equation for every possible value of $n$ and $n'$ , we would get an $N \times N$ system of ODE's that could be rewritten like: $$\frac{d}{dx}\begin{bmatrix}f_1(x)\\\vdots\\f_N(x)\end{bmatrix}=\begin{bmatrix}Q_{1,1}/P_1&...&Q_{1,N}/P_1\\\vdots&\ddots&\vdots\\Q_{N,1}/P_N&...&Q_{N,N}/P_N\end{bmatrix} \begin{bmatrix}f_1(x)\\\vdots\\f_N(x)\end{bmatrix}.$$ So, if we call $\textbf{f}\equiv[f_1(x),...,f_N(x)]^T$ and the $N\times N$ matrix above (which we will consider henceforth to be non-singular) as $\textbf{M}$ , then we could rewrite it again as: $$\frac{d\textbf{f}}{dx}=\textbf{M}\textbf{f}.$$ Thus, we can suppose a solution to $\textbf{f}$ of the type: $$\textbf{f} = \textbf{ξ}e^{\lambda x},$$ in which $\textbf{ξ}$ is the constant eigenvector associated with a particular eigenvalue $\lambda$ of $\textbf{M}$ .
Consider solutions of the form $f_n(x,y,z) = u_n \exp(a x + b y + c z)$ . You get a system of linear equations $(A_n a + B_n b + C_n c) u_n - \sum_{n'} D_{n,n'} u_{n'} = 0$ . These have nontrivial solutions if and only if $(a,b,c)$ makes the determinant $P(a,b,c)$ of the coefficient matrix zero. A difference between this and the ODE case is that instead of having a finite number of eigenvalues, $P(a,b,c) = 0$ is in general a two-dimensional variety in $\mathbb C^3$ .
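For the ODE setup in the question, the eigen ansatz can be illustrated numerically; a sketch (the $2\times 2$ matrix and initial condition are invented for illustration):

```python
# Assemble f(x) = sum_i c_i xi_i exp(lam_i x) from eigenpairs of M and
# check that it satisfies df/dx = M f (central-difference check).
import numpy as np

M = np.array([[0.0, 1.0], [-2.0, -3.0]])      # example M, eigenvalues -1 and -2
f0 = np.array([1.0, 0.0])                     # initial condition f(0)

lam, xi = np.linalg.eig(M)                    # columns of xi are eigenvectors
c = np.linalg.solve(xi, f0)                   # expand f(0) in the eigenbasis

def f(x):
    return (xi * np.exp(lam * x)) @ c         # sum_i c_i xi_i exp(lam_i x)

x, h = 0.7, 1e-6
deriv = (f(x + h) - f(x - h)) / (2 * h)
print(np.allclose(deriv, M @ f(x)))
```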
|partial-differential-equations|matrix-equations|linear-pde|
0
Fourier Series Coefficient Confusion
In the context of the Discrete Fourier Transform, a function in time may be expanded as a Fourier series, $$ f_m = \frac{1}{T} \sum_{n=0}^{N-1} \tilde{f}_n e^{2\pi inm/N}, $$ where $ f_m $ is the function in the time domain in time step $ m\Delta t $ , $ f_m = f(m\Delta t) $ , $ T = N \Delta t $ is the period, $ f(m\Delta t) = f(m\Delta t + T) $ and $ \tilde{f}_n $ are the frequency-resolved weights. The latter are computed in terms of the time-resolved function as $$ \sum_{m=M}^{M+N-1} f_m e^{-2\pi in'm/N} \Delta t = \frac{1}{N} \sum_{n=0}^{N-1} \tilde{f}_n e^{2\pi i(n-n')M/N} \sum_{m=0}^{N-1} e^{2\pi i(n-n')m/N} = \tilde{f}_{n'}, $$ for any given integer $ M $ . Assuming the above is correct, $ \underline{\text{my question is the following}} $ : If we can get the same coefficient $ \tilde{f}_n $ for ANY given integer $ M $ , we get that $$ \sum_{m=M}^{M+N-1} f_m e^{-2\pi inm/N} \Delta t = \sum_{m=0}^{N-1} f_m e^{-2\pi inm/N} \Delta t $$ $$ \Rightarrow \left( e^{-2\pi inM/N} - 1 \right
Your last step is the error. You can't subtract the two sums and assume the values are the same. If we change the variable in the first sum to $p=m-M,$ you get: $$\sum_{m=M}^{N+M-1} f_m e^{-2\pi mni/N}=\sum_{p=0}^{N-1}f_{p+M}e^{-2\pi (p+M)ni/N}$$ Now we can subtract this sum from the right sum term by term. Your result uses $f_p$ where you should have $f_{p+M}.$ You substituted correctly in the exponent, but failed to substitute in the subscript.
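The point can be checked numerically: with the periodicity $f_{m+N}=f_m$ assumed in the question's setup, each term of the sum is periodic in $m$ with period $N$, so the window start $M$ does not matter. A sketch (signal and parameters invented for illustration):

```python
# For an N-periodic signal, the sum over any N consecutive m of
# f_m exp(-2*pi*i*n*m/N) is independent of the window start M.
import numpy as np

N = 8
m = np.arange(N)
f = np.cos(2 * np.pi * 3 * m / N)            # one period of an N-periodic signal

def window_sum(M: int, n: int) -> complex:
    ms = np.arange(M, M + N)
    return np.sum(f[ms % N] * np.exp(-2j * np.pi * n * ms / N))

print(np.isclose(window_sum(0, 2), window_sum(5, 2)))
```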
|fourier-analysis|fourier-series|fourier-transform|fast-fourier-transform|
1
$f(x)$ is continuous on $[a, b]$ and assumes one single critical point at $x = c$ that is a local minimum. Prove that $f(c)$ is the absolute minimum.
I was working on a textbook problem yesterday when I found myself needing to prove a statement to make my main argument more rigorous. The statement seems relatively simple at first, yet I ended up with a rather lengthy proof involving an additional lemma, which itself was not as straightforward as I had anticipated either. I understand that basic-looking statements can sometimes require long arguments, but I was wondering if there might be a more efficient approach that I've overlooked. As someone relatively new to writing proofs, I would greatly appreciate any feedback on my proofs too! Thank you very much! Let $f(x)$ be a continuous function on $[a, b]$ and have one single critical point in $(a, b)$ at $x = c$ , with $f(c)$ being a local minimum. Prove that $f(c)$ is the absolute minimum of $f(x)$ . Lemma: Let $f(x)$ be a continuous function on $[a, b]$ and have no critical points in $(a, b).$ Then $f(x)$ is strictly monotone over $[a, b].$ Proof: Let $f(x)$ be a continuous function
Suppose $c$ is different from a global minimum $d$ . Consider a connected neighborhood $U$ of $c$ where $c$ is a local minimum, which is sufficiently small so that $d\not\in U$ . Since $c$ is the unique critical point in $(a,b)$ , there must be a point $e\in U$ between $c$ and $d$ where $f(e)>f(c)$ . In the interval with endpoints $c$ and $d$ (in whichever order), there is a global maximum $g$ . Since $f(g)\geq f(e)>f(c)$ , the point $g$ is distinct from the endpoints $c$ and $d$ . Thus $g$ is another critical point, contradicting the hypothesis that $c$ is the unique critical point in $(a,b)$ .
|real-analysis|calculus|
1
how do you compute the value of $\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{4n-3}$
I know that the series $\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{4n-3}$ is convergent by Leibniz's law. However, finding the exact sum of this series can be quite challenging. Evaluating partial sums, I get $0.861229$ after $10$ terms and $0.863979$ after $20$ terms. Would you please share a way to solve this question? Best wishes,
Take the partial sums $$ \sum_{n=0}^N\frac{(-1)^{n+1}}{4n+1}=-\sum_{n=0}^N\int_0^1 (-x^4)^ndx=-\int_0^1\frac{1-(-x^4)^{N+1}}{1+x^4}dx. $$ Now, $$ \left|\int_0^1\frac{(-x^4)^{N+1}}{1+x^4}dx\right|\leqslant\int_0^1 x^{4N+4}dx=\frac{1}{4N+5}\underset{N\rightarrow +\infty}{\longrightarrow}0 $$ so, letting $N\rightarrow +\infty$ , we get $$ \sum_{n=0}^{+\infty}\frac{(-1)^{n+1}}{4n+1}=-\int_0^1\frac{dx}{1+x^4}. $$ To compute the above integral, you can brute-force with partial fraction decomposition, this gives calculations I don't have the courage to write down (though you can watch this video for details), but in the end you get $$ \sum_{n=0}^{+\infty}\frac{(-1)^{n+1}}{4n+1}=-\frac{\pi+2\log(1+\sqrt{2})}{4\sqrt{2}}. $$
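The closed form can be cross-checked numerically; a sketch (pure standard library, with a hand-rolled Simpson rule): the partial sums, the integral $-\int_0^1 dx/(1+x^4)$, and the closed-form value should all agree.

```python
# Compare: closed form, numerical value of -∫_0^1 dx/(1+x^4), and a
# long partial sum of the alternating series.
import math

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

closed = -(math.pi + 2 * math.log(1 + math.sqrt(2))) / (4 * math.sqrt(2))
integral = -simpson(lambda x: 1 / (1 + x ** 4), 0, 1)
partial = sum((-1) ** (n + 1) / (4 * n + 1) for n in range(200_000))
print(closed, integral, partial)
```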
|sequences-and-series|convergence-divergence|power-series|pointwise-convergence|conditional-convergence|
0
Solve the recurrence $T(n) = 2T(n-1) + n$
Solve the recurrence $T(n) = 2T(n-1) + n$ where $T(1) = 1$ and $n\ge 2$. The final answer is $2^{n+1}-n-2$ Can anyone arrive at the solution?
I expanded on Siddharth Chakravorty's answer, not that his answer needed expanding. I could not understand the result of the back substitution, and after a long time I got it, so I am sharing my understanding of the result. Enjoy!

Problem: $T(1) = 1$ and $T(n) = 2T(n-1) + n$ .

Equation 1: $T(n) = 2T(n-1) + n$

Equation 2: $T(n-1) = 2T(n-2) + (n-1)$

Equation 3: $T(n-2) = 2T(n-3) + (n-2)$

Back substitution. Substituting Equation 2 into Equation 1: $$T(n) = 2[2T(n-2) + (n-1)] + n = 2^2 T(n-2) + 2(n-1) + n$$ Substituting Equation 3: $$T(n) = 2^2[2T(n-3) + (n-2)] + 2(n-1) + n = 2^3 T(n-3) + 2^2(n-2) + 2(n-1) + n$$ Continuing for $k$ steps: $$T(n) = 2^k T(n-k) + 2^{k-1}(n-(k-1)) + \dots + 2^0 n = 2^k T(n-k) + \sum_{j=0}^{k-1} 2^j (n-j)$$ Continue until $n-k$ reaches $1$ : set $n - k = 1$ , i.e. $k = n - 1$ . Substituting this value of $k$ : $$T(n) = 2^{n-1} T(1) + \sum_{j=0}^{n-2} 2^j (n-j) = 2^{n-1} + n(2^{n-1}-1) - \sum_{j=0}^{n-2} j\,2^j$$ Using $\sum_{j=0}^{n-2} j\,2^j = (n-3)\,2^{n-1} + 2$ , this becomes $$T(n) = 2^{n-1} + n\,2^{n-1} - n - (n-3)\,2^{n-1} - 2 = 4\cdot 2^{n-1} - n - 2 = 2^{n+1} - n - 2 .$$
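A quick sanity check (sketch) that the recurrence really matches the closed form stated in the question:

```python
# Unroll T(n) = 2 T(n-1) + n, T(1) = 1 iteratively and compare with
# the closed form 2**(n+1) - n - 2 for a range of n.
def T(n: int) -> int:
    t = 1                      # T(1) = 1
    for k in range(2, n + 1):
        t = 2 * t + k          # T(k) = 2 T(k-1) + k
    return t

ok = all(T(n) == 2 ** (n + 1) - n - 2 for n in range(1, 21))
print(ok)
```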
|recurrence-relations|
0
Endomorphisms of Lie group acting on cotangent space
Let $G$ be a (complex, compact, commutative) Lie group. Apparently the endomorphism ring $\textrm{End}(G)$ of $G$ (i.e., holomorphic group homomorphisms) acts on the cotangent space $T^\ast_eG$ at the identity $e \in G$ . If $\phi \in \textrm{End}(G)$ and $\omega \in T^\ast_eG$ then \begin{align} \phi \cdot \omega = \omega \circ d_e\phi \end{align} seems like a reasonable definition of this action. I have two questions. (i) Is this the correct action? (ii) How can I show that the action is faithful? I think that (ii) holds if I can show $\phi$ is a submersion. For example, if $\phi \cdot \omega = 0$ then $T^\ast_eG = \textrm{im}(d_e\phi) \subseteq \textrm{ker}(\omega)$ so $\omega = 0$ . Moreover, $\phi$ will be a submersion if and only if it is an immersion, since it maps from a space to itself. Some extra facts that may be of use: (a) I really am interested in $G = \mathbf{C}^g/\Lambda$ where $\Lambda \subseteq \mathbf{C}^g$ is a lattice, i.e., $G$ is a $g$ -dimensional torus. (b) We
I found a solution using this question. Since the exponential map is defined on the tangent space at the identity, we have a commuting diagram $\require{AMScd}$ \begin{CD} T_eG @>{d_e\phi}>> T_eG \\ @V{\textrm{exp}}VV @VV{\textrm{exp}}V\\ G @>{\phi}>> G. \end{CD} Since $G$ is compact, the exponential map is surjective. Hence $\phi$ surjective implies $d_e\phi$ surjective. As remarked in the question, this shows that $\textrm{End}(G)$ acts faithfully on $T^\ast_eG$ . Please let me know if this solution is not correct.
|lie-groups|abelian-varieties|complex-manifolds|co-tangent-space|
0
Integrate : $\int (\sin(x))^{2a-1} dx$
Question : Integrate $$\int (\sin x)^{2a-1} dx$$ where $a \in \mathbb{N}$ . My Attempt : Using IBP on $$I_j=\int (\sin x)^{2j-2}\sin x dx= -(\sin x)^{2j-2}\cos x+\int(2j-2)(\sin x)^{2j-3}(\cos x)^2dx$$ $$=-(\sin x)^{2j-2}\cos x +2(j-1)\left(\int(\sin x)^{2j-3}dx-\int(\sin x)^{2j-1}dx\right)$$ $$=-(\sin x)^{2j-2}\cos x +2(j-1)(I_{j-1}-I_{j})$$ $$\implies (2j-1)I_j-(2j-2)I_{j-1}=-(\sin x)^{2j-2}\cos x$$ getting stuck here...
For odd powers of the sine one can start with $$ \sin^{2a-1}x = \frac{1}{2^{2a-2}}\sum_{k=0}^{a-1} (-1)^{a+k-1}\binom{2a-1}{k}\sin[(2a-2k-1)x], $$ so $$ \int \sin^{2a-1}x\, dx= \frac{1}{2^{2a-2}}\sum_{k=0}^{a-1} (-1)^{a+k}\binom{2a-1}{k}\frac{1}{2a-2k-1}\cos[(2a-2k-1)x] + C. $$
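The finite-sum expansion can be spot-checked numerically; a sketch (for $a=2$ it reduces to the familiar $\sin^3 x = (3\sin x - \sin 3x)/4$):

```python
# Evaluate the expansion sin^(2a-1) x = 2^-(2a-2) * sum_k (-1)^(a+k-1)
# C(2a-1, k) sin((2a-2k-1) x) and compare with sin(x)**(2a-1).
import math

def sin_odd_power(a: int, x: float) -> float:
    s = 0.0
    for k in range(a):
        s += ((-1) ** (a + k - 1)) * math.comb(2 * a - 1, k) \
             * math.sin((2 * a - 2 * k - 1) * x)
    return s / 2 ** (2 * a - 2)

checks = [
    abs(sin_odd_power(a, x) - math.sin(x) ** (2 * a - 1)) < 1e-12
    for a in (1, 2, 3, 4)
    for x in (0.3, 1.1, 2.5)
]
print(all(checks))
```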
|integration|indefinite-integrals|
0
$a^2-b^2=37$, evaluate $ a^2+b^2$
Given $a^2-b^2=37$ where $a$ and $b$ are integers, can we evaluate the possible values of $a^2+b^2$ ? Are there many, or just some? I found that $a^2+b^2$ can only be $685$ . But how to prove it? I just guessed, but can we somehow evaluate it?
I present an alternative. It is not as good an alternative. In fact I would say this is a bad way to solve. But I want to impress that if one doesn't always see the correct insight one can still chew an answer and there is always more than one way to do things. $a^2 -b^2 =37$ so $a^2 > b^2$ and if we assume $a,b$ are positive integers (we might as well, whether they are negative or positive their squares are positive and we are only answering questions about their squares) then $a > b$ . Let $d = a-b$ so $a=b+d$ . Then we have $a^2 -b^2 = (b+d)^2 - b^2 = 2bd + d^2 = 37$ . We see that $d^2 < 37$ , so $d= 6,5,4,3,2,1$ and we can test each value. If that is too much work, we can see that $2bd$ is even and $37$ is odd so $d$ is odd and we can test $d = 5,3, 1$ . If that is still too much work we can note that $d\mid d^2$ and also $d\mid 2bd$ , so $d\mid 2bd +d^2 = 37$ . But $37$ is prime so $d=1$ . And so $2bd + d^2 = 2b + 1=37$ and $b=18$ . And $a = b+d = 18+1=19$ . Which.... if we were really paying attention we
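A brute-force search (sketch) confirms the conclusion: the only integer solutions have $|a|=19$, $|b|=18$, so $a^2+b^2$ can only be $685$.

```python
# Enumerate integer pairs with a^2 - b^2 = 37 and collect a^2 + b^2.
solutions = [
    (a, b)
    for a in range(-100, 101)
    for b in range(-100, 101)
    if a * a - b * b == 37
]
values = {a * a + b * b for a, b in solutions}
print(sorted(solutions), values)
```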
|algebra-precalculus|
0
Proof of infinitely many prime numbers
Here's the proof from the book I'm reading that proves there are infinitely many primes: We want to show that it is not the case that there only finitely many primes. Suppose there are finitely many primes. We shall show that this assumption leads to a contradiction. Let $p_1, p_2,...,p_n$ be all the primes there are. Let $x = p_1...p_n$ be their product and let $y=x+1$. Then $y \in \mathbb{N}$ and $y \neq 1$, so there is a prime $q$ such that $q \mid y$. Now $q$ must be one of $p_1,...,p_n$ since these are all primes that there are. Hence $q \mid x$. Since $q \mid y$ and $q \mid x$, $q \mid (y-x)$. But $y-x=1$. Thus $q \mid 1$. But since $q$ is prime, $q \geq 2$. Hence $q$ does not divide 1. Thus we have reached a contradiction. Hence our assumption that there are only finitely many primes must be wrong. Therefore there must be infinitely many primes. I have a couple of questions/comments regarding this proof. I will use a simple example to help illustrate my questions: Suppose only 6
Another way to arrive at a contradiction from your initial set-up is related to Rob Arthan's answer. Given some arbitrary finite set of primes, say $\{p_1, p_2, ..., p_k\}$ , define $N$ to be 1 more than their product, namely $$N = p_1 p_2 ... p_k\ + 1.$$ Now every positive integer greater than $1$ has a factorization into primes (unique up to ordering), and so $N$ must have at least one prime divisor, say $q$ . But $q$ cannot be one of the $p_i$ s: if it was, then it would divide $N$ (which we just deduced) and also $p_1p_2...p_k$ (because $q$ is one of the $p_i$ s). But then $q$ divides both $N$ and $p_1p_2...p_k$ , so it must also divide their difference, $N - p_1p_2...p_k = 1$ . This is a contradiction (since 1 is the empty product and does not have any prime divisors) and so $q$ must be a prime not in the finite set $\{p_1, p_2, ..., p_k\}$ . But the finite set we started with was arbitrary! This means that in any finite set of primes there exists an integer (one of the same form as $N$ ), at least one o
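The argument can be run concretely; a sketch (the starting set of primes is my own example): the smallest prime factor of the product-plus-one never lies in the original set.

```python
# For primes {2,3,5,7,11,13}: N = 30030 + 1 = 30031 = 59 * 509, and the
# new prime factor 59 is not in the starting set.
def smallest_prime_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = {2, 3, 5, 7, 11, 13}
N = 1
for p in primes:
    N *= p
N += 1
q = smallest_prime_factor(N)
print(N, q, q in primes)
```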
|elementary-number-theory|prime-numbers|proof-explanation|
0
Understanding the limit of a function
I've always heard that if the value of a function approaches a given limit as the value of the argument approaches a specific value, then the limit exists. But here is an example that has me confused; Say we have $f(x) = x^2$ and we want to check: if $|x - (-1)| < \delta$ , then $|f(x) - L| < \varepsilon$ , for every $\varepsilon > 0$ ; $\varepsilon$ is unbounded from above, so I could technically select something very large. If I select, for example $10$ , then there will be a moment where the value of $f(x)$ will reach the value of $L$ , but then pass it; this will happen as $x$ approaches $-1$ from the right. The value of $f(x)$ will be larger than $L$ before $x = 1$ , after that $f(x)$ will start going in the opposite direction (towards $0$ ). So technically, the limit shouldn't exist with such an $\varepsilon$ , yet it does. Shouldn't there be a restriction on the upper bound of $\varepsilon$ ?
The way you have your definition worded is a little misleading. If you want to show that $$\lim_{x\to -1}x^2 = 1$$ then you need to show that for every $\varepsilon > 0$ there exists a $\delta > 0$ so that if $|x - (-1)| < \delta$ then $|x^2 - 1| < \varepsilon$ . If you pick $\varepsilon = 10$ as an example you need to find a $\delta > 0$ so that if $|x + 1| < \delta$ then $|x^2 - 1| < 10$ . Now, $$|x^2 - 1| = |x+1|\,|x-1| \le |x+1|\left(|x+1| + 2\right) < \delta(\delta + 2),$$ and $\delta(\delta+2) = 10$ for $\delta = \sqrt{11} - 1$ . What that means is that as long as you are within $\delta = \sqrt{11} - 1$ of $x = -1$ , then $f(x) = x^2$ will be within $\varepsilon = 10$ of $L = 1$ .
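A numerical sanity check (sketch): one workable choice is $\delta = \sqrt{11}-1$, since then $\delta(\delta+2)=10$ exactly, and sampling the open interval $|x+1|<\delta$ never pushes $|x^2-1|$ to $10$.

```python
# Sample x with |x + 1| < delta and record the worst |x^2 - 1|.
import math

delta = math.sqrt(11) - 1                  # delta * (delta + 2) = 11 - 1 = 10
samples = [-1 + (2 * i / 1999 - 1) * delta for i in range(1, 1999)]
worst = max(abs(x * x - 1) for x in samples)
print(delta, worst)
```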
|calculus|
1
Path Signatures and Picard iterations
Recently, I've started studying path signatures and, currently, I'm reading a standard reference, namely "A Primer on the Signature Method in Machine Learning" by Ilya Chevyrev and Andrey Kormilitzin. Now, at some point the authors try to show how path signatures naturally emerge in the theory of Controlled Differential Equations, and they do so via Picard iterations. So far, so good. The things that are bothering me are the following: The authors claim that a map $V: \mathbb{R}^e \to L(\mathbb{R}^d,\mathbb{R}^e)$ can be equivalently seen as a map $V: \mathbb{R}^d \to L(\mathbb{R}^e,\mathbb{R}^e)$ . This I am more or less willing to accept. Indeed, if we consider $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^e$ , we observe that $$ y \in \mathbb{R}^e \mapsto V(y) \in \mathbb{R}^{e \times d} \implies V(y)x \in \mathbb{R}^e, $$ yields a map in $\mathcal{L}(\mathbb{R}^e, \mathbb{R}^e)$ by taking $y$ to $V(y)x$ for all $y\in \mathbb{R}^e$ . If we now consider this map to be parameterized by $
I believe that what the author means is that we may define a tensor map $V^{\otimes n}$ by setting $V^{\otimes n}(e_1 \otimes \dots \otimes e_n) = V(e_1) \dots V(e_n)$ and extending by linearity. This way we have that $$\int_{a<t_1<\dots<t_n<b} V(dX_{t_1})\dots V(dX_{t_n}) = V^{\otimes n}\left(\int_{a<t_1<\dots<t_n<b} dX_{t_1}\otimes\dots\otimes dX_{t_n}\right),$$ so the iterated integrals appearing in the Picard iteration are $V^{\otimes n}$ applied to the terms of the signature. If someone wants to complement with something, feel free to do so.
|integration|ordinary-differential-equations|reference-request|notation|stieltjes-integral|
0
General solution to Laplace Equation
Show the general solution to the Laplace equation, $$\frac{\partial^2\phi}{\partial x^2}+\frac{\partial ^2\phi}{\partial y^2}=0$$ is $\phi(x,y)=f(x+iy)+g(x-iy)$. The only thought I have is to let $x+iy$ be $z$, a complex number, so $\phi=f(z)+g(z^*)$. What are the next steps?
The shortest proof of this is by splitting the operator: $$ (\partial^2_x + \partial^2_y)\phi = 0 $$ yields $$ ((\partial_x + i\partial_y)(\partial_x - i\partial_y))\phi = (\partial_x + i\partial_y)((\partial_x - i\partial_y)\phi) = (\partial_x + i\partial_y)u = 0 $$ or, just as well, $$ ((\partial_x - i\partial_y)(\partial_x + i\partial_y))\phi = (\partial_x - i\partial_y)((\partial_x + i\partial_y)\phi) = (\partial_x - i\partial_y)v = 0 $$ We must now solve the two transport equations $$ (\partial_x + i\partial_y)u = 0, \qquad (\partial_x - i\partial_y)v = 0. $$ Their differential operators are complex scaling-rotators, which means that the transport equations describe a left-sense and a right-sense scaling-rotation. It is thus advantageous to rewrite the transport equations in polar coordinates $(r,\theta)$ : $$ (\partial_x + i\partial_y)u = 0 \quad\text{becomes}\quad (\partial_\theta - i r \partial_r)U = 0, $$ which is solved by any twice differentiable function $$ U(re^{i\theta}) = U(x+iy), $$ and $$ (\partial_x - i\partial_y)v = 0 \quad\text{becomes}\quad (\partial_\theta + i r \partial_r)V = 0, $$ which is solved by any twice differentiable function $$ V(re^{-i\theta}) = V(x-iy). $$ Combining the two factorizations, every harmonic $\phi$ splits into a part depending on $x+iy$ and a part depending on $x-iy$ , which is the claimed form $\phi(x,y)=f(x+iy)+g(x-iy)$ .
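As a consistency check (assuming $f$ and $g$ are twice differentiable), the chain rule verifies the claimed general solution directly:

$$\partial_x^2\, f(x+iy) = f''(x+iy), \qquad \partial_y^2\, f(x+iy) = i^2\, f''(x+iy) = -f''(x+iy),$$

so the two contributions cancel; similarly $(-i)^2 = -1$ for $g(x-iy)$ . Hence $\phi = f(x+iy)+g(x-iy)$ satisfies $\phi_{xx}+\phi_{yy}=0$ .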
|partial-differential-equations|partial-derivative|laplacian|
0
Breaking up a product probability measure
Context Suppose I have two measurable spaces $(\mathsf{X}, \mathcal{X})$ and $(\mathsf{Y}, \mathcal{Y})$ on which we have two probability measures $\pi$ and $\nu$ respectively. I form a joint distribution $\mu = \pi\otimes \nu$ on the joint space $(\mathsf{Z}, \mathcal{Z}) := (\mathsf{X}\times\mathsf{Y}, \mathcal{X}\otimes \mathcal{Y})$ . Question Let $\mathsf{C}\in\mathcal{Z}$ . Is it true that there always exist $\mathsf{A}\in\mathcal{X}$ and $\mathsf{B}\in\mathcal{Y}$ such that $$ \mu(\mathsf{C}) = \pi(\mathsf{A})\nu(\mathsf{B}) $$ Attempted Solution Given any two subsets $\mathsf{A}\subset\mathsf{X}$ and $\mathsf{B}\in\mathsf{Y}$ (not necessarily measurable subsets), the Cartesian product of these subsets is the set of all pairs as follows $$ \mathsf{A}\times\mathsf{B} := \left\{(x, y)\,:\, x\in\mathsf{A}, y\in\mathsf{B}\right\}. $$ Now, if these two sets are measurable, meaning $\mathsf{A}\in\mathcal{X}$ and $\mathsf{B}\in\mathcal{Y}$ then we call their Cartesian product $\mathsf{
If one of the measures, say $\nu$ , is diffuse, the answer is Yes. Given $C$ , take $A=\mathsf{X}$ and $B$ any element of $\mathcal Y$ with $\nu(B)=\mu(C)$ . The fact that $\nu$ is diffuse guarantees that $\{\nu(B) : B\in\mathcal Y\} = [0,1]$ , so such a $B$ exists. Without an assumption of this kind the answer is No. Example: Take $\mathsf{X}=\mathsf{Y}=\{0,1\}$ , $\pi={1\over 2}\delta_0+{1\over 2}\delta_1$ and $\nu={1\over 3}\delta_0+{2\over 3}\delta_1$ . Then $\pi$ takes only the values $0,{1\over 2}, 1$ , while $\nu$ takes only the values $0,{1\over 3}, {2\over 3}, 1$ . Therefore $\mu\left(\{(0,1), (1,0),(1,1)\}\right) ={5\over 6}$ cannot be of the form $\pi(A)\nu(B)$ .
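The counterexample can be checked exhaustively, since there are only finitely many measurable rectangles of values; a small sketch (the set and variable names are mine):

```python
from fractions import Fraction
from itertools import chain, combinations

X = [0, 1]
pi = {0: Fraction(1, 2), 1: Fraction(1, 2)}
nu = {0: Fraction(1, 3), 1: Fraction(2, 3)}

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# every attainable value pi(A) * nu(B)
products = {sum(pi[a] for a in A) * sum(nu[b] for b in B)
            for A in powerset(X) for B in powerset(X)}

C = [(0, 1), (1, 0), (1, 1)]
muC = sum(pi[x] * nu[y] for x, y in C)
print(muC, muC in products)  # 5/6 False
```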
|probability|probability-theory|measure-theory|measurable-sets|
0
Huppert, III.19.2: How to construct a homomorphism from a $p$-group into the center of a maximal subgroup?
Let $G$ be a finite $p$ -group and $N$ a maximal subgroup (so $G/N$ has order $p$ ) such that $Z(N) \leq Z(G)$ . III.19.2 in Huppert's Book "Endliche Gruppen I" says that there exists a non-inner automorphism of $G$ of order $p$ . In the proof of Hilfssatz III.19.2, Huppert starts with a homomorphism $\tau: G\to Z(N)$ which is non-trivial and has $\ker\tau = N$ . He makes no statement on why such a homomorphism exist. How does one prove that such a homomorphism exists? Can one even explicitly describe such a homomorphism? Thanks in advance!
So, to try to make such a morphism (somewhat) explicit, we have little choice but to start with the canonical projection $\pi:G\to G/N\cong \Bbb Z_p.$ Then since $N$ is a $p$ -group, it has non-trivial center $Z(N).$ So it's got an element of order $p$ , hence $\Bbb Z_p$ embeds in $Z(N).$ Let $i:\Bbb Z_p\to Z(N)$ be an embedding. Finally, just compose: $$\tau :=i\circ \pi.$$
|group-homomorphism|p-groups|
1
A standard 6-sided fair die is rolled until the last 3 rolls are strictly ascending. What is the probability that the first such roll is a 1, 2, 3, or 4?
A standard $6$ -sided fair die is rolled until the last $3$ rolls are strictly ascending. What is probability that the first such roll is a $1$ , $2$ , $3$ , or $4$ ? My attempt We can investigate the $3-$ roll (when there are exactly $3$ rolls), the $4-$ roll, the $5-$ roll, the $n-$ roll. I did that via SQL, basically n times a cartesian product of a table with digits $1-6$ , with some constraints in place. There are $20$ ascending triplets that can be thrown. Starting with $1: 123, 124, 125, 126, 134, 135, 136, 145, 146, 156$ I call those triplets $T_1$ . There are $10$ $T_1$ ‘s. Likewise $T_2$ : $234, 235, 236, 245, 246, 256$ . There are 6 $T_2$ ’s. $T_3$ : $345, 346, 356$ . There are $3$ $T_3$ 's. $T_4: 456$ . There is only one $T_4$ . We can start with the $3-$ roll: The probability of a $3-$ roll is $\frac{20}{216}=\frac{5}{54}$ . Each triplet has the same probability of being rolled. So the probability of each triplet is $\frac{1}{20}$ . The $4-$ roll. Not any digit can be the
A simple Monte Carlo method:

import random

NUM_SIMULATIONS = 10**6

def roll_die():
    return random.randrange(1, 7)

def roll_until_ascending():
    dice = []
    while len(dice) < 3 or not (dice[-3] < dice[-2] < dice[-1]):
        dice.append(roll_die())
    return dice[-3]  # first roll of the terminating ascending triple

counts = {v: 0 for v in range(1, 7)}
for _ in range(NUM_SIMULATIONS):
    counts[roll_until_ascending()] += 1
for v in range(1, 5):
    print(f"Probability that the first roll is {v} = {counts[v] / NUM_SIMULATIONS}")

My results for three runs of the script are:

Probability that the first roll is 1 = 0.571111
Probability that the first roll is 2 = 0.286095
Probability that the first roll is 3 = 0.114084
Probability that the first roll is 4 = 0.028710

Probability that the first roll is 1 = 0.570891
Probability that the first roll is 2 = 0.285710
Probability that the first roll is 3 = 0.114338
Probability that the first roll is 4 = 0.029061

Probability that the first roll is 1 = 0.570729
Probability that the first roll is 2 = 0.285701
Probability that the first roll is 3 = 0.114523
Probability that the first roll is 4 = 0.029047
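The exact distribution can also be computed by conditioning on the last two rolls and solving the resulting absorbing Markov chain; this is a sketch of that idea (the state encoding and names are my own):

```python
import numpy as np

# State (a, b): the last two rolls, chain not yet terminated.
states = [(a, b) for a in range(1, 7) for b in range(1, 7)]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

probs = []
for v in range(1, 7):
    A = np.zeros((n, n))   # transitions among non-terminated states
    r = np.zeros(n)        # probability of terminating with first element v
    for (a, b) in states:
        i = idx[(a, b)]
        for c in range(1, 7):
            if a < b < c:          # terminating triple (a, b, c)
                if a == v:
                    r[i] += 1 / 6
            else:                  # continue from state (b, c)
                A[i, idx[(b, c)]] += 1 / 6
    x = np.linalg.solve(np.eye(n) - A, r)
    probs.append(x.mean())         # the first two rolls are uniform over states

for v, p in enumerate(probs, start=1):
    print(f"P(first roll of the ascending triple is {v}) = {p:.6f}")
```

The resulting values agree with the Monte Carlo estimates above (about $0.571$, $0.286$, $0.114$, $0.029$ for $1$ through $4$; a triple can never start with $5$ or $6$).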
|probability|dice|
0
Confusion Over Distributive Property in Tensor and External Tensor Products
I've been delving into the properties of tensor ( $\otimes$ ) and external tensor products ( $\boxtimes$ ) within the context of coalgebra, particularly examining how the coproduct $\Delta$ applies to tensor products of elements from a vector space $V$ and its extension. The operation $\otimes$ is understood as the tensor product within the tensor algebra $T(V)$ , whereas $\boxtimes$ represents an external tensor product, combining elements from two separate spaces or algebras, $T(V) \boxtimes T(W)$ . According to a Wikipedia article on tensor algebra, the coproduct $\Delta: V \to V \boxtimes V$ is defined for $v \in V$ as $\Delta(v) = v \boxtimes 1 + 1 \boxtimes v$ , extending homomorphically over $T(V)$ . My inquiry focuses on the expansion of $\Delta(v \otimes w)$ for $v, w \in V$ , specifically through these steps: Starting from $\Delta(v\otimes w)$ , the definition of $\Delta$ is applied to both $v$ and $w$ : $$\Delta(v\otimes w) = (v\boxtimes 1 + 1\boxtimes v) \otimes (w\boxtimes
While that article is trying to make things easier to comprehend with its notational choices, I think it may actually make things more confusing. In general, if $(A,\cdot_A)$ and $(B,\cdot_B)$ are two algebras (associative and unital in the present context), then we can define a multiplication $\cdot_{A\otimes B}$ on the tensor product vector space $A\otimes B$ which satisfies $$(a_1\otimes b_1)\cdot_{A\otimes B}(a_2\otimes b_2)=(a_1\cdot_A a_2)\otimes(b_1\cdot_B b_2)\tag{1}$$ and makes $(A\otimes B,\cdot_{A\otimes B})$ an algebra. If as in the article we swap the regular tensor product symbol $\otimes$ here with $\boxtimes$ , then (1) becomes $$(a_1\boxtimes b_1)\cdot_{A\boxtimes B}(a_2\boxtimes b_2)=(a_1\cdot_A a_2)\boxtimes(b_1\cdot_B b_2)\tag{1'}$$ which makes $(A\boxtimes B,\cdot_{A\boxtimes B})$ an algebra. If we now take $A=T(V)$ with $\cdot_A=\otimes_{T(V)}$ and similarly $B=T(V)$ with $\cdot_B=\otimes_{T(V)}$ , and also write $\cdot_{A\boxtimes B}$ as $\otimes_{T(V)\boxtimes T
|linear-algebra|abstract-algebra|tensor-products|tensors|coalgebras|
0
Why $B^{\sharp}e_{1}^{*},...,B^{\sharp}e_{n}^{*}$ is a basis of $\mathfrak{g}$?
Let $\mathfrak{g}$ be a finite dimensional Lie algebra and $B:\mathfrak{g} \times \mathfrak{g} \rightarrow \mathbb{K}$ be a non-degenerate invariant symmetric bilinear form. Then there exists a unique $C_{B} \in \mathfrak{U(g)}$ , called the Casimir element of $\mathfrak{g}$ corresponding to $B$ , such that $$C_{B} = \sum_{i=1}^{n} e_{i}(B^{\sharp}e_{i}^{*}), $$ for any basis $e_{1},...,e_{n}$ of $\mathfrak{g}$ , with corresponding dual basis $e_{1}^{*},...,e_{n}^{*}$ , where $B^{\sharp}:\mathfrak{g^{*}} \rightarrow \mathfrak{g}$ denotes the inverse of the linear isomorphism $B^{\flat}: \mathfrak{g} \rightarrow \mathfrak{g^{*}}, X \mapsto B(X,-)$ . Additionally, $C_{B}$ is a quadratic central element of $\mathfrak{U(g)}$ , i.e. $$C_{B} \in \mathfrak{U_{2}(g)} \setminus \mathfrak{U_{1}(g)}$$ and $$C_{B} \in Z(\mathfrak{U(g)}). $$ I'm studying this proposition on the Casimir element and I have a question: in its proof it says that if we fix a basis $e_1,...,e_n \in \mathfrak{g}$ , since $B$ is non-degenerate, $B^{\sharp}e_{1}^{*},...,B^{\sharp}e_{n}^{*}$ is a basis of $\mathfrak{g}$ . Why is this true?
Since $B$ is non-degenerate, we can easily compute the kernel of $B^\flat$ , indeed $$\begin{align*} X \in \ker B^\flat &\iff B^\flat (X) = 0\\ &\iff \forall Y \in \mathfrak{g}, \, B(X,Y) = 0\\ &\iff X = 0 \end{align*}$$ So $B^\flat$ is injective and since $\dim \mathfrak{g} = \dim \mathfrak{g}^*$ , it is bijective, thus it has an inverse linear map $B^\sharp:\mathfrak{g}^* \to \mathfrak{g}$ that sends any basis of $\mathfrak{g}^*$ to a basis of $\mathfrak{g}$ . In particular $(B^\sharp e_1^*,\dots,B^\sharp e_n^*)$ is a basis of $\mathfrak{g}$ . This can be summarized as "the composition $B^\sharp \circ (-)^*_{(e_1,\dots,e_n)}$ is an automorphism of $\mathfrak{g}$ (as a vector space)."
|lie-algebras|
0
Analysis of Metric Properties in an Infinite Set with Discrete Metric
I am not sure if my solution to the following problem is correct. Let $X$ be an infinite set. For $x \in X$ and $y \in X$ we define: \begin{equation} d(x,y) = \left\{ \begin{array}{ll} 1 & x \neq y \\ 0 & x=y \end{array} \right. \end{equation} Prove that $d$ is a metric. Which subsets of $X$ are open, which are closed, and which are compact? My answer: I show that the function $d: X \times X \rightarrow \mathbb{R}$ defines a metric on the set $X$ , where $X$ is an infinite set. Non-negativity: For any $x, y \in X$ , $d(x, y)$ is either 0 or 1, which are both non-negative. Identity: $d(x, x) = \left\{ \begin{array}{ll} 1 & x \neq x \\ 0 & x = x \end{array} \right\} = 0$ for all $x \in X$ due to the definition. Symmetry: $d(x, y) = \left\{ \begin{array}{ll} 1 & x \neq y \\ 0 & x = y \end{array} \right\} = d(y, x)$ whenever $x \neq y$ . Triangle Inequality: Suppose $x, y, z \in X$ . We need to show that $d(x, z) \leq d(x, y) + d(y, z)$ . Case 1: $x \neq y$ and $y \neq z$ In this case, $d(x, y) = d(y, z)
Some corrections: For any $0 < \varepsilon \leq 1$ and $x\in X$ we will have $B(x,\varepsilon)=\{x\}$ , as $d(x,x)=0 < \varepsilon$ while $d(x,y)=1 \geq \varepsilon$ for every $y \neq x$ . For $\varepsilon>1$ we have $B(x,\varepsilon)=X$ , as every element is at most at distance $1$ from $x$ . From here, every set is open (since every element is by itself a ball) and thus every set is closed. With respect to compactness, every finite subset will be compact (as in every other topological space) since from every open cover we may take an open set per element. Nonetheless, any infinite subset $A$ will not be compact: one may not find a finite subcover from $\{\{a\}: a\in A\}$ .
|real-analysis|calculus|elementary-set-theory|metric-spaces|
0
Any Hausdorff topology on a finite-dimensional vector space is equivalent to the usual one
I am trying to understand the proof of this statement. Any Hausdorff topology on a finite-dimensional vector space with respect to which vector space operations are continuous is equivalent to the usual one. Source: https://personal.math.ubc.ca/~cass/research/pdf/TVS.pdf In the proof it is stated that: It remains to show that the inverse of $f$ is continuous. For this, it suffices to show that $f(B(1))$ contains a neighbourhood of $0$ . My Question is: Why does it suffice to show that $f(B(1))$ contains a neighborhood of $0$ ? Here the proposition in detail: Any Hausdorff topology on a finite-dimensional vector space is equivalent to the usual one Proof: A basis of $V$ determines a linear isomorphism $f:\mathbb{R}^n \rightarrow V$ , which is continuous by the assumptions on the topology of $V$ . By definition of continuity, if $U$ is any neighborhood of $0$ in $V$ there exists some disk $B(r)$ with $f(B(r))\subseteq U$ . It remains to show that the inverse of $f$ is continuous. For this, it suffices to show that $f(B(1))$ contains a neighbourhood of $0$ .
It is well-known that a linear map $\phi : V \to W$ between TVS $V,W$ is continuous iff it is continuous at $0$ . I shall give a proof later. To prove that $g := f^{-1} : V \to \mathbb R^n$ is continuous, it therefore suffices to show that for each open neighborhood $W$ of $0$ in $\mathbb R^n$ there exists an open neighborhood $W'$ of $0$ in $V$ such that $g(W') \subset W$ . The latter is equivalent to $W' \subset f(W)$ . Assume that $f(B(1))$ contains an open neighborhood $W_0$ of $0$ in $V$ . There exists $r > 0$ such that $B(r) \subset W$ . Then $W' := rW_0$ is an open neighborhood of $0$ in $V$ such that $$W' = rW_0 \subset r f(B(1)) = f(rB(1)) =f(B(r)) \subset f(W) .$$ Let us finally prove that a linear $\phi : V \to W$ which is continuous at $0$ is continuous at all $v \in V$ . Let $W'$ be an open neighborhood of $\phi(v)$ in $W$ . Then $W'' = W' - \phi(v)$ is an open neighborhood of $0$ in $W$ and there exists an open neighborhood $V''$ of $0$ in $V$ such that $\phi(V'') \subset W''$ . Then $v + V''$ is an open neighborhood of $v$ in $V$ and $\phi(v + V'') = \phi(v) + \phi(V'') \subset \phi(v) + W'' = W'$ , so $\phi$ is continuous at $v$ .
|general-topology|functional-analysis|topological-vector-spaces|
1
Why a skew-symmetric bilinear form is degenerate if the space is odd dimensional
Let $V$ be a vector space of odd dimension over a field $F$ and $B:V\times V\rightarrow F$ a skew-symmetric bilinear form. Why can we assert that $B$ is degenerate? Is it also true for any skew-symmetric bilinear map $C: V\times V\rightarrow W$ where $W$ is another vector space?
For your first question : first of all, you have to assume $\operatorname{char}(F) \neq 2$ as Tuvasbien mentioned. In fact, take $F = V = \mathbb Z / 2\mathbb Z$ so that $\dim_F(V) = 1$ and define $B(x,y) = xy$ . Because $1 = -1$ in $F$ , $B$ is a skew symmetric bilinear form but $B(1,1) = 1$ so that it is non-degenerate. Now let $\operatorname{char}(F) \neq 2$ , $n = \dim(V)$ and $\{e_1, \dotsc, e_n\}$ be a basis of $V$ . You have an isomorphism between the space of skew symmetric bilinear forms and the space of skew symmetric matrices by sending $B$ to its Gram matrix $G(B) := \left[B(e_i,e_j)\right]_{i,j}$ . Under this identification, if $u = \sum_{i=1}^n u_i e_i$ and $v = \sum_{i=1}^n v_i e_i$ , then $$B(u,v) = \begin{pmatrix}u_1 & u_2 & \dotsc & u_n \end{pmatrix} \begin{pmatrix}B(e_1, e_1) & B(e_1,e_2) & \dotsc & B(e_1, e_n) \\ B(e_2,e_1) & B(e_2,e_2) & \dotsc & B(e_2,e_n) \\ \vdots & \dotsc & \dotsc & \vdots \\ B(e_n, e_1) & B(e_n, e_2) & \dotsc & B(e_n, e_n) \end{pmatrix}\begin{pmatrix}v_1 \\ v_2 \\ \vdots \\ v_n\end{pmatrix}$$
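Assuming the argument is meant to conclude in the standard way, the degeneracy for odd $n$ follows by taking determinants of the Gram matrix:

$$\det G(B) = \det\left(-G(B)^{T}\right) = (-1)^{n}\det G(B),$$

so for odd $n$ we get $\det G(B) = -\det G(B)$ , i.e. $\det G(B) = 0$ (using $\operatorname{char}(F) \neq 2$ ), and therefore $B$ is degenerate.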
|linear-algebra|
0
Sum of the integral of the $n$th power of the distribution function of real-valued random variable diverge
Let $X$ a real-valued random variable I need to prove that $$ \sum_{n=0}^{\infty} \int_{F^{-1}((0,1))} F(s)^n d\mu(s) = \infty $$ with $F$ the distribution function of $X$ and $\mu$ the law of $X$ .
Summing the geometric series (interchange of summation and integration okay by Tonelli) you're left to evaluate $$ \int_{F^{-1}((0,1))}{1\over 1-F(s)}\,d\mu(s). $$ This can be interpreted as an expectation: $$ E\left[{1\over 1-F(X)};\ 0 < F(X) < 1\right]. $$ If $F$ is continuous, you know that $F(X)$ is uniformly distributed over $(0,1)$ , and the end is in sight. If $F$ has some jumps, the assertion is in doubt. For example, suppose $X$ is uniformly distributed over $\{1,2,\ldots,K\}$ for some positive integer $K$ . Then the above expectation is $$ \sum_{j=1}^{K-1}{1\over K}\cdot{1\over 1-j/K}=\sum_{j=1}^{K-1}{1\over K-j}=\sum_{i=1}^{K-1}{1\over i}, $$ which is finite. Let $M:=\sup\{x: F(x) < 1\}$ . Evidently the expectation (hence the original sum) will be finite whenever $M < \infty$ and $F$ has a jump at $M$ but is constant in an interval $(M-\epsilon,M)$ just to the left of $M$ .
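The discrete example can be checked directly by summing the geometric series at each atom; a quick numerical sketch (my own variable names):

```python
K = 5
mu = [1 / K] * K                     # uniform law on {1, ..., K}
F = [(j + 1) / K for j in range(K)]  # distribution function at the atoms

# sum_n of integral of F^n over {0 < F < 1} collapses to a geometric series
total = sum(m / (1 - f) for f, m in zip(F, mu) if 0 < f < 1)
harmonic = sum(1 / i for i in range(1, K))
print(total, harmonic)  # both approximately 2.0833 (= 25/12)
```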
|probability|random-variables|
0
Is there a complete list of all compact spaces covered by $\mathbb{R}^2$?
I'm interested in compact spaces that are covered by $\mathbb{R}^2$ . I know of the torus and the Klein bottle. Is the double-holed torus covered by $\mathbb{R^2}$ ? Are there any other spaces, or are these the only two compact spaces? Is there a way to show this/is there a complete characterization of all such spaces? EDIT: I realize from the comments that all spaces (apart from the sphere and the real projective plane) are covered by $\mathbb{R}^2$ . I guess then my question then really is the following: Is there some (maybe geometric) characterization that helps reduce the set of spaces that are covered by $\mathbb{R}^2$ ? What I mean is that the double torus is covered by the plane, but (if I understand correctly) the plane needs to have hyperbolic geometry. Say I were only interested in a Euclidean geometry? What about in that case? Is there some characterization for which only the torus, or the torus and the klein bottle are the reasonable covered spaces? The motivation for this
In order to make sense of your question one has to specify what do "space" and "cover" mean. Without further adjectives, they usually mean "a topological space" and "a topological covering map." However, it is clear from your comments, this is not what you are actually interested in. One can define the notion of a covering in other categories of "spaces," for instance, for triangulated manifolds, complex manifolds, and for Riemannian manifolds. Let me first discuss the case of Riemannian manifolds. I will assume that you know at least basic Riemannian geometry. Given a connected Riemannian manifold $(M,g)$ (where $g$ is the Riemannian metric tensor on $M$ , aka a Riemannian metric ), and a local diffeomorphism $$ f: N\to M $$ one defines the pull-back Riemannian metric $h=f^*(g)$ on $N$ by the rule $$ h(u,v)=g(Df_p(u), Df_p(v)), p\in N, u, v\in T_xM. $$ When we equip $N$ with the Riemannian metric $h$ , the map $$ f: (N, h)\to (M, g) $$ becomes a (Riemannian) isometric map. This does n
|algebraic-topology|manifolds|riemannian-geometry|covering-spaces|
1
Compact embedding in $L^1$
Is it true that the space $H^1(\mathbb{R}^2)\cap L^2(\mathbb{R}^2,(1+|x|^2)^2dx)$ is compactly embedded in $L^1(\mathbb{R}^2)?$ I don't think that the dimension $2$ is important. Here $H^1$ is the usual Sobolev space and $L^2(\mathbb{R}^2,(1+|x|^2)^2dx)$ is the weighted $L^2$ space, i.e. $\int_{\mathbb{R}^2}|f|^2(1+|x|^2)^2dx < \infty$ . It seems to be similar to the previous question here Proving a subset of $H^1(\mathbb{R}^d)$ is compactly embedded in $L^2(\mathbb{R}^d)$. , but I cannot adapt it to $L^1$ . Thank you.
Note that it suffices to show that if $f_n \rightharpoonup f$ in $H^1\cap L^2((1+|x|^2)^2dx)$ , then it converges strongly in $L^1$ . Indeed, suppose that this is true. Then, denote by $T:H^1\cap L^2((1+|x|^2)^2dx) \to L^1$ the injection map. For any sequence $T(f_n)$ with $f_n \in B_1$ , we use Banach–Alaoglu (since $H^1\cap L^2((1+|x|^2)^2dx)$ is a Hilbert space) to extract a weakly converging subsequence $f_{n_j}$ , which then yields that $Tf_{n_j}$ is strongly converging by assumption, showing compactness. To this end, note that $$ \|f_n-f\|_{L^1}=\|f_n-f\|_{L^1(|x|\leq R)}+\|f_n-f\|_{L^1(|x|> R)} $$ for any fixed positive $R$ . For any fixed $R$ , the first term tends to zero by Rellich, so we just need to ensure we can make the last term sufficiently small. Indeed, it is bounded above by $\|f_n\|_{L^1(|x|> R)}+\|f\|_{L^1(|x|> R)}$ , which may be bounded as follows: By weak convergence, there exists some constant $C>0$ so that $\|f_n\|_{L^2((1+|x|^2)^2)},\|f\|_{L^2((1+|x|^2)^2)}\leq C$ for all $n$ . By Cauchy–Schwarz, $$ \|f_n\|_{L^1(|x|>R)} \leq \|f_n\|_{L^2((1+|x|^2)^2)}\left(\int_{|x|>R}\frac{dx}{(1+|x|^2)^2}\right)^{1/2}\leq C\left(\int_{|x|>R}\frac{dx}{(1+|x|^2)^2}\right)^{1/2}, $$ and the same bound holds for $f$ ; the remaining integral tends to $0$ as $R\to\infty$ , so the tail terms can be made uniformly small.
|real-analysis|functional-analysis|sobolev-spaces|compact-operators|
0
Combinatorics: how many clock combinations of 4 different digits on a 24-hour clock
I tried solving this problem and I got into a nice way, sadly I think I might be wrong. What I thought of doing is separating (0/1)H:MM and 2H:MM from each other while calculating. While calculating (0/1)H:MM I came to this calculation: 2 * 9 * 4 * 7 = 588 2 - for two options 9 - for one less option (0 to 9) - 0/1 4 - The first M is between (0 - 5) - (0/1 + (0 to 9)) = 6 - 2 7 - What is left is calculating the last M which has 10 options minus 3 options already written The second calculation I got was 1 * 3 * 4 * 7 1 - I was calculating only for first M = 2 => only one option 3 - you have three options which can happen 20, 21, 23 (22 is same digit which is forbidden) 4 - you have two numbers down so from the 6 numbers available deduct 2 => 6-2 = 4 7 - You have already chosen 3 numbers and now you have 10 - 3 which is 7. This is what I came up with, but I think it might be wrong because a friend tested it on his computer and got 646. The question is what did I do wrong and how can you get the correct answer?
There are $24 - 3 = 21$ possibilities for the hour (00, 11 and 22 excluded). $10$ of these have 0 or 1 as 1st digit and 0 to 5 as 2nd digit and $3$ start with 2 (20, 21, 23); in these $13$ cases there are $4$ possibilities for the 1st digit of the minute and $7$ for the 2nd: $13 \cdot 4 \cdot 7 = 364$ . The other $8$ cases for the hour have $5$ possibilities for the 1st digit of the minute and $7$ for the 2nd: $8 \cdot 5 \cdot 7 = 280$ . The sum is $364 + 280 = 644$ .
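The count of $644$ is easy to confirm by brute force; a small sketch (assuming the intended reading: all four displayed digits of a valid 24-hour time HH:MM must be pairwise distinct):

```python
count = 0
for h in range(24):
    for m in range(60):
        digits = f"{h:02d}{m:02d}"
        if len(set(digits)) == 4:   # all four digits pairwise distinct
            count += 1
print(count)  # 644
```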
|combinatorics|
0
a subfield isomorphic to $\mathbb F_{p^n}.$
Here is the question whose part (a) I am trying to solve: Fix $\mathbb F_p$ a field with $p$ elements for $p$ prime. Consider $f(X) = X^{p^n} - X + 1$ over $\mathbb F_p$ and let $E$ be a splitting field of $f.$ $(a)$ Prove that $E$ has a subfield isomorphic to $\mathbb F_{p^n}.$ My questions are: 1 - Why does the following hint prove part $(a)$ ? The hint is: If $\alpha$ is a root of $f,$ produce more roots of the form $\alpha + a$ Any clarification will be greatly appreciated!
Let us consider the field $E' := \mathbb{F}_{p^n}(\alpha)$ . [Note that $\alpha$ is not in $\mathbb{F}_{p^n}$ . Indeed, if $\alpha$ were already in $\mathbb{F}_{p^n}$ then $\alpha$ would satisfy $\alpha^{p^n}-\alpha = 0$ and thus $\alpha$ would not be a root of $X^{p^n}-X+1$ . ] Then $X^{p^n}-X+1$ completely splits in $E'$ . [Indeed, let $a \in \mathbb{F}_{p^n}$ . Then $\alpha +a \in E'$ is a root of $X^{p^n}-X+1=0$ . [Indeed, $$(\alpha+a)^{p^n} = \alpha^{p^n}+a^{p^n},$$ as $E'$ is characteristic $p$ , and $a^{p^n}=a$ as $a$ is in $\mathbb{F}_{p^n}$ .] So all $p^n$ roots of $X^{p^n}-X+1$ are accounted for in $E'$ .] Thus on the one hand, the field $E$ as defined in the OP is a subfield of $E'$ . On the other hand, every root of $X^{p^n}-X+1$ is in $E$ and thus, $\alpha +a$ is in $E$ for each $a \in \mathbb{F}_{p^n}$ . As $\alpha$ itself is also in $E$ , it follows that $(\alpha+a) - \alpha =$ $a$ is in $E$ for each $a \in \mathbb{F}_{p^n}$ . And so the result follows.
|abstract-algebra|
1
why is Terence Tao's definition 3.3 of set intersection so complicated?
Terence Tao's Analysis I 4th edition defines set intersection as follows: Given any non-empty set $I$ , and given an assignment of a set $A_{\alpha}$ to each $\alpha \in I$ , we can define the intersection $\bigcap_{\alpha \in I}A_{\alpha}$ by first choosing some element $\beta$ of $I$ (which we can do since $I$ is non-empty), and setting $$ \bigcap_{\alpha \in I} A_{\alpha} := \{ x \in A_{\beta} : x \in A_{\alpha} \text{ for all } \alpha \in I \} $$ which is a set by the axiom of specification. Question My question is - why does he define it in this complicated way? A simpler definition might be: The intersection of a family of sets $A_\alpha$ is the set whose elements exist in every set $A_\alpha$ . That is, $$ \bigcap_{\alpha \in I} A_{\alpha} := \{ x : x \in A_{\alpha} \text{ for all } \alpha \in I \} $$ Clearly I am missing something important. Thoughts I recall Tao discussing earlier in the book how subsets of sets are sets, but sets defined by logical statements can cause paradoxes.
I must admit that I might take a slightly different pedagogical approach than Tao. I would define a set according to your proposed definition, and then prove that it exists. But both approaches have their complications.
|elementary-set-theory|
0
Relations of the rational points between two birational curves.
Let $X$ and $X'$ be two birational algebraic curves over $\mathbb{Q}$ . Is it true (or not) that $X(\mathbb{Q})$ is Zariski dense iff $X'(\mathbb{Q})$ is Zariski dense ? If no, where can I find a counter-example? If yes, what about the higher dimensional varieties?
This is true for any two birational varieties, assuming the birational map is defined over $\Bbb Q$ . Suppose $X(\Bbb Q)\subset X$ is Zariski dense. We will show that any nonempty open subset $V'\subset X'$ contains a rational point. Recall that two varieties $X,X'$ are birational iff there are dense open sets $U\subset X$ and $U'\subset X'$ with $U\cong U'$ . Fix $U,U'$ as in the previous sentence; then $V'\cap U'$ is a nonempty open subset as $X'$ is irreducible, so $V'\cap U'$ is $\Bbb Q$ -isomorphic to a nonempty open subset of $X$ and therefore contains a rational point by density. Since a $\Bbb Q$ -isomorphism preserves rational points, we see that $V'\cap U'$ contains a rational point, hence $V'$ contains a rational point, and we win.
|algebraic-geometry|algebraic-curves|
1
Prove $\sum_{k=0}^\infty\frac{\sin(\frac{\pi k}4)}{k!\sqrt{2^k}}\pi^k=\sqrt{e^\pi}$
$$\sum_{k=0}^\infty\frac{\sin(\frac{\pi k}4)}{k!\sqrt{2^k}}\pi^k=\sqrt{e^\pi}$$ Prove that the value of the LHS equals the RHS. Help me with this doubt; I am unable to figure out what to do in this question.
The $k!$ and the $k$ th power terms should remind you of Taylor series of $e^x$ . The only obstacle is the $\sin(\frac{\pi k}{4})$ term, but it can be easily written as a power function. $e^{i\theta}=\cos\theta+i\sin\theta$ , so we can simply write it as $\text{Im} (e^{i\frac{\pi k}{4}})$ . So the sum becomes $$lhs=\text{Im} \sum\frac{e^{i\frac{\pi k}{4}}\left(\frac{\pi}{\sqrt 2}\right)^k}{k!}=\text{Im}\sum\frac{\left(e^{i\frac{\pi}{4}}\frac{\pi}{\sqrt 2}\right)^k}{k!}$$ $e^{i\frac{\pi}{4}}=\cos\frac{\pi}{4}+i\sin\frac{\pi}{4}=\frac{1}{\sqrt 2}+\frac{i}{\sqrt 2}$ , so $$lhs=\text{Im}\sum\frac{\left(\frac{\pi}{2}+\frac{i\pi}{2}\right)^k}{k!}=\text{Im}(e^{\frac{\pi}{2}+\frac{i\pi}{2}})=\text{Im}(ie^\frac{\pi}{2})=e^\frac{\pi}{2}=\sqrt{e^\pi}=rhs.$$
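A quick numerical check of the identity by truncating the series (a sketch of my own; the terms decay factorially, so 50 terms is plenty):

```python
import math

# partial sum of sum_k sin(pi k / 4) / (k! sqrt(2^k)) * pi^k
s = sum(
    math.sin(math.pi * k / 4) / (math.factorial(k) * math.sqrt(2**k)) * math.pi**k
    for k in range(50)
)
assert abs(s - math.sqrt(math.e**math.pi)) < 1e-9
```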
|sequences-and-series|summation|
0
How many different totals can be obtained in a test with $20$ questions if one can obtain $-1$, $0$, or $4$ for a question?
In an exam there are $20$ questions; one can obtain either $-1$ , $0$ or $4$ in each question based on the marking scheme. How many different totals can he obtain in the test? One thing I observed is that even without repetition there is repetition, as different score patterns can give the same total. I tried several ways like the beggar method and others, but ran into the same problem. One thing I deduced is that the lowest he can get is $-20$ and the highest he can get is $80$ . So there are $101$ numbers in total, but some of them cannot be obtained, so the answer must lie between $1$ and $101$ . But I am certainly unable to find the answer.
We can use the fact that one can obtain $0$ marks from a question to our advantage. Let me explain. As you have rightly concluded, the total has to fall in the range $-20$ to $80$ . Now, you can obtain every total from $0$ to $-20$ . For example, $-3$ can be obtained by getting $-1$ each from three questions and $0$ from the rest. Also, as long as you have three questions free to use, you can obtain the numbers between two multiples of $4$ . For example, $41 = 11 \times 4 + (-1) \times 3$ . Now, $20 - 3 = 17$ . So, you can obtain every total upto $17 \times 4 = 68$ . Thereafter, we need to be a little cautious. If you get $4$ for eighteen questions, your total would be $72$ , $71$ , or $70$ , depending on what you get for the other two questions. So, $69$ is ruled out. Similarly, if you get $4$ for nineteen questions, you will end up with $76$ or $75$ marks. So $73$ and $74$ are ruled out. And then, if you get $4$ for all twenty questions, you will get a total of $80$ . So $77$ , $78$ and $79$ are ruled out as well. That rules out six totals ( $69$ , $73$ , $74$ , $77$ , $78$ , $79$ ), leaving $101 - 6 = 95$ possible totals.
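The conclusion can be verified by enumerating all score patterns; a small sketch (here `a` is the number of questions scored $4$, `b` the number scored $-1$, and the rest $0$):

```python
totals = set()
for a in range(21):           # questions answered for 4 marks
    for b in range(21 - a):   # questions answered for -1 mark
        totals.add(4 * a - b)
print(len(totals))  # 95
```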
|combinatorics|permutations|combinations|
1
Integrate : $\int (\sin(x))^{2a-1} dx$
Question : Integrate $$\int (\sin x)^{2a-1} dx$$ where $a \in \mathbb{N}$ . My Attempt : Using IBP on $$I_j=\int (\sin x)^{2j-2}\sin x dx= -(\sin x)^{2j-2}\cos x+\int(2j-2)(\sin x)^{2j-3}(\cos x)^2dx$$ $$=-(\sin x)^{2j-2}\cos x +2(j-1)\left(\int(\sin x)^{2j-3}dx-\int(\sin x)^{2j-1}dx\right)$$ $$=-(\sin x)^{2j-2}\cos x +2(j-1)(I_{j-1}-I_{j})$$ $$\implies (2j-1)I_j-(2j-2)I_{j-1}=-(\sin x)^{2j-2}\cos x$$ getting stuck here...
Another, much messier, approach; $$\int(\sin x)^{2a-1}dx=\frac{1}{(2i)^{2a-1}}\int e^{i(2a-1)x}(1-e^{-2ix})^{2a-1}dx$$ Applying the binomial expansion yields; $$=\frac{1}{(2i)^{2a-1}}\sum_{n=0}^{2a-1}\frac{(-1)^n(2a-1)!}{n!(2a-n-1)!}\int e^{i(2a-1)x}e^{-2nix}dx$$ $$=\frac{1}{(2i)^{2a-1}}\sum_{n=0}^{2a-1}\frac{(-1)^n(2a-1)!}{n!(2a-n-1)!}\int e^{i(2a-2n-1)x}dx$$ $$=\frac{1}{i(2i)^{2a-1}}\sum_{n=0}^{2a-1}\frac{(-1)^n(2a-1)!\cdot e^{i(2a-2n-1)x}}{n!(2a-n-1)!(2a-2n-1)}+C$$ Taking the real component of our sum then gives us; $$=\frac{2}{(-4)^{a}}\sum_{n=0}^{2a-1}\frac{(-1)^n(2a-1)!}{ n!(2a-n-1)!(2a-2n-1)}\cdot\cos((2a-2n-1)x)+C$$
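One can sanity-check the closed form numerically by differentiating it and comparing against $\sin^{2a-1}x$; a sketch with my own function name (the constant of integration is omitted):

```python
import math

def antiderivative(a, x):
    # closed form obtained above
    total = 0.0
    for n in range(2 * a):
        m = 2 * a - 2 * n - 1
        coeff = (-1) ** n * math.factorial(2 * a - 1) / (
            math.factorial(n) * math.factorial(2 * a - n - 1) * m
        )
        total += coeff * math.cos(m * x)
    return 2 / (-4) ** a * total

a = 3
h = 1e-6
for x in (0.3, 1.1, 2.5):
    deriv = (antiderivative(a, x + h) - antiderivative(a, x - h)) / (2 * h)
    assert abs(deriv - math.sin(x) ** (2 * a - 1)) < 1e-5
```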
|integration|indefinite-integrals|
1
Commutators of unitary matrices and their unitary square roots with other matrices
Given some unitary matrix $U$ , we can possibly find many unitary square root matrices $S_i$ so that $S_i^2=U$ . Let's assume we have an additional (complex) matrix $T$ so that it commutes with $U$ : $$ [U,T]=0. $$ What can we say about the commutator $[S_i,T]$ ? Do these matrices necessarily commute? If not, is there some restriction we can place on $U$ so that this holds for all $S_i$ ? Note that this is an extension of this question , but they are not equivalent, because it only asks for the existence of such a matrix $S_i$ .
As I stated before, they don't necessarily commute, since $$\begin{pmatrix}0&1\\0&0\end{pmatrix}^2 = 0$$ and $0 \in Z(\operatorname{Mat}_2(k))$ , but $\begin{pmatrix}0&1\\0&0\end{pmatrix} \notin Z(\operatorname{Mat}_2(k))$ . Now suppose you have $S^2 = U$ and $[U,T]=0$ . The Dunford decomposition of $S$ (provided your field is perfect) is the unique $S = D+N$ with $D$ semi-simple and $N$ nilpotent such that they are both polynomials in $S$ . You also have $U = D' + N'$ and these are polynomials in $U$ and $S$ , so if we compute $S^2 = D^2 + 2 DN + N^2$ , this gives you $D^2 = D'$ and $2DN + N^2 = N'$ . It is obvious that $T$ commutes with $U$ iff it commutes with both $D'$ and $N'$ (these are polynomials in $U$ ), so you get that $D^2$ and $2DN + N^2$ commute with $T$ . One could get some result like the rank of the commutator has to be small, I believe at most $n/2$ .
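A concrete unitary illustration of the failure (my own example, not from the answer above): take $U = I$, which commutes with everything, together with a unitary square root $S$ of $I$ that does not.

```python
import numpy as np

U = np.eye(2)                          # unitary
S = np.diag([1.0, -1.0])               # unitary, S @ S == U
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert np.allclose(S @ S, U)
assert np.allclose(U @ T, T @ U)       # [U, T] = 0
print(S @ T - T @ S)                   # nonzero, so [S, T] != 0
```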
|linear-algebra|matrices|radicals|unitary-matrices|
0
The work done by the force $ \vec{F} $ on a particle
Question The work done by the force ${\vec{F}}=(x^{2}-y^{2})\hat{i}+(x+y)\hat{j}$ in moving a particle along the closed path $C$ containing the curves $x+y=0$ , $x^{2}+y^{2}=16$ and $y = x$ in the first and fourth quadrants is? The path given is I tried to compute it using Green's Theorem. Here $$ M= x^2-y^2$$ and $$ N= x+y$$ We have, $$ \oint_C \vec{F}\cdot d\vec{s} =\iint \left( \frac{ \partial{N}}{\partial{x}} - \frac{\partial{M}}{\partial{y}}\right)dA $$ So, $$\oint_C \vec{F}\cdot d\vec{s} = \iint (1+2y)\, dA $$ Now converting the problem into polar coordinates by substituting $x= r\cos\theta$ and $y= r\sin\theta$ , and substituting the limits of $r$ from $0$ to $4$ and $\theta$ from $-\pi/4$ to $\pi/4$ : $$ \oint_C \vec{F}\cdot d\vec{s} = \int_{\theta = -\pi/4}^{\pi/4} \int_{r=0}^4(1+2r \sin \theta ) r \,dr\,d\theta $$ $$ = \int_{\theta = -\pi/4}^{\pi/4} \left(\frac{r^2}{2}+\frac{2r^3\sin \theta }{3}\right)\Big|_0^4 \,d \theta $$ $$= \int_{\theta = -\pi/4}^{\pi/4} \left(8 + \frac{128}{3} \sin \theta\right) d\theta $$ $$ = \left(8 \theta - \frac{128 \cos \theta}{3} \right)\Big|_{-\pi/4}^{\pi/4} $$
You are doing it correctly. If you want to save time, interpret the double integral using symmetry. $\iint_R 1\,dA$ gives the area of the region $R$ , which is $\frac14(\pi\cdot 4^2) = 4\pi$ , and $\iint_R y\,dA$ gives $0$ by symmetry (since the region is symmetric about $y=0$ ). Your answer is correct. The problem is poorly phrased, as we are not explicitly told the orientation of the curve. But it sounds like you are meant to traverse those curves in the order given, so you inferred the correct orientation.
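A numeric sanity check (my own, assuming the counterclockwise orientation inferred above): approximate $\int_{-\pi/4}^{\pi/4}\int_0^4 (1+2r\sin\theta)\,r\,dr\,d\theta$ by a midpoint Riemann sum and compare it with $4\pi$ .

```python
# Midpoint Riemann sum over 0 <= r <= 4, -pi/4 <= theta <= pi/4.
# The sin(theta) term cancels by symmetry; the rest gives the area 4*pi.
import math

N = 400
dr = 4 / N
dth = (math.pi / 2) / N
total = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    for j in range(N):
        th = -math.pi / 4 + (j + 0.5) * dth
        total += (1 + 2 * r * math.sin(th)) * r * dr * dth

print(round(total, 3), round(4 * math.pi, 3))  # both approximately 12.566
```

The sum agrees with $4\pi$ to well beyond three decimals, matching the symmetry argument.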
|vector-analysis|line-integrals|greens-theorem|
1
Probability of Team A winning a $5$ round max tournament ( wins when hits $3$ wins ) given that it has won the first round
There are $2$ volleyball teams, A and B. Each has a $50\%$ chance of winning a round. They play until one wins $3$ rounds. The first round is won by A. What is the probability of A winning? I calculated by hand that it's $\frac{11}{16}$ , but I'm wondering if there's a better, more mathematical way of proving it. I tried to use the Binomial Probability Theorem (example: https://www.cuemath.com/binomial-distribution-formula/ ) because this somewhat resembles the Bernoulli example question with coin throws, but still I couldn't get $11/16$ . So I'm looking for an explanation of why the probability is $11/16$ using a math equation. Thank you
A won the first round, so A has to win two more rounds to finish the game. You could use binomial probability to calculate the probability of A winning two of the remaining rounds: $C^{4}_{2}\frac{1}{2^{4}}$ . (Edit: OK, I agree this is wrong.)
|probability|
0
Probability of Team A winning a $5$ round max tournament ( wins when hits $3$ wins ) given that it has won the first round
There are $2$ volleyball teams, A and B. Each has a $50\%$ chance of winning a round. They play until one wins $3$ rounds. The first round is won by A. What is the probability of A winning? I calculated by hand that it's $\frac{11}{16}$ , but I'm wondering if there's a better, more mathematical way of proving it. I tried to use the Binomial Probability Theorem (example: https://www.cuemath.com/binomial-distribution-formula/ ) because this somewhat resembles the Bernoulli example question with coin throws, but still I couldn't get $11/16$ . So I'm looking for an explanation of why the probability is $11/16$ using a math equation. Thank you
Assuming that each game has the independent probability of $~50\%~$ of A winning, the standard shortcut here is to assume that all 5 games will be played, even if one of the two players wins before the 5th game. Note that playing all 5 games cannot change who won when one of the players reaches 3 wins before the 5th game. Then, A wins if he wins at least two of the four remaining games. Per the Binomial Distribution , the probability of exactly $~k~$ wins out of $~n~$ trials is $\displaystyle \binom{n}{k}p^kq^{(n-k)}.$ So, A's probability of winning is: $$~\left\{ ~\binom{4}{2} (1/2)^2 \times [1 - (1/2)]^2 ~\right\} + \left\{ ~\binom{4}{3} (1/2)^3 \times [1 - (1/2)]^1 ~\right\} \\ + \left\{ ~\binom{4}{4} (1/2)^4 \times [1 - (1/2)]^0 ~\right\} $$ $$= \frac{1}{16} \times [6 + 4 + 1] = \frac{11}{16}.$$
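The shortcut above can be checked in a few lines (my own sketch): compute the binomial sum directly, then cross-check by enumerating all $2^4$ equally likely outcomes of the remaining rounds.

```python
# "Pretend all 4 remaining rounds are played": A wins the match iff A wins
# at least 2 of them.
from math import comb
from itertools import product

p = sum(comb(4, k) for k in (2, 3, 4)) / 2**4
print(p)  # 0.6875 == 11/16

# Cross-check: enumerate every sequence of winners of the 4 remaining rounds.
wins = sum(1 for outcome in product('AB', repeat=4) if outcome.count('A') >= 2)
print(wins, '/', 16)  # 11 / 16
```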
|probability|
0
Probability of Team A winning a $5$ round max tournament ( wins when hits $3$ wins ) given that it has won the first round
There are $2$ volleyball teams, A and B. Each has a $50\%$ chance of winning a round. They play until one wins $3$ rounds. The first round is won by A. What is the probability of A winning? I calculated by hand that it's $\frac{11}{16}$ , but I'm wondering if there's a better, more mathematical way of proving it. I tried to use the Binomial Probability Theorem (example: https://www.cuemath.com/binomial-distribution-formula/ ) because this somewhat resembles the Bernoulli example question with coin throws, but still I couldn't get $11/16$ . So I'm looking for an explanation of why the probability is $11/16$ using a math equation. Thank you
To use the binomial distribution here, you need to realise two things: the match will need at most $4$ more rounds for $A$ to win, and $B$ can be allowed to win at most $2$ of those rounds. Then, writing in more general terms first, with $p$ being the probability of $A$ winning a round, and $q$ that of $B$ winning a round, P(A wins) $=\binom40q^0p^4 + \binom41q^1p^3 +\binom42q^2p^2$ . If you compute the result with $p=q=\frac12$ , you will get $\dfrac{1+4+6}{2^4} = \dfrac{11}{16}$ . If you are wondering why we are seemingly allowing $A$ to win more rounds than needed to win the match, see here
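As a cross-check that the "allow extra rounds" formula agrees with the real stopping rule, here is a short script (my own, using exact fractions) that enumerates the match recursively from the state where $A$ has $1$ win and $B$ has $0$ , halting as soon as either side reaches $3$ wins.

```python
# Exact probability that A reaches 3 wins first, with the genuine stopping
# rule (no pretend extra rounds), starting from A: 1 win, B: 0 wins.
from fractions import Fraction

def p_a_wins(a, b):
    """Probability A reaches 3 wins first, given a wins for A and b for B."""
    if a == 3:
        return Fraction(1)
    if b == 3:
        return Fraction(0)
    return Fraction(1, 2) * (p_a_wins(a + 1, b) + p_a_wins(a, b + 1))

print(p_a_wins(1, 0))  # 11/16
```

The exact answer $11/16$ matches the binomial computation above.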
|probability|
0
Example of non isomorphic field extensions
What is an example of non-isomorphic field extensions? I would like such a (non-)example, as I think it would explain the significance of the commutativity of the square condition in an isomorphism of field extensions $L/K\cong M/N$ , where $\mu$ , $\sigma$ are isomorphisms:
Note that it is well known that $\mathbb{C}$ contains a copy of itself as a proper subfield. In such a situation you can take $K=L=M=\mathbb{C}$ and $N$ a proper subfield of $M=\mathbb{C}$ isomorphic to $\mathbb{C}$ . Of course those fields are pairwise isomorphic, but no extension isomorphism (i.e. the commuting diagram) exists, because an extension isomorphism also preserves degree, and one extension is of degree $1$ while the other is of (necessarily) infinite degree. So an extension isomorphism is more than just a pair of isomorphisms.
|field-theory|examples-counterexamples|extension-field|
0
What subvariety corresponds to the preimage of an ideal?
Let $f : X \to Y$ be a morphism of complex affine varieties. Denote the coordinate rings of $X$ and $Y$ by $\mathbb{C}[X]$ and $\mathbb{C}[Y]$ respectively. Then $f$ corresponds to a ring homomorphism $f^* : \mathbb{C}[Y] \to \mathbb{C}[X]$ . It is an easy exercise to show that for every ideal $I \lhd \mathbb{C}[X]$ , the preimage $J := (f^*)^{-1}(I)$ is an ideal of $\mathbb{C}[Y]$ . Question. If an ideal $I \lhd \mathbb{C}[X]$ corresponds to a subvariety $Z \subset X$ , then what subvariety of $Y$ does the preimage $J = (f^*)^{-1}(I)$ correspond to? The obvious guess is that it corresponds to the image $f(Z) \subset Y$ , but I've read in many places that the image is not necessarily a subvariety. Is it the closure $\overline{f(Z)}$ ? Or something else?
Is it the closure $\overline{f(Z)}$ ? Yes indeed. Remember that we think of elements $g \in \mathbb C[Y]$ as functions $g: Y \to \mathbb C$ . The map $f^*$ is then just composition: $f^*g = g \circ f: X \to Y \to \mathbb C$ . Now such a function $g$ vanishes on $f(Z)$ if and only if $f^*g$ vanishes on $Z$ , i.e. $f^*g \in \sqrt{I}$ . But this is equivalent to $g \in \sqrt{J}$ . Let me end this with an example of a morphism whose image is neither open nor closed. Take $$f:\mathbb C^2 \to \mathbb C^2, (x,y) \mapsto (x, xy).$$ Its image is $$f(\mathbb C^2) = \{\, (x,y) \in \mathbb C^2 : x \neq 0 \,\} \cup \{(0,0)\}$$
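The claimed image can be made concrete with a small (hypothetical) helper: $(a,b)$ has a preimage under $f(x,y)=(x,xy)$ exactly when $a \neq 0$ (take $y = b/a$ ) or $(a,b)=(0,0)$ .

```python
# Membership test for the image of f(x, y) = (x, x*y) on C^2
# (illustrated over the rationals; the logic is the same over C).
def has_preimage(a, b):
    """Does (a, b) lie in the image of f(x, y) = (x, x*y)?"""
    if a != 0:
        return True      # take x = a, y = b / a
    return b == 0        # x = 0 forces x*y = 0

print(has_preimage(2, 5))  # True:  f(2, 5/2) = (2, 5)
print(has_preimage(0, 0))  # True:  f(0, y) = (0, 0) for any y
print(has_preimage(0, 1))  # False: the image misses the punctured y-axis
```

So the image is a dense constructible set whose closure is all of $\mathbb C^2$ , while the image itself is neither open nor closed.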
|abstract-algebra|algebraic-geometry|
0
Bijection between permutations of even length cycles and pairs of factors
I was trying to solve Exercise 3.13.15 in Cameron's Combinatorics: Topics, Techniques, Algorithms. It goes like this: (a) Let $n = 2k$ be even, and $X$ a set of $n$ elements. Define a factor to be a partition of $X$ into $k$ sets of size $2$ . Show that the number of factors is equal to $(2k-1)!!=1\cdot 3\cdot 5\cdots(2k-1)$ . (b) Show that a permutation of $X$ interchanges some $k$ -subset with its complement if and only if all its cycles have even length. Prove that the number of such permutations is $((2k - 1)!!)^2$ . [HINT: any pair of factors defines a partition of $X$ into a disjoint union of cycles, and conversely. The correspondence is not one-to-one, but the non-bijectiveness exactly balances.] I was able to solve the first part, but I am stuck in the second part. Specifically, I don't understand the hint: how can two factors define a partition of $X$ into a disjoint union of cycles? Can anyone shed some light on it? Somehow one should be able to show that this defines a bijection, which i
The hint isn't needed to establish the first sentence of part (b). For that, consider the case where all cycles of a given permutation have even length; within each cycle, color the elements "red" and "blue" (however you like) in alternating fashion. It is then clear that the permutation takes the red set to the blue set and vice versa. Conversely, if the permutation swaps a $k$ -element "red" set with a $k$ -element "blue" set, then the same is evidently true within each cycle of the permutation: the cycle takes red elements to blue ones and vice versa. Therefore each cycle must be of even length. That you found the hint mysterious is completely understandable. I did too at first. But the basic idea should be this: given a permutation $\sigma: X \to X$ all of whose cycles have even length, draw an arrow from $x$ to $\sigma(x)$ for every $x \in X$ , then pick a representative element in each cycle, and color the arrows alternately red and blue (say you color each arrow out of a represe
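The first paragraph's argument can be sketched in code (my own): given a permutation whose cycles all have even length, 2-color alternately within each cycle and check that the permutation swaps the two color classes.

```python
# Alternating 2-coloring within each cycle of a permutation.
# For even-length cycles, the permutation maps "red" to "blue" and back.
def alternating_coloring(perm):
    """perm: list where perm[i] is the image of i. Returns (red, blue) sets."""
    color = {}
    for start in range(len(perm)):
        if start in color:
            continue
        x, c = start, 0
        while x not in color:       # walk the cycle, alternating colors
            color[x] = c
            x, c = perm[x], 1 - c
    red = {x for x, c in color.items() if c == 0}
    blue = {x for x, c in color.items() if c == 1}
    return red, blue

# Example: cycles (0 1 2 3) and (4 5), both of even length.
perm = [1, 2, 3, 0, 5, 4]
red, blue = alternating_coloring(perm)
assert {perm[x] for x in red} == blue and {perm[x] for x in blue} == red
print(sorted(red), sorted(blue))
```

For an odd-length cycle the alternation would fail to close up, which is exactly why even cycle lengths are needed.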
|combinatorics|permutations|combinatorial-proofs|
0
Example of non isomorphic field extensions
What is an example of non isomorphic field extensions? I would like such a non example as I think it would explain the significance of the commutativity of the square condition in an isomorphism of field extensions $L/K\cong M/N$ where $\mu$ , $\sigma$ are isomorphisms:
Here's another example where the distinction is often practically important. If $K$ is a field of characteristic $p$ , then $K^{1/p}$ ( $p$ -th roots taken in an algebraic closure) is abstractly isomorphic to $K$ via the $p$ -th power Frobenius map $K^{1/p} \to K$ . But if $K$ is not perfect then the inclusion $K \hookrightarrow K^{1/p}$ is not an isomorphism.
|field-theory|examples-counterexamples|extension-field|
0
[convex optimization]Does there exist a function h such that its composition with a non-convex function g, h(g(x)), is a convex function?
I have a question regarding convex functions and compositions that I'm hoping someone could provide some insight on. For a given non-convex function g(x), I'm wondering if it's possible to find another function h such that the composition h(g(x)) results in a convex function. In other words, does there exist a function h that can 'convexify' a non-convex function g when composed together? I've been studying convex analysis and optimization, but I'm still a bit uncertain about the conditions and techniques for constructing such a function h, if it exists. Any guidance, references, or insights from the knowledgeable members here would be greatly appreciated. Thank you in advance for your time and help!
The answer is, technically, yes. As I hinted at in my comment, making $h$ constant satisfies the conditions. Given any $g$ , and any constant function $h$ (which is convex), $h \circ g$ is constant and hence convex. However, in the realm of convex-optimization , this is likely unhelpful. The unstated spirit of this question is one of convexification of optimisation problems. Can we take a non-convex objective function, compose it with a convex function, and get a nice convex problem, the solution of which tells us something about the solution of the original problem? Zim's comment is a reasonable attempt to solidify this problem: what if we insisted that the minimisers of $h \circ g$ are precisely the minimisers of $g$ ? This, unfortunately, doesn't work either: the set of minimisers of a convex function is convex, so if $g$ has a non-convex set of minimisers, then $h \circ g$ must have a different set of minimisers. Perhaps, instead of insisting that $\operatorname*{argmin} g = \operat
|analysis|convex-analysis|convex-optimization|
0
Show that $\mathcal L_X g_{ij} = \nabla_i X_j + \nabla_j X_i$
I’m trying to understand equation (1.8) on p. 4 of Chow et al.’s “The Ricci flow: techniques and applications”. There, the authors say that, using indices, the equation $$ -2\operatorname{Rc}(g) = \epsilon g + \mathcal L_X g$$ becomes $$ -2R_{ij} = \nabla_i X_j + \nabla_j X_i + \epsilon g_{ij}. \tag{1.8}$$ But how come is $\mathcal L_X g_{ij} = \nabla_i X_j + \nabla_j X_i$ ?
This question has already been asked several times (in different phrasings), e.g. here and here . The best answer in my opinion is this one: https://math.stackexchange.com/a/3189815/1060681 It's from page 14 of https://web.math.princeton.edu/~nsher/ricciflow.pdf .
|differential-geometry|ricci-flow|
1
Why must an automorphism of $Z_n$ be of the form $x \mapsto x^a$?
The proof that the automorphism group of $Z_n$ is isomorphic to $(\mathbb Z/n\mathbb Z)^\times$ in Dummit and Foote uses the fact that if $\varphi \in \operatorname{Aut}(Z_n)$ then $\varphi(x) = x^a$ for some integer $a \in \mathbb Z$ . I can understand that this is the case because the isomorphism must send generators to generators. It seems (intuitively) that any other mapping of generators to generators won't preserve the group structure. But I don't feel comfortable with this rigorously yet; is there a proof for why any automorphism must be of the form $\varphi(x) = x^a$ ?
Well, since $\Bbb Z_n$ is cyclic, we can take $g$ to be a generator. Then any homomorphism is determined by where you send $g$ (because it generates), so $\varphi(g) = g^a$ for some $a$ . And any $x\in \Bbb Z_n$ is of the form $g^k$ for some $k$ (again because $g$ is a generator), so $\varphi(x) = \varphi(g^k) = \varphi(g)^k = (g^a)^k = (g^k)^a = x^a$ . When the dust clears, the rule is just $x\mapsto x^a.$
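A brute-force illustration (my own sketch, written additively: in $\Bbb Z_n = \Bbb Z/n\Bbb Z$ the rule $x\mapsto x^a$ becomes $x\mapsto ax \bmod n$ ): every such map is a homomorphism, it is bijective exactly when $\gcd(a,n)=1$ , and these are all the automorphisms, $\varphi(n)$ of them.

```python
# Enumerate the automorphisms of Z_n of the form x -> a*x mod n and
# confirm they are exactly the a with gcd(a, n) = 1.
from math import gcd

def automorphisms(n):
    """All a in {1..n-1} for which x -> a*x mod n is an automorphism of Z_n."""
    auts = []
    for a in range(1, n):
        f = [(a * x) % n for x in range(n)]
        is_bijective = len(set(f)) == n
        is_hom = all(f[(x + y) % n] == (f[x] + f[y]) % n
                     for x in range(n) for y in range(n))
        if is_bijective and is_hom:
            auts.append(a)
    return auts

n = 12
auts = automorphisms(n)
phi = sum(1 for a in range(1, n) if gcd(a, n) == 1)
print(auts, phi)  # [1, 5, 7, 11] 4
```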
|group-theory|group-actions|automorphism-group|
1
How to prove $\sqrt{99} + \sqrt{101} < 2\sqrt{100}$ without a calculator
This is one of my extension sheet questions and I was really stumped on how to approach it. $\sqrt{99} + \sqrt{101} < 2\sqrt{100}$ First I had approached it by looking at smaller and larger square roots, however the numbers are much too small to be practical in an exam. I next have done the following: Let $n = 10$ . Then $\sqrt{n^2-1} + \sqrt{n^2+1} = \sqrt{(n-1)(n+1)} + \sqrt{n^2+1}$ . But this didn't seem to work out to anything meaningful, so I have left it at this. Could anyone see another approach to this question which is probably glaringly obvious?
The sequence $\sqrt{n+1}-\sqrt{n}$ is decreasing, as $$\sqrt{n+1}-\sqrt{n}=\dfrac{1}{\sqrt{n+1}+\sqrt{n}}$$ and the square root is an increasing function. Thus, $$\sqrt{100}-\sqrt{99}>\sqrt{101}-\sqrt{100}\implies \sqrt{99}+\sqrt{101}<2\sqrt{100}$$
|inequality|proof-explanation|radicals|number-comparison|
1
How to prove $\sqrt{99} + \sqrt{101} < 2\sqrt{100}$ without a calculator
This is one of my extension sheet questions and I was really stumped on how to approach it. $\sqrt{99} + \sqrt{101} < 2\sqrt{100}$ First I had approached it by looking at smaller and larger square roots, however the numbers are much too small to be practical in an exam. I next have done the following: Let $n = 10$ . Then $\sqrt{n^2-1} + \sqrt{n^2+1} = \sqrt{(n-1)(n+1)} + \sqrt{n^2+1}$ . But this didn't seem to work out to anything meaningful, so I have left it at this. Could anyone see another approach to this question which is probably glaringly obvious?
Square both sides (both are positive); you'll get $$200+2\sqrt{99}\sqrt{101} < 400,$$ i.e. $\sqrt{99\cdot 101} = \sqrt{9999} < 100 = \sqrt{10000}$ , which is true because $\sqrt{x}$ is an increasing function.
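The squaring argument reduces to exact integer arithmetic, which can be checked in a few lines (my own sketch):

```python
# (sqrt(99) + sqrt(101))^2 = 200 + 2*sqrt(99*101) and (2*sqrt(100))^2 = 400,
# so the claim reduces to 99*101 < 100^2.
assert 99 * 101 == 9999        # (100 - 1)(100 + 1) = 100^2 - 1
assert 9999 < 10000            # hence sqrt(9999) < sqrt(10000) = 100
# so 200 + 2*sqrt(9999) < 200 + 2*100 = 400, giving the original inequality

import math
print(math.sqrt(99) + math.sqrt(101) < 2 * math.sqrt(100))  # True
```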
|inequality|proof-explanation|radicals|number-comparison|
0
How to prove $\sqrt{99} + \sqrt{101} < 2\sqrt{100}$ without a calculator
This is one of my extension sheet questions and I was really stumped on how to approach it. $\sqrt{99} + \sqrt{101} First I had approached it by looking at smaller and larger square roots, however the numbers are much too small to be practical in an exam. I next have done the following: Let $n = 10$ $\sqrt{n^2-1}$ + $\sqrt{n^2+1}$ $\sqrt{(n-1)(n+1)}$ + $\sqrt{n^2+1}$ But this didn't seem to workout to anything meaningful, so have left it at this. Could anyone see another approach to this question which is probably glaringly obvious?
A little bit of an overkill, but overall another approach - define $f(x)=\sqrt{x}$ . The inequality becomes: $$\frac{f(100)-f(99)}{100-99}=f(100)-f(99)>f(101)-f(100)=\frac{f(101)-f(100)}{101-100}$$ By Lagrange's Mean Value Theorem, the LHS is equal to $f'(c_1)$ for some $c_1\in (99,100)$ and the RHS is equal to $f'(c_2)$ for some $c_2\in (100,101)$ . As $f'(x)=\frac{1}{2\sqrt{x}}$ is strictly decreasing, $f'(c_1)>f'(c_2)$ , giving us the desired inequality.
|inequality|proof-explanation|radicals|number-comparison|
0
Is there a permutation $\sigma(n)$ of $\{0,1,2,\ldots,m-1\},$ such that $\{(n+\sigma(n))\pmod m:0\leq n\leq m-1\}=\{0,1,2,\ldots m-1\}?$
Given $m\in\mathbb{N}$ , is there a permutation $\sigma(n)$ of $\{0,1,2,\ldots,m-1\}$ such that $\{ (n+\sigma(n))\pmod m: 0\leq n \leq m-1 \} = \{0,1,2,\ldots,m-1\}$ ? Based on computation with Python for low numbers, the answer seems to be "yes for prime numbers; no for non-primes", although this might not be true as the results for low numbers (up to $m=9$ ) could be deceptive. However, there is some evidence that the conjecture is true: for $m=3$ there are about $3$ "successes" (successful permutations). For $m=4$ there are no successes. For $m=5$ there are about $10$ successes. For $m=6$ there are no successes. For $m=7$ there are lots of successes. For $m=8$ there are no successes. Edit: for $m=9$ there are lots of successes, so this suggests the answer is "yes for odd, no for even", rather than "yes for primes", which also matches the current only answer. Python code for $m=5$ : import itertools length_five_list=[0, 1, 2, 3,4] permutations_list=list(itertools.permutations(lengt
Such a permutation exists iff $m$ is odd. For any odd $m$ , choose $\sigma(n) = n$ . Then $2$ resulting values being congruent to each other means $$n_1 + \sigma(n_1) \equiv n_2 + \sigma(n_2) \pmod{m} \;\to\; 2(n_1 - n_2) \equiv 0 \pmod{m}$$ Since $m$ is odd, with $0 \le n_1, n_2 \le m - 1 \;\to\; -m \lt n_1 - n_2 \lt m$ , then $m\mid n_1 - n_2 \;\to\; n_1 - n_2 = 0 \;\to\; n_1 = n_2$ . Thus, all of the modulo values are unique and, by the Pigeonhole Principle , must comprise the values between $0$ and $m - 1$ , inclusive. With $m$ being even, assume such a permutation exists. We then get $$\begin{equation}\begin{aligned} \sum_{n=0}^{m-1}(n+\sigma(n)) & \equiv \sum_{n=0}^{m-1}n \pmod{m} \\ 2\left(\frac{(m-1)m}{2}\right) & \equiv \frac{(m-1)m}{2} \pmod{m} \\ (m-1)\left(\frac{m}{2}\right) & \equiv 0 \pmod{m} \end{aligned}\end{equation}$$ This means $m \mid (m-1)\left(\frac{m}{2}\right)$ . However, $\gcd(m-1,m) = 1$ (with $\gcd(m-1,m)=d$ , then $d\mid m-1$ and $d\mid m$ , so $d\mid(m - (m
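Extending the question's own brute force (my own sketch): the identity permutation works for every odd $m$ , matching the first half of the answer, and exhaustive search confirms that no valid permutation exists for small even $m$ .

```python
# Check both halves of the answer: sigma(n) = n works for odd m, and no
# permutation works for even m (verified exhaustively for small m).
from itertools import permutations

def works(m, sigma):
    """Is {(n + sigma[n]) mod m} a complete residue system mod m?"""
    return sorted((n + sigma[n]) % m for n in range(m)) == list(range(m))

for m in (3, 5, 7, 9):
    assert works(m, list(range(m)))          # identity succeeds for odd m

for m in (2, 4, 6):
    assert not any(works(m, s) for s in permutations(range(m)))

print("checked")
```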
|elementary-number-theory|permutations|pigeonhole-principle|
1