Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
If $(a,b,c)$ are the sides of a triangle, is it true that the probability that $a+b > c^{\frac{3}{c}}$ is $\zeta(2)-1$?
Let $(a,b,c)$ be the sides of a triangle inscribed in a unit circle such that the vertices of the triangle are distributed uniformly on the circumference. The solution of this question unexpectedly showed that the simple triangle inequality $a+b \ge c$ is equivalent to the famous Basel problem $$ \zeta(2) = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}. $$ Motivated by this, I was exploring whether there are other interesting relationships between the Riemann zeta function and the triangle inequality, and I observed numerically that $$ P\left(a+b \ge c^{\frac{3}{c}}\right) = \zeta(2) - 1. $$ Can this be proved or disproved? Julia code:

step = 10^7
target = step
count = 0
f = 0
while 1 > 0
    count += 1
    angles = rand(3) .* 2 * π
    vertices_x = cos.(angles)
    vertices_y = sin.(angles)
    push!(vertices_x, vertices_x[1])
    push!(vertices_y, vertices_y[1])
    x_diff = diff(vertices_x)
    y_diff = diff(vertices_y)
    side_lengths = sqrt.(x_diff.^2 + y_diff.^2)
    a = side_lengths[1]
    # (the code is cut off here in the original post; presumably b and c are
    # read off similarly, the condition a + b >= c^(3/c) is tallied in f, and
    # the running frequency f / count is printed every `step` iterations)
The answer appears to be no. The four significant figures quoted in the question are correct, but the fifth figure of the actual probability is smaller than it would be for the Basel constant minus 1. To six figures: $0.644904$ from the calculation below versus $0.644934$ for $\pi^2/6-1$ . As noted in the comments, the problem is a bit wonky because a triangle has three sides, and whether the proposed inequality holds for a given triangle may depend on which side is chosen as $c$ . A Monte Carlo-type simulation reveals that the inequality always holds when the minimum side is picked as $c$ , but fails almost $75$ % of the time when the maximum side is picked. To get a probability close to the claimed $\pi^2/6-1$ requires picking the side at random with equal probability ( $1/3$ ) for each side of any triangle. The problem is then equivalent to drawing a chord through the unit circle with its endpoints uniformly distributed around the circumference, then selecting a third point with the same uniform distribution to co
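For reference, here is a minimal Monte Carlo sketch of the simulation described above, written in Python rather than the question's Julia (the function name and sample count are my own choices):

```python
import math
import random

def estimate_probability(trials, rng):
    """Estimate P(a + b >= c^(3/c)) for a triangle inscribed in the unit
    circle, with the side playing the role of c chosen uniformly at random."""
    hits = 0
    for _ in range(trials):
        # three vertices uniform on the circumference
        angles = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(3)]
        pts = [(math.cos(t), math.sin(t)) for t in angles]
        sides = [math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3)]
        # pick c at random with probability 1/3 for each side
        k = rng.randrange(3)
        c = sides[k]
        a, b = (sides[i] for i in range(3) if i != k)
        if a + b >= c ** (3.0 / c):
            hits += 1
    return hits / trials

p = estimate_probability(200_000, random.Random(0))
```

With this random choice of $c$ the estimate lands near $0.6449$, consistent with the six-figure value quoted above.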
|integration|geometry|number-theory|inequality|triangles|
1
Find the generating function for $a_n=\sum_{k=0}^{n} 3^{k}(n-k), n\geq0$.
For every integer $n\geq 0$ we define $a_n=\sum_{k=0}^{n} 3^{k}(n-k)$ . Find the generating function of the sequence $(a_n)_{n=0}^\infty$ and write down the answer without series. Attempt: $G(x)=\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n=\sum_{n=0}^{\infty} (n\sum_{k=0}^{n} 3^{k}-\sum_{k=0}^{n}k3^k)x^n=\sum_{n=0}^{\infty}(\frac{1}{2}n(3^{n+1}-1)-\sum_{k=0}^{n}k3^{k})x^n=...?$ where I took into account that $1+3+...+3^n=\frac{1}{2}(3^{n+1}-1)$ . How to continue? WolframAlpha returns: $\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n$ = $-\frac{x}{(x-1)^2 (3x-1)}$ . How do I get to this?
An alternative method to the discussion in the comments: We can reindex the sum as follows: $$\begin{align}\sum_{n=0}^{\infty} \left(\sum_{k=0}^{n} 3^{k}(n-k)\right)x^n &= \sum_{n=0}^{\infty} \left(\sum_{k=0}^{n} 3^{n-k}k\right)x^n \\ &= \sum_{n=0}^{\infty} 3^n\left(\sum_{k=0}^{n} (1/3)^kk\right)x^n \\ &= \sum_{n=0}^{\infty} 3^n\left(\frac{3^{n+1} - 2n-3}{4\cdot 3^n}\right)x^n \\ &= \sum_{n=0}^{\infty} \frac{3^{n+1} - 2n-3}{4}x^n \end{align}$$ To avoid long calculations (I'm too lazy for this), I leave the rest to the OP. When you split it into three sums, two should be standard GP, the remaining one can be salvaged by using derivatives.
|combinatorics|discrete-mathematics|generating-functions|
0
Two Limits involving an integral.
Let $$I_n = \int_{0}^{1}\frac{x^{n-1}-x^{n+p-1}}{(1+x^{n})(1+x^{n+p})} \, \mathrm{d}x,$$ where $p$ is a positive natural number. Show that: $$\lim_{n \to \infty} n^2 I_n=p\ln{2} \qquad\text{and}\qquad \lim_{n \to \infty} \left({\frac{n^2I_n}{p\ln{2}}}\right)^n=e^{-p} $$ For the first one I substituted $t=x^n$ and we get: $$ n^2I_n=n\int_{0}^{1}\frac{1-x^{\frac{p}{n}}}{(1+x)(1+x^{1+\frac{p}{n}})} \, \mathrm{d}x .$$ Now I argue that $ x^{1+\frac{p}{n}} $ converges uniformly to $x$ on $[0,1]$ so we can substitute it with $x$ and in the limit they will be equal. I do have a rigorous proof for that using the definition of uniform convergence and then making an inequality involving $I_n$ , but it's just too long for this post and I am also not sure it's correct. So please tell me if such an argument is possible. In the end we have: $$ n^2I_n = n\int_{0}^{1}\frac{1-x^{\frac{p}{n}}}{(1+x)^2} \, \mathrm{d}x $$ Now $\frac{1}{(1+x)^2}$ is the derivative of $-\frac{1}{1+x}$ , i.e. the power series $\sum_{k\ge0} (-1)^k(k+1) x^k$ , hence after multipl
We get \begin{align*} I_n &= \int_{0}^{1} \left( \frac{x^{n-1}}{1+x^n} - \frac{x^{n+p-1}}{1+x^{n+p}} \right) \, \mathrm{d}x \\ &= \left[ \frac{1}{n}\log(1+x^n) - \frac{1}{n+p}\log(1 + x^{n+p}) \right]_{0}^{1} \\ &= \frac{p}{n(n+p)}\log 2. \end{align*} This immediately proves that $n^2 I_n \to p \log 2$ . Moreover, $$ \left( \frac{n^2 I_n}{p \log 2} \right)^n = \frac{1}{(1+p/n)^n} \to \frac{1}{e^p} = e^{-p}. $$
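The closed form $I_n = \frac{p}{n(n+p)}\log 2$ is easy to sanity-check numerically; a small Python sketch using the trapezoidal rule (the parameter values are arbitrary):

```python
import math

def I(n, p, steps=200_000):
    """Trapezoidal approximation of I_n = ∫₀¹ (x^(n-1) - x^(n+p-1)) /
    ((1+x^n)(1+x^(n+p))) dx."""
    h = 1.0 / steps
    def f(x):
        return (x**(n - 1) - x**(n + p - 1)) / ((1 + x**n) * (1 + x**(n + p)))
    total = 0.5 * (f(0.0) + f(1.0))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

n, p = 5, 2
closed_form = p * math.log(2) / (n * (n + p))
numeric = I(n, p)
```

The numerical and closed-form values agree to high precision, confirming the antiderivative computation above.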
|real-analysis|integration|limits|
1
Natural domain of rational functions
I had an argument about the natural domain of rational functions. Consider $f(x) = \frac{P(x)}{Q(x)},$ where $P$ and $Q$ are (real) polynomials. Is the natural domain of $f$ the set $\{ x \in \mathbb{R}: Q(x) \neq 0\}$ ? Or should we first simplify the fraction? For instance, let us consider $P = Q = x$ , so that $$f(x) = \frac{x}{x}. $$ I maintain that the domain should exclude $x = 0$ , but my friend argues that the function is $f \equiv 1$ on $\mathbb{R}$ , and thus $0$ belongs to the natural domain of the function. Who is right?
Typically, when one says $$f(x)=R(x)/Q(x)$$ is a rational function, they mean that the fraction is in reduced form, i.e. $R(x)$ and $Q(x)$ have no common factor. In such a case, the largest domain that makes sense (what you would call a "natural domain") is $\{x \in \mathbb{R} : Q(x) \neq 0\}$ . Thus, it makes sense to reduce the fraction before thinking about the domain.
|real-analysis|notation|
0
Find the generating function for $a_n=\sum_{k=0}^{n} 3^{k}(n-k), n\geq0$.
For every integer $n\geq 0$ we define $a_n=\sum_{k=0}^{n} 3^{k}(n-k)$ . Find the generating function of the sequence $(a_n)_{n=0}^\infty$ and write down the answer without series. Attempt: $G(x)=\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n=\sum_{n=0}^{\infty} (n\sum_{k=0}^{n} 3^{k}-\sum_{k=0}^{n}k3^k)x^n=\sum_{n=0}^{\infty}(\frac{1}{2}n(3^{n+1}-1)-\sum_{k=0}^{n}k3^{k})x^n=...?$ where I took into account that $1+3+...+3^n=\frac{1}{2}(3^{n+1}-1)$ . How to continue? WolframAlpha returns: $\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n$ = $-\frac{x}{(x-1)^2 (3x-1)}$ . How do I get to this?
We can directly see that $a_n$ is the convolution of the sequences $b_n = 3^n$ and $c_n = n$ . Therefore, the generating function for $a_n$ is the product of the generating functions for $b_n$ and $c_n$ , which are $\frac{1}{1-3x}$ and $\frac{x}{(1-x)^2}$ , respectively.
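The claimed generating function can be checked numerically: the denominator $(1-x)^2(1-3x)=1-5x+7x^2-3x^3$ forces the recurrence $a_n-5a_{n-1}+7a_{n-2}-3a_{n-3}=[n=1]$, which the directly computed $a_n$ must satisfy. A small Python sketch (helper names are mine):

```python
def a(n):
    """Direct computation of a_n = sum_{k=0}^n 3^k (n-k)."""
    return sum(3**k * (n - k) for k in range(n + 1))

def lhs(n):
    """a_n - 5 a_{n-1} + 7 a_{n-2} - 3 a_{n-3}, with a_m = 0 for m < 0.
    If A(x) = x / ((1-x)^2 (1-3x)), this must equal the coefficient of x^n
    in (1 - 5x + 7x^2 - 3x^3) A(x) = x, i.e. 1 for n = 1 and 0 otherwise."""
    def safe(m):
        return a(m) if m >= 0 else 0
    return safe(n) - 5 * safe(n - 1) + 7 * safe(n - 2) - 3 * safe(n - 3)
```

Running `lhs(n)` for small $n$ gives $0,1,0,0,\dots$, exactly the coefficients of $x$, consistent with the convolution argument.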
|combinatorics|discrete-mathematics|generating-functions|
1
Design two independent normal samples to test $H_0: \mu_1=\mu_2$
Given independent random samples $(X_1,...,X_m)$ and $(Y_1,...,Y_n)$ , respectively, from the following distributions of $X$ and $Y: X\sim N(\mu_1, \sigma^2)$ and $Y \sim N(\mu_2,\sigma^2)$ , consider the problem of testing the null hypothesis $H_0: \mu_1=\mu_2$ against the alternative $H_1: \mu_1 \neq \mu_2$ with unknown $\sigma^2$ . a. Formulate the underlying testing problem as that of testing one of the parameters in a multiparameter exponential family, having expressed the family explicitly in terms of all the parameters and the corresponding statistics. b. Give the formula of the UMPU test at level α, in its conditional form, involving these statistics. c. Describe how this test can be stated equivalently as an unconditional test. What are the ultimate test statistic and the rejection region (in terms of a known distribution)? My attempt: Suppose $H_0: \mu_1-\mu_2=0$ . The joint density of $X=(X_1,...,X_m)$ and $Y=(Y_1,...,Y_n)$ can be expressed in the form of a 3-parameter exponential family
Considering that (see here ) $$\frac{\bar{X}-\bar{Y}-(\mu_1 -\mu_2)}{S_p\sqrt{\frac{1}{m}+\frac{1}{n}}}\sim T_{m+n-2}$$ with $S^2_p=\frac{(m-1)S_1^2+(n-1)S_2^2}{m+n-2}$ . Hence, for $$\color{blue}{H_0: \mu_1- \mu_2= \theta}$$ you can use the test with the following test statistic: $$\color{blue}{T = \frac{\bar{X}-\bar{Y}-\theta}{S_p\sqrt{\frac{1}{m}+\frac{1}{n}}}}$$ and critical region: $$\color{blue}{C= \mathbb R \setminus (-t_{m+n-2, \frac{\alpha}{2}}, t_{m+n-2, \frac{\alpha}{2}})}$$ where $\alpha$ is the significance level of the test.
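A small sketch of the unconditional test statistic in Python (using the standard library's `statistics` module; the data values are made up for illustration):

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def pooled_t(x, y, theta=0.0):
    """Two-sample pooled t statistic for H0: mu1 - mu2 = theta."""
    m, n = len(x), len(y)
    # pooled variance S_p^2 = ((m-1) S_1^2 + (n-1) S_2^2) / (m + n - 2)
    sp2 = ((m - 1) * variance(x) + (n - 1) * variance(y)) / (m + n - 2)
    return (mean(x) - mean(y) - theta) / math.sqrt(sp2 * (1 / m + 1 / n))

t = pooled_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

One would then compare $|t|$ with the $t_{m+n-2,\alpha/2}$ quantile to decide rejection.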
|statistics|hypothesis-testing|
1
Given $f$ holomorphic on $B_1(0)$ and $\max_{z \in C_r(0)} |f(z)| \to 0$ as $r \to 1$. Show $f$ is identically $0$.
I have tried to use the Cauchy integral formula and the deformation theorem, but that got nowhere: I got $|f(z_0)| \le \max_{z \in C_{r'}(z_0)}|f(z)|$ where $r' > 0$ is such that $B_{r'}(z_0) \subset B_1(0)$ . Note: $C_r(x)$ is notation for the circle of radius $r$ and centre $x$ in the complex plane. Let me outline my attempt further. I have attempted to show that $\forall \varepsilon > 0: \forall w \in B_1(0): |f(w)| < \varepsilon$ . I let $\varepsilon$ and $w$ be free, and I evaluate $f(w)$ on some $C_r(0)$ such that $B_{|w|}(0) \subset \mathrm{Int}(C_r(0))$ . From the Cauchy integral formula, I have $$ f(w) = \frac{1}{2i\pi}\int_{C_r(0)}\frac{f(z)}{z-w}\,dz $$ from which I tried to deform into something that would cancel out the denominator, but that led me nowhere. I have also tried just straight up bashing it, but that went nowhere too.
For $0 < r < 1$ and $z_0\in B_r(0)$ , Cauchy's integral formula yields $$f(z_0)=\frac{1}{2\pi i}\int\limits_{C_r(0)}\frac{f(z)}{z-z_0}\,dz$$ Taking absolute values and using the triangle inequality for integrals, we obtain \begin{align*} |f(z_0)|&\leq\frac{1}{2\pi }\int\limits_{C_r(0)}\frac{|f(z)|}{|z-z_0|}\,|dz|\\ &\leq \frac{1}{2\pi }\int\limits_{C_r(0)}\frac{\max\limits_{\zeta\in C_r(0)}|f(\zeta)|}{r-|z_0|}\,|dz|\\ &=\frac{r}{r-|z_0|}\max\limits_{\zeta\in C_r(0)}|f(\zeta)| \end{align*} Taking $r\to 1^-$ now proves the desired result. Edit: Changed $r$ in denominator to $r-|z_0|$ .
|complex-analysis|cauchy-integral-formula|
0
Find the maximum value of $m^2+n^2$ if $(m^2-mn-n^2)^2=1$
Given that the integers $m$ and $n$ in the set $A=\left\{1,2,3,....,2024\right\}$ satisfy $(m^2-mn-n^2)^2=1$ , find the maximum possible value of $m^2+n^2$ . My effort: We have $m^2-mn-n^2=\pm 1$ . Case $1.$ If $m^2-mn-n^2=1 \Rightarrow m^2-mn-(n^2+1)=0$ . Now the discriminant is $$D=n^2+4(n^2+1)=k^2, \quad k \in \mathbb{Z}$$ $$ \implies 5n^2+4=k^2$$ I am not able to proceed now. Same problem with Case $2$.
If $(m^2-mn-n^2)^2=1$ then $$(2m-n)^2-5n^2=4m^2-4mn-4n^2=\pm4,$$ which is a (pair of) Pell equations. For $(m,n)=(1,0)$ and $(m,n)=(0,1)$ we have $$(2\cdot1-0)^2-5\cdot0^2=4\qquad\text{ and }\qquad (2\cdot1-1)^2-5\cdot(-1)^2=-4,$$ and the fundamental solution is $2+\sqrt{5}$ . A few quick computations show that \begin{eqnarray} 2\cdot(2+\sqrt{5})^5\ \ &=&\ \ 1364+\ \ 610\sqrt{5},\\ 2\cdot(2+\sqrt{5})^6\ \ &=&\ \ 5778+2584\sqrt{5},\\ (1+\sqrt{5})\cdot(2+\sqrt{5})^5\ \ &=&\ \ 2207+\ \ 987\sqrt{5},\\ (1+\sqrt{5})\cdot(2+\sqrt{5})^6\ \ &=&\ \ 9349+4181\sqrt{5},\\ (1+\sqrt{5})\cdot(2+\sqrt{5})^{-5}&=&\ \ \ \ 843-\ \ 377\sqrt{5},\\ (1+\sqrt{5})\cdot(2+\sqrt{5})^{-6}&=&-3571+1597\sqrt{5},\\ \end{eqnarray} and so the maximum is at $(m,n)=(1597,987)$ .
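The claimed maximum can also be confirmed by exhaustive search over $A$ (Python; slow but with no theory assumed):

```python
def best_pair(limit=2024):
    """Search all (m, n) in {1,...,limit}^2 with (m^2 - m n - n^2)^2 = 1
    and return (m^2 + n^2, m, n) for the pair maximising m^2 + n^2."""
    best = None
    for m in range(1, limit + 1):
        mm = m * m
        for n in range(1, limit + 1):
            t = mm - m * n - n * n
            if t * t == 1 and (best is None or mm + n * n > best[0]):
                best = (mm + n * n, m, n)
    return best

value, m, n = best_pair()
```

The search returns $(m,n)=(1597,987)$ with $m^2+n^2=3524578$; the solutions it finds are exactly pairs of consecutive Fibonacci numbers, matching the Pell-equation analysis above.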
|elementary-number-theory|contest-math|quadratics|
0
Suppose that a and b are integers, a $\equiv$ 11 (mod 19) and b $\equiv$ 3 (mod 19). Find the integer c such that c $\equiv$ a - b (mod 19).
With $0 \le c \le 18$ . I can recall the basic modulo arithmetic properties, but I can't see how to apply them. Any help would be appreciated.
$$a\equiv11\pmod{19},\qquad b\equiv3\pmod{19}\\\Rightarrow a-b\equiv8\pmod{19}\Rightarrow c\equiv8\pmod{19}$$ So $c$ is any number of the form $19k+8$ ; with the restriction $0 \le c \le 18$ , that gives $c=8$ .
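In code this is just a modular subtraction; Python's `%` operator already returns a representative in $[0,19)$:

```python
a, b = 11, 3          # any integers congruent to 11 and 3 mod 19 work
c = (a - b) % 19      # -> 8
# also correct when the difference is negative:
d = (3 - 11) % 19     # -> 11, since -8 ≡ 11 (mod 19)
```

The same answer results from any representatives, e.g. `(30 - 22) % 19` is also `8`.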
|modular-arithmetic|
0
Exploring the Graphs of a Prime Number Integer Sequence: Seeking Insights
I am an engineering student and when I was doing some work on data visualisation I stumbled across an integer sequence after watching a video about sequences that produce interesting graphs. I submitted the sequence to the On-Line Encyclopedia of Integer Sequences as A328225 . The sequence is as follows: a(1) = 1, a(n) = a(n-1) - prime(prime(n)) - prime(n-1) if this produces a positive integer not yet in the sequence, otherwise a(n) = a(n-1) + prime(prime(n)) - prime(n-1). This sequence produces the following graphs (up to n = 1000, n = 10000 and n = 6e6 respectively): Graph of sequence up to n = 1000 ; Graph of sequence up to n = 10000 ; Graph of sequence up to n = 6e6 . I would love to get an understanding of the underlying principles that give rise to the curves observed in the graphs. Additionally, I'm intrigued by the apparent upper limit observed along the y-axis. I would greatly appreciate any insights, explanations, or conjectures regarding the origins of these curves and the factors con
This is not an answer, per se , but rather a comment that requires a couple of figures. I ran your Matlab code out to $m=10^9$ with just over five million primes (computation time was 4086 s). I'm seeing patterns in the structure that are much more complex than those you are seeing with the 10,000 primes. The first image below shows the complete field, albeit in log-log coordinates. The second image shows a detail of the structure in the vicinity of $n=10^6$ . This behavior is present along the entire domain of $n$ . To me, it's reminiscent of chaotic behavior. This prompted me to create the third figure below, whose purpose is to show the irregular spacing of the primes along a straight line; some tight clusters, some open spaces. The last figure shows similar behavior with the Ulam spiral. This is beyond my ken, but the point is that there may be an underlying structure.
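For readers who want to reproduce the data, here is a sketch of the sequence in Python, implementing the recurrence exactly as stated in the question (the helper names and sieve bound are my own):

```python
def primes_up_to(bound):
    """Simple sieve of Eratosthenes returning the list of primes <= bound."""
    sieve = [True] * (bound + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def a328225(terms, prime_bound=100_000):
    """First `terms` values: a(1) = 1 and
    a(n) = a(n-1) - prime(prime(n)) - prime(n-1) if positive and new,
    else a(n) = a(n-1) + prime(prime(n)) - prime(n-1)."""
    p = primes_up_to(prime_bound)      # p[k-1] is prime(k)
    a, seen = [1], {1}
    for n in range(2, terms + 1):
        step = p[p[n - 1] - 1]         # prime(prime(n))
        cand = a[-1] - step - p[n - 2]  # minus branch
        if not (cand > 0 and cand not in seen):
            cand = a[-1] + step - p[n - 2]  # plus branch
        a.append(cand)
        seen.add(cand)
    return a
```

By hand, the first terms come out as $1, 4, 12, 24, 48, 78, 6, \dots$ under this reading of the definition.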
|sequences-and-series|prime-numbers|
0
solutions to the equation $(2m^2-1)^2=2n^2-1$, where $m$, $n$ are positive integers
I'm studying the equation $(2m^2-1)^2=2n^2-1$ ( $\ast$ ), where $m$ , $n$ are positive integers. It is known (e.g. via Wolfram Alpha) that $m$ can only be $1$ or $2$ . Now I want to prove that result. This is my attempt: If a prime $p \mid \gcd(m,n)$ : ( $\ast$ ) is equivalent to $(2m^2-1)(m^2-1)=(n-m)(n+m)$ ; $p$ divides the RHS but not the LHS, which is impossible. So $\gcd(m,n)=1$ . Reducing ( $\ast$ ) modulo $8$ , we find $n$ must be odd. This is as far as I got. Can someone help me?
COMMENT ( just for fun ).- Your equation has solutions modulo every integer, because if $f(m,n)=2m^4-2m^2-n^2+1$ then $f(2,5)=f(1,1)=0$ , so no "local" method is useful (unless one adds restrictions; but for the first primes, up to $19$ , there are other solutions, and for greater primes brute force becomes hard). Because of the squares we may restrict to positive integers. The problem is difficult. We have $(2m^2-1)^2=2n^2-1\iff 2m^4-2m^2-n^2+1=0$ ; solving the quadratic in $m^2$ we get $$m^2=\frac{1+\sqrt{2n^2-1}}{2}\hspace1cm(1)$$ which imposes, for some integer $x$ , $$2n^2-1=x^2\hspace1cm(2)$$ Equation $(2)$ has infinitely many solutions given by $$x=-\frac12[(1+\sqrt2)(3-2\sqrt2)^k+(1-\sqrt2)(3+2\sqrt2)^k]; \space k\ge0$$ for example, for $k=1,2,3,4,5$ we have respectively $x=1,7,41,239,1393$ , which in equation $(1)$ gives $m^2=1,4,21,120,697$ respectively, so only the first two values give a square. We get the two solutions $(m,n)=(1,1),(2,5)$ given by the O.P. Consequently a way of proving t
|diophantine-equations|
0
Counting problem to find maximum number of possible passwords
Let S = {a--z, 0--9} be the set of characters (lower case a through z) and the digits 0 through 9. I need to form a password of length 8 in which two of them must be digits. How many possible passwords can I create? Here is what I came up with, but I'm not positive it is correct:

p2 = 8C2 * 10^2 * 26^6
p3 = 8C3 * 10^3 * 26^5
p4 = 8C4 * 10^4 * 26^4
p5 = 8C5 * 10^5 * 26^3
p6 = 8C6 * 10^6 * 26^2
p7 = 8C7 * 10^7 * 26^1
p8 = 8C8 * 10^8 * 26^0

and then add all the results up and that's the answer?
The total number of passwords with no constraints is $36^{8}$ . The number of passwords with no digits is $26^8$ . Similarly, the number of passwords with exactly one digit is ${8 \choose 1} \cdot 10 \cdot 26^7$ . Therefore, the total number of possible passwords is $36^{8}-26^8-{8 \choose 1} \cdot 10 \cdot 26^7$ .
|combinatorics|combinations|
0
Counting problem to find maximum number of possible passwords
Let S = {a--z, 0--9} be the set of characters (lower case a through z) and the digits 0 through 9. I need to form a password of length 8 in which two of them must be digits. How many possible passwords can I create? Here is what I came up with, but I'm not positive it is correct:

p2 = 8C2 * 10^2 * 26^6
p3 = 8C3 * 10^3 * 26^5
p4 = 8C4 * 10^4 * 26^4
p5 = 8C5 * 10^5 * 26^3
p6 = 8C6 * 10^6 * 26^2
p7 = 8C7 * 10^7 * 26^1
p8 = 8C8 * 10^8 * 26^0

and then add all the results up and that's the answer?
Total number of passwords: $36^8$ . Passwords with no digits: $26^8$ . Passwords with exactly one digit: $8\cdot10\cdot26^7$ . Number of passwords with at least two digits: $36^8 -26^8 - 8\cdot10\cdot26^7$ .
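Both counting strategies agree, as a direct check with Python's `math.comb` confirms (summing the "exactly k digits" cases from the question against the complement count above):

```python
from math import comb

# direct count: exactly k of the 8 positions are digits, for k = 2..8
direct = sum(comb(8, k) * 10**k * 26**(8 - k) for k in range(2, 9))

# complement count: all passwords minus "no digits" minus "exactly one digit"
complement = 36**8 - 26**8 - 8 * 10 * 26**7
```

Both expressions evaluate to the same integer, which is the point of the complement argument.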
|combinatorics|combinations|
0
Equivalence between two integrals
I would like to prove the equality of the two integrals $$ I_1 = \int_0^1 \frac{\sqrt{1-x^2}}{(1+x)(1+2x)} dx$$ and $$ I_2 = -\int_0^1 \frac{(-x+2)^2-1 -\sqrt{1-x^2}\sqrt{1-{(-x+1)}^2}}{\left(\sqrt{1-x^2}+\sqrt{1-{(-x+1)}^2}\right )(1-x)(3-2x)} dx.$$ Mathematica can solve this and the solution is for both integrals $$I_{1/2} = \frac{1}{4} [-\pi - 2 \sqrt{3} \log(2 - \sqrt{3})].$$ What I would like is a simple proof of the equality $I_1 = I_2$ , without having to calculate the solution directly. Any help would be appreciated. EDIT: We can multiply the integrand of $I_2$ by $\frac{\sqrt{1-x^2}-\sqrt{1-{(-x+1)}^2}}{\sqrt{1-x^2}-\sqrt{1-{(-x+1)}^2}}$ . After a substitution $u = 1-x$ we get $$I_2 = \int_0^1 \frac{\sqrt{1-x^2}}{(1+x)(1+2x)} \, \frac{(2x-5)(1+x)}{(1-2x)(1-x)} \, dx.$$ Almost there but I still can't conclude.
This is not what you asked for, but it won't fit into a comment, so I am putting it into an answer. At least for integral $I_1$ , it may take less effort to calculate its value directly than to transform it into $I_2$ . For example, we have that $$ I_1=\int_0^1 \frac{\sqrt{1-x^2}}{(1+x)(1+2x)} \, dx = \int_0^1 \frac{1}{1+2x}\sqrt{\frac{1-x}{1+x}} \, dx. $$ Let's try the substitution $u=\dfrac{1-x}{1+x}$ . Then $x=\dfrac{1-u}{1+u}$ and $dx=-\dfrac{2}{(1+u)^2}\,du$ , so $$ \begin{split} I_1&=\int_1^0 \frac{1}{1+2\frac{1-u}{1+u}}\sqrt{u}\left(-\frac{2}{(1+u)^2}\right)du = \int_0^1\frac{2\sqrt{u}}{(1+u)(3-u)}\,du\\ &=\int_0^1\frac{4v^2}{(1+v^2)(3-v^2)}\,dv = \int_0^1 \left(\frac{3}{3-v^2}-\frac{1}{1+v^2}\right)dv\\ &=\left.\left(\sqrt{3}\operatorname{arctanh}\frac{v}{\sqrt{3}}-\arctan v\right)\right|_0^1 =\left.\left(\frac{\sqrt{3}}{2}\ln\left(\frac{\sqrt{3}+v}{\sqrt{3}-v}\right)-\arctan v\right)\right|_0^1\\ &=\frac{\sqrt{3}}{2}\ln\left(\frac{\sqrt{3}+1}{\sqrt{3}-1}\right)-\frac{\pi}{4}=\frac{1}{4}\left[-\pi-2\sqrt{3}\ln(2-\sqrt{3})\right], \end{split} $$ which agrees with the value quoted in the question.
|integration|definite-integrals|
0
Verification of answer in a birthday problem
In the answer here, should the number of ways to pick the groups of triples, pairs and singlets from the 20 people be: $$\frac{20!}{\color{#C00}{3!^2}\,\color{#090}{2!^4}\,\color{#E90}{6!}}$$ since if you were to take this to the extreme and have 20 single people, there would be only 1 way to do it, which would be $$\frac{20!}{20!}$$ I would like to think it is a typo, but I want to make sure my understanding is correct.
I suspect you may have misread the linked answer. If you want the $20$ people to all have different birthdays, the probability calculation is $\dfrac{20!}{1!^{20}}\,\dfrac{365!}{20!\, 345!} \, \dfrac1{365^{20}} \approx 0.58856$ , somewhat more than $\frac12$ as you might expect from the birthday problem. The full calculation for the probability that all $20$ share the same single birthday would be $\dfrac{20!}{20!^1}\,\dfrac{365!}{1!\, 364!} \, \dfrac1{365^{20}}$ which is $\frac{1}{365^{19}}\approx 2\times 10^{-49}$ and very small, again as you would expect.
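The quoted $0.58856$ is easy to reproduce directly as a product (Python):

```python
# probability that 20 uniformly random birthdays (365-day year) are all distinct:
# (365/365) * (364/365) * ... * (346/365)
p_distinct = 1.0
for i in range(20):
    p_distinct *= (365 - i) / 365
```

This product is exactly $\frac{365!}{345!\,365^{20}}$, the same quantity as the factorial expression in the answer above.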
|combinatorics|algebra-precalculus|birthday|
0
Find the center of all circles that touch the $x$-axis and a circle centered at the origin
Given a circle $C$ of radius $1$ centered at the origin, I want to determine the locus of the centers of all circles that touch $C$ and the $x$ -axis. This is the red curve in the following Desmos plot , where the blue circle touches $C$ and the $x$ -axis: Let $P=(\sin\alpha,\cos\alpha)$ be the point where the blue circle touches $C$ . Moving $\ell$ units towards the center of $C$ (the origin) gives the point $Q=(1-\ell)(\sin\alpha,\cos\alpha)$ . Now a circle around $Q$ touches the $x$ -axis when the $y$ -coordinate of $Q$ equals $\ell$ , so that $Q$ has the same distance to $C$ and to $y=0$ : $$ Q_y=(1-\ell)\cos\alpha \stackrel.= \ell \tag1 $$ This equation is solved by $$Q_y=\frac{\cos\alpha}{1+\cos\alpha} \tag2$$ It's also easy to compute the $x$ -coordinate of $Q$ , which yields $Q$ depending on $\alpha$ : $$ Q=Q(\alpha)=\left(\frac{\sin\alpha}{1+\cos\alpha}, \frac{\cos\alpha}{1+\cos\alpha}\right) \tag3 $$ Where I am stuck is to compute $Q_y$ as a function of $Q_x$ , that is find $
With the arcsin substitution you had ended up with: $$Q(z) = \left(\frac z{1+\sqrt{1-z^2}} , \frac{\sqrt{1-z^2}}{1+\sqrt{1-z^2}} \right)\tag6$$ It looks harder than it really is. You just have to press on: $$\begin{aligned} x &= \frac z{1+\sqrt{1-z^2}} \\ \frac z x - 1 &= \sqrt{1-z^2} \\ \frac{z^2}{x^2} - \frac{2 z}{x} + 1 &= 1 - z^2 \\ \left(\frac{1}{x^2} + 1\right)z^2 - \frac{2 z}{x} &= 0 \\ \left(1 + x^2\right)z - 2 x &= 0 \\ z &= \frac{2 x}{1 + x^2} \end{aligned}$$ (Note that $z = 0$ is a spurious solution coming from $z/x - 1 = {\color{red}-}\sqrt{1-z^2}$ , which we can disregard.) Then we evaluate $y$ : $$\begin{aligned} z &= \frac{2 x}{1 + x^2} \\ z^2 &= \frac{4 x^2}{(1 + x^2)^2} \\ 1 - z^2 &= \frac{1 + 2 x^2 + x^4 - 4 x^2}{(1 + x^2)^2} \\ &= \frac{1 - 2 x^2 + x^4}{(1 + x^2)^2} \\ &= \frac{(1 - x^2)^2}{(1 + x^2)^2} \\ \sqrt{1 - z^2} &= \frac{1 - x^2}{1 + x^2} \\ y &= \frac{\sqrt{1-z^2}}{1 + \sqrt{1-z^2}} \\ &= \frac{(1 + x^2) \sqrt{1-z^2}}{1 + x^2 + (1 + x^2) \sqrt{1-z^2}} \\ &= \frac{1 - x^2}{1 + x^2 + (1 - x^2)} = \frac{1 - x^2}{2} \end{aligned}$$ So the locus is the parabola $y=\dfrac{1-x^2}{2}$ .
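As a numerical cross-check, the parametrization $(3)$ from the question does satisfy $Q_y=\frac{1-Q_x^2}{2}$, so the locus is a parabola (Python):

```python
import math

def Q(alpha):
    """Centre of the touching circle, parametrized as in the question:
    Q = (sin(a)/(1+cos(a)), cos(a)/(1+cos(a)))."""
    d = 1 + math.cos(alpha)
    return math.sin(alpha) / d, math.cos(alpha) / d

# residual of y = (1 - x^2)/2 at several parameter values
residuals = []
for alpha in (0.3, 1.0, 1.7, 2.5):
    x, y = Q(alpha)
    residuals.append(abs(y - (1 - x * x) / 2))
```

The residuals vanish to machine precision, consistent with the algebraic identity $Q_x^2 = \frac{1-\cos\alpha}{1+\cos\alpha}$.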
|geometry|inverse-function|locus|moduli-space|
0
Average item cost
So let's say I can open up crates that have balls in them, but the amount of balls received from each crate has different percentage probability, as shown below: 10x balls = 40% chance, 20x balls = 30% chance, 50x balls = 20% chance, 100x balls = 7% chance, 200x balls = 2% chance, 1000x balls = 1% chance. How can I figure out the average ball amount received per crate opened over an infinite amount of openings, or at least a large number opened, so I can get a closely accurate answer?
For your problem, the answer would be: $ 0.4*10+0.3*20+0.2*50+0.07*100+0.02*200+0.01*1000 =\boxed{41}$
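The same number falls out of a quick computation and simulation (Python; the RNG is seeded for reproducibility):

```python
import random

amounts = [10, 20, 50, 100, 200, 1000]
probs = [0.40, 0.30, 0.20, 0.07, 0.02, 0.01]

# exact expected value: sum of amount * probability
expected = sum(a * p for a, p in zip(amounts, probs))

# empirical average over many simulated crate openings
rng = random.Random(1)
draws = rng.choices(amounts, weights=probs, k=100_000)
empirical = sum(draws) / len(draws)
```

By the law of large numbers, the empirical average approaches the expected value $41$ as the number of openings grows.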
|probability|average|percentages|
0
Is there a term for the idea that mathematical objects are defined by their relationships?
In a recent Veritasium video discussing Euclid's Elements, Alex Kontorovich comments that Euclid's definitions of primitive objects (e.g. "A point is that which has no part.") are absurd and lead to an infinite regression of definitions. He goes on to say: You shouldn't have definitions; you should have undefined terms... It's the relationships between the objects that's important, not the definitions of the objects themselves. (timestamped link: https://www.youtube.com/watch?v=lFlu60qs7_4&t=1042s ) Is there an agreed-upon term for the idea that a mathematical object is entirely defined by its relationships to other mathematical objects? I'd like to read more about it, but I don't know what to search for.
In the philosophy of mathematics, the term is structuralism . See here for a survey.
|euclidean-geometry|definition|philosophy|
1
Find the generating function for $a_n=\sum_{k=0}^{n} 3^{k}(n-k), n\geq0$.
For every integer $n\geq 0$ we define $a_n=\sum_{k=0}^{n} 3^{k}(n-k)$ . Find the generating function of the sequence $(a_n)_{n=0}^\infty$ and write down the answer without series. Attempt: $G(x)=\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n=\sum_{n=0}^{\infty} (n\sum_{k=0}^{n} 3^{k}-\sum_{k=0}^{n}k3^k)x^n=\sum_{n=0}^{\infty}(\frac{1}{2}n(3^{n+1}-1)-\sum_{k=0}^{n}k3^{k})x^n=...?$ where I took into account that $1+3+...+3^n=\frac{1}{2}(3^{n+1}-1)$ . How to continue? WolframAlpha returns: $\sum_{n=0}^{\infty} (\sum_{k=0}^{n} 3^{k}(n-k))x^n$ = $-\frac{x}{(x-1)^2 (3x-1)}$ . How do I get to this?
The convolution approach by @DanielSchepler is the simplest way, but here is an explicit derivation that does not rely on recognizing that. \begin{align} \sum_{n=0}^\infty \sum_{k=0}^n 3^k(n-k) x^n &= \sum_{k=0}^\infty \sum_{n=k}^\infty 3^k(n-k) x^n \\ &= \sum_{k=0}^\infty (3x)^k \sum_{n=k}^\infty (n-k) x^{n-k} \\ &= \sum_{k=0}^\infty (3x)^k \sum_{n=0}^\infty n x^n \\ &= \sum_{k=0}^\infty (3x)^k \frac{x}{(1-x)^2} \\ &= \frac{x}{(1-x)^2} \sum_{k=0}^\infty (3x)^k \\ &= \frac{x}{(1-x)^2} \cdot \frac{1}{1-3x} \\ &= \frac{x}{(1-x)^2 (1-3x)} \end{align} Essentially the same steps yield the general convolution result: \begin{align} \sum_{n=0}^\infty a_n x^n &= \sum_{n=0}^\infty \sum_{k=0}^n b_k c_{n-k} x^n \\ &= \sum_{k=0}^\infty \sum_{n=k}^\infty b_k c_{n-k} x^n \\ &= \sum_{k=0}^\infty b_k x^k \sum_{n=k}^\infty c_{n-k} x^{n-k} \\ &= \sum_{k=0}^\infty b_k x^k \sum_{n=0}^\infty c_n x^n \\ &= \sum_{k=0}^\infty b_k x^k C(x) \\ &= C(x) \sum_{k=0}^\infty b_k x^k \\ &= C(x) B(x) \end{align}
|combinatorics|discrete-mathematics|generating-functions|
0
Interpretation of a problem involving Chebyshev's inequality
I found the following problem; I know that I will use Chebyshev's inequality, but I don't understand the "question" of the problem. Precisely, I don't understand the phrase "...fraction of $x_1,\ldots,x_n$ included in the interval...". What does it mean? I'm not looking for a solution of the problem (well, yes, but that is not the point of my question); I'm trying to understand the problem. The problem is from Statistical Inference by V. K. Rohatgi (3.7.28).
In general the fraction of $x_1,\ldots,x_n$ included in set $A$ means the proportion of elements of the list $x_1, \ldots, x_n$ that lie in the set $A$ This is calculated as the number of elements that lie in set $A$ , divided by $n$ : $$\frac{ \#\{i: x_i \in A\}}n.$$ As a hint to the solution to your problem: To apply Chebyshev's inequality, you need a random variable $X$ . Let $X$ be an element chosen uniformly at random from the list $x_1,\ldots, x_n$ .
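In code, the "fraction of $x_1,\ldots,x_n$ in $A$" is just a count divided by $n$; for example, when $A$ is a closed interval (Python, names mine):

```python
def fraction_in_interval(xs, lo, hi):
    """Proportion of the values xs that lie in the closed interval [lo, hi],
    i.e. #{i : lo <= x_i <= hi} / n."""
    return sum(1 for x in xs if lo <= x <= hi) / len(xs)

f = fraction_in_interval([1, 2, 3, 4, 5], 2, 4)  # 3 of 5 values -> 0.6
```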
|probability|statistics|statistical-inference|
1
Companions to Rudin?
I'm starting to read Baby Rudin (Principles of mathematical analysis) now and I wonder whether you know of any companions to it. Another supplementary book would do too. I tried Silvia's notes, but I found them a bit too "logical" so to say. Are they good? What else do you recommend?
I'm a current sophomore that's using Rudin. I found Munkres' Analysis on Manifolds had nice exercises for practicing computations and direct applications of theorems, and the answer keys online are pretty thorough so it's easy to check work. I also really enjoyed reading Pugh's Real Mathematical Analysis for explanations of various topics, as he includes a lot of intuitive contextualization and neat graphics that really ground the reader's understanding. A couple of the others that people mentioned, like Tao and Abbott are great as well!
|analysis|reference-request|soft-question|book-recommendation|
0
Theorem 3.11 (c): Rudin's PMA
I wanted to ask some clarification on one of the proofs in Rudin's PMA, specifically Theorem 3.11 (c). It states that In $\mathbb R^k$ , every Cauchy sequence converges. The proof for it is as follows: Let $\{\mathbf x_n\}$ be a Cauchy sequence in $\mathbb R^k$ . Define $E_N$ to be the set $\mathbf x_N, \mathbf x_{N + 1}, \ldots$ . Then for some $N$ , we have that the diameter of $E_N$ is less than 1. The range of $\{\mathbf x_n\}$ is the union of $E_N$ and the finite set $\{\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_{N - 1}\}$ . Then $\{\mathbf x_n\}$ is bounded. Since every bounded subset of $\mathbb R^k$ has compact closure in $\mathbb R^k$ by Theorem 2.41, then (c) follows from (b). In the bolded part, I have no clue as to how we can just say that there exists some N. Is there some property of Cauchy sequences that I'm missing?
Think about how, after a certain N, you can make any two n > N members of the Cauchy sequence arbitrarily close together.
|real-analysis|metric-spaces|cauchy-sequences|
0
Orientablity of Manifold with only two charts in an atlas
Suppose that $M$ is an $n$-dimensional smooth manifold and there exists an atlas consisting of only two charts $(U,x)$ and $(V,y)$ . If $U\cap V$ is connected, then $M$ is orientable. To prove this, I must show that the transition map $x\circ y^{-1}: y(U\cap V)\to x(U\cap V)$ has a positive Jacobian determinant. That is, if $x=(x_1,...,x_n), y=(y_1,...,y_n)$ , then the determinant of the matrix $(\partial x_i/\partial{y_j})_{i,j}$ is positive. But I am not sure how to show that. Since $U\cap V$ is connected and $\det$ is continuous, it suffices to show that the determinant is positive only at a point $p\in U\cap V$ . Consider the standard orientation given by $dr_1\wedge \cdots\wedge dr_n$ on $\mathbb R^n$ . Let's pull these back to $U$ and $V$ : $\begin{align}x^\ast (dr_1\wedge \cdots\wedge dr_n)&=d(r_1\circ x)\wedge\cdots\wedge d(r_n\circ x)\\ &=dx_1\wedge\cdots\wedge dx_n\\ &=\det\left((\partial x_i/\partial{y_j})_{i,j}\right)dy_1\wedge\cdots\wedge dy_n\\ &=\det\left((\partial x
You can’t get a contradiction. Note you’re asked to prove orientability , not that the specific $\{(U,x),(V,y)\}$ is an oriented atlas (which is a false statement). So, you should instead replace one of the coordinate functions, say $y^1$ with $-y^1$ if you get a negative determinant; then the new pair will have transition with positive determinant. Also, your ‘proof’ doesn’t really make sense. You can’t speak of $x,y$ being orientation preserving/reversing unless you have an orientation on $M$ already. So, you should rephrase your argument: If $\det \frac{\partial x}{\partial y}>0$ , then we’re done. Otherwise, consider the chart $(z^1,\dots, z^n)=(-y^1,y^2,\dots, y^n)$ (to be super precise, one should first check this is a chart which is compatible with the atlas given by $\{(U,x),(V,y)\}$ , i.e belongs to the maximal atlas… but this is almost obvious). Then, $\det\frac{\partial x}{\partial z}>0$ , so the atlas $\{(U,x),(V,z)\}$ provides an orientation.
|differential-geometry|differential-topology|smooth-manifolds|
1
How to show $f\colon R/I\otimes_R A\to A/IA$ is a well-defined $R/I$-module homomorphism
I'm new to tensor products, and am struggling with showing that $f\colon R/I \otimes_R A \to A/IA$ , $f((r+I)\otimes a)=ra+IA$ , is a well-defined $R/I$-module homomorphism, where $I$ is an ideal of the ring $R$ and $A$ is a left $R$-module. Showing scalar multiplication isn't too bad, as $f\big((x+I)\cdot((r+I)\otimes a)\big) = f\big((xr+I)\otimes a\big) = xra + IA = (x+I)(ra+IA) = (x+I)\,f\big((r+I)\otimes a\big)$ , but I am unsure how to show that $f$ respects addition.
See Theorem 4.5 here , where $R$ is a commutative ring. Every time you want to build a linear map out of a tensor product, you should use the universal property of tensor products. It lets you avoid trying to develop some special argument to deal with addition, which you're struggling over. I advise learning how to work with tensor products of modules over a commutative ring before allowing the ring $R$ to be noncommutative and having to bother with left vs. right issues. This topic already has many subtleties even when $R$ is commutative.
|modules|
0
Companions to Rudin?
I'm starting to read Baby Rudin (Principles of mathematical analysis) now and I wonder whether you know of any companions to it. Another supplementary book would do too. I tried Silvia's notes, but I found them a bit too "logical" so to say. Are they good? What else do you recommend?
Pugh's "Real Mathematical Analysis" is basically Rudin 2.0. It has the same chapters with a lot of pictures and discussion. Very good for self study.
|analysis|reference-request|soft-question|book-recommendation|
0
Prove a level set of a function in $\mathbb{R}^4$ is not a smooth manifold
Let $f: \mathbb{R}^4 \rightarrow \mathbb{R}$ be the function $$f(x_1,x_2,x_3,x_4) = x_1^2+x_2^2 - x_3^2 - x_4^2$$ I need to show that $S = f^{-1}(0)$ is not a smooth manifold. I was able to show that $S$ is not an embedded submanifold by identifying the tangent spaces $T_{p} S$ with subspaces of $T_{p} \mathbb{R}^4$ and showing that $T_{\mathbf{0}} S$ is 4-dimensional while $T_p S$ is not. However, I'm not sure I can use that to say anything about $S$ itself. I'm not sure how I would show the tangent spaces of $S$ vary in dimension for any possible smooth structure on $S$ unrelated to how $S$ sits in $\mathbb{R}^4$ . I'm now trying to show a neighborhood of the origin is not locally Euclidean. I've seen exercises where one shows $\{(x,y) : x^2 = y^2\} \subset \mathbb{R}^2$ is not a manifold by showing that a neighborhood of the origin leaves 4 connected components when you remove the origin, but I haven't been able to rigorously justify similar reasoning here.
$T_0S$ is not defined (unless you talk about notions like the Zariski tangent space). You’re talking about the kernel of the derivative of $f$ , of course, but failure of the hypotheses of the implicit function theorem does not, working over the reals, guarantee failure to be a manifold. Cones (that are not linear) are not manifolds because of their cone points. If $0\in S$ had a neighborhood diffeomorphic (or, indeed, homeomorphic) to a $3$ -ball, then removing $0$ would leave that neighborhood simply connected, since a punctured $3$ -ball deformation retracts onto $S^2$ . But it is not: $S$ is the cone on its link $S\cap S^3 \cong S^1\times S^1$ , so a punctured neighborhood of $0$ in $S$ deformation retracts onto a torus, whose fundamental group $\mathbb Z^2$ is nontrivial.
|differential-geometry|smooth-manifolds|
0
Does the following series of fractions of consecutive odd numbers converge? If yes, where does it converge to?
I have a series of fractions that consists of consecutive odd numbers in the following fashion: $$\frac{1}{3} + \frac{3}{5} + \frac{5}{7} + \frac{7}{9} + \ ... \ = \ ?$$ My question is whether this series converges if it were to go on forever until infinity. In my attempt, I do feel it should converge. Every term in the series is less than 1. Moreover, as it is apparent, the sum of the first ‘n’ terms is also less than 1. So, on first look, it does seem to converge. If that is indeed true, and the series converges, I feel it must either converge to 1 or 2. But that’s just an intuition. Additionally, in my own attempt, I found that this series can be written in summation notation as follows: $$ \sum_{n=1}^{\infty} \frac{n}{n + \frac{n}{n - \frac{n}{n + n}} } = \frac{1}{3} + \frac{3}{5} + \frac{5}{7} + \frac{7}{9} + \ ... \ $$ Edit : The function diverges, as pointed out by many in the answers and comments. Thank you all.
No, it does not converge: here $\lim\limits_{n\to \infty} a_n=1\ne 0$ , where $a_n=\frac{2n-1}{2n+1}$ is the $n$ -th term of the series, so the series diverges by the divergence test.
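A quick numerical check (not part of the original answer): since the terms approach $1$ , the partial sums grow roughly like $N$ rather than settling down.

```python
# Partial sums of sum_{n>=1} (2n-1)/(2n+1); the terms tend to 1,
# so the partial sums grow roughly linearly and the series diverges.
def partial_sum(N):
    return sum((2 * n - 1) / (2 * n + 1) for n in range(1, N + 1))
```

For example, `partial_sum(1000)` is already far above any fixed bound, in line with the divergence test.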
|sequences-and-series|power-series|
0
Does the following series of fractions of consecutive odd numbers converge? If yes, where does it converge to?
I have a series of fractions that consists of consecutive odd numbers in the following fashion: $$\frac{1}{3} + \frac{3}{5} + \frac{5}{7} + \frac{7}{9} + \ ... \ = \ ?$$ My question is whether this series converges if it were to go on forever until infinity. In my attempt, I do feel it should converge. Every term in the series is less than 1. Moreover, as it is apparent, the sum of the first ‘n’ terms is also less than 1. So, on first look, it does seem to converge. If that is indeed true, and the series converges, I feel it must either converge to 1 or 2. But that’s just an intuition. Additionally, in my own attempt, I found that this series can be written in summation notation as follows: $$ \sum_{n=1}^{\infty} \frac{n}{n + \frac{n}{n - \frac{n}{n + n}} } = \frac{1}{3} + \frac{3}{5} + \frac{5}{7} + \frac{7}{9} + \ ... \ $$ Edit : The function diverges, as pointed out by many in the answers and comments. Thank you all.
As a hint: notice that you can write the $n$ th term as $\frac{2(n-1) + 1}{2n + 1}$ since the numerator is always the odd number directly before the denominator. From here, it is not too difficult to show that this does not converge using some series tests or by definition.
|sequences-and-series|power-series|
1
Solving a non-elementary integral
$$L=\lim_{\epsilon\to 0^+}\int_{\epsilon}^{2\epsilon}\frac{e^{(x-1)^2}}{x}dx$$ How can one approach this integral? I've tried inserting an extra variable to perform Feynman's trick, but it just complicates it more. Kindly give a few hints on how to solve this.
Hint: perform the substitution $x=\epsilon u$ so that $\epsilon$ only appears inside the integrand and not in the bounds. Then you can move the limit inside the integral (be sure to justify this step with your favorite convergence theorem).
|definite-integrals|improper-integrals|
1
Demonstrating equality of Extended Cauchy-Schwarz Inequality
I have a bit of a problem with the Extended Cauchy–Schwarz Inequality, specifically at the line "with equality if and only if $b = cB^{-1}d$ for some constant $c$ ." As such, I'm having problems understanding the following lemma, at a similarly worded line: "with the maximum attained when $x = cB^{-1}d$ for any constant $c \ne 0$ ." How do I demonstrate that the equality holds/maximum is attained if $b = cB^{-1}d$ ? Most resources I found cater to general linear algebra courses, so it is very hard for me to relate to this text (Johnson & Wichern, Applied Multivariate Statistical Analysis).
Presumably you remember $x'y=x\cdot y = \|x\|\|y\|\cos\theta$ from your multivariable calculus classes. So the dot product is maximized when $\cos\theta=1$ , which means $x$ is a positive scalar multiple of $y$ . Following the proof and applying this, equality holds here if and only if $B^{1/2}b$ is a (positive) scalar multiple of $B^{-1/2}d$ . But $B^{1/2}b=cB^{-1/2}d$ means $b=cB^{-1}d$ .
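The lemma can also be sanity-checked numerically (a sketch of my own, with an arbitrary positive definite $B$ ): the ratio $(x'd)^2/(x'Bx)$ never exceeds $d'B^{-1}d$ , and attains it at $x = B^{-1}d$ .

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
B = M @ M.T + 3 * np.eye(3)        # an arbitrary positive definite B
d = rng.normal(size=3)

ratio = lambda x: (x @ d) ** 2 / (x @ B @ x)
x_star = np.linalg.solve(B, d)     # x = B^{-1} d  (take c = 1)
bound = d @ x_star                 # d' B^{-1} d

# ratio(x_star) equals the bound; any other x stays at or below it
```

Note that `ratio` is invariant under scaling $x \mapsto cx$ , which is why any $c \ne 0$ works in the lemma.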
|linear-algebra|inner-products|
0
Find transformation between two sets of points by minimizing the maximum distance
There are two sets of four 2D points, with a one-to-one correspondence between them. The points in both sets are close to the following nominal positions, deviating by a random value in both directions: (-1.5, 0.0), (-0.5, 0.0), (0.5, 0.0), (1.5, 0.0). I would like to find a transformation (translation and rotation) which brings one set over the other by minimizing the maximum distance between the corresponding points: $f = \max(\|a_i - b_i\|^2 : i = 1, 2, 3, 4)$ . I tried different optimization methods in the Python scipy.optimize package, but none of them worked well. They are close, but not perfect. I am not interested in any other best-fit algorithms (SVD, PCA, etc.). Playing with random datasets, I observed that in some cases after the optimization two pairs of points have the same distance, sometimes three. Is there any algorithm or geometrical approach to find the global minimum of $f$ ? Here is an example, before any optimization. Let's say, we want to move the orange
Your goal is to find the three parameters $\delta_x,\delta_y,\theta$ (translation in $x$ , translation in $y$ , angle of rotation). Define a loss function $\Phi(\delta_x,\delta_y,\theta)$ that measures the "badness" of using a particular set of parameter values. Specifically, it is given by $$\Phi(\delta_x,\delta_y,\theta) = \sum_{i=1}^4 \|T_{\delta_x,\delta_y,\theta}(a_i)-b_i\|^2,$$ where $T_{\delta_x,\delta_y,\theta}$ denotes translation and rotation as given by the parameters, and $a_i,b_i$ are the two sets of four points. Thus $\Phi$ represents the total error after transforming them. Now use any standard algorithm to find $\delta_x,\delta_y,\theta$ that minimize $\Phi(\delta_x,\delta_y,\theta)$ . This is a numerical optimization problem and you can try any standard optimization method: gradient descent , Nelder-Mead , particle swarm optimization , etc. Your problem comes up in image registration , also known as image alignment. See especially point-set registration , which is basi
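To make this concrete, here is a minimal Python sketch (my own, with hypothetical perturbed data) feeding the sum-of-squares loss $\Phi$ to SciPy's Nelder–Mead:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: nominal points plus small random perturbations.
a = np.array([[-1.5, 0.0], [-0.5, 0.0], [0.5, 0.0], [1.5, 0.0]])
rng = np.random.default_rng(0)
b = a + rng.normal(scale=0.01, size=a.shape)

def transform(params, pts):
    """Apply rotation by theta followed by translation (dx, dy)."""
    dx, dy, theta = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([dx, dy])

def phi(params):
    """Sum-of-squares loss Phi(dx, dy, theta) from the answer."""
    return np.sum((transform(params, a) - b) ** 2)

res = minimize(phi, x0=np.zeros(3), method="Nelder-Mead")
```

Swapping `phi` for the max-of-distances objective from the question is a one-line change, though that objective is non-smooth, which is why derivative-free methods like Nelder–Mead are a reasonable first try.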
|geometry|optimization|
0
Sum of squared maximization with a norm constraint
I have the following optimization \begin{align} \max_{\|x\|^2\le1} \|Lx - y\| \end{align} where $L$ is a lower triangular and $y$ is a given vector. Does it admit a closed-form solution? I am interested in the general case where $L$ may be not invertible and even not square, but any progress under some assumptions will be much appreciated. The motivation is to understand the standard least squares method but as a game from the "adversarial" point of view: $y$ is a measurement sequence and $x$ is the set of possible states from $y = Lx + n$ .
We can consider the following equivalent objective: $$\|Lx - y\|^2=x^T\color{blue}{L^TL}x-2y^TLx+y^Ty.$$ As $L^TL$ is a symmetric (positive semidefinite) matrix, it admits the spectral decomposition : $$\color{blue}{L^TL = Q^T \Lambda Q }$$ for some orthogonal matrix $Q$ with $Q^TQ=I$ and diagonal matrix $\Lambda \, (\ge 0)$ . Then, defining $\color{blue}{z=Qx}$ and $\color{blue}{b:=2QL^Ty}$ , the problem is equivalent to: $$\color{blue}{\max_{\|z\|^2=1} \quad z^T\Lambda z-b^Tz+y^Ty }.$$ Note that here we consider the equivalent constraint $$\|x\|^2=x^Tx=x^TIx=x^TQ^TQx=z^Tz=\|z\|^2=1.$$ As the problem is convex maximization (maximization of the convex quadratic function $z^T\Lambda z-b^Tz+y^Ty$ over the convex set $\|z\|^2 \le 1$ ), there is an optimal solution on the boundary (at some extreme point), so the constraint $\|z\|^2 \le 1$ can be replaced with $\|z\|^2 = 1$ . For the $\color{blue}{\text{new problem}}$ , from the KKT conditions, the maximizer is among normalized vectors $z$
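A small numerical illustration (my own sketch, not from the answer) of the boundary-attainment step: for a random lower-triangular $L$ , the value of $\|Lx-y\|$ at points strictly inside the unit ball never exceeds its maximum over the boundary.

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.tril(rng.normal(size=(2, 2)))   # a (possibly singular) lower-triangular L
y = rng.normal(size=2)
f = lambda x: np.linalg.norm(L @ x - y)

# densely sample the boundary ||x|| = 1 of the 2D unit disk
ts = np.linspace(0.0, 2 * np.pi, 4000)
boundary_max = max(f(np.array([np.cos(t), np.sin(t)])) for t in ts)

# random points well inside the ball never beat the boundary maximum,
# because f is convex and a convex function on a ball peaks on the boundary
pts = rng.uniform(-1.0, 1.0, size=(5000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 0.95]
interior_max = max(f(p) for p in pts)
```

This is exactly why the constraint $\|z\|^2\le 1$ can be replaced with $\|z\|^2=1$ in the rewritten problem.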
|linear-algebra|optimization|convex-optimization|least-squares|
1
Is the component function of a smooth vector field unique?
Let $M$ be a smooth manifold. We know that if $X:\ M\to TM$ is a smooth vector field, then for any local chart $x=(x_1,\dots,x_n):\ U\subset M\to\mathbb R^n$ , there exists smooth functions $X_1,\dots,X_n:\ U\to\mathbb R $ such that $$X_p\,=\,\sum X_i(p)\frac{\partial }{\partial x_i}\bigg|_p\ \ \forall p\in U $$ I wonder if $y:\ V\subset M\to\mathbb R^n $ is another local chart such that $U\cap V\ne\varnothing $ , and $$X_p\,=\,\sum Y_i(p)\frac{\partial }{\partial y_i}\bigg|_p\ \ \forall p\in V $$ then is it true that $X_i(p)=Y_i(p)\ \ \forall p\in U\cap V$ ? May you give me some hints for this question ? Thanks.
Are you asking if two coordinate charts would always give the same components for a vector at a point? If so, I think considering cartesian and polar coordinates on $\mathbb{R}^2$ should answer your question. The vector field itself is manifestly invariant but the component functions which describe the vectors at the various points will obviously change in general.
|differential-topology|smooth-manifolds|vector-fields|
0
Coefficients of pullback connection
I am learning about connections using Lee's Intro to Riemannian Manifolds, and I am confused regarding the coefficients of the pullback connection. For the setting, let $(M,g) $ be a Riemannian manifold and let $\nabla$ be a connection on $M$ . From Prop 4.7 in Lee's book, if we have two local frames $(E_i)$ and $(\tilde{E}_j)$ related by a matrix $A^j_i$ , then the transformation law for the connection coefficients is: $$ \tilde{\Gamma}^k_{ij} = (A^{-1})^k_pA^q_iA^r_j\Gamma^p_{qr} + (A^{-1})^k_pA^q_iE_q(A^p_j). \tag{1} $$ I showed this as indicated in this question , using the definition of the connection coefficients. Further, Lee introduces the pullback connection by a diffeomorphism $\varphi:M \to \tilde{M}$ between $(M,g) $ and $(\tilde{M},\tilde{g})$ by: $$ (\varphi^*\tilde{\nabla})_X Y = (\varphi^{-1})_*(\tilde{\nabla}_{\varphi_*X}(\varphi_*Y)), $$ for any vector fields $X,Y$ on $M$ . I obtained that the coefficients of the pullback connection satisfy an analog relation to the t
You misunderstood the other post. The two formalisms are compatible, although you can say the other post has much less information. First, the other post is using a different convention for the Christoffel symbols -- he differentiates the first vector field with respect to the second. So your $\Gamma_{ij}^k$ is his $\Gamma_{ji}^k$ . Then his formula, as you correctly tried to interpret, is $$ {}^{\varphi^*\tilde\nabla}\Gamma_{ij}^k =\frac{\partial \varphi^q}{\partial x^i} {}^{\tilde\nabla}\Gamma_{qj}^k . $$ This would match with your formula (2) if you group terms as $$ {}^{\varphi^*\tilde\nabla}\Gamma_{ij}^k =\frac{\partial \varphi^q}{\partial x^i}\Big(\frac{\partial\varphi^{-1,k}}{\partial \tilde x^p} \frac{\partial\varphi^{r}}{\partial x^j}{}^{\tilde\nabla}\Gamma_{qr}^p + \frac{\partial\varphi^{-1,k}}{\partial \tilde x^p} \frac{\partial}{\partial \tilde x^q}\big(\frac{\partial\varphi^{p}}{\partial x^j}\big) \Big). $$ So the compatibility comes from $$ {}^{\tilde\nabla}\Gamma_{qj}^k=\
|differential-geometry|riemannian-geometry|
1
Solving a non-elementary integral
$$L=\lim_{\epsilon\to 0^+}\int_{\epsilon}^{2\epsilon}\frac{e^{(x-1)^2}}{x}dx$$ How can one approach this integral? I've tried inserting an extra variable to perform Feynman's trick, but it just complicates it more. Kindly give a few hints on how to solve this.
Term-by-term integration of the Laurent series ... (Conveniently, there's only one disk or annulus of convergence with inner radius $0$ and outer radius $\infty$ and the pole at $x = 0$ is simple.) \begin{align*} \frac{\mathrm{e}^{(x-1)^2}}{x} &= \frac{\mathrm{e}}{x} - 2 \mathrm{e}+3 \mathrm{e}x - \frac{10 \mathrm{e} x^2}{3} + \frac{19 \mathrm{e} x^3}{6} + \cdots \\ \int_{\epsilon}^{2\epsilon} \; \frac{\mathrm{e}}{x} &{}- 2 \mathrm{e}+3 \mathrm{e}x - \frac{10 \mathrm{e} x^2}{3} + \frac{19 \mathrm{e} x^3}{6} + \cdots \,\mathrm{d}x \\ {} &= \mathrm{e}\ln(2) - 2 \mathrm{e}\epsilon + \frac{9 \mathrm{e} \epsilon^2}{2} - \frac{70 \mathrm{e} \epsilon^3}{9} + \frac{95 \mathrm{e} \epsilon^4}{8} + \cdots \\ &\underset{\epsilon \rightarrow 0^+}{\longrightarrow} \mathrm{e} \ln 2 \text{.} \end{align*}
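As a numerical sanity check (my own sketch, using `scipy.integrate.quad`), the integral indeed approaches $\mathrm{e}\ln 2$ as $\epsilon \to 0^+$ , with the error shrinking like the $-2\mathrm{e}\epsilon$ term above predicts:

```python
import math
from scipy.integrate import quad

def J(eps):
    """Numerically evaluate the integral from eps to 2*eps."""
    val, _ = quad(lambda x: math.exp((x - 1) ** 2) / x, eps, 2 * eps)
    return val

target = math.e * math.log(2)   # e*ln(2), the limit from the series
```

For example, `J(1e-3)` already agrees with `target` to about $2\mathrm{e}\epsilon \approx 0.005$ .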
|definite-integrals|improper-integrals|
0
Remainders of $(1-3i)^{2009}$ when divided by $13+2i$ in $\Bbb{Z}[i]$
Find the possible remainders of $(1-3i)^{2009}$ when divided by $13+2i$ in $\mathbb{Z}[i]$ . I'm having a hard time understanding remainders in $\mathbb{Z}[i]$ . I'm gonna write my solution to the problem above just to give some context, but you can skip this part and read the question below if you want. We start by noticing that $N(13+2i) = 173$ , which is a prime number. Therefore, $13+2i$ is irreducible in $\mathbb{Z}[i]$ . This allows us to quickly calculate the number of elements of the multiplicative group $(\mathbb{Z}[i]/(13+2i))^{\times}$ , namely $N(13+2i) - 1 = 172$ . By Lagrange's Theorem, since $1-3i$ and $13+2i$ are coprime, $$(1-3i)^{11\cdot 172} \equiv 1\mod(13+2i).$$ We now reduced the problem to finding $(1-3i)^{117} \mod (13+2i)$ . Note that $117 = 3^2\cdot 13$ , so by a tedious but direct calculation (once we figure out $(1-3i)^{3^2}$ modulo $13+2i$ we get a nice expression, so the $13$ th power is relatively straightforward to compute), we arrive at $6-2i$ . Questio
The map $\mathbb{Z}\hookrightarrow \mathbb{Z}[i]$ induces an isomorphism of fields $$\mathbb{Z}/173 \simeq\mathbb{Z}[i]/(13+2i)$$ So we could work with representatives in $\mathbb{Z}$ , which is sometimes easier. By the above $ {80}\mapsto i$ , and $107\mapsto 1-3i$ . Note that $107$ is a primitive root $\mod 173$ . About remainders : it seems to me that for the Gaussian integers the remainder of $a \in \mathbb{Z}[i]$ $\mod b$ is obtained as follows: consider a closest element $b q$ of $b \mathbb{Z}[i]$ to $a$ . Then $q$ will be the "quotient", and the remainder is $a- b q$ . In the case $N(b)$ odd, the closest element is unique, and so is the remainder. (We need the closest so the Euclidean algorithm works). Note that one could do the closest multiple also in $\mathbb{Z}$ ( so the remainder is at most half of the divisor, in absolute value). Example: 1. $$\frac{107}{13+2i} = \frac{1391}{173} -\frac{214}{173}i$$ 2. $$(\frac{1391}{173}, -\frac{214}{173})\simeq(8,-1)$$ 3. $$107 - (13+2i)
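The "closest multiple" recipe can be sketched in a few lines of Python (the function name `gauss_mod` is mine); it reproduces the worked example, giving $107 \equiv 1-3i \pmod{13+2i}$ as stated above:

```python
# Remainder of a mod b in Z[i]: round the exact quotient a/b to the
# nearest Gaussian integer q, then take a - b*q.
def gauss_mod(a, b):
    q = a / b                                    # exact quotient in C
    qr = complex(round(q.real), round(q.imag))   # nearest Gaussian integer
    return a - b * qr

r = gauss_mod(107, complex(13, 2))
# r should be 1 - 3i, since 107/(13+2i) = 1391/173 - (214/173)i ~ 8 - i
```

The rounding step is exactly step 2 of the example, $(\frac{1391}{173}, -\frac{214}{173}) \simeq (8,-1)$ , and the returned remainder has norm smaller than $N(13+2i)=173$ , as the Euclidean algorithm requires.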
|number-theory|elementary-number-theory|algebraic-number-theory|gaussian-integers|
0
Relation between curvature form and the Riemann tensor
Let $ (M, g) $ be a Riemannian manifold and $ \omega $ a connection on the tangent bundle of $ M $ at a given point. Then the curvature form is the $\mathfrak{g} $ -valued 2-form $ \Omega \in \Omega^2(M) $ where $ \mathfrak{g} $ is the Lie algebra of the structure group in the tangent bundle $ TM $ . The curvature form is given by $ \Omega = d\omega + \frac{1}{2} \left[ \omega \wedge \omega \right] $ From what I understand the curvature form is an equally valid description of curvature as the Riemann tensor. Is there a way of equating the two? In the Wikipedia page "curvature on Riemannian manifolds" there's this equation $ R(\boldsymbol{u}, \boldsymbol{v}) \boldsymbol{w} = \Omega(\boldsymbol{u} \wedge \boldsymbol{v}) \boldsymbol{w} $ I would like to use this to get an equation in terms of the components, however I don't know how to write the curvature form in terms of its components. I know that $ \Omega^i_j = d\omega^i_j + \omega^i_k \wedge \omega^k_j $ And also that using the de
I will work with the coordinate vector fields $\partial_i$ . Let $(x^1,\dots,x^n)$ be local coordinates on $M$ , and $\partial_i = \frac{\partial}{\partial x^i}$ the local basis of coordinate vector fields. Define the Christoffel symbols for the Levi-Civita connection by $$ \nabla_{\partial_i} \partial_j = \Gamma_{ij}^k \partial_k. $$ Then by $[\partial_i, \partial_j]=0$ , $\Gamma_{ij}^k = \Gamma_{ji}^k$ . Note that the connection form is $$ \omega_j^k = \Gamma_{ij}^k dx^i. $$ Now we define the curvature tensor by $$ R(\partial_i, \partial_j)\partial_k = \nabla_{\partial_i}\nabla_{\partial_j}\partial_k - \nabla_{\partial_j}\nabla_{\partial_i}\partial_k = R_{ijk}^\ell \partial_\ell. $$ A direct calculation gives $$ R_{ijk}^\ell = \Gamma_{k[j,i]}^\ell + \Gamma_{k[j}^m\Gamma_{i]m}^\ell= \Gamma_{kj,i}^\ell - \Gamma_{ki,j}^\ell + \Gamma_{kj}^m\Gamma_{im}^\ell -\Gamma_{ki}^m\Gamma_{jm}^\ell, $$ where $[\cdot,\cdot]$ is physicists' notation for skew-symmetrization. (Note again $\Gamma_{ij}^k$
|differential-geometry|riemannian-geometry|differential-forms|curvature|connections|
0
$p$-subgroups of $Gl_2(q)$
This question is similar to non-abelian groups of order $p^2q^2$ , for which Derek Holt gave an answer to one of the cases, but I am looking for an answer that covers all the cases. I am looking for $p$ -subgroups of $Gl_2(q)$ for $p,q$ odd primes. More explicitly, I am looking for subgroups of isomorphism type $C_p,C_{p^2}, C_p\times C_p$ in $Gl_2(q)$, up to conjugation. Clearly, the cases to consider are where $p|q-1,p^2|q-1,p|q+1,p^2|q+1$. My goal is to classify up to isomorphism, all the groups of order $p^2q^2$ which are semi-direct products of the Sylow subgroups $Q\rtimes P$. I am interested not only in their number but also in their structure, that is the matrices in $Gl_2(q)$ which correspond to the $P$ action. I will appreciate any help, in an explicit answer or a book reference which has the answer. P.S. any leads on the case where $p=2$ would be great also.
When $p$ equals 2 and $q$ is 1 modulo 4, the matrices are of the form $\left( \begin{array}{cc} \alpha & 0 \\ 0 & \beta \end{array} \right)$ and $\left( \begin{array}{cc} 0 & \alpha \\ \beta & 0 \end{array} \right)$ with $\alpha$ and $\beta$ elements of order a power of 2 in $\mathbb{F}_q^*$ . When $p$ is odd and divides $q-1$ , the matrices are of the form $\left( \begin{array}{cc} \alpha & 0 \\ 0 & \beta \end{array} \right)$ with $\alpha$ and $\beta$ elements of order a power of $p$ in $\mathbb{F}_q^*$ . When $p$ equals 2 and $q$ is 3 modulo 4, or when $p$ is odd and divides $q+1$ , one can build matrix representatives by considering $\mathbb{F}_{q^2}$ as a two-dimensional vector space over $\mathbb{F}_q$ . Elements of $\mathbb{F}_{q^2}^*$ (i.e. the multiplicative group of the extended finite field) of order a power of $p$ have an action on the vector space that can be computed explicitly. The multiplicative group of a finite field being cyclic, you just need to compute a single
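As a concrete illustration of the $\mathbb{F}_{q^2}$ construction (my own sketch, not from the original answer): take $q=3$ , $p=2$ , so $q\equiv 3 \pmod 4$ . Writing $\mathbb{F}_9=\mathbb{F}_3[i]$ with basis $\{1,i\}$ , multiplication by $1+i$ acts on that basis by the matrix below, which has order $8$ in $Gl_2(\mathbb{F}_3)$ :

```python
import numpy as np

q = 3
# F_9 = F_3[i] with i^2 = -1 (valid since q = 3 mod 4, so -1 is a non-square).
# Multiplication by 1 + i sends 1 -> 1 + i and i -> -1 + i, giving:
M = np.array([[1, -1],
              [1,  1]]) % q

def mat_pow_mod(A, k, m):
    """k-th power of a 2x2 integer matrix, reduced mod m at each step."""
    R = np.eye(2, dtype=int)
    for _ in range(k):
        R = (R @ A) % m
    return R

# 1 + i generates the 2-part of the cyclic group F_9^* (order 8):
# M^4 = 2I = -I mod 3 and M^8 = I.
```

Since $\mathbb{F}_{q^2}^*$ is cyclic, the same recipe (find one element of maximal $p$-power order and take its matrix) covers the general $p \mid q+1$ case.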
|group-theory|reference-request|finite-groups|
0
Use sequent calculus to show if $\Gamma\vdash t_1=t_2$ then $\Gamma\vdash f(t_1)=f(t_2)$
Suppose the sequent $\Gamma\vdash t_1=t_2$ where $t_1,t_2$ are closed terms. Let $f$ be a one-place function symbol. I am trying to find a sequent calculus derivation of $\Gamma\vdash f(t_1)=f(t_2)$ using these identity rules from this set of notes : I don't see where functions fit in the picture.
Whenever there is a non-trivial sequent to prove in the sequent calculus, use the cut rule, which is the only "intelligent" rule in that proof system. The idea is that you can easily derive the sequent $t_1 = t_2 \vdash f(t_1) = f(t_2)$ by the rule $=$ , and then you cut this sequent with your assumption $\Gamma \vdash t_1 = t_2$ . The formal solution is below, where the formula $\varphi(x)$ to which I applied the rule $=$ is $f(t_1) = f(x)$ . $$ \dfrac{ \displaystyle{\atop \Gamma \vdash t_1 = t_2} \qquad \dfrac{ \dfrac{ \dfrac{}{\vdash f(t_1) = f(t_1)}\text{refl} }{ t_1 = t_2 \vdash f(t_1) = f(t_1) }\text{w}_L } { t_1 = t_2 \vdash f(t_1) = f(t_2) }= }{ \Gamma \vdash f(t_1) = f(t_2) }\text{cut} $$
|logic|first-order-logic|proof-theory|formal-proofs|sequent-calculus|
1
Proving differentiability of a payoff
Let $F = \Phi (X^x_T)$ . $X^x_T$ behaves as a risky asset such that $dX^x_t=rX^x_tdt+\sigma X^x_tdW_t$ with $W_t$ standard Brownian Motion; $X_0^x=x$ . $\Phi$ is continuous with linear growth and $P(x) = e^{−rT}\mathbb{E}[F]$ . Prove that $P \in C^2(\mathbb{R}_+;\mathbb{R}) $ . I've already proved that $P \in C^1(\mathbb{R}_+;\mathbb{R}$ ). Also, solving the SDE, we obtain $X^x_T=x e^{(r-\frac{\sigma^2}{2})T+\sigma \sqrt{T}Y}$ with $Y \sim \mathcal{N}(0,1)$ but I have no idea on how to continue. Could someone help me? Thanks in advance.
This is shown in Le Gall, Brownian Motion, Martingales, and Stochastic Calculus , Theorem 7 (for $f$ continuous and bounded, but the proof is similar). Another source is here, lecture 12: stochastic differential equations, diffusion : Define the infinitesimal generator of the process $X_{t}$ to be the differential operator $$G=\frac{1}{2}\sigma^2(x)\partial_{xx}+\mu(x)\partial_{x}.$$ Theorem 5 Assume that $\mu(x),\sigma(x)$ are Lipschitz with linear growth. Let $f(x),K(x)$ be continuous such that $K\geq 0$ and $f(x)=O(|x|)$ as $|x|\to +\infty$ . Then the function defined by $$u(t,x)=E_{x}\left[\exp\left(-\int_{0}^{t}K(X_{s})ds\right)f(X_{t})\right]$$ satisfies the diffusion equation $u_{t}=Gu-Ku$ with $u(0,x)=f(x)$ . As mentioned there, one needs to make a modification for unbounded $f$ . In particular, we need integrability, but thanks to the linear growth we have $$E[\Phi(X_{T}^{x})]\leq cE[|X_{T}^{x}|]=cE[X_{T}^{x}]=cxe^{rT}.$$ The rest of the proof is the same as in the case of $f$ being bounded and continuous.
|stochastic-processes|stochastic-calculus|stochastic-analysis|stochastic-differential-equations|malliavin-calculus|
0
Why do we call integration "accumulation of change"?
So in virtually all English-language calculus classes I have seen, we define integration as the "accumulation of change". And that makes sense to me intuitively, but when I think about it, I feel like accumulation of value makes more sense. Because if we take the change in a function $f:\mathbb{R}\mapsto\mathbb{R}\text{ s.t. } f(x)=k$ for $k\in\mathbb{R}$ , then the "change" in $f$ at any $x$ is $0$ . So accumulating it, we add $0$ with itself some number of times, right? Which will always be $0$ . Yet, $$\int_a^b k\, dx$$ isn't always equal to zero... I feel like accumulation of change makes sense to me but I can't put my finger on why it does. Can anyone try to explain?
Think of a car driving from one location to another. It can speed up or slow down, so its velocity can increase or decrease, but it is always changing position (except for when it comes to a stop). The total distance the car has traveled is the accumulation of all its changes in position, i.e. the integration of its velocity function.
|calculus|integration|definite-integrals|indefinite-integrals|
0
Intuition on group homomorphisms
So I'm studying for finals now, and came across the idea of homomorphisms again. This is not a new idea for me at all, having seen them in groups, rings, fields, etc. However, on reevaluating them I realized suddenly that I really don't understand them on the same level as I thought. While isomorphisms seem to be very natural to think about, I can't visualize what's happening in homomorphisms. At this time, I only have a sketchy idea that the existence of a homomorphism between groups (or between groups and what they're acting on) means that they both somehow combine the same way. Does someone mind sharing their intuition on the concept? As always, any help is appreciated, thanks
Sometimes, understanding the utility of a mathematical concept with some "use cases" helps to understand the concept itself. This is what I am trying to do with this answer. Let us say that you want to study a certain group. One of the ideas is to find homomorphisms from this group to other groups. One-to-one homomorphisms provide new ways to think about the elements of the group. For example, there is a one to one homomorphism from the set of symmetries of an equilateral triangle (the dihedral group $D_3$ ) to the set of symmetries of a regular hexagon (the dihedral group $D_6$ ). This means that the symmetries of an equilateral triangle can be seen as symmetries of a regular hexagon as well. Many-to-one homomorphisms may help you to “classify” the elements of a group according to their image under the homomorphism, and to understand the internal structure of the group. For example, there is a many-to-one homomorphism from the set of symmetries of an equilateral triangle $D_3$ to the
|abstract-algebra|group-theory|intuition|group-homomorphism|
0
Non-trivial real valued two-cocycle on $\mathbb{R}^2$
Assume that $c$ is an arbitrary non-vanishing antisymmetric bilinear form on $\mathbb{R}^2$ and view $\mathbb{R}^2$ as an Abelian Lie algebra. Can $c$ define a non-trivial real valued two-cocycle on $\mathbb{R}^2$ ?
By definition of the Chevalley–Eilenberg cohomology, the space of $2$ -cocycles with trivial coefficients $K$ is given by $$ Z^2(L,K)=\{\omega\in {\rm Alt}(L\times L,K)\mid \omega([x_1,x_2],x_3)-\omega([x_1,x_3],x_2)+\omega([x_2,x_3],x_1)=0\}. $$ Of course, if $L$ is abelian, we obtain all antisymmetric (alternating) bilinear forms.
|lie-algebras|homology-cohomology|
1
Why do we call integration "accumulation of change"?
So in virtually all English-language calculus classes I have seen, we define integration as the "accumulation of change". And that makes sense to me intuitively, but when I think about it, I feel like accumulation of value makes more sense. Because if we take the change in a function $f:\mathbb{R}\mapsto\mathbb{R}\text{ s.t. } f(x)=k$ for $k\in\mathbb{R}$ , then the "change" in $f$ at any $x$ is $0$ . So accumulating it, we add $0$ with itself some number of times, right? Which will always be $0$ . Yet, $$\int_a^b k\, dx$$ isn't always equal to zero... I feel like accumulation of change makes sense to me but I can't put my finger on why it does. Can anyone try to explain?
The thing that you're integrating is the change that's being accumulated. When you want to calculate the cumulative change of $f(x)=k$ , you integrate its rate of change, which you correctly identified as 0. So you should expect $f(b)-f(a)=\int_a^b 0\mathrm dx$ , which is true. When you integrate $f$ itself, you view $f$ as the rate of change of a different function $F$ , and you expect $F(b)-F(a)=\int_a^b f(x)\mathrm dx$ .
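A tiny numerical illustration (my own example, with $f(x)=x^2$ and $F(x)=x^3/3$ ) of this "integrate the rate of change" viewpoint:

```python
from scipy.integrate import quad

f = lambda x: x ** 2           # the rate of change being accumulated
F = lambda x: x ** 3 / 3       # the quantity whose change accumulates

val, _ = quad(f, 1.0, 2.0)     # accumulate the change of F over [1, 2]
# val should match F(2) - F(1) = 7/3

zero, _ = quad(lambda x: 0.0, 1.0, 2.0)
# accumulating zero rate of change gives 0, matching the f(x) = k case
# in the question: the *rate of change* of a constant is 0, not k itself
```

So the question's paradox dissolves: for $f(x)=k$ , the thing you accumulate to recover $f$ is its derivative $0$ , while $\int_a^b k\,dx$ accumulates $k$ viewed as the derivative of $F(x)=kx$ .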
|calculus|integration|definite-integrals|indefinite-integrals|
1
Defining a lambda expression to swap the order of lambda abstractions
I want to define a lambda term $\mathrm{swapQ}$ such that $$\begin{align} \mathrm{swapQ}\ &(\lambda P Q_1 Q_2. Q_1(\lambda x_1. Q_2 (\lambda x_2. Px_1x_2)))\\ \triangleright_\beta\ & (\lambda P Q_1 Q_2. Q_2 (\lambda x_2. Q_1(\lambda x_1. Px_1 x_2))) \end{align}$$ So far I have come up with $$\mathrm{swapQ^*} := \lambda S' P' Q_1' Q_2'. S' P' Q_2' Q_1' \\ \begin{align} \mathrm{swapQ^*}\ &(\lambda P Q_1 Q_2. Q_1(\lambda x_1. Q_2 (\lambda x_2. Px_1x_2)))\\ \triangleright_\beta\ & (\lambda P Q_1 Q_2. Q_2 (\lambda x_1. Q_1(\lambda x_2. Px_1 x_2))) \end{align}$$ but this only swaps the $Q_i$ s, not the $\lambda x_i$ s that should go with them. After having fiddled around for a while, I can not seem to find a way to change order of the variable abstractions while leaving the variable occurrences inside the term intact (or vice versa for an equivalent expression). I probably need to abstract over the $(\lambda x_i. \ldots$ ) parts and reapply them in the same way I did with the $Q$ 's, but I c
Just out of curiosity, I decided to run this in Combo , which I put up on GitHub. Ok, so first, the lambda expression $$f_0 = λP Q_1 Q_2 · Q_1 (λx_1 · Q_2 (λx_2 · P x_1 x_2))$$ (which is entered in as " $\backslash P\ Q1\ Q2.Q1(\backslash x1.Q2(\backslash x2.P\ x1\ x2))$ ") reduces to $B (C B) (C B)$ , which Combo writes as " $\_0 = C\ B,\ B\ \_0\ \_0$ ". It reduces to "strong" normal form (as defined and described in the software reference) " $w\ (B\ x\ v)$ ", when applied to the arguments $v$ , $w$ and $x$ , thereby leading to the reduction: $$f_0 v w x = w (B x v).$$ The second expression $$f_1 = λP Q_1 Q_2 · Q_2 (λx_2 · Q_1 (λx_1 · P x_1 x_2)),$$ reduces to $B (B T) (B (C B) C)$ . When applied to arguments $v$ , $w$ and $x$ , it reduces to the strong normal form $x (B w (C v))$ , thereby leading to the reduction: $$f_1 v w x = x (B w (C v)).$$ The combinators appearing in this are the following lambda expressions $$B = λxyz·x(yz),\quad C = λxyz·xzy,\quad T = λxy·yx.$$ The expressio
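The claimed reductions can be spot-checked (up to $\eta$ -equivalence) by encoding the combinators as curried Python functions and applying both sides to concrete arguments; the encoding below is my own:

```python
# Combinators as curried Python functions
B = lambda x: lambda y: lambda z: x(y(z))
C = lambda x: lambda y: lambda z: x(z)(y)
T = lambda x: lambda y: y(x)

# The two lambda terms from the question
f0 = lambda P: lambda Q1: lambda Q2: Q1(lambda x1: Q2(lambda x2: P(x1)(x2)))
f1 = lambda P: lambda Q1: lambda Q2: Q2(lambda x2: Q1(lambda x1: P(x1)(x2)))

# Concrete arguments that force full evaluation: w and x apply their
# function argument, so both sides reduce to the same tuple ("v", 1, 2).
v = lambda a: lambda b: ("v", a, b)
w = lambda g: g(1)
x = lambda h: h(2)

# f0 v w x should agree with w (B x v), and f1 v w x with x (B w (C v))
```

This only tests the equations at particular arguments, of course; it does not replace the combinatory-logic reduction itself, but it is a quick way to catch transcription errors in such expressions.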
|lambda-calculus|
0
Given a $n \times n$ matrix $A$, when do the first $p$ columns of $A^{-1}$ coincide with the Moore–Penrose inverse of the first $p$ rows of $A$?
I noticed that for ${A}= \left[\begin{matrix} \cos{\left(\phi\right)} & \cos{\left(\phi+2\pi/3\right)} & \cos{\left(\phi+4\pi/3\right)}\\ \sin{\left(\phi\right)} & \sin{\left(\phi+2\pi/3\right)} & \sin{\left(\phi+4\pi/3\right)}\\ \beta & \beta & \beta \end{matrix}\right]$ and $p=2$ the proposition in the title holds $\forall \phi$ . I wonder if there is any generalization to this fact, i.e. the minimal structure that a generic matrix $A$ needs to have in order to satisfy $$(PA)^{\dagger}={A}^{-1}P^T$$ where $P$ is a projection of the form $\left[\begin{matrix}I &0\end{matrix}\right]$ with $I$ the identity matrix of size $p$ and $0$ a zero matrix of size $p \times (n-p)$ .
It happens when the first $p$ rows are orthogonal to the other ones.
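A quick numerical check of both the question's example and this criterion (my own sketch, with arbitrary values for $\phi$ and $\beta$ ):

```python
import numpy as np

phi, beta = 0.3, 0.7   # arbitrary values for the example matrix
A = np.array([
    [np.cos(phi), np.cos(phi + 2 * np.pi / 3), np.cos(phi + 4 * np.pi / 3)],
    [np.sin(phi), np.sin(phi + 2 * np.pi / 3), np.sin(phi + 4 * np.pi / 3)],
    [beta, beta, beta],
])
p = 2
left = np.linalg.pinv(A[:p, :])     # (PA)^dagger, pseudoinverse of first p rows
right = np.linalg.inv(A)[:, :p]     # A^{-1} P^T, first p columns of A^{-1}

# The first two rows of A are orthogonal to the third, because the three
# equally spaced cosines (and sines) sum to zero, so left and right agree.
```

Perturbing $\beta$ in a single column breaks the row orthogonality, and the two matrices then differ, which is consistent with the stated criterion.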
|linear-algebra|matrices|projection-matrices|pseudoinverse|
1
continuous open mapping of segment
$f : [0,1] \rightarrow [0,1]$ is a continuous open mapping. Show that it has a finite number of maxima and $\max(f(x)) = 1$ . It is well-known that if $g:\mathbb{R}\rightarrow\mathbb{R}$ is continuous and open then it is monotonic ( Every continuous open mapping $\mathbb{R} \to \mathbb{R}$ is monotonic ). In the same way I've managed to show that $\max(f(x)) = 1$ . But how can one prove that the number of maxima is finite?
Suppose that the set $O=\{x\in[0,1]:f(x)=1\}$ is an infinite set. Then it has an accumulation point $a$ and, since $f$ is continuous, $f(a)=1$ . There is some $\delta>0$ such that $$|x-a|<\delta\implies f(x)>0,\tag{1}$$ and, since $a$ is an accumulation point of $O$ , $(a-\delta,a+\delta)\cap[0,1]$ contains some point $b\in O$ with $b\ne a$ . I will assume that $a<b$ ; the case in which $b<a$ is similar. We have two possibilities here. Either $f$ is constant equal to $1$ on $[a,b]$ or it is not. In the first case, $f\bigl((a,b)\bigr)=\{1\}$ , which is impossible, since $f$ is an open map. And if $f$ is not constant on $[a,b]$ , $f|_{[a,b]}$ attains a minimum at some point $c\in(a,b)$ . Besides, because of $(1)$ , $f(c)>0$ , and then $f\bigl((a,b)\bigr)$ contains its minimum $f(c)>0$ , so it is not an open subset of $[0,1]$ . So, again we reach an impossibility.
|general-topology|functions|continuity|maxima-minima|open-map|
1
Cartesian equation of $r(t) = 4\cos t \textbf{i} + 4\sin t \textbf{j} + 4 \cos t \textbf{k} $
The curve described with $r(t) = 4\cos t \textbf{i} + 4\sin t \textbf{j} + 4 \cos t \textbf{k}$ is an ellipse and I would like to find its Cartesian equation. We get $x = z = 4\cos t, y = 4\sin t$ . From that we have $t = \arccos\frac{x}{4}$ . Then we have: $y = 4\sin t = 4 \sin (\arccos\frac{x}{4}) \implies y^2 = 16 \sin ^2 (\arccos\frac{x}{4}) = 16 (1 - \cos ^2 (\arccos\frac{x}{4})) = 16 (1 - (\cos \arccos\frac{x}{4})^2) = 16 ( 1 - (\frac{x}{4})^2) = 16(1 - \frac{x^2}{16})=16 - x^2 \implies y^2 + x^2 = 16.$ That circle is the view of the ellipse when we project it in the $(x, y)$ plane, so we need to plug in $z$ somewhere to get the ellipse, but I'm not sure how to do that. Thanks!
Notice that the parametrization $r$ defines a $1$ -dimensional shape in $\mathbb{R}^3$ , so we need two equations to describe it. You already found one, $$y^2+x^2=16$$ To finish, notice that, directly from the expression of $r$ , one sees $x=z$ . Thus, the ellipse is given by the equations $$y^2+x^2=16 \\ x=z$$ It is the intersection of a cylinder and a plane.
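As a sanity check, the two equations can be verified symbolically; a small SymPy sketch (mine, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t')
# the given parametrization r(t)
x, y, z = 4 * sp.cos(t), 4 * sp.sin(t), 4 * sp.cos(t)

# the curve lies on the cylinder x^2 + y^2 = 16 and on the plane x = z
assert sp.simplify(x**2 + y**2 - 16) == 0
assert sp.simplify(x - z) == 0
```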
|linear-algebra|conic-sections|
1
Joint distribution of two conditional distributions
I am trying to understand how a joint distribution is formed when two regular conditional distributions are involved that are conditional with respect to different random variables. Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and let there be the three random variables $X:(\Omega, \mathcal{A})\rightarrow (\mathcal{X}, \mathcal{F})$ , $Y:(\Omega, \mathcal{A})\rightarrow (\mathcal{Y}, \mathcal{G})$ , $Z:(\Omega, \mathcal{A})\rightarrow (\mathcal{Z}, \mathcal{H})$ . Let us consider the Markov kernels $\mathbb{P}_{Y|X}$ and $\mathbb{P}_{X|Z}$ . 1.) My question now is, if $$\int_{\mathcal{X}}\mathbb{P}_{Y|X=x}(E) \mathbb{P}_{X|Z=z_0}(dx)=\mathbb{P}_{Y, X|Z=z_0}(E)$$ holds (for some fixed $z_0$ )? On the one hand I would say no because intuitively I would assume that we would need a kernel $\mathbb{P}_{Y|X, Z}$ for that. On the other hand, for some fixed $z_0$ , $\mathbb{P}_{X|Z=z_0}$ is simply a measure on $\mathcal{X}$ and not a conditional distribution (i.e., a kernel)
The correct formula is: $$\int_D \mathbb{P}_{X|Z}(dx)\int_E \mathbb{P}_{Y|X,Z}(dy)=\mathbb{P}_{X,Y|Z}(D\times E).$$ Consistently, if $D=\mathcal{X}$ : $$\mathbb{P}_{X,Y|Z}(\mathcal{X}\times E)=\mathbb{P}_{Y|Z}(E).$$
|probability-distributions|conditional-probability|conditional-expectation|
0
Intuition on group homomorphisms
So I'm studying for finals now, and came across the idea of homomorphisms again. This is not a new idea for me at all, having seen them in groups, rings, fields ect. However, on reevaluating them I realized suddenly that I really don't understand them on the same level as I thought. While isomorphisms seem to be very natural to think about, I can't visualize what's happening in homomorphisms. At this time, I only have a sketchy idea that the existence of a homomorphism between groups (or between groups and what they're acting on) means that they both somehow combine the same way.. Does someone mind sharing their intuition on the concept? As always, any help is appreciated, thanks
I will just add to and adjust the answer by @Brian. Not only does the kernel tell you how much structure is "wiped out": as we all know (and it's a very useful fact), kernels are normal subgroups, and, in a view I think I remember being espoused in Herstein, if not others, kernels/normal subgroups are in one-to-one correspondence with homomorphisms out of a group. Furthermore, in view of the first isomorphism theorem, the image of a homomorphism is isomorphic to the quotient of the domain group by its kernel. Homomorphisms have a number of nice properties: they always map the identity to the identity, consequently map each element to an element whose order divides the original element's order, and, if surjective, map normal subgroups to normal subgroups. Under certain conditions (known as a split extension ) the image of a homomorphism is isomorphic to a subgroup of the domain.
|abstract-algebra|group-theory|intuition|group-homomorphism|
0
Rational functions where they are undefined
There's something I can't quite wrap my head around, and I am not satisfied with my professor's "that's just defined that way" answer. So suppose we have: $$f_1(x) = x - 5$$ $$f_2(x) = \frac{1}{x - 5}$$ $$f_3(x) = 1$$ By simple algebra, we can see that $f_4(x) = f_1(x) \cdot f_2(x) = f_3(x)$ What I mean is the rational function we get from dividing $x - 5$ by $x - 5$ is the same as the constant function $y = 1$ Now I like to look at functions graphically to understand them. So we can clearly see that $f_1(5) = 0$ and $f_2(5)$ is not defined. So how come, when we multiply these two functions, we end up with a function (or a curve) that is defined at point $x = 5$ . By logic, on $x = 5$ , we should have $0/0$ , which is undefined. But here it just takes the value $1$ . How come? Visuals to better understand what's bugging me: I would appreciate if your answer addresses this logically (or rather conceptually), not as a plain mathematical proof using algebra. I too, know, that algebraically
In fact, $f_1(x)\cdot f_2(x)=\dfrac{x-5}{x-5}$ is not the same function as $f_3(x)=1$ . One has that $f_1(x)\cdot f_2(x)=1=f_3(x)$ for every $x\neq 5$ , but in $x=5$ $f_1\cdot f_2$ is not defined as the denominator vanishes. Nonetheless, $f_1\cdot f_2$ can be extended continuously to another function which is defined in $5$ , namely the constant function $1$ (which would now be $f_3$ ). This happens because the numerator has a zero of at least the order of the zero of the denominator. For example, if we defined $f_4(x)=\dfrac{x-5}{(x-5)^2}$ , you can check there is no way to continuously extend $f_4$ to $x=5$ , because the denominator has a zero of order $2$ and the numerator has a zero of order $1$ (one gets a vertical asymptote). However, $f_5(x)=\dfrac{(x-5)^2(x-3)}{(x-5)(x-2)}$ may be continuously extended to $x=5$ .
|calculus|functions|
1
How to find the range of the function $f(x) = \frac{1}{x^2 - x}$
$f(x) = \displaystyle \frac{1}{x^2 - x}$ I found: Domain, $x \in \mathbb{R} \setminus \{0, 1\}$ But I am struggling to find the range of this function. My approach to find the range: $y = \displaystyle \frac{1}{x^2 - x}$ $\Rightarrow yx^2 -yx - 1 = 0$ After this I used the discriminant rule: $D \ge 0$ $\Rightarrow D = y^2 +4y \ge 0$ From above, I got $y > 0 \space$ or $\space y \in (0, \infty)$ But I am unable to find the rest of the range. On the Internet, I found the range of the above function as $y \in$ $(-\infty, -4] \space \cup \space (0, \infty)$
Let us find the range of the function $f(x)=\frac1{x^2-x}$ . The domain of this function is indeed $D=\mathbb{R}\setminus \{0,1\}$ . Now let us determine all such values of $y$ that there exists $x$ in $D$ such that $$y=\frac1{x^2-x}.$$ Since $x\ne 0$ or $1$ , it is equivalent to the equation with respect to $x$ : $$yx^2-yx-1=0.$$ If $y=0$ then the equality doesn’t hold. If $y\ne0$ then this is a quadratic equation. Since $0$ and $1$ are not its solutions, its solutions are always in $D$ . It has solutions if and only if its discriminant is non-negative. So $$y^2+4y\ge0$$ $$y(y+4)\ge 0$$ $$y\le -4\text{ or }y\ge 0.$$ Removing $y=0$ , we get that the range of $f$ is $$(-\infty,-4]\cup(0,\infty).$$
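A quick numerical sanity check of this range (a sketch of mine, not part of the original answer): on $(0,1)$ the denominator $x^2-x$ lies in $[-\tfrac14,0)$, so $f\le-4$ there, with $f(\tfrac12)=-4$; outside $[0,1]$ the denominator is positive, so $f>0$.

```python
import numpy as np

def f(x):
    return 1.0 / (x**2 - x)

# inside (0, 1): f <= -4, with the maximum -4 attained at x = 1/2
xs_in = np.linspace(0.01, 0.99, 9999)
assert f(0.5) == -4.0
assert np.all(f(xs_in) <= -4.0)

# outside [0, 1]: f > 0
xs_out = np.concatenate([np.linspace(-5, -0.01, 500), np.linspace(1.01, 5, 500)])
assert np.all(f(xs_out) > 0)
```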
|algebra-precalculus|
1
Rank of $DH(x,x^{\prime})$ where $H$ does not depend on $x^\prime$
Suppose $f:\mathbb{R}^m\rightarrow\mathbb{R}^n$ , $\text{rank}(Df(x))=k.$ Now Define a function $H: \mathbb{R}^m\times \mathbb{R}^n \rightarrow \mathbb{R}^k$ , $H(x,x')=f(x)$ . Does $\text{rank}(DH(x,x^{\prime}))= \text{rank}(Df(x))$ ? Can I conclude that $\text{nullity}DH(x,x') = m+n-\text{rank}(Df(x))=m+n-k$ ?
I would argue in a few steps. First, look that \begin{align} H(x + h, x') &= f(x + h)\\ &= f(x) + Df(x)h + \varepsilon(h)\\ &= H(x, x') + Df(x)h + \varepsilon(h) \end{align} and \begin{align} H(x, x' + k) = f(x) = H(x, x'). \end{align} From this follows that \begin{align} H(x + h, x' + k) &= H(x, x' + k) + Df(x)h + \varepsilon(h)\\ &= H(x, x') + Df(x)h + \varepsilon(h). \end{align} Therefore, by the uniqueness of the derivative, $DH(x,x')(h,k) = Df(x)h$ . From this one have $\text{rank}(DH(x,x')) = k$ and $\text{null}(DH(x,x')) = m + n - k$ .
|real-analysis|calculus|linear-algebra|
1
Proving the Increase of $(a+\delta)_m - (a)_m$ Given Decrease of $\frac{(a+\delta)_m}{(a)_m}$.
The question I'm struggling with is as follows: Let $a>0$ and $\delta > 0$ (fixed). Suppose $ a \mapsto \frac{(a+\delta)_{m}}{(a)_{m}}$ is decreasing then prove that $ a \mapsto (a+\delta)_{m} - (a)_{m} $ is increasing ( means Wright convex). I'm reading a paper in which the authors have given no reason for this. I have tried by taking $b>a$ and tried to show that $(a+\delta)_{m} - (a)_{m} using $\frac{(a+\delta)_{m}}{(a)_{m}} > \frac{(b+\delta)_{m}}{(b)_{m}}$ (decreasing). I have getting $(a+\delta)_{m} - (a)_{m} - (b+\delta)_{m} + (b)_{m} > \text{something} $ . I can't get the answer to this. I guess there is something related to increasing and decreasing functions. I'm not getting an idea how to proceed.
Assuming $m>0$ , suppose $x>0$ and $\delta>0$ . Define $$h(x)= \frac{(x+\delta)_m}{(x)_m}.$$ Let, for the sake of convenience, $f(x) = (x+\delta)_m$ and $g(x) = (x)_m$ . I now see that there is an extra information in your problem statement which was unnecessary. I see that with $\delta>0$ , $h(x)$ is decreasing for $x>0$ . To see this, logarithmically differentiate $h(x)$ to get, $$h^{\prime}(x) = -\delta h(x) \left(\frac{1}{x(x+\delta)}+\dots+\frac{1}{(x+\delta+n-1)(x+n-1)}\right).$$ $h(x)$ is anyway positive. So, $h^{\prime}(x)$ is negative and hence $h$ is decreasing. Quite clearly $$f^{\prime}(x) = \frac{f(x)}{x+\delta}+\frac{f(x)}{x+\delta+1}+\dots+\frac{f(x)}{x+\delta+n-1}.$$ And $$g^{\prime}(x) = \frac{g(x)}{x}+\frac{g(x)}{x+1}+\dots+\frac{g(x)}{x+n-1}.$$ Further, it is easy to see by the definitions of $f$ and $g$ that $$\frac{f(x)}{x+\delta+k}-\frac{g(x)}{x+k} \geq 0,$$ whenever $k\geq 0$ . Thus $$f^{\prime}(x) \geq g^{\prime}(x),$$ indicating that $f(x)-g(x)$ is increasing.
|real-analysis|calculus|inequality|convexity-inequality|
1
Does the determinant of a Knot bound the number of primes for which the mod p rank is nonzero?
Definition. If $V$ is a Seifert matrix for a $\operatorname{knot} K$ , then the determinant of $K$ , denoted $\operatorname{det}(K)$ , is the absolute value of the determinant of the symmetrization of the Seifert matrix: \begin{align} \det(K)=|\det(V+V^T)|. \end{align} Definition. The $\bmod p$ rank of a knot is the dimension of the solution space of the system of linear equations attributed to a $\bmod p$ labeling of its knot diagram. Out of pure curiosity, I am wondering whether these two numerical invariants from knot theory have anything to do with each other. Since both are defined using matrices, I presume there might be an inequality that can be derived? Similar to how the unknotting number $U(K)$ and the signature $\sigma(K)$ relate via $2 U(K) \geq|\sigma(K)|$ , or how the mod $p$ ranks and the bridge index $\operatorname{brg}(K)$ relate via $\bmod p$ rank $\leq \operatorname{brg}(K) - 1$ . Context: I have studied the first five chapters of Livingston's Knot Theory book last year.
A knot with rank $n$ mod $p$ has $p(p^n-1)$ nontrivial colourings with $p$ colours (this is Theorem 2 in Counting m-coloring classes of knots and links by Brownell, O'Neil and Taalman), so the rank mod $p$ is bounded by $v_p(\det K)$ , the exponent of $p$ in the factorisation of $\det K$ . It is known that you can't calculate the rank of a knot directly from the determinant, because $\det(8_{18}) = \det (9_{24}) = 45$ , yet their mod 3 ranks are 1 and 2 respectively.
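The counting theorem is easy to test on a small example. Below is a Python sketch for the trefoil, whose determinant is $3$ and whose mod $3$ rank is $1$; the crossing data is written by hand (an assumption of mine, not taken from the answer):

```python
from itertools import product

def count_colorings(p, crossings, arcs):
    # Fox p-colorings: at each crossing (over, under1, under2) the arc labels
    # must satisfy 2*over - under1 - under2 = 0 (mod p)
    return sum(
        all((2 * c[o] - c[u1] - c[u2]) % p == 0 for (o, u1, u2) in crossings)
        for c in product(range(p), repeat=arcs)
    )

# trefoil: 3 arcs, one relation per crossing
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
total = count_colorings(3, trefoil, 3)
nontrivial = total - 3  # subtract the p constant (trivial) colorings

assert total == 9                       # solution space of dimension 2 over F_3
assert nontrivial == 3 * (3**1 - 1)     # p*(p^n - 1) with rank n = 1
```

Consistently with the bound above, the rank $1$ does not exceed $v_3(\det(3_1)) = v_3(3) = 1$.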
|linear-algebra|algebraic-topology|determinant|knot-theory|knot-invariants|
0
Positive integers satisfying $a^b = cd$ and $b^a = c+d$
Yesterday, at 23:18, I thought it was a remarkable moment of the day. The digits on the watch were providing a quadruplet of positive integers that satisfy the following system of equations: $$\begin{align} a^b &= cd \\ b^a &= c+d \end{align}$$ I wondered what the set of all positive integer solutions of this system was. Writing $d=b^a-c,$ I obtained a quadratic of $c$ with coefficients in terms of $a$ and $b$ , which led me to the following: $$\left\{ c,d \right\} = \left\{\frac{b^a-\sqrt{b^{2a}-4a^b}}{2},\frac{b^a+\sqrt{b^{2a}-4a^b}}{2}\right\}$$ Therefore, given positive integers $a$ and $b,$ there is a solution if and only if $b^{2a}-4a^b$ is a perfect square. A brute force search using this result yields the following solutions: $(1,2,1,1),$ $(2,2,2,2),$ $(2,3,1,8),$ and $(2,3,8,1).$ I believe that there is no other solution than these. However, I'm unable to prove it. I would be glad if anyone could help me.
Last night I watched the video on youtube and tried my luck :) I was able to rule out a small number of cases. If $b^{2a}-4a^b$ is a perfect square, then any $a>1$ must be even. Proof. We use the already proven fact that $b^{2a}-4a^b$ is a perfect square iff there are positive integers $c,d$ satisfying the equations \begin{align} a^b &= cd \label{eq:cd} \tag{1} \\ b^a &= c+d \label{eq:c+d} \tag{2}. \end{align} Assume that $b^{2a}-4a^b$ is a perfect square and $a > 1$ is an odd number. Then, $c$ and $d$ are odd numbers by \eqref{eq:cd}. But then $b$ must be even by \eqref{eq:c+d}. Let $b=2k$ . Since $b^{2a}-4a^b$ is a perfect square, we have a Pythagorean triple of the form $p = (\star, 2a^k, (2k)^a)$ . It is well-known that for any Pythagorean triple $(x,y,z)$ either all of $x, y, z$ are even or $z$ is odd and $x$ and $y$ have different parity. Observe that simplifying the triple $p$ by dividing each term by 2 yields a new triple of the form $p' = (\star, a^k, 2^{a-1}k^a)$ . No
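The brute force search from the question can be reproduced in a few lines; the sketch below (my own, using the perfect-square criterion) returns quadruplets with $c\le d$, so each $(c,d)$ swap appears once:

```python
from math import isqrt

def search(limit):
    # find (a, b, c, d) with a^b = c*d and b^a = c + d, c <= d, by checking
    # when the discriminant b^(2a) - 4*a^b is a perfect square
    found = []
    for a in range(1, limit + 1):
        for b in range(1, limit + 1):
            disc = b ** (2 * a) - 4 * a ** b
            if disc < 0:
                continue
            r = isqrt(disc)
            if r * r == disc:
                c, d = (b ** a - r) // 2, (b ** a + r) // 2
                found.append((a, b, c, d))
    return found

sols = search(6)
# every quadruplet returned satisfies both equations exactly
assert all(a**b == c * d and b**a == c + d for (a, b, c, d) in sols)
```

Within this (small) search window only the quadruplets listed in the question appear.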
|number-theory|elementary-number-theory|diophantine-equations|exponential-diophantine-equations|
0
What is the name of the 3D matrices?
A quantity in $\mathbb{R}$ is called a scalar. Multiple scalars together form a vector: $\mathbb{R}^n$ Two or more vectors together form a matrix: $\mathbb{R}^{n \times m }$ But what is the name of a 3-dimensional array of elements? ($\mathbb{R}^{n \times m \times k }$)
There is no such thing as a single standard name. Tensors are not the same as matrices, since depending on the order (number of indices) and the dimension of the space you live in, you can find structures that are different from this. However, a tensor of rank 3 in a three-dimensional space, that is, a structure carrying three indices like the one presented next, is the closest structure you can use. 3-dimensional array T = [ [[T_{111}, T_{112}, T_{113}], [T_{121}, T_{122}, T_{123}], [T_{131}, T_{132}, T_{133}]], [[T_{211}, T_{212}, T_{213}], [T_{221}, T_{222}, T_{223}], [T_{231}, T_{232}, T_{233}]], [[T_{311}, T_{312}, T_{313}], [T_{321}, T_{322}, T_{323}], [T_{331}, T_{332}, T_{333}]] ]
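In programming libraries such an object is usually just called a multidimensional (rank-3, or 3-way) array; a NumPy sketch of the structure above:

```python
import numpy as np

# a rank-3 array of shape n x m x k, here 3 x 3 x 3
T = np.arange(27).reshape(3, 3, 3)

# entries are addressed by three indices, T[i, j, k]
assert T.ndim == 3
assert T.shape == (3, 3, 3)
assert T[1, 2, 0] == 1 * 9 + 2 * 3 + 0
```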
|linear-algebra|matrices|vectors|
0
Convex function almost surely differentiable.
If $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a convex function, I heard that $f$ is almost everywhere differentiable. Is it true? I can't find a proof ($n$-dimensional). Thank you for any help
It's of course weaker than Rademacher's theorem, but here is a direct proof with some measure theory. (Convex Functions, Monotone Operators and Differentiability, R. Phelps, first chapter, Exercise 1.17) The function $x \to d^{+}f(x)(e_k)$ is a pointwise limit of continuous functions, so it is Borel measurable. The set where a convex function $f$ in finite dimensions is not differentiable is exactly the set of points for which the gradient doesn't exist (see Exercise 1.15 (b)). Hence the corresponding set in each direction, $B_k=\{x \in D : \frac{\partial f}{\partial x_k}(x) \text{ doesn't exist}\}$ , is Borel measurable. By the corresponding theorem in $\mathbb R^1$ (a convex function of one real variable is differentiable outside a countable set), the section of $B_k$ along every line in the direction $e_k$ has measure zero. Thus from Fubini's theorem $$ \operatorname{meas}(B_k)=\int_{\mathbb{R}^n} \chi_{B_k}(x)\,dx=\int_{\mathbb{R}^{n-1}} \left(\int_{\mathbb{R}} \chi_{B_k}(x)\,dx_k \right)dx_1\ldots dx_{k-1}\, dx_{k+1}\ldots dx_n=0. $$ Since the non-differentiability set is contained in $B_1\cup\dots\cup B_n$ , it has measure zero.
|real-analysis|derivatives|convex-analysis|
0
Two gaussian dependency
Given $x_1, x_2, x_3 \sim N(0,1)$ , independently and identically distributed, and $y = x_1\sin(x_3) + x_2\cos(x_3)$ . I know $Y$ is Gaussian and independent of $x_3$ . I need to check if $Y$ and $x_1$ are independent. I see $E[yx_1] = 0$ , but I don't know if they are jointly Gaussian, so I can't use that to claim independence. Also, I think they are dependent since $E[y|x_1=0] = x_2\sin(x_3)$ and $E[y|x_1=1] = \cos(x_3) + x_2\sin(x_3)$ , which is different from $E[y|x_1=0]$ ... so I see dependency.
To determine whether $Y$ and $x_1$ are independent, you need to consider whether the conditional distribution of $Y$ given $x_1$ is the same as the marginal distribution of $Y$ . Given that $x_1, x_2, x_3$ are i.i.d. standard normal variables and $Y = x_1 \sin(x_3) + x_2 \cos(x_3)$ , you correctly observe that $Y$ is Gaussian and independent of $x_3$ . You've also correctly noted that $E[Yx_1] = 0$ , which implies that $Y$ and $x_1$ are uncorrelated; however, uncorrelatedness is not enough to establish independence. Finally, you've noticed that $E[Y|x_1=0]$ and $E[Y|x_1=1]$ are different, which suggests that the conditional distributions of $Y$ given different values of $x_1$ are not the same, indicating dependency. Based on your analysis, $Y$ and $x_1$ are dependent: the conditional distribution of $Y$ varies with $x_1$ .
|stochastic-processes|random-variables|normal-distribution|independence|
1
Two gaussian dependency
Given $x_1, x_2, x_3 \sim N(0,1)$ , independently and identically distributed, and $y = x_1\sin(x_3) + x_2\cos(x_3)$ . I know $Y$ is Gaussian and independent of $x_3$ . I need to check if $Y$ and $x_1$ are independent. I see $E[yx_1] = 0$ , but I don't know if they are jointly Gaussian, so I can't use that to claim independence. Also, I think they are dependent since $E[y|x_1=0] = x_2\sin(x_3)$ and $E[y|x_1=1] = \cos(x_3) + x_2\sin(x_3)$ , which is different from $E[y|x_1=0]$ ... so I see dependency.
Although $E(YX_1)=0$ and $Y,X_1\sim N(0,1)$ , $Y$ and $X_1$ are not independent. Actually, conditioning on $X_3$ (given $X_3$ , the variable $sY+tX_1$ is normal with mean $0$ and variance $s^2+t^2+2st\sin X_3$ ), $$E(e^{sY+tX_1})=E\left[E(e^{sY+tX_1}\mid X_3)\right]=e^{\frac{s^2}{2}}e^{\frac{t^2}{2}}E(e^{st \sin X_3}).$$ Since $E(e^{st \sin X_3})\neq 1$ for $st\neq 0$ , this proves that $Y,X_1$ are not independent and that the pair $(Y,X_1)$ is not Gaussian. Elegant exercise.
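The moment-generating-function argument can be checked by Monte Carlo; a small NumPy sketch (sample size and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1, x2, x3 = rng.standard_normal((3, n))
y = x1 * np.sin(x3) + x2 * np.cos(x3)

s = t = 1.0
# if (Y, X1) were independent standard Gaussians, this ratio would be 1;
# the formula above predicts E[exp(st*sin(X3))] > 1 instead
ratio = np.mean(np.exp(s * y + t * x1)) / (np.exp(s**2 / 2) * np.exp(t**2 / 2))
```

For $s=t=1$ the ratio estimates $E(e^{\sin X_3})$, which is strictly greater than $1$ by Jensen's inequality.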
|stochastic-processes|random-variables|normal-distribution|independence|
0
Multiplication operator is injective if and only if its range is dense
I was working on couple of questions regarding multiplication operators and I couldn't figure out how to proceed for this one. Let $f \in L^{\infty} ([a,b])$ be continuous and consider the multiplication operator $M_f$ defined as $M_f : L^{2} ([a,b]) \to L^{2} ([a,b])$ . $M_f(h) = fh$ for $h \in L^2 ([a,b])$ . I want to show that $M_f$ is injective if and only if $R(M_f)$ is dense in $L^2([a,b])$ . We know that $M_f$ is injective if and only if $f \neq 0$ a.e. in $[a,b]$ . I have managed to show this. I believe showing that $f \neq 0$ a.e. in $[a,b]$ if and only if $R(M_f)$ is dense in $L^2([a,b])$ is a good way to approach this problem but I couldn't figure out how to proceed from here. I would appreciate any help!
I'll drop the continuity assumption. Suppose $M_{f}$ is injective and suppose, for contradiction, that $\mu\{f=0\}\neq 0$ . Then we have that $M_{f}(\mathbf{1}_{\{f=0\}})=0$ a.e., but $\mathbf{1}_{\{f=0\}}$ is not $0$ a.e., which contradicts injectivity. Thus we have $\mu\{f=0\}=0$ . Now consider $g\in \operatorname{Range}(M_f)^{\perp}$ . So $\int_{[a,b]}M_{f}(h)g\,d\mu=\int_{[a,b]}fhg\,d\mu=0$ for all $h\in L^{2}[a,b]$ . This means that $\langle h,fg\rangle=0$ for all $h\in L^{2}[a,b]$ , which means that $fg=0$ a.e. But as $\mu\{f=0\}=0$ , you have that $g=0$ a.e. Thus $\operatorname{Range}(M_f)^{\perp}=\{0\}$ , which proves that $\overline{\operatorname{Range}(M_f)}=L^{2}[a,b]$ . Conversely, if $\overline{\operatorname{Range}(M_f)}=L^{2}[a,b]$ , then again we have that $\mu\{f=0\}=0$ , as otherwise $\mathbf{1}_{\{f=0\}}\in\operatorname{Range}(M_f)^{\perp}$ and $\mathbf{1}_{\{f=0\}}\neq 0$ . And this also shows that $M_{f}(h)=0$ implies $h=0$ a.e., which implies injectivity. So both conditions are equivalent, and they are also equivalent to $\mu\{f=0\}=0$ .
|functional-analysis|operator-theory|
0
Tilings closed under translation in only one direction
Background A tiling or tessellation of the plane is periodic if it is closed under at least two non-parallel translations. Three examples of periodic tilings, including their corresponding translation vectors, are shown below: Question I wonder if tilings that are closed under only one translation also exist, and if so, what they are called. I am also curious what they look like.
I don't know what they are called; they might not even have a name. But I can come up with two simple examples on the spot: Take your first two examples (triangles and squares), take a single row from those, and push that row a little bit to the side. You have now broken the vertical symmetry, and have only the horizontal symmetry left. If you have symmetry in only one direction (say horizontal), then the pattern consists not of repeated, finite-size tiles, but of repeated infinitely tall, finite-width columns. Make whatever column you like, copy it indefinitely, and put all those columns next to one another (at the same height). You now have yourself an example.
|periodic-functions|tiling|
1
Let $P(X_n=n)=1/n^a$ and zero otherwise. Let $Y_n=X_{n+1}\cdot X_n$. Find all $a$ such that $\liminf Y_n=0$ and $\limsup Y_n=\infty$ almost surely.
Remark: My attempt is WRONG, as I falsely assumed that the $Y_n$ were independent. I've corrected this in my own answer to this post. That answer is correct but not especially well written; read the selected answer for a better quality answer. Question: Let $a > 0$ . For $n \in \mathbb{N}$ , let the random variable $X_n$ on $\{0,n\}$ be defined by $$ \begin{cases} P(X_n=0)=1- \frac{1}{n^{a}}\\ P(X_n=n)=\frac{1}{n^{a}}. \end{cases} $$ All $X_n$ are independent. Define the random variables $Y_n=X_{n+1}\cdot X_n$ . Find the set of all $a$ such that $$P(\omega \in \Omega: \liminf_{n \to \infty} Y_n(\omega) = 0 \cap \limsup_{n \to \infty} Y_n(\omega) = \infty) = 1.$$ Attempt: I have already proved the following statement. Part 1 Let's denote $$ D = \{ \omega \in \Omega : Y_n(\omega) \neq 0 \text{ for infinitely many } n \}.$$ In order to have $Y_n(\omega) \neq 0 $ we need both $ X_n(\omega) \neq 0$ and $X_{n+1} (\omega) \neq 0$ , so by independence $$P(Y_n \neq 0 ) = P(X_n\neq 0 \cap X_{n+1} \neq 0 ) = \frac{1}{n^a}\cdot\frac{1}{(n+1)^a}.$$
I thought again about this question and my answer, and I have found that it is incorrect for different reasons (one of them is that I used the BC lemma incorrectly). So below you will be able to read a new (and, I think and hope, correct) answer of mine. Answer: 1- According to the answer I've posted here, we have that $P(D) = 1$ , with $D = \{ Y_n(\omega) \neq 0 $ for infinitely many indices $n \in \{1,2,3,.... \} \} $ , for $0 < a \le \frac{1}{2}$ . Moreover it is easy to notice that $ P(\limsup Y_n = \infty) $ corresponds to the probability $P(D)$ , as the events $ D $ and $\{ \omega \in \Omega : \limsup_ {n \to \infty }Y_n = \infty \} $ are a.s. equal. 2- Now let's focus on the event $ E= \{ ω∈Ω:Y_n(ω)=0 $ for infinitely many $ n \}, $ which corresponds to the event $ \{\omega \in \Omega : \liminf _{n \to \infty } Y_n = 0 \}.$ We want to prove that $P(E)=1 , \forall a>0$ , which will mean that $ E = \Omega$ a.s. But again here we can not use the BC lemma directly, as the events $ \{ A_k \} = \{Y_k = 0 \}$ are not independent.
|probability|probability-theory|probability-distributions|solution-verification|borel-cantelli-lemmas|
0
Find transformation between two sets of points by minimizing the maximum distance
There are two sets of four 2D points. There is a 1 to 1 correspondence between the two sets. The points in both sets are close to the following nominal positions, deviating only by a random value in both directions: (-1.5, 0.0), (-0.5, 0.0), (0.5, 0.0), (1.5, 0.0). I would like to find a transformation (translation and rotation) which brings one set over the other by minimizing the maximum distance between the corresponding points: $f = max(||a_i - b_i||^2 : i = 1, 2, 3, 4)$ I tried different optimization methods in the Python scipy.optimize package, but none of them worked well. They are close, but not perfect. I am not interested in any other best fit algorithms (SVD, PCA etc.). Playing with random datasets, I observed that in some cases after the optimization, two pairs of points have the same distance, sometimes three. Is there any algorithm, geometrical approach to find the global minimum of f? Here is an example, before any optimization. Let's say, we want to move the orange
As the points are ordered, a rigid motion can be implemented using a translation and a rotation. Given the sets $A$ and $B$ , the distances can be computed as $$ d_i^{\gamma} =\|a_i - \left(R(\theta)\cdot(b_i-p_0)+p_0\right)\|^{\gamma} $$ with $R(\theta)$ a rotation matrix, $p_0 = (x_0,y_0)$ , and $\gamma \in \{0,1,2\}$ representing the selected norm. To treat the points democratically, we choose the optimization problem $$ \min_{\theta,x_0,y_0}\left(\max_i d_i^{\gamma}(\theta,x_0,y_0)\right) $$ A MATHEMATICA script implementing this procedure follows. We can choose between obj0, obj1, obj2 to perform the optimization. For the task, it is advisable to use a solver that uses evolutionary procedures. In red the set $A$ , in blue the set $B$ , and in black the adjusted $B$ set. A = {{-2.21405945, -0.0735467}, {-1.08033674, -0.05451294}, {0.98732845, 0.11462708}, {2.22058245, 0.08284019}}; B = {{-2.58064647, -0.2670957}, {-0.4799502, -0.06829777}, {0.69444341, 0.17659052}, {2.4039824, -0.11898507}};
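For readers who want to stay in Python, the same minimax objective can be handed to `scipy.optimize.minimize`. The point sets below are the example data from the answer; Nelder-Mead is chosen here because the max makes the objective nonsmooth (a sketch, not a tuned global solver, so a local optimum is all it guarantees):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[-2.21405945, -0.0735467], [-1.08033674, -0.05451294],
              [0.98732845, 0.11462708], [2.22058245, 0.08284019]])
B = np.array([[-2.58064647, -0.2670957], [-0.4799502, -0.06829777],
              [0.69444341, 0.17659052], [2.4039824, -0.11898507]])

def max_dist(params):
    # rotate B by theta about the origin, then translate by (tx, ty),
    # and return the largest point-to-point distance to A
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.max(np.linalg.norm(A - (B @ R.T + [tx, ty]), axis=1))

res = minimize(max_dist, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
```

A derivative-free or evolutionary solver is advisable here for the same nonsmoothness reason the answer mentions.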
|geometry|optimization|
0
What's the conditions for equality of Cheeger inequality?
I have learnt the Cheeger inequality for a graph $G$ : $$\lambda_2/2\le h(G)\le\sqrt{2\lambda_2}.$$ But does the equality hold? For a non-connected graph, it is obvious. But for a connected graph, I have tried some examples and find the first equality may hold, but fail at the second one. Are there any conditions for equality of Cheeger inequality?
Equality holds in the Cheeger inequality under certain conditions. Specifically, equality holds if and only if the graph $G$ is a regular graph or a bipartite graph. Regular graphs are graphs where every vertex has the same degree, and bipartite graphs are graphs where the vertices can be divided into two disjoint sets such that every edge connects a vertex from one set to a vertex in the other set. For general connected graphs, equality may not hold, as you observed in your examples.
|graph-theory|spectral-graph-theory|graph-laplacian|
0
How to integrate $\int_{2}^{\infty} \frac{\pi(x) \ln(x^{\sqrt{x}}) \cdot (x^2 + 1)}{(x^2 - 1)^3} \,dx$
How to integrate $$\int_{2}^{\infty} \frac{\pi(x) \ln^2(x^{\sqrt{x}}) \cdot (x^2 + 1)}{(x^2 - 1)^3} \,dx \quad?$$ Wolfram gives the numerical value $$\int_{2}^{\infty} \frac{\pi x (1 + x^2) \log^2(x^{\sqrt{x}})}{(-1 + x^2)^3} \, dx = 1.46036 $$ My idea Let $$I = \int_{2}^{\infty} \frac{\pi(x) \ln^2(x^{\sqrt{x}}) (x^2 + 1)}{(x^2 - 1)^3} \, dx$$ The $\ln^2(x^{\sqrt{x}})$ looks kinda annoying, so let's manipulate it into something more familiar. Using the properties of logarithms we get: $$\ln^2(x^{\sqrt{x}}) = (\sqrt{x} \ln(x))^2 = x \ln^2(x) \implies I = \int_{2}^{\infty} \frac{\pi(x) x \ln^2(x) (x^2 + 1)}{(x^2 - 1)^3} \, dx$$ Also, notice that $x^2 + 1 = x^2 - 1 + 2$ , so we can express $I$ as: $$I = \int_{2}^{\infty} \left(\frac{\pi(x) x \ln^2(x)}{(x^2 - 1)^2} + \frac{2\pi(x) x \ln^2(x)}{(x^2 - 1)^3}\right) \, dx$$
I will assume that you do mean $\log^2(x)$ instead of $\log(x)$ , as you change this halfway through. We can write your integral from $0$ instead of $2$ , since $\pi(x)=0$ for $x<2$ and the zero at $x=1$ is of high enough order that the pole there from the denominator becomes removable. Recall that for $\Re(s)>1$ , $$\frac{1}{s}\log\zeta(s)=\int_{0}^{\infty}\frac{\pi(x)}{x(x^s-1)}\,\mathrm{d}x.$$ Taking the second derivative in $s$ , we have $$\frac{d^2}{ds^2}\left(\frac{1}{s}\log\zeta(s)\right)=\int_{0}^{\infty}\frac{\pi(x)x^{s-1}(x^s+1)\log^2(x)}{(x^s-1)^3}\,\mathrm{d}x.$$ Taking $s\mapsto 2$ , we recover your integral immediately, giving $$I=\frac{1}{4}\log\zeta(2)-\frac{\zeta'(2)}{2\zeta(2)}-\frac{1}{2}\left(\frac{\zeta'(2)}{\zeta(2)}\right)^2+\frac{\zeta''(2)}{2\zeta(2)}.$$ Using $\zeta(2)=\frac{\pi^2}{6}$ and $\zeta'(2)=\frac{\pi^2}{6}\left(\gamma+\log(2\pi)-12\log(A)\right)$ , where $A$ is the Glaisher-Kinkelin constant, we can give the integral as $$I=\frac{1}{4}\log\left(\frac{\pi^2}{6}\right)+\cdots$$
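The closed form can be sanity-checked numerically (a sketch of mine using mpmath's numerical differentiation; it also verifies $\zeta(2)=\pi^2/6$):

```python
from mpmath import mp, mpf, zeta, log, diff, pi

mp.dps = 30  # work with 30 significant digits

z = zeta(2)
z1 = diff(zeta, 2, 1)   # zeta'(2)
z2 = diff(zeta, 2, 2)   # zeta''(2)

# closed form from the answer
closed = log(z) / 4 - z1 / (2 * z) - (z1 / z) ** 2 / 2 + z2 / (2 * z)

# direct second derivative of (1/s) * log zeta(s) at s = 2
direct = diff(lambda s: log(zeta(s)) / s, 2, 2)
```

The two values agree to high precision, confirming the differentiation step.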
|calculus|integration|definite-integrals|improper-integrals|closed-form|
0
Frechet derivative of "power function"
Let $H$ be a Hilbert space with norm $\|\cdot\|$ , and let $p\geq2.$ Define $F(x):=\|x\|^p,\,\forall x\in H.$ Now I want to calculate the first and second order Frechet derivatives of $F$ . When $p=2$ , it is straightforward to check that $F^\prime(x)=2(x,\cdot)$ using the Hilbert structure, and it is reasonable to guess that $F^{\prime\prime}(x)=2$ . But when $p$ is other than $2$ , the thing seems very different. I guess that $$F^\prime(x)=p\|x\|^{p-2}(x,\cdot)$$ and $$F^{\prime\prime}(x)=(p-1)p\|x\|^{p-3}(x,\cdot)$$ but I cannot check them. Any help is appreciated.
You can use the chain rule because $x\mapsto\|x\|^p$ is the composition of the mappings $x\mapsto\|x\|^2$ and $\alpha\mapsto\alpha^{\frac{p}{2}}$ .
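Carrying out the suggested chain rule (my own computation, for $x\neq 0$; write $F = g\circ N$ with $N(x)=\|x\|^2$ and $g(\alpha)=\alpha^{p/2}$):

```latex
% first derivative, via F = g \circ N
N'(x)h = 2(x,h), \qquad g'(\alpha) = \tfrac{p}{2}\,\alpha^{\frac{p}{2}-1},
\qquad\text{so}\qquad
F'(x)h = \tfrac{p}{2}\,\|x\|^{p-2}\cdot 2(x,h) = p\,\|x\|^{p-2}(x,h).

% differentiating x \mapsto p\|x\|^{p-2}(x,\cdot) once more (product rule)
F''(x)(h,k) = p\,\|x\|^{p-2}(h,k) + p(p-2)\,\|x\|^{p-4}(x,h)(x,k).
```

So the guess for $F'$ is correct, while $F''(x)$ is a bilinear form rather than the scalar expression guessed in the question; for $p=2$ it reduces to $2(h,k)$, matching $F''(x)=2$, and for $p>2$ both terms extend continuously by $0$-limits to $x=0$.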
|real-analysis|functional-analysis|frechet-derivative|
1
Does integrating both sides of an equation in $dx$ invalidate the equality?
I'm struggling to grasp the justification behind integrating both sides of an equation. While I understand that operations can be applied to both sides, maintaining equality, it appears that this principle doesn't apply here. Given the equality $x=y$ , integrating both sides in $dx$ would give $\frac{x^2}{2} = xy$ , but this seems not to be valid, since if I start from $x=y=5$ , I would get $\frac{25}{2}=25$ , which is not true. Given for instance $\log(a)=\log(b)$ , if I integrate both sides in $da$ I get: $a\log(a)-a=\log(b)\,a$ . If $a=b=2$ then $a\log(a)-a=-0.61\ldots$ and $\log(b)\,a=1.38\ldots$ , which are different. What am I doing wrong here?
If you anti-differentiate two equal functions the answers differ only by a constant. This is because the derivative of a constant function is zero. You do need to be careful with integrating expressions involving $x$ and $y$ if one of them depends on the other. Integrating $y$ with respect to $x$ and getting $xy+C$ assumes that $y$ remains fixed while you change $x$ , but if $y=x$ , then this is not the case. For comparison, in general the derivative of $xy$ with respect to $x$ is not $y$ , but $$\frac{d(xy)}{dx}=y+x\frac{dy}{dx} $$ If $y$ does not depend on $x$ the derivative in the RHS is zero and we get just $y$ .
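The product-rule point can be checked with a computer algebra system; a small SymPy sketch (mine, not from the original answer):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# general product rule from the answer: d(xy)/dx = y + x*dy/dx
assert sp.simplify(sp.diff(x * y, x) - (y + x * sp.diff(y, x))) == 0

# if y happens to equal x, integrating y dx gives x**2/2 (+ C), not x*y
assert sp.integrate(x, x) == x**2 / 2
```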
|real-analysis|calculus|integration|derivatives|logarithms|
0
An exercise on convex decreasing function properties
A function $f(x)$ is defined for $x\geq0$. It is positive, decreasing, convex and log-convex: $\frac{d^2}{dx^2}\log[f(x)]>0$, $f(0)<\infty$. Is it true that $(x f'(x))' > 0$ for sufficiently large $x$? Thanks in advance!
If $f=e^k$ is decreasing and log convex on $(0,\infty)$ we are asked to prove that there exists $x_0$ such that $$xk'^2(x)+(xk'(x))'\geq 0$$ for $x>x_0.$ To study this, we use the fact that $k'$ is increasing and negative. Therefore we can write $$k'(x)=-C-\int_{x}^{\infty}k''(t)dt$$ with $C\geq 0.$ Now $$I=xk'^2(x)+(xk'(x))'=x\left(\int_x^{\infty}k''(t)dt\right)^2+(xC-1)C+(2Cx-1)\int_{x}^{\infty}k''(t)dt+xk''(x)$$ which is positive if $x>1/C$ and $C>0$ . If $C=0$ then $$I=\left(x\int_x^{\infty}k''(t)dt-1\right)\left(\int_x^{\infty}k''(t)dt\right)+xk''(x).$$ It is negative if for instance $k''(t)=1/4t^2$ say for $t>1.$
|real-analysis|convex-analysis|
0
Finding $\lim_{x\to 0} x \ln(x)$ via $\lim_{x\to -\infty} x \exp(x)$ - is the composition to the right licit?
The high-school classes from a well-known high-school teacher of my country justify the limit of $x \ln(x)$ at $0$ by composing (i.e. doing a change of variable $X= \ln(x)$ ) on the right with the exponential function. The limit of $x \exp(x)$ at $-\infty$ being known, we can conclude that $\lim_{x\to 0} x \ln(x) = 0$ . I assume the theorem used here is the limit composition theorem. The limit composition theorem says: If $\lim_{x\to b} f(x) = c$ and $\lim_{x\to a} g(x) = b$ then $\lim_{x\to a} f(g(x)) = c$ . Here, $a,b,c$ are chosen in the extended real number line and $f,g$ are real-to-real functions. However, in order to use the limit composition theorem, one must be sure that $\lim_{x\to b} f(x) = c$ exists. So, what the high-school teacher is doing here (composing on the right with $\exp$ ) is not allowed, since we don't know if $\lim_{x\to 0} x \ln(x)$ exists. Here's a short example to show that composing on the right is nonsense if we don't make sure that $\lim_{x\to b} f(x) = c$ exists.
If $f(x)=x\log x$ , what you know is that $$ \lim_{t\to-\infty}f(e^t)=0. $$ This means that if you fix $\def\e{\varepsilon}\e>0$ , there exists $t_0$ such that $|f(e^t)|<\e$ for all $t<t_0$ . As $\lim_{x \to0^+}\log x=-\infty$ , there exists $\delta>0$ such that $\log x<t_0$ when $0<x<\delta$ . For such $x$ , we have $$ |f(x)|=|f(e^{\log x})|<\e. $$ And that's the definition of limit: given $\e>0$ we found $\delta>0$ such that $|f(x)|<\e$ whenever $0<x<\delta$ ; that is $$ \lim_{x\to0^+}f(x)=0. $$
|real-analysis|limits|
1
Reference for a combinatorial identity involving the number of derangements
Let $$c_n=n!\sum\limits_{k=0}^n (-1)^k \frac{1}{k!}$$ be the number of derangements of $n$ elements. The following combinatorial identity is coming up in my research: $$\sum\limits_{j=1}^{n-2}c_{n-j}{n-2\choose j-1}=(n-2)!(n-2)$$ Is there a known reference or proof for this identity? Simulations do support this.
I think this identity is true. It's quite similar to this . I'll give a combinatorial argument. Let's try and count the number of permutations of $\{1, \dotsc, n - 1\}$ that do not fix $1$ . One way to count is to note that the number of permutations fixing $1$ is just the number of permutations of $\{2, \dotsc, n - 1\}$ , so the answer is $(n - 1)! - (n - 2)! = (n - 2)!(n - 2)$ . (Or you can argue that there are $n - 2$ choices for where to send $1$ , then $n - 2$ choices for where to send $2$ , then $n - 3$ choices for where to send $3$ ...) Another way to count is more complicated: for each $1 \le j \le n - 2$ , choose $j - 1$ elements of $\{2, \dotsc, n - 1\}$ to keep fixed (note a permutation cannot fix all but one element), and then choose a derangement of the remaining $n - j$ elements of $\{1, \dotsc, n - 1\}$ . This counts each of the permutations we're interested in exactly once, so it follows that \begin{equation*} \sum_{j = 1}^{n - 2} c_{n - j} \binom{n - 2}{j - 1} = (n - 2)!(n - 2). \end{equation*}
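The identity is also easy to check by machine; here is a short sketch (mine, not part of the answer) using the standard recurrence $c_k = k\,c_{k-1} + (-1)^k$ for the derangement numbers.

```python
# Machine check of the identity: c_k computed via the recurrence
# c_0 = 1, c_k = k*c_{k-1} + (-1)^k, which agrees with the
# inclusion-exclusion formula n! * sum_{k=0}^n (-1)^k / k!.
from math import comb, factorial

def derangements(n):
    c = 1
    for k in range(1, n + 1):
        c = k * c + (-1) ** k
    return c

def lhs(n):
    return sum(derangements(n - j) * comb(n - 2, j - 1) for j in range(1, n - 1))

def rhs(n):
    return factorial(n - 2) * (n - 2)

checks = [lhs(n) == rhs(n) for n in range(3, 15)]
```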
|combinatorics|reference-request|summation|binomial-coefficients|derangements|
1
Differentiability of a function extended via symmetry
Suppose I have a function $f$ defined on $[0,\infty)$ and differentiable on $(0,\infty)$ . Define a function $g$ on $(-\infty,\infty)$ by $g(x)=f(|x|)$ . Then do I get that $g$ is differentiable on $(-\infty,\infty)$ , or could $g$ not be differentiable at $0$ ?
I think $f(x)=x^{\frac{1}{2}}\cos(\frac{1}{x})$ for $x>0$ and $f(x)=0$ for $x=0$ gives a counterexample. This function is continuous on $[0,\infty)$ but $g(x)=|x|^{\frac{1}{2}}\cos(\frac{1}{x})$ is not differentiable at $0$ .
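A quick numeric illustration (my own, assuming the counterexample above): along $h_n = 1/(2\pi n)$ we have $\cos(1/h_n)=1$, so the difference quotient $g(h_n)/h_n = \sqrt{2\pi n}$ blows up, and $g'(0)$ cannot exist.

```python
# Difference quotients of g(x) = |x|^(1/2) cos(1/x), g(0) = 0, at 0,
# evaluated along h_n = 1/(2*pi*n): they equal sqrt(2*pi*n) -> infinity.
import math

def g(x):
    return 0.0 if x == 0 else math.sqrt(abs(x)) * math.cos(1.0 / x)

quotients = []
for n in (1, 10, 100, 1000):
    h = 1.0 / (2 * math.pi * n)
    quotients.append(g(h) / h)
```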
|real-analysis|functional-analysis|
0
Solving $2\sqrt {x^5}-\sqrt {x^3}=2x-1$
I've been stuck on this question for like an hour. $2\sqrt {x^5}-\sqrt {x^3}=2x-1$ . The memo doesn't explain how to do it, but it says the answers are $1$ and $1/2$ . I can get $1$ , just not $1/2$ ever. Working I did: $2\sqrt {x^5}-\sqrt {x^3}=2x-1$ , $2x^{5/2} - x^{3/2} = 2x - 1$ , $x^{3/2}(2x-1) = 2x - 1$ , $x^{3/2} = 1$ , $x = 1$ .
You can obtain $x = \frac{1}{2}$ by considering the case when $2x-1=0$ , since you only considered the case when it is not zero, before simplifying $$x^{\frac{3}{2}}(2x-1) = 2x-1$$ Which allows you to obtain $$2x-1 = 0$$ $$\implies x =\frac{1}{2}$$ Hope this helps!
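A quick numeric check of both roots (my own sketch, not part of the answer): plugging $x=1$ and $x=\tfrac12$ into $F(x)=2\sqrt{x^5}-\sqrt{x^3}-(2x-1)$ should give zero both times.

```python
# Residuals of the equation at the two claimed solutions.
def F(x):
    return 2 * x ** 2.5 - x ** 1.5 - (2 * x - 1)

residual_one = F(1.0)
residual_half = F(0.5)   # 2*(1/2)^(5/2) = (1/2)^(3/2), and 2x - 1 = 0
```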
|algebra-precalculus|
0
If each connected component of a submanifold of Euclidean space embeds does the entire manifold embed?
I have been working on a problem where I want to prove an injective immersion $f$ from a submanifold $M \subset R^n$ (actually, a level set) to $R^k,\; k \geq \dim M$ is a topological embedding. If I knew $M$ was connected I could do it (I believe) and so I have been twisting myself into a pretzel trying to come up with a proof. Then I thought: Question : Does $M$ have to be connected? What if I could show that $f$ restricted to each connected component of $M$ was a topological embedding to its image? Would that suffice to show that the image of $M$ is an embedded submanifold of $R^k$ ? Since immersions are inherently local and topological embeddings don't seem to depend on connectivity, I can't immediately see why that wouldn't be true. Any pointers towards a way of proving/disproving this would be appreciated. Edit To correct inequality. Edit I see where the title of my post is more general than I intended. I have changed the title to accurately reflect the actual question I'm asking
For the revised question: Let $M= (-2,0)\cup (0,1)$ embedded in the $x$ -axis in ${\mathbb R}^2$ . Define a map $f: M\to {\mathbb R}^2$ by: $f$ restricted to $(0,1)$ is the identity map. $f$ restricted to $(-2,0)$ is the composition of the rotation by $\pi/2$ (counterclockwise) around the origin, followed by the translation by the vector $[0, 1]$ . I will leave it to you to check that $f$ is an embedding when restricted to both components of $M$ , but the image of $f$ is not a topological manifold, hence, is not a submanifold of ${\mathbb R}^2$ . Now, to the question in the title. Lemma. Suppose that $M$ is a (2nd countable) manifold such that each component of $M$ embeds in ${\mathbb R}^k$ . Then $M$ admits an embedding in ${\mathbb R}^k$ . (This works in both the category of topological manifolds and that of differentiable manifolds.) Proof. Let $M_j, j\in J$ , denote the connected components of $M$ , where $J$ is countable. The key is that each component $M_j$ is both open and closed in $M$ . Since ${\mathbb R}^k$ is homeomorphic (diffeomorphic, in the smooth category) to an open ball of radius $1/3$ , each $M_j$ embeds in the open ball $B_j$ of radius $1/3$ centered at $(j,0,\dots,0)$ for distinct integers $j$ . These balls are pairwise disjoint and each $M_j$ is open and closed in $M$ , so the union of these embeddings is an embedding of $M$ in ${\mathbb R}^k$ . $\square$
|general-topology|smooth-manifolds|
0
Differentiability of a function extended via symmetry
Suppose I have a function $f$ defined on $[0,\infty)$ and differentiable on $(0,\infty)$ . Define a function $g$ on $(-\infty,\infty)$ by $g(x)=f(|x|)$ . Then do I get that $g$ is differentiable on $(-\infty,\infty)$ , or could $g$ not be differentiable at $0$ ?
In the sense of distributions, the derivative is $$ \bigl(f(|x|)\bigr)' = f'(|x|)\cdot |x|' = f'(|x|)\cdot \operatorname{sign}(x), $$ which is continuous at $x=0$ only if $f'(0^+)=0$ . So $f:[0,\infty)\to\mathbb R$ , $f(x)=x$ , is a counterexample: $g(x)=|x|$ is not differentiable at $0$ .
|real-analysis|functional-analysis|
0
integration of floor function
How do I integrate the following? $$\int_{2}^{343}(x-\lfloor{x}\rfloor)^2dx = \int_{2}^{3}(x-\lfloor{x}\rfloor)^2dx + \int_{3}^{4}(x-\lfloor{x}\rfloor)^2dx+\cdots+\int_{342}^{343}(x-\lfloor{x}\rfloor)^2dx$$ This is what I did. How can I proceed further with this? Any help is appreciated.
The idea of separating the integral into integer intervals was correct. You have $$ \int_n^{n+1} \left(x-\lfloor x \rfloor \right)^2dx=\int_n^{n+1} (x-n)^2dx=\frac{1}{3}, $$ so $$ I=\sum_{n=2}^{342} \frac{1}3=\frac{341}3. $$ Edit: As noted in the comments of the OP, since the fractional part function is periodic with period $1$, it would have been enough to calculate $$ \int_0^1 x^2 dx. $$
|calculus|ceiling-and-floor-functions|
1
Prove that $\{x\in X : f(x)\leq g(x)\}$ is closed, order topology
Let $X$ be a topological space, $Y$ be an ordered topological space (see definition below) and $f,g: X\to Y$ be continuous maps. Prove that: a) $Y$ is a Hausdorff space. b) $A=\{(y_1,y_2)\in Y\times Y : y_1<y_2\}$ is open in the product topology on $Y\times Y$ . c) $B=\{x\in X : f(x)\leq g(x)\}$ is closed in $X$ . d) The map $h: X\to Y$ defined by $$h(x)=\min\{f(x),g(x)\}$$ is continuous on $X$ . PS: Assume $(X,\leq)$ is a set with a linear order (if $x,y$ are in $X$ then $x\leq y$ or $y\leq x$ ). Denote by $\mathbb{B}$ the family of subsets of $X$ having one of the $3$ following forms: $\{x\in X\mid a<x<b\}$ , $a,b\in X$ , $a<b$ ; $\{x\in X\mid x<a\}$ , $a\in X$ ; $\{x\in X\mid a<x\}$ , $a\in X$ . We know that $\mathbb{B}$ is a base of some topology $\tau$ on $X$ , and $(X,\tau)$ is called an ordered topological space. I already finished parts a), b) but am stuck at c). Here's my attempt for c): We need to prove $G=X\setminus B=\{x\in X\mid f(x)>g(x)\}$ is open in $X$ . Consider $x\in G$ . Then $f(x)>g(x)$ . Since $Y$ is Hausdorff, there exist disjoint open neighborhoods $U'$ of $f(x)$ and $V'$ of $g(x)$ .
You can just use b) to prove c). b) implies that the set $\{(y_1, y_2)\in Y^2 : y_1 \leq y_2\}$ is closed. Of course, that's not directly implied by b), but since the map $(y_1, y_2)\mapsto (y_2, y_1)$ is a homeomorphism of $Y^2$ , it's true. Consider the map given by $h(x) = (f(x), g(x))$ , then $h:X\to Y^2$ is continuous and so $$h^{-1}(\{(y_1, y_2)\in Y^2 : y_1 \leq y_2\}) = \{x\in X : f(x)\leq g(x)\} = B$$ is a closed set.
|general-topology|
0
Proving that the orthogonal projection on the xy-plane of the corner of a cube looking upwards forms a triangle.
By the corner of a cube looking upwards I mean one vertex V and the three vertices adjacent to V, such that V has the minimum z-coordinate of the cube. I suppose that the projection on the xy-plane of V should be contained inside the projection of the other vertices, because that is what my visual intuition suggests, but I'm having trouble proving that. My objective is to try to prove this analytically, preferably without using angles, for the unit cube. I thought about using vectors in the xy-plane with V as the origin and expressing one of the vectors as the cross product of the other two so that I had fewer variables, but I'm having trouble since it can go both ways. I guess I should try to find a convex combination of the projections of those three vectors that gives the projection of V, but I'm not sure how to proceed.
This is pretty clear geometrically. Why do you need an algebraic proof? The projections of those three vertices of a cube will always form a triangle unless two of them are collinear in the $xy$ -plane. Your "looking upward" hypothesis prevents that since if two of the projections were collinear the third vertex would coincide with its projection, so not be "above" the corner. In fact, the corner must be an interior point of the triangle. If it were not then one of the lines containing an edge of the triangle joining the projections of two of the vertices would separate the corner from the projection of the third vertex, which happens only if the third vertex is "below" the $xy$ -plane.
|geometry|euclidean-geometry|analytic-geometry|
0
How to use dimension shifting in group cohomology
Let $M$ be a discrete $G$ -module and $M'$ another $G$ -module defined by the exact sequence \begin{equation} 0 \to M \to \text{Ind}_G(M)\to M'\to 0\end{equation} where $\text{Ind}_G(M):=\text{Maps}(G,M)$ is the collection of continuous functions $\alpha:G\to M$ and is called the induced G-module . As stated on page 32 of the book " Cohomology of number fields " by Neukirch-Schmidt-Wingberg, dimension shifting is a technique by which " definitions and proofs concerning the cohomology groups for all G-modules M and all n, may be reduced to a single dimension n, e.g. n = 0 ". Basically, if we take the long exact sequence corresponding to the sequence given above, we get a map \begin{equation}H^n(G,M')\to H^{n+1}(G,M)\end{equation} which is a surjection for $n=0$ and an isomorphism for $n>0$ by a theorem in the book (in fact, something more general is true but I think understanding this special case is enough to understand the general case). My question is: how does this help us actually compute cohomology groups in practice?
The point is that if you know something like $H^1(G, M)$ vanishes for all $M$ , then you get this for the higher cohomology as well. In general, for the reason you observed, dimension-shifting is most useful when you have "for all $M$ " type statements since you have very little control over $M'$ .
|group-cohomology|galois-cohomology|
1
Are isomorphic structures really indistinguishable?
I always believed that in two isomorphic structures what you could tell for the one you would tell for the other... is this true? I mean, I've heard about structures that are isomorphic but different with respect to some property and I just wanted to know more about it. EDIT : I try to add clearer information about what I want to talk about. In practice, when we talk about some structured set, we can view the structure in more different ways (as lots of you observed). For example, when someone speaks about $\mathbb{R}$, one could see it as an ordered field with a particular lub property, others may view it with more structure added (for example as a metric space or a vector space and so on). Analogously (and surprisingly!), even if we say that $G$ is a group and $G^\ast$ is a permutation group, we are talking about different mathematical objects, even if they are isomorphic as groups! In fact there are groups that are isomorphic (wrt group isomorphisms) but have different properties, for example as permutation groups.
Indistinguishability recalls the concept of "perfect overlapping". To apply this concept to "structures", you first need to have a set-wise picture of them, to then literally apply the Venn diagram-like overlapping routine. The following is an argument about groups. (Not sure if, and to what extent, this can be generalized to other structures.) A group $G$ manifests all of its structure as soon as we allow the internal operation to fully deploy its effects. Accordingly, we can reasonably set forth the following: Definition 1 . The structure of a group $G$ is the set $\theta_G:=\{\theta_a, a\in G\}\subseteq \operatorname{Sym}(G)$ , where $\theta_a$ is the bijection on $G$ defined by: $g\mapsto \theta_a(g):=ag$ . With the "perfect overlapping accomplishment" $\theta_G=\theta_{\tilde G}$ ("indistinguishable structures") in the back of our mind, a problem arises if we want to determine whether two groups, $G$ and $\tilde G$ , "have the same structure": in general, $\operatorname{Sym}(G)$ and $\operatorname{Sym}(\tilde G)$ are different sets, so the two structures cannot literally overlap.
|universal-algebra|
0
Cantor's diagonal argument for proving the completeness of $L^\infty$ space, check work
I wrote up the following proof for proving $L^\infty$ is complete. Please check if I made any mistakes, thank you! There is a common/standard treatment for establishing the limit of a Cauchy sequence in $L^\infty$ -norm using the approach suggested by Munroe, Aliprantis and Burkinshaw, Wheeden and Zygmund, Folland, and so on to take care the measurability based on the dependency and countability of measurable subsets that follow directly after applying the definition of $L^\infty$ -norm and the convergence of Cauchy sequence in measure. It appears implausible that all of these books are incorrect, which was what I was told by my graduate level analysis course professor. But wait! I can't find a mistake in my thinking or the proofs provided by these books, so I'm hoping that someone will point out my error or that I can send some errata emails. In a previous post , I mentioned an alternative approach for establishing the limit of the Cauchy sequence in $L^\infty$ -norm which was also po
I already wrote a related answer here . In brief, the claim that step 1 is not logical/mathematical, or that this kind of proof is a waste of time, or that the result follows without further justification, might be due to reasoning as follows: (i) Given $\epsilon>0$ , $\exists N\in\mathbb{N}$ such that $\left\Vert f_n-f_m \right\Vert_\infty<\epsilon$ , $\forall m,n>N$ . (ii) $\Rightarrow$ $\vert f_n(x)-f_m(x)\vert<\epsilon$ for almost every $x\in\mathbb{R}$ . (iii) $\Rightarrow$ one can define $\lim\limits_{n\to\infty} f_n (x) = f(x)$ for a.e. $x\in\mathbb{R}$ , since by definition, the previous line shows $f_n$ is Cauchy. Without a proof, the step from (ii) to (iii) is merely an analogy from topological/metric spaces to measure spaces, assuming that results about Cauchy sequences in metric/topological spaces remain valid in a morally similar way; hence without a proof, (iii) becomes an assumption. However, since (a) the condition "almost every $x\in\mathbb{R}$ " appears in (ii), and (
|solution-verification|proof-explanation|banach-spaces|fake-proofs|
0
Integral from $0$ to $\pi/2$ of $\tan x/(4\ln^2(\tan x) + \pi^2)$
$$\int_0^\frac{\pi}{2} \frac{\tan(x)}{(2\ln(\tan(x)))^2 + \pi^2}dx$$ I have solved it as such: $$\int_0^\frac{\pi}{2}\frac{\tan(x)}{(2\ln(\tan(x)))^2+\pi^2}dx=\int_0^\frac{\pi}{2}\frac{\tan(\frac{\pi}{2}-x)}{(2\ln(\tan(\frac{\pi}{2}-x)))^2+\pi^2}dx=\int_0^\frac{\pi}{2} \frac{\cot(x)}{(2\ln(\tan(x)))^2 + \pi^2}dx$$ $$2I=\int_0^\frac{\pi}{2} \frac{\tan x+\cot x}{(2\ln(\tan x))^2 + \pi^2}dx$$ $$I=\int_0^\frac{\pi}{2} \frac{\tan x+\frac{1}{\tan x}}{2((2\ln(\tan x))^2 + \pi^2)}dx$$ $$=\int_0^\frac{\pi}{2} \frac{\tan^2x+1}{2\tan x((2\ln(\tan x))^2 + \pi^2)}dx$$ Putting $t= \tan x$ , we get $dt=\sec^2x\,dx$ : $$=\int_0^\infty \frac{1}{2t\left((2\ln(t))^2 + \pi^2\right)}dt$$ Now putting $\ln(t)=u$ , $\frac{1}{t}dt=du$ : $$=\int_{-\infty}^\infty \frac{1}{2((2u)^2 + \pi^2)}du=\left[\frac{\tan^{-1}(\frac{2u}{\pi})}{4\pi}\right]_{-\infty}^\infty=\frac{1}{4}$$ Is this correct, and are there any other methods?
Your answer is correct. Here is an alternative solution, which is basically the same as yours but less tricky. Denote the original integral by $I$ and put $y=\tan x$ , then $$I=\int_{0}^{\infty } \frac{1}{ 4\log^2y+\pi^2 } \frac{y}{1+y^2} \mathrm d y.$$ Now put $z=\log y$ , then $$I=\int_{-\infty}^{\infty } \frac{1}{4z^2+\pi^2 }\frac{e^{2z}}{1+e^{2z}} \mathrm d z.$$ Put $w=-z$ and hence $$I=\int_{-\infty}^{\infty } \frac{1}{4w^2+\pi^2}\frac{e^{-2w}}{1+e^{-2w}} \mathrm d w=\int_{-\infty}^{\infty } \frac{1}{4w^2+\pi^2}\frac{1}{1+e^{2w}} \mathrm d w.$$ Therefore $$2I=\int_{-\infty}^{\infty } \frac{1}{4z^2+\pi^2 }\frac{e^{2z}}{1+e^{2z}} \mathrm d z+\int_{-\infty}^{\infty } \frac{1}{4w^2+\pi^2}\frac{1}{1+e^{2w}} \mathrm d w=\int_{-\infty}^{\infty } \frac{1}{4z^2+\pi^2 }\mathrm d z=\frac{1}{2},$$ so $I=\frac{1}{4}$ .
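A numeric sanity check (mine, not part of the answer): approximating the reduced integral $\int_{-\infty}^{\infty}\frac{du}{2(4u^2+\pi^2)}$ over a large finite window should give a value close to $1/4$; the neglected tails are bounded by $1/(4L)$.

```python
# Midpoint sum of the integrand 1/(2(4u^2 + pi^2)) over [-L, L];
# with L = 2000 the truncated tails contribute roughly 1e-4.
import math

L, n = 2000.0, 400000
h = 2 * L / n
approx = 0.0
for i in range(n):
    u = -L + (i + 0.5) * h
    approx += h / (2 * (4 * u * u + math.pi ** 2))
```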
|calculus|integration|definite-integrals|
0
Let $G$ be a directed graph, and let $C$ be a circuit in $G$. Prove there exists a simple circuit $C'$ such that all the edges of $C'$ belong to $C$.
Let $G$ be a directed graph, and let $C$ be a circuit in $G$ . Prove there exists a simple circuit $C'$ such that all of the edges of $C'$ belong to $C$ . I am new to graph theory, my idea was the following: Look at the shortest circuit in $C$ . It must be a simple circuit (I don't know how to prove that) and all of its edges belong to $C$ . How do I expand on this?
You have the right idea. If some interior vertex of $C$ is repeated, consider the (necessarily shorter) path between two occurrences of that vertex. What can you say about that path?
|graph-theory|
1
Why do we have a constant function as part of primitive recursive functions?
Currently taking a class on computability theory and started learning about primitive recursive functions. The constant function is described as follows: $C^n_a(x_1,...,x_n) = a$ First of all, why do we have a function of this sort to begin with? Can't we just refer to the constant itself as it is? Also, why are $x_1, ..., x_n$ passed to it if they're not used?
If you don't have constant functions, none of the other rules will let you build them. They are all ways of combining previously defined functions or returning one of the arguments to the function. The constant functions are how you refer to constants. Sometimes functions just don't use some (or any) of their arguments; these are perfectly valid functions. Also, some other rules involve combining functions that take all the 'extra' arguments necessary to compute the function. Then a constant base case must throw them all away. This is pretty commonplace in actual functional programming. General-purpose functions that abstract 'looping' patterns might provide their cases with the most possible information, and a simple case just ignores that information and yields a constant.
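A small sketch (my own illustration, with an assumed Python encoding of the basic functions): in the usual basis the zero function is itself a constant function that ignores its arguments, and every other constant $C^n_a$ arises by composing the successor with it $a$ times.

```python
# Toy encoding of primitive recursive basics as Python callables.
def zero(*args):            # C^n_0: a constant function ignoring its arguments
    return 0

def succ(x):                # successor function
    return x + 1

def const(a):
    """Build C^n_a by composing succ with the zero function a times."""
    def c(*args):
        v = zero(*args)
        for _ in range(a):
            v = succ(v)
        return v
    return c

five = const(5)             # five(x1, ..., xn) == 5 for any arguments
```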
|computability|
1
Showing that $I_aJ \subseteq I$
Let $I$ be an ideal in an integral domain $R$ , and define the ideals $I_a = (I, a)$ where $a \in R-I$ , and $J = \{r \in R: rI_a \subseteq I\}$ . It is also known that $I_a$ and $J$ are principal ideals, so $I_a = (\alpha)$ and $J = (\beta)$ . We then have $I_aJ = (\alpha\beta)$ . To show set containment, we let $x \in I_aJ$ be arbitrary. Then we can write $x = r(\alpha\beta)$ for $r \in R$ . Then since $\alpha \in I_a$ , we can write $\alpha = qs + at$ , where $q \in I$ and $s,t \in R$ . So then $x = r(qs+at)\beta$ . It's at this point that I am stuck. I know that since $\beta \in J$ , we have $x \in J$ by properties of ideals. This would imply that $xI_a \subseteq I$ , but that doesn't seem to imply that $x \in I$ . Can someone help?
Since $x=r(qs+at)\beta$ , with $r,s,t \in R$ , $q \in I$ , and $\beta \in J$ , we can distribute out the terms to get that $x = q(rs\beta) + \beta a(rt)$ . The first term is in $I$ since $q \in I$ and ideals absorb products. The second term contains $\beta a \in I_aJ$ . Since $\beta \in J$ , we have that $\beta I_a \subseteq I$ by the definition of $J$ . But then since $\beta a \in \beta I_a$ , this implies that $\beta a \in I$ , and so by properties of ideals, $\beta a(rt) \in I$ , and since both terms are in $I$ , their sum $x \in I$ .
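A concrete sanity check (a hypothetical instance I chose, not from the question): in $R=\mathbb{Z}$ take $I=6\mathbb{Z}$ and $a=4$, so $I_a=(6,4)=2\mathbb{Z}$ and $J=\{r : rI_a\subseteq I\}=3\mathbb{Z}$; then every product from $I_aJ$ lands in $I$.

```python
# Check I_a * J ⊆ I = 6Z on a finite window of each ideal.
N = 40
I_a = [2 * k for k in range(-N, N + 1)]   # the ideal (6, 4) = 2Z
J = [3 * k for k in range(-N, N + 1)]     # {r : 2r ∈ 6Z} = 3Z

products_in_I = all((x * y) % 6 == 0 for x in I_a for y in J)
```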
|abstract-algebra|ring-theory|
0
How many non-isomorphic, non-labelled, forests are there (asymptotically) on $n$ vertices?
Is there any formula known? There is the following asymptotic formula for unlabelled trees: $$t(n) \sim C \alpha^n n^{-\frac{5}{2}}$$ with $t(n)$ the number of unlabelled non-isomorphic trees on $n$ vertices. Is there a similar formula known for forests? I actually just want to know if the number of forests is $F(n)=O(\beta^n)$ for some constant $\beta$ .
If all we want is a rough estimate, then it's enough to say that $t(n)$ is a lower bound on $F(n)$ , of course, but $(n+1) t(n+1)$ is an upper bound on $F(n)$ . If we take an $(n+1)$ -vertex tree and delete a vertex from it, we get an $n$ -vertex forest, and every forest can be obtained by doing this in at least one way. The lower bound has order $\Theta(\alpha^n n^{-5/2})$ , and the upper bound has order $\Theta(\alpha^n n^{-3/2})$ , so $F(n)$ is somewhere in between.
|combinatorics|graph-theory|trees|
1
A question about two definitions of a Goppa code
I see two definitions of “Goppa codes”. The first is through polynomials, as in this thesis . This is also true in some of the core textbooks in the field, such as van Lint 3rd edition (1999), and MacWilliams and Sloane (1977). However, another definition of “Goppa codes” is given on Wikipedia , with more than just polynomials. Goppa codes with this definition are also mixed into the first page of Google Scholar results, like Wirtz 1988 as well as Yang, Kumar, Stichtenoth 1994 . Since these code definitions are both called “Goppa codes”, are they equivalent? And if they’re not equivalent, why are they both named “Goppa codes”?
Answering because the comments don't highlight the historical nuance in the naming of Goppa's two distinct classes of codes . In particular, the fact that the Wikipedia page on "Goppa codes" didn't use the modern naming convention correctly until I rewrote it recently [1] shows that there is continued confusion over the naming of his codes. In 1970 and 1971 Goppa published papers on his first big class of error-correcting codes [2,3], which is the polynomial construction you're talking about. An English description of this was written by Berlekamp in 1973 [4]. These "polynomial" codes are famous, since both his papers won information theory best paper awards [5], and with binary Goppa codes being the core of the McEliece cryptosystem [6,7]. Today, this class of codes is referred to as "Goppa codes". This naming was natural to do since they are important and were constructed by Goppa, so nothing's complicated yet. Much later though, Goppa developed a second class of codes in 1982, which he constructed from algebraic curves; these are now usually called algebraic-geometry (AG) codes, a name that distinguishes them from the earlier polynomial "Goppa codes".
|definition|math-history|coding-theory|
0
What is wrong with substituting a constant in a system of equations?
Imagine that I have the following system of equations: \begin{equation} \left\{ \begin{aligned} x + 2 &= 3 \\ y - 5 &= 3 \end{aligned} \right. \end{equation} The solution (x=1, y=8) of this system is trivial (the equations do not even depend on each other). But imagine that to solve this system I do the following: I note that both x+2 and y-5 equal 3. I do the substitution: x+2 = y-5. I arrive at the solution: y-x=7. This solution includes the correct solution of (x=1, y=8), but generally is not correct. So my question is: Is it correct to do a substitution like this? If not, what exactly makes it wrong? Thank you!
Such a substitution is logically correct. However, it is a conclusion and the result is a necessary condition that is no longer sufficient. $$ (x,y)=(1,8)\Longleftrightarrow x+2=3 \text{ AND }y-5=3\;\Longrightarrow y-x=7 \not\Longrightarrow (x,y)=(1,8) $$ Hence, the substitution gives more solutions than the two equations alone. $y-x=7$ is necessary for any solution of the two equations. However, it is no longer equivalent to both equations, i.e. we lost information.
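The necessary-but-not-sufficient point can be seen with a two-line check (my own sketch): $(1,8)$ satisfies both equations and $y-x=7$, while $(0,7)$ satisfies $y-x=7$ but neither original equation.

```python
# The derived equation y - x = 7 admits solutions the system rejects.
def eq1(x):
    return x + 2 == 3

def eq2(y):
    return y - 5 == 3

def derived(x, y):
    return y - x == 7

real_solution_ok = eq1(1) and eq2(8) and derived(1, 8)
extra_solution = derived(0, 7) and not (eq1(0) or eq2(7))
```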
|systems-of-equations|substitution|
1
Proving that the orthogonal projection on the xy-plane of the corner of a cube looking upwards forms a triangle.
By the corner of a cube looking upwards I mean one vertex V and the three vertices adjacent to V, such that V has the minimum z-coordinate of the cube. I suppose that the projection on the xy-plane of V should be contained inside the projection of the other vertices, because that is what my visual intuition suggests, but I'm having trouble proving that. My objective is to try to prove this analytically, preferably without using angles, for the unit cube. I thought about using vectors in the xy-plane with V as the origin and expressing one of the vectors as the cross product of the other two so that I had fewer variables, but I'm having trouble since it can go both ways. I guess I should try to find a convex combination of the projections of those three vectors that gives the projection of V, but I'm not sure how to proceed.
Define the cube to originally have been a unit cube with a vertex on the origin and $3$ neighboring vertices $(0,0,1),(0,1,0)$ , and $(1,0,0)$ . Rotate the cube first by $0<\theta<\pi/2$ about the $y$-axis and then by $0<\phi<\pi/2$ about the $x$-axis. Looking "down" from positive $z$ onto the $xy$-plane, the cube is first rotating about the $y$-axis up toward you and tilting left, and then that rotated cube is rotating down on the origin to tilt left and down. The vertex at the origin remains at the origin and is the lowest point. This is a specific setup, but the outcome is entirely general, you just need a plane not parallel to any face for the projection, and we just generated that situation for the $xy$-plane. You want to know if the origin's neighboring vertices form a triangle around it when projected to the $xy$-plane. $(0,0,1)$ : The first rotation moves it left, the second down, so its projection is in the 3rd quadrant. $(0,1,0)$ : The first rotation does nothing, the second pulls it down the $y$-axis some but not to or past the origin, so its projection lies on the positive $y$-axis.
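Here is a numeric version of this argument (my own sketch; the rotation signs are an assumption chosen so that the corner stays lowest for all $0<\theta,\phi<\pi/2$): project the three neighbors and test that the origin lies strictly inside the projected triangle.

```python
# Rotate the corner frame by theta about the y-axis, then phi about the
# x-axis, and check the projected origin is inside the neighbors' triangle.
import math
import random

def neighbor_projections(theta, phi):
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    pts = []
    for x, y, z in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        x, z = ct * x - st * z, st * x + ct * z   # rotate about y
        y, z = cp * y - sp * z, sp * y + cp * z   # rotate about x
        pts.append((x, y, z))
    return pts

def origin_inside(pts):
    # the origin is inside the projected triangle iff the three edge
    # cross products (edge vector x vector-to-origin) share a strict sign
    signs = []
    for i in range(3):
        x1, y1, _ = pts[i]
        x2, y2, _ = pts[(i + 1) % 3]
        signs.append((x2 - x1) * (-y1) - (y2 - y1) * (-x1) > 0)
    return all(signs) or not any(signs)

random.seed(0)
trials = [neighbor_projections(random.uniform(0.05, math.pi / 2 - 0.05),
                               random.uniform(0.05, math.pi / 2 - 0.05))
          for _ in range(200)]
corner_is_lowest = all(z > 0 for p in trials for _, _, z in p)
always_inside = all(origin_inside(p) for p in trials)
```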
|geometry|euclidean-geometry|analytic-geometry|
1
Can a non-constant function satisfy $f\left(\frac{x+y+1}{2}\right) = \frac{f(x)+f(y)}{2}$?
I'm trying to find functions $f$ over $\mathbb R$ which satisfy $$ f\left(\frac{x+y+1}{2}\right) = \frac{f(x)+f(y)}{2} $$ for all real $x$ and $y$ . One thing I immediately note is that shifting any potential function by a constant preserves the above identity, so I can assume $f(0) = 0$ and so $f(\frac{x+1}{2}) = \frac{f(x)}{2}$ for all real $x$ . I can then rewrite the left side of the above equality to deduce $$f(x + y) = f(x) + f(y)$$ for all real $x$ and $y$ , which is the well-known Cauchy functional equation. Clearly, non-zero affine solutions do not work, so $f$ is either the zero function or is some highly pathological function. Can such a pathological solution be compatible with the requirement that $f(\frac{x+1}{2}) = \frac{f(x)}{2}$ ?
An example of a non-constant $f$ (it is not so pathological!). At first sight we have $f\left(x+\dfrac12\right)=f(x)$ for all $x$ , so $f$ is periodic with period $\dfrac12$ . It follows that we must have $f\left(\dfrac{x+y}{2}\right)=\dfrac{f(x)+f(y)}{2}$ . The function is entirely defined if we choose $f(x)$ for $0\le x\lt\dfrac12$ and consider the translates of a periodic function. Giving $f(0)$ an arbitrary real value, say $f(0)=a$ , we cannot choose $f(x)$ arbitrarily because we need $f\left(\dfrac y2\right)=\dfrac{a+f(y)}{2}$ with $0\le y\lt\dfrac12$ . A little calculation gives $\boxed{f(x)=a+\dfrac x2\text{ for } 0\le x\lt\dfrac12}$ which satisfies this condition, so the complete definition of $f$ becomes $$\boxed{f(x)=\frac{4f(0)-n}{4}+\frac x2;\space \left\{\frac n2\le x\lt \frac{n+1}{2}\right\}\text{ with} \space x,f(0)\in\mathbb R; n\in\mathbb Z}$$ We add an example for $f(0)=3$ , limiting ourselves to two "segments" with points of the graph and four "segments" with the complete graph.
|functional-equations|
0
Why does $\bigcup_{j \geq 1} (A_j \cup B_j)=(\bigcup_{j \geq 1} A_j) \cup (\bigcup_{j \geq 1} B_j)$?
As the title says, I don't know why countable unions have this ("commutative") property. Is there any counterexample? Could be a countable union or not.
Usually, when one wants to show that $A = B$ , the routine procedure is to prove both the inclusions $A \subset B$ and $A \supset B$ . This is achieved by proving the implications $$ x \in A \implies x \in B \qquad \text{ and } \qquad y \in B \implies y \in A, $$ respectively. This is something you should always have in mind when you encounter these kind of problems. Consider an arbitrary element $x \in \bigcup_{j \geqslant 1} A_j \cup B_j$ . By definition of union, there exists a constant $j_0 \geqslant 1$ such that $x \in A_{j_0} \cup B_{j_0}$ . Moreover, again by definition of union, it follows that $x \in A_{j_0}$ or $x \in B_{j_0}$ . These cases imply that $x \in \bigcup_{j \geqslant 1} A_j$ or $x \in \bigcup_{j \geqslant 1} B_j$ , respectively. Hence, we conclude $x \in \left( \bigcup_{j \geqslant 1} A_j \right) \cup \left( \bigcup_{j \geqslant 1} B_j \right)$ , which is enough to guarantee the inclusion $(\subset)$ . On the other hand, consider now an arbitrary element $y \in \l
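A finite sanity check (my own, with randomly chosen finite sets standing in for the $A_j$ and $B_j$):

```python
# The union of the sets A_j ∪ B_j equals (union of A_j) ∪ (union of B_j).
import random

random.seed(1)
A = [set(random.sample(range(30), 5)) for _ in range(8)]
B = [set(random.sample(range(30), 5)) for _ in range(8)]

left = set().union(*(A[j] | B[j] for j in range(8)))
right = set().union(*A) | set().union(*B)
```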
|elementary-set-theory|
1
Natural domain of rational functions
I had an argument about the natural domain of rational functions. Consider $f(x) = \frac{P(x)}{Q(x)},$ where $P$ , and $Q$ are (real) polynomials. Is the natural domain of $f$ the set $\{ x \in \mathbb{R}: Q(x) \neq 0\}$ ? Or should we first simplify the fraction? For instance, let us consider $P = Q = x$ , so that $$f(x) = \frac{x}{x}. $$ I sustain that the domain should exclude $x = 0$ , but my friend argues that the functions $f \equiv 1$ on $\mathbb{R}$ thus $0$ belongs to the natural domain of the function. Who is right?
It depends on how you define the function obtained from a polynomial fraction, as well as on the definition of "polynomial fraction". This is a very subtle question, and I'll do my best to provide some ideas, if not give a definitive answer. First, it is important to distinguish between a rational function , which is a function, and what I'll call a polynomial fraction , which, strictly speaking, isn't. To simplify, let's look at the analogous case of polynomials and polynomial functions. A polynomial is a formal expression like $x^2+2x-3$ . It's just a string of variable symbols, numbers, and arithmetic symbols; it isn't (strictly speaking) a function. However, each polynomial $p$ gives rise to a polynomial function $f_p : \Bbb{R} \rightarrow \Bbb{R}$ , in this example given by $f_p(x) = x^2+2x-3$ . This distinction isn't important in basic algebra and calculus, but it does exist, and it becomes important in more advanced contexts. For instance, in modular arithmetic, you can have dif
|real-analysis|notation|
0
Does the square root of -X have a meaning in R?
I had a question earlier, and the question is: $X$ is a negative number; does the square root of $-X$ have a meaning in $\mathbb{R}$ (the real numbers)? What's the answer to this question? (I know it's a really simple question, but I'm confused!)
If $X$ is negative then $-X$ is positive and so has a square root. I think you are confused because you think that any algebraic expression for a number that starts with a minus sign represents a negative number. That's just not so. If $X = -2$ then $-X = -(-2) = 2$ .
|algebra-precalculus|
1
How many different ways can the 19 people be split across 3 cars such that no two cars have the same number of people?
There are 19 people who are riding a rollercoaster and each time the rollercoaster runs with all 19 people they are organized differently among the 3 separate cars on the track. Each car can carry more than 19 people. We can see the coaster from far away, so we can see the number of people on each coaster car but the people themselves are indistinguishable. How many different ways can the 19 people be split across the 3 coaster cars on the ride such that no two cars have the same number of people? So far I have figured out that using the concept of multisets the total number of ways to distribute 19 people among 3 cars would be: $$\binom{21}{2}$$ Now I am not able to figure out how do I go about calculating the number of ways such that two cars have the same number of people and so then I can simply subtract it from the above expression and get the final answer. Any help would be appreciated.
Now I am not able to figure out how do I go about calculating the number of ways such that two cars have the same number of people ... Let the number of people in the three cars be $(x, x, y)$ . $$2x + y = 19$$ Clearly, $2x$ can take any even value from $0$ to $18$ (inclusive). So, $2x$ can take any of the $10$ possible values. And for each value of $2x$ , $x$ and $y$ each will have a unique value. But since the three cars are distinguishable, we'll have to multiply by ${3 \choose 2} = 3$ . Any two of the three cars can have the same number of passengers. Putting it all together, the number of ways such that two cars have the same number of people would be $10 \times 3 = 30$ .
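A brute-force enumeration confirms both counts (the $\binom{21}{2} = 210$ total and the $30$ triples with a repeat); a short Python check:

```python
# Enumerate all ordered triples (a, b, c) of non-negative integers with a + b + c = 19.
triples = [(a, b, 19 - a - b) for a in range(20) for b in range(20 - a)]
assert len(triples) == 210  # C(21, 2) weak compositions of 19 into 3 parts

# Triples in which some two cars carry the same number of people.
# (All three equal is impossible since 19 is not divisible by 3.)
repeated = [t for t in triples if len(set(t)) < 3]
assert len(repeated) == 30

all_distinct = len(triples) - len(repeated)
assert all_distinct == 180  # the final answer: 210 - 30
```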
|combinatorics|discrete-mathematics|permutations|
0
$F(x) = f([x]) \cdot f(\{x\})$ . Find $f$
Given a function $f: [0,\infty) \rightarrow (0,\infty)$ which admits primitives, let $F$ be a primitive such that $F(x) = f([x]) \cdot f(\{x\})$ for all $x$ , and suppose there exists $a>0$ such that $f(a)=F([a]) \cdot F(\{a\})$ , where $[x]$ denotes the integer part and $\{x\}$ the fractional part. Find every function $f$ that satisfies these properties. I have only been able to find some small relations between $F$ and $f$ on intervals of the form $[n,n+1)$ , and that $f$ seems to satisfy a recursive formula at the natural numbers, yet nothing that would point out a clear path.
Let $(c_n)$ be the sequence in $(0,+\infty)$ defined by $c_n = f(n)$ . Let $x\in[0,1)$ . We have that $F$ is continuous on $[0,1)$ and differentiable on $(0,1)$ by the property of being an antiderivative of $f$ . The condition $F(x) = f([x])f(\{x\})$ simplifies to $F(x) = c_0 f(x)$ , i.e. $f(x) = \frac 1{c_0} F(x)$ , so $f$ is continuous on $[0,1)$ and differentiable on $(0,1)$ , and we have the differential equation $f'(x) = \frac 1{c_0}f(x)$ , $f(0) = c_0,$ which solves to $f(x) = c_0e^{x/c_0}$ and $F(x) = c_0^2e^{x/c_0}$ . Now let $x\in[n,n+1)$ . The condition $F(x) = f([x])f(\{x\})$ simplifies to $F(x) = c_nf(x-n) = c_nc_0e^{(x-n)/c_0}$ , so by differentiating we have $f(x) = c_ne^{(x-n)/c_0}$ on $[n,n+1)$ . Since $F$ is a continuous function, we get $F(n+1) = \lim_{x\to(n+1)^-}F(x)$ , i.e. $c_{n+1}c_0 = \lim_{x\to(n+1)^-}c_nc_0e^{(x-n)/c_0} = c_nc_0e^{1/c_0}$ , or $c_{n+1} = c_ne^{1/c_0}$ . Finally, let $n$ be such that $a\in[n,n+1)$ . From the condition $f(a) = F([a])F(\{a\})$ we get $c_ne^{(a-n)/c_0} = c_nc_0 \cdot c_0^2 e^{(a-n)/c_0}$ , which forces $c_0^3 = 1$ , i.e. $c_0 = 1$ since $c_0 > 0$ . The recursion then gives $c_n = e^n$ , hence $f(x) = e^n e^{x-n} = e^x$ on every $[n,n+1)$ , i.e. $f(x) = e^x$ is the only such function.
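As a sanity check, assuming the constants resolve to $c_0 = 1$ so that $f(x) = e^x$ with primitive $F(x) = e^x$, both functional conditions can be verified numerically:

```python
import math

f = math.exp   # candidate solution f(x) = e^x
F = math.exp   # the primitive with F' = f singled out by the functional equation

for x in [0.3, 1.7, 2.0, 4.9]:
    n, frac = math.floor(x), x - math.floor(x)
    # F(x) = f([x]) * f({x})
    assert abs(F(x) - f(n) * f(frac)) < 1e-9
    # F'(x) ≈ f(x), checked with a central difference
    h = 1e-6
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-4
```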
|real-analysis|derivatives|continuity|fractional-part|
1
Minimum $k$ for which every positive integer of the interval $(kn,(k+1)n)$ is composite
I am looking for references containing results on the minimum $k$ for which every positive integer of the interval $(kn,(k+1)n)$ is composite. If we denote this minimum $k$ as $k(n)$ , then $k(n)$ is always at least $n+2$ , and it grows exponentially as $n$ grows. I attach the graph of $k(n)$ for $n\leq 100$ . The exponential growth seems reasonable, and suggests that some exponential lower bound for $k(n)$ could be established for all $n>N_0$ . What I do not find "so reasonable" is the great variation between $k(n)$ and $k(n+1)$ . So, my questions: Is there any result concerning lower bounds of $k(n)$ ? I will appreciate references to any paper addressing $k(n)$ . Why do we have this great variation between $k(n)$ and $k(n+1)$ for so many $n$ ? Thanks for your time! EDIT: After reviewing the partial answer provided by @Eric, it surprises me that, given the exponential growth shown in the graph, not even a lower bound for $k(n)$ seems to have been established.
Partial answer: Another way to consider $k(n)$ is by looping through prime gaps of length at least $n$ and finding one which contains two consecutive multiples of $n$ . Denote the location of the first prime gap of length at least $n$ as $m_n$ . Since a gap of at least $n$ is necessary, and a gap of length $2n$ always contains two consecutive multiples of $n$ , this gives $$m_n/n \leq k(n)\leq m_{2n}/n.$$ Large jumps can occur because the start of the gap at $m_n$ (or of any similar gap) might not line up perfectly with multiples of $n$ , so $k(n)$ can bounce between those two bounds. I'd recommend plotting $n\times k(n)$ ; in particular, you should see some interesting constant lines across your graph. This is already apparent in your graph, where there are some long, straightish horizontal stretches, roughly from $n=65$ to $82$ and from $70$ to $98$ , which probably correspond to a few large prime gaps around 15k and 400k. For example, the first gap of length 96 starts at 360653.
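For small $n$, the quantity $k(n)$ can also be computed directly by scanning; a minimal Python sketch (trial-division primality, so only suitable for small $n$):

```python
def is_prime(m: int) -> bool:
    """Trial-division primality test; fine for the small values used here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def k(n: int) -> int:
    """Smallest k such that every integer in the open interval (kn, (k+1)n) is composite."""
    j = 1
    while True:
        if all(not is_prime(m) for m in range(j * n + 1, (j + 1) * n)):
            return j
        j += 1

# (9) is composite, so k(2) = 4; (25, 26) gives k(3) = 8; (25, 26, 27) gives k(4) = 6.
assert [k(2), k(3), k(4)] == [4, 8, 6]
```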
|number-theory|prime-numbers|prime-factorization|prime-gaps|
0
If $\phi:V_1\rightarrow V_2$ is a morphism of varieties then $V_1\cong \phi(V_1)$
I am reading Silverman's The Arithmetic of Elliptic Curves . I am wondering if, with the definition of morphism he gives, we can conclude that if $\phi:V_1\rightarrow V_2$ is a morphism of projective varieties then $V_1\cong \phi(V_1)$ . Throughout my mathematical career, I have seen cases where a morphism induces an isomorphism onto the image, and cases where it doesn't. It usually depends on how restrictive you want the morphism condition to be. I don't really know what happens in this case, and would appreciate some insight. Here are the definitions Silverman uses for (iso)morphisms of projective varieties: Let $V_1$ and $V_2 \subset \mathbb{P}^n$ be projective varieties. A rational map from $V_1$ to $V_2$ is a map of the form $$ \phi: V_1 \longrightarrow V_2, \quad \phi=\left[f_0, \ldots, f_n\right], $$ where the functions $f_0, \ldots, f_n \in \bar{K}\left(V_1\right)$ have the property that for every point $P \in V_1$ at which $f_0, \ldots, f_n$ are all defined, $$ \phi(P)=\left[f_0(P), \ldots, f_n(P)\right] \in V_2. $$ A rational map that is regular at every point of $V_1$ is a morphism, and an isomorphism is a morphism admitting a morphism inverse.
Let $E\subset\Bbb P^2$ be an elliptic curve in Weierstrass form. Then the projection to the $x$ -axis gives a 2-to-1 surjective map from $E\to\Bbb P^1$ , and $E\not\cong\Bbb P^1$ since they have different genera. Even if you require bijectivity, this won't hold: consider the normalization map $\Bbb P^1\to V(y^2z-x^3)\subset\Bbb P^2$ , which is bijective but not an isomorphism, since the source is smooth but the target is singular. One positive result in this direction is Zariski's main theorem - a quasi-finite separated birational map of varieties with normal target is an open immersion. Applying this to the case of a morphism between projective varieties, we have that a quasi-finite birational morphism of projective varieties with normal target is an isomorphism. So you need some assumptions.
|algebraic-geometry|algebraic-curves|projective-space|morphism|
1
Hamming code parity check matrix.
Let's say we have a matrix $ H= \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} $ which is a Hamming code parity check matrix. I need to find the code generator matrix and then find the code words if the sequence being sent is $ 0 0 0 0 0 1 1 1 1 0 0 1 1 1 1 0$ . Now, when I find the code generator matrix, its dimensions are supposed to be $4 \times 7$ , but I have a sequence of sixteen numbers. How am I supposed to determine the code words when the number of elements in the given sequence is not equal to the number of rows of the generator matrix, so that multiplication of that vector and the generator matrix is not possible? Any help appreciated!
This question was answered to some satisfaction in the comment by Jyrki Lahtonen , as written by the poster in another comment. From the comments: "In block coding, when using an $(n,k)$ block code, the message to be transmitted is split into blocks of $k$ bits. These are then encoded to the codewords you would get when using a prescribed generator matrix."
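To illustrate the comment's point, here is a Python sketch (the bit-position layout is the one implied by the given $H$, whose columns are the binary representations of 1 through 7: data bits at positions 3, 5, 6, 7, parity bits at 1, 2, 4). The 16-bit sequence is split into four blocks of $k = 4$ bits, and each block is encoded into a 7-bit codeword:

```python
# Parity check matrix from the question; column j is the binary expansion of j.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def encode(d):
    """Encode 4 data bits into a Hamming(7,4) codeword consistent with H.
    Data goes to positions 3, 5, 6, 7; parity to positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = (d1 + d2 + d4) % 2   # covers positions 1, 3, 5, 7
    p2 = (d1 + d3 + d4) % 2   # covers positions 2, 3, 6, 7
    p4 = (d2 + d3 + d4) % 2   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

message = [0,0,0,0, 0,1,1,1, 1,0,0,1, 1,1,1,0]   # 16 bits -> four blocks of k = 4
codewords = [encode(message[i:i + 4]) for i in range(0, 16, 4)]

# Every codeword must satisfy H c^T = 0 (mod 2).
for c in codewords:
    assert all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)
```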
|information-theory|coding-theory|
0
The rate of a code with prescribed minimum distance
Fix $\delta\in(0,1)$. For integer $n$, what is the largest size of a subset $C\subset\{0,1\}^n$ such that any two elements of $C$ are at least $\delta n$ away from each other in the Hamming metric? In other words, what is the largest possible size of a (not necessarily linear) binary code of length $n$ with minimum distance at least $\delta n$? It is easy to give a very precise answer for $\delta>0.5$. What about $\delta=0.5$ and $\delta<0.5$? No doubt this has been studied; please excuse my ignorance, and thanks in advance for sharing your knowledge!
This question was answered to some satisfaction in the comment by Jyrki Lahtonen , as written by the poster in another comment. From the comments: "This is unknown in general. IIRC the best asymptotic bound is the linear programming bound due to McEliece, Rodemich, Rumsey & Welch. One of the first google hits ."
|combinatorics|coding-theory|
0
Systems of equations with numerical analysis
If the differential equations $$ y'' + 0.5yy'z = \sin(x), $$ $$ y' - z'' + zy = \cos(y), \quad 0 \leq x \leq 6, $$ are rewritten as a system of $n$ first-order differential equations, what is $n$ ? I answered 6 because I thought we would need equations for $y, y', y'', z, z', z''$ . But the actual answer is 4. I would appreciate it if someone could explain this to me!
Only the derivatives below the highest order need to be introduced as new unknowns: set $p = y'$ and $q = z'$ ; the second derivatives $y''$ and $z''$ are then expressed through the equations themselves, so the state consists of the four functions $(y, p, z, q)$ and $n = 4$ . Explicitly, those equations are equivalent to the following system of first-order ODEs: $$ \begin{cases} y'=p, \\ p'+0.5ypz=\sin(x), \\ z'=q, \\ p-q'+zy = \cos(y). \end{cases} $$
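The four-dimensional state can be integrated directly; a minimal pure-Python RK4 sketch (the initial conditions and step count are made up for illustration, and only a short stretch of the domain is integrated):

```python
import math

def rhs(x, s):
    """Right-hand side for the state s = (y, p, z, q) with p = y', q = z'."""
    y, p, z, q = s
    dy = p
    dp = math.sin(x) - 0.5 * y * p * z   # from y'' + 0.5*y*y'*z = sin(x)
    dz = q
    dq = p + z * y - math.cos(y)         # from y' - z'' + z*y = cos(y)
    return (dy, dp, dz, dq)

def rk4(x0, s, x1, steps=600):
    """Classical fourth-order Runge-Kutta integration from x0 to x1."""
    h = (x1 - x0) / steps
    for i in range(steps):
        x = x0 + i * h
        k1 = rhs(x, s)
        k2 = rhs(x + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
        k3 = rhs(x + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
        k4 = rhs(x + h, tuple(si + h * ki for si, ki in zip(s, k3)))
        s = tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s

state = rk4(0.0, (0.0, 0.0, 0.0, 0.0), 1.0)
assert len(state) == 4  # exactly n = 4 first-order unknowns
```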
|calculus|ordinary-differential-equations|analysis|numerical-methods|mathematical-physics|
0