Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Alternating sum of reciprocals of binomial coefficients
I'm looking for a simple proof of the identity $$ \sum_{k=0}^n \frac{(-1)^k}{\binom{n}{k}} = \frac{n+1}{n+2} (1+(-1)^n) $$ relying only on elementary properties of binomial coefficients. I obtained this by starting with the integral representation $$ \frac{(a-1)!(b-1)!}{(a+b-1)!} = \int_0^{\infty} \frac{t^{b-1}}{(1+t)^{a+b}} \, dt, \hspace{0.5cm} a,b \in \mathbb{N}, $$ setting $a=n-k+1, b=k+1$ and adding, which gives $$ \frac{1}{n+1} \sum_{k=0}^n \frac{(-1)^k}{\binom{n}{k}} = \sum_{k=0}^n (-1)^k \frac{(n-k)! k!}{(n+1)!} = \int_0^{\infty} \frac{1}{(1+t)^{n+2}} \sum_{k=0}^n (-t)^k \, dt = \int_0^{\infty} \frac{1}{(1+t)^{n+2}} \frac{(-t)^{n+1}-1}{-t-1} \, dt $$ $$ = (-1)^n \int_0^{\infty} \frac{t^{n+1}}{(1+t)^{n+3}} \, dt + \int_0^{\infty} \frac{dt}{(1+t)^{n+3}} = \frac{(-1)^n+1}{n+2} $$ Is there an easier way, something which doesn't rely on the integral representation? Thanks.
This answer is along the same lines as hxthanh's answer, but I have embellished it with a bit more explanation, so I thought I would post it. I waited for a while so that their answer would get the attention it deserved; however, if it is decided that there is no added benefit to my version, I will remove it. $$ \begin{align} \frac{k!(n-k)!}{n!}&=\frac{n+1}{n+2}\left(\vphantom{\frac{k!}{1!}}\right.\overbrace{\frac{k!(n-k+1)!}{(n+1)!}}^{\frac{n-k+1}{n+1}\frac{k!(n-k)!}{n!}}+\overbrace{\frac{(k+1)!(n-k)!}{(n+1)!}}^{\frac{k+1}{n+1}\frac{k!(n-k)!}{n!}}\left.\vphantom{\frac{k!}{1!}}\right)\tag{1a}\\[3pt] \frac{(-1)^k}{\binom{n}{k}}&=\frac{n+1}{n+2}\left(\frac{(-1)^k}{\binom{n+1}{k}}-\frac{(-1)^{k+1}}{\binom{n+1}{k+1}}\right)\tag{1b}\\ \sum_{k=0}^n\frac{(-1)^k}{\binom{n}{k}}&=\frac{n+1}{n+2}\left(1-(-1)^{n+1}\right)\tag{1c} \end{align} $$ Explanation: $\text{(1a):}$ $\frac{n+1}{n+2}\left(\frac{n-k+1}{n+1}+\frac{k+1}{n+1}\right)=1$ $\text{(1b):}$ multiply by $(-1)^k$ $\text{(1c):}$ sum in $k$ from $0$ to $n$; the right-hand side telescopes, leaving only the first and last terms.
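As a quick sanity check (not part of the proof), the identity can be verified with exact rational arithmetic. The sketch below is illustrative; the function names are mine.

```python
from fractions import Fraction
from math import comb

def alternating_sum(n):
    """Exact value of sum_{k=0}^n (-1)^k / C(n, k)."""
    return sum(Fraction((-1) ** k, comb(n, k)) for k in range(n + 1))

def closed_form(n):
    """(n+1)/(n+2) * (1 + (-1)^n): zero for odd n, 2(n+1)/(n+2) for even n."""
    return Fraction(n + 1, n + 2) * (1 + (-1) ** n)

for n in range(0, 20):
    assert alternating_sum(n) == closed_form(n)
print("identity verified for n = 0..19")
```

Using `Fraction` avoids any floating-point doubt: both sides agree exactly for every tested $n$.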
|summation|binomial-coefficients|alternating-expression|
0
A question on a step in proving $L^\infty$ is complete.
Let $(X,\mathcal{A},\mu)$ be a $\sigma$-finite measure space. Claim: $L^\infty$ is complete. Ideas of the proof: (i) Assume Cauchy in norm - let $\{f_n\}$ be a Cauchy sequence in the $L^\infty$ norm. That is, let $\epsilon>0$, and assume $\left\Vert f_n-f_m \right\Vert_\infty<\epsilon$ for all sufficiently large $m,n$. Prove: $ \lim\limits_{n\to\infty}f_n(x) \text{ exists } $ for all $x\notin A=\bigcup\limits_{m,n}A_{m,n}\subset X$, where $\mu\left( A_{m,n}\right)=0$ and on $A_{m,n}$ $f$ could take any finite value. I found that this diagonal argument was used in proving $L^\infty$ is complete in a book written by Zhang Gongqing, but I was taught that (to start with or even include) this step in proving $L^\infty$ is complete is not logical/mathematical. (ii) Define $f:=\lim\limits_{n\to\infty}f_n(x)$ for all $x\notin A$. Prove: $f \in L^{\infty}$. (iii) Prove: $\left\Vert f_n-f\right\Vert_\infty\to 0$, i.e. $\{f_n\}$ is uniformly convergent on $X\setminus A$. When I presented the above outline to the professor of my a
Now, it is clear (I think) that an analogy is not a proof. The thought process behind the claim that it's not logical/mathematical, that it's wasting time, and that the result can follow without a need to further address why, might be due to reasoning as follows: (1) Given $\epsilon>0$, $\exists N\in\mathbb{N}$ such that $\left\Vert f_n-f_m \right\Vert_\infty<\epsilon$, $\forall m,n>N$. (2) Then, $\vert f_n(x)-f_m(x)\vert<\epsilon$ for almost every $x\in\mathbb{R}$. (3) Then, one can define $\lim\limits_{n\to\infty} f_n (x) = f(x)$ for a.e. $x\in\mathbb{R}$, since by definition, the previous line shows $f_n$ is Cauchy (which is not clear, and needs a proof to reach this result, e.g. either using Munroe's approach, or Wheeden and Zygmund's way to briefly explain why, or like step 1 in this proof, or Folland's Theorem 2.30, and so on). Without a proof, going from (2) to (3) is merely an analogy from topological/metric space to measure space, by assuming results about Cauchy sequences in metric/topological spaces could also be valid
|functional-analysis|measure-theory|solution-verification|proof-explanation|cauchy-sequences|
0
Can $\sin^2(x) = \cos^2(x) -1$?
I came across this problem on Varsity Tutors. A part of the answer walkthrough states that $\sin^2(x)$ can equal $\cos^2(x) - 1$. This is stated more than once on the site. I do not see how this is possible. I can see that \begin{align} \sin^2(x) &= 1 - \cos^2(x) \\ \cos^2(x) &= 1 - \sin^2(x) \end{align} If $\cos^2(x)-1$ were one side of the equation, then it would need to equal $-\sin^2(x)$. Am I missing something?
You want to solve $\sin^2 x = \cos^2 x - 1$ . Substitution via the known identity $\sin^2 x + \cos^2 x = 1$ yields $$\sin^2 x = (1-\sin^2 x) - 1 = -\sin^2 x,$$ which immediately implies that $\sin^2 x=0$ , hence $\sin x = 0$ . So the solutions are $x=n \pi$ for integer $n$ .
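The conclusion can be checked numerically. The sketch below (an illustrative check, not part of the answer) confirms that the two sides agree exactly at multiples of $\pi$ and nowhere else, since their difference is $2\sin^2 x \ge 0$.

```python
import math

def residual(x):
    """sin^2(x) - (cos^2(x) - 1); zero exactly when x solves the equation."""
    return math.sin(x) ** 2 - (math.cos(x) ** 2 - 1)

# residual(x) = 2*sin^2(x) >= 0, vanishing only at integer multiples of pi
for n in range(-3, 4):
    assert abs(residual(n * math.pi)) < 1e-12
assert residual(1.0) > 0.5  # a non-multiple of pi is not a solution
print("solutions are x = n*pi")
```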
|algebra-precalculus|trigonometry|
1
Solve in $\mathbb{R}^+$ the equation $7^x-5^x=\lfloor x^2 \rfloor+1$
I was trying to solve this question: Solve in $\mathbb{R}^+$ the equation $7^x-5^x=\lfloor x^2 \rfloor+1$, where $\lfloor t\rfloor$ is the floor function. It is known that $\lfloor x^2\rfloor\leq x^2$, so $$7^x-5^x\leq x^2+1,$$ but you have $$7^x-5^x-x^2>1$$ for $x>1$. But it is hard to establish this with derivatives.
You are on the right track. Let us call $$ f(x)=7^x-5^x-x^2. $$ We have $$ f'(x)=\log(7) 7^x-\log(5) 5^x -2x $$ and $$ f''(x)=(\log(7))^2 7^x-(\log(5))^2 5^x -2. $$ $f''(x)$ is increasing and $f''(1)>0$, so for $x \ge 1$, $f'(x)$ is increasing. $f'(1)>0$, so for $x \ge 1$, $f(x)$ is increasing. But $f(1)=1$, which implies that, for any $x >1$, $$ 7^x-5^x-x^2>1. $$ This shows that there is no solution for $x>1$. $x=1$ is a solution. So it only remains to find the solutions for $0\le x<1$, i.e. the solutions of the equation $$ g(x):=7^x-5^x-1=0. $$ As $g(0)=-1$ and $g(1)=1$ and the function is continuous, by Bolzano's theorem there is at least one solution. As $g(x)$ is increasing, this solution is unique. Edit: If you are not allowed to use a calculator, you can prove that $f'(1)>0$ in the following way: We have $$ \log(7)\,7-\log(5)\, 5 > \log(7) (7-5)>2 $$ as $\log(7)> \log(5)>1$. So $f'(1)>0$. Similarly $$ (\log(7))^2\,7-(\log(5))^2\, 5 > (\log(7))^2 (7-5)>2 $$ and so $f''(1)>0$.
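The two solutions can be checked numerically; the sketch below (an illustrative bisection, assuming standard floating point) confirms $x=1$ and locates the root of $g$ in $(0,1)$.

```python
import math

def g(x):
    # g(x) = 7^x - 5^x - 1; its root in (0, 1) solves the original
    # equation there, since floor(x^2) = 0 for 0 <= x < 1
    return 7 ** x - 5 ** x - 1

# x = 1 solves the original equation: 7 - 5 = floor(1) + 1 = 2
assert 7 ** 1 - 5 ** 1 == math.floor(1 ** 2) + 1

# bisection for the unique root of the increasing function g in (0, 1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(f"second solution x = {root:.6f}")  # approximately 0.763
assert abs(g(root)) < 1e-9
assert 0.7 < root < 0.8
```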
|algebra-precalculus|ceiling-and-floor-functions|
0
Number of binary strings of length $56$ vs number of permutations of English alphabet
This is exercise $1.2$ in Nicholas Loehr's book "Combinatorics". Which is larger: the number of binary strings of length $56$ , or the number of permutations of the English alphabet ( $26$ letters)? Letting $D$ and $P$ denote these sets, respectively, it is obvious that $|D|=2^{56}$ and $|P|=26!$ . I checked using a calculator that $26!$ is indeed larger, but I want to find a combinatorial argument for this. So far, I have been unsuccessful. I have been trying to find an injection $D\hookrightarrow P$ , though I'm not sure this is the best strategy. I know that any binary string can be represented uniquely in the form $0^{\alpha_1}10^{\alpha_2}1\cdots 10^{\alpha_k}$ where $0^j$ denotes a string of $j$ consecutive zeroes. However, this doesn't immediately help to get a permutation of $26$ letters, since the $\alpha_i$ could be between $0$ and $56$ , and they could repeat. But maybe there is something that can be done with the $\alpha_i$ to get an injection. On the other hand, one can un
One way to do it: De Polignac's Formula allows us to factor factorials easily. Here we get $$26!= 2^{23}\times 3^{10}\times 5^6\times 7^3\times 11^2\times 13^2\times 17\times 19\times 23$$ But it is easy to see that $3^{10}>2^{10}$ and $5^6>2^{12}$ and $7^3>2^6$ and $11^2>2^6$ and that's more than sufficient.
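The factorization and the chain of bounds can be machine-checked. This is a minimal sketch; `prime_exponent` is my name for de Polignac's (Legendre's) formula.

```python
from math import factorial

def prime_exponent(n, p):
    """Exponent of prime p in n! via de Polignac's (Legendre's) formula."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

exps = {p: prime_exponent(26, p) for p in (2, 3, 5, 7, 11, 13, 17, 19, 23)}
assert exps == {2: 23, 3: 10, 5: 6, 7: 3, 11: 2, 13: 2, 17: 1, 19: 1, 23: 1}

# reassemble 26! from the factorization
prod = 1
for p, e in exps.items():
    prod *= p ** e
assert prod == factorial(26)

# the bounds used in the answer: 3^10 * 5^6 * 7^3 * 11^2 > 2^(10+12+6+6),
# so 26! > 2^(23+34) = 2^57 > 2^56
assert 3 ** 10 > 2 ** 10 and 5 ** 6 > 2 ** 12
assert 7 ** 3 > 2 ** 6 and 11 ** 2 > 2 ** 6
assert factorial(26) > 2 ** 56
print("26! > 2^56 confirmed")
```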
|combinatorics|discrete-mathematics|permutations|
0
Proof of Polya Gamma Laplace Transformation
If $w$ follows a Polya-Gamma distribution, denoted as $w\sim PG(b,0)$ with $b>0$, then $$w\overset{D}{=}\frac{1}{2\pi^{2}}\sum_{k=1}^{\infty}\frac{g_{k}}{(k-1/2)^{2}},$$ where the $g_{k}\sim\Gamma(b,1)$ are mutually independent. In the following paper, https://arxiv.org/abs/1205.0310 , at page 4, equation (3), they derive that the Laplace transform of $w$ is $$\mathbb{E}[e^{-wt}]=\prod_{k=1}^{\infty}\left(1+\frac{t}{2\pi^{2}(k-1/2)^{2}}\right)^{-b}=\frac{1}{\cosh^{b}(\sqrt{t/2})},$$ where in the last equality they used the Weierstrass factorization theorem. Does anyone know why the product appears in the Laplace transform, and overall how they derived the Laplace form? I made an attempt which seems to be close to what they derived. First, I define $c_{k}=\frac{1}{2\pi^{2}(k-1/2)^{2}}$. Hence, the Laplace transform can be expressed as $$\mathbb{E}[e^{-wt}]=\mathbb{E}[e^{-t\sum_{k=1}^{\infty}c_{k}g_{k}}],$$ and because the $g_{k}$ are mutually independent I can write $$=\mathbb{E}[e^{-c_{1}
I tried the idea of using the moment generating function for the $PG(1,0)$ case: $$ \begin{aligned} E\left(\exp(-wt)\right) &= \prod_{k=1}^\infty E\left(\exp\left(-\dfrac{t\cdot g_k}{2\pi^2(k-1/2)^2}\right)\right)\\ &= \prod_{k=1}^\infty \dfrac{1}{\left(1+\dfrac{t}{2\pi^2(k-1/2)^2}\right)} \quad \leftarrow \text{by moment generating function}\\ &=\dfrac{1}{\cosh\left(\sqrt{\dfrac{t}{2}}\right)} \quad \quad \leftarrow \text{by Weierstrass factorization of cosh} \end{aligned} $$ For the Weierstrass factorization of cosh, please see this link: https://specialfunctionswiki.org/index.php/Weierstrass_factorization_of_cosh You will find $PG(b,0)$ is quite similar, which gives the result as you provided.
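The equality of the infinite product and $1/\cosh(\sqrt{t/2})$ (the $b=1$ case) can also be checked numerically by truncating the product; the sketch below is illustrative, with the truncation level chosen so the tail is negligible.

```python
import math

def truncated_product(t, K=50_000):
    """Partial product prod_{k=1}^{K} (1 + t / (2*pi^2*(k-1/2)^2))^(-1).

    Accumulated in log space for numerical stability; the neglected tail
    is of order t / (2*pi^2*K), tiny for moderate t."""
    log_p = 0.0
    for k in range(1, K + 1):
        log_p -= math.log1p(t / (2 * math.pi ** 2 * (k - 0.5) ** 2))
    return math.exp(log_p)

for t in (0.5, 1.0, 2.0, 5.0):
    closed = 1 / math.cosh(math.sqrt(t / 2))
    assert abs(truncated_product(t) - closed) < 1e-4
print("product matches 1/cosh(sqrt(t/2)) for b = 1")
```

Raising the truncated product to the power $b$ gives the corresponding check for general $PG(b,0)$.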
|statistics|laplace-transform|statistical-inference|bayesian|logistic-regression|
0
Can $\sin^2(x) = \cos^2(x) -1$?
I came across this problem on Varsity Tutors. A part of the answer walkthrough states that $\sin^2(x)$ can equal $\cos^2(x) - 1$. This is stated more than once on the site. I do not see how this is possible. I can see that \begin{align} \sin^2(x) &= 1 - \cos^2(x) \\ \cos^2(x) &= 1 - \sin^2(x) \end{align} If $\cos^2(x)-1$ were one side of the equation, then it would need to equal $-\sin^2(x)$. Am I missing something?
If $\sin^2(x)=\cos^2(x)-1$ then $$2\sin^2(x)=\sin^2(x)+\sin^2(x)=\sin^2(x)+(\cos^2(x)-1)=1-1=0,$$ and so $\sin^2(x)=0$ . This implies that $x=k\pi$ for some integer $k$ .
|algebra-precalculus|trigonometry|
0
Solving the non-homogeneous differential equation $\ddot{y}+\frac km\dot{y}=-g\hat{j}$
How to solve this non-homogeneous second-order linear ordinary differential equation $$\ddot{y}+\frac km\dot{y}=-g\hat{j}$$ ($\hat{j}$ is just the unit vector in the $y$ direction)? I found the solution to the homogeneous part, $y_h(t)=c_1+c_2e^{-\frac{k}{m}t}$. By variation of parameters, assume $y_p(t)=c_1(t)+c_2(t)e^{-\frac{k}{m}t}$. I calculated $\dot{y}_p=c_1'(t)+c_2'(t)e^{-\frac{k}{m}t}-\frac{k}{m}c_2(t)e^{-\frac{k}{m}t}$ and $\ddot{y}_p(t)=c_1''(t)+c_2''(t)e^{-\frac{k}{m}t}-2c_2'(t)\frac{k}{m}e^{-\frac{k}{m}t}+c_2(t)\left(\frac{k}{m}\right)^2e^{-\frac{k}{m}t}$, if I didn't make any errors.
Are you solving for a falling object? It's easier to multiply both sides by the integrating factor $e^{\frac kmt}$: $$y''+\frac km y'=-g\hat{j}\\ e^{\frac kmt}y''+\frac km e^{\frac kmt}y'=-ge^{\frac kmt}\hat{j}\\ \left(e^{\frac kmt}y'\right)'=-ge^{\frac kmt}\hat{j}.$$ Now integrate both sides: $$e^{\frac kmt}y'=-\frac{gm}{k}e^{\frac kmt}\hat{j}+c\hat{j}.$$ Now multiply by $e^{-\frac kmt}$, so $$y'=-\frac{gm}{k}\hat{j}+ce^{-\frac kmt}\hat{j},$$ and hence $$y=-\frac{gm}{k}t\hat{j}-\frac{cm}{k}e^{-\frac kmt}\hat{j}+c_2\hat{j}.$$
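The resulting solution can be verified by direct substitution; the sketch below uses sample parameter values (my own choices) and checks the ODE both exactly and by a central-difference cross-check.

```python
import math

# sample parameters and arbitrary constants of integration (assumptions)
k, m, g = 0.8, 2.0, 9.81
c, c2 = 1.5, 0.3
lam = k / m

def y(t):
    return -(g / lam) * t - (c / lam) * math.exp(-lam * t) + c2

def dy(t):
    return -(g / lam) + c * math.exp(-lam * t)

def d2y(t):
    return -lam * c * math.exp(-lam * t)

for t in (0.0, 0.7, 3.0):
    # the ODE y'' + (k/m) y' = -g should hold identically
    assert abs(d2y(t) + lam * dy(t) + g) < 1e-12
    # central-difference cross-check of dy against y
    h = 1e-6
    assert abs((y(t + h) - y(t - h)) / (2 * h) - dy(t)) < 1e-5
print("solution satisfies the ODE")
```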
|ordinary-differential-equations|
0
Hybrid between $5x+1$ and $7x+1$ that is probably convergent
Both the $5x+1$ and the $7x+1$ variants of the Collatz sequence are conjectured to have a large number of divergent trajectories. Here, I combined the two. As always, when you encounter even $x$, you apply $x\rightarrow x/2$, but if you encounter odd $x$, you have the option of applying either $x \rightarrow 5x+1$ or $x \rightarrow 7x+1$. My conjecture is that you can always reach $1$ from any positive integer starting point. It's very counterintuitive, but I tested the conjecture on integers on which $5x+1$ and $7x+1$ are conjectured to diverge, and yet the conjecture still holds for those integers. Is there a heuristic argument that can explain why this happens?
This is a long comment, not an answer. Nice question. It seems it is a bit sharper than the comparable $5x \pm 1$ and $7x \pm 1$ problems (where the $5x \pm1$ case is easily solvable), and the statistical formula for the average increase/decrease is a bit more difficult - I would like to see it explicitly. Here is a short heuristic, using the basic formula $ {m\cdot a_k +1\over 2^{A_k} } \to a_{k+1}$, where $m \in \{5,7\}$, finding this regular pattern when we observe $a_1 \pmod 8$:

     a1  m:A1->a2   m:A2->a3   m:A3->a4   m:A4->a5   m:A5->a6
    ----------------------------------------------------------
      3  (5:4)  1   (7:3)  1   (7:3)  1   (7:3)  1   (7:3)  1 ---
     11  (5:3)  7   (5:2)  9   (7:6)  1   (7:3)  1   (7:3)  1 ---
     19  (5:5)  3   (5:4)  1   (7:3)  1   (7:3)  1   (7:3)  1 ---
     27  (5:3) 17   (7:3) 15   (5:2) 19   (5:5)  3   (5:4)  1 ---
     35  (5:4) 11   (5:3)  7   (5:2)  9   (7:6)  1   (7:3)  1 ---
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      1  (7:3)  1   (7:3)  1   (7:3)  1   (7:3)  1   (7:3)  1
      9  (7:6)  1   (7:3)  1   (7:3)  1   (7:3)  1   (7:3)  1 ---
     17
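As an empirical complement to this heuristic, the nondeterministic map can be searched by computer. The sketch below is an illustrative search (my own construction, with an arbitrary cap on values) confirming that every starting value up to $100$ reaches $1$; a `False` result would only mean "no path found under the cap", not a disproof.

```python
from heapq import heappush, heappop

def reaches_one(n, cap=10**7):
    """Search the branching process: even x -> x/2; odd x -> 5x+1 or 7x+1.

    A min-heap explores small values first; values above `cap` are pruned,
    so success is a certificate but failure is inconclusive."""
    seen = {n}
    heap = [n]
    while heap:
        x = heappop(heap)
        if x == 1:
            return True
        successors = [x // 2] if x % 2 == 0 else [5 * x + 1, 7 * x + 1]
        for y in successors:
            if y <= cap and y not in seen:
                seen.add(y)
                heappush(heap, y)
    return False

assert all(reaches_one(n) for n in range(1, 101))
print("every n <= 100 reaches 1")
```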
|collatz-conjecture|
1
Is this proof of the principle of recursion theorem proving anything?
The following is presented as an example to motivate my question. It's my paraphrase of the principle of recursion theorem and proof. From Fundamentals of Mathematics, Volume I: Foundations of Mathematics / The Real Number System and Algebra: Let $c$ be a number and let $F$ be a function of two arguments defined in $\mathbb{N}$ and with values in $\mathbb{N}.$ Then there exists exactly one function $f$ defined in $\mathbb{N}$ such that \begin{align*} f\left(1\right)= & c;\\ \forall_{x}f\left(x^{\prime}\right)= & F\left(x,f\left(x\right)\right) \end{align*} Proof: Express $f$ as the set $\mathcal{P}$ of ordered pairs $\left\langle x,y\right\rangle $ where $y=f\left(x\right)$ and having only the recursively defined elements given by \begin{align*} \left\langle 1,c\right\rangle \in & \mathcal{P};\\ \left\langle x,y\right\rangle \in & \mathcal{P}\implies\left\langle x^{\prime},F\left(x,y\right)\right\rangle \in\mathcal{P}. \end{align*} We need to prove that for every number $x\in\mathbb{N}$ there is exactly one $y$ such that $\left\langle x,y\right\rangle \in\mathcal{P}$.
To help understand this proof, note that: (1) The proof is using induction for both the existence and uniqueness steps. (2) The proof is making use of the following argument: If $\langle x,y\rangle \in \mathcal{P}$ then $\langle x,y\rangle$ must have been created from one of the two rules: a) $\langle 1,c\rangle \in \mathcal{P}$; b) For $x \in \mathbb{N}^+$, $\langle x,y\rangle \in \mathcal{P} \implies \langle x',F(x,y)\rangle \in \mathcal{P}$. And so $\langle x,y\rangle \in \mathcal{P}$ means either $x=1$ and $y=c$, or $x = x_1'$ and $y=F(x_1,y_1)$ for some $x_1 \in \mathbb{N}^+$ and $\langle x_1,y_1\rangle \in \mathcal{P}$. So, to show that the relation $\mathcal{P}$ is single valued, we use induction. For the base step, assume $\langle 1,y\rangle \in \mathcal{P}$. Then from (2) either $y=c$, or $1=x_1'$ and $\langle x_1,y_1\rangle \in \mathcal{P}$. But in the latter case, applying (2) again to $\langle x_1,y_1\rangle$, neither rule applies, a contradiction. For the inductive step, assume that $\mathcal{P}$ is single valued for $x$. Now suppose $\langle x',z_1\rangle \in \mathcal{P}$ and $\langle x',z_2\rangle \in \mathcal{P}$ and $x \gt 1$. Then by (2), $x' = x_1'$, $z_1=F(x_1,y_1)$ and $x'=x_2'$, $z_2=F(x_2,y_2)$
|elementary-number-theory|elementary-set-theory|logic|proof-writing|quantifiers|
0
How to understand the differential is a linear map?
I read the following claim in the book, p. 19, Eq. (3.9). For a smooth function $f:\mathcal{E} \rightarrow \mathbb{R}$, where $\mathcal{E}$ is a linear space, $Df(x): \mathcal{E}\rightarrow \mathbb{R}$ is the differential of $f$ at $x$; that is, it is the linear map defined by: $$Df(x)[v]=\lim\limits_{t\rightarrow 0} \frac{f(x+tv)-f(x)}{t}.$$ Is this the standard definition of the differential? Is the claim "$Df(x)$ is a linear map" inferred from the limit expression?
Fix some $x \in \text{Dom}(f)$ . Then as you have defined $$Df(x)[v] = \lim_{t \rightarrow 0} \frac{f(x+tv)-f(x)}{t}$$ and from this we have $$Df(x)[v] = v \lim_{t \rightarrow 0} \frac{f(x+tv)-f(x)}{tv}$$ And then let $\Delta x = tv$ . As $t \rightarrow 0$ , $\Delta x \rightarrow 0$ . So we have $$Df(x)[v] = v \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x)-f(x)}{\Delta x} = v f'(x)$$ As we can see, for each $x \in \text{Dom}(f)$ , $Df(x)[v] = v f'(x)$ is a linear map, because if $x$ is fixed, then $f'(x)$ is a constant or infinity. Hope this can help you.
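For the one-dimensional case discussed above, the conclusion $Df(x)[v]=v\,f'(x)$, and in particular the linearity of $Df(x)$ in the direction $v$, can be checked numerically. This is an illustrative sketch using a central-difference approximation; `Df` is my own helper, not from the book.

```python
import math

def Df(f, x, v, t=1e-6):
    """Central-difference approximation of the directional derivative
    Df(x)[v] = lim_{t -> 0} (f(x + t v) - f(x)) / t."""
    return (f(x + t * v) - f(x - t * v)) / (2 * t)

f = math.sin
x = 0.4

# Df(x)[v] agrees with v * f'(x)  (here f' = cos)
for v in (1.0, 2.5, -3.0):
    assert abs(Df(f, x, v) - v * math.cos(x)) < 1e-6

# linearity in v: Df(x)[a*v + b*w] = a*Df(x)[v] + b*Df(x)[w]
a, b, v, w = 2.0, -1.5, 0.7, 1.3
assert abs(Df(f, x, a * v + b * w) - (a * Df(f, x, v) + b * Df(f, x, w))) < 1e-6
print("Df(x) acts linearly on directions")
```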
|calculus|linear-algebra|analysis|derivatives|linear-transformations|
0
Is this proof of the principle of recursion theorem proving anything?
The following is presented as an example to motivate my question. It's my paraphrase of the principle of recursion theorem and proof. From Fundamentals of Mathematics, Volume I: Foundations of Mathematics / The Real Number System and Algebra: Let $c$ be a number and let $F$ be a function of two arguments defined in $\mathbb{N}$ and with values in $\mathbb{N}.$ Then there exists exactly one function $f$ defined in $\mathbb{N}$ such that \begin{align*} f\left(1\right)= & c;\\ \forall_{x}f\left(x^{\prime}\right)= & F\left(x,f\left(x\right)\right) \end{align*} Proof: Express $f$ as the set $\mathcal{P}$ of ordered pairs $\left\langle x,y\right\rangle $ where $y=f\left(x\right)$ and having only the recursively defined elements given by \begin{align*} \left\langle 1,c\right\rangle \in & \mathcal{P};\\ \left\langle x,y\right\rangle \in & \mathcal{P}\implies\left\langle x^{\prime},F\left(x,y\right)\right\rangle \in\mathcal{P}. \end{align*} We need to prove that for every number $x\in\mathbb{N}$ there is exactly one $y$ such that $\left\langle x,y\right\rangle \in\mathcal{P}$.
This is really just a too-long comment: I think it may help to consider cases where the (obvious analogue of the) recursion theorem doesn't hold. Consider something like the following: $$0,1,2,3,...,\infty-2,\infty-1,\infty,\infty+1,\infty+2,...$$ Basically, this is a " $\mathbb{N}+\mathbb{Z}$ -shaped" number system. We don't have well-behaved addition or multiplication here, but we do have a successor operation, and that's enough to check whether the recursion theorem holds. And it doesn't : consider the pair of functions $$f_1:x\mapsto\begin{cases} 0 & \mbox{ if $x$ is finite}\\ 1 & \mbox{ otherwise} \\ \end{cases} \quad\mbox{and}\quad f_2: x\mapsto 0.$$ Both of these functions satisfy the rule " $f(0)=0,f(x')=f(x)$ ," but they are clearly not the same function. Similarly, the rule " $f(0)=0, f(x')=f(x)^2$ " has no solutions over all of our new number system.
|elementary-number-theory|elementary-set-theory|logic|proof-writing|quantifiers|
0
Number of binary strings of length $56$ vs number of permutations of English alphabet
This is exercise $1.2$ in Nicholas Loehr's book "Combinatorics". Which is larger: the number of binary strings of length $56$ , or the number of permutations of the English alphabet ( $26$ letters)? Letting $D$ and $P$ denote these sets, respectively, it is obvious that $|D|=2^{56}$ and $|P|=26!$ . I checked using a calculator that $26!$ is indeed larger, but I want to find a combinatorial argument for this. So far, I have been unsuccessful. I have been trying to find an injection $D\hookrightarrow P$ , though I'm not sure this is the best strategy. I know that any binary string can be represented uniquely in the form $0^{\alpha_1}10^{\alpha_2}1\cdots 10^{\alpha_k}$ where $0^j$ denotes a string of $j$ consecutive zeroes. However, this doesn't immediately help to get a permutation of $26$ letters, since the $\alpha_i$ could be between $0$ and $56$ , and they could repeat. But maybe there is something that can be done with the $\alpha_i$ to get an injection. On the other hand, one can un
I don't think a combinatorial argument necessarily gives you very much here, and I don't think that's the point of the exercise. If I were solving it I would do so by making some numerical estimates on $26!$. That said, there is a natural way to biject $P$ with the set of finite sequences $(x_0, \dotsc, x_{25})$ such that $x_i \in \{0, \dotsc, i\}$ for each $i$ (it's a nice exercise, see if you can work it out! You can read about one such bijection here. Of course all you need for this problem is an injection to $P$). You can inject $D$ into this set by interpreting appropriately sized "blocks" of a binary string as nonnegative integers (and this clearly won't be surjective). I think that this argument straightforwardly extends to prove that $2^{78}<26!$ (if I've calculated correctly!), and you can make it go further by being a bit cleverer about it. Really this argument is a round-about way to spell out the bound $26! > 1 \cdot 2 \cdot 2 \cdot 4 \cdot 4 \cdot 4 \cdot 4 \cdot 8 \cdot \d
|combinatorics|discrete-mathematics|permutations|
1
Clarification on Simplification of a radical
I recently solved the following integral: $$ \int _1^{\sqrt{3}}\frac{\sqrt{1+x^2}}{x^2}dx $$ After integrating, I obtained the result: $$ \frac{1}{2}\ln\frac{2-\sqrt2}{2-\sqrt3}+\frac{1}{2}\ln\frac{2+\sqrt3}{2+\sqrt2}+\sqrt2-\frac{2}{\sqrt3} $$ However, the answer provided in my textbook differs slightly: $$ \sqrt{2}-\frac{2}{\sqrt{3}}+\log\left(\frac{2+\sqrt{3}}{1+\sqrt{2}}\right) $$ The provided answer in the book clearly looks better and more concise. I'm particularly interested in understanding how $$ \sqrt{\frac{(2-\sqrt{2})(2+\sqrt{3})}{(2-\sqrt{3})(2+\sqrt{2})}} $$ is exactly equal to $$ \frac{2+\sqrt{3}}{1+\sqrt{2}} $$ Could someone kindly provide an explanation for this simplification as it is not obvious to me? Thank you!
Note that $$\frac{2+\sqrt3}{2-\sqrt3}=\frac{\left(2+\sqrt3\right)^2}{\left(2-\sqrt3\right)\left(2+\sqrt3\right)}=\left(2+\sqrt3\right)^2$$ and that \begin{align}\frac{2-\sqrt2}{2+\sqrt2}&=\frac{\left(2-\sqrt2\right)\left(2+\sqrt2\right)}{\left(2+\sqrt2\right)^2}\\&=\frac2{\left(2+\sqrt2\right)^2}\\&=\frac{\sqrt2^2}{\left(2+\sqrt2\right)^2}\\&=\frac1{\left(\sqrt2+1\right)^2}.\end{align}
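Combining the two displayed identities gives the book's form, which a quick numeric check confirms (an illustrative sketch, using floating point to the precision shown):

```python
from math import sqrt

# the full radical vs. the book's simplified form
lhs = sqrt(((2 - sqrt(2)) * (2 + sqrt(3))) / ((2 - sqrt(3)) * (2 + sqrt(2))))
rhs = (2 + sqrt(3)) / (1 + sqrt(2))
assert abs(lhs - rhs) < 1e-12

# the two intermediate identities used above
assert abs((2 + sqrt(3)) / (2 - sqrt(3)) - (2 + sqrt(3)) ** 2) < 1e-9
assert abs((2 - sqrt(2)) / (2 + sqrt(2)) - 1 / (1 + sqrt(2)) ** 2) < 1e-12
print("simplification checked numerically")
```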
|integration|radicals|
1
How to determine the truth value of a propositional function with free variables?
$x,y \in \mathbb{N}$ $\exists x \exists y (x + y = 0) \lor (x * y = 0)$ In this propositional function, is it true that the $x$ and $y$ in $(x * y = 0)$ are free variables? If so, am I allowed to assign a random value to the $x$ and $y$ so that I can get the truth value of the propositional function? edit: additional information I've been taught that $\exists x \exists y [P(x,y) \lor Q(x,y)]$ is different from $\exists x \exists y P(x,y) \lor Q(x,y)$ and that the existential quantifier is distributable in $\exists x \exists y [P(x,y) \lor Q(x,y)]$
We can define the function $FV$ , the set of free variables in a formula, on the set of formulae in our language recursively (any text should have a precise definition). If $\theta$ is an atomic formula, then $FV(\theta)$ is defined as the variables in the formula (as there are no quantifiers). $(x*y=0)$ is an atomic formula, so $FV((x*y=0))=\{x,y\}$ The truth of a formula which contains free variables depends on the value of the variables.
|first-order-logic|quantifiers|
0
Can you give me a hint for the following?
I am a first-year uni student learning about linear transformations. I encountered the following question: if $$T^2=T,$$ where $T$ is from $V$ to $V$, prove or disprove that $T$ is an injective function. What I understood is that $$T^2$$ is the composition of $T$ with $T$. Secondly, I deduced that $T$ can be the identity function, since applying the transformation twice is the same as applying it once; it can also be the transformation to $0$, such that I get zero no matter the input. So I concluded that the statement is false, but I am not sure how to deal with this formally. I tried writing something such as $$T(v_1)+T(v_2)=T(v_1)^2+T(v_2)^2$$ $$T(v_1)^2-T(v_1)+T(v_2)^2-T(v_2)=0$$ $$ T(T(v_1+v_2))=T(T(v_1))+T(T(v_2))$$ I just could not figure out how to continue.
I tried to imagine two more scenarios and answer them to get a good intuition on the matter. I may be wrong, and I would love nothing more than to be corrected. Case 1: if $T(x)^2=0$, does $T(x)$ have to be zero as well? I started by asking whether there is a case where this fails, and could not think of anything. I switched the format of the case to $$T(T(x))=0.$$ I know that if $T(x)=0$ then certainly $T(T(x))=0$, because $T(0)=0$ from the properties of a linear transformation. Then I tried to imagine the opposite, where $T(x)$ is not zero, and see what happens from there; this case can't always work, so I deduced that $T(x)$ has to be $0$. Case 2: I tried to think of a question similar to the first: if $$T(x)^2$$ is injective, does $T(x)$ have to be injective? $$T(v_1)^2=T(v_2)^2$$ if $$v_1=v_2$$ $$T(T(v_1))=T(T(v_2))$$ I see it working for the zero and identity transformations under composition, and since it can't be zero according to the condition above, it has to be the identity.
|linear-algebra|linear-transformations|
0
Proof that sample variance is biased in presence of autocorrelation
With no correlation we can show sample variance is unbiased: $$E[s^2] = E\left(\frac{\sum^n_{i=1}(X_i - \bar{X})^2}{n-1}\right) = \sigma^2$$ Proof $$E\left(\sum^n_{i=1}(X_i - \bar{X})^2\right) = E\left(\sum^n_{i=1}X_i^2 -2\bar{X}\sum^n_{i=1}X_i + n \bar{X}^2 \right ) = E\left(\sum^n_{i=1}X_i^2 -2\bar{X}n\bar{X} + n \bar{X}^2 \right ) = \sum E(X_i^2) - E(n \bar{X}^2)$$ $$=n\sigma^2 + n\mu^2 - \sigma^2 - n\mu^2 = (n-1)\sigma^2$$ However, if there is autocorrelation among $x_i$ , which part of this proof is no longer correct?
$$\operatorname{E}[\bar X^2] = \frac{\sigma^2}{n} + \mu^2$$ is no longer true if there is autocorrelation because $$\operatorname{E}[X_i X_j] \ne \operatorname{E}[X_i]\operatorname{E}[X_j]$$ for $i \ne j$ when there is correlation.
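The size of the resulting bias can be computed exactly from the covariance matrix, since $\operatorname{E}\bigl[\sum_i (X_i-\bar X)^2\bigr]=\operatorname{tr}(A\Sigma)$ with the centering matrix $A=I-J/n$. The sketch below (my own illustration) compares the independent and equicorrelated cases.

```python
def expected_sample_variance(Sigma):
    """E[s^2] for X with covariance matrix Sigma and constant mean:
    E[sum (X_i - Xbar)^2] = tr(Sigma) - (1'Sigma 1)/n, divided by n-1."""
    n = len(Sigma)
    tr = sum(Sigma[i][i] for i in range(n))
    grand = sum(sum(row) for row in Sigma)
    return (tr - grand / n) / (n - 1)

n, sigma2, rho = 5, 2.0, 0.3

# independent case: E[s^2] = sigma^2 (unbiased)
Sigma_ind = [[sigma2 if i == j else 0.0 for j in range(n)] for i in range(n)]
assert abs(expected_sample_variance(Sigma_ind) - sigma2) < 1e-12

# equicorrelated case: E[s^2] = sigma^2 * (1 - rho) -- biased downward
Sigma_eq = [[sigma2 if i == j else rho * sigma2 for j in range(n)] for i in range(n)]
assert abs(expected_sample_variance(Sigma_eq) - sigma2 * (1 - rho)) < 1e-12
print("positive autocorrelation biases s^2 downward")
```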
|probability|statistics|variance|
1
Non-deterministic bounded Lévy process
Does there exist a non-deterministic $\mathbb{R}$-valued Lévy process $(X_t)_{t \in [0, \infty)}$ such that there exist a $t_0 \in (0, \infty)$ and $R>0$ with $$ P(0<X_{t_0}<R)=1, $$ i.e. $X_{t_0}$ is almost surely positive and bounded by $R$? Since Lévy processes are more or less generated by infinitely divisible random variables, this raises the question whether there exists a non-deterministic $\mathbb{R}$-valued infinitely divisible random variable $X$ (i.e. its distribution $\mu_X$ is infinitely divisible with respect to convolution) such that there exists $R>0$ with $P(0<X<R)=1$; equivalently, $\mu_X((0,R))=1$. The question is motivated by the problem here: translation invariance of expectation value of hit counting variable for Lévy process. I'm seeking to construct a counterexample with an $\mathbb{R}$-valued Lévy process $(X_t)_{t \in [0, \infty)}$ such that there exist $s,a,u>0$ with $$\mathbb{E}[M_u(a,s)]= \mathbb{E}[M_0(a,s)] $$ where $M_u(a,s)$ is the random variable counting the number of tiles $
An answer to your actual question: No. Thus, suppose that $(X_t)$ is a Lévy process with $X_0=0$ and $0<X_{t_0}<R$ a.s., for some $t_0>0$ and some finite $R>0$. I assume without loss of generality that $t_0=1$. Claim: $X_t>0$ for all $t>0$ a.s. Indeed, for positive integer $n$, $P(X_n>0)\ge\left(P(X_1>0)\right)^n=1$. Consider now $X_{1/2}$. If we had $P(X_{1/2}\le 0)>0$, then we would have $P(X_1\le 0)\ge \left( P(X_{1/2}\le 0)\right)^2>0$, a contradiction. Therefore $P(X_{1/2}>0)=1$ as well. Proceeding recursively, $P(X_{2^{-k}}>0)=1$ for all $k=1,2,\ldots$. In combination with the first sentence of this paragraph this shows that $P(X_{k2^{-n}}>0)=1$ for all positive integers $k$ and $n$; by countable additivity, $$ P(X_s>0,\hbox{ for all dyadic rational }s>0)=1.\qquad (1) $$ Because the paths of a Lévy process are right continuous, we deduce from (1) that $$ P(X_t\ge 0,\forall t>0)=1.\qquad (2) $$ It is known that for a non-negative Lévy process, the Laplace transform $E(\exp(-\lambda X
|stochastic-processes|levy-processes|
1
Nonhomogeneous nonlinear differential equation with delta functions
I'm trying to solve the following differential equation $$ y'' + \dfrac{1}{2}(y')^2 = A \delta(x) + B \delta(x-a) + C $$ I tried two approaches. The first used Laplace transforms, but I don't really know how to deal with the $(y')^2$ term. I found some papers discussing it, but I couldn't really use them, since they overcomplicate my original equation. The second attempt has been to find a particular solution to add to the homogeneous one, given by $y=\dfrac{1}{c_1+x}$, but I have no clue where to start. Does anyone have an idea? Thank you all!
Assuming $C\gt 0$ , the differential equation can be rescaled by substituting $y=2 u$ and $x=\sqrt{2/C}\; t$ , leading to the simpler form $$\ddot{u}(t)+\dot{u}(t)^2=1+\alpha\, \delta(t)+\beta \,\delta(t-\tau) \tag{1} \label{1}$$ with $\alpha = A/\sqrt{2C}$ , $\beta=B/\sqrt{2C}$ and $\tau = a\sqrt{C/2}$ . Comparing the LHS and the RHS of \eqref{1}, it is obvious that $u(t)$ must be continuous at the critical points $t=0$ and $t=\tau$ and only $\dot{u}(t)$ can have jumps at these points (see also the comments by @Sal above). Therefore, setting $\dot{u}(t)= \dot{w}(t)/w(t)$ , leading to the linear differential equation $$ \ddot{w}(t)-w(t)= \alpha\, w(0)\, \delta(t) +\beta \,w(\tau)\, \delta(t-\tau) \tag{2} \label{2}$$ should pose no problems, unless $w(t)$ acquires zeros for some unfortunate choice of parameters and/or initial conditions. Restricting ourselves to the special case $\beta=0$ , eq. \eqref{2} is solved by employing the ansatz $$w(t)=(c_1 e^t+c_2e^{-t}) \,\Theta(-t)+(c_3e^t+c
|ordinary-differential-equations|laplace-transform|homogeneous-equation|
1
How to prove that CC($\mathbb R$) is true in permutation models
I've familiarised myself with models of set theory and am beginning to understand the basics, but am still very far away from being a proper model theorist. I currently live under the impression that I understand the idea of the construction of the first and second Fraenkel models. Hence, unfortunately, I must ask how one proves that the axiom of countable choice for subsets of $\mathbb R$ holds in permutation models. As far as I'm aware, it is not necessary to assume the axiom of choice when constructing permutation models. There is a particular issue that caused me some trouble. When we start constructing some permutation model, we assume that we already have some set theory as a metatheory. In this set theory, the atoms are mere sets, and thus, if we start building up the von Neumann hierarchy with atoms, all atoms will appear at some point in the kernel, which is a problem, because atoms should not be in the kernel. What am I missing here? EDIT: Or are the atoms greater than some s
I think that I may have discovered an important point here. For if we just assume that AC holds in the meta-theory, then all the statements we prove using permutation models are of the form "If the axiom of choice is true, then statement XYZ is not implied by the axioms." But this is unsatisfactory, because we don't know what happens if AC is not true. Thus, in my upcoming paper I'll first construct the "ZFA version" of the constructible universe, and then take my permutation models to be subclasses of this universe. EDIT: Or alternatively, if that will not give me enough sets, I will construct a ZFA model inside the constructible universe.
|set-theory|axiom-of-choice|
0
Showing $ \frac{2^j(2k-j)!}{(k-j)!}=\sum_{l=j}^k\frac{2^ll(2k-l-1)!}{(k-l)!} $
For any given $j,k\in\mathbb{N}$, with $j\leq k$, how is it possible to prove the following identity? $$ \frac{2^j(2k-j)!}{(k-j)!}=\sum_{l=j}^k\frac{2^ll(2k-l-1)!}{(k-l)!} $$ My attempt is: $$ \frac{2^j(2k-j)!}{(k-j)!}=\sum_{l=j}^k\frac{2^ll(2k-l-1)!}{(k-l)!}\\ 2^j\binom{2k-j}{k-j}k!=\sum_{l=j}^k\binom{2k-l-1}{k-l}l2^l(k-1)!\\\binom{2k-j}{k-j}k=\sum_{l=j}^kl2^{l-j} \binom{2k-l-1}{k-l} $$ and then I don't know how to continue. Maybe there is some binomial formula for this sum, but I don't know it.
Here is a generating function approach that generalises @MarkoRiedel's answer to this problem . We use the coefficient of operator $[z^n]$ to denote the coefficient of $z^n$ of a series. This way we can write for instance \begin{align*} [z^k](1+z)^n = \binom{n}{k}\tag{1} \end{align*} We show the following identity is valid \begin{align*} \color{blue}{\sum_{l=j}^k\binom{2k-l-1}{k-l}l\,2^{l-j}=k\binom{2k-j}{k-j}}\tag{2} \end{align*} We obtain \begin{align*} \color{blue}{\sum_{l=j}^k}&\color{blue}{\binom{2k-l-1}{k-l}l\,2^{l-j}}\\ &=\sum_{l=j}^k[z^{k-l}](1+z)^{2k-l-1}l\,2^{l-j}\tag{3}\\ &=[z^k](1+z)^{2k-1}\sum_{l=j}^\infty\left(\frac{z}{1+z}\right)^ll\,2^{l-j}\tag{4}\\ &=[z^k](1+z)^{2k-1}\sum_{l=0}^\infty\left(\frac{z}{1+z}\right)^{l+j}(l+j)2^l\tag{5}\\ &=[z^{k-j}](1+z)^{2k-j-1}\sum_{l=0}^{\infty}\left(\frac{2z}{1+z}\right)^l(l+j)\\ &=[z^{k-j}](1+z)^{2k-j-1}\left(\frac{2z(1+z)}{(1-z)^2}+j\frac{1+z}{1-z}\right)\tag{6}\\ &=2[z^{k-j}](1+z)^{2k-j}\frac{z}{(1-z)^2}+j[z^{k-j}](1+z)^{2k-j}\frac{1
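Identity (2) is also easy to confirm computationally before trusting the generating-function manipulations; the sketch below checks it exactly over a small range.

```python
from math import comb

def lhs(j, k):
    """sum_{l=j}^{k} C(2k-l-1, k-l) * l * 2^(l-j)."""
    return sum(comb(2 * k - l - 1, k - l) * l * 2 ** (l - j) for l in range(j, k + 1))

def rhs(j, k):
    """k * C(2k-j, k-j)."""
    return k * comb(2 * k - j, k - j)

for k in range(1, 12):
    for j in range(0, k + 1):
        assert lhs(j, k) == rhs(j, k)
print("identity (2) verified for 0 <= j <= k <= 11")
```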
|combinatorics|summation|binomial-coefficients|arithmetic|factorial|
0
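Before (or alongside) a generating-function proof, the identity is cheap to verify exhaustively for small parameters with exact integer arithmetic (a sketch using only the standard library):

```python
from math import factorial

def lhs(j, k):
    # 2^j * (2k-j)! / (k-j)!
    return 2**j * factorial(2*k - j) // factorial(k - j)

def rhs(j, k):
    # sum_{l=j}^{k} 2^l * l * (2k-l-1)! / (k-l)!
    return sum(2**l * l * factorial(2*k - l - 1) // factorial(k - l)
               for l in range(j, k + 1))

# exhaustive check over small parameters: the two sides agree
assert all(lhs(j, k) == rhs(j, k) for k in range(1, 13) for j in range(1, k + 1))
```

Both quotients are exact integers (each is a product of consecutive integers), so `//` incurs no rounding.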
Finite moment generating function near zero but not subexponential
A centered random variable $X$ with moment generating function (MGF) $M_X(t) := \text E[e^{tX}]$ is subexponential if $\log M_X(t) \leq ct^2$ for some $c > 0$ on some neighborhood $(-\delta, \delta)$ of zero. Are there any examples of random variables with finite MGFs in some neighborhood of zero that are not subexponential? Relatedly, how quickly can the MGF grow near zero while still being finite, and, for a given growth rate, how to find a random variable with that MGF?
I think all such random variables must be subexponential. Suppose $M_X$ is finite on some neighborhood of zero, so that there exists $\lambda > 0$ and a constant $C$ so that $M_X(\lambda) \le C$ and $M_X(-\lambda) \le C$. Then by Markov's inequality, for any $t > 0$ we can bound \begin{align} \Pr[|X| \ge t] &\le \Pr[X \ge t] + \Pr[X \le -t] \\ &= \Pr[e^{\lambda X} \ge e^{\lambda t}] + \Pr[e^{-\lambda X} \ge e^{\lambda t}] \\ &\le \frac{\mathbf{E}[e^{\lambda X}]}{e^{\lambda t}} + \frac{\mathbf{E}[e^{-\lambda X}]}{e^{\lambda t}} \\ &\le 2Ce^{-\lambda t} \end{align} and so $X$ has subexponential tails, hence is subexponential (for a centered $X$ with $M_X$ finite near zero, $M_X(0)=1$ and $M_X'(0)=\mathbf{E}[X]=0$, so a Taylor expansion of $\log M_X$ around zero gives $\log M_X(t)\le ct^2$ on a small enough neighborhood).
|real-analysis|probability-theory|probability-distributions|moment-generating-functions|
0
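The Markov step can be illustrated concretely. For a standard normal $X$ one has $M_X(\pm\lambda)=e^{\lambda^2/2}$ exactly and $\Pr[|X|\ge t]=\operatorname{erfc}(t/\sqrt2)$, and the derived bound $2Ce^{-\lambda t}$ dominates the true tail (a sketch; the choice $\lambda=1$ is arbitrary):

```python
from math import erfc, exp, sqrt

lam = 1.0
C = exp(lam**2 / 2)      # M_X(lam) = M_X(-lam) = e^{lam^2/2} for X ~ N(0, 1)

def exact_tail(t):
    # P(|X| >= t) for a standard normal
    return erfc(t / sqrt(2))

def markov_bound(t):
    # 2 * C * e^{-lam * t}, the bound obtained via Markov's inequality
    return 2 * C * exp(-lam * t)

# the bound dominates the exact tail at every sampled t > 0
assert all(exact_tail(t) <= markov_bound(t) for t in [0.5, 1, 2, 5, 10])
```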
FEM for nonlinear PDEs
I am looking for an easy but rigorous reference on FEM methods for nonlinear PDEs like the p-Laplace equation or the nonlinear heat equation, etc. Can anyone recommend a good exposition of this topic, please? Questions I have in mind are: how do I know that the resulting system of nonlinear equations is uniquely solvable, when to choose which basis functions, and what are the pros and cons of different basis functions, etc. I literally can't find anything about that on the internet. I only find papers that treat non-trivial forms of FEM for nonlinear equations.
Analysis of nonlinear PDEs is very specific to each equation and literature on numerical PDEs is similar. Everything has to be tailored to the specific nature of the equations you are studying. I think papers are your only sure bet if you have something very specific in mind or you want state-of-the-art methods. That being said, Nonlinear Finite Element Methods by Wriggers is quite good, but it necessarily focuses on a few different examples from applications.
|partial-differential-equations|numerical-methods|numerical-calculus|elliptic-equations|finite-element-method|
0
prove a trigonometric polynomial has no more than $2n$ distinct roots in $[0,2\pi)$
Prove the nonzero polynomial $$T(\theta)=a_0+\sum\limits_{k=1}^{n}(a_k\cos k\theta+b_k\sin k\theta)$$ has no more than $2n$ distinct roots on $[0,2\pi).$ My attempt: From $\cos k\theta=\dfrac{e^{ik\theta}+e^{-ik\theta}}{2}$ and $\sin k\theta=\dfrac{e^{ik\theta}-e^{-ik\theta}}{2i}$, $T$ can be rewritten as $T(\theta)=e^{-ni\theta}p(e^{i\theta})$, where $p$ is a polynomial with $\deg p\leq 2n$. $\Rightarrow T(\theta)=0\iff e^{-ni\theta}p(e^{i\theta})=0\iff p(e^{i\theta})=0$ I know that $p(z)=0$ has no more than $2n$ roots in $\mathbb{C}$, but how can I argue that $p(e^{i\theta})=0$ also has no more than $2n$ roots in $[0,2\pi)$?
Let $\theta\in[0,2\pi)$ be a root of $T$ , so that $T(\theta)=0$ . Then you have found that $p(e^{i\theta})=0$ for some (nonzero?) polynomial $p$ of degree at most $2n$ . This means $e^{i\theta}$ is a root of $p$ . Of course the map $$[0,2\pi)\ \longrightarrow\ \Bbb{C}:\ \theta\ \longmapsto\ e^{i\theta},$$ is injective. So if $T$ has more than $2n$ roots in $[0,2\pi)$ , then also $p$ has more than $2n$ roots in $\Bbb{C}$ , a contradiction.
|complex-analysis|trigonometry|
0
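The rewriting $T(\theta)=e^{-in\theta}p(e^{i\theta})$ can be made fully explicit: with $c_0=a_0$, $c_k=(a_k-ib_k)/2$ and $c_{-k}=(a_k+ib_k)/2$, one gets $p(z)=\sum_{m=0}^{2n}c_{m-n}z^m$. A numerical spot check of this decomposition (a sketch; the coefficients are arbitrary):

```python
import cmath
from math import cos, sin

# example with n = 2: T(θ) = a0 + a1 cos θ + b1 sin θ + a2 cos 2θ + b2 sin 2θ
n = 2
a = [1.0, 2.0, -0.5]   # a_0, a_1, a_2
b = [0.0, 3.0, 1.5]    # b_0 unused, b_1, b_2

def T(theta):
    return a[0] + sum(a[k] * cos(k * theta) + b[k] * sin(k * theta)
                      for k in range(1, n + 1))

# coefficients of p: p(z) = sum_{m=0}^{2n} c_{m-n} z^m,
# where c_0 = a_0, c_k = (a_k - i b_k)/2, c_{-k} = (a_k + i b_k)/2
c = {0: complex(a[0])}
for k in range(1, n + 1):
    c[k] = (a[k] - 1j * b[k]) / 2
    c[-k] = (a[k] + 1j * b[k]) / 2

def p(z):
    return sum(c[m - n] * z**m for m in range(2 * n + 1))

# verify T(θ) = e^{-inθ} p(e^{iθ}) at a few sample angles
for theta in [0.3, 1.1, 2.7, 5.0]:
    assert abs(T(theta) - cmath.exp(-1j * n * theta) * p(cmath.exp(1j * theta))) < 1e-12
```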
Sylvester–Gallai theorem Proof By Induction
While reading about the Sylvester–Gallai theorem, I came across a simple proof of the theorem using induction. Statement: Given a finite set of points in the Euclidean plane such that any line passing through two of the points passes through at least one more of them, all points have to be collinear. Base case: Let n=3. Given 3 points, if a line containing two of them passes through one more of them, they are by definition collinear. Thus statement #1 is true for n=3. Step case P(k): Given k points, assume that statement #1 holds true for the given set of k points. P(k+1): Show that for every k ≥ 3, if P(k) holds, then P(k + 1) also holds. Here we can say that for a new ((k+1)-th) point in the same plane to pass through any line passing through two of the points in the k-point set in P(k), the newly added point can only go through at least 3 points. If it passes through at least two of the points then it has to pass through the line passing through all of them. Hence the only way to expan
The issue here is that you can't apply the induction hypothesis. Although you know that the set of $k+1$ points satisfies the condition, that doesn't mean you can just throw out 1 point and have the remaining $k$ points still satisfy the condition, as you potentially need that thrown-out point to serve as the "another point on the line" for some $p_i, p_j$. Explicit example from the OP (see comments): line AB contains C, line BC contains D, line CD contains A, line DA contains B; and if we remove any one of the points from the set, then the condition will fail for one of the lines.
|combinatorics|solution-verification|
1
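The hypothesis being juggled in the induction — every line through two points of the set contains a third point of the set — can be tested mechanically for small configurations, which makes it easy to experiment with which subsets keep or lose the property. A small checker (a sketch; integer coordinates keep the collinearity test exact):

```python
from itertools import combinations

def collinear(p, q, r):
    # cross-product test: (q - p) x (r - p) == 0, exact for integer coordinates
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0

def sg_condition(points):
    # every line through two points of the set must contain a third point of the set
    return all(any(collinear(p, q, r) for r in points if r != p and r != q)
               for p, q in combinations(points, 2))

four_collinear = [(0, 0), (1, 0), (2, 0), (3, 0)]
triangle = [(0, 0), (1, 0), (0, 1)]
assert sg_condition(four_collinear)   # holds: everything is on one line
assert not sg_condition(triangle)     # fails: no line contains a third point
```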
Simplicial map of the circle $S^1$
I want to construct a simplicial map $S^1 \to S^1$ of degree $n>0$ by giving a simplicial structure on $S^1$ with minimal number of vertices. (Here, the simplicial structure for the domain $S^1$ and the target $S^1$ need not be the same.) Since a face of a simplcial complex is uniquely determined by its vertices, a simplicial structure of $S^1$ must have at least $3$ vertices. Thus for $n=1$ , the identity map will be an answer. More generally, if we give the simplicial structure on the domain $S^1$ having $3n$ vertices (more precisely, the $3n$ -th roots of unity as vertices) and give the simplicial structure on the target $S^1$ having $3$ vertices, then the map $S^1\to S^1$ , $z\mapsto z^n$ is a simplicial map of degree $n$ . My guess is that this map should be a simplicial map of degree $n$ with minimal vertices. However, I can't find a way to prove that the map above is a simplicial map of degree $n$ with minimal number of vertices. How can we show that for a simplicial map $S^1\to
Say $K$ and $L$ are simplicial structures of $S^1$ , and $L$ has three edges $e_1$ , $e_2$ , and $e_3$ ordered and oriented counterclockwise about $S^1$ . If $K$ has $k$ ordered and oriented edges $\tilde e_1, \ldots, \tilde e_k$ , then the simplicial homology groups $H_1^K(S^1) \cong \mathbb Z$ and $H_1^L(S^1) \cong \mathbb Z$ with respect to each simplicial structure are respectively generated by the cycles $\tilde e_1 + \cdots + \tilde e_k$ and $e_1 + e_2 + e_3$ . Suppose $f : K \to L$ is a simplicial map of degree $n$ . Then the induced map $f_* : H_1^K(S^1) \to H_1^L(S^1)$ is given by: $$ f_*\left( \tilde e_1 + \cdots + \tilde e_k\right) = n(e_1 + e_2 + e_3) = ne_1 + ne_2 + ne_3 $$ Since the map is simplicial, each edge $\tilde e_i$ gets sent homeomorphically to one of $e_1, e_2, e_3$ , so each edge $e_j$ in $L$ has at least $n$ disjoint preimages in $K$ (there may be more). So $K$ has at least $3n$ edges. The map you've constructed has degree $n$ , so the bound is sharp.
|general-topology|algebraic-topology|circles|simplicial-complex|
0
Troubles in solving this system of equations
Well, despite this "appearing" simple when checked with programs (the solutions are numerically easy to spot), I am not able to solve this system of two equations by hand, except for the trivial solution $(0, 0)$. Can you help me, perhaps? $$\begin{cases} 3y^3 + 6xy^2 + 3x^2y - 3y + 6x^3 - 7x = 0 \\ 2y^3 - 3xy^2 + 2x^2y - y - 3x^3 + 3x = 0 \end{cases}$$ These are the other solutions: $$\left\{x\to -\frac{1}{\sqrt{15}},y\to \sqrt{\frac{3}{5}}\right\},\left\{x\to \frac{1}{\sqrt{15}},y\to -\sqrt{\frac{3}{5}}\right\},\left\{x\to -\frac{6}{\sqrt{35}},y\to -\frac{2}{\sqrt{35}}\right\},\left\{x\to \frac{6}{\sqrt{35}},y\to \frac{2}{\sqrt{35}}\right\}$$ I tried a few manipulations, like subtracting twice the second equation from the first, but this didn't really help. Since this exercise comes from a past exam, I cannot solve it with calculators. Thank you for your help.
For brevity let \begin{eqnarray} f&=&3y^3 + 6xy^2 + 3x^2y - 3y + 6x^3 - 7x,\\ g&=&2y^3 - 3xy^2 + 2x^2y -\,\ y - 3x^3 + 3x.\\ \end{eqnarray} Then we can cancel a few similar terms in two different ways as follows: \begin{eqnarray} 2f-3g&=&21x^3+21xy^2-3y-23x,\\ f+2g&=&7y^3+7x^2y-5y-x, \end{eqnarray} and now we can cancel the higher degree terms against each other as follows: \begin{eqnarray} y(2f-3g)-3x(f+2g)&=&(21x^3y+21xy^3-3y^2-23xy)-(21xy^3+21x^3y-15xy-3x^2)\\ &=&3x^2-8xy-3y^2\\ &=&(x-3y)(3x+y) \end{eqnarray} Then $f=g=0$ implies that either $x=3y$ or $x=-\tfrac y3$ . If $x=3y$ then $f=0$ simplifies to \begin{eqnarray} 0&=&f\\ &=&3y^3+6(3y)y^2+3(3y)^2y-3y+6(3y)^3-7(3y)\\ &=&210y^3-24y\\ &=&6y(35y^2-4), \end{eqnarray} and so either $y=0$ or $y=\pm2\sqrt{\frac{1}{35}}$ . If $x=-\tfrac y3$ then $y=-3x$ and then $f=0$ simplifies to \begin{eqnarray} 0&=&f\\ &=&3(-3x)^3+6x(-3x)^2+3x^2(-3x)-3(-3x)+6x^3-7x\\ &=&-30x^3+2x\\ &=&-2x(15x^2-1), \end{eqnarray} and so either $x=0$ or $x=\pm\sqrt{\
|systems-of-equations|
0
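Whichever elimination route one takes, the five claimed solutions can be verified directly against the original system (a numerical sketch):

```python
from math import sqrt

def f(x, y):
    return 3*y**3 + 6*x*y**2 + 3*x**2*y - 3*y + 6*x**3 - 7*x

def g(x, y):
    return 2*y**3 - 3*x*y**2 + 2*x**2*y - y - 3*x**3 + 3*x

solutions = [
    (0.0, 0.0),
    (-1/sqrt(15),  sqrt(3/5)),
    ( 1/sqrt(15), -sqrt(3/5)),
    (-6/sqrt(35), -2/sqrt(35)),
    ( 6/sqrt(35),  2/sqrt(35)),
]
# every listed pair annihilates both equations
for x, y in solutions:
    assert abs(f(x, y)) < 1e-12 and abs(g(x, y)) < 1e-12
```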
Orthant probability
Let $(X_i)_{i \in \mathbb{N}} \sim \mathcal{N}(0,1)$ , where all variables are i.i.d. For the random vector $$\mathbf{X} = \left [ X_1, \ \ \ \ X_1 + X_2, \ \ \ \ X_2 + X_3, \ \ \ \ \cdots \ \ \ \ X_{n-2} + X_{n-1}, \ \ \ \ X_{n-1} \right ]^T,$$ how can I find $\mathbb{P}(\mathbf{X} \geq 0)?$ In other words, this is the probability that all components are non-negative. For small $n$ , the probabilities (although tedious) are doable. I was wondering if there is a general form, or bounds, for this probability. I know that orthant probabilities for $n >5$ do not have a closed form, but in this specific case I believe something can be done, since this is not a classic orthant problem. The tricky part about this is that the covariance matrix is non-invertible, so I cannot integrate joint densities. I would appreciate a detailed answer (if one exists). Thanks!
The probability can be analytically calculated by recursion. Let us decompose the probability to nested functions by using conditional expectations $$\begin{align} \mathbb{P}(\mathbf{X}\ge0)&= \mathbb{P}(X_1\ge0,X_1+X_2\ge 0,X_2+X_3\ge 0,...,X_{n-1}+X_{n-2}\ge 0,X_{n-1}\ge 0) \\ &=\mathbb{E}\left(\mathbf{1}_{\{X_{n-1}\ge 0 \}}\cdot \mathbf{1}_{\{ X_{n-1}+X_{n-2}\ge 0 \}} \cdot...\cdot \mathbf{1}_{\{X_2+X_3\ge 0 \}} \cdot \mathbf{1}_{\{X_1+X_2\ge 0 \}}\cdot \mathbf{1}_{\{X_1\ge 0 \}} \right)\\ &=\underbrace{\mathbb{E}\left(\mathbf{1}_{\{X_{n-1}\ge 0 \}}\cdot \underbrace{ \mathbb{E}\left(\left.\mathbf{1}_{\{ X_{n-1}+X_{n-2}\ge 0 \}} \cdot...\cdot \underbrace{ \mathbb{E}\left(\left. \mathbf{1}_{\{X_2+X_3\ge 0 \}}\cdot \underbrace{\mathbb{E}\left(\left. \mathbf{1}_{\{X_1+X_2\ge 0 \}}\cdot \mathbf{1}_{\{X_1\ge 0 \}}\right|X_2\right)}_{:=f_2(X_2)} \right|X_3\right)}_{:=f_3(X_3)} ..\right|X_{n-1}\right)}_{:=f_{n-1}(X_{n-1})}\right)}_{:=f_n(\color{red}0)}\\ \end{align}$$ that is $$\color{red}{
|probability|probability-theory|
0
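Before setting up a recursion, a Monte Carlo estimate is a useful reference point. For $n=3$ the vector is $[X_1,\ X_1+X_2,\ X_2]$, so the event reduces to $\{X_1\ge0,\ X_2\ge0\}$ and the probability is exactly $1/4$; a simulation sketch (seeded for reproducibility):

```python
import random

def orthant_mc(n, trials=100_000, seed=0):
    # Monte Carlo estimate of P(X >= 0) for the vector
    # [X_1, X_1+X_2, ..., X_{n-2}+X_{n-1}, X_{n-1}] with i.i.d. N(0,1) entries
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(n - 1)]
        comps = [x[0]] + [x[i] + x[i + 1] for i in range(n - 2)] + [x[-1]]
        hits += all(c >= 0 for c in comps)
    return hits / trials

est = orthant_mc(3)
assert abs(est - 0.25) < 0.01   # exact value is 1/4 when n = 3
```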
Locally $p-$integrable function on an open set that satisfies additional condition must be $0$ almost everywhere on the open set?
Suppose that $f$ is a Lebesgue measurable function defined on an arbitrary open set $\Omega \subset \mathbb R^n$. Furthermore, assume that $f \in L^p_{\operatorname{loc}}(\Omega)$ and that $$ \sup_{x \in \Omega, \, r > 0} r^{-\lambda} \int_{B(x,r) \cap \Omega} |f(y)|^p \, dy < \infty, $$ where $\lambda$ is an arbitrary real constant such that $\lambda > n$. GOAL. My goal is to show that, under these conditions, it follows that $f = 0$ almost everywhere on $\Omega$. Precedent work and why it fails. Severin Schraven has provided an answer to this problem that we thought to be good enough. After some time, while re-analysing this question, I figured out that there was a slight problem with his answer. The main result that he used was the Lebesgue Differentiation Theorem, which is usually formulated as follows. Lebesgue Differentiation Theorem (LDT). Given a function $g \in L^1_{\operatorname{loc}}(\mathbb R^n),$ we have that $$ \lim_{r \to 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} g(y) \, dy = g(x) $$ for almost e
This really should be a comment, but it's a little too long for it. With your assumptions, let $M_{\lambda}$ denote that finite supremum. Now, fix any point $x\in\Omega$. Since $\Omega$ is open, there exists an $r_x>0$ such that $B(x,r_x)\subset\Omega$. Then, for any $0<r<r_x$, we have that $B(x,r)\cap \Omega= B(x,r)$. Letting $c_n$ denote the volume of the unit ball, we have \begin{align} \frac{1}{|B(x,r)|}\int_{B(x,r)}|f(y)|^p\,dy&=\frac{1}{c_nr^n}\int_{B(x,r)\cap \Omega}|f(y)|^p\,dy\leq\frac{1}{c_nr^n}\cdot M_{\lambda}r^{\lambda}=\frac{M_{\lambda}}{c_n}r^{\lambda-n}. \end{align} Since $\lambda>n$, we have that the LHS vanishes as $r\to 0^+$. Hence, by Lebesgue's differentiation theorem, it follows that $f=0$ a.e on $\Omega$. Notice I didn't use the full strength of all your assumptions. In particular we can allow the $\lambda$ to depend on $x$, and we only need the $\limsup\limits_{r\to 0^+}$ to be finite for a.e $x\in \Omega$ (as opposed to the supremum over all $x$). You're righ
|real-analysis|functional-analysis|functions|lebesgue-integral|alternative-proof|
1
Interpreting $t \rightarrow 0^+$ limit of Brownian motion
For a Brownian motion $B_t$ there are many results about its $t \rightarrow + \infty$ or $t \rightarrow 0+$ behavior, such as $$\limsup_{t\rightarrow \infty} \frac{B_t}{(2t\log\log t)^{1/2}} = 1 \quad a.s.\\ \liminf_{t\rightarrow \infty} \frac{B_t}{(2t\log\log t)^{1/2}} = -1 \quad a.s.$$ I interpret the above result as follows: as $t \rightarrow +\infty$ the paths of the process $B_t$ will a.s. grow by the rate $(2t\log\log t)^{1/2}$ . Another way I think of this is if you scale $B_t$ by $(2t\log\log t)^{-1/2}$ then the resulting process will a.s. be within $[-1, 1]$ in the limit $t \rightarrow +\infty$ . Assuming the above interpretations are correct, I am having a hard time coming up with a similar interpretation to the $t \rightarrow 0+$ limit, for example: $$\liminf_{t\rightarrow 0+} \frac{B_t}{(2t\log\log \frac1t)^{1/2}} = -1 \quad a.s.$$ A Brownian motion by definition starts at $0$ a.s., and if we assume a continuous version of the Brownian motion I would expect that the $t\righ
We can use time-inversion Show that $X(t)=t W(1/t)$ is a Brownian motion if $W(t)$ is a Brownian motion. i.e. $X_{t}:=tB_{1/t}$ is a BM and so $$\liminf_{t\rightarrow 0+} \frac{B_t}{(2t\log\log \frac1t)^{1/2}} \stackrel{d}{=}\liminf_{t\rightarrow 0+} \frac{tB_{1/t}}{(2t\log\log \frac1t)^{1/2}}$$ $$=\liminf_{t\rightarrow +\infty} \frac{\frac{1}{t}B_{t}}{(2\frac{1}{t}\log\log t)^{1/2}}$$ $$=\liminf_{t\rightarrow +\infty} \frac{B_{t}}{(2t\log\log t)^{1/2}}=-1.$$
|probability|probability-theory|stochastic-processes|stochastic-calculus|brownian-motion|
1
Troubles in solving this system of equations
Well, despite this "appearing" simple when checked with programs (the solutions are numerically easy to spot), I am not able to solve this system of two equations by hand, except for the trivial solution $(0, 0)$. Can you help me, perhaps? $$\begin{cases} 3y^3 + 6xy^2 + 3x^2y - 3y + 6x^3 - 7x = 0 \\ 2y^3 - 3xy^2 + 2x^2y - y - 3x^3 + 3x = 0 \end{cases}$$ These are the other solutions: $$\left\{x\to -\frac{1}{\sqrt{15}},y\to \sqrt{\frac{3}{5}}\right\},\left\{x\to \frac{1}{\sqrt{15}},y\to -\sqrt{\frac{3}{5}}\right\},\left\{x\to -\frac{6}{\sqrt{35}},y\to -\frac{2}{\sqrt{35}}\right\},\left\{x\to \frac{6}{\sqrt{35}},y\to \frac{2}{\sqrt{35}}\right\}$$ I tried a few manipulations, like subtracting twice the second equation from the first, but this didn't really help. Since this exercise comes from a past exam, I cannot solve it with calculators. Thank you for your help.
Put $y=kx$ so you get $$3(k+2)(k^2+1)x^3=(3k+7)x\\(2k-3)(k^2+1)x^3=(k-3)x$$ from which $k=\dfrac13$ and $k=-3$. It follows in the first equation (or the second one, if you want), taking the first value of $k$, $$\left(\frac{3}{27}+\frac69+\frac33+6\right)x^3-(1+7)x=0$$ so you have $$70x^2=72\Rightarrow 35x^2=36\Rightarrow x=\pm\frac{6}{\sqrt{35}}$$ and the solutions $$(x,y)=\left(\pm\frac{6}{\sqrt{35}},\pm\frac{2}{\sqrt{35}}\right)$$ Similarly, with $k=-3$, you can get the other two solutions $$(x,y)=\left(\pm\frac{1}{\sqrt{15}},\mp\frac{3}{\sqrt{15}}\right)$$
|systems-of-equations|
0
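The step "from which $k=\frac13$ and $k=-3$" can be made explicit: for $x\ne0$, dividing the two displayed relations eliminates $(k^2+1)x^2$ and cross-multiplying leaves $3(k+2)(k-3)=(3k+7)(2k-3)$, i.e. $3k^2+8k-3=0$. A quick check (sketch):

```python
# residual of the eliminated relation:
# 3(k+2)(k-3) - (3k+7)(2k-3) = -(3k^2 + 8k - 3), which vanishes exactly
# at the two slopes k = 1/3 and k = -3
def resid(k):
    return 3 * (k + 2) * (k - 3) - (3 * k + 7) * (2 * k - 3)

assert resid(-3) == 0
assert abs(resid(1 / 3)) < 1e-12
assert abs(resid(1.0)) > 1          # a non-root does not vanish
```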
A question about derivative and polynomial
Given a polynomial $p(x) = x^3 + kx - 2$, where $k \in \mathbb{R}$ is a constant and $p(x)$ has a double root at $x = \alpha$, prove that $\alpha=-1$ and $k=-3$. I plugged $\alpha$ into $p(x)$ and $p'(x)$, but that didn't go anywhere, because I would just be finding the values of $\alpha$ and $k$ by inspection. So I'm not sure how to tackle this problem.
I assume that, by saying "has a double root", you mean $\alpha$ is a root of multiplicity $2$. Then we can write $$f(x) = (x-\alpha)^2(x-\beta)$$ because $\deg(f) = 3$. So we get $$f(x) = (x-\alpha)^2(x-\beta)$$ $$= x^3 - (2\alpha+\beta)x^2 + (2\alpha \beta + \alpha^2)x - \alpha^2\beta$$ And this implies $$x^3 - (2\alpha+\beta)x^2 + (2\alpha \beta + \alpha^2)x - \alpha^2\beta = x^3 + kx -2$$ $$\Rightarrow - (2\alpha+\beta) = 0, \quad 2\alpha \beta + \alpha^2 =k, \quad - \alpha^2\beta = -2$$ After solving this system of equations, we finally get $$\alpha = -1, \beta = 2, k = -3$$
|algebra-precalculus|derivatives|polynomials|quadratics|
1
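With $k=-3$ the claimed factorisation can be confirmed directly: $x^3-3x-2=(x+1)^2(x-2)$, and $\alpha=-1$ is a common root of $p$ and $p'$. A one-liner check (sketch):

```python
def p(x):
    return x**3 - 3*x - 2           # p with k = -3

def factored(x):
    return (x + 1)**2 * (x - 2)     # (x - alpha)^2 (x - beta), alpha = -1, beta = 2

def dp(x):
    return 3*x**2 - 3               # p'(x)

# the two forms agree exactly on an integer grid (exact arithmetic)
assert all(p(x) == factored(x) for x in range(-10, 11))
# alpha = -1 is a root of both p and p', i.e. a double root
assert p(-1) == 0 and dp(-1) == 0
```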
Solving equation involving shifts of the unknown function.
Let's consider the following equation: $$ a_2 f(t+2) + a_1 f(t+1) + a_0 f(t) = g(t) $$ where $a_0,a_1,a_2$ are nonzero reals and $g(t)$ is a known function. Although it looks like an ordinary differential equation of order 2, the "changing" parts on the left side of this equation don't involve derivatives but rather shifts of the main variable $t$. Question(s): what is the name of such an equation, and what kind of methods do we have to find the unknown function $f(t)$?
Equations Name In Math.SE you can find equations like this under functional-equations or recurrence-relations. You can classify the equation in different ways, for example as a non-homogeneous linear advance functional equation with constant coefficients (equivalent to a homogeneous linear delay functional equation; or you could view it as a DDE of order $0$). It's also a recurrence equation, a functional differential equation of order $0$, an arithmetic difference equation (Mathematica can solve such equations via RSolve)... Solving this equation You can solve them in different ways. Depending on the type of functional solution, some solution methods work and some don't. These are examples that work here: $\mathcal{Z}$-transform There is a quite useful transform, similar to the Laplace transform, called the $\mathcal{Z}$-transform. Like with the Laplace transform there are multiple possible definitions. It is often used because of its special properties (especially its linearity and its time
|ordinary-differential-equations|recurrence-relations|functional-equations|
0
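Restricted to an integer grid, the equation is just a second-order linear recurrence and can be stepped forward numerically from two initial values. For instance $f(t+2)-3f(t+1)+2f(t)=0$ (so $a_2=1$, $a_1=-3$, $a_0=2$, $g\equiv0$) has characteristic roots $1$ and $2$ and hence the closed form $f(t)=A+B\cdot2^t$; a sketch:

```python
def solve_recurrence(a2, a1, a0, g, f0, f1, steps):
    # step  a2*f(t+2) + a1*f(t+1) + a0*f(t) = g(t)  forward on t = 0, 1, 2, ...
    f = [f0, f1]
    for t in range(steps):
        f.append((g(t) - a1 * f[t + 1] - a0 * f[t]) / a2)
    return f

# homogeneous example: f(t+2) - 3 f(t+1) + 2 f(t) = 0 with f(0) = 2, f(1) = 3
f = solve_recurrence(1, -3, 2, lambda t: 0, 2, 3, 10)
# these initial values select A = B = 1, i.e. f(t) = 1 + 2^t
assert all(abs(f[t] - (1 + 2**t)) < 1e-9 for t in range(len(f)))
```

On a non-integer domain the same recurrence only pins $f$ down once its values on $[0,2)$ are prescribed, which is where transform methods become useful.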
What properties of metric spaces are not preserved by uniformly continuous isomorphism?
Compactness and connectedness are preserved by homeomorphism, in the sense that if two metric spaces $(X,d_X)$ and $(Y,d_Y)$ are homeomorphic and $(X,d_X)$ is compact then it follows that $(Y,d_Y)$ is also compact (and similarly for connectedness). However, if $(X,d_X)$ is complete it does not necessarily follow that $(Y,d_Y)$ is also complete (to see this, we can take $X = \mathbb{N}$ and $Y = \{\frac{1}{n} : n \in \mathbb{N}\setminus\{0\}\}$, both with the usual metric). In fact the same counterexample can be used to show that boundedness is not preserved by homeomorphism. On the other hand, it can be shown that uniformly continuous isomorphism preserves compactness, connectedness, completeness and boundedness, in the sense that if there exists a uniformly continuous bijection $f:(X,d_X) \rightarrow (Y,d_Y)$ whose inverse is uniformly continuous and $(X,d_X)$ has one of the properties listed earlier, then $(Y,d_Y)$ also has that property. Compactness, connectedness, completeness and
You probably do not know any of the properties/invariants listed below, but these are important in the theory of metric spaces. Here are some examples: Hausdorff dimension is not preserved by uniformly continuous isomorphisms. For instance, you can have a compact metric space $(X_1,d_1)$ of zero Hausdorff dimension and a compact metric space $(X_2,d_2)$ of infinite Hausdorff dimension, such that $X_1$ is homeomorphic to $X_2$ . A metric space $(X,d)$ is called a path-metric space if for any two points $x,y\in X$ , $d(x,y)$ equals infimum of lengths of paths in $X$ connecting $x$ to $y$ . (Another name is an intrinsic metric space.) You can have two homeomorphic metric compacts $(X_1,d_1)$ , $(X_2,d_2)$ such that the first one is a path-metric space while the second contains no rectifiable nonconstant paths. You can have two countably-infinite metric spaces $(X_1,d_1)$ , $(X_2,d_2)$ both of which have discrete topology and satisfy $d_i(x,y)\ge 1$ for all $x\ne y$ , such that the first o
|general-topology|continuity|metric-spaces|uniform-continuity|uniform-spaces|
1
Does a constant $C>0$ exist such that for $\forall\ p\in\mathbb{C}[z_1,z_2]$ we have: $\sup_{z\in rD^2}|p(z_1,z_2)|\le C\sup_{z\in D^2}|p(z_1,z_2)|$?
Question: Does a finite constant $C>0$ exist such that for $\forall\ p\in\mathbb{C}[z_1,z_2]$ we have: $$\sup_{z\in r \mathbb D^2}|p(z_1,z_2)|\le C\sup_{z\in \mathbb D^2}|p(z_1,z_2)|$$ where $r>0$? By $p\in\mathbb{C}[z_1,z_2]$ I mean a polynomial of the form $$p(z_1, z_2) := \sum_{i,j=0}^k \alpha_{p_{ij}} z_1^i z_2^j$$ where $\alpha_{p_{ij}} \in \mathbb{C}$. Of course, $\mathbb D^2$ is the (open) bidisc. Important: The constant $C$ cannot depend on $p$. It can only depend on $r$. Is this even possible to prove? If not, is it maybe possible for a specific family of functions in $\mathbb{C}[z_1,z_2]$? Is it maybe possible for a closed bidisc? I really need this for something I work on currently (PhD student here). Thanks.
(I think this works. Maybe I'm tireder than I think.) Let $p_n(z) = n z_1^{2n}$. (This is holomorphic in the first coordinate and oblivious to the second coordinate. So we know that the supremum is attained on the boundary of these scaled bidiscs (in fact at any point that is on the boundary of the disc in the first coordinate).) Then \begin{align*} C(r) &\geq \frac{\sup_{z \in r\Bbb{D}^2} \left| p_n(z_1,z_2) \right|}{\sup_{z \in \Bbb{D}^2} \left| p_n(z_1,z_2) \right|} \\ &= \frac{n r^{2n}}{n 1^{2n}} \\ &= r^{2n} \text{.} \end{align*} So this sequence of functions satisfies your requirement only if $0 < r \leq 1$ and does not do so if $1 < r$ (since in that case, $C$ is unbounded as we proceed through the sequence $\{p_n\}_n$).
|real-analysis|functions|inequality|polynomials|multivariate-polynomial|
0
Limit of a sequence defined by $x_{n+1}=\arctan(\sin(x_n))$
This question involves a sequence $x_{n}$ which is defined as follows: $x_{n+1}=g(x_{n})$ where $0<x_1<\frac{\pi}{2}$ and where $g(x)=\arctan(\sin(x))$. Prove that the sequence converges and find its limit. My approach: as we know that $0<x_1<\frac{\pi}{2}$, we can assume by induction that $0<x_n<\frac{\pi}{2}$, and then it follows that $\sin(x_{n})>0$ and so $0<\arctan(\sin(x_n))<\frac{\pi}{2}$, and therefore $0<x_{n+1}<\frac{\pi}{2}$. Now, I tried to apply Lagrange's mean value theorem: all conditions apply and so there exists a $c\in[x_{n+1},x_{n}]$ with $\displaystyle g'(c)= \frac{g(x_{n+1})-g(x_{n})}{x_{n+1}-x_{n}}=\frac{x_{n+2}-x_{n+1}}{x_{n+1}-x_{n}}=\frac{\cos(c)}{1+\sin^2(c)}$, and also by the earlier induction $0<g'(c)\leq\frac{1}{2}$. Now I will denote a sequence $a_{n}=x_{n+1}-x_{n}\ \forall n\in \mathbb{N}$. We know that $\displaystyle \left|\frac{a_{n+1}}{a_n}\right|\leq\frac{1}{2}$ and therefore $\displaystyle \lim_{n \to \infty}(a_{n})=0\to \lim_{n \to \infty}(x_{n+1}-x_{n})=0$. I'm not even sure if this is the correct way, but anyway I am still stuck and will appreciate any help.
It's easy to prove $0<x_n<\frac{\pi}{2}$ by induction, which means $\{x_n\}$ is bounded. Because $\sin x_n<\tan x_n$ and $\arctan x$ is an increasing function, we have $x_{n+1}-x_n=\arctan\sin x_n-\arctan\tan x_n\leq 0$. Hence, the sequence converges. For the value of the limit, you can prove $|x|>|\arctan\sin x|$ for every $x\in\mathbb{R}\setminus\{0\}$ with derivatives. Hence the value of the limit is $0$.
|real-analysis|sequences-and-series|convergence-divergence|
1
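The claimed behaviour is easy to observe numerically: iterating $g(x)=\arctan(\sin x)$ from any start in $(0,\pi/2)$ produces a strictly decreasing positive sequence creeping toward $0$ (a sketch; the starting value $1.0$ is arbitrary):

```python
from math import atan, sin, pi

def g(x):
    return atan(sin(x))

x = 1.0                       # arbitrary start in (0, pi/2)
traj = [x]
for _ in range(2000):
    x = g(x)
    traj.append(x)

# strictly decreasing, stays in (0, pi/2), and drifts toward 0
assert all(traj[i + 1] < traj[i] for i in range(len(traj) - 1))
assert all(0 < t < pi / 2 for t in traj)
assert traj[-1] < 0.05
```

Near $0$ one has $g(x)\approx x-\tfrac12x^3$, so the convergence is only of order $x_n\sim n^{-1/2}$, which the slow tail of the trajectory reflects.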
Apply the complex exponential function to a region of the plane.
Considering the complex exponential map $w=e^{z}$ , I want to see what the image of the function would look like for the following region of the complex plane $$\text{The disk} \quad |z| \leq \pi$$ Note that the region is represented graphically as follows Initially the points (in the axis $x$ ) of the disk are described by $- \pi \leq x \leq \pi$ , similarly $- \pi \leq y \leq \pi$ so, if I apply the function we have that: $$e^{- \pi} \leq e^{x} \leq e^{ \pi}$$ I understand that the modules will now be placed between $e^{- \pi}$ and $e^{ \pi}$ that is, they will only take positive values. So you could say that by sending such a disk through the function, you almost translate the ball on the $x$ -axis, something like the following would I think I really don't see clearly what the result is, any suggestions or help? I would appreciate it!
It might be helpful to think about what happens on "vertical lines" (lines of constant $\Re z$). These get sent to circles of radius $\mathrm{e}^{\Re z}$, and the imaginary part tells you which angles on that circle are part of your image. When $\Re(z) = -\pi$, the one point at angle $0$ of the circle of radius $\mathrm{e}^{-\pi}$ is part of the image. Since the vertical range of the domain generally does not include an entire $2\pi$ segment in the vertical direction, the image only lands on arcs of the circles, leaving a "hole" in the image near the negative real axis until ... When $\Re(z) = 0$, the angle ranges from $-\pi$ to $\pi$, so that entire circle is present. Then as the real part continues to increase, the radii of the circles increase to $\mathrm{e}^{\pi}$, but the range of angles decreases until, when $\Re z = \pi$, only the point where that circle meets the positive real axis is in the image. Zoomed in to see the hole left by the short range of angles from the left half of the domai
|complex-analysis|
1
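The modulus claim is quick to confirm by sampling the disk $|z|\le\pi$ and mapping through $\exp$: since $|e^z|=e^{\Re z}$ and $\Re z\in[-\pi,\pi]$ on the disk, every image point has modulus in $[e^{-\pi},e^{\pi}]$ (a sketch):

```python
import cmath
from math import pi, exp

# sample the closed disk |z| <= pi on a square grid
samples = [complex(x, y)
           for x in [i * pi / 20 for i in range(-20, 21)]
           for y in [i * pi / 20 for i in range(-20, 21)]
           if abs(complex(x, y)) <= pi]

moduli = [abs(cmath.exp(z)) for z in samples]
# |e^z| = e^{Re z}, and Re z ranges over [-pi, pi] on the disk
assert min(moduli) >= exp(-pi) - 1e-9
assert max(moduli) <= exp(pi) + 1e-9
```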
If $f$ and $g$ are algebraic functions, is $x\mapsto (f(x),g(x))$ contained in an algebraic curve?
Let's consider the following definition (I'm not an expert, so let me know if this is not the usual one!). Let $I\subset \mathbb{R}$ be an open interval. A function $f:I\to \mathbb{R}$ is said to be algebraic on $I$ if there is a nonzero polynomial $p\in \mathbb{R}[x,y]$ such that $p(x,f(x)) = 0$, for every $x\in I$. Remark: Even if this is not usual, it is what I have in the situation I'm studying. My question is: Given two algebraic functions $f,g$ on $I$, is there a nonzero polynomial $P\in \mathbb{R}[x,y]$ such that $P(f(x),g(x))=0$ for every $x\in I$? What I've tried: Let $p$ and $q$ be polynomials in $\mathbb{R}[x,y]$ such that $p(x,f(x))=q(x,g(x))=0$, for every $x\in I$. Write $$p(x,y)=\sum_{j=0}^n a_j(y) x^j,$$ $$ q(x,y)=\sum_{j=0}^m b_j(y) x^j,$$ with $a_j, b_j\in \mathbb{R}[y]$ and $a_n$, $b_m$ both not identically zero. Without loss of generality, suppose $m\leq n$. Then we have $$b_m(g(x)) x^m =-\sum_{j=0}^{m-1}b_j(g(x)) x^j,\quad \forall x\in I.$$ Then, multiplyin
The answer is yes, and we don't necessarily have to construct an explicit $P$ . If $f$ is constant, say equal to $r\in\Bbb R$ , then $P=x-r$ suffices. Now suppose $f,g$ are nonconstant. Consider the tower of fields $\Bbb R(f)\subset \Bbb R(f,g)\subset\Bbb R(x)$ . As $\Bbb R(f)\subset\Bbb R(x)$ is algebraic, the extension $\Bbb R(f)\subset\Bbb R(f,g)$ is algebraic, so there is a nonzero polynomial in $\Bbb R(f)[t]$ satisfied by $g$ . Clearing denominators, we obtain a nonzero polynomial $P$ which vanishes on $f$ and $g$ . There are other ways to see this, too. One is to consider the ideal $J=(p(x,y),q(x,z))\subset\Bbb R[x,y,z]$ and note that $J\cap \Bbb R[y,z]$ will contain a $P$ if it's nonzero. But $f,g$ nonconstant imply $p,q$ nonzero imply $J$ of height two, so the intersection $J\cap \Bbb R[y,z]$ cannot be zero (algebro-geometrically, this is the statement that an [algebraic] curve in 3-space cannot project down to cover the whole plane). If you do want to construct an explicit $P$
|abstract-algebra|algebraic-geometry|algebraic-curves|
1
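For a concrete instance of the elimination, take $f(x)=\sqrt{x}$ (so $p(x,y)=y^2-x$) and $g(x)=\sqrt{x+1}$ (so $q(x,z)=z^2-x-1$); eliminating $x$, e.g. via the resultant of $p$ and $q$ with respect to $x$, yields $P(y,z)=z^2-y^2-1$, and indeed $P(f(x),g(x))=0$ identically (a sketch; the example is illustrative, not from the post):

```python
from math import sqrt

def f(x):
    return sqrt(x)          # satisfies p(x, y) = y^2 - x       = 0

def g(x):
    return sqrt(x + 1)      # satisfies q(x, z) = z^2 - x - 1   = 0

def P(y, z):
    # obtained by eliminating x between p and q (their resultant in x)
    return z**2 - y**2 - 1

# P(f(x), g(x)) = (x + 1) - x - 1 = 0 for every x >= 0
assert all(abs(P(f(x), g(x))) < 1e-9 for x in [0.0, 0.5, 1.0, 2.0, 10.0])
```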
Show: $\text{im } f \cap \text{ker } f = \{0\} \iff \text{ker } f \circ f = \text{ker } f$ for a vector space $V$ with linear map $f:V\rightarrow V$
Let $V$ be a vector space with a linear map $f:V\rightarrow V$. Show that: $\text{im } f \cap \text{ker } f = \{0\} \iff \text{ker } f \circ f= \text{ker } f$ This is my current proof; I'm certain the idea is correct, however it feels a bit sloppy, so I would appreciate suggestions for improvements: Let $x\in V \implies f(x) \in \text{im } f$ \begin{align} &\text{im } f \cap \text{ker } f = \{0\}\\ \iff & \forall f(x)\in \text{ker } f : f(x)=0\\ \iff & \forall f(x)\in \text{ker } f : f(x)=0 : x \in \text{ker } f \\ \iff & \forall f(x)\in \text{ker } f : x \in \text{ker } f \\ \iff & \forall x\in \text{ker } f \circ f : x \in \text{ker } f\\ \iff & \text{ker } f \circ f = \text{ker } f \end{align}
I think it will be clearer to write the two directions separately. $\Rightarrow$ Obviously, we have $\ker f\subseteq \ker f\circ f$. So we just need to prove $\ker f\supseteq \ker f\circ f$. For every $x\in \ker f\circ f$, we have $f(f(x))=0$. Hence, $f(x)\in\ker f$. Since $f(x)\in \mathrm{im}f$, we have $f(x)\in \mathrm{im}f\cap\ker f$. Thus, $f(x)=0$, so $x\in \ker f$; then $\ker f\circ f\subseteq \ker f$ and $\ker f\circ f= \ker f$. $\Leftarrow$ For every $x\in\ker f\cap \mathrm{im}f$: $x\in\ker f$, thus $f(x)=0$. At the same time, $x\in\mathrm{im}f$, so there exists $x'\in V$ such that $x=f(x')$. Therefore, $f(f(x'))=f(x)=0.$ Hence, $x'\in\ker f\circ f$. Because $\ker f=\ker f\circ f$, we have $x'\in\ker f$, which means $x=f(x')=0$. As a result, $\ker f\cap \mathrm{im}f=\{0\}$
|linear-algebra|functions|vector-spaces|
1
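A $2\times2$ example makes the equivalence tangible. For the nilpotent map $N(x,y)=(y,0)$, $\mathrm{im}\,N\cap\ker N=\mathrm{span}\{(1,0)\}\ne\{0\}$ and $\ker N\circ N=\mathbb R^2\supsetneq\ker N$; for the projection $\mathrm{Pr}(x,y)=(x,0)$ both sides of the equivalence hold. A sketch, comparing kernels on a small integer grid (illustration only, not a proof):

```python
def N(v):
    x, y = v
    return (y, 0.0)              # nilpotent: N∘N = 0

def Pr(v):
    x, y = v
    return (x, 0.0)              # projection onto the first coordinate

grid = [(float(a), float(b)) for a in range(-3, 4) for b in range(-3, 4)]

def ker(f):
    return {v for v in grid if f(v) == (0.0, 0.0)}

def ker_ff(f):
    return {v for v in grid if f(f(v)) == (0.0, 0.0)}

# nilpotent map: ker N∘N is strictly larger than ker N, and im N ∩ ker N ∋ (1,0)
assert ker(N) < ker_ff(N)
assert N((0.0, 1.0)) == (1.0, 0.0) and N((1.0, 0.0)) == (0.0, 0.0)

# projection: ker Pr∘Pr = ker Pr, matching im Pr ∩ ker Pr = {0}
assert ker(Pr) == ker_ff(Pr)
```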
proof that if $e^{z+\lambda}=e^z$ then $\lambda=2k \pi i$
Let $z \in \mathbb{C}$. I want to prove that if $e^{z+\lambda}=e^z$ then $\lambda=2k \pi i$. I take $z=x+iy$, so that if $e^{z+\lambda}=e^z$ then $e^{x+iy+\lambda}=e^z$, hence $e^{x}e^{iy} e^{\lambda}=e^xe^{iy}$, and I can take $e^{\lambda}=1$. But note that: $$e^{i2k\pi} e^{ix}=e^{i2k\pi +ix}=e^{i(2k\pi +x)}= \cos(x+2k\pi)+i\sin(x+2k\pi)= \cos(x)+i\sin(x)=e^{ix}.$$ This means $e^{i2k\pi}=1$, thus $2k\pi i=\lambda$. Can this work? I'll appreciate any suggestions.
The first step is fine: you show that $e^{z+\lambda} = e^z \implies e^\lambda = 1$. I probably wouldn't split $z$ into its real and imaginary components. Instead, I would say, $$e^{z+\lambda} = e^z \implies e^{-z} e^{z + \lambda} = e^{-z}e^z \implies e^{-z + z + \lambda} = e^0 \implies e^\lambda = 1.$$ This works just as well in complex numbers as it does in real numbers. The next step is somewhat garbled. First, and most importantly, you simply show that $e^{i2\pi k} = 1$, or in particular, $\lambda = i2\pi k \implies e^\lambda = 1$. This is not what you are being asked to show! You need to show $e^\lambda = 1 \implies \lambda = i2\pi k$ for some integer $k$, i.e. the converse. Just because you found a class of complex numbers that produce $1$ when raised as a power of $e$, does not mean that you've found them all! Also, as a side note, reintroducing $x$ seems unnecessary. You could have just written instead $$e^{i2k\pi}= \cos(2k\pi)+i\sin(2k\pi)= \cos(0)+i\sin(0)=1.$$
|complex-analysis|
1
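The forward direction ($\lambda=2\pi ik\Rightarrow e^\lambda=1$) is a one-liner to confirm numerically; the converse is the part that needs an actual argument (injectivity of $\exp$ on horizontal strips of height $2\pi$), and values of $\lambda$ off the lattice visibly fail. A sketch:

```python
import cmath
from math import pi

# lambda = 2*pi*i*k gives e^lambda = 1 for every integer k
for k in range(-3, 4):
    assert abs(cmath.exp(2j * pi * k) - 1) < 1e-12

# values of lambda NOT of that form give e^lambda != 1
for lam in [1.0, 1j, 1 + 2j * pi, 1j * pi]:
    assert abs(cmath.exp(lam) - 1) > 1e-3
```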
How to prove $\sum\limits_{cyc} \frac{1}{x+yz} \le \frac{9}{2(xy+yz+zx)}$ for all $x,y,z >0:x+y+z=3.$?
When I entered a test at my school, I got stuck on this problem (it is also posted here ) Let $x,y,z$ be positive real numbers such that $x+y+z=3$ , prove that $$\frac{1}{x+yz}+\frac{1}{y+zx}+\frac{1}{z+xy} \le \frac{9}{2(xy+yz+zx)}.$$ I have tried AM-GM: $$\frac{1}{x+yz}\le \frac{1}{2\sqrt{xyz}}.$$ But it leads to a wrong inequality $$(xy+yz+zx)^2 \le 9xyz=3xyz(x+y+z).$$ Also, I tried SOS and Vornicu-Schur without any success. It seems impossible to prove $$LHS \le \frac{3}{2} \le RHS$$ Could anyone help me with this problem? Thanks a lot.
Not a very elegant solution, but it works (Schurhead + Triangle notation). Homogenize before expansion: $$0\leq\frac{9}{2 (x y+x z+y z)}-\frac{1}{\frac{1}{3} x (x+y+z)+y z}-\frac{1}{\frac{1}{3} y (x+y+z)+x z}-\frac{1}{\frac{1}{3} z (x+y+z)+x y}$$ Then we're left to prove: $$x^4 y^2+14 x^4 y z+x^4 z^2+2 x^3 y^3-10 x^3 y^2 z-10 x^3 y z^2+2 x^3 z^3+x^2 y^4-$$ $$-10 x^2 y^3 z+6 x^2 y^2 z^2-10 x^2 y z^3+x^2 z^4+14 x y^4 z-10 x y^3 z^2-10 x y^2 z^3+$$ $$+14 x y z^4+y^4 z^2+2 y^3 z^3+y^2 z^4\geq0$$ Use the triangle notation: we notice the Schur pattern around the central $6x^2y^2z^2$ term, so subtract $$2xyz(x(x-y)(x-z)+y(y-x)(y-z)+z(z-x)(z-y))$$ Then we're left with an expression which can be cleared by Muirhead, since it is $ S[4, 2, 0] + S[3, 3, 0] + 6 S[4, 1, 1] \geq 8 S[3, 2, 1]$
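A quick numerical sanity check of the expanded polynomial (a sketch; the helper name `P` is my own): it vanishes on the diagonal, consistent with the equality case $x=y=z$ of the original inequality, and stays nonnegative at random positive triples.

```python
import random

def P(x, y, z):
    # The degree-6 polynomial obtained after homogenizing and clearing denominators
    return (x**4*y**2 + 14*x**4*y*z + x**4*z**2 + 2*x**3*y**3
            - 10*x**3*y**2*z - 10*x**3*y*z**2 + 2*x**3*z**3
            + x**2*y**4 - 10*x**2*y**3*z + 6*x**2*y**2*z**2
            - 10*x**2*y*z**3 + x**2*z**4 + 14*x*y**4*z
            - 10*x*y**3*z**2 - 10*x*y**2*z**3 + 14*x*y*z**4
            + y**4*z**2 + 2*y**3*z**3 + y**2*z**4)

random.seed(0)
samples = [(random.uniform(0, 5), random.uniform(0, 5), random.uniform(0, 5))
           for _ in range(10_000)]
min_value = min(P(x, y, z) for (x, y, z) in samples)
```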
|inequality|
0
What does inflection points of a function tell us about the integral of the function?
Context: Let $f(x)$ be a polynomial of degree $4$ with $2$ inflection points. A line is drawn through the inflection points and three regions are made. What inference can you make about these regions? Two regions have equal area; Area of one is equal to sum of other two; area of one is double the sum of other two; area of one region is square of the sum of the other two; What I found already : https://www.khanacademy.org/math/ap-calculus-ab/ab-integration-new/ab-6-5/a/behavior-of-antiderivative-of-f-from-graph-of-f This article tells us that an increasing function suggests a concave-up integral, but I am not sure if that is what I need to solve this problem.
Much of the calculation shown by @TonyK earlier can be avoided if we assume the points of inflexion to be $(-a,0)$ and $(a,0)$ , which gives $f''(x)=x^2-a^2$ and consequently (choosing the constants of integration suitably) $f(x)=\frac{1}{12}\left(x^4-6a^2x^2+5a^4\right)=\frac{1}{12}\left(x^2-a^2\right)\left(x^2-5a^2\right)$ , which is clearly an EVEN function. To check the areas, see the following: $$\int_{-a}^{a}f(x)dx=\frac{8a^5}{15}$$ and $$\int_{a}^{\sqrt5a}\left|f(x)\right|dx=\frac{4a^5}{15}$$
|calculus|
0
Follow-up: $h_{\omega}(s)$'s so that $\lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + h_{\omega}(s) = e^{-i2\pi n s}s$
This is a follow-up question or variant question to Finding $1$-periodic $h_{\omega}(s)$'s so that $\lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + h_{\omega}(s) = s$ . Let $n$ be a nonzero integer. I am wondering now if it is possible to find $1$ -periodic functions $h_{\omega}(s)$ such that $$ \lim_{\omega\rightarrow 2\pi n}\frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + h_{\omega}(s) = a_{n}(s) + b_{n}(s)s $$ where $a_{n}, b_{n}$ are themselves $1$ -periodic functions. We can remove the need for $a_{n}$ by plugging $h_{\omega}(s) = h'_{\omega}(s) + a_{n}(s)$ for all $\omega$ . Then we can deduce \begin{align*} b_{n}(s) &= b_{n}(s+1)(s+1) - b_{n}(s)s \\ &= \lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega (s+1)} - 1}{e^{-i\omega} - 1} + h_{\omega}(s+1) - \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} - h_{\omega}(s) \\ &= \lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega (s+1)} - 1}{e^{-i\omega} - 1} - \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} \\ &= \cdots
First, let us consider the problem for a fixed nonzero integer $n$ . We propose $$ h_{\omega}(s) = \frac{1-e^{-2\pi i n s}}{e^{-i\omega} - 1} $$ for $\omega\in (2\pi n - \pi, 2\pi n + \pi)\setminus\{2\pi n\}$ , which satisfies $h_{\omega}(s+1) = h_{\omega}(s)$ . We have \begin{align*} \lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + \frac{1-e^{-2\pi i n s}}{e^{-i\omega} - 1} &= \lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega s} - e^{-2\pi i n s}}{e^{-i\omega} - 1} \\[1.0ex] &= \lim_{\varepsilon\rightarrow 0} \frac{e^{-i(2\pi n + \varepsilon) s} - e^{-2\pi i n s}}{e^{-i(2\pi n + \varepsilon)} - 1} \\[1.0ex] &= \lim_{\varepsilon\rightarrow 0} e^{-2\pi i n s}\frac{e^{-i\varepsilon s} - 1}{e^{-i\varepsilon} - 1} \\[1.0ex] &= e^{-2\pi i n s}\frac{-is}{-i} \\[1.0ex] &= e^{-2\pi i n s}s. \end{align*} For a solution that works for all integers $n$ , define $$ h_{\omega}(s) = \frac{1}{e^{-i\omega} - 1} + \sum_{k=-\infty}^{\infty} \frac{-e^{-2\pi i k s}}{e^{-i\omeg
|calculus|limits|analysis|
1
The proof by induction of the multinomial theorem
I looked at the proof by induction of the multinomial theorem on Wikipedia and do not understand how to get the last step. Specifically, I do not know why this equality is true: $$\sum_{k_1 + k_2 + \cdots + k_{m-1} + K = n} \binom{n}{k_1, k_2, \ldots, k_{m-1}, K} x_1^{k_1} x_2^{k_2} \cdots x_{m-1}^{k_{m-1}} \sum_{k_m + k_{m+1} = K} \binom{K}{k_m, k_{m+1}} x_m^{k_m} x_{m+1}^{k_{m+1}} = \sum_{k_1 + k_2 + \cdots + k_{m-1} + k_m + k_{m+1} = n} \binom{n}{k_1, k_2, \ldots, k_{m-1}, k_m, k_{m+1}} x_1^{k_1} x_2^{k_2} \cdots x_{m-1}^{k_{m-1}} x_m^{k_m} x_{m+1}^{k_{m+1}}.$$ Is there a summation identity that states you can multiply the contents of each summation like this, where you combine the two summations into one? I looked at the general summation identities on Wikipedia but cannot see how they would be applied here. I would think it should not be so simple as to just multiply the contents of the summations together because the distributive property should add new terms. Maybe it is the lon
I have thought of a proof that involves working backwards from the desired result. A word of warning: while this proof may be correct, it makes the process of combining two dependent $\sum$ 's more complicated than the linked Wikipedia article and this ProofWiki article suggest. I would appreciate it if someone could give a more direct proof and explain why the process is treated as straightforward. Proof Suppose $m \ge 2$ . Start with the RHS $$\sum_{k_1 + \cdots + k_{m+1} = n} \binom{n}{k_1, \ldots, k_{m+1}} x_1^{k_1} \cdots x_{m+1}^{k_{m+1}},$$ which sums the summand $\sigma(k_1, \ldots, k_{m+1})=\binom{n}{k_1, \ldots, k_{m+1}} x_1^{k_1} \cdots x_{m+1}^{k_{m+1}}$ over all groups of nonnegative integral values $(k_1, \ldots, k_{m+1})$ satisfying the condition $k_1 + \cdots + k_{m+1} = n$ . Now, for every group $(k_1, \ldots, k_{m+1})$ satisfying such condition, $k_m + k_{m+1}$ is some fixed integer. Since $k_m$ and $k_{m+1}$ are nonnegative and $k_m + k_{m+1}$ must not exceed $n$ ,
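A brute-force check of the identity in question (a sketch; the small parameters are my own choice): summing the multinomial terms over all compositions of $n$ reproduces $(x_1+\cdots+x_{m+1})^n$, which is what the combined sum asserts.

```python
from itertools import product
from math import factorial, prod

def multinomial(n, ks):
    # n! / (k_1! k_2! ... k_r!)
    return factorial(n) // prod(factorial(k) for k in ks)

def multinomial_expansion(xs, n):
    # sum over all tuples (k_1,...,k_r) of nonnegative integers with k_1+...+k_r = n
    r = len(xs)
    total = 0.0
    for ks in product(range(n + 1), repeat=r):
        if sum(ks) == n:
            total += multinomial(n, ks) * prod(x**k for x, k in zip(xs, ks))
    return total

xs = [1.5, -2.0, 0.5, 3.0]   # m + 1 = 4 variables
n = 5
lhs = multinomial_expansion(xs, n)
rhs = sum(xs) ** n
```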
|summation|proof-explanation|induction|multinomial-theorem|
0
Convergence of the double series
Let $\{a_i\}_{i\in \mathbb{N}}$ be a sequence of positive real numbers, whose series converges i.e. $\sum\limits_{i\in \mathbb{N}}a_i = a$. Does this imply the convergence of the double series $\sum\limits_{i,j \in \mathbb{N}}a_i a_j?$ If yes, how can this be proven? If not, what are some counterexamples? Any help is appreciated. Thanks in advance. P.S. : A double series $\sum\limits_{i,j \in \mathbb{N}}a_i a_j$ converges to A, if for all $\epsilon > 0,$ there exists $ N_0 \in \mathbb{N}$ such that \begin{eqnarray} \left|\sum\limits_{i=1}^m\sum\limits_{j=1}^n a_i a_j -A \right| \leq \epsilon, \qquad \text{for all } m,n \geq N_0. \end{eqnarray}
Let $S_n=\sum_{i=1}^na_i$ . We are given that $S_n$ converges to $a$ . So given $\varepsilon>0$ , there exists $N_0\in\mathbb{N}$ such that $n>N_0\implies|S_n-a|<\varepsilon$ . Let $n,m>N_0$ ; then $|S_n-a|<\varepsilon$ and $|S_m-a|<\varepsilon$ . Writing $S_nS_m-a^2=(S_n-a)(S_m-a)+a(S_n-a)+a(S_m-a)$ and using the above two inequalities, we get $$|S_nS_m-a^2|\leq|S_n-a||S_m-a|+|a||S_n-a|+|a||S_m-a|<\varepsilon^2+2|a|\varepsilon.$$ Since $\varepsilon$ is arbitrary, $S_nS_m=\sum_{i=1}^n\sum_{j=1}^m a_ia_j$ converges to $a^2$ , i.e. the double series converges to $a^2$ .
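A small numeric illustration (my own example $a_i=2^{-i}$, so $a=1$): the rectangular partial sums $S_mS_n$ approach $a^2$.

```python
def partial_sum(n):
    # S_n = sum_{i=1}^n 2^{-i}, which converges to a = 1
    return sum(2.0**-i for i in range(1, n + 1))

def double_partial_sum(m, n):
    # sum_{i<=m} sum_{j<=n} a_i a_j, which should equal S_m * S_n
    return sum(2.0**-i * 2.0**-j
               for i in range(1, m + 1) for j in range(1, n + 1))

a = 1.0
err = abs(double_partial_sum(60, 60) - a**2)
```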
|real-analysis|calculus|sequences-and-series|analysis|
0
How to prove or disprove a function is Lipschitz continuity?
Because I am not majoring in math, I wonder if there is a standard approach to prove or disprove Lipschitz continuity. In my case, I want to prove that the Mean Squared Error (MSE) loss function for the weight ( w ) is Lipschitz continuous. MSE is defined as: $$ E_{in}(w) = \frac{1}{n}\sum_{i=1}^{n}(y_i - w^Tx_i)^2 $$ As we know, in particular, a real-valued function $f: \mathbb{R}^d → \mathbb{R}$ is called Lipschitz continuous if there exists a positive real constant K such that, for all real $x_1$ and $x_2$ , $$ |f(x_1) - f(x_2)| \leq K \|x_1 - x_2\|_2 $$ I tried to analyze $|E_{in}(w_1) - E_{in}(w_2)|$ , but the result involves $ w_1$ and $w_2$ , so I couldn't find a constant $K$ to satisfy the condition. However, this observation alone is not sufficient to disprove that MSE is Lipschitz continuous. Do I need to provide a counterexample to disprove it? If a counterexample is necessary, does it mean that I should check some simpler conditions to test satisfaction before attempting a
As commentators noted and shown below, a function with quadratic growth such as the MSE above is not Lipschitz. For your counter-example, consider $d=1$ , i.e. $\omega$ is one-dimensional, and take $\omega_1=M$ , $\omega_2=0$ , so that $\|\omega_1-\omega_2\|=M$ . Then $$|E_{in}(\omega_1)-E_{in}(\omega_2)|=\left|\frac{1}{n}\sum_{i=1}^{n}\left[(y_i-Mx_i)^2-y_i^2\right]\right|=\left|\frac{1}{n}\sum_{i=1}^{n}\left[M^2x_i^2-2My_ix_i\right]\right|.$$ Dividing by $\|\omega_1-\omega_2\|=M$ leaves $$\left|\frac{1}{n}\sum_{i=1}^{n}\left(Mx_i^2-2y_ix_i\right)\right|,$$ which grows to infinity as $M$ becomes large, so long as not all the $x_i$ are exactly $0$ ; hence no Lipschitz constant $K$ can exist.
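A numerical sketch of the blow-up (the data values are arbitrary choices of mine, and I compare $\omega=M$ against $\omega=0$): the difference quotient of $E_{in}$ grows linearly in $M$, so no single Lipschitz constant works.

```python
xs = [1.0, 2.0]   # arbitrary data with some x_i != 0
ys = [1.0, 1.0]

def mse(w):
    # E_in(w) = (1/n) * sum (y_i - w*x_i)^2, one-dimensional w
    n = len(xs)
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / n

def difference_quotient(M):
    # |E(M) - E(0)| / |M - 0|: this would be bounded if E were Lipschitz
    return abs(mse(M) - mse(0.0)) / M

r10 = difference_quotient(10.0)      # = |2.5*10 - 3| = 22 for this data
r1000 = difference_quotient(1000.0)  # = |2.5*1000 - 3| = 2497
```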
|real-analysis|optimization|machine-learning|lipschitz-functions|
0
Finding Splitting Field with Minimal Adjoined Elements
I have to find the splitting field of the following in $\mathbb{Q}$ : a) $f(x) = x^6 + 1 $ b) $f(x) = (x^2-3)(x^3+1) $ For a), finding the 6th roots of -1, I concluded that $\pm i, \frac{\pm \sqrt{3} \pm i}{2}$ should be in the splitting field. The minimum adjoined splitting field is thus $\mathbb{Q}[\sqrt{3},i]$ . For b) using similar method, I found that $-1, \sqrt{3}, \frac{1 \pm \sqrt{3}i}{2}$ should be in the splitting field. Is the minimal adjoined field for b) $\mathbb{Q}[\sqrt{3},i]$ as well? I am not so sure in how to find the minimal elements one must adjoin to $\mathbb{Q}$ .
Both splitting fields are equal to $\Bbb Q(i,\sqrt3)=\Bbb Q[i,\sqrt3]$ , as you correctly wrote. You still haven't said what you mean by "minimal adjoined elements", but if what you want is as few generators as possible, the primitive element theorem tells you that one is sufficient. Anyway, you don't need that theorem: you can take for instance $$\alpha:=i+\sqrt3.$$ Indeed, $$i=\frac{\alpha^3}8\quad\text{and}\quad\sqrt3=\alpha-i.$$
|abstract-algebra|splitting-field|
0
Skew Symmetric Matrix for expressing a Rotation
How can a skew-symmetric matrix be used to express rotations about a given axis? I came across this concept while dealing with rotation matrices used in robotics. Can someone elaborate on this concept and explain how a vector is converted to a skew-symmetric matrix? I would also like to go more in depth on this.
A more geometric approach. Suppose you have a unit-length vector $v=\begin {bmatrix} x &y & z \end{bmatrix}^T$ . The rotation matrix can be generated with the so-called Rodrigues formula. With this formula you have: $R(v,\theta)=I+\sin(\theta)S(v)+(1-\cos(\theta))S^2(v)$ where the skew-symmetric matrix $S(v)=\begin {bmatrix} 0 &-z &y \\ z & 0 &-x \\ -y & x &0 \end{bmatrix}$ and $x,y,z$ are exactly the coordinates of the unit vector representing the axis. It is easy to check that the columns of this matrix generate a plane which is perpendicular to the vector $v$ ; it is the plane of rotation, in which all vectors are rotated by the angle $\theta$ . $S(v)$ can also be obtained with a formula using the cross product: $S(v)=\begin {bmatrix} v \times i &v \times j &v \times k \end{bmatrix}$ , where $ i,j,k $ are the vectors of the standard basis, i.e. the columns of the identity matrix.
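A small self-contained sketch of the formula (pure-Python matrix helpers; the function names are mine): for the unit axis $v=(0,0,1)$ and $\theta=\pi/2$, the Rodrigues matrix rotates $(1,0,0)$ to $(0,1,0)$ and leaves the axis fixed.

```python
import math

def skew(v):
    # S(v) as in the answer above
    x, y, z = v
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues(v, theta):
    # R = I + sin(theta) S(v) + (1 - cos(theta)) S(v)^2
    S = skew(v)
    S2 = matmul(S, S)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), math.cos(theta)
    return [[I[i][j] + s * S[i][j] + (1 - c) * S2[i][j] for j in range(3)]
            for i in range(3)]

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

R = rodrigues((0.0, 0.0, 1.0), math.pi / 2)
rotated = apply(R, (1.0, 0.0, 0.0))      # should be (0, 1, 0)
axis_image = apply(R, (0.0, 0.0, 1.0))   # the axis should be fixed
```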
|linear-algebra|matrices|rotations|
0
Computing $g'(3)$ given $g(x) = xf(x)/(x^3+5),\, f(3),\, f'(3)$
Given the function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x)=\frac{xf(x)}{x^3+5}$ , my task is to find derivative of function g at $x=3$ where $f(3) = -1,f'(3) = 4$ . Should I substitute in the value $3$ before finding the derivative or get the general form and then substitute in the value of $x$ ?
You know about $f(3)=-1,f'(3)=4$ and $g(x)=\frac {xf(x)}{x^3+5}\to g'(3)=?$ $$g'(x)=\\\frac {(xf(x))'\times (x^3+5)-(x^3+5)'\times(xf(x))}{(x^3+5)^2}=\\\frac {(1f(x)+xf'(x))\times (x^3+5)-(3x^2+0)\times(xf(x))}{(x^3+5)^2}$$ then put $x=3$ and you will have $$g'(3)=\\\frac {(1f(3)+3f'(3))\times (3^3+5)-(3(3)^2+0)\times(3f(3))}{(3^3+5)^2}=\\\frac {(-1+3(4))\times (27+5)-(27+0)\times(3(-1))}{(27+5)^2}=\\\frac{11\times 32+81}{1024}=\frac{433}{1024}$$
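A numeric cross-check (a sketch: the particular $f(x)=4(x-3)-1$ is an arbitrary choice of mine satisfying $f(3)=-1$ and $f'(3)=4$): a central difference of $g$ at $3$ matches the value of the displayed formula.

```python
def f(x):
    # one concrete function with f(3) = -1 and f'(3) = 4
    return 4.0 * (x - 3.0) - 1.0

def g(x):
    return x * f(x) / (x**3 + 5.0)

# value of the final displayed expression
formula = ((-1 + 3 * 4) * (3**3 + 5) - (3 * 3**2) * (3 * -1)) / (3**3 + 5) ** 2

h = 1e-6
numeric = (g(3.0 + h) - g(3.0 - h)) / (2 * h)
```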
|calculus|limits|derivatives|
1
How to rigorously prove that $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} = e^{-1}$
I saw this problem: Find $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} $ I tried to prove this problem and got: $$\ln(1+x) = \sum_{k=1}^ \infty \frac{(-1)^{k+1} x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$\ln(1-x)=- \sum_{k=1}^ \infty \frac{ x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$L:= \lim_{n \to \infty }\prod_{r=1}^n \frac{n^2-r}{n^2+r}$$ $$\ln(L) =\lim_{n \to \infty } \sum_{r=1}^n \ln\left(1 - \frac{r}{n^2}\right) - \ln\left(1 + \frac{r}{n^2}\right) $$ $$=-2\lim_{n \to \infty } \sum_{r=1}^n \sum_{k\in 2\mathbb{ N}-1} \frac{ \left(\frac{r}{n^2} \right)^k}{ k} = -1$$ $$L=e^{-1}$$ But this proof missing a few details and tried to complete it $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{k+1}}=\lim_{n\to \infty }\frac{\sum\limits_{r=1}^n\left(\frac rn\right)^k}n=\int_0^1x^kdx=\frac1{k+1}$$ hence for all $k >1 $ $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{2k}} =\lim_{n \to \infty }\frac{n^{k+1}}{(k+1)n^{2k}}=\lim_{n \to
You can first prove that $\log\tfrac{n^2-r}{n^2+r}$ differs from $-2r/n^2$ by $\mathcal{O}(r^3\cdot n^{-6})$ . Then you sum over all $r$ values and obtain on the one hand $-1-1/n$ and on the other an error term $\mathcal{O}(n^4\cdot n^{-6})=\mathcal{O}(n^{-2})$ . This result is a bit stronger than needed, by the way, since it also gives you the rate of convergence towards the limit.
|real-analysis|calculus|sequences-and-series|limits|
0
Showing There is No Group Epimorphism from $S_n \longrightarrow \mathbb{Z}/2\times\mathbb{Z}/2$, $n\geq 1$
I know that the function $\text{sgn}: S_n\longrightarrow \mathbb{Z}/2$ is a unique group epimorphism. I am having trouble proving that there does not exist such an epimorphism $S_n\longrightarrow \mathbb{Z}/2\times\mathbb{Z}/2$ . My approach is as follows As a suggestion by my instructor, I consider a homomorphism $\phi: \mathbb{Z}/2\times\mathbb{Z}/2\longrightarrow\mathbb{Z}/2$ , defined by $(a,b)\longmapsto a+b$ , and suppose there exists an epimorphism $\psi: S_n\longrightarrow \mathbb{Z}/2\times\mathbb{Z}/2$ . Then $\phi\circ\psi: S_n\longrightarrow \mathbb{Z}/2$ . We compare this to the $\text{sgn}$ function since we know it has the same domain and codomain as this composite function. So I assume we are supposed to identify some choice (or rather, conclude there is no such choice of) of $\psi$ such that this composite is equivalent to $\text{sgn}$ . I tried defining a $\psi$ like $(0,\cdots,n-1)\mapsto(n_i,n_j)$ , where $n_i$ and $n_j$ are some natural numbers in that $n$
By contradiction, if it existed, then $S_n$ would have a normal subgroup of order $\frac{n!}{4}$ (first homomorphism theorem). But, for $n\ge5$ , the only proper normal subgroup of $S_n$ is $A_n$ , of order $\frac{n!}{2}$ . On the other hand, $S_4$ hasn't got any normal subgroup of order $6$ . The cases $n=3,2$ are trivial. This, jointly with the fact that $S_4$ has a normal subgroup of order $4$ , shows more, indeed: that the only possible epimorphism from $S_n$ , for every $n\ge2$ , is onto $C_2$ , and the case $n=4$ yields as additional possible epimorphisms those onto either $C_6$ or $S_3$ .
|abstract-algebra|group-theory|permutations|symmetric-groups|group-homomorphism|
0
Probability distribution of the number of unique elements when picking $m$ items from $n$ with replacement
Consider a length- $m$ array of unique elements $A=\{1,2,\dots, m\}$ . Independently poll $n$ samples (with replacement, so duplicate elements are possible) and place them into array $B$ . Then, you de-duplicate $B$ to produce array $C$ with $k$ unique elements (of course, $k\le n$ ). Question: what is the probability $P(k\ge i)$ for each $i$ ?
The probability $\mathsf P(k=i)$ is the probability that $k$ different elements appeared. This is $\left\{n\atop k\right\}\frac{m!}{(m-k)!}m^{-n}$ , where $\left\{n\atop k\right\}$ is a Stirling number of the second kind that counts the ways to partition $n$ samples into $k$ groups, $\frac{m!}{(m-k)!}$ counts the ways to assign $k$ out of $m$ elements to these groups, and $m^n$ is the total number of outcomes. Then $\mathsf P(k\ge i)=\sum_{j\ge i}\mathsf P(k=j)$ .
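A brute-force verification of the formula for small parameters (the helper names are mine): enumerate all $m^n$ equally likely sample sequences, count distinct elements, and compare with the Stirling-number expression.

```python
from itertools import product
from math import factorial

def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recurrence
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def p_exact(m, n, i):
    # P(k = i) = {n brace i} * m!/(m-i)! / m^n
    return stirling2(n, i) * factorial(m) // factorial(m - i) / m**n

def p_brute(m, n, i):
    # enumerate every possible sample sequence of length n over m elements
    hits = sum(1 for seq in product(range(m), repeat=n) if len(set(seq)) == i)
    return hits / m**n

m, n = 4, 5
max_err = max(abs(p_exact(m, n, i) - p_brute(m, n, i)) for i in range(1, m + 1))
total = sum(p_exact(m, n, i) for i in range(1, min(m, n) + 1))
```

$P(k\ge i)$ is then just the tail sum of `p_exact` over $j\ge i$, as in the answer.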
|probability|independence|
1
How to rigorously prove that $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} = e^{-1}$
I saw this problem: Find $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} $ I tried to prove this problem and got: $$\ln(1+x) = \sum_{k=1}^ \infty \frac{(-1)^{k+1} x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$\ln(1-x)=- \sum_{k=1}^ \infty \frac{ x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$L:= \lim_{n \to \infty }\prod_{r=1}^n \frac{n^2-r}{n^2+r}$$ $$\ln(L) =\lim_{n \to \infty } \sum_{r=1}^n \ln\left(1 - \frac{r}{n^2}\right) - \ln\left(1 + \frac{r}{n^2}\right) $$ $$=-2\lim_{n \to \infty } \sum_{r=1}^n \sum_{k\in 2\mathbb{ N}-1} \frac{ \left(\frac{r}{n^2} \right)^k}{ k} = -1$$ $$L=e^{-1}$$ But this proof missing a few details and tried to complete it $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{k+1}}=\lim_{n\to \infty }\frac{\sum\limits_{r=1}^n\left(\frac rn\right)^k}n=\int_0^1x^kdx=\frac1{k+1}$$ hence for all $k >1 $ $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{2k}} =\lim_{n \to \infty }\frac{n^{k+1}}{(k+1)n^{2k}}=\lim_{n \to
By noting that $ \log(1-x) - \log(1+x) = -2 \int_{0}^{x} \frac{1}{1-t^2} \, \mathrm{d}t $ , you can prove that $$ -2x \geq \log(1-x) - \log(1+x) \geq -\frac{2x}{1-x^2}. $$ Then for $n \geq 1$ and $0 \leq r \leq n$ , $$ -\frac{2r}{n^2} \geq \log(1-r/n^2) - \log(1+r/n^2) \geq -\frac{2r}{n^2-(r/n)^2} \geq - \frac{2r}{n^2-1}, $$ hence by summing this for $ r = 1, 2, \ldots, n$ (using $\sum_{r=1}^{n}2r=n(n+1)$ ) we get: $$ - \frac{n(n+1)}{n^2} \geq \log \prod_{r=1}^{n} \frac{n^2 - r}{n^2 + r} \geq - \frac{n(n+1)}{n^2 - 1}. $$ This is enough to conclude that $\log L = -1$ and hence $L = e^{-1}$ .
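A quick numeric check (a sketch; $n=500$ is my choice): the log of the product is within $2/n$ of $-1$, so the product is close to $e^{-1}$.

```python
import math

def log_product(n):
    # log of prod_{r=1}^n (n^2 - r)/(n^2 + r)
    return sum(math.log(n * n - r) - math.log(n * n + r) for r in range(1, n + 1))

n = 500
gap = abs(log_product(n) + 1.0)
```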
|real-analysis|calculus|sequences-and-series|limits|
0
Clarification on Simplification of a radical
I recently solved the following integral: $$ \int _1^{\sqrt{3}}\frac{\sqrt{1+x^2}}{x^2}dx $$ After integrating, I obtained the result: $$ \frac{1}{2}\ln\frac{2-\sqrt2}{2-\sqrt3}+\frac{1}{2}\ln\frac{2+\sqrt3}{2+\sqrt2}+\sqrt2-\frac{2}{\sqrt3} $$ However, the answer provided in my textbook differs slightly: $$ \sqrt{2}-\frac{2}{\sqrt{3}}+\log\left(\frac{2+\sqrt{3}}{1+\sqrt{2}}\right) $$ The provided answer in the book clearly looks better and more concise. I'm particularly interested in understanding how $$ \sqrt{\frac{(2-\sqrt{2})(2+\sqrt{3})}{(2-\sqrt{3})(2+\sqrt{2})}} $$ is exactly equal to $$ \frac{2+\sqrt{3}}{1+\sqrt{2}} $$ Could someone kindly provide an explanation for this simplification as it is not obvious to me? Thank you!
Textbook Answer Integrating by parts transforms the integral into $$ \begin{aligned} \int_1^{\sqrt{3}} \frac{\sqrt{1+x^2}}{x^2} d x & =-\left[\frac{\sqrt{1+x^2}}{x}\right]_1^{\sqrt{3}}+\int_1^{\sqrt{3}} \frac{1}{x} \cdot \frac{x}{\sqrt{1+x^2}} d x \\ & =\sqrt{2}-\frac{2}{\sqrt{3}}+\int_1^{\sqrt{3}} \frac{d x}{\sqrt{1+x^2}} \end{aligned} $$ Putting $x\mapsto \tan \theta$ gives $$ \begin{aligned} \int_1^{\sqrt{3}} \frac{d x}{\sqrt{1+x^2}} = & \int_{\frac{\pi}{4}}^{\frac{\pi}{3}} \frac{1}{\sec \theta} \cdot \sec ^2 \theta d \theta \\ = & \left[\ln \left|\sec \theta+\tan \theta\right|\right]_{\frac{\pi}{4}}^{\frac{\pi}{3}} \\ = & \ln (2+\sqrt{3})-\ln (\sqrt{2}+1) \\ = & \ln \left|\frac{2+\sqrt{3}}{1+\sqrt{2}}\right| \end{aligned} $$
|integration|radicals|
0
Solving non-homogenous differential equation $\ddot{y}+\frac km\dot{y}=-g\hat{j}$
How to solve this non-homogeneous second-order linear ordinary differential equation $$\ddot{y}+\frac km\dot{y}=-g\hat{j}$$ $\hat{j}$ is just unit vector in $y$ direction. I found the solution to homogenous part $y_h(t)=c_1+c_2e^{-\frac{k}{m}t}$ . By variation of parameters assume $y_p(t)=c_1(t)+c_2(t)e^{-\frac{k}{m}t}$ . I calculated $\dot{y}_p=c_1'(t)+c_2'(t)e^{-\frac{k}{m}t}-\frac{k}{m}c_2(t)e^{-\frac{k}{m}t}$ and $\ddot{y}_p(t)=c_1''(t)+c_2''(t)e^{-\frac{k}{m}t}-c_2'(t)\frac{k}{m}e^{-\frac{k}{m}t}-c_2''(t)\frac{k}{m}e^{-\frac{k}{m}t}+(\frac{k}{m})^2e^{-\frac{k}{m}t}$ , if I didn't make any errors.
When the inhomogeneous part of a linear ODE is constant, there are far easier methods to get the particular solution. You just have to find a solution to the equation with only the lowest order term, in this case $$\frac{k}{m}\dot y=-gj.~~(*)$$ The reason is that any $y$ solving this equation will yield $0$ when substituted into the higher order terms, because the lowest order term already makes it a constant, and any higher order derivatives make it 0. Here for instance, $\ddot y=\partial_t\left(-\frac{mg}{k}j\right)=0$ , so it also solves the original, higher order equation. Now finding a particular solution to $(*)$ is easy: $$y_p(t)=-\frac{gmt}{k}j.$$ In case this is, in fact, about a free falling object: This solution corresponds to the limiting case where friction and gravitation are in equilibrium, yielding a constant velocity of $-\frac{gm}{k}j$ .
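A finite-difference sanity check (the parameter values are mine): $y_p(t)=-\frac{gm}{k}t$ indeed satisfies $\ddot y+\frac km\dot y=-g$ componentwise.

```python
k, m, g = 0.7, 2.0, 9.81

def y_p(t):
    # candidate particular solution for the j-component
    return -g * m * t / k

h = 1e-3
residuals = []
for t in (0.0, 0.5, 3.0):
    ydot = (y_p(t + h) - y_p(t - h)) / (2 * h)          # central difference
    yddot = (y_p(t + h) - 2 * y_p(t) + y_p(t - h)) / h**2
    residuals.append(yddot + (k / m) * ydot + g)        # should be ~ 0
max_res = max(abs(r) for r in residuals)
```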
|ordinary-differential-equations|
0
Clarification on Simplification of a radical
I recently solved the following integral: $$ \int _1^{\sqrt{3}}\frac{\sqrt{1+x^2}}{x^2}dx $$ After integrating, I obtained the result: $$ \frac{1}{2}\ln\frac{2-\sqrt2}{2-\sqrt3}+\frac{1}{2}\ln\frac{2+\sqrt3}{2+\sqrt2}+\sqrt2-\frac{2}{\sqrt3} $$ However, the answer provided in my textbook differs slightly: $$ \sqrt{2}-\frac{2}{\sqrt{3}}+\log\left(\frac{2+\sqrt{3}}{1+\sqrt{2}}\right) $$ The provided answer in the book clearly looks better and more concise. I'm particularly interested in understanding how $$ \sqrt{\frac{(2-\sqrt{2})(2+\sqrt{3})}{(2-\sqrt{3})(2+\sqrt{2})}} $$ is exactly equal to $$ \frac{2+\sqrt{3}}{1+\sqrt{2}} $$ Could someone kindly provide an explanation for this simplification as it is not obvious to me? Thank you!
Noting that $$ \frac{2-\sqrt{2}}{2-\sqrt{3}} \cdot \frac{\sqrt{2}+1}{2+\sqrt{3}}=\sqrt{2} \quad \Rightarrow \quad \frac{2-\sqrt{2}}{2-\sqrt{3}}=\frac{\sqrt{2}(2+\sqrt{3})}{\sqrt{2}+1} $$ Therefore $$ \begin{aligned} \frac{(2-\sqrt{2})(2+\sqrt{3})}{(2-\sqrt{3})(2+\sqrt{2})} & =\frac{\sqrt{2}(2+\sqrt{3})^2}{(\sqrt{2}+1)(2+\sqrt{2})} \\ & =\left(\frac{2+\sqrt{3}}{1+\sqrt{2}}\right)^2 \end{aligned} $$
|integration|radicals|
0
How to evaluate $\int_0^{\frac{\pi}{2}} \frac{\ln(\cos(x))}{1+\sin^2(x)} \, dx$
I saw this problem $$\int_0^{\frac{\pi}{2}} \frac{\ln(\cos(x))}{1+\sin^2(x)} \, dx$$ on my problem book but I have no idea how to evaluate it. I reduced the integral like this by king's rule $$I_1= \int_0^{\frac{\pi}{2}} \frac{\ln(\cos(x))}{1+\sin^2(x)}dx =\int_0^{\frac{\pi}{2}} \frac{\ln(\sin(x))}{1+\cos^2(x)}dx $$ by using the double angle rule $$I_1=\int_0^{\frac{\pi}{2}} \frac{\ln(2\sin(x/2) \cos(x/2))}{2\cos(x/2)^2}dx$$ substituting $x=2u$ (and then renaming $u$ back to $x$ ) gives $$I_1=\int_0^{\frac{\pi}{4}} \frac{\ln(2)}{\cos(x)^2}dx+\int_0^{\frac{\pi}{4}} \frac{\ln(\sin(x))}{\cos(x)^2}dx+\int_0^{\frac{\pi}{4}} \frac{\ln(\cos(x))}{\cos(x)^2}dx$$ now I will denote $\int_0^{\frac{\pi}{4}}\frac{\ln(2)}{\cos(x)^2}dx$ by $I_2$ , and I will denote $\int_0^{\frac{\pi}{4}}\frac{\ln(\sin(x))}{\cos(x)^2}dx$ by $I_3$ , and I will denote $\int_0^{\frac{\pi}{4}} \frac{\ln(\cos(x))}{\cos(x)^2}dx$ by $I_4$ $I_2$ is easy to do $$I_3=\int_0^{\frac{\pi}{4}}\frac{\ln(\tan(x))}{\cos(x)^2}dx +I_4$$ I will denote $\int_0^{\frac{\pi}{4}}\frac{\ln(\tan(x))
Let $t=\tan x$ , then $$ \begin{aligned} \int_0^{\frac{\pi}{2}} \frac{\ln (\cos x)}{1+\sin ^2 x} d x = & \int_0^{\frac{\pi}{2}} \frac{\sec ^2 x \ln (\cos x)}{\sec ^2 x+\tan ^2 x} d x \\ = & -\frac{1}{2} \int_0^{\infty} \frac{\ln \left(1+t^2\right)}{1+2 t^2} d t \end{aligned} $$ Using contour integration along the anti-clockwise path $$\gamma=\gamma_{1} \cup \gamma_{2} \textrm{ where } \gamma_{1}(t)=t+i 0\ (-R \leq t \leq R) \textrm{ and } \gamma_{2}(t)=R e^{i t}\ (0 \leq t \leq \pi),$$ and the identity $\ln(a^2+b^2)=2\Re (\ln(a-bi))$ , we have $$ \begin{aligned} \int_0^{\infty} \frac{\ln \left(1+t^2\right)}{1+2 t^2} d t&=\frac{1}{2} \int_{-\infty}^{\infty} \frac{\ln \left(1+t^2\right)}{1+2 t^2} d t \\ & =\Re \int_{-\infty}^{\infty} \frac{\ln (1-t i)}{1+2 t^2} d t \\ & =\Re \int_\gamma \frac{\ln (1-z i)}{1+2 z^2} d z \\ & =\Re\left[2 \pi i \cdot \lim _{z \rightarrow \frac{i}{\sqrt{2}}}\left(z-\frac{i}{\sqrt{2}}\right) \frac{\ln (1-z i)}{2\left(z^2+\frac{1}{2}\right)}\right] \\ & =\frac{\pi}{\
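A numeric check of the first substitution step (a sketch; the quadrature grids are my choice): the original integral agrees with $-\frac12\int_0^\infty \frac{\ln(1+t^2)}{1+2t^2}\,dt$.

```python
import math

def lhs(n=200_000):
    # midpoint rule for the original integral over (0, pi/2)
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    return h * sum(math.log(math.cos(a + (i + 0.5) * h))
                   / (1 + math.sin(a + (i + 0.5) * h) ** 2) for i in range(n))

def rhs(n=200_000):
    # -(1/2) * integral over (0, inf), mapped to (0, 1) via t = u/(1-u)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1 - u)
        total += math.log(1 + t * t) / (1 + 2 * t * t) / (1 - u) ** 2
    return -0.5 * h * total

left, right = lhs(), rhs()
```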
|calculus|integration|definite-integrals|
0
Showing $\text{Aut}_\mathsf{Grp}(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}) \cong S_3$
So I believe I've finished the proof, but it all amounts to case-work and I really think (and hope) there's a better approach than this. Here's what I've done: Attempt: Within $\text{Aut}_\mathsf{Grp}(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z})$ identities must be mapped to identities thus what defines each isomorphism is the bijective function between its last three elements: \begin{matrix} (0,0) & \mapsto & (0,0) \\ (0,1) & & (0,1) \\(1,0) & & (1,0) \\(1,1) & & (1,1) \\ \end{matrix} If every bijective function between these last three elements constitutes a homomorphism (including the first unchanged mapping of identities), then $\text{Aut}_\mathsf{Grp}(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}) \cong S_3$ in that isomorphisms in $\text{Aut}_\mathsf{Grp}(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z})$ can be identified with permutations of three elements, which is exactly what $S_3$ is. I then, essentially go case-by-case showing every one of the six diffe
Since every nontrivial element of the Klein 4-group $K_4:=\{1,a,b,ab\}$ has order $2$ , each element of $\operatorname{Aut}(K_4)$ induces a permutation of the set $X:=\{a,b,ab\}$ . Therefore, $\operatorname{Aut}(K_4)\le S_X\cong S_3$ . But $\operatorname{Aut}(K_4)$ has both elements of order $2$ and elements of order $3$$^\dagger$ , and hence $\operatorname{Aut}(K_4)\cong S_3$ . $^\dagger$ In fact, $\varphi(1)=1$ , $\varphi(a)=b$ , $\varphi(b)=a$ , $\varphi(ab)=ab$ is operation-preserving of order $2$ , as well as $\varphi(1)=1$ , $\varphi(a)=b$ , $\varphi(b)=ab$ , $\varphi(ab)=a$ is operation-preserving of order $3$ .
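The argument can be confirmed by brute force (a sketch; the helper names are mine): encode $K_4$ as $\mathbb{Z}/2\times\mathbb{Z}/2$, enumerate the bijections fixing the identity, and keep those preserving the operation. All six permutations of the nontrivial elements survive, with the element orders of $S_3$.

```python
from itertools import permutations

ELEMS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def op(a, b):
    # componentwise addition mod 2
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def automorphisms():
    e = (0, 0)
    autos = []
    for perm in permutations(ELEMS[1:]):          # identity must map to identity
        phi = dict(zip(ELEMS[1:], perm))
        phi[e] = e
        if all(phi[op(a, b)] == op(phi[a], phi[b]) for a in ELEMS for b in ELEMS):
            autos.append(phi)
    return autos

def compose(f, g):
    return {x: f[g[x]] for x in ELEMS}

def order(f):
    identity = {x: x for x in ELEMS}
    g, n = f, 1
    while g != identity:
        g, n = compose(f, g), n + 1
    return n

autos = automorphisms()
orders = sorted(order(f) for f in autos)   # S_3 has orders [1, 2, 2, 2, 3, 3]
```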
|group-theory|proof-verification|
0
Is there a first countable, $T_1$, weakly Lindelof, sequentially compact space which is not also compact?
Spinning things out from a recent question , let's recall that every second countable sequentially compact space is compact, because every Lindelof countably compact space is compact. We cannot weaken second countable to first countable; the first uncountable ordinal is a direct example of a first countable sequentially compact space which is not compact. We similarly cannot weaken second countable to weakly Lindelof (every open cover has a countable subcollection whose union is dense); the altered long ray is weakly Lindelof and sequentially compact, but not compact. The pi-Base lacks a proof for weakly Lindelof, so let me share one: in fact, I'll show every open cover has a finite subcollection whose union is dense. Let $X=Y\cup\{\infty\}$ be the altered long ray, where $Y=\omega_1\times[0,1)$ is an open subspace with its lexicographic topology, and each neighborhood of $\infty$ contains some $U_\alpha=\{\infty\}\cup\Big(\omega_1\setminus\alpha\Big) \times (0,1)$ . Take an open cover
Here is an example: It is easy to see that $\{[0, \alpha] \setminus F: \alpha < \omega_1,\ F \subset [0,\alpha),\ F \text{ finite}\}$ is a base for a topology on $\omega_1$ . Obviously, this topology is $T_1$ . For $\alpha < \omega_1$ , $\{[0, \alpha] \setminus F: F \subset [0, \alpha) , F \text{ finite}\}$ is a countable neighborhood base of $\alpha$ , hence the space is first countable . It is sequentially compact : Let $(\alpha_n)_{n<\omega}$ be a sequence in $\omega_1$ . W.l.o.g. we may assume that $(\alpha_n)_{n<\omega}$ is pairwise distinct and strictly increasing. Then it converges to its supremum $\sup_{n<\omega}\alpha_n < \omega_1$ . It is not compact : $\{[0, \alpha]: \alpha < \omega_1\}$ is an open cover without a finite subcover. It is weakly Lindelöf : Let $\mathfrak{U}$ be an open cover of $\omega_1$ . For each $n < \omega$ choose $U_n \in \mathfrak{U}$ such that $n \in U_n$ . Then $\{U_n: n < \omega\}$ is a countable subcollection of $\mathfrak{U}$ whose union contains $\omega$ , which meets every nonempty basic open set, hence is dense in $\omega_1$ . Remarks It is easy to see that separable spaces are weakly Lindelöf, and a first countable space is sequentially compact iff it is countably compact
|general-topology|compactness|separation-axioms|lindelof-spaces|
1
Find surface which generated by revolving a line in $\mathbb{R}^3$
Problem : Let $l$ be a line which passes two points : $(1,0,0), (1,1,1)$ . And $S $ be a surface which generated by revolving line $l$ around $z$ -axis. Find a volume enclosed by surface $S$ and two planes : $z=0, z=1$ . My Attempt Parametric equation of $l$ can be obtained easily : $$l(t) = (1,t,t)$$ And, distance $d(t)$ between $l$ and $z$ -axis is same with distance between $(0,0, t), (1,t,t)$ : $$d(t) = \sqrt{1+t^2}$$ Since $l(0)=(1,0,0), l(1)=(1,1,1)$ , volume $V$ what we want is : $$\begin{align} V &= \pi\int_0^1 d(t)^2 dt \\ &= \pi\int_0^1 (1+t^2)dt \\ &= \frac{4}{3}\pi\end{align}$$ Here are my main questions : Is this method legit? If not, I'd like to know what the problem is. If yes, is this method can be used anytime? (For problems which request volume enclosed by surface which generated revolving line or curve and planes.) Can I find equation of $S$ explicitly to evaluate $V$ with triple integral?
Your approach is correct. As regards the second question, the surface S is a one-sheeted hyperboloid which is an example of ruled surface . Indeed, by rotating each point $(1,t,t)$ of the line, individually, we find the disjoint circles $$x^2+y^2=1^2+t^2,\quad z=t, \quad t\in\mathbb{R}.$$ After eliminating the parameter $t$ , we obtain the equation $$x^2+y^2=z^2+1.$$ Therefore $V$ is the volume of the solid $$D=\{(x,y,z): x^2+y^2\leq z^2+1, z\in [0,1]\}$$ and we find $$V=\int_{z=0}^1\int_{\rho=0}^{\sqrt{z^2+1}}\int_{\theta=0}^{2\pi} \rho d\theta d\rho dz=\pi\int_{z=0}^1(z^2+1)\,dz=\frac{4}{3}\pi.$$
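A quick numeric confirmation of the volume (midpoint rule; the grid size is my choice): $\pi\int_0^1(z^2+1)\,dz=\frac{4\pi}{3}$.

```python
import math

def volume(n=2000):
    # V = pi * integral of (z^2 + 1) over [0, 1], via the midpoint rule
    h = 1.0 / n
    return math.pi * h * sum(((i + 0.5) * h) ** 2 + 1.0 for i in range(n))

err = abs(volume() - 4 * math.pi / 3)
```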
|calculus|multivariable-calculus|volume|solid-of-revolution|
1
Is the notion "If a polynomial has small coefficients (relative to the exponent), then it has small roots" true?
Basically I'm trying to find good starting values for algorithms that determine the roots of a polynomial (e.g. Newton's method). Obviously we are trying to get as close as possible to the root as we can, but how can we estimate where the roots of a polynomial lie? Is an argument like "If the coefficients are relatively small compared to the degree of the polynomial, then the magnitude of the roots is somewhere near the coefficients" correct? Are there counterexamples of polynomials with very small coefficients and very large roots?
No. Take a polynomial with all roots "very large" (according to the context), and scale it down by dividing by an even larger number. To employ the counterexample in the comments, take $p(x) = x - 10^{50}$ . Then, $q(x) = 10^{-100}p(x)$ is also a polynomial, with root $10^{50}$ . But $$q(x) = \color{blue}{10^{-100}}x - \color{blue}{10^{-50}}$$
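A tiny numeric illustration of the scaling trick (the helper name is mine): shrinking every coefficient leaves the root unchanged.

```python
# p(x) = x - 1e50 has the huge root 1e50.
# q(x) = 1e-100 * p(x) has tiny coefficients (1e-100 and -1e-50), same root.
def root_of_linear(a, b):
    # root of a*x + b
    return -b / a

root_p = root_of_linear(1.0, -1e50)
root_q = root_of_linear(1e-100, -1e-50)
```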
|polynomials|numerical-methods|roots|
0
How to solve a method of characteristics problem with only a point boundary condition?
I am familiar with the method of characteristics. However, I have a problem that appears to have a specified solution only at a single point: solving for $V(x,y)$ , where $$yV_x-(ax+bx^2y)V_y = -cx^2y^2-dx^2-ey^2, \quad V(0,0) = 0,$$ and where $a,b,c,d,e$ are scalar constants. I'm not sure how to define my "data" and propagate it to execute the method. Any help would be greatly appreciated.
A boundary condition reduced to a single point doesn't define a unique solution. To illustrate with a simplified example, consider the case $a=d=e=0$ : $$yV_x-bx^2yV_y=-cx^2y^2$$ The characteristic system is $\frac{dx}{y}=\frac{dy}{-bx^2y}=\frac{dV}{-cx^2y^2}$ First characteristic equation, from $\frac{dx}{y}=\frac{dy}{-bx^2y}$ , i.e. $dy=-bx^2\,dx$ : $\quad 3y+bx^3=c_1$ Second characteristic equation, from $\frac{dy}{-bx^2y}=\frac{dV}{-cx^2y^2}$ , i.e. $b\,dV=cy\,dy$ : $\quad V-\frac{c}{2b}y^2=c_2$ General solution of the PDE from $c_2=F(c_1)$ : $$V(x,y)=\frac{c}{2b}y^2+F(3y+bx^3)$$ $F$ is an arbitrary function ( to be determined according to the condition ). CONDITION : $V(0,0)=0=\frac{c}{2b}\cdot 0^2+F(3\cdot 0+b\cdot 0^3)=F(0)$ This only implies $F(0)=0$ . There are infinitely many functions $F$ such that $F(0)=0$ . Thus there are infinitely many solutions $V(x,y)=\frac{c}{2b}y^2+F(3y+bx^3)$ .
|partial-differential-equations|characteristics|
0
How to code how many prime numbers are there between 1 million and 2 million on MATLAB
How would I find the total number of prime numbers between 1 million and 2 million on MATLAB. I have the code for displaying every prime number between 2 integers being: n = firstnumber : secondnumber; p = isprime(n); n(p) %displays the primes But how would I code it so it gives me a total number of primes between 2 numbers. Any help will be appreciated.
With Mathematica: Select[Range[10^6, 2*10^6], PrimeQ] // Length // Timing returns {0.515625, 70435}. I suspect an equally fast test exists in MATLAB, applying the function isprime() to an array.
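For comparison, here is a Python analogue of the MATLAB one-liner `sum(isprime(n))` (a sketch, not from the original answer): sieve up to $2\cdot10^6$ and count the primes in the range.

```python
# Sieve of Eratosthenes, then count primes in [lo, hi] inclusive.
def count_primes(lo, hi):
    sieve = bytearray([1]) * (hi + 1)
    sieve[0:2] = b"\x00\x00"                    # 0 and 1 are not prime
    for p in range(2, int(hi ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve[lo : hi + 1])

n_primes = count_primes(10**6, 2 * 10**6)       # 70435, matching Mathematica
```

In MATLAB itself, `sum(isprime(firstnumber:secondnumber))` applies the same idea: `isprime` returns a logical array and `sum` counts the true entries.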
|prime-numbers|matlab|
0
Show that $\sigma$ is an algebra on $[0,1)$
Let $X=[0,1)$ and \begin{align*} \sigma\,=\,\{\varnothing\}\cup\left\{\,\bigcup_{i=1}^m\,[a_i,b_i)\,\bigg|\,m\in\mathbb N,\,a_i,b_i\in[0,1]: \ a_i I want to show $\sigma$ is an algebra on $[0,1)$ . My proof : First, since $X=[0,1)$ , $X$ is clearly an element of $\sigma$ . Next, let $S\in\sigma$ . If $S=\varnothing$ , we have $X\setminus S=X\in\sigma$ . So we consider $S\ne\varnothing$ . There exists $m\in\mathbb N,\ a_1,b_1,\cdots,a_m,b_m\in[0,1) $ such that \begin{align*} a_i Hence \begin{align*} X\setminus S\,&=\,[0,1)\setminus \left(\bigcup_{i=1}^m\,[a_i,b_i) \right) \newline &=\,\bigcap_{i=1}^m\,\Big[[0,1)\setminus[a_i,b_i)\Big] \newline &=\,\begin{cases} [b_1,1) \cap \displaystyle\bigcap_{i=2}^m\,\Big[ [0,a_i) \cup [b_i,1) \Big] &\text{if } a_1=0, \\ \displaystyle\bigcap_{i=1}^m\,\Big[ [0,a_i) \cup [b_i,1) \Big] &\text{if } a_1>0 \end{cases} \end{align*} We have \begin{align} &\Big[[0,a_2) \cup [b_2,1)\Big]\cap\Big[[0,a_3) \cup [b_3,1)\Big] \\ =\ \,&\begin{cases} [0,a_2)\cup[b_2,
It would be enough to show that \begin{align*} \overline S=X\setminus S &= [0, a_1[\cup [b_m,1[\cup \bigcup_{i=2}^m [b_{i-1}, a_i[ \end{align*} Note that here $[x,x[$ is taken to be $\emptyset$ ; this convention also removes the need for the $\emptyset$ special case in your definition. In order to prove this, we need $\overline S\subseteq X$ , $S\cap\overline S=\emptyset$ , and $S\cup \overline S=X$ . The first is trivially true, and the second and third can be seen by writing the union as disjoint intervals, interleaving sets from $S$ and from $\overline S$ : \begin{align*} S\cup \overline S = [0,a_1[\cup [a_1,b_1[\cup[b_1, a_2[\cup [a_2, b_2[\cup\dots\cup [b_{m-1}, a_m[\cup [a_m, b_m[ \cup [b_m, 1[ \end{align*} It is clear from the assumptions that this union is disjoint and that it equals $X$ .
|measure-theory|elementary-set-theory|
0
Proof Needed: Minimum # of coins needed to make a given sum
We have an infinite supply of these 5 coins: $1, 3, 6, 10, 15$ . Find the minimum number of these coins required such that their total value sums up to exactly $n$ . Example: For $n = 425$ , the answer is $29$ . Explanation: $27 \times 15 = 405$ and $2 \times 10 = 20$ . $27$ coins of value 15 and $2$ coins of value 10 give the optimal answer. My Approach: Observe that when $n$ grows, say $n = 420$ , it is $15 \times 28$ and also $10 \times 42$ , and $42-28 = 14$ . This means using the 15-value coin is a no-brainer, as it uses at least 14 fewer coins than other combinations; so even if the remainder is non-zero, it will be at most 14 and we can use 1-value coins to fill the remainder, and it will still be close to optimal. So I reduce large values of $n$ , say $10^9$ , to a number between 420 and 435 by using only 15-value coins, then I rely on dynamic programming to tell me the minimal coins needed for the remaining amount. This works, but I am not able to prove it; why is the corner case of using two 10-value coins optimal?
Suppose you have an optimal configuration that sums to $n$ . Then you must have: $\le 2$ coins of value 1, $\le 1$ coin of value 3, $\le 2$ coins of value 6, $\le 2$ coins of value 10. Otherwise, we may replace several smaller coins with fewer larger coins. So, if you remove the coins of value 15, the other coins sum to at most 37. You can manually search for the optimum configuration for $1\le n\le37$ . Other values of $n$ can be reduced to this range by coins of value 15. Your approach might work, but I don't think it is logically sound. You say "Realise that when n grows, say n = 420, its 15 x 28 and 10 x 42, and 42-28 = 14". This is not right: to get an optimum configuration, we don't have to swap all $28$ coins of value 15 for $42$ coins of value 10; even a swap at a 2:3 ratio might occur.
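The two claims above can be checked by dynamic programming (a Python sketch, not part of the original answer): the example value $dp[425]=29$, and that for every $n>37$ an optimal solution contains a 15-coin, i.e. $dp[n]=dp[n-15]+1$.

```python
# dp[n] = minimum number of coins from {1, 3, 6, 10, 15} summing to n.
COINS = (1, 3, 6, 10, 15)
LIMIT = 600

dp = [0] + [10**9] * LIMIT
for n in range(1, LIMIT + 1):
    dp[n] = min(dp[n - c] for c in COINS if c <= n) + 1

reduction_holds = all(dp[n] == dp[n - 15] + 1 for n in range(38, LIMIT + 1))
```

This confirms the reduction: once $n$ exceeds the bound 37, peeling off one 15-coin at a time is always optimal.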
|combinatorics|elementary-number-theory|discrete-optimization|dynamic-programming|
1
Weighted Random Sampling
Suppose I have to distribute 2 gold bars to 3 people. The independent probabilities of each person getting a bar are 90%, 60%, and 50%. Each person gets either 1 bar or no bar at all. 2 people will get a bar and 1 person will not get a bar. How can I use a random number generator to distribute the bars accordingly? I am currently working on creating a game and want to implement something like this. I just don't know where to begin.
Call the people $A$ , $B$ , and $C$ . Let the situation be composed of two steps: First, give the first gold bar. Second, give the second gold bar. For step 1, let the probability of the first gold bar going to person $A$ be $P(F_A)=\frac{90}{90+60+50}=0.45$ . Let $P(F_B)=\frac{60}{90+60+50}=0.30$ and $P(F_C)=\frac{50}{90+60+50}=0.25$ . The bar is now given to one of the three people. For step 2, there are three possibilities: step 2A (where the first bar was given to person $A$ ), step 2B, and step 2C. For step 2A, let the probability of the second gold bar going to person $B$ be $P(S_B|F_A)=\frac{60}{60+50}=\frac{6}{11}$ . Let $P(S_C|F_A)=\frac{50}{60+50}=\frac{5}{11}$ . For step 2B, $P(S_A|F_B)=\frac{90}{90+50}=\frac{9}{14}$ and $P(S_C|F_B)=\frac{50}{90+50}=\frac{5}{14}$ . For step 2C, $P(S_A|F_C)=\frac{90}{90+60}=0.6$ and $P(S_B|F_C)=\frac{60}{90+60}=0.4$ . I hope this is enough to help you.
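The two-step procedure described above can be sketched in code (a Python sketch, not part of the original answer; it assumes Python's `random.choices` for weighted draws):

```python
import random

# Step 1: draw the first bar with probabilities proportional to (90, 60, 50).
# Step 2: draw the second bar among the remaining two people, again
# proportionally to their weights (this realises P(S_B | F_A) etc. above).
WEIGHTS = {"A": 90, "B": 60, "C": 50}

def give_two_bars(rng):
    people = list(WEIGHTS)
    first = rng.choices(people, weights=[WEIGHTS[p] for p in people])[0]
    rest = [p for p in people if p != first]
    second = rng.choices(rest, weights=[WEIGHTS[p] for p in rest])[0]
    return first, second

rng = random.Random(0)
draws = [give_two_bars(rng) for _ in range(1000)]
```

Every draw hands out exactly two bars to two distinct people, as required.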
|probability|
0
Weighted Random Sampling
Suppose I have to distribute 2 gold bars to 3 people. The independent probabilities of each person getting a bar are 90%, 60%, and 50%. Each person gets either 1 bar or no bar at all. 2 people will get a bar and 1 person will not get a bar. How can I use a random number generator to distribute the bars accordingly? I am currently working on creating a game and want to implement something like this. I just don't know where to begin.
I think the problem is not clearly stated. In particular, it is not clear what the "independent probabilities" stand for in the context of distributing gold bars to people. As such, the answer hinges on the interpretation. Let me present an answer based on the interpretation that each "independent probability" is the probability of a person ending up with a gold bar after the bars have been distributed. Let $A_i$ be the event that person $i$ gets a gold bar, $i = 1, 2, 3$ . Then they satisfy $\mathbf{P}(A_1) = 0.9$ , $\mathbf{P}(A_2) = 0.6$ , $\mathbf{P}(A_3) = 0.5$ . Let $E_i$ be the event that person $i$ doesn't get a bar. Then exactly one of $E_1$ , $E_2$ , or $E_3$ occurs. Then $A_1$ is a disjoint union of $E_2$ and $E_3$ and hence $$ \mathbf{P}(E_2) + \mathbf{P}(E_3) = \mathbf{P}(A_1) = 0.9. \tag{1} $$ Likewise, $$ \mathbf{P}(E_1) + \mathbf{P}(E_3) = \mathbf{P}(A_2) = 0.6 \tag{2} $$ and $$ \mathbf{P}(E_1) + \mathbf{P}(E_2) = \mathbf{P}(A_3) = 0.5 \tag{3} $$ Note that, i
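Solving the system (1)–(3) gives $\mathbf P(E_1)=0.1$, $\mathbf P(E_2)=0.4$, $\mathbf P(E_3)=0.5$, i.e. person $i$ misses out with probability $1-p_i$, and these probabilities happen to sum to 1. Under this interpretation, sampling reduces to picking the single person who gets no bar (a Python sketch, not part of the original answer):

```python
import random

# P(person i gets a bar) and the derived P(person i is left out) = 1 - p_i.
p = [0.9, 0.6, 0.5]
q = [1 - x for x in p]          # 0.1 + 0.4 + 0.5 = 1, a valid distribution

def pick_loser(rng):
    return rng.choices(range(3), weights=q)[0]

rng = random.Random(1)
losers = [pick_loser(rng) for _ in range(50_000)]
freq = [losers.count(i) / len(losers) for i in range(3)]
```

The empirical "left-out" frequencies match $q$, so each person receives a bar with the requested probability $p_i$.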
|probability|
1
Evaluate the integral $\int_{\gamma} \frac{z^2+1}{(z+1)(z+4)}dz$
Evaluate the integral $$\int_{\gamma} \frac{z^2+1}{(z+1)(z+4)}dz$$ if $\gamma = \beta + [4 \pi ,0]$ and $\beta(t) = te^{it}$ for $0 \le t \le 4 \pi$ . My attempt By Cauchy's integral formula. \begin{align} \int_{\gamma} \frac{z^2+1}{(z+1)(z+4)}dz &= -\frac{1}{3} \int_{\gamma} \frac{z^2+1}{(z+4)}dz + \frac{1}{3} \int_{\gamma} \frac{z^2+1}{(z+1)}dz \\ &= -\frac{1}{3} (2\pi i)f(-4) + \frac{1}{3} (2\pi i)f(-1) \\ &= -10\pi i \end{align} I'm not sure whether this is the correct method. Any help is appreciated.
Let $f(z)=\frac{z^2+1}{(z+1)(z+4)}$ Then $$I_1=2\pi i\cdot \text{Res}_{-1}(f(z))=\frac{4}{3}\pi i$$ and $$I_2=2\pi i\cdot \text{Res}_{-4}(f(z))=-\frac{34}{3}\pi i$$ The total integral then is $$2I_1+I_2=-\frac{26}{3}\pi i$$ because the pole $z=-1$ is circled twice by $\gamma$ , while $z=-4$ is circled only once.
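The value $-\frac{26}{3}\pi i$ can be confirmed by direct numerical integration along the contour (a Python sketch, not part of the original answer): integrate $f(\gamma(t))\gamma'(t)$ over the spiral $\beta(t)=te^{it}$ and then back along the real segment from $4\pi$ to $0$.

```python
import cmath, math

def f(z):
    return (z * z + 1) / ((z + 1) * (z + 4))

def trapezoid(g, a, b, n):
    # Composite trapezoidal rule for a complex-valued integrand.
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

# Spiral part: z = t*e^{it}, dz = e^{it} (1 + i t) dt, t in [0, 4*pi].
spiral = trapezoid(
    lambda t: f(t * cmath.exp(1j * t)) * cmath.exp(1j * t) * (1 + 1j * t),
    0.0, 4 * math.pi, 200_000)
# Straight part: back along the real axis from 4*pi to 0.
segment = trapezoid(f, 4 * math.pi, 0.0, 20_000)
total = spiral + segment
```

The numerical value agrees with $-\frac{26}{3}\pi i \approx -27.227i$, consistent with winding number 2 around $z=-1$ and 1 around $z=-4$.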
|integration|complex-analysis|cauchy-integral-formula|
0
Why is the residual variance / pooled sample variance divided by n-k in ANOVA?
I was looking for a proof like the one for the sample variance, where it's shown that the expected value of the sample variance with $n-1$ in the denominator equals the population parameter. I'm not even sure what the pooled sample variance / residual variance tries to estimate: $$ E[\frac{1}{n-k}\sum_{i=1}^{n}(y_i-\bar{y}_{g(i)})^2] = E[\frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1)s_j^2] =?$$ $n$ - # observations, $k$ - # groups, $n_j$ - # observations in group $j$ , $s_j^2$ - group variance, $g(i)$ - the group observation $i$ is assigned to. Is it the population variance? I think not, because it's about groups. My attempt was: $$ E[\frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1)s_j^2] = \frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1)E[s_j^2] $$ Yet I'm not sure what the expectation of the group variance is. Thanks
First note that $n=\sum_{j=1}^{k}n_j$ . Secondly, for each group $j$ we consider the following linear model: $$X_{ji}=\mu_j+\epsilon_{ji}, i=1,\dots,n_j$$ where the $\epsilon_{ji}$ are independent and follow $\mathcal N (0,\sigma^2)$ . Hence, the sample variance $S^2_j$ of the observations $X_{ji}, i=1,\dots,n_j$ from group $j$ is an unbiased estimator of $\sigma^2$ , i.e., $\mathbb E[S_j^2]=\sigma^2$ . Finally, with $\text{MSE}=\frac{SSE}{n-k}$ , we have $$\mathbb E[\text{MSE}] = \mathbb E \left[\frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1)S_j^2\right] = \frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1) \mathbb E[S_j^2]\\=\frac{1}{n-k}\sum_{j=1}^{k}(n_j - 1)\sigma^2 =\sigma^2 \frac{1}{n-k} \left ( \sum_{j=1}^{k}n_j -k \right )=\sigma^2,$$ which means that $\text{MSE}$ is also an unbiased estimator of $\sigma^2$ (and a better one, as it has less variation than each individual $S^2_j$ ). Now you can see why $n-k$ is used here.
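The unbiasedness claim is easy to see in simulation (a Python sketch, not part of the original answer; the group sizes, means and $\sigma^2=4$ are arbitrary choices): the average pooled variance over many replicates settles at $\sigma^2$.

```python
import random

# Monte-Carlo check that E[MSE] = sigma^2 for k = 3 groups with
# different sizes and means but common variance sigma^2 = 4.
rng = random.Random(42)
sizes, mus, sigma = (8, 10, 12), (1.0, 5.0, -2.0), 2.0
n, k = sum(sizes), len(sizes)

def one_mse():
    sse = 0.0
    for nj, mu in zip(sizes, mus):
        xs = [rng.gauss(mu, sigma) for _ in range(nj)]
        xbar = sum(xs) / nj
        sse += sum((x - xbar) ** 2 for x in xs)
    return sse / (n - k)          # pooled variance with n - k in the denominator

avg_mse = sum(one_mse() for _ in range(20_000)) / 20_000
```

Dividing by $n$ or $n-1$ instead of $n-k$ would bias the average away from $\sigma^2=4$.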
|statistics|random-variables|variance|sampling|anova|
1
Is there an Euler-Maclaurin-like formula for products?
Background The Euler-Maclaurin (E-M) formula is a formula for the difference between the sum and the integral of a real or complex continuous function on the interval $[m,n]$ . It expresses this difference in terms of the higher derivatives $f^{(k)} (x)$ . More specifically, the E-M formula states that, for a $p$ -times differentiable function $f(\cdot)$ , we have $$ \sum_{i=m}^{n} f(i) - \int_{m}^{n} f(x) \ dx = \frac{f(n)+f(m)}{2} + \sum_{k=1}^{ \lfloor \frac{p}{2} \rfloor } \frac{B_{2k}}{(2k)!} \left( f^{(2k-1)} (n) - f^{(2k-1)} (m) \right) + R_{p}. \label{1}\tag{1} $$ Here, $B_{k}$ is the $k$ 'th Bernoulli number, and $R_{p}$ is an error term that depends on $n,m,p,$ and $f$ . There are many generalizations of the E-M formula. Some of these are described in the following paper by Sarafyan et al. (1979). Also, this article by Karshon et al. (2007) discusses Euler-Maclaurin formulas for lattice polytopes. However, I haven't found or obtained any analogous formulas for products yet. Pr
(Partial answer, too long for a comment, etc.) The proof behind the Euler-Maclaurin formula is based on the properties of Bernoulli polynomials (with respect to differentiation, among others), combined with iterated integration by parts. If you plan to construct a multiplicative analog, several remarks are in order. Firstly, the major difficulty lies precisely in the integration by parts. Indeed, when applied repeatedly, it allows one to add new terms to the expansion in order to make it more accurate; these new terms are the boundary terms, while the remainder contains the higher-order derivatives of the considered function. However, the geometric analog of integration by parts doesn't exist for the product integral. Indeed, one has $\prod (f(x)g(x))^{\mathrm{d}x} = \prod f(x)^{\mathrm{d}x} \prod g(x)^{\mathrm{d}x}$ , which corresponds to the property of linearity for the standard (additive) integral. This relation is itself due to the absence of a Leibniz rule fo
|analysis|reference-request|products|
0
What is the total number of words of length $500$ on $\{a,b\}$ such that the letter $"a"$ appears more than $"b"$ ( without Brute force)?
The question : What is the total number of words of length $500$ on $\{a,b\}$ such that the letter " $a$ " appears more than " $b$ "? $(*)$ We know that the total number of words is $ 2^{500} $ . At first glance it looks like half of them. But there are subsets of words which contain the same number of letters " $a$ " and " $b$ " (I assume that if the length were odd, it would be exactly half of the total). I'm trying to approach this question with even lengths, which are countable. For example, when $length = 4$ , we have $"aaaa", "aaab", "aaba", "abaa", "baaa"$ which is $5$ out of $2^4$ options. For $length = 6$ , we have " $aaaaaa$ ", $6$ words with $5$ " $a$ " and $1$ " $b$ ", and ${6\choose 2}$ words with $4$ " $a$ " and $2$ " $b$ ". So in total we have $15 + 6 + 1 = 22$ words out of $2^6$ words. For $length = 8$ , we have " $aaaaaaaa$ " , $8$ words with $7$ " $a$ " and $1$ " $b$ ", ${8 \choose 2}$ words with $6$ " $a$ " and $2$ " $b$ ", and so on. Hence we have ${8 \choose 0}$ + ${8 \cho
There are $\binom{500}{250}$ words of length $500$ with exactly $250$ $a$ 's, so the number of words of length $500$ that do not have the same number of $a$ 's and $b$ 's is $$ 2^{500}-\binom{500}{250} $$ By symmetry, there are $$ \frac{1}{2} \left(2^{500}-\binom{500}{250}\right) $$ words with more $a$ 's than $b$ 's.
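The symmetry formula can be checked against a brute-force count for small even lengths, and then evaluated exactly for $n=500$ with integer arithmetic (a Python sketch, not part of the original answer):

```python
from itertools import product
from math import comb

# Brute force: count words with strictly more a's than b's.
def brute(n):
    return sum(1 for w in product("ab", repeat=n) if w.count("a") > w.count("b"))

# The closed form (2^n - C(n, n/2)) / 2 for even n.
def formula(n):
    return (2**n - comb(n, n // 2)) // 2

answer_500 = formula(500)       # exact integer, no overflow in Python
```

For $n=6$ both methods give 22, matching the hand count in the question.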
|combinatorics|discrete-mathematics|binomial-coefficients|binomial-theorem|combinatorics-on-words|
1
Is this a finite set for which it is impossible to list its elements?
Here is the idea for the set. Let $\alpha$ be a real number, and let $x_\alpha$ be the set of digits with the property that a digit is in the set if it appears infinitely many times in the decimal expansion of $\alpha$ . For many numbers $\alpha$ it is easy to determine $x_\alpha$ . For some it is challenging, like $\alpha = \pi$ . However, what if $\alpha$ were a non-computable number? Then it would be impossible to list the elements of $x_\alpha$ , right?
In constructive mathematics, we often distinguish finite and subfinite . For a finite set, we know a bijection to some $\{0,1,\ldots,n\}$ . A subfinite set is a set we know to be a subset of a finite set, but we do not necessarily know who is in and who is not. With this terminology settled, we do indeed find that the set of digits occurring infinitely often in the decimal expansion of a real number is definitely subfinite, but not necessarily finite. However, this is not closely tied to computability of the real. For example, random real numbers are non-computable and normal, so all digits appear infinitely often in their expansions. On the other hand, consider the real $x_G$ defined by making its $n$ -th digit $0$ if all even numbers $> 2$ below $n$ are the sum of two primes, and making it $1$ otherwise. This yields an algorithm to compute $x_G$ , but knowing which digits appear infinitely often in $x_G$ would require solving Goldbach's conjecture.
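The "Goldbach real" $x_G$ really is computable digit by digit, as the answer says (a Python sketch, not part of the original answer): since Goldbach's conjecture is verified far beyond this range, the first couple of hundred digits are all 0.

```python
# Digit n of x_G is 0 iff every even m with 2 < m < n is a sum of two primes.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def goldbach_holds(m):
    return any(is_prime(p) and is_prime(m - p) for p in range(2, m // 2 + 1))

def digit(n):
    return 0 if all(goldbach_holds(m) for m in range(4, n, 2)) else 1

digits = [digit(n) for n in range(1, 201)]
```

A single 1 anywhere in this sequence would refute Goldbach's conjecture; whether a 1 ever appears is exactly the open question.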
|elementary-set-theory|
0
Can convolution fails to be commutative and distributive in the sense of (generalized) Riemann integral?
For two (say continuous) functions $f,g\in C(\mathbb R^n)$ , let us define \begin{equation} f\ast g(x):=\lim_{R\to+\infty}\int_{B^n(0,R)}f(x-y)g(y)dy, \end{equation} whenever the integral converges. It is fairly clear that under such a definition the convolution can be non-associative (e.g. this post ). Indeed, by taking Fourier transforms, convolutions become multiplications, and the multiplication of distributions can be non-associative without certain restrictions. My question is, can it fail to be commutative and distributive as well? More precisely, Can we find $f,g\in C(\mathbb R^n)$ such that $f\ast g$ and $g\ast f$ both converge, but are not equal? Can we find $f,g,h\in C(\mathbb R^n)$ such that $f\ast h$ , $g\ast h$ and $(f+g)\ast h$ all converge, but $f\ast h+g\ast h\neq(f+g)\ast h$ ? Note that neither question has an analogue for products of distributions. For a bonus part from Proposition 8.6 in Folland's Real Analysis , set $\tau_hf(x)=f(x-h)$ , Can we find $f,g\in C(\mat
This convolution is distributive, since for all $x\in\mathbb{R}^n$ and $R>0$ , we have: $$ \int_{B^n(0,R)} (f+g)(x-y)h(y)dy = \int_{B^n(0,R)} f(x-y)h(y)dy + \int_{B^n(0,R)} g(x-y)h(y)dy, $$ and so taking the limit $R\to\infty$ , we get $(f+g)*h = f*h + g*h$ by definition. Similarly, it is distributive on the other side, with $f*(g+h) = f*g + f*h$ . In the same vein, we also have: $$ \tau_h(f*g)(x) = f*g(x-h) = \lim_{R\to\infty} \int_{B^n(0,R)} f(x-h-y)g(y)dy = \lim_{R\to\infty} \int_{B^n(0,R)} \tau_h f(x-y)g(y)dy = (\tau_h f)*g(x). $$ However, it is NOT commutative. For simplicity's sake, $f$ will be discontinuous, but that won't change the argument. We'll also work in dimension $n=1$ . Take $f(x)=\operatorname{sgn}(x)$ the sign function, and $g\equiv 1$ constant. We have for $R>|x|$ : $$ \int_{-R}^R f(x-y)g(y)\,dy = \int_{-R}^x 1dy + \int_x^R (-1)dy = x+R - (R-x) = 2x, $$ and so $f*g(x)=2x$ . However: $$ \int_{-R}^R g(x-y)f(y)dy = \int_{-R}^R \operatorname{sgn}(y)dy = 0, $$ and so $f*g\neq g*f$ . We can extend the arg
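The $\operatorname{sgn}$ counterexample is easy to verify numerically for a finite truncation radius (a Python sketch, not part of the original answer): with $f=\operatorname{sgn}$, $g\equiv 1$, and $R>|x|$, the truncated integrals give $(f*g)(x)=2x$ but $(g*f)(x)=0$.

```python
# Symmetric truncation of the convolution integral via the midpoint rule.
def sgn(t):
    return (t > 0) - (t < 0)

def trunc_conv(f, g, x, R, n=200_000):
    h = 2 * R / n
    s = 0.0
    for k in range(n):
        y = -R + (k + 0.5) * h      # midpoints avoid landing on the jump
        s += f(x - y) * g(y) * h
    return s

one = lambda t: 1.0
fg = trunc_conv(sgn, one, 1.5, 10.0)   # approximately 2 * 1.5 = 3
gf = trunc_conv(one, sgn, 1.5, 10.0)   # approximately 0
```

The two orders of convolution disagree already at this finite $R$, and the gap persists as $R\to\infty$.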
|real-analysis|analysis|convolution|riemann-integration|
1
Prove that Aut$(G) \cong \mathbb{Z}_4$
I really appreciate all of you who helped me! Please take a look at my argument below and feel free to give me some comments: Write $G = \{ e, a, a^2, a^3, a^4 \}$. Since any element $f$ in Aut$(G)$ is an isomorphism that maps $G$ to itself, we notice that $f(e)=e$ and a generator of $G$ is always mapped to a generator of $G$ under $f$. But we also observe that $a, a^2, a^3$ and $a^4$ are generators of $G$. Then we can find $f_1, f_2, f_3, f_4 \in$Aut$(G)$ such that $f_1(a) = a$ $f_2(a)=a^2$ $f_3(a)=a^3$ $f_4(a)=a^4$ Note that the above exhaust all elements in Aut$(G)$. If not, then we would have $f(a)=e$, but $e$ is not a generator of $G$. Then we define $\phi:$Aut$(G) \rightarrow \mathbb{Z}_4$ as follows: For any $f \in$ Aut$(G)$, if $f(a)=a^i$, then $\phi(f) = \overline{i-1}$. By the above, we observe that the only possible values of $\overline{i-1}$ are $\bar{0}, \bar{1}, \bar{2}, \bar{3}$, which are exactly the elements of $\mathbb{Z}_4$. Then the following can be checked: 1. $\phi$ is a
For $i\in X:=\{1,2,3,4\}$ , we get $\{2i\pmod 5, i\in X\}=X$ . Therefore, $f\colon G\to G$ , defined by $f(a):=a^2$ , has order $4$ .
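The answer's point can be verified directly (a Python sketch, not part of the original answer): the squaring automorphism $a\mapsto a^2$ acts on exponents as multiplication by 2 mod 5, and the multiplicative order of 2 modulo 5 is 4, so $\operatorname{Aut}(G)$ is cyclic of order 4.

```python
# Multiplicative order of g modulo m.
def mult_order(g, m):
    k, x = 1, g % m
    while x != 1:
        x = (x * g) % m
        k += 1
    return k

order_of_squaring = mult_order(2, 5)        # order of a -> a^2 in Aut(Z_5)
powers = {pow(2, j, 5) for j in range(4)}   # exponents reached by iterating
```

Iterating $a\mapsto a^2$ sends $a$ to $a^2, a^4, a^3, a$ in turn, so this single automorphism generates all four.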
|abstract-algebra|group-theory|
0
Deducing the convergence of a sequence in the product space $\Pi_{i=1}^\infty X_i$ from the convergence of its components
Let $(x_i)_{i=1}^\infty$ be a sequence in the product space $X = \Pi_{i=1}^\infty X_i$ , equipped with the product/box topology. I would like to know conditions such that this statement is true: $(x_i)_{i=1}^\infty$ converges to $x \in \Pi_{i=1}^\infty X_i$ $\iff$ $\pi_k(x_i)$ converges to $\pi_k(x)$ (where $\pi_k$ denote the projection to the $k$ -th component). For the $(\Rightarrow)$ direction: no conditions are necessary. Assume that $(x_i)_{i=1}^\infty$ converges to $x \in \Pi_{i=1}^\infty X_i$ . Fix $k \in \mathbb{N}$ . We would like to show that $\pi_k(x_i)$ converges to $\pi_k(x)$ . Take any open set $U_k \subseteq X_k$ containing $\pi_k(x)$ . Consider the open set $V = U_k \times \Pi_{i \neq k} X_i \subseteq X$ . Since $x \in V$ , we use convergence of $(x_i)_{i=1}^\infty$ to obtain an $N > 0$ such that for all $i > N$ , $x_i \in V$ , which follows that $\pi_k(x_i) \in U_k$ as needed. For the ( $\Leftarrow$ ) direction: If we equip $X$ with the product topology, then: For any
The $\Leftarrow$ direction is false in the box topology. Take $2=\{0,1\}$ . Let $X=2^\Bbb{N}$ with the box topology. This is the discrete topology on $X$ , so a sequence converges iff it's eventually constant. Now take the sequence $$ \begin{split} (1,1,1,1,1,1,1\ldots)\\ (0,0,1,1,1,1,1,\ldots)\\ (0,0,0,1,1,1,1,\ldots)\\ (0,0,0,0,1,1,1,\ldots)\\ (0,0,0,0,0,1,1,\ldots)\\ \ldots \end{split} $$ It converges pointwise to $(0,0,0,0,0,\ldots)$ , but it's not eventually constant.
|sequences-and-series|general-topology|
1
Can $\sin^2(x) = \cos^2(x) -1$
I came across this problem on Varsity Tutors. A part of the answer walkthrough states that $\sin^2(x)$ can equal $\cos^2(x) - 1$ . This is stated more than once on the site. I do not see how this is possible. I can see that \begin{align} \sin^2(x) &= 1 - \cos^2(x) \\ \cos^2(x) &= 1 - \sin^2(x) \end{align} If $\cos^2(x)-1$ were to appear in an identity, then it would need to equal $-\sin^2(x)$ . Am I missing something?
When I was looking at your statement, I first looked at it as an identity, and then as an equation. As an identity , no. It's well known that $\sin^2 x + \cos^2 x = 1$ and you're correct that $\cos^2 x - 1$ would yield $-\sin^2 x$ , so that identity would be incorrect. As an equation , yes. With manipulation of the identity $\sin^2 x + \cos^2 x = 1$ , the equation reduces to $\sin x = 0$ , whose solutions are $x = n \pi$ .
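A quick numerical spot check of the equation view (a Python sketch, not part of the original answer): $\sin^2 x = \cos^2 x - 1$ is equivalent to $2\sin^2 x = 0$, so it holds exactly at $x=n\pi$ and fails elsewhere.

```python
import math

# Residual of the equation sin^2 x - (cos^2 x - 1); zero iff x solves it.
def residual(x):
    return math.sin(x) ** 2 - (math.cos(x) ** 2 - 1)

solution_residuals = [abs(residual(n * math.pi)) for n in range(-3, 4)]
non_solution = abs(residual(1.0))   # x = 1 is not a multiple of pi
```

At the multiples of $\pi$ the residual vanishes (up to floating-point noise), while at $x=1$ it equals $2\sin^2 1\approx 1.42$.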
|algebra-precalculus|trigonometry|
0
Find $n$ when the coefficients of the $16^{th}$ and $26^{th}$ terms of $(1+x)^n$ are equal.
Find $n$ when the coefficients of the $16^{th}$ and $26^{th}$ terms of $(1+x)^n$ are equal. $16^{th}$ term coefficient: $^nC_{15}= \space ^nC_{n-15}$ $26^{th}$ term coefficient: $^nC_{25}= \space ^nC_{n-25}$ $\Rightarrow 15 = n-25 \Rightarrow n=40$ $\space$ or $\space$ $n-15=25 \Rightarrow n=40$ This is the correct answer. My question is since $^nC_{15}= \space ^nC_{25}$ , why can't $n-15=n-25$ ? In this case $-15 \neq -25$ , but if the algebra works would it be allowed? After all the question says the coefficients are equal. Thanks.
No, for the same reason that $15\neq 25$ even though $\binom{n}{15}=\binom{n}{25}$ The question tells you two terms that are in different positions, 16th and 26th. To find the correct answer of $40$ , you use the symmetry property to change one of these, so that you have two representations of the same position: $15, n-25$ or $25, n-15$ . If you use the symmetry property to change both positions, you're back to talking about different positions, hence it doesn't work.
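The symmetry argument is easy to confirm numerically (a Python sketch, not part of the original answer): at $n=40$ the two coefficients coincide because $25 = 40 - 15$, and for any other $n$ they differ.

```python
from math import comb

# With n = 40 the symmetry C(n, k) = C(n, n - k) makes the 16th and 26th
# coefficients equal, even though 15 != 25.
n = 40
equal_at_40 = comb(n, 15) == comb(n, 25)

# For a nearby n, e.g. 41, the two coefficients are different,
# since 15 != 41 - 25.
unequal_at_41 = comb(41, 15) != comb(41, 25)
```

So $\binom{n}{15}=\binom{n}{25}$ pins down $n=40$; equating $n-15$ with $n-25$ would instead demand $15=25$.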
|binomial-theorem|
1
What is $\sqrt{-1}$? circular reasoning defining $i$.
I am reading complex analysis by Gamelin and I am having trouble understanding the square root function. The principal branch of $\sqrt{z}$ ( $f_1(z)$ ) is defined as $|z|^{\frac 1 2} e^{\frac{i \operatorname{Arg}(z)}{2}}$ for $z \in \mathbb{C} - (-\infty,0]$ and $f_2(z)$ is defined as $-f_1(z)$ where $\operatorname{Arg}(z) \in (-\pi , \pi]$ By this definition, what is $\sqrt{-r} $ where $r$ is a non negative real number? Of course the answer is $i\sqrt{r}$ but the definition of square root functions doesn't apply here What is $i$ then? $i:=\sqrt{-1}$ but how? We didn't define the square root function for negatives but we still use $i$ to define complex numbers. Shouldn't the definition of square root function taught before defining $i$ ? and defining $i^2=-1$ without define $i$ as either $\pm \sqrt{-1}$ is also very strange because we want extended function to be continuous and choosing $\pm \sqrt{-1}$ will make this impossible although $i$ must be one of the two (after defining the s
If $r$ is a non-negative real number, then $-r\notin\Bbb C\setminus(-\infty,0]$ , and therefore $f_1$ is undefined at $-r$ .
|complex-analysis|algebra-precalculus|complex-numbers|continuity|riemann-surfaces|
0
Line bundle on projective line
Let $L$ be the (holomorphic) line bundle on $\mathbb{P}^1$ for which the glueing function from $\mathbb{P}^1\setminus \{\infty\}$ to $\mathbb{P}^1\setminus \{0\}$ on the overlap $\mathbb{C}^\times = \mathbb{P}^1\setminus \{0,\infty\}$ is given by multiplication with $g(z) = \exp(1/z)$ . A global (holomorphic) section of $L$ , say $s$ , has to satisfy $\exp(1/x) \cdot s(x) = s(1/x)$ for all nonzero complex numbers $x$ . Surely such an $s$ does not exist. In fact, there does not even exist a meromorphic such $s$ , as far as I can tell. But, how can a line bundle $L$ not have a meromorphic section?
tl; dr: This line bundle actually admits a non-vanishing holomorphic section, and is therefore trivial. The equation $\exp(1/x) \cdot s(x) = s(1/x)$ is an equation for the section $s$ , but in our covering coordinate charts $s$ is represented by a pair of functions, not by a single function. In more detail, a meromorphic section of $L$ corresponds to a pair of meromorphic functions $s_{0}$ and $s_{1}$ satisfying $$ \exp(1/x) \cdot s_{0}(x) = s_{1}(1/x),\quad x \neq 0. $$ We may take $s_{0}$ to be the constant function $1$ and $s_{1} = \exp$ . These functions are holomorphic and non-vanishing, so together they constitute a non-vanishing holomorphic section of $L$ .
|complex-analysis|complex-geometry|vector-bundles|
1
Show that rank correlation lies between -1 and 1
Show that Spearman's rank correlation r lies between -1 and 1. where $r = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$
For maximum $r$, $\sum d_i^2$ should be least. So consider ranks $(x, y)$ like:

x: 1, 2, 3, ..., n
y: 1, 2, 3, ..., n
d: 0, 0, 0, ..., 0

So $\sum d_i^2 = 0$ and $r=1$. For minimum $r$, consider ranks like:

x: n, n-1, n-2, ..., 1
y: 1, 2, 3, ..., n
d: n-1, n-3, n-5, ..., 1-n

We can write $d_i = n + 1 - 2i$, so $$\begin{align} \sum_{i=1}^n d_i^2 &= \sum_{i=1}^n (n + 1 - 2i)^2 \\ &= \sum_{i=1}^n \left((n + 1)^2 - 4(n + 1)i + 4i^2\right) \\ &= n(n+1)^2 - 4(n+1)\,\frac{n(n+1)}{2} + 4\,\frac{n(n+1)(2n+1)}{6} \\ &= \frac n3 (n^2-1) \end{align}$$ Plugging into the formula for $r$, we get $r = 1 - \frac{6\cdot\frac n3 (n^2-1)}{n(n^2 - 1)} = 1 - 2 = -1$.
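The two extreme cases can be checked directly from the formula (a Python sketch, not part of the original answer):

```python
# Spearman's r = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) for two rankings.
def spearman(xr, yr):
    n = len(xr)
    ssd = sum((a - b) ** 2 for a, b in zip(xr, yr))
    return 1 - 6 * ssd / (n * (n * n - 1))

n = 10
identical = spearman(range(1, n + 1), range(1, n + 1))   # identical rankings
reversed_ = spearman(range(n, 0, -1), range(1, n + 1))   # fully reversed
```

Identical rankings give $r=1$ and fully reversed rankings give $r=-1$, the two extremes claimed above.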
|correlation|
0
Formula for the chromatic symmetric function of a graph in terms of the graph's chromatic polynomial?
I know the chromatic symmetric function specializes to the chromatic polynomial when 1's and 0's are substituted for the x's. I was wondering if one could find the chromatic symmetric function of a graph if given only the chromatic polynomial of the graph? Thanks in advance
This could only be possible if graphs with the same chromatic polynomial always had the same chromatic symmetric function. It’s well-known that graphs with the same chromatic polynomial need not be isomorphic, and indeed one of the simplest examples of non-isomorphic graphs with identical chromatic polynomials already affords an example of graphs that have different chromatic symmetric functions despite having the same chromatic polynomial. The claw graph on four vertices, $S_3$ , has proper colourings in which three vertices have the same colour, whereas the path graph on four vertices, $P_4$ , has no such colourings (since that would make at least two adjacent vertices the same colour). Thus, the symmetric chromatic function of $S_3$ contains variables to the third power, and that of $P_4$ does not. But these graphs are both trees and thus both have the chromatic polynomial $(x-1)^3x$ .
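The $S_3$ versus $P_4$ example can be verified by brute force (a Python sketch, not part of the original answer): both graphs have the same number of proper $k$-colourings for every $k$, yet only the claw admits a proper colouring in which three vertices share a colour.

```python
from itertools import product

S3 = [(0, 1), (0, 2), (0, 3)]          # claw: centre 0 joined to three leaves
P4 = [(0, 1), (1, 2), (2, 3)]          # path 0-1-2-3

def proper_colourings(edges, k):
    return [c for c in product(range(k), repeat=4)
            if all(c[u] != c[v] for u, v in edges)]

# Same counts for k = 1..5, consistent with the common polynomial x(x-1)^3.
same_polynomial = all(
    len(proper_colourings(S3, k)) == len(proper_colourings(P4, k))
    for k in range(1, 6))

def has_triple(edges, k=4):            # some colour used by >= 3 vertices?
    return any(max(c.count(v) for v in set(c)) >= 3
               for c in proper_colourings(edges, k))

triple_in_s3, triple_in_p4 = has_triple(S3), has_triple(P4)
```

The monomial supported on three equal colours appears in the chromatic symmetric function of $S_3$ but not of $P_4$, so the polynomial cannot determine the symmetric function.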
|graph-theory|reference-request|coloring|symmetric-functions|
1
An easy question on unit and counit to an adjunction
Let $F: \mathcal{A} \to \mathcal{B}$ be a functor and suppose that it admits a left adjoint $R: \mathcal{B} \to \mathcal{A}$ . Take next an object $B \in \mathcal{B}$ and consider the object $(F\circ R \circ F \circ R)(B)$ . Which is the relationship between such an object $(F\circ R \circ F \circ R)(B)$ and the object $(F \circ R)(B)$ ? I tried to understand it on books (mainly Borceux, Adamek), but I do not understand in effect what happens. Using the unit and the counit, we get the relations also given in the post: Equivalence of the definition of Adjoint Functors via Universal Morphisms and Unit-Counit However, for proving theorems and for solving exercises, it is convenient to manipulate (and simplify, if possible) expression such $(F\circ R \circ F \circ R)(B)$ or $(R\circ F \circ R \circ F)(A)$ or similar. What about all these cases?
$\newcommand{\A}{\mathscr{A}}\newcommand{\B}{\mathscr{B}}$ Ok, please don't call your left adjoints $R$ ! I'm not really sure what you want from this vague question of "comparing" but let me offer some useful 'algebra' maps. I'm going to use the notation $L:\B\rightleftarrows\A:R$ with the adjunction having variance $L\dashv R$ i.e. $L$ is the left adjoint and $R$ is the right adjoint. You're asking to compare $RLRL(B)$ with $RL(B)$ . Let's use $T:=RL:\B\to\B$ ; this is the associated monad of the adjunction. You have $T^2(B)$ and this is related to $T$ with three canonical natural transformations: $$\eta_T:T\implies T^2,\,T\eta:T\implies T^2,\mu:T^2\implies T$$ Where $\eta:1\implies T=RL$ is the adjunction unit and $\mu$ is the monad multiplication map, or more concretely $\mu=R\epsilon_L$ where $\epsilon:LR\implies1$ is the counit, $\epsilon_L:LRL\implies L$ and $\mu=R\epsilon_L:T^2=RLRL\implies RL=T$ . These three transformations relate to each other in the following way: $$\mu\circ
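For a concrete feel for these three transformations, consider the list monad (a Python sketch, not part of the original answer; it arises, for instance, as $T=RL$ for the free/forgetful adjunction between sets and monoids): $\eta$ wraps, $\mu$ flattens, and the unit laws $\mu\circ\eta_T = \mathrm{id} = \mu\circ T\eta$ collapse $T^2$ back to $T$.

```python
# eta : B -> T(B), the adjunction unit.
def eta(x):
    return [x]

# mu : T^2(B) -> T(B), the monad multiplication (flatten one level).
def mu(xss):
    return [x for xs in xss for x in xs]

# T's action on morphisms (the functor part).
def tmap(f, xs):
    return [f(x) for x in xs]

xs = [1, 2, 3]                  # an element of T(B)
left_unit = mu(eta(xs))         # eta_T followed by mu
right_unit = mu(tmap(eta, xs))  # T(eta) followed by mu
```

Both composites return the original list, illustrating why $T^2(B)$ carries no more information than $T(B)$ once the monad structure is taken into account.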
|category-theory|
1
What is $\sqrt{-1}$? circular reasoning defining $i$.
I am reading Complex Analysis by Gamelin and I am having trouble understanding the square root function. The principal branch of $\sqrt{z}$ ( $f_1(z)$ ) is defined as $|z|^{\frac 1 2} e^{\frac{i \operatorname{Arg}(z)}{2}}$ for $z \in \mathbb{C} - (-\infty,0]$ and $f_2(z)$ is defined as $-f_1(z)$ where $\operatorname{Arg}(z) \in (-\pi , \pi]$ By this definition, what is $\sqrt{-r} $ where $r$ is a non-negative real number? Of course the answer is $i\sqrt{r}$ , but the definition of the square root function doesn't apply here. What is $i$ then? $i:=\sqrt{-1}$ , but how? We didn't define the square root function for negatives, but we still use $i$ to define complex numbers. Shouldn't the definition of the square root function be taught before defining $i$ ? And defining $i^2=-1$ without defining $i$ as either $\pm \sqrt{-1}$ is also very strange, because we want the extended function to be continuous, and choosing $\pm \sqrt{-1}$ will make this impossible although $i$ must be one of the two (after defining the s
$\DeclareMathOperator{\Arg}{Arg}$ The times I taught complex analysis using the (good, free) book of Beck, Marchesi, Pixton, and Sabalka, there was a nitpicky but useful handling of the polar angle: The principal branch of argument $\Arg$ is defined for all non-zero complex numbers , and $-\pi < \Arg z \leq \pi$ . One must take care that $\Arg$ is discontinuous on the negative real axis, but on the negative real axis $\Arg = \pi$ . The non-positive real axis is explicitly excluded from domains of differentiable functions whose definition involves $\Arg$ : $\log$ , roots, other non-integer power functions. In the question, the negative real axis is explicitly excluded from the domain of the principal square root (second bullet point); consequently, as José says, with this definition $\sqrt{-1}$ (the principal square root of $-1$ ) is undefined. If we are unconcerned about the discontinuity of the principal square root on the negative real axis, however (first bullet point), then for all positive real $r$ we have
|complex-analysis|algebra-precalculus|complex-numbers|continuity|riemann-surfaces|
0
Do the moments of the reciprocal normal distribution exist?
The following question is based on this question: Reciprocal of a normal variable with non-zero mean and small variance To summarize the main information from that question: $X$ is a normally distributed random variable: $$X \sim \mathcal{N}(\mu,\sigma^2)$$ Then $Y = 1/X$ has the following probability density function (see wiki ): $$f(y) = \frac{1}{y^2\sqrt{2\sigma^2\pi}}\,\exp\left(-\frac{(\frac{1}{y} - \mu)^2}{2 \sigma^2}\right)$$ This distribution of $Y$ does not have moments since ( stackExchange ): $$\int_{-\infty}^{+\infty}|x|f(x)\,dx = \infty$$ An intuitive explanation of this is that the distribution's tails are too heavy and consequently the law of large numbers fails. The more samples that are drawn and averaged, the less stable this average is. The non-zero probability density at $X = 0$ means that $Y$ will not have finite moments, since there is a non-zero probability that $Y = \infty$ . However, I am interested in the case where $X$ is normally distributed like above, but is truncated to an interval $[a,b]$ with $0 < a < b$ .
The moments exist because you have a bounded distribution on the positive real interval $\left[\frac1b,\frac1a\right]$ : you are going to find $\frac1b \le E[Y] \le \frac1a$ and $\frac1{b^2} \le E[Y^2] \le \frac1{a^2}$ . It is easy enough to use numerical methods to find the moments to arbitrary accuracy. For example, with an R function momentsreciptruncnormal implementing this numerical integration, the first and second moments with arbitrary $a,b,\mu,\sigma^2$ come out as: momentsreciptruncnormal(a=1, b=2, mu=3, sigmasq=4) # 0.6783127 0.4787007
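For readers without R at hand, here is a rough stdlib-only Python sketch of the same computation; the function name `moments_recip_truncnorm` is made up to mirror the R helper above, and composite Simpson's rule stands in for R's integrator:

```python
import math

def moments_recip_truncnorm(a, b, mu, sigmasq, n=10_000):
    """Approximate E[1/X] and E[1/X^2] for X ~ N(mu, sigmasq) truncated to
    [a, b] with 0 < a < b, using composite Simpson's rule (n even).
    Hypothetical re-implementation of the R helper mentioned above."""
    def w(x):  # truncated-normal density, up to the constant 1/(sigma*sqrt(2*pi))
        return math.exp(-((x - mu) ** 2) / (2 * sigmasq))
    def simpson(f):
        h = (b - a) / n
        s = f(a) + f(b)
        for k in range(1, n):
            s += (4 if k % 2 else 2) * f(a + k * h)
        return s * h / 3
    z = simpson(w)   # normalizing constant (the shared prefactor cancels)
    m1 = simpson(lambda x: w(x) / x) / z
    m2 = simpson(lambda x: w(x) / x ** 2) / z
    return m1, m2

m1, m2 = moments_recip_truncnorm(a=1, b=2, mu=3, sigmasq=4)
print(m1, m2)   # about 0.678 and 0.479, close to the R output above
```

Both values land inside the bounds $[\frac1b,\frac1a]$ and $[\frac1{b^2},\frac1{a^2}]$, as they must.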
|probability-distributions|normal-distribution|inverse|standard-deviation|means|
0
What is $\sqrt{-1}$? circular reasoning defining $i$.
I am reading complex analysis by Gamelin and I am having trouble understanding the square root function. The principal branch of $\sqrt{z}$ ( $f_1(z)$ ) is defined as $|z|^{\frac 1 2} e^{\frac{i \operatorname{Arg}(z)}{2}}$ for $z \in \mathbb{C} - (-\infty,0]$ , and $f_2(z)$ is defined as $-f_1(z)$ , where $\operatorname{Arg}(z) \in (-\pi , \pi]$ . By this definition, what is $\sqrt{-r}$ , where $r$ is a non-negative real number? Of course the answer is $i\sqrt{r}$ , but the definition of the square root function doesn't apply here. What is $i$ then? $i:=\sqrt{-1}$ , but how? We didn't define the square root function for negatives, but we still use $i$ to define complex numbers. Shouldn't the definition of the square root function be taught before defining $i$ ? And defining $i^2=-1$ without defining $i$ as either $\pm \sqrt{-1}$ is also very strange, because we want the extended function to be continuous, and choosing $\pm \sqrt{-1}$ will make this impossible, although $i$ must be one of the two (after defining the square root function).
An alternative way to construct the complex plane $\mathbb{C}$ is to take the vector space $\mathbb{R}^2$ and equip it with a multiplication function $\ast: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^2$ defined by: $$(x_1, y_1) \ast (x_2, y_2) = (x_1 x_2 - y_1 y_2, x_1 y_2 + x_2 y_1)$$ You can check that it satisfies the usual 'nice' properties of multiplication (associativity, commutativity, etc.), so the name is justified. We can now denote a vector in this space by $x + yi := (x, y)$ , where $i$ here really just means the "second coordinate direction." From here, the trick is to think of $x + yi$ as a "number" (which we'll call a complex number ), and see how the normal multiplication of numbers aligns with our multiplication operation. You'll find that $$(x_1 + iy_1)(x_2 + iy_2) = (x_1x_2 + i^2 y_1 y_2) + i(x_1 y_2 + x_2 y_1)$$ So we have compatibility if we interpret the symbol $i^2$ as $-1$ . It's notational convenience and nothing more; it does not say that $i = \sqrt{-1}$ .
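A few lines of Python make the bookkeeping concrete (the function name `cmul` is mine, not standard notation):

```python
def cmul(p, q):
    """Multiplication on R^2 that turns it into the complex numbers:
    (x1, y1) * (x2, y2) = (x1*x2 - y1*y2, x1*y2 + x2*y1)."""
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 - y1 * y2, x1 * y2 + x2 * y1)

i = (0, 1)          # the "second coordinate direction"
print(cmul(i, i))   # (-1, 0): the pair that plays the role of -1
# Commutativity on a sample pair:
print(cmul((1, 2), (3, 4)) == cmul((3, 4), (1, 2)))   # True
```

Here $(0,1)\ast(0,1)=(-1,0)$ is exactly the statement "$i^2=-1$", with no square root of a negative number ever taken.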
|complex-analysis|algebra-precalculus|complex-numbers|continuity|riemann-surfaces|
0
Boundedness of Radon-Nikodym derivative
I have the following question: under which conditions (I mean conditions on the measures) is the Radon-Nikodym derivative bounded? I did some research but couldn't find an answer, although I find the question very interesting and it could have many useful applications in statistics. Does anyone have an idea? Thank you for your time
One can show in the following setting that the Radon-Nikodym derivative is bounded a.e. : If $\nu$ and $\mu$ are both finite, positive measures on the same measurable space, and $\nu$ is absolutely continuous with respect to $\mu$ , and $L^1(\mu)\subseteq L^1(\nu) $ , then the Radon-Nikodym derivative is in $L^{\infty}(\mu) $ .
|integration|measure-theory|statistics|statistical-inference|radon-nikodym|
0
Real Polynomials on Compact sets of Complex numbers
Setting: $\mathbb{R}[x]$ is the set of polynomials with real coefficients. Every $f\in \mathbb{R}[x]$ is regarded as a function with domain $\mathbb{C}$ . $K$ is a compact subset of $\mathbb{C}$ . $\mathbb{R}[x]|_{K}$ is the set of restrictions to $K$ of functions in $\mathbb{R}[x]$ . $\mathcal{C}(K)$ is the set of continuous complex-valued functions on $K$ . By a set $F$ of complex-valued functions on some set $R$ being self-adjoint, we mean that for all $f\in F$ there exists some $\overline{f}\in F$ such that $\overline{f}(x)=\overline{f(x)}$ for all $x\in R$ . Questions: Is $\mathbb{R}[x]$ self-adjoint? Is $\mathbb{R}[x]|_{K}$ dense in $\mathcal{C}(K)$ ? If $\mathbb{R}[x]|_K$ is not dense in $\mathcal{C}(K)$ , is there a continuous function $f:K\rightarrow \mathbb{C}$ such that $f$ is not in the uniform closure of $\mathbb{R}[x]|_{K}$ and can be explicitly written down? Motivation (for me to ask this question): I am currently studying the section on Stone-Weierstrass in Baby Rudin, and it seems that Rudin doesn't answer these questions.
Question 1 Let $f(x)=ax+b$ and $\overline{f}(x)=a\overline{x}+b$ , and take $z=r_1+ir_2\in K\subseteq\mathbb{C}$ , where $a,b\in\mathbb{R}$ are fixed and $r_2\neq 0$ . Suppose $\overline{f}(x)=g(x)$ for some polynomial $g$ ; we note that $\deg(g)=\deg(\overline{f})=1$ . Write $g(x)=a_1x+b_1$ , where $a_1,b_1\in\mathbb{R}$ . It is easy to see (by direct computation) that \begin{align} a_1&=-a\\ b_1&=b+2ar_1. \end{align} Since $r_1$ is arbitrary, such a $g$ doesn't exist. Note that we may assume there is $z\in K$ with $z=r_1+ir_2$ , $r_2\neq 0$ ; otherwise $K$ would be a compact subset of $\mathbb{R}$ , and then we are back to the Stone-Weierstrass theorem. Question 2 I think you can get the answer from Exercise 21 of Rudin's "Principles of Mathematical Analysis" (2nd edition). It implies that: Let $K=\{z\in\mathbb{C} : |z|=1\}$ and $h(z)=1/z$ . Then $h$ is not in the uniform closure of $\mathbb{R}[x]|_K$ .
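The idea behind that exercise can be sketched numerically: the functional $f\mapsto\frac1{2\pi}\int_0^{2\pi}f(e^{i\theta})e^{i\theta}\,d\theta$ is continuous in the sup norm and vanishes on every polynomial, but not on $h(z)=1/z$, so $h$ cannot be a uniform limit of polynomials on the circle. A small Python check (the uniform sampling sum is exact for trigonometric polynomials of degree below the number of sample points):

```python
import cmath, math

def circle_integral(f, n=256):
    """Approximate (1/(2*pi)) * integral over [0, 2*pi] of f(e^{i*theta}) * e^{i*theta}
    by a uniform Riemann sum over n-th roots of unity."""
    total = 0j
    for k in range(n):
        z = cmath.exp(2j * math.pi * k / n)
        total += f(z) * z
    return total / n

# Any polynomial integrates to 0 against e^{i*theta} ...
poly = lambda z: 3 * z ** 4 - 2 * z + 7
print(abs(circle_integral(poly)))            # essentially 0
# ... but h(z) = 1/z does not:
print(circle_integral(lambda z: 1 / z))      # essentially 1
```

The sample polynomial and the helper name are my choices; the obstruction itself is the standard one from Rudin's exercise.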
|functional-analysis|complex-analysis|analysis|examples-counterexamples|dense-subspaces|
0
Is the notion "If a polynomial has small coefficients (relative to the exponent), then it has small roots" true?
Basically I'm trying to find good starting values for algorithms that determine the roots of a polynomial (e.g. Newton's method). Obviously we are trying to get as close to the root as we can, but how can we estimate where the roots of a polynomial lie? Is an argument like "If the coefficients are relatively small compared to the degree of the polynomial, then the magnitude of the roots is somewhere near the coefficients" correct? Are there counterexamples of polynomials with very small coefficients and very large roots?
There exist estimates for the size of the largest root. The most general go back to the idea that $z$ is not a root of $$ p(z)=a_nz^n+a_{n-1}z^{n-1}+\dots+a_1z+a_0 $$ if $|z|>R>0$ with an outer root radius $R$ that satisfies the inequality $$ |a_n|R^n\ge |a_{n-1}|R^{n-1}+\dots+|a_1|R+|a_0|. $$ This polynomial inequality for $R$ is easier to solve numerically than zeroing in on any specific root of the original polynomial, especially as for the further numerical purposes only a low relative accuracy is needed. The smallest such $R$ is obtained as the only positive root of a polynomial with only one sign change in the coefficient sequence, meaning there is exactly one positive root. This situation allows for the secure use of simple scalar root-finding methods like the Newton method. But one can also obtain simple (over-)estimates like $$ R=\max\left(1,\frac{|a_{n-1}|+\dots+|a_0|}{|a_n|}\right) $$ or $$ R=1+\frac{\max_{0\le k<n}|a_k|}{|a_n|}. $$ These estimates support the general idea: if the coefficients are small relative to the leading coefficient, the roots cannot be large.
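As a concrete sanity check of the two over-estimates on a polynomial with known roots (the example and variable names are mine):

```python
# The two over-estimates for the outer root radius, checked on
# p(z) = z^2 - 3z + 2, which has the known roots 1 and 2.
coeffs = [2, -3, 1]                      # a_0, a_1, a_2
an = abs(coeffs[-1])
rest = [abs(c) for c in coeffs[:-1]]

R_lagrange = max(1, sum(rest) / an)      # max(1, (|a_1| + |a_0|) / |a_2|)
R_cauchy = 1 + max(rest) / an            # 1 + max_k |a_k| / |a_n|

roots = [1, 2]
print(R_lagrange, R_cauchy)              # 5.0 and 4.0
print(all(abs(r) <= min(R_lagrange, R_cauchy) for r in roots))   # True
```

Both bounds comfortably dominate the true roots, as expected; they are starting-ball estimates, not sharp values.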
|polynomials|numerical-methods|roots|
1
Quadratic field extension of finite field $\mathbb{F}_{q}$.
Question : Let $q = p^{n}$ be a prime power. For which $q$ is the quadratic extension $\mathbb{F}_{q^{2}}$ of $\mathbb{F}_{q}$ of the form $\mathbb{F}_{q}(\sqrt{x})$ for some $x \in \mathbb{F}_{q}$ ? Furthermore, for which $q$ is the cubic extension $\mathbb{F}_{q^{3}}$ of $\mathbb{F}_{q}$ of the form $\mathbb{F}_{q}(\sqrt[3]{x})$ for some $x \in \mathbb{F}_{q}$ ? I am not sure how to approach this question. I have solved a similar question where we look at quadratic extensions $\mathbb{Q}(\sqrt{d})$ of $\mathbb{Q}$ . I tried a similar approach here by considering that we must adjoin a root of a minimal polynomial of degree 2, which we can find with the quadratic formula (since characteristic $\neq 2$ ). But I am not sure how to find the conditions on $q$ such that the extension has this form. The same problem arises for the cubic extension.
The case of finite fields is easier because an extension of degree $n$ is in this case unique. So if you can find any element $\alpha\in\mathbb{F}_q$ such that $x^3-\alpha$ is irreducible over $\mathbb{F}_q$ then a root of this polynomial necessarily generates $\mathbb{F}_{q^3}$ (unlike over $\mathbb{Q}$ , where we can only conclude it generates some extension of degree $3$ ), and so the extension is indeed of the required form. On the other hand, if there is no such $\alpha$ then clearly the extension cannot be of this form. So the question is now for which $q$ there is some $\alpha\in\mathbb{F}_q$ such that $x^3-\alpha$ is irreducible over $\mathbb{F}_q$ . Since this is a polynomial of degree $3$ , this is equivalent to asking for which $q$ there is some $\alpha$ such that $\alpha$ has no third root in $\mathbb{F}_q$ . The answer follows from the following simple exercise in group theory: Exercise: Let $G$ be a finite group. Then every element of $G$ has a third root if and only if $3\nmid |G|$ .
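For prime fields the exercise is easy to confirm by brute force; the sketch below (a simplification of my own: it only looks at $q=p$ prime, where $G=\mathbb{F}_p^\times$ has order $p-1$) checks that the cubing map is surjective exactly when $3\nmid p-1$:

```python
def small_primes(n):
    """All primes below n by trial division (fine for this tiny range)."""
    return [p for p in range(2, n) if all(p % d for d in range(2, int(p ** 0.5) + 1))]

# The exercise predicts: every element of F_p^* is a cube  <=>  3 does not divide p - 1.
surjective = {}
for p in small_primes(60):
    cubes = {pow(x, 3, p) for x in range(1, p)}
    surjective[p] = (len(cubes) == p - 1)

print(all(surjective[p] == ((p - 1) % 3 != 0) for p in surjective))   # True
```

For instance $p=7$ gives cubes $\{1,6\}$ only (and $3\mid 6$), while $p=5$ gives all of $\mathbb F_5^\times$ (and $3\nmid 4$).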
|abstract-algebra|galois-theory|finite-fields|extension-field|
1
If there is a large cardinal, can GCH also hold?
Let $P$ be a statement saying there is large cardinal of some kind. For example, $P$ can be one of There is a weakly inaccessible cardinal. There is a Mahlo cardinal. There is a weakly compact cardinal. There is a measurable cardinal. There is a strongly compact cardinal. There is a supercompact cardinal. ... Then, does $\operatorname{Con}(\mathrm{ZFC}+P)$ imply $\operatorname{Con}(\mathrm{ZFC}+\mathrm{GCH}+P)$ ? Context: Traditionally, that $\mathrm{ZF}$ and $\mathrm{ZFC+GCH}$ have the same consistency strength is proved by Gödel's constructible universe $L$ . However, a famous theorem of Scott says that if there is a measurable cardinal then $V\ne L$ , so this proof cannot work for measurables and above. The Levy–Solovay theorem implies CH is consistent with a measurable, but is there a proof for the situation where GCH holds?
The construction of $L$ is extended to accommodate large cardinals in the study of inner model theory. The core models for large cardinal axioms provide us with canonical inner models which capture, in a sense, "exactly the large cardinal of interest", and one of the consequences is that $\sf GCH$ holds in those models. However, above Woodin cardinals, inner model theory starts to break down, slowly at first, but eventually more and more, to the point that we don't have a good canonical inner model for very large cardinals. Woodin is working on his Ultimate- $L$ programme, which would give a canonical inner model for supercompact cardinals that actually captures all larger large cardinals as well, and will satisfy $\sf GCH$ and more. There is still much work there, though. Still, we do have a different weapon that works just as well. Silver's theorem tells us that we can lift an elementary embedding $j\colon V\to M$ to an embedding $j\colon V[G]\to M[H]$ if and only if $j``G\subseteq H$ .
|set-theory|large-cardinals|
1
Is the sum between a norm and a seminorm still a norm?
Let $X$ denote a vector space. Let $n:X\to\mathbb R$ be a norm on $X$ and let $s:X\to\mathbb R$ be a seminorm on $X$ . I have proved quite easily that the sum of two norms is itself a norm. I would like to understand what happens if I sum a norm and a seminorm: does one obtain a norm or a seminorm? I believe the result is a norm, but some classmates claim to have proved that the sum is only a seminorm. Could anyone please help me understand who is right?
It should be a norm. The triangle inequality is just \begin{align} (n+s)(x+y)&=n(x+y)+s(x+y) \leq n(x) +s(x) + n(y) + s(y) \\ &= (n+s)(x) + (n+s)(y) \end{align} Absolute homogeneity is similar, and nonnegativity follows directly from the nonnegativity of $s$ and $n$ . It remains to show that $0=(n+s)(x)$ implies $x=0$ . But this is just $0=(n+s)(x)=n(x)+s(x) \geq n(x) \geq 0$ , hence $n(x)=0$ , which implies $x=0$ since $n$ is a norm.
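A concrete sanity check in Python, with the Euclidean norm for $n$ and the seminorm $s(x)=|x_1|$ on $\mathbb R^2$ (this particular pair is my choice of example; $s$ vanishes on the whole second axis, yet $n+s$ does not):

```python
import math, random

# n = Euclidean norm, s = seminorm vanishing on the x2-axis, ns = their sum.
n = lambda v: math.hypot(v[0], v[1])
s = lambda v: abs(v[0])
ns = lambda v: n(v) + s(v)

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    xy = (x[0] + y[0], x[1] + y[1])
    assert ns(xy) <= ns(x) + ns(y) + 1e-9        # triangle inequality
    assert ns(x) >= 0                            # nonnegativity
# Definiteness: s alone vanishes at (0, 3), but n + s does not.
print(s((0, 3)), ns((0, 3)))   # 0 and 3.0
```

The random trials are of course no proof, but they illustrate why the seminorm's degeneracy cannot survive the addition of a genuine norm.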
|real-analysis|calculus|normed-spaces|
0
Probability that $b^2 - 4ac \geq 0$ where $a,b,c$ are normally distributed (numerical integration)
I would like to determine the probability that a random quadratic polynomial has positive discriminant, where the 3 coefficients $a, b, c$ are normally distributed and independent: That is, given $a,b,c \sim \operatorname{N}\left(0,1\right)$ , what is ( a numerical approximation to 5 digits of ) $\operatorname{Pr}\left(\,{b^2 - 4 a c \geq 0}\,\right)$ ? Thoughts : We have $$ \mathrm{Pr}\left(\,{b^2 - 4 a c \geq 0}\,\right) = \dfrac{1}{\left(\sqrt{\pi}\right)^3} \iiint_{\mathbb{R}^3} \mathbb{\large 1}_{b^{2}\ -\ 4ac\ \geq\ 0}\quad{\rm e}^{-a^2} {\rm e}^{-b^2}{\rm e}^{-c^2}\,\mathrm{d}a \,\mathrm{d}b \,\mathrm{d}c $$ This integral probably cannot be expressed explicitly, but even a numerical approximation is not so easy. Here is what I tried with SAGE , without success: var('a,b,c') RR = RealField(100) I = integrate(integrate(integrate(exp(-a^2), a, 0, b^2/(4*c)) * exp(-c^2), c, 0, oo) * exp(-b^2), b, 0, oo) print(RR(I)) #error... Experimenting with SAGE seems to give a probability betwee
Just another presentation of the problem. Denote $$B=\frac{b+c}{\sqrt{2}}, C=\frac{b-c}{\sqrt{2}}$$ You are interested in $\Pr(X>0)$ when $X=\frac{B^2}{2}-\frac{C^2}{2}-4A^2$ where $A,B,C$ are independent $N(0,1).$ The characteristic function of $X$ is $$\varphi(t)=E(e^{itX})=\frac{1}{\sqrt{1+t^2}}\times \frac{1}{\sqrt{1-8it}}$$ with a suitable interpretation of $\sqrt{1-8it}.$ Since $\varphi$ is integrable, inversion formula applies and the density of $X$ is $$f(x)=\frac{1}{2\pi}\int_Re^{-itx}\varphi(t)dt$$ and for $T>0$ we have by Fubini $$\Pr(0
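Whatever value the inversion formula produces can be cross-checked by brute force; here is a hedged Monte Carlo sketch (sample size and seed chosen arbitrarily, and it only gives two or three digits, far from the five requested):

```python
import random

# Monte Carlo estimate of Pr(b^2 - 4ac >= 0) for iid N(0,1) coefficients,
# as an independent cross-check on the inversion-formula computation.
random.seed(42)
N = 200_000
hits = sum(
    1 for _ in range(N)
    if random.gauss(0, 1) ** 2 - 4 * random.gauss(0, 1) * random.gauss(0, 1) >= 0
)
p_hat = hits / N
print(p_hat)   # roughly 0.63
```

Note that $p_{\hat{}} > 1/2$ is forced: whenever $ac<0$ (probability $1/2$) the discriminant is automatically nonnegative.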
|probability|integration|numerical-methods|normal-distribution|discriminant|
0
Quadratic field extension of finite field $\mathbb{F}_{q}$.
Question : Let $q = p^{n}$ be a prime power. For which $q$ is the quadratic extension $\mathbb{F}_{q^{2}}$ of $\mathbb{F}_{q}$ of the form $\mathbb{F}_{q}(\sqrt{x})$ for some $x \in \mathbb{F}_{q}$ ? Furthermore, for which $q$ is the cubic extension $\mathbb{F}_{q^{3}}$ of $\mathbb{F}_{q}$ of the form $\mathbb{F}_{q}(\sqrt[3]{x})$ for some $x \in \mathbb{F}_{q}$ ? I am not sure how to approach this question. I have solved a similar question where we look at quadratic extensions $\mathbb{Q}(\sqrt{d})$ of $\mathbb{Q}$ . I tried a similar approach here by considering that we must adjoin a root of a minimal polynomial of degree 2, which we can find with the quadratic formula (since characteristic $\neq 2$ ). But I am not sure how to find the conditions on $q$ such that the extension has this form. The same problem arises for the cubic extension.
In general, this question falls under the umbrella of Kummer theory . The basic result says: Let $n\geq 1$ be any integer. If $K$ is a field of characteristic coprime to $n$ (or $0$ ) that contains all $n$ -th roots of unity, then an extension $L/K$ of degree $n$ is of the form $K(\sqrt[n]{a})$ if and only if it is Galois with cyclic Galois group. Conversely, if $K$ admits such an extension that is simultaneously radical and cyclic of degree $n$ , then $K$ has all $n$ -th roots of unity and is of characteristic coprime to $n$ (or $0$ ). For finite fields, all this is not difficult to translate into a concrete condition. Suppose we have the finite field $K$ of characteristic $p$ and want to know if the unique extension $L/K$ of degree $n$ is radical. As every extension of finite fields is cyclic, it will be radical if and only if $K$ has characteristic coprime to $n$ and contains all $n$ -th roots of unity. The first is clearly the case if and only if $p\nmid n$ , while the second
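The quadratic half of the question can also be checked by brute force for prime fields (again a simplification of mine to $q=p$): a non-square exists in $\mathbb F_p$ exactly when $p$ is odd, while for $p=2$ squaring is the Frobenius endomorphism and hence bijective, so $x^2-a$ is never irreducible:

```python
# For q = p prime: F_{p^2} = F_p(sqrt(x)) needs a non-square x in F_p.
# For odd p the squares form an index-2 subgroup of F_p^*; for p = 2
# every element is a square.
has_nonsquare = {}
for p in [2, 3, 5, 7, 11, 13, 17, 19]:
    squares = {pow(x, 2, p) for x in range(p)}
    has_nonsquare[p] = (len(squares) < p)

print(all(has_nonsquare[p] == (p % 2 == 1) for p in has_nonsquare))   # True
```

This matches the Kummer-theoretic criterion with $n=2$: the condition "characteristic coprime to $2$" is exactly "$p$ odd", and every field contains the second roots of unity $\pm1$.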
|abstract-algebra|galois-theory|finite-fields|extension-field|
0
Inequality for integers with floor function
I want to show that for any nonnegative integers $l$ and $b$ we have $$ \frac{l}{2^{b+1}} - 1 \leq \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor. $$ I have a proof where I wrote $l = \alpha\cdot 2^{b+1} + \beta$ with $0\leq \beta < 2^{b+1}$ , but it's not very elegant, I'd say. Let $l = \alpha\cdot 2^{b+1} + \beta$ with $0\leq \beta < 2^{b+1}.$ Then we have (LHS): $$ \frac{l}{2^{b+1}}-1 = \alpha + \frac{\beta}{2^{b+1}}-1 $$ and (RHS): $$ \left\lfloor \frac{l-1}{2^{b+1}}\right\rfloor = \left\lfloor \alpha + \frac{\beta-1}{2^{b+1}}\right\rfloor. $$ In case $\beta = 0$ we get $\alpha - 1$ for (LHS) and, due to $0 < \frac{1}{2^{b+1}} < 1$ , we get the same for (RHS), so we have equality in this case. Otherwise, we have $1\leq \beta \leq 2^{b+1}-1$ and so $0 \leq \frac{\beta - 1}{2^{b+1}} < 1$ . So we have for (LHS) $$ \frac{l}{2^{b+1}}-1 = \alpha + \frac{\beta}{2^{b+1}}-1 \leq \alpha $$ and for (RHS) $$ \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor = \left\lfloor \alpha + \frac{\beta-1}{2^{b+1}} \right\rfloor = \alpha.\square $$ Is there a more elegant way?
This is a one-liner if you use the fact that $\lfloor x+a \rfloor = \lfloor x\rfloor +a$ for $x \in \Bbb R$ and $a \in \Bbb Z$ . Then the inequality is equivalent to $$\frac{l}{2^{b+1}} \le \left \lfloor \frac{l+2^{b+1} - 1}{2^{b+1}} \right \rfloor$$ which is trivially true. To see why, pull out and cancel the quotient of $l \div 2^{b+1}$ . If $l \equiv 0 \pmod{2^{b+1}}$ , then equality holds; otherwise the RHS (after cancelling) would be at least one, but the LHS would be less than one. Actually, the above proof is equivalent to your approach, so here's another, which uses the fact that $x-1 < \lfloor x \rfloor$ for $x \in \Bbb R$ : $$\frac{l-1}{2^{b+1}} - 1 < \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor$$ Now, suppose you add $\frac{1}{2^{b+1}}$ to the LHS. Can the inequality be violated? No: if $l \equiv 1 \pmod{2^{b+1}}$ then the LHS and RHS differ by exactly 1, so adding $\frac{1}{2^{b+1}} < 1$ won't change anything. Otherwise, the LHS is not an integer, and adding $\frac{1}{2^{b+1}}$ can make it equal to an integer (the RHS), but not greater. Hence, $$\frac{l}{2^{b+1}} - 1 \le \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor.$$
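Both arguments are easy to corroborate by an exhaustive integer-arithmetic check over a small grid (the ranges are chosen arbitrarily):

```python
# Exhaustive check on a grid: the claim
#   l / 2^(b+1) - 1 <= floor((l - 1) / 2^(b+1))
# is equivalent (multiply by 2^(b+1)) to
#   l - 2^(b+1) <= 2^(b+1) * floor((l - 1) / 2^(b+1)).
ok = True
for b in range(8):
    m = 2 ** (b + 1)
    for l in range(0, 4 * m + 1):
        # Python's // floors toward -infinity, so (l - 1) // m is right at l = 0 too.
        ok = ok and (l - m <= m * ((l - 1) // m))
print(ok)   # True
```

Clearing denominators first keeps everything in exact integers, so there is no floating-point doubt in the check.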
|inequality|ceiling-and-floor-functions|
0
Why does Peirce's law imply the law of excluded middle?
Why is it that if Peirce's law $((P\rightarrow Q)\rightarrow P) \rightarrow P$ holds in a formal system, the law of excluded middle $P \lor \neg P$ holds too?
First, let us prove that $$\big((P \vee \neg P) \rightarrow \text{False}\big) \rightarrow \neg P$$ by thinking of proofs of an implication $Q \rightarrow R$ as functions from $Q$ to $R$ , and of $\neg P$ as the implication $P \rightarrow \text{False}$ (by definition of $\neg P$ ). To prove the implication above, assume that we are given a proof $f$ of $(P \vee \neg P) \rightarrow \text{False}$ (i.e. a function from $P \vee \neg P$ to $\text{False}$ ). We then want to construct a proof of $P \rightarrow \text{False}$ , meaning a function from $P$ to $\text{False}$ . To do that, we start with a proof $p$ of $P$ . Recall now that $P$ implies $P \vee \neg P$ (this implication is a tautology: regardless of $P$ , we have a function $l : P \rightarrow (P \vee \neg P)$ , by definition of $\vee$ ). So we can associate to $p$ a proof $l(p)$ of $P \vee \neg P$ . Then $f(l(p))$ is a proof of $\text{False}$ and we are done. Second, let us note that, since $\neg P \rightarrow (P \vee \neg P)$ is also a tautology
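The argument above is constructive, and a truth table cannot replace it; still, as a classical sanity check that both formulas are tautologies over the two truth values:

```python
from itertools import product

# Classical truth-table check only: it confirms Peirce's law and excluded
# middle are classical tautologies, but does not reproduce the constructive
# derivation of one from the other.
implies = lambda p, q: (not p) or q

peirce = all(
    implies(implies(implies(P, Q), P), P)
    for P, Q in product([False, True], repeat=2)
)
lem = all(P or (not P) for P in [False, True])
print(peirce, lem)   # True True
```

In an intuitionistic setting neither line of the truth table is available, which is exactly why the proof-as-function argument above is needed.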
|logic|propositional-calculus|
0
Inequality for integers with floor function
I want to show that for any nonnegative integers $l$ and $b$ we have $$ \frac{l}{2^{b+1}} - 1 \leq \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor. $$ I have a proof where I wrote $l = \alpha\cdot 2^{b+1} + \beta$ with $0\leq \beta < 2^{b+1}$ , but it's not very elegant, I'd say. Let $l = \alpha\cdot 2^{b+1} + \beta$ with $0\leq \beta < 2^{b+1}.$ Then we have (LHS): $$ \frac{l}{2^{b+1}}-1 = \alpha + \frac{\beta}{2^{b+1}}-1 $$ and (RHS): $$ \left\lfloor \frac{l-1}{2^{b+1}}\right\rfloor = \left\lfloor \alpha + \frac{\beta-1}{2^{b+1}}\right\rfloor. $$ In case $\beta = 0$ we get $\alpha - 1$ for (LHS) and, due to $0 < \frac{1}{2^{b+1}} < 1$ , we get the same for (RHS), so we have equality in this case. Otherwise, we have $1\leq \beta \leq 2^{b+1}-1$ and so $0 \leq \frac{\beta - 1}{2^{b+1}} < 1$ . So we have for (LHS) $$ \frac{l}{2^{b+1}}-1 = \alpha + \frac{\beta}{2^{b+1}}-1 \leq \alpha $$ and for (RHS) $$ \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor = \left\lfloor \alpha + \frac{\beta-1}{2^{b+1}} \right\rfloor = \alpha.\square $$ Is there a more elegant way?
Case 1 When $l=0$ , in RHS we have $$ \left\lfloor \frac{-1}{2^{b+1}} \right\rfloor = -1 $$ In LHS we will have $-1$ , hence both sides will be equal Case 2 When $l \geq 1$ Assume $$ \frac{l}{2^{b+1}} - 1 \gt \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor $$ $$ \frac{l}{2^{b+1}} - \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor \gt 1 $$ $$ \frac{l-1}{2^{b+1}} - \left\lfloor \frac{l-1}{2^{b+1}} \right\rfloor + \frac{1}{2^{b+1}} \gt 1 $$ $$ \left\{ \frac{l-1}{2^{b+1}} \right\} + \frac{1}{2^{b+1}} \gt 1 $$ Where $\{a\}$ is fractional part of $a$ , $0 \leq \{a\} \lt 1$ Using Euclid's division algorithm we can find $q$ and $r$ such that $$(2^{b+1})q + r = l-1$$ Where $0 \leq r \leq 2^{b+1} - 1$ and $q \in \mathbb{Z}$ $$q + \frac{r}{2^{b+1}} = \frac{l-1}{2^{b+1}}$$ $$\left\{ q + \frac{r}{2^{b+1}} \right\} = \left\{ \frac{l-1}{2^{b+1}} \right\} $$ As per restrictions on $q$ and $r$ , we have $$\left\{ q + \frac{r}{2^{b+1}} \right\} = \frac{r}{2^{b+1}}$$ Putting it in an earlier equation we get
|inequality|ceiling-and-floor-functions|
0
Simplifying the inductive case for a summation
I am trying to prove by induction that $$\sum_{k=1}^n k^2(k+1) = \frac{1}{12} n(n+1)(n+2)(3n+1)$$ I proved the base case $n=1$ easily; however, when proving the inductive case for $n+1$ , I am met with too complex an expression. I go from $$\sum_{k=1}^{n+1} k^2(k+1) = \left(\sum_{k=1}^n k^2(k+1)\right) + (n+1)^2(n+2)$$ After that I expanded $k^2(k+1)$ to get two summations $$\left(\sum_{k=1}^n k^3\right) + \left(\sum_{k=1}^n k^2\right) + (n+1)^2(n+2)$$ After using the respective formulas for $\sum k^3$ and $\sum k^2$ I narrowed it down to $$\frac{1}{4}n^2(n+1)^2 + \frac{1}{6}n(n+1)(2n+1) + (n+1)^2(n+2)$$ and at this point I can't seem to get it to simplify right; I keep getting into messier situations. Do you guys have any tips, or did I do something wrong? Thank you
Take $n\in\Bbb N$ and assume that $$\sum_{k=1}^nk^2(k+1)=\frac1{12}n(n+1)(n+2)(3n+1).$$ Then \begin{align}\sum_{k=1}^{n+1}k^2(k+1)&=\left(\sum_{k=1}^nk^2(k+1)\right)+(n+1)^2(n+2)\\&=\frac1{12}n(n+1)(n+2)(3n+1)+(n+1)^2(n+2)\\&=\frac1{12}(n+1)(n+2)\bigl(n(3n+1)+12n+12\bigr)\\&=\frac1{12}(n+1)(n+2)(3n^2+13n+12).\end{align} On the other hand \begin{align}\frac1{12}(n+1)(n+2)(n+3)(3(n+1)+1)&=\frac1{12}(n+1)(n+2)(n+3)(3n+4)\\&=\frac1{12}(n+1)(n+2)(3n^2+13n+12),\end{align} and therefore $$\sum_{k=1}^{n+1}k^2(k+1)=\frac1{12}(n+1)(n+2)(n+3)(3(n+1)+1).$$
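The closed form is also easy to spot-check numerically before (or after) wrestling with the algebra:

```python
# Spot-check of the closed form against the raw sum, in exact integer
# arithmetic (the product n(n+1)(n+2)(3n+1) is always divisible by 12).
ok = all(
    sum(k * k * (k + 1) for k in range(1, n + 1))
    == n * (n + 1) * (n + 2) * (3 * n + 1) // 12
    for n in range(1, 200)
)
print(ok)   # True
```

Such a check is of course no substitute for the induction, but it catches algebra slips early.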
|discrete-mathematics|summation|induction|
1
Why is the bilinear form on $H^d(X,\mathbb Q_{\ell})$ afforded by Poincaré duality alternating when $d = \dim(X)$ is odd?
Let $X$ be a smooth projective irreducible variety of pure dimension $d$ over an algebraically closed field of positive characteristic $p$ . Let $\ell \not = p$ be a prime number. There is an isomorphism $\mathrm{Tr}: H^{2d}(X,\mathbb Q_{\ell}) \xrightarrow{\sim} \mathbb Q_{\ell}(-d)$ , and the cup product defines a pairing $$\mathrm{Tr}(x\cup y): H^i(X,\mathbb Q_{\ell}) \times H^{2d-i}(X,\mathbb Q_{\ell}) \to \mathbb Q_{\ell}(-d).$$ Poincaré duality states that this pairing is perfect. In particular, for $i = d$ we obtain a bilinear form on $H^d(X,\mathbb Q_{\ell})$ . In La conjecture de Weil: I (1974), Deligne writes that if $d$ is odd, this pairing is alternating (cf. paragraph 2.6). I don't understand where this property comes from. I thought that the cup product on $H^d(X,\mathbb Q_{\ell})$ was commutative, i.e. $x \cup y = y \cup x$ . If the form were alternating, it would in particular be skew-symmetric, i.e. $\mathrm{Tr}(x \cup y) = - \mathrm{Tr}(y \cup x)$ . This is impossible if the cup product is commutative.
As with singular cohomology, the cup product is graded commutative, not commutative. This means that for $x\in H^p(X,\mathbb Q_\ell)$ and $y\in H^q(X,\mathbb Q_\ell)$ , $$x\cup y=(-1)^{pq}y\cup x\in H^{p+q}(X,\mathbb Q_\ell).$$
|algebraic-geometry|etale-cohomology|poincare-duality|
1
Find an isomorphism $PGL_2(F_3) \cong S_4$
I am struggling to find an explanation of why this is true. I know (and I'm sorry) that this kind of question is commonly asked, but I couldn't find anything about this particular isomorphism.
As mentioned by Qiaochu, the isomorphism follows from the following observations: (general observations) there is a homomorphism $\varphi_p\colon\mathrm{PGL}_2(p)\to S_{p+1}$ given by the action of $\mathrm{PGL}_2(p)$ on the $(p+1)$ -element set $\mathbb{PF}_p^1$ . $\varphi_p$ is injective, i.e., the action above is faithful. This is shown as follows: for any $g\in\mathrm{GL}_2(p)$ , note that $g(1,0)=(a,0)$ and $g(0,1)=(0,b)$ for some constants $a,b\in\mathbb F_p^\times$ . But now $g(1,1)=(a,b)$ so $a=b$ . $\mathrm{PGL}_2(p)$ has order $(p^2-1)(p^2-p)/(p-1)=p(p^2-1)$ , see, e.g. Order of general- and special linear groups over finite fields. Moreover, $S_{p+1}$ has order $(p+1)!$ . Now when $p=3$ , the order of $\mathrm{PGL}_2(3)$ is $3\cdot(3^2-1)=24$ and $S_4$ has order $24$ , so together with the injectivity of $\varphi_3$ , we deduce $\varphi_3$ must be an isomorphism.
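The counting argument can be verified directly by enumerating the action of all invertible $2\times2$ matrices over $\mathbb F_3$ on the four points of $\mathbb{PF}_3^1$ (a brute-force sketch; the normalization helper is my own):

```python
from itertools import product

p = 3
# The four points of P^1(F_3): [1 : y] for y in F_3, plus the point [0 : 1].
points = [(1, 0), (1, 1), (1, 2), (0, 1)]

def normalize(x, y):
    """Scale a nonzero vector in F_3^2 to its canonical projective representative."""
    x, y = x % p, y % p
    if x != 0:
        return (1, (y * pow(x, p - 2, p)) % p)   # multiply by x^{-1} mod p
    return (0, 1)

perms = set()
for a, b, c, d in product(range(p), repeat=4):
    if (a * d - b * c) % p != 0:                 # invertible matrices only
        perms.add(tuple(points.index(normalize(a * x + b * y, c * x + d * y))
                        for (x, y) in points))

print(len(perms))   # 24 = |S_4|, so the faithful action hits every permutation
```

The 48 matrices of $\mathrm{GL}_2(3)$ collapse in pairs (a matrix and its scalar multiple act identically), leaving exactly $24$ distinct permutations of the $4$ points.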
|group-theory|simple-groups|exceptional-isomorphisms|
0
Unexpected appearances of $\pi^2 /~6$.
"The number $\frac 16 \pi^2$ turns up surprisingly often and frequently in unexpected places." - Julian Havil, Gamma: Exploring Euler's Constant . It is well-known, especially in 'pop math,' that $$\zeta(2)=\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\cdots = \frac{\pi^2}{6}.$$ Euler's proof of which is nice. I would like to know where else this constant appears non-trivially. This is a bit broad, so here are the specifics of my question: We can fiddle with the zeta function at arbitrary even integer values to eek out a $\zeta(2)$ . I would consider these 'appearances' of $\frac 16 \pi^2$ to be redundant and ask that they not be mentioned unless you have some wickedly compelling reason to include it. By 'non-trivially,' I mean that I do not want converging series, integrals, etc. where it is obvious that $c\pi$ or $c\pi^2$ with $c \in \mathbb{Q}$ can simply be 'factored out' in some way such that it looks like $c\pi^2$ was included after-the-fact so that said series, integral, etc. would equal
$$\int_0^{\pi/2} \cot^{-1}\frac{215+36\cos^2\theta}{88\sqrt{21}}\ d\theta =\frac{\pi^2}6$$
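A quick quadrature check of this identity, using composite Simpson's rule and $\operatorname{arccot} x=\arctan(1/x)$ for $x>0$:

```python
import math

# Numerical check that the integral equals pi^2/6.
def f(t):
    # arccot((215 + 36*cos^2 t) / (88*sqrt(21))) via arctan of the reciprocal
    return math.atan(88 * math.sqrt(21) / (215 + 36 * math.cos(t) ** 2))

n = 2000                                  # even number of subintervals
h = (math.pi / 2) / n
s = f(0) + f(math.pi / 2) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
integral = s * h / 3

print(integral, math.pi ** 2 / 6)         # both about 1.6449
```

Even a two-interval Simpson estimate already agrees with $\pi^2/6$ to about $5\times10^{-5}$, since the integrand is smooth and nearly flat.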
|integration|sequences-and-series|riemann-zeta|big-list|pi|
0