One of the numbers $\zeta(5), \zeta(7), \zeta(9), \zeta(11)$ is irrational
I am reading an interesting paper, One of the numbers ζ(5), ζ(7), ζ(9), ζ(11) is irrational by Zudilin. We fix odd numbers $q$ and $r$, $q\geq r+4$, and a tuple $\eta_0,\eta_1,\dots,\eta_q$ of positive integer parameters satisfying the conditions $$\eta_1\leq \eta_2\leq\dots\leq \eta_q \quad\text{and}\quad \eta_1+\eta_2+\dots+\eta_q\leq \eta_0\left(\frac{q-r}{2}\right).\tag{1}$$ Define $$F_n:=\frac{1}{(r-1)!}\sum_{t=0}^\infty R_n^{(r-1)}(t)\tag{2}$$ and note that $R_n(t)=O(t^{-2})$. We put $m_j=\max\{\eta_r,\eta_0-2\eta_{r+1},\eta_0-\eta_1-\eta_{r+j}\}$ for $j=1,2,\dots,q-r$ and define the integer $$\Phi_n:=\prod_{\sqrt{\eta_0 n}<p}\cdots,$$ where only primes enter the product, and $$\varphi(x)=\min_{0\leq y<1}\cdots,$$ where $[\,\cdot\,]$ denotes the ceiling function. Let $D_N$ denote the lcm of $1,2,\dots,N$. Lemma $1$: $(2)$ defines a linear form in $\zeta(r+2),\zeta(r+4),\dots,\zeta(q-2)$ with rational coefficients; moreover, $$D_{m_1n}^{r}\,D_{m_2n}\cdots D_{m_{q-r}n}\cdot\Phi_n^{-1}\cdot F_n\in\mathbb{Z}+\mathbb{Z}\zeta(r+2)+\mathbb{Z}\zeta(r+4)+\dots+\mathbb{Z}\zeta(q-2).$$
There isn't a lot of thought that goes into the verification; it's pure computation at this point. I'll include code that computes $C_0$. Computing $C_1$ is similarly a direct computation (but it requires integrating the digamma function).

```python
# This is Sage code, for the SageMath computer algebra system.
# It's very similar to Python, but with extra batteries included.
etas = [91, 27, 27, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]
r = 3
q = 13

term1(x) = 1
term2(x) = 1
for eta in etas:
    term1 *= (x - eta)
term1 *= (x - etas[0])^2
for eta in etas[1:]:
    term2 *= (x - etas[0] + eta)
term2 *= x^3
```

Now term1 and term2 are the two polynomials $$ (x - 38) \cdot (x - 37) \cdot (x - 36) \cdot (x - 35) \cdot (x - 34) \cdot (x - 33) \cdot (x - 32) \cdot (x - 31) \cdot (x - 30) \cdot (x - 29) \cdot (x - 91)^{3} \cdot (x - 27)^{3} $$ and $$ (x - 62) \cdot (x - 61) \cdot (x - 60) \cdot (x - 59) \cdot (x - 58) \cdot (x - 57) \cdot (x - 56) \cdot (x - 55) \cdot (x - 54) \cdot (x - 53) \cdot (x - 64)^{3} \cdot x^{3}. $$
|number-theory|analytic-number-theory|riemann-zeta|computational-mathematics|computational-number-theory|
1
Some interesting trigonometric sums
In working on a physics problem, I've come across sums of trigonometric functions of the following form: $$S(n,L) = -4^{n}+2^{2n}\sum_{k=0}^{L}\left[\cos\left(\frac{k\pi}{L+1}\right)\right]^{2n}$$ where $L$ and $n$ are positive integers. These sums aren't particularly recognizable to me, so if an expert is familiar with sums of these types it would be helpful to get their comments. In any case, here is what I've tried in order to see how they behave: to begin understanding these sums, I've tried evaluating them as a function of $n$ at fixed $L$. The first few results are $S(n,1) = 0$, $S(n,2) = 2$, $S(n,3) = 2^{n+1}$. For larger $L$, numerical experimentation indicates that the result is significantly more complicated. Can anyone shed light on possible methods of attacking these kinds of sums?
Let $\theta=\pi/(L+1)$. Using the fact that $$ \cos(k\theta)=\frac{e^{ik\theta}+e^{-ik\theta}}{2}, $$ we get $$ \begin{align} 2^{2n}\sum_{k=0}^L[\cos(k\theta)]^{2n} &=2^{2n}\sum_{k=0}^L\left(\frac{e^{ik\theta}+e^{-ik\theta}}{2}\right)^{2n} \\&=\sum_{k=0}^L \sum_{r=0}^{2n} \binom{2n}r (e^{ik\theta})^r(e^{-ik\theta})^{2n-r} \\&=\sum_{r=0}^{2n} \binom{2n}r\sum_{k=0}^L ( e^{2i\theta(r-n)} )^k. \end{align} $$ Let us now zoom in on the inner sum $\sum_{k=0}^L ( e^{2i\theta(r-n)} )^k,$ which is equal to $$ \sum_{k=0}^L ( e^{i2\pi (r-n)/(L+1)} )^k. $$ This is a finite geometric series of the form $1 + \alpha + \alpha^2 + \dots + \alpha^L$, where $\alpha = e^{i2\pi (r-n)/(L+1)}$. If $\alpha\neq 1$, then the sum is equal to $$\frac{1-\alpha^{L+1}}{1-\alpha}.$$ However, since $\alpha^{L+1}=e^{i2\pi(r-n)}=1$, the geometric series sums to zero in the case $\alpha\neq 1$. All that remains are the terms for which $\alpha=1$, i.e. those with $(L+1)\mid(r-n)$; each such inner sum equals $L+1$.
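The collapse of the inner geometric sum can be sanity-checked numerically; here is a quick sketch (the function names `S_direct` and `S_closed` are mine, not from the question):

```python
import math

def S_direct(n, L):
    # S(n, L) computed straight from the definition in the question
    theta = math.pi / (L + 1)
    return -4**n + 4**n * sum(math.cos(k * theta)**(2 * n) for k in range(L + 1))

def S_closed(n, L):
    # only binomial terms with r ≡ n (mod L+1) survive; each contributes L+1
    return -4**n + (L + 1) * sum(math.comb(2 * n, r)
                                 for r in range(2 * n + 1)
                                 if (r - n) % (L + 1) == 0)

for n in range(1, 7):
    for L in range(1, 7):
        assert abs(S_direct(n, L) - S_closed(n, L)) < 1e-6 * max(1.0, abs(S_closed(n, L)))

# matches the small cases observed in the question
assert all(S_closed(n, 1) == 0 for n in range(1, 8))
assert all(S_closed(n, 2) == 2 for n in range(1, 8))
assert all(S_closed(n, 3) == 2**(n + 1) for n in range(1, 8))
```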
|combinatorics|summation|trigonometric-series|
0
Where did I go wrong? Integration by parts
So I was looking at the integral $\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx}$ and I got the following using the DI method (for those who don't know it, search "DI bprp" on YouTube). Choosing $f(x)$ for D and $\frac{d}{dx}\left(\frac{df(x)}{dx}\right)$ for I, we get on the first row $\frac{df(x)}{dx}$ for both D and I; continuing another row (and switching the sign from $+$ to $-$), we get $\frac{d}{dx}\left(\frac{df(x)}{dx}\right)$ for D and $f(x)$ for I. By using the rules of the DI method we would get: $\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx} = f(x)\cdot\frac{df(x)}{dx} -\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx}$. By adding $\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx}$ to both sides we get: $2\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx} = f(x)\cdot\frac{df(x)}{dx}$, and so: $\int{f(x)\cdot\frac{d}{dx}\left(\frac{df(x)}{dx}\right)dx} = \frac{f(x)\cdot\frac{df(x)}{dx}}{2}$. But this is clearly wrong: even just looking at $f(x)=x^2 \Rightarrow f'(x)=2x \Rightarrow f''(x)=2$, we can see that the "formula" I got is wrong.
Here's my DI table: $$\begin{array}{cccc} & \text{D} & & \text{I} \\ \hline + & f(x) & & f''(x) \\ & & \searrow & \\ - & f'(x) & & f'(x) \\ & & \searrow & \\ + & f''(x) & & f(x) \end{array}$$ So the top row is $+\int f(x)\, f''(x)\,\mathrm dx$, which is the integral that we're evaluating. Then next we follow the first arrow to get $+f(x)\,f'(x)$. Then we follow the next arrow to get $-f'(x)\,f(x)$. There are no more arrows, so we finish with $+\int f''(x)\,f(x)\,\mathrm dx$. Conclusion: $$\int f(x)\,f''(x)\,\mathrm dx = f(x)\,f'(x) - f'(x)\,f(x) + \int f''(x)\,f(x)\,\mathrm dx\,,$$ which is true but useless. When you did this, it seems like you forgot to include the term from the second arrow. When you did it the non-DI way, you neglected to distribute a minus sign: the last term should have been $-\,- = +$, but you left it $-$.
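To see the correct identity concretely, here is a small SymPy check (my own sketch; `F = x**2` is just a sample choice for $f$):

```python
import sympy as sp

x = sp.symbols('x')
F = x**2  # sample f(x); any smooth function works here

# correct integration by parts: ∫ f f'' dx = f f' − ∫ (f')² dx
lhs = sp.integrate(F * sp.diff(F, x, 2), x)
rhs = F * sp.diff(F, x) - sp.integrate(sp.diff(F, x)**2, x)
assert sp.simplify(lhs - rhs) == 0

# the "formula" from the question, f f'/2, does NOT match
assert sp.simplify(lhs - F * sp.diff(F, x) / 2) != 0
```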
|integration|
0
Converting a proper fraction into partial fraction
For solving integration-related questions, a rational proper fraction of the form $\frac{px^{2}+qx+r}{(x-a)(x^{2}+bx+c)}$ is decomposed into the sum of the expressions, $$\frac{A}{x-a} + \frac{Bx+C}{x^{2}+bx+c}$$ where $x^{2}+bx+c$ can't be factorised further. I don't understand why we have to use $Bx+C$ in the numerator of the second expression. Why can't it be just $B$ ? To elaborate, if $\frac{x^{2}+x+1}{(x-4)(x^{2}+x+3)}$ is decomposed into the sum $\frac{A}{x-4} + \frac{B}{x^{2}+x+3}$ , we will get contradictory values for $A$ and $B$ . But if we use $Bx+C$ in the second expression, the method works fine. Why is this so?
In full, the partial fraction decomposition for OP's expression should be $$\frac{p(x)}{q(x)} = \frac{a_1}{x-r_1} + \frac{a_2}{x-r_2} + \frac{a_3}{x-r_3} \tag{$*$}$$ where $a_i\in\Bbb C$ and $r_i$ denote the roots of $q(x)$ . But any two of the fractions on the RHS can be joined to obtain e.g. $$\begin{align*} \frac{p(x)}{q(x)} &= \frac{a_1}{x-r_1} + \frac{a_2\left(x-r_3\right)+a_3\left(x-r_2\right)}{\left(x-r_2\right)\left(x-r_3\right)} \\ &= \frac{a_1}{x-r_1} + \frac{\left(a_2+a_3\right)x - \left(a_2r_3+a_3r_2\right)}{x^2 - \left(r_2+r_3\right)x + r_2r_3} \tag{$**$} \end{align*}$$ The expansion in $(**)$ is easier to work with in OP's example since $x^2+x+3=0$ at the roots $r_{2,3}=\dfrac{-1\pm i\sqrt{11}}2$ . There's far less (read: no) juggling of imaginary constants involved. By the same token, if $q(x)$ has any repeated factors, we have the "shortcut" e.g. $$\frac{p(x)}{\left(x-r_1\right) \left(x-r_2\right)^2} = \frac{a_1}{x-r_1} + \frac{a_2}{x-r_2} + \frac{a_3}{\left(x-r_2\right)^2}.$$
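A quick SymPy sketch (my own illustration) confirms both halves of this: the decomposition with a linear numerator over the quadratic works, while a constant numerator leads to an inconsistent linear system:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
expr = (x**2 + x + 1) / ((x - 4) * (x**2 + x + 3))

# the full decomposition uses a linear numerator over the irreducible quadratic
decomp = sp.apart(expr, x)
assert sp.simplify(decomp - expr) == 0

# trying A/(x-4) + B/(x^2+x+3) with a constant B: match coefficients of
# A*(x^2+x+3) + B*(x-4) against x^2+x+1 -- the system has no solution
num = sp.expand(A * (x**2 + x + 3) + B * (x - 4) - (x**2 + x + 1))
eqs = [num.coeff(x, i) for i in range(3)]
assert sp.solve(eqs, [A, B]) == []
```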
|algebra-precalculus|partial-fractions|
1
Probability of an event occurring x times in a row within n trials
How can I calculate the probability of an event occurring $9$ times in a row within $6000$ trials, if there is a $1/3$ chance of the event occurring in a single trial?
I will focus on the complementary probability that in $~6000~$ consecutive (independent) Bernoulli trials, at no point is there an occurrence of $~9~$ consecutive successes. The math is somewhat complicated. However, this response provides the formulas needed in order to have a computer program compute the exact probability. Let $~S~$ denote the collection of all possible occurrences of the $~6000~$ trials. Since each trial either fails or succeeds, you have that $~|S| = 2^{6000}.$ For $~k \in \Bbb{Z_{\geq 0}}, ~k \leq 6000,~$ let $~S_k~$ denote the subset of $~S~$ that represents having exactly $~k~$ failures. Therefore, $$|S_k| = \binom{6000}{k}.$$ For any element $~x \in S_k,~$ where $~x~$ represents a set of $~6000~$ trials that had exactly $~k~$ failures, the probability of the element $~x~$ occurring is $$p(k) = \left[ ~\frac{2}{3} ~\right]^k \times \left[ ~\frac{1}{3} ~\right]^{6000 - k} = \frac{2^k}{3^{6000}}.$$ Let $~g(k)~$ denote the number of elements in $~S_k~$ that represent no occurrence of $~9~$ consecutive successes.
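The "have a computer program compute the exact probability" step can be sketched with a small dynamic program over the current streak length (this is my own illustration, not the answerer's formulas; the state $j$ is the number of consecutive successes seen so far):

```python
# state j = current number of consecutive successes (0..8); success prob = 1/3
N, RUN, p = 6000, 9, 1.0 / 3.0
v = [0.0] * RUN
v[0] = 1.0
hit = 0.0  # probability mass that has already seen a run of 9
for _ in range(N):
    new = [0.0] * RUN
    for j, mass in enumerate(v):
        new[0] += mass * (1 - p)       # a failure resets the streak
        if j + 1 == RUN:
            hit += mass * p            # streak reaches 9: absorbed
        else:
            new[j + 1] += mass * p     # streak grows by one
    v = new

prob_at_least_one_run = hit
assert abs(hit + sum(v) - 1) < 1e-9    # probability mass is conserved
assert 0.15 < prob_at_least_one_run < 0.25
```

The complementary probability asked about in the answer is then `1 - prob_at_least_one_run`.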
|probability|
0
Do (ineffective/locally effective) Cartan geometries have a unique maximal atlas?
In Sharpe's book on Cartan geometry, he mentions in kind of an offhand way that Cartan atlases extend to unique maximal Cartan atlases, but I'm feeling skeptical. For a Klein pair $(G,H)$ with Lie algebras $(\mathfrak{g},\mathfrak{h})$, a Cartan atlas is defined as a family of $\mathfrak{g}$-valued $1$-forms on an open cover such that on the overlap of any two open charts $U$ and $V$ there exists a smooth function $k:U\cap V\to H$ with $$\theta_U=\mathrm{Ad}(k^{-1})\theta_V+k^* \omega_H,\tag{1}\label{eq1}$$ where $\omega_H$ is the Maurer-Cartan form on $H$. The standard way to prove that an atlas extends to a unique maximal atlas is to union all atlases compatible with a given one and prove that all such atlases are compatible with each other. So if $\mathcal{A}\sim \mathcal{B}$ and $\mathcal{B}\sim \mathcal{C}$ then we should get $\mathcal{A}\sim \mathcal{C}$. Given $U\in \mathcal{A}$ and $V\in \mathcal{C}$ with nonempty intersection, we cover $U\cap V$ with charts from $\mathcal{B}$.
This answer is speculative, so I welcome someone else to post a better one. I believe that transitivity of compatibility of atlases fails in general. The problem is essentially that two charts can be compatible locally without being compatible globally (I think) when their intersection is not simply connected. This seems like a pathological state of affairs, and so in the ineffective case of Cartan geometry, it would seem more natural to force localness of compatibility and say that two charts are compatible if there exists a $k$ inducing the appropriate change of gauge in a neighborhood of each point of the intersection. With this altered definition we would recover the existence of unique maximal atlases by the argument I sketched in my question.
|differential-geometry|lie-groups|gauge-theory|cartan-geometry|
0
What are the groups containing dihedral group $D_4$ of order $8$?
I'm a little embarrassed to ask this, but I couldn't answer it myself. I am looking for the groups that contain $D_4$ and are larger than $D_4$. Here is what I think: we cannot say $D_4 \subseteq D_n$, $n\ge 5$, because their group operations are different. But what else? $D_4$ doesn't lie in $S_4$. Any help is appreciated.
They can't be abelian. What about products? For any group $G$: $$G\times D_4,\quad G*D_4,$$ the direct and free products, respectively. I don't think you can always form a semidirect or wreath product, but sometimes you can. Also, many dihedral groups contain embedded $D_4$'s (some contain more than one copy of $D_4$). $S_n$, $n\ge 4$, contains $D_4$.
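As a concrete check of that last claim, $D_4$ sits inside $S_4$ as the symmetries of a square permuting its four vertices; a small closure computation (my own sketch) verifies it:

```python
def compose(p, q):
    # permutations of {0,1,2,3} as tuples: (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)  # the 4-cycle (0 1 2 3): rotation of the square
s = (0, 3, 2, 1)  # the transposition (1 3): a diagonal reflection

# close {identity, r, s} under composition inside S_4
G = {(0, 1, 2, 3), r, s}
changed = True
while changed:
    changed = False
    for p in list(G):
        for q in (r, s):
            c = compose(p, q)
            if c not in G:
                G.add(c)
                changed = True

r_inv = tuple(r.index(i) for i in range(4))
assert len(G) == 8                          # |D_4| = 8
assert compose(compose(s, r), s) == r_inv   # s r s = r^{-1}: the dihedral relation
assert compose(r, s) != compose(s, r)       # non-abelian
```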
|abstract-algebra|group-theory|dihedral-groups|
1
6 conics through 3 points and tangent to a line
I played in GeoGebra and discovered an interesting fact: Let $L$ be a line. Let $A_1,A_2,A_3$ be points that are not collinear and not on $L$ . Let $C_1,\dots,C_6$ be 6 conics that pass through points $A_1, A_2, A_3$ and are tangent to $L$ . Let $C_1,C_2$ intersect at a fourth point $Q_1$ other than $A_1,A_2,A_3$ . Let $C_2,C_3$ intersect at a fourth point $Q_2$ other than $A_1,A_2,A_3$ . Let $C_3,C_4$ intersect at a fourth point $Q_3$ other than $A_1,A_2,A_3$ . Let $C_4,C_5$ intersect at a fourth point $Q_4$ other than $A_1,A_2,A_3$ . Let $C_5,C_6$ intersect at a fourth point $Q_5$ other than $A_1,A_2,A_3$ . Let $C_6,C_1$ intersect at a fourth point $Q_6$ other than $A_1,A_2,A_3$ . Let $D_1$ be the conic through 5 points $Q_1, Q_4, A_1, A_2, A_3$ . Let $D_2$ be the conic through 5 points $Q_2, Q_5, A_1, A_2, A_3$ . Let $D_3$ be the conic through 5 points $Q_3, Q_6, A_1, A_2, A_3$ . Then $D_1, D_2, D_3$ pass through a fourth point $E$ other than $A_1, A_2, A_3$ .
OP has observed that the set of conics passing through three points form a projective plane $P$ , and has observed that a certain configuration of conics in $P$ give a result analogous to Pascal's theorem. OP also claims (1) that the family of conics passing through the points $A_1,A_2,A_3$ and tangent to a line L constitute a conic in $P$ . The purpose of this answer is to spell out how Pascal's theorem yields the result in question, assuming that claim (1) is true. The figure above shows the projective plane $P$ . A point of $P$ represents a conic passing through the points $A_1,A_2,A_3$ . A line of $P$ represents a pencil $x$ of conics passing through base points $A_1,A_2,A_3,X$ . Note that the name of the pencil and the fourth base point are the same, except for case. The diagram follows the construction given in the OP, starting with 6 conics $C_i$ sitting on a conic in $P$ , and ending with the (red) Pascal line $e$ . For example, $q_1$ is the pencil of conics with base points $A_1,A_2,A_3,Q_1$ .
|conic-sections|projective-geometry|
1
Evaluating limit involving integrals
I recently came across a challenging limit problem involving integrals of power functions during an examination, and I'm having trouble figuring out how to solve it. I would greatly appreciate any help or insights you can provide. The problem is to evaluate the following limits: $$ \lim_{n\rightarrow +\infty} \frac{\int_0^{1/2}x^{nx}dx}{\int_{1/2}^1x^{nx}dx},\quad \lim_{n\rightarrow +\infty} \frac{\int_0^{1}\frac{x^{nx}}{1+x^2}dx}{\int_{0}^1x^{nx}dx}. $$ I am aware that we might need to consider the function $f(x)=x^x=e^{x\ln x}$, which attains its maximum value of $1$ at the endpoints $0$ and $1$, and its minimum value $e^{-1/e}$ at $x=e^{-1}$. However, I'm unsure how to utilize these facts to tackle the problem. Any hints, suggestions or explanations on how to approach these types of limit problems would be greatly appreciated. Thank you in advance for your assistance.
For the second ratio, we recall from the first ratio that the dominant part is given by $x$ close to $1$ . In this case $1+x^2$ is essentially $2$ , so the limit should now be $\frac12$ . This means that bounds don't suffice anymore, we need to determine the asymptotics of the numerator and the denominator. Let $f(x)=x\ln(x)$ ; $C_n=1-\frac{1}{\sqrt n}$ and let $n$ be sufficiently large. Recall from the other answer that $$\int_0^{1/2}e^{nf(x)}\mathrm dx\le\frac2{f(n)}+2e^{-\sqrt n}.$$ But using $f(x)\le f(C_n)$ for $1/2\le x\le C_n$ further yields $$\int_{1/2}^{C_n}e^{nf(x)}\mathrm dx\le\frac12e^{nf(C_n)}\le\frac12e^{-C_n\sqrt n}$$ using $\ln(x)\le x-1$ . For the last part we use $x-1\le f(x)\le x-1+(x-1)^2=x(x-1)$ for $x\le 1$ (since $f(1-x)\le-x(1-x)$ using $\ln(x)\le x-1$ ). This shows that \begin{align*} \int_{C_n}^1e^{nf(x)}\mathrm dx&\ge\frac1n(1-e^{-\sqrt n}),\\ \int_{C_n}^1e^{nf(x)}\mathrm dx&\le\int_{C_n}^1e^{nx(x-1)}\mathrm dx \le\int_{C_n}^1e^{C_nn(x-1)}\mathrm dx=\frac{1-e^{-C_n\sqrt n}}{C_nn}\le\frac{1}{C_nn}. \end{align*}
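A crude numerical check of both limits (my own sketch; the grid is chosen fine enough to resolve the peak of width about $1/n$ near $x=1$):

```python
import numpy as np

def ratios(n, pts=2_000_001):
    x = np.linspace(0.0, 1.0, pts)
    y = x**(n * x)                     # x^{nx}; numpy gives 0.0**0.0 == 1.0
    dx = x[1] - x[0]

    def trap(vals, lo, hi):
        # trapezoid rule on the uniform sub-grid [lo, hi]
        v = vals[(x >= lo) & (x <= hi)]
        return dx * (v.sum() - 0.5 * (v[0] + v[-1]))

    r1 = trap(y, 0.0, 0.5) / trap(y, 0.5, 1.0)
    r2 = trap(y / (1 + x**2), 0.0, 1.0) / trap(y, 0.0, 1.0)
    return r1, r2

r1, r2 = ratios(2000)
assert r1 < 0.5          # the first ratio tends to 0 (slowly, like 1/log n)
assert 0.5 < r2 < 0.75   # the second ratio tends to 1/2 from above
```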
|real-analysis|calculus|integration|limits|
0
Square is not a monotone function on the set of PSD matrices
Let $X,Y$ be real symmetric matrices. We say $X \geq Y$ if $X-Y$ is PSD. Question: What is a basic ($2\times 2$) example of matrices $X,Y$ s.t. $$ X \leq Y $$ but $$ X^2 > Y^2, $$ i.e. $X^2 - Y^2$ is strictly positive definite? All I know is that $X,Y$ cannot commute, but I am having difficulty coming up with concrete examples.
Here's an example if we allow arbitrary symmetric matrices: take $X = -2I$ and $Y = I$ , where $I$ denotes the identity matrix. Of course, we have $X^2 = 4I \geq I = Y^2$ . No example exists in which $0 \leq X \leq Y$ but $X^2 > Y^2$ . One way to see that this is the case is to note that $X \leq Y$ implies that $\lambda_i(X) \leq \lambda_i(Y)$ (where $\lambda_i(X)$ denotes the $i$ th eigenvalue of $X$ in increasing order), and $X < Y$ implies $\lambda_i(X) < \lambda_i(Y)$ . So, $X \leq Y$ implies that $\lambda_1(X) \leq \lambda_1(Y)$ , which means that $$ \lambda_1(X^2) = \lambda_1(X)^2 \leq \lambda_1(Y)^2 = \lambda_1(Y^2). $$ However, $X^2 > Y^2$ necessitates that $\lambda_1(X^2) > \lambda_1(Y^2)$ , which contradicts the inequality above.
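A quick numerical confirmation of the example (my own sketch):

```python
import numpy as np

X = -2 * np.eye(2)
Y = np.eye(2)

# X <= Y: Y - X = 3I is positive semidefinite
assert np.all(np.linalg.eigvalsh(Y - X) >= 0)
# yet X^2 - Y^2 = 3I is strictly positive definite
assert np.all(np.linalg.eigvalsh(X @ X - Y @ Y) > 0)
```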
|linear-algebra|matrices|
0
Choosing at least one of 3 different types
Suppose I had to choose 5 people from 8 engineers, 7 scientists and 6 mathematicians. How many ways can I choose at least one person from each profession? By brute force, the total number of selections possible, $n$, is $$n = \binom{8}{1}\binom{7}{1}\binom{6}{3}+\binom{8}{1}\binom{7}{3}\binom{6}{1}+\binom{8}{3}\binom{7}{1}\binom{6}{1}+\binom{8}{1}\binom{7}{2}\binom{6}{2}+\binom{8}{2}\binom{7}{2}\binom{6}{1}+\binom{8}{2}\binom{7}{1}\binom{6}{2}=14140$$ Is there a more suitable approach to solving this problem? Moreover, can it be generalised? Suppose I had to choose $n$ objects from $p$ distinguishable objects of one type, $q$ distinguishable objects of another type and $r$ distinguishable objects of the last type, given $p,q,r>n$. Then how many ways are there to choose $n$ objects with at least one from each category? How could I generalise this for more, say $m$, categories?
There should be $$\binom{p+q+r}{n}-\binom{p+q}{n}-\binom{p+r}{n}-\binom{q+r}{n}+\binom{p}{n}+\binom{q}{n}+\binom{r}{n}$$ ways to choose $n$ people with at least one from each category, using the principle of inclusion-exclusion (PIE). The total number of ways to choose $n$ people without any restrictions is $\binom{p+q+r}{n}$. For the selections where at least one type of occupation is missing, we subtract $\binom{p+q}{n}$, $\binom{p+r}{n}$, and $\binom{q+r}{n}$. Finally, the cases where two types are missing have now been subtracted twice, so we add back $\binom{p}{n}+\binom{q}{n}+\binom{r}{n}$. Let's test our formula with your original example! $$\binom{21}{5}-\binom{15}{5}-\binom{14}{5}-\binom{13}{5}+\binom{8}{5}+\binom{7}{5}+\binom{6}{5}=14140,$$ as desired. I'm not sure if this is more suitable for you, but this formula seems a lot simpler than brute force. It can be expanded to $m$ categories as well.
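A quick script (my own sketch) confirming both the brute-force count and the inclusion-exclusion formula:

```python
from itertools import combinations
from math import comb

p, q, r, n = 8, 7, 6, 5
people = ['E'] * p + ['S'] * q + ['M'] * r   # professions of the 21 people

# brute force: enumerate all C(21, 5) groups and keep those with all 3 professions
brute = sum(1 for group in combinations(range(p + q + r), n)
            if {people[i] for i in group} == {'E', 'S', 'M'})

# inclusion-exclusion
formula = (comb(p + q + r, n)
           - comb(p + q, n) - comb(p + r, n) - comb(q + r, n)
           + comb(p, n) + comb(q, n) + comb(r, n))

assert brute == formula == 14140
```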
|combinatorics|combinations|
0
Complement of a set of measure zero is dense in $[a,b]$?
Assume we have a subset $N$ of measure zero in the Borel $\sigma$-field $\mathcal{B}([a,b])$. I think that $[a,b]\setminus N$ is a dense set. But how to prove that? My attempt: let $x \in [a,b]$ and $\varepsilon>0$ be arbitrary; then $\lbrace z \in [a,b]: |x-z|<\varepsilon\rbrace \cap ([a,b]\setminus N) \neq \emptyset$, so there are points of $[a,b]\setminus N$ in any neighbourhood. Hence it is dense, right? The reason why the intersection is non-empty: if it were empty, then $N\cup \lbrace z \in [a,b]: |x-z|\geq \varepsilon\rbrace = [a,b]$, so $\lambda(N\cup \lbrace z \in [a,b]: |x-z|\geq \varepsilon\rbrace )=b-a$, which is wrong for $\varepsilon>0$; hence we have a contradiction.
Your proof is correct, but the way you obtain a contradiction strikes me as somewhat awkward: I would just note that if $(x - \epsilon, x + \epsilon) \cap ([a, b] \setminus N) = \emptyset$ , then $(x - \epsilon, x + \epsilon) \cap [a, b] \subseteq N$ (by definition of complement), and $\lambda((x - \epsilon, x + \epsilon) \cap [a, b]) > 0$ , contradicting the fact that $\lambda(N) = 0$ .
|measure-theory|lebesgue-measure|borel-sets|
1
(Why) is the norm of a RKHS positive definite?
$ \newcommand{\real}{\mathbb{R}} $ A $\color{red}{\text{(strictly)}}$ positive definite kernel $k: \real^d\times \real^d \to \real$ satisfies for all $x_i \in \real^d$ , $a=(a_1,\dots, a_n)\in \real^n$ $$ \sum_{i,j=1}^n a_i a_j k(x_i, x_j) \ge 0 \quad \color{red}{(>0 \iff a\neq 0)} $$ The reproducing kernel Hilbert space (RKHS) $H=H(k)$ of the kernel $k$ is then defined as the closure of $$ S = \Bigl\{ u: \real^d\to\real \;:\; u = \sum_{i=1}^n a_ik(x_i, \cdot), a_i \in \real, x_i\in \real^d, n\ge 1\Bigr\} $$ where the closure is with respect to the norm $\|\cdot\|_H$ induced by the dot product defined by the bilinear extension of $$ \langle k(x_i, \cdot), k(x_j, \cdot)\rangle_H = k(x_i, x_j). $$ I have two questions: Is this norm $\|\cdot\|_H$ really a norm, i.e. does positive definiteness hold, in the sense that $\|u\|_H = 0$ implies $u=0$? If yes, in general or assuming the kernel is strictly positive definite, how can you show that? I would think that this should be wrong if the kernel is not strictly positive definite.
I would assume that $\|\cdot\|_H$ is defined as $\|x\|_H := \sqrt{\langle x, x \rangle_H}$ with $\langle \cdot, \cdot \rangle_H$ being an inner product. Since for an inner product we have $\langle x, x\rangle = 0$ if and only if $x = 0$ , we conclude that $$ \|x\|_H = 0 \Longleftrightarrow \langle x, x\rangle = 0 \Longleftrightarrow x = 0, $$ which proves that $\|\cdot\|_H$ is positive definite. The property is thus independent of the kernel. However, in the context of function spaces, it is crucial to note that "$=$" here means "almost everywhere".
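As a concrete illustration of the strictly positive definite case (my own sketch, using the Gaussian kernel as the standard example of a strictly positive definite kernel):

```python
import numpy as np

x = np.array([0.0, 0.3, 1.1, 2.5, 4.0])       # distinct points
K = np.exp(-(x[:, None] - x[None, :])**2)     # Gram matrix K_ij = k(x_i, x_j)

# strict positive definiteness of k  =>  K has strictly positive eigenvalues
assert np.linalg.eigvalsh(K).min() > 0

# so ||sum_i a_i k(x_i, .)||_H^2 = a^T K a > 0 for every nonzero a
rng = np.random.default_rng(0)
a = rng.standard_normal(x.size)
assert a @ K @ a > 0
```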
|reference-request|normed-spaces|covariance|reproducing-kernel-hilbert-spaces|
0
Strong convergence of average function
Let $f\in L^{2}(0,T;L^{2}(\Omega))$ where $T$ is a finite number and $\Omega\subset \mathbb{R}^{3}$ is a bounded domain. I'm considering the average function with respect to time, that is \begin{equation*} f_{\kappa}(t)=\frac{1}{\kappa}\int_{t-\kappa}^{t}f(\tau) d\tau, \end{equation*} after extending $f$ to zero outside $(0,T)$ . One can see $f_{\kappa}\to f$ a.e. in $(0,T)$ due to the Lebesgue differentiation theorem. Then I think that by the Hardy-Littlewood maximal inequality (I am not sure whether it is still valid for e.g. vector-valued functions, because I think I can view this function $f$ as a map from $(0,T)$ to $L^{2}(\Omega)$ ; can someone give me a reference?) \begin{align} \lVert f_{\kappa}\rVert_{L^{2}(0,T;L^{2}(\Omega))}\leq C\lVert f \rVert_{L^{2}(0,T;L^{2}(\Omega))} \end{align} I can first deduce that $f_{\kappa}\in L^{2}(0,T;L^{2}(\Omega))$ and moreover weak convergence in $L^{2}(0,T;L^{2}(\Omega))$ . But I am wondering whether I can say some strong convergence result about $f_{\kappa}$ .
Suppose $f$ is of the form $\sum_{k = 1}^n c_k \varphi_k(t)$ where $c_k \in L^2(\Omega)$ and $\varphi_k \in C^\infty_c(0, T)$ . Then $$ \begin{align*} \Vert f(t) - f_\kappa(t) \Vert_{L^2(\Omega)} &= \left\Vert \frac{1}{\kappa} \int_{t-\kappa}^t (f(t) - f(s)) \ \mathrm{d}s \right\Vert_{L^2(\Omega)}\\ & \le \sum_{k=1}^n \Vert c_k\Vert_{L^2(\Omega)} \frac{1}{\kappa} \int_{t - \kappa} ^ t |\varphi_k(t) - \varphi_k(s)|\ \mathrm{d}s, \end{align*} $$ which tends to $0$ as $\kappa \to 0$ , since $\varphi_k$ are smooth. By the dominated convergence theorem, $\int_0^T \Vert f(t) - f_\kappa(t) \Vert^2_{L^2(\Omega)} \mathrm{d} t \to 0$ . Since functions of the preceding form are dense in $L^2(0, T; L^2(\Omega))$ , we may select, given general $f \in L^2(0, T; L^2(\Omega))$ and $\epsilon > 0$ , a function $g$ of this form for which $\Vert f - g\Vert_{L^2(0, T; L^2(\Omega))} < \epsilon$ . Then (omitting $L^2(0, T; L^2(\Omega))$ from the norms), $$ \Vert f_\kappa - f \Vert \le \Vert f_\kappa - g_\kappa \Vert + \Vert g_\kappa - g \Vert + \Vert g - f \Vert \le (C+1)\epsilon + \Vert g_\kappa - g \Vert, $$ using the maximal-function bound for the first term; the middle term tends to $0$ as $\kappa \to 0$ , so $\limsup_{\kappa\to 0}\Vert f_\kappa - f\Vert \le (C+1)\epsilon$ . Since $\epsilon>0$ was arbitrary, $f_\kappa \to f$ strongly.
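The argument can be illustrated numerically on a discretized version of the trailing average (my own sketch; `trailing_avg` is a name I made up, and the scalar case $L^2(0,T)$ stands in for the vector-valued one):

```python
import numpy as np

T, N = 1.0, 200_000
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)
f = (t > 0.5).astype(float)          # a discontinuous sample f in L^2(0, T)

def trailing_avg(f, kappa):
    # f_kappa(t) = (1/kappa) * integral of f over [t - kappa, t], f extended by 0
    w = int(round(kappa / dt))
    fe = np.concatenate([np.zeros(w), f])
    c = np.concatenate([[0.0], np.cumsum(fe)])
    return (c[w + 1:] - c[1:N + 1]) / w

errs = [np.sqrt(dt * np.sum((trailing_avg(f, k) - f) ** 2))
        for k in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]   # the L^2 error decreases as kappa -> 0
assert errs[2] < 0.05
```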
|real-analysis|partial-differential-equations|sobolev-spaces|
0
numerically solving for the fixed points of a system of nonlinear ODEs
I was looking at an excellent lecture series on Robotics by Russ Tedrake, and he discusses linear quadratic regulator (LQR) control for systems of nonlinear differential equations. So as he suggests, robots are just big collections of coupled pendulums that we are trying to stabilize around their fixed points. Now, when I look at a small system of nonlinear ODEs, I can find the fixed points of the system by finding the values of the state that return the zero vector. I will make this more precise in a second. My question is: for even slightly larger systems, I might not be able to symbolically solve for the fixed points, so I was wondering what the current numerical methods are for finding all of the fixed points of a larger system of nonlinear ODEs. So let me be a bit more precise now. Say I am looking at a system like the undamped, unforced pendulum. This is a simple 2nd-order system: $$ mL^2\ddot{\theta} = -mgL\sin\theta $$ I can rewrite this as a first-order system in generalized coordinates.
The answer to this question is that there are indeed numerical methods for finding fixed points of nonlinear systems of equations that perform better than randomly peppering the domain with initial values and doing Newton solves from there. These methods are called continuation methods, and they come in different flavors. Deflated continuation: https://arxiv.org/abs/1603.00809 Pseudo-arclength continuation: https://www.sciencedirect.com/science/article/abs/pii/S0094576523006379 Homotopy continuation: https://pubsonline.informs.org/doi/abs/10.1287/moor.16.4.754 There are also programming packages implementing these algorithms, such as PHCpack in Python/C, or BifurcationKit.jl and HomotopyContinuation.jl in Julia.
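As a toy illustration of the deflation idea (my own sketch, not code from the cited papers): run Newton repeatedly from the same initial guess, dividing out the roots already found so the iteration is pushed toward new ones. Here the cubic $x^3 - x$ stands in for a right-hand side with several fixed points:

```python
def newton(g, x0, tol=1e-12, itmax=200):
    x = x0
    for _ in range(itmax):
        h = 1e-7 * (1 + abs(x))
        d = (g(x + h) - g(x - h)) / (2 * h)   # finite-difference derivative
        if d == 0:
            return None
        step = g(x) / d
        x -= step
        if abs(step) < tol:
            return x
    return None

def f(x):
    return x**3 - x   # toy system; roots ("fixed points") at -1, 0, 1

roots = []
for _ in range(3):
    def g(x, found=tuple(roots)):
        val = f(x)
        for r in found:
            val /= (x - r)    # deflate the roots we already know
        return val
    root = newton(g, 0.6)     # same initial guess every round
    assert root is not None
    roots.append(root)

assert sorted(round(r) for r in roots) == [-1, 0, 1]
assert max(abs(f(r)) for r in roots) < 1e-6
```

Plain Newton from $x_0 = 0.6$ would find the same root every time; deflation forces each round onto a root not yet seen.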
|ordinary-differential-equations|numerical-methods|control-theory|fixed-points|nonlinear-dynamics|
1
Are there examples of statements not provable in PA that do not require fast growing (not prf) functions?
Goodstein's theorem is an example of a statement that is not provable in PA. The Goodstein function, $\mathcal {G}:\mathbb {N} \to \mathbb {N}$ , defined such that $\mathcal {G}(n)$ is the length of the Goodstein sequence that starts with $n$ is a total function that grows faster than any primitive recursive function. Paris–Harrington theorem is another example of a statement not provable in PA. Wiki says "The smallest number $N$ that satisfies the strengthened finite Ramsey theorem is then a computable function of $n, m, k$ , but grows extremely fast. ... It dominates every computable function provably total in Peano arithmetic, which includes functions such as the Ackermann function." This question is perhaps a bit vague, but along the lines of the above examples, are there statements not provable in PA that only make use of functions that grow no faster than primitive recursive functions? (are fast growing functions in unprovable statements a coincidence or unrelated?)
In this paper, it's shown to be an "if and only if" situation. Theorem 3 states: "Every function appearing in the Fast Growing Hierarchy below level $\epsilon_0$ is provably computable in PA". This means that every function which grows "fast" (i.e., is not dominated by any $F_{\alpha}$ in the hierarchy below $\epsilon_0$) is not provably total in PA, and every function which is dominated by one of the fast growing functions is provably total in PA. So the example you are looking for does not exist.
|logic|computability|proof-theory|peano-axioms|provability|
0
Find the number of functions $f : \{ 1,2,...,n \} \rightarrow \{p_1,p_2,p_3 \} $ for which the number $f(1)f(2)...f(n)$ is a perfect square.
Let $p_1,p_2,p_3$ be distinct prime numbers and consider $n \in \mathbb N$ . Find the number of functions $f : \{ 1,2,...,n \} \rightarrow \{p_1,p_2,p_3 \} $ for which the number $f(1)f(2)...f(n)$ is a perfect square. My attempt: If $n$ is an odd number, it is obvious that there are no such functions, because it is essential that the exponents of the prime numbers in the product are even numbers, otherwise we would not have a perfect square. Next we consider $n = 2k$ , $k \in \mathbb N$ . Let's look at a set $\mathrm M_{2k}$ with $2k$ elements and see in how many ways we can fill it with $a$ and $b$ ( $a \ne b$ ) so that the numbers of appearances of $a$ and $b$ are even. We have $\binom{2k}{0}$ when all the elements are $a$ , $\binom{2k}{2}$ when $2$ elements are $b$ and the rest are $a$ , $\binom{2k}{4}$ when $4$ elements are $b$ and the rest are $a$ , ..., $\binom{2k}{2k}$ when we only use $b$ . So we can write the set in $\binom{2k}{0} + \binom{2k}{2} + \dots + \binom{2k}{2k} = 2^{2k-1}$ ways.
You have made good progress thus far. Note that the first term does not quite fit the pattern: \begin{eqnarray*} S= 1+\sum_{i=1}^{k} \binom{2k}{2i}2^{2i-1}. \end{eqnarray*} To complete the rearrangement, use the odd-even trick and the binomial theorem: \begin{eqnarray*} 4S-2 &=& 2\sum_{i=0}^{k} \binom{2k}{2i}2^{2i} \\ &=& \sum_{j=0}^{2k} \binom{2k}{j}2^{j}+ \sum_{j=0}^{2k} \binom{2k}{j}(-2)^{j}\\ &=& 3^{2k}+1. \end{eqnarray*} Hence the count is $\frac{3^{2k}+3}{4}$ . See also https://oeis.org/A054879
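A brute-force check of the closed form (my own sketch; the concrete primes are an arbitrary choice):

```python
from itertools import product
from math import prod, isqrt

def count_square_products(n):
    # enumerate all 3^n functions f: {1..n} -> {p1, p2, p3}
    primes = (2, 3, 5)
    total = 0
    for f in product(primes, repeat=n):
        m = prod(f)
        s = isqrt(m)
        total += (s * s == m)   # is f(1)...f(n) a perfect square?
    return total

for n in range(1, 9):
    expected = (3**n + 3) // 4 if n % 2 == 0 else 0
    assert count_square_products(n) == expected
```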
|combinatorics|prime-numbers|combinatorial-proofs|
1
Relation between characteristic polynomial and elliptic linear operators
I'm taking a course in functional analysis that is using Rudin's book. I want to ask if there is a relationship between characteristic polynomials and elliptic linear operators. I understand that we call $L$ an elliptic linear operator of $n$ variables of order $N$ if its characteristic polynomial $p(x,y)=\sum_{|\alpha|=N}f_\alpha(x)y^\alpha \neq 0$ for $x\in \Omega \subset \mathbb{R}^n$ and $y\in \mathbb{R}^n\setminus \{0\}$ . But is there an analogous formula as in linear algebra, where the characteristic polynomial of a transformation/operator/map $T$ (on a vector space $V$ ) is the determinant $\det(xI-T)$ ?
As it turns out (credit to a friend of mine who pointed this out), $p(x,y)$ is typically called the principal symbol of an elliptic operator, and it has nothing to do with the linear-algebra object we expect. The Encyclopedia of Mathematics has a page that says more about this.
|functional-analysis|partial-differential-equations|elliptic-operators|
0
Proving that $u_n$ is an arithmetic sequence
Let $u_n$ be a sequence defined on the natural numbers (the first term is $u_0$) whose terms are natural numbers ($u_n\in \mathbb{N}$). We define the following sequences: $$x_n=u_{u_n}$$ $$y_n=u_{u_n}+1$$ If we know that both $x_n$ and $y_n$ are arithmetic sequences, how can we prove that $u_n$ is also an arithmetic sequence?
I don't think that $u_n$ must be an arithmetic sequence. Suppose $u_n=a, a+\Delta u, a+2\Delta u, ... , a+(k-1)\Delta u, ...$ $$\Rightarrow x_n=u_{u_n}=a+(u_n-1)\Delta u$$ $$\Rightarrow y_n=u_{u_n}+1=a+(u_n-1)\Delta u+1$$ In an arithmetic sequence, consecutive terms must have a constant difference. Looking at $x_n$ : $$a+(u_n-1)\Delta u-[a+(u_{n-1}-1)\Delta u]=\Delta x$$ $$(u_n-u_{n-1})\Delta u=\Delta x$$ $$\Rightarrow u_n-u_{n-1}=\frac{\Delta x}{\Delta u}\neq\Delta u$$ unless $\Delta x=\Delta u^2$ , which isn't always the case. I may have misunderstood the problem, though.
|algebra-precalculus|
0
What is this field in $\mathbb{R}^4$ that contains both the real and complex numbers called?
Note: this question is wrong – this is not a field, though it is not obvious why it wouldn't be. So, I (first year undergraduate mathematics student) was looking around the internet and found an interesting enough piece of C++ . It contained the definition of something they called a 'recomplex' number so I'm assuming it's some kind of hypercomplex construction, which it appears to be. It also obeys the field axioms and both the reals and the complex numbers are its subsets and closed under its operations. With fair warning that it is the work of a well-known man of many failures and delusions of grandeur, I went ahead and tested out the field axioms he was claiming it fulfilled. In all simplicity for any numbers $x = (a,i,j,k), y = (b,l,m,n) \in \mathbb{R}^4$ , their addition is normal vector addition $x+y = (a+b, i+l, j+m, k+n)$ , but their multiplication is defined by $$xy = (a b - i l - j n - k m, a l + b i + j m - k n, a m + b j - i n - k l, a n + b k + i m + j l).$$ This can also
This is isomorphic to the tessarines (the reals extended with both a complex and a split-complex unity). Let us denote your $j$ by $J$ and your $k$ by $K$, and let us denote the split-complex unity by $j$. Then $J$ and $K$ can be expressed as $J={(-1)}^{1/4} j$ and $K={(-1)}^{3/4} j$. Now you can see that your entire multiplication table holds.
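To back up the note in the question that this is not a field: here is a quick numeric sketch. The candidate zero divisor below comes from the tessarine identification above, writing the split-complex unit as $(J-K)/\sqrt2$; the helper name `mul` is just for illustration.

```python
import math

def mul(x, y):
    # the 'recomplex' product exactly as given in the question
    a, i, j, k = x
    b, l, m, n = y
    return (a*b - i*l - j*n - k*m,
            a*l + b*i + j*m - k*n,
            a*m + b*j - i*n - k*l,
            a*n + b*k + i*m + j*l)

s = math.sqrt(2) / 2
u = (1, 0,  s, -s)   # 1 + j_split, where j_split = (J - K)/sqrt(2)
v = (1, 0, -s,  s)   # 1 - j_split
print(mul(u, v))     # all components 0: zero divisors, so no field
```

Since both factors are nonzero but their product is zero, the structure has zero divisors and cannot be a field.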
|abstract-algebra|complex-numbers|quaternions|number-systems|hypercomplex-numbers|
0
Image of a two-parameter function over a set
I have the following problem: Let $B$ be the set $B=\{(x_1 , x_2) \in \mathbb{R}^2\mid x_1^2 + x_2^2 \leq 1\text{ and }x_1 \geq 0\}$ . A function $f:B \rightarrow \mathbb{R}$ is given by $$ f(x_1 ,x_2 )=x_1^2 x_2^2 +3x_1 x_2 +x_2 -4.$$ State the image of $f$ . To find the image, I want to do an extrema investigation. First, I find all the stationary points. There is one at $\left(-\frac13, 0\right)$ , but based on the Hessian matrix, I conclude it is a saddle point. Next, I want to do a boundary investigation. I parametrize the circular part of the boundary of the set as $$s(u) = (\cos u,\sin u),\quad u \in\left[-\frac\pi2,\frac\pi2\right].$$ However, this is where my problem starts. I make a composite function $f \circ s$ - this describes the values of $f$ along the circular boundary. I then differentiate this composite function and get $$- 2 \sin^3u\,\cos u- 3 \sin^2u+ 2 \sin u\,\cos^3u+ 3 \cos^2u+ \cos u.$$ Now, I want to set it equal to $0$ and solve, since the stationary points on
Considering turning points, there is one at $(-1/3,0)$ (note that it has $x_1<0$, so it lies outside $B$). However we must also check the boundary of the set for further extrema. If you take $x_2 = \pm \sqrt{1-x_1^2}$ and then look for turning points along this boundary, using numerics, you should find the minimum to be $-6.0069$ , and the maximum to be $-1.51414$ . It would be nice to do it without numerics, but the derivative is tricky. Perhaps someone else can shed light on how to solve the derivative $= 0$ . If $x_2 = \pm \sqrt{1-x_1^2}$ then set $g(x) = f(x_1,x_2) = x^2(1-x^2)\pm (3x+1)\sqrt{1-x^2}-4$ , so that $$g'(x) = 2x-4x^3\pm3\sqrt{1-x^2}\mp\frac{(3x+1)x}{\sqrt{1-x^2}}.$$
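For reproducibility, the numeric boundary scan can be done with a dense grid over the circular arc (the straight edge $x_1=0$ only contributes $f(0,x_2)=x_2-4\in[-5,-3]$, which lies inside the range found). This is only a sketch of the numerics, not a closed-form solution:

```python
import numpy as np

# dense scan of f along the circular arc x1 = cos(u), x2 = sin(u)
u = np.linspace(-np.pi/2, np.pi/2, 1_000_001)
x1, x2 = np.cos(u), np.sin(u)
f = x1**2 * x2**2 + 3*x1*x2 + x2 - 4
print(f.min(), f.max())   # minimum near -6.0, maximum near -1.514
```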
|calculus|linear-algebra|multivariable-calculus|convex-optimization|
0
Show that $\int_0^{\frac\pi 2}\frac{\theta-\cos\theta\sin\theta}{2\sin^3\theta}d\theta=\frac{2C+1}4$
While trying to evaluate $\int_0^1 k^2K(k)dk$ related to the elliptic integral of the first kind , by the integral switching method, I reached the trigonometric integral $$\int_0^{\frac\pi 2}\frac{\theta-\cos\theta\sin\theta}{2\sin^3\theta}d\theta$$ which is evaluated to $\frac{2C+1}4$ by Wolfram Alpha. Here, $C$ is the Catalan constant, sometimes denoted by $G$ . This integral is complicated for me. Are there other methods to evaluate the integrals $\int_0^1k^nK(k)dk$ , $n\geq2$ ? Or can someone please evaluate the trigonometric integral above? Thank you.
With $\int_0^{\frac\pi 2}\frac{t}{\sin t}dt= 2G $ \begin{align} &\int_0^{\frac\pi 2}\frac{t-\cos t\sin t}{\sin^3t}dt\\ = &\int_0^{\frac\pi 2}\left(\frac{t}{2\sin t} +(t-\cos t\sin t)\frac{2-\sin^2t}{2\sin^3t}-\frac12\cos t\right) dt\\ = &\ G +\int_0^{\frac\pi 2} \frac{t-\cos t\sin t}2 \overset{ibp} d\left(-\frac{\cos t}{\sin^2t}\right)-\frac12\cos t dt\\ =& \ G +\frac12 \int_0^{\frac\pi 2}\cos t dt= G+\frac12 \end{align}
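The closed form can be sanity-checked numerically with mpmath (the integrand has a removable singularity at $0$, where it tends to $1/3$, so the quadrature is well behaved):

```python
from mpmath import mp, quad, catalan, sin, cos, pi

mp.dps = 30
# the trigonometric integral from the question
lhs = quad(lambda t: (t - cos(t)*sin(t)) / (2*sin(t)**3), [0, pi/2])
rhs = (2*catalan + 1) / 4
print(lhs - rhs)  # ~ 0
```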
|integration|definite-integrals|trigonometric-integrals|elliptic-integrals|catalans-constant|
0
Zariski topology on $\mathbb{Z}$
I'm trying to understand the Zariski topology on $\text{Spec}(\mathbb{Z})$ . I've just learned about this concept and I wanted to compute the topology in a concrete example to see how it looks, but I don't know how to start. So far I have that $\text{Spec}(\mathbb{Z})= \{(p); \text{p prime} \}\cup \{(0)\}$ since $\mathbb{Z}$ is a principal ideal domain. Next I would like to compute $\text{V}((n))$ for $n\in \mathbb{Z}$ , but I don't see how. Can someone help me?
If you want to compute $\text{V}((n))$ you can consider the prime factorization of $n$ : $n=p_1^{\alpha_1}\cdot ... \cdot p_r^{\alpha_r}$ . Then you have $$\text{V}((n))=\{(p) ; \text{p prime and }(n)\subset (p)\}=\{(p) ; \text{p prime and }p \mid n \} = \{(p_1),...,(p_r)\}$$ Does that help you visualize what the topology looks like?
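As a throwaway computational check, you can list $\text{V}((n))$ with sympy's `factorint`, representing each prime ideal $(p)$ by its prime $p$:

```python
from sympy import factorint

def V(n):
    # V((n)) = {(p) : p prime, p | n}, represented by the primes themselves
    return sorted(factorint(n).keys())

print(V(360))  # 360 = 2^3 * 3^2 * 5, so [2, 3, 5]
print(V(97))   # 97 is prime, so [97]
```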
|algebraic-geometry|ring-theory|zariski-topology|
1
Is there a simple geometric proof of why two simple harmonic oscillators draw a circle?
We all know that a circle can be drawn with the trigonometric functions $x=\cos(t), y=\sin(t)$ . If we define the sine and cosine functions in terms of triangles (like we do in high school), then this is quite obvious. But then later on in our education, we learn that the solution to a simple harmonic oscillator is the sine function. A weight on an undamped spring goes back and forth following a sine wave over time, and that this is the intuition behind a lot of wave motion (like sound waves). However, it's not generally taught why the sine wave solution to simple harmonic motion is the same function as the sine wave as defined by triangles or circles. Or in other words, when you take two harmonic oscillators and plot their outputs as $x=\cos(t), y=\sin(t)$ , why should they make a perfect circle? More specifically: it's intuitive enough that they must form a shape that makes a full loop of some kind. But why does it happen to be a perfect circle , as opposed to an alternative shape li
Here's a geometric argument to complement the existing (+1) answers: Suppose a point (the black dot at the end of the gray arrow) moves at unit speed counterclockwise around the unit circle. When the point is at $(x, y)$ , a vector of unit length, its velocity $(x', y')$ is A unit vector, Orthogonal to the position (because we're moving on a circle), and Rotated counterclockwise from the position (because the point moves counterclockwise). On the other hand, by elementary geometry in Cartesian coordinates, $(-y, x)$ is the unique vector satisfying all three conditions. We conclude \begin{align*} x' &= -y, \\ y' &= \phantom{-}x. \end{align*} Differentiating the first equation and using the second, we deduce $$ x'' = (x')' = (-y)' = -y' = -x; $$ similarly, $y'' = -y$ . This amounts to the identity $$ e^{it} = \cos t + i\sin t, $$ from which we formally deduce $$ \frac{d^{2}}{dt^{2}}(e^{it}) = (i^{2})e^{it} = -e^{it}. $$
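The argument can also be checked numerically: integrating $x'=-y$, $y'=x$ (here with a textbook RK4 stepper, purely as an illustrative sketch) keeps the point on the unit circle to high accuracy and returns it to $(1,0)$ after time $2\pi$:

```python
import numpy as np

def rk4_step(s, h):
    # one RK4 step for the system x' = -y, y' = x
    f = lambda s: np.array([-s[1], s[0]])
    k1 = f(s)
    k2 = f(s + h/2*k1)
    k3 = f(s + h/2*k2)
    k4 = f(s + h*k3)
    return s + h/6*(k1 + 2*k2 + 2*k3 + k4)

s = np.array([1.0, 0.0])
n = 1000
h = 2*np.pi / n
for _ in range(n):
    s = rk4_step(s, h)
print(s, np.hypot(s[0], s[1]))  # back near (1, 0), radius stays ~1
```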
|calculus|geometry|trigonometry|taylor-expansion|intuition|
0
What is the relationship between the Laplace equation and the Wave equation?
What is the relationship between the Laplace equation: $$ (\partial^2_x + \partial^2_y)\phi = 0 $$ and the wave equation: $$ (\partial^2_x - \partial^2_y)\phi = 0 $$ ? More generally, what is the relationship between the Laplace equation: $$ (\partial^2_{x_0} + \partial^2_{x_1} + \partial^2_{x_2} + \partial^2_{x_3})\phi = 0 $$ and the wave equation: $$ (\partial^2_{x_0} - \partial^2_{x_1} - \partial^2_{x_2} - \partial^2_{x_3})\phi = 0 $$ ?
A more philosophical consideration about measurement: physical measurements are always made against a standard reference, either counting events per standard reference time unit, or dividing a quantity by a standard reference quantity. Physical measurements thus always involve a division, and the measurement result is a number. The "space" of the measured event and the "space" of the reference quantity are therefore in a reciprocal relationship: "space" and "reciprocal space". I did not find any literature on this, but it seems obvious to me that the "energy", i.e. the quantity of relativistic movement (which is conserved in a closed physical system: see P. A. M. Dirac's relativistic energy-momentum relation), is described in a Euclidean space, where you can sum up and where no upper limit exists, whereas the measurement scale is described in a hyperbolic (i.e. reciprocal) space, where asymptotes exist for the maximum speed, because measurement implies a division. It seems also obvious
|partial-differential-equations|complex-numbers|mathematical-physics|quaternions|
0
Probability of each type of inscribed octahedron
Fix a $V\in\mathbb{N}$ with $V\ge 4$ . Randomly pick $V$ points on a sphere (independently and uniformly with respect to the surface area measure). You may think of the convex hull of these $V$ points. With probability 1, this constitutes a nondegenerate convex polyhedron. In fact, all faces would be triangles (probability 1)(*), so this should be a $(2V-4)$ -hedron (from Euler's $F-E+V=2$ when $3F=2E$ ). But what is the probability for each topological type? I think(**) the first "interesting" case is for $V=6$ random points on the sphere, so random octahedra, so let me ask about that. What is the probability that 6 randomly chosen points on a sphere span an octahedron which is topologically like a regular octahedron (so all 6 vertices have same vertex order). Both an exact answer and an approximate value from simulating the random process would be interesting. Notes: (*) I learn that a polyhedron all of whose faces are triangles can be called a simplicial polyhedron . (**) A bit more
I generated $N=10^6$ samples for this random process... I wrote a Python script to do it, and made little effort to optimize for speed, so that took a half-hour or so to run. The experimental result was that $620260$ samples had the topology of a biaugmented tetrahedron (in which the vertex degrees are $3,3,4,4,5,5$ ) and the remaining $379740$ samples had that of the regular octahedron (in which all vertex degrees are $4$ ). This gives the probabilities as $0.620$ and $0.380$ , with errors in the last digit. So plausibly (and this would be a lovely result) the exact odds ratio is $\varphi = (1 + \sqrt{5})/2 \approx 1.61803$ .
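This is not the original script, but a compact way to rerun the experiment (scipy's `ConvexHull`; for a simplicial polyhedron, a vertex's degree equals the number of triangular faces containing it, and a smaller $N$ keeps the run quick):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def sample_degrees():
    # 6 points uniform on the sphere: normalize standard Gaussians
    p = rng.normal(size=(6, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    # all faces are triangles with probability 1; a vertex's degree
    # equals the number of triangular faces it appears in
    deg = np.bincount(ConvexHull(p).simplices.ravel(), minlength=6)
    return tuple(sorted(deg))

N = 20_000
counts = {}
for _ in range(N):
    d = sample_degrees()
    counts[d] = counts.get(d, 0) + 1
print(counts)  # (3,3,4,4,5,5) ~ 62%, (4,4,4,4,4,4) ~ 38%
```

Only the two combinatorial types of simplicial polyhedra on 6 vertices should appear.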
|probability|polyhedra|solid-geometry|geometric-probability|
0
Imagining an exponential "hypercomplex" system
I recently learned about hypercomplex systems taken over the reals, i.e. the split-complex numbers, for which $j^2=1$ , $j≠\pm1$ , and the dual numbers, for which $ε^2=0$ , $ε≠0$ . These number systems, along with the complex numbers, have some nice properties; namely, they represent all the possibilities for 2-dimensional unital algebras over $\mathbb{R}$ , up to isomorphism. They also happen to be constructed from polynomial equations, which raises the question: Is there a way to construct a "hypercomplex" system by defining the "imaginary constant" as a solution to an exponential equation? For example: $a^k=b$ , where $a,b∈\mathbb{R}$ and $k$ is the "imaginary unit". The first thought I had was of somehow defining an algebra with the equation $e^{L}=0$ , though I might just be being naive about the whole prospect in general.
In the extended real numbers $\overline {\mathbb R}$ , the solution of $e^L=0$ is $L=-\infty$ . Extending the reals with a logarithm of zero can be done in other ways as well.
|abstract-algebra|hypercomplex-numbers|dual-numbers|
0
Pullback metric on sphere
I am learning differential geometry, and wanted to see a calculation for the round (induced) metric on the sphere $S^n$ . To do this, I wanted to consider the immersion $\iota:S^n \rightarrow \mathbb{R}^{n+1}$ , and consider the pullback formula for an immersion $\phi: M \rightarrow N$ and a Riemannian metric $g$ on $N$ given at each $p \in M$ by $$\phi^{*}g(v, w) = \langle \mathrm{d}\phi_p(v), \mathrm{d}\phi_p(w) \rangle$$ which in this case becomes $$\iota^{*}g(v, w) = \langle \mathrm{d}\iota_p(v), \mathrm{d}\iota_p(w) \rangle$$ and where the metric tensor is given on $\mathbb{R}^{n+1}$ by $g_{ij} = \delta_{ij}$ . I would appreciate seeing this calculation in full, since I don't know how to compute with pullbacks, and cannot find a reference. Thank you!
Note that to do this calculation, you need a parameterization of $S^n$ , also known as a coordinate chart (or more than one if you prefer). I suggest doing this calculation first for $S^2$ by using the parameterization given by spherical coordinates. Another option is to parameterize the upper hemisphere as the graph of a function. In general, "pullback" means the following: Suppose you have a map $f: M \rightarrow N$ , coordinates $(x^1, \dots, x^m)$ on $M$ , and coordinates $(y^1, \dots, y^n)$ on $N$ . You can use $f$ to treat each coordinate $y^k$ as a function of the coordinates $x^1, \dots, x^m$ . The pullback of a differential form on $N$ , written in terms of $y^1, \dots, y^n$ and $dy^1, \dots, dy^n$ , is simply the same differential form but treating each $dy^k$ as the exterior derivative of the function $y^k$ in terms of $dx^1, \dots, dx^m$ .
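For the $S^2$ case in spherical coordinates, the whole computation fits in a few lines of sympy: in coordinates, the pullback metric is $J^\top J$, where $J$ is the Jacobian of the parameterization (this is just the formula $\iota^*g(v,w)=\langle d\iota(v),d\iota(w)\rangle$ written out):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
# the inclusion iota : S^2 -> R^3 in spherical coordinates
X = sp.Matrix([sp.sin(theta)*sp.cos(phi),
               sp.sin(theta)*sp.sin(phi),
               sp.cos(theta)])
J = X.jacobian([theta, phi])
g = sp.simplify(J.T * J)        # pullback of the Euclidean metric
print(g)                        # Matrix([[1, 0], [0, sin(theta)**2]])
```

This recovers the round metric $d\theta^2+\sin^2\theta\,d\phi^2$.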
|differential-geometry|manifolds|riemannian-geometry|pullback|
0
There exists $a$ s.t the 2 equations both have integer solutions.
For each prime $p>5$ , prove that there always exists an integer $a$ such that the two following equations $$x^2+py+a=0\quad\text{and}\quad x^2+py+a+2=0$$ both have integer solutions. Here are my thoughts on this problem: $x^2+py+a=0\iff x^2+a\equiv 0\pmod{p}$ and $m^2+pn+a+2=0\iff m^2+a+2\equiv 0\pmod{p}.$ So if there exists an integer $a$ such that $\left(\dfrac{-a}{p}\right)=\left(\dfrac{-a-2}{p}\right)=1$ then we are done. But I don't know whether such an $a$ exists. Could someone help me, or is there another way to deal with this problem? Thanks in advance!
You have made a good start on a valid approach. To prove an integer $a$ always exists, consider several special cases. For $a = -1$ , since $\left(\frac{-a}{p}\right) = \left(\frac{1}{p}\right) = 1$ for all primes, check when $\left(\frac{-a-2}{p}\right) = \left(\frac{-1}{p}\right) = 1$ . By $q = \pm 1$ and the first supplement , we have that $p \equiv 1 \pmod{4}$ works. Thus, we have only $p \equiv 3\pmod{4}$ left, which can be handled with $p \equiv 3\pmod{8}$ and $p \equiv 7\pmod{8}$ separately. With $a = 0$ satisfying the first Legendre symbol , we need $\left(\frac{-a-2}{p}\right) = \left(\frac{-2}{p}\right) = 1$ . By $q = \pm 2$ and the second supplement (also in Show that there exists $a \in \mathbb{Z}$ s.t $a^2 \equiv -2 \pmod p$. ), we have $p \equiv 1, 3\pmod{8}$ , so this covers the $p \equiv 3\pmod{8}$ case. With $a = -4$ satisfying the first value, then for $\left(\frac{-a-2}{p}\right) = \left(\frac{2}{p}\right) = 1$ , by $q = \pm 2$ and the second supplement again, we hav
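The case analysis can be sanity-checked by brute force with sympy's `legendre_symbol` (this only tests the claim for small primes, it is not a proof):

```python
from sympy import legendre_symbol, primerange

def witness(p):
    # search for a with (-a/p) = ((-a-2)/p) = 1
    for a in range(p):
        if legendre_symbol(-a % p, p) == 1 and legendre_symbol((-a - 2) % p, p) == 1:
            return a
    return None

assert all(witness(p) is not None for p in primerange(7, 1000))
print("witness found for every prime 7 <= p < 1000")
```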
|number-theory|quadratic-residues|
0
Covariance between functions of same $Z_k$ = 0
Let's assume we are faced with showing that the following covariance between two functions $g(\cdot)$ and $f(\cdot)$ of the same set of $Z_1,...,Z_K$ i.i.d. standard normal random variables is equal to $0$ (we want to show that they are independent), such that: $$Cov\left[f(Z_k),g(Z_k)\right]= Cov\left[\sqrt{\hat{\rho}_{s}}\hat{Y},\sum_{k=1}^K\left(\sqrt{\rho_{s}}\alpha_{s,k} - \sqrt{\hat{\rho}_{s}}\beta_{k}\right)Z_k\right] = 0 $$ where $S=K$ and $$\hat{Y} = \sum_{k=1}^K\beta_{k} Z_k \quad\;,\quad\; \sum_{k=1}^K\beta^2_{k} = 1 \quad\;,\quad\; \sum_{k=1}^K\alpha^2_{s,k}=1$$ Additionally, we know the following relationship between the parameters: $$\sqrt{\hat{\rho}_{s}} = \sqrt{\rho_{s}}\sqrt{\gamma_{s}} = \sqrt{\rho_{s}}\sum_{k=1}^{K}\alpha_{s,k}\beta_k$$ consider that, overall, $\rho_{s},\gamma_{s},\hat{\rho}_{s},\alpha_{s,k},\beta_k$ are just model parameters. I know that the above boils down to needing to show that (as all other covariances are 0): $$Cov\left[f(Z_k),g(Z_k)\right] =
Recall $\textrm{Cov}[X,Y]=E[XY]-E[X]E[Y]$ . Note that $E[f]=E[g]=0$ . If I understood correctly all the notation, we then have $$\begin{aligned}E[fg]&=\sum_{k\leq K}\sum_{\ell\leq K}(\sqrt{\rho_s}\alpha_{s,k}-\sqrt{\hat{\rho}_s}\beta_k)\sqrt{\hat{\rho}_s}\beta_\ell E[Z_kZ_\ell]\\ &=\sum_{k\leq K}(\sqrt{\rho_s}\sqrt{\hat{\rho}_s}\alpha_{s,k}\beta_k-{\hat{\rho}_s}\beta_k^2)\\ &=\sqrt{\hat{\rho}_s}\bigg(\sqrt{\rho_s}\sum_{k\leq K}\alpha_{s,k}\beta_k\bigg)-\hat{\rho}_s\bigg(\sum_{k\leq K}\beta_k^2\bigg)\\ &=\hat{\rho}_s-\hat{\rho}_s\\\ &=0 \end{aligned}$$
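The cancellation can be checked numerically for random parameter choices (names mirror the question; `sab` $=\sum_k\alpha_{s,k}\beta_k$, so $\sqrt{\hat\rho_s}=\sqrt{\rho_s}\,\mathrm{sab}$):

```python
import numpy as np

rng = np.random.default_rng(42)
K = 8
beta = rng.normal(size=K);  beta  /= np.linalg.norm(beta)    # sum beta_k^2 = 1
alpha = rng.normal(size=K); alpha /= np.linalg.norm(alpha)   # sum alpha_k^2 = 1
rho = 0.7
sab = alpha @ beta
sqrt_rho_hat = np.sqrt(rho) * sab      # sqrt(rho_hat) = sqrt(rho) * sum(alpha*beta)

# E[fg] = sum_k (sqrt(rho) alpha_k - sqrt(rho_hat) beta_k) * sqrt(rho_hat) beta_k
cov = np.sum((np.sqrt(rho)*alpha - sqrt_rho_hat*beta) * sqrt_rho_hat * beta)
print(cov)  # ~ 0 up to floating-point rounding
```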
|linear-algebra|probability|summation|covariance|
1
A semisimple matrix that is not diagonalizable over algebraic closure?
In this question, it has been proved that an operator is semisimple iff it's diagonalizable over the algebraic closure of the base field. However, when the minimal polynomial is not separable, it seems that something goes wrong. Consider the operator on $\mathbb F_2(t)$ given by $$ T=\begin{bmatrix} &&1&\\ &&&1\\ t&&&\\ &t&& \end{bmatrix}. $$ Clearly it has minimal polynomial $m(x)=x^2+t$ , which is irreducible but not separable over $\mathbb F_2(t)$ . Then by definition, $T$ is semisimple since $m(x)$ is a product of irreducible factors. Yet $T$ only has two linearly independent eigenvectors $(1,0,\sqrt t,0)$ and $(0,1,0,\sqrt t)$ over $\mathbb F_2(\sqrt t)$ . What's wrong here? Thanks in advance!
Reordering the basis, you just have the diagonal sum of two copies of $\begin{bmatrix}0&1\\t&0\end{bmatrix}$ . This has characteristic polynomial $x^2+t$ , as you noted, which factorises over the algebraic closure as $(x+\sqrt t)^2$ . This has a repeated root, so the matrix is not necessarily diagonalisable. The result you (mis-)quoted says semisimplicity is the same as the minimal polynomial splitting into distinct linear factors over the algebraic closure. In our example the minimal polynomial is the characteristic polynomial, so the matrix is not diagonalisable. It must therefore yield the Jordan block $\begin{bmatrix}\sqrt t&1\\0&\sqrt t\end{bmatrix}$ . Taking the basis $u=\begin{bmatrix}1\\\sqrt t\end{bmatrix}$ and $v=\begin{bmatrix}1\\1+\sqrt t\end{bmatrix}$ shows that this is indeed the case.
|linear-algebra|abstract-algebra|linear-transformations|
1
Krull dimension of a ring quotienting the intersection of two ideals
I am considering the following problem: Let $ A $ be a commutative Noetherian local ring, and let $ I $ , $ I' $ be two ideals of $ A $ satisfying $ \dim(A/I) = \dim(A/I') $ . Is it true that $ \dim(A/I)=\dim(A/I\cap I') $ always holds? (Here $ \dim $ means Krull dimension.) Any ideas would be appreciated.
Yes, with no Noetherian or local hypothesis. This is because if a prime ideal contains $I \cap J$ it contains $IJ$ , and hence contains either $I$ or $J$ (so the minimal prime of any chain over $I \cap J$ must contain either $I$ or $J$ ). To see this, suppose $P$ is prime and $IJ \subseteq P$ but neither $I$ nor $J$ is contained in $P$ . Then there exist $x\in I$ , $y\in J$ such that $x\notin P$ and $y\notin P$ . But as $xy\in IJ$ we have $xy \in P$ , contradicting the primality of $P$ .
|commutative-algebra|krull-dimension|
0
Let $f\in L^1(R)$. Suppose $\int_{a}^{b} f \,dx\ = 0$ whenever $a<b$. Prove that $f=0 $ a.e.
Let $f\in L^1(\mathbb{R})$ . Suppose $\int_{a}^{b} f \,dx = 0$ whenever $a<b$ . Prove that $f=0$ a.e. My approach: I thought of showing $m(\{x\in \mathbb{R} : f\ne0\})=0$ . Here there are two cases: $$m(\{x\in \mathbb{R} : f>0\})=0$$ $$m(\{x\in \mathbb{R} : f<0\})=0$$ where $m$ is the Lebesgue measure on $\mathbb{R}$ . Is my approach correct, or are there any alternative ways?
First, we look at the Lebesgue Differentiation Theorem: Let $f_t(x)=\frac{1}{\mu(B(x,t))}\int_{B(x,t)}f(y)\mu(dy)$ , then $\lim\limits_{t\rightarrow 0}f_t(x)=f(x)$ $\mu$ -a.e. So, writing this for our function then simplifying the notation as we're in Lebesgue measure gives: $f(x)=\lim\limits_{t\rightarrow 0}\frac{1}{\mu(B(x,t))}\int_{B(x,t)}f(y)\mu(dy)=\lim\limits_{t\rightarrow 0}\frac{1}{2t}\int_{x-t}^{x+t}f(y)dy$ $\mu$ -a.e., but by supposition we then have that each $\int_{x-t}^{x+t} f(y) dy=0$ , thus the limit is $0$ , and we have $f(x)=0$ $\mu$ -a.e.
|real-analysis|lebesgue-integral|lebesgue-measure|
0
Why is this proof procedure valid?
I am reading "A New Introduction to Modal Logic", pages 176-177, where it is said that in modal logic an irreflexive frame preserves a certain rule, namely Gabb. The proof's content is as follows: I am having trouble understanding the argument from the point after the statement that the consequent of the Gabb rule has value 0 at $w_1$. Why does it follow that if there is a world where the consequent of the Gabb rule is not valid, then the wff index and world index coincide? And why is the last sentence deduced from the second-to-last sentence?
I shall trace Hughes and Cresswell's argument with some explicative notes. The argument is remarkably instructive in a broader perspective. First, preliminary notes: A modal logical system $\mathbf{S}$ is said to be characterised by a class $\mathbf{C}$ of frames if $\mathbf{S}$ is sound and complete with respect to every frame $\mathcal{F}\in\mathbf{C}$ . A formula $\theta$ characterises a class $\mathbf{C}$ of frames $\iff\mathcal{F}\Vdash\theta$ , for all $\mathcal{F}\in\mathbf{C}$ . For example, the formula $\Box p\rightarrow p$ (i.e., axiom $\mathrm{T}$ ) characterises the class of reflexive frames. In other words, the system $\mathbf{T}$ is characterised by the class of reflexive frames. Updating the notation a bit, we have what they call Gabb rule (with axiom $\mathrm{T}$ as a component): $$\vdash\alpha_{1}\rightarrow\Box(\alpha_{2}\rightarrow\Box(\alpha_{3}\rightarrow\ldots\Box(\alpha_{n}\rightarrow(\underbrace{\Box p\rightarrow p}_{axiom\, \mathrm{T}}))\ldots)$$ $$\implies\vda
|logic|propositional-calculus|modal-logic|
0
Evaluating limit involving integrals
I recently came across a challenging limit problem involving integrals of power functions during an examination, and I’m having trouble figuring out how to solve it. I would greatly appreciate any help or insights you can provide. The problem is to evaluate the following limits: $$ \lim_{n\rightarrow +\infty} \frac{\int_0^{1/2}x^{nx}dx}{\int_{1/2}^1x^{nx}dx},\quad \lim_{n\rightarrow +\infty} \frac{\int_0^{1}\frac{x^{nx}}{1+x^2}dx}{\int_{0}^1x^{nx}dx}. $$ I am aware that we might need to consider the function $f(x)=x^x=e^{x\ln x}$ , which attains its maximum value of 1 at 0 and 1, and its minimum value at $e^{-1}$ . However, I’m unsure how to utilize these facts to tackle the problem. Any hints, suggestions or explanations on how to approach these types of limit problems would be greatly appreciated. Thank you in advance for your assistance.
@Matija provided solutions to both limits; however, it would be interesting to find their several first asymptotic terms at $n\to\infty$ . Let's take $a, b$ such that $0<a<b\leq 1$ . For $n\to\infty$ $$\int_a^bx^{nx}dx\overset{x=e^{-t}}{=}\int_{\ln\frac1b}^{\ln\frac1a}e^{-nte^{-t}-t}dt=\frac1n\int_{n\ln\frac1b}^{n\ln\frac1a}e^{-xe^{-\frac xn}-\frac xn}dx\sim\frac1n\left(e^{-n\ln\frac1b}-e^{-n\ln\frac1a}\right)\tag{1}$$ If we put $b=1$ $$\int_a^1x^{nx}dx\sim\frac1n$$ More explicitly in the case of the first limit $$\int_{1/2}^1x^{nx}dx=\frac1n\int_0^{n\ln2}e^{-xe^{-\frac xn}-\frac xn}dx=\frac1n\int_0^{n\ln2}e^{-x(1-\frac xn+...)-\frac xn}dx$$ $$\sim\frac1n\int_0^\infty e^{-x}\left(1+\frac{x^2}n-\frac xn+...\right)dx=\frac1n+\frac1{n^2}+O\left(\frac1{n^3}\right)\tag{2}$$ We are not allowed to take $a=1$ in (1): the function $f(x)=x^{nx}$ is not analytical at $x=0$ and has the infinite derivative in this point. Let's consider $$\int_0^{1/2}x^{nx}dx=\int_2^\infty e^{-\frac{n\ln t}t}\frac{dt}{t^2}=\int
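A quick numeric look at the first ratio is consistent with a slow, roughly $1/\ln n$ decay (mpmath quadrature; the intervals are split near $0$ and near $1$ because the integrand has a boundary layer at each end; the values shown are only indicative):

```python
from mpmath import mp, quad

mp.dps = 25

def ratio(n):
    f = lambda x: x**(n*x)
    num = quad(f, [0, 0.01, 0.5])       # boundary layer near x = 0
    den = quad(f, [0.5, 0.9, 0.99, 1])  # sharp rise near x = 1
    return num / den

for n in (10, 100, 1000):
    print(n, ratio(n))   # decays slowly as n grows
```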
|real-analysis|calculus|integration|limits|
0
Are there any new identities when we go from subtraction to subtraction with a nonzero constant?
This is the subtraction counterpart to my previous universal algebra question on addition with a nonzero constant, here: No simplifying identities for any single nonzero number under addition. . I know that the structure $(\mathbb{R};-)$ of the binary operation of subtraction on the reals has a finite basis $E$ of identities. I also know that when we expand that structure by adding $0$ , we need new identities. My question is: suppose $r$ is a nonzero real number. Are the equational identities of the structure $(\mathbb{R};-,r)$ generated by $E$ alone?
As often, think about automorphisms! For any $r,s\in\mathbb{R}_{\not=0}$ , there is an automorphism of $(\mathbb{R};-)$ sending $r$ to $s$ (this is a good exercise) and so the structures $(\mathbb{R};-,r)$ and $(\mathbb{R};-,s)$ are isomorphic. In particular, if $$\forall\overline{x}(t(\overline{x},c)=s(\overline{x},c))$$ is an equation true in $(\mathbb{R};-,1)$ (say) where $c$ is a fresh constant symbol and $t,s$ are $\{-\}$ -terms, it is in fact true for every nonzero value of $c$ . But since the functions corresponding to $t$ and $s$ are continuous, this means that we will also have $\forall\overline{x}(t(\overline{x},0)=s(\overline{x},0)$ , which is to say that in fact $$\forall y\forall\overline{x}(t(\overline{x},y)=s(\overline{x}, y))$$ is true in $(\mathbb{R};-)$ . So the equational theory of $(\mathbb{R};-,a)$ for any nonzero $a$ is the equational-logic deductive closure of the equational theory of $(\mathbb{R};-)$ . Of course this breaks down once there are two or more named
|universal-algebra|
1
There exists $a$ s.t the 2 equations both have integer solutions.
For each prime $p>5$ , prove that there always exists an integer $a$ such that the two following equations $$x^2+py+a=0\quad\text{and}\quad x^2+py+a+2=0$$ both have integer solutions. Here are my thoughts on this problem: $x^2+py+a=0\iff x^2+a\equiv 0\pmod{p}$ and $m^2+pn+a+2=0\iff m^2+a+2\equiv 0\pmod{p}.$ So if there exists an integer $a$ such that $\left(\dfrac{-a}{p}\right)=\left(\dfrac{-a-2}{p}\right)=1$ then we are done. But I don't know whether such an $a$ exists. Could someone help me, or is there another way to deal with this problem? Thanks in advance!
There is a combinatorial way of doing this, for those who know basic graph theory, that uses far less number theory as well. Indeed, we need just the following: Lemma 1: Let $C$ be a cycle and let $A$ be a subset of the vertices of $C$ s.t. $A$ is more than half the vertices of $C$ . Then at least two vertices in $A$ are adjacent in $C$ . To see Lemma 1, start by drawing, say, a cycle $C$ on $5$ vertices and observing that, if $A$ is any subset of $3$ vertices, then at least two vertices in $A$ have to be adjacent to each other in $C$ . Then try for $C$ with $7$ vertices and $A$ with $4$ vertices. $\surd$ Let $C$ be the graph on the elements of $\mathbb{Z}/p\mathbb{Z}$ where two integers $i$ and $j$ are adjacent if $i-j$ is in $\{-2,2\}$ , or equivalently, either $j=i+2$ or $i=j+2$ . Then note that $C$ is a cycle on the $p$ vertices of $\mathbb{Z}/p\mathbb{Z}$ , because $\gcd(p,2)$ is $1$ . Next, let $A$ be the set of elements $a$ in $\mathbb{Z}/p\mathbb{Z}$ such that $-a$ is a square residue. The
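The conclusion drawn from Lemma 1 can be brute-force checked for small primes (here $A$ is represented as the set of residues $-x^2 \bmod p$, squares including $0$; this is only a finite check, not a proof):

```python
from sympy import primerange

def adjacent_pair_exists(p):
    # A = {a in Z/pZ : -a is a square mod p}
    A = {(-x * x) % p for x in range(p)}
    # edges of the cycle C join a and a+2 (mod p)
    return any((a + 2) % p in A for a in A)

assert all(adjacent_pair_exists(p) for p in primerange(7, 2000))
print("adjacent pair found in A for every prime 7 <= p < 2000")
```

Since $|A|=(p+1)/2$ is more than half of $p$, Lemma 1 predicts the check can never fail.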
|number-theory|quadratic-residues|
0
Sufficient condition for holomorphic function being constant
Let $f: \mathbb{C} \to \mathbb{C} $ be an holomorphic function. Furthermore, suppose that: \begin{align*} \exists \ (a,b) \in \mathbb{R}^2 \setminus \{(0,0)\}:\quad a\operatorname{Re}(f) + b\operatorname{Im}(f) \text{ is constant}. \end{align*} Show that $f$ is constant. By the hypothesis above, we can arrive at the following linear system: \begin{equation} \left\{ \begin{aligned} a\frac{\partial{u}}{\partial{x}}+b\frac{\partial{v}}{\partial{x}}=0 \\[2pt] a\frac{\partial{u}}{\partial{y}}+b\frac{\partial{v}}{\partial{y}}=0 \end{aligned} \right. \end{equation} As $(a,b)\neq (0,0)$ , the homogeneous system has a nontrivial solution, so its determinant must vanish: \begin{align*} \frac{\partial{u}}{\partial{x}}\frac{\partial{v}}{\partial{y}}-\frac{\partial{v}}{\partial{x}}\frac{\partial{u}}{\partial{y}} &= 0 \end{align*} and, by the Cauchy-Riemann equations $\frac{\partial{v}}{\partial{y}}=\frac{\partial{u}}{\partial{x}}$ and $\frac{\partial{v}}{\partial{x}}=-\frac{\partial{u}}{\partial{y}}$ , \begin{align*} \Bigl(\frac{\partial{u}}{\partial{x}}\Bigr)^2 + \Bigl(\frac{\partial{u}}{\partial{y}}\Bigr)^2 &= 0 \end{align*} So, $\frac{\partial{u}}{\partial{x}}$ = $\frac{\partial{u}}{\partial{y}}=0$ and similarly, $\frac{\partial{v}}{\partial{x}}$ = $\frac{\partial
Your solution should be correct. Another solution is to observe that if $\text{Re}(f)$ is constant then $f$ is constant. With this, note that $a\text{Re}(f)+b\text{Im}(f) = \text{Re}(\bar w f)$ with $w := a + bi\neq 0$ . So, if $a\text{Re}(f)+b\text{Im}(f)$ is constant then $\text{Re}(\bar w f)$ is constant, so $\bar w f$ is constant, and therefore $f$ is constant (since $w \neq 0$ ).
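The key identity $a\operatorname{Re}(f)+b\operatorname{Im}(f)=\operatorname{Re}(\bar wf)$ is a one-liner to verify symbolically:

```python
import sympy as sp

a, b, u, v = sp.symbols('a b u v', real=True)
w = a + sp.I*b          # w = a + bi
f = u + sp.I*v          # f = u + iv
# Re(conj(w) * f) should equal a*u + b*v
diff = sp.simplify(sp.re(sp.conjugate(w)*f) - (a*u + b*v))
print(diff)  # 0
```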
|complex-analysis|
1
Generalizing the property of two parabolas tangent to a circle, each of which touches it at two points
About five years ago, I discovered a property of a parabola, a circle that touches it at two points, and two tangents to it that are parallel to the axis of the parabola. In the previous picture $AC⊥BD$ . After some trying, I was able to prove this with analytic geometry. But my happiness was not complete until I was able to generalize this property: it is not required that the two parallel lines be parallel to the axis of the parabola. I was also able to prove this with analytic geometry. But after a while I noticed that I could consider two parallel lines to be a special case of a parabola when the distance between the focus and the directrix approaches zero and they are located at infinity. The property is actually valid for two parabolas. Then, after a while, I found that this property was already known and was present in Arseniy Akuban’s book. This was frustrating, as I had used the previous properties in many applications and theorems. But after a while I was able to generalize thi
Still not an answer, but I would like to share a generic Sage program (Sage is a free, Python-like computer algebra system; see below the program for the way to run it). Writing this program has in fact been beneficial because of the need to compute analytical results that could be useful for reaching the assigned goal. Fig. 1. The principle is based on the following remark: parabolas externally tangent to the unit circle with a vertical axis constitute a family $(F)$ with common cartesian equation: $$y=kx^2+s(k) \ \ \ \text{with this shift :} \ \ \ s(k):=-k-\frac{1}{4k}$$ depending on a single parameter $k$ . We can assume WLOG that the circle is the unit circle. Let us choose two parabolas $P_1$ and $P'_2$ of family $(F)$ with resp. parameters $k_1$ and $k_2$ . $P_1$ , featured in red on the figure, will be kept as it is. The second parabola $P_2$ will be the image under a rotation with angle $a$ of $P'_2$ (transformation of the dotted-blue parabola into the solid-blue parabola). A result a
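The shift formula $s(k)=-k-\frac1{4k}$ can itself be verified with sympy (not the Sage program above, just a check of the tangency condition): substituting $x^2=(y-s)/k$ into $x^2+y^2=1$ gives a quadratic in $y$, whose discriminant must vanish for tangency.

```python
import sympy as sp

y = sp.symbols('y', real=True)
k = sp.symbols('k', positive=True)
s = -k - 1/(4*k)
# on the parabola, x^2 = (y - s)/k; substitute into x^2 + y^2 = 1:
q = sp.together((y - s)/k + y**2 - 1)
disc = sp.simplify(sp.discriminant(sp.numer(q), y))
print(disc)  # 0: a double root in y, i.e. the parabola is tangent to the circle
```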
|geometry|euclidean-geometry|conic-sections|
0
Proving $\lim_{(x,y) \to (0,0)} \frac{\cos(x) - 1 + \frac{x^2}{2}}{x^2 + y^2}$ does not exist
In polar coordinates, $\ x = r \cos(\theta) , y = r \sin(\theta) $ , $$f(r, \theta) = \frac{\cos(r \cos(\theta)) - 1 + \frac{(r \cos(\theta))^2}{2}}{(r \cos(\theta))^2 + (r \sin(\theta))^2}$$ $$\lim_{r \to 0+} f(r, \theta) = \lim_{r \to 0+} \frac{\cos(r \cos(\theta)) - 1 + \frac{(r \cos(\theta))^2}{2}}{r^2}=\lim_{r \to 0+} \left( \frac{\cos(r \cos(\theta)) - 1}{r^2} + \frac{\cos^2(\theta)}{2} \right).$$ $$\lim_{r \to 0+} \frac{\cos(r \cos(\theta)) - 1}{r^2} =\lim_{r \to 0+} -\frac{\cos^2(\theta)}{2}.$$ (L'Hôpital's rule) therefore, $$\lim_{r \to 0+} f(r, \theta) =\lim_{r \to 0+} (-\frac{\cos^2(\theta)}{2}+\frac{\cos^2(\theta)}{2})=0.$$ This is my solution... The answer to this question is supposed to be DNE. I don't know how to solve this problem exactly. Please give me some feedback.
Use the fact that for all $x\in\mathbb R$ we have $$1-\frac{x^2}{2}\le \cos(x)\le1-\frac{x^2}{2}+\frac{x^4}{24},$$ so $$0\leq\cos(x)-1+\frac{x^2}{2}\leq\frac{x^4}{24},\quad\forall x\in\mathbb R.$$ Hence $$0\leq\frac{\cos x-1+x^2/2}{x^2+y^2}\leq\frac{x^4}{24(x^2+y^2)}\leq\frac{x^2}{24}\to0,\quad (x,y)\to(0,0).$$
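The squeeze can also be seen numerically: on circles of shrinking radius $r$, the largest value of the quotient decays like $r^2/24$ (a quick check; for much smaller $r$, floating-point cancellation in $\cos x-1+x^2/2$ would dominate):

```python
import numpy as np

rng = np.random.default_rng(0)
maxima = []
for r in (1e-1, 1e-2, 1e-3):
    th = rng.uniform(0, 2*np.pi, 10_000)
    x, y = r*np.cos(th), r*np.sin(th)
    q = (np.cos(x) - 1 + x**2/2) / (x**2 + y**2)
    maxima.append(np.abs(q).max())
    print(r, maxima[-1])   # each maximum is bounded by r^2/24
```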
|limits|
0
Solving equations involving inverse trigonometric functions
My math teacher told me that, while solving trigonometric equations involving inverse trigonometric functions, if you cannot get enough information about the inputs, whether you must solve for specific ranges of input or nothing is given at all, then you should directly use the formula below without worrying about anything, and just check for extraneous solutions: $$\tan^{-1}(x)+\tan^{-1}(y)=\tan^{-1}\left(\frac{x+y}{1-xy}\right)$$ even though this formula is only valid for $xy$ less than 1. And guess what! I saw the textbook doing the same in many illustrations. In the picture above I have shared two of them: in illustration 7.62 look at the 3rd line, and in illustration 7.63 also look at the 3rd line. In both places they have directly used the above formula without mentioning anything, and my teacher didn't go into much detail either. What I want to ask is: is it even correct to do so, or am I missing something? Won't it cause us to miss some solutions? Or is it some standard convention? Because the bo
I too thought the same while solving these questions from Cengage. What I made out is: just try using the other variations of these formulae; at some step you'll see that the RHS goes out of the range of the LHS, or that using it would cancel $\pm\pi$ from the whole equation entirely. Example 1: use the $\pm\pi$ formula in the 2nd step of 7.62; the RHS goes out of the range of the inverse sine. Even in the 2nd step of 7.63, doing the same would result in the RHS being out of the range of the inverse tangent on the LHS. Example 2: let the term inside the inverse cotangent in 7.63 be negative. Then use the $\pi + \tan^{-1}$ formula in both conversions of inverse cotangent to inverse tangent (because if the term inside the inverse cotangent is negative, its reciprocal is also negative; so if you have to use that formula, you must use it for both inverse-cotangent terms or for neither, which means you can't add $\pi$ to only one of them). Then $\pi$ cancels out from the LHS and RHS. Sometimes, using the other variations might take the value of $x$ out of its possible range as deduced from the original expression given.
|trigonometry|inverse|inverse-function|
0
Extension of Caputo's fractional derivative to distributions
Let us start with the definition $\frac{d}{dx}\theta(x)=\delta(x)$, where $\theta(x)$ and $\delta(x)$ are the Heaviside and delta functions. Now, with the definition $$^c D^\alpha f(t)=\frac{1}{\Gamma(n-\alpha)}\int_0^t\frac{\frac{d^n}{dx^n}f(x)}{(t-x)^{\alpha-n+1}}\mathrm d x,$$ I want to understand what the half-derivative of $\theta(x)$ is, such that applying another half-derivative to the result gives the delta function. For a simple function like $f(x)=\sin(x)$ one can look at the Taylor expansion, and for each power of $x$ the formula gives the half-derivative $$^c D^\frac12 x^p=\frac{p x^{p-\frac{1}{2}} \Gamma (p)}{\Gamma \left(p+\frac{1}{2}\right)},$$ such that the application of another half-derivative gives the same result as the full derivative, i.e. $px^{p-1}$. However, for the Heaviside theta function, the first application gives $$^c D^\frac12 \theta(x)=\frac{2 \theta (x)-1}{\sqrt{\pi } \sqrt{x}},$$ and now I am unsure how to show that the second application gives the D
Too long for a comment: The derivative of a distribution $\theta$ is defined as $$ ( \partial_x \theta, \varphi):= -(\theta, \partial_x \varphi) $$ for any function $\varphi \in C^\infty_c$ (or, if the distribution has compact support, $C^\infty$). As such, the most natural definition for the fractional derivative of a distribution is $$ ( ^cD^{\frac{1}{2}} \theta, \varphi):=i(\theta, {}^cD^{\frac{1}{2}} \varphi) $$ for any smooth (with compact support) $\varphi$. It is immediate to see that $\left( ^cD^{\frac{1}{2}}\right)^2\theta=\partial_x\theta$ for any distribution $\theta$. I would be surprised if the fractional derivative of the Heaviside function were a function and not a distribution, as it would be $$ (^cD^{\frac{1}{2}} \theta, \varphi)= \frac{i}{\Gamma(n-\frac{1}{2})} \int_{\mathbb{R}} \theta(t) \int_0^t\dfrac{\partial_x^n \varphi(x)}{(t-x)^{\frac{1}{2}-n+1}} \,dx \,dt= $$ $$=\frac{i}{\Gamma(n-\frac{1}{2})}\int_0^\infty \int_{0}^t \dfrac{\partial_x^n \varphi(x)}{(t-x)^{\frac{1}{2}-n+1}} \,dx \,dt $$ That
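The power rule quoted in the question is easy to sanity-check numerically: since $p\,\Gamma(p)=\Gamma(p+1)$, the half-derivative of $x^p$ has coefficient $\Gamma(p+1)/\Gamma(p+\frac12)$, and composing two half-derivatives must reproduce the classical $p\,x^{p-1}$. A minimal check (mine, not from the thread):

```python
from math import gamma, isclose

def half_deriv_coeff(p):
    # Coefficient c in  D^(1/2) x^p = c * x^(p - 1/2),
    # using p * Gamma(p) = Gamma(p + 1).
    return gamma(p + 1) / gamma(p + 0.5)

p = 3.7
c1 = half_deriv_coeff(p)        # D^(1/2) x^p       -> c1 * x^(p - 1/2)
c2 = half_deriv_coeff(p - 0.5)  # D^(1/2) x^(p-1/2) -> c2 * x^(p - 1)
assert isclose(c1 * c2, p)      # matches d/dx x^p = p x^(p - 1)
```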
|real-analysis|calculus|derivatives|fractional-calculus|
0
Evaluating a limit involving parameter
Consider the following limit which hypothetically uniformly converges to some real analytic function $\Delta$ : $$\Delta_t(x)=\lim_{r\to \infty} \frac{1}{r} \sum_{n=1}^\infty \exp\left({\frac{t\log n}{\log r \log x}}\right)$$ I'm looking to evaluate this limit. Here $t > 0$ is some real parameter. The motivation to find the limit is because it prescribes a controlled deformation of: $$ \Gamma_t(x)= \sum_{n=1}^\infty \exp\left({\frac{t \log n}{\log x}}\right)$$ for real $x\in(0,1).$ I want to be able to conclude that this deformation preserves analyticity. Conjecturally we have $$\Delta_t(x)=e^{\frac{t}{\log x}}$$ which is indeed real analytic on $(0,1)$ . Is $\Delta_t(x)=e^{\frac{t}{\log x}}$ ?
Note that $$ \sum_{n=1}^\infty \exp\left({\frac{t\log n}{\log r \log x}}\right) = \sum_{n=1}^\infty n^{-{t/(\log r)(\log x^{-1})}} = \zeta\biggl( \frac t{(\log r)(\log x^{-1})} \biggr) $$ is a value of the Riemann zeta function, but only as long as $t/(\log r)(\log x^{-1})>1$ . In particular, once $r \ge \exp(t/\log x^{-1})$ , the sum will no longer converge, and so the limit in question is undefined.
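Concretely (my own illustration, with the sample values $t=1$, $x=\tfrac12$): the zeta argument $s(r)=t/((\log r)(\log x^{-1}))$ decreases through $1$ exactly at $r=\exp(t/\log x^{-1})$, after which the series diverges:

```python
from math import log, exp

t, x = 1.0, 0.5
r_crit = exp(t / log(1 / x))   # threshold r = exp(t / log(1/x))

def s(r):
    # argument of the zeta function in the sum above
    return t / (log(r) * log(1 / x))

# s(r) > 1 (convergence) below the threshold, s(r) < 1 (divergence) above it
assert s(0.9 * r_crit) > 1.0 > s(1.1 * r_crit)
```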
|real-analysis|calculus|sequences-and-series|limits|convergence-divergence|
1
Calculate the integral of $1/\sin z$ over the unit circle
$\int_C \frac{dz}{\sin z}$ where $C := \{z \in \mathbb C : |z| = 1 \}$. I know $\frac{1}{\sin z}$ has a pole at $z = 0$. I can apply the Cauchy residue theorem, so I need to know the Laurent series expansion of $\frac{1}{\sin z}$, which I am unable to figure out. Is there any other neat way to approach this? Thanks in advance.
Let $$f(z)=\frac{z}{\sin z}\ \ (0<|z|<\pi),\qquad f(0)=1;$$ then Morera's theorem implies $f$ is analytic in $\{z\mid |z|<\pi\}$ . So, by the Cauchy integral formula , we have $$\int_{|z|=1}\frac{dz}{\sin z}=\int_{|z|=1}\frac{f(z)}{z}dz =2\pi if(0)=2\pi i.$$
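Numerically confirming the value (my check, not part of the answer): on $|z|=1$ the integrand is smooth and periodic in $\theta$ (the only zeros of $\sin z$ near the disc are $0$ and $\pm\pi$), so even a plain Riemann sum for $\oint dz/\sin z$ converges quickly to $2\pi i$:

```python
import cmath

# Parametrize |z| = 1 by z = e^{it}, dz = i e^{it} dt, and sum.
N = 20000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = cmath.exp(1j * t)
    total += (1 / cmath.sin(z)) * 1j * z * (2 * cmath.pi / N)

assert abs(total - 2j * cmath.pi) < 1e-6   # residue theorem: 2*pi*i
```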
|complex-analysis|
1
Proving $A\to (¬A \to B)$ with Łukasiewicz's axioms and modus ponens?
I am trying to answer the following exercise from Hou's Fundamentals of Logic and Computation: With Practical Automated Reasoning and Verification . Using only modus ponens and the following axioms: I've been stuck on it for some days; I've even made a small program in Mathematica to help me with the substitutions, but got nothing. Can you help me?
Hou gives the rule of transitivity of implication (TI) on p. 17, but does not give the theorem $A\rightarrow\neg\neg A$ . So, I copy the proof of this theorem from my answer and paste it here (lines 1—23) and the rest follows: $(\neg\neg\neg\neg\neg A\rightarrow\neg\neg\neg A)\rightarrow (\neg\neg A\rightarrow\neg\neg\neg\neg A)\qquad\text{Ax 3}$ $\big((\neg\neg\neg\neg\neg A\rightarrow\neg\neg\neg A)\rightarrow (\neg\neg A\rightarrow\neg\neg\neg\neg A)\big)\rightarrow\Big(\neg\neg\neg A\rightarrow\big((\neg\neg\neg\neg\neg A\rightarrow\neg\neg\neg A)\rightarrow(\neg\neg A\rightarrow\neg\neg\neg\neg A)\big)\Big)\qquad\text{Ax 1}$ $\neg\neg\neg A\rightarrow\big((\neg\neg\neg\neg\neg A\rightarrow\neg\neg\neg A)\rightarrow(\neg\neg A\rightarrow\neg\neg\neg\neg A)\big)\qquad\text{MP 1, 2}$ $\Big(\neg\neg\neg A\rightarrow\big((\neg\neg\neg\neg\neg A\rightarrow\neg\neg\neg A)\rightarrow(\neg\neg A\rightarrow\neg\neg\neg\neg A)\big)\Big)\rightarrow\Big(\big(\neg\neg\neg A\rightarrow(\neg\neg\
|logic|propositional-calculus|formal-proofs|
0
How to calculate the probability of a repeating pattern of outcomes within a sequence of events (specific situation)
I have a sequence of 54 events (Likert-scale survey items). The outcomes of these events (item responses) correspond to integers. I need to find the probability (assuming all outcomes for each event are equally likely) of ANY pattern of outcomes repeating 3 or more times consecutively OR of a single outcome repeating 6 times consecutively. The length of the repeating pattern does not matter (i.e. it could be 2 outcomes repeating, such as "4,5,4,5,4,5", or up to 18 items repeating) so long as the pattern repeats 3 times consecutively. To complicate matters, some events have different numbers of possible outcomes. The first 22 events have 6 possible outcomes (Outcomes 0 through 5). The next 10 events (events 23 through 32) have 5 possible outcomes (Outcomes 1 through 5). The final 22 events (events 33 through 54) have 7 possible outcomes (Outcomes 1 through 7). Thus, 5 outcomes (Outcomes 1 through 5) are possible throughout all events, while outcome 0 is only present in events 1 through
Here’s Java code that performs inclusion–exclusion for this problem. I don’t see a way to make an exact solution tractable, but part of the beauty of inclusion–exclusion is that it gives you alternating upper and lower bounds on the desired count. In the present case, the bounds from the first four stages may already be sufficiently tight for your purposes. They are \begin{align} p\lt\frac{401720233802817099323960526777224873}{10721193919434425014594534361530368000}&\approx0.03746972928776285\;, \\ p\gt\frac{4201948007014679218289498623445742497411}{143587418563853906445462513770496000000000}&\approx0.029264040324995855\;, \\ p\lt\frac{1748294416292362535737205604304327179703}{55839551663720963617679866466304000000000}&\approx0.03130924880667034\;, \\ p\gt\frac{4385318275113451134817187683255378685393}{143587418563853906445462513770496000000000}&\approx0.03054110394193961\;, \end{align} where $p$ is the probability that at least one of the repeating patterns occurs. (Note that the even
|probability|sequences-and-series|statistics|
0
Solving equations involving inverse trigonometric functions
My math teacher told me that, while solving trigonometric equations involving inverse trigonometric functions, if you are not able to get enough information about the inputs, or if you have to solve for specific ranges of input, or if nothing is given, then you should directly use the formula below without thinking about anything else and just check for extraneous solutions: $$\tan^{-1}(x)+\tan^{-1}(y)=\tan^{-1}\left(\frac{x+y}{1-xy}\right)$$ even though the above formula is only valid when $xy$ is less than 1. And guess what! I saw the textbook doing the same in many illustrations; in the picture I shared above there are 2 of them: in Illustration 7.62 look at the 3rd line, and in Illustration 7.63 also look at the 3rd line. In both places they have directly used the above formula without mentioning anything, and my teacher didn't go into much detail either. What I want to ask is: is it even correct to do so, or am I missing something? Won't it cause us to miss some solutions? Or is it some standard convention or something? Because the bo
You don't need to use the addition formula for $\tan^{-1}$ . Instead, use the addition formula for $\tan$ itself. For the first problem: \begin{align*} \sin^{-1}x&=\tan^{-1}2x-\tan^{-1}x\\ \implies \tan(\sin^{-1}x)&=\tan\left(\tan^{-1}2x-\tan^{-1}x\right)\\ \implies \frac{x}{\sqrt{1-x^2}}&=\frac{2x-x}{1-2x\cdot (-x)}\\ &\vdots \end{align*} Note that we require $|x|\le 1$ for the problem to be meaningful. For the second problem: \begin{align*} \tan^{-1}6x&=\tan^{-1}\left(\frac{3x^2+1}{x}\right)-\tan^{-1}\left(\frac{1-3x^2}{x}\right)\\ \implies 6x&=\tan\left(\tan^{-1}\left(\frac{3x^2+1}{x}\right)-\tan^{-1}\left(\frac{1-3x^2}{x}\right)\right)\\ \implies 6x&=\frac{\frac{3x^2+1}{x}-\frac{1-3x^2}{x}}{1-\frac{3x^2+1}{x}\cdot\left(-\frac{1-3x^2}{x}\right)}\\ &\vdots \end{align*} In these workings, you only need: \begin{align*} \tan(A+B)&\equiv\frac{\tan(A)+\tan(B)}{1-\tan(A)\tan(B)}\\ \tan(\tan^{-1}(x))&\equiv x\\ \tan(\sin^{-1}x)&\equiv\frac{x}{\sqrt{1-x^2}} \end{align*} This method should gi
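For the second problem, pushing the elided algebra through gives $9x^4=1$ for $x\neq0$, i.e. $x=\pm 3^{-1/2}$; these candidates can be checked against the equation as written above numerically (my check, not part of the answer):

```python
from math import atan

def lhs(x):
    # left side: arctan(6x)
    return atan(6 * x)

def rhs(x):
    # right side: arctan((3x^2+1)/x) - arctan((1-3x^2)/x)
    return atan((3 * x * x + 1) / x) - atan((1 - 3 * x * x) / x)

# Both candidate roots x = ±1/sqrt(3) satisfy the original equation.
for x in (3 ** -0.5, -(3 ** -0.5)):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```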
|trigonometry|inverse|inverse-function|
1
A set $S$ of positive integers is self-indulgent if $\gcd(a, b) = |a-b|$ for any two distinct $a,b\in S$
A set $S$ of positive integers is self-indulgent if $\gcd(a, b) = |a-b|$ for any two distinct $a,b\in S$ (a) Prove that any self-indulgent set is finite. (b) Prove that for any positive integer $n$, there exists a self-indulgent set with at least $n$ elements. For the first question, it can be seen that no self-indulgent set can contain an element greater than twice its minimum element. For the sake of exploration I picked a natural number, got its prime factorization, and added every possible factor to the original number. It can be seen that for a self-indulgent set whose minimum is an arbitrary number $N$, all the elements are initially obtained by adding factors of $N$ to $N$. Then a similar process follows on the natural numbers obtained this way: for each of those numbers we get a set of natural numbers, and we take the intersection of all those sets with the first set we got. We can get multiple self-indulgent sets with minimum element $N$. Now h
Here is the construction of self-indulgent sets of arbitrary finite cardinality given at quora.com (link in my comment on the question): Let $a_1,a_2,\dotsc,a_n$ be a self-indulgent list of $n$ distinct positive integers. Let $L$ be the least common multiple of $a_1,a_2,\dotsc,a_n$ . Then $L,L+a_1,L+a_2,\dotsc,L+a_n$ is a self-indulgent list of $n+1$ distinct positive integers. For example: $1,2$ is a self-indulgent list, with $L=2$ ; $2,3,4$ is a self-indulgent list, with $L=12$ ; $12,14,15,16$ is a self-indulgent list, with $L=1680$ ; $1680,1692,1694,1695,1696$ is a self-indulgent list of five distinct positive integers (and I'm too lazy to calculate $L$ ). The proof that this works is easy. We have $|(L+a_i)-(L+a_j)|=|a_i-a_j|=\gcd(a_i,a_j)$ , and $\gcd(a_i,a_j)$ divides both $a_i$ and $a_j$ , so it divides their lcm, which divides the lcm of $a_1,a_2,\dotsc,a_n$ , which is $L$ . Also, any $d$ which divides both $L+a_i$ and $L+a_j$ divides their absolute difference, which is $|a_i-a
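The construction is easy to verify by machine; this sketch (mine) reproduces the example lists above, including the lcm $1680$:

```python
from math import gcd, lcm  # math.lcm needs Python 3.9+
from itertools import combinations

def is_self_indulgent(s):
    # gcd of every pair equals the pair's absolute difference
    return all(gcd(a, b) == abs(a - b) for a, b in combinations(s, 2))

s = [1, 2]
for _ in range(3):
    L = lcm(*s)
    s = [L] + [L + a for a in s]   # the step from the answer
    assert is_self_indulgent(s)

# Matches the example: 1,2 -> 2,3,4 -> 12,14,15,16 -> 1680,1692,1694,1695,1696
assert s == [1680, 1692, 1694, 1695, 1696]
```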
|number-theory|gcd-and-lcm|
0
An exotic operation on repeating sequences of natural numbers such as $\overline{6,9} = 6,9,6,9,6,9, \dots$ Having to do with common subsums.
Definitions. This is about repeating sequences of natural numbers such as $\overline{6} = 6,6,6, \dots$ or $\overline{2,1,2} = 2,1,2,2,1,2, \dots$ . I am aware that these can be faithfully represented as the sequence of coefficients of a power series $f(x)(1 + x^k + x^{2k} + \dots)$ where $k = \deg f + 1$ . And conversely, so there is a bijection. However, I have (consider this my attempt) verified that neither power series $+$ nor $\cdot$ represent the below described $\star$ operation. Given $a = \overline{a_0, a_1, \dots, a_n}$ and $b = \overline{b_0, b_1, \dots, b_m}$ define $a \star b$ by the following algorithm. We must start at $c_0 = (a\star b)_0$ the first entry of the result $c$ . The Algorithm . $a \star b$ : Input: $a, b$ (finite sequences such that if you index them the index you pass in gets modulo'd by their length). Output: $c$ another repeating sequence in smallest length finite form. Let $i = 0, j=0, k=0, c = []; s = 0, t = 0$ If $\sum c = \text{LCM}(\sum a, \sum b)$
I looked at the question again today, having also considered from the beginning the same (periodic) series approach, which is just another way to write the operation $\star$ , and saw no constructive progress. However, it may be of interest to have this way of looking at the problem too, so here is an attempt at an answer. Let $F=\Bbb F_2$ be the field with two elements. Consider the power series ring $R$ with powers in possible degrees $1,2,3,\dots$ over $F$ with the usual operations plus/minus $\pm$ , and with two multiplications: $\cdot$ , the usual one, and $\odot$ , which acts only degree-wise (like the addition)! $$ P=\Big\{\ f=\sum_{n>0}f_nx^n\ \Big|\ f_1,f_2,f_3,\dots\in F\text{ periodic }\Big\}\ . $$ We identify for instance $a=[2,1,2]:=\overline {2,1,2}$ with the series $f=f(a)$ which satisfies: $$ f=f(a)=x^2(1+x^1(1+x^2(1+f)))=x^2+x^3+x^5+x^5f=\frac{x^2+x^3+x^5}{1+x^5}\ . $$ Then $f$ moves the structure $\star$ from the set of periodic sequences to the set $P$ with the monoid
|abstract-algebra|sequences-and-series|algorithms|modular-arithmetic|periodic-functions|
1
Any assured method for finding the derivative of p Euclidian norms?
Assuming both $y$ and $\beta$ are $p \times 1$ vectors, and $W$ is a $p \times p - 2$ matrix, how would one take the first derivative of this: $L(\beta) = || y - \beta||^2_2 + || W\beta||^2_2$ ? I'm aware that this essentially means $\frac{\partial L}{\partial \beta} = \frac{\partial}{\partial \beta}\sum_{i = 1}^p (y_i - \beta_i)^2 + \frac{\partial}{\partial \beta} || W\beta||^2_2$ . After looking online, I found the identity $\frac{\partial}{\partial \mathbf{x}}||A\mathbf{x}||^2_2 = 2A^TA\mathbf{x}$ . While this suffices for my proofs, I fail to understand why it is the case. I do know that it will apply to $|| W\beta||^2_2$ . So my question is how to compute $\frac{\partial}{\partial \beta} \sum_{i = 1}^p (y_i - \beta_i)^2$ in a very straightforward and logical way, with fundamentals that I can apply to other $p$-norms at any time without worrying about identities and such. Many thanks, this has been driving me utterly mad.
The Frobenius $(:)$ product is extremely useful in Matrix Calculus and has these properties $$\eqalign{ \def\b{\beta} \def\p{\partial} \def\LR#1{\left(#1\right)} A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \operatorname{Trace}\LR{A^TB} \\ B:B &= \|B\|_F^2 \\ A:B &= B:A \;=\; B^T:A^T \\ }$$ Calculating the differential of the Frobenius norm is easy using this product $$\eqalign{ d\,\|B\|_F^2 &= d\LR{B:B} \\&= B:dB + dB:B \\&= 2B:dB \\ }$$ Use this result to calculate the differential of the $L$ function and recover its gradient $$\eqalign{ L &= \LR{\b-y}:\LR{\b-y} \;+\; \LR{W\b}:\LR{W\b} \\ dL &= 2\LR{\b-y}:{d\b} \;+\; 2\LR{W\b}:\LR{W\,d\b} \\ &= 2\LR{\b-y}:{d\b} \;+\; 2\LR{W^TW\b}:d\b \\ \frac{\p L}{\p\b} &= 2\b-2y \;+\; 2W^TW\b \\ }$$
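A finite-difference check of the resulting gradient $\frac{\partial L}{\partial\beta}=2(\beta-y)+2W^TW\beta$ (my own sketch, with small random data and plain lists; the shape chosen for $W$ here is illustrative):

```python
import random

random.seed(0)
p, m = 4, 2
y = [random.uniform(-1, 1) for _ in range(p)]
W = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(m)]
beta = [random.uniform(-1, 1) for _ in range(p)]

def L(b):
    # L(b) = ||y - b||^2 + ||W b||^2
    fit = sum((yi - bi) ** 2 for yi, bi in zip(y, b))
    Wb = [sum(W[i][j] * b[j] for j in range(p)) for i in range(m)]
    return fit + sum(v * v for v in Wb)

# Closed-form gradient: 2(beta - y) + 2 W^T W beta
Wb = [sum(W[i][j] * beta[j] for j in range(p)) for i in range(m)]
grad = [2 * (beta[j] - y[j]) + 2 * sum(W[i][j] * Wb[i] for i in range(m))
        for j in range(p)]

# Compare against central finite differences
h = 1e-6
for j in range(p):
    bp = beta[:]; bp[j] += h
    bm = beta[:]; bm[j] -= h
    fd = (L(bp) - L(bm)) / (2 * h)
    assert abs(fd - grad[j]) < 1e-5
```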
|calculus|matrices|derivatives|vectors|partial-derivative|
0
Sum of angles in a $1$-by-$3$ rectangle
This problem was in a competition for a job. It seems simple BUT the challenge is you cannot use trigonometry. Let there be 3 squares with side length of $\ell$ arranged in such a way that it forms a rectangle with length $3\ell$ and width $\ell$ . So $ABCD$ , $CEHD$ , and $EFGH$ are squares. (This is notation because it's easier to tell you this way.) If $\alpha = m(\angle AED)$ and $\beta = m(\angle AFD)$ , then what is $\alpha + \beta$ ? I tried to use angles of quadrilaterals but so far I didn't find anything useful. Have fun solving it! Non-OP edit : added diagram to confirm construct or clarify
The best answer I can give not using trigonometry is: $\alpha + \beta$ is equal to the smallest angle of the $1, 2, \sqrt{5}$ right triangle. Proof: Translate the angle $\alpha$ from AED to DFH. Draw a diagonal from H to C. Let K be the point where the segment HC intersects the diagonal AF. In this way we get the triangle FHK, which is a right triangle because the angle CHF is a right angle (see Proof 1). The length of the side FH must be $\sqrt{2}\ell$ , because it is the diagonal of a square with sides of length $\ell$ . The length of the side HK must be $\sqrt{2}\frac{\ell}{2}$ , because it is the diagonal of a square with sides of length $\frac{\ell}{2}$ , because K is the center of the middle square DCEH (see Proof 2). Consider a similar triangle with sides of length R, S and T, scaled by a factor $x=\frac{\sqrt{2}}{\ell}$ , such that: $R=x|HK|=\frac{\sqrt{2}}{\ell}\sqrt{2}\frac{\ell}{2}=1$ $S=x|FH|=\frac{\sqrt{2}}{\ell}\sqrt{2}\ell=2$ These are the two sides that form the right angle. By the Pythagorean t
|geometry|angle|quadrilateral|
0
Expectation of maximum number of non-overlapping random intervals inside $[0,1]$ out of $n$ intervals
Choose two random numbers from $[0,1]$ and let them be the endpoints of a random interval. Repeat this independently for $n$ times to get $n$ random subintervals (denoted as $U_1,\dots,U_n$ ). Let $T_n = \max\{|S|: S\subset\{U_1,\dots,U_n\}, U_i\cap U_j=\emptyset \mbox{ for any }U_i\not =U_j\in S\}$ , where $|S|$ denote the number of intervals in $S$ . I'm wondering what is the expectation of $T_n$ ? Or how fast does the expectation of $T_n$ increase with $n$ ? ( is it $O(n)$ , $O(\sqrt{n})$ or $O(\log n)$ ?)
I can prove that $$ -\tfrac12+\sqrt{n}\le \newcommand\E{\mathbb E}\E T_n\le e\sqrt{n/2}+O(1) $$ So, the growth rate of $\E T_n$ is $\Theta(\sqrt{n})$ . Upper bound Let $t\in \mathbb N$ . Using the union bound, we have $$ \newcommand\P{\mathbb P} \P(T_n\ge t)\le \binom{n}t\frac{2^t\cdot t!}{(2t)!} $$ Indeed, in order for $T_n\ge t$ to occur, there need to exist $t$ particular intervals which do not overlap. There are $\binom nt$ selections of $t$ intervals. For each of these, the probability that they do not overlap is $2^t\cdot t!/(2t)!$ . This is because each of the $(2t)!$ orderings of the $2t$ endpoints is equally likely, but only $2^t\cdot t!$ of them are successful. Now, using the inequalities $\binom nt\le n^t/t!$ and $(2t)!\ge (2t)^{2t}\cdot e^{-2t}$ , the above becomes $$ \P(T_n\ge t)\le \left(\frac{e\sqrt{n/2}}{t}\right)^{2t} $$ Now, let $t=e\sqrt{n/2}+m$ , for some $m\in \mathbb N$ . We see that $$ \begin{align} \P\left(T_n\ge e\sqrt{n/2}+m\right) &\le \left(\frac{e\sqrt{n
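A quick Monte Carlo check of the bounds (mine): per sample, $T_n$ is a maximum set of pairwise disjoint intervals, computable exactly with the classic earliest-right-endpoint greedy:

```python
import random

random.seed(1)

def T(intervals):
    # Max number of pairwise disjoint intervals: greedy by right endpoint.
    count, last_end = 0, -1.0
    for a, b in sorted(intervals, key=lambda iv: iv[1]):
        if a > last_end:
            count += 1
            last_end = b
    return count

def sample_T(n):
    ivs = [tuple(sorted((random.random(), random.random()))) for _ in range(n)]
    return T(ivs)

n, trials = 400, 200
est = sum(sample_T(n) for _ in range(trials)) / trials
# Should sit between sqrt(n) - 1/2 and e*sqrt(n/2) + O(1):
assert n ** 0.5 - 0.5 <= est <= 2.72 * (n / 2) ** 0.5 + 5
```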
|probability|statistics|
0
Expected value of $X_N$ with smallest index $N$ for which $\sum_{i=1}^N X_i$ exceeds $1$ when $X_i$ are uniformly distributed
From an interview book, where the answer is not so clear I believe. You keep generating $\mathcal U_{[0,1]}$ iid random variables until their sum exceeds 1, then compute the expected value of the last random variable, i.e. the one responsible for letting the sum of rvs overflow 1. My idea (not working): The $i$ -th draw from $\mathcal U_{[0,1]}$ is called $X_i$ , and $S_N:=\sum_{i=1}^N X_i$ . I aim to compute: $$\mathbb E\left[X_{N}\right], N:=\min \left[i:\sum_i X_i > 1\right].$$ Rewrite it as: $$\mathbb E\left[X_{N}\right] = \sum_{i=2}^\infty \mathbb E\left[X_{N}|N=i\right]\mathbb P[N=i].$$ From this question I know that $\mathbb P[N=i] = (i-1)/i!$ . I know that $X_N$ takes positive values between 0 and 1, so I use the expectation of the tail function: $$\mathbb E\left[X_{N}|N=i\right]=\int_0^1 \mathbb P[X_N>t|N=i]\ \text d t= 1-\int_0^1 \mathbb P[X_N\leq t|N=i]\ \text d t.$$ Now, some relabeling, using $X$ for the generic $\mathcal U_{[0,1]}$ and $Y$ for $S_{i-1}$ : $$\mathbb P[X_N\
This step is wrong: $$\int_0^1\mathbb P[X\leq t\cap X> 1-Y\mid Y=y]\ \text dy.$$ You can't say $\mathbb P[X\leq t\cap X> 1-Y\mid Y=y]=t-1+y$, because sometimes $t-1+y$ is negative. Instead, for this probability to be nonzero, we need $1-y \leq t$, which is the same as $y \geq 1-t$, so we need to adjust the integral limits accordingly. We then get: $$\int_0^1\mathbb P[X\leq t\cap X> 1-Y\mid Y=y]\ \text dy=\int_{1-t}^1 (t-1+y)\ \text dy.$$ You also have a second mistake later on: you are substituting $(it-1)/(i-1)$ for $\mathbb{E}[X_N \mid N=i]$, but this isn't right; you should use your integral from before to compute this expectation: $$\mathbb E[X_{N}|N=i]= 1-\int_0^1 \mathbb P[X_N\leq t|N=i]\ \text d t.$$ In fact, I'm not sure how you figured out the value of that infinite sum, since that infinite sum still uses $t$ even though $t$ does not mean anything in that context. Unfortunately, I don't know how to correct this reasoning; I think the problem can be solved in the way you are doing, but I am not good at infinite sums. Instead, I'll use differen
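For reference, the quantity is easy to simulate (my sketch); the sample mean agrees with the closed form $2-e/2\approx0.6409$, a value I am asserting from my own derivation rather than from this answer:

```python
import random
from math import e

random.seed(2)

def last_addend():
    # Draw uniforms until the running sum exceeds 1; return the last draw.
    s = 0.0
    while True:
        x = random.random()
        s += x
        if s > 1.0:
            return x

trials = 200_000
est = sum(last_addend() for _ in range(trials)) / trials
assert abs(est - (2 - e / 2)) < 0.005
```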
|probability|random-variables|expected-value|conditional-expectation|
1
Proving the equality case of the Rising Sun Inequality
I'm stuck at the finish line on a pair of Exercises from Tao's Introduction to Measure Theory: Exercise 1.6.12 (Rising Sun Inequality). Let $f : \mathbb{R} \to \mathbb{R}$ be an absolutely integrable function, and let $f^* : \mathbb{R} \to \mathbb{R}$ be the one-sided signed Hardy-Littlewood maximal function $$ f^*(x) := \sup_{h>0}\frac{1}{h}\int_{[x,x+h]}f(t)\,dt. $$ Establish the rising sun inequality $$ \lambda m(\{x\in\mathbb{R} : f^*(x) > \lambda\}) \leq \int_{\{x \in\mathbb{R} : f^*(x) > \lambda\}}f(t)\,dt $$ for all real $\lambda$ (note here that we permit $\lambda$ to be zero or negative) [...] Exercise 1.6.13. Show that the left- and right-hand sides of Exercise 1.6.12 are in fact equal when $\lambda > 0$ . ( Hint : One may first wish to try this in the case when $f$ has compact support [...]) I've proven Exercise 1.6.12, and for 1.6.13, following the hint I proved equality for the case when $f$ has compact support. My issue is on generalizing 1.6.13 to all absolutely integrab
$\def\R{\mathbb{R}}\def\d{\mathrm{d}}\def\le{\leqslant}$ The missing piece here is the (absolute) integrability of $f$ . For any $n$ , $f_n \le |f| \implies f_n^* \le {|f|}^*$ , thus for any $\lambda > 0$ , $$ \{f_n^* > \lambda\} \subseteq \{{|f|}^* > \lambda\} \implies |\chi_{\{f_n^* > \lambda\}}| = \chi_{\{f_n^* > \lambda\}} \le \chi_{\{{|f|}^* > \lambda\}}. $$ Note that $\chi_{\{{|f|}^* > \lambda\}} \in L^1(\R)$ by the rising sun inequality, hence the dominated convergence theorem shows that $$ m(\{f_n^* > \lambda\}) \to m(\{f^* > \lambda\})\ (n \to \infty). $$
|real-analysis|measure-theory|inequality|lebesgue-integral|
1
Sum of reciprocals of generalized polygonal numbers
Let us consider the series of the sum of reciprocals of generalized pentagonal numbers $$\sum_{-\infty}^{\infty}\left(\frac{2}{3n^{2}-n}\right)=1+\frac{1}{2}+\frac{1}{5}+\frac{1}{7}+\frac{1}{12}+\frac{1}{15}+\frac{1}{22}+\frac{1}{26}+...$$ It seems that the series is convergent, but is it known to which number? I reasoned the following: Let it be the sum of reciprocals of triangular numbers $$\sum_{n=1}^{\infty}\frac{1}{T_{n}}=1+\frac{1}{3}+\frac{1}{6}+\frac{1}{10}+\frac{1}{15}+\frac{1}{21}+...=2$$ Already Euler noted that, if we multiply each term by 3, we obtain $$3+1+\frac{1}{2}+\frac{3}{10}+\frac{1}{5}+\frac{1}{7}+\frac{3}{28}+\frac{1}{12}+\frac{1}{15}+\frac{3}{55}+\frac{1}{22}+\frac{1}{26}+\frac{3}{91}+...=6$$ Reordering conveniently, we have that $$\left(1+\frac{1}{2}+\frac{1}{5}+\frac{1}{7}+\frac{1}{12}+\frac{1}{15}+...\right)+...\left(\frac{3}{10}+\frac{3}{28}+\frac{3}{55}+\frac{3}{91}+...\right)+3=6$$ It could be noted that $$1+\frac{1}{2}+\frac{1}{5}+\frac{1}{7}+\frac{1}{12}+
If $s$ is the number of sides in a polygon, the formula for the $n$ th $s-$ gonal number $P(s, n)$ is $$P(s, n) = (s-2)\frac{n(n-1)}2+n.$$ You can read more about it here . Let $\alpha : = \dfrac{s-2}2$ . Then, we have $$\frac1{P(s, n)} = \frac1{\alpha n(n-1)+n} = \frac1{1-\alpha}\left(\frac1n-\frac1{n+\frac{1-\alpha}\alpha}\right)$$ This formula can be derived through partial fraction decomposition . It follows $$\sum_{n=1}^\infty\frac1{P(s,n)} = \frac1{1-\alpha}\sum_{n=1}^\infty\left(\frac1n-\frac1{n+\frac{1-\alpha}\alpha}\right)$$ But notice the RHS summation is (one of) the formula for the generalized harmonic number (the much cooler version of the digamma function ) $H\left(\frac{1-\alpha}\alpha\right)$ . Therefore $$\sum_{n=1}^\infty\frac1{P(s,n)} = \frac{1}{1-\alpha}H\left(\frac{1-\alpha}\alpha\right)$$ If you wish to include negative indices, just notice $$\begin{aligned} \sum_{n=-1}^{-\infty}\frac1{P(s,n)} &= \frac1{1-\alpha}\sum_{n=-1}^{-\infty}\left(\frac1n-\frac1{n+\frac{1-
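A partial-sum check of the closed form (mine, using hexagonal numbers $s=6$, whose reciprocal sum is known to be $2\ln 2$):

```python
from math import log

def P(s, n):
    # n-th s-gonal number: (s-2) n(n-1)/2 + n
    return (s - 2) * n * (n - 1) // 2 + n

def H(x, terms=10**5):
    # generalized harmonic number H(x) = sum_{n>=1} (1/n - 1/(n+x))
    return sum(1.0 / n - 1.0 / (n + x) for n in range(1, terms))

s = 6                      # hexagonal numbers: 1, 6, 15, 28, ...
a = (s - 2) / 2            # alpha = 2
lhs = sum(1.0 / P(s, n) for n in range(1, 10**5))
rhs = H((1 - a) / a) / (1 - a)
assert abs(lhs - rhs) < 1e-4
assert abs(lhs - 2 * log(2)) < 1e-4   # known value for hexagonal numbers
```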
|real-analysis|sequences-and-series|convergence-divergence|
1
Restriction of a compactly supported function on a bounded domain in a surface
Let $\Omega \subset \mathbb{R}^N$ be a bounded domain and let $P_k$ be a plane of dimension $1\leq k < N$ in $\mathbb{R}^N$ . Denote by $\sigma_k$ the surface measure on the surface $\Omega_k = \Omega\cap P_k$ . Given $\varphi \in C^\infty_0(\Omega)$ , how does one prove directly that there exists a positive constant $C$ such that $$ \int_{\Omega_k} |\varphi_{x_i}(w)|^2 d \sigma_{k w} \leq C \int_{\Omega} |\varphi_{x_i}(x)|^2 d x ? $$ Any reference about this is also very welcome. Edit: How to prove that $$ \int_{\Omega_k} |\nabla \varphi(w)|^2 d \sigma_{k w} \leq C \int_{\Omega} |\nabla \varphi(x)|^2 d x ? $$ If I am wrong, what should be the interpretation of the following result from Adams, Sobolev Spaces, 2003: And
I am fairly sure the answer is no. In fact, we have the trace theorem $[f\mapsto f|_S]:H^1(\mathbb R^n)\to H^{1/2}(S)$ whenever $S$ is a bounded smooth surface. Here $H^1$ is the $W^{1,2}$ Sobolev space, and $H^{1/2}(\mathbb R^m)$ is the fractional Sobolev space, the completion of $C_c^\infty(\mathbb R^m)$ with respect to $\|f\|=(\int|f\cdot\nabla f|^2)^{1/2}$ . In particular, $[f\mapsto f|_S]:H^1\not\to H^1$ . Edited: The part in Adams' book is not the theorem itself, but a description of "what is called a Sobolev embedding". The actual theorem specifying for which $j,m,p,q$ the embedding $W^{m,p}(\Omega)\to W^{j,q}(\Omega)$ holds appears later in that chapter. Also, I don't think maps of the type $W^{m,p}(\Omega)\to W^{j,q}(\Omega_k)$ are "embeddings", as they are never injective. I prefer the usage "trace theorem".
|sobolev-spaces|surfaces|surface-integrals|submanifold|trace-map|
1
Show that orthogonal matrices have eigenvalues with magnitude $1$ without a sesquilinear inner product
Is it possible to consider complex eigenvalues without a Hermitian (i.e. sesquilinear) inner product over a complex vector space? For instance: let $A$ be a real orthogonal matrix (so $A^TA = I$). Without referencing a Hermitian inner product, is it possible to show that the complex eigenvalues of $A$ have magnitude $1$? The usual proof of this fact is as follows: if $x,\lambda$ is an eigenpair of $A$, then we have $$ \|x\|^2 = x^*x = x^*(A^*A)x = (Ax)^*(Ax) = \lambda\overline{\lambda} (x^*x) = |\lambda|^2 \|x\|^2 $$ from which it follows that $|\lambda| = 1$. Note: this proof required the use of the sesquilinear inner product $\langle x,y \rangle = y^*x$. A rephrasing of the original question: consider $\Bbb C^n$ with the bilinear form $$ \langle x,y \rangle = y^Tx $$ note that this bilinear form is not an inner product. The complex-orthogonal matrices are those matrices $A$ that satisfy $A^TA = I$, where $T$ is the entrywise transpose. Notably, the complex-orthogonal matrices preserv
You can show this in a way which seems to me more elementary than the answers above suggest. The key is simply that if $A \in \text{Mat}_n(\mathbb R)$ satisfies $A^tA = I$ then $A$ is an isometry of $\mathbb R^n$ (with respect to the usual Euclidean distance). Since $A$ has to preserve the length of a vector, and in particular any eigenvector, its real eigenvalues must lie in $\{\pm 1\}$ . For the complex eigenvalues of $A$ , we have to consider $A$ as a linear map on $\mathbb C^n$ , but as a real vector space this is just $\mathbb R^n \oplus i\mathbb R^n$ , with $A$ acting "diagonally", that is, $A(v_1+iv_2) = Av_1+iAv_2$ . By imposing the condition that the two copies of $\mathbb R^n$ are orthogonal to each other, and using the usual dot product on each copy, $\mathbb C^n = \mathbb R^n \oplus i\mathbb R^n$ naturally inherits a (real) inner product from $\mathbb R^n$ , and the diagonal action of $A$ still preserves distances. Now if $\lambda=\lambda_1+i\lambda_2\in \mathbb C$ is an eigen
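As a concrete low-dimensional illustration (mine, not part of the answer): a real orthogonal matrix decomposes into $\pm1$ entries and $2\times2$ rotation blocks $\begin{pmatrix}c&-s\\s&c\end{pmatrix}$, and the rotation block's characteristic polynomial $\lambda^2-2c\lambda+1$ has roots $c\pm i\sqrt{1-c^2}$ of modulus exactly $1$:

```python
import cmath
from math import cos, isclose

theta = 1.234                          # arbitrary rotation angle
c = cos(theta)
# Roots of lambda^2 - 2c*lambda + 1 = 0: lambda = c ± i*sqrt(1 - c^2)
disc = cmath.sqrt(complex(c * c - 1))  # purely imaginary for |c| < 1
lam1, lam2 = c + disc, c - disc
assert isclose(abs(lam1), 1.0) and isclose(abs(lam2), 1.0)
assert isclose((lam1 * lam2).real, 1.0)   # product of roots = det = 1
```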
|linear-algebra|matrices|alternative-proof|
0
A walk into the desert and back - have some hard-won answers but can't see the pattern
Mission : To find a minimum-length walk from "base" to a point g distance units away and back for 1 person alone, given: An unlimited supply of rations provided at base, to be consumed continuously while walking. The ability to walk up to 1 distance unit per unit of rations consumed. The ability to carry no more than 2 total units of rations at a time, including those about to be consumed. The ability to walk away from and return to sealed rations only, i.e. rations in full 1-unit packages kept sealed since leaving base. Of course each such package has to be unsealed for any of its contents to be consumed. Let us assume we have a rule that no more than 1 unit of rations is available (ready to be consumed without an unsealing) at any given time and that an increase in available rations always implies an unsealing. (Such a rule is not really required, but explaining why requires some wordy lemmas and would distract from what I am trying to ask.) Using a long-running simplex algorithm com
We shall assume the artificial rule already mentioned, so the amount of walking between consecutive unsealings will always be at least zero and at most 1 unit, and we shall follow the convention of using the same names for locations as for their respective distances from base, and answer Questions 1 and 2 assuming $g > 1$. Let $g' = g - \tfrac12$, and let $I$ be any valid itinerary that fulfills the mission, i.e. in which the person reaches $g$ and then returns to base. Since $g' > \tfrac12$, there must be at least one location between base and $g'$ where in $I$ the person changes direction, unseals a package, or walks away from stationary rations. Let $g''$ be the closest such location to $g'$. Lemma 1: For any valid itinerary $I$ that fulfills the round-trip-to-$g$ mission, there exists an itinerary $I'$ fulfilling the same mission in which the person takes the same actions between base and $g''$ and does only $2(g - g'')$ units of walking between $g''$ and $g$. Proof of Lemma 1: Let $m_b$ be the moment in $I$ of the person's last
|optimization|puzzle|operations-research|
0
Prove that if a continuous function is injective, then it is monotonic
It is intuitive and seems very obvious that if a function $f : X \rightarrow Y$ is continuous on all of $X$ and injective, then it must be monotonic, but I can't come up with a neat proof of that. Could you maybe help me?
I am assuming that $X$ is an interval in $\mathbb{R}$, and $Y=\mathbb{R}$. Lemma: Let $f:X\to Y$ be a function with the following property: for any $x,y,z\in X$ such that $x<y<z$, $f(y)$ is strictly between $f(x)$ and $f(z)$. Then $f$ is strictly monotonic. Proof: $f$ must be injective. For, if $x,y\in X$ and $x<y$, then there exists $z\in X$ such that $x<z<y$, and either $f(x)<f(z)<f(y)$ or $f(x)>f(z)>f(y)$; i.e., $f(x)\neq f(y)$. Next, let $x\in X$. There exists $y\in X$ such that $x\neq y$. First suppose that $x<y$. Since $f$ is injective, either $f(x)<f(y)$ or $f(x)>f(y)$. We will show that $f$ is strictly increasing in the former case, and strictly decreasing in the latter case. Suppose that $f(x)<f(y)$. Let $z\in X$. Note the following: If $x<z<y$, then necessarily $f(x)<f(z)<f(y)$. If $x<y<z$, then necessarily $f(x)<f(y)<f(z)$. If $z<x<y$, then necessarily $f(z)<f(x)<f(y)$. It follows that if $x<z$, then $f(x)<f(z)$, and if $z<x$, then $f(z)<f(x)$. This shows that $f$ is strictly increasing. Now suppose that $f(x)>f(y)$. Let $z\in X$. Note the following:
|calculus|
0
Deriving the catenary from a hanging chain
Assume a heavy chain (constant mass per unit length) takes the shape of a plane curve $\mathcal C$ after being suspended by its two ends from the same height. Let $s$ be its arc length starting from its lowest point, let $T$ be the tension in the chain, let $\varphi$ be the angle between the tangent line and the horizontal, and let $\lambda$ , $\mu$ be two non-zero constants. I'm trying to show that $T\cos \varphi = \lambda $ and $T\sin \varphi = \mu s$ . Just as a gut check, if we plug in $s=0$ where the tangent line is horizontal so $\varphi = 0$ , we find that $T=\lambda\neq 0$ , but to me the tension in the chain at its lowest point or where $s=0$ should be zero since there is no chain below it to hold up. Without solving the problem for me here, where am I going wrong?
Draw a free body diagram of forces. Consider the portion of the chain between the lowest point and another point P above. The tension T at P is inclined, so it has two components. The vertical component $Q=T\sin\varphi$ is balanced by the weight of that portion of chain, and the horizontal component $T\cos \varphi$ is balanced by the always constant, non-zero horizontal force at the lowest point, $H$ say. The differential equation of the catenary is $$ \tan \varphi= \frac{Q}{H}=\frac{\mu s}{H}~$$ if $\mu$ is the chain weight per unit arc length. Hope you take it from there.
|physics|arc-length|
0
Unit Quaternions on the 3-sphere, $S^3$ as orthogonal transformations.
I am reading through Andrew Hanson's "Visualizing Quaternions" and came across this passage on page 50: $q(\theta, {\bf n}) = \left( \cos\frac{\theta}{2}, {\bf n} \sin \frac{\theta}{2} \right)$ produces the standard rotation matrix ${\bf R} \left( \theta, {\bf n} \right)$ for a rotation by $\theta$ in the plane perpendicular to ${\bf n}$, where ${\bf n} \cdot {\bf n} = 1$. $\theta$ is an angle obeying $0 \leq \theta < 4\pi$. This extension of the range of $\theta$ allows the values of $q$ to reach all points of the hypersphere $S^3$. Here, $q$ represents the quaternion and ${\bf n}$ the rotation axis. To describe how I visualize $S^3$, I will first describe analogously how to visualize $S^2$. Imagine cross sections of $S^2$ as we pass from $z = 1$ to $z = -1$. The cross sections are circles of radius $\sin\beta$, where $0 \leq \beta \leq \pi$ is the usual polar angle in the spherical polar parametrization of $S^2$. If $0 \leq \alpha Now, to visualize $S^3$, I draw a set of 2-spheres, the intersec
Let $\vec u\in S^2$ be a unit vector and $\phi\in[0,4\pi)$ . Then $$(\cos(\tfrac{4\pi-\phi}2),\sin(\tfrac{4\pi-\phi}2)\vec u)= (\cos(\tfrac{\phi}2),\sin(\tfrac{\phi}2)(-\vec u))$$ as unit quaternions . So, it is enough to take $\theta\in[0,2\pi)$ in the representation $$q=(\cos\tfrac\theta 2,\sin\tfrac\theta 2\vec u)\in S^3$$ of a unit quaternion.
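For a quick numerical sanity check of this identity, here is a small Python/NumPy sketch (the particular angle and axis below are arbitrary choices of mine, not from the book):

```python
import numpy as np

phi = 2.5                                # any angle in [0, 4*pi)
u = np.array([1.0, 2.0, 2.0]) / 3.0      # a unit vector in S^2

# (cos((4*pi - phi)/2), sin((4*pi - phi)/2) * u)
q1 = np.concatenate(([np.cos((4 * np.pi - phi) / 2)],
                     np.sin((4 * np.pi - phi) / 2) * u))
# (cos(phi/2), sin(phi/2) * (-u))
q2 = np.concatenate(([np.cos(phi / 2)],
                     np.sin(phi / 2) * (-u)))

print(np.allclose(q1, q2))  # True: the two quaternions agree componentwise
```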
|linear-algebra|geometry|rotations|quaternions|
0
Sum of angles in a $1$-by-$3$ rectangle
This problem was in a competition for a job. It seems simple BUT the challenge is you cannot use trigonometry. Let there be 3 squares with side length $\ell$ arranged in such a way that they form a rectangle with length $3\ell$ and width $\ell$. So $ABCD$, $CEHD$, and $EFGH$ are squares. (This notation is just to make the configuration easier to describe.) If $\alpha = m(\angle AED)$ and $\beta = m(\angle AFD)$, then what is $\alpha + \beta$? I tried to use angles of quadrilaterals but so far I didn't find anything useful. Have fun solving it!
Construct the " $3\times 3$ -square" with side $AG$ in the same half-plane w.r.t. the line $AG$ where the given three squares are already constructed. Let $O$ be the point $O=CD\cap EG$ . Reflection w.r.t. $EG$ is denoted by a prime. Then by a simple lattice argument a (chess) knight placed at $O$ can jump to each of $A,H,F=H',A'$ . So these points are on the same circle centered at $O$ . Then the wanted angle $\alpha+\beta$ is $\widehat{AFH}$ , and in the figure we can write many equalities... $$ \alpha+\beta= \widehat{AFH}= \widehat{AA'H}= \frac 12 \widehat{AOH}= \widehat{AOD}= \widehat{DOH}= \widehat{FDG}= \widehat{EAH}=\arctan\frac 12\ . $$ $\square$ Note: Using $\arctan x-\arctan y=\arctan\frac {x-y}{1+xy}$ (a relation parallel to the one for the tangent of a difference of angles, apply $\tan$ on both sides to check), we can trigonometrically check: $$ \small \alpha+\beta= \widehat{AFG}- \widehat{HFG}= \arctan 3-\arctan 1=\arctan\frac{3-1}{1+3\cdot 1} =\arctan\frac24 =\arctan\frac12\ . $$
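The closing identity $\arctan 3-\arctan 1=\arctan\frac12$ can also be confirmed numerically (a tiny Python check):

```python
import math

# alpha + beta = arctan(3) - arctan(1), which should equal arctan(1/2)
angle_sum = math.atan(3) - math.atan(1)
print(math.isclose(angle_sum, math.atan(0.5)))  # True
```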
|geometry|angle|quadrilateral|
0
Proof of Theorem 1.2.17 (Amplitude in families) - [PAG1] Robert Lazarsfeld
I am encountering some difficulty understanding some notation in the proof of Theorem 1.2.17 of the book " Positivity in Algebraic Geometry I " by Robert Lazarsfeld. To make things clear, a scheme is a separated algebraic scheme of finite type over $\mathbb C$ . Theorem 1.2.17. (Amplitude in families) : Let $f : X \to T$ be a proper morphism of schemes, and $L$ a line bundle on $X$ . Given $t \in T$ , write $$X_t = f^{-1}(t), ~~L_t = L \vert_{X_t}$$ Assume that $L_0$ is ample on $X_0$ for some point $0 \in T$ . Then there is an open neighborhood $U$ of $0$ in $T$ such that $L_t$ is ample on $X_t$ for every $t \in U$ . The proof is essentially divided into three parts and I am stuck on a notation intervening in the second one. First step (I have no problem with this one) : we show that for any coherent sheaf $\mathcal F$ on $X$ , there is a positive integer $m(\mathcal F,L)$ such that $R^i f_* \left( \mathcal F \otimes L^{\otimes m} \right) = 0$ in a neighborhood $U_m \subset T$ of $0$ (*)
We're assuming $T = \operatorname{Spec} A$ is affine, so $f_*(L^n \otimes \mathcal{O}_{X_0})$ is then the coherent sheaf associated to the $A$ -module $H^0(X, L^n \otimes \mathcal{O}_{X_0})$ , which as you observed is naturally isomorphic to $H^0(X_0, L_0^n)$ by the projection formula (or, if you'd prefer, using this theorem from the Stacks Project). As such it would probably be more accurate to write $f_*(L^n \otimes \mathcal{O}_{X_0}) \cong \widetilde{H^0(X_0, L_0^n)}$ , but usually forgetting the tilde is more or less harmless since the category of quasicoherent sheaves on $T$ and the category of $A$ -modules can be naturally identified with one another. Similarly, $f_*(L^n)$ is the coherent sheaf $\widetilde{H^0(X, L^n)}$ , and this map $f_*(L^n) \to f_*(L^n \otimes \mathcal{O}_{X_0})$ is induced by the map of $A$ -modules $H^0(X, L^n) \to H^0(X_0, L_0^n)$ obtained by restricting sections, which we find to be surjective using the first step. Hence $H^0(X, L^n) \otimes_\mathbb{C} \ma
|algebraic-geometry|schemes|
1
The codifferential δ of a k-form on an n-dimensional Riemannian manifold
The formal adjoint of the exterior derivative $d: \Omega^k(M) \to \Omega^{k+1}(M)$ on a smooth manifold $M$ is the codifferential $ d^* $ , a linear map $$ d^*: \Omega^k(M) \to \Omega^{k-1}(M),$$ defined by $ d^* = (-1)^{n(k+1)+1} * d * $ . Could someone kindly provide a detailed explanation of the operation denoted by $ * d * $ ?
Hodge Dual: The Hodge dual, denoted by $*$ , is an operation that maps a $k$ -form to an $(n-k)$ -form on an $n$ -dimensional manifold. It's defined using the metric of the manifold and the orientation. If $\omega$ is a $k$ -form, then its Hodge dual $*\omega$ is an $(n-k)$ -form such that for any $k$ -form $\eta$ , we have the inner product $\langle \omega, \eta \rangle = \int_M \omega \wedge *\eta$ . Exterior Derivative: The exterior derivative $d$ is a map $d: \Omega^k(M) \rightarrow \Omega^{k+1}(M)$ that takes a $k$ -form to a $(k+1)$ -form. It generalizes the concept of taking derivatives of functions to higher-dimensional analogs. Codifferential $d^*$ : The codifferential is the formal adjoint of the exterior derivative with respect to the inner product on forms. It's a map $d^*: \Omega^k(M) \rightarrow \Omega^{k-1}(M)$ that takes a $k$ -form to a $(k-1)$ -form. The codifferential is defined using the Hodge dual and the exterior derivative
|hodge-theory|
0
Range is product then so must the operator
Consider an operator $\eta$ on the (tensor) product of Hilbert spaces $\mathcal{H}_1\otimes \mathcal{H}_2$ such that its range is of the form $V_1\otimes V_2$ . Does this imply that $\eta$ must be a product operator, i.e., of the form $\eta = A\otimes B$ ? Followup Question (in response to @David Gao's counterexample). What if the range is such that $\dim V_2 =1$ ? Background . In physics, we often consider Hermitian operators (called the Hamiltonian $H$ ) defined on tensor products $\bigotimes_{i\in V}\mathcal{H}_i$ on some set of vertices $V$ , consisting of local interactions in the sense that $H=\sum_{e} h_e$ where the $h_e$ are Hermitian operators defined on $\mathcal{H}_i\otimes \mathcal{H}_j$ for some $e=ij$ (and thus induce a graph structure on the vertices $V$ ). I was curious what it means for each local interaction $h_e$ to be "entangled", i.e., not a trivial product operator.
No. For example, choose a unitary operator $\eta$ on $\mathbb{C}^2 \otimes \mathbb{C}^2$ such that $\eta(e_1 \otimes e_1) = \frac{1}{\sqrt{2}}(e_1 \otimes e_1 + e_2 \otimes e_2)$ . Since it’s a unitary, its range is $\mathbb{C}^2 \otimes \mathbb{C}^2$ . But it cannot be written as $A \otimes B$ as otherwise $\frac{1}{\sqrt{2}}(e_1 \otimes e_1 + e_2 \otimes e_2) = \eta(e_1 \otimes e_1) = Ae_1 \otimes Be_1$ is decomposable as a tensor product, which is not possible. The answer is still no even assuming $\mathrm{dim}(V_2) = 1$ . In fact, to settle this completely, the following is an example showing that the answer is no in $\mathbb{C}^2 \otimes \mathbb{C}^2$ (and therefore the answer is no whenever both $\mathcal{H}_1$ and $\mathcal{H}_2$ have dimension at least $2$ ) even when $\mathrm{dim}(V_1) = \mathrm{dim}(V_2) = 1$ . Simply let $\eta$ be defined as the orthogonal projection onto $\mathrm{span}\{\frac{1}{\sqrt{2}}(e_1 \otimes e_1 + e_2 \otimes e_2)\}$ composed with the map that send
|functional-analysis|hilbert-spaces|quantum-information|
1
Proving $\text{Li}_3\left(-\frac{1}{3}\right)-2 \text{Li}_3\left(\frac{1}{3}\right)= -\frac{\log^33}{6}+\frac{\pi^2}{6}\log 3-\frac{13\zeta(3)}{6}$?
Ramanujan gave the following identities for the Dilogarithm function : $$ \begin{align*} \operatorname{Li}_2\left(\frac{1}{3}\right)-\frac{1}{6}\operatorname{Li}_2\left(\frac{1}{9}\right) &=\frac{{\pi}^2}{18}-\frac{\log^23}{6} \\ \operatorname{Li}_2\left(-\frac{1}{3}\right)-\frac{1}{3}\operatorname{Li}_2\left(\frac{1}{9}\right) &=-\frac{{\pi}^2}{18}+\frac{1}{6}\log^23 \end{align*} $$ Now, I was wondering if there are similar identities for the trilogarithm ? I found numerically that $$\text{Li}_3\left(-\frac{1}{3}\right)-2 \text{Li}_3\left(\frac{1}{3}\right)\stackrel?= -\frac{\log^3 3}{6}+\frac{\pi^2}{6}\log 3-\frac{13\zeta(3)}{6} \tag{1}$$ I was not able to find equation $(1)$ anywhere in literature. Is it a new result? How can we prove $(1)$? I believe that it must be true since it agrees to a lot of decimal places.
Here is a solution. We are using the identity: $${\rm Li}_3 \left ( \frac{1-z}{1+z} \right ) - {\rm Li}_3 \left ( - \frac{1-z}{1+z} \right )= -2 {\rm Li}_3 \left ( \frac{z}{z-1} \right ) - 2{\rm Li}_3 \left ( \frac{z}{z+1} \right )+ \frac{1}{2}{\rm Li}_3 \left ( \frac{z^2}{z^2-1} \right )+ \frac{7}{4}{\rm Li}_3 (1)+ \\ +\frac{1}{4}\log \left ( \frac{1-z^2}{z^2} \right )\log^2 \left ( \frac{1+z}{1-z} \right )+ \frac{1}{4}\pi^2 \log \left ( \frac{1-z}{1+z} \right )$$ along with some known results of the trilog , i.e ${\rm Li}_3(1)=\zeta(3), \; {\rm Li}_3(-1)=-\frac{3\zeta(3)}{4}$ . Hence subbing $z=1/2$ we have the desired result.
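The claimed identity is also easy to verify to high precision with mpmath's polylog (a numerical check, not a proof; it assumes the mpmath package is available):

```python
from mpmath import mp, polylog, log, pi, zeta

mp.dps = 30  # work with 30 significant digits

lhs = polylog(3, mp.mpf(-1) / 3) - 2 * polylog(3, mp.mpf(1) / 3)
rhs = -log(3)**3 / 6 + pi**2 / 6 * log(3) - 13 * zeta(3) / 6

print(abs(lhs - rhs) < mp.mpf(10)**-25)  # True: agreement to ~25 digits
```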
|real-analysis|sequences-and-series|analysis|special-functions|polylogarithm|
0
How can I find the fifth root of unity?
I need to find fifth root of unity in the form $x+iy$. I'm new to this topic and would appreciate a detailed "dummies guide to..." explanation! I understand the formula, whereby for this question I would write: $1^{1/5} = r^{1/5}e^{2ki\pi/5}$. However, I don't know what to do next. Any help is appreciated.
Given the equation $z^5 = 1$ where $z \in \mathbb{C}$ , subtract $1$ from both sides. $0 = z^5 - 1 = (z - 1)(z^4 + z^3 + z^2 + z + 1)$ Now, for solving $z^4 + z^3 + z^2 + z + 1 = 0$ , one can express it in the form $\text{square} = \text{square}$ by completing the square twice as follows: $z^4 + z^3 + z^2 + z + 1 = 0$ First, add $z^2$ to both sides: $\iff z^4 + z^3 + 2z^2 + z + 1 = z^2$ Next, group and factor out $z$ from $z^3 + z$ . $\iff [z^4 + 2z^2 + 1] + z(z^2 + 1) = z^2$ Note that $z^4 + 2z^2 + 1$ is a perfect square; it's equivalent to $(z^2 + 1)^2$ . $\iff (z^2 + 1)^2 + z(z^2 + 1) = z^2$ Complete the square by adding the square of half the 'coefficient' of $z^2 + 1$ to both sides. $\iff (z^2 + 1)^2 + z(z^2 + 1) + \frac{z^2}{4} = z^2 + \frac{z^2}{4}$ $\iff (z^2 + \frac{1}{2}z + 1)^2 = \frac{5}{4}z^2$ Take the square root of both sides. $\iff z^2 + \frac{1}{2}z + 1 = \frac{\pm_1\sqrt{5}}{2}z$ Subtract $\frac{\pm_1\sqrt{5}}{2}z$ from both sides of this new equation. $\iff z^2 + \fr
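The two quadratics produced by this computation, $z^2 + \frac{1 \mp \sqrt 5}{2} z + 1 = 0$, can be solved numerically to recover the four nontrivial fifth roots of unity (a Python sketch):

```python
import cmath
import math

# z^2 + (1 -+ sqrt(5))/2 * z + 1 = 0, from completing the square twice
roots = []
for b in [(1 - math.sqrt(5)) / 2, (1 + math.sqrt(5)) / 2]:
    disc = cmath.sqrt(b * b - 4)          # discriminant is negative here
    roots += [(-b + disc) / 2, (-b - disc) / 2]

for z in roots:
    print(abs(z**5 - 1) < 1e-12)  # True for each: z really satisfies z^5 = 1
```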
|calculus|roots-of-unity|
0
How is the mean of a high dimensional random variable defined?
If $X$ is an $\mathbb{R}$ -valued r.v., with density $g(x)$ , then $\mathbb{E}(X) := \int xg(x)dx$ . But what if say $X \in \mathbb{R}^{3}?$ It can still have a density $g(x)$ such that the integral of $g$ over the whole 3-dimensional space is 1. I know for a high dimensional Gaussian, as long as we have its pdf, we know the mean and variance, but what about an arbitrary distribution? Does $\int xg(x) dx$ even make sense now?
If $X = (X_1, X_2, \ldots, X_n)^T$ then the mean of $X$ is defined to be $$ ( \mathbb{E}[X_1], \mathbb{E}[X_2], \ldots, \mathbb{E}[X_n])^T $$ where $$ \mathbb{E}[X_k] = \int_{\mathbb{R}} x_k \left(\int_{\mathbb{R}^{n - 1}} g(x_1, x_2, \ldots, x_{k - 1}, x_{k + 1}, \ldots, x_n)dx_1 dx_2 \ldots dx_{k - 1}dx_{k + 1} \ldots dx_n \right)dx_k $$ i.e, the mean of the marginal distribution of the $k$ -th coordinate, $X_k$ (as long as the integral above makes sense)
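A quick Monte Carlo illustration of the componentwise definition (a sketch; the uniform distribution on a cube is an arbitrary example of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Example: X uniform on the cube [1, 2]^3, so E[X] = (1.5, 1.5, 1.5)
samples = rng.uniform(1.0, 2.0, size=(200_000, 3))

mean = samples.mean(axis=0)   # componentwise sample means approximate E[X_k]
print(np.allclose(mean, [1.5, 1.5, 1.5], atol=0.01))  # True
```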
|probability|integration|random-variables|
0
Closed form for $\int_0^{\infty}\frac{\arctan x\ln(1+x^2)}{1+x^2}\sqrt{x}\,dx$
Please help me to find a closed form for this integral: $$\int_0^{\infty}\frac{\arctan x\ln(1+x^2)}{1+x^2}\sqrt{x}\,dx$$
OK, I worked out the second complex integral and here is my solution: Step 1 - Contour Integration Let $$\displaystyle I= \Re \int_{-\infty}^\infty \frac{\log^2 \left( 1-iz^2\right)}{1+i z^2}dz$$ and $\displaystyle f(z)=\frac{\log^2 \left( 1-iz^2\right)}{1+i z^2}$ Now integrate $f(z)$ around the following contour: There is a branch point at $\displaystyle z= e^{\frac{3i\pi}{4}}$ and a pole at $\displaystyle z=e^{\frac{i\pi}{4}}$ . The residue at $\displaystyle z=e^{\frac{i\pi}{4}}$ is $$\text{res}_{z=e^{\frac{i\pi}{4}}}f(z)=\frac{-e^{\frac{i\pi}{4}}}{2}\log^2(2)$$ Therefore \begin{align*} \int_{-\infty}^\infty \frac{\log^2 \left( 1-iz^2\right)}{1+i z^2}dz &= 2\pi i \left(\frac{-e^{\frac{i\pi}{4}}}{2}\log^2(2) \right) +\int_{e^{3i\pi /4}}^{\infty e^{3i\pi /4 }} \frac{\left( \pi i + \log|1-ix^2|\right)^2-\left( \log|1-ix^2|-i\pi \right)^2}{1+i x^2}dx \\ &= 2\pi i \left(\frac{-e^{\frac{i\pi}{4}}}{2}\log^2(2) \right) +4\pi i \int_{e^{3i\pi /4}}^{\infty e^{ 3i\pi /4 }} \frac{\log|1-ix^2|}{1
|definite-integrals|improper-integrals|closed-form|
0
The nth coordinate position along a path tracing the diagonals of an infinite matrix
I am trying to find a formula or algorithm to calculate the nth pair of numbers in this infinite sequence, starting at 0. That is, for each number on the left, a method to calculate the corresponding pair of numbers on the right:

0: (0, 0)
1: (1, 0)
2: (0, 1)
3: (2, 0)
4: (1, 1)
5: (0, 2)
6: (3, 0)
7: (2, 1)
8: (1, 2)
9: (0, 3)
10: (4, 0)
11: (3, 1)
12: (2, 2)
13: (1, 3)
14: (0, 4)
...

Each pair of numbers represents a coordinate in an infinite 2-dimensional matrix. The ordering of the coordinates traces a path along each diagonal:

14
 9 13
 5  8 12
 2  4  7 11
 0  1  3  6 10

I have gotten as far as finding a Python function to generate a finite number of terms of the sequence:

```python
def generate_seq(m):
    k = 0
    for i in range(m):
        for j in range(i):
            print(f'{k}: {(i - j - 1, j)}')
            k += 1
```

I also think that the last printed value of k can be calculated as k = (n * (n - 1) / 2) - 1.
I took the sequence of the first value in each pair 0 1 0 2 1 0 3 2 1 0 4 3 2 1 0 5 4 3 2 1 0 6 5 4 3 2 1 0 and entered it into the On-Line Encyclopedia of Integer Sequences. It identified the sequence as https://oeis.org/A025581. This led me to https://oeis.org/A073189/a073189.txt. From this I was able to define a Python function that appears to generate it:

```python
import math

def binomial(n, k):
    return math.factorial(n) / (math.factorial(k) * math.factorial(n - k))

for n in range(0, 15):
    if n == 0:
        t1 = 0
        t2 = 0
    else:
        t1 = binomial(math.floor(3/2 + math.sqrt(2 + 2 * n)), 2) - (n + 1)
        t2 = n - binomial(math.floor(1/2 + math.sqrt(2 + 2 * n)), 2)
    print(f'{n}: {(t1, t2)}')
```
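For what it's worth, here is an alternative sketch that inverts the triangular numbers directly and stays in integer arithmetic (`pair` is a name I am introducing here, not from the OEIS entry):

```python
import math

def pair(n):
    # d = index of the diagonal containing n: the largest d with d*(d+1)/2 <= n
    d = (math.isqrt(8 * n + 1) - 1) // 2
    j = n - d * (d + 1) // 2          # position along that diagonal
    return (d - j, j)

print([pair(n) for n in range(6)])
# [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
```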
|sequences-and-series|
0
Probability of each type of inscribed octahedron
Fix a $V\in\mathbb{N}$ with $V\ge 4$ . Randomly pick $V$ points on a sphere (independently and uniformly with respect to the surface area measure). You may think of the convex hull of these $V$ points. With probability 1, this constitutes a nondegenerate convex polyhedron. In fact, all faces would be triangles (probability 1)(*), so this should be a $(2V-4)$ -hedron (from Euler's $F-E+V=2$ when $3F=2E$ ). But what is the probability for each topological type? I think(**) the first "interesting" case is for $V=6$ random points on the sphere, so random octahedra, so let me ask about that. What is the probability that 6 randomly chosen points on a sphere span an octahedron which is topologically like a regular octahedron (so all 6 vertices have same vertex order). Both an exact answer and an approximate value from simulating the random process would be interesting. Notes: (*) I learn that a polyhedron all of whose faces are triangles can be called a simplicial polyhedron . (**) A bit more
The two types of triangular octahedra can be characterized by the degree sequences of their vertices. The regular octahedron has six vertices with $4$ incident edges, whereas the other one has degree sequence $(3,3,4,4,5,5)$ . A vertex $v$ having degree $d$ means that $v$ can “see” $d$ other vertices; that is, if you project from $v$ , the convex hull of the other $5$ vertices is a $d$ -gon, with the other $5-d$ vertices inside the hull and thus occluded from view. In the following, one vertex will always be considered as the north pole, and the remaining vertices will be considered in stereographic projection , with the edges between them mapped to straight lines in the plane; in particular, all triangles referred to are triangles in the plane, not spherical triangles (nor projections of spherical triangles, since the projections of the edges are not projections of great circles). Denote the probability of the convex hull of $5$ points in stereographic projection to consist of $k$ poi
|probability|polyhedra|solid-geometry|geometric-probability|
1
Derivative of convolution
Assume that $f(x),g(x)$ are positive and are in $L^1$ . Moreover, they are differentiable and their derivative is integrable. Let $h(x)=f(x)*g(x)$ , the convolution of $f$ and $g$ . Does the derivative of $h(x)$ exist? If yes, how can we prove that $$ \frac{d}{dx}(f(x)*g(x)) = \left(\frac{d}{dx}f(x)\right)*g(x)$$ Thanks
Using Leibniz integral rule: $$\frac{d}{dx}\left(f*g(x)\right) = \int_{-\infty}^{+\infty}{\frac{\partial}{\partial x}(g(t)f(x-t))dt} = \int_{-\infty}^{+\infty}g(t){\frac{\partial}{\partial x}f(x-t)dt}=g*\frac{\partial}{\partial x}f$$
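The identity can be sanity-checked numerically on rapidly decaying test functions (a sketch; the grid and the Gaussian test functions are arbitrary choices of mine):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2)                # f and g are smooth and integrable
g = np.exp(-(x - 1)**2 / 2)

h = np.convolve(f, g, mode="same") * dx        # h = f * g on the grid
dh = np.gradient(h, dx)                        # d/dx (f * g)
df_conv_g = np.convolve(np.gradient(f, dx), g, mode="same") * dx  # (f') * g

# Compare away from the boundary, where the discrete convolution is accurate
print(np.max(np.abs(dh - df_conv_g)[200:-200]) < 1e-6)  # True
```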
|functional-analysis|fourier-analysis|
0
Uniform continuity in a closed interval
I want to show that if f : I → ℝ is uniformly continuous, and the interval I is bounded, then f is bounded. But I am not sure how to prove that. I know that if the interval I is bounded, the function will take on all values between f(a) and f(b) in the closed interval I = [a,b], such that a < b.
Possibly the argument is similar to the one for the boundedness theorem, but to make use of uniform continuity, the convergent sequence should be replaced by a Cauchy sequence. That is: Assume $f$ is unbounded on $I$ ; then you can pick a sequence $\{x_n\}\subset I$ with $|f(x_n)|>n$ for each $n\in\mathbb{N}$ . The Bolzano–Weierstrass theorem gives a convergent subsequence $\{x_{n_k}\}$ , which is Cauchy in $\mathbb{R}$ . Since $f$ is uniformly continuous, $\{f(x_{n_k})\}$ is also Cauchy, so it is convergent and thus bounded. This contradicts the unboundedness of $f$ over $I$ .
|real-analysis|analysis|
0
A continuous, injective function $f: \mathbb{R} \to \mathbb{R}$ is either strictly increasing or strictly decreasing.
I would like to prove the statement in the title. Proof: We prove that if $f$ is not strictly decreasing, then it must be strictly increasing. So suppose $x<y$. And that's pretty much how far I got. Help will be appreciated.
Lemma: Let $f:\mathbb{R}\to\mathbb{R}$ be a function with the following property: for any $x,y,z\in\mathbb{R}$ such that $x<y<z$, $f(y)$ is strictly between $f(x)$ and $f(z)$. Then $f$ is strictly monotonic. Proof: $f$ must be injective. For, if $x,y\in \mathbb{R}$ and $x<y$, then there exists $z\in \mathbb{R}$ such that $x<z<y$, and either $f(x)<f(z)<f(y)$ or $f(x)>f(z)>f(y)$; i.e., $f(x)\neq f(y)$. Next, let $x\in \mathbb{R}$. There exists $y\in \mathbb{R}$ such that $x\neq y$. First suppose that $x<y$. Since $f$ is injective, either $f(x)<f(y)$ or $f(x)>f(y)$. We will show that $f$ is strictly increasing in the former case, and strictly decreasing in the latter case. Suppose that $f(x)<f(y)$. Let $z\in \mathbb{R}$. Note the following: If $x<z<y$, then necessarily $f(x)<f(z)<f(y)$. If $x<y<z$, then necessarily $f(x)<f(y)<f(z)$. If $z<x<y$, then necessarily $f(z)<f(x)<f(y)$. It follows that if $x<z$, then $f(x)<f(z)$, and if $z<x$, then $f(z)<f(x)$. This shows that $f$ is strictly increasing. Now suppose that $f(x)>f(y)$. Let $z\in \mathbb{R}$. Note the following
|calculus|real-analysis|
0
An attempt for approximating the logarithm function $\ln(x)$: Could be extended for big numbers?
An attempt for approximating the logarithm function $\ln(x)$: Could it be extended for big numbers? PS: Thanks everyone for your comments and interesting answers showing how the logarithm function is currently calculated numerically, but so far nobody has answered the question I am actually asking, which is related to formula \eqref{Eq. 1}: Is it correctly calculated? Could a formula for the logarithm of large numbers be found with it? Here by "big/large numbers" I mean it in the same sense in which Stirling's approximation formula approximates the factorial function at large values. Intro__________ On a previous question I found that the following approximation could be used: $$\ln\left(1+e^x\right)\approx \frac{x}{1-e^{-\frac{x}{\ln(2)}}},\ (x\neq 0) \quad \Rightarrow \quad \dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln\left(x^{\ln(2)}\right)} \approx \frac{x}{x-1}$$ And later I noted that I could do the following: $$\dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln(2)} \approx \frac{x\ln\lef
Answers: No. Or more precisely, the "drudgery" is not worth it; by drudgery I mean the computational effort. There are simpler procedures. Maybe yes. About the computational effort: IMO, in contrast to the function you suggest, the continued fraction Euler shows in §24..25 of Eneström No. 606 is much simpler. I converted the CF to the 6th level, which resulted in $$\log(x)\approx f(x)=\frac{137x^5+1625x^4+2000x^3-2000x^2-1625x-137}{30\left(x^5+25x^4+100x^3+100x^2+25x+1\right)}$$ Plotting $f(x)-\log(x)$ unveils disappointing differences for larger and smaller $x$ values. Note: $x=\displaystyle 10^z$. (In addition I found a typo in the original paper and a few more in the German translation.) "How is a logarithm computed?" asked Agent Smith in a comment (I guess he knows it) -- if an approximation to 10..12 or a few more digits is satisfactory, see HP Journal Vol. 29, No. 8, p. 29 ff.
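The degree-5 rational approximation above is easy to tabulate against log itself, confirming both the excellent agreement near $x=1$ and the degradation farther out (a Python sketch; the sample points are my own choices):

```python
import math

def f(x):
    # the 6th-level convergent of Euler's continued fraction for log(x)
    num = 137*x**5 + 1625*x**4 + 2000*x**3 - 2000*x**2 - 1625*x - 137
    den = 30 * (x**5 + 25*x**4 + 100*x**3 + 100*x**2 + 25*x + 1)
    return num / den

print(abs(f(2) - math.log(2)) < 1e-4)     # True: excellent near x = 1
print(abs(f(100) - math.log(100)) > 0.1)  # True: noticeably off for large x
```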
|real-analysis|combinatorics|convergence-divergence|solution-verification|pochhammer-symbol|
0
What is the least $n$ such that it is possible to embed $\operatorname{GL}_2(\mathbb{F}_5)$ into $S_n$?
Let $\operatorname{GL}_2(\mathbb{F}_5)$ be the group of invertible $2\times 2$ matrices over $\mathbb{F}_5$, and $S_n$ be the group of permutations of $n$ objects. What is the least $n\in\mathbb{N}$ such that there is an embedding (injective homomorphism) of $\operatorname{GL}_2(\mathbb{F}_5)$ into $S_n$? Such a question was asked today during an exam; it struck me as quite difficult. There is an obvious embedding with $n=24$, and since $|\operatorname{GL}_2(\mathbb{F}_5)|=480$ and in $\operatorname{GL}_2(\mathbb{F}_5)$ there are many elements with order $20$, we have $n\geq 9$. However, "filling the gap" between $9$ and $24$ looks hard, at least to me. Can someone shed light on the topic? I would bet that representation theory and Cayley graphs may help, but I am not so confident to state something non-trivial. I think that proving that $\operatorname{GL}_2(\mathbb{F}_5)$ is generated by three elements (is this true?) may help, too. I would be interested also in having a pr
First let's consider the general problem of determining for a finite group $G$ the smallest $n\in\mathbb N_0$ such that $G$ embeds into $S_n$ . This is equivalent to asking what is the smallest possible cardinality of a set on which $G$ acts faithfully. We know from Cayley's theorem that $n=|G|$ is possible, but this is in general far from optimal. So let $G$ act faithfully on a finite set $X$ and let $x_1,\ldots,x_k\in X$ be representatives of the orbits of this action. Then by the orbit-stabilizer theorem it's $$|X|=\sum_{i=1}^k|G\cdot x_i|=\sum_{i=1}^k[G:G_{x_i}].$$ Every $x\in X$ is of the form $x=gx_i$ for some $g\in G$ and $i\in\{1,\ldots,k\}$ , and stabilizers of elements in the same orbit are conjugate since $G_{gx}=gG_xg^{-1}$ . Then the condition that the action is faithful means that $$\bigcap_{x\in X}G_x=\bigcap_{i=1}^k\bigcap_{g\in G}gG_{x_i}g^{-1}=1.$$ OTOH, let $k\in\mathbb N_0$ and $H_1,\ldots,H_k\le G$ such that $$\bigcap_{i=1}^k\bigcap_{g\in G}gH_ig^{-1}=1.$$ Then the ac
|group-theory|representation-theory|symmetric-groups|
0
Show that the relation $R$ defined on $\mathscr{P} \left( A \right)$ by $a R b \iff \forall x (x \in a \Leftrightarrow x \in b)$ is an equivalence relation
Let $\mathscr{P} \left( A \right)$ denote the set of all subsets of a set $A$ . Show that the relation $R$ defined on $\mathscr{P} \left( A \right)$ by $a R b \iff \forall x (x \in a \Leftrightarrow x \in b)$ is an equivalence relation on $\mathscr{P} \left( A \right)$ . My idea is to let $a \in \mathscr{P} (A), b \in \mathscr{P} (A)$ . Then $a R a$ ; $a R b \Leftrightarrow b R a$ ; $a R b, b R c \Rightarrow a R c$ . But how do I organize this in mathematical language? Can you give me some advice? Thank you.
(1) Reflexivity. Let subset $B \subseteq A$ be arbitrary; we want to show $B R B$ . This is trivial: letting $x \in A$ be arbitrary, if $x \in B$ , then obviously $x \in B$ . (2) Symmetry. Let subsets $B, C \subseteq A$ be such that $B R C$ ; we want to show $C R B$ . Again, this is trivial: if $x \in C$ , then by hypothesis, $x \in B$ . Conversely, if $x \in B$ , then by hypothesis, $x \in C$ . Since for all $x$ , we have that $(x \in C) \Leftrightarrow (x \in B)$ , then $C R B$ . (3) Transitivity. Let subsets $B, C, D \subseteq A$ be such that $B R C$ and $C R D$ ; we want to show that $B R D$ . Let $x \in B$ . Then because $B R C$ , we have $x \in C$ ; and because $C R D$ , we have that $x \in D$ . Conversely, let $x \in D$ . Because $C R D$ , we have $x \in C$ ; and because $B R C$ , we have $x \in B$ . So for all $x$ , we have that $(x \in B) \Leftrightarrow (x \in D)$ , showing that $B R D$ . This kind of equivalence is just the familiar notion of "set equivalence": roughly speak
|equivalence-relations|
0
Proving $A\to (¬A \to B)$ with Łukasiewicz's axioms and modus ponens?
I am trying to answer the following exercise from Hao's Fundamentals of Logic and Computation: With Practical Automated Reasoning and Verification . Using only modus ponens and the following axioms: I'm stuck for some days on it, I've even made a small program in Mathematica to help me with the substitutions but got nothing. Can you help me?
Name the first two axioms as follows: $$K: a → b → a,\quad S: (a → b → c) → (a → b) → a → c,$$ where we adopt the prevailing convention of associating to the right, i.e. $a → b → c = a → (b → c)$ . Name the third axiom $$Z: (¬a → ¬b) → b → a,$$ and for modus ponens, applied to $f: a → b$ and $g: a$ , write $fg: b$ . Adopt the prevailing convention of associating products to the left, i.e. $fgh = (fg)h$ . As is well-known, and as you can verify, there is an equivalence of proofs given by the following: $$K f g = f,\quad S f g h = f h (g h).$$ Important lemmas can be stated as follows: $$\begin{align} I = S K K:& a → a,\\ B = S (K S) K:& (b → c) → (a → b) → a → c,\\ C = S (B B S) (K K):& (a → b → c) → b → a → c. \end{align}$$ The corresponding formulas, derivable from those above, are $$I f = f,\quad B f g h = f (g h),\quad C f g h = f h g.$$ The formulas are identical to those seen in Combinatory Logic and may be interpreted as a system of type templates (and proof conversion rules),
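The reduction rules $Kfg=f$ and $Sfgh=fh(gh)$ can be played with directly, e.g. as curried Python lambdas (a sketch that checks the combinator equations only, not the typing):

```python
K = lambda a: lambda b: a                     # K a b = a
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)

I = S(K)(K)            # I x = x
B = S(K(S))(K)         # B f g x = f (g x)
C = S(B(B)(S))(K(K))   # C f g x = f x g

print(I(42))                                   # 42
print(B(lambda t: t + 1)(lambda t: t * 2)(5))  # 11, i.e. (5 * 2) + 1
pair = lambda x: lambda y: (x, y)
print(C(pair)("g")("h"))                       # ('h', 'g'): arguments flipped
```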
|logic|propositional-calculus|formal-proofs|
0
Evaluating $\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)$ with different methods gives different answers
I was solving $$\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)$$ My first solution was: $$\begin{align} \lim_{x\to\infty} \left(x-x\ln(\frac{1+x}{x})^x\right) &= \lim_{x\to\infty} \left(x\left(1-\ln\left(\frac{1+x}{x}\right)^x\right)\right) \tag1\\[6pt] &=\lim_{x\to\infty} \left(x\left(1-\ln(e)\right)\right) \\[6pt] &=0 \tag2 \end{align}$$ Then I saw the right answer, which is $\frac{1}{2}$ . $$\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)=\lim_{x\to\infty} \frac{\frac{1}{x}-\ln(\frac{1+x}{x})}{\frac{1}{x^2}} \tag3$$ Applying L'Hospital's rule: $$\lim_{x\to\infty} \frac{-\frac{1}{x^2}-\frac{1}{\frac{1+x}{x}}(-\frac{1}{x^2})}{-\frac{2}{x^3}}=\lim_{x\to\infty} \frac{x}{2(1+x)}=\frac{1}{2}\tag4$$ I can not understand what's wrong with my first solution, thanks in advance.
Your wrong result shows that the following rule is wrong : If $\lim_{x\to a}f(x)=\ell$ then $\lim_{x\to a}g(x)f(x)=\lim_{x\to a}g(x)\ell$ . A simpler counterexample is: $\lim_{x\to 0}x=0$ but $$\lim_{x\to 0}\frac1x\cdot x\ne\lim_{x\to 0}\frac1x\cdot 0.$$ Side remark: a less contrived solution than letting L'Hospital appear is using a short Taylor expansion: $$\ln\left(1+\frac1x\right)=\frac1x-\frac1{2x^2}+o\left(\frac1{x^2}\right).$$
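One can also confirm numerically that the limit is $\frac12$ and that the convergence is slow, of order $\frac1x$ (a sketch using mpmath, since the subtraction cancels roughly $2\log_{10}x$ digits):

```python
from mpmath import mp, mpf, log

mp.dps = 40  # high precision: x - x^2*log(...) loses ~2*log10(x) digits

for x in [mpf(10)**3, mpf(10)**6, mpf(10)**9]:
    val = x - x**2 * log((1 + x) / x)
    print(abs(val - mpf(1) / 2) < 1 / x)  # True: val = 1/2 - 1/(3x) + O(1/x^2)
```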
|limits|
1
Evaluating $\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)$ with different methods gives different answers
I was solving $$\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)$$ My first solution was: $$\begin{align} \lim_{x\to\infty} \left(x-x\ln(\frac{1+x}{x})^x\right) &= \lim_{x\to\infty} \left(x\left(1-\ln\left(\frac{1+x}{x}\right)^x\right)\right) \tag1\\[6pt] &=\lim_{x\to\infty} \left(x\left(1-\ln(e)\right)\right) \\[6pt] &=0 \tag2 \end{align}$$ Then I saw the right answer, which is $\frac{1}{2}$ . $$\lim_{x\to\infty} \left(x-x^2\ln\frac{1+x}{x}\right)=\lim_{x\to\infty} \frac{\frac{1}{x}-\ln(\frac{1+x}{x})}{\frac{1}{x^2}} \tag3$$ Applying L'Hospital's rule: $$\lim_{x\to\infty} \frac{-\frac{1}{x^2}-\frac{1}{\frac{1+x}{x}}(-\frac{1}{x^2})}{-\frac{2}{x^3}}=\lim_{x\to\infty} \frac{x}{2(1+x)}=\frac{1}{2}\tag4$$ I can not understand what's wrong with my first solution, thanks in advance.
To avoid falling into pits of overly complex reasoning, substitute $x\to \frac{1}{x}$ and look at the series at $x=0$ : $$ x - x^2 \log \left(\frac{x+1}{x}\right)\Big|_{ x \to 1/x } \longrightarrow \frac{x-\log (x+1)}{x^2}= \frac{x-x +\frac{x^2}{2} - \frac{x^3}{3} \pm \dots}{x^2} $$ It is evident that the quadratic term survives for $x\to 0$ , giving the limit $\frac12$ .
|limits|
0
Unification with constants
I am learning unification (Martelli and Montanari 1982 paper). I have a question about how the algorithm (Algorithm 1) deals with constants. Take these two equations: $$ f(a) = c \\ f(X) = c $$ where $a$ and $c$ are constants (zero-arity functions), $f$ a unary function and $X$ a variable. Note that there are no variables in the first equation. The paper states: "If the two root function symbols are different, stop with failure; otherwise, apply term reduction." $f$ is different from $c$ , so I cannot apply term reduction; does that mean I have to stop with failure? My intuition would lead to $f(a)=f(X)$ , $a=X$ and finally $X=a$ .
Rereading the Wiki page gives me the answer: "So for example 1+2 = 3 is not satisfiable because they are syntactically different." My example would fail on the first equation.
|logic|unification|
1
How to evaluate $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ without finding an analytic continuation, still using complex analysis.
There is a similar post regarding the integral $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ on Stack. The reason I have some trouble with this post is because it gives an analytic continuation of the function $\sin^3(x)/x^3=\left(\frac{e^{iz}-e^{-iz}}{2iz}\right)^3$ , namely $h(z)=-\frac{e^{3iz}-3e^{iz}}{4z^3}-\frac{1}{2z^3}$ . Firstly, I'd like to know how such an analytic continuation is obtained. Regarding the sinc function, I know that sine has a zero of order 1, and the denominator also does, so at $z=0$ , there is a removable singularity and thus an analytic continuation. I also know that $\lim_{z\to 0}\text{sinc}(z)=1$ (meaning 1 is an analytic continuation to $\mathbb{C}$ ?) Regarding the actual integral itself, I have considered the usual indented semicircle, of radius $R$ , with an $\epsilon$ semicircle about the essential singularity 0. Without using that analytic continuation $h(z)$ , this gives, letting $f(z)=\frac{e^{3iz}-3e^{iz}+3e^{-iz}-e^{-3iz}}{z^3}$ $$0=\frac{i}{8}\left[
Here is a method which doesn't use analytic continuation (or contours). Begin with the known integral $$ \int_0^{\infty} \frac{\sin x}{x} \, dx = \frac{\pi}{2} $$ Integrating by parts once and then twice gives \begin{align*} \frac{\pi}{2} = \left. \frac{1-\cos x}{x} \right|_0^{\infty} - \int_0^{\infty} (1-\cos x) \frac{-1}{x^2} \, dx &= \int_0^{\infty} \frac{1-\cos x}{x^2} \, dx \\ &= \left. \frac{x-\sin x}{x^2} \right|_0^{\infty} - \int_0^{\infty} (x-\sin x) \frac{-2}{x^3} \, dx \\ &= 2\int_0^{\infty} \frac{x-\sin x}{x^3} \, dx \end{align*} so $$ \int_0^{\infty} \frac{x-\sin x}{x^3} \, dx = \frac{\pi}{4} $$ Substituting $3x$ for $x$ gives $$ \int_0^{\infty} \frac{3x-\sin(3x)}{x^3} \, dx = \frac{9\pi}{4} $$ Now use the identity $\sin(3x) = 3\sin x-4\sin^3x$ to write $$ \frac{9\pi}{4} = \int_0^{\infty} \frac{3x-3\sin x +4\sin^3x}{x^3} \, dx = 3\int_0^{\infty} \frac{x-\sin x}{x^3} \, dx +4\int_0^{\infty} \frac{\sin^3x}{x^3} \, dx $$ and rearrange. This method can be used to compute all o
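Rearranging the last display gives $\int_0^\infty \sin^3x/x^3\,dx = \frac{9\pi/4-3\cdot\pi/4}{4}=\frac{3\pi}{8}$, hence $\frac{3\pi}{4}$ over the whole line. Because the integrand decays like $1/x^3$ and is absolutely integrable, a plain composite Simpson rule over a long finite interval already confirms the value; a rough stdlib-only sketch (the cutoff 200 and the step count are arbitrary choices):

```python
import math

def integrand(x):
    # sin^3(x)/x^3, patching the removable singularity at 0 with its limit 1
    return (math.sin(x) / x) ** 3 if x != 0.0 else 1.0

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# the tail beyond 200 is bounded by the integral of 1/x^3, i.e. 1/(2*200^2) < 1.3e-5
half_line = simpson(integrand, 0.0, 200.0, 200_000)
```

The result should land close to $3\pi/8\approx 1.1781$, and doubling it recovers the full-line value $3\pi/4$.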
|real-analysis|integration|complex-analysis|contour-integration|analytic-continuation|
0
The set $Ham(M,\omega)$ of Hamiltonian diffeomorphisms is a subgroup of the set of symplectic diffeomorphisms $Symp(M,\omega)$
The statement "the set $Ham(M,\omega)$ of Hamiltonian diffeomorphisms is a subgroup of the set of symplectic diffeomorphisms $Symp(M,\omega)$ " is proven in the following lecture notes https://jumpshare.com/s/tkHDIaSse4wwcxlG3mW1 (proof of Proposition 3.42, p. 110 (= p. 120 of the pdf)). But I have no idea what is going on in some computations: what is going on in the two highlighted statements? In the first one, how do we get a sum? The derivative of a composition should be, by the chain rule, $\frac{d}{dt}(\varphi_F^t\circ\varphi_G^t)=\frac{d}{dt}(\varphi_F^t)|_{\varphi_G^t}\frac{d}{dt}(\varphi_G^t)$ . I don't see how they are getting a $+$ there and why they are getting those terms. In the second one, how does 3.33 imply that the flow of $G^t$ is $(\varphi_F^t)^{-1}$ ? And how is this showing that $\varphi^{-1}\in Ham(M,\omega)$ ? edit:
@TedShifrin pointed out the way to go. Let me write it out. Consider formally the map $(r, s)\to \phi_F^r\circ \phi_G^s$ . Then by the definition of flow and the chain rule, \begin{align*} \frac{\partial}{\partial r} (\phi_F^r\circ \phi_G^s) &= X_{F^r}\circ \phi_F^r\circ \phi_G^s,\\ \frac{\partial}{\partial s} (\phi_F^r\circ \phi_G^s) &= D\phi_F^r X_{G^s}\circ \phi_G^s \quad \text{chain rule here for }\phi_F^r\circ. \end{align*} Now consider the map $t\to (r, s)=(t, t)$ , then another chain rule for this change of variables gives \begin{align*} \frac{d}{dt} (\phi_F^t\circ \phi_G^t) &= \frac{dr}{dt} \frac{\partial}{\partial r}\Big|_{r=t,s=t} (\phi_F^r\circ \phi_G^s) + \frac{ds}{dt}\frac{\partial}{\partial s}\Big|_{r=t,s=t} (\phi_F^r\circ \phi_G^s)\\ &= X_{F^t}\circ \phi_F^t\circ \phi_G^t + D\phi_F^t X_{G^t}\circ \phi_G^t. \end{align*} Now the second question mainly follows from the two lines after (3.3). So for the particular choice of $$ G^t = -F^t\circ \phi_F^t, $$ the function $$ F^t
|differential-geometry|manifolds|differential-forms|symplectic-geometry|
1
Suppose $f:[0,1] \rightarrow \mathbb{R}$ is twice differentiable and $f = f''$, $f(0) = f(1) = 0$, then $f$ is constant.
Suppose $f:[0,1] \rightarrow \mathbb{R}$ is twice differentiable and $f = f''$ , $f(0) = f(1) = 0$ . Show that $f$ is constant. The main approach we are taught to use for these kinds of problems is using Rolle's theorem/MVT. I am not sure how this idea can be applied here, and moreover I struggle to relate it to the condition $f = f''$ . I hope to get an idea about it, thanks
Perhaps using differential equations is quite straightforward (but quite boring). Or we use an analytical method. First, if $(c,f(c))$ is a critical point, then by the second derivative test (using $f''=f$ ), if $f(c)>0$ , then $f$ has a local minimum at $c$ , and vice versa. We first claim that there is a zero $s\in(0,1)$ . Then we can keep applying the same argument on $(0,s)$ , $(s,1)$ , and so on. This means $f\equiv0$ . By Rolle's theorem on $[0,1]$ , there exists $c\in(0,1)$ where $f'(c)=0$ . If $f(c)=0$ , then we are done! If $f(c)\ne0$ , then WLOG $f(c)>0$ ; this means $f$ has a local minimum at $x=c$ . But $f(0)=0<f(c)$ , so there must be a local maximum attained by $f$ at some $d$ in $(0,c)$ , and then $f''(d)\le0$ . Hence, $f(d)\le0$ . If $f(d)=0$ , then we are done! If $f(d)<0$ , then by the intermediate value theorem on $(d,c)$ , we have a zero in it, say $e$ . Then we are done!
|real-analysis|derivatives|alternative-proof|
0
probability expected winnings per lottery question
I am trying to solve this problem but I'm a little stuck: You and your friends decide to make a lottery with $n$ tickets, where each ticket has a number between $1$ and $n$ , and each ticket is unique. Each ticket is $\$5$ , and the lottery works in the following way. Once all $n$ tickets have been purchased, a number $x$ is selected uniformly at random between $1$ and $n$ and all the money is split equally between people with tickets less than $x$ . That way, if $x=1$ is selected, you and your friends get to keep the prize pool. What is your expected winnings per lottery as the organizer? So what I did was make 2 cases. The first case is when $x = 1$ and the organizer keeps all the money, and the second case is where $x > 1$ and the organizer gets no money: Case 1: $x = 1$ $P(X = 1) = \frac{1}{n}$ organizer gets $5n$ money $\frac{1}{n}(5n) = 5$ Case 2: $x > 1$ $P(X > 1) = 1 - \frac{1}{n}$ organizer gets no money $\left(1- \frac{1}{n}\right)(0) = 0$ From there I add the 2 equations and
The answer and derivation are correct. As revealed in the comments, the source of the question continues by asking about the expected winnings for a player, to which this question is a stepping stone.
|probability|probability-distributions|solution-verification|conditional-probability|lotteries|
0
Find the radius of the circle tangent to $x^2, \sqrt{x}, x=2$
I'm taking up a post that was closed for lack of context, because I'm very interested in it. Let $(a,b)$ be the center of this circle. It seems intuitive that $b=a$ , but I have not been able to prove it formally, although I know that two mutually inverse functions are symmetric with respect to the first bisector $y=x$ . Then let $(X,X^2)$ be the point of tangency with $y=x^2$ . I think we're going to use the formula for the distance from $(a,a)$ to the line $y-X^2=2X(x-X)$ . We have obviously the relation $r=2-a$ . The normal at $(X,X^2)$ passes through $(a,a)$ . I'm not sure if my notations are the best to elegantly solve the exercise. I hope you will share my enthusiasm for this lovely exercise that I have just discovered thanks to MSE.
From symmetry, it can be assumed at first that the center of the tangent circle is on the line $y = x$ . So, let the center of the circle be at $(a,a)$ , and let the tangency point with the parabola $y = x^2$ be $ (b , b^2)$ ; then the radius of the circle is $ r = | 2 - a | $ . The vector $(a - b , a - b^2)$ is along the normal to the parabola, which is given by its gradient vector, namely $(2 x, -1) $ . Therefore, $ (a - b , a - b^2) = K (2 b , -1) $ so that $ (-1) ( a - b) - 2 b (a - b^2) = 0 $ . And finally, the radius is equal to the distance between the two points $(a, a)$ and $(b , b^2) $ , hence $ (a - 2)^2 = (a - b)^2 + (b^2 - a)^2$ . The solutions of these equations are $ (a, b) = (1.655444, 1.332841) $ , $(3.783181, 2.050749) $ , and $(-5.21545, -0.44123) $ . The corresponding radii are $0.344556$ , $ 1.783181$ , and $7.21545$ , respectively. The two circles corresponding to the first two of these solutions are shown below. Now, if we want all the possible solutions of circles
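As a sketch of how those numbers can be reproduced, the two conditions above (the collinearity equation and the radius equation) can be fed to a hand-rolled Newton iteration; everything below (the starting point, the step count, the difference step) is an arbitrary choice for illustration, not part of the original post:

```python
def F(a, b):
    # collinearity: (a-b, a-b^2) parallel to the normal direction (2b, -1)
    f1 = -(a - b) - 2 * b * (a - b * b)
    # radius: distance((a,a),(b,b^2)) equals |2 - a|
    f2 = (a - 2) ** 2 - (a - b) ** 2 - (b * b - a) ** 2
    return f1, f2

def newton2(a, b, steps=50, h=1e-7):
    # 2x2 Newton iteration with a forward-difference Jacobian
    for _ in range(steps):
        f1, f2 = F(a, b)
        j11 = (F(a + h, b)[0] - f1) / h
        j12 = (F(a, b + h)[0] - f1) / h
        j21 = (F(a + h, b)[1] - f2) / h
        j22 = (F(a, b + h)[1] - f2) / h
        det = j11 * j22 - j12 * j21
        a -= (f1 * j22 - f2 * j12) / det   # Cramer's rule for the Newton step
        b -= (f2 * j11 - f1 * j21) / det
    return a, b

a, b = newton2(1.6, 1.3)   # start near the first quoted solution
radius = 2 - a
```

Starting the iteration near the other quoted solutions recovers them as well.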
|geometry|functions|analytic-geometry|
1
Why is $\sup f_- (n) \inf f_+ (m) = \frac{5}{4} $?
Let $f_- (n) = \Pi_{i=0}^n ( \sin(i) - \frac{5}{4}) $ and let $ f_+(m) = \Pi_{i=0}^m ( \sin(i) + \frac{5}{4} ) $ . It appears that $$\sup f_- (n) \inf f_+ (m) = \frac{5}{4} $$ Why is that so? Notice $$\int_0^{2 \pi} \ln(\sin(x) + \frac{5}{4}) dx = Re \int_0^{2 \pi} \ln (\sin(x) - \frac{5}{4}) dx = \int_0^{2 \pi} \ln (\cos(x) + \frac{5}{4}) dx = Re \int_0^{2 \pi} \ln(\cos(x) - \frac{5}{4}) dx = 0 $$ $$ \int_0^{2 \pi} \ln (\sin(x) - \frac{5}{4}) dx = \int_0^{2 \pi} \ln (\cos(x) - \frac{5}{4}) dx = 2 \pi^2 i $$ That explains the finite values of $\sup $ and $ \inf $ , well, almost. It can be proven that both are finite. But that does not explain the value of their product. Update: This is probably not helpful at all, but it can be shown (not easy) that there exists a unique pair of functions $g_-(x) , g_+(x) $ , both entire and with period $2 \pi $ , such that $$ g_-(n) = f_-(n) , g_+(m) = f_+(m) $$ However, I have no closed form for any of those ... As for the numerical test I got about $ln
Too long for comments. Using q-Pochhammer symbols makes some unexpected factors appear in the problem. $$f_n(a) = \prod_{i=0}^n ( \sin(i) +a)$$ $$f_n(a) = i\,\frac {(-1)^n}{2^{n+1} }\,e^{-\frac{i}{2} n (n+\pi +1)}\left(i \left(a-\sqrt{a^2-1}\right);e^i\right){}_{n+1} \left(i \left(a+\sqrt{a^2-1}\right);e^i\right){}_{n+1}$$ Similarly $$g_n(a) = \prod_{i=0}^n ( \sin^2(i) +a)$$ $$g_n(a) = \frac {(-1)^{n+1}}{4^{n+1} }\,e^{-i n (n+1)} \left(2 a-2 \sqrt{a (a+1)}+1;e^{2 i}\right){}_{n+1} \left(2 a+2 \sqrt{a (a+1)}+1;e^{2 i}\right){}_{n+1}$$ Using exact arithmetic, it is "simple" to confirm the observed results (at least for $m=n$ ). My own questions: for the first problem, what happens if $a$ is rational and $\sqrt{a^2-1}$ is too? (This is the case for $a=5/4$ .) For the second problem, what happens if $a$ is rational and $\sqrt{a(a+1)}$ is too? (This is the case for $a=9/16$ .) Edit: if only out of curiosity, have a look at @ubpdqn's answer and results here
|calculus|geometry|fractions|limsup-and-liminf|products|
0
Why do k arithmetic right/left bit shifts divide by $2^k$/multiply by $2^k$ in two's complement, rigorously?
I want to understand the semantics of right bit shifts x>>k in two's complement properly, in particular why right bit shifts of size $k$ approximately divide $x$ by $2^k$ (and ideally making this "approx" precise and also handling the arithmetic left bit shift too). So the representation of a number in two's complement is, for an $N$ bit length word: $$ x = -a_{N-1} 2^{N-1} + \sum^{N-2}_{i=0} a_{i} 2^i = neg + pos $$ e.g., as an intentionally selected running example, $-6$ is $1010_2$ , which is $-8 + 0 + 2 + 0$ . I'm thinking of bit slots [-8, 4, 2, 1] [-2^3, 2^2, 2^1, 2^0] for each bit. When doing an arithmetic right bit shift we get $1101_2$ , which ends up being $-3 = -8 + 4 + 0 + 1$ in two's complement. When the most significant bit (MSB) is $0$ the mathematics seems trivial. If you move the bits to the right it's a division by two because every number is divided by two (maybe floor, since we lose the final bit sometimes if it's 1). With negative numbers using two's complement it seems more
In two's complement, let the bit representation be $T=a_{N-1}\dots a_0$ , with $a_{N-1}$ the most significant bit (MSB) with a value of $-2^{N-1}$ , and $a_0$ as the least significant bit (LSB) with value $1$ . The numerical value of $|T|$ is determined by the positive integer $S$ represented by $S=a_{N-2}\dots a_0$ , where $T=-2^{N-1}+S$ . As $S+NOT(S)+1=2^{N-1}$ , then $|T|=NOT(S)+1$ , where $NOT(S)$ is the one's complement. Similarly, if $S$ is a positive integer, $-S = -2^{N-1} + NOT(S) + 1$ . So in order to halve $T$ (knowing that $T<0$ ) we can halve $NOT(S)+1$ by performing a logical right shift ( $>>1$ ). Note that the floor function has slightly unusual behaviour when changing from positive to negative odd numbers, $\lfloor \frac{5}{2}\rfloor = 2$ , whereas $\lfloor \frac{-5}{2}\rfloor = -3$ for example, so converting back to two's complement needs to take this into consideration. If $L=(NOT(S)+1)$ is odd, then $\frac{|T|}{2}=-2^{N-1} + NOT(L>>1)$ , otherwise $\frac{|T|}{2}=-2^{N-1} + NOT(L>>1
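Python is a convenient place to check the floor-division correspondence concretely, since its `>>` on unbounded integers is an arithmetic shift and `//` is floor division (in C, by contrast, right-shifting a negative value is historically implementation-defined, so this identity is a property of the language semantics, not a universal hardware guarantee):

```python
# arithmetic right shift by k equals floor division by 2^k
assert -6 >> 1 == -3 == -6 // 2      # the running example: 1010 -> 1101 in 4 bits

for x in range(-64, 64):
    for k in range(6):
        assert x >> k == x // (2 ** k)   # floor semantics, negatives included

# left shift is exact multiplication (Python integers do not overflow)
assert -6 << 2 == -24 == -6 * 2 ** 2
```

Note the floor behaviour matches the answer's example: `-5 >> 1` gives `-3`, i.e. $\lfloor -5/2\rfloor$, not truncation toward zero.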
|computer-science|binary|binary-operations|
0
probability expected winnings per lottery question
I am trying to solve this problem but I'm a little stuck: You and your friends decide to make a lottery with $n$ tickets, where each ticket has a number between $1$ and $n$ , and each ticket is unique. Each ticket is $\$5$ , and the lottery works in the following way. Once all $n$ tickets have been purchased, a number $x$ is selected uniformly at random between $1$ and $n$ and all the money is split equally between people with tickets less than $x$ . That way, if $x=1$ is selected, you and your friends get to keep the prize pool. What is your expected winnings per lottery as the organizer? So what I did was make 2 cases. The first case is when $x = 1$ and the organizer keeps all the money, and the second case is where $x > 1$ and the organizer gets no money: Case 1: $x = 1$ $P(X = 1) = \frac{1}{n}$ organizer gets $5n$ money $\frac{1}{n}(5n) = 5$ Case 2: $x > 1$ $P(X > 1) = 1 - \frac{1}{n}$ organizer gets no money $\left(1- \frac{1}{n}\right)(0) = 0$ From there I add the 2 equations and
The pot is $5n$ . You have correctly found the expected winnings for the organiser: $\mathbb{E}[X] = 5$ . If the organiser doesn't get the pot, the "players" must, and by symmetry each will earn the same on average. Thus the expected earning for a player is $\Large\frac{5n-5}{n}$
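A small Monte Carlo sketch supports both expectations; the parameters below ($n=10$, the trial count, the seed) are arbitrary choices for illustration:

```python
import random

random.seed(1)

n, price, trials = 10, 5, 100_000   # illustration parameters, not from the problem
pot = price * n
organizer_total = 0

for _ in range(trials):
    x = random.randint(1, n)        # winning threshold, uniform on 1..n
    if x == 1:
        organizer_total += pot      # no ticket is < 1, so the organizer keeps the pot
    # otherwise the whole pot is split among tickets 1..x-1

organizer_avg = organizer_total / trials   # theory: 5
player_avg = (pot - organizer_avg) / n     # theory: (5n - 5)/n, i.e. 4.5 for n = 10
```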
|probability|probability-distributions|solution-verification|conditional-probability|lotteries|
0
How can one avoid small errors and mistakes in easy calculations?
I have had a problem for a long time now: I always make small, stupid errors and mistakes in calculations when the question is easy. This issue caused me a lot of problems in exams; I was known for having bad grades when the exam was easy and good grades when the exam was challenging (I make a lot of mistakes when the question is easy, for some reason). Some examples: When I first learned the Beta function $\displaystyle B(a,b):=\int_0^1 t^{a-1}(1-t)^{b-1}dt \ \ \ $ s.t. $a,b >0$ I saw this identity $\displaystyle B(a,b)= \int_0^\infty \frac{t^{a-1}}{(1+t)^{a+b}}dt$ and I tried to prove it: Let $t= \frac{1}{u}-1 \ , dt =\frac{-du}{u^2}$ . and $u=\frac{1}{1+t}$ $$ \int_0^\infty \frac{t^{a-1}}{(1+t)^{a+b}}dt =-\int_1^0 \frac{\left(\frac{1}{u}-1 \right)^{a-1}}{\left(\frac{1}{u} \right)^{a+b-2} }du =\int_0^1 {\color{red}{(u-1)}^{a-1} u^{b-1} } du $$ here I spent more than an hour trying to find the mistake. In this integral $\lim\limits_{n \to \infty} \int_0^ \infty \frac{nx\arctan{x}
It probably doesn't help to think in terms of hard problems versus easy problems. That probably means problems that require some conceptual thought versus relatively straightforward calculations. But being good at calculation is a skill in itself, and as your experience shows, it can't be taken for granted. Indeed, a distinguishing feature of some great mathematicians is how good they are at calculation. Once you frame things that way, namely that there is a skill that needs to be improved and can be learned, that's half the battle. It might have a lot to do just with making your writing better organized. Or when you think you must have made a mistake when evaluating an integral, for example, can you run a plausibility check on each step? For example, for the question about $\arctan$ , a quick check of what the graph of $\arctan$ looks like would have told you that your memory of its integral was wrong. Then you would have surely thought about it a bit more, and realized you were confusing
|soft-question|
1
Continuous function taking given values and bounded by a certain constant
Given $a,b\in \mathbb{R}$ with $a<b$ , and $t_1,...,t_n\in [a,b]$ (where the $t_i$ 's are distinct) and $a_1,...,a_n\in \mathbb{C}$ , how to define a complex-valued continuous function $f$ on $[a,b]$ such that $f(t_i)=a_i$ for each $1\leq i\leq n$ and $\max_{t\in [a,b]}|f(t)|\leq \max_{1\leq i\leq n}|a_i|$ ? It is easy to define such a real-valued continuous function (with those conditions), so I tried to use real-valued ones as the real and imaginary parts of the desired function, but I ran into some scaling problems while trying to satisfy the given conditions. I think it seems a subtle task to define a complex-valued one. Could anybody give a clue/hint? Thanks in advance.
On each interval $[t_i, t_{i+1}]$ , you can define the function as $$f(x) = a_{i+1}\frac{x-t_i}{t_{i+1}-t_i} + a_i\frac{t_{i+1} - x}{t_{i+1}-t_i}$$ i.e. the linear interpolation between the two complex points.
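Here is a quick sketch of that construction in code. The bound $\max_t|f(t)|\le\max_i|a_i|$ holds because each value of $f$ is a convex combination $(1-\lambda)a_i+\lambda a_{i+1}$, and $|(1-\lambda)a+\lambda b|\le\max(|a|,|b|)$ by the triangle inequality. The helper name and the sample data are made up for illustration:

```python
import bisect

def make_interpolant(ts, vals):
    """Piecewise-linear f on [min(ts), max(ts)] -> C with f(t_i) = a_i."""
    pts = sorted(zip(ts, vals))           # order the nodes by t
    ts_sorted = [t for t, _ in pts]
    vs_sorted = [v for _, v in pts]

    def f(x):
        i = bisect.bisect_right(ts_sorted, x) - 1
        i = max(0, min(i, len(ts_sorted) - 2))   # clamp to a valid subinterval
        t0, t1 = ts_sorted[i], ts_sorted[i + 1]
        lam = (x - t0) / (t1 - t0)               # in [0, 1] inside the domain
        return (1 - lam) * vs_sorted[i] + lam * vs_sorted[i + 1]

    return f

ts = [0.0, 0.3, 0.7, 1.0]                  # hypothetical nodes t_i
vals = [1 + 1j, -2j, 0.5 - 0.5j, 1.0]      # hypothetical values a_i
f = make_interpolant(ts, vals)

max_node = max(abs(v) for v in vals)
max_sample = max(abs(f(k / 1000)) for k in range(1001))
```

Sampling confirms the interpolation conditions and that the sampled maximum modulus never exceeds $\max_i|a_i|$.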
|complex-analysis|analysis|
1
Trouble in Understanding an Inequality in Rudin RCA 11.16
I am dealing with the following inequality: $$|u_r(e^{i\theta})|^p\le\frac{1}{2\pi}\int_{-\pi}^\pi|f(t)|^pP_r(\theta-t)dt,$$ where $1\le p\le\infty$ , $0\le r<1$ , $P_r$ is the Poisson kernel, $$u_r(e^{i\theta})=\frac{1}{2\pi}\int_{-\pi}^\pi f(t)P_r(\theta-t)dt$$ and $f(t)\in L^p(T)$ . I can't figure out how this inequality is obtained. Here are my attempts: consider Jensen's inequality. Then $$ \left| u_r\left( e^{i\theta} \right) \right|^p=\left| \int_{-\pi}^{\pi}{f\left( t \right) P_r\left( \theta -t \right) \frac{dt}{2\pi}} \right|^p\le \int_{-\pi}^{\pi}{\left| f\left( t \right) \right|^p\cdot \left| P_r\left( \theta -t \right) \right|^p\frac{dt}{2\pi}}\overset{?}{\le}\int_{-\pi}^{\pi}{\left| f\left( t \right) \right|^p\cdot P_r\left( \theta -t \right) \frac{dt}{2\pi}}. $$ I can't convince myself that the final step is true. Indeed there are segments where $P_r$ is larger than $1$ . Can you help me with this? Thanks in advance. To avoid possible typos (or misunderstandings), I will put the im
Rudin formulates Jensen's inequality for an arbitrary probability space, and he is applying it in this instance not to the normalized Lebesgue measure $\frac{1}{2\pi} \, dt$ on $\mathbb{T}$ , but to the probability measure $\frac{1}{2\pi} P_r(\theta - t) \, dt$ on $\mathbb{T}$ (here $r$ and $\theta$ are fixed for purposes of applying the inequality). The symbolic effect is that one does not need to include $P_r(\theta - t)$ in what is "raised to the $p$ " inside the integral, as one would if one were applying Jensen's inequality to the normalized Lebesgue measure. The fact that the $p$ th power of the modulus of the right hand side of Rudin's equation (3) is at most the right hand side of Rudin's equation (4) is thus (as a commenter to the original post stated but did not explain) a fairly literal consequence of Jensen's inequality (again, not in $(\mathbb{T}, \frac{1}{2\pi} \, dt)$ but in this other probability space). Rudin's passing reference to Hölder's inequality is similar: he me
|inequality|
0
Non-Dimensionalizing a Traffic Flow PDE for Physics Informed Neural Network Issue
I'm working on analyzing a traffic flow model described by the following partial differential equation (PDE): $V_{\max} \left(1 - \frac{2\rho}{\rho_{\max}}\right) \frac{\partial \rho}{\partial x} + \frac{\partial \rho}{\partial t} = 0$ Here: $\rho(x,t)$ represents the vehicle density as a function of position and time; $V_{max}$ represents the maximum velocity in km/h; $\rho_{max}$ represents the maximum density in veh/km. My goal is to non-dimensionalize this PDE to facilitate its application within a Physics Informed Neural Network (PINN), but since I'm new to this area, I am seeking validation or advice on my methodology. I have introduced the following dimensionless variables: $\hat{x} = \frac{x}{L}$ , $\hat{t} = \frac{t}{T}$ where $T = \frac{L}{V_{max}}$ , and $\hat{\rho} = \frac{\rho}{\rho_{max}}$ . Substituting these into the PDE yields: $(1-2\hat{\rho}) \frac{\partial{\hat{\rho}}} {\partial{\hat{x}}} + \frac{\partial{\hat{\rho}}} {\partial{\hat{t}}} = 0$ I am uncertain if my approach is correct. The PI
The non-dimensional form seems correct: $$(1-2\hat{\rho}) \frac{\partial{\hat{\rho}}} {\partial{\hat{x}}} + \frac{\partial{\hat{\rho}}} {\partial{\hat{t}}} = 0\quad\text{is OK.}$$ The analytic solution (expressed in the form of an implicit equation) is: $$\hat{\rho}=F\bigg(\hat{x}-(1-2\hat{\rho})\hat{t}\bigg)$$ $F$ is an arbitrary function. This means that there are infinitely many solutions (if no initial condition is given). In the wording of the question no initial condition is given, so one cannot go further.
|partial-differential-equations|machine-learning|mathematical-modeling|neural-networks|dimensional-analysis|
1
Fundamental lemma of calculus of variations for higher order-derivatives
Consider an expression of the form $$0=\int_a^b f_0(t) \varphi(t) + f_1(t)\varphi'(t) + \ldots + f_n(t) \varphi^{(n)}(t) \, \mathrm{d}t, \qquad \forall \varphi \in C_c^\infty(a,b)$$ and $f_0, \ldots, f_n \in L^2(a,b)$ . Can I conclude that $$f_i \in W^{i,2}(a,b)$$ where $W^{i,2}(a,b)$ denotes the set of all $f \in L^2(a,b)$ whose weak-derivatives up to order $i$ exist and are again in $L^2(a,b)$ ? Especially in the case $n=1$ , this is the definition of the weak derivative. I would be very grateful for hints.
No, this equation only implies $$ \sum_{j=0}^n (-1)^j f_j^{(j)}=0 $$ almost everywhere, where $f_j^{(j)}$ denotes the $j$ -th order weak derivative of $f_j$ . A counterexample to your claim would be $(a,b)=(-1,+1)$ , $$ f_0 = 0, \quad f_1 = \chi_{(0,1)}, \quad f_2 = -\max(x,0). $$ Here, $f_1 \in L^2 \setminus H^1$ , $f_2 \in H^1 \setminus H^2$ .
|sobolev-spaces|calculus-of-variations|weak-derivatives|
0
How to evaluate $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ without finding an analytic continuation, still using complex analysis.
There is a similar post regarding the integral $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ on Stack. The reason I have some trouble with this post is because it gives an analytic continuation of the function $\sin^3(x)/x^3=\left(\frac{e^{iz}-e^{-iz}}{2iz}\right)^3$ , namely $h(z)=-\frac{e^{3iz}-3e^{iz}}{4z^3}-\frac{1}{2z^3}$ . Firstly, I'd like to know how such an analytic continuation is obtained. Regarding the sinc function, I know that sine has a zero of order 1, and the denominator also does, so at $z=0$ , there is a removable singularity and thus an analytic continuation. I also know that $\lim_{z\to 0}\text{sinc}(z)=1$ (meaning 1 is an analytic continuation to $\mathbb{C}$ ?) Regarding the actual integral itself, I have considered the usual indented semicircle, of radius $R$ , with an $\epsilon$ semicircle about the essential singularity 0. Without using that analytic continuation $h(z)$ , this gives, letting $f(z)=\frac{e^{3iz}-3e^{iz}+3e^{-iz}-e^{-3iz}}{z^3}$ $$0=\frac{i}{8}\left[
It would be a good application of Laplace techniques for an even function. \begin{align} \int_{-\infty}^{\infty}\sin^3\left(x\right)\frac{1}{x^3}\mathrm{d}x&=\int_0^{\infty}\mathcal{L}\left[\sin^3\left(t\right)\right](x)\mathcal{L}^{-1}\left[\frac{2}{s^3}\right](x)\mathrm{d}x\\ &=\int_0^{\infty}\frac14\left(\frac{3}{x^2+1}-\frac{3}{x^2+9}\right)x^2\mathrm{d}x\\ &=\int_0^{\infty}\frac{27}{4}\frac{1}{x^2+9}-\frac34\frac{1}{x^2+1}\mathrm{d}x\\ &=\frac34\pi. \end{align} $\mathcal{L}[\sin^3t]=\frac14\mathcal{L}[3\sin(t)-\sin(3t)]$ from trigonometry formulas.
|real-analysis|integration|complex-analysis|contour-integration|analytic-continuation|
0
A prime is not a product of primes, so why is “every positive integer, except 1, […] a product of primes”?
I'm trying to understand why the fundamental theorem of arithmetic is phrased like this: Every positive integer, except 1, is a product of primes. A prime number is an integer, but it is not a product of primes. It seems to me that the "every positive integer" generalization is not justified. Something like "every non-prime integer is a product of primes" would have been a better phrasing. This phrasing would except 1 as well. What am I missing? According to the definition of prime numbers, a prime is not defined as "a product of primes": A number p is said to be prime if (i) p > 1, (ii) p has no positive divisors except 1 and p. I took these definitions from Introduction to the theory of numbers , by Hardy and Wright, p.2. In his proof of Theorem 1, "Every positive integer, except 1, is a product of primes", Hardy states, "Either $n$ is prime, when there is nothing to prove, or $n$ has divisors between 1 and $n$ ." But this doesn't make sense to me. The prime $n$ is a positive integer a
This is not the fundamental theorem of arithmetic, and Hardy & Wright don’t call it that – in fact, the next theorem on the next page is referred to as the fundamental theorem of arithmetic; it states that the prime factorization is unique. As regards your criticism of the treatment of special cases, my criticism would be the opposite: Not only is a prime a product of $1$ prime, but $1$ is a product of $0$ primes, the empty product, so excluding $1$ is unnecessary. The Wikipedia article on the empty product cites two quotes from Edsger Dijkstra where he specifically criticizes this definition of Hardy & Wright and a similar one by Harold Stark. So you’re not the first to take issue with it, but consider doing away with unnecessary special cases rather than adding more. To see why it makes sense to define products to include zero or one factors, consider, for example, the map from the positive integers to the exponents in their representation as a product of primes (i.e. by a function f
|prime-numbers|
0
How to solve this using variation of parameters
I have managed to solve this ODE $$y''+4y=\cos(2x)$$ using comparison of coefficients, and I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}x\sin(2x)$$ Using variation of parameters I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}\cos(2x)\cos(4x)+\frac{1}{2}x\sin(2x)+\frac{1}{8}\sin(4x)\sin(2x)$$ However, trying to get my first solution from the second, I get stuck. I tried using trigonometric identities and the fact that $y_h$ can "swallow" terms of the form $A\cos(2x)$ and $B\sin(2x)$ where $A,B$ are constants. What am I missing? Thanks
First, find the general solution of the homogeneous equation $$ y''+4y=0 $$ The characteristic equation is: $$ r^{2}+4=0 $$ So, $$ y_{h}(x)=\mathrm{C_{1}}\cos(2x)+\mathrm{C}_{2}\sin(2x) $$ Since the right side of the non-homogeneous equation is $\cos(2x)$ , and $\cos(2x)$ is already part of the homogeneous solution, we need to include an extra factor of $x$ in the trial particular solution: $$ y_{p}(x)=x(A\cos(2x)+B\sin(2x)) $$ Put it into the equation: $$ 2 (2 B \cos (2 x)-2 A \sin (2 x))+x (-4 A \cos (2 x)-4 B \sin (2 x))+4x(A\cos(2x)+B \sin(2x))=\cos(2x) $$ $$ A=0,\quad B=\frac{1}{4} $$ So, $$ y(x)=\mathrm{C_{1}}\cos(2x)+\mathrm{C_{2}}\sin(2x)+\frac{x}{4}\sin(2x) $$
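A quick finite-difference residual check (stdlib only; the step size and sample points are arbitrary choices) confirms that $y_p=\frac{x}{4}\sin(2x)$ indeed satisfies $y''+4y=\cos(2x)$:

```python
import math

def y(x):
    # candidate particular solution from undetermined coefficients
    return x * math.sin(2 * x) / 4

def residual(x, h=1e-4):
    # central-difference approximation of y'' plus the remaining terms
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return ypp + 4 * y(x) - math.cos(2 * x)

# sample the residual on a grid; it should vanish up to discretization error
max_res = max(abs(residual(k / 10)) for k in range(1, 50))
```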
|ordinary-differential-equations|
0
Collapsing spectral sequence
In Weibel's definition of a homology spectral sequence collapsing at $E^r$ , he says that if it converges to $H_*$ then $H_n$ is the unique nonzero $E_{pq}^r$ such that $p+q=n$ . I am trying to prove this, but so far I haven't been able to get very far. I am trying to use the fact that $E_{pq}^\infty\cong F_pH_n/F_{p-1}H_n$ , but I haven't been really successful and I don't have any idea on how to see it. Does anyone have any help for me?
Okay, I think I've managed to find the answer: since $E_{p,q}^r\neq 0$ for only one index $p_0$ , consider a filtration $$0=F_sH_n \subseteq \ldots \subseteq F_{p_0}H_n \subseteq \ldots \subseteq F_tH_n=H_n.$$ By assumption, for all $(p,q)\in\mathbb{Z}^2$ with $n=p+q$ , we have $$E_{p,q}^\infty\cong F_pH_n/F_{p-1}H_n$$ Now this implies that $F_{s+1}H_n/F_sH_n\simeq E_{s+1,q}^\infty=0$ , hence $F_{s+1}H_n=0$ . By induction, we show that for $k\in[|s;p_0-1|]$ , $F_kH_n=0$ . Now we have $E_{p_0,q}^\infty\cong F_{p_0}H_n$ and $E_{p_0+1,q}^\infty\cong F_{p_0+1}H_n/F_{p_0}H_n=0$ , hence $F_{p_0+1}H_n\simeq F_{p_0}H_n\simeq E_{p_0,q}^\infty$ , and we show that this is the case for all $k\in[|p_0;t|]$ . This shows that the term $F_tH_n=H_n\cong E_{p_0,q}^\infty$ , which is the stated nonzero term.
|homology-cohomology|spectral-sequences|
0
A prime is not a product of primes, so why is “every positive integer, except 1, […] a product of primes”?
I'm trying to understand why the fundamental theorem of arithmetic is phrased like this: Every positive integer, except 1, is a product of primes. A prime number is an integer, but it is not a product of primes. It seems to me that the "every positive integer" generalization is not justified. Something like "every non-prime integer is a product of primes" would have been a better phrasing. This phrasing would except 1 as well. What am I missing? According to the definition of prime numbers, a prime is not defined as "a product of primes": A number p is said to be prime if (i) p > 1, (ii) p has no positive divisors except 1 and p. I took these definitions from Introduction to the theory of numbers , by Hardy and Wright, p.2. In his proof of Theorem 1, "Every positive integer, except 1, is a product of primes", Hardy states, "Either $n$ is prime, when there is nothing to prove, or $n$ has divisors between 1 and $n$ ." But this doesn't make sense to me. The prime $n$ is a positive integer a
Every positive integer, except 1, can be expressed as a product of prime(s). There, fixed it. Jokes aside, the theorem can be stated more rigorously as: for any given integer $n>1$ , there exists a sequence of primes $p_i$ such that $$n=\prod p_i$$ This explains why $1$ is treated differently: $1$ is an empty product.
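The empty-product convention is also the one programming languages use, so the theorem translates to code with no special case for $1$; a minimal trial-division sketch (the function name is made up for illustration):

```python
import math

def prime_factors(n):
    """Return the multiset of primes whose product is n; empty for n = 1."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # a prime is a product of one prime
    return factors

assert math.prod([]) == 1            # the empty product is 1 by convention
assert prime_factors(1) == []        # so 1 needs no exception
assert prime_factors(13) == [13]     # a prime is a one-factor product
assert prime_factors(12) == [2, 2, 3]
```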
|prime-numbers|
0
How to solve this using variation of parameters
I have managed to solve this ODE $$y''+4y=\cos(2x)$$ using comparison of coefficients, and I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}x\sin(2x)$$ Using variation of parameters I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}\cos(2x)\cos(4x)+\frac{1}{2}x\sin(2x)+\frac{1}{8}\sin(4x)\sin(2x)$$ However, trying to get my first solution from the second, I get stuck. I tried using trigonometric identities and the fact that $y_h$ can "swallow" terms of the form $A\cos(2x)$ and $B\sin(2x)$ where $A,B$ are constants. What am I missing? Thanks
If your problem is to determine a particular solution, you can consider a complex immersion: $$ \cases{ y''+4y = \cos(2x)\\ i(u''+4u) = i \sin(2x) }\Rightarrow z''+4z = e^{2ix} $$ then, as $z_h =c_0 e^{2ix} + c_1 e^{-2ix}$ , we choose as particular solution $z_p=c_2x e^{2ix}$ so $$ i(i+4c_2)e^{2ix}=0\Rightarrow c_2 = -\frac i4 $$ and then $$ z_p = -\frac i4 x\left(\cos(2x)+i\sin(2x)\right) = \frac x4\sin(2x)-i\frac x4\cos(2x) = y_p + i u_p $$ then $y_p = \frac x4\sin(2x)$ .
|ordinary-differential-equations|
0