Can I use the BFGS algorithm in combination with an L1-norm penalized LLH to get exact zeros?
I am working through the paper Predicting the Long-Term Stock Market Volatility: A GARCH-MIDAS Model with Variable Selection. It uses a penalized log-likelihood, PLLH = LLH - L1-norm (equation 7), and always speaks of "non-zero" and "shrunken to zero" parameters: "We choose the optimal tuning parameter using Generalized Information Criteria (GIC), and select the variables with non-zero parameter estimates." I coded it in R for a simulation study, so I know the true variables. I also tried a proximal gradient descent variant, but that struggles to converge and takes a lot longer. So now I am using optim(method="BFGS") and it works quite well, but it never returns exact zero estimates; instead I get values in the ballpark of 1e-7 to 1e-13 for the non-active variables. The gradient function is a numerical approximation of the gradient of the PLLH. Can I even achieve exact zero estimates without a threshold, or did I do something wrong?
BFGS obtains superior convergence to gradient descent by forming an approximation to the Hessian matrix as the iteration progresses; this speeds up convergence once you enter a region where the true Hessian is positive definite, which occurs at the minimum of a smooth function. However, the 1-norm is not smooth at the origin, so the Hessian is undefined there and cannot be approximated, meaning that BFGS may never land exactly on a minimum at the origin. If you need exact zeros, try a few proximal gradient (soft-thresholding) steps starting from the BFGS solution: the proximal map of the 1-norm sends sufficiently small coefficients to exactly zero.
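As a minimal sketch of why proximal steps produce exact zeros (this is an illustration, not the paper's estimator; the threshold value `t` is made up), the soft-thresholding operator, which is the proximal map of a scaled 1-norm, maps every coefficient below the threshold to exactly 0:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1: shrinks each entry toward 0
    and sets any entry with |z_i| <= t to exactly 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# BFGS-style estimates: tiny but non-zero for the inactive variables
beta = np.array([0.8, -0.5, 3e-9, -7e-12])
cleaned = soft_threshold(beta, t=1e-6)
print(cleaned)  # the last two entries are now exactly 0.0
```

This is why proximal-gradient-type methods (ISTA/FISTA) return exact zeros while smooth quasi-Newton methods only approach them.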
|optimization|estimation|
0
How to show that the solution of an ODE is negative
So I have the ODE given by $-u''(x)=f(x;\lambda)$ for $x\in(0,1)$ with the boundary conditions $u(0)=u(1)=0$, where the function is defined as $f(x;\lambda) = -\lambda e^{-\lambda x}$ for $\lambda>0$. This has a unique solution in $C^2_0[0,1]=\{g\in C^2[0,1] \mid g(0)=g(1)=0\}$. How can I show (using Green's function) that $u(x)\leq 0$ for all $x\in (0,1)$? The homogeneous equation $u''(x)=0$ has general solution $u(x)=Ax+B$. But we also have to consider two cases, $x\in [0,y)$ and $x\in (y,1]$. I got the ansatz $$G(x,y) = \begin{cases}Ax+B \qquad x\in [0,y) \\ Cx+D\qquad x\in (y,1]\end{cases}$$ Applying the boundary conditions I got $$G(x,y) = \begin{cases}Ax \qquad\qquad x\in [0,y) \\ C(x-1)\qquad x\in (y,1]\end{cases}$$ I know that I also have to consider $x=y$, where $Ax=C(x-1)$, and I ended up solving the jump condition $$\lim_{x \to y^+}G'(x,y)-\lim_{x \to y^-}G'(x,y)=1$$ This ended up giving me $x^2-x$ for both cases. Am I missing something? Edit: Sorry, I noticed the error.
By integrating the equation twice, $$ u(x)= \frac{1}{\lambda} e^{-\lambda x}+ C_1x+C_2, $$ and imposing the boundary conditions, we obtain $$C_2=-\frac{1}{\lambda}, \quad C_1=\frac{1-e^{-\lambda}}{\lambda}.$$ So the solution of the ODE is $$ u(x)= \frac{1}{\lambda} e^{-\lambda x}+\frac{1-e^{-\lambda}}{\lambda}x-\frac{1}{\lambda}. $$ Its first derivative is $$ u'(x)=-e^{-\lambda x}+\frac{1-e^{-\lambda}}{\lambda}, $$ which vanishes at exactly one point, namely $$ x=\frac{1}{\lambda} \left(\log(\lambda)-\log(1-e^{-\lambda})\right). $$ As the function is convex, that point is a minimum. But by the Weierstrass (extreme value) theorem the function attains its maximum at some point of the compact set $[0,1]$. Since the maximum can't be at an interior point (the derivative would vanish there, and the only critical point is the minimum), it has to be on the boundary, i.e. $$ u(x) \le \max(u(0),u(1))=0. $$
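A quick numerical sanity check of the closed-form solution (with an arbitrary illustrative choice $\lambda = 2$) confirms both boundary conditions and the sign:

```python
import numpy as np

lam = 2.0  # arbitrary lambda > 0, for illustration only
x = np.linspace(0.0, 1.0, 1001)

# closed-form solution derived above
u = np.exp(-lam * x) / lam + (1 - np.exp(-lam)) / lam * x - 1 / lam

print(abs(u[0]) < 1e-12, abs(u[-1]) < 1e-12)  # boundary values vanish
print(u.max() <= 1e-12)                        # u(x) <= 0 on [0, 1]
```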
|ordinary-differential-equations|partial-differential-equations|
0
Show that $2+ \sqrt{-5}$ is not prime in $\mathbb{Z}[\sqrt{-5}]$
Show that $2+ \sqrt{-5}$ is not prime in $\mathbb{Z}[\sqrt{-5}]$. Here is my approach: If $2+ \sqrt{-5}$ is a prime in $\mathbb{Z}[\sqrt{-5}]$, then for all $r,s \in \mathbb{Z}[\sqrt{-5}]$ we would have: $2+ \sqrt{-5} \mid rs \Rightarrow 2+ \sqrt{-5} \mid r$ or $2+ \sqrt{-5} \mid s$. We can consider the norm $N :\mathbb{Z}[\sqrt{-5}] \rightarrow \mathbb{Z}$ given by $N(a+b\sqrt{-5})=(a+b\sqrt{-5})(a-b\sqrt{-5})=a^2+5b^2$. The next thing is to recognize that if $a\mid b$ in $\mathbb{Z}[\sqrt{-5}]$, then $N(a) \mid N(b)$ in $\mathbb{Z}$. This means that if $2+ \sqrt{-5} \mid r$, then $N(2+ \sqrt{-5}) \mid N(r)$. Next, we notice that $N(2+ \sqrt{-5})=9$. Now, $9 \mid 3 \cdot 6$ but $9 \nmid 3$ and $9 \nmid 6$. Thus, $2+ \sqrt{-5}$ can't be prime in $\mathbb{Z}[\sqrt{-5}]$. Question: Is my approach correct?
No. You know that if $a$ divides $b$ in $\mathbb{Z}[\sqrt{-5}]$, then $N(a)$ divides $N(b)$ in $\mathbb{Z}$. However: You don't have an $N(a)$ and an $N(b)$. You have $N(2+\sqrt{-5})=9$ and the integers $3$ and $6$, and you note that $9$ divides $18$ but divides neither $3$ nor $6$. So? You have not even exhibited an element $b\in\mathbb{Z}[\sqrt{-5}]$ with $N(b)=18$, or $6$, or $3$. And even if you had, you have the wrong implication. It is not necessarily true that if $N(a)$ divides $N(b)$ in $\mathbb{Z}$, then $a$ divides $b$ in $\mathbb{Z}[\sqrt{-5}]$. For example, $N(3)=9$ divides $N(2+\sqrt{-5})$ in $\mathbb{Z}$, but $3$ does not divide $2+\sqrt{-5}$ in $\mathbb{Z}[\sqrt{-5}]$, since any multiple of $3$ is of the form $3a+3b\sqrt{-5}$, and $2+\sqrt{-5}$ does not have that form. So you are trying to affirm the consequent (which is already a fallacy), and you did not even affirm the consequent, but something else. You are right that what you need to do is find two elements $r,s$ with $2+\sqrt{-5}\mid rs$ but $2+\sqrt{-5}\nmid r$ and $2+\sqrt{-5}\nmid s$.
|elementary-number-theory|
1
Infinite dimensional symmetric operators
I'm a physicist, and in one of my derivations I am using a relation that I simply generalized from the finite-dimensional case; I wanted to confirm that it makes sense in the infinite-dimensional setting as well. The relation is the following. Say you have a function $B(x,y)$ that is symmetric, $B(x,y) = B(y,x)$. Does the following relation hold, \begin{equation*} \int dy dz \; \delta(x-z) B(x,y) f(y)=\int dy dz \; \delta(x-y)B(x,z) f(y), \end{equation*} where $f(y)$ is just a function? I realize that this use of delta functions is a bit unorthodox. I just want to know if such a relation can make sense. Thanks!
Note that, where these integrals make sense, they can be written as iterated integrals: for the left-hand side, $$ \iint dy\,dz\, \delta(x - z)B(x,y)f(y) = \\ \int dy\left[\int dz \,\delta(x - z) B(x,y) f(y)\right] = \\ \int dy\,B(x,y)f(y). $$ For the right-hand side, $$ \iint dy\,dz \,\delta(x-y)B(x,z)f(y) = \\ \int dz\,\int dy\,\delta(x - y)B(x,z) f(y) = \\ \int dz\,B(x,z) f(x) = \\ f(x) \int dz\,B(x,z). $$ With the integrals written in this way, I think it is clear that the results will not be equal in general. As a concrete counterexample, consider $$ B(x,y) = e^{-x^2 - y^2}, \quad f(x) = x. $$ The first integral yields zero, because $\int dy\,B(x,y)f(y)$ is an integral of an odd function. The second integral yields $f(x)g(x)$, where $f$ is as above and $g(x) = \int dz\,B(x,z) = \sqrt{\pi}e^{-x^2}$ is non-zero for all $x$.
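The counterexample is easy to check numerically with a truncated Riemann sum (the evaluation point $x_0 = 1$ is an arbitrary choice):

```python
import numpy as np

# B(x, y) = exp(-x^2 - y^2), f(y) = y, as in the counterexample
y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]
x0 = 1.0

B = np.exp(-x0**2 - y**2)
lhs = np.sum(B * y) * dy    # ∫ B(x0, y) f(y) dy  -> 0 (odd integrand)
rhs = x0 * np.sum(B) * dy   # f(x0) ∫ B(x0, z) dz -> sqrt(pi) * e^{-1}

print(abs(lhs) < 1e-10, abs(rhs - np.sqrt(np.pi) * np.exp(-1)) < 1e-9)
```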
|linear-algebra|functional-analysis|operator-theory|
1
Necessity of Hausdorff-ness in "continuous function determined by its values on a dense subset"
It's well known that a continuous function taking values in a Hausdorff space is uniquely determined by its values on a dense subset of the domain. Now, I am contemplating the necessity of Hausdorff-ness in this result. It's clear that the result no longer holds if the codomain is just $T_0$, as is demonstrated by the function from $\mathbb R$ to the Sierpiński space $\{0, 1\}$ (with $\{0\}$ open, so $1$ is the closed point) given by $x\mapsto 0$ if $x\ne 0$ and $x\mapsto 1$ if $x = 0$. (Thus this and the constant $0$ function are both continuous despite agreeing on the dense set $\mathbb R\setminus \{0\}$.) Now, I am trying to come up with a similar example where the codomain is $T_1$ (and not Hausdorff, of course), but haven't been able to conjure anything up yet. Do you know of any? The only examples of $T_1$ spaces familiar to me are the cofinite/cocountable topologies.
Any bijection of $\mathbb R$ is continuous when the codomain is given the cofinite topology, since preimages of finite sets are finite and hence closed. Now just consider the bijection that swaps two distinct points $a$ and $b$ and fixes everything else. Then this map and the identity are continuous functions into cofinite $\mathbb R$ that agree on the dense subset $\mathbb R\setminus\{a, b\}$ and yet are distinct.
|general-topology|continuity|examples-counterexamples|separation-axioms|dense-subspaces|
1
A r.v. $Y\sim U]0;1[$ and $\mu$ a probability measure on $B(\mathbb{R})$. Build a new r.v. $X$ of law $\mu$ by composing $Y$ with a well-chosen function
Question: Given a r.v. $Y \sim U]0;1[$ and $\mu$ a probability measure on $B(\mathbb{R})$, build a new r.v. $X$ of law $\mu$ by composing $Y$ with a well-chosen function. Answer: 1- We define $F_X$, the distribution function of the target law $\mu$. 2- Our objective is to get $F_X(F^{-1}_X(t)) = t$. If we define $F^{-1}_X(t) = \inf \left \{ x \in \mathbb{R} : F_X(x) \geq t \right \}$ we indeed get that $F_X(F^{-1}_X(t)) = t$. Moreover, $t \leq F_X(x) \Rightarrow F^{-1}_X(t) \leq x$. 3- From "2-" and by construction we have that $P(F^{-1}_X(Y) \leq x) = P(Y \leq F_X(x)) = F_X(x)$. My weak point here is that I do not succeed in rigorously justifying that $t \leq F_X(x) \Rightarrow F^{-1}_X(t) \leq x$ implies $\left \{ F^{-1}_X(Y) \leq x \right \} = \left \{ Y \leq F_X(x) \right \}$. How can I do it? Thank you
Below is an attempt, but I am not sure about it. Option I. Prove: $\left \{ Y≤F_X(x) \right \} ⊆ \left \{ F_X^{−1}(Y)≤x \right \}$. 1. $Y \sim U]0;1[$, so if $Y≤F_X(x)$, then $\exists x'$ s.t. $Y=F_X(x')$. And so we have that $\left \{Y≤F_X(x) \right \} = \left \{ F_X(x')≤F_X(x) \right \}$. Moreover $x' \leq x$, as $F_X$ is a non-decreasing function. 2. Now by the definition of $F^{-1}_X$ we have that $F^{-1}_X(Y) = F^{-1}_X( F_X(x') ) = x'$. 3. In conclusion $\left \{Y≤F_X(x) \right \} = \left \{ F_X(x')≤F_X(x) \right \} ⊆ \left \{ x' ≤ x \right \} = \left \{ F_X^{−1}(Y)≤x \right \}$. Prove: $\left \{ F_X^{−1}(Y)≤x \right \} ⊆ \left \{ Y≤F_X(x) \right \}$. 1. If $F_X^{−1}(Y)≤x$, by definition of $F_X^{−1}$ we have that $F_X^{−1}(Y)$ is the smallest possible $x'$ s.t. $F_X(x')$ is at least $Y$. We write $x' =F_X^{−1}(Y)$, so $\left \{ F_X^{−1}(Y)≤x \right \} = \left \{ x'≤x \right \}$. 2. If $x'≤x$, then using the fact that $F_X$ is non-decreasing...
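The construction being discussed is exactly inverse transform sampling. A minimal empirical sketch (using the Exponential(1) law as a stand-in for $\mu$, since its quantile function has a closed form; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# generalized inverse CDF of Exponential(1): F^{-1}(t) = -log(1 - t)
def F_inv(t):
    return -np.log(1.0 - t)

Y = rng.uniform(size=200_000)   # Y ~ U(0, 1)
X = F_inv(Y)                    # X should then have law Exponential(1)

# empirical check: P(X <= 1) should be close to F(1) = 1 - e^{-1} ~ 0.632
print(np.mean(X <= 1.0))
```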
|probability|probability-theory|random-variables|
0
Trivial cases of a Theorem of Criterion for Multiple Zeros
This is from Gallian's Contemporary Abstract Algebra: Theorem: A polynomial $f(x)$ over a field $F$ has a multiple zero in some extension $E$ if and only if $f(x)$ and $f'(x)$ have a common factor of positive degree in $F[x]$. Proposed counterexample: take $f(x)=0$, the zero polynomial, with $F=\mathbb{R}$. It has a multiple zero at $x=4$ in $E=\mathbb{R}$, but $f(x)=f'(x)=0$ have no common factor of positive degree in $F[x]$. I just want to know whether the criterion simply doesn't apply to trivial cases, or why my counterexample is wrong.
Your counterexample doesn't work. In fact, for example $g(x)=x^2$ is a common factor of $f(x)=f'(x)=0$, because $x^2\cdot b=f(x)=f'(x)$ with $b=0$ the zero polynomial. What's more, every polynomial in $F[x]$ divides $0$, so $f$ and $f'$ have plenty of common factors of positive degree. I hope this helps.
|linear-algebra|abstract-algebra|analysis|
1
Overview of basic results in Stochastic Calculus
Are there some good overviews of basic facts about stochastic integrals and stochastic calculus? These can be in the form of resources (preferably accessible online) as well as directly writing out these results as answers. If possible, it would be helpful to link to proofs of the results; these can be external proofs or proofs on the site. This question was inspired by other similar [big-list] questions, including overviews of basic results on images and preimages and of basic results in cardinal arithmetic. See the answers at those links for inspiration on the types of responses that would be suitable for this question. Edit: I have now offered a bounty to raise awareness of the question and encourage strong answers for future reference. Edit 2: I have posted an answer with links to lecture notes and blogs. However, I still welcome other answers. I am especially looking for an answer with a direct list of results. Although the answer I have posted with links is a good start, it would be ideal to have the main results written out directly.
I will start off with an answer dedicated to publicly available online lecture notes and blogs. I still welcome other answers, especially those that contain explicit results as opposed to linked resources. I am making this answer Community Wiki so that others can contribute and allow this list to expand into a comprehensive resource. Lecture Notes: Stochastic Calculus (University of Cambridge); An Introductory Course on Stochastic Calculus (University of Melbourne); Stochastic Calculus for Finance (Carnegie Mellon University); Stochastic Calculus: An introduction with applications (University of Chicago). Blogs: Almost Sure (George Lowther); Stochastic Calculus: Navigating Risk Neutral Measures (FasterCapital).
|reference-request|stochastic-calculus|big-list|online-resources|faq|
0
How does $ \frac{\sqrt{x}\left(2ax+b\right)-\frac{ax^2+bx+c}{2\sqrt{x}}}{x} $ become $ \frac{3ax^2+bx-c}{2x^{3/2}} $?
The original question was: Differentiate the given function with respect to $x$: $$ f(x) = \dfrac{ax^2+bx+c}{\sqrt{x}} $$ I simplified to $$ \dfrac{\sqrt{x}\left(2ax+b\right)-\frac{ax^2+bx+c}{2\sqrt{x}}}{x} \tag1$$ then the next step simplified it into $$ \dfrac{2ax+b}{\sqrt{x}}-\dfrac{ax^2+bx+c}{2x^\frac{3}{2}} \tag2$$ and finally into $$ \dfrac{3ax^2+bx-c}{2x^\frac{3}{2}} \tag3$$ Can someone explain the simplification?
Let $\color{green}{x>0}$. We have $$ \dfrac{\sqrt{x}\left(2ax+b\right)-\frac{ax^2+bx+c}{2\sqrt{x}}}{x} =\frac{A-\frac{B}{C}}{D}=\frac{A}{D}-\frac{\frac BC}{D}$$ where $A$, $B$, $C$ and $D$ are four clearly defined quantities. Then you use $$\frac{\frac BC}{D}=\frac{\frac BC}{\frac {\color{red}{D}}1}=\frac BC\cdot\frac 1{\color{red}{D}}=\frac{B}{CD}$$ with $CD=\boxed{2\color{Purple}{\sqrt x}\cdot x=2\color{Purple}{x^{\frac12}}x^1=2x^{\frac12+1}=2x^{\frac32}}$ You finish by following the advice of @true blue anil. You will obtain $$\frac{2x(2ax+b)-ax^2-bx-c}{2x^{\frac32}}$$ Note for $\frac AD$ that you can write $x=\sqrt x\times\sqrt x$ since $\color{green}{x>0}$.
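A quick finite-difference check (with arbitrary illustrative coefficients and evaluation point) confirms that form (3) really is the derivative of $f$:

```python
import numpy as np

# hypothetical coefficients, chosen only for illustration
a, b, c = 2.0, -3.0, 5.0

def f(x):
    return (a * x**2 + b * x + c) / np.sqrt(x)

def fprime(x):
    # the simplified form (3): (3ax^2 + bx - c) / (2 x^{3/2})
    return (3 * a * x**2 + b * x - c) / (2 * x**1.5)

x0, h = 1.7, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference
print(abs(numeric - fprime(x0)) < 1e-5)      # True
```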
|algebra-precalculus|derivatives|
0
Quadratic residue of $-1$ in composite modulus
It is true for each odd prime number $p$ that if $x^2\equiv-1 \pmod p$ has a solution, then $p\equiv1\pmod 4$. I've observed that the statement should extend to all composite integers whose prime factors are all congruent to $1$ modulo $4$. However, I couldn't find any remark on the internet about whether this is true. In other words, if it is correct, how do we prove that $x^2\equiv-1\pmod n$ has a solution if and only if every prime factor $p_i$ of $n$ is of the form $p_i=4k_i+1$? P.S.: My level is pretty elementary.
Daniel Fischer has explained that for odd $n$ all prime factors must be $\equiv1\bmod4$. We could equivalently require all factors, prime or otherwise, of an odd $n$ to have this residue. If $n$ is even, write it as $2^km$ for odd $m$ and a positive integer $k$. Then $-1$ will be a quadratic residue $\bmod n$ iff it is so both $\bmod 2^k$ and $\bmod m$. Thus $m$ has to satisfy the criterion noted by Daniel Fischer, while for $\bmod 2^k$ we have $1^2\equiv-1$ when $k=1$. But no square root of $-1$ exists for any $k\ge 2$: there are simply no squares one less than a multiple of $4$, which such a root would imply. So the even moduli for which $-1$ is a quadratic residue are just twice the appropriate odd ones. Thus if $-1$ is a quadratic residue $\bmod 5$ it will be so $\bmod 10$; $\bmod 13$ similarly implies $\bmod 26$, etc.
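The full criterion (odd part with all prime factors $\equiv 1 \pmod 4$, and at most one factor of $2$) can be checked by brute force against a direct search for square roots of $-1$; a small sketch:

```python
def has_sqrt_minus1(n):
    # direct search: does x^2 ≡ -1 (mod n) have a solution?
    return any((x * x + 1) % n == 0 for x in range(n))

def criterion(n):
    # at most one factor of 2, and every odd prime factor ≡ 1 (mod 4)
    if n % 4 == 0:
        return False
    m = n // 2 if n % 2 == 0 else n
    p = 3
    while p * p <= m:
        while m % p == 0:
            if p % 4 != 1:
                return False
            m //= p
        p += 2
    return m == 1 or m % 4 == 1

print(all(has_sqrt_minus1(n) == criterion(n) for n in range(1, 500)))  # True
```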
|elementary-number-theory|modular-arithmetic|
0
Showing $\sin^{12}\theta+3\sin^{10}\theta+3\sin^8\theta+\sin^6\theta+2\sin^4\theta+2\sin^2\theta-2=1$, given $\cos\theta=\sin^2\theta$
Let $\theta \in \mathbb R$ be such that $\cos\theta=\sin^2\theta$. Then $$\cos{\theta} + \cos^{2}{\theta} = 1 \tag a$$ We need to prove that: $$\sin^{12}{\theta} + 3\sin^{10}{\theta} + 3\sin^{8}{\theta}+\sin^{6}{\theta}+2\sin^{4}{\theta}+2\sin^{2}{\theta}-2=1 \tag b$$ I can't even get started; Wolfram Alpha and GPT just brute-force it, but I feel there is an elegant way (it is a class 10 problem, though a HOTS one). I tried raising $(a)$ to the power $6$ and deriving $(b)$, but with no success.
Let us consider the polynomial: $$ f(s) = s^{12} + 3 \, s^{10} + 3 \, s^{8} + s^{6} + 2 \, s^{4} + 2 \, s^{2} - 3\ . $$ It factors as: $$ f(s) = \left(s^{8} + 2 \, s^{6} + 2 \, s^{4} + s^{2} + 3\right)\left(s^{4} + s^{2} - 1\right)\ . $$ Now observe that from the given relation $\sin^2\theta=\cos \theta$ we get $\sin^4\theta=\cos^2 \theta=1-\sin^2\theta$, so $\sin\theta$ is a root of $s^4+s^2-1$, and thus also of $f(s)$. Hence $f(\sin\theta)=0$, which is exactly the identity $(b)$.
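The factorization can be verified mechanically by multiplying coefficient vectors, since polynomial multiplication is a convolution:

```python
import numpy as np

# coefficients in decreasing powers of s (only even powers appear)
P = np.array([1, 0, 2, 0, 2, 0, 1, 0, 3])                 # s^8 + 2s^6 + 2s^4 + s^2 + 3
Q = np.array([1, 0, 1, 0, -1])                            # s^4 + s^2 - 1
F = np.array([1, 0, 3, 0, 3, 0, 1, 0, 2, 0, 2, 0, -3])    # the degree-12 polynomial f(s)

print(np.array_equal(np.convolve(P, Q), F))  # True: the factorization holds
```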
|trigonometry|
0
algebraic equation with powers
The original statement of the problem: Solve in $\mathbb R$ the following equation: $$ a^{\log_b x^2 } + a^{\log_x b^2 } = a^{1+\log_b x } + a^{1+\log_x b } $$ where $a,b>0$ and $b \neq 1$. A more general statement of the problem: Find all real solutions $t$ of the equation $ a^{2t} + a^{\frac{2}{t}} = a^{1+t} + a^{1+ \frac{1}{t}} $ where $a$ is a strictly positive real constant. EDIT: I hope I did not complicate the problem by making these transformations to the second equation; I am especially interested in solving the original one, so that's more important. My approach: First of all, if $a = 1$, it is clear that any real $t$ satisfies the given equation. In the following, I assumed that $a > 1$ to simplify; the case $0<a<1$ goes analogously. I tried to prove that $t = 1$ is the only solution of the equation for any $a$ different from $1$. I transformed the equation into $a^{t}(a^{t}-a) + a^{\frac{1}{t}}(a^{\frac{1}{t}}-a)=0$, where I was going to use some inequalities and prove that $t=1$.
Unfortunately this answer is not quite finished, but I have to run, so I am posting it here to save it and/or have others finish it. Let $a\in\Bbb{R}_{>0}$ be a positive real number, and let $t\in\Bbb{R}$ be a real number such that $$a^{2t} + a^{\frac2t} = a^{1+t} + a^{1+ \frac1t}.\tag{0}$$ First note that $t=1$ is a solution for any $a\in\Bbb{R}_{>0}$, so we will only consider $t\neq1$ from here on. Similarly, for $a=1$ any $t\in\Bbb{R}$ is a solution, so we will only consider $a\neq1$ from here on. We will also not consider $t=0$, as the original equation is not defined at this point. Let $b:=a^{\tfrac1t}$ so that $a=b^t$ and hence $$b^{2t^2}+b^2=b^{t^2+t}+b^{t+1}.$$ Rearranging, dividing by $b^2$ and factoring the exponents, we find that $$b^{2(t+1)(t-1)}-b^{(t+2)(t-1)}-b^{t-1}+1=0,$$ and hence for $c:=b^{t-1}$ that $$(c^t)^2-c^t=\frac{c-1}{c^2}.\tag{1}$$ Note that $c=a^{\frac{t-1}{t}}$ is again a positive real number, and that conversely, if $c\in\Bbb{R}_{>0}$ is a positive real number...
|algebra-precalculus|
0
Solve functional equation $f(x+y) + f(x-y) = 2f(x)f(y)$
The statement of the problem: Let $f: \mathbb R \rightarrow \mathbb R$ have the two properties: (1) $f(x+y) + f(x-y) = 2f(x)f(y), \forall x , y \in \mathbb R$; (2) there exists a smallest strictly positive number $a$ with the property that $f(a)$ is the maximum of the function and $f(a)>0$. Prove that $f$ is a periodic function with period $a$. My approach: For $x = y = 0 \implies 2f(0) = 2f(0)^2$, so $f(0)\in\{0,1\}$. If $f(0) = 0 \implies 2f(x)=0 \implies f(x)=0, \forall x \in \mathbb R$. (EDIT: this case is actually impossible, because the zero function would not satisfy property (2); so $f(0)$ is necessarily $1$.) Now if $f(0) = 1$, for $x = 0$ we get $f(y) + f(-y) = 2f(y) \implies f(y)=f(-y), \forall y \in \mathbb R$. So it is enough to determine $f$ on the interval $[0,\infty)$. EDIT: I managed to prove that $f(a)=1$ and that the final answer might be something related to a trigonometric function, probably cosine, since $\cos(-x)=\cos(x)$ as above. I don't really...
$2f(x) = f(x+0) + f(x-0) = 2f(x)f(0)$ for all $x$. If there exists any $w$ with $f(w) \ne 0$, then $2f(w) =2f(w)f(0)$ and $f(0)=1$. And if $f(x)=0$ for all $x$, that violates property 2. So $f(0) = 1$. $f(a+a) +f(a-a) = 2f^2(a)$, so $f(2a)+1 = 2f^2(a)$. Since $f(a)$ is the maximum and $f(0)=1$, we have $f(a) \ge 1$. But if $f(a) > 1$ we'd have $2f^2(a) = f(2a) + 1 \le f(a) + 1 < 2f^2(a)$. That's a contradiction, so $f(a) = f(0) = 1$. It follows that $f(2a) + 1=2f^2(a)=2$, so $f(2a) =1$, and by strong induction that $f(na) = f(a) =1$. [Suppose $f(ka) =f(a)=1$ for all $k \le n$; then $f(na + a) + f(na-a)= 2f(na)f(a)=2$. But $f(na+a) + f(na-a)=f((n+1)a) + f((n-1)a) = f((n+1)a) + 1$, so $f((n+1)a) = 1 =f(a)$.] Now consider $f(x+a) + f(x-a) = 2f(x)f(a) = 2f(x)$. If $f(x+a) \ne f(x-a)$, wolog $f(x-a) < f(x+a)$, then there is a positive $d$ so that $f(x-a) = f(x)-d$ and $f(x+a)=f(x) + d$. This leads to a bit of a problem: $f(x + 2a)+f(x) = f((x+a)+a)+f((x+a)-a)=2f(x+a)f(a)=2f(x) + 2d$, so $f(x+2a) = f(x)+2d$, and by induction for any natural $n$ we get $f(x + na) = f(x)+nd$, which eventually exceeds the maximum value $1$: a contradiction. So $f(x+a) = f(x-a)$ for all $x$, and then $f(x+a)+f(x-a)=2f(x)$ gives $f(x+a)=f(x)$, i.e. $f$ is periodic with period $a$.
|functions|functional-equations|
1
Substitution for Weierstrass Substitution
I recently learnt about the Weierstrass substitution for integrals of the form $$\int\frac{dx}{a \sin x+b \cos x}.$$ You substitute $t=\tan(\frac{x}{2})$ and proceed to algebraically manipulate the integrand to get partial fractions. Why does this not work with $\tan x$ instead? Thanks.
Substituting $\tan x$ does work, but requires more work to get to the same end result. We have the identity $$\tan x=\dfrac{2\tan\frac x2}{1-\tan^2\frac x2} \implies \tan x\tan\dfrac x2+1= \sqrt{1+\tan^2x}$$ which for $\left(\tan x,\tan\dfrac x2\right)\to(t,u)$ matches the pattern for the second class of Euler substitution . In other words, after replacing $t=\tan x$ , let $$u=\frac{\sqrt{1+t^2}-1}t \implies t = \frac{2u}{1-u^2}$$ to end up with $$\int \frac{dx}{a\sin x+b\cos x} = \int \frac{dt}{(at+b) \sqrt{1+t^2}} = \int \frac{2\left\lvert u^2-1\right\rvert}{\left(u^2-1\right) \left(bu^2-2au-b\right)} \, du$$ Due to our choice in substitution, we restrict the domain to $$x\in\left(-\dfrac\pi2,\dfrac\pi2\right) \implies t\in(-\infty,\infty) \implies u\in(-1,1)$$ so that $\dfrac{\left\lvert u^2-1\right\rvert}{u^2-1}=-1$ . On the other hand, the half-angle substitution more immediately yields $$\int \frac{dx}{a\sin x+b\cos x} \stackrel{v=\tan\tfrac x2}= \int \frac2{-bv^2+2av+b}\,dv$$ th
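The key identity behind the answer's Euler substitution, namely that $u=\frac{\sqrt{1+t^2}-1}{t}$ inverts $t=\frac{2u}{1-u^2}$ with $u\in(-1,1)$, is easy to verify numerically at a few sample points (the test values of $t$ are arbitrary):

```python
import math

def u_of_t(t):
    # the second Euler substitution from the answer
    return (math.sqrt(1 + t * t) - 1) / t

for t in [0.3, 1.0, 2.5, -1.7]:
    u = u_of_t(t)
    assert -1 < u < 1                               # u stays in (-1, 1)
    assert abs(2 * u / (1 - u * u) - t) < 1e-12     # t = 2u / (1 - u^2)
print("identity verified")
```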
|calculus|
0
Can $4\cdots41$ (with odd number of $4$s) be a Square Number?
Consider a number whose decimal representation begins with an odd number of consecutive digits 4, followed by a single digit 1. Examples of such numbers are 41, 4441, or any similar pattern extending with 4s. My question is: can such a number ever be a perfect square? To clarify, the numbers we're considering take the form $44\ldots41$, where the number of 4's is odd and the number is terminated by a single 1. Here are the specific points I'm curious about: Is there a mathematical approach or theorem that directly addresses the properties of numbers with specific digit patterns in relation to being perfect squares? Could modular arithmetic or any form of number theory provide insight into proving or disproving the possibility of such a number being a perfect square? I've attempted some preliminary analysis, including playing around with smaller cases and considering the last digits of square numbers, but haven't reached a conclusive answer. Any guidance, references, or ideas would be appreciated.
A simpler approach. A number $44\ldots4$ consisting of an even number of $4$s is always divisible by $11$. Hence our number $44\ldots41$ with an odd number of $4$s is $\equiv 8 \pmod{11}$: it is such a number times $100$, plus $41$, and $41\equiv 8\pmod{11}$. But these are the only possible remainders of squares modulo $11$: $$0^2=0$$ $$(\pm1)^2=1$$ $$(\pm2)^2=4$$ $$(\pm3)^2=9$$ $$(\pm4)^2=5$$ $$(\pm5)^2=3$$ All equalities are modulo $11$. Since $8$ is not among these residues, no such number can be a perfect square.
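Both facts are trivial to confirm by computation for the first several members of the family:

```python
# numbers with an odd count of leading 4s followed by a single 1
nums = [int("4" * k + "1") for k in range(1, 20, 2)]  # 41, 4441, 444441, ...
print(all(n % 11 == 8 for n in nums))                 # True

squares_mod_11 = {x * x % 11 for x in range(11)}
print(sorted(squares_mod_11))                          # [0, 1, 3, 4, 5, 9]; no 8
```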
|square-numbers|
1
An exercise question in the probability theory class
I was asked a "simple" question by one of my students in the tutorial class, but I have found myself struggling with it for about 2 hours already. Here is the question: Assume there are $K$ distinct constants $0\leq C_{1} < C_{2} < \cdots < C_{K} \leq 1$. Let $X$ be a random variable with $\Pr(X=C_{k})=\frac{1}{K}$, $k=1,...,K$. Find when the maximum of $Var(X)$ is attained; determine the value of $K$ and the values of $(C_{1},...,C_{K})$ when this happens. Here are some of my ideas: $E[X]=\sum_{k=1}^{K}C_{k}\frac{1}{K}=\frac{1}{K}\sum_{k=1}^{K}C_{k}$ and $E[X^2]=\sum_{k=1}^{K}(C_{k})^2\frac{1}{K}=\frac{1}{K}\sum_{k=1}^{K}(C_{k})^2$, so $$Var(X)=E[X^2]-(E[X])^2=\frac{1}{K^2}\Big(K\sum_{k=1}^{K}(C_{k})^2-\big(\sum_{k=1}^{K}C_{k}\big)^2\Big)=\frac{1}{K^2}\Big((K-1)\sum_{k=1}^{K}(C_{k})^2-2\sum_{i<j}C_{i}C_{j}\Big).$$ I am really wondering if it is possible to apply the Lagrange multiplier method to this. Any help and advice will be sincerely appreciated. Thank you very much in advance!
Let's first of all allow some of the $C$s to be equal. The derivative of the variance w.r.t. $C_i$ is $\frac2K(C_i-m)$, where $m$ is the expectation of $X$, i.e. the mean of the $C$s. Therefore, pushing any $C_i$ away from $m$ increases the variance, so a maximum-variance configuration has all the $C$s equal to either $0$ or $1$. If it has $p$ zeros and $q$ ones, then the variance is $\frac{pq}{(p+q)^2}$, which, for fixed $p+q=K$, is biggest when $p,q$ are as close as possible. So the biggest variance you can get is when "half" of the $C$s are $0$ and "half" are $1$. (If $K$ is odd, one of those groups will be $1$ larger than the other.) This maximum isn't attained if you keep the restriction that all the $C$s have to be unequal, but you can get as close to it as you like by pushing "half" the values very close to (one another and to) $0$ and the rest very close to (one another and to) $1$. (You didn't ask about this, but the minimum happens when all the $C$s are equal; this too isn't attained if you require them to be distinct.)
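The extremal configuration is easy to check numerically (here $K=10$, an arbitrary even choice, with the $C$s allowed to coincide as in the relaxed problem):

```python
import numpy as np

K = 10
C = np.array([0.0] * (K // 2) + [1.0] * (K // 2))  # "half" at 0, "half" at 1

# variance of X, which picks each C_k with probability 1/K
var_X = C.var()
print(var_X)  # 0.25 = pq/(p+q)^2 with p = q = 5
```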
|probability|random-variables|lagrange-multiplier|variance|uniform-distribution|
1
Solve functional equation $f(x+y) + f(x-y) = 2f(x)f(y)$
The statement of the problem: Let $f: \mathbb R \rightarrow \mathbb R$ have the two properties: (1) $f(x+y) + f(x-y) = 2f(x)f(y), \forall x , y \in \mathbb R$; (2) there exists a smallest strictly positive number $a$ with the property that $f(a)$ is the maximum of the function and $f(a)>0$. Prove that $f$ is a periodic function with period $a$. My approach: For $x = y = 0 \implies 2f(0) = 2f(0)^2$, so $f(0)\in\{0,1\}$. If $f(0) = 0 \implies 2f(x)=0 \implies f(x)=0, \forall x \in \mathbb R$. (EDIT: this case is actually impossible, because the zero function would not satisfy property (2); so $f(0)$ is necessarily $1$.) Now if $f(0) = 1$, for $x = 0$ we get $f(y) + f(-y) = 2f(y) \implies f(y)=f(-y), \forall y \in \mathbb R$. So it is enough to determine $f$ on the interval $[0,\infty)$. EDIT: I managed to prove that $f(a)=1$ and that the final answer might be something related to a trigonometric function, probably cosine, since $\cos(-x)=\cos(x)$ as above. I don't really...
(Too big for a comment.) If we additionally assume that $f$ is $C^2(\mathbb{R})$, then this regularity condition uniquely determines the type of the function. Subtract $2f(x)$ from both sides and take the limit as $y$ goes to zero. We obtain $$ \frac{1}{y^2}(f(x+y)+f(x-y)-2f(x))=\frac{2}{y^2}f(x)(f(y)-1),$$ so $f''(x) = 2f(x)\left(\lim_{y \to 0}\frac{f(y)-1}{y^2}\right) = Lf(x)$, where $L = \lim_{y\to 0} f''(y)$ (obtained via L'Hôpital's rule). Since the function $f$ is bounded, $L<0$, and $f(x) = \cos{(\sqrt{-L}x)}$. I have used the boundary conditions $f(0) =1$ and, by taking the partial derivative with respect to $y$ and plugging in $x=y=0$, $f'(0) = 0$. The function first re-attains its maximum value at $\frac{2\pi}{\sqrt{-L}}$; therefore, from property 2, $a = \frac{2\pi}{\sqrt{-L}}$ and $f$ is periodic with period $a$, thus $$ \boxed{f(x) = \cos{\left(\frac{2\pi}{a}x\right)} }$$
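As a numeric sanity check, $f(x)=\cos(2\pi x/a)$ does satisfy d'Alembert's functional equation and has period $a$ (the period $a=1.5$ and sample points are arbitrary):

```python
import numpy as np

a = 1.5  # arbitrary period, for illustration
f = lambda x: np.cos(2 * np.pi * x / a)

rng = np.random.default_rng(1)
x, y = rng.uniform(-5, 5, size=(2, 1000))

lhs = f(x + y) + f(x - y)
rhs = 2 * f(x) * f(y)
print(np.max(np.abs(lhs - rhs)) < 1e-12)       # functional equation holds
print(np.max(np.abs(f(x + a) - f(x))) < 1e-12) # period a
```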
|functions|functional-equations|
0
Nilpotent degree $2$ 'families' of $4\times 4$ matrices
$\def\b{\begin{bmatrix}}\def\e{\end{bmatrix}}$ Are $\b0&0&a&b\\0&0&c&d\\0&0&0&0\\0&0&0&0\e$ and its transpose, $a,b,c,d\in \Bbb C$, the only nilpotent degree $2$ 'families' of matrices of size $4\times 4$? I believe they are, but I wanted to verify.
It's not clear what the post means by "families". I assume it's about $4\times 4$ nilpotent matrices of degree $2$ up to similarity i.e., similarity class of matrices such that $A\neq \mathbf 0$ but $A^2=\mathbf 0$ . Read this answer (and also the question post) for context. Up to similarity, there are two $4\times 4$ matrices with degree of nilpotence $2$ corresponding to the partitions $2+2$ and $2+1+1$ . $\displaystyle \begin{pmatrix} 0 & 1 & & \\ 0 & 0 & & \\ & & 0 & 1\\ & & 0 & 0 \end{pmatrix} \ \begin{pmatrix} 0 & 1 & & \\ 0 & 0 & & \\ & & 0 & \\ & & & 0 \end{pmatrix}\tag*{}$ All matrices in the described family are similar to one of these. Note that the blank entries are zeroes.
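A small numeric check that both representatives are non-zero with vanishing square:

```python
import numpy as np

# representatives of the two similarity classes: Jordan types 2+2 and 2+1+1
J = np.array([[0.0, 1.0], [0.0, 0.0]])
A1 = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])  # type 2+2
A2 = np.zeros((4, 4)); A2[0, 1] = 1.0                          # type 2+1+1

for A in (A1, A2):
    print(np.any(A), np.array_equal(A @ A, np.zeros((4, 4))))  # True True
```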
|matrices|nilpotence|
0
Show that $2+ \sqrt{-5}$ is not prime in $\mathbb{Z}[\sqrt{-5}]$
Show that $2+ \sqrt{-5}$ is not prime in $\mathbb{Z}[\sqrt{-5}]$. Here is my approach: If $2+ \sqrt{-5}$ is a prime in $\mathbb{Z}[\sqrt{-5}]$, then for all $r,s \in \mathbb{Z}[\sqrt{-5}]$ we would have: $2+ \sqrt{-5} \mid rs \Rightarrow 2+ \sqrt{-5} \mid r$ or $2+ \sqrt{-5} \mid s$. We can consider the norm $N :\mathbb{Z}[\sqrt{-5}] \rightarrow \mathbb{Z}$ given by $N(a+b\sqrt{-5})=(a+b\sqrt{-5})(a-b\sqrt{-5})=a^2+5b^2$. The next thing is to recognize that if $a\mid b$ in $\mathbb{Z}[\sqrt{-5}]$, then $N(a) \mid N(b)$ in $\mathbb{Z}$. This means that if $2+ \sqrt{-5} \mid r$, then $N(2+ \sqrt{-5}) \mid N(r)$. Next, we notice that $N(2+ \sqrt{-5})=9$. Now, $9 \mid 3 \cdot 6$ but $9 \nmid 3$ and $9 \nmid 6$. Thus, $2+ \sqrt{-5}$ can't be prime in $\mathbb{Z}[\sqrt{-5}]$. Question: Is my approach correct?
One should notice that $$(1-\sqrt{-5})^2 =1-2\sqrt{-5}+(-5) =-2(2+\sqrt{-5}),$$ so that $2+\sqrt{-5}\,\mid (1-\sqrt{-5})(1-\sqrt{-5})$. But $2+\sqrt{-5}\nmid 1-\sqrt{-5}$, because $N(2+\sqrt{-5})=9$, $N(1-\sqrt{-5})=6$, and $9\nmid 6$. So $2+\sqrt{-5}$ is not prime in $\Bbb Z[\sqrt{-5}]$.
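The key identity and the norm values can be double-checked by modelling $\mathbb{Z}[\sqrt{-5}]$ with complex numbers, where $\sqrt{-5} = i\sqrt5$:

```python
import cmath

r5 = cmath.sqrt(-5)          # i * sqrt(5)
alpha = 2 + r5               # 2 + sqrt(-5)
beta = 1 - r5                # 1 - sqrt(-5)

def N(z):
    # the norm a^2 + 5b^2 equals |z|^2; round away float noise
    return round(abs(z) ** 2)

print(beta ** 2 / alpha)     # -2 (+0j): so alpha divides beta^2
print(N(alpha), N(beta))     # 9 6, and 9 does not divide 6
```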
|elementary-number-theory|
0
How does one perform induction on integers in both directions?
On a recent assignment, I had a question where I had to prove a certain statement to be true for all $n\in\mathbb{Z}$. The format of my proof looked like this: Statement is true when $n=0$. "Assume statement is true for some $k\in\mathbb{Z}$." Statement must be true for $k+1$. Statement must be true for $k-1$. My professor said the logic is flawed because of my second bullet point above. She says that since mathematical induction relies on the well-ordering principle, and since $\mathbb{Z}$ has no least or greatest element, using induction is invalid. Instead, she says my argument should be structured like this: Statement is true when $n=0$. "Assume statement is true for some integer $k\geq0$." Statement must be true for $k+1$. "Assume statement is true for some integer $k\leq0$." Statement must be true for $k-1$. I am failing to understand where my logic fails and why I need to split the assumptions as she is suggesting. Could someone explain why relying on the well-ordering principle is a problem here?
Any nonempty set $A$ of integers which contains $k+1$ and $k-1$ whenever it contains $k$ is the entire set of integers. So yes, you can do induction over all the integers. But it's not often done that way. It seems more frequent to: prove the statement for all nonnegative integers, possibly by induction; then reduce the negative case to the nonnegative case. For example, I'm teaching discrete mathematics from a textbook that proves the theorem "every integer is either even or odd" this way: Lemma 1: For all integers $n$, if $n$ is even or odd, then $n+1$ is even or odd. Lemma 2: For all integers $n$, $n$ is even if and only if $|n|$ is even, and $n$ is odd if and only if $|n|$ is odd. The proofs are left to the reader. Proposition: All natural numbers are either even or odd. Proof: By induction on $n$. The base case $n=0$ is apparent by the definition of even. For the inductive case, suppose $k$ is either even or odd. By Lemma 1, $k+1$ is either even or odd. So by induction all natural numbers are either even or odd.
|logic|proof-writing|induction|
0
How to prove there exists a set $B$ such that $\mathrm{card}(B)>\dfrac{n}{3}$?
Question: Let $a_{i}\in\mathbb N^{+}$ , and the set $A=\{a_{1},a_{2},\dots,a_{n}\}$ . Show that there exists a set $B\subset A$ such that $\mathrm{card}(B)>\dfrac{n}{3}$ and such that for any $x,y\in B$ we have $x+y\notin B$ . This problem is from a Nankai University (China) math competition today. How can I prove this? Thank you. Maybe this problem is an old problem, but I can't find it. At last, I found a similar problem: let the set $A=\{1,2,3,\dots,2n,2n+1\}$ , and consider sets $B$ with $B\subset A$ such that for any $x,y\in B$ , $x+y\notin B$ . Find $\max |B|$ . The answer is $\max |B|=n+1$ . For a full solution (mathematical induction) see BBs: http://zhidao.baidu.com/question/260607678.html But for this I can't prove it. Thank you very much.
A paper by Erdős ("Extremal problems in number theory", Matematika 11:2 (1967); pdf ) has a very nice short argument for this. It's a more sophisticated version of the remark by Ma Ming in comments on the question. Take any $\alpha$ and consider $B_\alpha=\{\,a_i\,:\,\frac13\leq (\alpha a_i \bmod 1) < \frac23\,\}$ . Note that $B_\alpha$ always has the required sum-free property. The average of $|B_\alpha|$ for $\alpha$ in some large interval -- say of width $T$ -- equals, by "interchanging the order of integration", the sum over the $a_i$ of the fraction of that large interval in which $\frac13\leq (\alpha a_i \bmod 1) < \frac23$ , which is $\frac13+O(1/T)$ . So the average of $|B_\alpha|$ can be made at least as close as we like to $\frac n3$ , and so the maximum can too. Thus far, I'm just paraphrasing Erdős. But this argument, as it stands, doesn't quite solve the problem: it gets $\geq\frac n3$ , not $>\frac n3$ . If $n$ isn't a multiple of 3, this is enough, but what if it is? Well, in this case we can take
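A quick numerical sketch of this construction (my own illustration; the set $A$ and the grid of $\alpha$ values below are arbitrary choices, not from the paper):

```python
# Sketch of the Erdos construction: B_alpha keeps the a_i whose
# fractional part of alpha * a_i lands in [1/3, 2/3).
def b_alpha(A, alpha):
    return [a for a in A if 1/3 <= (alpha * a) % 1.0 < 2/3]

def is_sum_free(B):
    # x + y is never in B, even when x == y
    s = set(B)
    return all(x + y not in s for x in B for y in B)

A = list(range(1, 11))          # n = 10, so we hope for some |B_alpha| >= 4
best = []
for k in range(1, 1000):        # scan a grid of alpha values
    B = b_alpha(A, k / 1000)
    assert is_sum_free(B)       # every B_alpha is sum-free by construction
    if len(B) > len(best):
        best = B
```

If the fractional parts of $\alpha x$ and $\alpha y$ both lie in $[1/3,2/3)$ , their sum lies in $[2/3,4/3)$ , so $\alpha(x+y)$ cannot land back in the window; that is why every `B_alpha` passes the sum-free check.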
|combinatorics|contest-math|extremal-combinatorics|
0
Show $nx_n \to 1$ for sequence defined by $x_{n+1}=\frac{x_n}{1+nx_n^2}$
Let $a>0$ . Consider the sequence $x_{n+1}=\frac{x_n}{1+nx_n^2},~x_1=a,~n\in\mathbb{N}^*$ . Study the convergence of $(x_n)_{n\ge1}$ and $(nx_n)_{n\ge1}$ . It was easy for me to prove that $(x_n)_n$ is convergent with the limit equal to $0$ , but couldn't find a straightforward approach for $(nx_n)_n$ . I first proved that $x_n\le\frac{1}{n}$ , or equivalently, $nx_n\le1$ , for $n\ge2$ , using induction. Then I showed that $(nx_n)_n$ is a nondecreasing sequence: $$\frac{(n+1)x_{n+1}}{nx_n}=\frac{n+1}{n+n^2x_n^2}\ge 1$$ Thus $(nx_n)_n$ is nondecreasing and upper bounded, so it is convergent with positive finite limit $l$ . Now consider $a_n=n$ and $b_n=\frac{1}{x_n}$ , so $(b_n)_n$ is a positive, increasing sequence with infinite limit. We have that $$\frac{a_{n+1}-a_n}{b_{n+1}-b_n}=\bigg(\frac{1}{x_{n+1}}-\frac{1}{x_n}\bigg)^{-1}=\frac{1}{nx_n}\to\frac{1}{l}$$ By the Stolz–Cesàro theorem, we obtain the identity $l=\frac{1}{l}$ , so the only possible value for $l$ is $l=1$ . The hardest
Once you have shown that $nx_n \le 1$ for $n \ge 2$ you can use the general form of the Stolz–Cesàro theorem: $$ \frac{1}{\liminf_{n \to \infty} nx_n} = \limsup_{n \to \infty} \frac{1}{nx_n} = \limsup_{n \to \infty} \frac{1/x_n}{n} \\ \le \limsup_{n \to \infty} \frac{1/x_{n+1}-1/x_n}{(n+1)-n} = \limsup_{n \to \infty} n x_n \le 1 $$ implies that $\liminf_{n \to \infty} n x_n \ge 1$ , so that the limit exists and is equal to one.
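As a numerical sanity check of the three claims ($nx_n\le 1$ for $n\ge 2$, monotonicity, and the limit $1$); the starting value $a=3$ and the checkpoints below are arbitrary choices:

```python
def n_x_n(a, N, checkpoints):
    # iterate x_{n+1} = x_n / (1 + n x_n^2) and record n * x_n at checkpoints
    x = float(a)
    recorded = {}
    for n in range(1, N):
        if n in checkpoints:
            recorded[n] = n * x
        x = x / (1 + n * x * x)
    return recorded

checks = n_x_n(3.0, 1_000_000, {2, 10, 100, 10_000, 999_999})
vals = [checks[n] for n in (2, 10, 100, 10_000, 999_999)]
assert all(v <= 1.0 for v in vals)   # n x_n <= 1 for n >= 2
assert vals == sorted(vals)          # (n x_n) is nondecreasing
assert vals[-1] > 0.99               # approaching the limit 1
```

The convergence is slow (roughly $1-O(\log n / n)$), which is why a large $N$ is used.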
|sequences-and-series|limits|convergence-divergence|recurrence-relations|
0
Difference of connections is $C^\infty(M)$-linear and the sum of a connection and a $1$-form is again a connection.
I'm trying to understand the following proposition. If $\nabla$ and $\nabla'$ are two connections on a vector bundle $E$ , then $\nabla-\nabla'$ is $C^\infty(M)$ -linear and can therefore be considered as an element of $\mathcal{A}^1(M,\text{End}(E))$ . If $\nabla$ is a connection on $E$ and $a \in \mathcal{A}^1(M,\text{End}(E))$ , then $\nabla + a$ is again a connection on $E$ . I'm able to verify the linearity directly by the Leibniz rule. Given a smooth function $f$ and a section $s$ of $E$ I have $$ (\nabla-\nabla')(fs)=(df\otimes s + f\nabla s) - (df\otimes s + f\nabla's)=f(\nabla-\nabla')(s). $$ Now generally a connection is an element $\nabla \in \text{Hom}_{C^\infty(M)}(\Gamma(TM),\Gamma(T^*M \otimes E))$ , but for some reason in this instance I should get the endomorphism bundle there. Is the tensor characterization lemma at play here in disguise? If so I would be glad to know how it is being used. For the second statement I'm not sure how it makes sense to sum a connection and a $1$-form.
An ${\rm End}(E)$ -valued $1$ -form is the same thing as a ${\rm C}^\infty(M)$ -bilinear map $A:\mathfrak{X}(M)\times \Gamma(E)\to \Gamma(E)$ , taking $X$ and $\psi$ to a section $A_X\psi$ . Now show that $$\nabla_X(f\psi) + A_X(f\psi) = f(\nabla_X\psi+A_X\psi) + X(f)\psi.$$ The key point is that $A_X(f\psi)= fA_X\psi$ instead of $A_X(f\psi) = fA_X\psi + X(f)\psi$ (which only holds for a connection).
|differential-geometry|
1
Probability of getting exactly one pair on rolling five dice
I have a game with five six-faced dice (normal dice). The player rolls all the dice and what I am trying to work out is the probability of rolling exactly two dice with the same number. For example, XXYYZ would not be allowed as it has two pairs. I originally had the following equation: $$\frac{6 \times 1 \times 5 \times 5 \times 5 \times 4}{6^5} = \frac{3000}{7776}$$ But after doing some simulations of the game I've found the value is somewhere around $67\%$ . Does anyone have any ideas to help me? I've tried thinking of it as a slot machine with 5 numbers and each reel has 6 numbers, but still no success. I plan to work out the probability of a triple later as well as a quadruple.
If you are trying to solve for other patterns also, it would be better to have a settled formula for finding the number of ways, like [ $\mathtt{ Lay\;Down\; Pattern}$ ] $\times$ [ $\mathtt{Permute\; Pattern}$ ] which involves multiplying two multinomial coefficients For the $2-1-1-1-0-0$ pattern it would be $$\binom{5}{2,1,1,1,0,0}\times \binom{6}{1,3,2}$$ or in the permutation form of the multinomial coefficient, where we can conveniently remove $1!'s$ and $0's$ $$\frac{5!}{2!}\times\frac{6!}{3!2!} = 3600$$ and $Pr = \dfrac{3600}{6^5} =\dfrac{25}{54}$ To illustrate the process for another pattern $2-2-1-0-0-0$ eg, in the full form, $$Pr = \binom{5}{2,2,1,0,0,0}\binom{6}{2,1,3}\Large/6^5$$
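The count $3600$ and the probability $25/54$ can be confirmed by brute force over all $6^5 = 7776$ equally likely outcomes:

```python
from itertools import product
from collections import Counter

# count rolls whose value multiplicities match the 2-1-1-1 pattern
count = sum(
    1
    for roll in product(range(1, 7), repeat=5)
    if sorted(Counter(roll).values()) == [1, 1, 1, 2]
)
probability = count / 6**5
assert count == 3600
assert abs(probability - 25 / 54) < 1e-15
```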
|probability|combinatorics|
0
About Euler-Lagrange equation and Beltrami identity. What is the error in the process?
If $F=y+y'$ , then Euler-Lagrange equation $\frac{\partial F}{\partial y}-\frac{d}{dx}(\frac{\partial F}{\partial y'})=0$ is not satisfied. Hence, we may conclude that there is no extremal for the functional concerned. However, since $F$ does not contain $x$ explicitly, we can use the Beltrami identity $F-y'\frac{\partial F}{\partial y'}=C$ and obtain $y=C$ as an extremal. Can anyone please explain what is the error in this process?
If the Lagrangian $F$ does not depend explicitly on the independent variable $x$ , then the Beltrami identity (BI) is a necessary (but not a sufficient condition) for a solution to the Euler-Lagrange (EL) equation . The BI can alternatively be viewed as a special case of Noether's theorem : The energy function is conserved along a solution. In OP's example it turns out that there are no solutions to the EL equation.
|calculus-of-variations|euler-lagrange-equation|
0
Proof of resonance giving a valid solution for an ODE?
During a differential equations class, we were given the example: $\frac{d^2y}{dx^2} -6\frac{dy}{dx} + 9y = 10e^{3x}$ . Solving the auxiliary equation gives: $(Ax+B)e^{3x}$ . When finding a value for the particular solution, we picked $y = Cx^2e^{3x}$ , so $\frac{dy}{dx} = 2Cxe^{3x} + 3Cx^2e^{3x}$ and $\frac{d^2y}{dx^2} = 2Ce^{3x} + 12Cxe^{3x} + 9Cx^2e^{3x}$ . Substituting these in gives us: $e^{3x}(2C + 12Cx + 9Cx^2 - 12Cx - 18Cx^2 + 9Cx^2)=10e^{3x}$ . By dividing through by $e^{3x}$ and canceling out like terms, we just get $C = 5$ . My question is: is there a proof for why we have to multiply $Ce^{3x}$ by $x^2$ ? I understand that without doing so, some coefficients cancel out and we get contradictions. I also understand that it's because in the RHS, the $e^{3x}$ is contained twice in the solution to the complementary function. I'm more looking for a proof of why resonance gives a valid solution, if the proof exists.
Hint: since $p(x)=10e^{3x}$ , one would try a particular solution of the form $(A+Bx+Cx^2)e^{3x}$ . But the auxiliary equation already gives $(A'x+B')e^{3x}$ , so you don't need to include the $(A+Bx)e^{3x}$ part in the particular solution: it is already contained in the complementary solution. Hence the particular solution has the form $Cx^2e^{3x}$ . Even if you assume a particular solution of the form $(A+Bx+Cx^2)e^{3x}$ , nothing changes.
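One can verify directly that $y=5x^2e^{3x}$ satisfies the original equation, using the hand-computed derivatives (a quick numerical check at a few arbitrary sample points):

```python
import math

def y(x):    # the particular solution y = 5 x^2 e^{3x}
    return 5 * x**2 * math.exp(3 * x)

def yp(x):   # y' = (10x + 15x^2) e^{3x}
    return (10 * x + 15 * x**2) * math.exp(3 * x)

def ypp(x):  # y'' = (10 + 60x + 45x^2) e^{3x}
    return (10 + 60 * x + 45 * x**2) * math.exp(3 * x)

for x in (-1.0, 0.0, 0.5, 1.3, 2.0):
    lhs = ypp(x) - 6 * yp(x) + 9 * y(x)
    rhs = 10 * math.exp(3 * x)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```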
|ordinary-differential-equations|
1
Calculate Riemann integral
Calculate Riemann integral: $$\int_{\frac{1}{3}}^{3} \frac{\arctan(x)}{x^2-x+1}dx $$ In this assignment, the integrand function does not have an elementary antiderivative. I used the substitution $x=\frac{1}{t}$ and the fact that $\arctan(x) +\arctan \left( \frac{1}{x} \right) = \frac{\pi}{2} \ (x > 0)$ and reduced the integral from the task to the following form: $$ \int_{\frac{1}{3}}^{3} \frac{\arctan(x)}{x^2-x+1}dx = \int_{3}^{\frac{1}{3}} \frac{\arctan(\frac{1}{t})}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt = \frac{\pi}{2} \cdot \int_{3}^{\frac{1}{3}} \frac{1}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt - \int_{3}^{\frac{1}{3}} \frac{\arctan(t)}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt= $$ $$ = \frac{\pi}{2} \cdot \int_{3}^{\frac{1}{3}} \frac{1}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt - \int_{\frac{1}{3}}^{3} \frac{\arctan(\frac{1}{x})}{x^2-x+1}dx \ \left( * \right) $$ I'm stuck on this. I calcu
Notice that $$\int_{1/3}^{3}\frac{\arctan(1/x)}{x^2-x+1}dx,\quad t = 1/x$$ $$=\int_{3}^{1/3}\frac{\arctan(t)}{(1/t)^2-(1/t)+1}\frac{-1}{t^2}dt$$ $$=-\int_{3}^{1/3}\frac{\arctan(t)}{1-t+t^2}dt$$ $$=\int_{1/3}^{3}\frac{\arctan(t)}{t^2-t+1}dt.$$ So, the (*) integral is the same as the original.
|calculus|integration|riemann-integration|
0
If $g$ is a contraction and $f+f'=g(f)$ then $f(t)$ has a limit as $t\to\infty$
I would like to prove the following statement: Let $g:\mathbb{R}^n\to\mathbb{R}^n$ be a contraction mapping. If a function $f:\mathbb{R}\to\mathbb{R}^n$ satisfies: $$f(t)+f'(t)=g(f(t))$$ then the limit $\lim_{t\to\infty}f(t)$ exists. Below I write a sketch of my solution to this problem. My question is: is there some simpler way to prove the theorem and is my proof correct? Let $L<1$ be the Lipschitz constant of $g$ . It's not hard to show that if $\alpha\in\mathbb{R}^n$ is a fixed point of $g$ then $|f(t)-\alpha|\leq\frac{1}{1-L}|f'(t)|$ so it's enough to show that $|f'(t)|\to 0$ . With some elementary calculation it can be proven that if $a,b\in\mathbb{R}^n$ satisfy $|a+b|\leq L|a|$ then $$\langle a,b\rangle\leq \frac{c}{2}|a|^2$$ where $c=\frac{L-1}{2-L}<0$ . Let $h\neq 0$ . Substituting $a=f(t+h)-f(t)$ , $b=f'(t+h)-f'(t)$ in the formula above we get: $$\langle f(t+h)-f(t),f'(t+h)-f'(t)\rangle\leq \frac{c}{2}|f(t+h)-f(t)|^2$$ Since $\frac{d}{dt}|f(t+h)-f(t)|^2=2\langle f(t+h)-f(t),f'(t+h)-f'(t)\rangle$
Let $\alpha$ be the fixed point of $g$ . Multiplying both sides of the equation by $e^t$ gives $$ (e^t(f(t)-\alpha))'=e^t(g(f)-g(\alpha)). \tag1$$ Now integrating (1) from $0$ to $t$ gives $$ f(t)-\alpha=(f(0)-\alpha)e^{-t}+e^{-t}\int_0^t(g(f)(s)-g(\alpha))e^sds. $$ So $$\begin{eqnarray} |f(t)-\alpha|&\le& |f(0)-\alpha|e^{-t}+Le^{-t}\int_0^t|f(s)-\alpha|e^sds\\ &\le& |f(0)-\alpha|e^{-t}+L\int_0^t|f(s)-\alpha|ds. \end{eqnarray}$$ By Gronwall's inequality ( https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality ), one has $$ |f(t)-\alpha|\le|f(0)-\alpha|e^{-t}\exp\left(\int_0^tLds\right)=|f(0)-\alpha|e^{-(1-L)t} $$ which gives the result when $t\to\infty$ .
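A numerical sketch of this decay estimate, using the arbitrary contraction $g(x)=\tfrac12\sin x$ (Lipschitz constant $L=\tfrac12$, fixed point $\alpha=0$) and a forward Euler integration:

```python
import math

g = lambda x: 0.5 * math.sin(x)   # contraction with L = 1/2, fixed point alpha = 0
L, alpha = 0.5, 0.0

dt, f, t = 1e-3, 2.0, 0.0         # f(0) = 2 is an arbitrary starting value
while t < 10.0:
    f += dt * (g(f) - f)          # forward Euler for f' = g(f) - f
    t += dt
    # Gronwall bound: |f(t) - alpha| <= |f(0) - alpha| e^{-(1 - L) t}
    assert abs(f - alpha) <= 2.0 * math.exp(-(1 - L) * t) * 1.01
```

The small slack factor only absorbs floating-point error; the discrete iterates themselves satisfy the bound since $\sin f \le f$ for $f \ge 0$.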
|ordinary-differential-equations|multivariable-calculus|solution-verification|contraction-mapping|
1
Proof of resonance giving a valid solution for an ODE?
During a differential equations class, we were given the example: $\frac{d^2y}{dx^2} -6\frac{dy}{dx} + 9y = 10e^{3x}$ . Solving the auxiliary equation gives: $(Ax+B)e^{3x}$ . When finding a value for the particular solution, we picked $y = Cx^2e^{3x}$ , so $\frac{dy}{dx} = 2Cxe^{3x} + 3Cx^2e^{3x}$ and $\frac{d^2y}{dx^2} = 2Ce^{3x} + 12Cxe^{3x} + 9Cx^2e^{3x}$ . Substituting these in gives us: $e^{3x}(2C + 12Cx + 9Cx^2 - 12Cx - 18Cx^2 + 9Cx^2)=10e^{3x}$ . By dividing through by $e^{3x}$ and canceling out like terms, we just get $C = 5$ . My question is: is there a proof for why we have to multiply $Ce^{3x}$ by $x^2$ ? I understand that without doing so, some coefficients cancel out and we get contradictions. I also understand that it's because in the RHS, the $e^{3x}$ is contained twice in the solution to the complementary function. I'm more looking for a proof of why resonance gives a valid solution, if the proof exists.
Let's take a step back and consider the initial homogeneous equation, namely $y'' - 6y' + 9y = 0$ . Its characteristic polynomial is given by $r^2 - 6r + 9 = (r-3)^2$ , which possesses a double root. In consequence, the homogeneous solutions are given by $e^{3x}$ , as expected, and $xe^{3x}$ , because the "usual" second solution would be $e^{3x}$ too, which coincides with the first one. Now, the particular solution is often constructed as an ansatz from the source term. Yet, in the present case, the source term on the RHS also coincides with the first homogeneous solution $e^{3x}$ , so that you need to introduce a prefactor $x^2$ in order to produce a new contribution to the general solution $-$ otherwise, the particular ansatz would only modify the constants of integration in front of the homogeneous solutions.
|ordinary-differential-equations|
0
Bounded sequence in vector valued space
I am working with a space of the form $E=L^{\infty}(0,\infty;X)$ , where $X$ is a non-reflexive Banach space. If a sequence $(x_n)$ is bounded in $E$ , how can one extract a weakly converging subsequence? I think that this is not always possible for an arbitrary Banach space $X$ ! I tried to assume that $X$ has a separable dual and use the Banach–Alaoglu theorem, but I did not succeed in identifying the dual of $E$ . What I am looking for is a property of $X$ under which one can extract a weakly converging subsequence. Any help will be appreciated.
As you thought, this is indeed not always possible. Since $X$ isn't reflexive, a classical consequence of the Mackey-Arens theorem gives you the existence of bounded subsets of $X$ which aren't weakly relatively compact in $X$ and the Eberlein-Šmulian theorem then gives you the existence of a bounded sequence $(y_n)_{n\in\mathbf N}$ of elements of $X$ which has no weakly convergent subsequence. For every $n\in\mathbf N$ , let $x_n$ be (the equivalence class in $L^\infty(0,\infty;X)$ of ) the constant function on $(0,\infty)$ taking everywhere the value $y_n$ . The sequence $(x_n)_{n\in\mathbf N}$ is a bounded sequence of $L^\infty(0,\infty;X)$ and I will show that it has no weakly convergent subsequence. Let's proceed by contradiction and suppose that $(x_n)_{n\in\mathbf N}$ has a subsequence $(x_{n_k})_{k\in\mathbf N}$ which converges weakly to some $x_\infty\in L^\infty(0,\infty;X)$ . By the Lebesgue differentiation theorem , $x_\infty$ has many Lebesgue points (maybe you only know t
|banach-spaces|
1
The ring of Laurent polynomials over a Noetherian ring is Noetherian
In the Wikipedia article about Laurent polynomials , there is this result: The ring of Laurent polynomials over a field is Noetherian. I was wondering if it would be enough to take Laurent polynomials over a Noetherian ring? For example, I think $\mathbb Z[X,X^{-1}]$ is Noetherian. Question : Let $R$ be a commutative Noetherian ring. Is $R[X,X^{-1}]$ Noetherian?
If $R$ is a Noetherian ring then $R[X]$ is a Noetherian ring (by Hilbert's basis theorem). Since $R[X,X^{-1}]$ is just the localization of $R[X]$ at the multiplicatively closed set $\{X^n: n \geq 0\}$ , it too must be Noetherian.
|abstract-algebra|ring-theory|commutative-algebra|noetherian|
1
Calculate Riemann integral
Calculate Riemann integral: $$\int_{\frac{1}{3}}^{3} \frac{\arctan(x)}{x^2-x+1}dx $$ In this assignment, the integrand function does not have an elementary antiderivative. I used the substitution $x=\frac{1}{t}$ and the fact that $\arctan(x) +\arctan \left( \frac{1}{x} \right) = \frac{\pi}{2} \ (x > 0)$ and reduced the integral from the task to the following form: $$ \int_{\frac{1}{3}}^{3} \frac{\arctan(x)}{x^2-x+1}dx = \int_{3}^{\frac{1}{3}} \frac{\arctan(\frac{1}{t})}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt = \frac{\pi}{2} \cdot \int_{3}^{\frac{1}{3}} \frac{1}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt - \int_{3}^{\frac{1}{3}} \frac{\arctan(t)}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt= $$ $$ = \frac{\pi}{2} \cdot \int_{3}^{\frac{1}{3}} \frac{1}{\frac{1}{t^2}-\frac{1}{t}+1} \left(-\frac{1}{t^2} \right) dt - \int_{\frac{1}{3}}^{3} \frac{\arctan(\frac{1}{x})}{x^2-x+1}dx \ \left( * \right) $$ I'm stuck on this. I calcu
Let $$I_1=\int_{\frac{1}{3}}^{3} \frac{\arctan(x)}{x^2-x+1}dx$$ and $$I_2=\int_{\frac{1}{3}}^{3} \frac{\arctan(\tfrac1x)}{x^2-x+1}dx.$$ In the first equality you showed that $I_1=I_2.$ Then, I guess you tried to do the trick $I_1=\frac{I_1+I_2}2...$ $$I_1=\frac12\int_{\frac{1}{3}}^{3} \frac{\arctan(x)+\arctan(\tfrac1x)}{x^2-x+1}dx\\ =\frac12\int_{\frac{1}{3}}^{3} \frac{\frac{\pi}2}{x^2-x+1}dx\\ =\frac{\pi}4\int_{\frac{1}{3}}^{3} \frac{4}{(2x-1)^2+3}dx\\ =\frac{\pi}{2\sqrt3}\arctan(\tfrac{2x-1}{\sqrt3})\vert_{1/3}^3\\ =\frac\pi{2\sqrt3}(\arctan(\tfrac5{\sqrt3})-\arctan(-\tfrac1{3\sqrt3}))\\ =\frac\pi{2\sqrt3}\arctan(4\sqrt3) $$
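The closed form can be checked against numerical quadrature (a simple composite Simpson rule; the number of subintervals is an arbitrary choice):

```python
import math

def f(x):
    return math.atan(x) / (x * x - x + 1)

def simpson(fn, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = fn(a) + fn(b)
    s += 4 * sum(fn(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(fn(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

numeric = simpson(f, 1 / 3, 3)
closed_form = math.pi / (2 * math.sqrt(3)) * math.atan(4 * math.sqrt(3))
assert math.isclose(numeric, closed_form, rel_tol=1e-8)
```

Both evaluate to roughly $1.2946$, confirming the final arctangent identity.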
|calculus|integration|riemann-integration|
1
How to justify expanding both f and g to find the limit of f/g?
If I can Taylor expand $f(x) = 2+ x+ x^2/2 +....$ and $g(x) = x/2 + x^2/3 + ....$ and I want to find the limit as $x\to0$ , why is it that I can write $$ \frac {f(x)}{g(x)} = \frac {2+ x+ x^2/2 + O(x^3)}{ x/2 + x^2/3 + O(x^3)} = \frac {2+ x+ x^2/2 }{ x/2 + x^2/3} + O(x^3) \to 2. $$ I can obviously see why it is intuitively true, but can someone explain the second equality rigorously?
Let us consider $$ f(x)= \sum_{i=0}^N a_i x^i + O\left(x^{N+1}\right) $$ where $a_i=\frac{f^{(i)}(0)}{i!}$ and $$ g(x)=\sum_{j=0}^M b_j x^j + O\left(x^{M+1}\right) $$ Let $I:=\min\{ i : a_i \ne 0\}$ be the order of $f(x)$ and similarly for $J$ , then $$ \frac{f(x)}{g(x)}= x^{I-J}\frac{\sum_{i= I}^N a_i x^{i-I} + O\left(x^{N+1-I}\right)}{\sum_{j= J}^M b_j x^{j-J} + O\left(x^{M+1-J}\right)} $$ By taking the limit $$ \lim_{x \to 0} \frac{f(x)}{g(x)}= \lim_{x \to 0} x^{I-J} \frac{a_I}{b_J} $$ where I have used that $$ \lim_{x \to 0} a_i x^{i-I}=0= \lim_{ x \to 0} b_j x^{j-J} $$ for $i>I$ and $j>J$ , $a_I \ne 0$ , $b_J \ne 0$ and the definition of big $O$ . In your example $I=0$ , $J=1$ and so $$ \lim_{x \to 0^+} \frac{f(x)}{g(x)}= \infty $$ $$ \lim_{x \to 0^-} \frac{f(x)}{g(x)}= -\infty $$
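A quick numerical check in the asker's example, using the truncated polynomials as stand-ins for $f$ and $g$ (here $I=0$, $J=1$, $a_I=2$, $b_J=\tfrac12$, so $f/g \sim 4/x$ near $0$):

```python
f = lambda x: 2 + x + x**2 / 2    # truncation of f: I = 0, a_I = 2
g = lambda x: x / 2 + x**2 / 3    # truncation of g: J = 1, b_J = 1/2

# f/g ~ x^(I-J) * a_I / b_J = 4/x near 0, so x * f(x)/g(x) -> 4 ...
for x in (1e-2, 1e-4, 1e-6):
    assert abs(x * f(x) / g(x) - 4) < 10 * x

# ... and f/g itself blows up with opposite signs on the two sides of 0
assert f(1e-6) / g(1e-6) > 1e5
assert f(-1e-6) / g(-1e-6) < -1e5
```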
|taylor-expansion|
0
Is the real Jordan form unique?
Let $A\in M_n(\mathbb{R})$ . I know that $A$ has a real Jordan form which is obtained from the complex Jordan form by using the usual Jordan blocks for the real eigenvalues and by associating to the complex conjugate eigenvalues $\lambda=a+bi$ and $\overline{\lambda}=a-bi$ block diagonal matrices formed by $2\times 2$ blocks of the form $\begin{pmatrix} a_i & b_i \\ -b_i & a_i \end{pmatrix}$ . I understand this and I am able to write the real Jordan form from the complex Jordan form by proceeding like this, but what I don't get is what kind of uniqueness property the real Jordan form has. I mean, what if instead of the block $\begin{pmatrix} a_i & b_i \\ -b_i & a_i \end{pmatrix}$ I use the block $\begin{pmatrix} a_i & -b_i \\ b_i & a_i \end{pmatrix}$ , i.e. what if I use $\overline{\lambda}$ instead of $\lambda$ ? Is this still a valid real Jordan form? If this is the case, I think that the real Jordan form is unique up to a permutation of the elements on the secondary diagonal of these blocks.
It depends. Let $R=\pmatrix{a&-b\\ b&a}$ . If the real Jordan form of $A$ contains a Jordan block of the form $$ \pmatrix{R&I\\ &R&I\\ &&\ddots&\ddots\\ &&&R&I\\ &&&&R}, $$ you may take the transposes of all copies of $R$ within the same Jordan block (and leave other copies of the same $R$ in other Jordan blocks unchanged) and still obtain a valid real Jordan form of $A$ . However, you cannot take the transposes of only some $R$ in a Jordan block and leave others in the same block unchanged. E.g. let $$ J_1=\left[\begin{array}{c|cc}R\\ \hline&R&I\\ &&R\end{array}\right],\quad J_2=\left[\begin{array}{c|cc}R\\ \hline&R^T&I\\ &&R^T\end{array}\right] \quad\text{and}\quad M=\left[\begin{array}{c|cc}R\\ \hline&R&I\\ &&\color{red}{R^T}\end{array}\right]. $$ Then $J_1$ is always similar to $J_2$ and both are valid real Jordan forms, but they are not always similar to $M$ . In particular, when $R=\pmatrix{0&-1\\ 1&0}$ , the minimal polynomial of $M$ is $x^2+1$ , but $J_1^2,J_2^2\ne-I$ . The mat
|linear-algebra|matrices|jordan-normal-form|
0
Inverse Mellin transform of the Mellin transform of the binomial PMF
The probability mass function of the Binomial distribution is given by $$\begin{equation} f(x)=\binom{n}{x} p^x (1-p)^{n-x}, \end{equation}$$ where $p \in [0,1]$ and $x=\{0,1,\dots,n\}$ (finite support). The Mellin transform of $f$ is therefore $$\begin{align} \mathcal{M}\{f\}(s)&=\int_0^{\infty}x^{s-1} \binom{n}{x} p^x (1-p)^{n-x}\mathrm{d}x\\ &=\sum_{k=0}^{n}k^{s-1} \binom{n}{k} p^k (1-p)^{n-k}. \end{align}$$ Since $f$ is a probability mass function, the integral becomes a sum. Now I'd like to apply the inverse Mellin transform to get back $f$ : $$\begin{align} \mathcal{M}^{-1}\{\mathcal{M}\{f\}\}(x) &= \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} x^{-s} \left( \sum_{k=0}^{n}k^{s-1} \binom{n}{k} p^k (1-p)^{n-k} \right) \mathrm{d}s \\ &=\frac{1}{2 \pi i} \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} \int_{c-i\infty}^{c+i\infty} x^{-s} k^{s-1} \mathrm{d}s. \end{align}$$ The problem is that the integral $\int_{c-i\infty}^{c+i\infty} x^{-s} k^{s-1} \mathrm{d}s$ doesn't seem to converge.
Thanks to Steven Clark for some clarification. Indeed the Dirac delta is very relevant here, I should've played around with it more. What happens is that \begin{align} \int_{c-i \infty}^{c+i\infty} x^{-s}k^{s-1} \mathrm{d} s &= \frac{1}{x} \int_{c-i \infty}^{c+i\infty} \left(\frac{k}{x}\right)^{s-1} \mathrm{d}s,\\ &=\frac{i}{x} \int_{-\infty}^{\infty} e^{it \ln \left({\frac{k}{x}} \right)} \mathrm{d}t,\\ &= \frac{2 \pi i}{x} \delta\left(\ln\left(\frac{k}{x}\right)\right),\\ &= 2 \pi i \delta \left(k-x\right). \end{align} and what I completely missed is that $\mathcal{M}^{-1}\{\mathcal{M}\{f\}\}(x)$ is a function of $x$ on the real number line , and that $\delta\left(\ln\left(\frac{k}{x}\right)\right) = x \delta \left(k-x\right)$ , which not only cancels the $x$ in the denominator that I could not get rid of, but also means that the integral is 0 whenever $x \neq k$ . So we get that \begin{align} \mathcal{M}^{-1}\{\mathcal{M}\{f\}\}(x) &=\frac{1}{2 \pi i} \sum_{k=0}^{n} \binom{n}{k} p^k
|integration|complex-analysis|binomial-distribution|dirac-delta|mellin-transform|
1
Birthday problem with shared birthdays among males and female students
There are $m$ male and $f$ female students in a class (where $m$ and $f$ are each less than 365) What is the probability that a male student shares a birthday with a female student? I have attempted the method suggested by Alex in his comment on the linked question . The number of ways to allocate dates for male and female students is $$365^m \times(365-m)^f$$ The total number of ways to allocate birthdays without restriction to $m+f$ students is $365^{m+f}$ . Hence, the probability of getting a shared birthday, using complementary probability is: $$1 - \frac{365^m \times(365-m)^f}{365^{m+f}}=1 - \frac{(365-m)^f}{365^{f}}$$ Is the approach/calculations correct? (Comment) Start with finding all ways of putting $k$ identical white balls into $n=365$ bins (each bin may contain up to $k$ balls). Then find the number of ways of putting $m$ identical black balls in the remaining bins $n−j,1≤j≤k$ bins. Then find $P(S^c)$ , the probability of these events. $1−P(S^c)$ is what you want
It will be easy to first compute the probability that no male and female share the same birthday. So we begin by computing the $u_{i}$ ways of having $m$ males among exactly $i$ known different birthdays. $u_{i}$ is then given by : \begin{eqnarray*} u_{i} &=&\sum_{\substack{ m_{1}+\cdots +m_{i}=m \\ m_{1}\wedge \cdots \wedge m_{i}\not=0}}\frac{m!}{m_{1}!\cdots m_{i}!} \\ &=&i^{m}-iu_{i-1}-\frac{i!}{2!\left( i-2\right) !}u_{i-2}-...-iu_{1} \\ &=&\sum_{k=1}^{i}\frac{\left( -1\right) ^{i-k}i!k^{m}}{\left( i-k\right) !k!} \end{eqnarray*} Let $v_{j}$ be the number of ways of having $f$ females among exactly $j$ known different birthdays. But there are $\frac{365!}{i!j!\left( 365-i-j\right) !}$ ways of choosing $i+j$ different birthdays, so the probability $P$ you are looking for is : \begin{eqnarray*} P=1-365^{-\left( m+f\right) }\sum_{\substack{ i\leq m \\ j\leq f}}\frac{365!}{i!j!\left( 365-i-j\right) !}u_{i}v_{j} \end{eqnarray*} Another method, after considering $u_{i}$ , is to comp
|probability|combinatorics|discrete-mathematics|contest-math|birthday|
0
$\sin(1/x)$: expectation vs infimum
I want to show that $$ \underset{Q \in \mathcal{P}^0}{\inf} \ \{\mathbb{E}_{x \sim Q} \sin(1/x)\} > \inf_{x \in \mathbb{R}}\{ \sin(1/x)\} = -1$$ holds where $\mathcal{P}^0$ is the set of all probability distributions supported on $\mathbb{R}$ . Is this possible? I don't have a good intuition of "inf" over the set of all probability distributions.
Take any probability distribution $X$ on $\mathbb{R}$ , let its associated density function be $f_X$ . Then, $$\mathbb{E}(\sin(1/X))=\displaystyle\int_{-\infty}^{+\infty}\sin(1/x) f_X(x) dx$$ So, it immediately follows that $$|\mathbb{E}(\sin(1/X))|=\left|\displaystyle\int_{-\infty}^{+\infty}\sin(1/x) f_X(x) dx\right|\leq \displaystyle\int_{-\infty}^{+\infty}|\sin(1/x) f_X(x)|dx\leq \displaystyle\int_{-\infty}^{+\infty} f_X(x) dx=1.$$
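A Monte Carlo sanity check that such expectations stay in $[-1,1]$, for a few arbitrary distributions of my own choosing whose support avoids $0$:

```python
import math, random

random.seed(1)

def expect_sin_inv(sample, n=50_000):
    # Monte Carlo estimate of E[sin(1/X)] for X drawn via `sample`
    return sum(math.sin(1 / sample()) for _ in range(n)) / n

# a few arbitrary distributions whose support avoids 0
dists = [
    lambda: random.uniform(0.01, 1.0),
    lambda: random.uniform(-1.0, -0.01),
    lambda: random.expovariate(5.0) + 1e-9,
]
estimates = [expect_sin_inv(d) for d in dists]
assert all(-1.0 <= e <= 1.0 for e in estimates)
```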
|analysis|probability-distributions|curves|supremum-and-infimum|
0
algebraic equation with powers
The original statement of the problem : Solve in $\mathbb R$ the following equation : $$ a^{log_b x^2 } + a^{log_x b^2 } = a^{1+log_b x } + b^{1+log_x b } $$ where $a,b>0$ and $b \neq 1$ A more general statement of the problem : Find all real solutions $t$ to the equation $ a^{2t} + a^{\frac{2}{t}} = a^{1+t} + a^{1+ \frac{1}{t}} $ where $a$ is a strictly positive real constant. EDIT I hope I did not complicate the problem by making these transformations to the second equation; I am especially interested in solving the original one, so that's more important. My approach : First of all, if $a = 1$ , it is clear that any real $t$ satisfies the given equation. In the following, I assumed that $a > 1$ to simplify; the case $0 < a < 1$ can then be treated analogously. I tried to prove that $t = 1$ is the only solution of the equation for any $a$ different from $1$ . I transformed the equation into $a^{t}(a^{t}-a) + a^{\frac{1}{t}}(a^{\frac{1}{t}}-a)=0$ , where I was going to use some inequalities and prove that $t=1$ is the only solution.
Solution for $a > 1$ : As you mentioned, we begin with $a > 1$ . Let us consider $t \leq 0$ . Clearly $a^t - a < 0$ and $a^{1/t} - a < 0$ , so the equation clearly does not have a solution for $t \leq 0$ . So let us focus on $t > 0$ . Define $f(t) = a^t(a^t - a)$ . It is straightforward to note that $f$ is a strictly increasing, convex function. We are interested in the equation $g(t)=0$ , where $$ g(t) = f(t) + f\left(\frac{1}{t} \right).$$ For any $p > 0$ , if $t' > 0$ is a solution to $g(t) = p$ , then so is $1/t'$ . Since $1/x$ is a convex function and $f$ is increasing, $g$ is a strictly convex function. To show that, let $h(t) = 1/t$ for simplicity. For any $\lambda \in [0,1]$ and $x,y > 0$ , we have, \begin{align*} g(\lambda x + (1-\lambda)y) & = f(h(\lambda x + (1-\lambda)y)) + f(\lambda x + (1-\lambda)y) \\ & \leq f(h(\lambda x + (1-\lambda)y)) + \lambda f(x) + (1-\lambda)f(y) \\ & \leq f(\lambda h(x) + (1-\lambda)h(y)) + \lambda f(x) + (1-\lambda)f(y) \\ & \leq \lambda f(h(x)) + (1-\lambda)f(h(y)) + \lambda f(x) + (1-\lambda)f(y) = \lambda g(x) + (1-\lambda)g(y). \end{align*} In the second and fourth lines, we used convexity of $f$ , and in the third line we used convexity of $h$ and that $f$ is strictly increasing.
|algebra-precalculus|
1
Getting the right average rate of growth
I am trying to get the average rate of growth across a series of numbers that will lead to the same sum as the discrete rates of growth. For example, if I have 1 unit at $t=1$ that grows at 2%, then 3%, then 2% again the resulting sum of units would be $$1+1\cdot(1.02)+1\cdot(1.02)\cdot(1.03)+1\cdot(1.02)\cdot(1.03)\cdot(1.02)=4.142212$$ Unless I'm making a calculative error, I've found that the arithmetic, geometric, and harmonic averages all lead to an incorrect sum when scaled linearly over the same 4 periods. Is there a method to find the correct average growth rate that will give the same sum? First time question asker, please forgive the potentially convoluted question. Welcome any tips on better question formatting.
Let's introduce some notation to describe your question. Suppose we perform $N$ measurements of a time-dependent quantity $x(t)$ . Denote the observed value of $x$ at time $t_i$ as $x_i=x(t_i)$ for $i \in \{1,2,...,N\}$ . Then define $r_{i+1}=\frac{x_{i+1}}{x_i}$ as the fractional increase of $x$ between times $t_i$ and $t_{i+1}$ for $i \in \{1,2,...,N-1\}$ . Moreover, it is convenient to assign $r_1=1$ . It follows by iterative application of this definition that $x_i=x_1 \prod_{j=1}^{i} r_j$ , for all $i \in \{1,2,...,N\}$ . The (unweighted) average of $x$ across the $N$ measurements is $$\bar{x}=\frac{1}{N} \sum_{i=1}^{N} x_i=\frac{x_1}{N} \sum_{i=1}^{N} \prod_{j=1}^{i} r_j\tag{1}.$$ The metric you are considering for the average fractional change is then $$\bar{r}=\frac{\bar{x}}{x_1}=\frac{1}{N} \sum_{i=1}^{N} \prod_{j=1}^{i} r_j \tag{2}.$$ For computational purposes, it is useful to note that $\sum_{i=1}^{N} \prod_{j=1}^{i} r_j =r_1\left(1+r_2\left(1+\cdots r_{N-1}\left(1+r_N\right)\cdots\right)\right)$ .
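The constant rate that reproduces the same total can also be found numerically, e.g. by bisection, since the total is increasing in the rate (a sketch using the asker's example numbers):

```python
def total_units(rates):
    # sum of units when 1 unit at t=1 grows by the given per-period rates
    total, units = 1.0, 1.0
    for r in rates:
        units *= 1 + r
        total += units
    return total

target = total_units([0.02, 0.03, 0.02])      # 4.142212, as in the example
assert abs(target - 4.142212) < 1e-9

# bisect for the constant rate r with total_units([r, r, r]) == target
lo, hi = 0.0, 0.05
for _ in range(60):
    mid = (lo + hi) / 2
    if total_units([mid] * 3) < target:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2
assert 0.02 < r < 0.03                        # strictly between the given rates
assert abs(total_units([r] * 3) - target) < 1e-9
```

The resulting $r$ is neither the arithmetic, geometric, nor harmonic mean of the individual rates, which matches the observation in the question.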
|average|
1
Prove the following relationship of the number e and then show that it is irrational
I have been trying to show the following fact: $$e -(1+\frac{1}{1!}+ ... + \frac{1}{n!}) < \frac{1}{n \cdot n!}$$ My steps: it seems reasonable to me to try to do this problem by induction, but when I tried it I came up with the following: inductive hypothesis: the inequality holds for $n$ . For the case $n+1$ we will have $$e -(1+\frac{1}{1!}+ ... + \frac{1}{n!} + \frac{1}{(n+1)!}) = e -(1+\frac{1}{1!}+ ... + \frac{1}{n!}) - \frac{1}{(n+1)!}$$ but in the last inequality I got a bound involving $n+1$ , while I am looking for it to be divided by $n$ ; how could I complete the proof? Any clues or ideas? Finally, once this inequality is obtained, the rest is simple: multiply the inequality by $n!$ ; assuming $e$ is rational, we can consider an $n$ large enough such that there would have to be an integer between $0$ and $1$ , which is a contradiction.
It appears you need $ \frac{1}{(n+1)!} + \frac{1}{(n+2)!} + ... $ , written as $$ \frac{1}{n!} \left( \frac{1}{n+1} + \frac{1}{(n+1)(n+2)} + \frac{1}{(n+1)(n+2)(n+3)} +... \right) < \frac{1}{n!} \left( \frac{1}{n+1} + \frac{1}{(n+1)^2} + \frac{1}{(n+1)^3} +... \right). $$ The geometric series sums to $$ \frac{\frac{1}{n+1}}{1 - \frac{1}{n+1} } = \frac{\frac{1}{n+1}}{ \frac{n+1 -1}{n+1} } = \frac{\frac{1}{n+1}}{ \frac{n}{n+1} } = \frac{1}{n}$$ so $$ \frac{1}{(n+1)!} + \frac{1}{(n+2)!} + ... < \frac{1}{n \cdot n!}. $$
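A quick numerical check of the resulting bound $0 < e - \sum_{k=0}^n \frac{1}{k!} < \frac{1}{n\cdot n!}$ for small $n$:

```python
import math

def gap(n):
    # e minus the partial sum 1 + 1/1! + ... + 1/n!
    return math.e - sum(1 / math.factorial(k) for k in range(n + 1))

for n in range(2, 15):
    assert 0 < gap(n) < 1 / (n * math.factorial(n))
```

(The range stops where the gap approaches double-precision roundoff.)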
|real-analysis|sequences-and-series|analysis|
1
Pullback of a 2-form
Let $(\Sigma,\omega)$ be a closed surface equipped with an area form. For any $\phi:\Sigma \to \Sigma$ such that $\phi^{*}\omega = \omega$ (i.e., area-preserving), we define its mapping torus $M_{\phi}$ to be $[0,1] \times \Sigma $ by identifying $ (0,x) \sim (1,\phi^{-1}(x))$ . Note that the pullback of $\omega$ to $[0,1] \times \Sigma$ descends to a two-form $\omega_{\phi}$ on $M_{\phi}$ , by the area-preserving assumption. Now, I want to compare the geometry between the mapping tori of two Hamiltonian isotopic maps $\phi, \phi'$ . That is, say $\phi' = \phi \circ \varphi_{H}^1$ , where $\varphi_{H}^{1}$ is the time-1 map of some time-dependent Hamiltonian. We can define a map \begin{align*} [0,1] \times \Sigma &\to [0,1] \times \Sigma\\ (t,x) &\to (t,(\varphi_{H}^t)^{-1}(x)) \end{align*} which descends to a diffeomorphism $\Psi_H: M_{\phi} \to M_{\phi'}$ . Now, I want to show that $\omega_{\phi'}$ pulls back to $\omega_{\phi} + dH \wedge dt$ under this diffeomorphism. My thoughts s
Let me work upstairs on $[0, 1]\times \Sigma$ before taking the quotients to form the mapping tori. So we have \begin{align} \psi_H : [0, 1]\times \Sigma&\to [0, 1]\times \Sigma;\\ (t, x)&\mapsto (t, (\varphi_H^t)^{-1}(x)). \end{align} Let me denote the pullback as $\tilde\omega = \pi_2^*\omega\in \Omega^2([0, 1]\times\Sigma)$ . Now we want to show the corresponding statement $$ \psi_H^*(\tilde \omega) = \tilde \omega + dH\wedge dt. $$ By the decomposition $\Omega^2([0, 1]\times \Sigma) = \Omega^2(\Sigma) \oplus \Omega^1(\Sigma)\wedge \Omega^1([0, 1])$ , we see that $$ \psi_H^*(\tilde \omega) = \sigma + \eta\wedge dt. $$ Note that $$ \psi_H^*(\tilde \omega)=\psi_H^*\pi_2^*(\omega)=\big((\varphi^t_H)^{-1}\big)^*\omega. $$ To compute $\sigma$ , we restrict it to $\{t_0\}\times \Sigma$ . Note that $$ \big((\varphi^t_H)^{-1}\big)^*\omega\big|_{\{t_0\}\times \Sigma}=\big((\varphi^{t_0}_H)^{-1}\big)^*\omega=\omega, $$ since $\varphi^t_H$ is Hamiltonian, hence symplectic. Therefore, $$ \sigma = \omega. $$
|differential-geometry|differential-forms|symplectic-geometry|
1
Can both of these linear operators be unbounded?
Suppose $A$ , $B$ , $C$ , and $D$ are compact and injective linear operators in $L_2[0,1]$ and that $AB^{*}=CD^{*}$ . Is it possible that both $D^{-1}B$ and $C^{-1}A$ are unbounded?
Yes, that's possible. Choose an ONB $(e_n)_{n\geq 1}$ of $L^2([0,1])$ and let \begin{align*} Ae_n=\begin{cases}n^{-1}e_n&\text{if }n\text{ even},\\ n^{-2}e_n&\text{if }n\text{ odd},\end{cases}\\ Be_n=\begin{cases}n^{-2}e_n&\text{if }n\text{ even},\\ n^{-1}e_n&\text{if }n\text{ odd}.\end{cases} \end{align*} Both are self-adjoint compact injective operators and they commute. With $C=B$ and $D=A$ we get $AB=CD$ and $$ D^{-1}Be_n=ne_n $$ for $n$ odd and $$ C^{-1}Ae_n=ne_n $$ for $n$ even. Hence both $D^{-1}B$ and $C^{-1}A$ are unbounded.
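A finite-dimensional sketch of this construction (truncating to the first $N$ basis vectors and representing the diagonal operators by their eigenvalue sequences; $N$ is an arbitrary choice):

```python
import numpy as np

# On e_1..e_N, A and B act diagonally with entries 1/n or 1/n^2 by parity.
N = 10
n = np.arange(1, N + 1)
a = np.where(n % 2 == 0, 1.0 / n, 1.0 / n**2)  # eigenvalues of A
b = np.where(n % 2 == 0, 1.0 / n**2, 1.0 / n)  # eigenvalues of B
c, d = b, a                                     # C = B, D = A (by construction)
assert np.allclose(a * b, c * d)                # AB = CD on every e_n
print(b / d)  # entries of D^{-1}B: equal to n for odd n, so unbounded in N
```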
|functional-analysis|operator-theory|compact-operators|
1
Approximating continuous functions in L^p by nice functions
Problem: Let $X$ be a separable complete metric space equipped with a Borel measure that is inner and outer regular and sigma-finite, and let $F$ be a closed subset of $X$ . Assume $f\in L^p(X)$ is such that $f|_{F}=0$ . Show that for every $\epsilon>0$ there exists a continuous function $g\in L^p(X)$ such that $g|_{F}=0$ and $\|f-g\|_{L^p}<\epsilon$ . We may use standard facts; otherwise, if the statement is false, I must find a counterexample. Attempt: I thought that since $F$ is closed, maybe I should consider $f_n(x)=\frac{d(x,F)}{n}$ or a variation; note that $f_n(x)\rightarrow 0$ here. But I am stuck. Alternatively, functions in $L^p(X)$ can be approximated by continuous functions with compact support, so one would need their support to lie in $F^{c}$ . But I don't know how to finish. What about $L^1$ or $L^2$ ?
This is too long for a comment. You can do this if you assume that $F$ is compact and $\mu$ is a Radon measure on $X$ ; from here, it may be possible to generalize to the case of general closed $F$ . Furnish a sequence of continuous functions $\{f_n\}$ with $f_n \to f$ in $L^p(\mu)$ . Let $\{U_n\}$ be a decreasing sequence of open sets containing $F$ such that $\mu(U_n\setminus F) \to 0$ as $n\to \infty$ . Put $\rho_n = \frac{d(x,F)}{d(x,U_n^c) + d(x,F)}$ . Then $\rho_n$ is continuous, $0 \leq \rho_n \leq 1$ , $\rho_n = 1$ outside of $U_n$ , and $\rho_n = 0$ on $F$ . It follows that $$\|f - \rho_nf_n\|_p^p = \int_{U_n^c}|f-f_n|^p\,d\mu + \int_{U_n\setminus F}|f-\rho_nf_n|^p\,d\mu$$ since $f$ and $\rho_n$ vanish on $F$ . First, observe $$\lim_{n\to \infty} \int_{U_n^c}|f-f_n|^p\,d\mu \leq \lim_{n\to \infty}\|f - f_n\|_p^p = 0.$$ Now we need to estimate $\int_{U_n\setminus F}|f-\rho_nf_n|^p\,d\mu$ . For concision, label $h_n = \chi_{U_n\setminus F}$ , so that the remaining term is $\int |f-\rho_nf_n|^p\,h_n\,d\mu$ .
|real-analysis|functional-analysis|measure-theory|
1
Is consistency sufficient for existence?
In his Mathematical Analysis I , Zorich says the following after introducing the reals axiomatically: In relation to any abstract system of axioms, at least two questions arise immediately. First, are these axioms consistent? That is, does there exist a set satisfying all the conditions just listed? This is the problem of consistency of the axioms. Second, does the given system of axioms determine the mathematical object uniquely? That is, as the logicians would say, is the axiom system categorical? Here uniqueness must be understood as follows... Now I am no logician or philosopher so I certainly don't expect, want, or need the full treatment, but I am wondering if it is at all "controversial" to assume that consistency implies existence in the sense which seems to be tacit in Zorich passing from First, are these axioms consistent? to using "that is" in That is, does there exist a set satisfying all the conditions just listed?
Gödel's perhaps-unfortunately-named completeness theorem gives a precise positive answer: if $T$ is a set of (first-order) sentences and every model of $T$ satisfies the (first-order) sentence $\varphi$ , then in fact there is a proof of $\varphi$ from $T$ . More snappily, we have $$T\vdash\varphi\iff T\models\varphi$$ (technically completeness is the right-to-left direction, with the left-to-right direction being soundness , but soundness is so trivial it's often subsumed by completeness). In particular, let $T$ be "any abstract system of axioms" (as long as they're first-order) and let $\varphi$ be $\perp$ , the always-false sentence. Then " $T\models\varphi$ " means "Every model of $T$ satisfies the always-false sentence," which is another way of saying " $T$ is unsatisfiable;" meanwhile, " $T\vdash\varphi$ " is just another way of saying " $T$ is inconsistent." Contrapositing, we get that if $T$ is consistent then $T$ is satisfiable (= has a model). (Note that "consistent iff satisfiable" is precisely the sense in which consistency implies existence.)
|axioms|
1
Is every linear functional of L1 the pointwise limit of a sequence of continuous linear functionals?
Is the following true? For any linear functional $A$ from $L_1(S,\Sigma,\mu)\to\mathbb{R}$ there is a sequence of continuous linear functionals $\{B_k\}_{k=1}^\infty$ so that for all $f\in L_1(S,\Sigma,\mu)$ , $\lim_{k\to\infty} B_k[f]=A[f]$ . Might it help if $\mu$ is a probability measure?
As mentioned in the comments, this is not true for sequences. It is true however if you allow for nets, and the construction works abstractly (i.e. you can replace $L^1(S,\mu)$ by an arbitrary normed space). For a finite-dimensional subspace $F$ of $L^1(S,\mu)$ the restriction $A|_F$ is continuous. By the Hahn-Banach theorem, $A|_F$ can be extended to a continuous linear functional $B_F\in L^1(S,\mu)^\ast$ . The finite-dimensional subspaces of $L^1(S,\mu)$ form a directed set ordered by inclusion, hence we get a net $(B_F)$ . If $f\in L^1(S,\mu)$ , then $B_F(f)=A(f)$ whenever $f\in F$ by the construction of $B_F$ . Thus $B_F(f)\to A(f)$ .
|functional-analysis|lp-spaces|
0
I want to know the expectation of a product of independent beta distributed random variables.
I have an equation of the form $$Z = \frac{\prod_{i=1}^p X_i}{\prod_{i=1}^p Y_i}$$ where $$X_i \sim \mathcal{B}(\alpha_{x_i},\beta_{x_i})$$ and $$Y_i \sim \mathcal{B}(\alpha_{y_i},\beta_{y_i})$$ $X_i$ and $Y_i$ are independent beta distributed random variables. I want to know $$E[Z]$$ in terms of $$\alpha_{x_i},\beta_{x_i}, \alpha_{y_i},\;and\;\beta_{y_i}\;\forall\;i\in[1,p]$$ The product of expectations is problematic from a computer science perspective because the estimates of the $X_i$ 's and $Y_i$ 's can be 0. Also, $$X_i = P(U_i=a_i|V=0)$$ , where $U_i$ and $V$ are binary variables, and $a_i$ is either 0 or 1, which means $X_i$ is a probability. I used the beta distribution because this conditional probability, $X_i$ , can be estimated from the count of instances of the events { $U_i$ =0 and V=0} and { $U_i$ =1 and V=0}. Formally, let C(A) be the counting function of instances of event A. Then, $$X_i \sim \mathcal{B}(C(U_i=a_i\;and\;V=0)+1, C(U_i=\neg a_i\;and\;V=0)+1),$$ and similarly for $Y_i$ .
Let $$X \sim \operatorname{Beta}(a,b), \quad a, b > 0.$$ Then $$\begin{align} \operatorname{E}[X^k] &= \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_{x=0}^1 x^{a+k-1}(1-x)^{b-1} \, dx \\ &= \frac{\Gamma(a+b)\Gamma(a+k)}{\Gamma(a+b+k)\Gamma(a)} \frac{\Gamma(a+b+k)}{\Gamma(a+k)\Gamma(b)} \int_{x=0}^1 x^{a+k-1} (1-x)^{b-1} \, dx \\ &= \frac{\Gamma(a+b)\Gamma(a+k)}{\Gamma(a+b+k)\Gamma(a)}. \tag{1}\end{align}$$ For $k = 1$ , this yields $$\operatorname{E}[X] = \frac{a}{a+b}. \tag{2}$$ For $k = -1$ , this yields $$\operatorname{E}[1/X] = \frac{a+b-1}{a-1}, \quad a > 1, \tag{3}$$ where the condition arises since the integral $(1)$ diverges if $a \le 1$ . Consequently, $$\operatorname{E}[Z] \overset{\text{ind}}{=} \prod_{i=1}^p \operatorname{E}[X_i]\operatorname{E}[1/Y_i] = \prod_{i=1}^p \frac{\alpha_{x_i}}{\alpha_{x_i} + \beta_{x_i}} \frac{\alpha_{y_i} + \beta_{y_i} - 1}{\alpha_{y_i} - 1}, \tag{4}$$ where we require $\alpha_{y_i} > 1$ for all $i \in \{1, \ldots, p\}$ .
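Formula $(4)$ is easy to sanity-check by simulation; here is a sketch using NumPy (the parameter values are arbitrary, with $a > 1$ so the inverse moment exists):

```python
import numpy as np

# Monte Carlo check of E[X] = a/(a+b) and E[1/X] = (a+b-1)/(a-1) for a > 1.
rng = np.random.default_rng(0)
a, b = 3.0, 2.0
x = rng.beta(a, b, size=1_000_000)
assert abs(x.mean() - a / (a + b)) < 1e-2
assert abs((1 / x).mean() - (a + b - 1) / (a - 1)) < 1e-2
print("simulation agrees with the closed-form moments")
```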
|probability|probability-theory|probability-distributions|conditional-probability|beta-function|
0
How to show this matrix limit?
Let $A$ be an $m\times n$ ($m<n$) full-rank matrix and let $\mu>0$ . I am interested in showing that $$\lim_{\mu\to 0} (A^TA+\mu I)^{-1}A^T= A^T(AA^T)^{-1},$$ where $I$ is the identity matrix. Could you provide insights or a step-by-step proof for this limit expression?
Let $\:A,B^T\in{\mathbb R}^{m\times n}\:$ and assume the existence of a function $f(z)$ which is well defined for both of the matrix arguments $(AB)$ and $(BA)$ . Then there is a wonderful (yet almost trivial) identity due to Higham: $$f(BA)\;B = B\;f(AB)$$ In the current problem, take $f(z) = (z+\mu)^{-1}$ and $B=A^T$ to obtain $$ \lim_{\mu\to0}\;(A^TA+\mu I)^{-1}A^T = \lim_{\mu\to0}\,A^T(AA^T+\mu I)^{-1} = A^T(AA^T)^{-1}. $$
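The limit is easy to confirm numerically; a sketch with a random full-rank $3\times 5$ matrix (sizes and the $\mu$ values are arbitrary choices):

```python
import numpy as np

# Check (A^T A + mu I)^{-1} A^T -> A^T (A A^T)^{-1} as mu -> 0
# for a random full-row-rank wide matrix A (m < n).
rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))
pinv = A.T @ np.linalg.inv(A @ A.T)   # the right pseudo-inverse of A
errs = []
for mu in [1e-2, 1e-4, 1e-6]:
    approx = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T)
    errs.append(np.linalg.norm(approx - pinv))
print(errs)  # decreasing toward 0
```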
|limits|matrix-calculus|matrix-analysis|
0
Fundamental solution of Poisson equation on the torus
The potential of a point charge placed at $y\in\mathbb{R}^N$ , for $N=1,2,3$ , is (see e.g. this MathSE post): \begin{align} &\mathbb{R}^3: \qquad \nabla^2 \phi(x) = \delta^3(x-y) \quad \Rightarrow \quad \phi(x) = \frac{-1}{4 \pi |x-y|} \qquad \\ &\mathbb{R}^2: \qquad \nabla^2 \phi(x) = \delta^2(x-y) \quad \Rightarrow \quad \phi(x) = \frac{1}{2\pi}\log|x-y| \qquad \\ &\mathbb{R}^1: \qquad \frac{d^2}{dx^2} \phi(x) = \delta(x-y) \quad \Rightarrow \quad \phi(x) = \frac{1}{2}|x-y| \qquad \end{align} The above formulae define the fundamental solution to the Poisson equation with boundary conditions at infinity $\Phi \rightarrow 0$ . How does this work with periodic boundary conditions? Sum over the lattice: Imagine that we have to find the fundamental solution to the Poisson equation on a flat torus: instead of $\mathbb{R}^N$ we have to solve the above equations on $T^N$ , the $N$ -dimensional flat torus , provided that we "regularize" the problem by also adding a constant neutralising negative charge density.
I'll add a background uniform charge to neutralise the point source so that the problem is well posed. I'll also rather consider the Green functions of $-\Delta$ , which is positive. Actually, in dimensions $D\geq 2$ , there is an ambiguity in the flat torus, as you have many types of lattices up to a similarity (the Laplacian being invariant under similarities). Each equivalence class of lattices will correspondingly have its own fundamental solution, and these are not equivalent up to a simple change of variable. In 1D, there is only one class; your domain is $\mathbb R/\mathbb Z$ with no loss of generality. You can calculate the fundamental solution directly: $$ -\phi'' = \delta-1 $$ or equivalently solve $$ \phi'' = 1 $$ in the unit interval $(0,1)$ with the jump condition $$ \phi'(0^+)-\phi'(0^-) = -1, $$ so: $$ \phi = -\frac{x(1-x)}2 $$ if you set $\phi(0)=0$ , which agrees with your formula on the line for $x\ll1$ . In 2D, you already have your answer.
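The 1D computation can be checked with a periodic finite-difference Laplacian (a quick sketch; the grid size $N$ is an arbitrary choice, and least squares is used because the periodic Laplacian is singular on constants):

```python
import numpy as np

# Discrete check on R/Z: solve -phi'' = delta - 1 with a periodic
# second-difference matrix and compare to phi(x) = -x(1-x)/2 (up to a constant).
N = 200
h = 1.0 / N
x = np.arange(N) * h
L = (2 * np.eye(N) - np.roll(np.eye(N), 1, 0) - np.roll(np.eye(N), -1, 0)) / h**2
rhs = -np.ones(N)
rhs[0] += N                        # discrete delta at x = 0 has height 1/h = N
phi = np.linalg.lstsq(L, rhs, rcond=None)[0]
phi -= phi.mean()                  # fix the additive constant
exact = -x * (1 - x) / 2
exact -= exact.mean()
assert np.max(np.abs(phi - exact)) < 1e-2
print("matches -x(1-x)/2 up to a constant")
```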
|partial-differential-equations|differential-topology|dirac-delta|poissons-equation|poisson-summation-formula|
0
Why can we suppose a scheme is locally local affine, as an improvement to locally affine?
$\newcommand{\spec}{\operatorname{Spec}}\newcommand{\O}{\mathcal{O}}$ Liu states the following exercise: Let $S$ be a locally Noetherian scheme. Let $X,Y$ be $S$ -schemes of finite type and $s\in S$ . Then all morphisms of $S$ -schemes $\phi:X\times_S\spec\O_{S,s}\to Y\times_S\spec\O_{S,s}$ arise as the base change of some $f:X\times_SU\to Y\times_SU$ along $\spec\O_{S,s}\to U$ where $U$ is some open neighbourhood of $s$ . Ok, sure, I believe you. What I don't believe is how this applies to make the following reduction in his proof of proposition $4.2$ : Let $Y$ be a locally Noetherian scheme, $f:X\to Y$ a separated morphism of finite type such that $\O_Y\to f_\ast\O_X$ is an isomorphism. Then there is an open $V$ of $Y$ such that $f^{-1}(V)\to V$ is an isomorphism and $X_y$ has no isolated points for $y\in Y\setminus V$ . It suffices to show for every $y\in Y$ for which $X_y$ has an isolated point $x$ , there is an open subset $W\ni y$ where $f^{-1}(W)\to W$ is an isomorphism.
There is indeed a shortcut taken here. The statement is as follows: if $X,Y$ are schemes of finite type over some locally Noetherian scheme $S$ , and if $f: X \rightarrow Y$ is an isomorphism after base changing by $\operatorname{Spec}{\mathcal{O}_{S,s}} \rightarrow S$ for some $s \in S$ , then $f$ is an isomorphism after base changing by some open immersion $U \rightarrow S$ with $s \in U$ . Let’s prove it. Step 1 : We assume that $f$ is a closed immersion and we prove the statement. Proof : we can assume that $S$ , then $Y$ (hence $X$ ) is affine. Let’s say that $A=\mathcal{O}(S)$ , $B=\mathcal{O}(Y)$ and $\mathcal{O}(X)=B/I$ . The assumption is that there is a prime $\mathfrak{p} \subset A$ such that $B_{\mathfrak{p}} \rightarrow (B/I)_{\mathfrak{p}}$ is an isomorphism. In other words, $I_{\mathfrak{p}}=0$ . Since $I$ is finitely generated, there is some $a \in A \backslash \mathfrak{p}$ with $I_a=0$ . Hence $f$ is an isomorphism above the open subset $D(a) \subset S$ .
|algebraic-geometry|commutative-algebra|
1
If $ρ_{AB} ∈ D(H_A \otimes H_B)$ such that $ρ_{A}$ is pure. $\implies ρ_{AB} = ρ_{A} \otimes ρ_{B}$
Let $H_A, H_B, H_C$ be arbitrary Hilbert spaces. Let $ρ_{AB} ∈ D(H_A \otimes H_B)$ such that $ρ_{A}$ is pure. Prove that $ρ_{AB} = ρ_{A} \otimes ρ_{B}$ ( Hint: One way could be to prove it first for when $ρ_{AB}$ is pure and then use a purification to reduce to this case.) I haven't been able to go very far with this. If $\rho_A$ is pure, there exists a unit vector $|\psi_A\rangle$ s.t. $\rho_A =|\psi_A\rangle \langle\psi_A|$ . But I don't know what to do next. How should I go about it?
Since $\rho_A$ is pure, there exists a unit vector $|\psi_A\rangle \in H_A$ such that $\rho_A = |\psi_A\rangle\langle\psi_A|$ . Now consider a purification of $\rho_{AB}$ , which is a vector $|\psi_{ABC}\rangle \in H_A \otimes H_B \otimes H_C$ such that: $\text{Tr}_C(|\psi_{ABC}\rangle\langle\psi_{ABC}|) = \rho_{AB}$ Tracing out $B$ as well gives $\text{Tr}_{BC}(|\psi_{ABC}\rangle\langle\psi_{ABC}|) = \rho_A = |\psi_A\rangle\langle\psi_A|$ , which is pure; hence the Schmidt decomposition of $|\psi_{ABC}\rangle$ across the cut $A\,|\,BC$ has a single term, so $|\psi_{ABC}\rangle$ has the product form: $|\psi_{ABC}\rangle = |\psi_A\rangle \otimes |\phi_{BC}\rangle$ where $|\phi_{BC}\rangle \in H_B \otimes H_C$ is a unit vector. Now we can calculate: $\rho_{AB} = \text{Tr}_C(|\psi_{ABC}\rangle\langle\psi_{ABC}|) = \text{Tr}_C\big((|\psi_A\rangle \otimes |\phi_{BC}\rangle)(\langle\psi_A|\otimes\langle\phi_{BC}|)\big) = |\psi_A\rangle\langle\psi_A| \otimes \text{Tr}_C(|\phi_{BC}\rangle\langle\phi_{BC}|) = \rho_A \otimes \rho_B$ , where $\rho_B = \text{Tr}_C(|\phi_{BC}\rangle\langle\phi_{BC}|)$ is indeed the reduced state $\text{Tr}_A(\rho_{AB})$ . Then we conclude $\rho_{AB} = \rho_A \otimes \rho_B$ .
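The final trace computation can be illustrated numerically (a sketch with small arbitrary dimensions and a random product-form purification):

```python
import numpy as np

# If the purification factorizes as psi_A (x) phi_BC, then tracing out C
# yields rho_AB = rho_A (x) rho_B.
rng = np.random.default_rng(4)
dA, dB, dC = 2, 3, 2
psi_A = rng.standard_normal(dA) + 1j * rng.standard_normal(dA)
psi_A /= np.linalg.norm(psi_A)
phi_BC = rng.standard_normal(dB * dC) + 1j * rng.standard_normal(dB * dC)
phi_BC /= np.linalg.norm(phi_BC)
psi = np.kron(psi_A, phi_BC)                     # state on H_A (x) H_B (x) H_C
rho = np.outer(psi, psi.conj()).reshape(dA, dB, dC, dA, dB, dC)
rho_AB = np.einsum("abcdec->abde", rho).reshape(dA * dB, dA * dB)  # trace out C
rho_A = np.outer(psi_A, psi_A.conj())
phi = phi_BC.reshape(dB, dC)
rho_B = phi @ phi.conj().T                       # Tr_C |phi><phi|
assert np.allclose(rho_AB, np.kron(rho_A, rho_B))
print("rho_AB = rho_A (x) rho_B")
```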
|quantum-mechanics|quantum-computation|quantum-information|
0
WolframAlpha syntax - evaluate series at a point
I am looking for syntax for WolframAlpha, which would evaluate a series at a given point. My goal is to have a sheet of practice exercises with links to solutions in WolframAlpha. Example problem: Approximate the value of $\sqrt{102}$ using Taylor series of appropriate function at appropriate point of 2nd order. The solution on paper is straightforward: calculate a Taylor series of 2nd order of $f(x)=\sqrt{x}$ at $x_0=100$ and plug in $x=102$ . In WolframAlpha, I can get the Taylor series with a query such as "Taylor series of sqrt(x) at x=100 to order 2". I can also evaluate any expression via "evaluate x^2+2x+1 at x=102". Is there any way to evaluate the series at a given point? I tried many combinations of verbose commands as well as direct Mathematica inputs, but nothing seems to work. I have a feeling it has something to do with the error term.
This is quite a hacky solution, but you can nest Sum and SeriesCoefficient (together with an "as x->..." evaluation) to produce what you want, although WA is very particular about the syntax ( link ): Sum[SeriesCoefficient[Sqrt[100+x], {x, 0, n}] * x^n, {n, 0, 2}] as x->2.0 (the upper summation limit, here 2, is the degree of the Taylor polynomial). Neither using the symbol Limit nor natural language, e.g. Limit[Sum[...], x->2.0] / limit Sum[...] as x->2.0 , returned any useful results. Replace 2.0 with 2 if an exact result is desired.
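If WolframAlpha keeps rejecting variants, the same computation is easy to reproduce locally, e.g. with SymPy (a sketch, not WA syntax; `n=3` means "up to but excluding order 3", i.e. a degree-2 polynomial):

```python
from sympy import sqrt, series, symbols

x = symbols("x")
# Degree-2 Taylor polynomial of sqrt(x) about x0 = 100, evaluated at x = 102.
taylor = series(sqrt(x), x, 100, 3).removeO()
approx = taylor.subs(x, 102)
print(approx, float(approx))  # 20199/2000, i.e. 10.0995 vs sqrt(102) = 10.09950...
```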
|wolfram-alpha|
1
Finding parameter values for which a system has closed orbits
My question is on Exercise 7.3.8 of Nonlinear Dynamics and Chaos (2nd ed.) by Strogatz: 7.3.8. Recall the system $\dot{r} = r(1-r^2) + \mu r \cos \theta, \; \dot{\theta} = 1$ of Example 7.3.1. Using the computer, plot the phase portrait for various values of $\mu > 0$ . Is there a critical value $\mu_c$ at which the closed orbit ceases to exist? If so, estimate it. If not, prove that a closed orbit exists for all $\mu > 0$ . [Here the ODE is given in polar coordinates, i.e. $x = r \cos \theta$ and $y = r \sin \theta$ .] I just want to make sure my reasoning is correct. Here is my work: Differentiating the equations $x = r \cos \theta$ and $y = r \sin \theta$ with respect to $t$ gives \begin{align*} \dot{x} &= -r \dot{\theta} \sin \theta + \dot{r} \cos \theta \\[2pt] \dot{y} &= r \dot{\theta} \cos \theta + \dot{r} \sin \theta \end{align*} Using the above equations I found the following Cartesian equations: \begin{align*} \dot{x} &= -y + x(1 - x^2 - y^2) + \frac{\mu x^2}{\sqrt{x^2 + y^2}} \\[2pt] \dot{y} &= x + y(1 - x^2 - y^2) + \frac{\mu x y}{\sqrt{x^2 + y^2}} \end{align*}
Fix $0<\mu<1$ . If $r>\sqrt{1+\mu}$ , then $$ \dot{r} = r(1-r^2) + \mu r \cos \theta\le r(1-r^2)+\mu r=r(1+\mu-r^2)<0. $$ If $r<\sqrt{1-\mu}$ , then $$ \dot{r} = r(1-r^2) + \mu r \cos \theta\ge r(1-r^2)-\mu r=r(1-\mu-r^2)>0. $$ So the annulus $\sqrt{1-\mu}\le r\le\sqrt{1+\mu}$ is a trapping region, and since $\dot\theta=1$ it contains no fixed points; by the Poincaré–Bendixson theorem, there is a closed orbit in it for every $0<\mu<1$ . (The phase portrait for $\mu=0.5$ shows such a closed orbit.)
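A crude forward-Euler integration illustrates the trapping region for $\mu = 0.5$ (a sketch; the step size and transient cutoff are arbitrary choices):

```python
import numpy as np

# Integrate r' = r(1 - r^2) + mu*r*cos(theta), theta' = 1 and check that
# the trajectory stays near the annulus sqrt(1-mu) <= r <= sqrt(1+mu).
mu, dt = 0.5, 1e-3
r, th = 1.0, 0.0
rs = []
for _ in range(200_000):
    r += dt * (r * (1 - r**2) + mu * r * np.cos(th))
    th += dt
    rs.append(r)
rs = np.array(rs[50_000:])          # discard the transient
assert np.sqrt(1 - mu) - 0.05 < rs.min() and rs.max() < np.sqrt(1 + mu) + 0.05
print("trajectory confined to the annulus, consistent with a closed orbit")
```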
|ordinary-differential-equations|dynamical-systems|
0
$ \int_0^af = 0, \forall a \in \mathbb{R} \Rightarrow f=0$ almost everywhere
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a Lebesgue integrable function with $\int_0^af=0, \forall a \in \mathbb{R}$ . Prove that $f=0$ almost everywhere. Here is my solution: First note that for $a<b$ we have $\int_a^b f=\int_0^b f-\int_0^a f=0$ . Now we have that $\mathbb{R}= \bigcup_{n \in \mathbb{Z}}[n,n+1)$ , which is a disjoint union, thus $$ \int_{\mathbb{R}}f=\sum_{n \in \mathbb{Z}} \int_n^{n+1}f=0.$$ Also we know that $$\{x:f(x)\neq 0\}= \{x:f(x)>0\} \cup \{x:f(x)<0\}= \{x:f(x)>0\} \cup \{x:-f(x)>0\}.$$ Now $\{x:f(x)>0\}=\bigcup_{n}\{x:f(x)> \frac{1}{n}\}$ , thus $m(\{x:f(x)> \frac{1}{n}\})\leqslant n \int_{\mathbb{R}}f=0$ , therefore $m(\{x:f(x)> \frac{1}{n}\})=0 \Rightarrow m(\{x:f(x)>0\})=0$ , from the subadditivity of the Lebesgue measure and from the Markov inequality. With the same argument we can prove that $m(\{x:-f(x)>0\})=0$ . Combining these we have that $$m(\{x:f(x) \neq 0\}) \leqslant m(\{x:f(x)>0\})+m(\{x:-f(x)>0\})=0,$$ $\therefore$ $m(\{x:f(x) \neq 0\})=0$ . Second solution: for every $x \in \mathbb{R}$ we have, in the same way as above, that $\frac{1}{2\delta}\int_{x-\delta}^{x+\delta}f=0$ for all $\delta>0$ .
I would like to provide a solution that I have not seen anywhere else. The result follows directly from the Lebesgue Differentiation Theorem. Since one can show that the given condition $\int_0^a f =0, \forall a\in\mathbb{R}$ is equivalent to $\int_I f =0$ for each interval $I$ contained in $\mathbb{R}$ , for almost every $x$ in $\mathbb{R}$ we can take $I=B(r,x)$ ; then, based on the given condition, $\int_{B(r,x)} f =0$ for all $r>0$ . Thus, letting $r\to 0$ , by the LDT, $f(x)=0$ for almost every $x\in\mathbb{R}$ . I am referring to the theorem that is proved in Prof. Axler's Measure, Integration and Real Analysis . This proof shows that to derive the Lebesgue Differentiation Theorem, it is not necessary to rely on this result. (Theorem 4.10 in Axler's MIRA book is based on Theorem 3.48, which is based on the fourth result of Proposition 3.47, and that is a result which can be proved independently, e.g. using the triangle inequality .) A similar argument can be run if one uses Folland's book instead.
|real-analysis|measure-theory|proof-verification|lebesgue-integral|lebesgue-measure|
0
About an integral from MIT Integration Bee 2024
Good evening, I was interested in the third integral from the finals of the MIT Integration Bee 2024 : $$I = \int_{-\infty}^{\infty} \frac{1}{x^4+x^3+x^2+x+1} \hspace{0.1cm} \mathrm{d}x$$ One way to solve this is to identify that the denominator is the fifth cyclotomic polynomial, so we can write : $$I = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{(1-x)(x^4+x^3+x^2+x+1)} \hspace{0.1cm} \mathrm{d}x = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Then, by Chasles : $$I = \displaystyle\int_{-\infty}^{0} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Let $x\to -x$ in the first integral, we obtain : $$I = \displaystyle\int_{0}^{\infty} \frac{1+x}{1+x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ And we use these two identities : $$ \displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1+x^k} \hspace{0.1cm} \mathrm{d}x = \frac{\pi}{k\sin\left(\frac{\pi s}{k}\right)}, \qquad \mathrm{p.v.}\displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1-x^k} \hspace{0.1cm} \mathrm{d}x = \frac{\pi}{k\tan\left(\frac{\pi s}{k}\right)}. $$
Since it is a fast exam question, there is an easy way to calculate residues: $f(z)=\frac{z-1}{z^5-1}$ . Let $w=e^{\tfrac{2\pi i}5}$ . Then by L'Hospital rule $$r_1=Res_{z=w}f(z)=\frac{z-1}{5z^4}\vert_{z=w}=\frac{w^2-w}5$$ $$r_2=Res_{z=w^2}f(z)=\frac{z-1}{5z^4}\vert_{z=w^2}=\frac{w^4-w^2}5 $$ Hence, $$I=2\pi i(r_1+r_2)=\frac{4\pi}5\sin72^\circ=\frac{\pi\sqrt2\sqrt{5+\sqrt5}}{5}$$
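For a sanity check, the closed form can be compared against numerical quadrature (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Numerical cross-check of the residue computation.
val, err = quad(lambda t: 1 / (t**4 + t**3 + t**2 + t + 1), -np.inf, np.inf)
closed_form = np.pi * np.sqrt(2) * np.sqrt(5 + np.sqrt(5)) / 5
print(val, closed_form)  # both ~ 2.39
assert abs(val - closed_form) < 1e-8
```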
|integration|
0
Find all functions $f: \mathbb R \to \mathbb R$ satisfying $f(xf(y) + f(x+y)) = y(f(x)+1)+f(x)$
Can you tell me if my solution is so far correct and help me complete the second case? The statement of the problem : Find all functions $f: \mathbb R \to \mathbb R$ with the following property: $$ f(xf(y) + f(x+y)) = y(f(x)+1) + f(x) , \forall x,y \in \mathbb R. $$ My approach : For $x = y = 0$ we get $$ f(f(0)) = f(0). $$ For $x = 0$ we get $$ f(f(y))=y(f(0)+1)+f(0) , \forall y \in \mathbb R. $$ Now the function $y \mapsto ay+c$ is bijective , where $a$ and $c$ are real constants and $a \neq 0$ . So , for $f(0) \neq -1$ we get $$ \begin{align} &y \mapsto y(f(0)+1)+f(0) \text{ is bijective,}\\ \implies &y \mapsto f(f(y)) \text{ is bijective,}\\ \implies &f \text{ is injective and $f$ is surjective,}\\ \implies &f \text{ is bijective}. \end{align} $$ Making $y = 0$ in the main equation we get $$f(f(x))=f(x) ,\forall x \in \mathbb R. $$ And since $f$ is bijective we obtain that $$f(x)=x ,\forall x \in \mathbb R, $$ which satisfies the initial equation. Now we are left with the case $f(0) = -1$ , where $y \mapsto f(f(y))$ is constant and the argument above breaks down.
Note that $f(x) \equiv -1$ is also a solution of the functional equation $$ f(xf(y) + f(x+y)) = y(f(x)+1) + f(x) , \qquad \forall x,y \in \mathbb{R} \tag{1}\label{fe}$$ To examine other possibilities, assume that $f$ is not identically $-1$ . Let $a \in \mathbb{R}$ be an arbitrary element satisfying $f(a) \neq -1$ . Then plugging $x = a$ to $\eqref{fe}$ gives $$ f(af(y) + f(a+y)) = y(f(a) + 1) + f(a). $$ This shows that $f$ is surjective. Now we plug $x = 0$ to $\eqref{fe}$ . Then $$ f(f(y)) = y(f(0) + 1) + f(0). $$ Since $y \mapsto f(f(y))$ is surjective, the same is true for the right-hand side. In particular, $f(0) \neq -1$ and this in fact shows that $f$ is bijective. Finally, we plug $y = 0$ to $\eqref{fe}$ . Then $$ f(xf(0) + f(x)) = f(x), $$ hence by the injectivity of $f$ we get $$ x f(0) + f(x) = x, \qquad\text{i.e.,}\qquad f(x) = (1 - f(0)) x. $$ This then shows that $f(0) = 0$ and therefore $f(x) = x$ . Conclusion. The functional equation $\eqref{fe}$ has two solutions: $f(x) = x$ and $f(x) \equiv -1$ .
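Both candidate solutions can be spot-checked numerically; here is a quick sketch in Python:

```python
import random

# Spot-check both claimed solutions of
#   f(x*f(y) + f(x+y)) = y*(f(x) + 1) + f(x)
# at random real points.
random.seed(0)
for f in (lambda t: t, lambda t: -1.0):
    for _ in range(1000):
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        lhs = f(x * f(y) + f(x + y))
        rhs = y * (f(x) + 1) + f(x)
        assert abs(lhs - rhs) < 1e-9
print("f(x) = x and f(x) = -1 both satisfy the equation")
```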
|functions|functional-equations|
1
Basis for $V$ containing no elements of the proper subspace $U$
Let $U$ be a proper subspace of a finite-dimensional vector space $V$ . Find a basis for $V$ containing no element of $U$ . We have no answer key for this question, which I find annoying since that is the case for many questions in uni. But consider $\mathbb{R}^2$ with basis vectors { $e_1, e_2$ } where, for example, $U$ is spanned by the vector $(1, 1)$ . I get that this is a much simpler case, but shouldn't it be the same for all other vector spaces of dimension $n$ ? I struggle a lot with formally proving/showing that something is true and I'm not sure how to think when looking at arbitrary vector spaces. Also I saw somewhere an explanation that said that: Let { $u_1, u_2, ... , u_k$ } be a basis for $U$ . Because $U$ is a subspace of $V$ , { $u_1, u_2, ... , u_k$ } are also in $V$ . By adding linearly independent vectors from $V$ to the set of basis vectors of $U$ we eventually get a basis for $V$ , { $u_1, u_2, ... , u_k, v_{k+1}, ... , v_n$ }. And by removing all vectors that are in $U$ we get a basis for $V$ with no elements of $U$ .
What you "saw somewhere" is of course incorrect, as $v_{k+1},\ldots,v_n$ cannot be a basis for $V$ if you had a basis for $V$ with $n$ elements; this one just has $n-k$ elements. Instead, let us consider the following: if $v_1,\ldots,v_n$ form a basis for $V$ , and $1\lt i\leq n$ , then $$v_1+v_i, v_2+v_i,\ldots, v_{i-1}+v_i, v_i, v_{i+1},\ldots,v_n$$ is still a basis. Why? Well, clearly it spans! If $\mathbf{x}\in V$ , then we can write $$\mathbf{x} = \alpha_1v_1+\cdots +\alpha_nv_n.$$ Then $$\begin{align*} \mathbf{x} &= \alpha_1v_1+\cdots + \alpha_n v_n\\ &= \alpha_1(v_1+v_i) - \alpha_1v_i + \alpha_2(v_2+v_i) - \alpha_2v_i + \cdots \\&\qquad \mathop{+}\alpha_{i-1}(v_{i-1}+v_i) - \alpha_{i-1}v_i + \alpha_iv_i + \cdots + \alpha_n v_n\\ &= \alpha_1(v_1+v_i) + \alpha_2(v_2+v_i) + \cdots + \alpha_{i-1}(v_{i-1}+v_i)\\ &\qquad \mathop{+} (\alpha_i-(\alpha_1+\cdots+\alpha_{i-1}))v_i + \alpha_{i+1}v_{i+1} + \cdots + \alpha_n v_n. \end{align*}$$ Since it is a spanning set of exactly $n$ vectors in an $n$ -dimensional space, it is a basis.
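A quick numerical sanity check of the trick (with a random basis of $\mathbb{R}^4$; the index $i$ is an arbitrary choice):

```python
import numpy as np

# Replace v_1, ..., v_{i-1} by v_1 + v_i, ..., v_{i-1} + v_i (keeping
# v_i, ..., v_n); the result is still a basis.
rng = np.random.default_rng(2)
V = rng.standard_normal((4, 4))  # columns v_1, ..., v_4: a random basis of R^4
i = 3                            # add v_4 (0-indexed column 3) to the first three
W = V.copy()
W[:, :i] += V[:, [i]]
# Adding one column to others is a unipotent change of basis: det is unchanged.
assert np.isclose(np.linalg.det(W), np.linalg.det(V))
print("still a basis")
```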
|linear-algebra|formal-proofs|
1
Solve partial differential equation $F = zp(x+y)+q^2-pq-z^2=0$
Solve partial differential equation $$F = zp(x+y)+q^2-pq-z^2=0, p = \frac{\partial z}{\partial x}, q = \frac{\partial z}{\partial y}$$ My attempt: Differentiating partially w.r.t. $x,y,z,p,q$ : $F_x = zp-q \\ F_y=zp , \\F_p = z(x+y) \\ F_q = 2q-p \\ F_z=-2z+px+py \\ \frac{dx}{-F_p}=\frac{dy}{-F_q}=\frac{dz}{-pF_p-qF_q}=\frac{dp}{F_x+pF_z}=\frac{dq}{F_x+qF_z}$ Charpit's equations give: $$\frac{dx}{-z(x+y)} = \frac{dy}{p-2q} = \frac{dp}{p^2(x+y)-pz-q} = \frac{dq}{zp-q(p-2q)} = \frac{dz}{(p-2q)q-pz(x+y)} $$ Taking the $dx, dy, dz$ terms, I got $$dz=pdx+qdy$$ How to proceed further?
In order to simplify the Lagrange-Charpit equations, let's make the following changes of variables: $z=e^u$ ; then $(p,q)=(z_x,z_y)=(u_xe^u, u_ye^u)$ and the PDE $$ zp(x+y)+q^2-pq-z^2=0 \tag{1} $$ becomes $$ u_x(x+y)+u_y^2-u_xu_y-1=0. \tag{2} $$ $u(x,y)=v(\xi,\eta)$ , where $(\xi, \eta)=(x+y, x-y)$ ; then $u_x=v_{\xi}\xi_x+v_{\eta}\eta_x=v_{\xi}+v_{\eta}=:P+Q$ and, similarly, $u_y=P-Q$ . In terms of the new variables, Eq. $(2)$ becomes $$ (P+Q)\xi-2PQ+2Q^2-1=0. \tag{3} $$ The Lagrange-Charpit equations for the PDE $(3)$ are $$ \frac{d\xi}{\xi-2Q}=\frac{d\eta}{\xi-2P+4Q}=\frac{dP}{-P-Q}=\frac{dQ}{0}=\frac{dv}{(P+Q)\xi-4PQ+4Q^2}. \tag{4} $$ The null denominator in $\frac{dQ}{0}$ implies that $v_{\eta}=Q=a,$ hence $v=a\eta+f(\xi)$ and $P=v_{\xi}=f'(\xi).$ Substituting $P$ and $Q$ in $(3)$ , we conclude that $f$ must satisfy the ODE $$ (f'+a)\xi-2af'+2a^2-1=0 \implies f'=-a+\frac{1-4a^2}{\xi-2a}, \tag{5} $$ whose solution is $$ f(\xi)=-a\xi+(1-4a^2)\ln|\xi-2a|+b. \tag{6} $$ Therefore, $$ v=a\eta-a\xi+(1-4a^2)\ln|\xi-2a|+b, \tag{7} $$ and, returning to the original variables ( $\xi=x+y$ , $\eta=x-y$ , $z=e^u$ ), $$ z=C\,e^{-2ay}\,|x+y-2a|^{1-4a^2}, \qquad C=e^b, $$ a complete integral depending on the two parameters $a$ and $C$ .
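Unwinding the substitutions ($\xi=x+y$, $\eta=x-y$, $z=e^u$) turns the complete integral $v=a\eta+f(\xi)$ into $z = e^{-2ay}(x+y-2a)^{1-4a^2}$ up to a multiplicative constant; here is a quick SymPy spot check of that claim against the original PDE (evaluated numerically at a sample point with $x+y-2a>0$):

```python
from sympy import symbols, exp, diff

x, y, a = symbols("x y a")
# Candidate complete integral recovered from the Charpit computation.
z = exp(-2 * a * y) * (x + y - 2 * a) ** (1 - 4 * a**2)
p, q = diff(z, x), diff(z, y)
residual = z * p * (x + y) + q**2 - p * q - z**2
# The residual should vanish; check it numerically at a sample point.
assert abs(residual.subs({a: 0.3, x: 2.0, y: 1.5}).evalf()) < 1e-9
print("the recovered solution satisfies the PDE")
```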
|partial-differential-equations|
0
Take invertible matrix $A$, and set a diagonal entry of $A$ to $\infty$. Can we use $A^{-1}$ to find the pseudo inverse of this new matrix?
Consider having $A$ , with $A^{-1}$ previously calculated. Say I apply a rank 1 update on $A$ using $v = (\infty,0,0,0,0)$ : $$B = A + vv^T$$ Clearly $B$ is no longer invertible, but the pseudo inverse still exists. We can apply the Sherman–Morrison formula $${\displaystyle \left(A+vv^{\textsf {T}}\right)^{-1}=A^{-1}-{A^{-1}vv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}v}.}$$ However, the denominator here is obviously not well defined. Is there another route to efficiently calculate the pseudo inverse of $B$ ? We have the standard formula for inverting 2x2 block matrices via the Schur complement: writing $$A=\begin{bmatrix} C &D \\ E&F \end{bmatrix},$$ if $C^{-1}$ or $F^{-1}$ exists (and the corresponding Schur complement is invertible), then we can compute $A^{-1}$ . In our case, $F^{-1}$ must exist, so we can write $$B=\begin{bmatrix} \infty &D \\ E&F \end{bmatrix},$$ $$B^{+}=\begin{bmatrix} 0 &0 \\ 0&F^{-1} \end{bmatrix}.$$ However, we would like to be able to calculate $B^{+}$ without first computing $F^{-1}$ ; in general, the index of the value set to $\infty$ is arbitrary.
Computation of the above: let $v = (k,0,\dots,0)^T$ for simplicity. Then we can calculate $$1+v^TA^{-1}v = 1+k^2A^{-1}_{11}$$ and $$ A^{-1}v v^T A^{-1} = (A^{-1}v)(v^T A^{-1}) = k^2\, B D,\text{ where} $$ $$ D = \begin{bmatrix} \mathrm{Row}_1(A^{-1}) \\ 0 \\ ... \\ 0 \end{bmatrix}, \qquad B = \begin{bmatrix} \mathrm{Col}_1(A^{-1}) & 0 & \ldots & 0 \end{bmatrix}, $$ i.e. $D_{ij} = \begin{cases} (A^{-1})_{ij},& \text{if } i=1 \\ 0, & \text{otherwise} \end{cases},$ $\quad B_{ij} = \begin{cases} (A^{-1})_{ij},& \text{if } j=1 \\ 0, & \text{otherwise.} \end{cases}$ Then $$(A+vv^T)^{-1} =A^{-1} - \frac{k^2}{1+k^2A^{-1}_{11}}\, BD. $$ Now, as $k\to+\infty$ , the fraction tends to $1/A^{-1}_{11}$ .
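The limit can be checked numerically (a sketch; an SPD test matrix is used only to guarantee $A^{-1}_{11} > 0$, so the denominator never vanishes along the way):

```python
import numpy as np

# Compare the Sherman-Morrison update for growing k against the limit
#   A^{-1} - Col_1(A^{-1}) Row_1(A^{-1}) / (A^{-1})_{11}.
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)              # SPD, so (A^{-1})_{11} > 0
Ainv = np.linalg.inv(A)
limit = Ainv - np.outer(Ainv[:, 0], Ainv[0, :]) / Ainv[0, 0]
errs = []
for k in [1e1, 1e2, 1e3]:
    coef = k**2 / (1 + k**2 * Ainv[0, 0])
    sm = Ainv - coef * np.outer(Ainv[:, 0], Ainv[0, :])
    errs.append(np.linalg.norm(sm - limit))
print(errs)  # shrinks roughly like 1/k^2
```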
|linear-algebra|numerical-methods|computer-science|
0
Show that the general solution to $y^{\prime\prime}-4xy^{\prime}+\left(4x^2-2\right)y=0$ is $y\left(x\right)=C_1\mathrm{e}^{x^2}+C_2x\mathrm{e}^{x^2}$
By solving the differential equation $$y^{\prime\prime}-4xy^{\prime}+\left(4x^2-2\right)y=0$$ using the series method, demonstrate that its general solution is $$y\left(x\right)=C_1\mathrm{e}^{x^2}+C_2x\mathrm{e}^{x^2}$$ Here, $C_1$ and $C_2$ are arbitrary constants of integration. My attempt and hint : I've been given a hint that by obtaining recursive relationships for the coefficients of the series, you can calculate several initial terms, and then "guessing" the expression for the $n$ -th coefficient (since you know what you should get) you can verify whether it holds true in all cases. I am not really familiar with this method and would appreciate your help.
You are given a candidate for the solution. It has the required number of independent parameters (integration constants), so you just have to check that inserting the function satisfies the differential equation. Or you could spot the common factor in the proposed solution, set $y(x)=u(x)e^{x^2}$ , and extract the differential equation for $u$ ; it reduces to $u''=0$ , so the claim becomes almost trivial.
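Verifying the candidates is mechanical, e.g. with SymPy:

```python
from sympy import diff, exp, simplify, symbols

x = symbols("x")
# Verify that both candidates solve y'' - 4x y' + (4x^2 - 2) y = 0.
for y in (exp(x**2), x * exp(x**2)):
    residual = diff(y, x, 2) - 4 * x * diff(y, x) + (4 * x**2 - 2) * y
    assert simplify(residual) == 0
print("both e^{x^2} and x e^{x^2} satisfy the ODE")
```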
|sequences-and-series|ordinary-differential-equations|
0
Correspondence between actions of $G$ on $X$ and actions of $G$ on $C_0(X)$
Let $G$ be a Hausdorff topological group; let $X$ be a locally compact Hausdorff space; let $A$ be a C*-algebra. Define an action of $G$ on $X$ to be a continuous map $G \times X \to X, \ (g,x) \mapsto gx$ such that $ghx = g(hx), \ ex = x$ for every $g,h \in G$ ; define an action of $G$ on $A$ to be a group homomorphism $G \to \text{Aut}(A), \ g \mapsto \alpha_g$ such that the map $g \mapsto \alpha_g(a)$ is continuous in the norm topology for every $a \in A$ . Given an action of $G$ on $X$ , define the induced action of $G$ on $C_0(X)$ by $\alpha_g(f)(x) = f(g^{-1}x)$ . It is clear that the induced action of $G$ on $C_0(X)$ is a group homomorphism from $G$ to the automorphisms of $C_0(X)$ . However, how does one show that the map $g \mapsto \alpha_g(f)$ is norm continuous for every $f \in C_0(X)$ ? One way to show the continuity of this map is as follows. Consider the basic open neighborhood $B_r(\alpha_g(f)) = \{F\in C_0(X) \mid \|F-\alpha_g(f)\| < r\}$ of $\alpha_g(f)$ in $C_0(X)$ .
Lemma 1: Let $K \subset X$ be compact and $U \subset X$ be open. If $g \in G$ satisfies $g(K) \subset U$ , then there exists an open neighborhood $V \subset G$ of $g$ such that $h(K) \subset U$ whenever $h \in V$ . Proof: We note that the map $G \times X \ni (h, x) \mapsto hx \in X$ is jointly continuous. We denote this map by $\beta$ . For any $x \in K$ , by our assumption on $g$ , $\beta^{-1}(U)$ contains $(g, x)$ . Since $\beta^{-1}(U)$ is open, there exists $V_x \subset G$ , open neighborhood of $g$ , and $K_x \subset X$ , open neighborhood of $x$ , s.t. $\beta(V_x \times K_x) \subset U$ . Observe that $K_x$ for $x \in K$ form an open cover of $K$ . As $K$ is compact, there exists $x_1, x_2, \cdots, x_N \in K$ such that $K \subset \cup_{i=1}^N K_{x_i}$ . Then $V = \cap_{i=1}^N V_{x_i}$ satisfies the desired condition. $\square$ Lemma 2: For any $K \subset X$ compact, there exists $U \subset X$ open such that $K \subset U$ and $\overline{U}$ is compact. Proof: Since $X$ is locally c
|dynamical-systems|c-star-algebras|
1
Find the first variational equation of the following perturbed IVP: $\dot{x}=x^2+\varepsilon, x(0)=x_0,\varepsilon\geq 0$.
I want to show that the first variational equation of the system is $$\dot{\phi_1} = 1 +\frac{2x_0}{1-x_0t}\phi_1(t),\qquad\phi_1(0)=0$$ and solve the IVP approximately up to order $O(\varepsilon^2)$. For the second point I have the hint to use a particular solution of the form $x_p(t)=a(1-x_0t)$. I already got stuck on the first variational equation, which is defined in my script (Ordinary Differential Equations and Dynamical Systems, Teschl, p. 46) as $\dot{y}=A(t,x)y$, where $A(t,x):=\frac{\partial f}{\partial x}(t,\phi(t,t_0,x))$. I have already found the solution $$x(t) = \sqrt{\varepsilon}\tan\left(\sqrt{\varepsilon}t + \arctan\left(\frac{x_0}{\sqrt{\varepsilon}}\right)\right)$$ I would have assumed (as the derivative of $f(x)=x^2+\varepsilon$ is $f'(x)=2x$) that my first variational equation would just be $$\dot{y} = 2\cdot\left(\sqrt{\varepsilon}\tan\left(\sqrt{\varepsilon}t + \arctan\left(\frac{x_0}{\sqrt{\varepsilon}}\right)\right)\right)y$$ I guess I am not understanding
In speaking of perturbations, the basis solution is the solution of the unperturbed problem, that is, for $ε=0$. That gives $$ x(t)=\frac{x_0}{1-x_0t}. $$ The next idea is that small perturbations in the equation result in small perturbations of solutions with identical initial conditions, at least over some reasonable time span. There are multiple ways to introduce a perturbation term; the easiest is $x_ε(t)=x(t)+εv(t)+O(ε^2)$. Alternatively one could use $x_ε(t)=x(t)(1+εv(t)+O(ε^2))$ or $x_ε(t)=x(t)/(1+εv(t)+O(ε^2))$ or similar. In the linear case one gets $$ \dot x+ε\dot v=x^2+2εxv+ε+O(ε^2)\implies \dot v=2xv+1. $$ The last is a linear differential equation for $v$; it can be solved via an integrating factor. Generally, if $\dot x=f(x,ε)$, then the basis solution solves $\dot x=f(x,0)$ and the perturbation solution as above can be developed as $$ \dot x+ε\dot v=f(x+εv,ε)=f(x,0)+\partial_xf(x,0)εv+\partial_εf(x,0)ε+O(ε^2) $$ removing the basis solution gives for the linear perturbation term
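As a sanity check (my own addition), the linear perturbation equation $\dot v = 2xv+1$, $v(0)=0$ can be solved with the integrating factor $(1-x_0t)^2$, giving $v(t)=\frac{1-(1-x_0t)^3}{3x_0(1-x_0t)^2}$; the snippet below compares the first-order approximation $x+εv$ against the exact solution quoted in the question and confirms the error shrinks like $ε^2$.

```python
import math

def exact(t, x0, eps):
    """Exact solution of x' = x^2 + eps, x(0) = x0 (from the question)."""
    s = math.sqrt(eps)
    return s * math.tan(s * t + math.atan(x0 / s))

def first_order(t, x0, eps):
    """Basis solution x0/(1 - x0 t) plus eps * v(t), where v solves
    v' = 2 x v + 1, v(0) = 0 (closed form via the integrating factor)."""
    u = 1 - x0 * t
    return x0 / u + eps * (1 - u ** 3) / (3 * x0 * u ** 2)

x0, t = 0.5, 0.5
for eps in (1e-2, 1e-3, 1e-4):
    err = abs(exact(t, x0, eps) - first_order(t, x0, eps))
    print(f"eps={eps:.0e}: |error| = {err:.3e}, error/eps^2 = {err / eps**2:.3f}")
```

The error/$ε^2$ column should stay roughly constant, which is exactly the expected $O(ε^2)$ behaviour.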
|ordinary-differential-equations|variational-analysis|
1
Proving monotonicity of a function without using the concept of continuity
Let $f: \mathbb R \to \mathbb R$ be a function such that $2|x-y|\ge|f(x)-f(y)|\ge |x-y|, \forall x,y \in \mathbb R$ , can it be proved without using the result " every continuous injective function on $\mathbb R$ is monotone on $\mathbb R$ " that this function $f$ is monotone ?
(Getting another old question off the unanswered list.) It is indeed possible to prove the result without invoking continuity. For convenience say that a function $f:\Bbb R\to\Bbb R$ is good if $2|x-y|\ge|f(x)-f(y)|\ge|x-y|$ for all $x,y\in\Bbb R$. For future reference note that it is easy to check that if $f$ is good, so are the functions $g(x)=f(x+a)+b$ for any $a,b\in\Bbb R$, which translate $f$ horizontally and vertically, and the functions $g(x)=-f(x)$ and $g(x)=f(-x)$, which reflect $f$ in the $x$- and $y$-axes, respectively. We want to prove the following Theorem: Every good function is monotone. Much of the work is done by the following lemma. Lemma: Let $f$ be a good function such that $f(0)=0$, and suppose that $f(b)<0$ for some $b>0$; then $f(x)<0$ whenever $x>b$. Proof: Suppose first that $b<x<3b$, and to get a contradiction suppose further that $f(x)\ge 0$. Since $f$ is good, we have $$x=|x-0|\le|f(x)-f(0)|=f(x)\,,$$ $$b=|b-0|\le|f(b)-f(0)|=-f(b)\,,$$ and $$f(x)-f(b)=|f(x)-f(b)|\le 2|x-b|=2(x-b)\,.$$ Combining these, $x+b\le f(x)-f(b)\le 2(x-b)$, so $x\ge 3b$, a contradiction.
|real-analysis|
0
Meaning of the Event $A \cap B$
The probability of the intersection of events $A$ and $B$ has 2 formulas: $P(A \cap B) = P(A)P(B)$ for independent events, $P(A \cap B) = P(A)P(B \vert A)$ for dependent events. There are several things that make me confused. My teacher said intersection events are events that can occur simultaneously and with intersection probability we can calculate the probability of events $A$ and $B$ occurring simultaneously. But what is meant by happening simultaneously? Is $P(A \cap B)$ something like "the probability that events $A$ and $B$ have the same outcome" or "the probability that events $A$ and $B$ occur at the same time but with no common outcome"?
You posted this exact same question yesterday and I gave you the answer in a comment. Perhaps you didn't see it, so I'm making it an actual answer: by using the term "simultaneously", your teacher has misled you into thinking of the intersection of events in a temporal manner, but this need not be the case. Nor does it necessarily mean that both events are identical in terms of outcome (if they were, it'd just be the same event twice). Intersection of events mean that both events are observed in the same trial/experiment . For example, if event $A$ is rolling a $1$ on a die and event $B$ is rolling a $2$ , and a trial consists of $3$ rolls, say, then the intersection of events $A$ and $B$ would be rolling at least one of both $1$ and $2$ . This also goes to demonstrate that "simultaneous" is not the correct description of intersection, because you can't roll two different numbers on a single die at the same time, now can you?
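The die example can be checked by brute force (my own snippet, not part of the answer): enumerate all $6^3$ equally likely trials and compare with inclusion-exclusion.

```python
from fractions import Fraction
from itertools import product

# The answer's example: a trial is 3 rolls of a fair die,
# event A = "at least one 1 appears", event B = "at least one 2 appears".
trials = list(product(range(1, 7), repeat=3))
hits = sum(1 for t in trials if 1 in t and 2 in t)
p_both = Fraction(hits, len(trials))

# cross-check by inclusion-exclusion: 1 - 2*(5/6)^3 + (4/6)^3
p_ie = 1 - 2 * Fraction(5, 6) ** 3 + Fraction(4, 6) ** 3

print(p_both, p_ie)  # 5/36 5/36
```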
|probability|
0
Existence of countably additive, translation invariant measure on real line
I'm teaching a basic measure theory course using Royden's book, and told my students that there does not exist a set function $M : \mathcal{P}(\mathbb{R}) \rightarrow [0, \infty]$ such that $1) M(I) = \ell(I)$ where $I$ is an interval and $\ell(I)$ is the length of $I$ $2) M\left(\bigcup_{k = 1}^\infty E_k\right) = \sum_{k = 1}^\infty M(E_k)$ for pairwise disjoint $E_k$ $3) M(E + x) = M(E)$ for all $E \subseteq \mathbb{R}$ and $x \in \mathbb{R}$. I just proved the existence of a non-Lebesgue-measurable set and wanted to go back and show the above, but hit a roadblock when trying to prove this and I can't seem to find a good reference. In particular, the strategy is to prove that the conditions above imply that $m^* (E) = M(E)$ for all $E \subseteq \mathbb{R}$ where $m^*$ is Lebesgue outer measure. Then clearly these conditions (namely $2)$ and $3)$) would imply that every set is Lebesgue measurable, which is false. This doesn't seem too hard to prove by Dynkin's Lemma for all Borel sets
Let $\sim$ be the equivalence relation on $[0, 1]$ given by $x \sim y$ iff $x - y \in \mathbb{Q} \cap [-1, 1]$. Use the Axiom of Choice to choose a set $E \subset \mathbb{R}$ which contains exactly one element from each equivalence class. If such an $M$ exists, then $M(E)$ is well-defined and $M(E + q) = M(E)$ for all $q \in \mathbb{Q} \cap [-1, 1]$. But by definition of $E$, $(E + q_1) \cap (E + q_2) = \varnothing$ whenever $q_1 \neq q_2 \in \mathbb{Q} \cap [-1, 1]$. Thus, $$M(\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q)) = \sum_{q \in \mathbb{Q} \cap [-1, 1]} M(E + q) = \sum_{q \in \mathbb{Q} \cap [-1, 1]} M(E) = \infty \cdot M(E)$$ We observe that $\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q) \subset [-1, 2]$, so $M(\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q)) \leq M([-1, 2]) = 3 < \infty$, and therefore $M(E) = 0$, whence $M(\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q)) = 0$. On the other hand, $\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q) \supset [0, 1]$, so we must have $1 = M([0, 1]) \leq M(\cup_{q \in \mathbb{Q} \cap [-1, 1]} (E + q)) = 0$, a contradiction.
|real-analysis|measure-theory|lebesgue-measure|
1
Inequation of the product of ascending and random integers
We have three sequences of positive integers $l$, $p$ and $q$ such that: $$ p_1 \geq p_2 \geq \cdots \geq p_k \quad\text{and}\quad q_1 \geq q_2 \geq \cdots \geq q_k \geq \cdots \geq q_h \quad\text{where } k < h, $$ and $$ 0\leq l_1 \leq l_2 \leq \cdots \leq l_k\leq \cdots \leq l_h $$ and $$ p_1+2p_2+\cdots+kp_k \leq q_1+2q_2+\cdots+hq_h $$ and $$ p_1+p_2+\cdots+p_k = q_1+q_2+\cdots+q_k+\cdots+q_h=n $$ Is the following statement true: $$ l_1(p_1-q_1)+l_2(p_2-q_2)+\cdots+l_k(p_k-q_k) \leq l_k(p_1-q_1)+l_k(p_2-q_2)+\cdots+l_k(p_k-q_k)\,? $$ A proof would help me establish another result that relies on it. Best regards
A counterexample is given by $k=2$ , $h=3$ , $n=18$ , $$p_1=9, p_2=9, q_1=10, q_2=4, q_3=4, l_1=1, l_2=2, l_3=2.$$
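A short script makes the counterexample easy to confirm (my own check; I assume the garbled weighted hypothesis reads $p_1+2p_2+\cdots+kp_k \le q_1+2q_2+\cdots+hq_h$):

```python
p = [9, 9]        # p_1 >= p_2
q = [10, 4, 4]    # q_1 >= q_2 >= q_3
l = [1, 2, 2]     # 0 <= l_1 <= l_2 <= l_3
k, n = 2, 18

assert sum(p) == sum(q) == n   # equal totals
# assumed form of the weighted hypothesis: p_1 + 2 p_2 <= q_1 + 2 q_2 + 3 q_3
assert sum((i + 1) * x for i, x in enumerate(p)) <= \
       sum((i + 1) * x for i, x in enumerate(q))

lhs = sum(l[i] * (p[i] - q[i]) for i in range(k))      # 1*(-1) + 2*5 = 9
rhs = sum(l[k - 1] * (p[i] - q[i]) for i in range(k))  # 2*(-1) + 2*5 = 8
print(lhs, rhs, lhs <= rhs)  # 9 8 False
```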
|number-theory|inequality|
1
Congruency and congruent triangles
Given that $\Delta\,ABC$ is an isosceles right triangle with $AC = BC$ and $\angle{ACB}= 90^\circ$. $D$ is a point on $AC$ and $E$ is on the extension of $BD$ such that $AE \perp BE$. If $AE = \frac{1}{2}BD$, prove that $BD$ bisects $\angle{ABC}$. Since $AE = \frac{1}{2}BD$, I mark point $F$ on $BD$ such that it is the midpoint of $BD$ and then draw $FG$, i.e. $G$ lies on $AB$ and $FG = AE$. By using some angle chasing I find $\angle{DGA}$ to be a right angle. From here my aim is to make $\Delta\,DCB$ congruent to $\Delta DGB$, but I am not able to deduce the solution from here. Please help me.
Here's a simple solution. The original information is in black, where I've marked the sides of the isosceles as $s$ and the length of $AE$ as $x$ . Extend $AE$ to meet $BC$ at $F$ (shown in red), mark the corresponding angles of $\alpha$ , and also mark the angle $\beta$ where we seek to prove $\alpha = \beta$ Now, $(x+z)\cos(\alpha) = s = 2x \cos(\alpha) \;\Rightarrow\; z=x$ Thus $AF$ is bisected at $E$ , so $\triangle FEB \cong \triangle AEB $ , and so $\alpha=\beta$ .
|euclidean-geometry|
0
Does there exist a subset $S$ of $[0,1]$ such that for every interval, $S$ and its complement in the interval have the same Lebesgue measure?
I would like to see a strengthening or a disproof of a result from an exercise in Rudin's Real and Complex Analysis, which was asked before at here Construct a Borel set on R such that it intersect every open interval with non-zero non-"full" measure and here Construction of a Borel set with positive but not full measure in each interval and possibly elsewhere. Does there exist a subset $S\subseteq [0,1]$ such that for all intervals $(a,b)\subseteq [0,1]$, the subset and its complement are measurable and have the same positive Lebesgue measure in $(a,b)$, that is, $\mu((a,b)\cap S)=\mu((a,b)\setminus S)=\dfrac{b-a}{2}$? Thanks.
Assuming $\mu$ stands for the Lebesgue measure, the existence of such a set $S$ would imply that $$\mathbf{1}_{S}(x)=\frac{\mathrm{d}}{\mathrm{d}x}\mu(S\cap[0,x])=\frac{1}{2}\quad\text{a.e.}$$ by Lebesgue's differentiation theorem , which is of course a contradiction.
|real-analysis|measure-theory|lebesgue-integral|lebesgue-measure|
1
Points on Hyperelliptic Curve
Here: https://en.wikipedia.org/wiki/Imaginary_hyperelliptic_curve#The_divisor_and_the_Jacobian it says: "Actually, Bézout's theorem states that a straight line and a hyperelliptic curve of genus $2$ intersect in $5$ points. So, a straight line through two points lying on $C$ does not have a unique third intersection point, it has three other intersection points." However, on the same page, Figure 2 has, let's say, a "curved line" that intersects the hyperelliptic curve in 6 points. How can this happen? I am not seeing why the "curved line" does not fulfill the above paragraph... P.S.: I know there are other questions related to HECC on the current site, but I am asking something very specific, not mentioned before, the way I put it.
The short answer: The "curved line" is not a straight line. The longer answer: It may be useful to have an explicit situation, where we add reduced divisors. Let us consider the Example from the linked wiki page . $$ (C)\ :\qquad y^2 =x(x+1)(x+3)(x-3)(x-5)\ . $$ Consider the points $P,Q,R,S$ , and the reduced divisors $D_1,D_2$ : $$ \begin{aligned} P &= (1, 8)\ , \\ Q &= (3, 0)\ ,\\ R &= (5, 0)\ ,\\ S &= (0, 0)\ ,\\[3mm] D_1 &= P+Q-2O\ ,&&\text{Mumford representation:} & ((x-1)(x-3),&\ -4(x-3)\ ,\\ D_2 &= R+S-2O\ ,&&\text{Mumford representation:} & (x(x-5),&\ 0)\ . \end{aligned} $$ (There is no $[P]$ notation to pass from geometry, a point $P$ , to a linearized object, $[P]$ , a divisor. It should be clear from the context.) It turns out that the sum $D=D_1+D_2$ is the divisor with the Mumford representation $$ (x^2 - 8x+3, \ -12x)\ . $$ So if we want to represent it as $T+U-2O$ , we have to take the roots of $x^2-8x+3$ , which are $4\pm s$ with $s=\sqrt {13}$ for an easy typing, and s
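One can at least verify numerically that the Mumford representation $(x^2-8x+3,\,-12x)$ really describes points on $C$; the check below (my own illustration, not part of the answer) evaluates the curve equation at the two roots $4\pm\sqrt{13}$.

```python
import math

def curve_rhs(x):
    """Right-hand side of C: y^2 = x(x+1)(x+3)(x-3)(x-5)."""
    return x * (x + 1) * (x + 3) * (x - 3) * (x - 5)

# roots of u(x) = x^2 - 8x + 3, i.e. x = 4 +/- sqrt(13)
roots = [4 + math.sqrt(13), 4 - math.sqrt(13)]
for x in roots:
    y = -12 * x   # y = v(x) from the Mumford representation
    print(f"x = {x:.6f}: y^2 - f(x) = {y * y - curve_rhs(x):.2e}")
```

Algebraically the agreement is exact: reducing $144x^2$ and $f(x)$ modulo $x^2-8x+3$ gives $1152x-432$ in both cases, so the only residue here is floating-point rounding.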
|elliptic-curves|
1
When is an analytic function a first integral of a vector field?
Given a function $f:\Bbb R^n \rightarrow \Bbb R$, is it always possible to find a non-vanishing vector field $V:\Bbb R^n \rightarrow \Bbb R^n$ such that $f$ is a first integral of the vector field? If not, when is it possible? I read from this question that for every differentiable vector field, there exists a function that is constant along the trajectories of the field and such that its differential is not zero. My question goes in the opposite direction, so I would like to know if there is a result that guarantees the existence of such a vector field, and under which conditions on the function $f$ (maybe $f$ needs to be analytic).
Notice that $f$ is constant along the trajectories of $V$ if and only if $V$ is tangent to the level sets $f^{-1}(c)$ , $c \in \Bbb R$ . This observation implies that if $f$ has an isolated extremum at $p$ , then any vector field $V$ which has $f$ as a first integral satisfies $V_p = 0$ , and in particular there is no nonvanishing vector field $V$ for which $f$ is a first integral. A nonvanishing vector field can fail to exist for other reasons, too. For example, suppose that $f: (x, y, z) \mapsto x^2 + y^2 + z^2$ is the first integral of a vector field $V$ on $\Bbb R^3$ . Since $V$ is tangent to the sphere $S^2(R) := f^{-1}(R^2)$ , $\smash{V\vert_{S^2(R)}}$ is a vector field on $S^2(R)$ , so the Hairy Ball Theorem implies that it, and hence $V$ , vanishes at some point. A common generalization of the above $2$ examples is a corollary of the Poincaré–Hopf Theorem : If for some $c$ the level set $M := f^{-1}(c)$ is a connected, closed manifold of Euler characteristic not $0$ , then ther
|ordinary-differential-equations|differential-geometry|manifolds|smooth-manifolds|vector-fields|
1
Can an oblique antiprism be constructed?
Can an oblique antiprism be constructed? Intuitively, it would seem oblique antiprisms exist: take any right antiprism and translate one of the parallel faces within its plane. I'm baffled, though, because Google shows countless models and pictures of (right) antiprisms, but not a single oblique antiprism. If not: are all antiprisms right? What are examples of non-right antiprisms? Update I found two drawings of skewed antiprisms, which seems to be the same as what I called oblique antiprisms.
I've updated the OP with sketches, but am baffled why I can find almost no reference, discussion, or models (when far more complicated polyhedra models abound), and have still not found a proof or construction of their existence (beyond a convincing sketch).
|geometry|euclidean-geometry|polyhedra|solid-geometry|polytopes|
0
Did my TA make a mistake regarding probabilities?
So I had an assignment due yesterday regarding the probability, in the game Risk, of the attacker's second-highest die roll being strictly bigger than the defender's second-highest die roll. In the game, the attacker has three dice and the defender has two. The highest die of the attacker is compared with the highest die of the defender, and the second-highest die of the attacker is compared with the second-highest die of the defender. For the attacker to win, their die must be strictly larger than the defender's. The dice of the defender are $C_1$ and $C_2$, ordered as $C_{(1)} \leq C_{(2)}$, and the dice of the attacker are $B_1$, $B_2$ and $B_3$, ordered as $B_{(1)} \leq B_{(2)} \leq B_{(3)}$. I, with the help of this board, wrote my answer as $P(A>D) = P(B_{(2)}>C_{(1)})$: $$\sum_{k=2}^{6} \left(\sum_{n=1}^{k-1}\frac{13-2n}{36}\right) \cdot \frac{-6k^2+42k-20}{216} =\frac{1181}{1944} =0.6075 $$ Yet my TA had another answer. We both had the same probability for $B_{(2)}$, i.e.
I find that it helps in cases like this to use clear symbology in the setup of the formula that shows (as much as possible) exactly why you chose the terms you chose, rather than going directly to polynomials. The probability you set up was the probability that $B_{(2)} > C_{(1)},$ taking each value of $B_{(2)}$ as a disjoint event to be summed: \begin{align} P(B_{(2)} > C_{(1)}) &= \sum_{k = 1}^6 P((B_{(2)} = k) \cap (C_{(1)} < k)) \\ &= \sum_{k = 1}^6 P(B_{(2)} = k) \, P(C_{(1)} < k) . \end{align} That's your formula (up to harmless changes in the order of multiplication) and it makes perfect sense. The best sense I can make out of your TA's formula is: \begin{align} \sum_{k=1}^6 \left(\frac{21k-3k^2-10}{108} \cdot \frac{9k^2-k^3}{108}\right) &= \sum_{k=1}^6 \left(\frac{21k-3k^2-10}{108} \cdot \sum_{n=1}^k \frac{21n-3n^2-10}{108} \right) \\ &= \sum_{k=1}^6 \left( P(B_{(2)} = k) \cdot \sum_{n=1}^k P(A_{(2)} = n) \right) \\ &= \sum_{k=1}^6 \left( P(B_{(2)} = k) \cdot P(A_{(2)} \leq k) \right) \\ &= \sum_{k=1}^6 P((B_{(2)} = k) \cap (A_{(2)} \leq k)) \\ &= P(A_{(2)} \leq B_{(2)}) . \end{align}
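Both formulas are easy to cross-check by brute force; the enumeration below (my addition, not the TA's) reproduces the asker's value $\tfrac{1181}{1944}$ for $P(B_{(2)} > C_{(1)})$.

```python
from fractions import Fraction
from itertools import product

total = hits = 0
for b in product(range(1, 7), repeat=3):      # attacker's three dice
    b2 = sorted(b)[1]                         # B_(2): median of the three
    for c in product(range(1, 7), repeat=2):  # defender's two dice
        total += 1
        hits += b2 > min(c)                   # C_(1): smaller defender die
print(Fraction(hits, total))  # 1181/1944
```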
|probability|
1
Solve functional equation $f(x+y) + f(x-y) = 2f(x)f(y)$
The statement of the problem: Let $f: \mathbb R \rightarrow \mathbb R$ with the two properties: $f(x+y) + f(x-y) = 2f(x)f(y), \forall x, y \in \mathbb R$; there exists a smallest strictly positive number $a$ with the property that $f(a)$ is the maximum of the function and $f(a)>0$. Prove that $f$ is a periodic function with period $a$. My approach: For $x = y = 0 \implies 2f(0) = 2f(0)^2$ so $f(0)\in\{0,1\}$; if $f(0) = 0 \implies 2f(x)=0 \implies f(x)=0, \forall x \in \mathbb R$, so it's clearly periodic. (EDIT: actually this case is impossible, because the zero function would not satisfy property 2, so $f(0)$ is necessarily $1$.) Now if $f(0) = 1$, for $x = 0$ we get $f(y) + f(-y) = 2f(y) \implies f(y)=f(-y), \forall y \in \mathbb R$. So it is enough to determine $f$ on the interval $[0,\infty)$. EDIT: so I managed to prove that $f(a)=1$ and that the final answer might be something related to a trigonometric function, probably cosine, since $\cos(-x)=\cos(x)$ as above. I don't real
Partial answer Plugging in $y = 0$ gives us $2f(x) = 2f(x)f(0)$, so $f(0) = 1$. ($f(x) = 0$ would also be a solution, but would violate property #2.) Plugging in $x = 0$ gives us $f(y) + f(-y) = 2f(0)f(y)$. From this, and the previous result that $f(0) = 1$, we get $f(-y) = f(y)$. In other words, $f$ is an even function. Plugging in $y = x$ gives us $f(2x) + f(0) = 2f(x)^2$, or $f(2x) = 2f(x)^2 - 1$. The same result can be obtained with $y = -x$, since $f$ is an even function. Let $M = f(a)$ be the maximum value of the function. By the recurrence relation $f(2x) = 2f(x)^2 - 1$, we have $f(2a) = 2M^2 - 1$. Because $M$ is the maximum value of $f$, we have $2M^2 - 1 \le M$, or $2M^2 - M - 1 \le 0$. Equivalently, $-\frac{1}{2} \le M \le 1$. But since we know that $f(0) = 1$, then $M \ge 1$. Thus, the only possible value of $M$ is $M = 1$. So, $f(2a) = 2\times 1^2 - 1 = 1$. Similarly, $f(4a) = 2f(2a)^2 - 1 = 2\times 1^2 - 1 = 1$. And continuing this reasoning, we get $f(2^n a) = 1$ for every positive integer $n$.
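A numeric spot check (my own, with $f=\cos$, one known solution of the functional equation) confirms the identities derived above:

```python
import math, random

f = math.cos  # one known solution of f(x+y) + f(x-y) = 2 f(x) f(y)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(f(x + y) + f(x - y) - 2 * f(x) * f(y)) < 1e-12

assert f(0) == 1.0                                       # f(0) = 1
assert abs(f(0.7) - f(-0.7)) < 1e-15                     # evenness
assert abs(f(2 * 0.7) - (2 * f(0.7) ** 2 - 1)) < 1e-12   # f(2x) = 2 f(x)^2 - 1
print("all identities hold for f = cos")
```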
|functions|functional-equations|
0
Generate two independent uniform variables from one independent uniform variable not using binary representation?
If I want to generate two independent uniform random variables $Y_1,Y_2$ from a single uniform random variable $X$, I know I can do it by converting $X$ into its binary representation and then taking its odd-position digits as $Y_1$ and even-position digits as $Y_2$, based on this answer. However, can I do it without the binary representation and just use $X$'s decimal representation? Specifically, for $X=0.x_1 x_2 x_3 ...$, take $Y_1=0.x_1 x_3 x_5...$ and $Y_2=0.x_2 x_4 x_6...$?
You can use Hilbert's space-filling curve; however, this still uses the binary representation, albeit in disguise. Here is a Wiki link for the Hilbert curve's definition: https://en.m.wikipedia.org/wiki/Hilbert_curve The Hilbert curve, in essence, is a mapping of a 1D line segment to a 2D square and vice versa. One of its properties is that it preserves locality, which might be helpful for you. I tried creating one myself and learned that it is just binary manipulation: instead of separating the bits with odd and even index, it clumps them into small groups with very small Hamming distance between them. Having a decimal solution for this problem is not efficient information-wise. Converting into binary and separating the digits is the easiest solution. If you want locality, then use the Hilbert curve or any other space-filling curve. Any decimal solution will have information loss greater than a good binary solution. This is evident from the radix economy between bases.
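For concreteness, here is a finite-precision sketch of the odd/even bit split (the function name and the truncation to 52 bits are my own choices; a rigorous construction works on the full expansion):

```python
def split_bits(x, nbits=52):
    """Split the first `nbits` binary digits of x in [0, 1) into
    odd-indexed and even-indexed digits, returning (y1, y2).
    Finite-precision sketch of the construction in the linked answer."""
    y1 = y2 = 0.0
    w1 = w2 = 0.5
    for i in range(nbits):
        x *= 2
        bit, x = int(x), x - int(x)   # peel off the next binary digit
        if i % 2 == 0:                # 1st, 3rd, 5th, ... digit -> Y1
            y1 += bit * w1
            w1 /= 2
        else:                         # 2nd, 4th, 6th, ... digit -> Y2
            y2 += bit * w2
            w2 /= 2
    return y1, y2

print(split_bits(0.8125))  # 0.8125 = 0.1101_2 -> Y1 = 0.10_2 = 0.5, Y2 = 0.11_2 = 0.75
```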
|random-variables|uniform-distribution|
0
Meaning of the Event $A \cap B$
The probability of the intersection of events $A$ and $B$ has 2 formulas: $P(A \cap B) = P(A)P(B)$ for independent events, $P(A \cap B) = P(A)P(B \vert A)$ for dependent events. There are several things that make me confused. My teacher said intersection events are events that can occur simultaneously and with intersection probability we can calculate the probability of events $A$ and $B$ occurring simultaneously. But what is meant by happening simultaneously? Is $P(A \cap B)$ something like "the probability that events $A$ and $B$ have the same outcome" or "the probability that events $A$ and $B$ occur at the same time but with no common outcome"?
Events are sets of possible outcomes. We say an event "happens" or "occurs" when the realised outcome is an element of its set. (Also called the "actual outcome" or just " the outcome".) Two events "occur simultaneously" when the realised outcome is an element of both sets. That is, it is in their intersection. So $\mathsf P(A\cap B)$ is "the probability for the outcome being an element of both events $A$ and $B$ ."
|probability|
1
Composition of Taylor expansions of trigonometric functions and their inverses.
I was trying to check whether $\arcsin(\sin \theta) = \theta$ and $\arccos(\cos \theta) = \theta$ are satisfied when I compose the Taylor series expansions of these functions, i.e.: For sine: $$ x = \sin \theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - ... $$ $$ \arcsin x = x + \frac{1}{2 \cdot 3} x^3 + \frac{3}{2 \cdot 4 \cdot 5} x^5 + ... $$ For cosine: $$ x = \cos \theta = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ... $$ $$ \arccos x = \frac{\pi}{2} - (x + \frac{1}{2 \cdot 3} x^3 + \frac{3}{2 \cdot 4 \cdot 5} x^5 + ...) $$ What I wanted to do was to substitute $x$ in the Taylor expansion of an inverse trigonometric function with the Taylor expansion of the respective trigonometric function, group the terms according to the power of $\theta$, and check whether everything other than the $\theta$ term disappears. In the case of the sine function, I indeed get $\theta = \arcsin(\sin \theta)$ with this approach. However, in the case of cosine it seems to me that the lowest non-constant
Here is the Puiseux series as mentioned by Robert: $$ \arccos(x) = \sqrt{2}(1-x)^{1/2}+\frac{\sqrt{2}}{12}(1-x)^{3/2} +\frac{3\sqrt{2}}{5\cdot 2^5}(1-x)^{5/2} + \frac{5\sqrt{2}}{7\cdot 2^7}(1-x)^{7/2} +\frac{35\sqrt{2}}{9\cdot 2^{11}}(1-x)^{9/2} + \dots \tag1$$ for $-1 < x \le 1$. If we plug $$ x = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ... \tag2$$ into $(1)$, we should get $\theta$. At least for $0 \le \theta \le \pi/2$.
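A quick numeric check of the truncated series against `math.acos` (my own snippet; five terms, as listed in $(1)$):

```python
import math

SQRT2 = math.sqrt(2)
# coefficients of (1-x)^{1/2}, (1-x)^{3/2}, ... from equation (1)
COEFFS = [SQRT2, SQRT2 / 12, 3 * SQRT2 / (5 * 2**5),
          5 * SQRT2 / (7 * 2**7), 35 * SQRT2 / (9 * 2**11)]

def arccos_series(x):
    """Five-term truncation of the Puiseux series (1) about x = 1."""
    t = 1 - x
    return sum(c * t ** (k + 0.5) for k, c in enumerate(COEFFS))

for x in (0.99, 0.9, 0.5):
    print(f"x = {x}: series = {arccos_series(x):.8f}, acos = {math.acos(x):.8f}")
```

The agreement is excellent near $x=1$ and degrades slowly as $x$ moves away, as expected for an expansion about $x=1$.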
|taylor-expansion|inverse-trigonometric-functions|
0
Encoding the first element of an ordered pair
Assuming we are working in axiomatic set theory, such as $ZFC$ or $NBG$ can we, for all ordered pairs $(x, y)$ find a function, call it $First$ such that $First((x, y))=x$ Is such a thing possible if we are using Kuratowski's definition of the ordered pair, namely $(x, y) = \{ \{x \}, \{ x, y \} \}$ . One thing that comes to mind is to take $First(t)=\bigcup\bigcap t$ Can we also generalize this? Like assuming an $n$ -ordered tuple, call it y, can we define a predicate $P(y, m)$ such that it gives us the $m$ -th component of $y$ . One way I thought about doing this is to completely disregard the Kuratowski definition and identify an $n$ -ordered tuple as a function with its domain $n$ , where as a set $n= \{ 0, 1, ..., n - 1 \}$ . Therefore, my predicate $P(y, m)$ is nothing but $y(m)$ with the assumption that $y$ is a function. This generalization would work for ordered tuples with arbitrary length, even infinite. I feel like what I have done is cheating, so I am wondering if we can f
So, to first answer your main question: can we, for all ordered pairs $(x, y)$ find a monadic predicate, call it $\def\Fst{{\rm First}}\Fst$ such that $\Fst((x, y))=x$ ? And the answer is, yes, absolutely. If someone put forth some notion of ordered pairs, and there was not some way of extracting the $x$ from $(x, y)$ , we would tell them to go away and come back when they had something useful. So yes, we can do this with Kuratowski pairs. You can find the formulas in the Wikipedia article on “ordered pair” . As you surmised, $\Fst(p)$ is just $\bigcup\bigcap p$ . The corresponding ${\rm Second}(p)$ is unfortunately rather more complicated. But you should know there is nothing really special about the Kuratowski formula $$(x, y) = \{\{x\}, \{x, y\}\}.$$ It was not the first or only set-theoretic model of ordered pairs. It was preceded historically by Norbert Wiener's definition: $$(x, y) = \{\{\{x\},\emptyset\}, \{\{y\}\}\}$$ and by Felix Hausdorff's: $$(x, y) = \{\{1, x\}, \{2, y\}\}$
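For illustration only, the Kuratowski coding and the $\bigcup\bigcap$ trick can be modelled in Python with frozensets. Since the coordinates here are atoms (integers) rather than sets, `first` reads off the unique member of $\bigcap p$ instead of literally forming $\bigcup\bigcap p$, and `second` is a shortcut, not the (messier) standard set-theoretic formula the answer alludes to.

```python
def pair(x, y):
    """Kuratowski pair (x, y) = {{x}, {x, y}} modelled with frozensets."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def first(p):
    """The intersection of the members of p is always the singleton {x}."""
    common = frozenset.intersection(*p)
    (x,) = common
    return x

def second(p):
    """The member of the union of p other than first(p), except when x = y."""
    rest = frozenset.union(*p) - {first(p)}
    return next(iter(rest)) if rest else first(p)

print(first(pair(1, 2)), second(pair(1, 2)))  # 1 2
print(first(pair(3, 3)), second(pair(3, 3)))  # 3 3
```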
|elementary-set-theory|
1
How to prove that $(x-3)^{\frac {1} {2}}=\frac{10}{x-5}$ has at least one real root?
How to prove that $(x-3)^{\frac {1} {2}}=\frac{10}{x-5}$ has at least one real root? I know that you have to let $f(x)={(x-3)^{\frac {1} {2}}}$ and $g(x)=\frac{10}{x-5}$, and then let $h(x)=f(x)-g(x)=0$. I know I must use the intermediate value theorem, but I can't find the two $x$ values bounding an interval for this graph, which are needed for the proof.
Rewrite the equation as $(x-5)\sqrt{x-3} = 10$. Notice that at $x=5$, the LHS equals $0 < 10$. Now take a large $x$, e.g. $x=100$; indeed we have LHS $> 10$. By continuity of the LHS function, the equation must have at least one real root (for some $x \in (5,100)$, by the IVT).
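The sign change can be confirmed, and the root located, with a few lines of bisection (interval $(5,100)$ as in the answer; code my own):

```python
def h(x):
    """h(x) = (x - 5) * sqrt(x - 3) - 10; a zero of h solves the equation."""
    return (x - 5) * (x - 3) ** 0.5 - 10

a, b = 5.0, 100.0
assert h(a) < 0 < h(b)   # sign change, so the IVT applies on (5, 100)

for _ in range(60):      # bisection to locate the root
    m = (a + b) / 2
    if h(m) < 0:
        a = m
    else:
        b = m
print(f"root ~ {m:.6f}, h(root) = {h(m):.1e}")
```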
|calculus|
0
Can a ring $R$ be finitely-generated as a $k$-algebra under one map but not under another
We say that an extension of rings $R \to S$ (the map is a ring homomorphism and so takes $1 \to 1$ ) is finitely-generated as an $R$ -algebra if the image of $R$ is contained in the center of $S$ and every element of $S$ can be written as a polynomial with coefficients in the image of $R$ whose variables are from a finite set of the elements of $S$ . Given two extensions $\phi_1 : k \to S$ and $\phi_2 : k \to S$ , where $k$ is a field. Is it possible for $\phi_1$ to be a finitely-generated $k$ -algebra, but $\phi_2$ to not be a finitely-generated $k$ -algebra? I don't see a reason why this can't happen, but I can't find an example. Bonus points if you know the answer when $k$ is not assumed to be a field, where I feel more confident that this can happen.
Let $K$ be a field, $k=S=K(x_1,x_2,...)$ (the field of fractions of $K[x_1,x_2,...]$, the polynomial ring in countably infinitely many variables over $K$), $\phi_1:k\to S$ be the identity $\mathrm{id}_k$, and $\phi_2:k\to S$ be given by fixing $K$ and mapping $x_i$ to $x_{2i}$ for all whole $i\geq 1$. Evidently $\phi_1$ makes $S$ a finitely generated $k$-algebra. Further, adjoining any finite number of elements of $S$ to the image of $\phi_2$ will not suffice to adjoin all of the missing $x_{2k+1}$s for all natural $k$, which gives an argument that $\phi_2$ doesn't make $S$ a finitely generated $k$-algebra. For an example where $k$ is not a field, replace in the above $k=S=K[x_1,x_2,...]$. For more examples, taking appropriately "small" extensions of $S$ allows one to consider when e.g. $k$ is a field but $S$ isn't. Note also in the above that if $k$ is taken to be a field, the degree of $k$ over its prime field is necessarily infinite: If a ring $S$ can be regarded as a
|abstract-algebra|ring-theory|commutative-algebra|
1
Decomposing a matrix with unit sphere constraints
I would like to decompose an $m\times n$ matrix $A$ into two matrices $U\in\mathbb{R}^{m\times n}$ and $V\in\mathbb{R}^{n\times n}$ such that $UV=A$ , and the $m$ rows of $U$ each have unit magnitude. In other words, I want the rows of $U$ to lie on the unit (hyper-)sphere. Is such a decomposition possible in general? Is there a formula or method for finding it?
Take any $X\in\mathbb{R}^{m\times n}$ with $\operatorname{rank}(X)=m\,$ and construct the semi-orthogonal matrix $$ U=(XX^T)^{-1/2}\,X \quad\implies\quad UU^T = I = UU^{\bf+} $$ Note that the pseudoinverse of $U$ is equal to its transpose. Construct the second factor as $\; V = U^TA$ Then $\:UV=UU^TA=IA=A.\;$ This factorization is not unique, since the choice of $X$ was arbitrary.
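A small worked instance (my own; I choose $X$ with orthogonal rows so that $(XX^T)^{-1/2}$ is diagonal and no matrix square root is needed; for a general full-rank $X$ one would use e.g. `scipy.linalg.sqrtm`):

```python
# X is chosen with orthogonal rows of norm 5, so (X X^T)^(-1/2) = diag(1/5, 1/5)
# and U is simply X / 5.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
X = [[3.0, 4.0, 0.0],
     [0.0, 0.0, 5.0]]
m, n = 2, 3

U = [[x / 5.0 for x in row] for row in X]   # semi-orthogonal: U U^T = I

# V = U^T A  (an n x n matrix)
V = [[sum(U[k][i] * A[k][j] for k in range(m)) for j in range(n)]
     for i in range(n)]

# reconstruct U V and inspect the row norms of U
UV = [[sum(U[i][k] * V[k][j] for k in range(n)) for j in range(n)]
      for i in range(m)]
print(UV)                                       # recovers A
print([sum(x * x for x in row) for row in U])   # each row of U has unit norm
```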
|linear-algebra|convex-optimization|matrix-decomposition|spheres|svd|
0
Definition of first-order and second-order (quadratic) variation
In Shreve's book on finance in continuous time, he "defines" the following. He says on p. 99: In general, to compute the first-order variation of a function up to time $T$, we first choose a partition $\Pi= \{t_0, t_1, \ldots, t_n\}$ of $[0,T]$, which is a set of times $0=t_0 < t_1 < \cdots < t_n = T$. These will serve to determine the step size. We do not require the partition points $t_0 = 0, t_1, t_2, \ldots , t_n=T$ to be equally spaced, although they are allowed to be. The maximum step size of the partition will be denoted $||\Pi||= \max_{j=0, \ldots ,n-1} (t_{j+1} - t_j)$. We then define $$ FV_T (f) = \lim_{||\Pi||\to 0} \sum_{j=0}^{n-1} | f(t_{j+1}) - f(t_j)|.$$ The limit here is taken as the number $n$ of partition points goes to infinity and the length of the longest subinterval $t_{j+1} - t_j$ goes to zero. Unfortunately, these symbols (or rather their mathematical meaning) are not defined. It is unclear to me what this "$\lim_{||\Pi||\to 0}$" means. This is not a term defined anywhere in math
The limit as the mesh size goes to zero is quite standard when treating Riemann-Stieltjes integrals. The formal definition of $\lim_{\| \Pi \| \to 0}$ is the following: Let $\Pi$ be a partition of an interval $[a,b]$, and $\|\Pi\|$ as in the OP. A sequence $X$ is called an evaluation sequence for a partition $\Pi=\{a=:t_0 < t_1 < \cdots < t_n := b\}$ if $X=\{x_1, \ldots, x_n\}$ with each $x_j \in [t_{j-1},t_j]$. Let $I(\Pi,X): \mathcal{P} \to \mathbb{R}$ be a function (where $\mathcal{P}$ is the set of all possible partitions of $[a,b]$). Then $$ \lim_{\|\Pi\| \to 0} I(\Pi,X)=L $$ if for each $\epsilon>0$ there exists $\delta>0$ such that, for any $\Pi \in \mathcal{P}$ with $\|\Pi\| < \delta$ and for any evaluation sequence $X$ of $\Pi$, we have $$ |I(\Pi,X)-L| < \epsilon\,. $$
|probability|quadratic-variation|
0
How to prove that $(x-3)^{\frac {1} {2}}=\frac{10}{x-5}$ has at least one real root?
How to prove that $(x-3)^{\frac {1} {2}}=\frac{10}{x-5}$ has at least one real root? I know that you have to let $f(x)={(x-3)^{\frac {1} {2}}}$ and $g(x)=\frac{10}{x-5}$, and then let $h(x)=f(x)-g(x)=0$. I know I must use the intermediate value theorem, but I can't find the two $x$ values bounding an interval for this graph, which are needed for the proof.
Our requirement is to use the intermediate value theorem to prove that there must be some root in an interval $ (a,b) $ . We must first show that the function is continuous on $ (a,b) $ , but that might be better left for later. I suppose you may just need a reminder about how to pick $ a$ and $b$ to show the existence of a root of a function $ h(x) $ in the interval $(a,b)$ . What we need is to know that $ h(x_0) = 0 $ at some $ x_0 \in (a,b) $ . Do you think you can figure out how to pick the interval? Just remember to ensure that $ h(x) $ is continuous on $(a,b)$ . Hint: opposite signs.
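To make the hint concrete, here is a quick numeric check of one possible interval (the endpoints $6$ and $30$ are an illustrative choice of mine, not the only one). Both functions are continuous for $x>5$ , and $h$ changes sign there.

```python
import math

def h(x):
    # h(x) = f(x) - g(x) with f(x) = sqrt(x - 3) and g(x) = 10 / (x - 5)
    return math.sqrt(x - 3) - 10 / (x - 5)

print(h(6))   # sqrt(3) - 10, negative
print(h(30))  # sqrt(27) - 0.4, positive
```

A sign change on $(6,30)$ together with continuity is exactly what the intermediate value theorem needs.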
|calculus|
0
Computing conditional expectation of recursive process
Assume that $u(c_s)$ is some concave function. The goal is to evaluate the following recursive stochastic process $V_t$ given by $$V_t = E_t[\int_t^{\infty} (u(c_s) - \beta V_s)ds] $$ for some constant $\beta$ and some adapted stochastic process $c_s$ . The expectation is conditional with respect to the filtration at time $t$ . The solution is $V_t = E_t[\int_t^{\infty}e^{-\beta(s-t)}u(c_s)ds]$ . What is the easiest way to prove this? Following are my steps \begin{eqnarray} V_t = E_t[\int_t^{\infty} (u(c_s) - \beta V_s)ds] = E_t[\int_t^{\infty} u(c_s)ds] - \beta E_t[\int_t^{\infty}V_sds] \end{eqnarray} $$V_t + \beta E_t[\int_{t}^{\infty}V_s ds] = E_t[\int_t^{\infty} u(c_s)ds]$$ Then what? Thanks!
How about proving that the provided expression satisfies the recursive formula? \begin{align} \mathbb E_t\left[\int_t^\infty \left(u(c_s)-\beta\mathbb E_s\left[\int_s^\infty e^{-\beta(w-s)}u(c_w)dw\right]\right)ds\right]&=\mathbb E_t\int_t^\infty \left(u(c_s) - \int_s^\infty \beta e^{-\beta(w-s)} \mathbb E_s\left[u(c_w)\right]dw\right)ds \end{align} Using the fact that $$\int_s^\infty \beta e^{-\beta (w-s)} \mathrm dw = 1$$ then the previous expression is: \begin{align} \int_t^\infty \int_s^\infty\beta e^{-\beta (w-s)} \mathbb E_t\mathbb E_s[u(c_s)-u(c_w)]dwds&=\int_t^\infty \int_s^\infty\beta e^{-\beta (w-s)} \mathbb E_t[u(c_s)-u(c_w)]dwds \end{align} Can you finish the proof using Fubini?
|probability|integration|probability-theory|expected-value|conditional-expectation|
0
About symmetry of the Fréchet differential
I am trying to solve this problem. Let $E$ , $F$ be Banach spaces and $f:E\to F$ be an $n$-times differentiable function at $a \in E$ . Show that $D^nf(a)$ with $n\geq 2$ is a multilinear, bounded and symmetric map. In general I know that the space of multilinear bounded maps can be identified with the derivatives of order $n$ ; for example, the second derivative is a bilinear map, and according to Schwarz's theorem it is a symmetric map. Since the proof proceeds by induction, the base case $n=2$ is valid. I don't know how to relate my inductive hypothesis to the proof of the case $n+1$ . Thanks a lot for the help.
Here is a sketch of one argument with some of the trickier parts explained in more detail. Suppose the statement is true for $n$ and suppose $f$ is $(n+1)$ -times differentiable at the point $a$ . The statement follows from proving the following: If $h_{1}, \ldots , h_{n+1}\in E$ , then $$[D^{n+1}f(a)](h_{2}, h_{1}, h_{3}, \ldots , h_{n+1}) = [D^{n+1}f(a)](h_{1}, h_{2}, h_{3}, \ldots , h_{n+1})$$ and if, in addition, $\sigma$ is a permutation of $\{2, \ldots , n+1\}$ , $$[D^{n+1}f(a)](h_{1}, h_{\sigma (2)}, \ldots , h_{\sigma (n+1)}) = [D^{n+1}f(a)](h_{1}, h_{2}, \ldots , h_{n+1}).$$ For if those statements have been proved, the general case follows from noting that any permutation of $\{1, \ldots , n+1\}$ is a composition of permutations of $\{1,2\}$ and $\{2, \ldots , n+1\}$ . Throughout the rest of the discussion, consider $h_{1}, \ldots , h_{n+1}$ all as fixed elements of $E$ . For the first statement, the function $g$ defined on an open neighbourhood of $a$ by $g(x) := [D^{n-1}f(x
|derivatives|banach-spaces|frechet-derivative|
1
Singular solution of partial differential equation
If the complete integral of the differential equation $$ x (p^2 +q^2) = zp $$ (where $p$ is the partial derivative of $z$ with respect to $x$ and $q$ is the partial derivative of $z$ with respect to $y$ ) passes through $x=0$ , $z^2 =4y$ , then the envelope of this family passing through $x=1$ , $y=1$ has 1) $z= -2 $ 2) $z=2$ 3) $z= \sqrt{2+2\sqrt{2}}$ 4) $z= -\sqrt{2+2\sqrt{2}}$
Here is an alternative solution. Our starting point is the complete integral of the PDE, derived by Lutz Lehmann: $$ z^2=a^2[x^2+(y-b)^2]. \tag{1} $$ The intersection of the surface $(1)$ with the plane $x=0$ is the pair of lines $z=\pm a(y-b)$ . Now consider the tangent line to the curve $x=0, z^2=4y$ at the point $(0,y_0,\sqrt{4y_0})\,(y_0>0)$ : $$ z-\sqrt{4y_0}=z_y(y_0)(y-y_0)=\frac{1}{\sqrt{y_0}}(y-y_0)\implies z=\frac{1}{\sqrt{y_0}}(y+y_0). \tag{2} $$ Comparing $(2)$ with $z=a(y-b)$ , we conclude that the condition for the surface $(1)$ to be tangent to the curve $x=0, z^2=4y$ is $^{(*)}$ $$ (a,b)=\left(\frac{1}{\sqrt{y_0}},-y_0\right) \implies b=-\frac{1}{a^2}. \tag{3} $$ Substituting $(3)$ in $(1)$ , we obtain the family $$ z^2=a^2\left[x^2+\left(y+\frac{1}{a^2}\right)^2\right]. \tag{4} $$ To find the envelope of this family, we eliminate $a$ between $(4)$ and the equation $$ \frac{\partial}{\partial a}\left\{z^2-a^2\left[x^2+\left(y+\frac{1}{a^2}\right)^2\right]\right\}=0 $$ $$
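As a sanity check of my own (not part of the answer): at the point $(x,y)=(1,1)$ the envelope value of $z^2$ is the extremum over $a$ of the family $(4)$ , and a crude grid search puts that extremum at $2+2\sqrt2$ , matching options 3) and 4).

```python
import math

def z_squared(a, x=1.0, y=1.0):
    # Family (4): z^2 = a^2 * (x^2 + (y + 1/a^2)^2), evaluated at (x, y) = (1, 1)
    return a**2 * (x**2 + (y + 1 / a**2) ** 2)

# Grid search for the extremum over a > 0; the envelope touches the extremizing member.
best = min(z_squared(0.001 * k) for k in range(1, 20001))
print(best, 2 + 2 * math.sqrt(2))  # both close to 4.8284...
```

Writing $t=a^2$ gives $z^2 = 2t+2+1/t$ , minimized at $t=1/\sqrt2$ , which reproduces $2+2\sqrt2$ exactly.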
|ordinary-differential-equations|partial-differential-equations|characteristics|singular-solution|
0
Is every subset of a set also a set?
Using the axioms of $ZF$ , you can ensure that from one or more given sets you can also create a set. However, the question that arose in my mind is whether all subsets of the created set are also sets. In other words, is there a proof that every class $A$ , where $A\subset C$ and $C$ is a set, is also a set? If $A$ is defined by a formula, $A = \{ x \in C : P(x) \}$ , then by the axiom schema of specification it becomes easy to infer that $A$ is a set. But we do not know whether every subset of $C$ can be defined by a formula.
By definition, a class is a collection of sets given by a first order formula. If $A$ is given by the formula $\phi$ and $C$ is already a set, you can use the axiom (schema) of specification to prove that $A$ is a set. That's precisely what the axiom was invented for.
|set-theory|
0
About an integral from MIT Integration Bee 2024
Good evening, I was interested in the third integral from the finals of the MIT Integration Bee 2024 : $$I = \int_{-\infty}^{\infty} \frac{1}{x^4+x^3+x^2+x+1} \hspace{0.1cm} \mathrm{d}x$$ One way to solve this is to identify that the denominator is the fifth cyclotomic polynomial, so we can write : $$I = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{(1-x)(x^4+x^3+x^2+x+1)} \hspace{0.1cm} \mathrm{d}x = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Then, by Chasles : $$I = \displaystyle\int_{-\infty}^{0} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Let $x\to -x$ in the first integral, we obtain : $$I = \displaystyle\int_{0}^{\infty} \frac{1+x}{1+x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ And we use these two identities : $$ \displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1+x^k} \hspace{0.1cm} \mathrm{d}x
The prosaic way to proceed is also possible, we just factor the denominator: $$ f(x)= x^4 + x^3 + x^2 + x + 1 =\left(x^2 +\frac 12 x+1\right)^2 -\frac 54x^2 =\prod\left(x^2 +\frac {1\pm \sqrt 5}2 x+1\right) \ . $$ It is invariant under the Galois substitution $\sqrt 5\to-\sqrt 5$ . The partial fraction decomposition is, also taking care of Galois invariance, $$ \begin{aligned} \frac 1{f(x)} &= \frac 1{\sqrt 5} \left( \frac{x+\frac 12(1+\sqrt 5)}{x^2+\frac 12(1+\sqrt 5)x+1} - \frac{x+\frac 12(1-\sqrt 5)}{x^2+\frac 12(1-\sqrt 5)x+1} \right) \\ &= \frac1{\sqrt 5}\sum_{a=(1\pm\sqrt5)/2} \pm\frac{x+a}{x^2 +ax+1} \ . \end{aligned} $$ We have for a parameter $a$ the value $\int_{-M}^M\frac{x+a}{x^2 +ax+1}\; dx = \int_{-M}^M\frac{x+\frac 12a}{x^2 +ax+1}\; dx +\frac a2\int_{-M}^M\frac1{x^2 +ax+1}\; dx $ . The first piece leads to $\frac 12\ln(x^2+ax+1)$ taken from $-M$ to $M$ . In our case, we take the difference for the values $a=\frac 12(1\pm\sqrt 5)$ . We obtain a difference of logarithms, thus a l
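The decomposition is easy to check numerically (a quick sketch of my own): the two partial fractions recombine to $1/f$ because the difference of their numerators collapses to $\sqrt5$.

```python
import math

s5 = math.sqrt(5.0)
a_plus, a_minus = (1 + s5) / 2, (1 - s5) / 2  # linear coefficients of the two quadratic factors

def f(x):
    return x**4 + x**3 + x**2 + x + 1

def pfd(x):
    # (1/sqrt5) * [ (x + a_plus)/(x^2 + a_plus*x + 1) - (x + a_minus)/(x^2 + a_minus*x + 1) ]
    return ((x + a_plus) / (x**2 + a_plus * x + 1)
            - (x + a_minus) / (x**2 + a_minus * x + 1)) / s5

for x in (-2.5, -1.0, 0.0, 0.7, 3.0):
    print(x, 1 / f(x) - pfd(x))  # differences at machine-precision level
```

Both quadratic denominators have negative discriminant, so the check is valid for every real $x$.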
|integration|
0
Question regarding definition of 'Markov process' in Shreve book and remark related to it
I have a question regarding the definition of 'Markov process' given by Shreve, book II (continuous-time models). It says (see below) this: "Assume that for all $0 \leq s \leq t \leq T$ and for every nonnegative Borel-measurable function $f$ , there is another Borel-measurable function $g$ such that $\mathbb E [ f(X(t)) \mid \mathcal F (s) ] = g(X(s))$ ." I get very confused by his comments in the Remark below it. It says there that $f$ is "permitted to depend on $t$ " and $g$ to depend on $s$ . I do understand the Definition as stated, but I do not understand the Remark following it. What does this mean? Why is this necessary? If we pick some times $s,t$ with $0 \leq s \leq t \leq T$ then these times are fixed: it's unclear to me what a function "depending on $t$ " means then. Can someone clarify this definition? I do not understand what it means. Together with this, I also do not understand why we can take $g$ to be the same symbol as $f$ . In general, I just don't understand what is meant
I too find this Remark 2.3.7 a bit hard to follow, and probably even slightly incorrect, in the sense that Shreve should have written that "and the function $g$ will depend on $\color{red}{t}$ " (this might be a typo). Regardless of whether $f$ depends on $t$ or not, a way of writing the Markov property is $$ \mathbb E\big[f\big(X_t)\,\big|\,{\cal F}_s\big]=\mathbb E\big[f(X_t)\,\big|\,X_s\big]\,. $$ From the RHS we see that the function $g$ should always depend on $t\,$ (and $X_s\,$ ). When $f$ depends on $t$ also, we use the fact that $(t,X_t)$ is trivially a Markov process (with the same filtration as $X\,$ ). Then $$ \mathbb E\big[f\big(t,X_t)\,\big|\,{\cal F}_s\big]=\mathbb E\big[f(t,X_t)\,\big|\,X_s\big]\,. $$ The function $g$ on the RHS now depends on $t$ and $X_s$ again, but not on $s$ because that is not random: $\sigma(s,X_s)=\sigma(X_s)\,.$ Simple example: Brownian motion (or any other martingale): $$ \mathbb E[t\,W_t\,|\,{\cal F}_s]=\color{red}{t}\,W_s=g(\color{red}t,W_s)\,. $$
|probability-theory|markov-process|
0
Convergence of $\displaystyle \int_0^\infty\frac{\sin x}{x^p + \sin x} dx$
I'm trying to find the bound on $p \gt 0$ for which $\displaystyle \int_0^\infty\frac{\sin x}{x^p + \sin x} dx$ converges. Around zero we can move to an equivalent (in terms of convergence) integral $\displaystyle \int_0^\varepsilon\frac{dx}{x^{p-1} + 1}$ using $x \gt \sin x \gt \frac x 2$ . After checking all placements of $p$ , I decided that the integral converges around zero. When going to infinity, I ignored $\sin x$ in the bottom part of the fraction and looked for inequalities for $\displaystyle I_k \sim \int_{2\pi k}^{2\pi (k + 1)}\frac{\sin x}{x^p} dx$ . For clarity I changed the variable to $t = 2\pi x$ and ignored the resulting constant in the integral. Then I divided the interval into $[k:k+1/2]$ ; $[k+1/2:k+3/2]$ ; $[k+3/2:k+4/2]$ . On each interval I bounded $\frac{1}{x^p}$ with something like $\frac{1}{(k+i)^p}$ and integrated $\sin x$ to get another non-important constant. As a result I got $$\frac{1}{(k+1/2)^p} - \frac{1}{(k+3/2)^p}\lt I_k \lt \frac{1}{(k)^p} - \frac{1}{(k+2)^p}\\$$ Then I calc
Due to $$\lim_{x\to0^+}\frac{\sin x}{x^p+\sin x}=\lim_{x\to0^+}\frac{\frac{\sin x}x}{x^{p-1}+\frac{\sin x}x}= \begin{cases} 0,& 0<p<1 \\ \frac{1}{2},& p=1 \\ 1,& p>1 \end{cases}, $$ the point $0$ is a point of continuity for the function $\frac{\sin x}{x^p+\sin x}$ $(p>0)$ . So we consider the improper integral $$\int_{1}^{\infty}\frac{\sin x}{x^p+\sin x}dx.$$ Note that: $$\frac {\sin x}{x^p+\sin x} =\frac{\sin x}{x^p}-\frac{\sin^2x}{x^p(x^p+\sin x)}.$$ $$\int_1^{\infty}\frac {\sin x}{x^p} \, \mathrm{d}x\ \mbox{converges}\iff p>0,$$ and $$\int_1^{\infty}\frac{\sin^2x}{x^p(x^p+\sin x)}dx\ \mbox{converges}\iff p>\frac{1}{2}.$$ $\textbf{Proof}$ : $$0\leq\frac{\sin^2x}{x^p(x^p+\sin x)}\leq\frac{1}{x^p(x^p+\sin x)}\sim\frac{1}{x^{2p}},$$ and $$\int_1^{\infty}\frac {1}{x^{2p}} \, \mathrm{d}x\ \mbox{converges}\iff p>\frac{1}{2}.$$ On the other hand, when $0<p\leq\frac{1}{2}$ , $$\int_1^{\infty}\frac{\sin^2x}{x^p(x^p+\sin x)}dx\ \mbox{is not convergent}.$$ Proof as follows: $$\frac{\sin^2x}{x^p(x^p+\sin x)}\geq\frac{\sin^2x}{x^p(x^p+1)}\geq\frac{\sin^2
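The splitting used above is a pure algebraic identity; here is a numerical spot check of my own on a few sample points.

```python
import math

def lhs(x, p):
    return math.sin(x) / (x**p + math.sin(x))

def rhs(x, p):
    # sin(x)/x^p - sin(x)^2 / (x^p * (x^p + sin(x)))
    return math.sin(x) / x**p - math.sin(x) ** 2 / (x**p * (x**p + math.sin(x)))

for p in (0.3, 0.5, 0.8, 2.0):
    for x in (1.0, 2.0, 7.5, 100.0):
        assert abs(lhs(x, p) - rhs(x, p)) < 1e-12
print("identity verified on the sample grid")
```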
|convergence-divergence|improper-integrals|solution-verification|
0
A countable ordinal which is $\Sigma_n$-definable in first-order ZFC, but not $\Sigma_m^1$-definable in full second-order arithmetic
Let us say that a $\Sigma_m^1$ -formula $\phi$ defines a countable ordinal $\alpha$ if it defines a type- $1$ object (i.e. a real) $x$ that encodes a well-ordering of $\mathbb{N}$ of order type $\alpha$ (assuming that there is a fixed way to interpret a real as a well-ordering of $\mathbb{N}$ ). Does there exist a countable ordinal $\alpha$ that satisfies both of the following two properties? There exists an integer $n$ such that $\alpha$ is $\Sigma_n$ -definable (without parameters) in the language of first-order ZFC (by a formula in the Lévy hierarchy) over $L_{\omega_1}$ ; There does not exist an integer $m$ such that $\alpha$ is $\Sigma_m^1$ -definable (without parameters) in the language of full second-order arithmetic (over the standard model). If no, why? If yes, what is the smallest such $\alpha$ ? Can its value depend on whether we accept some additional assumptions or axioms?
No, there is no such ordinal (or even set). Actually we need to be a bit careful; what does it mean to talk about the definability of an ordinal in the context of arithmetic? Below, I'm assuming that by " $\alpha$ is $\Sigma^1_n$ " we mean "the set of reals coding relations on $\mathbb{N}$ which are isomorphic to $\alpha$ is $\Sigma^1_n$ ," but there is some flexibility here; e.g. maybe you want a specific code for $\alpha$ to be so definable. The reason is that elements of $L_{\omega_1}$ (or even $H_{\omega_1}$ ) can be "reasonably" coded by real numbers. There are several ways to do this; my personal favorite is to say that the codes of a set $x$ are the reals coding (in some reasonable way) the membership graph of $tc(\{x\})$ , which is a countable graph and so can be coded by a real. The important thing to check is that $(i)$ the set of reals which are codes for sets and $(ii)$ the relation of " $r$ and $s$ code the same set" are each projectively definable, but this isn't hard. At
|logic|set-theory|ordinals|
1
Prove uniqueness and existence of $\theta$ such that $\theta\left(x,0\right)=x$ and $\theta\left(x,y^{\prime}\right)=\theta\left(x,y\right)^{\prime}.$
The domain of consideration is the set of whole numbers, $\mathbb{N}_0$ . The following theorem (see facsimile below) appears in The Number System, by H.A. Thurston : 3. Theorem: There is just one operation $\theta$ such that, for every $x$ and $y,$ \begin{align*} (i)\text{ }\theta\left(x,0\right)=&x\text{ and}\\ (ii)\text{ }\theta\left(x,y^{\prime}\right)=&\theta\left(x,y\right)^{\prime}. \end{align*} Proof: If $\theta$ exists, let $\phi$ be an operation such that \begin{align*} (iii)\text{ }\phi\left(x,0\right)=&x\text{ for every }x\text{ and}\\ (iv)\text{ } \phi\left(x,y^{\prime}\right)=&\phi\left(x,y\right)^{\prime}\text{ for every }x\text{ and }y. \end{align*} Let $M$ be the set of $y$ for which $\phi\left(x,y\right)=\theta\left(x,y\right)$ for every $x.$ Then $0\in M,$ because $\phi\left(x,0\right)=x=\theta\left(x,0\right)$ for every $x,$ by $(iii)$ and $(i).$ \begin{align*} \text{If }y\in M\text{, then }\phi\left(x,y^{\prime}\right)= & \phi\left(x,y\right)^{\prime}\text{ by }(iv
I reject Thurston's entire proof, including the uniqueness part. Changing the name of the operator proves nothing. His proposed proof of existence shows that $(v),(vi)$ provide equivalent definitions of $\theta.$ That begs the question. Instead I proceed as follows: $\theta$ can be viewed as a three-place relation of whole numbers. Thus it can be treated as the set $$\vartheta=\left\{\langle{x,y,z}\rangle\backepsilon z=\theta\left(x,y\right)\right\}\subset\mathbb{N}_0^3$$ generated recursively using $(i),(ii).$ That $\vartheta\neq\mathbb{N}_0^3$ follows from $\langle{0^\prime,0,0}\rangle\notin\vartheta.$ To show that this generation deterministically produces exactly $\vartheta,$ we could use induction on $x$ with each step being induction on $y.$ However, since $(i),(ii)$ are defined identically for all $x,$ there is no need for induction on $x$ . Thus we consider the case of an arbitrarily chosen $x.$ Assume $x\in\mathbb{N}_0.$ Rule $(i)$ provides the base case. Assume $y=0.$ $$(i)\l
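The recursive generation can be mimicked computationally (an illustration of the construction, not a replacement for the proof): defining $\theta$ only through rules $(i)$ and $(ii)$ reproduces addition.

```python
def theta(x, y):
    """theta defined only via (i) theta(x, 0) = x and (ii) theta(x, y') = theta(x, y)'."""
    if y == 0:
        return x                  # rule (i)
    return theta(x, y - 1) + 1    # rule (ii): the successor of theta(x, y)

for x in range(6):
    for y in range(6):
        assert theta(x, y) == x + y
print("theta coincides with x + y on the tested range")
```

The induction on $y$ in the uniqueness argument corresponds exactly to the recursion unwinding one successor at a time.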
|elementary-number-theory|solution-verification|proof-explanation|induction|
0
Proving $\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m}=(-1)^n\binom{m}{m-n}$
I am trying to prove the following binomial identity: $$\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m}=(-1)^n\binom{m}{m-n}$$ My idea was to use the identity $$\binom{m}{m-n}=\binom{m}{n}=\sum_{i=0}^n(-1)^i \binom{m+1}{i}$$ but the coefficients of the two sums don't seem to be equal. I also know there is a combinatorial argument that proves this, but I am trying to find an algebraic proof.
Disclaimer Not an algebraic proof, at least not fully. But I think it may provide some insights into the solution you are looking for. Combinatorial Argument A company is looking to potentially replace some of its employees. At the moment, there are $n$ existing employees and $m$ applicants. The company fires $i$ employees. However, these employees immediately apply to work for the company again, making the number of applicants $m+i$ . The company then selects $i$ people out of the $m+i$ applicants to join the company. The number of ways to do this is $$ \binom{n}{i}\binom{m+i}{i} $$ which is part of the summand on the left-hand side of the question. The question is then to find the difference between the number of possibilities when $i$ is even and when $i$ is odd. Alternative Counting Let's say that $p$ employees get fired and hired again, while $q$ employees get replaced by new applicants. The number of possibilities is $$ \binom{n-q}{p}\binom{n}{q}\binom{m}{q} $$ with $p+q=i$ . Theref
|combinatorics|algebra-precalculus|summation|binomial-coefficients|binomial-theorem|
0
Let f be an infinitely differentiable function such that f(1) = 0, f(5) = ln 4, f'(1) = 2 and f'(5) = −2. Using the given equation(below),Find f(x).
The given equation is: $$x^2\left(f^{\prime \prime}(x)+\left(f^{\prime}(x)\right)^2\right)=1$$ The full question actually requires you to find f(x) and then use it to evaluate $$\int_1^5 e^{f(x)} dx$$ However I am stuck on finding f(x) first. First I attempted to guess what it could be using the boundaries given but that of course failed, so I am trying to get f(x) by manipulating the given equation by integrating both sides. Here is what I tried: $$ \begin{aligned} & x^{2}\left(f^{\prime \prime}(x)+\left(f^{\prime}(x)\right)^{2}\right)=1 . \\ & f^{\prime \prime}(x)+\left(f^{\prime}(x)\right)^{2}=\frac{1}{x^{2}} \end{aligned} $$ Integrate both sides: $$ \int f^{\prime \prime}(x) d x+\int\left(f^{\prime}(x)\right)^{2} d x=\int \frac{1}{x^{2}} $$ let $u=f^{\prime}(x)$ $$ d u=f(x) d x \Rightarrow \frac{d u}{f(x)}=d x $$ then, $$ \int \frac{u^{\prime}}{f(x)} d u+\int \frac{u^{2}}{f(x)} d u=\int \frac{1}{x^{2}} $$ Reaching here I am understanding that I am headed in the wrong direction howe
Hint . Let $u(x):=e^{f(x)}$ ; then $u'=f'e^f$ and $u''=(f''+f'^2)e^f=\frac{u}{x^2},$ which implies that $u$ satisfies the Cauchy-Euler equation $$ x^2u''-u=0. \tag{1} $$
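Following the hint (a sketch of my own, not the answer's): substituting $u=x^\alpha$ into $(1)$ gives the indicial equation $\alpha(\alpha-1)=1$ , so $\alpha=\frac{1\pm\sqrt5}{2}$ ; a finite-difference check confirms both exponents.

```python
import math

def residual(alpha, x=2.0, h=1e-5):
    # Numeric check that u(x) = x**alpha satisfies x^2 u'' - u = 0
    u = lambda t: t**alpha
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # central-difference approximation of u''
    return x**2 * u2 - u(x)

for alpha in ((1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2):
    print(alpha, alpha * (alpha - 1), residual(alpha))  # middle value is 1, residual near 0
```

The general solution is then a combination of these two powers, and the boundary data for $f=\ln u$ pin down the constants.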
|calculus|ordinary-differential-equations|definite-integrals|
1
Conditional expectation of random summation - How to show $E[\sum_{i=1}^{N}\xi_i|\sigma(N)]=pN$?
I am using the formal definition of conditional expectations: $E[X|\mathscr{F}]$ is any RV $Y$ such that $Y\in\mathscr{F}$ and $\int_AXdP=\int_AYdP$ for all $A\in\mathscr{F}$ . Suppose that $\xi_1,\xi_2,\dots$ are iid RV with mean $p$ , and they are independent of RV $N$ . How do I show rigorously by definition that $E[\sum_{i=1}^{N}\xi_i|\sigma(N)]=pN$ ? I understand the intuitive saying that conditioned on $\{N=n\}$ , $E[\sum_{i=1}^{N}\xi_i]=E[\sum_{i=1}^{n}\xi_i]=\sum_{i=1}^{n}E[\xi_i]=pn=pN$ . But it seems too far away from the formal definition for me.
$\int\limits_{\{N=n\}} \sum\limits_{i=1}^{N}\xi_i\,dP=\int\limits_{\{N=n\}} \sum\limits_{i=1}^{n}\xi_i\,dP=P(N=n)\int_{\Omega} \sum\limits_{i=1}^{n}\xi_i\,dP =P(N=n)np$ , where the second equality uses the independence of $N$ and the $\xi_i$ . Summing over $n\in A$ we get $\int\limits_{\{N\in A\}} \sum\limits_{i=1}^{N}\xi_i\,dP=\sum_{n\in A}np\, P(N=n)=\int_{\{N \in A\}} pN\,dP$ for any Borel set $A$ in $\mathbb R$ .
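A Monte Carlo illustration of the statement (my own sketch; the particular distributions are arbitrary choices): grouping by the value of $N$ , the sample mean of $\sum_{i=1}^{N}\xi_i$ tracks $pN$ .

```python
import random

random.seed(0)
p = 0.3                          # mean of the iid xi's (Bernoulli(p) here)
trials = 200_000

sums, counts = {}, {}
for _ in range(trials):
    N = random.randint(1, 5)     # N drawn independently of the xi's
    s = sum(random.random() < p for _ in range(N))
    sums[N] = sums.get(N, 0) + s
    counts[N] = counts.get(N, 0) + 1

for n in sorted(sums):
    print(n, sums[n] / counts[n], p * n)   # the two columns should nearly agree
```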
|probability|measure-theory|conditional-probability|conditional-expectation|
0
Why is the stochastic integral only defined for predictable integrands?
The answer here makes sense to me, in the sense that the stochastic integral does not preserve the local martingale property if the integrand is not predictable, but I am confused as to why. I have been taught the following definition of the stochastic integral: We first define the quadratic variation for a RCLL martingale, $M$ , to be $$ Q(M,t) \equiv \lim_{n \rightarrow \infty} \sum_{i=1}^\infty(M_{\sigma_{i+1} \wedge t}-M_{\sigma_{i} \wedge t})^2$$ where $\sigma_0 \equiv 0$ and $$ \sigma_{i+1} \equiv \inf \left(t > \sigma_i : |M_t - M_{\sigma_i}| \ge 2^{-n} \text{ or } |M_{t-} - M_{\sigma_i}| \ge 2^{-n} \right) $$ This limit is shown to converge uniformly on compact sets almost surely from first principles in this excellent pedagogical paper . The limit is shown to be non-decreasing, RCLL, etc. Now for any bounded RCLL martingale $M$ , and any (predictable, but where does this argument rely on predictability?) process $H$ such that $$\mathbb{E} \left(\int_0^\infty H_s^2 Q(M,ds) \right) < \infty$$
As mentioned in Revuz-Yor pg.120, the main reason that we require $H_{s}$ to be progressively measurable, is because we want the integral against an adapted increasing process $A_{s}$ $$H\cdot A:=\int H_{s}dA_{s}$$ to remain adapted and in turn preserve martingale too. So as mentioned in (2.2) Theorem part (a), if we don't have that $H\cdot A$ is a martingale, then we can't do the uniqueness argument i.e. we want $$\langle L-L',L-L'\rangle=0$$ to give us $L=L'$ . Then they also use Riesz-representation theorem to get a representative. So in the above construction, one needs to add some measurability assumption for $H$ in order to ensure that we get a martingale back. This is also shown here in lemma 2 https://almostsuremath.com/2010/01/03/the-stochastic-integral/ using the monotone class theorem. See also Le Gall Brownian Motion, Martingales, and Stochastic Calculus Proposition 5.3 and Theorem 5.4, in particular on pg. 100 To be precise, we should here say “equivalence classes of eleme
|stochastic-processes|stochastic-calculus|stochastic-integrals|stochastic-analysis|
0
Taylor coefficients and termwise integration
Many special function are calculated using its Taylor series, and is an efficient method of estimation the values of a function. Nevertheless, for some function evaluation of a formula of Taylor coefficients might be difficult. I would like to ask if it is possible to compute integrals of compositions of elementary analytic functions $f(h(g))$ following ways: Using values of the inner functions as $x$ and decompose over outer function as $$f(h(g(x))) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(h(g(x))-a)^n$$ and then integrate termwise. or Using series composition as in the article and then again integrate termwise
Simple example, $\displaystyle f(z)=\frac{e^z}{\sqrt{2\pi}}$ , $\displaystyle h(y)=\frac{y}{-2}$ , and $\displaystyle g(x)=y=x^2$ . Taking $f'$ as simple derivative of outer function $\displaystyle \frac{df}{dz}=\frac{e^z}{\sqrt{2\pi}}$ what is also $f''=f'''$ and so on (the $e$ function, youknowit ). Taking your suggested formula 1. at $a=0$ will result $\displaystyle \frac{1}{\sqrt{2\pi}}\sum_{n=0}^{\infty}\frac{\left (-x^2/2\right )^n}{n!}$ what is nicely wrong . In contrast, for $\displaystyle f(x)=\frac{e^{-x^2/2}}{\sqrt{2\pi}}$ the first few derivatives are $\displaystyle f'=\frac{-x\cdot e^{-x^2/2}}{\sqrt{2\pi}}$ , $\displaystyle f''=\frac{(x^2-1)\cdot e^{-x^2/2}}{\sqrt{2\pi}}$ , $\displaystyle f'''=\frac{x\left(-x^2+3\right)\cdot e^{-x^2/2}}{\sqrt{2\pi}}$ , $\displaystyle f^{(4)}=\frac{(x^4-6x^2+3)\cdot e^{-x^2/2}}{\sqrt{2\pi}}$ , thus its Taylor series is $\displaystyle \frac{1}{\sqrt{2\pi}}\cdot\left (1-\frac{x^2}{2}+\frac{x^4}{8}-\frac{x^6}{48}+\frac{x^8}{384}\mathrm{...}\ri
|real-analysis|integration|sequences-and-series|numerical-methods|taylor-expansion|
1
Proving $\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m}=(-1)^n\binom{m}{m-n}$
I am trying to prove the following binomial identity: $$\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m}=(-1)^n\binom{m}{m-n}$$ My idea was to use the identity $$\binom{m}{m-n}=\binom{m}{n}=\sum_{i=0}^n(-1)^i \binom{m+1}{i}$$ but the coefficients of the two sums don't seem to be equal. I also know there is a combinatorial argument that proves this, but I am trying to find an algebraic proof.
Generating function proof. $$ \begin{split} \sum_{m=0}^{\infty}\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m} x^m &=\sum_{i=0}^n (-1)^i\binom{n}{i}\left(\sum_{m=0}^{\infty}\binom{m+i}{m} x^m \right)\\ &=\sum_{i=0}^n (-1)^i\binom{n}{i}\left(\sum_{m=0}^{\infty}\binom{m+i}{i} x^m \right)\\ &=\sum_{i=0}^n (-1)^i\binom{n}{i}\frac{1}{(1-x)^{i+1}}\\ &=\frac{1}{1-x}\sum_{i=0}^n \binom{n}{i}\left(-\frac{1}{1-x}\right)^i\\ &=\frac{1}{1-x}\left(-\frac{1}{1-x}+1\right)^n\\ &=\frac{(-x)^n}{(1-x)^{n+1}}=(-1)^n\frac{x^n}{(1-x)^{n+1}}\\ &=(-1)^n\sum_{m=0}^{\infty}\binom{m}{n}x^m, \end{split} $$ and therefore, $$ \sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m}=(-1)^n\binom{m}{n}. $$ Combinatorial proof (Principle of Inclusion-Exclusion). Consider the sum $$ \begin{split} (-1)^n\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+i}{m} &=\sum_{i=0}^n (-1)^{n-i}\binom{n}{i}\binom{m+i}{m}\\ &=\sum_{i=0}^n (-1)^i\binom{n}{n-i}\binom{m+n-i}{m}\\ &=\sum_{i=0}^n (-1)^i\binom{n}{i}\binom{m+n-i}{m}. \end{split} $$ Consider non
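A brute-force check of the identity for small parameters (my own sanity check, using Python's `math.comb`):

```python
from math import comb

def lhs(n, m):
    return sum((-1) ** i * comb(n, i) * comb(m + i, m) for i in range(n + 1))

def rhs(n, m):
    return (-1) ** n * comb(m, n)   # comb(m, n) == comb(m, m - n) for 0 <= n <= m

for n in range(8):
    for m in range(n, 12):
        assert lhs(n, m) == rhs(n, m)
print("identity verified for 0 <= n <= 7, n <= m <= 11")
```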
|combinatorics|algebra-precalculus|summation|binomial-coefficients|binomial-theorem|
0
Tricky Application of Rouche's Theorem
I'm supposed to use Rouche's theorem to solve this problem, but I'm pretty sure it's not possible. Can anyone confirm this? I want to determine how many zeros $e^z-z$ has on $B_1(0)$ . The obvious set up is to take $f(z)=e^z$ and $g(z)=-z$ , but Rouche's theorem can't be applied on $\partial B_1(0)$ , as $e^{-1} < 1$ (at $z=-1$ we have $|e^z|=e^{-1}<1=|-z|$ ). We are able to use Rouche on a smaller region. I believe $\partial B_{1/2}(0)$ works. The justification being that the modulus of $e^z$ should be minimized when $z=-1/2$ , but $e^{-1/2}>1/2$ . Then, we get that $e^z$ and $e^z-z$ have the same number of zeros in $B_{1/2}(0)$ . So $e^z-z$ has no zeros in $B_{1/2}(0)$ . That's great, but not what the problem asked. Is there a tricky way to relate this to $B_1(0)$ somehow that I'm missing? This seems like it should be such a cut-and-dry application problem, but I'm just not seeing what to do. Is this even possible to do using Rouche?
For “small” $z$ we have $e^z \approx 1 + z $ , or $e^z - z \approx 1 \ne 0$ . That suggests applying Rouché's theorem to the functions $f(z) = e^z-z$ and $g(z) =1$ : For $|z| = 1$ we have $$ |f(z)-g(z)| = \left| \sum_{n=2}^\infty \frac{z^n}{n!}\right| \le \sum_{n=2}^\infty \frac{1}{n!} = e - 2 < 1 = |g(z)| \,, $$ so that $f$ and $g$ have the same number of zeros in the unit disk, i.e. none. Alternatively, use the triangle inequality instead of Rouché's theorem: For $|z| \le 1$ we have $|e^z-1-z| \le e-2$ and therefore $$ |e^z-z| \ge 1 - |e^z-z-1| \ge 1 - (e-2) > 0 \, . $$
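The conclusion can also be checked with the argument principle (a numeric sketch of my own, independent of the proof): $\frac{1}{2\pi i}\oint_{|z|=1}\frac{f'(z)}{f(z)}\,dz$ counts the zeros of $f(z)=e^z-z$ inside the unit disk.

```python
import cmath

def zero_count(f, fprime, n=4096):
    # Riemann-sum approximation of (1/(2*pi*i)) * contour integral of f'/f over |z| = 1
    total = 0j
    for k in range(n):
        z = cmath.exp(2j * cmath.pi * k / n)
        dz = 2j * cmath.pi * z / n
        total += fprime(z) / f(z) * dz
    return total / (2j * cmath.pi)

count = zero_count(lambda z: cmath.exp(z) - z, lambda z: cmath.exp(z) - 1)
print(count)  # approximately 0: no zeros of e^z - z in the unit disk
```

The sum converges very fast here because the integrand is analytic in a neighborhood of the circle (the lower bound $|e^z-z|\ge 1-(e-2)$ keeps $f$ away from zero).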
|complex-analysis|analysis|roots|rouches-theorem|
1
Tricky Application of Rouche's Theorem
I'm supposed to use Rouche's theorem to solve this problem, but I'm pretty sure it's not possible. Can anyone confirm this? I want to determine how many zeros $e^z-z$ has on $B_1(0)$ . The obvious set up is to take $f(z)=e^z$ and $g(z)=-z$ , but Rouche's theorem can't be applied on $\partial B_1(0)$ , as $e^{-1} < 1$ (at $z=-1$ we have $|e^z|=e^{-1}<1=|-z|$ ). We are able to use Rouche on a smaller region. I believe $\partial B_{1/2}(0)$ works. The justification being that the modulus of $e^z$ should be minimized when $z=-1/2$ , but $e^{-1/2}>1/2$ . Then, we get that $e^z$ and $e^z-z$ have the same number of zeros in $B_{1/2}(0)$ . So $e^z-z$ has no zeros in $B_{1/2}(0)$ . That's great, but not what the problem asked. Is there a tricky way to relate this to $B_1(0)$ somehow that I'm missing? This seems like it should be such a cut-and-dry application problem, but I'm just not seeing what to do. Is this even possible to do using Rouche?
You can try the symmetric version of Rouché: If $f$ and $g$ are analytic in a neighbourhood of $K$ and $|f(z) - g(z)| < |f(z)| + |g(z)|$ for $z \in \partial K$ , then $f$ and $g$ have the same number of zeros in $K$ . Note that $|f(z) - g(z)| \le |f(z)| + |g(z)|$ with equality only if $f(z)$ and $g(z)$ lie on opposite rays from the origin. So take $f(z) = \exp(z) - z$ and $g(z) = \exp(z)$ . On the unit circle $\partial K$ we have $|f(z) - g(z)| = 1$ . If $z = x + i y \in \partial K$ with $x > 0$ , $|g(z)| = \exp(x) > 1$ so $|f(z)| +|g(z)| > 1$ . On the other hand, if $x \le 0$ , $\text{Re}(g(z)) = \exp(x) \cos(y) > 0$ and $\text{Re}(f(z)) > -x \ge 0$ , so $f(z)$ and $g(z)$ can't lie on opposite rays from the origin.
|complex-analysis|analysis|roots|rouches-theorem|
0
Cauchy-Schwarz type inequality for $C^{\ast}$-algebras
Let $\mathcal{A}$ be a $C^{\ast}$ -algebra and $a_1, a_2, b_1$ and $b_2$ nonzero elements of $\mathcal{A}$ . Then it is not difficult to see that $\vert \vert a_1b_1+a_2b_2 \vert \vert^2 \leq (\vert \vert a_1 \vert \vert^2 +\vert \vert a_2 \vert \vert^2 ) (\vert \vert b_1 \vert \vert^2 +\vert \vert b_2 \vert \vert^2) $ . Does the above inequality become an equality in case $a_1, a_2, b_1$ and $b_2$ are linearly dependent?
No. Pick $x$ to be a nonzero element whose square is $0$ . Say, the matrix $\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}$ . Let $a_1 = a_2 = b_1 = b_2 = x$ . Then the LHS is $0$ but the RHS is $4\|x\|^4 \neq 0$ .
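The counterexample is easy to verify concretely (a small sketch of my own; the operator norm of $x$ here is $1$ , its largest singular value):

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = [[0, 1], [0, 0]]      # nonzero, with operator norm 1 and x^2 = 0
x2 = matmul(x, x)
print(x2)                  # [[0, 0], [0, 0]]

# With a1 = a2 = b1 = b2 = x:
#   LHS = ||a1*b1 + a2*b2||^2 = ||2*x^2||^2 = 0
#   RHS = (||a1||^2 + ||a2||^2) * (||b1||^2 + ||b2||^2) = (1 + 1) * (1 + 1) = 4
```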
|functional-analysis|c-star-algebras|
0
Why do so many solutions to $n+1\mid3^n+1$ satisfy $n\equiv27\pmod{72}$?
Here are the first $40$ integers for which $\dfrac{3^n+1}{n+1}$ is an integer. I've denoted their residue mod $72$ by color and a symbol. $$\begin{array}{r}\dagger\,\color{violet}0,&\bullet\,\color{brown}1,&*\,\color{red}3,&27,&531,\\ 1\,035,&4\,635,&6\,363,&11\,475,&19\,683,\\ 4\,131,&80\,955,&*\,\color{red}{266\,475},&280\,755,&307\,395,\\ 356\,643,&\circ\,\color{blue}{490\,371},&544\,347,&557\,955,&565\,515,\\ 572\,715,&808\,227,&1\,256\,355,&1\,695\,483,&1\,959\,075,\\ 1\,995\,075,&2\,771\,595,&2\,837\,835,&3\,004\,155,&3\,208\,491,\\ *\,\color{red}{3\,337\,635},&3\,886\,443,&4\,670\,955,&5\,619\,411,&6\,434\,595,\\ \bullet\,\color{brown}{6\,942\,817},&*\,\color{red}{7\,631\,715},&*\,\color{red}{9\,274\,755},&9\,436\,923,&9\,586\,107,\end{array}$$ $\dagger\,\color{violet}{\rm Violet}$ means $0$ mod $72$ , $\circ\,\color{blue}{\rm blue}$ means $51$ mod $72$ , $\bullet\,\color{brown}{\rm brown}$ means $1$ mod $72$ , $*\,\color{red}{\rm red}$ means $3$ mod $72$ . All the rest, ${\rm black}$ , are $27$ mod $72$ .
This answer shows that, apart from $n = 0$ , all solutions of $n$ are odd. Also, if $3 \mid n$ , then $n \equiv 3 \pmod{24}$ , so $n$ is congruent to one of $3$ , $27$ or $51$ mod $72$ , as your results show. OTOH, if $3 \nmid n$ , then $n \equiv 1 \pmod{12}$ . This solution also gives heuristic explanations about why almost all $n$ are divisible by $3$ , with most being divisible by $9$ , i.e., with $n \equiv 27\pmod{72}$ . The following only considers $n \gt 0$ . Also, note that $$3^n \equiv -1 \pmod{n + 1} \;\;\;\to\;\;\; 3^{2n} \equiv 1 \pmod{n + 1} \tag{1}\label{eq1A}$$ Assume an integer solution $n$ to \eqref{eq1A} is even. Using the $p$ -adic valuation function, this means $$\nu_2(n) = i \;\;\;\to\;\;\; n = 2^{i}j, \;\; i\ge 1, \;\; 2\nmid j \tag{2}\label{eq2A}$$ Since $n + 1$ is odd and $\gt 1$ , it's a product of one or more odd prime factors. Using the multiplicative order , for any one of these prime factors $p$ , plus also using \eqref{eq1A} and \eqref{eq2A}, we get $$\oper
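The congruence pattern is easy to confirm for the small entries of the list (a check of mine using fast modular exponentiation):

```python
# Verify (n + 1) | (3^n + 1) for the first listed solutions, and print n mod 72.
for n in (0, 1, 3, 27, 531, 1035, 4635, 6363, 11475, 19683):
    assert (pow(3, n, n + 1) + 1) % (n + 1) == 0
    print(n, n % 72)   # residues 0, 1, 3, 27, 27, 27, 27, 27, 27, 27
```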
|number-theory|modular-arithmetic|divisibility|
1