Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Alternative proof for Dini's theorem. Is it correct?
Let $(X,d)$ be a compact metric space, $f_n : X \to \mathbb{R}$ a sequence of continuous functions, and $f_{\infty} : X \to \mathbb{R}$ a continuous function. Now suppose that $f_n \to f_{\infty}$ pointwise, and that for each $x \in X$ the sequence (of real numbers) $\{ f_n (x) \}$ is monotonic (not necessarily all monotonic in the same way: there could exist $x$ such that $f_n (x)$ is monotonically increasing and $y$ such that $f_n (y)$ is monotonically decreasing). Then $f_n \to f_{\infty}$ uniformly. I think I found a simple proof of this theorem but I'm not sure it works. Assume by way of contradiction that the convergence is not uniform. Then there exists a positive real number $\rho > 0$ such that $\sup_{x \in X} |f_n(x)-f_{\infty}(x)| \geq 2 \rho$ for infinitely many $n \in \mathbb{N}$. For each one of these $n$'s, choose $x_n \in X$ such that $|f_n (x_n) - f_{\infty} (x_n)| \geq \rho$. Because we have infinitely many of these $n$'s and each $x_n$ can be either such that $f_n (x_n)$ is
The proof is correct even when $X$ is any sequentially compact space. Good job!
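As a numerical illustration of the theorem (my own example, not part of the proof): on the compact interval $[0,1]$ the sequence $f_n(x) = (1+x/n)^n$ increases pointwise to the continuous limit $e^x$, and the sup-norm error indeed goes to $0$:

```python
import numpy as np

x = np.linspace(0, 1, 1001)
target = np.exp(x)

# sup-norm distance between f_n(x) = (1 + x/n)^n and e^x for growing n
sups = []
for n in (1, 10, 100, 1000):
    sups.append(np.max(np.abs(target - (1 + x / n) ** n)))

assert sups == sorted(sups, reverse=True)  # the error shrinks monotonically
assert sups[-1] < 2e-3                     # roughly e/(2*1000) at n = 1000
```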
|sequences-and-series|continuity|compactness|uniform-convergence|pointwise-convergence|
1
in the triangle ABC on the AC side, points M and N are chosen such that ABM = MBN = NBC
In the triangle ABC on the side AC, points M and N are chosen such that AM. I tried to get a triangle with sides AK, NC and AM, but I couldn't, so I don't know how to prove this.
Hint: One can prove that $NC>KM$ and $AK+KM>AM$; thus the requirement will be satisfied. We have $BN=BC$, and $BM>BN$ ($BM$ lies opposite the bigger angle). Comparing the two triangles $BKM$ and $BNC$, which share the angle $\alpha$, yields $NC>KM$. Since $AK+KM>AM$, it follows that $AK+NC>AM$. P.S. I would kindly ask someone to edit/improve this solution.
|geometry|trigonometry|triangles|angle|triangle-inequality|
0
How can I motivate and deal with speculation?
I am a student of mathematics and have been increasingly feeling doubtful about whether or not I really understand the theory. What I mean by this is largely due to the style of textbooks in presenting information. Authors simply write definitions and make vague connections, but I never get to know why or how theorems and their definitions were developed. How could I have discovered them for myself, roughly speaking?
There is little left to add to what @Gauss has already said, especially concerning the individual nature of the question. Something that helps me a lot is to elaborate on a subject to the extent that you can teach it. I never forget the following dialogue with my mentor when I saw the announcement of his upcoming lecture. Me: "I didn't know that you were an expert on that subject." He: "I am not. I want to learn it. That's why I hold the lecture." The trick is that you have to be prepared for any thinkable question if you want to teach something. This forces you to a level of understanding that is beyond the usual one that you get by just reading a book. Another possible motivation - at least for me - is the study of the history of mathematics. Jean Dieudonné has written some books about it. I own one that covers the development from the 17th to the 19th century. I would recommend it if it were available in English, but I couldn't find a translation. However, I saw that there are books from
|soft-question|self-learning|learning|
0
Kernel, variance and estimation
In the paper by Terrell (1990), in Theorem 1 on page 471, I would like to derive the formulas for $g(x)$ and $h(x)$, and perhaps also see why $\beta(k+2,k+2)$ minimizes the given integral. Why is there no $\sigma^2$ in either $g$ or $h(x)$? $g(x)$ is given as $\frac{15}{16}(1-x^2)^2$, therefore we cannot just assume that it has that specific variance!
Presumably, if you have access to the full article, you might want to look at the Appendix as the paper claims to provide the requisite proof therein. As I do not have access, I don't know. Regarding your question about how neither $g$ nor $h$ contain $\sigma$ , I should point out that when they say "member of the scale family," they are saying that there is an entire family of distributions parametrized by a scaling factor, say $a > 0$ ; e.g., $$g(x \mid a) = \frac{15}{16a} \left(1-\frac{x^2}{a^2}\right)^2, \quad |x| \le a,$$ whose relationship with the variance is given by $\sigma^2 = a^2/7$ , which allows us to specify an arbitrary width for $g$ with the desired properties. In other words, the author is giving you a "canonical" or "standard" form for these.
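As a numerical sanity check (my own script, assuming the scaled form $g(x \mid a)$ given above), the family integrates to $1$ and has variance $a^2/7$ for every scale $a$:

```python
import numpy as np

def g(x, a=1.0):
    # scale family g(x | a) = 15/(16a) * (1 - x^2/a^2)^2 on |x| <= a, else 0
    return np.where(np.abs(x) <= a, 15.0 / (16.0 * a) * (1.0 - (x / a) ** 2) ** 2, 0.0)

def moments(a):
    # total mass and variance (the mean is 0 by symmetry), via fine trapezoid rule
    x = np.linspace(-a, a, 200001)
    mass = np.trapz(g(x, a), x)
    var = np.trapz(x ** 2 * g(x, a), x)
    return mass, var

for a in (1.0, 2.0, 3.5):
    mass, var = moments(a)
    assert abs(mass - 1.0) < 1e-8       # g(. | a) is a density
    assert abs(var - a * a / 7.0) < 1e-8  # sigma^2 = a^2 / 7, as claimed
```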
|reference-request|variance|estimation|
0
What is a maximally isotropic subgroup of product of elliptic curves
From page 5 of https://mast.queensu.ca/~kani/papers/numgenl.pdf : Then $\psi$ is automatically an anti-isometry with respect to the $e_N$ -pairings on $E$ and on $E'$ : $$e_N(\psi(x), \psi(y)) = e_N(x, y)^{-1}, \forall x, y \in J_E[N],$$ and this condition is equivalent to the assertion that the "graph subgroup scheme" $H_\psi = \text{Graph}(\psi) \leq (J_E \times J_{E'})[N]$ is a (maximal) isotropic subgroup of $(J_E \times J_{E'})[N]$ (with respect to the $e_N$ -pairing on $A = J_E \times J_{E'}$ ). What does a maximal isotropic subgroup with respect to a Weil pairing mean? If possible I would like a reference to a definition. The question "Finding a maximal isotropic subspace" talks about isotropic subspaces, which might work if we consider the torsion of the elliptic curve as a vector space.
If $A$ is a principally polarised abelian variety (so that we may equip $A[n]$ with the symplectic Weil pairing $e_{A,n}$ taking values in $\mu_n$ ), then a subgroup $G \subset A[n]$ is said to be isotropic (wrt $e_{A,n}$ ) if $e_{A,n}$ is trivial on $G$ (i.e., everything in $G$ pairs to $1$ ). It is maximal isotropic if it is not properly contained in another isotropic subgroup $G \subset G' \subset A[n]$ . As you can see, the vector space (or more accurately $\mathbb{Z}/n\mathbb{Z}$-module) is the $n$ -torsion subgroup. I can't think of a reference which could be considered "canonical" (maybe Mumford's or Milne's books on abelian varieties); this is a standard definition, and the adjective "isotropic" applies in any sufficiently general situation. For example, a free $R$ -module $M$ of rank $n$ with a symplectic pairing $e \in \wedge^n M$ (this is the situation in your reference, though I guess they don't require their pairing to be alternating). To explain how this works in the case in
|algebraic-geometry|reference-request|elliptic-curves|abelian-varieties|
1
Lee Mosher book definition of a tree.
I was just reading the definition of a tree in Lee Mosher's book, and he said that if a graph is simply connected then it is contractible. I am wondering how this is true; can someone explain this to me please? Is there a proof of this fact? Edit: It seems my question was not clear. Here is what exactly I am asking: why is any simply connected graph a tree? I know a priori that any tree is contractible.
Take a maximal spanning tree $T$ of the graph $\mathscr{G}$ . You hopefully know that, by one possible definition, $T$ is contractible, and also that $\pi_1(\mathscr{G})$ is freely generated by all edges in $E(\mathscr{G})\setminus E(T)$ . So if the graph is simply connected there can be no such edges, and $\mathscr{G}=T$ is contractible. For details see Hatcher's discussion of fundamental groups of graphs.
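The edge-counting behind this can be illustrated computationally: for a connected graph, the number of edges outside a spanning tree, i.e. the rank of $\pi_1(\mathscr{G})$, equals $|E|-|V|+1$, and the graph is a tree exactly when this rank is $0$. A small union-find sketch (helper names are my own):

```python
def fundamental_group_rank(vertices, edges):
    # rank of pi_1 of a connected graph = number of edges not in a spanning tree
    parent = {v: v for v in vertices}

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree_edges = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: it belongs to the tree
            parent[ru] = rv
            tree_edges += 1
    return len(edges) - tree_edges

# a triangle has one independent loop; a tree has none (so it is simply connected)
assert fundamental_group_rank([1, 2, 3], [(1, 2), (2, 3), (3, 1)]) == 1
assert fundamental_group_rank([1, 2, 3, 4], [(1, 2), (2, 3), (2, 4)]) == 0
```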
|graph-theory|algebraic-topology|free-groups|
1
Riemann zeta function in the critical strip
Hello wonderful people! It seems that the Riemann zeta function in the critical strip may be given by: \begin{gather} \zeta(s)=\frac{1}{s-1}+1-s \int\limits_{1}^{+\infty} \frac{x-[x]}{x^{s+1}} \mathrm{d}x \end{gather} Does someone have a nice and pedagogical proof for an audience of engineers? Thanks in advance.
You can take Abel's partial summation formula: $$ \sum_{1 \leq n \leq t} a_n f(n) = C(t) f(t) - \int_1^t C(x) f'(x) dx $$ where $C(t) = \sum_{1 \leq n \leq t} a_n$ and $f(t)$ is a continuously differentiable complex-valued function defined for $t \in \mathbb{R}^+$; this is easily derived. Then you choose $a_n=1$ and $f(t) = t^{-s}$: $$ \sum_{1 \leq n \leq t} \frac{1}{n^s} = \frac{\lfloor t \rfloor}{t^s} + s \int_1^t \frac{\lfloor x \rfloor}{x^{s+1}} dx $$ For $\Re (s) > 1$ we can take the limit $t \rightarrow \infty$, resulting in: $$ \zeta (s) = \sum_{n= 1}^\infty \frac{1}{n^s} = s \int_1^\infty \frac{\lfloor x \rfloor}{x^{s+1}} dx $$ Finally, write $\lfloor x \rfloor = x - \{x\}$ where $\{x\} = x - \lfloor x \rfloor$ is the fractional part. Since $s \int_1^\infty x^{-s}\, dx = \frac{s}{s-1} = 1 + \frac{1}{s-1}$, this gives $$ \zeta(s) = \frac{1}{s-1} + 1 - s \int_1^\infty \frac{x-\lfloor x \rfloor}{x^{s+1}} dx, $$ and since $0 \leq \{x\} < 1$, the last integral converges for $\Re(s) > 0$, so the right-hand side extends $\zeta$ to the critical strip.
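As a numerical sanity check of the final formula (my own script), one can evaluate the sawtooth integral in closed form on each unit interval and approximate the tail using the mean value $\{x\}\approx\tfrac12$:

```python
import numpy as np

def zeta_strip(s, N=100_000):
    # zeta(s) = 1/(s-1) + 1 - s * int_1^inf {x} x^{-s-1} dx, valid for Re(s) > 0
    n = np.arange(1, N, dtype=float)
    # int_n^{n+1} (x - n) x^{-s-1} dx, evaluated exactly per interval
    part = ((n + 1) ** (1 - s) - n ** (1 - s)) / (1 - s) \
           - n * (n ** (-s) - (n + 1) ** (-s)) / s
    integral = part.sum() + N ** (-s) / (2 * s)   # tail: {x} averages to 1/2
    return 1 / (s - 1) + 1 - s * integral

# inside the critical strip: zeta(1/2) = -1.4603545088...
assert abs(zeta_strip(0.5) + 1.4603545088) < 1e-4
# consistency check where the Dirichlet series also converges: zeta(2) = pi^2/6
assert abs(zeta_strip(2.0) - np.pi ** 2 / 6) < 1e-6
```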
|riemann-zeta|zeta-functions|
0
Parametric recursive sequence depending on initial value
Define the sequence $(x_n)_{n \geq 0}$ with $x_0 >0$ and the recursion $x_{n+1} = \sqrt{2+x_n}$ for all $n \geq 0$. I wish to determine the limit of the sequence $(x_n)_{n \geq 0}$ in terms of the parameter $x_0 \in (0, \infty)$. If $x_0 < 2$, then $x_n > 0$ for all $n \geq 0$, $x_n < 2$, and $(x_n)_n$ is strictly increasing. By Weierstrass' theorem, there exists $\ell = \lim_{n \to \infty} x_n \in \mathbb{R}$. Taking the limit, we obtain $\ell^2-\ell-2=0$ with solutions $\ell \in \{-1,2 \}$. But since $x_n>0$ for all $n \geq 0$, $\ell \geq 0$, so $\lim_n x_n=2$. If $x_0=2$, then $x_n=2$ for all $n \geq 0$ (constant sequence), so $\lim_n x_n=2$. If $x_0>2$, then $x_n > 0$ for all $n \geq 0$ and $(x_n)_n$ is strictly increasing. However, because $x_0>2$ we would have $x_n > 2$ for all $n \geq 0$. But we know already that the only possible limit points of the sequence are $\lim_n x_n \in \{ -1,2, \pm \infty \} \subset \overline{\mathbb{R}}$, so does that mean the sequence diverges?
The sequence converges to $2$ for all $x_0\in \mathbb{R}^+$. Here is an approach with a closed-form solution. For $x_0>2$: write $x_n = 2\cdot \cosh(\theta_n)$; then $$\begin{align} 2\cdot \cosh(\theta_{n+1}) &= \sqrt{2+2\cdot \cosh(\theta_n)} =\sqrt{4\cosh^2\left(\frac{\theta_n}{2} \right)} = 2\cosh\left(\frac{\theta_n}{2} \right)\\ \iff \cosh(\theta_{n+1}) &= \cosh\left(\frac{\theta_n}{2} \right)\\ \iff \theta_{n+1} &= \frac{\theta_n}{2} \\ \implies \theta_n &= \frac{\theta_{n-1}}{2}=\dots=\frac{\theta_0}{2^n} \end{align}$$ with $\theta_0 = \operatorname{arccosh}(x_0/2)$. We deduce then $$\color{red}{x_n = 2\cdot \cosh\left(\frac{\operatorname{arccosh}(x_0/2)}{2^n} \right)}$$ Because $\frac{\operatorname{arccosh}(x_0/2)}{2^n}\xrightarrow{n\to+\infty}0$, we then have $x_n\xrightarrow{n\to+\infty}2$. For $0 < x_0 < 2$: write $x_n = 2\cdot \cos(\theta_n)$; by the same method we obtain the closed-form solution $$\color{red}{x_n = 2\cdot \cos\left(\frac{\arccos(x_0/2)}{2^n} \right)}$$ and the sequence again converges to $2$.
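One can check numerically that the closed forms $x_n = 2\cosh(\operatorname{arccosh}(x_0/2)/2^n)$ for $x_0>2$ and $x_n = 2\cos(\arccos(x_0/2)/2^n)$ for $0<x_0<2$ (note the $x_0/2$ inside the inverse functions) agree with the iteration:

```python
import math

def iterate(x0, n):
    # apply x -> sqrt(2 + x) a total of n times
    x = x0
    for _ in range(n):
        x = math.sqrt(2 + x)
    return x

def closed_form(x0, n):
    # closed form: hyperbolic branch for x0 > 2, circular branch for 0 < x0 < 2
    if x0 > 2:
        t = math.acosh(x0 / 2)
        return 2 * math.cosh(t / 2 ** n)
    t = math.acos(x0 / 2)
    return 2 * math.cos(t / 2 ** n)

for x0 in (0.5, 1.7, 3.0, 10.0):
    for n in (1, 5, 20):
        assert abs(iterate(x0, n) - closed_form(x0, n)) < 1e-9
    assert abs(closed_form(x0, 60) - 2.0) < 1e-12   # the limit is 2
```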
|sequences-and-series|
0
Is there an analytical solution to this ordinary differential equation?
I am not very knowledgeable in the field so I apologize in advance if this question might look naive. But is there an analytical solution to a differential equation that looks like this : $$\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2}-a^{2}\cot \theta \csc ^2\theta +b\sin \theta =0$$ $a,b$ are real constants, and $b>0$ . I have tried to solve this with Mathematica, and I could only solve it numerically.
It can be reduced to an implicit solution involving special functions. Multiply by $\theta'$ and integrate to get \begin{align} \theta'\theta''+b\sin(\theta)\theta'-a^2\cot(\theta)\csc(\theta)^2\theta'=0,\\\\ \frac12 (\theta')^2-b\cos(\theta)+\frac12 a^2\csc(\theta)^2=C,\\\\ \int\frac{\mathrm d\theta}{\sqrt{C+2b\cos(\theta)-a^2\csc(\theta)^2}}=t+C'. \end{align} Take $\cos(\theta)=x$ to get \begin{align} \int \frac{-\mathrm dx}{\sqrt{(1-x^2)(C+2bx)-a^2}}, \end{align} which Wolfram Alpha can express in terms of the elliptic integral of the first kind (the radicand is a cubic in $x$), though the notation used is a bit cluttered. Perhaps Mathematica can give you a cleaner integral solution.
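The first integral can be verified symbolically (a SymPy sketch; note the $+\tfrac12 a^2\csc^2\theta$ sign in the conserved quantity):

```python
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)
th = sp.Function('theta')(t)

# the ODE solved for theta'': theta'' = a^2 cot(theta) csc(theta)^2 - b sin(theta)
rhs = a**2 * sp.cot(th) * sp.csc(th)**2 - b * sp.sin(th)

# candidate conserved quantity E = (1/2) theta'^2 - b cos(theta) + (1/2) a^2 csc(theta)^2
E = (sp.Rational(1, 2) * sp.diff(th, t)**2
     - b * sp.cos(th)
     + sp.Rational(1, 2) * a**2 * sp.csc(th)**2)

# dE/dt vanishes identically along solutions of the ODE
dE = sp.diff(E, t).subs(sp.Derivative(th, t, 2), rhs)
assert sp.simplify(dE) == 0
```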
|ordinary-differential-equations|
1
Statements which feel like they shouldn't be first-order expressible, but are
I recently had the opportunity to study some (very basic) model theory, and that made some theorems in ring theory become immediately more interesting. For example, one characterization of the Jacobson radical of a ring $R$ makes it possible to express the formula " $x \in J(R)$ " as a first-order formula in the language of rings: $$\forall r \exists s (s(1-rx) = 1)$$ as it is known that $x \in J(R) \iff 1 - rx$ is invertible for all $r \in R$ . Similarly, the statement "every $R$ -module is flat", which feels very far from first-order expressible, can be written as the sentence: $$\forall a \exists x (a = axa)$$ as both of these statements are satisfied precisely by the von Neumann regular rings. I am also aware of this question, which talks about a way of expressing the sentence $\dim(R) \leqslant k$ for a commutative ring $R$ . In that spirit, I was hoping for more examples (not necessarily from algebra) of seemingly "complex" statements which are equivalent to some first-order formula.
Ordinal definability is a great example of this (and a very useful one). A set is said to be definable if it is first-order definable in the class structure $\langle V; \in\rangle$ (with no parameters allowed). As is well known, the notion of definability is not first-order definable. (One needs to be a bit careful in stating this: there are countable models of ZFC in which every set is definable, so for those particular models, definability happens to be definable. But this doesn't happen in "most" models of ZFC, and definability cannot be defined in general within ZFC.) In contrast, consider the following: a set is said to be ordinal definable if it is first-order definable in the class structure $\langle V; \in\rangle,$ allowing ordinal parameters. At first glance, the notion of ordinal definability would appear also not to be first-order definable (the definition above can't be directly expressed in first-order logic). But, in fact, the reflection principle can be used to show that ordinal definability is first-order definable.
|ring-theory|first-order-logic|model-theory|big-list|
0
Show that any object isomorphic to a product object $A\times B$ is a product of $A$ and $B$.
This is the second part of an exercise in a book about category theory. The first one asked to show that any pair of products are equal up to isomorphism. For my attempt at proving the proposition, I used the following 2 definitions: An arrow $f : A \rightarrow B$ is an isomorphism if there is an arrow $f^{-1} : B \rightarrow A$ , called the inverse of $f$ , such that $f^{-1} \circ f = id_{A}$ and $f \circ f^{-1} = id_{B}$ . The objects A and B are said to be isomorphic if there is an isomorphism between them. A product of two objects A and B is an object $A \times B$ , together with two projection arrows $\pi_{1} : A \times B \rightarrow A$ and $\pi_{2} : A \times B \rightarrow B$ such that for any object C and pair of arrows $f : C \rightarrow A$ and $g : C \rightarrow B$ there is exactly one mediating arrow $\langle f , g \rangle : C \rightarrow A \times B$ , such that $\pi_{1} \circ \langle f , g \rangle = f$ and $\pi_{2} \circ \langle f , g \rangle = g$ . For the proof, I started b
Your title says you are trying to prove that an object isomorphic to $A\times B$ is also a product of $A$ and $B$ , with suitably defined projection maps. But then you seem to be trying to prove that the isomorphism $h$ is somehow "unique"... In general there need not be a unique isomorphism. For example, if $A=B$ , you can take $X=B\times A$ and take $h$ to be the identity, or you can take $h'$ to be the morphism $(a,b)\mapsto (b,a)$ , and both will allow you to define a product structure on $B\times A$ (different product structures, but both will make it a product of $A$ and $B$ ). Below is a proof that if $X$ is isomorphic to $A\times B$ , then via suitably defined maps $\pi_A\colon X\to A$ and $\pi_B\colon X\to B$ , we have that $(X,\pi_A,\pi_B)$ is a product of $A$ and $B$ , and that the original isomorphism is the only one that respects the resulting projection maps. Let $X$ be isomorphic to $A\times B$ via $h$ , $h\colon X\to A\times B$ and $h^{-1}\colon A\times B\to X$ . We let
|category-theory|
1
Lebesgue Integral help
I'm studying for my Real Analysis Masters Exam, and the only thing that I don't understand at all is Lebesgue integrals; I can't find any good examples. An old masters exam problem I'm trying says: Prove $\lim_{n \to \infty} \int_{\mathbb{R}} \frac{\cos(nx)}{1+nx^2} \,d\lambda (x) = 0$ where this is the Lebesgue integral. My thought is to create a sequence of functions $f_n = \frac{\cos(nx)}{1+nx^2} $ and then use the Dominated Convergence Theorem to bring the limit inside, though I'm not sure how to show I meet the criteria or if I'm even on the right track. Any tips or help would be greatly appreciated!
The idea is indeed to use the dominated convergence theorem. For any $x\neq 0$ , we have that $$|f_n(x)|=\left|\dfrac{\cos(nx)}{1+nx^2}\right|\leq \dfrac{1}{1+nx^2}\to 0,$$ which tells us that $f_n$ converges pointwise to $0$ almost everywhere (at $x=0$ we have $f_n(0)=1$ for every $n$ , but a single point has Lebesgue measure zero, so this does not matter). Continuing with the bound, for $n\geq 1$ , $$|f_n(x)|=\left|\dfrac{\cos(nx)}{1+nx^2}\right|\leq \dfrac{1}{1+nx^2}\leq \dfrac{1}{1+x^2}.$$ But $g(x)=\dfrac{1}{1+x^2}$ satisfies $$\displaystyle\int_\mathbb{R}|g(x)|dx=\displaystyle\int_\mathbb{R}g(x)dx=\pi<\infty,$$ so $g$ is integrable and the dominated convergence theorem assures that $\displaystyle\int_\mathbb{R} f_n(x) dx\to 0.$
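Purely as a numerical illustration (not needed for the proof), one can watch the integrals go to $0$; for comparison, the classical formula $\int_{\mathbb R}\frac{\cos(ax)}{x^2+b^2}dx=\frac{\pi}{b}e^{-ab}$ gives the exact value $\frac{\pi}{\sqrt n}e^{-\sqrt n}$ here:

```python
import numpy as np

def integral(n, half_width=1000.0, points=2_000_001):
    # truncate the real line; the tail beyond +/-1000 is bounded by 2/(1000 n)
    x = np.linspace(-half_width, half_width, points)
    return np.trapz(np.cos(n * x) / (1 + n * x ** 2), x)

vals = [abs(integral(n)) for n in (1, 10, 100)]
assert vals[0] > vals[1] > vals[2]   # the integrals shrink with n
assert vals[2] < 1e-3                # essentially 0 already at n = 100

# compare against the exact value (pi / sqrt(n)) * exp(-sqrt(n)) at n = 10
assert abs(integral(10) - np.pi / np.sqrt(10) * np.exp(-np.sqrt(10))) < 1e-3
```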
|real-analysis|lebesgue-integral|lebesgue-measure|
1
Lebesgue Integral help
I'm studying for my Real Analysis Masters Exam, and the only thing that I don't understand at all is Lebesgue integrals; I can't find any good examples. An old masters exam problem I'm trying says: Prove $\lim_{n \to \infty} \int_{\mathbb{R}} \frac{\cos(nx)}{1+nx^2} \,d\lambda (x) = 0$ where this is the Lebesgue integral. My thought is to create a sequence of functions $f_n = \frac{\cos(nx)}{1+nx^2} $ and then use the Dominated Convergence Theorem to bring the limit inside, though I'm not sure how to show I meet the criteria or if I'm even on the right track. Any tips or help would be greatly appreciated!
Notice that $$\left | \frac{\cos(nx)}{1+nx^{2}} \right |\leq \left | \frac{1}{1+nx^{2}} \right |\leq \left | \frac{1}{1+x^{2}} \right |, \ \ \forall n\in \mathbb{N}, \forall x\in \mathbb{R}.$$ The right-hand side is a measurable function whose integral over $\mathbb{R}$ is finite. Moreover, $\frac{\cos(nx)}{1+nx^2}\to 0$ as $n\to\infty$ for every $x\neq 0$ , i.e. almost everywhere. Then you have the conditions to apply the Dominated Convergence Theorem and conclude what you want.
|real-analysis|lebesgue-integral|lebesgue-measure|
0
Take the Laplace Transform
Take the Laplace transform of $$ \int_{0}^{t}x^2(x-t)^4 \cos(x)dx .$$ I'm not quite sure where to start...
I could not find any way other than following the suggestion above, so I just did the labor calculation. From $$\mathcal{L}(f*g) =\mathcal{L}(f)\mathcal{L}(g),$$ we can set $f(x)=x^2\cos(x)$ and $g(x)=x^4$ (note that $(x-t)^4=(t-x)^4$ , so the given integral is exactly the convolution $(x^2\cos x)*x^4$ ). Since $\mathcal{L}[(-t)^nf(t)]=F^{(n)}(s)$ , we find that $$\mathcal{L}[t^2\cos(t)]=\frac{d^2}{ds^2}\frac{s}{s^2 + 1}=\frac{2s(s^2-3)}{(s^2 + 1)^3}$$ and, together with $\mathcal{L}[t^4]=\frac{4!}{s^5}$ , the final answer is $$ \mathcal{L}[x^2\cos(x)*x^4]=\frac{2s(s^2-3)}{(s^2 + 1)^3}\cdot\frac{4!}{s^5}=\frac{48(s^2-3)}{s^4(s^2 + 1)^3}. $$
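The computation can be double-checked with a CAS, e.g. SymPy:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# transforms of the two convolution factors
F = sp.laplace_transform(t**2 * sp.cos(t), t, s, noconds=True)
G = sp.laplace_transform(t**4, t, s, noconds=True)

# the product should match the hand computation 48(s^2-3) / (s^4 (s^2+1)^3)
expected = 48 * (s**2 - 3) / (s**4 * (s**2 + 1)**3)
assert sp.simplify(F * G - expected) == 0
```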
|ordinary-differential-equations|laplace-transform|
0
Determine Coordinates of $30^\circ$ Counterclockwise Rotation
Determine the coordinates of $A'$ , the image of $A$ after a $30^\circ$ counterclockwise rotation about $P\left(3,2\right)$ . I have figured out that $\overline{PA}=5$ and that a $36.87^\circ$ counterclockwise rotation maps $A$ onto $(3,7)$ , but can't seem to find a way to progress.
If $\alpha$ is the angle by which you have to rotate $(8,2)$ about $P$ in order to get $A$ , then $A=(3+5\cos\alpha,\;2+5\sin\alpha)$ and $\cos\alpha = \frac{3}{5}$ and $\sin\alpha=\frac{4}{5}.$ You get $A'$ by simply rotating $30^{\circ}$ more: $$ A'=(3+5\cos(\alpha+30^{\circ}),\;2+5\sin(\alpha+30^{\circ})\;) $$ Now use $$ \cos(\alpha+\beta) = \cos\alpha\,\cos\beta-\sin\alpha\,\sin\beta \\ \sin(\alpha+\beta) = \sin\alpha\,\cos\beta+\cos\alpha\,\sin\beta $$ and $$ \cos 30^{\circ} = \frac{\sqrt{3}}{2} \\ \sin 30^{\circ} = \frac{1}{2} $$ to get $$ A'=\left( 3+5\cdot\frac{3}{5}\cdot\frac{\sqrt{3}}{2}-5\cdot\frac{4}{5}\cdot\frac{1}{2} \;,\; 2+5\cdot\frac{4}{5}\cdot\frac{\sqrt{3}}{2}+5\cdot\frac{3}{5}\cdot\frac{1}{2} \right) \\ = \left( 1+\frac{3\sqrt{3}}{2}\;,\; \frac{7}{2}+2\sqrt{3} \right) $$
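As a cross-check (not part of the answer above), the same computation can be done with the rotation matrix; here $A=(6,6)$ , which follows from $\cos\alpha=3/5$ and $\sin\alpha=4/5$ :

```python
import math

def rotate(point, center, degrees):
    # rotate `point` counterclockwise about `center` by the given angle
    dx, dy = point[0] - center[0], point[1] - center[1]
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

x, y = rotate((6, 6), (3, 2), 30)
assert abs(x - (1 + 3 * math.sqrt(3) / 2)) < 1e-12   # 1 + 3*sqrt(3)/2
assert abs(y - (3.5 + 2 * math.sqrt(3))) < 1e-12     # 7/2 + 2*sqrt(3)
```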
|geometry|
1
How to prove $e^{-x}-\frac{1}{n}< \left (1-\frac{x}{n} \right )^{n}$?
I was looking at the solution given by Simply Beautiful Art to the following problem: https://math.stackexchange.com/a/2029616/952348 In a part of the solution the author claims that $e^{-x}-\frac{1}{n}< \left (1-\frac{x}{n} \right )^{n}$ . How can I prove the left side of the inequality? I have tried using the Taylor expansion of the exponential function but I did not get anything.
Let $f(x)=(1-\tfrac xn)^n-e^{-x}+\tfrac1n.$ Then $f'(x)=-(1-\tfrac xn)^{n-1}+e^{-x}$ , and if $f(x^*)$ is a local minimum in $(0,n)$ , then $e^{-x^*}=(1-\tfrac{x^*}{n})^{n-1}$ , so $f(x^*)=\tfrac1n-\tfrac{x^*}n(1-\tfrac{x^*}{n})^{n-1}$ . Let $g(t)=\tfrac1n-\tfrac{t}n(1-\tfrac{t}{n})^{n-1}$ . Then the minimum of $g(t)$ on $(0,n)$ is $g(1)=\frac1n-\frac1n(1-\tfrac1n)^{n-1}>0.$ Hence $f(x^*)\geq g(1)>0$ ; since also $f(0)=\tfrac1n>0$ and $f(n)=\tfrac1n-e^{-n}>0$ , we conclude $f(x)>0$ on $(0,n)$ .
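A quick numerical spot-check of the positivity of $f$ on $(0,n)$ (my own script):

```python
import math

def f(x, n):
    # f(x) = (1 - x/n)^n - e^(-x) + 1/n, claimed positive on (0, n)
    return (1 - x / n) ** n - math.exp(-x) + 1 / n

# sample the open interval (0, n) for several n and confirm f > 0
for n in (2, 5, 50, 1000):
    for k in range(1, 200):
        x = n * k / 200
        assert f(x, n) > 0
```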
|real-analysis|inequality|exponential-function|
0
Prove that if $p \equiv 7 \pmod{8}$, the solutions to the congruence $x^2-34x+1 \equiv 0 \pmod{p}$ are both quadratic residues modulo p.
Consider the quadratic congruence \begin{equation} x^2-34x+1 \equiv 0 \pmod{p}, \end{equation} where $p \equiv 7 \pmod{8}$ . There are two solutions, since we can write it as \begin{equation} (x-17)^2 \equiv 288 \pmod{p} \end{equation} and check that $288$ is a quadratic residue in this case. Since the product of solutions is congruent to $1 \pmod{p}$ , then both of them are either quadratic residues or quadratic non-residues. I have checked multiple primes, and it appears that both solutions are always quadratic residues. I'm not sure how to prove it; I feel like I'm missing something obvious. Any hint would be much appreciated!
When $p\equiv7\pmod8$ , then $y^2=2$ has solutions $\pmod p$ , which I will write as $y=\pm\sqrt2\pmod p$ . The solutions of the congruence $x^2-34x+1\equiv0\pmod p$ are seen to be $x=17\pm12\sqrt2\pmod p,$ using the quadratic formula. Can you show that $x=(3\pm2\sqrt2)^2$ ?
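One can also confirm the pattern computationally for the first few primes $p\equiv 7\pmod 8$ , using Euler's criterion for quadratic residues:

```python
def is_qr(a, p):
    # Euler's criterion: a nonzero a is a quadratic residue mod p iff a^((p-1)/2) = 1
    return pow(a, (p - 1) // 2, p) == 1

def both_roots_are_qrs(p):
    # find the two roots of x^2 - 34x + 1 mod p and test each with Euler's criterion
    roots = [x for x in range(p) if (x * x - 34 * x + 1) % p == 0]
    return len(roots) == 2 and all(is_qr(x, p) for x in roots)

for p in (7, 23, 31, 47, 71, 79, 103):   # primes congruent to 7 mod 8
    assert p % 8 == 7 and both_roots_are_qrs(p)
```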
|elementary-number-theory|modular-arithmetic|quadratic-residues|
1
How do I prove that -5 is the limit of the given function using a delta epsilon proof?
I'm trying to find a $\delta$ in terms of $\epsilon$ that will allow me to prove that $$\lim_{x \to 1} \frac{3x^{2} + x + 1}{2x^{2} + x - 4} =-5$$ in a delta-epsilon proof. Since I would like to show that $$\left | \frac{3x^{2} + x + 1}{2x^{2} + x - 4} + 5 \right | < \epsilon,$$ I have done the following scratch work: $$\left | \frac{3x^{2}+x + 1 + 5(2x^{2} + x -4)}{2x^{2} + x -4} \right |=\left | \frac{13x^{2} + 6x -19}{2x^{2} + x -4} \right| = \left | \frac{(x-1)(13x+19)}{2x^{2} + x -4} \right |.$$ If we assume that $\delta \leq 1$ , then $\left | x - 1 \right | < \delta$ implies that $|(x-1)(13x + 19)| < 45\delta$ . However, I'm not sure how to bound the denominator.
The denominator has a root at about $x = 1.186$ . You would want to stay away from that point, so assuming $\delta \leq 1$ is actually not strict enough. I suggest you use something like $\delta \leq 0.1$ or similar. For $x\in (0.9, 1.1)$ , we have $|2x^2 + x - 4| > 0.48$ , which allows you to continue with your proof.
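A quick numerical check of the suggested bounds (the constant 70 below is one concrete choice that works on $(0.9,1.1)$ , since $|13x+19|<33.3$ and the denominator exceeds $0.48$ there):

```python
def f(x):
    # the function whose limit at x = 1 is -5
    return (3 * x**2 + x + 1) / (2 * x**2 + x - 4)

# sample the interval (0.9, 1.1) and confirm both estimates
for k in range(1, 2000):
    x = 0.9 + 0.2 * k / 2000
    assert abs(2 * x**2 + x - 4) > 0.48              # denominator bound
    assert abs(f(x) + 5) <= 70 * abs(x - 1) + 1e-12  # epsilon-delta estimate
```

So one may take $\delta = \min(0.1, \epsilon/70)$ in the final write-up.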
|real-analysis|calculus|
1
Is cosine a function of sine?
Suppose we are working over the real numbers $\mathbb{R}$ . Is the cosine function $\cos(x)$ a function of the sine function $\sin(x)$ ? I define a function $f$ to be a function of a function $g$ if there is a function $h$ such that $h$ composed with $g$ , in that order, equals $f$ .
The "co" in cosine stands for "complementary". In a right triangle, if you take the sine of one of the acute angles, the cosine of that angle is the sine of the complementary acute angle, i.e. "complementary sine". So you could subtract an acute angle in a right triangle from 90 $^\circ$ and then take the sine of that to get the cosine of the original angle, which I guess would be a 2-step function, but it's far simpler to just go with adjacent/hypotenuse or x/r in circular form.
|functions|trigonometry|
0
How to calculate new position of a rectangle after translation and rotation?
I have a rectangle, let's say 100 long by 75 high, with its origin being the bottom-left corner. I move the rectangle up and across by 10 and rotate it by 3 degrees about the centre of the part. How do I calculate the new position of the origin in X and Y? I need to make only 3 measurements: 2 in the X direction, and 1 in Y. I will use a laser to get 3 master measurements, then move the part and re-measure from the same positions. I can compare the new measurements to the original but do not know how to calculate.
If you know linear algebra, you could put the transformations into matrices and have the computer calculate what happens to the coordinates.
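For instance, with the numbers in the question (100 × 75 rectangle, translate by 10 in each direction, then rotate 3° about the part centre; the order of operations is my assumption):

```python
import math

def rotate_about(p, c, degrees):
    # rotate point p counterclockwise about centre c
    dx, dy = p[0] - c[0], p[1] - c[1]
    co = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return (c[0] + co * dx - s * dy, c[1] + s * dx + co * dy)

# rectangle 100 x 75, origin at the bottom-left corner (0, 0)
origin = (0 + 10, 0 + 10)                       # step 1: translate by (10, 10)
centre = (origin[0] + 50, origin[1] + 37.5)     # centre of the translated part
new_origin = rotate_about(origin, centre, 3)    # step 2: rotate 3 deg about centre

# sanity check: rotation preserves the distance from the corner to the centre
d = math.hypot(new_origin[0] - centre[0], new_origin[1] - centre[1])
assert abs(d - 62.5) < 1e-9   # hypot(50, 37.5) = 62.5
```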
|geometry|analytic-geometry|rectangles|
0
Unclear step in the proof of Inverse Function Theorem
There is one step whose necessity I don't understand in the proof of the Inverse Function Theorem in Jerry Shurman's Calculus and Analysis in Euclidean Space (available here https://www.professores.uff.br/diomarcesarlobao/wp-content/uploads/sites/85/2017/09/Multi_calculus_BOOK.pdf ). After having established injectivity of $f$ in some closed ball $\overline B$ around $a$ and introducing an open ball $W$ around $f(a)$ s.t. $|f(a) - y| < |f(x) - y|$ for every $y\in W$ and every $x\in \partial \overline B$ , the author goes on to prove, using the Critical Point Theorem and the Chain Rule, that for every $y\in W$ there is one and only one $x\in int\overline B$ s.t. $f(x) = y$ . Why do we need to do this, or, more concretely, why does the proof of this fact need to be so sophisticated? Why can't we just take $V$ to be the pre-image of $W$ , i.e. $V = f^{-1}(W)$ (every point of $W$ has a pre-image in $\overline B$ , because $W$ is a subset of $f(\overline B)$ ; moreover, this pre-image should be un
Okay, I was just being stupid. If we ignore injectivity for a second, then it is straightforward to show that under a continuous mapping boundary points can become interior points and interior points can become boundary points. Thus, the point $a$ might have become a boundary point, which would then imply that $W$ is not really a subset of $f(\overline B)$ . Maybe if injectivity is added back this can be disproved (I feel like injectivity alone is still not enough, but a proof or a counterexample would be much appreciated). Nonetheless, it is much easier to show using the invertibility of the derivative matrix that $f(\overline B)$ , and indeed $f(int\overline B)$ , does contain $W$ .
|calculus|analysis|multivariable-calculus|inverse-function|theorem-provers|
0
Graph morphism is obtained by base extension from the diagonal morphism
I am working on exercise II.4.8 in Hartshorne, and am stuck on part (e). The problem asks to show that a property $\mathscr{P}$ of morphisms of schemes such that (1) a closed immersion has $\mathscr{P}$ , (2) a composition of two morphisms having $\mathscr{P}$ has $\mathscr{P}$ , and (3) $\mathscr{P}$ is stable under base extension, satisfies the following: if $f : X \to Y$ and $g : Y \to Z$ are two morphisms, and if $g \circ f$ has $\mathscr{P}$ and $g$ is separated, then $f$ has $\mathscr{P}$ . There is a hint in Hartshorne that says to note that the graph morphism $\Gamma_f : X \to X \times_Z Y$ is obtained by base extension from the diagonal morphism $\Delta : Y \to Y \times_Z Y$ . I really don't see how this is the case; could anyone help me out?
Since $g:Y\to Z$ is separated, $\Delta:Y\to Y\times_Z Y$ is a closed immersion, and by condition (1) $\Delta$ has $\mathscr{P}$ . Then, by the hint and condition (3), the base change $\Gamma_f=\mathrm{id}_X\times_Z f$ has $\mathscr{P}$ . Now $f=p_2\circ \Gamma_f$ where $p_2:X\times_Z Y\to Y$ , so by condition (2) it suffices to prove that $p_2$ has $\mathscr{P}$ . This follows from the fact that $p_2:X\times_Z Y\to Y$ is the base change of $g\circ f:X\to Z$ by $g$ , together with condition (3).
|algebraic-geometry|schemes|
0
Does the weak limit of a sequence in $L^2([0,1])$ vanish on the limit set of vanishing sets?
Suppose $h_n$ is a sequence of non-negative functions in $L^2([0,1])$ converging weakly to $h$ (i.e., for every $g\in L^2([0,1])$ it holds $\int g\cdot h_n \,d\lambda \to \int g\cdot h\, d\lambda$ ). Let $T_n\subseteq [0,1]$ be a measurable set for each $n$ such that $h_n$ vanishes almost everywhere on $T_n$ . Suppose $T$ is a set such that the Lebesgue measure of the symmetric difference $T\Delta T_n := T\setminus T_n \cup T_n\setminus T$ goes to zero for $n\to \infty$ . Question: Does $h$ vanish almost everywhere on $T$ ? Attempts: I tried to bound $\int_T h \, d\lambda < \varepsilon$ for arbitrary $\varepsilon >0$ . Let $\varepsilon > 0$ and let $n_0$ be such that for all $n\geq n_0$ we have $|\int_T h_n \, d\lambda - \int_T h \, d \lambda | < \varepsilon/2$ (which exists by weak convergence). Then, we have $$ \int_T h \, d\lambda \leq \int_T h_n \, d\lambda + \varepsilon/2 $$ for all $n\geq n_0$ . Now, I would want to use that $T$ is very close to $T_n$ and that $h_n$ vanishes on $T_n$ . However, I cannot find a bound.
By passing to a subsequence, we may assume $\lambda(T \Delta T_n) < 2^{-n}$ for all $n$ . Since the closures of a convex set under the weak topology and under the norm topology coincide, per the Hahn-Banach theorem, we may choose $k_n \in \mathrm{co}(h_n, h_{n+1}, \cdots)$ such that $\|k_n - h\| < 1/n$ for all $n$ , where $\mathrm{co}$ indicates the convex hull. Then, as $k_n \to h$ in norm, by passing to a subsequence if necessary, we may assume $k_n \to h$ a.e. Note that, as $k_n \in \mathrm{co}(h_n, h_{n+1}, \cdots)$ , $k_n$ vanishes a.e. on $K_n = \cap_{m \geq n} T_m$ . Since $\lambda(T \Delta T_m) < 2^{-m}$ , we have $\lambda(T \Delta K_n) \leq \sum_{m \geq n} 2^{-m} = 2^{-n+1}$ . Clearly, as $h$ is the a.e. limit of $k_n$ , $h$ vanishes a.e. on $\cup_{n \geq 1} \cap_{m \geq n} K_m$ . But then, $$T \setminus (\cup_{n \geq 1} \cap_{m \geq n} K_m) = \cap_{n \geq 1} (T \setminus [\cap_{m \geq n} K_m]) = \cap_{n \geq 1} (\cup_{m \geq n} [T \setminus K_m]).$$ Since $\lambda(T \setminus K_m) \leq 2^{-m+1}$ , we have $\lambda(\cup_{m \geq n} [T \setminus K_m]) \leq 2^{-n+2} \to 0$ as $n \to \infty$ , so $h$ vanishes a.e. on $T$ .
|functional-analysis|measure-theory|lebesgue-integral|lp-spaces|weak-convergence|
1
Graph morphism is obtained by base extension from the diagonal morphism
I am working on exercise II.4.8 in Hartshorne, and am stuck on part (e). The problem asks to show that a property $\mathscr{P}$ of morphisms of schemes such that (1) a closed immersion has $\mathscr{P}$ , (2) a composition of two morphisms having $\mathscr{P}$ has $\mathscr{P}$ , and (3) $\mathscr{P}$ is stable under base extension, satisfies the following: if $f : X \to Y$ and $g : Y \to Z$ are two morphisms, and if $g \circ f$ has $\mathscr{P}$ and $g$ is separated, then $f$ has $\mathscr{P}$ . There is a hint in Hartshorne that says to note that the graph morphism $\Gamma_f : X \to X \times_Z Y$ is obtained by base extension from the diagonal morphism $\Delta : Y \to Y \times_Z Y$ . I really don't see how this is the case; could anyone help me out?
We'll show that the following diagram is a pullback square: $$\require{AMScd} \begin{CD} X @>{\Gamma_f}>> X\times_Z Y\\ @V{f}VV @VV{f\times id_Y}V \\ Y @>{\Delta_{Y/Z}}>> Y\times_Z Y \end{CD}$$ Let $T$ be a test object with maps $\phi:T\to Y$ and $\psi:T\to X\times_Z Y$ so that $\Delta_{Y/Z}\circ \phi = (f\times id_Y)\circ \psi$ . Let $\pi_1:X\times_Z Y\to X$ be the projection map from the definition of the fiber product. I claim that $\pi_1\circ\psi:T\to X$ makes the diagram commute and is the only map $T\to X$ which does so. Writing $\pi_1':Y\times_ZY\to Y$ for the projection onto the first factor, we have $\pi_1'\circ\Delta_{Y/Z}=id_Y$ and $\pi_1'\circ(f\times id_Y)=f\circ\pi_1$ , so $\pi_1'\circ\Delta_{Y/Z}\circ\phi=\phi$ and $\pi_1'\circ(f\times id_Y)\circ\psi=f\circ \pi_1\circ\psi$ , giving $\phi = f\circ \pi_1\circ\psi$ , which shows that the triangle consisting of $T,X,Y$ commutes when $T\to X$ is defined by $\pi_1\circ\psi$ . We also get that the triangle consisting of $T,X,X
|algebraic-geometry|schemes|
1
What is $\sqrt{-1}$? circular reasoning defining $i$.
I am reading complex analysis by Gamelin and I am having trouble understanding the square root function. The principal branch of $\sqrt{z}$ ( $f_1(z)$ ) is defined as $|z|^{\frac 1 2} e^{\frac{i \operatorname{Arg}(z)}{2}}$ for $z \in \mathbb{C} - (-\infty,0]$ and $f_2(z)$ is defined as $-f_1(z)$ , where $\operatorname{Arg}(z) \in (-\pi , \pi]$ . By this definition, what is $\sqrt{-r}$ where $r$ is a non-negative real number? Of course the answer is $i\sqrt{r}$ , but the definition of the square root function doesn't apply here. What is $i$ then? $i:=\sqrt{-1}$ , but how? We didn't define the square root function for negatives, but we still use $i$ to define complex numbers. Shouldn't the definition of the square root function be taught before defining $i$ ? And defining $i^2=-1$ without defining $i$ as either $\pm \sqrt{-1}$ is also very strange, because we want the extended function to be continuous and choosing $\pm \sqrt{-1}$ will make this impossible, although $i$ must be one of the two (after defining the square root for negative reals).
The constructions of complex numbers that you must have seen rely on people's intuitions and experience with arithmetic, in order to avoid certain formalities that are deemed too confusing or too demotivating for most people. However, if one is very careful, one discovers that they do not hold water, just like you did: how can we define $\sqrt{\phantom{x}}$ using $i$ and at the same time define $i$ using $\sqrt{\phantom{x}}$ ? If that's not what we do, then what is $i$ ? (Another dilemma that sometimes arises is: there are two square roots of $-1$ , so how do we know which one is $i$ ?) The answer that resolves these problems was already given by infinitylord, but they perhaps did not put the right emphasis on it to get through to you. So let me spell out the logical steps of the construction of $\mathbb{C}$ . Before we do that, you must forget all the talk about $i = \sqrt{-1}$ and such. All of that is just motivation and is not part of an actual, precise construction (even though many pre-university treatments present it as if it were).
|complex-analysis|algebra-precalculus|complex-numbers|continuity|riemann-surfaces|
0
Contour integral of $\int_{γ} \frac{1}{|z-z_{0}|^\alpha} dz$ where $0<\alpha<1$
I am studying Cauchy integrals and their value at points on the integration curve. The principal Cauchy value is defined at these points; however, part of the proof uses the fact that $\int_{γ} \frac{1}{|z-z_{0}|^\alpha} dz$ exists, where $γ$ is a smooth open or closed curve, $z_{0}$ is a point of $γ$ (which is not an endpoint if $γ$ is open) and $0 < \alpha < 1$ . I don't understand why this statement is true; does anyone have an idea?
The main point is that the only part of the integral that can give us trouble is in a neighbourhood of $z_0$ . So, without loss of generality, we'll consider a parameterisation of $\gamma$ such that $\gamma(0) = z_0$ and $\gamma'(0) \neq 0$ . Now, we can Taylor expand $\gamma$ around $0$ to find that, for some $c$ between $0$ and $t$ , \begin{align*} \lvert \gamma(t) - z_0 \rvert &= \biggl\lvert \gamma'(0)t + \frac{\gamma''(c)}{2}t^2 \biggr\rvert \\ &\geq \lvert \gamma'(0)\rvert\lvert t\rvert \biggl(1 - \frac{\lvert \gamma''(c)\rvert}{\lvert \gamma'(0)\rvert} \frac{\lvert t\rvert}{2}\biggr). \end{align*} Since $\gamma$ is smooth, $\gamma''$ is bounded on $[-1, 1]$ , so when $\lvert t \rvert \leq 1$ , we have that $\lvert \gamma''(c)\rvert \leq C$ for some $C$ independent of $t$ . Hence, let $\epsilon = \min\{\lvert \gamma'(0)\rvert/C, 1\}$ so that, when $\lvert t\rvert < \epsilon$ , $$ \lvert \gamma(t) - z_0 \rvert \geq \frac{\lvert\gamma'(0)\rvert}{2} \lvert t\rvert. $$ So finally, we can decompose the integral into the part with $\lvert t\rvert < \epsilon$ , where the bound above gives $\lvert\gamma(t)-z_0\rvert^{-\alpha} \leq \left(\tfrac{2}{\lvert\gamma'(0)\rvert}\right)^{\alpha}\lvert t\rvert^{-\alpha}$ , which is integrable since $\alpha < 1$ , and the part with $\lvert t\rvert \geq \epsilon$ , where the integrand is continuous and bounded.
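As a sanity check, here is a small Python sketch; the unit circle $\gamma(t) = e^{it}$ with $z_0 = 1$ and $\alpha = 1/2$ are my own sample choices, not from the answer. It verifies the lower bound $\lvert\gamma(t)-z_0\rvert \geq \lvert\gamma'(0)\rvert\,\lvert t\rvert/2$ numerically and that the singular integral stays finite:

```python
import cmath

# Sample curve: the unit circle gamma(t) = exp(it), with z0 = gamma(0) = 1.
# Here |gamma'(0)| = 1, so the lemma predicts |gamma(t) - z0| >= |t|/2 near 0.
def gamma(t):
    return cmath.exp(1j * t)

z0 = gamma(0)
for k in range(1, 1001):
    t = k / 1000.0                 # t in (0, 1]
    lhs = abs(gamma(t) - z0)       # equals 2*sin(t/2)
    assert lhs >= 0.5 * t, (t, lhs)

# With the bound in hand, the singular part is dominated by the convergent
# integral of |t|^(-alpha) with alpha < 1; a midpoint rule on (0, 1] stays bounded.
alpha = 0.5
n = 50000
approx = sum(abs(gamma((k + 0.5) / n) - z0) ** (-alpha) for k in range(n)) / n
# comparison with sqrt(2) * integral of t^(-1/2) gives the a priori bound ~2.83
assert approx < 3.0
```

The midpoint sum converges to roughly $2$, comfortably under the comparison bound.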
|complex-analysis|improper-integrals|cauchy-principal-value|
0
Does reindexing an infinite series change its value?
I'm wondering if reindexing a divergent or conditionally convergent infinite series changes the value of the series. Specifically, if we have a series $\sum_{n=1}^{\infty} a_n$ that is either divergent or conditionally convergent, does rearranging the terms by reindexing the series (e.g. $\sum_{n=2}^{\infty} a_{n-1}$ or $\sum_{n=17}^{\infty} a_{n-16}$ ) change the final value of the series? I would assume such a change in index holds true (probably by Riemann Rearrangement Theorem). But I'm not sure if this holds for divergent or conditionally convergent series. Any insight would be appreciated!
If you write out the summations by putting down the first few terms in order followed by an ellipsis ... you will see that the expressions are the same. In all cases you see just $$ a_1 + a_2 + a_3 + \cdots . $$ (That is often a useful strategy when looking at sums.)
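The point can be made concrete in a couple of lines; the alternating harmonic series below is just a sample conditionally convergent series. An index shift adds exactly the same terms in exactly the same order, so the partial sums are identical:

```python
# sum_{n=1}^{N} a_n  and  sum_{n=17}^{N+16} a_{n-16} add the same terms in order.
def a(n):
    return (-1) ** (n + 1) / n   # alternating harmonic series (conditionally convergent)

N = 1000
s_original = sum(a(n) for n in range(1, N + 1))
s_shifted = sum(a(n - 16) for n in range(17, N + 17))
assert s_original == s_shifted   # exactly equal: the same floats, added in the same order
```

This is different from a rearrangement, which changes the order of the terms; only the latter is affected by the Riemann rearrangement theorem.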
|sequences-and-series|
0
basic complex analysis, confusion about complex exponential and its modulus
This question is from the paper 'A new proof of Spitzer's result on the winding of two dimensional Brownian motion' by R. Durrett, 1982. Let $D_t = A_t + iB_t$ be a complex Brownian motion. Then $\int \vert e^{iD_s} \vert ^2 ds = \int e^{2A_s} ds$ . But why? I think by Euler's formula it should follow: $\vert e^{iA_s + i^2B_s} \vert ^2 = \vert e^{iA_s}e^{-B_s} \vert ^2 = \vert e^{-B_s} \vert ^2 = e^{-2B_s}$ I'd appreciate the help! Thanks :))
Let's clarify the misunderstanding here. Your computation is essentially correct: with $D_s = A_s + iB_s$ , $$ e^{iD_s} = e^{i(A_s + iB_s)} = e^{iA_s} e^{-B_s}, $$ and since $\left|e^{iA_s}\right| = 1$ , $$ \left| e^{iD_s} \right|^2 = e^{-2B_s}. $$ More generally, $\left|e^{z}\right| = e^{\operatorname{Re} z}$ for any complex $z$ , and $\operatorname{Re}(iD_s) = -B_s$ , so $\left|e^{iD_s}\right|^2 = e^{2A_s}$ cannot hold in general. What does hold is $$ \left| e^{D_s} \right|^2 = e^{2\operatorname{Re}(D_s)} = e^{2A_s}, $$ i.e. the stated identity is the one for $e^{D_s}$ , with no factor of $i$ in the exponent. So either the $i$ is a typo in the transcription, or the paper's convention swaps the roles of the real and imaginary parts of the Brownian motion relative to yours.
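Independent of the algebra, one can check numerically (the values of $A$ and $B$ below are arbitrary samples) that $\left|e^{i(A+iB)}\right|^2 = e^{-2B}$ while $\left|e^{A+iB}\right|^2 = e^{2A}$:

```python
import cmath
import math

for A, B in [(0.3, -1.2), (2.0, 0.7), (-0.5, 0.0)]:
    D = complex(A, B)
    # |e^{iD}|^2 = e^{-2B}: the i rotates D, so the real part of iD is -B
    assert math.isclose(abs(cmath.exp(1j * D)) ** 2, math.exp(-2 * B))
    # |e^{D}|^2 = e^{2A}: the modulus of e^z is e^{Re z}
    assert math.isclose(abs(cmath.exp(D)) ** 2, math.exp(2 * A))
```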
|complex-analysis|exponential-function|brownian-motion|eulers-number-e|
0
Need help naming a fractal/image generated by exponential
I'm taking an introductory comp-sci course this semester, and my professor pulled up this graphic while explaining mergesort (the graphic shows blocks of size $2^{-n}$ ). For some reason, the vertical lines that appear in this photo look familiar to me, but I can't seem to find anything online or put my finger on it. Is there a name for a fractal that's generated in this manner?
It bears a small resemblance to both the Cantor set and the Blancmange curve , mainly because those are both fractals that are depicted as heights above (or below) an interval that gets split in some way.
|logarithms|computer-science|fractals|
0
Statements which feel like they shouldn't be first-order expressible, but are
I recently had the opportunity to study some (very basic) model theory, and that made some theorems in ring theory become immediately more interesting. For example, one characterization of the Jacobson radical of a ring $R$ makes it possible to express the formula " $x \in J(R)$ " as a first-order formula in the language of rings: $$\forall r \exists s (s(1-rx) = 1)$$ as it is known that $x \in J(R) \iff 1 - rx$ is invertible for all $r \in R$ . Similarly, the statement "every $R$ -module is flat", which feels very far from first-order expressible, can be written as the sentence: $$\forall a \exists x (a = axa)$$ as both of these statements are satisfied precisely by the von Neumann regular rings. I am also aware of this question , which talks about a way of expressing the sentence $\dim(R) \leqslant k$ for a commutative ring $R$ . In that spirit, I was hoping for more examples (not necessarily from algebra) of seemingly "complex" statements which are equivalent to some first-order formula or set of formulas.
This may be a bit more straightforward, but I was once surprised to learn that the statement for (abelian) groups " $A$ is a vector space over some field" is equivalent to a countable set of formulas: An abelian group in question has this property if and only if each multiplication-by- $n$ -map is either identically zero or an isomorphism, which (for any fixed $n$ at a time) is clearly expressible as a first order statement in the language of groups.
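The criterion is easy to test mechanically. The sketch below restricts to cyclic groups $\mathbb{Z}/m$ for simplicity (my own choice of test cases; for these groups the criterion holds exactly when $m$ is prime):

```python
# For a finite abelian group, written additively, check the criterion:
# every multiplication-by-n map x -> n*x is identically zero or a bijection.
def is_vector_space_over_some_field(group_order):
    G = range(group_order)  # Z/m with addition mod m
    for n in range(2, group_order + 1):
        image = [(n * x) % group_order for x in G]
        zero_map = all(y == 0 for y in image)
        bijection = sorted(image) == list(G)
        if not (zero_map or bijection):
            return False
    return True

assert is_vector_space_over_some_field(5)       # Z/5 is a vector space over F_5
assert not is_vector_space_over_some_field(6)   # in Z/6, x -> 2x is neither zero nor injective
```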
|ring-theory|first-order-logic|model-theory|big-list|
0
Time Reversal of an Ornstein-Uhlenbeck Process
Suppose we are given a stochastic process $(X_t)_{0\leq t\leq 1}$ which satisfies $$dX_t=-\theta X_tdt+\sigma dB_t,$$ also known as the Ornstein-Uhlenbeck process, where $\theta>0,\sigma\in\mathbb R$ and $X_0=x_0\in\mathbb R$ . The goal of this post is to find coefficients $\bar b(t,x),\bar\sigma(t,x)$ so that the time-reversed process $\bar X_t:=X_{1-t}$ satisfies $$d\bar X_t=\bar b(t,\bar X_t)dt+\bar\sigma(t,\bar X_t)dB_t,\hspace{1cm} 0\leq t\leq 1.$$ The primary reference I am using for this is this paper. I will include screenshots of the relevant parts throughout this post. First, note that the coefficients of the SDE of $X$ satisfy the usual conditions, i.e. $b,\sigma$ are Lipschitz and don't grow significantly faster than $|x|$ . Additionally, it is well-known that $X_t$ is given by $$X_t=e^{-\theta t}x_0+\sigma\int_0^te^{-\theta(t-s)}dB_s$$ and hence by noting that the integrand $e^{-\theta(t-s)}$ is deterministic and by using Itô's isometry one can show that $X_t\sim\mathcal N\big(e^{-\theta t}x_0,\ \tfrac{\sigma^2}{2\theta}(1-e^{-2\theta t})\big)$ .
For both references you have shared, you need to calculate the score $\nabla_x \ln p_t(x)$ of the initial SDE you want to reverse; in this case, you are calculating the score of $ \mathcal{N}\left(x_t |e^{-\theta t} x_0, \frac{\sigma^2}{2 \theta}\left(1-e^{-2 \theta t}\right)\right)$ : $$ \nabla_{x_t} \ln \mathcal{N}\left(x_t |e^{-\theta t} x_0, \frac{\sigma^2}{2 \theta}\left(1-e^{-2 \theta t}\right)\right) = -2\theta \frac{x_t - e^{-\theta t} x_0 }{\sigma^2 (1-e^{-2 \theta t})} $$ The calculation you did instead is more tedious and prone to errors, as it's much easier first to take the log of the Gaussian and then differentiate $\left(\frac{\nabla_x p_t(x)}{p_t(x)} = \nabla_x \ln p_t(x)\right)$ . Now, substituting this back into the time reversal SDE (using the Haussmann and Pardoux version of time reversal): $$ \bar{X}_0 \sim \mathcal{N}\left(e^{-\theta } x_0, \frac{\sigma^2}{2 \theta}\left(1-e^{-2 \theta }\right)\right) $$ $$ d \bar{X}_t=\left[\theta \bar{X}_t - 2\theta \frac{\bar{X}_t - e^{-\theta(1-t)} x_0}{1-e^{-2\theta(1-t)}}\right] dt + \sigma dB_t, $$ where the $\sigma^2$ from the diffusion coefficient has cancelled the $\sigma^2$ in the denominator of the score, which is evaluated at the reversed time $1-t$ .
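The score formula itself is easy to sanity-check numerically: for a Gaussian density, $\nabla_x \ln p_t(x) = -(x - \text{mean})/\text{variance}$, which should agree with a finite difference of $\ln p_t$. All parameter values below are arbitrary sample choices:

```python
import math

theta, sigma, x0, t = 0.8, 1.3, 0.4, 0.6   # arbitrary sample parameters

mean = math.exp(-theta * t) * x0
var = sigma ** 2 / (2 * theta) * (1 - math.exp(-2 * theta * t))

def log_pdf(x):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def score_closed_form(x):
    # -2*theta*(x - e^{-theta t} x0) / (sigma^2 (1 - e^{-2 theta t})) == -(x - mean)/var
    return -2 * theta * (x - math.exp(-theta * t) * x0) / (
        sigma ** 2 * (1 - math.exp(-2 * theta * t)))

# central finite difference of d/dx log p_t(x)
for x in [-1.0, 0.0, 0.5, 2.0]:
    h = 1e-6
    fd = (log_pdf(x + h) - log_pdf(x - h)) / (2 * h)
    assert math.isclose(fd, score_closed_form(x), rel_tol=1e-5, abs_tol=1e-6)
```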
|probability|stochastic-processes|stochastic-calculus|markov-process|stochastic-differential-equations|
0
Integral asymptotic expansion of $\int_{0}^{\infty} \frac{e^{-x \cosh t}}{\sqrt{\sinh t}}dt$ for $x \to \infty$
$$\int_{0}^{\infty} \frac{e^{-x \cosh t}}{\sqrt{\sinh t}}dt$$ I'm trying to use Laplace's method to find the leading asymptotic behavior as $x$ goes to positive infinity, but I'm having some trouble. Could someone help me?
Changing the variable of integration from $t$ to $s$ using $t = \cosh^{-1}(1 + s)$ transforms the integral into the form $$ {\rm e}^{ - x} \int_0^{ + \infty } {{\rm e}^{ - xs} s^{ - 3/4} (2 + s)^{ - 3/4} {\rm d}s} . $$ Applying Watson's lemma directly leads to the asymptotic expansion $$ {\rm e}^{ - x} \int_0^{ + \infty } {{\rm e}^{ - xs} s^{ - 3/4} (2 + s)^{ - 3/4} {\rm d}s} \sim \frac{\Gamma \big( {\tfrac{1}{4}} \big)}{\sqrt 2}\frac{{{\rm e}^{ - x} }}{{ (2x)^{1/4} }}\sum\limits_{n = 0}^\infty \frac{{\left( {\frac{1}{4}} \right)_n \left( {\frac{3}{4}} \right)_n }}{{n!}}\frac{{( - 1)^n }}{{(2x)^n }} , $$ as $x\to+\infty$ . Here, $(a)_n = \Gamma(a+n)/\Gamma(a)$ represents the Pochhammer symbol.
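The leading term can be checked numerically. The sketch below (truncation point and step count are my own choices) applies the further substitution $s = u^4$, which turns the integrand into the smooth function $4e^{-xu^4}(2+u^4)^{-3/4}$, and then uses Simpson's rule:

```python
import math

def transformed_integral(x, n_steps=4000):
    # equals e^{x} times the original integral, after s = u^4;
    # truncate where exp(-x u^4) is negligible (exp(-40))
    U = (40.0 / x) ** 0.25
    h = U / n_steps

    def f(u):
        return 4.0 * math.exp(-x * u ** 4) * (2.0 + u ** 4) ** (-0.75)

    total = f(0.0) + f(U)
    for k in range(1, n_steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

x = 50.0
leading = math.gamma(0.25) / math.sqrt(2) / (2 * x) ** 0.25
ratio = transformed_integral(x) / leading
# the next term of the expansion is -(1/4)(3/4)/(2x) ~ -0.002, so ratio ~ 0.998
assert abs(ratio - 1) < 0.01
```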
|integration|asymptotics|laplace-method|
0
A symmetric positive definite, $\Vert x \Vert = \sqrt{x^tAx}$ norm? Equivalent to euclidean norm?
Let $A \in \mathbb{R}^{n,n}$ be a symmetric positive definite matrix. We define $\Vert x \Vert_A = \sqrt{x^tAx}$. I want to prove that this is a norm. Moreover, I want to prove that it is equivalent to the Euclidean norm. I know that $\Vert x \Vert_A$ is always nonnegative and that it is $0$ if and only if $x = 0$, so the first property is satisfied. Moreover, it is also immediate that $\Vert \alpha x \Vert_A = \vert \alpha \vert \Vert x \Vert_A$. However, I am struggling to prove that $$ \Vert v_1 + v_2 \Vert_A \leq \Vert v_1 \Vert_A + \Vert v_2 \Vert_A$$ I think I should mimic the proof of the Cauchy–Schwarz inequality, but I do not see how to do that here. Supposing it is a norm, how do I prove the equivalence with the Euclidean norm? I know that I have to find two positive numbers $m,M$ such that: $$ m\Vert x \Vert \leq \Vert x \Vert_A \leq M\Vert x \Vert $$ Any tips?
Another way of showing $||.||_{A}$ is a norm is just to notice that since A is symmetric and positive definite: Let $A = LL^{T} $ be the Cholesky decomposition $$ ||x||_{A}^{2} = x^{T}Ax = x^{T}(LL^{T})x = (L^{T}x)^{T}(L^{T}x) = ||L^{T}x||_{2}^{2}$$ So $||x||_{A} = ||L^{T}x||_{2}$ and you know $ ||.||_{2} $ to be a norm.
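Here is a minimal numeric illustration with a hand-picked $2\times 2$ SPD matrix (the matrix and test vectors are my own sample choices, and the Cholesky factor is hard-coded rather than computed):

```python
import math

# SPD example: A = [[4, 2], [2, 3]], whose Cholesky factor (A = L L^T) is
# L = [[2, 0], [1, sqrt(2)]].
L = [[2.0, 0.0], [1.0, math.sqrt(2.0)]]

def norm_A(x):
    # sqrt(x^T A x) with A = [[4, 2], [2, 3]]
    x1, x2 = x
    return math.sqrt(4 * x1 * x1 + 4 * x1 * x2 + 3 * x2 * x2)

def norm_2_of_LTx(x):
    x1, x2 = x
    y1 = L[0][0] * x1 + L[1][0] * x2   # (L^T x)_1 = 2*x1 + x2
    y2 = L[0][1] * x1 + L[1][1] * x2   # (L^T x)_2 = sqrt(2)*x2
    return math.hypot(y1, y2)

for x in [(1.0, 0.0), (0.3, -2.0), (-1.5, 4.0)]:
    assert math.isclose(norm_A(x), norm_2_of_LTx(x))
```

This identity also hands you the triangle inequality for free, since $\Vert \cdot \Vert_2$ already satisfies it.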
|linear-algebra|general-topology|
0
Definition of an $\mathcal{O}_X$-module generated by its global sections in Liu
In Liu's Algebraic Geometry and Arithmetic Curves , the author says that an $\mathcal{O}_X$ -module is generated by its global sections at $x \in X$ if the canonical homomorphism $\mathcal{F}(X) \otimes_{\mathcal{O}_X(X)} \mathcal{O}_{X,x} \to \mathcal{F}_x$ is surjective. I assume that what is meant by "the canonical homomorphism" is the map defined by $f \otimes [s] \mapsto [s] \star [f]$ , where $\star$ is the action induced by the actions of $\mathcal{O}_X(U)$ on $\mathcal{F}(U)$ for all open $U$ containing $x$ . This feels like the only reasonable definition, but I'm not sure I understand fully how it is "canonical" in the sense of being defined by a universal property? I thought this should be the map induced by the universal property of tensor products, where we consider $\mathcal{F}(X)$ , $\mathcal{O}_{X,x}$ , and $\mathcal{F}_x$ as $\mathcal{O}_X(X)$ -modules, but I don't see how there is a natural choice of $\mathcal{O}_X(X)$ -module map from $\mathcal{O}_{X,x}$ to $\mathcal{F}_x$ .
hunter in the comments: Your definition is correct. "Canonical" can be a little overused/underdefined in math literature -- sometimes it means "morphism arising from an underlying natural transformation of functors" (which is I think what you are searching for) and sometimes it just means something informally "the only reasonable one that anyone would write down."
|algebraic-geometry|sheaf-theory|quasicoherent-sheaves|
1
On a strange step in the proof regarding a maximal problem.
As far as I'm concerned, to show that something is true, proving it for one example is never enough; you have to be able to prove that it is true for all statements/numbers with the same property. However, the answer to a problem that I was trying for a while ends the proof with an example. Maybe it is correct, but I'm not seeing why. Determine the greatest natural number $n$ that has the property that, writing the numbers $1,2,3,4,\dots,2010$ in any order, there exist 15 consecutive numbers whose sum is at least equal to $n$ . Proof: The sum of all terms is $S=1005\cdot2011$ . Then we can form $2010/15=134$ disjoint groups each of $15$ terms. Therefore there exists a group with a sum of at least $S/134=15082.5$ . Rounding to the nearest integer gives $15083$ (why does it use the ceiling function?). Now here is the part that I could not understand. It states: And because an example can be constructed so that there are no 15 consecutive terms with a sum greater than $15083$ , the answer is $n=15083$ .
The proof uses the combinatorial "theorem" known as the pigeonhole principle. This provides that $S/134$ is a lower bound for the largest block sum (dealing with all the possible orderings at the same time). As $S/134 = 15082.5$ is not an integer, but each block sum is, some block sum is at least the smallest integer not smaller than $S/134$ ; that is why the ceiling function is used, giving $15083$ . The proof doesn't give the exact example, but if an example with no 15 consecutive terms summing to more than $15083$ is provided, then you can see that this bound is actually attained and cannot be improved. So, combining the two steps: the example shows that no value larger than $15083$ works, while the pigeonhole argument shows that a sum of at least $15083$ always occurs. You can look at the example as the step providing the upper bound in the proof of the correctness of the answer.
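The arithmetic in the pigeonhole step is easy to check directly:

```python
import math

S = sum(range(1, 2011))       # 1 + 2 + ... + 2010
assert S == 1005 * 2011       # = 2021055
groups = 2010 // 15           # 134 disjoint blocks of 15 consecutive terms
average = S / groups          # 15082.5: some block sums to at least this
n = math.ceil(average)        # block sums are integers, so at least 15083
assert n == 15083
```

The ceiling (rather than rounding) is forced: a block sum is an integer that is at least $15082.5$, so it is at least $\lceil 15082.5 \rceil = 15083$.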
|sequences-and-series|algebra-precalculus|optimization|solution-verification|summation|
0
Find all the polynomials such that $P(x)-P(x-1)=x^2$ and $P(0)=0$
Find all the polynomials $p(x)\in \mathbb{R}[x]$ such that $p(x)-p(x-1)=x^2$ and $p(0)=0$ . And then deduce the value of $\sum_{i=1}^n i^2$ for any natural $n$ . I get, using the initial condition and making induction, that $p(n)$ has to be equal to $\sum_{i=1}^n i^2$ for any given $n$ . From here I think I can use the fact that $\sum_{i=1}^n i^2= \frac{n(n+1)(2n+1)}{6}$ and conclude that $p(x)=\frac{x(x+1)(2x+1)}{6}$ . But I feel like this answer is not what the problem is looking for. I do not even know if my answer is correct.
Your answer is correct. To explicitly calculate the polynomial, note that $x^n - (x-1)^n$ for any $n \in \mathbb{N}$ gives a polynomial of degree n-1. By this argument, we cannot have terms that have power greater than 3, therefore our polynomial must be of the form $$p(x) = ax^3 +bx^2 + cx$$ where there is no constant term as $p(0) = 0$ . calculating $p(x) - p(x-1)$ and equating it to $x^2$ , we get $$p(x) - p(x-1) = (3a)x^2 + (2b-3a)x + (a-b+c)$$ We must have $3a = 1 \Rightarrow a = \frac{1}{3}$ , $2b-3a = 0 \Rightarrow b = \frac{1}{2}$ and $a-b+ c = 0 \Rightarrow c = \frac{1}{6}$ . Therefore the polynomial is $$p(x) = \dfrac{2x^3 + 3x^2 + x}{6} = \dfrac{x(x+1)(2x+1)}{6}$$ For the second part, we have $$\sum_{i =1} ^n i^2 = \sum_{i =1} ^n (p(i) - p(i-1)) = p(n) - p(0) = \dfrac{n(n+1)(2n+1)}{6}$$ Using our polynomial from the first part and the fact that the series is a telescoping series.
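Both the identity and the telescoping sum can be verified with exact rational arithmetic; a degree-$3$ polynomial identity that holds at more than four points holds identically:

```python
from fractions import Fraction

def p(x):
    x = Fraction(x)
    return x * (x + 1) * (2 * x + 1) / 6

assert p(0) == 0

# p(x) - p(x-1) = x^2 at many sample points
for x in range(-10, 11):
    assert p(x) - p(x - 1) == x * x

# telescoping: sum of the first n squares equals p(n)
for n in range(1, 50):
    assert sum(i * i for i in range(1, n + 1)) == p(n)
```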
|polynomials|summation|
1
Suppose $\mathbb{Q}(E[N]) = \mathbb{Q}(\zeta_N)$ and let $P$ be a rational point of order $N$, show $E[N]\cong \mathbb{Z}/N\mathbb{Z}\times \mu_N$
$N$ is prime and $E$ is of course an elliptic curve defined over $\mathbb {Q}$ . My attempt so far: Consider the Galois action of $G = Gal(\mathbb{Q}(\zeta_N)/\mathbb{Q})\cong (\mathbb{Z}/N\mathbb{Z})^\times$ on $E[N]$ and the Weil pairing $e_N$ . Let $\sigma$ be a generator of the Galois group. Let $Q$ be a point of $E[N]$ such that $e_N(P,Q) = \zeta_N$ . $\zeta_N^\sigma = e_N(P,Q)^\sigma = e_N(P,Q^\sigma)$ Applying the automorphism $\sigma$ $N-1$ times, one obtains $N-1$ distinct values for $\zeta_N^\sigma$ , that is, all the powers of $\zeta_N$ except $1$ . Therefore, the points $Q^\tau$ are all different for $\tau\in G$ . I would like to show that $Q^\sigma \in \langle Q \rangle$ . Suppose $\zeta_N^\sigma = \zeta_N^a$ for some integer $a$ . Then from the linearity of the Weil pairing one has $$ \zeta_N^a = e_N(P,Q)^a = e_N(P,[a]Q) $$ $$ e_N(P,Q^\sigma - [a]Q) = 1 $$ therefore $$ Q^\sigma - [a]Q \in \langle P \rangle $$ We could use the argument for deriving the expression
We may assume $N > 2$ , since the Galois action is trivial on both $E[2]$ and $\mu_2 =\{\pm 1\}$ and the statement is clear. The point $Q$ might not quite be the generator you want. Instead, note that if $e_N(P, Q) = \zeta_N$ then you also have $e_N(P, mP + Q) =\zeta_N$ for any integer $m$ , since the Weil pairing is alternating. The points $P$ and $Q$ generate $E[N]$ (abstractly as a group), so letting $\sigma$ generate $\text{Gal}(\Bbb{Q}(\zeta_N)/\Bbb{Q})$ , you can always write $Q^\sigma = rP + sQ$ for some $s, r \in \Bbb{Z}/N\Bbb{Z}$ . Moreover, $s \neq 1$ , since we need to have $$e_N(P^\sigma, Q^\sigma) = \zeta_N^{\sigma} \neq \zeta_N = e_N(P, Q) = e_N(P,rP + Q).$$ (Here we use $N \neq 2$ .) Therefore we may replace $Q$ with $Q' = Q + \frac{r}{s - 1}P$ (division modulo $N$ ) so that $$(Q')^\sigma = Q^{\sigma} + \frac{r}{s - 1}P^{\sigma} = rP + sQ + \frac{r}{s - 1}P = sQ'.$$ So this choice of $Q'$ does what you want while still satisfying $e_N(P, Q') = \zeta_N$ .
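The coefficient manipulation at the end can be sanity-checked by modelling the action on coordinates in the basis $(P, Q)$ over $\mathbb{Z}/N$; the prime $N = 13$ below is an arbitrary sample choice:

```python
def sigma_fixes_line(N):
    """In coordinates w.r.t. the basis (P, Q), sigma acts by P -> P and
    Q -> r*P + s*Q, i.e. (a, b) -> (a + b*r, b*s).  Verify that
    Q' = Q + r*(s-1)^{-1} * P satisfies (Q')^sigma = s * Q' for all r, s != 1."""
    for r in range(N):
        for s in range(2, N):                     # s != 0, 1
            m = (r * pow(s - 1, -1, N)) % N       # Q' = m*P + Q
            image = ((m + r) % N, s % N)          # sigma applied to (m, 1)
            scaled = ((s * m) % N, s % N)         # s * (m, 1)
            if image != scaled:
                return False
    return True

assert sigma_fixes_line(13)
```

The check reduces to the congruence $m + r \equiv sm \pmod N$, which is exactly $m = r/(s-1)$.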
|number-theory|elliptic-curves|torsion-groups|
1
PA + "(PA + this axiom) is consistent"
By Gödel's second incompleteness theorem, no sufficiently powerful formal system can prove its own consistency. I was wondering what happens if one tries to manually append an axiom stating a formal system's own consistency to an existing formal system. By the $2$ nd incompleteness theorem (assuming the consistency of PA) we can append $\text{Con}(PA)$ to PA to get a consistent formal system which proves (trivially) the consistency of PA. Though this resulting system of course says nothing about itself. What if instead, we append to $PA$ the following axiom T: $T:\text{Con}(PA + T)$ The resulting system would seemingly trivially prove its own consistency, having it as an axiom. But also the effort to formally construct this system appears doomed to fail by the $2$ nd incompleteness theorem, as it proves its own consistency (and I assume it remains a "sufficiently powerful" formal system as it still contains $PA$ ). Where does this go wrong? I assume it is in the attempt to include a self-referential axiom in the theory.
Actually, whipping up a self-referential axiom isn't difficult at all; the real issue is that doing so in the way you want just produces an inconsistent system. We can indeed produce a sentence $\varphi$ with the property that, provably in $\mathsf{PA}$ , we have $$\varphi\leftrightarrow Con(\mathsf{PA}+\varphi);$$ this is a direct consequence of the diagonal lemma . In particular, $\mathsf{PA}+\varphi\vdash Con(\mathsf{PA}+\varphi)$ . However, $\mathsf{PA}+\varphi$ also proves everything else, that is, $\mathsf{PA}+\varphi$ is inconsistent. This is exactly what the second incompleteness theorem tells us. Specifically, let $\rho$ be the Godel-Rosser sentence for this theory, i.e. "For every $\mathsf{PA}+\varphi$ -proof of me there is a shorter $\mathsf{PA}+\varphi$ -disproof of me." Reasoning within $\mathsf{PA}+\varphi$ , since $\mathsf{PA}+\varphi$ is consistent (remember that $\mathsf{PA}+\varphi$ proves that!) we know that the Rosser sentence must not be provable in $\mathsf{PA}+\varphi$ ; unwinding what $\rho$ asserts inside the theory then yields a contradiction, which is how the second incompleteness theorem forces $\mathsf{PA}+\varphi$ to be inconsistent.
|logic|axioms|peano-axioms|incompleteness|
1
How many ways can n different objects be split up into r groups, where each group is of size q, and where $n \geq r q$
I'm wondering how many ways n different objects can be split up into r groups, where each group is of size q, and where n >= r * q. I know that, when n = r * q, there is a straightforward solution (the formula, as seen here , is (n!) / (((q!) ^ r) * (r!)). When n > r * q, though, does the formula change? (The extra n - (r * q) objects are discarded and not assigned to any groups.) I believe (from simulations) that the formula is (n!) / (((q!) ^ r) * (r!) * ((n - (q * r))!)) (in other words, there is an additional factor of ((n - (q * r))!) in the denominator, and since 0! = 1, this formula still holds true when n = r * q), but I'd like to confirm if this formula is correct. Thanks!
Many problems like this can be solved with a method of choosing twice. In this case, first you can choose which $q\cdot r$ objects you want to use and then you choose their exact partition. For the first choice, there are $\binom{n}{qr}$ possibilities and you already described the number of the second choices. The result is the product of these, as for each first choice, you have the same number of possible second choices. Different pairs of choices lead to different "splits" and every "split" is counted once, so the product is really the answer. PS: Please use MathJax for mathematical formulas for better readability.
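A brute-force comparison against the proposed formula for a few small cases (the test parameters are my own choices):

```python
from itertools import combinations
from math import factorial

def count_by_formula(n, r, q):
    return factorial(n) // (factorial(q) ** r * factorial(r) * factorial(n - q * r))

def count_by_brute_force(n, r, q):
    # enumerate unordered families of r pairwise-disjoint q-subsets of {0,...,n-1}
    splits = set()

    def extend(remaining, groups):
        if len(groups) == r:
            splits.add(frozenset(groups))
            return
        for combo in combinations(sorted(remaining), q):
            extend(remaining - set(combo), groups + [frozenset(combo)])

    extend(frozenset(range(n)), [])
    return len(splits)

for n, r, q in [(4, 2, 2), (5, 2, 2), (6, 2, 2), (6, 3, 2), (7, 2, 3)]:
    assert count_by_formula(n, r, q) == count_by_brute_force(n, r, q)
```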
|combinatorics|
1
alternate methods lead to contradictory answers (conditional pdf)
let $T_1, T_2, T_3$ be Exponential( $\lambda$ ), all independent and let $Y_1 = T_1$ , $Y_2 = T_1 + T_2$ , $Y_3 = T_1 + T_2 + T_3$ Find pdf of $Y_2$ given $Y_1 = y_1$ First method: computed the pdf for $Y_1$ , $Y_2$ , $Y_3$ using the convolution integral and found that they only depend on the variable of parametrization and are thus independent. Also find $Y_2$ is Gamma(2, $\lambda)$ and $Y_3$ is Gamma(3, $\lambda$ ) deduce conditional pdf is just pdf of $Y_2$ Second method: see that $P(Y_2 \le y_2 | Y_1 = y_1) = P(y_1 + T_2 \le y_2) = P(T_2 \le y_2 - y_1)$ making the conditional pdf equivalent to $f_{T_2}(y_2 - y_1)$ Question: which method is correct and what is wrong with the other one? My intuition is the second one is incorrect, since you seem to be speaking only of the "and" event ... and you cannot have $P(Y_1 = y_1)$ alone.
Method $2$ is correct. $$\begin{align} \Pr[Y_2 \le y_2 \mid Y_1 = y_1] &= \Pr[T_1 + T_2 \le y_2 \mid T_1 = y_1] \\ &= \Pr[T_2 \le y_2 - y_1] \\ &= F_{T_2}(y_2 - y_1), \tag{1} \end{align}$$ hence $$f_{Y_2 \mid Y_1}(y_2 \mid y_1) = f_{T_2}(y_2 - y_1) = \lambda e^{-\lambda(y_2 - y_1)}, \quad y_2 \ge y_1. \tag{2}$$ This is a location-shifted exponential density with rate $\lambda$ . Method $1$ is incorrect because $Y_1, Y_2, Y_3$ are not independent. We can immediately see that $$Y_2 = T_1 + T_2 = Y_1 + T_2,$$ so for instance, $$\Pr[Y_1 > 1 \mid Y_2 = 1] = \Pr[Y_1 > 1 \mid Y_1 = 1 - T_2] = 0 \ne \Pr[Y_1 > 1].$$ Although it is true that $$Y_2 \sim \operatorname{Gamma}(2,\lambda), \\ f_{Y_2}(y_2) = \frac{\lambda^2 y_2 e^{-\lambda y_2}}{\Gamma(2)} = \lambda^2 y_2 e^{-\lambda y_2}, \quad y_2 \ge 0, \tag{3}$$ this is just the marginal density of $Y_2$ . The joint density of $Y_1, Y_2$ is $$f_{Y_1, Y_2}(y_1, y_2) = \lambda^2 e^{-\lambda y_2}, \quad 0 \le y_1 \le y_2. \tag{4}$$ Integrating this joint density over $y_1 \in [0, y_2]$ recovers the marginal density $(3)$ .
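A short Monte Carlo sketch (sample size, seed, and tolerances are my own choices) illustrates both points: $Y_1$ and $Y_2$ are correlated, hence not independent, and conditionally on $Y_1 \approx y_1$ the variable $Y_2$ behaves like $y_1$ plus an independent $\operatorname{Exponential}(\lambda)$:

```python
import random

random.seed(0)
lam = 1.0
N = 200_000
samples = [(random.expovariate(lam), random.expovariate(lam)) for _ in range(N)]
y1s = [t1 for t1, _ in samples]
y2s = [t1 + t2 for t1, t2 in samples]

# Cov(Y1, Y2) = Var(T1) = 1/lam^2 = 1, so Y1 and Y2 cannot be independent
m1, m2 = sum(y1s) / N, sum(y2s) / N
cov = sum((a - m1) * (b - m2) for a, b in zip(y1s, y2s)) / N
assert cov > 0.5   # theoretical value 1.0

# conditionally on Y1 ~= 1, Y2 = 1 + Exponential(1), so E[Y2 | Y1 = 1] ~= 2
bin_vals = [b for a, b in zip(y1s, y2s) if abs(a - 1.0) < 0.05]
cond_mean = sum(bin_vals) / len(bin_vals)
assert abs(cond_mean - 2.0) < 0.1
```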
|probability|probability-theory|
0
alternate methods lead to contradictory answers (conditional pdf)
let $T_1, T_2, T_3$ be Exponential( $\lambda$ ), all independent and let $Y_1 = T_1$ , $Y_2 = T_1 + T_2$ , $Y_3 = T_1 + T_2 + T_3$ Find pdf of $Y_2$ given $Y_1 = y_1$ First method: computed the pdf for $Y_1$ , $Y_2$ , $Y_3$ using the convolution integral and found that they only depend on the variable of parametrization and are thus independent. Also find $Y_2$ is Gamma(2, $\lambda)$ and $Y_3$ is Gamma(3, $\lambda$ ) deduce conditional pdf is just pdf of $Y_2$ Second method: see that $P(Y_2 \le y_2 | Y_1 = y_1) = P(y_1 + T_2 \le y_2) = P(T_2 \le y_2 - y_1)$ making the conditional pdf equivalent to $f_{T_2}(y_2 - y_1)$ Question: which method is correct and what is wrong with the other one? My intuition is the second one is incorrect, since you seem to be speaking only of the "and" event ... and you cannot have $P(Y_1 = y_1)$ alone.
computed the pdf for $Y_1, Y_2, Y_3$ using the convolution integral and found that they only depend on the variable of parametrization and are thus independent. Also find $Y_2$ is $\Gamma(2, \lambda)$ and $Y_3$ is $\Gamma(3, \lambda)$ . The marginal distributions of $Y_1, Y_2,$ and $Y_3$ do only depend on the parameter $\lambda$ . Indeed, they are $Y_1\sim\Gamma(1,\lambda)$ , $Y_2\sim\Gamma(2,\lambda),$ and $Y_3\sim\Gamma(3,\lambda)$ . However, this does not imply that they are independent. The marginal distributions on their own do not tell you about independence. Rather, you would need to verify whether $f_{Y_1,Y_2}(y_1,y_2) = f_{Y_1}(y_1)f_{Y_2}(y_2)$ is true or not. see that $P(Y_2 \le y_2 \mid Y_1 = y_1) = P(y_1 + T_2 \le y_2) = P(T_2 \le y_2 - y_1)$ making the conditional pdf equivalent to $f_{T_2}(y_2 - y_1)$ Indeed. $T_2=Y_2-Y_1$ , so therefore: $(Y_2-Y_1)\mid Y_1\sim\mathcal{Exp}(\lambda)$ $$f_{Y_2\mid Y_1}(y_2\mid y_1) = f_{T_2}(y_2-y_1)$$ This should be sensible to you, as you have performed exactly this computation in your second method.
|probability|probability-theory|
0
How do I find the area in the region inside the cardioid $r = 6 + 6\sin(\theta)$ and above the line $y = 9/2$
Image of graph 1 I'm sure this is easy but I have looked everywhere and can't get an exact answer. Just having trouble figuring it out. I understand that you can find the area by having 2 radii but I am unaware on how to find with a line.
A sketch in polar coordinates will reveal the following: You can work entirely in polar coordinates, but your first step is to find the polar equations of the blue and green rays in the first and third quadrants. Hint: Note that the required area is symmetric about the $y$ -axis, so you can work exclusively in the first quadrant, then double. You can find the $\theta$ value of the intersection of the line $y = \frac 92$ and the cardioid in the first quadrant by finding the line's polar equation $r\sin\theta = \frac 92$ , where $r = 6(1+\sin\theta)$ . You should get a "nice" value for $\theta$ in radian measure (let's call that $\alpha$ ). Now use the polar area formula and find the integral $\int_{\alpha}^{\frac {\pi} {2} } \frac 12 r^2 d\theta$ . Note that to get the required area in the first quadrant, you need to subtract away that pesky right triangle bounded by the blue and black line segments and the $y$ -axis, whose area you can quite easily determine. Finally, double the result.
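Following the hint step by step, here is a numeric sketch (Simpson's rule with my own step count; the closed-form value at the end is simply what this procedure evaluates to, not something stated in the hint):

```python
import math

# intersection of r = 6(1 + sin t) with r sin t = 9/2 in the first quadrant:
# 6(1 + s)s = 9/2 with s = sin t  =>  6s^2 + 6s - 9/2 = 0  =>  s = 1/2, alpha = pi/6
s = (-6 + math.sqrt(36 + 4 * 6 * 4.5)) / 12
alpha = math.asin(s)
assert math.isclose(alpha, math.pi / 6)

def r(t):
    return 6 * (1 + math.sin(t))

# Simpson's rule for the polar sector  int_alpha^{pi/2} (1/2) r(t)^2 dt
n = 2000
h = (math.pi / 2 - alpha) / n
acc = 0.5 * r(alpha) ** 2 + 0.5 * r(math.pi / 2) ** 2
for k in range(1, n):
    acc += (4 if k % 2 else 2) * 0.5 * r(alpha + k * h) ** 2
sector = acc * h / 3

# subtract the right triangle with vertices (0,0), (0, 9/2) and the intersection point
x_int = r(alpha) * math.cos(alpha)       # = 9*sqrt(3)/2
triangle = 0.5 * x_int * 4.5             # = 81*sqrt(3)/8
area = 2 * (sector - triangle)           # double, by symmetry about the y-axis

assert math.isclose(area, 18 * math.pi + 81 * math.sqrt(3) / 4, rel_tol=1e-6)
```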
|calculus|
0
How close do distinct distances to $0$ determined by a square integer lattice in $\mathbb{R}^2$ get?
Recently on MSE's chat, user "Simd" raised the following problem (I have rephrased and introduced some notation): For $n \geq 1$ let $S_n \subseteq \mathbb{R}^2$ denote the Cartesian product of $\{0,1,2,\dots,n\}$ with itself, and let $\|\cdot\|$ denote the usual Euclidean distance on $\mathbb{R}^2$ . Define $m_n$ to be the smallest positive element of $\{\|a\|-\|b\|: a, b \in S_n\}$ , and define $B_n := \{a \in S_n: \text{there is $b \in S_n$ with $|\|a\| - \|b\|| = m_n$}\}$ . Identify the sequences $m_n$ and $B_n$ in more concrete terms. This problem has a geometric interpretation: drawing all circles in $\mathbb{R}^2$ with center $0$ that pass through points of $S_n$ , the number $m_n$ is the smallest distance between two distinct such circles, and $B_n$ is the set of all points in $S_n$ that are in one of a pair of circles that realizes this smallest distance. (Before the chat moved on to other things, it was only observed that $m_n \to 0$ , as is also shown below). My question is
This is an interesting question; thanks for the detailed exposition of the substantial progress you made on it. I’m a bit surprised that you didn’t find this proof yourself, since you provide all the ingredients and your nice proof of $\liminf_{n\to\infty}nm_n=\frac1{2\sqrt2}$ is much more sophisticated than this. So perhaps I’m overlooking some gap, but this seems to me to be a straightforward proof of your claim: With your $a_n=(0,n)$ and $b_n=(1,n)$ , we have $\|a_n\|^2=n^2$ and $\|b_n\|^2=n^2+1$ , so $\|b_n\|^2-\|a_n\|^2=1$ . So if $a_n$ and $b_n$ realize $m_n$ , we’re done. If not, some $u$ and $v$ with $0\lt\|v\|-\|u\|\lt\|b_n\|-\|a_n\|$ realize $m_n$ . With \begin{eqnarray*} \|b_n\|-\|a_n\| &=& \sqrt{n^2+1}-n \\ &=& n\left(\sqrt{1+\frac1{n^2}}-1\right) \\ &\le& n\left(1+\frac1{2n^2}-1\right) \\ &=& \frac1{2n}\;, \end{eqnarray*} we have \begin{eqnarray*} \|v\|^2-\|u\|^2 &=& (\|v\|-\|u\|)\,(\|v\|+\|u\|) \\ &\lt& (\|b_n\|-\|a_n\|)\,(\|v\|+\|u\|) \\ &\le& \frac1{2n}\left(\sqrt2\,n+\sqrt2\,n\right) \\ &=& \sqrt2\;. \end{eqnarray*} Since $\|u\|^2$ and $\|v\|^2$ are integers, $0\lt\|v\|^2-\|u\|^2\lt\sqrt2$ forces $\|v\|^2-\|u\|^2=1$ . So in either case the pair realizing $m_n$ has squared norms that are consecutive integers, and $$m_n=\frac1{\|u\|+\|v\|}\ge\frac1{2\sqrt2\,n}\;,$$ consistent with $\liminf_{n\to\infty}nm_n=\frac1{2\sqrt2}$ .
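Two facts here, the upper bound $m_n \le \sqrt{n^2+1}-n$ coming from $a_n=(0,n)$, $b_n=(1,n)$, and the suggestion that a realizing pair has squared norms differing by exactly $1$, can be brute-forced for small $n$ (the ranges below are my own choice):

```python
import math

def min_gap_data(n):
    # distinct squared norms are integers, so sort them and scan consecutive
    # square roots for the smallest gap
    squared = sorted({a * a + b * b for a in range(n + 1) for b in range(n + 1)} - {0})
    norms = [math.sqrt(v) for v in squared]
    i = min(range(len(norms) - 1), key=lambda k: norms[k + 1] - norms[k])
    return norms[i + 1] - norms[i], squared[i + 1] - squared[i]

for n in range(2, 40):
    m_n, sq_diff = min_gap_data(n)
    assert m_n <= math.sqrt(n * n + 1) - n + 1e-12   # (0,n),(1,n) give an upper bound
    assert sq_diff == 1                              # realizing pair: consecutive squared norms
```

For $n=5$, for instance, the minimum is realized by the squared norms $49$ and $50$, not by $25$ and $26$, so the realizing pair is not always $a_n, b_n$ themselves.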
|discrete-geometry|
1
Smallest natural deduction proof for Meredith's axiom from basic rules
I tried to find a small natural deduction proof for Meredith's single axiom Infix notation Polish notation ((((ψ→φ)→(¬χ→¬ξ))→χ)→τ)→((τ→ψ)→(ξ→ψ)) CCCCCpqCNrNsrtCCtpCsp but I failed. My proof (shown below) has a whopping 72 steps . Am I missing something? Can it be reduced further, or are natural deduction proofs just that big over only basic rules? For example, the semantic tableau proof (linked above) is much smaller. So, essentially, I am looking for three things: A smaller proof for Meredith's single axiom. Methods to find optimal proofs in natural deduction systems. Ways to show that such a given proof is optimal. Inference Rules I'm looking for the smallest possible natural deduction proof, based only on the rules of {¬,→,∨,∧,↔}-{elimination,introduction} and ⊥-introduction, i.e. no derived rules: My Attempt The proof can be imported into https://mrieppel.github.io/fitchjs/ for automated validation and a prettier view. Problem: |- (((((A > B) > (~C > ~D)) > C) > E) > ((E > A) > (D > A))
Here is a 26 line proof (if you don't count line 1):
|logic|propositional-calculus|alternative-proof|natural-deduction|formal-proofs|
0
Building a function $f$ such that $\| f - f_n \|_{L^p(B(x,r) \cap \Omega)} \to 0$ as $n \to \infty$ and $f \in L^p_{\text{loc}}(\Omega)$.
Consider an arbitrary open set $\Omega \subset \mathbb R^n$ and an arbitrary $1 \leqslant p < \infty$ . Moreover, let $(f_n)_{n \in \mathbb N} \subset L^p(B(x,r) \cap \Omega)$ denote a convergent sequence in $L^p(B(x,r) \cap \Omega)$ , for every $x \in \Omega$ and $r > 0$ . In other words, for each $x \in \Omega$ and $r > 0$ there exists a function $f_{x,r} \in L^p(B(x,r) \cap \Omega)$ such that $$ \lim_{n \to \infty}\| f_{x,r} - f_n \|_{L^p(B(x,r) \cap \Omega)} = 0. $$ My goal is, if possible , to build a function $f$ such that $$ \tag{1} f \in L^p_{\text{loc}}(\Omega) \quad \text{ and }\quad \lim_{n \to \infty}\| f - f_n \|_{L^p(B(x,r) \cap \Omega)} = 0, $$ for every $x \in \Omega$ and $r > 0$ . My attempt. Clearly, to prove the R.H.S condition of $(1)$ , it is sufficient to establish that $$ \lim_{n \to \infty} \| f - f_n \|_{L^p(B(x,r) \cap \Omega)} \leqslant \lim_{n \to \infty} \| f_{x,r} - f_n \|_{L^p(B(x,r) \cap \Omega)}, $$ for every $x \in \Omega$ and $r > 0$ . Furthermore, simp
This question is much simpler than you thought. The key is that the following caution in your OP This inequality turns out to not provide a viable choice since the functions $f_{x,r}$ are not necessarily equal for different values of $x\in \Omega$ and $r>0$ . is almost unnecessary. The reason is that $L^p(X)$ for a measure space $X$ is a normed space, so limit in $L^p$ is unique. That is, two limit functions are equal almost everywhere. Therefore, if $B(x_1, r_1)\cap B(x_2,r_2)\neq \emptyset$ , then the limits $$ f_{x_1,r_1}=f_{x_2,r_2} \text{ a.e. in }B(x_1, r_1)\cap B(x_2,r_2), $$ since $f_n\to f_{x_1,r_1}$ and $f_n\to f_{x_2,r_2}$ in $L^p(B(x_1, r_1)\cap B(x_2,r_2))$ . Therefore we have a function $$ F_2(y):=\begin{cases} f_{x_1,r_1}(y) & y\in B(x_1,r_1)\\ f_{x_2,r_2}(y) & y\in B(x_2,r_2)\backslash B(x_1,r_1) \end{cases} $$ defined on $B(x_1, r_1)\cup B(x_2,r_2)$ . Furthermore, $F_2=f_{x_2,r_2}$ a.e. on $B(x_2,r_2)$ . Now since ${\mathbb R}^n$ is second countable, there exist a coun
|real-analysis|functional-analysis|convergence-divergence|examples-counterexamples|
1
A misunderstanding about the number of points on an algebraic variety
Consider the multivariate polynomial $Q(x,y) = y^2 - f(x)$ over a finite field $\mathbb{F}_q$ where $f(x) \in \mathbb{F}_q[x]$ has $m$ distinct roots in $\mathbb{F}_q$ . In his book " Equations over Finite Fields ", Schmidt claims that if $Q$ is absolutely irreducible then $N$ , the number of points on the algebraic variety specified by $Q$ , satisfies $$|N - q| \leq (m-1) \sqrt{q}.$$ Now, what if $f(x)$ itself is irreducible? In that case we have $m=0$ and the above inequality would not make sense.
In this context, $m$ is the degree of $f$ , and the assumption is that $f$ has $m$ distinct roots in an algebraic closure of $\mathbb{F}_q$ .
|algebraic-geometry|finite-fields|
0
Solve $\int_{0}^\infty \frac{1}{(x^2 +b^2)^4}\ dx$
I ran into this integral in the context of Quantum Mechanics, and I don't really know how to tackle it. Here, $b$ is simply a real constant, which we can assume is positive. $$\int\limits_0 ^\infty\frac{1}{(x^2+b^2)^4}\ dx$$ It doesn't look like I can use "traditional" methods to solve it, so I was thinking to maybe try to transform it to complex integral somehow and apply Cauchy's theorem or something, but I'm unsure if that would even work. Any nudge in the right direction would be appreciated!
Starting with $B>0,$ $$ \int_0^{\infty} \frac{1}{x^2+B}\, d x=\frac{1}{\sqrt{B}}\left[\tan ^{-1}\left(\frac{x}{\sqrt{B}}\right)\right]_0^{\infty}=\frac{\pi}{2 \sqrt{B}}, $$ we differentiate both sides w.r.t. $B$ thrice and get $$ \begin{aligned} \int_0^{\infty} \frac{(-1)^3\, 3 !}{\left(x^2+B\right)^4} d x & =\frac{\pi}{2} \frac{d^3}{d B^3}\left( B^{-\frac{1}{2}}\right) \\ & =\frac{\pi}{2}\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)\left(-\frac{5}{2}\right) B^{-\frac{7}{2}} \end{aligned} $$ Putting $B=b^2$ yields $$ \int_0^{\infty} \frac{1}{\left(x^2+b^2\right)^4} d x=\frac{5 \pi}{32 b^7}. $$ In general, differentiating $n-1$ times gives $$ \boxed{\int_0^{\infty} \frac{1}{\left(x^2+b^2\right)^n} d x=\frac{\pi}{2}\cdot\frac{\left(\frac{1}{2}\right)_{n-1}}{(n-1) !\; b^{2 n-1}}} $$ where $\left(\frac{1}{2}\right)_{n-1}$ denotes the Pochhammer symbol (rising factorial).
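As a quick numeric sanity check of the value $\frac{5\pi}{32 b^7}$ (a small Python sketch, not part of the original answer; the hand-rolled Simpson integrator, the truncation of the half-line at $200$, and the choice $b=1.7$ are all illustrative assumptions):

```python
import math

def simpson(f, a, b, n=100_000):
    # Composite Simpson's rule on [a, b] with an even number n of subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

b = 1.7  # arbitrary positive constant (illustrative choice)
# The integrand decays like x^{-8}, so truncating the half-line at 200 is safe.
numeric = simpson(lambda x: 1.0 / (x * x + b * b) ** 4, 0.0, 200.0)
closed = 5 * math.pi / (32 * b ** 7)
print(numeric, closed)
```

The two printed values agree to high precision, which also confirms the general boxed formula at $n=4$.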
|integration|definite-integrals|
0
Inequality of a definite integral
I encountered the following integral question: Show that $$\frac{3}{8} \leq \int_{0}^{\frac{1}{2}} \frac{\sqrt{1-x}}{\sqrt{1+x}} \, dx \leq \frac{\sqrt{3}}{4}$$ I know I can just integrate it, but is there a way to obtain the result above without directly integrating? I tried using rectangles and trapezoids, but neither yields the result above. Also, do you have any tips for tackling other questions of a similar vein?
I'll transcribe my comment on the lower bound here. Observe that the integrand is precisely $$ \frac{\sqrt{1-x^2}}{1+x}. $$ Since $\sqrt{1-x^2} \ge 1-x^2$ on the interval $[0,1/2]$ , we have $$ \int_0^{1/2}\frac{\sqrt{1-x^2}}{1+x}dx \ge \int_0^{1/2}\frac{1-x^2}{1+x}dx=\int_0^{1/2}(1-x)dx = 3/8. $$
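Both bounds can be sanity-checked numerically (a small sketch with a hand-rolled Simpson integrator; this is not part of the original argument):

```python
import math

def simpson(f, a, b, n=10_000):
    # Composite Simpson's rule on [a, b] with an even number n of subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

val = simpson(lambda x: math.sqrt((1 - x) / (1 + x)), 0.0, 0.5)
print(3 / 8, val, math.sqrt(3) / 4)  # lower bound, integral, upper bound
```

The integral evaluates to about $0.3896$, strictly between $3/8 = 0.375$ and $\sqrt3/4 \approx 0.4330$.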
|definite-integrals|
1
The domain of FFT
I have a real signal $f(t)$ that is periodic on $[0,L]$ where $L=1$ . I sampled it uniformly $64$ times and computed the FFT (in Python). I was asked to find the frequency where the transformed signal has the largest magnitude . I found the index $i$ of the largest value in the transformed signal. How do I find the frequency (in hertz ) corresponding to the index $i$ ? My thinking is that my frequency domain must be $64$ uniform samples on $[-32/L,31/L]=[-32,31]$ and therefore the central frequency is $$f=i-32$$ . To support my claim I ran the following code to get the frequency domain: import numpy as np num_samples = 64 L=1 freqs = np.fft.fftfreq(num_samples, L/num_samples) print(np.fft.fftshift(freqs)) which yields: [-32 ,-31 ,-30 ,-29 ,-28 ,-27, -26 ,-25 ,-24, -23 ,-22 ,-21, -20 ,-19 -18 ,-17 ,-16 ,-15 ,-14 ,-13 ,-12 ,-11, -10 , -9 , -8 ,-7 , -6 , -5 -4 , -3 ,-2 ,-1 , 0 , 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 10 , 11 , 12 , 13 , 14 , 15 , 16 ,17 ,18 , 19 , 20 , 21 , 22 , 23 24 , 25 ,
You are correct, assuming that frequency is measured in Hertz. Suppose $N$ is even for simplicity, and there is a collection of samples $g_0,...,g_{N-1}$ . If $n \mapsto \hat{g}_n$ is the DFT, then we can write $g_k = \sum_{n=-{N \over 2}}^{{N \over 2}-1} \hat{g}_ne^{i2 \pi {n k \over N}}$ . In the case given $g_k = f(k {L \over N} )$ , so letting $t=k {L \over N} $ , we have $f(t) = \sum_{n=-{N \over 2}}^{{N \over 2}-1} \hat{g}_ne^{i2 \pi {n kL \over N L}} = \sum_{n=-{N \over 2}}^{{N \over 2}-1} \hat{g}_ne^{i2 \pi {n \over L}t}$ . Hence the frequencies are ${n \over L}$ with $n = -{N \over 2},...,{N \over 2}-1$ , and so the frequency of largest magnitude occurs at $n=-{N \over 2}$ and is $|f|_{\max}={N \over 2L}$ . In your example, $N=64, L=1$ and so $|f|_{\max} = 32$ Hz.
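For concreteness, here is a self-contained sketch of the index-to-Hertz mapping (the planted 5 Hz test tone is my own illustrative choice, not from the original post):

```python
import numpy as np

num_samples, L = 64, 1.0
t = np.arange(num_samples) * L / num_samples
f_true = 5.0  # Hz, a frequency we plant in the signal (illustrative assumption)
signal = np.sin(2 * np.pi * f_true * t)

spectrum = np.fft.fft(signal)
freqs = np.fft.fftfreq(num_samples, d=L / num_samples)  # bin frequencies in Hz

# Restrict to non-negative bins: a real signal has a symmetric spectrum,
# so the +f and -f peaks have equal magnitude.
half = num_samples // 2
i = np.argmax(np.abs(spectrum[:half]))
print(freqs[i])  # recovers the planted 5.0 Hz
```

`np.fft.fftfreq(N, d)` returns exactly the frequencies $n/(Nd) = n/L$ discussed above, in the unshifted DFT ordering.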
|fast-fourier-transform|
1
How to solve a recurrence relation with full history $ T(n) = n + \sum_{i=1}^{n-1} \frac{2T(i)}{3i}$?
I have to solve a recurrence relation with full history given as: $$ T(n) = n + \sum_{i=1}^{n-1} \frac{2T(i)}{3i} \tag{1}$$ I tried to solve it using the method given here and here and expanded it like this: $$ T(n+1) = (n+1) + \sum_{i=1}^{n} \frac{2T(i)}{3i} \tag{2}$$ Subtracting $(1)$ from $(2)$ gives: $$ T(n+1) - T(n) = 1 + \frac{2T(n)}{3n} $$ which can be simplified as $$T(n+1) = 1 + T(n)\left(1 + \frac{2}{3n}\right)$$ I am lost after that. Please help.
(Some result from WolframAlpha, acknowledged later in this answer) Inspired by the closed form given by WolframAlpha, from the given recurrence and the particular solution $T(n) = 3n$ mentioned by @Sil: $$\begin{align*} && T(n+1) &= 1 + T(n) \left(1+\frac{2}{3n}\right)\\ &(-)& 3(n+1) &= 1 + 3n \left(1+\frac{2}{3n}\right)\\ &\implies& T(n+1) - 3(n+1) &= \left[T(n) - 3n\right] \left(1+\frac{2}{3n}\right)\\ \end{align*}$$ So for the $T(n)$ term, $$\begin{align*} T(n) - 3n &= \left[T(n-1) - 3(n-1)\right] \left(1+\frac{2}{3(n-1)}\right)\\ &\vdots\\ &= \left[T(1) - 3\cdot1\right]\prod_{i=1}^{n-1}\left(1+\frac{2}{3i}\right)\\ \end{align*}$$ I assume that, according to $(1)$ in the question, $$\begin{align*} T(1) &= 1+(\text{Empty sum}) = 1\\ T(n) &= 3n - 2 \prod_{i=1}^{n-1}\left(1+\frac{2}{3i}\right) \end{align*}$$ Based on the last recurrence, WolframAlpha gives a closed form in terms of some fractional $\Gamma$ : $$T(n) = 3n - \frac{2\Gamma\left(n+\frac23\right)}{\Gamma\left(\frac53\right)\
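One can check the derived closed form against the original full-history recurrence directly (a short sketch; the cutoff $n=20$ is an arbitrary choice):

```python
from math import prod

def T_rec(n):
    # T(1) = 1 and T(m) = m + sum_{i=1}^{m-1} 2 T(i) / (3 i), evaluated directly.
    T = [0.0, 1.0]
    for m in range(2, n + 1):
        T.append(m + sum(2 * T[i] / (3 * i) for i in range(1, m)))
    return T[n]

def T_closed(n):
    # The closed form above: T(n) = 3n - 2 * prod_{i=1}^{n-1} (1 + 2/(3i)).
    return 3 * n - 2 * prod(1 + 2 / (3 * i) for i in range(1, n))

print(T_rec(20), T_closed(20))
```

The two evaluations agree to floating-point precision for every $n$ tested.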
|recurrence-relations|computational-complexity|
1
How to calculate the Moment of Inertia for a ball(radius=R) with cylinder?
Assume Line $l$ is the axis of the ball, and the radius of the ball is $R$ . I want to calculate the Moment of Inertia for the ball with cylinder: I take cylinder in the ball as is shown in the img: Assume the angle noted on the img is $\theta$ , ranging from $0$ to $\dfrac{\pi}{2}$ . For each $\theta$ , the area of the surface for the cylinder is $2\pi R\sin{\theta}\cdot 2R\cos{\theta}$ . And then the moment of inertia for the cylinder is $\rho(2\pi R\sin{\theta}\cdot 2R\cos{\theta})\cdot (R\sin{\theta})^2$ Finally I have $I = \int_0^{\frac{\pi}{2}} \rho(2\pi R\sin{\theta}\cdot 2R\cos{\theta})\cdot (R\sin{\theta})^2 d\theta$ . This is not correct, for that I know the $I$ should be $\frac{8\pi}{15}\rho R^5$ . It contains a $R^5$ , not the $R^4$ in my integration. Where have I gone wrong in my method? And how to correct this method with 'cylinder'?
Your moment of inertia for a thin-walled cylindrical tube is incorrect: you are forgetting the differential thickness of the tube wall. If the tube has radius $r$ , height (or length) $L$ , and wall thickness $dr$ , then its moment of inertia about its symmetry axis is $dI_{zz} = 2 \pi \rho r^3 L dr$ . Here, $r = R \sin \theta$ as you've correctly identified, which also implies $dr = R \cos \theta d\theta$ . The height (or length) of the tube is $L = 2 R \cos \theta$ . Therefore, $$dI_{zz} = 2 \pi \rho (R \sin \theta)^3 (2 R \cos \theta) (R \cos \theta d\theta)$$ $$= 4 \pi \rho R^5 \sin^3 \theta \cos^2 \theta d\theta$$ Integrating, $$I_{zz} = \int_{\theta = 0}^{\theta = \frac{\pi}{2}} dI_{zz} = \int_0^\frac{\pi}{2} 4 \pi \rho R^5 \sin^3 \theta \cos^2 \theta d\theta$$ $$= \rho R^5 \left[ \int_0^\frac{\pi}{2} 4 \pi \sin^3 \theta \cos^2 \theta d\theta \right]$$ The integral evaluates to $\frac{8 \pi}{15}$ , as you can check.
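The angular integral can also be confirmed exactly (a small sketch, not part of the original answer): substituting $u=\cos\theta$ turns it into a rational integral.

```python
from fractions import Fraction
import math

# With u = cos(theta): integral_0^{pi/2} sin^3(t) cos^2(t) dt
#   = integral_0^1 (1 - u^2) u^2 du = 1/3 - 1/5 = 2/15.
inner = Fraction(1, 3) - Fraction(1, 5)
coeff = 4 * math.pi * inner          # coefficient of rho * R^5 in I_zz
print(inner, coeff, 8 * math.pi / 15)
```

This reproduces $I_{zz} = \frac{8\pi}{15}\rho R^5$, with the $R^5$ now appearing because of the extra factor $dr = R\cos\theta\, d\theta$.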
|calculus|integration|physics|
1
why is the co-prime part not mentioned in the definition of the rational number?
Proving $\sqrt{2}$ irrational is a quite popular exercise in precalculus courses, but if we look closely at the definition introduced at the beginning of the course, it never mentions that the $p$ and $q$ in a rational number $\frac{p}{q}$ have to be coprime. My question is why this part is considered understood. By that definition alone $\frac{2}{4}$ is rational, but if we use the same idea that we use in proving $\sqrt{2}$ irrational, any fraction whose numerator and denominator are not coprime will not be a rational number. I find it quite absurd. Edit : the second question should have been why, while proving $\sqrt{2}$ irrational, we are just satisfied with the fact that $\exists d \in \mathbb{Q}(d = \gcd(p,q))$ , so it's irrational - by looking at it this way $\frac{2}{4}$ is also irrational, because $d = 2$ . Speaking of the definition, I'm not saying the co-prime part is necessary, nor am I questioning anything, but in proving $\sqrt{2}$ irrational all we ar
I think, after checking your bio, that I see where your question comes from. I presume you've not studied set theory. About 2-3 hours of study will make what I say next clear in a formal sense if you pick up Halmos's Naive Set Theory. What @StevenStadnicki is referring to as an equivalence class is exactly what you were taught is a rational: a number $\frac{p}{q}$ where $p,q$ are integers. We all know that $\frac{1}{2}=\frac{2}{4}=\frac{3}{6}$ etc. In set theoretic terms, we define an equivalence relation where any pairs of numbers $(a,b)$ and $(p,q)$ that satisfy $aq=pb$ are considered equivalent. So, $(1,2)$ and $(2,4)$ are such pairs, as are $(1,2)$ and $(3,6)$ . You are taught to recognize all rationals (i.e., pairs) that reduce to the same coprime form as "the same", which is also what the equivalence relation just mentioned formalizes within the "canonical" definition of the rationals. We choose as a convention to represent the rational with the least possible sum, the numbers we
|algebra-precalculus|proof-writing|real-numbers|education|
0
Number of ways to sit six people at a circular table such that two of the people cannot sit immediately beside each other
Problem: Six people are to sit at a circular table. Two of the people are not to sit immediately beside each other. Find the number of ways that the six people can be seated I'm really stuck on this question. The correct answer is $72$ , but my solutions seem to be incorrect (I get $432$ ways as an answer). I've tried to solve the problem in two ways, and these are my workings so far Method 1: Call the two restricted people $p_1$ and $p_2$ There are 6 choices for the first person $p_1$ to be seated, so hence $6$ ways. The second person $p_2$ cannot sit immediately beside $p_1$ , so there are $5-2 = 3$ ways the second person $p_2$ can sit. After sitting $p_1$ and $p_2$ , the remaining people can be seated in $4!$ ways. So the number of ways with $p_1$ not next to $p_2$ = $6 \cdot3\cdot4! = 432$ ways Method 2: With no restrictions 6 people can be seated in $6!$ ways. Again, there are 6 choices for the first person $p_1$ to be seated, so hence $6$ ways. There are 2 ways for $p_2$ to sit n
The original problem could be more clearly worded, I think. When you say in Method 1 that there are 6 possible choices for person 1, this is "true", but only if you actually care about the seats as being distinct seats, as opposed to "arrangements of people" being what we care about. The key difference is that in this problem where there's a "circular table" in which a rotation of the table doesn't change the "arrangement of people". For example, let's say that there are 6 chairs placed around a clock, just at the even numbers. So a chair at 2:00, at 4:00, ..., at 12:00. Suppose that you've placed all your people around the clock. Great! There's an arrangement! And your current solution keeps these clock numbers in mind, and this is why you get the answer you do. But now rotate the clock so that 2:00 becomes 4:00, that 4:00 becomes 6:00, etc. The arrangement you found has now changed as long as we keep track of these hour markings. In fact, there are $6$ ways for you to "rotate" the cl
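The count can be verified by brute force (a short sketch, not from the original answer: seats are first treated as labeled, reproducing the asker's $432$, then rotations are divided out):

```python
from itertools import permutations

people = ["p1", "p2", "A", "B", "C", "D"]
count = 0
for seating in permutations(people):
    i, j = seating.index("p1"), seating.index("p2")
    # Seats are cyclic: seat 0 is next to seat 5.
    if (i - j) % 6 not in (1, 5):
        count += 1

# Each circular arrangement is counted once per rotation (6 times).
print(count, count // 6)  # 432 labeled seatings, 72 circular arrangements
```

This shows the asker's $432$ and the book's $72$ differ by exactly the factor of $6$ rotations.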
|combinatorics|
1
jointly normal with constant
I am trying to calculate the mean for $Z = a_1 X_1 + a_2 X_2$ . As a first step, I want to find the pdf for $(a_1 X_1, a_2 X_2)$ (I plan to convolve). $X_1$ and $X_2$ are jointly normal. How can I do this? (Specifically, how does the pdf of $(X_1, X_2)$ relate to that of $(a_1 X_1, a_2 X_2)$ ?)
If $X_1,X_2$ are jointly normal then $a_1X_1,a_2X_2$ are also jointly normal. So all you need is the mean vector and variance-covariance matrix of $(a_1X_1,a_2X_2)$ . The mean vector is $(a_1EX_1, a_2EX_2)$ . The variances are $a_1^{2}Var (X_1)$ and $a_2^{2}Var (X_2)$ and the covariance is $a_1a_2Cov (X_1,X_2)$ .
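A quick Monte Carlo sketch confirming how the mean vector and covariance matrix transform (the specific $\mu$, $\Sigma$, $a_1$, $a_2$ below are illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                      # illustrative mean vector
Sigma = np.array([[2.0, 0.8], [0.8, 1.5]])      # illustrative covariance
a1, a2 = 3.0, -0.5                              # illustrative scalars

X = rng.multivariate_normal(mu, Sigma, size=500_000)
Y = X * np.array([a1, a2])   # (a1 X1, a2 X2), still jointly normal

print(Y.mean(axis=0))  # close to (a1 mu1, a2 mu2) = (3, 1)
print(np.cov(Y.T))     # close to [[a1^2 S11, a1 a2 S12], [a1 a2 S12, a2^2 S22]]
```

The sample moments match $(a_1\mu_1, a_2\mu_2)$ and the scaled covariance entries $a_1^2\,\mathrm{Var}(X_1)$, $a_2^2\,\mathrm{Var}(X_2)$, $a_1a_2\,\mathrm{Cov}(X_1,X_2)$.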
|probability|
0
How to count all possible $2^n$ subsets using the fundamental principle of counting?
This is NOT a homework question. How to use the fundamental principle of counting to derive the result that the total number of subsets of a set $A$ is $2^n$ ? $A$ has $n$ distinct elements. I perceive that for building a subset from $A$ , I have two choices for each element and $2^n$ choices for all $n$ elements considered together(this will contain all possible subsets of $A$ ). However, I want to write this down formally in a simple manner to prove the $2^n$ result that we have in elementary set theory. What I want to use: Fundamental principle of counting
It is exactly what you said. Let $A=\{a_1,\dots,a_n\}$ , then there are $2$ choices -- whether $a_1$ is in the subset or not, then $2$ for whether $a_2$ is there or not, and so on for each $a_i$ , $1\leq i\le n$ . By the fundamental principle of counting, the total number of subsets is precisely $$\underbrace{2\times 2\times \dots \times 2}_{n\ \mathrm{times}}=2^n$$ Hope this helps. :)
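A brute-force sketch confirming the count with `itertools` (the range of $n$ tested is an arbitrary choice):

```python
from itertools import combinations

def all_subsets(A):
    # Enumerate every subset of A, grouped by size r = 0, 1, ..., |A|.
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

for n in range(8):
    assert len(all_subsets(range(n))) == 2 ** n
print("2^n subsets confirmed for n = 0..7")
```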
|elementary-set-theory|combinations|
0
Given a random walk with shifted exponential increment, how to calculate the expected sum distance to the origin?
Suppose $\{X_i, i=1,2,\ldots\}$ are i.i.d. random variables with exponential distribution and mean $\mu$ . Consider a random walk as follows. $$S_1=0$$ $$S_{i+1}= \begin{cases} S_i+X_i-k, & \text{if $S_i \ge 0$,} \\ X_i-k, & \text{if $S_i < 0$,} \end{cases}$$ where $k$ is a given constant greater than 0. How to calculate the value of $\mu$ such that the expected sum distance from $S_i$ to the point 0 for the first $R$ steps is minimized, i.e., $\arg \min_\mu E[\sum_{i=1}^{R}|S_i|]$ . Thank you. So far, I only have an approximate solution for two extreme cases, i.e., when $\mu \gg k$ and when $\mu \ll k$ . However, I still have no idea about how to calculate the expected sum distance when the values of $\mu$ and $k$ are comparable, e.g., when $\mu=60$ and $k=100$ . The hitting time of a similar random walk has been asked before here .
Let me instead consider the problem in the limit $R\to\infty$ , which is essentially the problem of minimizing $\lim_{n\to\infty} \mathbf{E}[|S_n|]$ as a function of $\mu$ . When $\mu \geq k$ , $(S_n)$ behaves like a reflected random walk, hence $\lim_{n\to\infty} \mathbf{E}[|S_n|] = \infty$ . In light of this, let us consider the case $\mu < k$ . Then for arbitrary initial distribution, $(S_n)$ converges in distribution to some $S_{\infty}$ whose law does not depend on the initial distribution. (This has to do with the fact that $S_n < 0$ for some $n$ with probability one, and when this happens the tail behaves exactly the same as $(S_n)$ started at $0$ .) Moreover, this $S_{\infty}$ solves the distributional identity $$ S_{\infty} \stackrel{d}= \max\{0, S_{\infty}\} + X - k, \tag{*}$$ where $X$ is an exponential random variable with mean $\mu$ independent of $S_{\infty}$ . We claim: Claim. $S_{\infty} \stackrel{d}= -k + Y$ , where $Y$ has an exponential distribution with mean $\beta$ solving t
|stochastic-processes|random-walk|
1
How to count all possible $2^n$ subsets using the fundamental principle of counting?
This is NOT a homework question. How to use the fundamental principle of counting to derive the result that the total number of subsets of a set $A$ is $2^n$ ? $A$ has $n$ distinct elements. I perceive that for building a subset from $A$ , I have two choices for each element and $2^n$ choices for all $n$ elements considered together(this will contain all possible subsets of $A$ ). However, I want to write this down formally in a simple manner to prove the $2^n$ result that we have in elementary set theory. What I want to use: Fundamental principle of counting
Given a set $A = \{ a_1, a_2, ..., a_n \}$ of $n$ distinct elements, we may use the fundamental principle of counting to enumerate the total number of subsets of $A$ as follows: For every element $a_i \in A$ , we must consider two possibilities, that is, whether $a_i$ is or is not in a given subset. In other words, we must consider whether or not $a_1$ is in a given subset. Then, we must consider whether or not $a_2$ is in a given subset. Then, we must consider whether or not $a_3$ is in a given subset... and so on and so forth. We must consider these two possibilities for each of the $n$ elements in $A$ , and there are always two such possibilities. Now, consider that each sequence of possible outcomes corresponds to a subset and vice versa. Hence, we may count the number of subsets by multiplying the number of possibilities at each step for each of the $n$ steps. In other words, we multiply $2$ by itself $n$ times. Hence, there are $2^n$ subsets.
|elementary-set-theory|combinations|
1
Why does there exist unique numbers $\varphi_1(v), \ldots, \varphi_m(v)$ such that $ Tv = \varphi_1(v)w_1 + \cdots + \varphi_m(v)w_m$?
Exercise. Suppose $T \in \mathcal{L}(V,W)$ and $w_1,\ldots,w_m$ is a basis of $\text{range} \ T$ . Hence for each $v \in V$ , there exist unique numbers $\varphi_1(v), \ldots, \varphi_m(v)$ such that $$ Tv = \varphi_1(v)w_1 + \cdots + \varphi_m(v)w_m, $$ thus defining functions $\varphi_1, \ldots, \varphi_m$ from $V$ to $\mathbb{F}$ . Show that each of the functions $\varphi_1, \ldots, \varphi_m$ is a linear functional on $V$ . Source. Linear Algebra Done Right, Sheldon Axler, 4th Edition, Exercise 5. in Section 3F. My Question. I was able to show they're linear functionals by using the linearity of $T$ . But my question has to do with the assumption taken in the exercise, that there does indeed exist unique numbers $\varphi_1(v), \ldots, \varphi_m(v)$ such that $Tv = \varphi_1(v)w_1 + \cdots + \varphi_m(v)w_m$ . I understand why they're unique is because $w_1, \ldots, w_m$ is a basis, but how do we know they are functions of $v$ ? Would that be easy to show? I might be forgetting an o
If you agree that for every $v\in V$ , there are unique scalars $a_1$ , $a_2$ , $\dots$ , $a_m$ such that $$Tv=a_1w_1+a_2w_2+\dots+a_mw_m$$ then we can define for each $1\leq i\le m$ , a function $\phi_i\colon V\to \mathbb F$ as follows. Remember, what a function does is assign a unique value in the codomain to each point in the domain. Take $v\in V$ , then find the unique scalars $a_i$ 's such that the above equation holds and then define $\phi_i(v)=a_i\in\mathbb F$ . Hence, the function $\phi_i$ assigns a unique value in $\mathbb F$ to each point in $V$ , and by definition of these functions, for any $v$ in $V$ , we can write $$Tv=\phi_1(v)w_1+\phi_2(v)w_2+\dots+\phi_m(v)w_m$$ Hope this helps. :)
|linear-algebra|linear-transformations|dual-spaces|
1
why is the co-prime part not mentioned in the definition of the rational number?
Proving $\sqrt{2}$ irrational is a quite popular exercise in precalculus courses, but if we look closely at the definition introduced at the beginning of the course, it never mentions that the $p$ and $q$ in a rational number $\frac{p}{q}$ have to be coprime. My question is why this part is considered understood. By that definition alone $\frac{2}{4}$ is rational, but if we use the same idea that we use in proving $\sqrt{2}$ irrational, any fraction whose numerator and denominator are not coprime will not be a rational number. I find it quite absurd. Edit : the second question should have been why, while proving $\sqrt{2}$ irrational, we are just satisfied with the fact that $\exists d \in \mathbb{Q}(d = \gcd(p,q))$ , so it's irrational - by looking at it this way $\frac{2}{4}$ is also irrational, because $d = 2$ . Speaking of the definition, I'm not saying the co-prime part is necessary, nor am I questioning anything, but in proving $\sqrt{2}$ irrational all we ar
The definition of a rational number is a ratio between two integers. There is no requirement that the integers be coprime (else $\frac 24$ wouldn't be rational when it is). Notice that from the definition alone there is no statement that all rational numbers can be written as a ratio of coprime numbers, nor that if they can, the choice of numbers is unique. If the numbers have factors in common, they can be reduced out of the ratio, but what if they have other factors in common? We can't argue to factor out the greatest common factor because we have no reason to assume the integers have a greatest common factor. However many (maybe most?) will take that for granted or will prove it before they get around to proving there is no rational square root of two. Most texts I've seen do choose to assume $p$ and $q$ are relatively prime, so I have no idea why you say "never". However you also said "have to be relatively prime". They don't have to be. But whatever you choose then $(\frac pq)^2 =2\implies
|algebra-precalculus|proof-writing|real-numbers|education|
1
Is there any solution except integration by parts for $\int_0^\infty x^2 e^{-x} \sin(\alpha x) dx$?
I want to calculate the Fourier sine integral and the Fourier cosine integral of the function $f(x)=x^2e^{-x}$. So I have to calculate: $$ \int_0^\infty x^2e^{-x}\sin(\alpha x)dx \qquad , \qquad \int_0^\infty x^2e^{-x}\cos(\alpha x)dx $$ The integration by parts method is really complex for these two integrals! Is there any better solution to calculate them?!
In general, for any real numbers $\alpha$ and $\beta>0$ \begin{aligned} \int_0^{\infty} x^\beta e^{-x} \sin (\alpha x) d x & =\Im \int_0^{\infty} x^\beta e^{-x} e^{\alpha x i} d x \\ & = \Im \int_0^{\infty} x^\beta e^{-(1-\alpha i) x} d x \\ & = \Im\left[\frac{\Gamma(\beta+1)}{(1-\alpha i)^{\beta+1}}\right] \\ & =\Gamma(\beta+1) \Im \left[ \frac{(1+\alpha i)^{\beta+1}}{\left(1+\alpha^2\right)^{\beta+1}} \right]\\ & =\frac{\Gamma(\beta+1)}{\left(1+\alpha^2\right)^{\beta+1}} \Im\left[(1+\alpha i)^{\beta+1}\right] \\ & \end{aligned} As a bonus, $$\int_0^{\infty} x^\beta e^{-x} \cos (\alpha x) d x= \frac{\Gamma(\beta+1)}{\left(1+\alpha^2\right)^{\beta+1}} \Re\left[(1+\alpha i)^{\beta+1}\right] \tag*{} $$ In particular, $\beta=2$ , $$\int_0^{\infty} x^2 e^{-x} \sin (\alpha x) d x= \frac{\Gamma(3)}{\left(1+\alpha^2\right)^{3}} \Im\left[(1+\alpha i)^{3}\right]= \frac{6 \alpha-2 \alpha^3}{\left(1+\alpha^2\right)^3} $$ $$\int_0^{\infty} x^2e^{-x} \cos (\alpha x) d x = \frac{2-6 \alpha^2}{\left(
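The $\beta=2$ sine result can be checked numerically (a small sketch, not part of the original answer; the hand-rolled Simpson integrator, the truncation at $x=60$, and the choice $\alpha=1.3$ are illustrative assumptions):

```python
import math

def simpson(f, a, b, n=60_000):
    # Composite Simpson's rule on [a, b] with an even number n of subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

alpha = 1.3  # illustrative choice
# The factor e^{-x} makes the tail beyond x = 60 negligible.
numeric = simpson(lambda x: x ** 2 * math.exp(-x) * math.sin(alpha * x), 0.0, 60.0)
closed = (6 * alpha - 2 * alpha ** 3) / (1 + alpha ** 2) ** 3
print(numeric, closed)
```

The quadrature matches $\frac{6\alpha - 2\alpha^3}{(1+\alpha^2)^3}$ to high precision.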
|calculus|integration|partial-differential-equations|self-learning|
0
Can closed curves be assumed unit speed?
Andrew Pressley's Elementary Differential Geometry textbook claims on page $21$ that "...we can always assume that a closed curve is unit-speed and that its period is equal to its length." In the previous paragraph, Pressley derives this assuming the curve $\gamma$ is regular. This is obvious since regularity is equivalent to the existence of a unit speed parametrization. However, the summary statement itself here excludes the "regularity" assumption. Later on in Chapter $2$ , Pressley wants to show the total signed curvature of a closed plane curve is an integer multiple of $2\pi$ . However, he starts out the proof by seemingly adhering to the principle quoted above and assumes the closed plane curve is unit speed. No regularity assumption was made. Am I going crazy here, or am I just being pedantic and is regularity just being implicitly assumed both in this proof and the quoted statement above?
No, this is false if the curve is not regular. For example, the cuspidal cubic, which can be smoothly parametrized as $\gamma(t) = (t^2, t^3)$ , cannot be smoothly parametrized with unit speed. You should assume regularity for this result.
|differential-geometry|plane-curves|
0
exercise 1.7.5 of Basic Algebra I Jacobson
Let $H$ and $K$ be two subgroups of a group $G$ . Show that the set of maps $x \rightarrow hxk$ , $h \in H, k \in K$ , is a group of transformations of the set $G$ . Show that the orbit of $x$ relative to this group is the set $HxK = \{\, hxk \mid h \in H, k \in K \,\}$ . This is called the double coset of $x$ relative to the pair $(H, K)$ . Show that if $G$ is finite then $|HxK|=|H|[K:x^{-1}Hx\cap K]$ . By the way, I am quite confused about the proof: in this book, it seems the author just shows the proof, and I do not see how he gets the idea.
The first two parts are quite straightforward. Let me elaborate a bit on the last statement. Proof : Since $K$ is a subgroup, it is easy to verify that the conjugate $xKx^{-1}$ is also a subgroup. The intersection of the two subgroups $I:=H\cap xKx^{-1}$ is a subgroup of the group $H$ . The number of left cosets of $I$ in $H$ is exactly $|H:I|=\frac{|H|}{|I|}$ . Since $HxK=\bigcup_{h\in H}hxK$ and $\forall h\in H,\ |hxK|=|K|$ , the cardinality of $HxK$ is a multiple of $|K|$ . Let's say $|HxK|=r|K|$ . Finally, we show that $r=|H:I|$ via the following bijective map. $$f:\{hI:h\in H\}\to\{hxK:h\in H\}$$ $$hI\mapsto f(hI)=hxK$$ $f$ is well-defined and is injective (hence bijective) since for any $h,h^\prime\in H$ , $$hI=h^\prime I\Leftrightarrow h^{-1}h^\prime \in I\Leftrightarrow x^{-1}h^{-1}h^\prime x\in K\Leftrightarrow hxK=h^\prime xK$$ Therefore, $$|HxK|=|H:I||K|=\frac{|H||K|}{|H\cap xKx^{-1}|}$$
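The counting identity can be verified by brute force in a small group (a sketch of my own, not from the book, using $S_3$ with the illustrative choices $H$ generated by the transposition (0 1) and $K$ by (1 2)):

```python
from itertools import permutations

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S3
H = [(0, 1, 2), (1, 0, 2)]         # subgroup generated by the transposition (0 1)
K = [(0, 1, 2), (0, 2, 1)]         # subgroup generated by the transposition (1 2)

for x in G:
    HxK = {compose(compose(h, x), k) for h in H for k in K}
    conj = {compose(compose(inverse(x), h), x) for h in H}   # x^{-1} H x
    index = len(K) // len(conj & set(K))   # [K : x^{-1}Hx intersect K]
    assert len(HxK) == len(H) * index
print("verified |HxK| = |H| [K : x^{-1}Hx cap K] for all x in S3")
```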
|abstract-algebra|
0
Dedekind's proof that square of rational can't be integer
I'm reading Dedekind's work where he defines irrational numbers using cuts. I'm stuck where he proves (p. 13) that there exist infinitely many cuts not produced by rational numbers. His proof includes a proof of the "lemma" that there is no rational (non-integer) number whose square is an integer. Let me highlight one very important moment. I'M NOT INTERESTED IN ANY OTHER PROOF OF THIS "LEMMA", I WANTED TO UNDERSTAND ONLY DEDEKIND'S PROOF AND NO OTHER ONE. Let $\lambda \in \mathbb{Z}_+$ , $\sqrt{D} \notin \mathbb{Z}$ : $$ \lambda^2 < D < (\lambda+1)^2. $$ Let's take $r \in \mathbb{R}_+$ and if $r^2 > D$ then $r$ belongs to $A_2$ and if $r^2 < D$ then it belongs to $A_1$ . Dedekind says that $\sqrt{D} \notin \mathbb{Q}$ and proves it by contradiction. Assume that $\sqrt{D} \in \mathbb{Q}$ , then there are such $t$ and $u$ in $\mathbb{Z}_+$ that: $$ t^2-Du^2=0. $$ Note: the remainder of $\frac t u$ isn't $0$ , because otherwise $\sqrt{D} \in \mathbb{Z}$ . I'm not quite sure how the existence of such numbers ( $t$ and $u
So, I had consulted my lecturer* and he said that I'm right, but there was no reason to write so much; I could stop at $t^2=\frac{m^2}{n^2}u^2$ because we can find $u\in\mathbb{Z}_+$ such that $n^2|u^2$ and hence we can solve the equation $t^2 = q^2$ , where $q\in\mathbb{Z}_+$ . If we can solve this equation then such numbers ( $u$ and $t$ ) exist. * that wasn't my homework, I already have a master's degree** ** And I have low self-esteem too, heh... UPD: Read the comments below the question. Ethan Bolker linked an equivalent theorem there, but it's not Dedekind's proof and so that wasn't useful for me, but maybe it'll be the one for you. And there is also a reason why we can't prove the existence of such numbers by vacuous truth.
|elementary-number-theory|
1
A question about switching the order of $\int$ and $\lim$ for a series of complex functions
When I took undergraduate complex analysis, the instructor was trying to prove an inequality, and he used a technique as below: $$\int \lim_{n\rightarrow \infty} f_n dz= \lim_{n\rightarrow \infty} \int f_n dz \cdots (*)$$ And he claimed that we can use this, because all $f_n$ s are measurable and $f_n$ converges uniformly to some complex function $f$ . Can someone help to explain why from measurability and uniform convergence of $f_n$ s we have $(*)$ , or show me theorems that I need to study deeply? Thanks!
As geetha290krm points out, there are details that are missing that are important. Here is my stab at a version of this that you might see in an undergraduate complex analysis course, but in any case you should edit your question to add these details for posterity's sake. Proposition. Let $\gamma:[0,1] \to \mathbb C$ be a smooth, rectifiable contour of length $\ell$ and $\{f_n(z)\}_{n = 1}^\infty$ be a sequence of measurable functions which are integrable on $\gamma$ . Furthermore, suppose that there exists a measurable function $f(z)$ which is integrable on $\gamma$ such that $$\lim_{n \to \infty}f_n(z) = f(z)$$ uniformly on $\gamma$ , then $$ \lim_{n \to \infty} \int_\gamma f_n(z) \mathrm d z = \int_\gamma f(z) \mathrm d z.$$ Proof. Uniform convergence means: $\forall \epsilon >0$ $\exists N>0$ such that for all $n>N$ and for all $z \in \gamma$ , $$|f_n(z) - f(z)| < \epsilon.$$ Taking $n>N$ and using the triangle inequality we have $$\left| \int_\gamma f_n(z) \mathrm d z - \int_\gamma f(z) \mathrm
|calculus|integration|complex-analysis|limits|measure-theory|
1
Proving martingale properties
I'm new to stochastic processes and have problems understanding martingales, conditional probability, $\sigma$ -algebras etc. I have two proofs that I'm not sure how to handle. Problem 1 . Prove that an integrable stochastic process $\{X(t),\mathcal{F}_t, t\in \mathbb{T}\}$ is a martingale if and only if for any bounded predictable process $\{\xi(t),\mathcal{F}_t, t\in \mathbb{T}\setminus\{0\}\}$ we have that $E\left[\sum_{k=1}^n\xi(k)\Delta X(k)\right]=0$ . Attempt on Problem 1 " $\Rightarrow$ " " $\Leftarrow$ " Am I correct? I'm not quite sure about the last step. Problem 2 . Prove the equivalence of the following statements: $\{X(t),\mathcal{F}_t, t\in \mathbb{T}\}$ is a martingale; $X(t)=E\left[X(T)|\mathcal{F}_t\right]$ , $t\in \mathbb{T}$ ; $E\left[\Delta X(t+1)|\mathcal{F}_t\right]=0$ , $t=0,1,\ldots,T-1$ . Here $\Delta X(k)=X(k)-X(k-1)$ . Attempt on Problem 2 $\Rightarrow$ 2. Since $\{X(t),\mathcal{F}_t, t\in \mathbb{T}\}$ is a martingale, then $E[X(t)|\mathcal{F}_s]=X(s)$ . Ju
Direction $\Rightarrow$ in Problem 1 looks good. The other direction has a bit of a gap as you noticed. I would take for fixed $k$ $$ \xi(k):=1_{\textstyle\{\mathbb E[X(k)|{\cal F}_{k-1}]> X(k-1)\}}\,. $$ and $\xi:\equiv 0$ for all other $k\,.$ Then $$ \mathbb E\Big[\xi(k)\Big(X(k)-X(k-1)\Big)\Big]=0 $$ implies $$ \mathbb E\Big[\xi(k)\Big(E[X(k)|{\cal F}_{k-1}]-X(k-1)\Big)\Big]=0 $$ and that implies $E[X(k)|{\cal F}_{k-1}]\le X(k-1)$ almost surely. The other inequality is shown similarly.
|probability-theory|stochastic-processes|martingales|stochastic-analysis|
1
Proving a multivariate normal distribution gets the maximum entropy when mean and covariance are given
I'm working on a homework question. The first part was: Given an unbounded one dimensional continuous random variable: $X\in\left(-\infty,\infty\right)$ , that satisfies: $\left\langle X\right\rangle =\mu,\;\left\langle \left(X-\mu\right)^{2}\right\rangle =\sigma^{2}$ Show that the distribution that maximizes entropy is Gaussian $X\sim N\left(\mu,\sigma^{2}\right)$ . I've solved this using Lagrange multipliers method. The next part is proving the same holds in the case of multivariate distributions. Generalize the previous part to a $k$ dimensional variable $X$ with given expectation value $\vec{\mu}$ and covariance matrix $\Sigma$ . I started the same way when I define the proper functional I wish to optimize: $$ F\left[f_{X}\left(\overline{x}\right)\right]=H\left(X\right)+\lambda\left(1-\intop_{\mathbb{R}^{k}}f_{X}\left(\overline{x}\right)d\overline{x}\right)+\sum_{i\in\left[k\right]}\varGamma_{i}\left(\mu_{i}-\intop_{\mathbb{R}^{k}}\overline{x}_{i}f_{X}\left(\overline{x}\right)d\ove
The proof of this can be made very simple. You want to minimize the following quantity \begin{align} H[\rho] = \int_{\mathbb{R}^d} \rho \log \rho \, \mathrm{d}x \, , \end{align} where $\rho$ is the p.d.f/law of the random variable $X$ , subject to the constraint that $\rho$ has mean $\mu$ and covariance $\Sigma$ . Denote by $N_{\Sigma,\mu}$ the corresponding Gaussian p.d.f. Let $\rho$ be a p.d.f which satisfies this constraint. We then have that \begin{align} H[\rho]= & \int_{\mathbb{R}^d} \rho \log \frac{\rho}{N_{\Sigma,\mu}} \, \mathrm{d}x +\int \rho \log N_{\Sigma,\mu} \, \mathrm{d}x \\ =&\int_{\mathbb{R}^d} \rho \log \frac{\rho}{N_{\Sigma,\mu}} \, \mathrm{d}x -\frac{d}{2} -\frac 12 \log (2\pi)^d|\Sigma| \qquad (*) \,. \end{align} The first term in (*) is the relative entropy between $ \rho$ and $N_{\Sigma,\mu}$ and the other terms are just fixed constants. The relative entropy (or what someone in the comments called the KL divergence) is always non-negative, so if you could make it
|real-analysis|probability|statistics|optimization|information-theory|
0
How to prove this inequality involving a small number?
Let $\varepsilon$ be a small enough positive number, we want to show: $\frac{(0.5-\varepsilon)^2}{(0.5-\varepsilon)^2+(0.5+\varepsilon)^2}\geq 0.5-(4-\varepsilon)\varepsilon$ . I tested it numerically and did not find any contradiction. My trial is as follows: $0.5-(4-\varepsilon)\varepsilon=\frac{0.5-(4-\varepsilon)\varepsilon}{0.5-(4-\varepsilon)\varepsilon+0.5+(4-\varepsilon)\varepsilon}$ , and try to prove $(0.5-\varepsilon)^2\geq 0.5-(4-\varepsilon)\varepsilon$ and $(0.5+\varepsilon)^2 \leq 0.5+(4-\varepsilon)\varepsilon$ . But the first auxiliary inequality does not hold.
Define $$ f(\varepsilon)= \frac{(0.5-\varepsilon)^2}{(0.5-\varepsilon)^2+(0.5+\varepsilon)^2}- 0.5+(4-\varepsilon)\varepsilon $$ We have $$ f(0)=0 $$ After some tedious calculation, the derivative is $$ f'(\varepsilon)=\dfrac{-2\varepsilon^5+4\varepsilon^4-\varepsilon^3+\frac{5}{2}\varepsilon^2-\frac{1}{8}\varepsilon+\frac{1}{8}}{\left(\varepsilon^2+\frac{1}{4}\right)^2} $$ so $$ f'(0)>0 $$ Since $f(0)=0$ and $f$ is increasing at $0$ , it follows that $f(\varepsilon)>0$ for all sufficiently small $\varepsilon>0$ .
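For readers who want a quick sanity check, here is a small numerical experiment (not a proof; the function name `f` simply mirrors the definition above):

```python
# Evaluate f(eps) = (0.5-eps)^2 / ((0.5-eps)^2 + (0.5+eps)^2) - 0.5 + (4-eps)*eps
# over a range of small eps and confirm it stays positive there.

def f(eps: float) -> float:
    a = (0.5 - eps) ** 2
    b = (0.5 + eps) ** 2
    return a / (a + b) - 0.5 + (4 - eps) * eps

for k in range(1, 200):
    eps = k / 1000.0          # eps in (0, 0.2)
    assert f(eps) > 0, eps
print("f(eps) > 0 for all tested eps")
```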
|inequality|
0
Derive apriori estimate for solution of PDE
I have the following PDE: Let $G = (a, b)$ and $J = (0, 1)$ . Consider the Dirichlet problem with general coefficient functions $\alpha(x)$ , $\beta(x)$ , $\gamma(x)$ \begin{equation} \begin{cases} \partial_t u - \partial_x (\alpha(x) \partial_x u) + \beta(x) \partial_x u + \gamma(x) u = f(t, x), \text{ in $J \times G$}\\ u = 0, \text{ on $J \times \partial G$}\\ u|_{t=0} = u_0, \text{ in $G$} \end{cases} \end{equation} where $u_0 \in L^2(G)$ , $f(t,x) \in L^2(J,H^{-1}(G))$ , $\alpha,\gamma \in C(G)$ , and $\beta \in C^1(G)$ such that with some $\bar \alpha > 0$ the bound $\alpha(x) > \bar \alpha$ holds for all $x \in G$ . Additionally assume that $\alpha(x) \in C^1(G)$ , $\gamma(x) > 0$ for all $x \in G$ , and that $f(t,x) \equiv 0$ . Assume that we have a strong solution $u \in C^2(\overline{J \times G})$ . I am supposed to show that $$ u(t,x) \leq \max\{0, \max_{x \in \overline G} u_0(x)\} $$ for any $(t,x) \in J \times G$ . My attempt: Starting from $\partial_t u = \partial_x (\alph
What you need here is the maximum principle for parabolic equations. I suggest you look for the proof for the maximum principle for the heat equation and then see if you are able to apply the proof to your problem. The general idea is to prove the maximum of the solution on the compact set $J\times G$ must be attained on the parabolic boundary $\{t=0\}\cup\{x=a,b\}$ . To show this is true think about what you know about the derivatives of a function at an inner point (or a boundary point at $t=1$ ) of maximal value, and use the equation to get a contradiction. Notice that the proof can work for $\gamma\geq 0$ (rather than strictly positive) as well, but it will be a bit more subtle.
|partial-differential-equations|estimation|
1
Second order central difference formula with difference step sizes
The well-known equation $$u''(x)= \frac{u(x+h) - 2 u(x) + u(x-h)}{h^2}+\mathcal{O}(h^2)$$ is derived by adding $$u(x+h)= u(x)+u'(x)h+\frac{1}{2}u''(x)h^2+\frac{1}{6}u'''(x)h^3+\mathcal{O}(h^4)$$ $$u(x-h)= u(x)-u'(x)h+\frac{1}{2}u''(x)h^2-\frac{1}{6}u'''(x)h^3+\mathcal{O}(h^4)$$ (the odd-order terms cancel). How does it look if we have different step sizes, such as $$u(x+h_1)= u(x)+u'(x)h_1+\frac{1}{2}u''(x)h_1^2+\mathcal{O}(h_1^3)$$ $$u(x-h_2)= u(x)-u'(x)h_2+\frac{1}{2}u''(x)h_2^2+\mathcal{O}(h_2^3)$$ Here, the first-order derivative terms do not cancel each other. How can we define a good approximation for the second derivative in that case? The motivation for this is linear constraints in an optimization problem where I optimize function values $u_i$ at points $x_i$ . The function is supposed to be convex and I enforce this (very roughly) by constraints of the form $u_{i+1} - 2u_i + u_{i-1} > 0$ . But this of course only makes sense if the spacing between the points is the same.
You make them cancel each other, taking the appropriate linear combination $$ h_2u(x+h_1)+h_1u(x-h_2)=... $$ The resulting terms with $u(x)$ need to be transported to the left side afterwards.
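Carrying out this combination explicitly gives the nonuniform central difference $$u''(x) \approx \frac{2\big(h_2\,u(x+h_1) - (h_1+h_2)\,u(x) + h_1\,u(x-h_2)\big)}{h_1 h_2 (h_1+h_2)},$$ which reduces to the usual formula when $h_1 = h_2 = h$ . A minimal sketch (the helper name `second_diff` is my own):

```python
import math

def second_diff(u, x, h1, h2):
    """Approximate u''(x) from u(x+h1), u(x), u(x-h2) with unequal steps.

    The combination h2*u(x+h1) + h1*u(x-h2) cancels the first-derivative
    terms; note the result is only first-order accurate when h1 != h2.
    """
    return 2.0 * (h2 * u(x + h1) - (h1 + h2) * u(x) + h1 * u(x - h2)) / (h1 * h2 * (h1 + h2))

# Check against u(x) = sin(x), whose second derivative is -sin(x), at x = 1.
approx = second_diff(math.sin, 1.0, 1e-3, 2e-3)
assert abs(approx - (-math.sin(1.0))) < 1e-3
```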
|numerical-methods|finite-differences|
1
Must a covering transformation of a universal covering with a fixed point be the identity?
Assume $\pi: \tilde M\rightarrow M$ is a universal covering of a complete Riemannian manifold $M$ , and $f:\tilde M \rightarrow \tilde M$ is a covering transformation. If $f$ has a fixed point, must $f$ be the identity?
As Moishe Kohan comments, the reason lies in a very general property of covering maps. This is the unique lifting property : Given a covering map $p : \tilde X \to X$ and a map $\phi : Y \to X$ living on a connected space $Y$ , then any two lifts $\tilde \phi_1, \tilde \phi_2 : Y \to \tilde X$ of $\phi$ which agree at one point of $Y$ agree on all of $Y$ . This theorem can be found in any textbook dealing with covering maps. See for example Proposition 1.34 in Hatcher's "Algebraic Topology". Now observe that your $f : \tilde M \to \tilde M$ is a lift of $\pi : \tilde M \to M$ . Since $\pi$ is a universal covering, $\tilde M$ is (pathwise) connected.
|algebraic-topology|riemannian-geometry|covering-spaces|
1
Example of Transcendental degree of a polynomial
I'm studying algebraic geometry and I tried to understand th concept of Transcendental degree of a polynomial like the following. Wiki The field of rational functions in $n$ variables $K(x_1, \ldots, x_n)$ (i.e., the field of fractions of the polynomial ring $K[x_1, \ldots, x_n]$ ) is a purely transcendental extension with transcendence degree $n$ over $K$ ; we can, for example, take $\{ x_1, \ldots, x_n \}$ as a transcendence base. I tried to come up with some examples and my professor wrote on the blackboard the following examples which I couldn't quite get; For example, consider the polynomial $ f(x, y) = x^2 + y^2 - 1 $ . This polynomial involves two variables, $ x $ and $ y $ , which are algebraically independent over the real numbers. The transcendence degree of the field extension $ \mathbb{R}(x, y)/\mathbb{R} $ is 2, because $ x $ and $ y $ are algebraically independent and there are no polynomial relations between them over $ \mathbb{R} $ . In contrast, if you have a polynomia
Your misunderstanding seems to come from confusing the polynomial $f(x, y) = x^2 + y^2 - 1$ with the polynomial equation $f(x, y) = 0 \iff x^2 + y^2 = 1$ . There are obvious inclusions $\mathbb{R} \subset \mathbb{R}(x) \subset \mathbb{R}(x, y)$ . Keeping this in mind, $g(x) = x^2 - 2$ is an element of $\mathbb{R}(x)$ but not of $\mathbb{R}$ , and $f(x, y)$ is an element of $\mathbb{R}(x, y)$ but not of $\mathbb{R}(x)$ . Instead of $f$ and $g$ one could look at the polynomials $x^2 + 1$ or $x^2 + y^2 + 2$ , which have no real solutions, but this distinction has nothing to do with the field extensions or their transcendence degrees. ... it is algebraic over $\mathbb{R}$ (since it satisfies the polynomial equation $x^2 − 2 = 0$ ). This looks wrong to me. In the field of rational functions, $x \in \mathbb{R}(x)$ is not algebraic over $\mathbb{R}$ . There is a ring $\mathbb{R}[x]/g(x)$ associated to $g$ in which $x$ (or to be precise the image of $x$ ) is algebraic, and indeed satisfies the
|algebraic-geometry|
1
General term for increasing AP's
Can someone give an easy general method to find the general term of sequences whose differences are in AP? Example: 1, 4, 8, 13, 19, .... The differences are 3, 4, 5, ..., which form an AP. Through vigorous testing and analysis you get the general term $ (n^2 + 3n - 2)/2$ , but how can it be derived mathematically?
Yes dude, you got the right question. There is a method to obtain the general term directly for your question. This method can even be extended to the case where the differences of the differences are in AP, or further still. To find $t_1 + t_2 + t_3 + \cdots +t_n$ : Let $S_n = t_1+t_2+t_3+ \cdots + t_n$ . Then find $ \Delta t_1, \Delta t_2 ,\Delta t_3 $ [first-order differences], $\Delta^2 t_1 ,\Delta^2 t_2 ,\Delta^2 t_3$ [second-order differences], ... and continue the process until all the terms are 0. Note: here $\Delta t_1 = t_2-t_1, \Delta t_2 = t_3-t_2 \space etc.$ and $\Delta^2 t_1 = \Delta t_2 -\Delta t_1, \Delta^2 t_2 = \Delta t_3 - \Delta t_2\space etc.$ Similarly you can find higher-order differences. After obtaining all the required orders of differences you can write $t_n $ as $t_n = \binom{n-1}{0}t_1 + \binom{n-1}{1}\Delta t_1 + \binom{n-1}{2}\Delta^2 t_1+\cdots$ and $S_n = \binom{n}{1}t_1+\binom{n}{2} \Delta t_1 + \binom{n}{3} \Delta^2 t_1+\cdots$ You can apply this process to your question and get the same as you
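The procedure above can be sketched in a few lines (the function name `general_term` is mine; it assumes enough initial terms are supplied for the difference rows to reach zero):

```python
from math import comb

def general_term(first_terms, n):
    """nth term (1-indexed) of a sequence whose iterated differences vanish.

    Builds successive difference rows, keeps the first entry of each row,
    and assembles t_n = sum_k C(n-1, k) * Delta^k t_1.
    """
    row, leads = list(first_terms), []
    while any(row):
        leads.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return sum(comb(n - 1, k) * d for k, d in enumerate(leads))

# Sequence from the question: 1, 4, 8, 13, 19, ... with t_n = (n^2 + 3n - 2)/2.
seq = [1, 4, 8, 13, 19, 26, 34]
for n in range(1, 8):
    assert general_term(seq, n) == (n * n + 3 * n - 2) // 2
```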
|sequences-and-series|arithmetic-progressions|
0
General term for increasing AP's
Can someone give an easy general method to find the general term of sequences whose differences are in AP? Example: 1, 4, 8, 13, 19, .... The differences are 3, 4, 5, ..., which form an AP. Through vigorous testing and analysis you get the general term $ (n^2 + 3n - 2)/2$ , but how can it be derived mathematically?
Let us first closely examine the sequence: $1,4,8,13,19,\dots$ \begin{align*} u_1&=1\\ u_2&=1+3=4\\ u_3&=1+(3+4)=8\\ u_4&=1+(3+4+5)=13\\ \vdots \end{align*} Notice that essentially, to get the term $u_n$ in our sequence, we need to add the sum of the first $n-1$ terms in the difference sequence ( $3,4,5,6,\dots$ ) to the first term $u_1=1$ . So, to find the general expression for $u_n$ , we first need to find the general expression for the sum of the first $n-1$ terms in the difference sequence. We have $a_1=3$ and $d=1$ , meaning $a_n=a_1+d(n-1)=3+1\cdot(n-1)=n+2$ . So, if $$S_{n}=\frac{a_1+a_n}{2}\cdot n=\frac{n(3+n+2)}{2}=\frac{n(n+5)}{2}$$ Then $$S_{n-1}=\frac{(n-1)(n-1+5)}{2}=\frac{(n-1)(n+4)}{2}$$ Now, we have the first term of the original sequence $u_1=1$ , and we know the sum that is added to the first term to get the $n$ th term, so let us put it all together in a general expression for $u_n$ ! \begin{align*} u_n&=u_1+S_{n-1}\\ &=1+\frac{(n-1)(n+4)}{2}\\ &=\frac{2+(n-1)(n+4)}{2}\\ &=\frac{n^2+3n-2}{2} \end{align*}
|sequences-and-series|arithmetic-progressions|
1
General term for increasing AP's
Can someone give an easy general method to find the general term of sequences whose differences are in AP? Example: 1, 4, 8, 13, 19, .... The differences are 3, 4, 5, ..., which form an AP. Through vigorous testing and analysis you get the general term $ (n^2 + 3n - 2)/2$ , but how can it be derived mathematically?
Let $a_1$ , $a_2$ , $\dots$ , be a sequence such that the sequence $a_2-a_1$ , $a_3-a_2$ , $\dots$ , forms an AP with first term $b$ and common difference $d$ . Then, $$a_n-a_1=(a_n-a_{n-1})+(a_{n-1}-a_{n-2})+\dots+(a_2-a_1)$$ So, $a_n-a_1$ is the sum of first $n-1$ terms of this new AP. I assume you know what the sum of $n$ terms of an AP is. So, $$a_n=a_1+\frac{n-1}{2}(2b+(n-2)d)$$ Plugging in $a_1$ , $b$ and $d$ , you can find the general term. In your case, $a_1=1$ , $b=3$ and $d=1$ gives $$a_n=1+\frac{n-1}{2}(2\times3+(n-2))$$ so $$a_n=1+\frac12(n-1)(n+4)$$ which you can see is precisely the same as what you have got. Hope this helps. :)
|sequences-and-series|arithmetic-progressions|
0
Understanding an application of Legendre's Formula as used in the proof of Bertrand's Postulate
In Wikipedia's proof of Bertrand's Postulate , Legendre's Formula is used to establish an upper bound on the p-adic valuation of ${2n}\choose{n}$ The argument is presented as this: (1) Let $R(p, x)$ be the p-adic order of $x$ , i.e., the largest number $r$ such that $p^r$ divides $x$ . (2) Applying Legendre's Formula: $$R\left(p, {{2n}\choose{n}}\right) = \sum\limits_{j=1}^{\infty}\left(\left\lfloor\frac{2n}{p^j}\right\rfloor - 2\left\lfloor\frac{n}{p^j}\right\rfloor\right)$$ I am not clear on the notation or the meaning of the argument. Here is the argument: But each term of the last summation must be either zero (if $n/p^j \bmod 1 < 1/2$ ) or one (if $n/p^j\bmod 1\ge1/2$ ), and all terms with $j>\log_p(2n)$ are zero. I am not clear why it must be $0$ or $1$ . If I change the binomial coefficient to something else. Let's say ${n^2+2n}\choose{n^2}$ which then becomes: $$R\left(p, {{n^2+2n}\choose{n^2}}\right) = \sum\limits_{j=1}^{\infty}\left(\left\lfloor\frac{n^2+2n}{p^j}\right\rfloor - \left\lfloor\frac{n^2}{p^j}\right\rfloor - \left\lfloor\frac{2n}{p^j}\right\rfloor\right)$$
Since you say you’re not clear on the notation: Perhaps the part you’re missing is that $x\bmod1$ is the fractional part of $x$ . It’s still just $0$ or $1$ in your second example. Quite generally, $\lfloor x+y\rfloor-\lfloor x\rfloor-\lfloor y\rfloor$ can only be $0$ or $1$ . If you write $x$ and $y$ as sums of integer and fractional parts, $x=i+u$ and $y=j+v$ with $u,v\in[0,1)$ , you get \begin{eqnarray*} \lfloor x+y\rfloor-\lfloor x\rfloor-\lfloor y\rfloor &=& \lfloor i+u+j+v\rfloor-\lfloor i+u\rfloor-\lfloor j+v\rfloor \\ &=& i+j+\lfloor u+v\rfloor-i-j \\ &=& \lfloor u+v\rfloor \end{eqnarray*} with $0\le u+v\lt2$ , and thus $\lfloor u+v\rfloor=0$ or $1$ .
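A quick exact check of this identity over random rationals (using `fractions` so that no floating-point rounding can blur the floor values):

```python
import math
import random
from fractions import Fraction

# Check that floor(x+y) - floor(x) - floor(y) is always 0 or 1,
# using exact rational arithmetic to avoid rounding artifacts.
random.seed(0)
for _ in range(5000):
    x = Fraction(random.randint(-500, 500), random.randint(1, 60))
    y = Fraction(random.randint(-500, 500), random.randint(1, 60))
    d = math.floor(x + y) - math.floor(x) - math.floor(y)
    assert d in (0, 1)
print("identity verified on 5000 random rationals")
```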
|binomial-coefficients|prime-factorization|
1
Can closed curves be assumed unit speed?
Andrew Pressley's Elementary Differential Geometry textbook claims on page $21$ that "...we can always assume that a closed curve is unit-speed and that its period is equal to its length." In the previous paragraph, Pressley derives this assuming the curve $\gamma$ is regular. This is obvious since regularity is equivalent to the existence of a unit speed parametrization. However, the summary statement itself here excludes the "regularity" assumption. Later on in Chapter $2$ , Pressley wants to show the total signed curvature of a closed plane curve is an integer multiple of $2\pi$ . However, he starts out the proof by seemingly adhering to the principle quoted above and assumes the closed plane curve is unit speed. No regularity assumption was made. Am I going crazy here, or am I just being pedantic and is regularity just being implicitly assumed both in this proof and the quoted statement above?
Clearly a curve has a unit-speed reparametrization if and only if it is a regular curve. On p. 21 the author proves the following two facts: If $\gamma$ is a regular closed curve, a unit-speed reparametrization of $\gamma$ is always closed. This shows that [the unit-speed reparametrization] $\tilde \gamma$ is a closed curve with period $ℓ(γ)$ . Note that, since $\tilde \gamma$ is unit-speed, this is also the length of $\tilde \gamma$ . He then writes In short, we can always assume that a closed curve is unit-speed and that its period is equal to its length. It is obvious that this is just a sloppy summary of the preceding results. You are right, one should correctly state that In short, we can always assume that a regular closed curve is unit-speed and that its period is equal to its length. Corollary 2.2.5 says (again sloppily) that the total signed curvature of a closed plane curve is an integer multiple of $2π$ . In the proof of the Corollary only unit-speed curves are considered, a
|differential-geometry|plane-curves|
1
Find all holomorphic functions satisfying the given inequality
Find all holomorphic functions $f$ on $\mathbb{D}$ s.t the following condition holds for all $n>1$ $$\dfrac{1}{\sqrt{n}} My attempt: From hypothesis, we have $f(0)=0$ and $\left\vert nf\left(\dfrac{1}{n}\right)\right\vert\geq \sqrt{n}\to\infty$ as $n\to\infty$ . I'm trying to show that $\left\vert\dfrac{f\left(z\right)}{z}\right\vert$ is bounded in the neighbourhood of $z=0$ using $f(0)=0$ . If that's true, then there is no such holomorphic function satisfying. But I got stuck here. Could someone help me or have another way to deal with problem? Thanks in advance!
Let $$ f(z)=\sum_{k=0}^\infty a_kz^k $$ be the power series expansion of $f$ . From the upper bound on $|f(1/n)|$ in the hypothesis $(n>1)$ we get $f(0)=0$ by continuity, hence $a_0=0$ . Thus $$ f(z)= z \sum_{k=1}^\infty a_kz^{k-1} =: zg(z) $$ where $g$ is a holomorphic function on $\mathbb{D}$ . Now $$ |g(1/n)| = |nf(1/n)| > \sqrt{n} \to \infty \quad (n \to \infty). $$ On the other hand $g(1/n) \to g(0)=a_1$ $(n \to \infty)$ , a contradiction.
|complex-analysis|
0
Is it possible to define an exponentiation with respect to an ordinal operation?
It is well known that the following result holds. Theorem For any monoid $(M,\bot,e)$ there exists a unique external operation $\curlywedge_\bot$ from $M\times\omega$ into $M$ such that for any $x$ in $M$ the following hold: the equality $$ x\curlywedge_\bot 0=e $$ and the equality $$ x\curlywedge_\bot(n+1)=(x\curlywedge_\bot n)\bot x $$ for any $n$ in $\omega$ . So let us call $\curlywedge_\bot$ the exponentiation of $\bot$ , and let us observe that the usual multiplication on $\omega$ is just the exponentiation of the usual sum. So, observing the definition of ordinal multiplication, I advanced the following conjecture: in what follows I indicate the class of ordinals with the symbol $\mathbf{Ord}$ , whereas I indicate the class of limit ordinals with the symbol $\mathbf{Lim}$ . Conjecture Let $\bot$ be an operation on $\mathbf{Ord}$ with a neutral element $\mu$ in $\mathbf{Ord}$ . Then there exists a unique operation $\curlywedge_\bot$ on $\mathbf{Ord}$ such that for every $\alpha$ in
By closely examining the existence proof for the ordinal sum, I worked out the following myself. Let us first recall the following theorem: in what follows we will indicate with $\mathbf{Ord}$ the class of ordinals and with $\mathbf{Lim}$ the class of limit ordinals. Theorem $0$ If $\mathbf F$ , $\mathbf G$ , and $\mathbf H$ are functionals, then there exists a unique functional $\Psi$ on $\mathbf{Ord}$ such that the following statements hold: $\Psi(\emptyset)$ equals $\mathbf F(\emptyset)$ ; $\Psi\big(S(\alpha)\big)$ equals $\mathbf G\big(\Psi(\alpha)\big)$ for every $\alpha$ in $\mathbf{Ord}$ ; $\Psi(\lambda)$ equals $\mathbf H(\Psi\restriction_\lambda)$ for every non-empty $\lambda$ in $\mathbf{Lim}$ . Now, armed with Theorem $0$ , let us prove the following important and useful result. Theorem $1$ For every operation $\bot$ on $\mathbf{Ord}$ with a neutral element $\mu$ , there exists a unique binary operation $\curlywedge_\bot$ on $\mathbf{Ord}$ such that the following statements hold for e
|abstract-algebra|solution-verification|set-theory|recursion|ordinals|
1
What is the simplest way to approach this integral problem? To me, this problem seemed to necessitate some kind of gimmick.
How can I tackle this problem with as few abstract ideas as possible when attempting to 'purely' integrate from left to right? I've tried a few different versions of this term, but I'm not sure how to get into an answer. Is there another way to approach this?
Just a bit of manipulation and you have the answer. Split the integral this way: $$\int\left(t+1-\frac{1}{t}\right)e^{t+ \frac{1}{t}}\,dt+ \int\left(1-\frac{1}{t^2}\right)e^{t+ \frac{1}{t}}\,dt$$ The right-hand integral easily evaluates to $e^{t+ \frac{1}{t}}$ . For the left one, multiply and divide by $t$ and write the factor as $t\left(\frac{1}{t} + 1- \frac{1}{t^2}\right)$ . Then the left integral becomes $$\int e^{t+ \frac{1}{t}}\,dt + \int t\left(1-\frac {1}{t^2}\right)e^{t+ \frac{1}{t}}\,dt$$ Now in the second of these, take $t$ as the first function and the rest as the second function (using integration by parts), so it becomes $$t\,e^{t+ \frac{1}{t}}- \int e^{t+ \frac{1}{t}}\,dt$$ which cancels the leftmost term, and hence we finally obtain the integral as $$(t+1)\,e^{t+ \frac{1}{t}} + C$$
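One can spot-check this antiderivative numerically; the snippet below (the names `F` and `integrand` are mine) compares a central-difference derivative of $(t+1)e^{t+1/t}$ with the combined integrand $\left(t+2-\frac1t-\frac1{t^2}\right)e^{t+1/t}$ at a few points:

```python
import math

def F(t):
    # Claimed antiderivative (t+1) * exp(t + 1/t)
    return (t + 1) * math.exp(t + 1 / t)

def integrand(t):
    # Sum of the two integrands: (t+1-1/t) e^{t+1/t} + (1-1/t^2) e^{t+1/t}
    return (t + 1 - 1 / t + 1 - 1 / t ** 2) * math.exp(t + 1 / t)

h = 1e-6
for t in (0.5, 1.0, 2.0, 3.0):
    dF = (F(t + h) - F(t - h)) / (2 * h)   # numerical F'(t)
    assert abs(dF - integrand(t)) < 1e-3 * (1 + abs(integrand(t)))
print("F'(t) matches the integrand at all tested points")
```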
|integration|indefinite-integrals|problem-solving|
1
Does the limit of $ye^{-1/x}$ as $(x, y)\to (0, 2)$ exist?
$$\lim_{(x,y) \to (0,2)} ye^{-\frac{1}{x}}$$ Does this limit exist? Or does the limit not exist?
The key point here is what happens to $e^{\frac{-1}{x}}$ when $x \rightarrow 0$ , because there is no problem for $y$ as $y \rightarrow 2$ . One important fact you need to know is that if a limit exists, then its left limit and right limit have to be equal. But here, you see $$x \rightarrow 0^+, \quad \frac{-1}{x} \rightarrow -\infty$$ and $$x \rightarrow 0^-, \quad \frac{-1}{x} \rightarrow +\infty$$ And since $$e^{-\infty} = 0 \neq +\infty = e^{+\infty}$$ (understood as one-sided limits), we can conclude that the limit of $e^{-\frac{1}{x}}$ as $x \rightarrow 0$ doesn't exist, and hence neither does the original limit.
|linear-algebra|limits|
1
$u_1=1, u_{n+1}=2u_n+1$, prove that $u_{n}=2^n-1$
Prove that $u_{n}=2^{n}-1$ for all positive integers $n$ , by induction or what means? What I did was sub $u_{n}=2^{n}-1$ into the $u_{n+1}$ , but that got me nowhere. so what options do i have now.
There are two methods you could use to solve this problem. Method 1 Proof by induction: Base case First, we verify the base case where $n = 1$ . According to the sequence definition, $u_1 = 1$ . Now, let's check the formula $u_n = 2^n - 1$ for $n = 1$ : $2^1 - 1 = 2 - 1 = 1$ . So the base case holds. Inductive step Assume the formula is true for $n=k$ (where $k \in \mathbb{Z}^+$ ), so $u_k = 2^k - 1$ . When $n=k+1$ , we want to show $$u_{k+1} = 2^{k+1} - 1$$ Recall the recurrence relation $u_{k+1} = 2u_k + 1$ . Substituting the inductive hypothesis $u_k = 2^k - 1$ into the recurrence relation, we get: $$u_{k+1} = 2(2^k - 1) + 1$$ $$= 2^{k+1} - 2 + 1$$ $$= 2^{k+1} - 1$$ Since $u_1$ was shown to be true and it was also shown that if the formula is true for $n=k$ , $k \in \mathbb{Z}^+$ , it is also true for $n=k+1$ , it follows by the principle of mathematical induction that the formula is true for all $n \in \mathbb{Z}^+$ Method 2 We can also prove it by examining the pattern established by the recursive definit
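The closed form can also be confirmed mechanically; a tiny loop checking it against the recurrence:

```python
# Check u_n = 2^n - 1 against the recurrence u_1 = 1, u_{n+1} = 2*u_n + 1.
u = 1                       # u_1
for n in range(1, 30):
    assert u == 2 ** n - 1  # closed form holds at step n
    u = 2 * u + 1           # advance the recurrence
print("closed form matches the recurrence for n = 1..30")
```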
|sequences-and-series|induction|
0
Are Gaussian mixtures stable?
Let $X$ be a Gaussian-mixture random vector with probability density function (PDF) $f_X(x)=\sum_{i=1}^kp_if_i(x)$ , where for $i=1,2,\ldots,k$ , $f_i$ is a multivariate Gaussian PDF with mean $\mu_i$ and covariance matrix $\Sigma_i$ , and $\sum_{i=1}^kp_i=1$ . Is $Y=AX$ , where $A$ is an arbitrary matrix of appropriate size, also a Gaussian-mixture random vector? If yes, what are the parameters of its PDF?
This is still a mixture of Gaussians: with $Y = AX$ , the components are $N(A\mu_i,\ A\Sigma_i A^{\mathsf T})$ with the same mixing coefficients $p_i.$ Proof: compute the characteristic function (or Laplace transform) of $AX.$
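A minimal Monte Carlo sanity check in one dimension (scalar $A = a$ , so the transpose convention is immaterial; all parameter values below are made up for illustration):

```python
import random
import statistics

# If X ~ p1*N(mu1, s1^2) + p2*N(mu2, s2^2), then Y = a*X should be the
# mixture p1*N(a*mu1, (a*s1)^2) + p2*N(a*mu2, (a*s2)^2).
random.seed(1)
p1, mu1, s1 = 0.3, -1.0, 0.5
p2, mu2, s2 = 0.7, 2.0, 1.5
a = 2.0

ys = []
for _ in range(100_000):
    if random.random() < p1:
        ys.append(a * random.gauss(mu1, s1))
    else:
        ys.append(a * random.gauss(mu2, s2))

# First two moments of the predicted transformed mixture:
mean_pred = p1 * (a * mu1) + p2 * (a * mu2)
second_moment = (p1 * ((a * s1) ** 2 + (a * mu1) ** 2)
                 + p2 * ((a * s2) ** 2 + (a * mu2) ** 2))
var_pred = second_moment - mean_pred ** 2

assert abs(statistics.fmean(ys) - mean_pred) < 0.08
assert abs(statistics.pvariance(ys) - var_pred) < 0.3
```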
|probability-distributions|
0
A lemma on a compact subset of $G/H$, where $G$ is locally compact and $H$ is a closed subgroup of $G$
I have this lemma from "Kazhdan’s Property (T)" book (page 343 in the link). Here, $G$ is locally compact and $H$ is a closed subgroup of $G$ . Can't really understand why $K$ is compact. The union is obviously compact, but $p^{-1}(Q)$ not necessarily. Also, I understand why $p(K) \subseteq Q$ , but the other direction is not clear to me. Thanks!
You can just show this by hand: Let $q\in Q$ . Since $Q\subset\bigcup_{i=1}^n p(x_iU)$ there exists $i\in\{1,\dots,n\}, a\in U$ such that $p(x_ia)=q$ . Hence, $x_ia\in p^{-1}(Q)$ . Since clearly $x_ia\in x_iU$ , we deduce that $$x_i a\in K=p^{-1}(Q)\cap\bigcup_{i=1}^n x_iU$$ Therefore $q=p(x_ia)\in p(K)$ . This shows that $p(K)\supseteq Q$ .
|topological-groups|
0
Expected value of minimum of discrete uniform random variables
Let $m,n$ be positive integers. Consider $X_1, \dots, X_n$ iid uniform discrete random variables taking values over the set of $mn$ integers $\{0,1,2,\dots,mn-1\}$ . Let $Y_n=\min\{X_1,\dots, X_n\}$ . What's the expected value of $Y_n$ ? I'm more than OK with asymptotics for large $n$ .
Recall that $\mathbb{E}[Z] = \sum_{k=1}^{\infty} \mathbb{P}(Z \geq k)$ for any non-negative integer-valued random variable $Z$ . Applying this to $Y_n$ , $$ \mathbb{E}[Y_n] = \sum_{k=1}^{\infty} \mathbb{P}(Y_n\geq k) = \sum_{k=1}^{\infty} \mathbb{P}(X_1\geq k)^n = \sum_{k=1}^{\infty} \left(1-\frac{k}{mn}\right)_+^n, $$ where $x_+ = \operatorname{ReLU}(x) = \max\{0, x\}$ stands for the positive part of $x$ . Then by utilizing the bound $1-x\leq e^{-x}$ , we note that each summand is bounded by $e^{-k/m}$ . Since $\sum_{k=1}^{\infty} e^{-k/m} < \infty$ , we can apply the dominated convergence theorem to obtain $$ \lim_{n\to\infty} \mathbb{E}[Y_n] = \sum_{k=1}^{\infty} \lim_{n\to\infty} \left(1-\frac{k}{mn}\right)^n = \sum_{k=1}^{\infty} e^{-k/m} = \frac{1}{e^{1/m} - 1}, $$ which coincides with OP's prediction.
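The finite sum and the limit are easy to compare numerically (the helper name `expected_min` is my own):

```python
import math

def expected_min(m: int, n: int) -> float:
    # E[Y_n] = sum_{k=1}^{mn-1} (1 - k/(mn))^n  (terms with k >= mn vanish)
    return sum((1 - k / (m * n)) ** n for k in range(1, m * n))

# As n grows, E[Y_n] should approach 1 / (e^{1/m} - 1).
m = 3
limit = 1 / (math.exp(1 / m) - 1)
assert abs(expected_min(m, 2000) - limit) < 0.01
print(f"E[Y_2000] = {expected_min(m, 2000):.4f}, limit = {limit:.4f}")
```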
|probability|
1
Let $A = \{1, 2, n\}$. Show repeatedly taking the average of two elements from $A$ adding it to $A$, we can get all integers $\in \{1, 2, ... n\}$
Let $A = \{1, 2, n\}$ . Show that by repeatedly taking the average of two elements from $A$ and adding it to $A$ , we can obtain all integers between 1 and $n$ , inclusive. For example, with 1, 2, 7, we have 4 from 1 and 7, 3 from 2 and 4, 5 from 3 and 7, and 6 from 5 and 7. I just thought of this and I haven't found any solutions to problems of this sort.
We will prove the statement while restricting to averages that are integers. Let us work by induction on $n$ , the statement being obvious for $n \le 3$ . It suffices to show that $3$ is reachable from $\{1, 2, n\}$ , since $(x + 1 + y + 1)/2 = (x + y)/2 + 1$ , so a sequence of steps that reaches $k$ starting from $\{1, 2, n - 1\}$ will reach $k + 1$ starting from $\{2, 3, n\}$ , and by induction this includes each $1 \le k \le n-1$ , i.e. $2 \le k + 1 \le n$ . Now consider the sequence of numbers obtained by starting with $n$ and taking the average with either $1$ or $2$ depending on parity. That is, consider the sequence $f^k(n)$ , where $$f(m) = \begin{cases} \frac{1 + m}{2} & \text{ if $m$ is odd,}\\ \frac{2 + m}{2} & \text{ if $m$ is even.}\\\end{cases}$$ By induction on $k$ , each $f^k(n)$ is reachable. Also note that for $m > 3$ , $3 \le f(m) < m$ . Therefore this sequence decreases to $3$ ; in particular, it must take the value $3$ at some step.
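For small $n$ the claim (restricted to integer averages, as in the proof) can be brute-forced by closing $\{1,2,n\}$ under admissible averages; the helper name `reachable` is my own:

```python
def reachable(n: int) -> set:
    """Close {1, 2, n} under (x+y)//2 whenever x+y is even."""
    s = {1, 2, n}
    changed = True
    while changed:
        changed = False
        for x in list(s):
            for y in list(s):
                if (x + y) % 2 == 0 and (x + y) // 2 not in s:
                    s.add((x + y) // 2)
                    changed = True
    return s

# Every integer 1..n should be obtained.
for n in range(3, 40):
    assert set(range(1, n + 1)) <= reachable(n)
print("claim verified for n = 3..39")
```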
|elementary-number-theory|discrete-mathematics|
0
A lemma on a compact subset of $G/H$, where $G$ is locally compact and $H$ is a closed subgroup of $G$
I have this lemma from "Kazhdan’s Property (T)" book (page 343 in the link). Here, $G$ is locally compact and $H$ is a closed subgroup of $G$ . Can't really understand why $K$ is compact. The union is obviously compact, but $p^{-1}(Q)$ not necessarily. Also, I understand why $p(K) \subseteq Q$ , but the other direction is not clear to me. Thanks!
(Thank you Izaak, brain wasn't online) Quotients of Hausdorff topological groups by closed subgroups are themselves Hausdorff, so $Q$ is a compact subset of a Hausdorff space, and thus is closed. By continuity of $p$ , $p^{-1}(Q)$ is hence a closed set, so the intersection of it with a compact set is thus compact. You are correct that $p^{-1}(Q)$ is not necessarily compact.
|topological-groups|
1
How to calculate the value of the n-th derivative of this function at this point
$$\forall n\in \mathbb{N},\left.\frac{d^{2n+1}}{dx^{2n+1}}(x^{\ln x})\right|_{x=e^{\frac{n}{2}}}=0$$ How can one prove this equation? I find that if $n=0$ or $n=1$ , the answer is $0$ , but I can't prove that it is true for every nonnegative integer. My idea is to calculate an analytical expression for the $n$ -th derivative and then substitute in the specific value. I find that $$\frac{d^{2n+1}}{dx^{2n+1}}(x^{\ln x})=\frac{x^{\ln x}}{x^{2n+1}}P_{2n+1}(\ln x)$$ where $P_{2n+1}(x)$ is a polynomial of degree $2n+1$ . For $n=0,1,2$ , I find that $P_{2n+1}(\frac{n}{2})=0$ . But I don't know how to prove the proposition " $\forall n,P_{2n+1}(\frac{n}{2})=0$ ". These are my ideas, but I can't solve the question. I hope someone can help me! Thanks!
You want the $(2n+1)$ -st derivative of the function $x^{\log x}=\exp((\log x)^2)$ at $x=e^{n/2}$ , i.e., by the residue theorem (up to the factor $(2n+1)!$ , which does not matter here), $$\frac1{2\pi i}\oint\frac {e^{(\log x)^2}}{(x-e^{n/2})^{2n+2}}\,dx$$ where we integrate along a contour encircling $e^{n/2}$ . Using the substitution $x=e^{t+n/2}$ , this integral is $$ e^{-3n^2/4-n/2}\frac1{2\pi i}\oint\frac {e^{t^2}}{(e^{t/2}-e^{-t/2})^{2n+2}}\,dt$$ where we now integrate along a circle centered at $t=0$ . Since the integrand is even, the integral is zero, as you wanted to show. (You can likely get this without integrals/residues, just using the same substitution.)
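As a numerical spot-check of the original claim for $n=1$ , independent of the contour argument: the third derivative of $f(x)=\exp((\log x)^2)$ should vanish at $x=e^{1/2}$ . A finite-difference sketch:

```python
import math

def f(x: float) -> float:
    # x^(ln x) = exp((ln x)^2)
    return math.exp(math.log(x) ** 2)

def third_derivative(g, x: float, h: float = 1e-3) -> float:
    # Standard central difference for g'''(x), accurate to O(h^2).
    return (g(x + 2 * h) - 2 * g(x + h) + 2 * g(x - h) - g(x - 2 * h)) / (2 * h ** 3)

x0 = math.exp(0.5)                       # the point e^{n/2} with n = 1
assert abs(third_derivative(f, x0)) < 1e-3
# For comparison, at a generic point the third derivative is far from zero:
assert abs(third_derivative(f, 2.0)) > 0.1
```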
|differential|
1
Prove or reject my solution to "How quickly can you type this unary string?"
I had posted an answer on the Code Golf SE yesterday. Although an answer on that site remains valid as long as no counterexample is found, I'm interested in its correctness. So I want to find a proof of, or a counterexample to, my solution. Answering this question does not involve any programming skill, only some math work about integers. So, don't panic. You may find the question authored by emanresu A on Code Golf SE, and my answer . But let me briefly restate the question: Question One wants to type $n$ letter a 's in an editor. The editor starts with no content in it, and an empty clipboard. In each step, the user may: A : Type a letter a , so there is one more letter a in the editor. C(k) : Copy k letter a 's into the clipboard. $k$ must be less than or equal to the number of characters currently in the editor. The clipboard then holds $k$ letter a 's. V : Paste the clipboard into the editor. So the editor has $k$ more a 's in it. $k$ is the number of characters we just
Fact 1: We never need to paste more than two times in a row at the end: If we end in $C(r)V^{2k+1}$ with $k\geq 1$ , replace it by $C(r)VC(2r)V^{k}$ , and we have $k+2 \leq 2k+1$ . If we end in $C(r)V^{2k}$ with $k\geq 2$ , replace it by $C(r)VVC(2r)V^{k-1}$ , and we have $k+2\leq 2k$ . By induction, we have our result. Corollary: We have the recursion: $$g(n) = \min\left\{\min_{r\leq \left\lfloor\frac{n}{2}\right\rfloor} g(n-r)+2, \min_{r\leq \left\lfloor\frac{n}{3}\right\rfloor} g(n-2r)+3\right\}$$ (the constraints come from the clipboard: copying $r$ characters requires $r$ characters to already be present). Fact 2: $g(n) \leq g(n+2)$ The optimal solution of $g(n+2)$ ends in $C(r)V$ or $C(r)VV$ . We replace it by $C(r-2)V$ or $C(r-1)VV$ to upper-bound $g(n)$ by $g(n+2)$ . Corollary: If $g(n) > g(n+1)$ then $g(n+1)$ ends in $C(r)VV$ Corollary: We are left with a recursion that relies on three cases (when we end in two pastes, there is only one case – the third one – because the other inductions are at least two steps away) : $$g(n) = \min \left\{g\left(n-\left\lfloor\frac{n}{2}\right\rfloor\right)+2, g
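These facts can be cross-checked against a brute-force BFS over editor states. A sketch (my assumptions: a state is the pair (characters, clipboard), and overshooting $n$ never helps since nothing can be deleted):

```python
from collections import deque

def g(n):
    # shortest step count to reach exactly n characters;
    # a state is (chars_in_editor, clipboard_size)
    dist = {(0, 0): 0}
    q = deque([(0, 0)])
    while q:
        c, k = q.popleft()
        if c == n:
            return dist[(c, k)]
        succ = []
        if c + 1 <= n:
            succ.append((c + 1, k))            # A: type one 'a'
        if 0 < k and c + k <= n:
            succ.append((c + k, k))            # V: paste
        succ += [(c, j) for j in range(1, c + 1)]  # C(j): copy j chars
        for s in succ:
            if s not in dist:
                dist[s] = dist[(c, k)] + 1
                q.append(s)

print([g(n) for n in range(1, 10)])                  # [1, 2, 3, 4, 5, 5, 6, 6, 6]
print(all(g(n) <= g(n + 2) for n in range(1, 12)))   # Fact 2 holds: True
```

For instance $g(6)=5$ via `aaa C(3) V`, and $g(11)=8 > g(12)=7$, consistent with $g$ not being monotone while still satisfying Fact 2.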
|optimization|algorithms|recursive-algorithms|integer-sequences|
1
Probability of a zero-summable random finite sequence having a small sub-sum at a given index.
This might be anything from trivial to not trivial, but my probability theory is too rusty to even get started with this. Suppose I have $N$ real numbers $(x_1, x_2, \ldots, x_N)$ , each sampled independently from $\mathcal{N}(0,1)$ . Conditional on their sum being $0$ , how could I calculate the probability that for a given $k < N$ and $\varepsilon > 0$ the sum $\sum_{j=1}^k x_j$ is in $[-\varepsilon,\varepsilon]$ ? Surely the probability will be much better than without the conditional part? Without it we'd just look at the $k$ -fold sum of independent normal distributions, i.e. $\mathcal{N}(0,k)$ , but here the condition that we get to $0$ at $N$ should surely limit our variations? Some simple simulations seem to support this as well. Bonus points if your solution can help me to also work with other base distributions besides $\mathcal{N}(0,1)$ .
Recall that if $(X,Y)\sim N(m_X,m_Y, \Sigma)$ with $$\Sigma=\left[\begin{array}{cc}A&B^T\\B&C\end{array}\right]$$ (with obvious notations) then the conditional law of $X|Y$ has mean $m_X+B^TC^{-1}(Y-m_Y)$ and a covariance independent of $Y$ equal to $$A-B^TC^{-1}B$$ Let us apply this to $$X=(X_1,\ldots,X_n), \ \ Y=X_1+\cdots+X_n$$ where the iid $X_i$ are $N(0,1).$ Then $A_n=I_n$ , $C_n=n$ and $B_n=(1,\ldots,1).$ We get $$X|Y=0\sim N(0,I_n-\frac{1}{n}J_n)$$ where $J_n$ is the $(n,n)$ matrix containing only ones. Finally $$(X_1,\ldots,X_k)|Y=0\sim N(0, I_k-\frac{1}{n}J_k)$$ Another fact: If $X\sim N(0,\Sigma)$ then $BX\sim N(0, B\Sigma B^T)$ (compute the Laplace transforms to see this). Therefore with $B=B_k$ then $$X_1+\cdots+X_k|Y=0\sim N(0, k-\frac{k^2}{n}).$$ Computing $\Pr(|X_1+\cdots+X_k|\leq \epsilon|Y=0)$ is now standard.
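A quick Monte Carlo sanity check of the final law (a sketch: the rejection window $\varepsilon$, trial count, and seed are arbitrary choices, and conditioning on the zero-measure event $Y=0$ is approximated by $|Y|<\varepsilon$):

```python
import random

def cond_var(k, n):
    # Var(X_1+...+X_k | X_1+...+X_n = 0) per the answer's formula:
    # B (I_k - J_k/n) B^T with B = (1,...,1), i.e. k - k^2/n
    return k - k * k / n

def mc_cond_var(k, n, eps=0.05, trials=200_000, seed=0):
    # crude rejection sampling: keep draws with |X_1+...+X_n| < eps
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if abs(sum(xs)) < eps:
            vals.append(sum(xs[:k]))
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

exact, estimate = cond_var(2, 5), mc_cond_var(2, 5)
print(exact, estimate)   # both should be close to 2 - 4/5 = 1.2
```

Note the unconditional variance would be $k=2$; the conditioning visibly shrinks it, matching the asker's intuition.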
|probability|normal-distribution|conditional-probability|
1
Proving a multivariate normal distribution gets the maximum entropy when mean and covariance are given
I'm working on a homework question. The first part was: Given an unbounded one dimensional continuous random variable: $X\in\left(-\infty,\infty\right)$ , that satisfies: $\left\langle X\right\rangle =\mu,\;\left\langle \left(X-\mu\right)^{2}\right\rangle =\sigma^{2}$ Show that the distribution that maximizes entropy is Gaussian $X\sim N\left(\mu,\sigma^{2}\right)$ . I've solved this using Lagrange multipliers method. The next part is proving the same holds in the case of multivariate distributions. Generalize the previous part to a $k$ dimensional variable $X$ with given expectation value $\vec{\mu}$ and covariance matrix $\Sigma$ . I started the same way when I define the proper functional I wish to optimize: $$ F\left[f_{X}\left(\overline{x}\right)\right]=H\left(X\right)+\lambda\left(1-\intop_{\mathbb{R}^{k}}f_{X}\left(\overline{x}\right)d\overline{x}\right)+\sum_{i\in\left[k\right]}\varGamma_{i}\left(\mu_{i}-\intop_{\mathbb{R}^{k}}\overline{x}_{i}f_{X}\left(\overline{x}\right)d\ove
Here, I first provide a proof with required details based on the relative entropy method, and then I discuss how your main issue on dropping the expectation constraint when using the KKT method can be solved. Indeed, without the expectation constraint, the covariance of the distribution cannot be fixed, and you will have many other feasible normal densities satisfying the KKT conditions and both first and third constraints (see PS3 for more details). Let $f_X \sim \mathcal N (\mu,\Sigma)$ , for which $$\color{blue}{H(X)=\frac{1}{2} \log\left ((2\pi e)^n\det{\Sigma} \right)}.$$ Then, one can see that $$\log f_{X}(x) = \underbrace{-\frac{1}{2} \log((2 \pi)^n\det{\Sigma})}_{\color{blue}{A:=-H(X)+\frac{1}{2} \log(e^n)}} -\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu). \tag{1}$$ For any other density $f_Y$ with $\mathbb E (Y)=\mu$ and $\text{cov}(Y)=\Sigma$ , the relative entropy is computed as $$D_{\text{KL}}(f_Y\|f_X)=\int_{\mathbb R^n} f_{Y}(x)\log \frac{f_{Y}(x)}{f_{X}(x)}\text{d}x=\underbrace{
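The boxed entropy formula can be sanity-checked numerically: for a Gaussian, $-\mathbb E[\log f_X(X)]$ should match $\frac12\log((2\pi e)^n\det\Sigma)$. A small Monte Carlo sketch for $n=2$ (the covariance matrix, sample count, and seed are arbitrary choices of this example):

```python
import math, random

S = [[2.0, 0.6], [0.6, 1.0]]               # an arbitrary 2x2 covariance
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
inv = [[S[1][1] / det, -S[0][1] / det],
       [-S[1][0] / det, S[0][0] / det]]

def log_f(x, y):
    # log density of N(0, S) in two dimensions
    q = inv[0][0]*x*x + 2*inv[0][1]*x*y + inv[1][1]*y*y
    return -0.5 * (2 * math.log(2 * math.pi) + math.log(det) + q)

# sample via the Cholesky factor of S: (x, y) = L @ (z1, z2)
a = math.sqrt(S[0][0]); b = S[1][0] / a; c = math.sqrt(S[1][1] - b * b)
rng = random.Random(1)
n_samp = 200_000
acc = 0.0
for _ in range(n_samp):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    acc += log_f(a * z1, b * z1 + c * z2)

h_mc = -acc / n_samp
h_formula = 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)
print(h_mc, h_formula)   # the two should agree to a couple of decimals
```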
|real-analysis|probability|statistics|optimization|information-theory|
1
continuity of f probably with heine
There is a debate and the professor isn't available. True / False: Given that $f$ is continuous, does it mean that $$\lim_{n \to \infty} f \left( \frac{(-1)^n}{n} \right)$$ exists? I think it does, because if $f$ is continuous then the limit of $f$ as $x$ approaches $0$ exists, and the argument $\frac{(-1)^n}{n}$ tends to $0$ as $n$ approaches infinity (similar to the Heine definition). edit: apparently a teaching assistant said that it is not necessarily true because n is natural and therefore discrete? your opinion?
If $f: \mathbb{R} \to \mathbb{R}$ is continuous, then it is continuous at $x = 0$ . Let $f(0) = y_0$ . For any $\varepsilon > 0$ , there exists $\delta > 0$ such that $|x| < \delta$ implies $|f(x) - y_0| < \varepsilon$ . Now let's consider your limit: $$\lim_{n \to \infty} f ((-1)^n n^{-1})$$ Since $n \in \mathbb{Z}^+$ , this is the limit of a sequence $a_n = f ((-1)^n n^{-1})$ . We would expect this limit to be $y_0$ . To show that this is indeed the case, let $\varepsilon > 0$ be arbitrary; we need to find $N \in \mathbb{Z}^+$ such that $|a_n - y_0| < \varepsilon$ for each $n > N$ . By continuity of $f$ , there exists some $\delta > 0$ such that $|(-1)^n n^{-1}| = n^{-1} < \delta$ implies $|a_n - y_0| < \varepsilon$ . Pick $N = \lfloor \delta^{-1} \rfloor$ , and you are done.
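A throwaway numerical illustration of the point being debated (the particular continuous $f$ below is an arbitrary choice): $(-1)^n/n$ is a perfectly good real sequence tending to $0$, so $f((-1)^n/n)\to f(0)$ even though $n$ runs over the discrete naturals.

```python
import math

def f(x):                      # any continuous function works here
    return math.cos(x) + 3 * x

a = [f((-1) ** n / n) for n in range(1, 10001)]
print(a[-2], a[-1], f(0))      # consecutive terms close in on f(0) = 1
```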
|limits|continuity|
0
Generalising relation between hypotenuse and other side of right angled triangle in arbitrary vector space
I was working on my Linear algebra homework and I felt I needed to prove this statement (which might not be true), but it is at least true for $\mathbb{R}^2$ , from a right-angled triangle. Given an inner product space $V$ with derived norm $|| \cdot || = \sqrt{\langle\cdot ,\cdot\rangle}$ . Let $a,b,c \in V$ . Show that $$ c-a~\bot~b-a \Rightarrow ||c-b|| \geq ||c-a|| $$ Can someone help me prove this?
Using the theorem in our book that states: given an inner product space $V$ with derived norm $||\cdot|| = \sqrt{\langle \cdot, \cdot \rangle}$ , we have for $u,v \in V$ : if $\langle u,v\rangle = 0$ then $$ ||u + v||^2 = ||u||^2 + ||v||^2 $$ Choosing $u = a-b$ and $v=c-a$ we have $u + v = c - b$ , and $\langle u,v\rangle = -\langle b-a, c-a\rangle = 0$ by assumption, so using the theorem we get $$ ||c-b||^2 = ||a-b||^2 + ||c-a||^2 $$ Since $||a-b||^2 \geq 0$ we have $$ ||c-b||^2 \geq ||c-a||^2 \iff ||c-b|| \geq ||c-a|| $$
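A randomized spot-check of the statement in plain Python (a sketch: $c$ is constructed so that $c-a \perp b-a$ by projecting a random vector onto the orthogonal complement of $b-a$; the dimension and ranges are arbitrary):

```python
import random

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def norm(u): return dot(u, u) ** 0.5
def sub(u, v): return [x - y for x, y in zip(u, v)]

rng = random.Random(42)
ok = True
for _ in range(1000):
    a = [rng.uniform(-1, 1) for _ in range(3)]
    b = [rng.uniform(-1, 1) for _ in range(3)]
    u = sub(b, a)
    v = [rng.uniform(-1, 1) for _ in range(3)]
    # remove the component of v along u, so that w is orthogonal to b - a
    s = dot(v, u) / dot(u, u)
    w = [vi - s * ui for vi, ui in zip(v, u)]
    c = [ai + wi for ai, wi in zip(a, w)]       # then c - a = w, and w ⊥ b - a
    ok = ok and norm(sub(c, b)) >= norm(sub(c, a)) - 1e-12
print(ok)   # True: the leg never exceeds the hypotenuse
```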
|linear-algebra|normed-spaces|triangles|inner-products|orthogonality|
0
$\dim(U_1 \cap U_2 \cap U_3) = n − 3$, Give a proof or find a counterexample.
Suppose that $U_1, U_2, U_3$ are three distinct subspaces of $\dim = n-1$ from a vector space of $\dim = n$ . where $n \gt 3$ . Give a proof or find a counterexample for $\dim(U_1 \cap U_2 \cap U_3) = n − 3$ . My attempt: first I showed that $\dim(U_1 \cap U_2 \cap U_3) \geq n − 3$ , but I do not know what should I do with the inverse. However for $n=3$ I found a counterexample which I mentioned it below. $U_1 = \{(x,x,y):x,y\in \mathbb{R}\}$ , $U_2 = \{(x,y,x):x,y\in \mathbb{R}\}$ , $U_3 = \{(y,x,x):x,y\in \mathbb{R}\}$ , $U_1 \cap U_2 \cap U_3 = \{(x,x,x):x\in \mathbb{R}\}$ Thank you for your time.
Your counterexample extends to $n = 4$ via $U'_i = \iota(U_i) + \langle e_4 \rangle$ for all $i$ where $\iota\colon \mathbb{R}^3 \hookrightarrow \mathbb{R}^4$ is the inclusion $\iota(x, y, z) = (x, y, z, 0)$ . The intersection then satisfies $U'_1 \cap U'_2 \cap U'_3 = \langle e_1 + e_2 + e_3, e_4 \rangle$ and is therefore 2-dimensional.
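This can be verified mechanically: each $U'_i$ is the kernel of one linear functional on $\mathbb{R}^4$, so $\dim(U'_1\cap U'_2\cap U'_3) = 4 - \operatorname{rank}$ of the stacked normal vectors. A small exact-arithmetic check (the Gaussian-elimination helper is my own scaffolding, not part of the answer):

```python
from fractions import Fraction

def rank(rows):
    # row rank via exact Gaussian elimination over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# normals of U'_1, U'_2, U'_3 in R^4: the hyperplanes {x1=x2}, {x1=x3}, {x2=x3}
normals = [[1, -1, 0, 0], [1, 0, -1, 0], [0, 1, -1, 0]]
dim_intersection = 4 - rank(normals)
print(dim_intersection)   # 2, as claimed
```

The third normal is the difference of the first two, so the rank is $2$, not $3$, which is exactly why the intersection is $2$-dimensional rather than $n-3=1$-dimensional.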
|linear-algebra|vector-spaces|examples-counterexamples|
1
Hint for showing the identity $(\nabla\psi\cdot\nabla)\nabla\psi^*+(\nabla\psi^*\cdot\nabla)\nabla\psi=\nabla|\nabla \psi|^2$
I need some help to show the following identity. $$(\nabla\psi\cdot\nabla)\nabla\psi^*+(\nabla\psi^*\cdot\nabla)\nabla\psi=\nabla|\nabla \psi|^2$$ My attempt: $$ \partial_i\psi\partial_i\partial_k\psi^*+\partial_i\psi^*\partial_i\partial_k\psi\\ \partial_i(\psi\partial_i\partial_k\psi^*+\psi^*\partial_i\partial_k\psi) $$ The term inside looks like a product rule, except there are two derivatives. I am not sure what I should do next. Any hint will help. Thank you. Update: I have made some progress $$ \partial_i\partial_k(\psi \psi^*)=0\\\partial_i(\psi^*\partial_k\psi+\psi\partial_k\psi^*)=0\\\partial_i\psi^*\partial_k\psi+\psi^*\partial_i\partial_k\psi+\partial_i\psi^*\partial_k\psi+\psi\partial_i\partial_k \psi^*=0 $$ So, $$ \partial_i\psi^*\partial_k\psi+\partial_i\psi^*\partial_k\psi=-(\psi^*\partial_i\partial_k\psi+\psi\partial_i\partial_k \psi^*) $$ So $$ \partial_i(\psi\partial_i\partial_k\psi^*+\psi^*\partial_i\partial_k\psi)\\\implies\partial_i(\partial_i\psi^*\partial_k\psi+\
Think about the form of the right hand side. It is ultimately the gradient of a scalar field, so you will want to manipulate your expression for the left hand side using index notation to start with $\partial_k$ . Commutation relationships will be useful to this end, since $\partial_i \partial_k = \partial_k \partial_i +[\partial_i,\partial_k]$ , for example. Especially be mindful of the product rule, since it will be understood that $\partial_k$ acts on every scalar field to the right of it. It should feel ideologically similar to integration by parts. Alternatively, starting from the right hand side may be more natural here.
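One can also check the identity numerically before proving it, component by component, using finite differences on an arbitrarily chosen smooth complex field $\psi$ (everything below, including the test point and step size, is illustrative):

```python
import cmath

def psi(x, y):
    # an arbitrary smooth complex scalar field
    return cmath.exp(x) * cmath.sin(y) + 1j * (x * x * y)

h = 1e-4

def d(f, i, x, y):
    # central-difference first partial derivative along axis i
    ex, ey = ((h, 0.0), (0.0, h))[i]
    return (f(x + ex, y + ey) - f(x - ex, y - ey)) / (2 * h)

def dd(f, i, k, x, y):
    # second partial derivative: d_k of d_i f (order is immaterial for smooth f)
    return d(lambda a, b: d(f, i, a, b), k, x, y)

def conj(f):
    return lambda a, b: f(a, b).conjugate()

def grad_sq(a, b):
    # |grad psi|^2 = sum_i d_i psi * d_i psi*
    return sum(abs(d(psi, i, a, b)) ** 2 for i in range(2))

x0, y0 = 0.3, -0.7
for k in range(2):
    lhs = sum(d(psi, i, x0, y0) * dd(conj(psi), i, k, x0, y0)
              + d(conj(psi), i, x0, y0) * dd(psi, i, k, x0, y0)
              for i in range(2))
    rhs = d(grad_sq, k, x0, y0)
    print(k, abs(lhs - rhs))   # both differences should be tiny (FD error only)
```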
|vector-analysis|quantum-mechanics|
0
Necessity of denseness and completeness for a surjective monotone being continuous
It turns out that the familiar result that surjective monotones $\mathbb R\to\mathbb R$ are continuous extends to general LOTS (linearly ordered topological spaces): Theorem . If $X$ , $Y$ are LOTS with $Y$ being dense and complete, then any surjective monotone $X\to Y$ is continuous. (The same proof from the $\mathbb{R\to R}$ case sails through.) The necessity of surjectivity is clear by considering the function $f\colon \mathbb {R\to R}$ given by $f(x) := x$ for $x < 1$ and $f(x) := x +1$ for $x\ge 1$ . However, I am having trouble finding two examples that show the necessity of denseness and, respectively, completeness of the codomain space. Any leads?
Dense-in-itself suffices: Let $(X, \le), (Y, \le)$ be linearly ordered sets equipped with the order topology, $Y$ dense-in-itself (i.e. for each $y, z \in Y$ with $y < z$ , it is $(y, z) \neq \emptyset$ ). Let $f: X \rightarrow Y $ be a monotone increasing, surjective map. Then $f$ is continuous. PROOF. Let $x \in X$ , $f(x) \in V$ open in $Y$ . Case 1: $f(x)$ is neither minimum nor maximum of $Y$ . Then there exist $u, v \in Y$ such that $u < f(x) < v$ and $(u, v) \subset V$ . Since $Y$ is dense-in-itself, we may assume that $[u, v] \subset V$ . Since $f$ is surjective, pick $a, b \in X$ such that $f(a) = u, f(b) = v$ . Hence $a < x < b$ and $f( (a,b) ) \subset [u, v] \subset V$ . Other cases: similar. It doesn't hold in general if $Y$ is not dense-in-itself: $f: \mathbb R \rightarrow \{0, 1\}, \space f(x) = \begin{cases} 0, & \text{if } x
|general-topology|continuity|order-theory|
1
Is $C^\infty_c(\mathbb{R})$ closed under differentiation
I'm curious about the properties of the space of smooth functions with compact support, $C^\infty_c(\mathbb{R})$ . Specifically, I'm interested in whether this space is closed under the operation of differentiation. If we take a function $f$ that is in $C^\infty_c(\mathbb{R})$ and differentiate it, is the resulting function guaranteed to also be in $C^\infty_c(\mathbb{R})$ , regardless of the support interval of $f$ potentially changing? Could anyone provide references for further reading on this topic? Thanks for any insights!
Say $K$ is the compact support of a smooth $F$ . Let $f$ be the gradient of $F$ . Is $f$ also supported on $K$ and therefore compactly supported? Yes. Simply because on $K^c$ $F$ is identically zero, by definition, and the derivative at points there may be computed by only working on $K^c$ - it being open! - and the derivative of zero is zero. So $C_c^\infty$ is closed under differentiation.
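A concrete illustration with the standard bump function (the particular function is an example of my choosing, not part of the answer): its support is $[-1,1]$, and a numerical derivative is identically zero outside it, exactly as argued.

```python
import math

def bump(x):
    # the classic smooth function compactly supported on [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def dbump(x, h=1e-6):
    # central-difference derivative
    return (bump(x + h) - bump(x - h)) / (2 * h)

print([dbump(x) for x in (-2.0, -1.5, 1.1, 3.0)])   # [0.0, 0.0, 0.0, 0.0]
```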
|real-analysis|
1
$|z_1+z_2|+|z_1-z_2| \le |z_1| + |z_2| + \max(|z_1|,|z_2|) , \forall z_1,z_2 \in \mathbb C$
I tried to solve an olympiad-level problem with complex numbers by myself. Can you please look at my solution and tell me if it is correct and complete, and how many points I would have gotten out of 7? I hope I didn't miss anything. Prove that for any complex numbers $z_1 , z_2$ the following inequality holds $$|z_1+z_2|+|z_1-z_2| \le |z_1| + |z_2| + \max(|z_1|,|z_2|)$$ My solution : First we check if one of the 2 complex numbers is $0$ , we assume that it is $z_1$ $\implies |z_2|+|z_2| \le 0 + |z_2|+|z_2| \implies |z_2| \le |z_2|$ and we obtain the equality in the inequality. From now on, we will assume that the two numbers are different from $0$ . Let's approach the case when $z_1,z_2$ have the same modulus, so let $|z_1| = |z_2| = r , r\in \mathbb R , r>0 $ , we have to prove that $|z_1+z_2|+|z_1-z_2| \le 3r$ . We will use that $|z_1+z_2|^2+|z_1-z_2|^2 = 2(|z_1|^2+|z_2|^2) = 4r^2$ for the inequality $(|z_1+z_2|+|z_1-z_2|)^2 \le 2(|z_1+z_2|^2+|z_1-z_2|^2) \implies |z_1+z_2|+|z_1-z_2| \le \sqrt{8r^2} = 2\sqrt{2}\,r \le 3r$
I have not read your proof ... but I think one could simplify computations by noticing that rotation or scaling (that is multiplying both numbers by the same factor) does not change the inequality. With this in mind, all we have to prove is this: $|1 + z| + |1-z| \leq 2 +|z|$ where $|z| \leq 1$ . Then one just raises both sides to the second, uses some identities like $|a+b|^2 + |a-b|^2 = 2|a|^2 + 2|b|^2$ and $(1-z)(1+z) = (1-z^2)$ and everything simplifies to $|1-z^2| \leq 1 +|z|^2 \leq (1+|z|)^2$ .
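Both the original inequality and this reduced form are easy to spot-check at random (a sketch; the small tolerances guard against floating-point noise):

```python
import random

rng = random.Random(7)
ok_full, ok_reduced = True, True
for _ in range(20000):
    z1 = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
    z2 = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
    lhs = abs(z1 + z2) + abs(z1 - z2)
    rhs = abs(z1) + abs(z2) + max(abs(z1), abs(z2))
    ok_full = ok_full and lhs <= rhs + 1e-9
    # the reduced form after rotation/scaling: |1+z| + |1-z| <= 2 + |z|, |z| <= 1
    z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    if abs(z) <= 1:
        ok_reduced = ok_reduced and abs(1 + z) + abs(1 - z) <= 2 + abs(z) + 1e-9
print(ok_full, ok_reduced)   # True True
```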
|inequality|solution-verification|complex-numbers|
0
Prove the limit $\lim_{(x,y)\to(0,0)} x^y (x>0)$ doesn't exist
Given the function $f(x,y)= x^y$ $(x>0)$ , prove that the limit $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist. I think I would choose two sequences $(x,y) \to (0,0)$ such that the two limits do not agree, concluding that the limit doesn't exist. But so far I have only found the sequence $\left(\left(\frac{1}{n},0\right)\right)_{n\geq1}$ , for which $\lim_\limits{n\to\infty} f\left(\frac{1}n, 0\right) =1$ , and I don't know how to choose the second sequence. Can you explain, or suggest any way to prove it?
Try to directly construct a constant-valued sequence, e.g., ${x_n}^{y_n} = 1/2$ for all $n$ . If we set $y_n = 1 / n$ and solve for $x_n$ , we get $x_n = \sqrt[1/n]{1/2} = \dfrac{1}{2^n}$ . Then check: is it true that $(x_n, y_n) \to (0, 0)$ ?
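Numerically, the construction does what it should: the pairs approach $(0,0)$ while $x_n^{\,y_n}$ stays pinned at $1/2$, so the limit, if it existed, would have to equal both $1$ and $1/2$.

```python
for n in (1, 5, 10, 20, 40):
    x, y = 0.5 ** n, 1.0 / n          # x_n = 1/2^n, y_n = 1/n
    print(n, x, y, x ** y)            # x ** y is always 0.5 (up to rounding)
```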
|calculus|limits|multivariable-calculus|
1
Prove the limit $\lim_{(x,y)\to(0,0)} x^y (x>0)$ doesn't exist
Given the function $f(x,y)= x^y$ $(x>0)$ , prove that the limit $\lim_{(x,y)\to(0,0)} f(x,y)$ does not exist. I think I would choose two sequences $(x,y) \to (0,0)$ such that the two limits do not agree, concluding that the limit doesn't exist. But so far I have only found the sequence $\left(\left(\frac{1}{n},0\right)\right)_{n\geq1}$ , for which $\lim_\limits{n\to\infty} f\left(\frac{1}n, 0\right) =1$ , and I don't know how to choose the second sequence. Can you explain, or suggest any way to prove it?
Choose for example $\;x_n=\frac1n\;,\;\;y_n=\frac1{\ln n}\;$ , then both $$x_n\xrightarrow[n\to\infty]{}0\quad\text{and}\quad y_n\xrightarrow[n\to\infty]{}0$$ but $$f\left(x_n\,,y_n\right)=\left(\frac1n\right)^{1/{\ln n}}=\text{exp}\left({\frac1{\ln n}\ln \frac1n}\right)=e^{-1}$$ which doesn't converge to $\;1\;$ ...
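A quick numeric check of this subsequence (the common value works out to $e^{-1}\approx 0.3679$, which in particular differs from the value $1$ obtained along $y_n = 0$):

```python
import math

for n in (10, 1000, 10 ** 6):
    x, y = 1.0 / n, 1.0 / math.log(n)
    print(n, x ** y)    # constantly exp(-1), never approaching 1
```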
|calculus|limits|multivariable-calculus|
0
Proving $C([0,1])$ is not dense in $L^{\infty}(\mu)$ using an explicit counterexample.
Let $\mu = $ Lebesgue measure on $[0,1]$ and $f: \mathbb [0,1] \to \mathbb C$ defined as: $$f(x) = \begin{cases} 1 & \text{$x \in \mathbb Q \cap [0,1]$} \\ 0 & \text{$x \in [0,1] - \mathbb Q$} \end{cases}$$ Prove $f \in L^{\infty}(\mu)$ . Using the definition of $\lVert f \rVert_{\infty}=\inf\{a\in \mathbb R | \mu(\{|f|>a\}) = 0 \}$ show that there exists $\epsilon>0$ such that for all continuous functions $g:[0,1] \to \mathbb C$ , $\lVert f-g \rVert_{\infty} \ge \epsilon$ . Conclude $C([0,1])$ is not dense in $L^{\infty}(\mu)$ . For the first part, since $\mu(\mathbb Q \cap [0,1])=0$ we have $\lVert f \rVert_{\infty}=0$ and so $f \in L^{\infty}(\mu)$ . As for the second part, I'm having trouble using the definition of $\infty$ -norm. I understand that the essential supremum is like taking the supremum of a function only over non-null sets, but how can I determine for which $a\in \mathbb R$ the set $\{|f-g|>a\}$ is of measure zero if $g$ is arbitrary?
This counterexample doesn't seem to work, as $g\equiv 0$ is continuous and $||f-g||_\infty=||f||_\infty=0$ . However, take $f$ defined as $$f(x) = \begin{cases} 1 & \text{$x \in [0,1/2)$} \\ 0 & \text{$x \in [1/2,1]$} \end{cases}$$ which clearly verifies $f\in L^\infty(\mu)$ . Take any continuous function $g:[0,1]\to\mathbb{C}$ . If $|g(1/2)| < 1/2$ then $|g(x)| < 3/4$ for $x$ in some interval around $1/2$ , and since $f=1$ just to the left of $1/2$ on that interval, $||f-g||_\infty\geq 1/4$ . If $|g(1/2)|\geq 1/2$ then $|g(x)|>1/4$ for $x$ in some interval around $1/2$ , and since $f=0$ just to the right of $1/2$ on that interval, $||f-g||_\infty\geq 1/4$ . For this reason, taking $\varepsilon=1/4$ there is no continuous function $g$ such that $||f-g||_\infty < \varepsilon$ , and from here $C([0,1])$ is not dense in $L^\infty(\mu)$ .
|real-analysis|functional-analysis|measure-theory|
1