| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Can adjacent points exist in geometric space?
|
My question is going to focus on quite a counterintuitive thing. A couple of preliminaries: I understand geometric space as a set of points. A point, in turn, is an abstract idealization of an exact position in space, and there is a point at each position of geometric space. While exploring possible ways to define a line, I have come up with the definition of adjacent points: I call two non-coincident points of geometric space adjacent if there are no points between them. Intuition hints there is no problem with such a definition. However, it turns out to be more complex than I had thought. A point is a position in space, so it is possible to assign a unique number to each point. It is well known that a smallest strictly positive rational number does not exist (see here for a proof), which implies that there will always be a point between any two distinct points. Now here comes the question: do adjacent points somehow exist? If not, then how is it possible that geometric space is
|
This is just a question of which model one chooses to build and study. Hilbert’s axioms for geometry include the following statement: “For any two different points on a line there is a point on that line that is between them”. But you can define a space in another (discrete) way, so that there are adjacent points. For example, take a connected graph and call the set of its vertices a space. There is a natural distance on that space: define the distance between two vertices as the length of the shortest path joining them. There are adjacent points in this space. As to which model is better for real-world tasks, that depends on the task. In any case, it’s a big question whether there are “adjacent points” in real-world space, and what that would even mean.
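The discrete model in this answer is easy to make concrete. Below is a minimal sketch (the graph and vertex names are made up for illustration) that computes the shortest-path distance on a small connected graph; two vertices at distance 1 are "adjacent" in exactly the sense of the question: no third vertex lies strictly between them.

```python
from collections import deque

def graph_distance(graph, u, v):
    """Length of the shortest path from u to v in an unweighted graph (BFS)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in graph[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return None  # v is unreachable from u

# A small connected graph (a 4-cycle), chosen only for illustration.
G = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
```

Here `graph_distance(G, "a", "b")` is 1, and no vertex `w` satisfies `d(a,w) + d(w,b) == 1`, so "a" and "b" are adjacent points of this discrete space.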
|
|geometry|measure-theory|definition|
| 0
|
Why can we substitute $ V_{\mu ; \nu} $ for $ V_{\mu \nu} $ when deriving the contracted Bianchi identity?
|
Starting from $ ( A_\mu B_\nu )_{; \sigma ; \rho} - ( A_\mu B_\nu )_{; \rho ; \sigma} = A_\alpha B_\nu R^\alpha_{\mu \rho \sigma} + A_\mu B_\alpha R^\alpha_{\nu \rho \sigma} $ , where $ A_\mu B_\nu $ is the outer product of two vectors $ A_\mu $ and $ B_\nu $ , I follow the argument up to substituting $ V_{\mu \nu} $ for $ A_\mu B_\nu $ , so it becomes $ V_{\mu \nu ; \sigma ; \rho} - V_{\mu \nu ; \rho ; \sigma} = V_{\alpha \nu} R^\alpha_{\mu \rho \sigma} + V_{\mu \alpha} R^\alpha_{\nu \rho \sigma} $ . But I don't see why we may then substitute $ V_{\mu ; \nu} $ for $ V_{\mu \nu} $ , giving $ V_{\mu ; \sigma ; \rho ; \nu} - V_{\mu ; \rho ; \sigma ; \nu} = V_{\alpha ; \nu} R^\alpha_{\mu \rho \sigma} + V_{\mu ; \alpha} R^\alpha_{\nu \rho \sigma} $ . Isn't $ V_{\mu \nu} $ a rank-2 tensor and $ V_{\mu ; \nu} $ a vector?
|
The formula $$ T_{\mu \nu ; \sigma ; \rho} - T_{\mu \nu ; \rho ; \sigma} = T_{\alpha \nu} R^\alpha_{\mu \rho \sigma} + T_{\mu \alpha} R^\alpha_{\nu \rho \sigma} $$ holds for all rank-2 tensors $T_{\mu \nu}$ . (And $T_{\mu \nu}$ does not have to be of the form $A_\mu B_\nu$ .) In particular, $V_{\mu ; \nu}$ is a rank-2 tensor, so the formula applies to it.
|
|differential-geometry|tensor-products|tensors|tensor-rank|outer-product|
| 1
|
How to find the roots of this equation?
|
I encountered the following question while learning about complex numbers. I did reach a solution, yet I feel like I took a long route; I am sure there are better ways to solve it. I will share my solution below. If there are any new suggestions with explanation, I would be more than thankful. Find the roots of the following equation: $$x^4-2x^3-x^2+2x+10=0$$
|
When I have to solve a quartic equation $x^4+\alpha x^3+\beta x^2+\gamma x+\delta=0$ , usually the first thing that I do is to apply the substitution $x=y-\frac\alpha4$ , because then I get a quartic equation without a cubic term. In your case, that means making the substitution $x=y+\frac12$ . Note that $$p\left(y+\frac12\right)=y^4-\frac{5y^2}2+\frac{169}{16}.$$ Not only is there no cubic term, there is also no term of degree $1$ . And the roots of this quartic equation (which is a biquadratic equation) are $$\frac32-i,\frac32+i,-\frac32-i,\text{ and }-\frac32+i.$$ So, you have \begin{align}p(x)=0&\iff p\left(y+\frac12\right)=0\\&\iff y^4-\frac{5y^2}2+\frac{169}{16}=0\\&\iff y=\frac32-i\vee y=\frac32+i\vee{}\\&\phantom{\iff}\vee y=-\frac32-i\vee y=-\frac32+i\\&\iff x-\frac12=\frac32-i\vee x-\frac12=\frac32+i\vee{}\\&\phantom{\iff}\vee x-\frac12=-\frac32-i\vee x-\frac12=-\frac32+i\\&\iff x=2-i\vee x=2+i\vee x=-1-i\vee x=-1+i.\end{align}
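A quick numerical sanity check (plain Python complex arithmetic) that the four claimed roots satisfy the original quartic:

```python
def p(z):
    # The original quartic x^4 - 2x^3 - x^2 + 2x + 10.
    return z**4 - 2*z**3 - z**2 + 2*z + 10

roots = [2 - 1j, 2 + 1j, -1 - 1j, -1 + 1j]
residuals = [abs(p(r)) for r in roots]  # all should be ~0
```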
|
|complex-numbers|roots|
| 1
|
Dirichlet's approximation theorem with even or odd denominators
|
It follows from Dirichlet's approximation theorem that for any irrational $\alpha$ , $$0<\left|\alpha-\frac pq\right|<\frac1{q^2}$$ for infinitely many pairs of integers $(p,q).$ In fact, Hurwitz's theorem is a nice result which says that the best we can do is replace $\ \frac{1}{q^2}\ $ with $\ \frac{1}{\sqrt{5}q^2},$ and that the result fails if we replace $\sqrt{5}$ with any number $A>\sqrt{5},$ so this $\ \sqrt{5}$ is optimal (a least upper bound). My question is the following. Is it true that, for any irrational $\alpha$ , $$0<\left|\alpha-\frac pq\right|<\frac1{q^2}$$ for infinitely many pairs of integers $(p,q)$ with $q$ even? And similarly for $q$ odd? Or can we construct an irrational number such that the above inequality holds for only finitely many even (similarly: odd) $q$ ?
|
Every irrational $\alpha$ has infinitely many such rational approximations with $q$ odd. Indeed, it is known that all of the convergents (in the sense of continued fractions) to $\alpha$ satisfy the inequality in Dirichlet's approximation theorem. If the continued fraction of $\alpha$ equals $[a_0;a_1,a_2,\dots]$ , then the denominators of the convergents of $\alpha$ satisfy $k_0 = 1$ , $k_1=a_1$ , and $k_n=a_nk_{n-1}+k_{n-2}$ for $n\ge2$ . In particular, if $k_{n-2}$ is odd and $k_{n-1}$ is even, then $k_n$ is automatically odd. In other words, it's impossible for the sequence of denominators to have two consecutive even terms, and therefore there are infinitely many convergents with odd denominator. There are certainly irrational numbers all of whose convergents have odd denominators; the simplest one is $\frac1{\sqrt2} = [0;1,2,2,2,\dots]$ . However, this only happens when the partial quotients $a_j$ are all even, which forces the improved inequality $\left|\alpha-\frac{h_n}{k_n}\right| < \frac{1}{2k_n^2}$ .
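The convergent recurrence is easy to check numerically. The sketch below computes the convergents of $1/\sqrt2 = [0;1,2,2,2,\dots]$ and confirms that every denominator is odd and that every convergent satisfies the Dirichlet bound $|\alpha - h_n/k_n| < 1/k_n^2$ :

```python
from fractions import Fraction
import math

def convergents(partial_quotients):
    """Yield the convergents h_n/k_n of [a0; a1, a2, ...] via the standard recurrence."""
    h_prev, h = 1, partial_quotients[0]   # h_{-1} = 1, h_0 = a0
    k_prev, k = 0, 1                      # k_{-1} = 0, k_0 = 1
    yield Fraction(h, k)
    for a in partial_quotients[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

alpha = 1 / math.sqrt(2)
cf = [0, 1] + [2] * 10                    # [0; 1, 2, 2, 2, ...]
convs = list(convergents(cf))
```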
|
|number-theory|irrationality-measure|liouville-numbers|
| 0
|
Find a non-degenerate vector perpendicular to a given 3D vector, general solution
|
I stumbled across the following problem: for a general non-degenerate 3D vector $\vec{r} = (x,y,z)$ find a perpendicular vector $\vec{v}$ (so $\vec{v} \cdot \vec{r} = 0$ ). For simplicity, since we assume that $\vec{r}$ is non-degenerate, we might as well assume that $|\vec{r}| = 1$ . Actually, this problem can be simplified even further: for a given unit vector $\vec{r} = (x,y,z)$ find any non-degenerate vector $\vec{v}$ that is linearly independent from $\vec{r}$ (then we can take the cross product $\vec{r} \times \vec{v}$ and get a perpendicular vector). I will therefore not distinguish between finding any linearly independent vector and finding a perpendicular vector, since the latter can be formed by crossing the former with the original vector. In 2D, this has a simple solution: for $\vec{r} = (x,y)$ an obvious vector perpendicular to it is $(-y,x)$ , and it is unique (up to a constant multiple). In 3D, despite there being a whole 2D linear space of perpendicular vectors, finding any of those
|
If whatever tentative definition of "nice" implies that $\vec{r} \mapsto \vec{v} = \vec{v}(\vec{r})$ is continuous, then this is impossible to achieve for all $\vec{r} \in \mathbf{R}^3 - 0$ . As you mention, the distinction between linearly independent and orthogonal is not important: in $\mathbf{R}^3$ you can take the cross product; more generally you can take the projection $\vec{v} - \frac{\vec{v} \cdot \vec{r}}{\vec{r} \cdot \vec{r}} \vec{r}$ . As you also mention, we can restrict to $|\vec{r}| = 1$ : if there is a construction that works for all of $\mathbf{R}^3 - 0$ then it works on $S^2$ ; conversely, $\vec{v} = \vec{v}(\vec{r}/|\vec{r}|)$ extends to all of $\mathbf{R}^3 - 0$ . But if we are looking for a continuous choice of $\vec{v} \in \mathbf{R}^3 - 0$ for each $\vec{r} \in \mathbf{R}^3 - 0$ with $\vec{v} \cdot \vec{r} = 0$ , this is exactly ruled out by the hairy ball theorem. More generally, such a continuous choice is possible on $\mathbf{R}^n - 0$ exactly when $n$ is even.
|
|linear-algebra|coordinate-systems|
| 1
|
Multiple definitions of casus irreducibilis
|
In the case of cubic equations, Casus irreducibilis occurs when none of the roots is rational and when all three roots are distinct and real (...) — Wikipedia's Casus irreducibilis article. So, $x^3-3x+1=0$ is definitely an example of casus irreducibilis. Cardano's formula can also express a rational root in terms of non-real radicals (even though this is unnecessary), as in the example $x^3-15x-4=0$ . Some ( Working with casus irreducibilis ) call this equation a casus irreducibilis, but this disagrees with the Wikipedia definition quoted above, since it has a rational solution, namely $x=\sqrt[3]{2+11i}+\sqrt[3]{2-11i}=4$ . Does the question in the link just involve a misinterpretation of casus irreducibilis, or are there trustworthy books or other sources which support the claim that equations like $x^3-15x-4=0$ (which yield a rational root through Cardano's formula, though unnecessarily, via roots of complex numbers) are casus irreducibilis?
|
The real and imaginary parts of a cube root of an arbitrary complex number cannot always be expressed as a finite sequence of arithmetic operations, square root extractions, and cube root extractions over the real numbers. A proof is here: https://en.wikipedia.org/wiki/Casus_irreducibilis Implicit is that the real and imaginary parts of solutions to cubics and quartics cannot always be expressed as a finite sequence of arithmetic operations and root extractions over the coefficients and constants. This does not contradict the solvability of cubics and quartics: each solution as a whole can be expressed as a finite sequence of arithmetic operations and root extractions over the coefficients and constants. There is a pattern here. For quadratics, the solutions' real and imaginary parts can each be expressed as a finite sequence of arithmetic operations and square root extractions over the coefficients and constants. For cubics and quartics, the solutions as a whole can be expressed as a
|
|polynomials|radicals|cubics|
| 0
|
Lagrange quartic resolvent $x_1+ix_2-x_3-ix_4$
|
Suppose we want to solve the "reduced" quartic equation $x^4+px^2+qx+r=0$ by means of a Lagrange resolvent. I denote the roots by $x_1, x_2, x_3, x_4$ ; we have $x_1+x_2+x_3+x_4=0$ . In many texts one typically reads: the "generic" Lagrange resolvent would be $R=x_1+ix_2-x_3-ix_4$ ; however, Lagrange found simpler resolvents, for example $(x_1+x_2)(x_3+x_4)$ . That's fine, but I still want to see how $R$ works. When one permutes the roots in all possible ways, $R^4$ takes six (rather than three) distinct values. So we get an equation $\prod (x-\ldots)$ of degree $6$ (its coefficients are polynomials in $p,q,r$ ). At this point some texts say "with some tricks, it can be reduced to degree 3". I computed this equation: its coefficients are not pleasant and show no clear pattern, so I guess the tricks should not be applied to the explicit form. Hence the question: what are the tricks? How does one reduce this equation to degree $3$ ?
|
I would like to add even more detail to the answer by @UWS. I, too, realized that one should examine the orbit of $R\bar{R}$ and found the $x_1x_2+x_3x_4$ expressions. But then it's still not immediately clear how to actually recover the roots from these expressions, so here's my attempt. Consider $(x_1+x_2)^2 + (x_3+x_4)^2 = x_1^2 + x_2^2 +x_3^2 + x_4^2 + 2(x_1x_2+x_3x_4)$ . Because we have $x_1+x_2+x_3+x_4 = 0$ , we get $(x_1+x_2)^2 + (x_3+x_4)^2 = (x_1+x_2)^2 + (-x_1-x_2)^2 = 2(x_1+x_2)^2$ . The sum $x_1^2 + x_2^2 +x_3^2 + x_4^2$ is symmetric in the roots, therefore it equals some $S$ which is a polynomial expression in the coefficients. From the resolvent cubic we found that $x_1x_2+x_3x_4$ equals some $Y_1$ (again a function of the coefficients), and similarly for the other expressions in the orbit. Rearranging we get $$ x_1+x_2 = \sqrt{\frac{S}2 + Y_1} \\ x_3+x_4 = -\sqrt{\frac{S}2 + Y_1} \\ x_1+x_3 = \sqrt{\frac{S}2 + Y_2} \\ x_2+x_4 = -\sqrt{\frac{S}2 + Y_2} \\ x_1+x_4 = \sqrt{\frac{S}2 + Y_3} \\ x_2+x_3 = -\sqrt{\frac{S}2 + Y_3} $$
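The key algebraic step above is quick to check numerically. Below is a tiny sketch with made-up roots (chosen only so that they sum to zero, as for a reduced quartic):

```python
# Hypothetical roots, chosen for illustration; they sum to zero.
x1, x2, x3, x4 = 1.0, 2.0, -1.0, -2.0
assert x1 + x2 + x3 + x4 == 0

lhs = (x1 + x2)**2 + (x3 + x4)**2
S = x1**2 + x2**2 + x3**2 + x4**2        # symmetric in the roots
rhs = S + 2 * (x1*x2 + x3*x4)
# Because the roots sum to zero, lhs also equals 2*(x1 + x2)**2.
```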
|
|abstract-algebra|galois-theory|quartics|
| 0
|
Is "up to natural isomorphism" crucial?
|
Let $\mathcal{C}$ be a category. By diagram I mean a covariant functor $F\colon\mathcal{J}\to\mathcal{C}$ for some category $\mathcal{J}$ . In this source it is said that a commutative diagram is a diagram $F\colon\mathcal{J}\to\mathcal{C}$ such that there exists a posetal category $\mathcal{P}$ and diagrams $G\colon\mathcal{J}\to\mathcal{P}$ and $H\colon\mathcal{P}\to\mathcal{C}$ such that the composition functor $H\circ G$ is naturally isomorphic to $F$ . My question is: can we drop the condition that $H\circ G$ must be naturally isomorphic to $F$ and just instead insist that $H\circ G=F$ ? Would such a definition be just as general as the quoted definition? It appears to me that if $H\circ G$ is naturally isomorphic to $F$ , then I should be able to modify my functor $H$ to some functor $\tilde{H}\colon\mathcal{P}\to\mathcal{C}$ in such a way that $\tilde{H}\circ G=F$ . Is this the case?
|
I think the issue is that we want to factor through a partially ordered set , which in particular entails the antisymmetry condition ( $x\leq y \land y\leq x \implies x=y$ ). Consider as diagram shape $\mathcal{I}$ the free living isomorphism $\mathcal{I}=(0\cong 1)$ , where $\mathcal{I}(i,j)=\{\ast\}$ for all $i,j \in \{0,1\}$ . Consider a functor $F:\mathcal{I}\rightarrow\mathcal{C}$ , which picks two distinct (but isomorphic) objects. For example something like $\Bbb Z/2$ and $\{\pm 1\}$ as abelian groups. You will never be able to factor this functor through a single poset (this poset has necessarily to be the terminal category $\mathbb{1}$ , which has only one object!). This deficit is resolved by either requiring a natural isomorphism to our original functor or allowing for non-automorphic isomorphisms in our poset structure. In your link this is called a proset , other words for that are posetal category or thin category .
|
|category-theory|definition|functors|natural-transformations|
| 1
|
If $A$ is a symmetric positive definite matrix, show that $f(x) = x^TAx$ is convex.
|
Let: \begin{gather*} f: \mathbb{R}^n \to \mathbb{R}, \quad A \in \mathbb{R}^{n \times n}, b \in \mathbb{R}^n, x \in \mathbb{R}^n, c \in \mathbb{R} \\ f(x) = x^T A x + b^T x + c \\ \end{gather*} If $A$ is symmetric positive definite, show that $f$ is convex. The domain of $f$ is convex. Algebraically, it will suffice to show that for any $x,y \in \mathbb{R}^n$ and $\alpha \in [0,1]$ : \begin{gather*} f(\alpha x + (1 - \alpha) y) \le \alpha f(x) + (1 - \alpha) f(y) \\ \end{gather*} Expanding the left-hand side: \begin{align*} f(\alpha x + (1 - \alpha) y) = [\alpha x + (1 - \alpha) y]^T [\alpha A x + (1 - \alpha) A y] + b^T [\alpha x + (1 - \alpha) y] + c \\ \end{align*} Expanding the right-hand side: \begin{gather*} \alpha f(x) + (1 - \alpha) f(y) = \alpha x^T A x + (1-\alpha) y^T A y + \alpha b^T x + (1-\alpha) b^T y + c \\ \end{gather*} Consider the right-hand side minus the left-hand side. It will suffice to show that this expression is nonnegative. The $b^T x$ , $b^T y$ , and $c$ terms cancel out.
|
What if you were to use $$(y-x)^{\top}A(y-x) > 0$$ $$\implies y^{\top}Ay + x^{\top}Ax > y^{\top}Ax + x^{\top}Ay$$ $$\implies y^{\top}Ay + x^{\top}Ax > 2y^{\top}Ax$$ $$\implies (1-\alpha)\alpha y^{\top}Ay + (1-\alpha)\alpha x^{\top}Ax > 2(1-\alpha)\alpha y^{\top}Ax.$$ [The fact that $A$ is positive definite gives $(y-x)^{\top}A(y-x)>0$ and the fact that $A$ is symmetric implies $x^{\top}Ay = y^{\top}Ax$ .]
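For a numerical sanity check of the convexity inequality, here is a small sketch with a made-up symmetric positive definite $A$ and arbitrary made-up $b$ , $c$ , $x$ , $y$ , $\alpha$ :

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def quad_form(A, u):
    # Computes u^T A u for a matrix given as a list of rows.
    return sum(u[i] * dot(A[i], u) for i in range(len(u)))

A = [[2.0, 0.5], [0.5, 1.0]]   # symmetric, positive definite (det = 1.75 > 0)
b = [1.0, -1.0]
c = 3.0

def f(u):
    return quad_form(A, u) + dot(b, u) + c

x, y, alpha = [1.0, 2.0], [-0.5, 0.3], 0.3
z = [alpha * xi + (1 - alpha) * yi for xi, yi in zip(x, y)]
```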
|
|convex-optimization|quadratics|positive-definite|
| 1
|
Balls out of a sack
|
To understand combinatorics better I formulated a question for myself (not homework!) that I'm trying to solve: Imagine a sack with 6 red balls, 5 yellow balls and 4 blue balls. Pick six of them without putting them back. What is the probability that at least 2 of the six balls are yellow and at least 1 is red? (The other three balls can be anything.) First I'd find out how many possibilities there are to pick three balls. Here I'm not sure how to do it. My first guess would be $\binom{6+5+4}{3}$ but I think it's less because we have multiple balls of each color? Then I would need to find how many possibilities there are such that 2 of the three balls are yellow and 1 is red. Maybe it's $\binom{5}{2}\cdot\binom{6}{1}$ ? Then we would have to divide those values.
|
There are $\binom{15}6$ ways to choose any $6$ balls from the $15=6+5+4$ balls that we have. How many ways are there to choose $6$ balls such that at least $1$ is red and at least $2$ are yellow? Let us count the opposite, which is easier. If we pick $0$ red balls, we have to choose all $6$ balls from the yellow and blue ones, so there are $\binom96$ ways to do that. If we pick $0$ yellow balls, then the number of ways is $\binom{10}6$ . If we pick exactly $1$ yellow ball, the number of combinations is $\binom51\binom{10}5$ (choose which yellow ball, then $5$ balls from the $10$ non-yellow ones). Note that none of the cases mentioned above overlap: it can't be that we picked $0$ red and $0$ yellow balls, and it can't be that we picked $0$ red balls and exactly $1$ yellow ball, since there are only $4$ blue balls. So the answer is $$\frac{\binom{15}6-\binom96-\binom{10}6-\binom51\binom{10}5}{\binom{15}6}=\frac{3451}{5005}=\frac{493}{715}.$$
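The count can be reproduced in a few lines and cross-checked by brute force over all possible draws; note that the exactly-one-yellow case carries a factor $\binom51$ for the choice of which yellow ball is drawn:

```python
from math import comb
from fractions import Fraction
from itertools import combinations

total = comb(15, 6)                     # any 6 of the 15 balls
no_red = comb(9, 6)                     # 0 red: all 6 from the 9 yellow + blue
no_yellow = comb(10, 6)                 # 0 yellow: all 6 from the 10 red + blue
one_yellow = comb(5, 1) * comb(10, 5)   # which yellow ball, then 5 non-yellow
prob = Fraction(total - no_red - no_yellow - one_yellow, total)

# Brute-force cross-check over all C(15, 6) = 5005 possible draws.
balls = "R" * 6 + "Y" * 5 + "B" * 4
favorable = sum(
    1 for pick in combinations(range(15), 6)
    if sum(balls[i] == "R" for i in pick) >= 1
    and sum(balls[i] == "Y" for i in pick) >= 2
)
```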
|
|combinatorics|
| 0
|
Prove $T$ with bounded basis sum in Hilbert space is compact.
|
Let $T:\mathcal{H}\rightarrow \mathcal{H}$ be a continuous linear operator on a Hilbert space and $\{b_i\: |\: i \in I\}$ an orthonormal basis. Prove that if $$\sum_{i\in I}\lVert T b_i\rVert^2<\infty,$$ then $T$ is compact. Hilbert spaces are reflexive, as is easily seen by applying the Riesz representation theorem twice. This means that if $x_n$ is a sequence whose norm is bounded by $M$ , there is a subsequence such that $x_{n_j}\rightharpoonup x$ . By continuity of $T$ , we obtain that $T(x_{n_j})\rightharpoonup T(x)$ . I want to show this convergence is strong. Because we have a basis, Parseval gives: $$\lVert T(x_{n_j})-T(x)\rVert^2=\sum_{i\in I}|\langle T(x_{n_j}-x),b_i\rangle |^2=\sum_{i\in I}|\langle x_{n_j}-x,T^*(b_i) \rangle|^2 $$ By our hypothesis, for every $\varepsilon>0$ there is $I_o$ with $|I_o|<\infty$ such that $\sum_{i\in I_o^C}\lVert Tb_i\rVert^2<\varepsilon$ . Using weak convergence, for every $i\in I_o$ we have $|\langle x_{n_j}-x, T^*(b_i) \rangle|\rightarrow 0$ .
|
A proof by showing $T(B_{\mathcal H})$ is totally bounded: given $\varepsilon >0$ , let $F \subset I$ be finite such that $$\sum_{i \in I \setminus F}\lVert Tb_i\rVert^2 < \frac{\varepsilon^2}4.$$ Since $E=\langle b_i : i \in F\rangle$ is finite-dimensional, $T|_E$ is compact, so by total boundedness there is a finite set $S \subset \mathcal H$ with $\min_{y \in S} \lVert x-y\rVert < \frac\varepsilon2$ for all $x \in T(B_E)$ . Now, given $x \in B_{\mathcal H}$ , write $x=z + w$ where $z \in E$ and $w \in E^\perp$ . Notice $|\langle x, b_i\rangle| \le 1$ , so the definition of $F$ implies $\lVert T(w)\rVert < \frac\varepsilon2$ . Since there is some $y \in S$ with $\lVert T(z)-y\rVert < \frac\varepsilon2$ , it satisfies $$\lVert T(x) - y\rVert \le \lVert T(z)-y\rVert + \lVert T(w) \rVert < \varepsilon.$$
|
|functional-analysis|hilbert-spaces|compact-operators|
| 0
|
Prove $\frac{(n/2)!(n/2)!}{\left(\frac{n+i}2\right)!\left(\frac{n-i}2\right)!}=\prod_{j=1}^{i/2}\frac{\frac n2+1-j}{\frac n2+j}$
|
I have a textbook ( Asymptopia by Joel Spencer, p. 66 ) that states that $$\frac{(n/2)!(n/2)!}{\left(\frac{n+i}2\right)!\left(\frac{n-i}2\right)!} =\prod_{j=1}^{i/2}\frac{\frac n2+1-j}{\frac n2+j}.$$ The equivalence between the LHS and $\left(\prod_{j=1}^{n/2}j^2\right)\left(\prod_{j=1}^{(n+i)/2}j^{-1}\right)\left(\prod_{j=1}^{(n-i)/2}j^{-1}\right)$ I understand; it results from the product-notation form of the factorial and some simple algebraic manipulation. However, for the life of me, I don't see how one can then manipulate it to get the form of the RHS. I've numerically tested it with different values, so I believe it is true. For example, if $n=20$ and $i=2$ , both sides equal $10/11$ . Could someone explain how one arrives at the RHS expression? Edit: I believe the book implies that, due to the use of $n$ even (or the floor of $n/2$ ), $i$ should be even.
|
Assume that $n$ and $i$ are even. Then, $$\frac{(n/2)!(n/2)!}{\left(\frac{n+i}2\right)!\left(\frac{n-i}2\right)!}=\frac AB$$ where $$A=\frac{(n/2)!}{\left(\frac{n-i}2\right)!}=\prod_{k=\frac{n-i}2+1}^{n/2}k=\prod_{j=1}^{i/2}\left(\frac n2+1-j\right)$$ and $$B=\frac{\left(\frac{n+i}2\right)!}{(n/2)!}=\prod_{k=\frac n2+1}^{(n+i)/2}k=\prod_{j=1}^{i/2}\left(\frac n2+j\right).$$
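The identity is easy to confirm with exact rational arithmetic; the sketch below checks it for all even $i \le n \le 20$ :

```python
from fractions import Fraction
from math import factorial, prod

def lhs(n, i):
    return Fraction(factorial(n // 2) ** 2,
                    factorial((n + i) // 2) * factorial((n - i) // 2))

def rhs(n, i):
    return prod(Fraction(n // 2 + 1 - j, n // 2 + j)
                for j in range(1, i // 2 + 1))

checks = [(n, i) for n in range(2, 21, 2) for i in range(0, n + 1, 2)]
```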
|
|factorial|products|
| 1
|
What does c mean in the following question on cardinality?
|
I found a question in Modern Real Analysis by William P. Ziemer. The question can be found in Section 2.1 Question 1. It goes: Use the fact that $\mathbb N =$ { $ n: n = 2k$ for some $k \in \mathbb N$ } $\cup$ { $ n: n = 2k+1$ for some $k \in \mathbb N$ } to prove $c \cdot c = c$ . What could they be referring to by c? I know it has something to do with cardinality because the question continues as follows: Consequently, $card(\mathbb R^n) = c$ for each $n \in \mathbb N$ .
|
Based on the structure of the question, it seems likely that the dot is just the product operator, which in the case of cardinalities means that $c \cdot c = card(\mathbb{R}) \cdot card(\mathbb{R}) = card(\mathbb{R} \times \mathbb{R})$ , i.e. the product of two cardinalities is the cardinality of the Cartesian product of the sets.
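The even/odd hint in the exercise suggests the standard interleaving trick behind $c \cdot c = c$ : merge two binary expansions into one by writing one on the even positions and the other on the odd positions. A toy sketch on finite bit strings (real binary expansions need care with tails of $1$ s, which is why this is only an illustration of the idea):

```python
def interleave(a, b):
    """Merge two equal-length bit strings: a goes to even positions, b to odd."""
    return "".join(x + y for x, y in zip(a, b))

def deinterleave(s):
    # Inverse map: even-indexed characters, then odd-indexed characters.
    return s[::2], s[1::2]

pair = ("10110", "01001")
merged = interleave(*pair)
```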
|
|real-analysis|question-verification|
| 1
|
Compactness of a continuous functional calculus of $\mathbf{1}_{\{\lambda\}}$ and normal operator $N$
|
Consider a normal operator $N$ on a complex Hilbert space $\mathcal{H}$ such that, for each $\lambda\in\sigma(N)\setminus\{0\}$ , $\lambda$ is isolated in $\sigma(N)$ and $\mathrm{dim}\,\mathrm{ker}(N-\lambda I)<\infty$ . Moreover, let $\rho$ denote the continuous functional calculus on $\sigma(N)$ . How would one verify that $\rho(\mathbf{1}_{\{\lambda\}})$ and $N$ are compact? I realize that $\rho(\mathbf{1}_{\{\lambda\}})$ is a self-adjoint idempotent and that $(N-\lambda I)\rho(\mathbf{1}_{\{\lambda\}})=0$ , but I am unsure whether I should try to come up with some functional calculus trick or use some purely theoretical insight.
|
As Bruno B mentioned in comments, $\rho(1_{\{\lambda\}})$ is the projection onto $\mathrm{ker}(N - \lambda I)$ , which is finite-dimensional by assumption, whence $\rho(1_{\{\lambda\}})$ is a finite rank projection and therefore compact. Now, we show that $N$ itself is compact. There are two cases: $\sigma(N)$ is finite. Then $N = \sum_{\lambda \in \sigma(N) \setminus \{0\}} \lambda \rho(1_{\{\lambda\}})$ is a finite linear combination of finite rank projections, whence $N$ is finite rank itself and thus compact; $\sigma(N)$ is infinite. As $\sigma(N)$ is compact, it has a cluster point. But every point in $\sigma(N)$ is isolated, apart from potentially $0$ , so the unique cluster point of $\sigma(N)$ is $0$ . Since any subset of $\mathbb{C}$ is separable, there are at most countably many points in $\sigma(N) \setminus \{0\}$ (again because every point in the latter set is isolated), so if we write $\sigma(N) \setminus \{0\} = \{\lambda_1, \lambda_2, \cdots\}$ , then $\lambda_i \to 0$
|
|functional-analysis|functional-calculus|
| 1
|
Smallest prime number that does not divide $n$
|
For each integer $n>2$ , define $p(n)$ to be the smallest prime number that does not divide $n$ . Prove that $$\lim_{n\to \infty}\frac{p(n)}{n}=0.$$ My argument is: we only need to prove $$\lim_{n\to \infty,\; n\ \text{square-free}}\frac{p(n)}{n}=0$$ since $\frac{p(n)}{n}\leq \frac{p(n)}{\mathrm{Rad}(n)}=\frac{p(\mathrm{Rad}(n))}{\mathrm{Rad}(n)}$ . Let $p_k$ be the $k$ -th prime number. For each square-free number $n$ with $p_1\dots p_{k-1}\le n<p_1\dots p_k$ , $p(n)$ must be less than $p_{k+1}$ , since otherwise $p_1\dots p_k\mid n$ . So $\frac{p(n)}{n}\leq \frac{p_k}{p_1\dots p_{k-1}}$ . To finish, we must prove $$\lim_{k\to \infty}\frac{p_k}{p_1\dots p_{k-1}}=0.$$ For a moment I thought this was obvious, but I couldn't prove that the limit equals $0$ . How can I continue?
|
You can show this without Bertrand's postulate. Instead, you can use the much more elementary lemma: $$\forall n\geq 3, \quad \prod_{i=1}^n p_i \geq p_{n+1} + p_{n+2}.$$ This can be shown by considering $p_2 \dots p_n \pm 2$ . Neither number is divisible by any of $p_1,\dots,p_n$ : each is odd (so not divisible by $p_1=2$ ), and each is $\equiv \pm 2 \pmod{p_i}$ for $2\le i\le n$ . Since $p_2 \dots p_n \pm 2 > 1$ , each has a prime factor strictly greater than $p_n$ , so there exist $i,j>n$ such that $$p_2 \dots p_n + 2 \geq p_i, \qquad p_2 \dots p_n - 2 \geq p_j.$$ Moreover, Euclid's algorithm gives $$\gcd(p_2 \dots p_n + 2, p_2 \dots p_n - 2) = \gcd(\underbrace{p_2 \dots p_n - 2}_{\text{odd}}, 4) = 1$$ (we use the assumption $n\geq 3$ so that $p_2 \dots p_n - 2 > 4$ ). This means that $p_2 \dots p_n + 2$ and $p_2 \dots p_n - 2$ do not share any factor; in particular, $p_i \neq p_j$ , so $p_i+p_j\geq p_{n+1}+p_{n+2}$ . Finally $$\prod_{i=1}^n p_i = (p_2 \dots p_n + 2) + (p_2 \dots p_n - 2) \geq p_i+p_j\geq p_{n+1}+p_{n+2}.$$ Plugging this into your fraction finishes the proof.
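The lemma is cheap to verify for small $n$ ; a quick sketch with a naive prime generator:

```python
from math import prod

def first_primes(m):
    """Return the first m primes by trial division (fine for small m)."""
    primes = []
    candidate = 2
    while len(primes) < m:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

p = first_primes(14)   # p[0] = p_1 = 2, p[1] = p_2 = 3, ...
lemma_holds = all(prod(p[:n]) >= p[n] + p[n + 1] for n in range(3, 12))
```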
|
|number-theory|elementary-number-theory|
| 0
|
Multiplication of a time-domain sinusoid to a s-domain (Laplace) signal?
|
I am confused about the transformations between the time domain and the frequency domain. I have a signal $y(t)$ which is a sum of multiple sinusoids. I band-pass filter this signal to extract one sinusoid and mix the result with two sinusoids. This is shown in the following figure. I want to obtain a frequency-domain representation of this whole thing. Suppose the signal is $Y(s)$ ; then the filtered signal is $BP(s)Y(s)$ . I then multiply $BP(s)Y(s)$ with the sinusoids $\cos(\omega t)$ and $\sin(\omega t)$ , but what does this look like in the frequency domain? I am trying to obtain a single transfer function between $Y(s)$ (the input) and the outputs (the mixed sinusoids).
|
Convolution in one domain equals point-wise multiplication in the other domain, so multiplication in the time domain means convolution in the frequency domain. Such a convolution in the frequency domain does not have a transfer function, because a transfer function describes only multiplication in the frequency domain, i.e. an LTI system; mixing with a sinusoid is a time-varying operation.
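A small pure-Python DFT illustrates the point. Multiplying a cosine at bin 5 by a cosine at bin 3 moves the energy to bins $5 \pm 3$ (plus their mirror images): the spectrum has been shifted/convolved, not scaled, which is exactly why no single transfer function relates input to output.

```python
import cmath, math

N = 32
# Time-domain product of two cosines at 5 and 3 cycles per frame.
x = [math.cos(2 * math.pi * 5 * t / N) * math.cos(2 * math.pi * 3 * t / N)
     for t in range(N)]

def dft(signal):
    """Naive discrete Fourier transform, O(N^2), fine for a demo."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

spectrum = dft(x)
peaks = {k for k, X in enumerate(spectrum) if abs(X) > 1.0}
```

The peaks land at bins 2 and 8 (and the conjugate bins 24 and 30), i.e. at $5-3$ and $5+3$, not at the input frequencies.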
|
|trigonometry|laplace-transform|control-theory|nonlinear-analysis|inverse-laplace|
| 0
|
Norm equality for operators on a Hilbert tensor product
|
Ok, this could be tough. Suppose you are given bounded linear operators, $\{A_n\}_{n=1}^N\subset\mathcal{B}(\mathcal{H})$ on a Hilbert space $\mathcal{H}$ and $\{B_n\}_{n=1}^N\subset\mathcal{B}(\mathcal{K})$ on another Hilbert space $\mathcal{K}$ , with $N\geq1$ . If $\{U_n\}_{n=1}^N\subset\mathcal{U}(\mathcal{K})$ is a family of unitary operators on $\mathcal{K}$ , is it true that $$ \left\|\sum\limits_{n=1}^N A_n\otimes U_nB_n\right\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}=\left\|\sum\limits_{n=1}^N A_n\otimes B_nU_n^*\right\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}\,\,? $$ Notice that for $N=1$ , the equality surely holds since $$ \|A\otimes UB\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}=\|A\|_{\mathcal{B}(\mathcal{H})}\|UB\|_{\mathcal{B}(\mathcal{K})}=\|A\|_{\mathcal{B}(\mathcal{H})}\|BU^*\|_{\mathcal{B}(\mathcal{K})}=\|A\otimes BU^*\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}. $$ From $N=2$ , it seems that nothing can be said.
|
This isn’t true even when $H = \mathbb{C}$ and $N = 2$ . Indeed, set $K = \mathbb{C}^2$ , $B_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ , $U_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ , $B_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ , $U_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ . Then $U_1B_1 + U_2B_2 = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}$ has norm $2$ but $B_1U_1^\ast + B_2U_2^\ast = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ has norm $1$ .
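The counterexample is quick to verify by hand or with a few lines of code; the resulting matrices are diagonal, so their operator norms are just the largest absolute diagonal entries, $2$ and $1$ :

```python
def mm(A, B):
    """Product of 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

B1 = [[0, 1], [0, 0]]
U1 = [[0, 1], [1, 0]]   # real symmetric unitary, so U1* = U1
B2 = [[0, 0], [0, 1]]
U2 = [[1, 0], [0, 1]]   # identity, so U2* = U2

left = madd(mm(U1, B1), mm(U2, B2))    # U1 B1 + U2 B2
right = madd(mm(B1, U1), mm(B2, U2))   # B1 U1* + B2 U2*
```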
|
|operator-theory|hilbert-spaces|tensor-products|operator-algebras|matrix-norms|
| 0
|
Notation for domain and range of a function
|
Suppose we have a function $f$ and sets $X$ and $Y$ such that $f : X \rightarrow Y$ . What exactly does this notation mean, out of the following options? $\text{domain(} f) \subseteq X \wedge \text{range(} f) \subseteq Y$ $\text{domain(} f) = X \wedge \text{range(} f) \subseteq Y$ $\text{domain(} f) = X \wedge \text{range(} f) = Y$ For example, look at these three statements for the real-valued square root: $\sqrt{x} : \mathbb{R} \rightarrow \mathbb{R}$ $\sqrt{x} : \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$ $\sqrt{x} : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ Which of them are correct?
|
In the notation $f: X \rightarrow Y$ , $X$ is the domain of $f$ and $Y$ is the codomain. The range of $f$ , which we sometimes write as $f(X)$ , is a subset of $Y$ . So then for the square root function, both $\sqrt{}: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}$ and $\sqrt{}: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ are valid, but $\sqrt{}: \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$ is not because the function is not defined on negative numbers. There is also a related function $\sqrt{}: \mathbb{N} \rightarrow \mathbb{R}$ which only takes inputs in the natural numbers, but note that we couldn't define $\sqrt{}: \mathbb{N} \rightarrow \mathbb{N}$ because then there would be inputs that don't produce a valid output.
|
|functions|notation|
| 1
|
Invertible positive maps which are not automorphisms
|
Let $A$ be a unital C*-algebra. Is there a unital positive self-map $F\colon A\to A$ which is invertible (i.e. injective and surjective) but neither a $*$-automorphism nor a $*$-antiautomorphism (i.e. a $*$-isomorphism with the opposite algebra)? If yes, what does its Gelfand-Naimark-Segal covariant representation look like, associated to a state on $A$ which is invariant under $F$ ? Are there examples of such maps which are involutive (i.e. such that $F^2=\mathrm{id}_A$ , or equivalently $F=F^{-1}$ )?
|
$A = \mathbb{M}_2(\mathbb{C})$ and $F(X) = X^T$ .
|
|functional-analysis|operator-theory|operator-algebras|
| 0
|
What shape will a point at a constant distance along a parabola trace?
|
Define a parabola whose directrix is the Y axis and whose focus is a point on the X axis ( example parabola ). Define a distance n and take a point D on the parabola whose distance from the X axis along the parabola is equal to n ( illustration of distance n ). Now move the focus along the X axis while keeping the distance of the point D constant (it always has the same distance, along the parabola, from the X axis). What shape will the point D trace? Note: I have a suspicion it will be a line but I don't know how to prove it. Also, this is not homework or a work project, so there is no need to hurry in answering it.
|
The first thing we need to do is compute the arclength function of a parabola with a fixed vertex (labeled $H$ in the diagram) and a variable focus $C$ . To this end, it is more convenient to transform the coordinate system to place the vertex at $(0,c/2)$ and the focus at $(0,c)$ . Then the parabola has equation $$y = \frac{x^2}{2c} + \frac{c}{2}.$$ Hence the arclength from $x = 0$ to $x = k$ is $$S(k) = \int_{x=0}^k \sqrt{1 + \left(\frac{dy}{dx}\right)^2} \, dx = \int_{x=0}^k \sqrt{1 + \frac{x^2}{c^2}} \, dx = \frac{k \sqrt{c^2+k^2}}{2 c}-\frac{1}{2} c \log \left(\frac{\sqrt{c^2+k^2}-k}{c}\right).$$ So if $S(k) = n$ , we require the $x$ -coordinate of the point on the parabola to be $$k = S^{-1}(n)$$ and we wish to trace the locus of $$\left(S^{-1}(n), \frac{(S^{-1}(n))^2}{2c} + \frac{c}{2}\right)$$ for $c > 0$ . Such a locus clearly has no elementary closed-form expression, so we resort to numerical methods. A plot for $n \in \{0.1, 0.5, 1, 2\}$ is shown below:
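The closed form for $S(k)$ is easy to cross-check numerically; a quick sketch comparing it with direct quadrature (Simpson's rule, pure Python) of the arclength integrand:

```python
import math

def arclength_closed(c, k):
    # Closed form of the arclength integral derived above.
    s = math.sqrt(c * c + k * k)
    return k * s / (2 * c) - (c / 2) * math.log((s - k) / c)

def arclength_simpson(c, k, n=1000):
    """Simpson's rule for the arclength of y = x^2/(2c) + c/2 on [0, k]; n even."""
    f = lambda x: math.sqrt(1 + (x / c) ** 2)
    h = k / n
    total = f(0) + f(k)
    total += sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return total * h / 3
```

Inverting $S$ to get $k = S^{-1}(n)$ could then be done numerically, e.g. by bisection, since $S$ is strictly increasing.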
|
|calculus|geometry|
| 0
|
Methods for Determining Divisibility by 4 for the Formula $2^n - 46$
|
I'm working on a problem where I need to determine the conditions under which $2^n - 46$ is divisible by 4, where $n$ is a non-negative integer. I understand that for any power of 2 greater than $2^2$ , the result is always divisible by 4. However, when subtracting 46 from $2^n$ , I'm unsure how to systematically approach or solve for $n$ to ensure the result remains divisible by 4. I've considered direct computation for small values of $n$ and observed patterns, but I'm looking for a more general method or a mathematical insight that could help solve this more efficiently or elegantly. Specifically, I'm interested in any theorems, properties, or techniques that could be applied to this problem. Could anyone provide guidance on how to approach this problem or point me towards relevant mathematical concepts or methods that could simplify determining the divisibility of $2^n - 46$ by 4? Thank you in advance for any assistance!
|
If $n=0$ then $2^n-46=-45$ isn't divisible by $4$. If $n=1$ then $2^n-46=-44$ is divisible by $4$. If $n\ge 2$ then $2^n-46$ isn't divisible by $4$, being the difference of a number divisible by $4$ and a number not divisible by $4$. You could write $2^n-46 = 4(2^{n-2}-12)+2$. The number $n-2$ is non-negative, hence $2^{n-2}-12$ is an integer. Also, $0 \le 2 < 4$. That is the division-with-remainder formula. The remainder is $2$, not $0$.
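A brute-force check of this case analysis (my own sketch):

```python
# among small non-negative n, find those with 2**n - 46 divisible by 4
hits = [n for n in range(20) if (2**n - 46) % 4 == 0]
```

Only $n=1$ survives, matching the case analysis.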
|
|logarithms|diophantine-equations|
| 1
|
Equality characterization of the reverse Cauchy-Schwarz inequality in a Lorentzian manifold
|
Let $(M, g)$ be a Lorentzian manifold (signature $-++\dots$) with a time orientation and suppose that $v, w \in T_pM$ are causal vectors that are in the same light cone (i.e., both future-directed or both past-directed). Denote by $∥v∥_g = \sqrt{|g(v, v)|}$ the "norm" of $v$. Then the reverse C-S inequality holds, with equality iff $v$ and $w$ are collinear: $−g(v, w) ≥ ∥v∥_g∥w∥_g \tag{*}$ My try: adapting the proof of the C-S inequality from the Wikipedia page on the C-S inequality (finite-dimensional real vector space case), I get: Since $v$ and $w$ are causal, they are timelike or lightlike, so we have $∥w∥_g^2=|g(w,w)|=-g(w,w)$ and $∥v∥_g^2=|g(v,v)|=-g(v,v)$, and since they are equally directed, $g(v,w)\le 0$. Define the function $p:\Bbb R \to \Bbb R: t\to p(t):=g(tw+v,tw+v)$ $p(t)=g(tw+v,tw+v)=t^2g(w,w)+2tg(v,w)+g(v,v)$ $=-t^2∥w∥_g^2+2tg(v,w)-∥v∥_g^2 \le 0 \quad \forall t \in \Bbb R$ Then the discriminant of this quadratic ( $g(w,w)\neq 0$ ) satisfies $\Delta=4g(v,w)^2-4g(w,w)g(v,v)\le 0$, from which (*) follows.
|
To prove the reverse Cauchy-Schwarz inequality $|g(v,w)|\geq \|v\|_g\|w\|_g$, you actually don't need to assume they lie in the same cone (but if you do, then the proportionality constant will be non-negative). We'll rely on some basic properties of Lorentzian inner product spaces, such as that the orthogonal complement of a timelike vector is a spacelike subspace, and that two non-zero null vectors which are orthogonal must be non-zero multiples of each other (otherwise you'd violate the Lorentzian signature). Here's the proof of the reverse Cauchy-Schwarz inequality: if $v=0$, then for all $w$ the claim is obvious. Likewise, if $w=0$, then again for all $v$ the claim is obvious. Henceforth we can assume $v\neq 0$ and $w\neq 0$. Suppose $v,w$ are both non-zero null vectors. Then the inequality is obvious, because we have $0$ on the RHS. We have equality if and only if $g(v,w)=0$, i.e. $v,w$ are orthogonal. By the 'fact' I mentioned above (a good exercise for you t
|
|differential-geometry|riemannian-geometry|cauchy-schwarz-inequality|general-relativity|semi-riemannian-geometry|
| 1
|
How to prove the implication $(\neg q \rightarrow (\neg p \lor \neg r))$?
|
I am working on a proof for the statement: Suppose $x, y \in \mathbb{R}$. If $5|x$ and $5|y$, then $5|xy$ Let's denote the propositions as follows: $P: 5|x$ $R: 5|y$ $Q: 5|xy$ Logically, this is represented as $(P \land R) \rightarrow Q$. Now, I am aiming to conduct a contrapositive proof, transforming my proposition into $\neg Q \rightarrow \neg (P \land R)$, and applying De Morgan's laws, I get $\neg Q \rightarrow (\neg P \lor \neg R)$. Here's my question: Do I need to prove that $(\neg P)$ is true OR $(\neg R)$ is true? Or do I have to prove both $(\neg P)$ is true AND $(\neg R)$ is true simultaneously? I am inclined to think that proving either $(\neg P)$ or $(\neg R)$ is sufficient, because otherwise there would not be any logical distinction between $(\neg Q \rightarrow (\neg P \lor \neg R))$ and $(\neg Q \rightarrow (\neg P \land \neg R))$. I want to ensure I am on the right track.
|
Remember that you don't actually know that $(P \land R) \implies Q$. That implication is the statement you are trying to prove. Then, after the rearranging, what you want to prove is $\neg Q \implies (\neg P) \lor (\neg R)$. That is, if we assume $\neg Q$, we want to show that this implies either $\neg P$ or $\neg R$. In the context of your proof, this means that if we assume $5 \nmid xy$, we want to show that $5 \nmid x$ or $5 \nmid y$. I don't know if the contrapositive is required for the particular exercise you're doing, but the proof can also be done without it - start with the fact that for $x \in \mathbb{N}$, if $5 \mid x$, then $x = 5k$ for some $k \in \mathbb{N}$.
|
|solution-verification|proof-writing|proof-explanation|
| 0
|
Show that $\partial x^2 = 2P(x)$ and $\partial \wedge x = 0$ (Geometric algebra)
|
Show that $\partial x^2 = 2P(x)$ and $\partial \wedge x = 0$ (Geometric algebra). I'm trying to show equations (2-1.32) and (2-1.33) on page 51 of Hestenes and Sobczyk's "Clifford Algebra to Geometric Calculus". $\partial |x|^2 = \partial x^2 = 2P(x) = 2x, \tag{1.32}$ $\partial \wedge x = 0, \tag{1.33}$ In trying to obtain $(1.32)$ , I used the definition $(1.2)$ $a\cdot\partial F(x) \equiv \left.\frac{\partial F(x+\tau a)}{\partial \tau}\right\vert_{\tau = 0} = \lim_{\tau\rightarrow 0} \frac{F(x+\tau a)-F(x)}{\tau},\tag{1.2}$ along with equation $(1.5)$ $\partial_x = P(\partial_x) = \sum_k a^ka_k\cdot \partial_x \tag{1.5}$ and the product rule $(1.24a)$ $\partial(FG) = \dot \partial\dot F G + \dot \partial F\dot G, \tag{1.24a}$ where the overdot designates the quantities to be differentiated (with the remaining ones treated as constants), to rewrite $\partial x^2$ as follows: $$\begin{aligned}\partial x^2 &= \partial (xx) = \sum_k a^ka_k\cdot\dot\partial \dot x x + \sum_k a^
|
$\sum_ka^ka_kx$ is not the same thing as $\sum_ka^ka_k\cdot x$ . Instead, $$ \partial x^2 = \sum_k(a^ka_kx + a^kxa_k) = \sum_ka^k(a_kx + xa_k) = 2\sum_ka^ka_k\cdot x = 2P(x) = 2x. $$ Better yet, if you know that $b\cdot\partial x = \partial b\cdot x = P(b)$ for constant $b$ then $$ \partial x^2 = \partial(x\cdot x) = \dot\partial(\dot x\cdot x) + \dot\partial(x\cdot\dot x) = 2\dot\partial x\cdot\dot x = 2P(x) = 2x. $$ For $\partial\wedge x$ just directly apply the formula for $\partial x^2$ : $$ \partial\wedge x = \partial\wedge\left(\frac12\partial x^2\right) = \frac12(\partial\wedge\partial)x^2 = 0. $$ We can rebracket like this because $x^2$ is a scalar.
|
|geometric-algebras|
| 1
|
Proof: If a connected graph G has only one cut vertex, then every longest path contains the cut vertex.
|
This is one of the exercises from my university's discrete math problem set (non-mandatory), and the job is to either find a counterexample or prove it. I considered approaching this problem using proof by contradiction – "Connected graph G contains exactly one cut vertex and there exists a longest path such that it doesn't contain the cut vertex." Let's label the cut vertex $x$ and the longest path $P$. If $P$ contains one of the neighbours of $x$, then we can make our path longer by one edge. And I'm somewhat lost as to how to proceed in the proof, since proving graph-related theorems is one of my weakest links and I'd like to get better. Is proof by contradiction a bad move to make?
|
You will not get better by having others give you the answers. So I will just give you some ideas on what to try. First of all, when dealing with "prove or disprove" questions, don't just decide on some proof strategy and get stuck immediately. First get some feeling for the question by fiddling around with examples... So in this specific instance, what is the simplest graph you can think of which has a cut vertex? Probably something like $\bullet - \bullet - \bullet$ . Does the longest path contain the cut vertex? Find another graph with a cut vertex. Try to find the longest paths and figure out whether they contain the cut vertex. Probably this makes you guess that the statement might be true. Now think of what kind of information you have at your disposal. You know you have a cut vertex, which means that deleting that vertex will make your graph disconnected. What would that mean for a longest path, which is a possible counterexample? Well, it would have to be completely in one
|
|discrete-mathematics|graph-theory|
| 0
|
$E[\exp(-sX)\exp(-sY)]$ for two identically distributed, but correlated random variables X and Y.
|
I am trying to figure out the following problem. I am trying to evaluate the expectation: $E[\exp(-sX)\exp(-sY)]$ , where $X$ and $Y$ are identically distributed, but correlated random variables, hence, they are NOT independent. I have access to the Laplace transform of the distribution (i.e., $E[\exp(-sX)]$ ) but not the exact PDF. I know that it is possible to take the inverse Laplace transform to get the PDF, but it might not be necessary due to the structure of the problem. I can also calculate the correlation coefficient $\rho(X,Y)$ from numerical data. So far I have done the following: $E[\exp(-sX)\exp(-sY)]=\\E_X[E_{Y|X}[\exp(-sX)\exp(-sY)]]=\\E_X[\exp(-sX)E_{Y|X}[\exp(-sY)]].$ Is there a simplification I am not seeing, or a way to incorporate the correlation coefficient and the known Laplace transforms $E[\exp(-sX)]$ and $E[\exp(-sY)]$ . Edit: What about the following approach: $Cov(A,B)=E[AB]-E[A]E[B]$ , where $A=\exp(-sX)$ , and $B=\exp(-sY)$ . Therefore, $E[A]=\mathcal{L}_X(
|
Suppose $(X,Y)$ is a 2-dimensional multivariate Gaussian with respective means $\mu_X,\mu_Y$ , variances $\sigma^2_X,\sigma^2_Y$ and covariance $\rho_{XY}$ . Note that I'm using the covariance rather than correlation just because it is simpler. Then $(X,Y) \sim \mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})$ , where $$\mathbf{\mu} =\begin{bmatrix} \mu_X\\ \mu_Y \end{bmatrix} \text{ and } \mathbf{\Sigma} = \begin{bmatrix} \sigma^2_X & \rho_{XY}\\ \rho_{XY} & \sigma^2_Y \end{bmatrix}.$$ Then the moment generating function $M_{XY}:\mathbb{R}^2\to\mathbb{R}$ of $(X,Y)$ is known to be given by: $$M_{XY}(\mathbf{t}) = \mathbb{E}\left[e^{\mathbf{t}\cdot (X,Y)^T}\right] = \exp\left(\mathbf{\mu}\cdot\mathbf{t} + \frac{1}{2}\mathbf{t}^T\mathbf{\Sigma}\mathbf{t}\right).$$ So, $$\mathbb{E}\left[\exp(-sX)\exp(-sY)\right] = M_{XY}(-s,-s) = \exp\left(-s\mu_X + \frac{s^2}{2}\sigma^2_X\right)\exp\left(-s\mu_Y + \frac{s^2}{2}\sigma^2_Y\right)\exp\left(s^2\rho_{XY}\right) = \mathbb{E}\left[\exp(-sX)\right]\ma
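A Monte Carlo spot check of this factorization (a sketch assuming standard normal marginals; the sampling construction $Y=\rho Z_1+\sqrt{1-\rho^2}\,Z_2$ and the function name are my own):

```python
import math, random

def mgf_check(s=0.5, rho=0.3, trials=200_000, seed=0):
    # Monte Carlo estimate of E[exp(-sX)exp(-sY)] for standard normal X, Y
    # with covariance rho, built from independent normals Z1, Z2
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = z1
        y = rho * z1 + math.sqrt(1 - rho * rho) * z2
        total += math.exp(-s * (x + y))
    estimate = total / trials
    # closed form from the joint MGF: E[e^{-sX}] E[e^{-sY}] e^{s^2 rho}
    exact = math.exp(s * s / 2) * math.exp(s * s / 2) * math.exp(s * s * rho)
    return estimate, exact

est, exact = mgf_check()
```

The extra factor $e^{s^2\rho_{XY}}$ is exactly the correction over the independent case.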
|
|statistics|expected-value|conditional-expectation|laplace-transform|correlation|
| 0
|
Why isn´t SAS similarity valid with directed angles?
|
I'm reading EGMO and there the author leaves the exercise of finding a pair of triangles ABC and XYZ such that AB : XY = BC : YZ and the directed angles ∠BCA = ∠YZX, but triangles ABC and XYZ are not similar. I think the example might be a pair of oppositely similar triangles, because the statement seems valid only for directly similar triangles. But I'm not sure whether that is the correct example or whether I'm misunderstanding the assumptions.
|
Old post, but for anyone reading the book: the errata for EGMO lists this problem as a mistake; it should be the normal SAS condition but with directed angles: Link for EGMO errata To see why, you may look at Theorem 2.4, where this very issue is the reason for the extra condition "P lies in both of the segments AB and XY, or in neither segment." Referring to Figure 2.2B, the SAS condition with directed angles would lead us to "conclude" that YPA is similar to BPX. Figure 2.2B
|
|geometry|
| 0
|
Is a finite semigroup with a unique idempotent $e$ always a group with unit $e$?
|
Given a finite semigroup $S$ containing a unique idempotent $e$ , I can show that every element $s\in S$ has an ''inverse" $s^{-1}\in S$ in the sense that $s s^{-1} = s^{-1}s = e$ : Since $S$ is finite, every element $s$ has an idempotent power $s^{p_s}$ with $p_s\geq1$ . It follows that $s^{2p_s}$ is also idempotent. Since $e$ is the only idempotent, we obtain $e=s^{2p_s}$ . Since $2p_s\geq 2$ , we can define $s^{-1} = s^{2p_s-1}$ in the sense stated above. Now my question: Is $e$ actually a unit of $S$ , i.e., does $es = se = s$ hold for every $s\in S$ ?
|
What you're trying to prove is not true. Consider for example the two-element semigroup $\{e, a\}$ with the operation $\ast$ that always outputs $e$ ("the null semigroup "). It's clearly associative, and $e$ is the unique idempotent, but this is not a group.
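The example can be verified exhaustively (a small sketch of my own):

```python
# Two-element null semigroup {e, a}: every product is e.
S = ["e", "a"]
op = lambda x, y: "e"

# associativity: (x*y)*z == x*(y*z) for all triples
assoc = all(op(op(x, y), z) == op(x, op(y, z)) for x in S for y in S for z in S)

# e is the unique idempotent, yet e is not an identity: e*a != a
idempotents = [x for x in S if op(x, x) == x]
```

Since `op("e", "a")` is `"e"` rather than `"a"`, the unique idempotent fails to be a unit, as the answer claims.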
|
|abstract-algebra|semigroups|idempotents|finite-semigroups|
| 1
|
Normal block upper triangular matrix proof
|
Prove that if a block upper triangular matrix is normal then its off-diagonal blocks are zero and each of its diagonal blocks is normal. This question was asked before, but it got just one answer which contains a mistake (it does not take into consideration the order of multiplication of blocks). A complex square matrix $A$ is normal if it commutes with its conjugate transpose $A^*$ ( $A$ is normal $\iff$ $A^*A=AA^*$ ). So let's say the matrix $M$ is normal and it is block upper triangular, so it looks like this: $$M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix} $$ $$M^* = \begin{pmatrix} A^* & 0 \\ B^* & C^* \end{pmatrix} $$ We know that $A$ and $C$ are square, and we want to prove that $A$ and $C$ are normal and $B=0$. If $M$ is normal, doing the block computations gives us the four following equations 1) $AA^* + BB^* = A^*A$ , 2) $A^*B=BC^*$ , 3) $B^*A=CB^*$ , 4) $C^*C+B^*B=CC^*$ . This is as far as I was able to get, have no idea how we can prove that $B=0$ from this. For context, I have compl
|
I think the following may be an argument which is roughly of the kind the OP requests: suppose that $T$ is a normal operator on $\mathbb C^n$ which has matrix $$ \left(\begin{array}{cc} A & B\\ 0 & D\end{array} \right) $$ then we claim that $B=0$ and $A$ and $D$ are normal. Now the condition that $T^*T = TT^*$ yields the following equations in $A,B$ and $D$ . $$ \begin{split} AA^*+BB^*&= A^*A, \\ BD^* &= A^*B\\ DB^* &= B^*A\\ DD^* &= B^*B+D^*D \end{split} $$ Now $\text{Mat}_n(\mathbb C)$ carries an inner product given by $\langle A,B \rangle = \text{tr}(AB^*) = \text{tr}(B^*A)$ . Indeed if $A = (a_{ij})$ and $B = (b_{ij})$ then $\text{tr}(AB^*) = \sum_{i=1}^n (AB^*)_{ii} = \sum_{i,j=1}^n a_{ij}\bar{b}_{ij}$ , thus the inner product is just what one obtains by identifying $\text{Mat}_n(\mathbb C)$ with $\mathbb C^{n^2}$ . But now the first of the above 4 equations implies that $\text{tr}(AA^*)+ \text{tr}(BB^*) = \text{tr}(A^*A)$ . Since $\text{tr}(AA^*) = \text{tr}(A^*A)$ , it follows t
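A small numerical illustration of the statement being proved (my own sketch, not part of the argument above): with $A=C=I$, the block matrix is normal exactly when $B=0$.

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conj_T(X):
    n = len(X)
    return [[X[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_normal(X, tol=1e-12):
    # check X X* == X* X entrywise
    a, b = mat_mul(X, conj_T(X)), mat_mul(conj_T(X), X)
    n = len(X)
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(n) for j in range(n))

# block upper triangular [[A, B], [0, C]] with A = C = I (both normal)
M_bad = [[1, 0, 1, 0],   # B has a nonzero entry -> M is not normal
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
M_good = [[1, 0, 0, 0],  # B = 0 -> M is normal
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]
```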
|
|linear-algebra|matrices|block-matrices|
| 0
|
If $A$ is a symmetric positive definite matrix, show that $f(x) = x^TAx$ is convex.
|
Let: \begin{gather*} f: \mathbb{R}^n \to \mathbb{R}, \quad A \in \mathbb{R}^{n \times n}, b \in \mathbb{R}^n, x \in \mathbb{R}^n, c \in \mathbb{R} \\ f(x) = x^T A x + b^T x + c \\ \end{gather*} If $A$ is symmetric positive definite, show that $f$ is convex. The domain of $f$ is convex. Algebraically, it will suffice to show that for any $x,y \in \mathbb{R}^n$ : \begin{gather*} f(\alpha x + (1 - \alpha) y) \le \alpha f(x) + (1 - \alpha) f(y) \\ \end{gather*} Expanding the left hand side: \begin{align*} f(\alpha x + (1 - \alpha) y) = [\alpha x + (1 - \alpha) y]^T [\alpha A x + (1 - \alpha) A y] + b^T [\alpha x + (1 - \alpha) y] + c \\ \end{align*} Expanding the right hand side: \begin{gather*} \alpha f(x) + (1 - \alpha) f(y) = \alpha x^T A x + (1-\alpha) y^T A y + \alpha b^T x + (1-\alpha) b^T y + c \\ \end{gather*} Consider the right hand minus the left-hand side. It will suffice to show that this expression is positive. The $b^T x$ and $c$ terms cancel out: \begin{gather*} \alpha f(x)
|
It is sufficient to show that $f$ is convex on any segment $[x,y]$ with $x,y \in \mathbb{R}^n$ . Let $\phi(t) = f(x+t(y-x))$ and note that $\phi''(t) = 2(y-x)^T A (y-x) \ge 0$ and so $\phi$ (and hence $f$ ) is convex.
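A quick numerical spot check of the convexity inequality (my own sketch; random segments with a specific positive definite matrix, a check rather than a proof):

```python
import random

def quadratic(x, A, b, c):
    # f(x) = x^T A x + b^T x + c
    n = len(x)
    quad = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
    return quad + sum(b[i] * x[i] for i in range(n)) + c

A = [[2.0, 0.5], [0.5, 1.0]]   # symmetric positive definite
b, c = [1.0, -2.0], 3.0
rng = random.Random(1)

ok = True
for _ in range(1000):
    x = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    y = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    lam = rng.random()
    z = [lam * x[i] + (1 - lam) * y[i] for i in range(2)]
    # f(lam x + (1-lam) y) <= lam f(x) + (1-lam) f(y), up to float error
    ok = ok and quadratic(z, A, b, c) <= (lam * quadratic(x, A, b, c)
                                          + (1 - lam) * quadratic(y, A, b, c) + 1e-9)
```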
|
|convex-optimization|quadratics|positive-definite|
| 0
|
Doubts about family of seminorms on space of test functions
|
This question is about the topology on $\mathscr{D}(\Omega)$ , the space of test functions. The conventional way to construct $\tau$ is to define a locally convex neighborhood base of $0$ , and check that it gives rise to a topological vector space structure. But an interesting alternate claim I have seen ( e.g. , Exercise 10.4 of Leoni's Sobolev Spaces book) is that the topology $\tau$ can be given through a particular family of seminorms. Despite spending several days trying to prove this, I simply cannot see how the seminorm topology is the same, and I'm wondering if any knowledgeable people could chime in. The Definitions For an open subset $\Omega \subseteq \mathbb{R}^n$ , given a compact $K \subseteq \Omega$ , define $$\mathscr{D}_K(\Omega) = \{f \in C^{\infty}(\Omega) : \text{supp}(f) \subseteq K\}.$$ It is well-known that these $\mathscr{D}_K$ admit a Fréchet space structure: simply take the seminorms $p_{n, K}(f) = \sum_{|\alpha| \leq n} \sup_{x \in K} |(\partial^{\alpha} f)(x
|
I don't know how to prove this rigorously, but I think the picture is the following: For simplicity, assume $K_n=B_n(0)$ on the Euclidean space. Assume $N$ is an open convex balanced neighborhood of $0$ in ${\mathscr D}$ , then $N\cap C_0^\infty(K_n)$ is open in $C_0^\infty(K_n)$ for every $n$ . Assume $N\cap C_0^\infty(K_1)$ is the set $\{u:\sup_{x\in B_1(0), \,\, |\alpha|\leq m_1}|\partial^\alpha u| . Now consider $N\cap C_0^\infty(K_2)$ : the point is, this set cannot have any restrictions on derivatives of order more than $m_1$ on any interior points of $K_1$ , because functions in $N\cap C_0^\infty(K_1)$ can have uncontrolled $m_1+1$ or more order derivatives at any interior point of $K_1$ . Therefore, if the defining condition for $N\cap C_0^\infty(K_2)$ have any restrictions on $m_1+1$ or more order derivatives, it must be something like $$ \sup_{x\in S, \,\, |\alpha|=m_1+1}|\partial^\alpha u| where $S$ is some set in $K_2-K_1$ . (This statement is the spot I don't know how to p
|
|functional-analysis|proof-explanation|distribution-theory|topological-vector-spaces|
| 0
|
How to prove point is NOT a limit point? $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$
|
I've seen a bunch of questions showing how to prove a point is a limit point, but almost nothing showing that a point is NOT a limit point. My definition of a limit point $x$ is that $\forall \epsilon >0, (B(x,\epsilon)\backslash\{x\})\cap E \neq \emptyset$ where $B(x,\epsilon)$ is the ball centered in $x$ with radius $\epsilon$ . Since we are in $\mathbb{R}$ , $B(x,\epsilon)=]x-\epsilon, x+\epsilon[$ . Let $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$ be a set. How do I show that 5/3 is not a limit point? Intuitively I can see that points will not "accumulate" at 5/3, but how do I show this rigorously?
|
Intuitively, a point is not a limit point of a sequence if its smallest distance to an element of the sequence (not including itself) is a positive number. What $n$ minimizes $d=|\frac{4n-3}{2n-1}-5/3|?$ $\frac{4n-3}{2n-1}-5/3=\frac{12n-9-5(2n-1)}{3(2n-1)}=\frac{2(n-2 )}{3(2n-1)}$ $\frac{2(n-2 )}{3(2n-1)}=\frac{1}{3}-\frac{1}{2n-1}$ So $n=2\implies d=0$ . So 5/3 is an element/point of the sequence. $n=1\implies d=2/3$ $n=3\implies d=2/15$ $n=4\implies d=4/21$ More generally higher values of $n$ take d further away from 0. If it were a limit point, there would exist elements of the sequence "arbitrarily close" to 5/3, but there are no points closer than 2/15, not counting 5/3 itself. We can move from intuition to the formal definition. Let $\epsilon =1/15$ . $\forall n\in \mathbb N \backslash \{2\}, |5/3-\frac{4n-3}{2n-1}|>\epsilon$ . This is in direct contradiction to a point being a limit point. Note a corollary of the formal definition of a limit point is also that there are infinite
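The distances can be computed exactly (my own sketch using exact rationals):

```python
from fractions import Fraction

# exact distances |(4n-3)/(2n-1) - 5/3| for n = 1..50
target = Fraction(5, 3)
dists = {n: abs(Fraction(4 * n - 3, 2 * n - 1) - target) for n in range(1, 51)}

# n = 2 gives the point 5/3 itself; every other term stays at least 2/15 away
min_other = min(d for n, d in dists.items() if n != 2)
```

The minimum over $n\neq 2$ is $2/15$, attained at $n=3$, which is why $\epsilon = 1/15$ works.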
|
|real-analysis|general-topology|real-numbers|
| 1
|
Generate number with odd number of divisors
|
I need to find the number of numbers from 1 to n that have an odd number of divisors. For example, 4 has 3 divisors - 1, 2, 4. Knowing that such numbers are formed by raising prime numbers to even powers in their product (2^2 = 4, 3^2 = 9, 2^2 * 3^2 = 36, etc.), I want to generate these numbers until I reach n, and then I can count how many there were. But I don't understand which square of a prime number to add at each step to go in ascending order. For example, to get the first number with an odd number of divisors we raise the prime number 2 to the 2nd power = 4; the next in ascending order will be 3 to the 2nd power = 9; but the next in ascending order will NOT be 5 to the 2nd power = 25, but 2 to the 4th power = 16, and only then comes 5 to the 2nd power. Is there an algorithm to iterate in ascending order over all numbers with an odd number of divisors using the product of prime numbers to even powers?
|
All the divisors of a natural number $n$ can be split into pairs in the following way: $a$ and $b$ go in a pair if $ab=n$. Since all the divisors are in pairs, there is an even number of them. When would this reasoning not work? When $a$ and $b$ are the same number, so $n=a^2$. Hence squares always have an odd number of divisors. Example: consider $12$. Its divisors are $1-12$, $2-6$, $3-4$; they go in pairs. Now consider $16$. Divisors: $1-16$, $2-8$, $4$. The number $4$ here is the "pair to itself", since $16=4^2$. Knowing this, there are $\lfloor \sqrt{n}\rfloor$ numbers which have an odd number of divisors in $\{1,2,…,n\}$.
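A brute-force confirmation of both claims (my own sketch): the numbers with an odd divisor count are exactly the squares, and there are $\lfloor\sqrt n\rfloor$ of them up to $n$.

```python
import math

def num_divisors(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

n = 100
odd_divisor_numbers = [m for m in range(1, n + 1) if num_divisors(m) % 2 == 1]
odd_count = len(odd_divisor_numbers)
```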
|
|combinatorics|prime-numbers|
| 1
|
How to prove this identity of ceiling function?
|
My book writes down this identity of the least integer function: $$\lceil x\rceil +\left\lceil x + \frac{1}{n}\right \rceil + \left\lceil x + \frac{2}{n}\right \rceil + \cdots +\left\lceil x + \frac{n -1}{n}\right \rceil = \lceil nx\rceil + n-1.$$ It didn't deduce it, however. I googled a bit about the ceiling function but couldn't find any derivation. It is much like Hermite's identity for the floor function. Can anyone show me how to deduce this?
|
Since $\lfloor -x \rfloor = - \lceil x \rceil$ , substituting $x\to -x$ in the Hermite's Floor Function Identity, one obtains \begin{align*} -\lceil nx \rceil = \lfloor n(-x) \rfloor &= \sum_{k=0}^{n-1} \left \lfloor -\left(x-\frac{k}{n}\right) \right \rfloor = -\sum_{k=0}^{n-1} \left \lceil x-\frac{k}{n} \right \rceil \\ \lceil nx \rceil &= \sum_{k=0}^{n-1} \left \lceil x-\frac{k}{n} \right \rceil \end{align*} which is precisely the Hermite's Ceiling Function Identity Rearranging the index by substitution $k\to n-k$ \begin{align*} \lceil nx \rceil &= \sum_{k=1}^n \left \lceil x-\frac{n-k}{n} \right \rceil \\ &= \sum_{k=1}^n \left( \left \lceil x+\frac{k}{n} \right \rceil-1 \right) \\ &= \sum_{k=0}^{n-1} \left \lceil x+\frac{k}{n} \right \rceil \: + \lceil x+1 \rceil - \lceil x \rceil -n \\ &= \sum_{k=0}^{n-1} \left \lceil x+\frac{k}{n} \right \rceil \; -n+1 \\ \lceil nx \rceil +n-1 &= \sum_{k=0}^{n-1} \left \lceil x+\frac{k}{n} \right \rceil \end{align*}
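The identity can be spot-checked on exact rationals (my own sketch; float inputs would risk rounding at integer boundaries):

```python
import math
from fractions import Fraction

def hermite_ceiling_holds(x, n):
    # check  sum_{k=0}^{n-1} ceil(x + k/n) == ceil(n*x) + n - 1
    lhs = sum(math.ceil(x + Fraction(k, n)) for k in range(n))
    return lhs == math.ceil(n * x) + n - 1

# try many rational x (positive and negative) and several n
checks = all(
    hermite_ceiling_holds(Fraction(p, q), n)
    for p in range(-6, 7)
    for q in (1, 2, 3, 5)
    for n in range(1, 6)
)
```

`math.ceil` works exactly on `Fraction`, so the boundary cases (integer $x$, $x+k/n$ an integer) are handled without floating-point error.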
|
|functions|ceiling-and-floor-functions|
| 0
|
Does a reverse triangle inequality hold for any nonnegative convex function?
|
Let $f:\mathbb{R}^2\to\mathbb{R}$ be nonnegative and strictly convex in the second argument everywhere (that is, $f(p,\cdot)$ is strictly convex for all $p\in\mathbb{R}$ ). Moreover, assume $f(p,p)=0$ for all $p$ . A primary example of such a function is $f(p,q)=(q-p)^2$ . If necessary, $f$ can be twice-continuously differentiable. Now, take any $p < q < r$ . Is it always the case that the following "reverse triangle inequality'' holds? $$ f(p,q)+f(q,r) < f(p,r)? $$ If $f$ is symmetric (i.e., $f(p,q)=f(q,p)$ ), then I can show this by writing $q=\lambda p + (1-\lambda)r $ for some $\lambda\in(0,1)$ . In particular, \begin{align*} f(p,q)+f(q,r) & =f(p,q)+f(r,q)\\ & =f(p,\lambda p+(1-\lambda)r)+f(r,\lambda p+(1-\lambda)r)\\ & < \lambda f(p,p)+(1-\lambda)f(p,r)+\lambda f(r,p)+(1-\lambda)f(r,r)\\ & =(1-\lambda)f(p,r)+\lambda f(p,r)=f(p,r). \end{align*} But, I would like to show this for nonsymmetric $f$ , if it is true.
|
Note that if $f(p,q)$ is a function with the given properties, then so is $h(p,q) = f(p,q)g(p)$ for any positive function $g$ whatsoever. But $h(p,q)+h(q,r) < h(p,r)$ is equivalent to $f(p,q)g(p)+f(q,r)g(q) < f(p,r)g(p)$ , or $g(p) \bigl( f(p,q)-f(p,r) \bigr) < -f(q,r)g(q)$ ; and it should be easy to choose a function $g$ for which this is not always true. For a specific example, let $f(p,q)=(q-p)^2$ and $g(p) = |p|+1$ , and set $h(p,q)=f(p,q)g(p)$ . Then, choosing $0=p < q < r$ , we have $$ h(0,q)+h(q,r)-h(0,r) = q^2\cdot1 + (r-q)^2\cdot(q+1) - r^2\cdot1 = q (r-q) (r-q-2), $$ which is undesirably positive if $q < r-2$ .
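Plugging in concrete numbers (my own sketch) confirms both the closed form and the failure of the inequality:

```python
def f(p, q):
    return (q - p) ** 2

def g(p):
    return abs(p) + 1

def h(p, q):
    return f(p, q) * g(p)

# choose 0 = p < q < r with r - q > 2, e.g. q = 1, r = 5
p, q, r = 0, 1, 5
gap = h(p, q) + h(q, r) - h(p, r)      # positive => reverse inequality fails
formula = q * (r - q) * (r - q - 2)    # the closed form above
```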
|
|real-analysis|calculus|convex-analysis|
| 1
|
$\omega$-th or $(\omega + 1)$-th when putting odd numbers after even numbers?
|
The following clip is taken from Chapter 4 - Cantor: Detour through Infinity (Davis, 2018, p. 56)[2]. When putting odd numbers after even numbers, what should the index for the first odd number ( $1$ in the sequence) be, $\omega$ -th or $(\omega + 1)$ -th? Davis put the former, but I found the latter elsewhere on the Internet ( e.g. ). I am confused. Could you help? Reference Davis, M. (2018). The universal computer: the road from Leibniz to Turing. CRC Press.
|
Ordinals probably shouldn't be used as an index to reference the individual elements of some well-ordered set, especially not $\omega$ which is a limit ordinal (as opposed to a successor ordinal or zero). The order type of the even numbers as listed is $\omega$ and the order type of the even numbers followed by "1" as above is $\omega+1$ . One convention would be to index an element by the ordinal of the rest of the well-ordered set preceding it, which would give "1" the index $\omega$ but would then give the least element "2" the index $0$ instead of $1$ .
|
|set-theory|cardinals|infinity|ordinals|
| 0
|
Dual of topological vector space
|
Assume $E$ is an infinite dimensional vector space. Assume $\tau_1, \tau_2$ are two topologies that make $E$ into topological vector spaces. Assume $\tau_1$ is strictly finer than $\tau_2$ . Is there any examples of $E, \tau_1, \tau_2$ , such that the dual of $(E, \tau_1)$ is the same (as set of linear functions $E\to{\mathbb C}$ ) as the dual of $(E, \tau_2)$ ?
|
If $E$ is normed, and $\tau_1$ and $\tau_2$ are the strong and weak topologies, then $(E, \tau_1)$ and $(E, \tau_2)$ have the same duals, by definition: $\tau_2$ is the weakest/coarsest topology that makes the linear functionals on $E$ continuous. If $E$ is infinite-dimensional, then $\tau_1$ is strictly finer than $\tau_2$ .
|
|functional-analysis|topological-vector-spaces|
| 1
|
Probability of a deck of cards
|
Suppose you have a deck of 52 cards with 13 cards in each of the 4 suits. What is the probability of drawing 2 cards and getting a pair if you replace the first card and shuffle before drawing the second card?
|
There are three possible interpretations of "pair" in this context, which you need to clarify in your post. The first is "same card", the second is "same face value", the third is "same suit". Given that the definition of "pair" in card games is usually "same face value", that's the interpretation I'll be going with (unless otherwise stated by you). The probability of drawing any card is $1$ , obviously. If the card is replaced and the deck shuffled before drawing again, there will be $4$ out of $52$ with the same face value as the one you drew first. The probability of getting a pair is then $1\times\frac{4}{52}=\frac{1}{13}$ .
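Under the same-face-value interpretation, the probability can be confirmed by enumerating all equally likely ordered outcomes of the two independent draws (my own sketch):

```python
from fractions import Fraction
from itertools import product

ranks = range(13)
suits = range(4)
deck = list(product(ranks, suits))      # 52 cards as (rank, suit) pairs

# draw with replacement: all 52 * 52 ordered outcomes are equally likely
pairs = sum(1 for c1 in deck for c2 in deck if c1[0] == c2[0])
prob = Fraction(pairs, len(deck) ** 2)  # probability of matching face values
```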
|
|probability|
| 0
|
Placing $a, b, c,$ $a, a, b, b, c, c,$ $a, a, a, b, b, b, c, c, c, \dots$ into rows of $N$
|
Imagine this sequence made of letters $\{a, b, c\}$ : $$a_1, b_1, c_1, a_2, a_2, b_2, b_2, c_2, c_2, a_3, a_3, a_3, b_3, b_3, b_3, c_3, c_3, c_3, \dots$$ and we will divide it into groups of $N$ and place them into a table, with each row containing a group. We need to determine the value of $[P,Q]$ where $P$ represents the row and $Q$ represents the column. For example, if we divide it into groups of $4$ , it will look like \begin{array}{|c|c|c|c|} \hline \mathbf{P} & \mathbf{Q} & \mathbf{R} & \mathbf{S} \\ \hline \mathrm{a}_1 & \mathrm{b}_1 & \mathrm{c}_1 & \mathrm{a}_2 \\ \hline \mathrm{a}_2 & \mathrm{b}_2 & \mathrm{b}_2 & \mathrm{c}_2 \\ \hline \mathrm{c}_2 & \mathrm{a}_3 & \mathrm{a}_3 & \mathrm{a}_3 \\ \hline \mathrm{b}_3 & \mathrm{b}_3 & \mathrm{b}_3 & \mathrm{c}_3 \\ \hline \mathrm{c}_3 & \mathrm{c}_3 & \mathrm{a}_4 & \mathrm{a}_4 \\ \hline \end{array} Now, I need to determine the value at the $m$ th row and $n$ th column after dividing it into groups of $T$ . I came up with a formula that can d
|
First of all, let's split the problem into two parts. (1) What is the correspondence between position in the sequence and letter ? (2) What is the correspondence between position in your rectangular table and position in the sequence ? Of these, #1 is the trickier one. Let's number our sequence starting at 0; then the position of the first $a_1$ is 0, the position of the first $a_2$ is 3 more than that, the position of the first $a_3$ is 6 more than that, and so on. So the position of the first $a_k$ is 3 times ( $0+1+\cdots+k-1$ ), which equals $\frac{3k(k-1)}2$ . But you already have essentially that formula, and I think you want the reverse. So, what's in position $n$ ? The answer will be "the first $a_k$ " exactly when $n=\frac{3k(k-1)}2$ , which gives us a quadratic equation for $k$ : $3k^2-3k-2n=0$ . The standard formula for quadratic equations tells us that this is true when $k=\frac{3\pm\sqrt{9+24n}}{6}$ . This gives two solutions, but of course we have to take the $+$ sign to
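Both steps can be sketched in code (the function names and the 0-based sequence indexing are my own conventions):

```python
import math

def letter_at(pos):
    # 0-indexed position in  a_1 b_1 c_1 a_2 a_2 b_2 b_2 c_2 c_2 ...
    # block k (the run a_k..c_k) starts at 3k(k-1)/2; recover k via the
    # quadratic formula k = (3 + sqrt(9 + 24*pos)) / 6, then floor
    k = (3 + math.isqrt(9 + 24 * pos)) // 6
    if 3 * k * (k - 1) // 2 > pos:      # guard against rounding
        k -= 1
    offset = pos - 3 * k * (k - 1) // 2  # position inside block k
    return "abc"[offset // k], k         # (letter, subscript)

def table_entry(m, n, T):
    # m-th row, n-th column (both 1-indexed) when rows have length T
    return letter_at((m - 1) * T + (n - 1))

# cross-check against the sequence generated directly
seq = [ltr for k in range(1, 6) for ltr in "a" * k + "b" * k + "c" * k]
matches = all(letter_at(i)[0] == s for i, s in enumerate(seq))
```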
|
|sequences-and-series|algebra-precalculus|pattern-recognition|
| 1
|
Minimal set of conditions for the Feynman-Kac formula to hold
|
The Feynman-Kac formula tells us that the solution of the PDE $$ \frac{\partial u}{\partial t}(x,t) + \mu(x,t) \frac{\partial u}{\partial x}(x,t) + \tfrac{1}{2} \sigma^2(x,t) \frac{\partial^2 u}{\partial x^2}(x,t) -V(x,t) u(x,t) + f(x,t) = 0$$ with terminal condition $u(x,T)=\psi(x)$ can be represented as $$ u(x,t) = E^Q\left[ \int_t^T e^{- \int_t^\tau V(X_s,s)\, ds}f(X_\tau,\tau)d\tau + e^{-\int_t^T V(X_\tau,\tau)\, d\tau}\psi(X_T) \,\Bigg|\, X_t=x \right] $$ where $X_\tau$ satisfies the SDE $$ dX_t = \mu(X,t)\,dt + \sigma(X,t)\,dW^Q_t. $$ The article does not specify the regularity of the various coefficients of the PDE. The Wikipedia article states: "A proof that the above formula is a solution of the differential equation is long, difficult and not presented here". Where can I find this "long, difficult proof" quoted in the article? What is the minimal set of conditions on the coefficients for which the FK formula still holds (and so we have uniqueness for the PDE)? In my notes
|
In Shreve-Karatzas, p. 366, the Feynman-Kac formula is studied for continuous coefficients of linear growth. In the work "Feynman-Kac Formulas for Solutions to Degenerate Elliptic and Parabolic Boundary-Value and Obstacle Problems with Dirichlet Boundary Conditions", the authors develop a Feynman-Kac formulation in the case of weak regularity of the coefficients (including Hölder), for the operator $$Au:=\frac{1}{2}\operatorname{tr}(a(x)D^{2}u(x))-\langle b(x),Du(x)\rangle+c(x)u(x)$$ and boundary data $g$; they treat the elliptic case $f=Au$ and the parabolic case $u_{t}=Au-f$.
|
|partial-differential-equations|reference-request|stochastic-calculus|stochastic-analysis|stochastic-differential-equations|
| 1
|
Convergence of positive random vector
|
Suppose I have a sequence of positive random vectors $\vec{X}_N$ of fixed length $l$ . That is, $\vec{X}_N = (x_N^{(1)}, x_N^{(2)},\cdots, x_N^{(l)})$ where each entry $x_N^{(i)} > 0$ . Suppose I further have that $\mathbb{E}\left[ \sum_{i=1}^l \left( \frac{(x_N^{(i)}-1)^2}{x_N^{(i)}}\right) \right] = O(1/N)$ so the LHS in the limit $N \to \infty$ is $0$ . It seems clear that $ \vec{X}_N$ in the limit $N \to \infty$ should be the ones-vector ${\bf{1}} = (1,1,\cdots, 1)$ . But is this convergence almost surely, or is this a convergence in probability? Or is this assertion false?
|
Let $(\Omega,\mathcal{A},\mathbb{P})$ be our probability space. For $1\leqslant p<\infty$ , let $L_p$ denote the space of (equivalence classes of) random variables $X:\Omega\to \mathbb{R}$ (that is, $\mathcal{A}$ - $\mathcal{B}$ -measurable functions, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}$ ) having finite $p^{th}$ moment, equipped with the norm $\|\cdot\|_p$ satisfying $\|x\|_p^p=\mathbb{E}[|x|^p]$ . Recall that for $1\leqslant p\leqslant q<\infty$ and $x\in L_q$ , $x\in L_p$ and $\|x\|_p\leqslant \|x\|_q$ (from, for example, Jensen's inequality applied with the convex function $\phi(t)=t^{q/p}$ ). First let's start with a single sequence $(x_N)_{N=1}^\infty\subset L_2$ and assume that each is almost surely positive. Let $$t_N=\frac{(x_N-1)^2}{x_N}.$$ Then $\lim_N \mathbb{E}\frac{(x_N-1)^2}{x_N}=0$ is exactly the assumption that $\lim_N \|t_N\|_1=0$ . This implies that every subsequence of $t_N$ has a further subsequence which converges almost surely to $0$ . This implies that every subsequence
|
|random-variables|probability-limit-theorems|convergence-probability|
| 1
|
If $\sum b_n$ diverges, and $\sum a_n$ converges, and $b_n$ is monotonic does $\lim_{n\to\infty} \frac{a_n}{b_n}$ exist?
|
$a_n,b_n>0$ . I've determined that if $b_n$ is monotonically increasing, then $\frac{1}{b_n}$ is monotonically decreasing and bounded, so $\sum \frac{a_n}{b_n}$ converges, and so the limit indeed exists and is equal to zero. However, I'm not sure about the case where $b_n$ is monotonically decreasing. Intuitively, $a_n$ must decrease faster than $b_n$ , because the series converges, but I don't know how to prove that the limit must then exist.
|
The main flaw in the intuition here seems to be that any sequence must be either increasing or decreasing. There are many sequences that are neither, and one can construct many counterexamples this way. Consider the following: define $b_n=\frac1n$ for all $n$ . Define $$ a_n = \begin{cases} \frac1n, &\text{if $n$ is a power of $2$}, \\ 0, &\text{otherwise}. \end{cases} $$ Then $\sum b_n$ diverges, but $\sum a_n = \sum_{k=0}^\infty \frac1{2^k} = 2$ converges. However, $\frac{a_n}{b_n}$ equals $1$ infinitely often and $0$ infinitely often, and thus $\lim_{n\to\infty} \frac{a_n}{b_n}$ does not exist. (If one requires that $a_n$ is strictly positive, one can just change the $0$ s to incredibly small values—even $\frac1{n^2}$ is still enough to preserve the counterexample.)
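This counterexample is easy to illustrate numerically with plain Python (nothing beyond the standard library is needed):

```python
def a(n):
    # a_n = 1/n when n is a power of 2, else 0
    return 1.0 / n if n & (n - 1) == 0 else 0.0

def b(n):
    return 1.0 / n

N = 100000
sum_a = sum(a(n) for n in range(1, N + 1))   # converges to 2
sum_b = sum(b(n) for n in range(1, N + 1))   # harmonic sum, diverges
ratios = [a(n) / b(n) for n in range(1, N + 1)]
```

Here `sum_a` stays near $2$ while `sum_b` keeps growing without bound, and `ratios` contains the value $1$ (at every power of $2$) and the value $0$ arbitrarily far out, so $\frac{a_n}{b_n}$ has no limit.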
|
|real-analysis|
| 0
|
Prove ${2n \choose n+i} \geq e^{-8 i^2/n} {2n \choose n}$
|
I am trying to prove ${2n \choose n+i} \geq e^{-8 i^2/n} {2n \choose n}$ for $0\leq i \leq n$ . My attempt: I rewrote ${2n \choose n+i}$ to $${2n \choose n+i} = {2n \choose n} \prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) $$ So all I need is to prove $$\prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) \geq e^{-8i^2/n}$$ I tried using the $1-x \geq e^{-x/(1-x)}$ inequality to get $$\prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) \geq e^{- \sum_{j=1}^i \frac{2j-1}{n+1-j}}$$ However, I think this inequality is too loose. For example, for $n=400$ and $i=399$ , it violates what we are trying to prove. $$8 i^2/n
|
Here is a stronger result: Theorem. For $0 \leq i \leq n$ , we have $$ \binom{2n}{n+i} \geq e^{-(\log 4)i^2/n} \binom{2n}{n}. $$ To begin with, we write \begin{align*} \frac{\binom{2n}{n+i}}{\binom{2n}{n}} &= \frac{n!^2}{(n+i)!(n-i)!} \\ &= \frac{(k+n)!^2}{(k+n+i)!(k+n-i)!} \prod_{p=1}^{k} \left( 1 - \frac{i^2}{(n+p)^2} \right) \\ &= \prod_{p=1}^{\infty} \left( 1 - \frac{i^2}{(n+p)^2} \right) \end{align*} where we let $k \to \infty$ in the last line and utilized the asymptotic formula $\frac{(k+a)!}{k!} \sim k^a$ as $k \to \infty$ (which itself can be easily proved using Stirling's approximation). Now by noting that $p \mapsto \log(1 - i^2/p^2)$ is increasing for $p > i$ , we get \begin{align*} \log \frac{\binom{2n}{n+i}}{\binom{2n}{n}} &= \sum_{p=1}^{\infty} \log \left( 1 - \frac{i^2}{(n+p)^2} \right) \\ &\geq \int_{0}^{\infty} \log \left( 1 - \frac{i^2}{(n+p)^2} \right) \, \mathrm{d}p \\ &= i \int_{0}^{\frac{i}{n}} \frac{\log(1-x^2)}{x^2} \, \mathrm{d}x. \qquad \tag{$x = \tfrac{i
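The bound with the sharper constant $\log 4$ is easy to spot-check numerically via log-gamma, which avoids the huge factorials; this is only a sanity check of the theorem, not part of the proof, and `log_binom_ratio` is a made-up helper name:

```python
from math import lgamma, log

def log_binom_ratio(n, i):
    # log( C(2n, n+i) / C(2n, n) ), computed with lgamma to avoid overflow
    return 2 * lgamma(n + 1) - lgamma(n + i + 1) - lgamma(n - i + 1)

# check  log ratio >= -(log 4) i^2 / n  over a range of n and all 0 <= i <= n
for n in (5, 40, 400):
    for i in range(n + 1):
        assert log_binom_ratio(n, i) >= -log(4) * i * i / n - 1e-9
```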
|
|real-analysis|inequality|
| 0
|
Feynman-Kac formula for elliptic equation in whole space
|
I can find the Feynman-Kac formula for boundary value problem of elliptic equation and for Cauchy problem of parabolic equation. Is there any reference about the elliptic equation in whole space? In particular, I am looking for Feynman-Kac formula of $$ a(x)u''(x)+b(x)u'(x)+c(x)u(x)=f(x)$$ in $\mathbb{R}$ for regular enough coefficients.
|
In Shreve-Karatzas, pg. 365, they study the Feynman-Kac formula for continuous, linear-growth coefficients, for a general elliptic operator and the Dirichlet problem.
|
|probability|probability-theory|ordinary-differential-equations|reference-request|stochastic-processes|
| 0
|
Asymptotic Analysis of 2nd Order Differential Equation with Irregular Singular Point
|
I am asked to find the leading behavior of the solution $y(x)$ to this differential equation that is asymptotically the largest as $x \to 0$ : $$x^2\frac{d^2y}{dx^2} + \frac{dy}{dx} + \frac{3}{16}y = 0$$ From assuming that $y(x)$ has the form $e^{S(x)}$ , taking derivatives, and substituting them back into the original equation, I get the following equation: $$x^2S''(x)e^{S(x)} + x^2e^{S(x)}(S'(x))^2 + S'(x)e^{S(x)} + \frac{3}{16}e^{S(x)} = 0$$ I then made the assumption that $S''(x) \ll (S'(x))^2$ , which gives this asymptotic relationship: $$(S'(x))^2 \sim \frac{-S'(x)}{x^2} - \frac{3}{16x^2}$$ I'm not sure where to go from here. Most examples I can find online don't have a factor of $S'(x)$ on the RHS, so taking the square root and integrating becomes simple. I also want to find a correction factor for the solution in series form. Some guidance on this would be appreciated.
|
This DE can actually be solved in a closed form involving Bessel functions: according to Maple, the general solution is $$ \frac{c_{1} {\mathrm e}^{\frac{1}{2 x}} \left(\left(x -2\right) I_{\frac{1}{4}}\! \left(-\frac{1}{2 x}\right)-2 I_{-\frac{3}{4}}\! \left(-\frac{1}{2 x}\right)\right)}{\sqrt{x}}+\frac{c_{2} {\mathrm e}^{\frac{1}{2 x}} \left(\left(x -2\right) K_{\frac{1}{4}}\! \left(-\frac{1}{2 x}\right)+2 K_{\frac{3}{4}}\! \left(-\frac{1}{2 x}\right)\right)}{\sqrt{x}}$$ As $x \to 0+$ , the coefficient of $c_1$ is asymptotic to: $$ - \frac{4\; i}{\sqrt{\pi}} + \frac{3\; i}{4 \sqrt{\pi}} x + \ldots $$ while the coefficient of $c_2$ is asymptotic to $$ -\frac{3\; i}{4} \sqrt{\pi} x^2 e^{1/x} - \frac{105\; i}{64} \sqrt{\pi} x^3 e^{1/x} + \ldots$$
|
|ordinary-differential-equations|asymptotics|
| 0
|
Let $A\subseteq \Bbb R^n$, then $(\overline{A^c})^c=\mathring A$
|
Let $A\subseteq \Bbb R^n$ , then: $A^{c\bar{}c}=\mathring A$ Notation: As an example $(\bar{A^c})^c=A^{c\bar{}c}$ Proof: We know that $A\subseteq \bar A$ , now if we take the complement on both sides: $A^{\bar{}c}\subseteq A^c$ Now, knowing that $A^{\bar{}c}$ is an open set, then $A^{\bar{}c}\subseteq \bar A$ $$\implies A^{coc}\subset A^{\bar{}cc}=\bar A$$ Now, $\color{blue}{A^{co}\subseteq A^c}$ , then: $$\implies A=A^{cc}\subseteq A^{coc}$$ $$\implies \color{blue}{\bar A\subseteq A^{coc}}$$ Therefore, $A^{coc}=\bar A$ Can you explain to me exactly what happened to get the parts in $\color{blue}{blue}$ , please?
|
For the parts in blue: The statement $A^{{\complement}~\bigcirc}\subseteq A^\complement$ follows from the fact that $E^\bigcirc\subseteq E$ for any set $E$ . The assertion $\bar A\subseteq A^{\complement~\bigcirc~\complement}$ follows from taking the closure of both sides of the prior inclusion $A\subseteq A^{\complement~\bigcirc~\complement}$ and noting that $\overline{A^{\complement~\bigcirc~\complement}}=A^{\complement~\bigcirc~\complement}$ since $A^{\complement~\bigcirc~\complement}$ is a closed set. $A^{\complement~\bigcirc~\complement}$ is a closed set because it is the complement of the set $A^{\complement~\bigcirc}$ , which is open.
|
|general-topology|proof-explanation|
| 1
|
Prove ${2n \choose n+i} \geq e^{-8 i^2/n} {2n \choose n}$
|
I am trying to prove ${2n \choose n+i} \geq e^{-8 i^2/n} {2n \choose n}$ for $0\leq i \leq n$ . My attempt: I rewrote ${2n \choose n+i}$ to $${2n \choose n+i} = {2n \choose n} \prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) $$ So all I need is to prove $$\prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) \geq e^{-8i^2/n}$$ I tried using the $1-x \geq e^{-x/(1-x)}$ inequality to get $$\prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) \geq e^{- \sum_{j=1}^i \frac{2j-1}{n+1-j}}$$ However, I think this inequality is too loose. For example, for $n=400$ and $i=399$ , it violates what we are trying to prove. $$8 i^2/n
|
@Sangchul Lee posted a stronger result already, but I just wanted to clarify that the argument I had in the question is salvageable to prove the weaker statement. First, note that for $i>n/2$ , the claim is trivially true. This is because $$ {2n \choose n}e^{-8i^2/n} \leq {2n \choose n}e^{-2n} \leq 2^{2n}e^{-2n} < 1 \leq {2n \choose n+i}. $$ Hence, we can assume $i\leq n/2$ . In that case $(2i-1)/(n+i) < 2/3$ , and the inequality $1-x \geq e^{-3x}$ holds for $x \leq 2/3$ . We have $$ \prod_{1\leq j \leq i} \left( 1- \frac{2j-1}{n+j} \right) \geq e^{-3 \sum_{j=1}^i \frac{2j-1}{n+j} } \geq e^{-\frac{3}{n}\sum_{j=1}^i (2j-1)} = e^{-3i^2/n} \geq e^{-8i^2/n} $$
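The salvaged product bound can be verified numerically for a range of $n$; `lhs_product` is a made-up helper name, and the margin in the inequality is large enough that no floating-point slack is needed:

```python
from math import exp

def lhs_product(n, i):
    # prod_{j=1}^{i} ( 1 - (2j-1)/(n+j) )
    p = 1.0
    for j in range(1, i + 1):
        p *= 1 - (2 * j - 1) / (n + j)
    return p

# check the key step: product >= exp(-3 i^2 / n) whenever i <= n/2
for n in (10, 50, 200):
    for i in range(1, n // 2 + 1):
        assert lhs_product(n, i) > exp(-3 * i * i / n)
```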
|
|real-analysis|inequality|
| 0
|
Conditional expectation of a Ito process
|
Consider a standard Brownian motion $(\Omega,\mathcal{F}_t,\mathcal{F}, W_t,\mathbb{P})$ . Let $dX_t = b(X_t)dt + \sigma(X_t)dW_t$ be an SDE. Assume the existence of a unique strong solution. Let $X_t(x, s)$ denote the solution of the SDE starting from $x\in\mathbb{R}$ at time $s$ . For $u < s < t$ , can the freezing lemma be applied to establish the equality: $$ \mathbb{E}[X_t(X_s(x,u),s)\mid X_s(x,u)] = \Phi(X_s(x,u)), $$ where $$ \Phi(y) = \mathbb{E}[X_t(y,s)], $$ even in cases where $X_t$ and $ X_s $ are not independent?
|
This follows from the Markov property, as mentioned here: the solution $X$ of the SDE $\mathrm d X_t = f(t, X_t) \mathrm d t + g(t, X_t) \mathrm d B_t$ is a Markov process. One source is Schilling, Th. 21.23, p. 403.
|
|stochastic-processes|stochastic-calculus|stochastic-differential-equations|
| 1
|
proving that the area of a 2016 sided polygon is an even integer
|
Let $P$ be a $2016$ -sided polygon with all its adjacent sides perpendicular to each other, i.e., all its internal angles are either $90°$ or $270°$ . If the lengths of its sides are odd integers, prove that its area is an even integer. I think visualising what this polygon might look like would more or less be the key to getting started on the problem, but I'm having quite a hard time doing so. Since the internal angles are all $90°$ or $270°$ , I think the shape would look like a rectangle or a square with smaller squares or rectangles protruding out from each side — but mini rectangles protruding from sides would not be possible, because $180°$ angles are not allowed. And to prove that the area is an even integer, if there are mini squares protruding from the sides there will have to be an even number of those squares in total. And then I also need to make sure that at least the width or length of the 'bigger' square or rectangle is even. But I seem to get stuck trying to draw a possible di
|
Here is specific math to prove that even areas are formed by certain products of sides, using a test case of $|3n-2|$ , which (element-wise) enumerates the sides of the polygon drawn two different ways. The test case generalizes a form for area similar to the shoelace formula, but working specifically for orthogonal irregular polygons, where the sides are ordered by construction as in $\{8,5,2,1,4,7,10,13\}$ ; that makes the 8-sided figure drawn twice in the image. This same case can be solved for area by breaking it down into three rectangles in two ways and setting those areas of 116 equal to each other. This gives $a_1a_2+(a_7-a_5)(a_8-a_2) = a_1a_2 + a_4(a_7-a_5) + a_6a_7$ , where $\{a_1,a_2,a_3,a_4,a_5,a_6,a_7,a_8\}$ are the edges of my 8-sided test case, used to find the general form of these parallel polygons in order to get to the 2016 odd-sided case. Since $a_7 \ne a_5$ , $a_8 = a_2 + a_4 + a_6$ , where $a_4$ is the middle term and $a_6$ is the $n-2$ term. Now, plugging in odd numbers for our terms, we will always find even areas. It is like Gauss when he
|
|geometry|proof-writing|integers|polygons|angle|
| 0
|
Pushforward, tangent map, vertical endomorphism and all that
|
I've already posted a similar question a couple of times, here and here , without receiving a fully clarifying answer, so I post it again, trying to be more specific. Suppose you have a smooth surjective submersion between smooth manifolds: $f: N \rightarrow M \equiv f(N)$ , with $\dim N >\dim M$ . The tangent map $Tf: TN \rightarrow TM$ is defined pointwise in terms of $f_\ast $ , the pushforward by $f$ . In local coordinates, if $(x, v)$ is a point in $TN$ , corresponding to the tangent vector $ v = \left. v^a \frac{\partial}{\partial x^a }\right|_x \in T_x N$ , then $$(Tf)(x,v) = (y, w) := (f(x),f_{\ast,x}v), $$ where $w := f_{\ast,x}v \in T_{f(x)}M$ is such that $(f_{\ast,x}v)g = v (g \circ f)$ for any smooth $g: M \rightarrow \mathbb{R}$ . Of course, since $f$ is surjective but not injective, if $f(x)=f(x') =y$ , then the images of the two points $(x,v)$ and $(x', v')$ correspond to two generally different tangent vectors to $M$ at the same point $y$ . Thus, $Tf$ cannot map the ta
|
Let me formulate everything for you coordinate-independently, so there is no question whether the maps $T\pi_M$ , $\operatorname{vl}$ , and $S$ are well-defined globally without restriction. Global formulas First $\pi_M : TM \to M$ is the structure map of the fiber bundle $TM$ . Since $TM$ and $M$ are smooth manifolds, we can take the derivative of this map just like any other to obtain a map of vector bundles $$ \require{AMScd} \begin{CD} TTM @>{T\pi_M}>> TM\\ @V{\pi_{TM}}VV @VVV\\ TM @>{\pi_M}>> M \end{CD}. $$ Next, for any vector bundle $p : E \to M$ we can define a map of vector bundles $$ \operatorname{vl} : E \times_M E \to TE. $$ Let's be very careful about what this means: here we are thinking of $E \times_M E$ as a vector bundle over $E$ via the projection $\operatorname{pr}_1$ onto the first factor, and $TE$ as a vector bundle over $E$ via $\pi_E$ . (The notation $E \times_M E$ means the subset of all $(e, e') \in E \times E$ such that $p(e) = p(e')$ .) Now, what's the defi
|
|differential-geometry|fiber-bundles|pushforward|
| 0
|
Rotating a 3D frame of reference to match the x-axis with the direction of a unit vector.
|
I am trying to solve a problem related to computational fluid dynamics. However, I got stuck on a mathematical operation and am unsure how to tackle it. Here it is. Let $\boldsymbol{n}$ be a unit vector in 3D space with known components. Given the following rotation matrix: $$ \begin{equation}\begin{gathered} \mathbf{T}=\mathbf{T}(\theta^{(y)},\theta^{(z)})=\mathbf{T}^{(y)}\mathbf{T}^{(z)},\\ \mathbf{T}^{(y)}\equiv\mathbf{T}^{(y)}(\theta^{(y)}) =\begin{bmatrix}\cos\theta^{(y)}&0&\sin\theta^{(y)}\\0&1&0\\-\sin\theta^{(y)}&0&\cos\theta^{(y)}\end{bmatrix},\\ \mathbf{T}^{(z)}\equiv\mathbf{T}^{(z)}(\theta^{(z)}) =\begin{bmatrix}\cos\theta^{(z)}&\sin\theta^{(z)}&0\\-\sin\theta^{(z)}&\cos\theta^{(z)}&0\\0&0&1\end{bmatrix}. \end{gathered}\end{equation} $$ The goal is to find the angles of rotation such that the x-axis of the rotated frame of reference matches the direction of the normal vector. My question is: How do we define the angles of rotation given only the components of the normal ve
|
I found the correct answer in reference [1]. It is almost what @of course answered earlier, but they overlooked that we need to solve the inverse problem. Once the normal vector has been found, the angles ( $\theta_y , \theta_z$ ) of the orientation are determined by: \begin{equation} \boldsymbol{T}(\theta_y,\theta_z) \cdot \boldsymbol{n} = [1, 0, 0]^T \end{equation} This can be rewritten as: \begin{equation} \boldsymbol{T}^{-1}(\theta_y,\theta_z) \cdot [1, 0, 0]^T =\boldsymbol{n} \end{equation} Rearranging gives: \begin{align} \sin{\theta_y} &= n_z\\ \cos{\theta_y} &= \sqrt{1-\sin^2{\theta_y}}\\ \sin{\theta_z} &= \frac{n_y}{\cos{\theta_y}}\\ \cos{\theta_z} &= \frac{n_x}{\cos{\theta_y}} \end{align} [1] S. J. Billett and E. F. Toro. Unsplit WAF–Type Schemes for Three Dimensional Hyperbolic Conservation Laws. In Numerical Methods for Wave Propagation. Toro, E. F. and Clarke, J. F. (Editors), pages 75–124. Kluwer Academic Publishers, 1998.
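These four relations can be sanity-checked in plain Python; `angles_from_normal` is a made-up helper name, and `asin`/`atan2` implement exactly the sin/cos relations above under the assumption that $\cos\theta_y \neq 0$ (the normal is not parallel to the z-axis):

```python
import math

def angles_from_normal(n):
    # sin(ty) = n_z; sin(tz) = n_y/cos(ty), cos(tz) = n_x/cos(ty),
    # with cos(ty) = sqrt(n_x^2 + n_y^2), which atan2 handles directly
    nx, ny, nz = n
    ty = math.asin(nz)
    tz = math.atan2(ny, nx)
    return ty, tz

def T(ty, tz):
    # T = T^(y) T^(z) with the matrices given in the question
    cy, sy, cz, sz = math.cos(ty), math.sin(ty), math.cos(tz), math.sin(tz)
    Ty = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Tz = [[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]]
    return [[sum(Ty[i][k] * Tz[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Applying `T(*angles_from_normal(n))` to `n` should give the unit vector $[1,0,0]^T$ up to rounding.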
|
|matrices|rotations|
| 0
|
Determine all matrices, $A$, that satisfy the condition that $\operatorname{rank}(A^k) = \operatorname{rank}(A)$ for each $k \geq 1$.
|
My intuition is that the condition is true if and only if $A$ is invertible or $A$ is a projection, i.e. $A^2 = A$ . It is trivial to show that if $A$ is invertible or $A$ is a projection then the condition is true, but I couldn't show the other implication.
|
Let's say $A$ is an $n \times n$ matrix with entries in $\mathbb{k}$ satisfying your property. Now, the question linked in comments shows that your condition is equivalent to just $\operatorname{rk} A^2 = \operatorname{rk} A$ . Since $\operatorname{ker} A \subseteq \operatorname{ker} A^2$ , this is the same as $\operatorname{ker} A = \operatorname{ker} A^2$ . That means that if $x \in \operatorname{im} A$ and $x \not = 0$ then $A x \not = 0$ . In turn, this means that $\operatorname{im} A \cap \operatorname{ker} A = \{0\}$ . These simplification steps form a chain of equivalences, so one answer to your question is exactly those matrices $A$ such that $\operatorname{im} A \cap \operatorname{ker} A = \{0\}$ . Let's figure out how to construct them all. Pick a basis $(v_1, \ldots, v_m)$ of $\operatorname{im} A$ and a basis $(w_{m + 1}, \ldots, w_n)$ of $\operatorname{ker} A$ . Then, since $\operatorname{im} A \cap \operatorname{ker} A = \{0\}$ , we must have that $(v_1, \ldots, v_m, w_{m
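The characterization $\operatorname{im} A \cap \operatorname{ker} A = \{0\}$ (equivalently, $\operatorname{rank} A^2 = \operatorname{rank} A$) is easy to probe numerically; note that already $\operatorname{diag}(2,0)$ refutes the "invertible or projection" guess. A sketch with NumPy (`rank_is_stable` is a made-up helper):

```python
import numpy as np

def rank_is_stable(A, kmax=6, tol=1e-9):
    # does rank(A^k) == rank(A) hold for k = 1..kmax?
    r = np.linalg.matrix_rank(A, tol)
    return all(np.linalg.matrix_rank(np.linalg.matrix_power(A, k), tol) == r
               for k in range(2, kmax + 1))

stable_not_proj = np.diag([2.0, 0.0])            # rank 1 for every power, A^2 != A
nilpotent = np.array([[0.0, 1.0], [0.0, 0.0]])   # rank drops: A^2 = 0
```

Here `stable_not_proj` keeps rank 1 under all powers yet is neither invertible nor idempotent, while the nilpotent block loses rank immediately.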
|
|linear-algebra|matrix-equations|matrix-rank|
| 1
|
Determine all matrices, $A$, that satisfy the condition that $\operatorname{rank}(A^k) = \operatorname{rank}(A)$ for each $k \geq 1$.
|
My intuition is that the condition is true if and only if $A$ is invertible or $A$ is a projection, i.e. $A^2 = A$ . It is trivial to show that if $A$ is invertible or $A$ is a projection then the condition is true, but I couldn't show the other implication.
|
Looking at the Jordan form, a matrix is similar to a block-diagonal matrix where each block is a Jordan block. The blocks corresponding to nonzero eigenvalues do not lose rank when taking powers. The only way for the matrix to lose rank when taking powers is to have Jordan blocks of size $>1$ for the eigenvalue zero. In other words, the $n\times n$ matrices that keep their rank for all powers are of the form $$ A=S\,\begin{bmatrix} B &0\\0&0_m\end{bmatrix}\,S^{-1}, $$ where $B$ is a $k\times k$ invertible matrix and $k+m=n$ . This includes both the invertible matrices and the idempotents, but also many other matrices.
|
|linear-algebra|matrix-equations|matrix-rank|
| 0
|
Can the golden ratio accurately be expressed in terms of $e$ and $\pi$
|
I was playing around with numbers when I noticed that $\sqrt e$ was somewhat close to $\phi$ . And so, I took it upon myself to try to find a way to express the golden ratio in terms of the infamous values $\large\pi$ and $\large e$ . The closest that I've come so far is: $$ \varphi \approx \sqrt e - \frac{\pi}{(e+\pi)^e - \sqrt e} $$ My question is: Is there a better (more precise and accurate) way of expressing $\phi$ in terms of $e$ and $\pi$ ?
|
This is an approximation for $\pi$ by Ramanujan $$\pi\approx\frac95+\sqrt\frac95 = \frac{9+3\sqrt5}5 = \frac65(\varphi+1) = \frac65\varphi^2$$ The expression give us $$\varphi\approx \sqrt{\frac56\pi} = 1.6180\color{red}{2159}\dots\quad(E\sim 1.24\cdot10^{-5})$$ I've found this xkcd on random approximations. It gives, among others $$\sqrt5\approx\frac2e+\frac32\quad\text{and}\quad\sqrt5\approx\frac{13+4\pi}{24-4\pi}$$ These expression give us $$\begin{aligned} \varphi&\approx\frac1e+\frac54 ~=~ 1.61\color{red}{7879}\dots&&(E\sim1.55\cdot 10^{-4})\\ &\approx\frac{37}{8(6-\pi)} = 1.6180339\color{red}{0472}\dots&&(E\sim8.40\cdot10^{-8}) \end{aligned}$$ This one was found with the help of Wolfram Mathematica. $$\begin{aligned} \frac{24499\pi-17844e}{5599\pi} &= 1.618033988749894\color{red}{9426}\dots\quad(E\sim 9.44\cdot10^{-17}) \end{aligned}$$
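Plugging these expressions into Python confirms the quoted error sizes to within double precision:

```python
import math

phi = (1 + math.sqrt(5)) / 2

approx = {
    "sqrt(5*pi/6)": math.sqrt(5 * math.pi / 6),
    "1/e + 5/4": 1 / math.e + 5 / 4,
    "37/(8*(6-pi))": 37 / (8 * (6 - math.pi)),
    "(24499*pi - 17844*e)/(5599*pi)":
        (24499 * math.pi - 17844 * math.e) / (5599 * math.pi),
}
errors = {name: abs(val - phi) for name, val in approx.items()}
```

The last approximation is accurate to roughly the limit of 64-bit floating point, so its true error ($\sim 9.44\cdot 10^{-17}$) can only be confirmed with higher-precision arithmetic.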
|
|soft-question|recreational-mathematics|approximation|diophantine-approximation|golden-ratio|
| 0
|
Number of zeros in the decimal representation of the powers of 5
|
I am trying to solve this problem: Prove that for every natural number $m$ , there exists a natural number $n$ such that in the decimal representation of the number $5^n$ there are at least $m$ zeros. Before that I proved that for $m$ = $2^a$ $(a \ge 3)$ , we have $Z^*_{m} = \langle -1\rangle_2*\langle5\rangle_{2^{a-2}}$ . These tasks are on the topic of primitive roots, so how can I reformulate my problem in these terms? I think these tasks may be related to each other, but I don't see how exactly. Give me some tips please.
|
I think there is a much simpler method. Please notice that: $$10^n \mid 5^{2^{n-1}+n}-5^n.$$ Hence, there is $m \in \mathbb N$ such that: $$m \times 10^n +5^n=5^{2^{n-1}+n}.$$ Now letting $n \to \infty$ implies that there are consecutive zeros in the middle of $5^{2^{n-1}+n}$ as many as required since $10^n$ grows faster than $5^n.$ We are done.
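The divisibility claim, and the zeros it produces, can be checked directly with Python's arbitrary-precision integers; this is only a numerical illustration of the argument above:

```python
# 10^n divides 5^(2^(n-1)+n) - 5^n, so the last n digits of 5^(2^(n-1)+n)
# are 5^n padded on the left with zeros (since 5^n has fewer than n digits)
for n in range(1, 12):
    assert (5 ** (2 ** (n - 1) + n) - 5 ** n) % 10 ** n == 0

n = 10
tail = str(5 ** (2 ** (n - 1) + n))[-n:]   # last n digits of 5^522
# 5^10 = 9765625 has only 7 digits, so the tail starts with 3 zeros
```

Since $5^n$ has about $n\log_{10}5 \approx 0.7n$ digits, roughly $0.3n$ zeros appear, so the zero count grows without bound.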
|
|elementary-number-theory|primitive-roots|
| 1
|
If $X=M\oplus N$, is it true $(A\cap M)\oplus (A\cap N)\subseteq A$?
|
Let $X$ be a normed space and $X=M\oplus N$ where $M,N$ are closed subspaces of $X$ . Take a closed set $A\subseteq X$ . I know that $(A\cap M)\oplus (A\cap N)\neq A$ in general; for example, consider $X=\mathbb{R}^2$ , $M=\mathbb{R}\times \{0\}$ , and $N=\{0\}\times \mathbb{R}$ . Then if $A= \Delta_X=\{(x, x):x\in\mathbb{R}\}$ , then $(A\cap M)\oplus (A\cap N)=\{(0,0)\}$ . Is it true that $(A\cap M)\oplus (A\cap N)\subseteq A$ ? Please help me see whether this is true.
|
Counter-example: $X=\mathbb R^{2}, M=\{(x,0): x \in \mathbb R\},N=\{(0,y): y \in \mathbb R\}, A =M \cup N$ . [ $(1,0)+(0,1)\notin A$ ].
|
|linear-algebra|vector-spaces|normed-spaces|direct-sum|
| 0
|
Sum of norms induced by PSD matrices
|
Suppose you have two positive definite matrices $M$ and $N \in \mathcal{S}_{++}^n$ . They induce scalar products and norms on $\mathbb{R}^n$ : \begin{align} \lVert x \rVert_M &= \sqrt{x^\intercal M^{-1} x}, & \lVert x \rVert_N &= \sqrt{x^\intercal N^{-1} x}. \end{align} Introduce the function $f:x\mapsto \lVert x \rVert_M + \lVert x \rVert_N$ . It is a norm as it is the sum of two norms. However, it is not induced by a scalar product. Can we upper-bound $f$ by a norm induced by another positive definite matrix?
|
In addition to the answer by Gérard, we can use the concavity of the function $\sqrt{\cdot}$ to get a whole family of bounds. For any $\theta \in (0,1)$ : \begin{align} \sqrt{m} + \sqrt{n} &= \theta\sqrt{\frac{1}{\theta^2}m} + (1-\theta)\sqrt{\frac{1}{(1-\theta)^2}n}, \\ &\le \sqrt{\frac{1}{\theta}m + \frac{1}{1-\theta}n}. \end{align} This gives: \begin{align} \| x \|_M + \| x \|_N &=\sqrt{x^TMx}+\sqrt{x^TNx}, \\ & \le \sqrt{\frac{1}{\theta}x^TMx+ \frac{1}{1-\theta}x^TNx}, \\ &= \sqrt{x^T\left(\frac{1}{\theta}M + \frac{1}{1-\theta}N\right) x}, \\ &=\| x \|_{\frac{1}{\theta}M+\frac{1}{1-\theta}N}. \end{align} In particular with $\theta = \frac{1}{2}$ : $$ \| x \|_M + \| x \|_N \le \| x \|_{2(M+N)}.$$
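This family of bounds is easy to verify numerically with random positive definite matrices; note the answer works with the convention $\lVert x\rVert_M = \sqrt{x^\intercal M x}$ (substitute $M \mapsto M^{-1}$ to match the question's definition). A sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_in(x, M):
    # the answer's convention: ||x||_M = sqrt(x^T M x)
    return float(np.sqrt(x @ M @ x))

d = 5
G1 = rng.standard_normal((d, d))
G2 = rng.standard_normal((d, d))
M = G1 @ G1.T + np.eye(d)   # positive definite by construction
N = G2 @ G2.T + np.eye(d)

# check ||x||_M + ||x||_N <= ||x||_{M/theta + N/(1-theta)} for several theta
ok = all(
    norm_in(x, M) + norm_in(x, N)
    <= norm_in(x, M / t + N / (1 - t)) + 1e-9
    for x in rng.standard_normal((50, d))
    for t in (0.25, 0.5, 0.75)
)
```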
|
|inner-products|positive-definite|matrix-norms|
| 0
|
Show that $ \int_0^{\pi\over 2}\frac{\sin(2nx)}{\sin^{2n+2}(x)}\frac{1}{e^{2\pi \cot x}-1}dx =(-1)^{n-1}\frac{2n-1}{4(2n+1)} $
|
Show that $$ \int_0^{\pi\over 2}\frac{\sin(2nx)}{\sin^{2n+2}(x)}\frac{1}{e^{2\pi \cot x}-1}dx =(-1)^{n-1}\frac{2n-1}{4(2n+1)} $$ My attempt Lemma-1 \begin{align*} \frac{\sin(2nx)}{\sin^{2n}(x)}&=\sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r-1}\cot^{2n-2r+1}(x) \\ \end{align*} proof \begin{align*} \frac{\sin(2nx)}{\sin^{2n}(x)} &= \text{Im}\frac{e^{2inx}}{\sin^{2n}(x)} \\ &= \text{Im} \;\left( \frac{e^{ix}}{\sin x}\right)^{2n} \\ &= \text{Im} \; \left( \frac{\cos x+i\sin x}{\sin x}\right)^{2n} \\ &= \text{Im} \; (\cot x+i)^{2n} \\ &= \text{Im} \sum_{r=0}^{2n}\binom{2n}{r}i^r \cot^{2n-r}(x) \\ &= \sum_{r=0}^{2n}\binom{2n}{r}\sin\left(\frac{\pi r}{2} \right) \cot^{2n-r}(x) \\ &= \sum_{r=1}^n \binom{2n}{2r-1} (-1)^{r-1}\cot^{2n-2r+1}(x) \end{align*} Let $I$ denote the integral. Now using the identity above I mentioned $$I = \int_0^{\pi\over 2}\frac{\sin(2nx)}{\sin^{2n+2}(x)}\frac{1}{e^{2\pi \cot x}-1}dx $$ $$ = \int_0^{\pi\over 2}\left( \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r-1}\cot^{2n-2r+1}(x)\r
|
(Assuming $n$ is a positive integer.) As you have noted $$\frac{\sin 2nx}{\sin^{2n}x}=\Im(\cot x+i)^{2n}=\frac1{2i}\left((\cot x+i)^{2n}-(\cot x-i)^{2n}\right),$$ the given integral, after the substitution $t=\cot x$ , equals $$I=\frac1{2i}\int_0^\infty\frac{(t+i)^{2n}-(t-i)^{2n}}{e^{2\pi t}-1}\,dt.$$ Now, for $0<r<1$ , consider the contour $\lambda_r$ that is the boundary of $$\{z\in\mathbb{C} : \Re z>0 \land |\Im z|<1 \land |z|>r\}$$ (goes along $+\infty-i\to-i\to i\to+\infty+i$ with a notch around $z=0$ ). Then \begin{align} 0=\int_{\lambda_r}f(z)\,dz &=\int_0^\infty\big(f(x+i)-f(x-i)\big)\,dx\\ &+i\int_r^1\big(f(ix)+f(-ix)\big)\,dx\\ &+\int_{-\pi/2}^{\pi/2}f(re^{it})\,rie^{it}\,dt, \end{align} where we set $f(z)=\displaystyle\frac{z^{2n}-i^{2n}}{e^{2\pi z}-1}$ . The first of the three terms is $2iI$ ; as $r\to 0$ , the second one tends to $i^{2n+1}$ times $$\int_0^1\left(\frac{x^{2n}-1}{e^{2i\pi x}-1}+\frac{x^{2n}-1}{e^{-2i\pi x}-1}\right)dx=\int_0^1(1-x^{2n})\,dx=\frac{2n}{2n+1},$$ and the last one tend
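The claimed closed form can also be confirmed numerically with plain Python; the integrand is rewritten with $e^{-2\pi\cot x}$ so the exponential underflows harmlessly near $x=0$ instead of overflowing (a numerical sanity check only, not part of the proof):

```python
import math

def integrand(x, n):
    # sin(2nx)/sin(x)^(2n+2) * 1/(e^{2 pi cot x} - 1), rewritten as
    # (...) * w/(1-w) with w = exp(-2 pi cot x) to avoid overflow near 0
    c = math.cos(x) / math.sin(x)
    w = math.exp(-2 * math.pi * c)
    return math.sin(2 * n * x) / math.sin(x) ** (2 * n + 2) * w / (1 - w)

def simpson(f, a, b, m):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
    return s * h / 3

for n in (1, 2, 3):
    val = simpson(lambda x: integrand(x, n), 1e-4, math.pi / 2 - 1e-6, 20000)
    target = (-1) ** (n - 1) * (2 * n - 1) / (4 * (2 * n + 1))
    assert abs(val - target) < 1e-3
```

For $n=1$ this reproduces $\tfrac1{12}$, and for $n=2$ the sign flip to $-\tfrac3{20}$.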
|
|calculus|integration|definite-integrals|summation|improper-integrals|
| 0
|
Has this "everything category" been defined before?
|
Define a category $\textbf{All}$ as follows: The objects of $\textbf{All}$ are the pairs $(X, C)$ , where $C$ is a category and $X$ is an object of $C$ . A morphism from $(X, C)$ to $(Y, D)$ in $\textbf{All}$ is an ordered pair $(F:C\to D, f:F(X)\to Y)$ , where $F$ is a functor and $f$ is a morphism in $D$ . Composition in $\textbf{All}$ is given by: $(F, f)\circ (G, g)=(GF, gG(f))$ . This category seems to capture the idea of taking the disjoint union of all categories, and then "bridging them together" with functors. The idea seems so natural that I feel it must have been defined before, but I can't find what it is called. Does anyone have a reference?
|
I don't know of a reference for the specific category $ \mathbf { All } $ , but it's the result of applying the Grothendieck construction to the identity functor on $ \mathbf { Cat } $ (the category of small categories). That is, $ \mathbf { All } = \int \mathrm { id } _ { \mathbf { Cat } } = \int _ { C \in \mathbf { Cat } } C $ . The Grothendieck construction can be applied to any functor $ F \colon A \to \mathbf { Cat } $ from any category $ A $ to produce a category $ \int F = \int _ { a \in A } F ( a ) $ . The notation $ \int $ for the Grothendieck construction is meant to suggest a more high-powered version of $ \sum $ , which can be used for the disjoint union; any function $ f \colon A \to \mathbf { Set } $ gives a set $ \sum f = \sum _ { a \in A } f ( a ) $ whose elements are pairs $ ( a , x ) $ where $ a \in A $ and $ x \in f ( a ) $ . So up one level, the objects of $ \int _ { a \in A } F ( a ) $ are pairs $ ( a , x ) $ where $ a \in A $ (meaning as an object) and $ x \in F (
|
|category-theory|
| 1
|
Linear system of ODE from quadratic functions
|
Given $N$ independent quadratic functions such as $$x_i(t)=a_it^2 + b_it +c_i$$ is there a linear system of ODEs $$\dot{x}_i=f(x_i)$$ such that the ODEs depend on $x_i$ and not (directly) on $t$ ? Of course, one can just take the derivative of $x_i$ : $\dot{x}_i=2a_it + b_i$ , but this does not yield a system of ODEs. Also, is there a name for this process? I have been able to find the system of ODEs using a data-driven approach, and the fact that its numerical integration matches $x_i$ to machine precision makes me think there is an analytical way to find such an ODE system. Thanks!
|
Are the $x_i$ scalars? Then $N \le 3$ . And if $N = 3$ you have $$ \pmatrix{x_1\cr x_2\cr x_3\cr} = A \pmatrix{1\cr t \cr t^2\cr}$$ for some invertible $3 \times 3$ matrix $A$ . Then $$\pmatrix{\dot{x_1}\cr \dot{x_2}\cr \dot{x_3}} = A \pmatrix{0 \cr 1 \cr 2 t\cr} = A \pmatrix{0 & 0 & 0\cr 1 & 0 & 0\cr 0 & 2 & 0\cr} \pmatrix{1 \cr t \cr t^2} = A \pmatrix{0 & 0 & 0\cr 1 & 0 & 0\cr 0 & 2 & 0\cr} A^{-1} \pmatrix{x_1 \cr x_2\cr x_3\cr} $$
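Concretely, for $N=3$ this recipe can be checked in a few lines of NumPy; the shift matrix `D` below is the $3\times3$ matrix from the derivation, and `A` is a random (generically invertible) coefficient matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))          # rows hold (c_i, b_i, a_i)
while abs(np.linalg.det(A)) < 1e-3:      # regenerate until clearly invertible
    A = rng.standard_normal((3, 3))

D = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
F = A @ D @ np.linalg.inv(A)             # the linear system x' = F x

def x(t):
    return A @ np.array([1.0, t, t * t])

def xdot(t):
    return A @ np.array([0.0, 1.0, 2 * t])
```

At any $t$, `F @ x(t)` reproduces `xdot(t)`, confirming that the quadratics solve the autonomous linear system $\dot{x} = Fx$.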
|
|ordinary-differential-equations|
| 1
|
Minimize the area of $n$ intersecting circles
|
Let $0 < c \leq 2$ . For $p_1,...,p_n \in \mathbb{R}^2$ , consider the property $$ |p_{k+1}-p_k| = c, \ k=1,...,n-1 \tag 1 $$ where $| \cdot |$ is the Euclidean distance. Denote $C_k$ for the unit circle centered at $p_k, \ k=1,...,n$ . $\textbf{Question}$ : For what geometric configuration of $p_1,...,p_n \in \mathbb{R}^2$ satisfying $(1)$ is the area of $C_1 \cap \cdots \cap C_n$ minimized? (note $c$ is fixed) Comment: I've been trying to prove the solution is a straight line, as it appears to lead to a solution for the following: if you are placed at a random position in a circle, which path minimizes the average time it takes for you to escape the circle? (something I really want to solve) $\textbf{Update} \ (2/19)$ : I solved the alternate question, that the area of $C_1 \cup \cdots \cup C_n$ is maximized when $p_k$ are in line. The proof is below. Write $\mu(A)$ for the area of a set $A$ . Write $B_1 = C_1 \cup \cdots \cup C_{n-1}$ and $B_2 = C_2 \cup \cdots \cup C_n$ . Then \begin{align} \m
|
$\forall x \in \mathbb{R}^2$ , let $B_1(x)$ be the closed unit ball in $\mathbb{R}^2$ with center $x$ . Let $\mu$ be the Lebesgue measure. We will outline a proof of the stronger statement (which hopefully can make you a believer, because many details are needed to make the proof rigorous): For all paths $\gamma: [0,L] \rightarrow \mathbb{R}^2$ parameterized by arclength with $\gamma(0) = 0$ and length $L$ , $$ \mu\Big(\bigcap_{0 \leq s \leq L} B_1(\gamma(s))\Big) \geq \mu\Big(B_1(0) \cap B_1((L,0)) \Big). \tag 4 $$ It is easy to show with the law of cosines that equality holds when $\gamma$ is a straight line. $(4)$ solves the question asked, since one can take $L = c(n-1)$ and $\gamma$ to be a polygonal path with vertices $p_1,...,p_n$ , making the LHS of $(4) \leq \mu(C_1 \cap \cdots \cap C_n)$ , whereas if $p_1,...,p_n$ are put in a sequential line one has $\mu(C_1 \cap \cdots \cap C_n) = $ RHS of $(4)$ . Fix $L$ and a $\gamma$ satisfying the conditions of $(4)$ . Denote $$ I(s) = \bigc
|
|geometry|
| 0
|
Understanding the proof of Theorem 10.1 in Montgomery & Vaughan's Multiplicative Number Theory
|
In the last step of the proof of Theorem 10.1 in the book Multiplicative number theory I: Classical theory by Hugh L. Montgomery, Robert C. Vaughan I couldn't understand what exactly "turn the path through an angle" means. I reparametrized $x$ to $xe^{-\frac{i}{2} \arg z}$ and I got $$\int_{-\infty}^{\infty} e^{- \pi x^2 e^{-i\arg z}z} e^{-\frac{i}{2} \arg z} dx$$ which is not equal to the result that the authors claim, i.e. $$z^{-1/2} \int_{-\infty}^{\infty} e^{- \pi x^2} dx = |z|^{-1/2} e^{-\frac{i}{2} \arg z} \int_{-\infty}^{\infty} e^{- \pi x^2} dx.$$ My question is how to show that $$\int_{-\infty}^{\infty} e^{- \pi x^2 z} dx = z^{-1/2} \int_{-\infty}^{\infty} e^{- \pi x^2} dx$$ is true?
|
It means (I think) to consider the contour rotated through the angle $-\frac{\arg z}{2}$ : that is, the rotated path is $\gamma(t) = \frac{t}{\sqrt{z}} = \frac{t}{ \sqrt{|z|}} e^{-i\frac{\arg z}{2}}$ , so we also scale by $\frac{1}{\sqrt{|z|}}$ . The vertical parts will tend to $0$ so we get $$ \int_{-\infty}^\infty e^{-\pi x^2z} dx = \int_{-\infty}^\infty e^{-\pi \left(\frac{t}{\sqrt{z}}\right)^2z} \frac{1}{\sqrt{z}} dt = z^{-\frac{1}{2}} \int_{-\infty}^\infty e^{-\pi t^2} dt. $$
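The identity can also be sanity-checked numerically: for a fixed $z$ with $\operatorname{Re} z > 0$, a plain Riemann sum over a large symmetric interval should match the principal branch of $z^{-1/2}$, since $\int_{-\infty}^\infty e^{-\pi t^2}\,dt = 1$. A quick sketch (the choice of $z$ and the grid parameters are arbitrary choices of mine):

```python
import cmath

z = 2 + 1j  # any z with positive real part
h = 0.001
# Riemann sum of exp(-pi x^2 z) over [-8, 8]; the tails are negligible
integral = sum(cmath.exp(-cmath.pi * (k * h) ** 2 * z) * h
               for k in range(-8000, 8001))
# compare with the principal branch z^(-1/2)
assert abs(integral - z ** -0.5) < 1e-9
```

The trapezoid/Riemann sum converges extremely fast here because the integrand is entire with Gaussian decay.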
|
|complex-analysis|proof-explanation|fourier-analysis|cauchy-integral-formula|theta-functions|
| 1
|
Solving $2mx^{2m+1}-(2m+1)x^{2m}+1=0$
|
I set up a problem for myself and came to this equation and I'm wondering if it can be solved algebraically. The variable $m$ is a whole number and we're solving for $x$ . $$2mx^{2m+1}-(2m+1)x^{2m}+1=0$$ One solution is $x=1$ but I believe there should be another root in $(-\infty,0)$ .
|
$$f(x)=2mx^{2m+1}-(2m+1)x^{2m}+1$$ In terms of real roots, there is a double one at $x=1$ and another which is negative. If $x_{(m)}$ is this root, we have $$-1 \lt x_{(m)} \leq -\frac 1 2$$ Working with this root is not very simple using $f(x)$ directly. What I found more convenient is to let $t=-x$ and search for the zero of the function $$g(t)=\log \left(t^{2 m} (1+2 m (t+1))\right)$$ Interesting (but I do not see how to use it) is to notice that the problem is just the inverse of $$2m= \frac{1}{\log (t)}\,W_{-1}\left(\frac{ \log (t)}{t+1}\,t^{\frac{1}{t+1}}\right)-\frac{1}{t+1}$$ where $W(.)$ is the Lambert function. Now, to have an estimate of $t_{(m)}$ , we can make one single iteration of a Newton-like method of order $k$ starting with $$t_0=\frac 12+\frac{3 m+1}{6 m (2 m+1)}\log \left(\frac{4^m}{3 m+1}\right)$$
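The bracketing $-1 < x_{(m)} \le -\frac12$ is easy to confirm numerically. For $m=1$ the polynomial factors as $(x-1)^2(2x+1)$, so the negative root is exactly $-\frac12$; for larger $m$ it drifts toward $-1$. A bisection sketch (the range of $m$ and the iteration count are my choices):

```python
def f(x, m):
    # f(x) = 2m x^(2m+1) - (2m+1) x^(2m) + 1
    return 2 * m * x ** (2 * m + 1) - (2 * m + 1) * x ** (2 * m) + 1

roots = {}
for m in range(2, 9):
    lo, hi = -1.0, -0.5
    assert f(lo, m) < 0 < f(hi, m)  # sign change brackets the negative root
    for _ in range(80):             # plain bisection
        mid = (lo + hi) / 2
        if f(mid, m) < 0:
            lo = mid
        else:
            hi = mid
    roots[m] = (lo + hi) / 2
    assert -1 < roots[m] < -0.5
```

As expected, the computed roots decrease toward $-1$ as $m$ grows.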
|
|calculus|polynomials|roots|
| 0
|
How can a probability density function (pdf) be greater than $1$?
|
The PDF describes the probability of a random variable to take on a given value: $f(x)=P(X=x)$ My question is whether this value can become greater than $1$? Quote from wikipedia: "Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere." This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)? Original question: "$X$ is a continuous random variable with probability density function $f$. Answer with either True or False. $f(x)$ can never exceed $1$." Thank you! EDIT: Resolved.
|
In continuous probability distributions, probabilities are attached to intervals rather than to individual points: for a continuous random variable, $P(X=x)=0$ for every single value $x$. The height of the PDF at a point is therefore not a probability, and it may exceed one; what carries probability is the area under the curve. The probability that $X$ falls in $[a,b]$ is $\int_a^b f(x)\,dx$, and the integral of $f$ over its entire range must equal one. In the uniform example from your quote, $f(x)=2$ on $[0,\frac12]$: the height is $2$, but the total area is $2 \times \frac12 = 1$, and the probability of any subinterval is $2$ times its length, which never exceeds $1$.
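The uniform example can be made concrete in a few lines: the density sits at height $2$, yet every probability computed from it stays in $[0,1]$. A small sketch (the grid size is an arbitrary choice of mine):

```python
def uniform_pdf(x, a=0.0, b=0.5):
    # density of the uniform distribution on [a, b]
    return 1.0 / (b - a) if a <= x <= b else 0.0

# the density itself exceeds 1 ...
assert uniform_pdf(0.25) == 2.0

# ... but the total area under the curve is still 1
n = 100_000
h = 1.0 / n
area = sum(uniform_pdf(i * h) * h for i in range(n))
assert abs(area - 1.0) < 1e-3

# and the probability of an interval is 2 * (its length), never above 1
prob = sum(uniform_pdf(i * h) * h for i in range(n) if 0.1 <= i * h < 0.3)
assert abs(prob - 0.4) < 1e-3
```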
|
|probability|statistics|density-function|
| 0
|
Intrinsic proof that sprays induce involutions
|
Let $M$ be a smooth manifold. Let $V$ be the canonical vector field on $T M$ (also called the Liouville vector field), which if $(x, y)$ are local coordinates on $T M$ is defined by $V = y^i \frac{\partial }{\partial y^i}$ . Also denote by $J : TTM \to TTM$ the canonical endomorphism of $TTM$ (also called an almost tangent structure), which in local coordinates is given by $J(a^i \frac{\partial }{\partial x^i} + b^i \frac{\partial }{\partial y^i}) = a^i \frac{\partial }{\partial y^i}$ . These definitions immediately imply some basic facts: $\mathcal{L}_V J = -J$ , $J^2 = 0$ (in fact $\operatorname{im} J = \operatorname{ker} J$ ), and $[JX, JY] = J[JX, Y] + J[X, JY]$ . Now let $S$ be a spray on $M$ , which means that $S$ is a vector field on $TM$ such that (1) $J S = V$ and (2) $[V, S] = S$ . In this situation, the Lie derivative $\mathcal{L}_S J$ is an involution on $TTX$ , which is intimately connected to the affine connection $S$ induces on $M$ . (Namely $S$ defines an Ehresmann conn
|
It turns out that this is just a linear algebra problem. We don't even need the condition $[V, S] = S$ (such $S$ is then just called a semispray). First observe that since $(\mathcal{L}_S J)X = [S, JX] - J[S, X]$ we easily have $$ J(\mathcal{L}_S J)X = J[S, JX] = [V, JX] - J[V, X] = (\mathcal{L}_V J)X = -JX, $$ and $(\mathcal{L}_S J) J X = -J[S, JX] = JX$ similarly. Now we just appeal to the following lemma. Lemma. Let $J$ and $T$ be endomorphisms of a finite dimensional vector space $V$ . If $\operatorname{im} J = \operatorname{ker} J$ , $JT = -J$ , and $TJ = J$ , then $T^2 = 1$ . Proof. The condition $TJ = J$ implies that the $1$ -eigenspace of $T$ is at least $\dim \ker J$ i.e $\frac{1}{2} \dim V$ -dimensional. The condition $JT = -J$ i.e. $J (T + 1) = 0$ implies that $\dim \operatorname{im} (T + 1) \leq \frac{1}{2} \dim V$ . Hence the $-1$ -eigenspace of $T$ is at least $\frac{1}{2} \dim V$ -dimensional as well. It follows that $T^2$ is diagonalizable with sole eigenvalue $1$ , i.e
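The lemma can be spot-checked with a minimal concrete instance on $V=\mathbb{R}^2$ (my example, not from the answer): take $J$ the nilpotent Jordan block, so $\operatorname{im} J = \operatorname{ker} J$, and solve $TJ = J$, $JT = -J$ by hand, which forces $T=\begin{pmatrix}1&b\\0&-1\end{pmatrix}$; any such $T$ squares to the identity.

```python
def mul(A, B):
    # 2x2 matrix product over lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, 1], [0, 0]]    # nilpotent with im J = ker J = span{e1}
T = [[1, 5], [0, -1]]   # the entry 5 is arbitrary; any value works

assert mul(T, J) == J                     # TJ = J
assert mul(J, T) == [[0, -1], [0, 0]]     # JT = -J
assert mul(T, T) == [[1, 0], [0, 1]]      # hence T^2 = 1
```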
|
|differential-geometry|riemannian-geometry|connections|lie-derivative|tangent-bundle|
| 1
|
Cone Theorem Step 3
|
I'm reading the proof of the cone theorem from Koll'ar-Mori and I find step 3 to be very unclear. In particular below is a simple proof of this step which must be wrong somehow. I am wondering where I went wrong. The setup for this step is that we have the closed cone of curves $\overline{NE}$ and we define a closed convex(?) subcone $W = \overline{\overline{NE}_{K \geq 0} + \sum_{\dim F_L = 1} F_L}$ where we define $F_L = L^\perp \cap \overline{NE}$ where $L$ is a nef divisor, and $K = K_X + \Delta$ is the log canonical divisor of some log pair $(X, \Delta)$ (ie. $X$ is normal and $K_X + \Delta$ is $\mathbb{Q}$ -Cartier). It is claimed in the book that $W = \overline{NE}$ . But isn't $W$ a closed convex subcone? Then, we know that it is equal to the convex hull of its extremal rays, so to show it is all of $\overline{NE}$ , it would seem sufficient to show that it contains all of the extremal rays of $\overline{NE}$ . But then an extremal ray of $\overline{NE}$ is nothing but a one-di
|
I think the point of this step is that the $L$ over which the sum is taken are integral Cartier divisors. (Look at the statement of Theorem 3.15: the $\xi_i$ are supposed to be elements of $N_{\mathbb Z}$ .) Your statement that "faces are all of the form.." is true, but it is not obvious a priori why for a given face in the $K \geq 0$ halfspace, we can choose the corresponding $L$ to be integral.
|
|algebraic-geometry|birational-geometry|
| 1
|
Axiom of Regularity and nested set
|
I have some trouble with the axiom of regularity. I would like to show that $$x = (x,y)$$ cannot hold for any $y$. As the definition of an ordered pair in terms of sets I use the Kuratowski definition, so: $$x=\{\{x\},\{x,y\}\}$$ Using the axiom of regularity it is easy to show that $x=\{x\}$ is impossible. But here, e.g. for $y=x$, we have something more nested: $x=\{\{x\}\}$ - and actually I'm stuck - I really have no idea...
|
The axiom of regularity says that every nonempty set is disjoint from one of its elements. If $x=\{\{x\},\{x,y\}\}$ then the set $\{x,\{x\}\}$ has nonempty intersection with each of its elements, contradicting the axiom of regularity. If $x=\{\{x\}\}$ then the set $\{x,\{x\},\{\{x\}\}\}$ has nonempty intersection with each of its elements, contradicting the axiom of regularity.
|
|set-theory|
| 1
|
If $(f_n)$ and $(g_n)$ converge uniformly on $A \subseteq \mathbf{R}$ then $(f_n + g_n)$ converges uniformly, but not necessarily $(f_ng_n)$. Why?
|
It seems I can prove that if $(f_n) \rightarrow f$ uniformly on $A$ and $(g_n)\rightarrow g$ uniformly on $A$ then for arbitrary but fixed $x \in A$ , $f_n(x) + g_n(x)\rightarrow f(x) + g(x) = (f+g)(x)$ via the algebraic limit theorem for sequences, as $f_n(x)$ and $g_n(x)$ are just sequences of numbers. It seems I should be able to do the same with products, but I am aware there are counterexamples. Therefore there is an issue with invoking the algebraic limit theorem here, except I cannot see what it is. There is clearly something I am missing - not sure exactly what.
|
Your use of the algebraic limit theorem proves merely the pointwise convergence of $f_n(x)+g_n(x) \rightarrow f(x)+g(x)$ not uniform convergence. The same algebraic limit theorem asserts the pointwise convergence of $f_n(x)g_n(x) \rightarrow f(x)g(x)$ . But neither case has anything to do with uniform convergence.
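A standard counterexample (well known, not from the post) makes the gap visible: take $f_n(x)=g_n(x)=x+\frac1n$ on $\mathbb{R}$. Each converges uniformly to $x$, but $f_ng_n = x^2+\frac{2x}{n}+\frac{1}{n^2}$ converges to $x^2$ only pointwise, since the error $\left|\frac{2x}{n}+\frac{1}{n^2}\right|$ is unbounded in $x$ for every fixed $n$. A quick numeric illustration:

```python
n = 100

def fn(x):
    # f_n(x) = g_n(x) = x + 1/n converges uniformly to x on all of R
    return x + 1.0 / n

def err(x):
    # |f_n(x) g_n(x) - x^2| = |2x/n + 1/n^2|
    return abs(fn(x) * fn(x) - x * x)

assert err(0.0) < 1e-3    # small near the origin ...
assert err(1e6) > 1.0     # ... but unbounded in x: no uniform bound exists
```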
|
|real-analysis|
| 1
|
Show that a state $\rho=\sum_i p_i|e_i\rangle\!\langle e_i|$ has purifications of the form $\sum_i s_i |e_i\rangle\otimes|f_i\rangle$
|
Let $ρ_A = \sum_{i=1}^r p_i|e_i⟩⟨e_i|$ , where $p_i$ are the nonzero eigenvalues of $ρ_A$ and $|e_i⟩$ corresponding orthonormal eigenvectors. If some eigenvalue appears more than once then this decomposition is not unique. Show that, nevertheless, any purification $|\psi_{AB}⟩$ of $ρ_A$ has a Schmidt decomposition of the form $|\psi_{AB}⟩ = \sum_{i=1}^r s_i|e_i⟩ ⊗ |f_i⟩$ ,...(*) with the same $|e_i⟩$ as above. Hint: Start with an arbitrary Schmidt decomposition and rewrite it in the desired form. My try: Following the hint,let $|\Psi_{AB}\rangle\in H_A\otimes H_B$ be a purification of $\rho_A$ with $H_B$ of dimension dim $H_B \ge $ rank $\rho_A$ . By theorem 2.20 , $|\psi_{AB}⟩ = \sum_{i=1}^r \tilde s_i|\tilde e_i⟩ ⊗ |f_i⟩$ , $\tilde s_i >0$ with the $|\tilde e_i\rangle\in H_A$ orthonormal and $|f_i\rangle \in H_B$ so that $\rho_A=Tr_{B}(|\psi_{AB}\rangle\langle \psi_{AB}|)=\sum_{i=1}^r\tilde s_i^2|\tilde e_i\rangle \langle\tilde e_i|$ I can relate the bases $B=\{|e_i\rangle\}_{i=1}^n$
|
This follows from the uniqueness of the singular value decomposition, and the fact that $|\Psi\rangle$ being a purification for $\rho$ is the same as saying that $\rho=\Psi\Psi^\dagger$ with $\Psi$ the matrix whose vectorisation is $|\Psi\rangle$ . See also this answer to a related question on qc.SE
|
|linear-algebra|quantum-mechanics|quantum-computation|quantum-information|
| 0
|
Relation between subsets: Simmons
|
I am trying to solve the below exercise in Simmons. (a) Let $U$ be the single-element set $\{1\}$ . There are two subsets, the empty set $\emptyset$ and $\{1\}$ itself. If $A$ and $B$ are arbitrary subsets of $U$ , there are four possible relations of the form $A \subseteq B$ . Count the number of true relations among these. (b) Let $U$ be the set $\{1,2\}$ . There are four subsets. List them. If $A$ and $B$ are arbitrary subsets of $U$ , there are $16$ possible relations of the form $A \subseteq B$ . Count the number of true ones. (c) Let $U$ be the set $\{1,2,3\}$ . There are $8$ subsets. What are they? There are $64$ possible relations of the form $A \subseteq B$ . Count the number of true ones. (d) Let $U$ be the set $\{1,2, \ldots, n\}$ for an arbitrary positive integer $n$ . How many subsets are there? How many possible relations of the form $A \subseteq B$ are there? Can you make an informed guess as to how many of these are true? Here is my attempt at a solution. (a) We have fo
|
Let's say $B$ has $r$ elements; we can select $B$ in $\binom{n}{r}$ ways. For each such $B$ we can choose a subset $A \subseteq B$ with $0,1,2,\ldots,r$ elements in $\binom{r}{0}, \binom{r}{1}, \binom{r}{2}, \ldots, \binom{r}{r}$ ways respectively. Therefore for a $B$ with $r$ elements there are $\binom{r}{0} + \binom{r}{1} + \binom{r}{2} + \cdots + \binom{r}{r} = 2^r$ choices of $A$. Summing over all $B$, the number of true relations equals $\binom{n}{0} 2^0 + \binom{n}{1} 2^1 + \cdots + \binom{n}{n} 2^n = 3^n$.
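The count $3^n$ is easy to confirm by brute force for small $n$ (another way to see it: each of the $n$ elements independently lands in $A$, in $B\setminus A$, or outside $B$, giving $3$ choices per element). A sketch:

```python
from itertools import chain, combinations

def subsets(s):
    # all subsets of s, as tuples
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(0, 5):
    subs = subsets(range(1, n + 1))
    true_relations = sum(1 for A in subs for B in subs if set(A) <= set(B))
    assert len(subs) == 2 ** n        # number of subsets
    assert true_relations == 3 ** n   # number of true relations A ⊆ B
```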
|
|elementary-set-theory|
| 0
|
How much data does a category contain?
|
This might seem like a very vague question, but the details are really confusing me. So, for example, say we are studying the category of $A$ -modules $\mathsf{Mod}_A$ where $A$ is a commutative unital ring. At first, I was confused how $\mathsf{Mod}_A$ is an abelian category, as in the hom-sets having an abelian group structure. Sure, we know that for any $A$ -module homomorphisms $\phi,\psi: M \to N$ , we can add them to get another homomorphism $\phi + \psi$ in a natural way. But thinking categorically, doesn't $\mathsf{Mod}_A$ contain absolutely no information about the elements of modules? To define $\phi + \psi$ we must define in terms of elements, and I am not aware of how to construct some $\phi + \psi$ categorically. So, from what I have understood, the 'abelian category $\mathsf{Mod}_A$ ' contains not only information about $\mathsf{Mod}_A$ , but additionally abelian group structures for the hom-sets $\operatorname{Hom}(M,N)$ for every object $M,N$ . I guess this is related t
|
It's very helpful to know in these kinds of circumstances whether you are viewing categories as a language/tool, or as an object of study in its own right. In the case of $A$ modules, I would say it's clear that we are really interested in properties of modules, so we know that there is a forgetful functor, we know what free modules are (useful!), we often have a tensor product, and this all helps us answer questions about modules (and rings). From this perspective, being an abelian category is something we could check for our category of interest $A-mod$ , and then we would use general abelian category theory to assist us. Viewing $A-mod$ as a pure category, and forgetting all else we know could be an interesting idea, but for reaching qcoherent sheaves, cohomology, derived categories etc, it seems like an artificial restriction which doesn't help understanding. However, the questions of how much is purely categorical can be very interesting! I'll give a few examples that I like, rela
|
|category-theory|abelian-categories|monoidal-categories|
| 0
|
How much data does a category contain?
|
This might seem like a very vague question, but the details are really confusing me. So, for example, say we are studying the category of $A$ -modules $\mathsf{Mod}_A$ where $A$ is a commutative unital ring. At first, I was confused how $\mathsf{Mod}_A$ is an abelian category, as in the hom-sets having an abelian group structure. Sure, we know that for any $A$ -module homomorphisms $\phi,\psi: M \to N$ , we can add them to get another homomorphism $\phi + \psi$ in a natural way. But thinking categorically, doesn't $\mathsf{Mod}_A$ contain absolutely no information about the elements of modules? To define $\phi + \psi$ we must define in terms of elements, and I am not aware of how to construct some $\phi + \psi$ categorically. So, from what I have understood, the 'abelian category $\mathsf{Mod}_A$ ' contains not only information about $\mathsf{Mod}_A$ , but additionally abelian group structures for the hom-sets $\operatorname{Hom}(M,N)$ for every object $M,N$ . I guess this is related t
|
For your first question about $\mathbf{Mod}_A$ , it is a remarkable fact that being abelian is actually a property of a category and not some additional structure. That is, even though the straightforward definition of $\phi+\psi$ is via elements, this can be done without mentioning them, namely by declaring $\phi+\psi$ to be the morphism $$M\stackrel{(\phi,\psi)}{\to}N\oplus N \to N,$$ where the last morphism from $N\oplus N\to N$ is given by the identity map from both components. With quite some work you can then show that a category is additive if and only if it has all biproducts and this "addition" rule on parallel morphisms gives an abelian group structure on hom sets. The point being, that this can be done without ever looking inside the objects. As for the second question, it may be also surprising that there are always several different valid "forgetful" functors $U:\mathbf{Mod}_A\to \mathbf{Set}$ . In fact, if $A$ and $B$ are Morita equivalent , their module categories are eq
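For $A=\mathbb{R}$ this element-free recipe can be checked with matrices: writing $(\phi,\psi)$ as the block column $\binom{\phi}{\psi}: M \to N\oplus N$ and the codiagonal as the block row $[\,1\ \ 1\,]: N\oplus N \to N$, the composite is exactly the entrywise sum $\phi+\psi$. A small sketch (plain Python lists as matrices, dimensions my choice):

```python
def matmul(A, B):
    # naive matrix product over lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

phi = [[1, 2], [3, 4]]
psi = [[5, 6], [7, 8]]

pair = phi + psi  # list concatenation stacks rows: the block column M -> N (+) N
fold = [[1, 0, 1, 0],
        [0, 1, 0, 1]]  # codiagonal [id id]: N (+) N -> N

composite = matmul(fold, pair)
entrywise = [[phi[i][j] + psi[i][j] for j in range(2)] for i in range(2)]
assert composite == entrywise  # the categorical sum agrees with phi + psi
```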
|
|category-theory|abelian-categories|monoidal-categories|
| 0
|
How much data does a category contain?
|
This might seem like a very vague question, but the details are really confusing me. So, for example, say we are studying the category of $A$ -modules $\mathsf{Mod}_A$ where $A$ is a commutative unital ring. At first, I was confused how $\mathsf{Mod}_A$ is an abelian category, as in the hom-sets having an abelian group structure. Sure, we know that for any $A$ -module homomorphisms $\phi,\psi: M \to N$ , we can add them to get another homomorphism $\phi + \psi$ in a natural way. But thinking categorically, doesn't $\mathsf{Mod}_A$ contain absolutely no information about the elements of modules? To define $\phi + \psi$ we must define in terms of elements, and I am not aware of how to construct some $\phi + \psi$ categorically. So, from what I have understood, the 'abelian category $\mathsf{Mod}_A$ ' contains not only information about $\mathsf{Mod}_A$ , but additionally abelian group structures for the hom-sets $\operatorname{Hom}(M,N)$ for every object $M,N$ . I guess this is related t
|
It indeed looks like you need extra information to answer the question whether a given category $\mathcal{C}$ is an abelian category, but this is actually not true. An abelian category is an additive category satisfying some extra properties, i.e. no extra structure is imposed once we have verified $\mathcal{C}$ is an additive category. A category $\mathcal{C}$ is additive if it is pre-additive and if finite coproducts and products coincide. The latter is again a property and not structure, because it asks for all objects $x,y\in\mathcal{C}$ for the canonical comparison map $x\sqcup y\to x\times y$ to be an isomorphism. However, in general, a category being a pre-additive category is not a property, but extra data: it is the data of an enrichment over abelian groups, informally a well-behaved choice of abelian group structure on each hom-set of your category. In this light, it looks like being an additive category is extra structure. This is not true: the point is that, if your categor
|
|category-theory|abelian-categories|monoidal-categories|
| 1
|
The total number of non-negative integers $n$ satisfying the equations $n^2=p+q$ and $n^3=p^2+q^2$, where $p$ and $q$ are integers, is...
|
The total number of non-negative integers $n$ satisfying the equations $n^2=p+q$ and $n^3=p^2+q^2$ , where $p$ and $q$ are integers, is... Case $1$ : If both $p$ and $q$ are non-negative integers then, using the fact that the AM of $m$th powers $\ge$ the $m$th power of the AM, we get $\frac{p^2+q^2}{2}\ge\left(\frac{p+q}{2}\right)^2\implies \frac{n^3}{2}\ge\frac{n^4}{4}\implies n\le2\implies n=0,1,2$ ; specifically, $n=0$ when $(p,q)=(0,0)$ , $n=1$ when $(p,q)=(1,0)$ or $(0,1)$ , and $n=2$ when $(p,q)=(2,2)$ . Case $2$ : when both $p$ and $q$ are negative integers. This case gets rejected because $n^2=p+q\ge0$ . Case $3$ : when one of $p$ and $q$ is positive and the other is negative. How do we tackle this case?
|
First prove that $p$ and $q$ have the same odd prime factors. Then prove that no odd prime factor can have different multiplicity in $p$ and $q$ . Conclude that $p$ and $q$ are plus or minus powers of $2$ . Details: $2pq = n^4 - n^3$ . So every odd prime factor of $n$ divides $pq$ . But $n \mid p + q$ so every odd prime factor divides both $p$ and $q$ . Suppose $r$ is odd and has higher multiplicity in $p$ than $q$ , say $r^e$ divides $p$ exactly. Then $r^{e}$ divides $n^2$ exactly and $r^{2e}$ divides $n^3$ exactly, and the multiplicities don't match. Therefore $p$ and $q$ are plus or minus powers of $2$ . By the same argument above they must have the same multiplicity of $r=2$ so $p = \pm q$ and you can finish from there: The sign must be $+$ and then $n = \frac{p^2 + q^2}{p+q} = p$ and then $n=2$ .
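The conclusion can be cross-checked directly (an alternative route, my own): for fixed $n$, the pair $p, q$ must be the roots of $t^2 - n^2 t + \frac{n^4-n^3}{2} = 0$ (sum $n^2$, product $\frac{n^4-n^3}{2}$), whose discriminant is $n^4 - 2(n^4-n^3) = n^3(2-n)$. This is negative for every $n \ge 3$, so only $n \in \{0,1,2\}$ can work, regardless of signs of $p$ and $q$. A brute-force sketch:

```python
import math

solutions = []
for n in range(0, 1000):
    # p, q are roots of t^2 - n^2 t + (n^4 - n^3)/2; integer roots need the
    # discriminant n^3 (2 - n) to be a perfect square of the right parity
    disc = n ** 3 * (2 - n)
    if disc < 0:
        continue
    r = math.isqrt(disc)
    if r * r == disc and (n * n + r) % 2 == 0:
        solutions.append(n)
assert solutions == [0, 1, 2]
```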
|
|combinatorics|inequality|solution-verification|contest-math|
| 0
|
Norm equality for operators on a Hilbert tensor product
|
Ok, this could be tough. Suppose you are given bounded linear operators, $\{A_n\}_{n=1}^N\subset\mathcal{B}(\mathcal{H})$ on a Hilbert space $\mathcal{H}$ and $\{B_n\}_{n=1}^N\subset\mathcal{B}(\mathcal{K})$ on another Hilbert space $\mathcal{K}$ , with $N\geq1$ . If $\{U_n\}_{n=1}^N\subset\mathcal{U}(\mathcal{K})$ is a family of unitary operators on $\mathcal{K}$ , is it true that $$ \left\|\sum\limits_{n=1}^N A_n\otimes U_nB_n\right\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}=\left\|\sum\limits_{n=1}^N A_n\otimes B_nU_n^*\right\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}\,\,? $$ Notice that for $N=1$ , the equality surely holds since $$ \|A\otimes UB\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}=\|A\|_{\mathcal{B}(\mathcal{H})}\|UB\|_{\mathcal{B}(\mathcal{K})}=\|A\|_{\mathcal{B}(\mathcal{H})}\|BU^*\|_{\mathcal{B}(\mathcal{K})}=\|A\otimes BU^*\|_{\mathcal{B}(\mathcal{H}\otimes\mathcal{K})}. $$ From $N=2$ , it seems that nothing can be said.
|
It's even wrong for $H=K=\mathbb C$ and $N=2$ . Take $A_1=A_2=1$ and $B_1=B_2^\ast=U_1^\ast=U_2=\frac 1{\sqrt 2}(1+i)$ . Then $$ \lvert U_1B_1+U_2B_2\rvert=2\lvert B_1\rvert^2=2 $$ and $$ \lvert B_1 U_1^\ast+B_2 U_2^\ast\rvert=\lvert B_1^2+(B_1^\ast)^2\vert=0. $$
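Since all the operators here are $1\times1$ (scalars), the counterexample is a two-line check with Python's complex numbers:

```python
B1 = (1 + 1j) / 2 ** 0.5
B2 = B1.conjugate()
U1 = B1.conjugate()   # unitary: |U1| = 1, and U1* = B1
U2 = B1               # unitary: |U2| = 1, and U2* = B1.conjugate()

assert abs(abs(U1) - 1) < 1e-12 and abs(abs(U2) - 1) < 1e-12

lhs = abs(U1 * B1 + U2 * B2)                          # |U1 B1 + U2 B2|
rhs = abs(B1 * U1.conjugate() + B2 * U2.conjugate())  # |B1 U1* + B2 U2*|
assert abs(lhs - 2) < 1e-12   # the first norm is 2 ...
assert rhs < 1e-12            # ... while the second is 0
```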
|
|operator-theory|hilbert-spaces|tensor-products|operator-algebras|matrix-norms|
| 0
|
Transform a vector of positive and negative values to sum up to 0
|
Is there a transformation that produces a vector with sum $0$ ? There are positive and negative values and the transformation does not need to preserve the weights. E.g.: $f(x_1, x_2, x_3) = (x_1', x_2', x_3')$ s.t. $$\sum_{i=1}^3x_i' = 0$$
|
Intrigued, I started looking for a linear map, $$ f \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} $$ Imposing that $x_1'+x_2'+x_3'=0$ , we get that $$(a_1+b_1+c_1)x_1+(a_2+b_2+c_2)x_2+(a_3+b_3+c_3)x_3=0$$ So one example of a linear map that produces vectors with sum of the components $0$ is the map given by the matrix $$ \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \quad \text{with} \quad \begin{cases} a_1+b_1+c_1=0\\ a_2+b_2+c_2=0\\ a_3+b_3+c_3=0 \end{cases} $$ and I think you can produce infinitely many examples.
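A standard concrete instance of such a matrix (my choice of example, not from the answer) is the centering map $P = I - \frac1n\mathbf{1}\mathbf{1}^\top$: every column of $P$ sums to zero, so $Px$ always has zero component sum. A sketch:

```python
n = 3
# centering matrix P = I - (1/n) J, where J is the all-ones matrix
P = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

# every column of P sums to zero: exactly the condition derived in the answer
for j in range(n):
    assert abs(sum(P[i][j] for i in range(n))) < 1e-12

x = [4.0, -1.0, 2.5]
y = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
assert abs(sum(y)) < 1e-12  # the image has component sum 0
```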
|
|linear-algebra|statistics|vectors|transformation|
| 0
|
Series of binomial coefficients: $\sum\limits_{n=k}^{\infty}{\binom nk}x^n=\frac{x^k}{(1-x)^{k+1}}$
|
Any hint to prove this? I tried with properties of binomial coefficient but I can't get anything. $$\sum_{n=k}^{\infty}{{n}\choose{k}}x^n=\dfrac{x^k}{(1-x)^{k+1}}$$
|
Here is a self-contained proof based on the idea of combinatorial class. Note the combinatorial interpretation of the binomial coefficient: \begin{align*} \binom{n}{k} &= [\text{# of length-$n$ strings in alphabet $\{\texttt{a}, \texttt{b}\}$ containing exactly $k$ $\texttt{b}$'s}] \end{align*} So, if $w$ stands for any string in alphabet $\{\texttt{a}, \texttt{b}\}$ and $|w|$ denotes the length of $w$ , then $$ \binom{n}{k} = \sum_{\substack{|w| = n \\ \text{$w$ has exactly $k$ $\texttt{b}$'s}}} 1 $$ and hence \begin{align*} \sum_{n=k}^{\infty} \binom{n}{k} x^n &= \sum_{n=k}^{\infty} \sum_{\substack{|w| = n \\ \text{$w$ has exactly $k$ $\texttt{b}$'s}}} x^n \\ &= \sum_{n=k}^{\infty} \sum_{\substack{|w| = n \\ \text{$w$ has exactly $k$ $\texttt{b}$'s}}} x^{|w|} \\ &= \sum_{\text{$w$ has exactly $k$ $\texttt{b}$'s}} x^{|w|}. \end{align*} On the other hand, any word $w$ in alphabet $\{\texttt{a}, \texttt{b}\}$ having exactly $k$ $\texttt{b}$ 's can be parametrized in the form $$ \texttt{
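The identity is also easy to test numerically for $|x|<1$, truncating the series far enough that the tail is negligible:

```python
from math import comb

x, k = 0.3, 4
partial = sum(comb(n, k) * x ** n for n in range(k, 400))  # truncated series
closed = x ** k / (1 - x) ** (k + 1)                        # closed form
assert abs(partial - closed) < 1e-12
```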
|
|sequences-and-series|power-series|binomial-coefficients|
| 0
|
Limit of $∞.0$ form of an integral and Riemann sum
|
I was pondering how to solve a limit of the kind $$\lim_{n \to ∞} n^k \left(\dfrac{1}{n}\sum_{k=0}^{n-1} f \left(\dfrac{k}{n} \right) -\int_0^1 f(x)dx \right)$$ where k is chosen such that the order of infinity is just right for the limit to be non-zero but not infinite. For instance, if $f(x)=x^2$ , then the limit becomes; $$\lim_{n \to ∞} n^k \left(\dfrac{(n+1)(2n+1)}{6n^2}-\dfrac{1}{3} \right)$$ Here, k should be 1 and the final answer will be the ratio of the coefficients of the linear term in the numerator and denominator i.e. $1/2$ . But, this was a relatively simple integral and I'm searching for a general solution or algorithm to calculate this kind of limit. Here, I'm assuming $f(x)$ is a well-behaved function i.e. it is a $C^∞$ function with a Taylor series which converges for $0 < x < 1$ . I thought of perhaps applying l'hopital's rule to the sum but the issue is that the number of terms itself is dependent of the variable with respect to which I wish to differentiate. Then I thought
|
If $f\in C^2[0,1]$ , we have the conclusion: $\begin{aligned}\lim_{n\rightarrow\infty}n\left(\frac{1}{n}\sum_{k=1}^nf\left(\frac{k}{n}\right)-\int_0^1f(x)\mathrm{d}x\right)=\frac{1}{2}[f(1)-f(0)]\end{aligned}$ Consider the Taylor expansion of $f$ : $\begin{aligned}f(x)=f\left(\frac{k}{n}\right)+f'\left(\frac{k}{n}\right)\left(x-\frac{k}{n}\right)+\frac{f''(\xi_{n,k})}{2}\left(x-\frac{k}{n}\right)^2\end{aligned}$ Let $M=\max_{x\in[0,1]}\{|f''(x)|\}$ ; we can find that $\begin{aligned}&n\left|\frac{1}{n}\sum_{k=1}^nf\left(\frac{k}{n}\right)-\int_0^1f(x)\mathrm{d}x+\sum_{k=1}^n\int_{\frac{k-1}{n}}^\frac{k}{n}f'\left(\frac{k}{n}\right)\left(x-\frac{k}{n}\right)\mathrm{d}x\right|\\ &=n\left|\sum_{k=1}^n\int _{\frac{k-1}{n}}^\frac{k}{n}f\left(\frac{k}{n}\right)\mathrm{d}x-\sum_{k=1}^n\int_\frac{k-1}{n}^\frac{k}{n}f(x)\mathrm{d}x+\sum_{k=1}^n\int_{\frac{k-1}{n}}^\frac{k}{n}f'\left(\frac{k}{n}\right)\left(x-\frac{k}{n}\right)\mathrm{d}x\right|\\ &=n\left|\sum_{k=1}^n\int_\frac{k-1}{n}^\frac{k}{n}\lef
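For the question's worked example $f(x)=x^2$, the limit $\frac12[f(1)-f(0)]=\frac12$ can be watched numerically with right-endpoint sums (matching this answer's convention):

```python
def gap(n):
    # n * (right Riemann sum of x^2 over [0,1] minus the exact integral 1/3)
    s = sum((k / n) ** 2 for k in range(1, n + 1)) / n
    return n * (s - 1.0 / 3.0)

# the exact value is (3n + 1) / (6n), which tends to 1/2
assert abs(gap(10 ** 5) - 0.5) < 1e-3
```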
|
|limits|taylor-expansion|riemann-integration|infinity|riemann-sum|
| 1
|
Converting the given equation in $y=mx+c$ form
|
I need to convert the below formula into $y = mx +c$ . I have it on excel graphically as a solved equation, but no matter how hard I try I'm having a hard time identifying what " $c$ " is equal to: \begin{equation} \Delta E = \frac{1}{8m((N+1)l)^2} \left[ \left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2} \right)^2h^2 \right]. \tag{1} \end{equation} My $x$ term is $1/(N+1)$ , so I've pulled that out separately, but now I'm trying to separate $m$ and $c$ : \begin{equation} \Delta E = \frac{\left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2}\right)^2h^2}{8ml^2(N+1)} \times \frac{1}{N+1}, \tag{2} \end{equation} where $h$ , $m$ and $l$ are constant. Any help would be hugely appreciated!
|
If $x = \frac 1{N+1}$ , we can solve it for $N$ to get $N = \frac 1x - 1$ and substitute that into $\frac{(\frac{N}{2} + 1)^2 - (\frac{N}{2})^2h^2}{8m((N+1)l)^2}$ to get \begin{align}\frac{x^2 \left(\left(\frac{1}{2} \left(\frac{1}{x}-1\right)+1\right)^2-\frac{1}{4} h^2 \left(\frac{1}{x}-1\right)^2\right)}{8 l^2 m}&=\frac{(x+1)^2-h^2 (x-1)^2}{32 l^2 m}\\ &=\frac{1-h^2}{32 l^2 m}x^2+ \frac{1+h^2}{16 l^2 m}x+\frac{1-h^2}{32 l^2 m}\end{align} Therefore, the function is linear with respect to $x = \frac 1{N+1}$ if and only if $h^2 = 1$ .
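The expansion can be double-checked numerically by comparing the original formula with the quadratic in $x=\frac{1}{N+1}$ at random parameter values (the quadratic's coefficients are exactly the ones displayed above):

```python
import random

def original(N, h, m, l):
    # Delta E as given in equation (1) of the question
    return ((N / 2 + 1) ** 2 - (N / 2) ** 2 * h ** 2) / (8 * m * ((N + 1) * l) ** 2)

def quadratic(x, h, m, l):
    c = (1 - h ** 2) / (32 * l ** 2 * m)   # constant term and x^2 coefficient
    b = (1 + h ** 2) / (16 * l ** 2 * m)   # linear coefficient
    return c * x ** 2 + b * x + c

random.seed(0)
for _ in range(100):
    N = random.randint(1, 50)
    h, m, l = (random.uniform(0.5, 2.0) for _ in range(3))
    x = 1.0 / (N + 1)
    assert abs(original(N, h, m, l) - quadratic(x, h, m, l)) < 1e-12
```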
|
|algebra-precalculus|
| 0
|
Compute the limit of the Log-Sum-Exp function
|
I am trying to prove that the Log-Sum-Exp function converges to the maximum function, i.e. $$ \lim_{\tau\rightarrow0}\tau\log\left(\frac{1}{N}\sum_{i=1}^N\exp\left(\frac{x_{i}}{\tau}\right)\right) = \max_{i}(x_1, \dots, x_N).$$ I saw that a possible direction is to solve the limit as $\rho=1/\tau\rightarrow \infty$ and apply De l'Hôpital's rule. What I get is the following: $$\lim_{\rho\rightarrow\infty} \frac{\frac{1}{N}\sum_{i=1}^Nx_{i}\exp\left(\rho x_{i}\right)}{\frac{1}{N}\sum_{i=1}^N\exp\left(\rho x_{i}\right)}.$$ But at this point I got stuck and didn't know how to proceed. The final result should be the maximum among $\{x_{i}\}$ . Does anyone have some suggestions?
|
Here's a hint if you want to try it yourself first: what happens if you divide by $\exp(\rho x_j)$ where $x_j$ is a maximal element? Choose $x_j$ so that $x_j = \max_i (x_1, \ldots, x_N)$ . We then have $$ \frac{\sum_i x_i \exp(\rho x_i)}{\sum_i \exp(\rho x_i)} = \frac{\sum_i x_i \exp(\rho (x_i - x_j))}{\sum_i \exp(\rho (x_i - x_j))} $$ Suppose $I$ is the set of indices $i$ such that $x_i = x_j$ , i.e. $I$ is the set of indices of elements with maximal value. We can then write $$ \sum_{i} x_i \exp(\rho (x_i - x_j)) = \sum_{i \in I} x_i \exp(\rho (x_i - x_j)) + \sum_{i \not\in I} x_i \exp(\rho (x_i - x_j)) = |I| x_j + \sum_{i \not\in I} x_i \exp(\rho (x_i - x_j)) $$ Then $$ \sum_{i} x_i \exp(\rho (x_i - x_j)) \rightarrow |I| x_j $$ as $\rho \rightarrow \infty$ , since $x_i < x_j$ for all $i \not\in I$ . Similarly, $$ \sum_i \exp(\rho (x_i - x_j)) = \sum_{k=1}^{|I|} 1 + \sum_{i \not\in I} \exp(\rho (x_i - x_j)) = |I| + \sum_{i \not\in I} \exp(\rho (x_i - x_j)) \rightarrow
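The convergence can be watched numerically. The standard max-shift trick (subtracting $x_j = \max_i x_i$ inside the exponentials, exactly as in the hint) keeps the computation stable for small $\tau$:

```python
import math

x = [0.3, -1.2, 2.5, 2.5]   # the max is attained twice, so |I| = 2

def lse(tau):
    m = max(x)
    # tau * log((1/N) * sum exp(x_i / tau)), computed via the shift by max
    return m + tau * math.log(sum(math.exp((xi - m) / tau) for xi in x) / len(x))

# lse(tau) = max(x) + tau * log(|I| / N) + o(tau), so it tends to max(x)
assert abs(lse(1e-4) - max(x)) < 1e-3
assert abs(lse(1e-6) - max(x)) < 1e-5
```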
|
|limits|approximation-theory|
| 1
|
Converting the given equation in $y=mx+c$ form
|
I need to convert the below formula into $y = mx +c$ . I have it on excel graphically as a solved equation, but no matter how hard I try I'm having a hard time identifying what " $c$ " is equal to: \begin{equation} \Delta E = \frac{1}{8m((N+1)l)^2} \left[ \left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2} \right)^2h^2 \right]. \tag{1} \end{equation} My $x$ term is $1/(N+1)$ , so I've pulled that out separately, but now I'm trying to separate $m$ and $c$ : \begin{equation} \Delta E = \frac{\left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2}\right)^2h^2}{8ml^2(N+1)} \times \frac{1}{N+1}, \tag{2} \end{equation} where $h$ , $m$ and $l$ are constant. Any help would be hugely appreciated!
|
Let's look at the equation \begin{equation} \Delta E = \frac{1}{8m((N+1)l)^2} \left[ \left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2} \right)^2h^2 \right]. \tag{1} \end{equation} If we now replace $N$ by $\left(\frac{1}{x}-1 \right)$ , we get $$ \Delta E = \frac{1}{8m((\left(\frac{1}{x}-1 \right)+1)l)^2} \left[ \left(\frac{\left(\frac{1}{x}-1 \right)}{2} + 1\right)^2 - \left(\frac{\left(\frac{1}{x}-1 \right)}{2} \right)^2h^2 \right] $$ Simplify $\rightarrow$ $$ \Delta E = \frac{1}{8m(l/x)^2 \cdot 2^2} \left[ \left(\frac{1}{x}+1 \right)^2 - \left(\frac{1}{x}-1 \right)^2h^2 \right] $$ $$ \Delta E = \frac{x^2}{8ml^2 \cdot 2^2} \left[-\frac{h^2}{x^2}+ \frac{2h^2}{x} - h^2 + \frac{1}{x^2} + \frac{2}{x} +1 \right] $$ $$ \Delta E = \frac{1}{32~ml^2} \left[-h^2+ 2xh^2 - x^2 h^2 + 1 + 2x +x^2 \right] $$ Perhaps you can see the problem with the assumption that this would be linear with respect to $x$ .
|
|algebra-precalculus|
| 0
|
Converting the given equation in $y=mx+c$ form
|
I need to convert the below formula into $y = mx +c$ . I have it on excel graphically as a solved equation, but no matter how hard I try I'm having a hard time identifying what " $c$ " is equal to: \begin{equation} \Delta E = \frac{1}{8m((N+1)l)^2} \left[ \left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2} \right)^2h^2 \right]. \tag{1} \end{equation} My $x$ term is $1/(N+1)$ , so I've pulled that out separately, but now I'm trying to separate $m$ and $c$ : \begin{equation} \Delta E = \frac{\left(\frac{N}{2} + 1\right)^2 - \left(\frac{N}{2}\right)^2h^2}{8ml^2(N+1)} \times \frac{1}{N+1}, \tag{2} \end{equation} where $h$ , $m$ and $l$ are constant. Any help would be hugely appreciated!
|
Your approach does not work since you can't simply factor out $1/(N+1)$ , as $N$ remains as a variable in the rest of the expression. To illustrate, $x\cdot(x+1)$ has $x$ factored out, but it would not be a linear function of $x$ . A more fruitful approach would be to write your unsimplified expression in terms of $x$ , by working out how $N$ relates to $x$ , and then simplifying that.
|
|algebra-precalculus|
| 0
|
Improper Integral $\int_{0}^{\infty} \log(t) t^{-\frac{1}{2}} \exp\left\{-t\right\} dt$
|
Background Hi. I am currently writing my undergraduate thesis which mainly revolves around the generalized log-Moyal distribution pioneered by Bhati and Ravi (see here ). In the aforementioned article, Bhati and Ravi addressed a proposition regarding the generalized log-Moyal distribution, one of which is the following: $$E\left[\log\left(\frac{\mu}{Y}\right)\right] = -\sigma(\log 2+\gamma),$$ where $\gamma=-\int_{0}^{\infty}\log(t)\exp\{-t\}dt\approx 0.577216$ is Euler's gamma constant. The two authors did not provide the proof for this proposition. However, they mentioned that the proof is followed by straightforward computations. I have attempted to prove this proposition, but I have been stuck for quite some time trying to complete the third-to-last line of the following proof. Attempted Proof Notice that \begin{align*} E\left[\log\left(\frac{\mu}{Y}\right)\right] &= \int_{0}^{\infty} \log\left(\frac{\mu}{y}\right) f(y) d y\\ &= \int_{0}^{\infty} \log\left(\frac{\mu}{y}\right) \fra
|
Too long for a comment. Using Frullani's integral $\,\,\displaystyle \ln (t)=-\int_0^\infty\frac{e^{-xt}-e^{-x}}xdx\,\,$ and changing the order of integration $$\int_0^\infty \ln (t) \,t^{-\frac12} e^{-t} dt=-\int_0^\infty\frac{dx}x\int_0^\infty\left(e^{-(x+1)t}-e^{-t}e^{-x}\right)t^{-\frac12}dt$$ $$=-\int_0^\infty\frac{dx}x\left(\frac1{\sqrt{1+x}}-e^{-x}\right)\int_0^\infty t^{-\frac12}e^{-t}dt=\sqrt\pi\int_0^\infty\left(e^{-x}-\frac1{\sqrt{1+x}}\right)\frac{dx}x$$ Integrating by parts, we have $$=\sqrt\pi\ln x\left(e^{-x}-\frac1{\sqrt{1+x}}\right)\,\bigg|_{x=0}^\infty-\sqrt\pi\int_0^\infty\left(\frac{\ln x}{2(1+x)^{\frac32}}-e^{-x}\ln x\right)dx$$ $$=-\sqrt\pi\,\gamma-\frac{\sqrt\pi}2\int_0^\infty\frac{\ln x}{(1+x)^{\frac32}}dx\overset{t=\frac1{\sqrt{1+x}}}{=}-\sqrt\pi\,\gamma-\sqrt\pi\int_0^1\ln \frac{1-t^2}{t^2}dt$$ $$=-\sqrt\pi\,\gamma+2\sqrt\pi\int_0^1\ln (t)\,dt-\sqrt\pi\int_0^1\ln(1-t)dt-\sqrt\pi\int_0^1\ln(1+t)dt$$ $$I=-\sqrt\pi\big(\gamma+2\ln2\big).$$
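A numerical sanity check of the closed form (not part of the original answer): substituting $t=u^2$ turns the integrand $\ln(t)\,t^{-1/2}e^{-t}$ into $4\ln(u)\,e^{-u^2}$, which a plain midpoint rule handles well because the midpoints avoid the $u=0$ endpoint. This is only an illustrative verification, not a proof:

```python
import math

# Check that  I = ∫₀^∞ ln(t) t^(-1/2) e^(-t) dt  ≈  -√π (γ + 2 ln 2),
# after the substitution t = u², so I = ∫₀^∞ 4 ln(u) e^(-u²) du.
N, b = 200_000, 12.0           # midpoint rule on [0, b]; tail beyond 12 is ~e^(-144)
h = b / N
I = h * sum(4 * math.log((i + 0.5) * h) * math.exp(-(((i + 0.5) * h) ** 2))
            for i in range(N))

expected = -math.sqrt(math.pi) * (0.5772156649 + 2 * math.log(2))
print(I, expected)  # both ≈ -3.4802
```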
|
|calculus|definite-integrals|improper-integrals|closed-form|actuarial-science|
| 1
|
Theorem of Shelah about the existence of an inaccessible cardinal
|
There is a theorem of Shelah, stated in the following way: If all $\Sigma_3^1$ sets of reals are measurable, then $\aleph_1$ is an inaccessible cardinal in $L$ . In some textbooks (for example Schindler, 2014: Set Theory, Exploring Independence and Truth) the theorem is stated slightly differently. Instead of an inaccessible cardinal in $L$ , they say $\aleph_1^V$ is then inaccessible to the reals. Can someone explain to me why this is an equivalent way to state the theorem? Thank you in advance for your help!
|
The statements are not equivalent. More precisely, $\omega_1$ being inaccessible to the reals is a strictly stronger statement than $\omega_1$ being inaccessible in $L$ . If $\omega_1$ is not inaccessible in $L$ , let $\omega_1 = (\alpha^+)^L$ where $\alpha$ is a cardinal in $L$ . Then $\alpha$ is countable in $V$ and therefore there exists a real $x \subset \omega$ coding a well order of length $\alpha$ on $\omega$ , hence $\omega_1^{L[x]} = \omega_1$ . On the other hand, if $V = L[0^\sharp]$ then $\omega_1$ is inaccessible in $L$ .
|
|set-theory|descriptive-set-theory|forcing|
| 1
|
Dimension of certain linear systems on a cubic surface
|
Let $S$ be a smooth degree $3$ hypersurface in $\mathbb P^3$ . Let $C$ be a smooth hyperplane section of $S$ . Let $l$ be a line on $S$ . It can be deduced that $h^0(\mathcal O_S(2C-2l)) \leq 4$ . Can it happen that $h^0(\mathcal O_S(2C-2l)) = 4$ ? In general, is there a recipe to compute $h^0(\mathcal O_S(nC-2l))$ for any positive integer $n \geq 2$ by interpreting $S$ as $\mathbb P^2$ blown up at $6$ points in general linear position and translating this cohomology computation to the cohomology computation of certain sheaves on $\mathbb P^2$ or otherwise? Thanks in advance.
|
You can choose a contraction $\pi \colon S \to \mathbb{P}^2$ such that $$ l \sim 2h - e_2 - e_3 - e_4 - e_5 - e_6, $$ where $h$ is the pullback of a line under $\pi$ and $e_1,\dots,e_6$ are the exceptional divisors of $\pi$ . Then $C - l \sim h - e_1$ and $2C - 2l \sim 2(h - e_1)$ . Now, the linear system $|h - e_1|$ gives a contraction $S \to \mathbb{P}^1$ such that $\mathcal{O}_S(C - l)$ is the pullback of $\mathcal{O}(1)$ , hence $$ H^0(S, \mathcal{O}_S(2C - 2l)) \cong H^0(\mathbb{P}^1, \mathcal{O}(2)) $$ is 3-dimensional.
|
|algebraic-geometry|
| 0
|
What is the intuition behind the single-pass algorithm (Welford's method) for the corrected sum of squares?
|
The corrected sum of squares is the sum of squares of the deviations of a set of values about its mean. $$ S = \sum_{i=1}^k\space\space(x_i - \bar x)^2 $$ We can calculate the mean in a streaming fashion, as follows: $$ m_n = \frac{n-1}{n}m_{(n-1)} + \frac{1}{n}x_n $$ I understand the intuition behind this: the sum of the previous $n-1$ values was divided by $n-1$ , so by multiplying those values by $\frac{n-1}{n}$ , we down-weight the sum properly. However, we can also calculate the full corrected sum of squares as follows: $$ S_n = S_{n-1} + \frac{n-1}{n}\left( x_n - m_{n-1}\right)^2 $$ However, I don't have a good intuition for why this works. It looks like we use the previous corrected sum of squares value, and then add the square of the current value's deviation from the mean of all the previous values. But, this algorithm doesn't make sense to me, even if it was derived logically. These formulas are from "Note on a Method for Calculating Corrected Sums of Squares and Products" by
|
Let $k_n = \frac1n \sum\limits_{i=1}^{n} x_i^2$ , i.e. the mean of the sum of squares. From your mean updating, you have $$k_n = \frac{n-1}{n}k_{n-1} + \frac{1}{n}x_n^2.$$ Recalling expressions for variance: $$k_n = m_n^2 + \frac1n S_n \text{ and so } k_{n-1} = m_{n-1}^2 + \frac1{n-1} S_{n-1}.$$ Substituting gives $$m_n^2 + \frac1n S_n = \frac{n-1}{n}(m_{n-1}^2 + \frac1{n-1} S_{n-1}) + \frac{1}{n}x_n^2$$ and multiplying through by $n$ and rearranging gives $$S_n = S_{n-1}+ (x_n^2-m_{n-1}^2) - n(m_n^2 -m_{n-1}^2)$$ which is approaching what you are asking for and may be a little more intuitive: you need to add something for the additional term, and adjust for the effect of changing the mean on all the squares of differences. If you now use $m_n = \frac{n-1}{n}m_{n-1} + \frac{1}{n}x_n$ , you get $S_n$ in terms of $S_{n-1}$ , $x_n$ and $m_{n-1}$ : $$S_n = S_{n-1}+ \left(x_n^2-m_{n-1}^2\right) - n\left(\left(\frac{n-1}{n}m_{n-1} + \frac{1}{n}x_n\right)^2 -m_{n-1}^2\right)$$ and expanding and tidying up recovers $$S_n = S_{n-1} + \frac{n-1}{n}\left(x_n - m_{n-1}\right)^2.$$
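The two streaming updates quoted in the question can be sketched in a few lines (an illustrative implementation, not from the original paper), and checked against the direct two-pass computation:

```python
# Sketch of the streaming (Welford-style) updates from the question:
#   m_n = ((n-1)/n) m_{n-1} + (1/n) x_n
#   S_n = S_{n-1} + ((n-1)/n) (x_n - m_{n-1})^2
def running_mean_and_css(xs):
    m, S = 0.0, 0.0
    for n, x in enumerate(xs, start=1):
        d = x - m                 # deviation from the previous mean m_{n-1}
        S += (n - 1) / n * d * d  # corrected-sum-of-squares update
        m += d / n                # same as ((n-1) m_{n-1} + x) / n
    return m, S

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
m, S = running_mean_and_css(xs)
mean = sum(xs) / len(xs)
direct = sum((x - mean) ** 2 for x in xs)
print(m, S, direct)  # m = 5.0, S ≈ direct = 32.0
```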
|
|statistics|variance|average|means|sampling-theory|
| 1
|
Find the 'Pattern' subtotals of the number of ways of choosing k objects from n types of objects
|
I recently answered this StackOverflow question https://stackoverflow.com/questions/77854835/how-to-calculate-the-number-of-ways-to-choose-y-items-from-at-most-x-groups-effi/77855848#77855848 with a conjecture I constructed just from testing a bunch of values with a computer program I wrote to explore the problem. I have some confidence that the conjecture is true, because it seems unlikely that if it were not true, then some of the 'Pattern' subtotals would be wrong, probably then giving incorrect totals. I'm no expert in combinatorics so I thought I'd ask the experts. I feel like there could be a new theorem here waiting to be proven which could have practical usefulness. Suppose you have $n$ types of objects from which you want to choose $k$ objects. Call a 'Pattern' an increasing sequence of counts of the objects of different types. How many different ways are there of choosing the $m$ objects that give a target Pattern? For example, suppose I want to choose $5$ items from $3$ type
|
For $p\in \omega$ , let $p=\{0,\ldots, p-1\}$ . Fix a pattern $a=(a_0, \ldots, a_{n-1})$ . Fix a permutation $\sigma:n\to n$ and note that this yields a way of assigning a type to a count of that type. That is, we can choose $a_{\sigma(i)}$ items of type $i$ for $i\in n$ . However, some of these permutations will yield the same counts of each type. For example, if $a_0=a_1=p$ , then the identity permutation and the transposition $\tau$ which sends $0$ to $1$ and $1$ to $0$ both yield $p$ items of types $0$ and $1$ . So we want to group these permutations based on when they produce the same counts of each type and when they don't. In other words, for a fixed $\sigma$ , for how many $\tau$ will it be true that $(a_{\tau\circ \sigma(i)})_{i\in n}=(a_{\sigma(i)})_{i\in n}$ ? This is exactly where your $D_i$ come in. For $j\in k$ , let $$E_j=\{i\in n:a_i=j\}.$$ Let $|E_j|=b_j$ . Then there are $\prod_{j\in k}b_j!$ ways to pick a $\tau$ so that $(a_{\tau\circ \sigma(i)})_{i\in n}=(a_{\sig
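The counting argument above can be brute-force checked (an illustrative sketch, with a hypothetical pattern chosen here, not one from the post): the number of distinct assignments of counts to types is $n!/\prod_j b_j!$, which must match the number of distinct permutations of the pattern tuple:

```python
from itertools import permutations
from math import factorial, prod
from collections import Counter

# For a pattern a = (a_0, ..., a_{n-1}), the number of distinct ways to
# assign those counts to the n types is n! / prod_j b_j!, where b_j is
# how many entries of the pattern equal j.
def assignments_brute(pattern):
    return len(set(permutations(pattern)))

def assignments_formula(pattern):
    n = len(pattern)
    return factorial(n) // prod(factorial(b) for b in Counter(pattern).values())

pattern = (0, 1, 1, 3)  # hypothetical pattern: 4 types with counts 0, 1, 1, 3
print(assignments_brute(pattern), assignments_formula(pattern))  # both 12
```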
|
|combinatorics|
| 0
|
Find number of ways to get number
|
I have a number that has odd number of divisors, for example 36, and I have all divisors of this number 1, 2, 3, 4, 6, 9, 12, 18, 36. How many ways do we have to get 36 from divisors if we need to multiply k(from 2 to 1000 - for each k there is number of ways) any divisors? (repetition is allowed, order matters) For example, if k = 3: 1,6,6 - one way to get 36
|
Each integer can be written as: $$ p_1^{a_1}p_2^{a_2}...p_m^{a_m} $$ $$ eg. 36=2^23^2 $$ where $p_1, p_2$ are prime numbers. Each divisor is the product of these prime numbers, each raised to some power (from zero up to the maximum power allowed). Example, 9 is a divisor of 36, and it can be written: $$ 9=2^03^2 $$ So $p_1$ can be raised to the power from zero to $a_1$ , $p_2$ can be raised to the power from zero to $a_2$ etc. The total number of divisors will be: $$ (1+a_1)(1+a_2)...(1+a_m) $$ Since our number has an odd number of divisors, that means that each of $(1+a_1), (1+a_2)$ etc is odd, so each of $a_1, a_2...a_m$ is an even number. Each "way" we are looking for can be written as: $$ (p_1^{a_{11}}p_2^{a_{21}}...p_m^{a_{m1}}\ ,\ p_1^{a_{12}}p_2^{a_{22}}...p_m^{a_{m2}}\ ,...,\ p_1^{a_{1k}}p_2^{a_{2k}}...p_m^{a_{mk}}) $$ where $$ a_{11}+a_{12}+...+a_{1k}=a_1 $$ $$ a_{21}+a_{22}+...+a_{2k}=a_2 $$ $$ ... $$ $$ a_{m1}+a_{m2}+...+a_{mk}=a_m $$ The number of ways we can partition each $a_i$ into $k$ nonnegative integers (order matters) is $\binom{a_i+k-1}{k-1}$ by stars and bars, so the total number of ways is $\prod_{i=1}^{m}\binom{a_i+k-1}{k-1}$ .
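A brute-force check of the stars-and-bars count (an illustrative sketch, not part of the original answer) for the example $36 = 2^2\cdot 3^2$ with $k=3$:

```python
from itertools import product
from math import comb, prod

# For N = p1^a1 * ... * pm^am, the number of ordered k-tuples of divisors
# whose product is N is  prod_i C(a_i + k - 1, k - 1)
# (stars and bars applied independently to each prime exponent).
def ways_brute(N, k):
    divisors = [d for d in range(1, N + 1) if N % d == 0]
    return sum(1 for t in product(divisors, repeat=k) if prod(t) == N)

def ways_formula(exponents, k):
    return prod(comb(a + k - 1, k - 1) for a in exponents)

print(ways_brute(36, 3), ways_formula([2, 2], 3))  # 36 = 2^2 * 3^2 -> both 36
```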
|
|combinatorics|
| 1
|
How to prove the existence of this progression? Struggling with a BDMO problem.
|
This problem is from Bangladesh Mathematical Olympiad $2023$ , The problem statement is as follows- Prove that there is sequence of $2023$ distinct positive integers such that the sum of the squares of two consecutive integer is a perfect square itself I can come up with such a sequence but I can't prove it. Such as- $(2^k-1)2^0 , (2^{k-1}-1)2^k , (2^{k-2}-1)2^{k+(k+1)} , (2^{k-3}-1)2^{k+(k+1)+(k+2)}\cdots\cdots\cdots$ Where $k$ represents the number of elements in the sequence I want to create. Say, I want such a sequence with 5 numbers. Plugging in $k=5$ , $(2^5-1)2^0=31$ $(2^{5-1}-1)2^5=480$ $(2^{5-2}-1)2^9=3584$ $(2^{5-3}-1)2^{12}=12288$ $(2^{5-4}-1)2^{14}=16384$ We get the sequence $31,480,3584,12288,16384$ which obviously meet up the conditions. I also tried to find out the $n^{th}$ term of the sequence like this: $n^{th}$ term = ${2^{k-(n-1)}-1}[2^{(n-1)(k+1-\frac{n}{2})}]$ (based on the observed property of the sequence , some simplifications will lead to this) Then what is lef
|
A different approach All we need is the Egyptian triangle. Or the most common Pythagorean triple. Let us substitute $2023$ with $5$ for the sake of simplicity. Then the following sequence works: $$3^5\cdot 4,\;3^4\cdot 4^2,\; 3^3\cdot 4^3,\; 3^2\cdot 4^4,\; 3\cdot 4^5$$ Now sum of squares of two consecutive numbers (say, second and third one) is $$(3^3\cdot4^2)^2\cdot(3^2+4^2)= (3^3\cdot4^2)^2\cdot5^2= (3^3\cdot4^2\cdot5)^2.$$
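The $k=5$ construction above is easy to verify mechanically (an illustrative check, not part of the original answer): consecutive terms share a common factor, leaving the Pythagorean $3^2+4^2=5^2$ inside a perfect square.

```python
import math

# Check the 3-4-5 construction for k = 5: consecutive terms of
#   3^5*4, 3^4*4^2, 3^3*4^3, 3^2*4^4, 3*4^5
# have sums of squares that are perfect squares, since each pair is
# (c*3, c*4) for a common factor c, and (3c)^2 + (4c)^2 = (5c)^2.
seq = [3 ** (5 - i) * 4 ** (i + 1) for i in range(5)]

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

checks = [is_square(a * a + b * b) for a, b in zip(seq, seq[1:])]
print(seq, checks)  # all True
```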
|
|sequences-and-series|discrete-mathematics|contest-math|square-numbers|
| 0
|
How to prove the existence of this progression? Struggling with a BDMO problem.
|
This problem is from Bangladesh Mathematical Olympiad $2023$ , The problem statement is as follows- Prove that there is sequence of $2023$ distinct positive integers such that the sum of the squares of two consecutive integer is a perfect square itself I can come up with such a sequence but I can't prove it. Such as- $(2^k-1)2^0 , (2^{k-1}-1)2^k , (2^{k-2}-1)2^{k+(k+1)} , (2^{k-3}-1)2^{k+(k+1)+(k+2)}\cdots\cdots\cdots$ Where $k$ represents the number of elements in the sequence I want to create. Say, I want such a sequence with 5 numbers. Plugging in $k=5$ , $(2^5-1)2^0=31$ $(2^{5-1}-1)2^5=480$ $(2^{5-2}-1)2^9=3584$ $(2^{5-3}-1)2^{12}=12288$ $(2^{5-4}-1)2^{14}=16384$ We get the sequence $31,480,3584,12288,16384$ which obviously meet up the conditions. I also tried to find out the $n^{th}$ term of the sequence like this: $n^{th}$ term = ${2^{k-(n-1)}-1}[2^{(n-1)(k+1-\frac{n}{2})}]$ (based on the observed property of the sequence , some simplifications will lead to this) Then what is lef
|
HINT : Consider the geometric sequence $a,ar,ar^2,ar^3,\dots,ar^{2022}$ . Choose any Pythagorean triplet $(x,y,z)$ . Set $r=x/y$ and $a=y^{2022}$ . Do you see what the individual terms become? Does it satisfy your required property? It becomes clear that there are infinitely many sequences that satisfy your given property and they all may be of differing lengths.
|
|sequences-and-series|discrete-mathematics|contest-math|square-numbers|
| 0
|
Why is $\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$
|
$$\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$$ I don't think that this summation requires too much context as this is a Q&A site, but I was just wondering why the summation is evaluated so nicely. I don't have too much knowledge and creativity in evaluating sums, so is there a bunch of equations that will make this make sense instantly? If so, I haven't found it on the internet. $\frac{k}{k+1}$ looks a lot like the formula for the geometric series. I was wondering if $$\dfrac{1}{r}\left(\frac{1}{1-r}\right)=\frac{1}{-r^2+r}$$ had anything to do with this problem.
|
Hint $$\frac{1}{n^2+n}=\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}.$$
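As a quick sanity check of the telescoping identity behind this hint (not part of the original answer), exact rational arithmetic confirms the closed form for many values of $k$:

```python
from fractions import Fraction

# Exact check of  sum_{n=1}^{k} 1/(n^2 + n) = k/(k+1)  via Fractions,
# so no floating-point error can hide a mistake.
for k in range(1, 30):
    s = sum(Fraction(1, n * n + n) for n in range(1, k + 1))
    assert s == Fraction(k, k + 1)
print("verified for k = 1..29")
```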
|
|algebra-precalculus|summation|
| 0
|
$x \sin x + 2 \cos x \lt 2$ for $x \in (0, \frac{\pi}{2})$
|
I want to prove that $x \sin x + 2 \cos x \lt 2$ for $x \in (0, \frac{\pi}{2})$ . I have computed the derivative: $$f'(x) = x \cos x - \sin x$$ And we have $f'(x) = x \cos x - \sin x = x \left( \cos x - \frac{\sin x}{x} \right)$ . Since $\frac{\sin x}{x} \gt \cos x$ in that interval we have that $f'(x) \lt 0$ for $x \in (0, \frac{\pi}{2})$ , so $f$ is decreasing on that interval. We also have that $f(0) = 2$ . So I can conclude $f(x) \lt 2$ for $x \in (0, \frac{\pi}{2})$ . Is there a simpler way of proving this inequality? (I find the inequality $\frac{\sin x}{x} \gt \cos x$ a little bit obscure)
|
$f''(x)=(x\cos{x}-\sin{x})'=-x\sin{x}<0$ for $x\in (0,\frac{\pi}{2})$ , hence $f'$ is decreasing in $[0,\frac{\pi}{2}]$ . Hence $0<x\Rightarrow 0=f'(0)>f'(x)\Rightarrow f'(x)<0$ , so $f$ is decreasing and $f(x)<f(0)=2$ on $(0,\frac{\pi}{2})$ .
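A numeric scan supporting the inequality (an illustrative check, not a proof): sampling $f(x)=x\sin x+2\cos x$ across the open interval shows every value stays strictly below $2$, with the supremum approached as $x\to 0^+$ (near $0$, $f(x)\approx 2-x^4/12$).

```python
import math

# Sample f(x) = x sin x + 2 cos x on (0, π/2); the grid deliberately
# starts away from 0 so floating-point rounding cannot mask f(x) < 2.
samples = [i * (math.pi / 2) / 1000 for i in range(1, 1000)]
worst = max(x * math.sin(x) + 2 * math.cos(x) for x in samples)
print(worst)  # strictly below 2, attained near x = 0
```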
|
|real-analysis|
| 1
|
Possible issue with proposed solution to finding the Taylor Polynomial of $f(x)=\frac{e^x-1}{x}$ (a problem in Spivak's Calculus Chapter 20)
|
I wanted to confirm whether or not I am correctly identifying a mistake in the author-provided solution to the following problem (located within Spivak's Calculus 4th Ed.): Let $f$ be defined by: $\begin{cases} \frac{e^x-1}{x}\quad &x \neq 0\\ 1\quad & x=0\end{cases}\quad$ Find the Taylor polynomial of degree $n$ for $f$ at $0$ and compute $f^{(k)}(0)$ . The solution manual reads as follows: From $e^x=1+x+\frac{x^2}{2!}+\cdots+\frac{x^{n+1}}{(n+1)!}+R_{n+1,0,\exp}(x)$ , we get $f(x)=1+\frac{x}{2!}+\frac{x^2}{3!}+\cdots+\frac{x^n}{(n+1)!}+R_{n,0,f}(x)$ , so $\frac{f^{(k)}(0)}{k!}=\frac{1}{(k+1)!} \implies f^{(k)}(0)=\frac{1}{k+1}$ . My issue with this proof is that I think there is an omitted theorem from earlier in the chapter that is being erroneously (or perhaps 'prematurely') applied. The theorem I refer to reads as follows: Let $f$ be $n$ -times differentiable at $a$ , and suppose that $P$ is a polynomial in $(x-a)$ of degree $\leq n$ , which equals $f$ up to order $n$ at $a$ . The
|
Honestly, I am uncertain if I am interpreting Brian Moehring 's comments correctly, but here is how I think we can get around any logic-violations. It is certainly true that: $f(x)=1+\frac{x}{2!}+\frac{x^2}{3!}+\cdots+\frac{x^n}{(n+1)!}+\frac{R_{n+1,0,\exp}(x)}{x}$ Also, using Spivak's formulation, which (in the general case) implicitly invokes the Axiom of Choice, the remainder function $\displaystyle R_{n+1,0,\exp}(x)$ can be conceptualized as: $\displaystyle \frac{e^{\phi(x)}}{(n+2)!}x^{n+2}$ , where $\phi(x)=t$ and satisfies $0 \lt |\phi(x)| \lt x$ . As such, we note that for any $\displaystyle m \in \{1,2,\cdots,n+1\}: \lim_{x \to 0} \frac{e^{\phi(x)}}{(n+2)!}\frac{x^{n+2}}{x^m}=\lim_{x \to 0}\frac{e^{\phi(x)}}{(n+2)!}\times \lim_{x \to 0} \frac{x^{n+2}}{x^m}=\frac{1}{(n+2)!}\times 0=0.$ It is important to see that at this point in the proof, we do not know if $1+\frac{x}{2!}+\frac{x^2}{3!}+\cdots+\frac{x^n}{(n+1)!}$ is the $n$ -th degree Taylor polynomial of $f$ and if, therefore
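Whatever route one takes to justify it, the end result can be sanity-checked numerically (an illustrative check, not part of the original answer): the partial sums $\sum_k x^k/(k+1)!$ match $(e^x-1)/x$, and the coefficient $1/(k+1)!$ yields $f^{(k)}(0)=k!/(k+1)!=1/(k+1)$.

```python
import math

# (i) Partial sums of sum_k x^k/(k+1)! agree with (e^x - 1)/x near 0.
# (ii) The k-th coefficient 1/(k+1)! gives f^(k)(0) = k!/(k+1)! = 1/(k+1).
x = 0.3
partial = sum(x ** k / math.factorial(k + 1) for k in range(20))
exact = (math.exp(x) - 1) / x
derivs = [math.factorial(k) / math.factorial(k + 1) for k in range(8)]
print(abs(partial - exact), derivs)  # tiny error; derivs[k] == 1/(k+1)
```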
|
|calculus|proof-explanation|taylor-expansion|
| 1
|
Compute the limit of the Log-Sum-Exp function
|
I am trying to prove that the Log-Sum-Exp function converges to the maximum function, i.e. $$ \lim_{\tau\rightarrow0}\tau\log\left(\frac{1}{N}\sum_{i=1}^N\exp\left(\frac{x_{i}}{\tau}\right)\right) = \max_{i}(x_1, \dots, x_N).$$ I saw that a possible direction is to solve the limit as $\rho=1/\tau\rightarrow \infty$ and apply de l'Hôpital's rule. What I get is the following: $$\lim_{\rho\rightarrow\infty} \frac{\frac{1}{N}\sum_{i=1}^Nx_{i}\exp\left(\rho x_{i}\right)}{\frac{1}{N}\sum_{i=1}^N\exp\left(\rho x_{i}\right)}.$$ But at this point I got stuck and didn't know how to proceed. The final result should be the maximum among $\{x_{i}\}$ . Does anyone have some suggestions?
|
You don't need L'Hospital in the present case. Let's set $x_1 = \max(x_1,\ldots,x_N)$ without loss of generality $-$ because you can always re-index the list of $x_i$ . The factor $e^{x_1/\tau}$ is thus the "heaviest weight" in the sum, which can be factorized as follows : $$ \sum_{i=1}^N e^{x_i/\tau} = e^{x_1/\tau} \left(1 + \sum_{i=2}^N e^{(x_i-x_1)/\tau}\right), $$ with $x_i - x_1 \le 0$ , hence $$ \tau\ln\left(\frac{1}{N}\sum_{i=1}^N e^{x_i/\tau}\right) = x_1 + \tau\ln\left(1 + \sum_{i=2}^N e^{-|x_1-x_i|/\tau}\right) - \tau\ln N, $$ which converges to $x_1$ when $\tau \to 0$ , since $e^{-|x_1-x_i|/\tau} \to 0$ and $\tau\ln N \to 0$ . QED
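The factorization above doubles as a numerically stable way to evaluate the expression, so the convergence can be watched directly (an illustrative sketch with made-up values of $x_i$):

```python
import math

# tau * log((1/N) sum exp(x_i/tau)) -> max_i x_i as tau -> 0, computed in
# the shifted form from the answer to avoid overflow of exp(x_i/tau):
#   x_max + tau * log((1/N) sum exp((x_i - x_max)/tau))
def log_sum_exp_mean(xs, tau):
    m = max(xs)
    return m + tau * math.log(sum(math.exp((x - m) / tau) for x in xs) / len(xs))

xs = [0.5, -1.2, 3.0, 2.9]
for tau in (1.0, 0.1, 0.01, 0.001):
    print(tau, log_sum_exp_mean(xs, tau))  # approaches max(xs) = 3.0
```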
|
|limits|approximation-theory|
| 0
|
Find recurrence relation for sum of absolute deviations in a sequence
|
Given an ordered sequence $S_n = s_1, s_2, ..., s_n$ we define its cost as the sum of absolute deviations from any median: $$ cost(S_n) = \sum_{i=1}^{n} \left| s_i - median(S_n) \right|\text{, where } median(S_n) = s_{\left\lceil \frac{n}{2}\right \rceil} \text{ or } s_{\left\lceil \frac{n+1}{2}\right \rceil}. $$ By adding or removing an arbitrary value from $S_n$ , how can the new cost be expressed based on $cost(S_n)$ , the previous median and the arbitrary value ? I've tried to deduce a recurrence relation by using the cost definition from above, and splitting into cases based on the relation between $x$ and the median, however my results are incorrect: adding a new value $x$ , resulting in the sequence $S_{n+1}$ : $$ cost(S_{n+1}) = cost(S_n) + \left| x - median(S_{n+1}) \right| + median(S_n) - median(S_{n+1})$$ removing an existing value $x$ , resulting in the sequence $S_{n-1}$ : $$ cost(S_{n-1}) = cost(S_n) - \left| x - median(S_{n}) \right| + median(S_n) - median(S_{n-1})$$ Is
|
Let $S_n = (s_1,\ldots,s_n)$ with $s_1 \le \ldots \le s_n$ . First, assume that you add a new value $x$ . If $n$ is even, set $n=2p$ . Then the cost of $S_n$ equals $\displaystyle \sum_{i=1}^{2p}|s_i-s|$ for any $s \in [s_p,s_{p+1}]$ . Adding the new value will give a unique median, namely $s_p$ , $x$ , or $s_{p+1}$ according as $x \le s_p$ , $x \in [s_p,s_{p+1}]$ , or $x \ge s_{p+1}$ . As a result, the increment of the cost is the distance of $x$ to $[s_p,s_{p+1}]$ . If $n$ is odd, set $n=2p-1$ . Then the unique median of $S_n$ is $s_p$ , and $s_p$ will still be a median once the value $x$ is added. As a result, the increment of the cost is $|x-s_p|$ . Now, assume that you remove a value $x$ from $S_n = (s_1,\ldots,s_n)$ . If $n=2p$ , the increment of the cost is $-|x-s|$ , where $s$ is the unique median after removing $x$ , namely $s=s_{p+1}$ if $x \le s_p$ and $s=s_p$ if $x \ge s_{p+1}$ . In both cases, the increment is minus the maximum of the distances from $x$ to $s_p$ and $s_{p+1}$ .
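The addition rules above can be stress-tested by brute force (an illustrative harness, not part of the original answer; it uses 0-based indexing, so the 1-based medians $s_p, s_{p+1}$ become indices `n//2 - 1` and `n//2`):

```python
import random

# Brute-force check: adding x to a sorted list S of even length n = 2p
# increases the cost by dist(x, [s_p, s_{p+1}]); for odd n = 2p - 1 the
# increase is |x - s_p|.  cost() uses a lower median, which is valid since
# for even length the sum of deviations is constant over the median interval.
def cost(sorted_vals):
    m = sorted_vals[(len(sorted_vals) - 1) // 2]  # a lower median
    return sum(abs(v - m) for v in sorted_vals)

random.seed(0)
for _ in range(500):
    n = random.randint(2, 9)
    S = sorted(random.randint(-20, 20) for _ in range(n))
    x = random.randint(-25, 25)
    before, after = cost(S), cost(sorted(S + [x]))
    if n % 2 == 0:
        lo, hi = S[n // 2 - 1], S[n // 2]
        inc = max(lo - x, 0, x - hi)   # distance from x to [lo, hi]
    else:
        inc = abs(x - S[n // 2])       # distance to the unique median
    assert after - before == inc
print("increment rule verified on 500 random cases")
```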
|
|sequences-and-series|recurrence-relations|absolute-value|median|
| 1
|
Why is $\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$
|
$$\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$$ I don't think that this summation requires too much context as this is a Q&A site, but I was just wondering why the summation is evaluated so nicely. I don't have too much knowledge and creativity in evaluating sums, so is there a bunch of equations that will make this make sense instantly? If so, I haven't found it on the internet. $\frac{k}{k+1}$ looks a lot like the formula for the geometric series. I was wondering if $$\dfrac{1}{r}\left(\frac{1}{1-r}\right)=\frac{1}{-r^2+r}$$ had anything to do with this problem.
|
Thank you to Surb for that hint. $$\begin{align*} \sum_{n=1}^{k}\frac{1}{n^2+n}&=\sum_{n=1}^{k}\frac{1}{n}-\frac{1}{n+1}\\ &=\sum_{n=1}^{k}\frac{1}{n}-\sum_{n=1}^{k}\frac{1}{n+1} \\ &=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{k-1}+\frac{1}{k}\\\\ &\phantom{\frac{1}{1}+}-\Big(\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{k-1}+\frac{1}{k}+\frac{1}{k+1}\Big)\end{align*}$$ Everything cancels out except for $\frac{1}{1}-\frac{1}{k+1}=\frac{k}{k+1}$ .
|
|algebra-precalculus|summation|
| 1
|
Why is $\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$
|
$$\sum_{n=1}^{k}\frac{1}{n^2+n}=\frac{k}{k+1}$$ I don't think that this summation requires too much context as this is a Q&A site, but I was just wondering why the summation is evaluated so nicely. I don't have too much knowledge and creativity in evaluating sums, so is there a bunch of equations that will make this make sense instantly? If so, I haven't found it on the internet. $\frac{k}{k+1}$ looks a lot like the formula for the geometric series. I was wondering if $$\dfrac{1}{r}\left(\frac{1}{1-r}\right)=\frac{1}{-r^2+r}$$ had anything to do with this problem.
|
The problem of this telescopic series was posed in Finnish Matriculation Exam, 2015 Spring, A-level mathematics, Problem set 15*, worth 9 points instead of the ordinary 6. The problem set, question b) itself gives hints to the intuition behind it: b.) Find $a, b \in \mathbb{R}$ s.t. $$ \frac{1}{n(n+1)} = \frac{a}{n}+\frac{b}{n+1}$$ The solution starts by expanding both fractions to the common denominator $n(n+1)$ : $$ \frac{1}{n(n+1)} = \frac{a(n+1)}{n(n+1)}+ \frac{bn}{n(n+1)}$$ $$ \frac{1}{n(n+1)} = \frac{(a+b)n + a}{n(n+1)} $$ We have a linear system: we need to satisfy both $a+b = 0$ and $a=1$ , which implies $b=-1$ , the hint Surb gave.
|
|algebra-precalculus|summation|
| 0
|
Prove that the distance $d$ from point $A$ to plane $(DNC)$ verifies the relation $d< \frac{AB+3AD}{6\sqrt{2}}$
|
the question In the triangle $ABC$ we consider $(AM$ the bisector of the angle $\angle A$ so that $MB=3MC, M\in (BC)$ and $N\in (AB)$ so that $BN=2NA$ . On the plane of the triangle $ABC$ , the perpendicular $DA$ is erected. a) Prove that the distance $d$ from point $A$ to plane $(DNC)$ verifies the relation $d<\frac{AB+3AD}{6\sqrt{2}}$ . b) If $\angle A= 60^\circ, AC=a, AD=\frac{3a\sqrt{7}}{14}$ , determine the angle formed by the planes $\angle((ABC),(DMN))$ . The drawing The idea As you can see I noted $BC=4x => BM=3x, MC=x$ and $BA=3y=> BN=2y, NA=y$ . Using the theorem of the bisector: $\frac{AB}{AC}=\frac{BM}{MC}=3=> AB=3AC=> AC=y$ . $AN=AC=y=>$ triangle $ANC$ is isosceles and $AM$ is the bisector of angle $A => AM \perp NC$ . Let the intersection of $AM,NC$ be point $O$ . $AO \perp NC$ and $AD\perp (ABC)$ , so by the theorem of the $3$ perpendiculars $DO\perp NC$ . Writing the volume of the tetrahedron $DANC$ in $2$ ways we get that $d=\frac{AO*AD}{DO}$ . We can also show that $DN=DC$ . I don't know what to do forward...maybe the in
|
Let us call $z$ the length of $AD$ and $h$ the length of $AO$ . For part (a), we know that the distance from $A$ to the plane is the altitude over the hypothenuse in $\triangle DAO$ , right angled in $A$ . That is, \begin{align} d=\frac{zh}{\sqrt{z^2+h^2}}&=\left(z^{-2}+h^{-2}\right)^{-1/2}\\ &= \frac{1}{\sqrt{2}}\left(\frac{z^{-2}+h^{-2}}{2}\right)^{-1/2}\\ &= \frac{1}{\sqrt{2}} M_{-2}(z,h)~~, \end{align} where $M_{-2}$ is the power mean of $z$ and $h$ of order -2. A fairly general result is that the power means are monotonous in the power parameter, thus: $$M_{-2}(z,h)\le M_{1}(z,h) =\frac{z+h}{2}$$ Because $h$ is the altitude of isosceles $\triangle NAC$ , we have that $h<y$ . Therefore, we conclude that $$ d<\frac{z+y}{2\sqrt{2}}=\frac{AB+3AD}{6\sqrt{2}}~~. $$ The result follows from noting that $AB=3y$ . For part (b), I'll keep using $y$ instead of $a$ for $AC$ . I guess there are several ways, I found one not particularly inspired. You can start with the identity for the length of the bisector $$AM^2=AB.AC-BM.MC=3(y^2-x^2)~~.$$ Then, fr
|
|geometry|triangles|
| 1
|
Determine the angles of quadrilateral that make it concyclic
|
Inspired by a recent post , consider the following problem. You are given the four sides lengths of a quadrilateral $ABCD$ . Let these sides be $a = AB , b = BC, c = CD, d = DA $ . I want to determine the angles $ \phi = \angle DAB $ and $\theta = \angle ABC $ , that will make this quadrilateral concyclic (i.e. having its vertices on a single circle). That is the task. As a hint, from this Wikipedia page we know that such a quadrilateral maximizes the area $[ABCD]$ of the quadrilateral.
|
COMMENT.-It is easy if we take into account that the angle opposite to the angle at a vertex must be its supplement. In fact $$\overline{DB}^2=a^2+d^2-2ad\cos(\phi)=b^2+c^2-2bc\cos(\pi-\phi)=b^2+c^2+2bc\cos(\phi)$$ so $$\cos(\phi)=\frac{a^2+d^2-(b^2+c^2)}{2(ad+bc)}$$ Similarly with the other angle.
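The formula can be checked against a quadrilateral that is cyclic by construction (an illustrative sketch; the vertex angles below are arbitrary choices, not from the post):

```python
import math

# Check  cos(phi) = (a^2 + d^2 - (b^2 + c^2)) / (2(ad + bc))  against a
# quadrilateral inscribed in the unit circle: place A, B, C, D at
# increasing angles and compare the formula with the measured angle at A.
ang = [0.3, 1.4, 2.9, 4.6]          # hypothetical vertex angles for A, B, C, D
A, B, C, D = [(math.cos(t), math.sin(t)) for t in ang]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c, d = dist(A, B), dist(B, C), dist(C, D), dist(D, A)
cos_formula = (a * a + d * d - (b * b + c * c)) / (2 * (a * d + b * c))

# Measured interior angle phi at A, between rays AD and AB.
ux, uy = D[0] - A[0], D[1] - A[1]
vx, vy = B[0] - A[0], B[1] - A[1]
cos_measured = (ux * vx + uy * vy) / (d * a)
print(cos_formula, cos_measured)  # agree to floating-point accuracy
```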
|
|geometry|optimization|analytic-geometry|quadrilateral|
| 1
|
Function $y=e^{1/\ln(x)}$ has a singularity at $x=1$ which breaks it into two continuous parts. What is the minimum distance between these parts?
|
The function $y = e^{1/\ln(x)}$ has a singularity at $x = 1$ that partitions it into two separate continuous sections. Notice that it has reflective symmetry about a $y=x$ axis, because it can be re-written as $\ln(x)\ln(y) = 1$ . When plotted, the 'left curve' (i.e. where $x < 1$ ) has finite length and is continuous between the points $(0,1)$ and $(1,0).$ The 'right curve' (i.e. where $x > 1$ ) is of infinite length. It has asymptotes at $x = 1$ , and $y=1$ . I am trying to find the minimum distance between the left and right curves, so slopes must be calculated. The derivative of the function is $$-\frac{y}{x\ln^2(x)}$$ The left curve has a centre point at $(\frac{1}{e}, \frac{1}{e})$ , and the tangent at this point is $-1$ . The normal there has slope $1$ , and intersects the right curve at $(e, e)$ . The tangent at that intersection is also $-1$ , so the normal there has a slope of $1$ . The straight line between $\left(\frac{1}{e}, \frac{1}{e}\right)$ and $(e, e)$ therefore seems to realise the minimum distance, with length $\sqrt{2}\left(e-\frac{1}{e}\right)$ . Is that correct?
|
In fact the distance between the curves is about $4\%$ smaller than $\sqrt{2} \left(e - \frac1e\right)$ , small enough that it is not apparent (to me at least) from a plot of the curve. We can parametrize the right curve by $${\mathbf r}(s) = \left(\exp s, \exp \frac1s\right) , \quad s > 0.$$ The squared distance $D(s)^2$ between ${\mathbf r}(s)$ and the limit point $(1, 0)$ of the left curve is $$D(s)^2 = (\exp s - 1)^2 + \left(\exp \frac1s\right)^2 .$$ Differentiating gives that where $D(s)^2$ is minimized, $$\frac{d}{ds}(D(s)^2) = 2 \left(\exp (2 s) - \exp s - \frac{1}{s^2} \exp\frac2s\right) = 0 .$$ This equation has a unique (real) solution, $s_0 \approx 1.07227$ (there doesn't appear to be a nice closed form for $s_0$ ), and the distance $D(s_0)$ from $(1, 0)$ to ${\bf r}(s_0) \approx {\bf r}(1.07227) \approx (2.92201, 2.54111)$ is $$D(s_0) \approx D(1.07227) \approx 3.18612 < 3.32399 \approx \sqrt{2} \left(e - \frac1e\right) .$$ More generally, given two curves $A, B$ (completely) parametrized by ${\bf a}(s)$ , $s \in I$ , and ${\bf b}(t)$
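The stated minimum is easy to reproduce with a grid search (an illustrative check, not part of the original answer):

```python
import math

# Minimize  D(s) = sqrt((e^s - 1)^2 + e^(2/s)),  the distance from
# r(s) = (e^s, e^(1/s)) on the right curve to the limit point (1, 0)
# of the left curve, by scanning s over [0.5, 2.5].
def D(s):
    return math.hypot(math.exp(s) - 1, math.exp(1 / s))

s_best = min((0.5 + i / 100000 for i in range(200001)), key=D)
bound = math.sqrt(2) * (math.e - 1 / math.e)
print(s_best, D(s_best), bound)  # ≈ 1.07227, ≈ 3.18612 < 3.32399
```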
|
|real-analysis|functions|
| 0
|