Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Center of Mass and Concurrency of Lines
Given points $v_1$ , $v_2$ ,..., $v_p \in \mathbb{R^n}$ and corresponding "masses" $m_1, m_2...,m_p>0,$ the center of mass can be defined as: $$\frac{m_1v_1+m_2v_2+...+m_pv_p}{m_1+m_2+...+m_p}$$ Prove that if $p \geq 3$ , then the $p$ lines, say $L_1, L_2,..., L_p$ [where $L_i$ joins $v_i$ ( $1 \leq i \leq p)$ to the center of mass of the other $p-1$ points], all meet at the center of mass.
Hint: By symmetry, you only have to show that the line joining $v_p$ to the centre of mass of the points $v_1,v_2,\dots,v_{p-1}$ passes through the centre of mass of all the points. The line joining $v_p$ to the centre of mass of the points $v_1,v_2,\dots,v_{p-1}$ has parametric equation $$ v=t\left(\frac{\sum\limits_{i=1}^{p-1}m_iv_i}{\sum\limits_{i=1}^{p-1}m_i}\right)+(1-t)v_p $$ If $t=\frac{\sum\limits_{i=1}^{p-1}m_i}{\sum\limits_{i=1}^{p}m_i}$ , what is the corresponding point $v$ on this line?
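To spell out the hint's final step (a worked substitution; note that $1-t=\frac{m_p}{\sum_{i=1}^p m_i}$):

```latex
v=\frac{\sum_{i=1}^{p-1}m_i}{\sum_{i=1}^{p}m_i}\cdot
  \frac{\sum_{i=1}^{p-1}m_iv_i}{\sum_{i=1}^{p-1}m_i}
  +\frac{m_p}{\sum_{i=1}^{p}m_i}\,v_p
 =\frac{m_1v_1+m_2v_2+\dots+m_pv_p}{m_1+m_2+\dots+m_p},
```

which is exactly the centre of mass of all $p$ points, so every line $L_i$ passes through it.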
|linear-algebra|geometry|vectors|
1
How many ways to put 60 balls into 10 boxes such that each box has at least one ball and all boxes contain a different number of balls?
Put 60 identical balls into 10 identical boxes such that each box has at least one ball while all boxes contain a different number of balls. How many ways can you do so?
Write the $10$ counts per box in increasing order and look at the consecutive differences, which must be at least $1$ ; the counts are then the prefix sums of these differences, starting from the smallest count. Let us start with the differences $1, 1, 1, 1, 1, 1, 1, 1, 1\to1,2,3,4,5,6,7,8,9,10\to55$ . So we have a budget of $5$ to fill. Now if we increment the $k^{\text{th}}$ difference, the total increases by $10-k$ (and incrementing the smallest count increases the total by $10$ ). Hence, the number of possibilities is the number of partitions of the number $5$ . There are $\color{green}7$ such partitions. E.g. $5=2+2+1\to1, 1, 1, 1, 1, 1, 1, 3, 2\to1,2,3, 4, 5, 6, 7, 8, 11,13\to60.$ More generally, for a total of $n$ , take the number of partitions of $n-55$ , excluding those with a part that exceeds $10$ (a part equal to $10$ corresponds to adding one ball to every box). E.g. for the total $70$ , there are $176$ partitions of $15$ , but $12$ of them have a part above $10$ , leaving $164$ .
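A brute-force cross-check of these counts (a quick sketch; `distinct_partitions` is a helper name introduced here, not from the question):

```python
def distinct_partitions(total, boxes):
    """Count ways to write `total` as a sum of `boxes` distinct positive
    integers (order irrelevant, since balls and boxes are identical)."""
    def count(remaining, parts, minimum):
        if parts == 0:
            return 1 if remaining == 0 else 0
        # smallest achievable sum with `parts` distinct values >= minimum
        if remaining < parts * minimum + parts * (parts - 1) // 2:
            return 0
        return sum(count(remaining - first, parts - 1, first + 1)
                   for first in range(minimum, remaining + 1))
    return count(total, boxes, 1)

print(distinct_partitions(60, 10))  # 7
print(distinct_partitions(70, 10))  # 164
```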
|combinatorics|
0
Spectral family/resolution for $A \otimes 1+ 1 \otimes B$
Let $A, B$ be unbounded self-adjoint operators on Hilbert spaces $\mathcal{H_1}, \mathcal{H_2}$ with spectra $\sigma(A), \sigma(B)$ . We know that $\sigma(A \otimes 1 + 1 \otimes B) = \overline{\sigma(A) +\sigma(B)}$ and $A \otimes 1 + 1 \otimes B$ defines an unbounded self-adjoint operator on $\mathcal{H_1} \otimes \mathcal{H_2}$ (after taking its closure). By the spectral theorem: $A = \int_{\sigma(A)} \lambda \,dE_A(\lambda)$ ; $B = \int_{\sigma(B)} \lambda\, dE_B(\lambda)$ ; $A \otimes 1 + 1 \otimes B = \int_{\overline{\sigma(A) +\sigma(B)}} \lambda\, dE_{A \otimes 1 + 1 \otimes B}(\lambda)$ . My question is: do we have $E_{A \otimes 1 + 1 \otimes B} = E_A \otimes E_B$ ? If so, how does one show this, and do you have a reference that I could cite? I guess one would have to state that there exists a common spectral family, then taking the closure of $A \otimes 1 + 1 \otimes B$ would yield integrating over the closure of the spectrum, but it seems touchy, so I'm looking for a reference.
While the approach from the comments using multiplication operators is probably the standard way to go and also easier, I like this approach using the Fourier inversion formula and unitary groups. Since $(e^{itA}\otimes e^{itB})$ is a strongly continuous group of unitaries, there exists a self-adjoint operator $H$ such that $e^{itA}\otimes e^{itB}=e^{itH}$ by Stone's theorem. If $\xi\in D(A)$ and $\eta\in D(B)$ , then \begin{align*} \frac 1 t(e^{itA}\otimes e^{itB}(\xi\otimes \eta)-\xi\otimes\eta)&=\frac 1 t(e^{itA}\xi-\xi)\otimes e^{itB}\eta+\frac 1 t\xi\otimes (e^{itB}\eta-\eta)\\ &\to A\xi\otimes \eta+\xi\otimes B\eta. \end{align*} Thus $A\odot 1+1\odot B\subset H$ , which implies $A\otimes 1+1\otimes B=H$ (you cannot have proper inclusions of self-adjoint operators). Let $E$ denote the spectral measure of $H$ and $\mu_\xi=\langle\xi,E(\cdot)\xi\rangle$ . If $\phi\in\mathcal S(\mathbb R)$ and $\xi\in \mathcal H_1\otimes\mathcal H_2$ , then \begin{align*} \int \phi(t)\,d\mu_\xi(t)&=\int\hat\phi(t)\,d\check\mu_\xi(t). \end{align*}
|functional-analysis|tensor-products|mathematical-physics|spectral-theory|unbounded-operators|
0
Approach when there is no y(x) in ODE
I was wondering what techniques we can use when there is no $y(x)$ in the ODE. Specifically, we've covered singular and regular perturbations for boundary layers, but there's no example with a missing $y(x)$ . For example: Find the lowest-order uniform approximation to the boundary-value problem \begin{align} \begin{cases} \epsilon y''(x) + x y'(x) = x \cos(x),\\ y(1) = 2,\\ y(-1) = 2. \end{cases} \end{align}
For the outer solution you get $$ y_o(x)=C+\sin x $$ This cannot satisfy both boundary conditions simultaneously, so you get boundary or fast layers. These are possible at both the boundaries $x=\pm 1$ or at the single exceptional inner point $x=0$ . At $x=-1$ one would set $x=-1+\delta X$ , $Y(X)=y(x)$ to get $\delta=\epsilon$ and $$ Y''(X)-Y'(X)=0, ~~ Y(0)=2, Y(\infty)\text{ bounded } $$ which only has the trivial constant solution. The situation at $x=+1$ is a mirror image of this. At $x=\delta X$ the balance and equation reduce to $\delta^2=\epsilon$ , $$ Y''(X)+XY'(X)=0\implies Y(X)=C+DS(X) $$ with $S$ a sigmoid function that can be parametrized to look similar to the hyperbolic tangent, $S'(X)=k\exp(-X^2/2)$ , $S(\pm\infty)=\pm 1$ . In total, after applying the constant balancing formalism of your choice, the combined first-order approximation should look like $$ y(x)=2+\sin(x)-\sin(1)S(\epsilon^{-1/2}x). $$
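A quick numerical sanity check of this combined approximation (a sketch; it assumes the concrete parametrization $S(X)=\operatorname{erf}(X/\sqrt 2)$, which satisfies $S'(X)=\sqrt{2/\pi}\,e^{-X^2/2}$ and $S(\pm\infty)=\pm1$):

```python
import math

def S(X):
    # one concrete sigmoid with S'(X) proportional to exp(-X^2/2), S(+-inf) = +-1
    return math.erf(X / math.sqrt(2))

def y(x, eps):
    # combined lowest-order uniform approximation from above
    return 2 + math.sin(x) - math.sin(1) * S(x / math.sqrt(eps))

eps = 1e-4
print(y(1.0, eps), y(-1.0, eps))  # both boundary values are ~2, as required
```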
|ordinary-differential-equations|asymptotics|
1
Solution verification: Determine the smallest integer $k$ for which the inequality $x^4+4x^2+k>6x$ is true for every real number $x$.
The question: Determine the smallest integer $k$ for which the inequality $x^4+4x^2+k>6x$ is true for every real number $x$ . My idea: $x^4+4x^2+k>6x \iff x^4+4x^2-6x+4>4-k$ , and $x^4+4x^2-6x+4=(x^2-2x+1)(x^2+2x+4)+3x^2=(x-1)^2((x+1)^2+3)+3x^2 \geq 3$ This means that $4-k<3 \implies -k<-1 \implies k>1$ , so the smallest value it can take is $2$ . I'm not sure if this inequality $(x-1)^2((x+1)^2+3)+3x^2 \geq 3$ is right. I don't know how to show it; I concluded it by brute force. I hope one of you can help me and tell me the correct way to solve it! Thank you.
Consider the polynomial $P(x)=x^4+4x^2-6x$ ; we have $P'(x)=4x^3+8x-6$ and $P''(x)=12x^2+8$ . As $P''$ is always strictly positive, we conclude that $P$ is convex; in particular, its graph lies above any of its tangent lines. One computes $P'(1/2)=-3/2$ , $P(1/2)=1/16-2$ , $P'(1)=6$ and $P(1)=-1$ . Hence $P(x)\geq -3/2(x-1/2)+1/16-2$ . In particular we have that $\forall x\in(-\infty,1], P(x)\geq -3/2(1-1/2)+1/16-2=-(11/16+2)>-3$ . Similarly, $\forall x\in[1,\infty),P(x)\geq 6(x-1)-1\geq -1>-3$ . Therefore we have that $\forall x\in\mathbb{R},P(x)>-3$ , so $k=3$ works. And $k=2$ does not work, since e.g. $P(0.65)\approx-2.031<-2$ . Hence the smallest such integer is $k=3$ .
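A quick numerical cross-check of these bounds (a sketch, not part of the proof):

```python
def P(x):
    return x**4 + 4*x**2 - 6*x

# sample P on a fine grid; outside [-5, 5] the quartic term dominates anyway
values = [P(i / 1000) for i in range(-5000, 5001)]
print(min(values) > -3)   # True: k = 3 makes x^4 + 4x^2 + k > 6x everywhere
print(min(values) < -2)   # True: k = 2 fails near the minimiser x ~ 0.64
```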
|solution-verification|
0
Why is the second derivative operator self-adjoint?
I read that a second derivative operator is self-adjoint, namely $\langle L(u),v\rangle=\langle u,L(v)\rangle$ where $L$ is the second derivative operator. But if I define $$\langle u,v\rangle=\int_0^1 u(x)v(x)\text{d}x,$$ I just don't see how it works if I take, say, $u(x)=x$ , $v(x)=x^2$ . I will have in this case $$\int_0^1 u''(x)v(x)\text{d}x \ne \int_0^1 u(x)v''(x)\text{d}x$$ I hope someone can clear my confusion. Thanks!
The property of being self-adjoint depends on the domain of the operator. When you take $X=L^2[0,1]$ , the second derivative is self-adjoint if, for instance, you choose $$D(\frac{d^2}{dx^2}):=H^2_0[0,1].$$ The functions you chose ( $u(x)=x$ and $v(x)=x^2$ ) do not belong to this domain; that's why the condition $\langle Au,v\rangle_2=\langle u,Av\rangle_2$ is not satisfied. In the proof given by @Vajra the domain is assumed to be $H^2(\mathbb{R})$ and it is known that any function and its derivative in this Sobolev space vanish when $x\to\pm\infty$ . So, the domain is essential to prove that an operator is self-adjoint; in particular, the boundary conditions are.
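A small numerical illustration of this (a sketch; the boundary-vanishing pair $u=x^2(1-x)^2$, $v=x^3(1-x)^3$ is just one convenient choice mimicking the behaviour of $H^2_0[0,1]$ functions):

```python
def simpson(f, a, b, n=2000):
    # composite Simpson quadrature, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# u(x) = x, v(x) = x^2: u'' = 0, v'' = 2, and the boundary terms do not vanish
lhs1 = simpson(lambda x: 0.0 * x, 0, 1)   # integral of u'' v  = 0
rhs1 = simpson(lambda x: 2.0 * x, 0, 1)   # integral of u v'' = 1
print(lhs1, rhs1)                          # clearly not equal

# u = x^2 (1-x)^2 and v = x^3 (1-x)^3 vanish at 0 and 1 along with u', v'
u   = lambda x: x**2 * (1 - x)**2
upp = lambda x: 2 - 12*x + 12*x**2                    # u''
v   = lambda x: x**3 * (1 - x)**3
vpp = lambda x: 6*x - 36*x**2 + 60*x**3 - 30*x**4     # v''
diff = abs(simpson(lambda x: upp(x) * v(x), 0, 1)
           - simpson(lambda x: u(x) * vpp(x), 0, 1))
print(diff)  # essentially zero: the operator is symmetric on such functions
```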
|self-adjoint-operators|
0
Show that a group is commutative if and only if another operation exists with certain properties
Prove that a group $(G, \cdot )$ is commutative if and only if we can define on $G$ another operation " $\circ$ " such that: i) $x \circ x = e$ , for every $x \in G$ ii) $x \circ (y \circ z)=(x \circ y) \cdot z $ for every $x,y,z \in G$ Attempt [from the comments]: If we take $x=y=z$ we get $x \circ (x \circ x)=(x \circ x) \cdot x$ so $x \circ e = e \cdot x = x$ . Taking $x=e$ , $y=z=x$ we have $e \circ (x \circ x)=(e \circ x) \cdot x$ , that is $e=(e \circ x) \cdot x$ so $(e \circ x)=x^{-1}$ . And I think that's the way to go but I don't know how to continue. [Edit] I have proved the second half: If we take $ y=z$ we obtain: $x \circ (y \circ y)=(x \circ y) \cdot y,$ $\quad $ so $\quad x = x \circ e =(x \circ y) \cdot y, \quad$ therefore $x \cdot y^{-1}= x \circ y$ $\quad (1)$ For $x=y$ we get $x\circ (x\circ y) = (x \circ x) \cdot y=e \cdot y=y$ $\;$ and from $(1)$ we have $x \circ (x\cdot y^{-1})=y \quad (2)$ . But $e \circ (y\cdot x^{-1})=(y\cdot x^{-1})^{-1}=x\cdot y^{-1} \quad$ so f
Let us suppose that such a $\circ$ exists. As you point out, $x \circ e = x$ , and $e \circ x = x^{-1}$ (inverse, with respect to $\cdot$ , of course). Thus, $$(y \circ z)^{-1} = e \circ (y \circ z) = (e \circ y) \cdot z = y^{-1} \cdot z,$$ and so $$y \circ z = (y^{-1} \cdot z)^{-1} = z^{-1} \cdot y. \tag{$\star$}$$ Now, (ii) becomes \begin{align} (y \circ z)^{-1} \cdot x = y^{-1} \cdot x \cdot z &\iff (z^{-1} \cdot y)^{-1} \cdot x = y^{-1} \cdot x \cdot z \\ &\iff y^{-1} \cdot z \cdot x = y^{-1} \cdot x \cdot z \\ &\iff z \cdot x = x \cdot z \end{align} Thus, $\cdot$ is commutative. As for the other direction, $(\star)$ shows that $\circ$ is unique, if it exists, so we may use $(\star)$ instead as a definition. It's not difficult to show that $(\star)$ indeed satisfies (i) and (ii) if $\cdot$ is commutative (and indeed, the previous argument, followed backwards, shows (ii) holds).
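A finite sanity check of $(\star)$ (a sketch: $\mathbb Z_{12}$ written additively, where $y\circ z=z^{-1}\cdot y$ becomes $y-z$, and $S_3$ as a non-abelian counterexample):

```python
import itertools

# abelian case: (Z_12, +), identity 0; (star) reads y o z = y - z (mod 12)
n = 12
circ = lambda y, z: (y - z) % n
ok_i  = all(circ(x, x) == 0 for x in range(n))
ok_ii = all(circ(x, circ(y, z)) == (circ(x, y) + z) % n
            for x in range(n) for y in range(n) for z in range(n))
print(ok_i, ok_ii)  # True True

# non-abelian case: S_3 under composition; (star) still gives (i) but (ii) fails
perms = list(itertools.permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)
circ3 = lambda y, z: mul(inv(z), y)
ok_ii_s3 = all(circ3(x, circ3(y, z)) == mul(circ3(x, y), z)
               for x in perms for y in perms for z in perms)
print(ok_ii_s3)  # False
```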
|group-theory|abelian-groups|
1
Connections on a complex line bundle
Let $L \to M$ be a complex line bundle over a smooth manifold $M$ . Let $\{U_\alpha\}$ be a trivializing open cover with transition maps $z_{\alpha\beta}:U_\alpha \cap U_\beta \to \Bbb C^* = \text{GL}(1,\Bbb C)$ . The bundle of endomorphisms of $L$ , $\text{End}(L) \cong L^* \otimes L$ is trivial since it can be defined by transition maps $(z_{\alpha\beta})^{-1} \otimes z_{\alpha\beta} = 1$ . Thus, the space of connections on $L$ , $\mathcal{A}(L)$ , is an affine space modeled on the linear space of complex-valued $1$ -forms. A connection on $L$ is simply a collection of $\Bbb C$ -valued $1$ -forms $\omega^\alpha$ on $U_\alpha$ related on overlaps by $$\omega^\beta = \frac{dz_{\alpha\beta}}{z_{\alpha\beta}} + \omega^\alpha = d\log z_{\alpha\beta} + \omega^\alpha.$$ Could someone explain to me what is going on here? I'm familiar with line bundles, transition maps, connections and a bunch of other stuff, but this seems very weird to me. I have never seen, for example, this affine space $\mathcal{A}(L)$ .
Let $\nabla_0\colon C^\infty(M,L)\to \Omega^1(M,L)$ be a reference connection. Then any other connection on $L$ can be written as $\nabla_A = \nabla_0+A$ , where $A\in \Omega^1(M,\mathbb C)\cong \Omega^1(M,\mathrm{End}(L))$ . This is what makes $\mathcal{A}(L)$ into an affine space. Let now $(U_\alpha)$ be an open cover of $M$ and $s_\alpha\in C^\infty(U_\alpha,L|_{U_\alpha})$ be a collection of non-vanishing sections. These automatically trivialise the bundle and thus there exist transition functions $z_{\alpha\beta}\in C^\infty(U_{\alpha\beta},\mathbb C)$ on $U_{\alpha\beta} =U_\alpha\cap U_\beta$ such that $$s_\alpha = z_{\alpha\beta}s_\beta \quad \text{ on } U_{\alpha\beta}. \tag{1}$$ Moreover, we can express $\nabla_A s_\alpha$ via some $\omega_{\alpha}\in \Omega^1(U_\alpha,\mathbb C)$ as follows: $$ \nabla_A s_\alpha = \omega_{\alpha} s_\alpha \tag{2} $$ Now apply $\nabla_A$ to $(1)$ and use the Leibniz rule: $$ \omega_\alpha s_\alpha = \nabla_A(s_\alpha) = dz_{\alpha\beta}\, s_\beta + z_{\alpha\beta}\nabla_A s_\beta = \left(\frac{dz_{\alpha\beta}}{z_{\alpha\beta}} + \omega_\beta\right) s_\alpha, $$ so $\omega_\alpha = d\log z_{\alpha\beta} + \omega_\beta$ on $U_{\alpha\beta}$ , which is exactly the overlap relation from the question (with the roles of $\alpha$ and $\beta$ exchanged).
|differential-geometry|vector-bundles|connections|
1
How does nested induction work?
I cannot understand how double/nested induction works. I am supposed to prove that the addition operation is both commutative ( $x + y = y + x$ ) and associative ( $x+(y+z) = (x+y)+z$ ) from the following definitions of the naturals and addition (no other axiom is given). Naturals: Base: $0 \in \mathbb{N}$ and Ind. $n \in \mathbb{N} \implies sn \in \mathbb{N}$ . Addition: Base: $0 + n = n$ and Ind. $s(m)+n = s(m+n)$ . I do understand that I am supposed to divide the proof into a base case and an inductive case, but I am very unsure about the way to proceed since there are both $m$ and $n$ to care about: Proof statement: $\forall m,n \in \mathbb{N}: n+m = m+n$ . Base case: Assume $n=0$ . This gives us $0 + m = m + 0$ . By the definition of addition, this means $0 + m = m$ . But now I am stumped; obviously I should do some sort of induction on $m$ , but there is nothing I can say about $m + 0$ (as it is not in the definition of $+$ ). Any ideas on how to structure the proof?
For the commutative property you do indeed need nested induction: see QC_QAOA’s answer for how to do that. As such, you might fear that to show the associative property $x+(y+z) = (x+y)+z$ you need triple-nested induction, but fortunately that is not the case. You only need to do induction once: simply pick $y$ and $z$ to be arbitrary, and do induction on $x$ : Base: Show that $0+(y+z) = (0+y) +z$ … which is trivial, since $0+(y+z) =y+z$ , and $(0+y)+z =y+z$ Step: Assume (IH) that $x+(y+z)=(x+y)+z$ and show that $s(x) + (y+z) = (s(x) +y) +z$ … which is also not hard: $s(x)+(y+z) = s(x+(y+z))= \text{(by IH)}\ s((x+y) +z)= s(x+y)+z = (s(x)+y)+z$
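These recursive definitions can also be checked mechanically on finitely many cases (a sketch, encoding each natural as a Python int whose successor is `+ 1`; of course, only the induction proves it for all naturals):

```python
def add(m, n):
    # addition exactly as defined: 0 + n = n, s(m') + n = s(m' + n)
    if m == 0:
        return n
    return add(m - 1, n) + 1   # m = s(m'), result = s(m' + n)

N = range(15)
assoc = all(add(x, add(y, z)) == add(add(x, y), z) for x in N for y in N for z in N)
comm  = all(add(x, y) == add(y, x) for x in N for y in N)
print(assoc, comm)  # True True
```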
|natural-numbers|
0
number of possibilities to distribute k balls in 2n boxes with condition
I'm trying to find the number of possibilities to distribute $k$ balls to $2n$ boxes such that for every $i$ between $1$ and $n$ , the sum of the balls in box $i$ and box $n+i$ isn't equal to $6$ . I tried writing the equation $$x_1+x_2+...+x_{2n} = k,$$ but I don't know how to progress further, and it seems like it should be solved with recursion. Can someone direct me?
Using generating functions The number of ways to fill in box $i$ and $n+i$ is given by: $$ 1x^0 + 2x^1 + 3x^2 + 4x^3 + 5x^4 + 6x^5 + 0x^6 + 8x^7 + 9x^8 + \ldots = \frac{ 1}{(1-x)^2 } - 7x^6.$$ The number of ways to fill in all $2n$ boxes is given by: $$(\frac{ 1}{(1-x)^2 } - 7x^6)^n.$$ We are interested in the coefficient of $x^k$ . Expanding out via the binomial theorem, we get $$(\frac{ 1}{(1-x)^2 } - 7x^6)^n = \frac{1}{(1-x)^{2n}} - {n \choose 1}7x^6\frac{1}{(1-x)^{2n-2} } + { n\choose 2} (7x^{6})^2\frac{1}{(1-x)^{2n-4}} - \ldots + (-7x^6)^{n}. $$ Then, finding the coefficient of $x^k$ in each term gives us: $$\sum\limits_{\ell=0}^n(-7)^\ell\binom{n}{\ell}\binom{k+2n-1-8\ell}{2n-1-2\ell}$$
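A brute-force cross-check of this formula for small parameters (a sketch; the coefficient of $x^j$ in $(1-x)^{-m}$ is taken as $\binom{j+m-1}{m-1}$ for $m\ge 1$ and $[j=0]$ for $m=0$, which handles the last term of the expansion):

```python
from itertools import product
from math import comb

def brute(n, k):
    # all ways to put k balls into 2n labelled boxes with x_i + x_{n+i} != 6
    return sum(1 for boxes in product(range(k + 1), repeat=2 * n)
               if sum(boxes) == k
               and all(boxes[i] + boxes[n + i] != 6 for i in range(n)))

def formula(n, k):
    total = 0
    for l in range(n + 1):
        j, m = k - 6 * l, 2 * n - 2 * l   # need [x^j] (1-x)^(-m)
        if j < 0:
            continue
        c = (1 if j == 0 else 0) if m == 0 else comb(j + m - 1, m - 1)
        total += (-7) ** l * comb(n, l) * c
    return total

print(all(brute(n, k) == formula(n, k) for n in (1, 2) for k in range(9)))  # True
```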
|combinatorics|balls-in-bins|
0
A philosophical question on the nature of mathematics
I had a seemingly simple question today, that goes as follows. What do we need for mathematics to exist in a universe, or a system, more broadly speaking? Is it a matter of having the ability to define axioms, or regularities and certain patterns? How much does it hinge on us having the cognitive functionality that we currently do? I am not very familiar with the work of Kurt Gödel, but I suppose it might also have connections to what I am pondering. Thank you in advance for your consideration of this question.
This seemingly simple question and some variation thereof (‘where does math come from?’ or ‘what is the basis of math?’ or ‘what justifies math?’) is often asked on this site, but (as you probably suspect) it has no simple answer. Indeed, this question is probably more suited for the Philosophy.SE site, and I would also recommend taking a course or consulting a text in the philosophy of mathematics, where you will learn about a wide variety of views on this. That said, it seems like you are leaning towards some kind of cognitive basis of math. Now, for that, I would actually recommend staying away from consciousness, since consciousness is such a quagmire by itself. However, I think you may enjoy the book “Where Mathematics Comes From” by Lakoff and Nunez, where they try to relate our mathematical abilities to more basic cognitive abilities to perceive and interact with the environment, and how our ‘embodiedness’ and ‘situatedness’ not only affects, but largely creates such concepts as
|philosophy|
0
Do closure and intersection of open sets commute when composed with the interior?
In the Wikipedia article on the interior operator it is stated that on a complete metric space $X$ one has for every sequence of open sets $(A_i)_{i\in \mathbb N}\subset X$ the relation $$ \mathrm{Int}\Big(\bigcap_{i}\overline A_i\Big)=\mathrm{Int}\Big(\overline {\bigcap_{i}A_i}\Big). $$ Now I wonder whether the finite version of this relation, i.e., for any two open sets $A,B\subset X$ to have $$ \mathrm{Int}\big(\overline A\cap \overline B\big)=\mathrm{Int}\big(\overline {A\cap B}\big), $$ does actually hold in any topological space $X$ . If so, it should be found in some elementary textbooks, perhaps as an exercise. I searched for a while but it turned out that searching for relations like this is extremely inconvenient - one obtains hundreds of results which are similar but different from the wanted relation. On the other hand, I guess that (if the relation holds in general) there should be an easy direct proof based on playing around with the operations interior, closure, and intersection.
Here is a proof of the other direction. It suffices to show that $\operatorname{int}(\overline{A}\cap\overline{B})\subseteq \overline{A\cap B}$ , because $\operatorname{int}(\overline{A}\cap\overline{B})$ is open, so if it's a subset of some set, it's a subset of the interior of it. Since $\overline{A\cap B}=\overline{A\cap\overline{B}}$ , it suffices to show $\operatorname{int}(\overline{A}\cap\overline{B})\subseteq \overline{A\cap\overline{B}}$ . Let $x\in\operatorname{int}(\overline{A}\cap\overline{B})$ . Let $W$ be an open neighborhood of $x$ . Then so is $V=W\cap\operatorname{int}(\overline{A}\cap\overline{B})$ . Since $x\in\overline{A}$ , there exists some $y\in V\cap A$ . Since $V\subseteq \overline{B}$ then $y\in V\cap A\cap\overline{B}\subseteq W\cap A\cap\overline{B}$ . This was for an arbitrary open neighborhood $W$ of $x$ - proving that $x\in \overline{A\cap\overline{B}}$ . To show $\overline{A\cap B}=\overline{A\cap\overline{B}}$ , let $U$ be an open set. If $U\cap A\cap B\neq\emptyset$ , then trivially $U\cap A\cap\overline{B}\neq\emptyset$ . Conversely, if $U\cap A\cap\overline{B}\neq\emptyset$ , then $U\cap A$ is a nonempty open set meeting $\overline{B}$ , hence it meets $B$ , so $U\cap A\cap B\neq\emptyset$ . Thus an open set meets $A\cap B$ iff it meets $A\cap\overline{B}$ , and the two closures coincide.
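The identity can be spot-checked on a small finite topological space (a sketch; the topology below is an arbitrary choice used only for illustration):

```python
X = frozenset(range(4))
# a topology on X = {0,1,2,3}: closed under unions and intersections
opens = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]]

def interior(S):
    result = frozenset()
    for U in opens:          # union of all open subsets of S
        if U <= S:
            result |= U
    return result

def closure(S):
    closed = [X - U for U in opens]   # complements of open sets
    return frozenset.intersection(*[C for C in closed if S <= C])

ok = all(interior(closure(A) & closure(B)) == interior(closure(A & B))
         for A in opens for B in opens)
print(ok)  # True: int(cl(A) & cl(B)) = int(cl(A & B)) for all open A, B here
```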
|general-topology|reference-request|
0
Prove that the king made at least one diagonal move.
I am having trouble understanding the solution to this question: why must the king, when it starts on a black square, end on another black square, and if the king's moves change the color of its square, how does that make the number of traversed squares odd? Question: A king visited all squares of the usual 8 by 8 chessboard exactly once, starting from the lower left square (a1) and finishing at the upper right square (h8). Prove that the king made at least one diagonal move. Answer: Assume on the contrary that the king made no diagonal moves. Then on each move it changed the color of its square. Consider a list of all squares in the order the king visited them. It starts with a white square and ends with a white square, and the colors of the squares alternate. Therefore it has an odd number of squares. But the king visited all $64$ squares exactly once, a contradiction. Therefore the king made at least $1$ diagonal move.
There are $64$ squares in total, which means that the King must make $63$ moves. If each move changes the color, then the King will end in a square of a different color, as $63$ is odd. To visualize it, I suggest working with a smaller board. For $2\times 2$ , there isn't much to it, so look at $4\times 4$ .
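Following the suggestion to work on a smaller board, an exhaustive search confirms the parity argument on $4\times 4$, and finds corner-to-corner paths on $3\times 3$, where the parities do match (a sketch; `count_paths` is a helper name introduced here):

```python
def count_paths(n):
    # Hamiltonian paths of a king that moves only horizontally/vertically,
    # from corner (0, 0) to the opposite corner (n-1, n-1)
    total, target, found = n * n, (n - 1, n - 1), 0
    def dfs(cell, visited):
        nonlocal found
        if len(visited) == total:
            found += cell == target
            return
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in visited:
                visited.add(nxt)
                dfs(nxt, visited)
                visited.remove(nxt)
    dfs((0, 0), {(0, 0)})
    return found

print(count_paths(3) > 0, count_paths(4))  # True 0
```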
|logic|
0
Connections on a complex line bundle
Let $L \to M$ be a complex line bundle over a smooth manifold $M$ . Let $\{U_\alpha\}$ be a trivializing open cover with transition maps $z_{\alpha\beta}:U_\alpha \cap U_\beta \to \Bbb C^* = \text{GL}(1,\Bbb C)$ . The bundle of endomorphisms of $L$ , $\text{End}(L) \cong L^* \otimes L$ is trivial since it can be defined by transition maps $(z_{\alpha\beta})^{-1} \otimes z_{\alpha\beta} = 1$ . Thus, the space of connections on $L$ , $\mathcal{A}(L)$ , is an affine space modeled on the linear space of complex-valued $1$ -forms. A connection on $L$ is simply a collection of $\Bbb C$ -valued $1$ -forms $\omega^\alpha$ on $U_\alpha$ related on overlaps by $$\omega^\beta = \frac{dz_{\alpha\beta}}{z_{\alpha\beta}} + \omega^\alpha = d\log z_{\alpha\beta} + \omega^\alpha.$$ Could someone explain to me what is going on here? I'm familiar with line bundles, transition maps, connections and a bunch of other stuff, but this seems very weird to me. I have never seen, for example, this affine space $\mathcal{A}(L)$ .
Jan Bohr's answer is already a great answer on how to solve your problem, let me just add a bit more on that. In particular the intuition of where the logarithm comes from. Most of this answer will actually just be re-stating definitions that I'll need, so you can just skip to the last part to see the explanation.. This is all inspired from Huybrechts's 2005 book on complex geometry, which I'll write [1] for short: Let $M$ be a complex manifold and $E \to M$ a bundle over it (for now, of any rank), then a connection is defined the usual way: \begin{equation} D : \Gamma(E) \longrightarrow \Omega_\mathbb{C}^1(E) := T^{*}M\otimes\Gamma(E) \end{equation} In other word, given $\sigma \in \Gamma(E)$ , and $V \in TM$ , $D\sigma(V) = D_V \sigma$ is a section in $\Gamma(E)$ . And we ask $D$ to satisfy the usual properties a connection satisfies. Then: If we let $(\cdot, \cdot)$ be a Hermitian structure on $E$ , then we say that $D$ is compatible with the Hermitian structure if $D(s, r) = (Ds, r
|differential-geometry|vector-bundles|connections|
0
Birthday problem: how to show the scaling with $1/N^2$?
Suppose there is a sequence of $N$ numbers $x_1, x_2, x_3, \dots, x_N$ . There are then gaps $|x_i - x_j|$ , and the minimum gap: $\delta (N) = \min_{i \ne j \le N} \{ | x_i -x_j | \}$ . Let the mean gap be normalized to $1/N$ . If the sequence of numbers $x_1, x_2, x_3, \dots, x_N$ is random, the minimum gap is $\delta (N) \simeq \frac{1}{N^2}$ : this is the birthday problem! How to calculate this $1/N^2$ explicitly? Or, where is the calculation shown? See the problem described here: https://youtu.be/n8IMb2mW6TM?t=2396
To deal with the $\frac1n$ issue and get the exact result, let's in fact choose $n$ iid uniform positions on a circle of circumference $1$ (equivalent in the birthday problem to days in a year), then pick one of them as the starting point and measure the distances clockwise from there: this is equivalent to choosing $n-1$ iid uniform numbers in $(0,1)$ and sorting them into $0 \le x_{(1)} \le x_{(2)}\le \cdots \le x_{(n-1)} \le 1$ , and then using $x_{(0)}=0$ and $x_{(n)}=1$ so $n$ gaps $G_i= x_{(i)}-x_{(i-1)}$ . The $n$ gaps $G_i$ are identically distributed and sum to $1$ so they each have expectation $E[G_i]=\frac1n$ . We can state their distributions as $P(G_i \le y) = 1-(1-y)^{n-1}$ for $0\le y\le 1$ , so with density $(n-1)(1-y)^{n-2}$ and expectation $\frac 1n$ . The $G_i$ are not independent of each other as they sum to $1$ and so the smallest cannot exceed $\frac1n$ , but as $n$ increases the relationship between them becomes weaker. Going back to the circle analogy, the probability the gaps a
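The claimed scaling can be illustrated by simulation (a sketch; for $n$ uniform points on the circle the exact result is $E[\min_i G_i]=1/n^2$, so the scaled estimate below should hover near $1$):

```python
import random

def mean_min_gap(n, trials=4000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cuts = sorted(rng.random() for _ in range(n - 1))
        pts = [0.0] + cuts + [1.0]   # circle cut open at one of the n points
        total += min(b - a for a, b in zip(pts, pts[1:]))
    return total / trials

n = 30
scaled = mean_min_gap(n) * n * n
print(scaled)  # close to 1
```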
|statistics|random-variables|average|birthday|random-functions|
0
Intuition behind the definition of continuity in terms of open sets
I have familiarised myself with the definition of continuity in terms of limits, each point in the codomain being 'within' an $\varepsilon$ of the domain, etc... But my lecturer has suddenly begun using the definition "a function is continuous if for all open sets, its preimage is open" I was wondering if someone could shed some light on the intuition behind this definition so it makes better sense in my head.
The question mentions the classical Weierstrass-Jordan "epsilon-delta" definition of continuity. It says that a function $f:{\mathbb{R}} \to {\mathbb{R}}$ is continuous at the point ${x_0}$ when the following holds: for any positive real number $\varepsilon > 0$ , however small, there exists some positive real number $\delta > 0$ such that for all $x$ in the domain of definition of $f$ , if $\left| {x - {x_0}} \right| < \delta$ then $\left| {f\left( x \right) - f\left( {{x_0}} \right)} \right| < \varepsilon$ . We can paraphrase this definition saying: for any measure of closeness that we fix in the image space ( $\varepsilon$ ), we can find a measure of closeness in the domain space ( $\delta$ ) such that the function maps close points to close points. Now, consider how we measure closeness in topological spaces. Not by means of distance, a concept that is defined in metric spaces but not in topological spaces. In topology the concept of closeness is more related to similarity or likeness. Two points are close
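For intuition, the open-set definition can be tested directly on tiny finite topological spaces, where "preimage of every open set is open" becomes a finite check (a sketch; the spaces and maps below are arbitrary illustrative choices):

```python
# X = {0,1,2} with opens {}, {0}, {0,1}, {0,1,2}; Y = {0,1}, Sierpinski-like
X_opens = [frozenset(s) for s in [(), (0,), (0, 1), (0, 1, 2)]]
Y_opens = [frozenset(s) for s in [(), (0,), (0, 1)]]

def continuous(f):
    # f is continuous iff the preimage of every open set of Y is open in X
    return all(frozenset(x for x in range(3) if f[x] in V) in X_opens
               for V in Y_opens)

f = {0: 0, 1: 0, 2: 1}   # preimage of {0} is {0,1}: open, so continuous
g = {0: 1, 1: 0, 2: 0}   # preimage of {0} is {1,2}: not open, not continuous
print(continuous(f), continuous(g))  # True False
```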
|calculus|elementary-set-theory|continuity|
0
two points are close to each other if there are a lot of open sets that contain both of them
I started taking an introductory course on the topology of the real numbers. I am familiar with the ideas of open sets, closed sets, limit points, interior points... However, I have come across a particular definition of two points being close to each other that goes as follows: Definition: We say two points are close to each other if there are a lot of open sets that contain both of them. Worded differently: The closer two points are to each other, the more open sets contain them at the same time. My Question: I'm having a hard time grasping this concept of points being close using open sets. How does it relate to the usual idea of distance, and how can I visualise it? What is the intuition behind it? And if possible, how can I more formally write this in terms of topological definitions?
In a topological space we do not talk about closeness in the sense of distance. Distance is a concept defined in metric spaces, not in topological spaces. We talk about abstract closeness, something more related to similarity or likeness than to distance. In a metric space, when somebody asks “how close are these two points?” you answer with a number: their distance. In a topological space, you answer with the list of properties that they share. We identify sets with properties. Having such property means belonging to the (sub)set of all elements (of the topological space) that have that property. Then we can measure the proximity of two points by which properties (that we care about) they share. The collection of open sets of a topological space represents the collection of properties that the points of the space may (or may not) have. For this collection to make sense, it has to meet the following requirements. There is one property (being a point of the space) that all points have.
|general-topology|soft-question|
0
Problem with limit and increasing function
Let $f:(0,\infty) \to (0,\infty)$ be an increasing function such that $$\lim_{x \to \infty}f(x)=\infty \text { and } \lim_{x \to \infty} \frac {f(x+f(x))}{f(x)}=1.$$ Show that $$\lim_{x \to \infty} \frac {f(x)}{x}=0 \text { and } \lim_{x \to \infty} \frac {f(x+af(x))}{f(x)}=1,$$ for every $a \gt 0$.
The limit $ \lim_{x\to \infty} \frac{f(x+f(x))}{f(x)}$ is of the form $\frac{\infty}{\infty}$ . Thus, we can apply L'Hôpital's rule (assuming $f$ is differentiable). Differentiating the numerator and denominator gives: $$ \lim_{x \to \infty} \frac{f'(x+f(x))}{f'(x)} (1+f'(x))$$ which can be rearranged as: $$ \lim_{x \to \infty} \frac{f'(x+f(x))}{f'(x)}+f'(x+f(x))$$ We can now divide the problem into cases. Case 1: $f'(x)$ tends to some non-zero finite (positive) value $\lambda$ $\implies$ $1 = 1 + \lambda$ $\implies$ $\lambda = 0$ . This case is rejected. Case 2: $f'(x)$ tends to infinity. In this case it is intuitive that the first term in the above sum is a positive indeterminate value and the second term is infinite (positive), so their sum cannot be $1$ . Therefore, this case is rejected as well. Having rejected the other possibilities, we can state that as $x$ tends to infinity, $f'(x)$ tends to zero (e.g. a function such as $\ln(1+x)$ ). The parts to show are now trivial (a simple application of L'Hôpital's rule).
|limits|
0
Need help with this differential equation problem involving second derivative and first derivative
I have been trying to solve this problem. I was able to observe that from the given conditions $x^2 + f^2(x) + (f'(x))^2 = c$ , where c is some constant, but it does not help me in any of the options. Any hints or ideas would be appreciated Let $f$ be a twice differentiable function that assumes real values satisfying the equation: $x+f(x)f'(x)+f'(x)f''(x)=0 \forall x \in (-1,1)$ , then which of the following is/are correct? A) $ \lim_{x\to0}xf(x) = -1$ B) $ \lim_{x\to0} \frac{f'(x)}{ln|x|} = 0$ C) $ \lim_{x\to0}xf(x)f'(x) = 1$ D) $ \lim_{x\to0} \frac{f(x)f'(x)}{ln|x|} = 0$
Since $f$ and $f'$ are continuous at $x=0$ , they are bounded in a neighbourhood of $0$ as well. Hence you always have $$\lim_{x\rightarrow 0}x f(x) = \lim_{x\rightarrow 0}x f(x)f'(x)=0.$$ Therefore A) and C) cannot hold. Let us rewrite the equation a bit by dividing by $f'$ : $$\frac{x}{f'(x)} + f(x) + f''(x)=0.$$ If we choose $f'(0)\neq 0$ and $f(0)\neq 0$ , then this initial value problem has a unique solution by the Picard-Lindelöf theorem. Again by continuity, $f$ and $f'$ would be bounded away from zero at $x=0$ . Therefore $$\lim_{x\rightarrow 0} \frac{f'(x)}{\ln |x|} = \pm \lim_{x\rightarrow 0} \frac{f(x)f'(x)}{\ln |x|} = 0$$ because $f(x)$ and $f'(x)$ are bounded while $|\ln |x||\to\infty$ as $x\to 0$ .
|ordinary-differential-equations|limits|
1
Predual of $W^{1,\infty}$
I understand the meaning of $u_n$ converges to $u$ weak star (it means that $u_n\in E^*$ and $(u_n,x)_{E^*,E} \to (u,x)_{E^*,E}$ for all $x\in E$) but I have some trouble identifying a space $E$ such that $E^* = W^{1,\infty}(\Omega)$ (where $\Omega$ is an open subset of $\mathbb{R}^n$). I know that we can identify $L^\infty(\Omega)$ with $L^1(\Omega)^*$ , but what happens for $W^{1,\infty}(\Omega)$ ? It seems that we can identify the predual of $W^{1,\infty}(\Omega)$ as $W^{1,1}(\Omega)$ as Terence Tao says here, but where can I find a reference for this fact?
This is an old question, but I found the existing answer to be unsatisfactory since it doesn't give a proper description of the predual. Since this is not easy to find in the literature, I hope posting a detailed answer will be of value. In practice, one often wants to understand duality through a suitable extension of the distributional pairing $$ \langle f, g \rangle = \int_{\Omega} f(x) g(x) \,\mathrm{d}x.$$ In particular, see my answer regarding the identification $W^{-1,p^{\prime}}(\Omega) \simeq W^{1,p}_0(\Omega)^{\ast}.$ However, the endpoint $p=\infty$ case requires a slightly different formulation. Claim : The pre-dual of $W^{1,\infty}(\Omega)$ is given by the space of distributions $$ W^{-1,1}(\Omega) = \left\{ f_0 + \sum_{i=1}^n \partial_{x_i}f_i \in \mathcal{D}'(\Omega): f_0, f_1, \dots, f_n \in L^1(\Omega) \right\},$$ equipped with the norm $$ \lVert T \rVert_{W^{-1,1}(\Omega)} = \inf\left\{\lVert f_0 \rVert_{L^1(\Omega)} + \sum_{i=1}^n \lVert f_i \rVert_{L^1(\Omega)} : \ T = f_0 + \sum_{i=1}^n \partial_{x_i} f_i \right\}.$$
|general-topology|reference-request|sobolev-spaces|
0
How to prove that the shortest path between two points on a sphere is an arc of a great circle?
It is a known fact that the shortest path between two points on a sphere is an arc of a great circle, but I don't know the proof of this claim, so I tried to rigorously prove it myself. My attempt: Claim 1: Let two points $a,b$ be at the opposite poles of the sphere with radius $r$ , and assume their distance is less than $\pi r$ . By the symmetry of the sphere, any two points on opposite sides of the sphere must then have their distance less than $\pi r$ , i.e. the radius is less than $r$ , which contradicts the hypothesis (I am not sure this part is correct). Claim: any two points can be part of a great circle. Let two points $a,b$ be at the opposite poles and let $c$ be at the middle of the distance between them along the great circle. If the distance between $c,a$ is less than half of the distance along the great circle, then by the symmetry of the sphere the same holds for the distance between $c,b$ , which contradicts Claim 1. The second method can be generalised to all rational numbers, but I am having trouble generalising this clai
First, I'll define the following unit vectors: $u_0 = \dfrac{b - a}{\| b - a\|}$ $u_1 = \dfrac{a + b}{\| a + b \| }$ $u_2 = \dfrac{a \times b }{ \| a \times b \| }$ then the axis of rotation that takes $a$ to $b$ is given by $ U = \cos \alpha \ u_1 + \sin \alpha \ u_2 $ unit vectors $v_1$ and $v_2$ are orthogonal to $U$ , where $ v_1 = \sin \alpha \ u_1 - \cos \alpha \ u_2 $ $ v_2 = u_0 $ Now vector $a$ can be written in the basis $ u_0 , u_1 , u_2 $ as $ a = r( - \sin \phi \ u_0 + \cos \phi \ u_1) $ $ b = r( \sin \phi \ u_0 + \cos \phi \ u_1) $ where $ 2 \phi $ is the angle between $a $ and $ b$ . The projection of $a$ onto $v_1$ is $ a_1 = ( a \cdot v_1 ) v_1 = r (\sin \alpha \cos \phi) v_1 $ and onto $v_2$ is $ a_2 = (a \cdot v_2 ) v_2 = r ( - \sin \phi ) v_2 $ Similarly, the projections of $ b$ onto $v_1$ and $v_2$ are $ b_1 = r (\sin \alpha \cos \phi) v_1 $ $ b_2 = r \sin \phi v_2 $ Therefore, the angle of rotation is given by $ \psi = 2 \tan^{-1} \left( \dfrac{ \sin \phi }{ \sin \al
|geometry|algebra-precalculus|euclidean-geometry|3d|
0
What's the correct way to derive the eq. of a parabola without knowing the conventional values of focus and directrix?
According to the convention that we are taught in our schools, focus = $(a,0)$ [for a standard parabola] and directrix: $x=-a$. But is there any way to obtain the equation without knowing these conventions? I know the basic definition is this: A parabola is the locus of all points whose distance from a point is equal to their perpendicular distance from a line known as the directrix. But is it possible to derive the equation from the ground up using just the definition and not making any of the assumptions like I did in the first line? P.S. - If you have the link to the derivation or any article that is related to this, please do send it! Thank you so much!
A parabola, by definition, is the locus of points equidistant from a given point and a given line. To derive its equation, assume a general line of the form $ax+by+c=0$ and a general point $(h,k)$. Use the standard formulas for the distance between two points and the distance from a point to a line, and equate them.
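As a worked instance of that recipe (the general focus $(h,k)$, the directrix $ax+by+c=0$, and the specialisation at the end are illustrative choices):

```latex
% Distance to the focus (h,k) equals distance to the directrix ax+by+c=0:
\sqrt{(x-h)^2+(y-k)^2} \;=\; \frac{|ax+by+c|}{\sqrt{a^2+b^2}}
% Squaring both sides clears the root and the absolute value:
(a^2+b^2)\left[(x-h)^2+(y-k)^2\right] \;=\; (ax+by+c)^2
% Specialising to focus (a,0) and directrix x=-a recovers the textbook form:
(x-a)^2+y^2 \;=\; (x+a)^2 \quad\Longrightarrow\quad y^2 = 4ax
```

The squared form is the general equation of a parabola with that focus and directrix; no convention about where the focus sits is needed.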
|conic-sections|
0
Inner product space $\langle v, x \rangle = \langle w, x \rangle \implies v = w$ on total set
Let $M$ be a total subset of an inner product space $X$ . Show that $\forall x \in M$ : $$ \langle v, x \rangle = \langle w, x \rangle \implies v = w $$ My idea was to calculate $$ \langle v, x \rangle - \langle w, x \rangle = \langle v - w, x \rangle = 0 $$ I know that no vector $a \in X$ can be perpendicular to all $x \in M$ . So therefore $\exists x \in M$ such that $v-w \not\perp x$ . So in order for this inner product to be $0$ $\forall x \in M$ , one of the operands must be $0$ . And $x$ can't be it since it is part of a total set. $$ \implies v = w $$ Is this proof correct? It seems too easy to be true
Observing that the equality $\langle v, x \rangle - \langle w, x \rangle = \langle v - w, x \rangle = 0$ holds for all $x\in M$ , we deduce that $v-w$ belongs to the orthogonal complement $M^\perp$ . Given that $M$ is a total subset, it follows that the closure of the span of $M$ , denoted as $\overline{\text{span}(M)}$ , equals $X$ . This, in turn, implies that the orthogonal complement $M^\perp$ consists solely of the zero vector, i.e., $M^\perp=\{0\}$ . Consequently, the fact that $v-w\in\{0\}$ leads us to the conclusion that $v$ equals $w$ .
|inner-products|
0
Geometry of linear equations
It's my first question here. I'm re-self-studying linear algebra from different sources, and one of them is Linear Algebra and Its Applications by G. Strang, 4th ed. While I have studied a bunch of material, I still don't grasp some basics and I really struggle. Pages 4-5 give two approaches to the geometry of linear equations; my problem is the second graph. I get it: the operations, how we use them as a transformation to create the vectors, and that we have to guess a solution to get to $(1,5)$ . I also get that he is trying to make a point for later on, on Gaussian elimination. But how/why can he take the coefficients of each equation, for example for the $y$ part \begin{bmatrix}-1\\1\end{bmatrix}, and use it as $x,y$ coordinates to map it to the graph? How can the $-1y$ and $1y$ coefficients turn into an $(x,y)$ vector $(-1,1)$ ? It looks to me like we got two coefficients and turned them from $y,y$ to $x,y$ . I don't get how that concept works. Do I make sense? I've searched a lot but I may lack knowledge
The author suggests two methods of solving the linear equations. One is standard algebra class elimination, while the other looks at combining the vectors in some way to get to a goal vector. $2x - y = 1$ $x + y = 5$ Above are the equations that we have. He gets those column vectors with the properties of vector addition and scalar multiplication. $x\begin{bmatrix}2\\1\end{bmatrix} + y \begin{bmatrix}-1\\1\end{bmatrix} = \begin{bmatrix}2x\\1x\end{bmatrix} + \begin{bmatrix}-y\\y\end{bmatrix} = \begin{bmatrix}2x-y\\x+y\end{bmatrix} = \begin{bmatrix}1\\5\end{bmatrix}$ Therefore the first element is $2x-y$ which corresponds with 1, and the second element is $x+y$ corresponding with 5. That is how he got the coefficients.
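The two pictures can be checked numerically; this is a quick sketch (the variable names are mine), solving the row picture by Cramer's rule and confirming that the same $(x,y)$ combines the column vectors into the goal vector:

```python
# Row picture: solve 2x - y = 1, x + y = 5 by Cramer's rule.
a, b, c = 2.0, -1.0, 1.0        # 2x - y = 1
d, e, f = 1.0,  1.0, 5.0        # x + y = 5

det = a * e - b * d
x = (c * e - b * f) / det       # x = 2
y = (a * f - c * d) / det       # y = 3

# Column picture: the same (x, y) combines the columns into the goal vector.
combo = [x * 2 + y * (-1), x * 1 + y * 1]
print(x, y, combo)              # 2.0 3.0 [1.0, 5.0]
```

Both readings of the system give the same pair $(2,3)$: as the intersection of two lines, or as the weights that combine the two column vectors into $(1,5)$.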
|linear-algebra|
0
What's the correct way to derive the eq. of a parabola without knowing the conventional values of focus and directrix?
According to the convention that we are taught in our schools, focus = (a,0) [for a standard parabola] and directrix: x=-a. But is there any way to obtain the equation without knowing these conventions? I know the basic definition is this: A Parabola is the locus of all points whose distance from a point is equal to their perpendicular distance from a line known as the directrix. But is it possible to derive the equation from the ground up using just the definition and not making any of the assumptions like I did in the first line? P.S - If you have the link to the derivation or any article that is related to this, please do send it! Thankyou so much!
EDIT: This got a downvote, which made me check to see what was off; there's a correction at the asterisk below. If you did not know the focus or directrix, but you DO have a curve in a coordinate space, and you know it is a parabola, there are several options. I presume the axis is vertical, with upward opening. Find the vertex $(p,q)$ . Then $y-q=(x-p)^2$ * , if this is a $y=x^2$ form of a parabola. If it is wider or narrower, you need to determine a scalar $m$ to account for that, most easily done by examining the value of the curve at $p+1$ , to yield $y-q=m(x-p)^2$ . This is the only simple option if you have imaginary roots. Identify where the parabola touches/crosses zero; these are the roots, and there are at most two distinct real roots. If one, $y=m(x-r_1)^2$ . If two, $y=m(x-r_1)(x-r_2)$ . One or both of those methods will work for any parabola, and if it opens downward, just add a negative sign to the RHS. If you then wish to find the geometric focus and directrix, reverse the
|conic-sections|
1
Is there any connection between QR and SVD of a matrix?
Is it possible to draw any parallels between the SVD and QR decomposition of a matrix? Moreover, for a given matrix $\mathbf{A}\in\mathbb{R}^{n\times m}$, under what conditions is the $\mathbf{U}$ matrix coming from the singular value decomposition of $\mathbf{A}$ equal to the $\mathbf{Q}$ matrix obtained via QR-decomposition of $\mathbf{A}$?
Singular value decomposition factors matrix A into three matrices, $A = U S V^T$ The QR decomposition factors matrix A into two matrices, $A = Q R$ The question you should have asked is for the relationship between the SVD and the complete orthogonal decomposition. https://en.wikipedia.org/wiki/Complete_orthogonal_decomposition , $A = U T V^T$ , where $A$ is $m$ by $n$ $U$ is $m$ by $r$ $T$ is $r$ by $r$ $V$ is $n$ by $r$ and $r$ is the rank of $A$ The similarities between SVD and COD: $U$ and $V$ have orthogonal columns. Both SVD and COD are rank-revealing decompositions. Both SVD and COD can give the least-squares solution to an underdetermined or overdetermined system of linear equations $Ax = b$ The differences: SVD is nearly unique. COD depends on the order of columns and rows in $A$ . $S$ is a diagonal matrix. $T$ is a triangular matrix. The COD can be computed in a fixed number of steps, by performing the QR factorization twice, first $A = UR$ , second $R^T = V T^T$ To
|matrix-decomposition|
0
Solving quintic equations of the form $x^5-x+A=0$
I was on Wolfram Alpha exploring quintic equations that were unsolvable using radicals. Specifically, I was looking at quintics of the form $x^5-x+A=0$ for nonzero integers $A$ . I noticed that the roots were always expressible as sums of generalized hypergeometric functions: $$B_1(_4F_3(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5};\frac{1}{2},\frac{3}{4},\frac{5}{4};\frac{3125|A|^4}{256}))+B_2(_4F_3(\frac{7}{10},\frac{9}{10},\frac{11}{10},\frac{13}{10};\frac{5}{4},\frac{3}{2},\frac{7}{4};\frac{3125|A|^4}{256}))+B_3(_4F_3(\frac{9}{20},\frac{13}{20},\frac{17}{20},\frac{21}{20};\frac{3}{4},\frac{5}{4},\frac{3}{2};\frac{3125|A|^4}{256}))+B_4(_4F_3(\frac{-1}{20},\frac{3}{20},\frac{7}{20},\frac{11}{20};\frac{1}{4},\frac{1}{2},\frac{3}{4};\frac{3125|A|^4}{256}))$$ where the five roots have $(B_1,B_2,B_3,B_4)\in\{(A,0,0,0),(-\frac{A}{4},-\frac{5A|A|}{32},\frac{5|A|^3}{32},-1),(-\frac{A}{4},\frac{5A|A|}{32},-i\frac{5|A|^3}{32},i),(-\frac{A}{4},\frac{5A|A|}{32},i\frac{5|A|^3}{32},-i),(-\frac
An easier way to obtain the coefficients that satisfy both the quintic and the differential resolvent will be outlined. The differential resolvent equation (corrected below), as Elliot Yu so eloquently illustrated, is: $(3125\,t^4-256)\,x''''(t)+31250\,t^3\,x'''(t)+73125\,t^2\,x''(t)+31875\,t\,x'(t)-1155\,x(t)=0$ Its general solution is: $x(t)=c_1\;{}_4F_3(-\frac{1}{20},\frac{3}{20},\frac{7}{20},\frac{11}{20};\frac{1}{4},\frac{1}{2},\frac{3}{4},\frac{3125}{256}\,t^4)\,+ \\c_2\;t\,{}_4F_3(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5};\frac{1}{2},\frac{3}{4},\frac{5}{4},\frac{3125}{256}\,t^4)\,+ \\c_3\;t^2\,{}_4F_3(\frac{9}{20},\frac{13}{20},\frac{17}{20},\frac{21}{20};\frac{3}{4},\frac{3}{2},\frac{5}{4},\frac{3125}{256}\,t^4)\,+ \\c_4\;t^3\,{}_4F_3(\frac{7}{10},\frac{9}{10},\frac{11}{10},\frac{13}{10};\frac{5}{4},\frac{3}{2},\frac{7}{4},\frac{3125}{256}\,t^4)\,$ There is nothing new here except for the correction of the differential equation. An ea
|polynomials|special-functions|quintics|
0
Different Hausdorff topologies with the same continuous mappings
Fix a set $X$ . For a topology $\tau$ on $X$ , let $F_\tau = \{f: X \to X \mid f \text{ continuous w.r.t. } \tau\}$ . What would be an example of a set $X$ and two Hausdorff topologies $\tau$ and $\sigma$ on $X$ such that $\tau \neq \sigma$ but $F_\tau = F_\sigma$ ?
One way to find an example is via this fact. In more detail: suppose $(X, \tau)$ is a nontrivial Hausdorff space with the property that any $\tau$ -continuous function $X \to X$ is either the identity or constant. Let $f: X \to X$ be any non-identity bijection of $X$ (eg one that swaps two elements). Let $\sigma$ be the topology on $X$ that makes $f: (X, \tau) \to (X, \sigma)$ into a homeomorphism (ie "transport the structure along $f$ "). Then $F_\tau$ and $F_\sigma$ both consist only of the identity function and the constant functions. But $\tau$ and $\sigma$ are not equal, because otherwise $f$ would be a continuous function from $(X, \tau) \to (X, \tau)$ .
|general-topology|continuity|separation-axioms|
1
Real analysis questions
Let $S$ be a set of $x_1,x_2,x_3,x_4$ where $\sum_{i=1}^4 x_i^2=1$ , $m=$ minimum of $(x_1,x_2,x_3,x_4)$ where summation of $|x_i|$ where $1\le i\le 4$ , and $M=$ maximum of $(x_1,x_2,x_3,x_4)$ where summation of $|x_i|$ where $1\le i\le 4$ . Therefore $\frac mM$ equals: Option a $0$ Option b $1/4$ Option c $1/2$ Option d $3/4$
It seems to me that $\dfrac{m}{M}$ could be $0$ but it does not have to be. For example, take $x_1 = 1$ and $x_2 = x_3 = x_4 = 0$ . In this case, $m = 0$ . I do not see why you should expect $\dfrac{m}{M}$ to have a unique value. I am also assuming that this is not a probability question.
|calculus|
0
Differences between real and complex Hilbert spaces
I recently noticed that my go-to reference for stuff related to Hilbert spaces uses the terms "Hilbert space" and "complex Hilbert space" synonymously. Hence the following questions: Are there any "basic" theorems that only hold in the complex (or the real) case? By "basic" I perhaps mean that they would be discussed in introductory functional analysis courses. In particular, what about spectral theory and functional calculus? What motivates the restriction to the complex case? Any general guidelines/rules of thumb to quickly evaluate whether a result only holds for complex/real spaces? E.g. some feature in the statement of the theorem or something to look for in the proof.
Here is my favorite result, which is true in the real case ( $\mathbb K = \mathbb R$ ), but fails in the complex case ( $\mathbb K = \mathbb C$ ): The map $f \colon H \to \mathbb K$ , $x \mapsto \|x\|_H^2/2$ is (Fréchet) differentiable.
|functional-analysis|hilbert-spaces|
0
Estimating error between Taylor series and $\tanh$
I want to show that $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ for $\mid x \mid \leq \frac{1}{2}$ , where $P_4$ is the Taylor polynomial of degree 4: $P_4(x)=x-\frac{1}{3}x^3$ . I've established a bound on $\tanh(x)$ , and that $\tanh(x)-P_4(x)$ is increasing on $(0, \infty)$ . I now need to show that $\tanh \frac{1}{2} - P_4(\frac{1}{2}) \leq 0.005$ using a given upper bound for $e$ , and use this to prove the result $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ for $\mid x \mid \leq \frac{1}{2}$ . I'm unsure how I can use this estimate for $e$ , as I don't have a lower bound for $e$ , so I can't replace $e$ with fractions that are bigger and smaller. I'm also not sure how I can use this to show that $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ when $-\frac{1}{2} \leq x \leq \frac{1}{2}$ .
Note that $$\tanh(1/2) = \frac{\exp(1/2) - \exp(-1/2)}{\exp(1/2) + \exp(-1/2)} = \frac{e-1}{e+1} $$ and (if $k < 1$ ) $$ \frac{e-1}{e+1} \le k \ \text{iff}\ e-1 \le k (e+1)\ \text{iff}\ e \le \frac{1+k}{1-k}. $$ EDIT: You already established that $\tanh(x) - P_4(x)$ is increasing, and of course $\tanh(0) = P_4(0) = 0$ , so you want to show $\tanh(1/2) - P_4(1/2) \le .005$ , i.e. $\tanh(1/2) \le .005 + P_4(1/2) = 0.46333\ldots$ , call this $k$ . As shown in the paragraph above, this is equivalent to $e \le (1+k)/(1-k) = 2.7267\ldots$ . And since your known upper bound for $e$ is smaller than this, that does it. As for $-1/2 \le x \le 0$ , note that both $\tanh$ and $P_4$ are odd functions.
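A quick numeric sanity check of the final claim (not a proof; the grid resolution is my own choice):

```python
import math

# Check |tanh(x) - P4(x)| <= 0.005 on [-1/2, 1/2], with P4(x) = x - x^3/3.
def p4(x):
    return x - x**3 / 3

worst = 0.0
for i in range(1001):
    x = -0.5 + i / 1000              # grid over [-1/2, 1/2]
    worst = max(worst, abs(math.tanh(x) - p4(x)))

print(worst)   # about 0.00378, attained at the endpoints, comfortably below 0.005
```

The maximum sits at $x=\pm 1/2$ exactly because $\tanh - P_4$ is odd and increasing, which is the structure the argument above exploits.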
|real-analysis|
1
Let $R$ be a UFD and $f(x)\in R[x]$ with $d$ as the content of the polynomial. Let $f(x)=df_1(x).$ Prove that $f_1(x)$ is a primitive polynomial.
Let $R$ be a unique factorization domain and $f(x)$ be a non constant polynomial in $R[x].$ We denote the content of the polynomial as $c(f)=d$ and consequently, write $f(x)=df_1(x).$ Prove that $f_1(x)$ is a primitive polynomial. The proof given is as follows: Let $f(x)\in R[x]$ be given where $f(x)$ is a nonconstant polynomial with coefficients $a_0, a_1,..., a_n.$ Let $d$ be a gcd of the $a_i$ for $i =0,1,...,n.$ Then for each $i,$ we have $a_i = dq_i$ for some $q_i\in R.$ By the distributive law, we have $f(x) =df_1(x),$ where no irreducible element in $R$ divides all of the coefficients, $q_0, q_1,..., q_n$ of $f_1(x).$ Thus, $f_1(x)$ is a primitive polynomial. However, I am not getting why no irreducible element in $R$ divides all of the coefficients, $q_0, q_1,..., q_n$ of $f_1(x)$ ? I think if it does, then we have a contradiction or something like that. But, I don't understand what it is?
By the properties of the gcd (valid in any gcd domain), $$\gcd(a_0,a_1,\ldots,a_n)=\gcd(cq_0,cq_1,\ldots cq_n)=c\cdot \gcd(q_0,q_1,\ldots q_n)=\\\gcd(a_0,a_1,\ldots,a_n)\cdot \gcd(q_0,q_1,\ldots q_n)$$ Thus, $\gcd(q_0,q_1,\ldots q_n)=1$ and no irreducible element can divide all $q_i$ .
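The gcd identity is easy to see concretely over $R=\mathbb{Z}$ (a UFD); this sketch uses a coefficient list of my own choosing:

```python
from functools import reduce
from math import gcd

# Content of an integer polynomial, and primitivity of f1 = f / content(f).
def content(coeffs):
    return reduce(gcd, coeffs)

f = [6, 4, 10]                     # f(x) = 6 + 4x + 10x^2, coefficients a_i
d = content(f)                     # d = gcd(6, 4, 10) = 2
f1 = [a // d for a in f]           # quotients q_i with a_i = d * q_i
print(d, f1, content(f1))          # content(f1) = 1: no irreducible divides all q_i
```

Here $\gcd(a_0,\ldots,a_n) = d\cdot\gcd(q_0,\ldots,q_n)$ forces $\gcd(q_0,\ldots,q_n)=1$, exactly as in the displayed computation.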
|abstract-algebra|ring-theory|proof-explanation|
1
Equivalence of norms on a basis
I'm trying to prove something and I have two norms $||\cdot||$ , $|||\cdot|||$ over some finite dimensional space $X$ spanned by $x_1, \cdots, x_n$ . I know that $$||x_i|| \leq K\, |||x_i||| \, \forall i=1, \cdots, n$$ My question is if it is possible to bound in terms of $K$ the constant $C$ (which exists because the two norms are equivalent) such that $$||x|| \leq C \,|||x||| \, \forall x \in X$$ Any help would be welcome.
No. In $\Bbb{R}^2$ , let $x_1 = (1,0)$ and $x_2 = (\cos\theta, \sin\theta)$ for some small $\theta$ . In the standard norm, $x_1, x_2$ are unit vectors, and $\|x_1-x_2\|$ is small. However, there exists a (unique) inner product such that $x_1, x_2$ form an orthonormal basis, and with respect to this induced norm, $x_1-x_2$ has norm $\sqrt{2}$ by the Pythagorean theorem. By making $\theta$ small, we make the constant for the equivalence of norms as big as we want, despite $x_1, x_2$ being the same size in both norms.
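A numerical sketch of this counterexample (the value of $\theta$ is arbitrary): the two norms agree on the basis vectors, yet the equivalence constant blows up as $\theta \to 0$.

```python
import math

# x1 = (1,0), x2 = (cos t, sin t); ||.|| is Euclidean, while |||.||| takes the
# coordinates of a vector in the basis {x1, x2} and measures their Euclidean length.
t = 0.01
x1 = (1.0, 0.0)
x2 = (math.cos(t), math.sin(t))

v = (x1[0] - x2[0], x1[1] - x2[1])          # v = x1 - x2
std = math.hypot(*v)                        # standard norm: 2*sin(t/2), tiny
new = math.sqrt(1**2 + (-1)**2)             # coords of v in basis {x1, x2} are (1, -1)
ratio = new / std                           # equivalence constant; blows up as t -> 0
print(std, new, ratio)
```

Both norms give the basis vectors length $1$ (so $K=1$ on the basis), yet `ratio` is about $141$ for $t=0.01$ and is unbounded as $t$ shrinks.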
|functional-analysis|
1
Closed formula for a $gcd$
Let $k\ge 1$ be a positive integer. I am asking if it is possible to find a closed formula for $$\gcd(2^{2k+5}-3\times 2^{k+2}+1,\,2\times(2^{k+2}-1)(2^{k+1}-1),\,2\times(3\times 2^{k+1}-1)(2^{k+2}-1))$$ where $\gcd$ is the greatest common divisor. For $k=1$ , we have $$\gcd(2^{2+5}-3\times 2^{1+2}+1,\,2\times(2^{1+2}-1)(2^{1+1}-1),\,2\times(3\times 2^{1+1}-1)(2^{1+2}-1))= 7$$
Hint Look at the value of the gcd for the first few values of $\ k\ .$ That suggests the following factorisation: \begin{align} 2^{2k+5}-3\cdot2^{k+2}+1&=2^{2k+5}-2^{k+3}-2^{k+2}+1\\ &=\big(\color{red}{2^{k+2}-1}\big)\big(2^{k+3}-1\big) \end{align} of the first number, and you already have $$ 2\big(\color{red}{2^{k+2}-1}\big)\big(2^{k+1}-1\big) $$ and $$ 2\big(3\cdot2^{k+1}-1\big)\big(\color{red}{2^{k+2}-1}\big) $$ for the other two numbers. The second of these minus $\ 3\ $ times the first is $$ 4\big(2^{k+2}-1\big)\ . $$
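The hint can be checked by machine; the closed form $2^{k+2}-1$ below is my own reading of where the hint leads, verified numerically for small $k$:

```python
from math import gcd

# Verify the hinted factorisation and tabulate the gcd for k = 1..7.
results = []
for k in range(1, 8):
    n1 = 2**(2*k + 5) - 3 * 2**(k + 2) + 1
    n2 = 2 * (2**(k + 2) - 1) * (2**(k + 1) - 1)
    n3 = 2 * (3 * 2**(k + 1) - 1) * (2**(k + 2) - 1)
    assert n1 == (2**(k + 2) - 1) * (2**(k + 3) - 1)   # the red factorisation
    results.append(gcd(gcd(n1, n2), n3))
print(results)   # [7, 15, 31, 63, 127, 255, 511]
```

Every value equals $2^{k+2}-1$, the common red factor; the remaining cofactors ($2^{k+3}-1$ is odd, and $3$ never divides $3\cdot 2^{k+1}-1$) share no further divisor.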
|gcd-and-lcm|
0
Are Stonean spaces a reflective subcategory of topological spaces with continuous open maps?
I'm not talking about Stone spaces! A Stonean space is a compact Hausdorff extremally disconnected space i.e. a compact Hausdorff space such that for all open $U$ , its closure $\overline{U}$ is also open. I've been thinking about applications of regular open sets in topology, and I came up with the following: If $X$ is a topological space then the set $\text{RO}(X)$ of regular open sets of $X$ forms a complete Boolean algebra. This Boolean algebra induces a Stonean space $S(X)$ by Stone duality . Thus to any topological space we can assign a Stonean space $S(X)$ . If $X$ is a Stonean space, then $\text{RO}(X)$ is a Boolean algebra of clopen sets of $X$ , so that Stone duality tells us $S(X)$ gives us $X$ back. If $f:X\to Y$ is a continuous map, then $f^{-1}(\text{RO}(Y))\subseteq \text{RO}(X)$ is not true in general (see comments below). Thus I want to restrict to continuous open maps. In this case, if $U\in\text{RO}(Y)$ then $$\text{int }\overline{f^{-1}(U)} = \text{int }f^{-1}(\over
No, Stonean spaces are not reflective in all topological spaces with continuous open maps. Consider an arbitrary Stone space $X$ . Note that every continuous map $X\to\{0,1\}$ is automatically open. So, if a reflection $i:X\to Y$ of $X$ into Stonean spaces existed, every continuous map $X\to\{0,1\}$ would factor through $i$ and so $i$ would be injective since these maps separate points of $X$ . But since $i$ is open and $X$ is compact, this would mean that $i$ is an embedding of $X$ as a clopen subspace of $Y$ , so $X$ itself must be Stonean. So any non-Stonean Stone space does not have a Stonean reflection. (The problem with the construction you propose is that there is not actually a canonical continuous open map $X\to S(X)$ . Very concretely, a point of $S(X)$ is an ultrafilter on $RO(X)$ . But a point of $X$ doesn't naturally give an ultrafilter on $RO(X)$ , since if $U\in RO(X)$ , then $U$ and its complement in $RO(X)$ may not actually cover all of $X$ , so the filter on $RO(X)$ g
|general-topology|category-theory|boolean-algebra|
1
Inverse Laplace transform of Dirac delta function
I am trying to understand how to identify, or at least derive some properties of the inverse Laplace transform of the Dirac delta function, i.e. a function $\eta$ s.t. $$ \int_0^\infty dt\, \eta(\omega,t) e^{-\omega’ t} = \delta(\omega-\omega’) \,. $$
The answer might not be definitive and depends on the context. Even if you intend to carry real computations ultimately, the Laplace transform is assumed to be defined with respect to a complex variable or, at the very least, it may be analytically continued to the complex plane for convergence purposes. In the present case, it allows us to "cheat" a little bit, because the Dirac delta function is simply represented by the inverse function in the complex plane thanks to Cauchy's formula. Concretely, one has $\delta(z) \equiv \frac{1}{2\pi iz}$ , whose inverse transform is easily found as the Heaviside function. In consequence, one can conclude that $\eta(\omega,t) = \frac{e^{\omega t}}{2\pi i}u(t)$ . Alternatively and a bit more naively, you could also apply Mellin's inverse formula directly, i.e. $$ \eta(\omega,t) = \mathscr{L}^{-1}[\delta(\omega'-\omega)](t) = \int_{\omega-i\Bbb{R}}^{\omega+i\Bbb{R}} \frac{\mathrm{d}\omega'}{2\pi i}\, \delta(\omega'-\omega)e^{\omega't} = \frac{e^{\om
|functional-equations|laplace-transform|inverse-problems|
0
Covering a Polygon with circles with a constant radius of $r$
I am currently engaged in optimizing the placement of sensors in an agricultural field, inspired by the problem outlined in the paper ' Approximation Algorithms for Robot Tours in Random Fields with Guaranteed Estimation Accuracy '. The specific challenge I'm facing involves efficiently covering a convex polygon with circles of a constant radius $r$ (where the circle's area is always smaller than the polygon's). In my pursuit of a solution, I've encountered the algorithm proposed by Shai Gul in the paper ' Efficient covering of convex domains by congruent discs ,' but its complexity poses implementation challenges. Considering the NP-Hard nature of the problem, I am actively seeking alternative algorithms with worse approximations that may be more manageable for implementation. Thank you in advance for any guidance or assistance.
Here is how I would attack this problem. First, ask yourself if the circles are to be overlapping, so as to leave no part of the polygon area uncovered, or just tangent to each other with uncovered deltoid spaces in between. Then, find the centroid of the polygon and place a circle of the specified radius there. If the circles are to overlap, inscribe a regular hexagon in the circle; otherwise, circumscribe the circle with the hexagon. Now, build out a honeycomb with concentric rings of hexagons, so as to fill in the polygon. Finally, fill in the circles (inscribed or circumscribed) centered on each hexagon.
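A minimal sketch of the overlapping (full-coverage) variant: centers on a triangular lattice with spacing $\sqrt{3}\,r$, so each circle of radius $r$ circumscribes one hexagonal cell. The bounding box, radius, and the crude grid-based coverage check are my own choices; clipping to the actual polygon is left out.

```python
import math

def hex_centers(xmin, xmax, ymin, ymax, r):
    """Circle centers on a triangular lattice covering the bounding box."""
    dx = math.sqrt(3) * r          # horizontal spacing between centers
    dy = 1.5 * r                   # vertical spacing between rows
    centers, row = [], 0
    y = ymin
    while y <= ymax + dy:
        off = dx / 2 if row % 2 else 0.0    # every other row is offset
        x = xmin - dx + off
        while x <= xmax + dx:
            centers.append((x, y))
            x += dx
        y += dy
        row += 1
    return centers

cs = hex_centers(0, 10, 0, 10, 1.0)
# crude coverage check: every sample point is within r of some center
ok = all(min(math.dist(p, c) for c in cs) <= 1.0 + 1e-9
         for p in [(i / 5, j / 5) for i in range(51) for j in range(51)])
print(len(cs), ok)
```

For a real instance one would keep only centers whose circles intersect the polygon, which is where the savings over a plain square grid appear.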
|geometry|optimization|computational-geometry|
0
Limit of convergence in probability
I was learning about convergence in probability. I'm unsure which of the epsilon–delta definitions below captures convergence in probability correctly: $$ \forall \epsilon,\delta>0 \quad \exists N \qquad S.T \qquad \forall n>N \quad Pr(|X_n -X|>\epsilon)<\delta $$ $$\forall \epsilon>0\quad\exists N \qquad S.T \qquad \forall n>N \quad Pr(|X_n -X|>\epsilon)=0 $$
The usual definition of convergence in probability is $$ X_n \to_{p} X \iff \forall \epsilon > 0, \lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0 $$ The limit is the "basic real analysis" sequence definition, since $P(|X_n - X| > \epsilon)$ is just a sequence over $n \in \mathbb{N}$ . So using that definition, we have $$ \forall \epsilon > 0, \lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0 \iff \forall \epsilon > 0, \forall \delta > 0, \exists N \in \mathbb{N} \; \text{s.t.} \; \forall n \geq N, P(|X_n - X| > \epsilon) < \delta $$ This matches basically with the first one you wrote.
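A Monte Carlo illustration (not a proof; the model $X_n = X + Z/n$ with $Z$ standard normal is my own example) of why $P(|X_n - X| > \epsilon)$ tends to $0$ without ever needing to be exactly $0$:

```python
import random

random.seed(0)
eps = 0.1
draws = [random.gauss(0, 1) for _ in range(20_000)]

def prob_exceeds(n):
    # estimate P(|X_n - X| > eps) = P(|Z| / n > eps)
    return sum(abs(z) / n > eps for z in draws) / len(draws)

probs = [prob_exceeds(n) for n in (1, 5, 25, 125)]
print(probs)   # decreasing toward 0
```

For every finite $n$ the probability is positive (here, roughly $0.92$, $0.62$, $0.01$, then essentially $0$), which is why the second proposed definition is too strong.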
|probability|limits|epsilon-delta|probability-limit-theorems|convergence-probability|
1
Finding solution given a non-invertible matrix
Given the following state equation of a mechanical structure: $K(x) u = F(x)$ where $K$ is the stiffness matrix of size $n \times n$ , $x$ is the design variable, $u$ is the state variable (displacement vector) , and $F$ is the force vector. If $K$ was invertible then $u = K^{-1} ~F$ But, if $K$ is not invertible, then we can find $u$ by solving a system of $n$ equations, right?
Since the equation is linear, the general solution $u = u_h + u_p$ will be made up of the general solution to the associated homogeneous problem, namely $u_h \in \ker K$ , i.e. $Ku_h = 0$ , and a particular solution of the inhomogeneous equation, i.e. $Ku_p = F$ . So, determining the kernel of $K$ (which is non-trivial, since $K$ is non-invertible) will constitute your first task, before finding a particular solution $u_p$ by any means. Finally, it has to be highlighted that no such $u_p$ exists if $F \not\in \mathrm{im\,} K$ .
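A tiny concrete sketch (the example matrix is mine, not from the question): the singular stiffness matrix $K = \begin{pmatrix}1&1\\1&1\end{pmatrix}$ has kernel spanned by $u_h = (1,-1)$.

```python
K = [[1.0, 1.0], [1.0, 1.0]]

def apply(K, u):
    return [sum(K[i][j] * u[j] for j in range(2)) for i in range(2)]

F = [2.0, 2.0]                 # F lies in im K, so solutions exist
u_p = [1.0, 1.0]               # one particular solution: K u_p = F
u_h = [1.0, -1.0]              # homogeneous solution:    K u_h = 0
t = 3.5
u = [u_p[i] + t * u_h[i] for i in range(2)]   # general solution u_p + t*u_h

print(apply(K, u_h))           # [0.0, 0.0]
print(apply(K, u))             # [2.0, 2.0] for every t: the solution is non-unique
# For F = [1.0, 2.0] (unequal components), F is not in im K: no solution exists.
```

This shows both failure modes of the non-invertible case at once: infinitely many solutions when $F\in\mathrm{im}\,K$, and none otherwise.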
|systems-of-equations|
1
Taking-stones game beginning with 1 to 4 stones in a 2-player game. If we started with 18 stones, is there a winning strategy for the first player?
Amy and Beck are playing 'taking the stones game'. There are 18 stones on the table, and the two people take stones in turns. The first move of the starting player can take 1 to 4 stones. For the rest of the game, each player takes a number of stones from 1 up to twice the number of stones his/her opponent just took. The player taking the last stone wins. Amy comes first. If Amy has a winning strategy, how many stones should Amy take? (at the start) If Amy does not have a winning strategy, write '0'. I tried like copying same moves for Amy and Beck etc. and did a few manual computations to see that it's not possible. An answer key with no solution says that there is no winning strategy as it gives "0" as the answer. Anyone know how to prove it? Is there an invariant or something which I am missing? I have some background in Math Olympiad and a Math related degree so feel free to use any techniques.
Given that you have Math Olympiad background, I assume that you're familiar with setting up winning positions. You might think that doesn't work here because we don't have exact / clean-cut winning positions on $n$ stones, since gameplay also depends on how many stones the previous player took. That is fine, we simply make the winning positions conditional on how many stones the current player can take , namely $(n, k)$ , where $n$ is the number of stones left, and $k$ is how many they can remove. The question asks if $(18, 4)$ is a winning position, and how to play it. We proceed via the typical recursive Winning Positions analysis. The solution is intentionally left incomplete. I've gotten you started almost till the very end. Can you complete it to 18 stones? First, some initial observations Since the game is symmetric in moves, let $X$ denote the player to move, and $Y$ the other player. Note that each time, X can always take 1 or 2 stones. Wishful thinking that this allows us to f
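The recursive analysis described above is short enough to run by machine (a sketch of the standard approach; the state encoding is mine):

```python
from functools import lru_cache

# State (n, k): n stones left, the player to move may take 1..k stones.
@lru_cache(maxsize=None)
def wins(n, k):
    # the mover wins if some take t finishes the game or leaves a losing state
    return any(t == n or not wins(n - t, 2 * t)
               for t in range(1, min(k, n) + 1))

print(wins(18, 4))   # False: no first move of 1..4 wins, so the answer is "0"
print(wins(18, 5))   # True: if a first take of 5 were allowed, it would win
```

This matches the Fibonacci-nim pattern: $18 = 13 + 5$ in Zeckendorf form, and the winning move would be to take the smallest part, $5$, which the rules here forbid.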
|combinatorics|contest-math|recreational-mathematics|game-theory|combinatorial-game-theory|
0
Proving Legendre's relation for elliptic integrals
Legendre's relation can be stated as follows: $$ K(k) E(k^*)+ E(k) K(k^*) - K(k) K(k^*) = \frac{\pi}{2} $$ where $k^* = \sqrt{1 - k^2}$ is the complementary modulus, and $K$ and $E$ are respectively the complete elliptic integrals of the first and second kind $$ K(k) = \int_0^{\pi/2} \frac{\mathrm{d}\theta}{\sqrt{1-k^2 \sin^2\theta}}\ ,\quad E(k) = \int_0^{\pi/2}\sqrt {1-k^2 \sin^2\theta}\ \mathrm{d}\theta\,. $$ Now sorry for not attempting to solve this problem myself, but I have tried both myself and finding sources online. Alas, it seems this relation has been somewhat forgotten. I did however find an article claiming to show the relation, but I do not have access to check its validity. Can someone provide sources for a proof of this relation, or outline a proof? Hopefully not using hypergeometric functions, but a proof more in the spirit of Legendre. Any help would be greatly appreciated.
The math is simpler if we use the elliptic parameter $m = k^2$ . This gives the simple relation $(k^*)^2 = m^* = 1-m$ . $\displaystyle \frac{dE(m)}{dm} = \frac{dE(m)}{dk} ÷ \frac{dm}{dk} = \frac{E(m)\,-\,K(m)}{2m}$ $\displaystyle\frac{dK(m)}{dm} = \frac{dK(m)}{dk} ÷ \frac{dm}{dk} = \frac{E(m)\,-\,(m^*)\,K(m)}{(2m)\,(m^*)}$ $\displaystyle → \frac{d(E(m)-K(m))}{dm} = \frac{-E(m)}{2\,m^*}$ The Legendre relation in the elliptic parameter is: $\displaystyle K(m)\,E(m^*) + K(m^*)×(E(m)-K(m)) = \frac{\pi}{2}$ $\displaystyle \frac{d}{dm} (K(m)\,E(m^*)) \\= K(m)×-(\frac{E(m^*)\,-\,K(m^*)}{2(m^*)}) + \left(\frac{E(m)\,-\,(m^*)\,K(m)}{(2m)\,(m^*)}\right)×E(m^*) \\= \frac{m\,K(m)\,K(m^*) \;+\; E(m)\,E(m^*) \;-\; K(m)\,E(m^*)}{(2m)(m^*)} $ $\displaystyle \frac{d}{dm} (K(m^*)×(E(m)-K(m))) \\= K(m^*)×\left(\frac{-E(m)}{2\,m^*}\right) + \left(-\frac{E(m^*) - (m)\,K(m^*)}{(2m)\,(m^*)}\right) × (E(m)-K(m)) \\= - \frac{m\,K(m)\,K(m^*) \;+\; E(m)\,E(m^*) \;-\; K(m)\,E(m^*)}{(2m)(m^*)} $ The sum of the two derivatives is zero, so the left-hand side is constant in $m$ ; letting $m\to 0$ , where $K(0)=E(0)=\frac{\pi}{2}$ , $E(m^*)\to E(1)=1$ , and $K(m^*)\,(E(m)-K(m))\to 0$ , gives the constant value $\frac{\pi}{2}$ , which is Legendre's relation.
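The constancy can be checked numerically. This is a pure-Python sketch (Simpson's rule with a grid size of my own choosing) using the same parameter convention $m = k^2$ as above:

```python
import math

def simpson(f, a, b, n=2000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def K(m):
    return simpson(lambda t: 1 / math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)

def E(m):
    return simpson(lambda t: math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)

m = 0.3
lhs = K(m) * E(1 - m) + K(1 - m) * (E(m) - K(m))
print(lhs, math.pi / 2)                # the two agree to high precision
```

Repeating with other values of $m \in (0,1)$ gives the same $\pi/2$, consistent with the zero-derivative argument.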
|integration|special-functions|elliptic-integrals|
0
Estimating error between Taylor series and $\tanh$
I want to show that $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ for $\mid x \mid \leq \frac{1}{2}$ , where $P_4$ is the Taylor polynomial of degree 4: $P_4(x)=x-\frac{1}{3}x^3$ . I've established a bound on $\tanh(x)$ , and that $\tanh(x)-P_4(x)$ is increasing on $(0, \infty)$ . I now need to show that $\tanh \frac{1}{2} - P_4(\frac{1}{2}) \leq 0.005$ using a given upper bound for $e$ , and use this to prove the result $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ for $\mid x \mid \leq \frac{1}{2}$ . I'm unsure how I can use this estimate for $e$ , as I don't have a lower bound for $e$ , so I can't replace $e$ with fractions that are bigger and smaller. I'm also not sure how I can use this to show that $\mid \tanh(x) - P_4(x) \mid \leq 0.005$ when $-\frac{1}{2} \leq x \leq \frac{1}{2}$ .
From the Taylor series of $\tanh(x)$ for $|x| < \frac{\pi}{2}$ , $$ \tanh(x) = x-\frac{x^3}{3}+\frac{2x^5}{15}-\cdots $$ If $x>0$ , $$ \tanh(x) < x-\frac{x^3}{3}+\frac{2x^5}{15} \implies \tanh(x)-P_4(x) < \frac{2x^5}{15}. \\ \therefore \tanh\left(\frac{1}{2}\right)-P_4\left(\frac{1}{2}\right) < \frac{1}{15\cdot 16}. $$ If $x<0$ , $$ \tanh(x)>x-\frac{x^3}{3}+\frac{2x^5}{15} \\ \implies \tanh(x)-P_4(x)>\frac{2x^5}{15}. \\ \therefore \tanh\left(-\frac{1}{2}\right)-P_4\left(-\frac{1}{2}\right)>-\frac{1}{15\cdot 16}. $$ $\tanh(x)-P_4(x)$ is increasing in $\mathbb{R}$ , thus $$ -\frac{1}{15\cdot 16} < \tanh(x)-P_4(x) < \frac{1}{15\cdot 16} \quad \text{for } -\frac{1}{2}\le x\le \frac{1}{2}. $$ Thus, $|\tanh(x)-P_4(x)| \le \frac{1}{240} < 0.005$ , whenever $|x| \le \frac{1}{2}$ .
|real-analysis|
0
Heaviside under Geometric Brownian Motion
I'm new to using Geometric Brownian Motion, so I'm not sure if what I've done is correct. Given the Geometric Brownian Motion $dS_t = \mu S_tdt + \sigma S_t dW_t$ , $H$ a Heaviside, and $p_r, r_k$ constants, then $$ \begin{align} {\rm I\kern-.3em E} H\Big(\frac{p_r}{S_m}-r_k\Big)\frac{S_T}{S_m}=& {\rm I\kern-.3em E} H\Big(\frac{p_r}{S_m}-r_k\Big){\rm I\kern-.3em E}\frac{S_T}{S_m}\\ =& {\rm I\kern-.3em E} H\Big(\frac{p_r}{S_m}-r_k\Big)e^{(\mu-\sigma^2/2)(T-m)+\sigma^2/2(T+m)}\\ =& {\rm I\kern-.3em E} H\Big(\frac{p_r}{S_m}-r_k\Big)e^{\mu T-m(\mu - \sigma^2)}\\ =& e^{\mu T-m(\mu - \sigma^2)}\int dx \frac{1}{\sigma\sqrt{2\pi m}}e^{-\frac{(x-\mu m)^2}{2m\sigma^2}}H\Big(p_rS_0^{-1}e^{-x}-r_k\Big)\\ =& e^{\mu T-m(\mu - \sigma^2)}\int dx \frac{1}{\sigma\sqrt{2\pi m}}e^{-\frac{(x-\mu m)^2}{2m\sigma^2}}H\Big(\frac{p_r}{S_0r_k}-e^{x}\Big)\\ =& e^{\mu T-m(\mu - \sigma^2)}\int dx \frac{1}{\sigma\sqrt{2\pi m}}e^{-\frac{(x-\mu m)^2}{2m\sigma^2}}H\Big(\log\frac{p_r}{S_0r_k}-x\Big)\\ =& e^{\mu T-m(\mu - \sig
I guess you are writing $H(x)=\mathbf{1}_{[0,\infty)}(x)$ . We have $$\begin{aligned}H(p_r/S_m-r_k)&=\mathbf{1}_{[0,\infty)}(p_r/S_m-r_k)\\ &=\mathbf{1}_{\{S_m\leq p_r/r_k\}} \end{aligned}$$ We get $$\begin{aligned}E[\mathbf{1}_{\{S_m\leq p_r/r_k\}}(S_T/S_m)|S_m]&=\mathbf{1}_{\{S_m\leq p_r/r_k\}}E[e^{(\mu-\sigma^2/2)(T-m)+\sigma(W_T-W_m)}|S_m]\\ &=\mathbf{1}_{\{S_m\leq p_r/r_k\}}e^{(\mu-\sigma^2/2)(T-m)}e^{\sigma^2(T-m)/2}\\ &=\mathbf{1}_{\{S_m\leq p_r/r_k\}}e^{\mu(T-m)} \end{aligned}$$ And $$\begin{aligned} P(S_m\leq p_r/r_k)&=P\bigg(W_m\leq \frac{\ln(p_r/(r_kS_0))-(\mu-\sigma^2/2)m}{\sigma}\bigg)\\ &=\Phi\bigg(\frac{\ln(p_r/(r_kS_0))-(\mu-\sigma^2/2)m}{\sigma\sqrt{m}}\bigg) \end{aligned}$$ So we conclude: $$E[\mathbf{1}_{\{S_m\leq p_r/r_k\}}(S_T/S_m)]=e^{\mu(T-m)}\Phi\bigg(\frac{\ln(p_r/(r_kS_0))-(\mu-\sigma^2/2)m}{\sigma\sqrt{m}}\bigg)$$
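The closed form can be sanity-checked with a small Monte Carlo simulation; all parameter values below ($\mu,\sigma,m,T,S_0,p_r,r_k$) are illustrative choices, not taken from the question:

```python
import math, random

mu, sigma, m, T, S0, pr, rk = 0.05, 0.2, 1.0, 2.0, 1.0, 1.2, 1.0

# closed form: e^{mu(T-m)} * Phi((ln(pr/(rk S0)) - (mu - sigma^2/2) m) / (sigma sqrt(m)))
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
d = (math.log(pr / (rk * S0)) - (mu - sigma**2 / 2) * m) / (sigma * math.sqrt(m))
closed = math.exp(mu * (T - m)) * Phi(d)

# Monte Carlo: S_m uses W_m; the ratio S_T/S_m uses the independent increment W_T - W_m
random.seed(0)
N, acc = 200_000, 0.0
for _ in range(N):
    Wm = random.gauss(0, math.sqrt(m))
    dW = random.gauss(0, math.sqrt(T - m))
    Sm = S0 * math.exp((mu - sigma**2 / 2) * m + sigma * Wm)
    ratio = math.exp((mu - sigma**2 / 2) * (T - m) + sigma * dW)
    if Sm <= pr / rk:          # the Heaviside indicator H(pr/Sm - rk)
        acc += ratio
mc = acc / N
print(closed, mc)              # the two agree to Monte Carlo accuracy
```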
|probability|stochastic-processes|expected-value|brownian-motion|finance|
1
Sum of the rows of Pascal's Triangle.
I've discovered that the sum of each row in Pascal's triangle is $2^n$, where $n$ is the row number. I'm interested in why this is so. Rewriting the triangle in terms of binomial coefficients gives $\binom{0}{0}$ in the first row, $\binom{1}{0}$ and $\binom{1}{1}$ in the second, and so on and so forth. However, I still cannot grasp why summing, say, $\binom{4}{0}+\binom{4}{1}+\binom{4}{2}+\binom{4}{3}+\binom{4}{4}=2^4$.
Here's a simple combinatorial proof that is mentioned above but not actually spelled out, whose full exposition — if you are asking this question in the first place — may not be obvious. (But the other detailed answers are much more complicated than is necessary, I think.) Suppose you flip a coin $n$ times. You want to work out the number of ways that can occur. The most intuitive answer is $2^n$ : a sequence of length $n$ where each position has $2$ possible outcomes and repetition is allowed. But an alternative method would be to realize that the number of ways to choose $0, 1, 2, \dots, n$ of the trials to be heads must cover all possible outcomes, and thus \begin{align*} 2^n = \sum_{j = 0}^n \binom{n}{j}. \end{align*} If you know the connection between Pascal's triangle and the binomial coefficient, the proof is complete. If you don't, then simply consider this bonus proof. Each term in the triangle is the sum of the two above it, and $\binom{n-1}{k-1} + \binom{n-1}{k} = \binom{n}{k}$ is exactly that rule, so row $n$ of the triangle consists of the binomial coefficients $\binom{n}{k}$.
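The identity is also easy to confirm mechanically, e.g. for the $n = 4$ case asked about:

```python
from math import comb

# row sums of Pascal's triangle are powers of two
for n in range(10):
    assert sum(comb(n, j) for j in range(n + 1)) == 2 ** n

print(sum(comb(4, j) for j in range(5)))  # 16 = 2^4
```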
|binomial-coefficients|
0
Checking the multiplicative system definition for a class of maps in a triangulated category
I am reading the notes "Derived categories, resolutions, and Brown representability" by Henning Krause ( https://arxiv.org/pdf/math/0511047.pdf ) about derived and triangulated categories, and I am having trouble understanding the proof of condition MS3 in this lemma of section 3: 1. How does one show that $\alpha -\beta$ factors through $\phi$ ? 2. Why does $\tau$ belong to $S$ , and why is $\tau\circ \alpha=\tau\circ\beta$ ? Condition MS3: In section 2.5 it is stated what $\Sigma$ is, so I guess $\Sigma^n$ means a composition of $n$ copies of $\Sigma$ .
Sorry, I don't have time currently to tikz/tex the diagrams rigorously, hope that is ok. Regarding 1), consider the triangle $$0\to Y \xrightarrow{\mathrm{id}} Y \to \Sigma 0$$ Then you can map into it using $0$ and $\alpha-\beta$ such that the left square commutes, and then you can fill that in to get the factoring over $\psi$ . Regarding 2), we know that $X''$ is the cone of $\sigma\in S$ , so $0=H(\Sigma^n X'')$ (this is since $H(\Sigma^n\sigma)$ is an iso for all $n$ , and so, since $H$ is cohomological, the third step in the long exact sequence has to be zero). This argument works the other way as well. In particular, as $X''$ is the cocone of $\tau$ and has vanishing $H\left(\Sigma^n\right)$ , we can conclude that $H(\Sigma^n \tau)$ is an isomorphism for all $n$ and so $\tau\in S$ . Furthermore, as we have a triangle $$X \xrightarrow{\alpha-\beta}Y\xrightarrow{\tau}X'' \to \Sigma X$$ we get $$0=(\alpha-\beta)\circ \tau=\alpha \circ \tau -\beta \circ\tau$$ and so $$\alpha \circ \tau=\beta\circ\tau.$$
|category-theory|homological-algebra|abelian-categories|derived-categories|triangulated-categories|
1
Compute $\int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx$
How can I evaluate the following integral? $$I=\int_0^{\pi/2}\frac{\cos{x}}{2-\sin{2x}}dx$$ I tried it with Wolfram Alpha, it gave me a numerical solution: $0.785398$. Although I immediately know that it is equal to $\pi /4$, I fail to obtain the answer with pen and paper. I tried to use substitution $u=\tan{x}$, but I failed because the upper limit of the integral is $\pi/2$ and $\tan{\pi/2}$ is undefined. So how are we going to evaluate this integral? Thanks.
A bit of an alternate solution, together with some motivation. The first step is the same as in the other solutions: upon substituting $y = \frac{\pi}{2} - x$ we also get that $$I = \int_{0}^{\frac{\pi}{2}}\frac{\sin y}{2 - \sin 2y} dy,$$ and so adding $I$ again we also obtain $$2I = \int_0^{\frac{\pi}{2}} \frac{\cos x + \sin x}{2 - \sin 2x} dx.$$ Nothing new here compared to the other solutions. Now observe that the function $$f(x) = \frac{\cos x + \sin x}{2 - \sin 2x}$$ is symmetric for $0 \leq x \leq \frac{\pi}{2}$ about $x = \pi/4$ . This makes it intuitive to perform the substitution $z = x - \frac{\pi}{4}$ . This way we can 'make' $f$ into an even function, which almost always makes integrations easier. Since $dz = dx$ we find $$2I = \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{\cos\left(z + \frac{\pi}{4}\right) + \sin \left( z + \frac{\pi}{4} \right)}{2 - \sin 2\left(z + \frac{\pi}{4} \right)} dz.$$ Here, using the sum formulas for sine and cosine one can easily verify that $$\cos\left(z + \frac{\pi}{4}\right) + \sin\left(z + \frac{\pi}{4}\right) = \sqrt{2}\cos z, \qquad \sin 2\left(z + \frac{\pi}{4}\right) = \cos 2z,$$ so $$2I = \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{\sqrt{2}\cos z}{2 - \cos 2z}\, dz = \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{\sqrt{2}\cos z}{1 + 2\sin^2 z}\, dz.$$ The substitution $u = \sqrt{2}\sin z$ (so $du = \sqrt{2}\cos z\, dz$) now gives $$2I = \int_{-1}^{1}\frac{du}{1+u^2} = \frac{\pi}{2}, \qquad\text{hence}\qquad I = \frac{\pi}{4}.$$
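A numerical cross-check of the final value (a sketch using composite Simpson's rule) confirms $I = \pi/4 \approx 0.785398$:

```python
import math

def integrand(x):
    return math.cos(x) / (2 - math.sin(2 * x))

def simpson(f, a, b, n=2000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

I = simpson(integrand, 0.0, math.pi / 2)
print(I, math.pi / 4)                  # both ≈ 0.785398
```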
|calculus|integration|trigonometry|pi|
0
Examples of specific points $g\in G:=\text{SL}(2,\mathbb R)$ such that the curve $u_tg \Gamma, \Gamma:=\text{SL}(2,\mathbb Z)$ is dense in $G/\Gamma$?
Let $G:=\text{SL}(2,\mathbb R)$ , $\Gamma:=\text{SL}(2,\mathbb Z)$ and $u_t=\begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}, t\ge 0$ Although this paragraph is not really needed to answer this question, I want to point out that ergodic theory tells us that for almost every point $g\Gamma$ in $G/\Gamma$ (with respect to Haar measure), we have $\{u_tg\Gamma\}_{t\ge 0}$ is dense in $G/\Gamma$ (Howe-Moore theorem tells us $u_t$ acts ergodically on $G/\Gamma$ ) But how to produce any specific $g$ 's such that $\{u_tg\Gamma\}_{t\ge 0}$ is dense in $G/\Gamma$ ? I guess there are some Diophantine approximation theorem that I am not familiar with. Perhaps we need $g$ 's entries to be linearly independent over rationals.
If you want a specific matrix $g$ , you can take $$ g=\left[ \begin{array}{cc} 1&0\\ \sqrt{2}&1\end{array}\right]. $$ Here is the reason. Let $U_\infty$ denote the subgroup of strictly upper triangular matrices in $G$ . Then for $g\in G$ , $$ U_\infty g = g U, \quad U=g^{-1}U_\infty g. $$ The unique fixed point of $U$ in the ideal boundary of the hyperbolic plane is $g^{-1}(\infty)=\frac{a}{c}$ , where $$ g^{-1}=\left[ \begin{array}{cc} a&b\\ c&d\end{array}\right]. $$ Thus, $U\cap \Gamma$ is a lattice in $U$ if and only if $g^{-1}(\infty)\in {\mathbb Q}\cup\{\infty\}$ . In particular, for my choice of $g$ as above, $g^{-1}(\infty)= -1/\sqrt{2}\notin {\mathbb Q}$ , hence, $U\cap \Gamma$ is not a lattice in $U$ . Ratner proves (see here for a free copy of her paper) in Ratner, Marina, Raghunathan's conjectures for SL(2,R), Isr. J. Math. 80, No. 1-2, 1-31 (1992). ZBL0785.22013, that for a nontrivial unipotent subgroup $U < G$ , a coset $U\Gamma$ is either dense in $G/\Gamma$ or $U\cap \Gamma$ is a lattice in $U$ (and the coset is closed). Since the latter fails for our choice of $g$ , the orbit is dense.
|general-topology|number-theory|dynamical-systems|ergodic-theory|diophantine-approximation|
0
If the minimum of independent random variables is exponential, is each one exponential?
We know that, given $X_1,...,X_n$ independent random variables which are distributed exponentially, their minimum is also distributed exponentially. Is the converse true? Given $X = \min\{X_1, ..., X_n\}$ distributed exponentially, and $X_1, ..., X_n$ independent, is it true that $X_i$ is distributed exponentially for all $i = 1,...,n$ ? Edit: I don't assume the $X_i$ 's are identically distributed. I believe it is true that if $X \sim \exp(\lambda)$ , and if $p_i \equiv \mathbb{P}(X_i = X)$ then $X_i \sim \exp(\lambda p_i)$ . However, I couldn't find a reference and didn't manage to prove this by myself. Further edit: The result is true if you assume $\{X = X_i\}$ is an event independent of $X$ . This is proven in the following manner: Construct a Poisson process $N_t$ whose waiting times are IID copies of $X$ . Split this Poisson process into $n$ counting processes - the process $N_t^i$ will count the number of occurrences where $X = X_i$ . Using the fact that $\{X = X_i\}$ is independent of $X$ , these are independent Poisson processes, and the thinning property gives $N_t^i$ rate $\lambda p_i$ , i.e. $X_i \sim \exp(\lambda p_i)$ .
Example: Let $\lambda_1:[0,\infty)\to(0,1)$ be a continuous function, and then define $\lambda_2(t):=1-\lambda_1(t)$ . Also define $\Lambda_k(t):=\int_0^t\lambda_k(s)\,ds$ for $t\ge 0$ and $k=1,2$ . Then $F_k(t) :=1-\exp(-\Lambda_k(t))$ , $t\ge 0$ , are distribution functions of non-negative random variables, say $X_1$ and $X_2$ . Their minimum $X:=\min(X_1,X_2)$ has cdf $1-(1-F_1(t))\cdot(1-F_2(t))=1-e^{-t}$ , $t\ge 0$ . Thus $X$ is exponentially distributed, but the $X_k$ need not be.
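The construction is easy to check with one concrete choice, say $\lambda_1(t) = \tfrac12 + \tfrac25 \sin t$ (any continuous $\lambda_1 : [0,\infty)\to(0,1)$ would do):

```python
import math

def Lam1(t):                      # integral of lambda_1(s) = 1/2 + (2/5) sin s
    return 0.5 * t + 0.4 * (1 - math.cos(t))

def Lam2(t):                      # lambda_2 = 1 - lambda_1, so Lam1 + Lam2 = t
    return t - Lam1(t)

F1 = lambda t: 1 - math.exp(-Lam1(t))
F2 = lambda t: 1 - math.exp(-Lam2(t))

# the minimum has cdf 1 - (1-F1)(1-F2) = 1 - e^{-t}, i.e. it is Exp(1) ...
for t in (0.0, 0.3, 1.0, 2.5, 7.0):
    assert abs((1 - (1 - F1(t)) * (1 - F2(t))) - (1 - math.exp(-t))) < 1e-12

# ... while F1 itself is not exponential, since Lam1 is not linear
assert abs(Lam1(2.0) - 2 * Lam1(1.0)) > 0.1
print("minimum is Exp(1); the marginals are not exponential")
```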
|probability|probability-theory|random-variables|exponential-distribution|
1
$\int_{\mathbb{R}^2} e^{-(x^2+y^2)}=[\int_{\mathbb{R}} e^{-x^2}]^2,$ provided the first of these integrals exists. Munkres Analysis on Manifolds
I am reading "Analysis on Manifolds" by James R. Munkres. (a) Show that $$\int_{\mathbb{R}^2} e^{-(x^2+y^2)}=[\int_{\mathbb{R}} e^{-x^2}]^2,$$ provided the first of these integrals exists. I don't understand what answer the author expects. I showed $$\int_{\mathbb{R}^2} e^{-(x^2+y^2)}=\left[\int_{\mathbb{R}} e^{-x^2}\right]^2$$ holds as follows. Theorem 13.6. Let $S$ be a bounded set in $\mathbb{R}^n$ ; let $f:S\to\mathbb{R}$ be a bounded continuous function; let $A=\operatorname{Int}S$ . If $f$ is integrable over $S$ , then $f$ is integrable over $A$ , and $\int_S f=\int_A f$ . Theorem 15.6. Let $A$ be open in $\mathbb{R}^n$ ; let $f:A\to\mathbb{R}$ be continuous. Let $U_1\subset U_2\subset\cdots$ be a sequence of open sets whose union is $A$ . Then $\int_A f$ exists if and only if the sequence $\int_{U_n} |f|$ exists and is bounded; in this case $$\int_A f=\lim_{N\to\infty}\int_{U_N} f.$$ My solution: Let $U_n:=(-n,n)\times (-n,n)$ . Let $Q_n:=[-n,n]\times [-n,n]$ . Then, $U_n=\operatorname{Int} Q_n$ .
We don't need to evaluate the value of $\int_{(-n,n)}e^{-x^2}$ in the following answer. So, the following answer is easier. If $\int_{\mathbb{R}^2} e^{-(x^2+y^2)}$ exists, then by Theorem 15.6, the sequence $\int_{U_n} |e^{-(x^2+y^2)}|=\int_{U_n} e^{-(x^2+y^2)}$ exists (this is obvious directly) and is bounded and $\int_{\mathbb{R}^2} e^{-(x^2+y^2)}=\lim_{n\to\infty}\int_{U_n} e^{-(x^2+y^2)}$ . By Fubini's theorem and Theorem 13.6, $$\int_{U_n}e^{-(x^2+y^2)}=\int_{Q_n}e^{-(x^2+y^2)}=\int_{x\in [-n,n]}e^{-x^2}\cdot \int_{y\in [-n,n]}e^{-y^2}=\left[\int_{x\in [-n,n]}e^{-x^2}\right]^2=\left[\int_{x\in (-n,n)}e^{-x^2}\right]^2.$$ So $$\lim_{n\to\infty}\int_{x\in (-n,n)}|e^{-x^2}|=\lim_{n\to\infty}\int_{x\in (-n,n)}e^{-x^2}=\lim_{n\to\infty}\sqrt{\int_{U_n} e^{-(x^2+y^2)}}=\sqrt{\int_{\mathbb{R}^2} e^{-(x^2+y^2)}}.$$ So $\int_{x\in (-n,n)}|e^{-x^2}|$ exists (this is obvious) and is bounded and $\int_{\mathbb{R}}e^{-x^2}=\lim_{n\to\infty}\int_{x\in (-n,n)}e^{-x^2}=\sqrt{\int_{\mathbb{R}^2} e^{-(x^2+y^2)}}$ by Theorem 15.6. Squaring both sides gives the desired identity.
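Numerically, the squares of the truncated one-dimensional integrals do converge to $\int_{\mathbb{R}^2} e^{-(x^2+y^2)} = \pi$, exactly as the limiting argument predicts (a quick Simpson's-rule sketch):

```python
import math

def simpson(f, a, b, n=2000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

g = lambda x: math.exp(-x * x)
for n in (1, 2, 3, 6):
    print(n, simpson(g, -n, n) ** 2)   # approaches pi ≈ 3.14159
```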
|change-of-variable|gaussian-integral|
0
First Proth number with a million digits
I am examining the Proth numbers $n = h\cdot 2^k+1$ , where $k$ is a positive integer and $h < 2^k$ an odd integer. Now I want to find the first Proth number with a million digits. But I have no idea how I could solve this. So I know that we get the number of digits of $n$ by $\lfloor \log_{10}(n)\rfloor + 1$ . So I write $$ \log_{10}(n) = \log_{10}(h\cdot 2^k+1)\approx \log_{10}(h) + k\cdot\log_{10}(2). $$ But how would I go on from here?
The binary expansion of $n=h\cdot 2^k+1$ looks like this: $$\underbrace{1\ldots 1}_{\le k \text{ digits}}\;\;\;\;\underbrace{0\ldots0}_{k-1\;0's}1$$ So you take the binary expansion of $10^{999,999}$ , and look for the least significant zero digit that is no more than half-way from the start; then replace this digit by $1$ , replace all less-significant digits with $0$ , and add $1$ .
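At the million-digit scale only bit manipulation like the above is practical, but the construction is easy to sanity-check at small scale with a direct scan (a sketch; `first_proth_with_digits` is a made-up helper name):

```python
def is_proth(n):
    """n = h * 2^k + 1 with h odd and h < 2^k."""
    m = n - 1
    if m <= 0:
        return False
    k = (m & -m).bit_length() - 1     # 2-adic valuation of m
    h = m >> k                        # odd part of m
    return h < 2 ** k

def first_proth_with_digits(d):
    """Smallest Proth number with at least d digits (linear scan, small d only)."""
    n = 10 ** (d - 1)
    while not is_proth(n):
        n += 1
    return n

print([first_proth_with_digits(d) for d in range(1, 7)])
# [3, 13, 113, 1025, 10113, 100353]
```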
|algebra-precalculus|
0
Sum equals the product
A while ago, just playing with some numbers I noticed that $1+2+3=1\cdot2\cdot3$, so I started thinking about the non-zero integer solutions of the equation $$\prod_{i=1}^na_i=\sum_{i=1}^na_i$$ For example, for $n=2$, the only solution is the pair $(2,2)$, for $n=3$ the only solutions are $(1,2,3)$ and $(-1,-2,-3)$ and that's what I have by now, the problem is, I solved the case $n=2$ using divisibility and the case $n=3$ I proved that if $|a_1|\leq|a_2|\leq|a_3|$, then for $a_2>2$ there was no solution, so I just analyzed every case. Can someone help me with the general case?
This answer is partial. It concerns mainly natural solutions of the equation. A brief search showed several related papers, which I list below as references. Unfortunately, I had no time to study them all, so my survey below is incomplete. It seems that all of these references concern natural solutions of the equation. So let $a(n)$ be the number of all nondecreasing natural solutions. To avoid the trivial case we assume that $n\ge 1$ . For each natural $n$ the equation has a natural solution $(1,\dots,1,2,n)$ [Theorem 2, KN], so $a(n)\ge 1$ . On the other hand, $a(n)\le n^n$ , see [EN]. All nondecreasing natural solutions are listed for $n\le 5$ in the proof of Theorem 1 from [KN], for $n\le 7$ in the solution of Problem 2.14 from [ML] and before, for $n\le 12$ in Table 1 from [Eck], and for $n=50$ and $n=100$ in [KN, p.4]. The numbers $a(n)$ for all natural $n\le 100$ are listed in [KN]. It is also known that $a(1997) = 20$ , $a(1998) = 8$ , $a(1999) = 16$ , and $a(2000) = 10$ [KN].
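The universal family $(1,\dots,1,2,n)$ mentioned above ([KN, Theorem 2]) is easy to verify directly: with $n-2$ ones, both the sum and the product equal $2n$:

```python
for n in range(2, 60):
    sol = [1] * (n - 2) + [2, n]
    prod = 1
    for t in sol:
        prod *= t
    # (n-2)*1 + 2 + n = 2n  and  1 * ... * 1 * 2 * n = 2n
    assert sum(sol) == prod == 2 * n
print("verified for n = 2 .. 59")
```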
|number-theory|
0
How to prove that $S = \sum_{i=1}^{2^n} \frac{1}{2^n + i} \gt \frac12$ for some natural number $n$?
While attempting to prove another statement by induction, I was led to proving that this expression is always greater than $\frac12$ . I tried all sorts of manipulation I could come up with to no avail. I hope somebody could help or at least give me some hint on what to do. Thank you very much! $S = \sum_{i=1}^{2^n} \frac{1}{2^n + i} \gt \frac12$ for every natural number $n$ . EDIT : Oh, I think I've figured it out! This is what I just did: $\sum_{i=1}^{2^n} (\frac{1}{2^n + i}) - \frac12 = \sum_{i=1}^{2^n} (\frac{1}{2^n + i}) - \frac{1}{2^{n+1}}2^n = \sum_{i=1}^{2^n} (\frac{1}{2^n + i} - \frac{1}{2^{n+1}})$ $= \sum_{i=1}^{2^n} \frac{2^n - i}{2^{n +1}(2^n + i)}$ Since $i \leq 2^n$ , the difference must be greater than $0$ . I'm typing this up without much thought on seeing that this seemed to work, so I'm checking it again to make sure it's correct, but even if it is, I wondered if there is a better way!
A slightly different take: $\frac{1}{2^n} > \frac{1}{2^n+i}$ for any $i \geq 1$ . Thus the strictly smallest term in each summation is the final term, $\frac{1}{2^n+2^n}=\frac{1}{2^{n+1}}$ . Since you have $2^n$ terms, your sum is strictly $> 2^n\frac{1}{2^{n+1}}=\frac{1}{2}$ .
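For what it's worth, the strict inequality can be confirmed exactly with rational arithmetic for small $n$:

```python
from fractions import Fraction

for n in range(1, 8):
    s = sum(Fraction(1, 2**n + i) for i in range(1, 2**n + 1))
    assert s > Fraction(1, 2)

print(sum(Fraction(1, 2 + i) for i in range(1, 3)))  # n = 1: 1/3 + 1/4 = 7/12
```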
|sequences-and-series|
1
Probability problem regarding rooks on a chessboard
Eight rooks are placed in distinct squares of an 8 x 8 chessboard, with all possible placements being equally likely. Find the probability that all the rooks are safe from one another.
Thought process:

- The first rook can be placed in any of 64 squares.
- For the next rook, the entire row and column of the first rook are off-limits, so the second rook has 64 - 16 + 1 = 49 squares available (add one to avoid double-counting the square where the first rook was placed).
- Repeat the same idea starting with 49, adding back the double-counted squares from the rows and columns of the rooks already placed.

Calculating the probability: let A := placing 8 rooks without any of them capturing another, and A_i := placing the i-th rook. The first rook can be placed in 64 squares, the second in 64-16+1 = 49 squares, the third in 49-16+3 = 36 squares, the fourth in 36-16+5 = 25 squares, the fifth in 25-16+7 = 16 squares, the sixth in 16-16+9 = 9 squares, the seventh in 9-16+11 = 4 squares, and the eighth in 4-16+13 = 1 square. By the principle of counting, |A| = |A_1|·|A_2|·|A_3|·|A_4|·|A_5|·|A_6|·|A_7|·|A_8| = 64·49·36·25·16·9·4·1 = (8!)² outcomes, and since the 8 rooks can be placed on distinct squares in 64·63·…·57 equally likely ordered ways, P(A) = (8!)²/(64·63·…·57) ≈ 9.1·10⁻⁶.
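The count and the probability can be checked exactly in a few lines; the per-rook counts multiply to $(8!)^2$, and the ordered placements total $64\cdot63\cdots57$:

```python
from fractions import Fraction
from math import factorial, perm

safe = 1
for j in range(8):
    safe *= (8 - j) ** 2          # 64 * 49 * 36 * 25 * 16 * 9 * 4 * 1

assert safe == factorial(8) ** 2  # the pattern above is exactly (8!)^2

p = Fraction(safe, perm(64, 8))   # perm(64, 8) = 64 * 63 * ... * 57 ordered placements
print(float(p))                   # ≈ 9.1e-06
```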
|probability|problem-solving|
0
How do I combine a PDF with another continuous function and then do a sum product?
I haven't done math in a while, so I'm a bit fuzzy on how to set up the problem. Let's say there's a test that students take, and they can score anywhere between 0 and 10000. Let's say the distribution of test scores follows a Beta PDF; call this function PDF(X). And let's say there are 200 students. There's also another function, y = b0 + b1X + b2X^2, that tells us, for each score, the probability that a student makes it to college. I want to calculate two things. What is the total expected number of students that make it to college in a class of 200? What is the average score for the students that make it to college? In a discrete world, this is easy: just do a sum product. I would first multiply each value in a PMF by the chance of a student making it to college, and then by the total number of students, to get the number of students that make it to college. It would be something like: sumproduct(PMF(X), b0 + b1X + b2X^2) * 200. In other words, let's say 5% of students score 1
Let random variable $X$ represent a student's test score, with pdf $f(x)$ for $a \le x \le b$ . Let $g(x) = \mathbb P[C \mid X = x]$ be the conditional probability of making it to college ( $C$ ) given a test score of $x$ . Then the probability of a student making it to college is $$ \mathbb P[C] = \int_a^b g(x) f(x)\; dx$$ and the average score of the students that make it to college is $$ \dfrac{\int_a^b x g(x) f(x)\; dx}{\int_a^b g(x) f(x)\; dx} $$
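A numerical sketch of both formulas; the Beta shape $(2,5)$, the scale $10000$, the coefficients $b_0,b_1,b_2$, and the class size $200$ are all made-up illustrative values:

```python
import math

S = 10_000.0                                   # score scale (hypothetical)
ashape, bshape = 2.0, 5.0                      # hypothetical Beta(2, 5) shape
B = math.gamma(ashape) * math.gamma(bshape) / math.gamma(ashape + bshape)

def f(x):                                      # pdf of the scaled Beta on [0, S]
    u = x / S
    return u ** (ashape - 1) * (1 - u) ** (bshape - 1) / (B * S)

b0, b1, b2 = 0.05, 5e-5, -1e-9                 # hypothetical admission curve,
g = lambda x: b0 + b1 * x + b2 * x * x         # stays within [0, 1] on [0, S]

def simpson(h, n=4000):                        # composite Simpson on [0, S]
    step = S / n
    s = h(0.0) + h(S)
    s += 4 * sum(h((2 * i - 1) * step) for i in range(1, n // 2 + 1))
    s += 2 * sum(h(2 * i * step) for i in range(1, n // 2))
    return s * step / 3

P = simpson(lambda x: g(x) * f(x))             # P(college) for one random student
expected = 200 * P                             # expected admits in a class of 200
avg = simpson(lambda x: x * g(x) * f(x)) / P   # average score among admits
print(expected, avg)
```

Since $g$ is increasing here, the average score among admits comes out above the unconditional mean score, as one would expect.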
|calculus|probability|integration|probability-distributions|definite-integrals|
1
Is it true that nonnegative independent random variables with sum of expectations$=\infty$ also has sum$=\infty$ a.s.?
I wonder if it's true that if $X_1,...,X_n,...$ are independent and nonnegative random variables and that $\sum_{i=1}^\infty E[X_i]=\infty$ , then $\sum_{i=1}^\infty X_i=\infty$ a.s.? I suspect so because its true if $X_i=1_{A_i}$ , which is the 2nd Borel-Cantelli lemma. If $X_i=exp(\lambda_i)$ , then its also true, which is a property of exponential random variables. However, I couldn't work out a proof for the above statement, so I wonder if there are counterexamples?
For a counterexample, let $X_i = 3^i$ with probability $\frac1{3^i}$ , and $0$ otherwise. Then $\mathbb E[X_i]= 1$ and therefore $\sum_{i=1}^\infty \mathbb E[X_i] = \infty$ . However, we have $\sum_{i=1}^\infty X_i = 0$ with probability at least $1 - \sum_{i=1}^\infty \frac1{3^i} = \frac12$ (actually, with probability about $0.56$ ).
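The exact probability of the all-zero event is $\prod_{i\ge1}(1 - 3^{-i})$, which indeed comes out near $0.56$:

```python
p = 1.0
for i in range(1, 60):       # factors beyond i ≈ 60 are numerically 1
    p *= 1 - 3.0 ** (-i)
print(p)                     # ≈ 0.560, so P(sum = 0) > 1/2
```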
|probability-theory|
1
Understanding $c_0$ is closed subset in $\ell^{\infty}$
Prove the set of sequences $c_0$ which converge to zero in $l_{\infty}$ is closed. I came across this proof and have a question: Since all $x_n(k)$ are in $c_0$ , its limit point $x(k)$ should be 0; now we only need to show that the sequence of all zeros is in $c_0$ to show that $c_0$ is closed. Is my understanding correct? Any help would be appreciated.
As I am just a student, be careful with my solution. $C_0 = \{a_n \in l_{\infty}( \mathbb{N}) : \lim_{n \to \infty}a_n = 0 \}$ 1- In order to prove that $C_0$ is closed we need to show that any "sequence" of $C_0$ that converges, converges to a limit in $C_0$ . The difficulty here is to understand what we call a "sequence" in $C_0$ . A sequence in $C_0$ is not a sequence of $\mathbb{C}$ numbers but rather a sequence of sequences of $\mathbb{C}$ numbers. More simply, you visualise (that's how I do it at least) a sequence $(a_{m;n}) \in C_0$ as a sequence of column vectors with an infinite number of rows. $ \{\underline{a}_{m;n} \} = \{ \underline{a}_{1;n}; \underline{a}_{2;n}; ...; \underline{a}_{m;n}; ... \} = \{ \begin{pmatrix} a_{1;1}\\ a_{1;2}\\ ...\\ ... \end{pmatrix} ; \begin{pmatrix} a_{2;1}\\ a_{2;2}\\ ...\\ ... \end{pmatrix} ;...; \begin{pmatrix} a_{m;1}\\ a_{m;2}\\ ...\\ ... \end{pmatrix} ; ... \} $ So basically $\lim_{m \to \infty} a_{m;n} = l_n$ can be seen as taking the limit along each row $n$ of these column vectors.
|real-analysis|functional-analysis|
0
Finding the analytical solution of a first order system with pure time delay
I have a simple system and I am searching for the solution for f(t): $$\frac{\partial f(t)}{\partial t} = c_1 \left( f(t) + g(t) + c_2 \right)$$ It turns out that, in this system, $g$ can be related to $f$ by a pure time delay: $$g(t) = f(t-a)u(t-a)$$ where $u(t)$ is the Heaviside function. Applying the Laplace transform to the equation above and isolating $F(s)$ leads to: $$F(s) = \frac{f(0) + c_1c_2}{s - c_1(1 - e^{-as})}$$ Since there is a complex exponential in the denominator, I cannot directly use inverse Laplace transform tables, nor can I break this into partial fractions. How can one find the solution of such an equation? If I cannot directly resolve the equation, can I infer a given solution (from the numerical integration of this system) and identify the coefficients? If the analytical solution does not exist, is there a proof that explains why?
Applying the Laplace transform to $$\frac{\partial f(t)}{\partial t} = c_1 \left( f(t) + f(t-a)u(t-a) + c_2 \right)$$ and assuming $f(t)=0$ for $t<0$ , we get $$ F(s) = -\frac{(c_1c_2+s f_0)e^{as}}{s(c_1+(c_1-s)e^{as})}=-\frac{c_1c_2+s f_0}{s(c_1e^{-a s}+c_1-s)}=\frac{c_1c_2+s f_0}{s(s-c_1)(1-\frac{c_1e^{-as}}{s-c_1})} $$ and now making $$ \frac{1}{(1-\frac{c_1e^{-as}}{s-c_1})}= \sum_{k=0}^{\infty} \left(\frac{c_1e^{-as}}{s-c_1}\right)^k $$ we can invert $F(s)\to f(t)$ . Note that the series is convergent for $|c_1e^{-as}| < |s-c_1|$ , and $$ \frac{c_1c_2+s f_0}{s(s-c_1)}=\frac{c_2+f_0}{s-c_1}-\frac{c_2}{s} $$ and $$ \mathcal{L}^{-1}\left[\frac{\left(\frac{c_1e^{-as}}{s-c_1}\right)^k}{s-c_1}\right]=\frac{c_1^k \left(\left(\left(1-\frac{t}{a k}\right)^k-1\right)(-a k)^k\right) e^{c_1 (t-a k)} u (t-a k)}{k!} $$ Below is a MATHEMATICA script showing the approximation accuracy for n = 4 in blue and the precise integration in dashed red n = 4; parms = {c1 -> -1, c2 -> 1, a -> 2, f0 -> 0}; sum = Sum[((c1 Exp
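Independently of the series inversion, the delay equation can be integrated numerically by the method of steps (forward Euler), assuming as above that $f(t)=0$ for $t<0$; the parameters match the script ($c_1=-1$, $c_2=1$, $a=2$, $f_0=0$):

```python
import math

c1, c2, a, f0 = -1.0, 1.0, 2.0, 0.0
dt, T = 1e-4, 4.0
n = int(T / dt)
delay = int(a / dt)

f = [f0] * (n + 1)
for i in range(n):
    lagged = f[i - delay] if i >= delay else 0.0    # f(t - a) u(t - a)
    f[i + 1] = f[i] + dt * c1 * (f[i] + lagged + c2)

# for t < a the delay term vanishes and the exact solution is
# f(t) = (f0 + c2) e^{c1 t} - c2, here e^{-t} - 1
t1 = 1.0
print(f[int(t1 / dt)], math.exp(-t1) - 1)           # both ≈ -0.6321
```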
|ordinary-differential-equations|inverse-laplace|delay-differential-equations|
0
Is there a name for this Fibonacci Identity
Last night I was trying to solve a problem and discovered an identity relating to the Fibonacci sequence $$ \left\lvert F_{i-j}F_{i+j} - F_{i-k}F_{i+k} \right\lvert = \left\lvert F_{k - j}F_{k+j} \right\lvert $$ Where $F_{n}$ is the $n$th term of the Fibonacci sequence. The modulus brackets remove the awkwardness of the $(-1)$ exponent, which gives $$ F_{i-j}F_{i+j} - F_{i-k}F_{i+k} = F_{k-j}F_{j+k} (-1)^{(i + k)} $$ When $j=1$ and $k=0$ we get the Cassini identity, but is there a name for this more generalized form?
Your Fibonacci identity $$ F_{i-j}F_{i+j} - F_{i-k}F_{i+k} = F_{k-j}F_{j+k} (-1)^{(i + k)} $$ is derived from the identity $$ 0 = (a-b)(a+b) -(a-c)(a+c) +(b-c)(b+c) $$ with label $\texttt{id3_3_1_2b}$ in my collection of Special Algebraic Identities . It is equivalent to, but more symmetrical than Vajda's identity which is derived from $$ 0 = (a-b)(a-c) -bc -a(a-b-c) $$ with label $\texttt{id3_3_1_2c}$ . The answer to your question ... but is there a name for this more generalized form? is probably no. There are very few named identities, especially for Fibonacci numbers. Anyone can propose a name, but it has to gain acceptance.
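The identity, including the sign factor and negative indices (via $F_{-n} = (-1)^{n+1}F_n$), can be brute-force checked:

```python
def fib(n):
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)   # extend to negative indices
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for i in range(12):
    for j in range(6):
        for k in range(6):
            lhs = fib(i - j) * fib(i + j) - fib(i - k) * fib(i + k)
            rhs = fib(k - j) * fib(j + k) * (-1) ** (i + k)
            assert lhs == rhs
print("identity verified for 0 <= i < 12, 0 <= j, k < 6")
```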
|number-theory|fibonacci-numbers|
0
Proving $\mathrm{X}$ and $\mathrm{Y}$ are not jointly continuous although marginally each of them is continuous?
Let $X \sim \mathcal{U}(0,1)$ and suppose $(X,Y)$ has a joint pdf. Define $Y=X$ . Then show that $X$ and $Y$ are not jointly continuous, even though marginally each of them is a continuous random variable. This is how I approach the problem. Define a diagonal $L$ joining the points $O$ and $B$. We have $X, Y \sim \mathcal{U}(0,1)$ (both continuous), $X = Y$ with probability $1$, and $L$ is a $1$-dimensional set. Observation: The projection of the diagonal line $L$ both on the $X$- and the $Y$-axis yields a set of singletons. Hence, by the theory of integration, we are going to integrate a two-dimensional function on a one-dimensional set, therefore $\mathbb{P}[(X,Y)\in L] = 0$.
No, this proof is not correct. I don't know exactly what you mean by "The projection of the diagonal line L both on the X and Y-axis yield a set of singletons," but any way to describe projection I know of would have the projection of $L$ onto either axis be $[0,1]$ rather than a set of singletons. Next, you say $\mathbb{P}((X,Y) \in L) = 0$ , but in reality $$\mathbb{P}((X,Y) \in L) = \mathbb{P}((X,X) \in L) = \mathbb{P}(X \in [0,1]) = 1.$$ You reference the function $f_{X,Y}$ without defining it, and during the subsequent calculation apply Fubini's theorem for seemingly no reason. In the end, you conclude $\iint_L f_{X,Y}(x,y) dydx = 1,$ but since $L$ has Lebesgue measure $0$ , the integral of any function over it is $0$ . Near the end, you write $\mathbb{P}(X \in L)$ , which is nonsense. $X \sim U(0,1)$ , so $X$ takes values in $\mathbb{R}$ and hence cannot be part of any subset of $\mathbb{R}^2$ . You then claim $\mathbb{P}(X \in \triangle OBA) = \frac 12$ , but again, $X$ cannot be an element of $\triangle OBA \subseteq \mathbb{R}^2$ , so this probability is not well defined.
|probability-theory|solution-verification|random-variables|
1
Division algorithm proof intuitive
The following algorithm can be used for division. Can someone intuitively explain, or offer a proof, that the quotient returned from the recursive call (say q' ) and the quotient that should be returned from the current call (say q ) satisfy q = 2q' or q = 2q' + 1 ? I get the pattern but don't understand it well enough.

divide(x, y):
    if (y > x) return 0
    q = divide(x, 2*y)
    if ((x - 2*q*y) >= y) return 2*q + 1
    else return 2*q
The algorithm is doing long division in base $2$ . For example: $$34 = 10\,0010_b\\6=110_b$$ Doing long division: $$\require{enclose}\begin{array}{r} 101\\[-3pt] 110 \enclose{longdiv}{100010}\\[-3pt] -\underline{110\phantom{00}}\\[-3pt] 01010\\[-3pt] -\underline{000\phantom{0}}\\[-3pt] 1010\\[-3pt] -\underline{110}\\[-3pt] 100 \end{array}$$ giving a quotient of $101_b = 5$ and a remainder of $100_b = 4$ . The algorithm starts by moving the divisor $y$ to the left (by doubling it) until the result of subtracting from $x$ gives a result less than the moved $y$ (actually it goes one move further, but returns $0$ for that level, effectively making the previous call the first one of consequence). This gives $$\begin{array}{r} 1\\[-3pt] 110 \enclose{longdiv}{100010}\\[-3pt] -\underline{110\phantom{00}}\\[-3pt] 01010 \end{array}$$ with $q = 1$ being the value on top. At the next level down, $q$ is multiplied by $2$ , moving one place left. So now the remainder is $x - (2q)y$ . If this is big enough, i.e. at least $y$ , the next quotient bit is $1$ and the call returns $2q+1$ ; otherwise the bit is $0$ and it returns $2q$ .
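Here is the algorithm as runnable Python (with the comparison that was cut off in the question filled in as `x - 2*q*y >= y`), checked against the built-in integer quotient:

```python
def divide(x, y):
    """Quotient x // y for x >= 0, y > 0, by binary long division."""
    if y > x:
        return 0
    q = divide(x, 2 * y)            # quotient bits above this position
    if x - 2 * q * y >= y:          # remainder still admits one more copy of y
        return 2 * q + 1
    return 2 * q

print(divide(34, 6))                # 5, as in the worked example above
```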
|recursive-algorithms|
0
The norm of the derivative of a matrix with respect to a vector
Let $A(x) \in \mathbb{R}^{n\times n}$ be a matrix depending on a vector $x \in \mathbb{R}^{n}$ . What can we say about the derivative of $f(x) = (\tau A(x)- I)^{-1}x_0$ with respect to $\tau > 0$ and $x_0 \in \mathbb{R}^{n}$ ? Are there any simplifications? For the context of the problem, I am looking for a $\tau = \tau(x,x_0)$ such that $\lVert D_xf(x) \rVert < 1$ holds for a skew-symmetric matrix $A(x)$ .
$ \def\l{\tau} \def\G{{\Gamma}} \def\LR#1{\left(#1\right)} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\mt{\mapsto} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ For typing convenience, define the variables $$\eqalign{ b &= x_0 \qiq db = dx_0 \\ C &= \LR{I-A\l}^{-1} \qiq \c{dC = C\LR{A\:d\l}C} \\ }$$ Calculate the differential of the function $$\eqalign{ f &= -Cb \\ df &= -\c{dC}\,b \;-\; C\,db \\ &= -\c{C\LR{A\:d\l}C}b \;-\; C\,db \\ &= \LR{CAf}d\l \;-\; C\,db \\ }$$ from which the required gradients are easily identified $$\eqalign{ \grad f\l &= CAf, \qquad \grad fb = -C \\ }$$
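The gradient $\partial f/\partial\tau = CAf$ can be verified by finite differences; the $2\times2$ skew-symmetric $A$ and all numbers below are purely illustrative:

```python
def inv2(M):                                # inverse of a 2x2 matrix
    (p, q), (r, s) = M
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

alpha, tau, b = 0.7, 0.3, [1.0, 2.0]
A = [[0.0, alpha], [-alpha, 0.0]]           # skew-symmetric

def solve(t):
    ImtA = [[1.0 - t * A[0][0], -t * A[0][1]],
            [-t * A[1][0], 1.0 - t * A[1][1]]]
    C = inv2(ImtA)                          # C = (I - t A)^{-1}
    f = [-z for z in matvec(C, b)]          # f = (t A - I)^{-1} x0 = -C x0
    return f, C

f, C = solve(tau)
analytic = matvec(C, matvec(A, f))          # the claimed gradient C A f

h = 1e-6
fp, _ = solve(tau + h)
fm, _ = solve(tau - h)
numeric = [(fp[i] - fm[i]) / (2 * h) for i in range(2)]
print(analytic, numeric)                    # the two should agree closely
```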
|partial-derivative|tensor-products|inverse|matrix-norms|contraction-mapping|
0
Relationship between variance and covariance
I know that $$\text{var}(x-y) = \text{var}(x) + \text{var}(y) - 2\text{cov}(x,y)$$ and $$\text{cov}(x,y) = \frac{1}{2}(\text{var}(x) + \text{var}(y) - \text{var}(x-y)).$$ Is it possible to say that $$\text{cov}(x,y) \leq \text{var}(x) + \text{var}(y)?$$ If I plug this inequality into the above equations, I get that $-\text{var}(x-y) \leq \text{var}(x) + \text{var}(y)$ , which is true since variance is nonnegative.
$$\text{cov}(X,Y) = \frac{1}{2}(\text{var}(X) + \text{var}(Y) - \text{var}(X-Y))\implies \text{cov}(X,Y) \leq \frac{1}{2}(\text{var}(X) + \text{var}(Y))\leq \text{var}(X)+\text{var}(Y),$$ with equality holding iff $\text{var}(X-Y)=0$ and $\text{var}(X)+\text{var}(Y)=0$ , that is, iff $$X=\text{constant}, \quad Y=\text{constant}.$$
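The same chain of inequalities holds for sample statistics (it is just Cauchy-Schwarz plus AM-GM), e.g.:

```python
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(10_000)]
ys = [0.8 * x + random.gauss(0, 0.5) for x in xs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / (n - 1)
var_y = sum((y - my) ** 2 for y in ys) / (n - 1)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# cov <= sqrt(var_x * var_y) <= (var_x + var_y)/2 <= var_x + var_y
assert cov <= 0.5 * (var_x + var_y) <= var_x + var_y
print(cov, var_x, var_y)
```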
|probability|statistics|inequality|variance|covariance|
1
The key step to prove log-convexity is preserved under sums
In S. Boyd's textbook, p. 105 (bottom) (cvx = convex): let $F = \log f$ and $G = \log g$ be convex (i.e. let $f$ and $g$ be log-cvx). (This guarantees $f$ and $g$ are cvx, since log-cvx is included in cvx.) Now the book says: from the composition rules for cvx functions, $\log(\exp(F) + \exp(G)) = \log(f + g)$ is cvx. I do not understand the last step. What I think is: $\exp(F)$ and $\exp(G)$ are cvx .... (OK) $\exp(F) + \exp(G)$ is cvx .... (OK) However, logarithm is a concave function on $\mathbb{R}_{++}$, so how do the composition rules apply here? Thanks!!
There's a nice proof using a baby version of Hölder's inequality. The log-convexity condition on a positive function $f$ is equivalent to the inequality $$f\left(\frac1{p}x + \frac1{q}y\right) \leq f(x)^{\frac1{p}} \cdot f(y)^\frac1{q}$$ for all $x, y \in \mathrm{dom}(f)$ and all positive numbers $p, q$ satisfying $\frac1{p} + \frac1{q} = 1$ . The version of Hölder's inequality we'll use states that for positive numbers $s, t, u, v$ , $$su + tv \leq (s^p + t^p)^\frac1{p} \cdot (u^q + v^q)^\frac1{q}$$ (which is a special case of the usual Hölder's inequality by considering a two-point measure space with counting measure -- clearly overkill, but the connection with Hölder is irresistible to point out). For log-convex functions $f, g$ and points $x, y$ in their common domain, put $$s = f(x)^\frac1{p}, \qquad t = g(x)^\frac1{p}, \qquad u = f(y)^\frac1{q}, \qquad v = g(y)^\frac1{q}.$$ Then log-convexity of $f + g$ follows from $$\begin{aligned} f\left(\frac1{p}x + \frac1{q}y\right) + g\left(\frac1{p}x + \frac1{q}y\right) &\leq f(x)^{\frac1{p}} f(y)^{\frac1{q}} + g(x)^{\frac1{p}} g(y)^{\frac1{q}} = su + tv \\ &\leq (s^p + t^p)^{\frac1{p}} (u^q + v^q)^{\frac1{q}} = \big(f(x)+g(x)\big)^{\frac1{p}} \big(f(y)+g(y)\big)^{\frac1{q}}. \end{aligned}$$
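A quick numerical illustration with two concrete log-convex functions, $f(x)=e^{x^2}$ and $g(x)=e^{(x-1)^2}$: the midpoint form of log-convexity (the case $p=q=2$ above) holds for $f+g$ on a grid:

```python
import math

f = lambda x: math.exp(x * x)            # log f = x^2 is convex
g = lambda x: math.exp((x - 1) ** 2)     # log g = (x - 1)^2 is convex
h = lambda x: f(x) + g(x)

pts = [i / 10 for i in range(-30, 31)]
for x in pts:
    for y in pts:
        # log-convexity at the midpoint: h((x+y)/2) <= sqrt(h(x) h(y))
        assert h((x + y) / 2) <= math.sqrt(h(x) * h(y)) * (1 + 1e-12)
print("midpoint log-convexity of f + g verified on the grid")
```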
|convex-analysis|
0
Calculating Inverse Tangent
When calculating the inverse tangent of a ratio, the calculator will always give an angle between 90 degrees and –90 degrees. But I want to find the positive coterminal angle when the result is negative. Do I add 180 or 360 to the angle? How do I know whether I should add 180 or 360 to the negative angle?
You need to check whether your angle looks reasonable when you are given a negative number. For example: if you are trying to find the angle of (1, -4), you know that it must be in quadrant IV. We know that ArcTan(-4/1) ≈ -76 degrees, and -76 degrees is coterminal with 360 - 76 = 284 degrees. If you check the graph, you will find that 284 degrees is in quadrant IV, meaning that the point (1, -4) is at 284 degrees, so adding 360 was correct. Conversely, if the point were (-4, 1), you know that the answer must be in quadrant II. ArcTan(1/-4) ≈ -14 degrees, and -14 degrees is coterminal with 360 - 14 = 346 degrees, which is in quadrant IV. Because II ≠ IV, we know to add 180 degrees to our original number -14 instead, obtaining 180 - 14 = 166, which is how we determine that 166 degrees is the angle for (-4, 1).
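The rule can be written out as a small function (the name `positive_angle` is just for illustration) and compared with `atan2`, which handles quadrants automatically:

```python
import math

def positive_angle(x, y):
    """Angle of the point (x, y) in degrees, in [0, 360)."""
    if x == 0:
        return 90.0 if y > 0 else 270.0
    a = math.degrees(math.atan(y / x))   # calculator answer, in (-90, 90)
    if x < 0:
        a += 180.0    # quadrants II and III: arctan landed in the wrong half
    elif a < 0:
        a += 360.0    # quadrant IV: lift the negative angle into [0, 360)
    return a

print(round(positive_angle(1, -4)))   # 284, quadrant IV: added 360
print(round(positive_angle(-4, 1)))   # 166, quadrant II: added 180
```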
|geometry|trigonometry|angle|
0
$f(x) + f(y) = f ( xy - 2024x - 2024y + 2024*2025 ) ,\forall x,y \in (2024,\infty)$
The statement of the problem: Find all monotone functions $f : \mathbb R \rightarrow \mathbb R$ that verify $f(x) + f(y) = f ( xy - 2024x - 2024y + 2024\cdot2025 ), \forall x,y \in (2024,\infty)$. My ideas (some of them are wrong): First of all, by making $x = y = 2025$ we easily get that $f(2025) = 0$. Thinking about the fact that it is monotonic, I thought that maybe I can prove that it is constant (by proving that it has different monotonicity on certain intervals), which would result in $f \equiv 0$; this verifies the equation, so it might be the right path. I thought about how we can exploit the fact that $f(2025) = 0$, so I considered the solutions of the equation $$xy - 2024x - 2024y + 2024\cdot2025=2025;$$ now the bad part is that $x,y\in (2024,\infty)$, so there aren't any solutions. The next idea was to think of an additive function, but it was unsuccessful. Does it seem like a difficult problem? I want to see what ideas and solutions you have, and what helped you to intuit the answer, be
We can simplify the RHS as: $$ f(xy-2024x-2024y+2024\cdot2025)=f((x-2024)(y-2024)+2024). $$ i.e., $$ f(x)+f(y)=f((x-2024)(y-2024)+2024). $$ Now, let $f(x)=g(x-2024)$ for some $g$. This can also be written as $f(x+2024)=g(x)$. Therefore, $$ f(x)+f(y)=f((x-2024)(y-2024)+2024) \\ \implies g(x-2024)+g(y-2024)=g((x-2024)(y-2024)). $$ Substitute $u=x-2024$ and $v=y-2024$. Thus, $$ g(u)+g(v)=g(u\cdot v). $$ From this, we obtain that $g(x)=\log_a(x)$ for some $a>0$, $a\neq1$. Thus, our original function, $f(x)=g(x-2024)=\log_a(x-2024)$, is the solution. EDIT: Consider the function $h(x)=g(e^x)$; this function is additive, i.e., $h(x)+h(y)=h(x+y)$. Since both $g(x)$ and $e^x$ are monotonic, $h(x)$ is also monotonic. Additive monotonic functions are linear. Thus $h(x)=ax,a\in\mathbb{R}$, so $g(e^x)=ax$, i.e. $g(x)=a\ln(x)$.
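A numeric spot-check (not a proof) that the claimed solution satisfies the original equation; the base $a = 3$ is an arbitrary choice with $a > 0$, $a \neq 1$:

```python
import math

# Spot-check: f(x) = log_a(x - 2024) satisfies
# f(x) + f(y) = f(xy - 2024x - 2024y + 2024*2025) on (2024, oo).
a = 3.0
f = lambda x: math.log(x - 2024, a)

for x, y in [(2025.5, 2030.0), (2100.0, 5000.0)]:
    lhs = f(x) + f(y)
    rhs = f(x * y - 2024 * x - 2024 * y + 2024 * 2025)
    assert abs(lhs - rhs) < 1e-9
print("identity holds at the sampled points")
```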
|functions|functional-equations|
0
Calculating the area of a square between two circles
I have this problem: Two circles shown in the figure, of centers $O$ and $O'$, both have radius equal to $r$, are externally tangent to each other and are further tangent to the line $t$. The square $ABCD$ has side $AB$ on line $t$ and the other two vertices, $C$ and $D$, which belong respectively to the circumferences bounding the two circles of centers $O'$ and $O$. What is the measure of the side of the square $ABCD$? I have made this attempt with this figure: I have drawn from $O$ and $O'$ the perpendiculars to $t$ and joined $O$ with $D$ and $O'$ with $C$. I have denoted by $H$ and $H'$ the orthogonal projections of the centers $O$ and $O'$. From $P$ I have drawn the perpendicular to the line $t$; this perpendicular cuts the square into two parts. Call $\overline{DC}=x$. The rectangular trapezoids $ADOH$ and $BCO'H'$ are equivalent (mirror images). $\overline{BH'}=\overline{AH}=r-x/2$, $\overline{PF}=r-x$, hence $$\text{area}(DCO'O)=\frac{(2r+x)\cdot (r-x)}{2}$$ $$\text{area}(OO'H'H)=2
Consider this argument: I am given some constants $a$ , $b$ , $c$ , and I want to find the value of variable $d$ for which some relation $X$ holds based on the given. I somehow know that this value must be unique. I start solving this question. I find that in this case, $d = f(a,d) + f(b,d) + f(c,d) - g(a,b,c,d)$ for some functions $f,g$ . Then I evaluate those functions by keeping $d$ as it is on both sides. Eventually, after cancelling, I get $d=d \iff 0=0$ . Did I make any mistake? No . I just put the givens into some equations, without deriving something of use. Maybe I missed using some specific property or failed to notice that adding another function $h(a,b,c,d)$ to RHS changes nothing, i.e., $h(a,b,c,d) = 0$ which could have been enough to determine $d$ based on $a,b,c$ . Does this mean $d$ can take all real (or whatever set it was defined on) values? No . It means that $d=d$ holds for my unique value of $d$ , which is not of much consequence. It can also be understood as sayin
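The side asked for in the question can be recovered with a short coordinate computation. This is my own setup, not part of the answer above: I place the tangency point of the two circles at the origin of line $t$, take centers $O = (-r, r)$ and $O' = (r, r)$, and assume the square of side $s$ is symmetric about the origin, so $D = (-s/2, s)$ lies on the circle about $O$:

```python
import sympy as sp

# Assumed setup: tangency point at the origin of t, centers at (-r, r) and
# (r, r), square symmetric about the origin with D = (-s/2, s).
r, s = sp.symbols('r s', positive=True)
eq = sp.Eq((-s / 2 + r) ** 2 + (s - r) ** 2, r ** 2)
sols = sp.solve(eq, s)
print(sols)
assert set(sols) == {2 * r / 5, 2 * r}
```

Both roots are genuine configurations (for $s = 2r$ the top vertices sit at the tops of the circles); the small square between the circles shown in such figures has side $s = 2r/5$.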
|geometry|algebra-precalculus|
0
Prove that if the set of ideals is $\{\{0\}, R \}$, then $R$ is a field.
Let $R$ be an integral domain. Prove that if the set of ideals is $\{\{0\}, R \}$, then $R$ is a field. $\{0\}$ and $R$ are the trivial ideals of $R$. Let $I$ be an ideal of $R$ and $a\in I$ where $a$ is not the zero element. And let $b\in R$. Because $R$ is an integral domain, which is a commutative ring with unity and no zero divisors, there exists $a^{-1}\in R$, and $a^{-1}b\in R$. So $a(a^{-1}b)=b\in I$, hence $I=R$; therefore, $\{0\}$ and $R$ are the only ideals of $R$, thus $R$ is a field. Is the argument above right? If not, can anyone give me a hint to write a better one? Thanks
Here is @Kooper's approach, but with a better argument. Let $R$ be an integral domain (and of course, a commutative ring with unity), such that its only ideals are $\{0\}$ and $R$. By Zorn's lemma, we know that every non-unit element is contained in some maximal ideal, but the only maximal (proper) ideal is $\{0\}$, so we get that the only non-unit element in $R$ is $0$. By definition, a field is a commutative ring with unity such that every non-zero element is a unit. Since the only non-unit element is $0$, we get what we wanted. $\square$
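The statement "only trivial ideals $\iff$ field" can be illustrated by brute force in the finite rings $\mathbb{Z}/n\mathbb{Z}$ (my own illustration; note $\mathbb{Z}/6\mathbb{Z}$ is not even a domain, it just shows the extra ideals that appear when $n$ is composite):

```python
def ideals(n):
    """All ideals of Z/nZ: each is generated by one element, since any
    ideal of Z/nZ is in particular a cyclic additive subgroup."""
    return {frozenset((g * k) % n for k in range(n)) for g in range(n)}

# Z/7Z is a field: only the zero ideal and the whole ring survive.
assert len(ideals(7)) == 2
# Z/6Z is not a field: extra ideals generated by 2 and by 3.
assert len(ideals(6)) == 4
print(len(ideals(7)), len(ideals(6)))  # 2 4
```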
|abstract-algebra|ring-theory|
0
Help proving a chain rule from total derivative chain rule
Consider $f:\mathbb{R}^d\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}^d$ . It is known that $$ \tag{*} (f\circ g)'(x) = \sum_{i=1}^d \partial_i f(g(x)) \cdot g'(x)^i $$ I would like to prove this from the chain rule for the total derivatives: $$ \tag{**} D_x(f\circ g) = D_{g(x)}f \circ D_xg $$ I'm not sure how to rigorously proceed here. Intuitively I know the two total differentials in the total derivative chain rule can be represented as matrices and their composition will correspond to the multiplication in the desired partial derivative chain rule. But I'm not sure how to get there. How does the function composition get converted into a summation + multiplication? Another related issue. The expression $(*)$ is a real number when evaluated at $x$ . The expression $(**)$ is a linear map from $\mathbb{R}\to\mathbb{R}$ . It's not to difficult to understand that the linear maps $\mathbb{R}\to\mathbb{R}$ (the dual space of $\mathbb{R}$ ) are isomorphic to $\mathbb{R}$ itself. But still,
$\newcommand{\ep}{\epsilon}$ $\newcommand{\R}{\mathbb{R}}$ I have answer that I find most straightforward with the appropriate background. Matrix Components Suppose we have vector spaces $V, W$ with $\dim(V)=n$ , $\dim(W)=m$ . And suppose we have a linear transformation $T:V\to W$ ( $T\in \hom(V,W)$ ). Suppose we have bases $\left\{e^{(V)}_i\right\}$ for $V$ and $\left\{\ep_{(W)}^i\right\}$ for $W^*$ (The dual space of $W$ , the linear maps $W\to \R$ , $\hom(W,\R)$ ). We can find the matrix $[T]\in M_{m\times n}(\R)$ of $T$ by $$ [T]^i_j = \ep_{(W)}^i\left(T\left(e^{(V)}_j\right)\right) $$ We can think of the matrix $[T]$ as an object in its own right. It is a function that takes two numbers, $i, j$ , and gives back the specific real number matrix component. It is well known that if $T\in \hom(V,W)$ and $S\in \hom(W, U)$ that for $S\circ T \in \hom(V, U)$ with $\dim(U)=p$ that $$ \tag{1} [S\circ T] = [S][T] $$ Where matrix multiplication is defined by $$ \tag{2} \left(AB\right)^i_j = \
|linear-algebra|multivariable-calculus|partial-derivative|
0
Trace of algebraic integer $\alpha$ that is in every prime ideal of $\mathcal{O}_K$ lying over $(p)$
Let $K$ be a number field with ring of integers $\mathcal{O}_K$ and let $p$ be a rational prime. Let $(p) = \mathfrak{p}_1^{e_1}\ldots\mathfrak{p}_r^{e_r}$ be the prime factorisation of (p) over $\mathcal{O}_K$ , and suppose that $\alpha \in \mathfrak{a} = \mathfrak p_1\ldots \mathfrak{p}_r$ . Then show that $\text{Tr}_{K/\mathbb{Q}}(\alpha) \equiv 0$ (mod $p$ ). I'd really appreciate any help in proving this. Thanks for reading! Special Case I am able to prove the result in the case where $K/\mathbb{Q}$ is a Galois extension. In that case, each embedding $\sigma$ of $K$ is actually a $\mathbb{Q}$ -automorphism, and $\sigma$ permutes the $\mathfrak{p}_i$ , hence $\sigma(\alpha) \in \mathfrak{a}$ , so clearly $\text{Tr}_{K/\mathbb{Q}}(\alpha) = \sum_\sigma \sigma(\alpha) \in \mathfrak{a} \cap \mathbb{Z} \subseteq p\mathbb{Z}$ . However, if $K/\mathbb{Q}$ is not Galois, the argument fails because the embeddings no longer permute the $\mathfrak{p}_i$ , so the conjugates of $\alpha$ are no
Here is a nice idea that doesn't assume $K/\mathbb{Q}$ is Galois: If the $\sigma_i$ are the embeddings into $ \mathbb{C},$ note that $$ \mathrm{Tr}(\alpha^p) = \sum \sigma(\alpha^p) = \sum \sigma(\alpha)^p = \mathrm{Tr}(\alpha) \pmod p.$$ This means we can take repeated $p$ -th powers within the trace and it won't change the residue class modulo $p.$ Picking $N$ such that $p^N \geq \max{(e_1, \ldots, e_r)}$ guarantees that $\alpha^{p^N} \in (p).$ Now we are done, as $\alpha^{p^N} = p\beta$ for some $\beta \in O_K,$ and we see $$ \mathrm{Tr}(\alpha) = \mathrm{Tr}(\alpha^{p^N}) = \mathrm{Tr}(p\beta) = 0 \pmod p.$$
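The congruence $\mathrm{Tr}(\alpha^p) \equiv \mathrm{Tr}(\alpha) \pmod p$ can be spot-checked numerically by representing $\alpha$ as an integer matrix (multiplication by $\alpha$ on a basis of $\mathcal{O}_K$): for integer matrices, $\mathrm{tr}(M^p) \equiv \mathrm{tr}(M) \pmod p$ for every prime $p$. A sketch for $K = \mathbb{Q}(\sqrt2)$, $\alpha = 3 + 5\sqrt2$ (my own example, not from the answer; in the basis $\{1, \sqrt2\}$, multiplication by $a + b\sqrt2$ is the matrix $[[a, 2b], [b, a]]$):

```python
import numpy as np

def mat_pow_trace_mod(M, p):
    """trace(M^p) mod p via repeated squaring over Z/pZ."""
    n = M.shape[0]
    R = np.eye(n, dtype=np.int64)
    A, e = M % p, p
    while e:
        if e & 1:
            R = (R @ A) % p
        A = (A @ A) % p
        e >>= 1
    return int(np.trace(R)) % p

# alpha = 3 + 5*sqrt(2) acting on the basis {1, sqrt(2)} of Z[sqrt(2)]
a, b = 3, 5
M = np.array([[a, 2 * b], [b, a]], dtype=np.int64)
for p in [2, 3, 5, 7, 11, 13]:
    assert mat_pow_trace_mod(M, p) == np.trace(M) % p
print("Tr(alpha^p) = Tr(alpha) (mod p) for the sampled primes")
```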
|abstract-algebra|number-theory|algebraic-number-theory|integral-extensions|
0
If minimum of independent random variable is exponential, is each one exponential?
We know that given $X_1,...,X_n$ independent random variables which are distributed exponentially, their minimum is also distributed exponentially. Is the converse true? Given $X = \min\{X_1, ..., X_n\}$ distributed exponentially, with $X_1, ..., X_n$ independent, is it true that $X_i$ is distributed exponentially for all $i = 1,...,n$? Edit: I don't assume the $X_i$'s are identically distributed. I believe it is true that if $X \sim \exp(\lambda)$, and if $p_i \equiv \mathbb{P}(X_i = X)$, then $X_i \sim \exp(\lambda p_i)$. However, I couldn't find a reference and didn't manage to prove this by myself. Further edit: The result is true if you assume $\{X = X_i\}$ is an event independent of $X$. This is proven in the following manner: Construct a Poisson process $N_t$ whose waiting times are IID copies of $X$. Split this Poisson process into $n$ counting processes - the process $N_t^i$ will count the number of occurrences where $X = X_i$. Using the fact that $\{X = X_i\}$ is
On the other hand, if $X_1, X_2, X_3$ are independent non-negative random variables such that the minimum of each pair of them is exponentially distributed, then each of the random variables is exponentially distributed. (Pairwise independence suffices.)
|probability|probability-theory|random-variables|exponential-distribution|
0
How do I go "backwards" with error propagation?
Hopefully this is appropriate for Math SE given that, ultimately, this question stems from a math textbook (Zorich's Mathematical Analysis ). Zorich has shown using simple arguments with absolute values that if one has uncertainties for an approximation $\tilde{x}$ to some true value $x$ (defined as $\Delta_x(\tilde{x}) := |x-\tilde{x}|$, and I emphasize that the $\Delta$ function is parametrized by the true value $x$ of interest), then the uncertainty in the approximation to the sum of two true values by the sum of their approximations is bounded above by the sum of the two uncertainties. Mathematically, $$\Delta_{x+y}(\tilde{x} + \tilde{y}) := |(x+y)-(\tilde{x} + \tilde{y})| \leq \Delta_x(\tilde{x}) + \Delta_y(\tilde{y}) $$ with the $\leq$ bit being the "theorem" part which follows immediately from the triangle inequality. Now Zorich says the following: For example, suppose your height has been measured twice by some device, and the precision of the measurement is ±0.5 cm. Suppose a
$$\Delta_{x_1 + x_2 + \cdots + x_{1000}}(\tilde{x}_1 + \tilde{x}_2 + \cdots + \tilde{x}_{1000}) \\ \qquad \leq \Delta_{x_1}(\tilde{x}_1) + \Delta_{x_2} (\tilde{x}_2) + \cdots + \Delta_{x_{1000}} (\tilde{x}_{1000})$$ The left-hand side is the error in measuring the $1000$ pages together, which only induces one copy of the device's limited precision. On the right-hand side, we presume that the pages are all the same thickness, $x$, and that we have the same error for each of them, $\tilde{x}$. (That is, our single measurement's error results in a consistent and systematic bias in the estimated thickness of each page. We don't have independent errors on the right when we do this.) (Consider how we would do this in the other direction: If we measure you $1000$ times, we can estimate the height of a stack of $1000$ of you, but accumulating $1000$ copies of the error, we only know the height of that stack to within $\pm 500$ cm. When we accumulate repeated error, we have the right-hand sid
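The two directions in the answer amount to this small piece of arithmetic (using the book's assumed numbers, $\pm 0.5$ cm precision and $1000$ items):

```python
# One measurement's worth of error, in cm (the device's stated precision).
err = 0.5

# Measure the whole 1000-page stack once, then divide: the single error is
# shared by all pages, so the per-page error shrinks.
per_page_err = err / 1000
assert per_page_err == 0.0005  # cm

# The other direction: estimate a stack of 1000 copies from one measurement;
# the single error accumulates 1000 times.
stack_err = 1000 * err
assert stack_err == 500.0      # cm
print(per_page_err, stack_err)
```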
|word-problem|error-propagation|
1
Prove that if $\dim(U+W)=1+\dim(U∩W)$, then $U⊆W$ or $W⊆U$
$U,W$ are subspaces of $V$. Prove that if $$\dim(U+W)=1+\dim(U\cap W)$$ then $U\subseteq W$ or $W\subseteq U$. I'm really struggling here.
$1 + \dim(U \cap W) = \dim(U+W) \geq \dim(U) \geq \dim(U \cap W)$ , and we see that at least one of these inequalities must be an equality. So, either $\dim(U + W) = \dim(U)$ , or $\dim(U) = \dim(U \cap W)$ . Can you take it from here?
|linear-algebra|vector-spaces|
1
optimize travel time on a 2d plane
There is a dog at $(x,y) = (37,10)$, with units, say, feet. The dog wants to go home, located at the origin $(0,0)$. The dog's speed is 6 feet/second on the $x$ and $y$ axes and 6.5 feet/second anywhere else. What's the fastest path to the origin? My answer says a direct path, but that seems too easy (this is supposed to be a trick question). Am I missing something?
Optimization should come with a constraint; here there is no constraint, so the tag is wrong. The shortest path is the straight line of length $\sqrt{37^2+10^2}$, whether the dog proceeds to the origin directly or along the axes; and since the off-axis speed (6.5 ft/s) is the larger one, the direct path is also the fastest: no trick. Divide the length by the speed to get the time of the dog's walk if required.
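A quick computation comparing the direct route with an along-the-axes route (my own sketch; the answer leaves the division to the reader):

```python
import math

# Direct straight-line route at the off-axis speed, versus going along the
# x axis then the y axis at the on-axis speed.
dist = math.hypot(37, 10)        # sqrt(37^2 + 10^2), about 38.3 ft
t_direct = dist / 6.5            # off-axis speed
t_axes = (37 + 10) / 6           # on-axis speed, x leg then y leg
assert t_direct < t_axes
print(round(t_direct, 2), round(t_axes, 2))
```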
|optimization|
1
Evaluating the Definite Integral $\int_0^{\pi}\cos^{2n} \theta d\theta$
$$\int_0^{\pi}\cos^{2n} \theta\, d\theta$$ $$u=\cos \theta \implies du= -\sin \theta\, d\theta \implies d\theta= -\frac{du}{\sqrt{1-u^2}} $$ $$\int_{-1}^1 \frac{u^{2n}}{\sqrt{1-u^2}}\, du$$ I have no idea what to do next, any guidance is appreciated!
\begin{align} \int_{0}^{\pi}\cos^{2n}\theta\, d\theta =\sum_{k=0}^{2n} \frac{\binom {2n}k}{2^{2n}}\int_0^{\pi}\cos[2(n-k) \theta]\, d\theta =\frac{\pi \binom {2n}n}{2^{2n}} \end{align}
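A symbolic spot-check of the closed form $\pi\binom{2n}{n}/4^n$ for small $n$ (my own sketch, using SymPy):

```python
import sympy as sp

# Compare the definite integral against pi * C(2n, n) / 4^n for n = 0..4.
theta = sp.symbols('theta')
for n in range(5):
    integral = sp.integrate(sp.cos(theta) ** (2 * n), (theta, 0, sp.pi))
    closed = sp.pi * sp.binomial(2 * n, n) / 4 ** n
    assert sp.simplify(integral - closed) == 0
print("closed form matches for n = 0..4")
```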
|calculus|integration|definite-integrals|trigonometric-integrals|
0
How is the Lagrangian or the costate variable obtained for optimal control problems?
I'm new to this topic; I've just reviewed some lectures on the Lagrangian. I'm not entirely sure how analogous the costate variables and the Lagrangian are, so I may be conflating things? I see that the Hamiltonian is a function of the costate variables. If I remember from my lecture videos, the Lagrangian is obtained from only the constraint and the function you wish to maximise. I don't see an analogous way to obtain the costate variables.
The costate variables are the Lagrange multipliers. A distinction though, is that for optimal control problems, the costate variables are functions of time.
|optimization|control-theory|optimal-control|
0
Solve $a^3+b^3+3ab=1$ with $(a,b)\in \Bbb{Z}^2$
Solve the following equation for $(a,b)\in \Bbb{Z}^2$: $$a^3+b^3+3ab=1$$ I tried all of the standard techniques I know. I tried modular arithmetic: $$a^3+b^3+3ab\equiv 1 \pmod{3} $$ $$a^3+b^3\equiv 1 \pmod{3} $$ Now by Fermat's Little Theorem: $$a^2 a+b^2 b\equiv 1 \pmod{3} $$ $$a+b\equiv 1 \pmod{3} $$ But I can't see the next move I have to make. I can't find any obvious factorization of the first member (it would require solving a degree-$3$ equation). I tried using classic decompositions such as the sum of two cubes and the cube of a binomial. Thank you for your time :)
I have another solution, but it is not as short as the previous ones. First of all, we see that $(-1, -1)$ is a solution. Next, we can consider the cubic curve $f(x, y)=x^3+y^3+3xy-1=0$ for $(x, y)\in \mathbb{R}^2$ and show that $(-1, -1)$ is a singular point of multiplicity 2. It holds that $f_x=3x^2+3y=0$ and $f_y=3y^2+3x=0$ if and only if $(x, y)=(-1, -1)$ or $(x, y)=(0, 0)$, but $(0, 0)$ does not belong to the cubic curve. Now, we know that $(-1, -1)$ is a double point. Hence, the line with slope $m$ through $(-1, -1)$ has intersection multiplicity 2 with the curve at that point. I.e., the polynomial $p(x)=f(x, m(x+1)-1)$ of degree 3 has a double root at $x=-1$. We can do polynomial division to find the third root of $p$. We get after some long calculations that $x=\dfrac{2-m}{m+1}$ and $y=m(x+1)-1=m\left(\dfrac{2-m}{m+1}+1\right)-1=\dfrac{2m-1}{m+1}$ for $m\neq -1$. Adding $x$ and $y$ yields $$x+y=\dfrac{2-m}{m+1}+\dfrac{2m-1}{m+1}=1.$$ Now, the solution set over the integers is given by $$ \left\{(x, y)
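The resulting description of the solution set (the line $a + b = 1$ together with the isolated point $(-1,-1)$) can be confirmed by brute force inside a finite box (my own check, not part of the answer):

```python
# Search a finite box for integer solutions of a^3 + b^3 + 3ab = 1 and
# compare with the claimed set: pairs with a + b = 1, plus (-1, -1).
N = 50
sols = {(a, b)
        for a in range(-N, N + 1) for b in range(-N, N + 1)
        if a ** 3 + b ** 3 + 3 * a * b == 1}
expected = {(-1, -1)} | {(a, 1 - a) for a in range(1 - N, N + 1)}
assert sols == expected
print(len(sols), "solutions in the box")
```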
|number-theory|diophantine-equations|
0
If I ask a person if they can say "no" and they say "no", is this a paradox?
If I ask a person if they can say "no" and they say "no", is this a paradox? If they answer "no" it means they can't say "no", but they just said it
It's not a paradox, but you're close to something which is. It might be a reasonable assumption that every statement has a truth value. That is, every statement is either true or false. We may not know which, but it certainly is one or the other. This assumption, however, is wrong. Classically, the example is the "Liar's Paradox:" "This sentence is false" Think about this for a moment, and see that it has no consistent truth value. If that statement is false, we are led to the conclusion that it is actually true. If instead we see the statement as true, we are forced then to accept that it is false. This statement has no truth value. There are (informal) set theoretic analogs in Russell's Paradox, which is one of the causes of the rigorous axiomatization of math and set theory in the first place. However, even in this rigorous setting (or any rigorous, powerful enough setting), it is a result of Gödel that there are true statements which are unprovable.
|paradoxes|
0
The square of n+1-th prime is less than the product of the first n primes.
I wanted to prove the following question in an elementary way, not using the Bertrand postulate or analytic estimates like $x/\log x$. The question is $$p_{n+1}^2 < p_1p_2\cdots p_n.$$ I made the following argument. Does anyone have some opinion or simpler ideas to complete it? We consider two cases. Case 1: $N=p_1p_2\cdots p_n-1$ is composite: then there will be a prime factor $q\leq\sqrt{N}$, and of course $q$ is not any of the $p_i$'s. Therefore $p_{n+1}\leq q\leq\sqrt{N}$. So $p_{n+1}^2\leq N < p_1p_2\cdots p_n$. Case 2: $N=p_1p_2\cdots p_n-1$ is prime. Then???
Proof using induction: Base: for $n = 4$: $2\cdot 3\cdot 5\cdot 7 = 210 > 121 = 11^2$. Assume it holds for $k\in\mathbb{Z}$: $p_1\cdot p_2\cdots p_k > p_{k+1}^2$. We want to prove $p_1\cdot p_2\cdots p_k\cdot p_{k+1} > p_{k+2}^2$. By Bertrand's postulate $\exists q\in\mathbb{P}\colon p_{k+1} < q < 2p_{k+1}$. It's enough to show $p_1\cdot p_2\cdots p_{k+1}>4p_{k+1}^2$, because $4p_{k+1}^2>q^2\geq p_{k+2}^2$. The last inequality holds because no prime greater than $p_{k+1}$ can be smaller than the next prime $p_{k+2}$. Clearly $p_{k+1}>4$, so \begin{align*} 4p_{k+1}^2 < p_{k+1}\cdot p_{k+1}^2 < p_{k+1}\cdot p_1 p_2\cdots p_k = p_1 p_2\cdots p_k p_{k+1}. \end{align*} The last inequality holds by the inductive assumption. The proof is from: https://www2.cms.math.ca/crux/v42/n4/Malikic_42_4.pdf
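A quick numeric check of the inequality, and of why the base case starts at $n = 4$ (my own sketch; it uses SymPy's `prime` for the $n$-th prime):

```python
from sympy import prime

# Check p_{n+1}^2 < p_1 p_2 ... p_n for n = 4..30.  It fails for small n
# (e.g. n = 2: 5^2 = 25 > 2*3 = 6), which is why the base case is n = 4.
prod = 1
for n in range(1, 31):
    prod *= prime(n)                    # p_1 * ... * p_n
    if n >= 4:
        assert prime(n + 1) ** 2 < prod
print("verified for n = 4..30")
```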
|number-theory|prime-numbers|prime-factorization|prime-gaps|
0
Is a function differentiable at a point if and only if both of the one-sided derivatives exist and are equal?
Most (all?) textbooks on calculus state unconditionally that a function $f$ is differentiable at $x$ if and only if both of the one-sided derivatives $f_+'(x)$ and $f_-'(x)$ exist and are equal. On the other hand, answers to this question imply that this is not the case. So, are the textbooks or the answers to that question wrong?
If you take a function: $$ f( x) =\begin{cases} x^{2} & ,x\leqslant 0\\ x & ,x >0 \end{cases} $$ then at $x=0$ we can't agree on what the derivative is because there are 2 choices. Which one would you choose - the left or the right derivative? That's why the (two-sided) derivative doesn't exist at that point. But if the whole function is: $$ g(x) = x^{2} ,\quad x\leqslant 0 $$ then there's no ambiguity at $x=0$; there's only one derivative there. So at endpoints of a function's domain (or at other points where the domain is interrupted) we can still say that the derivative exists. It's just that most of the time textbooks don't talk about the endpoints.
|calculus|derivatives|
1
Solving a $3$ variable diophantine equation
I am looking to solve the following equation in positive integers: $$abc=5(a-2)(b-2)(c-2), a,b,c\in\mathbb{N}_0$$ From experimentation, I found that the only solution is $(10,4,4)$ and permutations, but I am not sure how to rigorously prove this. I simply looked at common prime factors and tried small values of the variables.
Let $a,b,c\in\Bbb{N}$ be such that $$abc=5(a-2)(b-2)(c-2).\tag{1}$$ By symmetry, without loss of generality we may assume that $a\leq b\leq c$ . If $abc=0$ then $a=0$ , and so either $b=2$ or $c=2$ . A quick check shows that $(a,b,c)$ is one of $$(0,0,2),\quad(0,1,2),\quad(0,2,c),\ c\geq2.$$ Now suppose $a,b,c>0$ . Then the left hand side of $(1)$ is positive, hence the right hand side is positive, and so $a,b,c\geq3$ . From $3\leq a\leq b\leq c$ we find that $$1=5(1-\tfrac2a)(1-\tfrac2b)(1-\tfrac2c)\geq5(1-\tfrac2a)^3,$$ and because $\left(\tfrac35\right)^3>\tfrac15$ this shows that $a<5$, so either $a=3$ or $a=4$ . If $a=3$ then, after some rearranging, equation $(1)$ simplifies to $$(b-5)(c-5)=15.$$ Because $3\leq b\leq c$ we see that $(a,b,c)$ is either $(3,6,20)$ or $(3,8,10)$ . If $a=4$ then, after some rearranging, equation $(1)$ simplifies to $$(3b-10)(3c-10)=40.$$ Because $4\leq b\leq c$ we see that $(a,b,c)$ is either $(4,4,10)$ or $(4,5,6)$ .
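The case analysis can be double-checked by brute force over $a \le b \le c$ (the search bound $100$ is my choice; the answer's bounds on $a$, $b$, $c$ show that nothing is missed for positive $a,b,c$):

```python
from itertools import combinations_with_replacement

# Enumerate all triples a <= b <= c up to 100 and keep the solutions of
# abc = 5(a-2)(b-2)(c-2) with positive entries.
found = sorted(
    (a, b, c)
    for a, b, c in combinations_with_replacement(range(1, 101), 3)
    if a * b * c == 5 * (a - 2) * (b - 2) * (c - 2)
)
print(found)  # [(3, 6, 20), (3, 8, 10), (4, 4, 10), (4, 5, 6)]
```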
|algebra-precalculus|number-theory|diophantine-equations|
0
If $B = x(xI-A)^{-1}$ for a generator matrix $A$, then $B-B^2$ has positive diagonal elements
Let $A$ be the generator matrix of a continuous-time Markov chain. This means that $A$ has positive off-diagonal elements $A_{ij} > 0$ , $i \ne j$ , and row sums $\sum_j A_{ij}$ equal to $0$ . For example, $A$ could be $$ A = \left( \begin{matrix} -7 & 4 & 3 \\ 1 & -2 & 1 \\ 3 & 5 & -8 \end{matrix} \right). $$ I am interested in proving the following claim about the matrix $B = x(x I - A)^{-1}$ for some $x > 0$ . Claim. For the matrix $B = x(x I - A)^{-1}$ , it holds that the diagonal elements of $B - B^2$ are non-negative. Using numerical simulations I have convinced myself that this claim is likely true; however, I have not been able to make much progress toward proving it. It is straightforward to show that the matrix $B$ is stochastic. However, the claim above is not true for all stochastic matrices $B$ ; there is something special about stochastic matrices of this particular form. Any ideas?
Not an answer, but some observations. Let $d$ be the maximum diagonal entry of $-\frac{1}{x}A$ . Then $S=I+\frac{1}{xd}A$ is a stochastic matrix whose diagonal elements are less than $1$ . Conversely, given any $d>0$ and any stochastic matrix $S$ whose diagonal elements are less than $1$ , $A=-xd(I-S)$ is a generator of a CTMC. Therefore, while the stochastic matrix $B$ in the OP takes a very special form and the statement that $B-B^2$ has a nonnegative diagonal is not true for a general stochastic matrix $B$ , if we express $B$ in terms of $S$ , we may reformulate the statement in question as one about an almost general stochastic matrix $S$ . More specifically, let $c=\frac{d}{d+1}$ . Then $0<c<1$ and \begin{align*} B-B^2 &=(B^{-1}-I)B^2\\ &=-\frac{A}{x}\left(I-\frac{A}{x}\right)^{-2}\\ &=d(I-S)\big(I+d(I-S)\big)^{-2}\\ &=d(d+1)^{-2}(I-S)(I-cS)^{-2}.\\ \end{align*} Hence the OP is essentially asking whether $(I-S)(I-cS)^{-2}$ has a nonnegative diagonal for any $c\in(0,1)$ and any stochast
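A numeric check of the claim on the question's example generator, for a few values of $x$ (my own sketch, mirroring the simulations the question mentions):

```python
import numpy as np

# B = x (xI - A)^{-1} for the example generator; check that B is stochastic
# and that diag(B - B^2) is nonnegative (up to float tolerance).
A = np.array([[-7.0, 4, 3],
              [1, -2, 1],
              [3, 5, -8]])
for x in [0.5, 1.0, 5.0, 50.0]:
    B = x * np.linalg.inv(x * np.eye(3) - A)
    assert np.allclose(B.sum(axis=1), 1)    # row sums are 1
    d = np.diag(B - B @ B)
    assert np.all(d >= -1e-10), (x, d)      # claimed nonnegativity
print("diag(B - B^2) >= 0 for the sampled x")
```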
|linear-algebra|matrices|stochastic-processes|markov-chains|stochastic-matrices|
0
Given $x^2y = 32$ and $x^3/y = 1/8$, find $\log_2 x$ and $\log_4 y$
Let $\log_2 x=a$ and $\log_4 y = b$ . Then from $$x^2y = 32$$ and $$\frac{x^3}{y} = \frac{1}{8}$$ we need to find the values of $a$ and $b$ . I substituted values for $x$ and $y$ : $\log_2 2^2 = 2$ and $\log_4 8 = 1.5$ or $\log_2 8 = 2\cdot1.5$ where $a$ is $2$ and $b$ is $1.5$ . So $\log_2 x^2y=a+2b$ or $\log_2 32 = 2+(2\cdot1.5)$ If $\frac{x^3}{y} = \frac{1}{8}$ then the exponent would be $-3$ , but this would mean $a = 2$ and $b = 1.5$ and would not prove to be true throughout the equation. Any assistance is appreciated.
From $$x^2y=32, \frac{x^3}{y}=\frac{1}{8}$$ we have $$x^5 = 4$$ $$\Rightarrow x = 4^{\frac{1}{5}} = 2^{\frac{2}{5}}$$ Therefore, we have $$ y = 8x^3 $$ $$= 8 \cdot (2^{\frac{2}{5}})^3 $$ $$= 2^3 \cdot 2^{\frac{6}{5}} $$ $$= 2^{\frac{21}{5}}$$ $$= 4^{\frac{21}{10}}$$ Hence, we have $$\log_2(x) = \log_2 2^{\frac{2}{5}} = \frac{2}{5}$$ $$\log_4(y) = \log_4 4^{\frac{21}{10}} = \frac{21}{10}$$
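A numeric verification of the derived values (my own sketch):

```python
import math

# Verify x = 2^(2/5), y = 2^(21/5) against both given equations, and the
# requested logarithms a = log_2 x, b = log_4 y.
x = 2 ** (2 / 5)
y = 2 ** (21 / 5)
assert abs(x ** 2 * y - 32) < 1e-9            # x^2 y = 32
assert abs(x ** 3 / y - 1 / 8) < 1e-12        # x^3 / y = 1/8
assert abs(math.log2(x) - 2 / 5) < 1e-9       # a = 2/5
assert abs(math.log(y, 4) - 21 / 10) < 1e-9   # b = 21/10
print("a = 2/5, b = 21/10")
```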
|logarithms|exponentiation|
1
Geometrically determining the symmetry of a tetrahedron (using solid geometry, not group theory)
It is well established that a regular tetrahedron has 12 orientation-preserving symmetries, the group $A_4$. To better understand these symmetries, I set out to identify them geometrically: Prove that the rigid (orientation-preserving) motions which send a regular tetrahedron to itself are precisely: (1) the identity; (2) rotations of $\pm 120^\circ$ around a line through the tetrahedron's center and a vertex; (3) rotations of $180^\circ$ around a line through the tetrahedron's center and the midpoint of an edge. My proof is below, to which I request verification and feedback, in particular on the portion between the "[*]" symbols, which I'd like verified, or suggestions to make it (or its exposition) simpler or more clear. Note: This question does not ask for lists of these symmetries, visualizations, or linear algebra or coordinate based derivations, but rather a synthetic, geometric proof of exactly which motions send the tetrahedron to itself. Proof: Let us call such a motion an admissi
Nobody was touching this question, so I will say some words on the problem; there was a bounty, so there is a special interest in having a parallel opinion and feedback. (This was too long for a comment, so it became an answer...) One may use the same plan, but structured to be "more convincing". Since I have to argue geometrically, we observe first that geometrically we have: four vertices (or $0$-simplices), six edges (or $1$-simplices), four faces (or $2$-simplices) (and one tetrahedral part, the one $3$-simplex), and each (admissible, orientation-preserving) symmetry $\sigma$ has to preserve the geometry in each dimension, the incidence relations, and the duality vertex-(opposite) face, respectively edge-(opposite) edge. The OP considers the cases by starting with the observation that $\sigma$ must be a rotation; take its fixed line, intersect it with the tetrahedron boundary (only the faces), get two points, and now consider cases depending on the position of these points. Yes, one can do
|geometry|solution-verification|euclidean-geometry|symmetry|solid-geometry|
1
Bessel's Correction Confusion: Random Variable is derived but sample from Random Variable is used in practice
Let $X$ be the set of random variables $X=\{X_{1},X_{2}, \dots,X_{N}\}$ and random variable $X_{i}$ be i.i.d from $X_{j}$ for $i\neq j$ . From $E[\text{var}(X)]=E\left[ \frac{1}{N}\sum_{i=1}^N (X_{i}-\bar{X})^2 \right]$ you can eventually reach $E[\text{var(X)}]=\frac{{N-1}}{N}\text{var}(X_{i})$ using the properties of expectation and the definition of i.i.d. Isolating $\text{var}(X_{i})$ implies: $$ \text{var}(X_{i})=\frac{N}{N-1}E[\text{var}(X)] $$ . where $\text{var}(X_{i})=\sum_{j=1}^M (X_{i,j}-E[X_{i}])$ , $M$ is the number of samples for the $i$ th random variable (or all random variables since they all have the same distribution). By definition of $E[\text{var}(X)]$ : $$ \text{var}(X_{i})=\frac{N}{N-1}E\left[ \frac{1}{N}\sum_{i=1}^N (X_{i}-\bar{X})^2 \right] $$ $$ = \frac{1}{N-1}E\left[ \sum_{i=1}^N (X_{i}-\bar{X})^2 \right] $$ , which is apparently the definition of unbiased variance for a random variable $X_{i}$ . This does not make sense to me because I should expect that the
Fundamentally, your confusion stems from not properly making a distinction between parameters (more specifically, moments) of a probability distribution, and estimators of those parameters. This becomes especially apparent in your use of notation. When we talk about the mean or variance of a probability distribution, we aren't speaking about properties of some random sample drawn from that distribution. For instance, if I say $X$ has a normal distribution with mean $\mu = 1$ and variance $\sigma^2 = 4$ , that is literally defining how $X$ is distributed: $$f_X(x) = \frac{1}{2 \sqrt{2\pi}} e^{-(x-1)^2/8}, \quad x \in \mathbb R.$$ Or in the case where the distribution is not explicitly parametrized by mean and variance, e.g., $Y$ has a binomial distribution with parameters $n = 7$ and $p = 1/3$ , that means $$\Pr[Y = y] = \binom{7}{y} (1/3)^y (2/3)^{7-y}, \quad y \in \{0, 1, \ldots, 7\}$$ and we can use this to calculate the corresponding moments: $$\operatorname{E}[Y] = np = \frac{7}{3}
|probability|probability-theory|statistics|
1
If $x>0,y>0$ and $4xy=2^{x+y}$ then find the minimum and maximum values of $x+y$.
If $x>0,y>0$ and $4xy=2^{x+y}$ then find the minimum and maximum values of $x+y$ . My Attempt I tried by putting $t=x+y$ $\Rightarrow 4x(t-x)=2^t$ . On differentiation we have $4t-8x=\frac{dt}{dx}(2^tln2-4x)$ . For maximum/minimum put $\frac{dt}{dx}=0$ to obtain $t=2x$ $\Rightarrow x+y=2x$ and thus $y=x$ . Putting in given equation one obtains $4x^2=2^{2x}$ . The obvious solution here is $x=1,2$ . So the value of corresponding $y$ will be $y=1,2$ . So, the extreme values of $x+y$ can be $2$ and $4$ . Is above correct or am I missing something here. Can we solve this using $AM\geq GM$ inequality.
Let $x$ and $y$ be positive real numbers such that $4xy=2^{x+y}$ . Set $t:=x+y$ so that $xy=2^{t-2}$ and hence $x$ and $y$ are real roots of the quadratic $$X^2-tX+2^{t-2}.$$ This implies that the discriminant $\Delta$ of this quadratic is nonnegative, where $$\Delta=(-t)^2-4\cdot2^{t-2}=t^2-2^t,$$ and it follows that $2\leq t\leq 4$ . For $t=2$ we find $x=y=1$ and for $t=4$ we find $x=y=2$ .
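A numeric check of the answer's reduction: the constraint forces $t^2 \ge 2^t$ for $t = x + y$, which holds exactly on $[2, 4]$, with both endpoints attained (my own sketch):

```python
# Both extreme points satisfy the original constraint 4xy = 2^{x+y}.
for x, y in [(1, 1), (2, 2)]:
    assert 4 * x * y == 2 ** (x + y)

# Sample the discriminant condition t^2 - 2^t >= 0 across t in [2, 4].
for i in range(401):
    t = 2 + i * 0.005
    assert t * t - 2 ** t >= -1e-12
print("x + y ranges over [2, 4]: min 2 at (1,1), max 4 at (2,2)")
```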
|calculus|derivatives|inequality|maxima-minima|a.m.-g.m.-inequality|
0
Fast convolution with "small" values
Say we have two sequences of integers $a = \{a_1 \dots a_n\}$ and $b = \{b_1 \dots b_n\}$, where $a_i, b_i \in \mathbb{Z}_q$, but we know some value $p < q$ such that $0 \leq a_i < p$. We want to compute the cyclic convolution of these sequences over the larger field $\mathbb{Z}_q$ ($a \circledast b$). Naively, we could compute an FFT for each sequence, multiply component-wise, and then compute the inverse FFT. This takes 3 FFT's of length $n$, and $n$ $q$-bit multiplications. Now, imagine that we keep $a$ fixed but want to compute the cyclic convolution against many different $b$'s. Assume that we are given the $b$'s in FFT form, and can produce a result in FFT form. Then we can precompute and store the FFT of $a$, and our runtime will be quite fast, since we no longer need to compute any FFT's. Our storage will be $n \log q$ bits, for the FFT form of $a$ over $\mathbb{Z}_q$. Can we exploit the fact that $p < q$ to reduce our storage costs? Specifically, can we store only $\approx n \log p$ bits?
Here's a solution that works for some values of $n,p,q$: Assume $q=\Pi_i q_i$ for several primes $q_i \le p$, for $i \in [1,k]$. Choose a modulus $M \ge np^2$. By CRT, we can decompose the convolution between $a$ and $b$ into $k$ convolutions, each between $a$ and $b \bmod q_i$. By our choice of $M$, and because $q_i \le p$, we can just do each of these convolutions over $\Bbb Z_M$. The reason this works is that, over the integers, a convolution between $b \bmod q_i$ and $a$ can produce a maximum value of $np^2$, so we will not wrap around $M$ when computing over $\Bbb Z_M$. Then we can reduce the mod $M$ results mod $q_i$, and apply the CRT to recover the value mod $q$. Say we choose $M$ so that it is 'FFT friendly' - that is, it has a primitive $2n$-th root of unity. Then we can get away with just storing the $a$ value in FFT form over $\Bbb Z_M$. This is $n \log M = n (2\log p + \log n)$ bits, which could be much less than $n \log q$ bits, depending on the values of $n, p, q$. The s
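A toy end-to-end sketch of the CRT trick, with naive cyclic convolutions standing in for the FFTs (all sizes and moduli below are illustrative choices of mine):

```python
from math import prod

# Toy parameters: length n, bound p on entries of a, q a product of small
# primes q_i <= p, and M >= n*p^2 so the per-prime results never wrap.
n, p = 4, 11
primes = [3, 5, 7]              # the q_i
q = prod(primes)                # q = 105
M = n * p * p                   # M = 484 >= n*p^2

def cconv(a, b, mod):
    """Naive length-n cyclic convolution modulo `mod`."""
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) % mod
            for k in range(n)]

a = [1, 4, 0, 7]                # entries of a are < p
b = [23, 101, 55, 90]           # entries of b are full-range mod q

direct = cconv(a, b, q)         # reference result over Z_q

# Per prime q_i: convolve a with (b mod q_i) over Z_M.  The exact integer
# result is < n*p^2 <= M, so reducing mod q_i recovers the result mod q_i.
residues = [[c % qi for c in cconv(a, [x % qi for x in b], M)]
            for qi in primes]

def crt(rems, mods):
    """Combine one residue per pairwise-coprime modulus via the CRT."""
    x, m = 0, 1
    for r, mi in zip(rems, mods):
        t = ((r - x) * pow(m, -1, mi)) % mi
        x, m = x + m * t, m * mi
    return x % m

recombined = [crt([res[k] for res in residues], primes) for k in range(n)]
assert recombined == direct
print(direct)                   # [40, 53, 39, 51]
```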
|abstract-algebra|algorithms|convolution|fast-fourier-transform|
0
Why can't we write $\frac{d}{dy}(\frac{dx}{dy})=\frac{dy(\frac{dx}{dy})-dx\frac{dy}{dy}}{(dy)^2}$
I know the standard derivation $\frac{d}{dy}(\frac{dx}{dy})=\frac{d}{dy}(\frac{1}{\frac{dy}{dx}}) = \frac{-\frac{d}{dy}(\frac{dy}{dx})}{(\frac{dy}{dx})^2} = - \frac{d^2y}{dx^2}(\frac{dy}{dx})^{-3}$ But I would like to know why it's wrong to write $\frac{d}{dy}(\frac{dx}{dy})=\frac{dy(\frac{dx}{dy})-dx\frac{dy}{dy}}{(dy)^2}$ Thanks.
\begin{eqnarray} d \left(\frac{dx}{dy}\right) &=& \frac{d^2x\, dy - dx\, d^2y}{dy^2} \\ &=& \frac{d^2x\, dy/dx^3 - d^2y/dx^2}{dy^2/dx^3} \end{eqnarray} $d^2 x$ is the small change of $dx$ . When $x$ is an "input variable", you can set $d^2x = 0$ . But when you describe $x$ as a function of another variable, say $x = x(t)$ , and consider $t$ as the input, you have to revive this term. Setting $d^2x = 0$ gives \begin{eqnarray} \frac{d}{dy} \left(\frac{dx}{dy}\right) &=& - \frac{ d^2y/dx^2}{dy^3/dx^3} \end{eqnarray} The above consideration about $d^2x$ is illustrated in the following example. The transformation shown below may seem correct, but it omits the dependence on $d^2x$ and yields an incorrect result. $$ \frac{d^2y}{dt^2} = \frac{d^2y}{dx^2} \frac{dx^2}{dt^2} $$ The term $d^2x$ has to be revived in this way: $$ d^2y = d (dy) = d(y' dx) = dy' dx + y' d^2x $$ Now using the relation $dy'/dx = y''$ , \begin{eqnarray} \frac{d^2y}{dt^2} &=& y'' \frac{dx^2}{dt^2} + y' \frac{d^2x}{dt^2} \\ \end{eqnarray} This is a correct result.
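The corrected formula $\frac{d^2y}{dt^2} = y''\,(\frac{dx}{dt})^2 + y'\,\frac{d^2x}{dt^2}$ can be sanity-checked numerically. The concrete choice $y(x)=\sin x$ , $x(t)=t^2$ below is mine, for illustration:

```python
import math

# Check d2y/dt2 = y''(x) (dx/dt)^2 + y'(x) d2x/dt2 at one sample point,
# with y(x) = sin(x) and x(t) = t^2, so y(t) = sin(t^2).

t = 0.7
x = t * t

# Left side via a central second difference of y(t) = sin(t^2).
h = 1e-5
y = lambda t: math.sin(t * t)
lhs = (y(t + h) - 2 * y(t) + y(t - h)) / h**2

# Right side from the formula: y'' = -sin(x), (dx/dt)^2 = (2t)^2,
# y' = cos(x), d2x/dt2 = 2.
rhs = -math.sin(x) * (2 * t) ** 2 + math.cos(x) * 2

assert abs(lhs - rhs) < 1e-4
```

Dropping the $y'\,\frac{d^2x}{dt^2}$ term (i.e., the `math.cos(x) * 2` contribution) makes the check fail, which is exactly the point of the answer.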
|derivatives|
0
Find the probability of at least one lobster being caught in the afternoon given that more lobsters were caught in the morning.
I am solving the following problem from Sheldon Ross: Lobsters are caught in morning and afternoon sessions. The number of lobsters that can be caught in each session are 0, 1, 2, 3, 4, or 5, each with probability 1/6. Find the probability of at least one lobster being caught in the afternoon given that more lobsters were caught in the morning. I can solve it using conditionals and looking at all the possibilities. I would like to use random variables instead. Denote $X$ as the number of lobsters caught in the afternoon and $Y$ as the number caught in the morning. Then the problem is asking for $$P(X\geq 1|Y> X)=\frac{P(X\geq1,Y> X)}{P(Y>X)}$$ Is there any way to compute the above without resorting to looking at all the cases? Thanks.
$$P(Y<X)+P(Y=X)+P(Y>X)=1 $$ Since $P(Y=X)=\frac 16$ and $P(Y<X)=P(Y>X)$ by symmetry, we must have $P(Y>X)=\frac 5{12}$ $$P(X=0, Y>X) + P(X\ge1, Y>X)= P(Y>X)$$ $$P(X=0, Y>X)= \frac 5{36}\implies P(X\ge1, Y>X)=\frac 5{18}$$ So... $$P(X\ge 1|Y>X) = \frac 23$$
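A brute-force enumeration of the $36$ equally likely outcomes confirms these values:

```python
from fractions import Fraction
from itertools import product

# Enumerate all (morning, afternoon) = (Y, X) outcomes, each with prob 1/36.
pairs = list(product(range(6), repeat=2))
more_morning = [(y, x) for y, x in pairs if y > x]      # condition Y > X
caught_pm = [(y, x) for y, x in more_morning if x >= 1]  # event X >= 1

assert Fraction(len(more_morning), 36) == Fraction(5, 12)   # P(Y > X)
p = Fraction(len(caught_pm), len(more_morning))
assert p == Fraction(2, 3)                                  # P(X >= 1 | Y > X)
```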
|probability|random-variables|
1
How to convert from 2D points to 3D points on a plane
I have some 3D coplanar points. The plane is defined by normal vector and constant. I need to work with the points in 2D and then convert them back to 3D. In order to convert the points to 2D I made a quaternion to rotate the plane to the xy-plane and I can transform the points with that (simply discarding the z-coord). After modifying the points in 2D I want to convert them back to 3D and that's where I got stuck. I noticed that the same quaternion, if inverted, rotates the point back but I need to translate it too somehow, involving the plane's constant, but I don't know how to do that. Thanks!
Attach a reference frame to the plane. You know the plane's normal $n$ and you know the constant $d$ . The plane equation is $n \cdot r = d $ , where $r =(x,y,z)$ . Such a frame is not unique. The frame is defined by a point $P_1 = (x_1, y_1, z_1)$ on the plane (any point), and a rotation matrix $R = [u_1, u_2, n] $ To specify the rotation matrix $R$ , you need to choose a unit vector $u_1$ such that $ u_1 \cdot n = 0 $ and $u_1 \cdot u_1 = 1 $ Further set $u_2 = n \times u_1 $ Vector $n$ is also assumed to be a unit vector. If it is not, then normalize it and change the constant $d$ accordingly. For example, if $n$ is given as $[1, 2, 3]$ then normalizing it we get $ n = [1, 2 , 3] / \sqrt{14} $ $u_1$ can be chosen as follows: $ u_1 = [2, -1, 0 ] / \sqrt{ 5 } $ or $u_1 = [3, 0, -1] / \sqrt{10} $ or $u_1 = [0, 3, -2] / \sqrt{13} $ etc. Once you've selected $u_1$ , you can compute $u_2$ in a unique way as $u_2 = n \times u_1$ . Now the $2D$ points are generated from the $3D$ points as
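A minimal Python sketch of this frame construction, projecting a coplanar 3D point to 2D coordinates $(s,t)$ in the frame $(u_1, u_2)$ and lifting it back. The plane, the choice of $u_1$ , and the sample point are illustrative assumptions:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return [c / m for c in v]

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

n_raw, d_raw = [1, 2, 3], 6          # plane x + 2y + 3z = 6
scale = math.sqrt(dot(n_raw, n_raw))
n = normalize(n_raw)
d = d_raw / scale                    # rescale the constant with the normal

u1 = normalize([2, -1, 0])           # any unit vector with u1 . n = 0
u2 = cross(n, u1)
P1 = [d * c for c in n]              # a point on the plane (closest to origin)

def to_2d(p):
    rel = [p[i] - P1[i] for i in range(3)]
    return (dot(rel, u1), dot(rel, u2))

def to_3d(s, t):
    return [P1[i] + s * u1[i] + t * u2[i] for i in range(3)]

p = [6, 0, 0]                        # lies on the plane: 6 + 0 + 0 = 6
s, t = to_2d(p)
back = to_3d(s, t)
assert all(abs(back[i] - p[i]) < 1e-12 for i in range(3))
```

The lift in `to_3d` is exactly the missing translation from the question: add the in-plane offset $s\,u_1 + t\,u_2$ to the base point $P_1$ , which carries the plane constant via $P_1 = d\,n$ .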
|geometry|3d|plane-geometry|quaternions|
1
Evaluating $\int_0^{\pi } \frac{\sin (n \sigma )}{(a-\cos \sigma )^2} \, d\sigma$
What is the formula for $$\int_0^{\pi } \frac{\sin (n \sigma )}{(a-\cos \sigma )^2} \, d\sigma$$ where $ a>1 $ and $n$ is a positive integer? To evaluate, I tried to replace $a$ with $\cosh\xi $ in the integral $$\int_0^{\pi } \frac{\sin (n \sigma )}{(\cosh\xi-\cos \sigma )^2} \, d\sigma$$ Please note that I already obtained the formula for a related integral $$\int_0^{\pi } \frac{\cos (n \sigma )}{(a-\cos \sigma )^2} \, d\sigma = \frac{\pi(n \sqrt{a^2-1}+a)}{(a^2-1)^{3/2} (\sqrt{a^2-1}+a)^n} $$
Evaluate \begin{align} K_n(r)= &\int_0^\pi \frac{e^{i n x}}{1-2r\cos x+r^2} \overset{z= e^{ix} }{dx} = \frac1i \int_\gamma\frac{z^n}{(z-r)( 1-rz)}dz\\ \end{align} along the semicircle path $\gamma$ in the upper plane, resulting in \begin{align} I_n(r) =& \int_0^\pi \frac{\cos (n x)}{1-2r \cos x +r^2}dx=\Re K_n(r) = \frac{\pi r^n}{1-r^2}\\ J_n(r)= &\int_0^\pi \frac{\sin (n x)}{1-2r \cos x +r^2}dx= \Im K_n(r)\\ =&\ \frac{{r^{-n}}-r^n} {1-r^2}\ln\frac{1+r}{1-r} -2\sum_{k=1}^{[\frac n2]}\frac{{r^{-n-1+2k}}-r^{n+1-2k}}{(2k-1)(1-r^2)} \end{align} Then, utilize $I_n(r)$ and $J_n(r)$ to evaluate \begin{align} & \int_0^\pi \frac{\cos (n x)}{\cosh\xi -\cos x }dx =2e^{-\xi} I_n(e^{-\xi})=\frac{\pi e^{-n\xi}}{\sinh \xi}\\ \\ &\int_0^\pi \frac{\sin( n x)}{\cosh\xi-\cos x }dx = 2e^{-\xi} J_n(e^{-\xi})\\ =& \ \frac{2\sinh(n\xi)}{\sinh \xi} \ln\left(\coth\frac\xi2\right) -4\sum_{k=1}^{[\frac n2]}\frac{\sinh(n+1-2k)\xi}{(2k-1)\sinh\xi} \end{align} which, via differentiation with respect to $\xi$ , lead
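The closed form $I_n(r)=\frac{\pi r^n}{1-r^2}$ derived above can be checked numerically with a plain midpoint rule (no external libraries; $n$ and $r$ below are arbitrary sample values):

```python
import math

# Midpoint-rule check of I_n(r) = pi * r^n / (1 - r^2) for |r| < 1.
def I_numeric(n, r, steps=200000):
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += math.cos(n * x) / (1 - 2 * r * math.cos(x) + r * r)
    return total * h

n, r = 3, 0.4
closed = math.pi * r**n / (1 - r * r)
assert abs(I_numeric(n, r) - closed) < 1e-6
```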
|integration|trigonometric-integrals|
1
Snake Lemma Weibel 1.3.2
I'm reading An introduction to homological algebra by Charles A. Weibel and I'm trying to prove the snake lemma 1.3.2. Using this diagram (I don't have enough reputation to embed it in my question), it says that $\partial=c_0^{-1} f_1b_1^{-1}$ makes the sequence $\ker(f_0)\to\ker(f_1)\to\ker(f_2)\overset{\partial}{\to}\mathrm{coker}(g_0)\to \mathrm{coker}(g_1)\to \mathrm{coker}(g_2)$ exact. However, using $c_0^{-1}$ and $b_1^{-1}$ would mean that they are isomorphisms, which they can't be, since the sequence is supposed to be exact and $\ker(f_0)\to \ker(f_1)$ isn't the $0$ map. Does anyone know what is meant here, please?
These preimage notations can just mean... preimage, rather than inverse. For example under $f:\Bbb R\to\Bbb R,x\mapsto x^2$ , $4$ has a preimage, $2$ (say) but that does not mean $f$ is an isomorphism. Arguments are made to show the $c_0,b_1$ in question do in fact have preimages under the relevant maps. All of this needs to be suitably interpreted if one works in an arbitrary Abelian category.
|category-theory|homology-cohomology|abelian-categories|
0
Homotopy orbits
Let $\mathcal{C}$ be an $\infty$ -category, and let $G$ be a group. Denote by $\mathcal{C}^{\text{BG}}$ the functor $\infty$ -category $\text{Fun}(\text{BG}, \mathcal{C})$ . The homotopy orbit functor is given by $$-_{\text{hG}}: \mathcal{C}^{\text{BG}}\longrightarrow \mathcal{C}: (F: \text{BG}\longrightarrow \mathcal{C})\mapsto \text{colim}_{\text{BG}}F.$$ What does taking a $\text{colim}$ with respect to $\text{BG}$ mean? What type of object does it represent, and what is the relation of this functor to the classical concept of orbits of groups acting on a set or a space?
The object $BG$ is a space, and hence an $\infty$ -groupoid, so we can index diagrams using it. One particular presentation of this homotopy type is as the Kan complex $N(\mathbf{B}G)$ , where $N(-)$ is the nerve functor and $\mathbf{B}G$ is the $1$ -object category with $\mathrm{Hom}_{\mathbf{B}G}(*,*)=G$ . This means that $N(\mathbf{B}G)_n\cong G^{\times n}$ for all $n\geq 0$ . The simplicial structure maps are the usual ones that a nerve of a category has. So a functor $F\colon BG\to\mathcal{C}$ can be thought of as picking a single object $x\in\mathcal{C}$ , picking for every $g\in G$ an automorphism $\theta_g\colon x\xrightarrow{\sim}x$ , for every $f,g\in G$ a homotopy $\theta_g\theta_f\simeq\theta_{gf}$ , for every $f,g,h\in G$ a further homotopy witnessing our already defined homotopies $\theta_h\theta_g\theta_f\simeq\theta_{hg}\theta_f\simeq\theta_{hgf}$ and $\theta_h\theta_g\theta_f\simeq\theta_h\theta_{gf}\simeq\theta_{hgf}$ are actually themselves homotopic, etc. In this se
|homotopy-theory|simplicial-stuff|higher-category-theory|
0
Measure Theory: Proof regarding a measurability of a function
Is my proof for the following problem correct? Let $f:X\rightarrow[0,\infty)$ and let $X=\bigcup_{i\in\mathbb{N}}A_i$ with each $A_i$ measurable. Prove: $f$ is measurable $\iff$ $f\cdot\chi_{A_i}$ is measurable for every $i\in\mathbb{N}$ . Proof: " $\implies$ " Let f be $\lim \limits_{n \to \infty}s_n(x)=f(x)$ $\forall x\in X$ (where $s_n$ is a sequence of simple functions). Then the following holds: $\int_Xf(x)\:d\mu=\int_{\bigcup_{i\in\mathbb{N}}A_i}f(x)\:d\mu=\sum_{i=1}^n\int_{A_i}f(x)\:d\mu=\sum_{i=1}^n\int_Xf(x)\cdot\chi_{A_i}d\mu$ . " $\impliedby$ " Let $f\cdot\chi_{A_i}$ be measurable. Then the following holds: $\int_Xf(x)\cdot\chi_{A_i}\:d\mu=\int_{\bigcup_{i\in\mathbb{N}}A_i}f(x)\cdot\chi_{A_i}\:d\mu=\sum_{i=1}^n\int_{A_i}f(x)\cdot\chi_{A_i}\:d\mu=\sum_{i=1}^n\int_{X}f(x)\cdot\chi_{A_i}\cdot\chi_{A_i}\:d\mu=$ $\sum_{i=1}^n\int_{X}f(x)\cdot\chi_{A_i}\:d\mu=\sum_{i=1}^n\int_{A_i}f(x)\:d\mu=\int_{\bigcup_{i\in\mathbb{N}}A_i}f(x)\:d\mu=\int_X f(x)\:d\mu.$ $\blacksquare$
It's not true that $ \int _ { \bigcup _ i A _ i } f = \sum _ i \int _ { A _ i } f $ unless the $ A _ i $ are all disjoint (or at least have intersections of measure zero). And even then, I'm not sure how you would prove that integrals decompose like this without already knowing at least the $ \Longrightarrow $ direction of this theorem, unless maybe you make measurability on each set a hypothesis (in which case this doesn't help you). More than that, thinking about integrals is bound to be a bad approach to this. Measurability is about the algebra of measurable sets, not the measure. (In fact, if you use a measure to prove the theorem, then you're relying on the theorem that every measurable space has a measure on it; which is true, but have you proved that yet?) You began with a totally different approach, writing $ f $ as the limit of a sequence of simple measurable functions. This seems more promising to me. For each $ i $ , you can convert this sequence into a sequence of simple me
|real-analysis|calculus|integration|measure-theory|multivariable-calculus|
0
Proposition 2.10 Atiyah MacDonald
I have been self-studying from Atiyah and Macdonald's Introduction to Commutative Algebra and am having difficulty with the following proposition: Proposition 2.10 Let $$ \require{AMScd} \begin{CD} 0 @>>> M^{'} @>{u}>> M @>{v}>> M^{''} @>>> 0 \\ @. @V{f^{'}}VV @V{f}VV @VV{f^{''}}V \\ 0 @>>> N^{'} @>{u^{'}}>> N @>{v^{'}}>> N^{''} @>>> 0 \end{CD}$$ be a commutative diagram of $A$ -modules and homomorphisms, with the rows exact. Then there exists an exact sequence $$0\longrightarrow \mathrm{Ker}(f')\stackrel{\bar{u}}\longrightarrow \mathrm{Ker}(f) \stackrel{\bar{v}}\longrightarrow \mathrm{Ker}(f^{''})\stackrel{d}\longrightarrow \mathrm{Coker}(f^{'})\stackrel{\bar{u}^{'}}\longrightarrow \mathrm{Coker}(f)\stackrel{\bar{v}^{'}}\longrightarrow \mathrm{Coker}(f'') \longrightarrow 0$$ in which $\bar{u}, \bar{v}$ are restrictions of $u, v$ , and $\bar{u}^{'}, \bar{v}^{'}$ are induced by $u^{'}, v^{'}$ . Here, $d: \mathrm{Ker}(f'') \to \mathrm{Coker}(f')$ is defined by $x'' \mapsto y+\mathrm{Im}(f')$ .
Suppose $x'' \in \ker(d)$ . As you note, we then know that there is some $x \in M$ and $y \in N'$ such that $u'(y) = f(x)$ , $v(x) = x''$ , and $y \in \operatorname{img}(f')$ . We want to find some $a \in M$ such that $v(a) = x''$ and $f(a) = 0$ . Whatever $a$ ends up being, we will have $v(a-x) = v(a) - v(x) = x'' - x'' = 0$ , so we will have $a-x \in \ker(v) = \operatorname{img}(u)$ . So we will try to construct $a$ as $x + b$ for some clever choice of $b \in \operatorname{img}(u)$ . What property will we need $b$ to satisfy? Well, we just need that $f(a) = 0$ , and we know $f(x) = u'(y)$ , so we will need $f(b) = -u'(y)$ . If we want $b \in \operatorname{img}(u)$ , we really will need to find some $z \in M'$ , and then set $b = u(z)$ . Then our desired condition reads $f(u(z)) = -u'(y)$ . By commutativity of the diagram, this is the same as $u'(f'(z)) = -u'(y)$ . So wouldn't it be nice if we could choose $f'(z) = -y$ ? We can! We already know $y \in \operatorname{img}(f')$ (this is t
|commutative-algebra|modules|exact-sequence|
1
Integral $\int_0^{1/\sqrt{2}}\frac{K\left[\sqrt{1-k^2}\right]}{\sqrt{1-2k^2}\sqrt{1-k^2}}dk$
$$\int_0^{1/\sqrt{2}}\frac{K\left[\sqrt{1-k^2}\right]}{\sqrt{1-2k^2}\sqrt{1-k^2}}dk=\frac{\Gamma^2(1/8)\Gamma^2(3/8)}{32\pi}$$ With $K$ as the Complete Elliptic Integral of the First Kind, $$K:=K(k)=\int_0^{\pi/2}\frac{1}{\sqrt{1-(k\sin t)^2}}dt$$ Define, $$I(k)=\int_0^{\pi/2}\frac{K\left[\sqrt{1-(k\sin t)^2}\right]}{\sqrt{1-(k\sin t)^2}}dt$$ I was interested in this Integral because while researching, I found out that this one has a really interesting Fourier-Legendre Expansion. Denote, $$P_n(x)=\text{LegendreP}_n[1-2x^2]$$ (The notation might look a bit weird but to me it made more sense this way) or we can have this closed form, $$P_n(x)=\sum_{r=0}^n(-1)^r\frac{(n+r)!}{(n-r)!}\frac{x^{2r}}{r!r!}$$ So we have this expansion, $$I(k)=\frac{\pi^2}{2}\sum_{n=0}^{\infty}\binom{2n}{n}^2\frac{P_{2n}(k)}{2^{4n}}$$ And we also have the following, $$P_{2n}(1/\sqrt{2})=\binom{2n}{n}\frac{(-1)^n}{2^{2n}}$$ So this allows us the evaluation, $$I(1/\sqrt{2})=\frac{\pi^2}{2}\sum_{n=0}^{\infty}\binom{2
We may define $$ K_{4}(x) =\frac{K\left (\sqrt{\frac{2x}{1+x} } \right ) }{ \sqrt{1+x} } =\frac{\pi}{2}\,_2F_1\left ( \frac14,\frac34;1;x^2 \right ). $$ And consider the contour integral on the quarter circle in the first quadrant, giving $$ \left ( \int_0^1+\int_{1}^{\infty} \right ) \frac{K_4(x)}{\sqrt{1-x^2} }\text{d}x =i\int_{0}^{\infty} \frac{K_4(ix)}{\sqrt{1+x^2} }\text{d}x. $$ Now that $$ \int_0^1 \frac{K_4(x)}{\sqrt{1-x^2} }\text{d}x =\sqrt{2} \int_{0}^{1} \frac{K^\prime(x)}{\sqrt{1+x^2} } \text{d}x=\text{solvable}, $$ $$ \int_1^\infty \frac{K_4(x)}{\sqrt{1-x^2} }\text{d}x = \int_{1}^{\infty} \frac{1}{\sqrt{2x} } \left ( K\left ( \sqrt{\frac{1+x}{2x}} \right ) +i K\left ( \sqrt{\frac{1-x}{2x}} \right ) \right ) \frac{i}{\sqrt{x^2-1} } \text{d}x=\frac{i}{\sqrt{2} } \int_{0}^{1} \frac{K\left ( \sqrt{\frac{1+x}{2} } \right ) +iK\left ( \sqrt{\frac{1-x}2} \right ) }{\sqrt{x\left ( 1-x^2 \right ) } } \text{d} x,$$ $$ \int_{0}^{\infty} \frac{K_4(ix)}{\sqrt{1+x^2} } \text{d}x =\int_{0
|integration|definite-integrals|special-functions|elliptic-integrals|
1
An upper and lower bound resulting from the Lipschitzness of the gradient of a function
Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ be a continuously differentiable function and assume its gradient $\nabla f$ is Lipschitz continuous with a constant $L$ , i.e., for every $x_1,x_2 \in \mathbb{R}^n$ , it holds that $$ \left\| \nabla f(x_1) - \nabla f(x_2) \right\| \leq L \|x_1 - x_2\|. $$ I am reading a paper that claims that because $\nabla f$ is Lipschitz, the following also holds: $$ \nabla f \cdot (x_1 - x_2) + \frac{L}{2} \|x_1 - x_2\|^2 \geq f(x_1) - f(x_2) \geq \nabla f \cdot (x_1 - x_2) - \frac{L}{2} \|x_1 - x_2\|^2. $$ First, at what point is $\nabla f$ being computed? Second, how do we prove these bounds?
The paper should describe at what point $\nabla f$ is computed. One guess is as follows, where we compute $\nabla f$ at the midpoint $\overline{x} = \frac{x_1 + x_2}{2}$ . To prove the inequalities, by the mean value theorem one has $$f(x_1) = f(x_2) + \nabla f(x^*) \cdot (x_1 - x_2)$$ for some $x^*$ on the line segment connecting $x_1, x_2$ . Clearly, the distance to the midpoint satisfies $$\|x^* - \overline{x}\| \leq \frac{1}{2} \|x_1 - x_2\|.$$ Hence \begin{align*} & |f(x_1) - f(x_2) - \nabla f(\overline{x}) \cdot (x_1 - x_2)| \\ = \, & |(\nabla f(x^*) - \nabla f(\overline{x})) \cdot (x_1 - x_2)| \\ \leq \, & \|\nabla f(x^*) - \nabla f(\overline{x}) \| \|x_1 - x_2\| \\ \leq \, & L \|x^* - \overline{x}\| \|x_1 - x_2\| \\ \leq \, & \frac{L}{2} \|x_1 - x_2\|^2 \end{align*} as desired.
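The two-sided bound with the gradient at the midpoint can be spot-checked numerically. The one-dimensional example below is mine, not from the paper: $f(x)=\log(1+e^x)$ , whose derivative (the sigmoid) is Lipschitz with constant $L=1/4$ :

```python
import math, random

# Check |f(x1) - f(x2) - f'(mid) (x1 - x2)| <= (L/2) (x1 - x2)^2
# for f(x) = log(1 + e^x), f' = sigmoid, L = 1/4.
f = lambda x: math.log1p(math.exp(x))
grad = lambda x: 1 / (1 + math.exp(-x))
L = 0.25

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    mid = (x1 + x2) / 2
    gap = f(x1) - f(x2) - grad(mid) * (x1 - x2)
    assert abs(gap) <= L / 2 * (x1 - x2) ** 2 + 1e-12
```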
|real-analysis|
1
Proving a function is not regulated
I'm struggling to show that the function $f:[0,1] \rightarrow \mathbb{R}$ given by $$f(x)=\begin{cases}\frac{1}{1-x} & 0 \leq x < 1\\ 0 & x = 1\end{cases}$$ is not regulated. Any help would be much appreciated.
The function does not have a (finite) left limit at the point $x=1$ , and is thus not regulated. (c.f. https://en.wikipedia.org/wiki/Regulated_function ) Intuitively, this says that if your function genuinely diverges near a point, superficially redefining its value at that single point is not going to make it regulated: it is still unbounded near $x=1$ .
|real-analysis|
0
Proving Non-Isomorphism between $\mathbb{Q}$ and $\mathbb{Z}$ Using POSET Concepts
Hello fellow mathematicians! I am seeking assistance in proving that the groups $\mathbb{Q}$ (the group of rational numbers under addition) and $\mathbb{Z}$ (the group of integers under addition) are not isomorphic. However, I would like to approach this problem using the concepts of partially ordered sets (POSET). In particular, I am interested in utilizing the order structure of the groups to establish their non-isomorphism. My intuition tells me that examining the existence of certain elements and their orders might provide valuable insights. Here's what I have considered so far: In the group $\mathbb{Q}$ , we know that the order of an element $\frac{a}{b}$ , where $a$ and $b$ are coprime integers, is equal to the absolute value of $b$ . How does this relate to the order structure of $\mathbb{Z}$ ? The group $\mathbb{Z}$ is known to contain elements of arbitrary large order. Can we observe a similar property in $\mathbb{Q}$ ? Are there any inherent properties of POSETs that can help
A group does not generally come with a (compatible) poset structure. Even if it did, there is no reason to expect it to tell you about two groups not being isomorphic. However, you can consider the poset of subgroups $\text{Sub}(G)$ of a group $G$ , ordered by inclusion. If two groups $G_1$ and $G_2$ are isomorphic and $f:G_1\to G_2$ is an isomorphism, then it induces an increasing bijection $\bar f:\text{Sub}(G_1)\to \text{Sub}(G_2)$ where $\bar f(H) := \{f(h) : h\in H\}$ for $H\in \text{Sub}(G_1)$ . In other words, $\text{Sub}(G_1)$ and $\text{Sub}(G_2)$ are isomorphic as posets, isomorphism in this case being an increasing bijection. Now let's look at the posets $\text{Sub}(\mathbb{Z})$ and $\text{Sub}(\mathbb{Q})$ . Descriptions for subgroups of both of those groups exist... but the caveat is that $\text{Sub}(\mathbb{Q})$ is way harder to describe. If $H\in \text{Sub}(\mathbb{Z})$ , then there exists a unique $n\in \mathbb{N}_0$ such that $H = n\mathbb{Z}$ , and so $\text{Sub}(\mathbb
|group-theory|order-theory|
1
How to show that $\sum_{\rho} 1/|\rho| = \infty$?
Just after Theorem 10.13 in the book Multiplicative number theory I: Classical theory by Hugh Montgomery and Robert C. Vaughan, the following two statements are made without proof. Perhaps they are simple to prove, but I have no idea of the proofs: Since $N(T) \ll T \log T$ , then $\sum_{\rho} |\rho|^{-A} < \infty$ for all $A>1$ , and $\sum_{\rho} 1/|\rho| = \infty$ . Here $N(T)$ is the number of zeros of the Riemann zeta function in the rectangle $0 < \sigma < 1$ , $0 < t \leq T$ , and the $\rho$ 's are zeros of the zeta function. How does one prove these two claims? Hints would also be much appreciated.
This is a consequence of partial summation. Note that $$\sum_{\rho} |\rho|^{-A} = \lim_{T \to \infty} \sum_{0 < \Im \rho \leq T} |\rho|^{-A}.$$ Integration by parts shows that this equals $$\lim_{T \to \infty} \left[ \frac{N(T)}{T^A} + A\int_0^T \frac{N(t)}{t^{A+1}} dt \right].$$ Since $N(T) \ll T \log T$ , if $A > 1$ we have $$\frac{N(T)}{T^A} = O\left(\frac{\log T}{T^{A-1}}\right) \to 0$$ as $T \to \infty$ and $$\frac{N(t)}{t^{A+1}} = O\left(\frac{\log t}{t^A}\right)$$ is integrable as $t \to \infty$ . Hence $$\sum_{\rho} |\rho|^{-A} = A\int_0^{\infty} \frac{N(t)}{t^{A+1}} dt$$ is finite. Note that $\sum_{\rho} |\rho|^{-1} = \infty$ does not follow from $N(T) \ll T \log T$ alone (pretend for a moment that the Riemann zeta function had only finitely many zeros; then $N(T) \ll T \log T$ would of course still be true, but $\sum_{\rho} |\rho|^{-1}$ would converge). You would need $N(T) \asymp T \log T$ for that claim.
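The partial-summation identity used here can be verified exactly on a toy counting function, taking the "zeros" to be the integers $t_k = k$ , so that $N(t) = \lfloor t\rfloor$ :

```python
# Check: sum_{t_k <= T} t_k^(-A) = N(T)/T^A + A * integral_0^T N(t) t^(-A-1) dt
# for the points t_k = k, where N(t) = floor(t). The integral is evaluated
# exactly piece by piece, since N is a step function:
#   A * integral_k^{k+1} t^(-A-1) dt = k^(-A) - (k+1)^(-A).

A, T = 2.0, 50

lhs = sum(k ** (-A) for k in range(1, T + 1))

integral_term = sum(k * (k ** (-A) - (k + 1) ** (-A)) for k in range(1, T))
rhs = T / T ** A + integral_term   # N(T) = T here

assert abs(lhs - rhs) < 1e-12
```

The same telescoping works for any step counting function, which is exactly the integration-by-parts step in the answer.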
|sequences-and-series|complex-analysis|convergence-divergence|riemann-zeta|
1
Prove two definitions of strongly convex functions are equivalent
Prove that the following definitions of a strongly convex function $f: \mathbb{R}^n \to \mathbb{R}$ are equivalent: \begin{gather*} f(y) \ge f(x) + \nabla f(x)^T (y-x) + \frac{\mu}{2} {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \tag{A} \\ \end{gather*} \begin{gather*} (\nabla f(x) - \nabla f(y))^T (x-y) \ge \mu {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \tag{B} \\ \end{gather*} $(A) \Rightarrow (B)$ is simple. The first three steps are basic algebra. The fourth step is summing the previous two rows. \begin{gather*} f(y) - f(x) - \nabla f(x)^T (y-x) \ge \frac{\mu}{2} {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \\ f(x) - f(y) - \nabla f(y)^T (x-y) \ge \frac{\mu}{2} {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \\ f(y) - f(x) + \nabla f(x)^T (x-y) \ge \frac{\mu}{2} {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \\ (\nabla f(x) - \nabla f(y))^T (x-y) \ge \mu {\lVert y-x \rVert}_2^2, \quad \forall x,y \in \mathbb{R}^n \end{gather*}
Notice the similarity between condition $(A)$ and the necessary and sufficient first-order condition for convexity and the similarity of condition $(B)$ and the monotone gradient condition for convexity. This is not a coincidence. If you let $g(x)=f(x)-\frac{\mu}{2}\|x\|^2$ it's easy to see that condition $(B)$ implies that $$(\nabla g(x) - \nabla g(y))^T (x-y) \geq 0 \quad \forall\, x,y\in \mathbb{R}^n$$ This is precisely the monotone gradient condition for convexity of $g(x)$ . Since $g(x)$ is a convex differentiable function the first-order condition must hold, namely $$g(y) \geq g(x) + \nabla g(x)^T (y - x) \quad \forall\, x,y\in \mathbb{R}^n$$ If you expand the above inequality you will arrive at condition $(A)$ . The same argument could be used to prove the other implication: $(A)$ implies that $g(x)$ is convex and so the monotone gradient condition for $g(x)$ must hold.
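Both definitions can be spot-checked numerically on a concrete strongly convex example. The one-dimensional choice below is mine, for illustration: $f(x)=x^2+\log(1+e^x)$ , which has $f''(x) = 2 + \sigma(x)(1-\sigma(x)) \ge 2$ , so $\mu = 2$ :

```python
import math, random

# f(x) = x^2 + log(1 + e^x) is strongly convex with mu = 2 in 1-D.
f = lambda x: x * x + math.log1p(math.exp(x))
grad = lambda x: 2 * x + 1 / (1 + math.exp(-x))
mu = 2.0

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-4, 4), random.uniform(-4, 4)
    # Definition (A): quadratic lower bound.
    assert f(y) >= f(x) + grad(x) * (y - x) + mu / 2 * (y - x) ** 2 - 1e-12
    # Definition (B): strong monotonicity of the gradient.
    assert (grad(x) - grad(y)) * (x - y) >= mu * (y - x) ** 2 - 1e-12
```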
|linear-algebra|convex-optimization|
1