| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Given matrices H, K that are Hermitian, with K=AH, H nonnegative, is A necessarily normal?
|
Given Hermitian matrices $H$ and $K$ related by $AH=K$, with $H$ nonnegative, is it necessarily true that $A$ is normal? $H$ may be assumed to be positive definite if that helps. Edit: additional answer when $K$ and $H$ are both positive definite Hermitian. We have $AH=HA^*$ (take adjoints of $AH=K$ and use $K^*=K$). $A$ is invertible and has no eigenvalue on the negative real axis; in fact all of the eigenvalues of $A$ must be positive real. Therefore we can determine a square root $A^{1/2}$ (using the Schur decomposition $A=SUS^*$, with $S$ unitary and $U$ upper triangular). Similarly we find $(A^*)^{1/2}$. Now we may form $K_1=A^{1/2}K(A^*)^{-1/2}=A^{1/2}H(A^*)^{1/2}$, which is positive. Any pair of Hermitian positive definite matrices $H_1$ and $H_2$ are conjugate via $H_2=TH_1T^*=TH_1T$ with $T$ Hermitian. (Proof below.) Thus we have $K_1=THT^*=THT$ with $T$ Hermitian and positive. Combining, we obtain $THT=A^{1/2}H(A^*)^{1/2}$. Let $S=A^{-1/2}T$. Then $SHS^*=H$. Let $H=LL^*$ be the unique Cholesky decomposition of $H$
|
Proposed (partial) answer. Let $H$ have distinct positive eigenvalues. Then a basis for the space of matrices can be taken to be the set of outer products $v_iv_j^*$. The null space of the operator $L(A)=H^2A - AH^2$ is spanned by the basis elements $B_{ii}=v_iv_i^*$. I believe that the other elements cannot be combined to create a null vector, so the only null vectors (i.e., the matrices $A$ being sought) are linear combinations of the "diagonal" basis elements, and hence a function of $H$. If a linear combination of the non-diagonal basis vectors $B_{ij}=v_iv_j^*$, $i\neq j$, say $\sum_{i\neq j} c_{ij}B_{ij}$, were a null vector of the operator $L$, the result would contradict the linear independence of the basis vectors. This partial answer relies on the relation $A^*H=HA$, which requires the conjugation function to be defined for $A$.
|
|linear-algebra|matrix-equations|
| 0
|
Fourier transform of function composition
|
Given two functions $f$ and $g$, is there a formula for the Fourier transform of $f \circ g$ in terms of the Fourier transforms of $f$ and $g$ individually? I know you can do this for the sum, the product, and the convolution of two functions, but I haven't seen a formula for the composition of two functions.
|
Starting from Lee's answer, we can write the following: \begin{equation*} P(k,l) = \int_{x \in \mathcal{R}} e^{i2\pi(lg(x) - kx)}\,dx \end{equation*} \begin{equation*} P(k,l) = \int_{x \in \mathcal{R}} e^{i2\pi \, l \, g(x)} \, e^{-i2\pi kx} \,dx \end{equation*} The Jacobi-Anger expansion tells us the following: \begin{equation*} e^{i z \cos \theta} \equiv J_0(z)\, +\, 2\, \sum_{n=1}^{\infty}\, i^n\, J_n(z)\, \cos\, (n \theta) \end{equation*} then \begin{equation*} e^{i z g(x)} \equiv J_0(z)\, +\, 2\, \sum_{n=1}^{\infty}\, i^n\, J_n(z)\, \cos\, (n \arccos(g(x))) \end{equation*} \begin{equation*} e^{i z g(x)} \equiv J_0(z)\, +\, 2\, \sum_{n=1}^{\infty}\, i^n\, J_n(z)\, T_n \big( g(x) \big) \end{equation*} where $T_n$ is the $n$th Chebyshev polynomial and $J_n$ the $n$th Bessel function of the first kind. If we use it in the expansion of $P(k,l)$, we have: \begin{equation*} P(k,l) = \int_{x \in \mathcal{R}} \Big[ J_0(2\pi\,l)\, +\, 2\, \sum_{n=1}^{\infty}\, i^n\, J_n(2\pi\,l)\, T_n \big( g(
|
|fourier-analysis|
| 0
|
What are the cases I need to consider when using the Maclaurin expansion for log(1+x)?
|
While expanding $\log(1+x)$ as an infinite sum using the Maclaurin series, I understand that I need to satisfy the condition that Lagrange's form of the remainder $R_n$ tends to zero as $n$ tends to infinity. But how do I take up the cases for $x$ such that the above condition is satisfied? I tried considering positive and negative cases but am not getting anywhere; any hint would be highly appreciated. PS: please guide me on how to put my doubts clearly, as I'm unable to post any new questions because, as per Stack Exchange, I need to improve the questions asked so far, and I don't want to open a new account to ask new questions.
|
You are probably referring to the Mercator series and its radius of convergence.
|
|logarithms|taylor-expansion|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angle triangle with hypotenuse $K$ and legs $M$ and $N$, the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ is always true. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N}=\frac{K(M+N)}{MN}$, and since we have a triangle, the triangle inequality gives $M+N\ge K$, which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$, so we get $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired result, which is $2\sqrt 2$, so any help or another approach is much appreciated.
|
Just another way: $$\left(\frac1{\sin x}+\frac1{\cos x} \right)^2=\frac{\sin^2x+\cos^2x+2\sin x\cos x}{\sin^2x \cos^2x}=\frac1{\frac14\sin^2 2x}+\frac2{\frac12\sin 2x} \geqslant4+4=8,$$ hence $\frac1{\sin x}+\frac1{\cos x}\geqslant 2\sqrt 2$.
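The bound can be sanity-checked numerically; a small Python sketch (the helper `s` and the grid are ours, not part of the answer):

```python
import math

# f(x) = 1/sin x + 1/cos x on (0, pi/2); its minimum should be 2*sqrt(2), at x = pi/4.
def s(x):
    return 1 / math.sin(x) + 1 / math.cos(x)

grid = [k * math.pi / 2 / 10000 for k in range(1, 10000)]
minimum = min(s(x) for x in grid)
print(minimum)  # ~2.8284..., i.e. 2*sqrt(2)
```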
|
|inequality|trigonometry|
| 0
|
Can this presentation of reflection be considered foundational?
|
Working in the first-order language of set theory. By $R$-bounded quantifiers is meant those of the form $\forall x \ R \ a \, ( \cdots)$ or $\exists x \ R \ a \, ( \cdots)$, and these are defined as $\forall x \, ( x \ R \ a \to \cdots)$ and $\exists x \, (x \ R \ a \land \cdots)$ respectively. Here $R$ is a relation symbol. A quantifier is defined here to be open if it is neither $\in$-bounded nor $\subseteq$-bounded. Let $\varphi^{\mathcal V|}$ be a formula obtained from $\varphi$ by merely $\in$-bounding some open quantifiers in $\varphi$ by the symbol "$\mathcal V$" in a manner that is closed anteriorly; that is, if an open quantifier is bounded by $\mathcal V$, then all open quantifiers to the left of it must also be bounded by $\mathcal V$. Anterior Bounded Reflection: if $\varphi$ is a formula that doesn't use the symbol "$\mathcal V$", then: $$ \forall \vec{a} \, (\varphi \to \exists \mathcal V : \varphi^{\mathcal V|})$$ This by itself can prove: Pairing, Uni
|
Seeing how this can be expressed easily and in such a parsimonious manner, is it eligible to be placed among the main basic principles of set theory, i.e. at axiomatic or near-axiomatic level? My 2c, as for one thing I am certainly not an expert in set theory: On the "philosophical" side, I would not think there is a definite answer: what one adopts as a "principle" is partly in the name of economy and easy proofs, partly in the name of understandability and cogency; in other words, for a clear informal reading. For example, take the "Sheffer stroke", which is alone sufficient to bootstrap boolean arithmetic, but makes understanding more difficult; e.g. just think how "not" is expressed in those terms. Indeed, for "principled" definitions, I would think the semantic side is eventually more important than the syntactic one. On the technical side, there is indeed, relevant to set theory, a "reflection principle" (thank you!). And I would think the axiom you present might in
|
|soft-question|set-theory|first-order-logic|axioms|foundations|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angle triangle with hypotenuse $K$ and legs $M$ and $N$, the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ is always true. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N}=\frac{K(M+N)}{MN}$, and since we have a triangle, the triangle inequality gives $M+N\ge K$, which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$, so we get $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired result, which is $2\sqrt 2$, so any help or another approach is much appreciated.
|
IDEA 1: Using the AM–GM inequality: $$ \frac{\frac{1}{\sin x}+\frac{1}{\cos x}}{2}\ge \sqrt{\frac{1}{\sin x}\cdot\frac{1}{\cos x}}=\sqrt{\frac{2}{2\sin x\cos x}}=\sqrt{2}\cdot\frac{1}{\sqrt{\sin 2x}} $$ We know that $0<\sin 2x\le 1$, given that $0<x<\frac{\pi}{2}$. Then $\frac{1}{\sqrt{\sin 2x}}\ge 1$. Thus $$ \frac{\frac{1}{\sin x}+\frac{1}{\cos x}}{2}\ge \sqrt{2}\Rightarrow \frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2. $$ Suppose we have a triangle; then $$ \frac{K}{M}+\frac{K}{N}=\frac{1}{\frac{M}{K}}+\frac{1}{\frac{N}{K}}=\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2. $$ IDEA 2: Using the AM–GM inequality and $x+\frac{1}{x}\ge 2$ for all $x>0$: $$ \frac{\frac{K}{M}+\frac{K}{N}}{2}\ge \sqrt{\frac{K}{M}\cdot \frac{K}{N}}=\sqrt{\frac{K^2}{MN}}=\sqrt{\frac{M^2+N^2}{MN}}=\sqrt{\frac{M}{N}+\frac{N}{M}}\ge \sqrt 2\Rightarrow \frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2. $$
|
|inequality|trigonometry|
| 0
|
How many times will I get X before getting Y N times or Z 1 time
|
First, I come from a Computer Engineering (software) background, not Math, so any math formulas that aren't PEMDAS aren't going to be super helpful or insightful for me. Now, the problem I'm trying to solve. I'm porting some old software over to a new environment and I need the behavior to remain the same, but I'd like to improve the efficiency of the algorithm over the original naive approach:

```
Pass = 0
Fail = 0
while Pass < Total:
    roll = Random(1, 100)
    if roll < 97:
        Pass += 1
    elif roll > 97:
        Fail += 1
    else:            # roll == 97
        Pass = Total
return Pass, Fail
```

I'll try to state the algorithm in English here. Roll a 100-sided die until Total passes have been achieved. If the result of the roll is less than 97 it's a Pass; if the result is more than 97 it's a Fail; if the result is 97, no more Fails can occur. I'd like to completely get rid of the loop by calculating the probability of a Fail result for any number of Totals. Failures = Weighted Random( 1 to Total, with weights[ 1 to Total ] ) I'm unsure how to calculate the weights. 3% of the time the Result should be Fail until 1% of
|
The probability of $n$ failures when your total is $t$ is given by $$ P(n, t)= \binom{t+n-1}n(0.96)^t(0.03)^n+(0.01)\sum_{k=0}^{t-1}\binom{n+k}k(0.96)^k(0.03)^n $$ You will need to make a decision about what will be your highest value of $n$. Brief explanation. Either you make $n+t$ rolls with $t$ passes and $n$ fails, with the last roll being a pass, or you make $n+k+1$ rolls with $k$ passes (summing over all $k<t$) and $n$ fails, and the last roll is a 97.
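A Python sketch of this formula (the function name `p_fails` is ours); two sanity checks are that $P(0,1)=0.96+0.01$ and that, for fixed $t$, the probabilities over all $n$ sum to $1$:

```python
from math import comb

# P(n, t): probability of exactly n fails while accumulating t passes, where a
# roll is a pass w.p. 0.96, a fail w.p. 0.03, and a 97 (w.p. 0.01) ends fails.
def p_fails(n, t, p_pass=0.96, p_fail=0.03, p_stop=0.01):
    direct  = comb(t + n - 1, n) * p_pass**t * p_fail**n
    stopped = p_stop * sum(comb(n + k, k) * p_pass**k for k in range(t)) * p_fail**n
    return direct + stopped

print(p_fails(0, 1))                            # 0.96 + 0.01 = 0.97
print(sum(p_fails(n, 5) for n in range(500)))   # ~1.0
```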
|
|probability|geometric-distribution|
| 1
|
Change of Basis over $\mathbb{Z}$
|
$[v]_B$: coordinate vector of $v$ with respect to $B$. $[v]_C$: coordinate vector of $v$ with respect to $C$. ${}_C[ Id_V ]_B$: change of basis matrix from $B$ to $C$. For example, we have the following sets of vectors in $\mathbb{Z}_3^3$: $B = \{[1\ 0\ 2]^T, [2\ 1\ 1]^T, [1\ 1\ 1]^T\}$, $C = \{[1\ 1\ 1]^T, [2\ 1\ 1]^T, [1\ 1\ 2]^T\}$. How can I compute the change of basis from $B$ to $C$ using the following formula: ${}_C[ Id_V ]_B [v]_B = [v]_C$, where ${}_C[ Id_V ]_B = [\,[b_1]_C \cdots [b_n]_C\,]$? Again, $V$ is $\mathbb{Z}_3^3$ and $Id_V : V \rightarrow V$. I think the formula is a fairly simple computation, but the confusing part is the change of coordinate vectors, namely ${}_C[ Id_V ]_B = [\,[b_1]_C \cdots [b_n]_C\,]$. Also, I have checked that the given sets of vectors do form bases of $\mathbb{Z}_3^3$.
|
${}_C[Id_V]_B = {}_C[Id_V]_I \, {}_I[Id_V]_B=C^{-1}B = \begin{bmatrix} 2&0&2 \\ 1&2&0 \\ 0&2&1\end{bmatrix} \begin{bmatrix} 1&2&1 \\ 0&1&1 \\ 2&1&1\end{bmatrix} = \begin{bmatrix} 0&0&1 \\ 1&1&0 \\ 2&0&0\end{bmatrix}$
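The mod-3 arithmetic can be verified in a few lines of Python (a sketch; the helper `matmul_mod` is ours):

```python
# Verify the mod-3 computation: Cinv * B equals the stated change-of-basis
# matrix, working entrywise in Z_3 (plain Python, no libraries).
def matmul_mod(A, B, p=3):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

B    = [[1, 2, 1], [0, 1, 1], [2, 1, 1]]   # columns are the basis vectors of B
C    = [[1, 2, 1], [1, 1, 1], [1, 1, 2]]   # columns are the basis vectors of C
Cinv = [[2, 0, 2], [1, 2, 0], [0, 2, 1]]   # claimed inverse of C over Z_3

print(matmul_mod(C, Cinv))   # identity mod 3
print(matmul_mod(Cinv, B))   # [[0, 0, 1], [1, 1, 0], [2, 0, 0]]
```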
|
|linear-algebra|matrices|change-of-basis|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angle triangle with hypotenuse $K$ and legs $M$ and $N$, the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ is always true. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N}=\frac{K(M+N)}{MN}$, and since we have a triangle, the triangle inequality gives $M+N\ge K$, which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$, so we get $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired result, which is $2\sqrt 2$, so any help or another approach is much appreciated.
|
You can note that: $$\frac{K}{M} + \frac{K}{N} = \frac{K(M+N)}{MN} = \frac{\sqrt{M^2+N^2}(M+N)}{MN}\geq \frac{\sqrt{2MN}\cdot 2\sqrt{MN}}{MN} = 2\sqrt{2}$$ where we used the classic inequality $\forall a,b\in\mathbb{R}: a^2+b^2\geq 2ab$ (applied once as $M^2+N^2\ge 2MN$ and once as $M+N\ge 2\sqrt{MN}$).
|
|inequality|trigonometry|
| 1
|
Expected value of normally distributed random variable with "reroll"
|
I am trying to model the following random variable $Z$ : Values get drawn from a normally-distributed random variable $X$ with mean 0 and variance $\sigma^2$ . If the drawn value is outside the interval $[a,b]$ , a value is drawn instead from a uniformly-distributed random variable $Y$ on the same interval. I think the probability density function of $Z$ can be expressed as $$ f(x) = \begin{cases} \phi_\sigma(x) + \left( 1-\Phi_{\sigma}(x) \right) \frac{1}{b-a} & \text{for $x \in [a,b]$} \\ 0 & \text{otherwise} \end{cases} $$ where $\phi_\sigma$ and $\Phi_\sigma$ are the probability and cumulative density functions of the normal distribution, resp. I am trying to determine the expected value of this random variable. What I have so far is: $$ \mathbb{E}(Z)= \int{xf(x)dx}\\ =\int{x\phi_\sigma(x)}dx+\int{x\frac{1-\Phi_{\sigma}(x)}{b-a}}dx \\ =\mathbb{E}(X)+\int{x\frac{1-\Phi_{\sigma}(x)}{b-a}}dx \\ =\mathbb{E}(X)+\mathbb{E}(Y)-\int{x\frac{\Phi_\sigma(x)}{b-a}}dx \\ $$ where $X\sim \mathca
|
It is easier to avoid mistakes if you try to find the CDF first. Let $X$ be the standard normal, let $Y$ be the uniform variable, and then let $Z=\begin{cases} X & X \in [a,b] \\ Y & \text{otherwise} \end{cases}$. So you have $$P(Z \leq z)=P(Z \leq z \mid X \in [a,b]) P(X \in [a,b]) + P(Z \leq z \mid X \not \in [a,b]) P(X \not \in [a,b]) \\ = P(X \leq z \mid X \in [a,b]) P(X \in [a,b]) + P(Y \leq z \mid X \not \in [a,b]) P(X \not \in [a,b]) \\ = P(X \in [a,b] \cap (-\infty,z]) + P(Y \leq z) P(X \not \in [a,b]).$$ You can express those in terms of $\Phi$. Then you can differentiate that to get the density.
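A sketch of the resulting CDF in Python, using this decomposition with illustrative endpoints $a=-1$, $b=2$ (the function `cdf_Z` and those values are our choices); it is monotone and reaches $1$ at $b$:

```python
from statistics import NormalDist

# F(z) = P(X in [a,b], X <= z) + P(Y <= z) * P(X not in [a,b]),
# with X standard normal and Y uniform on [a, b]; here a = -1, b = 2.
def cdf_Z(z, a=-1.0, b=2.0):
    Phi = NormalDist().cdf
    p_out = 1.0 - (Phi(b) - Phi(a))
    gauss_part = max(0.0, Phi(min(z, b)) - Phi(a))
    unif = min(max((z - a) / (b - a), 0.0), 1.0)
    return gauss_part + unif * p_out

zs = [i / 100 for i in range(-300, 301)]
vals = [cdf_Z(z) for z in zs]
print(vals[0], vals[-1])  # 0.0 at z = -3, ~1.0 at z = 3
```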
|
|probability-distributions|expected-value|
| 0
|
Negative definite forcing of an ODE ? (soft question)
|
Given an ODE in $\mathbb{R}^n$ $$\frac{d}{dt}x(t)=b(x(t)),$$ how can the assumption that $$\langle D b \,\xi,\xi\rangle \leq - c |\xi|^2,~~~~~\forall \xi \in \mathbb{R}^n$$ be interpreted? Does this stop the trajectory going too wild in some sense? What would this condition be called? (Here $Db$ is the differential of $b$.)
|
Your condition implies that $b'$ has all eigenvalues with real part $\leq -c$ (presuming $c>0$ here). Linearizing near an equilibrium point $x_0\in \{b(x) = 0\}$, we get $$\dot{x} = b(x_0) + b'(x_0)x + O(2) = b'(x_0)x + O(2)\ .$$ So in a neighborhood of $x_0$ our system will act like $\dot{x} = b'(x_0)x$, which is asymptotically stable. So I think that if for some $t_1$, $x(t_1)$ is close enough to $\{b=0\}$, it will thereafter approach a stable fixed point. I'm unsure of the conditions needed to guarantee one enters such a region of stability.
|
|ordinary-differential-equations|functions|soft-question|positive-semidefinite|coercive|
| 0
|
Equal pushforward measures imply existence of random variable copy with matching joint?
|
Consider $\mathbb{R}^n$ along with its Borel $\sigma$ -algebra, and let $\mathbb{P}$ be a non-atomic probability measure on $\mathbb{R}^n$ . Let $f, g: \mathbb{R}^n \rightarrow \mathbb{R}^n$ be two measurable functions such that their pushforwards of $\mathbb{P}$ match, i.e. $f_\# \mathbb{P} = g_\# \mathbb{P}$ . Consider a random variable $X$ whose law is given by $\mathbb{P}$ . I would like to know if there exists a random variable $X'$ such that $(X, f(X)) \overset{d}{=} (X', g(X'))$ . Thanks!
|
I realized there need not exist such an $X'$ . Take for example $n=1$ , $\mathbb{P}$ as the uniform distribution on $[0,1]$ , $f(x)=x$ , and $g(x)=1-x$ . In this case $(X, f(X))$ is supported on $D_1=\{(x, x): x \in [0,1]\}$ , whereas for any $X'$ whose law is also $\mathbb{P}$ , $(X', g(X'))$ will be supported on $D_2=\{(x, 1-x): x \in [0,1]\}$ , which intersects with $D_1$ at a single point.
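The counterexample can be illustrated with a tiny deterministic check (the grid and helper names are ours): the two joint laws live on different lines in the plane.

```python
# (X, f(X)) is supported on the diagonal y = x, while (X', g(X')) is supported
# on the anti-diagonal y = 1 - x; the two lines meet only at x = 1/2.
f = lambda x: x
g = lambda x: 1 - x

xs = [i / 1000 for i in range(1001)]
overlap = [x for x in xs if f(x) == g(x)]
print(overlap)  # [0.5]
```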
|
|probability-theory|measure-theory|borel-measures|
| 1
|
Complex Differentiability Limit Definition
|
I have $f(x+iy)=y^2$ . This satisfies the Cauchy-Riemann equations for all $z_0=(x,0)$ . Now I want to check whether $f$ is differentiable for all $(x,0)$ . So I use the definition, taking the limit as $h$ tends to $0$ , and letting $h=a+bi$ : $$\frac{f(z_0+h)-f(z_0)}{h}=\frac{f(x+a+bi)-f(x)}{a+ib}=\frac{b^2}{a+ib}$$ But how am I meant to compute the limit of $\frac{b^2}{a+ib}$ as $h$ tends to $0$ , which I suppose means $(a,b)$ tends to $(0,0)$ . Can someone help me do this using the definition please? Thanks.
|
Notice that $$0\leq\left|\dfrac{b^2}{a+ib}\right|^2=\dfrac{b^4}{a^2+b^2}\leq \dfrac{b^4}{b^2}=b^2\to 0,$$ so $$\dfrac{b^2}{a+ib}\to 0$$ and $f$ is differentiable in all points of the form $(x,0)$ . Edit: It could also be used that $f(x,y)=u(x,y)+ iv(x,y)$ is complex differentiable at $x_0+iy_0$ iff $u$ and $v$ are differentiable at $(x_0,y_0)$ and they satisfy the CR equations; I understood the purpose of the question was to prove that $f$ is holomorphic using exclusively the limit definition.
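The squeeze $|b^2/(a+ib)|\le|b|\le|h|$ can be probed numerically along several directions of approach (a sketch; the sampling choices are ours):

```python
import cmath

# quotient(h) = (Im h)^2 / h; its modulus is (Im h)^2/|h| <= |Im h| <= |h|.
def quotient(h):
    b = h.imag
    return b * b / h

samples = [(t, abs(quotient(t * cmath.exp(1j * phi))))
           for phi in (0.0, 0.7, 1.5707, 2.5, 3.1)
           for t in (1e-1, 1e-3, 1e-6)]
print(max(q for _, q in samples))  # bounded by the largest |h| used
```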
|
|real-analysis|calculus|complex-analysis|analysis|derivatives|
| 0
|
Using Minkowski's Theorem to prove existence.
|
I need to use Minkowski's theorem to show that if $\alpha \in \mathbb{R}$ and $Q \in \mathbb{N}$ then there exist $q, a \in \mathbb{Z}$ such that $1 \leq q \leq Q$ and $|q\alpha - a| \leq 1/Q$. I believe that I need to define the lattice in $\mathbb{R}$: $\Lambda = \{q\alpha : 0$
|
Consider the set $ S = \{(x,y) \in \mathbb{R}^2 : |\alpha x - y|\leq Q^{-1},\ |x|\leq Q\}$ and the lattice $\Lambda$ spanned by $\begin{pmatrix} 1 \\ 0\end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1\end{pmatrix}$. You can calculate that $V(S) = 4$ and $2^2\det(\Lambda) = 4$; therefore, since $S$ is symmetric, convex and compact, by the strong Minkowski property you can show that there exists a nonzero lattice point in $S$. This lattice point consists of $(q,a)\in \mathbb{Z}^2$ such that $|q\alpha - a|\leq Q^{-1}$ and $|q|\leq Q$, as required. Since $S$ is symmetric, if $-Q \leq q \leq -1$ we know that the point $(-q,-a)$ also satisfies your equation, with the additional caveat that $ 1 \leq q \leq Q$ as required. P.S. MA 257 gets pretty techy :)
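The conclusion (Dirichlet's approximation theorem) is easy to confirm by brute force; a Python sketch, with $\alpha=\sqrt2$ as an arbitrary test value:

```python
# For each Q there exist integers 1 <= q <= Q and a with |q*alpha - a| <= 1/Q;
# scan all q and keep the best approximation.
def dirichlet_pair(alpha, Q):
    best = None
    for q in range(1, Q + 1):
        a = round(q * alpha)            # nearest integer to q*alpha
        err = abs(q * alpha - a)
        if best is None or err < best[2]:
            best = (q, a, err)
    return best

for Q in (5, 50, 500):
    q, a, err = dirichlet_pair(2 ** 0.5, Q)
    print(Q, q, a, err)
```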
|
|number-theory|integer-lattices|
| 1
|
Complex Differentiability Limit Definition
|
I have $f(x+iy)=y^2$ . This satisfies the Cauchy-Riemann equations for all $z_0=(x,0)$ . Now I want to check whether $f$ is differentiable for all $(x,0)$ . So I use the definition, taking the limit as $h$ tends to $0$ , and letting $h=a+bi$ : $$\frac{f(z_0+h)-f(z_0)}{h}=\frac{f(x+a+bi)-f(x)}{a+ib}=\frac{b^2}{a+ib}$$ But how am I meant to compute the limit of $\frac{b^2}{a+ib}$ as $h$ tends to $0$ , which I suppose means $(a,b)$ tends to $(0,0)$ . Can someone help me do this using the definition please? Thanks.
|
If $x\in\Bbb R$ , \begin{align}\lim_{z\to x}\frac{\operatorname{Im}^2z-\operatorname{Im}^2x}{z-x}&=\lim_{z\to x}\frac{\operatorname{Im}^2z}{z-x}\\&=\lim_{z\to x}\frac{\operatorname{Im}(z-x)}{z-x}\operatorname{Im}z\end{align} (since $x\in\Bbb R$ and therefore $\operatorname{Im}x=0$ ). But $\lim_{z\to x}\operatorname{Im}z=0$ and $\left|\frac{\operatorname{Im}(z-x)}{z-x}\right|\leqslant1$ for each $z\in\Bbb C\setminus\{x\}$ . Therefore $$\lim_{z\to x}\frac{\operatorname{Im}(z-x)}{z-x}\operatorname{Im}z=0.$$ In other words, your function is differentiable at $x$ .
|
|real-analysis|calculus|complex-analysis|analysis|derivatives|
| 0
|
Inequality of entropies for Bernoulli plus Gaussian
|
Question: Given $X\sim\text{Bernoulli}(\alpha)$, $Y\sim\mathcal{N}(0,1)$, and two non-random constants $C_1$ and $C_2$ such that $C_1>C_2$. What can we say about the inequality between the two differential entropies $H(C_1X+Y)$ and $H(C_2X+Y)$? I.e., is it true that $$ H(C_1X+Y)>H(C_2X+Y). $$ Fact: We know from here that differential entropy is shift invariant but is affected by scaling. Attempt: I am aware that for a single random variable $X$ alone, the entropy $H(X)$ is shift invariant. Hence we can rewrite $$ H(C_1X+Y)=H(\epsilon X+C_2X+Y), $$ where $\epsilon=C_1-C_2>0$. However, this is not a simple shift because we are shifting by a fraction of a random variable.
|
It should be intuitive that $I(C_1X + Y;X) \ge I(C_2 X + Y;X)$ (but see below). But notice that $I(C_1 X + Y; X) = h(C_1 X + Y) - h(C_1X + Y|X),$ and the second term is just $h(Y)$. So we can conclude that $$I(C_1X + Y;X) \ge I(C_2X + Y;X) \iff h(C_1 X + Y) \ge h(C_2 X + Y),$$ and so we'd be done if we show the first inequality. One way to see this is the I-MMSE relation, which says that for standard Gaussian noise $Y$, with mutual information defined in nats, $$ \frac{\mathrm{d}}{\mathrm{d}\gamma} I(\sqrt{\gamma} X + Y;X) = \frac12\mathrm{mmse}(\gamma),$$ where $\mathrm{mmse}(\gamma)$ is the minimum mean square error $\mathbb{E}[(X - \mathbb{E}[X| \sqrt{\gamma} X + Y])^2].$ See this paper for a proof. The striking thing here is that the result is completely generic, and holds for any law over $X$ (as long as it has a second moment, of course). The relation is indelibly connected to the infinite divisibility of the Gaussian (see the paper for more), which you could use directly to show
|
|probability|statistics|inequality|information-theory|entropy|
| 1
|
Equation for conic section given an arbitrary cone?
|
I learned in school that the curve formed by the intersection of a double cone and a plane is always one of the three types of curves called "conic sections" - an ellipse, a parabola, or a hyperbola (or the degenerate forms of each - a point, a line, or a pair of lines). Apparently this has been known since at least the days of the ancient Greek mathematicians. And Cartesian coordinate systems have also been around for centuries. Both are very useful and easy to study. But it is surprisingly difficult for me to find any references describing how to relate the conic sections to cones and planes defined in a 3D Cartesian coordinate system. Surely someone has done it before! In general, what is the equation for the intersection of the Cartesian XY plane with a circular double cone centered at $(x_c, y_c, z_c)$, with axis of symmetry $[x_a, y_a, z_a]$ and with an angle of $\theta$? As a simple example, if the cone is centered at $(5, 6, 10)$ and has an axis $[0, 0, 1]$ a
|
This is actually very easy and straightforward, but only if you've studied a first course in linear algebra, which is usually offered at all universities in the first year. Define the vector $r = [x, y, z]^T$, where $T$ denotes transposition. Then the equation of the cone whose vertex is at $C$ is $ (r - C)^T Q (r - C) = 0 $, where $Q$ is a symmetric $3 \times 3$ matrix that has either two positive eigenvalues and one negative eigenvalue, or two negative eigenvalues and one positive eigenvalue. The cone doesn't have to be a right circular cone, but if it is, then $Q$ is given by $ Q = (\cos^2 \theta) I_3 - a a^T $, where $\theta$ is the angle between the axis of the cone and its generatrix (called the semi-vertical angle), $a$ is a unit vector along the axis of the cone, and $I_3$ is the $3 \times 3$ identity matrix. The cutting plane is usually given in algebraic form as $n^T r = d$. The first step of the solution is to find the vectorial representation of the plane. This can be done b
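A quick numerical check of the quadric form $Q=(\cos^2\theta)I_3-aa^T$ (plain-Python helpers of our own, using the vertex $(5,6,10)$ and axis $[0,0,1]$ from the question, with $\theta=\pi/4$ chosen arbitrarily):

```python
import math

# (r - C)^T Q (r - C) with Q = cos^2(theta) I - a a^T, expanded directly:
# cos^2(theta) * |r-C|^2 - (a . (r-C))^2, which vanishes exactly on the cone.
def quad_form(C, a, theta, r):
    d = [r[i] - C[i] for i in range(3)]
    dd = sum(x * x for x in d)                 # (r-C)^T (r-C)
    da = sum(d[i] * a[i] for i in range(3))    # a . (r-C)
    return math.cos(theta) ** 2 * dd - da ** 2

C, a, theta = (5.0, 6.0, 10.0), (0.0, 0.0, 1.0), math.pi / 4
r_on  = (C[0] + math.sin(theta), C[1], C[2] + math.cos(theta))  # on the cone
r_off = (C[0], C[1], C[2] + 1.0)                                # on the axis

print(quad_form(C, a, theta, r_on))   # ~0
print(quad_form(C, a, theta, r_off))  # -0.5, nonzero
```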
|
|geometry|conic-sections|
| 0
|
Sum of positive semi-definite matrix and positive definite matrix?
|
Is the sum of a positive semidefinite matrix and positive definite matrix a positive definite matrix? I have a positive semidefinite matrix $M\in \mathbb{R}^{n\times n}$ and the identity $I_n$ , is their sum positive definite? I ask because I want to ensure their sum has an inverse.
|
A matrix $M$ is positive definite (resp. semidefinite) if and only if it is symmetric and $u^TMu>0$ (resp. $\geq 0$) for all nonzero vectors $u$. If $M$ is positive definite and $N$ is positive semidefinite, then $M$ and $N$ are symmetric, so their sum $M+N$ is symmetric too; and for every $u\neq 0$, $u^TMu>0$ and $u^TNu\geq 0$, so $u^T(M+N)u>0$, and thus $M+N$ is positive definite. In particular, $M+N$ is invertible.
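A concrete instance in Python (the $2\times2$ example and helpers are ours): $M=\begin{bmatrix}1&1\\1&1\end{bmatrix}$ is PSD but singular, while $M+I$ is PD and invertible.

```python
# M = [[1,1],[1,1]] is PSD (eigenvalues 0 and 2); M + I has eigenvalues 1 and 3.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def qform(A, u):  # u^T A u for a 2x2 matrix
    return A[0][0]*u[0]*u[0] + (A[0][1] + A[1][0])*u[0]*u[1] + A[1][1]*u[1]*u[1]

M  = [[1.0, 1.0], [1.0, 1.0]]
MI = [[2.0, 1.0], [1.0, 2.0]]   # M + I

print(det2(M))                  # 0.0: M itself is singular
print(det2(MI))                 # 3.0: M + I is invertible
print(qform(MI, (1.0, -1.0)))   # 2.0 > 0
```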
|
|matrices|positive-definite|positive-semidefinite|
| 1
|
Roots of $x^2-x+2=0$ in $\mathbb{Z}_3[i]$
|
I've been challenged by a professor to find the roots of $x^2-x+2=0$ in the field $\mathbb{Z}_3[i] = \{a+bi \; \vert \; a,b \in \mathbb{Z}_3\}$ . I used the "normal" quadratic formula and got roots of $2+2i$ and $2+i$ , and was told there's a way without using the formula. Is there a method for this besides guessing and checking the nine elements of $\mathbb{Z}_3[i]$ ?
|
In general it's better to think of $2=-1$. That way multiplication and exponentiation become a matter of just units, negatives, and zeros; magnitudes and growing values are taken out of the picture altogether. Also, noting that $1$ and $-1$ are "natural opposites" is easier to intuit than $1$ and $2$. So that said..... ====== Okay. This may be kind of dumb but: $x^2 -x +2 = 0\iff x^2-1 = x$ (using $2=-1$). And from $x^2 -1=x$: $(x^2 -1)^2 = x^2$, so $x^4 -2x^2 +1 = x^2$, giving $x^4 = 3x^2-1 = -1$ and $x^2 =\pm i$. And so $x^2 -1 = x\implies \pm i - 1 =x$.... and that's it. $x = -1 \pm i$. However, when we squared both sides we potentially added extraneous solutions, so we must test these, which is easy enough. (I had originally wanted to comment about roots coming in conjugate pairs, but now that I've solved it quicker than I expected we don't really need to.) .... Not sure that's a better way, but I can say it didn't use the quadratic formula. I wouldn't try it with any field more complex than $\mathbb Z_3[i]$. The "all or noth
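Guess-and-check over the nine elements is itself only a few lines; a Python sketch (representing $a+bi$ as a pair $(a,b)$ with $i^2=-1$ and arithmetic mod $3$):

```python
# Brute-force search for roots of x^2 - x + 2 in Z_3[i].
def mul(x, y, p=3):
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def poly(x, p=3):       # x^2 - x + 2, componentwise mod p
    a, b = x
    sq = mul(x, x, p)
    return ((sq[0] - a + 2) % p, (sq[1] - b) % p)

roots = [(a, b) for a in range(3) for b in range(3) if poly((a, b)) == (0, 0)]
print(roots)  # [(2, 1), (2, 2)], i.e. 2+i and 2+2i
```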
|
|abstract-algebra|ring-theory|complex-numbers|field-theory|integers|
| 0
|
On the proof of convergence of Pólya urns
|
I'm reading the proof of Proposition $2$ of the Appendix here. The proposition states: Let $d\ge 2$ and $S\ge 1$ be integers. Let also $(\alpha_1,\dots,\alpha_d)\in\mathbb{N}^d \setminus\{ 0 \}$. Let $(P_n)_{n\ge0}$ be the $d$-color Pólya urn random process having $S\cdot Id$ as replacement matrix and $(\alpha_1,\dots,\alpha_d)$ as initial composition. Then, almost surely and in any $L^t$, $t\ge 1$, $$\frac{P_n}{nS}\to V\ \ \text{as}\ n\to\infty$$ where $V$ is a $d$-dimensional Dirichlet-distributed random vector, with parameters $(\frac{\alpha_1}S,\dots,\frac{\alpha_d}S)$. What I know about the distribution of $P_n$ is that $\forall n\ge0$, let $X_{n+1}$ be a random variable in $\{ 1,\dots,d\}$ with distribution $\mathbb{P}(X_{n+1}=k|P_n)=\frac{(P_n)_k}{\|P_n\|_1}\ \forall k=1,\dots,d$, where $(P_n)_k$ denotes the number of balls of colour $k$ at time $n$ and $\|P_n\|_1=\sum^d_{k=1}(P_n)_k$. Notice that in this script it holds $(P_0)_k=\alpha_k\ \forall k=1,\dots,d$ and so $||
|
You can't get from (1) to (2). Rather, both follow from knowledge of the conditional distribution of $P_{n+1}$ given $\mathcal F_n$ . With your notation: $$ P_{n+1}=P_n+S\sum_{k=1}^d 1_{\{X_{n+1}=k\}}e_k. $$
|
|markov-chains|conditional-expectation|martingales|polya-urn-model|
| 1
|
Showing that a continuous map $f:X \to S^n$ which is not onto is always nullhomotopic
|
This is a problem I came across recently. Let $f:X \to S^n$ be a continuous map which is not onto. Then it is nullhomotopic. My attempt: Since $f:X \to S^n$ is not onto, there exists $s_0 \in S^n$ such that $f(x) \neq s_0$ for any $x \in X$. So we throw away $s_0$ and consider $S^n \setminus \{s_0\}$, viewing $f$ as a function from $X$ to $S^n \setminus \{s_0\}$. Now via the stereographic projection we may identify $S^n \setminus \{s_0\}$ with $\mathbb{R}^n$. Now $\mathbb{R}^n$ is contractible, so $id_{\mathbb{R}^n}:\mathbb{R}^n \to \mathbb{R}^n$ is nullhomotopic, say $id_{\mathbb{R}^n} \simeq c_k$ ($c_k$ is a constant map). So $id_{\mathbb{R}^n} \circ f \simeq c_k \circ f$. Thus $f \simeq c_k \circ f$. Note $c_k \circ f$ is a constant map. So $f$ is nullhomotopic. PS: Let $H : X \times \mathbb{I} \to \mathbb{R}^n$ be a homotopy between $f$ and a constant map $c$. Since we identified $S^n \setminus \{s_0\}$ with $\mathbb{R}^n$, we have the inclusion $i:\mathbb{R}^n \
|
It is $f = i\circ F$ for some continuous maps $F: X \to \mathbb{R}^n$ and $i: \mathbb{R}^n \to S^n$ . Since $\mathbb{R}^n$ is contractible, it holds $F \simeq c_{x_0}$ with $c_{x_0} : X \to \mathbb{R}^n, x \mapsto x_0$ constant. Therefore $f = i \circ F \simeq i \circ c_{x_0} = c_{i(x_0)}$ .
|
|algebraic-topology|solution-verification|
| 1
|
Is there a rigorous probability-theoretic formulation of linear regression?
|
Let $(\Omega,\Sigma,P)$ be some probability space and let $Y: \Omega \to \mathbb{R} $ be a random variable. Let $x, \beta \in \mathbb{R}^n$ . In a linear model we assume something like this: $E(Y|x)=\beta^Tx$ . I know conditional expectation, where you condition on either an event $E \in \Sigma$ or on a sub- $\sigma$ -algebra $\Sigma' \subseteq \Sigma$ . As a special case of the latter, we can condition on another random variable. But the variable $x$ is not random, so how would you rigorously define $E(Y|x)$ ?
|
The typical way linear models work is to assume that $Y = \beta^T X + \varepsilon$ where $X$ is some random variable that we believe is related to $Y$ and $\varepsilon$ is another random variable independent of $X$ , and $\beta$ is constant (i.e. deterministic), but possibly unknown. For example, $Y$ might be a person's weight and $X$ might be a person's height. There is a theorem that states that (under certain conditions) there exists a measurable, deterministic function $f$ such that $\mathbb{E}[Y|X] = f(X)$ . One can then define $\mathbb{E}[Y|x] = f(x)$ to avoid conditioning on null events like $X = x$ when $X$ is a continuous random variable.
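A minimal simulation sketch of this model (the choices $\beta=2$, noise scale $0.5$, and sample size are ours): the one-dimensional OLS estimate $\sum x_iy_i/\sum x_i^2$ recovers $\beta$ from a large sample.

```python
import random

# Simulate Y = beta*X + eps with X ~ N(0,1), eps ~ N(0, 0.25), independent.
random.seed(0)
n = 20000
beta = 2.0
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [beta * x + random.gauss(0, 0.5) for x in xs]

# Ordinary least squares for a single regressor without intercept.
beta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(beta_hat)  # close to 2.0
```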
|
|probability|probability-theory|statistics|linear-regression|
| 1
|
Number of possibilities to distribute $k$ balls into $2n$ boxes with a condition
|
I'm trying to find the number of ways to distribute $k$ balls into $2n$ boxes such that for every $i$ between $1$ and $n$, the number of balls in box $i$ plus the number in box $n+i$ is not equal to $6$. I tried writing the equation $$x_1+x_2+\dots+x_{2n} = k,$$ but I don't know how to progress further; it seems like it should be solved with recursion. Can someone direct me?
|
By stars-and-bars there are $\binom{k+2n-1}{2n-1}$ ways to distribute the $k$ balls into $2n$ boxes if we don't care whether any pair of boxes received six balls between them. The number of ways where boxes $1,n+1$ had $6$ balls collectively between them is $7\times \binom{k-6+2n-3}{2n-3}$, as there are $7$ ways to distribute the six balls into these two boxes, and then we have $k-6$ remaining balls and $2n-2$ remaining boxes to deal with. The same count applies to any other pair of boxes. The number of ways where boxes $1,n+1$ as well as boxes $2,n+2$ each collectively had $6$ balls between them would be $7^2\times \binom{k-2\times 6 + 2n-1-2\times 2}{2n-1-2\times 2}$ by similar logic, and this generalizes: with $\ell$ specified pairs of boxes having six balls per pair, the count is $7^\ell\times \binom{k-6\ell + 2n-1-2\ell}{2n-1-2\ell}$. Applying inclusion-exclusion and simplifying a bit, this gives: $$\sum\limits_{\ell=0}^n(-7)^\ell\binom{n}{\ell}\binom{k+2n-1-8\ell}{2n-1-2\ell}$$
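The count can be cross-checked by brute force for small $n,k$; in the Python sketch below, the final binomial is taken to be $\binom{k+2n-1-8\ell}{2n-1-2\ell}$ (our reading of the simplification, equal to $\binom{k-6\ell+2n-1-2\ell}{2n-1-2\ell}$):

```python
from math import comb
from itertools import product

# Brute-force count of distributions with every pair sum != 6.
def brute(n, k):
    count = 0
    for xs in product(range(k + 1), repeat=2 * n):
        if sum(xs) == k and all(xs[i] + xs[n + i] != 6 for i in range(n)):
            count += 1
    return count

# The inclusion-exclusion sum, skipping terms whose binomial vanishes.
def formula(n, k):
    total = 0
    for l in range(n + 1):
        top, bot = k + 2 * n - 1 - 8 * l, 2 * n - 1 - 2 * l
        if bot >= 0 and top >= bot:
            total += (-7) ** l * comb(n, l) * comb(top, bot)
    return total

print(brute(2, 6), formula(2, 6))  # 70 70
```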
|
|combinatorics|balls-in-bins|
| 1
|
Using integration to solve non-constant acceleration problems
|
The question is: A body moves along a straight line with acceleration $a$ in meters per second squared given by $a = \frac{7}{36}t$, where $t$ is the time in seconds. Initially the body is at rest at an origin $O$. The acceleration continues until $t = 6$, whereupon it ceases and the body is retarded to rest. During this retardation the acceleration is given by $a = -\frac{1}{4}t$. Find the value of $t$ when the body comes to rest and the displacement of the body from $O$ at that time. I understand how to use integration to find velocity and displacement, and I have written out the equation for velocity using integration, but I am not sure what to do after this.
|
Start from $a = \frac{7}{36}t$ as below: $$a=\frac {dv(t)}{dt}= \frac{7}{36}t \to dv(t)= \frac{7}{36}t\,dt\\\int_0^6 dv(s)=\int_0^6\frac{7}{36}s\,ds\\v(s)\Big|_0^6=\frac{7}{36}\frac {s^2}{2}\Big|_0^6\\v(6)-v(0)=3.5$$ Since $v(0)=0$, this gives $v(6)=3.5 \ \frac {m}{s}$. For the second part, start with $v(6)=3.5 \ \frac {m}{s}$ and now $$a=\frac {dv(t)}{dt}=\frac{-1}{4}t\\ dv(t)=\frac{-1}{4}t\,dt\\ \int_6^{t}dv(s)=\int_6^{t}\frac{-1}{4}s\,ds\\v({t})-v(6)=\frac {-1}{8}(s^2)\Big|_6^t\\v(t)-3.5=\frac {-1}{8}(t^2-36)$$ To find when the body comes to rest, solve $v(t)=0$: $$v(t)=3.5-\frac {1}{8}(t^2-36)=0 \to t=8 \text{ s}$$ A velocity-time sketch is a good way to check both results.
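The displacement asked for in the question follows by integrating each velocity: $\int_0^6 \frac{7}{72}t^2\,dt = 7$ m and, since $v(t) = 3.5 - \frac{1}{8}(t^2-36) = 8 - \frac{t^2}{8}$ on the second phase, $\int_6^8 \big(8-\frac{t^2}{8}\big)dt = \frac{11}{3}$ m, i.e. $\frac{32}{3}\approx 10.67$ m in total. A crude Euler integration (an independent numerical sketch) confirms both answers:

```python
# Euler integration of the piecewise acceleration (a rough numerical check;
# the exact answers are t = 8 s and s = 7 + 11/3 = 32/3 m).
dt = 1e-5
t = v = s = 0.0
while t < 6 or v > 0:          # stop when the retardation brings v back to 0
    a = (7/36)*t if t < 6 else -0.25*t
    v += a*dt
    s += v*dt
    t += dt

print(round(t, 2), round(s, 2))  # 8.0 10.67
```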
|
|calculus|classical-mechanics|
| 0
|
Largest divisor of all $ n(n^2-1)(5n+2)\, $ [gcd of values of recurrence]
|
My Solution: $$ n(n^2-1)(5n+2) = (n-1)n(n+1)(5n+2) $$ This number is divisible by 6 (as at least one of 2 consecutive integers is divisible by 2 and one of 3 consecutive integers is divisible by 3). $ 5n+2 \equiv 5n \equiv n \pmod 2 $, so $n$ and $5n+2$ have the same parity and at least one of $n+1$ and $5n+2$ is even. $ n \equiv 5n \equiv 5n+4 \pmod 4 $, so: if $ 2\ |\ n+1 $, then $ n-1 $ or $ n+1 $ is divisible by 4; if $ 2\ |\ 5n+2 $, then $ n $ or $ 5n+2 $ is divisible by 4. The expression is divisible by 6 and has 2 even factors, one of them divisible by 4, $\to$ it is divisible by 24.
|
Let $I_n=n(n^2-1)(5n+2)$ . Then $I_n\equiv(n-1)n(n+1)(n+2)\pmod 8$ (the difference is $4n\cdot(n-1)n(n+1)$ , and $(n-1)n(n+1)$ is even), and a product of four consecutive integers is divisible by $8$ , therefore $8\mid I_n$ ; likewise $I_n\equiv2(n-1)n(n+1)^2\pmod3$ , therefore $3\mid I_n.$ Since $I_2=24\times3$ and $I_3=24\times17$ , $\gcd I_n=24.$
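A quick numerical confirmation (not needed for the proof) that $24$ divides every $I_n$ and that nothing larger does:

```python
from math import gcd
from functools import reduce

I = lambda n: n*(n**2 - 1)*(5*n + 2)
# gcd over a range of values; the proof shows it is exactly 24
g = reduce(gcd, (I(n) for n in range(2, 100)))
print(g)  # 24
```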
|
|elementary-number-theory|discrete-mathematics|recurrence-relations|divisibility|
| 0
|
Equation for conic section given an arbitrary cone?
|
I learned in school that the curve formed by the intersection of a double cone and a plane is always one of the three types of curves called "conic sections" - an ellipse, a parabola, or a hyperbola (or the degenerate forms of each - a point, a line, or a pair of lines). Apparently this has been known since at least the days of the ancient Greek mathematicians. And Cartesian coordinate systems have also been around for centuries. Both are very useful and easy to study. But it is surprisingly difficult for me to find any references describing how to relate the conic sections to cones and planes defined in a 3D Cartesian coordinate system. Surely someone has done it before! In general, what is the equation for the intersection of the Cartesian XY plane with a circular double cone centered at $(x_c, y_c, z_c)$ , with axis of symmetry $[x_a, y_a, z_a]$ and with an angle of $\theta$ ? As a simple example, if the cone is centered at $(5, 6, 10)$ and has an axis $[0, 0, 1]$ a
|
Here's a relatively elementary approach. The main difficulty is simply in coming up with an equation for an arbitrary cone. Consider a right circular cone. Let the position vector of the vertex of the cone be $\mathbf c = [x_c, y_c, z_c]^T$ . Let the axis of the cone be parallel to the vector $\mathbf a = [x_a, y_a, z_a]^T$ and the half-angle of the opening be $\theta$ (so $\theta$ is the angle between the axis and one of the lines lying on the surface of the cone). Then if $\mathbf x = [x,y,z]^T$ is an arbitrary point on the surface of the cone other than the vertex, the vector $\mathbf x - \mathbf c$ is a vector parallel to one of the lines lying on the surface of the cone. Therefore the angle between $\mathbf x - \mathbf c$ and the vector $\mathbf a$ is either $\theta$ or $\pi - \theta$ . That is, $$ (\mathbf x - \mathbf c)\cdot \mathbf a = \pm \lVert \mathbf x - \mathbf c\rVert \lVert \mathbf a\rVert \cos\theta. $$ For simplicity let's assume that $\lVert \mathbf a\rVert = 1$ , since
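To make the equation concrete, here is a small numerical check (a sketch; the vertex and axis are taken from the question's example, while the half-angle $\theta = 30^\circ$ is an assumed value). Squaring the displayed identity gives $((\mathbf x-\mathbf c)\cdot\mathbf a)^2 = \lVert\mathbf x-\mathbf c\rVert^2\cos^2\theta$, and with this vertical axis the $z=0$ trace is a circle of radius $10\tan\theta$ about $(5,6)$:

```python
import math

c = (5.0, 6.0, 10.0)          # vertex, from the question's example
a = (0.0, 0.0, 1.0)           # unit axis
theta = math.radians(30)      # assumed half-angle for the demo

def on_cone(p):
    # squared form of the identity: ((x-c).a)^2 = |x-c|^2 cos^2(theta)
    d = [p[i] - c[i] for i in range(3)]
    lhs = sum(d[i]*a[i] for i in range(3))**2
    rhs = sum(x*x for x in d) * math.cos(theta)**2
    return abs(lhs - rhs) < 1e-9

# with this vertical axis the z = 0 trace is the circle of radius 10*tan(theta)
# centred at (5, 6); every point of it satisfies the squared cone equation
r = 10*math.tan(theta)
pts = [(5 + r*math.cos(t), 6 + r*math.sin(t), 0.0)
       for t in (k*math.tau/60 for k in range(60))]
print(all(on_cone(p) for p in pts))  # True
```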
|
|geometry|conic-sections|
| 1
|
Bounds on number of primes $<n$ by re-examining Euler's function
|
Edit: using "asymptotic to" rather than "equal to", as it keeps the meaning intact and makes sense of the relation. Would it be correct to infer from Euler's product formula that $x \sim \frac{(\log x)^{\pi(x)}}{\prod_{p\le x} \log p}$, where $\pi(x)$ is the prime counting function? $\sum_{i=1}^x 1/i \sim (1 + \frac {1}{2} + \frac {1}{2^2} + \frac {1}{2^3} + \dots)(1 + \frac {1}{3} + \frac {1}{3^2} + \frac {1}{3^3} + \dots)(1 + \frac {1}{5} + \frac {1}{5^2} + \frac {1}{5^3} + \dots)\dots$ In Euler's famous product formula $\sum_{n\le x} n^{-1} \sim \prod_{p\le x} (1-p^{-1})^{-1}$ we have $x$ terms on the left-hand side, but can we write $x$ as $x \sim (\text{no. of terms of }2)(\text{no. of terms of }3)(\text{no. of terms of }5)\dots$, or as $x \sim \prod_{p\le x} \log_p x$? If we find the number of terms each prime $p$ contributes to $x$: for a power $p^k$ to appear we need $p^k \le x$, i.e. $k = \log_p x = \frac {\log x}{\log p}$. Therefore we can rewrite $x \sim \frac {\log x}{\log 2}\cdot\frac {\log x}{\log 3}\cdots = \frac{(\log x)^{\pi(x)}}{\prod_{p\le x}\log p}$.
|
It is true for any finite number of primes $p_1,\dots,p_k$ that, when $x$ is sufficiently large in terms of $\{p_1,\dots,p_k\}$ , the number of integers up to $x$ made up of powers of those primes alone is asymptotic to $\frac{1}{k!}\prod_{j=1}^k \frac{\log x}{\log p_j}$ (the admissible exponent vectors fill out a simplex, whence the $1/k!$). However, there is an error term in this approximation, and if we tried to extend this idea to an unbounded number of primes, the error term would become far larger than the main term, making the approximation unreliable. Thus trying to examine all integers/all primes this way doesn't end up working.
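A quick count of the $\{2,3,5\}$-smooth integers up to $x=10^7$ (a self-contained sketch) against the simplex-volume estimate $\frac{1}{3!}\prod_p \frac{\log x}{\log p}$ shows how slowly the error term dies off, which is the point of the answer:

```python
import math

x = 10**7
primes = (2, 3, 5)

# exact count of integers 2^a * 3^b * 5^c <= x, by enumerating exponents
count = 0
a = 1
while a <= x:
    b = a
    while b <= x:
        c = b
        while c <= x:
            count += 1
            c *= 5
        b *= 3
    a *= 2

k = len(primes)
vol = math.log(x)**k / (math.factorial(k) * math.prod(math.log(p) for p in primes))
# even at x = 10^7 the agreement is only rough: the error term is substantial
print(count, round(vol))
```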
|
|prime-numbers|riemann-zeta|
| 0
|
On asymptotics of certain sums of multinomial coefficients
|
Given positive integers $n$ and $k$ , set $$ S_{n,k}=\sum_{\substack{a_1+a_2+\dots+a_k=2n\\ a_i \in 2\mathbb{N},\,i=1,\ldots,k}}\frac{(2n)!}{a_1!a_2!\dots a_k!}, $$ where $2\mathbb{N}=\{0,2,4,\ldots\}$ . According to the answers of Special sum of multinomial coefficients! there is no "nice" closed form expression for $S_{n,k}$ . My question is: How can one find the asymptotics of $S_{n,k}$ for fixed $n$ when $k \rightarrow \infty$ ? My thoughts so far: It is mentioned in the link above that the expression for $S_{n,k}$ resembles Stirling numbers of the second kind, so perhaps some approximation results for those numbers may be relevant. Also, I did some numerical experimentation that seems to suggest that the naive guess $S_{n,k}\sim C_n \cdot k^n$ (where $C_n>0$ depends on $n$ only) is plausible.
|
Claim: $S_{n,k}\sim (2n-1)!!\cdot k^n$ as $k\to\infty$ , where $(2n-1)!!=(2n-1)(2n-3)\cdots 3\cdot 1.$ Proof: Let $\Omega$ be the set of sequences of length $2n$ where each entry is in $\{1,\dots,k\}$ , so $|\Omega|=k^{2n}$ . Let $\newcommand{\oe}{\Omega_\text{even}}\oe$ be the set of sequences $\omega\in \Omega$ such that, for each $i\in \{1,\dots,k\}$ , the number $i$ appears an even number of times in $\omega$ . Then $$ |\oe|=S_{n,k} $$ Indeed, given a list $(a_1,\dots,a_k)$ , the number of sequences where $i$ appears $a_i$ times for each $i\in \{1,\dots,k\}$ is $\frac{(2n)!}{(a_1)!\cdots (a_k)!}$ , so you conclude by summing over all lists where each $a_i\in 2\mathbb N$ and $a_1+\dots+a_k=2n$ . Each $\omega$ in $\oe$ determines a partition of $\{1,\dots,2n\}$ where all parts have even cardinality. For each $i$ which appears in $\omega$ , one of the parts of this partition is the set of $j\in \{1,\dots,2n\}$ such that $\omega_j=i$ . Conversely, given a partition of $\{1,\dots,2n\}$ where all parts ar
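The claim is easy to test numerically. Expanding $\cosh^k x = 2^{-k}(e^x+e^{-x})^k$ gives the exact formula $S_{n,k} = (2n)!\,[x^{2n}]\cosh^k x = 2^{-k}\sum_{j=0}^k \binom{k}{j}(k-2j)^{2n}$, which makes large $k$ cheap to evaluate:

```python
from math import comb

def S(n, k):
    # exact: S_{n,k} = 2^{-k} * sum_j C(k, j) * (k - 2j)^(2n)
    return sum(comb(k, j) * (k - 2*j)**(2*n) for j in range(k + 1)) // 2**k

def dfact(m):
    # double factorial m!! for odd m
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

n = 3
for k in (10, 100, 1000):
    print(k, round(S(n, k) / (dfact(2*n - 1) * k**n), 4))  # ratio -> 1
```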
|
|probability|combinatorics|asymptotics|multinomial-coefficients|multinomial-distribution|
| 1
|
Equation for conic section given an arbitrary cone?
|
I learned in school that the curve formed by the intersection of a double cone and a plane is always one of the three types of curves called "conic sections" - an ellipse, a parabola, or a hyperbola (or the degenerate forms of each - a point, a line, or a pair of lines). Apparently this has been known since at least the days of the ancient Greek mathematicians. And Cartesian coordinate systems have also been around for centuries. Both are very useful and easy to study. But it is surprisingly difficult for me to find any references describing how to relate the conic sections to cones and planes defined in a 3D Cartesian coordinate system. Surely someone has done it before! In general, what is the equation for the intersection of the Cartesian XY plane with a circular double cone centered at $(x_c, y_c, z_c)$ , with axis of symmetry $[x_a, y_a, z_a]$ and with an angle of $\theta$ ? As a simple example, if the cone is centered at $(5, 6, 10)$ and has an axis $[0, 0, 1]$ a
|
Let us do it the other way: use the fixed cone $x^2+y^2=c^2z^2$ and a variable plane $z=ax+b$ (we don't need a $y$ term, by symmetry). Now the equation reads $x^2+y^2=c^2(ax+b)^2$ , or $(1-c^2a^2)x^2+y^2-2abc^2x-b^2c^2=0$ , with three independent parameters. This equation has all it takes to describe a reduced conic whose type is decided by the sign of $1-c^2a^2$ , or a general conic after translating/rotating the coordinates. The same trick would not work with an ellipsoid, as the corresponding coefficient $1+c^2a^2$ could not be non-positive.
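A numerical spot-check of the algebra (the parameter values are arbitrary choices for the demo): points of the cone $x^2+y^2=c^2z^2$ lying in the plane $z=ax+b$ satisfy $(1-c^2a^2)x^2+y^2-2abc^2x-b^2c^2=0$.

```python
import random

a, b, c = 0.7, 2.0, 1.3   # arbitrary plane/cone parameters for the demo

def conic(x, y):
    # (1 - c^2 a^2) x^2 + y^2 - 2 a b c^2 x - b^2 c^2
    return (1 - c*c*a*a)*x*x + y*y - 2*a*b*c*c*x - b*b*c*c

ok = True
random.seed(1)
for _ in range(1000):
    x = random.uniform(-10, 10)
    z = a*x + b
    y2 = c*c*z*z - x*x        # solve x^2 + y^2 = c^2 z^2 for y^2
    if y2 < 0:
        continue              # no real intersection point at this x
    ok &= abs(conic(x, y2**0.5)) < 1e-8
print(ok)  # True
```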
|
|geometry|conic-sections|
| 0
|
Is there a rigorous probability theoretic formulation of linear regression
|
Let $(\Omega,\Sigma,P)$ be some probability space and let $Y: \Omega \to \mathbb{R} $ be a random variable. Let $x, \beta \in \mathbb{R}^n$ . In a linear model we assume something like this: $E(Y|x)=\beta^Tx$ . I know conditional expectation, where you condition on either an event $E \in \Sigma$ or on a sub- $\sigma$ -algebra $\Sigma' \subseteq \Sigma$ . As a special case of the latter, we can condition on another random variable. But the variable $x$ is not random, so how would you rigorously define $E(Y|x)$ ?
|
There are several models that can be called "linear regression." The most straightforward model is to assume we have $p$ $n$ -vectors $X = [x_{(1)}, \ldots, x_{(p)}]$ and a "response" $y$ in $\mathbf{R}^n$ both of which are known, and we want to obtain the projection of $y$ onto the span of $X.$ There are no probability assumptions here. A second model is to assume $(Y,X)$ is multinormal and then $E(Y \mid X)$ is linear. Here we explicitly assume $(Y,X)$ follow a particular probability distribution. The third is to observe that for $L^2$ random variables, $Y = E(Y \mid \mathscr{H}) + (Y - E(Y \mid \mathscr{H}))$ and we see that $E(Y \mid \mathscr{H})$ is the orthogonal projection of $Y$ onto $L^2(\mathscr{H})$ ; when $\mathscr{H} = \sigma(X),$ then $E(Y \mid \mathscr{H}) = E(Y \mid X) = f(X)$ for some (deterministic but unknown) measurable function $f$ ; often we approximate linearly $f(x) \approx \beta^\intercal x$ and then we recover Ordinary Linear Regression. Here, we only need tha
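The first ("no probability") model is easy to make concrete: least squares is the orthogonal projection of $y$ onto the span of the columns of $X$, equivalently the residual is orthogonal to each column. A minimal sketch with $p=2$ regressors, solving the normal equations directly (all data here are synthetic, for illustration only):

```python
import random

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
# response generated with true coefficients (2, -1) plus small noise
y = [2*a - b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

# normal equations (X^T X) beta = X^T y for p = 2 regressors
s11 = sum(a*a for a in x1)
s12 = sum(a*b for a, b in zip(x1, x2))
s22 = sum(b*b for b in x2)
t1 = sum(a*c for a, c in zip(x1, y))
t2 = sum(b*c for b, c in zip(x2, y))
det = s11*s22 - s12*s12
b1 = (s22*t1 - s12*t2) / det
b2 = (s11*t2 - s12*t1) / det

resid = [c - b1*a - b2*b for a, b, c in zip(x1, x2, y)]
# projection property: the residual is orthogonal to each regressor
print(abs(sum(r*a for r, a in zip(resid, x1))) < 1e-6,
      abs(sum(r*b for r, b in zip(resid, x2))) < 1e-6)  # True True
```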
|
|probability|probability-theory|statistics|linear-regression|
| 0
|
Is it possible to go over all lines of a grid with a pencil without lifting it or going over a drawn line?
|
Is it possible to go over all lines of an infinite grid with a pencil without lifting it or going over a drawn line ? The pencil can cross through a segment already drawn but cannot go over an already drawn line. After doodling around, I have the feeling it is not possible. If indeed it is not possible what demonstration exist of this result ? If it is possible then can you show a way of drawing the grid ? Easily one sees that drawing a sub-grid of $m\times n$ is possible for all values of m and n, by drawing vertical lines and then horizontal lines, without removing the pencil. Here is an example of a sub-grid of $3\times 6$ .
|
Yes, it is possible, using the method illustrated in this image. Referring to Especially Lime's comment, this answer used to have an alternate proof which Lime noticed was fallacious, so I deleted that proof.
|
|combinatorics|geometry|graph-theory|eulerian-path|
| 1
|
How to simplify complex logs
|
I have $f(x) = (x+\sqrt{x})\log_2 x$ and $g(x) = x\log_2(x+\sqrt{x})$. How would I go about simplifying them and obtaining the limit at infinity of $f(x)/g(x)$? So far, the best I have gotten for $f(x)$ is $x\log_2 x\,\big(1+\frac{1}{\sqrt{x}}\big)$, and nothing for $g(x)$. Any help is appreciated!
|
As an idea, you can use the substitution $u=\sqrt x$ , which simplifies things: $$\lim_{x \to \infty}\frac {f(x)}{g(x)}=\\\lim_{u \to \infty}\frac {(u^2+u)\log_2{u^2}}{u^2\log_2{(u^2+u)}}=\\\lim_{u \to \infty}\frac {2(u+1)\log_2{u}}{u\log_2{(u^2+u)}}= \\\lim_{u \to \infty}\frac {2(u+1)}{u}\cdot\lim_{u \to \infty}\frac {\log_2{u}}{\log_2{(u^2+u)}}=\\\lim_{u \to \infty}\frac {2(u+1)}{u}\cdot\lim_{u \to \infty}\frac {\log_2{u}}{\log_2{u^2(1+\frac 1u)}}$$ (splitting into a product is justified because both limits exist). Does that help?
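Whether or not one carries the substitution through, the limit can be checked numerically (the two factors above tend to $2$ and $\tfrac12$, so the product tends to $1$):

```python
import math

def f(x):
    return (x + math.sqrt(x)) * math.log2(x)

def g(x):
    return x * math.log2(x + math.sqrt(x))

for x in (1e3, 1e6, 1e12):
    print(round(f(x)/g(x), 4))   # ratios approach the limit 1
```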
|
|calculus|limits|logarithms|
| 0
|
Solution of $x^x = (x-1)^{x+1}$
|
Is there any way to solve this equation algebraically and give an exact form of the solution: $x^x = (x-1)^{x+1}$ ? WolframAlpha only finds the approximate solution 4.14.
|
Consider the function $$f(x) = \frac{(x-1)^{x+1}}{x^x}, \quad x > 1$$ and its logarithm $$g(x) = \log f(x) = (x+1) \log (x-1) - x \log x, \quad x > 1.$$ The original equality is satisfied whenever $g = 0$ . A plot of $g$ is shown as follows: We can compute successive iterates of the Newton's method recursion of $g$ with an initial guess of $x_0 = 4$ : $$x_{n+1} = \frac{2x_n - (x_n - 1) \log (x_n - 1)}{2 + (x_n - 1)\log(1 - 1/x_n)}$$ gives $$\begin{array}{c|c} n & x_n \\ 0 & 4.00000000000000 \\ 1 & 4.13751482760659 \\ 2 & 4.14103935294361 \\ 3 & 4.14104152540996 \\ 4 & 4.14104152541079 \\ 5 & 4.14104152541079 \end{array}$$ When $x < 1$ , we run into problems with defining a real-valued function for $(x-1)^{x+1}$ , which is why the domain is restricted.
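For reference, the iteration is easy to reproduce (a direct transcription of the recursion above):

```python
import math

def g(x):
    # g(x) = (x+1) log(x-1) - x log(x); the root solves x^x = (x-1)^(x+1)
    return (x + 1)*math.log(x - 1) - x*math.log(x)

x = 4.0
for _ in range(6):
    x = (2*x - (x - 1)*math.log(x - 1)) / (2 + (x - 1)*math.log(1 - 1/x))
print(round(x, 8))  # 4.14104153
```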
|
|algebra-precalculus|exponentiation|tetration|
| 1
|
Converse of the Lindemann theorem and the arc cosine of the golden ratio
|
Via the Lindemann theorem it is easy to prove that the cosine of any rational multiple of $\pi$ is an algebraic number; however, its contrapositive only tells us that the arc cosine of an algebraic number is a transcendental number, not necessarily linearly dependent with $\pi$ over the rationals. I ask, then: is it true via any other theorem? Is there a nice counterexample? I came to this question specifically as I've found in my research an angle $\theta$ such that $$\cos\theta = \frac{\sqrt{5}-1}{2} =: \varphi^{-1},$$ and I wanted to find out if $\theta$ has any nice representation, such as a rational multiple of $\pi$.
|
To generalize Makholm's answer: Niven's theorem says that if $\theta$ is any rational multiple of $\pi$ other than an integer multiple of $\pi/2$ or of $\pi/3$ , then $\cos \theta$ is irrational. Thus if $a$ , $b$ and $c$ are a Pythagorean triple, that is if they are integers and $a^2+b^2=c^2$ , then $\arccos(a/c)=\arcsin(b/c)$ and $\arcsin(a/c)=\arccos(b/c)$ are not rational multiples of $\pi$ .
|
|transcendental-numbers|
| 0
|
If the suspension of a topological space X is an n-manifold, why does X have to be a homology sphere?
|
I am currently working on the following problem: Let $X$ be a topological space such that its suspension $\Sigma X$ is an n-manifold. Show that $H_k(X) \cong H_k(\mathcal{S}^{n-1})$ for all $k$ . I already proved that $H_k(X) \cong H_{k+1}(\Sigma X)$ for all $k$ , using Mayer-Vietoris. I thought it could help me but then I couldn't find any way to get to my goal. My intuition is telling me that if $\Sigma X$ is an n-manifold, then $X$ has to be an (n-1)-manifold too, but such that things do not get too bad near the points $X \times \{0\}$ and $X \times \{1\}$ in the suspension. I have no idea how to prove it: does someone have a suggestion? Thank you.
|
Unfortunately $X$ does not have to be a manifold, see this: https://mathoverflow.net/questions/394391/suspension-of-a-topological-space But we don't need it to be a manifold. Pick $y_0\in \Sigma X$ one of the ends. Then we have the long exact sequence of homologies: $$\cdots\to H_k(\Sigma X-y_0)\to H_k(\Sigma X)\to H_k(\Sigma X,\Sigma X-y_0)\to H_{k-1}(\Sigma X-y_0)\to\cdots$$ Note that $H_*(\Sigma X-y_0)$ vanishes because $\Sigma X-y_0$ is a cone and thus contractible. This means $H_k(\Sigma X)\simeq H_k(\Sigma X,\Sigma X-y_0)$ . Finally for manifolds the local homologies are well known and easy to calculate via excision theorem : $$H_k(\Sigma X,\Sigma X-y_0)\simeq H_{k-1}(S^{n-1})$$ which together with the fact $H_{k-1}(X)\simeq H_k(\Sigma X)$ gives us what you want.
|
|algebraic-topology|
| 1
|
Is the theory of dual numbers strong enough to develop real analysis, and does it resemble Newton's historical method for doing calculus?
|
I've been interested in non-standard analysis recently. I was reading up on it and noticed the following interesting comment on the Wikipedia page about hyperreal numbers , right after giving an example of a nonstandard differentiation: The use of the standard part in the definition of the derivative is a rigorous alternative to the traditional practice of neglecting the square of an infinitesimal quantity... the typical method from Newton through the 19th century would have been simply to discard the $dx^2$ term. I've never heard anything like this before, and really find it fascinating that Newton's method was to define the relation $dx^2 = 0$. If we actually formalize the above structure by taking $\mathbb{R}$ and adjoining an element $dx^2 = 0$ to it, we get the "dual numbers," isomorphic to the quotient ring $\mathbb{R}[x]/x^2$. I'd seen some things about how this algebra plays into automated differentiation algorithms for some computer software systems, but I've never heard anyth
|
Warning: this isn’t a “proper answer”, many of those already exist. This just attempts to push through the calculations you don’t seem to have done, but wanted to. Let our number system have elements $z=x+ky$ where $x, y, k^2 \in \mathbb R$ while $k$ itself does not belong to $\mathbb R$ . I'm leaving $k$ and its memberships purposefully vague. We can play the same game as when deriving the Cauchy-Riemann equations. Write a function $\mathrm f(z)$ as $\mathrm U(x,y)+k\mathrm V(x,y)$ for two real valued functions $\mathrm U, \mathrm V: \mathbb R^2 \to \mathbb R$ . For $\mathrm f'(z)$ we at least need $$\lim_{s \to 0}\frac{\mathrm f(z+s)-\mathrm f(z)}{z+s-z}=\lim_{t\to 0}\frac{\mathrm f(z+kt)-\mathrm f(z)}{z+kt-z}$$ for positive real numbers $s$ and $t$ . Working out the first limit quickly: \begin{eqnarray*} \lim_{s \to 0}\frac{\mathrm f(z+s)-\mathrm f(z)}{z+s-z} &=& \lim_{s\to 0}\frac{\mathrm U(x+s,y)+k\mathrm V(x+s,y)-\mathrm U(x,y)-k\mathrm V(x,y)}{s} \\ &=& \partial_x \mathrm U(x,y)+k\,\partial_x \mathrm V(x,y). \end{eqnarray*}
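The "discard $dx^2$" rule can also be made executable: here is a minimal dual-number class (a standard forward-mode differentiation sketch, independent of the derivation above) in which the coefficient of $\varepsilon$ in $f(x+\varepsilon)$ is exactly $f'(x)$ for polynomials, precisely because $\varepsilon^2=0$ is imposed in the product rule:

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0; the eps coefficient carries f'."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps, eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    # example polynomial: f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
    return x*x*x + 2*x

d = f(Dual(3.0, 1.0))
print(d.a, d.b)   # 33.0 29.0  (f(3) and f'(3))
```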
|
|calculus|real-analysis|abstract-algebra|math-history|nonstandard-analysis|
| 0
|
How do we know that common rearrangement proofs of the Pythagorean theorem work for any right triangle?
|
I’m a little bit puzzled by geometrical proofs, like the common algebraic proof for the Pythagorean theorem listed Wikipedia's "Pythagorean theorem" entry . I understand the idea of arranging the right triangles and the area $c^2$ in a neat way to form another square and writing the area of the new square in different terms and going from there. But I’m a little confused on how you actually know that the right triangle you draw isn’t just the one special right triangle with which you can actually form such a square. How do we know this way of rearranging the pieces works for any other right triangle?
|
Others have explained why it is a square, but I feel like there's a deeper confusion in the question. When someone gives a "proof by picture" such as the ones shown here, the point is not to say Look at the picture, the thing in the middle looks like a square so it must be a square. nor Look at the picture, I say this is a square so it must be a square, trust me. A proof by picture is not intended to be a formal proof. The point is to quickly communicate the idea of the proof to someone who is assumed to recognize some basic geometry concepts that the proof uses and then be able to go throught the necessary steps to convince themselves of the theorem that is being proven. The same rearrangement proof could be given in just words and mathematical symbols without any pictures, it would just be waaaaaaay more cumbersome to read and understand. For example, to give the same proof with basic geometry concepts, one could write something like: (No need to read this carefully, I didn't even ch
|
|geometry|
| 0
|
On asymptotics of certain sums of multinomial coefficients
|
Given positive integers $n$ and $k$ , set $$ S_{n,k}=\sum_{\substack{a_1+a_2+\dots+a_k=2n\\ a_i \in 2\mathbb{N},\,i=1,\ldots,k}}\frac{(2n)!}{a_1!a_2!\dots a_k!}, $$ where $2\mathbb{N}=\{0,2,4,\ldots\}$ . According to the answers of Special sum of multinomial coefficients! there is no "nice" closed form expression for $S_{n,k}$ . My question is: How can one find the asymptotics of $S_{n,k}$ for fixed $n$ when $k \rightarrow \infty$ ? My thoughts so far: It is mentioned in the link above that the expression for $S_{n,k}$ resembles Stirling numbers of the second kind, so perhaps some approximation results for those numbers may be relevant. Also, I did some numerical experimentation that seems to suggest that the naive guess $S_{n,k}\sim C_n \cdot k^n$ (where $C_n>0$ depends on $n$ only) is plausible.
|
Let ${\bf X}$ be a $(2n,k)$ multinomial random variable, corresponding to the experiment of placing $2n$ balls inside $k$ urns, with uniform probability. Then $$P_{\bf X} = \frac{(2n)!}{x_1!x_2! \cdots x_k!} (1/k)^{2n} \left[\sum x_i = 2n\right] \tag 1$$ Let $E$ be the event that all urns have an even number of balls. Then $$k^{2n} \, P(E) = S_{n,k} \tag 2$$ If $k\to \infty$ and $n$ is fixed, then the probability that an urn gets more than two balls turns negligible, and we can just count the events that have $n$ urns with $2$ balls and the rest empty. Summing the probabilities of these events we get $$S_{n,k} \approx \binom{k}{n}\frac{(2n)!}{2^n} \tag 3$$ This approximation is similar to that of Mike Earnest, the graph displays $\log S_{n,k}$ for $n=18$ ("Ap1" is approx $(3)$ , "Ap2" is Mike Earnest's) It's also a lower bound, of course, because it omits some valid configurations. Plugging in $(3)$ the Stirling approximation (first order) we get $$S_{n,k} \approx (2 k n/e)^n$$ in agre
|
|probability|combinatorics|asymptotics|multinomial-coefficients|multinomial-distribution|
| 0
|
Can you construct (ruler and compass) a square with an irrational area?
|
I've heard that when $\pi$ was proved irrational, squaring the circle was not thereby proved impossible. This led me to believe that you could construct a square with an irrational area. Is this possible? It is possible to create a polynomial with integer coefficients that is satisfied by an irrational area ($x^4=2$), so that leads me to believe it is doable, although my knowledge of the precise relationship between polynomials and areas is sketchy at best. Also, it is easy to construct a square of irrational side length (consider 4 1x1 squares in a grid; you can make a square of side length $\sqrt{2}$ using the diagonals), but a square with an irrational area eludes me, yet it seems a possible extension. What about any shape with an irrational area, is that possible to construct? Does that shape have to be regular to be constructible?
|
Construct a square. Label the points $A,B,C$ , and $D$ clockwise around the square. Then place the needle of the compass on corner $A$ and draw the arc through corner $C$ , meeting the line through $A$ and $B$ outside the square; call the intersection of the arc and this line point $E$ . Then use the compass to construct a square on the line segment between points $E$ and $B$ . This new square has a side length $1+\sqrt2$ times the length of the side of the first square. So its area (taking the first square to have unit side) is $(1+\sqrt2)^2=3+2\sqrt2$ , which is irrational.
|
|euclidean-geometry|
| 0
|
Why don't I see any mention of fundamental domain in the context of non-discrete Lie Group acting on Riemannian manifolds
|
Let $G$ be a non-discrete Lie group acting properly on a Riemannian manifold $M$ by its isometries. When $G$ is discrete, there is plenty of literature on Fundamental Domains (FD), but a quick literature search shows none on FDs when $G$ is not discrete and is, say, a Lie group of manifold dimension at least one. In what follows, $d_M$ is the metric on $M$ induced by the Riemannian metric $g$ on $M.$ My first question is: What's the reason behind the fact that we don't see any literature on FDs when $G$ is non-discrete? Also, a very common FD used when $G$ is discrete is the Dirichlet Fundamental Domain $F(p_0)$ associated with the point $p_0\in M$ , see for example: this link , this link , this link , when $M:=\mathbb{H}^2, G$ a discrete subgroup of its isometry group, given by: $$F(p_0):=\{p\in M: d_M(p,p_0)\le d_M(p, g.p_0)\ \forall g \in G\}$$ I also consulted this MO post , but in this excellent answer given by Misha, the assumption remains the same: $G$ is discrete. Before asking my
|
Let's suppose that you reword the definition of a fundamental domain by throwing away the discreteness requirement: just replace the phrase proper action of a discrete group by the phrase proper action of a topological group . Theorem: If $M$ is a Riemannian manifold, and if $G \times M \to M$ is a proper action of a topological group, and if $G$ has a fundamental domain, then $G$ is discrete. For the proof, assuming that $G$ is not discrete, I'll argue to a contradiction. Because of the failure of discreteness, there exists a sequence $g_i \in G$ of nonidentity elements of $G$ , such that $g_i \ne g_j$ if $i \ne j$ and such that $g_i$ converges to the identity element of $G$ . Pick a point $x \in D$ . By item 4, $x$ has an open neighborhood $U$ such that the set $$S_U = \{g \in G \mid g(cl(D)) \cap U \ne \emptyset\} $$ is finite. But since $g_i$ converges to the identity, it follows that $g_i(x)$ converges to $x$ , and so there exists $I$ such that $i \ge I \implies g_i(x) \in U$ . But $x
|
|differential-geometry|lie-groups|hyperbolic-geometry|
| 0
|
game coloring points rioplatense olympiad 1999
|
This problem is from the Rioplatense MO 1999/3 L3. The question is the following: Two players $A$ and $B$ play the following game: $A$ chooses a point, with integer coordinates, on the plane and colors it green, then $B$ chooses $10$ points of integer coordinates, not yet colored, and colors them yellow. The game always continues with the same rules; $A$ and $B$ choose one and ten uncolored points and color them green and yellow, respectively. a) The goal of $A$ is to achieve $111^2$ green points that are the intersections of $111$ horizontal lines and $111$ vertical lines (parallel to the coordinate axes). $B$ 's goal is to stop him. Determine which of the two players has a strategy that ensures they achieve their goal. b) The goal of $A$ is to achieve $4$ green points that are the vertices of a square with sides parallel to the coordinate axes. $B$ 's goal is to stop him. Determine which of the two players has a strategy that ensures they achieve their goal. I'm having trouble with p
|
Hint 1: If $A$ can win, their penultimate move must set up $11$ different squares so $B$ can't block them all. What would this look like? Hint 2: The first part of the question is a hint for the second part. Solution: First $A$ creates the configuration from a). Imagine a box around all the points that have already been colored by either player. Choose a 45-degree line outside this box. This will be the diagonal of the square. $A$ colors $11$ of the intersections of the horizontal lines from a) with this diagonal. ( $B$ can't block this by coloring only $100$ additional points before $A$ 's eleventh move.) Then, consider the reflections of the $111^2$ intersections from a) across the diagonal. There are $111$ horizontal lines passing through these points. Because $B$ has only colored $110$ additional points, there must be one with no yellow point. $A$ 's next move is to color the intersection of this line with the diagonal. Then, there are $11$ ways $A$ can make a square on the next mov
|
|combinatorics|contest-math|game-theory|combinatorial-game-theory|
| 1
|
For a vector space, given two automorphisms show the following
|
For a vector space, $E$ , with $f$ and $g$ being two automorphisms on $E$ such that $f \circ g = Id_{E}$ , the identity mapping. Show that $\ker(f) = \ker(g \circ f)$ $\newcommand{\Img}{\operatorname{Im}}\Img(g) = \Img(g \circ f)$ $\ker(f) \cap \Img(g) ={0_E}$ For 1. I went about this the following way - $f$ is an automorphism, meaning that it is a linear, bijective mapping on $E$ , meaning it has an inverse and so is full rank. This means that $\ker(f) = 0$ \begin{align} \ker(f \circ g) &= \ker(Id_{E}) \\ &= 0 \\ &= \ker(f) \end{align} For 2. I approached it the following way \begin{align} \Img(g) &= \Img(f \circ g) \\ &= \Img(Id_{E}) \\ &= \Img(g) \end{align} This is because $\dim(\ker(Id_E)) = 0$ , meaning that $\dim(\Img(Id_E)) = \dim(E)$ . I took this approach but otherwise wouldn't have an idea how to solve this (including part 3), so I would be grateful for the community's thoughts. This is from Mathematics for Machine Learning Q2.18 .
|
Note that if $f$ and $g$ are automorphisms, then so is $f \circ g$ (an inverse is given by $g^{-1} \circ f^{-1}$ ). But now any automorphism satisfies $\ker(f) = 0$ and $\operatorname{Im}(f) = E$ , hence all three claims follow immediately. The equations you have written all amount to this more or less, although you do not need to consider dimension at all. Why this question is about automorphisms instead of endomorphisms is a mystery to me.
|
|linear-algebra|
| 1
|
Has there ever been a substantial change in notation resulting from a landmark theorem?
|
Often times the notation we use for mathematical concepts is based in part on properties of that concept. For example, the notation $|A| = |B|$ in set theory means there’s a bijection between $A$ and $B$ , and the notation is warranted because the “there is a bijection between these two sets” relation is an equivalence relation. The notation $A \le_p B$ for polynomial-time reducibility works both because of the intuitive connection between the hardness of $A$ and $B$ and because reducibility is reflexive and transitive. Have there been any major examples of a concept being well-known and studied with one set of notation whereupon a surprising or fundamental theorem was proved that caused folks to revisit the old notation and swap it out for newer notation based on the insights of the theorem?
|
Bra-ket notation? I believe some use bra-ket notation as implying certain things, like being strictly for wavefunctions or implying normalization, but in reality it is just a vector notation which has two main advantages: it defines an explicit notation for dual vectors, making things like the Dirac delta well defined in this notation (it is a vector in the dual vector space that is not dual to any vector in the original space); it makes it easy to define vectors via the eigenvalues of commuting operators. This is just because it is the only vector notation that encloses the name of the vector rather than changing font or adding something at the top or the bottom. It's similar to how XML tags are convenient because they allow adding properties inside the tag. I would say that the discovery of quantum mechanics just made the use of dual vectors alongside ordinary vectors so standard that it eventually forced a fix for the shortcomings of the original vector notations. You can fix the first
|
|notation|math-history|
| 0
|
Transition functions for locally free sheaves
|
The following is from Vakil's notes. Given a locally free sheaf $\mathcal{F}$ with rank $n$ , and a trivializing open neighborhood of $\mathcal{F}$ (an open cover $\{U_i\}$ such that over each $U_i$ , $\mathcal{F}|_{U_i} \cong \mathcal{O}^{\oplus n}_{U_i}$ as $\mathcal{O}$ -modules) we have transition functions $g_{ij} :U_i \cap U_j \to \text{GL}(n,\mathcal{O}(U_i \cap U_j))$ . Can someone elaborate how these transition functions arise? I understand that we have two different isomorphisms $\varphi_i : \mathcal{F}|_{U_i\cap U_j} \to \mathcal{O}^{\oplus n}_{U_i \cap U_j}$ and $\varphi_j:\mathcal{F}|_{U_i\cap U_j} \to \mathcal{O}^{\oplus n}_{U_i \cap U_j}$ and we can consider the composition $$\varphi_i \circ \varphi^{-1}_j : \mathcal{O}^{\oplus n}_{U_i \cap U_j} \to \mathcal{O}^{\oplus n}_{U_i \cap U_j}$$ but this is a morphism of sheaves and I think that these $g_{ij}$ are maps from $U_i \cap U_j$ so we need to consider $\varphi_i \circ \varphi^{-1}_j$ at the level of sections or someth
|
For our purpose, the indexed open cover doesn't matter so, for now, let's just work on an open subset $U \subset X$ . Then, $GL_n(\mathcal{O}(U))$ is an affine scheme, isomorphic to $\operatorname{Spec} A$ where $$A = \mathcal{O}(U)[x_{11}, x_{12} \dots, x_{nn}]_{\operatorname{det}(x_{ij})}.$$ As such, to give a map $U \to GL_n(\mathcal{O}(U))$ it is equivalent to give a map $A \to \mathcal{O}(U)$ . If $\mathcal{O}^{\oplus n}_{U} \to \mathcal{O}^{\oplus n}_{U}$ is an isomorphism it then gives an isomorphism of $\mathcal{O}(U)$ -modules $\psi: \mathcal{O}(U)^{\oplus n} \to \mathcal{O}(U)^{\oplus n}$ . Being a map of free modules, it admits a matrix form $\psi = (\psi_{ij})$ , with $\psi_{ij} \in \mathcal{O}(U)$ , so that $\det(\psi_{ij})$ is a unit in $\mathcal{O}(U)$ . By the universal property of localization, the map sending $x_{ij} \mapsto \psi_{ij}$ induces a ring homomorphism $A \to \mathcal{O}(U)$ , as required.
|
|algebraic-geometry|sheaf-theory|
| 1
|
Asymptotics for some modified Bessel function of first kind
|
I am interested in the asymptotics for large order and large argument of the modified Bessel function of the first kind $$I_{\frac{N-1}{2}} (Nz) $$ where $z$ is a fixed positive real number $z>0$ and $N$ is a natural number. I want to let $N\to+\infty$ . I checked the NIST collection and the book by Abramowitz and Stegun, but I did not find anything useful. Do you have any tips? Thank you very much.
|
Let $\kappa \in \mathbb C$ be fixed, $x>0$ and $\operatorname{Re}(\nu)>0$ . By $(10.32.12)$ , we can write $$ I_{\nu + \kappa } (\nu \operatorname{csch} x) = \frac{1}{{2\pi {\rm i}}}\int_{\infty - {\rm i}\pi }^{\infty + {\rm i}\pi } {\exp \left( { - \nu (t - \operatorname{csch} x\cosh t)} \right)\,{\rm e}^{ - \kappa t} {\rm d}t} . $$ The relevant saddle point is at $t=x$ . An application of the saddle point method then gives $$ I_{\nu + \kappa } (\nu \operatorname{csch}x) \sim \frac{{\exp (\nu (\coth x - x) - \kappa x)}}{{\sqrt {2\pi \nu \coth x} }}\sum\limits_{n = 0}^\infty {\frac{{U^{\kappa}_n (\tanh x)}}{{\nu ^n }}} , $$ as $\nu \to \infty$ in the sector $|\arg \nu|\le \frac{\pi}{2}-\delta$ , uniformly with respect to $x>0$ (cf. $(10.41.3)$ ). The coefficients $U^{\kappa}_n(w)$ are polynomials in $w$ of degree $3n$ , the first few being \begin{align*} & U^{\kappa}_0(w)=1,\\ & U^{\kappa}_1(w)= - \frac{5}{{24}}w^3 - \frac{\kappa }{2}w^2 + \frac{{1 - 4\kappa ^2 }}{8}w, \\& U^{\kappa}_2(
|
|asymptotics|bessel-functions|
| 1
|
Tangent bundle of a sphere $T\mathbb S^n$ is diffeomorphic to $\mathbb S^n \times \mathbb S^n - \Delta$
|
Let $\mathbb S^n$ denote the $n$ -sphere, which is the smooth manifold consisting of all points in $\mathbb R^{n+1}$ with Euclidean norm one. Recall that the tangent bundle of $\mathbb S^n$ , denoted $T \mathbb S^n$ , is a $2n$ -dimensional smooth manifold. Let $\Delta \subseteq \mathbb S^n \times \mathbb S^n$ denote the diagonal $\Delta := \{(x,x) : x \in \mathbb S^n\}$ . By Hausdorffness, $\Delta$ is closed, so $\mathbb S^n \times \mathbb S^n - \Delta$ is an open subset of $\mathbb S^n \times \mathbb S^n$ . In this question about the tangent bundle of the $n$ -sphere, (I believe) one of the answers claims that we have a diffeomorphism $$T\mathbb S^n \cong \mathbb S^n \times \mathbb S^n - \Delta.$$ How can we see that this is true? What is an explicit diffeomorphism?
|
We must be very explicit to prove it. We have $$S^n \times S^n \setminus \Delta = \bigcup_{p \in S^n} \{p\} \times (S^n \setminus \{ p\})$$ which is a smooth submanifold of $\mathbb R^{n+1} \times \mathbb R^{n+1}$ . The tangent space $T_pS^n$ can be regarded as the orthogonal complement of $p$ , i.e. as $$T_pS^n = \{x \in \mathbb R^{n+1} \mid x \cdot p = 0 \} .$$ Then the "Euclidean" tangent bundle is $$TS^n = \bigcup_{p \in S^n} \{p\} \times T_pS^n = \{(p,x) \in \mathbb R^{n+1} \times \mathbb R^{n+1} \mid p \cdot p = 1, x \cdot p = 0 \} .$$ It is well-known that this is a smooth submanifold of $\mathbb R^{n+1} \times \mathbb R^{n+1}$ which is diffeomorphic to the "abstract" tangent bundle of $S^n$ . Our task is to find diffeomorphisms $S^n \setminus \{ p\} \to T_pS^n$ which fit together to a diffeomorphism $S^n \times S^n \setminus \Delta \to TS^n$ . Stereographic projection from $p$ to $T_pS^n$ is the key (see Stereographic projection when the "North/South Pole" is not given by $(0,...,\pm 1)$? ). T
|
|differential-geometry|smooth-manifolds|spheres|diffeomorphism|tangent-bundle|
| 1
|
Convex function derivatives inequality
|
Decide whether there exists a function $f: \mathbb{R} \to (0,+\infty)$ such that for all $x \in \mathbb{R}$ we have $f''(x)f(x)>(f'(x))^{2}$ . I know that if such a function existed then $f''(x) >0$ for all $x$ , so it would be convex. I was hoping that I could fix $a$ and find some solutions to $f''f=a(f')^2$ , but all the solutions of that either fail to be positive or have problems with continuity at $x=0$ . Any help will be greatly appreciated.
|
We observe that $$\frac{f''(x)f(x)-(f'(x))^2}{f^2(x)} = \left( \frac{f'(x)}{f(x)} \right)' =\left(\ln(f(x))\right)''$$ So, it suffices to choose $f$ such that $\ln(f(x))$ is a strictly convex function. For example, if we choose $\ln(f(x)) = x^2$ then $$f(x) = e^{x^2}$$
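As a quick numerical sanity check of this choice (an editorial Python sketch, not part of the original answer): for $f(x)=e^{x^2}$ we have $f'(x)=2xe^{x^2}$ and $f''(x)=(2+4x^2)e^{x^2}$, so $f''f-(f')^2=2e^{2x^2}>0$ everywhere.

```python
import math

# Editorial check: for f(x) = exp(x^2),
#   f'(x)  = 2x e^{x^2},
#   f''(x) = (2 + 4x^2) e^{x^2},
# so f''(x) f(x) - f'(x)^2 = 2 e^{2x^2} > 0 for every x.
def f(x):
    return math.exp(x * x)

def fp(x):
    return 2 * x * math.exp(x * x)

def fpp(x):
    return (2 + 4 * x * x) * math.exp(x * x)

for x in [-3.0, -1.0, -0.5, 0.0, 0.7, 2.5]:
    gap = fpp(x) * f(x) - fp(x) ** 2
    assert gap > 0
    assert abs(gap - 2 * math.exp(2 * x * x)) <= 1e-9 * math.exp(2 * x * x)
```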
|
|real-analysis|derivatives|inequality|convex-analysis|
| 1
|
Tilted rectangle falling down
|
Rectangle $ABCD$ is tilted such that its base $AB$ makes an angle of $\theta$ with the horizontal floor. It has its vertex $A$ in contact with the horizontal floor, and the rectangle is released from this position, to fall freely and the vertex at $A$ is free to slide along the floor. What will be the rectangle's final linear and angular velocity when its base hits the floor? The rectangle has dimensions: $\overline{AB} = \ell$ and $\overline{BC} = w $ . My initial thought: The solution, I think, can be obtained from the Euler-Lagrange equation of motion . First, I define my state variables as follows: firstly, we have the linear distance of point $A$ from a fixed origin. Let's call this $x$ . Then we have the tilt angle $q$ of the base $AB$ from the $x$ axis. We can take $x(0) = 0$ and $q(0) = \theta$ . The position of the center of mass of the rectangle is $$ P = (x, 0) + ( \ell \cos(q) - w \sin(q), \ell \sin(q) + w \cos(q) ). $$ Hence, the linear velocity is $$ \dot{P} = (\dot{x}, 0
|
This may be a case where the Euler-Lagrange machinery is unnecessarily abstract. Call the center of the rectangle $O$ ; let $\phi$ be the angle between the positive $x$ -axis and $\overline{AO}$ ; and let $r=\frac{1}{2}\sqrt{w^2+\ell^2}$ be the length of $\overline{AO}$ . The center of mass is at height $h=r \sin\phi$ . There is an upward force $F$ acting at $A$ , and the downward force of gravity $mg$ acting at $O$ , so the net force is $F-mg$ , and $$ F-mg=m\ddot{h}=m\frac{d^2}{dt^2}(r\sin\phi)=mr\left(\ddot{\phi}\cos\phi - (\dot{\phi})^2\sin\phi\right). $$ Moreover, the torque relative to $O$ is $\tau=-Fr\sin(\pi/2-\phi)=-Fr\cos\phi$ , so $$ -Fr\cos\phi=I\ddot{\phi}=\frac{1}{3}mr^2\ddot{\phi}. $$ Putting these two expressions for $F$ together, we find the equation of motion for $\phi$ to be $$ \ddot{\phi}=\frac{(\dot{\phi})^2\sin\phi\cos\phi - (g/r)\cos\phi}{\cos^2\phi + 1/3}. $$ Note that $\ddot\phi$ is initially negative, as expected, since the block is rotating clockwise and $\ph
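A numerical sanity check of this equation of motion (an editorial sketch; $m=1$, $r=1$, $g=9.81$ and the initial tilt are arbitrary choices): since $A$ slides freely, the horizontal velocity of the center of mass stays zero, so the per-unit-mass energy $E=\frac12 r^2\cos^2\phi\,\dot\phi^2+\frac16 r^2\dot\phi^2+gr\sin\phi$ should be conserved along solutions of the ODE.

```python
import math

# Editorial sketch: integrate
#   phi'' = (phi'^2 sin(phi)cos(phi) - (g/r)cos(phi)) / (cos^2(phi) + 1/3)
# with classic RK4 and verify that the mechanical energy per unit mass,
#   E = (1/2) r^2 cos^2(phi) phi'^2 + (1/6) r^2 phi'^2 + g r sin(phi),
# stays constant (horizontal CM velocity is zero throughout).
g, r = 9.81, 1.0

def acc(phi, w):
    return (w * w * math.sin(phi) * math.cos(phi) - (g / r) * math.cos(phi)) / (
        math.cos(phi) ** 2 + 1.0 / 3.0
    )

def energy(phi, w):
    return 0.5 * r**2 * math.cos(phi)**2 * w**2 + r**2 * w**2 / 6.0 + g * r * math.sin(phi)

phi, w, dt = 1.0, 0.0, 1e-4   # arbitrary initial tilt, released from rest
E0 = energy(phi, w)
for _ in range(3000):         # 0.3 s of simulated time
    k1p, k1w = w, acc(phi, w)
    k2p, k2w = w + 0.5 * dt * k1w, acc(phi + 0.5 * dt * k1p, w + 0.5 * dt * k1w)
    k3p, k3w = w + 0.5 * dt * k2w, acc(phi + 0.5 * dt * k2p, w + 0.5 * dt * k2w)
    k4p, k4w = w + dt * k3w, acc(phi + dt * k3p, w + dt * k3w)
    phi += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    w += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0

assert abs(energy(phi, w) - E0) < 1e-6 * abs(E0)  # energy conserved to RK4 accuracy
assert w < 0                                      # rotating clockwise, as expected
```

Energy conservation is a useful independent test here because one can verify by hand that $\frac{d}{dt}E = r^2\dot\phi\left[(\cos^2\phi+\tfrac13)\ddot\phi - \sin\phi\cos\phi\,\dot\phi^2 + (g/r)\cos\phi\right]$, which vanishes exactly when the ODE above holds.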
|
|solution-verification|physics|euler-lagrange-equation|
| 1
|
Nef and big implies semi ample?
|
Consider a smooth projective variety and a divisor $D$ on it which is nef and big. Is $D$ semiample, and if not, is there a condition one can add so that this is always the case? I tried to show it using the facts that $D$ is semiample if its stable base locus $\mathbb B(D)=\bigcap_{m\geq1}\operatorname{Bs}|mD|$ is empty, and that $D$ is nef if and only if its diminished stable base locus $\mathbb B_{-}(D)=\bigcup_{A \text{ ample}}\mathbb B(D+A)$ is empty. Moreover, since $D$ is big, for each ample divisor $A$ we can find $m_A\in\mathbb N$ and an effective divisor $N_A$ such that $m_AD\sim A+N_A$ . However, I didn't manage to do so.
|
No it is not always the case that big and nef imply semiample. Zariski's example is a counterexample to this, which appears in section 2.3.A in Lazarsfeld's Positivity in Algebraic Geometry , volume I. You are correct that, quite generally, a divisor is semiample if and only if its stable base locus is empty. Indeed, theorem 2.1.20 in the aforementioned book states that $\mathbb{B}(D)$ is the unique minimal element in the set $\{\text{Bs}(|mD|)\}_{m \geq 0}$ . In the big and nef case however we have that $D$ is semiample if and only if its section ring $R(X, D) = \bigoplus_{m \geq 0} H^0(X, mD)$ is a finitely generated algebra. This is theorem 2.3.15 in Lazarsfeld's book.
|
|algebraic-geometry|
| 1
|
Hamiltonian isotopy of the zero section is $\text{Graph}(df)$?
|
Consider a manifold $L$ included into its cotangent bundle $T^\star L$ as the zero section (so $L$ is a Lagrangian in $T^\star L$ , hence the notation). It is standard knowledge that given $\alpha$ a $1$ -form on $L$ , viewed as a section (hence a map $\alpha : L \to T^\star L$ ), then $\text{Graph}(\alpha) \subset T^\star L$ is also a Lagrangian submanifold iff $\alpha$ is a closed $1$ -form (see Prop. 1.2 of these notes for proof, or most any textbook on symplectic geometry). However, the result I need is the following: Prop. Any Hamiltonian deformation of the zero section $L \hookrightarrow T^\star L$ can be written $\text{Graph}(df)$ for $f\in\mathcal{C}^\infty(L)$ . where by Hamiltonian deformation, I mean a deformation by a Hamiltonian isotopy. Does anyone know where I could find a proof of such a statement? Or of a similar one? I am mainly looking for references (or a proof, if it's short enough). Why I think this is true: though I've never seen it stated in such a straightforward way,
|
The answer is no. The easiest example is the cotangent fiber in $\mathbb{R}^2=T^*\mathbb{R}$ . In this case, the cotangent fiber $T^*_0\mathbb{R}$ is Hamiltonian isotopic to the zero section, since the rotation by $\pi/2$ is a Hamiltonian map given by the quadratic Hamiltonian function $H(x,y)=c(x^2+y^2)$ (for a suitable $c$ I don't want to compute now). $T^*_0\mathbb{R}$ is not the graph of an exact $1$ -form, obviously or by the following discussion. In fact, the Lagrangian $L=\operatorname{graph}(df)$ has the following property: The restriction of the cotangent projection $\pi|_L: L \subset T^*N\rightarrow N$ induces a diffeomorphism. The same is also true for the Legendrian lifting $j^1f=\{(x,df(x),f(x)):x\in N\}$ , and its front projection $j^0f=\{(x,f(x)):x\in N\}$ : all the natural projections restrict to diffeomorphisms onto $N$ . However, it is easy to construct front projections (which are easy to describe since they live in $N\times \mathbb{R}$ ) of exact Lagrangians Hamiltonian isoto
|
|differential-geometry|reference-request|symplectic-geometry|
| 1
|
Alternating sum of reciprocals of binomial coefficients
|
I'm looking for a simple proof of the identity $$ \sum_{k=0}^n \frac{(-1)^k}{\binom{n}{k}} = \frac{n+1}{n+2} (1+(-1)^n) $$ relying only on elementary properties of binomial coefficients. I obtained this by starting with the integral representation $$ \frac{(a-1)!(b-1)!}{(a+b-1)!} = \int_0^{\infty} \frac{t^{b-1}}{(1+t)^{a+b}} \, dt, \hspace{0.5cm} a,b \in \mathbb{N}, $$ setting $a=n-k+1, b=k+1$ and adding, which gives $$ \frac{1}{n+1} \sum_{k=0}^n \frac{(-1)^k}{\binom{n}{k}} = \sum_{k=0}^n (-1)^k \frac{(n-k)! k!}{(n+1)!} = \int_0^{\infty} \frac{1}{(1+t)^{n+2}} \sum_{k=0}^n (-t)^k \, dt = \int_0^{\infty} \frac{1}{(1+t)^{n+2}} \frac{(-t)^{n+1}-1}{-t-1} \, dt $$ $$ = (-1)^n \int_0^{\infty} \frac{t^{n+1}}{(1+t)^{n+3}} \, dt + \int_0^{\infty} \frac{dt}{(1+t)^{n+3}} = \frac{(-1)^n+1}{n+2} $$ Is there an easier way, something which doesn't rely on the integral representation? Thanks.
|
We have $$(-1)^kk!(n-k)!=\dfrac{1}{n+2}\left[(-1)^k(k+1)!(n-k)!-(-1)^{k-1}k!(n+1-k)!\right]$$ Therefore \begin{align*}\sum_{k=0}^n\dfrac{(-1)^k}{n\choose k}&=\dfrac{1}{n!}\sum_{k=0}^n(-1)^kk!(n-k)!\\ &=\dfrac{1}{n!(n+2)}\left[(-1)^n(n+1)!0!-(-1)0!(n+1)!\right] \\ &=\dfrac{(n+1)![(-1)^n+1]}{n!(n+2)} \\ & = \dfrac{(n+1)[(-1)^n+1]}{n+2}\end{align*}
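The identity is easy to confirm by exact rational arithmetic for small $n$ (an editorial Python sketch):

```python
from fractions import Fraction
from math import comb

# Editorial check of the identity with exact rational arithmetic:
#   sum_{k=0}^n (-1)^k / C(n,k) = (n+1)/(n+2) * (1 + (-1)^n).
for n in range(0, 25):
    lhs = sum(Fraction((-1) ** k, comb(n, k)) for k in range(n + 1))
    rhs = Fraction(n + 1, n + 2) * (1 + (-1) ** n)
    assert lhs == rhs
```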
|
|summation|binomial-coefficients|alternating-expression|
| 1
|
The projective tensor norm on tensor product of Banach spaces implies the inner product on tensor product of Hilbert spaces?
|
As presented in the answer of this post , the projective tensor norm on the algebraic tensor product of two Banach spaces $X$ and $Y$ is given by \[ \Vert \omega\Vert_{\pi} = \inf\left\{\sum \lVert x_{i}\rVert_X \,\lVert y_{i}\rVert_ Y\,:\,\omega = \sum_{i=1}^{n} x_{i} \otimes y_{i}\right\} \] with respect to which we complete the algebraic tensor product to obtain the tensor product Banach space $X \otimes Y$ For Hilbert spaces $H_1$ and $H_2$ , as shown in this link , the inner product on the algebraic tensor product is given by linear extension of the formula \begin{equation} \langle \phi_1 \otimes \phi_2, \psi_1 \otimes \psi_2 \rangle = \langle \phi_1, \psi_1 \rangle_{H_1} \langle \phi_2, \psi_2 \rangle_{H_2} \end{equation} for $\phi_1, \psi_1 \in H_1$ and $\phi_2, \psi_2 \in H_2$ . Next, the tensor product Hilbert space $H_1 \otimes H_2$ is defined to be completion under this inner product. Now, my question is Does the projective tensor norm on general Banach spaces, as defined ab
|
Suppose $X$ and $Y$ are Hilbert spaces and $X\hat\otimes_\pi Y$ is isomorphic to $X\hat\otimes_h Y$ . Since the latter space is reflexive, so is $X\hat\otimes_\pi Y$ . From Theorem 4.21 in Introduction to Tensor Products of Banach Spaces by R. A. Ryan we know that $X\hat\otimes_\pi Y$ is reflexive iff every operator from $X$ to $Y^*$ is compact. For Hilbert spaces $X$ and $Y$ this is the case only if $X$ or $Y$ is finite dimensional. I think that one of the spaces has to be at most one dimensional if we require an isometric isomorphism between $X\hat\otimes_\pi Y$ and $X\hat\otimes_h Y$ .
|
|functional-analysis|hilbert-spaces|banach-spaces|inner-products|tensor-products|
| 0
|
Canonical dilation vector field on a vector bundle
|
I am reading a paper by Langerock, called "A connection theoretic approach to sub-Riemannian geometry". He writes: [Let $\pi:E\rightarrow M$ be a vector bundle] ... Let $\{\phi_t\}$ represent the flow of the canonical dilation vector field on $E$ i.e. in natural vector bundle coordinates $(x^i,y^A)$ on $E$ we have $$\phi_t(x^i,y^A)=(x^i,e^ty^A)$$ Now, I searched online but I cannot figure out what a canonical dilation vector field on a vector bundle is. Moreover, I cannot figure out what $e^t$ is in the above notation. My best guess is that $\phi_t:E\rightarrow E$ maps a point $e=(x^i,y^A)$ to the point obtained by letting $e$ act $t$ times on itself (through the action of a vector field on itself, i.e. the one of an abelian group on itself). So we get a vertical action on each fiber when at each $t$ each point $f$ in a generic fiber is sent to $(t+1)f$ in the same fiber. Am I correct? Thanks in advance
|
The dilation vector field is also called the canonical vector field or (in the case that $E = TM$ ) the Liouville vector field . The situation is that for every vector bundle $p: E \to B$ there is a canonical vector field $C$ on $E$ coming from the linear structure of the fibers of $E$ . If we put local coordinates $(x_i, v_i)$ on $E$ , then the local formula for $C$ is $$ C_{(x_i, v_i)} = v_i \frac{\partial}{\partial v_i}\!{\LARGE\rvert}_{\normalsize\!(x_i,v_i)}. $$ Put another way, given $v \in E$ each $C_v$ is a tangent vector in $T_v E$ , which is represented by curve $\gamma(t) = v + t v$ in $E$ (i.e. $C_v = \frac{d}{dt}{\large\rvert}_{t = 0}\!\ \gamma(t)$ ). It's not too difficult to see that the integral curves of this vector field are given by the global flow $\phi_t : \mathbb{R} \times E \to E$ defined by $\phi_t(v) = e^t v$ , so this explains your original question. (Note that the curves $\gamma(t) = v + t v$ and $\delta(t) = e^t v$ represent the same tangent vector based at
|
|differential-geometry|group-actions|vector-bundles|vector-fields|
| 1
|
Probabilities for Pure birth Process
|
Consider a pure birth process starting from $X(0) = 0$ with birth parameters $\lambda_0 = 1.4$ and $\lambda_1 = 1.8$ . Compute the following probabilities: $\mathbb{P}(X(0.2) = 0)$ and $\mathbb{P}(X(0.2) = 1)$ . So far I have calculated $\mathbb{P}(X(0.2) = 0) = \frac{(1.4*0.2)^0*e^{-1.4*0.2}}{0!} = 0.75578$ . But I am having trouble figuring out how to calculate the next one. So far I have tried assuming that we have not yet reached state $1$ in the time interval $[0,0.2]$ , giving $(1.4*0.2)^1*e^{-1.4*0.2}$ , but it isn't correct, and also $(1.8*0.2)^1*e^{-1.8*0.2}$ , to no avail. I would appreciate any help with this question.
|
For the second one, you have to consider all possible times at which the birth happened, and it could have happened anywhere in the interval $[0,T]$ . The first birth time has density $\lambda_0 e^{-\lambda_0 t_1}$ , and after it the process must stay in state $1$ until time $T$ : $$ \begin{split}P(X(T)=1)&=\int_0^T \lambda_0 e^{-\lambda_0 t_1}\,e^{-\lambda_1(T-t_1)}\,dt_1\\&= \frac{\lambda_0}{\lambda_1-\lambda_0}\left(e^{-\lambda_0 T}- e^{-\lambda_1 T}\right) \end{split}$$ With $\lambda_0=1.4$ , $\lambda_1=1.8$ and $T=0.2$ this gives $P(X(0.2)=1)\approx 0.2034$ .
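As a sanity check (an editorial sketch), one can also simulate the process directly: $X(T)=1$ exactly when the first holding time, which is $\operatorname{Exp}(\lambda_0)$, ends before $T$ while the next holding time, $\operatorname{Exp}(\lambda_1)$, carries past $T$. Integrating the first-birth density gives the closed form $P(X(T)=1)=\frac{\lambda_0}{\lambda_1-\lambda_0}\left(e^{-\lambda_0T}-e^{-\lambda_1T}\right)$, and the simulation agrees:

```python
import math
import random

# Editorial Monte Carlo check: X(T) = 1 exactly when the first holding time
# T0 ~ Exp(lam0) ends before T while the next holding time T1 ~ Exp(lam1)
# carries past T.
lam0, lam1, T = 1.4, 1.8, 0.2

# closed form from integrating the first-birth density lam0 * exp(-lam0 * t)
p1 = lam0 * (math.exp(-lam0 * T) - math.exp(-lam1 * T)) / (lam1 - lam0)

rng = random.Random(12345)
n = 200_000
hits = 0
for _ in range(n):
    t0 = rng.expovariate(lam0)
    t1 = rng.expovariate(lam1)
    if t0 <= T < t0 + t1:
        hits += 1

assert abs(p1 - 0.2034) < 5e-4    # the closed form evaluates to about 0.2034
assert abs(hits / n - p1) < 5e-3  # the simulation agrees with the closed form
```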
|
|poisson-process|birth-death-process|
| 0
|
Find an efficient way to approximate the sum of the reciprocals of the squares of integers from 2017 to 4030
|
The question This question was taken from an International Junior Mathematical Olympiad exam for 9th graders. The question is as follows: Let $x$ satisfy the equation $\dfrac{1}{x}=\dfrac{1}{2017^2}+\dfrac{1}{2018^2}+\ldots+\dfrac{1}{4030^2}$ . Which of the following numbers is the nearest to $x$ ? A. $2016$ B. $2017$ C. $3024$ D. $4035$ E. $4037$ (The stated answer is D. $4035$ .) My work First of all, let's simplify the problem: $\dfrac{1}{x}=\displaystyle\sum_{k=2017}^{4030}\frac{1}{k^2}$ When searching for algebraic resources, I first used the following two identities to obtain a lower and an upper bound: $\dfrac{1}{n}-\dfrac{1}{n+1} = \dfrac{1}{n\left(n+1\right)} < \dfrac{1}{n^2}$ and $\dfrac{1}{n-1}-\dfrac{1}{n} = \dfrac{1}{n\left(n-1\right)} > \dfrac{1}{n^2}$ After a few manipulations (note the application of telescopic sums), this is the result obtained: $\displaystyle\sum_{n=2017}^{4030}\left(\dfrac{1}{n}-\dfrac{1}{n+1}\right) < \dfrac{1}{x} < \displaystyle\sum_{n=2017}^{4030}\left(\dfrac{1}{n-1}-\dfrac{1}{n}\right)$ From this point on, I realized that this approach was inefficient for an Olympiad like IJMO, af
|
The first approach is not needed; the second approach using the identity $$\frac{1}{n-1} - \frac{1}{n+1} = \frac{2}{n^2-1}$$ yields a tighter bound. With it, we can eliminate choices (A) through (C). The next thing to do is to estimate the error: the term-wise error is approximately $$\frac{1}{n^2 - 1} - \frac{1}{n^2} = \frac{1}{n^2(n^2-1)} = \frac{1}{n^4 - n^2 + \frac{1}{4} - \frac{1}{4}} = \frac{1}{(n^2-\frac{1}{2})^2 - \frac{1}{4}} \approx \frac{1}{(n^2-\frac{1}{2})^2}.$$ Certainly, $$\frac{1}{n^2-1} - \frac{1}{n^2} < \frac{1}{(n^2-1)^2}$$ for $n > 1$ . This error is largest when $n = 2017$ , and each term is less than $\frac{1}{2016^4}$ , so with $2014$ terms the total error is $$\sum_{n=2017}^{4030} \frac{1}{n^2-1} - \frac{1}{x} < \frac{2014}{2016^4} < \frac{1}{2016^3}.$$ This tells us that the given sum is extremely close to $\frac{1}{4035.5}$ , namely $$\frac{1}{4035.5} - \frac{1}{x} < \frac{1}{2016^3},$$ and we easily have $x < 4036$ : $$\frac{1}{4035.5} - \frac{1}{4036} = \frac{1}{4036(2(4036)-1)} > \frac{1}{2016^3}.$$
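Since the problem is small, the conclusion can also be checked by direct computation (an editorial Python sketch):

```python
# Editorial brute-force check: x = 1 / (1/2017^2 + ... + 1/4030^2)
# lands just above 4035.5, so among the listed options 4035 is nearest.
s = sum(1.0 / (n * n) for n in range(2017, 4031))
x = 1.0 / s
assert 4035.5 < x < 4036
```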
|
|summation|approximation|telescopic-series|
| 0
|
How do we show that on a parabolic Hölder space, the polynomial and Hölder seminorms are equivalent?
|
I am currently working through Krylov's Lectures on Elliptic and Parabolic Equations in Hölder Spaces . One of the key points of chapter 8 is that the two seminorms $$[u]_{1+\delta/2,2+\delta;U} := \sup_{X \neq Y \in U}\frac{|u(X)-u(Y)|}{\rho(X,Y)^{\delta}}$$ and $$[u]'_{1+\delta/2,2+\delta;U} := \sup_{X\in U}\sup_{\rho>0} \frac{1}{\rho^{2+\delta}}\inf_{p\in\mathcal{P}_2}|u-p|_{0;U}$$ for polynomials $p \in \mathcal{P}_2$ (those that are first order in time and second order in space) are equivalent. Here, $$\rho(X=(t,x),Y=(s,y)) := |t-s|^{1/2} + |x-y|$$ and $$Q_{\rho}(X_0) := (t_0-\rho^2,t_0) \times B_{\rho}(x_0).$$ The key point of the proof that is absolutely stumping me is that after defining the finite difference operator $$ \sigma_{h}: u\to \frac{1}{h^2}[u(t,x)-u(t-h^2,x)]$$ and showing that for any polynomial $p \in \mathcal{P}_2$ and points $X_1,X_2$ such that $t_1 \leq t_2$ , taking $h := \varepsilon\rho$ for $\varepsilon \in (0,1)$ yields that \begin{equation} \tag{1}|\sigma_h(u-p
|
Putting the brain to grindstone yielded what I believe is a correct solution. We note that the inequality above holds for all $p \in \mathcal{P}_2$ . Therefore, taking an infimum over $p$ of both sides of the inequality yields $$\inf_{p}\frac{4}{h^2}|u-p|_{0;Q_{3\rho(X_2)}} = \inf_{p}\frac{4}{\varepsilon^2}\cdot\frac{\rho^{\delta}}{\rho^{2+\delta}}|u-p|_{0;Q_{3\rho(X_2)}}.$$ Noting the independence of some quantities on $p$ yields $$\frac{4\rho^{\delta}}{\varepsilon^2}\frac{1}{\rho^{2+\delta}}\inf_p|u-p|_{0;Q_{3\rho(X_2)}} \leq C\varepsilon^{-2}\rho^{\delta}[u]'_{1+\delta/2,2+\delta}.$$
|
|partial-differential-equations|taylor-expansion|regularity-theory-of-pdes|holder-spaces|parabolic-pde|
| 1
|
Ratio of two beta random variables
|
I've been working on a problem for an hour and wanted to get some hints. Suppose $y_1, y_2, y_3, y_4 \sim \operatorname{Dir}(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$ . What is the distribution of $\frac{y_1}{y_1 + y_2}$ ? My guess is that the distribution should be $\operatorname{Beta}(\alpha_1, \alpha_2)$ . Could you give me some hints on how to show it?
|
By the story of the Dirichlet distribution, given independent $X_{i} \sim \operatorname{Gamma}(\alpha_{i}, 1)$ , the distribution of $y_{1}$ can be expressed as: \begin{equation*}\begin{aligned} y_{1} &\sim \frac{X_{1}}{\sum_{i = 1}^{4} X_{i}} \end{aligned}\end{equation*} Therefore, we can write the distribution of $\alpha = \frac{y_{1}}{y_{1} + y_{2}}$ as: \begin{equation*}\begin{aligned} \alpha &\sim \frac{y_{1}}{y_{1} + y_{2}}\\ &\sim \frac{X_{1}/\sum_{i = 1}^{4} X_{i}}{X_{1}/\sum_{i = 1}^{4} X_{i} + X_{2}/\sum_{i = 1}^{4} X_{i}}\\ &\sim \frac{X_{1}}{X_{1} + X_{2}}\\ &\sim \operatorname{Beta}(\alpha_{1}, \alpha_{2}) \end{aligned}\end{equation*} which shows that $\alpha \sim \operatorname{Beta}(\alpha_{1}, \alpha_{2})$ , since a ratio $X_1/(X_1+X_2)$ of independent $\operatorname{Gamma}(\alpha_1,1)$ and $\operatorname{Gamma}(\alpha_2,1)$ variables is $\operatorname{Beta}(\alpha_1,\alpha_2)$ .
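The Gamma representation can be illustrated by simulation (an editorial sketch; the $\alpha_i$ values below are arbitrary picks): build Dirichlet samples from independent Gamma draws and check that $y_1/(y_1+y_2)$ has the $\operatorname{Beta}(\alpha_1,\alpha_2)$ mean $\alpha_1/(\alpha_1+\alpha_2)$.

```python
import random

# Editorial Monte Carlo illustration (alpha values are arbitrary picks):
# a Dirichlet vector is built from independent Gamma(alpha_i, 1) draws,
# and y1/(y1+y2) reduces to the Gamma ratio X1/(X1+X2) ~ Beta(a1, a2).
alphas = [2.0, 3.0, 1.5, 4.0]
rng = random.Random(7)

n = 100_000
total = 0.0
for _ in range(n):
    xs = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(xs)
    y = [v / s for v in xs]        # (y1, ..., y4) ~ Dir(alphas)
    total += y[0] / (y[0] + y[1])  # equals x1/(x1+x2), the Gamma ratio

beta_mean = alphas[0] / (alphas[0] + alphas[1])  # mean of Beta(2, 3) is 0.4
assert abs(total / n - beta_mean) < 0.01
```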
|
|probability-distributions|ratio|
| 0
|
Geometric Intuition for Hamiltonian Actions
|
Let $G$ be a connected Lie group acting on the symplectic manifold $(M,\omega)$ . In the definition of a Hamiltonian action one requires that the moment map $\mu\colon M\xrightarrow{} \mathfrak{g}^\ast$ satisfies $$\omega(X_\xi,\cdot) = d\langle \mu,\xi\rangle$$ for all $\xi\in\mathfrak{g}$ . In other words, we want $X_\xi$ to be the Hamiltonian vector field of the function $\langle \mu,\xi\rangle$ . While I understand what the definition says, I don't really have a geometric understanding of it. The second condition in the definition is that $\mu$ should be $G$ -equivariant, which is more intuitive to me. What exactly does the above condition ensure? Is there a more geometric way to think about it?
|
A moment map is a set of rules that measures your symplectic actions. On a symplectic manifold, you can use one Hamiltonian function $H$ to test action/energy/momentum at every point of your symplectic manifold equipped with the symmetry (precisely, the flow generated by the Hamiltonian vector field $X_H$ ) corresponding to the Hamiltonian. This is a version of Noether's theorem. So you can think of $H$ as a ruler of the symmetry. Now, suppose you have more than one symmetry! Precisely, you have a Lie group $G$ of dimension greater than $1$ acting on your symplectic manifold. Then for every element $\zeta\in \mathfrak{g}$ of the Lie algebra, you can first assume the flow corresponding to $\zeta$ is a Hamiltonian flow: i.e. there exists $H_\zeta$ such that $X_\zeta=X_{H_\zeta}$ . Therefore, you have an assignment: for a symmetry $\zeta$ , you have a Hamiltonian function $H_\zeta$ (a ruler for each $\zeta$ ). By definition of the Hamiltonian vector field, you have $dH_\zeta=\iota_{X_\zeta}\omega$ . (I
|
|differential-geometry|lie-groups|lie-algebras|group-actions|symplectic-geometry|
| 0
|
Check if method of moments estimator is unbiased for $X_1...X_n$ being a random sample from $Uniform[-\theta,\theta]$
|
I am not sure how to do this. To find the method of moments estimator I did: $E[X] = \frac{-\theta + \theta}{2} = 0$ use 2nd moment: $E[X^2] = \frac{(-\theta)^2 + (-\theta)(\theta) + \theta^2}{3} = \frac{\theta^2}{3} = \frac{1}{n} \sum_{i=1}^n X_i^2$ $\hat{\theta} = \sqrt{\frac{3}{n} \sum_{i=1}^n X_i^2}$ And to find if it is unbiased, I tried this, but am not sure what to do with the integral $E[\hat{\theta}] = \sqrt{\frac{3}{n}} E[\sqrt{\sum_{i=1}^n X_i^2}]$ $E[\hat{\theta}] = \int_{-\theta}^\theta \sqrt{\frac{3}{n} \sum_{i=1}^n X_i^2} \frac{1}{2\theta} dx$ $= \sqrt{\frac{3}{n}} * \frac{1}{2\theta} \int_{-\theta}^\theta \sqrt{\sum_{i=1}^n X_i^2} dx$
|
It is sufficient to compute the expectation of $$\tilde \theta = \sqrt{\frac{3}{n} \sum_{i=1}^n X_i^2}$$ for the case $n = 1$ to show that it is biased. We have $$\operatorname{E}[\tilde \theta] = \int_{x=-\theta}^\theta \sqrt{3x^2} \cdot \frac{1}{2\theta} \, dx = \frac{\sqrt{3}}{\theta} \int_{x=0}^\theta x \, dx = \frac{\sqrt{3}}{2} \theta \ne \theta.$$ Moreover, because of Jensen's inequality applied to the concave function $\sqrt{x}$ , we have $$\operatorname{E}[\tilde \theta] \le \sqrt{\operatorname{E}\left[\frac{3}{n} \sum_{i=1}^n X_i^2\right]} = \sqrt{3 \operatorname{E}[X^2]} = \theta$$ for all $n$ . Equality is not attained because $\sqrt{x}$ is strictly concave for all positive reals.
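A quick simulation of the $n=1$ case illustrates the bias (an editorial sketch; $\theta=2$ is an arbitrary choice):

```python
import math
import random

# Editorial simulation of the n = 1 case (theta = 2 is an arbitrary choice):
# the estimator sqrt(3 X^2) = sqrt(3) |X| has mean (sqrt(3)/2) * theta, not theta.
theta = 2.0
rng = random.Random(42)
n = 200_000
est_mean = sum(math.sqrt(3 * rng.uniform(-theta, theta) ** 2) for _ in range(n)) / n

assert abs(est_mean - math.sqrt(3) / 2 * theta) < 0.02  # biased: about 1.732
assert est_mean < theta                                 # below theta, as Jensen predicts
```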
|
|probability|statistics|parameter-estimation|
| 0
|
Elementary proof that $(1+1/x)^x$ is increasing for $x \in \mathbb{R}_{>0}$
|
All similar proofs I could find show that $(1+1/n)^n$ is increasing for positive integer values of $n$ only, or show that the derivative of $(1+1/x)^x$ is positive for all $x \in \mathbb{R}_{>0}$ . I'm looking for a proof that $x < y$ implies $\left( 1+ \frac{1}{x} \right)^x < \left(1+\frac{1}{y}\right)^y$ for all $x,y \in \mathbb{R}_{>0}$ that doesn't use the derivative of $a^x$ nor the derivative of $\log_a (x)$ My attempt so far: Let $x,y \in \mathbb{R}_{>0}$ such that $x < y$ . Let $$ \begin{align} J&= \frac {\left(1+\frac{1}{y}\right)^y}{\left( 1+ \frac{1}{x} \right)^x}\\ &= \frac {(\frac{y+1}{y})^x(1+\frac{1}{y})^{y-x}}{(\frac{x+1}{x} )^x}\\ &= {(\frac{xy+x}{xy+y})^x(1+\frac{1}{y})^{y-x}} \\ &= {\left(1 - \frac{y-x}{xy+y}\right)^x\left(1+\frac{1}{y}\right)^{y-x}} \end{align} $$ Showing that $\left(1 - \frac{y-x}{xy+y}\right)^x > \frac{1}{\left(1+\frac{1}{y}\right)^{y-x}}$ is what I'm missing
|
I managed to prove the part I'm missing if I add the further assumption that $x < y < 2x$ , so that $y - x < x$ and $\frac{x}{y - x} > 1$ . Let $K = \frac{y-x}{xy+y} = \frac{y-x}{(x+1)y}$ and $\alpha=y-x$ . Then $K = \frac{\alpha}{(x+1)(x+\alpha)} = \frac{\alpha}{x^2+\alpha x+ x + \alpha}$ and $\frac{1}{K} = \frac{x^2+\alpha x+ x + \alpha}{\alpha} = \frac{x^2+\alpha x+ x}{\alpha} + 1 >1$ , so $0 < K < 1$ . Since $-K > -1$ and $\frac{x}{y-x} > 1$ , we can use Bernoulli's inequality . $(1-\frac{y-x}{xy+y})^{\frac{x}{y-x}} > 1 - (\frac{y-x}{xy+y})(\frac{x}{y-x}) = 1 - \frac{x}{(x+1)y} \geq 1 - \frac{1}{y+1} = \frac{y}{y+1}$ (the last inequality holds because $x \leq y$ ). $(1-\frac{y-x}{xy+y})^x = \Big((1-\frac{y-x}{xy+y})^{\frac{x}{y-x}} \Big)^{y-x} > (\frac{y}{y+1})^{y-x} = \frac{1}{(1+\frac{1}{y})^{y-x}}$ Therefore, $\frac {(1+\frac{1}{y})^y}{( 1+ \frac{1}{x} )^x} = {(1 - \frac{y-x}{xy+y})^x(1+\frac{1}{y})^{y-x}} > 1$ and $(1+\frac{1}{y})^y > ( 1+ \frac{1}{x} )^x$
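A numerical spot check of the monotonicity claim (an editorial sketch), including consecutive sample points well outside the extra assumption $x < y < 2x$:

```python
import math

# Editorial spot check of strict monotonicity of g(x) = (1 + 1/x)^x,
# including consecutive points that do NOT satisfy the extra assumption y < 2x.
def g(x):
    return (1 + 1 / x) ** x

xs = [0.1, 0.5, 1.0, 1.5, 3.0, 10.0, 100.0, 1000.0]
assert all(g(a) < g(b) for a, b in zip(xs, xs[1:]))
assert g(1000.0) < math.e  # still below the limit e
```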
|
|calculus|algebra-precalculus|inequality|
| 0
|
Determining the location number of a graph $G$
|
Consider the following graph, $G$ . I want to determine the location number of $G$ . From the source I am using to learn graph theory, an ordered set $S=\{w_1, w_2, ..., w_k\}$ of vertices in a connected graph $G$ such that for every two vertices $u$ and $v$ of $G$ , there is some vertex $w_i \in S$ whose distance to $u$ is different from its distance to $v$ is a locating set . In addition, the minimum number of vertices in a locating set is the location number of $G$ . My approach has been to test various possibilities for $S$ and see whether or not $S$ covers the whole of $G$ . For example, if I select two vertices of $G$ at random, I can define a collection of ordered pairs $(x, y)$ for every other vertex in $G$ , consisting of the distances from a said vertex to both of the vertices in $S$ . If any number of ordered pairs are the same, then $S$ is not a locating set. Other than this, I am not sure of any good approaches for a problem of this sort, as this one has not been very successful.
|
You can solve this set covering problem via integer linear programming as follows. Let $N$ be the node set, let $P=\{(u,v) : u,v\in N,\ u < v\}$ be the set of node pairs, and let $d_{uv}$ be the shortest path distance from $u$ to $v$ . For $(u,v)\in P$ , let $N_{uv}=\{i\in N: d_{iu} \not= d_{iv}\}$ . For $i\in N$ , let binary decision variable $x_i$ indicate whether $i\in S$ . The problem is to minimize $$|S|=\sum_{i\in N} x_i$$ subject to linear constraints $$\sum_{i\in N_{uv}} x_i \ge 1 \quad \text{for all $(u,v)\in P$}.$$ These constraints force at least one $i\in S \cap N_{uv}$ for each node pair $(u,v)$ . For your sample graph, if you number the nodes from left to right and top to bottom so that the top left node is $2$ and the bottom left node is $4$ , we have $N_{2,4}=\{2,4,5,7,8,10,11,13,14,16\}$ , which is the set of nodes that do not appear in the middle row. The graph has $17$ nodes, so $|P|=\binom{17}{2}=136$ , and there are $136$ constraints. For example, the constraint for $(u,v)=(
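For small graphs the ILP answer can be cross-checked by brute force over vertex subsets (an editorial Python sketch; the $5$-cycle below is a stand-in example whose location number is known to be $2$, since your actual graph is not reproduced here):

```python
from itertools import combinations
from collections import deque

# Editorial brute-force cross-check for small graphs (the ILP is the
# scalable route).  Example graph: the 5-cycle, with location number 2.
def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def location_number(adj):
    nodes = sorted(adj)
    d = {u: bfs_dist(adj, u) for u in nodes}
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            codes = {tuple(d[w][v] for w in S) for v in nodes}
            if len(codes) == len(nodes):  # all distance vectors distinct
                return k

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
assert location_number(cycle5) == 2
```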
|
|graph-theory|discrete-optimization|
| 0
|
Asymptotic expansion of the integral $\int_0^s \sqrt{\frac{1}{E-V(x)}} - \sqrt{\frac{1}{E+x^2}}dx$ as $E\to0$ for $V(x)=-x^2+...$
|
How can one obtain an asymptotic expansion of an integral of the form $$I(E)=\int_0^s \sqrt{\frac{1}{E-V(x)}} - \sqrt{\frac{1}{E+x^2}}dx$$ as $E\to0^+$ given that $V(x)$ attains its maximum in $[0,s]$ at $x=0$ and that $V(x)$ is analytic around $x=0$ with a finite radius of convergence and that its Taylor expansion begins $V(x)=-x^2+...$ . It is clear that the first term is given by $$I(E)\sim\int_0^s \sqrt{\frac{1}{-V(x)}} - \sqrt{\frac{1}{x^2}}dx$$ I would like to be able to obtain an explicit expression for at least the next term.
|
Whenever we have a (sum or) difference of square roots, we should always try the trick $\sqrt a-\sqrt b = \frac{a-b}{\sqrt a+\sqrt b}$ and see if it magically helps. Here $$ \sqrt{\frac{1}{E-V(x)}} - \sqrt{\frac{1}{E+x^2}} = \frac{\frac{1}{E-V(x)} -\frac{1}{E+x^2}}{\sqrt{\frac{1}{E-V(x)}} + \sqrt{\frac{1}{E+x^2}}} = \frac{V(x)+x^2}{(E-V(x))(E+x^2)\bigl( \sqrt{\frac{1}{E-V(x)}} + \sqrt{\frac{1}{E+x^2}} \bigr)}. $$ As $E\to0^+$ , the denominator approaches $2x^3+\cdots$ and the numerator approaches $cx^3+\cdots$ for some $c\in\Bbb R$ , leaving a much more tractable integral to approximate. (How to proceed from here depends on how good an approximation one needs, how much information we have about $V(x)$ and $s$ , etc.)
|
|asymptotics|mathematical-physics|
| 0
|
Quotient space not the same as the original space
|
I am trying to understand why a quotient space is not the same as the original space. Let $V$ be a vector space and $W$ a subspace of $V$ . If I define $a+W:=\{x=a+w;\ w\in W\}$ then the quotient space $V/W$ is the set whose elements are the sets $a+W$ for each $a\in V$ . Now, my issue stems from this: Let $\{w_1,\ldots,w_k\}$ be a basis of $W$ . I can then extend it to a basis of $V$ : $\{w_1,\ldots,w_k,u_{k+1},\ldots,u_n\}$ , but then an element of $a+W$ can be written $\sum_{i=1}^k \alpha_i w_i +\sum_{i=k+1}^n \alpha_i u_i + \sum_{i=1}^k \beta_i w_i = \sum_{i=1}^k (\alpha_i+\beta_i)w_i + \sum_{i=k+1}^n\alpha_i u_i$ , which is just an expression for an arbitrary element of $V$ : since $a$ is an arbitrary element of $V$ and $V$ is a vector space, there will always be coefficients $\alpha_i$ ( $i=1,\ldots,n$ ) and $\beta_i$ ( $i=1,\ldots,k$ ) producing any required element of $V$ . Where is my mistake?
|
The mistake is in the way you interpreted vectors in $V/W$ . By definition, the quotient space $V/W$ is the set of all affine subsets of $V$ parallel to $W$ : $$ V/W=\{v+W: v\in V\}. $$ So the vectors in $V/W$ are affine subsets, which are not vectors in $V$ anymore. Let me quote an example from Axler's Linear Algebra Done Right : If $U$ is a line in $\mathbb{R}^3$ containing the origin, then $\mathbb{R}^3/U$ is the set of all lines in $\mathbb{R}^3$ parallel to $U$ . In this case, the vectors in $\mathbb{R}^3/U$ are lines, not points, in $\mathbb{R}^3$ .
|
|linear-algebra|vector-spaces|quotient-spaces|quotient-group|
| 0
|
Elementary combinatorics.
|
Suppose there are six shirts, each of a different colour, and six trousers in the same six colours. In how many ways can these be put on by five people such that no person wears a shirt and trousers of the same colour? I knew one part of the answer: $\binom{6}{5}\times 5!\times D_5$ . But actually we can see more: for example, for the shirt selection $s_1,\ldots,s_5$ the trouser selection could be $t_2,\ldots,t_6$ . I could not get further than this.
|
We can just make a straightforward inclusion-exclusion calculation: $$\sum_{k=0}^5 (-1)^k\cdot\binom{6}{k}\binom{6-k}{5-k}^2 \cdot (5-k)!=1854.$$ In this calculation $k$ is the number of people that we make to wear the same color shirt and trousers, $\binom{6}{k}$ is the number of ways to choose clothes for them, $\binom{6-k}{5-k}^2$ is the number of ways to choose clothes for the other $5-k$ people, $(5-k)!$ ways to get pairs shirt-trousers for those $5-k$ people. Edit: If you distinguish $5$ people that receive the clothes, then the answer is exactly $P_5=5!$ times greater: $$1854\cdot 5!=222480.$$ Edit 2: Another nice reasoning just came to my mind. Let us make a row of shirts and distribute the trousers between them. There are $D_6$ ways to do so that no color is matched. We then can take any $5$ pairs of clothes. This gives $6\cdot D_6$ ways. Now let us allow one matching. There are $6$ ways to choose which pair will match. And then there are $D_5$ ways to distribute the trousers.
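The inclusion-exclusion sum and the $\times 5!$ variant are easy to check mechanically; here is a short verification script of mine (not part of the original answer):

```python
from math import comb, factorial

# k = number of people forced to wear a matching shirt-trouser colour pair;
# comb(6, k) chooses their colours, comb(6-k, 5-k)^2 chooses the remaining
# shirts and trousers, factorial(5-k) pairs them up.
total = sum((-1) ** k * comb(6, k) * comb(6 - k, 5 - k) ** 2 * factorial(5 - k)
            for k in range(6))
print(total)                  # clothing selections: 1854
print(total * factorial(5))   # times 5! for distinguishable people: 222480
```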
|
|combinatorics|
| 0
|
How to add two arcs together if they overlap
|
I'm writing some code for various calculations with arcs and lines and need help with some math. I have a list of arcs and need to add them together if they "overlap"; here are the available variables: Radius; Center X Y Z; StartAngle; EndAngle. I can check if the arcs have the same center point and if their radius is the same, meaning we know that they are part of the same circle. By adding them together I mean I should get a new arc with new starting and ending angles that covers the given two arcs. This could be solved by checking and comparing a lot of conditions, but I'm looking for a more mathematical approach. A few examples, Arc{startAngle;endAngle}: A1{350;340} B1{90;270} -> C1{350;340} A2{225;20} B2{90;270} -> C2{90;20} A3{225;300} B3{90;270} -> C3{90;300} A4{340;45} B4{90;270} -> not overlapping Edit: Here is a simple illustration to help in understanding my problem, as I'm not the best at using math words. Illustration
|
Given two arcs A(a0, a1) and B(b0, b1), first make sure the degrees are between 0 and 360. The initial test should be to check if either of the arcs forms a full circle; then of course it's an early overlap. Otherwise, normalize both arcs (accumulating a common offset as you rotate them) so that the conditions a0 < a1 and b0 < b1 hold; if after normalizing it is still a0 > a1 or b0 > b1, then the arc had a full rotation / rolled over itself, thus it's an overlap and the result is (0, 360). If not, then check the overlap conditions: (a0 >= b0 && a0 <= b1), i.e. A starts inside B, or (b0 >= a0 && b0 <= a1), i.e. B starts inside A. If either holds, the arcs overlap and the merged arc is m0 = min(a0, b0) + offset, m1 = max(a1, b1) + offset. In some cases m0 can be equal to m1; then I basically turn them into (0, 360). The key point is: whenever you add or subtract degrees from arcs, always make sure to normalize them by taking a modulus. Checking if two arcs totally contain one another is a lot easier, btw.
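A compact sketch of the endpoint-containment idea in Python (my own illustration; `covers` and `merge_arcs` are hypothetical names, and full-circle inputs like `(0, 360)` would still need the early special case mentioned above):

```python
def covers(arc, ang):
    """True if angle ang (degrees) lies on arc = (start, end), swept CCW."""
    s, e = arc
    return (ang - s) % 360 <= (e - s) % 360

def merge_arcs(a, b):
    """Merge two arcs on the same circle; return the union arc or None.

    Arcs are (startAngle, endAngle) in degrees, swept counterclockwise.
    Returns (0, 360) when the union wraps the whole circle.
    """
    a = (a[0] % 360, a[1] % 360)
    b = (b[0] % 360, b[1] % 360)
    a0_in_b, a1_in_b = covers(b, a[0]), covers(b, a[1])
    b0_in_a, b1_in_a = covers(a, b[0]), covers(a, b[1])
    if not (a0_in_b or a1_in_b or b0_in_a or b1_in_a):
        return None                      # disjoint arcs
    if a0_in_b and a1_in_b and b0_in_a and b1_in_a:
        return (0, 360)                  # union covers the whole circle
    start = b[0] if a0_in_b else a[0]    # keep the endpoint not absorbed
    end = b[1] if a1_in_b else a[1]      # by the other arc
    return (start, end)
```

The four example pairs from the question reproduce C1, C2, C3 and the non-overlapping case.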
|
|geometry|trigonometry|circles|angle|
| 0
|
Name this property: product of functions = function of sum
|
Consider a function $f(x)$ with the property $f(a)f(b) = f(a+b)$ , where $a,b$ are numbers (or matrices, or operators). Is there a name for this property? For example, $f(x)=c^x$ for a fixed $c \in \mathbb{R}$ will do this. This comes up often enough in my physics research that I would really like to refer to it by either an existing name or give it a reasonable one.
|
From Wikipedia : This equation is sometimes referred to as Cauchy's additive functional equation to distinguish it from Cauchy's exponential functional equation $f(x+y)=f(x)f(y)$ .
|
|linear-algebra|abstract-algebra|functions|terminology|
| 0
|
Exercise involving Bayes' theorem
|
I want to solve the following exercise: John tells the truth two out of every three times he is asked, while Peter tells the truth four out of every five times. Both agree in assuring that from a box of marbles containing 6 marbles of different colors a red one was taken. Determine the probability that this statement is true. So, consider: A:John tells the truth B:Peter tells the truth C:A red marble comes out of the marble box. I think our objective is to determine $$P(E \mid A \cap B)=\frac{P(A \cap C \mid E) P(E)}{P(A \cap C)}$$ Thinking that $P(A)= \frac{2}{3}$ , $P(B)= \frac{4}{5}$ And using the total probability theorem I could determine $P(A \cap C)$ I really don't see clearly how to perform the exercise approach, it is a bit confusing. Any help?
|
The events you should be looking at are: $J$ : John says the marble was red $P$ : Peter says the marble was red $R$ : The marble was red Then the probability we are trying to calculate is $P(R | J \cap P)$ , i.e. the probability the marble is actually red given that John and Peter both say that it was red. From Bayes' theorem, you can express this as: $$P(R | J \cap P) = \frac{P(J \cap P | R) P(R)}{P(J \cap P)}$$ $P(R)$ should be fairly simple to calculate, while $P(J \cap P)$ will involve some decomposition into parts of $P(J \cap P \cap R)$ and $P(J \cap P \cap R^c)$ .
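One way to finish the computation, under assumptions the problem statement leaves implicit (all six colours equally likely a priori, the two witnesses answer independently, and a witness who lies names one of the five wrong colours uniformly at random), is the following sketch of mine:

```python
from fractions import Fraction as F

p_red = F(1, 6)                   # prior: six equally likely colours
john_truth, peter_truth = F(2, 3), F(4, 5)

# P(J and P | R): both tell the truth
both_given_red = john_truth * peter_truth
# P(J and P | not R): both lie AND both happen to name red (1 of 5 wrong colours)
both_given_not_red = ((1 - john_truth) * F(1, 5)) * ((1 - peter_truth) * F(1, 5))

posterior = (both_given_red * p_red) / (
    both_given_red * p_red + both_given_not_red * (1 - p_red))
print(posterior)  # 40/41
```

Under these modelling assumptions the probability comes out to $40/41$.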
|
|probability|bayes-theorem|
| 1
|
I found an interesting question but I keep getting stuck in a loop.
|
Find all values of A, B, and C such that: $$ \frac{x-1}{(x-1)(x-2)(x-2)} = \frac{A}{x+1} + \frac{B}{x+2} + \frac{C}{(x-2)^2} $$ I keep getting into a loop in which: $$ x - 1 = Ax^2 - 4Ax + 4A + Bx^2 - Bx - 2B + Cx + C $$ Which then creates a system which will create equal versions of existing equations in the system. $$ -1 = 4A - 3B + C $$ $$ -1 = -Ax - Bc + 4A + B - C $$ Are there any methods of approaching this?
|
Mistake in The Problem / Your Post First, there are mistakes in the decomposition. The proper equation should be: $$ \frac{x-1}{(x-1)(x-2)^{2}} = \frac{A}{x-1}+ \frac{B}{x-2}+ \frac{C}{(x-2)^{2}} $$ from which we have $$ \frac{x-1}{(x-1)(x-2)^{2}} = \frac{A(x-2)^{2}+B(x-1)(x-2)+C(x-1)}{(x-1)(x-2)^{2}} $$ Comparing The Numerators Because the denominators on the LHS and RHS are equal, the numerators must be equal as well: $$ x-1 = (A+B)x^{2}+(-4A-3B+C)x+(4A+2B-C) $$ which can be written in matrix-vector notations: $$ \begin{Bmatrix} x^{2} \\ x \\ 1 \end{Bmatrix}^{\top} \begin{Bmatrix} 0 \\ 1 \\ -1 \end{Bmatrix} = \begin{Bmatrix} x^{2} \\ x \\ 1 \end{Bmatrix}^{\top} \begin{bmatrix} 1 & 1 & 0 \\ -4 & -3 & 1 \\ 4 & 2 & -1 \end{bmatrix} \begin{Bmatrix} A \\ B \\ C \end{Bmatrix} $$ and the solution is $$ \begin{Bmatrix} A \\ B \\ C \end{Bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ -4 & -3 & 1 \\ 4 & 2 & -1 \end{bmatrix}^{-1} \begin{Bmatrix} 0 \\ 1 \\ -1 \end{Bmatrix} $$ i.e. $A=B=0$ and $C=1$
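The final matrix inversion can be checked with exact rational arithmetic; this sketch of mine solves the same $3\times 3$ system by Gauss-Jordan elimination over `fractions.Fraction`:

```python
from fractions import Fraction as F

# Solve M [A, B, C]^T = [0, 1, -1]^T exactly
M = [[F(1), F(1), F(0)],
     [F(-4), F(-3), F(1)],
     [F(4), F(2), F(-1)]]
rhs = [F(0), F(1), F(-1)]

for col in range(3):                       # Gauss-Jordan with pivot search
    piv = next(r for r in range(col, 3) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(3):
        if r != col and M[r][col] != 0:
            f = M[r][col] / M[col][col]
            M[r] = [m - f * p for m, p in zip(M[r], M[col])]
            rhs[r] -= f * rhs[col]

A, B, C = (rhs[i] / M[i][i] for i in range(3))
print(A, B, C)  # 0 0 1
```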
|
|linear-algebra|algebra-precalculus|partial-fractions|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angle triangle with hypotenuse $K$ and legs $M$ and $N$, the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ is always true. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N} = \frac{K(M+N)}{MN}$ , and since we have a triangle, the triangle inequality gives $M+N\ge K$ , which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$ , so we get: $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got: $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired bound, which is $2\sqrt 2$ , so any help or another approach is much appreciated.
|
Using the QM-HM inequality: $$ \sqrt{ \frac{\sin^{2}(x)+\cos^{2}(x)}{2} } \geq \frac{2}{ \frac{1}{\sin(x)}+ \frac{1}{\cos(x)} } $$ and using $\sin^{2}(x)+\cos^{2}(x)=1$ , we get what we want: $$ \frac{1}{\sin(x)}+ \frac{1}{\cos(x)} \geq2\sqrt{2} $$
|
|inequality|trigonometry|
| 0
|
Deriving the Fisher information matrix for a reparameterised gamma distribution
|
Let $X \sim \mathrm{Gamma}(\alpha, \theta),$ where $$f(x) = \frac {x^{\alpha - 1} e^{-\frac x \theta}} {\theta^{\alpha}\Gamma(\alpha)}.$$ The log-likelihood function can be shown to be $$l(\alpha, \theta) = -n\alpha\ln\theta - n\ln[\Gamma(\alpha)] + (\alpha - 1)\sum^n_{i = 1} \ln x_i - \theta^{-1}\sum^n_{i = 1} x_i.$$ Now, suppose we reparameterise $X$ by introducing $$\mu = \alpha\theta$$ and the log-likelihood function can be shown to be $$l(\alpha, \mu) = n\alpha\ln\alpha -n\alpha\ln\mu - n\ln[\Gamma(\alpha)] + (\alpha - 1)\sum^n_{i = 1} \ln x_i - \alpha\mu^{-1}\sum^n_{i = 1} x_i.$$ I am interested in deriving the Fisher information matrix for the reparameterised Gamma distribution, which was given to me as $$\begin{aligned} I(\alpha, \mu) & = - \begin{bmatrix} \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\alpha^2} \right) & \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\alpha\partial\mu} \right) \\ \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\mu\partial\alpha} \right) & \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2} \right) \end{bmatrix} \end{aligned}$$
|
I am answering my own question since I have figured it out and no one else was able to assist; it actually turned out to be rather simple! In particular, after deriving $\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2}$ , take expectations on both sides to get $$\begin{aligned} \mathbb{E}_{\alpha, \mu}\left(\frac {\partial^2l(\alpha, \mu)} {\partial\mu^2} \right) & = \mathbb{E}\left(n\alpha\mu^{-2} - 2\alpha\mu^{-3}\sum^n_{i = 1} x_i \right) \\ & = n\alpha\mu^{-2} - 2\alpha\mu^{-3}\mathbb{E}\left(\sum^n_{i = 1} x_i \right) \\ & = n\alpha\mu^{-2} - 2\alpha\mu^{-3}n\mu \\ & = -n\alpha\mu^{-2}, \end{aligned}$$ which is the desired solution.
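A quick finite-difference sanity check of $\partial^2 l/\partial\mu^2$ (the sample, $\alpha$, and $\mu$ below are arbitrary choices of mine, not from the original post):

```python
import math
import random

random.seed(0)
xs = [random.expovariate(1.0) + 0.1 for _ in range(50)]  # any positive data
n, Sx, Slog = len(xs), sum(xs), sum(math.log(x) for x in xs)
alpha, mu, h = 2.3, 1.7, 1e-3

def loglik(m):
    # l(alpha, mu) for the reparameterised Gamma model
    return (n * alpha * math.log(alpha) - n * alpha * math.log(m)
            - n * math.lgamma(alpha) + (alpha - 1) * Slog - alpha * Sx / m)

# central second difference vs. the closed form n*alpha/mu^2 - 2*alpha*Sx/mu^3
numeric = (loglik(mu + h) - 2 * loglik(mu) + loglik(mu - h)) / h ** 2
analytic = n * alpha / mu ** 2 - 2 * alpha * Sx / mu ** 3
print(numeric, analytic)
```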
|
|statistics|probability-distributions|gamma-distribution|log-likelihood|fisher-information|
| 1
|
Proving a trivial theorem involving a circle and a line
|
I want to prove that if the distance from the origin to the line $Ax + By + C = 0$ is less than the radius of a circle with center $(0, 0)$, then the line intersects the circle $x^2 + y^2 = r^2$ at two points. I want to prove it because just drawing a graph isn't enough. To tackle this problem, I used the formula for the distance between a point and a line. Assume the distance is $d$ and $d < r$; then $C^2 < r^2(A^2+B^2)$. Note that a line $y = mx + n$ intersects a circle with center $(0, 0)$ twice if the discriminant of $x^2 + (mx + n)^2 - r^2 = 0$ is greater than zero. From this I found that $n^2 < r^2(1+m^2)$, or, substituting $m=\frac{-A}B$ and $n=\frac{-C}B$, I got $C^2 < r^2(A^2+B^2)$. My question is how do I go about using these two instances of $C^2 < r^2(A^2+B^2)$ to prove this 'theorem'?
|
Solve the two equations $$ Ax + By + C = 0,~x^2 + y^2 = r^2$$ eliminating either $x$ or $y$. The resulting quadratic has two roots of the form $a\pm\sqrt{\Delta}$ (up to scaling) for two intersections. The quantity under the radical is the discriminant $\Delta$. If the discriminant satisfies $\Delta>0$, there are two real roots, i.e. two intersections between the circle and the straight line, so we can decide without a sketch or a graph; this is the case $d_{min} < r$. If $\Delta=0$, the two roots coincide at a tangent point: $d_{min}=r$. If $\Delta < 0$, the roots are complex and the circle and line stay apart without intersection: $d_{min}>r$.
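Both criteria are easy to code up and agree; a small Python sketch of mine (function names are my own):

```python
import math

def intersections(A, B, C, r):
    """Count intersections of Ax+By+C=0 with x^2+y^2=r^2 via the distance test."""
    d = abs(C) / math.hypot(A, B)      # distance from the origin to the line
    if d < r:
        return 2
    if d == r:
        return 1
    return 0

def intersections_discriminant(A, B, C, r):
    """Same count via eliminating y (requires B != 0): x^2 + ((-Ax-C)/B)^2 = r^2."""
    a = 1 + (A / B) ** 2
    b = 2 * A * C / B ** 2
    c = (C / B) ** 2 - r ** 2
    disc = b * b - 4 * a * c
    return 2 if disc > 0 else (1 if disc == 0 else 0)
```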
|
|analytic-geometry|circles|
| 0
|
How will the map be described?
|
When we collapse one of the circles in the bouquet of two circles to the basepoint, how can I describe this by a map (in terms of the free product)? And how is this related to the fact that $\mathbb{R}^2\backslash\{p,q\}$ is not homeomorphic to a bouquet of two circles, even though there is a deformation retraction of that space onto a subspace homeomorphic to the bouquet of two circles? Any help will be greatly appreciated!
|
If you write down the homomorphism induced by the collapsing map from the figure eight to the collapsed space (which is a circle), it is enough to specify the images of the two generators of the fundamental group of the figure eight, which is a free group on 2 generators. The homomorphism takes one of the generators to the trivial element (this corresponds to the collapsed circle) and the other one to the generator of $\pi_1(S^1)\cong\mathbb{Z}$ (this corresponds to the intact circle). For the 2nd question it is enough to point out that the wedge of 2 circles is compact whereas the plane without 2 points is not.
|
|general-topology|algebraic-topology|homotopy-theory|fundamental-groups|free-product|
| 1
|
How many ways to assign 7 different jobs to 5 different people assuming one person must have at least one job?
|
This is what I got so far. Case 1: one person gets 3 jobs and everybody else (the 4 people left) gets 1 job each. There are 5 ways to pick the person who is assigned 3 jobs: $\binom{7}{3} \cdot 4!\cdot 5 = 4200$ Case 2: two people get 2 jobs each and everybody else (the 3 people left) gets 1 job each. There are $ \binom{5}{2} $ ways to pick the 2 people who are assigned 2 jobs out of the 5 people: $\binom{7}{2} \cdot \binom{5}{2} \cdot 3!\cdot \binom{5}{2} = 12600$ So the total number of ways is $4200 + 12600 = 16800$. Is that correct?
|
You do this in two steps. First choose how many jobs each person gets, with the condition that each person gets at least one job; assign the jobs afterwards. For example: person A gets 2 jobs, B gets 2 jobs, and C, D, E get 1 job each. After you have made that choice, the number of assignments is $\binom{7}{2,2,1,1,1}$ . Now sum over all the cases from step (1).
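For 7 jobs and 5 people the state space is small enough to enumerate outright, which confirms the total of 16800:

```python
from itertools import product

# each of the 7 jobs goes independently to one of 5 people;
# keep only assignments where every person receives at least one job
count = sum(1 for a in product(range(5), repeat=7) if len(set(a)) == 5)
print(count)  # 16800
```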
|
|combinatorics|
| 0
|
Why can't I calculate $0\cdot\log(0)$, but can calculate $\log(0^0)$?
|
I got this doubt after some difficulty in programming. In a part of my code, I had to calculate: $$ x = 0 \cdot \log(0) \\ x = 0\cdot(-\infty) $$ and got $x = \mathrm{NaN}$ (in R and Matlab). So I changed my computation to $x = \log(0^0)$ and got $x=0$. I found this question/answer about $0\cdot\infty$ giving NaN. But why does the 'log way' have a result?
|
Almost every program considers $\frac{1}{0}$ as $+\infty$ and $-\frac{1}{0}$ as $-\infty$ according to the IEEE 754 floating-point standard. Thus $\pm\infty\cdot 0=\pm\frac{1}{0}\cdot 0=\pm\frac{0}{0}=\text{NaN}$ . Some programs also consider $\log(0)$ as $-\infty$ according to: $$ \lim_{x\to 0^+}\log(x)=-\infty $$ For these programs, $0\log(0)=0\cdot -\infty=\text{NaN}$ . For those programs which consider $\log(0)$ as $\text{NaN}$ , $0\log(0)=0\cdot\text{NaN}=\text{NaN}$ . Then, for $\log(0^0)$ , all programs will calculate $x=0^0$ first before calculating $\log(x)$ . So the last question is why $0^0=1$ , or why calling $\text{pow(0,0)}$ will return $1$ . Again, according to the IEEE 754 floating-point standard, the "pow" function is accompanied by the "pown" function and the "powr" function ( Wiki: Zero to the power of zero ): pown (whose exponent is an integer, discrete exponent) treats $0^0$ as $1$ . powr treats $0^0$ as $\text{NaN}$ due to the indeterminate form. So which result you see depends on which of these conventions your language's power function follows; implementations that follow pow/pown return $1$ , which is why $\log(0^0)=\log(1)=0$ succeeds.
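Python is one concrete example of the same split (it behaves analogously to R and Matlab for the product, though `math.log(0)` raises an exception instead of returning `-inf`):

```python
import math

# 0 * (-inf) is an IEEE 754 invalid operation, so the result is NaN
print(0.0 * float('-inf'))      # nan

# Python's math.log raises on 0 rather than returning -inf
try:
    math.log(0.0)
except ValueError as e:
    print('math.log(0.0):', e)

# the power operator follows the pown convention: 0**0 == 1,
# so log(0**0) == log(1) == 0
print(0.0 ** 0.0)               # 1.0
print(math.log(0.0 ** 0.0))     # 0.0
```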
|
|logarithms|infinity|
| 0
|
A doubt on a problem involving continuous function on a compact metric space
|
Let $X$ be a compact metric space with metric $d$ and let $f \in C (X, X)$ be such that $d (f(a ),f(b))\ge d (a, b)$ for all $a$ and $b$ in $X.$ Show that $d(f(a), f(b)) = d(a, b)$ for all $a$ and $b$ in $X.$ My attempt: $X$ is compact implies that $f(X)$ is also compact, since $f$ is continuous. Also, $f$ is uniformly continuous. On the contrary, let us assume that there exists a pair of points $(x,y) \in X^2$ such that $d (f(x ),f(y)) > d (x, y).$ Since $X$ is compact, the diameter of $X$ is finite, say $M,$ and there exist points $x_0,y_0 \in X$ such that $d(x_0,y_0)=M.$ If $(x_0,y_0)=(x,y),$ we are done, with the contradiction that $d (f(x_0 ),f(y_0)) > d (x_0, y_0) = M,$ since the diameter of $f(X) \subseteq X$ is at most $M$. Now we are left with the case where $d(x,y) < M$. I am stuck here. Please help. Thanks in advance.
|
I am skipping some of the details that are easy to fill. Also, I use sup instead of max but it does not matter for this proof. The set $f(X)$ is a compact subset of $X$. Hence, $\sup_{(x,y)\in X^2} d(f(x),f(y))\leq \sup_{(x,y)\in X^2} d(x,y)$ . Observe that $X^2$ is also compact and the left hand side is a supremum taken over a subset of $X^2$ , so the sup cannot dominate. However, from the property of the metric in the question it also follows that $\sup_{(x,y)\in X^2} d(f(x),f(y))\geq \sup_{(x,y)\in X^2} d(x,y)$ . So, $\sup_{(x,y)\in X^2} d(f(x),f(y))= \sup_{(x,y)\in X^2} d(x,y)$ must hold. In fact, for any closed subset $A$ of $X^2$ the same reasoning gives $\sup_{(x,y)\in A} d(f(x),f(y))= \sup_{(x,y)\in A} d(x,y)$ . If there is some $a,b \in X$ such that $d(f(a),f(b))> d(a,b)$ we simply define the closed set $A=\{(x,y):d(x,y)\leq d(a,b)\}$ and note that $\sup_{(x,y)\in A} d(x,y)=d(a,b) < d(f(a),f(b)) \leq \sup_{(x,y)\in A} d(f(x),f(y))$ . This contradiction completes the argument.
|
|continuity|metric-spaces|compactness|
| 1
|
Wreath product, semi-direct product, and partitions
|
Any help with the question which follows will be greatly appreciated. I'm working through Dixon and Mortimer's Permutation Groups and have a question regarding a particular semi-direct product related to wreath products. First I'll give a (slightly expanded) account of what Dixon and Mortimer say, and then my attempt to untangle their claims. Let $\Omega$ be a set and $\Sigma$ a partition of $\Omega$ into equally sized subsets. We define the automorphism group of $\Sigma$ to be $$ G:=\mathrm{Aut}(\Sigma) := \{x \in \mathrm{Sym}(\Omega) : \forall \Delta \subseteq \Omega,\ \Delta^x \in \Sigma \iff \Delta \in \Sigma\} $$ where the notation $\Delta^x = \{\alpha^x : \alpha \in \Delta\}$ (recalling that $\mathrm{Sym}(\Omega)$ acts on $\Omega$ ). Then $G$ has a natural action on $\Sigma$ given by $\Delta \mapsto \Delta^x$ for every $x \in G$ and every $\Delta \in \Sigma$ . Let $B$ denote the kernel of this action: $$ B = \{x \in G : \Delta^x = \Delta, \ \forall \Delta \in \Sigma\} $$ Begin wi
|
I just want to note: you said that the blocks of $\Sigma$ are all of the same size, and $B=\prod_{\Delta\in \Sigma}Sym(\Delta)$ ; then $$B\simeq Sym(|\Delta|)^{|\Sigma|}$$ for any $\Delta \in \Sigma$ . Hence $$G\simeq Sym(|\Delta|)^{|\Sigma|}\rtimes Sym(\Sigma)=Sym(\Delta)\wr Sym(\Sigma).$$
|
|group-theory|semidirect-product|wreath-product|
| 0
|
Elementary proof that $(1+1/x)^x$ is increasing for $x \in \mathbb{R}_{>0}$
|
All similar proofs I could find show that $(1+1/n)^n$ is increasing for positive integer values of $n$ only, or show that the derivative of $(1+1/x)^x$ is positive for all $x \in \mathbb{R}_{>0}$ . I'm looking for a proof that $x < y \implies \left(1+\frac1x\right)^x < \left(1+\frac1y\right)^y$ for all $x,y \in \mathbb{R}_{>0}$ that doesn't use the derivative of $a^x$ nor the derivative of $\log_a (x)$ My attempt so far: Let $x,y \in \mathbb{R}_{>0}$ such that $x < y$ . Let $$ \begin{align} J&= \frac {\left(1+\frac{1}{y}\right)^y}{\left( 1+ \frac{1}{x} \right)^x}\\ &= \frac {(\frac{y+1}{y})^x(1+\frac{1}{y})^{y-x}}{(\frac{x+1}{x} )^x}\\ &= {(\frac{xy+x}{xy+y})^x(1+\frac{1}{y})^{y-x}} \\ &= {\left(1 - \frac{y-x}{xy+y}\right)^x\left(1+\frac{1}{y}\right)^{y-x}} \end{align} $$ Showing that $\left(1 - \frac{y-x}{xy+y}\right)^x > \frac{1}{\left(1+\frac{1}{y}\right)^{y-x}}$ is what I'm missing
|
Well, since you know Bernoulli's inequality, then for $0 < x < y$ , note: $$\left(1+\frac1y \right)^{\frac{y}x}>1+\frac{y}x\cdot\frac1y=1+\frac1x$$ Raising both sides to the $x$-th power now gives $\left(1+\frac1y\right)^y>\left(1+\frac1x\right)^x$ , as desired.
|
|calculus|algebra-precalculus|inequality|
| 1
|
How many ways to assign 7 different jobs to 5 different people assuming one person must have at least one job?
|
This is what I got so far. Case 1: one person gets 3 jobs and everybody else (the 4 people left) gets 1 job each. There are 5 ways to pick the person who is assigned 3 jobs: $\binom{7}{3} \cdot 4!\cdot 5 = 4200$ Case 2: two people get 2 jobs each and everybody else (the 3 people left) gets 1 job each. There are $ \binom{5}{2} $ ways to pick the 2 people who are assigned 2 jobs out of the 5 people: $\binom{7}{2} \cdot \binom{5}{2} \cdot 3!\cdot \binom{5}{2} = 12600$ So the total number of ways is $4200 + 12600 = 16800$. Is that correct?
|
Suppose the people are $A,B,C,D,E$ The jobs can be broken up as either $\;2-2-1-1-1\;$ or $\;3-1-1-1-1$ But that will only account for distributing them in order to $ABCDE$ , so we need to permute the assignments between people, too, thus $$\binom{7}{2,2,1,1,1}\cdot\frac{5!}{2!3!} + \binom{7}{3,1,1,1,1}\cdot \frac{5!}{1!4!}$$ which can be simplified in permutation form ignoring $1!s,0!s$ to $$\frac{7!}{2!2!}\frac{5!}{2!3!} + \frac{7!}{3!}\frac{5!}{4!} = 16800$$
|
|combinatorics|
| 1
|
Showing the double commutant of the image of GNS representation is a factor provided uniqueness of a tracial state.
|
Let $A$ be a $C^*$ -algebra with an identity. Suppose $A$ has a unique tracial state $\varphi:A \rightarrow \mathbb{C}$ , i.e. $\varphi(ab) = \varphi(ba)$ , $\varphi(x^*x) \geq 0$ , and $\varphi(1) = 1$ . I am trying to show that the double commutant of the image of the GNS representation for $\varphi$ is a factor, i.e. $Z((\pi_\varphi(A))'') = \mathbb{C}I$ . Suppose that $(\pi_\varphi(A))''$ is not a factor. Then there exists a nontrivial projection in $Z((\pi_\varphi(A))'')$ , since $\mathbb{C}I$ is proper in $Z((\pi_\varphi(A))'')$ and the projections in $Z(M)$ generate $Z(M)$ (their double commutant equals $Z(M)$ ). I am having trouble constructing a tracial state to contradict the uniqueness. I know that if I find a cyclic vector $\xi$ (maybe the canonical one, i.e. $1 + N_\varphi$ ), then I can construct at least a state $$\phi(a) = \langle \pi_\varphi(a)\xi, \xi \rangle$$ and then maybe I can just show it is tracial.
|
Let $P\in Z(\pi_\varphi(A)'')$ be a non-trivial projection. Write $\tilde\varphi(X)=\langle X\xi_\varphi,\xi_\varphi\rangle$ the normal tracial state induced by $\varphi$ . We have $\tilde\varphi(P)>0$ . Choose $s\in(0,1)$ with $s\ne \tilde\varphi(P) $ and define a state $\psi$ on $A$ by $$ \psi(a)=\frac s {\tilde\varphi(P)}\,\tilde\varphi(P \pi_\varphi(a)) +\frac {(1-s)} {\tilde\varphi(I-P)}\,\tilde\varphi((I -P) \pi_\varphi(a)). $$ Because $\tilde\varphi$ is a trace and $P$ is central, $\psi$ is a trace. Suppose that $\psi=\varphi$ . Let $\{a_n\}$ be a sequence such that $\pi_\varphi(a_n)\to P$ . Then $$ \tilde\varphi(P)=\lim_n\varphi(a_n)=\lim_n\psi(a_n)=s, $$ a contradiction. So $\psi$ would be a different tracial state on $A$ . The uniqueness thus implies that $Z(\pi_\varphi(A)'')$ is trivial.
|
|functional-analysis|analysis|operator-algebras|von-neumann-algebras|
| 1
|
The projective tensor norm on tensor product of Banach spaces implies the inner product on tensor product of Hilbert spaces?
|
As presented in the answer of this post , the projective tensor norm on the algebraic tensor product of two Banach spaces $X$ and $Y$ is given by \[ \Vert \omega\Vert_{\pi} = \inf\left\{\sum \lVert x_{i}\rVert_X \,\lVert y_{i}\rVert_ Y\,:\,\omega = \sum_{i=1}^{n} x_{i} \otimes y_{i}\right\} \] with respect to which we complete the algebraic tensor product to obtain the tensor product Banach space $X \otimes Y$ . For Hilbert spaces $H_1$ and $H_2$ , as shown in this link , the inner product on the algebraic tensor product is given by linear extension of the formula \begin{equation} \langle \phi_1 \otimes \phi_2, \psi_1 \otimes \psi_2 \rangle = \langle \phi_1, \psi_1 \rangle_{H_1} \langle \phi_2, \psi_2 \rangle_{H_2} \end{equation} for $\phi_1, \psi_1 \in H_1$ and $\phi_2, \psi_2 \in H_2$ . Next, the tensor product Hilbert space $H_1 \otimes H_2$ is defined to be the completion under this inner product. Now, my question is: does the projective tensor norm on general Banach spaces, as defined above, coincide with the norm induced by this inner product when the Banach spaces are the Hilbert spaces $H_1$ and $H_2$ ?
|
The Hilbert tensor product is in general not equal to the projective tensor product: If $H$ is a Hilbert space and $H^*$ its dual space, then $H \hat \otimes_\pi H^*$ (the projective tensor product) is (isometrically isomorphic to) the trace class (nuclear) operators with the trace norm $H \hat \otimes_\epsilon H^*$ (the injective tensor product) is (isometrically isomorphic to) the compact operators with the operator norm $H \hat \otimes_h H^*$ (the Hilbert tensor product, which is a Hilbert space again) is (isometrically isomorphic to) the Hilbert-Schmidt operators with the Hilbert-Schmidt norm Since the spaces of Hilbert-Schmidt, compact and trace class operators are, in infinite dimensions, never the same it follows that the tensor norms can not be the same either. To show that they are never the same in infinite dimensions: Let $(e_n)_{n \in \mathbb{N}}$ be an orthonormal system in $H$ . Let $(x_n)_{n \in \mathbb{N}}$ be any sequence of complex numbers that converges to 0 but whose absolute values are not summable: for $x_n = 1/n$ the diagonal operator $\sum_n x_n \langle\,\cdot\,, e_n\rangle e_n$ is Hilbert-Schmidt (hence compact) but not trace class, and for $x_n = 1/\sqrt{n}$ it is compact but not Hilbert-Schmidt.
|
|functional-analysis|hilbert-spaces|banach-spaces|inner-products|tensor-products|
| 1
|
I found an interesting question but I keep getting stuck in a loop.
|
Find all values of A, B, and C such that: $$ \frac{x-1}{(x-1)(x-2)(x-2)} = \frac{A}{x+1} + \frac{B}{x+2} + \frac{C}{(x-2)^2} $$ I keep getting into a loop in which: $$ x - 1 = Ax^2 - 4Ax + 4A + Bx^2 - Bx - 2B + Cx + C $$ Which then creates a system which will create equal versions of existing equations in the system. $$ -1 = 4A - 3B + C $$ $$ -1 = -Ax - Bc + 4A + B - C $$ Are there any methods of approaching this?
|
Another point of view, like below: $$* \ \ \frac{x-1}{(x+1)(x-2)(x-2)} = \frac{A}{x+1} + \frac{B}{x-2} + \frac{C}{(x-2)^2}$$ Multiply $*$ by $(x+1)$, then put $x=-1$: $$(x+1) \times \left(\frac{x-1}{(x+1)(x-2)(x-2)}\right) = \frac{A}{1} + (x+1)\left(\frac{B}{x-2} + \frac{C}{(x-2)^2}\right) \\ \frac{x-1}{(x-2)(x-2)} = A + (x+1)\left(\frac{B}{x-2} + \frac{C}{(x-2)^2}\right) \\x=-1 \to \frac {-1-1}{(-1-2)^2}=A+0 \to A=\frac {-2}{9}$$ Then multiply $*$ by $(x-2)^2$ to find $C$: $$(x-2)^2\left(\frac{x-1}{(x+1)(x-2)(x-2)}\right) = (x-2)^2\left(\frac{A}{x+1} + \frac{B}{x-2}\right)+C \\\frac {x-1}{x+1}=(x-2)^2\left(\frac{A}{x+1} + \frac{B}{x-2}\right)+C \\x=2 \to \frac {2-1}{2+1}=0+C \to C=\frac {1}{3}$$ Now to find $B$ you can put in any $x\neq -1,2$, for example $x=0$: $$\frac{x-1}{(x+1)(x-2)(x-2)} = \frac{\frac {-2}{9}}{x+1} + \frac{B}{x-2} + \frac{\frac {1}{3}}{(x-2)^2} \\x=0 \to \frac {0-1}{(0+1)(0-2)^2}=\frac{\frac {-2}{9}}{0+1} + \frac{B}{0-2} + \frac{\frac {1}{3}}{(0-2)^2}$$
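Carrying out the last substitution gives $B=\frac{2}{9}$, and the three coefficients can then be spot-checked exactly (a verification sketch of mine, assuming the intended denominator is $(x+1)(x-2)^2$):

```python
from fractions import Fraction as F

A, B, C = F(-2, 9), F(2, 9), F(1, 3)   # B follows from the x = 0 substitution

def lhs(x):
    x = F(x)
    return (x - 1) / ((x + 1) * (x - 2) ** 2)

def rhs(x):
    x = F(x)
    return A / (x + 1) + B / (x - 2) + C / (x - 2) ** 2

# check the identity at several points away from the poles x = -1 and x = 2
ok = all(lhs(x) == rhs(x) for x in (0, 1, 3, 5, -3))
print(ok)  # True
```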
|
|linear-algebra|algebra-precalculus|partial-fractions|
| 0
|
Prove (by calculating the integral) that $\int_C \vec{F}\cdot d\vec{r} =0$ for $\vec{F}=\langle x,y,z\rangle$
|
In the following $C$ is a curve on a sphere of radius $a$ of center $(0,0,0)$ . I get the geometric interpretation that since $\vec{F}=\langle x,y,z \rangle$ is perpendicular to the tangent: $$\int_C \vec{F}\cdot d\vec{r}= \int_C \vec{F} \cdot \hat{T} ds=0$$ However, I couldn't prove it by doing the integral. $$\int_C xdx +ydy +zdz$$ I tried parametrizing the sphere into polar coordinates but then I'll have more than 1 variable and won't be able to convert $dx, dy, dz$ into one $ A(t)dt$
|
Too long for a comment. Use that $\vec{F}$ is curl free and therefore a gradient of a function. The line integral depends only on the end points of the curve so you can choose a straight line instead. $$\vec{F}=\nabla f(x,y,z)=\nabla\frac{x^2+y^2+z^2}2$$ $$\vec{r}(t)=\pmatrix{x_0+t(x_1-x_0)\\y_0+t(y_1-y_0)\\z_0+t(z_1-z_0)}$$ $$\frac{d}{dt}\vec{r}(t)=\pmatrix{x_1-x_0\\y_1-y_0\\z_1-z_0}$$ gives $$\int_C\vec{F}\cdot d\vec{r}=f(x_1,y_1,z_1)-f(x_0,y_0,z_0)\,.$$ Since $f$ is constant on the sphere of any radius this is zero.
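A numerical check of my own: for $\vec F=\vec r$ the midpoint-rule sum telescopes (each term equals $\frac12(|\vec r_{i+1}|^2-|\vec r_i|^2)$, mirroring the gradient argument), so the discrete line integral over any closed curve on the sphere vanishes to rounding error. The test curve below is an arbitrary non-planar choice:

```python
import math

def r(t):
    # latitude wobbles while longitude sweeps a full turn (unit sphere)
    theta = math.pi / 3 + 0.2 * math.sin(3 * t)   # polar angle
    return (math.sin(theta) * math.cos(t),
            math.sin(theta) * math.sin(t),
            math.cos(theta))

N = 2000
integral = 0.0
for i in range(N):  # midpoint-rule approximation of the line integral of F = r
    t0, t1 = 2 * math.pi * i / N, 2 * math.pi * (i + 1) / N
    p0, p1 = r(t0), r(t1)
    mid = [(u + v) / 2 for u, v in zip(p0, p1)]   # F evaluated at the midpoint
    dr = [v - u for u, v in zip(p0, p1)]
    integral += sum(f * d for f, d in zip(mid, dr))
print(abs(integral))  # ~ 0 up to floating-point rounding
```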
|
|multivariable-calculus|
| 0
|
An intuitive explanation of the Poisson equation?
|
The Poisson equation is $\nabla^2 u = \Delta u = f$ . When $f = 0$ we obtain Laplace's equation, which to me has an intuitive interpretation via the mean-value property. However, how can we form an intuitive understanding of the Poisson equation from the function $f$ ? How does $f$ impact $u$ ?
|
Here is my understanding from a pure math point of view, though I am not sure if this is the kind of ''intuition'' you want. In analysis and PDE, $\Delta u=f$ says that $u$ ''gains 2 derivatives over'' $f$. Assuming $u$ and $f$ are both defined on $\mathbb R^n$ and have at most polynomial growth, if we take the Fourier transform on both sides then we have $-4\pi^2|\xi|^2\hat u(\xi)=\hat f(\xi)$. Basically this says that if $\hat f(\xi)=O(|\xi|^{-m})$ then $\hat u(\xi)=O(|\xi|^{-m-2})$ decays two degrees faster than $\hat f$ as $\xi\to\infty$. Decay of the Fourier transform corresponds to smoothness of the original function, so transforming back we see that $u$ is smoother than $f$ by order 2. To get a better idea, we can consider $L^2$ Sobolev spaces. By the so-called Plancherel identity, $f\in W^{k,2}(\mathbb R^n)$ if and only if $\int (1+|\xi|^2)^{k}|\hat f(\xi)|^2\,d\xi<\infty$. If we ignore the "+1" in the weight, i.e. the contribution of the low frequencies, then the relation $-4\pi^2|\xi|^2\hat u(\xi)=\hat f(\xi)$ says exactly that $u\in W^{k+2,2}$ whenever $f\in W^{k,2}$: the source $f$ feeds into $u$ with two extra orders of smoothness.
|
|partial-differential-equations|
| 0
|
Short question: Fourier Coefficients
|
Let $f$ be a function that is $2\pi$ -periodic satisfying $f(x) = x^2$ for $-\pi < x < \pi$ and $f(\pi) = 100$ , i.e. it repeats $x^2$ with the value $100$ at the endpoints. This function is continuous on $(-\pi, \pi)$ , but can we compute its Fourier coefficients? I.e., is it integrable on $[-\pi, \pi]$ ?
|
I hope you find this helpful. Yes, the function $f(x)$ you've defined is integrable on $[-\pi, \pi]$. The function $x^2$ is a polynomial, which is smooth and hence integrable on its domain, and changing the value at the single point $x=\pi$ (a set of measure zero) affects neither integrability nor the value of any integral. The Fourier coefficients of a function $f(x)$ are given by: $$ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx, \quad n \geq 0 $$ $$ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx, \quad n > 0 $$ For the function at hand these can be computed directly from the formulas; they are exactly the coefficients of $x^2$, regardless of the value assigned at $x = \pi$. As for convergence at $x = \pi$: the Fourier series converges there to the average of the one-sided limits of the periodic extension, which is $\frac12\left(\lim_{x \to \pi^-} x^2 + \lim_{x \to -\pi^+} x^2\right) = \pi^2$, not to $f(\pi) = 100$. In conclusion, the function is integrable, its Fourier coefficients are those of $x^2$, and the series recovers $x^2$ everywhere on $(-\pi,\pi)$, converging to $\pi^2$ at the redefined endpoint.
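The coefficients can be confirmed numerically; below is a midpoint-rule sketch of mine checking them against the standard closed forms for $x^2$, namely $a_0 = 2\pi^2/3$ and $a_n = 4(-1)^n/n^2$ for $n\ge 1$:

```python
import math

def a(n, m=100000):
    """Midpoint rule for (1/pi) * integral_{-pi}^{pi} x^2 cos(nx) dx."""
    h = 2 * math.pi / m
    s = 0.0
    for k in range(m):
        x = -math.pi + (k + 0.5) * h
        s += x * x * math.cos(n * x)
    return s * h / math.pi

print(a(0), 2 * math.pi ** 2 / 3)   # both approximately 6.5797
print(a(1), -4.0)
print(a(2), 1.0)
```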
|
|functions|fourier-series|
| 0
|
Short question about continuity of derivative
|
If $f_x:=\partial f/\partial x$ and $f_{xy}:=\partial f_x/\partial y$ exist, and $f_{xy}$ is continuous, does this imply that $f_x$ is also continuous? I'm not sure if existence and continuity of one of the second partial derivatives imply the continuity first order partial derivative. Thanks.
|
The existence and continuity of the second partial derivative $f_{xy}$ does not imply the continuity of the first-order partial derivative $f_x$ as a function of $(x,y)$. What we do get is partial regularity: since $f_{xy}=\partial f_x/\partial y$ exists, $f_x$ is differentiable, hence continuous, in the $y$-direction. But nothing forces $f_x$ to be continuous in $x$. For a counterexample, take $f(x,y)=x^2\sin(1/x)$ for $x\neq 0$ and $f(0,y)=0$: here $f_{xy}\equiv 0$ is continuous, while $f_x(x,y)=2x\sin(1/x)-\cos(1/x)$ is discontinuous along $x=0$. However, if all second-order partial derivatives exist and are continuous in a neighborhood of the point in question (i.e., the function $f$ is of class $C^2$), then by Clairaut's theorem the mixed partial derivatives are equal, $f_{xy} = f_{yx}$, and the first-order partial derivatives $f_x$ and $f_y$ are also continuous. So, in summary, the continuity of $f_{xy}$ alone does not guarantee the continuity of $f_x$, but if all second-order partial derivatives are continuous, then $f_x$ is also continuous.
|
|calculus|multivariable-calculus|derivatives|
| 0
|
$A+\vec x={\{\vec a+\vec x:\vec a\in A\}}$ is an open set in $\Bbb R^n$.
|
Let $A\subseteq \Bbb R^n$ be an open set and let $\vec x\in \Bbb R^n$ be a fixed vector. Show that the set $A+\vec x={\{\vec a+\vec x:\vec a\in A\}}$ is an open set in $\Bbb R^n$ . Proof attempt: An element of the set $A+\vec x$ is of the form $\vec a+\vec x$ for some $\vec a\in A$ , and because $A$ is an open set: $\implies \forall \vec a\in A, \exists\epsilon\gt 0:B(\vec a,\epsilon)\subseteq A$ , i.e. every $\vec{\alpha}$ with $\lVert\vec{\alpha}-\vec a\rVert\lt\epsilon$ lies in $A$ . Then I want to show $B(\vec a+\vec x,\epsilon)\subseteq A+\vec x$ , i.e. that every $\vec{\alpha}$ with $\lVert\vec{\alpha} - (\vec a+\vec x)\rVert = \lVert \vec{\alpha} - \vec a-\vec x\rVert\lt\epsilon$ lies in $A+\vec x$ . I don't know what follows. Can you help me? I'll appreciate any feedback. Thanks
|
Let $y\in A+x$. $\implies \exists a\in A$ with $y=a+x$. Since $A$ is open, $\exists \epsilon>0$ s.t. $B(a, \epsilon) \subset A$. Now claim $B(y, \epsilon)\subset A+x$. Let $t\in B(y, \epsilon)$. $\implies |y-t|<\epsilon$ $\implies |a+x-t|<\epsilon$, or $|a-(t-x)|<\epsilon$. $\implies t-x\in B(a, \epsilon)\subset A$. $\implies t\in A+x$. $\implies B(y, \epsilon)\subset A+x$.
|
|general-topology|solution-verification|
| 1
|
Effect on Eigenvalues by exchanging columns
|
I have a short question. Can a general statement be made about what happens to the eigenvalues of a matrix if I exchange two of its columns?
|
When you exchange two columns of a matrix, the determinant is multiplied by $-1$, but the eigenvalues do, in general, change. Swapping columns $j$ and $k$ replaces $A$ by $AP$, where $P$ is the permutation matrix exchanging $j$ and $k$. This is not a similarity transformation, so the characteristic polynomial $\det(AP-\lambda I)$ has no simple relation to $\det(A-\lambda I)$. For example, swapping the two columns of $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ (eigenvalues $1,-1$) yields the identity matrix (eigenvalues $1,1$). What does preserve the eigenvalues is swapping the same pair of rows and columns simultaneously: that gives $PAP=PAP^{-1}$, a similarity transformation, so the spectrum is unchanged, and the eigenvectors are simply permuted accordingly.
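A small numerical check (my addition, a sketch using numpy): a pure column swap is right-multiplication by a permutation matrix, $A\mapsto AP$, and can change the eigenvalues, while swapping the same pair of rows and columns gives the similarity $PAP^{-1}$, which preserves them.

```python
import numpy as np

# Swapping two columns of A is A @ P for a permutation matrix P.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # exchanges the two columns (or the two rows)

swapped = A @ P              # column swap: here it yields the identity matrix
similar = P @ A @ P          # same row AND column swap: a similarity P A P^{-1}

print(sorted(np.linalg.eigvals(A).real))        # eigenvalues of A: -1 and 1
print(sorted(np.linalg.eigvals(swapped).real))  # after column swap: 1 and 1 (changed)
print(sorted(np.linalg.eigvals(similar).real))  # after similarity: -1 and 1 (preserved)
```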
|
|linear-algebra|
| 0
|
Question about short exact sequence of differential complexes
|
$0\to A\to B\to C\to 0$ is a short exact sequence of differential complexes. I wonder why, for any element $c$ belonging to $C$, it is true that $dc=0$.
|
It is not true that $dc=0$ for every element $c\in C$: in a general differential complex the differential $d:C^k\to C^{k+1}$ need not vanish. Be careful to distinguish two kinds of maps here: the maps in the short exact sequence $0\to A\to B\to C\to 0$ are chain maps between the complexes, not the differentials of the complexes themselves, so exactness at $C$ says nothing about $dc$. What is true, and what texts such as Bott and Tu use when constructing the connecting homomorphism of the long exact sequence, is that one starts from a cohomology class $[c]\in H(C)$ and chooses a representative cocycle $c$; by the definition of a cocycle, $dc=0$. So $dc=0$ holds because $c$ is chosen to be a cocycle, not because it holds for arbitrary elements of $C$.
|
|de-rham-cohomology|
| 0
|
Zero-dimensional space with multiple objects
|
I am unsure if this belongs to math or philosophy. Let's say there's a 0-dimensional space, yet multiple objects exist within it, occupying the same "spot". If multiple objects exist, is the space still zero-dimensional? The problem here is that multiple objects existing at the same spot seems to introduce dimensionality. If objects exist, they are somehow distinct, and can be ordered around. At the same time, as far as I know, there is no axiom saying that only one point can exist at given coordinates. And in 3d space, each point of space can have a phenomenon associated with it, which may require a higher-dimensional construct to represent it. For example, the electromagnetic spectrum at a point could be expressed as a 1-dimensional line. This implies that a single point can have a set of properties associated with it, which can be expressed in a dimensional way, but that does not affect the dimensionality of the space itself, as they do not form a spatial dimension. Which line of reasoning is correct?
|
It depends on what "zero-dimensional" means. In the vector-space sense, the only zero-dimensional space is the trivial space $\{0\}$ containing a single element, so there is no room for multiple distinct objects. In the topological sense, however, zero-dimensional spaces can contain many points (any discrete space, or the Cantor set, is zero-dimensional), so the mere distinctness of objects does not introduce a dimension. Likewise, attaching extra data to each point (such as a spectrum) defines a function on the space, or a fiber over it; that enlarges the space where the data lives, not the dimension of the underlying space itself.
|
|philosophy|dimension-theory-analysis|
| 1
|
How to prove these rules produce a Gray code
|
There is a neat N-ary puzzle that has a solution that follows a 5-ary Gray code. https://www.schwandtner.info/publications/Kugellager.pdf It's not the typical reflected Gray code but still seems to go through all the numbers 00..0 to 44..4, one transition at a time. The rules are as follows (taken from the above pdf but starting at 0 index rather than 1 index): transition 0 -- 1: lower numbers must be 4, higher numbers must be 0, 1, or 4; transition 1 -- 2: lower numbers must be 0, higher numbers must be 0, 1, or 2; transition 2 -- 3: lower numbers must be 0, 1, or 4, higher numbers may be anything; transition 3 -- 4: lower numbers must be 0, 1, or 2, higher numbers may be anything. I would like to prove that these rules do in fact produce a Gray code for any length n, but am having trouble since a transition depends on the lower and upper numbers. In the above paper, they confirm this is the case for n = 1,2,3,4. One thing I've tried is to show that every number besides 00..0 and 44..4 has exactly two neighbors, but that's not enough to prove they all lie on the same line (there could be a line and multiple cycles, and they'd still have the property of only 2 neighbors).
|
I'll leave my other answer up, but I think this explanation is clearer. First of all, I was able to write the recursion, and posted it here: https://colab.research.google.com/gist/jtb/151cef71fd7104bb840d1fa1f54971a0/kugellagerrecursion.ipynb The crux of the recursion is [ $2^m0^n$ to $4^{m+n}$ ] = [ $2^m*0*0^{n-1}$ to $4^m*0*4^{n-1}$ ] + [ $4^m*1*4^{n-1}$ to $2^m*1*0^{n-1}$ ] + [ $2^{m+1}*0^{n-1}$ to $4^{m+n}$ ] The leading 0 in the first term and the leading 1 in the second term never change within those ranges, so that is why we can ignore it and look at the smaller sequence. The third term has one fewer 0s than the original range, so we have the recursion $F(m,n) = F(m, n-1) + R(m, n-1) + F(m+1, n-1)$ where $F(m,n)$ = count of [ $2^m0^n$ to $4^{m+n}$ ] and $R(m,n)$ = count of [ $4^{m+n}$ to $2^m0^n$ ] Note that $F(m,n) = R(m,n)$ because R is just counting the same sequence as F but in reverse so has the same count. The recursion becomes $F(m,n) = 2*F(m, n-1) + F(m+1, n-1)$ It is ea
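As a complement (my addition): the Gray property itself is easy to brute-force for small $n$. The checker below verifies that a sequence of base-$b$ digit tuples visits every tuple exactly once, with consecutive tuples differing in exactly one digit by exactly $1$. It is demonstrated on the standard reflected 5-ary code, not on the Kugellager rules, whose transition table would need separate encoding.

```python
def reflected_gray(base, length):
    """Standard reflected base-`base` Gray code, as a list of digit tuples."""
    if length == 0:
        return [()]
    prev = reflected_gray(base, length - 1)
    out = []
    for d in range(base):
        block = prev if d % 2 == 0 else prev[::-1]  # reflect every other block
        out.extend((d,) + w for w in block)
    return out

def is_gray(seq, base, length):
    """True iff seq visits all base**length tuples, each step changing
    exactly one digit by exactly 1."""
    if len(seq) != base ** length or len(set(seq)) != len(seq):
        return False
    for a, b in zip(seq, seq[1:]):
        diffs = [(x, y) for x, y in zip(a, b) if x != y]
        if len(diffs) != 1 or abs(diffs[0][0] - diffs[0][1]) != 1:
            return False
    return True

print(is_gray(reflected_gray(5, 3), 5, 3))  # True
```

The same `is_gray` check could be run on a sequence generated by the Kugellager transition rules to confirm the n = 1..4 cases from the paper.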
|
|puzzle|gray-code|
| 0
|
How to simplify complex logs
|
I have $f(x) = (x+\sqrt{x})\log_2 x$ and $g(x) = x\log_2(x+\sqrt{x})$. How would I go about simplifying them and obtaining the limit at infinity of $f(x)/g(x)$? So far, the best I have gotten for $f(x)$ is $x\log_2 x\left(1+\frac{1}{\sqrt{x}}\right)$, and nothing for $g(x)$. Any help is appreciated!
|
$$\frac{f(x)}{g(x)} =\frac{(x+\sqrt{x})\log_2(x)}{x\log_2({x+\sqrt{x})}}=\frac{\ln\left({x^{x+\sqrt x}}\right)}{\ln\left({(x+\sqrt x)^{x}}\right)}$$
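A numerical spot check of the limit (my addition): since $x+\sqrt x\sim x$ and $\log_2(x+\sqrt x)\sim\log_2 x$ as $x\to\infty$, the ratio $f(x)/g(x)$ tends to $1$.

```python
import math

def ratio(x):
    """f(x)/g(x) with f = (x + sqrt(x)) * log2(x), g = x * log2(x + sqrt(x))."""
    f = (x + math.sqrt(x)) * math.log2(x)
    g = x * math.log2(x + math.sqrt(x))
    return f / g

for x in (1e2, 1e4, 1e8):
    print(x, ratio(x))  # values decreasing toward 1
```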
|
|calculus|limits|logarithms|
| 0
|
Elementary proof that $(1+1/x)^x$ is increasing for $x \in \mathbb{R}_{>0}$
|
All similar proofs I could find show that $(1+1/n)^n$ is increasing for positive integer values of $n$ only, or show that the derivative of $(1+1/x)^x$ is positive for all $x \in \mathbb{R}_{>0}$ . I'm looking for a proof that $x<y \implies \left(1+\frac1x\right)^x < \left(1+\frac1y\right)^y$ for all $x,y \in \mathbb{R}_{>0}$ that doesn't use the derivative of $a^x$ nor the derivative of $\log_a (x)$ . My attempt so far: Let $x,y \in \mathbb{R}_{>0}$ such that $x<y$ . Let $$ \begin{align} J&= \frac {\left(1+\frac{1}{y}\right)^y}{\left( 1+ \frac{1}{x} \right)^x}\\ &= \frac {(\frac{y+1}{y})^x(1+\frac{1}{y})^{y-x}}{(\frac{x+1}{x} )^x}\\ &= {(\frac{xy+x}{xy+y})^x(1+\frac{1}{y})^{y-x}} \\ &= {\left(1 - \frac{y-x}{xy+y}\right)^x\left(1+\frac{1}{y}\right)^{y-x}} \end{align} $$ Showing that $\left(1 - \frac{y-x}{xy+y}\right)^x > \frac{1}{\left(1+\frac{1}{y}\right)^{y-x}}$ is what I'm missing
|
$$f(x)=x\log(\frac{x+1}{x})\Rightarrow f'(x)=\log(\frac{x+1}{x})-\frac{1}{1+x}\geq 0$$ from $\log y\leq y-1$ applied to $y=\frac{x}{x+1}.$
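A quick numerical sanity check (my addition) that $(1+1/x)^x$ is indeed increasing on the positive reals:

```python
# Evaluate g(x) = (1 + 1/x)**x on an increasing grid of positive reals and
# confirm the values are strictly increasing (approaching e from below).
xs = [0.1 * k for k in range(1, 200)]
g = [(1 + 1 / x) ** x for x in xs]
increasing = all(a < b for a, b in zip(g, g[1:]))
print(increasing)  # True
```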
|
|calculus|algebra-precalculus|inequality|
| 0
|
Why the magnitudes of the gradients produced by the soft targets scale as $1/T^2$ in knowledge distillation?
|
In the paper "Distilling the Knowledge in a Neural Network" , they claim that " the magnitudes of the gradients produced by the soft targets scale as $1/T^2$ it is important to multiply them by $T^2$ when using both hard and soft targets ". In section 2.1, they write: Each case in the transfer set contributes a cross-entropy gradient, $dC/dz_i$ , with respect to each logit, $z_i$ of the distilled model. If the cumbersome model has logits $v_i$ which produce soft target probabilities $p_i$ and the transfer training is done at a temperature of $T$ , this gradient is given by: $$ \frac{\partial C}{\partial z_i} = \frac{1}{T}(q_i - p_i) = \frac{1}{T}(\frac{e^{z_i/T}}{\sum_j e^{z_j/T}} - \frac{e^{v_i/T}}{\sum_j e^{v_j/T}}) \tag{2} $$ If the (softmax) temperature is high compared with the magnitude of the logits, we can approximate: $$ \frac{\partial C}{\partial z_i} \approx \frac{1}{T}\left(\frac{1 + z_i/T}{N + \sum_j z_j/T} - \frac{1 + v_i/T}{N + \sum_j v_j/T}\right) \tag{3} $$ If we now a
|
I have a guess. For Eq. 5, one can approximate: $$ \frac{e^{z_i}}{\sum_j e^{z_j}} - 1 \approx \frac{1 + z_i}{\sum_j (1 + z_j)} - 1 = \frac{1 + z_i}{N} - 1 \approx \frac{z_i}{N} $$ (using the zero-mean assumption $\sum_j z_j = 0$). From Eq. 4, one gets: $$ \frac{1}{T^2} \frac{z_i}{N} $$ $v_i$ is not considered because it comes from the teacher model and is a fixed number. Finally, one gets that the gradient of the CE term is $T^2$ times the gradient of the KL term. Please note, I'm not sure this is right.
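To make the $1/T^2$ scaling concrete, here is a small numeric sketch (my addition; the logits `z` and `v` are made-up zero-mean examples, not from the paper). The soft-target gradient $(q_i - p_i)/T$ of Eq. 2, multiplied by $T^2$, settles toward an approximately constant value as $T$ grows.

```python
import math

def softmax(logits, T):
    es = [math.exp(v / T) for v in logits]
    s = sum(es)
    return [e / s for e in es]

z = [1.0, -0.5, -0.5]  # student logits (zero mean, a made-up example)
v = [2.0, -1.0, -1.0]  # teacher logits (zero mean, a made-up example)

def grad(T):
    """Per-logit soft-target gradient (q_i - p_i)/T from Eq. 2."""
    q, p = softmax(z, T), softmax(v, T)
    return [(qi - pi) / T for qi, pi in zip(q, p)]

for T in (5.0, 10.0, 20.0):
    # T^2 * grad is roughly constant, near (z_0 - v_0)/N = -1/3 here:
    print(T, T**2 * grad(T)[0])
```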
|
|machine-learning|neural-networks|
| 0
|
$A+\vec x={\{\vec a+\vec x:\vec a\in A\}}$ is an open set in $\Bbb R^n$.
|
Let $A\subseteq \Bbb R^n$ an open set and let $\vec x\in \Bbb R^n$ a fixed vector. Show that the set $A+\vec x={\{\vec a+\vec x:\vec a\in A\}}$ is an open set in $\Bbb R^n$ . Dem: An element of the set $A+\vec x$ is of the form $\vec a+\vec x$ for some $\vec a\in A$ and because $A$ is an open set: $\implies \forall \vec a\in A, \exists\epsilon\gt 0:B(\vec a,\epsilon)\subseteq A$ : $\implies \forall \vec{\alpha}\in A, \exists\epsilon\gt 0:\lVert\vec{\alpha}-\vec a\rVert\lt\epsilon$ Then, $B(\vec a+\vec x,\epsilon)\subseteq A$ : $\implies \forall \vec{\alpha}\in A,\exists \epsilon \gt 0:\lVert\vec{\alpha} - (\vec a+\vec x)\rVert\lt\epsilon$ $\implies \lVert \vec{\alpha} - \vec a-\vec x\rVert\lt\epsilon$ I don't know what follows. Can you help me? I'll appreciate any feedback. Thanks
|
$A+\vec x=$ $f^{-1}(A)$ where the translation $f:\Bbb R^n\to\Bbb R^n,\;\vec y\mapsto\vec y-\vec x$ is continuous (it is even an isometry ). Since the inverse image of an open subset by a continuous function is open , $A+\vec x$ is open.
|
|general-topology|solution-verification|
| 0
|
In a fssc Lie algebra, if for some x the killing form of x with itself is non zero then x is ad-diagonalizable
|
I am currently studying Lie algebras and stumbled upon this little lemma in my lecture notes. The proof given there is insufficient I believe. The proof there is based on choosing an "orthonormal" basis $\{T_1, \ldots, T_n\}$ where $T_1 = \frac{x}{\sqrt{\kappa \left(x, x\right)}}$ and $\kappa (T_a, T_b) = \delta_{ab}$ . Then what's left to show is that $ad(T_1)$ is diagonalizable and then in the lecture notes it is argued that because $ad(T_1)$ is basically just given by structure coefficients in the given basis, it is antisymmetric and thus diagonalizable. But not every antisymmetric matrix is diagonalizable, this would only be true if it was a real-valued matrix over $\mathbb{C}$ , but it is not. It is complex valued. So my question here is, is this proof valid and am I thus just not getting something here, or is it actually insufficient? And does anyone have an alternative/actual proof? Thanks in advance! :)
|
If by fssc you mean "finite dimensional semisimple complex", I think that $x=\begin{pmatrix}-2&0&0\\0&1&1\\0&0&1\end{pmatrix}\in\mathfrak{sl}_3$ satisfies $\kappa(x,x)\neq0$ while $\mathrm{ad}_x$ is not diagonalizable. Diagonalizable is equivalent to semisimple, and $\mathrm{ad}_x$ is semisimple iff $x$ is semisimple (as an element of $\mathfrak{gl}(V)$). So $\mathrm{ad}_x$ is diagonalizable iff $x$ is diagonalizable. The Killing form is a nonzero multiple of $(x,y)\mapsto \mathrm{Tr}(xy)$, so $\kappa(x,x)$ is nonzero iff the sum of the squares of the eigenvalues of $x$ is nonzero. The two properties seem relatively uncorrelated. You can check Chapter 20 of Lie Algebras and Algebraic Groups by P. Tauvel and R. Yu for the properties mentioned above.
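A quick numerical verification of this counterexample (my addition, a numpy sketch):

```python
import numpy as np

x = np.array([[-2.0, 0.0, 0.0],
              [ 0.0, 1.0, 1.0],
              [ 0.0, 0.0, 1.0]])

assert abs(np.trace(x)) < 1e-12      # trace 0, so x lies in sl_3
assert abs(np.trace(x @ x)) > 1e-12  # kappa(x, x) ~ Tr(x^2) = 6 != 0

# Eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1,
# so x (hence ad_x) is not diagonalizable:
geo_mult = 3 - np.linalg.matrix_rank(x - np.eye(3))
print(geo_mult)  # 1
```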
|
|diagonalization|semisimple-lie-algebras|
| 1
|
What is the orthogonal of an intersection?
|
Background I have been introduced to the notion of orthogonal complement of a subset of a (pre)hilbert space. Given $X$ a (pre)hilbert space and $A\subseteq X$, one defines $A^\perp:=\{x\in X:x\perp A\}$. I have then been encouraged to try finding out what happens if, given $A,B\subseteq X$, I try finding $(A\cup B)^\perp$ or $(A\cap B)^\perp$. I think I have managed the former, with it being $A^\perp\cap B^\perp$. Indeed, if $x\perp A\cup B$, then $x\perp A, x\perp B$, thus $x\in A^\perp\cap B^\perp$. Conversely, if $x\in A^\perp\cap B^\perp$, then $x\perp A,x\perp B$, so for all $z\in A\cup B$, since $z\in A\vee z\in B$, we have $x\perp z$, giving $x\perp A\cup B$. My intuition led me to thinking that, since $A^\perp\cup B^\perp$ does not fill the whole of $(A\cap B)^\perp$, I might have $(A\cap B)^\perp=A^\perp+B^\perp$, where if $P,Q\leq X$ are subspaces, then $P+Q:=\mathrm{span}(P\cup Q)=$ $=\{p+q:p\in P,q\in Q\}$. I have managed to prove one inclusion. If $x\in A^\perp+B^\perp$,
|
Question: Is it true that $(A\cap B)^\perp=A^\perp+B^\perp$? Does this require the space to be a Hilbert space or is it valid for prehilbert spaces too? How do I prove it? Can you give me a counterexample if it's not true? And in that case, is there a way to express $(A\cap B)^\perp$ in terms of $A^\perp$, $B^\perp$? If so, how do I prove it holds? No. It's true on a Hilbert space if $A$ and $B$ are closed subspaces (on a prehilbert space, $(A^\perp)^\perp$ can be different from $\overline A$). Consider $E=\mathcal{C}(\left[-1,1\right],\mathbb{R})$, with inner product $$\forall\left(f,g\right)\in E^{2},\thinspace\left(f\mid g\right)=\int_{-1}^{1}f\left(t\right)g\left(t\right)\thinspace dt$$ and $$D=\left\{f\in E;\,\forall t\in\left[0,1\right],\thinspace f\left(t\right)=0\right\}$$ and $$G=\left\{f\in E;\,\forall t\in\left[-1,0\right],\thinspace f\left(t\right)=0\right\}$$ One can prove (the original argument is written in French) that $D^\perp=G$ and $G^\perp=D$. On one hand, $(D\cap G)^\perp=\{0_E\}^\perp=E$. On the other hand, every function in $D^\perp+G^\perp=G+D$ vanishes at $0$, so $D^\perp+G^\perp\subsetneq E$, which gives the counterexample.
|
|hilbert-spaces|orthogonality|
| 1
|
Distributing $n$ people into $m$ groups
|
For example if we have 22 people and we want to distribute them into two groups there are $\binom{22}{11}/2$ possibilities. But I'm wondering, can we generalise this if we say how many possibilities are there to distribute $n$ people into $m$ equally large groups? (Even more difficult, what if each group is of a different size?) And how does importance of order and if order in the groups doesn't matter, affect the end result?
|
While dividing people into groups, you need to take into account whether the groups are labeled or unlabeled. Your example of $\binom{22}{11}/2$ applies only to unlabeled groups. Had the groups been labeled, say Tigers and Lions, the answer would have been just $\binom{22}{11}$. It is easy to see you can generalise for more groups of equal size. For $3$ unlabeled groups of $5$ from $15$ people, for example, you would get $\binom{15}5\binom{10}5\binom55 \div 3!$ If group sizes or composition (say, men/women) vary, the groups automatically become labeled, so if we divided the previous $15$ into groups of size $8,4,3$ the answer would just be $\binom{15}8\binom74\binom33$ Finally, if order within groups mattered, in the previous example you would need to multiply by $8!4!3!$ Or, what would be much simpler, just permute all ($15!$) and draw dividing lines after $8$ and $12$ people.
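The counts in this answer can be reproduced directly (my addition, a short sketch using Python's `math.comb`):

```python
from math import comb, factorial

# 22 people into two unlabeled groups of 11:
two_unlabeled = comb(22, 11) // 2

# 15 people into three unlabeled groups of 5 (divide by 3! group orderings):
three_unlabeled = comb(15, 5) * comb(10, 5) * comb(5, 5) // factorial(3)

# 15 people into (automatically labeled) groups of sizes 8, 4, 3:
labeled_sizes = comb(15, 8) * comb(7, 4) * comb(3, 3)

print(two_unlabeled, three_unlabeled, labeled_sizes)  # 352716 126126 225225
```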
|
|combinatorics|
| 0
|
Which statements count as first-order?
|
I have been reading a bit about formal languages to understand exactly what we mean by "first-order" when talking about things such as the transfer principle. If we use the language of set theory $\displaystyle S=\left\langle \in \right\rangle$ , then all first-order statements we can create right now consist of logical symbols and our relation $\in$ . But suppose we define a new relation: $\displaystyle a\subseteq b \iff\forall x[( x\in a \implies x\in b)]$ Now we have used our language $S$ to define a new relation. So we could make a statement such as: $\displaystyle \forall x\forall y[(x\subseteq y)\vee\neg (x\subseteq y)]$ Would this also be considered a first-order statement for some type of number (such as the naturals)? Or would we call this something different entirely? If so, suppose we now go a step further and talk about real numbers where we have our standard order relation $<$ . Suppose you make a statement such as: $\forall x\forall y[(x<y)\vee\neg (x<y)]$ Would this be considered "first-order"?
|
Your question is specifically about the transfer principle, more precisely about the meaning of "first-order" when applying the transfer principle to "first-order formulas". The essential point that does not seem to have come out clearly in the comments is that the quantifiers have to be bound for the transfer principle to apply to the formula. Therefore you cannot have expressions like " $(\forall x)$ " in your formula, but rather $\forall x\in A$ where $A$ is a set that the variable $x$ ranges through (this formula is itself a shorthand for a more detailed formula, but this technical point can be left out for now). The theory ZF is a theory in the $\in$ -language. Therefore, so long as your formula uses only the relation $\in$ (as opposed to $\subseteq$ , for example), the transfer principle could be applied. Note that there is no restriction that transfer should be applied to "individuals" specifically (whatever that means: individual real numbers? etc.), but can apply to any set. F
|
|logic|
| 1
|
Projective bundles on projective spaces and birational contractions?
|
I'm a beginner in algebraic geometry and I have some questions as follows. Consider the projective bundle (in the sense of Grothendieck) $\mathbb{P}_{\mathbb{P}^n}(\mathscr{O}(a_1)\oplus\cdots\oplus\mathscr{O}(a_r))$ on $\mathbb{P}^n$ . I have the following questions: [Q1] We know that $\mathrm{Bl}_{\mathbb{P}^a}\mathbb{P}^b=\mathbb{P}_{\mathbb{P}^{b-a-1}}(\mathscr{O}(1)\oplus\mathscr{O}^{\oplus (a+1)})$ , is there some similar description of the general $\mathbb{P}_{\mathbb{P}^n}(\mathscr{O}(a_1)\oplus\cdots\oplus\mathscr{O}(a_r))$ on $\mathbb{P}^n$ ? Note that $\mathrm{Bl}_{\mathbb{P}^a}\mathbb{P}^b$ can be seen as blowing up the vertex $\mathbb{P}^a$ of the cone over $\mathbb{P}^{b-a-1}$ in $\mathbb{P}^b$ . What about the general case? I guess this may be the resolution of singularities of some $(a_1,...,a_i,...)$ -uple embedding in a bigger projective space. [Q2] Now the Picard number of $\mathbb{P}_{\mathbb{P}^n}(\mathscr{O}(a_1)\oplus\cdots\oplus\mathscr{O}(a_r))$ is $2$ , so we may have two Mori contractions
|
Q1. This projective bundle can be thought of as a higher-dimensional scroll. Indeed, if you twist the vector bundle appropriately (this does not affect the projective bundle) you can assume that all $a_i$ are positive; then you can consider the Veronese embeddings of degrees $a_i$ of $\mathbb{P}^n$ , and then form a scroll with fibers $\mathbb{P}^{r-1}$ . Q2. Consider the twist of your vector bundle such that all $a_i$ are nonnegative but some are equal to zero (such a twist is unique). Then a construction similar to the above gives a morphism from your projective bundle to the cone over the subscroll that corresponds to the positive weights. This is the second extremal contraction.
|
|algebraic-geometry|
| 1
|
Real non-trivial zeros of Riemann zeta function inside critical strip
|
Could someone please tell me what is known about the real non-trivial zeros of Riemann zeta function inside critical strip? Do we know there is none? I want to know what is known about the above questions without the assumption of Riemann hypothesis. Any hint or help would be appreciated. Thanks in advance.
|
There are no real nontrivial zeroes of $\zeta(s)$ in $(0,1)$ . This can be easily shown as follows. Let $\eta(s)=\sum_{n=1}^\infty(-1)^{n-1}/n^s$ , which converges in $\mathop{\rm Re}s>0$ . When $\mathop{\rm Re}s>1$ , we have that $$\eta(s)=\sum_{n=1}^\infty\frac1{n^s}-2\sum_{n=1}^\infty\frac1{(2n)^s}=\left(1-2^{1-s}\right)\zeta(s).$$ Since $\eta(s)$ is holomorphic on $\mathop{\rm Re}s>0$ , by analytic continuation this equality holds on $\mathop{\rm Re}s>0$ as well. Now, when $s$ is real and $0<s<1$ , $\eta(s)=\sum_{n=0}^\infty\left(\frac1{(2n+1)^s}-\frac1{(2n+2)^s}\right)$ and each term is positive, so $\eta(s)>0$ , in particular $\eta(s)\ne0$ . Since $1-2^{1-s}\neq0$ on $(0,1)$ , $\zeta(s)$ cannot vanish there.
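The positivity of the paired series can be spot-checked numerically (my addition, a pure-Python sketch; truncating to finitely many pairs only underestimates the positive sum):

```python
def eta(s, pairs=200000):
    """Partial sum of sum_{n>=0} (1/(2n+1)**s - 1/(2n+2)**s), the grouped
    alternating series for the Dirichlet eta function."""
    return sum(1 / (2 * n + 1) ** s - 1 / (2 * n + 2) ** s
               for n in range(pairs))

# Every pair is positive for real s in (0, 1), so eta(s) > 0 there, and
# zeta(s) = eta(s) / (1 - 2**(1 - s)) cannot vanish on (0, 1).
for s in (0.1, 0.5, 0.9):
    print(s, eta(s))  # strictly positive values
```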
|
|number-theory|analytic-number-theory|
| 0
|