Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Find a function such that $f^{-1}=f'$
Let $f:\Bbb{R}^+\rightarrow\Bbb{R}^+$ be a differentiable bijection and let $f$ satisfy $f'=f^{-1}$ (where $f^{-1}$ denotes the inverse of $f$). Find $f$. This comes from a facebook page "Mathematical theorems you had no idea existed, cause they're false". The negation of this statement is given here, that is: there is no function that satisfies $f'=f^{-1}$. But in the comments there is a counterexample given: $$g(x)=\varphi^{1-\varphi}x^{\varphi}$$ where $\varphi=\frac{1+\sqrt{5}}{2}$ is the golden ratio. It's straightforward to check that $g'=g^{-1}$ holds. The OP claims that this solution is unique. Can someone come up with a way to derive the function $g$, or more functions satisfying this property? Also, IS this really the unique solution?
Problem Clarification - Solving a Simpler Related Problem. There are two possible conventions for $f^{-1}$; here we read it as the reciprocal $f^{-1}=\frac{1}{f}$ (and not the inverse function, which is the harder case and can be treated separately): $$f^{-1} = f' \tag{Eq. 1}$$ Then solve for $f$. Multiply both sides of Equation 1 by $f$: $$1 = f f' \tag{Eq. 2}$$ Integrate both sides of Equation 2 with respect to $x$, using the identities $\frac{d}{dx}\left(\frac{1}{2} f^2\right) = f f'$ and $\frac{d}{dx} x = 1$; with $C/2$ an arbitrary integration constant: $$x + C/2 = \frac{1}{2}f^2 \tag{Eq. 3}$$ Taking $C=0$ and solving Equation 3 gives $$f = \sqrt{2x} \tag{Eq. 4}$$ As a check, consider $f=\sqrt{ax}$, so $f'=\frac{a}{2}\frac{1}{\sqrt{ax}}$, and then $$f' f = \left( \frac{a}{2}\frac{1}{\sqrt{ax}} \right) \sqrt{ax} = \frac{a}{2},$$ which equals $1$ for $a=2$, so Equation 2 holds.
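Independently of the reciprocal reading above, the counterexample quoted in the question (with $f^{-1}$ the functional inverse) can be sanity-checked numerically. A minimal sketch; the formulas below simply restate the question's $g$:

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio, phi**2 == phi + 1
c = phi ** (1 - phi)                  # coefficient of the claimed solution

def g(x):
    return c * x ** phi

def g_prime(x):                        # derivative of g
    return c * phi * x ** (phi - 1)

def g_inv(y):                          # functional inverse of g on (0, inf)
    return (y / c) ** (1 / phi)

for x in (0.5, 1.0, 2.0, 10.0):
    assert abs(g_prime(x) - g_inv(x)) < 1e-12      # g' == g^{-1}
    assert abs(g(g_inv(x)) - x) < 1e-12            # inverse is correct
```

The check works because $\varphi - 1 = 1/\varphi$, which makes the exponents of $g'$ and $g^{-1}$ agree.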
|ordinary-differential-equations|functions|functional-equations|
0
What is the reasoning behind Stirling number $S(n,2)$?
The answer that was given in class was $(2^n -2)/2$ . I think it's trying to use the theorem that the number of $k$ -digit strings that can be formed over $n$ element set is $n^k$ . Or, I think it might be counting the number of ways that each number can be placed in each of the two subsets. Also, why are we subtracting the two? Can someone provide an explanation? Thanks!
Here's an interesting way to count the same situation. Say there are $n$ people and $k$ identical boxes ($1\leq k \leq n$); here $k=2$. We wish to count the number of ways to put these $n$ people into the $k$ boxes with no box empty. Begin by fixing person $n$ in one of the boxes; from that moment the boxes are no longer identical. So we just have to count the number of ways to put the remaining $n-1$ people into $2$ distinct boxes, i.e. $2^{n-1}$. However, we must subtract one from this count, since one arrangement is illegal: all $n$ people in the same box. Therefore the final number is $2^{n-1}-1$, the same as yours (note $(2^n-2)/2 = 2^{n-1}-1$).
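A brute-force cross-check of the count (a throwaway sketch; `stirling2_bruteforce` is just an illustrative name):

```python
from itertools import product

def stirling2_bruteforce(n):
    # count surjections of {1..n} onto two labelled boxes, then divide by 2!
    # because the boxes are actually identical
    count = 0
    for assignment in product((0, 1), repeat=n):
        if 0 in assignment and 1 in assignment:   # neither box empty
            count += 1
    return count // 2

for n in range(2, 11):
    assert stirling2_bruteforce(n) == (2**n - 2) // 2 == 2**(n - 1) - 1
```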
|combinatorics|set-partition|stirling-numbers|
0
Is it really true that $\mathcal{S}(\mathbb{R}^n)$ is identified with smooth functions on $S^n$ vanishing at a fixed point?
Let $\mathcal{S}(\mathbb{R}^n)$ be the Schwartz space on $\mathbb{R}^n$ and $C^\infty(S^n)$ be the space of smooth functions on $n$ -sphere. Now fix a point $x \in S^n$ and define \begin{equation} C^\infty_x(S^n) := \{ f \in C^\infty(S^n) \mid f(x)=0\} \end{equation} Then I heard that $\mathcal{S}(\mathbb{R}^n)$ may be identified with $C^\infty_x(S^n)$ . That is, a Schwartz function is one extending smoothly to $S^n$ while vanishing at infinity. However, I wonder why we need such a strong decay condition of Schwartz functions to extend it smoothly to $S^n$ ? Could anyone please clarify for me?
You have heard of a wrong identification. In his book Théorie des distributions , Laurent Schwartz mentions that by identifying $S^n$ with the one point compactification of $\mathbb R^n$ , the space $\mathcal S(\mathbb R^n)$ may be identified with the space $\dot{\mathscr D}(S^n)$ of all scalar smooth functions on $S^n$ whose derivatives of any order all vanish at infinity (in the latest edition ( Zbl 0962.46025 ), see chap. VII, § 3, th. II). Identifying $x$ with the point at infinity, the space $\dot{\mathscr D}(S^n)$ is a lot smaller than $C^\infty_x(S^n)$ as there are many extensions of rational functions with no singularity in $\mathbb R^n$ that are in $C^\infty_x(S^n)$ but not in $\dot{\mathscr D}(S^n)$ . The identification between $\dot{\mathscr D}(S^n)$ and $\mathcal S(\mathbb R^n)$ is quite intuitive. If $f\in\mathcal S(\mathbb R^n)$ , then by composing with the inversion $y\mapsto\|y\|^{-2}y$ , you obtain a smooth function $g:S^n\setminus\{0\}\to\mathbb C$ whose derivatives of all orders tend to $0$ at the origin, so that setting $g(0)=0$ yields an element of $\dot{\mathscr D}(S^n)$.
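A one-dimensional sympy analogue of the inversion argument (an illustrative sketch, not the $n$-dimensional statement): with the Schwartz function $f(x)=e^{-x^2}$, the composition $g(y)=f(1/y)$ has all derivatives tending to $0$ at the origin, so it glues smoothly to the point at infinity.

```python
import sympy as sp

y = sp.symbols('y', real=True)
g = sp.exp(-1 / y**2)     # f(1/y) with f(x) = exp(-x**2), a Schwartz function

# every derivative of g extends continuously by 0 at y = 0
for k in range(4):
    assert sp.limit(sp.diff(g, y, k), y, 0) == 0
```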
|functional-analysis|smooth-manifolds|smooth-functions|schwartz-space|compact-manifolds|
1
Algorithm for allocating numbers into groups such that the maximum number of elements in the groups is minimized
Suppose there is a set of arrays $S= [A_1, A_2, ..., A_k] $ , where $A_i$ is a finite subset of $\mathbb{Z}$ . Given a positive integer $g$ , I want to build $g$ sets $G_1,G_2,\dots,G_g$ such that $\forall i\in[k], \exists j\in[g], A_i \subseteq G_j$ and I want the maximum size of the groups minimized, i.e., $$\min \max_j |G_j|.$$ For example, $S=[[1,2,3],[2,5],[4,6,7]]$ , $g=2$ , then the optimal groups are $G_1=[1,2,3,5], G_2=[4,6,7]$ , and the objective value is 4. I wonder what's the optimal algorithm (or any algorithm better than enumerating all possible groups) for solving this problem?
You can show that your problem is $\mathcal{NP}$ -hard by reduction from multi-processor scheduling (take all the $A_i$ disjoint, and the size of the $A_i$ as the length of the tasks), so there is probably no polynomial time algorithm for this problem. Your problem (with $A_i$ not necessarily disjoint) can be seen as a subproblem of submodular scheduling (like for submodular bin-packing). If $g$ is fixed and all the $A_i$ are disjoint, then the problem is polynomial by using a dynamic programming algorithm (weakly polynomial for scheduling, but strongly polynomial in your case). I don't see how the dynamic programming algorithm can be adapted to the general case, but maybe you can do something about it. Note that there always are better algorithms than enumerating all possible groups, even when the problem is $\mathcal{NP}$ -hard. Depending on the size of the instances at hand, full generation can be good enough for small instances; otherwise, you can do a branch and bound with suitable pruning.
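For small instances, the exhaustive baseline can be written directly (a sketch; `min_max_group_size` is a hypothetical name):

```python
from itertools import product

def min_max_group_size(arrays, g):
    # try every assignment of arrays to the g groups; each A_i must land
    # wholly inside one group, and each group is the union of its arrays
    best = float('inf')
    for assign in product(range(g), repeat=len(arrays)):
        groups = [set() for _ in range(g)]
        for a, j in zip(arrays, assign):
            groups[j].update(a)
        best = min(best, max(len(G) for G in groups))
    return best

# the example from the question: optimal value 4
assert min_max_group_size([[1, 2, 3], [2, 5], [4, 6, 7]], 2) == 4
```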
|optimization|algorithms|computer-science|discrete-optimization|
0
Could any "incoherently involutive" endofunctor be made "coherently involutive"?
Given a category $C$ with an endofunctor $F:C \to C$ and a natural isomorphism $\epsilon:FF \cong 1_C$ , call the pair $(F, \epsilon)$ an involutive endofunctor. Also, call the pair $(F, \epsilon)$ a coherently involutive endofunctor if $(F, F, \epsilon^{-1}, \epsilon)$ forms an adjoint equivalence, or equivalently, if $F\epsilon_X=\epsilon_{FX}$ for all objects $X$ of $C$ . Then, given any involutive endofunctor $(F, \epsilon)$ , is there always another natural isomorphism $\epsilon^{\prime}:FF \cong 1_C$ for which $(F, \epsilon^{\prime})$ is coherently involutive? It is known that incoherent equivalences can always be made into coherent adjoint equivalences by changing one of the involved isomorphisms without changing the other.
Consider a category with a family of objects $X_n$ with $n \in \mathbb Z$ . Let the set of morphisms originating from any particular object be $\{(p,q,r) \in \mathbb Z^2 \times \mathbb N\}$ . A morphism $(p,q,r)$ starting from $X_n$ has codomain $X_{n+2p+2q+r}$ . Let composition be $$(p, q, r) \circ (p', q', r') = (p + p', q + q', r + r').$$ Now construct an involutive functor $F$ which maps $X_n$ to $X_{n+1}$ . Its action on morphisms is $F(p,q,r) = (q,p,r)$ . This functor is indeed involutive: we construct $\epsilon_{X_n} = (-1,0,0)$ , with inverse $(1,0,0)$ . Finally, we prove that it cannot be coherently involutive. Suppose we have $\epsilon_{X_n} = (p, q, r)$ . We need $r$ to be even, and $p+q+r/2 = -1$ , so that the codomain is correct. Then we need $\epsilon_{X_{n+1}} = \epsilon_{FX_n} = F\epsilon_{X_n} = (q, p, r)$ . But now the square formed by $\epsilon$ on $(0,0,1)$ cannot commute, because we need $$(p,q,r) + (0,0,1) = (0,0,1) + (q,p,r)$$ implying $p = q$ , and then $r/2 = -1-2p$ is odd, so $r \neq 0$ by parity. Since $r \in \mathbb N$ , a morphism $(p,q,r)$ with $r \neq 0$ has no inverse, contradicting the requirement that $\epsilon$ be a natural isomorphism.
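The morphism monoid above can be checked mechanically (a small sketch; tuples $(p,q,r)$ compose by addition, exactly as in the construction):

```python
def compose(f, g):
    # composition of morphisms is componentwise addition of (p, q, r)
    return tuple(a + b for a, b in zip(f, g))

def F(f):
    # the functor F swaps the first two components
    p, q, r = f
    return (q, p, r)

eps = (-1, 0, 0)        # the component of epsilon at every object X_n

f = (2, -3, 5)
assert F(F(f)) == f                                  # F is involutive on morphisms
assert compose(eps, F(F(f))) == compose(f, eps)      # naturality square for eps
assert F(eps) != eps                                 # coherence F(eps) = eps_F fails
```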
|category-theory|involutions|equivalence-of-categories|
0
On the definition of quadratic forms
Let $\mathbb{F}$ be an arbitrary field of characteristic $\mathrm{char}(\mathbb{F})\neq 2$ and $V$ an $\mathbb{F}$ -vector space. The definition of a quadratic form I am used to is a map $\varphi\colon V\to\mathbb{F}$ with the property $\varphi(\lambda v)=\lambda^{2}\varphi(v)$ such that $$\beta_{\varphi}(v,w):=\frac{1}{2}(\varphi(v+w)-\varphi(v)-\varphi(w))$$ is bilinear. Now, clearly, $\beta_{\varphi}$ is symmetric and $\beta_{\varphi}(v,v)=\varphi(v)$ , by definition. In the case $\mathbb{F}=\mathbb{R}$ , I have seen the following claim: Proposition 1 : If $\varphi\colon V\to\mathbb{R}$ with the property $\varphi(\lambda v)=\lambda^{2}\varphi(v)$ satisfies the parallelogram law $$\varphi(v+w)+\varphi(v-w)=2\varphi(v)+2\varphi(w)$$ then it is a quadratic form, i.e. there exists a symmetric bilinear form $\beta_{\varphi}$ such that $\beta_{\varphi}(v,v)=\varphi(v)$ . This is essentially a statement similar to the Jordan-von Neumann theorem, which asserts that a norm $\Vert\cdot\Vert$ on a real vector space satisfying the parallelogram law is induced by an inner product.
Without positivity, this is not even true for $\mathbb F=\mathbb R$ . See this wonderful post . We show some more general results. It has been shown by P. Quinton that $(\cdot, \cdot):=\beta_{\varphi}$ is bi-additive. In particular it follows that $$(-u, v)=(u,-v)=-(u,v).$$ Consider $$\begin{cases} (\lambda(u+v), \lambda(u+v))=\lambda^2(u+v, u+v) & (1)\\ (\lambda(u-v), \lambda(u-v))=\lambda^2(u-v,u-v) & (2)\end{cases}$$ Taking $(1)-(2)$ and expanding additively, we get $4(\lambda u, \lambda v) = 4\lambda^2(u, v)$ , that is, $$(\lambda u, \lambda v)=\lambda^2(u, v)$$ where $u, v$ are not necessarily equal. Now we show $(\lambda u, u)=\lambda (u, u)$ . This follows from $$((1+\lambda)u, (1+\lambda)u)=(1+\lambda)^2(u, u).$$ After expansion and cancellation, $2(\lambda u, u)=2\lambda (u, u)$ . Now we can show that in the very special case $\dim V = 1$ , $\beta_{\varphi}$ is bilinear. And since the post has constructed examples for $\dim V=2$ , we know the statement also fails for all $\dim V \ge 2$ .
|linear-algebra|abstract-algebra|reference-request|inner-products|quadratic-forms|
1
Show $T$ is a continuous linear operator
I am reading the book Introduction to Functional Analysis by Christian Clason. I need help with problem 4.2 from the book: Let $T:C[a, b] \rightarrow C[a, b],\ [Tx](t)=tx(t) \quad \forall t \in [a, b]$ . Show $T$ is a continuous linear operator. Calculate $\|T\|$ . Show or give a counterexample for the closedness of $\operatorname{ran} T$ . What I have done: (i) linear: $[T(\lambda x + y)](t) = t(\lambda x(t) + y(t)) = t\lambda x(t)+ ty(t) = [T(\lambda x)](t) + [Ty](t)$ . (ii) How can I show it is continuous? My idea was: Take $x_0 \in C[a, b]$ ; I want $\forall \epsilon > 0\ \exists \delta>0 : \|x - x_0\| < \delta \implies \|Tx - Tx_0\| < \epsilon$ . We can choose $\delta = \epsilon / \max(|a|,|b|)$ . This proves $T$ is continuous. We know $\|T\| = \sup_{x \in C[a, b],\, x \neq 0} \frac{\|Tx\|}{\|x\|}$ and $\|Tx\| = \sup_{t \in [a, b]} |tx(t)| \leq \sup_{t \in [a, b]} |t|\,\|x\|$ . Let $x(t) = 1 \in C[a, b]$ ; then $\|x\|=1$ and $\|Tx\| = \sup_{t \in [a, b]} |t| = \max(|a|,|b|)$ , so $\|T\| = \max(|a|,|b|)$ .
Assume $0\in [a,b].$ The range of $T$ consists of all functions of the form $tx(t),$ where $x\in C[a,b].$ On the other hand the closure of the range of $T$ consists of all functions in $C[a,b],$ which vanish at $t=0.$ Indeed, if $x(0)=0,$ then by the Weierstrass approximation theorem there exists a sequence of polynomials $q_n(t)$ such that $q_n(t)\rightrightarrows x(t).$ Then $p_n(t):=q_n(t)-q_n(0)\rightrightarrows x(t),$ and the $p_n$ belong to the range of $T.$ Therefore the function $\sqrt{|t|}$ does not belong to the range of $T$ but belongs to the closure of the range. Remark: for $\sqrt{|t|}$ the sequence $p_n(t)$ can be indicated explicitly by applying the Maclaurin expansion of $\sqrt{1-u}.$
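A numeric illustration of the density claim (a sketch, assuming $[a,b]=[0,1]$; Bernstein polynomials are used here instead of the answer's Weierstrass/Maclaurin constructions): each approximant of $\sqrt t$ vanishes at $0$, hence lies in the range of $T$, while the sup-norm error shrinks as the degree grows.

```python
import numpy as np
from math import comb

def bernstein(f, n, t):
    # degree-n Bernstein approximation of f on [0, 1]
    return sum(f(k / n) * comb(n, k) * t**k * (1 - t)**(n - k)
               for k in range(n + 1))

t = np.linspace(0.0, 1.0, 1001)
f = np.sqrt
errs = [float(np.max(np.abs(bernstein(f, n, t) - f(t)))) for n in (10, 50, 200)]

assert bernstein(f, 200, np.array(0.0)) == 0.0   # approximant lies in ran T
assert errs[0] > errs[1] > errs[2]               # sup-error decreases with degree
```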
|functional-analysis|analysis|
0
Why are columns having no leading entry called free variables?
In the textbook that I am following, it says that columns with a leading entry correspond to basic variables and columns with no leading entry correspond to free variables. I don't get the intuition behind why they are called free variables if their columns have no leading entry. Suppose I have a system of linear equations written as an augmented matrix. Reducing that matrix to row reduced echelon form, I have \begin{bmatrix} 1 & 6 & 0 & 3 & 0 :& 0\\ 0 & 0 & 1 & -4 & 0 : & 5\\ 0 & 0 & 0 & 0 & 1 : & 7 \end{bmatrix} Now the system can be written as $$ x_1+6x_2+3x_4 = 0\\ x_3-4x_4 = 5\\ x_5 = 7 $$ Now the textbook says $x_1 , x_3, x_5$ are basic variables and can be expressed in terms of $x_2$ and $x_4$ : $$ x_1 = -6x_2 - 3x_4\\ x_3 = 5 + 4x_4\\ x_5 = 7 $$ Why can't I express $x_2$ in terms of $x_1$ and $x_4$ and say $x_1$ and $x_4$ are free variables? $$x_2 = \frac{-3x_4-x_1}{6}$$
Yes, you can. The point of choosing $x_2$ and $x_4$ is that when you reduce to echelon form you are certain that the variables whose columns have no leading entry can be chosen as free parameters. In general, in a system like yours (say 3 independent equations with 5 variables), not every pair of variables can be chosen as free.
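The convention can be seen directly in software: sympy's `rref` reports exactly the pivot (basic) columns, and every non-pivot column is a free variable. A quick check on the matrix from the question:

```python
import sympy as sp

M = sp.Matrix([
    [1, 6, 0, 3, 0, 0],
    [0, 0, 1, -4, 0, 5],
    [0, 0, 0, 0, 1, 7],
])
R, pivots = M.rref()
assert R == M                  # the matrix is already in reduced row echelon form
assert pivots == (0, 2, 4)     # columns of x1, x3, x5: the basic variables
```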
|linear-algebra|
0
Analytical proof for the convergence of a sequence
Consider the following sequence $\Xi_N=N\sum\limits_{i=0}^{N-1} {N-1 \choose i} (-1)^{(i+1)} \log\left(i+1\right)$. I numerically computed the asymptotic behavior of the sequence, and it appears to approach a non-zero value as $N$ goes to infinity. Now, I want to prove analytically that this sequence converges to a non-zero value as $N$ goes to infinity. Also, it can be shown that the sequence has another form: $\Xi_N=\sum\limits_{i=1}^{N} {N \choose i} (-1)^{(i)} i \log\left(i\right)$. Moreover, using $\int_{0}^{1} \sum_{m=1}^{i} \frac{1}{x+m} dx=\log(i+1)$, we get $\Xi_N=N\sum_{m=1}^{N-1}{N-1 \choose m-1} (-1)^{m-1}\int_{0}^{1} \frac{1}{x+m} dx$. Could you give me some advice? Thanks
Jack's answer above seems to have missed one term; the correct form should be: $$\Xi _{N+1}=\left( N+1 \right) \int_0^{\infty}{\frac{\left( 1-e^{-x} \right) ^N}{xe^x}dx}$$ just as you say. Notice that for all $n\geqslant 1$, $$n\left( 1-e^{-x} \right) ^{n-1}\leqslant e^x, \quad x\in \left[ 0,\infty \right),$$ and hence for all $N\geqslant 0$, $$\frac{\left( N+1 \right) \left( 1-e^{-x} \right) ^N}{xe^x}\leqslant \frac{1}{x}, \quad x\in \left( 0,\infty \right);$$ then you can use the dominated convergence theorem.
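The two forms of $\Xi_N$ stated in the question can be cross-checked in high-precision arithmetic (the alternating binomial sum cancels catastrophically in double precision, hence `decimal`):

```python
from decimal import Decimal, getcontext
from math import comb

getcontext().prec = 60                       # plenty of guard digits

def ln(k):
    return Decimal(k).ln()

def xi_form1(N):                             # N * sum C(N-1,i) (-1)^(i+1) log(i+1)
    return N * sum(comb(N - 1, i) * (-1)**(i + 1) * ln(i + 1) for i in range(N))

def xi_form2(N):                             # sum C(N,i) (-1)^i i log(i)
    return sum(comb(N, i) * (-1)**i * i * ln(i) for i in range(1, N + 1))

for N in (5, 10, 20):
    assert abs(xi_form1(N) - xi_form2(N)) < Decimal("1e-30")
```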
|sequences-and-series|combinatorics|analysis|limits|convergence-divergence|
0
Number of words of fixed length, starting with A and ending with B, without double letter
So I have a graph-related problem, where I need to determine the number of paths of length $L$ from node $N_A$ to node $N_B$ , in a complete graph containing $N$ nodes. Nodes $N_A$ and $N_B$ can point to the same node. An example application of this problem is finding the number of words of length $L$ , starting with letter A and ending with letter B (the actual letters do not matter), where the graph is simply an alphabet of size $N$ . Because it is a complete graph, no double letters are allowed, so words like AACB or ABBB are forbidden. Example solution on the restricted alphabet {A, B, C, D} ( $N=4$ ), with words of length 4: ABAB , ABCB , ABDB , ACAB , ACDB , ADAB , and ADCB . So 7 different paths. For $L=5$ , the solution is 20. For $L=6$ , the solution is 61. For $L=7$ , the solution is 182. I actually have a working program that can generate all those paths. The issue is that I want to know the exact length of the iterator without actually needing to consume it. Is there a closed-form formula for this count?
Fix an alphabet size $m$ and assume A $\ne$ B. Let the number of such words of length $n$ be $f(n)$ . Consider the second letter C, which must be different from A. If moreover C $\ne$ B, then there are $f(n-1)$ words of length $n-1$ starting with C and ending with B, and there are $m - 2$ choices of such C. Otherwise C = B; consider the third letter D, which must be different from C (and therefore automatically D $\ne$ B), so there are $f(n-2)$ words starting with each of the $m - 1$ choices of D. Adding up, $$f(n) = (m-2)f(n-1) + (m-1)f(n-2).$$ Since $m$ is fixed, this is a linear recurrence with constant coefficients , which has standard solution techniques, e.g. using generating functions or matrix exponentiation. The associated characteristic polynomial is $x^2 - (m-2)x - (m-1)$ , which has roots $-1$ and $m - 1$ . Therefore $f(n)$ is of the form $P (m-1)^n + Q (-1)^n$ for some constants $P$ and $Q$ . Using $f(1) = 0$ and $f(2) = 1$ we can solve for $P$ and $Q$ , which gives us $$f(n) = \frac{(m-1)^{n-1} + (-1)^n}{m}.$$
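The closed form $f(n) = \big((m-1)^{n-1} + (-1)^n\big)/m$ can be cross-checked against brute force (a sketch; letter A is encoded as 0 and B as 1). Note that it yields $182$ for $m=4$, $L=7$:

```python
from itertools import product

def brute(m, L):
    # words over {0,..,m-1}, starting 0, ending 1, no equal adjacent letters
    return sum(
        1
        for w in product(range(m), repeat=L)
        if w[0] == 0 and w[-1] == 1 and all(a != b for a, b in zip(w, w[1:]))
    )

def closed(m, L):
    return ((m - 1)**(L - 1) + (-1)**L) // m

for L in range(2, 8):
    assert brute(4, L) == closed(4, L)
assert closed(4, 4) == 7 and closed(4, 5) == 20 and closed(4, 6) == 61
```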
|combinatorics|graph-theory|
0
Find the largest value of $a = x^2 - y^2 + 2xy$, for real numbers $x, y$ such that $x^2 + y^2 \leq 1$
Find the largest value of $a = x^2 - y^2 + 2xy$ , for real numbers $x,y$ such that $x^2 + y^2 \leq 1$ Here's what I did, but it doesn't look quite convincing to me. $a = x^2 - y^2 + 2xy = 2x^2 - (x - y)^2 \leq 2x^2 $ The equality occurs when $x = y$ . However, $x^2 + y^2 \leq 1$ . When $x = y$ , this becomes $2x^2 \leq 1$ Thus, $a \leq 1$ , and $a = 1$ occurs when $x = y = \pm\frac{1}{\sqrt{2}}$ I hope someone could give me some feedback and perhaps also a more rigorous argument. Thank you!
Here is an elementary solution. If $x = 0$ , then $a(0,y) = -y^2 \le 0$ , with $a(0,0) = 0$ . So $a_{\max }\ge a(0,0) = 0$ , and it suffices to find the maximum in the case $x \ne 0$ (as for $x = 0$ , $a$ is non-positive). From the condition $x^2 + y^2 \le 1$ , and by denoting $t = y/x$ , at a maximizing point with $a \ge 0$ we have: $$\frac{x^2 -y^2 +2xy}{x^2 +y^2} \ge a \implies \frac{1-t^2+2t}{1+t^2} \ge a \implies (a+1)t^2 -2t+(a-1)\le 0 \tag{1}$$ For $a\ge 0$ , the leading coefficient $(a+1) > 0$ , so the quadratic inequality $(1)$ has a solution if and only if the discriminant $\Delta' = 1-(a+1)(a-1)=2-a^2$ is non-negative. That means $$-\sqrt{2}\le a \le \sqrt{2}.$$ So $a$ reaches its maximum $\sqrt{2}$ if and only if $\cases{x^2 + y^2 = 1\\ t = \frac{1}{1+\sqrt{2}}} \iff$ $(x,y) = \left(\frac{\sqrt{2+\sqrt{2}}}{2},\frac{\sqrt{2-\sqrt{2}}}{2} \right)$ or $(x,y)=\left(-\frac{\sqrt{2+\sqrt{2}}}{2},-\frac{\sqrt{2-\sqrt{2}}}{2} \right)$ .
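A quick numeric confirmation of the value $\sqrt2$ (sampling the boundary circle, where the maximum is attained, since on the unit circle $a=\cos 2\theta+\sin 2\theta$):

```python
import math

best = max(
    math.cos(t)**2 - math.sin(t)**2 + 2 * math.sin(t) * math.cos(t)
    for t in (2 * math.pi * k / 100_000 for k in range(100_000))
)
# a = cos(2t) + sin(2t) on the unit circle, so the maximum is sqrt(2)
assert abs(best - math.sqrt(2)) < 1e-6
```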
|calculus|algebra-precalculus|
0
How to show that $\int_{-\infty}^{\infty} \mathrm{d}^3 \textbf{k} \frac {e^{i \textbf{k x}}} {(2 \pi)^3} = \delta^3(x)$ in spherical coordinates?
Recently I had to deal with Fourier transforms and delta functions, and I was wondering how to show this. I know that it's trivial to show in Cartesian coordinates, but I couldn't do it in spherical coordinates. Somehow one should be able to reduce it to something which depends only on $|k|$ (which I could do) and show that it is a delta function (which I couldn't).
There is no need to work in any particular coordinates here; it can be done using only delta and Fourier transform properties, as follows. First, take a look at the Fourier transform of the delta: $$\mathcal{F}[\delta^3(\textbf{x})] = \int \mathrm{d}^3 \textbf{x} \delta^3(\textbf{x}) {e^{-i \textbf{kx}}} = 1$$ Now use the fact that the inverse Fourier transform of the Fourier transform of a function (assuming the transform applies) is the function itself; combining this with the result above and the definition of the inverse transform, we obtain: $$\delta^3(\textbf{x})=\mathcal{F^{-1}}[\mathcal{F[\delta^3(\textbf{x})]}]=\mathcal{F^{-1}}[1]= \int \mathrm{d}^3 \textbf{k} \frac {e^{i \textbf{k x}}} {(2 \pi)^3} $$
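A truncated one-dimensional sanity check (a numeric sketch, not a proof): the partial $k$-integral $\int_{-K}^{K} e^{ikx}\,\mathrm{d}k/2\pi = \sin(Kx)/\pi x$ acts on a test function like a delta as $K$ grows.

```python
import numpy as np

def smeared_value(f, K, L=30.0, n=300_001):
    # integrate f against the truncated kernel sin(Kx)/(pi x) on [-L, L]
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    kernel = (K / np.pi) * np.sinc(K * x / np.pi)   # sin(Kx)/(pi x), safe at x = 0
    return float(np.sum(kernel * f(x)) * dx)

f = lambda x: np.exp(-x**2)
# as K grows, the smeared value approaches f(0) = 1
assert abs(smeared_value(f, 80.0) - 1.0) < 1e-3
```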
|calculus|fourier-analysis|distribution-theory|
0
On the definition of quadratic forms
Let $\mathbb{F}$ be an arbitrary field of characteristic $\mathrm{char}(\mathbb{F})\neq 2$ and $V$ an $\mathbb{F}$ -vector space. The definition of a quadratic form I am used to is a map $\varphi\colon V\to\mathbb{F}$ with the property $\varphi(\lambda v)=\lambda^{2}\varphi(v)$ such that $$\beta_{\varphi}(v,w):=\frac{1}{2}(\varphi(v+w)-\varphi(v)-\varphi(w))$$ is bilinear. Now, clearly, $\beta_{\varphi}$ is symmetric and $\beta_{\varphi}(v,v)=\varphi(v)$ , by definition. In the case $\mathbb{F}=\mathbb{R}$ , I have seen the following claim: Proposition 1 : If $\varphi\colon V\to\mathbb{R}$ with the property $\varphi(\lambda v)=\lambda^{2}\varphi(v)$ satisfies the parallelogram law $$\varphi(v+w)+\varphi(v-w)=2\varphi(v)+2\varphi(w)$$ then it is a quadratic form, i.e. there exists a symmetric bilinear form $\beta_{\varphi}$ such that $\beta_{\varphi}(v,v)=\varphi(v)$ . This is essentially a statement similar to the Jordan-von Neumann theorem, which asserts that a norm $\Vert\cdot\Vert$ on a real vector space satisfying the parallelogram law is induced by an inner product.
This needs a total of 9 applications of Proposition 1, sorry about that; maybe one can do better. I can prove that $\beta_\varphi(u+v,w)=\beta_\varphi(u,w)+\beta_\varphi(v,w)$ ; I don't know if $\beta_\varphi(av,w)=a\beta_\varphi(v,w)$ . We can apply Proposition 1 three times to get \begin{align*} \varphi(-u+v+w)&=2\varphi(w)+2\varphi(-u+v)-\varphi(-u+v-w)\\ &=2\varphi(w)+2\varphi(-u+v)-\varphi(u-v+w)\\ \varphi(u-v+w)&=2\varphi(u)+2\varphi(-v+w)-\varphi(-u-v+w)\\ &=2\varphi(u)+2\varphi(-v+w)-\varphi(u+v-w)\\ \varphi(u+v-w)&=2\varphi(v)+2\varphi(u-w)-\varphi(u-v-w)\\ &=2\varphi(v)+2\varphi(u-w)-\varphi(-u+v+w)\\ \end{align*} Summing the three equations, rearranging and dividing by $2$ yields \begin{align*} &\varphi(-u+v+w)+\varphi(u-v+w)+\varphi(u+v-w)\\ =&\varphi(u)+\varphi(v)+\varphi(w)+\varphi(-u+v)+\varphi(-v+w)+\varphi(u-w) \end{align*} Now we can apply Proposition 1 three more times to the term $\varphi(u+v+w)$ , starting with \begin{align*} \varphi(u+v+w) &=2\varphi(u+v)+2\varphi(w)-\varphi(u+v-w), \end{align*} and symmetrically with the other two groupings $\varphi((v+w)+u)$ and $\varphi((u+w)+v)$ ; combining these identities with the one above gives the claimed additivity $\beta_\varphi(u+v,w)=\beta_\varphi(u,w)+\beta_\varphi(v,w)$ .
|linear-algebra|abstract-algebra|reference-request|inner-products|quadratic-forms|
0
Solving & Understanding $a≡97\pmod7$ When $1\le a\le7$
I am asked to solve $a≡97\pmod 7$ . From what I understand, this is equivalent to saying $7\mid(a-97)$ , and if we had $7\mid a$ and $7\mid 97$ , that would mean $a=7e$ , $97=7r$ for integers $e,r$ , and therefore $a-97=7(e-r)$ . When I divide $97$ by $7$ , I get $97 = 7 \times 13+6$ , so a quotient of $13$ and remainder $6$ . What I do not understand is how to use this to derive $a$ , given $a-97=7(e-r)$ which solves for $e$ and $r$ , when $1\le a\le7$ .
The main thing is: the remainder when $97$ is divided by $7$ should be the same as the remainder when $a$ is divided by $7$ . We want to solve $a \equiv 97 \pmod 7$ , equivalently $7 \mid (a - 97)$ . As you see, $7 \nmid 97$ , so $7$ cannot divide $a$ (think why?). So $a$ will be of the form $7k + q$ , where $k\in \mathbb{Z}$ and $0 < q < 7$ . Now, \begin{align*} 7 \mid (a - 97) & \implies 7 \mid (7k + q -97) \\ & \implies 7 \mid (97 -q) \\ & \implies q = 6. \end{align*} Now, all possible values of $a$ are $$a \in \{7k + 6: k \in \mathbb{Z}\},$$ and the only one with $1\le a\le 7$ is $a = 6$ .
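A two-line check of the arithmetic (Python's `%` operator computes exactly this remainder):

```python
assert 97 % 7 == 6                      # remainder of 97 on division by 7
assert 97 // 7 == 13                    # quotient

# every a = 7k + 6 satisfies a ≡ 97 (mod 7)
assert all((7 * k + 6 - 97) % 7 == 0 for k in range(-10, 10))

# and the unique value with 1 <= a <= 7 is a = 6
assert [a for a in range(1, 8) if (a - 97) % 7 == 0] == [6]
```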
|elementary-number-theory|
1
Show that if $x + y + z = xy + xz + yz$ then $x^2(1 - y^2) + y^2(1 - z^2) + z^2(1 - x^2) = 2(x + y + z)(xyz - 1)$
the question: Show that if $x + y + z = xy + xz + yz$ then $x^2(1 - y^2) + y^2(1 - z^2) + z^2(1 - x^2) = 2(x + y + z)(xyz - 1)$ . the idea: First of all I thought of expanding all the products, and I got to: $x^2-x^2y^2+y^2-y^2z^2+z^2-z^2x^2=2x^2yz+2xy^2z+2xyz^2-2x-2y-2z$ . Then I squared the given equality and got to $x^2+y^2+z^2+2(xy+yz+xz)=x^2y^2+y^2z^2+x^2z^2+2x^2yz+2xy^2z+2xyz^2$ . I tried processing it, but got to nothing useful. Hope one of you can help me! Thank you!
$$x^{2}\cdot \left(1-y^{2}\right)+y^{2}\cdot \left(1-z^{2}\right)+z^{2}\cdot \left(1-x^{2}\right)-2 \left(x+y+z\right) \left(x y z-1\right)=-\left(x y+x z+y z+x+y+z+2\right)\cdot \underbrace{\left(x y+x z+y z-(x+y+z)\right)}_0=0$$
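The factorization above (and hence the required identity, since the second factor vanishes under the hypothesis) can be verified symbolically:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
lhs = x**2*(1 - y**2) + y**2*(1 - z**2) + z**2*(1 - x**2)
rhs = 2*(x + y + z)*(x*y*z - 1)
factored = -(x*y + x*z + y*z + x + y + z + 2) * (x*y + x*z + y*z - (x + y + z))

# the difference of the two sides equals the displayed factorization
assert sp.expand(lhs - rhs - factored) == 0
```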
|square-numbers|
1
Show that, if $L$ is a regular language, then so is $\{w : \exists n \in \Bbb{N}, w^n \in L\}$
Suppose $L$ is a regular language over an alphabet $\Sigma$ . Let $$L' = \{w : \exists n \in \Bbb{N}, w^n \in L\}.$$ Prove that $L'$ is regular too.
This question is much easier to solve if you use monoids instead of automata. Proposition . The language $L'$ is recognized by the syntactic monoid of $L$ . It follows immediately that $L'$ is regular, since a language is regular if and only if its syntactic monoid is finite, but the proposition is much more precise. For instance, it shows that if $L$ is star-free , then so is $L'$ , a result that would be challenging to prove with automata. The proposition is a corollary of a more general result on transductions, first given in [2] (see also [1] for further references). Here is a self-contained proof. Let $M$ be the syntactic monoid of $L$ , let $f:A^* \to M$ be its syntactic morphism and let $P = f(L)$ . Then $$ (*) \quad L = f^{-1}(P) = f^{-1}(f(L)) $$ For each $m \in M$ , let $m^*$ be the submonoid of $M$ generated by $m$ . Observe that, if $m = f(u)$ , then $m^* = f(u)^* = f(u^*)$ , which justifies the notation $m^*$ . Let $Q = \{m \in M \mid m^* \cap P \not= \emptyset\}$ . Then $L' = f^{-1}(Q)$ : indeed, $w \in L'$ iff $f(w^n) = f(w)^n \in P$ for some $n$ , that is, iff $f(w)^* \cap P \neq \emptyset$ . Hence $L'$ is recognized by $M$ , which proves the proposition.
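The monoid argument can be made concrete in code. A sketch with the transition monoid of a small DFA (any monoid recognizing $L$ suffices for regularity, not just the syntactic one); here $L$ = words over $\{a,b\}$ ending in $ab$, and the monoid-based membership test for $L'$ is compared against brute force:

```python
from itertools import product

# DFA for L = words over {a, b} ending in "ab"
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
STATES, START, ACCEPT = (0, 1, 2), 0, {2}

def run(s, w):
    for c in w:
        s = delta[(s, c)]
    return s

def f(w):
    # the morphism into the transition monoid: w -> its state transformation
    return tuple(run(s, w) for s in STATES)

def in_Lprime_monoid(w):
    # w is in L' iff some power of f(w) sends START into ACCEPT
    t, m, seen = f(w), tuple(STATES), set()
    while m not in seen:                 # m ranges over f(w)^0, f(w)^1, ...
        if m[START] in ACCEPT:
            return True
        seen.add(m)
        m = tuple(t[s] for s in m)       # compose with f(w) once more
    return False

def in_Lprime_brute(w, max_n=30):
    return any(run(START, w * n) in ACCEPT for n in range(max_n + 1))

words = [''.join(p) for k in range(5) for p in product('ab', repeat=k)]
assert all(in_Lprime_monoid(w) == in_Lprime_brute(w) for w in words)
```

The loop over powers of `f(w)` terminates because the transition monoid is finite, which is exactly the finiteness used in the proposition.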
|regular-language|finite-state-machine|
0
Are there homeomorphisms from $\mathbb{R}^{m\times n} / S_n$ into $\mathbb R^k$ where $k$ grows more slowly than $k\sim O(m^2 n)$?
I am a physicist, not a mathmo, and I am working with some objects which are multisets (i.e. sets able to contain repeats) containing $n$ elements, each of which is a vector in ${\mathbb R}^m$ . I.e. one could perhaps notate one such multiset as follows: $$M(n,m,V)= \{ \vec v_1, \vec v_2,\ldots,\vec v_n \} $$ wherein each $\vec v_i$ is some point in $\mathbb R^m$ , and where $\vec v_i$ is the $i^\text{th}$ column of some matrix $V\in \mathbb R^{m\times n}$ . I am not certain of the name and/or correct notation for the space in which such multisets $M(n,m,V)$ lie, but I suspect it is some kind of quotient space which I will notate like this: $$\mathbb{R}^{m\times n} / S_n$$ in which $S_n$ is meant to be the symmetric group on $n$ letters -- which in this case permutes the columns of $V$ . (But as I am a lowly physicist, not a mathmo, I may be inadvertently misusing that notation somehow!) I have been trying to map such objects continuously and losslessly into a real Euclidean space (or a subset of one) of dimension as small as possible.
First, some terminology. Let $X$ be a topological space. The $n$ -th symmetric product of $X$ , denoted $SP_n(X)$ , is the quotient of $X^n$ (the $n$ -fold product of $X$ with itself) by the action of the symmetric group $S_n$ , which acts by permuting components of tuples $(x_1,...,x_n)\in X^n$ . On general grounds, if $X$ has topological dimension $m$ , then $X^{n}$ has topological dimension $nm$ (more precisely, the Lebesgue covering dimension) and its quotient by $S_n$ has the same dimension. There is a general theorem in topology which states that every locally compact metrizable topological space of dimension $N$ can be embedded in ${\mathbb R}^{2N+1}$ . But, likely, you do not care about such general embeddings and, besides, for $SP_n({\mathbb R}^m)$ one can do better than $2mn+1$ . It is instructive to first look at the cases $m=1, m=2$ : $m=1$ . Then we are considering the quotient of ${\mathbb R}^n$ by $S_n$ which acts by permuting the components of vectors. The quotient space can be identified, by sorting coordinates, with the closed chamber $\{x_1 \le x_2 \le \cdots \le x_n\}\subset {\mathbb R}^n$ , so it embeds in ${\mathbb R}^n$ and $k=n$ suffices.
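The $m=2$ case ($SP_n(\mathbb R^2)=SP_n(\mathbb C)\cong \mathbb C^n$, via polynomial coefficients and the fundamental theorem of algebra) is easy to demonstrate: encode an unordered multiset of $n$ complex points as the $n$ coefficients of the monic polynomial with those roots. A numeric sketch:

```python
import numpy as np

def encode(points):
    # multiset of n complex numbers -> n coefficients (order-independent)
    return np.poly(points)[1:]            # np.poly prepends the leading 1

def decode(coeffs):
    return np.sort_complex(np.roots(np.concatenate(([1.0], coeffs))))

pts = np.array([1 + 2j, 1 + 2j, -0.5j, 3.0])          # note the repeated point
round_trip = decode(encode(pts))
assert np.allclose(np.sort_complex(pts), round_trip, atol=1e-4)

# permutation invariance: shuffling the points gives the same encoding
assert np.allclose(encode(pts), encode(pts[::-1]))
```

The loose tolerance reflects the ill-conditioning of root-finding at repeated roots; the map itself is exact and a homeomorphism.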
|general-topology|
1
Real Polynomials on Compact sets of Complex numbers
Setting: $\mathbb{R}[x]$ is the set of polynomials with real coefficients. Every $f\in \mathbb{R}[x]$ is regarded as a function with domain $\mathbb{C}$ . $K$ is a compact subset of $\mathbb{C}$ . $\mathbb{R}[x]|_{K}$ is the set of restrictions to $K$ of functions in $\mathbb{R}[x]$ . $\mathcal{C}(K)$ is the set of continuous complex-valued functions on $K$ . By a set $F$ of complex-valued functions on some set $R$ being self-adjoint, we mean that for all $f\in F$ there exists some $\overline{f}\in F$ such that $\overline{f}(x)=\overline{f(x)}$ for all $x\in R$ . Questions: Is $\mathbb{R}[x]$ self-adjoint? Is $\mathbb{R}[x]|_{K}$ dense in $\mathcal{C}(K)$ ? If $\mathbb{R}[x]|_K$ is not dense in $\mathcal{C}(K)$ , is there a continuous function $f:K\rightarrow \mathbb{C}$ such that $f$ is not in the uniform closure of $\mathbb{R}[x]|_{K}$ and can be explicitly written down? Motivation (for me to ask this question): I am currently studying the section on Stone-Weierstrass in Baby Rudin, and it seems that Rudin doesn't answer these questions.
Consider the polynomial $f(x) = x$ , which is certainly a real polynomial. Is there a real polynomial $\bar{f}$ such that $\bar{f}(x) = \bar{x}$ ? More generally, if both $f$ and the $\bar{f}$ satisfying $\bar{f}(x) = \overline{f(x)}$ also individually satisfy the Cauchy-Riemann equations, then they have to be constants. In fact, if $f$ is a real polynomial then it satisfies $\overline{f(x)} = f(\bar{x})$ , which is a closed (check!) condition on $\mathcal{C}(K)$ , and is non-trivial (by the above) if $K$ is infinite (and therefore has a limit point). If $K$ is finite then $\mathcal{C}(K)$ is the set of all functions $K \to \mathbb{C}$ , so the same condition is again non-trivial iff $K$ contains some pair of conjugate points or some real point, i.e. some $x \in K$ with $\bar{x} \in K$ . If $K$ does not contain such an $x$ , then for any function $f : K \to \mathbb{C}$ , the unique complex polynomial of degree $\le 2|K|$ taking the value $f(x)$ at each $x \in K$ and the value $\overline{f(x)}$ at each $\bar{x}$ is fixed by the map $p(z) \mapsto \overline{p(\bar z)}$ , hence has real coefficients; so in that case every function on $K$ is the restriction of a real polynomial, and $\mathbb{R}[x]|_K = \mathcal{C}(K)$ .
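The closed condition $\overline{f(x)} = f(\bar x)$ can be verified symbolically for a generic real-coefficient polynomial (a small sympy sketch):

```python
import sympy as sp

z = sp.symbols('z')                                   # complex variable
a0, a1, a2 = sp.symbols('a0 a1 a2', real=True)
p = a0 + a1*z + a2*z**2                               # generic real polynomial

# real coefficients force conj(p(z)) == p(conj(z))
assert sp.expand(sp.conjugate(p) - p.subs(z, sp.conjugate(z))) == 0
```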
|functional-analysis|complex-analysis|analysis|examples-counterexamples|dense-subspaces|
1
How to Prove the Divergence of an Improper Integral Involving Absolute Value
I'm working on understanding the convergence properties of certain improper integrals and encountered the following integral: $$\int_{0}^{\infty} \left| \frac{\cos(x)}{\sqrt{x}} \right| \, dx$$ I believe this integral diverges when considering the absolute value, but I'm struggling to formally prove it. My initial thought was to apply comparison tests or limit comparison tests since the absolute value removes the oscillatory nature of the cosine function, making direct integration challenging. However, I'm unsure how to select an appropriate function to compare it to, especially near the singularity at x=0 and as x approaches infinity.
$$\int_0^{\infty }|\cos x|\frac{dx}{\sqrt{x}}\geq \sum_{k=1}^{\infty}\int_{k\pi}^{k\pi+\frac{\pi}{4}}|\cos x|\frac{dx}{\sqrt{x}}$$ $$=\int_0^\frac{\pi}{4}\sum_{k=1}^{\infty}|\cos x|\frac{dx}{\sqrt{x+k\pi}}\geq \frac{\sqrt{2}}{2}\int_0^\frac{\pi}{4}\sum_{k=1}^{\infty}\frac{dx}{\sqrt{x+k\pi}}=\infty$$
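A numeric illustration of the divergence (not a proof): the partial integrals keep growing roughly like $4\sqrt b/\pi$, since the mean value of $|\cos|$ is $2/\pi$.

```python
import math

def partial_integral(b, n=200_000):
    # midpoint rule for the integral of |cos x| / sqrt(x) over (0, b]
    h = b / n
    return sum(abs(math.cos((i + 0.5) * h)) / math.sqrt((i + 0.5) * h)
               for i in range(n)) * h

v1, v2 = partial_integral(100.0), partial_integral(400.0)
assert v2 > 1.8 * v1     # no sign of convergence: roughly doubles with sqrt(b)
```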
|integration|trigonometric-integrals|absolute-convergence|infinite-groups|
0
Convex set with convex complement has non empty interior?
My question is whether, in a normed space (maybe complete if necessary), a non-empty convex set with convex complement must have non-empty interior. I can think of neither a counterexample nor a proof. Any ideas?
Take an unbounded linear functional $\ell$ (such functionals exist on any infinite-dimensional normed space) and consider $$ A = \{ x \in X \mid \ell(x) \le 0 \}. $$ The complement $$ B = \{ x \in X \mid \ell(x) > 0 \} $$ is convex as well, and neither of these sets has non-empty interior: the level sets of an unbounded functional are dense, so both $A$ and $B$ are dense, and an interior point of one would contradict the density of the other.
|functional-analysis|
1
Is there a "correct" $k$ scheme structure to put on $\coprod_{i=1}^n \operatorname{Spec}(k)$?
Let $k$ be an algebraically closed field, with $n\in\mathbb N\subset k$ invertible. I am trying to prove that if $\mathbb G_m=\operatorname{Spec} k[t,t^{-1}]$ is the multiplicative group scheme over $k$ , and $\mu_n$ is the kernel of the group map $[n]:\mathbb G_m\rightarrow \mathbb G_m$ defined on all $k$ -schemes $S$ by $s\in \mathbb G_m(S)\mapsto s^n\in \mathbb G_m(S)$ , then $\mu_n$ is isomorphic to the group scheme $\coprod_{\alpha\in \mathbb Z/n\mathbb Z}\operatorname{Spec}(k)$ . However, I have just realized that I am not entirely sure how to turn $\coprod_{i=1}^n \operatorname{Spec}(k)$ into a $k$ -scheme. Indeed, we can identify $\coprod_{i=1}^n \operatorname{Spec}(k)$ with $\operatorname{Spec}(k^n)$ , and then any morphism $k\rightarrow k^n$ turns $\operatorname{Spec}(k^n)$ into a $k$ -scheme. Is there any reason I should prefer $x\mapsto (x,0,\dots,0)$ over, say, $x\mapsto (x,\dots, x)$ or any other such variant?
KReiser's and FShrike's comments probably already suffice for you, and I'm not saying much more, but let me write an answer with a little more detail (completely inside the category of $k$ -schemes). Let us quickly recall the notion of slice categories from ordinary category theory. Slice categories. Let $\mathscr{C}$ be a category with an object $c \in \mathscr{C}$ . Then, $\mathscr{C}_{/c}$ is the slice category (over $c$ ) where objects are maps $c' \to c$ and maps $(c' \to c) \to (c'' \to c)$ are maps $c' \to c''$ commuting with the structure maps to $c$ . Colimits in slice categories are then computed in the underlying category. Written out: If $(c_i \to c)_{i \in I}$ is a diagram of objects in $\mathscr{C}_{/c}$ , then one may check that $$\operatorname{colim}_{i \in I} (c_i \to c)_{i \in I} \cong \left(\operatorname{colim}_{i \in I} c_i \to c \right)$$ in $\mathscr{C}_{/c}$ where the map $\operatorname{colim}_{i \in I} c_i \to c$ is induced by the universal property of the colimit
|algebraic-geometry|schemes|affine-schemes|group-schemes|
1
Possible generalizations of combinatorial species to arbitrary infinite sets
The notion of Joyal's combinatorial species is well known. In his treatment, Joyal assumes finiteness; in fact he works with the groupoid of finite sets. The underlying idea is to capture the notion of a structure on a finite ground set and to work "combinatorially" with it. OK, finiteness is essential for Joyal's purposes. However, one may consider the family of set partitions of an infinite set, or set-theoretic/algebraic/topological structures on infinite sets. Why does there not exist a generalization of combinatorial structures dealing with these cases? Clearly, in such a generalization it would not be possible to work with generating functions, but some other techniques might be available. Why has it never been done?
See here https://mathoverflow.net/questions/138518/generalization-of-analytic-functors where I was wondering the same. As for why, I don't know the exact answer, but after all this time I know that the features of the category of bijections of finite sets (e.g. being traced) are essential for some of the theory; as an example, the category of virtual species does not exist if you define species over $Bij(\lambda)$ for a cardinal $\lambda$ bigger than $\omega$ . Furthermore, a hint for why the category of species is so fundamental is that $Bij(\omega)$ has the universal property of the free monoidal category on a singleton. $Bij(\lambda)$ should then be the free category with a structure like a monoidal one, where you can tensor together, associatively (but what does it mean exactly now?), at most $\lambda$ objects, and you have a unit. These kinds of structures are elusive and way more unnatural: sets, and complete categories, have it (the monoidal structure is a functor of every arity ${\cal C}^\lambda \to \cal C$ ), as you might…
|combinatorics|category-theory|
1
Elementary function solution of $\int_0^{\varepsilon} \sqrt{\varepsilon^2-y^2}\,\frac{{\rm artanh}\,y}{1-y^2}{\rm d}y$
I would like to know if the following integral can be done in terms of elementary functions of $\varepsilon$ $$ I(\varepsilon)=\int_0^{\varepsilon} \sqrt{\varepsilon^2-y^2}\,\frac{{\rm artanh}\,y}{1-y^2}{\rm d}y\,. $$ I tried writing the ${\rm artanh}\,y$ as $$ {\rm artanh}\,y=y\int_{0}^{1}\frac{1}{1-\delta^2y^2}\,{\rm d}\delta $$ and then doing the integration in $y$ first. This can indeed be done. However, the integral that I am left with in terms of $\delta$ does not look any better than the above.
Define the function $\mathcal{I}:(0,1]\rightarrow\mathbb{R}$ via the definite integral $$\mathcal{I}{\left(a\right)}:=\int_{0}^{a}\mathrm{d}x\,\frac{\sqrt{a^{2}-x^{2}}}{1-x^{2}}\operatorname{artanh}{\left(x\right)},$$ where the real inverse hyperbolic tangent is defined here by $$\operatorname{artanh}{\left(z\right)}:=\int_{0}^{z}\mathrm{d}y\,\frac{1}{1-y^{2}}\stackrel{y\mapsto zt}{=}\int_{0}^{1}\mathrm{d}t\,\frac{z}{1-z^{2}t^{2}}=\frac12\ln{\left(\frac{1+z}{1-z}\right)};~~~\small{z\in(-1,1)}.$$ It can be shown that $\mathcal{I}{\left(1\right)}=2C$ , where here $C$ denotes Catalan's constant: $$\begin{align} \mathcal{I}{\left(1\right)} &=\int_{0}^{1}\mathrm{d}x\,\frac{\sqrt{1-x^{2}}}{1-x^{2}}\operatorname{artanh}{\left(x\right)}\\ &=\int_{0}^{1}\mathrm{d}x\,\frac{\operatorname{artanh}{\left(x\right)}}{\sqrt{1-x^{2}}}\\ &=\int_{0}^{1}\mathrm{d}x\,\frac{\ln{\left(\frac{1+x}{1-x}\right)}}{2\sqrt{1-x^{2}}}\\ &=-\int_{0}^{1}\mathrm{d}x\,\frac{\ln{\left(\frac{1-x}{1+x}\right)}}{2\left(1+x\ri
|definite-integrals|
0
For square-free $n \geq 1$ and any $a \in \Bbb{Z}/n$ (including $\gcd(a,n) \neq 1$), is it true that in the list $a, a^2, a^3, \dots$ either $a$ or $a^2$ repeats?
For example, modulo $30$ we have that $\gcd(5, 30) = 5$ , but $5, 5^2, 5^3 = 5, 5^2, \dots$ goes the list, so both repeat. We know it's true when $\gcd(a,n) = 1$ because a cyclic group is formed in $(\Bbb{Z}/n)^{\times}$ , so it's sufficient to prove the case when $\gcd(a, n) = d \neq 1$ . Another example: $$ n = 15:\\ 3, 3^2, 3^3=-3,-3^2,(-3)^2=3^2, \dots \\ 6,6,6, \dots \\ 9,9^2=6,9, \dots\\ 5,10,5,10, \dots\\ $$ Is this hopelessly difficult to prove?
By the Chinese Remainder theorem, $\mathbb{Z}_n$ is a product of $\mathbb{Z}_{p_i}$ for prime $p_i$ , and we can view $a$ as an element of $\prod_i \mathbb{Z}_{p_i}$ , $(a_1,a_2,\dots,a_k)$ . For any of the component rings, $\mathbb{Z}_{p_i}$ , we have that the sequence $a_i^m$ must repeat either with period $1$ (if $a_i = 0$ ) or with period dividing $p_i-1$ (otherwise). Take the lcm of the periods of all the $a_i^m$ sequences for all the component rings and you'll get the period for the product ring. So $a^m$ always repeats.
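A brute-force check of this argument for small moduli (a sketch in Python; the range 60 and the exponent cap $2n$ are my arbitrary but safe choices, since the period divides $\operatorname{lcm}(p_i-1) < n$):

```python
# Brute-force check for all square-free n up to 60.  For square-free n the
# sequence a, a^2, a^3, ... (mod n) is purely periodic (each CRT component is
# either constantly 0 or a unit), so a itself always reappears, which is
# stronger than "a or a^2 repeats".  The period divides lcm(p_i - 1) < n,
# so looking at exponents up to 2n is safe.
def squarefree(n):
    return all(n % (d * d) != 0 for d in range(2, int(n ** 0.5) + 1))

def a_repeats(a, n):
    # does the value a occur again among a^2, ..., a^(2n)?
    return any(pow(a, m, n) == a % n for m in range(2, 2 * n + 1))

checked = [(a, n) for n in range(2, 61) if squarefree(n) for a in range(n)]
assert all(a_repeats(a, n) for a, n in checked)
print("verified:", len(checked), "pairs")
```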
|abstract-algebra|elementary-number-theory|proof-writing|modular-arithmetic|conjectures|
1
what's the pro-cyclic group with a given order?
Cyclic groups are classified according to their orders, but for procyclic groups, (procyclic) means: the group is a profinite group $G$ such that there exists $g \in G$ for which the smallest closed subgroup containing $g$ is $G$ . This is different from the cyclic group case. To classify procyclic groups I have 2 questions: (1) how to prove that a procyclic group is the direct product of its Sylow $p$ -subgroups? (2) why is this definition equivalent to: $G$ is an inverse limit of cyclic groups? Can anyone give me some hints? Or is this a classical result? Thanks.
A pro-cyclic group is isomorphic to a Cartesian product $$ \prod_{p\in \mathbb{P}} \frac{\mathbb{Z}_p}{p^{n(p)}\mathbb{Z}_p}, $$ with $n(p)\in \mathbb{Z}_{\ge 0}\cup \{\infty\}$ . This is a quotient of $\widehat{\mathbb{Z}} = \prod_{p\in \mathbb{P}} \mathbb{Z}_p$ , where for each prime $p$ you pick a pro-cyclic pro- $p$ group. So for each Steinitz number $\prod_{p\in \mathbb{P}} p^{n(p)}$ there is exactly one pro-cyclic group of that order. For (1), as any pro-nilpotent group, a pro-cyclic group is the product of its Sylows. For (2), the definition of pro-cyclic is that it is an inverse limit of finite cyclic groups. Given an inverse limit of finite cyclic groups $G$ , a standard inverse limit argument shows that there exists $g\in G$ such that $G=\overline{\langle g\rangle}$ . On the other hand, if $G=\overline{\langle g\rangle}$ , then $$G\cong \varprojlim_{N\lhd_o G} G/N = \varprojlim_{N\lhd_o G} \langle g \rangle N/N \cong \varprojlim_{N\lhd_o G} \langle g \rangle/(N \cap \langle g \rangle)$$ and each $\langle g \rangle/(N \cap \langle g \rangle)$ is finite cyclic, so $G$ is an inverse limit of finite cyclic groups.
|group-theory|
1
Solve $3a^2 + 3ab + b^2 - p^3 = 0$ using infinite descent or otherwise?
I'm trying to get my head around infinite descent proofs for Diophantine equations and I was trying to apply it to a problem, and as you can see, I am struggling with it. Consider the identity $$3a^2 + 3ab + b^2 - p^3 = 0$$ I also know that $a$ and $b$ are coprime, as are $p$ and $b$ . I think from this I can infer $a$ and $p$ are coprime too. The trivial solutions that I found are $a = b = p = 0$ and $a = 0, b = p = 1$ . But beyond this, I don't think there are further solutions in the positive integers, and I'd like to prove or disprove this. I think infinite descent might work as a general method here, but I'm struggling with it so far. All I've shown is that $p$ must be odd, and $p = 3K +1$ as neither $p$ nor $b$ is divisible by $3,$ and $p^3\pmod 3\equiv 1$ to match $b^2\pmod 3 \equiv 1$ . If anyone knows how to proceed with a proof (or counterproof) of this identity, I would appreciate the pointers!
$$3 \cdot 17^2 + 3 \cdot 17 \cdot 19 + 19^2 - 13^3 = 0$$
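A sketch of a brute-force search confirming this counterexample and looking for other small positive solutions (Python; the prime bound 50 is arbitrary, and $a, b \le p^{3/2}$ follows from $3a^2 \le p^3$ and $b^2 \le p^3$):

```python
# Verify the counterexample and brute-force all solutions in positive integers
# with p prime and p < 50.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

assert 3 * 17**2 + 3 * 17 * 19 + 19**2 == 13**3

solutions = [
    (a, b, p)
    for p in range(2, 50) if is_prime(p)
    for a in range(1, int(p ** 1.5) + 2)
    for b in range(1, int(p ** 1.5) + 2)
    if 3 * a * a + 3 * a * b + b * b == p ** 3
]
print(solutions)
```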
|elementary-number-theory|diophantine-equations|infinite-descent|
1
ch. 8.3 Exercise 1 in Cohomology of Number Fields
Let $k$ be a number field, $S$ a set of places of $k$ , $k_S$ the maximal extension of $k$ unramified outside $S$ , $\mathcal{O}_S$ the subring of $k_S$ with $\nu_{\mathfrak{p}}(\alpha)\geq 0$ for all $\mathfrak{p}\not\in S$ , and $G_{S}$ the Galois group $\text{Gal}(k_S/k)$ . I am considering the following exercise at the end of chapter 8.3 in Cohomology of Number Fields. Show the isomorphism $H^2(G_S,\mathcal{O}_S^\times)\cong Br(k_S/k)$ My attempt was to consider the following exact sequence $$0\longrightarrow k_S^\times \longrightarrow I_{k_S}\longrightarrow C_{k_S}\longrightarrow 0$$ Where $I_{k_S}$ and $C_{k_S}$ are the Ideles and Idele class group of $k_S$ . (Note that if $k_S$ is an infinite extension, these are simply the Galois invariants of the absolute Ideles and Idele class group.) We take the cohomology long exact sequence of this exact sequence and obtain at $H^2$ : $$0\simeq H^1(G_S, C_{k_S})\longrightarrow H^2(G_S,k_S^\times)\simeq Br(k_S/k)\longrightarrow H^2(G_S,I_{k
I sent an email to Alexander Schmidt about this problem. It is indeed incorrect and the body of my post contains two counterexamples. He said he thinks the exercise should have been to prove that $H^2(G_S,k_S^\times)=Br(k_S|k)$ , but notes that this is a special case of Theorem 6.3.4 of the same book. The errata list has now been updated here.
|number-theory|algebraic-number-theory|group-cohomology|class-field-theory|galois-cohomology|
1
Conditions for the equality $IJ = I$ for proper ideals of a domain
$I,J$ are proper ideals in $R$ , an integral domain with unity, and $I$ or $J$ is finitely generated. Can we say $IJ\subsetneq I?$ I would also like to know if we could more generally say something like this for modules over $R$ . The only ideals I could find where the equality $IJ=I$ was achieved were in non-domains or when both $I$ and $J$ are infinitely generated, specifically in the rings $\mathbb{Z}/6\mathbb{Z}$ and $\mathbb{Z} [\{x^{1/2^k}\mid k\in \mathbb{N} \cup \{0\}\}]$ respectively. I can see if $I$ were principal, then my question would be true, as if $I=(a)$ and $IJ=I$ then $a\in aJ$ , but we can cancel $a$ as $R$ is a domain, which would give us $1\in J$ , a contradiction. However I am unsure of how I can extend this to even when $I$ is finitely generated.
Consider the subring $\mathbb{Z} + X\mathbb{Q}[X]$ of the polynomial ring $\mathbb{Q}[X]$ . This is the ring of rational polynomials where the constant term has to be an integer. Let $I$ be the ideal of all elements with constant term $0$ and $J=(2)$ . Both are proper ideals, but $IJ = I$ because everything in $I$ can be halved and still be in $I$ . This shows $J$ need not be infinitely generated at least.
|commutative-algebra|modules|
0
Proving the derivative of an even function is odd using the chain rule
My answer: Suppose that $f:\Bbb R\rightarrow\Bbb R$ is an even function that is differentiable everywhere. If $f$ is an even function, then we have that $f(x)= f(-x)$ . We now take the derivative using the chain rule $$f'(x)= \frac{df}{dx}$$ $$=\frac{d}{dx}[f(x)]$$ $$=\frac{d}{dx}[f(-x)]$$ $$= f'(-x)\cdot (-x)'$$ $$= f'(-x)(-1)$$ $$=-f'(-x)$$ Since $f'(x) = -f'(-x)$ , we have shown that the derivative of an even function that is differentiable everywhere is odd. My question: Why did my TA not accept my solution? I was asked to use the chain rule and did just that. I don't understand what I did wrong.
The answer is correct; if you want to use the chain rule explicitly, you should define $\phi(x)=-x$ , so that $\phi '(x)=-1$ . Now we can apply the chain rule within the proof: $$f'(-x)=f'(\phi(x))=-f'(\phi(x))\phi'(x)=-(f\circ\phi)'(x)=-f'(x),$$ where the last equality is due to the fact that $f(\phi (x))=f(x)$ since $f$ is even, so $(f\circ\phi)'=f'$ .
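A numerical sanity check of the statement with a sample even function (Python; $f(x)=x^4+\cos x$ is my arbitrary choice):

```python
import math

# Central differences approximate f', and the derivative of an even function
# should satisfy f'(-x) = -f'(x) at every point.
def f(x):
    return x ** 4 + math.cos(x)

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.3, 1.0, 2.5):
    assert abs(fprime(-x) + fprime(x)) < 1e-6
print("f' is odd at the sampled points")
```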
|calculus|derivatives|solution-verification|even-and-odd-functions|
1
Edge contraction problem in graphs vs in algebraic topology
I have a really naive question. I was reading this paper on graph theory, which classifies the edge contraction problem as an NP-hard problem, so it cannot be solved in linear time. On the other hand, I've been thinking about turning graphs into simplicial complexes first and then applying the edge contraction on them. While looking for ways to do that, I saw the algorithms and the technique mentioned here, in which the edges of a simplex are contracted in linear run time. I'd say I don't really understand what's going on here. Is the simplicial way of edge contraction a special case of the graph problem, or can we really solve this in linear run time?
From the abstract of the paper you linked: For a property $π$ on graphs, the corresponding edge-contraction problem $P_{EC}(π)$ is defined as follows: Given a graph $G$ , find a set of edges of minimum cardinality whose contraction results in a graph satisfying property $π$ . You can contract any edge, in fact any subset of edges in linear time, but that's completely irrelevant to the edge-contraction problem.
|graph-theory|algebraic-topology|
0
Conditions for the equality $IJ = I$ for proper ideals of a domain
$I,J$ are proper ideals in $R$ , an integral domain with unity, and $I$ or $J$ is finitely generated. Can we say $IJ\subsetneq I?$ I would also like to know if we could more generally say something like this for modules over $R$ . The only ideals I could find where the equality $IJ=I$ was achieved were in non-domains or when both $I$ and $J$ are infinitely generated, specifically in the rings $\mathbb{Z}/6\mathbb{Z}$ and $\mathbb{Z} [\{x^{1/2^k}\mid k\in \mathbb{N} \cup \{0\}\}]$ respectively. I can see if $I$ were principal, then my question would be true, as if $I=(a)$ and $IJ=I$ then $a\in aJ$ , but we can cancel $a$ as $R$ is a domain, which would give us $1\in J$ , a contradiction. However I am unsure of how I can extend this to even when $I$ is finitely generated.
In the case that $I\neq 0$ is finitely generated one always has $IJ\neq I$ : the equation $IJ=I$ , by Nakayama's lemma, implies the existence of some $x\not\in J$ such that $xI=0$ , which is impossible in a domain.
|commutative-algebra|modules|
1
Mistake computing $\sum_{n=1}^{+\infty} \frac{n}{e^{2\pi n}-1} = \frac{1}{24}-\frac{1}{8\pi}$
I recently gave a try to show that $$\sum_{n=1}^{+\infty} \frac{n}{e^{2\pi n}-1}=\frac{1}{24}-\frac{1}{8\pi} $$ without using the Theta function or Mellin transform, but I ended up with twice the result and I can't figure out where my mistake was. For my approach, I need to introduce some Lemmas to help the proof: (1): $\forall z \in \mathbb{R}^*, \frac{\pi z}{\sinh(\pi z)}=\int_{0}^{\infty} \frac{\cos(2zx)}{\cosh^2(x)}dx$ (2): $\forall z \in \mathbb{R} , \sum_{n=1}^{+\infty} \cos(2nz)e^{-n\pi} = \frac{\cos(2z)-e^{-\pi}}{2(\cosh(\pi)-\cos(2z))}$ (3): $\forall z \in \mathbb{R} , \frac{\sinh(\pi)}{4(\cosh(\pi)-\cos(2z))}=\frac{\pi}{2(\pi^2+4z^2)}+\int_{0}^{\infty} \frac{\sin(\pi t)\cosh(2zt)}{e^{2\pi t}-1} dt $ (4): $ \forall z \in \mathbb{R} , \frac{\pi}{\pi^2+4z^2}=\int_{0}^{\infty} \cos(2zu)e^{-\pi u} du $ (5): $ \frac{\pi^2}{6}=\int_{0}^{\infty} \frac{w}{e^w-1}dw$ Here is my approach: $$\begin{align} I&=\sum_{n=1}^{+\infty} \frac{n}{e^{2\pi n}-1}\\ &=\frac{1}{2}\sum_{n=1}^{+\infty} \
The application of Lemma (3) is questionable, as it results in the ill-defined double integral below $$\int_{0}^{\infty}\int_{0}^{\infty}\frac{\sin \pi t}{e^{2\pi t}-1}\frac{\cosh2xt}{\cosh^2x} dx\ dt $$
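The closed form itself checks out numerically; since the terms decay like $e^{-2\pi n}$, a few dozen terms already give full double precision (Python, standard library only):

```python
import math

# Partial sum of n / (e^(2*pi*n) - 1); terms decay like e^(-2*pi*n),
# so 29 terms are far more than enough for double precision.
s = sum(n / (math.exp(2 * math.pi * n) - 1) for n in range(1, 30))
closed_form = 1 / 24 - 1 / (8 * math.pi)
print(s, closed_form)
```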
|integration|power-series|improper-integrals|theta-functions|mellin-transform|
0
How do we get this approximation
Show the following relationship: $\int_{-\infty}^{\infty} \Phi(\lambda x) N(x | \mu, \sigma^2) \, dx = \Phi\left(\frac{\mu}{\sqrt{\lambda^{-2} + \sigma^2}}\right)$ Hint: One way to solve this is to use the definition of $\Phi(x) = P[Z \leq x]$ . I have gotten to the point in Bayesian logistic regression where the integral of the probability distribution is only solvable as this inverse probit. I started off by writing the LHS in full $\int_{-\infty}^{\infty} \left( \int_{-\infty}^{\lambda x} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du \right) \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \, dx$ I expect to do some change of variable to convolve the exponentials but can't quite see what to do next.
Let $Y \sim \mathcal{N}(0,1)$ independent of $X \sim \mathcal{N}(\mu, \sigma^2)$ . Then $$\Pr(Y \leqslant \lambda X) = \int_{-\infty}^{\infty} \Pr(Y \leqslant \lambda x) N(x | \mu, \sigma^2) \, dx = \int_{-\infty}^{\infty} \Phi(\lambda x) N(x | \mu, \sigma^2) \, dx.$$ Now, $Y - \lambda X \sim \mathcal{N}(-\lambda\mu, 1 + \lambda^2\sigma^2)$ has cdf $\Phi\left(\frac{\cdot \, + \lambda\mu}{\sqrt{1 + \lambda^2\sigma^2}}\right)$ , hence $$\Pr(Y \leqslant \lambda X) = \Pr(Y - \lambda X \leqslant 0) = \Phi\left(\frac{\lambda\mu}{\sqrt{1+\lambda^2\sigma^2}}\right),$$ and this is equal to the announced result.
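A numerical check of this identity with Simpson's rule (Python, standard library only; the parameter triples are arbitrary). Note that the form $\Phi\!\left(\lambda\mu/\sqrt{1+\lambda^2\sigma^2}\right)$ also handles $\lambda \le 0$, where the question's $\lambda^{-2}$ form loses the sign of $\lambda$:

```python
import math

def Phi(z):
    # standard normal cdf via erf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Left-hand side via Simpson's rule on [mu - 10*sigma, mu + 10*sigma]
# (the Gaussian tail beyond that is ~1e-22, i.e. negligible).
def lhs(lam, mu, sigma, n=4001):
    a, b = mu - 10 * sigma, mu + 10 * sigma
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    ys = [
        Phi(lam * x)
        * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
        / (sigma * math.sqrt(2 * math.pi))
        for x in xs
    ]
    return h / 3 * (ys[0] + ys[-1] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-2:2]))

def rhs(lam, mu, sigma):
    return Phi(lam * mu / math.sqrt(1 + lam ** 2 * sigma ** 2))

for lam, mu, sigma in [(0.7, 1.3, 2.0), (-1.5, 0.4, 0.8), (2.0, -0.6, 1.1)]:
    assert abs(lhs(lam, mu, sigma) - rhs(lam, mu, sigma)) < 1e-8
print("identity verified numerically")
```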
|statistics|proof-writing|
0
How to understand the differential is a linear map?
I read the following claim in the book, p. 19, Eq. (3.9). For a smooth function $f:\mathcal{E} \rightarrow R$ , where $\mathcal{E}$ is a linear space, $Df(x): \mathcal{E}\rightarrow R$ is the differential of $f$ at $x$ ; that is, it is the linear map defined by: $$Df(x)[v]=\lim\limits_{t\rightarrow 0} \frac{f(x+tv)-f(x)}{t}.$$ Is this the standard definition of the differential? Is the claim " $Df(x)$ is a linear map" inferred from the limit expression?
The differential $Df(x)$ of $f$ at $x$ is linear by definition (whenever it exists). The limit you wrote is not the definition of $Df(x)(v)$ but of the directional derivative of $f$ at $x$ in the direction of $v$ . Bad news: For a given point $x$ , this limit can exist for every vector $v$ without being a linear function of $v$ . It can also be a linear function of $v$ without $f$ being differentiable or even continuous . Good news: If $Df(x)$ exists, then the directional derivative of $f$ at $x$ in the direction of $v$ exists and is equal to $Df(x)(v)$ . It is then de facto a linear function of $v$ .
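The classic example behind the "bad news" can be checked numerically (Python; $f(x,y)=x^2y/(x^2+y^2)$ with $f(0,0)=0$ is the standard textbook choice, not something from the question): every directional derivative at the origin exists, with value $a^2b/(a^2+b^2)$ in direction $(a,b)$, but this is not linear in $(a,b)$.

```python
# f has all directional derivatives at the origin, yet v -> "Df(0)[v]" fails
# to be linear, so f is not differentiable at 0.
def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * x * y / (x * x + y * y)

def dir_deriv(a, b, t=1e-8):
    # difference quotient (f(t*v) - f(0)) / t at a small fixed t
    return (f(t * a, t * b) - f(0.0, 0.0)) / t

d10 = dir_deriv(1.0, 0.0)  # exactly 0
d01 = dir_deriv(0.0, 1.0)  # exactly 0
d11 = dir_deriv(1.0, 1.0)  # 1/2 up to rounding, so additivity in v fails
print(d10, d01, d11)
```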
|calculus|linear-algebra|analysis|derivatives|linear-transformations|
1
General addition theorem for $\operatorname{cas}(x):=\sin(x) + \cos(x)$ | Summation form
I like addition theorems in trigonometry and recently YouTuber Dr Barker posted the video "My New Favourite Trig Function" playing around with following: Define $$ \operatorname{cas}(x) := \cos(x) + \sin(x) $$ Then: $$ \operatorname{cas}(x+y) = \frac{1}{2} ( \operatorname{cas}(x)\operatorname{cas}(y) + \operatorname{cas}(x)\operatorname{cas}(-y) + \operatorname{cas}(-x)\operatorname{cas}(y) - \operatorname{cas}(-x)\operatorname{cas}(-y) ) $$ This is straightforward. What I want is: $$ \operatorname{cas}( \sum_{k=1}^{n} x_k ) = f( \operatorname{cas}(x_{k}), \operatorname{cas}(-x_{k}) )$$ To do so I assume the addition theorems for $ \sin $ and $\cos $ . Let $N := \lbrace 1,2,3,...,n \rbrace $ and $k$ runs from 1 to n. $$ \sin( \sum_{k=1}^{n} x_k ) = \sum_{ X \subseteq N, \left\lvert X \right\rvert \text{ odd}} (-1)^{\frac{\left\lvert X \right\rvert-1}{2}} \prod_{k \notin X} \cos(x_k) \prod_{k \in X} \sin(x_k) $$ $$ \cos( \sum_{k=1}^{n} x_k ) = \sum_{ X \subseteq N, \left\lvert X \right\rvert \text{ even}} (-1)^{\frac{\left\lvert X \right\rvert}{2}} \prod_{k \notin X} \cos(x_k) \prod_{k \in X} \sin(x_k) $$
We know that $$ \newcommand{cas}{{\operatorname{cas}}} \cas(x) = \sqrt2\cos\left(x - \frac\pi4\right) $$ and $$ \cas(-x) = -\sqrt2\sin\left(x - \frac\pi4\right). $$ Given a list of $n$ quantities $x_1, x_2, x_3, \ldots, x_n$ , let $N_1 = \{1,2,3,\ldots,n + 1\}$ and let $x_{n+1} = \dfrac{n\pi}4$ . Then \begin{align} \cas\left( \sum_{k=1}^{n} x_k \right) &= \sqrt2 \cos\left( \left(\sum_{k=1}^{n} x_k\right) - \frac\pi4 \right) \\ &= \sqrt2 \cos\left( \left(\sum_{k=1}^{n} \left(x_k - \frac\pi4\right)\right) + \frac{n\pi}4 - \frac\pi4 \right) \\ &= \sqrt2 \cos\left( \sum_{k=1}^{n+1} \left(x_k - \frac\pi4\right) \right) \\ & = \sqrt2 \sum_{\substack{ X \subseteq N_1 \\ \left\lvert X \right\rvert \text{ even}}}(-1)^{\left\lvert X \right\rvert/2} \prod_{k \notin X} \cos\left(x_k - \frac\pi4\right) \prod_{k \in X} \sin\left(x_k - \frac\pi4\right)\\ & = \sqrt2 \sum_{\substack{ X \subseteq N_1 \\ \left\lvert X \right\rvert \text{ even}}}(-1)^{\left\lvert X \right\rvert/2} \prod_{k \notin X} \frac
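A numerical spot check of the two-term identity from the question and of the $\sqrt2\cos(x-\pi/4)$ rewriting used above (Python; the sample points are arbitrary):

```python
import math

def cas(x):
    return math.cos(x) + math.sin(x)

def cas_sum(x, y):
    # two-term addition theorem from the question
    return 0.5 * (cas(x) * cas(y) + cas(x) * cas(-y)
                  + cas(-x) * cas(y) - cas(-x) * cas(-y))

for x in (0.0, 0.7, 2.1, -1.3):
    # the phase-shift form used in the answer
    assert abs(cas(x) - math.sqrt(2) * math.cos(x - math.pi / 4)) < 1e-12
    for y in (0.5, -0.9, 3.0):
        assert abs(cas(x + y) - cas_sum(x, y)) < 1e-12
print("identities verified")
```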
|trigonometry|
0
The limit of Thomae's function at any a such that 0 < a < 1
I'm currently reading Spivak's Calculus and I'm having trouble understanding the example with the limit of Thomae's function. The function $$ f(x) = \begin{cases} 0 & x \text{ irrational, } 0 < x < 1 \\ \frac{1}{q} & x = \frac{p}{q} \text{ in lowest terms, } 0 < x < 1 \end{cases} $$ approaches $0$ for any $a \in (0, 1)$ . Looking at this picture , I don't get how that could be true. For instance, if $a = \frac{1}{2}$ , then it seems to me that for $\epsilon = \frac{1}{10}$ there's no $\delta$ such that $0 < |x - a| < \delta$ implies $|f(x)| < \epsilon$ . What am I missing here?
Consider $\delta = \frac1{20}$ . Other than $\frac12$ itself, each rational number in the interval $(\frac9{20}, \frac{11}{20})$ has a denominator at least $11$ , so the value of Thomae's function in this interval is at most $\frac1{11}$ . (If you disagree, please say what rational number in the interval has a denominator smaller than $11$ .) In fact we can take $\delta$ as large as $\frac1{18}$ because this is the value that is required to exclude $\frac49$ and $\frac59$ from the interval. Note that for any given $n$ , we can simply list the fractions in (say) $(0, 1)$ whose denominators are less than $n$ . This is a finite set, so there must be an interval around $\frac12$ that will exclude them all.
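The denominator claim is easy to verify exhaustively with exact rational arithmetic (Python; `Fraction` reduces $p/q$ to lowest terms automatically):

```python
from fractions import Fraction

# Apart from 1/2 itself, no rational with denominator smaller than 11
# (in lowest terms) lies in the open interval (9/20, 11/20).
lo, hi = Fraction(9, 20), Fraction(11, 20)
offenders = sorted({
    Fraction(p, q)
    for q in range(1, 11)
    for p in range(q + 1)
    if lo < Fraction(p, q) < hi and Fraction(p, q) != Fraction(1, 2)
})
print(offenders)  # -> []
```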
|real-analysis|calculus|analysis|
1
Cutting the $d$-dimensional torus?
I am curious what the object you obtain is when you cut the $d$ -dimensional flat torus $\mathbb{T}^d :=\left(\mathbb{R} / \mathbb{Z}\right)^d$ along certain hyper-planes. Specifically, if you identify it with $[0,1)^d$ and consider a typical element $x=(x_1,\dots,x_d)$ , I would like to cut it along the planes $x_i=x_j, i,j=1,\dots,d, i\neq j$ . For $d=2$ , one can check by hand that the resulting shape is isometric to $\mathbb{T}_{\sqrt{2}} \times [0,\frac{1}{\sqrt{2}}]$ , where $\mathbb{T}_{\sqrt{2}}= \mathbb{R} /\sqrt{2}\mathbb{Z}$ is the appropriately scaled $1$ -dimensional torus. Is there a general representation of what such a construction looks like in higher dimensions? Edit: Just to clarify, I want to cut the torus along all such planes of which there are $d^2-d$ in dimension $d$ . The final object (which will be a collection of manifolds of the same dimension $d$ ) may not be connected.
I don't know whether this helps but here are some pictures for the 4D case. The 4D torus I'm considering can be parameterized as $$ (a, b, c) \mapsto \left\{ \begin{align*} x &= (R_1 + (R_2 + R_3\cos\,a)\cos\,b)\cos\,c\\ y &= (R_1 + (R_2 + R_3\cos\,a)\cos\,b)\sin\,c\\ z &= (R_2 + R_3\cos\,a)\sin\,b\\ w &= R_3\sin\,a \end{align*}\right. $$ When slicing such a 4D torus in a hyperplane, we obtain shapes like this:
|geometry|euclidean-geometry|
0
Any assured method for finding the derivative of $p$ Euclidean norms?
Assuming both $y$ and $\beta$ are $p \times 1$ vectors, and $W$ is a $p \times p - 2$ matrix, how would one take the first derivative of this: $L(\beta) = || y - \beta||^2_2 + || W\beta||^2_2$ . I'm aware that this essentially means $L(\beta) = \sum_{i = 1}^p (y_i - \beta_i)^2 + || W\beta||^2_2$ . After looking online, I found the gradient identity $\nabla_{\beta}\,||A\beta||^2_2 = 2A^TA\beta$ . While this suffices for my proofs, I fail to understand why it is the case. I do know that this will apply to $|| W\beta||^2_2$ . So my question is how to differentiate $\sum_{i = 1}^p (y_i - \beta_i)^2$ with respect to $\beta$ in a very straightforward and logical way, with fundamentals that I can apply to other $p$ -norms at any time without worrying about identities and such. Many thanks, this has been driving me utterly mad.
Essentially coming back to the definition of the differential $$\|A(y_0+h)\|^2=\|Ay_0\|^2+2\langle Ay_0,Ah\rangle+\|Ah\|^2$$ $$= \|Ay_0\|^2+2\langle A^TAy_0,h\rangle +o(h)$$ Thus the differential of $y\mapsto \|Ay\|^2$ calculated at the point $y_0$ is the linear form $h\mapsto 2\langle A^TAy_0,h\rangle$ or, equivalently, the gradient of $y\mapsto \|Ay\|^2$ calculated at the point $y_0$ is $2A^TAy_0.$
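A finite-difference check of the gradient $2A^TA y_0$ on a small random instance (plain Python; the dimensions and seed are arbitrary):

```python
import random

# Compare the analytic gradient of b -> ||W b||^2, namely 2 W^T W b,
# against central finite differences.
random.seed(0)
p, m = 4, 6
W = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(m)]
beta = [random.uniform(-1, 1) for _ in range(p)]

def L(b):
    return sum(sum(W[i][j] * b[j] for j in range(p)) ** 2 for i in range(m))

def analytic_grad(b):
    Wb = [sum(W[i][j] * b[j] for j in range(p)) for i in range(m)]
    return [2 * sum(W[i][j] * Wb[i] for i in range(m)) for j in range(p)]

h = 1e-6
numeric_grad = []
for j in range(p):
    bp, bm = beta[:], beta[:]
    bp[j] += h
    bm[j] -= h
    numeric_grad.append((L(bp) - L(bm)) / (2 * h))

err = max(abs(a - n) for a, n in zip(analytic_grad(beta), numeric_grad))
print(err)
```

Since $L$ is quadratic in $\beta$, the central difference is exact up to rounding, so the error is tiny.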
|calculus|matrices|derivatives|vectors|partial-derivative|
0
Show that left and right eigenvectors corresponding to the same simple eigenvalue cannot be orthogonal
Let $x, y$ be a right and a left eigenvector corresponding to the same simple eigenvalue (algebraic multiplicity is $1$ ) of a matrix. Show that $x, y$ cannot be orthogonal. In my opinion, if the eigenvalue with algebraic multiplicity is $1$ , that means the power of $(A-\lambda I)$ must be $1$ . Does it mean that the eigenvalues must be different? If the eigenvalues are different, then $x,y$ should be orthogonal. So, how to prove $x,y$ cannot be orthogonal? Thanks a lot.
Another approach to the question: if $\lambda$ is a simple eigenvalue, meaning it has algebraic (and thus geometric) multiplicity $1$ , then there's an invertible $P$ such that $A=P(\lambda\oplus A')P^{-1}$ for some $A'$ . This is just a restatement of the existence of a right eigenvector $v=Pe_1$ such that $Av=\lambda v$ . Then, the left eigenvector corresponding to $\lambda$ is $u^T=e_1^T P^{-1}$ , which gives you $u^T A=\lambda u^T$ . It follows that $u^T v=e_1^T P^{-1} Pe_1=1$ , i.e. the vectors aren't orthogonal. Being the eigenspaces one-dimensional, it's clear that any other choice of such eigenvectors just gives scalar multiples and doesn't change the result. Example with diagonalisable non-degenerate $2\times 2$ matrix Consider $A=\begin{pmatrix}-1&-2\\1&2\end{pmatrix}$ , whose eigenvalues are $\lambda_1=0$ and $\lambda_2=1$ , and we can write it as $$A = \underbrace{\begin{pmatrix}-1&-2 \\ 1 &1\end{pmatrix}}_{\equiv P} \begin{pmatrix}1&0\\0&0\end{pmatrix} \underbrace{\begin{p
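The $2\times 2$ example above can be checked directly in plain Python, with hand-computed eigenvectors (for $\lambda=1$: $v=(1,-1)^T$, $u^T=(1,2)$; for $\lambda=0$: $v_0=(2,-1)^T$, $u_0^T=(1,1)$):

```python
# The answer's 2x2 matrix: simple eigenvalues 0 and 1.
A = [[-1, -2],
     [1, 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def vecmat(u, M):
    return [sum(u[i] * M[i][j] for i in range(2)) for j in range(2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# eigenvalue 1: right eigenvector v, left eigenvector u
v, u = [1, -1], [1, 2]
assert matvec(A, v) == v and vecmat(u, A) == u
assert dot(u, v) == -1  # not orthogonal

# eigenvalue 0
v0, u0 = [2, -1], [1, 1]
assert matvec(A, v0) == [0, 0] and vecmat(u0, A) == [0, 0]
assert dot(u0, v0) == 1  # not orthogonal
print("non-orthogonality confirmed for both simple eigenvalues")
```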
|eigenvalues-eigenvectors|orthogonality|
0
Find the green area
How to find the green area My attempt Each area is $\pi r_n^2=\dfrac{4\pi}{\left(n^2+2\right)^2}$ . We can approximate $\sum_1^\infty \dfrac{4\pi}{\left(n^2+2\right)^2}$ to be $\int_1^\infty \dfrac{4\pi}{\left(n^2+2\right)^2}dn$ . Let us evaluate the indefinite integral $\int\dfrac{4\pi}{\left(n^2+2\right)^2}=4\pi\int\dfrac1{\left(n^2+2\right)^2}dn$ first. We use a trig sub - let $n=\sqrt2\tan\theta\implies dn=\sqrt2\sec^2\theta~d\theta$ . We end up with: \begin{align*} \int\dfrac1{\left(2\tan^2\theta+2\right)^2}~dn&=\int\dfrac1{4\left(\sec^2\theta\right)^2}~dn \\ &=\dfrac14\int\dfrac1{\sec^4\theta}\sqrt2\sec^2\theta~d\theta \\ &=\dfrac1{2\sqrt2}\int\dfrac1{\sec^2\theta}~d\theta \\ &=\dfrac1{2\sqrt2}\int\cos^2\theta~d\theta \\ &=\dfrac1{4\sqrt2}\int1+\cos(2\theta)~d\theta \\ &=\dfrac1{4\sqrt2}\left(\theta+\dfrac{\sin(2\theta)}2\right) \\ &=\dfrac1{8\sqrt2}(2\theta+\sin(2\theta)) \\ \theta&=\tan^{-1}\left(\dfrac n{\sqrt2}\right) \\ I&=\dfrac1{8\sqrt2}\left(2\tan^{-1}\left(\dfrac n{\sqrt
This is a special case of the Pappus chain, your equation for the area of the nth circle in the chain is correct, however this is not an integration problem but a summation one. The area is simply $\pi+8\pi\sum_{n=1}^\infty{1\over(n^2+2)^2}\approx6.99796$
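A quick partial-sum computation agrees with the quoted value (Python; $N=10^4$ is an arbitrary cutoff leaving a tail below $1/(3N^3)\approx 3\times 10^{-13}$):

```python
import math

# Partial sums of pi + 8*pi * sum 1/(n^2+2)^2; the terms decay like n^-4.
N = 10_000
s = sum(1 / (n * n + 2) ** 2 for n in range(1, N + 1))
area = math.pi + 8 * math.pi * s
print(area)  # ~ 6.99796
```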
|geometry|solution-verification|problem-solving|area|
0
Cellular homology of $X\times S^n$
I want to compute the cellular homology of $X\times S^n$ where $X$ is a CW-complex. My current observation is that the $k$ -th cellular chain group will be generated by cells $\{e^k_i\times e^0, e^{k-n}_j\times e^n\}$ where the upper index is the dimension of the cell and the lower index just indexes the cells in that dimension. Intuitively, I think this structure will give the cellular homology group $H_k(X\times S^n)=H_k(X)\oplus H_{k-n}(X)$ , but I'm not sure how to argue this rigorously. That being said, I want to know why we can decompose those generators into two parts, use each part to generate a homology group on its own, and then take the direct sum to get the result. I'd appreciate any insight you can provide, thanks!
This is false even if $n\leq\mathrm{dim}X$ , e.g. $H_2(S^3 \times S^1) = \{ 0 \}$ , whereas your formula would give $H_2(S^3 \times S^1) \cong H_1(S^1) \cong \mathbb{Z}$ .
|algebraic-topology|homology-cohomology|products|spheres|cw-complexes|
0
Is the exponentially weighted moving average (EWMA) a Pearson mean?
The Pearson mean is a weighted mean defined as $$m_t = \frac {\sum_{i=0}^t w_ix_i} {\sum_{i=0}^t w_i}$$ The exponentially weighted moving average (EWMA) is the following recurrence, where $0 < \alpha < 1$ : $$m_t = (1-\alpha)m_{t-1} + \alpha x_t = m_{t-1} + \alpha(x_t - m_{t-1})$$ Is this recurrence a Pearson mean with an exponentially decaying weight? If we develop the EWMA, we have $$m_t = \sum_{i=0}^t \alpha (1-\alpha)^i x_{t-i}$$ But it seems that the denominator is missing.
Yes, it is. The easiest way to think of this is that in the limit you have $w_i = \alpha (1-\alpha)^i$ . The denominator then is $$\sum_{i=0}^\infty w_i = \sum_{i=0}^\infty \alpha (1-\alpha)^i$$ , and summing the geometric series gives $$\alpha \times {1 \over 1 - (1-\alpha)} = 1$$ so the "missing" denominator is in fact 1. But of course you don't have infinite data in practice. Let $m_0 = x_0$ - that is, your initial value for the EWMA is just the initial value of the time series. Then you have $$m_1 = m_0 + \alpha (x_1 - m_0) = \alpha x_1 + (1-\alpha) m_0 = \alpha x_1 + (1 - \alpha) x_0$$ $$m_2 = m_1 + \alpha (x_2 - m_1) = \alpha x_2 + (1-\alpha) m_1 = \alpha x_2 + (1-\alpha) (\alpha x_1 + (1-\alpha) x_0) = \alpha x_2 + \alpha (1-\alpha) x_1 + (1-\alpha)^2 x_0$$ and so on for larger values of $t$ . If you continue to develop this you find that $$m_t = \left( \sum_{i=0}^{t-1} \alpha (1-\alpha)^i x_{t-i} \right) + (1-\alpha)^t x_0$$ and again you can sum the geometric series to see that these weights sum to exactly $1$ .
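A quick check of the finite-data formula against the recurrence (Python; the data and $\alpha$ are arbitrary):

```python
import random

# Compare the EWMA recurrence with the developed weighted-sum form,
# including the initial-value term (1 - alpha)^t * x_0 that makes the
# weights sum to exactly 1.
random.seed(1)
alpha = 0.3
xs = [random.uniform(0, 10) for _ in range(50)]

m = xs[0]                  # m_0 = x_0
for x in xs[1:]:
    m += alpha * (x - m)   # the EWMA recurrence

t = len(xs) - 1
developed = sum(alpha * (1 - alpha) ** i * xs[t - i] for i in range(t)) \
            + (1 - alpha) ** t * xs[0]
weights = [alpha * (1 - alpha) ** i for i in range(t)] + [(1 - alpha) ** t]

print(abs(m - developed), abs(sum(weights) - 1.0))
```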
|statistics|
1
Solving for Y, Linear and Conic Convergence
In casual mathematics solving for x, y, z, etc... is usually pretty simple. With the very seemingly basic math that I know, I come across some equations that I cannot figure out myself. In this specific case, I am trying to solve for $y$ . $$ -\sqrt{-\left(\left(y-\left(W_{heely2}\right)\right)^{2}-\left(F_{orks}\frac{1}{25.4}\right)^{2}\right)}=\frac{y-\left(W_{heely2}\right)}{\tan\left(-\frac{H_{eadtubeangle}}{\frac{180}{\pi}}\right)} $$ In my previous question, subscripts were somewhat confusing without an explanation, but to be clear, the subscript doesn't change anything.... so for example $W_{heely2}$ in mathematical terms is just a variable like $x$ or $y$ . This is the same for $F_{orks}$ , $W_{heelx2}$ , and $H_{eadtubeangle}$ . This equation is a solution to where a line intersects a circle (might be called a conic section, I hope this is right) outputting a $y$ value. I tried solving for $y$ but I kept getting stuck with $y$ being on both sides, and every time I tried to mov
$$y=\frac{W_{heely2}\csc\left(-\frac{H_{eadtubeangle}}{\frac{180}{\pi}}\right)-F_{orks}\frac{1}{25.4}}{\csc\left(-\frac{H_{eadtubeangle}}{\frac{180}{\pi}}\right)}$$
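Plugging this back into the original equation with sample numbers (Python; the values for $W_{heely2}$, $F_{orks}$ and $H_{eadtubeangle}$ are made up for the check, and the chosen negative square-root branch requires $\cos\theta \ge 0$):

```python
import math

# Substitute the claimed solution y = Wheely2 - (Forks/25.4)*sin(theta)
# back into the line-circle equation and compare both sides.
Wheely2, Forks, Headtubeangle = 10.0, 5.0 * 25.4, 70.0  # hypothetical numbers
theta = -math.radians(Headtubeangle)                    # -H / (180/pi)
F = Forks / 25.4
csc = 1 / math.sin(theta)

y = (Wheely2 * csc - F) / csc   # the posted formula, equal to Wheely2 - F*sin(theta)
u = y - Wheely2

lhs = -math.sqrt(-(u ** 2 - F ** 2))   # left side of the original equation
rhs = u / math.tan(theta)              # right side
print(lhs, rhs)
```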
|algebra-precalculus|functions|
0
Prove that $\alpha(r+s+1,r)=\alpha(r+s+1,s)$
I am reading a paper and I am having difficulty understanding this part, which is why I posted it on this forum to discuss with you. Let me recall some definitions: Let $\pi = \left(a_1,a_2,\ldots a_n\right)$ and $\rho=\left(b_1, b_2, \ldots, b_n\right)$ be two permutations of $\mathbb{Z}_n$ . We call a pair of consecutive elements $a_i, a_{i+1}$ in a permutation a rise if $a_i < a_{i+1}$ and a fall if $a_i > a_{i+1}$ . There will be $4$ possibilities denoted by RR, FF, RF, and FR if we consider whether each pair $a_i, a_{i+1}: b_{i}, b_{i+1}$ forms a rise or a fall. Define $\alpha(n, k)$ as the count of pairs of permutations of length $n$ with exactly $k$ occurrences of either RF or FR. On page 229, they wrote that To get a more symmetrical version of (3.8) we define (3.11) $\quad \alpha^*(r, s)=\alpha(r+s+1, r)=\alpha(r+s+1, s)=a^*(s, r)$ , But I don't understand why $\alpha(r+s+1,r)=\alpha(r+s+1,s)$ . I tried in the following way: For a permutation of length $r+s+1$ , we can select $r$ pairs out of $r+s+1
Let $A_{r,s}$ be the set of pairs of permutations of length $r+s+1$ with exactly $r$ instances of RF or FR. This means $|A_{r,s}|=\alpha(r+s+1,r)$ , and it means $|A_{s,r}|=\alpha(r+s+1,s)$ . To prove that $\alpha(r+s+1,r)=\alpha(r+s+1,s)$ , it suffices to find a bijection between $A_{r,s}$ and $A_{s,r}$ . Here is the bijection. Given a pair of permutations $(\pi, \rho)\in A_{r,s}$ , the corresponding pair of permutations is $$ (\pi^\text{c},\rho)\in A_{s,r} $$ Here, $\pi^\text{c}$ is the complement of $\pi$ . If $\pi=(\pi_1,\pi_2,\dots,\pi_{r+s+1})$ in one-line notation, with entries in $\{0,1,\dots,r+s\}$ , then $\pi^\text{c}=(r+s-\pi_1,r+s-\pi_2,\dots,r+s-\pi_{r+s+1})$ . This works because complementing $\pi$ converts every rise of $\pi$ into a fall at the same position and every fall into a rise, while leaving $\rho$ untouched. Hence each of the $r+s$ consecutive positions switches between $\{$RF, FR$\}$ and $\{$RR, FF$\}$, so a pair with exactly $r$ occurrences of RF or FR becomes a pair with exactly $(r+s)-r=s$ of them.
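The identity, and the position-wise rise/fall flip behind the bijection, can be checked by brute force for small $n$. The sketch below uses the complement map $\pi_i \mapsto (n-1)-\pi_i$, which flips every rise into a fall at the same position; $n=4$ is an arbitrary choice.

```python
from itertools import permutations

def mixed(pi, rho):
    # number of positions where exactly one of pi, rho rises (i.e. RF or FR pairs)
    return sum((pi[i] < pi[i + 1]) != (rho[i] < rho[i + 1]) for i in range(len(pi) - 1))

def alpha(n, k):
    # pairs of permutations of Z_n with exactly k occurrences of RF or FR
    perms = list(permutations(range(n)))
    return sum(1 for pi in perms for rho in perms if mixed(pi, rho) == k)

n = 4                                   # so r + s = n - 1 = 3
symmetric = all(alpha(n, r) == alpha(n, n - 1 - r) for r in range(n))

# the complement flips every rise/fall of pi at the same position,
# so the mixed-pair counts of (pi, rho) and (comp(pi), rho) sum to n - 1
comp = lambda pi: tuple(len(pi) - 1 - a for a in pi)
pi, rho = (0, 2, 1, 3), (1, 0, 3, 2)
flip_ok = mixed(pi, rho) + mixed(comp(pi), rho) == n - 1
```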
|combinatorics|permutations|
1
Question on universal property of Clifford algebra.
I have a short question regarding the universal property of the Clifford algebra. Suppose $(V,q)$ is a quadratic $\mathbb{F}$ -vector space and $(\mathrm{Cl}(V,q),j)$ is the Clifford algebra. Recall that $\mathrm{Cl}(V,q)$ is an associative and unital algebra and $j:V\to \mathrm{Cl}(V,q)$ is a linear map such that $j(v)^{2}=-q(v)1_{\mathrm{Cl}(V,q)}$ . Then, the universal property is usually stated as follows: For any other pair $(A,k)$ consisting of a unital associative algebra $A$ and a linear map $k:V\to A$ such that $k(v)^{2}=-q(v)1_{A}$ , there exists a unique algebra homomorphism $\psi:\mathrm{Cl}(V,q)\to A$ such that $\psi\circ j=k$ . Now to my question: Is the unique algebra homomorphism $\psi$ required to be unital ? It seems a bit more natural to me, since the category we are working with is the one of unital associative algebras. However, in all the resources I saw (and I checked many of them), it was never written that the unique algebra homomorphism in the universal property is required to be unital.
Yes, it is implicitly understood that we are working with morphisms of unital associative algebras. If we drop the "unital" condition then things break. In the case of $V = \{0\}$ , the maps $j$ and $k$ are necessarily the unique maps that take $0$ to $0$ . If we require $\psi$ to be unital then $\mathrm{Cl}(V,q) \cong \mathbb F$ . If we don't require it to be unital then this cannot hold, because we could take either $\psi(1) = 1$ or $\psi(1) = 0$ , so $\psi$ would not be unique; instead we would be forced to $\mathrm{Cl}(V,q) = \{0\}$ . But recall that we have a nice theorem saying $\dim\mathrm{Cl}(V,q) = 2^{\dim V}$ ; the above shows that this theorem fails if $\psi$ is not required to be unital.
|linear-algebra|abstract-algebra|quadratic-forms|clifford-algebras|universal-property|
1
Solve $3a^2 + 3ab + b^2 - p^3 = 0$ using infinite descent or otherwise?
I'm trying to get my head around infinite descent proofs for Diophantine equations and I was trying to apply it to a problem, and as you can see, I am struggling with it. Consider the identity $$3a^2 + 3ab + b^2 - p^3 = 0$$ I also know that $a$ and $b$ are coprime, as are $p$ and $b$ . I think from this I can infer $a$ and $p$ are coprime too. The trivial solutions that I found are $a = b = p = 0$ and $a = 0, b = p = 1$ . But beyond this, I don't think there are further solutions in the positive integers, and I'd like to prove or disprove this. I think infinite descent might work as a general method here, but I'm struggling with it so far. All I've shown is that $p$ must be odd, and $p = 3K +1$ as neither $p$ nor $b$ is divisible by $3,$ and $p^3\pmod 3\equiv 1$ to match $b^2\pmod 3 \equiv 1$ . If anyone knows how to proceed with a proof (or counterproof) of this identity, I would appreciate the pointers!
taking $$ p = x^2 + 3xy + 3 y^2 $$ with $\gcd(x,y) = 1,$ let us also take $x \not\equiv 0 \pmod 3;$ then $$ b = x^3 - 9 x y^2 - 9 y^3 \; , \; \; $$ $$ a = 3 x^2 y + 9 x y^2 + 6 y^3 \; . \; \; $$ After which $$b^2 + 3 ba + 3 a^2 = p^3 $$ and we should now fiddle with the gcd's. The method for finding the above parametrization is Gauss composition, using Dirichlet's method for $x^2 + 3xy + 3 y^2.$ In 3.5 part (b) take $a=a'=1, B=C=3.$ We are cubing the form, so two composition steps are needed. These are pages 66-67 in the first edition of Cox, Primes of the Form $x^2 + n y^2.$ This is given first in Proposition 3.8 on page 49, but there is a typo there in the definition of capital $X;$ it is correct in the second edition. Here are some numbers: the first section is with $p$ a prime or prime power (such as $49$); the second section needs larger $x,y$ to get $p$ divisible by at least two primes.
|elementary-number-theory|diophantine-equations|infinite-descent|
0
Chain rule of the derivative on smooth manifold
Let $M$ be a smooth manifold. We will use the following definitions and results. Consider a local chart $\,x=\big(x_1,\cdots,x_n\big):\,U\subseteq M\longrightarrow\mathbb R^n\,$ at a point $p\in U$ . The partial derivative operator at $p$ is \begin{align*} \frac{\partial}{\partial x_i}\bigg|_p:\ C^\infty(p)\,&\longrightarrow\,\mathbb R \newline f \, &\longmapsto \,\frac{\partial}{\partial x_i}\bigg|_p(f)\,=\,\frac{\partial\big(f\circ x^{-1}\big)}{\partial x_i}\big(x(p)\big)\,=:\,\frac{\partial f}{\partial x_i}(p). \end{align*} The tangent space at $p$ of $M$ is \begin{align*} \operatorname{span}\left\{\frac{\partial}{\partial x_1}\bigg|_p,\,\cdots,\,\frac{\partial}{\partial x_n}\bigg|_p\right\} \,=:\,TM_p. \end{align*} For any smooth curve $\,\alpha:\,I\longrightarrow M$ , the velocity vector of $\alpha$ at $t_0\in I$ is a map \begin{align*} \alpha'(t_0):\ C^\infty(p)\,&\longrightarrow\,\mathbb R \newline f \, &\longmapsto \, \alpha'(t_0)[f]\,=\,\frac{d\big(f\circ\alpha\big)}{dt}(t_0). \end{align*} For any smooth curve $\,\alpha:\,I\longrightarrow M\,$ and local chart $\,x:\,U\longrightarrow\mathbb R^n$
I think a more illuminating definition would be the following. Let $M$ and $N$ be smooth manifolds and let $\varphi:M\rightarrow N$ be a map between them. If I want the push-forward of some vector $v\in T_{p}M$ with respect to the map $\varphi$ I would write; $$d\varphi_{p}(v):=v(-\circ\varphi)|_{p}$$ By inspection, one can see that this is a vector that takes some function $f\in C^{\infty}(N)$ and returns an element of $\mathbb{R}.$ Therefore, $d\varphi_{p}(v)\in T_{\varphi(p)}N$ as expected. Additionally, we can produce the explicit formula for $d\varphi_{p}$ as follows; $$d\varphi_{p}(v)(f):=v(f\circ\varphi)|_{p}=\sum_{i}v^i\frac{\partial}{\partial x^i}(f\circ\varphi)\big|_{p}$$ $$=\sum_{i}(x^i\circ\alpha)'|_{t_0}\cdot\frac{\partial}{\partial x^i}(f\circ\varphi)\big|_{p}$$ If you now want to express this in terms of the chart components on $N$ we have; $$=\sum_{i}(x^i\circ\alpha)'|_{t_0}\cdot\frac{\partial}{\partial x^i}(f\circ y^{-1}\circ y\circ\varphi)\big|_{p}$$ $$=\sum_{i}(x^i\circ\
|derivatives|differential-topology|smooth-manifolds|
1
Can we find an explicit formula for $r$ and $s$ as a function of $k$
Define $$g=\gcd\big(2^{k+3}-1,\,-2\times(2^{k+1}-1)\big).$$ Then we know that there exist some integers $r$ and $s$ such that $$g=r\times(2^{k+3}-1)-(2\times s)(2^{k+1}-1).$$ Then my question is: Can we find an explicit formula for $r$ and $s$ as a function of $k$ ? I remark that if $k$ is odd then $g=3$ and if $k$ is even, then $g=1$ . So, the problem is reduced to how one can represent $1$ or $3$ in the above form.
Just looking to cancel some high powers of $2$ , we start with the observation that $$ (2^{k+3}-1) + 2\bigl( -2(2^{k+1}-1) \bigr) = 3. $$ This already gives the answer $(r,s)=(1,2)$ when $k$ is odd. Otherwise, we just want to subtract $3$ many times from $2^{k+3}-1$ until the answer is $1$ . Of course $$ 2^{k+3}-1 = 3\frac{(2^{k+3}-1)-1}3 + 1 $$ (where the fraction is an integer when $k$ is even). Therefore \begin{align*} 1 &= (2^{k+3}-1) - \frac{2^{k+3}-2}3\cdot3 \\ &= (2^{k+3}-1) - \frac{2^{k+3}-2}3 \Bigl( (2^{k+3}-1) + 2\bigl( -2(2^{k+1}-1) \bigr) \Bigr) \\ &= - \frac{2^{k+3}-5}3 (2^{k+3}-1) - \frac{2^{k+4}-4}3 \bigl( -2(2^{k+1}-1) \bigr), \end{align*} so that the answer is $(r,s) = \biggl( - \dfrac{2^{k+3}-5}3, - \dfrac{2^{k+4}-4}3 \biggr)$ when $k$ is even. (Of course there are many answers in both cases, but these are the ones with the smallest numbers.) Note that the above computations are precisely the usual Euclidean algorithm! It just looks funnier when the numbers are symbolic.
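These formulas are easy to machine-check; a small script (my own, for verification only) confirms both the gcd values and the Bézout representation for a range of $k$:

```python
from math import gcd

def certificate(k):
    # returns (g, r, s) with g = r*(2^{k+3} - 1) - 2*s*(2^{k+1} - 1), per the formulas above
    if k % 2 == 1:
        return 3, 1, 2
    return 1, -(2**(k + 3) - 5) // 3, -(2**(k + 4) - 4) // 3

ok = True
for k in range(1, 60):
    A, B = 2**(k + 3) - 1, 2**(k + 1) - 1
    g, r, s = certificate(k)
    ok = ok and g == gcd(A, 2 * B)        # g really is gcd(2^{k+3}-1, -2(2^{k+1}-1))
    ok = ok and r * A - 2 * s * B == g    # the Bezout representation holds
```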
|divisibility|gcd-and-lcm|
0
Integrating $e^{-(x^2+y^2+z^2)/a^2}$ over $\mathbb{R}^3$
I want to compute the following integral $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz $$ and I thought that using spherical coordinates could make it easier, since $r^2=x^2+y^2+z^2$ . With this in mind, I tried $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz=\int_0^\infty\int_0^{2\pi}\int_0^\pi r^2\sin \theta \,d\theta d\phi dr=4\pi\int_0^\infty r^2e^{-r^2/a^2}\,dr $$ but I am stuck. Any ideas? Should I simply try to integrate it in cartesian coordinates?
This integral factors as $$\left(\int_\mathbb{R}e^{-x^2/a^2}dx\right)^3$$ The integral in the parentheses is well-known, and is called a Gaussian integral . Here it is explained many methods of how to calculate it. A particularly nice way to calculate it is in fact to use polar coordinates to calculate the integral $$\int_{\mathbb{R}^2}e^{-(x^2+y^2)/a^2}dx dy$$ first. This two dimensional integral evaluates to $\pi a^2$ , and it is the square of the Gaussian integral. Therefore, the desired integral is $\pi^{3/2} a^3$ .
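For readers who want to confirm the factorization numerically, here is a short sketch (a midpoint-rule quadrature; the value of `a`, the cutoff, and the grid size are arbitrary choices):

```python
import math

def gauss_1d(a, L=10.0, n=200_000):
    # midpoint rule for the integral of e^{-x^2/a^2} over [-L, L];
    # the tails beyond +-L are negligible when L >> a
    h = 2 * L / n
    return h * sum(math.exp(-((-L + (i + 0.5) * h) ** 2) / a ** 2) for i in range(n))

a = 1.3                                  # arbitrary positive scale
triple = gauss_1d(a) ** 3                # the 3-D integral is the cube of the 1-D one
exact = math.pi ** 1.5 * a ** 3          # claimed closed form pi^{3/2} a^3
```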
|integration|multivariable-calculus|spherical-coordinates|
0
Special form of 3 vertex-connectedness for Graphs with every edge contained in a perfect matching
I am currently struggling with the following problem: Given a simple, connected Graph $G = (V,E)$ such that every edge is contained in a perfect matching of $G$ . Show that for each edge $e \in E$ (of the form $e = \{ v,w\}$ ), the graph $G[V \setminus \{v,w\}]$ is connected. The above is quite similar to this however in the linked question we only show 2 connectedness which seems way easier. I have tried using the same technique of using the parity in the components (of the supposed not-connected Graph $G [V \setminus \{v,w\}]$ ), however I am struggling with the fact, that $v$ (and $w$ with the same reasoning) could be connected with the component $w$ "belongs" to.
You're struggling because the claim is false. The $2\times 3$ grid graph *----*----* | | | | | | *----*----* has the property you want: for every edge, there is some perfect matching that contains it. However, if you delete the endpoints of the middle edge, you're left with $2$ components.
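A quick brute-force check of this counterexample (the vertex labels and matching enumeration are my own bookkeeping):

```python
from itertools import combinations

# 2x3 grid graph: vertices 0 1 2 on top, 3 4 5 below; (1, 4) is the middle edge
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]

# a perfect matching is a set of 3 edges covering all 6 vertices
pms = [trio for trio in combinations(edges, 3)
       if len({v for e in trio for v in e}) == 6]
every_edge_in_some_pm = all(any(e in pm for pm in pms) for e in edges)

# delete the endpoints of the middle edge and count connected components
verts = {0, 2, 3, 5}
remaining = [e for e in edges if 1 not in e and 4 not in e]

def n_components(verts, edges):
    adj = {v: [] for v in verts}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen, count = set(), 0
    for v in verts:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

components = n_components(verts, remaining)
```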
|graph-theory|matching-theory|graph-connectivity|
1
Is there an inequality on the maximum of subgaussian random variables of different variance factors?
Let $v_1,\dots,v_N$ be positive real numbers. I would like a (preferably sharp up to a universal constant) bound on $\mathbb{E}(\max_i Z_i)$ , where $Z_1,\dots,Z_N$ are real-valued random variables such that for every $i=1,\dots,N$, the logarithm of the moment-generating function of $Z_i$ satisfies $\psi_{Z_i}(\lambda)\leq\lambda^2 v_i/2$ for all $\lambda>0$. The following is a statement from Concentration Inequalities: A Nonasymptotic Theory of Independence (Stéphane Boucheron, Gábor Lugosi, Pascal Massart) and is similar to what I am looking for, but assumes that every random variable has the same variance factor: Let $Z_1,\dots,Z_N$ be real-valued random variables for which some $v>0$ exists such that for every $i=1,\dots,N$, the logarithm of the moment-generating function of $Z_i$ satisfies $\psi_{Z_i}(\lambda)\leq\lambda^2 v/2$ for all $\lambda>0$. Then $\mathbb{E}(\max_i Z_i) \leq \sqrt{2v \log(N)}$.
Check out Exercise 2.5.10 of Vershynin, "High-Dimensional Probability". I believe this is what you are looking for. To summarize the result, let $\Vert \cdot \|_\psi$ be the "sub-Gaussian norm" or "Orlicz norm", which takes the form $$ \Vert X\|_\psi = \inf_{t>0} \ t \ \ \text{subject to} \ \ \mathsf{E}\left\{ \exp\left(\dfrac{X^2}{t^2}\right) \right\} \leq 2. $$ This norm is essentially how Vershynin discusses the parameters (or as you say, variance factors) of sub-Gaussian random variables; check Definition 2.5.6. to see his treatment and how it relates to other characterizations of sub-Gaussian variables. Now, for your problem, define $ K := \max_{1 \leq i \leq N} \Vert Z_i \Vert_\psi$ . You can show (see the aforementioned Exercise and the below discussions) that for every $N\geq 2$ we have $$ \mathsf{E}\left\{ \max_{1 \leq i \leq N} |Z_i| \right\} \leq CK\sqrt{\log(N)}, $$ where $C$ is a universal constant factor. See the below discussions you may find useful: Upper bound of expec
|probability|statistics|probability-distributions|
0
Proving Associativity of the Sum in a Space of Infinite Sequences with Non-Zero Initial Element
Consider the set $V$ consisting of all infinite sequences $a = (a_0, a_1, \ldots)$ where each $a_i \in \mathbb{R}$ and $a_0 \neq 0$ . How can we demonstrate that the operation $(a + b) + c = a + (b + c)$ is associative for all $a, b, c \in V$ ? $a + b = (a_0b_0, a_0b_1 + a_1b_0, \ldots)$ and it is equal to: $(a + b)_j = \sum_{i=0}^{j}a_ib_{j−i}$ I attempted to prove this by expanding both sides into sums, but I suspect this method might not be the most efficient or revealing. Is there a more innovative or insightful approach to addressing this problem? I would greatly appreciate any guidance or alternative methods for proving this property. Thank you!
Consider each sequence as the coefficients of a formal power series with non-zero constant terms. The "sum" operation you're working is simply multiplication of two such series, so associativity of the sum follows from associativity of ordinary multiplication of power series.
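This can be sanity-checked numerically: the truncated convolution below plays the role of the question's "sum", and associativity holds up to floating-point error for random inputs (the sequence length and value range are arbitrary; the initial elements are positive, matching the $a_0 \neq 0$ condition).

```python
import random

def conv(a, b):
    # the "sum" from the question: (a * b)_j = sum_{i<=j} a_i b_{j-i}, truncated to len(a) terms
    n = len(a)
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(n)]

random.seed(0)
a, b, c = ([random.uniform(1, 2) for _ in range(6)] for _ in range(3))
lhs = conv(conv(a, b), c)   # (a * b) * c
rhs = conv(a, conv(b, c))   # a * (b * c)
```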
|linear-algebra|abstract-algebra|sequences-and-series|numerical-linear-algebra|associativity|
1
Integrating $e^{-(x^2+y^2+z^2)/a^2}$ over $\mathbb{R}^3$
I want to compute the following integral $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz $$ and I thought that using spherical coordinates could make it easier, since $r^2=x^2+y^2+z^2$ . With this in mind, I tried $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz=\int_0^\infty\int_0^{2\pi}\int_0^\pi r^2\sin \theta \,d\theta d\phi dr=4\pi\int_0^\infty r^2e^{-r^2/a^2}\,dr $$ but I am stuck. Any ideas? Should I simply try to integrate it in cartesian coordinates?
If you really want to use spherical coodinates, then you can proceed as follows. $$\begin{align} I&=\int_{\mathbb{R^3}}e^{-(x^2+y^2+z^2)/a^2}\,dx\,dy\,dz=\int_0^{2\pi}\int_0^\pi \int_0^\infty e^{-(r/a)^2}\,r^2\,\sin(\theta)\,dr\,d\theta\,d\phi\\\\ &=4\pi \int_0^\infty r^2 e^{-(r/a)^2}\,dr\\\\ &=4\pi a^3 \int_0^\infty x^2 e^{-x^2}\,dx\tag1 \end{align}$$ Now, integrating by parts with $u=x$ and $v=-\frac12 e^{-x^2}$ , we find that $$\begin{align} \int_0^\infty x^2 e^{-x^2}\,dx&=\frac12 \int_0^\infty e^{-x^2}\,dx\\\\ &=\sqrt\pi/4 \end{align}$$ Hence, we find that $$I=\pi^{3/2}a^3$$
|integration|multivariable-calculus|spherical-coordinates|
1
Showing a certain relation for the partial derivatives of two functions
I'm currently working on a physics exercise where I'm told: Given two functions $f$ and $g$ which are both continuously differentiable and depend on $x$ and $y$ . Show that the following relation holds $$ \left( \frac{\partial f}{\partial x} \right)_g =\left( \frac{\partial f}{\partial x} \right)_y - \left( \frac{\partial f}{\partial y} \right)_x \left( \frac{\partial g}{\partial x} \right)_y \left( \frac{\partial g}{\partial y} \right)_x^{-1}. $$ My first approach to this problem was that I build the total derivatives of $f$ and $g$ , namely $$ d f = \left( \frac{\partial f}{\partial x} \right)_y d x + \left( \frac{\partial f}{\partial y} \right)_x d y $$ and $$ d g = \left( \frac{\partial g}{\partial x} \right)_y d x + \left( \frac{\partial g}{\partial y} \right)_x d y. $$ I then expressed $dy$ as a function of $dg$ and $dx$ and inserted this expression into the $df$ term. Doing this gave me $$ df = \left[ \left( \frac{\partial f}{\partial x} \right)_y - \left( \frac{\partial f}{\par
Your second equation is $$ d g = \left( \frac{\partial g}{\partial x} \right)_y d x + \left( \frac{\partial g}{\partial y} \right)_x d y. $$ So, at constant g, $$ \left( \frac{\partial g}{\partial x} \right)_y d x + \left( \frac{\partial g}{\partial y} \right)_x d y=0 $$ So, $$\left(\frac{\partial y}{\partial x}\right)_g=-\frac{\left(\frac{\partial g}{\partial x}\right)_y}{\left(\frac{\partial g}{\partial y}\right)_x}$$
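A numerical spot-check of the resulting identity (the sample functions $f = x^2+y^3$, $g = xy$ and the evaluation point are arbitrary choices, picked so that $g_y \neq 0$ and $g = \text{const}$ can be solved for $y$):

```python
f = lambda x, y: x**2 + y**3
g = lambda x, y: x * y

x0, y0 = 1.7, 0.9
g0 = g(x0, y0)
y_on_curve = lambda x: g0 / x        # solve g(x, y) = g0 for y (possible since g_y = x != 0)

h = 1e-6
d = lambda F, t: (F(t + h) - F(t - h)) / (2 * h)   # central difference

# left-hand side: differentiate f along x while holding g fixed
lhs = d(lambda x: f(x, y_on_curve(x)), x0)

# right-hand side of the claimed relation: f_x - f_y * g_x / g_y
fx = d(lambda x: f(x, y0), x0)
fy = d(lambda y: f(x0, y), y0)
gx = d(lambda x: g(x, y0), x0)
gy = d(lambda y: g(x0, y), y0)
rhs = fx - fy * gx / gy
```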
|derivatives|calculus|
0
Integrating $e^{-(x^2+y^2+z^2)/a^2}$ over $\mathbb{R}^3$
I want to compute the following integral $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz $$ and I thought that using spherical coordinates could make it easier, since $r^2=x^2+y^2+z^2$ . With this in mind, I tried $$ \int_{\mathbb{R}^3}e^{-(x^2+y^2+z^2)/a^2}\,dxdydz=\int_0^\infty\int_0^{2\pi}\int_0^\pi r^2\sin \theta \,d\theta d\phi dr=4\pi\int_0^\infty r^2e^{-r^2/a^2}\,dr $$ but I am stuck. Any ideas? Should I simply try to integrate it in cartesian coordinates?
Integration by parts is possible, also letting $\frac{r^2}{a^2}=u$ we have $$4\pi\int_0^\infty r^2e^{-\tfrac{r^2}{a^2}}dr=2\pi a^3\int_0^\infty u^{\tfrac12}e^{-u}du=2\pi a^3\Gamma(\tfrac32)=\pi a^3\Gamma(\tfrac12)=\pi^{\tfrac32} a^3$$ using some properties of the Gamma function.
|integration|multivariable-calculus|spherical-coordinates|
0
How do I solve this differential-integral equation?
The following equation has come up in my research and I am lost at where to start. I have tried guessing forms of the solution and Mathematica is not helpful. Any help pointing me in the right direction or useful resources would be greatly appreciated. $$p''(x) + \bigg(1-K\Big(N-\int_0^\infty tp(t)dt\Big)\bigg)p'(x)=0$$ The equation is defined in real space, all the parameters are positive, and $\Big(N-\int_0^\infty tp(t)dt\Big)>0$ .
If $$1-K\Big(N-\int_0^{\infty} tp(t)\,dt\Big)$$ does not depend on the variable $x$ , we can just set $$A = 1-K\Big(N-\int_0^{\infty} tp(t)\,dt\Big)$$ and we get $$ \ddot p(x) + A \dot p(x) = 0$$ We can also set $$R(x) = \dot p(x)$$ Taking Laplace transforms, $$ \mathcal{L}(\dot R(x)) = sr(s)-R(0)$$ $$ \mathcal{L}(-AR(x)) = -Ar(s)$$ $$ sr(s)-R(0) = -Ar(s)$$ $$ r(s)(s+A) = R(0)$$ $$ r(s) = \frac{R(0)}{s+A}$$ $$ \mathcal{L}^{-1}\Big(\frac{R(0)}{s+A}\Big) = R(0)e^{-Ax}$$ So $$p(x)=\dot p(0)\int {e^{-Ax}}dx$$ $$p(x) = -\frac{\dot p(0)}{A}e^{-Ax}+C$$ $$p(x) = -\frac{\dot p(0)}{1-K\big(N-\int_0^{\infty} tp(t)\,dt\big)}e^{-\big(1-K\big(N-\int_0^{\infty} tp(t)\,dt\big)\big)x}+C$$ Do note that $\int_0^{\infty} tp(t)\,dt$ is a constant: the integral is definite, so it cannot depend on $x$. Perhaps the easiest way to evaluate it is to use known data at points, e.g. $p(0) = a$ , $p(1)=b$ , and so on, until we can evaluate all the unknown constants. It is unlikely to be possible to pin down the value of this integral at $\infty$ and $0$ directly without such information.
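As a sanity check, treating $A$ as a fixed constant, the closed form $p(x) = -\dot p(0)e^{-Ax}/A + C$ can be verified numerically against the ODE $\ddot p + A\dot p = 0$; the values of $A$, $C$, and $\dot p(0)$ below are arbitrary samples:

```python
import math

A, C, p1 = 0.8, 0.3, -1.2   # A stands in for the constant 1 - K(N - integral of t p(t))
p = lambda x: -(p1 / A) * math.exp(-A * x) + C

h = 1e-4
def d1(x):  # central first difference
    return (p(x + h) - p(x - h)) / (2 * h)
def d2(x):  # central second difference
    return (p(x + h) - 2 * p(x) + p(x - h)) / h**2

# residual of p'' + A p' at a few sample points, and the initial slope p'(0)
residuals = [d2(x) + A * d1(x) for x in (0.0, 0.7, 2.5)]
slope0 = d1(0.0)
```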
|ordinary-differential-equations|functional-equations|integral-equations|
0
Find a positive integer $n$,$m$, $p$, $q$ ($p \neq q$, $m>1$) such that $p$ and $q$ divide $mn^2 - 1$ and $mn$ divides $p - q$.
Our programming teacher asked us to find triples of positive integers $n,p,q$ (with $m=2$) such that: $p$ and $q$ divide $2n^2 - 1$ and $2n$ divides $p - q$. With the program below, I didn't find such an integer (I stopped the iterations at 1000000). So, I wonder if such an integer $n$ does exist or if my program is faulty.

from sympy import isprime

def find_integers():
    solutions = []
    max_n = 10**6  # Upper limit for n
    for n in range(1, max_n + 1):
        if not isprime(2 * n ** 2 - 1):
            for p in range(1, 2 * n ** 2 - 1, 2):
                for q in range(1, 2 * n ** 2 - 1, 2):
                    if (2 * n ** 2 - 1) % p == 0 and (2 * n ** 2 - 1) % q == 0 and (p - q) % (2 * n) == 0 and p != q:
                        solutions.append((n, p, q))
        if n % 100 == 0:  # Print progress every 100 iterations
            print(f"Progress: n = {n}")
    return solutions

# Find the solutions
solutions = find_integers()

# Display the results
if solutions:
    print("Solutions:")
    for i, (n, p, q) in enumerate(solutions):
        print(f"Solution {i + 1}: n = {n}, p = {p}, q = {q}")
else:
    print("No solutions found")
There are no integers $n$ , $p \ne q$ such that $p, q \mid 2n^2 - 1$ , $p, q > 0$ , $p \equiv q \pmod{2n}$ . The following solution is based on the theory of generalized Pell equations. If $\gcd(p, q) > 1$ , then $p', q' \mid 2n^2 - 1$ and $p' \equiv q' \pmod{2n}$ , where $p' = \frac{p}{\gcd(p, q)}$ , $q' = \frac{q}{\gcd(p, q)}$ . Hence, $p$ and $q$ can be considered coprime, and therefore $pq \mid 2n^2 - 1$ ; write $2n^2 - 1 = spq$ for some positive integer $s$ . Let $q = p + 2nt$ . We get the equation $$2n^2 - 1 = sp(p + 2nt)$$ $$(2n - spt)^2 - (s^2t^2 + 2s)p^2 = 2$$ This is a generalized Pell equation. The associated Pell equation is $$x^2 - (s^2t^2 + 2s)y^2 = 1$$ which has fundamental solution $(st^2 + 1, t)$ . Solutions of the generalized Pell equation can be bounded; see https://en.m.wikipedia.org/wiki/Pell%27s_equation , section "Generalized Pell's equation": $$p \le \sqrt{2}\, \frac{\sqrt{st^2 + 1 + t\sqrt{s^2t^2 + 2s} } + 1}{2 \sqrt{s^2t^2 + 2s}} < 2$$ Hence, $p \in \{0, 1\}$ . Case $p = 0$ is clearly impossible. Case $p = 1$ is impossible, be
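The claimed non-existence is also easy to probe by brute force for small $n$ (a search sketch, not a proof; the range is an arbitrary choice):

```python
def divisors(m):
    # all positive divisors of m by trial division
    out, d = [], 1
    while d * d <= m:
        if m % d == 0:
            out.append(d)
            if d != m // d:
                out.append(m // d)
        d += 1
    return out

violations = []
for n in range(1, 301):
    ds = divisors(2 * n * n - 1)
    for p in ds:
        for q in ds:
            if p != q and (p - q) % (2 * n) == 0:
                violations.append((n, p, q))
```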
|modular-arithmetic|arithmetic|
1
Number of sequences $(a_1, . . . , a_n)$, with elements from the set $\{ 0, 1, 2 \}$, satisfying: $|a_k - a_{k-1}| \leq 1$ for $k = 2, 3, . . ., n$.
Determine the number of sequences $(a_1, a_2, \ldots, a_n)$ , with elements from the set $\{ 0, 1, 2 \}$ , satisfying the condition $|a_k - a_{k-1}| \leq 1$ for $k = 2, 3, \ldots, n$ . We know that if I define the number of sequences for a given $n$ as $a_n$ , then: $$a_n = 3a_{n-1} + 2a_{n-1} + 2a_{n-1} = 7a_{n-1}$$ The equality holds because if we know that: the last element was equal to $1$ , then the second-to-last element could be equal to $0$ , $1$ or $2$ , so we have $3$ options; the last element was equal to $0$ , then the second-to-last element could be equal to $0$ or $1$ , so we have $2$ options; the last element was equal to $2$ , then the second-to-last element could be equal to $1$ or $2$ , so we have $2$ options. From that we have: $$a_n = k \cdot 7^n$$ We know that $a_1 = 3$ , because that one element can be equal to $0$ , $1$ or $2$ , so we have $3$ options. Therefore: $$a_n = \frac{3}{7} \cdot 7^n = 3 \cdot 7^{n-1}$$ Is that correct?
Denote by $(A_n,B_n,C_n)$ the number of sequences for a given $n$ that begin with $0$ , $1$ or $2$ . We observe that: $$\begin{align} &A_{n+1} = A_n + B_n\\ &B_{n+1} = A_n + B_n+C_n\tag{1}\\ &C_{n+1} = B_n +C_n\\ \end{align}$$ Write $(1)$ in matrix form: $$\underbrace{\pmatrix{A_{n+1}\\B_{n+1}\\C_{n+1}}}_{:=X_{n+1}}= \underbrace{\pmatrix{1&1&0\\1&1&1\\0&1&1}}_{:=M}\cdot\underbrace{\pmatrix{A_{n}\\B_{n}\\C_{n}}}_{:=X_{n}}\tag{2}$$ Diagonalize $M=U^{-1}DU$ where $$\begin{align} &U=\pmatrix{1&\sqrt{2}&1\\-1&0&1\\1&-\sqrt{2}&1}\\ &D=\pmatrix{1 + \sqrt{2}&0&0\\0&1&0\\0&0&1 - \sqrt{2}}\end{align}$$ Then $$(2)\iff X_{n+1} = U^{-1}D UX_n\iff (UX_{n+1})=D\cdot (UX_n)=...=D^n\cdot (UX_1)$$ $$\implies X_n = U^{-1}D^{n-1}U X_1 = U^{-1}\cdot \pmatrix{(1 + \sqrt{2})^{n-1} &0&0\\0&1&0\\0&0&(-1)^{n-1}(\sqrt{2} - 1)^{n-1}}\cdot U\cdot X_1 $$ If we denote $t = 1+\sqrt 2$ and $v = (-1)^{n-1}$ then thanks to WolframAlpha, the matrix $\Lambda_n:= U^{-1}D^{n-1}U \in \mathbb{R}^{3\times 3}$ can be computed easily.
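The recurrence $(1)$ can be cross-checked against direct enumeration; the check below also shows that a closed form $3\cdot 7^{n-1}$, as guessed in the question, would overcount from $n=3$ on:

```python
from itertools import product

def brute(n):
    # direct enumeration of sequences over {0, 1, 2} with |a_k - a_{k-1}| <= 1
    return sum(1 for s in product(range(3), repeat=n)
               if all(abs(s[k] - s[k - 1]) <= 1 for k in range(1, n)))

def by_recurrence(n):
    # iterate (A, B, C) -> (A + B, A + B + C, B + C), i.e. multiply by M
    A, B, C = 1, 1, 1
    for _ in range(n - 1):
        A, B, C = A + B, A + B + C, B + C
    return A + B + C

counts = [brute(n) for n in range(1, 9)]
recs = [by_recurrence(n) for n in range(1, 9)]
```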
|sequences-and-series|discrete-mathematics|recurrence-relations|
0
No free lunch theorem and understanding of distributions
I am currently studying "Understanding Machine Learning: From Theory to Algorithms" by Shai Shalev-Shwartz and Shai Ben-David, and I have trouble understanding the presented proof of the No Free Lunch Theorem. Theorem 5.1 (No-Free-Lunch): Let $A$ be any learning algorithm for the task of binary classification with respect to the $0-1$ loss over a domain $X$ . Let $m$ be any number smaller than $|X|/2$ , representing a training set size. Then, there exists a distribution $D$ over $X \times \{0, 1\}$ such that: There exists a function $f : X \rightarrow \{0, 1\}$ with $L_D(f) = 0$ . With probability of at least $1/7$ over the choice of $S \sim D^m$ , we have that $L_D(A(S)) \geq 1/8$ . Proof: Let $C$ be a subset of $X$ of size $2m$ . Note that there are $T = 2^{2m}$ possible functions from $C$ to $\{0, 1\}$ . Denote these functions by $f_1, \ldots, f_T$ . For each such function, let $D_i$ be a distribution over $C \times \{0, 1\}$ defined by: $$D_i(x,y)=\begin{cases} 1/|C| & \text{if } y = f_i(x)\\ 0 & \text{otherwise} \end{cases}$$
$\def\calS{\mathcal{S}}$ $\def\calW{\mathcal{W}}$ $\def\qty#1{\left( #1 \right)}$ $\def\calX{\mathcal{X}}$ $\def\calY{\mathcal{Y}}$ $\def\abs#1{\lvert #1 \rvert}$ $\def\pdv#1#2{\frac{\partial #1}{\partial #2}}$ $\def\expect{\mathbb{E}}$ $\def\indic{\mathbb{1}}$ In their book "Understanding Machine Learning" the authors look at algorithms for binary classification; all samples are drawn from a domain $\calX$ which has cardinality $\abs{\calX}$ . They draw the training and testing sets from $C\subset\calX$ with cardinality $2m$ where $m\leq \frac{\abs{\calX}}{2}$ . If we think of a function $f$ as a list of pairs $(x,y)$ where $x\in C$ and $y\in\{0,1\}$ then for each $x$ there are two possible pairs $(x,0)$ and $(x,1)$ . Since $C$ has $2m$ elements there are $T=2^{2m}$ such lists of pairs; hence there are $T$ functions that can be defined from $C$ to $\{0,1\}$ . These are denoted as $f_1,\ldots,f_i, \ldots,f_T$ . Postulate 1 states that "there exists a distribution $D$ over $\calX \times
|probability|probability-distributions|machine-learning|
1
Does it follow that $(a_n)$ and $(b_n)$ are the same sequence?
Let $(a_n)$ and $(b_n)$ be two real sequences. Suppose that $(a_n)$ is a subsequence of $(b_n)$ and $(b_n)$ is a subsequence of $(a_n)$ . Does it follow that they are the same sequence? I am not sure. I think that if $(a_n)$ and $(b_n)$ are real sequences such that each is a subsequence of the other, then they are the same sequence if and only if they have exactly the same terms for each $n$ . How can I prove or disprove the original statement?
To summarize the discussion in the comments: It's not entirely clear what it means to say that two sequences coincide. The most natural thing is probably to normalize the indices, so that we are always running from $1$ to $\infty$ (or from $0$ to $\infty$ if you prefer, the point is to fix the indices). Then we say that $\{a_n\}$ is the same as $\{b_n\}$ if and only if $a_n=b_n$ for all $n$ . With that definition, we can choose $$\{a_n\}=\{0,1,0,1,0,\cdots\}\quad \quad \& \quad \quad \{b_n\}=\{1,0,1,0,1,\cdots\}$$ It is clear that each is a subsequence of the other and that they aren't the same sequence.
|real-analysis|sequences-and-series|
1
Calculating acceleration from velocity as a function of distance
This comes from the USA Harvard-MIT Mathematics Tournament (I don't know from which year): A particle moves along the $x$ -axis such that its velocity at position $x$ is given by the formula $v(x) = 2 + \sin(x)$ . What is its acceleration at $x = \pi/6$ ? (The given answer is $2.2$ .) How do I do this if I don't know velocity as a function of time?
We have for the one-dimensional motion that $$v = \frac{d x}{d t} = 2 + \sin x$$ The acceleration is $$a = \frac{d v}{d t} = \frac{d v}{d x} \frac{d x}{d t} = v \frac{d v}{d x}$$ by the chain rule. We have $\frac{d v}{d x} = \cos x$ so that $$a = (2 + \sin x) \cos x$$ When $x = \frac{\pi}{6}$ , $\sin x = \frac{1}{2}$ and $\cos x = \frac{\sqrt{3}}{2}$ . Therefore, $a \left( x = \frac{\pi}{6} \right) = \boxed{\frac{5 \sqrt{3}}{4}} \approx 2.165$ .
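As a numerical cross-check of the chain-rule computation, one can approximate $dv/dx$ by a central difference and compare $a = v\,dv/dx$ at $x = \pi/6$ with $5\sqrt3/4$:

```python
import math

v = lambda x: 2 + math.sin(x)        # velocity as a function of position

def accel(x, h=1e-6):
    # a = v * dv/dx (chain rule); dv/dx approximated by a central difference
    return v(x) * (v(x + h) - v(x - h)) / (2 * h)

x0 = math.pi / 6
numeric = accel(x0)
closed_form = (2 + math.sin(x0)) * math.cos(x0)   # (2 + sin x) cos x
exact = 5 * math.sqrt(3) / 4                      # approximately 2.165
```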
|calculus|kinematics|
1
Internal angle sum of a triangle
I teach a 5th grade class geometry, and I came up with the following alternative proof (?) to show that the internal angle sum of a triangle is $180^\circ$ . I remember reading that this result is equivalent to the parallel postulate , however I can't see where I have used this axiom. I would be very happy if anyone could point out my mistake. Here we go:
1. The pencil starts at the side $CB$.
2. The pencil turns clockwise by $\angle B$.
3. The pencil turns clockwise by $\angle A$.
4. The pencil turns clockwise by $\angle C$.
Since the pencil now points in the opposite direction, it has turned $180^\circ$ (or maybe $180^\circ+360^\circ\cdot k$?).
Euclid adds angles only when they are adjacent at a vertex: With this proposition, we begin to see what the arithmetic of magnitudes means to Euclid, in particular, how to add angles. Euclid says that the angle CBE equals the sum of the two angles CBA and ABE. So, one way a sum of angles occurs is when the two angles have a common vertex (B in this case) and a common side (BA in this case), and the angles lie on opposite sides of their common side. (From http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI13.html ) In order to move the angles you rotate through to a common vertex so that they can be added, you use Book I Prop. 29: the theorem that parallel lines make equal alternate angles with a transversal. Then you are essentially using the corresponding argument from Wikipedia.
|geometry|solution-verification|euclidean-geometry|
1
What is the smallest and largest $|z|$ that makes $|a+z|+|a-z|=2|c|$ complex analysis By Lars Ahlfors 1.1.5.4
In Complex Analysis by Lars Ahlfors, 1.1.5.4: Show that there are complex numbers $z$ satisfying $|z - a| + |z + a| = 2|c|$ if and only if $|a| \le |c|$ . If this condition is fulfilled, what are the smallest and largest values of $|z|$ ? The first two questions are easy: if $|c|\ge|a|$ , then $z=\frac{|c|a}{|a|}$ gives equality, since $$|a|\left(\left|\frac{|c|}{|a|}-1\right| + \left|\frac{|c|}{|a|}+1\right|\right)=(|c|-|a|)+(|c|+|a|)=2|c|,$$ and conversely, if $|z - a| + |z + a| = 2|c|$ then $|a|\le|c|$ by the triangle inequality. The last two questions were very challenging to me and I couldn't find an answer. I want to ask specifically for an algebraic approach, as the geometric interpretation of complex numbers comes only in the next part of this chapter, so the author had some proof in mind that works without representing complex numbers in the complex plane.
We have $$ 2 |z| = | (z-a)+(z+a)| \le |z-a| + |z+a| = 2|c| \, , $$ so that $ \boxed{|z| \le |c|}$ . Equality holds if $z-a$ and $z+a$ are real non-negative multiples of each other, that is for $$ z = \pm |c|\frac{a}{|a|} \, . $$ For the other direction we can use the parallelogram law : $$ \begin{align} 4 |c|^2 &= |z-a|^2 + |z+a|^2 + 2 |z-a| \cdot |z+a| \\ &= 2 |z|^2 + 2 |a|^2 + 2 |z^2-a^2| \\ &\le 2 |z|^2+ 2 |a|^2 + 2 |z|^2 + 2 |a|^2 \\ &= 4 (|z|^2 + |a|^2) \, , \end{align} $$ so that $\boxed{|z| \ge \sqrt{|c|^2 - |a|^2}}$ . Equality holds if $|z^2-a^2| = |z|^2 + |a|^2$ , that is if $z^2$ and $-a^2$ are real non-negative multiples of each other, i.e. for $$ z = \pm i \sqrt{|c|^2-|a|^2}\frac{a}{|a|} \, . $$
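To see the two bounds concretely, here is a numeric illustration with arbitrary sample values $a = 1+i$ and $|c| = 2$; the locus $|z-a|+|z+a|=2|c|$ is sampled using the parametrization suggested by the equality cases:

```python
import math

a = 1 + 1j                  # foci at +-a
c_abs = 2.0                 # any value with |c| >= |a| = sqrt(2) works
u = a / abs(a)              # unit vector in the direction of a

minor = math.sqrt(c_abs**2 - abs(a)**2)
# points z = u * (|c| cos t + i * sqrt(|c|^2 - |a|^2) sin t) trace |z-a| + |z+a| = 2|c|
pts = [u * complex(c_abs * math.cos(t), minor * math.sin(t))
       for t in (k * math.pi / 500 for k in range(1000))]

on_ellipse = all(abs(abs(z - a) + abs(z + a) - 2 * c_abs) < 1e-9 for z in pts)
lo = min(abs(z) for z in pts)   # should be sqrt(|c|^2 - |a|^2)
hi = max(abs(z) for z in pts)   # should be |c|
```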
|complex-analysis|algebra-precalculus|complex-numbers|
1
How can one characterize all linear functionals on random variables on $\mathbb{R}$ with finite expectation?
This question was completely restated to make it well-posed and understandable. Let $\mathcal{C}$ be the algebra of random variables on $\mathbb{R}$ with finite expectation. By a random variable on $\mathbb{R}$ with finite expectation I mean a Lebesgue-measurable probability distribution $\mu$ on $\mathbb{R}$ , such that $\left| \int x \mu(x)\, dx \right| < \infty$ , and an index $\nu \in \Lambda$ , where $\Lambda$ is some set (let it be countable), that enumerates different random variables with the same distribution. Note that random variables with all possible distributions with finite expectation on $\mathbb{R}$ belong to this algebra, including singular distributions. The random variables defined above are the basis for a free commutative algebra over $\mathbb{R}$ , factored by coincidence almost surely. The field $\mathbb{R}$ is injected in this algebra via a map $u: r \to \delta_{r}$ , where $\delta_{r}$ denotes a constant random variable with value $r$ . Since the algebra is factored by coin
The set of all random variables on a probability space $\Omega$ with finite expectation is just $L^1(\Omega)$ . It seems you are looking for the algebraic dual of this space. I am not sure what this is to be honest and I can imagine it is a very large space consisting of some very pathological objects. The topological dual on the other hand, i.e. the space of all continuous linear functions (continuity defined with respect to the norm on $L^1(\Omega)$ ), is well-understood, and is given by $L^\infty(\Omega)$ . To be more precise, given any $Y \in L^\infty(\Omega)$ , the map $L^1(\Omega) \ni X \mapsto \mathbb{E}[XY] \in \mathbb{R}$ is linear and continuous as a map from $L^1(\Omega) \to \mathbb{R}$ and, furthermore, for any such linear, continuous map $F:L^1(\Omega) \to \mathbb{R}$ there exists a $Y \in L^\infty(\Omega)$ such that $F(X)=\mathbb{E}[XY]$ .
|probability|functional-analysis|
0
For $g\in\operatorname{SL}(2,q)$, do we have $\operatorname{tr}(g)=\operatorname{tr}(g^{-1})?$
The Question: For $g\in\operatorname{SL}(2,q)$ and $q=p^r$ for a prime $p$ with $r\in\Bbb N$ , do we have $\operatorname{tr}(g)=\operatorname{tr}(g^{-1})?$ Here $\operatorname{tr}(h)$ is the trace of the matrix $h$ . Thoughts: I think so. According to a preprint , it holds for $\operatorname{SL}(2,\Bbb K)$ , for $\Bbb K=\Bbb{R,C}$ . My guess is that it should fall nicely from $\det(g)=1$ but I don't see how just yet. Does diagonalising $g$ work, as we are over a finite field? I suppose the answer would be something along those lines. I feel like I'm missing something obvious.
Since $\det g = 1$ , the inverse of $g$ is the adjugate of $g$ . So, regardless of the ground field $\mathbb{K}$ , for a $2 \times 2$ matrix in $\operatorname{SL}(2, \mathbb{K})$ $$ g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, $$ we have $$ g^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. $$ It's clear that either matrix has trace $a + d$ .
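As a sanity check, the identity can be verified by brute force in a small case; the sketch below (with the prime $p = 3$ chosen arbitrarily) finds each inverse by exhaustive search rather than via the adjugate, so the check is independent of the formula being verified.

```python
# Brute-force check of tr(g) = tr(g^{-1}) in SL(2, p) for p = 3.
# Matrices are tuples (a, b, c, d) for [[a, b], [c, d]] mod p.
from itertools import product

p = 3
I = (1, 0, 0, 1)

def mul(g, h):
    a, b, c, d = g
    e, f, u, v = h
    return ((a*e + b*u) % p, (a*f + b*v) % p,
            (c*e + d*u) % p, (c*f + d*v) % p)

def det(g):
    return (g[0]*g[3] - g[1]*g[2]) % p

sl2 = [g for g in product(range(p), repeat=4) if det(g) == 1]

for g in sl2:
    # find g^{-1} by exhaustive search, not via the adjugate
    ginv = next(h for h in sl2 if mul(g, h) == I)
    assert (g[0] + g[3]) % p == (ginv[0] + ginv[3]) % p
```

Here `len(sl2)` is $|\operatorname{SL}(2,3)| = p(p^2-1) = 24$, and every element passes the trace check.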
|linear-algebra|matrices|finite-fields|trace|
1
Unique Prime Ideal in $\mathbb{Z}[\zeta_6]$ of Residual Characteristic 3
I’m trying to find the unique prime ideal in $\mathbb{Z}[\zeta_6]$ of residual characteristic 3, but I’m having difficulty finding it. I computed that $\mathbb{Z}[\zeta_6]\cong \mathbb{Z}[x]/(x^2-x+1)$ . Therefore, let $P\subset \mathbb{Z}[\zeta_6]$ be a prime ideal, and $(3)\subset P$ , I will show $1+\zeta_6\in P$ , but I couldn’t… Please let me know if there is any way I can solve.
By the way, this is part of a general pattern: for $\omega$ a primitive $2p$ th root of unity, with $p$ prime (your $2\cdot 3$ case generalizes to $2\cdot p$ ), $\omega+1$ satisfies ${(x-1)^p+1\over (x-1)+1}=0$ , so the product of $\omega^k+1$ over $k=1,3,5,\ldots,2p-1$ excluding $p$ itself is $p$ . Further, the ratios of those elements (in $\mathbb Z[\omega]$ ) are demonstrably units , so these all generate the same (prime) ideal, whose $p-1$ power (square, instead, for $p=2$ ) is $p\in\mathbb Z$ .
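For the case at hand one can check the key computation directly in $\mathbb{Z}[\zeta_6] \cong \mathbb{Z}[x]/(x^2-x+1)$: writing elements as pairs $(a,b) = a + b\zeta$ with $\zeta^2 = \zeta - 1$ (my own encoding, a quick sketch), we get $(1+\zeta)^2 = 3\zeta$, so $(1+\zeta_6)$ squares to $(3)$ up to the unit $\zeta_6$.

```python
# Arithmetic in Z[zeta_6] = Z[x]/(x^2 - x + 1), elements stored as
# pairs (a, b) meaning a + b*zeta, using zeta^2 = zeta - 1.
def mul(u, v):
    a, b = u
    c, d = v
    # (a + b z)(c + d z) = ac + (ad + bc) z + bd (z - 1)
    return (a*c - b*d, a*d + b*c + b*d)

one_plus_zeta = (1, 1)
square = mul(one_plus_zeta, one_plus_zeta)
assert square == (0, 3)        # (1 + zeta)^2 = 3*zeta

# zeta_6 is a unit of order 6: zeta^6 = 1
zeta, power = (0, 1), (1, 0)
for _ in range(6):
    power = mul(power, zeta)
assert power == (1, 0)
```

So the ideal $(1+\zeta_6)$ has residual characteristic $3$ and $(1+\zeta_6)^2 = (3)$, i.e. $3$ is ramified.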
|abstract-algebra|ring-theory|ideals|
0
Number of sequences $(a_1, . . . , a_n)$, with elements from the set $\{ 0, 1, 2 \}$, satisfying: $|a_k - a_{k-1}| \leq 1$ for $k = 2, 3, . . ., n$.
Determine the number of sequences $(a_1, a_2, . . . , a_n)$ , with elements from the set $\{ 0, 1, 2 \}$ , satisfying the condition $|a_k - a_{k-1}| \leq 1$ for $k = 2, 3, . . ., n$ . We know that if I define the number of sequences for a given $n$ as $a_n$ , then: $$a_n = 3a_{n-1} + 2a_{n-1} + 2a_{n-1} = 7a_{n-1}$$ The equality holds because if we know that: the last element was equal to $1$ , then the second-to-last element could be equal to: $0$ , $1$ or $2$ , so we have $3$ options the last element was equal to $0$ , then the second-to-last element could be equal to: $0$ or $1$ , so we have $2$ options the last element was equal to $2$ , then the second-to-last element could be equal to: $1$ or $2$ , so we have $2$ options From that we have: $$a_n = k \cdot 7^n$$ We know that $a_1 = 3$ , because that one element can be equal to: $0$ , $1$ or $2$ , so we have $3$ options. Therefore: $$a_n = \frac{3}{7} \cdot 7^n = 3 \cdot 7^{n-1}$$ Is that correct?
Another solution which is simpler. Denote $(A_n,B_n,C_n)$ the number of sequences for a given $n$ that begins with $0$ , $1$ or $2$ . We observe that: $$\begin{align} &A_{n+1} = A_n + B_n\\ &B_{n+1} = A_n + B_n+C_n\tag{1}\\ &C_{n+1} = B_n +C_n\\ \end{align}$$ Denote $P(n)$ the total number of sequences of length $n$ , then $P(n) = A_n + B_n +C_n$ . From $(1)$ , we have: $$P(n+1) = A_{n+1}+B_{n+1} +C_{n+1} = 2\underbrace{(A_n + B_n+C_n)}_{= P(n)} + \underbrace{B_n}_{= A_{n-1}+B_{n-1} +C_{n-1} = P(n-1)}$$ $$\implies \color{red}{P(n+1) = 2P(n) +P(n-1)} \tag{2}$$ It's easy to solve the linear recurrence $(2)$ : if $x_{1,2}= 1\pm \sqrt{2}$ are the two solutions of the characteristic equation $x^2 - 2x -1 =0$ then $$P(n) = p_1\cdot x_1^{n}+p_2\cdot x_2 ^n \tag{3}$$ Determine $(p_1,p_2)$ from $(3)$ given that $P(1) = 3$ and $P(2) = 7$ , we can show that $$\color{red}{P(n) = \frac{(\sqrt{2}+1)^{n+1}+(1-\sqrt{2})^{n+1}}{2}}$$
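The recurrence $P(n+1) = 2P(n) + P(n-1)$ from the answer, and the closed form, can both be checked against a brute-force count for small $n$ (a quick verification sketch):

```python
# Brute-force count of sequences over {0, 1, 2} with |a_k - a_{k-1}| <= 1,
# compared with the recurrence P(n+1) = 2 P(n) + P(n-1), P(1)=3, P(2)=7.
from itertools import product
from math import sqrt

def brute(n):
    return sum(all(abs(s[i] - s[i-1]) <= 1 for i in range(1, n))
               for s in product(range(3), repeat=n))

P = [None, 3, 7]
for n in range(3, 11):
    P.append(2 * P[-1] + P[-2])

for n in range(1, 11):
    assert brute(n) == P[n]

# closed form with roots 1 +/- sqrt(2) of x^2 - 2x - 1 = 0
closed = lambda n: ((sqrt(2) + 1)**(n + 1) + (1 - sqrt(2))**(n + 1)) / 2
assert all(round(closed(n)) == P[n] for n in range(1, 11))
```

The first few values are $3, 7, 17, 41, 99, \dots$, which also rules out the asker's $3\cdot 7^{n-1}$.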
|sequences-and-series|discrete-mathematics|recurrence-relations|
1
How to show a decreasing sequence is convergent and how to find its limit?
I just started studying real analysis and am having some trouble with this question: Show that the sequence given by $x_{n+1} = \frac{1}{5-x_n}$ , $x_0 = 4$ converges and find its limit. I can observe that the sequence is decreasing and bounded below and that $\lim_{n\rightarrow\infty}x_{n+1}=0$ . Since $x_1 >x_2$ can I assume that $x_n > x_{n+1}$ then by induction $x_{n+1} > x_{n+2}$ so therefore the sequence is monotone decreasing. Would this be correct? Knowing the limit is $0$ and $x_1\ge 0$ and $x_0\ge 0$ can I assume that $x_n\ge 0$ , so then $\frac{1}{5-x_n}\ge0$ , so $x_{n+1}\ge0$ . Hence the sequence is bounded below by 0? Therefore by the monotone convergence theorem the sequence converges. Is any of this correct or on the right track, I feel like I'm missing something or more explaining in some of the induction bits, I'm not entirely sure if I'm using things correctly. I would really appreciate any help. Thank you!
Assume that the limit $L$ exists. Then it has to satisfy the equation: $$ L=\frac1{5-L} $$ or $$ L=\frac{5\pm\sqrt{21}}2:=L_\pm.\tag1 $$ Observe $0 < L_- < L_+$ . Next we can prove: $$ L_- < x_n < L_+ \implies L_- < x_{n+1} < x_n. \tag2 $$ Indeed, provided $L_- < x_n < L_+$ we have: $$ (L_+-x_n)(x_n-L_-)>0\implies 5x_n-x_n^2-1>0\implies x_n>\frac1{5-x_n}=x_{n+1} $$ and $$ x_{n+1}=\frac1{5-x_n}>\frac1{5-L_-}=\frac1{L_+}=L_-. $$ In view of $(2)$ and since $x_0=4\in(L_-,L_+)$ , the sequence is decreasing and bounded below by $L_-$ , hence converges, and combining $(1)$ and $(2)$ one obtains: $$ \lim_{n\to\infty}x_n=L_-. $$
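A quick numerical check of this argument: starting from $x_0 = 4 \in (L_-, L_+)$ the iterates decrease monotonically toward $L_- = (5-\sqrt{21})/2 \approx 0.2087$.

```python
# Iterate x_{n+1} = 1/(5 - x_n) from x_0 = 4 and compare with
# L_- = (5 - sqrt(21))/2, the smaller fixed point of L = 1/(5 - L).
from math import sqrt

L_minus = (5 - sqrt(21)) / 2

xs = [4.0]
for _ in range(50):
    xs.append(1 / (5 - xs[-1]))

# strictly decreasing over the first several steps
assert all(xs[i] > xs[i + 1] for i in range(10))
assert abs(xs[-1] - L_minus) < 1e-12
```

Note the limit is not $0$, as the question guessed, but the smaller root $L_-$.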
|real-analysis|sequences-and-series|limits|convergence-divergence|
0
Conceptual Question regarding Shannon Entropy and bits
It is said that the number of "information bits" contained in a certain piece of information can be roughly translated as the number of yes/no-questions that would have to be answered in order to transmit the information. But isn't this entirely dependent on the knowledge of the receiver (what questions they ask) and if so, how could one ever talk about the "objective" number of bits in a certain piece of information? In theory, it seems to me that any amount of information could be transmitted in a single bit using the right (possibly very long) yes/no-question. Am I missing something fundamental here? Thank you in advance!
But isn't this entirely dependent on the knowledge of the receiver Of course. If the receiver already has the full knowledge, then she needs to ask zero (not even one!) questions. If the receiver knows that the information has one of two possible outcomes A/B, each equally probable, then she has to ask just one question. In the two examples above, the amount of information she attains after discovering the actual value is respectively zero and one bit. Which is precisely the entropy . This amount of bits/questions is "objective" in the sense that, if $n$ persons agree on the a priori knowledge they have about the unknown value (which is modelled by a probability distribution), then they must agree about the optimal (on average) number of questions they must ask to discover the value.
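The two examples above translate directly into the entropy formula $H = -\sum_i p_i \log_2 p_i$; a minimal sketch:

```python
# Shannon entropy in bits: the average number of optimal yes/no questions.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

assert entropy([1.0]) == 0            # full knowledge: zero questions
assert entropy([0.5, 0.5]) == 1       # one fair yes/no question
assert abs(entropy([0.25] * 4) - 2) < 1e-12   # two questions on average
```

Changing the a priori distribution changes the entropy, which is exactly the receiver-dependence described above.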
|information-theory|philosophy|bit-strings|
1
Number of compositions of $n$ where each part is at least $m$
How many compositions of $n$ are there where each part is at least $m$ ? My attempt: So far I have found the generating function and have that the desired number of compositions is $$ [x^{n}]\frac{1-x}{1-x-x^{m}}. $$ But I am unsure how to proceed, I have made a few attempts to transform this (for example using the geometric series or through rewriting), perhaps there is a simple way to do this and I am overlooking it. Does anybody have any suggestions. Thanks in advance. Edit: I have summarized how I've obtained the previous expression, kindly mention if you notice any mistakes. Note that $\Phi_{N\geq m} = \left(\sum_{i \geq m} x^{i}\right) = \left(\sum_{i \geq 0} x^{i+m}\right)$ $$\Phi_{\cup_{k\geq 0}N_{\geq m}^{k}}(x) = \sum_{k \geq 0}(\Phi_{N\geq m}(x))^{k} = \sum_{k\geq 0}\left(\sum_{i\geq 0} x^{i+m}\right)^{k} = \sum_{k\geq 0}\left(x^{m}\sum_{i\geq 0} x^{i}\right)^{k}$$ $$ = \sum_{k\geq 0}(x^{m}(1-x)^{-1})^{k} = \frac{1}{1-x^{m}(1-x)^{-1}} = \frac{1-x}{1-x-x^{m}} $$
We obtain for $m,n\geq 2$ using the geometric series expansion \begin{align*} \color{blue}{[x^n]}&\color{blue}{\frac{1-x}{1-x-x^m}}=[x^n]\frac{1-x}{1-x\left(1+x^{m-1}\right)}\\ &=[x^n](1-x)\sum_{q=0}^{\infty}x^q\left(1+x^{m-1}\right)^q\\ &=\sum_{q=0}^n[x^{n-q}]\left(1+x^{m-1}\right)^q(1-x)\tag{1}\\ &=\sum_{q=0}^n[x^{n-q}]\sum_{j=0}^q\binom{q}{j}x^{(m-1)j}(1-x)\tag{2}\\ &=\sum_{q=0}^n[x^{q}]\sum_{j=0}^{n-q}\binom{n-q}{j}x^{(m-1)j}(1-x)\tag{3}\\ &=\sum_{q=0}^{\left\lfloor\frac{n}{m-1}\right\rfloor} [x^{(m-1)q}]\sum_{j=0}^{n-(m-1)q}\binom{n-(m-1)q}{j}x^{(m-1)j}(1-x)\tag{4}\\ &\,\,\color{blue}{=\sum_{q=0}^{\left\lfloor\frac{n}{m-1}\right\rfloor}\binom{n-(m-1)q}{q} -\sum_{q=0}^{\left\lfloor\frac{n-1}{m-1}\right\rfloor}\binom{n-1-(m-1)q}{q}}\tag{5} \end{align*} Comment: In (1) we apply $[x^{p-q}]A(x)=[x^p]x^qA(x)$ . We also set the upper limit of the series to $n$ since other terms do not contribute to $[x^n]$ . In (2) we make a binomial expansion. In (3) we change the order of summation $q\to n-q$ .
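The final expression (5) can be sanity-checked against a direct recursive count of compositions with all parts at least $m$ (a quick verification sketch, not part of the derivation):

```python
# Compare the coefficient formula (5) with a brute-force count of
# compositions of n into parts all >= m.
from math import comb
from functools import lru_cache

@lru_cache(None)
def brute(n, m):
    if n == 0:
        return 1                      # the empty composition
    return sum(brute(n - p, m) for p in range(m, n + 1))

def formula(n, m):
    a = sum(comb(n - (m - 1) * q, q) for q in range(n // (m - 1) + 1))
    b = sum(comb(n - 1 - (m - 1) * q, q) for q in range((n - 1) // (m - 1) + 1))
    return a - b

for m in range(2, 5):
    for n in range(2, 20):
        assert brute(n, m) == formula(n, m)
```

For $m = 2$ the counts are Fibonacci numbers (e.g. $n = 10$ gives $34$), a familiar cross-check.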
|combinatorics|generating-functions|
0
matrix with univariate entries: rank deficit of specialization ≤ vanishing order of determinant.
fix a field $F$ and consider an $n \times n$ matrix $M(X)$ with entries in $F[X]$ . the determinant $\det(M(X))$ is itself a polynomial, say $D(X)$ . clearly, if $x \in F$ is such that the specialized $\text{rank}(M(x)) < n$ , then $D(x) = 0$ . i want to say more: Claim. If $x \in F$ is such that $\text{rank}(M(x)) \leq n - d$ , then $D(x)$ vanishes to order $\geq d$ ; that is, $(X - x)^d \mid D(X)$ in $F[X]$ .
A matrix over a commutative ring is invertible whenever its determinant is, so multiplying $M$ by invertible matrices would change neither quantity. $F[X]$ is a PID, so we have the Smith normal form, and for diagonal matrices the claim is rather evident: at least $d$ diagonal entries are zero in the specialization, and $X-x$ divides each of these.
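Here is a concrete instance of the claim, with polynomials over $\mathbb{Q}$ coded as coefficient lists (the example matrix is mine, not from the question): for $M(X) = \begin{pmatrix} X & X \\ X & X + X^2 \end{pmatrix}$ we have $\det M = X^3$ and $M(0)$ is the zero matrix, so $\operatorname{rank} M(0) = 0 = n - 2$ forces vanishing to order at least $2$ at $x = 0$ (here the order is in fact $3$).

```python
# Polynomials over Q as coefficient lists, constant term first.
from fractions import Fraction as F

def pmul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def psub(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

X = [F(0), F(1)]
M = [[X, X],
     [X, [F(0), F(1), F(1)]]]          # bottom-right entry: X + X^2

det = psub(pmul(M[0][0], M[1][1]), pmul(M[0][1], M[1][0]))

def order_at_zero(p):
    k = 0
    while k < len(p) and p[k] == 0:
        k += 1
    return k

assert order_at_zero(det) >= 2         # the claim, with d = 2
assert order_at_zero(det) == 3         # in fact det = X^3
```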
|linear-algebra|abstract-algebra|polynomials|ring-theory|matrix-rank|
1
Coloring of a 4-sided figure with three colors
I attached the image below for reference. I am trying to color the figure using three colors and find the number of distinguishable colorings. The long sides of the figure are one side and not two short sides. We do not count reflections as indistinguishable. I was trying to solve it using casework: When the cross in the middle is all one color, and the short sides are both the same color, there are 3*3=9 ways to do that. Then there are the cases where the cross is one color and the short sides are split one and one (these are rotations) so that is another 3*3/3=3. Then when the cross is two different colors, and sides same color: 6*3/2=9 Then cross different colors, sides different colors: 6*3/2=9 So I'm getting 30 ways to do this. But I'm looking for a nicer solution that doesn't involve casework. Is there a way to do this (possibly with orbits?) What about if reflections do count? Then what happens? Picture for reference
The fact that the two long sides cross does not actually matter. From @caduk's comment, if I understand what you're asking, this is the same as finding the number of ways to paint the sides of a rectangle with three colors. Since you mentioned orbits, I assume you are familiar with how to use groups. Burnside's Lemma should be applied here. If you need another hint, let me know.
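To illustrate Burnside's lemma in code on a toy example (a square's four sides under rotations, not the OP's exact figure): the number of distinguishable colorings is the average, over the group, of the number of colorings fixed by each element.

```python
# Burnside's lemma: orbits = (1/|G|) * sum over g of |Fix(g)|.
# Example: 3-colorings of the 4 sides of a square, up to rotation.
from itertools import product

# rotations of positions 0..3, as one-line permutations
rotations = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]

def burnside(colors, group):
    fixed = 0
    for g in group:
        # a coloring is fixed by g iff it is constant on the cycles of g
        fixed += sum(all(c[i] == c[g[i]] for i in range(len(g)))
                     for c in product(range(colors), repeat=len(g)))
    return fixed // len(group)

# (3^4 + 3 + 3^2 + 3) / 4 = 24 colorings up to rotation
assert burnside(3, rotations) == 24
```

To handle "reflections count as the same", enlarge `group` with the reflection permutations (the dihedral group) and rerun the same function.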
|combinatorics|permutations|combinations|
1
whether we can calculate a limit by knowing the values of two other ones
This is not an exercise found in a textbook, just a question that arose while I was working on a larger project. The question may sound vague, so let me know if I have skipped some information. So $f, g : \mathbb{R}\rightarrow\mathbb{R}$ are continuous in an interval that contains zero. $$\lim\limits_{x\rightarrow 0}\frac{f^2(x)}{x}=l_1\in \mathbb{R}$$ $$\lim\limits_{x\rightarrow 0}\frac{1-g^2(x)}{x}=l_2\in \mathbb{R}$$ Can we say anything about the limit $$\lim\limits_{x\rightarrow 0}\frac{f(x)g(x)}{x}=\ ?$$ You can assume that the functions are differentiable if that helps. Are the three limits independent (in the sense that we need more information about the two functions to answer the question)?
In the first limit, $l_1$ must be zero, since $f(x)^2/x$ is nonnegative for $x > 0$ and nonpositive for $x < 0$ . And then that limit contains very little information about how fast $f(x)$ actually tends to zero; it could be like $f(x) \sim |x|^{3/4}$ or $f(x) \sim c x$ or $f(x) \sim x^{1000}$ . Coupled with the fact that $g(x)$ must tend to $1$ or $-1$ for the second limit to exist, this implies that we can't conclude anything about the third limit.
|real-analysis|calculus|limits|
1
Determine the mapping related to a machine which shuffles cards
A friend of mine told me about this problem. Given an indexed sequence of 13 cards from Ace to King, there is a machine which shuffles them and gives a certain (indexed) output sequence. Now inserting this output again in the machine, it gives another indexed sequence in the order: Jack, 9, Queen, Ace, King, 3, 5, 2, 6, 4, 7, 10, 8 From the given information, we have to determine what is the algorithm of the machine? I was thinking of a mapping f which represents the algorithm of the machine such that: $f(Ace) = x_1; f(x_1) = \text{Jack}$ But I don't know what to do with it. Thank you for reading.
First choose an encoding of the cards as integers $1, 2, \dots, 13$ . The usual one is to assign $1$ to Ace, use the value of the cards for $2$ through $10$ , then assign $11$ to Jack, $12$ to Queen, and $13$ to King. Now we have a permutation $\sigma \in S_{13}$ such that $$ \tau = \sigma^2 = \begin{pmatrix} 11 & 9 & 12 & 1 & 13 & 3 & 5 & 2 & 6 & 4 & 7 & 10 & 8 \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \end{pmatrix}. $$ This is two-line notation , denoting the fact that $\tau(11) = 1$ , or that the Jack is moved from position $11$ to position $1$ , etc. The pairs are sorted by the bottom row, which indicates where in the deck each card lands. It's typical to represent the permutation by rearranging the pairs, sorting by the top row, yielding the equivalent $$ \tau = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ 4 & 8 & 6 & 10 & 7 & 9 & 11 & 13 & 2 & 12 & 1 & 3 & 5 \end{pmatrix}, $$ where each card rank is in order, and the place in the deck
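As a check on the notation above: the one-line form of $\tau$ turns out to be a single 13-cycle, so the machine's permutation can be recovered explicitly as $\sigma = \tau^7$, since $\tau^{13} = \mathrm{id}$ gives $\sigma^2 = \tau^{14} = \tau$. A quick sketch:

```python
# tau in one-line notation (1-indexed): tau[i-1] = tau(i).
tau = [4, 8, 6, 10, 7, 9, 11, 13, 2, 12, 1, 3, 5]

def compose(p, q):                    # (p o q)(i) = p(q(i))
    return [p[q[i] - 1] for i in range(len(q))]

def power(p, k):
    result = list(range(1, len(p) + 1))
    for _ in range(k):
        result = compose(p, result)
    return result

assert power(tau, 13) == list(range(1, 14))   # tau is a single 13-cycle
sigma = power(tau, 7)
assert compose(sigma, sigma) == tau           # one pass of the machine
```

Odd cycles always have square roots of this form, which is why the puzzle is solvable from a single double-shuffle.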
|permutations|algorithms|card-games|
1
Is there a way to represent the 4th kinematic equation of velocity squared visually?
Kinematic Equations: \begin{align} v &= v_0+at \\ \Delta x &= \left( \frac {v+v_0} 2 \right) t \\ \Delta x &= v_0t + \frac12 at^2 \\ v^2 &= v_0^2 + 2a\Delta x \end{align} So for the fourth equation, final velocity squared is found without $t$ , time. My question: is it possible for this equation to be derived or explained visually/graphically? For the first three equations it is easy to come up with a visual representation as to why the equations make sense under constant acceleration. I tried to imagine the 4th equation as a $v^2$ -vs.- $t$ graph, that is quadratic. But my mind can’t understand how $$2a\Delta x$$ can be explained. I understand how the equation is derived algebraically, but my question is whether the equation can be easily explained in geometric terms like the other kinematic formulas.
One way that you could visualise this is by using the graph $a$ vs $\Delta x$ . Since $a\Delta x$ has the same units as $v^2$ this means that the area under an acceleration vs displacement graph from $0$ to $\Delta x$ has to be equal to the area under the graph velocity vs velocity from $v_0$ to $v$ . In kinematics the acceleration is constant so the area under the $a$ vs $\Delta x$ graph is simply $a\Delta x$ meanwhile the area under the velocity graph would be the area of the trapezium of side lengths $v_0$ and $v$ and height $v-v_0$ . $$a\Delta x=(v-v_0)({v+v_0\over2})$$ $$a\Delta x={v^2-v_0^2\over2}$$ $$v^2={v_0}^2+2a\Delta x$$ or more properly: $$\int_0^{\Delta x}adx=\int_{v_0}^vvdv$$ $$a(\Delta x-0)={v^2\over2}-{{v_0}^2\over2}$$ $$v^2={v_0}^2+2a\Delta x$$ Hope this is what you were looking for.
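A quick numerical check of the area bookkeeping, with arbitrary sample values for $v_0$, $a$, and $t$:

```python
# Under constant acceleration: the trapezoid area (v0 + v)/2 * (v - v0)
# equals a * dx, which rearranges to v^2 = v0^2 + 2 a dx.
v0, a, t = 3.0, 2.0, 5.0
v = v0 + a * t
dx = v0 * t + 0.5 * a * t * t

trapezoid = (v + v0) / 2 * (v - v0)   # area under the v-dv graph
assert abs(trapezoid - a * dx) < 1e-9
assert abs(v**2 - (v0**2 + 2 * a * dx)) < 1e-9
```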
|physics|kinematics|
0
When is cdf $F_{X_1+\dots+X_n}(c)$ of sum of iid zero mean random variables decreasing in sample size $n$?
Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with mean zero (e.g., $N(0,1)$ ). Let $n > m$ and $c \geq 0$ . I want to show that $$ P\left(\sum_{i=1}^n X_i \leq c \right) \leq P \left( \sum_{i=1}^m X_i \leq c \right).$$ In view of existing concentration bounds like Hoeffding's inequality which scale with the length of the sequence, i.e. $n$ and $m$ , I would think that the above statement should hold. Edit: Since it was pointed out that this doesn't hold for specific cases where $n$ and $m$ are small and the distribution of $X_i$ is discrete, assume that $X_1, X_2, \dots$ are Gaussian and $n$ sufficiently large.
From the gaussian assumption, we can calculate analytically the two probabilities, it suffices to notice that $\sum_{i=1}^nX_i\sim \sqrt{n}\mathcal{N}(0,1)$ then $$\mathbb{P}\left(\sum_{i=1}^nX_i\le c \right) = \mathbb{P}\left(\mathcal{N}(0,1)\le \frac{c}{\sqrt n} \right) = \Phi\left(\frac{c}{\sqrt n}\right)$$ As $\frac{c}{\sqrt n}\le\frac{c}{\sqrt m}$ for $c \ge 0$ and $n >m$ , and $\Phi$ is non-decreasing, we have $$\mathbb{P}\left(\sum_{i=1}^nX_i\le c \right)\le \mathbb{P}\left(\sum_{i=1}^mX_i\le c \right)$$
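This monotonicity is easy to confirm numerically with the standard normal CDF written via the error function (a quick check; `c = 1.7` is chosen arbitrarily):

```python
# Phi(c / sqrt(n)) is non-increasing in n for c >= 0, with limit Phi(0) = 1/2.
from math import erf, sqrt

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

c = 1.7
values = [phi(c / sqrt(n)) for n in range(1, 50)]
assert all(values[i] >= values[i + 1] for i in range(len(values) - 1))
assert all(v >= 0.5 for v in values)
```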
|probability|probability-theory|concentration-of-measure|
1
The time-derivative of the Hamiltonian for a 1D harmonic potential
I do not understand how to take the time derivative of the following Hamiltonian $\hat{H}(t) = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2(\hat{x}-a(t))^2$ where $a(t) = v_0t$ . For instance how does $\hat{p}$ change when taking the time derivative of it? And the same for $\hat{x}$ .
The way you have written it, without time arguments in $\hat p$ and $\hat x$ , these two operators are presumed to be in the Schrödinger representation, i.e., time independent . Thus $$ \frac{d}{dt} \hat H(t)=\partial_t \hat H(t)=m\omega^2 v_0(v_0 t-\hat x). $$ Note this is not the Galilean transform of the oscillator hamiltonian, but, instead, a Hamiltonian canonically equivalent to the $v_0=0$ case, and with the same spectrum as it. (Under a quantum canonical transformation, the commutation relations are preserved.) It would be perverse & unfriendly to assume your expression is in the Heisenberg picture while omitting the (t), as the Schrödinger operator above would have to be converted into the corresponding Heisenberg one, $U(t)^\dagger m\omega^2 v_0(v_0 t-\hat x) U(t)$ , liable to confuse you , if you are asking this question.
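For the classical counterpart of this Hamiltonian the explicit time derivative can be checked by a central finite difference (a quick sketch with arbitrary sample values; the operator nature of $\hat x$, $\hat p$ plays no role in the $t$-derivative itself):

```python
# H(x, p, t) = p^2/(2m) + (1/2) m w^2 (x - v0 t)^2 at fixed x, p:
# the explicit time derivative is dH/dt = m w^2 v0 (v0 t - x).
m, w, v0 = 2.0, 3.0, 0.5

def H(x, p, t):
    return p**2 / (2 * m) + 0.5 * m * w**2 * (x - v0 * t)**2

x, p, t, h = 1.3, 0.7, 0.4, 1e-6
numeric = (H(x, p, t + h) - H(x, p, t - h)) / (2 * h)
exact = m * w**2 * v0 * (v0 * t - x)
assert abs(numeric - exact) < 1e-6
```

Since $H$ is quadratic in $t$, the central difference agrees with the exact derivative up to roundoff.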
|derivatives|operator-algebras|harmonic-functions|quantum-mechanics|
0
Category of isomorphisms is equivalent to underlying $\infty$-category
Let $\mathscr{C}$ be an $\infty$ -category (for example taking quasicategories as a model). Recall that the arrow category is $\mathsf{Ar}(\mathscr{C}) = \mathsf{Fun}([1], \mathscr{C})$ . We denote by $\mathsf{Isom}(\mathscr{C})$ the full sub- $\infty$ -category spanned by the equivalences. Then, there is the expected result that $\mathsf{Isom}(\mathscr{C}) \simeq \mathscr{C}$ . This follows for example from Kerodon 02BY where Lurie even shows that $\mathrm{ev}_0, \mathrm{ev}_1 : \mathsf{Isom}(\mathscr{C}) \to \mathscr{C}$ are trivial fibrations. However, the proof there is taken as the corollary of a much more general technical-looking result. It feels like that's overkill for this situation which is why I'm looking for a more immediate way. If one tried to prove this $1$ -categorically, then one could write down inverse functors and explicit natural isomorphisms realizing that the functors realize equivalences of categories. But I couldn't manage to transport such a technique to the $\infty$ -categorical setting.
I was actually thinking recently about something quite similar. You can argue model-independently as follows, as long as you assume you have established some facts about $\infty$ -category theory. It is possible that Lurie's treatment of the material does not cover some of this stuff before this point in Kerodon (it is quite likely in fact), but I think you should in principle be able to establish all this before you start asking yourself your question. I will prove the statement for $\mathrm{ev}_0$ . The other case follows by a similar argument. The inclusion $i\colon[0]\to[1],0\mapsto 0$ and the projection $p\colon[1]\to[0]$ give us a factorization $\mathcal{C}\xrightarrow{p^*}\mathsf{Fun}([1],\mathcal{C})\xrightarrow{i^*}\mathcal{C}$ of the identity functor on $\mathcal{C}$ . Since $\mathsf{Isom}(\mathcal{C})$ is the essential image of $p^*$ (in quasicategories, you can see this from the homotopy equivalence $\Delta^0\simeq NE(2)$ , in which $E(2)$ is the groupoid on two objects wit
|category-theory|homotopy-theory|higher-category-theory|
0
Is there an easier way to evaluate the integral $I=\int_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx$?
As I was surfing the Mathematics side of Instagram (as usual), I came across this integral: $$I=\int_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx$$ It encouraged me to embark on a very satisfying journey to try and evaluate it. The result turned out very nice, of course. I've found that $$I=\frac{1}{24}\ln^{4}2-\frac{\pi^{2}}{24}\ln^{2}2-\frac{49}{2880}\pi^{4}+\text{Li}_{4}\left(\frac{1}{2}\right)$$ using a series of tedious calculations. I will provide my solution below. My question is as follows: Can you think of alternative ways to evaluate this integral? Is it possible to generalize it in any way? (e.g. $F(a,b)=\int_{0}^{1}\frac{\ln x\sin^{-1}ax\cos^{-1}bx}{x}dx$ ) At one point, I defined the function $J_{n}=\int_{0}^{\frac{\pi}{2}}\theta^{n}\ln(\sin\theta)\cot\theta\,d\theta\quad ,n\in\mathbb{N}$ .Is there a nice closed form for this function? Cheers!
We are going to find the definite integral $${I=\int\limits_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx}$$ Ah, this integral. The more I looked into it the more it looked like it might have a closed form. Arriving at it just feels so satisfying! Firstly, by the substitution $x=\sin \theta$ , we obtain $$\begin{align*}I&=\int\limits_{0}^{\frac{\pi}{2}}\theta\left(\frac{\pi}{2}-\theta\right)\ln(\sin\theta)\,\cot\theta d\theta\\&=\frac{\pi}{2}J_{1}-J_{2}\end{align*}$$ where $\displaystyle{J_{n}=\int\limits_{0}^{\frac{\pi}{2}}\theta^{n}\ln(\sin\theta)\cot\theta d\theta}$ For $J_{1}$ , let’s first integrate by parts to get $$\begin{align*}J_{1}&=\int\limits_{0}^{\frac{\pi}{2}}\theta\ln\sin\theta\,\,d\left(\ln\sin\theta\right)\\&=-\int\limits_{0}^{\frac{\pi}{2}}\theta\cot\theta\ln\sin\theta d\theta-\int\limits_{0}^{\frac{\pi}{2}}\ln^{2}\sin\theta d\theta\\&=-\frac{1}{2}\int\limits_{0}^{\frac{\pi}{2}}\ln^{2}\sin\theta d\theta\end{align*}$$ Now $\displaystyle{J_{1}=-\frac{1}{2}\int\limits_{0}^{\frac{\pi}{2}}\ln^{2}\sin\theta\, d\theta}$ is a standard integral: $\int_{0}^{\frac{\pi}{2}}\ln^{2}\sin\theta\, d\theta=\frac{\pi}{2}\ln^{2}2+\frac{\pi^{3}}{24}$ , so $J_{1}=-\frac{\pi}{4}\ln^{2}2-\frac{\pi^{3}}{48}$ .
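Independently of the symbolic work, the claimed closed form can be checked numerically; the sketch below uses a plain midpoint rule (the integrand has only an integrable logarithmic singularity at $0$) and a truncated series for $\operatorname{Li}_4(1/2)$.

```python
# Midpoint-rule check of
#   I = ln^4(2)/24 - pi^2 ln^2(2)/24 - 49 pi^4/2880 + Li4(1/2).
from math import log, asin, acos, pi

def li4_half(terms=200):
    return sum(1 / (2**k * k**4) for k in range(1, terms + 1))

N = 200_000
total = 0.0
for i in range(N):
    x = (i + 0.5) / N
    total += log(x) * asin(x) * acos(x) / x
total /= N

closed = (log(2)**4 / 24 - pi**2 * log(2)**2 / 24
          - 49 * pi**4 / 2880 + li4_half())
assert abs(total - closed) < 1e-4     # both are about -1.3278
```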
|calculus|integration|definite-integrals|special-functions|trigonometric-integrals|
0
How to rigorously prove that $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} = e^{-1}$
I saw this problem: Find $\lim\limits_{n \to \infty }\prod\limits_{r=1}^n \frac{n^2-r}{n^2+r} $ I tried to prove this problem and got: $$\ln(1+x) = \sum_{k=1}^ \infty \frac{(-1)^{k+1} x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$\ln(1-x)=- \sum_{k=1}^ \infty \frac{ x^k}{ k} \ \ \ \ \text{ with radius of convergence = 1 }$$ $$L:= \lim_{n \to \infty }\prod_{r=1}^n \frac{n^2-r}{n^2+r}$$ $$\ln(L) =\lim_{n \to \infty } \sum_{r=1}^n \ln\left(1 - \frac{r}{n^2}\right) - \ln\left(1 + \frac{r}{n^2}\right) $$ $$=-2\lim_{n \to \infty } \sum_{r=1}^n \sum_{k\in 2\mathbb{ N}-1} \frac{ \left(\frac{r}{n^2} \right)^k}{ k} = -1$$ $$L=e^{-1}$$ But this proof is missing a few details and I tried to complete them: $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{k+1}}=\lim_{n\to \infty }\frac{\sum\limits_{r=1}^n\left(\frac rn\right)^k}n=\int_0^1x^kdx=\frac1{k+1}$$ hence for all $k >1 $ $$\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{2k}} =\lim_{n \to \infty }\frac{n^{k+1}}{(k+1)n^{2k}}=\lim_{n \to \infty }\frac{1}{(k+1)n^{k-1}}=0$$
Your last step $\lim_{n \to \infty } \sum_{k=3}^\infty\dots=\sum_{k=3}^\infty\lim_{n \to \infty }\dots$ is indeed lacking some justification. You can repair (and simplify) your proof the following way: instead of your tricky evaluation of $\lim_{n \to \infty }\frac{\sum_{r=1}^n r^k}{n^{k+1}}$ , just notice a rough upper bound: $\sum_{r=1}^nr^k\le\sum_{r=1}^nn^k=n^{k+1}$ , hence $$\sum_{k\in 2\mathbb{N}+1}\frac{\sum_{r=1}^nr^k}{kn^{2k}}\le\sum_{k\ge2}\frac{n^{1-k}}k\le\sum_{k\ge2}n^{1-k}=\frac{1}{n-1}\xrightarrow[n\to\infty]{}0$$
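The limit itself is easy to confirm numerically; summing logarithms keeps the partial products stable, and the error behaves like $O(1/n)$:

```python
# Partial products of prod_{r=1}^{n} (n^2 - r)/(n^2 + r) versus e^{-1}.
from math import exp, log

def partial_product(n):
    s = 0.0
    for r in range(1, n + 1):
        s += log(n * n - r) - log(n * n + r)
    return exp(s)

target = exp(-1)
errs = [abs(partial_product(n) - target) for n in (100, 200, 400)]
assert errs[0] < 0.01            # already close at n = 100
assert errs[2] < errs[1] < errs[0]   # error shrinks as n grows
```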
|real-analysis|calculus|sequences-and-series|limits|
1
Maximal number of vertices in graph $G$ such that every vertex has degree $\binom{n}{2}$
What is the maximum number of vertices in a connected graph $G$ if we are given that every vertex has degree $\binom{n}{2}$ ? For small cases (1,2,3,4), the answer seems to be $2^{n-1},$ but how would you go about proving this? Is there a way to find an isomorphism between subsets of edges from a vertex and vertices?
As long as $n \ge 3$ , so ${n \choose 2} \ge 3$ , there is no maximum. Start with a cycle graph $C_N$ on vertices $1 \ldots N$ where $N$ is a large integer. This is already connected, and each vertex has degree $2$ . Adding more edges will keep it connected. To add $2k$ more edges to every vertex, join each vertex to the vertices $2$ to $k+1$ positions ahead of it in the cycle (and thus also to those $2$ to $k+1$ positions behind). This works if $N \ge 2k+3$ . If ${n \choose 2}$ is odd, you want to add one more edge per vertex. One way to do this is to take two copies of the current graph and join each vertex of one to its copy in the other.
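The even-degree part of this construction can be spot-checked in code (a sketch with $k = 2$, so every vertex has degree $6 = \binom{4}{2}$, and $N = 50$ chosen arbitrarily large):

```python
# Circulant graph: cycle C_N plus chords to vertices 2..k+1 positions away.
# Result: connected and (2 + 2k)-regular, for any N >= 2k + 3.
N, k = 50, 2
offsets = range(1, k + 2)          # offset 1 is the cycle, 2..k+1 the chords

adj = {v: set() for v in range(N)}
for v in range(N):
    for d in offsets:
        adj[v].add((v + d) % N)
        adj[(v + d) % N].add(v)

assert all(len(adj[v]) == 2 + 2 * k for v in adj)   # 6-regular

# connectivity via breadth-first search from vertex 0
seen, frontier = {0}, [0]
while frontier:
    v = frontier.pop()
    for u in adj[v]:
        if u not in seen:
            seen.add(u)
            frontier.append(u)
assert len(seen) == N
```

Increasing `N` gives arbitrarily many vertices with the same fixed degree, which is the point of the answer.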
|graph-theory|combinatorial-geometry|extremal-graph-theory|
1
Partial limits, question clarification.
I don't understand a question and I need clarification as to what is being asked. The question is: Assume that $\forall n, \frac{n}{n+1}$ is a partial limit of the sequence $\left\{{a_n}\right\}^\infty _{n=1}$ , and that there is no partial limit bigger than 1, prove that there exists a subsequence $\left\{{a_{n_k}}\right\}^\infty _{n=1}$ such that $\lim_{n\to\infty}a_{n_k}=1$ . Are they saying that any $\frac{n}{n+1}$ is a partial limit of the sequence (1/2, 2/3, 3/4, 4/5,...)? Because it doesn't seem to make sense to me, how do I understand and solve this question?
We can rephrase the problematic part of the question (using the definition of partial limit ) as follows: For any $n$ there exist a subsequence of $(a_n)$ which converges to $\frac{n}{n+1}$ . It is always nice to have some examples in mind. Here, $(a_n)$ could be arbitrary enumeration of rationals between half and one. Try to solve it now :). If you still struggle, here is a sketch of a proof: For any $n$ you can pick (from the corresponding subsequence of $(a_n)$ ) an element which is arbitrarily close to $\frac{n}{n+1}$ . So for $n=1$ there will be some $a_{k_1}$ which will be no further than, say $2^{-1}$ from $\frac{1}{2}$ . For $n=2$ there will be some $a_{k_2}$ no further than $2^{-2}$ from $\frac{2}{3}$ . For $n=3$ there will be some $a_{k_3}$ no further than $2^{-3}$ from $\frac{3}{4}$ . And so on. Subsequence $(a_{k_n})_n$ converges to $1$ .
|calculus|limits|
0
Sum of Components of k-dimensional Inverse Hypergeometric Random Variable is... Inverse Hypergeometric?
I have come around the following statement regarding a multivariate inverse (negative) hypergeometric distribution in Section 11 of >>Guenther, William C. "The inverse hypergeometric‐a useful model." Statistica Neerlandica 29, no. 4 (1975): 129-144., https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9574.1975.tb00257.x \begin{align*}\mathbb{P}&[X_1=x_1,X_2=x_2,\dots,X_k=x_k,]=p^*(N,n,d_1,\dots,d_k,x_1,\dots,x_k)\\&= \frac{\begin{pmatrix}x_1-1\\d_1-1\end{pmatrix} \begin{pmatrix}x_2-1\\d_2-1\end{pmatrix}\dots\begin{pmatrix}x_k-1\\d_k-1\end{pmatrix}\begin{pmatrix}N-\sum_{i=1}^k x_i\\n-\sum_{i=1}^kd_i\end{pmatrix}} {\begin{pmatrix}N\\n\end{pmatrix}}\\&x_i\geq d_i,\quad i= 1,2,\dots,k\quad\sum_{i=1}^kx_i\leq N-n+\sum_{i=1}^kd_i\end{align*} We will call $(X_1, X_2, . . ., X_k)$ a k-dimensional inverse hypergeometric random variable. [ $\dots$ ] It is easy to verify that a multivariate inverse hypergeometric probability function defined by (11.2) has the property that all marginals and c
The $X_i$ 's are not independent, since they represent the counts for each category. The text says that each $X_i$ (inside a multivariate inverse hypergeometric random vector) individually has a (univariate) inverse hypergeometric distribution. The part in blue says that if you merge say categories 1 and 2, then the marginal count $X_1 + X_2$ for those merged categories will again be (univariate) inverse hypergeometric. The same is true if you merge 3 categories, etc.
|probability|probability-distributions|
1
About the continuity of the partial derivative
I'm trying to study if the following function has continuous partial derivatives at the origin: $$f(x, y) = \begin{cases}\frac{x^4y^3}{x^8 + y^4} & (x, y) \neq (0, 0) \\\\ \quad 0 & (x, y) = (0, 0)\end{cases}$$ I proved $f$ is continuous at the origin, and I also proved its partial derivatives exist at the origin. Now to show the continuity of $f'_x$ at $(0, 0)$ here is what I did: $$\frac{\partial f}{\partial x} = \frac{4x^3y^3(y^4 - x^8)}{(x^8 + y^4)^2}$$ Having observed it goes to zero along various paths, I did: $$\bigg|\frac{4x^3y^3(y^4 - x^8)}{(x^8 + y^4)^2}\bigg| \leq \frac{4x^2y^2|x| |y| (x^8 + y^4)}{(x^8+y^4)^2} \leq \frac{4x^2y^2|x| |y|}{y^4} = \frac{4x^2|x|}{|y|}$$ But now I am stuck. If for example I say $y = x^4$ , this reduces to $\frac{4}{|x|}$ which does not goes to zero. But the notes say the partial derivatives are continuous (without any proof...) Any help? Thank you! Notice We cannot use polar coordinates. We are demanded to find a distance function to make an upper
If you really don't like trigonometric functions, we can consider the rectangles instead; however, this is not really any different than Ted Shifrin's answer. Consider the family of rectangles $$ \max(x^2,|y|)=r\tag1 $$ On each rectangle, $$ r^4\le x^8+y^4\le2r^4\tag2 $$ and for $r\le1$ , $\mathrm{d}((x,y),(0,0))\ge r$ . Thus, we have $$ \begin{align} |f_x(x,y)| &=\left|\frac{4x^3y^3\left(y^4-x^8\right)}{\left(x^8+y^4\right)^2}\right|\tag{3a}\\ &\le\frac{4r^{3/2}r^3r^4}{r^8}\tag{3b}\\[9pt] &=4r^{1/2}\tag{3c}\\[12pt] &\le4\mathrm{d}((x,y),(0,0))^{1/2}\tag{3d} \end{align} $$ So we can choose a rectangle small enough so that $f_x(x,y)$ is as small as we wish. However, on the path $(x,y)=\left(r^{1/2},r\right)$ , $$ \begin{align} |f_y(x,y)| &=\left|\frac{x^4y^2\left(3x^8-y^4\right)}{\left(x^8+y^4\right)^2}\right|\tag{4a}\\ &=\frac{r^2r^2(2r^4)}{4r^8}\tag{4b}\\[9pt] &=\frac12\tag{4c} \end{align} $$ Whereas, along the path $(x,y)=(0,r)$ , $f_y(x,y)=0$ . That is, $f_y$ cannot be defined at $(
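The two-path computation in (4a)–(4c) is easy to confirm numerically, using the formula for $f_y$ away from the origin (a quick check):

```python
# f_y(x, y) = x^4 y^2 (3 x^8 - y^4) / (x^8 + y^4)^2 for (x, y) != (0, 0).
# Along (x, y) = (r^{1/2}, r) it equals 1/2; along (0, r) it is 0,
# so f_y has no limit at the origin.
def f_y(x, y):
    return x**4 * y**2 * (3 * x**8 - y**4) / (x**8 + y**4)**2

for r in (1e-1, 1e-2, 1e-3):
    assert abs(f_y(r**0.5, r) - 0.5) < 1e-9
    assert f_y(0.0, r) == 0.0
```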
|multivariable-calculus|continuity|partial-derivative|
0
Expectation of Negative Multinomial Distribution
If a trial consists of throwing an n-sided fair die having numbers a1,a2,a3...,an on its faces, what will be the expected number of trials required before we get at least k1 times a1, k2 times a2,....kn times an? I think it can be modelled as the expected value of a negative multinomial distribution because each individual count follows a multinomial distribution. In the simpler case where the trial is binomial, we can model "The expected number of trials required before we get k successes" as negative binomial. An example for understanding...suppose that there is a 3-sided die with numbers 1, 2 and 3 and I want to know the expected number of trials before I get to see say 4 1s, 5 2s and 6 3s. PS: I cannot find any good free resource available on the net on negative multinomial distributions
Here is a paper that addresses the moments of the negative multinomial distribution: https://doi.org/10.3390/mca28040085
|probability|probability-distributions|binomial-distribution|multinomial-coefficients|negative-binomial|
0
Is $\emptyset : \emptyset \to \emptyset$ an isomorphism from $(\emptyset, \leq)$ to $(\emptyset, \leq)$?
I was asked to determine whether the following statement is true: If every function $F : P \to P$ is a homomorphism from $(P, \leq)$ to $(P, \leq)$ , with $\leq$ an arbitrary order, then $|P| = 1$ . It is straightforward to observe that $|P| \not> 1$ . However, $|P| = 0$ seems to satisfy the first part of the predicate. If $P = \emptyset$ there is one and only one function over $\emptyset^2$ that maps from the empty set to itself; namely, $\emptyset$ . The definition of a homomorphism $F$ involves statements of the form: for all $x, y$ in $P$ , this and that involving $F$ holds... So to refute that $F$ is a homomorphism one has to find a counterexample to these properties. Of course, such counterexamples cannot be found in the empty set. So $\emptyset : \emptyset \to \emptyset$ is a homomorphism from $(\emptyset, \leq)$ to $(\emptyset, \leq)$ . The notation $\emptyset : \emptyset \to \emptyset$ is odd but seems formally correct, because $\emptyset$ is a function and it does have itself as both its domain and its codomain.
Note that $\mathrm{id}_{\varnothing}=\varnothing$ ; that is, the empty set is also the identity map from $\varnothing$ to $\varnothing$ . Therefore, you have $\varnothing\circ\varnothing=\varnothing=\mathrm{id}_{\varnothing}$ , so indeed you have that $\varnothing$ is an invertible function $\varnothing\to\varnothing$ with $\varnothing=\varnothing^{-1}$ . You can also verify that if we view $\varnothing$ as a function from $\varnothing$ to $\varnothing$ we must conclude that it is one-to-one and onto. I expect that the statement you were given was meant to include a non-emptiness clause: If $P\neq\varnothing$ and every function $F\colon P\to P$ is a homomorphism from $(P,\leq)$ to $(P,\leq)$ , with $\leq$ an arbitrary order, then $|P|=1$ . (I'm not wild about "an arbitrary order" there either... poor phrasing, IMHO...) Unless your definition of partially ordered set requires non-emptiness... It is not uncommon to see people forget to include a nonemptiness clause when they should.
|discrete-mathematics|elementary-set-theory|order-theory|
0
Is there an easier way to evaluate the integral $I=\int_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx$?
As I was surfing the Mathematics side of Instagram (as usual), I came across this integral: $$I=\int_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx$$ It encouraged me to embark on a very satisfying journey to try and evaluate it. The result turned out very nice, of course. I've found that $$I=\frac{1}{24}\ln^{4}2-\frac{\pi^{2}}{24}\ln^{2}2-\frac{49}{2880}\pi^{4}+\text{Li}_{4}\left(\frac{1}{2}\right)$$ using a series of tedious calculations. I will provide my solution below. My question is as follows: Can you think of alternative ways to evaluate this integral? Is it possible to generalize it in any way? (e.g. $F(a,b)=\int_{0}^{1}\frac{\ln x\sin^{-1}ax\cos^{-1}bx}{x}dx$ ) At one point, I defined the function $J_{n}=\int_{0}^{\frac{\pi}{2}}\theta^{n}\ln(\sin\theta)\cot\theta\,d\theta\quad ,n\in\mathbb{N}$ .Is there a nice closed form for this function? Cheers!
\begin{align} &\int_{0}^{1}\frac{\ln x\sin^{-1}x\cos^{-1}x}{x}dx\\ =&\ \frac12\int_{0}^{1}\sin^{-1}x\cos^{-1}x\ d(\ln^2x)\\ \overset{ibp}=&\ \frac12\int_{0}^{1}\frac{\ln^2x\ (\sin^{-1}x-\cos^{-1}x)}{\sqrt{1-x^2}}\overset{x=\sin t}{dx}\\ =&-\frac\pi4 \int_0^{\pi/2}\ln^2(\sin t)\ dt + \int_0^{\pi/2}t\ln^2(\sin t)\ dt \end{align} where the first integral $\int_0^{\pi/2}\ln^2(\sin t)\ dt= \frac{\pi^3}{24}+\frac\pi2\ln^22$ is better known and the second is referenced below $$\int_0^{\pi/2} t\ln^2(\sin t)\ d t = \text{Li}_4(\frac12) -\frac{19\pi^4}{2880}+\frac{\pi^2}{12}\ln^22+\frac1{24}\ln^42 $$
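As a numerical cross-check of the final closed form (a sketch assuming SciPy is available; $\operatorname{Li}_4(1/2)$ is summed from its rapidly convergent defining series):

```python
import math
from scipy.integrate import quad

# Li_4(1/2) = sum_{k>=1} 1/(2^k k^4); 80 terms are far more than enough.
li4_half = sum(1 / (2**k * k**4) for k in range(1, 80))

# The integrand has only a logarithmic singularity at 0, which quad handles.
integrand = lambda x: math.log(x) * math.asin(x) * math.acos(x) / x
numeric, _ = quad(integrand, 0, 1)

closed = (math.log(2)**4 / 24
          - math.pi**2 * math.log(2)**2 / 24
          - 49 * math.pi**4 / 2880
          + li4_half)
assert abs(numeric - closed) < 1e-6
```

Both sides come out to roughly $-1.3278$, matching the asker's stated result.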
|calculus|integration|definite-integrals|special-functions|trigonometric-integrals|
0
A couple of clarifications about curvature, T(t), N(t)
Suppose $f: \mathbb{R} \rightarrow \mathbb{R^3}$ is for example 3 times differentiable, not necessarily smooth function (smooth meaning $f'(t)$ exists and $f'(t) \ne \overrightarrow{0}$ for each $t$ ). Here $f$ is not a path-length parametrization. Suppose also that $T(t)$ - the unit tangent at $t$ $N(t)$ - the principal normal at $t$ $\kappa(t)$ - the curvature at $t$ Is it correct that: when $f'(a) = \overrightarrow{0}$ then $N(a)$ and $\kappa(a)$ are not defined? when $f'(a) \ne \overrightarrow{0}$ then $N(a)$ and $\kappa(a)$ are defined? the image of $f$ (which is the curve $C$ in $\mathbb{R^3}$ ) has "a corner" at the point $t = a$ , if and only if $f'(a) = 0$ ? Note: What do I mean by "a corner"? Well, I am not sure of the exact definition but e.g. the image of the function $g(t) = (t^2, t^3, 0)$ has a corner at $t = 0$ . So are the above statements true or not? I am asking because of this formula. I got confused by chapter 2.8 of this book https://www.amazon.com/Vector-Calculus-
It happens that I'm also studying from this book, so you may take this with a grain of salt. By definition, if $f:\mathbb{R}\to\mathbb{R}^n$ is a smooth (has non-zero first derivative everywhere), $n$-times differentiable parametrization, then we use a generalized Frenet–Serret formula, via the following theorem. Let $f:D\subseteq\mathbb{R}\to\mathbb{R}^n$ be a smooth parametrization of the curve $C=f(D)$ . By the theorem on the existence of arc-length parametrizations, there exists a function $h=f\circ \lambda^{-1}:\mathbb{R}\to\mathbb{R}^n$ , where $\lambda'(t)=||f'(t)||$ . If $||h''(s)||=\kappa_h(s)>0$ , then the function $h$ possesses its Frenet–Serret apparatus. Then, denoting $v(t)=\lambda'(t)=||f'(t)||$ , we have: $f'(t)=v(t)T_h(\lambda(t))$ and $f''(t)=v'(t)T_h(\lambda(t))+v^{2}(t)\kappa_h(\lambda(t))N_{h}(\lambda(t))$ . If $\kappa_{h}(s)=0$ , then the second term vanishes. If $n=3$ , then we can talk about the binormal and use cross products. When confused, refer back to this theorem.
|calculus|differential-geometry|vector-analysis|
0
The composition of two unbounded linear operator is also unbounded or not?
Let $H$ be a Hilbert space, and consider some linear operators $ A, B: H\rightarrow H $ . From functional analysis, I know that if $A$ and $B$ are both bounded, then the composition $AB$ is bounded. I'm curious about the situation when $A$ and $B$ are unbounded and linear. Now, is $AB$ also unbounded or not? I tried to prove it or find a counterexample to show it's wrong, but I failed. Could someone give me a hand? I've found an example, but not in a Hilbert space: $H=l^\infty,\ \|x\|=\sup_n|x_n|$ . Let $a=(1,1,1/2,2,1/3,3,\cdots),\ Ax=(a_nx_n)_{n=1}^\infty$ and $b=(1,1,2,1/2,3,1/3,\cdots),\ Bx=(b_nx_n)_{n=1}^\infty$ . It seems that $AB=I$ ; hence, $AB$ is bounded, but both $A, B$ are unbounded. Am I right? Are there any examples in Hilbert space?
MaoWao's answer provides an example where you require the domain to actually be the entirety of $H$ , which matches your question description. However, unbounded operators in practice usually mean something else, namely unbounded closed operators that are defined on a dense subspace (where closed means the graph of the operator is closed in $H \times H$ ). Such an operator can never be defined on the entirety of $H$ (or any Banach space, for that matter) due to closed graph theorem. If that is what you want instead, the following is an example: $A$ is multiplication by $x$ on $L^2(\mathbb{R})$ and $B$ is multiplication by $1/x$ on $L^2(\mathbb{R})$ . Then $AB$ is the identity operator. (More precisely, the closure of $AB$ is the identity operator. $AB$ itself is only densely defined.)
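The asker's $l^\infty$ example is easy to sanity-check with exact arithmetic: the two diagonal sequences multiply entrywise to $1$, so $AB=I$ on their common domain, while the entries of $a$ (and of $b$) are unbounded. A small sketch:

```python
from fractions import Fraction

# First few diagonal entries of the asker's multiplication operators:
# a = (1, 1, 1/2, 2, 1/3, 3, ...), b = (1, 1, 2, 1/2, 3, 1/3, ...)
a, b = [], []
for k in range(1, 50):
    a += [Fraction(1, k), Fraction(k)]
    b += [Fraction(k), Fraction(1, k)]

# Entrywise products are all 1, so AB acts as the identity...
assert all(x * y == 1 for x, y in zip(a, b))
# ...while the diagonal entries themselves are unbounded (sup |a_n| = inf).
assert max(a) == 49
```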
|functional-analysis|hilbert-spaces|unbounded-operators|
0
Is it possible to find matrix $A$ and matrix $B$ if you know $AB$ and $BA$?
Assuming that matrix $A$ & $B$ are both square and invertible, is there a reliable way to separate out matrix $A$ and $B$ from $AB$ and $BA$ ? For example, if I want to find matrix $A$ and matrix $B$ : $$ AB = \begin{pmatrix} 4 & -4 & -1 \\ 5 & 1 & 1 \\ -5 & -1 & -1 \\ \end{pmatrix} $$ and $$ BA = \begin{pmatrix} 5 & 4 & 1 \\ -4 & 1 & -5 \\ -1 & 1 & -2 \\ \end{pmatrix} $$ How would would I find $A$ and $B$ ? (for the record, these specific matrices were arbitrarily made with a calculator and are only here for the sake of example, so:) $$ A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 1 \\ -2 & -1 & -1 \\ \end{pmatrix} $$ and $$ B = \begin{pmatrix} 1 & 1 & -1 \\ 2 & -2 & 1 \\ 1 & 1 & 2 \\ \end{pmatrix} $$
$\def\ed{\stackrel{\text{def}}{=}}$ If $\ K_1\ $ and $\ K_2\ $ are two square matrices, then there exist invertible square matrices $\ A\ $ and $\ B\ $ satisfying the equations \begin{align} AB&=K_1\label{e1}\tag{1}\\ BA&=K_2\label{e2}\tag{2} \end{align} if and only if $\ K_1\ $ and $\ K_2\ $ are similar , and both invertible. If $\ A\ $ and $\ B\ $ are invertible matrices satisfying equations (\ref{e1}) and (\ref{e2}), then $$ A=K_1B^{-1}=B^{-1}K_2\ , $$ from which it follows that $\ K_1=B^{-1}K_2B\ $ —that is, $\ K_1\ $ and $\ K_2\ $ are similar—and since $\ B^{-1}A^{-1}K_1=I=A^{-1}B^{-1}K_2\ ,$ then $\ K_1\ $ and $\ K_2\ $ are invertible. Conversely, if $\ K_1\ $ and $\ K_2\ $ are invertible and similar, then there exists an invertible square matrix $\ Q\ $ such that $$ K_2=QK_1Q^{-1}\ ,\label{e3}\tag{3} $$ and if we put $\ A\ed K_1Q^{-1}\ ,$$\ B\ed Q\ ,$ then $\ A\ $ and $\ B\ $ satisfy equations (\ref{e1}) and (\ref{e2}) and are invertible. You can determine whether $\ K_1\ $ and $\ K_2\ $ are similar (and hence whether such $\ A\ $ and $\ B\ $ exist) by comparing their rational canonical forms.
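A quick numerical illustration with the matrices from the question. (Note, incidentally, that the question's $A$ has third row equal to minus its second row, so it is actually singular and does not satisfy the invertibility hypothesis; the conjugacy $K_2=BK_1B^{-1}$ still holds because $B$ is invertible.) The rescaling at the end shows that $A$ and $B$ are never uniquely determined by $AB$ and $BA$:

```python
import numpy as np

# Matrices A and B from the question.
A = np.array([[1., 2., -1.], [2., 1., 1.], [-2., -1., -1.]])
B = np.array([[1., 1., -1.], [2., -2., 1.], [1., 1., 2.]])
K1, K2 = A @ B, B @ A

# K1 and K2 are conjugate by B: K2 = B K1 B^{-1}.
assert np.allclose(K2, B @ K1 @ np.linalg.inv(B))

# The factorization is not unique: rescaling to (A/c, cB) leaves both
# products unchanged, so AB and BA alone cannot pin down A and B.
A2, B2 = A / 2, 2 * B
assert np.allclose(A2 @ B2, K1) and np.allclose(B2 @ A2, K2)
```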
|linear-algebra|matrices|matrix-equations|
0
If two vectors are perpendicular, the triangles are similar.
I'm working through Ted Shifrin's linear algebra book, and when introducing the dot product he motivates it by saying that if $P$ and $Q$ are two points in the plane and the angle $\angle POQ$ is a right angle, then the triangles $\triangle OAP$ and $\triangle OBQ$ are similar, where $A$ is the projection of $\vec{OP}$ onto the x-axis and $B$ is the projection of $\vec{OQ}$ onto the y-axis (see image). Now, I'm not that good at geometry, so I got a little bit lost in this part. How would one prove that the triangles are similar, and which theorems/postulates are used? Thanks in advance.
As given in the question, $\angle POQ = 90^\circ$. Thus $\angle POB$ and $\angle BOQ$ are complementary, i.e. $\angle POB + \angle BOQ = 90^\circ$. Also, $OA \perp OB$, so $\angle BOP + \angle AOP = 90^\circ$. Comparing the two equations, we get $\angle BOQ = \angle AOP$. Also, $\angle OBQ = \angle OAP = 90^\circ$. Since two pairs of angles are equal, we can apply AA similarity to the two triangles. Thus, they are similar.
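The angle-chasing argument can also be checked numerically: whenever $\vec{OP}\perp\vec{OQ}$, the side ratios $|OA|/|AP|$ and $|OB|/|BQ|$ agree, which is exactly the similarity of the two right triangles. A small sketch (the coordinates and helper construction are my own):

```python
import math
import random

# If OP is perpendicular to OQ, then with A = (p_x, 0) (foot of P on the
# x-axis) and B = (0, q_y) (foot of Q on the y-axis), triangles OAP and
# OBQ are similar, so the ratios |OA|/|AP| and |OB|/|BQ| must agree.
rng = random.Random(1)
for _ in range(100):
    px, py = rng.uniform(0.5, 2), rng.uniform(0.5, 2)
    # Build Q perpendicular to P: rotate P by 90 degrees, random scale.
    s = rng.uniform(0.5, 2)
    qx, qy = -s * py, s * px
    assert abs(px * qx + py * qy) < 1e-12                  # OP . OQ = 0
    assert math.isclose(abs(px) / abs(py), abs(qy) / abs(qx))
```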
|geometry|euclidean-geometry|
1
Notational Ambiguity: Covariant Derivative
Let $M$ be a smooth manifold and $\nabla$ the Levi-Civita connection. Now, I am a bit puzzled by a serious notational ambiguity, namely for the second covariant derivative. To explain myself, let us consider a vector field $v\in\Gamma(TM)$ . Then, the notation $\nabla_{a}\nabla_{b}$ can mean two things: Some authors use the notation $\nabla_{a}:=\nabla_{\partial_{a}}$ . In this case, $\nabla_{\partial_{b}}v\in\Gamma(TM)$ and $\nabla_{\partial_{a}}(\nabla_{\partial_{b}}v)\in\Gamma(TM)$ . In local coordinates, we get $$\nabla_{\partial_{a}}\nabla_{\partial_{b}}v=(\partial_{a}(\partial_{b}v^{c}+\Gamma_{db}^{c}v^{d})+\Gamma_{ad}^{c}(\partial_{b}v^{d}+\Gamma_{eb}^{d}v^{e}))\partial_{c}$$ Some authors, especially in the physics literature (but for example also in Wald's book on GR), write expressions like $\nabla_{a}\nabla_{b}v^{c}$ to indicate the coefficients of the $(1,2)$ tensor obtained by applying $\nabla$ twice. In other words, $\nabla_{a}$ is acting on the $(1,1)$ tensor with coefficients $\nabla_{b}v^{c}$ .
One style is in index-free notation and the other is in index notation. You know which style you're looking at, based on whether vectors and tensors are written with indices all the time. In index-free notation, vectors and tensors are written like $v$ , $h$ , etc., and covariant derivatives are written like $\nabla v$ or $\nabla_X h$ . In index notation, vectors are written like $v^a$ , $h_{ab}$ , etc., and covariant derivatives are written like $\nabla_a v^b$ or $X^a \nabla_a h_{bc}$ . (Or $v^b{}_{;a}$ or $h_{bc;a} X^a$ , but we'll set that aside.) In this system, the meaning of $\nabla_a \nabla_b v^c$ or $\nabla_c \nabla_d h_{ab}$ is always (2). The index-free notation $\nabla^2_{X,Y} v$ translates to $X^a Y^b \nabla_a \nabla_b v^c$ , while $\nabla_X \nabla_Y v$ translates to $X^a \nabla_a (Y^b \nabla_b v^c)$ . Their difference is $X^a (\nabla_a Y^b) (\nabla_b v^c)$ which translates back to $\nabla_{\nabla_X Y} v$ . Even if $\nabla_a v$ could be short for $\nabla_{\partial_a} v$ , the index-notation expression $\nabla_a \nabla_b v^c$ still unambiguously denotes the second covariant derivative, i.e. meaning (2).
|differential-geometry|riemannian-geometry|differential-topology|laplacian|connections|
0
$KG$-module of finite $K$-dimension contains a simple $KG$-submodule
Let $K$ be a field, $G$ a group and $KG$ the corresponding group ring. Let $M$ be a $KG$ -module, therefore it can also be viewed as a $K$ -vector space. Suppose that $M$ has finite $K$ -dimension, $\dim_KM = n$ . Then, $\exists U \leq M$ a $KG$ -submodule of $M$ such that $U$ is simple. Is this result true?
I arrived at the following proof, just wanted to make sure it is fine. Proof. If $M$ is not simple, there exists $T_1$ , a non-zero proper submodule of $M$ . Since $T_1$ is a $KG$ -submodule of $M$ , it is also a $K$ -subspace of $M$ viewed as a $K$ -vector space. We have that $\dim_K T_1 = n_1 \leq n$ . If $n_1 = n$ , it would mean that $T_1 = M$ when viewing them as vector spaces; thus $n_1 < n$ . If $T_1$ is not simple, we may apply the same argument inductively, in at most $n$ steps, until arriving at a $T_m$ such that $T_m$ is simple or has $\dim_K T_m = 1$ . In case $\dim_K T_m = 1$ : suppose that $T \leq T_m$ is a submodule; then it is also a $K$ -subspace of $T_m$ . However, since $T_m$ is of dimension $1$ , its only subspaces are $0$ and itself, thus $T_m$ is simple. In either case we have found $T_m$ , a simple submodule of $M$ .
|abstract-algebra|modules|group-rings|
0
Expansion of Gamma function at half integers, around an integer
From the Taylor series expansion of the Gamma function, I know that: $$ \Gamma\!\left(z + \frac12\right) = \sum_{k=0}^\infty \frac{\Gamma^{(k)}(z)}{k!}\left(\frac12\right)^k, $$ where $\Gamma^{(k)}(z) = \frac{d^k}{dz^k}\Gamma(z)$ , valid for any $z$ within the radius of convergence. My question is: is there a (perhaps analytical) expression for these derivatives at $z\in\mathbb N$ , a positive integer? Thanks for any suggestions!
You can obtain a recurrence relation for those values. First note that, for $k\ge 1$ , $$ \frac{{{\rm d}^k \Gamma (z)}}{{{\rm d}z^k }} = \frac{{{\rm d}^{k - 1} }}{{{\rm d}z^{k - 1} }}(\psi^{(0)} (z)\Gamma (z)) = \sum\limits_{j = 0}^{k - 1} {\binom{k-1}{j}\psi ^{(j)} (z)\frac{{{\rm d}^{k - j - 1} \Gamma (z)}}{{{\rm d}z^{k - j - 1} }}} , $$ where $\psi^{(j)}$ is a polygamma function . Assume that $z = n \ge 1$ is an integer. Then $$ \psi ^{(0)} (n) = - \gamma + \sum\limits_{r = 1}^{n - 1} {\frac{1}{r}}, $$ with $\gamma$ being the Euler–Mascheroni constant , and $$ \psi ^{(j)} (n) = ( - 1)^{j + 1} j!\sum\limits_{r = 0}^\infty {\frac{1}{{(r + n)^{j + 1} }}} = ( - 1)^{j + 1} j!\bigg( {\zeta (j + 1) - \sum\limits_{r = 1}^{n - 1} {\frac{1}{{r^{j + 1} }}} } \bigg) $$ for $j\ge 1$ , with $\zeta$ being the Riemann zeta function . Of course, the formula $$ \Gamma\! \left( {n + \frac{1}{2}} \right) = \frac{{(2n)!}}{{4^n n!}}\sqrt \pi $$ provides a much more efficient way to compute the gamma function at half-integer arguments.
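The recurrence is straightforward to implement. As a sketch (assuming SciPy's `polygamma` and `gamma` are available), one can verify it by summing the Taylor series of $\Gamma(n+\tfrac12)$ around $n$ and comparing with the exact half-integer formula:

```python
import math
from scipy.special import polygamma, gamma

def gamma_derivs(n, K):
    """Gamma^{(k)}(n) for k = 0..K via the Leibniz-type recurrence above."""
    d = [float(gamma(n))]
    for k in range(1, K + 1):
        d.append(sum(math.comb(k - 1, j) * float(polygamma(j, n)) * d[k - 1 - j]
                     for j in range(k)))
    return d

# Spot-check the zeta expression: psi^{(1)}(3) = zeta(2) - 1 - 1/4.
assert abs(float(polygamma(1, 3)) - (math.pi**2 / 6 - 1.25)) < 1e-12

n, K = 3, 30
d = gamma_derivs(n, K)
# Truncated Taylor series of Gamma(n + 1/2) around n...
taylor = sum(d[k] / math.factorial(k) * 0.5**k for k in range(K + 1))
# ...against the exact value (2n)!/(4^n n!) * sqrt(pi).
exact = math.factorial(2 * n) / (4**n * math.factorial(n)) * math.sqrt(math.pi)
assert abs(taylor - exact) < 1e-8
```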
|taylor-expansion|gamma-function|
1
Issues with motivating the conditional connective
The conditional operator $\Rightarrow$ can be tricky to motivate in the cases where it is True. An approach taken by Elliott Mendelson in Number Systems and the Foundations of Analysis is to examine the expression $(C\land D)\Rightarrow D$ . My understanding is as follows: Because English sentences that can be paraphrased with this schema are regarded as true by virtue of their structure, we axiomatically define any interpretation of the sentence letters in this schema to evaluate to a True truth value. For example, ``If I like ice cream and Au is the chemical abbreviation for gold, then I like ice cream'' is obviously a true sentence by its structure, even if I hate ice cream. In particular, when $C$ is false and $D$ is true, we get that $F\Rightarrow T$ evaluates to $T$ . The above makes sense to me, but then if I again try to use English to motivate the truth table of the conditional operator, I can run into issues. In particular, the following English structure sounds clearly false, irrespective of the truth value of $C$ : ``If $C$, then not $C$.''
$C\implies \neg C~~$ is "problematic" only if $C$ is true. It is trivial to prove: $~~(C \implies \neg C)\iff \neg C$ The truth table: Using a form of natural deduction (not using the truth table), plain text version: 1 C => ~C Premise 2 C Premise 3 ~C Detach, 1, 2 4 C & ~C Join, 2, 3 5 ~C Conclusion, 2 6 [C => ~C] => ~C Conclusion, 1 7 ~C Premise 8 C Premise 9 C Premise 10 ~C & C Join, 7, 8 11 ~C Conclusion, 9 12 C => ~C Conclusion, 8 13 ~C => [C => ~C] Conclusion, 7 14 [[C => ~C] => ~C] & [~C => [C => ~C]] Join, 6, 13 15 [C => ~C] <=> ~C Iff-And, 14 Re: the meaning of "implies" in natural language vs. propositional logic. Example: Consider the implication, "If it is raining, then it is cloudy." In propositional logic, this does not mean that rain causes cloudiness, or that it is always cloudy when it is raining (sunshowers being a counterexample). It means only that, at present, it is not both raining and not cloudy.
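The equivalence can also be confirmed mechanically by exhausting the truth table in a few lines (a sketch; `implies` is my own helper for the material conditional):

```python
from itertools import product

def implies(p, q):
    # Material conditional: p => q is false only when p is true and q false.
    return (not p) or q

# (C => ~C) <=> ~C holds for every truth value of C.
for C in (False, True):
    assert implies(C, not C) == (not C)

# And the Mendelson schema (C & D) => D is a tautology.
for C, D in product((False, True), repeat=2):
    assert implies(C and D, D)
```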
|logic|propositional-calculus|
0
Distance between two hyperplanes
I have two parallel hyperplanes $$a^Tx=b_1,\qquad a^Tx=b_2$$ where $a \in \mathbb{R}^n, x \in \mathbb{R}^n, b_1, b_2 \in \mathbb{R}$ , and I want to find the distance between the two. I have read that the distance between the two hyperplanes is also the distance between the two points $x_1$ and $x_2$ where the hyperplanes intersect the line through the origin parallel to the normal vector $\vec a$. These points are given by $$x_1=\frac{b_1}{\|a\|^2_2}a$$ and $$x_2=\frac{b_2}{\|a\|^2_2}a$$ Then the distance is $\|x_1-x_2\|$ , but I don't really understand how we got $x_1$ and $x_2$ .
If you take two arbitrary points $y_1, y_2$ , one on each hyperplane, and you consider the projection of their difference onto the normalized normal vector $a/\|a\|$ , you get $\frac{a \cdot (y_2-y_1)}{\|a\|} \frac{a}{\|a\|} = \frac{b_2-b_1}{\|a\|} \frac{a}{\|a\|} = \frac{b_2 }{\|a\|^2}a - \frac{b_1 }{\|a\|^2}a = x_2 - x_1$ , which is exactly the difference of your $x_1, x_2$ . To obtain the distance, take the norm of this vector: $\|x_2-x_1\| = \frac{|b_2-b_1|}{\|a\|}$ .
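A small numerical sketch of the computation (the concrete $a$, $b_1$, $b_2$ are my own example values):

```python
import numpy as np

a = np.array([1., 2., 2.])          # normal vector, ||a|| = 3
b1, b2 = 3., 9.

# Intersections of the hyperplanes with the line through 0 spanned by a.
x1 = b1 / (a @ a) * a
x2 = b2 / (a @ a) * a
assert np.isclose(a @ x1, b1) and np.isclose(a @ x2, b2)   # on the planes

# Distance between the hyperplanes: ||x1 - x2|| = |b2 - b1| / ||a|| = 2.
assert np.isclose(np.linalg.norm(x1 - x2), abs(b2 - b1) / np.linalg.norm(a))
```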
|linear-algebra|
0
Internal angle sum of a triangle
I teach a 5th grade class geometry, and I came up with the following alternative proof (?) to show that the internal angle sum of a triangle is $180^\circ$ . I remember reading that this result is equivalent to the parallel postulate , however I can't see where I have used this axiom. I would be very happy if anyone could point out my mistake. Here we go: The pencil starts at the side $CB$ The pencil turns clockwise $\angle B$ The pencil turns clockwise $\angle A$ The pencil turns clockwise $\angle C$ . Since the pencil points in the opposite direction it has turned $180^\circ$ (Or maybe $180^\circ+360^\circ\cdot k$ ?)
The following is from Edwin Wolfe, Non-Euclidean Geometry (1945) PDF [the text I studied from long ago... Thanks, Dr. Underwood] pages 40-41 The Rotation Proof This ostensible proof, due to Bernhard Friedrich Thibaut (1775-1831) is worthy of note because it has from time to time appeared in elementary texts and has otherwise been indorsed. ... This proof is typical of those which depend upon the idea of direction. The circumspect reader will observe that the rotations take place about different points ..., so that not only rotation, but translation, is involved. ...
|geometry|solution-verification|euclidean-geometry|
0
How to understand cofibration as a commutative diagram?
I am having trouble understanding the definition of cofibration as given in Bredon's "Topology and Geometry." In particular, he defines a cofibration using the following diagram: I am not sure how to interpret what Bredon means by "filling in" the diagram. I assume that the top and bottom horizontal maps are inclusions, but what are the two non-dotted diagonal maps $X\times \{0\}\to Y$ and $A\times I \to Y$ ? I'm unsure what is given and what we are trying to "fill in". How does one interpret this diagram in terms of homotopies and homotopy extension?
Regarding your questions, both $\phi: X\times\{ 0\} \rightarrow Y$ and $\psi: A\times I \rightarrow Y$ are assumed to be arbitrary (continuous) functions to $Y$ . Nevertheless, in order for the diagram to commute we should have that $\phi\circ (f\times 1) = \psi\circ \iota$ , which ultimately means that $\phi$ and $\psi$ are compatible under $f$ . Then, the information you have on hand is how to map $X$ into $Y$ (given by $\phi$ ), and a homotopy from $A$ to $Y$ (given by $\psi$ ), as long as the diagram commutes. A cofibration $f:A\rightarrow X$ ensures you can "fill in" a homotopy between $X$ and $Y$ (represented in the diagram by the dotted map). The commutativity of the diagram implies that this homotopy maps $X\times\{0\}$ to $\phi(X\times\{0\})$ and that it is equivalent to $\psi$ when composed with $f\times 1$ . In summary, a cofibration $f:A\rightarrow X$ allows you to construct a homotopy $X\times I\to Y$ whose starting end is the given continuous function $\phi$ on $X$ , and whose restriction along $f\times 1$ is the given homotopy $\psi$ on $A$ ; this is precisely the homotopy extension property.
|algebraic-topology|homotopy-theory|cofibrations|
1
Is it true that every locally compact Polish space must be boundedly compact?
My question: Is it true that every locally compact Polish space must be boundedly compact? Explanation of terminology: Locally compact: Every point admits a compact neighborhood; Polish space: Complete, separable metric space; Boundedly compact: Every closed and bounded subset is compact. It seems that this fact is used in Villani's book Optimal transport, old and new , e.g. in the proof of Proposition 27.24 and the convention in p.763-764. Any help will be highly appreciated!
Another example ... The interval $(0,1)$ with the usual topology. It is homeomorphic to $\mathbb R$ , so it is Polish and locally compact. But the whole space $(0,1)$ is closed and bounded, but not compact. Unlike the OP, I used the usual definition of "Polish space", namely: homeomorphic to a complete separable metric space.
|general-topology|
0
How do I prove that a given point in a given subset of R^2 is an interior point?
I need to show that the given point $(1, -3)$ in $$D = \{(x, y): x>0 \space \mathrm{and} \space y \leq -2 \}$$ is an interior point of $D \subseteq \mathbb{R}^2$ . That is, $(1, -3) \in D^{\mathrm{o}}$ . I understand that $(1, -3)$ is an interior point of $D$ if and only if there exists a positive real number $r > 0$ such that the open disk $B_r(1, -3)$ of radius $r$ centered at $(1, -3)$ , that is $$B_r(1, -3) = \{ (x, y) : (x-1)^2 + (y+3)^2 < r^2 \},$$ is a subset of $D$ . That is, $B_r(1, -3) \subset D$ . I also understand that in order to prove that $B_r(1, -3) \subset D$ , I need to show that every arbitrary point $(x, y)$ in $B_r(1, -3)$ must also be in $D$ ; here $B_r(1, -3)$ can't be equal to $D$ , since there is at least one point in $D$ that is not in $B_r(1, -3)$ . How should I begin to word and write this proof?
A simple drawing will convince you that $r = 1$ works. Algebraically: if $(x,y) \in B_1(1,-3)$ then $|x-1| < 1$ and $|y - (-3)| < 1$ (why?). The former implies $x > 0$ and the latter implies $y < -2$ , hence $y \leq -2$ , as desired.
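As an informal sanity check of $r=1$ (not a proof, just random sampling of the open disk):

```python
import math
import random

def in_D(x, y):
    # Membership test for D = {(x, y) : x > 0 and y <= -2}.
    return x > 0 and y <= -2

# Sample points of the open disk B_1(1, -3) and confirm each lies in D.
rng = random.Random(42)
for _ in range(10_000):
    r = math.sqrt(rng.random())          # uniform radius over the unit disk
    t = rng.uniform(0, 2 * math.pi)
    x, y = 1 + r * math.cos(t), -3 + r * math.sin(t)
    assert (x - 1)**2 + (y + 3)**2 < 1   # point really is in B_1(1, -3)
    assert in_D(x, y)
```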
|general-topology|algebra-precalculus|multivariable-calculus|
0