| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Find the coefficient of $x^{30}$ in the following polynomial $(1+x+x^2+x^3+x^4+x^6)^6$
|
How do we find the coefficient of $x^{30}$ in the polynomial $(1+x+x^2+x^3+x^4+x^6)^6$? My approach is as follows: since $$1+x+x^2+x^3+x^4+x^5+x^6=\frac{1-x^7}{1-x},$$ we have $$\begin{align}1+x+x^2+x^3+x^4+x^6&=\frac{1-x^7}{1-x}-x^5\\&=(1-x^7-x^5+x^6)(1-x)^{-1},\end{align}$$ hence $$(1+x+x^2+x^3+x^4+x^6)^6=(1-x^7-x^5+x^6)^6(1-x)^{-6}.$$ At this point the computation gets complicated.
|
Write the polynomial as $(x^2+1)^6(x^4+x+1)^6$. To form the $x^{30}$ term we can take the $x^{12}$ term from the first expansion and the $x^{18}$ term from the second. In the second expansion, $x^{18}$ is obtained by choosing the $x^4$ term from some $4$ brackets and $x$ from the remaining $2$, so the contribution is $\displaystyle \binom{6}{0} \times \binom{6}{4} = 15$. Next, $x^{10}$ from the first and $x^{20}$ from the second (choosing $x^4$ from some $5$ brackets): that is $\displaystyle \binom{6}{1} \times \binom{6}{5} = 36$. Finally, $x^6$ from the first and $x^{24}$ from the second, which is clearly $\displaystyle \binom{6}{3} = 20$. Hence the coefficient is $15+36+20=71$.
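The count is easy to double-check mechanically; here is a small brute-force sketch in Python (the variable names are mine, not from the answer):

```python
# Brute-force the coefficient of x^30 in (1 + x + x^2 + x^3 + x^4 + x^6)^6
# by repeated polynomial multiplication of coefficient lists.
base = [1, 1, 1, 1, 1, 0, 1]  # 1 + x + x^2 + x^3 + x^4 + x^6 (no x^5 term)
poly = [1]                    # the constant polynomial 1
for _ in range(6):
    prod = [0] * (len(poly) + len(base) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(base):
            prod[i + j] += a * b
    poly = prod
print(poly[30])  # → 71
```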
|
|binomial-coefficients|generating-functions|
| 0
|
Proving a characterization of the Dense Linear Subspaces on a Normed Space
|
Let $(E, \lVert \cdot \rVert)$ be a normed space over a field $\Bbb{K}$ and $M$ a linear subspace of $E$. Then $$\overline{M}=E \iff \forall f \in E' : f(M) = \{0\} \rightarrow f(E) = \{0\}$$ ( $E'$ is the set of continuous linear functionals from $E$ to $\Bbb{K}$ ). I am proving this result, which appears as an exercise in a book I found in the library. For the left-to-right implication, take $f \in E'$ such that $f(M) = \{0\}$. Given $y \in f(E)$, there exists $x \in E : f(x)=y$. Since $\overline{M}=E$, there exists $\{x_n \}_{n \in \Bbb{N}} \subset M : \lim_{n \rightarrow \infty} x_n = x$. Since $f \in E'$, it is bounded and hence continuous. Together with $f(M) = \{0\}$ and $\{x_n \}_{n \in \Bbb{N}} \subset M$, this implies that $y = f(x) = f(\lim_{n \rightarrow \infty} x_n) = \lim_{n \rightarrow \infty} f(x_n) = \lim_{n \rightarrow \infty} 0 = 0$. I am having trouble seeing the other implication, so any help would be appreciated.
|
Suppose $\forall f \in E' : f(M) = \{0\} \rightarrow f(E) = \{0\}$ but $E \setminus \overline{M} \neq \emptyset$, and take $x \in E \setminus \overline{M}$. This means that $d(x, M) > 0$. Denote by $f : \operatorname{span}\{x\} + M \rightarrow \Bbb{K}$ the linear operator defined by $f(a x + v) = a \, d(x, M)$. By its definition, $|f(a x + v)| = |a| d(x, M) \leq |a| \lVert x + \frac{1}{a} v \rVert = \lVert ax + v\rVert$ (for $a \neq 0$; the case $a=0$ is trivial), so $f \in (\operatorname{span}\{x\} + M)'$. The Hahn-Banach theorem implies that this functional extends to one with the same norm on all of $E$, call it $F : E \rightarrow \Bbb{K}$. Now observe that $F(M) = f(M) = \{0\} \implies F(E)= \{0\} \implies 0=F(x)=f(x)=d(x,M)$, which is a contradiction.
|
|functional-analysis|normed-spaces|
| 1
|
About reversing the Euclidean Algorithm, Lemma of Bézout
|
From the book Discrete Mathematics for Computing, 2nd Edition (eBook): I know how to perform the Euclidean algorithm and compute $\gcd(a,b)$. I am, however, deeply confused by this: $$1 = 415 - 69(421 - 1 \times 415)$$ $$ = 70 \times 415 - 69 \times 421$$ How is the expression $70 \times 415 - 69 \times 421$ constructed? N.B. I am still new to this and learning. I am using the following books: Discrete Mathematics for Computing / Edition 2 by Peter Grossman [eBook]; Discrete Mathematics for Computing / Edition 3 by Peter Grossman [physical copy].
|
You already brought up the Euclidean algorithm. For $n,m\in\mathbb{Z}$ with $\gcd(n,m)=1$, "reversing" the algorithm gives an expression $1=an+bm$. Example: $\gcd(13,23)=1$. We have $$23=1\cdot 13+10$$ $$13=1\cdot 10+3$$ $$10=3\cdot 3+1$$ Now use the last equation to get an expression for $1$ and substitute into the other equations to express it in terms of $23$ and $13$. So $1=10-3\cdot 3$. Now $13=1\cdot 10+3\Leftrightarrow 3=13-10$ (we solve for the "remainder" for the substitution). Plug this in for $3$: $1=10-3\cdot (13-10)=10-3\cdot 13+3\cdot 10=4\cdot 10-3\cdot 13$. Proceed and solve the first equation for $10$: $10=23-13$. Plug this in: $1=4(23-13)-3\cdot 13=4\cdot 23-4\cdot 13-3\cdot 13=4\cdot 23-7\cdot 13$.
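The back-substitution can be automated; a sketch of the extended Euclidean algorithm in Python (the function name is my own choice) reproduces the coefficients found above:

```python
# Extended Euclidean algorithm: returns (g, a, b) with a*n + b*m = g = gcd(n, m).
def extended_gcd(n, m):
    if m == 0:
        return n, 1, 0
    g, a, b = extended_gcd(m, n % m)
    # g = a*m + b*(n % m) and n % m = n - (n // m)*m, so regroup the terms:
    return g, b, a - (n // m) * b

print(extended_gcd(23, 13))  # → (1, 4, -7), i.e. 1 = 4*23 - 7*13
```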
|
|elementary-number-theory|proof-explanation|euclidean-algorithm|
| 0
|
About reversing the Euclidean Algorithm, Lemma of Bézout
|
From the book Discrete Mathematics for Computing, 2nd Edition (eBook): I know how to perform the Euclidean algorithm and compute $\gcd(a,b)$. I am, however, deeply confused by this: $$1 = 415 - 69(421 - 1 \times 415)$$ $$ = 70 \times 415 - 69 \times 421$$ How is the expression $70 \times 415 - 69 \times 421$ constructed? N.B. I am still new to this and learning. I am using the following books: Discrete Mathematics for Computing / Edition 2 by Peter Grossman [eBook]; Discrete Mathematics for Computing / Edition 3 by Peter Grossman [physical copy].
|
As far as I can see, the calculations in the quotation construct an integer solution $(x,y)$ of the Diophantine equation $2093x+836y=1$. To do this, we first apply the Euclidean algorithm to find the greatest common divisor of the numbers $r_1=2093$ and $r_2=836$, and then go backwards. Namely, the Euclidean algorithm yields: $r_3=421=2093-2\times 836=r_1-2\times r_2$ $r_4=415=836-1\times 421=r_2-1\times r_3$ $r_5=6=421-1\times 415=r_3-1\times r_4$ $r_6=1=415-69\times 6=r_4-69\times r_5$. Next we go backwards: $1=r_6=r_4-69\times r_5=415-69\times 6$ $1=r_4-69\times r_5=r_4-69\times (r_3-1\times r_4)=-69\times r_3+70\times r_4=-69\times 421+70\times 415$ $1=-69\times r_3+70\times r_4=-69\times r_3+70\times (r_2-1\times r_3)=70\times r_2-139\times r_3=70\times 836-139\times 421$ $1=70\times r_2-139\times r_3=70\times r_2-139\times (r_1-2\times r_2)=-139\times r_1+348\times r_2=-139\times 2093+348\times 836$. That is, we constructed an integer solution $(x,y)=(-139,348)$ of the equation.
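The final Bézout identity can be verified with one line of arithmetic:

```python
# Verify -139*2093 + 348*836 = 1, the identity constructed above.
print(-139 * 2093 + 348 * 836)  # → 1
```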
|
|elementary-number-theory|proof-explanation|euclidean-algorithm|
| 1
|
Why is $ U \otimes \operatorname{Ind}(W) = \operatorname{Ind}(\operatorname{Res}(U) \otimes W)$?
|
If $U$ is a representation of $G$ and $W$ is a representation of a subgroup $H$, then why is $$ U \otimes \operatorname{Ind}(W) = \operatorname{Ind}(\operatorname{Res}(U) \otimes W)?$$ I've tried to simply use the definitions to prove they are equal, but I've hit a roadblock.
|
When the characteristic of $k$ is $0$, we can see this using Frobenius reciprocity, as follows. Recall that Frobenius reciprocity says the following: $${\rm Hom}_G({\rm Ind}(W),U)\cong{\rm Hom}_H(W,U).$$ Thus, we have, for a $G$-representation $M$, \begin{align} {\rm Hom}_G({\rm Ind}(W\otimes U),M)&\cong {\rm Hom}_H(W\otimes U,M) \\\\ &\cong {\rm Hom}_H(W,U^{*}\otimes M) \\\\ &\cong {\rm Hom}_G({\rm Ind}(W),U^{*}\otimes M) \\\\ &\cong {\rm Hom}_G({\rm Ind}(W)\otimes U,M) \end{align} (here $U^{*}$ is the dual representation, and restriction to $H$ is left implicit). Now, letting $M$ vary over the irreducible representations of $G$, we see that each irreducible representation appears with the same multiplicity on both sides. This proves that both are isomorphic. This part is more or less the same as the answer by anon above. However, we can be more explicit and describe the isomorphism; this part is more or less the same as the answers given by Matthew Towers and Sunny Sood above. Recall that we may write the representation $${\rm Ind}(W\otimes U)=\bigoplus_i g_i\otimes (W\otimes U),$$ where the $g_i$ run over a set of coset representatives of $H$ in $G$.
|
|representation-theory|
| 0
|
Find the equation of a plane containing two given points and having a given distance to a third point
|
This problem is part of examination preparation material for the second mid-semester test of 12th grade in my school: In the 3D space Oxyz, given 3 points $A(1,0,0)$, $B(0,-2,3)$, $C(1,1,1)$. Let $(P)$ be the plane containing $A$, $B$ such that the distance from $C$ to the plane $(P)$ is $\frac{2}{\sqrt{3}}$. The equation of the plane $(P)$ is: A. $2x + 3y + z - 1 = 0$ or $3x + y + 7z + 6 = 0$ B. $x + y + z - 1 = 0$ or $-2x +37y+17z+13=0$ C. $x + y +2z - 1 = 0$ or $-2x +3y+7z+23=0$ D. $x + y + z - 1 = 0$ or $-23x+37y+17z+23=0$ This is a multiple choice question. However, we're still expected to provide some work... It's not quite important, as it is not part of the mandatory homework section. But I still find it quite interesting, somehow. So far, the best thing I've got is in Geogebra using some translations and rotations. As on paper, I don't know where to even start... Surely I can't just tell my teacher "so we rotate this segment by $\sin^{-1}\left(\frac{2}{\sqrt{3}} \div |Vector(A,D)|\right)$"
|
If you want to do it by rotation of the normal vector of the plane, then first find the vector $ V = B - A = (0, -2, 3) - (1, 0,0) = (-1, -2, 3) $. Create two unit vectors orthogonal to $V$; these are arbitrary, but an easy choice is $ U_1 = \dfrac{1}{\sqrt{5}} (2, -1, 0) $ and $U_2 = \dfrac{ V \times U_1}{\| V \times U_1 \| }$. Now $ V \times U_1 = \dfrac{1}{\sqrt{5}} ( 3 , 6 , 5 ) $, so that $ U_2 = \dfrac{ (3, 6, 5)}{ \sqrt{70}} $. Now the equation of the plane is a function of one parameter $\theta$ only. The equation is $ N \cdot ( r - A ) = 0 $, where $N = \cos \theta \ U_1 + \sin \theta \ U_2 $. Since $N$ is a unit vector, the distance between $C$ and the plane given by this equation is $| N \cdot (C - A) | $, and we have $C - A = (0, 1, 1) $. Therefore, we require that $ | \cos \theta \,(U_1 \cdot (C - A) ) + \sin \theta \,(U_2 \cdot (C - A)) | = \dfrac{2}{\sqrt{3}} $. Now $U_1 \cdot (C - A) = - \dfrac{1}{\sqrt{5}} $ and $U_2 \cdot (C - A) = \dfrac{11}{\sqrt{70}} $. We already know that there are at most two solutions $\theta$ (modulo $\pi$), one for each of the two planes listed in the answer choices.
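Since this is a multiple-choice question, the candidates can also be tested directly; a short Python sketch (plane tuples $(a,b,c,d)$ transcribed from the options) checks which planes contain $A$ and $B$ and lie at distance $2/\sqrt{3}$ from $C$:

```python
# Test each multiple-choice plane a*x + b*y + c*z + d = 0: it must contain
# A and B and lie at distance 2/sqrt(3) from C.
from math import isclose, sqrt

A, B, C = (1, 0, 0), (0, -2, 3), (1, 1, 1)
planes = {
    "A1": (2, 3, 1, -1), "A2": (3, 1, 7, 6),
    "B1": (1, 1, 1, -1), "B2": (-2, 37, 17, 13),
    "C1": (1, 1, 2, -1), "C2": (-2, 3, 7, 23),
    "D1": (1, 1, 1, -1), "D2": (-23, 37, 17, 23),
}

def value(pl, pt):
    a, b, c, d = pl
    return a * pt[0] + b * pt[1] + c * pt[2] + d

def dist(pl, pt):
    a, b, c, _ = pl
    return abs(value(pl, pt)) / sqrt(a * a + b * b + c * c)

good = [name for name, pl in planes.items()
        if value(pl, A) == 0 and value(pl, B) == 0
        and isclose(dist(pl, C), 2 / sqrt(3))]
print(good)  # → ['B1', 'D1', 'D2']: only option D has both of its planes correct
```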
|
|3d|
| 0
|
Fubini's theorem for differential forms? Why does $\int_{t_0}^{t_1}(\oint_{\partial\Omega}j)dt=\int\limits_{[t_0,t_1]\times\partial\Omega}dt\wedge j$?
|
In an electrodynamics book I came across the following claim for the electric current density (twisted) 2-form $j$ along the boundary of some 3-dimensional volume $\Omega$ : $$\int_{t_{0}}^{t_{1}}\left(\oint_{\partial\Omega}j\right)dt=\int\limits _{\left[t_{0},t_{1}\right]\times\partial\Omega}dt\wedge j$$ It looks to me like an attempt to apply Fubini's theorem for differential forms, but I do not see how the LHS can be reduced to a 2-argument function so that Fubini applies. The context is the conservation of charge law $\frac{dQ}{dt}=-\mathcal J$ , where $\mathcal J=\oint_{\partial\Omega}j$ is the electric current flowing out of $\Omega$ and $Q=\int_\Omega \rho$ is the total charge inside $\Omega$ . Integrating it along time from $t_0$ to $t_1$ , the book claims the equation above. The argument then continues with the definition of $J=-j\wedge dt+\rho$ (the $j\wedge dt$ part coming from the equation above) so that the conservation law can be written in a "super-global" form.
|
Under a suitable cover $\{U_\alpha,\varphi_\alpha\}_\alpha$ and partition of unity $\{f_\alpha\}_\alpha$ , $j=\sum_\alpha f_\alpha j=\sum_\alpha j_\alpha dx\wedge dy$ . Now \begin{align} \int_{I}\left(\iint_{\partial V}j\right)dt&\overset{\text{(PU)}}{=}\int_{I}\left(\sum_{\alpha}\iint_{U_{\alpha}}j_{\alpha}dx\wedge dy\right)dt\\&\overset{\text{(i)}}{=}\int_{I}\left(\sum_{\alpha}\iint_{\varphi_{\alpha}(U_{\alpha})}j_{\alpha}dxdy\right)dt\\&\overset{\text{(F)}}{=}\sum_{\alpha}\iiint_{I\times\varphi_{\alpha}(U_{\alpha})}j_{\alpha}dtdxdy\\&\overset{\text{(i)}}{=}\sum_{\alpha}\iiint_{I\times U_{\alpha}}j_{\alpha}dt\wedge dx\wedge dy\\&=\iiint_{I\times\partial V}dt\wedge\sum_{\alpha}\left(j_{\alpha}dx\wedge dy\right)\\&\overset{\text{(PU)}}{=}\iiint_{I\times\partial V}dt\wedge j \end{align} where in $\overset{\text{(PU)}}{=}$ we apply the partition of unity, in $\overset{(i)}{=}$ we apply the definition of the integral of a differential form, and in $\overset{\text{(F)}}{=}$ we apply Fubini's theorem.
|
|integration|differential-geometry|differential-forms|electromagnetism|
| 0
|
Solving partial differential equation using separation of variables
|
I recently learnt about solving partial differential equations using the method of separation of variables. The problem: Find a solution of the equation $\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial x} + 2u$ in the form $u = f(x)g(y)$ . Solve the equation subject to the conditions $u = 0$ and $\frac{\partial u}{\partial x} = 1 + e^{-3y}$ when $x = 0$, for all values of $y$ . Solution: On taking $u = f(x)g(y)$ , I get $f(x) = c_1e^{2x} + c_2e^{-x}$ . I assume $u = g(y)(c_1e^{2x} + c_2e^{-x})$ and then apply the boundary conditions. I end up getting a couple of conditions: $c_1+c_2 = 0$ and $g(y) = \frac{1 + e^{-3y}}{2c_1-c_2}$ , and then I am stuck. How do I calculate the constants and $g(y)$ here? No equation for $g(y)$ seems to be there. The given answer is: $u = \frac{1}{\sqrt{2}}\sinh(\sqrt{2}x) + e^{-3y}\sin x$ .
|
You are almost done. Put $c_2=-c_1$ in $g$ and $u$ to get $g(y)=\frac{1+e^{-3y}}{3c_1}$ and $u(x,y)=g(y)c_1(e^{2x}-e^{-x})$ , and therefore $$u(x,y)=\frac{1}{3}(1+e^{-3y})(e^{2x}-e^{-x})$$ Remark: Although this is a valid method to get a solution, it is pretty uncommon for this kind of problem, since there is no derivative with respect to the variable $y$ in the equation. Treating this problem as an ODE instead (by taking $c_1,c_2$ to be functions of $y$ ) gives the same result, but also proves this is the only solution to the problem.
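A numerical spot check with central finite differences (standard library only; sample points are arbitrary) confirms that this $u$ satisfies both the PDE and the conditions at $x=0$:

```python
# Check u(x,y) = (1/3)(1 + e^{-3y})(e^{2x} - e^{-x}) against u_xx = u_x + 2u
# and the conditions u(0, y) = 0, u_x(0, y) = 1 + e^{-3y}.
from math import exp, isclose

def u(x, y):
    return (1 + exp(-3 * y)) * (exp(2 * x) - exp(-x)) / 3

h = 1e-4
# PDE u_xx = u_x + 2u at a few sample points
for x, y in [(0.3, 0.7), (1.1, -0.4), (0.0, 2.0)]:
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h ** 2
    assert isclose(uxx, ux + 2 * u(x, y), rel_tol=1e-6)
# conditions at x = 0
for y in (0.0, 1.0, 2.5):
    assert u(0.0, y) == 0.0
    ux0 = (u(h, y) - u(-h, y)) / (2 * h)
    assert isclose(ux0, 1 + exp(-3 * y), rel_tol=1e-6)
print("PDE and boundary conditions verified")
```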
|
|ordinary-differential-equations|partial-differential-equations|partial-derivative|boundary-value-problem|
| 1
|
Are two principal ideals equal if their generators have the same root in some extension?
|
I recently came across the following claim in a paper concerning polynomials in $\mathcal{R}_p = \mathbb{Z}_p[X]/\langle X^d + 1\rangle$ (where $\mathbb{Z}_p = \mathbb{Z}/p\mathbb{Z}$ ) and the automorphisms $\sigma_i: X \mapsto X^i$ : $$\sigma_i \langle X - \alpha \rangle = \langle X^i - \alpha \rangle = \langle X - \alpha^{i^{-1}} \rangle,$$ where the latter (according to the authors) follows as the generators have the same roots in some extension of the base field $\mathbb{Z}_p$ , and where $X - \alpha$ is irreducible modulo $p$ for prime $p$ . I can see that they would have a same root in an appropriate extension, but I don't understand how that leads to the claim that they are equal in the base field itself. It's been a while since I took algebra, but what I remember is that two principal ideals $\langle a \rangle$ and $\langle b \rangle$ of a ring are equal if there is a unit $u$ in the ring such that $a = bu$ . Is it simple enough to find such a unit here? How do we get the claim?
|
The precise description of the ring $\mathcal{R}_p$ added to the question explains what is going on. I lead off with a very simple toy example to get the ball rolling. I consider the case of $p=13$ , $d=16$ (the parameter $d$ is always a power of two). As further prescribed in section 2.2. of the linked article we can then choose $\ell=2$ as $$ p-1=12\equiv 2\ell\pmod{4\ell}, $$ or, equivalently, $4=2^2=2\ell$ is the highest power of two that divides $p-1$ . Modulo $p=13$ we have $$(\pm 5)^2=25\equiv-1\pmod{13},$$ so the fourth roots of unity modulo $p$ are $\zeta=5$ and $\zeta^3=-\zeta=8$ . Consequently the polynomial $X^{16}+1$ factors as $$ X^{16}+1=(X^8-5)(X^8-8)\pmod{13}, $$ which you can also easily verify by hand. As a part of the toy example let's consider the (ring) automorphism $\sigma_3$ of $\mathcal{R}_{13}$ defined by $X\mapsto X^3$ . If I understood correctly, the question is then about the equality of the principal ideals (line 12 of section 2.2. in the source).
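The factorization used in the toy example can be confirmed by multiplying the two factors modulo $13$:

```python
# Check that (X^8 - 5)(X^8 - 8) = X^16 + 1 modulo 13, by multiplying
# coefficient lists (low degree first).
p = 13
f = [-5] + [0] * 7 + [1]   # X^8 - 5
g = [-8] + [0] * 7 + [1]   # X^8 - 8
prod = [0] * (len(f) + len(g) - 1)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        prod[i + j] = (prod[i + j] + a * b) % p
print(prod)  # → [1, 0, ..., 0, 1]: the coefficients of X^16 + 1 mod 13
```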
|
|abstract-algebra|polynomials|ring-theory|irreducible-polynomials|
| 0
|
Why is the polar representation for lines $x \cos\theta + y \sin \theta = d$?
|
I'm looking for a great intuitive view/explanation for seeing that this equation $$x \cos (\theta) + y \sin (\theta) = d$$ for a given distance to the line $d$ and angle $\theta$ , is a line. I.e. points $(x, y)$ on the line satisfy it. Basically, $d$ and $\theta$ specify the vector pointing to the line:
|
You've drawn the $xy$-plane and a straight line in that plane, which is fine. On the other hand (and this is the meaning of my remark in the comments), the presence of an angle and a length in your drawing does not by itself make your question about polar coordinates. It is therefore necessary to clarify things, starting by specifying in which mathematical setting we are going to work in order to answer the question asked. You're talking about points $(x,y)$ , so you seem to know the Euclidean plane. We're going to work in it, and recall that a line $l$ is a subset of this plane defined by $$\boxed{l=\{(x,y)\in \mathbb R\times \mathbb R\mid \color{red}ax+\color{blue}by=\color{green}c\}}$$ where $a,b,c \in \mathbb R$ . It is parallel to the line $l_0$ passing through $(0,0)$ defined by $$l_0=\{(x,y)\in \mathbb R\times \mathbb R\mid \color{red}ax+\color{blue}by=0\}$$ Let us now introduce the vector $\vec u=(\cos \theta,\sin \theta)$ . You want $\vec u$ to be a normal vector to the line $l$.
|
|geometry|
| 0
|
Why is the polar representation for lines $x \cos\theta + y \sin \theta = d$?
|
I'm looking for a great intuitive view/explanation for seeing that this equation $$x \cos (\theta) + y \sin (\theta) = d$$ for a given distance to the line $d$ and angle $\theta$ , is a line. I.e. points $(x, y)$ on the line satisfy it. Basically, $d$ and $\theta$ specify the vector pointing to the line:
|
This can be easily understood if you know a little of the geometry of vectors: given a unit vector $u$ and a vector $v$ , the projection of $v$ onto $u$ has (signed) length $v\cdot u$ . In the case of $\Bbb R^2$ , if you take the unit vector $u = (\cos(\theta),\sin(\theta))$ and any other vector $v=(x,y)$ , the equation $x\cos(\theta) + y \sin(\theta) = d$ can be written as $v\cdot u = d$ . This means that you are describing the points of the line as those whose projection onto $u$ has length $d$ .
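A small numerical illustration of the projection picture (the values $\theta = 0.7$ and $d = 2$ are arbitrary choices of mine):

```python
# Sample points on the line x cosθ + y sinθ = d and confirm each one has
# projection of (signed) length d onto u = (cosθ, sinθ).
from math import cos, sin, isclose

theta, d = 0.7, 2.0
u = (cos(theta), sin(theta))
perp = (-u[1], u[0])                       # direction along the line
for s in (-3.0, -1.0, 0.0, 2.5, 10.0):
    p = (d * u[0] + s * perp[0], d * u[1] + s * perp[1])
    assert isclose(p[0] * u[0] + p[1] * u[1], d)   # v·u = d on the whole line
print("every sampled point satisfies x*cos(theta) + y*sin(theta) = d")
```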
|
|geometry|
| 1
|
A polynomial from character theory
|
Let $G$ be a finite group, and define $a(n)=\#\{g\in G\mid o(g)=n\}$ . In problem 5.18 of Isaacs' Character Theory of Finite Groups , the following polynomial is defined: $$F_G(X)=\frac{1}{|G|}\sum_ma(m)X^{|G|/m}$$ Theorem: $F_G(k)$ is an integer for $k\in\mathbb{Z}$ . Setting $G$ to be the cyclic group with $n$ elements, define: $$g_n(X)=\frac{1}{n}\sum_{d|n}\varphi(d)X^{n/d}$$ Questions: Does $g_n$ have some other significance, in some other context? Perhaps in number theory? Is there a more direct way to show that $g_n(k)$ is an integer for every integer $k$ ? In the special case that $n=p$ for some prime $p$ , $$g_p(X)=\frac{1}{p}\left(X^p+(p-1)X\right).$$ It follows easily from Fermat's little theorem that $g_p(k)$ is an integer for integer values of $k$ .
|
It plays a role in combinatorics: $g_n(k)$ counts the number of necklaces with $n$ beads chosen from $k$ colors (up to rotational symmetry). See https://en.wikipedia.org/wiki/Necklace_(combinatorics) The integrality in question 2 follows from 1. :-)
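The necklace interpretation is easy to test for small cases; a short sketch (with a naive totient, for clarity) evaluates $g_n(k)$ directly and matches the known binary necklace counts:

```python
# Evaluate g_n(k) = (1/n) * sum_{d | n} phi(d) * k^(n/d) and compare with
# known necklace counts.
from math import gcd

def phi(d):
    return sum(1 for i in range(1, d + 1) if gcd(i, d) == 1)

def g(n, k):
    total = sum(phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0)
    assert total % n == 0   # integrality, as in the theorem
    return total // n

print(g(3, 2), g(4, 2), g(6, 2))  # → 4 6 14, the binary necklace counts
```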
|
|number-theory|characters|
| 1
|
Self-intersections of a Lissajous curve
|
Hi, I want to find the self-intersections of a Lissajous curve, for instance: $$x(t)=\sin 2t$$ $$y(t)=\sin 3t$$ I have been trying for a couple of hours but I really don't see how to compute all the solutions. Basically I was trying to solve the relations $$\sin 2u=\sin 2v \text{ and }\sin 3u=\sin 3v.$$ Thanks for the support!
|
To find the self-intersections of the Lissajous curve $(\sin(2\pi t),\sin(3\pi t)),t\in[0,2)$ , let $u>v$ be two values in $[0,2)$ that are mapped to the same point on the curve. So the point $(u,v)$ lies in the triangular region ${\Large◢}:\cases{u>v\\u\in[0,2)\\v\in[0,2)}$ . From $\sin(2\pi u)=\sin(2\pi v)$ we get $$2u\equiv2v\text{ or }1-2v\pmod{2}$$ so the point $(u,v)$ lies on one of the blue lines. From $\sin(3\pi u)=\sin(3\pi v)$ we get $$3u\equiv3v\text{ or }1-3v\pmod{2}$$ so the point $(u,v)$ lies on one of the red lines. We find that there are 7 intersections of blue and red lines in the region $\Large◢$ . So there are 7 self-intersections of the Lissajous curve $(\sin(2\pi t),\sin(3\pi t)),t\in[0,2)$ .
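The count of $7$ can be confirmed by brute force; every blue/red line intersection happens to have coordinates with denominator dividing $12$ (an observation of mine, not part of the answer), so a rational grid of step $1/12$ suffices:

```python
# Enumerate pairs (u, v), u > v, on the grid of step 1/12, keep the pairs
# where both sine equations hold, and collect the distinct curve points.
from fractions import Fraction
from math import sin, pi, isclose

step = Fraction(1, 12)
points = set()
for i in range(24):          # u = i/12 ranges over [0, 2)
    for j in range(i):       # v = j/12 < u
        u, v = i * step, j * step
        if isclose(sin(2 * pi * u), sin(2 * pi * v), abs_tol=1e-12) and \
           isclose(sin(3 * pi * u), sin(3 * pi * v), abs_tol=1e-12):
            points.add((round(sin(2 * pi * u), 9), round(sin(3 * pi * u), 9)))
print(len(points))  # → 7
```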
|
|geometry|parametric|
| 0
|
Evidence for Theorem 3.33 part (c) Principles of Mathematical Analysis (Baby Rudin)
|
In Principles of Mathematical Analysis, Rudin states: Theorem 3.33: Given $\sum a_n$ , put $\alpha = \limsup_{n \to \infty}\sqrt[n]{\lvert a_n \rvert}$ . Then ... (c) if $\alpha = 1$ , the test gives no information. For proving (c) he states: We consider the series $$\Sigma\frac{1}{n}, \Sigma\frac{1}{n^2}$$ For each of these series $\alpha = 1$ , but the first diverges and the second converges. However, he does not provide any evidence as to why $\alpha = 1$ for either of these series. I've tried to prove it myself by adapting the proof of $\lim_{n \to \infty} \sqrt[n]{n} = 1$ , which is presented in his book, but I had no success. Can anyone help?
|
Say $a_n=\dfrac{1}{n}$ . Then, $$\sqrt[n]{|a_n|}=\dfrac{1}{\sqrt[n]{n}}$$ As it is known that $$\displaystyle\lim_{n\to\infty}\sqrt[n]{n}=1\neq 0,$$ limit algebra tells us that $$\displaystyle\lim_{n\to\infty}\sqrt[n]{|a_n|}=\displaystyle\lim_{n\to\infty}\dfrac{1}{\sqrt[n]{n}}=\dfrac{1}{1}=1$$ Analogously, if $b_n=\dfrac{1}{n^2}$ , we have $$\sqrt[n]{|b_n|}=\dfrac{1}{\sqrt[n]{n^2}}=\dfrac{1}{\sqrt[n]{n}}\cdot \dfrac{1}{\sqrt[n]{n}}$$ As we know that $$\displaystyle\lim_{n\to\infty}\dfrac{1}{\sqrt[n]{n}}=1,$$ it follows immediately that $$\displaystyle\lim_{n\to\infty}\sqrt[n]{|b_n|}=\displaystyle\lim_{n\to\infty}\dfrac{1}{\sqrt[n]{n^2}}=\displaystyle\lim_{n\to\infty}\dfrac{1}{\sqrt[n]{n}}\cdot \dfrac{1}{\sqrt[n]{n}}=1\cdot 1=1.$$
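The two limits are also easy to watch numerically:

```python
# Watch |a_n|^(1/n) creep toward 1 for a_n = 1/n and a_n = 1/n^2.
for n in (10, 1_000, 100_000):
    print(n, (1 / n) ** (1 / n), (1 / n ** 2) ** (1 / n))
```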
|
|real-analysis|analysis|
| 1
|
Finding a function of two variables satisfying two conditions
|
Some context beforehand: I was writing an expression for the force between two charges with two media placed between them. I noticed that the force I calculated deviates from the actual value by a constant factor at particular distances. I know the expression for the actual force, but I wanted to take this opportunity to learn a way of finding the expression for that factor, knowing its magnitudes at 2 different distances. That leads me to this question, where $r$ is the distance between the charges and $t$ is the distance covered by the other medium. I need to find a function $f(r,t)$ such that $$f(r,t) = \begin{cases} 2, & t=\frac{r}{2},& r\ge t>0\\ r, & t=r, & r\ge t>0\\ \end{cases}$$ The question really is straightforward, but I don't have any mathematical approach to this, so in reality I just wasted an hour manually trying out different possible functions. Can someone tell me how I can conclusively find such a function or disprove its existence?
|
Let, $$f(r,t)=2\sin\left(\frac{t}{r}\pi\right)+r\sin\left(\frac{t}{r}\pi-\frac{1}{2}\pi\right)$$ Then, \begin{align} f(r,r/2)&=2\sin\left(\frac{r/2}{r}\pi\right)+r\sin\left(\frac{r/2}{r}\pi-\frac{1}{2}\pi\right)\\ &=2\sin\left(\frac{1}{2}\pi\right)+r\sin\left(\frac{1}{2}\pi-\frac{1}{2}\pi\right)\\ &=2\sin\left(\frac{\pi}{2}\right)+r\sin\left(0\right)\\ &=2\cdot1+r\cdot0\\ &=2 \end{align} and, \begin{align} f(r,r)&=2\sin\left(\frac{r}{r}\pi\right)+r\sin\left(\frac{r}{r}\pi-\frac{1}{2}\pi\right)\\ &=2\sin\left(\pi\right)+r\sin\left(\pi-\frac{1}{2}\pi\right)\\ &=2\sin\left(\pi\right)+r\sin\left(\frac{\pi}{2}\right)\\ &=2\cdot 0+r\cdot1\\ &=r \end{align} as required.
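A quick numerical spot check of both conditions for several values of $r$:

```python
# Spot-check f(r, r/2) = 2 and f(r, r) = r for several r.
from math import sin, pi, isclose

def f(r, t):
    return 2 * sin(t / r * pi) + r * sin(t / r * pi - pi / 2)

for r in (0.5, 1.0, 3.7, 10.0):
    assert isclose(f(r, r / 2), 2, abs_tol=1e-9)
    assert isclose(f(r, r), r, abs_tol=1e-9)
print("both conditions hold for all sampled r")
```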
|
|linear-algebra|algebra-precalculus|functional-equations|
| 1
|
A conjectured closed form of $\int\limits_0^\infty\frac{x-1}{\sqrt{2^x-1}\ \ln\left(2^x-1\right)}dx$
|
Consider the following integral: $$\mathcal{I}=\int\limits_0^\infty\frac{x-1}{\sqrt{2^x-1}\ \ln\left(2^x-1\right)}dx.$$ I tried to evaluate $\mathcal{I}$ in a closed form (both manually and using Mathematica ), but without success. However, if WolframAlpha is provided with a numerical approximation $\,\mathcal{I}\approx 3.2694067500684...$, it returns a possible closed form: $$\mathcal{I}\stackrel?=\frac\pi{2\,\ln^2 2}.$$ Further numeric calculations show that this value is correct up to at least $10^3$ decimal digits. So, I conjecture that this is the exact value of $\mathcal{I}$. Question: Is this conjecture correct?
|
\begin{align*} \int_{0}^{+\infty} \frac{x-1}{ \sqrt{2^{x}-1} \log(2^{x}-1) } \mathrm{d}x &\overset{2^{x}-1=u \, , \, dx = \frac{du}{(\log2)(u+1)}}{\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=} \int_{0}^{+\infty} \frac{ \frac{1}{\log2}\left(\log(u+1) -\log2\right)}{(\log2)(u+1)\sqrt{u}\log{u}} \mathrm{d}u \\ &= \frac{1}{\log^{2}2} \int_{0}^{+\infty} \frac{ \log(u+1) - \log2 }{(u+1)\sqrt{u}\log{u}} \mathrm{d}u \\ &= \frac{1}{\log^{2}2} \left[ \int_{0}^{1}\frac{ \log(u+1) - \log2 }{(u+1)\sqrt{u}\log{u}} \mathrm{d}u + \int_{1}^{+\infty} \frac{\log(u+1) -\log2}{(u+1)\sqrt{u}\log{u}} \mathrm{d}u \right] \\ &\overset{u=\frac{1}{t} \, , \, du=-\frac{1}{t^{2}}dt}{\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=} \frac{1}{\log^{2}2} \left[ \int_{0}^{1} \frac{ \log(u+1) - \log2 }{(u+1)\sqrt{u}\log{u}} \mathrm{d}u + \int_{1}^{0} \frac{ \log(\frac{1}{t}+1) -\log 2 }{(\frac{1}{t}+1)\sqrt{\frac{1}{t}}\log(\frac{1}{t})} \left( -\frac{1}{t^2} \right) \mathrm{d}t \right] \\ &= \cdots \end{align*}
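Independently of the symbolic computation, the conjecture can be checked numerically with the standard library alone; the substitution $x=s^2$ (my choice) tames the inverse-square-root singularity at $0$, the apparent singularity at $x=1$ is removable, and the tail beyond $x=60$ is negligible:

```python
# Midpoint-rule evaluation of the integral after substituting x = s^2,
# compared against the conjectured closed form pi / (2 ln^2 2).
from math import log, pi, sqrt

def integrand(s):
    x = s * s
    w = 2.0 ** x - 1.0
    return (x - 1.0) / (sqrt(w) * log(w)) * 2.0 * s   # includes dx = 2s ds

a, b, N = 0.0, sqrt(60.0), 200_000
h = (b - a) / N
I = h * sum(integrand(a + (k + 0.5) * h) for k in range(N))
print(I, pi / (2 * log(2) ** 2))  # both ≈ 3.26941
```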
|
|calculus|integration|definite-integrals|improper-integrals|closed-form|
| 0
|
Fully faithful aspheric functors are locally aspheric.
|
I was trying to prove the claim in the title, that is, fully faithful aspheric functors are locally aspheric, and I came up with a proof which I believe is false, but I can't pin down why. This is stated without proof in a paper by Maltsiniotis as an example of locally aspheric functor. For completeness' sake : By aspheric, I mean $W$ -aspheric, where $W$ is a collection of arrows in $Cat$ which is weakly saturated, contains the functor $A\to e$ for any $A$ with a final object (where $e$ denotes a final object in $Cat$ ) and such that for $f:A\to B$ , if $f/b : A/b\to B/b$ is in $W$ for all $b\in B$ , then $f\in W$ . A functor $f:A\to B$ is called aspheric if for all object $b\in B$ the map $f/b:A/b\to B/b$ is in $W$ . It is called locally aspheric if instead for all $a\in A$ the functor $A/a\to B/f(a)$ is in $W$ . Given $a\in A$ , we still denote $f:A/a\to B/f(a)$ the functor induced by $f$ , we want to show that for all $a\in A$ $f:A/a\to B/f(a)$ is aspheric. But earlier in the paper
|
In the "proof" above, I claimed $(A/a)/c$ has a final object given by the triangle $(1_b,c,c)$, but there is no reason for this thing to even be an object of $(A/a)/c$ as described: for this to be the case, $b$ would need to be in the image of $f$, so the proof would only work for functors inducing a surjection on objects. So indeed, one has to use the asphericity assumption to get the desired result.
|
|category-theory|
| 1
|
How to prove this proposition about collection of continuous functions $\{f_i\}_{i \in I}$?
|
Let $(X,\tau)$ be a topological space. Suppose that $\{f_i\}_{i \in I}$ is a collection of continuous functions $X \rightarrow \mathbb{R}$ such that for every $x \in X$ , there exists a neighborhood $U_x$ of $x$ such that $U_x$ intersects only finitely many of the sets $supp(f_i)$ . Show that $f(x):=\sum_{i \in I}f_i(x)$ is a continuous function $X \rightarrow \mathbb{R}$ with $supp(f) \subseteq \bigcup_{i \in I} supp(f_i)$ . My attempt: I know the following: Let $A,B \in \tau$ , and $X=A \cup B$ , $f:X \rightarrow Y$ . If $f_{|A}$ and $f_{|B}$ are continuous (regarding the subspace topologies $\tau_A=\tau \cap A$ respectively $\tau_B=\tau \cap B$ ), then $f$ is continuous. So one way to approach this would be to show that for every $x \in X$ , there exists an open neighborhood $U$ of $x$ such that $f_{|U}$ is continuous. We know that for $f_i$ , for every $x \in X$ there exists a neighborhood of $x$ , $U_x$ such that $U_x$ intersects only finitely many $supp(f_i)$ , let those be $i_1,\dots,i_k$ .
|
Let $x\in X$ and take a neighborhood $U_x$ of $x$ such that $U_x\cap\operatorname{supp}(f_i)=\emptyset$ if and only if $i\in I\setminus F$ , for some finite subset $F$ of $I$ . Then, on $U_x$ , $f=\sum_{i\in F}f_i$ , which is a finite sum of continuous functions. Therefore, it is continuous. Since $f|_{U_x}$ is continuous and $U_x$ is a neighborhood of $x$ , then $f$ is continuous at $x$ .
|
|general-topology|continuity|neighbourhood|
| 1
|
Conditional expectation given the maximum: $E(X_1\mid X_{(n)})$
|
Let $X_1, \ldots, X_n$ be a random sample from a Uniform$(0,1)$ distribution. What is $E(X_1\mid X_{(n)})$ , where $X_{(n)}=\max\{X_1,\ldots,X_n\}$ ?
|
I'd like to give an answer based on some concepts and theorems in mathematical statistics; they can be found in Casella and Berger's Statistical Inference. Let $\theta\in(0,+\infty)$ be a parameter and consider $Y_i=X_i\theta\sim U(0,\theta)$ . The joint density of $(Y_1,\cdots,Y_n)$ is $f(y_1,\cdots,y_n)=\theta^{-n}1_{y_{(1)}>0}1_{y_{(n)}<\theta}$ , here $y_{(1)}=\min y_i$ and $y_{(n)}=\max y_i$ . So by the factorization theorem, $T(Y)=Y_{(n)}$ is a sufficient statistic for $\theta$ . Now note that $Y_{(n)}$ is complete, i.e. for any function $g:\mathbb{R}\to\mathbb{R}$ , $Eg(T(Y))=0$ for any $\theta$ implies $P(g(T(Y))=0)=1$ for any $\theta$ (this can be seen by directly writing the expectation as an integral), and $2Y_1$ is unbiased for $\theta$ , i.e. $E(2Y_1)=\theta$ , so we know that $E(2Y_1|T(Y))$ is the UMVUE for $\theta$ . Now we note that $E(T(Y))=\frac{n}{n+1}\theta$ , so $\frac{n+1}{n}T(Y)$ is unbiased for $\theta$ and thus $\frac{n+1}{n}T(Y)=E(\frac{n+1}{n}T(Y)|T(Y))$ is the UMVUE for $\theta$ . By uniqueness of the UMVUE, $E(2Y_1\mid Y_{(n)})=\frac{n+1}{n}Y_{(n)}$ , and therefore $E(X_1\mid X_{(n)})=\frac{n+1}{2n}X_{(n)}$ .
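The UMVUE argument leads to $E(X_1\mid X_{(n)})=\frac{n+1}{2n}X_{(n)}$, which can be sanity-checked by Monte Carlo, conditioning on the maximum landing in a narrow bin (the bin $[0.80, 0.82)$ and $n=5$ are arbitrary choices of mine):

```python
# Compare the empirical mean of X_1, given X_(n) in a narrow bin, with
# ((n+1)/(2n)) * (bin center).
import random

random.seed(0)
n, trials = 5, 400_000
lo, hi = 0.80, 0.82
vals = []
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    if lo <= max(xs) < hi:
        vals.append(xs[0])
mean = sum(vals) / len(vals)
print(round(mean, 3), (n + 1) / (2 * n) * (lo + hi) / 2)  # both ≈ 0.486
```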
|
|probability|order-statistics|conditional-expectation|
| 0
|
Definition of homogeneous function
|
According to different sources on the internet, a homogeneous function is defined by $f(tx,ty)=t^mf(x,y)$ for some integer $m$. My question is: why an integer, and why not a real number (assuming the function maps reals to reals)? Consider the function $f(x,y)=(x^2+y^2)^{1/3}$; this intuitively seems to be homogeneous, but according to the definition it is not. What is the reason? And is $f(x,y)=(x^2+y^2)^{1/2}$ a homogeneous function? Since $f(tx,ty)=|t|f(x,y)$, I suppose it shouldn't be, but please verify (and are these the reasons for it to be defined over the integers?)
|
Related: Is a function still homogeneous if it factors out the absolute value of $t$ or a non-integer exponent of $t$? It depends on the definition you are using: $m$ does not necessarily need to be an integer. Some sources define the exponent simply as a real number. $f(x,y)=(x^2+y^2)^{1/3}$ would satisfy such a definition, as $$f(tx,ty)=((tx)^2+(ty)^2)^{1/3}=t^{2/3}f(x,y) \quad (t>0)$$ with $m=2/3$. And $(x^2+y^2)^{1/2}$ is (positively) homogeneous of degree 1, since $f(tx,ty)=|t|f(x,y)=tf(x,y)$ for $t>0$. See the related link for more.
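A tiny numeric check of the degree-$2/3$ claim (for positive $t$; the sample point is arbitrary):

```python
# Check f(tx, ty) = t^(2/3) f(x, y) for t > 0 at a sample point.
from math import isclose

def f(x, y):
    return (x * x + y * y) ** (1 / 3)

x, y = 1.3, -0.7
for t in (0.5, 2.0, 7.3):
    assert isclose(f(t * x, t * y), t ** (2 / 3) * f(x, y))
print("homogeneous of degree 2/3 (for positive t)")
```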
|
|functions|definition|
| 0
|
Equation of a plane equidistant from two points $A$ and $B$?
|
Suppose I have two points $A=(x_1,y_1,z_1)$ and $B=(x_2,y_2,z_2)$. Suppose I want to find the equation of the plane equidistant from these points, of the form $ax+by+cz+d=0$. What is the equation in terms of the coordinates of $A$ and $B$?
|
First of all, it is necessary to define what is meant here by a plane equidistant from these points. Henceforth it will be assumed that for such a plane, each point of it is equidistant from the two given points. The answer to your question is stated in the following proposition. Proposition . Let $\big(\mathbb R ^n, \|\cdot\|_2\big)$ be the usual $n$-dimensional Euclidean space and let two points be represented by the two vectors $p,q\in \mathbb R ^n$ . The hyperplane $H\subseteq \mathbb R ^n$ (seen as an $(n-1)$-dimensional affine subspace of $\mathbb R ^n$ ) that is equidistant from $p$ and $q$ is given by the equation: $$(p-q)\cdot x = \frac{1}{2} \big(\|p\|_2^2- \|q\|_2^2\big)$$ Proof . A hyperplane is uniquely described by an equation $u\cdot x= c$ , for $c\in \mathbb R$ and $u\in \mathbb R ^n$ such that $\|u\|_2=1$ . (Finding $u$ ) Typically, $u$ is the unit vector orthogonal to the plane: in this case, such an orthogonal vector is given by $p-q$ , divided by its norm.
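A concrete check in $\mathbb R^3$ (all sample values below are my own choices): points generated on the plane of the proposition are indeed equidistant from $p$ and $q$:

```python
# Build points on the plane (p-q)·x = (|p|^2 - |q|^2)/2 and verify equidistance.
from math import isclose, sqrt

p, q = (1.0, 2.0, 3.0), (4.0, 0.0, -1.0)        # two sample points
n = tuple(a - b for a, b in zip(p, q))          # normal vector p - q = (-3, 2, 4)
mid = tuple((a + b) / 2 for a, b in zip(p, q))  # the midpoint lies on the plane

def dist(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# two directions orthogonal to n, found by inspection for this particular n
d1, d2 = (2.0, 3.0, 0.0), (4.0, 0.0, 3.0)
for s, t in [(0, 0), (1, -2), (3.5, 0.25)]:
    x = tuple(m + s * e1 + t * e2 for m, e1, e2 in zip(mid, d1, d2))
    assert isclose(dist(x, p), dist(x, q))      # x is equidistant from p and q
print("all sampled plane points are equidistant from p and q")
```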
|
|vector-spaces|
| 0
|
Prove that center of $G$ is not trivial and $G$ is a cyclic group
|
Assume that $G$ is a group such that $|G|=55$ and $G$ has exactly four elements of order $5$ . Prove that the center of $G$ is not trivial and that $G$ is a cyclic group. My try: We know that $|h_1|=|h_2|=|h_3|=|h_4|=5$ and there does not exist $h_5$ such that $|h_5|=5$ . Hence we know that $h_1^5=h_2^5=h_3^5=h_4^5=1$ . If an element has order $1$ then it is the neutral element of $G$ , so $|e|=1$ . The divisors of $|G|$ are $1,5,11,55$ , so the other $50$ elements which we haven't mentioned have order $11$ or $55$ . The center of $G$ is the set $Z(G)=\{g\in G: \forall _{x\in G}\ xg=gx\}$ . A group is cyclic when it is generated by one element. These are all my observations about this task. Have you got some ideas?
|
The Sylow $5$-subgroup is unique because there are only $4$ elements of order $5$ . Hence it's normal. Then, since the Sylow $5$- and $11$-subgroups intersect trivially, we easily get that $G\cong\Bbb Z_5\rtimes \Bbb Z_{11}.$ But there's no non-trivial action because $|\operatorname{Aut}(\Bbb Z_5)|=4$ and $(4,11)=1.$ So it's cyclic and the center is everything.
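Since the conclusion is $G\cong\Bbb Z_{55}$, the element-order bookkeeping in the question can be cross-checked in the cyclic group $\Bbb Z_{55}$, where the number of elements of order $d$ is $\varphi(d)$. A small check of mine, not part of the answer:

```python
def add_order(g, n):
    # additive order of g in Z/nZ: smallest k >= 1 with k*g ≡ 0 (mod n)
    k, s = 1, g % n
    while s != 0:
        s = (s + g) % n
        k += 1
    return k

counts = {}
for g in range(55):
    k = add_order(g, 55)
    counts[k] = counts.get(k, 0) + 1
```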
|
|group-theory|cyclic-groups|
| 0
|
L'Hospital's Rule and Transfer Function
|
In an Electrical Engineering textbook I am given this function: $$ H_C(j\omega) = \frac{10^6j\omega}{10^{10} - \omega^2 + 10^5j\omega} $$ where $j = \sqrt{-1},\ \omega = 2\pi f,\ f=\text{frequency (Hz)}$ . The textbook goes on to say 'the high frequency asymptote is $$ H(j\omega) = \frac{10^6}{j\omega} $$ '. If I take the limit as $\omega \to \infty$ , then isn't the fraction in the form $ \frac{\infty}{\infty}$ ? If not, why not? When I apply L'Hospital's Rule here, I do not get the same answer for the high-frequency asymptote; I get $$ \frac{10^6j}{-2\omega + 10^5j}, $$ so in the limit, the function goes to $0$?
|
You’re almost there. The only thing left to note is that $|10^5j| \ll |2\omega|$ for large enough $\omega$ . Thus, your fraction becomes $\frac{10^6j}{-2\omega}$ , which is the suggested answer apart from a factor of $\frac12$ , but I believe that this is just a typo in the textbook.
|
|calculus|
| 0
|
Help me check my proof of the cancellation law for natural numbers (without trichotomy)
|
Can you guys help me check the fleshed-out logic of 'my' proof of the cancellation law for the natural numbers? It's in Peano's system of the natural numbers with the recursive definitions of addition and multiplication. I was trying to prove it without trichotomy and stumbled across a proof given by Mauro Allegranza here . I'm basically unsure about whether I have articulated the logic in step 19 correctly. Also, I find it somewhat iffy to use the same symbol $b$ in steps 8 and 9 when articulating what $P_1(c)$ and $P_1(S(c))$ state. This is because when we reason about both statements simultaneously, $b$ in one statement is not equal to $b$ in the other statement, but I don't know if there's a way to notate the variables elegantly with this fact in mind. Also, $A2$ states that $0$ succeeds no number and that every nonzero number has a predecessor, $A3$ states that $a=b$ iff $S(a)=S(b)$ , and positivity theorem 2 states that if $a\times b=0$ , then $a=0$ or $b=0$ . By the way, it'll be nice if I can
|
You are doing a really good job in trying to convey the structure of the proof. However, there are a couple of places where you don't get the interplay between the universals and the conditionals that are involved here quite right. OK, so as the inductive step of the inductive proof, you are trying to prove that: $\forall c: (P_1(c) \to P_1(S(c)))$ where $P_1(c)$ is: $\forall a \neq 0 \ \forall b \ (a*b = a*c \to b = c)$ Thus, you are trying to show : $\forall c (\forall a \neq 0 \ \forall b \ (a*b = a*c \to b = c) \to \forall a \neq 0 \ \forall b \ (a*b = a*S(c) \to b = S(c)))$ Now, I think the source of your worry is that you didn't spell out in lines 8 and 9 that the universal quantifiers for $a$ and $b$ are part and parcel of the property $P_1$ . And by not doing so, it now suddenly seems as if you are trying to prove: $\forall a \neq 0 \ \forall b \ \forall c ((a*b = a*c \to b = c) \to (a*b = a*S(c) \to b = S(c)))$ And yes, if that is what you are trying to show, then we have a worry: t
|
|elementary-number-theory|logic|solution-verification|induction|peano-axioms|
| 0
|
Khan Academy question , expected value with calculated probabilities
|
Merita has decided to play in The Clothing Combo Contest. First, she will randomly choose from a pair of brown, purple, blue, green, or black pants. Next, she will randomly choose from a black, brown, or green shirt. If both the shirt and pants are a color that starts with a "B", she will win 10 dollars. If only one of the pieces of clothing is a color that starts with a "B", she will break even. Under any other outcome, she will lose 20 dollars. What is Merita’s expected value of playing The Clothing Combo Contest? Round your answer to the nearest cent. This is the question , and I can easily figure out the odds of her winning which is 6/15. How can I figure out the probability of her breaking even without writing up a table and counting the number of times there is only 1 "B" in the possible outcomes?
|
You don't need to consider the probability of one B, in order to compute the expected result, because having one B results in no profit or loss. If you play the game 15 times, you can expect one occurrence of each of the $~5 \times 3~$ possibilities. This implies that at the end of these 15 games, you will have won $~3 \times 2 \times 10\$ = 60\$~$ and lost $~2 \times 1 \times 20\$ = 40\$.~$ So, your expectation is (in dollars) $~\dfrac{60 - 40}{15}.$
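The same computation by direct enumeration of the $5\times 3$ equally likely outcomes (a sketch I added, using exact fractions):

```python
from fractions import Fraction
from itertools import product

pants = ["brown", "purple", "blue", "green", "black"]
shirts = ["black", "brown", "green"]

expected = Fraction(0)
for p, s in product(pants, shirts):
    n_b = (p[0] == "b") + (s[0] == "b")            # how many pieces start with "B"
    payoff = 10 if n_b == 2 else (0 if n_b == 1 else -20)
    expected += Fraction(payoff, len(pants) * len(shirts))
```

The result is $20/15 = 4/3$ dollars, i.e. $\$1.33$ rounded to the nearest cent.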
|
|probability|expected-value|
| 0
|
How to solve $|x-5| + |x-4| ≥ 3$
|
I was given the following question to solve for homework. The solution $S$ I got was $$S = \{x : x ≥ 6 \text{ or } x\leq3\}$$ I checked my solution with the answers provided and it was correct. My working out can be seen below: $|x-5| + |x-4| - 3 = \begin{cases} 2x-12 , & \text{if } x>5\\ -2, & \text{if } 4\leq x\leq 5\\ -2x+6, & \text{if } x<4 \end{cases}$ I then solved $2x-12 ≥ 0$ , $-2 ≥ 0$ , and $-2x+6 ≥ 0$ . This is how I was instructed to do it but it does not make sense to me, because why do we just dismiss the regions stated above, namely $x > 5$ , $4\leq x\leq 5$ and $x<4$ ? Can someone please provide an explicit explanation?
|
We may consider the problem in two dimensions. The condition $$\|(x,y)-(4,0)\|+\|(x,y)-(5,0)\|\ge 3$$ describes the exterior (with boundary) of the ellipse with foci at $(4,0),\ (5,0)$ and sum of focal distances equal to $3.$ The leftmost point of the ellipse is located at $(3,0)$ while the rightmost one at $(6,0).$ Therefore the solution is $x\le 3$ or $x\ge 6.$
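A brute-force check of the claimed solution set on a fine grid (my addition, just to confirm the ellipse argument):

```python
def lhs(x):
    # left-hand side of the inequality |x-5| + |x-4| >= 3
    return abs(x - 5) + abs(x - 4)

# compare the inequality with the claimed solution set x <= 3 or x >= 6
matches = all((lhs(k / 100) >= 3) == (k / 100 <= 3 or k / 100 >= 6)
              for k in range(-500, 1300))
```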
|
|calculus|algebra-precalculus|analysis|inequality|absolute-value|
| 0
|
Determining the norm of a linear operator of a normed R-vector space.
|
I have been trying to solve an exercise on normed vector spaces, and I'm stuck on the 2nd question. My answer to the 1st question: We have $\varphi$ linear, and $$||\varphi(f)||_{1} = \int_0^1 |\varphi(f)(t)|dt$$ $$ = \int_0^1 |\int_0^t f(x)dx|dt$$ $$ \leq \int_0^1 \int_0^t |f(x)|dxdt$$ $$ \leq \int_0^1 \int_0^t |f(x)|dxdt + \int_0^1 \int_t^1 |f(x)|dxdt$$ $$ = \int_0^1 \int_0^1 |f(x)|dxdt$$ $$ = \int_0^1 |f(x)|dx \cdot \int_0^1 dt$$ $$ = \int_0^1 |f(x)|dx$$ $$ = ||f||_{1}$$ $$ \implies |||\varphi||| \leq 1,$$ and $\varphi$ is continuous. For the 2nd question, I'm trying to find a certain function in $E$ that verifies $=$ instead of $\leq$ in my answer to the previous question, so I can get the second inequality and hence the equality, using the sup in the definition of $|||\varphi|||$ .
|
Finally, I found the functions $f_{0}(x)=e^{-nx}$ (a sequence indexed by $n$), which verify the conditions, where $$\frac{||\varphi(f_{0})||_{1} }{||f_{0}||_{1}} = \frac{1+\frac{e^{-n}}{n}-\frac{1}{n}}{1-e^{-n}}\rightarrow 1\quad\text{as } n \rightarrow +\infty.$$ And since $|||\varphi|||$ is bounded by $1$ , $\sup_{f \in E,\,f\neq 0}\left(\frac{||\varphi(f)||_{1} }{||f||_{1}}\right) = 1$ , i.e., $|||\varphi||| = 1$ .
|
|functional-analysis|vector-spaces|normed-spaces|vector-analysis|
| 0
|
About reversing the Euclidean Algorithm, Lemma of Bézout
|
From the book Discrete Mathematics for Computing 2nd Edition in eBook: I know how to perform the Euclidean Algorithm and compute $\gcd(a,b)$ . I am, however, deeply confused by this: $$1 = 415 - 69(421 - 1 \times 415)$$ $$ = 70 \times 415 - 69 \times 421$$ How is this expression constructed: $70 \times 415 - 69 \times 421$ ? N.B. I am still new to this and learning. I am using the following book(s): Discrete Mathematics for Computing / Edition 2 by Peter Grossman [eBook]; Discrete Mathematics for Computing / Edition 3 by Peter Grossman [Physical copy]
|
How is this expression constructed $70 \times 415 - 69 \times 421$ ? Using colors might be helpful. $$\begin{align}1& = \color{red}{415} - 69(\color{blue}{421} - 1 \times \color{red}{415}) \\\\&=\color{red}{415} - 69\times \color{blue}{421} +69 \times\color{red}{415} \\\\&=\color{red}{415}+69 \times \color{red}{415} - 69\times \color{blue}{421} \\\\&=(1+69)\times \color{red}{415}- 69\times \color{blue}{421} \\\\&=70 \times \color{red}{415} - 69 \times \color{blue}{421}\end{align}$$ To get a solution $(x,y)$ of the equation $\color{purple}{2093}x+\color{orange}{836}y=1$ , they started with $$\color{blue}{421}=\color{purple}{2093}-2\times \color{orange}{836}\tag1$$ $$\color{red}{415}=\color{orange}{836}-1\times \color{blue}{421}\tag2$$ $$\color{green}6=\color{blue}{421}-1\times \color{red}{415}\tag3$$ $$1=\color{red}{415}-69\times \color{green}6\tag4$$ They have $$\begin{align}1&=\color{red}{415}-69\times \color{green}6 \\\\&=\color{red}{415}-69\times (\color{blue}{421}-1\times \color{re
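The back-substitution above is exactly the extended Euclidean algorithm; here is a compact recursive version (my sketch), applied to the same numbers:

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b),
    # by running the Euclidean algorithm and back-substituting
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Bezout coefficients for 2093*x + 836*y = 1, as in the book's example
g, x, y = ext_gcd(2093, 836)
```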
|
|elementary-number-theory|proof-explanation|euclidean-algorithm|
| 0
|
If I have the COLUMN vectors (7 -11 24), (1 3 -2), (5 -1 9) in the vector space $\mathbb R^3$ then show i) (7 -11 24) ∈ Span((1 3 -2), (5 -1 9))
|
If I have the COLUMN vectors $\begin{pmatrix}7 \\-11 \\24\end{pmatrix},\begin{pmatrix}1 \\3\\-2\end{pmatrix},\begin{pmatrix}5 \\-1 \\9\end{pmatrix}$ in the vector space $\mathbb R^3$ then show i) $\begin{pmatrix}7 \\-11 \\24\end{pmatrix} \in$ Span $(\begin{pmatrix}1 \\3 \\-2\end{pmatrix},\begin{pmatrix}5 \\-1 \\9\end{pmatrix})$ ii) Are $(1, 3, -2)$ and $(5, -1, 9)$ linearly independent in $\mathbb R^3$ ? iii) Consider the set $S =$ Span $((7 ,-11, 24))$ , which is a subspace of $\mathbb R^3$ . What is the dimension of $S$ ? These are my thoughts so far: i) $\alpha(1, 3, -2) + \beta(5, -1 ,9) = (7, -11 ,24)$ $\alpha + 5\beta = 7$ $3\alpha -\beta = -11$ $-2\alpha + 9\beta = 24$ Adding $\alpha + 5\beta = 7$ to $5(3\alpha-\beta) = 5(-11) = -55$ gives $16\alpha = -48$ , so $\alpha=-3$ and $\beta=2$ . Checking this we have $-3(1, 3, -2) + 2(5, -1, 9)$ which is $(7, -11, 24)$ , so it checks out that it's in the span. ii) I thought of 2 ways to do this WAY 1: I set up the vectors as one big matrix and then found the RREF. Since the RREF has a row of zeros,
|
(i) Here is an alternative method for question (i), which seems to me to be more in the spirit of the exercise: $\begin{bmatrix}1 & 5 & 7 \\3 & -1 & -11 \\-2&9&24\end{bmatrix}\overset{C'_3=C_3-7C_1}\longleftrightarrow \begin{bmatrix}1 & 5& 0\\3 & -1 &-32 \\-2 & 9 & 38\end{bmatrix}\overset{C'_2=C_2-5C_1}\longleftrightarrow \begin{bmatrix}1 &0 & 0 \\3 & -16 & -32 \\-2 & 19 & 38\end{bmatrix}$ So, $C'_3=2C'_2=2(C_2-5C_1)=2C_2-10C_1$ So, $C_3-7C_1=2C_2-10C_1$ And finally $C_3=2C_2-3C_1$ , as OP found. (ii) For question (ii), it suffices to note that the two vectors are not collinear. (iii) OP is right.
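The coefficients $\alpha=-3$, $\beta=2$ can also be recovered mechanically from the first two equations via Cramer's rule and then checked against the third equation; a small exact-arithmetic sketch of mine:

```python
from fractions import Fraction

# system: alpha + 5*beta = 7,  3*alpha - beta = -11  (Cramer's rule on the 2x2 part)
det = Fraction(1 * (-1) - 5 * 3)                  # determinant of [[1, 5], [3, -1]] = -16
alpha = Fraction(7 * (-1) - 5 * (-11)) / det      # first column replaced by the RHS
beta = Fraction(1 * (-11) - 7 * 3) / det          # second column replaced by the RHS
```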
|
|linear-algebra|vector-spaces|
| 0
|
Proving the set of limit (adherent) points equals the closure.
|
DEFINITION: The closure of $A$ in a metric space is $\overline A := \bigcap \{C : C \text{ is closed and } A \subset C \}$ . DEFINITION: Let $(X,d)$ be a metric space and $A \subset X$ . We say $x$ is a limit point of $A$ if for all open subsets $U$ such that $x \in U$ , $U \cap A \neq \emptyset$ . (Sometimes, this is called an "adherent point", but I am using the word "limit point.") LEMMA: Let $(X,d)$ be a metric space. Then $x\in X$ is a limit point of $A \subset X$ if and only if there exists $(x_n)$ in $A$ (that is, each $x_n \in A$ ) such that $\lim_{n\rightarrow \infty} x_n = x$ . CLAIM: Let $$\overline A' = \{\text{limit pts of A in }(X,d) \}.$$ Then $\overline A = \overline A'$ . PROOF: $(\subset)$ We claim $\overline A'$ is closed. Let $y \in X \backslash \overline A'$ . Then $y$ is not a limit point of $A$ . By the Lemma for all $(x_n) \in A$ , there exists $\epsilon_0 > 0$ such that for all $N_1 \in \mathbb N$ , there exists (STAR) $n \geq N_1$ such that $d(x_n, y) \geq \ep
|
Make sure you understand that the commonly-accepted definitions of "limit point" and "adherent point" are not the same. An "adherent point" $x$ of a set $A$ is such that every neighborhood $U$ of $x$ intersects $A$ . A "limit point" $x$ of a set $A$ is such that every neighborhood $U$ of $x$ intersects $A$ at a point other than $x$ itself. The definition you provided is for an "adherent point", so we will instead prove the statement Let $S$ be the set of adherent points of $A$ . Then $S = \text{Cl} A$ . ( $\text{Cl} A$ contains $S$ ) Let $x$ be an adherent point of $A$ . We want to show that $x \in \text{Cl} A$ , so using your definition, we need to show that $x$ is contained in all the closed sets containing $A$ . So, let $C$ be an arbitrary closed set containing $A$ ; then $X - C$ is an open set missing $A$ . Suppose for the sake of contradiction that $x \in X - C$ . Since $X - C$ is open, there would be a neighborhood $U$ of $x$ such that $U$ misses $C$ , and since $A \subseteq C$ ,
|
|general-topology|metric-spaces|
| 1
|
Are Cesàro summation and Abel summation frowned upon in the mathematical community?
|
If we agree to define $S=\lim_{n\to \infty}\sum_{k=0}^na_k$ , then only convergent series can have a meaningful $S$ . But if we decide to define $S$ to be the Cesàro mean, we can now also assign a real number $S$ to some divergent series. My question is: is such a relaxation of the definition frowned upon in the mathematical community? The reason I ask is that I have read several blog posts where authors are adamant that only convergent series are summable and only summability in the ordinary sense is useful. In those articles, they usually point out the Ramanujan sum as an example of getting nonsensical answers. They especially have huge problems with results obtained by analytic continuation of the Riemann zeta function. To me, Cesàro summation and Abel summation are natural extensions of summation in the ordinary sense. Where do you draw the line? P.S. I am very new to this topic and mathematics in general, so please bear with me for my naive question.
|
You can argue that Cesàro and Abel summation are the most natural extensions of the standard infinite summation, but that doesn't make a summable divergent series convergent. There are a lot of significant properties that a convergent series has and a divergent series doesn't, so it's best to keep things precise to their meaning and avoid ambiguity, which can later on lead to contradictions if you're not careful. You can just call them Cesàro or Abel summable. An example is grouping consecutive terms in a series. For a convergent series, you can do that however you want and the result wouldn't change. But for a Cesàro summable series, that isn't true: $1+(-1+1)+\dots = 1+0+0+\dots = 1$ whereas $(1-1) + (1-1) + \dots = 0+0+\dots = 0$
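The grouping phenomenon is easy to see numerically: the Cesàro means of Grandi's series tend to $1/2$, while the two grouped versions are genuinely convergent, to $1$ and $0$ respectively. A small sketch of mine:

```python
def cesaro_means(terms):
    # running Cesàro means: (S_1 + ... + S_n) / n, where S_n are partial sums
    out, s, acc = [], 0, 0
    for n, a in enumerate(terms, start=1):
        s += a          # partial sum S_n
        acc += s
        out.append(acc / n)
    return out

grandi = [(-1) ** k for k in range(10000)]         # 1 - 1 + 1 - 1 + ...
m_grandi = cesaro_means(grandi)
m_group1 = cesaro_means([1] + [0] * 9999)          # 1 + (-1+1) + (1-1) + ...
m_group0 = cesaro_means([0] * 10000)               # (1-1) + (1-1) + ...
```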
|
|divergent-series|
| 0
|
If $f_n \to f$ in $L^p$ then $f_n^r \to f^r$ in $L^{p/r}$ for $1\leq r \leq p$
|
I want to do this problem from Le Gall's Measure Theory book. If $f_n \to f$ in $L^p$ then $f_n^r \to f^r$ in $L^{p/r}$ for $1\leq r \leq p$ . I am really stuck on even where to start. Since there is no finite measure I don't see if I can simplify it using Holder's inequality by splitting on $\{ |f-f_n| \geq \varepsilon\}$ .
|
I'm assuming $f_n,f$ are meant to be non-negative here so that $f_n^r$ and $f^r$ are well-defined for non-integer $r$ . Suppose we are working on a measure space $(E,\mathcal{E},\mu)$ . Consider the function $g: \mathbb{R}_+ \to \mathbb{R}$ defined by $g(x) = x^r$ . Then $g'(x)=rx^{r-1}$ so for $0\le a\le b$ , by the mean value theorem we have $|g(a)-g(b)|= |a-b||g'(c)|$ for some $c \in [a,b]$ . Hence $|a^r-b^r| \le r|a-b| \max(a,b)^{r-1}$ . Therefore we have $$\begin{align*} |f_n^r-f^r| &\le r|f_n-f| \max(f_n,f)^{r-1}\end{align*}$$ and so $$\begin{align*} |f_n^r-f^r|^{p/r} &\le r^{p/r} |f_n-f|^{p/r} \max(f_n,f)^{p-p/r}.\end{align*}$$ Now by applying Hölder's inequality with $r$ and $q := \frac{1}{1-\frac{1}{r}}$ we have $$\begin{align*} \int_E |f_n-f|^{p/r}\max(f_n,f)^{p-p/r}\mathrm{d}\mu &\le \left(\int_E |f_n-f|^p \mathrm{d}\mu\right)^{1/r} \left(\int_E \max(f_n,f)^{q(p-p/r)}\mathrm{d}\mu\right)^{1/q}\\ &= \left(\int_E |f_n-f|^p \mathrm{d}\mu\right)^{1/r} \left(\int_E \max(f_n,f)^{p
|
|real-analysis|measure-theory|
| 1
|
Confused.. why we cannot use determinant addition property for this question
|
$$ \begin{vmatrix} x+y & z & z \\ x & y+z & x \\ y & y & x+z \\ \end{vmatrix} $$ The answer turns out to be $4xyz$ , but only after some transformations. But here, why can we not use the addition property of determinants in the 1st step and get the answer, which is giving $0$?
|
The additive property of the determinant (or at least what I guess you're saying) applies when you have two matrices that share all but one row/column. Let's say $$A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \\ B = \begin{bmatrix} b_{11} & b_{12} & b_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix},\\ C = \begin{bmatrix} a_{11} + b_{11} & a_{12}+b_{12} & a_{13}+b_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}, $$ Then $\det C = \det A + \det B$ Can you use this to solve the above problem? Yes, but you have to be clever $$ \begin{aligned} &\begin{vmatrix} x+y & z & z\\ x & y+z & x\\ y & y & x+z \end{vmatrix}\\ &= \begin{vmatrix} x+y & z & z\\ x & z-y & -x\\ y & y & x+z \end{vmatrix} + \begin{vmatrix} x+y & z & z\\ 0 & 2y & 2x\\ y & y & x+z \end{vmatrix}\\ &= 0 + 2\begin{vmatrix} x+y & z & z\\ 0 & y & x\\ y & y & x+z \end{vmatrix}\\ &= 2\begin{vmatrix} y & 0 & z\\ 0 & y
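A quick spot-check of the target identity $\det = 4xyz$ on random integer inputs (my addition):

```python
import random

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def claimed(x, y, z):
    return det3([[x + y, z,     z    ],
                 [x,     y + z, x    ],
                 [y,     y,     x + z]])

random.seed(0)
ok = True
for _ in range(50):
    x, y, z = (random.randint(-9, 9) for _ in range(3))
    ok = ok and claimed(x, y, z) == 4 * x * y * z
```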
|
|matrices|determinant|
| 0
|
Difference between the sphere and RP^2 in the generators and relations of $\pi_1$
|
Here are two drawings we had that I did not quite understand why they are correct. Here are my questions: 1. I think the drawing of $RP^2$ is meant to be a circle, not a disk; am I correct? 2. What should be the generators of $S^2$? And how does this lead us to conclude that its $\pi_1$ is zero? Could someone clarify these points for me please?
|
For $RP^2$ , its classic definition is indeed the disc quotiented by the relation on the 'edges', so here it is indeed a disk, and not a circle. $S^2$ is simply connected, which precisely means that $\pi_1(S^2)=\{id\}$ is trivial. We can see this either via more general results, or by showing that a path on $S^2$ covers some point $x$ only finitely many times, then moving the path away from $x$ . Then the path has image in $S^2\backslash \{x\}$ , which is homeomorphic to $\mathbb{R}^2$ , which is simply connected, so the path is homotopic to the constant path.
|
|general-topology|algebraic-topology|fundamental-groups|projective-space|
| 1
|
Solving $25^n + 16^n \equiv 1 \pmod{121}$
|
Original Question: Solve for all positive integers $n$ such that $25^n + 16^n \equiv 1 \pmod{121}$ . I began by substituting $k=2n$ to obtain $$5^k + 4^k \equiv 1 \pmod{11}$$ Considering this modulo $11$ , it can be found that $k \equiv 4 \pmod{5}$ . Then, I observed that $5^3 \equiv 4 \pmod{121}$ , and so I split the congruence into three cases: Case-1 $k \equiv 0 \pmod{3}$ . Then, $k=3r$ . $$(4^k)^3+4^k \equiv 1 \pmod{121}$$ It follows that $4^k \equiv 64 \pmod{121}$ . Case-2 $k \equiv 1 \pmod{3}$ . Then, $k=3r+1$ . $$4 \cdot (4^r)^3 +5 \cdot (4^r) \equiv 1 \pmod{121}$$ It follows that $4^k \equiv 37 \pmod{121}$ . Case-3 $$k \equiv 2 \pmod{3}.$$ Then, $$ k=3r+2.$$ $$16 \cdot (4^r)^3 +25 \cdot (4^r) \equiv 1 \pmod{121}.$$ It follows that $4^r \equiv 80 \pmod{121}$ . This leaves the congruences: $4^k \equiv 64 \pmod{121}$ for $k \equiv 4 \pmod{15}$ $4^k \equiv 37 \pmod{121}$ for $k \equiv 9 \pmod{15}$ $4^k \equiv 80 \pmod{121}$ for $k \equiv 14 \pmod{15}$ From here, I am unsure how to
|
As $5^3\equiv4\pmod{121},$ $$p(n)=5^{2n}+4^{2n}\equiv5^{2n}+(5^3)^{2n}\pmod{121}$$ Let's verify $5^{2n}+5^{6n}\pmod{11}$ As $5^2\equiv3\pmod{11},5^3\equiv15\equiv4,5^5\equiv3\cdot4\equiv1,$ $p(n)=5^{2n}+5^{6n}\equiv5^n(1+5^n)\pmod{11}$ and it will have a period of $5$ $p(1)\equiv5(1+6)\not\equiv1\pmod{11}$ $p(2)\equiv5^2(1+5^2)\equiv3(1+3)\equiv1\implies n=5m+2$ where $m$ is a non-negative integer $5^3\equiv4\pmod{121},5^5\equiv5^2\cdot4\not\equiv1\pmod{11^2}$ Using Order of numbers modulo $p^2$ , modulo ord $_{121}5=11\cdot5\ \ \ \ (1)$ $p(5m+2)=5^{2(5m+2)}+4^{2(5m+2)}\equiv5^4\cdot(5^{10})^m+4^4\cdot(4^{10})^m$ $5^4\equiv5\cdot4\pmod{121},5^5\equiv4\cdot5^2\equiv-21,5^{10}\equiv(-21)^2\equiv1+77$ As using binomial theorem $(1+77)^m=1+\binom m177^1+$ terms divisible by $77^2,$ $\implies5^4\cdot(5^{10})^m\equiv20(1+77)^m\equiv20(1+77m)\pmod{121}\equiv20+88m$ Similarly, $4^4\cdot(4^{10})^m\equiv14(1-11)^m\equiv14(1-11m)\pmod{121}\equiv14-33m$ $\implies p(5m+2)\equiv34+55m$ So, we need $
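A brute-force check over one full period (my addition; the order computation above gives period $55$ in $n$) confirms the analysis: the solutions are exactly $n \equiv 32 \pmod{55}$, consistent with the condition $n \equiv 2 \pmod 5$ found above.

```python
# all n in the full period 1..110 with 25^n + 16^n ≡ 1 (mod 121)
sols = [n for n in range(1, 111)
        if (pow(25, n, 121) + pow(16, n, 121)) % 121 == 1]
```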
|
|elementary-number-theory|contest-math|
| 1
|
Conditional probability and extraction without replacement
|
Inside a box, there are $10$ balls numbered from $1$ to $10$ , which are extracted without replacement. $X_n$ is a random variable that represents the result of the $n^{\text{th}}$ extraction. Evaluate $$P(X_5>\max(X_1,X_2,X_3,X_4)|X_5=7).$$ By definition, $$\Pr(X_5>\max(X_1,X_2,X_3,X_4)|X_5=7)=\frac{\Pr(\max(X_1,X_2,X_3,X_4)<7,\ X_5=7)}{\Pr(X_5=7)}.$$ But now I'm stuck: I don't know how to evaluate $\Pr(\max(X_1,X_2,X_3,X_4)<7,\ X_5=7)$ .
|
$ \Pr(\max(X_1, X_2, X_3, X_4)<7,\ X_5=7) $ is the probability that all of the first four extractions result in numbers less than $7$ , and that the fifth extraction is exactly $7$ . Since the balls are extracted without replacement, the total number of possible outcomes for the first four extractions is the number of ways to choose $4$ different balls from $10$ , which is $ \binom{10}{4} $ . Now, let's calculate $ \Pr(\max(X_1, X_2, X_3, X_4)<7,\ X_5=7) $ . To satisfy the condition $ \max(X_1, X_2, X_3, X_4)<7 $ , the first four balls must be chosen from the balls numbered $1$ to $6$ . There are $ \binom{6}{4} $ ways to do this because we're choosing $4$ balls out of the $6$ that are less than $7$ . After removing these $4$ balls, there are $6$ balls left, one of which is the ball numbered $7$ , so the probability of the 5th extraction being the ball numbered $7$ is $\frac{1}{6}$ . This gives us: $$ \Pr(\max(X_1, X_2, X_3, X_4)<7,\ X_5=7)=\frac{\binom{6}{4}}{\binom{10}{4}}\cdot\frac{1}{6}=\frac{15}{210}\cdot\frac{1}{6}=\frac{1}{84}. $$ Next, we calculate $ \Pr(X_5=7) $ . The probability that the fifth ball is the ball numbered 7
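The resulting value $\frac{1/84}{1/10}=\frac{5}{42}$ can be confirmed by exhaustively enumerating ordered draws (my sketch):

```python
from fractions import Fraction
from itertools import permutations

fav = tot = 0
# enumerate all ordered draws of 5 balls out of 10 (without replacement)
for draw in permutations(range(1, 11), 5):
    if draw[4] == 7:                     # condition on X5 = 7
        tot += 1
        if max(draw[:4]) < draw[4]:      # X5 exceeds the first four draws
            fav += 1

prob = Fraction(fav, tot)
```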
|
|probability|
| 0
|
Could you provide a theory to calculate the probability?
|
Question: Imagine that we have a total of $n$ coins, which have been serialised from $1$ to $n$ . We know that the probability that the $i$ -th coin will be heads up after being tossed is $$P(i) = \frac{2i}{2i+1}.$$ What is the probability that the number of coins that are heads up after we have tossed all $n$ coins will be an odd number? So far I still don't know how to approach this question. The largest problem I encounter is how to determine the distribution of the coins, since each coin has a different probability of being heads up. There is no need to provide the full solution; could you please offer me a possible method to approach it?
|
Consider the following polynomial $$Q(x) = \prod_{i=1}^n{\left(1-P_i+P_i x\right)} = \sum_{k=0}^n {c_k x^k}$$ where $c_k$ is the $k$-th order coefficient of $Q(x)$ in its expanded form. Notice that $c_k$ is equal to the probability that exactly $k$ coins are heads up. Then the question boils down to finding the sum of the odd-order coefficients of $Q(x)$. That sum is equal to $\frac{Q(1) - Q(-1)}{2}$ I'll let you figure out why :)
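A sketch of mine comparing the polynomial trick $\frac{Q(1)-Q(-1)}{2}$ against direct enumeration over all $2^n$ outcomes, with the given $P(i)=\frac{2i}{2i+1}$ (the closed-form value $8/17$ for $n=8$ comes from the telescoping product $\prod(1-2P_i)=\frac{(-1)^n}{2n+1}$, which the reader can verify):

```python
from fractions import Fraction
from itertools import product

def p_odd_poly(ps):
    # P(odd number of heads) = (Q(1) - Q(-1)) / 2, with Q(x) = prod(1 - p + p*x);
    # Q(1) = 1 since every factor equals 1 at x = 1
    q_minus1 = Fraction(1)
    for p in ps:
        q_minus1 *= 1 - 2 * p
    return (1 - q_minus1) / 2

def p_odd_brute(ps):
    # direct enumeration over all 2^n head/tail outcomes
    total = Fraction(0)
    for outcome in product([0, 1], repeat=len(ps)):
        pr = Fraction(1)
        for o, p in zip(outcome, ps):
            pr *= p if o else 1 - p
        if sum(outcome) % 2 == 1:
            total += pr
    return total

ps = [Fraction(2 * i, 2 * i + 1) for i in range(1, 9)]   # coins P(i) = 2i/(2i+1), n = 8
```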
|
|probability|
| 0
|
Free object is a free group in the category of groups
|
I have a question and would appreciate a clear answer. Firstly, I will provide an introduction regarding my understanding, and then I will ask my question. Let's begin with the definition of a universal morphism from an object $ X $ to a functor $F$ : Let $\mathcal{C}$ and $\mathcal{D}$ be two categories. A universal morphism from object $X\in ob(\mathcal{D}) $ to functor $F: \mathcal{C} \longrightarrow \mathcal{D} $ is a pair $ (A, f) $ , where $A$ is an object of $\mathcal{C}$ and $ f: X \longrightarrow F(A) $ is a morphism in $\mathcal{D}$ . This pair satisfies the universal property, i.e., for every $ B \in \text{ob}(\mathcal{C}) $ and any morphism $ g: X \longrightarrow F(B) $ , there exists a unique map $h: A \longrightarrow B$ such that the diagram commutes: $ g = F(h) \circ f $ . Now, if $(\mathcal{C}, F) $ is a concrete category and $f $ is the canonical injection, then the universal morphism $ (A, f) $ is called a free object on a set $ X $ . On the other hand, if $\mathcal{C
|
Many definitions in mathematics (like the one you gave of "free group") are constructions . They tell you how to build a certain object. Other definitions (like the one you gave of "free object") are properties -- for any given mathematical object, you can check whether or not it satisfies the desired property. The central idea of category theory is to identify properties which uniquely determine important constructions up to canonical isomorphism. The definition of "free object" applies in the setting of any concrete category, but the constructions of free objects in these categories will be different. So, you need to show two things: (i) The construction of free groups that you provided satisfies the desired universal property; (ii) Any two free objects on the same set are canonically isomorphic. Part (i) is an elementary abstract algebra exercise. Part (ii) is easy if you have already learned the Yoneda lemma -- have you?
|
|group-theory|category-theory|free-groups|
| 0
|
When is $\sqrt p$ contained in $\Bbb Q[\omega_n]$?
|
Given any positive prime $p$ , there is some cyclotomic extension $\Bbb Q[\omega_n]$ of $\Bbb Q$ containing $\sqrt p$ as a consequence of Kronecker–Weber theorem. But more specifically, given any positive integer $n$ is there any nice way to tell if $\sqrt p\in\Bbb Q[\omega_n]$ ? I am looking for a way to compute the "yes or no" just like Legendre symbol, or any related results. This question arises naturally when one attempts to compute the degree of the extension $\Bbb Q\left[\omega_n,\sqrt{p}\right]$ over $\Bbb Q$ , in which case $\Bbb Q\left[\omega_n,\sqrt p\right]$ has degree $2$ over $\Bbb Q[\omega_n]$ if and only if $\sqrt p\in\Bbb Q[\omega_n]$ . This computation is crucial in determining the degree of the splitting field of $x^n-p$ over $\Bbb Q$ .
|
The primes ramified in $\mathbb Q(\sqrt{p})$ are $p$ and, if $p\not\equiv1\pmod 4$ , also $2$ . On the other hand, the primes ramified in $\mathbb Q(\omega_n)$ are the odd primes dividing $n$ , and also $2$ if $4\mid n$ . So necessary conditions for $\mathbb Q(\sqrt{p})\subset\mathbb Q(\omega_n)$ are (primes ramified in the subfield should stay ramified in the containing field): for $p\equiv1\pmod 4$ : $p\mid n$ ; for $p\equiv-1\pmod 4$ : $4p\mid n$ ; and by the comment of Jyrki Lahtonen (that $\sqrt{p}\in \mathbb Q(\omega_p)$ in the first case and $\sqrt{p}\in \mathbb Q(\omega_{4p})$ in the second case) they are also sufficient . What remains is $p=2$ , where the necessary & sufficient condition is $8\mid n$ . Here the ramification argument only gives $4\mid n$ , but in that case $1+i\in\mathbb{Q}(\omega_n)$ , and if $\sqrt2$ is in that field too, then so is $\frac{1+i}{\sqrt2}=\omega_8$ , so indeed $8\mid n$ .
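The containments credited to the comment follow from the classical quadratic Gauss sum, $\sum_k \left(\frac{k}{p}\right)\omega_p^k$, which equals $\sqrt p$ for $p\equiv1\pmod4$ and $i\sqrt p$ for $p\equiv3\pmod4$. A numeric check of mine:

```python
import cmath
import math

def gauss_sum(p):
    # quadratic Gauss sum: sum over k of legendre(k|p) * e^(2*pi*i*k/p), p an odd prime
    def legendre(k):
        t = pow(k, (p - 1) // 2, p)     # Euler's criterion
        return -1 if t == p - 1 else t
    return sum(legendre(k) * cmath.exp(2j * math.pi * k / p) for k in range(1, p))
```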
|
|abstract-algebra|algebraic-number-theory|cyclotomic-fields|
| 1
|
Show that $\lim_{n\to\infty}\frac{a^n}{n!}=0$ and that $\sqrt[n]{n!}$ diverges.
|
Let $a\in\mathbb{R}$. Show that $$ \lim_{n\to\infty}\frac{a^n}{n!}=0. $$ Then use this result to prove that $(b_n)_{n\in\mathbb{N}}$ with $$ b_n:=\sqrt[n]{n!} $$ diverges. Okay, I think that's not too bad. I write $$ \frac{a^n}{n!}=\frac{a}{n}\cdot\frac{a}{n-1}\cdot\frac{a}{n-2}\cdot\ldots\cdot a $$ and because all the factors converges to 0 resp. to $a$ (i.e. the limits exist) I can write $$ \lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{n}\cdot\lim_{n\to\infty}\frac{a}{n-1}\cdot\ldots\cdot\lim_{n\to\infty}a=0\cdot 0\cdot\ldots\cdot a=0. $$ Let $a_n:=\frac{a^n}{n!}$ and $a=1$ then $$ b_n=\frac{1}{\sqrt[n]{a_n}}. $$ Because (as shown above) $a_n\to 0$ it follows that $\sqrt[n]{a_n}\to 0$, because $$ \lvert\sqrt[n]{a_n}\rvert\leqslant\lvert a_n\rvert\to 0\implies\lvert\sqrt[n]{a_n}\rvert\to 0 $$ and therefore $b_n\to\infty$. I think that's all . Am I right?
|
Alternative ideas: Suppose first that $a > 0$ . Then the first sequence is convergent because it is bounded below by $0$ and decreasing for $n$ large enough (how large?): $$\frac{a^{n+1}}{(n + 1)!} = \frac{a}{n + 1}\cdot\frac{a^n}{n!} < \frac{a^n}{n!}\quad\text{for } n+1>a.$$ Let $L$ be the limit. Taking the limit of the subsequence: $$ L = \lim_{n\to\infty}\frac{a^{n+1}}{(n + 1)!} = \lim_{n\to\infty}\frac{a}{n + 1}\cdot\frac{a^n}{n!} = \lim_{n\to\infty}\frac{a}{n + 1}\cdot\lim_{n\to\infty}\frac{a^n}{n!} = 0\cdot L = 0, $$ so $L = 0$ . The general case ( $a\in\mathbb R$ ) can be proven by sandwiching (how?). The second limit can be calculated taking $\log$ and using Stolz–Cesàro : $$ \log\lim_{n\to\infty}\sqrt[n]{n!} = \lim_{n\to\infty}\log\sqrt[n]{n!} = \lim_{n\to\infty}\frac{\log 1 + \log 2 + \cdots + \log n}n = \cdots $$
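Numerically, both claims are easy to see; computing in log-space avoids overflow (my sketch, for $a>0$):

```python
import math

def term(a, n):
    # a^n / n! computed stably in log-space, for a > 0
    return math.exp(n * math.log(a) - math.lgamma(n + 1))

def root_fact(n):
    # n-th root of n!, via exp(log(n!) / n); grows like n/e by Stirling
    return math.exp(math.lgamma(n + 1) / n)
```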
|
|real-analysis|sequences-and-series|analysis|proof-verification|
| 0
|
Consider the ring $R = \mathbb{Z}/12\mathbb{Z} = \{[0]_{12}, [1]_{12}, ..., [11]_{12}\}$, and let $I$ be its ideal
|
I have solved a ring theory question but I am not sure whether my solution is correct. Can anyone Guide if there is mistake in it? Question Consider the ring $( R = \mathbb{Z}/12\mathbb{Z} = \{[0]_{12}, [1]_{12}, ..., [11]_{12}\})$ , and let ( I ) be its ideal $( I = \{[0]_{12}, [3]_{12}, [6]_{12}, [9]_{12}\} ).$ (a) List explicitly all the cosets of $( I )$ in $( R ).$ (b) Write down the addition and multiplication tables for ( R/I ). (c) Prove that $( R/I \cong \mathbb{Z}/3\mathbb{Z} )$ , by giving an explicit isomorphism (there is no need to prove formally that it is an isomorphism). Solution Part (a) \begin{align*} 0 + I &= \{0, 3, 6, 9\} \\ 1 + I &= \{1, 4, 7, 10\} \\ 2 + I &= \{2, 5, 8, 11\} \\ \end{align*} Part (b) Addition Table for $( R/I )$ : \begin{array}{c|ccc} + & [0] & [1] & [2] \\ \hline [0] & [0] & [1] & [2] \\ [1] & [1] & [2] & [0] \\ [2] & [2] & [0] & [1] \\ \end{array} Multiplication Table for ( R/I ): \begin{array}{c|ccc} \times & [0] & [1] & [2] \\ \hline [0] & [0]
|
Let $\phi:R/I\to \mathbb Z/3\mathbb Z$ be defined by $$\phi(0+I)=0$$ $$\phi(1+I)=1$$ $$\phi(2+I)=2$$ Your tables in part (b) show precisely that $\phi$ respects addition and multiplication, so $\phi$ is an isomorphism of rings: $$ R/I \cong \mathbb{Z}/3\mathbb{Z} $$ That's all that's required in part (c).
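The coset computation in part (a) and the homomorphism property behind part (c) can be machine-checked (my addition):

```python
def cosets(n, ideal):
    # partition Z/nZ into cosets of the given ideal
    seen, out = set(), []
    for r in range(n):
        cs = frozenset((r + i) % n for i in ideal)
        if cs not in seen:
            seen.add(cs)
            out.append(sorted(cs))
    return out

I = [0, 3, 6, 9]
C = cosets(12, I)

def respects_ops():
    # the map r + I -> r mod 3 should respect both + and * (this uses 3 | 12)
    for a in range(12):
        for b in range(12):
            if ((a + b) % 12) % 3 != (a % 3 + b % 3) % 3:
                return False
            if ((a * b) % 12) % 3 != ((a % 3) * (b % 3)) % 3:
                return False
    return True
```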
|
|abstract-algebra|ring-theory|
| 0
|
Efficiency of constrained LQR formulation in CVXPY via batch-approach
|
I am interested in formulating a discrete finite time constrained LQR in CVXPY. \begin{align} \text{minimize } J = & \sum_{k=0}^N x'(k)Qx(k) + u'(k)Ru(k) \\\\ \text{subject to } & x(k+1) = Ax(k) + Bu(k) \\\\ & Cx(k) + Du(k) \leq e \end{align} I came across the batch-approach [1] in a paper solving a similar problem [2]. As I understand it, the method is describing the problem only in terms of the inputs $u$ and initial state $x_0$ , which would reduce the total amount of variables in CVXPY in exchange for computing some large matrices beforehand. \begin{align} \begin{bmatrix} x(0) \\\\ x_1 \\\\ \dots \\\\ \dots \\\\ x_k \end{bmatrix} &= \begin{bmatrix} I \\\\ A \\\\ \dots \\\\ \dots \\\\ A^N \end{bmatrix} x(0) + \begin{pmatrix} 0 & \dots & \dots & 0 \\\\ B & 0 & \dots & 0 \\\\ AB & \ddots & \ddots & \vdots \\\\ \vdots & \ddots & \ddots & \vdots \\\\ A^{N-1}B & \dots & \dots & B \end{pmatrix} \begin{bmatrix} u_0 \\\\ \vdots \\\\ \vdots \\\\ u_{N-1} \end{bmatrix} \\\\ X &= S^x x(0) + S^u
|
From the mathematical point of view the reformulated problem is the same; the different formulation merely fits the more general quadratic programming framework, for which more efficient solvers may have been developed. The given structure of the optimization problem, cast as a quadratic program, might also benefit from using sparse matrices. But the best way to know which implementation is more efficient/faster for any particular problem is to implement both and test which one is better. Also note that the problem you are solving, LQR + inequality constraints, is also a form of model predictive control, for which quadratic programming solvers are used [1]. [1]: Schwenzer, M., Ay, M., Bergs, T., & Abel, D. (2021). Review on model predictive control: An engineering perspective. The International Journal of Advanced Manufacturing Technology, 117(5), 1327-1349.
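The batch matrices themselves are straightforward to build and sanity-check numerically. Below is a sketch (the toy system, dimensions, and the names `S_x`, `S_u` are illustrative; in CVXPY, `U` would then be the only decision variable, with cost and constraints written in terms of `S_x @ x0 + S_u @ U`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 2, 5                      # state dim, input dim, horizon
A = rng.standard_normal((n, n)) * 0.5  # made-up stable-ish dynamics for illustration
B = rng.standard_normal((n, m))

# S_x stacks I, A, A^2, ..., A^N; S_u is block lower triangular with blocks A^{i-1-j} B.
S_x = np.vstack([np.linalg.matrix_power(A, k) for k in range(N + 1)])
S_u = np.zeros(((N + 1) * n, N * m))
for i in range(1, N + 1):
    for j in range(i):
        S_u[i * n:(i + 1) * n, j * m:(j + 1) * m] = \
            np.linalg.matrix_power(A, i - 1 - j) @ B

# Verify X = S_x x0 + S_u U against direct simulation of x(k+1) = A x(k) + B u(k).
x0 = rng.standard_normal(n)
U = rng.standard_normal(N * m)
X_batch = S_x @ x0 + S_u @ U

x, X_sim = x0, [x0]
for k in range(N):
    x = A @ x + B @ U[k * m:(k + 1) * m]
    X_sim.append(x)
assert np.allclose(X_batch, np.concatenate(X_sim))
```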
|
|optimal-control|quadratic-programming|cvxpy|
| 1
|
the free product of two presentations is isomorphic to a third presentation using UP of free product.
|
Here is the question that I want an answer to it using commutative diagrams (as small number of them as possible): Prove that the free product of $ \langle g_1, \dots ,g_m | r_1, \dots ,r_n \rangle$ and $ \langle h_1, \dots ,h_k| s_1, \dots ,s_l \rangle$ is isomorphic to $\langle g_1, \dots ,g_m,h_1, \dots ,h_k|r_1, \dots ,r_n,s_1, \dots ,s_l \rangle.$ I had an incomplete proof for it where I ignored the relations, which is of course a mistake. I showed that the free group of a disjoint union $F(A\amalg B)$ is isomorphic to the free product of the corresponding free groups $F(A)*F(B)$ . Now, my question is: How can I prove this problem using the universal property of the free product? The universal property of the free product is: Note that the free product is the "coproduct in the category of groups." The universal property is: If $G_1$ and $G_2$ are two groups, then for any group $G$ and for any pair of homomorphisms $\varphi_1:G_1 \to G$ and $\varphi_2:G_2 \to G$ there is a unique homomorph
|
The two (not necessarily finitely) presented groups $G_i:=\langle S_i\mid R_i\rangle$ ( $i=1,2$ ) come with maps $\sigma_i:S_i\to G_i$ , and $(G_i,\sigma_i)$ is characterized (up to isomorphism) by the following universal property: $\sigma_i^*(R_i)=1$ , i.e. $\forall s_1^{n_1}\dots s_j^{n_j}\in R_i\quad\sigma_i(s_1)^{n_1}\dots\sigma_i(s_j)^{n_j}$ is the identity element of $G_i$ ; for every group $H$ and every map $f:S_i\to H$ such that $f^*(R_i)=1$ , there is a unique homomorphism $F:G_i\to H$ such that $F\circ\sigma_i=f$ . The free product $G_0:=G_1*G_2$ comes with two homomorphisms $\psi_i:G_i\to G_0$ ( $i=1,2$ ), and $(G_0,\psi_1,\psi_2)$ is characterized (up to isomorphism) by the following universal property (correcting your formulation, since there is no categorical notion of "word representatives of length one"): for any group $G$ and any pair of homomorphisms $\phi_i:G_i\to G$ ( $i=1,2$ ), there is a unique homomorphism $\varphi:G_0\to G$ such that $\varphi\circ\psi_i=\phi_i$ ( $i=1
|
|algebraic-topology|category-theory|universal-property|free-product|coproduct|
| 0
|
Given the constraint $m=a_1>a_2>...>a_n$ with integer elements, prove $\sum \frac{a_i-a_{i+1}}{a_i} < H_m$
|
For decreasing positive integers $a_1>a_2>...>a_n>0$ with $a_1=m$ , and setting $a_{n+1}=0$ , prove that $\sum_{i=1}^n \frac{a_i-a_{i+1}}{a_i} < H_m$ . I might add that $n$ can be chosen by us, as long as $a_1=m$ . I got the problem from a different problem, and if I prove this I have solved that problem, so I'm not sure if it's true. I know the harmonic number is achieved if we choose $a_i=m-i+1$ , i.e. $a_m=1$ . My attempt was to prove that the question is equivalent to proving $$\min \sum_{i=1}^n \frac{a_{i+1}}{a_i}$$ is achieved by $a_i=m-i+1$ , i.e. $a_m=1$ . Showing the equivalence is simple, but proving that this choice is the best I haven't managed to do.
|
The idea has been nicely demonstrated in Matija's answer . Here is a formal proof for the general case. For $1 \le i \le n$ and $a_{i+1}+1 \le k \le a_i$ we have $1/a_i \le 1/k$ , so $$ \frac{a_i - a_{i+1}}{a_i} = \sum_{k=a_{i+1}+1}^{a_i} \frac 1{a_i} \le \sum_{k=a_{i+1}+1}^{a_i} \frac 1k \, , $$ with equality if $a_i - a_{i+1} = 1$ , and strict inequality otherwise. It follows that $$ \sum_{i=1}^n \frac{a_i - a_{i+1}}{a_i} \le \sum_{i=1}^n \sum_{k=a_{i+1}+1}^{a_i} \frac 1k \overset{(*)}{=} \sum_{k=1}^m \frac 1k = H_m \, . $$ $(*)$ holds because every integer $k$ between $1$ and $m$ is in exactly one half-open interval $(a_{i+1}, a_i]$ . Equality holds if $a_i - a_{i+1} = 1$ for all $i$ , that is if $n = m$ and $a_i = m+1-i$ for $1 \le i \le m$ . Otherwise the inequality is strict.
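The inequality (and the equality case) can be verified exhaustively for small $m$ with exact rational arithmetic; this is a sanity sketch, not part of the proof:

```python
from fractions import Fraction
from itertools import combinations

# For every strictly decreasing chain a_1 = m > a_2 > ... > a_n > 0 (with a_{n+1} = 0),
# check sum (a_i - a_{i+1})/a_i <= H_m, with equality exactly for the full chain m, ..., 1.
def check(m):
    H_m = sum(Fraction(1, k) for k in range(1, m + 1))
    for r in range(m):
        for tail in combinations(range(1, m), r):
            a = [m] + sorted(tail, reverse=True) + [0]
            s = sum(Fraction(a[i] - a[i + 1], a[i]) for i in range(len(a) - 1))
            assert s <= H_m
            # equality exactly when the chain is m, m-1, ..., 1
            assert (s == H_m) == (r == m - 1)

for m in range(1, 8):
    check(m)
print("inequality verified exhaustively for m <= 7")
```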
|
|number-theory|integers|harmonic-numbers|
| 1
|
Subsequential limit
|
Hey, I need to find the limit superior and limit inferior of this sequence. I didn't manage to get rid of the $n^2$ to get a real number; I would be glad if you could help me solve the question. Thanks
|
As $n \to \infty,$ the cosine term tends to $\cos(-1).$ Therefore $$\left|a_n - n^2 \cos(-1) \sin\left(\frac{n\pi}{2} + \frac{\pi}{3}\right)\right| \to 0.$$ From here, you can show that the sequence $a_n$ takes both arbitrarily large positive values and arbitrarily large negative values so limsup is $\infty$ while liminf is $-\infty.$
|
|sequences-and-series|limits|
| 0
|
Why is $\lim_{x\to a}\frac{E(x)}{x-a} = 0$, instead of $\lim_{x\to a} E(x) = 0,$ used to explain why linear approximation works?
|
In my calculus textbook, the author made the following remark in the chapter about linear approximation: Let $f$ be a differentiable function with $f'$ continuous. Define $E(x)$ to be the error in the approximation of $f(x)$ using $f(a)$ ; that is: $$E(x) = f(x) - [f(a) + f'(a)(x-a)]$$ We have that: $$\lim_{x\to a}\frac{E(x)}{x-a} = \lim_{x\to a}\frac{f(x) - [f(a) + f'(a)(x-a)]}{x-a} = \lim_{x\to a}\frac{f(x) - f(a)}{x-a} - f'(a) = f'(a) - f'(a) = 0$$ This means that $E(x)$ approaches $0$ faster than $x-a$ does when $x \rightarrow a$ . So as $x$ gets near $a$ , the error in the linear approximation approaches $0$ faster than $x$ approaches $a$ . This is the formal explanation of what we mean when we say that the linear approximation is "close" to $f(x)$ "near" $x=a$ . In a physical sense, I take this to mean that $E(x)$ gets practically close to $0$ far before $x$ gets close to $a$ . More broadly, the distance between $E(x)$ and $0$ is far smaller than the distance between $x$ and $a$
|
If you define $$ E_t(x) = f(x) -[f(a) + t(x-a)] $$ then $E_t(x)$ will have limit $0$ at $a$ for every value of $t$ (assuming $f$ is continuous). All those lines defined by the expression in square brackets are in some sense linear approximations - all that means is that they pass through the point $(a,f(a))$ . Setting $t=f'(a)$ gives you the best linear approximation - the only one that has the right limit when you take $x-a$ into account.
|
|calculus|linear-approximation|
| 0
|
Graded ring generated by finitely many homogeneous elements of positive degree has Veronese subring finitely generated in degree one
|
Let $S=\bigoplus_{k\ge 0}S_k$ be a graded ring which is generated over $S_0$ by some homogeneous elements $f_1,\dotsc, f_r$ of degrees $d_1,\dotsc, d_r\ge 1$ , respectively. I want to show that there exists some integer $N>0$ such that $\bigoplus_{k\ge 0}S_{kN}$ is generated over $S_0$ by $S_N$ . This result is useful in algebraic geometry because it allows one to reduce oneself to the case in which $S$ is generated by $S_1$ over $S_0$ . However, I can't see how to solve this problem. It would be necessary and sufficient to find an $N$ which satisfies the following elementary condition: Let $k$ be a positive integer, $a_1,\dotsc, a_r\ge0$ integers such that $a_1d_1+\cdots+a_rd_r=kN$ . Then there exist integers $0\le b_i\le a_i$ such that $b_1d_1+\cdots+b_rd_r=N$ . Supposedly taking $N=rd_1\cdots d_r$ works, but I can't show that.
|
Let $g=\prod_{i=1}^r d_i$ and let $q_i=g/d_i$ . Write $a_i=k_iq_i+r_i$ for integers $k_i$ and $r_i$ with $0\leq k_i$ and $0\leq r_i < q_i$ . Then $$ kN = \sum a_id_i = \sum (k_iq_i+r_i)d_i = (\sum k_ig) + (\sum r_id_i),$$ so $\sum r_id_i$ is divisible by $g$ . On the other hand, as $r_i < q_i$ , we have $\sum r_id_i < \sum q_id_i = rg = N$ and therefore $\sum k_iq_id_i = \sum k_ig > (k-1)N$ . So we can pick $c_1,\cdots,c_r$ with $0\leq c_i\leq k_i$ and $\sum c_iq_id_i = \sum c_ig = N - \sum r_id_i$ , and choosing $b_i=c_iq_i+r_i$ , we win: $\sum b_id_i = N$ .
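The claim with $N = r\,d_1\cdots d_r$ can also be spot-checked by brute force on small cases; the following is an illustrative sketch (the sampled degree tuples and bounds are arbitrary):

```python
from itertools import product

def has_subsum(a, d, target):
    # does some tuple 0 <= b_i <= a_i satisfy sum b_i d_i = target?
    reachable = {0}
    for ai, di in zip(a, d):
        reachable = {s + b * di for s in reachable
                     for b in range(ai + 1) if s + b * di <= target}
    return target in reachable

def check(d, kmax=3):
    # with N = r * d_1 * ... * d_r, every solution of sum a_i d_i = kN
    # should contain a sub-tuple summing to N
    r, g = len(d), 1
    for di in d:
        g *= di
    N = r * g
    amax = kmax * N
    for a in product(*(range(amax // di + 1) for di in d)):
        total = sum(ai * di for ai, di in zip(a, d))
        if total % N == 0 and 0 < total <= kmax * N:
            assert has_subsum(a, d, N), (a, d, N)

check((2, 3))
check((2, 3, 5), kmax=1)
print("claim verified on the sampled cases")
```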
|
|combinatorics|elementary-number-theory|algebraic-geometry|integer-partitions|graded-rings|
| 1
|
Why is $\lim_{x\to a}\frac{E(x)}{x-a} = 0$, instead of $\lim_{x\to a} E(x) = 0,$ used to explain why linear approximation works?
|
In my calculus textbook, the author made the following remark in the chapter about linear approximation: Let $f$ be a differentiable function with $f'$ continuous. Define $E(x)$ to be the error in the approximation of $f(x)$ using $f(a)$ ; that is: $$E(x) = f(x) - [f(a) + f'(a)(x-a)]$$ We have that: $$\lim_{x\to a}\frac{E(x)}{x-a} = \lim_{x\to a}\frac{f(x) - [f(a) + f'(a)(x-a)]}{x-a} = \lim_{x\to a}\frac{f(x) - f(a)}{x-a} - f'(a) = f'(a) - f'(a) = 0$$ This means that $E(x)$ approaches $0$ faster than $x-a$ does when $x \rightarrow a$ . So as $x$ gets near $a$ , the error in the linear approximation approaches $0$ faster than $x$ approaches $a$ . This is the formal explanation of what we mean when we say that the linear approximation is "close" to $f(x)$ "near" $x=a$ . In a physical sense, I take this to mean that $E(x)$ gets practically close to $0$ far before $x$ gets close to $a$ . More broadly, the distance between $E(x)$ and $0$ is far smaller than the distance between $x$ and $a$
|
The idea is that $\lim_{x \to a}\frac{E(x)}{x - a} = 0$ says something stronger than just $\lim_{x \to a}E(x) = 0$ . The latter tells us that the error $E(x)$ goes to 0 as $x$ gets close to $a$ , but doesn't say how fast the limit converges. As an analogy to understand what I mean by "how fast", the sequence $1, \frac{1}{2}, \frac{1}{3},\frac{1}{4}, \dots$ goes to 0, but $1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \dots$ goes to 0 much faster (the corresponding terms are getting much closer to 0 much sooner in the latter sequence than in the former sequence). Formalizing this notion gives the idea of "order of convergence" (see https://en.wikipedia.org/wiki/Rate_of_convergence ). On the other hand, the limit $\lim_{x \to a}\frac{E(x)}{x - a} = 0$ is saying that the numerator is going to 0 faster than the denominator. As an example to see what I mean, consider the case where $a = 0$ and suppose we had an error function like $E(x) = \sqrt{x}$ . Then $\lim_{x \to 0}\frac{\sqrt{x}}{x} = \li
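A small numeric illustration of this distinction (a sketch, using $f(x)=e^x$ at $a=0$, where $E(x)=e^x-1-x \sim x^2/2$, against the $\sqrt{x}$ example):

```python
import math

# E(x) = f(x) - [f(0) + f'(0) x] for f = exp: the error vanishes faster than x itself.
def E(x):
    return math.exp(x) - 1 - x

for x in (1e-1, 1e-2, 1e-3):
    assert abs(E(x)) < abs(x)      # error smaller than the step...
    assert abs(E(x) / x) < x       # ...and E(x)/x is itself of order x, since E ~ x^2/2

# contrast: an "error" like sqrt(x) also tends to 0, but sqrt(x)/x = 1/sqrt(x) blows up
assert math.sqrt(1e-6) / 1e-6 > 100
```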
|
|calculus|linear-approximation|
| 1
|
Does $\text{Im}(f)\cong \text{Ker}(f^*)$ hold? Pontryagin dual
|
Let $M, M'$ be profinite groups. Let $M^*=\text{Hom}_{conti}(M,\Bbb{Q}/\Bbb{Z})$ be the Pontryagin dual of $M$ . Let $f:M\to M'$ be a homomorphism of abelian groups. Let $f^*: M'^*\to M^*$ be the map defined by $g\mapsto g\circ f$ . Suppose $M'^*\cong M'$ ; then $f^*: M'\cong M'^*\to M^*$ . Does $\text{Im}(f)\cong \text{Ker}(f^*)$ hold? (In other words, is $M\stackrel{f}{\to}M' \stackrel{f^*}{\to}M^*$ exact ?) I know $(\text{Im}(f))^* \cong \text{Im}(f^*)$ and $\text{Ker}(f^*) \cong (\text{Coker}(f))^*$ ; does the isomorphism $\text{Im}(f) \cong \text{Ker}(f^*)$ hold? I'm having difficulty relating the image of $f$ directly to the cokernel of $f$ .
|
No. From the text you seem to be confusing cokernel and kernel; I am answering the question you wrote in symbols, which has kernel, and not the question you wrote in words, which has cokernel. Pick some isomorphism from $M$ to its dual. Now if $f$ is the identity map from $M$ to itself, then $f^*$ is also the identity map. So the image of $f$ is everything, and the kernel of $f^*$ is nothing, and therefore they are not the same under the chosen isomorphism.
|
|abstract-algebra|group-theory|algebraic-number-theory|duality-theorems|
| 1
|
finding a bijection with conditions
|
$x$ and $y$ are variables in the set $\mathbb R$ . Under bijective functions $f$ and $g$ , $x$ and $y$ respectively go to $a$ and $b$ , which lie in the set $[-1, 1]$ . So $f(x)=a$ and $g(y)=b$ . For example, $f$ and $g$ can be the hyperbolic tangent function. Are there $f$ and $g$ that satisfy the following? The set $xy=1 ~(x, y>0)$ is equal to the set $a+b=1$ The set $x+y=0$ is equal to the set $a+b=0$
|
$$x + y = 0\implies f(x) + g(-x) = 0\implies g(x) = -f(-x).$$ $$xy = 1\implies f(x) + g(1/x) = 1\implies f(x) - f(-1/x) = 1.$$ This suggests $$f(x) = x/(x + 1)\hbox{ for } x \ge 0;$$ $$f(x) = x/(1 - x)\hbox{ for } x < 0;$$ $$g(x) = -f(-x).$$
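A quick numeric sanity sketch of the two functional equations, using the branch $f(x)=x/(x+1)$ for $x\ge0$, $f(x)=x/(1-x)$ for $x<0$, $g(x)=-f(-x)$ (the signs are chosen here so that $f(x)-f(-1/x)=1$ actually holds):

```python
def f(x):
    return x / (x + 1) if x >= 0 else x / (1 - x)

def g(x):
    return -f(-x)

for x in (0.3, 1.0, 2.5, 17.0):
    assert abs(f(x) + g(1 / x) - 1) < 1e-12   # the set xy = 1 (x, y > 0) maps to a + b = 1
for x in (-5.0, -0.2, 0.0, 0.7, 4.0):
    assert abs(f(x) + g(-x)) < 1e-12          # the set x + y = 0 maps to a + b = 0

# f maps R into (-1, 1)
assert all(-1 < f(x) < 1 for x in (-100.0, -1.0, 0.0, 1.0, 100.0))
```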
|
|functions|change-of-variable|
| 1
|
why is Terence Tao's definition 3.3 of set intersection so complicated?
|
Terence Tao's Analysis I 4th edition defines set intersection as follows: Given any non-empty set $I$ , and given an assignment of a set $A_{\alpha}$ to each $\alpha \in I$ , we can define the intersection $\bigcap_{\alpha \in I}A_{\alpha}$ by first choosing some element $\beta$ of $I$ (which we can do since $I$ is non-empty), and setting $$ \bigcap_{\alpha \in I} A_{\alpha} := \{ x \in A_{\beta} : x \in A_{\alpha} \text{ for all } \alpha \in I \} $$ which is a set by the axiom of specification. Question My question is - why does he define it in this complicated way? A simpler definition might be: The intersection of a family of sets $A_\alpha$ is the set whose elements exist in every set $A_\alpha$ . That is, $$ \bigcap_{\alpha \in I} A_{\alpha} := \{ x : x \in A_{\alpha} \text{ for all } \alpha \in I \} $$ Clearly I am missing something important. Thoughts I recall Tao discussing earlier in the book how subsets of sets are sets, but sets defined by logical statements can cause parado
|
In axiomatic set theory, it is not true that everything we can write down using the symbols { and } and the language of mathematical logic is a set. You have to prove something is a set by using the axioms. The most common way to do this is to prove it's a subset of some set you already know about, using Specification. That's what Tao is doing. Note that your definition isn't obviously a subset of anything.
|
|elementary-set-theory|
| 0
|
Integer solutions for a specific genus one, quartic plane curve
|
As part of a physics project studying renormalization group flows of scalar field theories, I've come across the following quartic plane curve in the variables $m$ and $n$ : $$ 36 + 16 m^4 - 108 n + 105 n^2 - 18 n^3 + n^4 - 8 m^3 (-18 + 17 n) + m^2 (420 - 468 n + 81 n^2) + m (216 - 408 n + 234 n^2 - 34 n^3)=0. $$ I would like to know the list of integer solutions to this equation. Searching all values of $m$ and $n$ up to a few thousand, I've found only (-2,-2), (-1,2), and (1,4). I know the curve has two nodal points, one at (-1,2) and another at (-1/5, 2/5) and thus should have genus one. Playing around with conics and lines through nodal points, I can generate some further rational solutions, for example (-88857223/103456502,-73582856/51728251), but my interest is in whether there exist further integer solutions. Using the Maple algcurves package, I can put this curve in Weierstrass form, but the coefficients involve roots of $z^2 - 10 z + 7$ , and so I'm not sure that's actually a
|
Through a Google search and talking with some number theorist friends, I came across the following two papers by Tzanakis and Stroeker: Solving elliptic diophantine equations by estimating linear forms in elliptic logarithms and Computing all integer solutions of a genus 1 equation . As often happens, my problem is simpler than the general case. As pointed out by Viktor Vaughn in the comments, the curve has rank one. Thus there is no need for the Lenstra–Lenstra–Lovász (LLL) algorithm toward the end of the demonstration. The lattice in question is only two-dimensional, and Gauss's old algorithm suffices. At any rate, if I have understood the procedure (a big IF in my case), I need to search through only a handful of points to demonstrate that I have in fact found all the integer solutions. The lattice of rational points can be written in the form $$ P = s T + q P_1 $$ where $s \in \{0, 1 \}$ and $q \in {\mathbb Z}$ . (According to Sage, I can take $T = (-34992, 0)$ and $P_1 = ( -11664, 7558
|
|number-theory|diophantine-equations|elliptic-curves|plane-curves|
| 0
|
Integral calculation with infinite series
|
So basically I was confronted by the following series: $\sum_{i=0}^\infty \frac{(-1)^i}{2(2i+1)(i+1)}$ I encountered this series while working on the following integral: $\int_0^\infty \frac{1-\cos x}{x^2e^x} \, dx $ I first expanded the Taylor series of $\cos x$ , and then the integral just came out as a sum of gamma functions. Then I evaluated each of them to end up with the previously mentioned infinite series. Now according to this video: https://youtu.be/DVFf-qT3swY?si=aCEeAKot6aZElyv1 this integral equals $ \frac{\pi}{4} - \frac{\log 2}{2} $ , so our sum is equal to $ \frac{\pi}{4} - \frac{\log 2}{2} $ . Is my evaluation right? And if someone has better ways to evaluate it, I would highly appreciate it being shared with me.
|
We can derive a closed expression for the series as \begin{align*} \color{blue}{\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)(2n+2)}}&=\sum_{n=0}^{\infty}(-1)^n\left(\frac{1}{2n+1}-\frac{1}{2n+2}\right)\\ &=\sum_{n=0}^{\infty}(-1)^n\int_{0}^{1}\left(x^{2n}-x^{2n+1}\right)\,dx\\ &=\int_{0}^{1}\sum_{n=0}^{\infty}(-1)^n\left(x^{2n}-x^{2n+1}\right)\,dx\\ &=\int_{0}^{1}\frac{1}{1+x^2}\,dx-\int_{0}^1\frac{x}{1+x^2}\,dx\\ &=\frac{\pi}{4}-\frac{1}{2}\int_{0}^1\frac{2x}{1+x^2}\,dx\\ &=\frac{\pi}{4}-\frac{1}{2}\log\left(1+x^2\right)\Big|_{0}^1\\ &\,\,\color{blue}{=\frac{\pi}{4}-\frac{1}{2}\log(2)} \end{align*} according to the claim.
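A quick numeric check of the closed form (a sketch; the alternating-series tail bound makes 200000 terms accurate to well under $10^{-10}$):

```python
import math

# partial sums of sum (-1)^n / ((2n+1)(2n+2)) against pi/4 - log(2)/2
target = math.pi / 4 - math.log(2) / 2
partial = sum((-1) ** n / ((2 * n + 1) * (2 * n + 2)) for n in range(200000))
assert abs(partial - target) < 1e-10
print("series matches pi/4 - log(2)/2")
```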
|
|calculus|
| 1
|
Evaluating analytically the improper integral $ \int_0^\infty \frac{x}{x^2 + \beta^2}\,J_2(\alpha x) \,\mathrm{d}x$ for $\alpha,\beta\in\mathbb{R}_+$
|
While I was elaborating on a physical problem involving fluids and interfaces, I came across the following integral, which seems at first glance very easy to solve, but it turned out that Maple unfortunately fails to provide correct expressions. I am thinking of using the residue theorem, but no notable progress has been made so far. Any help or suggestions are very much appreciated: $$ \int_0^\infty \frac{x}{\eta x^2 + \alpha} \, J_2(\rho x) \, \mathrm{d}x \, , $$ wherein $J_2$ stands for the Bessel function of the first kind of order 2. Here, $\alpha$ , $\rho$ , and $\eta$ are positive real numbers.
|
Yet another way to the already existing answers is by using the Mellin transform. First rewrite the integral as $$ \int_0^{\infty} \frac{t}{\beta^2+t^2}J_2(\alpha t)dt = \frac{1}{2} \int_0^{\infty} \frac{1}{1+t}J_2(2\sqrt{xt})dt = \frac{1}{2}\mathcal{H}(x) $$ with $x = (\alpha\beta/2)^2$ . With the Mellin transforms \begin{align} h_1(t) = \frac{1}{1+t} &\to \mathcal{H}_1^*(s) = \Gamma(s)\Gamma(1-s)\\ h_2(t) = J_v(2\sqrt{t}) &\to \mathcal{H}_2^*(s) = \frac{\Gamma(s+v/2)}{\Gamma(v/2+1-s)} \end{align} the Mellin transform of $\mathcal{H}(x)$ is given by \begin{align} \mathcal{H}^*(s) &= \mathcal{H}_1^*(1-s) \mathcal{H}_2^*(s) \\ &= x^{-s}\frac{\Gamma(s)\Gamma(s+v/2)\Gamma(1-s)}{\Gamma(v/2+1-s)} \\ &= x^{-s}\frac{\Gamma(s)\Gamma(s+1)\Gamma(1-s)}{\Gamma(2-s)} \end{align} with $v=2$ . There is a pole of order one for $s=0$ and poles of order two for $s=-k-1, k=0,1,2,...$ . We have $\text{Res}(\Gamma(s)\Gamma(s+1),s=0)=1$ whereas $$ \text{Res}(\Gamma(s)\Gamma(s+1),s=-k-1) = -\frac{1}{k!(k+1)!
|
|real-analysis|integration|improper-integrals|indefinite-integrals|residue-calculus|
| 0
|
If seven vertices of a hexahedron lie on a sphere, then so does the eighth vertex.
|
I'm trying to prove https://imomath.com/index.cgi?page=inversion (Problem 11) by projective geometry: If seven vertices of a (quadrilaterally-faced) hexahedron lie on a sphere, then so does the eighth vertex. Does it suffice to prove: If seven vertices of a (quadrilaterally-faced) hexahedron lie on a quadric surface , then so does the eighth vertex? The projective proposition (B) is easy to prove, much easier than the original (sphere) one (A). I believe B implies A in this case, but I don't think it is as easy as "A is a special case of B, so B implies A". Think about the converse of Pascal's theorem : ..., then 6 vertices lie on a conic. We can't change it to "..., then 6 vertices lie on a circle". The correct one should be: "..., and 5 vertices lie on a circle, then so does the 6th vertex". But this problem is slightly different from the converse of Pascal's theorem (the circle version): 5 vertices determine a conic, but 7 vertices are not enough to determine a quadric surface.
|
I think that OP's strategy can be made to work. The following is excerpted from Salmon, Analytical Geometry of Three Dimensions , pg 130, Article 131 . Given seven points common to a series of quadrics, then an eighth point common to the whole system is determined. For let $U, V, W$ be three quadrics, each of which passes through the seven points, then $U +\lambda V +\mu W$ may represent any quadric which passes through them; for the constants $\lambda,\mu$ may be so determined that the surface shall pass through any two other points, and may in this way be made to coincide with any given quadric through the seven points. But $U +\lambda V +\mu W$ represents a surface passing through all points common to $U, V, W,$ and since these intersect in eight points, it follows that there is a point, in addition to the seven given, which is common to the whole system of surfaces. Let's refer to the set of all quadrics $U +\lambda V +\mu W$ as the net $N$ generated by $U, V, W$ . The net $N$ , wh
|
|solution-verification|transformation|projective-geometry|solid-geometry|quadrics|
| 1
|
Factorization in a Euclidean ring
|
I have a doubt concerning Lemma 3.7.4 from Topics in Algebra by I. N. Herstein. The statement of the Lemma is: Let $R$ be a Euclidean ring. Then every element in $R$ is either a unit in $R$ or can be written as the product of a finite number of prime elements of $R$ . My question is that $0$ is an element of $R$ but it is neither a unit nor a product of finitely many prime elements of $R$ . So is there a typo and the Lemma holds only for all nonzero elements of $R$ ?
|
You seem to be right. Herstein's definition of a prime element on page 147 definitely implies that $0$ is not prime, and his proof by induction on $d(a)$ fails to account for the case $a = 0$ because he intentionally leaves $d(0)$ undefined.
|
|abstract-algebra|ring-theory|prime-factorization|euclidean-domain|
| 0
|
Term for initial condition defined as a surface
|
I am analyzing the following PDE $$P\frac{\partial^2P}{\partial x\partial y}=\frac{\partial P}{\partial x}\frac{\partial P}{\partial y}$$ which is a second-order non-linear partial differential equation. The "initial condition" for $P$ is $P(a-1,a)=1$ for all real $a$ . My question is about the term for this initial condition, because there are infinitely many of them, and together they look more like a surface that the function must pass through. Is there a term for this kind of initial condition?
|
This condition is usually referred to as an 'initial curve'. These infinitely many initial conditions form a curve in $\mathbb{R}^3$ (not a surface) that must be contained in the graph of the solution. This is the type of initial condition usually given for a PDE in two variables. More generally, for a PDE in $n$ variables, one gives a condition along a hypersurface of dimension $n-1$ . For an ODE (a PDE in a single variable) the hypersurface is just a point.
|
|partial-differential-equations|reference-request|
| 1
|
Free object is a free group in the category of groups
|
I have a question and would appreciate a clear answer. Firstly, I will provide an introduction regarding my understanding, and then I will ask my question. Let's begin with the definition of a universal morphism from an object $ X $ to a functor $F$ : Let $\mathcal{C}$ and $\mathcal{D}$ be two categories. A universal morphism from object $X\in ob(\mathcal{D}) $ to functor $F: \mathcal{C} \longrightarrow \mathcal{D} $ is a pair $ (A, f) $ , where $A$ is an object of $\mathcal{C}$ and $ f: X \longrightarrow F(A) $ is a morphism in $\mathcal{D}$ . This pair satisfies the universal property, i.e., for every $ B \in \text{ob}(\mathcal{C}) $ and any morphism $ g: X \longrightarrow F(B) $ , there exists a unique map $h: A \longrightarrow B$ such that the diagram commutes: $ g = F(h) \circ f $ . Now, if $(\mathcal{C}, F) $ is a concrete category and $f $ is the canonical injection, then the universal morphism $ (A, f) $ is called a free object on a set $ X $ . On the other hand, if $\mathcal{C
|
Two things before we start: You are misquoting the definition of free group. What you describe is the "free group on $x_1,\ldots,x_m$ ", not on " $x_1,\ldots,x_m,x_1^{-1},\ldots,x_m^{-1}$ ". The symbols $x_1^{-1},\ldots,x_m^{-1}$ are part of the construction, but they are not part of the definition. So here, the set $X$ is going to be $X=\{x_1,\ldots,x_n\}$ , and not the set $\{x_1,\ldots,x_m,x_1^{-1},\ldots,x_m^{-1}\}$ . I am going to change your notation, because the use of $F$ is of course natural for "functor", but in this case it conflicts with the usual notation for "free group" and "underlying set." So instead, I am going to take the "data" functor denoted as $\mathsf{U}$ , for "underlying set". Your category $\mathcal{C}$ is $\mathsf{Group}$ , and $\mathcal{D}$ is $\mathsf{Set}$ . So the particular instance of the "free object on $X$ " here would be: A universal morphism from the set $X$ to the "underlying set functor" $\mathsf{U}\colon\mathsf{Group}\to\mathsf{Set}$ (the forget
|
|group-theory|category-theory|free-groups|
| 1
|
Intersection of hypersphere with hyperplane
|
Suppose we are working in $\mathbb{R}^n$ , with $n$ a positive integer greater than or equal to $2$ . Also, let $S$ be a $m$ -dimensional hypersphere in $\mathbb{R}^n$ , where $1 \leq m \leq n-1$ , and let $P$ be a $k$ -dimensional hyperplane in $\mathbb{R}^n$ , where again $1 \leq k \leq n-1$ . For those who don't know, a hypersphere is the analogue of circles and spheres, and a hyperplane is the analogue of lines and planes. My question is, is the intersection of $S$ and $P$ either empty, a single point, or a $p$ -dimensional hypersphere, where $0 \leq p \leq \min(m,k-1)$ , and all cases are possible? Note, a $0$ -dimensional hypersphere is just a set of two distinct points.
|
Let's recall what happens in a space $E$ of dimension $3$ . Consider a sphere $S$ with center $0$ (we then consider the space $E_0$ pointed at $0$ ) and radius $R$ . Let $H$ be a plane (hyperplane of three-dimensional space) and $d$ be the distance from $0$ to $H$ . Two cases: $d\geq R$ : $H\cap S$ is then reduced to a point or empty; $d < R$ : let $O$ be the orthogonal projection of $0$ on $H$ ; according to the Pythagorean theorem, $$\forall M\in H\cap S,\quad OM^2=R^2-d^2$$ $H\cap S$ is then a sphere in $H$ of radius $\sqrt{R^2-d^2}$ , i.e. a circle. The reasoning in dimension 3 extends without any problem to any dimension, where we also have a Euclidean structure, the Pythagorean theorem, etc.
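The Pythagorean computation in the $d < R$ case can be illustrated numerically (a sketch with arbitrary values $R=5$, $d=3$, plane $z=d$ in $\mathbb{R}^3$):

```python
import math

R, d = 5.0, 3.0
r = math.sqrt(R * R - d * d)        # expected radius of the intersection circle
for k in range(8):
    t = 2 * math.pi * k / 8
    M = (r * math.cos(t), r * math.sin(t), d)          # point on the plane z = d
    # M lies on the sphere of radius R about the origin...
    assert abs(M[0] ** 2 + M[1] ** 2 + M[2] ** 2 - R * R) < 1e-9
    # ...at distance sqrt(R^2 - d^2) from O = (0, 0, d), the projection of 0 on the plane
    assert abs(math.hypot(M[0], M[1]) - r) < 1e-9
```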
|
|geometry|
| 1
|
Probabilities of fragments of a molecule
|
Let's consider a toy example of a chemical molecule which is composed of building blocks P/Q/R/S. The probabilities of breaking the bonds between the building blocks, derived from fragmentations of a large number of molecules built over a large alphabet of building blocks, are as follows: P~Q = 0.1, Q~R = 0.7, R~S = 0.3. The molecule PQRS can undergo breakage at any ONE bond between two building blocks; some part of the molecule will remain undissociated as well. What will be the probabilities of each fragment (and of the undissociated molecule), i.e. P, QRS, PQ, RS, PQR, S and PQRS, after the dissociation shown in the figure? I've used raw probabilities, which I have to normalize to sum up to one. Thanks
|
One way to make this question precise is to assume the breakage probabilities for each individual bond are as given (and that bonds break independently of one another), and then to calculate the breakage probabilities in $PQRS$ given that at most one breakage occurred. The probability of at most one breakage is $$ P[n_b \le 1] = P[PQRS]+P[P/QRS]+P[PQ/RS]+P[PQR/S]\\ =0.9\cdot0.3\cdot 0.7 + \mathbf{0.1}\cdot0.3\cdot0.7 + 0.9\cdot\mathbf{0.7}\cdot0.7+0.9\cdot0.3\cdot\mathbf{0.3} = 0.732. $$ (The probability of exactly two breakages is $$ P[n_b=2] = P[P/Q/RS]+P[P/QR/S]+P[PQ/R/S]\\=\mathbf{0.1}\cdot\mathbf{0.7}\cdot 0.7 + \mathbf{0.1}\cdot 0.3 \cdot \mathbf{0.3} + 0.9\cdot\mathbf{0.7}\cdot \mathbf{0.3}=0.247, $$ and the probability of all three is $P[n_b=3]=\mathbf{0.1}\cdot \mathbf{0.7} \cdot \mathbf{0.3} = 0.021$ ; these sum to $1$ as they should.) So, given that at most one breakage occurs, the probabilities are: $$ P[PQRS \;\vert\; n_b \le1]=\frac{P[PQRS]}{P[n_b \le 1]}=\frac{63}{244} \
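The arithmetic can be confirmed by enumerating all $2^3$ breakage patterns (a sketch of the computation above):

```python
from itertools import product

p = {"PQ": 0.1, "QR": 0.7, "RS": 0.3}            # bond breakage probabilities
bonds = list(p)

total_le1 = 0.0
outcomes = {}
for pattern in product([0, 1], repeat=3):         # 1 = bond breaks, 0 = bond survives
    prob = 1.0
    for b, broke in zip(bonds, pattern):
        prob *= p[b] if broke else 1 - p[b]
    outcomes[pattern] = prob
    if sum(pattern) <= 1:
        total_le1 += prob

assert abs(sum(outcomes.values()) - 1) < 1e-12    # all 8 patterns sum to 1
assert abs(total_le1 - 0.732) < 1e-12             # P[n_b <= 1]
# conditional probability of no breakage: P[PQRS | n_b <= 1] = 63/244
assert abs(outcomes[(0, 0, 0)] / total_le1 - 63 / 244) < 1e-12
```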
|
|probability|combinatorics|
| 1
|
Second structural equation on complex/holomorphic vector bundles
|
I'm going over Kobayashi's Differential Geometry of Complex Vector Bundles, and I see no mention of the famous second structural equation $$\Omega^i_j = d\omega^i_j + \sum_k \omega^i_k \wedge \omega^k_j,$$ where $\Omega = [\Omega^i_j]$ and $\omega=[\omega^i_j]$ are the curvature and connection matrices relative to a framed open set. I'm beginning to wonder, does this still hold in the case of a complex or holomorphic vector bundle? The proof of this requires only the Leibniz rule property of the connection, so I don't see why it would not hold. I'm just curious, since there is no mention of it, whether there is some nuance I'm missing here.
|
Smooth complex vector bundles are just special instances of real vector bundles, so anything that applies to real vector bundles also applies to complex vector bundles. For the second part, I'll assume you are talking about holomorphic connections on holomorphic vector bundles. Otherwise the first sentence of my post applies. If an object is holomorphic, then there is also an underlying smooth object, to which you can apply any differential geometric operation. The question is: does this operation take you outside the holomorphic category? In this case, the only actions that are involved are differentiating and wedging. The wedge of two holomorphic differential forms is holomorphic. The exterior derivative of a holomorphic differential $k$ -form is $$d\eta=(\partial+\bar{\partial})\eta = \partial\eta\in \Omega^{k+1,0}(M)$$ since $\bar{\partial}\eta=0$ . So if $\nabla$ is a holomorphic connection on a holomorphic vector bundle with local holomorphic $1$ -form $\omega$ , the structure eq
|
|differential-geometry|complex-geometry|
| 1
|
$\mathbb E [f(X)] = \mathbb E [f(Y)]$ for all $f \in \mathcal C_b (\mathbb R) \implies \mathbb P (X = Y) = 1.$
|
Let $X$ and $Y$ be two random variables on a probability measure space $(\Omega, \mathcal F, \mathbb P)$ such that $\mathbb E [f(X)] = \mathbb E [f(Y)]$ for all $f \in \mathcal C_b (\mathbb R)$ (bounded continuous functions). Can we conclude that $\mathbb P (X = Y) = 1$ i.e. $X = Y$ almost surely?
|
No. You can conclude they have the same distribution but not that they are almost surely equal. There are plenty of counter-examples. For instance $X$ and $-X$ for any centred Gaussian variable, or $U$ and $1-U$ for $U$ uniformly distributed on $(0,1)$ .
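The $U$, $1-U$ counter-example can be illustrated deterministically (a sketch, approximating expectations by a symmetric midpoint grid on $(0,1)$):

```python
import math

# U and 1-U have the same distribution, so E[f(U)] = E[f(1-U)] for every bounded
# continuous f, yet U = 1-U holds only at the single point 1/2.
grid = [(k + 0.5) / 1000 for k in range(1000)]    # symmetric midpoint grid on (0,1)

for f in (math.sin, math.exp, lambda x: x ** 3):
    lhs = sum(f(u) for u in grid) / len(grid)     # approximates E[f(U)]
    rhs = sum(f(1 - u) for u in grid) / len(grid) # approximates E[f(1-U)]
    assert abs(lhs - rhs) < 1e-9                  # equal: the grid is symmetric about 1/2

# but the two "random variables" disagree at every grid point
assert sum(1 for u in grid if abs(u - (1 - u)) < 1e-12) == 0
```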
|
|probability-theory|cumulative-distribution-functions|
| 1
|
Free object is a free group in the category of groups
|
I have a question and would appreciate a clear answer. Firstly, I will provide an introduction regarding my understanding, and then I will ask my question. Let's begin with the definition of a universal morphism from an object $ X $ to a functor $F$ : Let $\mathcal{C}$ and $\mathcal{D}$ be two categories. A universal morphism from object $X\in ob(\mathcal{D}) $ to functor $F: \mathcal{C} \longrightarrow \mathcal{D} $ is a pair $ (A, f) $ , where $A$ is an object of $\mathcal{C}$ and $ f: X \longrightarrow F(A) $ is a morphism in $\mathcal{D}$ . This pair satisfies the universal property, i.e., for every $ B \in \text{ob}(\mathcal{C}) $ and any morphism $ g: X \longrightarrow F(B) $ , there exists a unique map $h: A \longrightarrow B$ such that the diagram commutes: $ g = F(h) \circ f $ . Now, if $(\mathcal{C}, F) $ is a concrete category and $f $ is the canonical injection, then the universal morphism $ (A, f) $ is called a free object on a set $ X $ . On the other hand, if $\mathcal{C
|
Perhaps what the OP meant to say is that one way to construct a free group on a set $X = \{x_1, \ldots, x_n\}$ is to Start by taking a free monoid $M$ on a set $\{x_1, \ldots, x_n, x_1^{-1}, \ldots, x_n^{-1}\}$ (where the $x_i^{-1}$ are new symbols that we adjoin to $X$ ), whose elements are formal "words" in letters drawn from the new set, multiplied by concatenation, and then Pass to the monoid whose elements are equivalence classes of words, taking the equivalence relation to be the least equivalence relation containing pairs $(x_i x_i^{-1}, e)$ and $(x_i^{-1} x_i, e)$ where $e$ is the empty word, and such that whenever $(a, b)$ and $(c, d)$ belong to the equivalence relation, then so does $(ac, bd)$ . Equivalence classes $[a]$ are multiplied by the rule $[a] [b] = [ab]$ . The resulting monoid is a group (define $[w]^{-1}$ to be the equivalence class of the word obtained by writing the word $w$ in reverse and replacing each of its letters by its formal inverse). It takes a few l
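The reduction step of this construction can be sketched in code. The encoding below is my own illustrative choice (a letter is a pair `(symbol, ±1)`, and the empty list plays the role of the empty word $e$); a single stack pass cancels adjacent formal inverses and returns the normal form of a word's equivalence class.

```python
def reduce_word(word):
    """Cancel adjacent formal inverses in a word over {x_i, x_i^(-1)}.

    A letter is a pair (symbol, exp) with exp in {+1, -1}; the empty
    list plays the role of the empty word e.
    """
    stack = []
    for sym, exp in word:
        if stack and stack[-1] == (sym, -exp):
            stack.pop()              # x x^(-1) -> e  (or x^(-1) x -> e)
        else:
            stack.append((sym, exp))
    return stack

def invert(word):
    """Formal inverse: reverse the word and invert each letter."""
    return [(sym, -exp) for sym, exp in reversed(word)]
```

Then `reduce_word(w + invert(w)) == []` for every word `w`, mirroring the group law $[w][w]^{-1}=[e]$.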
|
|group-theory|category-theory|free-groups|
| 0
|
Geodesic Wasserstein space => the base space is also geodesic?
|
Let $(Z,d)$ be a Polish space, and for $p\geq 1$ , consider a metric space $(W_p,d_{W^p})$ defined by The Wasserstein Space $\begin{align}W_p = \{\mu|\mu\textrm{ is a Borel probability measure on Z such that } \int_{Z}d(z_0,z)^p\mu(dz)<\infty\}\end{align}$ and The Wasserstein distance $\begin{align} d_{W^p}(\mu,\nu) = \inf_{\pi}\left(\int_{Z\times Z}d(z,z')^p\pi (dz\times dz')\right)^{1/p}\end{align}$ where $\pi$ is a coupling between $\mu$ and $\nu$ i.e. a probability measure on $Z\times Z$ such that $\pi(A\times Z)=\mu(A), \pi(Z\times B)=\nu(B)$ for any measurable $A,B\subset Z$ . Now suppose that $(W_p,d_{W^p})$ is a geodesic space i.e. for any Borel probability measures $\mu,\nu$ , there is a curve $\gamma:[0,1]\to W_p$ s.t. $\gamma(0)=\mu,\gamma(1)=\nu$ and $d_{W^p}(\gamma(s),\gamma(t))=|s-t|d_{W^p}(\mu,\nu)$ . Is it true that $(Z,d)$ is also a geodesic space? My Attempt: This should be true since $Z\ni z \mapsto \delta_{z} \in W_p$ , here $\delta_z$ is the Dirac measure, is an isometric embedding, but
|
The implication usually goes the other way (see Corollary 7.2.2 of Villani's big yellow book: https://link.springer.com/book/10.1007/978-3-540-71050-9 ). For what you want, I am not sure it is true. Consider the two point space, $Z= \{-1,1\}$ equipped with the discrete metric (which is Polish as desired). I think it's not possible to find a geodesic between $-1$ and $1$ , but the curve $(1-t)\delta_{-1} + t\delta_1, t\in [0,1]$ is a geodesic between the corresponding Dirac measures in $W_1$ . To see this note that the (optimal?) plan between $(1-t)\delta_{-1} + t\delta_1$ and $(1-s)\delta_{-1} + s\delta_1$ for $s<t$ is given by $(1-t)\delta_{(-1,-1)} + (t-s)\delta_{(-1,1)} + s \delta_{(1,1)}$ at least for $W_1$ and this satisfies your condition for being a geodesic. Note that, since linear combinations of Diracs are the only possible measures, we can construct geodesics between all measures in a similarly straightforward manner. Presumably, a similar construction is possible for $p>1$ , but
|
|metric-spaces|geodesic|optimal-transport|
| 1
|
Consider the ring $R = \mathbb{Z}/12\mathbb{Z} = \{[0]_{12}, [1]_{12}, ..., [11]_{12}\}$, and let $I$ be its ideal
|
I have solved a ring theory question but I am not sure whether my solution is correct. Can anyone guide me if there is a mistake in it? Question Consider the ring $R = \mathbb{Z}/12\mathbb{Z} = \{[0]_{12}, [1]_{12}, ..., [11]_{12}\}$ , and let $I$ be its ideal $I = \{[0]_{12}, [3]_{12}, [6]_{12}, [9]_{12}\}.$ (a) List explicitly all the cosets of $I$ in $R.$ (b) Write down the addition and multiplication tables for $R/I$ . (c) Prove that $R/I \cong \mathbb{Z}/3\mathbb{Z}$ , by giving an explicit isomorphism (there is no need to prove formally that it is an isomorphism). Solution Part (a) \begin{align*} 0 + I &= \{0, 3, 6, 9\} \\ 1 + I &= \{1, 4, 7, 10\} \\ 2 + I &= \{2, 5, 8, 11\} \\ \end{align*} Part (b) Addition Table for $R/I$ : \begin{array}{c|ccc} + & [0] & [1] & [2] \\ \hline [0] & [0] & [1] & [2] \\ [1] & [1] & [2] & [0] \\ [2] & [2] & [0] & [1] \\ \end{array} Multiplication Table for $R/I$ : \begin{array}{c|ccc} \times & [0] & [1] & [2] \\ \hline [0] & [0]
|
The ideal $I=\{0,3,6,9\}=3\Bbb Z/12\Bbb Z$ is cyclic of order $4$ (so, as an additive group, $I\cong \Bbb Z_4$). Recognizing that makes it easy to see, via the third isomorphism theorem, that $R/I\cong (\Bbb Z/12\Bbb Z)/(3\Bbb Z/12\Bbb Z)\cong \Bbb Z/3\Bbb Z.$ From the point of view of modules, this is an instance of the fact that each submodule of a cyclic module is cyclic, etc.; rings themselves are not usually called cyclic. But I am out of my element.
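Parts (a) and (c) can also be double-checked mechanically. A quick sketch (the dictionary `phi`, sending the coset of $k$ to $k \bmod 3$, is the explicit isomorphism asked for in part (c)):

```python
R = set(range(12))                       # Z/12Z, with representatives 0..11
I = {0, 3, 6, 9}                         # the ideal

# Part (a): the distinct cosets k + I.
cosets = {frozenset((k + i) % 12 for i in I) for k in R}

# Part (c): the coset of k maps to k mod 3; this is well defined because
# every element of a given coset has the same residue mod 3.
phi = {c: min(c) % 3 for c in cosets}
```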
|
|abstract-algebra|ring-theory|
| 0
|
Any $\ell^2$-closed subspace of $\ell^2 \cap \ell^1$ is finite-dimensional
|
Let $X$ be a closed subspace of $\ell^2$ such that $X$ is contained in $\ell^1$ . It is easy to show that the inclusion operator $J \colon X \hookrightarrow \ell^1$ is closed, hence, by the closed graph theorem $J$ is bounded. Is it true that $X$ is automatically finite dimensional? I would really appreciate any hints.
|
This is true. Indeed, composing the orthogonal projection $\ell^2 \to X$ with $J$ , then restricting to $\ell^1$ (recall that $\ell^1 \subset \ell^2$ and $\|\cdot\|_1 \geq \|\cdot\|_2$ ) shows that there exists a bounded projection $\ell^1 \to J(X)$ . Thus, $J(X)$ is a closed and complemented subspace of $\ell^1$ . All such subspaces are either finite-dimensional or isomorphic to $\ell^1$ (see https://mathworld.wolfram.com/ComplementarySubspaceProblem.html ). The latter case is impossible since $J$ is an isomorphism between $X$ and $J(X)$ , but $\ell^1$ is not isomorphic to a Hilbert space, whilst $X$ is a Hilbert space. Hence, $J(X)$ and thus $X$ must be finite-dimensional.
|
|functional-analysis|closed-graph|
| 0
|
Solving $K + \frac{N}{F} + \frac{1}{MF^N}= 1$ for F
|
I'm trying to solve the following equation for $F$: $K + \frac{N}{F} + \frac{1}{MF^N}= 1$ . With the following as symbolic context (but not needed for simplification): $M = \prod_{i\in L}{i}$ , $K = \sum_{i\in L}\frac{1}{i}$ , $L = \{4, 5, 6, \dots\}$ . I think I've simplified it to the following, but I have no idea whether this is right or whether I'm on the right path: $ F = \left(\left(1-K-\frac{N}{F}\right)M\right)^{-\frac{1}{N}}.$
|
If you multiply both sides of your equation by $F^N$ and rearrange it, you will find that you are looking for the roots of a polynomial of degree $N$ . When $N=2$ it's a quadratic, and easy. There are (complicated) formulas in radicals for $N=3$ and $4$ , but by the Abel–Ruffini theorem there is no such general formula for higher degrees. In an application you may need numerical methods.
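In practice a numerical root-finder is the way to go. A minimal sketch (the sample values $K=0.5$, $M=2$, $N=3$ are arbitrary, chosen only so that a positive root exists, and `solve_F` is a hypothetical helper name): for $K<1$ the left-hand side $g(F)=K+N/F+1/(MF^N)$ decreases from $+\infty$ to $K$ on $F>0$, so bisection finds the unique positive solution.

```python
def solve_F(K, N, M, lo=1e-9, hi=1e9, tol=1e-12):
    """Bisection for K + N/F + 1/(M*F**N) = 1 on F > 0 (assumes K < 1)."""
    g = lambda F: K + N / F + 1.0 / (M * F**N) - 1.0
    # g(lo) > 0 and g(hi) < 0, so a root lies in (lo, hi).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

F = solve_F(K=0.5, N=3, M=2)
```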
|
|algebra-precalculus|solution-verification|
| 1
|
About non-examples of rings
|
I was thinking about this question, and I know that it's a very foolish question, but I couldn't convince myself. By definition $R$ is a ring with two operations, addition and multiplication $+,\cdot$ . I know that $S_n , n\ge3$ is a non-abelian group under composition $\circ$ . I believe that we can't talk about a ring structure on it, as addition and multiplication are not defined on it. So, automatically it is not a ring. Exactly the same reason is valid for $D_n$ , so it is not a ring. For example, select $n=3$ ; then what is $r+s$ ? Usual addition is not defined on $D_3$ , so we can't talk about a ring structure on it. Is that the reason why they are not rings?
|
The group $D_n$ can never be the underlying multiplicative monoid of a ring structure. In fact, no group with more than one element can be the underlying multiplicative monoid of a ring structure. The reason is that every element in a group has an inverse, but the zero element of a nonzero ring cannot have a (multiplicative) inverse. (The only exception is the zero ring, whose multiplicative monoid is the trivial group.)
|
|abstract-algebra|ring-theory|
| 0
|
About non-examples of rings
|
I was thinking about this question, and I know that it's a very foolish question, but I couldn't convince myself. By definition $R$ is a ring with two operations, addition and multiplication $+,\cdot$ . I know that $S_n , n\ge3$ is a non-abelian group under composition $\circ$ . I believe that we can't talk about a ring structure on it, as addition and multiplication are not defined on it. So, automatically it is not a ring. Exactly the same reason is valid for $D_n$ , so it is not a ring. For example, select $n=3$ ; then what is $r+s$ ? Usual addition is not defined on $D_3$ , so we can't talk about a ring structure on it. Is that the reason why they are not rings?
|
Let $(G,\cdot)$ be a finite group that is not abelian. The question is: can I define an operation $+\colon G\times G\to G$ such that $(G,+,\cdot)$ is a ring? Suppose you can do it and call $R$ this "ringification" of your starting group $G$ . Since $(G,\cdot)$ is a group, every element admits a multiplicative inverse, so $(R,+,\cdot)$ is not just a ring but what is called a division ring (or skew-field). Since every finite division ring is a field (Wedderburn's little theorem), the operation $\cdot$ would have to be commutative, which is false since $G$ is not abelian. So you cannot turn any finite non-abelian group into a ring (in the sense above). This provides an answer for $S_n$ and $D_n$ with $n\geq3$ , since they are finite non-abelian groups.
|
|abstract-algebra|ring-theory|
| 0
|
About non-examples of rings
|
I was thinking about this question, and I know that it's a very foolish question, but I couldn't convince myself. By definition $R$ is a ring with two operations, addition and multiplication $+,\cdot$ . I know that $S_n , n\ge3$ is a non-abelian group under composition $\circ$ . I believe that we can't talk about a ring structure on it, as addition and multiplication are not defined on it. So, automatically it is not a ring. Exactly the same reason is valid for $D_n$ , so it is not a ring. For example, select $n=3$ ; then what is $r+s$ ? Usual addition is not defined on $D_3$ , so we can't talk about a ring structure on it. Is that the reason why they are not rings?
|
A ring is a set $A$ together with two binary operations, usually (but not necessarily) denoted $+$ and $\cdot$ , satisfying certain properties (which I won't list). The group $(S_3,\circ)$ is not a ring, because it is not a set together with two operations. Just like a cow is not a ring, because it is not a set together with two operations. It doesn't even get in the door. Usually, when we ask if something is or is not a ring, the "something" has to be (1) a set; and (2) two operations on the set. Then we can check whether that set and those two operations satisfy the required properties. If it is not a set, then it's not a ring, because it doesn't even qualify for the possibility of being a ring. If you are not given a set and two binary operations on it, with a specification of which one will play the role of $+$ and which one will play the role of $\cdot$ , then you don't have a ring, again because it doesn't even qualify for the possibility of being a ring. Only after you have been
|
|abstract-algebra|ring-theory|
| 1
|
Why doesn't the linear regression preserve the standard deviation?
|
If we model $Y = \beta X$ , we can estimate $\beta$ to minimize $$\sum (Y_i - \beta X_i)^2$$ Taking the derivative and setting it to $0$, we get $\sum 2\beta X_i^2 - 2Y_iX_i = 0 \implies \beta = \frac{\sum Y_i X_i}{\sum X_i^2}.$ Why do the fitted values from our best fit for $Y = \beta X$ not even have the same variance as the $Y_i$?
|
There are two issues here. One is that with simple regression (allowing an intercept) you would expect the variance of the fitted values to be less than that of the observations unless there is a perfect fit: indeed this is what originally caused it to be called regression. The other is that forcing the regression line through the origin can lead to peculiar results if that is not the actual relationship. As an illustration, consider linear regression with the observations $(1,25)$ , $(2,21)$ , $(3,23)$ , $(4,24)$ , $(5,22)$ . If you allow an intercept, then the fitted line is $\hat y_i = 23.9 -0.3x_i$ and so the variance of the $\hat y_i$ is $R^2=0.09$ times the variance of the $y_i$ , largely because the linear relationship appears to be weak. (See the points and regression line in black below). To get the variance of the $\hat y_i$ up to that of the $y_i$ you would need something like $\hat y_i = 20 +x_i$ or $\hat y_i = 26 - x_i$ , both of which would be worse (shown in pink belo
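The numbers quoted here are easy to reproduce. A pure-Python sketch of the least-squares fit on the five points (for simple regression, $\operatorname{var}(\hat y)/\operatorname{var}(y)$ equals $R^2$):

```python
xs = [1, 2, 3, 4, 5]
ys = [25, 21, 23, 24, 22]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
syy = sum((y - ybar) ** 2 for y in ys)

slope = sxy / sxx                  # -0.3
intercept = ybar - slope * xbar    # 23.9

yhat = [intercept + slope * x for x in xs]
# Variance ratio var(yhat)/var(y); for simple regression this is R^2.
var_ratio = sum((yh - ybar) ** 2 for yh in yhat) / syy
```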
|
|statistics|regression|
| 0
|
Generalization of the first Isomorphism Theorem
|
In a question that I recently posted I defined a property of some concrete categories $(*)$ , that if it holds, then: A subobject of $X$ is uniquely determined by the underlying set (subset of $U(X)$ ), same for quotient object (namely in this case, determined by the equivalence relation on $U(X)$ ). So now I will be referring only to concrete categories that satisfy this property, and I'm about to define yet another property. But first, motivation. In the category $\textbf{Rng}$ ( rings without identity ) a morphism is a function that preserves addition and multiplication. But in the category $\textbf{Ring}$ , a morphism is a rng-morphism, that also takes identity to identity. So I was wondering, what is the advantage in adding this condition? Let's look at a ring as a set that is a group with respect to addition and a monoid with respect to multiplication. So we can make the question simpler: why do we define a monoid morphism as a function that preserves the operation and takes iden
|
The first isomorphism theorem can be generalised as an axiom. It's one way to define an Abelian category, actually: $\mathscr{A}$ is Abelian iff it is additive, has all kernels and cokernels, and all morphisms are strict (the first isomorphism theorem). $\phi:a\to b$ is said to be strict if the induced map $\mathrm{coker}\ker\phi\to\ker\mathrm{coker}\,\phi$ is an isomorphism. That is, if $\mathrm{Coim}(\phi)\cong\mathrm{Im}(\phi)$ , i.e. $a/\mathrm{Ker}\,\phi\cong\mathrm{Im}(\phi)$ , via the natural map induced by $\phi$ . We don't care if they're abstractly isomorphic; we need them isomorphic in the canonical way.
|
|abstract-algebra|general-topology|group-theory|category-theory|
| 0
|
Sufficient conditions for finitely supported measures being dense
|
Let $(X,\mathcal{B})$ be a Hausdorff topological space with its Borel $\sigma$ -algebra. What are some general conditions we could impose on $X$ so that finitely supported measures (i.e. finite affine combinations of dirac measures) are weakly dense in the space $\mathcal{M}(X)$ of Borel probability measures of $X$ ? Also, it is not hard to prove that if $X$ is a Polish space, then for each Borel probability measure $\mu$ in $X$ there is a sequence $(\mu_n)_n$ of finitely supported measures weakly convergent to $\mu$ , but could someone provide a reference so that I can just quote it? Context: At some point in my research I need to weakly approximate measures in the Pontryagin dual of a countable discrete group by sequences finitely supported measures. So the result for polish spaces is enough for me, but I was wondering how generally can we approximate measures by finitely supported ones.
|
I don't think there are any real conditions you have to impose. It holds if $X$ is just a topological space and $M(X)$ is the set of Baire measures on $X$ equipped with its weak topology. This is discussed in Chapter 8 of Bogachev Volume II ( https://link.springer.com/book/10.1007/978-3-540-34514-5 ). The relevant things you need to look at are : Definition 8.1.2 and Example 8.1.6 (i). The only thing you need to enforce is that the Baire sigma algebra is the same as the Borel sigma algebra (and so Baire and Borel measures are the same). This is discussed in Chapter 6, Section 6.3 of the same book. The relevant results are Proposition 6.3.4 and Corollary 6.3.5. It is true for example if $X$ is metric or, more generally, if $X$ is perfectly normal. So Polish is definitely fine.
|
|functional-analysis|measure-theory|reference-request|weak-convergence|borel-measures|
| 1
|
Let $\mathcal{F}$ be a $\sigma$-algebra and let $B \in \mathcal{F}$. Show that $\mathcal{G} = \{A \cap B: A \in \mathcal{F}\}$ is a $\sigma-$algebra.
|
Let $\mathcal{F}$ be a $\sigma$-algebra and let $B \in \mathcal{F}$. Show that $\mathcal{G} = \{A \cap B: A \in \mathcal{F}\}$ is a $\sigma$-algebra of subsets of $B$. To show that $\mathcal{G}$ is a $\sigma$-algebra, we need to show that: (a) $\emptyset \in \mathcal{G}$. This is clear, because $\emptyset \in \mathcal{F}$ and $\emptyset = B \cap \emptyset$. Therefore $ \emptyset \in \mathcal{G}$. (b) If $A_1, A_2, \ldots \in\mathcal{F}$, then $$\bigcup_i (A_i \cap B) = (\bigcup_i A_i)\cap B \in \mathcal{G}.$$ (c) If $A \in \mathcal{F}$, then $A^c \in \mathcal{F}$, and I need to show that $$A^c \cap B \in \mathcal{G}.$$ First, since $A ,B \in \mathcal{F}$, we have $A \cap B \in \mathcal{F}$ and $B -(A\cap B) \in \mathcal{F}$. Also, $$B - (A \cap B) = A^c\cap B,$$ so that $A^c \cap B \in \mathcal{G}$ as required. Is this reasoning correct, or am I missing something?
|
Relative to $B$, $(A \cap B)^c=B-(A \cap B)=B \cap(A \cap B)^c=B\cap (A^c \cup B^c)=(B\cap A^c)\cup \emptyset=B\cap A^c$ , so since $A$ is in $\mathcal F$, $A^c$ is also in $\mathcal F$, and hence $A^c\cap B\in\mathcal G$. QED.
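The set identity being used, $B-(A\cap B)=B\cap A^c$, can also be sanity-checked exhaustively on a small universe. An illustrative sketch:

```python
from itertools import chain, combinations

U = {1, 2, 3, 4}   # a small universe standing in for Omega

def subsets(s):
    """All subsets of s (the classic powerset recipe)."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# For every pair (A, B): the complement of A∩B relative to B equals B∩A^c.
ok = all(B - (A & B) == B & (U - A)
         for A in subsets(U) for B in subsets(U))
```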
|
|probability-theory|measure-theory|proof-verification|
| 0
|
I've found that the area between ≈30° north and south contains exactly half of Earth's surface area. Is there an exact number out there?
|
This doesn't just apply to Earth (I first discovered it when calculating the area of the Celestial Sphere), but to any perfect sphere. Think of it this way: The area contained between 80°N and 90°N is far less than the area contained between 0°N and 10°N, despite both being bands 10° wide. Anyway, I've found that the area enclosed between 29.85799272 degrees (those are the digits that I'm 1000% sure of, got them via desmos trial and error $\text{*}$ ) north and south of any sphere contains exactly half of the sphere's total surface area. My question is: does this number have a nicer form, such as a fraction involving radicals? Or is it just a simple digit? And if so, has anyone else previously noticed this? *I can't be arsed to list all the math I used to find this result, but I guarantee you it's 1000% accurate. Desmos only showed the first 3 digits, so I zoomed in as much as I could to see when the line denoting the area intersected y=50 (denoting 50%), and came to the aforementioned d
|
Some helpful formulae for curved surface areas on a perfect sphere with radius $R$ (not the Earth): Each spherical / polar cap with $60^\circ$ polar angle: $2\pi R^2\left(1-\cos 60^\circ\right) = \pi R ^2$ ; The spherical segment between $\pm 30^\circ$ latitudes, with distance $h = 2R\sin 30^\circ = R$ between them: $2\pi R\cdot h = 2\pi R^2$ ; The whole sphere : $4\pi R^2$ . So the required fraction of the surface area between $\pm 30^\circ$ latitudes is $$1-\frac{2\cdot \overbrace{\pi R^2}^{\text{Cap}}}{4\pi R^2} = \frac{\overbrace{2\pi R^2}^{\text{Segment}}}{4\pi R^2} = \frac 12$$
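Equivalently, Archimedes' hat-box theorem says the fraction of a sphere's surface between latitudes $\pm\varphi$ is $\sin\varphi$, so the half-area band satisfies $\sin\varphi = 1/2$, i.e. $\varphi = 30^\circ$ exactly. A quick numerical check of the bookkeeping above (taking $R=1$):

```python
from math import pi, cos, sin, radians, asin, degrees

R = 1.0
cap = 2 * pi * R**2 * (1 - cos(radians(60)))       # each polar cap (60° polar angle)
segment = 2 * pi * R * (2 * R * sin(radians(30)))  # band between ±30° latitude
sphere = 4 * pi * R**2

fraction = segment / sphere            # fraction of the surface in the band
half_area_latitude = degrees(asin(0.5))  # exact latitude for the half-area band
```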
|
|coordinate-systems|
| 1
|
Curvature of a special curve
|
In the following problem I am asked to find the curvature of a curve. The problem is the following: Given the curve $\alpha(t), \alpha: \mathbb{R} \rightarrow \mathbb{R}^3$ , which is parametrised by arc-length, we define the curve $\beta(t) = \alpha'(t)$ . Show that the curvature of $\beta$ is $\kappa_{\beta} = \sqrt{1+\frac{\tau^2}{\kappa_{\alpha}^2}}$ , where $\tau$ is the torsion of $\alpha$ and $\kappa_{\alpha}$ its curvature. My attempt is the following: I know that $\kappa_{\beta} = \frac{||\beta' \times \beta''||}{||\beta'||^3}, ||\beta' \times \beta''|| = (||\beta'||^2||\beta''||^2 - (\beta'\cdot\beta'')^2)^\frac{1}{2}$ . However, since the torsion of $\alpha$ is $\tau = N'\cdot B$ , I don't know how the torsion could appear here. I also noticed that $N = T_{\beta}$ , where $N$ is the normal vector of $\alpha$ and $T_{\beta}$ the unit tangent vector of $\beta$ . I would be grateful if somebody could give me a hint in order to solve the exercise.
|
Let me write out the details for this one. We will use $T, N, B$ for the Frenet frame of $\alpha$ , and I will write the curvature and torsion of $\alpha$ as $k$ and $\tau$ without subscripts. Then by the structure equation, \begin{align} \alpha'' &= k N,\\ \alpha''' &= (kN)' = k'N + kN' = k'N + k(-kT + \tau B). \end{align} Then $\|\beta'\|=\|\alpha''\| = k$ . Also $$ \|\beta'\times\beta''\| = \|\alpha''\times\alpha'''\| = \|kN\times (k'N+k(-kT+\tau B))\|= k^2\|k B + \tau T\| = k^2\sqrt{k^2+\tau^2}, $$ by $N\times N=0, N\times T=-B, N\times B=T$ . Putting these together, we see $$ k_\beta = \frac{\|\beta'\times\beta''\|}{\|\beta'\|^3} = \frac{k^2\sqrt{k^2+\tau^2}}{k^3} = \frac{\sqrt{k^2+\tau^2}}{k} = \sqrt{1+\frac{\tau_\alpha^2}{k_\alpha^2}}. $$
|
|differential-geometry|curves|curvature|
| 1
|
Rings that appear as quotients $B/I$ of subrings$B \subseteq F$ of fields $F$ and for $I \subseteq B$ ideals
|
What are the rings $A$ that appear as quotients $B/I$ of subrings $B \subseteq F$ of fields $F$ and for $I \subseteq B$ ideals? For each $A$ , give an explicit formula for a ring $B$ and a field $F$ . For any ring homomorphism $\varphi: B \to F$ for some ring $B$ and field $F$ , we know that $B/\ker \varphi \cong \textrm{Im }F$ . If $A$ is a domain then we may let $I = \{ 0\}$ , $B = A$ , and $F$ be the field of fractions of $A$ . I am wondering if you could tell me (1) if my reasoning so far is correct, and (2) how to proceed from here.
|
In case $A$ is an integral domain, what you did works. In general, any commutative ring is a quotient of an integral domain. For example, for each $a\in A$ define a variable $x_a$ , and let $B$ be the very large polynomial ring $B=\mathbb{Z}[\{x_a\}_{a\in A}]$ . There is a clear surjective homomorphism $B\to A$ , and so $A$ is a quotient of $B$ . Now take $F$ to be the field of fractions of $B$ .
|
|ring-theory|quotient-spaces|integral-domain|
| 1
|
How does first order logic influence everyday mathematics?
|
I have read through a book about first order logic. It was interesting. However, when I read through undergraduate math texts it’s unclear what system they are working in. They don’t specify anything at all about axioms or logical axioms. I’m so confused now. What is the point of learning about first order logic if it seems that nobody cares / knows very much about it? Analysis 1 by Tao talks about ZFC and Peano arithmetic, but that’s about all I’ve seen. Is it worth learning model theory? Where do I start with that?
|
It depends. Set theory, logic, and model theory are their own fields. There are interesting applications of set theory, logic, and model theory to other fields of mathematics, but also plenty of mathematicians work in their own fields without much knowledge of them, only basic knowledge more or less required for all fields of mathematics. In general I think it’s a good idea to know the basics, like what the axioms of ZFC are, what cardinals and ordinals are, what logical quantifiers mean, stuff like that. But anything more than that should be considered as just another field of mathematics. Whether it is worth learning or not depends on your interests and needs.
|
|foundations|
| 0
|
Intersection of two hyperbolas in polar coordinates.
|
I want to find the intersections of two hyperbolas in polar coordinates. One of their foci coincides, we use this as the pole. (The right focus of the left hyperbola is the same as the left focus of the right hyperbola, as shown on the picture ). I'm trying to solve the following equation to get their intesections: $$r = \frac{p}{1\mp e \cos \varphi}$$ $$\frac{p_{0}}{1 - e_{0} \cos \varphi} = \frac{p_{1}}{1 + e_{1} \cos \varphi}$$ Hyperbolas can intersect in at most 4 points, but this equation only has two solutions. Where am I going wrong, what am I misunderstanding? How do I find the 4 possible intersections?
|
The equations have 4 solutions. Radius vectors have to be considered separately. Solving separately for the four polar angles, we have symmetrical inverse-cosine functions for each combination: $$r_0=\frac{p_{0}}{1 - e_{0} \cos \varphi} ,~ r_1=\frac{p_{1}}{1 - e_{1} \cos \varphi},$$ $$ \cos \varphi_{0+1}=\frac{1-p_0/r_0}{e_0},~ \cos \varphi_{1+0}=\frac{1-p_1/r_1}{e_1}.$$
|
|geometry|conic-sections|polar-coordinates|
| 0
|
Expected Maximum of 10 Balls Selected From Urn
|
There are 20 balls in an urn labeled from 1 to 20. You randomly pick 10 balls out of this urn. What is the expected maximum value of the 10 balls you picked out? I saw answers for this question on the forum, but I am confused why my answer is not correct. I approached this problem by determining the density function of the maximum, some $Z = Max(X_1,...,X_{10})$ . $P(Z=z) = \frac{z^{10}-(z-1)^{10}}{20^{10}}$ Then, I found the expectation by summing the product of the probabilities with values of z ranging from 1 to 20. The answer I obtained is 18.64, whilst the real answer is 210/11. I know that my approach is basically assuming sampling with replacement although in the question it is implied as without. However,I thought this shouldn't matter due to linearity of expectation? Please let me know!
|
I know that my approach is basically assuming sampling with replacement although in the question it is implied as without. However, I thought this shouldn't matter due to linearity of expectation? I am not sure how you connected this problem with linearity of expectation. But the expected maximum would certainly be different depending on whether sampling is done with or without replacement. So you cannot assume sampling with replacement and expect to get the correct answer. If sampling is done with replacement, the maximum for a trial can be any of the numbers from $1$ to $20$ . For example, if you pick the ball numbered $1$ each time, the maximum would be $1$ . So, $P(Z=z) > 0$ for $1 \le z \le 20$ . On the other hand, if sampling is done without replacement, the maximum for a trial must be $10$ or higher. So, $P(Z=z) = 0$ for $1 \le z \le 9$ .
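Both models can be computed exactly, which makes the gap concrete. A sketch using exact rational arithmetic (for sampling without replacement, $P(Z=z)=\binom{z-1}{9}\big/\binom{20}{10}$, since the other nine balls must come from $\{1,\dots,z-1\}$):

```python
from fractions import Fraction
from math import comb

n, m = 20, 10

# Without replacement: maximum of a random 10-subset of {1,...,20}.
E_without = sum(Fraction(z * comb(z - 1, m - 1), comb(n, m))
                for z in range(m, n + 1))

# With replacement (the OP's model): P(Z = z) = (z^m - (z-1)^m) / n^m.
E_with = sum(Fraction(z * (z**m - (z - 1)**m), n**m)
             for z in range(1, n + 1))
```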
|
|probability|expected-value|
| 0
|
Number of Cubic Residues Modulo $p$
|
Let $p$ be prime, $p \equiv 1 \pmod{3}$ . I wish to show that there are $(p-1)/3$ (non-zero) cubic residues (mod $p$ ), for which I am having some difficulty. There is a response here by userabc, which I am not certain how to follow. I can easily prove that $-3 \in Q_{p}$ and show that $x^3 - 1 \equiv 0 \pmod{p}$ admits three solutions. (In fact, if $-3 \equiv c^2$ , then these three solutions are given by $1, 2^{-1}(c-1)$ and $-2^{-1}(c-1)$ ). How next am I to make use of the identity $4(x^2 + x + 1) = (2x+1)^2+3$ ? And how do I conclude from here that there are $(p-1)/3$ cubic residues?
|
When $x, y \not\equiv 0 \bmod p$ , we have $x^3 \equiv y^3 \bmod p$ if and only if $(x/y)^3 \equiv 1 \bmod p$ , so $x \equiv yz \bmod p$ where $z^3 \equiv 1 \bmod p$ . Thus the cubing function on nonzero numbers mod $p$ is $k$ -to- $1$ where $k$ is the number of solutions to $z^3 \equiv 1 \bmod p$ . This is similar to squaring on nonzero numbers mod $p$ being $2$ -to- $1$ when $p > 2$ since $x^2 \equiv y^2 \bmod p$ if and only if $x \equiv \pm y \bmod p$ , where $\pm 1 \bmod p$ are the solutions to $z^2 \equiv 1 \bmod p$ , and there are two such $z$ when $p$ is an odd prime. So you need to show there are three solutions to $z^3 \equiv 1 \bmod p$ when $p \equiv 1 \bmod 3$ . Since $$ z^3 - 1 = (z-1)(z^2+z+1), $$ the solutions to $z^3 \equiv 1 \bmod p$ , rewritten as $z^3 - 1 \equiv 0 \bmod p$ , are $1 \bmod p$ and the solutions to $z^2 + z + 1 \equiv 0 \bmod p$ (which doesn't have $1 \bmod p$ as a solution since $p > 3$ ). The quadratic formula is valid mod $p$ when $p > 2$ : there are t
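The $k$-to-$1$ behaviour is easy to confirm computationally; a quick sketch checking that cubing is $3$-to-$1$ on nonzero residues for a few primes $p\equiv 1 \pmod 3$, so there are $(p-1)/3$ cubic residues:

```python
def cubic_residues(p):
    """Set of nonzero cubic residues mod an odd prime p."""
    return {pow(x, 3, p) for x in range(1, p)}

# A few primes congruent to 1 mod 3.
counts = {p: len(cubic_residues(p)) for p in (7, 13, 31, 61, 103)}

# Number of cube roots of unity mod 13 (should be three: 1, 3, 9).
roots_of_unity_13 = sum(1 for z in range(1, 13) if pow(z, 3, 13) == 1)
```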
|
|elementary-number-theory|modular-arithmetic|
| 0
|
Diffeomorphisms between $ \mathbb{R}^{p, q} $ and $ \mathbb{R}^{p+q} $
|
Suppose a Euclidean space $ \mathbb{R}^{n} $ with a quadratic form $ Q: \mathbb{R}^{n} \rightarrow \mathbb{R} $ such that $$ \forall X \in \mathbb{R}^n, \quad Q(X) = \sum_{i=1}^p X_i^2 - \sum_{j=p+1}^{p+q} X_j^2 $$ with $ p + q = n $ . Then this space is the pseudo-Euclidean space $ \mathbb{R}^{p,q} $ of signature $ (p,q) $ , which can be made a pseudo-Riemannian manifold by taking the bilinear form associated to $ Q $ as a pseudo-Riemannian metric. Is there any way to prove $ \mathbb{R}^{p,q} \simeq \mathbb{R}^n $ as manifolds? My intuition tells me this may be possible, but there's the extra step of requiring that the quadratic form is preserved, right? I am really lost.
|
You’re overthinking things. As sets, topological spaces, $C^k$ manifolds (for all $k\in \{0,1,\dots,\infty,\omega\}$ ), and vector spaces, we have $\Bbb{R}^{p,q}:=\Bbb{R}^{p+q}$ , by definition. The only difference between the two is when you decide to endow each with a geometry, namely a semi-inner product, of signature $(p,q)$ in the first case and positive-definite in the second case. In other words, for the purposes of diffeomorphisms, the identity map is a diffeomorphism, and you should completely ignore the roles of the respective quadratic forms.
|
|differential-geometry|manifolds|riemannian-geometry|quadratic-forms|
| 0
|
Why is this inequality in Brent and Cohens paper on odd perfect numbers true?
|
In Brent and Cohen's paper about odd perfect numbers, they show this inequality: $N \ge p^a\sigma(p^a) \gt p ^ {2a}$ where $a$ is even. I understand the second half of this: $p^a\sigma(p^a) \gt p ^ {2a}$ . The component times the sum of its divisors is always larger than the square of the component, since the sum of a component's divisors is always larger than the component itself. However, I do not understand the first half of this: $N \ge p^a\sigma(p^a)$ . I believe this is saying that for our hypothetical odd perfect number $N$ , it must be larger than one of its even-exponent components times the sum of divisors of that component. I do not understand this. Say, for example, N was $4.96*10^{13} = 5^9 * 71^4$ . $71^8 = 6.45 * 10^{14}$ , so in this case, $N \lt p^{2a}$ . Obviously this N is not an odd perfect number, so what extra criteria do odd perfect numbers need to satisfy to make the equation shown in the paper true? Thank you!
|
The key observation is that $\sigma(p^a)=p^a+p^{a-1}+ \cdots+p+1$ is relatively prime to $p^a$ . Note that $\sigma(p^a)\mid 2N$ , since $\sigma(p^a)\mid\sigma(N)$ and $\sigma(N)=2N$ ; moreover, with $a$ even and $p$ odd, $\sigma(p^a)$ is a sum of an odd number of odd terms and hence is odd, so in fact $\sigma(p^a)\mid N$ . Since $p$ is a prime which does not divide $\sigma(p^a)$ , one must have $\sigma(p^a)p^a\mid N$ , which implies the desired result.
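The coprimality observation, $\gcd(p^a,\sigma(p^a))=1$ (which holds because $\sigma(p^a)\equiv 1 \pmod p$), can be checked numerically. A small illustrative sketch with arbitrary odd primes and even exponents:

```python
from math import gcd

def sigma_prime_power(p, a):
    """sigma(p^a) = 1 + p + p^2 + ... + p^a."""
    return sum(p**k for k in range(a + 1))

# Arbitrary (prime, even exponent) pairs for illustration.
pairs = [(3, 2), (5, 4), (7, 6), (11, 2), (13, 8)]
coprime = all(gcd(p**a, sigma_prime_power(p, a)) == 1 for p, a in pairs)
```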
|
|perfect-numbers|
| 1
|
Relation between points on $\partial \mathbb{D}$ and $\partial \mathbb{H}$
|
I have proved that $\phi \in \operatorname{Aut}(\mathbb{D})$ has a fixed point in $\mathbb{D}$ iff its correspondent from $\operatorname{Aut}(\mathbb{H})$ has a fixed point in $\mathbb{H}$ , where $\mathbb{D}$ is the open unit disk, and $\mathbb{H}$ is the upper half-plane. But I don't know how to show that $\phi \in \operatorname{Aut}(\mathbb{D})$ has a fixed point in $\partial \mathbb{D}$ iff its correspondent from $\operatorname{Aut}(\mathbb{H})$ has a fixed point in $\partial \mathbb{H} \cup \{\infty\}$ . I don't even know if it's true. Is there some sort of theorem that says that homeomorphisms preserve the number of fixed points, or something? Because there is a homeomorphism between $\overline{\mathbb{D}}$ and $\overline{\mathbb{H}} \cup \{\infty\}$ . Thanks! $\textbf{EDIT!}$ Ok. So what I know is that $F:\mathbb{H} \to \mathbb{D}, F(z) = \frac{i-z}{i+z}$ is a biholomorphism. I need to show that this extends to a homeomorphism $F : \overline{\mathbb{H}} \to \overline{\mathbb{
|
Suppose $a, b, c, d$ are complex numbers satisfying $ad-bc\ne 0$ . Then one defines the linear-fractional transformation $\gamma$ of the Riemann sphere $S^2= {\mathbb C}\cup \{\infty\}$ by the formula: $$ \gamma(z)= \frac{az+b}{cz+d}, z\in {\mathbb C}\setminus \{-d/c\}, $$ $$ \gamma(-d/c)=\infty,$$ $$ \gamma(\infty)= \frac{a}{c}, $$ unless $c=0$ in which case $\gamma(\infty)=\infty$ . The next lemma and its corollary are routine calculations, I will give only partial proofs, leaving the rest to you: Lemma. 1. Every linear-fractional transformation of the Riemann sphere is continuous. 2. Every linear-fractional transformation of the Riemann sphere is invertible and its inverse is again a linear-fractional transformation. 3. The composition of any pair of linear-fractional transformations is again a linear-fractional transformation. Proof. 1. Continuity of $\gamma$ at points $\ne \infty$ and $\ne -d/c$ is a direct consequence of the limit laws in complex analysis. $$ \lim_{z\to \infty} \gamma(
|
|real-analysis|complex-analysis|
| 1
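A quick numeric sanity check (not a proof) of the boundary correspondence: conjugating an element of $\operatorname{Aut}(\mathbb{H})$ by the Cayley transform sends its boundary fixed points to fixed points on $\partial\mathbb{D}$. The sample map $z \mapsto 2z$ is an illustrative choice, not from the original post.

```python
# Sanity check: if phi in Aut(H) fixes a boundary point x, then
# psi = F o phi o F^{-1} in Aut(D) fixes F(x) on the unit circle.
# The choice phi(z) = 2z (fixing 0 and infinity) is just an example.

def F(z):
    """Cayley transform H -> D."""
    return (1j - z) / (1j + z)

def F_inv(w):
    """Inverse Cayley transform D -> H (for w != -1)."""
    return 1j * (1 - w) / (1 + w)

def phi(z):
    """A sample element of Aut(H): z -> 2z."""
    return 2 * z

def psi(w):
    """Corresponding element of Aut(D)."""
    return F(phi(F_inv(w)))

w0 = F(0)  # image of the boundary fixed point 0 of phi; lies on |w| = 1
```

(The other fixed point, $\infty$, maps to $F(\infty) = -1$, which one checks separately since $F^{-1}$ has a pole there.)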
|
Symmetric linear least-squares solution with known diagonal elements
|
Given matrices $\pmb{A}\in\mathbb{R}^{p\times n}$ and $\pmb{B}\in\mathbb{R}^{p\times n}$ with $p>n$ , I need to solve the following linear system for a symmetric matrix $\pmb{X}\in\mathbb{R}^{p\times p}$ : $$\pmb{X}\pmb{A}=\pmb{B}.$$ Based on this post , it seems that $\pmb{X}$ has an explicit formula. My issue is that all diagonal elements of the matrix $\pmb{X}$ are known to be $-1$ . How does one effectively solve for $\pmb{X}$ under this constraint? Additionally, what about the special case when $\pmb{B}=\pmb{0}$ ?
|
$ \def\k{\otimes} \def\h{\odot} \def\o{{\tt1}} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\vc#1{\op{vec}\LR{#1}} \def\diag#1{\op{diag}\LR{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\Unvec#1{\op{vec}^{-1}\LR{#1}} \def\qif{\quad\iff\quad} \def\qiq{\quad\implies\quad} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ Matrix equations can be vectorized $$\eqalign{ \vc{ABC} = \LR{C^T\k A}\vc B \\ }$$ A Commutation matrix is an orthogonal permutation matrix that transforms between the vectorizations of a matrix and its transpose, i.e. $$\eqalign{ a = \vc{A} = K^T\vc{A^T} \qif \vc{A^T} = K \vc{A} \\ }$$ Note that if $A=A^T$ then $\,\c{Ka=a}$ . Construct the $X$ matrix with the required properties using the all-ones matrix $J$ , the identity matrix $I$ , the Hadamard product $\h$ , and an arbitrary unconstrained matrix $U$ $$\eqalign{ F &= J-I &\qiq &X = \LR{F\h U} - I \\ f &= \vc F &\qiq & \vc{F\h U} = f\h u \\ }$$
|
|linear-algebra|matrix-equations|least-squares|symmetric-matrices|quadratic-programming|
| 1
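A small NumPy sketch of one way to carry out this idea: following the answer's parametrization $X = (F \odot U) - I$, expand $X$ over the basis $E_{ij} = e_ie_j^T + e_je_i^T$ ($i<j$) of symmetric zero-diagonal matrices, so the diagonal is $-1$ by construction, and solve the remaining unconstrained least-squares problem. The sizes and random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 3
A = rng.standard_normal((p, n))
B = rng.standard_normal((p, n))

# X = S - I with S symmetric and zero-diagonal => diag(X) = -1 automatically.
I = np.eye(p)
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]

# Column for (i, j) is vec(E_ij @ A); then  X A = B  <=>  M c = vec(B + A).
M = np.column_stack([
    ((np.outer(I[:, i], I[:, j]) + np.outer(I[:, j], I[:, i])) @ A).ravel()
    for (i, j) in pairs
])
c, *_ = np.linalg.lstsq(M, (B + A).ravel(), rcond=None)

X = -np.eye(p)
for coef, (i, j) in zip(c, pairs):
    X[i, j] = X[j, i] = coef
```

For the special case $B = 0$ the same system simply has right-hand side $\operatorname{vec}(A)$.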
|
Understanding induction in constructing an injection from $\mathbb{N}$ to an infinite set
|
I've been working on a proof that states if $X$ is an infinite set, then there exists an injection $f: \mathbb{N} \to X$ . Here is the proof I came up with: Let $X$ be an infinite set. We will construct the injection $f: \mathbb{N} \to X$ inductively. For the base case, define $f_1: \{1\} \to X$ by setting $f_1(1)=x_1$ for some $x_1 \in X$ . Now, assume that for some $n \in \mathbb{N}$ , we have an injective function $f_n: \{1, \ldots, n\} \to X$ . We can extend $f_n$ to $f_{n+1}: \{1, \ldots, n+1\} \to X$ as follows: $$ f_{n+1}(k) = \begin{cases} f_n(k) & \text{if } k \in \{1, \ldots, n\} \\ x_{n+1} & \text{if } k = n+1 \end{cases} $$ where $x_{n+1}$ is an element of $X$ not in the image of $f_n$ . Such an element exists because $X$ is infinite and the image of $f_n$ is finite. By construction, $f_{n+1}$ is injective. Therefore, by the principle of induction, there exists an injection $f: \mathbb{N} \to X$ . While working on this proof, I realized that I don't really understand Induction.
|
What you have written constructs a function $f_n : \{1, \ldots, n\} \to X$ for each $n \in \mathbb{N}$ , with the additional properties that each $f_n$ is injective and for each $n$ we have $f_{n+1}|_{\{1, \ldots, n\}} = f_n$ . There's still one more step to do to get a single function $f : \mathbb{N} \to X$ . Here it is. Define $f : \mathbb{N} \to X$ by $f(n) = f_n(n)$ . Let's prove that $f$ is injective. Suppose that we have $n$ and $m$ such that $n \neq m$ . We'll assume $n < m$ , as the other case is essentially identical. Since we are assuming $n < m$ , $n$ is in the domain of $f_m$ , and $f_m(n) = f_m|_{\{1, \ldots, n\}}(n) = f_n(n)$ . We therefore have $f(n) = f_n(n) = f_m(n)$ , while $f(m) = f_m(m)$ . Since $n \neq m$ and $f_m$ is injective, we have $f_m(n) \neq f_m(m)$ and hence $f(n) \neq f(m)$ . If you are comfortable thinking of functions as sets of ordered pairs, this construction is simply $f = \bigcup_{n=1}^{\infty}f_n$ . To be slightly more general, the fact that each $f_{n+1}$ extends $f_n$ is exactly what guarantees that this union is a well-defined function.
|
|induction|
| 1
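A finite Python analogue may make the last step concrete: each pass extends the previous function by one fresh element (possible as long as elements remain, mirroring the infiniteness of $X$), and the resulting dict is exactly the union $\bigcup_n f_n$ with $f(n) = f_n(n)$. The set `X` here is a finite stand-in for an infinite set.

```python
def build_injection(X, N):
    """Finite analogue of the induction: extend f_n to f_{n+1} by choosing an
    element of X not yet in the image. The dict returned is the union of the
    partial functions f_1 ⊂ f_2 ⊂ ..., i.e. f(n) = f_n(n)."""
    f, used = {}, set()
    for n in range(1, N + 1):
        x_new = next(x for x in X if x not in used)  # exists while |X| >= n
        f[n] = x_new
        used.add(x_new)
    return f

f = build_injection(range(100), 30)  # injective on {1, ..., 30}
```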
|
Difficulty in proving $\text{Im}f\cap\text{Ker}f\subseteqq\text{Im}(f\circ f-3f)$
|
I am now having difficulty in trying to prove a relation involving a linear mapping. $V$ is a finite-dimensional vector space over $\mathbb{C}$ , and $f:V\to V$ is a linear mapping. The given condition is $$\text{dim}\ \text{Im}f\cap\text{Ker}f=\text{dim}\ \text{Im}(f\circ f-3f)=1$$ I was asked to show the inclusion $$\text{Im}f\cap\text{Ker}f\subseteqq\text{Im}(f\circ f-3f)$$ First I take an arbitrary element $v\in\text{Im}f\cap\text{Ker}f$ ; then there exists $u\in V$ such that $f(u)=v$ and $f(v)=0$ . It is also known that this $u$ belongs to $\text{Ker}f^2$ . Next I want to show there exists $w\in V$ such that $(f\circ f-3f)(w)=v$ , so that $v\in\text{Im}(f\circ f-3f)$ and the inclusion is proved.
|
The question is solved and I will post the answer here to close it out. For each vector $v\in\text{Im}f\cap\text{Ker}f$ , there exists some $u\in V$ with $f(u)=v$ and $f(v)=f^2(u)=0$ . Hence $u\in\text{Ker}f^2$ . Letting $w=-\frac{1}{3}u\in V$ , we get $(f\circ f-3f)(w)=(f\circ f-3f)(-\frac{1}{3}u)=0+f(u)=v$ , so the existence of $w$ is proved, hence $\text{Im}f\cap\text{Ker}f\subseteqq\text{Im}(f\circ f-3f)$ . As hinted by @Matija, the opposite inclusion can also be shown: it is immediate, since the two spaces have equal dimension and are therefore the same subspace.
|
|linear-transformations|
| 1
|
Birth-death : Always more than 1 bifurcation?
|
Say I have a (smooth) function $f : \mathbb{R}^n \to \mathbb{R}$ , and a critical point $x$ (i.e., $f'(x) = 0$ ). I call this point degenerate if $\det \text{Hess}_x f = 0$ (so, equivalently, if the kernel of the Hessian at $x$ is non-trivial). An example is $x = 0$ for $f : \mathbb{R} \to \mathbb{R} : x \mapsto x^3$ . Then, if I perturb my function $f$ generically, I should observe the following phenomenon, called a "birth-death bifurcation": my degenerate critical point will either bifurcate into multiple non-degenerate critical points (birth), or die. (In the picture, I drew the bifurcations $f(x) \pm \varepsilon x$ for $\varepsilon > 0$ . We can easily show that, for $f(x) = x^3$ , $f - \varepsilon x$ has two critical points near $0$ , while $f + \varepsilon x$ has none.) My question is then the following: is it standard knowledge that my degenerate critical point will either die, or bifurcate into strictly more than one critical point? (For a generic bifurcation.) If so, where can I find a reference?
|
Eric's answer is correct. The $x^3$ cannot be perturbed into a function with one non-degenerate singularity, essentially because then that point would be either a global max or a global min, which does not match the behavior of $x^3$ far away from $0$ ("at infinity"). For the function $x^4$ we don't have such an obstruction. In general, any function whose critical points lie in a compact region can be perturbed into one with non-degenerate critical points (this is studied in Morse theory, and the resulting function is called Morse). Moreover, if the original function had isolated singularities, then one can associate to each such point a Hopf index (for non-degenerate points it's $(-1)^k$ , where $k$ is the Morse index, the number of negative eigenvalues of the Hessian), and if the perturbation is through functions fixed at infinity (or more generally, transverse to the boundary of a region enclosing all singularities at all times), the Poincaré–Hopf theorem says the sum of all indexes stays constant.
|
|reference-request|dynamical-systems|perturbation-theory|morse-theory|bifurcation|
| 1
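For the $x^3$ example in the question, the birth/death of critical points under the two perturbations can be checked numerically; this is only an illustration of that specific example.

```python
import numpy as np

eps = 1e-2
# Critical points solve f'(x) = 0.
#   f(x) = x^3 - eps*x  =>  3x^2 - eps = 0: two real roots ("birth").
#   f(x) = x^3 + eps*x  =>  3x^2 + eps = 0: no real roots ("death").
roots_birth = np.roots([3.0, 0.0, -eps])
roots_death = np.roots([3.0, 0.0, eps])

n_real_birth = int(np.sum(np.abs(np.imag(roots_birth)) < 1e-12))
n_real_death = int(np.sum(np.abs(np.imag(roots_death)) < 1e-12))
```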
|
Prove that a function is Riemannian integrable and its Riemannian integral is 0
|
Let $f:[a,b]\to \mathbb{R}$ be such that $\forall x_{0}\in[a,b]$ , $\lim_{x\to x_0}f(x)=0$ . Prove: $f$ is Riemann integrable on $[a,b]$ with $\int_a^bf(x)dx=0$ . Using the completeness theorem of real numbers, I have proved that $\forall \varepsilon>0$ there exist at most finitely many $x\in[a,b]$ such that $f(x)\geq\varepsilon$ . As the next step, should I prove that $f$ is continuous almost everywhere on $[a,b]$ ? I hope someone can help me. Thanks!
|
I feel like you could just compare with $\int_a^b |f|$ . For this integral, you have an obvious bound on the lower Darboux integral, and you have basically already shown that upper Darboux sums can be taken less than arbitrary $\epsilon > 0$ .
|
|integration|
| 1
|
Colouring a sequence
|
Define a 2-coloring of $\{0,1\}^*$ to be a function $\chi:\{0,1\}^* \rightarrow \{\mathrm{red},\mathrm{blue}\}$ ; e.g. if $\chi(1101)=\mathrm{red}$ , we say that $1101$ is red in the coloring $\chi$ . Prove: For every 2-coloring $\chi$ of $\{0,1\}^*$ and every (infinite) binary sequence $S \in \{0,1\}^\infty$ there is a sequence $$w_{0},w_{1},w_{2},\ldots$$ of strings $w_{n} \in \{0,1\}^*$ such that: i) $S= w_{0}w_{1}w_{2}\ldots$ and ii) $w_{1},w_{2},\ldots$ are all the same color. (The string $w_{0}$ may not be this color.) I have tried making arguments that you can split any sequence and define each 'half' as the colorings. This then fails for an infinite sequence. I am not sure how to prove such a result. I'm not sure if you can split it into cases where the infinite sequence is repeating or non-repeating, etc. Thanks for any help
|
The following very nice solution is not mine; apparently it is by Philipp Czerner , I found it in a discussion on HN. Let $\color{blue}{\textrm{B}} = \{i; \, \chi(S[i \dots j]) = \color{blue}{\text{blue}} \; \forall j \geq i \}$ , that is, $\color{blue}{\textrm{B}}$ is the set of all indexes $i$ such that every substring of $S$ starting at $i$ is blue. There are two cases: $\color{blue}{\textrm{B}}$ is infinite: then we have an infinite sequence $i_1 < i_2 < i_3 < \cdots$ of elements of $\color{blue}{\textrm{B}}$ . Let $w_k = S[i_k \dots \left(i_{k+1}-1\right)]$ ; obviously $\chi\left(w_k\right) = \color{blue}{\text{blue}}$ for all $k \geq 1$ . $\color{blue}{\textrm{B}}$ is finite: let $k$ be the largest element of $\color{blue}{\textrm{B}}$ (or $k=0$ if $\color{blue}{\textrm{B}}$ is empty); for any index $i > k$ there is always a substring $w = S[i \dots j]$ (i.e. a substring starting at $i$ ) with $\chi(w) = \color{red}{\text{red}}$ , so starting at position $k+1$ we can repeatedly split off such red substrings $w_1, w_2, \ldots$ , with $w_0$ the prefix of $S$ before position $k+1$ .
|
|combinatorics|coloring|
| 0
|
Moduli $m \ge 2$ such that $\{ a^a \bmod m : 1 \le a \le m \text{ and } \gcd(a, m) = 1 \}$ forms a reduced residue system
|
In AtCoder Regular Contest 172, the problem E. Last 9 Digits asks to solve $n^n \equiv X \pmod{10^9}$ for $n$ (the smallest positive $n$ , more precisely), given $X$ coprime to $10^9$ . The intended solution is: first solve it modulo $10^2$ , then lift the solution modulo $10^3$ , then $10^4$ , …, and finally $10^9$ . The lifting works because $\{ a^a \bmod m : 1 \le a \le m \text{ and } \gcd(a, m) = 1 \}$ forms a reduced residue system for any $m$ that is a power of $10$ . In other words, $a \mapsto a^a \bmod m$ is a bijective mapping of the reduced residue system modulo $10^k$ onto itself. My question is: can we find a pattern describing all integers $m$ with this property? The sequence doesn't seem to be on the OEIS. Update: It seems that the sequence coincides with A124240 (numbers $n$ such that $\lambda(n) \mid n$ , where $\lambda$ is Carmichael's lambda function), with only one difference: $10$ is not in A124240. How to prove it?
|
The importance of $\lambda(n)$ dividing $n$ : The unit group $(\mathbb{Z}/n)^{\times}$ is a $\mathbb{Z}$ -module and $(\lambda(n))$ is, by definition, the annihilator of the module. In general, there is no canonical way to raise elements of the unit group to an exponent in $(\mathbb{Z}/n)$ . Of course, you can choose a set of representatives for $(\mathbb{Z}/n)$ , but this isn't particularly interesting. However, if $\lambda(n) \mid n$ , then the ideal $(n)$ is contained in the annihilator $(\lambda(n))$ , so you have an induced $(\mathbb{Z}/n)$ -module structure on $(\mathbb{Z}/n)^{\times}$ , which allows you to canonically raise elements of $(\mathbb{Z}/n)^{\times}$ to an exponent in $(\mathbb{Z}/n)$ . Claim : If $\lambda(n) \mid n$ , the set-theoretic map on $(\mathbb{Z}/n)^{\times}$ given by $a \mapsto a^a$ is a bijection. Converting to an Equivalent Problem : Let $a, b \in (\mathbb{Z}/n)^{\times}$ and $c = a^{-1}b$ . Observe: $$a^a = b^b \iff a^a = (ac)^{ac} \iff a^{a(1-c)} = c^{ac}$$
|
|elementary-number-theory|modular-arithmetic|
| 1
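The two conditions in the update can be compared by brute force for small moduli; a sketch (the search range and function names are arbitrary):

```python
from math import gcd

def aa_is_bijection(m):
    """Does a -> a^a (mod m) permute the reduced residues mod m?"""
    units = [a for a in range(1, m + 1) if gcd(a, m) == 1]
    return {pow(a, a, m) for a in units} == {a % m for a in units}

def carmichael(m):
    """Carmichael's lambda by brute force: least e with a^e = 1 for all units."""
    units = [a for a in range(1, m + 1) if gcd(a, m) == 1]
    e = 1
    while any(pow(a, e, m) != 1 for a in units):
        e += 1
    return e

# m = 10 has the bijection property even though lambda(10) = 4 does not
# divide 10, matching the single observed discrepancy with A124240.
mismatch = [m for m in range(2, 60)
            if aa_is_bijection(m) != (m % carmichael(m) == 0)]
```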
|
Prove that a function is Riemannian integrable and its Riemannian integral is 0
|
Let $f:[a,b]\to \mathbb{R}$ be such that $\forall x_{0}\in[a,b]$ , $\lim_{x\to x_0}f(x)=0$ . Prove: $f$ is Riemann integrable on $[a,b]$ with $\int_a^bf(x)dx=0$ . Using the completeness theorem of real numbers, I have proved that $\forall \varepsilon>0$ there exist at most finitely many $x\in[a,b]$ such that $f(x)\geq\varepsilon$ . As the next step, should I prove that $f$ is continuous almost everywhere on $[a,b]$ ? I hope someone can help me. Thanks!
|
HINT: There's a quick compactness argument you could employ. Let $\varepsilon > 0$ be given. Then for each $x\in [a,b]$ , there is a $\delta_x > 0$ such that $|f(y)| < \varepsilon$ whenever $0 < |y-x| < \delta_x$ . Now, cover $[a,b]$ with the family $\{ (x-\delta_x , x+\delta_x ) : x\in [a,b] \}$ . Since $[a,b]$ is compact, we can find $x_1 , \ldots , x_n \in [a,b]$ such that $[a,b]\subset \bigcup_{i=1}^{n} (x_i-\delta_{x_i} , x_i +\delta_{x_i})$ . Now, we may assume that none of the $(x_i-\delta_{x_i} , x_i +\delta_{x_i})$ are contained in the other and furthermore that $x_1 < x_2 < \cdots < x_n$ . Form a partition out of the points of the form $a, x_i-\delta_{x_i} , x_i +\delta_{x_i}, b$ and see if you can complete the proof.
|
|integration|
| 0
|
Expected Maximum of 10 Balls Selected From Urn
|
There are 20 balls in an urn labeled from 1 to 20. You randomly pick 10 balls out of this urn. What is the expected maximum value of the 10 balls you picked out? I saw answers for this question on the forum, but I am confused why my answer is not correct. I approached this problem by determining the density function of the maximum, $Z = \max(X_1,\ldots,X_{10})$ . $P(Z=z) = \frac{z^{10}-(z-1)^{10}}{20^{10}}$ Then, I found the expectation by summing the product of the probabilities with values of $z$ ranging from 1 to 20. The answer I obtained is 18.64, whilst the real answer is 210/11. I know that my approach is basically assuming sampling with replacement, although the question implies sampling without replacement. However, I thought this shouldn't matter due to linearity of expectation? Please let me know!
|
Expectation problems are sometimes surprisingly easy to solve because they essentially deal with averages. Consider the $20$ points as cuts dividing a line of length $1$ into $21$ equal parts, reflecting where the points fall on a $0$ to $1$ scale: the points sit at $k/21$ for $k = 1,2,\ldots,20$ . And ten randomly sampled points will on average divide the line into $11$ equal parts, at $1/11, 2/11, \ldots, 10/11$ . Thus for the highest sampled point, $k/21 = 10/11 \Rightarrow \boxed{k = 210/11}$
|
|probability|expected-value|
| 0
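Both numbers can be checked directly: the exact without-replacement expectation via $P(Z=k)=\binom{k-1}{9}/\binom{20}{10}$, and the OP's with-replacement formula, which indeed gives about $18.64$ rather than $210/11 \approx 19.09$.

```python
from math import comb

n, r = 20, 10

# Without replacement: P(max = k) = C(k-1, r-1) / C(n, r) for k = r..n.
E = sum(k * comb(k - 1, r - 1) for k in range(r, n + 1)) / comb(n, r)

# With replacement (the OP's density): P(Z = z) = (z^r - (z-1)^r) / n^r.
E_repl = sum(z * (z**r - (z - 1)**r) for z in range(1, n + 1)) / n**r
```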
|
A question on the parabolic fractal distribution
|
The parabolic fractal distribution is a discrete probability distribution with probability mass function $$f(n;b,c)\propto n^{-b}\exp (-c(\log n)^{2}).$$ The mass is given as a function of the rank $n$ , where $b$ and $c$ are parameters. I have tried to research this distribution with little success. There are almost no papers on the topic, zero examples or notes, and no analysis or derivation. Does anyone have any information on this probability distribution? And, since it is not a popular topic, is trying to research it a total waste of time? Any input would be extremely appreciated!
|
Like you said, this distribution is not often researched, but it shows up in nature more than comparable distributions. First off, it exhibits self-similarity across different scales, characterized by a central clustering of points with diminishing density towards the periphery, mirroring the geometric structure of a parabola. It typically arises in complex systems with nonlinear dynamics, where the underlying mechanisms generate patterns that are scale-invariant and exhibit fractal geometry. Analyzing such distributions often involves techniques from fractal analysis and nonlinear dynamics to elucidate the underlying principles governing the system's behavior across multiple scales. This is how I understand it to be. As previously mentioned, there is not much out there, and especially not much recent work, but the speculation is that it has applications in other areas like finance and modeling and simulation. In terms of derivation, you would first have to assume self-similarity. Next is deriving the distribution itself from that assumption.
|
|probability-theory|probability-distributions|
| 0
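The definition itself is easy to experiment with numerically; note that $\log f(n) = \text{const} - b\log n - c(\log n)^2$ is a downward parabola in $\log n$, which is where the name comes from. A truncated, normalized version (parameter values arbitrary):

```python
import math

def parabolic_fractal_pmf(b, c, N):
    """f(n) ∝ n^(-b) * exp(-c * (log n)^2) for n = 1..N, normalized to sum 1."""
    w = [n ** (-b) * math.exp(-c * math.log(n) ** 2) for n in range(1, N + 1)]
    Z = sum(w)
    return [x / Z for x in w]

p = parabolic_fractal_pmf(b=1.0, c=0.1, N=10_000)
# For b, c > 0 the mass decreases monotonically in the rank n.
```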
|
Covering dark squares on a large chess board
|
Imagine we have an $8 \times 8$ chessboard and a person situated on one of the dark squares. The person is allowed to jump diagonally, but only by $1$ square, and the person cannot revisit the squares they were on before. Of course, if the player begins in a non-corner square, then since there are two dark corner squares they cannot cover all of the dark squares: if they enter a corner square, there is no way for them to leave it without retracing their steps. Therefore they must begin at a corner; WLOG assume that they start at the bottom left corner. Then they have a forced move towards the top right square, and now they have the same problem: the top right square of the chessboard and the square that is third from the left on the bottom can each be entered in only one way. Therefore the person cannot cover all of the dark squares by jumping diagonally only $1$ square at a time. If we now allow the player more freedom by giving them the ability to jump $1$ or $2$ diagonal squares, the person can cover all of the dark squares. Can this always be done, for example on larger boards?
|
You should be able to convince yourself that one-space plus two-space moves can cover any rectangular board except very small ones. The only problem comes in a corner, and there are only two kinds of corner: those with one black square in the corner and those with two black squares next to the corner. You can find a pattern that handles both kinds of corner. Then, if you have enough room, you can connect the corners using only single-space moves. The only problem comes if the corners are too close together and one uses squares that are needed for another.
|
|number-theory|graph-theory|recreational-mathematics|chessboard|
| 0
|
If $g \in L^1(\mathbb{R})$ and $g' \in L^p$ then $\lim_{|x| \to \infty} g(x) = 0$
|
I am working on this problem from Le Gall's measure theory book, which has 2 parts. 1. Let $f$ be in $L^p(\mathbb{R})$ for $1\leq p < \infty$ . Let $q$ be the conjugate exponent of $p$ . Let $F(x) = \int_{0}^{x} f(t) dt$ . Verify that $F$ is well-defined and $$\lim_{h \to 0^+} \frac{\sup_{x \in \mathbb{R}} \lvert F(x+h) - F(x)\rvert}{h^{1/q}}= 0$$ 2. Let $g$ be continuously differentiable and in $L^1(\mathbb{R})$ . Suppose $g' \in L^p$ for some $p \in [1, \infty)$ . Then $\lim_{|x| \to \infty} g(x) = 0$ . I can show the first part using the density of continuous and compactly supported functions in $L^p$ but I need help with the second part. If I define $G(x) = \int_{0}^{x} g'(t)dt = g(x) - g(0)$ then the first part only gives me that $$\lim_{h \to 0^+} \frac{\sup_{x} \lvert g(x+h) - g(x)\rvert}{h^{1/q}} = 0$$ This only gives a Hölder-like continuity bound and doesn't seem connected to the tails, at least on the surface.
|
Suppose $g(x)$ does not tend to $0$ as $x \to \infty$ . Then there is a sequence $x_n \to \infty$ and a positive number $\delta$ such that $|g(x_n)|>\delta$ for all $n$ . Using the fact that $\frac{\sup_{x} \lvert g(x+h) - g(x)\rvert}{h^{1/q}} \to 0$ , conclude that there exists $h>0$ with the property $|g(x+h')-g(x)| < \delta/2$ for all $x$ whenever $0 < h' \leq h$ . Now $\int_{x_n}^{x_n+h} |g(y)|dy \ge \int_{x_n}^{x_n+h} [|g(x_n)|-\delta/2] dy>(\delta/2)h$ for all $n$ . But integrability of $g$ implies that $\int_{x_n}^{\infty} |g(y)| dy \to 0$ as $ n \to \infty$ . We have arrived at a contradiction. This proves that $\lim_{x \to \infty} g(x)=0$ . A similar argument shows that $\lim_{x \to -\infty} g(x)=0$ .
|
|real-analysis|measure-theory|
| 1
|
Limit as $x$ $\to$ $\infty$ of $x\left(1+\frac1x\right)^x-kx^2\ln\left(1+\frac1x\right)$
|
Evaluate:- $$\lim_{x\to\infty}\left[ x\left(1+\frac1x\right)^x-kx^2\ln\left(1+\frac1x\right)\right]$$ I tried calculating the limits of the two terms separately. By applying L'Hopital, the 2nd limit can be solved by writing $x^2\ln\left(1+\frac1x\right)=\frac{\ln\left(1+\frac1x\right)}{\frac{1}{x^2}}$ . Now, L'Hopital gives the final form as $\frac{x^2}{2(1+x)}=\infty$ . Now, for the 1st term, we know $\left(1+\frac1x\right)^x$ is $e$ as $x$ goes to infinity, so writing the 1st term as $\frac{\left(1+\frac1x\right)^x}{\frac{1}{x}}$ , we get $\frac{e}{0}=\infty$ . Now, since both limits are infinite, how can we determine the actual limit?
|
As $x\to+\infty$ , $u:=x\ln\left(1+\frac1x\right)-1\sim-\frac1{2x}\to0$ , and $$\begin{align}\left(1+\frac1x\right)^x-kx\ln\left(1+\frac1x\right)&=e^{1+u}-k(1+u)\\&=(e-k)(1+u)+\frac e2u^2+o(u^2)\\ &\begin{cases}\to e-k&\text{if }e\ne k\\\sim\frac e{8x^2}&\text{if }e=k \end{cases} \end{align}$$ hence $$\lim_{x\to\infty}x\left(\left(1+\frac1x\right)^x-kx\ln\left(1+\frac1x\right)\right)=\begin{cases}+\infty&\text{if }e>k\\-\infty&\text{if }e<k\\0&\text{if }e=k.\end{cases}$$
|
|calculus|limits|limits-without-lhopital|
| 0
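The three cases can be confirmed numerically; `log1p` avoids cancellation when computing $\ln(1+1/x)$ for large $x$. The sample values of $x$ and $k$ are arbitrary.

```python
import math

def expr(x, k):
    """x*(1 + 1/x)**x - k * x**2 * ln(1 + 1/x), computed stably via log1p."""
    L = math.log1p(1 / x)
    return x * math.exp(x * L) - k * x * x * L

v_e  = expr(1e4, math.e)  # k = e: tends to 0 (like e/(8x))
v_lo = expr(1e4, 2.0)     # k < e: grows to +infinity
v_hi = expr(1e4, 3.0)     # k > e: falls to -infinity
```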
|
Every sequentially discrete space is totally path disconnected
|
A topological space $X$ is sequentially discrete if every convergent sequence in $X$ is eventually constant. A space $X$ is totally path disconnected if every path $f:[0,1]\to X$ is constant. It seems the following should be true: Proposition: Every sequentially discrete space is totally path disconnected. Can anyone provide a proof? Note : One cannot deduce the stronger property that the space is totally disconnected. For example, the cocountable topology on an uncountable set is sequentially discrete, but not totally disconnected. It is in fact connected (even hyperconnected).
|
Assume $f: [0,1] \rightarrow X$ is not constant. $B := f([0,1])$ is connected. Since sequentially discrete spaces are obviously $T_1$ , $B$ is infinite. Hence there is a sequence $(r_n)_{n \in \mathbb N} \subset [0, 1]$ such that the $f(r_n)$ are pairwise distinct. Since $[0, 1]$ is sequentially compact, a subsequence $(r_n)_{n \in N}$ converges to some $r \in [0,1]$ . Hence $(f(r_n))_{n \in N}$ converges to $f(r)$ , contradicting sequential discreteness.
|
|general-topology|connectedness|
| 1
|
Proof of limit of a sequence where n approaches infinity
|
I understand all of the proof until I get to the part where we divide by $2M$ . My teacher explained that we divide by $2M$ because we are aiming to get to epsilon, which makes sense, but this does not seem rigorous enough to me, as I believe that dividing by $2M$ can potentially affect the inequality sign. Is there a better explanation?
|
The inequality signs would not change since $M>0$ . Since $X = (x_n)$ is converging to $a$ and $Y = (y_n)$ is converging to $b$ , for arbitrary $\epsilon_2 > 0$ we can find $K_1, K_2 \in \mathbb{N}$ such that if $n > K_1$ then $|x_n-a| < \epsilon_2$ , and if $n > K_2$ then $|y_n-b| < \epsilon_2$ . Now, we just have to let $\epsilon_2 =\frac{\epsilon}{2M}>0$ .
|
|real-analysis|sequences-and-series|limits|analysis|
| 0
|
Show that the map $f(z) = z^k$ is a open map in the complex plane
|
I can see that the map $f(z) = z^k$ is a homeomorphism on neighbourhoods of non-zero points that do not contain zero. So I need to check only neighbourhoods of zero. Can anyone help me out? Edit: As mentioned in the comments, I forgot to mention that $k$ here is a positive integer.
|
Fix an open set $X\subseteq \mathbb{C}$ . For each $z\in X$ , there is a small radius $r=r_z$ such that the open ball $B_r(z) = \{w\mid |w-z| < r\}$ is completely included in $X$ . As you have noticed, for $z\ne 0$ , we may choose $r_z$ sufficiently small so that $f(z)=z^k$ restricted to $B_{r_z}$ is a homeomorphism. For $z=0$ , $f$ maps $B_r(0)$ onto $B_{r^k}(0)$ , which is an open set. Therefore, the image of $X$ under $f$ is $\bigcup_{z\in X}f[B_{r_z}(z)]$ , which is open.
|
|general-topology|complex-analysis|
| 1
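A sample-based check (not a proof) of the key step $f(B_r(0)) = B_{r^k}(0)$: every $w$ with $|w| < r^k$ has a $k$-th root of modulus $|w|^{1/k} < r$. The values of $k$ and $r$ are arbitrary.

```python
import random

k, r = 3, 0.5
random.seed(0)

ok = True
for _ in range(1000):
    # Random target point w; if it lies in B_{r^k}(0), it should have a
    # k-th root z (principal branch) lying in B_r(0) with z^k == w.
    w = complex(random.uniform(-r**k, r**k), random.uniform(-r**k, r**k))
    if 0 < abs(w) < r ** k:
        z = w ** (1 / k)
        ok = ok and abs(z) < r and abs(z ** k - w) < 1e-9
```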
|
Subsets and Splits
SQL Console for glopezas/math_stackexchange_qa
Filters and displays accepted answers excluding questions with specific tags, providing a focused view of less common or specialized topics.