Ratio of definite integrals
My school gave this question out, and I couldn't think of a way to do it other than bashing by parts multiple times. Is there a clever solution? Given: $$ I = \int_{0}^{4\pi} e^x (\sin^6 x + \cos^6 x) \, dx $$ $$ J = \int_{0}^{\pi} e^x (\sin^6 x + \cos^6 x) \, dx $$ Find the value of $$\frac{I}{J}.$$ (PS: Sorry about the wonky formatting, still trying to work stuff out.)
$$I = \int_{0}^{4\pi} e^x (\sin^6 x + \cos^6 x) \,dx$$ $$I = \int_{0}^{2\pi} e^x (\sin^6 x + \cos^6 x) \,dx+\int_{2\pi}^{4\pi} e^x (\sin^6 x + \cos^6 x) \,dx$$ Let $y=x-2\pi$ , $$I = \int_{0}^{2\pi} e^x (\sin^6 x + \cos^6 x) \,dx+\int_{0}^{2\pi} e^{y+2\pi} (\sin^6 (y+2\pi) + \cos^6 (y+2\pi)) \,dy$$ $$I=(1+e^{2\pi})\int_{0}^{2\pi} e^x (\sin^6 x + \cos^6 x) \,dx$$ Similarly, let $y=x-\pi$ , $$I=(1+e^{2\pi})\int_{0}^{\pi} e^x (\sin^6 x + \cos^6 x) \,dx+(1+e^{2\pi})\int_{\pi}^{2\pi} e^x (\sin^6 x + \cos^6 x) \,dx$$ $$I=(1+e^{2\pi})\int_{0}^{\pi} e^x (\sin^6 x + \cos^6 x) \,dx+(1+e^{2\pi})\int_{0}^{\pi} e^{y+\pi} (\sin^6 (y+\pi) + \cos^6 (y+\pi)) \,dy$$ $$I=(1+e^{\pi})(1+e^{2\pi})\int_{0}^{\pi} e^x (\sin^6 x + \cos^6 x) \,dx$$ $$I=(1+e^{\pi})(1+e^{2\pi})J$$ Therefore, $$\boxed{\dfrac{I}{J}=(1+e^{\pi})(1+e^{2\pi})}$$
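The closed form is easy to sanity-check numerically. A small Python sketch (the Simpson helper is mine, not part of the answer):

```python
from math import exp, pi, sin, cos

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

f = lambda x: exp(x) * (sin(x) ** 6 + cos(x) ** 6)
I = simpson(f, 0.0, 4 * pi)
J = simpson(f, 0.0, pi)
ratio = I / J
closed_form = (1 + exp(pi)) * (1 + exp(2 * pi))
```

The same check also confirms the equivalent geometric-series form $\frac{e^{4\pi}-1}{e^{\pi}-1}$, since $(1+e^{\pi})(1+e^{2\pi})=1+e^{\pi}+e^{2\pi}+e^{3\pi}$.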
|definite-integrals|trigonometric-integrals|
0
Atan2 Faster Approximation
I am using atan2(y, x) to find the polar angle between the x-axis and the vector to the point (x, y) when converting Cartesian coordinates to polar coordinates. But in my program, which will be used for calculations to move a robot, atan2 is a very computationally expensive procedure. What are some ways to approximate atan2's result with a fast calculation, with an error bound of 1 degree? Wikipedia reference for atan2: https://en.wikipedia.org/wiki/Atan2 Here is some sample output: atan2(1, 0) --> pi/2 atan2(-1, 0) --> -pi/2 atan2(0, 1) --> 0 atan2(0, -1) --> pi
The code is intended for Visual Studio and is based on my proposed fast approximation of the arctangent using the Padé-Chebyshev method (see https://stackoverflow.com/questions/42537957/fast-accurate-atan-arctan-approximation-algorithm ). Unlike the standard atan2 function, situations where both operands are equal to $0$ , or where the value $x^2+y^2$ causes an overflow (including when $x$ or $y$ is infinite), are not supported. But this made it possible to achieve fairly high performance and accuracy (~4 ulps max) over a wide range of operands. The function calculates $2\operatorname{arctg} t$ , where $$t=\frac{y}{\sqrt{x^2+y^2}+x}, \text{ if } x\ge0;$$ and $$t=\frac{\sqrt{x^2+y^2}-x}{y}, \text{ if } x\le 0.$$ Calculations are even easier if the norm of the vector (x, y) is known in advance. __declspec(naked) float __vectorcall arctg2(float y, float x) { static const float ct[14] = // Constants table { 6.28740248E-17f, // a0*(a1+b0)=a0*c1 9.73632411E-17f, // 2*a0*a1 2.24874633E-18f
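Since the original listing is cut off above, here is a much simpler (and less accurate) self-contained sketch in Python of the same general idea: reduce to a small argument, then use a cheap arctangent approximation. It is not the Padé-Chebyshev code from the answer; it uses the well-known polynomial $\arctan t \approx \frac{\pi}{4}t + 0.273\,t(1-|t|)$ on $|t|\le1$, whose maximum error (~0.0038 rad, about 0.22°) still meets the 1° budget from the question:

```python
import math

def approx_atan(t):
    # Polynomial approximation valid for |t| <= 1 only.
    # Max error ~0.0038 rad (~0.22 deg); NOT the answer's Pade-Chebyshev fit.
    return (math.pi / 4) * t + 0.273 * t * (1.0 - abs(t))

def fast_atan2(y, x):
    if x == 0.0:
        # straight up / straight down; (0, 0) is left unsupported, return 0
        return math.copysign(math.pi / 2, y) if y != 0.0 else 0.0
    if abs(x) >= abs(y):
        a = approx_atan(y / x)                  # here |y/x| <= 1
        return a if x > 0.0 else a + math.copysign(math.pi, y)
    # here |x/y| < 1; use atan2(y, x) = sign(y)*pi/2 - atan(x/y)
    return math.copysign(math.pi / 2, y) - approx_atan(x / y)
```

On a uniform sweep of directions the worst error is roughly a quarter of a degree, comfortably inside the requested bound.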
|sequences-and-series|trigonometry|approximation-theory|
0
Finding $\int_0^\infty \frac{(\log x)^3}{x^2 +2x +2} dx$
I need help with this integral: $$\int_0^\infty \frac{(\log x)^3}{x^2 +2x +2} dx$$ What I have done so far: \begin{align*} I &= \int_0^1 \frac{(\log x)^3}{x^2 + 2x + 2} dx + \int_1^\infty \frac{(\log x)^3}{x^2 +2x +2} dx\\ &=−(\int_1^\infty \frac{(\log x)^3}{2x^2 + 2x + 1} dx) + \int_1^\infty \frac{(\log x)^3}{x^2 +2x +2} dx \end{align*} After this I am not sure how to proceed. According to WolframAlpha, the approximate value of the integral is $2.55128$ .
\begin{align}J&=\int_0^\infty \frac{(\log x)^3}{x^2 +2x +2} dx\\ &\overset{u=\frac{x}{\sqrt{2}}}=\frac{1}{\sqrt{2}}\int_0^\infty \frac{\ln^3\left(\sqrt{2}u\right)}{u^2+\sqrt{2}u+1}du\\ &=\frac{3\ln 2}{2\sqrt{2}}\underbrace{\int_0^\infty\frac{\ln^2 u}{u^2+\sqrt{2}u+1}du}_{=K}+\frac{\pi\ln^3 2}{32}\\ S&=\int_0^\infty\int_0^\infty\frac{\ln^2(tu)}{(u^2+\sqrt{2}u+1)(t^2+\sqrt{2}t+1)}du\,dt\\ &\overset{z(t)=tu}=\int_0^\infty\int_0^\infty \frac{u\ln^2 z}{(u^2+\sqrt{2}u+1)(z^2+\sqrt{2}uz+u^2)}dudz\\ &=\int_0^\infty \left(-\frac{\pi\ln^2 z}{1+z^2}+\frac{z\ln^3 z}{z^3-z^2+z-1}+\frac{\ln^3 z}{z^3-z^2+z-1}+\frac{\frac{\pi\ln^2 z}{2}}{1+z^2}\right)dz\\ &=-\int_0^\infty \left(\frac{\pi\ln^2 z}{2(1+z^2)}+\frac{(1+z)\ln^3 z}{(1-z)(1+z^2)}\right)dz\\ &=-\frac{\pi^4}{16}-2\int_0^1 \frac{(1+z)\ln^3 z}{(1-z)(1+z^2)}dz\\ &=-\frac{\pi^4}{16}-2\int_0^1 \frac{\ln^3 z}{1-z}dz-\underbrace{\int_0^1 \frac{2z\ln^3 z}{1+z^2}dz}_{w=z^2}\\ &=-\frac{\pi^4}{16}+12\zeta(4)+\frac{21}{32}\zeta(4)=\frac{5\pi^4}{64} \end{align} In the expansion of $\ln^3(\sqrt{2}u)$ the odd powers of $\ln u$ integrate to $0$ under $u\mapsto 1/u$ , which is why only the $\ln^2 u$ term and the constant survive. Likewise, expanding $\ln^2(tu)=\ln^2 t+2\ln t\ln u+\ln^2 u$ in $S$ (the odd term vanishes again) gives $S=2K\int_0^\infty\frac{du}{u^2+\sqrt{2}u+1}=\frac{\pi}{\sqrt{2}}K$ , hence $K=\frac{5\sqrt{2}\,\pi^3}{64}$ and $$J=\frac{15\pi^3\ln 2}{128}+\frac{\pi\ln^3 2}{32}\approx 2.55128.$$
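None of the following is from the answer; it is a numeric cross-check I added. On my reading, expanding $\ln^2(tu)$ and dropping the odd powers of $\ln$ gives $S=2K\int_0^\infty\frac{du}{u^2+\sqrt{2}u+1}=\frac{\pi K}{\sqrt{2}}$, hence $K=\frac{5\sqrt{2}\pi^3}{64}$; the sketch checks this and the value quoted in the question ($\approx 2.55128$), using the substitution $x=e^v$ to tame both endpoints:

```python
from math import exp, log, pi, sqrt

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# J = integral over (0, inf) of ln^3(x)/(x^2+2x+2) with x = e^v:
# the transformed integrand decays exponentially, so [-40, 40] suffices
J = simpson(lambda v: v**3 * exp(v) / (exp(2 * v) + 2 * exp(v) + 2), -40.0, 40.0)

# K = integral over (0, inf) of ln^2(u)/(u^2+sqrt(2)u+1), same substitution
K = simpson(lambda v: v**2 * exp(v) / (exp(2 * v) + sqrt(2) * exp(v) + 1), -40.0, 40.0)

K_closed = 5 * sqrt(2) * pi**3 / 64   # assumes S = pi*K/sqrt(2) and S = 5*pi^4/64
J_from_K = 3 * log(2) / (2 * sqrt(2)) * K + pi * log(2) ** 3 / 32
```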
|calculus|definite-integrals|contour-integration|
0
Ratio of definite integrals
My school gave this question out, and I couldn't think of a way to do it other than bashing by parts multiple times. Is there a clever solution? Given: $$ I = \int_{0}^{4\pi} e^x (\sin^6 x + \cos^6 x) \, dx $$ $$ J = \int_{0}^{\pi} e^x (\sin^6 x + \cos^6 x) \, dx $$ Find the value of $$\frac{I}{J}.$$ (PS: Sorry about the wonky formatting, still trying to work stuff out.)
Let $$ f(x) = \sin^6 x + \cos^6 x. $$ By using the identities $\sin(x+\pi)=-\sin x$ and $\cos(x+\pi) = -\cos x$ , it is easy to check that $$ f(x+\pi) = f(x). $$ So, we get \begin{align*} I &= \sum_{k=0}^{3} \int_{k\pi}^{(k+1)\pi} e^x f(x) \, \mathrm{d}x = \sum_{k=0}^{3} \int_{0}^{\pi} e^{x+k\pi} \underbrace{ f(x+k\pi) }_{=f(x)} \, \mathrm{d}x = \sum_{k=0}^{3} e^{k\pi} J. \end{align*} Dividing both sides by $J$ , we get $$ \frac{I}{J} = \sum_{k=0}^{3} e^{k\pi} = \frac{e^{4\pi} - 1}{e^{\pi} - 1}. $$
|definite-integrals|trigonometric-integrals|
1
$f(x,y)= Ax +By +Cxy +D$. If $f(x,y)=\frac 1{xy}$ for $x,y \in \{3,4\}$, then $f(5,5)=?$
Let $f(x,y)= Ax +By +Cxy +D$ . If $f(x,y)=\frac 1{xy}$ for $x,y \in \{3,4\}$ , then what is the value of $f(5,5)$ ? In this question I tried substituting $y=x$ , giving $f(x,x) = \frac {1}{x^2}$ for $x \in \{3,4\}$ , where $f(x,x)$ is a polynomial in $x$ of degree 2. So $x^2f(x,x)$ should be a polynomial of degree 4 $\implies$ $x^2f(x,x)-1 = k(x-3)^2(x-4)^2$ . Putting $x=0$ in the equation gives $k=\frac {-1}{144}$ ; putting $x=5$ gives $f(5,5) = \frac 7{180}$ . But the answer given is $f(5,5) = \frac1{36}$ . I can't find the mistake, so it would be good if someone could point it out and also provide a correct solution.
You have $$ \begin{align} f(3,4)=f(4,3)=\frac{1}{12}\\ f(3,3)=\frac{1}{9}\\ f(4,4)=\frac{1}{16}\\ \end{align} $$ and $$ f(x,y)=ax+by+cxy+d $$ Solving the linear system of equations with your favourite method yields $$ f(x,y)=\frac{1}{144}\left(-7x-7y+xy+49\right) $$ That yields $$ f(5,5)=\frac{4}{144}=\frac{1}{36} $$
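The system can be solved mechanically in exact arithmetic; a short sketch with Python's `fractions` (the solver helper is mine, not from the answer):

```python
from fractions import Fraction

def solve_exact(M, rhs):
    # Gauss-Jordan elimination over the rationals
    n = len(M)
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    for i in range(n):
        p = next(r for r in range(i, n) if aug[r][i] != 0)
        aug[i], aug[p] = aug[p], aug[i]
        piv = aug[i][i]
        aug[i] = [v / piv for v in aug[i]]
        for r in range(n):
            if r != i and aug[r][i] != 0:
                factor = aug[r][i]
                aug[r] = [vr - factor * vi for vr, vi in zip(aug[r], aug[i])]
    return [aug[r][n] for r in range(n)]

pts = [(3, 4), (4, 3), (3, 3), (4, 4)]
M = [[Fraction(x), Fraction(y), Fraction(x * y), Fraction(1)] for x, y in pts]
rhs = [Fraction(1, x * y) for x, y in pts]
A, B, C, D = solve_exact(M, rhs)          # rows are [x, y, xy, 1]
f55 = 5 * A + 5 * B + 25 * C + D
```

The exact coefficients come out as $\frac{1}{144}(-7,-7,1,49)$ and $f(5,5)=\frac{1}{36}$, matching the answer.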
|functions|polynomials|linear-transformations|
0
Polish Olympiad Problem
Given the above problem, the solution seems trivial. Shouldn't it just be $$\dfrac{180^\circ-40^\circ-90^\circ }{2}=25^\circ ?$$ However because this is an olympiad problem, I think I might have gotten the answer wrong. How do you work this problem out? What am I doing wrong?
Please refer to the image below for a less complicated geometric solution. As depicted in the picture, firstly, mirror the triangle $BDC$ across $BC$ . Then mirror triangle $ABD'$ across $AB$ . Following the angles, you can see that $D''AC$ is a straight line. Furthermore, $\triangle BD'D''$ is equilateral and $\triangle BCD''$ is isosceles. Therefore $BD''=D'D''=CD''$ , which implies that points $B$ , $D'$ and $C$ all lie on a circle centred at $D''$ . In other words, $D''$ is the circumcentre of $\triangle BD'C$ . Therefore $\angle BCD'=\frac12\angle BD''D'$ . Thus $x=30^{\circ}$ .
|geometry|contest-math|triangles|problem-solving|angle|
0
$f(x,y)= Ax +By +Cxy +D$. If $f(x,y)=\frac 1{xy}$ for $x,y \in \{3,4\}$, then $f(5,5)=?$
Let $f(x,y)= Ax +By +Cxy +D$ . If $f(x,y)=\frac 1{xy}$ for $x,y \in \{3,4\}$ , then what is the value of $f(5,5)$ ? In this question I tried substituting $y=x$ , giving $f(x,x) = \frac {1}{x^2}$ for $x \in \{3,4\}$ , where $f(x,x)$ is a polynomial in $x$ of degree 2. So $x^2f(x,x)$ should be a polynomial of degree 4 $\implies$ $x^2f(x,x)-1 = k(x-3)^2(x-4)^2$ . Putting $x=0$ in the equation gives $k=\frac {-1}{144}$ ; putting $x=5$ gives $f(5,5) = \frac 7{180}$ . But the answer given is $f(5,5) = \frac1{36}$ . I can't find the mistake, so it would be good if someone could point it out and also provide a correct solution.
$$\begin{align}f(5,5)&=f(3,3)+4f(4,4) - 2\left(f(3,4)+f(4,3)\right) \\& = \frac{1}{3\cdot3}+\frac{4}{4\cdot4}-2\cdot2\cdot\frac{1}{3\cdot4}\\&=\frac{1}{36} \end{align}$$
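This identity holds for every function of the form $Ax+By+Cxy+D$, because it holds separately for each of the basis functions $x$, $y$, $xy$, $1$; a randomized check plus the exact evaluation (my own sketch, not from the answer):

```python
import random
from fractions import Fraction

random.seed(0)
for _ in range(200):
    A, B, C, D = (random.uniform(-10, 10) for _ in range(4))
    f = lambda x, y: A * x + B * y + C * x * y + D
    # the interpolation identity from the answer, checked on random coefficients
    assert abs(f(5, 5) - (f(3, 3) + 4 * f(4, 4) - 2 * (f(3, 4) + f(4, 3)))) < 1e-9

# plugging in the given values exactly:
value = Fraction(1, 9) + 4 * Fraction(1, 16) - 2 * (Fraction(1, 12) + Fraction(1, 12))
```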
|functions|polynomials|linear-transformations|
0
Expected Value Question - SMT 2020
If $a$ is picked randomly in the range $(\frac{1}{4}, \frac{3}{4})$ and $b$ is chosen such that $$\int_{a}^{b} \frac{1}{x^2}dx = 1,$$ compute the expected value of $(b − a)$ . https://www.stanfordmathtournament.com/pdfs/smt2020/team-solutions.pdf I have solved it differently, as given in my answer below. Let me know if there is any flaw in the derivation.
I have solved it differently. From $\int_{a}^{b} \frac{1}{x^2}dx = 1$ we get $\frac{1}{a} - \frac{1}{b} = 1$ , so $b = \frac{a}{1-a}$ . The pdf of $a$ is $2$ on $(\frac14,\frac34)$ . The pdf of $b$ is derived using the CDF method: $P(B\leq b) = P\left(\tfrac{a}{1-a}\leq b\right) = P\left(a \leq \tfrac{b}{1+b}\right) = \frac{\frac{b}{1+b}-\frac{1}{4}}{\frac{3}{4} - \frac{1}{4}} = \frac{2b}{1+b} - \frac{1}{2}$ . Now differentiate to get the pdf; the interval for $b$ transforms to $(\frac{1}{3}, 3)$ and $f_B(b) = \frac{2}{(1+b)^2}$ . Then $E(B) = \int_{1/3}^{3} \frac{2b}{(1+b)^2}\,db$ . Put $u = 1+b$ , so $db = du$ and $b = u-1$ : $E(B) = 2 \int_{4/3}^{4} \frac{u-1}{u^2}\,du = 2 \int_{4/3}^{4} \frac{1}{u}\,du - 2 \int_{4/3}^{4}\frac{1}{u^2}\,du = 2\ln 3 - 1 = \ln 9 - 1$ . Also $E(a) = \frac{\frac{1}{4}+\frac{3}{4}}{2} = \frac{1}{2}$ . Thus $E(b - a) = E(b) - E(a) = \ln 9 - 1 - \frac{1}{2} = \ln 9 - \frac{3}{2}$ .
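A Monte Carlo sanity check of the final value (not part of the derivation; names are mine):

```python
import random
from math import log

random.seed(42)
N = 1_000_000
total = 0.0
for _ in range(N):
    a = random.uniform(0.25, 0.75)
    b = a / (1 - a)              # chosen so that the integral of x**-2 from a to b is 1
    total += b - a
estimate = total / N
exact = log(9) - 1.5
```

With a million samples the estimate lands within a few thousandths of $\ln 9-\tfrac32\approx0.697$.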
|expected-value|
0
Has this integral got a solution?
I was looking for a family of functions $f_{n}:[0,1]\to\mathbb{C}$ with pointwise convergence to some function $f$ such that $\lim_{n\to\infty}\int_{0}^{1}f_n\,dt\neq\int_{0}^{1}f\,dt$ . Taking $f=1$ and $f_{n}=e^{2\pi it^{n}}$ satisfies the pointwise convergence... but I have no idea what is going on with the integral $\int_{0}^{1}e^{2\pi it^{n}}dt$ . Wolfram Alpha refused to say anything productive; does the limit of this integral as $n\to\infty$ exist?
By the Lebesgue dominated convergence theorem , since $$ |f_n(t)|=1 $$ you have $$ \lim_{n \to \infty} \int_0^1 f_n(t)\, dt= \int_0^1 \lim_n f_n(t)\, dt=1 $$
|complex-analysis|analysis|
0
Has this integral got a solution?
I was looking for a family of functions $f_{n}:[0,1]\to\mathbb{C}$ with pointwise convergence to some function $f$ such that $\lim_{n\to\infty}\int_{0}^{1}f_n\,dt\neq\int_{0}^{1}f\,dt$ . Taking $f=1$ and $f_{n}=e^{2\pi it^{n}}$ satisfies the pointwise convergence... but I have no idea what is going on with the integral $\int_{0}^{1}e^{2\pi it^{n}}dt$ . Wolfram Alpha refused to say anything productive; does the limit of this integral as $n\to\infty$ exist?
Here is an answer to the question you really care about. Let $f_n(x)$ be $0$ outside the interval $[1/2n, 1/n]$ and, inside that interval, the constant that makes the integral $1$ , namely $2n$ . That sequence of functions converges pointwise to $0$ . You could modify this example so that the functions are continuous, but there is no example with uniformly bounded functions. By including the actual question you sidestepped the XY problem .
|complex-analysis|analysis|
0
Has this integral got a solution?
I was looking for a family of functions $f_{n}:[0,1]\to\mathbb{C}$ with pointwise convergence to some function $f$ such that $\lim_{n\to\infty}\int_{0}^{1}f_n\,dt\neq\int_{0}^{1}f\,dt$ . Taking $f=1$ and $f_{n}=e^{2\pi it^{n}}$ satisfies the pointwise convergence... but I have no idea what is going on with the integral $\int_{0}^{1}e^{2\pi it^{n}}dt$ . Wolfram Alpha refused to say anything productive; does the limit of this integral as $n\to\infty$ exist?
Your $f_n$ being uniformly bounded on $[0,1]$ , the Lebesgue Dominated Convergence Theorem will tell you that $\lim_{n \to \infty} \int_0^1 f_n(x)\; dx = \int_0^1 \lim_{n \to \infty} f_n(x)\; dx$ For a counterexample, you'll want functions that are not uniformly bounded. If you don't want a "piecewise" function, you might try $$ f_n(x) = n^2 x \exp(-n x) $$ [EDIT] As for your integral, you can write $$ \exp(2\pi i t^n) = \sum_{k=0}^\infty \frac{(2 \pi i)^k t^{nk}}{k!}$$ so that $$ \int_0^1 \exp(2 \pi i t^n)\; dt = \sum_{k=0}^\infty \frac{(2\pi i)^k}{k!\;(nk+1)} $$ The summation can be expressed using Laguerre functions or hypergeometric functions.
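The series in the edit is easy to check against direct numerical integration; a small sketch (function names mine):

```python
from cmath import exp as cexp
from math import factorial, pi

def integral_numeric(n, m=50001):
    # midpoint rule on [0, 1]; the integrand exp(2*pi*i*t**n) is smooth and bounded
    h = 1.0 / m
    return h * sum(cexp(2j * pi * ((k + 0.5) * h) ** n) for k in range(m))

def integral_series(n, terms=60):
    # sum over k of (2*pi*i)**k / (k! * (n*k + 1)), as derived in the answer
    return sum((2j * pi) ** k / (factorial(k) * (n * k + 1)) for k in range(terms))
```

The factorial in the denominator makes the series converge very quickly, so 60 terms is far more than enough.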
|complex-analysis|analysis|
1
Has this integral got a solution?
I was looking for a family of functions $f_{n}:[0,1]\to\mathbb{C}$ with pointwise convergence to some function $f$ such that $\lim_{n\to\infty}\int_{0}^{1}f_n\,dt\neq\int_{0}^{1}f\,dt$ . Taking $f=1$ and $f_{n}=e^{2\pi it^{n}}$ satisfies the pointwise convergence... but I have no idea what is going on with the integral $\int_{0}^{1}e^{2\pi it^{n}}dt$ . Wolfram Alpha refused to say anything productive; does the limit of this integral as $n\to\infty$ exist?
The problem is that $\exp(t^n)$ has no elementary antiderivative. So why not look at something simple to integrate, like $$f_n(x)=n\chi_{(0, 1/n)}(x)$$ where $\chi_A$ is the characteristic function of the set $A$ ?
|complex-analysis|analysis|
0
Has this integral got a solution?
I was looking for a family of functions $f_{n}:[0,1]\to\mathbb{C}$ with pointwise convergence to some function $f$ such that $\lim_{n\to\infty}\int_{0}^{1}f_n\,dt\neq\int_{0}^{1}f\,dt$ . Taking $f=1$ and $f_{n}=e^{2\pi it^{n}}$ satisfies the pointwise convergence... but I have no idea what is going on with the integral $\int_{0}^{1}e^{2\pi it^{n}}dt$ . Wolfram Alpha refused to say anything productive; does the limit of this integral as $n\to\infty$ exist?
What about $f_n(x)=n, x\in (0,1/n]$ and $f_n(x)=0, x=0 $ or $x\in (1/n,1]$ ? Clearly $f_n\to 0$ but $\displaystyle\int_0^1 f_n(x) dx=1$ for all $n\geq 1$ .
|complex-analysis|analysis|
0
A sentence $\phi$ with only self-dual connectives corresponds to both truth-functions $f$ and $f^*$
I'm working on a question for a course in mathematical logic. Prove that if a sentence $\phi$ expresses a truth function $f$ and every connective in $\phi$ is assigned a self-dual truth function, then $\phi$ expresses $f^*$ (the dual of $f$ ). My lecture notes say that $\phi$ expresses $f$ iff for all $\mathcal{A}:$ $|\phi|_\mathcal{A}=f(|\alpha_1|_\mathcal{A},...,|\alpha_n|_\mathcal{A})$ for $\alpha_1,...,\alpha_n\in\text{SenLett}(\phi)$ where $\text{SenLett}(\phi)$ is the set of all sentence letters in $\phi$ and $|\phi|_\mathcal{A}$ means the truth value of $\phi$ under the model $\mathcal{A}$ . I've tried to do this by strong induction on the complexity of $\phi$ , denoted $\text{NConn}(\phi)$ . Base case: $\text{NConn}(\phi)=0$ $|\phi|_\mathcal{A}=f(|\phi|_\mathcal{A})$ since there are no connectives. $\therefore |\phi|_\mathcal{A}=1-(1-|\phi|_\mathcal{A})=1-f(1-|\phi|_\mathcal{A}):=f^*(|\phi|_\mathcal{A})$ Inductive step: assume true for $\text{NConn}(\phi)\leq k$ Now I get stuck
I'm going to answer my own question after having asked my professor. Here is one approach. First note that $|\phi|_\mathcal{A}=f_\phi(|\alpha_1|_\mathcal{A},...,|\alpha_n|_\mathcal{A})$ for $\alpha_1,...,\alpha_n\in\text{SenLett}(\phi)$ Claim: $f_\phi=f_{\phi}^*$ Base case: $\text{NConn}(\phi)=0$ $|\phi|_\mathcal{A}=f(|\phi|_\mathcal{A})$ since there are no connectives. $\therefore |\phi|_\mathcal{A}=1-(1-|\phi|_\mathcal{A})=1-f(1-|\phi|_\mathcal{A}):=f^*(|\phi|_\mathcal{A})$ Inductive step: assume true for $\text{NConn}(\phi_i)\leq k,\ \ 1\leq i\leq m$ $\phi:=c(\phi_1,...,\phi_m)$ for some connective $c$ . We see that $\text{NConn}(\phi)\leq mk+1$ . $|\phi|_\mathcal{A}=f_c(|\phi_1|_\mathcal{A},...,|\phi_m|_\mathcal{A})$ where $f_c$ is the truth function for the connective $c$ . $$\begin{align*} |\phi|_\mathcal{A}&=f_{c^*}(|\phi_1|_\mathcal{A},...,|\phi_m|_\mathcal{A}) \text{ by self duality}\\ &=1-f_c(1-|\phi_1|_\mathcal{A},...,1-|\phi_m|_\mathcal{A})\\ &=1-f_c(1-f_{\phi_1}(|\beta_1|_
|logic|first-order-logic|
1
Given that $|z+w| = 1$ and $|z^2+w^2| = 14$, find the smallest possible value of $|w^3+z^3|$.
Given $w$ and $z$ , two complex numbers such that $|z+w| = 1$ and $|z^2+w^2| = 14$ , find the smallest possible value of $|w^3+z^3|$ , where $|\cdot|$ denotes the absolute value of a complex number, given by $|a+bi| = \sqrt{a^2 + b^2}$ . My working: $|w^3+z^3| = |w+z||w^2+z^2-zw| = |(w+z)^2-3wz|$ Using triangle inequalities, $|(w+z)^2-3wz| \ge ||w+z|^2-3|wz|| = |1-3|wz||$ $|z+w|^2 = |z^2+w^2+2wz|$ again applying the triangle inequality $|z^2+w^2|+2|wz|\ge1\ge||w^2+z^2|-2|wz||$ Solving for the range of $|wz|$ I get $15/2\ge |wz|\ge13/2$ , but then I get an incorrect range for $|z^3+w^3|$ .
Mathematica 14 answers z = x + I*y; w = u + I*v; NMinimize[ComplexExpand[{Abs[w^3 + z^3]^2, Abs[z + w]^2 == 1 && Abs[z^2 + w^2]^2 == 14^2}], {x, y, u, v}] $$\{420.25,\{x -> -2.05877, y -> -0.404213, u -> 3.04004, v -> 0.596871\}\} $$ Since this is the minimum of $|w^3+z^3|^2$ , the minimum of $|w^3+z^3|$ itself is $\sqrt{420.25}=20.5$ . I have strong doubts concerning its symbolic solution.
|complex-numbers|
0
Find the minimum value of this complex expression
$Z$ is a complex number; $i=\sqrt{-1}$. Find the minimum value of $$|Z-2(1+i)|+|Z+1-5i|+|Z-6+2i|$$ I am interested in a geometric method for this problem. I tried to use the triangle inequality but couldn't make it work.
$|z-a|$ represents the distance of $z$ from $a$ ; hence $|z-2(1+i)|$ represents the distance of $z$ from $2(1+i)$ , and likewise for the other two terms. So consider the points $(2,2)$ , $(-1,5)$ , $(6,-2)$ (my way of representing a complex number on the Argand plane: (real part, imaginary part)). These points are collinear, so the minimum is attained only when $z$ lies on the line joining them, specifically at $(2,2)$ (think why?). So the minimum is just the distance between $(-1,5)$ and $(6,-2)$ , which is $7\sqrt{2}$ .
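A brute-force numeric confirmation of this geometric argument (my own sketch):

```python
from math import hypot, sqrt

pts = [(2, 2), (-1, 5), (6, -2)]

def total(x, y):
    # sum of distances from (x, y) to the three given points
    return sum(hypot(x - px, y - py) for px, py in pts)

# the three points are collinear: the cross product of the edge vectors is 0
(x1, y1), (x2, y2), (x3, y3) = pts
cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

# grid search over a box containing all three points (step 0.05)
step = 0.05
best = min(total(i * step, j * step)
           for i in range(-120, 240) for j in range(-120, 240))
```

The grid minimum coincides with the value at the middle point $(2,2)$, namely $3\sqrt2+4\sqrt2=7\sqrt2$.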
|complex-analysis|geometry|complex-numbers|
0
Central idempotents and their relation to subrepresentations, in finite representation-theory over the group-ring $\mathbb{C}[G]$.
Let $G$ be a finite group. In M. Isaacs "Character theory of finite groups", chap. 2, we cover the fact that $\mathbb{C}[G] = \bigoplus_{i = 1}^{k} M_i(\mathbb{C}[G])$ , where $M_i(\mathbb{C}[G])$ is the sum of all submodules of $\mathbb{C}[G]$ that are isomorphic to an irreducible $\mathbb{C}[G]$ -module $M_i$ . The reason for the finiteness is a theorem which gives us that the set $\mathcal{M}(\mathbb{C}[G])$ of isomorphism-classes of irreducible $\mathbb{C}[G]$ -modules is finite. Implicitly, we have here picked a representative set $\mathcal{M}(\mathbb{C}[G]) = \{M_1,\ldots,M_k\}$ of irreducible $\mathbb{C}[G]$ -modules, one from each isomorphism-class. Since $M_i$ is a $\mathbb{C}[G]$ -module, it follows that $M_i$ is a $\mathbb{C}$ -module (I believe), hence $M_i$ is a vector space over $\mathbb{C}$ . We can then pick a basis $\mathcal{B}_{M_i}$ for $M_i$ , seen as a vector space over $\mathbb{C}$ . As I understand it, we then get an induced representation $\mathfrak{X}_
First (as discussed in the comments already), linear representations of the group extend naturally to the group algebra. Thus, it is sufficient for the theory to consider the representations of the algebra. With this, here is a slightly different way of stating Wedderburn's theorem over an algebraically closed field. It follows from what Isaacs proves, but he does not spell it out this way: Let $A=\mathbb{F}G$ be a semisimple algebra over an algebraically closed field $\mathbb{F}$ . Then $A$ is the direct sum of full matrix rings $\mathbb{F}^{n_i\times n_i}$ , where the $n_i$ are the dimensions of the irreducible modules. That is, in a suitable basis (and finding this basis concretely is equivalent to finding the irreducible representations), the matrices for elements of $A$ in the regular (dimension $|G|$ ) representation are all the block diagonal matrices, with blocks of size $n_i\times n_i$ , and all possible matrix entries in the blocks. In this form, the idempotent $e_i$ is the m
|representation-theory|characters|
1
Do isomorphic ideals yield isomorphic quotient rings?
I am interested in a proof or disproof of the following statement: Conjecture: If $A$ and $B$ are isomorphic ideals of a ring $R$ then the quotient ring $R/A$ is isomorphic to $R/B$. $$A\cong B\implies R/A\cong R/B$$ I start by assuming the existence of an isomorphism $\phi$ from $A$ to $B$. Then I define the mapping $$\Psi:r+A\mapsto \phi(r)+B.$$ $\Psi$ is well-defined. Suppose $r+A=s+A$ for some $s,r\in R$. $$ r-s\in A\implies \phi(r)-\phi(s)=\phi(r-s)\in B.$$ Thus $\phi(r)+B=\phi(s)+B$. $\Psi$ is a homomorphism. $$\Psi(r+s+A)=\phi(r+s)+B$$ $$=\phi(r)+\phi(s)+B=\Psi(r+A)+\Psi(s+A).$$ $$\Psi(rs+A)=\phi(rs)+B$$ $$\phi(r)\phi(s)+B=\Psi(r+A)\Psi(s+A).$$ $\Psi$ is one-to-one. If $\Psi(r+A)=\Psi(s+A)$ then $\phi(r)+B=\phi(s)+B$ which implies $\phi(r-s)\in B$. How can I get $r-s\in A$? $\Psi$ is onto. For any $t+B\in R/B$ choose $\phi^{-1}(t)+A\in R/A$. $\quad\square$ Questions: Is this work valid? Do I have sufficient conditions?
Let $R={\mathbb Z}\times 2{\mathbb Z}$ , $I=2{\mathbb Z}\times\{0\}$ and $J=\{0\}\times 2{\mathbb Z}$ . It is obvious that $I\simeq 2{\mathbb Z}\simeq J$ . Define $\varphi:R\to {\mathbb Z}_2\times 2{\mathbb Z}$ by $\varphi((a,2b))=([a]_2,2b)$ , then we have $$ \begin{array}{cl} \varphi((a,2b)+(c,2d))&=\varphi((a+c,2(b+d)))=([a+c]_2,2(b+d)) \\ &=([a]_2+[c]_2,2b+2d)=([a]_2,2b)+([c]_2,2d) \\ &=\varphi((a,2b))+\varphi((c,2d)) \end{array}, $$ $$ \begin{array}{cl} \varphi((a,2b)(c,2d))&=\varphi((ac,4bd))=\varphi((ac,2(2bd))) \\ &=([ac]_2,2(2bd))=([a]_2[c]_2,(2b)(2d)) \\ &=([a]_2,2b)([c]_2,2d)\\ &=\varphi((a,2b))\varphi((c,2d)) \end{array}. $$ Thus, $\varphi$ is a ring epimorphism. Let $(x,2y)\in {\rm Ker}(\varphi)$ , then $(0,0)=\varphi(x,2y)=([x]_2,2y)$ and so $x\in 2{\mathbb Z}, y=0$ . We can conclude that ${\rm Ker}(\varphi)=2{\mathbb Z}\times\{0\}=I$ . Therefore, $R/I\simeq {\mathbb Z}_2\times 2{\mathbb Z}$ . Define $\psi:R\to {\mathbb Z}$ by $\psi((a,2b))=a$ ; checking exactly as above, $\psi$ is a ring epimorphism with ${\rm Ker}(\psi)=\{0\}\times 2{\mathbb Z}=J$ , so $R/J\simeq {\mathbb Z}$ . Since ${\mathbb Z}_2\times 2{\mathbb Z}$ has an element of additive order $2$ while ${\mathbb Z}$ has none, $R/I\not\simeq R/J$ even though $I\simeq J$ .
|abstract-algebra|proof-verification|
0
A combinatorial problem about a nine-node complete graph.
Last update on 2024/3/7 22:13: a crucial error fixed. First, I am sorry for my poor English. As the title says, my problem is how to prove the following: In a complete graph on 9 vertices whose edges are colored black and white, there must be either an all-white 3-node cycle or an all-black 4-node clique. Please note that "black" and "white" here are specific: the problem is not the symmetric statement "there must be either an all-white triangle or an all-black 4-clique, or an all-black triangle or an all-white 4-clique." What's more, if you can show that the statement becomes false when 9 is replaced by 8, and give an example graph for it, I'd be rather grateful. Thanks a lot for reading.
Let $W$ be the subgraph induced by the white edges and $B = G\backslash W$ . If there is a vertex $v$ with $\delta_W(v)\ge4$ : call $a,b,c,d$ four of its white neighbors. If any pair among $\{a,b,c,d\}$ is joined by a white edge, together with $v$ it gives a white triangle. If not, $\{a,b,c,d\}$ is a black $K_4$ . Suppose now that all vertices in $G$ have $\delta_W(v)\le 3$ . Because the number of odd-degree vertices is even, there is necessarily a vertex $v$ with $\delta_W(v)\le2$ , or in other words, $\delta_B(v) \ge 6$ . Take its 6 neighbors along black edges: $a,b,c,d,e,f$ . Let $G'$ be the graph induced by these six vertices. By the friends and strangers theorem ( $R(3,3)=6$ ), there is either a white or a black triangle among these. A white triangle concludes directly, and a black triangle together with $v$ forms a black $K_4$ , which concludes the proof.
|combinatorics|graph-theory|
0
Matrices with rank 1 over finite field
Let $\mathbb{F}_{11}=\{\overline{0},\overline{1},\ldots,\overline{10}\} $ be a finite field and $A=(a_{ij})_{3 \times 3}$ a matrix such that $a_{ij} \in \mathbb{F}_{11}, \ \ \forall \ 1\leq i,j\leq 3$ . What is the probability that $\operatorname{rank}(A)=1$ ? I tried as follows: There are $11^9$ matrices of the described form. If $v_1, v_2, v_3 \in \mathbb{F}_{11}^3$ are the row vectors of $A$ , we need $\dim V=1$ , where $V=\operatorname{span}\{v_1,v_2,v_3\}$ . So we can choose the vector $v_1$ in $11^3-1$ different ways ( $v_1\neq 0$ ). Now, there are $11$ ways to choose the vector $v_2$ and $11\cdot 11=11^2$ ways to choose the vector $v_3$ ... Is this idea correct?
Your counting is not entirely correct, as the first row can also be zero (as already commented by user1551). But the basic idea works and refining your argument, we get a correct solution. Solution. If the first row is non-zero ( $11^3 - 1$ possibilities), there are $11^2$ possibilities for the other two rows. (The only condition is that they are contained in the span of the first row, which is of size $11$ .) If the first row equals zero and the second row is non-zero ( $11^3 - 1$ possibilities), there are $11$ possibilities for the last row. If the first two rows are zero, the last row must be nonzero, so $11^3 - 1$ possibilities. So in total there are $$ (11^3 - 1) \cdot 11^2 + (11^3 - 1) \cdot 11 + (11^3 - 1) = 176890 $$ rank $1$ matrices, and hence the requested probability equals $$ \frac{176890}{11^9} \approx 0.00750186\%. $$ Alternative 1. Knowing a bit more about the combinatorics of finite vector spaces, you can do it in this way for $n\times n$ matrices of rank $1$ over a ge
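The total above equals $(q^3-1)(q^2+q+1)$ with $q=11$; for small primes this closed form can be brute-forced and compared (the sketch is mine, with rank computed by Gaussian elimination mod $q$):

```python
from itertools import product

def rank_mod(rows, q):
    # row reduction of a 3x3 matrix over F_q (q prime)
    M = [r[:] for r in rows]
    rank = 0
    for c in range(3):
        piv = next((r for r in range(rank, 3) if M[r][c] % q), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], q - 2, q)       # inverse via Fermat's little theorem
        M[rank] = [v * inv % q for v in M[rank]]
        for r in range(3):
            if r != rank and M[r][c] % q:
                f = M[r][c]
                M[r] = [(a - f * b) % q for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def count_rank1(q):
    return sum(rank_mod([list(e[0:3]), list(e[3:6]), list(e[6:9])], q) == 1
               for e in product(range(q), repeat=9))

closed = lambda q: (q**3 - 1) * (q**2 + q + 1)
```

Brute force is feasible for $q=2$ ($2^9$ matrices) and $q=3$ ($3^9$ matrices); for $q=11$ only the closed form is evaluated.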
|linear-algebra|probability|combinatorics|finite-fields|matrix-rank|
1
Conditions under which the integral of an exponential function over $\mathbb{R}^k$ is finite
Theorem Let $C$ be a normalization constant and $\lambda_1,\ldots, \lambda_k$ a number sequence. The integral over $\mathbb{R}^k$ : $$ \int_{\mathbb{R}^k} C \exp\left\{-\frac{1}{2}\sum_{i=1}^k\lambda_iy^2_i\right\} dy_1\cdots dy_k $$ is finite only if all the $\lambda$ 's are positive . Question Can this be easily shown? I have no idea how to approach it; please give me ideas or guidelines. Background We consider $X\sim k$ -variate normal distribution, the density of which is of the form $$ C\exp\left\{-\frac{1}{2}\sum_{i=1}^k\sum_{j=1}^k a_{ij}(x_i-\xi_i)(x_j-\xi_j)\right\}. $$ In addition, we consider the orthogonal transformation $\underline{y} = Q \underline{x}, \underline{\eta} = Q \underline{\xi}$ . Then, it is known that $$ \sum_{i=1}^k\sum_{j=1}^k a_{ij}(x_i-\xi_i)(x_j-\xi_j) = \sum_{i=1}^k \lambda_i (y_i - \eta_i)^2 $$ holds. Therefore, the density of $\underline{Y} = Q\underline{X}$ is $$ C \exp\left\{-\frac{1}{2}\sum_{i=1}^k\lambda_i (y_i - \eta_i)^2\right\}. $$
The integrand is positive for any choice of $\{\lambda_1,...,\lambda_k\}$ so Tonelli-Fubini holds. If all $\lambda_\ell>0$ , then $$\begin{aligned}\int_{\mathbb{R}^k}e^{-\sum_{1\leq \ell \leq k}\lambda_\ell y_\ell^2/2}d\lambda^k&=\int_{\mathbb{R}}(...)\int_{\mathbb{R}}e^{-\sum_{1\leq \ell \leq k}\lambda_\ell y_\ell^2/2}dy_1(...)d y_k\\ &=\int_{\mathbb{R}}(...)\int_{\mathbb{R}}e^{-\sum_{2\leq \ell \leq k}\lambda_\ell y_\ell^2/2}\bigg(\int_{\mathbb{R}}e^{-\lambda_1 y_1^2/2}dy_1\bigg)dy_2(...)d y_k\\ &=(2\pi)^{1/2}\lambda_1^{-1/2}\int_{\mathbb{R}}(...)\int_{\mathbb{R}}e^{-\sum_{2\leq \ell \leq k}\lambda_\ell y_\ell^2/2}dy_2(...)d y_k\\ &=(...)\\ &=(2\pi)^{k/2}\prod_{1\leq \ell\leq k}\lambda_\ell^{-1/2}.\end{aligned}$$ Suppose there exists $\lambda_h\leq 0$ . Wlog $h=1$ . Then $$\begin{aligned} \int_{\mathbb{R}^k}e^{-\sum_{1\leq \ell \leq k}\lambda_\ell y_\ell^2/2}d\lambda^k=\int_{\mathbb{R}}(...)\int_{\mathbb{R}}e^{-\sum_{2\leq \ell \leq k}\lambda_\ell y_\ell^2/2}\underbrace{\bigg(\int_{\mathbb{R}}e^{-\lambda_1 y_1^2/2}dy_1\bigg)}_{=+\infty}dy_2(...)d y_k=+\infty. \end{aligned}$$
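A numeric illustration of the dichotomy (my sketch): for $\lambda>0$ truncated integrals stabilize at $\sqrt{2\pi/\lambda}$, while for $\lambda<0$ they grow without bound as the truncation window widens.

```python
from math import exp, pi, sqrt

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

lam = 2.0
gauss = simpson(lambda y: exp(-lam * y * y / 2), -20.0, 20.0)
target = sqrt(2 * pi / lam)      # the one-dimensional Gaussian integral

# lambda = -1: truncations over [-R, R] blow up as R grows
blowup = [simpson(lambda y: exp(y * y / 2), -R, R) for R in (5.0, 10.0, 15.0)]
```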
|real-analysis|probability|integration|analysis|gaussian-integral|
1
Given that $|z+w| = 1$ and $|z^2+w^2| = 14$, find the smallest possible value of $|w^3+z^3|$.
Given $w$ and $z$ , two complex numbers such that $|z+w| = 1$ and $|z^2+w^2| = 14$ , find the smallest possible value of $|w^3+z^3|$ , where $|\cdot|$ denotes the absolute value of a complex number, given by $|a+bi| = \sqrt{a^2 + b^2}$ . My working: $|w^3+z^3| = |w+z||w^2+z^2-zw| = |(w+z)^2-3wz|$ Using triangle inequalities, $|(w+z)^2-3wz| \ge ||w+z|^2-3|wz|| = |1-3|wz||$ $|z+w|^2 = |z^2+w^2+2wz|$ again applying the triangle inequality $|z^2+w^2|+2|wz|\ge1\ge||w^2+z^2|-2|wz||$ Solving for the range of $|wz|$ I get $15/2\ge |wz|\ge13/2$ , but then I get an incorrect range for $|z^3+w^3|$ .
We can write the given conditions as $z+w=\mathrm e^{\mathrm i\theta}$ and $z^2+w^2=14\mathrm e^{\mathrm i\phi},$ for some angles $\theta$ and $\phi$ . Now $$\begin{align}z^3 +w^3&=\tfrac12(z+w)[3(z^2+w^2)-(z+w)^2]\\&=21\mathrm e^{\mathrm i(\theta+\phi)}-\tfrac12\mathrm e^{3\mathrm i\theta}\\&=21\cos(\theta+\phi)-\tfrac12\cos3\theta+\mathrm i[21\sin(\theta +\phi)-\tfrac12\sin3\theta].\end{align}$$ Therefore $$\begin{align}|z^3+w^3|^2&=[21\cos(\theta+\phi)-\tfrac12\cos3\theta]^2+[21\sin(\theta +\phi)-\tfrac12\sin3\theta]^2\\&=441.25-21\cos(\phi-2\theta).\end{align}$$ This is minimized when $\cos(\phi-2\theta)=1$ , namely when $$|z^3+w^3|=\surd420.25=20.5.$$
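A numeric check of this parametrization (my sketch): recover $z,w$ as the roots of $t^2-st+p$ with $s=e^{\mathrm i\theta}$ and $p=\frac{s^2-14e^{\mathrm i\phi}}{2}$, then compare $|z^3+w^3|$ with $\sqrt{441.25-21\cos(\phi-2\theta)}$ over a grid:

```python
import cmath
from math import cos, pi, sqrt

worst, least = 0.0, float("inf")
for i in range(40):
    for j in range(40):
        theta = -pi + 2 * pi * i / 40
        phi = -pi + 2 * pi * j / 40
        s = cmath.exp(1j * theta)                     # z + w
        p = (s * s - 14 * cmath.exp(1j * phi)) / 2    # zw, since (z+w)^2 - 2zw = z^2+w^2
        disc = cmath.sqrt(s * s - 4 * p)
        z, w = (s + disc) / 2, (s - disc) / 2         # roots of t^2 - s t + p
        lhs = abs(z**3 + w**3)
        rhs = sqrt(441.25 - 21 * cos(phi - 2 * theta))
        worst = max(worst, abs(lhs - rhs))
        least = min(least, lhs)
```

The grid minimum of $|z^3+w^3|$ matches $\sqrt{420.25}=20.5$.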
|complex-numbers|
1
Evaluate $\int_0^3 x{\sqrt {3-x}}\;dx $ using Riemann sums
So, I was asked to do this integral using the limit method (i.e. the Riemann sum): $$\int_0^3 x{\sqrt {3-x}}\;dx $$ And I do it like this: $$\int_0^3 x{\sqrt {3-x}}\;dx $$ Firstly, I determine $\Delta x$ and $c_i$ : $$\Delta x = \frac{b-a}{n}$$ $$\Delta x = \frac{3-0}{n}$$ $$\Delta x = \frac{3}{n}$$ Using the right endpoint: $$c_i=a+i\,\Delta x$$ $$c_i=0+\frac{3}{n}i$$ $$c_i=\frac{3i}{n}$$ Then, evaluating the integral using the limit: $$\lim_{n\to\infty}\;\sum_{i=1}^n\;f(c_i)\Delta x $$ $$\lim_{n\to\infty}\;\sum_{i=1}^n\;f\left(\frac{3i}{n}\right)\left(\frac{3}{n}\right)$$ $$\lim_{n\to\infty}\;\sum_{i=1}^n\;\left(\frac{3i}{n}{\sqrt {3-\frac{3i}{n}}}\right)\;\left(\frac{3}{n}\right)$$ $$\lim_{n\to\infty}\;\left(\frac{9 \sqrt {3}}{n^\frac {5}{2}}\right)\;\sum_{i=1}^n\;i{\sqrt {n-i}}$$ And now I'm stuck here. Is there a way to do a summation that has a square root in it? Note: if you find an error in my calculation, please let me know.
Here are three more or less equivalent methods to get rid of the square root and solve your problem. They all rely on the fact that for $p=2$ or $4$ , $$\lim_{n\to\infty}\frac{\sum_{k=1}^nk^p}{n^{p+1}}=\frac1{p+1}.$$ (Actually, this holds for every integer $p\ge0$ , but can also be proved for each $p$ by direct computation of the numerator: $\sum_{k=1}^nk^2=\frac{n(n+1)(2n+1)}6$ and $\sum_{k=1}^nk^4=\frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}$ .) First method: before taking Riemann sums as you were asked to, first do a substitution $y=\sqrt{3-x}$ in your integral, i.e. apply the formula $$\int_{g(a)}^{g(b)}f(x)\,dx=\int_a^bf(g(y))g'(y)\,dy$$ with $$f(x)=x\sqrt{3-x},\quad g(y)=3-y^2,\quad a=\sqrt3,\quad b=0$$ or less formally: in $\int_0^3x{\sqrt {3-x}}\,dx$ , replace $\sqrt{3-x}$ by $y$ (which goes from $\sqrt{3-0}$ to $\sqrt{3-3}$ when $x$ goes from $0$ to $3$ ) hence $x$ by $3-y^2=:g(y)$ and $dx$ by $g'(y)dy=-2ydy$ . This results in $$\begin{align}\int_0^3x{\sqrt {3-x}}\,dx&=\int_{\sqrt3}^{0}(3-y^2)\,y\,(-2y)\,dy=2\int_0^{\sqrt3}\left(3y^2-y^4\right)dy,\end{align}$$ to which the limit above (with $p=2$ and $p=4$ ) applies directly.
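As a numeric sanity check (mine, not part of the answer): the exact value is $\int_0^3 x\sqrt{3-x}\,dx=\frac{12\sqrt3}{5}\approx4.15692$, and the right-endpoint sums from the question do converge to it:

```python
from math import sqrt

def right_riemann(n):
    # sum over i = 1..n of f(3i/n) * (3/n) with f(x) = x*sqrt(3-x)
    return (3 / n) * sum((3 * i / n) * sqrt(3 - 3 * i / n) for i in range(1, n + 1))

exact = 12 * sqrt(3) / 5
errors = [abs(right_riemann(n) - exact) for n in (100, 1000, 10000, 100000)]
```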
|definite-integrals|riemann-sum|
0
How to turn a non-equivalence relation into an equivalence relation?
I am an engineer, so maybe this question is naive. I am studying equivalence relations and equivalence classes. An equivalence relation is a binary relation that is reflexive, symmetric, and transitive: any element is related to itself (reflexive); if $a = b$, then $b = a$ (symmetric); if $a = b$ and $b = c$, then $a=c$ (transitive). A relation that does not satisfy these conditions is not an equivalence relation. For example, orthogonality is not an equivalence relation on the set of lines in $\mathbb R^2$, because it is neither reflexive nor transitive. I want to know: is there any maneuver or algorithm in math that turns a non-equivalence relation into an equivalence relation? For instance, by restricting the underlying set or adding some conditions.
You can always start with a relation and add to it all the pairs you need to make it an equivalence relation. For reflexivity, add the pairs $(x,x)$ if they are not there. For symmetry, add $(y,x)$ if it's not there and $(x,y)$ is. For transitivity, form the transitive closure . Whether these additions to the relation $R$ make any sense in the context that led to $R$ in the first place is application specific. For example, if you apply this construction to the orthogonality relation for lines in the plane you end up with the relation that says two lines are equivalent just when they are orthogonal or parallel.
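The closure construction described above can be sketched in code (Python assumed; the encoding of three mutually related lines as the labels 1, 2, 3 is hypothetical):

```python
def equivalence_closure(pairs, universe):
    """Smallest equivalence relation on `universe` containing `pairs`."""
    r = set(pairs)
    r |= {(x, x) for x in universe}            # reflexive closure
    r |= {(y, x) for (x, y) in r}              # symmetric closure
    changed = True
    while changed:                             # naive transitive closure
        changed = False
        for (a, b) in list(r):
            for (c, d) in list(r):
                if b == c and (a, d) not in r:
                    r.add((a, d))
                    changed = True
    return r

# hypothetical encoding: lines 1 ⟂ 2 and 2 ⟂ 3; the closure relates all three,
# mirroring the "orthogonal or parallel" relation mentioned above
rel = equivalence_closure({(1, 2), (2, 3)}, {1, 2, 3})
assert len(rel) == 9
```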
|elementary-set-theory|equivalence-relations|
0
how to prove $\exp(t) \ge \sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}$
How to prove the inequality, $\quad \exp(t) \ge \sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}$ , whenever $t \in \mathbb R , n \in \mathbb N^+$ . When $t\ge 0$ , it can be proved easily by the Taylor expansion $\exp(t) = \sum\limits_{k=0}^{\infty} \frac{t^k}{k!}$ , but when $t<0$ , I can't prove it. Need help. Thanks.
Let $$ f_n(t)= \exp(t)-\sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}. $$ Now one can use Mathematical Induction to prove $f_n(t)\ge 0$ for $t\le0$ . If $n=1$ , it is easy to show $f_1(t)\ge0$ for $t\le0$ . Now suppose $f_{n}(t)\ge0$ for $t\le0$ . Then $$ f_{n+1}''(t)= \exp(t)-\sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}=f_n(t)\ge0, t\le0 $$ which implies that $f'_{n+1}(t)$ is increasing. So for $t\le0$ , $$ f_{n+1}'(t)\le f_{n+1}'(0)=0$$ which implies that $f_{n+1}(t)$ is decreasing. So for $t\le0$ , $$ f_{n+1}(t)\ge f_{n+1}(0)=0. $$ Thus by MI, for all $n\in\mathbb{N}$ , $$ f_{n}(t)\ge 0$$ for $t\le0$ .
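A numeric sanity check of the claim $f_n(t)\ge 0$ at a handful of points (Python assumed):

```python
import math

def partial_sum(t, m):
    # sum_{k=0}^{m} t^k / k!
    return sum(t ** k / math.factorial(k) for k in range(m + 1))

# f_n(t) = e^t - sum_{k=0}^{2n-1} t^k/k!  is claimed to be >= 0 for every real t
for n in (1, 2, 3, 5):
    for t in (-10.0, -3.0, -0.5, 0.0, 0.5, 3.0, 10.0):
        assert math.exp(t) - partial_sum(t, 2 * n - 1) >= -1e-9
```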
|calculus|sequences-and-series|derivatives|taylor-expansion|
1
Outer product of basis vectors
Can someone please help clarify how to calculate the outer product of two basis vectors, assuming $\mathbb{R}^3$ ? From the definition, I understand that $\textbf{a} \otimes \textbf{b} = a_ib_j = c_{ij}$ . Another way to write that is $\textbf{a} \otimes \textbf{b} = a_ib_j\,\textbf{e}_i \otimes \textbf{e}_j$ . Where: $e_1 = \left[\matrix{1\cr0\cr0}\right] , e_2 = \left[\matrix{0\cr1\cr0}\right], e_3 = \left[\matrix{0\cr0\cr1}\right].$ When I evaluate the outer product of $\textbf{e}_i \otimes \textbf{e}_j$ I get the following matrix: $$ \left[\matrix{e_1e_1 & e_1e_2 & e_1e_3\cr e_2e_1 & e_2e_2 & e_2e_3 \cr e_3e_1 & e_3e_2 & e_3e_3}\right]$$ My intuition tells me this should equal the identity matrix, but when I actually calculate the values, I get stuck doing the following: $e_1e_1 = \left[\matrix{1\cr0\cr0}\right]\left[\matrix{1\cr0\cr0}\right]$ There isn't a dot product between the two so I am not sure how to evaluate. I could also be writing down the matrix incorrectly.
I did some digging and found what I was missing. What you get is a 2nd order identity tensor if you have mutually orthogonal basis vectors. $e_1e_1$ is just the outer product again, but they are vectors, not components of vectors (yet). Here is a breakdown of what I found which may help people reading this in the future: Notation: Outer product, Dyadic, Tensor product All of these terms are equivalent. Dyadic product of (2) vectors $\textbf{a}$ and $\textbf{b}$ denoted by $\textbf{ab}$ Outer product of (2) column vectors $\textbf{a} \otimes \textbf{b}$ or $\textbf{ab}^T$ Tensor product of (2) vectors $\textbf{a}$ and $\textbf{b}$ : $\textbf{a} \otimes \textbf{b}$ I will do a short example which will help show what a 2nd order identity tensor is. $\textbf{a} = a_1\textbf{i} + a_2\textbf{j}+ a_3\textbf{k}$ $\textbf{b} = b_1\textbf{i} + b_2\textbf{j}+ b_3\textbf{k}$ $\textbf{i},\textbf{j},\textbf{k}$ can also be denoted by $\textbf{e}{_{(1)}},\textbf{e}{_{(2)}},\textbf{e}{_{(3)}}$ $\textb
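The resolution can be seen numerically: each $e_i\otimes e_j$ is itself a $3\times3$ matrix, and the identity appears as the sum $\sum_i e_i\otimes e_i$, not as a matrix of pairwise products (a NumPy sketch):

```python
import numpy as np

e = np.eye(3)                        # e[0], e[1], e[2] are the basis vectors

# outer product e_i ⊗ e_j = column * row^T: a 3x3 matrix, not a scalar
assert np.array_equal(np.outer(e[0], e[0]),
                      [[1, 0, 0], [0, 0, 0], [0, 0, 0]])

# the 2nd-order identity tensor is the SUM of the dyads e_i ⊗ e_i
identity = sum(np.outer(e[i], e[i]) for i in range(3))
assert np.array_equal(identity, np.eye(3))
```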
|linear-algebra|outer-product|
0
Trying to simplify the following boolean expression
Simplify: $y \times [x + (x' \times y)]$ My attempt: $y \times [(x + x') \times (x + y)] \quad \textit{First Distributive Axiom} $ $y \times [1 \times (x + y)] \quad \textit{First Inverse Axiom} $ And for the continued steps, I am unfortunately stuck again. Axioms available:
I don't see $yy=y$ directly in the axioms you have, but it follows from $yy = yy+0 = yy+yy' = y(y+y') = y\cdot 1 = y$. And so $$ y(x + x'y)=yx+yx'y=yx+yyx'=yx+yx'=y(x+x')=y(1)=y. $$
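Since the variables are boolean, the whole simplification can be verified by exhausting the four assignments (a Python sketch):

```python
# verify y·(x + x'·y) = y over all boolean assignments; x' is encoded as 1 - x
for x in (0, 1):
    for y in (0, 1):
        lhs = y & (x | ((1 - x) & y))
        assert lhs == y
```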
|logic|boolean-algebra|boolean|
1
A question in Dedekind cut
This is a proposition in my textbook: Let $x=A|B,\ x'=A'|B'$ be Dedekind cuts in $\mathbb Q$. Define $$x+x'=(A+A')|\text{rest of } \mathbb Q$$ Then the union of $A+A'$ and $B+B'$ may not be $\mathbb Q$. (Here we define $A+A'=\{y \mid \exists x \in A, \exists x' \in A', y=x+x'\}$.) My question is: under what circumstances are they not equal?
Adding two irrationals that add to a rational might have the union exclude the sum rational. For example, let $A|B$ be the cut for $-\sqrt 2$: $A= \{q\in \mathbb Q\mid q<-r \text{ for some rational } r>0 \text{ with } r^2>2\}$ (essentially that $q<-\sqrt 2$) and $B= \{q\in \mathbb Q\mid q> a, \forall a\in A\}$. While $A' = \{q\in \mathbb Q\mid$ either $q<0$, or if $q \ge 0$ then $q^2<2\}$ (essentially that $q<\sqrt 2$) and $B'= \{q\in \mathbb Q\mid q> a, \forall a \in A'\}$. Then $A + A' = \{q\in \mathbb Q\mid q<0\}$ while $B+B'=\{q\in \mathbb Q\mid q > 0\}$ and $(A+A')\cup (B+B') = \mathbb Q \setminus \{0\}$. You can verify $0 \not \in A + A'$ as $q\in A+A'$ means $q = q_1 + q_2$ with $q_1<-\sqrt2$ and $q_2<\sqrt2$, so $q=q_1 + q_2<0$. And likewise $0 \not \in B+B'$ for the same reason. ====== The key to understanding Dedekind cuts (although this is cheating and putting the cart before the horse) is if $A|B\sim x$ for some $x \in \mathbb R$ (this is assuming we have already finally figured out what $\mathbb R$ is; and it was doing so that we invented the Dedekind cut in the first place--- so we are cheating when I say this; let me just be upfront about t
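A small sketch (Python with exact rational arithmetic; the grid of sample rationals is arbitrary) illustrating that every element of $A+A'$ is negative, so $0$ is excluded:

```python
from fractions import Fraction as F

def in_A(q):    # lower set of the cut for -sqrt(2): q < -sqrt(2)
    return q < 0 and q * q > 2

def in_Ap(q):   # lower set of the cut for sqrt(2): q < sqrt(2)
    return q < 0 or q * q < 2

# an arbitrary grid of sample rationals in [-3, 3] with step 1/100
qs = [F(k, 100) for k in range(-300, 301)]
sums = [a + b for a in qs if in_A(a) for b in qs if in_Ap(b)]

assert all(s < 0 for s in sums)   # 0 is never attained: A + A' ⊆ {q < 0}
print(max(sums))                  # sums approach 0 from below
```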
|real-analysis|
1
If $u_{n+1}\le u_n+u_n^2$ and $\sum u_n$ converges, prove that $\lim\limits_{n\to +\infty}(n\cdot u_n)=0$
Given the positive sequence $\{u_n\},n\in \mathbb{N}$ that meets the conditions: $\boxed{1}$. $u_{n+1}\le u_n+u_n^2$ $\boxed{2}$. There exists a constant $\text{M} >0$ such that $\displaystyle\sum\limits_{k=1}^n u_k\le \text{M},\, \forall n\in \mathbb{N}$ Prove that $$\lim\limits_{n\to +\infty}(n\cdot u_n)=0$$ I think that we can use the Stolz–Cesàro theorem (0/0 case), but I haven't found how.
Define $(p_n)$ and $(a_n)$ by $$ p_n = \prod_{k=1}^{n-1} (1+u_k) \qquad\text{and}\qquad a_n = \frac{u_n}{p_n}. $$ We make the following observations: $\boxed{1}$ implies that $a_{n+1} \leq a_n$ . That is, $(a_n)$ is positive and non-increasing. Since $a_n \leq u_n$ , $\boxed{2}$ implies that $\sum_{n=1}^{\infty} a_n < \infty$. Then a standard argument tells us that $$ \bbox[border:1px dotted navy; color:navy; padding:5px;]{ \lim_{n\to\infty} n a_n = 0. } \tag{1} $$ Indeed, monotonicity of $(a_n)$ yields $\frac{1}{2} n a_n \leq \sum_{k=\lfloor n/2 \rfloor}^{n} a_k $ , which vanishes as $n \to \infty$ by $\boxed{2}$ . By the inequality $1+x \leq e^x$ , we get $p_n \leq e^{\sum_{k=1}^{n-1} u_k}$ . So, $\boxed{2}$ implies $(p_n)$ is bounded above. Moreover, $(p_n)$ is non-decreasing. Hence, $$ \bbox[border:1px dotted navy; color:navy; padding:5px;]{ (p_n) \text{ converges.} } \tag{2} $$ Combining $\text{(1)}$ and $\text{(2)}$ altogether, we conclude: $$ nu_n = na_n \cdot p_n \to 0 $$
|calculus|sequences-and-series|limits|
0
Explanation of taylor series
I understand that for a Taylor series of a function $f(x)$, centered around the point a, the general expression can be written as: $$ \begin{align} &f(x) \\ &= f(a) + f'(a) (x-a) + \frac{f''(a)}{2!} (x - a)^2 + \frac{f^{(3)}(a)}{3!} (x - a)^3 + \dots + \frac{f^{(n)}(a)}{n!} (x - a)^n + \cdots \end{align}$$ I can wrap my head around what the function and meaning of every part of this equation is except for the factorial. Why does each term have to be divided by that order factorial?
I had the same question and I noticed something: Look what happens here for example: $f(x) = x^4 \\ f'(x) = 4 \cdot x^3 \\ f''(x) = 4\cdot3\cdot x^2 \\ f'''(x) = 4\cdot3\cdot2\cdot x \\ f''''(x) = 4\cdot3\cdot2\cdot1$ If we now look at the corresponding Taylor polynomial at the point a: $$Tf(x) =\\ f(a) + \frac{4\cdot a^3}{1}\cdot(x-a) + \frac{4\cdot3\cdot a^2}{1\cdot2}\cdot(x-a)^2 +\frac{4\cdot3\cdot2\cdot a}{1\cdot2\cdot3}\cdot(x-a)^3 +\frac{4\cdot3\cdot2\cdot1}{1\cdot2\cdot3\cdot4}\cdot(x-a)^4$$ I think dividing by $n!$ is to compensate for these factors popping up when taking the derivative. In the last term of the polynomial the factor becomes 1 when dividing by $n!$ . That is all I got so far haha. Maybe someone can explain further why this weighting is necessary.
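The compensation can be checked concretely for $f(x)=x^4$: dividing the $k$-th derivative at $a$ by $k!$ makes the Taylor polynomial reproduce $f$ exactly (a Python sketch; the derivative values are hard-coded from the computation above):

```python
import math

# f(x) = x^4: the derivatives at a are 4a^3, 12a^2, 24a, 24; dividing the k-th
# one by k! turns the series into the exact binomial expansion of ((x-a)+a)^4
def taylor_x4(x, a):
    derivs = [a ** 4, 4 * a ** 3, 12 * a ** 2, 24 * a, 24]
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs))

for x in (-2.0, 0.5, 3.0):
    assert abs(taylor_x4(x, 1.0) - x ** 4) < 1e-9
```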
|calculus|real-analysis|taylor-expansion|
0
derivative of tough matrix
I am not sure about the following derivative: $ \frac{d\textbf{b}^T \textbf{E} \Omega \textbf{r} \textbf{r}^T \Omega^T \textbf{e}^T \textbf{b}}{d \textbf{r}} = ?$ where $\textbf{b} \in \mathbb{R^N}$ , $\textbf{E} \in \mathbb{R^{N \times L}}$ , $\Omega \in \mathbb{R^{L \times K}}$ , $\textbf{r} \in \mathbb{R^{K}}$ . Furthermore, $\textbf{r}$ is independent of all the rest. My guess is that the derivative is $\textbf{e} \Omega^T \textbf{E}^T \textbf{b} \textbf{b}^T \textbf{E} \Omega$ Is this correct?
The notation is really obscuring the main problem. For clarity, let's rewrite the vectors $b$ and $r$ as $\mathbf{b}$ and $\mathbf{r}$ ,resp., and the matrices $\Omega$ and $e$ as $\boldsymbol{\Omega}$ and $\boldsymbol{E}$ , resp. Assuming that $\mathbf{b}$ , $\Omega$ , and $\boldsymbol{E}$ are independent of $\mathbf{r}$ , one can condense \begin{equation} \mathbf{y} = \boldsymbol{\Omega}^{T}\boldsymbol{E}^{T}\mathbf{b} \end{equation} and thus rewrite the whole expression as \begin{equation} \frac{\mathrm{d}}{\mathrm{d}\boldsymbol{r}}\,\mathbf{b}^{T}\boldsymbol{E}\boldsymbol{\Omega}\boldsymbol{r}\,\boldsymbol{r}^{T}\boldsymbol{\Omega}^{T}\boldsymbol{E}^{T}\mathbf{b} = \frac{\mathrm{d}}{\mathrm{d}\boldsymbol{r}}\,\mathbf{y}^{T}\boldsymbol{r}\boldsymbol{r}^{T}\mathbf{y} \end{equation} Defining $f(\boldsymbol{r}) = \mathbf{y}^{T}\boldsymbol{r}$ this reduces the problem to \begin{equation} \frac{\mathrm{d}}{\mathrm{d}\boldsymbol{r}}\,f(\boldsymbol{r})f(\boldsymbol{r}) = \frac{\mathrm{d}}{
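The truncated derivation reduces the problem to differentiating $f(\boldsymbol r)^2$ with $f(\boldsymbol r)=\mathbf y^T\boldsymbol r$, whose gradient is $2(\mathbf y^T\boldsymbol r)\mathbf y$ (this final step is an assumption completing the truncated text). A finite-difference check with random data (NumPy assumed; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 4, 5, 3
b = rng.normal(size=N)
E = rng.normal(size=(N, L))
Om = rng.normal(size=(L, K))
r = rng.normal(size=K)

def obj(rv):
    s = b @ E @ Om @ rv          # the scalar y^T r with y = Om^T E^T b
    return s * s

y = Om.T @ E.T @ b
grad_analytic = 2 * (y @ r) * y  # gradient of (y^T r)^2

eps = 1e-6
eye = np.eye(K)
grad_fd = np.array([(obj(r + eps * eye[i]) - obj(r - eps * eye[i])) / (2 * eps)
                    for i in range(K)])
assert np.allclose(grad_analytic, grad_fd, atol=1e-4)
```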
|matrices|derivatives|
1
Confusion regarding implications
Out of interest, I wanted to develop some basic understanding of formal logic, but I am having trouble understanding the truth table of implications. Especially examples such as this: A: I want a pizza B: I go shopping A --> B: If I want a pizza, then I will go shopping Why is it, for example, that assuming A is false and B is false that A-->B is true? It isn't obvious to me at all that just because you don't want a pizza and you don't go shopping automatically means that if you want a pizza then you will go shopping. I mean you don't know if you will actually go shopping once you know that you want pizza just based on A and B being false, right? An intuitive explanation of this concept would be greatly appreciated
First of all, if the presupposition is true, the conclusion must be true. This should be obvious. If the presupposition is false, the implication will always be true. The reason for this can be explained by the principle of explosion. If you assume a false thing, you will be able to prove anything. Consider your presupposition being $1=2$ . Well, then you can subtract $1$ on both sides, so $1=2 \Rightarrow 0=1 $ . This is an example of a falsehood implying falsehood. But notice that this means that subtracting $0$ from $2$ is the same as subtracting $1$ from $2$ , so $1=2 \Rightarrow 1=1$ . This is an example of a falsehood impliying a truth. Being able to prove anything from a falsehood holds throughout mathematics.
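The full truth table is tiny and easy to tabulate (a Python sketch encoding $A\to B$ as $\lnot A \lor B$):

```python
# material implication: A -> B  is equivalent to  (not A) or B
rows = [(A, B, (not A) or B) for A in (True, False) for B in (True, False)]
for A, B, impl in rows:
    print(A, B, impl)

# the only falsifying row is A = True, B = False
assert [impl for _, _, impl in rows] == [True, False, True, True]
```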
|logic|
0
Is the provided solution complete? Or is there any aspect left to prove?
The question given is : Let $\displaystyle x\ =\ ( x_{1} ,\ x_{2} ,\ ...,\ x_{n}) ,\ y\ =\ ( y_{1} ,\ y_{2} ,\ ...,\ y_{n}) ,$ where $\displaystyle x_{i} ,\ y_{i} \ \in \ \mathbb{R}$ . We write $\displaystyle x\ >\ y$ if for some $\displaystyle k$ , $\displaystyle 1\ \leqslant \ k\ \leqslant \ n\ -\ 1$ , $\displaystyle x_{1} \ =\ y_{1} ,\ ...,\ x_{k} \ =\ y_{k}$ but $\displaystyle x_{k\ +\ 1} \ >\ y_{k\ +\ 1}$ . If we have $\displaystyle u\ =\ ( u_{1} ,\ u_{2} ,\ ...,\ u_{n}) ,\ v\ =\ ( v_{1} ,\ v_{2} ,\ ...,\ v_{n}) ,\ w\ =\ ( w_{1} ,\ w_{2} ,\ ...,\ w_{n}) ,\ z\ =\ ( z_{1} ,\ z_{2} ,\ ...,\ z_{n})$ such that $\displaystyle u\ >\ v,\ w\ >\ z$ . Prove $\displaystyle u\ +\ w\ >\ v\ +\ z$ . The following is my proof for it : Let the $\displaystyle k$ for $\displaystyle u-v$ relation be $\displaystyle a$ and for $\displaystyle w-z\ $ relation be $\displaystyle b$ Case 1 : $\displaystyle a\ =\ b$ Then we have $\displaystyle u_{a\ +\ 1} \ >\ v_{a\ +\ 1}$ and $\displaystyle w_{a\ +\ 1} \ >\
Seems correct to me, but your phrasing in Case 2 is a little unclear. You could probably just use sum of inequalities to prove this. $$v<u$$ $$z<w$$ $$\Rightarrow v + z<u+w$$
|elementary-set-theory|solution-verification|
1
Proving the existence of a subset
I am studying Discrete Maths on my own, and I need help proving that there exists a set $S \subseteq P(\mathbb{R})$ - the powerset of $\mathbb{R}$ with the following 3 conditions: $S \sim \mathbb{R}$ if $X, Y \in S$ and $X \neq Y$ , then $X \cap Y = \emptyset$ if $X \in S$ , then $X \sim \mathbb{R}$ Recall that " $A\sim$ B" means A has the same amount of elements as B My attempt: I understand that $P(\mathbb{R})$ is a set of all possible combinations of points arrangement on a line. Hence, if I take $\mathbb{R} = S$ the first two conditions are met. The third, however, is not, as any single point from $\mathbb{R} \nsim \mathbb{R}$ . How do I proceed from here?
First, notice that $|\mathbb{R}^2|=|\mathbb{R}|$ (for example, see this ). Then, there exists a bijection $f: \mathbb{R}^2\to \mathbb{R}$ . Define $S=\{f(\{x\}\times \mathbb{R})\space |\space x\in\mathbb{R}\}$ . As $f$ is a bijection, condition $2$ follows immediately. Moreover, take $f(\{c\}\times \mathbb{R})\in S$ . Then, it is clear $g:\mathbb{R}\to f(\{c\}\times \mathbb{R})$ given by $g(x)=f(c,x)$ is a bijection, so $|f(\{c\}\times \mathbb{R})|=|\mathbb{R}|$ , that is, condition $3$ also holds. Lastly, defining $h:\mathbb{R}\to S$ as $h(x)=f(\{x\}\times \mathbb{R})$ we find $h$ is a bijection and $|S|=|\mathbb{R}|$ , as desired.
|elementary-set-theory|
1
The value of $\lim_{ n\to \infty} (n^2)/((n^2+1)(n^2+4)(n^2+9)\ldots(n^2+n^2))^{1/n}$ is
First, take $n^2$ common from the denominator and out of the $n$ -th root. $n^2$ in numerator gets cancelled with the one taken common in the denominator Next, we assume the value of limit to be $L$ and take $\log$ $(\ln)$ on both sides. We get to see a standard form of integration of the limit of sum. Now, on solving it we get the final answer in terms of $e$ . My doubt is when we take $n^2$ common as mentioned in the first sentence, we get a term of $1/n^2$ in every single bracket in the denominator. As $n\to\infty$ , $1/n^2 \to 0.$ If we solve using this method, we get the value of limit as $1.$ Which method is correct and why?
$$ L:={n^2\over\Pi_{r=1}^n(n^2+r^2)^{1\over n}}={1\over\Pi_{r=1}^n(1+{r^2\over n^2})^{1\over n}}\\ \ \\ \ \\ \log L=-{1\over n}\sum_{r=1}^n \log\left(1+{r^2\over n^2}\right) $$ This is a Riemann summation $$ \log L=-\int_0^1\log(1+x^2)dx=2-\log 2-{\pi\over 2} $$ So $$L={e^{2-{\pi\over2}}\over 2}$$ Note: The reason you can't take limit value as 1 is because you have $1\over n$ as exponent and $\lim(1+{r^2\over n^2})^{1\over n}\neq1$ Think about $$\lim_{x\to\infty}(1+{1\over x})^x=e$$ Note: This was a reply to the comment but it was too big to fit. I asked you to think about that example to see how exponents affect limits $$\Pi(1+{r^2\over n^2})^{1\over n}=(1+{n(n+1)(2n+1)\over 6n^2}\cdots)^{1\over n}$$ It is immediate that this is of the form $\infty^0$ which is an indeterminate form similar to $0\over 0$ so we can't directly compute it
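The Riemann-sum value can be confirmed numerically (Python assumed; $n$ is an arbitrary large cutoff):

```python
import math

n = 200_000
# right-endpoint Riemann sum for  -(1/n) * sum_{r=1}^{n} log(1 + (r/n)^2)
log_L = -sum(math.log1p((r / n) ** 2) for r in range(1, n + 1)) / n
closed_form = 2 - math.log(2) - math.pi / 2   # = -integral_0^1 log(1+x^2) dx
assert abs(log_L - closed_form) < 1e-4
print(math.exp(log_L))                        # ≈ e^(2 - pi/2) / 2 ≈ 0.768
```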
|calculus|
0
Probability that as many heads came out as tails, with the $12$th toss being the first toss after which the numbers of heads and tails became equal.
We toss a fair coin $12$ times. Determine the probability that as many heads fall out as tails. Determine the probability that as many heads came out as tails, with the $12$ th toss being the first toss after which the numbers of heads and tails became equal. The first point is easy since we can see that it's a Bernoulli process and count the probability of $6$ successes in $12$ trials. That will be: $$P(X = 6) = {{12}\choose{6}} \left( \frac{1}{2} \right)^6\left( \frac{1}{2} \right)^6 = {{12}\choose{6}} \left( \frac{1}{2} \right)^{12} = \frac{12!}{6!\,(12-6)!} \cdot \frac{1} {2^{12}} = \frac{924}{4096}$$ The second part is more tricky and I wanted to ask if I did that one correct. If we consider $12$ tosses, we know that the results of first one and the last one must be different - only that way we can 'start' with advantage of one side of coin and then reduce it to 0 in the last one. Therefore we can have 2 possibilities at this point: we start with heads and finish with tails, we start wi
Treat each head/tail as a $(1,1)/(1,-1)$ step respectively. Then an admissible sequence for the second problem is a path from $(0,0)$ to $(12,0)$ that never touches the horizon except at its endpoints. We can trim the first and last steps to get a length- $10$ path that by itself does not cross the horizon in the middle, but may touch it. The number of such paths is twice the fifth Catalan number – twice because the first step may be head or tail – hence $2×42=84$ . The probability is therefore $\frac{84}{4096}=\frac{21}{1024}$ .
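With only $2^{12}=4096$ sequences, the count of $84$ can be brute-forced (a Python sketch):

```python
from itertools import product

# +1 = head, -1 = tail; count sequences whose running sum first hits 0 at toss 12
count = 0
for seq in product((1, -1), repeat=12):
    partial = 0
    first_return = None
    for i, step in enumerate(seq, start=1):
        partial += step
        if partial == 0:
            first_return = i
            break
    if first_return == 12:
        count += 1

assert count == 84                  # 2 * Catalan(5) = 2 * 42
print(count / 2 ** 12)              # 84/4096 = 21/1024
```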
|probability|
1
Find the Automorphism group of direct product of Z(mod m) & Z(mod n).
Find the Automorphism group of direct product of $\mathbb{Z}(\operatorname{mod} m)$ and $\mathbb{Z}(\operatorname{mod} n)$ . We know Automorphism of direct product of $\mathbb{Z}(\operatorname{mod} p)$ and $\mathbb{Z}(\operatorname{mod} p)$ is isomorphic to $\operatorname{GL}_2(\mathbb{Z}(\operatorname{mod} p))$ . Then how to generalize it?
Well, one generalization is to the case where $(m,n)=1.$ And then by the Chinese remainder theorem, we have a cyclic group $\Bbb Z_{mn}.$ The automorphism group of a cyclic group is well-known to be the "group of units of the ring $\Bbb Z_n$ ". (It's also called the "multiplicative group modulo $n$ ".) So $$\rm Aut(\Bbb Z_{mn})\cong \Bbb Z_{mn}^×.$$ Now note that this group of units will often not be cyclic. But since the group of units functor respects products, we can still compute this. For example, say $m=5,n=7.$ Then $$\rm Aut(\Bbb Z_5×\Bbb Z_7)\cong \rm Aut(\Bbb Z_{35})\cong \Bbb Z_{35}^×\cong \Bbb Z_7^××\Bbb Z_5^×\cong \Bbb Z_6×\Bbb Z_4.$$ (In fact, the group of units $\Bbb Z_n^×$ is cyclic precisely when $n=1,2,4,p^\alpha $ or $2p^\alpha.$ ) Next, if $(m,n)\neq1$ , then the situation can be a bit more complicated. The group we're trying to find the automorphism group of is not cyclic anymore. You mentioned the case $m=n=p.$ More generally, while we always have $$\rm Aut(\Bbb Z_
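The worked example $m=5$, $n=7$ can be checked computationally (a Python sketch): $\Bbb Z_{35}^\times$ has $24$ elements but exponent $\operatorname{lcm}(6,4)=12$, confirming it is not cyclic.

```python
from math import gcd

n = 35
units = [a for a in range(1, n) if gcd(a, n) == 1]
assert len(units) == 24            # phi(35) = 4 * 6 = |Z_4 x Z_6|

def order(a):
    # multiplicative order of a modulo n
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

# Z_6 x Z_4 has exponent lcm(6, 4) = 12, so no element has order 24: not cyclic
assert max(order(a) for a in units) == 12
```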
|group-theory|
0
Precise statement of Gödel's Incompleteness Theorems
I have seen the following statements of Gödel's Incompleteness Theorems: Gödel's First Incompleteness Theorem (v1) If $T$ is a recursively axiomatized consistent theory extending PA, then $T$ is incomplete. Gödel's Second Incompleteness Theorem (v1) If $T$ is a recursively axiomatized consistent theory extending PA, then $T \nvdash \text{Con}(T)$ . However, these theorems are often applied to ZFC, and ZFC is not an extension of PA. For one, ZFC and PA do not have the same languages, since ZFC's language is $\mathcal{L}_{\text{ZFC}} = \{ \in \}$ , and PA's language is $\mathcal{L}_{\text{PA}} = \{ 0 , S \}$ . So, I thought it might make sense to revise these statements as follows: Gödel's First Incompleteness Theorem (v2) If $T$ is a recursively axiomatized consistent theory such that an expansion of $T$ extends an expansion of PA, then $T$ is incomplete. Gödel's Second Incompleteness Theorem (v2) If $T$ is a recursively axiomatized consistent theory such that an expansion of $T$ exten
Your last sentence is the key. Setting aside the issue of optimizing for strength (e.g. replacing $\mathsf{PA}$ by a weaker theory of arithmetic such as $\mathsf{Q}$ ), the following is a language independent presentation: (G1IT) Suppose $S$ is any consistent recursively axiomatizable theory which interprets $\mathsf{PA}$ . Then $S$ is incomplete. G2IT is a bit messier since we have to talk about how consistency is expressed, but we can still do that: (G2IT) Suppose $S$ is any consistent recursively axiomatizable theory which interprets $\mathsf{PA}$ via $\Phi$ (see below). Let $\psi$ be the usual consistency statement of $S$ as formulated in the language of arithmetic and let $\hat{\psi}$ be the translation of $\psi$ into the language of $S$ via $\Phi$ . Then $\hat{\psi}$ is not provable in $S$ . So what's an interpretation of one theory in another? Given theories $T,S$ in relational (purely for simplicity) languages $\Sigma,\Pi$ respectively, an interpretation of $T$ in $S$ is basica
|logic|peano-axioms|incompleteness|
1
Rules for converting lambda calculus expressions to SKI combinator calculus expression? Which rule(s) is/are incorrect?
learnxinyminutes.com defines $I$ , $K$ , and $S$ as follows: I x = x K x y = x S x y z = x z (y z) Then they give the following correspondences to aid in the conversion between lambda calculus and SKI combinator calculus: λx.x = I EDIT: λx.c = Kc provided that x does not occur free in c (see Mark's answer) λx.(y z) = S (λx.y) (λx.z) I tried to expand on these correspondences as follows: 3'. λx.(y z) = S (λx.y) (λx.z) = S (Ky) (Kz) (apply 2. to λx.y and to λx.z ) λx.(y z) = K(y z) (apply 2. with c = yz ) But that led to incorrect results when I used 3'. and 4. to convert the number $2$ from its lambda calculus representation $λf.λx.f(f x)$ to its SKI combinator calculus representation (see my previous question for the incorrect result). So that means that 3'. and/or 4. are incorrect. Which step or steps in 3.' and/or 4. are incorrect?
The method I used on Combo , which I recently put up on GitHub, is the following: $$ λx.x = I,\quad λx.a = Ka,\quad λx.xx = D,\quad λx.xb = Tb,\quad λx.ax = a,\\ λx.xv = Ub,\quad λx.ux = Wa,\quad λx.av = Bab,\quad λx.ub = Cab,\quad λx.uv = Sab, $$ where $a$ and $b$ are terms containing no free occurrences of $x$ in them, and where $u$ and $v$ are terms containing free occurrences of $x$ in them in which $a = λx.u$ and $b = λx.v$ (thus $u = ax$ and $v = bx$ under the β-rule). This is equivalent to taking the simplified abstraction rule $$λx.x = SKK,\quad λx.y = Ky,\quad λx.uv = Sab,$$ where, this time, $u$ and $v$ can be any terms, but $y$ is restricted to being just a variable other than $x$ ... and applying the optimization rules: $$ I = SKK,\quad K(ab) = S(Ka)(Kb),\quad SII = D,\quad SI(Kb) = Tb,\quad S(Ka)I = a,\\ SIb = Ub,\quad SaI = Wa,\quad S(Ka)b = Bab,\quad Sa(Kb) = Cab. $$ If you run Combo on $S\ (K\ a)\ (K\ b)$ it will say it's in normal form already. However, if you run Comb
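The rules can be exercised directly by modelling $S$, $K$, $I$ as curried functions. The translation $S(S(KS)K)I$ of Church numeral $2$ used below is a standard one, not taken from the answer's Combo tool (a Python sketch):

```python
# the combinators as curried Python functions
I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# Church numeral 2 = λf.λx.f(f x); one standard SKI translation is S(S(KS)K)I
two = S(S(K(S))(K))(I)
assert two(lambda m: m + 1)(0) == 2

# the faulty shortcut λx.(y z) = S(Ky)(Kz) is only sound when x occurs free in
# neither y nor z: S(Kf)(Ka) ignores its argument and always returns f(a)
bad = S(K(lambda m: m + 1))(K(0))
assert bad("anything") == 1
```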
|lambda-calculus|
0
Sign discrepancy between two different definitions of ricci curvature
In my differential geometry course, the $(1,3)$ -Riemann curvature tensor $R$ is defined by $$R(X,Y)Z:=\nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z$$ and the $(0,4)$ -Riemann curvature tensor Rm by $$Rm(X,Y,\color{red}Z,\color{blue}W):=g(R(X,Y)\color{blue}W,\color{red}Z)$$ In coordinates $Rm_{ij\ell k}=g_{\ell m}R_{ijk}^m$ . The symmetries are the following: $$Rm_{ijk\ell}=-Rm_{jik\ell}=-Rm_{ij\ell k}=Rm_{k\ell ij}$$ Then the Ricci tensor of a Riemannian manifold $(M,g)$ is the symmetric $(0,2)$ -tensor $\text{Rc}\in\Gamma(T^*M\otimes T^*M)$ ${\it defined}$ by: $$\text{Rc}(X,Y)=\sum_{i=1}^nRm(X,e_i,Y,e_i)$$ where $\{e_i\}$ is an orthonormal basis of $T_pM$ . Its coordinate representation is $Rc_{ik}=g^{j\ell}Rm_{ijk\ell}$ . Then the notes state (without proving it) that $Rc(X,Y)$ is the trace of the map $Z\longrightarrow R(Z,X)Y$ . This implies that (using the symmetries): $$\boxed{Rc_{ik}=g^{j\ell}Rm_{ijk\ell}=-g^{j\ell}Rm_{ij\ell k}=-R_{ijk}^j}$$ Wikipedia defines the tensor
The notation $R^c_{acb}$ is ambiguous: does it mean $g^{cd}R_{dacb}$ , or $g^{cd}R_{adcb}$ , or $g^{cd}R_{acdb}$ , or $g^{cd}R_{acbd}$ ? In other words, which index position corresponds to the raised index $c$ ? The various choices differ by a sign. For that reason, it's important to maintain both the horizontal and vertical positions of indices. The Wikipedia article on Ricci curvature does exactly that: the equation you quoted actually appears there as $$ \text{Ric}_{ab} = R^c{}_{bca} = R^c{}_{acb}, $$ showing that the raised index belongs in the first position. On the other hand, your computation, if you keep track of horizontal index positions, yields $$ Rc_{ik}=g^{j\ell}Rm_{ijk\ell}=-g^{j\ell}Rm_{ij\ell k}=-R_{ij}{}^j{}_{k} $$ Using the symmetries of the curvature tensor, this can be rewritten as $$ -R_{ij}{}^j{}_{k} = -R^j{}_{kij} = R^j{}_{kji}, $$ which matches the Wikipedia formula.
|differential-geometry|riemannian-geometry|curvature|
1
How to solve following binomial equation to get the assortivity?
Proving Assortativity r from Symmetric Binomial Distribution Consider the symmetric binomial form given by the equation: $$e_{jk} = N \left[ \binom{j+k}{j} p^j q^k + \binom{j+k}{k} p^k q^j \right] e^{-(j+k)/\alpha}$$ where $p+q=1$ , $\alpha > 0$ , and $N = \frac{1}{2}(1-e^{-1/\alpha})$ is a normalizing constant. Here, $e_{jk}$ represents the joint probability distribution of the remaining degrees of the two vertices at either end of a randomly chosen edge. Let's define the following terms: $e_{jk}$ : Joint probability distribution of the remaining degrees of the two vertices at either end of a randomly chosen edge. $e_j$ : The marginal probability distribution of the remaining degree of one vertex at the end of a randomly chosen edge, given by summing $e_{jk}$ over all possible values of $k$ . $p_k$ : Binomial probability distribution of the degrees of vertices in the graph, representing the distribution of degrees for a vertex at one end of an edge. $q_k$ : Distribution of the remaining degrees reached by following a
To prove the expression for $r$ , we need to calculate each component: Calculate $\sigma_q^2$ : \begin{align*} \sigma_q^2 &= \sum_k k^2 q_k - \left(\sum_k k q_k\right)^2 \\ &= \sum_k k^2 \left(\frac{(k+1)p_{k+1}}{\sum_j jp_j}\right) - \left(\sum_k k \frac{(k+1)p_{k+1}}{\sum_j jp_j}\right)^2 \\ &= \sum_k k^2 \left(\frac{(k+1)p_{k+1}}{\sum_j jp_j}\right) - \left(\frac{\sum_k k(k+1)p_{k+1}}{\sum_j jp_j}\right)^2 \\ &= \sum_k k^2 \left(\frac{(k+1)p_{k+1}}{\sum_j jp_j}\right) - \left(\frac{\sum_k k^2 p_{k+1}}{\sum_j jp_j}\right)^2 \\ &= \sum_k \frac{k^3 p_{k+1}}{\sum_j jp_j} - \left(\frac{\sum_k k^2 p_{k+1}}{\sum_j jp_j}\right)^2 \\ &= \frac{\sum_k k^3 p_{k+1}}{\sum_j jp_j} - \left(\frac{\sum_k k^2 p_{k+1}}{\sum_j jp_j}\right)^2 \\ &= \frac{\sum_k k^3 p_{k+1}}{\sum_j jp_j} - \left(\frac{\sum_k k^2 p_k}{\sum_j jp_j}\right)^2 \\ &= \frac{\sum_k k^3 p_{k+1}}{\sum_j jp_j} - \left(\frac{\sum_k k^2 p
|probability-distributions|graph-theory|binomial-distribution|random-graphs|network|
0
Trouble understanding Number of Trials to First Success
I'm having some trouble understanding how to calculate how many trials are needed before you expect to see a given event, I'm not sure what I've misunderstood yet. I've followed these explanations so far: https://www.cut-the-knot.org/Probability/LengthToFirstSuccess.shtml https://www.geeksforgeeks.org/expected-number-of-trials-before-success/ Lets say we have a trial that produces an event $V$ with probability $0.3$ , my understanding from the pages above is we expect to see event $V$ after $\frac{1}{0.3}$ trials, or $3.3$ trials. But if we perform two trials, and look at the probability of seeing the event, don't we get 51%? $$ 0.3^2+0.3*0.7+0.7*0.3 = 0.51 $$ or $$ 1 - (0.7^2) = 0.51 $$ So I would expect to expect $V$ to happen when the number of trials is as low as 2. Can anyone show me what I've misunderstood here please?
Just because you “expect” a certain thing to happen, that does not make it an “expected value.” It’s just a quirk of mathematical English. Expected value means the average value. Sure, more than half the time, it only takes 1 or 2 trials. Still, there is a $0.3\cdot (0.7)^2=14.7\%$ chance that it will take $3$ trials, a $0.3\cdot (0.7)^3\approx 10.3\%$ chance that it will take $4$ trials, and so on. The expected value considers all of these possibilities, and averages them together. It just so happens that the events of taking $3,4,5\ldots$ trials are significant enough to push the average up to $3.3$ trials.
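The two quantities the question conflates can be computed side by side (a Python sketch; the series for $E[R]$ is truncated at an arbitrary large index):

```python
p, q = 0.3, 0.7

# expected number of trials until the first success (geometric distribution)
expected = sum(n * q ** (n - 1) * p for n in range(1, 10_000))
assert abs(expected - 1 / p) < 1e-9      # 1/0.3 ≈ 3.33 trials on average

# probability of at least one success within the first two trials
at_least_one_in_two = 1 - q ** 2
assert abs(at_least_one_in_two - 0.51) < 1e-12
```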
|probability|expected-value|
0
Trouble understanding Number of Trials to First Success
I'm having some trouble understanding how to calculate how many trials are needed before you expect to see a given event, I'm not sure what I've misunderstood yet. I've followed these explanations so far: https://www.cut-the-knot.org/Probability/LengthToFirstSuccess.shtml https://www.geeksforgeeks.org/expected-number-of-trials-before-success/ Lets say we have a trial that produces an event $V$ with probability $0.3$ , my understanding from the pages above is we expect to see event $V$ after $\frac{1}{0.3}$ trials, or $3.3$ trials. But if we perform two trials, and look at the probability of seeing the event, don't we get 51%? $$ 0.3^2+0.3*0.7+0.7*0.3 = 0.51 $$ or $$ 1 - (0.7^2) = 0.51 $$ So I would expect to expect $V$ to happen when the number of trials is as low as 2. Can anyone show me what I've misunderstood here please?
I'm sure there are better ways to illustrate the difference between "The average number of trials until it happens is 2" and "in 2 trials, it happens for a chance higher than 50%". Here is how I see it. Let $V_i=1$ if the $i$ -th trial is a success, and $V_i=0$ if the $i$ -th trial is a failure. The probability of success of each trial is $p$ . Trials are independent. $R$ is the number of trials until the first success. $$ E\left[R\right]=\sum^{\infty}_{n=1}(1-p)^{n-1}pn=1/p. $$ The probability that there is at least one success in the first $n$ trials is $$ X_n=\mathbb{P}\left(\sum^n_{i=1}V_i\geq 1\right)=1-(1-p)^n. $$ You ask why in your example, $E\left[R\right]>2$ and yet $X_2>0.5$ . It seems you presume for $n < E[R]$ , $X_n$ should be smaller than $50\%$ . This presumption is incorrect. Think about when $p\in(0.5,1)$ , $X_1>0.5$ and yet $E[R]>1$ by definition.
|probability|expected-value|
0
Induced automorphism on $\mathfrak{sl}_n(\mathbb{C})$
I'm reading chapter 15 of Fulton & Harris. On page 212, they let $\phi$ be the automorphism of $\mathbb{C}^n$ mapping $e_i\mapsto e_j$ , $e_j\mapsto -e_i$ , and $e_k\mapsto e_k$ for $k\neq i,j.$ Then they claim that $\phi$ induces an automorphism $\text{Ad}(\phi)$ of $\mathfrak{sl}_n(\mathbb{C})$ which, among other things, takes $\mathfrak{h}$ to itself. I first thought, based on the notation, that $\text{Ad}(\phi)(X)$ should be $[\phi,X]$ . But this does not take $\mathfrak{h}$ to itself (and this wouldn't give an automorphism of $\mathfrak{gl}_n(\mathbb{C})$ , which they claim as well). So what is this map $\text{Ad}(\phi)$ ??
Since apparently my comment answered the question, I'll just make it an answer: Try $\mathrm{Ad}(\phi)(X) = \phi^{-1} \circ X \circ \phi$ (or depending on convention, $= \phi \circ X \circ \phi^{-1}$ ) instead. Upper case " $\mathrm{Ad}$ " is often group conjugation, lower case " $\mathrm{ad}$ " its "derived version", the Lie algebra commutator -- which you tried, and you rightly saw it cannot be meant here.
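For a concrete check, take $n=3$ with $i=1$, $j=2$ (a NumPy sketch; the matrix for $\phi$ and the Cartan element are the obvious choices, not from the text):

```python
import numpy as np

# φ on C^3 with i = 1, j = 2:  e1 ↦ e2,  e2 ↦ -e1,  e3 ↦ e3
P = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])

def Ad(X):
    # Ad(φ) is conjugation by φ, not a bracket [φ, X]
    return np.linalg.inv(P) @ X @ P

H = np.diag([1., -1., 0.])          # a diagonal (Cartan) element of sl_3
AdH = Ad(H)
assert np.allclose(AdH, np.diag(np.diag(AdH)))  # still diagonal: h maps to h
assert abs(np.trace(AdH)) < 1e-12               # still traceless: stays in sl_3
```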
|representation-theory|lie-algebras|
1
Omega and Alpha limit sets
Considering IVP: $\dot{x} = \cos(x) +1, \; x(0) = 0.$ I should compute the omega and alpha limit of the given initial point. So I first found equilibria, which in this case is: $ x = k \pi, \; k \in \mathbb{Z}$ , and this equilibrium is not hyperbolic. But I am not sure how to continue further?
Equilibrium points would actually be $x=k\pi$ , with $k$ odd (if $k$ is even, $\cos(x)=1$ ). For this reason, the lines $y=-\pi$ , $y=\pi$ are trajectories of the system. It is well known that trajectories cannot intersect each other, so the solution starting at $(0,0)$ will always be contained in the band bounded by those two lines. Now, if $x(t)$ is that solution, $x'(t)=\cos(x(t))+1>0$ , because $-\pi < x(t) < \pi$ . Thus, the solution is increasing and bounded above, so it has a limit at $t\to\infty$ . But this limit can only be an equilibrium point, and so $x(t) \uparrow \pi$ , and $\omega(0)=\{\pi\}$ . Analogously, as the solution is increasing and bounded below, it has a limit at $t\to-\infty$ , and this limit can only be $-\pi$ , so $\alpha(0)=\{-\pi\}$ .
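Separating variables even gives a closed form, $\tan(x/2)=t$, i.e. $x(t)=2\arctan t$ (a derivation added here, not in the answer), which a crude Euler integration confirms (Python assumed):

```python
import math

def euler(t_end, h=1e-4):
    # forward-Euler integration of x' = cos(x) + 1, x(0) = 0
    x = 0.0
    for _ in range(int(t_end / h)):
        x += h * (math.cos(x) + 1.0)
    return x

# separating variables: dx/(1 + cos x) = dt and ∫ dx/(2 cos²(x/2)) = tan(x/2),
# so tan(x/2) = t, i.e. x(t) = 2*arctan(t), which tends to π as t → ∞
for t in (1.0, 5.0, 20.0):
    assert abs(euler(t) - 2 * math.atan(t)) < 1e-2
assert euler(50.0) < math.pi        # the solution stays below the equilibrium π
```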
|dynamical-systems|
1
how to prove $\exp(t) \ge \sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}$
How to prove the inequality $\quad \exp(t) \ge \sum\limits_{k=0}^{2n-1} \frac{t^k}{k!}$ , whenever $t \in \mathbb R , n \in \mathbb N^+$ ? When $t\ge 0$ , it can be proved easily by the Taylor expansion $\exp(t) = \sum\limits_{k=0}^{\infty} \frac{t^k}{k!}$ , but when $t<0$ , I can't prove it. Need help. Thanks.
The direct proof is possible. Consider the function $$g(t)=e^{-t}\sum_{k=0}^{2n-1}{t^k\over k!}$$ We get $$ g'(t)=-e^{-t}{t^{2n-1}\over (2n-1)!}$$ Therefore $g'(t)>0$ for $t<0$ and $g'(t)<0$ for $t>0.$ Thus the maximal value of $g(t)$ is attained at $t=0.$ Hence $g(t)<g(0)=1$ for $t\neq 0.$ Finally $$\sum_{k=0}^{2n-1}{t^k\over k!}=e^tg(t)<e^t.$$ Similarly we can prove that $$e^t<\sum_{k=0}^{2n}{t^k\over k!},\qquad t<0.\tag{*}$$ Indeed for $h(t)=\displaystyle e^{-t}\sum_{k=0}^{2n}{t^k\over k!}$ we have $\displaystyle h'(t)=-e^{-t}{t^{2n}\over (2n)!}.$ Therefore the function $h$ is strictly decreasing on $(-\infty,0].$ Hence $h(t)>h(0)=1$ for $t<0,$ i.e. $(*)$ holds.
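A quick numerical check (a sketch, not a proof) of both inequalities for a few values of $t$ and $n$: the odd-degree partial sum of the exponential series lies below $e^t$ for all real $t$, and the even-degree partial sum lies above $e^t$ for $t<0$.

```python
import math

def partial_exp(t, m):
    # sum_{k=0}^{m} t^k / k!
    return sum(t**k / math.factorial(k) for k in range(m + 1))

for n in [1, 2, 3, 5]:
    for t in [-10.0, -3.0, -0.5, 0.7, 2.0, 6.0]:
        # odd-degree (2n-1) partial sum lies below exp(t) for every real t
        assert partial_exp(t, 2*n - 1) <= math.exp(t) + 1e-9
    for t in [-10.0, -3.0, -0.5]:
        # even-degree (2n) partial sum lies above exp(t) for t < 0
        assert partial_exp(t, 2*n) >= math.exp(t) - 1e-9
```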
|calculus|sequences-and-series|derivatives|taylor-expansion|
0
Gap in my proof of $f$ is injective $\implies f^{-1}(f(S))=S$.
This question is about a gap in a proof I'm writing. I understand intuitively what the gap is - but can't find the correct or standard phrasing. Context The exercise is as follows (edited from ex 3.4.5 Terence Tao Analysis I 4th ed): Let $f : X \to Y$ be a function from one set $X$ to another set $Y$ . Show that $f^{-1}(f(S))=S$ for every $S \subseteq X$ if and only if $f$ is injective. To do this we need to prove both: $f^{-1}(f(S))=S \implies f$ is injective. $f$ is injective $\implies f^{-1}(f(S))=S$ . The gap I am asking about is in the proof of the second bullet point only. My Incomplete Proof Let's show $f$ is injective $\implies f^{-1}(f(S))=S$ . To do this we need to show that under the assumption $f$ is injective, both of the following are true: $f^{-1}(f(S)) \subseteq S$ $S \subseteq f^{-1}(f(S))$ Part one: Let's start with the first $f^{-1}(f(S)) \subseteq S$ . By definition of inverse images, $f^{-1}(f(S))$ is the set $\{x \in X: f(x) \in f(S)\}$ . If $x$ is a member of this
There is a much simpler way to close the gap: $f$ injective $\implies$ $f^{-1}f(S) \subseteq S$ : Proof: Let $x'$ be any element of the set $f^{-1}(f(S))$ $=$ $\{x; f(x) \in f(S)\}$ . Then $f(x')$ is in $f(S)$ , and thus by definition $f(x')$ is in $\{f(x); x \in S\}$ , or equivalently, there is an $x \in S$ such that the equation $f(x')=f(x)$ holds. But as $f$ is injective, the equation $f(x')=f(x)$ implies the equation $x' = x$ . So as $x$ is in $S$ and $x'=x$ , it follows that $x'$ is in $S$ as well. Thus, any element $x'$ in the set $f^{-1}(f(S))$ , is also in $S$ , giving $f^{-1}(f(S)) \subseteq S$ .
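For intuition, the argument can be checked mechanically on finite sets. A small Python sketch (the helper names `preimage` and `image` are ad hoc, not from any library):

```python
def preimage(f, T):
    # f is a dict x -> f(x); this is {x : f(x) in T}
    return {x for x in f if f[x] in T}

def image(f, S):
    return {f[x] for x in S}

# an injective f : {1,2,3,4} -> {'a','b','c','d','e'}
f = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
for S in [set(), {1}, {2, 3}, {1, 2, 3, 4}]:
    assert preimage(f, image(f, S)) == S

# a non-injective g shows the inclusion S <= g^{-1}(g(S)) can be strict
g = {1: 'a', 2: 'a', 3: 'c'}
assert preimage(g, image(g, {1})) == {1, 2}   # strictly larger than {1}
```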
|functions|elementary-set-theory|solution-verification|
1
A game of magic Egyptian tilings
Background I've recently been formulating a game that incorporates elements from Egyptian fractions , magic squares , and tilings . It is a single-player game in which the objective is to tessellate a square with sides of length $1$ with tiles that have the surface area of unit fractions. A sum of distinct unit fractions is called an Egyptian fraction . Let's call an 'Egyptian unity sum set' (EUSS) a set of distinct positive integers $\{ c_{1}, c_{2}, \dots , c_{n} \}$ of size $n$ such that their Egyptian fractions sum to $1$ . So we have $$ \frac{1}{c_{1}} + \frac{1}{c_{2}} + \dots + \frac{1}{c_{n}} = 1. $$ For instance, $\{2,3,6\}$ is an EUSS for $n=3$ . Moreover, we say that an Egyptian unity sum set is composite if $c_{1}, \dots , c_{n}$ are all composite numbers. An example of a composite EUSS is $\{4, 6, 8, 9, 10, 12, 15, 18, 24\}$ . In this case, $n=9$ . There are no such sets for $n < 9$ . But for every $n \geq 9$ , there is at least one. More information can be found in this questio
There is no MET for $n=9$ , so the answer to all of your questions is negative. Here’s Java code that generates all composite EUSSs for a given value of $n$ and finds all METs for them. The only composite EUSS for $n=9$ is the one you tried, and no MET for it is found. There are $46$ composite EUSSs for $n=10$ , and $11$ of them (marked with an asterisk) admit at least one MET: 4, 6, 8, 9, 10, 12, 14, 15, 39, 6552 4, 6, 8, 9, 10, 12, 14, 15, 40, 1260 4, 6, 8, 9, 10, 12, 14, 15, 42, 504 4, 6, 8, 9, 10, 12, 14, 15, 45, 280 4, 6, 8, 9, 10, 12, 14, 15, 56, 126 4, 6, 8, 9, 10, 12, 14, 15, 72, 84 4, 6, 8, 9, 10, 12, 14, 16, 35, 720 4, 6, 8, 9, 10, 12, 14, 18, 28, 840 * 4, 6, 8, 9, 10, 12, 14, 18, 30, 280 * 4, 6, 8, 9, 10, 12, 14, 18, 35, 120 4, 6, 8, 9, 10, 12, 14, 18, 40, 84 * 4, 6, 8, 9, 10, 12, 14, 20, 24, 1260 4, 6, 8, 9, 10, 12, 14, 20, 35, 72 4, 6, 8, 9, 10, 12, 14, 21, 24, 315 4, 6, 8, 9, 10, 12, 14, 24, 35, 45 4, 6, 8, 9, 10, 12, 15, 16, 30, 720 4, 6, 8, 9, 10, 12, 15, 16, 32, 288 4,
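As a sanity check, the claimed sets can be verified to be EUSSs with exact rational arithmetic; the $n=9$ set below is the one from the question, and the two $n=10$ sets are taken from the list above:

```python
from fractions import Fraction

def is_euss(s):
    # exact check that the unit fractions over s sum to 1
    return sum(Fraction(1, c) for c in s) == 1

# the unique composite EUSS for n = 9
assert is_euss([4, 6, 8, 9, 10, 12, 15, 18, 24])

# spot-check two of the listed n = 10 sets
assert is_euss([4, 6, 8, 9, 10, 12, 14, 15, 39, 6552])
assert is_euss([4, 6, 8, 9, 10, 12, 14, 18, 28, 840])
```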
|tiling|magic-square|egyptian-fractions|
1
Find r such that $|2z|=|1+z^2|=2r$
Find all real numbers $r>0$ such that there exists a complex number $z$ with $|2z|=|1+z^2|=2r$ . The progress I've made, by squaring the equality and using $|z_1 + z_2 |^2 = (z_1 + z_2)\overline{(z_1 + z_2)}$ , is: $$\left(|z+i|^2 - 2\right) \left(|z-i|^2-2\right) = 0$$ I am stuck here!
$\mathbb{C}$ is a field, so from $$ \left(|z+i|^2 - 2\right) \left(|z-i|^2-2\right)=0$$ we deduce either $|z+i|^2 - 2=0$ or $|z-i|^2 -2=0$ . If $|z+i|^2 - 2=0$ , we have that $|z+i|^2=2$ , that is, $z$ is in the circle of center $-i$ and radius $\sqrt{2}$ . Calling $z=x+iy$ and translating this to a $\mathbb{R}^2$ problem, we get $x^2+(y+1)^2=2$ , so $$x^2+y^2=2-2y-1=1-2y,$$ and $x^2+y^2$ is minimum when $y$ is maximum, that is, at $\sqrt{2}-1$ , where $x^2+y^2=3-2\sqrt{2}$ . Also, $x^2+y^2$ is maximum when $y$ is minimum, at $-1-\sqrt{2}$ , where $x^2+y^2=3+2\sqrt{2}$ . As $r=|z|=\sqrt{x^2+y^2}$ , we have that if $r\in \left[\sqrt{3-2\sqrt{2}},\sqrt{3+2\sqrt{2}}\right]$ there is a complex number satisfying our desired property (because the norm is continuous and thus its image on the connected circle must be the whole interval). If $|z-i|^2 -2=0$ the procedure is analogous and is left to the reader.
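A numerical sketch confirming the predicted range of $r=|z|$ on the circle $|z+i|=\sqrt2$ (note that $\sqrt{3-2\sqrt2}=\sqrt2-1$ and $\sqrt{3+2\sqrt2}=\sqrt2+1$):

```python
import math

# sample the circle |z + i| = sqrt(2), i.e. center -i, radius sqrt(2),
# and record the extreme values of |z|
radii = []
N = 100000
for k in range(N):
    t = 2 * math.pi * k / N
    z = complex(math.sqrt(2) * math.cos(t), -1 + math.sqrt(2) * math.sin(t))
    radii.append(abs(z))

lo, hi = min(radii), max(radii)
# predicted extremes: sqrt(3 -+ 2*sqrt(2)) = sqrt(2) -+ 1
assert abs(lo - (math.sqrt(2) - 1)) < 1e-6
assert abs(hi - (math.sqrt(2) + 1)) < 1e-6
```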
|complex-numbers|
0
Digit sum equality $S(a^n + n) = 1 + S(n)$ implies that $a$ is a power of ten
Let $a$ be a positive integer such that for the digit sum $S(\cdot)$ the equality $S(a^n + n) = 1 + S(n)$ holds for every sufficiently large $n$ . Then $a$ is a power of ten. So I know that $$ S(a+b)=S(a)+S(b)-9c(a,b), $$ where $c(a,b)$ is the number of carryovers when adding $a$ and $b$ . Using this gives $$ S(a^n) +S(n)-9c(a^n,n) = 1 + S(n) $$ and hence $$ S(a^n)=1+9c(a^n,n). $$ Now if we set e.g. $a=10$ , we get $$ S(10^n)=1+9c(10^n,n) $$ and $$ 9c(10^n,n)=0 $$ for large $n$ , so the formula makes sense. But the thing that we have to prove is that our condition definitely gives $a=10^z$ . So maybe this way of proving it is wrong, maybe one has to assume that $a\neq 10$ in the beginning. Any help would be great, this is a problem for olympiad training.
This solution is the same in heart as the one posted by John Omielan, only with simplified final steps. If $a$ is a multiple of $10$ , then $S(a^n+n) = S(a^n)+S(n)$ , thus $S(a^n)=1$ for sufficiently large $n$ , which means $a$ is a power of $10$ . Otherwise, let $n = 10^k$ for some sufficiently large $k$ . Then $$\begin{cases} S(a^{10^k}+10^k) &= 1+S(10^k) &= 2\\ S(a^{10^{k+1}}+10^{k+1}) &= 1+S(10^{k+1}) &= 2 \end{cases} \implies \begin{cases} a^{10^k}+10^k &= 10^p+1\\ a^{10^{k+1}}+10^{k+1} &= 10^q+1 \end{cases}$$ where $p$ and $q$ are the amount of digits of $a^{10^k}+10^k$ and $a^{10^{k+1}}+10^{k+1}$ , respectively (of course $p<q$ ). Now, this means $10^q+1-10^{k+1} = a^{10^{k+1}} = (a^{10^k})^{10} = (10^p+1-10^k)^{10}$ , therefore $$1-10^{k+1}\equiv (1-10^k)^{10}\pmod{10^p}$$ thus $10^p$ divides $(10^k-1)^{10}+10^{k+1}-1$ . This is ridiculous (I mean, a contradiction) because $$0
|number-theory|contest-math|arithmetic|
0
$\log_a 10 + \log_b 10 + \log_c 10 \ge \sqrt{3 \log_a 10 * \log_b 10 * \log_c 10}$
Prove the following inequality for $a,b,c, \in (1,\infty)$ , such that $abc = 10$ $$\log_a 10 + \log_b 10 + \log_c 10 \ge \sqrt{3 \log_a 10 * \log_b 10 * \log_c 10}$$ I will transform logarithms to base 10, so we have to prove that $$\frac{1}{\log_{10} a}+\frac{1}{\log_{10} b}+\frac{1}{\log_{10} c} \ge \sqrt{\frac{3}{\log_{10} a * \log_{10} b * \log_{10} c}}$$ We can use now the Cauchy inequality : $$\frac{1}{\log_{10} a}+\frac{1}{\log_{10} b}+\frac{1}{\log_{10} c} \ge \frac{(1+1+1)^2}{\log_{10} a + \log_{10} b + \log_{10} c} = \frac{9}{\log_{10} abc} = 9 $$ So we only have to prove that $$ 9 \ge \sqrt{\frac{3}{\log_{10} a * \log_{10} b * \log_{10} c}} \iff 27\log_{10} a * \log_{10} b * \log_{10} c \ge 1 $$ I got stuck here. I think we will have to continue using the inequality of means, since the numbers are positive, but I don't see where this can lead. Maybe I started wrong. What do you think ? I am here for any idea or solution you have. Thank you very much !
Apply the inequality $(x+y+z)^2 \ge 3(xy+yz+zx)$ : $$\begin{align} \left(\frac{1}{\log_{10} a}+\frac{1}{\log_{10} b}+\frac{1}{\log_{10} c}\right)^2 &\ge 3\left(\frac{1}{\log_{10} a}\frac{1}{\log_{10} b}+\frac{1}{\log_{10} b}\frac{1}{\log_{10} c}+\frac{1}{\log_{10} c}\frac{1}{\log_{10} a} \right)\\ &= 3\cdot\frac{\log_{10} a + \log_{10} b + \log_{10} c}{\log_{10} a\cdot \log_{10} b \cdot\log_{10} c}\\&= \frac{3}{\log_{10} a\cdot \log_{10} b \cdot\log_{10} c} \end{align}$$ The equality occurs if and only if $a = b= c = \sqrt[3]{10}$ .
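The inequality $(x+y+z)^2\ge 3(xy+yz+zx)$ used above is equivalent to $\tfrac12\big((x-y)^2+(y-z)^2+(z-x)^2\big)\ge 0$. A quick numerical check of the chain, including the equality case $a=b=c=\sqrt[3]{10}$ (sample triples are arbitrary, chosen with $abc=10$ and $a,b,c>1$):

```python
import math

def lhs_rhs(a, b, c):
    x, y, z = (1 / math.log10(v) for v in (a, b, c))
    return (x + y + z) ** 2, 3 * (x * y + y * z + z * x)

# a few triples with a*b*c = 10 and a, b, c > 1
for (a, b) in [(1.5, 2.0), (2.0, 3.0), (1.2, 4.0)]:
    c = 10 / (a * b)
    assert c > 1
    L, R = lhs_rhs(a, b, c)
    assert L >= R - 1e-9

# equality at a = b = c = 10**(1/3), where x = y = z = 3
t = 10 ** (1 / 3)
L, R = lhs_rhs(t, t, t)
assert abs(L - R) < 1e-9
```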
|inequality|logarithms|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
1
Expectation of min(X,1)
On several occasions, I've come across the following statement: Let $X$ be a random variable. Then we have, for $X$ nonnegative, $E[\min(X,1)] = \int_0^1 Pr(X\geq u) du$. How would one go about to show this?
\begin{align*} \operatorname E\min\{X,1\} = \operatorname E({\operatorname E[\min\{X,1\} \mid X]}) &= {\operatorname E[\min\{X,1\} \mid X\le1]}\cdot \Pr\{X\le1\} + {\operatorname E[\min\{X,1\} \mid X\ge1]}\cdot \Pr\{X\ge1\} \\ &= \int_0^1\Pr\{X\ge x\}\mathrm dx\cdot \Pr\{X\le1\}+1\cdot \Pr\{X\ge1\} \end{align*} The law of total expectation has been used here; I wonder where the mistake is.
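For what it's worth, the identity in the question can be checked by simulation, e.g. with $X\sim\mathrm{Exp}(1)$, where $\int_0^1\Pr(X\ge u)\,du = \int_0^1 e^{-u}\,du = 1-e^{-1}$ (a Monte Carlo sketch, not part of the derivation above):

```python
import math, random

random.seed(0)
N = 200000
samples = [random.expovariate(1.0) for _ in range(N)]

# left-hand side: E[min(X, 1)] by Monte Carlo
lhs = sum(min(x, 1.0) for x in samples) / N

# right-hand side: integral_0^1 P(X >= u) du = 1 - 1/e for Exp(1)
rhs = 1 - math.exp(-1)

assert abs(lhs - rhs) < 0.01
```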
|probability|
0
Order matters or not in birthday probability
In this answer , the choices for the distinct birthdays of single people are counted as if order matters: $$363\times 362...$$ But the choices for the birthdays of the two pairs (four people) that have a birthday on the two different days are considered as if order does not matter: $${365 \choose2 } {n \choose 2} {n-2 \choose 2}$$ Why is it that way? Also, does order matter in the denominator $365^n$ of this question? Because I know that you need to be consistent with the denominator and the numerator when doing probability.
Let us imagine making a big list of all of the ways to choose two different birthdays for two pairs of people. The list looks something like this: Alice, Bob on Jan 1; Charlie, Diane on Jan 2. Alice, Bob on Jan 1; Charlie, Evan on Jan 2. Alice, Bob on Jan 1; Diane, Evan on Jan 2. Etc. Now, there are several equivalent ways to write each of these entries. For example, the entry "Alice, Bob on Jan 1; Charlie, Diane on Jan 2" is equivalent to "Charlie, Diane on Jan 2; Alice, Bob on Jan 1". We want to ensure that there are no repeats in the list. One way to do that is to impose the following rule: Always list the date which comes earlier in the year first. In addition to switching the dates, you can also switch the order of the two people born on each of the same days. That is, "Alice, Bob on Jan 1; Charlie, Diane on Jan 2" is the same as "Bob, Alice on Jan 1; Charlie, Diane on Jan 2", which in turn is the same as "Bob, Alice on Jan 1; Diane, Charlie on Jan 2". To ensure multiple equiva
|probability|combinatorics|birthday|
1
Curves vs Lines: A Symmetry Question
For every line in the 2d plane, we can construct a shape with an "inside and outside" (often a circle) such that the shape is cut by the line into two symmetrical parts. Does this property of lines extend to all curves? My idea is if you "zoom in" on a curve, if you get small enough, it might become a line segment, then it would obviously extend. But does anybody have any ideas on a real proof? REVISION: Let me rephrase my question as the following: Prove or disprove the following statement: For any $C^1$ curve, we can construct a shape (an object with a boundary and thus a clear inside and outside) such that the smooth curve cuts the shape into two congruent pieces.
While it is false to say that a curve becomes a line segment if you zoom in small enough, still though the idea of that thought gave me an idea for a simple proof. The symmetry that I get is not a reflective symmetry (as it is for a straight line cutting a circle in half), it is instead a translational symmetry. But you didn't specify the type of symmetry, so this is presumably acceptable. Let $\Gamma$ be the given $C^1$ curve. Choose $P \in \Gamma$ with tangent line $L$ . Choose a closed arc $\alpha \subset \Gamma$ that contains $P$ in its interior, and choose $\alpha$ so short that the orthogonal projection from $\alpha$ to $L$ is a $C^1$ -diffeomorphism. Let $A \subset L$ be the image of this projection. To simplify matters, we can apply a rigid Euclidean motion to $\Gamma$ so that $A = [-T,T] \times \{0\}$ for some $T > 0$ . The arc $\alpha$ then becomes the graph of some $C^1$ function $f : [-T,T] \to \mathbb R$ , $$\alpha = \{(t,f(t)) \mid t \in [-T,T]\} $$ Now pick $\rho > 0$ an
|geometry|symmetry|
1
Project a level set onto a plane
I have a level surface in $(x,y,z)$ space. For concreteness, let's say $\frac{1}{r^2}x^2 + \left(\frac{\sin \theta}{r}\right)^2y^2 + \left(\frac{\cos \theta}{r}\right)^2z^2 + \frac{2 \cos \theta \sin \theta}{r^2} xy = 1$ which describes a circle with radius $r$ initially in the $xz$ plane that is tilted by angle $\theta$ . How can I get the level surface of the corresponding projected ellipse in the $xy$ plane? In other words, if I have an observer at $z = \infty$ , how can I describe the equation of the ellipse they will see?
So as I stated in my comment, your set $S$ defined by your equation (that I will rewrite for more convenience) $$x^2 + \sin^2(\theta) y^2 + \cos^2(\theta) z^2 + 2\cos(\theta) \sin (\theta) xy = r^2$$ Defines a $2$ -dimensional smooth manifold in $\mathbb{R}^3$ (that will very probably look like a deformed sphere) so its projection parallel to the $z$ axis on the $x,y$ plane will be a filled ellipse. Now the projection $p$ is defined to be $p((x,y,z)) = (x,y)$ , thus if $\cos(\theta)\neq 0$ $$\begin{align*} p(S) &= \bigcup_{z \in \mathbb{R}}\left\{(x,y) \in \mathbb{R}^2\mid x^2 + \sin^2(\theta) y^2 + \cos^2(\theta) z^2 + 2\cos(\theta) \sin (\theta) xy = r^2 \right\}\\ &=\bigcup_{z' \geqslant 0}\left\{(x,y) \in \mathbb{R}^2\mid x^2 + \sin^2(\theta) y^2 + z' + 2\cos(\theta) \sin (\theta) xy = r^2 \right\}\\ &=\bigcup_{z' \geqslant 0}\left\{(x,y) \in \mathbb{R}^2\mid x^2 + \sin^2(\theta) y^2 + 2\cos(\theta) \sin (\theta) xy = r^2 - z' \right\}\\ &=\left\{(x,y) \in \mathbb{R}^2\mid
|linear-algebra|geometry|rotations|projection|
0
Hausdorff property of Grassmannian
Good evening to everyone. I am new to Manifold Theory, so I am trying the last weeks to study some chapters from the book of John M. Lee 's Introduction to Smooth Manifolds . I was trying to understand this evening the concept of the Grassmannian manifold, which he introduced on pages 23–24 in the book. More specifically, there he was trying to endow the space of $k$ -dimensional subspaces of an $n$ -dimensional space $V$ (obviously $k \leq n$ ) with a smooth structure using a preceding Lemma he had mentioned earlier in the book. I understood most of the part, but I was stuck with the proof of the Hausdorff property of the induced topology (which is the last condition remaining for the application of his Lemma). So, he mentions the following paragraph: Hausdorff condition (v) is easily verified by noting that for any two $k$ -dimensional subspaces $P, P' \leq V$ , it is possible to find a subspace $Q$ of dimension $n-k$ whose intersections with both $P$ and $P'$ are trivial. I do not und
As already stated in the comments, $Q = \operatorname{span}\{e_1 + e_4, e_2 + e_5\}$ does the trick, since no vector in $P$ has a non-trivial fourth or fifth coordinate, while no vector in $P'$ has a non-trivial first or second coordinate.
|linear-algebra|differential-geometry|differential-topology|smooth-manifolds|grassmannian|
0
Break non-simple cycle to simple cycles?
Given a cycle, how can I break it into simple cycles? I know there are general DFS algorithms to find cycles. I want to know if there are any advantages I can use to simplify the calculation/algorithm, considering that this is already a cycle. For example, I have the cycle below: cycle_edges = [(4, 5), (3, 5), (2, 3), (2, 6), (6, 7), (7, 4), (4, 5), (5, 6), (8, 6), (4, 8), (4, 6), (6, 1), (1, 0), (0, 8), (4, 8)] And I would like to break it into 5 simple cycles to [ [6, 5, 3, 2], [6, 5, 4], [6, 4, 7], [6, 4, 8], [6, 8, 0, 1]]
You're starting with the cycle $453267456846108$ (I'm leaving the 4 at the end implicit, so we don't accidentally double-count it). Look for any duplicate vertices in this cycle. For example, we have three $6$ s. Now chop up the cycle (cyclically) into the cycles between those times we hit $6$ . Here we get $6745$ , $684$ , and $61084532$ (looping around back to the start). These are simple because each has no duplicate vertices. If you choose a different vertex to chop at, you could get different cycles. For example, cutting at $4$ gets you $453267$ , $4568$ , and $46108$ . If you want to get exactly those particular simple cycles you listed, this won't work, since the big starting cycle does not include a path $6\leftrightarrow 5 \leftrightarrow 3$ .
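The chopping step described above can be sketched in Python (the function name `chop_at` is ad hoc); it reproduces both the cut-at-$6$ and cut-at-$4$ examples:

```python
def chop_at(cycle, v):
    # cycle: list of vertices in cyclic order, without repeating the start
    # at the end; v: a vertex to chop at. Returns the sub-cycles between
    # consecutive occurrences of v, each written starting with v.
    i = cycle.index(v)
    rot = cycle[i:] + cycle[:i]          # rotate so the cycle starts at v
    pieces, cur = [], []
    for u in rot:
        if u == v and cur:
            pieces.append(cur)
            cur = [v]
        else:
            cur.append(u)
    pieces.append(cur)
    return pieces

big = [4, 5, 3, 2, 6, 7, 4, 5, 6, 8, 4, 6, 1, 0, 8]
assert chop_at(big, 6) == [[6, 7, 4, 5], [6, 8, 4], [6, 1, 0, 8, 4, 5, 3, 2]]
assert chop_at(big, 4) == [[4, 5, 3, 2, 6, 7], [4, 5, 6, 8], [4, 6, 1, 0, 8]]
```

A piece is simple once it has no duplicate vertices; otherwise the same chop can be applied to it recursively.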
|graph-theory|
0
Finding the Areas of Polygons from Side Lengths
I am aware of the formula for the area of a regular polygon: $A=([Side Count] \times [Side Length] \times [Apothem Length])/2$ However, I could not find an equation for the area of a non-regular polygon from the list of its (different) side lengths like the equation for the area of any triangle: $A=\sqrt{Semiperimeter \times [Semiperimeter-Side1] \times [Semiperimeter-Side2] \times [Semiperimeter-Side3]}$ Can either of these equations be adapted to find the area of a polygon using the lengths of its sides? Is there another equation?
Putting together the original question with its mentioned solution for general triangles and the argument about flexibility of (general) polygons with more than 3 sides, it becomes interesting to restate a similar quest, which now includes not only the various side lengths but rather all the distances between all vertex pairs. And this reformulation then clearly can be solved as desired, at least if the polygon is assumed to be convex (and even within several re-entrant cases too): the set of all those vertex pair sides or sectors clearly contains as a subset a triangulation of that polygon. By means of the already mentioned formula the area of each of those triangles can be calculated. The remainder then would be just to add up all these triangular areas. --- rk
|geometry|euclidean-geometry|area|polygons|
1
Some very basic problems in understanding the definition of algebraic variety
I just started learning myself some basic algebraic geometry and I have some trouble doing these rather elementary exercises. I am basically misunderstanding something fundamental so it's causing me trouble to progress further. Some definitions: $X\subset k^n$ is an affine (algebraic) variety if $X=\mathbb{V}(I)$ for some ideal $I\subset R,$ where $\begin{aligned}\mathbb{V}(I)=\{a\in k^n\mid f(a)=0\text{ for all }f\in I\}\end{aligned}$ . Exercises : $(1)\:I\subset J\Rightarrow\mathbb{V}(I)\supset\mathbb{V}(J).\:($ “The more equations you impose, the smaller the solution set”.) $(2)\quad\mathbb{V}(I)\cup\mathbb{V}(J)=\mathbb{V}(I\cdot J)=\mathbb{V}(I\cap J).$ (3) $\mathbb{V}(I)\cap\mathbb{V}(J)=\mathbb{V}(I+J).$ (Note: $\langle I\cup J\rangle=I+J.)$ (4) $\mathbb{V}(I),\mathbb{V}(J)$ are disjoint if and only if $I,J$ are relatively prime (i.e. $I+J=\langle1\rangle)$ My thought processes: I don't think I understand why it's $\mathbb{V}(I)\supset\mathbb{V}(J),$ rather than $\mathbb{V}(I)\s
To get a good initial idea, play with unbelievably simple examples. For example, maybe I'll look at $k = \mathbb{R}$ , choose polynomials in $R = \mathbb{R}[X, Y]$ , and look at resulting ideals. (Later it will turn out that there are complications from not using an algebraically closed field $k$ , in which case you might look at $k = \mathbb{C}$ . But $\mathbb{C}^2$ is hard to visualize while $\mathbb{R}^2$ is familiar). For example, let $I = \langle X \rangle$ , $J = \langle Y \rangle$ , $A = \langle X, Y \rangle$ , and $B = \langle XY \rangle$ be ideals. What is $V(I)$ , the zero set of $I$ ? It's the $y$ -axis, consisting of points $\{ (x, y) \in \mathbb{R}^2 : x = 0 \}$ . Similarly $V(J)$ is the $x$ -axis. What about $V(A)$ and $V(B)$ ? Any point $(x, y) \in V(A)$ must have $x = 0$ , as $X \in A$ ; and it must have $y = 0$ , as $Y \in A$ . Thus $V(A) = \{ (0, 0) \}$ is just the origin. Similar reasoning shows that any points $(x, y) \in V(B)$ must either have $x = 0$ or $y = 0$ , a
|abstract-algebra|algebraic-geometry|ideals|
0
Constructing a circle tangent to another circle and two sides of a triangle
Given the circle tangent to the sides $AB$ and $BC$ , I want to construct another circle that is tangent to this circle and also tangent to the sides $AB$ and $AC$ . The center of such circle lies on the angle bisector of $\angle A$ . Furthermore, the locus of the points that are equidistant from both $AB$ and the circle with center $O$ is a parabola with its focus at $O$ and the directrix parallel to $AB$ with the distance equal to the radius of the circle as mentioned here . Thus, the center of such circle can be found by intersecting the parabola and the angle bisector of $\angle A$ But I don't think it's possible to do this construction by using just a compass and a straight-edge. Another thing that I've noticed is that when two circles are tangent to each other, their centers and their tangent point are placed on the same line. So, in order to do this construction it is enough to find the point of tangency and connect it to the center $O$ and extend it so that it intersects the an
Hint. Considering the segment $s=[A,B]$ and calling $\alpha = \frac 12\angle \hat{BAC}$ , the unknown radius $r$ and $x$ the distance over $s$ from $A$ to the vertical projection of the unknown circle's center, and $d_0$ the distance from $A$ over $s$ to the known circle's vertical projection, we have: $$ (r+r_0)^2=(d_0-x)^2+(r-r_0)^2 $$ with $x = \frac{r}{\tan\alpha}$ and $r_0$ the known circle's radius, thus we can calculate $r$ .
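Expanding the hint's equation with $x = r/\tan\alpha$ gives the quadratic $\cot^2\!\alpha\, r^2 - (2d_0\cot\alpha + 4r_0)\,r + d_0^2 = 0$. A Python sketch (the sample values for $r_0$, $d_0$, $\alpha$ are arbitrary, and $0<\alpha<\pi/2$ is assumed) solving it and verifying the tangency condition:

```python
import math

def tangent_circle_radii(r0, d0, alpha):
    # setup from the hint: known circle of radius r0 whose center projects
    # onto AB at distance d0 from A; alpha is half the angle at A.
    # Assumes 0 < alpha < pi/2.
    c = 1.0 / math.tan(alpha)                     # cot(alpha)
    A, B, C = c * c, -(2 * d0 * c + 4 * r0), d0 * d0
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return [(-B - s) / (2 * A), (-B + s) / (2 * A)]

# verify both roots satisfy (r + r0)^2 = (d0 - x)^2 + (r - r0)^2
r0, d0, alpha = 1.0, 5.0, math.pi / 4
for r in tangent_circle_radii(r0, d0, alpha):
    x = r / math.tan(alpha)
    assert abs((r + r0) ** 2 - ((d0 - x) ** 2 + (r - r0) ** 2)) < 1e-8
```

The two roots correspond to the two circles (one small, one large) tangent to both sides and to the given circle.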
|euclidean-geometry|circles|geometric-construction|
0
How many cards until 2 kings or a king and an ace?
In a card game with a standard deck of 52 cards, dealing stops when either two kings appear or at least one king and one ace appear. What is the expected number of cards that will be dealt? I know that the expected number of cards to see the first king/ace is 44/9. But I don't know how to proceed to the next step. Any help would be much appreciated !
The $44$ cards that aren’t aces or kings are uniformly distributed among the $9$ bins between and around the $8$ aces and kings. Thus the expected number of them that are dealt is $\frac{44}9k$ , where $k$ is the number of kings and aces dealt until $2$ kings or a king and an ace are dealt. Since this is linear in $k$ , we can just substitute the expected value of $k$ into this expression. If the first $j$ ace-or-king cards are aces and the next is a king, then $j+1$ are dealt unless $j=0$ , in which case $2$ are dealt. The probability for this is $\frac{\binom{7-j}3}{\binom84}$ , since $3$ slots for the remaining kings are chosen from the remaining $7-j$ slots. Thus the expected value of $k$ is $$ \binom84^{-1}\left(\binom73\cdot2+\sum_{j=1}^4\binom{7-j}3(j+1)\right)=\frac{23}{10}\;. $$ Adding the expected number of other cards and the expected number of ace-or-king cards yields $$ \frac{44}9\cdot\frac{23}{10}+\frac{23}{10}=\frac{1219}{90}=13.5\overline4\;. $$
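The computation above can be confirmed exactly with rational arithmetic, and cross-checked by simulating the deal (the Monte Carlo part uses a fixed seed and a loose tolerance):

```python
from fractions import Fraction
from math import comb
import random

# expected number of ace-or-king cards dealt, per the formula above
Ek = Fraction(comb(7, 3) * 2 + sum(comb(7 - j, 3) * (j + 1) for j in range(1, 5)),
              comb(8, 4))
assert Ek == Fraction(23, 10)

# total expectation: non-ace-king cards plus ace-or-king cards
total = Fraction(44, 9) * Ek + Ek
assert total == Fraction(1219, 90)

# Monte Carlo cross-check of the stopping rule
random.seed(1)
deck = ['K'] * 4 + ['A'] * 4 + ['x'] * 44
tot, T = 0, 100000
for _ in range(T):
    random.shuffle(deck)
    kings = aces = n = 0
    for card in deck:
        n += 1
        kings += card == 'K'
        aces += card == 'A'
        if kings >= 2 or (kings >= 1 and aces >= 1):
            break
    tot += n
assert abs(tot / T - 1219 / 90) < 0.15
```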
|probability|expected-value|card-games|
1
Is there an approximation to the natural log function at large values?
At small values close to $x=1$, you can use the Taylor expansion for $\ln x$: $$ \ln x = (x-1) - \frac{1}{2}(x-1)^2 + ....$$ Is there any valid expansion or approximation for large values (or at infinity)?
In this other question I found an approximation as follows: $$\ln(x+1) \approx \dfrac{\ln(x)}{1-x^{-\frac{1}{\ln(2)}}}$$ By differentiating both sides, since just one logarithmic term remains you could find the following approximation: $$\ln(x)\approx \dfrac{\ln(2)\ x^{-\frac{1}{\ln(2)}}\left(x^{\frac{1}{\ln(2)}}-1\right)\left(x^{\frac{1}{\ln(2)}}+x\right)}{1+x}$$ which works pretty well for $x\in [0.1,\ 10]$ as can be seen in Wolfram-Alpha , but it gets worse for bigger values of $x$ ; anyway, maybe it could be improved knowing how it has been done. For bigger values of $x$ , following this answer , this formula could be used: $$\ln(x)\approx x\ \left(x^\frac{1}{x}-1\right)$$ Added later Later I found here that the first approximation could be improved with an even simpler formula, which works quite well for $x\in [0.02,\ 50]$ : $$\ln(x) \approx \frac{\left(x-1\right)\left(x+x^{\ln(2)}\right)}{x\left(1+x^{\ln(2)}\right)}$$ check its plot :
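A quick error check of the large-$x$ formula $\ln x \approx x\,(x^{1/x}-1)$; since $x(x^{1/x}-1)=x(e^{\ln x/x}-1)$, the relative error behaves roughly like $\ln x/(2x)$, so it shrinks as $x$ grows:

```python
import math

def ln_approx(x):
    # x * (x**(1/x) - 1), the large-x approximation quoted above
    return x * (x ** (1.0 / x) - 1.0)

for x in [1e3, 1e4, 1e6]:
    rel_err = abs(ln_approx(x) - math.log(x)) / math.log(x)
    assert rel_err < 0.01   # better than 1% for x >= 1000
```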
|logarithms|taylor-expansion|approximation|
0
Counting odd smooth numbers
Let $P(n)$ be the largest prime factor of $n$ , and let $\Psi(x,B) = |\{ n \mid n \leq x \wedge P(n) \leq B\}|$ . (This is a well-studied function in analytic number theory.) Define $\Psi'(x,B) = | \{ n \mid n \leq x \wedge P(n) \leq B \wedge \mbox{$n$ odd}\}|$ . Is there a good estimate for $\Psi'(x,B)$ , or for the ratio $\Psi'(x,B)/\Psi(x,B)$ ? The answer to this post shows how to prove that $\Psi(x, B) \sim \frac{1}{\pi(B)!} \cdot \prod_{p \leq B} \frac{\log x}{\log B}$ . If I repeat the argument to try to estimate $\Psi'(x,B)$ I get $\Psi'(x,B) \sim \frac{1}{(\pi(B)-1)!} \cdot \prod_{2 < p \leq B} \frac{\log x}{\log B}$ . But then $\Psi'(x,B)/\Psi(x,B) = \frac{\pi(B)}{\log x} \sim \frac{B}{(\log B) \cdot (\log x)}$ , which can exceed 1. So that wasn't very useful.
Error terms matter. Both of the asymptotic formulas in the OP have error terms, which means that the approximations shown might not be very accurate until $x$ is quite large in terms of $B$ . The ratio $\frac B{(\log B)(\log x)}$ can be greater than $1$ for small $x$ , but when $x$ is large in terms of $B$ it will be less than $1$ . So there is no inconsistency.
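For small parameters the two counts are easy to tabulate by brute force, which is handy for testing any proposed asymptotic against exact values (convention assumed here: $P(1)=1$, so $n=1$ is counted as smooth):

```python
def largest_prime_factor(n):
    # trial division; returns 1 for n = 1
    p, last = 2, 1
    while p * p <= n:
        while n % p == 0:
            last, n = p, n // p
        p += 1
    return n if n > 1 else last

def psi(x, B, odd_only=False):
    # Psi(x, B), or Psi'(x, B) when odd_only is True
    return sum(1 for n in range(1, x + 1)
               if largest_prime_factor(n) <= B and (not odd_only or n % 2 == 1))

assert psi(100, 5) == 34                 # 5-smooth numbers up to 100
assert psi(100, 5, odd_only=True) == 10  # the odd ones among them
```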
|analytic-number-theory|
0
Comparing two definitions of cocartesian morphism
In the literature I've found two notions of " $\pi$ -cocartesian morphism" in a category, and to my knowledge they're not equivalent. The first and I think most common one is the following : Let $\pi : D\to C$ be a functor. A morphism $\phi:X\to Y$ of $D$ is said to be $\pi$ -cocartesian if for any $\psi:X\to Z$ and any map $\rho:\pi Y\to \pi Z$ such that $\pi\psi=\rho\circ\pi \phi$ , there exists a unique lift of $\rho$ , i.e. a map $\epsilon:Y\to Z$ s.t. $\pi(\epsilon)=\rho$ and $\psi=\epsilon\phi$ . This definition can for instance be found on the nlab . There's a second definition I've found, most notably in "La théorie de l'homotopie de Grothendieck" by Georges Maltsiniotis (it's in French). It goes as follows: Let $\pi : D\to C$ be a functor. A morphism $\phi:X\to Y$ of $D$ is said to be $\pi$ -cocartesian if for any $\psi : X\to Z$ s.t. $\pi(\psi)=\pi(\phi)$ , there exists a unique $g:Y\to Z$ s.t. $\pi(g)=1_{\pi(Y)}$ and $\psi=g\circ\phi$ . I think I can prove that the first imp
The first definition is what we would usually call a $\pi$ -cocartesian morphism, and the second is also known as a locally $\pi$ -cocartesian morphism . As this name suggests, the definitions are not equivalent. The idea is as follows: write $[1]$ for the one-arrow category. Then, in the situation of your second definition, we can form a pullback square $$\require{AMScd}\begin{CD}[1]\times_CD@>>> D\\ @V{\pi'}VV @VV{\pi}V\\ [1] @>{\pi(\varphi)}>> C \end{CD}$$ of categories. Now the morphism $\varphi$ is locally $\pi$ -cocartesian iff it is a $\pi'$ -cocartesian morphism in the category $[1]\times_{C}D$ . Since the category $[1]\times_CD$ may be much smaller than $D$ , locally $\pi$ -cocartesian morphisms generally are not $\pi$ -cocartesian, and you can construct counterexamples using this idea.
|category-theory|functors|
1
Number of primitive Dirichlet characters of certain order and of bounded conductor
Writing $q(\chi)$ for the conductor of a Dirichlet character $\chi$ , one can show using Mobius inversion that $$\#\{\text{$\chi$ primitive Dirichlet characters}\,:\,q(\chi)\leq Q\}\sim cQ^2.$$ My question is how to find an asymptotic expression for the cardinality of the more specific set $$\{\text{$\chi$ primitive Dirichlet characters}\,:\,\chi^d=1,\,q(\chi)\leq Q\}$$ for a fixed $d\in\mathbb{Z}_{\geq0}$ . By Lemma 9.3 in Montgomery and Vaughan's Multiplicative number theory , it suffices to count only those characters $\chi$ in the above set modulo a prime power, but I'm not sure how to proceed from there.
You're in luck, because Theorem 2 in this paper of Finch, Sebah, and myself gives an asymptotic formula for exactly this quantity. The order of magnitude is $Q(\log Q)^{\tau(d)-2}$ , where $\tau(d)$ is the number of positive divisors of $d$ ; the leading constant is complicated but is given explicitly in the paper. (Note that there is a difference between " $\chi^d=1$ " and " $\chi$ has order $d$ ", but the same asymptotic formula holds for the two cases.)
|number-theory|elementary-number-theory|analytic-number-theory|arithmetic-functions|dirichlet-character|
1
Arguments for Galois closure of $\mathbb{Q}(\sqrt[3]{2})$
The standard argument for why $K = \mathbb{Q}(\sqrt[3]{2})$ is not the splitting field of $f = x^3 - 2$ relies on us implicitly choosing a complex embedding of $K$ , or in other words choosing the real third root of $2$ as a solution for $f$ . Then we argue that since $K$ is a real field it cannot contain the other two roots which are complex. This sounds like a bit of cheating, though, since a priori, until we choose an embedding, the roots of $f$ are indistinguishable. So my question is whether there is a similar argument for $\mathbb{Q}(\alpha)$ not being the splitting field for $\alpha$ a non-real root of $f$ , e.g. $\alpha = \omega \sqrt[3]{2}$ where $\omega$ is a primitive root of unity. Or in general, whether the not-a-splitting-field argument must necessarily rely on a particular choice of a complex embedding.
I will show that an extension $K$ of $\mathbb{Q}$ of degree $3$ that has one root of $x^3-2$ cannot contain any other roots, and therefore cannot be a Galois extension of $\mathbb{Q}$ . (Since if $K$ is a Galois extension of $\mathbb{Q}$ , and $f$ is an irreducible polynomial in $\mathbb{Q}[x]$ that has at least one root in $K$ , then it must split over $K$ ). Let $K$ be an extension of degree $3$ . Let $\alpha\in K$ be a root of $x^3-2$ . If $K$ contains another root $\beta\neq\alpha$ of $x^3-2$ , then $K$ also contains $\frac{\alpha}{\beta}\neq 1$ , but $$\left(\frac{\alpha}{\beta}\right)^3 = \frac{\alpha^3}{\beta^3} = \frac{2}{2}=1,$$ so $\frac{\alpha}{\beta}$ is a root of $x^3-1 = (x-1)(x^2+x+1)$ . Since $\frac{\alpha}{\beta}\neq 1$ , then $\frac{\alpha}{\beta}$ is a root of $x^2+x+1$ , which is irreducible over $\mathbb{Q}$ . Therefore, $[\mathbb{Q}(\frac{\alpha}{\beta}):\mathbb{Q}]=2$ , but then $$ 3= \left[K:\mathbb{Q}\vphantom{\frac{\alpha}{\beta}}\right] = \left[K:\mathbb{Q}\l
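Numerically, the ratio of any two distinct roots of $x^3-2$ is a primitive cube root of unity, hence a root of $x^2+x+1$, as used above (a quick check, not a substitute for the algebraic argument):

```python
import cmath, math

# the three complex roots of x^3 - 2
alpha = 2 ** (1 / 3)
omega = cmath.exp(2j * math.pi / 3)
roots = [alpha, alpha * omega, alpha * omega**2]

for a in roots:
    for b in roots:
        if a != b:
            q = a / b
            # q is a primitive cube root of unity: q^3 = 1 and q != 1,
            # so q satisfies x^2 + x + 1 = 0
            assert abs(q**3 - 1) < 1e-9
            assert abs(q**2 + q + 1) < 1e-9
```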
|abstract-algebra|field-theory|galois-theory|extension-field|splitting-field|
1
Why do we turn $|\tan^{-1}\theta|$ to $-\tan^{-1}θ$?
In this question, why did we turn $|\tan^{-1}\theta|$ to $-\tan^{-1}\theta$ in the highlighted step? Why is there a negative sign? I first thought that the negative sign was because $\frac\pi2 < \theta < \pi$ , so $\tan\theta$ is negative on this domain. But if $\tan\theta$ is negative, why do we put a (-) sign and make it positive? In a similar question [below], we allowed it to be +ve and that seems correct, but I don't understand why in the first question we need to add a (-) sign?
In the second line of the equation above, $\sec \theta \lt -1$ is valid in two quadrants: the second and the third. Substituting $x = \sec \theta$ in the denominator, $\sqrt {\sec^2 \theta - 1} = \sqrt {\tan^2 \theta} = |\tan \theta| = \pm \tan \theta$ , but since you're in the range $\frac {\pi}{2} \lt \theta \lt \pi$ , you're in the second quadrant; as $\tan \theta$ is negative in the second quadrant, we choose the negative sign, because the absolute value function gives $|x| = -x$ when $x$ is negative.
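A numerical check of the sign choice (my addition, not part of the original answer): for $\theta$ in the second quadrant, $\sqrt{\sec^2\theta-1}$ really equals $-\tan\theta$.

```python
import math

# for theta in the second quadrant (pi/2 < theta < pi): sec(theta) < -1
# and tan(theta) < 0, so sqrt(sec^2(theta) - 1) = |tan(theta)| = -tan(theta)
for i in range(1, 100):
    theta = math.pi / 2 + i * (math.pi / 2) / 100  # strictly inside (pi/2, pi)
    sec = 1 / math.cos(theta)
    assert sec < -1
    assert math.isclose(math.sqrt(sec * sec - 1), -math.tan(theta), rel_tol=1e-9)
```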
|trigonometry|inverse-trigonometric-functions|
0
Theorem about coupling and independence of random variables
I am reading a book of E. Rio and I found there a theorem ( without a proof ) about coupling. Please see below. Theorem: Let $(\xi_i)_{i \in \mathbb{Z}}$ be a sequence of random variables with values in some Polish space $X$ . Assume that $(\Omega, \mathcal{T}, \mathbb{P})$ is rich enough to contain a random variable $U$ with uniform distribution over $[0, 1]$ , independent of $(\xi_i)_{i \in \mathbb{Z}}$ . Let $\mathcal{F_0} = \sigma(\xi_i: i \le 0)$ and $\mathcal{G}_n = \sigma(\xi_i: i \ge n)$ . Then one can construct a sequence $(\xi^*_i)_{i \in \mathbb{Z}}$ with the same joint distribution as the initial sequence $(\xi_i )_{i \in \mathbb{Z}}$ , independent of $\mathcal{F}_0$ and measurable with respect to the $\sigma$ -field generated by $U$ and $(\xi^*_i)_{i \in \mathbb{Z}}$ , in such a way that, for any positive integer $n$ , $$\mathbb{P}(\xi_k \neq \xi^*_k \text{ for some } k \ge n \mid \mathcal{F_0}) = \text{esssup}(|\mathbb{P}(B \mid \mathcal{F_0})-\mathbb{P}(B)| \colon B \in \mathcal{G}_n).$$
It is not meant to be a real answer. Probably only a proof of the theorem would clarify what's going on. Question 1 : I see another assumption of the theorem. The random variables $\xi_i$ are taking values on a Polish space $\mathcal{X}$ . Question 2 : Why do we ask the random variable $U$ to be an $\mathrm{Unif}([0,1])$ independent of $(\xi_i)_{i\in\mathbb{Z}}$ ? Fact: If $U$ is $\mathrm{Unif}([0,1])$ and $F$ is some cumulative distribution function, then the random variable $F^{-1}(U)$ follows the distribution $F$ , i.e. $$\mathbb{P}\bigl(F^{-1}(U)\le x\bigr) = F(x),\quad\forall x\in\mathbb{R},$$ where $F^{-1}$ is the generalized inverse function of $F$ . In other words, with such an $U$ one can define (on the same probability space) real-valued random variables with desired distributions. So, the existence of a random variable $U\sim\mathrm{Unif}([0,1])$ in a probability space guarantees the existence of real-valued random variables with any distribution. Furthermore, note that if $
|probability-theory|stochastic-processes|independence|coupling|
1
Hermitian metric as a section of the bundle $(E\otimes \bar{E})^*$
If $M$ is a Riemannian manifold and $E \to M$ a Riemannian bundle, then the Riemannian metric $g$ can be viewed as a section of the bundle $\bigotimes^2 T^*M$ , which means that it is a $(0,2)$ -tensor field. If we instead consider a complex vector bundle $E$ over $M$ with a hermitian metric $h$ , this wikipedia article claims that $h$ is a section of the bundle $(E\otimes \bar{E})^*$ . Can someone here elaborate on why should $h$ be a section of this proposed bundle?
It's not just any section. To quote from the Wikipedia article you linked: A Hermitian metric on a complex vector bundle $E$ over a smooth manifold $M$ is a smoothly varying positive-definite Hermitian form on each fiber. Such a metric can be viewed as a smooth global section $h$ of the vector bundle $(E \otimes \bar{E})^*$ such that for every point $p$ in $M$ , $$ h_p(\eta, \bar{\zeta}) = \overline{h_p(\zeta, \bar{\eta})} $$ for all $\zeta, \eta$ in the fibre $E_p$ and $$ h_p(\zeta, \bar{\zeta}) > 0 $$ for all nonzero $\zeta$ in $E_p$ . A section of $(E \otimes \bar{E})^*$ alone is a sesquilinear form on $E$ , where "sesquilinear form" in the linear algebra setting is what we call a linear map $V \otimes \bar{V} \to \mathbb{C}$ where $V$ is a complex vector space (equivalently, a bilinear map $V \times \bar{V} \to \mathbb{C}$ ). "Section" here encodes "smoothly varying," as with Riemannian metrics. The first condition in the quote above is then exactly the condition we pose on a sesqu
|differential-geometry|riemannian-geometry|
1
From $z = \dot{\Theta}^2 \operatorname{sgn}(\dot{\Theta})$ to $\dot{\Theta} = \operatorname{sgn}(z) \sqrt{\left| z \right|}$
I read on a scientific paper (*) the following equations: $$ z = \dot{\Theta}^2 \operatorname{sgn}(\dot{\Theta}) $$ and then: $$ \dot{\Theta} = \operatorname{sgn}(z) \sqrt{| z |} $$ Could you tell me how to pass from the first equation to the second one please? (*) Dynamics and stability of a rimless spoked wheel: a simple 2D system with impacts by Michael J. Coleman
$\def\sgn{\mathrm{sgn}}$ Since $z = \dot{\Theta}^2 \sgn(\dot{\Theta})$ we have $|z| = \dot{\Theta}^2$ and then $\dot{\Theta} = \pm \sqrt{|z|}$ . Which sign do we choose? Well, from $z = \dot{\Theta}^2 \sgn(\dot{\Theta})$ we see that $\sgn(z) = \sgn(\dot{\Theta})$ , so $\dot{\Theta} = \sgn(z) \sqrt{|z|}$ .
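The inversion can be verified by a round-trip computation (a sketch I'm adding, not part of the original answer): applying $z = \dot\Theta^2\operatorname{sgn}(\dot\Theta)$ and then $\operatorname{sgn}(z)\sqrt{|z|}$ recovers $\dot\Theta$.

```python
import math

def sgn(v):
    # sign function: -1, 0, or +1
    return (v > 0) - (v < 0)

def forward(theta_dot):
    # z = theta_dot^2 * sgn(theta_dot)
    return theta_dot**2 * sgn(theta_dot)

def inverse(z):
    # theta_dot = sgn(z) * sqrt(|z|)
    return sgn(z) * math.sqrt(abs(z))

# round-trip check on negative, zero and positive values
for theta_dot in [-3.5, -1.0, 0.0, 0.25, 2.0]:
    assert math.isclose(inverse(forward(theta_dot)), theta_dot, abs_tol=1e-12)
```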
|algebra-precalculus|education|transformation|
1
From $z = \dot{\Theta}^2 \operatorname{sgn}(\dot{\Theta})$ to $\dot{\Theta} = \operatorname{sgn}(z) \sqrt{\left| z \right|}$
I read on a scientific paper (*) the following equations: $$ z = \dot{\Theta}^2 \operatorname{sgn}(\dot{\Theta}) $$ and then: $$ \dot{\Theta} = \operatorname{sgn}(z) \sqrt{| z |} $$ Could you tell me how to pass from the first equation to the second one please? (*) Dynamics and stability of a rimless spoked wheel: a simple 2D system with impacts by Michael J. Coleman
Let's examine the two cases: $(1)\,\dot{\Theta}\ge 0$ and $(2)\,\dot{\Theta}<0$ . CASE $1$ : For $\dot{\Theta}\ge 0$ , we see that $$z=\dot{\Theta}^2 \text{sgn}\left(\dot{\Theta}\right)\implies z\ge0$$ and hence $$\dot{\Theta}=\sqrt z$$ CASE $2$ : For $\dot{\Theta}<0$ , we see that $$z=\dot{\Theta}^2 \text{sgn}\left(\dot{\Theta}\right)\implies z<0$$ and hence $$\dot{\Theta}=-\sqrt {-z}$$ Putting it all together, we have $$\dot{\Theta}=\text{sgn}(z)\sqrt{|z|}$$ And we are done!
|algebra-precalculus|education|transformation|
0
Constructing a circle tangent to another circle and two sides of a triangle
Given the circle tangent to the sides $AB$ and $BC$ , I want to construct another circle that is tangent to this circle and also tangent to the sides $AB$ and $AC$ . The center of such circle lies on the angle bisector of $\angle A$ . Furthermore, the locus of the points that are equidistant from both $AB$ and the circle with center $O$ is a parabola with its focus at $O$ and the directrix parallel to $AB$ with the distance equal to the radius of the circle as mentioned here . Thus, the center of such circle can be found by intersecting the parabola and the angle bisector of $\angle A$ But I don't think it's possible to do this construction by using just a compass and a straight-edge. Another thing that I've noticed is that when two circles are tangent to each other, their centers and their tangent point are placed on the same line. So, in order to do this construction it is enough to find the point of tangency and connect it to the center $O$ and extend it so that it intersects the an
Your start and instinct were right, here is how to complete your construct (clockwise from upper left): Draw the directrix $d$ , bisect the angle of interest, from $O$ draw the perpendicular to the angle bisector with intersection $K$ , mark off $KO'=KO$ , and mark the intersection with the directrix as $J$ . With diameter $JK$ create a circle centered on the line $JK$ (at the unmarked midpoint), and with diameter $OO'$ create a circle centered on line $OO'$ (at $K$ ), mark their intersection at $L$ . With radius $JL$ and center $J$ draw a circle and mark the intersection with the directrix at $M$ . Draw a perpendicular at $M$ and mark its intersection with the angle bisector at $N$ . By construction $JM=JL$ , $JLK$ is a circle with diameter $JK$ so $\angle{JLK}$ is right and therefore $JL$ is tangent to circle $OLO'$ which has center $K$ . Since $JL$ is a tangent and $JOO'$ a secant to circle $OLO'$ , $JL^2=JO \times JO'$ . But $JM=JL$ so $JM^2=JO \times JO'$ . Now, by symmetry $NO=NO
|euclidean-geometry|circles|geometric-construction|
0
What can be said about an ideal whose vanishing locus consists of a single point?
From classical algebraic geometry we learn that maximal ideals in polynomial rings correspond to points, in the sense that if $K$ is an algebraically closed field, and $\mathfrak a$ is a maximal ideal in $K[x_1,\dots,x_n]$ , then there is a $p\in\mathbb A^n$ such that $\mathfrak a=I(\{p\})$ , and hence $V(\mathfrak a)=\{p\}$ . On the other hand, if $\mathfrak a$ is an ideal, and $V(\mathfrak a)=\{p\}$ , then it does not follow that $\mathfrak a$ is a maximal ideal. My question is: if we know that $\mathfrak a$ is an ideal such that $V(\mathfrak a)=\{p\}$ , what can we conclude about $\mathfrak a$ ? It of course follows from the Nullstellensatz that $I(V(\mathfrak a))=\sqrt{\mathfrak a}=I(\{p\})$ , but I wonder if there is more that can be said.
Claim: $V(\mathfrak{a}) = \{\mathbf{p}=(p_1,\ldots,p_n)\}$ if and only if $\mathfrak{a}\neq (1)$ and for each $i$ , $1\leq i\leq n$ there exists $m_i\geq 1$ such that $(x_i-p_i)^{m_i}\in\mathfrak{a}$ . I think this is the best you can say: the maximal ideals are precisely the ideals of the form $(x_1-a_1,\ldots,x_n-a_n)$ for some $(a_1,\ldots,a_n)\in k^n$ : that such ideals are maximal is clear, since $k[x_1,\ldots,x_n]/(x_1-a_1,\ldots,x_n-a_n)\cong k$ via evaluation. That any maximal ideal is of that form follows because if $I$ is maximal, then $k[x_1,\ldots,x_n]/I$ is a finite extension of $k$ , and therefore isomorphic to $k$ , since $k$ is algebraically closed. Fix an isomorphism $k[x_1,\ldots,x_n]/I\cong k$ , which we may assume restricts to the identity on $k$ , and let $r_i$ be the image of $x_i$ in $k$ . Then $(x_1-r_1,\ldots,x_n-r_n)\subseteq I$ , so maximality gives $I=(x_1-r_1,\ldots,x_n-r_n)$ . So if $V(\mathfrak{a})$ consists of a single point, then $\sqrt{\mathfrak{a}} = I(\{\mathbf{p}\}) = (x_1-p_1,\ldots,x_n-p_n)$ is maximal.
|algebraic-geometry|commutative-algebra|
1
The connected component of a lie group $G$ containing identity is an open set.
Let $K $ be the connected component of a lie group $G$ containing identity. Then why is it an open set? I thought that any connected component of a manifold is open because it is locally homeomorphic to $\mathbb{R^n}$ (i.e around every point) but another reasoning that I saw elsewhere is that it is due to local connectedness of $G$. What does that mean? Also my reasoning won't work for manifolds with boundary, so is any modification of it applicable or is the result itself not true if we consider the manifolds with boundary? The reasoning is coming from Spivak's Differential Geometry Vol 1, Chapter 10, p-373.
In fact, you can prove a stronger statement: the connected components of a topological manifold are open. This follows from the following facts: A topological manifold is locally path-connected. Each of the path-connected components of a locally path-connected topological space is an open set. In a locally path-connected space, the connected components coincide with the path-connected components. The first is a standard fact, and you can find 2 and 3 in Propositions 7.31 and 7.32 of the Introduction to Topology notes on nLab.
|manifolds|lie-groups|
0
What are the applications of stochastic differential equations to number theory?
This semester I'm taking a course about stochastic differential equations. This made me wonder what applications this topic has to areas like number theory and algebraic geometry, especially arithmetic geometry. Unfortunately I wasn't able to find anything online; all I found was about applications of usual differential equations to number theory in this question.
First, let me mention a nice established connection between probability and the zeta function: random matrices and the Riemann zeta function. In "Notes on L-functions and Random Matrix Theory" it is explained that the eigenvalues of certain random matrices behave like the zeros of the zeta function along the critical line. So then one can further study this problem using Dyson Brownian motion (which is an evolution on the space of matrices), e.g. as done in "Relaxation to equilibrium and the spacing distribution of zeros of the Riemann ζ function" . The DBM is described by an SDE for its eigenvalues $$d\lambda_i=d B_i+\sum_{1 \leq j \leq n:\, j \neq i} \frac{d t}{\lambda_i-\lambda_j}$$ where $B_1, \dots, B_n$ are distinct and independent Wiener processes. Start with a Hermitian matrix with eigenvalues $\lambda_1(0), \lambda_2(0), \dots, \lambda_n(0)$ , then let it perform Brownian motion in the space of Hermitian matrices. Its eigenvalues constitute a Dyson Brownian motion.
|ordinary-differential-equations|number-theory|partial-differential-equations|reference-request|stochastic-differential-equations|
0
Malliavin derivative of a random variable
We consider a continuous stochastic process $X_t$ with the following dynamics on $[0,T]$ : $$ dX_{t}^{x} = rX_{t}^{x}dt + \sigma X_{t}^{x}dB_t $$ where $X_{0}^{x}=x$ is the initial condition, $r>0$ , $B_t$ is a standard Brownian motion and $\sigma>0$ . A solution is given by \begin{align*} X_t^x =& x \exp\left( t\left( r - \frac{1}{2}\sigma^2 \right) + \sigma B_t\right) =xX_{t}^{1}. \end{align*} (See the post Solution to General Linear SDE ) I am interested in the computation of the following $$ \int_{0}^{T} D_s I_{n}ds $$ where $D_s$ is the Malliavin derivative and $$ I_{n} = \int_{0}^{T} t^n X_{t}^{x} dt $$ I have no clue on how to start this problem. I thought about a classical integration by parts for the Riemann integral, but I do not see how to take an antiderivative or a derivative in the usual sense of such a thing. If you have some hints to provide, I would appreciate it. Thank you
I am basing this on Nualart's book The Malliavin Calculus and Related Topics . First we use that the Malliavin derivative is linear, so $$D_{s}I_{n}=\int_{0}^{T}t^{n}D_{s}(X_{t}^{x})dt.$$ Then we use the Malliavin chain rule $$D_{s}(X_{t}^{x})=x \exp\left( t\left( r - \frac{1}{2}\sigma^2 \right) + \sigma B_t\right)\sigma D_{s}(B_{t})$$ and then we use that $D_{s}(B_{t})=1_{[0,t]}(s)$ to get $$D_{s}I_{n}=\int_{0}^{T}t^{n}x \exp\left( t\left( r - \frac{1}{2}\sigma^2 \right) + \sigma B_t\right)\sigma 1_{[0,t]}(s)dt$$ $$=\int_{s}^{T}t^{n}x \exp\left( t\left( r - \frac{1}{2}\sigma^2 \right) + \sigma B_t\right)\sigma\, dt \quad\text{for } s\in[0,T],$$ since the indicator $1_{[0,t]}(s)$ restricts the integration to $t\ge s$ .
|stochastic-processes|stochastic-analysis|malliavin-calculus|
1
Sum of squared maximization with a norm constraint
I have the following optimization \begin{align} \max_{\|x\|^2\le1} \|Lx - y\| \end{align} where $L$ is lower triangular and $y$ is a given vector. Does it admit a closed-form solution? I am interested in the general case where $L$ may be not invertible and even not square, but any progress under some assumptions will be much appreciated. The motivation is to understand the standard least squares method but as a game from the "adversarial" point of view: $y$ is a measurement sequence and $x$ is the set of possible states from $y = Lx + n$ .
In the sequel, we assume that the matrices and vectors are real and the norm is Euclidean. If the field is complex, let $L=A+iB,\,\mathbf x=\mathbf p+i\mathbf q$ and $\mathbf y=\mathbf a+i\mathbf b$ . The problem is then equivalent to its ‘realified’ version $$ \max_{\|\mathbf p\|^2+\|\mathbf q\|^2\le1}\left\| \pmatrix{A&-B\\ B&A}\pmatrix{\mathbf p\\ \mathbf q}-\pmatrix{\mathbf a\\ \mathbf b}\right\|. $$ So, assume that we work over $\mathbb R$ . Let $r=\operatorname{rank}(L)$ and $L=USV^T$ be an economic/compact singular value decomposition of $L$ , where $S=\operatorname{diag}(s_1,s_2,\ldots,s_r)$ is a positive diagonal matrix and each of $U$ and $V$ has $r$ orthonormal columns. Let $\mathbf v=V^T\mathbf x$ and $\mathbf z=U^T\mathbf y$ , where $\mathbf v,\mathbf z\in\mathbb R^r$ . Then $\|L\mathbf x-\mathbf y\|=\|S\mathbf v-\mathbf z\|$ and hence $\max_{\|\mathbf x\|\le1}\|L\mathbf x-\mathbf y\|=\max_{\|\mathbf v\|\le1}\|S\mathbf v-\mathbf z\|$ . Since $S$ is invertible, the image of
|linear-algebra|optimization|convex-optimization|least-squares|
0
Expressing the determinant of a sum of two matrices?
Can $\det(A + B)$ be expressed in terms of $\det(A), \det(B), n$ , where $A,B$ are $n\times n$ matrices? I made the edit to allow $n$ to be factored in.
For $n=2$ , the identity follows from $$\det(A+\lambda B)=\det(A)+\operatorname{Tr}(A \operatorname{adj}(B))\lambda + (\det B) \lambda^2.$$
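The $2\times 2$ identity quoted in the answer can be sanity-checked numerically; here is a quick sketch (my addition, not part of the original answer) using exact integer arithmetic.

```python
import random

def det2(m):
    # determinant of a 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = m
    return a * d - b * c

def adj2(m):
    # adjugate of a 2x2 matrix
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def tr_prod(m, n):
    # trace of the 2x2 product m @ n: sum_i sum_k m[i][k] * n[k][i]
    return sum(m[i][k] * n[k][i] for i in range(2) for k in range(2))

random.seed(0)
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    lam = random.randint(-5, 5)
    ApB = [[A[i][j] + lam * B[i][j] for j in range(2)] for i in range(2)]
    # det(A + lam*B) = det(A) + Tr(A adj(B)) lam + det(B) lam^2
    rhs = det2(A) + tr_prod(A, adj2(B)) * lam + det2(B) * lam**2
    assert det2(ApB) == rhs
```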
|linear-algebra|matrices|multivariable-calculus|determinant|
0
Inequality $\frac{1}{4}(e^{-4} + e^{-1}) \leq \int_1^2 e^{-x^2}dx \leq \frac{1}{2}(e^{-4}+e^{-1})$
Please help me to understand how to prove the inequality \begin{equation} \frac{1}{4}(e^{-4} + e^{-1}) \leq \int_1^2 e^{-x^2}dx \leq \frac{1}{2}(e^{-4}+e^{-1}). \end{equation} Using the mean value theorem we can easily show that \begin{equation} e^{-4} \leq \int_1^2 e^{-x^2}dx \leq e^{-1}. \end{equation} But I completely don't understand how to obtain main inequality.
The equation of the tangent line to the graph of $f(x)=e^{-x^2}$ at $x=1$ is $$y-e^{-1}=2e^{-1}(1-x)$$ as $f'(1)=-2e^{-1}.$ Observe that the point $(3/2,0)$ belongs to that line. By the convexity of $f(x)$ the right triangle with vertices $(1,0),(1,e^{-1})$ and $(3/2,0)$ is located below the graph of $e^{-x^2}.$ Thus $$\int\limits_1^{3/2}e^{-x^2}\,dx > {1\over 2}\cdot {1\over 2}\cdot e^{-1}={1\over 4}e^{-1}\tag{$*$}$$ On the other hand $$\int\limits_{3/2}^2e^{-x^2}\,dx >{1\over 2}e^{-4}\tag{$**$}$$ Adding $(*)$ and $(**)$ gives a slightly stronger inequality than the first one in OP. Concerning the second inequality observe that $$\int\limits_1^2e^{-x^2}\,dx<{1\over 2}\left (e^{-1}+e^{-4}\right )$$ because, by convexity, the graph lies below the chord joining $(1,e^{-1})$ and $(2,e^{-4})$ , so the result is (strictly) stronger than the one in OP. Remark. Considering the tangent line at $3/2$ we can substantially improve $(**)$ $$\int\limits_{3/2}^2e^{-x^2}\,dx>{1\over 6}e^{-9/4}$$ The coefficient ${1\over 3}$ shows up as the tangent line crosses the line $y=0$ at $\left ({3\over 2}+{1\over 3},0\right ).$
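These bounds can be confirmed numerically (a sketch I'm adding, not part of the original answer), approximating the integral with a composite midpoint rule:

```python
import math

def f(x):
    return math.exp(-x * x)

# composite midpoint rule on [1, 2]
N = 100000
h = 1.0 / N
integral = sum(f(1.0 + (i + 0.5) * h) for i in range(N)) * h

lower = 0.25 * (math.exp(-4) + math.exp(-1))   # 1/4 (e^-4 + e^-1)
upper = 0.5 * (math.exp(-4) + math.exp(-1))    # 1/2 (e^-4 + e^-1)
assert lower <= integral <= upper

# the answer's sharper lower bound from the two triangles (*) and (**)
sharper_lower = 0.25 * math.exp(-1) + 0.5 * math.exp(-4)
assert lower < sharper_lower <= integral
```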
|calculus|integration|definite-integrals|
0
Conditional probability using Bayes's theorem
From Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 25, p 87. A crime is committed by one of two suspects, A and B. Initially, there is equal evidence against both of them. In further investigation at the crime scene, it is found that the guilty party had a blood type found in 10% of the population. Suspect A does match this blood type, whereas the blood type of Suspect B is unknown. (a) Given this new information, what is the probability that A is the guilty party? So here is my approach. Let $A$ stands for "A guilty", $G$ for "Individual having the rare matching blood type" and $B$ for "B is guilty". Then using Baye's theorem $$P(A|G)P(G) = P(G|A)P(A)$$ $$P(A|G) = \frac{1 * \frac{1}{2}}{\frac{1}{10}} = \frac{10}{2}$$ Clearly I did something wrong. The solution replaced P(G) with $(\frac{1}{2} + \frac{1}{10})$ . Why am I wrong in saying P(G) is simply $\frac{1}{10}$ because it is the unconditional probability of anybody in the general population having guilt
For sake of simplicity, call the blood type $O$ . We should first compute the probability that blood type $O$ is observed at the scene. Given that $A$ is guilty, the probability that blood type $O$ is observed is $1$ . Otherwise, given that $B$ is guilty, because their blood type is unknown, there is a $\frac{1}{10}$ probability that blood type $O$ is observed. Since our prior assigns $\frac{1}{2}$ to both suspects $A, B$ , then the probability of the evidence is $\frac{1}{2} \left( 1 + \frac{1}{10} \right) = \frac{11}{20}$ . Therefore, the probability that $A$ is guilty increases to $\frac{\frac{1}{2} (1)}{\frac{11}{20}} = \boxed{\frac{10}{11}}$ . This makes sense because the blood type is not common.
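The arithmetic in this answer can be reproduced exactly with rational numbers; a minimal sketch (my addition, not part of the original answer):

```python
from fractions import Fraction

# prior: each suspect guilty with probability 1/2
# likelihood of observing blood type O at the scene:
#   P(O | A guilty) = 1     (A is known to match)
#   P(O | B guilty) = 1/10  (B's type is unknown; 10% base rate)
prior_A = Fraction(1, 2)
prior_B = Fraction(1, 2)
like_A = Fraction(1)
like_B = Fraction(1, 10)

evidence = prior_A * like_A + prior_B * like_B  # total probability of the evidence
posterior_A = prior_A * like_A / evidence       # Bayes' theorem

assert evidence == Fraction(11, 20)
assert posterior_A == Fraction(10, 11)
```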
|conditional-probability|bayes-theorem|
1
Urn contains equal numbers of white and black balls. Probability that there will be as many W balls as B balls among the balls drawn.
We randomly draw an even number of balls from an urn containing $n$ white and $n$ black balls. All distinguishable samples containing an even number of balls, including the sample with a count of $0$ , are equally likely. Determine the probability that there will be as many white balls as black balls among the balls drawn. Ok, after thinking a bit more about it, here is what I found: If we have $n = 1$ , there are $2$ groups of options: group, there are $0$ balls in a sample: $0$ W balls ( $+0$ B balls) group, there are $2$ balls in a sample: $1$ W ball ( $+1$ B ball) If we have $n = 2$ , there are $3$ groups of options: group, there are $0$ balls in a sample: $0$ W balls ( $+0$ B balls) group, there are $2$ balls in a sample: $0$ W balls ( $+2$ B balls) $1$ W ball ( $+1$ B ball) $2$ W balls ( $+0$ B balls) group, there are $4$ balls in a sample: $2$ W balls ( $+2$ B balls) If we have $n = 3$ , there are $4$ groups of options: group, there are $0$ balls in a sample: $0$ W balls ( $+0$
First the easier case, odd $n$ : For any given white ball count, there is one black ball count that is equal, and there are $\frac {n+1}2$ black ball counts that make the total even. Thus the probability that the white and black ball counts are equal is $\frac2{n+1}$ . For even $n$ , things are only slightly more complicated. For a given white ball count, there is again one black ball count that makes the counts equal, but now there are $\frac n2$ black ball counts that make the total even if the white ball count is odd and $\frac n2+1$ if the white ball count is even. Since there are $\frac n2$ odd white ball counts and $\frac n2+1$ even white ball counts, that yields the probability $$ \frac{n+1}{\frac n2\cdot\frac n2+\left(\frac n2+1\right)\left(\frac n2+1\right)}=\frac{2(n+1)}{n^2+2n+2}\;. $$
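Both closed forms can be checked by brute-force enumeration of the $(w,b)$ draw counts (a sketch I'm adding, not part of the original answer):

```python
from fractions import Fraction

def p_equal(n):
    # enumerate all (white, black) draw counts with an even total,
    # each distinguishable sample assumed equally likely
    even_total = [(w, b) for w in range(n + 1) for b in range(n + 1)
                  if (w + b) % 2 == 0]
    equal = [(w, b) for (w, b) in even_total if w == b]
    return Fraction(len(equal), len(even_total))

for n in range(1, 30):
    if n % 2 == 1:
        assert p_equal(n) == Fraction(2, n + 1)
    else:
        assert p_equal(n) == Fraction(2 * (n + 1), n * n + 2 * n + 2)
```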
|probability|
1
In classical logic, why is $(p\Rightarrow q)$ True if $p$ is False and $q$ is True?
Provided we have this truth table where "$p\implies q$" means "if $p$ then $q$": $$\begin{array}{|c|c|c|} \hline p&q&p\implies q\\ \hline T&T&T\\ T&F&F\\ F&T&T\\ F&F&T\\\hline \end{array}$$ My understanding is that "$p\implies q$" means "when there is $p$, there is q". The second row in the truth table where $p$ is true and $q$ is false would then contradict "$p\implies q$" because there is no $q$ when $p$ is present. Why then, does the third row of the truth table not contradict "$p\implies q$"? If $q$ is true when $p$ is false, then $p$ is not a condition of $q$. I have not taken any logic class so please explain it in layman's terms. Administrative note. You may experience being directed here even though your question was actually about line 4 of the truth table instead. In that case, see the companion question In classical logic, why is $(p\Rightarrow q)$ True if both $p$ and $q$ are False? And even if your original worry was about line 3, it might be useful to skim the other quest
I am kinda surprised nobody mentioned this: this truth table corresponds to the weakest (i.e. carries the least amount of information, i.e. being true most of the time (think about tautologies, you can’t infer anything from them since they are true no matter what/in all interpretations)) connective that modus ponens holds true. To be more precise, what I meant is suppose we want both $p, p \rightarrow q \vdash q$ (modus ponens) and $\Gamma \vdash p \Rightarrow \Gamma \models p$ (soundness of the proof system), then we must have the row "T F F". If we additionally want implication to be the weakest 1 connective, then all other rows will have the truth value of T which gives exactly the usual truth table. 1 a theory $\Gamma_1$ is weaker than another theory $\Gamma_2$ iff the set(class) of models of $\Gamma_1$ is a proper superset(class) of $\Gamma_2$ . If the proof system $\vdash$ is complete then this implies the deductive closure $\mathrm{Th}(\Gamma_1) \subset \mathrm{Th}(\Gamma_2)$ .
|logic|propositional-calculus|
0
Probability of next head given sequence of heads and prior on heads distribution
Quant interview: A machine produces a weighted coin that lands on heads with an unknown probability $p$ , and we know that $P(p \leq x)=x^{4}$ . You flip the coin $5$ times, and every time it lands on heads. What is the probability that the next flip will also land on heads? Attempt: Using Bayes, $P(NextHead|5Heads)=\frac{P(6Heads)}{P(Head)}=p^5$ then I took the expectation given the prior distribution and obtained as answer $E(p^5)=1/10$ .
The prior $\mathsf P(p\le x)=x^4$ is the posterior you’d get if you started with a uniform prior $\mathsf P(p\le x)=x$ and then observed $3$ heads. If you then observe $5$ heads, the situation is the same as if you’d started with a uniform prior and observed $3+5=8$ heads. By the rule of succession , the probability that the coin will show heads on the next toss is then $\frac{8+1}{8+2}=\frac9{10}$ .
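The rule-of-succession computation can be verified with exact Beta moments (my sketch, not part of the original answer): the prior $P(p\le x)=x^4$ is the CDF of a $\mathrm{Beta}(4,1)$ distribution, and the desired probability is $E[p^6]/E[p^5]$ under that prior.

```python
from fractions import Fraction

def beta_moment(a, b, k):
    # E[p^k] for p ~ Beta(a, b) with integer parameters,
    # using E[p^k] = prod_{j=0}^{k-1} (a + j) / (a + b + j)
    m = Fraction(1)
    for j in range(k):
        m *= Fraction(a + j, a + b + j)
    return m

# P(next head | 5 heads) = E[p^6] / E[p^5] under the Beta(4, 1) prior
p_heads_given_5 = beta_moment(4, 1, 6) / beta_moment(4, 1, 5)
assert p_heads_given_5 == Fraction(9, 10)
```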
|probability|statistics|
0
Proving nice divisibility
Let $n=10x+y$ where $n$ , $x$ and $y$ are positive integers. Prove that $n$ is divisible by $13$ iff $x+4y$ is divisible by $13$ . I let $n=13k$ , thereafter multiplied $x+4y$ by $10$ to get $10x+40y$ , and $n=10x+y$ . Subtract the two to get $39y$ , which is clearly divisible by $13$ , but I don't know how to structure this proof.
$n=10x+y$ . Is it true that $13\mid n \iff 13\mid(x+4y)$ ? Note that $(40x+4y)-(x+4y)=39x$ and $(x+4y)+39x=4(10x+y)$ . If an integer divides two numbers, it divides their sum and their difference. If $13\mid(x+4y)$ , then since $13\mid 39x$ we get $13\mid 4(10x+y)$ . Since $13\nmid 4$ , it follows that $13\mid 10x+y$ . By similar reasoning, one can prove the other direction.
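The equivalence is also easy to confirm by brute force over a range of values (my addition, not part of the original answer):

```python
def equiv_holds(x, y):
    # the claimed equivalence 13 | 10x + y  <=>  13 | x + 4y for one pair
    return ((10 * x + y) % 13 == 0) == ((x + 4 * y) % 13 == 0)

# exhaustive check over a grid of positive integers
assert all(equiv_holds(x, y) for x in range(1, 200) for y in range(1, 200))
```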
|proof-writing|
0
Conjecture: $\forall n \geq n_0\exists k \geq 0: \gcd(2^k-1, \frac{p_n\#}{6}) = 1$.
Context & Interest. See this MSE post about a twin-prime related topology . Basically $0$ is easily seen to be a generic point in this topology. Every generic point is clearly dense as a singleton subset, meaning its closure is the whole space $\Bbb{Z}/p_n\#$ . In an approach toward potentially proving the twin prime conjecture, I must show that the interval of residues $I = [1, \dots, p_{n+1}^2 - 2]$ is such that its set of elementwise powers $\mathcal{I} = \{ x^k : x \in I, k \geq 0\}$ is also dense in $(\Bbb{Z}, \tau)$ . But because $x = 2,3$ are such that $x^2 \neq 1 \pmod q$ for all $q\geq 5$ , we get reduction to the below question. Or in other words the conjecture below is sufficient to prove density of $\mathcal{I}$ . Conjecture. For all sufficiently large $n$ (something like $n \geq 3$ ); either $\exists k \geq 2$ such that $\gcd(2^k - 1, \frac{p_n\#}{6}) = 1$ , or $\exists k \geq 2$ such that $\gcd(3^k - 1, \frac{p_n\#}{6}) = 1$ , or both. Where $p_n\# := p_{n}p_{n-1} \cdots p_2 p_1$ denotes the primorial.
Yes, your conjecture is true. In particular, for any $n\ge 3$ , then $\exists\, k \geq 3$ such that $\gcd(2^k - 1, \frac{p_n\#}{6}) = 1$ , and $\exists\, k \geq 3$ such that $\gcd(3^k - 1, \frac{p_n\#}{6}) = 1$ . In particular, this occurs with any prime $k$ where $2k + 1 \gt p_n$ . First, for $2^k - 1$ , for any prime $p \mid 2^k - 1$ , then $$2^k \equiv 1 \pmod{p}$$ With the multiplicative order , we get $$m = \operatorname{ord}_{p}(2) \;\;\to\;\; m \mid k$$ Since $k$ is prime, this means that $m = 1$ or $m = k$ . As $m = 1$ doesn't work, this means that $m = k$ . However, by Euler's theorem , we then get for some positive, even integer $j$ that $$k \mid \varphi(p) \;\to\; p - 1 \equiv 0 \pmod{k} \;\to\; p = jk + 1 \;\to\; p \ge 2k + 1 \gt p_n$$ As this applies to all prime factors of $2^k - 1$ , we then get $$\gcd\left(2^k - 1, \frac{p_n\#}{6}\right) = 1$$ A similar argument shows that $\gcd(3^k - 1, \frac{p_n\#}{6}) = 1$ . More generally, $k$ can have up to $1$ factor of $2$ (since
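The answer's recipe — take any prime $k$ with $2k+1 > p_n$ — can be tested computationally for small $n$; this is a sketch I'm adding (not part of the original answer), using only standard-library arithmetic:

```python
from math import gcd

def primes_up_to(limit):
    # simple Eratosthenes sieve
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(400)
for n in range(3, 30):
    p_n = primes[n - 1]
    primorial_over_6 = 1
    for p in primes[2:n]:  # the primes p_3 .. p_n (i.e. p_n# / 6)
        primorial_over_6 *= p
    # pick a prime k with 2k + 1 > p_n, as in the answer
    k = next(q for q in primes if 2 * q + 1 > p_n)
    assert gcd(2**k - 1, primorial_over_6) == 1
    assert gcd(3**k - 1, primorial_over_6) == 1
```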
|abstract-algebra|elementary-number-theory|modular-arithmetic|gcd-and-lcm|perfect-powers|
1
Is it true that [if the premises are false, the conclusion is true]?
My question is, is my proof correct in concluding "If the premises are false, then the conclusion is true"? Please see if my proof is correct Is my proof of [If the premises are false, the conclusion is true] correct? If the premises are false, the conclusion is false. Statement 1 is false because I am cup. cup is an animal. i am an animal The premise (I am cup, cup is an animal) is false, but the conclusion (I am an animal) is true. If there is even one counterexample, the proposition is false. Therefore, proposition 1 is false Negation of proposition 1 is true. The negation of (p->q) is (p and not q) The negation of proposition 1 is The premise is false and the conclusion is true. Proposition 2 is true If (p and q) are true then (p->q) is true thus If the premises are false, the conclusion is true. Statement 3 is true for example I am a cup. cup is god. I am a God Since the premise (I am a cup) is false, the conclusion (I am God) is true.
If the premises are false, the conclusion is true. Not exactly. For any propositions $A$ and $B$ , if $A$ is false, then the implication $(A \to B)$ is true: $~~~~~~(\neg A \to (A \to B))$ We cannot infer from this result that $B$ is true, or that it is false. The Truth Table $A$ is false on row 3 and 4 of this table. There, the implication is true (column 3), regardless of the truth value of $B$ . Formal Proof Using a Form of Natural Deduction Here, we prove: ~A => [A => B] Note that on line 6, we do indeed infer that $B$ is true. This, however, is an intermediate result. At that point in the proof, the premises on lines 1 and 2 have yet to be discharged. The proof is not complete as long as one or more premises are not discharged. On line 7, we discharge the premise on line 2 to obtain the conclusion: A => B. On line 8, the premise on line 1 is discharged to obtain the final conclusion: ~A => [A => B]. Plain text version of proof: 1 ~A Premise 2 A Premise 3 ~B Premise 4 ~A & A Join,
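The tautology $\neg A \to (A \to B)$ can be checked by brute force over all four truth assignments (a quick sketch I'm adding, not part of the original answer):

```python
def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

for A in (False, True):
    for B in (False, True):
        # ~A -> (A -> B) holds in every row of the truth table
        assert implies(not A, implies(A, B))
        # when A is false, A -> B is true regardless of B (vacuous truth)
        if not A:
            assert implies(A, B)
```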
|logic|
0
Inconsistency of capability of random coding in information theory and coding theory
In information theory, it is well known that the capacity of a channel can be achieved asymptotically using the random coding method (Section 7 of Cover & Thomas' textbook ). However, in coding theory, it is only shown that we can apply the random coding method to get a code that achieves the Gilbert-Varshamov bound (I will show how this is related to channel capacity and the inconsistency later). To be precise, denote $B_n(r) =| \{ x \in \mathbb{F}_2^n: \text{wt}(x)\le r\}|$ (the size of the Hamming ball with radius $r$ ) Theorem (G-V bound): There exists a code $\mathcal{C} \subset \mathbb{F}_2^n$ such that the distance is $d$ and the size is $|\mathcal{C}|=\frac{2^n}{2B_n(d-1)}$ . This result can be shown by choosing $x_1,x_2,\dots, x_{|\mathcal{C}|}$ uniformly and independently. For the proof details, see the lecture note . Choose $d=2\delta n$ and take the limit $n\to \infty$ , we have $B_n(d-1) \approx 2^{nh(2\delta)}$ (see the asymptotic property of binomial coefficient , also page 353 of the t
The GV bound is a lower bound on the capacity, so there is no inconsistency. It just means the GV bound is not tight.
|information-theory|coding-theory|
0
Completely multiplicative and square-summable sequences
I have the following statement. Suppose $(a_n)$ is a sequence of complex numbers satisfying the following properties. i) $a_{nm} = a_na_m$ for all positive integers $n, m$ . ii) \begin{align} \sum_{n \geq 1} |a_n|^2 < \infty. \end{align} iii) Not all $a_n$ are zero. Then $|a_n| \leq n^{-1/2}$ for all positive integers $n$ . I have 3 questions that are all related. Is the statement true? I have a proof that I think is correct using Hilbert space theory. Is there a quick elementary proof? How could this statement be generalised to the case of an arbitrary locally compact, Hausdorff abelian group? Edit: noticed a flaw in my proof from a counterexample in the comments. Fixed my proof and replaced $1/n$ by $n^{-1/2}$ .
Given statement (i), statement (ii) is equivalent to $$ \text{(ii')} \quad |a_p| < 1 \text{ for every prime } p, \quad\text{and}\quad \sum_p |a_p|^2 < \infty. $$ The reason is that when we have a convergent series with a completely multiplicative function, it can always be written as an Euler product $$ \sum_{n=1}^\infty |a_n|^2 = \prod_p \bigl( 1 + |a_p|^2 + |a_{p^2}|^2 + \cdots \bigr) = \prod_p \bigl( 1 - |a_p|^2 \bigr)^{-1}. $$ In particular, the bound $|a_n| \le n^{-1/2}$ is not necessarily true, nor is $|a_n| \le n^{-\varepsilon}$ for any $\varepsilon>0$, because we can choose any finite number of primes to "misbehave". For example, if $$ a_p = \begin{cases} 0.99, &\text{if } p=2, \\ 1/p, &\text{if } p\ge3, \end{cases} $$ then $a_{2^k} = 0.99^k = (2^k)^{-\log(1/0.99)/\log 2} \approx (2^k)^{-0.0145}$, far greater than $(2^k)^{-1/2}$; and the exponent $0.0145$ can be made arbitrarily close to $0$ by moving the $0.99$ closer to $1$.
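The counterexample can be sanity-checked numerically. The Python sketch below (assuming the specific choice $a_2 = 0.99$, $a_p = 1/p$ for $p \ge 3$ from above) confirms that the partial sums of $\sum |a_n|^2$ stay bounded by the Euler product while $a_{2^k}$ decays far more slowly than $(2^k)^{-1/2}$:

```python
def a(n):
    # completely multiplicative: a(2) = 0.99, a(p) = 1/p for odd primes p
    val, m, p = 1.0, n, 2
    while m > 1:
        while m % p == 0:
            val *= 0.99 if p == 2 else 1.0 / p
            m //= p
        p += 1 if p == 2 else 2
        if p * p > m and m > 1:          # remaining cofactor is prime
            val *= 0.99 if m == 2 else 1.0 / m
            m = 1
    return val

partial = sum(a(n) ** 2 for n in range(1, 10_000))
assert partial < 62.1                     # the full Euler product is about 62
assert a(2 ** 10) > (2 ** 10) ** -0.5     # 0.99^10 ≈ 0.90 vs 2^-5 ≈ 0.03
```

The bound 62 is just the value of $(1-0.99^2)^{-1}\prod_{p\ge3}(1-p^{-2})^{-1}$ rounded up; every partial sum sits below it.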
|sequences-and-series|number-theory|
0
Exterior derivative for non-alternating forms
Given a manifold $M$ , a differential $k$ -form $\omega$ assigns to each point $p \in M$ a $k$ -covector $\omega_p \in \bigwedge^k \left(T_p^*M\right)$ , where $\bigwedge^k \left(T_p^*M\right)$ is the space of alternating $k$ -tensors on the tangent space $T_pM$ . If $\omega = \sum f_i dx^i$ denotes a 1-form of $M$ , then the exterior derivative $d:\bigwedge^k \left(T_p^*M\right) \rightarrow \bigwedge^{k+1} \left(T_p^*M\right)$ acts on $\omega$ to give a 2-form: $d\omega = \sum \frac{\partial f_i}{\partial x^j} dx^j \wedge dx^i$ . I wonder if there is an analogous construction of "symmetric" $k$ -forms. By a symmetric $k$ -form I mean a map $\sigma$ that assigns to each point $p \in M$ a $k$ -covector $\sigma_p$ that is a symmetric $k$ -tensor of the tangent space $T_pM$ . If $\sigma = \sum g_i dx^i$ , maybe the exterior derivative of this symmetric 1-form could be $d\sigma = \frac{\partial g_i}{\partial x^j} dx^j \vee dx^i$ , where $dx^j \vee dx^i$ could be analogous to $dx^j \wedge dx^i$ , with $\vee$ denoting the symmetric product.
Not without more structure, no. Computing directly the transformation of the map under a change of coordinates shows that the proposed map in fact depends on the choice of coordinates. If a manifold $M$ (of general dimension) is equipped with a linear connection $\nabla$ , however, one gets for free a map $$\Gamma(\operatorname{Sym}^k T^*M) \to \Gamma(\operatorname{Sym}^k T^*M \otimes T^*M), \qquad \Phi \mapsto \nabla \Phi ,$$ and symmetrizing gives a manifestly coordinate-independent map $$D : \Gamma(\operatorname{Sym}^k T^*M) \to \Gamma(\operatorname{Sym}^{k + 1} T^* M), \qquad \Phi \mapsto \operatorname{Sym} \nabla \Phi .$$ In abstract index notation, $D$ is $\Phi_{i_1 \cdots i_k} \mapsto \Phi_{(i_1 \cdots i_k, i_{k + 1})}$ ; in index-free notation, $D$ is $$D(\Phi)(X_1, \ldots, X_{k + 1}) = \frac1{k + 1} [(\nabla_{X_1} \Phi)(X_2, \ldots, X_{k + 1}) + \cdots + (\nabla_{X_{k + 1}} \Phi)(X_1, \ldots, X_k)] ;$$ in a coordinate frame, $D$ is $$D(\Phi)_{i_1 \ldots i_{k + 1}} = \partial_{(i_1} \Phi_{i_2 \cdots i_{k + 1})} - k \, \Gamma^l_{(i_1 i_2} \Phi_{|l| i_3 \cdots i_{k + 1})} .$$
|differential-geometry|algebraic-topology|manifolds|
1
Looking for a counterexample when dropping one of the constraints (Linear algebra)
I want to find a counterexample for the following "Theorem": Let $V \neq 0$ be a finite-dimensional $K$-vector space and $L \subset \mathfrak{gl}(V)$ a linear subspace. If all $x \in L$ are nilpotent as maps $V \rightarrow V$ then there is a $v \in V, v \neq 0$ such that $\forall x \in L: x(v) = 0$ . When $L$ is a Lie subalgebra of $\mathfrak{gl}(V)$ this statement is supposedly true, according to my lecture notes, but not true if it's just a linear subspace of $\mathfrak{gl}(V)$ . I couldn't come up with a counterexample though, so that's why I am asking for help. Certainly counterexamples can't be abelian, as then they'd automatically be Lie subalgebras. So they also, in particular, cannot be one-dimensional subspaces. I tried to construct something with two nilpotent matrices whose anti-commutator vanishes, so that any power of their linear combinations consists of only powers of themselves again, but so far to no avail. There was always a vector in $V$ mapped to $0$ by all elements.
Some searching for "Linear spaces of nilpotent matrices" brings up Causa, Antonio, Riccardo Re, and Titus Teodorescu, "Some remarks on linear spaces of nilpotent matrices." Le Matematiche 53.3 (1998): 23-32, which contains the following space giving the desired counterexample: $$V=\{ M(s, t)= \begin{pmatrix} 0 & s & 0\\ -t & 0 & s \\ 0 & t & 0 \end{pmatrix}| s, t \in \mathbb{R} \}.$$ Let's check. Clearly, $V$ is a vector space. Now, we compute $$M(s,t)^2= \begin{pmatrix} -st & 0 & s^2\\ 0 & 0 & 0 \\ -t^2 & 0 & st \end{pmatrix}, M(s,t)^3= \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$ So all $M(s,t)$ are indeed nilpotent. Finally, the null space of $M(1,0)$ is spanned by $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ and the null space of $M(0,1)$ by $\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ . Note: This counterexample is the smallest possible, as for 2 by 2 matrices the maximal dimension of a vector space of nilpotent matrices is 1, as explained here .
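As a quick numerical check of the two claims (every element is nilpotent, and there is no common null vector), here is a Python/NumPy sketch, not part of the cited paper:

```python
import numpy as np

def M(s, t):
    # the matrix M(s, t) from the counterexample space
    return np.array([[0.0,  s, 0.0],
                     [ -t, 0.0,  s],
                     [0.0,  t, 0.0]])

# every element is nilpotent: M(s, t)^3 = 0
for s, t in [(1, 0), (0, 1), (2, -3), (1.5, 0.5)]:
    assert np.allclose(np.linalg.matrix_power(M(s, t), 3), 0)

# but there is no common null vector: the only v with
# M(1,0) v = M(0,1) v = 0 is v = 0
stacked = np.vstack([M(1, 0), M(0, 1)])     # 6x3 linear system
rank = np.linalg.matrix_rank(stacked)
assert rank == 3                            # joint null space is {0}
```

Rank 3 for the stacked system means the intersection of the two null spaces is trivial, so no nonzero vector is killed by the whole space.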
|linear-algebra|lie-algebras|nilpotence|
1
Have I shown these linear functionals span the annihilator?
Full exercise. Suppose $V$ is finite-dimensional and $U$ is a subspace of $V$ . Then $$ \text{dim}\ U^0 = \text{dim}\ V - \text{dim}\ U. $$ My Question. My strategy is exactly how the first proof in this post goes about showing the above result: https://math.stackexchange.com/a/2475382/645756 However, the way they approached showing that the linear functionals span the annihilator $U^0$ is different than my demonstration below. Let $\varphi \in U^0$ . Then for all $u \in U$ , we have $\varphi (u) = 0$ . Assuming we have already shown $\varphi_{m+1}, \ldots, \varphi_{n} \in U^0$ , we have that \begin{align} \varphi (u) = 0 &= a_{m+1} \cdot 0 + \cdots + a_n \cdot 0 \\ &= a_{m+1} \cdot \varphi_{m+1}(u) + \cdots + a_n \cdot \varphi_{n}(u) \end{align} This shows that $\varphi_{m+1}, \ldots, \varphi_{n}$ spans $U^0$ . Is this correct? EDIT. I suspect what I have shown is incorrect. Following the logic I laid out, I can say $$ \varphi (u) = 0 = a_{m+1} \cdot 0 = a_{m+1} \cdot \varphi_{m+1}(u), $$ which by the same reasoning would show that $\varphi_{m+1}$ alone spans $U^0$, which is absurd.
Below is in response to my conversation with CyclotomicField. Showing $\varphi_{m+1}, \ldots, \varphi_n \in U^0$ . Let $V$ be finite-dimensional. Then $U$ must be finite-dimensional. Let $u_1, \ldots, u_m$ be a basis of $U$ and extend this basis to a basis of $V$ : $$u_1, \ldots, u_m, u_{m+1}, \ldots u_n$$ Thus, we have $\text{dim} \ V = n$ , $\text{dim} \ U = m$ . We want to show $\text{dim} \ U^0 = n - m$ . Consider the dual basis of the basis of $V$ given by the list $\varphi_1, \ldots, \varphi_m, \varphi_{m+1}, \ldots, \varphi_n$ where each linear functional is defined by $$ \varphi_j(u_k) = \begin{cases} 1, & \text{if} \ k=j, \\ 0, & \text{if} \ k \neq j. \end{cases} $$ Then, for any $u \in U$ and $j = m+1, \ldots, n$ \begin{align} u = a_1u_1 + \cdots + a_mu_m & \implies \\ \varphi_j(u) = a_1 \varphi_j(u_1) + \cdots + a_m \varphi_j(u_m) & \implies \\ \varphi_j(u) = a_1 \cdot 0 + \cdots + a_m \cdot 0 = 0 \end{align} That is, $\varphi_j(u) = 0$ for all $u \in U$ where $j = m+1, \ldots, n$ ; hence $\varphi_{m+1}, \ldots, \varphi_n \in U^0$ .
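The computation can be illustrated numerically. In this Python/NumPy sketch (with a random basis of $\mathbb{R}^n$ standing in for $u_1,\ldots,u_n$, an illustration rather than a proof), the dual functionals are the rows of the inverse basis matrix, and the last $n-m$ of them vanish on $U$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
B = rng.normal(size=(n, n))      # columns are a (random) basis u_1, ..., u_n
phi = np.linalg.inv(B)           # row j of B^{-1} is the dual functional phi_{j+1}

assert np.allclose(phi @ B, np.eye(n))   # phi_j(u_k) = delta_{jk}

u = B[:, :m] @ rng.normal(size=m)        # an arbitrary element of U = span(u_1, ..., u_m)
assert np.allclose(phi[m:] @ u, 0)       # phi_{m+1}(u) = ... = phi_n(u) = 0
```

The first assertion is exactly the dual-basis condition, and the second mirrors the chain of implications above.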
|linear-algebra|solution-verification|vector-spaces|linear-transformations|
1
Sum of real numbers in intervals
My question is very simple: I consider two closed intervals of real numbers $[a,b]$ and $[c,d]$. I want to show that any number contained in $[a+c, b+d]$ can be written as the sum of two real numbers, say $x$ of $[a,b]$ and $y$ of $[c,d]$. I have tried considering examples, but I require a general proof.
Hint: Try representing any number in $[a+c, b+d]$ as the sum of $3$ numbers $a+c+k$, and see if you can find a way to combine $k$ with $a$ or $c$ in general.
|real-analysis|
0
$Lang$ of a Hyperdoctrine
$$Form \dashv Lang : Hyp \rightarrow FOL$$ We consider an object of $FOL$ to be a signature made of sorts, function symbols and relation symbols, together with a set of axioms of first-order logic. We consider an object of $Hyp$ to be a first-order hyperdoctrine formed by a cartesian category $C$ and a functor $C\rightarrow HA'$ where $HA'$ is the category of Heyting algebras with Heyting algebra morphisms, with left and right adjoints adjoined (unlucky naming) such that these adjoints satisfy the Beck-Chevalley and Frobenius conditions. The functor $Form$ (assuming we know how to make $FOL$ a category --- I would appreciate a comment on this) on objects constructs the free hyperdoctrine out of a signature and quotients à la Lindenbaum. For that we make $C$ the category of contexts out of our signature, and our functor $F : C \rightarrow HA'$ constructs the free Heyting algebra where $F\ [x_1:\sigma_1,\dots,x_n:\sigma_n]$ is thought of as the set of formulas that depend on $x_1,\dots,x_n$.
You include one relation symbol of sort $X$ for each element of the Heyting algebra $F(X)$ . The arrows in the category FOL are interpretations between theories.
|logic|category-theory|categorical-logic|
1
Almost-Everywhere definition
I was doing a homework for measure theory, and had a question involving two functions being equal almost everywhere. I've solved the problem, but am not sure about this other example I thought of. Take $f,g : \{1,2\} \rightarrow \mathbb{R}$ to be two functions such that $f(1) = f(2) = 0$ , and $g(1) = 0, g(2) = 1$ . Then if we take $M = \{\emptyset, \{1,2\} \}$ , with $\mu(\emptyset) = 0, \mu( \{1,2\}) = 1$ , then the set of points where $f \neq g$ is obviously $\{2\}$ . However, I am unsure whether these functions are equal almost everywhere, because $\{2\}$ is not a measurable set. I am inclined to say no, because this is a complete measure, and my homework claims that if a function is equal to a measurable function almost everywhere with respect to a complete measure, then it is measurable; but clearly $g$ is not measurable. Would these functions be not equal almost everywhere because the set on which they aren't equal is non-measurable? Any clarification would be appreciated.
Your $g$ is not a measurable function on $(\{1,2\},M)$ , so the question of it being equal to $f$ almost everywhere does not even arise. Your measure is already complete, so you cannot complete it further to make $g$ measurable.
|analysis|measure-theory|
0
calculate definite integrals in terms of area
(a) Graph the function $$ f(x) = \begin{cases} 2 - \sqrt{4 - x^2}, & -2 \leq x \leq 2 \\ |x - 5| - 1, & x > 2 \end{cases} $$ on the interval $[-2, 6]$ and use it to calculate the definite integral $$ \int_{-2}^{6} f(x) \, dx $$ in terms of area. I graphed the function: it is a semicircle that lies on the x-axis plus two line segments. Then I calculated it by area: the first piece is a half square (the $4\times 2$ rectangle) minus a semicircle, which is $8-2\pi$, and the other two triangles are $2$ and $1$, so the total area is $11-2\pi$. But if I calculate the definite integral on the interval $[-2,6]$, the answer is $9-2\pi$. I feel confused: should I count the area below the x-axis as negative in this question?
So, a visual of that function is below: positive area in blue, negative in red: Should I count the area below the x-axis as negative in this question? Yes. The integral gives the signed area. You are being told to calculate the integral, using the area as the motivation (but that doesn't mean the area is the answer). So the bit on $[4,6]$ needs to be subtracted, not added; correcting for this fixes your answer. Conversely, if told to find the (total, unsigned) area, it would not be enough necessarily to calculate the integral.
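A crude midpoint-rule computation (a Python sanity check, not part of the graphing exercise) confirms the distinction: the signed integral comes out near $9-2\pi$, while the total unsigned area is near $11-2\pi$:

```python
import math

def f(x):
    # the piecewise function from the problem
    return 2 - math.sqrt(4 - x * x) if -2 <= x <= 2 else abs(x - 5) - 1

# midpoint rule on [-2, 6]
N = 200_000
h = 8.0 / N
signed = sum(f(-2 + (i + 0.5) * h) for i in range(N)) * h
unsigned = sum(abs(f(-2 + (i + 0.5) * h)) for i in range(N)) * h

assert abs(signed - (9 - 2 * math.pi)) < 1e-3    # ≈ 2.717
assert abs(unsigned - (11 - 2 * math.pi)) < 1e-3  # ≈ 4.717
```

The two values differ by exactly twice the area of the triangle on $[4,6]$, which is the piece that gets subtracted in the signed integral.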
|calculus|definite-integrals|
0
Concrete example of non-invertible element in a Clifford algebra?
Are there simple examples of non-invertible elements (i.e. zero divisors) in a real Clifford algebra $C(V)$ other than $0$ ? Notation: $V$ has a positive definite metric $q$ , and $uv+vu=-2q(u, v)$ . Therefore, if $V$ is $1$ -dimensional, $C(V)$ is isomorphic to ${\mathbb C}$ ; if $V$ is $2$ -dimensional, $C(V)$ is isomorphic to ${\mathbb H}$ . Thus we have to search $C(V)$ with $V$ of dimension at least $3$ ...
Let $e_1,\dotsc,e_n$ be the standard (orthonormal) basis. If $n$ is at least 3, then $e_1$ and $e_2e_3$ commute and both square to $-1$ so $$ (e_1 + e_2e_3)(e_1 - e_2e_3) = e_1^2 - (e_2e_3)^2 = 0 $$ so both $e_1\pm e_2e_3$ are not invertible. By the same logic $$ (1 + e_1e_2e_3)(1 - e_1e_2e_3) = 0. $$
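These identities can be machine-checked with a tiny Clifford-algebra multiplier. This is a Python sketch written for this answer (blades are tuples of generator indices, with the convention $e_i^2 = -1$ matching the question):

```python
def blade_mul(a, b):
    # multiply two basis blades, returning (sign, canonical blade)
    s, sign = list(a) + list(b), 1
    # bubble sort; each transposition of distinct generators flips the sign
    for _ in range(len(s)):
        for j in range(len(s) - 1):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    out, i = [], 0
    while i < len(s):                    # contract e_i e_i = -1
        if i + 1 < len(s) and s[i] == s[i + 1]:
            sign, i = -sign, i + 2
        else:
            out.append(s[i]); i += 1
    return sign, tuple(out)

def mv_mul(x, y):
    # multiply multivectors given as {blade: coefficient} dicts
    z = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            sign, b = blade_mul(ba, bb)
            z[b] = z.get(b, 0) + sign * ca * cb
    return {b: c for b, c in z.items() if c != 0}

one, e1, e23, e123 = (), (1,), (2, 3), (1, 2, 3)
print(mv_mul({e1: 1, e23: 1}, {e1: 1, e23: -1}))      # {} : the zero element
print(mv_mul({one: 1, e123: 1}, {one: 1, e123: -1}))  # {} as well
```

Both products come out as the empty dict, i.e. zero, confirming that $e_1 \pm e_2e_3$ and $1 \pm e_1e_2e_3$ are zero divisors.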
|clifford-algebras|
1
Uniqueness of Pseudo inverse
Given $X\succeq0$ , we can write the SVD decomposition as: \begin{align} X&= U \begin{bmatrix}\Omega&0\\0&0 \end{bmatrix}U^T, \end{align} where $U$ is an orthogonal matrix and $\Omega\succ0$ . My question is whether \begin{align} Y&= U \begin{bmatrix}\Omega^{-1} &0\\0&0 \end{bmatrix}U^T, \end{align} is the unique Moore-Penrose pseudo inverse of $X$ .
If a matrix $M$ is a Moore-Penrose inverse of $A$ , then $$AMA=A,~~~ MAM=M,~~~(AM)^{'}=AM,~~~(MA)^{'}=MA.$$ Leaving existence aside, we prove only the uniqueness. Suppose there are two candidate MP inverses $M_1,M_2$ of $A$ . Then $$M_1 = M_1 A M_1 = M_1 (AM_1)^{'}=M_1 M_1^{'}A^{'} = M_1 M_1^{'} A^{'} M_2^{'} A^{'}$$ $$= M_1 M_1^{'} A^{'} A M_2=M_1A M_1 A M_2 = M_1 A M_2=M_1 A M_2 A M_2 = M_1 A A^{'} M_2^{'} M_2$$ $$= A^{'} M_1^{'} A^{'} M_2^{'} M_2 = A^{'} M_2^{'} M_2 = M_2AM_2=M_2.$$ (The fourth equality uses $A = AM_2A$, hence $A^{'} = A^{'}M_2^{'}A^{'}$; the remaining steps repeatedly apply the four defining identities and their transposes.)
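Applied to the question: the candidate $Y$ built from the eigendecomposition satisfies all four Penrose conditions, so by uniqueness it is the pseudoinverse. A quick Python/NumPy check (with an arbitrary rank-3 example standing in for $X$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 3
U, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal U
omega = np.diag([3.0, 2.0, 0.5])               # positive definite block
D = np.zeros((n, n)); D[:r, :r] = omega
X = U @ D @ U.T                                # X >= 0 with rank 3

Dinv = np.zeros((n, n)); Dinv[:r, :r] = np.linalg.inv(omega)
Y = U @ Dinv @ U.T                             # the candidate pseudoinverse

# Y satisfies the four Penrose conditions ...
assert np.allclose(X @ Y @ X, X)
assert np.allclose(Y @ X @ Y, Y)
assert np.allclose((X @ Y).T, X @ Y)
assert np.allclose((Y @ X).T, Y @ X)
# ... and agrees with numpy's pinv, consistent with uniqueness
assert np.allclose(Y, np.linalg.pinv(X))
```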
|linear-algebra|matrices|
0
Arguments for Galois closure of $\mathbb{Q}(\sqrt[3]{2})$
The standard argument for why $K = \mathbb{Q}(\sqrt[3]{2})$ is not the splitting field of $f = x^3 - 2$ relies on us implicitly choosing a complex embedding of $K$ , or in other words choosing the real third root of $2$ as a solution for $f$ . Then we argue that since $K$ is a real field it cannot contain the other two roots which are complex. This sounds like a bit of cheating, though, since a priori, until we choose an embedding, the roots of $f$ are indistinguishable. So my question is whether there is a similar argument for $\mathbb{Q}(\alpha)$ not being the splitting field for $\alpha$ a non-real root of $f$ , e.g. $\alpha = \omega \sqrt[3]{2}$ where $\omega$ is a primitive root of unity. Or in general, whether the not-a-splitting-field argument must necessarily rely on a particular choice of a complex embedding.
Arturo's answer is nicer in my opinion, but here is another solution I found: First identify the roots of $x^3-2$ over $\Bbb C$ . These are \begin{align} a&=\sqrt[3]2\\ b&=\sqrt[3]2\omega=-\frac{1}{2^{2/3}}+\frac{\sqrt3}{2^{2/3}}i\\ c&=\sqrt[3]2\omega^2=-\frac{1}{2^{2/3}}-\frac{\sqrt3}{2^{2/3}}i,\\ \end{align} where $\omega=-\frac12+\frac{\sqrt3}{2}i$ . Let $L$ be one of the extensions $\Bbb Q[a]$ , $\Bbb Q[b]$ , or $\Bbb Q[c]$ . Suppose $L$ contains two or more of $a$ , $b$ , or $c$ . There are $3$ cases: Case 1: $L$ contains both $a$ and $b$ . Case 2: $L$ contains both $a$ and $c$ . Case 3: $L$ contains both $b$ and $c$ . If $L$ contains both $a$ and $b$ , then $L$ contains $a^2b=-1+\sqrt 3i$ , so it also contains $\sqrt 3i$ . But $\sqrt 3i$ is a root of $x^2+3$ , which is irreducible over $\Bbb Q$ . Hence $$3=[L:\Bbb Q]=[L:\Bbb Q[\sqrt 3i]][\Bbb Q[\sqrt 3i]:\Bbb Q],$$ which implies $2|3$ . This shows cases $1$ and $2$ are impossible (for case $2$ , use $a^2c=-1-\sqrt 3i$ instead). For case $3$ , you can similarly show that $b^2c=-1+\sqrt 3i$ , so again $\sqrt 3i \in L$ , which yields the same contradiction.
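The arithmetic behind the products $a^2b$ and $b^2c$ can be sanity-checked numerically (a Python sketch, just confirming the complex-number identities used above):

```python
import math

w = complex(-0.5, math.sqrt(3) / 2)     # primitive cube root of unity
a = 2 ** (1 / 3)
b, c = a * w, a * w ** 2

for z in (a, b, c):                     # all three are roots of x^3 - 2
    assert abs(z ** 3 - 2) < 1e-9

target = -1 + math.sqrt(3) * 1j         # 2w, which exposes sqrt(3)i
assert abs(a * a * b - target) < 1e-9   # a^2 b = 2w  (case 1)
assert abs(b * b * c - target) < 1e-9   # b^2 c = 2w  (case 3)
```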
|abstract-algebra|field-theory|galois-theory|extension-field|splitting-field|
0
Theory of Computation (Regular/Non-Regular proof)
Suppose that L0, L1, L2 are languages over the same alphabet and that L0 ⊆ L1 ⊆ L2. Is it true that if L0 and L2 are regular, then L1 must be regular as well? (By regular I mean the language is accepted by a finite-state machine.) Suppose L0 = { $a^n$ | n = 2} and L2 = { $a^n$ | $n \ge 0$ }. How can I find a set for L1 that is NOT regular when there are no parameters or syntax on what the machine accepts or not? I'm thinking L1 = { $a^n$ | n is a prime number }, but I don't know how to prove it.
The existence of a non-regular language between the languages $L_0=\{aa\}$ and $L_2=a^*$ can also be proved without the pumping lemma. There are only countably many regular expressions over the alphabet $\{a\}$ , because each of these regular expressions is a word in the alphabet $\{ ),(,+,\cdot,^*,\emptyset,a\}$ . On the other hand, there are uncountably many languages between $L_0$ and $L_2$ (because there are uncountably many subsets of $\mathbb{N}$ and each such subset $K$ yields a language $L_1=L_0 \cup \{a^{i+3} \mid i \in K\}$ ). This proves that there exists a non-regular language $L_1$ between $L_0$ and $L_2$ , but does not provide any particular language $L_1$ .
|computer-science|computational-mathematics|
0
Sum of real numbers in intervals
My question is very simple: I consider two closed intervals of real numbers $[a,b]$ and $[c,d]$. I want to show that any number contained in $[a+c, b+d]$ can be written as the sum of two real numbers, say $x$ of $[a,b]$ and $y$ of $[c,d]$. I have tried considering examples, but I require a general proof.
Brainstorm: If $a+c \le w \le b+d$ then $w$ is "some proportion" between $a+c$ and $b+d$ . Pick $x,y$ so that they are the same proportion between $a$ and $b$ , and between $c$ and $d$ , respectively. How? Let $r = \frac {w- (a+c)}{(b+d)-(a+c)}$ . That's the "some proportion". Note: $0 \le r \le 1$ . Let $x = a + r\times (b-a)$ and $y = c+ r\times (d-c)$ . Those are the same proportion. $a=a + 0\times (b-a) \le x = a+r\times(b-a) \le a+ 1\times (b-a) = a+ b -a=b$ . So $x \in [a,b]$ . The same argument shows $y \in [c,d]$ . And $x + y = a + r(b-a) + c + r(d-c) = $ $(a+c) + r[(b+d) - (a+c)]=$ $(a+c) + \frac {w-(a+c)}{(b+d)-(a+c)}\cdot [(b+d) - (a+c)]=$ $(a+c) + (w - (a+c)) = w$ . ..... Or if you don't want to do so much math: $a+c \le w \le b+ d\iff a \le w-c \le b+d-c$ . And $a+c \le w \le b+d \iff a+c-b \le w-b \le d$ . Case 1: $w-c \le b$ . Then $w-c \in [a,b]$ and $c \in [c,d]$ . Let $x=w-c$ and $y=c$ . Case 2: $w-c > b$ . Then $w-b > c$ . So $w-b\in [c,d]$ and $b\in [a,b]$ . Let $x = b$ and $y= w-b$ .
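The "same proportion" construction can be written out as a few lines of code (a Python sketch of the argument above, with a guard for the degenerate case $a=b$, $c=d$):

```python
def split(w, a, b, c, d):
    """Write w in [a+c, b+d] as x + y with x in [a, b], y in [c, d]."""
    assert a <= b and c <= d and a + c <= w <= b + d
    span = (b + d) - (a + c)
    r = (w - (a + c)) / span if span > 0 else 0.0   # the "proportion"
    x = a + r * (b - a)
    y = c + r * (d - c)
    return x, y

x, y = split(4.5, 0, 2, 1, 4)   # w = 4.5 in [1, 6]
assert 0 <= x <= 2 and 1 <= y <= 4
assert abs(x + y - 4.5) < 1e-12
```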
|real-analysis|
0