{"_id": "A.301", "text": "Inequality between norm 1,norm 2 and norm $\\infty$ of Matrices Suppose $A$ is a $m\\times n$ matrix. Then Prove that, $\\begin{equation*} \\|A\\|_2\\leq \\sqrt{\\|A\\|_1 \\|A\\|_{\\infty}} \\end{equation*}$ I have proved the following relations: $\\begin{align*} \\frac{1}{\\sqrt{n}}\\|A\\|_{\\infty}\\leq \\|A\\|_2\\leq\\sqrt{m}\\|A\\|_{\\infty}\\\\ \\frac{1}{\\sqrt{m}}\\|A\\|_{1}\\leq \\|A\\|_2\\leq\\sqrt{n}\\|A\\|_{1} \\end{align*}$ Also I feel that somehow Holder's inequality for the special case when $p=1$ and $q=\\infty$ might be useful.But I couldn't prove that. Edit: I would like to have a prove that do not use the information that $\\|A\\|_2=\\sqrt{\\rho(A^TA)}$ Usage of inequalities like Cauchy Schwartz or Holder is fine. "} {"_id": "A.302", "text": "$n$-th root of a complex number I am confused about the following problem. With $w=se^{i{\\phi}}$, where $s\\ge 0$ and $\\phi \\in \\mathbb{R}$, solve the equation $z^n=w$ in $\\mathbb{C}$ where $n$ is a natural number. How many solutions are there? Now my approach is simply taking the $n$-th root which gives $$z=\\sqrt[n]{s}e^{\\frac{i\\varphi}{n}}$$ However, it seems that this problem is asking us to show the existance of the $n$-th root. Can I assume that the $n$-th root of a complex number already exists? Moreoover, would I be correct to say that there is only one solution which is given above? "} {"_id": "A.303", "text": "Number of non-commutative Lie algebras of dimension 2 Theorem- Up to isomorphism, the only noncommutative Lie algebra of dimension 2 is that with basis $x , y$ and bracket determined by $[x,y] = x$. I understand that all vector spaces of dimension 2 over the field $K$ are isomorphic to each other. 
So the number of Lie algebras of dimension 2 over a field $K$ is determined by the number of possible bilinear operations $[\\cdot,\\cdot]:\\ V \\times V \\rightarrow V$ satisfying the conditions $a)$ $[x,x]=0$ for all $x\\in V$ $b)$ $[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0$ for all $x,y,z \\in V$. A bilinear operation, on the other hand, is determined by the elements to which the pairs of basis elements are mapped. And since in a Lie algebra $[x,x]=[y,y]=0$ and $[y,x]=-[x,y]$, we only need to determine $[x,y]$. Now how do we prove that $[x,y]=x$ and $[y,x]=-x$ always, and why can't it be $[y,x]=y$ or any other vector? "} {"_id": "A.304", "text": "Elementary proof that a non-orientable manifold of real dimension $2$ does not admit a quasi-complex structure. Is there an easy proof that a non-orientable real surface $X$ does not admit a quasi-complex structure? The proof I know is to observe that any quasi-complex structure on a real surface $X$ necessarily satisfies the integrability condition $$[T_X^{0,1}, T_X^{0,1}] \\subset T_X^{0,1}$$ of the Newlander-Nirenberg theorem, because $T_X^{0,1}$ is a $1$-dimensional complex vector bundle, and the bracket $[-,-]$ is alternating, i.e. it vanishes on $T_X^{0,1}$. So by the Newlander-Nirenberg theorem, $X$ admits a complex structure, and complex manifolds have to be orientable. However, the Newlander-Nirenberg theorem is a deep theorem and feels a bit overkill. Also I don't really see why there cannot be a quasi-complex structure. Is there a more elementary proof to convince myself? "} {"_id": "A.305", "text": "What will be the value of the floor function of $\\lim\\limits_{N\\to\\infty}\\left\\lfloor\\sum\\limits_{r=1}^N\\frac{1}{2^r}\\right\\rfloor$ What would be the value of $\\lim\\limits_{N\\to\\infty}\\left\\lfloor\\sum\\limits_{r=1}^N\\frac{1}{2^r}\\right\\rfloor$? Would it be $1$ or would it be $0$? 
The formula I use for this is the one for an infinite geometric series, that is, $\\frac{a}{1-r}$, but I have no clue how to find out what the floor value of the above expression would be. P.S. I am a high school student, so please explain in simple terms; and yes, I do know basic calculus. EDIT: I'm sorry, it was given as $\\lim_{N \\to \\infty}$ in the problem. "} {"_id": "A.306", "text": "On the irrationality of the Euler-Mascheroni constant I saw one of the expansions of the Euler-Mascheroni constant in terms of the Meissel-Mertens constant as a consequence of Mertens' theorem. $$ B = \\gamma + \\sum_p \\left\\{ \\log\\left( 1 - \\frac 1p\\right) + \\frac 1p\\right\\}$$ This is that expansion. Now I don't understand why it is difficult to prove the irrationality of the Euler-Mascheroni constant. Since we have infinitely many prime numbers, the sum over all those primes in the above equation, if it converges, must be irrational; so why is this not considered a proof of irrationality? "} {"_id": "A.307", "text": "Carmichael function and the largest multiplicative order modulo n By definition, the Carmichael function maps a positive integer $n$ to the smallest positive integer $t$ such that $a^t\\equiv1\\pmod n$ for all integers $a$ with $\\gcd(a,n)=1$. It is denoted $\\lambda(n)$. The Wikipedia page on the Carmichael function states that $\\lambda(n)=\\max\\{\\operatorname{ord}_n(a):\\gcd(a,n)=1\\}$. My question is: why is this true? In other words, why is it the case that there always exists an integer $x$ coprime to $n$ with $\\operatorname{ord}_n(x)=\\lambda(n)$? "} {"_id": "A.308", "text": "Riemann's definition of the zeta function I am having trouble understanding Riemann's definition of the zeta function, and I will need to give a brief summary here before I can get to my question. 
In his 1859 paper, Riemann derived the integral representation $$\\zeta(s)=\\sum_{n=1}^\\infty\\frac{1}{n^s}=\\frac{1}{\\Gamma(s)}\\int_0^\\infty \\frac{x^{s-1}}{e^x-1}dx$$ that is valid for $\\mbox{Re}(s)\\gt 1$, and then modified the integral in order to define a function that is defined for all complex values of $s$, except $s=1$, where it has a simple pole. The extension is given by $$\\zeta(s)=\\frac{\\Gamma(1-s)}{2\\pi i}\\int_C \\frac{(-z)^s}{e^z-1}\\frac{dz}{z}$$ where $C$ is a \"Hankel contour\", that is, a path that travels from $+\\infty $ at a small distance $\\epsilon$ above the positive $x$-axis, circles around the origin once in counterclockwise direction with a small radius $\\delta$, and returns to $+\\infty$ traveling at distance $\\epsilon$ below the positive real axis. Taking the limit as $\\epsilon\\rightarrow 0$ and $\\delta \\rightarrow 0$, one can see that the integral $$\\int_C \\frac{(-z)^s}{e^z-1}\\frac{dz}{z}$$ becomes $$(e^{i\\pi s}-e^{-i\\pi s})\\int_0^\\infty\\frac{x^{s-1}}{e^x-1}dx$$ and then the rest follows easily from known identities satisfied by the Gamma function. While the original real integral over $[0,\\infty)$ is clearly divergent if $\\mbox{Re}(s)\\leq 1$, the contour integral over $C$ is defined for all complex $s$, because the path stays away from the singularity at $z=0$ and from the branch cut along the positive $x$-axis. My problem is understanding why the integral over $C$ does not depend on $\\epsilon$ and $\\delta$, so that we can keep them at a safe positive distance from the singularities for the definition, but we can take the limit for the purpose of evaluating the integral. I know that by Cauchy's theorem we can modify a path of integration (without changing the value of the integral) starting and ending at the same point as long as we do not cross any singularity, but this path starts and ends at infinity, so I am not sure how to rigorously proceed using Cauchy's theorem. 
Even if I start the path at $R+i\\epsilon$ and end it at $R-i\\epsilon$ for some large $R$, the path's starting and ending points change as $\\epsilon$ changes. "} {"_id": "A.309", "text": "Number of solutions of an equation over a finite field I have a question regarding the number of solutions of an equation over a finite field $\\mathbb{F}_p$. First of all, consider the equation $x^3=a$ over $\\mathbb{F}_p$, where $p$ is a prime such that $p\\equiv 2 (\\text{mod }3)$. The book that I'm currently reading says that this equation has exactly one solution in $\\mathbb{F}_p$ for every $a\\in \\mathbb{F}_p$, because $\\gcd(3,p-1)=1$, but the book does not prove this. Unfortunately, this doesn't convince me enough. Is there a convincing, elementary, straightforward proof of why this is true? "} {"_id": "A.310", "text": "Finding positive integer solutions to $\\frac{4}{x}+\\frac{10}{y}=1$ Find the positive integer solutions of: $\\frac{4}{x} + \\frac{10}{y} = 1$ I had calculated the solutions manually, but it was a very tedious process. Is there a better way to do this? "} {"_id": "A.312", "text": "Proving $\\left\\lfloor \\frac{\\left\\lfloor a/b \\right\\rfloor}{c} \\right\\rfloor=\\left\\lfloor\\frac{a}{bc}\\right\\rfloor$ for positive integers $a$, $b$, $c$ How can we prove the following? $$\\left\\lfloor \\frac{\\left\\lfloor \\dfrac{a}{b} \\right\\rfloor}{c} \\right\\rfloor = \\left\\lfloor \\frac{a}{bc} \\right\\rfloor$$ for $a,b,c \\in \\mathbb{Z}^+$ I don't know if I'm doing something wrong, but I can't prove it even though I'm pretty sure it's true. Obviously, because the concept of algebra isn't aware of the fact that we are restricting the variables to positive integers, and given my assumption that the equality doesn't necessarily hold for non-integers, an element of non-algebraic problem solving is needed, i.e. 
making a change to the expression given our knowledge of that condition, which then allows for algebraic maneuvers that show that the equality holds. I think that's what I'm missing. Thanks. "} {"_id": "A.313", "text": "Let $a,b\\in G$, a finite abelian group, and $|a|=r, |b|=s$ with $\\gcd(r,s)=1$. Prove that $|ab|=rs$. Let $a,b\\in G$, a finite abelian group, and $|a|=r, |b|=s$ with $\\gcd(r,s)=1$. Prove that $|ab|=rs$. My attempt: Let $|ab|=n$. Since $G$ is abelian, $(ab)^n=a^nb^n=1$. Thus $r\\mid n$ and $s\\mid n$. Together with $\\gcd(r,s)=1$, it follows that $rs\\mid n$. This is where I'm stuck; I need to show that $rs=n$. Any hints on how to proceed? Edit: I've come up with a solution that takes a somewhat different approach to what has been provided in the hints. Here it goes: Since $G$ is abelian, $n\\mid{\\rm lcm}(r,s)$. But since $\\gcd(r,s)=1$, ${\\rm lcm}(r,s)=rs$ by an elementary result in number theory. Thus $n\\mid rs$. Together with $rs\\mid n$, we have that $n=rs$, which is what we wanted to prove. "} {"_id": "A.314", "text": "Closed span of a sequence in Hilbert spaces. Suppose that you have an orthonormal basis $\\{e_n\\}$ in a Hilbert space such that $\\sum \\|e_n-x_n\\| < 1$. Is this condition enough to prove that the closed span of $\\{x_n\\}$ is $H$? My efforts to prove this have not led anywhere promising. I have tried showing that the only vector perpendicular to all of the $x_n$ would be $0$. Not sure which way I can proceed. Does anyone have an idea how to approach this? Thank you. "} {"_id": "A.315", "text": "Backwards Induction (Exercise 2.2.6) Analysis 1 by Terence Tao I am new to the study of analysis and I decided to start with Terence's book in my endeavor. I want to show my \"proof\" of backwards induction since I have some difficulty in understanding this. I want to know if my proof is correct or has some error, because if it has one, I can't tell. Any feedback is appreciated. 
Let $n$ be a natural number, and let $P(m)$ be a property pertaining to the natural numbers such that whenever $P(m\\text{++})$ is true, then $P(m)$ is true. Suppose that $P(n)$ is also true. Prove that $P(m)$ is true for all natural numbers $m ≤ n$; this is known as the principle of backwards induction. (Hint: apply induction to the variable $n$.) First I want to show that $P(m)$ is true for all $m \\leq 0$. H1: $\\forall m$, $P(m\\text{++})\\implies P(m)$. H2: $P(0)$. C: $P(m)$ is true for all $m \\leq 0$. Now $m \\leq 0$ means $0=m+a$ for some natural number $a$, so $m=a=0$ by Corollary 2.2.9. But $P(0)$ is true by H2, so the case $n=0$ is proved. Suppose now that it works for $n$, and prove it for $n\\text{++}$. Then: H1: $\\forall m$, $P(m\\text{++})\\implies P(m)$. H2: $P(n)\\implies P(m)$ for all $m \\leq n$. H3: $P(n\\text{++})$. In H1, for $m=n$ we have $P(n\\text{++})\\implies P(n)$, and by H2 we know $P(n)\\implies P(m)$, so $P(n\\text{++})\\implies P(m)$ for $m < n\\text{++}$. We still need to prove that it works for $m = n\\text{++}$, but in that case $P(n\\text{++})$ is true by H3. We conclude that $P(n\\text{++})\\implies P(m)$ for all $m \\leq n\\text{++}$. "} {"_id": "A.316", "text": "Multiplicative group of odd numbers modulo a power of two I want to understand the group structure of the group of units in the ring $\\mathbb{Z}/2^m \\mathbb{Z}$ for positive integers $m$. I expect this has been answered before here on MSE, but so far the automatic suggestions didn't turn it up. Here is what I understand: it is an abelian group, hence a product of cyclic ones. Also the order of this group is $2^{m-1}$, so that the orders of the factors in the product are all powers of two by Lagrange's theorem. Now these conditions are restrictive enough to compute the first few cases by hand. 
We have $(\\mathbb{Z}/2 \\mathbb{Z})^* = C_1$, $(\\mathbb{Z}/4 \\mathbb{Z})^* = C_2$, $(\\mathbb{Z}/8 \\mathbb{Z})^* = C_2 \\times C_2$, $(\\mathbb{Z}/16 \\mathbb{Z})^* = C_2 \\times C_4$, $(\\mathbb{Z}/32 \\mathbb{Z})^* = C_2 \\times C_8$. I'm starting to believe that for $m > 1$ we have that $(\\mathbb{Z}/2^m \\mathbb{Z})^* = C_2 \\times C_{2^{m-2}}$, but is there a simple conceptual explanation for that (if true)? "} {"_id": "A.317", "text": "$\\int \\frac{1}{\\left(x^2+1\\right)^n}dx$ Let $n\\in \\mathbb{Z_+}$. Compute the following integral: $$\\int \\frac{1}{\\left(x^2+1\\right)^n}dx$$ I obtained that for $$n=1$$ the value of the integral is $$\\tan^{-1}x+C$$ and for $$n=2$$ $$x\\left(\\frac{1}{2\\left(x^2+1\\right)}+\\frac{\\tan^{-1}x}{2x}\\right)+C$$ How do I do the rest of the cases? "} {"_id": "A.318", "text": "How do you prove $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$ for $n \\geq 1$ How do you prove $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$ for $n \\geq 1$? I can prove this for natural numbers only via induction, but how do you prove this for any real $n \\geq 1$? We start with the base case $n=1$. We have $e^x \\geq 1+x$ by a variety of methods. For the induction step, assume $e^{x} \\geq \\left(1+\\frac{x}{n}\\right)^{n}$. Notice that taking the derivative of $(1+\\frac{x}{n+1})^{n+1}$ gives us $(1+\\frac{x}{n+1})^{n}$ and thus $(1+\\frac{x}{n+1})^{n} < \\left(1+\\frac{x}{n}\\right)^{n} \\leq e^x = \\frac{d}{dx} e^x$. I'm not sure how to extend this to the non-integer case. Any help would be appreciated. "} {"_id": "A.319", "text": "A wrong argument for $\\mathbb{R}$ being countable We assume $A$ is the set of all countable subsets of the set of real numbers. We know $A$ is a partially ordered set $(A, \\subseteq)$. Suppose $$A_1 \\subseteq A_2 \\subseteq \\ldots \\subseteq A_n \\subseteq A_{n+1} \\subseteq \\ldots$$ is a chain in $A$. We can prove $B=\\bigcup_{n \\in \\Bbb{N}} A_n$ is a countable set. 
For each natural number $m$, we have $A_m \\subseteq B$. So $B$ is an upper bound for this chain. This shows each chain in $A$ has an upper bound, so by Zorn's lemma, $A$ has a maximal element $X$, and we know $X$ is a countable set. Now we prove $X = \\Bbb{R}$. If $X \\neq \\Bbb{R}$, then there is an $x \\in \\Bbb{R}$ such that $x \\notin X$. Let $Y=X \\cup \\{x\\}$. It's obvious that $Y$ is a countable subset of the real numbers and $X \\subsetneq Y$. This contradicts $X$ being a maximal element. Thus, $X = \\Bbb{R}$ and $\\Bbb{R}$ is a countable set. What is wrong with this argument? "} {"_id": "A.320", "text": "An inverse trigonometric integral So my integral is $$\\int_{0}^{1}\\frac{\\sin^{-1}(x)}{x}dx$$ To avoid confusion let me re-write the integral as $$\\mathcal I = \\int_0^1 \\frac{\\arcsin(x)}{x}dx$$ I started off with a trig substitution: let $x = \\sin(t)$ and $t = \\arcsin(x)$, which means that $dx = \\cos(t) dt$. So our integral becomes $$\\mathcal I = \\int_0^{\\frac{\\pi}{2}} \\frac{t}{\\sin(t)} \\cos(t) dt\\tag{Bounds have changed}$$ $$= \\int_0^{\\frac{\\pi}{2}} t\\space\\cot(t) dt$$ Then using integration by parts, $u = t$ $\\implies du = dt$ and $dv = \\cot(t)dt$ $\\implies v = \\ln(\\sin(t))$. So our integral thus becomes $t\\space\\ln(\\sin(t))$ evaluated from $0$ to $\\frac{\\pi}{2}$, minus $$\\int_0^{\\frac{\\pi}{2}} \\ln(\\sin(t))dt\\tag{the boundary term vanishes}$$ From here, I don't know how to proceed further. Any help/hint is appreciated :) Thanks in advance "} {"_id": "A.322", "text": "How do I calculate the sum of sums of triangular numbers? As we know, triangular numbers are the sequence defined by $\\frac{n(n+1)}{2}$, and its first few terms are $1,3,6,10,15,\\ldots$. Now I want to calculate the sum of the sums of triangular numbers. Let's define $$a_n=\\frac{n(n+1)}{2}$$ $$b_n=\\sum_{x=1}^na_x$$ $$c_n=\\sum_{x=1}^nb_x$$ and I want an explicit formula for $c_n$. After some research, I found the explicit formula $b_n=\\frac{n(n+1)(n+2)}{6}$. 
Seeing the pattern from $a_n$ and $b_n$, I figured the explicit formula for $c_n$ would be $\\frac{n(n+1)(n+2)(n+3)}{24}$ or $\\frac{n(n+1)(n+2)(n+3)}{12}$. Then I tried to plug in those two potential formulas. If $n=1$: $c_n=1$, $\\frac{n(n+1)(n+2)(n+3)}{24}=1$, $\\frac{n(n+1)(n+2)(n+3)}{12}=2$. Thus we know for sure that the second formula is wrong. If $n=2$: $c_n=1+4=5$, $\\frac{n(n+1)(n+2)(n+3)}{24}=5$. Seems correct so far. If $n=3$: $c_n=1+4+10=15$, $\\frac{n(n+1)(n+2)(n+3)}{24}=\\frac{360}{24}=15$. Overall, for the terms that I tried, the formula above seems to have worked. However, I cannot prove, or explain, why that is. Can someone prove (or disprove) my result above? "} {"_id": "A.324", "text": "The Double Basel Problem I have been playing with the series which I have been calling the 'Double Basel problem' for the past couple of hours: $$ \\sum_{n=1}^{\\infty} \\sum_{m=1}^\\infty \\frac{1}{{n^2 +m^2}}. $$ After wrestling with this for a while, I managed to generalize a result demonstrated here. This identity is: $$ \\sum_{m=1}^{\\infty}\\frac{1}{x^2+m^2} = \\frac{1}{2x}\\left[ \\pi \\coth{\\pi x} - \\frac{1}{x}\\right]. $$ Hence the original series becomes: $$ \\sum_{n=1}^{\\infty} \\frac{1}{2n}\\left[\\pi \\coth{\\pi n} - \\frac{1}{n} \\right]. $$ I have no idea where to go next with this problem. I seriously doubt that this series is convergent; however, I have been unable to prove it. Can you prove that this series is divergent? If it converges, what is its value? Thanks so much! "} {"_id": "A.325", "text": "Find consecutive composite numbers How do we find 100 consecutive composite numbers? After many attempts I arrived at the conclusion that to find $m$ consecutive composite numbers we can use $n!+2, n! +3, \\ldots, n! + n$, where $n! + 2$ is divisible by $2$, $n! + 3$ is divisible by $3$, and so on, and where $m = n-1$. Thus $n!+2, n! +3, \\ldots, n! + n$ gives $(n-1)$ consecutive composite numbers. 
However, there seem to be some gaps in this approach. For example: $4!+2, 4! +3, 4! +4$ $\\to$ $26, 27, 28$. Although this is right, there are certainly smaller runs, such as $8, 9, 10$ and $14, 15, 16$. Is there another method for solving such a problem mathematically? Is my method correct, or have I misunderstood it? "} {"_id": "A.326", "text": "Every projective module is a direct summand of a free module. I was reading \"Serial Rings\" by Gennadi Puninski. There it is written that, \"Since every module is a homomorphic image of a free module, every projective module is a direct summand of a free module.\" (i.e. if $P$ is a projective module, there exists a free module $F$ such that $F=P \\oplus T$ for some module $T$.) But I can't understand how \"Every module is a homomorphic image of a free module\" implies that \"Every projective module is a direct summand of a free module.\" (I have found a proof of \"Every projective module is a direct summand of a free module,\" but the first part of the above-mentioned sentence wasn't used there.) "} {"_id": "A.327", "text": "Determinant not equal to volume error (closed) The determinant of a $3\\times 3$ matrix $\\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix} $ is the volume of a parallelepiped whose three sides are the vectors with tails at the origin and heads at the coordinates $(1,x,x^2),(1,y,y^2)$ and $(1,z,z^2)$ $^{[1]} $. The determinant of this matrix can be simplified to $(x-y)(y-z)(z-x)$. 
Proof: Subtracting column 1 from column 2, and putting the result in column 2, $\\begin{equation*} \\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix} = \\begin{vmatrix} 1 & 0 &1 \\\\ x & y-x & z \\\\ x^2 & y^2-x^2 &z^2 \\\\ \\end{vmatrix} \\end{equation*}$ $ = z^2(y-x)-z(y^2-x^2)+x(y^2-x^2)-x^2(y-x) $ Rearranging the terms, $ =z^2(y-x)-x^2(y-x)+x(y^2-x^2)-z(y^2-x^2) $ Taking out the common factors $(y-x)$ and $(y^2-x^2)$, $ =(y-x)(z^2-x^2)+(y^2-x^2)(x-z) $ Expanding the terms $(z^2-x^2)$ and $(y^2-x^2)$, $ =(y-x)(z-x)(z+x)+(y-x)(y+x)(x-z) $ $ =(y-x)(z-x)(z+x)-(y-x)(z-x)(y+x) $ Taking out the common factor $(y-x)(z-x)$, $ =(y-x)(z-x) [z+x-y-x] $ $ =(y-x)(z-x)(z-y) $ $ =(x-y)(y-z)(z-x) $ As the $x$ coordinate of the heads of these three vectors is $1$, the heads of these vectors lie in a plane perpendicular to the $x$-axis at a distance of $1$ unit from the origin. (If we connect these three points, we get a triangle.) This plane will cut the parallelepiped into two equal triangular pyramids whose bases lie in the plane. The perpendicular distance from the base of each pyramid to its tip is $1$ unit. The volume of the required parallelepiped is the sum of the volumes of the two triangular pyramids. $\\text{volume of a pyramid}=\\frac{1}{3}bh$. The height is $1$ unit. The area of a triangle is, by the Shoelace formula, $$A = \\frac{1}{2} \\begin{vmatrix} 1 & 1 &1 \\\\ x_1 & x_2 & x_3 \\\\ y_1 & y_2 & y_3 \\\\ \\end{vmatrix} $$ where the vertices of the triangle are $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ $^{[2]}$ The vertices of the required triangle have the coordinates $(x,x^2),(y,y^2)$ and $(z,z^2)$. 
So the area of the triangle is $$A=\\frac{1}{2}\\begin{vmatrix} 1 & 1 &1 \\\\ x & y & z \\\\ x^2 & y^2 &z^2 \\\\ \\end{vmatrix}$$ which, as shown above, can be simplified to $\\frac{1}{2} (x-y)(y-z)(z-x)$. So the volume is $$\\frac{1}{3}bh=\\frac{1}{3}\\times\\frac{1}{2}(x-y)(y-z)(z-x)\\times 1$$ $$= \\frac{1}{6}(x-y)(y-z)(z-x) $$ But shouldn't the volume be equal to the determinant, which is $(x-y)(y-z)(z-x)$? References: [1] YouTube video by 3Blue1Brown: https://youtu.be/Ip3X9LOh2dk?t=345 [2] Wikipedia article: https://en.wikipedia.org/wiki/Shoelace_formula "} {"_id": "A.328", "text": "Proving $\\sum_{k=1}^{n}\\cos\\frac{2\\pi k}{n}=0$ I want to prove that the equation below holds. $$\\sum_{ k=1 }^{ n } \\cos\\left(\\frac{ 2 \\pi k }{ n } \\right) =0, \\qquad n>1 $$ First I tried to check the equation for small values of $n$. $$ \\text{As } n=2 $$ $$ \\cos\\left(\\frac{ 2 \\pi \\cdot 1 }{ 2 } \\right) + \\cos\\left(\\frac{ 2 \\pi \\cdot 2 }{ 2 } \\right) $$ $$ = \\cos\\left(\\pi\\right) + \\cos\\left(2 \\pi\\right) $$ $$ = -1+ 1 =0 ~~ \\leftarrow~~ \\text{Obvious} $$ But $$ \\text{As}~~ n=3 $$ $$ \\cos\\left(\\frac{ 2 \\pi \\cdot 1 }{ 3 } \\right) +\\cos\\left(\\frac{ 2 \\pi \\cdot 2 }{ 3 } \\right) + \\cos\\left(\\frac{ 2 \\pi \\cdot 3 }{ 3 } \\right) $$ $$ = \\cos\\left(\\frac{ 2 \\pi }{ 3 } \\right) + \\cos\\left(\\frac{ 4 \\pi }{ 3 } \\right) + \\cos\\left( 2\\pi \\right) $$ $$ = \\cos\\left(\\frac{ 2 \\pi }{ 3 } \\right) + \\cos\\left(\\frac{ 4 \\pi }{ 3 } \\right) + 1 =?$$ What formula(s) or property(ies) can be used to prove the equation? "} {"_id": "A.329", "text": "How do I show that if $A$ is compact and $U \\supseteq A$ is open, then there is an open $V$ with $A \\subseteq V \\subseteq \\overline{V} \\subseteq U$? This question is from Wayne Patty's Topology, Section 5.2. Consider $A$, a compact subset of a regular space, and let $U$ be an open set such that $A\\subseteq U$. 
Prove that there is an open set $V$ such that $A \\subseteq V \\subseteq \\overline{V} \\subseteq U$. Let $p \\in A$, which implies $p \\in U$. Then a result is given in the book (Theorem 5.11): A $T_1$-space $(X, \\mathcal T)$ is regular if and only if for each member $p$ of $X$ and each neighbourhood $U$ of $p$, there is a neighbourhood $V$ of $p$ such that $\\overline{V}\\subseteq U$. So, I got $ V \\subseteq \\overline{V} \\subseteq U$. But I am unable to prove that $A\\subseteq V \\subseteq \\overline{V}$. I thought that I should let $V\\subseteq \\overline{V} \\subseteq A$, but I am not able to find a contradiction. Can you please help with that? "} {"_id": "A.330", "text": "Shilov's Linear Algebra - Chapter 1, Problem 9 Calculate the $n$-th order determinant: $$\\Delta= \\begin{vmatrix} x&a&a&\\ldots&a\\\\ a&x&a&\\ldots&a\\\\ a&a&x&\\ldots&a\\\\ \\cdot&\\cdot&\\cdot&\\ldots&\\cdot\\\\ a&a&a&\\ldots&x \\end{vmatrix}$$ The answer is $\\Delta=[x+a(n-1)](x-a)^{n-1}$. If we add all the other columns to the first column, we get the first multiplicative factor of the answer, and are left with the following determinant: $$\\begin{vmatrix} 1&a&a&\\ldots&a\\\\ 1&x&a&\\ldots&a\\\\ 1&a&x&\\ldots&a\\\\ \\cdot&\\cdot&\\cdot&\\ldots&\\cdot\\\\ 1&a&a&\\ldots&x \\end{vmatrix}$$ How can we calculate this determinant to obtain the answer? "} {"_id": "A.331", "text": "Finding roots of $4^x+6^x=9^x$ by hand The function $f(x)=4^x+6^x-9^x$ is such that $f(0)=1>0, f(1)=1>0, f(2)=-29<0$, and the function $g(x)=(4/9)^x+(6/9)^x-1$ satisfies $g'(x)<0$ for all real values of $x$. So, $g(x)$ being monotonic, the equation $$4^x+6^x=9^x$$ has exactly one real solution. The question is whether this real root can be found analytically by hand. "} {"_id": "A.332", "text": "Every number can be expressed as a product of (least prime factor)*(largest integer dividing n less than n) [EDITED] Let $L:{\\Bbb N} \\to {\\Bbb N}$ be such that $L(n) = {\\text{the least prime factor of }}n$. 
Let $g:{\\Bbb N} \\to {\\Bbb N}$ be such that $g(n) = {\\text{the biggest positive integer }}d{\\text{ such that }}d|n{\\text{ and }}1 \\leqslant d < n$. Show that: $$g(n) = \\frac{n}{{L(n)}},\\forall n \\in {\\Bbb N}{\\text{ such that }}n \\geqslant 2$$ My proof: Since $n \\geqslant 2$, $n$ is either composite or prime. If $n$ is prime (i.e. $n = p$ where $p$ is prime), then $$g(n) = 1 = \\frac{p}{p} = \\frac{n}{{L(n)}}$$ This is because $L(n)$ is defined as the least prime factor of $n$. If $n$ is composite, by the Fundamental Theorem of Arithmetic, \"Any positive integer bigger than 1 can be expressed as a product of primes.\" Let $p$ be the smallest prime of the product $$n = {p_1}^{{\\alpha _1}}p_2^{{\\alpha _2}}...{p_m}^{{\\alpha _m}},{\\text{ }}i < j \\Rightarrow {p_i} < {p_j},{\\text{ }}i,j \\in {{\\Bbb Z}^ + }{\\text{ and }}{\\alpha _i} \\in {{\\Bbb Z}^ + }$$ That is, pick $p = p_1$. Notice that $p$ is necessarily $L(n)$, because $p = p_1$ divides $n$ and is the smallest prime that divides $n$. Therefore, $$p_1 = p = L(n)$$ Then, if $x = p = L(n)$, we have $1 < x < n$ and $x|n$ (by definition). This implies that $$\\exists y \\in {{\\Bbb Z}^ + }:y = \\frac{n}{x} = \\frac{n}{{L(n)}} = {p_1}^{{\\alpha _1} - 1}{p_2}^{{\\alpha _2}}...{p_m}^{{\\alpha _m}}$$ Now $y|n$ and $y < n$, which suggests $y$ is the greatest integer less than $n$ such that $y|n$. To prove this, let $d = g(n)$ and $$m = \\frac{n}{d} = \\frac{n}{{g(n)}}$$ Then $m$ is prime and $m = p$. BWOC, if $n=md$ and $m = ab$ where $1 < a,b < m$, then $n = abd \\Rightarrow ad|n$. Since $1 < a$ and $1 < b$, we have $d < ad < n$. But then we have found another factor $ad$ of $n$ with $d < ad < n$! This contradicts the definition of $d=g(n)$. Also, since $d = g(n)$ is the biggest proper factor of $n$, $m = \\frac{n}{d} = \\frac{n}{g(n)}$ is the smallest prime factor of $n$. So $m = L(n) = p$. Hence, we have that $$x = m = p = L(n) = \\frac{n}{g(n)} \\Rightarrow g(n) = \\frac{n}{p} =\\frac{n}{L(n)} = y$$ or just $$g(n) = \\frac{n}{L(n)}$$ Q.E.D. 
"} {"_id": "A.333", "text": "Bivariate normal density $( X, Y)$ have a bivariate normal density centered at the origin with $E(X^2)$ = $E(Y^2) = 1$, and $E(XY) = p$ . In polar coordinates $(X, Y)$ becomes $(R,\\Phi)$ where $R^2 = X^2 + Y^2$. Prove that $\\Phi$ has a density given by $$\\frac{\\sqrt{1-p^2}}{2\\pi(1-2p\\sin(\\varphi)\\cos(\\varphi))}$$ And is uniformly distributed iff $p = 0$. (To this point everything is clear) what i do not understand is how to conclude that $P\\{XY > 0\\} = \\frac{1}{2} +\\pi^{-1} \\arcsin (p)$ and $P\\{XY < 0\\}= \\pi^{-1} \\arccos (p)$. "} {"_id": "A.337", "text": "Suppose that all the tangent lines of a regular plane curve pass through some fixed point. Prove that the curve is part of a straight line. Question. Suppose that all the tangent lines of a regular plane curve pass through some fixed point. Prove that the curve is part of a straight line. Prove the same result if all the normal lines are parallel. I am working on differential geometry from the book by Pressley and I have a doubt in the solution of the above question whose (brief) solution is given by: Solution: We can assume that the curve $\\gamma$ is unit-speed and that the tangent lines all pass through the origin (by applying a translation to $\\gamma$). Then, there is a scalar $\\lambda(t)$ such that $\\gamma'(t) = \\lambda(t)\\gamma(t)$ for all $t$. Then, $\\gamma '' = \\lambda'\\gamma + \\lambda \\gamma' = (\\lambda' + \\lambda^2)\\gamma$. Can anyone please explain me how does this line follow : \" Then, there is a scalar $\\lambda(t)$ such that $\\gamma'(t) = \\lambda(t)\\gamma(t)$ for all $t$.\" Thanks in advance. "} {"_id": "A.338", "text": "Find all integer solutions of equation $y = \\frac{a+bx}{b-x}$ How to find all integer solutions for the equation $y = \\frac{a+bx}{b-x}$, where a and b are known integer values? P.S. 
$x$ and $y$ must both be integers. "} {"_id": "A.339", "text": "Extension of Euclid's lemma This is a somewhat obvious fact that is intuitively clear to me, but I haven't been able to construct a proof of it. Euclid's lemma states that for $p$ a prime and $ab$ a product of integers (let's take everything to be positive for simplicity), if $p \\mid ab$, then $p \\mid a$ or $p \\mid b$. This is clear, and I know how to prove it. Let's extend it somewhat. Suppose that $a$ and $b$ are two relatively prime integers, and we have $a \\mid bc$ for some other integer $c$. Then $a \\not \\mid b$, so it must divide $c$. This fact is obvious to me, but I can't figure out how to prove it. Does anyone have any hints or advice? Do I need the assumption of positivity? (For my purposes at the moment, I only need them to be positive, but there is value in having the most general result possible.) EDIT: Updated attempt: We have that $a,b$ are relatively prime, so there exist $r,s \\in \\mathbb{Z}$ such that $ar + bs = 1$ by Bézout's lemma. Multiply through by $c$ to get $arc + bsc = c$. Then $a \\mid arc$ and $a \\mid bsc$, so $a \\mid c$. How is that? "} {"_id": "A.340", "text": "I have the following problem: Let $|x_{n+1} - x_n| < 1/3^n$. Show that $(x_n)$ is a Cauchy sequence. We have that $(x_n)$ is a sequence of real numbers satisfying the relation in the title: $$ |x_{n+1} - x_n| < \\frac{1}{3^n}. $$ We must prove that this is a Cauchy sequence. I know that a Cauchy sequence satisfies the definition: given $\\epsilon>0$, there exists $n_0 > 0$ such that $m,n > n_0 \\Rightarrow |x_m - x_n|< \\epsilon$. But I don't know how to use both pieces of information to prove the exercise. If someone could please help me, I'd be very thankful. "} {"_id": "A.342", "text": "Four students are giving presentations In four sections of a course, running (independently) in parallel, there are four students giving presentations that are each exponential in length, with expected value of $10$ minutes each. 
How much time do we expect to be needed until all four of the presentations are completed? I'm a little thrown off by this question since it's in the chapter on order statistics in my book, but I believe this is just the gamma distribution. If each student has an expected value of $10$ minutes, shouldn't the time needed till all four of the presentations are completed be $40$ minutes? $(10 \\cdot 4 = 40)$ Or is it the following: calculate the density of the fourth order statistic, $$f(x_4) =\\frac{2}{5}e^{\\frac{-x}{10}}\\left(1-e^{\\frac{-x}{10}}\\right)^3.$$ Then $$E(X_4) = \\int_0^\\infty\\frac{2x}{5}e^\\frac{-x}{10}\\left(1-e^\\frac{-x}{10}\\right)^3 \\,dx= 125/6.$$ So is the answer $40$ minutes or $125/6$ minutes? Any help is greatly appreciated. "} {"_id": "A.344", "text": "Any collection of subsets of a set is a subbasis for a topology Theorem: Any collection of subsets $\\mathcal{A}$ of a nonempty set $X$ forms the subbasis for a unique topology $\\tau$ on $X$. This theorem is absolutely amazing to me. I really enjoy the idea of it as a powerful tool, but I have come up with a counterexample that I just can't get over. So the theorem states that any collection of subsets of a nonempty $X$ forms a subbasis for a unique topology on $X$. The emphasis there is on any. So, consider the following counterexample: Let $X= \\{a,b,c,d,e\\}$ and let $\\mathcal{A}=\\{\\{a\\}\\}$. Clearly, this is a collection of subsets of $X$. Assume that, by our theorem, $\\mathcal{A}=\\{\\{a\\}\\}$ is a subbasis for some topology on $X$. Okay, so since $\\mathcal{A}$ is a subbasis of some topology on $X$, let's try taking intersections of members of $\\mathcal{A}$. Well, $\\{a\\}\\cap\\{a\\}=\\{a\\}$. 
Then our basis for our topology is $\\mathcal{B} = \\{\\{a\\}\\}$. This is problematic because this means that our basis $\\mathcal{B}$ contains only $\\{a\\}$, but note that $\\displaystyle\\bigcup \\mathcal{B} = \\{a\\}$ and $\\{a\\} \\neq X.$ Therefore, $X \\not \\in \\tau.$ How do we get $X$ in $\\tau$? Is my counterexample logically consistent? "} {"_id": "A.345", "text": "Elementary geometry question: How to calculate distance between two skew lines? I am helping someone with high school maths but I got stuck in an elementary geometry problem. I am given the equations of two straight lines in space, $r\\equiv \\begin{cases} x=1 \\\\ y=1 \\\\z=\\lambda -2 \\end{cases}$ and $s\\equiv\\begin{cases} x=\\mu \\\\ y=\\mu -1 \\\\ z=-1\\end{cases}$, and asked for some calculations. First I am asked for the relative position of them, and I get that they are skew lines. After that I am asked for the distance between the two lines. In order to get the distance I have to calculate the line that is perpendicular to both of them in the \"skewing\" point, check the points where it touches the other two lines (sorry, not sure about the word in English) and calculate the modulus of this vector. I'm having trouble calculating the perpendicular line. I know I can get the direction vector using the cross product, but I'm not sure how to find a point so that I can build the line.
"} {"_id": "A.347", "text": "GCD (a,b) =1 prove GCD ( (a+b), (a-b) ) = 1 or 2 if GCD of $(a, b) = 1$, prove that GCD $(a+b, a-b) = 1$ or $2 .$ The proof goes like: Let GCD $( a+b, a-b ) = d$ and let there exist integers m and n such that $ a+b =md$ and $ a-b = nd.$ By adding and subtracting these two equations we get: $2a = (m+n)d$ and $2b = (m-n)d$ , because $a, b$ are coprime then $2$ GCD $(a,b)$ = GCD $(2a, 2b),$ and so on. My question is, why do we have to add and subtract above equations? I need to understand the concept of this prove in some more details. Thanks! "} {"_id": "A.348", "text": "Question about determinant of a block matrix I was studying block matrices and suddenly this question came to my mind. Let $A, B \\in \\Bbb R^{n \\times n}$. From this Wikipedia page, $$\\det \\begin{pmatrix} A & B\\\\ B & A\\end{pmatrix} = \\det(A-B)\\det(A+B)$$ even if $A$ and $B$ do not commute. Does a similar condition hold for the following block matrix? $$\\begin{pmatrix} A & -B\\\\ B & A \\end{pmatrix}$$ "} {"_id": "A.349", "text": "Inverse to Stirling's Approximation The equation for Stirling's Approximation is the following: $$x! = \\sqrt{2\\pi x} * (\\frac{x}{e})^x$$ Writing as a function for y gives us the following: $$y = \\sqrt{2\\pi x} * (\\frac{x}{e})^x$$ Is there a way to solve this equation for x, effectively finding an inverse to this function? "} {"_id": "A.350", "text": "Induction proof for natural numbers in a division operation I want to proove that 2 and 3 divide $x^3 - x, x \\in \\mathbb{N}$ and I'm stuck at the inductive step, here's where I'm at: For all $x \\in \\mathbb{N}$, let $P(x)$ be the proposition: 2 and 3 divide $x^3 - x$ Basic step: the first term in $\\mathbb{N}$ is $0$, then: $\\frac{0^3 - 0}{2} = 0$ et $\\frac{0^3 - 0}{3} = 0$, thus $P(0)$ is true. Inductive step: For the inductive hypothesis, we assume that $P(k)$ is true for an arbitrary nonnegative integer k bigger than 0. 
That is, we assume that 2 and 3 divide $k^3 - k$. To carry out the inductive step using this assumption, we must show that when we assume that $P(k)$ is true, then $P(k + 1)$ is also true. That is, we must show that 2 and 3 divide $(k+1)^3 - (k+1)$. Is the next step here that we need to prove that $\\frac{(k+1)^3-(k+1)}{2}$ and $\\frac{(k+1)^3-(k+1)}{3}$ are integers, and thus that 2 and 3 divide $(k+1)^3 - (k+1)$? "} {"_id": "A.352", "text": "Find a positive continuous function with finite area $\\int_0^\\infty f(x)\\,dx$, but such that $\\lim_{x\\to\\infty} f(x)$ doesn't exist. Find a positive continuous function with finite area $\\int_0^\\infty f(x)\\,dx$, but such that the limit of $f(x)$ as $x$ goes to infinity doesn't exist. I tried finding such a function but I failed. "} {"_id": "A.353", "text": "Can someone explain why if two random variables, X and Y, are uncorrelated, it does not necessarily mean they are independent? I understand that two independent random variables are by definition uncorrelated, as their covariance is equal to 0: $Cov(x,y) = E(xy)- E(x)E(y)$, and $E(x)E(y) = E(xy)$ when $x$ and $y$ are two independent random variables. Therefore, $Cov(x,y) = 0$. However, I am having trouble understanding why, if two random variables, X and Y, are uncorrelated, it does not necessarily mean they are independent. Could someone also give me a real-world example of when two random variables are neither independent nor causally connected? I believe it will help me understand this concept better.
"} {"_id": "A.354", "text": "Prove that if $p_1\\mid a$ and $p_2\\mid a$ then $p_1p_2\\mid a$ So I am supposed to be proving that if $p_1$ and $p_2$ are distinct primes and $p_1\\mid a$ and $p_2\\mid a$ then $p_1p_2\\mid a$, and I need to use Euclid's Lemma except as far as I understand Euclid's lemma is the converse of this statement and I have tried for the last few hours to work with Euclid's and GCDs to figure this one out and I just don't know where to start since I can't wrap my head around this one. Can anyone help me out? "} {"_id": "A.355", "text": "$f(f(x)^2+f(y))=xf(x)+y$ Find all functions $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ such that $$f(f(x)^2+f(y))=xf(x)+y$$ for all $x,y\\in{\\mathbb{R}}$. Here is my approach to the problem: We see that $f(x)=x$ is an obvious solution (Just trying easy linear equations). I think this would be the only solution to the problem. Am I right? And how to prove that there is no other solution? (Note: I am a beginner at functional equations) "} {"_id": "A.356", "text": "How can I evaluate $ \\sum_{n=0}^{\\infty}{\\frac{x^{kn}}{(kn)!}} $ where $k$ is a natural number? I suddenly interested in the differential equation $$ f^{(k)}(x)=f(x) $$ So I tried to calculate for some $n$. When $ k=1 $, we know the solution $$ f(x)=A_0e^x=\\sum_{n=0}^{\\infty}{\\frac{A_0x^n}{n!}} $$ Also, for $ k=2 $, $$ f(x)=Ae^x-Be^{-x}=\\sum_{n=0}^{\\infty}{(\\frac{A_0x^{2n}}{(2n)!}+\\frac{A_1x^{2n+1}}{(2n+1)!})} $$ where $ A_0=A+B $ and $ A_1=A-B $. Inductively, I could guess that the solution of the differential equation would be in the form $$ f(x)=\\sum_{n=0}^{\\infty}{\\sum_{i=0}^{k-1}{\\frac{A_ix^{kn+i}}{(kn+i)!}}} $$ But I could neither prove that it is the only solution nor get the explicit formula. How should I evaluate $ \\sum_{n=0}^{\\infty}{\\frac{x^{kn}}{(kn)!}} $, cause if we know the answer for it, we can evaluate the original expression by differentiating it. 
Thanks to WolframAlpha, I know the answer for $ k=3 $: $$ \\sum_{n=0}^{\\infty}{\\frac{x^{3n}}{(3n)!}}=\\frac{1}{3}(2e^{-\\frac {x}{2}}\\cos{(\\frac {\\sqrt{3}}{2}x)}+e^{x}) $$ I think the answer might be related to $ \\sin $ and $ \\cos $ of $ \\frac {2\\pi}{k} $. "} {"_id": "A.357", "text": "How many functions can be used to describe a finite series? I was learning more about series today and would like to know if there are existing proofs I could look at about this problem. Basically, if you are given an infinite series representing a function $f : \\Bbb N \\to \\Bbb R$ but only shown the first n numbers, how many functions f, written in terms of n, could you write to represent that series? I'm not including piecewise functions, because I assume that would always be infinite. Take the series $(2, 4, ...)$ with 2 numbers given. $f(n)=2n$, $f(n)=n^2-n+2$, and $f(n)=2^n$ would all be functions that could fit this series, although they differ after the first two numbers. I believe there are more polynomials that fit this description but I'm not sure how many. My question is, essentially, are there an infinite number of functions for which $f(1) = 2$ and $f(2) = 4$, and if this is the case, does this also apply to any finite number of outputs? (e.g. the first n digits of pi written as $(3, 1, 4, 1, 5, 9...)$) If not, could you find out how many possible functions there are? "} {"_id": "A.358", "text": "Confusion about the formula of the area of a surface of revolution Before I read the formula of the area of revolution, which is $\\int 2\\pi y \\,ds$, where $ds = \\sqrt{1 + \\left(\\frac{dy}{dx}\\right)^2}\\,dx$, I thought of deriving it myself. I tried to apply the same logic used for calculating the volume of revolution (e.g., $\\int \\pi y^2 dx $).
My idea is to use many tiny hollow cylinders (inspired by the shell method), each of which has a surface area of $(2\\pi y) (dx)$: $2\\pi y$ is the circumference of the cylinder, and $dx$ is the height of the cylinder. Their product is the surface area of the hollow (i.e., empty from the inside) cylinder. With this logic, the area is $\\int 2\\pi y dx$. Where is my mistake? Also, it's confusing why for the volume it was enough to partition the object using cylinders, but for areas it is not. "} {"_id": "A.359", "text": "Apparent inconsistencies in integration In a problem, the substitution $$\\tan\\theta=\\frac{x}{2}$$ was made. In the end, the answer was in terms of sines, and to convert back, $\\sin\\theta$ was defined as $$\\sin\\theta=\\frac{x}{\\sqrt{4+x^2}}$$ This is a typical example of some stuff about integration I'm struggling to understand; (1) Why are the absolute values of square roots never taken? This is something I keep seeing in every situation involving an integral. (Here, if $\\theta$ is in the third quadrant, sines would be negative and tans would be positive. So this definitely doesn't work for the third quadrant.) (2) Expanding upon the stuff in the parentheses up there, a possible explanation is that while doing trig substitutions, the angle is always a principal angle of the inverse trigonometric operation on whatever you're substituting. Is there such a rule? "} {"_id": "A.360", "text": "Fourier transform of the function $1/ \\vert x \\vert$ What is the Fourier transform of the function $$f(x) = \\frac{1}{\\vert x \\vert}?$$ This is not homework. I would also appreciate help with calculating it myself. "} {"_id": "A.361", "text": "Is a Riemann-integrable function always differentiable? Let $f:[a,b]\\to\\mathbb{R}$ be Riemann-integrable and $F(x)=\\int_a^xf(t)dt$. Is this function $F$ always differentiable? Because the antiderivative is defined by $F'=f$, right? So you would think that it always holds.
"} {"_id": "A.362", "text": "Kuratowski's Theorem using Axiom of Choice I can't seem to be able to prove Kuratowski's Theorem using the Axiom of Choice, although they are equivalent assertions. Kuratowski's Lemma: Every partial order has a maximal chain. Axiom of Choice: For every set X of disjoint nonempty sets there exists a set$Y $such that for every set $Z \\in X, Y \\cap Z$ is a singleton. My attempt: Consider any chain $C_0$ of the partial order. If $\\exists x \\in X \\setminus C_0$ which is comparable with some element of $C_0$, let $C_1 := C_0 \\cup \\{ x \\}$. Iterate this process . If at some point we cannot find such an x, then we have found a maximal chain. Suppose we can find such an $x$ infinitely, then the sets $i\\geq 1 \\Rightarrow X_i := C_{i+1} \\setminus C_i$ are disjoint singletons. Hence by axiom of choice there exists $Y$ for which $X_i \\subseteq Y$. Inorder to finish the proof, I need to prove something of the form \"If a is comparable with some element of $C_0$, then $\\exists j$ s.t. $a \\in C_j$\". I can't seem to prove this. P.S: x is comparable with y iff $x R y \\lor y R x$. "} {"_id": "A.363", "text": "Non-negative martingale $X_n \\rightarrow 0$ a.s. prove that $P[X^* \\geq x | \\mathcal{F}_0]= 1 \\wedge X_0 / x$ I need to prove the following statement. Let $X$ be a non negative martingale such that $X_n\\rightarrow 0$ a.s. when $n\\rightarrow \\infty$. Define $X^*=supX_n$. Prove that for all $x>0$ $$P[X^* \\geq x | \\mathcal{F}_0]= 1 \\wedge X_0 / x$$ I think I've got the easy case if $x\\leq X_0$ Then necessarily $x\\leq X^*$ for the sup property. Then it follows that for $1\\leq X_0 /x$ we have that $P[X^* \\geq x | \\mathcal{F}_0]= 1$. But I can't figure out the other case. 
"} {"_id": "A.364", "text": "Inequality in metric space For a point $x$ and a non-empty subset $A$ of a metric space $(X, d)$, define $\\begin{align}\\inf\\left\\{ d(x,a):a\\in A\\right\\}\\end{align}$ Prove that if $y$ is another point in $X$ then $$d(x,A)\\leqslant d(x,y)+d(y,A)$$ "} {"_id": "A.365", "text": "Are Infinite ordinals and their successor equinumerous? Ordinals in set theory are well-ordered by $\\in$ or equivalently $\\subset$. If we define all ordinals greater or equal to $\\omega$ as infinite ordinals. Is it true that every infinite ordinal is equinumerous to its successors. Basically my question is the proof or refutation of the following statement: Given infinite ordinal $\\alpha$. Does there exist an injection from $\\alpha^+$ to $\\alpha$. "} {"_id": "A.366", "text": "Show that if a normed space $X $ has a linearly independent subset of $n$ elements, so does the dual space $X'$ Show that if a normed space $X $ has a linearly independent subset of $n$ elements, so does the dual space $X'$ My attempt : $\\text{Given that a normed space $X$ has a linearly indepenedent susbset of $n-$ element}\\tag1$ let the subset be $S=\\{ e_1,e_2,e_3,....,e_n\\}$ Define $e_i \\in X$ by $f_j(e_i)= \\delta_{ij} = \\begin{cases} 1 & i=j \\\\0 , & i \\neq j \\end{cases}$ where $1\\le i\\le n$ and $1\\le j\\le n$ From $(1)$ we have $c_1e_1+...+c_ne_n=0\\implies c_1f(e_1)+...+c_nf(e_n)=0$ After that im not able to proceed further "} {"_id": "A.368", "text": "Basel Problem approximation error bounded by $\\mathcal O(1/x)$? In this answer it is stated that $$ \\sum_{n\\geq1}\\frac{1}{n^2}=\\sum_{n\\leq x}\\frac1{n^2}+\\mathcal O(1/x). $$ Is this statement true as $x\\to\\infty$? 
What I've done: If $x$ is fixed, then I think the answer is almost trivial, because we may set $C=\\pi^2x/6$, so $$ \\sum_{n=x}^\\infty\\frac1{n^2}\\leq\\sum_{n=1}^\\infty\\frac1{n^2}=\\frac{\\pi^2}{6}=\\frac{C}{x}, $$ therefore $$ \\sum_{n\\geq1}\\frac1{n^2}=\\sum_{n\\leq x}\\frac{1}{n^2}+\\sum_{n=x}^\\infty\\frac{1}{n^2}\\leq\\sum_{n\\leq x}\\frac{1}{n^2}+C/x=\\sum_{n\\leq x}\\frac{1}{n^2}+\\mathcal O(1/x). $$ But is there a constant independent of $x$ that makes this true? "} {"_id": "A.369", "text": "Recurrent integral How to calculate the integral $$J_n=\\int_{-\\pi}^\\pi \\frac{\\sin{(nx)}}{(1+2^n) \\sin{x}}\\,\\mathrm{d}x\\:?$$ I tried integration by parts but did not succeed in finding a recurrence relation. Also, I tried de Moivre's formula for $I_n+iJ_n$, where $I_n=\\int_{-\\pi}^\\pi \\frac{\\cos{(nx)}}{(1+2^n) \\sin{x}} dx$, but also without success. Any help is welcome. Thanks in advance. "} {"_id": "A.370", "text": "What if we take step functions instead of simple functions in the Lebesgue integral When we define the Lebesgue integral, we first define it for simple functions $s(x) = \\sum\\limits_{j=1}^n c_j\\chi_{A_j}(x)$ (where $A_j$ are measurable) as $\\int sd\\mu = \\sum\\limits_{j=1}^n c_j \\mu(A_j)$ and then for $f\\ge 0$ as $\\int fd\\mu = \\sup\\{\\int sd\\mu$ : s simple and $0\\le s\\le f\\}$. But I was wondering what could go wrong if, instead of taking simple functions in the supremum, we took step functions, i.e. $s(x)=\\sum\\limits_{j=1}^nc_j\\chi_{I_j}(x)$ where $I_j$ are intervals (any type, like $(a,b), (a,b], [a,b), [a,b]$).
$|f(x)||g(x)| = |f(x)g(x)| \\le \\left(\\max|f(x)|\\right)|g(x)| = ||f||\\,|g(x)| \\le \\max|f(x)|\\,\\max|g(x)| = ||f||\\,||g||$, so $|f(x)g(x)|\\le||f||\\space\\space||g||$. As I have to show that $||fg||\\le||f||\\space||g||$: can I say $|f(x) g(x)|=||fg||$? I'm not sure about that, because $|f(x)g(x)| \\le\\max|f(x)g(x)|=\\max|f(x)| \\space \\max|g(x)|$. I feel I am missing the concept needed to prove $|f(x) g(x)|=||fg||$, through which I think I can finally prove $||fg||\\le||f||\\space||g||$. I'd appreciate it if you could clarify. "} {"_id": "A.372", "text": "When did we move from $\\mathbb{Z}\\left[\\sqrt{d}\\right]$ to the ring of integers $\\mathcal{O}_{\\mathbb{Q}\\left[\\sqrt{d}\\right]}$ and why? Gauss made great progress in number theory in $\\mathbb{Z}$ by working in $\\mathbb{Z}[i]$ (or equivalently $\\mathbb{Z}\\left[\\sqrt{-1}\\right]$), so much so that we call $\\mathbb{Z}[i]$ the Gaussian integers now. And it was even known to the old mathematicians that solutions to Pell's equation $x^2 - dy^2 = 1$ could be better analysed by working in $\\mathbb{Z}\\left[\\sqrt{d}\\right]$. But now in modern number theory we study the ring of integers $\\mathcal{O}_{\\mathbb{Q}\\left[\\sqrt{d}\\right]}$ much more. I find this confusing, as if we want to study Pell's equation with $d = 5$, we have that $\\mathcal{O}_{\\mathbb{Q}\\left[\\sqrt{5}\\right]} = \\mathbb{Z}\\left[\\frac{1 + \\sqrt{5}}{2}\\right]$ instead of $\\mathbb{Z}\\left[\\sqrt{5}\\right]$, which is not what we need. I was under the assumption that modern number theory usually tries to generalise its techniques but I don't see how this is a sensible generalisation and I don't see why the ring of integers is any more useful than just plain old $\\mathbb{Z}\\left[\\sqrt{d}\\right]$. So my question is: Why is the ring of integers defined the way it is? "} {"_id": "A.373", "text": "Show that $\\sqrt n$ is irrational if $n$ is not a perfect square, using the method of infinite descent.
Show that $\\sqrt n$ is irrational if $n$ is not a perfect square, using the method of infinite descent. I know how to prove this by doing a contradiction proof and using The Fundamental Theorem of Arithmetic, but now I'm asked to use infinite descent to prove it. Then the very next problem says \"Why does the method of the text fail to show that $\\sqrt n$ is irrational if $n$ is a perfect square?\" I'm confused by this. Any hints or solutions are greatly appreciated. I was thinking of the standard argument: let $\\sqrt n = {a\\over b}$ where $\\gcd(a,b)=1$, and then through some algebra arrive at a common factor of both $a$ and $b$, which contradicts the fact that $\\gcd(a,b)=1$, and so we can apply this over and over again; but then I don't understand how the next problem says to explain why this method fails. "} {"_id": "A.375", "text": "Can anyone help me solve this diophantine equation? Find all integer solutions to $x^2 + 7 = 2^n$. I've done the case where $n$ is an even integer but now I'm a little lost. Could anyone walk me through the solution? "} {"_id": "A.376", "text": "Evaluating the limit of a sqrt function using Riemann Sums $\\lim\\limits_{n\\to\\infty}\\dfrac{\\sqrt1+\\sqrt2+\\sqrt3+\\ldots+\\sqrt n}{n\\sqrt n}$ I am having trouble doing this problem. I have attempted to take the Riemann Sum but cannot get past the square root. I also tried to upper-bound and lower-bound it, but I got stuck doing this. "} {"_id": "A.378", "text": "Do exponents follow different rules from radicals? Does $\\left(-3\\right)^\\frac{2}{2}$ not equal $\\sqrt{\\left(-3\\right)^2}$? "} {"_id": "A.379", "text": "Properties of a set of all isomorphisms $ f: G \\to G $ I'm kinda stuck with this task. Let $G$ be a group and $ S $ the set of all isomorphisms $ f: G \\to G$. I first want to show that $ (S, \\circ) $ is also a group. I believe I've shown that all the properties of a group are fulfilled with $ (S, \\circ) $: i) Assume that $ x \\in G $ and $ f_1,f_2 \\in S$.
Then $f_2(x) = y \\in G$, since $ f_2 $ is an isomorphism. $ f_1(y) = z \\in G $ since $ f_1 $ is an isomorphism. Then $ f_1(f_2(x)) = f_1(y) = z = (f_1 \\circ f_2)(x),$ hence $f_1, f_2 \\in S \\longrightarrow f_1 \\circ f_2 \\in S.$ ii) $id$ is an isomorphism $ \\longrightarrow id \\in S$. iii) $ f $ is an isomorphism $ \\longrightarrow \\exists f^{-1}$, one can show that $ f^{-1} $ is also an isomorphism, $ \\longrightarrow f^{-1} \\in S$. Now, assume that $ | S | = 1$; then I want to show that $ G $ is abelian and each element has an order of 1 or 2. I'm kinda lost with that $ | S | = 1 $. If $ f \\in S $ then $f^{-1} \\in S $ due to the previous result; should not $ | S | = 1 $ imply that $ f = f^{-1} = id$? And how do I move forward from this? Any hints are much appreciated! "} {"_id": "A.381", "text": "$A_1 \\times ... \\times A_n$ is countable if $A_1, ..., A_n$ are countable Suppose that $A_1, ..., A_n$ are countable sets. Show that the Cartesian product $A := A_1 \\times ... \\times A_n$ is countable. My attempt: Sets are said to be countable if they are finite or if they have the same cardinality as some subset of $\\mathbb{N}$ (i.e. we can find some bijection $f: A \\rightarrow S$ or $f: S \\rightarrow A$ where $S \\subset \\mathbb{N}$). Assume that $A_1, ..., A_n$ are countable sets. Then there exist bijections $f_i: \\mathbb{N} \\rightarrow A_i$ for $i = 1, ..., n$. Define $g: \\mathbb{N} \\rightarrow A$ as follows. My issue arises here, in finding such a bijective function without it being too complicated. How would I go about finding one? I am also open to any suggestions. Any assistance is welcome. "} {"_id": "A.382", "text": "Set of functions from $SS = \\{ A_{1}, A_{2}, A_{3},...\\}$ to $\\{ \\mathbf{T}, \\mathbf{F} \\} $ is countable? Let $SS=\\{ A_1,A_2,A_3,\\ldots\\}$, and let $V = \\{ v \\mid v: SS \\to \\{ \\mathbf{T}, \\mathbf{F} \\} \\}$. Is the set $V$ countable? Justify your answer.
My instinct is to say that $V = \\{ v \\mid v: SS \\to \\{ \\mathbf{T}, \\mathbf{F} \\} \\}$ is uncountable, and to prove this using a diagonalization argument, i.e. create a table of the values of the functions $v_{1}, v_{2},...$ for natural numbers $n_1, n_2,\\ldots$ and define a function $v_{m} : SS \\to \\{ \\mathbf{T}, \\mathbf{F} \\}$ that takes all the values ($T$ or $F$) on the diagonal of the table and flips them, and then show that $v_{m}$ cannot appear anywhere in the list. Is my guess correct, and if so would this be a reasonable approach to the problem? "} {"_id": "A.383", "text": "Automorphisms of the disk without the maximum principle For pedagogical purposes, I am looking for an elementary proof (i.e. without resorting to the maximum principle) that $f_a(z):=\\frac{z-a}{1-\\overline{a}z}$ maps the unit disk into itself when $|a|<1$. The usual argument (at least usual for me) is to look at $f_a(e^{it})$ and check that $|f_a(e^{it})|=1$. Then we are done by the maximum principle and the fact that $f_a(a)=0$. However, I cannot help but think that there should be an elementary way to do this, using only basic facts about complex numbers such as the triangle inequality. Unfortunately I am going in circles. "} {"_id": "A.384", "text": "What does this bracket notation mean? I am currently taking MIT6.006 and I came across this problem on the problem set. Despite the fact I have learned Discrete Mathematics before, I have never seen such notation before, and I would like to know what it means and how it works. Thank you: $$ f_3(n) = \\binom n2$$ (Transcribed from screenshot) "} {"_id": "A.385", "text": "Properties of size function in a general Euclidean domain In ring theory a given ring $R$ is called a Euclidean domain if there exists a function $\\sigma:R -\\{0\\}\\rightarrow \\{0,1,2,3,...\\} $ which satisfies the division algorithm, i.e.
if $a,b \\in R$ then there exist $q,r \\in R$ such that $b=aq+r$ and either $r=0$ or $\\sigma(r)\\lt \\sigma(a) $. Now I want to ask if we can prove, using just this definition, that an element of larger degree won't divide an element of smaller degree. In specific rings such as (i) the integers, we can say $$\\sigma(a)=|a|$$ $$\\sigma(ab)=\\sigma(a)\\sigma(b)$$ (ii) polynomials, where $$\\sigma(f(x))=\\deg(f(x))$$$$\\sigma(ab)=\\sigma(a)+\\sigma(b)$$ So in both the above cases the size of the product of two elements will always be greater than or equal to the size of the individual elements, and hence the larger element can never divide the smaller element. But is this true in general for all Euclidean domains? And how would we prove that? "} {"_id": "A.387", "text": "Any positive integer greater than $11$ is a nonnegative linear combination of $5$ and $4$. My solution Let $n\\in\\mathbb{Z}^{+}$; then there exists $k\\in\\mathbb{Z}_0^+$ such that $n=5k + i$, $i\\in\\{0,1,2,3,4\\}$. Now analyzing by cases we have: If $i=0$, then $\\begin{align*} n = 5k \\Rightarrow n = 5k + 4(0). \\end{align*}$ If $ i = 1 $, then $\\begin{align*} n & = 5k + 1 \\\\ & = 5k-5(3) +5(3) +1 \\\\ & = 5(k-3) + 15 + 1 \\\\ & = 5(k-3) +16 \\Rightarrow n = 5(k-3) +4(4). \\end{align*}$ If $ i = 2 $, then $\\begin{align*} n & = 5k + 2 \\\\ & = 5k-5(2) +5(2) +2 \\\\ & = 5(k-2) + 10 + 2 \\\\ & = 5(k-2) +12 \\Rightarrow n = 5(k-2) +4(3). \\end{align*}$ If $i=3$, then $\\begin{align*} n & = 5k + 3 \\\\ & = 5k-5 + 5 + 3 \\\\ & = 5(k-1) +8 \\Rightarrow n = 5(k-1) +4(2). \\end{align*}$ If $i=4$, then $\\begin{align*} n = 5k + 4 \\Rightarrow n = 5k + 4(1). \\end{align*}$ Thus, every positive number can be expressed as a linear combination of $5$ and $4$. Now using that $n>11$, we have: $\\begin{align*} n &> 11 \\\\ 5k + i &> 5(2) +1 \\\\ 5k-5(2) &> 1-i \\\\ 5 (k-2) &> 1-i \\\\ k-2 &> \\frac{1-i}{5} \\\\ k &> 2+\\frac{1-i}{5}. 
\\end{align*}$ So by ranging over the values that $ i $ takes, we have: $\\begin{align*} k &> 2+ \\frac{1-i}{5} \\geq 2+ \\frac{1-0}{5}\\\\ k &> 2 + 0.2 = 2.2 \\end{align*}$ But $k\\in\\mathbb{Z}_0^+ \\Rightarrow k \\geq 3 \\Rightarrow n \\geq 15 $. Thus we have that every positive integer greater than or equal to $15$ is a non-negative linear combination of $5$ and $4$. Finally, let's look at the cases that are still unverified, which are $12$, $13$ and $14$. $\\begin{align*} 12 &= 5(0) +4(3) \\\\ 13 &= 5(1) +4(2) \\\\ 14 &= 5(2) +4(1). \\end{align*}$ Therefore, any positive integer greater than $11$ is a nonnegative linear combination of $5$ and $4$. I think this is the correct solution; I await your comments. If anyone has a different solution or a correction of my work I will be grateful. "} {"_id": "A.388", "text": "Linear algebra find $k$ Given the linear system: $$\\begin{cases} x_1 + kx_2 − x_3 = 2\\\\ 2x_1 − x_2 + kx_3 = 5 \\\\ x_1 +10x_2 −6x_3 =1 \\end{cases}$$ for which values of $k$ does the system have: (a) No solutions. (b) A unique solution. (c) Infinitely many solutions. I've been trying echelon form, where I switched $R_1$ with $R_3$ and then I switched $R_2$ with $R_3$. So I have $\\left[\\begin{array}{ccc|c}1&10&-6&1\\\\1&k&-1&2\\\\2&-1&k&5\\end{array}\\right]$ but then I'm stuck and don't know how to get any further. "} {"_id": "A.389", "text": "Convergence of a Special Series as N is large I'm trying to find a general formula for the series, where $r$ is a constant: $$\\sum\\limits_{i=1}^N \\frac{i}{(1+r)^i}$$ I have deduced the general formula for the sum: $$\\frac{(1+r)^{N+1}-(1+r)-rN}{r^2(1+r)^N}$$ Will this sum converge to some value when $N$ is very large? Could someone explain how to deal with it?
"} {"_id": "A.391", "text": "$C_r $ inequality Show that for each $r> 0$ $$\\mathbb{E} |X+Y|^r \\leq c_r (\\mathbb{E} |X|^r + \\mathbb{E} |Y|^r),$$ where $c_r$ is a constant given by $\\begin{equation} c_r = \\left\\{ \\begin{array}{ll} 1 & \\mathrm{if\\ } 0 < r \\le 1 \\\\ 2^{r-1} & \\mathrm{if\\ } 1 < r \\end{array} \\right. \\end{equation}$ I've tried to use other inequalities for the proof of this one but I still get stuck for the case of $2^{r-1}$. "} {"_id": "A.394", "text": "Trying to find the $\\delta$ in epsilon-delta continuity proof. I am trying to prove the following function is continuous for all irrationals: $f(x) = \\begin{cases} 0, & \\text{if $x$ is irrational} \\\\ 1/n, & \\text{if $x = m/n$} \\end{cases}$ The question assumes $m/n$ is in lowest terms. I have shown that it is discontinuous for all rationals, and now I believe I have to either use the $\\epsilon-\\delta$ definition of continuity or sequential continuity to show the function is continuous for when $x$ is irrational. I split my attempt into two cases: Our value of $x$ is irrational, then I want: $$\\forall \\epsilon > 0, \\exists \\delta > 0, |x-a| $a$ is irrational here. Using that $x$ is irrational I get that $f(x) = 0$ as does $f(a)$ so no matter the $\\delta$ we have our condition for continuity satisfied as $0 < \\epsilon$ for all $\\delta$ Our value of $x$ is rational i.e. $x = \\frac{m}{n}$ subbing in we want: $$\\forall \\epsilon > 0, \\exists \\delta > 0, |\\frac{m}{n}-a| I am struggling to find the $\\delta$ necessary. I am able to bound $|\\frac{m}{n}-a|$ by $1+2|a|$ if I say that $\\delta \\le 1$. However I do not know how to find a $\\delta$ to yield the second inequality. Should I change my approach to sequential continuity? "} {"_id": "A.399", "text": "Disjoint axis-aligned rectangles in the plane Let $A$ be some set of axis-aligned rectangles in the plane, each pair of which has empty intersection. Prove that $A$ is a countable set. 
(An axis-aligned rectangle is a set of the form $$M = \\{\\langle x,y \\rangle \\in \\mathbb{R}^2 \\mid a \\leq x \\leq b,\\ c \\leq y \\leq d\\}$$ for $a,b,c,d$ such that $a < b$ and $ c < d$.) Attempt: I tried using the density of $\\mathbb{Q}$ in $(\\mathbb{R},\\leq)$, but without any success. "}