Columns: title (string) · question_body (string) · answer_body (string) · tags (string) · accepted (int64)
Does omitting the multiplication operator have an effect on order of operations
When you write a mathematical expression like this: $4:2(1+1)$, does the fact that the multiplication operator is not explicitly written have any bearing on the precedence? What is the order of operations in this case? Is it $4:4=1$ (order: parenthesized addition, implicit multiplication, division) or $2(1+1)=4$ (order: division, parenthesized addition, implicit multiplication)? If the explicit operator has no effect, this would be $4\div2\cdot(1+1)$ and calculated from left to right (because there is no precedence between division and multiplication). The result would then be $4$.
Technically, the implied multiplication is really just hiding the real controversy here. By distribution, yes, of course the term outside the brackets must be able to be distributed inside of it, or all of math breaks down. Some people have nonsensically argued that the expression is better parsed as: $$\frac{4}{2}(1+1)$$ But you have to be aware that smart calculators might choose either interpretation, and teachers might choose one of these at random and consider it correct. There are even some stories of math books that teach implied multiplication but whose answer sections did not use it. So it is indeterminate: nobody important enough has given a ruling on this, and enough people have chosen both sides that there is no way of knowing the implied meaning.
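As a concrete illustration (my addition, not part of the original answer): programming languages have no implicit multiplication at all, so once the operator is written explicitly the ambiguity disappears — division and multiplication share precedence and associate left to right. A small Python sketch:

```python
# With an explicit operator, 4 / 2 * (1 + 1) groups as (4 / 2) * (1 + 1):
# division and multiplication have equal precedence, associating left to right.
left_to_right = 4 / 2 * (1 + 1)
assert left_to_right == 4.0

# The "implicit multiplication binds tighter" reading corresponds to
# writing explicit parentheses around the product:
tight_binding = 4 / (2 * (1 + 1))
assert tight_binding == 1.0
```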
|notation|
0
Testing for convergence - $\int_{0}^{\infty} \frac{x^4}{e^{\sqrt{x}}} dx$
How do I go about testing for the convergence of the following improper integral: $$\int_{0}^{+\infty} \frac{x^4}{e^{\sqrt{x}}} dx$$ I don't suppose it's possible (or practical) to evaluate its antiderivative. I tried testing around for an asymptotically equivalent function in the neighbourhood of $+\infty$ , I've also tried to look for a fitting function to make the comparison test; both to no avail.
You may split your integral in two, one from $0$ to $1$ and the other from $1$ to $\infty$ . It is obvious the first one is finite due to continuity of the integrand. Furthermore, when $x\to\infty$ , $\dfrac{x^4}{e^{\sqrt{x}}}$ is way smaller than, for example, $\dfrac{1}{x^2}$ , as the exponential 'takes over'. More formally, $$\lim_{x\to\infty}\dfrac{\dfrac{x^4}{e^{\sqrt{x}}}}{\dfrac{1}{x^2}}=\lim_{x\to\infty}\dfrac{x^6}{e^\sqrt{x}}=0$$ Then, by comparison, as $$\int_{1}^{\infty} \dfrac{1}{x^2}\,dx<\infty$$ we have that $$\int_{1}^{\infty} \dfrac{x^4}{e^\sqrt{x}}\,dx<\infty$$ and so the original integral, from $0$ to $\infty$ , converges.
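As a sanity check (my addition, not part of the original answer), the substitution $u=\sqrt{x}$ turns the integral into $\int_0^\infty 2u^9e^{-u}\,du=2\cdot 9!=725760$, which a crude midpoint sum reproduces:

```python
import math

# After u = sqrt(x), the integral becomes the exact value 2 * 9! = 725760.
# The integrand 2 u^9 e^(-u) is negligible beyond u ~ 80, so a midpoint
# sum on [0, 80] is enough to confirm convergence to that value.
h = 0.001
total = 0.0
u = h / 2
while u < 80:
    total += 2 * u**9 * math.exp(-u) * h
    u += h

exact = 2 * math.factorial(9)  # 725760
assert abs(total / exact - 1) < 1e-6
```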
|calculus|integration|convergence-divergence|improper-integrals|
1
Differential of the function $\frac{1}{\sqrt{x^2+y^2}}\begin{pmatrix} x \\ y \end{pmatrix} $
I have the following simple question: find the differential $D\phi(x;y) (h_1 ; h_2) $ of the function $ \phi(x;y):= \frac{1}{\sqrt{x^2+y^2}}\begin{pmatrix} x \\ y \end{pmatrix},$ defined on $\mathbb{R}^2\setminus\{(0;0)\}$ . The answer is $$D\phi(x;y)(h_1 ; h_2) = \frac{1}{x^2 + y^2} \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} - \frac2{(x^2+y^2)^2}\begin{pmatrix} x\\ y \end{pmatrix} \cdot \begin{pmatrix} h_1\\ h_2 \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix}^T$$ but I don't understand why. More precisely: why isn't $ D\phi(x;y)(h_1 ; h_2)$ equal to $ J \cdot \begin{pmatrix} h_1\\ h_2 \end{pmatrix} $ with $J$ the Jacobian matrix? I know that this is a very simple question of understanding, but I am stuck, as I do not see on which formula of the differential this result stands. Thank you for your help.
Yes, your thinking is correct. They seem to have made numerous errors. We have a map $\Bbb R^2-\{0\}\to\Bbb R^2$ , so the derivative at the point $(x,y)$ should be a linear map from $\Bbb R^2$ to $\Bbb R^2$ . Indeed, since the function is continuously differentiable, this linear map is given by the matrix of partial derivatives. So we should have $$D\phi(x,y)\begin{pmatrix}h_1\\h_2\end{pmatrix} = \begin{pmatrix} \frac{y^2}{(x^2+y^2)^{3/2}} & -\frac{xy}{(x^2+y^2)^{3/2}} \\ -\frac{xy}{(x^2+y^2)^{3/2}} & \frac{x^2}{(x^2+y^2)^{3/2}}\end{pmatrix}\begin{pmatrix}h_1\\h_2\end{pmatrix}.$$ We can certainly rewrite this in the form $$\frac1{(x^2+y^2)^{3/2}}\begin{pmatrix}y\\-x\end{pmatrix}\begin{pmatrix}y\\-x\end{pmatrix}^\top \begin{pmatrix} h_1\\h_2\end{pmatrix}.$$ But their answer is nothing but mistakes.
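To see numerically which matrix is right (my check, not part of the original answer), one can compare the claimed Jacobian against central finite differences of $\phi$:

```python
import math

def phi(x, y):
    """phi(x, y) = (x, y) / sqrt(x^2 + y^2)."""
    r = math.hypot(x, y)
    return (x / r, y / r)

x, y = 1.2, -0.7
r3 = (x * x + y * y) ** 1.5

# Jacobian from the answer: (1/r^3) * [[y^2, -xy], [-xy, x^2]]
J = [[y * y / r3, -x * y / r3],
     [-x * y / r3, x * x / r3]]

h = 1e-6
for j, (dx, dy) in enumerate([(h, 0.0), (0.0, h)]):
    fp = phi(x + dx, y + dy)
    fm = phi(x - dx, y - dy)
    for i in range(2):
        numerical = (fp[i] - fm[i]) / (2 * h)
        assert abs(numerical - J[i][j]) < 1e-6
```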
|calculus|analysis|derivatives|
1
For $a,b,c>0$ and $a+b+c=1$ prove $\frac{1}{ab+2c^{2}+2c}+\frac{1}{bc+2a^{2}+2a}+\frac{1}{ca+2b^{2}+2b}\ge \frac{1}{ab+bc+ca}$
For $a,b,c>0$ and $a+b+c=1,$ prove: $$\frac{1}{ab+2c^{2}+2c}+\frac{1}{bc+2a^{2}+2a}+\frac{1}{ca+2b^{2}+2b}\geqq \frac{1}{ab+bc+ca}$$ Well, I tried to use AM-GM, but it turns out: $$\frac{9}{4(a^{2}+b^{2}+c^{2}+5(ab+bc+ca))} \geq\frac1{ab+bc+ca}$$ and I always get this: $$ab+bc+ca \geq a^{2}+b^{2}+c^{2}$$ Can someone help me see what I'm doing wrong here? P/S: this is the first time I've used this format, so please can anyone help me edit it. Thank you!
This can be solved with an application of Jensen's inequality. You want to come up with a function $f(x)$ such that $f(a)=\frac{1}{bc+2a^2+2a}$ , $f(b)=\frac{1}{ac+2b^2+2b}$ , and $f(c)= \frac{1}{ab+2c^2+2c}$ . Then use the second derivative to show that this function is convex which allows you to apply Jensen's inequality. Jensen's inequality will give you the left side of your inequality is less than or equal to $f(a+b+c)$ which you will then need to do some algebra to show is less than the desired $\frac{1}{ab+bc+ca}$ . The fact that $a+b+c=1$ will help with this.
|inequality|
0
Testing for convergence - $\int_{0}^{\infty} \frac{x^4}{e^{\sqrt{x}}} dx$
How do I go about testing for the convergence of the following improper integral: $$\int_{0}^{+\infty} \frac{x^4}{e^{\sqrt{x}}} dx$$ I don't suppose it's possible (or practical) to evaluate its antiderivative. I tried testing around for an asymptotically equivalent function in the neighbourhood of $+\infty$ , I've also tried to look for a fitting function to make the comparison test; both to no avail.
We can apply infinite series. Namely, $$\int\limits_{n^2}^{(n+1)^2 }{x^4\over e^{\sqrt{x}}}\,dx\le {(n+1)^8\over e^n}(2n+1)$$ By the Cauchy $n$-th root test the series $\sum(2n+1){(n+1)^8\over e^n}$ is convergent, and so is the integral in question.
|calculus|integration|convergence-divergence|improper-integrals|
0
Is the state norm of an asymptotically stable linear system always bounded?
Suppose I have the dynamical system $$x_{i+1} = A_{i+1} x_{i}$$ with state vector $x \in \mathbb{R}^{n}$ . If it's given that $\lim_{i \to \infty} \Vert x_{i} \Vert = 0$ , does this imply that there is an $M$ with $\Vert x_{i} \Vert \leq M$ for all $i$? How do I prove this?
A convergent sequence is automatically bounded, no matter how it has been generated. Of course the bound $M$ depends on the sequence, i.e. in this case on $x_1$ .
|limits|dynamical-systems|control-theory|upper-lower-bounds|
1
Intersection of two ellipses at exactly 2 points
Thanks for your time. Let us consider the ellipses $x^TAx=k$ and $x^TBx=l$, where we can assume that we know $A$ and $B$. Suppose we fix a value of $k$; then we can adjust $l$ so that one ellipse lies entirely inside the other and touches it at exactly two points. I was wondering if there is any analytical way to obtain the solutions $(x,k,l)$ for this system. If not, how does one solve it numerically? Any help much appreciated!
We can solve this problem explicitly using the tangency condition. Given two ellipses $$ \cases{ a_1 x^2+b_1 x y + c_1 y^2 - k = 0\\ a_2 x^2+b_2 x y + c_2 y^2 - l = 0 } $$ eliminating $y$ we have $$ p(x) = \mu_1 k^2+ \mu_2 l k + \mu_3 l^2 + \mu_4 k x^2 + \mu_5 l x^2 + \mu_6 x^4=0 $$ and as the two ellipses are tangent at two different points, necessarily we have $$ p(x) = \mu_6(x-x_1)^2(x-x_2)^2 $$ where $x_1, x_2$ are the tangency abscissas, and this should be true for all $x$, so $$ \cases{ \mu_1 k^2 + \mu_2 k l + \mu_3 l^2 - \mu_6 x_1^2 x_2^2=0\\ \mu_4 k + \mu_5 l - \mu_6 x_1^2 - 4 \mu_6 x_1 x_2 - \mu_6 x_2^2=0\\ x_1 + x_2 = 0 } $$ Three equations and four unknowns $(x_1,x_2,l,k)$, so we can determine $(x_1(k),x_2(k),l(k))$ or $(x_1(l),x_2(l),k(l))$ . Taking $a_1=3,a_2=5,b_1=1,b_2=-2,c_1=5,c_2 = 2$ we have $\mu_1=4,\mu_2=-20,\mu_3=25,\mu_4=52,\mu_5=-202,\mu_6=493$ and choosing $l=1$ we obtain for $k$ $$ \left\{\frac{1}{18} (32 - \sqrt{493}), \frac{1}{18} (32 + \sqrt{493})\right\} $$
|geometry|numerical-methods|numerical-linear-algebra|ellipsoids|
0
Finding the percentage change in the slope with the correct plus or minus symbol.
I am trying to build a formula with the data that I have below. The formula will be embedded in an Arduino sketch. Column A is the time stamp and column B is my voltage value. Column C is a threshold value, which I am pretty sure is not relevant to my question here. Column E gives me the slope between two points. The Excel formula for column E is E1=(B2-B1)/(A2-A1). This formula was based on slope 'm' = (y2-y1)/(x2-x1). Column G gives me the percent change between two values in column E: G1=((E2-E1)/E1)*100. This formula for percent was based on [ (New-Old)/Old ] x 100. So far, so good. If you look at row 5, which has coordinates of (550, 36), column G is telling me that there has been a positive increase in slope of 2642.857%. Visually, I can see the slope increasing on the graph, and the percentage change is 2642.857% according to Excel. So far so good. Here is the issue. If you look at row 7, visually, it seems that the slope is increasing. But column ‘G’ is telling me that the value there i
You just need to take the absolute value of the old value in the denominator. Basically, percentage change would be given by $$\frac{\text{new value} - \text{old value}}{|\text{old value}|} \times 100 \,\%$$ So, replace E1 with ABS(E1) in the formula (denominator) and you should be fine.
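The same fix as a small function (my sketch; the function name is illustrative, not from the post):

```python
def pct_change(old, new):
    """Percent change whose sign stays meaningful even for negative slopes."""
    if old == 0:
        raise ValueError("percent change is undefined when the old value is 0")
    return (new - old) / abs(old) * 100

# A slope going from -10 to -5 *increased*; abs() in the denominator
# keeps the sign of the result correct:
assert pct_change(-10, -5) == 50
assert pct_change(-10, -15) == -50
assert pct_change(10, 15) == 50
```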
|percentages|slope|
0
Proving Series has no limit
I have this series, and I need to show that it has no limit. I tried the sandwich (squeeze) approach: show that the limit of the inner sequence is not $0$ (and therefore the series has no limit). For the upper bound I managed to expand it so that it becomes $n\sqrt[6]{n}\,(n+1-n)=n\sqrt[6]{n}$, whose limit goes to infinity, but I couldn't do the same with the lower bound. I would be glad if someone could help me. (Also, regarding the lower bound, I am not allowed to evaluate infinity minus infinity.) $$\sum_{n=1}^{\infty}n\sqrt[6]{n}(\sqrt[3]{n+1}-\sqrt[3]{n}) $$
You can use an inequality to show that $\sum_{n=1}^{\infty}n\sqrt[6]{n}(\sqrt[3]{n+1}-\sqrt[3]{n})$ diverges. $$n\sqrt[6]{n}(\sqrt[3]{n+1}-\sqrt[3]{n})=\\n\frac{\sqrt[6]{n}}{(\sqrt[3]{(n+1)^2}+\sqrt[3]{n(n+1)}+\sqrt[3]{n^2})}\\ \ge n\frac{\sqrt[6]{n}}{(\sqrt[3]{(n+1)^2}+\sqrt[3]{(n+1)^2}+\sqrt[3]{(n+1)^2})}\\= n\frac{\sqrt[6]{n}}{3\sqrt[3]{(n+1)^2}}\\ $$ then use the $p$-series rule, or do as below: $$n\frac{\sqrt[6]{n}}{3\sqrt[3]{(n+1)^2}} \ge n\frac{\sqrt[6]{n}}{4\sqrt[3]{n^2}}=\\\frac 14 \sqrt n$$ This means $$\sum_{n=1}^{\infty}n\sqrt[6]{n}(\sqrt[3]{n+1}-\sqrt[3]{n}) \ge \frac 14 \sum_{n=1}^{\infty} \sqrt n$$
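As a numerical sanity check (my addition): rationalizing shows the general term behaves like $\sqrt{n}/3$, consistent with the lower bound $\frac14\sqrt n$ used above:

```python
import math

def term(n):
    # general term n * n^(1/6) * ((n+1)^(1/3) - n^(1/3))
    return n * n ** (1 / 6) * ((n + 1) ** (1 / 3) - n ** (1 / 3))

n = 10**6
# term_n / sqrt(n) -> 1/3, so the terms do not tend to 0 and the series diverges
assert abs(term(n) / math.sqrt(n) - 1 / 3) < 1e-3
# and the lower bound term_n >= sqrt(n)/4 from the answer holds here
assert term(n) >= math.sqrt(n) / 4
```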
|sequences-and-series|limits|summation|
1
excisive couples, sufficient conditions
Question: What are sufficient conditions on pairs $(X,A)$ and $(Y,B)$ so that $(A\times Y,X\times B)$ is an excisive couple? In more detail: Given pairs of spaces $(X,A)$ and $(Y,B)$ , consider the inclusion $$ \Delta(A\times Y) + \Delta(X \times B) \to \Delta(A\times Y \cup X\times B) $$ Here $\Delta(A\times Y) + \Delta(X \times B)$ denotes the subcomplex of the singular chain complex $\Delta(A\times Y \cup X\times B)$ generated by those singular simplices whose image is completely contained in one of $A\times Y$ or $X\times B$ . Question: Under which circumstances is this map a weak equivalence (i.e., induces isomorphisms in homology)? It is clear to me that each of the following is sufficient: $A=B$; one of $A,B$ is empty; $A,B$ are both open. I am learning from lecture notes which claim that it also follows "from excision results" in case the inclusions are closed cofibrations, but I have no idea how to prove this. I'd love to get a reference. Also, what happens when $(X,A)$ and $(Y,B)$ are NDR pairs?
Here are some observations (no claims of exhaustiveness): The map is an isomorphism if and only if $A$ is a union of path-components of $X$ or $B$ is a union of path-components of $Y$ . (E.g. if $A=\emptyset$ or $B=\emptyset$ .) The map is a chain-homotopy equivalence if and only if it is a quasi-isomorphism (you call it weak equivalence) since both complexes are free. The excision theorem tells us that a sufficient condition is that $X\times B\cup A\times Y$ is the union of the interiors of $X\times B$ and $A\times Y$ respectively in $X\times B\cup A\times Y$ . This is the case if $A\subseteq X$ and $B\subseteq Y$ are open (I don't see any other practical sufficient condition). The map is a quasi-isomorphism if the inclusions $A\subseteq X$ and $B\subseteq Y$ are closed and one of them is a cofibration. Firstly, this implies that $X\times B$ and $A\times Y$ are closed in $X\times Y$ , hence closed in $X\times B\cup A\times Y$ . Thus, we have a pushout diagram $$ \require{AMScd} \begin
|algebraic-topology|homology-cohomology|homological-algebra|homotopy-theory|
1
Proof that every 2-connected graph has at least one contractible edge.
I am trying to prove the following statement: An edge $e$ in a 2-connected graph $G$ is said to be contractible if $G/e$ is also 2-connected. Prove that every 2-connected graph of order at least 3 has at least one contractible edge. I know that it's a well-known result that every edge in a 2-connected graph is either contractible or deletable, but I would like to prove the statement without using this theorem. My approach so far has been: first, prove that an edge $e = xy$ is contractible if and only if $\{x,y\}$ is not a vertex cut of the graph. Next, prove that any 2-connected graph with at least 3 vertices has at least one edge such that its vertices do not form a vertex cut. The first part was relatively easy, but I'm having quite a bit of trouble with the second part. Some other ideas I've considered are showing that the ear decomposition of $G$ must have an edge such that deleting both vertices does not affect the connectivity of the graph or showing that there must be a
I would like to add to the idea of Dániel, since I was not quite convinced at first: "How do we know that such a proper ear exists?" However, I could convince myself by the following line of argument: Let $G$ be a 2-connected graph of order at least 3 (so $G \not \simeq K_3$ ) and consider an ear-decomposition of $G$ (which exists by the 2-connectedness of $G$ ) which takes as the "starting cycle" a cycle $C$ of minimal size $g(G)$ . If $C = G$ , we are done, as $n(C) \geq 4$ , so any edge $e \in E(C)$ is contractible. Otherwise the next graph in the ear-decomposition is obtained by a $C$ -path $P$ , which cannot be just an edge, because that would imply the existence of a cycle $C' \subseteq G$ with $n(C') < n(C)$ , contradicting our choice of $C$ . Therefore, $P$ is an ear which is not just an edge; so we can safely take a last such ear.
|graph-theory|graph-connectivity|
0
Closed-form expression for an integral with z derivative of Jacobi theta function
I have an expression of the form $$ \tag{1} A(\chi) = \int_{0}^\infty\sum_{i=0}^\infty (-1)^{i+1}\frac{(2i+1)\chi}{\sqrt{t}}\exp\left(\frac{-(2i+1)^2\chi^2}{t}\right)\mathrm{d}t. $$ If I am not mistaken, this expression is equal to $$ \tag{2} A(\chi) = \frac{1}{4}\int_{0}^\infty \frac{\chi}{\sqrt{t}}\frac{\mathrm{d}}{\mathrm{d}z}\left[\vartheta_3\left(\exp\left(\frac{-\chi^2}{t}\right);\frac{\pi}{4}\right)\right]\mathrm{d}t, $$ where $$ \tag{3} \vartheta_3(q;z) = 1+2\sum_{i=1}^\infty q^{i^2}\cos(2iz). $$ Getting rid of the infinite summation excited me. $\sin(i\pi/2)$ handles both the sign alternation and the vanishing of the summation in (3) for even $i$ , which looks neat. But other than the neatness, I cannot proceed any further and this provides only a marginal computational benefit. I know that $$ \frac{\mathrm{d}\vartheta_3(q;z)}{\mathrm{d}q}=-\frac{1}{4q}\frac{\mathrm{d^2}\vartheta_3(q;z)}{\mathrm{d}z^2}, $$ which might be somehow related. This is why I thought there might be a
Here's how I was able to get a seemingly correct answer. I, being a physicist by training, have played very fast and loose with somewhat unfamiliar mathematics here. I am sure, on some level, this is illegitimate. However, we have... $$A(\chi)=-\sum_{i=0}^{\infty}(-1)^{i}(2i+1)\chi\int_{0}^{\infty}t^{-\frac{1}{2}}e^{-\frac{(2i+1)^2\chi^2}{t}}dt$$ applying the substitution $u=\frac{(2i+1)^2\chi^2}{t}$ yields; \begin{align} \tag{1} A(\chi) &=-\sum_{i=0}^{\infty}(-1)^{i}(2i+1)^2\chi^2\int_{0}^{\infty}u^{-\frac{3}{2}}e^{-u}du\\ \tag{2} &=-\sum_{i=0}^{\infty}(-1)^{i}(2i+1)^2\chi^2\Gamma\big(-\frac{1}{2}\big)\\ \tag{3} &=2\chi^2\sqrt{\pi}\sum_{i=0}^{\infty}(-1)^{i}(2i+1)^2 \end{align} Now, we can write the remaining sum thusly... \begin{align} \tag{4} &\space\space\space\space\space \sum_{i=0}^{\infty}(-1)^{i}(2i+1)^2\\ \tag{5} &=\sum_{i=0}^{\infty}(-1)^{i}(4i^2+4i+1) \end{align} Which gives us; \begin{align} \tag{6} &= 4\sum_{i=0}^{\infty}(-1)^{i}i^2+4\sum_{i=0}^{\infty}(-1)^{i}i+\sum_{i=0}
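A hedged way to make the last divergent step honest (my addition, not from the original answer) is Abel summation: insert a factor $x^i$, sum the resulting power series in closed form, and let $x\to -1$:

```latex
\sum_{i=0}^{\infty}(2i+1)^2 x^i=\frac{1+6x+x^2}{(1-x)^3}\qquad(|x|<1),
```

so the Abel-regularized value of the alternating sum is

```latex
\sum_{i=0}^{\infty}(-1)^i(2i+1)^2
\;\overset{\text{Abel}}{=}\;
\left.\frac{1+6x+x^2}{(1-x)^3}\right|_{x=-1}
=\frac{1-6+1}{2^3}=-\frac12,
```

which, inserted into $(3)$, would give $A(\chi)=2\chi^2\sqrt{\pi}\cdot\left(-\tfrac12\right)=-\chi^2\sqrt{\pi}$. This regularization step is my assumption about how to interpret the divergent series, in the same "fast and loose" spirit as the answer.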
|definite-integrals|elliptic-functions|theta-functions|
1
For $a,b,c>0$ and $a+b+c=1$ prove $\frac{1}{ab+2c^{2}+2c}+\frac{1}{bc+2a^{2}+2a}+\frac{1}{ca+2b^{2}+2b}\ge \frac{1}{ab+bc+ca}$
For $a,b,c>0$ and $a+b+c=1,$ prove: $$\frac{1}{ab+2c^{2}+2c}+\frac{1}{bc+2a^{2}+2a}+\frac{1}{ca+2b^{2}+2b}\geqq \frac{1}{ab+bc+ca}$$ Well, I tried to use AM-GM, but it turns out: $$\frac{9}{4(a^{2}+b^{2}+c^{2}+5(ab+bc+ca))} \geq\frac1{ab+bc+ca}$$ and I always get this: $$ab+bc+ca \geq a^{2}+b^{2}+c^{2}$$ Can someone help me see what I'm doing wrong here? P/S: this is the first time I've used this format, so please can anyone help me edit it. Thank you!
By the Cauchy–Schwarz inequality: $$\sum \frac1{\left( ab+2c^2+2c \right)} \cdot \sum \left(a^2b^2 (ab+2c^2+2c) \right) \ge \left(\sum ab\right)^2.$$ Divide both sides by $\left(\sum ab\right)^3:$ $$\sum \frac1{\left( ab+2c^2+2c \right)}\cdot \frac{ \sum \left(a^2b^2 (ab+2c^2+2c) \right)}{\left(\sum ab\right)^3} \ge \frac1{\sum ab}.$$ It is enough to prove that $$\frac{ \sum \left(a^2b^2 (ab+2c^2+2c) \right)}{\left(\sum ab\right)^3}\le 1.$$ Rewrite as: $$\sum (ab)^3+\sum 2(a^2b^2c^2+a^2b^2c) \le \sum (ab)^3+\sum 3(a^3b^2c+a^3bc^2)+6a^2b^2c^2.$$ Since $a+b+c=1$ : $$\sum 2a^2b^2c(a+b+c) \le \sum 3(a^3b^2c+a^3bc^2).$$ Or: $$6a^2b^2c^2 \le \sum (a^3b^2c+ab^2c^3),$$ which is true due to a sum of three AM-GMs.
|inequality|
0
Proof of the Chain Rule of multivariable functions in Munkres' Analysis on Manifolds
In Munkres' Analysis on Manifolds, page 57, Theorem 7.1 (the Chain Rule), it states: ...For this purpose, let us introduce the function $F(\mathbf h)$ defined by setting $F(\mathbf 0)=\mathbf 0$ and \begin{equation*} F(\mathbf h)=\frac{[\Delta(\mathbf h)-Df(\mathbf a)\cdot \mathbf h]}{|\mathbf h|} \quad\text{for } \mathbf h\neq\mathbf 0. \end{equation*} ...Furthermore, one has the equation \begin{equation*} \Delta(\mathbf h)=Df(\mathbf a)\cdot \mathbf h +|\mathbf h|F(\mathbf h) \end{equation*} for $\mathbf h\neq\mathbf 0$ , and also for $\mathbf h=\mathbf 0$ (trivially). The triangle inequality implies that \begin{equation*} |\Delta (\mathbf h)|\leq m|Df(\mathbf a)|\cdot|\mathbf h|+|\mathbf h|\cdot|F(\mathbf h)|. \end{equation*} I'm confused about $|\Delta (\mathbf h)|\leq m|Df(\mathbf a)|\cdot|\mathbf h|+|\mathbf h|\cdot|F(\mathbf h)|$ . Where does this " $m$ " come from? I think it should be $|\Delta (\mathbf h)|\leq |Df(\mathbf a)|\cdot|\mathbf h|+|\mathbf h|\cdot|F(\mathbf h)|$ . This is a question from Munkres' Analysis on Manifolds. I show you the whole context
The reason is the metric used here is "sup metric". As a result, we have $|\mathbf h|=\max\{|h_1|,\cdots,|h_m|\}$ and $|Df(\mathbf a)|=\max\{|D_jf_i(\mathbf a)|;i=1,2,\cdots,n \ and\ j=1,2,\cdots,m\}$ . So, we have: $$\begin{align*} |\Delta(\mathbf{h})| &=\left|Df(\mathbf{a})\cdot\mathbf{h}+|\mathbf{h}|F(\mathbf{h})\right|\\ &\leq|Df(\mathbf{a})\cdot\mathbf{h}|+\left||\mathbf{h}|F(\mathbf{h})\right|\\ &\leq m|Df(\mathbf{a})|\cdot|\mathbf{h}|+|\mathbf{h}|\cdot|F(\mathbf{h})| \end{align*}$$
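Spelled out componentwise (my elaboration of the same estimate), the factor $m$ comes from bounding a sum of $m$ terms in the sup norm:

```latex
\bigl|(Df(\mathbf a)\cdot\mathbf h)_i\bigr|
=\Bigl|\sum_{j=1}^{m}D_jf_i(\mathbf a)\,h_j\Bigr|
\le\sum_{j=1}^{m}\bigl|D_jf_i(\mathbf a)\bigr|\,|h_j|
\le m\,|Df(\mathbf a)|\,|\mathbf h|,
```

and taking the maximum over $i=1,\dots,n$ gives $|Df(\mathbf a)\cdot\mathbf h|\le m\,|Df(\mathbf a)|\,|\mathbf h|$. With the Euclidean operator norm the factor $m$ would indeed disappear, which is why the sup-metric convention matters here.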
|calculus|analysis|multivariable-calculus|derivatives|partial-derivative|
1
Prove $T$ with bounded basis sum in Hilbert space is compact.
Let $T:\mathcal{H}\rightarrow \mathcal{H}$ be a linear continuous operator between Hilbert spaces and $\{b_i\: |\: i \in I\}$ an orthonormal basis. Prove that if $$\sum_{i\in I}\lVert T b_i\rVert^2<\infty,$$ then $T$ is compact. Hilbert spaces are reflexive, as is easily seen from applying the Riesz Representation Theorem twice. This means that if $x_n$ is a sequence whose norm is bounded by $M$ , there is a subsequence such that $x_{n_j}\rightharpoonup x$ . By continuity of $T$ , we obtain that $T(x_{n_j})\rightharpoonup T(x)$ . I want to show this convergence is strong. Because we have a basis, it is clear that from Parseval we obtain: $$\lVert T(x_{n_j})-T(x)\rVert^2=\sum_{i\in I}|\langle T(x_{n_j}-x),b_i\rangle |^2=\sum_{i\in I}|\langle x_{n_j}-x,T^*(b_i) \rangle|^2 $$ By our hypothesis there is also $I_o$ with $|I_o|<\infty$ such that $\sum_{i\in I_o^C}\lVert Tb_i\rVert^2<\epsilon$ . Using weak convergence, for every $i\in I_o$ we have $|\langle x_{n_j }-x, T^*(b_i) \rangle|\rightarrow 0$ , and thus one can ta
Observe that $$Tx=\sum_{k=1}^\infty \langle x,b_k\rangle Tb_k$$ Let $$T_nx=\sum_{k=1}^n\langle x,b_k\rangle Tb_k$$ The operators $T_n$ have finite rank. Moreover $$Tx-T_nx= \sum_{k=n+1}^\infty \langle x,b_k\rangle Tb_k$$ hence by the Cauchy-Schwarz and the Bessel inequalities we get $$\|Tx-T_nx\|^2\le \sum_{k=n+1}^\infty\|Tb_k\|^2\,\|x\|^2$$ Thus $$\|T-T_n\|\le \left (\sum_{k=n+1}^\infty\|Tb_k\|^2\right )^{1/2}\underset{n\to \infty}{\longrightarrow} 0$$ Hence $T$ is compact.
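A toy numerical illustration (my sketch, not from the answer): for the diagonal operator $Tb_k=b_k/k$ the hypothesis holds since $\sum 1/k^2<\infty$, the true truncation error is $\|T-T_n\|=1/(n+1)$, and the tail bound from the proof dominates it and tends to $0$:

```python
import math

# Tail bound from the proof, ||T - T_n|| <= (sum_{k>n} ||T b_k||^2)^(1/2),
# for the diagonal operator T b_k = b_k / k (so ||T b_k|| = 1/k).
def tail_bound(n, cutoff=10**6):
    return math.sqrt(sum(1.0 / k**2 for k in range(n + 1, cutoff)))

for n in (1, 5, 50, 500):
    actual_norm = 1.0 / (n + 1)          # largest remaining diagonal entry
    assert actual_norm <= tail_bound(n)  # the bound dominates the true norm

# The bound tends to 0, which is exactly what exhibits T as a norm limit
# of finite-rank operators, hence compact.
assert tail_bound(10**4) < 0.011
```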
|functional-analysis|hilbert-spaces|compact-operators|
0
Is there a sequence, defined in terms of integers and the basic arithmetic operations such that $\lim\limits_{n\rightarrow\infty} s_n = \pi$?
It's a well-known formula that $$ e = \lim\limits_{n\rightarrow\infty} (1+1/n)^n $$ and I'm wondering if there exists a similar formula for $\pi$ . All the formulas I know for $\pi$ involve series or products or special functions (such as inverse trig functions). Or one can use Stirling's approximation if factorials are allowed. I'm wondering if there exists a function $F:\mathbb{Z}^m\rightarrow \mathbb{R}$ such that $F$ can be written in terms of a finite number of additions, subtractions, multiplications, divisions, and exponentiations, and $$ \pi = \lim\limits_{n\rightarrow\infty} F(\underbrace{n,n,\cdots,n}_{r \text{ times}},k_1,k_2,k_3,\dots,k_{m-r}) $$ for some fixed integers $m$ , $r$ , and $k_1,k_2,\dots,k_{m-r}$ . In particular, a formula that is defined by a series, infinite product, recursive formula, or continued fraction would not satisfy these conditions. To be perfectly clear about what I mean, let me formalize. Define a set of elementary sequences $S\subset \mathbb{R}^\
Community wiki as it does not satisfy the constraints of the OP: By this article, if you consider the sequence $$ a_{n+1}=\left(1+\frac{1}{2n+1}\right) a_n,\qquad a_1=2, $$ you have $$ \pi= \lim_n \frac{a_n^2}{n} $$ (note the square: $a_n\sim\sqrt{\pi n}$). The sequence admits a closed form as $$ a_n= \frac{\sqrt{\pi}\, n!}{\Gamma\left(n+\frac{1}{2}\right)} $$ By this post you can rewrite it as $$ a_n=\frac{(n!)^22^{2n}}{(2n)!} $$ So you have $$ \pi= \lim_{n \to \infty} \frac{(n!)^4\,2^{4n}}{n\,\bigl((2n)!\bigr)^2} $$
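As a numerical check (my addition): iterating the recursion $a_{n+1}=(1+\frac{1}{2n+1})a_n$ with $a_1=2$ in floating point avoids the huge factorials; since $a_n\sim\sqrt{\pi n}$, the quantity $a_n^2/n$ approaches $\pi$ (this is the Wallis product in disguise):

```python
import math

# Iterate a_{n+1} = (1 + 1/(2n+1)) a_n starting from a_1 = 2.
# Closed form: a_n = 4^n (n!)^2 / (2n)!  ~  sqrt(pi * n),
# so a_n^2 / n converges to pi.
a = 2.0
N = 100000
for n in range(1, N):
    a *= 1 + 1 / (2 * n + 1)

assert abs(a * a / N - math.pi) < 1e-3
```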
|sequences-and-series|limits|number-theory|
0
Finding the entries in a matrix given that the rank has to be $2$
Find all $y\neq0$ such that the matrix (defined for $y\neq0$) $$\begin{pmatrix}-3&2&y\\ 0&1&-\frac{1}{y}\\ y&0&y\end{pmatrix}$$ has rank $2$ . It seems ok — we just need a zero row, which could be the last row, for example, if $y = 0$ . However, this would break the entry in the second row, which requires $y≠0$ . I'm not sure what to do, any suggestions? Could the determinant help in this case?
Call the first, the second and the third columns of the matrix $u,v$ and $w$ respectively. Clearly $u$ and $v$ are linearly independent. So, in order that the matrix has rank two, we must have $w=au+bv$ for some scalars $a$ and $b$ . In particular, by looking at the bottom elements of the three vectors, we have $y=w_3=au_3+bv_3=a(y)+b(0)=ay$ . Since $y\neq0$ , $a=1$ . Consequently, $w-u=bv$ , i.e., $$ \pmatrix{y+3\\ -\frac{1}{y}\\ 0}=b\pmatrix{2\\ 1\\ 0}.\tag{$\#$} $$ Since $-\frac{1}{y}$ is nonzero, so must be $b$ . Therefore $(\#)$ holds if and only if $\frac{y+3}{-1/y}=2$ , i.e., iff $y^2+3y+2=0$ . In other words, the matrix has rank two iff $y=-1$ or $y=-2$ .
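A quick check of the conclusion (my sketch): at $y=-1$ and $y=-2$ the determinant vanishes while a $2\times2$ minor stays nonzero, so the rank is exactly $2$:

```python
def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for y in (-1.0, -2.0):
    m = [[-3.0, 2.0, y],
         [0.0, 1.0, -1.0 / y],
         [y, 0.0, y]]
    assert abs(det3(m)) < 1e-12          # determinant 0  =>  rank < 3
    minor = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    assert abs(minor) > 0.5              # nonzero 2x2 minor  =>  rank >= 2
```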
|linear-algebra|determinant|matrix-rank|
0
Analytic continuation of double factorial
This question is somewhat informative. In Wikipedia, the following definition of the double factorial continued to complex arguments is provided: $$ k!!=\sqrt{\frac{2}{\pi}}2^{\frac{k}{2}}\Gamma[k/2+1] $$ Clearly, this definition does not work for positive even numbers. Is there any double factorial formula in terms of Gamma functions which works for both odd and even arguments?
$$(x)!!=\lim_{n\rightarrow\infty}\left(2n-\sin^{2}\left(\frac{\pi x}{2}\right)\right)^{\frac{x+\sin^{2}\left(\frac{\pi x}{2}\right)}{2}}\prod_{k=1}^{n}\frac{2k-\sin^{2}\left(\frac{\pi x}{2}\right)}{2k+x}$$ Let $b(x)$ be Brian's continuation and $k(x)$ be mine. Then for all $x\in\mathbb{R}\backslash\mathbb{Z}^-$ , $|k(x)|\ge|b(x)|$ . I.e. our continuations aren't the same except in the positive integers.
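A numerical spot check of this limit formula (my sketch; the limit is approximated by truncating the product at a large $n$, so only rough agreement is expected):

```python
import math

def double_factorial_cont(x, n=10**5):
    # truncation of the limit/product formula at a large n
    s = math.sin(math.pi * x / 2) ** 2
    val = (2 * n - s) ** ((x + s) / 2)
    for k in range(1, n + 1):
        val *= (2 * k - s) / (2 * k + x)
    return val

assert abs(double_factorial_cont(5) / 15 - 1) < 1e-3   # 5!! = 15 (odd case)
assert abs(double_factorial_cont(6) / 48 - 1) < 1e-3   # 6!! = 48 (even case)
```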
|functions|
0
Is there an enumeration of finitely presented groups?
I know that the general word problem is undecidable, but is there an effective enumeration of presentations of all finitely presented groups generated by $n$ elements in which each isomorphism class of groups appears exactly once? I have no idea how to tackle this problem. I'm motivated by the fact that isomorphism classes of path-connected covering maps $E \to \bigvee_n S^1$ correspond precisely to subgroups of the free group on $n$ elements. So answering the previous question solves the one about covering maps too.
Since the isomorphism problem for finitely presented groups is undecidable, there can be no such enumeration, at least for $n \ge 2$ . If there were such an enumeration $G_1,G_2,\ldots,G_n,\ldots$ then I could solve the isomorphism problem as follows. Given a finite presentation on $n$ generators, attempt to construct isomorphisms between the group defined by this presentation and the groups $G_1,G_2,\ldots,G_n,\ldots$ in parallel. Eventually an isomorphism with exactly one of the groups $G_i$ would be found.
|group-theory|free-groups|combinatorial-group-theory|
1
What's wrong with this derivation of the volume of a hemisphere?
My idea to calculate the volume of the hemisphere is to sum up the areas of circles of all radii up to the radius of the hemisphere we are interested in: $$\int_0^r \pi x^2 dx$$ This gives $\frac{1}{3}\pi r^3 \neq \frac{2}{3}\pi r^3$ , deviating by a factor of $2$ from the known formula for the volume of a hemisphere. Can you explain where my reasoning is wrong and why it deviates by a factor of $2$ ?
The thing to keep in mind with disk integration is the fact that the variable of integration represents the thickness of each disk. You are trying to integrate the volume of a half-sphere, using: $$ \int_0^r \pi x^2 dx $$ with the idea being that a bunch of circular disks (each of area $A=\pi x^2$ ) should add up to make a half-sphere. So if we investigate, things seem to make sense at the endpoints of $x=0$ and $x=r$ where the areas are $A=\pi 0^2$ and $A=\pi r^2$ respectively as expected. But because the overall answer is incorrect we clearly have to investigate further! What occurs at the half-way point? At $x=r/2$ the area is $A=\pi (r/2)^2$ ... but that is much too narrow a disc! A hemisphere is still pretty wide midway up. Doing a simple Pythagorean Theorem check, the hypotenuse being $r$ and the side being $r/2$ would mean the disk radius should be $\sqrt{3/4}r\approx 0.866r$ not $0.5r$ ! Do you see the issue now? Setting the area of each disk to $A=\pi x^2$ inherently implies t
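The fix the answer is driving at, in code (my sketch, not part of the original): the disk at height $x$ has radius $\sqrt{r^2-x^2}$, not $x$, and summing those disks does give $\tfrac{2}{3}\pi r^3$:

```python
from math import pi

r = 2.0
h = 1e-4         # disk thickness
total = 0.0
x = h / 2        # midpoint of each slab; x = height above the base
while x < r:
    disk_radius_sq = r * r - x * x   # Pythagoras, not x^2
    total += pi * disk_radius_sq * h
    x += h

assert abs(total - (2 / 3) * pi * r**3) < 1e-4
```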
|integration|geometry|solution-verification|solid-geometry|
1
Coulomb's law from Maxwell's equations.
I'm trying to find general sufficient additional conditions to derive the Coulomb equation for the electric field generated by a steady point charge in free space from Maxwell's equations under said conditions. I know that a way to do this is assuming that the solution of Maxwell's equations must have spherical symmetry due to the disposition of charges (a single point charge, in fact) having such a degree of symmetry. The thing is, how can I prove that the solution of Maxwell's equations has to be symmetric if I don't know the equation of the electric field in advance? And if the answer is "because space is isotropic", how can I mathematically write such a condition? In fact, if I consider Maxwell's equations (I write just the first two for the case we are considering) I get \begin{align} \int_{\partial \Omega} E \cdot dS &= \delta(\Omega)q \\ \int_\gamma E \cdot ds &= 0 \quad \text{if} \quad \partial\gamma=\emptyset \end{align} ( $\delta$ is $1$ if the charge is inside of $\Omega$ , $0$ if it i
For the spherically symmetric case, you can constrain it by forcing $|E|$ to be a function of only distance $|r|$ from the point charge.
|partial-differential-equations|
0
Volume of a great icosahedron
This is the image of a great icosahedron that I obtained starting from the coordinates of the vertices, labeled $A$ , $B$ , $C$ , etc. Now I want to calculate the volume of the solid. On the internet (as on this page of Wolfram) I can find a formula for this volume, $$ V=\frac{l^3}{4}(25+9\sqrt{5}) $$ but I can't understand how it is obtained. Can someone explain where it comes from?
I'll find the point $B$ from the following illustration using the regular icosahedron from this answer . (sorry to every reader for the godawful "graphic design"...) Namely, it can be taken as the intersection of planes $P_1$ , $P_2$ , $P_3$ , where $$(0,-p,1),(-1,0,-p),(p,1,0) \in P_1,$$ $$(0,-p,1),(1,0,-p),(-p,1,0) \in P_2,$$ $$(0,p,1),(-p,-1,0),(p,-1,0) \in P_3$$ (three equilateral triangles with edge length $2 p = 1 + \sqrt{5}$ ). We have a system of linear equations $$\lambda_1 (-1, p, -p-1) + \mu_1 (p, p+1,-1) =$$ $$=\lambda_2 (1,p,-p-1) + \mu_2 (-p,p+1,-1) =$$ $$=(0,2p, 0) + \lambda_3 (-p,-p-1,-1) + \mu_3 (p, -p-1, -1).$$ CAS computation says $\lambda_3 = \mu_3 = \dfrac{9+4\sqrt{5}}{20+9\sqrt{5}}$ and therefore $B = (0,p,1) - 2\frac{8p+5}{18p+11}(0,p+1,1) = \frac{1}{18p+11}(0,-13p-8,2p+1)$ . Now we just find the tetrahedron volume by the |determinant|/6 formula. The red-blue intersection vertex is at $(p-2, -1, 0)$ (from another $3 \times 2$ system of intersecting lines), the gr
|geometry|polyhedra|
0
How many times is the light bulb turned on?
A light bulb is automatically turned off for 21 seconds. Next, it lights up automatically for 15 seconds. Once again, it is automatically turned off for the following 21 seconds. The process keeps repeating. After 200 full minutes, how many times has the bulb lit up after turning off? My approach was the following: 21 + 15 = 36 seconds; 200 minutes × 60 = 12000 seconds; 12000/36 ≈ 333, so the bulb lit up around 333 times, but this isn't the correct answer.
Your answer will be right if we assume that at the beginning ($t=0$ seconds) the bulb turns from on to off. But if at $t=0$ the bulb turns from off to on, the answer is $333+1$: since $12000/36=333.33\ldots$, the last incomplete cycle also contains one light-up.
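The arithmetic in compact form (my sketch):

```python
cycle = 21 + 15          # one off-then-on cycle, in seconds
total = 200 * 60         # 200 minutes

full, leftover = divmod(total, cycle)
assert (full, leftover) == (333, 12)

# Starting in the "off" phase, the 12 leftover seconds fall inside the next
# 21-second off interval, so the bulb lights up exactly `full` = 333 times.
# Starting in the "on" phase instead, the leftover adds one more light-up.
assert full == 333
assert full + 1 == 334
```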
|algebra-precalculus|discrete-mathematics|word-problem|
0
How to calculate the sum of an infinite series, such as $\frac{n(n-1)}{2^n}$?
This problem appears in Girls in Math at Yale, a high school competition which I am prepping for: "Marie repeatedly flips a fair coin and stops after she gets tails for the second time. What is the expected number of times Marie flips the coin?" I deduced that for there to be $n$ flips, we have to add $\dfrac{n(n-1)}{2^n}$ to get the expected value. I have also encountered a few similar problems, like dice problems, where I have to calculate things like $\dfrac{1\cdot1}{6} + \dfrac{2\cdot1}{36} + \dfrac{3\cdot1}{216} +\cdots$ So how should I approach this?
Basic approach. Summations of this kind can often be evaluated by differentiating a geometric sum, for which you may already know a formula. For example, let $$ S(x) = \sum_{k=0}^\infty x^k = \frac{1}{1-x} $$ If we differentiate this with respect to $x$ , we get $$ S'(x) = \sum_{k=0}^\infty kx^{k-1} = \frac{d}{dx} \frac{1}{1-x} = \frac{1}{(1-x)^2} $$ which might be challenging to obtain otherwise. We can then multiply both sides by $x$ to get $$ xS'(x) = \sum_{k=0}^\infty kx^k = \frac{x}{(1-x)^2} $$ Differentiating $S'(x)$ to get $S''(x)$ , multiplying by whatever factor of $x$ you need, and then evaluating at $x = 1/2$ or $1/6$ or whatever should get you to where you need to go.
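A quick numerical sanity check of this technique (my addition, not part of the derivation above): the sum in question is $\sum_{n\ge 0} n(n-1)x^n = x^2 S''(x)$, evaluated at $x = 1/2$.

```python
x = 0.5
partial = sum(n * (n - 1) * x**n for n in range(200))  # partial sum; converges very fast
closed = 2 * x**2 / (1 - x)**3                         # x^2 * S''(x), with S''(x) = 2/(1-x)^3
assert abs(partial - closed) < 1e-9
print(closed)  # 4.0
```

which agrees with the expected number of flips being $4$.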
|probability|combinatorics|
0
Volume of a great icosahedron
This is the image of a Great Icosahedron that I obtained starting from the coordinates of the vertices, labelled $A$ , $B$ , $C$ , etc. Now I want to calculate the volume of the solid. On the internet (as on this page of Wolfram ) I can find a formula for this volume, $$ V=\frac{l^3}{4}(25+9\sqrt{5}) $$ but I can't understand how it is obtained. Can someone explain where it comes from?
Let us rescale the great icosahedron so that its true vertices have as coordinates cyclic permutations of $(\pm1,0,\pm\varphi)$ where $\varphi$ is the golden ratio $\frac{\sqrt5+1}2$ , so that its true edge length is $2\varphi$ . Then we have the following decomposition: The orange, red and blue vertices in the positive (larger) tetrahedron have coordinates $A=(1,0,\varphi)$ , $B=(0,2-\varphi,1)$ and $C=(0,\varphi-2,1)$ respectively; the fourth vertex is the origin. The negative (smaller) tetrahedron shares the first three vertices and its fourth vertex $D$ – black in the above images – lies at the intersection of the two lines between red balls and blue balls in the second image: $$(1-t)(0,2-\varphi,1)+t(2-\varphi,-1,0)=(1-u)(0,\varphi-2,1)+u(2-\varphi,1,0)\implies t=u=\frac{\varphi-2}{\varphi-3}$$ $$D=\left(\frac{7-4\varphi}5,0,\frac{2+\varphi}5\right)$$ Then the two tetrahedron volumes may be computed as sixths of determinants; they are $\frac{4-2\varphi}6$ for the positive tetrahed
|geometry|polyhedra|
0
How to calculate the sum of an infinite series, such as $\frac{n(n-1)}{2^n}$?
This problem appears in Girls in Math at Yale, a high school competition which I am prepping for: "Marie repeatedly flips a fair coin and stops after she gets tails for the second time. What is the expected number of times Marie flips the coin?" I deduced that for there to be $n$ flips, we have to add $\dfrac{n(n-1)}{2^n}$ to get the answer. I have also encountered a few similar problems, like dice problems, where I have to calculate things like $\dfrac{1\cdot1}{6} + \dfrac{2\cdot1}{36} + \dfrac{3\cdot1}{216} + \cdots$ So how should I approach this?
It’s overkill for this particular problem, but I give a fully general (finite) formula for evaluating such sums with a worked example in my answer here . In the case at hand, $$2\sum_{\text{n}\ =\ 0,\ 1,\ \dots}\binom{\text{n}}{\textbf{2}}\left(\tfrac{1}{2}\right)^{\text{n}}\ =\ 2\cdot\frac{\left(\tfrac{1}{2}\right)^{\textbf{2}}}{\left(1-\tfrac{1}{2}\right)^{\textbf{2}+1}}\ =\ 4\text{.}$$
|probability|combinatorics|
0
How to calculate the sum of an infinite series, such as $\frac{n(n-1)}{2^n}$?
This problem appears in Girls in Math at Yale, a high school competition which I am prepping for: "Marie repeatedly flips a fair coin and stops after she gets tails for the second time. What is the expected number of times Marie flips the coin?" I deduced that for there to be $n$ flips, we have to add $\dfrac{n(n-1)}{2^n}$ to get the answer. I have also encountered a few similar problems, like dice problems, where I have to calculate things like $\dfrac{1\cdot1}{6} + \dfrac{2\cdot1}{36} + \dfrac{3\cdot1}{216} + \cdots$ So how should I approach this?
$$\begin{aligned} S=\sum_{n=0}^\infty(n^2-n)2^{-n}&=\left[\sum_{n=0}^\infty(n^2-n)x^{-n}\right]_{x=2}\\ &=\left[\sum_{n=0}^\infty n^2x^{-n}-\sum_{n=0}^\infty nx^{-n}\right]_{x=2}\\ &=\left[x\frac{\text d}{\text dx}\left(x\frac{\text d}{\text dx}\sum_{n=0}^\infty x^{-n}\right)+x\frac{\text d}{\text dx}\sum_{n=0}^\infty x^{-n}\right]_{x=2}=[S_1(x)+S_2(x)]_{x=2} \end{aligned} $$ First we calculate the $2^{\text{nd}}$ term and then the $1^{\text{st}}$ one with the previous: $$ \begin{aligned} &S_2(x)=x\frac{\text d}{\text dx}\sum_{n=0}^\infty x^{-n}=x\frac{\text d}{\text dx}\left(\frac{x}{x-1}\right)=-\frac{x}{(x-1)^2}\\ &S_1(x)=x\frac{\text d}{\text dx}\left(-\frac{x}{(x-1)^2}\right)=\frac{x(x+1)}{(x-1)^3} \end{aligned} $$ Finally, we have $$S=\left[\frac{x(x+1)}{(x-1)^3}-\frac{x}{(x-1)^2}\right]_{x=2}=4$$ The cool thing about this is that now you can compute the general series for $\forall x$ as long as the summation converges ( $|x|>1$ ).
|probability|combinatorics|
0
Use Riemann sums to prove $\int_{1}^{b} \frac{1}{\sqrt{x}}dx = 2(\sqrt{b}-1)$ using equal subintervals
This post refers to Question 2 of the review problems at the end of Chapter 6 of George Simmons's Calculus: Following the general form $$\int_{a}^{b} f(x)dx = \lim \limits_{\max \Delta x_k\to0} \sum_{k=1}^n \Delta x_k f(x^*)$$ use Riemann sums to prove $\int_{1}^{b} \frac{1}{\sqrt{x}}dx = 2(\sqrt{b}-1)$ using equal subintervals and $x^*=\Big(\frac{\sqrt{x_{k-1}}+\sqrt{x_k}}{2}\Big)^2$ . We have $\Delta x=\frac{b-1}{n}, x_k=1+\frac{b-1}{n}k$ , and as $n \to \infty$ we sum $$\sum_{k=1}^n \frac{(b-1)}{n} \frac{1}{\sqrt{x^*}} \;=\; \sum_{k=1}^n \frac{(b-1)}{n} \frac{1}{\frac{\sqrt{x_{k-1}}+\sqrt{x_k}}{2}} \;=\; \frac{2(b-1)}{n} \sum_{k=1}^n \frac{1}{\sqrt{x_{k-1}}+\sqrt{x_k}}$$ With substitution we have $$\frac{2(b-1)}{n} \sum_{k=1}^n \frac{1}{\sqrt{{1+\frac{b-1}{n}(k-1)}}+\sqrt{1+\frac{b-1}{n}k}} \;=\; \frac{2(b-1)}{\sqrt{n}} \sum_{k=1}^n \frac{1}{\sqrt{{n+(b-1)(k-1)}}+\sqrt{n+(b-1)k}}$$ I have not been able to convert this expression into a telescoping series or anything that simplifies nicely.
$\require{color}$ Note $${\color{red}x_{k-1}-x_k} =\left(1+\frac{b-1}{n}(k-1)\right) -\left(1+\frac{b-1}{n}k\right) = -\frac{b-1}{n},$$ so $$\frac1{\sqrt{x_{k-1}} + \sqrt{x_k}} = \frac{\sqrt{x_{k-1}} - \sqrt{x_k}}{\color{red}x_{k-1}-x_k} = \frac{n}{b-1}\cdot\left(\sqrt{x_{k}} - \sqrt{x_{k-1}}\right).$$
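For the record, here is my own completion of the hint, in the same notation; with the identity above, the sum telescopes:

```latex
\frac{2(b-1)}{n} \sum_{k=1}^n \frac{n}{b-1}\left(\sqrt{x_k} - \sqrt{x_{k-1}}\right)
  = 2\left(\sqrt{x_n} - \sqrt{x_0}\right)
  = 2\left(\sqrt{b} - 1\right),
```

since $x_0 = 1$ and $x_n = b$. The value is independent of $n$, so the limit of the Riemann sums is $2(\sqrt b - 1)$.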
|calculus|sequences-and-series|riemann-sum|
1
geometry problem where the solution involves the use of some properties of complex numbers in geometry
The statement of the problem: The triangle $ABC$ is inscribed in circle $\mathcal C$ with center $O$ and radius 1. For any point $M$ on circle $\mathcal C \setminus \{A, B, C\}$ we write $s(M)=OH_1^2+OH_2^2+OH_3^2$ , where $H_1, H_2, H_3$ are the orthocenters of the triangles $MAB$ , $MBC$ and $MCA$ respectively. a) Show that, if $ABC$ is an equilateral triangle, then $s(M) = 6$ for any point $M$ on circle $\mathcal C \setminus \{A, B, C\}$. b) Determine the smallest natural number $k$ with the property that, if there are distinct points $M_1,M_2,\dots,M_k$ on circle $\mathcal C \setminus \{A, B, C\}$ such that $s(M_1)=s(M_2)=\dots=s(M_k)=6$, then $ABC$ is an equilateral triangle. My approach: For point a) I used the point affixes and Sylvester's relation for the orthocenter. I prov
Let me first place this issue in a classical context. Let $t:=a+b+c \tag{1}$ Alignment relationships $2O+H_k=3G_k$ give: $$\begin{cases}h_1&=&a+b+m\\ h_2&=&b+c+m\\ h_3&=&c+a+m\end{cases}$$ $$s(m)=|h_1|^2+|h_2|^2+|h_3|^2$$ $$s(m)=|m-(-a-b)|^2+|m-(-b-c)|^2+|m-(-c-a)|^2\tag{2}$$ The locus of points $m$ such that $s(m)=6$ enters into a classical configuration which is as follows: Let $A_1,A_2,\cdots A_n$ be a set of points in the plane with centroid $G$ . The set of points $M$ such that $\varphi(M):=\sum_{p=1}^n (MA_p)^2 = k$ (k being a constant) is (according to the sign of $k-\varphi(G)$ ) either the circle with center $G$ and radius $r=\sqrt{\frac{1}{n}(k-\varphi(G))}$ or the empty set. (see my answer here ) We are now able to answer the two questions. Question a) In our case, referring to (2), the points are $-a-b,-a-c,-b-c$ ; their centroid is $$g=\frac13(-2a-2b-2c)=-\frac23 t$$ and the constant is $k=6$ . $$s(g)=|-\tfrac23 t+a+b|^2+|-\tfrac23 t+b+c|^2+|-\tfrac23 t + c+a|^2=\t
|geometry|complex-numbers|
0
Rigorous proof that $1/\sum_ir_i^{-1}\le\min r_i$
I am taking an introductory physics course where the equivalent resistance of resistors in parallel is: $$\bigg ( \sum_{i=0}^n \frac 1{r_i} \bigg ) ^{-1} = R_{eq}$$ My book says that $R_{eq}$ will always be less than or equal to the least of the other $r's$ , (assuming all resistances are positive and nonzero, of course) and I can reason out how this would work, but I have not seen a rigorous proof of this fact and I think seeing one would be helpful. Thanks in advance.
HINT: Assume $n \ge 2$ first, as this is not interesting for $n=1$ . Then: $$\Big(\sum_{i=1}^n \frac{1}{r_i}\Big) > \max \Big\{\frac{1}{r_1},\frac{1}{r_2}, \ldots, \frac{1}{r_n} \Big\} = \frac{1}{\min\{r_1,\ldots, r_n\}}.$$ [Indeed, the first " $>$ " follows from noting that the sum of $n$ positive numbers is greater than the max of these $n$ numbers, and the " $=$ " from noting that $\frac{1}{x}$ is a decreasing function in $x$ for $x$ a positive real number.] Thus, raising both sides of the above to the $(-1)$ -th power [and using the observations that $a>b$, $a,b > 0 \implies a^{-1} < b^{-1}$, and $\Big(\frac{1}{y}\Big)^{-1} = y$ ] yields: $$ \Big(\sum_{i=1}^n \frac{1}{r_i}\Big)^{-1} < \min\{r_1,\ldots, r_n\}.$$
|physics|means|
1
How to construct a k-regular graph?
I have a hard time finding a way to construct a $k$-regular graph out of $n$ vertices. There seems to be a lot of theoretical material on regular graphs on the internet, but I can't seem to extract construction rules for regular graphs. My preconditions are $k < n$. Is an adjacency matrix the way to go here? If so, how would I use it? Is this even a mathematical problem?
You can also use Cayley graphs. Take a group $G$ of order $n$ and a subset $S$ of order $k$ such that $S$ does not contain the identity and such that $s \in S$ if and only if $s^{-1} \in S$ . Then the Cayley graph $\operatorname{Cay}(G,S)$ is the graph with vertices $G$ and $(g,h)$ is an edge if and only if $g=hs$ for some $s \in S$ . This will always give a $k$ -regular graph of order $n$ . It is connected if and only if $S$ generates $G$ . Another way is to look at a graphical sequence and reduce it. Then you can trace back the reductions and construct such a graph. For example look at the sequence $3,3,3,3$ , which means that there are $4$ vertices and each of them has degree $3$ . One can reduce such a sequence by deleting the leading number, say $l$ , and subtracting $1$ from the first $l$ of the remaining numbers. In our example we would delete $3$ and get $2,2,2$ . After such operations one needs to sort the sequence in descending order. Now you can continue doing this until you ha
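Here is a small Python sketch of the Cayley-graph construction for the cyclic group $\mathbb Z_n$ (a circulant graph; the function name and interface are my own):

```python
def cayley_circulant(n, shifts):
    """Cayley graph of Z_n with connection set S = {+-s : s in shifts}; it is |S|-regular."""
    S = {s % n for s in shifts} | {(-s) % n for s in shifts}
    assert 0 not in S, "the connection set must not contain the identity"
    # adjacency lists: vertex v is joined to v + s (mod n) for each s in S
    return {v: sorted((v + s) % n for s in S) for v in range(n)}

g = cayley_circulant(8, [1, 2])   # S = {1, 2, 6, 7}, so a 4-regular graph on 8 vertices
assert all(len(nbrs) == 4 for nbrs in g.values())
```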
|graph-theory|
0
Volume of a great icosahedron
This is the image of a Great Icosahedron that I obtained starting from the coordinates of the vertices, labelled $A$ , $B$ , $C$ , etc. Now I want to calculate the volume of the solid. On the internet (as on this page of Wolfram ) I can find a formula for this volume, $$ V=\frac{l^3}{4}(25+9\sqrt{5}) $$ but I can't understand how it is obtained. Can someone explain where it comes from?
First of all, @ChrisLewis is totally right. The volume of a great icosahedron can be described as the volume of a small stellated dodecahedron with $5\cdot 12=60$ congruent irregular tetrahedra carved out from it. For further reference call the side length of the regular dodecahedron $e$ . The problem can thus be divided into three subquestions: The volume of a regular dodecahedron with edge length $e$ , which is $$\frac{e^3}{4}(15+7\sqrt{5}).$$ The volume of a pentagonal pyramid with height $e\sqrt{\frac{1}{5} (5+2\sqrt{5})}$ , this height is due to your Wolfram Link . This volume is due to this page and equals $$\frac{1}{12} e^3\sqrt{25 + 10\sqrt{5}} \sqrt{\frac{1}{5} (5+2\sqrt{5})}$$ Now, for the more difficult part we require the volume of one of the 60 irregular tetrahedra. A plot of this is at the bottom of this post. We define the regular pentagon $ABCDE$ with coordinates $A=\left(0, \sqrt{\frac{5+\sqrt{5}}{10}}e, 0\right)$ , $B=\left(\frac{1+\sqrt{5}}{4}e, \sqrt{\frac{5-\sqrt{5}}{40}}e, 0
|geometry|polyhedra|
0
What rule is used in this derivation of the interarrival time for the Poisson process?
I'm working on calculating the probability distribution of the interarrival time of the Poisson process. The method used in my textbook is very strange and I don't understand how the probabilities are calculated. I am familiar with the method used e.g. here, but not the working below. $ N(t) $ is the Poisson process: $$ P\{N(s) = 1, N(t) = 2\} = P\{\xi_1 \leq s < \xi_2,\ \xi_2 \leq t < \xi_3\} = P\{\eta_1 \leq s < \eta_1 + \eta_2,\ \eta_1 + \eta_2 \leq t < \eta_1 + \eta_2 + \eta_3\} $$ At this point we have converted the Poisson process into 3 independent, exponentially distributed random variables. But now we have the first conversion of the probability to integrals and I don't understand how it was done: $$ = \int_{0}^{s} P\{s < u + \eta_2,\ u + \eta_2 \leq t < u + \eta_2 + \eta_3\}\, \lambda e^{-\lambda u}\, du $$ What rule of probability was used? The first inequality has now become the integral and the random variable $ \eta_1 $ has become $u$ in the probability. Is this a well known rule of probability (and what is it called if so?) or is this somehow a specific property of this problem? And likewise for the next two steps: $$ = \int_{0}^{s} \left( \int_{s-u}^{t-u} P\{t < u + v + \eta_3\}\, \lambda e^{-\lambda v}\, dv \right) \lambda e^{-\lambda u}\, du $$
Remember that for a continuous random variable $X$ with probability density function $f_X(x)$ , the cumulative distribution function $F_X(x)$ is given by $F_X(x) = P(X \leq x) = \int_{-\infty}^x f_X(\chi) d\chi$ . In the given expression, we have applied that definition to the random variable $\eta_1$ along with the usual decomposition for independent events, $P(A \cap B) = P(A) P(B)$ .
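Spelled out (my paraphrase of the rule being used), this is the law of total probability, conditioning on $\eta_1$ and using its independence from $(\eta_2, \eta_3)$:

```latex
P\{\eta_1 \le s,\ B(\eta_1, \eta_2, \eta_3)\}
  = \int_0^s P\{B(u, \eta_2, \eta_3)\}\, f_{\eta_1}(u)\, du ,
```

where $B$ stands for the remaining event involving $\eta_2$ and $\eta_3$; inside the integral, $\eta_1$ has been replaced by the dummy variable $u$.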
|stochastic-processes|poisson-distribution|poisson-process|
0
Can I combine axioms to have less properties to verify independently?
For example, given that a linear function is defined by its satisfying the properties $f(x+y)=f(x)+f(y)$ and $f(ax)=af(x)$ , would it be okay to only check $f(ax+by)=af(x)+bf(y)$ , or maybe even $f\left(\frac ab x+y\right)=\frac abf(x)+f(y)$ and put $c=\frac ab$ ?
I will try to give a more general answer than yes or no. If we are in doubt about such questions, then we can look at whether they imply each other. In this case, they are equivalent. For the concrete example: $$"\Rightarrow":$$ Let a function satisfy $(1): f(x+y) = f(x)+f(y)$ and $(2): f(\alpha x)= \alpha f(x)$ . Then, $f(\alpha x+y) = f(\alpha x) + f(y) = \alpha f(x) +f(y)$ , where we first used (1) and then (2). So this direction is correct. Now for the other one (which is a bit more tricky): $$"\Leftarrow":$$ Let a function satisfy $f(\alpha x + y) = \alpha f(x) + f(y)$ . Then, setting $\alpha=1$ , we have $f(\alpha x + y) = \alpha f(x) +f(y) = f(x) + f(y)$ , our first property. For the second, we will first show that $f(0) = 0$ . This is true because $f(0) = f(0+0)=f(0)+f(0)$ , so $f(0)$ must be zero as $0$ is the only real number satisfying $x=x+x$ . So, setting $y$ to zero we have: $f(\alpha x + 0) = \alpha f(x) + f(0) = \alpha f(x)$ . We have shown now that, since these two definition
|logic|axioms|
1
A lower bound estimate for the Left-invariant distance from $g_t$ to $e$ in the Lie group $\text{SL}(d, \mathbb R)$
Let $G:=\text{SL}(d, \mathbb R)$ , a matrix Lie group, and let $(g_t)_{t \in \mathbb R}$ be a one-parameter, diagonalizable, unbounded subgroup of the Lie group $\text{SL}(d, \mathbb R)$ . Let $d$ be a left-invariant metric on $G$ . The general construction is shown in this thread, and please note that this metric is not bi-invariant . I learnt from a paper the following fact: there exists $\alpha>0$ such that $$d(g_t,e)> \alpha t, \forall t \in \mathbb R.$$ How do we prove this? Or, are there any good references? One possible approach might be connecting this metric to the hyperbolic metric on $\mathbb H^d$ . But I completely don't understand why these two metrics are "equivalent". For those who want to use exponential maps, please note that the Riemannian exponential map and the Lie exponential map are not the same unless the metric is bi-invariant. See here . Background information: The question comes from the bottom of the https://arxiv.org/pdf/1709.04082.pdf#page=11# , where $\|g_t\
If $g$ is a left-invariant Riemannian metric on a Lie group $G$ , any one-parameter subgroup is parameterized proportional to arclength with respect to $g$ . This follows immediately from the fact that by left-invariance of $g$ , left translation by any element $x \in G$ is a Riemannian isometry, and the definition of the Lie group exponential map. Consequently $d(Id_G, g_t) = t \lVert X \rVert$ where $X$ is the infinitesimal generator of $g_t$ . In particular, this gives the lower bound you're looking for. Note that no hypothesis on diagonalizability of $g_t$ is required, and that this result is not the same as asserting that any one-parameter subgroup is a geodesic for the left-invariant metric (which need not be true, if $G$ is not compact)--it is a weaker statement.
|abstract-algebra|riemannian-geometry|lie-groups|smooth-manifolds|
0
Clarification Needed on Selecting the Farthest Right Point 'c' in Lemma 3.5 of Chapter 3 from Stein's Real Analysis
I'm reading chapter 3 of Stein's Real Analysis and have come across Lemma 3.5, which states: Lemma 3.5 Suppose $G$ is real-valued and continuous on $\mathbb R$ . Let $E$ be the set of points $x$ such that $$ G(x+h) > G(x) \quad \text{ for some } h = h_x > 0. $$ If $E$ is non-empty, then it must be open, and hence can be written as a countable disjoint union of open intervals $E = \bigcup \left( a_k, b_k \right)$ . If $\left( a_k, b_k \right)$ is a finite interval in this union, then $$ G(b_k) - G(a_k) = 0. $$ While reading the proof of the lemma, I'm having difficulty with an argument, and I'm stuck at that part. In the proof, the author claims that if $G(b_k) < G(a_k)$ , then due to continuity, there exists a point $c$ in the interval $(a_k, b_k)$ where $G(c) = \frac{G(a_k) + G(b_k)}{2}$ . Furthermore, the author says that this $c$ can be chosen to be the farthest to the right in the interval. However, I'm concerned that the set of such $c$ 's could resemble $\left\{ b_k - 1/n \mid n \in \mathbb{N} \right\}$.
Visualizing will help with this proof. Think about the conditions of the set $E$ . If you visualize it, then you will understand it's the set of points $x$ on the real line where there is (at least) one point to the right, $x + h$ , such that $G(x + h)$ is higher than $G(x)$ . Now, think about what is NOT in $E$ . If $y \not\in E$ , then for every point to the right of $y$ , $y + h$ , we must have $G(y + h) \leq G(y)$ . The way I'm visualizing this is if $y \not\in E$ , then if $G$ was my altitude and I was walking (from right to left), $y$ is a point at which $G(y)$ is the highest altitude I have reached so far. By definition, $a_k < b_k$ ; we know $a_k, b_k \not\in E$ , and $G(b_k) \le G(a_k)$ (all laid out in the proof). Further we know that $G$ is monotone decreasing in the interval $(a_k, b_k)$ , since otherwise it would contain points that are not in $E$ . The core continuity idea is that the function $G$ always has to tend back toward $G(b_k)$ eventually, which means it has to tend away fr
|real-analysis|continuity|
0
Hardy's Inequality: Problems $3.14$ and $3.15$ in Rudin's RCA
In Problem $3.14$ , we prove (a) Hardy's inequality, (b) the condition for equality, and I shall talk about (c), (d) below. Problem $3.15$ is the discrete case of Hardy's inequality. I have asked three related questions in a single post itself, since all of them are related to Hardy's inequality , and none should be too involved. There are some existing posts on MSE related to these topics, so I shall link them right away and point out that my question is not a duplicate: Post 1 , Post 2 , Post 3 , Post 4 . For the sake of mentioning it, Hardy's inequality is: For $p\in (1,\infty)$ , $f\in L^p((0,\infty))$ relative to the Lebesgue measure, and $$F(x) = \frac{1}{x}\int_0^x f(t)\ dt\quad (0 < x < \infty),$$ we have $$\|F\|_p \le \frac{p}{p-1} \|f\|_p$$ Question 1: This is Problem $3.14(c)$ in Rudin's book. Prove that the constant $p/(p-1)$ cannot be replaced by a smaller one. In one of the linked posts, there is some discussion on how this is the best constant, but I was unable to follow it. My sense is
Here is another approach to question 1, finding simple counterexamples. Given $a>p$ , let's define the function $$f_a(x)=x^{-1/a}\times\chi_{(0,1]}(x)$$ where $\chi_{(0,1]}(x)=1$ if $x\in(0,1]$ and $0$ otherwise. Then $$(\lVert f_a\rVert_p)^p=\int_0^1x^{-p/a}dx=\dfrac{a}{a-p} $$ Let's compute $F_a$ . If $x\le 1$ : $$\begin{equation}\begin{aligned} F_a(x)&=\dfrac{1}{x}\int_0^xt^{-1/a}dt\\\\ &=\dfrac{a}{a-1}x^{-1/a} \end{aligned}\end{equation}$$ If $x>1$ : $$\begin{equation}\begin{aligned} F_a(x)&=\dfrac{1}{x}\int_0^1t^{-1/a}dt\\\\ &=\dfrac{a}{a-1}x^{-1} \end{aligned}\end{equation}$$ Thus $$\begin{equation}\begin{aligned} (\lVert F_a\rVert_p)^p&=\left(\dfrac{a}{a-1}\right)^p \left(\int_0^1x^{-p/a}dx+\int_1^{+\infty}x^{-p}dx\right)\\\\ &=\left(\dfrac{a}{a-1}\right)^p\left(\dfrac{a}{a-p}+\dfrac{1}{p-1}\right)\\\\ &=\left(\dfrac{a}{a-1}\right)^p\left(\dfrac{p(a-1)}{(a-p)(p-1)}\right) \end{aligned}\end{equation}$$ And now we can compute $$\begin{equation}\begin{aligned} \left(\dfrac{\lVert
|real-analysis|functional-analysis|inequality|lp-spaces|
0
the intersection of two spaces
Though it may be a trivial question, I have some trouble finding the intersection of two spaces, for example $$ W = \mathrm{Sp}\{(1,3,4),(2,5,1)\} \\ U = \mathrm{Sp}\{(1,1,2),(2,2,1) \}$$ Find a spanning set for $U \cap W$ . I understand that this has to be true, but I am not sure how to get the general solution after I finished simplifying the equation $\alpha(1,1,2) + \beta(2,2,1) = \gamma(1,3,4) + \delta(2,5,1)$ , so I reached $$\left(\begin{array}{cc|rr} 1 & 0 & -3 & -6\\ 0 & 1 & 2 & 4\\ 0 & 0 & 2 & 21 \end{array}\right)$$
By the way, " $\mathbf{span}\ U\cap W$ " is a little pleonastic , as $U\cap W$ is necessarily a subspace, and hence equal to its own span. What you did, trying to row-reduce a "double augmented" matrix where both sides of the augmentation correspond to variables, is bound to be both confusing and end up looking like nonsense... which is exactly what happened. You are trying to find the vectors that lie in the intersection. Fair enough. You are attempting to do this by finding all values of $\alpha$ , $\beta$ , $\gamma$ , and $\delta$ ( four variables) such that $$\alpha(1,3,4) + \beta(2,5,1) = \gamma(1,1,2) + \delta(2,2,1).\tag{1}$$ Again, fair enough. We can push this through, if you keep your head about you. We can move everything to the left hand side, and write this as $$\alpha(1,3,4) + \beta(2,5,1) - \gamma(1,1,2) - \delta(2,2,1) = (0,0,0),$$ which translates to $$(\alpha+2\beta-\gamma-2\delta, 3\alpha+5\beta-\gamma-2\delta, 4\alpha+\beta-2\gamma-\delta) = (0,0,0).$$ This in turn
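Pushing the elimination through (my numbers, worth re-deriving) gives a one-dimensional intersection, $U \cap W = \operatorname{Sp}\{(1,1,-10)\}$, which a few lines of Python can confirm:

```python
w1, w2 = (1, 3, 4), (2, 5, 1)     # spanning set of W
u1, u2 = (1, 1, 2), (2, 2, 1)     # spanning set of U
# Solving the homogeneous system gives, up to scale, coefficients (-3, 2) and (-7, 4):
lhs = tuple(-3 * a + 2 * b for a, b in zip(w1, w2))   # -3*w1 + 2*w2, a vector in W
rhs = tuple(-7 * a + 4 * b for a, b in zip(u1, u2))   # -7*u1 + 4*u2, a vector in U
assert lhs == rhs == (1, 1, -10)  # the same vector, so it spans the intersection
```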
|linear-algebra|matrices|
1
Struggling with proving an elementary atomic equation. Please critique my proof.
I am learning how to prove and disprove atomic equations. I have put together a proof where I know how I want to prove the theorem but can't quite get it to flow correctly. Please critique my syntax, flow, or anything else as I am learning how to prove mathematical statements: \begin{align*} & \hspace{5mm} \textbf{Assume: }x,y \text{ are arbitrary, non-zero real numbers} && \\ & \hspace{5mm} \textbf{Goal: } \text{Prove: }\exists y : ( \forall x \not = 0 : xy = 1) \text{ is False.} && \\ & \hspace{10mm} \textbf{Expanded Goal: } \forall k \in \mathbb{R}_{\not = 0} : (x+k)y = 1 \text{ is a contradiction}&& \\ & \hspace{15mm} \textbf{Scratch Work} \\ & \hspace{20mm} (x+k)y = 1 \\ & \hspace{20mm} xy + ky = 1 \\ & \hspace{20mm} \text{Recall: }xy = 1 \\ & \hspace{20mm} \therefore 1 + ky = 1 \\ & \hspace{20mm} \therefore ky = 0 \\ & \hspace{20mm} \because y = \frac{1}{x} \\ & \hspace{20mm} \therefore \frac{k}{x} = 0 \\ & \hspace{20mm} \because x \not = 0 \\ & \hspace{20mm} \therefore k = 0 \\
I think there may be more natural ways to prove that " $\exists y \in \mathbb{R}_{\neq 0}: (\forall x \in \mathbb{R}_{\neq 0}, xy = 1)$ " is False. This could be contributing to why the proofs you presented feel less succinct or satisfying to you. Also, depending on the level of formality expected for the proof, your writing style could be overly formal (seeing "atomic equations" in your question makes me feel this is not the case though). I will present two options, the first structured as a proof by contradiction that the theorem is False and the second as a direct proof of the negation of the theorem. I will write them in a less formal style, although they could always be made more formal. Striving for a contradiction, assume that " $\exists y \in \mathbb{R}_{\neq 0}: (\forall x \in \mathbb{R}_{\neq 0}, xy = 1)$ " is True. This should be understood as saying that there is some $y \in \mathbb{R}_{\neq 0}$ (that we do not otherwise have any information about) with the property that fo
|solution-verification|proof-writing|
1
Rigorous proof that $1/\sum_ir_i^{-1}\le\min r_i$
I am taking an introductory physics course where the equivalent resistance of resistors in parallel is: $$\bigg ( \sum_{i=0}^n \frac 1{r_i} \bigg ) ^{-1} = R_{eq}$$ My book says that $R_{eq}$ will always be less than or equal to the least of the other $r's$ , (assuming all resistances are positive and nonzero, of course) and I can reason out how this would work, but I have not seen a rigorous proof of this fact and I think seeing one would be helpful. Thanks in advance.
If the minimum is $r_m$ , your inequality $$ \left(\sum_{i=1}^n \frac{1}{r_i} \right)^{-1} \le r_m $$ is equivalent to $$ 1 \le r_m \left(\sum_{i=1}^n \frac{1}{r_i} \right) $$ which is certainly true since one of the terms on the right is $r_m/r_m = 1$ and the others are positive. Moreover the inequality is strict if $n > 1$ .
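A quick numerical illustration (resistor values picked arbitrarily):

```python
rs = [4.0, 6.0, 12.0]
r_eq = 1 / sum(1 / r for r in rs)   # 1/(1/4 + 1/6 + 1/12) = 2, the parallel resistance
assert r_eq < min(rs)               # strictly below the smallest resistance, since n > 1
print(r_eq)
```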
|physics|means|
0
How can mathematical logic try to model math, when mathematics are used to define mathematical logic?
So far I've done a few courses in logic and formal verification, and I've always wondered: mathematical logic, at least as Hilbert envisioned it, tries to model mathematics. Formally define what a "true" statement is, or why proving something (at least in a sound system) makes it true. But every logic course uses mathematics in its essence. The use of sets, functions and series (as with inductive groups, for example) is already mathematics, in a field that tries to model exactly that. Isn't that circular logic?
It is a common misconception that many logic students run into. Let us start with an example: every microprocessor we use has its own design, and we can implement it in different ways: either physically with silicon wafers, virtually with emulators, or even over Minecraft . Nobody says the design of a microprocessor is circular or contradictory. However, people become confused if a "microprocessor" becomes a "formal theory." Formal theories, like first-order logic, set theories, or type theories, can be viewed as sets of axioms and formal rules. We can implement them as computer programs (like Coq or Lean, which implement a type theory) or over another theory (as a formal theory, an interpretation, or a model). Mathematical logic handles formal theories as objects implemented over a metatheory, and we can ask what can hold for 'implemented' formal theories. This line of understanding collides with a common belief that mathematical logic is a field trying to build up mathematics from the ground up.
|logic|first-order-logic|
0
Can I combine axioms to have less properties to verify independently?
For example, given that a linear function is defined by its satisfying the properties $f(x+y)=f(x)+f(y)$ and $f(ax)=af(x)$ , would it be okay to only check $f(ax+by)=af(x)+bf(y)$ , or maybe even $f\left(\frac ab x+y\right)=\frac abf(x)+f(y)$ and put $c=\frac ab$ ?
If you have some axioms $A$ and an expression $E$ , there are four possibilities: $A$ and $E$ are logically equivalent, i.e. $A \iff E$ , and so anything that satisfies one is guaranteed to satisfy the other. $E$ is a sufficient condition for $A$ , i.e. $E \implies A$ , so any system where $E$ is true will also satisfy $A$ , but if $E$ is not true you can't make any claim about $A$ . $E$ is a necessary condition for $A$ , i.e. $A \implies E$ , so if $E$ is not satisfied then $A$ can't be true but not every system that satisfies $E$ necessarily satisfies $A$ . $E$ and $A$ are not directly related, in which case you're out of luck. So the question is whether $f(ax + by) = af(x) + bf(y)$ is equivalent to $f(x + y) = f(x) + f(y)$ and $f(ax) = af(x)$ (for all vectors $x, y$ and scalars $a, b$ ). Proving the equivalence in one direction ( $A \implies E$ ) is pretty simple: $$f(ax + by) = f(ax) + f(by) = af(x) + bf(y)$$ So the axioms definitely imply the given expression. Proving it in the ot
|logic|axioms|
0
When the convolution is a Schwartz function?
If $u$ is integrable and $v$ is in $L^p(\mathbb{R})$ ( $1\leq p\leq \infty$ ), then $u*v\in L^p(\mathbb{R})$ (Reference: Adapted Wavelet Analysis by Mladen). If $u$ is a Schwartz function and $v$ is in $L^2(\mathbb{R})$ , is $u*v$ a Schwartz function?
No. In general, we have the following result: $$ L^p(\mathbb{R}^n)\ast \mathcal{S}(\mathbb{R}^n) \subseteq C^\infty(\mathbb{R}^n) \quad \forall p\in[1,+\infty] $$ See exercise 3.18 in Mitrea, D., Distributions, Partial Differential Equations, and Harmonic Analysis. For a counterexample: How to show convolution of an $L^p$ function and a Schwartz function is a Schwartz function
|convolution|
0
Subspaces with common images
Let $X$ and $Y$ be finite dimensional vector spaces over $\mathbb{C}$ , and let $S,T:X\to Y$ be linear transformations. Is there a method for determining all subspaces $V\subseteq X$ such that $S(V)=T(V)$ (as subspaces not necessarily pointwise). I am especially interested in the case where $\text{dim} X \geq \text{dim} Y$ .
Let $\chi=\dim X$ and $\gamma=\dim Y$ . If $SV=TV$ , let $A\in\mathbb C^{\chi\times\chi}$ be a matrix whose columns form a spanning set of $V$ . Then $\operatorname{range}(SA)=SV=TV=\operatorname{range}(TA)$ . Therefore $SA=TAM$ for some matrix $M\in\mathbb C^{\chi\times\chi}$ . The matrix $M$ can be chosen to be invertible: let $r=\dim SV=\dim TV$ . Since $SA$ and $TA$ have the same column spaces, they admit two rank decompositions $SA=L_1R_1$ and $TA=L_2R_2$ so that $L_1,L_2\in\mathbb C^{\gamma\times r}$ have full column ranks as well as identical column spaces, and $R_1,R_2\in\mathbb C^{r\times\chi}$ have full row ranks. It follows that $L_2=L_1Q$ for some invertible matrix $Q\in GL_r(\mathbb C)$ . But then $QR_2$ and $R_1$ are two matrices having the same sizes and full row ranks. Hence there exists an invertible matrix $M\in GL_\chi(\mathbb C)$ such that $R_1=QR_2M$ . In turn, $SA=TAM$ . Conversely, if $SA=TAM$ for some $A\in\mathbb C^{\chi\times\chi}$ and $M\in GL_\chi(\mathbb C)
|linear-algebra|abstract-algebra|algorithms|linear-transformations|
0
How do I treat a derivative at the denominator?
I am sure there is a similar question but I don't know how to phrase it, so I cannot find it. I'm searching for some intuition or direction to material of how to treat an expression like this: $$ \frac{1}{1 + \partial_x}f(x) $$ or more in general (with $L$ some operator) $$ \frac{1}{1 + L}f(x). $$ My first guess would be expanding it with power series, but maybe there are some other methods?
Usually, you just move it to the other side whenever that expression is equal to something else. As an example: $$a(x) = \frac{1}{1+\partial_x} f(x) \qquad \rightarrow \qquad f(x) = \left(1 + \partial_x \right) a(x) = a(x) + \partial_x a(x) \ .$$ If you provide some more details, maybe it's possible to provide better answers.
|calculus|
0
How do I treat a derivative in the denominator?
I am sure there is a similar question but I don't know how to phrase it, so I cannot find it. I'm searching for some intuition, or direction to material on how to treat an expression like this: $$ \frac{1}{1 + \partial_x}f(x) $$ or more in general (with $L$ some operator) $$ \frac{1}{1 + L}f(x). $$ My first guess would be expanding it with power series, but maybe there are some other methods?
how to treat an expression like this: $$ \frac{1}{1 + \partial_x}f(x) $$ or more in general (with $L$ some operator) $$ \frac{1}{1 + L}f(x) $$ In Physics (and other fields) we often call the inverse of a differential operator a "Green's function." Suppose you are interested in your first example: $$ \frac{1}{1+\partial_x}\;, $$ You can seek a function of two variables $G(x,x')$ such that: $$ (1+\partial_x)G(x,x') = \delta(x - x')\;, $$ where $\delta(x - x')$ is a Dirac delta function. As you can see by following the Wikipedia article on Green's functions, this is a well-known technique. But, in short, the function $G(x, x')$ is effectively the inverse you are looking for, since if you determine $G(x,x')$ then you can always solve an equation like: $$ (1+\partial_x)f(x) = h(x)\;, $$ since you can write down a solution immediately as: $$ f(x) = \int dx' G(x,x')h(x')\;. $$
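As a numerical illustration (my own example, not part of the answer above): for $(1+\partial_x)$ the causal kernel $G(x,x')=e^{-(x-x')}$ for $x>x'$ (and $0$ otherwise) satisfies $(1+\partial_x)G=\delta(x-x')$, so $f(x)=\int_0^x e^{-(x-t)}h(t)\,dt$ should solve $f'+f=h$ with $f(0)=0$. A quick check with the hypothetical right-hand side $h(x)=\cos x$:

```python
import math

# Causal Green's function for (1 + d/dx): G(x, x') = exp(-(x - x')) for x > x'.
# Then f(x) = integral_0^x G(x, t) h(t) dt solves f' + f = h with f(0) = 0.
h = math.cos

def f(x, n=20000):
    # trapezoidal rule for integral_0^x exp(-(x - t)) h(t) dt
    dt = x / n
    total = 0.5 * (math.exp(-x) * h(0.0) + h(x))
    for i in range(1, n):
        t = i * dt
        total += math.exp(-(x - t)) * h(t)
    return total * dt

# Exact solution of f' + f = cos(x), f(0) = 0:
exact = lambda x: 0.5 * (math.cos(x) + math.sin(x)) - 0.5 * math.exp(-x)

err = abs(f(1.0) - exact(1.0))
print(err)  # tiny
```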
|calculus|
0
Trace Class Operators On Manifolds With Boundary
Let $X$ be an $n$ -dimensional manifold with nonempty boundary $\partial X$ and $n\geq 2$ . Proposition 4.1 of this paper by Schrohe states that it is "not very difficult" to show: Proposition : A bounded operator on $L^2(X)$ with range in $H^{n+1}(X)$ is trace class. Tragically, I cannot figure out a proof. I've tried generalizing the approach outlined in chapter 8 of Roe's Elliptic Operators, Topology and Asymptotic Methods for the boundaryless case, I've thought about Mercer's Theorem but the setting seems quite different, and looked around in Hörmander's PDEs book III and I don't think it's in there. A hint, proof outline or reference would be greatly appreciated, thank you!
At the beginning of the section where Schrohe states Proposition 4.1 (top of page 14 in your link), Schrohe cites Nest, R. & Schrohe, E., Dixmier’s trace for boundary value problems , Manuscripta Mathematica, 96(2), 203–218 (1998), available at https://doi.org/10.1007/s002290050062 (but paywalled), which contains this (at page 209): Proposition 2.1. A bounded operator on $L^2(X)$ with range in $H^n(X)$ is an element of $\mathcal{L}^{1,\infty}(L^2(X))$ ; if its range even is contained in $H^{n+1}(X)$ then it is trace class. Proof: Let $A$ be bounded on $L^2(X)$ with range in $H^n(X)$ . It is well-known that there is an invertible pseudodifferential operator $R$ of order $−n$ on $M$ such that $R_{+}: L^2(X) \to H^n(X)$ is an isomorphism with inverse equal to $(R^{−1})_{+}$ , for a proof see e.g. [7, Theorem 3.2.14]. We then may write $A = R_{+}(R^{−1})_{+} A$ . The composition $(R^{−1})_{+} A$ yields a bounded operator on $L^2(X)$ , since $(R^{−1})_{+}: H^n(X) \to L^2(X)$ is bounded. On
|functional-analysis|differential-geometry|reference-request|operator-theory|
1
How to calculate the area of a projected path onto a plane?
How do you "orthogonally project" the shape? After I drew multiple graphs, I still had no idea. The correct answer is $12$ . I guess it is $3 \sqrt 2 \cdot 2 \sqrt 2$ according to the gist of the graph that I drew (a plane with two parallel sides through four mid points each, and two shorter sides a little out of shape) The problem: "A bee travels in a series of steps of length $1$ : north, west, north, west, up, south, east, south, east, down. (The bee can move in three dimensions, so north is distinct from up.) There exists a plane $P$ that passes through the midpoints of each step. Suppose we orthogonally project the bee’s path onto the plane $P$ , and let $A$ be the area of the resulting figure. What is $A^2$ ?"
Following is the code I developed to solve this problem. The square of the area output by the program is $A^2 = 12$ .

Public Sub project_steps_onto_plane()
    Dim steps(20, 3) As Double
    Dim points(20, 3) As Double
    Dim midpoints(20, 3) As Double
    Dim u1(3), u2(3), u3(3), u4(4), m1(3) As Double
    Dim pr(3, 3) As Double
    Dim prpoints(20, 3) As Double
    Dim e As Double
    Dim regress(20, 4) As Double
    Dim bvec(20) As Double
    Dim x0(4) As Double
    Dim mt As Integer
    Dim xx(4, 4) As Double
    Dim ierr As Integer
    Dim eqn(4) As Double
    n = 10
    points(1, 1) = 0
    points(1, 2) = 0
    points(1, 3) = 0
    For j = 1 To 3
        ActiveSheet.Cells(1, j + 5) = points(1, j)
    Next j
    For i = 1 To n
        For j = 1 To 3
            steps(i, j) = ActiveSheet.Cells(i, j)
            points(i + 1, j) = points(i, j) + steps(i, j)
            ActiveSheet.Cells(i + 1, j + 5) = points(i + 1, j)
        Next j
    Next i
    For i = 1 To n
        For j = 1 To 3
            midpoints(i, j) = 0.5 * (points(i, j) + points(i + 1, j))
        Next j
    Next i
    ' regression equation for plane passing through the midpoints
    For i = 1 To n
        For j =
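For comparison, the plane and the projected area can also be computed without spreadsheet regression; the following Python sketch (my own, built from the step order quoted in the problem) reproduces $A^2 = 12$. The midpoints all satisfy $x+y+z=\tfrac12$, so the plane normal can be read off directly:

```python
# Bee steps: north, west, north, west, up, south, east, south, east, down,
# with east/north/up = +x/+y/+z.
steps = [(0, 1, 0), (-1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, 0, 1),
         (0, -1, 0), (1, 0, 0), (0, -1, 0), (1, 0, 0), (0, 0, -1)]

pts = [(0.0, 0.0, 0.0)]
for dx, dy, dz in steps:
    x, y, z = pts[-1]
    pts.append((x + dx, y + dy, z + dz))

# All step midpoints satisfy x + y + z = 1/2, so P has normal n = (1, 1, 1).
mids = [tuple((a + b) / 2 for a, b in zip(p, q)) for p, q in zip(pts, pts[1:])]
assert all(abs(sum(m) - 0.5) < 1e-12 for m in mids)

# Signed area of the projection of the closed path onto P:
#   A = (1/2) | n_hat . sum_i p_i x p_{i+1} |
def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

s = [0.0, 0.0, 0.0]
for p, q in zip(pts, pts[1:]):
    s = [a + b for a, b in zip(s, cross(p, q))]

A = 0.5 * abs(sum(s)) / (3 ** 0.5)  # n_hat = (1,1,1)/sqrt(3)
print(A * A)  # -> 12.0 (up to rounding)
```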
|geometry|3d|area|
0
Is $SL(2,\Bbb R)$ generated by $SO(2)$ and a single upper triangular element?
Consider the subgroup $\Gamma$ generated by the element $$ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} $$ and all elements of $\operatorname{SO}(2)$ . Is $\Gamma = \operatorname{SL}(2,\mathbb{R})$ ? Some calculations show that $$ \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \in \Gamma $$ and hence $\operatorname{SL}(2,\mathbb{Z}) \subseteq \Gamma$ .
Theorem. $\Gamma=SL(2, {\mathbb R})$ . Proof. I will be using complex numbers to describe points of the upper half-plane ${\mathbb H}^2=\{z\in {\mathbb C}: Im(z)>0\}$ . Let $d$ denote the hyperbolic distance on ${\mathbb H}^2$ . The group $G=SL(2, {\mathbb R})$ acts isometrically and transitively on ${\mathbb H}^2$ via linear-fractional transformations, so that the stabilizer of the point $i\in {\mathbb H}^2$ is the subgroup $K=SO(2)$ . Thus, in order to show that the subgroup $\Gamma$ equals $G$ it suffices to prove that $\Gamma$ acts transitively on ${\mathbb H}^2$ . A (continuous) curve $c: [0,\infty)\to {\mathbb H}^2$ is called proper if $$ \lim_{t\to\infty} d(c(0), c(t))=\infty. $$ Lemma. Suppose that there exists a proper curve $c: [0,\infty)\to {\mathbb H}^2$ contained in the $\Gamma$ -orbit of $i$ in ${\mathbb H}^2$ . Then the $\Gamma$ -orbit of $i$ equals ${\mathbb H}^2$ , i.e. $\Gamma$ acts transitively on ${\mathbb H}^2$ . Proof. Let $D=d(i, c(0))$ . By the intermediate value
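The asker's claim that the lower-triangular matrix lies in $\Gamma$ can be checked concretely: conjugating $T=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ by the rotation $R(\pi/2)\in SO(2)$ gives $\begin{pmatrix}1&0\\-1&1\end{pmatrix}$, whose inverse is the lower-triangular generator. A quick numerical check (my own, separate from the proof above):

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = math.pi / 2
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]     # rotation in SO(2)
Rinv = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]  # its inverse
T = [[1.0, 1.0], [0.0, 1.0]]

# R T R^{-1} should equal [[1, 0], [-1, 1]]; its inverse [[1, 0], [1, 1]]
# is the lower-triangular matrix from the question, so both lie in Gamma.
C = matmul(matmul(R, T), Rinv)
print(C)
```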
|matrices|group-theory|lie-groups|matrix-decomposition|
1
Markov Chain upper bound on the probability of hitting time
I encountered the following problem. $\{x_t\}$ : Markov chain in discrete time; $\Omega$ : a finite state space s.t. $|\Omega|=n$ ; $\tau_w\equiv\min\{t\ge 0\,|\,x_t=w\}$ , $w\in\Omega$ (first hitting time). Prove: For each $T\ge 1$ and every $x,y\in \Omega$ $$\Pr(\tau_y=T|x_0=x)\le\frac nT.$$ Effort: This is a surprising result as the setup is terribly general. I proceeded by induction. Using recursive characterization, we have $$\Pr(\tau_y=T+1|x_0=x)=\sum_{z\ne y}\Pr(\tau_y=T|x_0=z)p(x,z),$$ where $p(x,z)=\Pr(x_{t+1}=x|x_t=z)$ is the transition probability. For each Markov chain, using the inductive hypothesis, we can easily confirm that this holds for $T$ sufficiently large. So far I cannot see how to show this for a general $T\ge n+1$ . Any hints or comments will be highly appreciated!
I guess we can study the following quantity $$f(T) = \max_{x \in \Omega} \mathbb{P}(\tau_y = T | x_0 = x).$$ This quantity satisfies two inequalities. For any $T \geq 1$ , $f(T) \leq f(T - 1)$ . Proof: It suffices to show, for any $x \in \Omega$ , that $$\mathbb{P}(\tau_y = T | x_0 = x) \leq f(T - 1).$$ If $x = y$ , this is trivial since $\mathbb{P}(\tau_y = T | x_0 = x) = 0$ . Otherwise, we run one step of the chain to obtain $$\mathbb{P}(\tau_y = T | x_0 = x) = \sum_{x' \in \Omega} p(x, x') \mathbb{P}(\tau_y = T - 1 | x_0 = x') \leq \sum_{x' \in \Omega} p(x, x') f(T - 1) = f(T - 1)$$ as desired. We have $$\sum_{T \geq 0} f(T) \leq |\Omega|.$$ Proof: This comes from the observation that $$f(T) \leq \sum_{x \in \Omega} \mathbb{P}(\tau_y = T | x_0 = x).$$ So $$\sum_{T \geq 0} f(T) \leq \sum_{x \in \Omega} \sum_{T \geq 0} \mathbb{P}(\tau_y = T | x_0 = x) = |\Omega|.$$ Combining these two inequalities (monotonicity gives $(T + 1) f(T) \leq \sum_{t = 0}^{T} f(t)$ ), we immediately obtain $$(T + 1) f(T) \leq \sum_{t \geq 0} f(t) \leq |\Omega|$$ as desired
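For what it's worth, the bound is easy to check exactly on a small example (a hypothetical 3-state chain of my own; the probabilities are computed by first-step analysis, not simulation):

```python
# Transition matrix of a hypothetical 3-state chain (rows sum to 1).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
n, y = 3, 2  # target state

# q[T][x] = P(tau_y = T | x_0 = x) via first-step analysis.
q = [[1.0 if x == y else 0.0 for x in range(n)]]
for T in range(1, 200):
    prev = q[-1]
    row = [0.0 if x == y else sum(P[x][xp] * prev[xp] for xp in range(n))
           for x in range(n)]
    q.append(row)

# Check the claimed bound P(tau_y = T | x_0 = x) <= n / T for all T >= 1.
ok = all(q[T][x] <= n / T + 1e-12 for T in range(1, 200) for x in range(n))
print(ok)  # -> True
```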
|probability-theory|stochastic-processes|markov-chains|
0
Show that the sequence $(1+N^2)/[N(1+N)]-5/6$ is positive and increasing
I have the function \begin{equation*} f(N)= \frac{1+N^2}{N(1+N)}-\frac{5}{6}, \end{equation*} where $N\in\mathbb{Z}_{++}\setminus\{1, 2\}$ . I want to show that $f(N)>0$ and that $f(N+1)>f(N)$ for all $N>3$ since $f(3)=0$ . Approach 1: Continuous Extension Define a new continuous function $g$ on $x\in\mathbb{R}_{++}$ , where \begin{equation*} g(x)=\frac{1+x^2}{x(1+x)} \text{ and } g'(x)=\frac{(x-1)^2-2}{x^2(1+x)^2}. \end{equation*} The numerator of $g'(x)$ has a positive root at $x=1+\sqrt{2}$ . Moreover, $\frac{d}{dx}[(x-1)^2-2]=2(x-1)>0$ for $x>1$ . Therefore, $g(x)$ is an increasing function for any $x>1$ and is positive for any $x>1+\sqrt{2}$ . Taken together, $g(x)>0$ for any $x>3$ . However, I'm not sure of how legitimate an approach this is or how to relate these results back to my integer function to write a proper proof. Approach 2: Induction Let $N=4$ then $f(4)=1/60$ . Take $f(N)>0$ to be true, then I need to show that \begin{equation*} f(N+1)=\frac{1+(N+1)^2}{(N+1)[1+(N+1)]}>\frac{
The function $f: \mathbb{N} \rightarrow \mathbb{R}$ defined as $f(N) := \frac{1+N^2}{N(N+1)}$ can have its domain extended to the positive reals; indeed for each $x>0$ define $f(x) := \frac{1+x^2}{x(x+1)}$ . If $f'(x)$ is positive for all $x \ge x_0$ then $f$ is an increasing function on $(x_0,\infty)$ . Thus, for all real numbers $y',y>0$ with $y'>y$ , the inequality $f(y') > f(y)$ holds. Thus, for all integers $N \ge x_0$ it follows that $f(N+1) > f(N)$ [indeed, set $y=N$ and $N+1=y'$ and note the previous sentence]. Thus, as you have shown that $f'(x)$ is positive for all $x \ge 3$ , it follows that for all integers $N \ge 3$ the inequality $f(N+1) > f(N)$ holds. So to show that $f(N)$ is at least $\frac{5}{6}$ for all positive integers $N$ , all you need to do is check $f(1),f(2)$ , and $f(3)$ . Then if all of $f(1),f(2),f(3)$ are at least $\frac{5}{6}$ , simply use the fact that $f$ is increasing to conclude $f(N)$ must be at least $f(3) = \frac{5}{6}$ for all other positive integers $N$ , equ
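A quick exact sanity check of both claims with rational arithmetic (my own script, using the question's shifted $f$):

```python
from fractions import Fraction

def f(N):
    # f(N) = (1 + N^2) / (N (1 + N)) - 5/6, as in the question
    return Fraction(1 + N * N, N * (1 + N)) - Fraction(5, 6)

values = [f(N) for N in range(3, 101)]
print(values[0], values[1])  # f(3) = 0, f(4) = 1/60

increasing = all(b > a for a, b in zip(values, values[1:]))
positive = all(v > 0 for v in values[1:])
print(increasing, positive)  # -> True True
```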
|sequences-and-series|algebra-precalculus|functions|proof-writing|
0
Parsing this wff efficiently
I have the wff: $$\alpha = (((\neg(A_1\rightarrow (A_3\vee (\neg A_2))))\wedge(A_4\wedge(\neg A_1)))\rightarrow((\neg(A_3\vee A_2))\rightarrow(((\neg A_1)\wedge A_4)\vee A_3)))$$ I've already parsed through $12$ of the $16$ possibilities in the truth table. If we let $$\beta = ((\neg(A_1\rightarrow (A_3\vee (\neg A_2))))\wedge(A_4\wedge(\neg A_1)))$$ and $$\gamma = ((\neg(A_3\vee A_2))\rightarrow(((\neg A_1)\wedge A_4)\vee A_3))$$ Then $\alpha = (\beta \rightarrow \gamma)$ . I've already shown that $v(A_4)=F\rightarrow \bar{v}(\beta)=F\rightarrow \bar{v}(\alpha)=T$ . I've also shown that $v(A_3)=T\rightarrow \bar{v}(\gamma)=T\rightarrow \bar{v}(\alpha)=T$ . So $\alpha$ is true so far . I just need to show what happens for the remaining $4$ cases when $v(A_3)=F$ and $v(A_4)=T$ . I'm trying to do so efficiently, I know I can get the answer in a usual parsing algorithm. I noticed that when $A_3$ and $A_4$ are as such, it seems $\alpha$ is true for either $v(A_1)=T$ or $v(A_1)=F$ but I'm w
$$\beta=\neg(A_1\to(A_3\lor\neg A_2))\land A_4\land\neg A_1$$ Since $\neg(A\to B)=A\land\neg B$ $$\beta=A_1\land\neg(A_3\lor\neg A_2)\land A_4\land\neg A_1=False$$ Since anything follows from False, $\alpha$ is a tautology.
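The conclusion can also be confirmed by brute force over all $16$ valuations (a short script of mine; `imp` is material implication):

```python
from itertools import product

imp = lambda p, q: (not p) or q  # material implication

def alpha(a1, a2, a3, a4):
    beta = (not imp(a1, a3 or (not a2))) and (a4 and (not a1))
    gamma = imp(not (a3 or a2), ((not a1) and a4) or a3)
    return imp(beta, gamma)

tautology = all(alpha(*v) for v in product([False, True], repeat=4))
print(tautology)  # -> True
```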
|logic|parsing|
1
Number of Salem–Spencer subsets of $\{1,2,3,\dots ,n\}$
I was wondering about sets that do not contain any $3$ -term AP, and came to know that the official name of such a set is Salem–Spencer set . I was considering the question of counting the number of Salem–Spencer subsets of $\{1,2,3,\dots ,n\}$ for a given $n$ . This is listed in OEIS A051013 . However, neither on OEIS, nor on anywhere else could I find any literature dealing with the specific question at hand. So, my question is whether anyone knows of any formula (recursive or otherwise) for the number of Salem–Spencer subsets of $\{1,2,3,\dots ,n\}$ for a given $n$ . I tried my hand at a recursion technique and tried to use the fact that such a set either ends with $n$ or it doesn't. So, we have $$T(n)=T(n-1) + S(n)$$ where $T(n)$ is the number of Salem–Spencer subsets of $\{1,2,3,\dots ,n\}$ and $S(n)$ is the number of Salem–Spencer subsets of $\{1,2,3,\dots ,n\}$ containing $n$ . These $S(n)$ 's are listed in OEIS A334893 (with the specific cases listed in OEIS A334892 ) but there
Let $r_3(n)$ be the maximum size of a Salem-Spencer set of $[n]$ . Then the number $N$ of Salem-Spencer sets of $[n]$ is trivially bounded by $$2^{r_3(n)}\leq N \leq \sum_{k=1}^{r_3(n)} \binom{n}{k} \leq n^{r_3(n)}.$$ The lower bound is because any subset of a Salem-Spencer set is Salem-Spencer, so there are at least $2^{r_3(n)}$ such sets. The upper bound is because any Salem-Spencer set has size at most $r_3(n)$ . In other words, we have $$r_3(n) \ll \log_2 N \ll r_3(n) \log n.$$ With our current knowledge of $r_3(n)$ , I think we cannot do much better. Indeed, the SOTA (state-of-the-art) bound for $r_3(n)$ is something like $$\frac{n}{2^{c\log^{1/9} n}} \geq_{\text{Kelley-Meka-Bloom-Sisask}} r_3(n) \geq_{\text{Behrend}} \frac{n}{2^{c\log^{1/2} n}}.$$ where $c$ denotes some positive constant. Note that the lower and upper bounds are much further than a factor of $\log n$ apart! Thus, the best we can write down, without a major breakthrough on $r_3(n)$ , is something like $$\frac{n}{2
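For small $n$ the counts can be brute-forced directly (a script of mine; the first values $1, 2, 4, 7, 13, 23$ agree with OEIS A051013):

```python
from itertools import combinations

def has_3ap(s):
    # a < b < c form a 3-term AP exactly when 2b = a + c
    return any(2 * b == a + c for a, b, c in combinations(sorted(s), 3))

def T(n):
    # number of subsets of {1, ..., n} with no 3-term AP
    count = 0
    for mask in range(1 << n):
        s = [i + 1 for i in range(n) if mask >> i & 1]
        if not has_3ap(s):
            count += 1
    return count

print([T(n) for n in range(6)])  # -> [1, 2, 4, 7, 13, 23]
```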
|combinatorics|number-theory|elementary-number-theory|arithmetic-progressions|pattern-recognition|
0
Steps WA used to simplify $\left( \frac{\cos \beta \sin \beta \cos\phi +\sin \beta}{\sin^2 \phi + \sin^2\beta \cos^2 \phi} \right)^2\cos \phi$
I have the following expression: $$\left( \frac{\cos \beta \sin \beta \cos\phi +\sin \beta}{\sin^2 \phi + \sin^2\beta \cos^2 \phi} \right)^2\cos \phi$$ Wolfram Alpha yields a simplified form of this as, $$\frac{\sin^2 \beta \cos \phi}{(\cos \beta \cos \phi -1)^2}$$ I can multiply the first expression out and get, $$\frac{\cos^2\beta \sin^2\beta \cos^2 \phi +\sin^2 \beta +2 \cos \beta \sin^2 \beta \cos \phi}{\sin^4 \phi + \sin^4\beta \cos^4 \phi + 2 \sin^2 \beta \sin^2 \phi \cos^2 \phi}$$ but after that I can't see the way forward to get the simplified W-A expression.
$$\left( \frac{\cos \beta \sin \beta \cos\phi +\sin \beta}{\sin^2 \phi + \sin^2\beta \cos^2 \phi} \right)^2\cos \phi=\sin^2\beta\cos\phi\left( \frac{\cos\beta\cos\phi+1}{\sin^2 \phi + \sin^2\beta\cos^2\phi}\right)^2$$ It remains to show that (up to sign, because this is in a square) $$\frac{\cos\beta\cos\phi+1}{\sin^2 \phi +\sin^2\beta\cos^2\phi}=\frac1{\cos\beta\cos\phi-1}$$ This can be established as follows: $$\frac{\cos\beta\cos\phi+1}{\sin^2 \phi +\sin^2\beta\cos^2\phi}=\frac{\cos\beta\cos\phi+1}{1-\cos^2\phi+\sin^2\beta\cos^2\phi}=\frac{\cos\beta\cos\phi+1}{1-(1-\sin^2\beta)\cos^2\phi}=\frac{\cos\beta\cos\phi+1}{1-\cos^2\beta\cos^2\phi}=\frac1{1-\cos\beta\cos\phi}$$ where we have used the difference of two squares in the last step.
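A numerical spot-check of the identity at a few arbitrary angles (my own choice of values):

```python
import math

def lhs(beta, phi):
    num = math.cos(beta) * math.sin(beta) * math.cos(phi) + math.sin(beta)
    den = math.sin(phi) ** 2 + math.sin(beta) ** 2 * math.cos(phi) ** 2
    return (num / den) ** 2 * math.cos(phi)

def rhs(beta, phi):
    return (math.sin(beta) ** 2 * math.cos(phi)
            / (math.cos(beta) * math.cos(phi) - 1) ** 2)

max_err = max(abs(lhs(b, p) - rhs(b, p))
              for b in (0.3, 0.7, 1.1) for p in (0.2, 0.9, 1.4))
print(max_err)  # tiny
```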
|algebra-precalculus|wolfram-alpha|
0
Steps WA used to simplify $\left( \frac{\cos \beta \sin \beta \cos\phi +\sin \beta}{\sin^2 \phi + \sin^2\beta \cos^2 \phi} \right)^2\cos \phi$
I have the following expression: $$\left( \frac{\cos \beta \sin \beta \cos\phi +\sin \beta}{\sin^2 \phi + \sin^2\beta \cos^2 \phi} \right)^2\cos \phi$$ Wolfram Alpha yields a simplified form of this as, $$\frac{\sin^2 \beta \cos \phi}{(\cos \beta \cos \phi -1)^2}$$ I can multiply the first expression out and get, $$\frac{\cos^2\beta \sin^2\beta \cos^2 \phi +\sin^2 \beta +2 \cos \beta \sin^2 \beta \cos \phi}{\sin^4 \phi + \sin^4\beta \cos^4 \phi + 2 \sin^2 \beta \sin^2 \phi \cos^2 \phi}$$ but after that I can't see the way forward to get the simplified W-A expression.
So instead of multiplying the expression out, work with the first equation. Factoring out $\sin \beta$ in the numerator and substituting for sines in the denominator yields, $$\left( \frac{\sin\beta(\cos\beta\cos\phi+1)}{1-\cos^2\phi +(1-\cos^2\beta)\cos^2\phi} \right)^2\cos\phi$$ Multiplying out in the denominator, $$\left( \frac{\sin\beta(\cos\beta\cos\phi+1)}{1-\cos^2\phi +\cos^2\phi-\cos^2\beta\cos^2\phi} \right)^2\cos\phi$$ Simplifying, $$\left( \frac{\sin\beta(1+\cos\beta\cos\phi)}{1-\cos^2\beta\cos^2\phi} \right)^2\cos\phi$$ Breaking out the denominator yields $$\left( \frac{\sin\beta(1+\cos\beta\cos\phi)}{(1+\cos\beta\cos\phi)(1-\cos\beta\cos\phi)} \right)^2\cos\phi$$ Dividing out common expression in numerator and denominator produces $$\left( \frac{\sin\beta}{(1-\cos\beta\cos\phi)} \right)^2\cos\phi$$ which results in a final solution of $$ \frac{\sin^2\beta\cos\phi}{(1-\cos\beta\cos\phi)^2}$$ which is the same as the Wolfram Alpha solution.
|algebra-precalculus|wolfram-alpha|
0
On the isomorphism $\mathbb{R} \cong \mathbb{R}^2$
It is a well known pathological counterexample in group theory that (assuming the axiom of choice), we have an isomorphism $$\mathbb{R} \cong \mathbb{R}^2,$$ where these are additive groups. This somewhat unsettling result generally is shown by appealing to fact that both of these groups, when viewed as $\mathbb{Q}$ -vector spaces, have the same dimension. Naturally, showing this relies on the axiom of choice. As with most results coming from the axiom of choice, it is strongly insinuated that one cannot explicitly construct an isomorphism $\phi : \mathbb{R} \rightarrow \mathbb{R}^2$ . This is very believable, but I'm wondering if it has actually been shown that this fact is equivalent to choice (i.e. without choice, the two are not isomorphic). If so, what would a proof of that look like? All comments are appreciated.
It's definitely not equivalent to full choice, because "choice can start to fail arbitrarily high up the von Neumann hierarchy". One way to see it's independent of $\mathsf{ZF}$ (i.e. it "requires some choice") is to note that composing an isomorphism $\Bbb R \to \Bbb R^2$ with a projection map $\Bbb R^2 \to \Bbb R$ will give a non-injective nontrivial homomorphism $\Bbb R \to \Bbb R$ . In other words, this gives a discontinuous solution to the Cauchy functional equation . And it's known to be consistent with $\mathsf{ZF}$ that there's no such function .
|abstract-algebra|group-theory|axiom-of-choice|
1
Smallest group acting transitively on projective space
Let $ K $ be a field. Let $ K^n $ be an $ n $ dimensional vector space over $ K $ . Let $ KP^{n-1} $ be the projective space of lines in $ K^n $ . Let $ GL(n,K) $ be the group of invertible $ n \times n $ matrices over $ K $ . What is the smallest subgroup of $ GL(n,K) $ that acts transitively on $ KP^{n-1} $ ? Context: When $ K=\mathbb{C} $ then I think the smallest subgroup of $ GL(n,\mathbb{C}) $ acting transitively on $ \mathbb{C}P^{n-1} $ is $ SU(n) $ . When $ K=\mathbb{R} $ then I think the smallest subgroup of $ GL(n,\mathbb{R}) $ acting transitively on $ \mathbb{R}P^{n-1} $ is $ SO(n) $ . I was wondering if this is true and also what the corresponding group is for other choices of $ K $ .
For $K = \Bbb R$ it follows from Montgomery & Samelson, "Transformation Groups of Spheres" that one can do better than $\operatorname{SO}(n)$ when $n \equiv 0 \pmod 2$ (and $n > 2$ ) or $n = 7$ . If $n \equiv 0 \pmod 2$ and $n > 2$ , $\operatorname{SU}\left(\frac n2\right)$ , which has dimension $\frac{1}{4} n^2 - 1$ , acts transitively on $\Bbb R P^{n - 1}$ . If $n \equiv 0 \pmod 4$ , $\operatorname{Sp}\left(\frac n4\right)$ , which has dimension $\frac18 n (n + 2)$ , acts transitively on $\Bbb R P^{n - 1}$ . In the special case $n = 16$ , $\operatorname{Spin}(9)$ , which has the same dimension ( $36$ ) as $\operatorname{Sp}(4)$ , also acts transitively on $\Bbb R P^{15}$ . In the special case $n = 8$ , $\operatorname{Spin}(7)$ , which has dimension $21$ , acts transitively on $\Bbb R P^7$ ; it is smaller than $\operatorname{SO}(8)$ (dimension $28$ ) but it's still larger than $\operatorname{Sp}(2)$ (dimension $10$ ). If $n = 7$ , $\operatorname{G}_2$ , which has dimension $14$ , acts
|group-theory|algebraic-geometry|finite-groups|representation-theory|lie-groups|
1
Is every group isomorphic to a set of isomorphisms?
Informally: Every group is representable (up to an isomorphism) as a group of isomorphisms. Formally: For every group $G$ there exists a binary relation $f$ on some set $U$ such that $G$ is isomorphic to the group of all bijections $g$ on $U$ such that $g^{-1}\circ f\circ g=f$ . In other words, is every group $G$ isomorphic to the group of isomorphisms of some (possibly infinite) digraph? Moreover, is this digraph $f$ unique up to an isomorphism, for each group $G$ ?
It appears that the answer to your question is yes, every group $G$ arises as the automorphism group of some graph. In fact, we can even ask for that graph to be undirected. The case of a finite group being representable as the automorphism group of a finite undirected graph is called Frucht's theorem . This has apparently been generalised to infinite groups, as Wikipedia briefly discusses in this section . The references given for this result are de Groot, Johannes (1959), "Groups represented by homeomorphism groups", Mathematische Annalen, 138: 80–102, doi:10.1007/BF01369667 Sabidussi, Gert (1957), "Graphs with given group and given graph-theoretical properties", Canadian Journal of Mathematics, 9: 515–525, doi:10.4153/cjm-1957-060-7 I don't have access to either, and I don't claim to have read or understood these proofs! For the uniqueness part, Wikipedia says that in the case of finite groups, uniqueness fails pretty catastrophically, and indeed it's easy enough to come up with exa
|group-theory|group-isomorphism|graph-isomorphism|
1
How to analytically solve $ax^2-b \sqrt{x}+c=0$
How does one analytically solve $ax^2-b \sqrt{x}+c=0$ , where $a, b, c$ are positive constants? Mathematica gives long, complex solutions, but I am not sure whether they are reliable.
Substitute $\sqrt x=y$ and divide the entire equation by $a$ ; this loses no generality since $a>0$ . We then get the quartic $y^4-\frac{b}{a}y+\frac{c}{a}=0$ , which can be solved using methods described here . Once roots for $y$ are found, you can then find possible values for $x$ . You will have to make sure you do not obtain any extraneous solutions; for example, although the root $y=-1$ would suggest $x=1$ (as $y^2=1$ ), this does not work because $\sqrt{x}$ is always nonnegative.
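A numerical sketch of this procedure (example coefficients of my own; I use sign-scanning plus bisection on the quartic rather than the closed-form radical solution):

```python
a, b, c = 1.0, 3.0, 1.0  # example positive coefficients

# Substituting y = sqrt(x) and dividing by a gives y^4 - (b/a) y + c/a = 0.
g = lambda y: y ** 4 - (b / a) * y + c / a

# Scan for sign changes on [0, 1 + b/a + c/a] and bisect each bracket;
# only y >= 0 is searched, so no extraneous negative roots appear.
roots = []
lo, hi, n = 0.0, 1.0 + b / a + c / a, 4000
for i in range(n):
    u, v = lo + (hi - lo) * i / n, lo + (hi - lo) * (i + 1) / n
    if g(u) == 0.0:
        roots.append(u)
    elif g(u) * g(v) < 0.0:
        for _ in range(80):  # bisection
            m = 0.5 * (u + v)
            if g(u) * g(m) <= 0.0:
                v = m
            else:
                u = m
        roots.append(0.5 * (u + v))

xs = [y * y for y in roots]  # x = y^2
residuals = [a * x * x - b * x ** 0.5 + c for x in xs]
print(xs, residuals)
```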
|algebra-precalculus|
0
$\dfrac{1}{a-b \cos x}$ format integral
Looking at solving: $$\int_{0}^{2\pi}\frac{1}{\sqrt{2}-\cos x}\,dx \\$$ According to integral tables (such as this one ) the solution is: $$\frac{2}{\sqrt{a^2 - b^2}}\arctan\frac{\sqrt{a^2 - b^2}\tan(x/2) }{a+b}$$ $$a=\sqrt{2}, b=-1$$ $$\left.2\arctan\frac{\tan(x/2) }{\sqrt{2}-1}\right\rvert_0^{2\pi}$$ Solving this equation for $x={2\pi}$ results in zero, and for $x=0$ results in zero, so the result is zero. Except I'm missing something here because the original function to be integrated is never less than zero, it oscillates between 0.414 and 2.414, so there must be net positive area under that curve. Another way to solve this integral is to transform it to a complex line integral, using residues and getting a result of ${2\pi}$ $$\int_{0}^{2\pi}\frac{1}{\sqrt{2}-\cos x}\,dx = {2\pi}\\$$ Why doesn't the table of integrals get the same result?
Since $\tan \frac{x}{2}$ is undefined at $x=\pi$ , we first change the integration interval from $(0,2\pi)$ to $(0,\pi)$ using the symmetry of cosine. $$ \int_0^{2 \pi} \frac{1}{\sqrt{2}-\cos x} d x \quad=2 \int_0^\pi \frac{1}{\sqrt{2}-\cos x} d x $$ Then putting $t=\tan \frac{x}{2}$ for $x\in (0,\pi)$ , we have $$ \begin{aligned} I & =2 \int_0^{\infty} \frac{1}{\sqrt{2}-\frac{1-t^2}{1+t^2}} \cdot \frac{2 d t}{1+t^2} \\ & =4 \int_0^{\infty} \frac{1}{(\sqrt{2}+1) t^2+(\sqrt{2}-1)} d t \\ & =\frac{4}{\sqrt{(\sqrt{2}+1)(\sqrt{2}-1)}}\left[\tan ^{-1}\left(\sqrt{\tfrac{\sqrt{2}+1}{\sqrt{2}-1}}\, t\right)\right]_0^{\infty} \\ & =2 \pi \end{aligned} $$
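The value $2\pi$ is also easy to confirm numerically; for a smooth periodic integrand the trapezoidal rule converges extremely fast:

```python
import math

f = lambda x: 1.0 / (math.sqrt(2.0) - math.cos(x))

# Trapezoidal rule on [0, 2*pi]; by periodicity the endpoint terms coincide,
# so the rule reduces to an equally weighted sum over one period.
n = 4096
h = 2.0 * math.pi / n
integral = h * sum(f(i * h) for i in range(n))

print(integral, 2.0 * math.pi)
```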
|integration|complex-analysis|
0
Use of Chebyshev's inequality in the coupon collector problem
From the note: https://terrytao.wordpress.com/2015/10/23/275a-notes-3-the-weak-and-strong-law-of-large-numbers/comment-page-1/#comment-682897 . In the coupon collector problem in which one considers an infinite sequence of “coupons” that are iid and uniformly distributed from the finite set ${\{1,\dots,N\}}$ , let ${T_N}$ denote the first time at which one has collected all ${N}$ different types of coupons, we can obtain the bounds $\displaystyle {\bf E} T_N = N \log N + O(N)$ and $\displaystyle {\bf Var} T_N = O(N^2)$ . It is then stated in the note that from Chebyshev’s inequality, we thus see: $\displaystyle {\bf P}( |T_N - N \log N| \geq \lambda N ) = O( \lambda^{-2} )$ for any ${\lambda > 0}$ . Question: Applying Chebyshev's inequality with the given bounds seems to give $\displaystyle {\bf P}( |T_N - N \log N - O(N)| \geq \lambda N ) = O( \lambda^{-2} )$ , why can we disregard the $O(N)$ term in ${\bf E} T_N$ for this estimate?
Chebyshev's inequality says $P(|X-\mu_X|\ge t) \le \frac{\sigma_X^2}{t^2}$ so we can say $P(|X-\mu_X + s|\ge t+|s|) \le \frac{\sigma_X^2}{t^2}$ . (Here $\mu_{T_N}=N H_N$ is close to and bounded above by $N\log_e(N)+ \gamma N +\frac12$ but we will use the upper bound $N\log_e(N) +c_1 N$ , while $\sigma_{T_N}^2$ is close to and bounded above by $\frac{\pi^2}{6}N^2$ though we will use the upper bound $c_2N^2$ .) So letting $X=T_N$ and $\mu_{T_N} = N\log_e(N)+s$ and $\lambda = \frac{t+|s|}{N}$ and so $t= N \lambda -|s|$ , we get $P(|X-\mu_X + s|\ge \lambda N) \le \frac{c_2N^2}{(N \lambda - c_1\lambda)^2} = \frac{c_2N^2}{(N - c_1)^2} \lambda^{-2}$ which is $O(\lambda^{-2})$ as desired. You need $N>c_1$ for this argument, but for smaller $N$ you have the probability bounded above by $1$ . It is not necessary for the argument, but note that the mode of the distribution of $T_N$ is close to $N\log_e(N)$ but, since $T_N$ has a right-skewed distribution, its mean of $N H_N$ is larger. So for sma
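Both bounds quoted from the note can be checked exactly, since $T_N$ is a sum of independent geometric waiting times with success probabilities $p_i=(N-i)/N$, giving ${\bf E}T_N = N H_N$ and ${\bf Var}\,T_N=\sum_i (1-p_i)/p_i^2$ (a small script of mine):

```python
import math

def coupon_moments(N):
    # T_N = sum of independent Geometric(p_i) waits, p_i = (N - i) / N.
    mean = sum(N / (N - i) for i in range(N))  # = N * H_N
    var = sum((1 - (N - i) / N) / ((N - i) / N) ** 2 for i in range(N))
    return mean, var

for N in (10, 100, 1000):
    mean, var = coupon_moments(N)
    assert mean <= N * math.log(N) + N          # E T_N = N log N + O(N)
    assert var <= (math.pi ** 2 / 6) * N ** 2   # Var T_N = O(N^2)
print("bounds verified")
```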
|probability-theory|coupon-collector|
1
The Jacobian of $g(\vec{x}) = f(A\vec{x} + \vec{b})\vec{x}$.
Let $A \in \mathbb{R}^{n \times n}$ and $f: \mathbb{R}^{n} \to \mathbb{R}$ . I can compute Jacobians of simple functions, but this question obliterated me, and I have spent days trying to understand it. Within the solution they derive that $[D(\vec{g}(\vec{x}))]_{jk} = f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})\frac{\partial \vec{x}_j}{\partial x_k} + \vec{x}_j \frac{\partial f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})}{\partial x_k}$ This is fine as it is just chain rule, but where they lose me is when they change to summation: $f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})\frac{\partial \vec{x}_j}{\partial x_k} + \vec{x}_j \sum_{\ell=1}^{n} \frac{\partial f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})}{\partial (\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})_{\ell}} \cdot \frac{\partial (\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})_{\ell}}{\partial x_k}$ I've tried coming up with a simple example using the 1-norm of an A $\mathbb{R}^{2 \times 2}$ , and the accompanying x and b vectors, but it doesn't
$ \def\LR#1{\left(#1\right)} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ First, define a few variables $$\eqalign{ w &= Ax+b &\qiq dw &= A\,dx \\ f &= f(w) &\qiq \;h &= \grad fw \qiq df= h^Tdw = h^TA\:dx \\ }$$ Then calculate the Jacobian of the composite function $$\eqalign{ g &= x f \\ dg &= f\:dx + x\,df \\ &= \LR{fI + xh^TA} dx \\ \grad{g}{x} &= \LR{fI + xh^TA} \;\:\equiv\; J \\ }$$ where $J$ is the Jacobian and $I$ is the Identity matrix.
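The result $J = fI + xh^TA$ is easy to validate with finite differences (a small example of my own, with the hypothetical choice $f(w)=w^Tw$, so $h=2w$):

```python
# g(x) = f(Ax + b) x with f(w) = w.w, so h = grad f = 2w and J = f I + x h^T A.
A = [[1.0, 2.0], [3.0, -1.0]]
b = [0.5, -0.25]
x = [0.3, 0.7]

def g(x):
    w = [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]
    fw = sum(wi * wi for wi in w)
    return [fw * xi for xi in x]

# Analytic Jacobian: J_{jk} = f(w) delta_{jk} + x_j (h^T A)_k
w = [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]
fw = sum(wi * wi for wi in w)
h = [2.0 * wi for wi in w]
hTA = [sum(h[l] * A[l][k] for l in range(2)) for k in range(2)]
J = [[fw * (j == k) + x[j] * hTA[k] for k in range(2)] for j in range(2)]

# Central finite differences, column by column
eps = 1e-6
J_fd = [[0.0] * 2 for _ in range(2)]
for k in range(2):
    xp = list(x); xp[k] += eps
    xm = list(x); xm[k] -= eps
    gp, gm = g(xp), g(xm)
    for j in range(2):
        J_fd[j][k] = (gp[j] - gm[j]) / (2 * eps)

err = max(abs(J[j][k] - J_fd[j][k]) for j in range(2) for k in range(2))
print(err)  # small
```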
|linear-algebra|multivariable-calculus|optimization|jacobian|
0
Eliminating $\theta$ from $x^2+y^2=\frac{x\cos3\theta+y\sin3\theta}{\cos^3\theta}$ and $x^2+y^2=\frac{y\cos3\theta-x\sin3\theta}{\cos^3\theta}$
An interesting problem from a 1913 university entrance examination (Melbourne, Australia): Eliminate $\theta$ from the expressions $$x^{2}+y^{2}=\frac{x \cos{3\theta}+y \sin{3\theta}}{\cos^{3}\theta} \tag1$$ $$x^{2}+y^{2}=\frac{y \cos{3\theta}-x \sin{3\theta}}{\cos^{3}\theta} \tag2$$ Find expressions for $x$ and $y$ in terms of $\theta$ As with many of these historic problems, I'm sure it has been discussed somewhere at length... Solutions involving complex numbers would have been acceptable at the time. (EDIT 20/Feb Australian time) I thought I had a solution but upon review I don't think it works (I found an expression for y, then substituted. It was very messy and I don't think the algebra was totally correct.) so editing the post to say: any solutions or suggestions appreciated. The wording of similar questions on the paper suggests that a "simplified" expression is possible. (Edit 21/Feb) Thanks everyone for the excellent suggestions. Really appreciated. I am wondering if the subs
Revising and extending my comment to @Maverick's answer ... (For completeness, I'll derive the result of that answer. Since I suggested the approach in a comment to the question, I don't believe this steps on anyone's toes.) First, to arrive at the parameterization, square the equations and add. (For notational convenience, define $c:=\cos3\theta$ and $s:=\sin3\theta$ .) $$\begin{align} 2(x^2+y^2)^2 &= \left(\frac{xc+ys}{\cos^3\theta}\right)^2 + \left(\frac{yc-xs}{\cos^3\theta}\right)^2 \tag1\\[4pt] &=\frac{(x^2c^2+2xycs+y^2s^2)+(y^2c^2-2yxcs+x^2s^2)}{\cos^6\theta} \tag2\\[4pt] &=\frac{(x^2+y^2)(c^2+s^2)}{\cos^6\theta} \tag3\\[4pt] &=\frac{x^2+y^2}{\cos^6\theta} \tag4 \end{align}$$ Ignoring the case $x^2+y^2=0$ (but see the Note later), we divide-through by $2(x^2+y^2)$ to get $$x^2+y^2=\frac{1}{2\cos^6\theta} \tag5$$ This allows us to write the left-hand sides of the original system without $x$ and $y$ , and we can write $$\frac{1}{2\cos^3\theta} =xc+ys \qquad \fr
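Completing the algebra (this completion is mine, not quoted from the answer): solving the original system as a linear system in $x,y$ gives one consistent parameterization $x=\frac{\cos3\theta-\sin3\theta}{2\cos^3\theta}$, $y=\frac{\cos3\theta+\sin3\theta}{2\cos^3\theta}$, which can be checked numerically against both equations:

```python
import math

def xy(theta):
    # Candidate parameterization (my own completion of the algebra)
    c3, s3 = math.cos(3 * theta), math.sin(3 * theta)
    k = 1.0 / (2.0 * math.cos(theta) ** 3)
    return k * (c3 - s3), k * (c3 + s3)

def residuals(theta):
    # Residuals of the two original equations; both should vanish.
    x, y = xy(theta)
    c3, s3 = math.cos(3 * theta), math.sin(3 * theta)
    r2 = x * x + y * y
    e1 = r2 - (x * c3 + y * s3) / math.cos(theta) ** 3
    e2 = r2 - (y * c3 - x * s3) / math.cos(theta) ** 3
    return e1, e2

for theta in (0.2, 0.5, 1.0):
    e1, e2 = residuals(theta)
    assert abs(e1) < 1e-9 and abs(e2) < 1e-9
print("both equations satisfied")
```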
|trigonometry|math-history|
1
How to calculate the area of a projected path onto a plane?
How do you "orthogonally project" the shape? After I drew multiple graphs, I still had no idea. The correct answer is $12$ . I guess it is $3 \sqrt 2 \cdot 2 \sqrt 2$ according to the gist of the graph that I drew (a plane with two parallel sides through four mid points each, and two shorter sides a little out of shape) The problem: "A bee travels in a series of steps of length $1$ : north, west, north, west, up, south, east, south, east, down. (The bee can move in three dimensions, so north is distinct from up.) There exists a plane $P$ that passes through the midpoints of each step. Suppose we orthogonally project the bee’s path onto the plane $P$ , and let $A$ be the area of the resulting figure. What is $A^2$ ?"
Adopt a Cartesian coordinate system such that the bee begins at the origin and east/north/up is in the direction of increasing $x/y/z$ , respectively. Then the trajectory of the bee is a series of steps along straight lines connecting the points defined by the sequence north, west, north, west, up, south, east, south, east, down : $$(0,0,0) \to (0,1,0) \to (-1,1,0) \to (-1,2,0) \to (-2,2,0)$$ $$\to (-2,2,1) \to (-2,1,1) \to (-1,1,1) \to (-1,0,1) \to (0,0,1) \to (0,0,0).$$ The set of midpoints $M$ along each step is $$\left\{(0,\frac{1}{2},0),(-\frac{1}{2},1,0),(-1,\frac{3}{2},0),(-\frac{3}{2},2,0),(-2,2,\frac{1}{2}),(-2,\frac{3}{2},1),(-\frac{3}{2},1,1),(-1,\frac{1}{2},1),(-\frac{1}{2},0,1),(0,0,\frac{1}{2})\right\}.$$ In order to determine the plane $P$ that contains these points, we will construct a unit vector $\hat{n}$ normal to the plane to obtain the equation of the plane and then verify that $M\subset P$ . Let $\vec{a}$ be the vector connecting the midpoints $(0,\frac{1}{2},0),(
|geometry|3d|area|
0
Why is it that for a unital $C^*$-algebra any positive linear functional $\varphi$ must be such that $\|\varphi\| =\varphi(\mathbf 1)$?
I understand that this is a consequence of a related, more general theorem for non-unital $C^*$ -algebras, where we can find approximants of a unit, but for the simple case of a unital $C^*$ -algebra $\mathscr{A}$ , what is the simple argument that implies that positive linear functionals $\varphi:\mathscr{A} \rightarrow \mathbb{C}$ (defined as $\varphi(a^*a) \geq 0, \forall \, a \in \mathscr{A}$ ) are such that $\|\varphi\| =\varphi(\mathbf 1)$ ?
First $\varphi(\mathbf 1) \le \left\|\varphi\right\| \left\|\mathbf 1\right\| = \left\|\varphi\right\|$ . For the reverse, we start with the following claim: If $a\in\mathcal A$ is self-adjoint then $$\varphi (a) \in \mathbb R\quad \text{and} \quad \left|\varphi(a)\right| \le \left\|a\right\| \varphi\left(\mathbf 1\right)$$ Proof: Since $\left\|a\right\|\mathbf 1 \succeq a \succeq - \left\|a\right\|\mathbf 1$ , by linearity and positivity of $\varphi$ , $$\left\|a\right\|\varphi\left(\mathbf 1\right) \ge \varphi(a) \ge -\left\|a\right\|\varphi(\mathbf 1)$$ Now we are back to the general case: let $a\in \mathcal A$ (not necessarily self-adjoint), \begin{align} \left|\varphi\left(a\right)\right|^2 &= \left|\varphi\left(\mathbf 1^* a\right)\right|^2\\ &\le \varphi(\mathbf 1)\left|\varphi\left(a^*a\right)\right| &&(\text{Cauchy-Schwarz})\\ &\le \varphi\left(\mathbf 1\right)\left\|a^* a\right\|\varphi\left(\mathbf 1\right) &&(\text{Apply the claim on $a^*a$})\\ &= \varphi(\mathbf 1)^2\left
|functional-analysis|c-star-algebras|
1
Showing that a von neumann algebra in the bounded operators of $l^2(S_\infty)$ is a factor
Consider $S_\infty$ , i.e. the group of permutations $\sigma:\mathbb{N} \rightarrow \mathbb{N}$ such that $\sigma(n) = n$ for all but finitely many $n$ . The left regular representation of $S_\infty$ into $B(l^2(S_\infty))$ is defined as $$\lambda_\sigma(\delta_\psi) := \delta_{\sigma \circ \psi}$$ where $\{\delta_\sigma\}$ is the orthonormal basis. Not obvious, but I know that if I show that the weak operator closure of $\text{span} \lambda(S_\infty)$ (which is clearly a von Neumann algebra) is a factor, i.e. the center of this von Neumann algebra is equal to $\mathbb{C}I$ , then $S_\infty$ has infinite conjugacy classes (this implication is not obvious). I am having trouble understanding why the weak operator closure of $\text{span} \lambda(S_\infty)$ is a factor. I am not used to dealing with $l^2(G)$ spaces.
When $G$ is a countable group and $\lambda:G\to B(\ell^2(G))$ is the left regular representation, $\lambda(G)''$ is a factor if and only if $G$ is icc. Below is the standard argument. Indeed, suppose first that $G$ is not icc, so there exists $g\in G$ such that $\{hgh^{-1}: h\in G\}=\{g_1,\ldots,g_n\}$ . Consider the element $T=\sum_{j=1}^n \lambda(g_j)\in\lambda(G)''$ . For any $h\in G$ , $$ \lambda(h)T\lambda(h)^*=\sum_{j=1}^n\lambda(hg_jh^{-1})=\sum_{j=1}^n\lambda(g_j)=T. $$ So $\lambda(h)T=T\lambda(h)$ for all $h\in G$ . So $T\in\lambda(G)'\cap\lambda(G)''=Z(\lambda(G)'')$ . This shows that if $G$ is not icc, then $\lambda(G)''$ is not a factor. Conversely, suppose that $G$ is icc. Let $T\in Z(\lambda(G)'')$ . Here we use that $\lambda(G)'=\rho(G)''$ . So $T\in \lambda(G)''\cap\rho(G)''$ . Write $T\delta_e=\sum_{g\in G}\alpha_g\delta_g$ . Then, for $h\in G$ , $$ \sum_{g\in G}\alpha_g\delta_g=T\delta_e=T\lambda(h)\rho(h)\delta_e=\lambda(h)\rho(h)T\delta_e =\sum_{g\in G}\alpha_g\delt
|functional-analysis|group-theory|operator-theory|operator-algebras|von-neumann-algebras|
1
Collinear Complex Solutions
For the equation $m^6=(m-1)^6$ there are $5$ distinct complex solutions, and I noticed that all of them are collinear in the complex plane. Is this a coincidence, or is there a theorem behind it?
If $m\in\mathbb{C}$ is a solution to the equation $m^6=(m-1)^6$ then \begin{align*} & |m^6|=|(m-1)^6| \\[4pt] \implies\;& |m|^6=|m-1|^6 \\[4pt] \implies\;& |m|^2=|m-1|^2 \\[4pt] \implies\;& m{\overline{m}} = (m-1){\overline{m-1}} \\[4pt] \implies\;& m{\overline{m}} = (m-1)({\overline{m}}-1) \\[4pt] \implies\;& m{\overline{m}} = m{\overline{m}} - m - {\overline{m}} + 1 \\[4pt] \implies\;& m+{\overline{m}} = 1 \\[4pt] \implies\;& \frac{m+{\overline{m}}}{2} = \frac{1}{2} \\[4pt] \implies\;& \text{Re}(m) = \frac{1}{2} \\[4pt] \end{align*} hence all solutions to the equation $m^6=(m-1)^6$ lie on the vertical line $\text{Re}(m)=\frac{1}{2}$ .
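A standard-library check (assuming nothing beyond `cmath`): writing $m^6=(m-1)^6$ as $(m/(m-1))^6=1$, each sixth root of unity $\omega\neq 1$ gives a solution $m=\omega/(\omega-1)$, and all five have real part $\frac12$:

```python
import cmath

# m^6 = (m-1)^6  <=>  (m/(m-1))^6 = 1 with m != 1, so m/(m-1) is a
# sixth root of unity other than 1; solving for m gives m = w/(w - 1).
roots = []
for k in range(1, 6):
    w = cmath.exp(2j * cmath.pi * k / 6)
    m = w / (w - 1)
    assert abs(m ** 6 - (m - 1) ** 6) < 1e-9   # really a solution
    roots.append(m)

print([round(m.real, 12) for m in roots])       # all equal 0.5
```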
|polynomials|complex-numbers|
0
Solving non linear PDE using Charpit's equations
$(y+zq)^2=z^2(1+p^2+q^2)$ . How to find the complete integral of the given pde by using Charpit's equations ? Charpit's equations are: $$ \frac{dx}{2pz^2}=\frac{dy}{2qz^2-2z(y+zq)}=\frac{dz}{2p^2z^2-2qz(y+zq)}=\frac{dp}{-2pz(1+p^2+q^2)}=\frac{dq}{2(y+zq)-2qz(1+p^2+q^2)} $$
Let $u:=z^2$ , so that $z=\pm\sqrt{u}$ and $(p,q)=(z_x,z_y)=\left(\pm\frac{u_x}{2\sqrt{u}},\pm\frac{u_y}{2\sqrt{u}}\right)=:\left(\frac{P}{2z},\frac{Q}{2z}\right)$ . In terms of the new variables $u, P,$ and $Q$ , the PDE reads $$ \left(y+\frac{Q}{2}\right)^2=u+\frac{P^2+Q^2}{4} \implies y^2+Qy-u-\frac{P^2}{4}=0. \tag{1} $$ The Charpit's equations corresponding to $(1)$ are $$ \frac{dx}{-\frac{P}{2}}=\frac{dy}{y}=\frac{dP}{P}=\frac{dQ}{-2y}=\frac{du}{-\frac{P^2}{2}+Qy}, \tag{2} $$ from which follow $$ P=-2(x+a)\quad\text{and}\quad Q=-2(y+b), \tag{3} $$ where $a$ and $b$ are arbitrary constants. From $(2)$ it also follows that $du=Pdx+Qdy$ ; combined with $(3)$ , it yields $$ u=-(x+a)^2-(y+b)^2+c. \tag{4} $$ To determine the constant $c$ , we substitute $(3)$ and $(4)$ in $(1)$ ; the result is $c=b^2$ . Therefore, the complete integral of the original PDE is the family of spheres $$ (x+a)^2+(y+b)^2+z^2=b^2. \tag{5} $$
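The family of spheres can be sanity-checked against the original PDE with finite differences; the constants $a=0$, $b=2$, the sample point, and the step $h$ below are arbitrary choices:

```python
from math import sqrt

a, b = 0.0, 2.0   # sample constants in the complete integral

def z(x, y):
    # branch of the sphere (x+a)^2 + (y+b)^2 + z^2 = b^2 with z > 0
    return sqrt(b ** 2 - (x + a) ** 2 - (y + b) ** 2)

x0, y0, h = 0.5, -2.5, 1e-6
p = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)   # z_x by central differences
q = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)   # z_y by central differences

lhs = (y0 + z(x0, y0) * q) ** 2
rhs = z(x0, y0) ** 2 * (1 + p ** 2 + q ** 2)
print(lhs, rhs)   # both equal b^2 = 4 up to O(h^2)
```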
|partial-differential-equations|
0
How do you reflect a graph about an arbitrary line for a precalculus student?
Here's a question one of my precalculus students asked me, paraphrased for clarity: You know how if you have the line $y=x$ , and you want to reflect the graph of a function $f(x)$ across it, you can just switch the $y$ and $x$ in the equation of the function (since you're just finding $f^{-1}$ if it exists). What if you wanted to reflect a graph about something like $y=2x$ instead? Or $y=2x-3$ ? Is there some trick analogous to "switch the $x$ and the $y$ " to find the equation of this reflected curve? My trouble is that I don't have an explanation that would be particularly good for a precalculus student. Is there an elegant way to explain to a precalculus student how to do this that looks analogous to the "switch $x$ and $y$ " trick? Here are the approaches to reflecting a curve about a line that I know, that I think are a bit too heavy for this student. Do some vector-calculus looking stuff: To reflect the graph of $f$ across $y=mx+b$ , you translate the $y$ values by $b$ to make th
Let $l$ be the line $ax+by+c=0$ . It can be shown that the reflection of a point $\left(\hat{x},\hat{y}\right)$ in $l$ is $\left(\hat{x}-2a\frac{a\hat{x}+b\hat{y}+c}{a^{2}+b^{2}},\hat{y}-2b\frac{a\hat{x}+b\hat{y}+c}{a^{2}+b^{2}}\right)$ (see e.g. here ). So, when we reflect an equation in $l$ , to get the new equation, simply replace each $x$ with $x-2a\frac{ax+by+c}{a^{2}+b^{2}}$ and each $y$ with $y-2b\frac{ax+by+c}{a^{2}+b^{2}}$ .
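The replacement rule is easy to package as a small helper (the function name is mine, not from any library); reflecting across $y=x$, i.e. $x-y=0$, reproduces the familiar coordinate swap:

```python
def reflect(x, y, a, b, c):
    """Reflect the point (x, y) across the line a*x + b*y + c = 0."""
    d = (a * x + b * y + c) / (a ** 2 + b ** 2)
    return x - 2 * a * d, y - 2 * b * d

# y = x is x - y = 0: reflection swaps coordinates
print(reflect(2.0, 5.0, 1, -1, 0))   # (5.0, 2.0)

# y = 2x is 2x - y = 0
print(reflect(1.0, 0.0, 2, -1, 0))   # (-0.6, 0.8)
```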
|algebra-precalculus|reflection|
0
Rational + irrational = always irrational?
I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational. Eg. $\sqrt2 + 1$ can be expressed as a continuous fraction, and through looking at the fraction, it can be assumed $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational. My professor said this is not always true, but I can't think of an example that suggests this. If $x+1$ is irrational, is $x$ always irrational? Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?
Let $j$ be an irrational number, and $x$ a rational number, such that $x+1=j$ . There are coprime integers $z_1$ and $z_2$ such that $x=\frac{z_1}{z_2}$ . Substituting: $$\frac{z_1}{z_2}+1=\frac{z_1}{z_2}+\frac{z_2}{z_2}=\frac{z_1+z_2}{z_2}=j$$ The sum of two integers is always an integer, so $j$ is shown to be the quotient of two integers, contradicting the assumption that $j$ is irrational. So no irrational number of the form $x+1$ exists with $x$ rational.
|irrational-numbers|
0
Solving recurrence relation in Hilbert space.
This is a question that is based off of the book $\textit{An Introduction to Nonharmonic Fourier Series}$ by Young. Assume that $\{e_n\}_{n = 1}^\infty$ is the canonical basis for $\ell^2(\mathbb{N})$ . Consider the sequence $\{e_n + e_{n+1}\}_{n = 1}^\infty$ . I want to know when this sequence is minimal. Being minimal is equivalent to the existence of another sequence $\{f_n\}_{n = 1}^\infty$ such that $(e_n + e_{n+1}, f_m) = \delta_{nm}$ (biorthogonal system). Here $(\cdot, \cdot)$ denotes the inner product of $\ell^2(\mathbb{N})$ and $$\delta_{nm} = \cases{0, n \neq m \\ 1, n = m}.$$ We know that the first element of the sequence $f_1$ must satisfy the equations $$\cases{(e_2, f_1) = 1 - (e_1, f_1) \\ (e_k, f_1) = (-1)^{k-2}(1 - (e_1, f_1))},$$ for $k \geq 2$ . How can I solve this system for $f_1$ ? More generally, what happens when I instead consider $\{c_1e_n + c_2e_{n+1}\}_{n=1}^\infty$ for scalar $c_1,c_2 \neq 0$ ? We have similar equations that $f_1$ must satisfy $$(e_2, f_1)
$\|f_1\|^{2}=\sum |(e_k,f_1)|^{2}$ . Since $|(e_k,f_1)|$ is independent of $k$ for $k>1$ , this sum is finite only when $1-(e_1,f_1)=0$ . Thus, $(e_1,f_1)=1$ and $(e_k,f_1)=0$ for $k \neq 1$ . This means $f_1=e_1$ . In the general case just use the fact that $f_1=\sum (f_1,e_k)e_k$ .
|real-analysis|functional-analysis|recurrence-relations|hilbert-spaces|inner-products|
0
Why isn't the third property in the definition of vector bundles redundant?
I am studying Manifold theory and it is essential for me to know vector bundles. The usual definition of vector bundles as given in the standard texts is as follows: Suppose $M$ is a topological manifold and $E$ is another topological manifold with a surjective continuous map $\pi:E\to M$ . Then we say $E$ is a vector bundle of rank $k$ over $M$ if the following properties are satisfied: $1.$ For each $p\in M$ the fiber $E_p=\pi^{-1}(p)$ is a $k$ -dimensional vector space. $2.$ (Local Trivialization) For each $p\in M$ there exists an open nbhd $U$ of $p$ in $M$ such that there is a homeomorphism $\varphi:\pi^{-1}(U)\to U\times \mathbb R^k$ such that $\pi_1\circ \varphi=\pi$ ( $\pi_1:U\times \mathbb R^k\to U$ being the projection map onto $U$ ). $3.$ For each $q\in U$ , the map $\varphi|_{E_q}:\pi^{-1}(q)\to \{q\}\times \mathbb R^k\cong\mathbb R^k$ is an isomorphism of vector spaces. The spaces $E$ and $M$ are referred to as the total space and the base space and $\pi:E\to M$ is called the
First of all, the assumption that $M$ and $E$ are manifolds is too strong: There are many reasons to consider vector bundles over topological spaces which are not manifolds. Second, once you assume that $M$ is a topological manifold, Property 2 will imply that $E$ is a manifold as well. Thus, in what follows, I will not assume that $M$ is a manifold, it is totally irrelevant for the discussion. Now, let's start with Property 2 instead of Property 1: It says that $E\to M$ is a (locally trivial) fiber bundle with fibers homeomorphic to ${\mathbb R}^k$ . However, if we consider "transition maps" for pairs $(U, \varphi), (V,\psi)$ , namely, maps $$ \varphi \circ \psi^{-1}: (U\cap V)\times {\mathbb R}^k\to (U\cap V)\times {\mathbb R}^k, $$ Property 2 only ensures that for all $x\in U\cap V$ , $$ \theta_{x}=\varphi \circ \psi^{-1}: \{x\}\times {\mathbb R}^k \to \{x\}\times {\mathbb R}^k $$ is a homeomorphism that does not preserve, a priori, any further structure on the vector space $\{x\}\t
|manifolds|definition|examples-counterexamples|vector-bundles|
0
If there exists an alpha that meets a certain condition show that $(na_n)$ converges to $0$
I was looking at this interesting generalization, since it covers the case when the sequence is decreasing, that is, when $\alpha = 0$ . I understand almost the entire proof except for the last part: I do not understand how the fact that $r_n = b_n - s_n$ , together with the facts that $b_n$ is increasing and the $s_n$ are bounded (since the series converges), shows that $r_n$ is convergent.
Since $\sum_n a_n$ converges, $s_n$ converges as well. Since $b_n$ is increasing, its limit is either finite or $\infty$ . If the limit were infinite, then $r_n \to \infty$ , and then $\frac1n = o(a_n)$ , which would imply that $\sum_n a_n$ diverges, a contradiction. Hence the limit of $b_n$ is finite, and $r_n = b_n - s_n$ converges, being the difference of two convergent sequences.
|real-analysis|sequences-and-series|
1
How to solve $\ln(x) = 3\left(1-\frac{1}{x}\right)$?
I have been working for this problem for a while: $$\ln(x) = 3\left(1-\frac{1}{x}\right)$$ and by graphing, plugging and chugging values and rigorously doing the math, I can clearly see that one of the values that satisfy this condition is $x = 1$ . However, when I put this into wolfram alpha and Desmos, I see two answers, one that is $x = 1$ and another one that is approximately $16.801$ . It is expressed by the Lambert W Function . I solved for $x = 1$ by using the Lambert W function. I cannot find any way to solve for the latter solution: $16.801$ . Is there anyone that could elaborate for me how to solve for it? Thank you. My method to get $x=1$ : $$\begin{align}\ln(x) = 3\left(1-\frac{1}{x}\right)\:&\Longrightarrow\:x=e^3e^{-\frac{3}{x}}\\&\Longrightarrow\:e^{-3}=\frac{1}{x}e^{-\frac{3}{x}}\\&\Longrightarrow\:-3e^{-3}=-\frac{3}{x}e^{-\frac{3}{x}}\end{align}$$ Applying Lambert W Function, which does the following: $W(ae^a) = a $ : $$-3 = -3/x \Longleftrightarrow x=1$$
Consider $-3e^{-3}=(-3/x)e^{-3/x}$ as a graph: the Lambert W function has two real branches for arguments strictly between $-1/e$ and $0$ , because $w \mapsto we^w$ is not one-to-one there, so "applying $W$" does not single out a unique value. If you apply the other branch of the function to $-3e^{-3}$ , you will get the other value, $x \approx 16.801$ .
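If one only wants the numerical value, the second root can also be found without $W$ at all; a few Newton iterations on $g(x)=\ln x-3(1-1/x)$, started to the right of the known root $x=1$ (the start $x=10$ is an arbitrary choice), converge to it:

```python
from math import log

def g(x):
    return log(x) - 3 * (1 - 1 / x)

def dg(x):
    return 1 / x - 3 / x ** 2       # derivative of g

x = 10.0                            # start well to the right of x = 1
for _ in range(50):
    x -= g(x) / dg(x)               # Newton's method

print(x)                            # approximately 16.801
```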
|calculus|logarithms|lambert-w|
1
How to calculate the area of a projected path onto a plane?
How do you "orthogonally project" the shape? After I drew multiple graphs, I still had no idea. The correct answer is $12$ . I guess it is $3 \sqrt 2 \cdot 2 \sqrt 2$ according to the gist of the graph that I drew (a plane with two parallel sides through four mid points each, and two shorter sides a little out of shape) The problem: "A bee travels in a series of steps of length $1$ : north, west, north, west, up, south, east, south, east, down. (The bee can move in three dimensions, so north is distinct from up.) There exists a plane $P$ that passes through the midpoints of each step. Suppose we orthogonally project the bee’s path onto the plane $P$ , and let $A$ be the area of the resulting figure. What is $A^2$ ?"
I think this problem is intended to be a purely logical problem with very little vector geometry. For starters, the problem states (without proof) that there IS a plane that passes through the midpoints. Hence, we are allowed to assume that without proving that this actually happens (this doesn't happen for all possible BEE movements). The problem has a large amount of symmetry. There is symmetry in the number of up/down, east/west, and north/south. Since the plane goes through the midpoints, the vertices must alternate between "above the plane" and "below the plane". We also immediately see that the projection must be both vertically and horizontally symmetric. Since the path of the BEE in 3-space is along the vertices of two cubes (but rotated from each other) and since the projected area of both is identical, then the projection of the cubes must be maximal. The maximal area of a cube projected onto a plane is $\sqrt{3}$ . Since there are two of them, we have $$(2\cdot \sqrt{3})^2=1
|geometry|3d|area|
0
Is this combination of positive semidefinite matrices positive semidefinite?
Say I have $A \succcurlyeq 0$ , $ \begin{bmatrix}A & \vec{b}_1 \\\ \vec{b}_1^T & t_1 \end{bmatrix} \succcurlyeq 0 $ , and $ \begin{bmatrix}A & \vec{b}_2 \\\ \vec{b}_2^T & t_2 \end{bmatrix} \succcurlyeq 0 $ . Is $ \begin{bmatrix}A & [\vec{b}_1, \vec{b}_2] \\\ [\vec{b}_1, \vec{b}_2]^T & \operatorname{diag}([t_1, t_2]) \end{bmatrix} \succcurlyeq 0 $ ?
No. Take $A=\begin{bmatrix}1\end{bmatrix}$ , $\vec b_1=\vec b_2=\begin{bmatrix}1\end{bmatrix}$ , and $t_1=t_2=1$ . Then each bordered matrix is $\begin{bmatrix} 1 & 1\\ 1 & 1 \end{bmatrix}\succeq 0$ , but the combined matrix $\begin{bmatrix} 1 & 1&1\\ 1&1&0\\ 1&0&1\end{bmatrix}$ is not (its determinant is $-1$ ).
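A determinant check in plain Python: the combined matrix has determinant $-1$, so the product of its eigenvalues is negative, at least one eigenvalue is negative, and the matrix cannot be positive semidefinite (while each bordered block $\begin{bmatrix}1&1\\1&1\end{bmatrix}$ has eigenvalues $0$ and $2$ and is PSD):

```python
def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

combined = [[1, 1, 1],
            [1, 1, 0],
            [1, 0, 1]]

print(det3(combined))   # -1, so the combined matrix is not PSD
```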
|linear-algebra|positive-semidefinite|block-matrices|
1
Solve $pq^2=ax+by$ - a nonlinear first-order pde
$p$ is $\partial z/\partial x$ , $q$ is $\partial z/\partial y$ . I've written down the Charpit's equations: $\frac{dp}{a}=\frac{dq}{b}=\frac{dz}{3q^2p}=\frac{dx}{q^2}=\frac{dy}{2qp}$ But I'm clueless as to what to do next. I don't know why anyone would find these enjoyable, and I'm totally clueless about what these equations would mean in the real world, but anyway, I would appreciate it if someone helped me out with this.
The solution to the ODE $\frac{dq}{b}=\frac{dx}{q^2}$ is $q=(3bx+C_1)^{1/3}$ . Since $q=\frac{\partial z}{\partial y}$ , it follows that $$ z=(3bx+C_1)^{1/3}y+f(x). \tag{1} $$ Substituting $(1)$ in the PDE $$ pq^2=ax+by, \tag{2} $$ we obtain $$ \left[b(3bx+C_1)^{-2/3}y+f'(x)\right](3bx+C_1)^{2/3}=ax+by $$ $$ \implies f'(x)=ax(3bx+C_1)^{-2/3} $$ $$ \implies f(x)=\frac{a(bx-C_1)(3bx+C_1)^{1/3}}{4b^2}+C_2. \tag{3} $$ Therefore, $$ z(x,y)=(3bx+C_1)^{1/3}\left[y+\frac{a(bx-C_1)}{4b^2}\right]+C_2 \tag{4} $$ is a complete integral of the PDE $(2)$ .
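The complete integral $(4)$ can be checked numerically against the PDE $(2)$ with finite differences; the sample constants, the evaluation point, and the step $h$ below are arbitrary choices (with $3bx+C_1>0$ so the cube root is real):

```python
a, b, C1, C2 = 2.0, 1.0, 1.0, 0.0   # arbitrary sample constants

def z(x, y):
    u = 3 * b * x + C1              # assumed positive so u**(1/3) is real
    return u ** (1 / 3) * (y + a * (b * x - C1) / (4 * b ** 2)) + C2

x0, y0, h = 1.0, 2.0, 1e-6
p = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)   # z_x by central differences
q = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)   # z_y by central differences

print(p * q ** 2, a * x0 + b * y0)  # both ~ 4.0, i.e. p q^2 = a x + b y
```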
|partial-differential-equations|
0
Prove that the sequence $\frac{4n^2-3n}{2n^2−1}$ converges to the limit $2$
I know this has probably been asked 100 times, but I couldn't find anything resembling this and I've been on this easy number for 2 hours... My work is below. We want to show that $\forall \epsilon >0$ , $\exists N$ such that $\forall n > N$ , then $|x_n-2|<\epsilon$ . $$ |\frac{4n^2-3n}{2n^2-1}-2|=|\frac{4n^2-3n-4n^2+2}{2n^2-1}|\\ =|\frac{2-3n}{2n^2-1}| $$ So we want: $$ \frac{2}{2n^2-1}<\epsilon $$ But taking $n$ larger than that value does not work? I'm not sure where I'm going wrong here. I must be making some kind of algebra error or going wrong in my reasoning, but I feel bad because this is literally the first number on the exercise sheet.
$|\frac{2-3n}{2n^2-1}|=\frac {3n-2} {2n^{2}-1}\le \frac {3n} {n^{2}}=\frac 3n<\epsilon$ if $n >\frac 3 {\epsilon}$ .
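The bound can be seen in action numerically ($\epsilon=0.01$ is an arbitrary choice; every $n>3/\epsilon$ keeps the term within $\epsilon$ of the limit):

```python
eps = 0.01
N = 3 / eps                      # the threshold from the answer: n > 3/eps

for n in range(int(N) + 1, int(N) + 1001):
    x_n = (4 * n ** 2 - 3 * n) / (2 * n ** 2 - 1)
    assert abs(x_n - 2) < eps    # |x_n - 2| <= 3/n < eps for all such n

print("checked n =", int(N) + 1, "...", int(N) + 1000)
```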
|real-analysis|convergence-divergence|
1
types of von Neumann subalgebras
Let $N$ be a von Neumann subalgebra of the von Neumann algebra $M$ . Is the type of $N$ at most the type of $M$ ?
If you refer to the main types I, II, III, then all inclusions are possible. $B(H)$ is type I and it contains all types. Any type II$_\infty$ algebra is of the form $M\otimes B(H)$ , and so by embedding $I\otimes N$ you can get the subalgebra to be of any type. Any type III algebra $M$ satisfies $M\simeq M\otimes B(H)$ , so again any type can be embedded as $I\otimes N$ .
|operator-algebras|von-neumann-algebras|
1
Prove that the sequence $\frac{4n^2-3n}{2n^2−1}$ converges to the limit $2$
I know this has probably been asked 100 times, but I couldn't find anything resembling this and I've been on this easy number for 2 hours... My work is below. We want to show that $\forall \epsilon >0$ , $\exists N$ such that $\forall n > N$ , then $|x_n-2|<\epsilon$ . $$ |\frac{4n^2-3n}{2n^2-1}-2|=|\frac{4n^2-3n-4n^2+2}{2n^2-1}|\\ =|\frac{2-3n}{2n^2-1}| $$ So we want: $$ \frac{2}{2n^2-1}<\epsilon $$ But taking $n$ larger than that value does not work? I'm not sure where I'm going wrong here. I must be making some kind of algebra error or going wrong in my reasoning, but I feel bad because this is literally the first number on the exercise sheet.
Without loss of generality, we can consider that $n\geq 2$ . Consequently, one has that \begin{align*} \left|\frac{4n^{2} - 3n}{2n^{2} - 1} - 2\right| & = \left|\frac{3n - 2}{2n^{2} - 1}\right| \leq \frac{3n + 2}{n^{2}} \leq \frac{3n + n}{n^{2}} = \frac{4}{n} \end{align*} Can you take it from here?
|real-analysis|convergence-divergence|
0
Finding a set of "graded" seminorms on the Schwartz space
For any fixed $N \in \mathbb{N}$ , let $\mathcal{S}(\mathbb{R}^N)$ be the Schwartz space. Then, it is well-known that $\mathcal{S}(\mathbb{R}^N)$ is a Fréchet space with the seminorms: \begin{equation} \lVert f \rVert_{n, \alpha} := \sup_{ x \in \mathbb{R}^N } (1 + \lvert x \rvert^n) \lvert \partial^\alpha f \rvert \end{equation} where $n$ is any non-negative integer and $\alpha$ is any multi-index on $(x_1, \cdots, x_N)$ . Now, I wonder if it is possible to find a collection of seminorms $\lVert \cdot \rVert_m$ for nonnegative intergers $m$ giving the same topology on $\mathcal{S}(\mathbb{R}^N)$ as $\lVert \cdot \rVert_{n, \alpha}$ 's but further satisfing the "graded" property \begin{equation} \lVert \cdot \rVert_0 \leq \lVert \cdot \rVert_1 \leq \lVert \cdot \rVert_2 \leq \cdots. \end{equation} A possible candidate I have come up with is the following: \begin{equation} \lVert f \rVert_m := \sum_{ n,\lvert \alpha \rvert \leq m} \lVert f \rVert_{n, \alpha} \end{equation} However, I am
This is indeed true. Pick an enumeration $\varphi: \mathbb{N}\rightarrow \mathbb{N} \times\mathbb{N}^N$ and define the metrics $$ d_1(f,g) = \sum_{n\in \mathbb{N}} 2^{-n} \frac{\Vert f-g \Vert_{\varphi(n)}}{1+\Vert f-g \Vert_{\varphi(n)}} $$ and $$ d_2(f,g)= \sum_{n\in \mathbb{N}} 2^{-n} \frac{\Vert f-g \Vert_n}{1+\Vert f-g\Vert_n}.$$ The topologies are the same if we can show that the map $$ \iota: (\mathcal{S}(\mathbb{R}^N), d_2) \rightarrow (\mathcal{S}(\mathbb{R}^N), d_1), \iota(f)=f $$ is a homeomorphism. Clearly this map is a bijection. Next we check that it is also continuous. Pick $(f_n)_{n\in \mathbb{N}}\subseteq \mathcal{S}(\mathbb{R}^N)$ converging to $f\in (\mathcal{S}(\mathbb{R}^N), d_2)$ ; we now want to show that $$ \lim_{n\rightarrow \infty} d_1(f_n, f) =0. $$ Pick $\varepsilon>0$ and $M\in \mathbb{N}$ such that $\sum_{n\geq M+1} 2^{-n} < \varepsilon/2$ . Then we have $$ d_1(f_n, f) \leq \varepsilon/2+ \sum_{k=1}^M \frac{\Vert f_n -f \Vert_{\varphi(k)}}{1+\Vert f_n -f \Vert_{\varphi(k)
|general-topology|functional-analysis|metric-spaces|frechet-space|
1
Find all $(a,b)\in\Bbb{N},$ such that $5^a +2^b +8$ is a perfect square.
Find all $(a,b)\in\Bbb{N},$ such that $5^a +2^b +8$ is a perfect square. My approach: Let $5^a +2^b +8=k^2\implies \bigg(k+5^{\frac{a}{2}}\bigg)\bigg(k-5^{\frac{a}{2}}\bigg)=8\bigg(1+2^{b-3}\bigg).$ The RHS is greater than zero and even,so each of $\bigg(k+5^{\frac{a}{2}}\bigg)$ and $\bigg(k-5^{\frac{a}{2}}\bigg)$ is even and $k$ is odd. $1+2^{b-3}$ is even only when $b=3.$ Case(1): When $b=3,k^2 =5^a +16;$ Let $k=2m+1\implies 4m^2 +4m+1=5^a +16$ or $4m(m+1)=5^a +15=5\bigg(5^{a-1}+3\bigg).$ If $m$ is odd, $m+1$ is even and vice-versa. Equating odd and even parts , Sub-case(a): $m+1=5\implies m=4,$ and $4m=16=5^{a-1}+3\implies 5^{a-1}=13\implies$ no natural number exist. Sub-case (b): $m=5$ and $4(m+1)=24=5^{a-1}+3\implies 5^{a-1}=21\implies$ again no natural number exist. So,there is no natural number solution set exist for $b=3.$ This implies that $1+2^{b-3}$ is odd. This means that even components: $k+5^{\frac{a}{2}}=c\bigg(1+2^{b-3}\bigg).$ And $8=8\bigg(k-5^{\frac{a}{2}}\bigg).....
Equation: $2^b + 5^a + 8 = N^2$ where $a$ , $b$ and $N$ are some natural numbers. 1. Case $a = 0$ : For $a = 0$ , the equation becomes $2^b + 9 = N^2$ . $2^b = (N+3)(N-3)$ . Let $2^{k_2} = N+3$ and $2^{k_1} = N-3$ , where $k_2 > k_1$ and $k_1 + k_2 = b$ . $2^{k_2} - 2^{k_1} = 6$ . This implies $2^{k_1}(2^{k_2-k_1}-1) = 2 \times 3$ . Forced by this, $k_1 = 1$ and thus $N = 5$ . Consequently, $2^b = 16$ and $b = 4$ , with $k_2 = 3$ . Therefore, $(a, b) = (0, 4)$ . 2. Case $a = 1$ : For $a = 1$ , the equation becomes $2^b + 13 = N^2$ . Note that $N$ must be odd, and $N^2 \equiv 1 \pmod 8$ . Therefore, $2^b + 13 \equiv 1 \pmod 8$ , leading to $2^b \equiv 4 \pmod 8$ . Thus, $b = 2$ and $17 = N^2$ which leads to a contradiction. Hence, there are no integer solutions for $a = 1$ . 3. Case $a = 2$ : For $a = 2$ , the equation becomes $2^b + 33 = N^2$ . It is clear that $N$ must be odd, and hence $2^b \equiv 0 \pmod 8$ , implying $b \geq 3$ . Considering $2^b \equiv N^2 \pmod 3$ , with $N^2 \equiv 1 \
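The casework can be corroborated by brute force over a small range (treating $\mathbb N$ as containing $0$, as the $a=0$ case does; the cut-offs $a<6$, $b<12$ are arbitrary). Note that the $a=2$ case, truncated above, contributes a second solution $(2,8)$, since $2^8+33=289=17^2$:

```python
from math import isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

# search 5^a + 2^b + 8 = N^2 over a small (arbitrary) range
solutions = [(a, b)
             for a in range(6)
             for b in range(12)
             if is_square(5 ** a + 2 ** b + 8)]
print(solutions)   # [(0, 4), (2, 4), (2, 8)]
```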
|number-theory|diophantine-equations|
0
Finding a lower bound on the roots
Question Statement- If $z$ is a complex number and $x_0, x_1,\ldots, x_{2n}$ are real, then prove that all the roots of the equation $$(\sin x_0)z^{2n}+(\cos x_1)z^{2n-1}+(\sin x_2)z^{2n-2}+\cdots+(\cos x_{2n-1})z+\sin x_{2n}=b$$ lie in the region $$|z|>1-\frac{1}{|b|}$$ My working- I tried using the conventional methods of Vieta's relations and taking conjugates to somehow arrive at some conclusions. Then I tried proving that maybe all the roots have to be real, or that every root's complex conjugate must also be a root, as the only possible complex coefficient is $b$ . I have noticed that the inequality essentially means we have to prove $|z|>1$ , and also that all the sin and cos functions essentially imply that all coefficients are less than or equal to $1$ in magnitude. But I haven't gotten much out of it. Would love some hints!
Hint: assume for the sake of contradiction that $z$ is a root of the polynomial equation and $|z| \le 1-1/|b|$ . Find an upper bound for the left-hand side using the triangle inequality and the sum of a geometric series.
|polynomials|complex-numbers|
1
Given general solution of a matrix, how to reconstruct its RREF form?
Here's the problem I am trying to solve: I get that the second column would have an independent variable (as it would be a pivotal column) and the third and first columns would have nonpivotal columns (as they could be expressed as free variables). But what's next? How to approach constructing R and k matrices?
From the general solution, it is clear that $\begin{bmatrix}1\\2\\0\end{bmatrix}$ and $\begin{bmatrix}0\\5\\1\end{bmatrix}$ form a basis for the null space of $A$ and $R$ . So, can you construct an RREF matrix $R$ from this information? Note that if $C_1, C_2, C_3$ are the columns of $R$ , then we have $C_1+2C_2=0_{3\times1}$ and $5C_2+C_3=0_{3\times1}$ . Check this after you have solved it. $R=\begin{bmatrix}1 & \frac{-1}{2} & \frac{5}{2} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ After this, $k=R\begin{bmatrix}0\\4\\0\end{bmatrix}$ .
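A quick check of the hinted $R$ (floating point is exact here, since every entry is dyadic):

```python
R = [[1.0, -0.5, 2.5],
     [0.0,  0.0, 0.0],
     [0.0,  0.0, 0.0]]

def matvec(M, v):
    # matrix-vector product, row by row
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# both null-space basis vectors are annihilated by R
assert matvec(R, [1, 2, 0]) == [0.0, 0.0, 0.0]
assert matvec(R, [0, 5, 1]) == [0.0, 0.0, 0.0]

k = matvec(R, [0, 4, 0])
print(k)   # [-2.0, 0.0, 0.0]
```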
|linear-algebra|matrix-equations|matrix-decomposition|
1
Using Matrix Calculus in Backpropagation derivation. Rules of order of matmul and transposition when taking derivatives in different layouts.
I define a neural network with $L$ layers ( $L-1$ hidden layers). The forward pass is as follows: $$ \mathbf{a}^{(l)} = f(\mathbf{W}^{(l)}\mathbf{a}^{(l-1)}+\mathbf{b}^{(l)}) $$ Where $l \in [0,L]$ and $\mathbf{a}^{(0)}=\mathbf{x}$ . We have a dataset $D = \{(\mathbf{X}_i, \mathbf{Y}_i) \mid \mathbf{X}_i = \mathbf{x} \in \mathbf{X}, \mathbf{Y}_i = \mathbf{y} \in \mathbf{Y}, i = 1, \ldots, N\}$ , where $N$ is the size of the dataset and $\mathbf{x}$ and $\mathbf{y}$ are the $i$ th input and label of the dataset. We have a cost/loss function that for example is defined as half the mean square error. $N_L$ is the size of the output layer. For a single data point (SGD) we have: $$ C(\mathbf{a}^{(L)},\mathbf{y}) = \frac{1}{N_{L}}\sum_{i=1}^{N_{L}}{\frac{1}{2}(a^{(L)}_{i}-y_i)^2} $$ We want to minimise the cost function to train the network. $$ \min_{\mathbf{W^{(l)}}, \mathbf{b^{(l)}} \space \forall{l \in [0,L]}} C(\mathbf{a}^{(L)},\mathbf{y}) $$ We derive backpropagation algorithm without m
There is in fact a chain-rule for matrix calculus: https://ccrma.stanford.edu/~dattorro/matrixcalc.pdf However, from an implementation perspective, there is no "right" answer; it depends mostly on how the deep learning framework is optimized for the underlying hardware. The idea is to maximize the amount of parallelism achieved, because ML training is done on GPUs (which execute in parallel). In frameworks like PyTorch/Tensorflow, the dominating paradigm is autograd, which constructs an execution graph in a very granular way (at the level of individual calculations like summation, subtraction, etc) and stores gradients for each granular calculation. Here is an overview: https://pytorch.org/blog/overview-of-pytorch-autograd-engine/ . The execution thus proceeds layer-by-layer, with each layer running the matmuls in parallel. You can probably find the exact ordering but it requires sifting through a lot of CUDA code. This detail is not important in terms of understanding neural nets, unle
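The chain rule underlying backpropagation is easy to sanity-check on a toy one-neuron "network" (a sketch independent of any framework; the sample data, parameters, and step $h$ are arbitrary): the analytic gradient of the cost with respect to the weight matches a finite-difference estimate.

```python
from math import tanh

# toy one-neuron network: a = tanh(w*x + b), cost C = 0.5*(a - y)^2
x, y = 1.5, 0.2      # a single (input, label) pair
w, b = 0.3, -0.1     # current parameters

def cost(w_):
    a = tanh(w_ * x + b)
    return 0.5 * (a - y) ** 2

# backprop (chain rule): dC/dw = (a - y) * (1 - a^2) * x
a = tanh(w * x + b)
grad_backprop = (a - y) * (1 - a ** 2) * x

# independent check: central finite differences
h = 1e-6
grad_numeric = (cost(w + h) - cost(w - h)) / (2 * h)

print(grad_backprop, grad_numeric)   # agree to about 1e-9
```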
|calculus|matrices|matrix-calculus|machine-learning|neural-networks|
0
Confusion regarding the first principle to find derivative of a function.
Let me take an arbitrary function f which is continuous and differentiable wherever defined. According to the first principle, $f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$ . Also, $f'(x)=\lim_{h\to 0} \frac{f(x)-f(x-h)}{h}$ . However, if I were to replace $x$ with $x+h$ and say $f'(x+h)=\lim_{h\to 0} \frac{f(x+2h)-f(x+h)}{h}$ it would be wrong. An example would be to take $sin(x+h)$ , by using compound angle formulae we would get $cos(x+3h/2)$ . So why exactly is it wrong to say $f'(x+h)=\lim_{h\to 0} \frac{f(x+2h)-f(x+h)}{h}$ ?
That is because the definition of the derivative applies to a function of a single variable at a fixed point. In $\lim_{h\to 0} \frac{f(x+2h)-f(x+h)}{h}$ , the base point $x+h$ itself depends on the limit variable $h$ , so the expression no longer matches the definition. You could write the difference quotient at $x+h$ only if $h$ were some constant number, but in that case it would not be the same $h$ that appears in the limit, since that one is clearly not a constant value.
|limits|
0
Norm-attainig functionals on $c_0$.
I am trying to solve an exercise from An Introduction to Banach Space Theory by R. E. Megginson, but I am a little bit stuck. It is Exercise 1.114: 1.114. Characterize the elements of $c_0^*$ that are norm-attaining. Conclude that the norm-attaining functionals form a dense subset of $c_0^*$ . In this context, $c_0$ is the space of sequences that converge to $0$ , viewed as a Banach space with the supremum norm. $c_0^*$ stands for the continuous dual of $c_0$ , i.e., linear and bounded functionals over the field $K$ , which is either $\mathbb{R}$ or $\mathbb{C}$ . Previously in the text, it is shown that there exist elements of $c_0^*$ that are not norm-attaining (in particular, $c_0$ is not reflexive.) Additionally, we already know that $l_1$ is isometrically isomorphic to $c_0^*$ as Banach spaces. This isomorphism is given by $$T:l_1\longrightarrow c_0^*,\qquad T(x)=\varphi : c_0\longrightarrow K, \quad x=(x_n)\in l_1,$$ where $$\varphi(y)= \sum_n x_ny_n,\qquad y=(
$\varphi=(x_n)$ is norm-attaining if and only if $x_n=0$ for all $n$ sufficiently large. Proof: If $x_n=0$ for all $n$ sufficiently large, then the norm of $\varphi$ is attained at $(\alpha_n)$ , where $\alpha_n= \frac {\overline {x_n}} {|x_n|}$ when $x_n \neq 0$ and $0$ when $x_n=0$ . Suppose the norm is attained. Then $\sum |x_n|=\sum x_ny_n$ for some $(y_n) \in c_0$ with $|y_n| \le 1$ for all $n$ . But then, $\sum |x_n|=\sum x_ny_n\le \sum |x_n||y_n|\le \sum |x_n|$ . You will get strict inequality (which is a contradiction) unless $|y_n|=1$ for every $n$ with $x_n \ne 0$ . But remember that we need $y_n \to 0$ . Hence, this cannot happen unless $x_n=0$ for all $n$ sufficiently large.
|functional-analysis|banach-spaces|dual-spaces|
1
Why does $\sin(0)$ exist?
I can't understand why $\sin(0)$ should exist: if an angle is $0^{\circ}$, then the triangle doesn't exist, i.e. there is no perpendicular or hypotenuse. However, if we take $\lim_{x \to 0} \sin(x)$, then I can understand $$\lim_{x \to 0} \sin(x) = 0$$ since the perpendicular is $\approx 0$. So although $\lim_{x \to 0} \sin(x)$ is $0$, I can't understand how $\sin(0)=0$; and if $\sin(0)$ is not defined, then why is the graph of $\sin(x)$ continuous?
What is $\sin$? What is $\sin(57.94°)$? According to the idea to which the OP refers, $$\sin(57.94°):=\frac{MH'}{OM}=\frac{OH}{OM}\simeq \frac{8.5}{10}=0.85$$ (the labels refer to a drawing in the original answer). But as the drawing suggests, this idea is just a restriction of more general definitions of what $\sin$ really is. For example, $$\sin(0)=0$$
|limits|trigonometry|
0
What is the usual topology on $\mathbb{R}$?
I'm studying topology. Let $T$ be the usual topology on $\mathbb{R}$ (the reals), generated by the usual metric. First, I know that the elements of $T$ are the open sets in $\mathbb{R}$. I wonder about the form of $T$ (the topology). Under the given conditions, can $T$ take many forms, or is $T$ a unique family?
Let $(\Bbb R, d_{|.|})$ be the usual (or Euclidean) real metric space, where $d_{|.|}$ denotes the usual (or Euclidean) metric $d_{|.|}:\Bbb R\times\Bbb R\to \Bbb R_0^+$ satisfying $$\forall x,y \in \Bbb R : d_{|.|}(x,y):=|x-y|.$$ All real subsets that are $d_{|.|}$-open (i.e. open with respect to the usual metric) can be collected in the class $$\mathcal T_{d_{|.|}}:=\{A\subseteq \Bbb R \mid A \text{ is } d_{|.|}\text{-open}\}.$$ This class of real subsets forms the topology induced by the usual metric, and the pair $(\Bbb R, \mathcal T_{d_{|.|}})$ is called the usual topological space. It is obvious that every open interval belongs to $\mathcal T_{d_{|.|}}$, for it is a $d_{|.|}$-open subset of $\Bbb R$.
|general-topology|
0
A die is rolled $7$ times. What is the probability the sum of the results is $14$?
In a certain exercise I am asked to find the probability that, when a die is rolled seven times, the sum of the results is $14$, given that the die is fair (that is, the probability of each of the six possible results is the same: $\frac{1}{6}$). I think I know the approach: let $X_i$ denote the discrete random variable giving the result of the $i$-th roll, so $X_i \in \{1, \dots, 6\}$. Then I consider the variable $Z = X_1 + \cdots + X_7$, and I want to find $P(Z = 14)$. Note that each of the $X_i$ is uniform: each result has probability $\frac{1}{6}$. Is the sum of $n$ uniform discrete variables also uniform? Is there any trick to find this number (just a hint)? I would be grateful if somebody could give me his/her thoughts about the exercise, but not the full solution, just a hint (at least for the moment).
We want the coefficient of $x^{14}$ in $(x+x^2+x^3+x^4+x^5+x^6)^7$, i.e. the coefficient of $x^7$ in $(1+x+x^2+x^3+x^4+x^5)^7$, i.e. the coefficient of $x^7$ in $(1-x^6)^7\frac{1}{(1-x)^7}$ (sum of a G.P.). Expanding the first factor, and the second using the Taylor series at $x=0$, we have $$(1- 7x^6 + \cdots)\left(1 + \binom71x + \binom82x^2 +\cdots+\binom{13}7x^7 + \cdots\right),$$ whose $x^7$ terms come from $1\cdot\binom{13}7x^7$ and $- 7x^6\cdot\binom71x$. Hence $$\Pr = \dfrac{\binom{13}7 - 7\cdot7}{6^7} =\dfrac{1667}{6^7}.$$
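The generating-function count can be double-checked by brute force; a short Python sketch (my own, not part of the answer) enumerating all $6^7$ equally likely outcomes:

```python
from itertools import product

# Brute-force check of the generating-function count: enumerate all 6^7
# equally likely outcomes of seven rolls and count those summing to 14.
ways = sum(1 for rolls in product(range(1, 7), repeat=7) if sum(rolls) == 14)
total = 6 ** 7
print(ways, "/", total)   # 1667 / 279936, matching C(13,7) - 7*7
```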
|probability|combinatorics|dice|
0
Information-theoretic Inequality
If we have two discrete random variables $X$ and $Y$, how can we show $$\sum_{x,y} p(x|y)p(y|x) \geq 1\,?$$ The question goes further, asking for a necessary and sufficient condition for equality. My attempt: for equality, assuming that $X$ and $Y$ are independent enables us to sum over each variable's PMF and get exactly one. However, I am stuck on showing how the inequality holds in general, and I appreciate any hints and tips.
Another similar approach: note that $\sum_{x,y}p(x|y)p(y|x)=\mathbb{E}_{p(x,y)}\left[\frac{p(x,y)}{p(x)p(y)}\right]$. Exploiting the concavity of the logarithm, we can write \begin{equation} \begin{aligned} \log\left(\mathbb{E}_{p(x,y)}\left[\frac{p(x,y)}{p(x)p(y)}\right]\right) &\underbrace{\geq}_{\text{Jensen's inequality}} \mathbb{E}_{p(x,y)}\left[\log\left(\frac{p(x,y)}{p(x)p(y)}\right)\right]\\ &\underbrace{=}_{D(p(x,y)||p(x)p(y))}I(X;Y) \geq 0, \end{aligned} \end{equation} where $I(X;Y) = H(X)-H(X|Y)$ is the mutual information. Since the logarithm of the sum is nonnegative, the sum itself is at least $1$. For equality, $I(X;Y) = 0$, i.e., $X$ and $Y$ are independent.
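The bound and its equality condition can be spot-checked numerically; a Python sketch (distribution sizes and the helper name `lhs` are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(joint):
    """sum_{x,y} p(x|y) p(y|x) for a joint pmf given as a 2-D array."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    return ((joint / py) * (joint / px)).sum()

# Random strictly positive joint distributions all satisfy the bound.
for _ in range(100):
    joint = rng.random((4, 5)) + 0.01
    joint /= joint.sum()
    assert lhs(joint) >= 1.0 - 1e-12

# Equality when X and Y are independent: joint = outer(p_x, p_y).
indep = np.outer([0.2, 0.3, 0.5], [0.1, 0.4, 0.5])
assert abs(lhs(indep) - 1.0) < 1e-12
```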
|probability|inequality|probability-distributions|conditional-probability|information-theory|
1
Identity regarding the differential of a smooth action of a Lie group
Hey, I am currently following a course on Lie groups, and I have previously followed a course on smooth manifolds. My question is the following. Let $G$ be a Lie group acting smoothly on a manifold $M$ by the action $A:G\times M\to M$ . Since $A$ is an action, the following equality holds: $$A(g,A(h,x)) = A(gh, x).$$ Now we are asked to take the differential on both sides. However, I seem to struggle with this rather easy-sounding task. What I have done so far: write $$\tilde A: G\times G\times M\to M,\quad (g,h,x)\mapsto A(g,A(h,x)) = A(gh,x).$$ Now I want to take the differential, so I should end up with a map $$d\tilde A:TG\times TG\times TM \to TM,\quad (v,w,u)\mapsto d\tilde A(v,w,u).$$ I think this should be the differential when evaluated at $(g,h,x)$: $$d(A(g,A(h,x))) = (dA)_{(g,A(h,x))}\circ (dA)_{(h,x)}.$$
It seems to me that your solution has the wrong domain: you correctly identify that your tangent map $D\tilde{A}$ has tangent vectors of the smooth product manifold $G\times G \times M$ as inputs; the dimension of these tangent spaces is $\dim G +\dim G+\dim M$ . $DA$ itself is a map on the $(\dim G + \dim M)$ -dimensional tangent space of $G\times M$ , so your composite cannot be correct as written. Maybe this helps: the right-hand side can be written as $$\tilde{A} = A\circ (m_G\times \operatorname{id}_M)\colon (G\times G)\times M \to M,$$ where $m_G\colon G\times G\to G$ is the multiplication map, i.e. $gh = m_G(g,h)$ . The left-hand side can be written as $$\tilde{A} = A\circ (\operatorname{id}_G \times A) \colon G\times (G\times M) \to M,$$ where the inner map $A$ acts on the second factor of the product of manifolds. The product of maps just tells us on which factors of the manifold the factors of the maps act. Notice that both domains are diffeomorphic, $$ G\times (G\times M) \simeq (G\times G)\times M,$$ so we can canonically identify them; applying the chain rule to each factorization then yields the differential of both sides.
|differential-geometry|solution-verification|lie-groups|differential|
1
Prove that Wallis' product and Euler's formula directly imply that $(-1/2)!=\sqrt{\pi}$
(This occurred to me recently, and I was pretty sure that it was true, so I was pleased that it really was. This has almost certainly been published many times before, but I didn't see it in either of the Wikipedia articles on the Wallis product and Euler's limit formula for the factorial, so I thought that I would propose it here.) Euler's formula for the general factorial is $$z! =\lim_{n \to \infty} \frac{n!n^z}{\displaystyle\prod_{k=1}^n (z+k)} . $$ Wallis' product is $$\frac{\pi}{2} =\prod_{k=1}^{\infty} \frac{4k^2}{4k^2-1} $$ Prove that Wallis' product and Euler's formula directly imply that $(-1/2)! =\sqrt{\pi} $ . I'll post my answer in a few days if no one else does.
Rewrite the limit as an infinite product $$ \begin{align} z! &=\lim_{n\to\infty}\frac{n!n^z}{\prod\limits_{k=1}^n(z+k)}\tag{1a}\\ &=\lim_{n\to\infty}\frac{n!(n+1)^z}{\prod\limits_{k=1}^n(z+k)}\tag{1b}\\ &=\lim_{n\to\infty}(n+1)^z\prod_{k=1}^n\frac{k}{z+k}\tag{1c}\\[6pt] &=\lim_{n\to\infty}\prod_{k=1}^n\frac{k}{z+k}\left(\frac{k+1}{k}\right)^z\tag{1d}\\[6pt] &=\prod_{k=1}^\infty\frac{k}{z+k}\left(\frac{k+1}{k}\right)^z\tag{1e} \end{align} $$ Explanation: $\text{(1a):}$ Euler's Formula $\text{(1b):}$ $\lim\limits_{n\to\infty}\frac{n+1}n=1$ $\text{(1c):}$ $n!=\prod\limits_{k=1}^nk$ $\text{(1d):}$ $n+1=\prod\limits_{k=1}^n\frac{k+1}k$ $\text{(1e):}$ definition of an infinite product According to $(3)$ in this paper, $\text{(1b)}$ is Euler's Formula. $\text{(1e)}$ is also proven in $(4)$ of this answer. Evaluate $\boldsymbol{\left(-\frac12\right)!}$ $$ \begin{align} \left(-\frac12\right)! &=\prod_{k=1}^\infty\frac{k}{-\frac12+k}\left(\frac{k+1}{k}\right)^{-\frac12}\tag{2a}\\ &=\prod_{k=1}^\infty\frac{2k}{2k-1}\left(\frac{k}{k+1}\right)^{\frac12}\tag{2b} \end{align} $$ Squaring and regrouping the factors, $$ \left[\left(-\frac12\right)!\right]^2 =\prod_{k=1}^\infty\frac{4k^2}{(2k-1)^2}\cdot\frac{k}{k+1} =\prod_{k=1}^\infty\frac{4k^2}{4k^2-1}\cdot\frac{(2k+1)k}{(2k-1)(k+1)}. $$ The partial products of the second factor telescope, $$ \prod_{k=1}^n\frac{2k+1}{2k-1}\cdot\frac{k}{k+1}=\frac{2n+1}{n+1}\longrightarrow 2, $$ while the first factor is Wallis' product, $\frac\pi2$. Hence $\left[\left(-\frac12\right)!\right]^2=2\cdot\frac\pi2=\pi$, and since every factor in $\text{(2b)}$ is positive, $\left(-\frac12\right)!=\sqrt{\pi}$.
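The product $(1\mathrm{e})$ at $z=-\tfrac12$ can also be checked numerically; a Python sketch (the cutoff $10^5$ is an arbitrary choice of mine):

```python
import math

# Numerically evaluate the partial products of (1e) at z = -1/2; they
# should approach Gamma(1/2) = sqrt(pi).
z = -0.5
prod = 1.0
for k in range(1, 10**5 + 1):
    prod *= k / (z + k) * ((k + 1) / k) ** z
print(prod, math.sqrt(math.pi))   # the two values are close
```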
|limits|factorial|pi|infinite-product|
1
Prove that the sequence $\frac{4n^2-3n}{2n^2−1}$ converges to the limit $2$
I know this has probably been asked 100 times, but I couldn't find anything resembling this and I've been on this easy problem for 2 hours... My work is below. We want to show that $\forall \epsilon >0$ , $\exists N$ such that $\forall n > N$ , $|x_n-2|<\epsilon$ . $$ \left|\frac{4n^2-3n}{2n^2−1}-2\right|=\left|\frac{4n^2-3n-4n^2+2}{2n^2−1}\right| =\left|\frac{2-3n}{2n^2−1}\right| $$ So we want: $$ \frac{3n-2}{2n^2−1}<\epsilon $$ But taking $n$ larger than that value does not work? I'm not sure where I'm going wrong here. I must be making some kind of algebra error or going wrong in my reasoning, but I feel bad because this is literally the first problem on the exercise sheet.
This is gonna be a complete departure from your method, but you could approach it like so. Consider the following: $$\lim_{n\to\infty}\frac{4n^2-3n}{2n^2-1}$$ By the rules of limits, if the limits exist, this is equal to $\lim_{n\to\infty}\frac{4n^2}{2n^2-1}-\lim_{n\to\infty}\frac{3n}{2n^2-1}$ . Since $2n^2-1\gt 3n$ for all natural numbers greater than $2$ , and the derivative of $3n$ is constant while the derivative of $2n^2-1$ is increasing on $[2,\infty)$ , this second (sub)limit is equal to $0$ . That was the easy one. For the first limit, it is helpful to rewrite $\frac{4n^2}{2n^2-1}$ as $2+\frac{2}{2n^2-1}$ ; then, by the same rule of limits we initially applied, $\lim_{n\to\infty}\frac{4n^2}{2n^2-1}$ becomes $\lim_{n\to\infty}2+\lim_{n\to\infty}\frac{2}{2n^2-1}$ . By similar reasoning as in the case of $\lim_{n\to\infty}\frac{3n}{2n^2-1}$ , the limit of $\frac{2}{2n^2-1}$ is $0$ . Furthermore, the limit of a constant is just itself, so in total we have: $$\lim_{n\to\infty}\frac{4n^2-3n}{2n^2-1}=2+0-0=2.$$
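For the $\epsilon$–$N$ formulation the OP was attempting, a quick numeric sanity check is possible; a Python sketch using the bound $|a_n-2|=\frac{3n-2}{2n^2-1}\le\frac{3}{n}$ (valid since $3n-2\le 3n$ and $2n^2-1\ge n^2$ for $n\ge 1$):

```python
import math

def a(n):
    """The sequence a_n = (4n^2 - 3n) / (2n^2 - 1)."""
    return (4 * n**2 - 3 * n) / (2 * n**2 - 1)

# Since |a_n - 2| = (3n - 2)/(2n^2 - 1) <= 3/n for n >= 1, the threshold
# N = ceil(3/eps) works in the epsilon-N definition of the limit.
for eps in (1e-1, 1e-3, 1e-5):
    N = math.ceil(3 / eps)
    for n in range(N + 1, N + 50):
        assert abs(a(n) - 2) < eps
```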
|real-analysis|convergence-divergence|
0
Integral containing floor function and derivative
Let $n>1$ be a positive integer, and let $f:\mathbb{R} \rightarrow \mathbb{R}$ be continuously differentiable on the interval $[1,n]$ . I want to calculate an integral of the form $$ \int _1^n\lfloor u \rfloor f'(u) \mathrm d u $$ I realise that I can set upper and lower bounds on the integral by writing $\lfloor x \rfloor = x - \lbrace x \rbrace$ . However, I am hoping to find a precise numerical solution. My approach has been to use integration by parts, writing $$\int_1^n \lfloor u \rfloor f'(u) \mathrm d u = nf(n)-f(1)-\int_1^n f(v) \frac{\delta}{\delta v} \lfloor v \rfloor \mathrm d v$$ (whilst noting that $\lfloor n \rfloor=n$ ). Clearly, $\frac{\delta}{\delta v} \lfloor v\rfloor$ is undetermined or infinite at integer values of $v$ . I have tried to get round this by observing that, for sufficiently small $\epsilon$ , the jumps of $\lfloor v \rfloor$ are concentrated in $\epsilon$-neighbourhoods of the integers, so that $$ \int_1^n f(v) \frac{\delta }{\delta v} \lfloor v \rfloor \, \mathrm d v = \sum_{i=2}^n \int_{i-1}^i f(v) \frac{\delta}{\delta v} \lfloor v \rfloor \, \mathrm d v, $$ but I am not sure how to make this rigorous.
$\int_1^{n} \lfloor u \rfloor f'(u)du=\sum\limits_{k=1}^{n-1}\int_k^{k+1} \lfloor u \rfloor f'(u)du=\sum\limits_{k=1}^{n-1}\int_k^{k+1} k f'(u)du=\sum\limits_{k=1}^{n-1}k[f(k+1)-f(k)]$ . This can also be written as $(n-1)f(n)-[f(1)+f(2)+\cdots+f(n-1)]$ .
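The closed form can be verified against an independent quadrature for a concrete $f$; a Python sketch with $f(u)=u^3$ and $n=6$ (my own choices; Simpson's rule is exact here because the integrand $\lfloor u\rfloor f'(u)$ is quadratic on each unit interval):

```python
def f(u):  return u ** 3       # sample smooth f
def fp(u): return 3 * u ** 2   # its derivative f'

n = 6
# Closed form from the answer: sum_{k=1}^{n-1} k * (f(k+1) - f(k)).
closed = sum(k * (f(k + 1) - f(k)) for k in range(1, n))
# Equivalent Abel-summed form: (n-1) f(n) - (f(1) + ... + f(n-1)).
alt = (n - 1) * f(n) - sum(f(k) for k in range(1, n))
# Independent check: Simpson's rule on each [k, k+1] for the integrand
# floor(u) f'(u) = k f'(u); Simpson is exact for the quadratic f' here.
simpson = sum(k * (fp(k) + 4 * fp(k + 0.5) + fp(k + 1)) / 6
              for k in range(1, n))
print(closed, alt, simpson)   # all three equal 855
```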
|integration|definite-integrals|improper-integrals|ceiling-and-floor-functions|
1
Let $G$ be a topological group that is $T_2$, $N_2$ and locally Euclidean, and suppose $G$ admits an open Lie subgroup $H$. Prove that $G$ is Lie
I found this exercise in Tao's book "Note on Hilbert fifth problem". Note that if $G$ is connected then $H=G$ , since $H$ is also closed and $H\neq\emptyset$ (the neutral element lies in $H$ ). But if $G$ is not connected, I'm a little bit stuck. Tao suggests taking a coordinate chart $(U,\phi:U \to V)$ from the atlas on $H$ and translating it around to create an atlas on $G$ which makes $G$ a Lie group, so I was trying to take the atlas $\{(gU,\psi_{g})\}_{g\in G}$ , where $\psi_{g}:gU\to V$, $gu \mapsto \phi(u)$ for all $u$ in $U$ . Unfortunately I don't know how to prove the compatibility of the charts. Sorry for the bad English, I'm from Italy :)
If two charts have disjoint domains, then compatibility of these two charts is an empty condition. Suppose on the other hand that $gU\cap hU\neq\emptyset$ , i.e. the domains of the two given charts intersect. Then $g^{-1}hU\cap U\neq\emptyset$ and $g^{-1}h\in H$ . Note that by definition $\psi_g(u)=\phi(g^{-1}u)$ , so $\psi_g\psi_h^{-1}(x) = (\phi\, g^{-1}h\,\phi^{-1})(x)$ , and you get the coordinate expression of multiplication by $g^{-1}h$ , which is an element of $H$ and hence smooth.
|differential-geometry|
1
How does commutative property not hold for subtraction?
I don't get why the commutative property is not valid under subtraction, because I find that it is, for example: $5 - 3 = 2 = -3 + 5$, or rather $5 + (-3) = 2 = -3 + 5$. So how does it not hold true for negative numbers?
That is not the commutative property. If subtraction were commutative, that would mean that for any $a,b$: $$a - b = b - a,$$ and this is clearly not the case; for example, take $a=1$ and $b=2$.
|arithmetic|
0
How does commutative property not hold for subtraction?
I don't get why the commutative property is not valid under subtraction, because I find that it is, for example: $5 - 3 = 2 = -3 + 5$, or rather $5 + (-3) = 2 = -3 + 5$. So how does it not hold true for negative numbers?
Addition is commutative: $$a+b=b+a$$ This holds for any numbers $a,b$, e.g. $a=5$ and $b=-3$, as in your example. Subtraction is not (always) commutative: $$a-b\neq b-a$$ for most $a,b$ (equality holds only when $a=b$). E.g. $5-3=2\neq-2=3-5$.
|arithmetic|
0
Linear complex structures as a submanifold of the general linear group
Given a finite-dimensional real vector space $V$, we can view all the complex structures on it as a subset of ${\rm GL}(V)$ . I wonder if it is a submanifold of ${\rm GL}(V)$ . Moreover, given a symplectic form $\omega$ , do the $\omega$ -compatible complex structures (for simplicity, I discard the requirement that $\omega(v,Jv)>0$ , for it is an open condition on complex structures) form a submanifold? For the first problem I have figured out a solution, but I am not satisfied with it because it is not the usual method for proving submanifold results via the constant rank theorem or transversality. Here is my proof: view the general linear group as matrices. By linear algebra we know that for any complex structure $J$ there is an invertible matrix $P$ such that $PJP^{-1}=\Omega$ , where \begin{align*} \Omega= \begin{pmatrix}0 &-I_{n}\\ I_{n}&0\end{pmatrix} \end{align*} Let ${\rm GL}(V)$ act on itself by conjugation; then the set of complex structures is a closed orbit of this action, and a theorem on Lie group actions then shows that such an orbit is a submanifold.
Oh, I have solved it. Consider the skew-symmetric matrices of the form \begin{pmatrix} A& B\\ -B& A \end{pmatrix} where $A$ is skew-symmetric and $B$ is symmetric. It is easy to show that the set of invertible matrices $P$ such that $P^{-1}\Omega P$ is a skew-symmetric matrix of the above form is a submanifold; denote it by $H$ . Then $H^{-1}/{\rm GL}(V)_{\Omega}$ (which is a submanifold of the complex structures by the quotient manifold theorem) is the set of $\omega$ -compatible complex structures.
|smooth-manifolds|complex-geometry|symplectic-geometry|symplectic-linear-algebra|
0
Determine position of sound source from arrival times of a blip
The setup is as follows: you have three sound receivers (microphones) at known locations $P_1(x_1,y_1), P_2(x_2,y_2), P_3(x_3,y_3)$ that all lie in a plane. A sound source in the same plane produces a blip (a short, high-pitched sound made by an electronic device) at an unknown time $t_0$ . The blip is received at the three receivers at times $t_1, t_2, t_3$ respectively. Use this information to determine the location of the sound source. It is assumed that the speed of sound is known. My attempt: if $c$ is the speed of sound, and if $d_i$ is the distance travelled by sound from the source to the $i$ -th receiving station, then $$ d_i = c \, (t_i - t_0) = a_i - k, $$ where $a_i = c \, t_i$ is known for $i=1,2,3$ and $k = c \, t_0$ is unknown. On the other hand, we have for each of the three stations: $d_i^2 = (P - P_i) \cdot(P - P_i)$ . There are three unknowns here, namely $k , P_x , P_y$ . Now it is a matter of solving these three quadratic equations for the three unknowns. These equations are quadratic, though, so I am not sure of the cleanest way to solve them.
I think this is similar to your idea, but using the concept of circles. Since we know the distance of the source from each receiver, we can write the equations of three circles such that each receiver is the centre and the distance to the source is the radius. Finding their radical centre would then be our answer. I am not sure, so please feel free to let me know if I went wrong anywhere.
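The asker's own setup can be carried through numerically: subtracting station 1's range equation from the other two cancels the quadratic terms, leaving two linear equations in $(P_x, P_y, k)$, and substituting back gives a quadratic in $k$. This Python sketch (the function name, tolerances, and test geometry are my own invention, not from the question) implements that plan:

```python
import numpy as np

def locate_source(receivers, times, c=343.0):
    """Recover the planar source position and emission time from absolute
    arrival times at three receivers.  Subtracting station 1's equation
    |P - P_i|^2 = (a_i - k)^2 from the others linearises the system in
    (x, y, k) with k = c*t0; substituting back gives a quadratic in k.
    Returns a list of (x, y, t0) candidates."""
    P = np.asarray(receivers, float)
    a = c * np.asarray(times, float)              # a_i = c * t_i
    A  = 2.0 * (P[1:] - P[0])                     # 2x2 system for (x, y)
    b0 = (P[1:]**2).sum(1) - (P[0]**2).sum() - (a[1:]**2 - a[0]**2)
    b1 = 2.0 * (a[1:] - a[0])
    p = np.linalg.solve(A, b0)                    # (x, y) = p + q * k
    q = np.linalg.solve(A, b1)
    d = p - P[0]
    # Station 1's equation becomes (|q|^2 - 1) k^2 + 2(d.q + a_1) k + |d|^2 - a_1^2 = 0.
    coeffs = [q @ q - 1.0, 2.0 * (d @ q + a[0]), d @ d - a[0]**2]
    sols = []
    for k in np.roots(coeffs):
        if abs(k.imag) < 1e-9 and k.real <= a.min() + 1e-9:  # need all d_i >= 0
            xy = p + q * k.real
            sols.append((xy[0], xy[1], k.real / c))
    return sols
```

With receivers at $(0,0)$, $(10,0)$, $(0,10)$ and a source at $(1,2)$ emitting at $t_0=0.05\,$s, the true position appears among the returned candidates; a second spurious root of the quadratic may survive the admissibility test, in which case a fourth receiver would disambiguate.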
|geometry|analytic-geometry|quadrics|
0