title: string
question_body: string
answer_body: string
tags: string
accepted: int64
Smallest prime factor of $\lfloor e\uparrow e\uparrow e\uparrow e\rfloor$?
What is the smallest prime factor of $$\lfloor e\uparrow e\uparrow e\uparrow e\rfloor?$$ To get this number, start with $1$ and apply the $\exp$-function four times, then take the integer part. This is an enormous number with $$1\ 656\ 521$$ digits (note that this digit count happens to be a prime number!). I was curious and, for fun, wanted to determine the smallest prime factor, expecting that I would soon find one. But it turned out that the number has no prime factor below $10^9$. What is the smallest prime factor of the above number? Can we do any better than trial division? A primality test will take long for such a large number, and even the Pollard rho method is slow (at least with PARI/GP). I would like to check the number with PFGW or with yafu, but I do not know how to copy such a large number so that yafu or PFGW can read it. And maybe this number has already been checked by someone. Can anyone give a link or help to find a factor?
You can pass large numbers to yafu, ecm, and pfgw by using a file input and filling the file with the mountains of digits. It's a bit different for each program, but can be done if you're persistent. I've taken the liberty of doing a bit of testing on my own, to see if I couldn't find the smallest factor myself: I've done trial division up to $10^{10}$, no factors found. Yafu seems to generally choke on a number so large; even after disabling the Fermat factorization steps and rho steps, it seems to be hung on an internal step that doesn't scale well to millions of digits, which is kind of to be expected. It may eventually get through whatever it's hung on, but we really just want to run ECM, so I went ahead and ran that directly instead. I've completed around 50 ECM curves at B1 level 10000, which lets us know that, if our assumptions are correct, we would likely have found a factor already if it were under 20 digits, but we're not certain yet. We're about 99.9% sure that there's no factor unde
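The digit count quoted in the question is easy to double-check. The sketch below is my own addition (plain Python; double precision suffices because the inner tower $e^{e^e}\approx 3.8\times 10^6$ enters only through one multiplication), and it also re-verifies that the digit count itself is prime:

```python
import math

# Number of digits of floor(e^(e^(e^e))): its log10 is exp(exp(e)) * log10(e),
# and exp(exp(e)) ~ 3.8e6 fits comfortably in a double.
log10_tower = math.exp(math.exp(math.e)) * math.log10(math.e)
digits = int(log10_tower) + 1
print(digits)  # 1656521

def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n); fine for n of this size."""
    if n < 2:
        return False
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return False
    return True

print(is_prime(digits))  # True, matching the remark in the question
```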
|number-theory|prime-numbers|math-software|big-numbers|
1
Generating function for binary strings not containing $0110$ or $11010$ as a substring
Find the generating function for the binary strings that do not contain $0110$ or $11010$ as a substring. I am familiar with proofs similar to this post, where the 'symbolic method' is used for counting combinatorial objects. As such, I would be able to find generating functions for the binary strings not containing $0110$ as a substring and for the binary strings not containing $11010$ as a substring. But I'm having a hard time finding the generating function for the binary strings not containing either of these as substrings. I have tried to consider where they overlap. Perhaps the symbolic method as in the linked post is not applicable and another approach is necessary, but I'm not sure what exactly. Any help is appreciated. Thanks in advance.
This answer is based upon the Goulden-Jackson Cluster Method. We consider the set of words of length $n\geq 0$ built from a binary alphabet $\mathcal{V}=\{0,1\}$ and the set $$B=\{0110,11010\}$$ of bad words, which are not allowed to be part of the words we are looking for. We derive a generating function $A(z)$ with the coefficient of $z^n$ being the number of wanted words of length $n$. According to the paper (p.7) the generating function $A(z)$ is \begin{align*} \color{blue}{A(z)=\frac{1}{1-dz-\text{weight}(\mathcal{C})}}\tag{1} \end{align*} with $d=|\mathcal{V}|=2$, the size of the alphabet, and $\mathcal{C}$ is the weight-numerator of bad words with \begin{align*} \color{blue}{\text{weight}(\mathcal{C})=\text{weight}(\mathcal{C}[0110])+\text{weight}(\mathcal{C}[11010])}\tag{2} \end{align*} We calculate according to the paper \begin{align*} \text{weight}(\mathcal{C}[0110])&=-z^4-z^3\cdot\text{weight}(\mathcal{C}[0110])-z\cdot\text{weight}(\mathcal{C}[11010]) \end{align*}
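Whatever $A(z)$ the cluster equations produce, its first coefficients can be cross-checked by brute force for small $n$. This enumeration sketch is my own addition for validation only, not part of the Goulden-Jackson computation:

```python
from itertools import product

BAD = ("0110", "11010")

def count_good(n: int) -> int:
    """Count binary strings of length n containing no word from BAD."""
    return sum(
        1
        for bits in product("01", repeat=n)
        if not any(b in "".join(bits) for b in BAD)
    )

print([count_good(n) for n in range(7)])  # [1, 2, 4, 8, 15, 27, 49]
```

For instance at $n=6$ there are $12$ strings containing $0110$, $4$ containing $11010$, and one string ($011010$) containing both, so $64-15=49$ good words.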
|combinatorics|generating-functions|regular-expressions|
0
Non-negative map is locally uniformly continuous on the interior of a cone under hypothesis
First of all, I'm sorry if I used the wrong tags for this question; please tell me if you want me to change them. During a reading of a paper, I came across a lemma that is useful for my research (I study the characteristics of a certain function in algebraic geometry) and I tried to prove it without success. Here it is (Lemme 2.7 from B. Lehmann's paper "Volume type functions for numerical cycle classes"): Let $V$ be a finite dimensional $\mathbb Q$-vector space and let $C \subset V$ be a salient full-dimensional closed convex cone (i.e. it is a convex cone such that $C - C = V$ and for every $v \in C$, we have $-v \notin C$). Suppose that $f : V \to \mathbb R_{\geq 0}$ is a function satisfying: $f(e) > 0$ for any $e \in C^{\text{int}}$; there is some constant $c > 0$ so that $f(me) = m^cf(e)$ for any $m \in \mathbb Q_{>0}$ and $e \in C$; and for every $v,e \in C^{\text{int}}$ we have $f(v+e) \geq f(v)$. Then $f$ is locally uniformly continuous on $C^{\text{int}}$. All I could ob
This answer is somewhat verbose; it might help to just draw the corresponding picture for yourself. We are assuming that $V=\mathbb{Q}^d$ with the subspace topology inherited from the Euclidean topology on $\mathbb{R}^d$ (the OP clarified in the comments that this is the setting of interest to them). Furthermore, we will assume that $d\geq 2$ (the case $d=1$ is trivial as $C$ would be a half-space and $f(x) = \vert x\vert^m f(e)$ for $x\in C^\mathrm{int}$ and $e=1$ if $C=[0,\infty)$ and $e=-1$ if $C=(-\infty,0]$. Clearly $f$ is locally uniformly continuous). In this answer, we denote by $B(x,r)=\{v\in \mathbb{Q}^d \ : \ \vert v-x\vert < r\}$ the ball in $\mathbb{Q}^d$ of radius $r$ centered at $x$. We want to prove that the map $f$ is locally uniformly bounded on $C^\mathrm{int}$. The only thing which is of quantitative nature in our assumption is the requirement that $$ f(c x) = c^m f(x) $$ for all $c\in \mathbb{Q}_{>0}$ and all $x\in C$. We would like to make use of this to estimate th
|convex-geometry|
1
Solve $\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$
In one of the final problems of the MIT integration bee for this year, $$I=\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$$ was one of the given problems. My attempt was to let $u=e^x-1$ to get $$I=\int_0^\infty\frac{\ln(2u+1)}{u(u+1)}du=\int_0^\infty\frac{\ln(x+1)}{x(x+2)}dx$$ I don't know whether I would be right, but I had a feeling this was a dead end. Turning the original integral into a geometric series doesn't seem promising either. How should I solve this? Note: These problems are solved in 5 minutes, so please come up with a solution that can be done in such a time limit.
SOLUTION 1 (Integration Bee Style) Let $x=-\ln(1-u)$ . Then $$ \begin{align} \mathcal{I}&:= \int_{0}^{\infty}\frac{\ln\left(2e^{x}-1\right)}{e^{x}-1}dx \\ &= \int_{0}^{1}\frac{\ln\left(2e^{-\ln\left(1-u\right)}-1\right)}{e^{-\ln\left(1-u\right)}-1}\cdot\frac{1}{1-u}du \\ &= 2\int_{0}^{1}\frac{\operatorname{artanh}u}{u}du \\ &= 2\int_{0}^{1}\sum_{n=0}^{\infty}\frac{u^{2n}}{2n+1}du \\ &= \sum_{n=0}^{\infty}\frac{2}{2n+1}\int_{0}^{1}u^{2n}du \\ &= \sum_{n=0}^{\infty}\frac{2}{\left(2n+1\right)^{2}} \\ &= 2\left(\sum_{n=1}^{\infty}\frac{1}{n^{2}}-\sum_{k=1}^{\infty}\frac{1}{4k^{2}}\right) \\ &= 2\left(\frac{\pi^{2}}{6}-\frac{\pi^{2}}{24}\right) \\ &= \frac{\pi^{2}}{4}\,.\\ \end{align} $$ We finish with $$ \bbox[#FCFFE7,border:5px dotted#639E00,10px]{\int_{0}^{\infty}\frac{\ln\left(2e^{x}-1\right)}{e^{x}-1}dx=\frac{\pi^{2}}{4}} $$ and we're done! SOLUTION 2 (Contour Integration) Here is another solution that doesn't use a series evaluation, but uses Euclidean geometry and contour integrals.
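As a hedged numeric check of the closed form (my addition, not part of either solution), a simple midpoint rule reproduces $\pi^2/4\approx 2.4674$:

```python
import math

def integrand(x: float) -> float:
    return math.log(2.0 * math.exp(x) - 1.0) / (math.exp(x) - 1.0)

# Midpoint rule on [0, 40]: the integrand tends to 2 as x -> 0+ and decays
# like (x + log 2) * e^{-x}, so the tail beyond 40 is around 1e-16.
N = 200_000
h = 40.0 / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))

print(abs(approx - math.pi**2 / 4) < 1e-5)  # True
```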
|calculus|integration|contest-math|
0
Connection between statistical and functional dependence of random variables
For simplicity lets consider finite probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and two random variables $\xi, \eta$ on it. I wonder is there any connection between statistical and functional dependence of random variables? For example, if I have $\xi = f\circ \eta$ for some $f$ , does it imply that $\xi$ and $\eta$ statistically dependent and vice versa ? Or, if they are functionally independent, does it imply that they are statistically independent and vice versa?
If $X$ is not constant, the functional dependence implies the dependence of $X$ and $f(X)$. Indeed, the following $$\mathbb P (X \le a, f(X) \le b)=\mathbb P \big (X \in (-\infty,a], X \in f^{-1}(-\infty,b] \big )=\\\mathbb P \big (X \in (-\infty,a]\big ) \times\mathbb P \big ( X \in f^{-1}(-\infty,b]\big )$$ cannot hold for all $a,b \in \mathbb R$ unless $X$ is a constant (see here). However, the dependence of $X$ and $Y$ does not imply that there is a measurable function $f$ such that $Y=f(X).$ Let $$(X,Y) \sim \mathcal N \left (0, \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \right ). $$ Then, $Y=\text{sgn}(\rho)X$ if and only if $\rho=\pm1$. For $0<|\rho|<1$, $X$ and $Y$ are dependent, but they are functionally independent, because there is no joint pdf for the pair $(X,f(X))$, whereas $(X,Y)$ has a joint pdf for $|\rho|<1$.
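A tiny finite-space illustration of the first claim (my own example, with $X$ and $f(X)$ both non-constant): exact probabilities show independence fails.

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}; Y = f(X) = X**2. Functional dependence, and indeed
# X and Y are statistically dependent: compare P(X=1, Y=1) with P(X=1)P(Y=1).
space = [-1, 0, 1]
p = Fraction(1, 3)  # probability of each outcome

p_joint = sum(p for x in space if x == 1 and x * x == 1)  # P(X=1, Y=1)
p_x = sum(p for x in space if x == 1)                     # P(X=1)
p_y = sum(p for x in space if x * x == 1)                 # P(Y=1)

print(p_joint, p_x * p_y)  # 1/3 2/9 -> not equal, so X and Y are dependent
```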
|probability|probability-theory|statistics|random-variables|
1
Finite map of local rings $R \to A$ with $R$ a DVR. When is target $A$ a DVR, too?
Let $R$ be a discrete valuation ring with field of fractions $K$, and $\phi:R \to A$ a finite, local (i.e., $A$ dominates $R$ in the sense that its maximal ideal contains the image of the maximal ideal of $R$) map of local rings. Edit: As Giorgio Genovesi pointed out, essentially the "interesting" case appears when $\phi$ is in addition assumed to be injective, i.e., $R$ is a subring of $A$. Let's assume this in the following. Question: What are the weakest conditions that assure that $A$ is a discrete valuation ring too? Clearly, $A$ necessarily has to be a domain, therefore we can form the field of fractions $L:=\text{Frac}(A)$. As Hagen Knaf explained in the comments below, this does not necessarily imply that $B$, the integral closure of $R$ in $L$ - which is a DVR and which contains $A$ by construction due to finiteness of $A$ over $R$ - is itself finite over $R$. So we could reformulate the question approaching it "from the other side" (...hope that it does not warp too strongly the flavour of i
This is not an answer to the original question but to the question on the existence of intermediate rings $A$ posted in a comment: Assume that $L/K$ is separable and that the DVR $B$ is the integral closure of $R$. Let $\theta$ be a primitive element of $L/K$. Since for every $x\in L$ either $x$ or $x^{-1}$ are in $B$, we can assume $\theta\in B$. Now either $\theta\in B^\times$ or $\theta+1\in B^\times$. Since $\theta+1$ is a primitive element too, we have shown that there always exists a unit of $B$ which is a primitive element of $L/K$. Case 1: Assume that the extension of residue fields $B/m_B\supset R/m_R$ is proper. Then for every $s\in m_B$ we have $R[s\theta]\neq B$, since otherwise $R[s\theta]/m_B=R/m_R$. Obviously $L=\mathrm{Frac}(R[s\theta])$. Case 2: Assume that $m_RB\neq m_B$. Let $m_B=tB$; then $m_RB=t^{e}B$ for some $e>1$. Then for every $k\geq e$ we have $R[t^k\theta]\neq B$. Otherwise $ t=r_0+r_1(t^k\theta)+\ldots +r_n(t^k\theta)^n $ for some $r_i\in R$. Con
|abstract-algebra|ring-theory|commutative-algebra|
1
Prove $\left|\begin{smallmatrix}a&b-c&c+b\\a+c&b&c-a\\a-b&b+a&c\end{smallmatrix}\right|=(a+b+c)(a^2+b^2+c^2)$ without direct evaluation
Prove that $\begin{vmatrix}a&b-c&c+b \\ a+c&b&c-a\\ a-b&b+a&c \end{vmatrix}=(a+b+c)(a^2+b^2+c^2)$ without direct evaluation. My attempt is as follows:- $$R_1\rightarrow R_1-R_2$$ $$R_2\rightarrow R_2-R_3$$ $$\begin{vmatrix}-c&-c&b+a \\ c+b&-a&-a\\ a-b&b+a&c \end{vmatrix}$$ $$C_1\rightarrow C_1-C_2$$ $$C_2\rightarrow C_2-C_3$$ $$\begin{vmatrix}0&-c-b-a&b+a \\ a+b+c&0&-a\\ -2b&b+a-c&c \end{vmatrix}$$ $$R_3\rightarrow R_3+R_1$$ $$\begin{vmatrix}0&-(a+b+c)&b+a \\ a+b+c&0&-a\\ -2b&-2c&a+b+c \end{vmatrix}$$ $$R_1\rightarrow R_1+R_2$$ $$\begin{vmatrix}a+b+c&-(a+b+c)&b \\ a+b+c&0&-a\\ -2b&-2c&a+b+c \end{vmatrix}$$ Looks like we can't reduce it anymore, so I expanded: Expanding with respect to the second row as there is one zero in it $$-(a+b+c)(-(a+b+c)^2+2bc)+a(-2c(a+b+c)-2b(a+b+c))$$ $$(a+b+c)(a^2+b^2+c^2+2ab+2bc+2ca-2bc)+(a+b+c)(-2ca-2ba)$$ $$(a+b+c)(a^2+b^2+c^2)$$ I got the result at the end but is it a good way? Honestly I am not satisfied with my way, is there any set of transformations
By Gaussian triangularization, subtracting suitable multiples of the first row from rows 2 and 3 and then of row 2 from row 3, one obtains for $a \neq 0$, $c \neq 0$, $a-b+c \neq 0$ $$mt=\left( \begin{array}{ccc} a & b-c & b+c \\ 0 & \frac{a c-b c+c^2}{a} & \frac{-a^2-a b-b c-c^2}{a} \\ 0 & 0 & \frac{(a+b+c) \left(a^2+b^2+c^2\right)}{c (a-b+c)} \\ \end{array} \right)$$ For $a=0$, rotate $b$ or $c$ into position $1,1$. Even the case $a-b+c=0$, to be handled by rotations too, yields the determinant $$a= b-c:\quad \det(m)= 4 b (b^2 - b c + c^2),$$ which has the required form $$a= b-c:\quad (a+b+c)(a^2+b^2+c^2) = 4 b \left(b^2-b c+c^2\right).$$
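Independently of the triangularization, the identity itself can be sanity-checked exactly over random integers. This is my own verification sketch (not a proof):

```python
from random import randint

def det3(m):
    """3x3 determinant by cofactor expansion (exact over the integers)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for _ in range(100):
    a, b, c = (randint(-50, 50) for _ in range(3))
    m = [[a, b - c, c + b],
         [a + c, b, c - a],
         [a - b, b + a, c]]
    assert det3(m) == (a + b + c) * (a**2 + b**2 + c**2)
print("identity holds on 100 random integer triples")
```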
|matrices|determinant|
0
How to compute $\lim_{x\to \infty} x(a^{\frac{1}{x}}-1)$?
I'm supposed to compute it by using some of the following limits somehow: $$\lim_{x\to 0}(1+x)^{\frac{1}{x}}=\lim_{x\to \infty}\left(1+\frac{1}{x}\right)^{x}$$ But I can't find how to make this limit appear in the calculation. Can someone give me a hint?
I just thought of writing an answer which would use only the limits you had mentioned: The given limit, as has been mentioned in my comment and also later in the answer posted above, is equivalent to $$\lim\limits_{y\to 0^+} \frac{a^y-1}{y}.$$ I am just reiterating how this can be evaluated from first principles. Now, $a^y = e^{y\ln (a)}$. Thus the limit turns out to be $$ \ln(a)\lim\limits_{y\to 0^+} \frac{e^{y\ln(a)} - 1}{y\ln(a)}.$$ Putting $h= y\ln(a)$, the above limit is simply $$\ln(a) \lim\limits_{h\to 0^+} \frac{e^h-1}{h}.$$ The limit above is easily found from first principles, which you might know. Yet for the sake of completeness, I will rewrite it below: $\textbf{To find $\lim\limits_{h\to 0^+}\frac{e^h-1}{h}$}$. Put $e^h -1 = t$. Then, $e^h = 1+t$ and $h = \ln (1+t)$. Thus, the limit turns out to be $$\lim\limits_{t\to 0^+} \frac{t}{\ln(1+t)}=\lim\limits_{t\to 0^+} \frac{1}{\ln\big((1+t)^{\frac{1}{t}}\big)} = \frac{1}{\ln(e)}=1,$$ where it is required to assume that the limit can be taken inside a continuous function.
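Numerically the limit is $\ln a$, matching the derivation. A small sanity check I added, using `math.expm1` to avoid the catastrophic cancellation in $a^{1/x}-1$ when $a^{1/x}$ is extremely close to $1$:

```python
import math

a = 5.0
# x*(a**(1/x) - 1) = x*expm1(log(a)/x); expm1 is accurate near 0.
for x in (1e2, 1e4, 1e6):
    val = x * math.expm1(math.log(a) / x)
    print(x, val)  # converges to ln 5 ~ 1.6094 as x grows

print(math.log(a))  # the limit itself
```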
|limits|limits-without-lhopital|
1
Finding Dimension of Preimage of Subspace under Surjective Linear Transformation: A Linear Algebra Problem
I am currently working on solving a previous year Linear Algebra exam question. The question involves vector spaces $V_1$ and $V_2$ over the same field $F$, with a surjective linear transformation $T: V_1 \rightarrow V_2$. Given that $W$ is a subspace of $V_2$ with dimension 2, and $dim V_1 = 7$ and $dim V_2 = 5$, the task is to determine $dim T^{-1}(W)$. My Approach: I plan to start by examining the restriction of the linear transformation $T_1: T^{-1}(W) \rightarrow V_2$. Since $T$ is onto, we know that $T(T^{-1}(W)) = W$. To proceed, I intend to apply the rank-nullity theorem for $T_1$, which states that $dim(null(T_1)) + dim(Im(T_1)) = dim(T^{-1}(W))$. However, I'm currently unsure how to calculate $dim(null(T_1))$ and $dim(Im(T_1))$. Any guidance on this problem would be greatly appreciated. Thank you in advance.
HINT: being surjective means that $\dim(\operatorname{null}T)=2$ as $\dim(\operatorname{img}T)=\dim V_2=5$. Now, as $T$ is surjective, the map $\tilde T:V_1/\operatorname{null}T \to V_2$ given by $\tilde T(v+\operatorname{null}T)=Tv$ is an isomorphism. Thus $H:=\tilde T^{-1}(W)$ has dimension $2$. Observe that $T=\tilde T \circ \pi$, where $\pi:V_1\to V_1/(\operatorname{null}T),\, v\mapsto v+\operatorname{null}T$ is the canonical projection, therefore $T^{-1}(W)=\pi^{-1}(H)$. Can you follow from here? As $\dim H=2$ there are $a,b\in V_1$ such that $H=\operatorname{span}\{a+\operatorname{null}T,b+\operatorname{null}T\}$. Now, as $\dim (\operatorname{null}T)=2$ also, there are $c,d\in V_1$ such that $\operatorname{null}T=\operatorname{span}\{c,d\}$. Thus $\pi^{-1}(H)=\operatorname{span}\{a,b,c,d\}$ so $\dim T^{-1}(W)=4$.∎
|linear-algebra|vector-spaces|linear-transformations|matrix-rank|
0
Prove $\left|\begin{smallmatrix}a&b-c&c+b\\a+c&b&c-a\\a-b&b+a&c\end{smallmatrix}\right|=(a+b+c)(a^2+b^2+c^2)$ without direct evaluation
Prove that $\begin{vmatrix}a&b-c&c+b \\ a+c&b&c-a\\ a-b&b+a&c \end{vmatrix}=(a+b+c)(a^2+b^2+c^2)$ without direct evaluation. My attempt is as follows:- $$R_1\rightarrow R_1-R_2$$ $$R_2\rightarrow R_2-R_3$$ $$\begin{vmatrix}-c&-c&b+a \\ c+b&-a&-a\\ a-b&b+a&c \end{vmatrix}$$ $$C_1\rightarrow C_1-C_2$$ $$C_2\rightarrow C_2-C_3$$ $$\begin{vmatrix}0&-c-b-a&b+a \\ a+b+c&0&-a\\ -2b&b+a-c&c \end{vmatrix}$$ $$R_3\rightarrow R_3+R_1$$ $$\begin{vmatrix}0&-(a+b+c)&b+a \\ a+b+c&0&-a\\ -2b&-2c&a+b+c \end{vmatrix}$$ $$R_1\rightarrow R_1+R_2$$ $$\begin{vmatrix}a+b+c&-(a+b+c)&b \\ a+b+c&0&-a\\ -2b&-2c&a+b+c \end{vmatrix}$$ Looks like we can't reduce it anymore, so I expanded: Expanding with respect to the second row as there is one zero in it $$-(a+b+c)(-(a+b+c)^2+2bc)+a(-2c(a+b+c)-2b(a+b+c))$$ $$(a+b+c)(a^2+b^2+c^2+2ab+2bc+2ca-2bc)+(a+b+c)(-2ca-2ba)$$ $$(a+b+c)(a^2+b^2+c^2)$$ I got the result at the end but is it a good way? Honestly I am not satisfied with my way, is there any set of transformations
Let $$ p=\pmatrix{a\\ b\\ c}=ku, \quad e=\pmatrix{1\\ 1\\ 1} \quad\text{and}\quad [p]_\times=\pmatrix{0&-c&b\\ c&0&-a\\ -b&a&0}, $$ where $k=\sqrt{a^2+b^2+c^2}$, $u$ is a unit vector and $[p]_\times$ denotes the cross product matrix such that $[p]_\times q=p\times q$ for every vector $q$. The transpose of the matrix in the identity is equal to $$ A=pe^T-[p]_\times=kue^T-k[u]_\times\ , $$ and transposition does not change the determinant. $A$ is the matrix representation of the linear operator $$ Tx=\langle e,x\rangle ku-ku\times x\tag{1} $$ with respect to the standard basis of $\mathbb R^3$. Extend $u$ to an orthonormal basis $\{u,v,w\}$ of $\mathbb R^3$ such that $u\times v=w$. By $(1)$, we have \begin{align*} Tu&=\langle e,u\rangle ku,\\ Tv&=\langle e,v\rangle ku-kw,\\ Tw&=\langle e,w\rangle ku+kv.\\ \end{align*} Therefore, with respect to the ordered basis $\{u,v,w\}$, $T$ has the matrix representation $$ B=\left(\begin{array}{c|cc}k\langle e,u\rangle&k\langle e,v\rangle&k\langle e,w\rangle\\ \hline 0&0&k\\ 0&-k&0 \end{array}\right). $$ It fol
|matrices|determinant|
0
A picture geometry problem
My approach: I know only one way to relate the inradius and sides of a triangle, which is $\text{Area of triangle} = \text{(inradius)(semi-perimeter)}$. I am trying to get $RS$ and the altitude of $\Delta AMB$ in terms of $r$, which equal $b$ and $a/2$ respectively. Tell me if there's a better way of doing this question.
For convenience, define $p = a/2$ and $q = b/2$ . Furthermore, let $MS = h$ . Then $$|\triangle MBS| = hp = r\left(p + \sqrt{p^2 + h^2}\right),$$ hence $$h = \frac{2p^2 r}{p^2 - r^2}, \tag{1}$$ and $$MB = \sqrt{p^2 + h^2} = \frac{p(p^2 + r^2)}{p^2 - r^2}. \tag{2}$$ Similarly, by substituting $3r$ for $r$ , $$b - h = MR = \frac{2p^2 (3r)}{p^2 - (3r)^2}, \quad MA = \frac{p(p^2 + (3r)^2)}{p^2 - (3r)^2}. \tag{3}$$ Hence $$|\triangle AMB| = pq = (2r)\frac{MA + MB + MR + MS}{2} = \frac{2pr(p^2 - 3r^2)}{(p-3r)(p-r)}. \tag{4}$$ Again letting $q = (MR + MS)/2$ , we obtain $$pr(p^2 - 3r^2)(p^2 - 4pr - 3r^2) = 0, \tag{5}$$ and discarding obvious extraneous roots, we have $$p = r \sqrt{3}, \quad p = r(2 + \sqrt{7}). \tag{6}$$ But we can also eliminate $p = r \sqrt{3}$ since this would give $|\triangle AMB| = 0$ in $(4)$ . The rest is straightforward and left as an exercise; we obtain $$\frac{a}{b} = \frac{p}{q} = \frac{3}{4}.$$
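Plugging the surviving root of $(6)$ back through $(1)$ and $(3)$ confirms the ratio numerically. This is a check I added; $r=1$ is taken without loss of generality:

```python
import math

r = 1.0                          # inradius of the smaller incircle
p = r * (2.0 + math.sqrt(7.0))   # the surviving root from (6)

h = 2.0 * p * p * r / (p * p - r * r)                  # MS, from (1)
MR = 2.0 * p * p * (3.0 * r) / (p * p - 9.0 * r * r)   # from (3), with 3r
q = (MR + h) / 2.0

print(p / q)  # 0.75 up to rounding, i.e. a/b = 3/4
```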
|euclidean-geometry|
1
Existence of extremal value
Determine the minimum and maximum values of the function $\frac{x^4}{(x-1)(x-3)^3}$. By equating the first derivative to $0$, it's fairly simple to deduce that the extreme points are $x=0,\frac{6}{5}$. However, to determine minimum or maximum, we need to analyse the 2nd derivative as well, which seems to be awfully long. Is there any way to do it simply?
Call your function $f$. Notice that the numerator is always non-negative. Now, in some neighbourhood of $0$ we have $x-1<0$ and $x-3<0$, so the denominator is always non-negative in that neighbourhood. As $f(0)=0$, it must be a minimum. As the only roots of $f'$ are $0$ and $\dfrac{6}{5}$ and it is a continuous function, it must have a constant sign on $\left(1,\dfrac{6}{5}\right)$ and a constant sign on $\left(\dfrac{6}{5},3\right)$. Evaluating, one gets that $f'\left(\dfrac{11}{10}\right)>0$ and $f'(2)<0$, so $f$ increases on $\left(1,\dfrac{6}{5}\right)$ and decreases on $\left(\dfrac{6}{5},3\right)$, from where $f$ attains a maximum at $\dfrac{6}{5}$.
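The sign analysis can be spot-checked numerically (my addition):

```python
def f(x: float) -> float:
    return x**4 / ((x - 1.0) * (x - 3.0) ** 3)

# f vanishes at 0 and is non-negative near 0 (numerator >= 0, denominator > 0
# there), so x = 0 is a local minimum; on (1, 3) the function is negative and
# f(6/5) exceeds its neighbours, consistent with a local maximum.
print(f(0.0))                   # 0.0
print(f(1.1), f(1.2), f(1.3))   # f(1.2) is the largest of the three
```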
|calculus|derivatives|maxima-minima|extreme-value-analysis|
0
Solve $\sqrt{(x-2)(x-3)}+5\sqrt{\dfrac{x-2}{x-3}}=\sqrt{x^2+6x+8}$
Solve $\sqrt{(x-2)(x-3)}+5\sqrt{\dfrac{x-2}{x-3}}=\sqrt{x^2+6x+8}$ \begin{align*} &\Leftrightarrow (\sqrt{(x-2)(x-3)}+5\sqrt{\dfrac{x-2}{x-3}})^2=(\sqrt{x^2+6x+8})^2 \tag{1}\\ &\Leftrightarrow (x-2)(x-3)+\dfrac{25(x-2)}{x-3}+10(x-2)=x^2+6x+8 \tag{2}\\ &\Leftrightarrow x^2-5x+6+\dfrac{25x-50}{x-3}+10x-20=x^2+6x+8 \tag{3}\\ &\Leftrightarrow 5x-14+\dfrac{25x-50}{x-3}=6x+8 \tag{4}\\ &\Leftrightarrow \dfrac{(5x-14)(x-3)}{x-3}+\dfrac{25x-50}{x-3}=6x+8 \tag{5}\\ &\Leftrightarrow 5x^2-17x+42+25x-50=6x^2-18x+8x-24 \tag{6}\\ &\Leftrightarrow x^2-18x-16=0 \tag{7}\\ &\Leftrightarrow x=\dfrac{18 \pm \sqrt{324+64}}{2} = \dfrac{18 \pm 2\sqrt{97}}{2}=9 \pm \sqrt{97} \tag{8} \end{align*} But the answer is $x=8, -2$ . I feel my calculations are correct but clearly they aren't.
From $(5)$ to $(6)$, instead of $$(5x-14)(x-3)=5x^2-17x+42$$ it should be $$(5x-14)(x-3)=5x^2-29x+42$$ This results in the final equation being $$x^2-6x-16=0,$$ with roots $x=8,-2$ as desired. However, from $(1)$ to $(2)$, you have used $\sqrt{(x-2)(x-3)}\sqrt{\dfrac{x-2}{x-3}}=x-2$, which assumes $x-2\geq 0$, so the only solution when $x\geq 2$ is $8$. You should now consider the case $x<2$, substituting $\sqrt{(x-2)(x-3)}\sqrt{\dfrac{x-2}{x-3}}=2-x$.
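A quick numeric check (my addition) confirms $x=8$ and shows why the second case needs care: at $x=-2$ the right-hand side vanishes while the left-hand side does not, so $-2$ does not satisfy the original equation as written.

```python
import math

def lhs(x: float) -> float:
    return (math.sqrt((x - 2.0) * (x - 3.0))
            + 5.0 * math.sqrt((x - 2.0) / (x - 3.0)))

def rhs(x: float) -> float:
    return math.sqrt(x * x + 6.0 * x + 8.0)

print(abs(lhs(8.0) - rhs(8.0)) < 1e-12)  # True: x = 8 checks out exactly
print(lhs(-2.0), rhs(-2.0))              # LHS ~ 8.944 but RHS = 0
```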
|algebra-precalculus|solution-verification|
1
No adjacent empty boxes with PIE
*PIE = inclusion-exclusion How many ways are there to distribute 10 balls into 5 distinct boxes such that no two adjacent boxes are empty? Note: the same question statement. I believe the question statement is equivalent to 'no empty boxes should be adjacent'. I have attempted this with PIE. Let the set $A_i$ represent the arrangements where $(i+1)$ empty boxes are adjacent. The value required to be subtracted should be $|A_1 \cup A_2 \cup A_3 \cup A_4|=\sum{|A_1|} - \sum{|A_1 \cap A_2|} + \sum{|A_1 \cap A_2 \cap A_3|} - 0$ as there is no possible arrangement where $4+1 = 5$ boxes are empty. $$|A_1 \cup A_2 \cup A_3 \cup A_4| = 4\cdot \binom{10+3-1}{3-1} - 3\cdot \binom{10+2-1}{2-1} + 2\cdot \binom{10+1-1}{1-1}$$ Subtracting this from $\binom{14}{4}$ gives $768$ . However, the answer is 771. Clearly, the quantity I'm subtracting is a little larger. What cases am I overcounting while subtracting?
You’re not counting the three cases with four empty bins that form two pairs of adjacent empty bins correctly. The case where all balls are in the central bin is counted twice (in the first term) instead of once. The two cases where all the balls are in one of the outer bins are each counted three times in the first term, twice in the second term and once in the third term, for a total of twice instead of once. This is because you don’t have a coherent setup for inclusion–exclusion. The four conditions are that none of the four pairs of adjacent bins must both be empty. In standard inclusion–exclusion, you then have to consider all possible intersections between these pairs. It’s more efficient, however, to use Möbius inversion on the poset of inadmissible arrangements. Then each adjacent pair is weighted with $-1$, each adjacent triple is weighted with $-(1-1-1)=1$, each adjacent quadruple is weighted with $-(1-1-1-1+1+1)=0$ and the configuration with two adjacent pairs on the
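Brute force confirms the stated total of $771$ (a check I added):

```python
from itertools import product

# Distribute 10 identical balls into 5 distinct boxes so that no two adjacent
# boxes are both empty; enumerate all 5-tuples of counts summing to 10.
count = sum(
    1
    for boxes in product(range(11), repeat=5)
    if sum(boxes) == 10
    and not any(boxes[i] == 0 and boxes[i + 1] == 0 for i in range(4))
)
print(count)  # 771
```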
|combinatorics|inclusion-exclusion|balls-in-bins|
0
Find the normal vector of an ellipsoid given the scale vector?
Each point of a unit sphere is conveniently also its own normal. I am starting with a unit sphere and then multiplying each point by a scale vector to create an ellipsoid. I would like to know the normal vector at each vertex. Is there some simple math to get the normal vector given the unit sphere vector and the scale vector?
Yes. There is. The equation of the ellipsoid after scaling the unit sphere by $a,b,c$ is $ \dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} = 1 $ The gradient vector which is also the normal to the surface of the ellipsoid is given by $ \vec{N} = 2 \left( \dfrac{x}{a^2}, \dfrac{y}{b^2}, \dfrac{z}{c^2} \right) $
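In code this amounts to dividing the unit-sphere point componentwise by the scales and renormalizing. A sketch under the setup above (`u` is the unit-sphere point, `scale` the scale vector; both names are my own):

```python
import math

def ellipsoid_normal(u, scale):
    """Unit normal of the ellipsoid at the vertex obtained by scaling the
    unit-sphere point u componentwise by scale = (a, b, c)."""
    n = [u[i] / scale[i] for i in range(3)]   # gradient direction (x/a^2, ...)
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

# Example: a point on the unit sphere, scale vector (2, 3, 4).
u = [1 / math.sqrt(3)] * 3
n = ellipsoid_normal(u, (2.0, 3.0, 4.0))

# Check: n is orthogonal to the image of any sphere tangent direction t ⟂ u,
# since the surface tangent at the scaled point is (a*t_x, b*t_y, c*t_z).
t = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]
surface_tangent = [2.0 * t[0], 3.0 * t[1], 4.0 * t[2]]
print(sum(n[i] * surface_tangent[i] for i in range(3)))  # ~0
```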
|geometry|vectors|tangent-line|
1
Finding maximum number of pencils with any child in the problem
In a group of children, each child has a certain number of pencils from $1$ to $n$. If the number of children who have $i$ or more pencils is $2^{n-i}$, for $i=1,2,3, \ldots,n$, and the total number of pencils is $511$, find the maximum number of pencils with any child. My attempt:- Number of children with 1 pencil $= 2^{n-1}-2^{n-2}=2^{n-2}$ Number of children with 2 pencils $= 2^{n-2}-2^{n-3}=2^{n-3}$ ... Number of children with $n$ pencils $= 1$ Now, total number of pencils $= 1 \cdot 2^{n-2} + 2 \cdot 2^{n-3} +\dots+ (n \cdot 1) = 511$ I am not able to understand how to solve it from here. As per the solution given by the author (screenshot): The number of children with 1 or more pencils is $2^{n-1}$, (i.e. those children who have one or more pencils) The number of children with 2 or more pencils is $2^{n-2}$ (i.e. those children who have 2 or more pencils) and so on. Thus the total number of pencils that the children have is $$ 2^{n-1}+2^{n-2}+2^{n-3}+ \dots + 2^{n-n} = 2^n - 1 = 511.$$
First, an explanation for the given answer: $2^{n-1}$ children have at least one pencil. Take this one pencil from all these children. You have $2^{n-1}$ pencils now. Earlier, $2^{n-2}$ children had at least two pencils, and now they have at least one pencil. Take this one pencil from these $2^{n-2}$ children. Now you have $2^{n-1}+2^{n-2}$ pencils. Now, $2^{n-3}$ children have at least one pencil (others don't have one). Take one pencil from these students; you have $2^{n-1}+2^{n-2}+2^{n-3}$ pencils now. In the end, when you have taken all the pencils from the children, you will have $2^{n-1}+2^{n-2} +\ldots +2^{n-n}$ pencils, and this is given $511 = 2^9 - 1$. So, $n=9$, and one child can have at most $9$ pencils (the number of children with ten or more pencils would be $2^{n-10} = 2^{9-10} = \tfrac{1}{2}$, which is impossible). A visualisation of the above for $n=4$ (with 511 replaced by 15). The first row represents the students, and each column represents how many pencils they have. From the second row onwards,
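Both counting arguments agree for $n=9$; here is the exact-count version from the OP's attempt, summed directly (my addition):

```python
n = 9

# Children with exactly i pencils: 2^(n-i) - 2^(n-i-1) = 2^(n-i-1) for i < n,
# and exactly one child with n pencils.
exact = {i: 2 ** (n - i - 1) for i in range(1, n)}
exact[n] = 1

total = sum(i * k for i, k in exact.items())
print(total)       # 511
print(max(exact))  # 9, the maximum number of pencils with any child
```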
|sequences-and-series|word-problem|
0
What are the possible surfaces that one can construct from a finite set of triangles?
I am looking for references in discrete differential geometry for a concept I've been interested in. It is very common to approximate smooth surfaces using discrete triangulations. I am interested in the opposite problem. I start off with edges of prescribed lengths, which I can use to form triangles, and I want to know which discrete surfaces I can form from these triangles (I can use a given triangle more than once). This is similar to a tiling problem. I have a finite collection of possible tiles, but instead of trying to form shapes in the plane, I want to know which surfaces I can construct from them. This sounds like something which is probably already studied in discrete differential geometry, but I am not sure what are the relevant terms/names I need to know in order to google some existing works. Has anyone here ever come across this concept? If so, I'd be happy to hear. Thanks!
I took a course and take part in a research project concerning simplicial surfaces. This structure sounds pretty close to your description. However, this research project takes a rather algebraic/combinatorial route when talking about simplicial surfaces rather than a view from differential geometry. For a taste, here is the first introduction of our lecture notes from the course: Background: In this course we study surfaces composed of triangles. The Platonic solids have been studied since antiquity. Among these, the surfaces of three are made up of triangles, namely the tetrahedron, the octahedron, and the icosahedron. These surfaces are our first and most famous examples of surfaces composed of triangles, namely simplicial surfaces. One feature of these three Platonic solids is also that the triangles of these surfaces are congruent to one particular triangle. We call this the control triangle. In this course we will put particular emphasis on simplicial surfaces for wh
|differential-geometry|riemannian-geometry|tiling|discrete-geometry|triangulation|
0
Inverse logarithmic integral for x∈(0,1)
When looking for a suitable representation of the logarithmic integral, $\operatorname{li}(x)=\int_0^x \frac{dt}{\log{t}}$, one can find many texts for $x>1$, which is understandable because of its relation to $\pi(x)$. I am looking for something different. While computing some step responses of a pressure driven system, I stumbled upon a differential equation (assume nonnegative values) $$\frac{dy}{dt}=-a\log{\frac{y}{b}},\quad y(0)\in(0,b),$$ which yields a solution of the form $$y(t)=b\ \operatorname{li}^{-1} \left(-\frac{a}{b}t+\operatorname{li}\left(\frac{y(0)}{b}\right)\right), \quad t\in (0,\infty).$$ Graphically, the solution is the first part of the logarithmic integral $\operatorname{li}(x), x\in(0,1)$ rotated by 90°. For different values of another, undisclosed, parameter the system yields things like $1-e^{-t}$ and $\tanh(t)$, with hypergeometric ${}_2F_1$ behaviour happening in-between. The overall goal is to estimate the $a$ parameter as well as the undisclosed one ( $b$ is known, as it is a part of the experimental setup) from t
Independently of @Тyma Gaidash's sophisticated and nice solution, I think that I have a rather simple solution. It starts from the simple remark that the plot of $y=e^{\text{li}(x)}$ is nice and simple for $x \in (0,1)$. Expanded as a series around $x=1$, we have $$e^{\text{li}(x)-\gamma }=\sum_{n=1}^\infty a_n\,(x-1)^n$$ where the first coefficients are $$\left\{-1,-\frac{1}{2},-\frac{1}{12},-\frac{1}{72},\frac{1}{720}, -\frac{31}{21600},\frac{859}{907200},-\frac{8669}{12700800},\frac {38911}{76204800},-\frac{2703619}{6858432000}\right\}$$ Using power series reversion $$x=1-\sum_{n=1}^\infty b_n\,k^n\qquad \text{where}\qquad k=e^{\text{li}(x)-\gamma }$$ where the first coefficients are $$\left\{1,\frac{1}{2},\frac{5}{12},\frac{31}{72},\frac{361}{720} ,\frac{4537}{7200},\frac{757517}{907200},\frac{2922187}{2540160},\frac{41478457}{25401600},\frac{3255225203}{1371686400}\right\}$$ This gives a much better approximation of $x_{\text{calculated}}$
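To make the reversion concrete, here is a small check I added of the first few quoted coefficients, using the classical series $\operatorname{li}(x)=\gamma+\ln(-\ln x)+\sum_{n\ge1}\frac{(\ln x)^n}{n\cdot n!}$ valid for $0<x<1$ (only the coefficients through $k^4$ are used, so the residual is of order $k^5$):

```python
import math

def k_of_x(x: float) -> float:
    """k = exp(li(x) - gamma) for 0 < x < 1, computed via the series
    li(x) = gamma + ln(-ln x) + sum_{n>=1} (ln x)^n / (n * n!)."""
    u = math.log(x)
    s, term = 0.0, 1.0
    for n in range(1, 30):
        term *= u / n          # term is now u^n / n!
        s += term / n          # accumulate u^n / (n * n!)
    return -u * math.exp(s)

# Reversion with the first four quoted coefficients:
# x ~ 1 - (k + k^2/2 + 5k^3/12 + 31k^4/72)
x = 0.9
k = k_of_x(x)
x_back = 1.0 - (k + k**2 / 2 + 5 * k**3 / 12 + 31 * k**4 / 72)
print(abs(x_back - x) < 1e-4)  # True: the neglected term is O(k^5) ~ 4e-6 here
```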
|special-functions|inverse-function|
1
Effect of rank-one update on the smallest eigenvalue and its eigenvector
Suppose diagonal $D\in \mathbb R^{n\times n}$ with $D\succeq 0, v\in \mathbb R^n,$ and $\alpha>0$ are given. Can we $\textit{exactly}$ identify the smallest eigenvalue of $D+\alpha vv^T$ and its corresponding eigenvector (based on $v$ and given eigenpairs of $D$ )? Even though the method to identify the whole spectrum is given in Eigenvectors of rank one update matrix , I am only interested in finding the smallest eigenvalue and its eigenvector. So, I wonder if something more can be said!
I think the only simple connection that can be shown for the smallest eigenvalue of $D+\alpha vv^T$ is the following: $$\lambda_{\min}(D+\alpha vv^T) \ge \min (d_1,\dots,d_n)=\lambda_{\min}(D).$$ This follows from the fact that the roots of $$f(\lambda) = 1+\alpha \sum_{i=1}^n \frac{v_i^2}{d_i-\lambda}$$ are the eigenvalues of $D+\alpha vv^T$ (special cases of $v_i=0$ or $d_i=d_j$ for some $i,j$ can be managed following this answer). One can see that $f(\lambda)$ tends to $1$ and $+\infty$ as $\lambda \to -\infty$ and $\lambda \to \left (\min ( d_1,\dots,d_n ) \right )^-$, respectively. As $f(\lambda)$ is increasing over this interval, the smallest root of $f(\lambda)$ cannot be smaller than $\min ( d_1,\dots,d_n )$; it can be equal if $v_i=0$ for $i$ with $d_i=\min ( d_1,\dots,d_n )$. This could be alternatively proven using the fact that $\lambda_{\min}(X)$ is increasing in $X \in S^n$ with respect to the order $\succeq $ for any symmetric $D$. In a very specia
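A minimal $2\times 2$ illustration of the bound (my addition; the eigenvalues are computed from the closed-form trace/determinant formula):

```python
import math

# Smallest eigenvalue of D + alpha*v*v^T for a 2x2 example, versus the bound
# lambda_min(D + alpha*v*v^T) >= lambda_min(D).
d1, d2 = 1.0, 2.0
alpha = 1.0
v = (1.0, 1.0)

a11 = d1 + alpha * v[0] * v[0]
a12 = alpha * v[0] * v[1]
a22 = d2 + alpha * v[1] * v[1]

tr, det = a11 + a22, a11 * a22 - a12 * a12
lam_min = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0

print(lam_min, min(d1, d2))  # ~1.382 >= 1.0, as the bound predicts
```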
|linear-algebra|optimization|eigenvalues-eigenvectors|numerical-linear-algebra|generalized-eigenvector|
1
How to solve the equation $3a^4=8a^3-16$?
How to solve the equation $3a^4=8a^3-16$, where $a$ is a positive number? I have tried using the quadratic formula, but got nothing useful. I know the solution is $2$ , but I don't know how to arrive at it. This is part of a larger problem and I need it to solve the whole problem. I hope one of you can help me! Thank you!
$$3a^4-8a^3+16=0$$ $$3a^4-6a^3-2a^3+16=0$$ $$3a^3(a-2)-2(a^3-8)=0$$ $$(a-2)(3a^3-2(a^2+2a+4))=0$$ $$(a-2)(3a^3-2a^2-4a-8)=0$$ $$(a-2)(a^3-8+2a^3-2a^2-4a)=0$$ $$(a-2)(a^3-8+2a(a^2-a-2))=0$$ $$(a-2)^2(a^2+2a+4+2a(a+1))=0$$ $$(a-2)^2(3a^2+4a+4)=0$$ Hence, $a_{1,2}=2$ and $a_{3,4}=\frac{-2\pm2\sqrt2\, i}{3}$ by the quadratic formula.
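As a mechanical cross-check of the factorization (my addition, a small Python sketch):

```python
import cmath

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# (a - 2)^2 * (3a^2 + 4a + 4) should recover 3a^4 - 8a^3 + 16
square = poly_mul([-2, 1], [-2, 1])        # (a - 2)^2
product = poly_mul(square, [4, 4, 3])      # * (3a^2 + 4a + 4)
assert product == [16, 0, 0, -8, 3]        # constant term first: 16 - 8a^3 + 3a^4

# the remaining roots come from 3a^2 + 4a + 4 = 0 via the quadratic formula
r = (-4 + cmath.sqrt(4**2 - 4 * 3 * 4)) / (2 * 3)
assert abs(3 * r**2 + 4 * r + 4) < 1e-12
```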
|polynomials|
0
Union of a open and closed set being closed
I'm new to topology, and I am trying to work out under what conditions the union of a closed set with an open set is closed. Also, I just want to check whether the union of an open set with its complement is open, and if so, why?
Suppose that the union of a closed set with an open set is closed. Since the empty set is closed, its union with every open set is closed. This means that every open set is also closed. Conversely, if every open set is also closed, the union of a closed set with an open set is the union of a closed set with another closed set, and hence it is also closed. Thus a characterization of your property is that every open set is also closed, that is, all open sets are clopen. The topologies with this property are precisely the ones that are derived from partitions of the underlying set. See this answer .
|general-topology|
0
Number of elements in set $S = \{(x,y,z): x,y,z \in \mathbb{Z}, x+2y+3z=42, x,y,z \ge 0\}$ is?
Number of elements in set $S = \{(x,y,z): x,y,z \in \mathbb{Z}, x+2y+3z=42, x,y,z \ge 0\}$ is? My solution: This is equal to the coefficient of $t^{42}$ in $(1+t+t^2+t^3+...+t^{42})(1+t^2+t^4+...+t^{42})(1+t^3+t^6+...+t^{42})$ = $\frac{(1-t^{43})}{1-t} \times \frac{(1-t^{44})}{1-t^2} \times \frac{(1-t^{45})}{1-t^3} $ = $\frac{1} {(1-t)(1-t^2)(1-t^3)}$ since I neglected higher powers of $t$ = $\frac{1} {(1-t)^3(1+t)(1+t+t^2)}$ = $\frac{(1-t)^{-3}} {(1+t)(1+t+t^2)}$ Now I know the coefficient of $x^n$ in $(1-x)^{-r}$ is $\binom {n+r-1}{r-1}$ . But I don't know what to do with the denominator part. Can someone help??
Let $$A=\frac{1} {(1-t)(1-t^2)(1-t^3)}$$ Then we can say that $$A=(1-t)^{-1}(1-t^2)^{-1}(1-t^3)^{-1}$$ and it is well known that $\displaystyle{(1-x)^{-n}=1+\binom{n}{1}x+\binom{n+1}{2}x^2 +\binom{n+2}{3}x^3+\cdots}$ where $n$ is a positive integer. I hope you can continue from here.
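As a sanity check (my addition, brute force in Python), a direct count and the coefficient extracted from the generating function agree:

```python
N = 42

# Direct count: choose z and y, then x = N - 2y - 3z >= 0 is forced
brute = sum(1
            for z in range(N // 3 + 1)
            for y in range((N - 3 * z) // 2 + 1))

# Coefficient of t^N in 1/((1-t)(1-t^2)(1-t^3)), truncated at degree N
coeff = [0] * (N + 1)
coeff[0] = 1
for step in (1, 2, 3):            # multiply the series by 1/(1 - t^step)
    for k in range(step, N + 1):
        coeff[k] += coeff[k - step]

assert brute == coeff[N] == 169
```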
|binomial-coefficients|generating-functions|
0
Why do we have different set theories
Why do we have different set theories like Zermelo Fraenkel Set Theory , Von Neumann Bernays Godel Set Theory and Morse Kelly set theory?
When we do calculus or algebra, we are usually considering a few basic axioms as true in our "universe". These axioms are fundamental: they are assumed rather than proven. The axiom of choice is one such axiom. So when I say I am using the Zermelo-Fraenkel framework, it means I am accepting a set of such axioms as universal law for the mathematics I am going to perform. If I say I use $\mathsf{ZFC}$ , it means I am also including the axiom of choice in addition to the $\mathsf{ZF}$ axioms. It is sometimes necessary to specify our universe: it can directly influence the theorems that we can use.
|set-theory|
1
$B(X^{**})$ is the $w^*$-closure of $B(X)$ in $X^{**}$
I am reading Bollobás' Linear Analysis. Chapter 8. Theorem 6., as the title says: $B(X^{**})$ is the $w^*$ -closure of $B(X)$ in $X^{**}$ The proof starts by saying that i) $B(X^{**})$ is $w^*$ -closed, and ii) $B \subset B(X^{**})$ where $B$ is the $w^*$ -closure of $B(X)$ in $X^{**}$ . Then, suppose that there is $x_0^{**} \in B(X^{**}) \setminus B$ . By the separation theorem, there is a bounded linear functional $x_0^{***}$ on $X^{**}$ such that $$\langle x_0^{***}, b \rangle \leq 1 < \langle x_0^{***}, x_0^{**} \rangle$$ for every $b \in B$ . By considering $x_0^{*}$ , the restriction of $x_0^{***}$ onto the subspace $X$ of $X^{**}$ , we see that $\langle x_0^{*}, x \rangle \leq 1$ for every $x \in B(X)$ , thus $\| x_0^{*} \| \leq 1$ . This far everything is clear. To finish up the proof, the book argues as follows: $$1 \geq \langle x_0^{*}, x_0^{**} \rangle = \langle x_0^{***}, x_0^{**} \rangle > 1$$ claiming contradiction. I don't really understand why $\langle x_0^{*}, x_0^{**} \rangle = \langle x_0^{***}, x_0^{**}
Note that as $B$ is a closed convex subset of $(X^{**}, w^{*})$ and as $x_{0}^{**}\not\in B$ , it follows from a separation argument that there exists some $x_{0}^{***}$ in the dual of $(X^{**}, w^{*})$ which strictly separates $B$ and $x_{0}^{**}$ . But as continuous linear functionals on $(X^{**}, w^{*})$ are elements of the form $x^{***}\in X^{***}$ where there is some $x^{*}\in X^{*}$ such that $\langle x^{***}, x^{**} \rangle = \langle x^{**}, x^{*} \rangle$ for all $x^{**} \in X^{**}$ , it follows that there is some $x_{0}^{*}\in X^{*}$ such that $\langle x_{0}^{***}, x^{**} \rangle = \langle x^{**}, x_{0}^{*} \rangle$ for all $x^{**} \in X^{**}$ . Consequently, $x_{0}^{*}$ is the desired element. It is also worth mentioning that this theorem is known as Goldstine's theorem.
|functional-analysis|banach-spaces|weak-topology|
1
Union of a open and closed set being closed
I'm new to topology, and I am trying to work out under what conditions the union of a closed set with an open set is closed. Also, I just want to check whether the union of an open set with its complement is open, and if so, why?
Let $C$ be closed set and $U$ be an open set such that $C\cup U$ is closed. Then, $$C\cup U=\overline{C\cup U}=\overline{C}\cup\overline{U}=C\cup\overline{U}.\tag1$$ Since $\partial U\cap U=\emptyset$ , from $(1)$ , we deduce that $\partial U=\overline{U}-U\subset C.$
|general-topology|
0
Show that for every numbers a,b,c real positive we have $\sum a\frac{b^2+c^2}{b+c}\geq ab+bc+ca$
Show that for all positive real numbers $a,b,c$ we have $$\sum a\frac{b^2+c^2}{b+c}\geq ab+bc+ca$$ That $a$ in front is really annoying, so I tried dividing by $abc$ , which gives $$\sum \frac{b^2+c^2}{bc(b+c)}\geq \frac{1}{a}+\frac{1}{b}+\frac{1}{c}$$ I am pretty sure that we have to use the inequality of means, and I tried writing some inequality from this, but got nothing useful. I hope one of you can help me! Thank you!
It is straightforward to verify that $$ \frac{x^2+y^2}{x+y} \ge \frac 12 (x+y) $$ holds for all positive real numbers $x, y$ . This gives $$ \sum_{cyc} a \frac{b^2+c^2}{b+c} \ge \frac 12 \bigl( a(b+c)+b(c+a)+c(a+b) \bigr) = ab+bc+ca \, . $$
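A quick numerical spot-check of both the term-by-term lemma and the full inequality (my addition, over random positive triples):

```python
import random

def sides(a, b, c):
    """Return (lhs, rhs) of the cyclic inequality sum a*(b^2+c^2)/(b+c) >= ab+bc+ca."""
    lhs = (a * (b*b + c*c) / (b + c)
           + b * (c*c + a*a) / (c + a)
           + c * (a*a + b*b) / (a + b))
    return lhs, a*b + b*c + c*a

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(1e-3, 10.0) for _ in range(3))
    lhs, rhs = sides(a, b, c)
    assert lhs >= rhs - 1e-9
    # the key lemma, term by term: (x^2+y^2)/(x+y) >= (x+y)/2
    assert (b*b + c*c) / (b + c) >= (b + c) / 2 - 1e-12
```

Equality holds exactly when $a=b=c$, which the lemma makes visible: each term then sits at its lower bound.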
|inequality|a.m.-g.m.-inequality|
0
Show that for every numbers a,b,c real positive we have $\sum a\frac{b^2+c^2}{b+c}\geq ab+bc+ca$
Show that for all positive real numbers $a,b,c$ we have $$\sum a\frac{b^2+c^2}{b+c}\geq ab+bc+ca$$ That $a$ in front is really annoying, so I tried dividing by $abc$ , which gives $$\sum \frac{b^2+c^2}{bc(b+c)}\geq \frac{1}{a}+\frac{1}{b}+\frac{1}{c}$$ I am pretty sure that we have to use the inequality of means, and I tried writing some inequality from this, but got nothing useful. I hope one of you can help me! Thank you!
Note that $$\frac{a^2+b^2}{a+b}\ge \frac{a+b}2,$$ since $$2(a^2+b^2)\ge(a+b)^2,$$ or $$a^2+b^2\ge2ab.$$ Hence $$\frac{a^2+b^2}{a+b}\ge \frac{a+b}2$$ $$\frac{b^2+c^2}{b+c}\ge \frac{b+c}2$$ $$\frac{c^2+a^2}{c+a}\ge \frac{c+a}2.$$ So we have $$c\cdot\frac{a^2+b^2}{a+b}\ge c\cdot\frac{a+b}2.$$ $$a\cdot\frac{b^2+c^2}{b+c}\ge a\cdot\frac{b+c}2$$ $$b\cdot\frac{c^2+a^2}{c+a}\ge b\cdot\frac{c+a}2.$$ Add these and get the desired inequality.
|inequality|a.m.-g.m.-inequality|
1
Proving that this is a convex set
Define the set $$A=\{\langle x,y\rangle : x\in [-1,1]^d, \: y\in \mathbb{R}^d, \:\|y\|_1\leq c\}$$ for some $c\in \mathbb{R}$ . I want to show that it is convex. Take $\langle x,y\rangle, \langle x',y'\rangle\in A$ . I want to show that if $\lambda \in [0,1]$ then $$\lambda\langle x,y\rangle+(1-\lambda) \langle x',y'\rangle\in A$$ My problem is that when I do this computation, I can never end up with only one dot product (i.e. an element of $A$ ). For example I have tried \begin{align} \lambda\langle x,y\rangle+(1-\lambda) \langle x',y'\rangle&=\langle \lambda x,y\rangle + \langle (1-\lambda) x',y'\rangle\\ &=\langle \lambda x,y\rangle + \langle (1-\lambda) x',y+(y'-y)\rangle\\ &=\langle \lambda x,y\rangle + \langle (1-\lambda) x',y\rangle + \langle (1-\lambda)x',y'-y\rangle\\ &=\langle \lambda x+(1-\lambda)x',y\rangle + \langle (1-\lambda)x',y'-y\rangle\\ \end{align} No matter how many tricks I do, I always seem to end up with $2$ scalar products. How can I solve this? Edit I know tha
Let's define $X=\{(x,y): x\in[-1,1]^d, y\in\mathbb{R}^d, ||y||_1\leq c\}$ and $f:X\to\mathbb{R} $ given by $f(x,y)=\langle x,y\rangle$ . Now, $f$ is continuous and $X$ is connected because it is the cartesian product of the $d$ -dimensional hypercube (which is connected) and a ball of radius $c$ (which is connected too). Thus, $A=f(X)\subseteq \mathbb{R}$ is connected. But the only connected sets in the standard topology of $\mathbb{R}$ are intervals, so $A$ is an interval and thus convex.
|real-analysis|convex-analysis|inner-products|
1
Last non Zero digit of a Factorial
I ran into a cool trick for last non zero digit of a factorial . This is actually a recurrent relation which states that: If $D(N)$ denotes the last non zero digit of factorial, then $$D(N)=4D\left(\left\lfloor{\frac N5}\right\rfloor\right)\cdot D(\mbox{Units digit of $N$}) \qquad \mbox{(If tens digit of $N$ is odd)}$$ $$D(N)=6D\left(\left\lfloor{\frac N5}\right\rfloor\right)\cdot D(\mbox{Units digit of $N$}) \qquad \mbox{(If tens digit of $N$ is even)}$$ Where $\left\lfloor\cdots\right\rfloor$ is greatest integer function. I was wondering, if anybody could explain why this works?
For optimizing the calculations, instead of $$D(n)=\left(2^{\lfloor n/5\rfloor}\cdot D(\lfloor n/5\rfloor)\cdot D(n \bmod 5)\right) \bmod 10$$ we can calculate $$D(n)=\left(\operatorname{LastDigitOf}\left(2^{\lfloor n/5\rfloor}\right)\cdot D(\lfloor n/5\rfloor)\cdot D(n \bmod 5)\right) \bmod 10$$ The modified formula is: $$ D(0) = 1 \\ D(1) = 1 \\ D(2) = 2 \\ D(3) = 6 \\ D(4) = 4 \\ D(5) = 2 \\ D(6) = 2 \\ D(7) = 4 \\ D(8) = 2 \\ D(9) = 8 \\ D(n) = \left(\operatorname{LastDigitOf}\left(2^{\lfloor n/5 \rfloor}\right) \cdot D(\lfloor n/5 \rfloor) \cdot D(n \bmod 5)\right) \bmod 10, \quad \text{where } n > 9 $$ How? The original formula is $D(n) = \left(2^{\lfloor n/5\rfloor} \cdot D(\lfloor n/5\rfloor) \cdot D(n \bmod 5)\right) \bmod 10$ . But there is a problem: we need to calculate $2^{\lfloor n/5\rfloor}$ . For example, if $n = 10000$ then we need to calculate $2^{2000}$ , which is already too hard (in JS it will output Infinity ). But I discovered that we don't need to calculate it; instead we just need the last digit of $2^{\lfloor n/5\rfloor}$ . Why? Because each time we only need the last digit of $2^{\lfloor n/5\rfloor} \cdot D(\lfloor n/5\rfloor) \cdot D(n \bmod 5)$ . Theorem The last digit of multiplication is the last digit of multiplicati
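The modified recurrence above is easy to implement; here is a sketch in Python (my transcription of the formula, cross-checked against `math.factorial`):

```python
import math

BASE = [1, 1, 2, 6, 4, 2, 2, 4, 2, 8]   # D(0) .. D(9) from the table above

def last_digit_pow2(k):
    """Last digit of 2^k without computing 2^k (the cycle 2, 4, 8, 6 for k >= 1)."""
    return 1 if k == 0 else [6, 2, 4, 8][k % 4]

def D(n):
    """Last nonzero digit of n! via the recurrence above."""
    if n < 10:
        return BASE[n]
    return (last_digit_pow2(n // 5) * D(n // 5) * D(n % 5)) % 10

def brute(n):
    """Reference implementation: compute n! exactly and strip trailing zeros."""
    f = math.factorial(n)
    while f % 10 == 0:
        f //= 10
    return f % 10

assert all(D(n) == brute(n) for n in range(200))
```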
|number-theory|elementary-number-theory|factorial|
0
Initial value problem: existence
Does the initial-value problem $d y / d x=x / y$ , $y(0)=0$ have a solution? My answer to this question is that the equation has no solution, because $x/y$ is not defined at $(0,0)$ , and also, by the existence and uniqueness theorem, the right-hand side is not continuous at $0$ . My question is: is there a more formal justification or proof of this? Thanks
Before you talk about solutions to an equation, you should always clarify what you mean by a solution. The differential equation $$ (1) \quad y'(x)=\frac{x}{y(x)} $$ is of the form $y'(x)=f(x,y(x))$ with $f(x,y)=x/y$ , and the natural domain of $f$ is $$ D:=\{(x,y) \in \mathbb{R}^2\mid y \not=0\}. $$ A usual solution concept is, that a solution of $(1)$ is a differentiable function $y:I \to \mathbb{R}$ on an interval $I \subseteq \mathbb{R}$ such that $$ {\rm Graph}~ y = \{(x,y(x))\mid x\in I\} \subseteq D \quad \text{and} \quad \forall x \in I: ~ y'(x)=f(x,y(x)). $$ With this definition of solutions the IVP $$ (2) \quad y'(x)=\frac{x}{y(x)}, ~~ y(0)=0 $$ has no solution since $(0,0) \notin D$ . On the other hand for such singular differential equations you can choose different solution concepts. In your case for example: $y:I \to \mathbb{R}$ is a solution of (1) if it is differentiable and $y'(x)y(x)=x$ ( $\iff (\frac{d}{dx}y^2)(x)=\frac{d}{dx}x^2$ ) for all $x \in I$ . With this defi
|ordinary-differential-equations|
0
Assuming that $(f_n)_n$ is bounded, does $\sup_{n\in\mathbb N}||f_n||_H <+\infty$ hold?
Let $(H, \|\cdot\|)$ be a Hilbert space and let $(f_n)_n$ be a bounded sequence in $H$ . I was wondering if this information is enough to conclude that $$\sup_{n\in\mathbb N}\|f_n\|_H < +\infty.$$ I'd say that the answer is yes because being bounded in $H$ means that there exists $R>0$ such that $\|f_n\|_H \le R$ . With this information, passing to the supremum, one should have $$\sup_{n\in\mathbb N}\|f_n\|_H \le R < +\infty.$$ Can anyone help me in understanding if I argued it correctly?
To provide an answer for this question: yes, what you have done is correct. In fact, the converse is also true, in the sense that if $(f_{n})_{n\in\mathbb{N}}$ is a sequence in a Hilbert space $H$ and if $$\sup_{n\in\mathbb{N}} \|f_{n}\| < +\infty,$$ then it follows that $(f_{n})_{n\in\mathbb{N}}$ is a bounded sequence. This is because $\|f_{n}\| \leq \sup_{m\in\mathbb{N}} \|f_{m}\|$ for all $n\in\mathbb{N}$ . Furthermore, it is enough for $H$ to simply be a normed space, because all you needed to conclude the statement was a norm and the definition of a bounded subset of a normed space.
|sequences-and-series|functional-analysis|supremum-and-infimum|
0
Is this quadratic form positive definite?
Assume that $\delta >0$ , and $a,b,c>0$ with $ab=c$ . I am considering the following quadratic form: $$ q(x_1,x_2,x_3,x_4)=((1+a+b)x_1+bx_2+ax_3-x_4)^2+\delta\left(\frac{x_2^2}{a}+\frac{x_3^2}{b}+\frac{x_4^2}{c}\right), $$ I try to find out whether this quadratic form is positive definite, i.e. whether $q(\vec{x})>0$ for all $\vec{x}=(x_1,\ldots,x_4)\neq (0,0,0,0)$ . I wrote down the corresponding symmetric matrix, i.e., $q(\vec{x})=\vec{x}A\vec{x}^\top$ with $$ A=\begin{pmatrix}(1+a+b)^2 & (1+a+b)\cdot b & (1+a+b)\cdot a & -(1+a+b)\\(1+a+b)\cdot b & b^2+\frac{\delta}{a} & c & -b\\(1+a+b)\cdot a & c & a^2+\frac{\delta}{b} & -a\\-(1+a+b) & -b & -a & 1+\frac{\delta}{c}\end{pmatrix} $$
Obviously, being the sum of two squares, $q\ge 0$ always. The case $q=0$ can only occur when both terms are zero. But then we must have $$ x_2=x_3=x_4=0 $$ from the second term and therefore $$ ((1+a+b)x_1)^2=0 $$ from the first term which implies $x_1=0\,.$
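A quick numerical confirmation (my addition; the values of $a$, $b$, $\delta$ are arbitrary samples) that the matrix $A$ from the question reproduces the sum-of-squares form and that $q$ is positive away from the origin:

```python
import random

# Sample (hypothetical) parameter values with ab = c
a, b, delta = 2.0, 3.0, 0.5
c = a * b
s = 1.0 + a + b

# The symmetric matrix A from the question
A = [[s * s,  s * b,              s * a,              -s],
     [s * b,  b * b + delta / a,  c,                  -b],
     [s * a,  c,                  a * a + delta / b,  -a],
     [-s,     -b,                 -a,                 1.0 + delta / c]]

def q_sos(x):
    """Sum-of-squares form of q."""
    lin = s * x[0] + b * x[1] + a * x[2] - x[3]
    return lin * lin + delta * (x[1]**2 / a + x[2]**2 / b + x[3]**2 / c)

def q_mat(x):
    """Quadratic form x^T A x."""
    return sum(x[i] * A[i][j] * x[j] for i in range(4) for j in range(4))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-5.0, 5.0) for _ in range(4)]
    assert abs(q_sos(x) - q_mat(x)) < 1e-8   # the two forms agree
    assert q_mat(x) > 0.0                    # and q is positive at nonzero x
```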
|real-analysis|analysis|quadratic-forms|
1
How to obtain the period of this nonlinear differential equation?
Lately, I've been trying to find the period of an angle included in the following differential equations, but only could with the basic model: Basic or original: $$\mathrm{For}\ (\Phi (0), \Omega (0))=(\Phi_{o},0),\ \frac{d^2\Phi}{dt^2}= \frac{g}{\ell_{o}}\sin{\Phi}-\frac{g}{\ell_{o}}\zeta\ \mathrm{sgn\ \Phi}\ ;$$ Modified: $$\mathrm{For\ the\ same\ initial\ conditions},\ \frac{d^2\Phi}{dt^2}= \frac{g}{\ell_{o}}\frac{\sin{\Phi}}{f(\Phi)}-\frac{g}{\ell_{o}}\zeta \frac{\mathrm{sgn\ \Phi}}{f(\Phi)}\ -2\dot{\Phi}^2 \frac{f'(\Phi)}{f(\Phi)}.$$ Where $g$ is gravity, $\ell_{o}$ is the length of the inverted pendulum, $\zeta$ a group of other constants, $\operatorname{sgn}\left(\cdot\right)$ is the signum function, $\dot{\Phi}=\Omega=\frac{d\Phi}{dt}$ , $f(\Phi)=\sqrt[3]{1-\eta\cos{\Phi}}$ ( $\eta$ is another constant) and $f'(\Phi)=\frac{df(\Phi)}{d\Phi}$ . And so, the method I used to get the period was basically this: Let $F(\Phi)= \frac{g}{\ell_{o}}\sin{\Phi}-\frac{g}{\ell_{o}}\zeta\ \math
Revisiting this post, I realized I forgot to add that I found an exact solution: \begin{align} \frac{\mathrm d^2\Phi}{\mathrm dt^2}&=\frac{g}{\ell_0}\frac{\sin\Phi}{f(\Phi)}-\frac{g}{\ell_0}\zeta\frac{\text{sgn}\,\Phi}{f(\Phi)}-2\dot\Phi^2\frac{f'(\Phi)}{f(\Phi)}\\ \ddot\Phi+2\dot\Phi^2\frac{f'(\Phi)}{f(\Phi)}&=\frac{g}{\ell_0f(\Phi)}(\sin\Phi-\zeta\text{sgn}\,\Phi)\\ f(\Phi)^4\ddot\Phi+2\dot\Phi^2f(\Phi)^3f'(\Phi)&=\frac{g}{\ell_0}f(\Phi)^3(\sin\Phi-\zeta\text{sgn}\,\Phi)\\ \frac{1}{2}\left(2f(\Phi)^4\frac{\dot\Phi\ddot\Phi}{\dot \Phi}+4\dot\Phi^2f(\Phi)^3f'(\Phi)\right) &= \frac{1}{2}\frac{\mathrm d}{\mathrm d\Phi}(f^4\dot\Phi^2)\\ &=\frac{g}{\ell_0}f(\Phi)^3(\sin\Phi-\zeta\text{sgn}\,\Phi)\\ \implies \int_{\Phi_{\text{0}}}^\Phi\frac{1}{2}\frac{\mathrm d}{\mathrm d\Phi}(f^4\dot\Phi^2)\,\mathrm d\Phi &= \frac{1}{2}f(\Phi)^4\dot\Phi^2 \\ &=\int_{\Phi_{\text{0}}}^\Phi\frac{g}{\ell_0}f(\Phi)^3(\sin\Phi-\zeta\text{sgn}\,\Phi)\, \mathrm d\Phi\\ \implies \dot\Phi &= \frac{\mathrm d\Phi}{\mathrm d
|calculus|integration|physics|mathematical-physics|computational-mathematics|
0
Graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$
$\lim_{x\to0+}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ $\lim_{x\to0-}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ I tried to solve these by graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ . The graph of the function $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ is the straight line $y=0$ : https://www.desmos.com/calculator/la8vnf0z0f But the function doesn't give $0$ for all values of $x$ , as $(\lfloor 5 \rfloor + \lfloor1-5\rfloor) =1$ . Then why is the graph of the function the straight line $y=0$ ?
For any $x\in\mathbb Z$ : $\lfloor x\rfloor=x$ and $\lfloor 1-x\rfloor=1-x$ , so $\lfloor x\rfloor+\lfloor 1-x\rfloor=x+(1-x)=1$ . However, the floor function is discontinuous, which is why, when you evaluate the one-sided limits at an integer individually, they differ from the value at that point; that is precisely what it means for the function to be discontinuous there.
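The same bookkeeping in a couple of lines of Python (my addition):

```python
from math import floor

def g(x):
    return floor(x) + floor(1 - x)

# away from the integers the two floors cancel and the sum is 0 ...
assert all(g(x) == 0 for x in (0.3, -0.3, 0.001, -0.001, 4.5, -4.5))
# ... but exactly at the integers the sum jumps to 1
assert all(g(n) == 1 for n in (-5, 0, 1, 5))
# hence both one-sided limits at 0 equal 0 even though g(0) = 1
```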
|real-analysis|algebra-precalculus|graphing-functions|ceiling-and-floor-functions|desmos|
0
Finding maximum number of pencils with any child in the problem
In a group of children, each child has a certain number of pencils from $1$ to $n$ . If the number of children who have i or more pencils is $2^{n-i}$ , for $i=1,2,3, \ldots,n$ and the total number of pencils is $511$ , find the maximum number of pencils with any child. My attempt:- Number of children with 1 pencil = $2^{n-1}-2^{n-2}=2^{n-2}$ Number of children with 2 pencils = $2^{n-2}-2^{n-3}=2^{n-3}$ . . . Number of children with n pencils = 1 Now, total number of pencils = $1 \cdot 2^{n-2} + 2 \cdot 2^{n-3} +....+ (n \cdot 1) = 511$ I am not able to understand how to solve it from here. As per the solution given by the author ( screenshot ): The number of children with 1 or more pencils is $2^{n-1}$ , (i.e. those children who have one or more pencils) The number of children with 2 or more pencils is $2^{n-2}$ (i.e. those children who have 2 or more pencils) and so on. Thus the total number of pencils that the children have is $$ 2^{n-1}+2^{n-2}+2^{n-3}+ \dots + 2^{n-n} = 2^{n-1} =
So, you found that there are $2^{n-2}$ children with $1$ pencil, $2^{n-3}$ children with $2$ pencils, … $2^1$ children with $n-2$ pencils, $2^0$ children with $n-1$ pencils, and $1$ kid with $n$ pencils. The overall number of pencils is then $$1\cdot2^{n-2}+2\cdot2^{n-3}+…+(n-1)\cdot2^0+n.$$ Let us calculate this sum. We must add a lot of powers of $2$ . Here is a trick how to visualize it in a convenient way. Arrange them in a triangle: $$\begin{array}{ccccccc} 2^{n-2} & 2^{n-3}& 2^{n-4} & 2^{n-5} & … & 2^1&2^0 \\ & 2^{n-3}& 2^{n-4} & 2^{n-5} & … & 2^1& 2^0 \\ & & 2^{n-4} & 2^{n-5} & … & 2^1& 2^0 \\ & & & 2^{n-5} & … & 2^1& 2^0 \\ & & & & … & … & … \\ & & & & & 2^1 & 2^0\\ & & & & & & 2^0 \end{array}$$ It is easy to calculate the sum in each horizontal row (it’s just a sum of geometric progression: $2^{k}+…+2^2+2^1+2^0=2^k-1$ ): $$\begin{array}{ccccccc|c} 2^{n-2} & 2^{n-3}& 2^{n-4} & 2^{n-5} & … & 2^1&2^0 & 2^{n-1}-1\\ & 2^{n-3}& 2^{n-4} & 2^{n-5} & … & 2^1& 2^0 & 2^{n-2}-1\\ & & 2^{n
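In Python (my addition), the triangle sum above telescopes to $2^n-1$, and $511=2^9-1$ pins down $n=9$:

```python
def total_pencils(n):
    # 2^(n-i-1) children have exactly i pencils (i = 1 .. n-1); one child has n
    return sum(i * 2 ** (n - i - 1) for i in range(1, n)) + n

# the triangle argument says the total telescopes to 2^n - 1
assert all(total_pencils(n) == 2 ** n - 1 for n in range(1, 20))
assert total_pencils(9) == 511          # so n = 9: the maximum is 9 pencils
```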
|sequences-and-series|word-problem|
1
Prove that MLEs of two independent samples are independent
I am trying to prove that if I have two samples of iid random variables, then MLE's based on these two samples will be also independent. More formally, let $$\mathbf{x} = (x_i)^T_{i = 1} \stackrel{iid}{\sim} \mathcal{N}(0,1), \quad \mathbf{y} = (y_i)^T_{i = 1} \stackrel{iid}{\sim} \mathcal{N}(0,1)$$ be two samples of the same size. Let $$\theta_1 = \text{argmax}_{\theta} \mathcal{L(\mathbf{x}, \theta)}, \quad \theta_2 = \text{argmax}_{\theta} \mathcal{L(\mathbf{y}, \theta)}$$ be two MLE estimates of the same parameter $\theta$ . Problem: I want to prove that $\theta_1$ and $\theta_2$ are independent. Intuitively it is easy to see, because there are no chances for them to be dependent since the underlying samples are generated by iid random variables. But I am struggling to show this formally. Any support will be appreciated.
If two random variables $X$ and $Y$ are independent, then for any (measurable) maps $f$ and $g$ , $f(X)$ and $g(Y)$ are independent. This is a straightforward deduction from any definition you take for independence. $\theta_1$ is a function of $\mathbf{x}$ , $\theta_2$ is a function of $\mathbf{y}$ , and $\mathbf{x}$ and $\mathbf{y}$ are independent. Therefore $\theta_1$ and $\theta_2$ are independent.
|probability-theory|independence|maximum-likelihood|sampling|
1
The sum of $(-1)^n \frac{\ln n}{n}$
I'm stuck trying to show that $$\sum_{n=2}^{\infty} (-1)^n \frac{\ln n}{n}=\gamma \ln 2- \frac{1}{2}(\ln 2)^2$$ This is a problem in Calculus by Simmons. It's in the end of chapter review and it's associated with the section about the alternating series test. There's a hint: refer to an equation from a previous section on the integral test. Specifically: $$L=\lim_{n\to\infty} F(n)=\lim_{n\to\infty}\left[a_1+a_2+\cdots+a_n-\int_1^n\! f(x)\,\mathrm{d}x\right]$$ Here, $\{a_n\}$ is a decreasing sequence of positive numbers and $f(x)$ is a decreasing function such that $f(n)=a_n$, and $\gamma$ is this limit in the case that $a_n=\frac{ 1}{n}$. New users can't answer their own questions inside of 8 hours, so I'm editing my question to reflect the answer. Ok, I got it. Following the hint in the book $$L=\lim_{n\to\infty}\left[\frac{\ln 2}{2}+\frac{\ln 3}{3}+\cdots+\frac{\ln n}{n}-\int_2^n\! \frac{\ln x}{x}\,\mathrm{d}x\right]$$ $$=\lim\left[\frac{ \ln 2}{2}+\cdots+\frac{ \ln n}{n}-\left.\frac
Start with the classic: $$\frac{1}{n^{a}}=\frac{1}{\Gamma(a)}\int_{0}^{\infty}e^{-nt}t^{a-1}dt$$ Then: $$\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^{a}}=\frac{1}{\Gamma(a)}\sum_{n=1}^{\infty}\cos(nx)\int_{0}^{\infty}e^{-nt}t^{a-1}dt$$ $$=\frac{1}{\Gamma(a)}\int_{0}^{\infty}\left(\sum_{n=1}^{\infty}\cos(nx)e^{-nt}\right)t^{a-1}dt$$ Use the handy and famous Poisson thingie: $$\sum_{n=1}^{\infty}x^{n}\cos(n\theta)=\frac{x(\cos\theta-x)}{x^{2}-2x\cos\theta+1}$$ and let $x=e^{-t}$ to get: $$=\frac{1}{\Gamma(a)}\int_{0}^{\infty}\frac{e^{-t}(\cos(x)-e^{-t})}{e^{-2t}-2e^{-t}\cos(x)+1}t^{a-1}dt$$ Make the sub $u=e^{-t}, \;\ t=\ln(1/u), \;\ du=-e^{-t}$ and obtain: $$\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^{a}}=-\frac{1}{\Gamma(a)}\int_{0}^{1}\frac{\log^{a-1}(1/u)(u-\cos(x))}{u^{2}-2u\cos(x)+1}du$$ Now diff w.r.t 'a': $$\sum_{n=1}^{\infty}\frac{\cos(nx)\log(n)}{n^{a}}=\frac{1}{\Gamma(a)}\int_{0}^{1}\frac{(u-\cos(x))\log^{a-1}(1/u)\log(\log(1/u))}{u^{2}-2u\cos(x)+1}du-\frac{\psi(a)}{\Gamma(a)}\int_{0}^{1}\
|calculus|sequences-and-series|stieltjes-constants|
0
The sum of $(-1)^n \frac{\ln n}{n}$
I'm stuck trying to show that $$\sum_{n=2}^{\infty} (-1)^n \frac{\ln n}{n}=\gamma \ln 2- \frac{1}{2}(\ln 2)^2$$ This is a problem in Calculus by Simmons. It's in the end of chapter review and it's associated with the section about the alternating series test. There's a hint: refer to an equation from a previous section on the integral test. Specifically: $$L=\lim_{n\to\infty} F(n)=\lim_{n\to\infty}\left[a_1+a_2+\cdots+a_n-\int_1^n\! f(x)\,\mathrm{d}x\right]$$ Here, $\{a_n\}$ is a decreasing sequence of positive numbers and $f(x)$ is a decreasing function such that $f(n)=a_n$, and $\gamma$ is this limit in the case that $a_n=\frac{ 1}{n}$. New users can't answer their own questions inside of 8 hours, so I'm editing my question to reflect the answer. Ok, I got it. Following the hint in the book $$L=\lim_{n\to\infty}\left[\frac{\ln 2}{2}+\frac{\ln 3}{3}+\cdots+\frac{\ln n}{n}-\int_2^n\! \frac{\ln x}{x}\,\mathrm{d}x\right]$$ $$=\lim\left[\frac{ \ln 2}{2}+\cdots+\frac{ \ln n}{n}-\left.\frac
We recall that the eta function is defined as $$\eta(s)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = \left(1- 2^{1-s} \right) \zeta(s)$$ Hence differentiating once (note that $\frac{d}{ds}n^{-s}=-n^{-s}\log n$ flips the sign) we have that: \begin{align*} \eta'(s) &= \sum_{n=1}^{\infty} \frac{(-1)^{n} \log n}{n^s} \\ &= \sum_{n=2}^{\infty} \frac{(-1)^{n} \log n}{n^s}\\ &= 2^{1-s} \zeta(s)\log 2 + \left ( 1-2^{1-s} \right ) \zeta'(s) \end{align*} Thus taking the limit as $s \rightarrow 1$ we have that: \begin{align*} \eta'(1) &=\sum_{n=2}^{\infty} \frac{(-1)^{n} \log n}{n} \\ &= \lim_{s \rightarrow 1} \left [ 2^{1-s} \zeta (s) \log 2 + \left ( 1-2^{1-s} \right ) \zeta'(s) \right ]\\ &= \gamma \ln 2 -\frac{\ln^2 2}{2} \end{align*} For the last step, expand near $s=1$ : $\zeta(s)=\frac{1}{s-1}+\gamma+O(s-1)$ , $\zeta'(s)=-\frac{1}{(s-1)^2}+O(1)$ and $2^{1-s}=1-(s-1)\ln 2+O((s-1)^2)$ . The poles $\pm\frac{\ln 2}{s-1}$ cancel between the two terms, and the finite parts combine to $\gamma\ln 2-\ln^2 2+\frac{\ln^2 2}{2}=\gamma\ln 2-\frac{\ln^2 2}{2}$ .
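The closed form can also be checked numerically (my addition); averaging two consecutive partial sums tames the alternating series:

```python
import math

GAMMA = 0.5772156649015329          # Euler-Mascheroni constant
target = GAMMA * math.log(2) - math.log(2) ** 2 / 2

N = 10 ** 6
s = 0.0
sign = 1                             # (-1)^n is +1 at n = 2
for n in range(2, N + 1):
    s += sign * math.log(n) / n
    sign = -sign
s_next = s + sign * math.log(N + 1) / (N + 1)

# For an alternating series with decreasing terms the true sum lies between
# consecutive partial sums, so their average is a far better estimate.
avg = (s + s_next) / 2
assert abs(avg - target) < 1e-6
```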
|calculus|sequences-and-series|stieltjes-constants|
0
Graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$
$\lim_{x\to0+}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ $\lim_{x\to0-}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ I tried to solve these by graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ . The graph of the function $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ is the straight line $y=0$ : https://www.desmos.com/calculator/la8vnf0z0f But the function doesn't give $0$ for all values of $x$ , as $(\lfloor 5 \rfloor + \lfloor1-5\rfloor) =1$ . Then why is the graph of the function the straight line $y=0$ ?
Look at the superimposed graphs of $\lfloor x\rfloor$ (blue) and $\lfloor 1-x \rfloor$ (red). Observe that there is a break at every right end point of both functions (because $x,(1-x)$ themselves become an integer). What you now have is, for, $$\lim_{x\rightarrow0+} \lfloor x\rfloor=0\quad\text{and}\quad\lim_{x\rightarrow0-} \lfloor x\rfloor=-1 \quad\text{but}\quad\lfloor0\rfloor=0$$ $$\lim_{x\rightarrow0+} \lfloor 1-x\rfloor=0\quad\text{and}\quad\lim_{x\rightarrow0-} \lfloor 1-x\rfloor=1\quad\text{but}\quad\lfloor1-0\rfloor=1$$ Hence, $$\lim_{x\rightarrow0+}( \lfloor x\rfloor+\lfloor 1-x\rfloor)=0 \quad\text{and}\quad\lim_{x\rightarrow0-} ( \lfloor x\rfloor+\lfloor 1-x\rfloor)=0\quad\text{but}\quad\lfloor 0\rfloor+\lfloor 1-0\rfloor=1$$ The correct statement to all this is that the limit of the function $\lfloor x\rfloor+\lfloor 1-x\rfloor$ exists at $0$ despite pointwise discontinuity. This is true because of the equality of left-hand and right-hand limits despite the true value of
|real-analysis|algebra-precalculus|graphing-functions|ceiling-and-floor-functions|desmos|
0
Integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$
I know such integral: $\int_0^{\infty}\frac{\ln x}{e^x}\,dx=-\gamma$ . What about the integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$ ? The answer seems very nice: $-\frac{1}{2}{\ln}^22$ but how it could be calculated? I tried integration by parts but the limit $\displaystyle{\lim_{x\to 0}\ln x\ln(1+e^{-x})}$ doesn't exist. Or I can also write the following equality $$\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx=\lim\limits_{t\to 0}\frac{d}{dt}\left(\int_0^{\infty}\frac{x^t}{e^x+1}\, dx\right)$$ but I don't know what to do next.
Finding an antiderivative is a non-starter, but there are a lot of levers to work with in an integral like this. First idea: a series. $\frac1{e^x+1}=e^{-x}-e^{-2x}+e^{-3x}-e^{-4x}+\cdots$ for $x>0$ . We have to be careful with the behavior near zero, and then we need $\int_0^{\infty} e^{-nx}\ln x\,dx$ , which again requires careful treatment near zero... I might revisit this, but forging ahead as we are now looks like a bad idea. Second idea : complex analysis. We want to close this to a contour... there's a singularity at zero, cut an arc around that... the numerator is predictable on rays from the origin, and the denominator is predictable on (some) rays parallel to the $x$ -axis. No, those don't line up; this looks unproductive. Third idea: introduce another variable. Specifically, something to kill that logarithm: $\ln x$ is the derivative of $x^t$ (with respect to $t$ ) at $t=0$ ... Let $I(t)=\int_0^{\infty}\frac{x^t}{e^x+1}\,dx$ . We want to find $I'(0)$ . \begin{align*}I(t) &=
|calculus|
0
$L^2$-convergence implies convergence in $H_0^k$
Suppose that $Mx_n$ converges to $M x$ in $L^2([0,1], \mathbb{R}^m)$ for $n \to \infty$ and $M x_n \in H_0^k([0,1], \mathbb{R}^m)$ , for $M \in \mathbb{R}^{m\times m}$ and a $k \in \mathbb{N}$ . The space $H_0^k([0,1], \mathbb{R}^m)$ denotes the Sobolev space $W_0^{k,2}([0,1], \mathbb{R}^m)$ endowed with the standard Sobolev norm. How can I show that $M x \in H_0^k([0,1], \mathbb{R}^m)$ ? I think, by the Gagliardo-Nirenberg inequality, it suffices to show that $$\lim_{n\to \infty} \left|\left|\frac{\partial^k}{\partial \omega^k} M (x_n-x)\right|\right|_{L^2([0,1], \mathbb{R}^m)}^2=0,$$ but I don't see this. I would be very grateful for help or hints.
Call $f_n:=M x_n$ , $f:=M x$ . You have that $f_n\to f$ in $L^2$ , and you know that $f_n\in H^k_0$ for all $n$ . The above is not enough to prove that $f\in H^k_0$ . You need more assumptions. The reason is that $H^k_0$ is dense in $L^2$ . There is a simple assumption one could make here that ensures $f\in H^k_0$ : it is enough to assume that the sequence $(f_n)_n$ is bounded in $H^k_0$ , i.e., $$ \|f_n\|_{H^k_0}\leq C $$ for some constant $C$ that does not depend on $n$ . In fact, with this assumption, any subsequence $f_{n_k}$ has a weakly convergent sub-subsequence $f_{n_{k_j}}\to g$ in $H^k_0$ , due to the fact that $H^k_0$ is reflexive. The function $g$ might in principle depend on the chosen sub-subsequence. However, $f_{n_{k_j}}\to g$ weakly in $L^2$ (because the $L^2$ norm is weaker than the $H^k_0$ norm) and $f_{n_{k_j}}\to f$ weakly in $L^2$ by assumption, so by the uniqueness of weak limits, $f=g$ a.e. It follows that $f\in H^k_0$ and $f_n\to f$ weakly in $H^k_0$ . In particu
|partial-differential-equations|convergence-divergence|sobolev-spaces|weak-derivatives|
0
Integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$
I know such integral: $\int_0^{\infty}\frac{\ln x}{e^x}\,dx=-\gamma$ . What about the integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$ ? The answer seems very nice: $-\frac{1}{2}{\ln}^22$ but how it could be calculated? I tried integration by parts but the limit $\displaystyle{\lim_{x\to 0}\ln x\ln(1+e^{-x})}$ doesn't exist. Or I can also write the following equality $$\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx=\lim\limits_{t\to 0}\frac{d}{dt}\left(\int_0^{\infty}\frac{x^t}{e^x+1}\, dx\right)$$ but I don't know what to do next.
\begin{align*} \int_{0}^{\infty}\frac{\ln x}{e^x+1} \, {\rm d}x &= \lim_{s \rightarrow 1}\int_{0}^{\infty} \frac{\partial }{\partial s} \frac{x^{s-1}}{e^x+1} \, {\rm d}x \\ &= \lim_{s \rightarrow 1}\frac{\mathrm{d} }{\mathrm{d} s} \int_{0}^{\infty} \frac{x^{s-1}}{e^x+1} \, {\rm d}x\\ &= \lim_{s \rightarrow 1}\frac{\mathrm{d} }{\mathrm{d} s} \mathcal{M} \left \{ \frac{1}{e^x+1} \right \}\\ &= \lim_{s \rightarrow 1}\frac{\mathrm{d} }{\mathrm{d} s} \left ( \Gamma(s) \eta(s) \right ) \\ &= \lim_{s \rightarrow 1} \left [\Gamma'(s) \eta(s) + \Gamma (s) \eta'(s) \right ] \\ &= \Gamma'(1)\,\eta(1) + \Gamma(1)\,\eta'(1) \\ &= -\gamma \ln 2 + \left ( \gamma \ln 2 - \frac{\ln^2 2}{2} \right ) \\ &= - \frac{\ln^2 2}{2} \end{align*} Here $\mathcal{M}$ is the Mellin transform and $\eta(s)=\left(1-2^{1-s}\right)\zeta(s)$ is the Dirichlet eta function: the Mellin transform of $\frac{1}{e^x+1}$ is $\Gamma(s)\eta(s)$ (the transform of $\frac{1}{e^x-1}$ would be $\Gamma(s)\zeta(s)$ ). The last step uses $\Gamma'(1)=\psi(1)=-\gamma$ , $\eta(1)=\ln 2$ and $\eta'(1)=\gamma\ln 2-\frac{\ln^2 2}{2}$ .
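A numerical sanity check of the value (my addition): substituting $x=u^2$ removes the logarithmic singularity at the origin, after which plain Simpson quadrature suffices:

```python
import math

def integrand(u):
    # After x = u^2:  ln(x)/(e^x + 1) dx  ->  4 u ln(u) / (e^(u^2) + 1) du
    if u == 0.0:
        return 0.0                   # u*ln(u) -> 0 as u -> 0+
    return 4.0 * u * math.log(u) / (math.exp(u * u) + 1.0)

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) panels."""
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4.0 * odd + 2.0 * even) * h / 3.0

val = simpson(integrand, 0.0, 8.0, 200_000)   # tail beyond x = 64 is negligible
assert abs(val - (-math.log(2) ** 2 / 2)) < 1e-4
```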
|calculus|
0
Show that $||f_n-f||_p \rightarrow 0$ as $n \rightarrow \infty$
Let $(X, \mathcal{A}, \mu)$ be a finite measure space and $f_n \in L^p(X, \mu)$ where $f_n(x) \rightarrow f(x)$ almost everywhere as $n \rightarrow \infty$ and $1 \leq p < \infty$ . Now suppose $||f_n||_q \leq M < \infty$ where $M \in \mathbb{R}^+$ is a constant, plus $q > p $ . Then show that $||f_n-f||_p \rightarrow 0$ as $n \rightarrow \infty$ . My attempt: We have been asked to use the hint to use Vitali's Convergence Theorem. Now firstly, as $||f_n||_q \leq M < \infty$ , using $L^p$ and $L^q$ space inclusion I can say that $f_n \in L^1(X, \mu) \ \forall n $ . This $\implies f_n$ is uniformly integrable, again using: for every $\epsilon>0$ there exists $\delta>0$ such that $\int_A|f(x)|\mu(dx) < \epsilon$ whenever $\mu(A) < \delta$ . These conditions now let me use Vitali's convergence theorem. But that does not involve a $||.||_p$ norm anywhere. How do I proceed from here?
Let $g_n=|f_n-f|^{p}$ . Then $g_n \to 0$ a.e. and $\int g_n^{q/p}d\mu\le 2^{q}\left[\int |f_n|^{q}d\mu+\int |f|^{q}d\mu\right]$ . Note that the last term is finite by Fatou's Lemma. Since $q/p >1$ it follows that $(g_n)$ is uniformly integrable. Hence, $\int g_n d\mu \to 0$ , as required. If you want to apply Vitali's Theorem, just apply it to $(g_n)$ and conclude that $g_n \to 0$ in $L^{1}(\mu)$ . [Apply the theorem to the $L^{1}$ space instead of $L^{p}$ .]
|measure-theory|lebesgue-integral|lebesgue-measure|
1
Calculate integral $\int^{+\infty}_0 \frac{e^{-x^2}}{(x^2+\frac{1}{2})^2} dx$?
I've posted a similar integral earlier, in which the Goodwin-Staton integral is involved, so that no elementary closed form exists. Now I make a small modification to make it solvable, and give my answer below.
$$\int_0^\infty \frac{e^{-x^2}}{\left(x^2+1/2\right)^2}\,dx \overset{x=\sqrt y}{=} \frac12\int_0^\infty \frac{e^{-y}}{\sqrt y\,(y+1/2)^2}\,dy \overset{[*]}{=} \frac12\int_0^\infty \frac{e^{-y}}{\sqrt y}\left(\int_0^\infty x\,e^{-(y+1/2)x}\,dx\right)dy$$ $$=\frac12\int_0^\infty x\,e^{-x/2}\left(\int_0^\infty \frac{1}{\sqrt y}\,e^{-(x+1)y}\,dy\right)dx \overset{[**]}{=} \frac12\int_0^\infty x\,e^{-x/2}\,\frac{\sqrt \pi}{\sqrt {x + 1}}\,dx = \frac{\sqrt \pi}{2}\int_0^\infty \frac{x}{\sqrt {x + 1}}\,e^{-x/2}\,dx$$ $$\overset{x+1=y^2}{=} \sqrt \pi\,\sqrt e\int_1^\infty \left(y^2-1\right)e^{-y^2/2}\,dy = \sqrt \pi\,\sqrt e\left[-y\,e^{-y^2/2}\right]_1^\infty = \sqrt \pi\,\sqrt e\,e^{-1/2} = \sqrt \pi.$$ Here $[*]$ uses $\frac{1}{a^2}=\int_0^\infty x\,e^{-ax}\,dx$ with $a=y+1/2$ (followed by swapping the order of integration), and $[**]$ uses $\int_0^\infty y^{-1/2}e^{-by}\,dy=\sqrt{\pi/b}$ .
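The final substitution evaluates cleanly, since $\frac{d}{dy}\left(-y\,e^{-y^2/2}\right)=(y^2-1)e^{-y^2/2}$ , giving the closed form $\sqrt\pi$ . A stdlib-only numerical sanity check (the truncation point $x=8$ is my choice; the Gaussian tail beyond it is negligible):

```python
import math

def f(x):
    # the original integrand e^(-x²) / (x² + 1/2)²
    return math.exp(-x * x) / (x * x + 0.5) ** 2

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

val = simpson(f, 0.0, 8.0, 20000)
print(val, math.sqrt(math.pi))   # both ≈ 1.77245
```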
|probability|integration|
0
A supremum on orthogonal matrices
I'm working on a problem where I want to find the supremum over the orthogonal group $O_n(\mathbb{R})$ of the sum of the upper triangular elements of matrices in this group, specifically we want to compute $$ m_n=\sup_{M \in O_n(\mathbb{R})} \sum_{1 \leq i \leq j \leq n} m_{ij} $$ I have shown that this is equivalent to finding the supremum of $\text{Tr}(MB)$ where $M$ ranges over orthogonal matrices and $B$ is the "1s" upper triangular matrix : $$ B = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix} $$ I then use the SVD decomposition of $B$ , writing it as $B=UA^2V^T$ . Then, the trace calculation goes as follows: \begin{align*} \text{Tr}(MB) &= \text{Tr}(MUA^2 V^T) \\ &= \langle MUA, VA \rangle \\ &\leq ||A||_2^2 \quad (\text{by orthogonality and Cauchy-Schwarz inequality}) \\ &= \text{Tr}(A A^T) \\ &= \text{Tr}(S) \end{align*} where $S$ is the diagonal matrix of the singular values of $B$ .
Note that the singular values of $B$ satisfy $\sigma_i(B)=\sigma_i(B^{-1})^{-1}=\lambda_i^{\uparrow}\left((B^{-1})^TB^{-1}\right)^{-1/2}$ , where $\lambda_i^{\uparrow}$ denotes the $i$ -th smallest eigenvalue, and $$ (B^{-1})^TB^{-1}=\pmatrix{1&-1\\ -1&2&\ddots\\ &\ddots&\ddots&\ddots\\ &&\ddots&\ddots&-1\\ &&&-1&2} $$ is a tridiagonal matrix that is almost Toeplitz. Its eigenvalues can hence be determined by solving a linear recurrence relation. They are known to be $4\sin^2\left(\frac{j-\frac12}{n+\frac12}\frac\pi2\right)$ for $j=1,\ldots,n$ . It follows that $\sum_{j=1}^n\sigma_j(B)=\sum_{j=1}^n\frac12\sin\left(\frac{j-\frac12}{n+\frac12}\frac\pi2\right)^{-1}$ .
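This closed form is easy to check numerically (a sketch assuming NumPy is available; the choice $n=6$ is arbitrary):

```python
import math
import numpy as np

n = 6
B = np.triu(np.ones((n, n)))                 # the upper-triangular all-ones matrix
sv = np.linalg.svd(B, compute_uv=False)      # singular values, descending

# claimed closed form: sigma_j = 1 / (2 sin((j - 1/2)/(n + 1/2) * pi/2))
formula = [0.5 / math.sin((j - 0.5) / (n + 0.5) * math.pi / 2) for j in range(1, n + 1)]

assert np.allclose(sv, formula)
print(sv.sum())   # the supremum m_n equals the sum of the singular values
```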
|linear-algebra|supremum-and-infimum|svd|orthogonal-matrices|
0
Calculate $I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$
Question $$I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$$ My try $$ I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx \\ = \int_{-1}^1 \log(1-x) \, \left(\log2 + \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \right) \, dx \\ = \log2 \int_{-1}^1 \log(1-x) \, dx + \int_{-1}^1 \log(1-x) \, \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \, dx \\ = \log2 \, (2\log2 - 2) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{-1}^1 (x-1)^n \, \log(1-x) \, dx \right) \\ \overset{\begin{subarray}{c} t=1-x \\ dx=-dt \end{subarray}}{=}\, 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-t)^n \, \log{t} \, dt \right) \\ = 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-1)^n t^n \log{t} \, dt \right) $$
Yet another solution. Making use of the symmetry, we get that: \begin{align*} \int_{-1}^{1} \log(1-x) \log(1+x) \, {\rm d}x &= 2 \int_{0}^{1} \log(1-x)\log(1+x)\, {\rm d}x \\ &=2\int_{0}^{1} \log(1-x) \sum_{n=1}^{\infty} \frac{(-1)^{n-1}x^n}{n} \, {\rm d}x \\ &= 2\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \int_{0}^{1} x^n \log \left ( 1-x \right ) \, {\rm d}x\\ &= -2 \sum_{n=1}^{\infty} \frac{(-1)^{n-1} \mathcal{H}_{n+1}}{n\left ( n+1 \right )} \\ &=-2 \sum_{n=1}^{\infty} (-1)^{n+1} \mathcal{H}_{n+1} \left [ \frac{1}{n} - \frac{1}{n+1} \right ] \\ &= -2 \sum_{n=1}^{\infty} (-1)^{n+1} \left [\frac{ \left ( \mathcal{H}_n+ \frac{1}{n+1} \right )}{n} - \frac{\mathcal{H}_{n+1}}{n+1} \right ] \\ &= -2 \sum_{n=1}^{\infty} \left [ \frac{(-1)^{n+1} \mathcal{H}_n}{n} + \frac{(-1)^{n+1}}{n(n+1)} + \frac{(-1)^{n+1} \mathcal{H}_{n+1}}{n+1}\right ] \\ &= -2\left [ \sum_{n=1}^{\infty} \frac{(-1)^{n+1} \mathcal{H}_n}{n} + \cancelto{2\log 2-1}{\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n(n+1)}} -\sum_{n=1}^{\
|real-analysis|calculus|integration|definite-integrals|
0
Calculate $I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$
Question $$I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$$ My try $$ I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx \\ = \int_{-1}^1 \log(1-x) \, \left(\log2 + \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \right) \, dx \\ = \log2 \int_{-1}^1 \log(1-x) \, dx + \int_{-1}^1 \log(1-x) \, \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \, dx \\ = \log2 \, (2\log2 - 2) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{-1}^1 (x-1)^n \, \log(1-x) \, dx \right) \\ \overset{\begin{subarray}{c} t=1-x \\ dx=-dt \end{subarray}}{=}\, 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-t)^n \, \log{t} \, dt \right) \\ = 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-1)^n t^n \log{t} \, dt \right) $$
Let the proposed integral be $I$ . Notice that $$I = 2\int_0^1\log(1-x)\log(1+x)\,dx.$$ Integrating by parts yields \begin{eqnarray*} I & = & -2\int_0^1\log(1-x)\log(1+x)\,d(1-x)\\ & = & 2\int_0^1(1-x)\left(\frac{\log(1-x)}{1+ x} - \frac{\log(1+x)}{1- x}\right)\,dx\\ & = & 2\int_0^1\frac{1-x}{1+x}\,\log(1-x)\,dx - 2\int_0^1\log(1+x)\,dx. \end{eqnarray*} It is easy to see that $$\int_0^1\log(1+x)\,dx = -1 + 2\log2.$$ Let $t = (1-x)/(1+x)$ . Then $$\int_0^1\frac{1-x}{1+x}\,\log(1-x)\,dx = \int_0^1\frac{2t}{(1+t)^2}[\log2 + \log t - \log(1 + t)]\,dt.$$ In view of $d(1/(1 + t)) = - dt/(1+t)^{2}$ , integrating by parts gives $$\int_0^1\frac{2t}{(1+t)^2}\,dt = -1 +2\log2;$$ $$\int_0^1\frac{2t\log t}{(1+t)^2}\,dt = -\frac{\pi^2}{6} +2\log2;$$ $$\int_0^1\frac{2t\log(1+t)}{(1+t)^2}\,dt = -1 + \log2 + \log^22.$$ In summary, we find that $$I = 4 -4\log2 + 2\log^22 - \frac{\pi^2}{3}.$$
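The closed form can be verified numerically without special functions (stdlib only; the substitution $x=1-t^2$ , which removes the logarithmic singularity at $x=1$ , is my own device for the check):

```python
import math

# ∫₀¹ ln(1-x) ln(1+x) dx with x = 1 - t² becomes ∫₀¹ 4 t ln(t) ln(2 - t²) dt,
# whose integrand vanishes at both endpoints.
def g(t):
    return 0.0 if t == 0.0 else 4.0 * t * math.log(t) * math.log(2.0 - t * t)

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

I = 2.0 * simpson(g, 0.0, 1.0, 20000)   # the original integrand is even, so I = 2∫₀¹
closed_form = 4 - 4 * math.log(2) + 2 * math.log(2) ** 2 - math.pi ** 2 / 3
print(I, closed_form)                    # both ≈ -1.10155
```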
|real-analysis|calculus|integration|definite-integrals|
0
Calculate $I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$
Question $$I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx$$ My try $$ I = \int_{-1}^1 \log(1-x) \, \log(1+x) \, dx \\ = \int_{-1}^1 \log(1-x) \, \left(\log2 + \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \right) \, dx \\ = \log2 \int_{-1}^1 \log(1-x) \, dx + \int_{-1}^1 \log(1-x) \, \sum_{n=1}^{+\infty} \frac{(-1)^{n-1}}{2^n \, n} \, (x-1)^n \, dx \\ = \log2 \, (2\log2 - 2) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{-1}^1 (x-1)^n \, \log(1-x) \, dx \right) \\ \overset{\begin{subarray}{c} t=1-x \\ dx=-dt \end{subarray}}{=}\, 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-t)^n \, \log{t} \, dt \right) \\ = 2\log2 \, (\log2 - 1) + \sum_{n=1}^{+\infty} \left( \frac{(-1)^{n-1}}{2^n \, n} \int_{0}^2 (-1)^n t^n \log{t} \, dt \right) $$
\begin{align*} I&=\displaystyle\int_{-1}^1 \log(1-x)\,\log(1+x)\, dx\\ &=\int_{-1}^1 \log(1-x)\,\Big(\log2+\mathop{\sum}\limits_{n=1}^{+\infty}\frac{(-1)^{n-1}}{2^n\,n}\,(x-1)^n\Big)\, dx\\ &=\log2\int_{-1}^1 \log(1-x)\,dx+\int_{-1}^1 \log(1-x)\,\mathop{\sum}\limits_{n=1}^{+\infty}\frac{(-1)^{n-1}}{2^n\,n}\,(x-1)^n\, dx\\ &=\log2\,\big(2\log2-2\big)+\mathop{\sum}\limits_{n=1}^{+\infty}\Big(\frac{(-1)^{n-1}}{2^n\,n}\int_{-1}^1(x-1)^n\, \log(1-x)\, dx\Big)\\ &\mathop{=\!=\!=\!=\!=\!=}\limits^{\begin{subarray}{c} {t\,=\,1-x}\\ {dx\,=\,-dt} \\ \end{subarray}}\,2\log2\,\big(\log2-1\big)+\mathop{\sum}\limits_{n=1}^{+\infty}\Big(\frac{(-1)^{n-1}}{2^n\,n}\int_{0}^2(-t)^n\, \log{t}\, dt\Big)\\ &=2\log2\,\big(\log2-1\big)+\mathop{\sum}\limits_{n=1}^{+\infty}\Big(\frac{(-1)^{n-1}}{2^n\,n}\int_{0}^2(-1)^nt^n \log{t}\, dt\Big)\\ &=2\log2\,\big(\log2-1\big)-\mathop{\sum}\limits_{n=1}^{+\infty}\Big(\frac{1}{2^n\,n}\int_{0}^2t^n\log{t}\, dt\Big)\\ &=2\log2\,\big(\log2-1\big)-\mathop{\sum}\limits_{n=1}
|real-analysis|calculus|integration|definite-integrals|
0
Finding the $x$ in the following logarithmic inequality
I am currently practising for my maths exam and I have come across this problem, which I can partially do: $$\log^2_{1/2} x < \log_{1/2} x + 4$$ After I put everything on the left hand side I get something like this: $$\log^2_{1/2} x - \log_{1/2} x - 4 < 0$$ Then, I simply substitute $t$ for $\log_{1/2} x$ : $$t^2 - t - 4 < 0$$ The roots of this quadratic are $t_1 = \frac{1-\sqrt{17}}{2}$ and $t_2 = \frac{1+\sqrt{17}}{2}$ , so the formula looks like this: $$ \left(t-\frac{1-\sqrt{17}}{2}\right)\left(t-\frac{1+\sqrt{17}}{2}\right) < 0$$ Here, the interval of $t$ is: $\left(\frac{1-\sqrt{17}}{2}, \frac{1+\sqrt{17}}{2}\right)$ . But here is the problem I get, I don't know what to do with this. All I know is that I somehow need to switch from $t$ back to $\log_{1/2} x$ and then find the $x$ . I need your help guys and I am desperate. Thanks in advance and have a nice rest of the weekend!
You found that $$ \frac{1 - \sqrt{17}}{2} < t < \frac{1 + \sqrt{17}}{2}.$$ Since we defined $t=\log_{\frac{1}{2}}(x)$ , $$\frac{1-\sqrt{17}}{2} < \log_{\frac{1}{2}}(x) < \frac{1+\sqrt{17}}{2}.$$ We want to raise $\frac{1}{2}$ to the power of all three sides to get rid of the $\log$ and get an inequality only in terms of $x$ . However, here is where we need to be careful. The rule for exponential inequalities states that if $a>1$ and $x>y$ , then $a^x > a^y$ . However, if $0 < a < 1$ , then $a^x < a^y$ . This is due to the fact that the exponential function $f(x) = a^x$ is strictly increasing (i.e. as $x$ increases, $f(x)$ consistently increases) for $a>1$ , and strictly decreasing (i.e. as $x$ increases, $f(x)$ consistently decreases) for $0 < a < 1$ . The base of the logarithm, $\frac{1}{2}$ , is between $0$ and $1$ , so when we raise $\frac{1}{2}$ to the power of all three sides the inequality signs flip: $$\left(\frac{1}{2}\right)^{\frac{1-\sqrt{17}}{2}} > \left(\frac{1}{2}\right)^{\log_{\frac{1}{2}}(x)} > \left(\frac{1}{2}\right)^{\frac{1+\sqrt{17}}{2}}$$ $$\left(\frac{1}{2}\right)^{\frac{1-\sqrt{17}}{2}} > x > \left(\frac{1}{2}\right)^{\frac{1+\sqrt{17}}{2}}$$
|inequality|logarithms|
0
Graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$
$\lim_{x\to0+}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ $\lim_{x\to0-}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ I tried to solve these by graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ . The graph of the function $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ is the straight line $y=0$ : https://www.desmos.com/calculator/la8vnf0z0f But the function doesn't give $0$ for all values of $x$ , as $(\lfloor 5 \rfloor + \lfloor1-5\rfloor) =1$ . Then why is the graph of the function the straight line $y=0$ ?
It is not just showing the $y=0$ line. I believe you agree that the value of that function is either $0$ or $1$ , the latter occurring when $x$ is an integer. If you hover your finger or mouse, whatever you're using, over any integer point, then it shows that the range is not just $0$ . For example, go to $x=5$ : the point displayed will be $(5,1)$ . The thing is, the cardinality of the real numbers is greater than the cardinality of the integers, so it's not feasible to show those isolated points in a different color.
|real-analysis|algebra-precalculus|graphing-functions|ceiling-and-floor-functions|desmos|
0
Show that $||f_n-f||_p \rightarrow 0$ as $n \rightarrow \infty$
Given $(X, \mathcal{A}, \mu)$ be a finite measure space and $f_n \in L^p(X, \mu)$ where $f_n(x) \rightarrow f(x)$ almost everywhere as $n \rightarrow \infty$ and $1 \leq p < \infty$ . Now suppose $||f_n||_q \leq M$ for all $n$ , where $M \in \mathbb{R}^+$ is a constant and $q > p $ . Then show that $||f_n-f||_p \rightarrow 0$ as $n \rightarrow \infty$ My Attempt We have been asked to use the hint to use Vitali's Convergence Theorem. Now firstly, as $||f_n||_q \leq M$ , using the $L^p$ and $L^q$ space inclusion (valid since $\mu$ is finite) I can say that $f_n \in L^1(X, \mu) \ \forall n $ . This $\implies f_n$ is uniformly integrable, again using the characterization: for every $\epsilon>0$ there exists $\delta>0$ such that $\int_A|f_n(x)|\,\mu(dx) < \epsilon$ whenever $\mu(A) < \delta$ . These conditions now let me use Vitali's convergence theorem. But that does not involve a $||.||_p$ norm anywhere. How do I proceed from here?
Here is a proof by contradiction which does not directly use Vitali or uniform integrability (but essentially proves these results in this setting). Suppose that $||f_n - f||_p \not \to 0$ . Then, $\exists \varepsilon >0$ s.t. (WLOG restricting to a subsequence) $||f_n - f||_p \ge \varepsilon$ for every $n$ . Let $\delta >0$ and consider $S_n = \{|f_n - f|^p \ge \delta\}$ . Then, since convergence a.e. implies convergence in measure for finite measure spaces, we have that $\mu(S_n) \to 0$ (remark: this also follows from continuity from above for finite measures). But then, $$\varepsilon^p \le ||f_n - f||_p^p = \mu(|f_n - f|^p)= \mu(|f_n - f|^p[1(S_n) + 1(S_n^c)])$$ $$\varepsilon^p \le \mu(|f_n - f|^p1(S_n)) + \delta T$$ where $T = \mu(X)$ , noting that $|f_n-f|^p$ is bounded by $\delta$ on $S_n^c$ by definition. Let $\delta$ be sufficiently small such that $\eta = \varepsilon^p - \delta T > 0$ . Then, we have that: $$\mu(|f_n - f|^p 1(S_n)) \ge \eta$$ for every $n$ . If we now apply Hölder's inequality
|measure-theory|lebesgue-integral|lebesgue-measure|
0
In how many ways can the letters of the word PANACEA be arranged so that the three As are NOT all together?
PANACEA has 7 letters with 3 As. There are $4!$ ways to arrange (P, N, C, E): _ P _ N _ C _ E _. Between and around these letters there are 5 slots. So, to arrange the 3 As in 5 slots there are ${}^5P_3$ ways, and we divide by $3!$ for the 3 As to avoid duplicates. We have: $4!\cdot \frac{{}^5P_3}{3!} = 240$ This is the correct answer: (total arrangements) $-$ (arrangements with the 3 A's together) $=$ (arrangements where the 3 A's are not all together): $\frac{7!}{3!}-5! = 720$ I need someone to clarify why the first method didn't work, or did I make a mistake?
The $\frac{7!}{3!}$ part is the usual permutation of 7 letters where 3 are the same. The $5!$ is actually $5 \cdot 4!$ , where the $4!$ part is the possible arrangements of PNCE, as you recognized, and $5$ counts the slots you also recognized; but the excluded cases are only the ones where all the A's are together: AAAxxxx , xAAAxxx , xxAAAxx , xxxAAAx , and xxxxAAA . So you do have all the parts, they just have to be put together differently. (Your first method instead counts the arrangements in which no two A's are adjacent, which is a stricter condition than "not all three together", hence the smaller count of $240$ .)
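All of these counts are small enough to confirm by brute force (stdlib only):

```python
from itertools import permutations

# all distinct arrangements of the multiset of letters in PANACEA
words = {"".join(p) for p in permutations("PANACEA")}

together = {w for w in words if "AAA" in w}          # all three A's adjacent
no_two_adjacent = {w for w in words if "AA" not in w}

print(len(words), len(together), len(words) - len(together), len(no_two_adjacent))
# 840 120 720 240  (the gap method's 240 counts "no two A's adjacent")
```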
|combinatorics|
0
Prove that the union of three subspaces of $V$ is a subspace iff one of the subspaces contains the other two.
Prove that the union of three subspaces of V is a subspace iff one of the subspaces contains the other two. I can do this problem when I am working in only two subspaces of $V$ but I don't know how to do it with three. What I tried is: If one of the subspaces contains the other two, Then their union is obviously a subspace because the subspace that contains them is a subspace. (Is this sufficient??). If the union of three subspaces is a subspace..... How do I prove that one of the subspaces must contain the other two from here? *When proving this for two I said that there is an element in one of the subspaces that is not the other and proved by contradiction that one of the subspaces must be contained in the other. How would I do this for three?
This is similar to the above solutions but I'm writing it here for clarity: $\Rightarrow$ Denote the subspaces by $W_1, W_2, W_3$ . Assume that $(W_1 \cup W_2) \not\subset W_3$ and similarly for $W_1,W_2$ . Take $x_1 \in W_1\setminus(W_2 \cup W_3)$ and similarly $x_2$ and $x_3$ . Assuming $\mathbb{F} \ne \mathbb{F}_2$ , take $\lambda \in \mathbb{F}$ such that $\lambda \ne 0, 1$ . Then, consider the set $$\{\lambda{x_1}+x_2, x_1+\lambda{x_2}, x_1+x_2, x_1\}.$$ By Pigeonhole, at least two of these four elements lie in the same subspace. If two are in $W_1$ , it is easily deduced that $x_2 \in W_1$ , which gives a contradiction. If two are in $W_2$ , since the last element cannot be in $W_2$ , one of the first two elements is in $W_2$ , which with subtraction again gives $x_1 \in W_2$ , a contradiction. If two elements are in $W_3$ (since the last element cannot be in $W_3$ ), we get either $$\lambda{x_1}+x_2-\lambda{x_1}-\lambda^2{x_2}=(1-\lambda^2)x_2 \in W_3 \implies x_2 \in W_3$$ or $$\lambda{x_1}
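The assumption $\mathbb{F} \ne \mathbb{F}_2$ is essential: over the two-element field the statement fails. A small stdlib check of the standard counterexample in $\mathbb{F}_2^2$ (the three lines spanned by $(1,0)$ , $(0,1)$ , $(1,1)$ cover the whole plane, so their union is a subspace, yet none contains the other two):

```python
from itertools import product

W1 = {(0, 0), (1, 0)}   # span of (1,0) over F_2
W2 = {(0, 0), (0, 1)}   # span of (0,1)
W3 = {(0, 0), (1, 1)}   # span of (1,1)
union = W1 | W2 | W3

# the union is all of F_2^2, hence trivially a subspace
assert union == set(product((0, 1), repeat=2))

# yet no W_i contains the other two
assert not (W2 | W3) <= W1
assert not (W1 | W3) <= W2
assert not (W1 | W2) <= W3
print("union is a subspace, but no W_i contains the other two")
```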
|linear-algebra|vector-spaces|
0
Building random variable with given variance and expectation
I'm given a variance V, an expectation E and a number n: the number of distinct values the random variable takes. Is there any general algorithm to construct a discrete random variable with such a variance and expectation? (The probability of each outcome should be a rational number.) A code implementation is allowed too :)
Take any discrete random variable $X$ which takes $n$ different values with mean $\mu$ and variance $\sigma^2$ . For instance you can take $X$ uniformly distributed on $\{0,1,\cdots,n-1\}$ which has mean $\mu=\frac{n-1}{2}$ and variance $\sigma^2=\frac{n^2-1}{12}$ . Then $Y=E+\sqrt V\frac{X-\mu}{\sigma}$ has mean $E$ , variance $V$ and takes $n$ different values.
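A direct sketch of this construction (the values of $E$ , $V$ , $n$ below are arbitrary demo choices; the uniform distribution keeps every probability equal to the rational number $1/n$ ):

```python
import math

E, V, n = 3.0, 2.0, 5                  # demo inputs
mu = (n - 1) / 2                       # mean of X uniform on {0,...,n-1}
sigma = math.sqrt((n * n - 1) / 12)    # its standard deviation

# Y = E + sqrt(V) * (X - mu) / sigma, each value taken with probability 1/n
values = [E + math.sqrt(V) * (k - mu) / sigma for k in range(n)]

mean = sum(values) / n
var = sum((v - mean) ** 2 for v in values) / n
print(sorted(values), mean, var)   # n distinct values with mean E and variance V
```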
|probability|random-variables|expected-value|variance|
0
Rationale behind Conditional Proof
For some time now, I have been able to use the Conditional Proof rule in Natural Deduction. We assume A, derive B within the scope of the assumption and discharge the assumption to get that A implies B. This is, I believe, the syntactic approach to the proof, pure rule-based. However, what is the motivation behind this rule? In other words, why do we take it to be A implies B and not, say, A and B, or, other truth-functions? One suggestion may be that we take the implication truth-function because we have assumed A to be true and B to be true too if A is true, which defines the implication truth-function. But it is not clear to me if in a formal proof of validity where we have to derive the conclusion using pre-established rules only, we are allowed to conceive that the formulas in the proof have truth values. I am might be making some mistake at some fundamental level. If so, where have I gone wrong? Please help me clarify my doubts. Any help would be appreciated.
You are right that a formal proof, and any of its individual rule applications, is a purely syntactical object. Indeed, that is the very point of using a formal language with formal rules: they allow you to ‘figure things out’ by mere symbol manipulation. Indeed, the symbols do not by themselves mean anything. For all we know, the $A$ ’s and $B$ ’s stand for fruits, and the $\to$ ’s and $\land$ ’s for operation on those fruits, like putting them together in a fruit salad. However, if we do look at these symbols as representing sentences with a truth-value, then we’ll find that the inference rules reflect what we consider logically valid inference principles. Therefore, a formal derivation of $B$ that starts with assumption $A$ means that $B$ logically follows from $A$ , which we can symbolize as a conditional $A \to B$ . In other words: yes, a proof is all ‘just’ purely syntactical, but they ‘mirror’ meaningful inferences.
|logic|proof-writing|proof-explanation|natural-deduction|
0
Show that the map $f(z) = z^k$ is a open map in the complex plane
I can see that the map $f(z) = z^k$ is an homeomorphism for neighbourhoods around non-zero points not including zero. So I need to check only around neighbourhoods of zero. Can anyone help me out? Edit: As mentioned in the comments, I forgot to mention $k$ here is a positive integer.
$f$ is holomorphic: $f'(z)=kz^{k-1}.$ And it's nonconstant. Therefore by the open mapping theorem it is open. Btw it is not a homeomorphism. What about $k$ th roots of unity, for instance?
|general-topology|complex-analysis|
0
Doubts about Radon-Nikodym Theorem
I'm learning the Radon-Nikodym theorem and I have some doubts now. Let $\mu(B(x,r))=r^2$ for each $x\in\mathbb{R}$ and $r>0$ ; then $\mu$ is a Radon measure on $\mathbb{R}$ and $\mu \ll \mathscr{H}^1$ . By the Radon-Nikodym theorem, we have $\mu(E)=\int_E f \,d\mathscr{H}^1$ for some $f$ . We can calculate that $f(x)=\lim_{r\to 0}\frac{\mu(B(x,r))}{\mathscr{H}^1(B(x,r))}=\lim_{r\to 0}\frac{r^2}{2r}=0$ . That is, $\mu(B(x,1))=\int_{B(x,1)}0\,d\mathscr{H}^1=0$ , which contradicts the fact that $\mu(B(x,1))=1$ . What's wrong? I can't understand.
The problem is that $\mu$ is not additive: $\mu([0, 2]) = \mu(B(1, 1)) = 1^2 = 1$ , but $\mu([0, 1]) + \mu([1, 2]) = \mu(B(\frac 1 2, \frac 1 2)) + \mu(B(1 + \frac 1 2, \frac 1 2)) = (\frac 1 2)^2 + (\frac 1 2)^2 = \frac 1 4 + \frac 1 4 = \frac 1 2$ . So $\mu$ is not even a measure.
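The failure of additivity is just arithmetic, checked in a few lines (stdlib sketch):

```python
# mu(B(x, r)) = r^2 cannot be a measure: split [0, 2] = B(1, 1) as [0, 1] ∪ [1, 2]
mu = lambda r: r ** 2

whole = mu(1.0)               # [0, 2] = B(1, 1)
halves = mu(0.5) + mu(0.5)    # [0, 1] = B(1/2, 1/2) and [1, 2] = B(3/2, 1/2)

print(whole, halves)   # 1.0 vs 0.5: additivity fails
```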
|real-analysis|measure-theory|radon-nikodym|
0
extension condition for free abelian groups
if $G$ is a free abelian group with basis $\{a_\alpha\}$ then, given elements $\{y_\alpha\}$ of an abelian group $H$ , there are homomorphisms $h_\alpha : G_\alpha \to H$ such that $h_\alpha(a_\alpha)=y_\alpha$ (because $G_\alpha$ is infinite cyclic), where $G_\alpha$ is the subgroup generated by $a_\alpha$ . The above is part of a proof from Munkres' Topology (page 411); can someone explain how the infinite cyclicity of these subgroups implies the existence of such homomorphisms?
Let $G = \langle a_\alpha \rangle_\alpha$ be free abelian and for fixed $\alpha$ , let $G_\alpha = \langle a_\alpha \rangle \subseteq G$ . Define $h_\alpha: G_\alpha \to H$ by $h_\alpha({a_\alpha}^n) = {y_\alpha}^n$ for $n \in \mathbb{Z}$ . Then $h_\alpha$ is well-defined as a mapping between sets because $G_\alpha$ is infinite cyclic and it is a group homomorphism because for $m, n \in \mathbb{Z}$ we have $h_\alpha({a_\alpha}^n \cdot {a_\alpha}^m) = h_\alpha({a_\alpha}^{n+m}) \overset{\text{Def. of } h_\alpha}{=} {y_\alpha}^{n+m} = {y_\alpha}^n \cdot {y_\alpha}^m \overset{\text{Def. of } h_\alpha}{=} h_\alpha({a_\alpha}^n) \cdot h_\alpha({a_\alpha}^m)$ .
|group-theory|cyclic-groups|group-homomorphism|free-groups|
0
Graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$
$\lim_{x\to0+}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ $\lim_{x\to0-}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ I tried to solve these by graphing $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ . The graph of the function $(\lfloor x \rfloor + \lfloor1-x\rfloor)$ is the straight line $y=0$ : https://www.desmos.com/calculator/la8vnf0z0f But the function doesn't give $0$ for all values of $x$ , as $(\lfloor 5 \rfloor + \lfloor1-5\rfloor) =1$ . Then why is the graph of the function the straight line $y=0$ ?
To calculate $\lim_{x\to 0^+}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ , WLOG take $x\in(0,1)$ . Then, $1-x\in (0,1)$ , so $$\lim_{x\to0^+}(\lfloor x \rfloor + \lfloor1-x\rfloor) = \lim_{x\to0^+}(0 + 0) = 0.$$ To calculate $\lim_{x\to 0^-}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ , WLOG take $x\in(-1,0)$ . Then, $1-x\in (1,2)$ , so $$\lim_{x\to0^-}(\lfloor x \rfloor + \lfloor1-x\rfloor) = \lim_{x\to0^-}(-1 + 1) = 0.$$ Therefore, the limit $\lim_{x\to 0}(\lfloor x \rfloor + \lfloor1-x\rfloor)$ exists and is equal to $0$ . To better understand what is going on, let $x\in (n,n+1)$ for some integer $n$ . Then, $1-x \in (-n,1-n)$ , so $\lfloor x \rfloor + \lfloor1-x\rfloor = n +(-n) = 0$ . If $x = n$ for some integer $n$ , then $\lfloor x \rfloor + \lfloor1-x\rfloor = n + (1-n) = 1$ . We conclude that $$\lfloor x \rfloor + \lfloor1-x\rfloor= \begin{cases} 0,& x\not\in\mathbb Z\\ 1,& x\in \mathbb Z\end{cases}$$ so the graph of the function almost is the line $y = 0$ ; it simply has discontinuities at the integers, where its value jumps to $1$ .
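The case analysis is easy to spot-check (stdlib only):

```python
import math

def f(x):
    return math.floor(x) + math.floor(1 - x)

# 0 away from the integers, 1 at every integer
print(f(0.5), f(-0.3), f(5), f(0))   # 0 0 1 1
```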
|real-analysis|algebra-precalculus|graphing-functions|ceiling-and-floor-functions|desmos|
0
finding symmetric closure of inequality
Rosen's Discrete Mathematics and Its Applications says: What is the symmetric closure of the relation $R=\{(a,b) \mid a >b\}$ on the set of positive integers? The book's answer is the following: $R \cup R^{-1}=\{(a,b) \mid a >b\} \cup \{(b,a) \mid a >b\}= \{(a,b) \mid a \neq b\}$ I think that it should have been $R \cup R^{-1}=\{(a,b) \mid a >b\} \cup \{(b,a) \mid \color{blue}{b >a}\}= \{(a,b) \mid a \neq b\}$ The book gives the same answer in other editions, so I think that it is not just a typo. Can you explain why my answer is not correct?
I think you're confused about the set-builder notation. Consider the pair $(1, 2)$ . Clearly, $(1, 2) \in R^{-1}$ . In the book's solution, taking $b = 1$ and $a = 2$ , we see that indeed $a > b$ , so that $(1, 2) \in \{(b, a) \mid a > b\}$ . In your solution, taking $b = 1$ and $a = 2$ , we see that $b \not> a$ , so that $(1, 2) \notin \{(b, a) \mid b > a\}$ .
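On a finite initial segment the book's identity can be checked exhaustively (stdlib sketch; restricting to $\{1,\dots,5\}$ is my choice):

```python
from itertools import product

N = range(1, 6)
R = {(a, b) for a, b in product(N, N) if a > b}
R_inv = {(b, a) for (a, b) in R}

# symmetric closure = all pairs with distinct coordinates
assert R | R_inv == {(a, b) for a, b in product(N, N) if a != b}
print(len(R), len(R_inv), len(R | R_inv))   # 10 10 20
```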
|discrete-mathematics|relations|
0
All module homomorphisms from $\mathbb{Z}^n$ to $\mathbb{Z}$
This is a question on something more general, but for now I'd like to keep it simple. Consider a module homomorphism $\phi:\mathbb{Z}^n\to\mathbb{Z}$ , where $n$ is a positive integer. Here I will consider $\mathbb{Z}^n$ as a (left) $\mathbb{Z}$ -module. My question is: what are all the types of (module) homomorphisms possible? This question arises since I was first reading about ring homomorphisms $\psi:\mathbb{Z}^n\to\mathbb{Z}$ and it seems to me (although I haven't been able to prove it yet) that all said homomorphisms $\psi$ are just projections onto the $i$ -th ( $i=1,2,...,n$ ) coordinate (except when $n=1$ , where we have the zero homomorphism too). I have a suspicion that the module homomorphisms aren't going to be of the same form as the ring homomorphisms, but I might be mistaken. Thanks for any help in advance!
$\mathbb Z^n$ is the direct sum of $n$ copies of $\mathbb Z$ . The restriction of $\phi$ to each summand induces a module homomorphism from $\mathbb Z$ to $\mathbb Z$ , and together these homomorphisms determine $\phi$ . The module homomorphisms from $\mathbb Z$ to $\mathbb Z$ , in turn, are determined by their value at $1$ , and are thus of the form $f_m:k\mapsto km$ with $m\in\mathbb Z$ . Thus, the module homomorphisms from $\mathbb Z^n$ to $\mathbb Z$ are the functions $f_x:y\to x\cdot y$ with $x\in\mathbb Z^n$ . You can check that $kf_x(y)=k(x\cdot y)=x\cdot(ky)=f_x(ky)$ and $f_x(y+z)=x\cdot(y+z)=x\cdot y+x\cdot z=f_x(y)+f_x(z)$ , as required. This is analogous to the vector space homomorphisms (i.e. linear maps) from $\mathbb R^n$ to $\mathbb R$ being given by $f_x:y\mapsto x\cdot y$ with $x\in\mathbb R^n$ .
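A quick randomized check that $y \mapsto x \cdot y$ is indeed $\mathbb{Z}$ -linear, and that $x$ is recovered from the values on the standard basis (stdlib sketch; the dimension and coefficient ranges are arbitrary):

```python
import random

n = 4
x = [random.randint(-10, 10) for _ in range(n)]   # x determines f_x

def phi(y):
    # f_x(y) = x · y
    return sum(a * b for a, b in zip(x, y))

# phi(e_i) recovers the i-th coordinate of x
for i in range(n):
    e = [1 if j == i else 0 for j in range(n)]
    assert phi(e) == x[i]

# additivity and Z-homogeneity on random vectors
for _ in range(100):
    y = [random.randint(-10, 10) for _ in range(n)]
    z = [random.randint(-10, 10) for _ in range(n)]
    k = random.randint(-5, 5)
    assert phi([a + b for a, b in zip(y, z)]) == phi(y) + phi(z)
    assert phi([k * a for a in y]) == k * phi(y)
print("f_x is a Z-module homomorphism determined by x")
```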
|abstract-algebra|modules|group-homomorphism|ring-homomorphism|
0
Formulae with symbol outside the language of a structure
A basic question, but it's a little bit confusing for me. Given languages $L \subset L'$ , an $L'$ formula $\phi$ and an $L$ -structure $M$ and its $L'$ -reduct $M'$ how do I define satisfaction in a case, where I don't have an interpretation of a given symbol contained in a formula, e.g. symbol $0$ is in $L'$ but not in $L$ and we have formula $x + 0 = x$ obviously satisfiable in $(N, +, 0)$ . Now is it satisfiable in $(N, +)$ or not? Or it just makes no sense to consider such a formula in this reduct? Sorry if I'm being incoherent, first-timer here...
You are confusing some things. There is no $L'$ -reduct of $M$ , $L'$ contains more symbols than $L$ . So if you are given an $L'$ -structure $M'$ then you can get an $L$ -reduct $M$ by forgetting the extra symbols in $L'$ . To take your example, let $L' = \{+, 0\}$ and $L = \{+\}$ . Then $M' = (\mathbb{N}, +, 0)$ is an $L'$ -structure. Forgetting the extra symbols, in this case just the $0$ symbol, we get an $L$ -structure $M = (\mathbb{N}, +)$ . So $(\mathbb{N}, +)$ is a reduct of $(\mathbb{N}, +, 0)$ , and not the other way around. Now, about evaluating formulas. We cannot evaluate an $L'$ -formula in an $L$ -structure, exactly because of the issue you encountered: the formula may use symbols from $L'$ that are not in $L$ . However, because all the symbols of $L$ are also in $L'$ , it makes sense to evaluate an $L$ -formula in an $L'$ -structure. There are two ways to make this precise: Since all the symbols in $L$ are also in $L'$ , we could argue that the set of $L$ -formulas is a
|logic|first-order-logic|model-theory|
0
If $x\mapsto f(x,y)$ and $y\mapsto f(x,y)$ is continuous differentiable, then whether $f(x,y)$ is continous?
Here is the question in my exercise. Let $f(x,y)$ be a function defined on a rectangle $[a,b]\times [c,d]$ . Assume that for each fixed $y \in[c,d]$ the function $x\mapsto f(x,y)$ is a continuously differentiable function of $x$ on $[a,b]$ ; and for each fixed $x \in[a,b]$ the function $y\mapsto f(x,y)$ is a continuously differentiable function of $y$ on $[c,d]$ . Can you conclude that $f(x,y)$ is a continuous function on the rectangle $[a,b]\times [c,d]$ ? If your answer is Yes then give a proof; if your answer is No then give a counterexample. Here is my thinking: Fix any point $P_0$ in $[a,b]\times [c,d]$ . If $x\mapsto f(x,y)$ and $y\mapsto f(x,y)$ are both continuously differentiable, that means the partial derivatives $f_x$ and $f_y$ are continuous. So $f(x,y)$ is differentiable at $P_0$ . Then $f(x,y)$ is continuous. Is my thinking right? If it's wrong, what mistake did I make?
The function $$f(x,y):=\begin{cases} \frac{xy}{x^2+y^2}&\text{if }(x,y)\ne(0,0)\\0&\text{if }(x,y)=(0,0)\end{cases}$$ is infinitely differentiable on $\Bbb R^2\setminus\{(0,0)\}$ . Moreover, $$\forall x\in\Bbb R\quad f(x,0)=0,$$ so that $x\mapsto f(x,y)$ is infinitely differentiable on $\Bbb R$ for every $y\in\Bbb R$ , including $y=0$ . Similarly, $y\mapsto f(x,y)$ is infinitely differentiable on $\Bbb R$ for every $x\in\Bbb R$ . However, at $(0,0)$ , $f$ has no limit (hence is not continuous), since $$\lim_{x\to0}f(x,x)=\frac12\ne0=\lim_{x\to0}f(x,0).$$
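The counterexample's behaviour at the origin is easy to check numerically (stdlib sketch):

```python
def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * y / (x * x + y * y)

# along either axis f is identically 0, but along the diagonal it is 1/2
for t in (0.1, 0.01, 0.001):
    assert f(t, 0.0) == 0.0
    assert f(0.0, t) == 0.0
    assert abs(f(t, t) - 0.5) < 1e-12

print("two different limits along different paths: f is not continuous at (0,0)")
```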
|calculus|multivariable-calculus|
1
Probability that integers chosen can represent the lengths of the sides of a triangle
Three integers are randomly chosen from $1$ to $5$ with repetition not allowed. Find the probability that the integers chosen can represent the lengths of the sides of a triangle. I first found the number of ways to choose the three integers: $\binom{5}{3} = 10$ ways to choose three side lengths from the numbers $1$ through $5$ . Here's where I'm stuck. There are 10 combinations: $$\{1,2,3\}$$ $$\{1,2,4\}$$ $$\{1,2,5\}$$ $$\{1,3,4\}$$ $$\{1,3,5\}$$ $$\{1,4,5\}$$ $$\{2,3,4\}$$ $$\{2,3,5\}$$ $$\{2,4,5\}$$ $$\{3,4,5\}$$ But now how do I know which combinations can represent the lengths of a triangle?
It is known that there is a triangle with sides $a$ , $b$ , $c$ if and only if the three triangle inequalities hold: $$\begin{cases}a+b>c\\b+c>a\\ c+a>b\end{cases}$$ In our case repetition is not allowed, so the numbers $a$ , $b$ , $c$ are different, and we can arrange them: $$a<b<c$$ and then we only need to check that $$a+b>c.$$ You found all the triples. There are $\binom53=10$ of them. Only $3$ of them fit: $$2,3,4$$ $$2,4,5$$ $$3,4,5$$ All the others don't fit. The triples of the form $1,b,c$ aren't good, since always $1+b\le c$ . And the triple $2,3,5$ is also bad: $2+3=5$ . So the probability is $\frac{3}{10}$ .
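With only ten candidate triples, the triangle-inequality filter can be brute-forced (stdlib sketch):

```python
from itertools import combinations

triples = list(combinations(range(1, 6), 3))             # all 10 unordered triples
triangles = [t for t in triples if t[0] + t[1] > t[2]]   # t is sorted ascending

print(len(triples), triangles)   # 10 [(2, 3, 4), (2, 4, 5), (3, 4, 5)]
```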
|probability|combinatorics|geometry|combinations|triangles|
1
An example of a retract that is not a deformation retract.
I was told that the retraction of $S^2$ into a point is a retract but not a deformation retract , could someone explain this to me please? If I understood this wrongly, could someone give me an example of a retract that is not a deformation retract?
Any point $p\in S^2$ is a retract of $S^2$ , because the constant map $f:S^2\to\{p\}$ given by $f(x)=p$ is continuous and fixes $p$ . Nonetheless, no point is a deformation retract of $S^2$ , because that would mean $S^2$ would be contractible, and that is false . Intuitively, you can map $S^2$ to $p$ continuously (namely, by sending all $S^2$ to $p$ ), but you cannot continuously shrink the sphere to a point while staying in the sphere itself: there is always a hole inside.
|algebraic-topology|fundamental-groups|retraction|
1
How to evaluate line integrals without using Green's Theorem?
I recently learnt about line integrals and Green's Theorem. But the lecturer gave us an assignment to answer how to calculate line integrals "directly without using Green's Theorem". I've looked at the notes, but I can't seem to see the difference between line integral with Green's Theorem and without. The question is as below: Consider the vector field $F(x,y) = xy\textbf{i} + x^2\textbf{j}=F_1\textbf{i} + F_2\textbf{j}$, let C be the rectangle with vertices $(0,0), (3,0), (3,1)$ and $(0,1)$, let $T$ denote the unit tangent vector to $C$ directed anticlockwise around $C$, and let $n$ denote the unit normal vector to $C$ directed out of the region bounded by $C$. Let $D$ denote this region bounded by $C$. (a) Calculate the line integral $\int{F\cdot T ds}$ directly without using Green's theorem. (b) Calculate the double integral $\int\int\left(\frac{dF_2}{dx} - \frac{dF_1}{dy}\right)dA$ without using Green's theorem.
a) $$\int_{x=0}^{x=3}(x^2\textbf{j})\cdot (\textbf{i})dx+\int_{y=0}^{y=1}(3y\textbf{i}+3^2\textbf{j})\cdot(\textbf{j})dy+\int_{x=3}^{x=0}(x\textbf{i}+x^2\textbf{j})\cdot (\textbf{i})dx+\int_{y=1}^{y=0}(0\textbf{i}+0\textbf{j})\cdot(\textbf{j})dy=0+9-\frac92+0=\frac92$$ (The anticlockwise direction is already carried by the limits of integration, so each edge is dotted with $\textbf{i}$ or $\textbf{j}$ , not $-\textbf{i}$ or $-\textbf{j}$ ; counting the direction twice gives the incorrect value $\frac{27}2$ .) b) $$\iint(\frac{\partial}{\partial x}x^2-\frac{\partial}{\partial y}xy)dA=\int_0^1\int_0^3(2x-x)dxdy=\int_0^1\frac92 dy=\frac92$$ As Green's theorem requires, (a) and (b) agree.
|line-integrals|
0
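Worth noting: Green's theorem forces (a) and (b) to agree, so a numeric check of both is a good way to catch sign slips in the edge parametrisations. A small Python sketch (trapezoid rule, which is exact here since every integrand is linear):

```python
# Numeric check that the circulation of F = (xy, x^2) around the
# rectangle [0,3] x [0,1] equals the double integral of 2x - x = x.
def trapz(f, a, b, n=1000):
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

bottom = trapz(lambda x: x * 0.0, 0, 3)   # y = 0: F.dr = (x*0) dx
right  = trapz(lambda y: 9.0, 0, 1)       # x = 3: F.dr = 3^2 dy
top    = trapz(lambda x: x * 1.0, 3, 0)   # y = 1: F.dr = x dx, x from 3 to 0
left   = trapz(lambda y: 0.0, 1, 0)       # x = 0: F.dr = 0
circulation = bottom + right + top + left

# curl integrand x is independent of y, so multiply by the height 1
double_integral = trapz(lambda x: x, 0, 3) * 1.0
print(circulation, double_integral)  # both ≈ 4.5
```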
Given a open set U containing a point z in the complex plane. Check that there exists a point w in U such that |w| > |z|.
I need this to prove the maximum modulus principle . If $f$ is a non-constant analytic function on a connected open set $U$ and $z \in U$ , then by the open mapping theorem we know that $f(U)$ is an open set containing $f(z)$ . Thus, we use the above statement to conclude that there exists a $w \in f(U)$ such that $|w| > |f(z)|$ . Thus for some $u\in U$ we have $w = f(u)$ . Hence $|f|$ does not have a maximum at any point of $U$ .
Interpret $~|z|~$ as the (non-negative) length of the line segment from $~0~$ to $~z.~$ Since $~U~$ is open, $~z~$ is an interior point, which implies that there exists $~r > 0,~$ such that the circle of radius $~r~$ centered at $~z~$ is completely in $~U.~$ Draw the line segment from $~0~$ through $~z~$ and extend it until it crosses the circle at two points. One of those two points must be further away from $~0~$ than $~z~$ is. Edit As suggested in the comment by Robert Israel, the case of $~z = 0~$ yields to a similar argument: there exists $~r > 0,~$ such that the circle of radius $~r~$ centered at $~(z = 0)~$ is in $~U,~$ and every point of that circle has modulus $~r > 0 = |z|.$
|complex-analysis|
1
Given that $I_n=\int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin x}dx$,how to prove that $I_n\leq 1+\ln\sqrt{2n-1}$?
It's given that $$I_n=\int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin x}dx,$$ and I need to prove that $$I_n\leq1+\ln\sqrt{2n-1}.$$ I have got that $$I_n-I_{n-1}=\int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin x}dx-\int_0^{\frac{\pi}{2}}\frac{\sin^2(n-1)x}{\sin x}dx\\=\int_0^{\frac{\pi}{2}}\frac{[\sin nx-\sin(n-1)x][\sin nx+\sin(n-1)x]}{\sin x}dx\\=\int_0^{\frac{\pi}{2}}\frac{2\cos\frac{(2n-1)x}{2}\sin\frac{x}{2}\cdot2\sin\frac{(2n-1)x}{2}\cos\frac{x}{2}}{\sin x}dx\\=\int_0^{\frac{\pi}{2}}\frac{\sin[(2n-1)x]\sin x}{\sin x}dx\\=\int_0^{\frac{\pi}{2}}\sin[(2n-1)x]dx\\=\frac{1}{2n-1}.$$ Then it's easy to know that $$I_n=\sum_{r=1}^{n}\frac{1}{2r-1}.$$ That's all the work I've done. What should I do next?
By induction: For $n=1,2$ , $I_1=1,I_2=\frac{4}{3}$ is easy to check. Suppose $I_n\leq 1+\ln\sqrt{2n-1}$ , then $$I_{n+1}=I_n+\frac1{2n+1}\leq 1+\ln\sqrt{2n-1}+\frac1{2n+1}.$$ We need to prove $$1+\ln\sqrt{2n-1}+\frac1{2n+1}\leq 1+\ln\sqrt{2n+1},$$ which is equivalent to $$\frac1{2n+1}\leq \frac12\ln\frac{2n+1}{2n-1}.$$ This follows from the inequality : $$\frac{x}{1+x}<\ln(1+x),\quad x>0,$$ just take $x=\frac{2}{2n-1}$ .
|real-analysis|calculus|sequences-and-series|inequality|definite-integrals|
0
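Since $I_n=\sum_{r=1}^{n}\frac1{2r-1}$ , the bound is also easy to sanity-check numerically (Python, just a check, not a proof):

```python
import math

# Check I_n = sum_{r=1}^{n} 1/(2r-1) <= 1 + ln(sqrt(2n-1)) for n up to 10^4.
# A small tolerance absorbs float rounding at n = 1, where equality holds.
I = 0.0
ok = True
for n in range(1, 10_001):
    I += 1.0 / (2 * n - 1)
    if I > 1 + math.log(math.sqrt(2 * n - 1)) + 1e-12:
        ok = False
print(ok)  # True
```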
$AA^\top + A + A^\top = 0$, then $|\det A|\leq 2^n$
Let $A\in \mathbb R^{n\times n}$ such that $AA^\top + A + A^\top = 0$ . Prove that $|\det A|\leq 2^n$ . $AA^\top + A + A^\top = 0$ rewrites as $(A+I_n)(A^\top +I_n) = I_n$ , hence $A+I_n$ is an orthogonal matrix and $\det(A+I_n)\in \{-1,1\}$ . If $\lambda$ is a real eigenvalue of $A$ (with an eigenvector in $\mathbb R^n$ ) then $\lambda\in \{0,-2\}$ . However $A$ may have complex eigenvalues, or the eigenvectors may have complex entries. I cannot make further progress. I'm not supposed to know that an orthogonal matrix can be diagonalized over $\mathbb C$ with eigenvalues having modulus $1$ , I'm thus looking for a solution which does not leverage this fact.
The challenge for me was not to use auxiliary results on complex eigenvalues of orthogonal matrices (see Ben Grossmann's question ). A replacement for such results is the canonical form . Subtracting $I_2$ from each $2\times 2$ block, each block is easily seen to contribute a factor $\leq 4$ to the determinant.
|linear-algebra|matrices|inequality|determinant|
1
Misconception about formulation of P=NP
There is a misconception about P=NP that I can't resolve. For example, Hilbert's tenth problem says that we can't decide, given a Diophantine equation, whether it is solvable or not. But we can clearly check that a given solution is indeed a solution in polynomial time. Doesn't that disprove the P=NP conjecture?
You can't check solutions in polynomial time either. See the formal definition . The solution must have length polynomial in the size of the input. For some Diophantine equations, there is a solution but the number of digits in the smallest solution is so large that it can't be checked in polynomial time.
|number-theory|
0
Largest circle contained within a triangle
I am interested in figures of elementary geometry that can be used to illustrate the Hausdorff distance. You don't even have to know what it is. Here's what it looks like in geometrical terms: Let $T=ABC$ be a triangle (with its interior) and let $FrT=Fr(ABC)=AB\cup BC\cup CA$ be the boundary of $ABC$ . I'm looking for the maximum of $$\{d(M,\text{Fr}T),M\in T\}$$ If $M\notin$ the bisector of $\widehat{A}$ , suppose that $M$ is closer to $AB$ than to $AC$ ; then, with the notations in the figure, $d(M,AB)=MI<d(M,AC).$ So the farthest point from $FrT$ is necessarily on the bisector of $\widehat{A}$ . Similarly, it will be on the bisector of $\widehat{B}$ . So that's $D$ , the incenter of $ABC$ . And the distance we are looking for is then $r$ , the radius of the inscribed circle. I am not absolutely convinced by what I have written. So I would appreciate the time you would take to proofread.
There seems to be a problem in your reasoning. Suppose $G$ is very near $BC$ . Then $d(M, FrT)$ is greater than $d(G,FrT)$ , though it’s true that $d(M, AB)<d(M,AC)$ . One can reason as follows. Suppose $M$ is not $D$ . Then it is (at least) in one of the three triangles: $ABD$ , $BCD$ , $CAD$ . Let it be the triangle $ABD$ . But then $$d(M,AB)\le d(D,AB).$$ And since $$d(M,FrT)=\min(d(M,AB), d(M,BC), d(M,CA))\le d(M,AB)$$ and $$d(D,AB)=d(D,FrT),$$ we have $$d(M,FrT)\le d(D,FrT).$$
|geometry|solution-verification|hausdorff-distance|
1
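The conclusion (the farthest interior point from the boundary is the incenter, at distance $r$) can be sanity-checked numerically on a concrete triangle, say the 3-4-5 right triangle, where the inradius is $r=(3+4-5)/2=1$ . A Python grid search (my own illustration, not part of the argument):

```python
# Grid check on the triangle with vertices (0,0), (4,0), (0,3):
# for an interior point, the distance to the boundary is the minimum of
# the distances to the side lines y = 0, x = 0 and 3x + 4y - 12 = 0.
def dist_to_boundary(x, y):
    return min(y, x, abs(3 * x + 4 * y - 12) / 5)

best = 0.0
for i in range(401):           # x = 0.00, 0.01, ..., 4.00
    for j in range(301):       # y = 0.00, 0.01, ..., 3.00
        x, y = i / 100, j / 100
        if 3 * x + 4 * y <= 12:  # keep only points inside the triangle
            best = max(best, dist_to_boundary(x, y))
print(round(best, 6))  # 1.0, attained at the incenter (1, 1)
```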
Riemann integrable function continuity points
$f$ is Riemann integrable on $[a,b]$ , $\int_a^bf(x)^2dx>0$ . Prove that there is a point of continuity $x$ on $[a,b]$ at which $f(x) \neq 0$ . I provided the following: almost every point of $[a,b]$ is a continuity point. $F(x) = \int_a^xf(t)^2dt$ is differentiable at every continuity point, and at such a point $F'(x) = f^2(x)$ . If $F(b) > 0$ then there is a point where $f^2(x) > 0$ , and it has a continuity point in some ball. Is my reasoning wrong somewhere?
One can show the following fact first: If $f$ is Riemann integrable on $[a,b]$ , and the set $\{x\in[a,b]:f(x)\neq0\}$ has measure $0$ , i.e., $f$ is zero almost everywhere, then $\int_a^bf(x)\,dx=0.$ With this fact, if we assume $f(x)=0$ for any point of continuity $x$ of $f$ , then by the fact that $f$ should be continuous a.e., it becomes a function that is zero a.e., and so is $f^2$ , which contradicts the assumption that $\int_a^b (f(x))^2\,dx>0.$
|real-analysis|integration|definite-integrals|riemann-integration|
0
How to find $\frac{\tan(x+y)}{\tan(x)}$ if $5\sin(2x+y)=7\sin(y)$
I saw this problem Find $\frac{\tan(x+y)}{\tan(x)}$ if $5\sin(2x+y)=7\sin(y)$ I tried to use $$\sin(x+y)\cos(x)+\cos(x+y)\sin(x)= \frac{7}{5}\sin(y)$$ $$\tan(x+y)=\frac{\frac{7}{5}\frac{\sin(y)}{\cos(x+y)}- \sin(x) }{\cos(x)}$$ $$\frac{\tan(x+y)}{\tan(x)}={\frac{7}{5}\frac{\sin(y)}{\sin(x)\cos(x+y)}- 1 }$$ which led to nothing. I also tried to use the $\tan(a+b)$ formula, but it also didn't lead to anything useful.
Hint: Observe that $\dfrac{2x+y+\ \ y}2=?,\dfrac{2x+y-\ \ y}2=?$ $$\dfrac{\sin(2x+y)}{\sin y}=\dfrac75$$ Apply Componendo & Dividendo $$\dfrac{7+5}{7-5}=\dfrac{\sin(2x+y)+\sin y}{\sin(2x+y)-\sin y}$$ Apply Prosthaphaeresis Formulas
|algebra-precalculus|trigonometry|
0
von Neumann subalgebras of the prime von Neumann algebras
A von Neumann algebra $M$ is called prime if $M=M_1\bar{\otimes} M_2$ implies that $M_1$ or $M_2$ is a type $I$ von Neumann algebra. A factor is prime if and only if it cannot be factorized as the tensor product of two diffuse von Neumann algebras. Suppose a von Neumann algebra $M$ is prime and $N$ is a von Neumann subalgebra of $M$ . Is $N$ also prime?
No. Every II $_1$ factor contains a hyperfinite II $_1$ subfactor $R$ , which is not prime since $R = R \mathbin{\bar{\otimes}} R$ . So just choose a prime II $_1$ factor $M$ and $N = R$ .
|operator-algebras|von-neumann-algebras|
1
How to know if there are real solutions or not?
Consider the following system of equations: $$\begin{cases} 1 + 6(x+y) + 8xy = 0 \\ y = 6x^2 - 8 xy^2 \end{cases} $$ If for example from the first one I solve for $x$ , and then I substitute into the second one I get a fourth-degree equation, even a bit painful since it's also complete. I checked with some program, and I found out that there are four solutions in $\mathbb{C}$ , and I mean strictly complex conjugate. My question is: is there some smart way to figure out in advance, and without programs, if there are no real solutions in this case?
Substituting $y=-\frac{2(4x+3)}{6x+1}$ the system reduces to one equation, namely to $$ 0=64x^4 + 48x^3 + 28x^2 + 6x + 1=(16x^2 + 4x + 1)(4x^2 + 2x + 1). $$ It is easy to see that the two quadratic equations don't have real solutions: both discriminants, $4^2-4\cdot16=-48$ and $2^2-4\cdot4=-12$ , are negative. The decomposition can be found by writing $(16x^2+ax+b)(4x^2+cx+d)$ and comparing coefficients.
|complex-numbers|systems-of-equations|nonlinear-system|
0
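A quick Python check of the factorization and of the signs of the two discriminants (coefficient lists are lowest degree first; the helper is my own):

```python
# Verify 64x^4 + 48x^3 + 28x^2 + 6x + 1 = (16x^2+4x+1)(4x^2+2x+1)
# and that neither quadratic factor has real roots.
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists, lowest degree first.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

product = poly_mul([1, 4, 16], [1, 2, 4])
print(product)  # [1, 6, 28, 48, 64]

# Discriminants b^2 - 4ac of the two quadratic factors:
print(4**2 - 4 * 16 * 1, 2**2 - 4 * 4 * 1)  # -48 -12  (both negative)
```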
How to know if there are real solutions or not?
Consider the following system of equations: $$\begin{cases} 1 + 6(x+y) + 8xy = 0 \\ y = 6x^2 - 8 xy^2 \end{cases} $$ If for example from the first one I solve for $x$ , and then I substitute into the second one I get a fourth-degree equation, even a bit painful since it's also complete. I checked with some program, and I found out that there are four solutions in $\mathbb{C}$ , and I mean strictly complex conjugate. My question is: is there some smart way to figure out in advance, and without programs, if there are no real solutions in this case?
Let $(x,y)$ be a solution. If we assume $y=0$ , the first equation gives $1+6x=0$ while the second gives $0=6x^2$ , a contradiction, so $$y\neq 0$$ We have $$\begin{cases} -8xy=1+6x+6y \\ -8xy^2=y-6x^2 \end{cases}\implies \begin{cases} -8xy^2=y+6xy+6y^2 \\ -8xy^2=y-6x^2 \end{cases}$$ So finally, $$x^2+xy+y^2=0,$$ and since $x^2+xy+y^2=\left(x+\frac y2\right)^2+\frac{3y^2}4$ , this forces $x=y=0$ over the reals, contradicting $y\neq0$ . And I'll let you go on
|complex-numbers|systems-of-equations|nonlinear-system|
0
The number of solutions to the congruence equation $P(x) \equiv 0 (\mathrm{mod}\ p^\alpha)$
Let $P(x)$ be a polynomial with integer coefficients, and $p$ be a large prime. I want to find the number of solutions to the congruence equation $$P(x)\equiv 0(\mathrm{mod}\ p^\alpha).$$ In my practice the degree of $P(x)$ is $4$ . We can write $$ P(x) = \sum_{j=0}^4 a_j x^j.$$ I wonder that whether the number of solutions is at most $4$ when $\mathrm{gcd}(a_0,a_1,a_2,a_3,a_4)=1$ ? If not, is there a finite upper bound on the number of solutions (not depend on $p$ )? Thanks in advance for your help!
COMMENT.- (Just to help you a little). $(1)\space\space P(x)\equiv0\pmod{p^{\alpha}}\Rightarrow P(x)\equiv0\pmod p$ . If $P(x)$ has degree $4$ and has more than $4$ solutions, then all the coefficients of $P(x)$ are multiples of $p$ (i.e. are $0$ modulo $p$ ). $(2)$ Let $P(x_1)\equiv0\pmod p$ and put $x=x_1+pt_1$ where $t_1\in\mathbb Z$ , then look at the equation $P(x_1+pt_1)\equiv0\pmod{p^2}$ . Using Taylor's series for your polynomial $P(x)$ , you have $$P(x_1)+pt_1P'(x_1)\equiv0\pmod{p^2}$$ Here you could have trouble if $P'(x_1)\equiv0\pmod{p}$ . If $P'(x_1)\not\equiv0\pmod{p}$ , you have a solution $t_1=t'_1\pmod p$ so $x$ becomes $$x=x_1+pt'_1+p^2t_2=x_2+p^2t_2$$ and you have the equation $$P(x)\equiv0\pmod{p^3}$$ At this point $P'(x_2)\not\equiv0\pmod p$ ( Why? ), so this last equation has a unique solution $t_2$ with $t_2=t'_2\pmod p$ . Now you have $$x=x_2+p^2t'_2+p^3t_3=x_3+p^3t_3$$ and so on. ►If $P'(x_1)\not\equiv0\pmod{p}$ then $x_1$ leads to a
|number-theory|modular-arithmetic|polynomial-congruences|
1
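The lifting argument is easy to watch in action by brute force on a small example, say $P(x)=x^4-2$ and $p=7$ (Python; the example polynomial and prime are my own choice, not from the question):

```python
# Count roots of P(x) = x^4 - 2 modulo 7, 7^2, 7^3 by brute force.
# The roots mod 7 are x = 2 and x = 5; P'(x) = 4x^3 is nonzero mod 7
# at both, so by Hensel lifting each root lifts uniquely at every step
# and the count stays the same.
p = 7
P = lambda x: x**4 - 2
counts = []
for a in (1, 2, 3):
    m = p**a
    counts.append(sum(1 for x in range(m) if P(x) % m == 0))
print(counts)  # [2, 2, 2]
```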
Evaluate the Fourier coefficient
Now I want to figure out the following problem: Suppose that $f\in L^1(-\pi,\pi)$ . Define $F(x)$ as \begin{align} F(x) = \int_{-\pi}^x (f(t)-c_0(f)) dt\,,~x\in[-\pi,\pi] \end{align} where $c_n(f)$ is \begin{align} c_n(f) = \frac{1}{2\pi}\int_{-\pi}^\pi f(x) e^{-inx}dx\,. \end{align} Then evaluate $c_0(F)$ . I tried to solve this problem as follows. Using Fubini's theorem, we have \begin{align} 2\pi c_0(F) &= \int_{-\pi}^\pi F(x) dx = \int_{-\pi}^\pi \int_{-\pi}^x (f(t)-c_0(f)) dt dx \\ &= \int_{-\pi}^\pi \int_{t}^\pi dx (f(t)-c_0(f)) dt = \int_{-\pi}^\pi (\pi-t)(f(t)-c_0(f)) dt\,. \end{align} Since we have $\int_{-\pi}^\pi (f(t)-c_0(f))dt=0$ , then \begin{align} \int_{-\pi}^\pi (\pi-t)(f(t)-c_0(f))dt=\int_{-\pi}^\pi -t(f(t)-c_0(f))dt=\int_{-\pi}^\pi -tf(t)dt = 2\pi c_0(F)\,. \end{align} I'm stuck at this point. How can I proceed from here?
Now I have solved the OP using Conrad's suggestion. Since clearly $F\in W^1_1(-\pi,\pi)$ and $F(\pi)=F(-\pi)=0$ , hence it is periodic, we can say that the Fourier series of $F$ converges to $F$ pointwise, i.e., \begin{align} F(x)=c_0(F)+\sum\limits_{n\neq 0}\frac{c_n(f)}{in}e^{inx}\,. \end{align} Then, we have \begin{align} F(x)-F(-\pi)=\sum\limits_{n\neq0}\frac{c_n(f)}{in}(e^{inx}-e^{-in\pi})\,. \end{align} Integrating both sides from $-\pi$ to $\pi$ yields \begin{align} 2\pi c_0(F) &= \sum\limits_{n\neq0}\frac{c_n(f)}{in}\int_{-\pi}^\pi (e^{inx} - e^{-in\pi})\, dx = -2\pi\sum\limits_{n\neq0}\frac{(-1)^n c_n(f)}{in}\,, \end{align} since $\int_{-\pi}^\pi e^{inx}dx=0$ for $n\neq0$ and $\int_{-\pi}^\pi e^{-in\pi} dx = (-1)^n2\pi$ . Hence $$c_0(F)=\sum\limits_{n\neq0}\frac{(-1)^{n+1}c_n(f)}{in}\,;$$ note the paired $n$ and $-n$ terms need not cancel in general, so this sum is the value of $c_0(F)$ , in agreement with $-\frac1{2\pi}\int_{-\pi}^\pi tf(t)\,dt$ obtained in the question.
|fourier-analysis|fourier-series|
1
L'Hospital's Rule and Transfer Function
In an Electrical Engineering textbook I am given this function: $$ H_C(j\omega) = \frac{10^6j\omega}{10^{10} - \omega^2 + 10^5j\omega} $$ where $j = \sqrt{-1},\ \omega = 2\pi f,\ f=\text{frequency (Hz)}$ The textbook goes on to say 'the high frequency asymptote is $$ H(j\omega) = \frac{10^6}{j\omega} $$ ' If I take the limit as $\omega \to \infty$ , then isn't the fraction of the form $ \frac{\infty}{\infty}$ ? If not, why not? When I apply L'Hospital's rule here, I do not get the same answer for the high-frequency asymptote; I get $$ \frac{10^6j}{-2\omega + 10^5j} $$ so in the limit, the function goes to $0$ ?
Your confusion comes from an abuse of notation. The 'high frequency asymptote', denoted by $\omega \to \infty$ , shouldn't be understood as a limit. Actually, your book aims to study the behaviour of the transfer function $H(j\omega)$ for high values of $\omega$ , i.e. when $\omega \gg 1$ , which is not the same as its value at infinity. You are looking for an approximated function and not for a limit value . In consequence, you cannot use L'Hospital's rule. In the end, you only need to factorize the expression, as follows: $$ H(j\omega) = \frac{10^6j\omega}{(j\omega)^2} \frac{1}{1 + \frac{10^5}{j\omega} + \frac{10^{10}}{(j\omega)^2}} \approx \frac{10^6}{j\omega}, $$ and to argue that the last two terms of the denominator are negligible when $\omega \gg 1$ .
|calculus|
0
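One can also see numerically how good the approximation gets as $\omega$ grows (a Python sketch with the given numbers):

```python
# Compare H(jw) with the high-frequency asymptote 10^6/(jw) for large w.
def H(w):
    jw = 1j * w
    return 1e6 * jw / (1e10 + jw**2 + 1e5 * jw)   # note (jw)^2 = -w^2

for w in (1e7, 1e8, 1e9):
    asym = 1e6 / (1j * w)
    rel_err = abs(H(w) / asym - 1)
    print(w, rel_err)  # the relative error shrinks as w grows
```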
Coin flip puzzle, prove $P(X>Y) > P(Y>X)$
I recently encountered the following riddle: Let a fair coin be flipped $N > 2$ times. Player A gets a point every time there are two heads in a row, and player B gets a point every time a tail is followed by a head, the player with more points wins. Thus, for example, for the sequence THHH , player A wins with 2 vs. 1 points while in the sequence THHTHH , neither wins, as they both score two points. Clearly, the expected number of points is equal, as the sequences HH and TH are both equally likely. Now the riddle itself: Are player A's chances of winning higher, lower or equal to that of player B? I found it surprising that player A's chances of winning are lower than that of player B's. Though there are intuitive explanations, I'd like to rigorously prove this.
This is not a complete answer, but a partial attempt too long for a comment. I hope someone can turn it into a full answer, as I have to give up thinking about it for the rest of the day: The natural sample space is $\Omega:=\{H,T\}^{N}$ , equipped with the counting measure. For convenience we assume $N$ is even. Now define, for $i\in\{1,\dots,N-1\}$ the random variables $X_i^{HH}$ and $X_i^{TH}$ $$X_i^{HH}(\omega)=\begin{cases} 1 & \omega_i=H=\omega_{i+1} \\ 0 &\text{otherwise}\end{cases},~ \text{and}\quad X_i^{TH}(\omega)=\begin{cases} 1 & \omega_i=T,~\text{and}~\omega_{i+1}=H \\ 0 &\text{otherwise}\end{cases}.$$ Thus the scores of players $A$ and $B$ are given by the random variables $$X_A=\sum_{i=1}^{N-1}X_i^{HH},~\text{and}~X_B=\sum_{i=1}^{N-1}X_i^{TH}.$$ As you noted, linearity of expectation and independence of different coin flips means that $\mathbb E(X_A)=\mathbb E(X_B)$ . We can now use the fact that $$\mathbb E(X_A-X_B)=\sum_{\omega\in\Omega}(X_A-X_B)(\omega)\frac{1}{2^N}$$
|probability|probability-distributions|puzzle|
0
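For small $N$ the claim can be checked by exhaustive enumeration rather than simulation (Python; e.g. for $N=3$ , player A wins 2 of the 8 sequences and player B wins 3):

```python
from itertools import product

# Exhaustively compare the players' winning counts over all 2^N sequences.
def wins(N):
    a_wins = b_wins = 0
    for seq in product("HT", repeat=N):
        a = sum(seq[i] == "H" == seq[i + 1] for i in range(N - 1))          # HH pairs
        b = sum(seq[i] == "T" and seq[i + 1] == "H" for i in range(N - 1))  # TH pairs
        a_wins += a > b
        b_wins += b > a
    return a_wins, b_wins

for N in (3, 4, 5, 6):
    print(N, wins(N))  # B's winning count comes out ahead of A's
```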
$AA^\top + A + A^\top = 0$, then $|\det A|\leq 2^n$
Let $A\in \mathbb R^{n\times n}$ such that $AA^\top + A + A^\top = 0$ . Prove that $|\det A|\leq 2^n$ . $AA^\top + A + A^\top = 0$ rewrites as $(A+I_n)(A^\top +I_n) = I_n$ , hence $A+I_n$ is an orthogonal matrix and $\det(A+I_n)\in \{-1,1\}$ . If $\lambda$ is a real eigenvalue of $A$ (with an eigenvector in $\mathbb R^n$ ) then $\lambda\in \{0,-2\}$ . However $A$ may have complex eigenvalues, or the eigenvectors may have complex entries. I cannot make further progress. I'm not supposed to know that an orthogonal matrix can be diagonalized over $\mathbb C$ with eigenvalues having modulus $1$ , I'm thus looking for a solution which does not leverage this fact.
I think one should be able to argue without eigenvalues, but rather just using the fact that an orthogonal matrix is an isometry, or equivalently that its columns form an orthonormal basis of $\mathbb R^n$ : If $B=A+I$ then the condition on $A$ implies $BB^{\intercal}=I_n$ , that is, $B$ is an orthogonal matrix. Write $B = (b_1|b_2|\ldots |b_n)$ so that the $b_i$ are the column vectors of $B$ . Then if $\{e_i: 1\leq i \leq n\}$ denote the standard basis of $\mathbb R^n$ , it follows that $A$ has columns $b_i-e_i$ . Since the determinant is the volume of the higher-dimensional parallelepiped spanned by the column vectors, and this is bounded by the product of the lengths of the column vectors, the result follows from the triangle inequality. [ Note the claims about the volume-like properties of $\det$ follow from the alternating property and induction (essentially using Gram-Schmidt), so they are elementary. ]
|linear-algebra|matrices|inequality|determinant|
0
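A concrete numeric illustration (Python, $n=3$): take an orthogonal $O$ with $\det O=-1$ (so that $A=O-I$ has nonzero determinant; for a rotation, the fixed axis would force $\det A=0$), and check both the defining relation and the bound. The particular matrix is my own choice:

```python
import math

# If O is orthogonal, then A = O - I satisfies
# AA^T + A + A^T = (A+I)(A+I)^T - I = OO^T - I = 0,
# and each column of A has length <= 2, so |det A| <= 2^n by Hadamard.
theta = 2 * math.pi / 3
c, s = math.cos(theta), math.sin(theta)
O = [[-1, 0, 0],          # reflection in the first coordinate
     [0, c, -s],          # composed with a rotation in the other two
     [0, s, c]]
A = [[O[i][j] - (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Entrywise residual of AA^T + A + A^T (rows of A dotted with rows of A).
resid = max(abs(sum(A[i][k] * A[j][k] for k in range(3)) + A[i][j] + A[j][i])
            for i in range(3) for j in range(3))
print(round(abs(det3(A)), 6), resid < 1e-12)  # 6.0 True  (and 6 <= 2^3 = 8)
```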
Conditions for the identity f(a)f(b)=f(a+b) to hold
If I have the identity $$f(a)f(b)=f(a')f(b')$$ for a given normalized probability distribution $f$ , and additionally the constraint $$a+b = a'+b' = const.$$ for any suitable pairs of real numbers $a,b$ and $a',b'$ , would this be sufficient to conclude that $$f(a)f(b)=f(a+b)$$ or are there further conditions required? I know that the latter relationship implies that $f$ is an exponential function, but I would like to know which conditions are mathematically required for this, and I can't quite see how this would follow from the two conditions alone I mentioned.
$$f(a)f(b)=f(a+b)f(0)=f(a+b)\iff f(0)=1$$ Hence, $f(a)=c^a$ for some constant $c$ (see e.g. A function with a property $f(x+y)=f(x)f(y)$ ).
|probability-distributions|
1
How to prove the equality of power series below?
Assume $$ F\left( x \right) := \sum_{n=-\infty}^{+\infty}{x^{\left( n+\frac{1}{2} \right) ^2}}, G\left( x \right) := \sum_{n=-\infty}^{+\infty}{x^{n^2}}, H\left( x \right) := \sum_{n=-\infty}^{+\infty}{\left( -1 \right) ^nx^{n^2}}, $$ I have to prove $$ \left( G\left( x \right) \right) ^2+\left( H\left( x \right) \right) ^2=2\left( G\left( x^2 \right) \right) ^2 $$ and $$ \left( G\left( x \right) \right) ^4=\left( F\left( x \right) \right) ^4+\left( H\left( x \right) \right) ^4. $$ After using Mathematica I find that it may have relation with EllipticTheta function . But it seems to be so difficult to me. Can anyone help me?
The coefficient of $x^k$ in $G(x)^2$ is the number of ways to write $k = m^2 + n^2$ where $m$ and $n$ are integers. The coefficient of $x^k$ in $H(x)^2$ is the number of ways to write $k = m^2 + n^2$ where $m +n$ is even minus the number of ways where $m+n$ is odd, thus $(-1)^k$ times the coefficient in $G(x)^2$ . So the coefficient of $x^k$ in $G(x)^2 + H(x)^2$ is twice the number of ways to write $k = m^2 + n^2$ when $k$ is even, $0$ when $k$ is odd. The coefficient of $x^k$ in $G(x^2)^2$ is $0$ if $k$ is odd (since $G(x^2)^2$ is an even function), and the coefficient of $x^{k/2}$ in $G(x)^2$ if $k$ is even. So that gives you your first equation.
|sequences-and-series|power-series|theta-functions|
0
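Both identities are easy to sanity-check numerically with truncated sums (Python; the test point and truncation level are arbitrary choices of mine):

```python
# Numerical check of both theta-function identities at x = 0.3.
x = 0.3
N = 40  # sum over |n| <= N; the tails are far below double precision here

F = sum(x ** ((n + 0.5) ** 2) for n in range(-N, N + 1))
G = sum(x ** (n ** 2) for n in range(-N, N + 1))
H = sum((-1) ** n * x ** (n ** 2) for n in range(-N, N + 1))
G2 = sum((x ** 2) ** (n ** 2) for n in range(-N, N + 1))   # G(x^2)

print(abs(G**2 + H**2 - 2 * G2**2) < 1e-10)  # True
print(abs(G**4 - (F**4 + H**4)) < 1e-10)     # True
```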
Optimality condition inspired by subdifferential of square root: $y\in \text{argmin}(g(x)-a^Tx ) \Rightarrow y\in \text{argmin}(g^2(x)-2g(y)a^Tx).$
Let $f:\mathbb R^d\to\mathbb R\cup\{+\infty\}$ be a proper convex lower semicontinuous function. Suppose that $f$ is bounded by below, and for simplicity that $\inf f = 0$ . Set $\varphi:\mathbb R^d\to\mathbb R\cup\{+\infty\}$ as $\varphi(x):=\sqrt{f(x)}$ . It is known that \begin{align*} \partial\varphi(x) = \frac{1}{2\sqrt{f(x)}}\partial f(x)\quad \text{for all $x\in\text{dom }\partial f$ with $f(x)\neq 0$.} \end{align*} In particular, this implies that, if $\xi\in\partial\varphi(x)$ , then $2\sqrt{f(x)}\,\xi\in\partial f(x)$ for all $\xi\in \mathbb R^d$ and $x\in\text{dom }\partial f$ with $f(x)\neq 0$ . This can be rephrased as \begin{align}\label{pty} \text{If $x$ is a minimizer of $\sqrt{f} - \xi$, then it is a minimizer of $f-\big(2\sqrt{f(x)}\big)\xi$.} \end{align} The question is whether this is true for nonconvex functions, say for an $f$ which is only proper and lower semicontinuous. I have tried the following. If we take a minimizer $x$ of $\sqrt{f}-\xi$ , then for $s\in\ma
Let us prove the following statement for $g(x) \ge 0, x\in \text{dom } g$ , which covers the one in the OP by setting $g(x)=\sqrt{f(x)}$ : If $y$ is a minimizer of $g(x)-a^Tx$ , then $y$ is a minimizer of $g^2(x)-2g(y)a^Tx$ , i.e., $$\color{blue}{y\in \text{argmin}\left(g(x)-a^Tx \right) \Rightarrow y\in \text{argmin}\left(g^2(x)-2g(y)a^Tx\right)}.$$ As $y$ is a minimizer of $g(x)-a^Tx$ , then for any $x \in \text{dom } g $ : $$g(x)-g(y)\ge a^T(x-y) \tag{1}. $$ To prove that $y$ is a minimizer of $g^2(x)-2g(y)a^Tx$ , we need to show that for any $x \in \text{dom } g $ : $$g^2(x)-2g(y)a^Tx \ge g^2(y)-2g(y)a^Ty \\ \equiv \color{blue}{(g(x)+g(y))(g(x)-g(y))\ge 2g(y)a^T(x-y)} \tag{2}. $$ If $g(x) \ge g(y)$ , then $g(x)+g(y)\ge 2g(y) \ge 0$ and the LHS of (1) is non-negative; thus, (2) follows by multiplying the two inequalities $g(x)+g(y)\ge 2g(y) \ge 0$ and (1). If $g(x) < g(y)$ , then $0\le g(x)+g(y) < 2g(y)$ and both sides of (1) are negative, so (2) is obtained by multiplying the two inequalities $0\le g(x)+g(y) < 2g(y)$ and (1).
|optimization|nonlinear-optimization|non-convex-optimization|variational-analysis|
1
In how many ways can the letters of the word PANACEA be arranged so that the three As are NOT all together?
PANACEA has 7 letters with 3 As. There are 4! ways to arrange (P, N, C, E) _ P_N_C_E _. Between these letters, there are 5 slots. So, to arrange 3 As in 5 slots is 5P3. We divide it by 3! for the 3 As to avoid duplicates. We have: $4!\cdot \frac{5P3}{3!} = 240$ This is the correct answer: Total arrangements $-$ arrangements with the 3 As together $=$ arrangements where the 3 As are not all together: $\frac{7!}{3!}-5! = 720$ I need someone to clarify why the first method didn't work, or did I make a mistake?
The first method is correct, just incomplete. Continuing on the OP's method: a) All $A$ 's are separate $$ | P | N | C | E | $$ Using the gap and string method, we see we have to permute (P,N,C,E) as OP said. We can see that for the four letters, there are 5 gaps in which we place our 3 $A$ 's. So we need to pick 3 out of 5 gaps and permute the 4 remaining different letters. Hence, $\color{blue}{^5C_{3} \times 4!}$ b) 2 A's are together Let the group $AA$ be represented as $G$ . So, we need to permute (P,N,C,E,G,A) However, we cannot have $G$ next to an $A$ as then there would be 3 consecutive $A$ 's, which is forbidden. Hence we make 2 cases: i) G is at either end of the word (2 such cases; when it is at the start and end) $$G,P,N,C,E,A$$ For the letter directly adjacent to G, we cannot have A, meaning we have to pick a letter out of (P,C,E,N) and permute the remaining 3 letters, as well as A. Hence, $\color{blue}{ ^2C_{1} \times ^4C_{1} \times 4!}$ ii) G is somewhere in the middle of t
|combinatorics|
0
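Both counts — 720 for "not all three together" and 240 for "no two adjacent" (the quantity the first method actually computes) — can be confirmed by brute force (Python):

```python
from itertools import permutations

# Count distinct arrangements of PANACEA by brute force; the set
# removes duplicates coming from the three identical As.
words = set(permutations("PANACEA"))
total = len(words)                                        # 7!/3! = 840
not_all_together = sum("AAA" not in "".join(w) for w in words)
no_two_adjacent = sum("AA" not in "".join(w) for w in words)
print(total, not_all_together, no_two_adjacent)  # 840 720 240
```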
prove that: $\sqrt{2}=e^{1-{2K\over \pi}}\prod\limits_{n=1}^{\infty}\left({4n-1\over 4n+1}\right)^{4n}e^2$
show that $$\sqrt{2}=e^{1-{2K\over \pi}}\prod_{n=1}^{\infty}\left({4n-1\over 4n+1}\right)^{4n}e^2$$ where $K$ is Catalan's constant, $K=0.9159 ...$ My try: take the ln $${1\over2}\ln{2}=1-{2K\over \pi}+\sum_{n=1}^{\infty}\ln\left[{\left({4n-1\over 4n+1}\right)^{4n}}e^2\right]$$ $${1\over2}\ln{2}-1+{2K\over \pi}=\sum_{n=1}^{\infty}\ln\left[{\left({4n-1\over 4n+1}\right)^{4n}}e^2\right]$$ we know that $${1\over 2}\ln{2}=\sum_{n=1}^{\infty}\ln{\left(4n-1\over 4n+1\right)}+\sum_{n=1}^{\infty}\ln{\left(4n+1\over 4n+2\right)}$$ sub: then we get $$\sum_{n=1}^{\infty}\ln{\left(4n+1\over 4n+2\right)}=1-{2K\over \pi}+\sum_{n=1}^{\infty}\ln\left[{\left({4n-1\over 4n+1}\right)^{4n-1}}e^2\right]$$ Anyway I am stuck, any help please. I tried and looked everywhere on Wolfram, and can't find any similar infinite product to simplify this further.
Consider the $N$ -th partial summation, \begin{align*}S_N&=\sum\limits_{n=1}^{N} \left(2n\log \left(\frac{4n+1}{4n-1}\right) - 1\right) \\&= -N -\frac{1}{2}\sum\limits_{n=1}^{N} \log [(4n+1)(4n-1)] + \frac{1}{2}\sum\limits_{n=1}^{N} (4n+1)\log (4n+1) - (4n-1)\log (4n-1)\\&= -N - \frac{1}{2}\log \frac{(4N+1)!}{2^{2N}(2N)!}+\frac{1}{2}\sum\limits_{n=1}^{2N}(-1)^n (2n+1)\log (2n+1)\\&= -N + N\log 2 - \frac{1}{2}\log \frac{(4N+1)!}{2^{2N}(2N)!}+\sum\limits_{n=1}^{2N}(-1)^{n} \left(n+\frac{1}{2}\right)\log \left(n+\frac{1}{2}\right)\\&\underbrace{\sim}_{\substack{N \to \infty\\\text{Stirling Approx.}}} - \frac{1}{2}\log \frac{2^{1/2}(4N+1)^{2N+1}}{e^{1/2}2^{2N}}+\sum\limits_{n=1}^{2N}(-1)^{n} \left(n+\frac{1}{2}\right)\log \left(n+\frac{1}{2}\right)\\&= \frac{1}{4}-\frac{1}{4}\log 2 +\sum\limits_{n=0}^{2N}(-1)^{n} \left(n+\frac{1}{2}\right)\log \left(n+\frac{1}{2}\right)-\left(N+\frac{1}{2}\right)\log \left(2N+\frac{1}{2}\right)\end{align*} Also note the telescopic summation: $\displaystyle
|infinite-product|
0
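Taking logarithms, the claimed identity reads $\frac12\ln 2 = 1-\frac{2K}{\pi}+\sum_{n\ge1}\bigl[4n\ln\frac{4n-1}{4n+1}+2\bigr]$ , and the summands behave like $-\frac1{24n^2}$ , so the tail after $N$ terms is $O(1/N)$ . That makes a direct numerical check feasible (Python; the value of $K$ is hard-coded):

```python
import math

K = 0.915965594177219015  # Catalan's constant

# Check (1/2) ln 2 = 1 - 2K/pi + sum_{n>=1} [4n ln((4n-1)/(4n+1)) + 2].
s = 1 - 2 * K / math.pi
for n in range(1, 200_000):
    s += 4 * n * math.log((4 * n - 1) / (4 * n + 1)) + 2
print(abs(s - 0.5 * math.log(2)))  # small; the truncation error is O(1/N)
```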
Verifying Fourier inversion for a rectangular function
I define the Fourier transform of a function $g$ to be $ \hat{g}(\lambda) = \int_{-\infty}^\infty g(x)e^{-i\lambda x} \, \mathrm{d}x $ with the associated inverse formula $$ g(x)= \frac{1}{2\pi} \int_{-\infty}^\infty \hat{g}(\lambda) e^{i\lambda x} \, \mathrm{d}\lambda. $$ Fix $c>0$ . I wish to verify this latter formula for the rectangular function $f(x) = \mathbf{1}_{|x|<c/2}$ . Computing the Fourier transform is straightforward: after some manipulation, I find $ \hat{f}(\lambda) = \frac{2}{\lambda} \sin\left(\frac{\lambda c}{2} \right) $ so my task must involve evaluating $$ \int_{-\infty}^\infty \frac{2}{\lambda} \sin\left(\frac{\lambda c}{2} \right ) e^{i\lambda x}\, \mathrm{d}\lambda. $$ It seems like I can do this using contour integration. It seems like I should be using the following contour (image taken from this question ): and sending $\epsilon \to 0$ and $R \to \infty$ to evaluate the integral. But I'm not sure how I should handle the boundary terms: for instance, parametrising
Let $F(\lambda)= \frac{2\sin(\lambda c/2)}{\lambda}$ . Then, the inverse Fourier Transform of $f$ is $$\begin{align} \mathscr{F^{-1}}\{F\}(x)&=\frac1{2\pi}\int_{-\infty}^\infty F(\lambda)e^{i\lambda x}\,d\lambda\\\\ &=\frac1{2\pi}\int_{-\infty}^\infty \frac{2\sin(\lambda c/2)}{\lambda} e^{i\lambda x} \,d\lambda\\\\ &=\frac1{2\pi i} \int_{-\infty}^\infty \frac{e^{i\lambda (x+c/2)}}{\lambda}\,d\lambda-\frac1{2\pi i} \int_{-\infty}^\infty \frac{e^{i\lambda (x-c/2)}}{\lambda}\,d\lambda\tag1 \end{align}$$ The integral $\int_{-\infty}^\infty \frac{e^{i\lambda t}}{\lambda}\,d\lambda$ can be evaluated applying a host of different methodologies. Here, we will use contour integration. When $t>0$ ( $t<0$ ), we close the contour in the upper-half (lower-half) plane. The contour also deforms around the origin with a semi-circular arc. Proceeding, we find $$\frac1{2\pi i}\int_{-\infty}^\infty \frac{e^{i\lambda t}}{\lambda}\,d\lambda=\frac12 \text{sgn}(t)\tag2$$ Using $(2)$ in $(1)$ yields the coveted
|fourier-transform|contour-integration|complex-integration|
1
Answer to this double integral while solving for flux without using symmetry
$$\int_0^L\int_0^L\frac{dy \text{ }dx}{(\sqrt{l^2+x^2+y^2})^3}$$ I tried solving it the normal way by keeping y constant first, plugging in the limits then solving the integral and that yielded me $$\dfrac{\arctan\left(\frac{L^2}{l\sqrt{l^2+2L^2}}\right)}{l}$$ But when i tried doing it using polar coordinates (i did it just to be sure), i converted the integral into $$2\int_{0}^{\pi/4}\int_0^{L\sec{\theta}}\frac{r}{(l^2 + r^2)^{3/2}} dr d\theta$$ which ended up to $$\frac{\pi}{4l}-\frac{\arcsin(\frac{l}{\sqrt{2(l^2+L^2)}})}{l}$$ For context, I was trying to solve for the flux through a square plate without using symmetry (which is basically brute force) and ended up here. Some help on which result is the correct one would be much appreciated.
It suffices to show that they're the same (I multiplied by a factor of $2$ to the polar solution because it wasn't the same result): $$\begin{align} \frac{\arctan\left(\frac{L^2}{l\sqrt{l^2+2L^2}}\right)}{l}&\overset{!}{=}\frac{\pi}{2l}-\frac{2\arcsin\left(\frac{l}{\sqrt{2(l^2+L^2)}}\right)}{l}\\ \frac{L^2}{l\sqrt{l^2+2L^2}}&=\tan\left(\frac{\pi}{2}-2\arcsin\left(\frac{l}{\sqrt{2(l^2+L^2)}}\right)\right)\\ &=\text{cotan}\left(2\arcsin\left(\frac{l}{\sqrt{2(l^2+L^2)}}\right)\right)\\ &=\frac{1-\tan^2\left(\arcsin\left(\frac{l}{\sqrt{2(l^2+L^2)}}\right)\right)}{2\tan \left(\arcsin\left(\frac{l}{\sqrt{2(l^2+L^2)}}\right)\right)}\\ &=\frac{1-\left(\dfrac{l}{\sqrt{l^2+2L^2}}\right)^2}{2\dfrac{l}{\sqrt{l^2+2L^2}}}=\frac{1-\dfrac{l^2}{l^2+2L^2}}{2\dfrac{l}{\sqrt{l^2+2L^2}}}=\frac{l^2+2L^2-l^2}{2\dfrac{l(l^2+2L^2)}{\sqrt{l^2+2L^2}}}=\frac{L^2}{l\sqrt{l^2+2L^2}}\ \blacksquare \end{align}$$
|integration|definite-integrals|
1
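For what it's worth, the two closed forms (the polar one taken with the factor of $2$ restored) also agree numerically (Python):

```python
import math

# Compare the Cartesian answer with the corrected (factor-2) polar answer.
def cartesian(l, L):
    return math.atan(L**2 / (l * math.sqrt(l**2 + 2 * L**2))) / l

def polar(l, L):
    return math.pi / (2 * l) - 2 * math.asin(l / math.sqrt(2 * (l**2 + L**2))) / l

for l, L in [(1.0, 1.0), (2.0, 3.0), (0.5, 4.0)]:
    print(abs(cartesian(l, L) - polar(l, L)) < 1e-12)  # True
```

For instance, at $l=L=1$ both give $\arctan(1/\sqrt3)=\pi/2-2\arcsin(1/2)=\pi/6$ .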
Solve in the set of real numbers the equation :$[x]\cdot\{x\}=2007x$
Solve in the set of real numbers the equation : $[x]\cdot\{x\}=2007x$ , where $[x]$ is the whole part of $x$ and $\{x\}$ is the fractional part of $x$ . First thing to mention is that $\{x\}\in[0,1)$ , and we can simply check whether the equation holds when $x=0$ . I tried writing $x=[x]+\{x\}$ and replacing it in the equation, but I got nothing useful. Hope one of you can help me! Thank you!
Let $\lfloor x \rfloor = m$ and $\{x\} = t$ . Then your equation says $m t = 2007 (m + t)$ . Rewrite it as $$ (m - 2007)(t - 2007) = 2007^2$$ Now $-2006 > t - 2007 \ge -2007$ , so $m - 2007$ (which is an integer) is between $2007^2/(-2006) = -2008.00049\ldots$ and $2007^2/(-2007) = -2007$ . The possibilities are $m - 2007 = -2008$ and $m - 2007 = -2007$ , i.e. $m = -1$ (and then $t = 2007/2008$ so $x = -1/2008$ ) or $m = 0$ (and $t = 0$ so $x=0$ ). Note: this is using the definition $\{x\} = x - \lfloor x \rfloor$ . Some people define the fractional part of a negative number $x$ as $-\{-x\}$ .
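The two solutions can be sanity-checked in exact rational arithmetic (a sketch; Python's `math.floor` agrees with the convention $\{x\}=x-\lfloor x\rfloor$ used here):

```python
import math
from fractions import Fraction

def check(x):
    m = math.floor(x)   # [x], the floor of x (works for negatives too)
    t = x - m           # {x} = x - floor(x)
    return m * t == 2007 * x

assert check(Fraction(0))          # x = 0
assert check(Fraction(-1, 2008))   # x = -1/2008: floor is -1, {x} = 2007/2008
```

Using `Fraction` avoids any floating-point doubt about the equality.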
|real-numbers|
0
Finding value of $I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x$
Finding value of $$I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x.$$ I got this question in my book. I found that if $n$ is even then $I$ is $0$ : by King's rule it turns out that $I=-I$ . But I couldn't solve it when $n$ is an odd number. I tried taking $n=1,3$ and $5$ , simply expanding $\sin (3x)$ and $\sin (5x)$ in terms of $\sin x$ and integrating, and I get $I=\pi$ . But can it be proved in general for odd $n$ , without assuming any particular value?
Start by using the complex definition for $\sin(nx)$ , $$\int_{0}^{\frac{\pi}{2}}\frac{e^{inx}-e^{-inx}}{e^{ix}-e^{-ix}}dx$$ You can then expand using geometric series, $$\int_{0}^{\frac{\pi}{2}}\frac{\left(e^{ix}-e^{-ix}\right)\left(e^{\left(n-1\right)ix}+e^{\left(n-2\right)ix}e^{-ix}+...+e^{ix}e^{-\left(n-2\right)ix}+e^{-\left(n-1\right)ix}\right)}{e^{ix}-e^{-ix}}dx$$ Tidy up using summation, $$\int_{0}^{\frac{\pi}{2}}\frac{\left(e^{ix}-e^{-ix}\right)\left(\sum_{k=0}^{n-1}e^{\left(n-1-k\right)ix}e^{-kix}\right)}{e^{ix}-e^{-ix}}dx$$ $$\int_{0}^{\frac{\pi}{2}}\sum_{k=0}^{n-1}e^{\left(n-1-2k\right)ix}dx$$ Case 1: n is odd. Split the sum, $$\int_{0}^{\frac{\pi}{2}}\left(\sum_{k=0}^{\frac{n-1}{2}}e^{\left(n-1-2k\right)ix}+\sum_{k=\frac{n+1}{2}}^{n-1}e^{\left(n-1-2k\right)ix}\right)dx$$ Re-index, $$\int_{0}^{\frac{\pi}{2}}\left(\sum_{k=0}^{\frac{n-1}{2}}e^{2kix}+\sum_{k=1}^{\frac{n-1}{2}}e^{-2kix}\right)dx$$ $$\int_{0}^{\frac{\pi}{2}}\left(\sum_{k=1}^{\frac{n-1}{2}}2\cos\left(2kx\right)+1\right)dx$$ The constant term contributes $\frac{\pi}{2}$ , while each cosine integrates to $\int_{0}^{\frac{\pi}{2}}2\cos\left(2kx\right)dx=\frac{\sin\left(k\pi\right)}{k}=0$ , so $I=\frac{\pi}{2}$ for odd $n$ .
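The odd-$n$ value $\pi/2$ can be spot-checked numerically (a sketch; the midpoint rule conveniently sidesteps the removable singularity at $x=0$, where the integrand tends to $n$):

```python
import numpy as np

def I(n, samples=200001):
    # midpoint-rule quadrature of sin(n x)/sin(x) on (0, pi/2)
    h = (np.pi / 2) / samples
    x = (np.arange(samples) + 0.5) * h   # midpoints never hit x = 0
    return np.sum(np.sin(n * x) / np.sin(x)) * h

for n in (1, 3, 5, 7):
    assert abs(I(n) - np.pi / 2) < 1e-6
```

Note this supports $\pi/2$, not the $\pi$ claimed in the question.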
|calculus|integration|definite-integrals|
0
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$, for any $n\in N, n\geq 2$
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$ , for any $n\in N, n\geq 2$ I let $a=\sqrt{n}$ , and we know that $k\leq a<k+1$ , where $k\in N$ . From here all we have to do is to show that $k \leq \frac{1}{a^2}+a<k+1$ . I tried processing the first inequality but got nothing useful. I hope one of you can help me! Thank you!
We proceed by contradiction. Suppose $\sqrt{n} + \frac{1}{n} \geq k+1$ ; since $n \ge k^2$ this gives $\sqrt{n} \geq k+1-\frac{1}{k^2}$ . Then we know that $\sqrt{n}$ must lie in the interval $[k+1-\frac{1}{k^2},k+1)$ , hence $n$ lies in the interval $$\left[(k+1-\frac{1}{k^2})^2,(k+1)^2\right) = \left[(k+1)^2 + \frac{1}{k^4} - \frac{2(k+1)}{k^2},(k+1)^2\right).$$ We aim to show that there is no integer in this interval, by showing that its length satisfies $\frac{2(k+1)}{k^2} - \frac{1}{k^4} < 1$ . Rearranging, we need to show that $$k^4 - 2k^3 - 2k^2 + 1 > 0,$$ which is true for $k \geq 4$ (since $k^4 \ge 4k^3 > 2k^3 + 2k^2$ ), and also for $k=3$ by substitution. As for $k = 1,2$ , one must check the candidate $n$ directly, and here there is in fact an exception: $n = 3$ (with $k=1$ ) satisfies $\sqrt{3}+\frac{1}{3}\approx 2.065 \ge 2$ , so the identity fails at $n=3$ and the statement holds only for $n=2$ and $n \ge 4$ .
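The small cases are worth brute-forcing in exact integer arithmetic; a quick sketch (it turns up $n=3$, where $\sqrt{3}+\frac13\approx 2.065$ crosses $2$):

```python
import math

# floor(sqrt(n) + 1/n) > floor(sqrt(n)) holds iff sqrt(n) + 1/n >= k + 1
# with k = floor(sqrt(n)), i.e. iff (n*(k+1) - 1)**2 <= n**3 (exact integers).
exceptions = [n for n in range(2, 10**5)
              if (n * (math.isqrt(n) + 1) - 1) ** 2 <= n ** 3]
print(exceptions)  # -> [3]
```

Squaring is legitimate here because $k+1-\frac1n>0$ for $n\ge 2$, so no floating point is needed at all.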
|inequality|radicals|
0
Finding value of $I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x$
Finding value of $$I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x.$$ I got this question in my book. I found that if $n$ is even then $I$ is $0$ : by King's rule it turns out that $I=-I$ . But I couldn't solve it when $n$ is an odd number. I tried taking $n=1,3$ and $5$ , simply expanding $\sin (3x)$ and $\sin (5x)$ in terms of $\sin x$ and integrating, and I get $I=\pi$ . But can it be proved in general for odd $n$ , without assuming any particular value?
Let $$I(n) = \int_0^{0.5\pi} \frac{\sin{(nx)}}{\sin{(x)}}dx$$ Then $$I'(n) = \int_0^{0.5\pi} \frac{\partial}{\partial n}\left(\frac{\sin{(nx)}}{\sin{(x)}}\right)dx=\int_0^{0.5\pi} \frac{1}{{\sin{(x)}}}\frac{\partial}{\partial n}\left({\sin{(nx)}}\right)dx=n\int_0^{0.5\pi} \frac{{\cos{(nx)}}dx}{{\sin{(x)}}}$$ $$ I''(n)=-n^2\int_0^{0.5\pi} \frac{{\sin{(nx)}}dx}{{\sin{(x)}}} \Rightarrow I(n) = -n^2I''(n) $$ Solving the ODE: $$ \frac{d^2I}{{dn}^2}=-I\times n^{-2} \Rightarrow I^{-1}\frac{d^2I}{{dn}^2}=-n^{-2} \Rightarrow \int \int \frac{dI }{I}dI = -\int \int \frac{dn}{n^2}dn $$ So we get: $$ \int C_1 dI + \int \ln{(I)}dI = \int \frac{dn}{n}+\int C_2 dn$$ $$ C_1I + I ln(I) - I = \ln(n)+C_2n + C_3$$ Rearranging for I: $$ {(e^{c_1-1}I)^I}=ne^{c_2n+C_3} $$ But we know that $e^{c_1-1}=c_4$ Let $J = c_4I. $ Then we get: $${J^J}=(ne^{c_2n+C_3})^{c_4}$$ So $$ J = e^{W((ne^{c_2n+C_3})^{c_4})}$$ $$ I(n) = \frac{1}{c_4} e^{W((ne^{c_2n+C_3})^{c_4})}$$ We could find the value of these coefficients by e
|calculus|integration|definite-integrals|
0
Is there any analytical solution of $\sin mx =k \sin x$ for $x$, where $m>1$ and $k \neq 0$?
Is there any analytical solution of $\sin mx =k \sin x$ for $x$ , where $m>1$ and $k\neq 0$ ? I also need solutions of $\cos mx =k \cos x$ and $\tan mx =k \tan x$ for $x$ . When $k=1$ , the solution is $x=\pi/(1+m)$ . But I cannot solve them for $k \neq 1$ .
Of course $x = 0$ is always a solution. Similarly, if $m$ is an integer, $x = $ any multiple of $\pi$ is a solution. But I suppose you don't want those. If $m$ is a positive integer, $\sin(mx) = \sin(x) U_{m-1}(\cos(x))$ , where $U_j$ are the Chebyshev polynomials of the second kind. So solutions are $x = 2 n \pi \pm \arccos(r)$ where $r$ is a root of $U_{m-1}(r) - k$ and $n$ is any integer.
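For integer $m$ this recipe is easy to carry out numerically; a sketch (the function name and the recurrence-based construction of $U_{m-1}$ are illustrative, not from any particular library):

```python
import numpy as np
from numpy.polynomial import Polynomial

def solve_sin_mx(m, k):
    """Solutions x in [0, pi] of sin(m x) = k sin(x), for integer m >= 2."""
    # Build U_{m-1} via the recurrence U_0 = 1, U_1 = 2x, U_{j+1} = 2x U_j - U_{j-1}
    u_prev, u = Polynomial([1.0]), Polynomial([0.0, 2.0])
    for _ in range(m - 2):
        u_prev, u = u, Polynomial([0.0, 2.0]) * u - u_prev
    # Roots r of U_{m-1}(r) - k in [-1, 1] give x = arccos(r)
    roots = (u - k).roots()
    real = roots[abs(roots.imag) < 1e-12].real
    return np.arccos(real[(real >= -1.0) & (real <= 1.0)])

# example: sin(3x) = 0.5 sin(x)
for x in solve_sin_mx(3, 0.5):
    assert abs(np.sin(3 * x) - 0.5 * np.sin(x)) < 1e-9
```

All other solutions follow from the $2n\pi \pm \arccos(r)$ pattern described above.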
|trigonometry|
0
Transfinite recursion to construct a function on ordinals
I am asked to use transfinite recursion to show that there is a function $F:ON \to V$ (here $ON$ denotes the class of ordinals and $V$ the class of sets) that satisfies: $F(0) = 0$ $F(\lambda) = \lambda$ whenever $\lambda$ is a limit ordinal. $F(\mathcal{S}(\lambda)) = \mathcal{S}(\lambda)$ whenever $\lambda$ is a limit ordinal or if $\lambda = 0$ . $F(\alpha+2) = F(\alpha) + F(\alpha+1)$ for all ordinals $\alpha$ . I understand that I must construct a function $G: V\to V$ so that when restricted to $ON$ , I get the properties above. I am unable to explicitly construct such a function that will yield property 4 above. Edited: After asking this question it was pointed out to me that the transfinite recursion theorem (specifically the notation $F(\alpha) = G(F\upharpoonright \alpha)$ ) gives a big hint on how to find the class function $G$ on $V$ . Specifically, if $F(\alpha) = y$ then one must also have that $G(x) = y$ where $x = F\upharpoonright \alpha$ . So $G(x) = y$ for a function $x$
Use transfinite recursion to define the auxiliary function $G:\alpha\mapsto\langle F(\alpha),F(\alpha+1)\rangle$ . Then define $F(\alpha)$ as the first component of the ordered pair $G(\alpha)$ .
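Spelled out, the recursion for the auxiliary $G$ mirrors the Fibonacci-style clause 4 at successors; a sketch of the defining cases (how one might organize the definition, with $\langle\cdot,\cdot\rangle$ the usual ordered pair):

```latex
\begin{align*}
G(0) &= \langle F(0), F(1)\rangle = \langle 0, 1\rangle,\\
G(\mathcal{S}(\alpha)) &= \langle b,\; a + b\rangle
  \quad\text{where } G(\alpha) = \langle a, b\rangle,\\
G(\lambda) &= \langle \lambda,\; \mathcal{S}(\lambda)\rangle
  \quad\text{for } \lambda \text{ a limit ordinal.}
\end{align*}
```

The successor clause encodes $F(\alpha+2)=F(\alpha)+F(\alpha+1)$, and the limit clause packages properties 2 and 3; one checks the clauses never conflict, since $\mathcal{S}(\lambda)$ is never itself of the form $\alpha+2$ for $\lambda$ a limit or $0$... well, it is exactly when $\lambda=\mathcal{S}(\alpha)$, which cannot happen for $\lambda$ limit or $\lambda=0$.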
|set-theory|ordinals|transfinite-recursion|transfinite-induction|
0
Is there a matrix calculus rule for this?
My lecture notes differentiate a Lagrangian function. They say the derivative of $\mu^T(Ax-b)$ , where $A$ is a matrix and $\mu , x$ are vectors, is $A^T \mu$ , but I don't understand why it isn't $\mu^TA$ . We haven't previously studied any matrix calculus rules or how to differentiate matrix expressions, so I just wanted to see if there is a rule/reason for this. Thank you!
$\mu^TA$ is just $(A^T\mu)^T$ ; for your purposes you need the gradient as an $n \times 1$ column vector (the same shape as $x$ ) rather than $1\times n$ .
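The convention can be checked numerically: under the "denominator layout" (gradient shaped like $x$), finite differences of $\mu^T(Ax-b)$ recover $A^T\mu$. A small sketch with arbitrary shapes:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 4))
mu, b = rng.standard_normal(3), rng.standard_normal(3)
x = rng.standard_normal(4)

f = lambda v: mu @ (A @ v - b)   # scalar-valued Lagrangian term

# central finite differences, one coordinate of x at a time
eps = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                 for e in np.eye(4)])

assert np.allclose(grad, A.T @ mu, atol=1e-6)   # gradient has the shape of x
```

Since $f$ is linear in $x$, the central differences are exact up to rounding.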
|matrices|linear-programming|matrix-calculus|karush-kuhn-tucker|
0
Formula for minimum of a changing parabola
So what I want to find out is the path traced by the minimum of the parabola $y=x^2+bx$ as $b$ changes. I'm pretty sure it traces an upside-down parabola: if I put it in Desmos and slide $b$ , the minimum appears to follow that pattern. How can I find this formula, and what is it called?
The minimum of the parabola $x^2+bx = (x+\frac{b}{2})^2 - \frac{b^2}{4}$ is achieved at $(-\frac{b}{2},-\frac{b^2}{4})$ , which traces out precisely the parabola $y=-x^2$ .
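This is easy to confirm numerically; a small sketch sliding $b$ just like the Desmos experiment:

```python
import numpy as np

# track the minimum of y = x^2 + b x as b varies
for b in np.linspace(-5.0, 5.0, 21):
    xv = -b / 2                      # vertex x-coordinate
    yv = xv**2 + b * xv              # vertex y-coordinate (equals -b^2/4)
    assert np.isclose(yv, -xv**2)    # the vertex lies on y = -x^2
```

As $b$ runs over all reals, $x_v=-b/2$ also runs over all reals, so the whole curve $y=-x^2$ is traced.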
|linear-algebra|polynomials|graphing-functions|maxima-minima|
1
An optimisation problem involving a special class of polynomials
I'm currently working on an interesting problem in function approximation that just came to mind, and am seeking insights or methodologies that might aid in approaching it. The problem is as follows: Given a continuous function, say $f(x) = \sin(x)$ , I want to find a polynomial $P(x)$ of degree $n$ , where each coefficient of $P(x)$ is constrained to be either $1$ or $-1$ . The objective is to devise an algorithm that finds the polynomial $P$ minimizing the integral of the squared discrepancy between $f(x)$ and $P(x)$ across the interval $[-\pi,\pi]$ , succinctly put as: $$\min_{P \in \mathcal{P}} \int_{-\pi}^{\pi} (f(x) - P(x))^2 \, dx$$ Here, $\mathcal{P}$ represents the set of all polynomials of degree $n$ with coefficients from the set $\{1, -1\}$ . This scenario introduces a unique constraint on polynomial approximation due to the binary nature of the coefficients. I'm pondering how one might efficiently tackle this problem, especially as $n$ becomes large, given the expansive nature of the search space.
Let $P(x) = \sum_{k=0}^n t_k x^k$ , where you want $t_k = \pm 1$ . Expanding out $(f(x) - P(x))^2$ and integrating, your objective is a quadratic form $$ a + \sum_k b_k t_k + \sum_{j,k} c_{j,k} t_j t_k $$ for some real constants $a$ , $b_k$ , $c_{j,k}$ . Write $t_k = 2 x_k-1$ and this is a Quadratic unconstrained binary optimization problem, a type of problem that is well studied, in particular because quantum annealing is one way of attacking it.
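For small $n$ the objective can simply be brute-forced, which also provides a baseline against which to validate any QUBO solver; a sketch (a Riemann sum stands in for the exact integral coefficients $a$, $b_k$, $c_{j,k}$):

```python
import itertools
import numpy as np

n = 4                                    # small degree: 2**(n+1) sign patterns
xs = np.linspace(-np.pi, np.pi, 20001)
dx = xs[1] - xs[0]
f = np.sin(xs)

best_err, best_signs = np.inf, None
for signs in itertools.product((-1.0, 1.0), repeat=n + 1):
    P = sum(t * xs**k for k, t in enumerate(signs))
    err = np.sum((f - P) ** 2) * dx      # discretized integral of squared error
    if err < best_err:
        best_err, best_signs = err, signs
```

For large $n$ this $2^{n+1}$ enumeration is hopeless, which is exactly where the QUBO reformulation earns its keep.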
|polynomials|convex-optimization|nonlinear-optimization|
0
A picture geometry problem
My approach: I know only one way to relate the inradius and sides of a triangle, which is $\text{Area of triangle = (inradius)(semi-perimeter)}$ . I am trying to get $RS$ and the altitude of $\Delta AMB$ in terms of $r$ , which equal $b$ and $a/2$ respectively. Tell me if there's a better way of doing this question.
This is an elaborated solution given by @heropup
|euclidean-geometry|
0
Proving a Proposition about separable Hausdorff spaces that are locally euclidean
Let X be a separable Hausdorff space such that for every $x \in X$ there exists an open neighborhood $U$ of $x$ such that $U$ is homeomorphic to an open subset of $\mathbb{R}^n$ . Show that: (i) $X$ is locally compact (ii) there exists a countable compact cover of $X$ . My attempt: (i) Suppose $x \in X$ and let $U$ be an open neighborhood of $x$ . Then there exists an open subset $V$ of $\mathbb{R}^n$ and a homeomorphism $\phi: U \rightarrow V$ . $V$ is an open neighborhood of $\phi(x)$ and we can find an open ball with radius $r$ , $B_r(\phi(x))$ , such that $B_r(\phi(x)) \subseteq V$ . Now, the closed ball with radius $r/2$ , $\overline{B_{\frac{r}{2}}(\phi(x))}$ , is a compact neighborhood of $\phi(x)$ and since $\phi$ is a homeomorphism, $\phi^{-1}(\overline{B_{\frac{r}{2}}(\phi(x))})$ is a compact neighborhood of $x$ in $X$ . Thus, X is locally compact. (ii): Approach 1: I know that if $X$ is second countable, then any open cover of X has a countable subcover. But $X$ isn't assumed to be second countable.
Searching the $\pi$ -Base can help with these sorts of questions: https://topology.pi-base.org/spaces/?q=Separable%2B%24T_2%24%2BLocally+%24n%24-Euclidean%2B%7ELocally+Compact All locally euclidian spaces are locally compact: https://topology.pi-base.org/theorems/T000332/ https://topology.pi-base.org/spaces/?q=Separable%2B%24T_2%24%2BLocally+%24n%24-Euclidean%2B%7ESecond+Countable and https://topology.pi-base.org/spaces/?q=Separable%2B%24T_2%24%2BLocally+%24n%24-Euclidean%2B%7Esigma-compact In this case the $\pi$ -Base doesn't know if such spaces are $\sigma$ -compact or second countable. From https://topology.pi-base.org/spaces?q=%24T_2%24%2BLocally+%24n%24-Euclidean%2B%7E%24%5Csigma%24-compact we see that the separable condition will be necessary. From https://topology.pi-base.org/spaces?q=Separable%2BLocally+%24n%24-Euclidean%2B%7E%24%5Csigma%24-compact we see the Hausdorff condition would be necessary. From https://topology.pi-base.org/spaces?q=%24T_2%24%2BSeparable%2B%7E%24%5Csigm
|general-topology|separable-spaces|
1
Finding the values of $x$ that satisfy $\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$ for all $n$
If the exhaustive set of $x\in(0,2\pi)$ for which $\forall n$ the inequality $$\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$$ is valid is $l_1\le x\le l_2$ , find $l_1$ and $l_2$ . Let $\displaystyle\sum_{i=1}^n \sin(ix)$ = $S$ then I have managed to show that $$S=\frac{\sin\left(\frac{nx+x}{2}\right)\cdot\sin\left(\frac{nx}{2}\right)}{\sin\left(\frac{x}{2}\right)}$$ I do not know what to do next. How can I handle the case for all $n?$ Any help is greatly appreciated.
Partial answer: Proof that $x < \frac{2\pi}{3}$ does not work for all $n$ . If $x \in (\pi/3, 2\pi/3)$ , then for $n=1$ , the statement is false. If $x \in (0,\pi/3)$ , then for some $n$ such that $(n-1)x < \pi/3$ and $2\pi/3\ge nx \ge \pi /3$ , the statement is false. It remains to show that the statement is true for $x \in [2\pi/3, 2\pi)$ (which was verified by some users graphically).
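The remaining claim can at least be probed numerically with the closed form from the question (a sanity check, not a proof):

```python
import numpy as np

def S(x, n):
    # closed form from the question: sum_{i=1}^{n} sin(i x)
    return np.sin((n + 1) * x / 2) * np.sin(n * x / 2) / np.sin(x / 2)

bound = np.sqrt(3) / 2
ns = np.arange(1, 400)

# just below 2*pi/3 the bound already fails (at n = 1)
assert np.any(S(2 * np.pi / 3 - 0.05, ns) > bound)

# sampled x in [2*pi/3, 2*pi) stay within the bound for all tested n
for x in np.linspace(2 * np.pi / 3, 2 * np.pi - 1e-3, 100):
    assert np.all(S(x, ns) <= bound + 1e-9)
```

The small tolerance absorbs the equality case: at $x=2\pi/3$, $n=1$ the sum is exactly $\frac{\sqrt3}{2}$.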
|sequences-and-series|inequality|trigonometry|
0