title
string
question_body
string
answer_body
string
tags
string
accepted
int64
$\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$
Let $a, b, c \in\mathbb{R}, a, b,c\geq \frac{1}{4}$ and $a+b+c=3$ . Show that $\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$ My idea: $(4a-1)(a-1)^2\geq 0$ . Then I tried to divide $4a^3-9a^2+6a-1$ by $2a^2+3$ but I am stuck.
Another proof: WLOG assume $c=\min(a,b,c)$, so $a+b\geq 2$. It's enough to prove: $$\frac{1}{2a^2+3}+\frac{1}{2b^2+3} \ge \frac{4}{(a+b)^2+6}$$ Which is equivalent to: $$(a-b)^2(2a^2+8ab+2b^2-6) \ge 0$$ This holds because $2a^2+8ab+2b^2=2(a+b)^2+4ab\ge 2(a+b)^2\ge 8>6$. So, the first inequality can be written as: $$\frac{4}{(a+b)^2+6}+\frac{1}{2c^2+3} \ge \frac{3}{5}$$ $$\Leftrightarrow \frac{4}{(3-c)^2+6}+\frac{1}{2c^2+3} \ge \frac{3}{5}$$ $$\Leftrightarrow \frac{6c(4 - c)(c - 1)^2}{5(c^2 - 6c + 15)(2c^2 + 3)} \ge 0$$ which is obvious since $\frac{1}{4} \le c \le \frac{5}{2}$
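As an independent numeric sanity check (not part of either proof), one can sample admissible triples with $a,b,c\ge\frac14$ and $a+b+c=3$ and verify the inequality; a sketch in Python:

```python
import random

def F(a, b, c):
    # left-hand side of the inequality
    return sum(1.0 / (2 * t * t + 3) for t in (a, b, c))

random.seed(0)
ok = True
for _ in range(10_000):
    # sample a, b so that a, b, c >= 1/4 and a + b + c = 3
    a = random.uniform(0.25, 2.5)
    b = random.uniform(0.25, 2.75 - a)
    c = 3.0 - a - b
    if F(a, b, c) < 0.6 - 1e-12:
        ok = False
```

Equality is attained at $a=b=c=1$, where the left-hand side is exactly $\frac35$.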
|inequality|
0
How to evaluate $\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$
Question: How to evaluate this integral $$\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$$ My messy try $$\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$$ $$ \begin{array}{r} I=\int_0^1 \int_0^1 \frac{(1+x) \ln (x)-(1+y) \ln (y)}{x-y}(1+\ln (x y)) d y d x \\ I=\int_0^1 \int_0^{\frac{1}{x}} \frac{(1+x) \ln (x)-(1+x t)(\ln (x)+\ln (t))}{1-t}(1+2 \ln (x)+\ln (t)) d t d x \\ =\int_0^1 \int_0^{\frac{1}{x}} x \ln (x)(1+2 \ln (x)+\ln (t)) d t d x- \\ -\int_0^1 \int_0^{\frac{1}{x}} \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x)+\ln (t)) d t d x \\ =\int_0^1 x \ln (x)\left(\frac{1+2 \ln (x)-\ln (x)-1}{x}\right) d x-\int_0^1 \int_0^1 \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x) \ln (t)) d x d t + \end{array} $$ $$- \int_{0}^{\infty} \int_{0}^{\frac{1}{t}} \frac{{(1 + xt) \cdot \ln(t)}}{{1 - t}} \cdot (1 + 2 \cdot \ln(x) + \ln(t)) \,dx \,dt$$ $$I_1 = \int_{0}
$$I = \int_{0}^{1}\int_{0}^{1}\frac{\left(1+x\right)\ln x-\left(1+y\right)\ln y}{x-y}\left(1+\ln\left(xy\right)\right)dydx$$ $$I = I_{1} +I_{2} + I_{3} + I_{4}$$ where $$I_{1} = \int_{0}^{1}\int_{0}^{1}\frac{\ln^{2}x-\ln^{2}y}{x-y}dydx, I_{2} = \int_{0}^{1}\int_{0}^{1}\frac{\ln x-\ln y}{x-y}dydx,$$ $$ I_{3} = \int_{0}^{1}\int_{0}^{1}\frac{x\ln x-y\ln y}{x-y}dydx, I_{4} = \int_{0}^{1}\int_{0}^{1}\frac{x\ln x-y\ln y}{x-y}\left(\ln x+\ln y\right)dydx.$$ None of these integrands behaves well when $y = x$, so split the bounds for $x$ and $y$. For example: $$I_{1} = \int_{0}^{1}\int_{0}^{x}\frac{\ln^{2}x-\ln^{2}y}{x\left(1-\frac{y}{x}\right)}dydx-\int_{0}^{1}\int_{x}^{1}\frac{\ln^{2}x-\ln^{2}y}{y\left(1-\frac{x}{y}\right)}dydx$$ Now you can perform a geometric series expansion on the denominators. This is a very long and tedious calculation but is not very difficult, so I will omit it. In the end: $$I_{1} = -\frac{2\pi^{2}}{3}-4\sum_{n=1}^{\infty}\frac{1}{n^{3}}$$ $$I_{2} = \frac{\pi^{2}}{3}
|calculus|integration|multivariable-calculus|definite-integrals|closed-form|
0
Number of real solution of $e^{4x}+8e^{3x}+13e^{2x}-8e^x+1=0$
Find the number of real solutions of $e^{4x}+8e^{3x}+13e^{2x}-8e^x+1=0$. What I tried: using the Arithmetic Mean–Geometric Mean inequality, $\displaystyle e^{4x}+8e^{3x}+13e^{2x}+1\geq 4(104e^{9x})^{\frac{1}{4}}>8e^x$. So $e^{4x}+8e^{3x}+13e^{2x}-8e^x+1>0$ for all real $x$. But the answer is given as $2$ real solutions. Please have a look at my problem: what's wrong with this solution?
The error in your argument is here: $$4(104e^{9x})^{\dfrac{1}{4}}> 8e^x.$$ Put $x=-4$: $$4(104e^{9x})^{\dfrac{1}{4}}= 4(104e^{-36})^{\dfrac{1}{4}}=0.001576... $$ $$8e^x=8e^{-4}=0.1465...$$ so the AM–GM lower bound does not dominate $8e^x$ for all real $x$.
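The two real solutions can also be located directly: substituting $t=e^x>0$ turns the equation into the quartic $t^4+8t^3+13t^2-8t+1=0$, which by Descartes' rule of signs (sign pattern $+,+,+,-,+$, two changes) has at most two positive roots. A sketch using plain bisection, no libraries:

```python
import math

def p(t):
    # t^4 + 8 t^3 + 13 t^2 - 8 t + 1, in Horner form
    return ((t + 8) * t + 13) * t * t - 8 * t + 1

def bisect(f, lo, hi, iters=200):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# p(0) > 0, p(0.2) < 0, p(0.5) > 0 brackets the two positive roots
t1 = bisect(p, 1e-9, 0.2)
t2 = bisect(p, 0.2, 0.5)
roots_x = [math.log(t1), math.log(t2)]   # the two real solutions in x
```

Since exactly two positive roots $t$ are found and Descartes' rule caps the count at two, the original equation has exactly $2$ real solutions.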
|calculus|
0
Why would the triangles join up to a rhombus?
The question I am going to present may well sound very dumb, but it has become a hell of a confusing thing for me. It is from the ISI B.Math–B.Stat entrance exam 2022 UGA question paper, problem 29. The question included pictures, but I am going to rephrase it so that the pictures are not needed, though I will still include them. If $\triangle{APB}$ has area $4$, $\triangle{BPC}$ has area $5$, $\triangle{CPD}$ has area $x$ and $\triangle{APD}$ has area $13$, where $AB=BC=CD=DA=6$, then find the value of $x$. Here is the picture: Now, here's my question. Apparently if we join up the triangles we get a rhombus, like this: Then we can simply draw two perpendiculars through the point $P$. Considering the areas of the triangles, we get $\triangle{APB}+\triangle{CPD}=\triangle{BPC}+\triangle{APD}$, meaning that $x=13+5-4=14$. But why would the triangles add up to a rhombus? I mean, it intuitively makes sense, but can we give a
Why do you think 'apparently… we get a rhombus'? It is not always true. Any proof that assumes that the triangles join to form a rhombus is wrong, because there is a counterexample even with $a=b=c=d=6$. Unless we use the areas somehow, we can never guarantee that they will join to form a rhombus. The following was not my idea; it is merely a visualization of the data (slightly modified) given by @Eric in his post:
|geometry|
0
How to solve $\int \frac{1}{\sqrt{x^2+x-1}}dx$?
How can one evaluate $\int \frac{1}{\sqrt{x^2+x-1}}dx$? Our professor has given us this integral to think about and I've no idea how to solve it. I understand there's probably a substitution somewhere, but I can't figure out which. The integral calculators I've used online haven't been helpful enough to explain exactly what to do and why.
First we want to complete the square under the root, in order to set up a convenient substitution: $$\int\frac1{\sqrt{x^2+x-1}}\mathrm{d}x$$ can be rewritten as $$\frac2{\sqrt5}\int\frac1{\sqrt{(\frac2{\sqrt5}x+\frac1{\sqrt5})^2-1}}\mathrm{d}x$$ since $x^2+x-1=\frac54\left[\left(\frac{2x+1}{\sqrt5}\right)^2-1\right]$. Looking at this integral, one can wishfully think "wouldn't it be nice if the messy squared term were a cosh expression? Then the denominator would be $\sqrt{\sinh^2(u)}$, which would cancel very nicely." This leads to the substitution $\cosh u=\frac2{\sqrt5}x+\frac1{\sqrt5}$ (on the branch $x\ge\frac{\sqrt5-1}{2}$, where the radicand is positive, taking $u\ge0$). Changing the variables in this way turns the integral into $$\int\frac{\sinh u}{\sqrt{\cosh^2u-1}}\mathrm{d}u,$$ which is just the integral $\int1\,\mathrm{d}u$ due to the identity $\cosh^2u-1=\sinh^2u$. The antiderivative is therefore $u=\operatorname{arcosh}(\frac2{\sqrt5}x+\frac1{\sqrt5})$.
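For the record, the question's radicand is $x^2+x-1=\frac54\big[\big(\frac{2x+1}{\sqrt5}\big)^2-1\big]$, so the hyperbolic antiderivative is $\operatorname{arcosh}\big(\frac2{\sqrt5}x+\frac1{\sqrt5}\big)$ on the branch $x>\frac{\sqrt5-1}{2}$. A quick numeric check that its derivative matches the integrand (a sketch):

```python
import math

def antiderivative(x):
    # arcosh((2x+1)/sqrt(5)), valid where (2x+1)/sqrt(5) >= 1
    return math.acosh((2 * x + 1) / math.sqrt(5))

def integrand(x):
    return 1.0 / math.sqrt(x * x + x - 1)

# central finite differences of the antiderivative vs. the integrand
h = 1e-6
checks = []
for x in (1.0, 2.0, 5.0, 10.0):
    num = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    checks.append(abs(num - integrand(x)))
```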
|calculus|integration|indefinite-integrals|
0
Decomposition of continuous functionals on $C^*$-algebra
Consider a $C^*$-algebra $A$ and a continuous linear functional $\varphi:A\to\mathbb{C}$ with adjoint given by $\varphi^*(a):=\overline{\varphi(a^*)}$. We know that each positive (cont. lin.) functional is self-adjoint. How would one show that each $\varphi\in A^*$ can be written as a linear combination of two self-adjoint members of $A^*$? My attempt: I thought I'd try with $\varphi_1(a+ib):=\varphi(a)$ and $\varphi_2(a+ib):=\varphi(b)$, though I am unsure whether $\varphi_1,\varphi_2$ are indeed self-adjoint. I manage to conclude that $\varphi_1^*(a+ib)=\overline{\varphi(a)}$ and $\varphi_2^*(a+ib)=\overline{\varphi(b)}$, but am unsure whether concluding those equal $\varphi(a),\varphi(b)$ would be legitimate. Note that both $a$ and $b$ are self-adjoint.
$$ \begin{align} \varphi^{**}(a)=\overline{\varphi^*(a^*)}&=\overline{\overline{\varphi(a^{**})}}=\varphi(a)\\ (\varphi+\psi)^*(a)=\overline{(\varphi+\psi)(a^*)}&= \overline{\varphi(a^*)+\psi(a^*)}=\overline{\varphi(a^*)}+\overline{\psi(a^*)} =\varphi^*(a)+\psi^*(a)\\ (\lambda \varphi)^*(a)=\overline{\lambda\varphi(a^*)}&= \bar{\lambda}\overline{\varphi(a^*)}=\bar{\lambda}\varphi^*(a) \end{align} $$ Any time you have a vector space $V$ over $\Bbb{C}$ with a complex linear involution $i\colon V\to V$ with the properties $$ \begin{align} i(i(v))&=v\\ i(v+w)&=i(v)+i(w)\\ i(\lambda v)&=\bar\lambda i(v) \end{align} $$ the self-adjoint elements are $v\in V$ such that $i(v)=v$ and any $v\in V$ has a decomposition $v=\Re(v)+\sqrt{-1}\Im(v)$ where $$ \begin{split} \Re(v)=\frac{v+i(v)}{2}\\ \Im(v)=\frac{v-i(v)}{2\sqrt{-1}} \end{split} $$ and $\Re(v),\Im(v)$ are self-adjoint.
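To make the abstract decomposition concrete, here is a toy check (my own illustration, not from the post) on $V=M_2(\mathbb{C})$ with the involution $i(A)=A^*$, the conjugate transpose. The matrix entries below are arbitrary sample values:

```python
def star(A):
    # the involution: conjugate transpose of a 2x2 matrix (tuple of rows)
    return tuple(tuple(A[j][k].conjugate() for j in range(2)) for k in range(2))

def lincomb(c1, A, c2, B):
    # entrywise c1*A + c2*B
    return tuple(tuple(c1 * A[k][j] + c2 * B[k][j] for j in range(2)) for k in range(2))

A = ((1 + 2j, 3 - 1j), (0.5j, -2 + 0j))   # an arbitrary element of V
re = lincomb(0.5, A, 0.5, star(A))        # Re(A) = (A + i(A)) / 2
im = lincomb(-0.5j, A, 0.5j, star(A))     # Im(A) = (A - i(A)) / (2 sqrt(-1))
```

`re` and `im` are fixed by the involution (self-adjoint), and `A` is recovered as `re + i*im`, exactly as in the general decomposition above.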
|functional-analysis|c-star-algebras|
1
Produce an explicit bijection between rationals and naturals
I remember my professor in college challenging me with this question, which I failed to answer satisfactorily: I know there exists a bijection between the rational numbers and the natural numbers, but can anyone produce an explicit formula for such a bijection?
The following is not a rigorous mathematical proof but rather a sketch of an idea. First, represent any rational number $\frac{a}{b}$ with $a, b\in\mathbb{N}$, $b\ne 0$, by the ordered pair $(a, b)$. In this way, $\mathbb{Q}$ can be represented by the set of ordered pairs: $$ \begin{matrix} (1, 1), & (2, 1), & \ldots, & (k, 1), & \ldots\\ (1, 2), & (2, 2), & \ldots, & (k, 2), & \ldots\\ & & \vdots & & \\ (1, n), & (2, n), & \ldots, & (k, n), & \ldots\\ & & \vdots & & \end{matrix} $$ Denote by $R_1$ the set in the first row of the table, by $R_2$ the second row, and by $R_n$ the $n$-th row. Then, it's easy to see that there's a bijection $\mathbb{N}\to R_i$ for all $i\in\mathbb{N}_+$. Let $N_0=\{2n: n\in\mathbb{N}\}$, $N_1=\{2n+1: n\in\mathbb{N}\}$. It follows that $\mathbb{N}=N_0\cup N_1$. Observe that the mappings $\mathscr{i}: n\mapsto n$, $f: 2n\mapsto n$ and $g: 2n+1\mapsto n$ are all bijective. In other words, the mappings $\mathscr{i}: \mathbb{N}\to \mathbb{N}$, $f: N_0\to\mathbb{N}$ and $g: N_1\to\mathbb{N}$ are biject
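One well-known fully explicit enumeration (different from the row-by-row sketch above) is the Calkin–Wilf sequence: starting from $1$ and iterating $x\mapsto\frac{1}{2\lfloor x\rfloor-x+1}$ visits every positive rational exactly once, already in lowest terms. A sketch:

```python
from fractions import Fraction

def calkin_wilf(n):
    """Return the first n terms of the Calkin-Wilf enumeration of Q_{>0}."""
    x = Fraction(1)
    out = []
    for _ in range(n):
        out.append(x)
        # floor(x) for positive x, then the next-rational map
        x = 1 / (2 * (x.numerator // x.denominator) - x + 1)
    return out

terms = calkin_wilf(1000)
```

The sequence begins $1, \frac12, 2, \frac13, \frac32, \frac23, 3, \ldots$, and the position of a rational in it gives the desired bijection with $\mathbb{N}$ for the positive rationals; interleaving signs extends it to all of $\mathbb{Q}$.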
|elementary-set-theory|rational-numbers|natural-numbers|
0
Deduction from Schanuel's conjecture
I am deducing from Schanuel's conjecture the following statement: Let $\alpha, \beta$ be positive real algebraic numbers with $\alpha,\beta \neq 1$ and $\frac{\log \beta}{\log \alpha} \notin \mathbb{Q}$ . Then $\{x \in \mathbb{R}: \alpha^x, \beta^x \in \overline{\mathbb{Q}}\}=\mathbb{Q}$ . The inclusion $\supseteq$ is easy. For $\subseteq$, after applying Schanuel's conjecture twice by finding an algebraic relation between the logarithms, I am now at the point that $\log \alpha, \log \beta, \log \alpha^x$ are linearly dependent over $\mathbb{Q}$ . Then I write a linear relation $$A\log \alpha+B\log \beta+C\log \alpha^x=0$$ and then do something to get a contradiction. For this I solved the case $B=0$: in this case we have $A,C \in \mathbb{Q}$ and thus $x$ must be in $\mathbb{Q}$. So we only need to solve the case $B \neq 0$. But then I don't know what to do next: I tried many approaches but they all led to dead ends. Any help is appreciated.
I managed to get unstuck, so I will leave my proof here in case somebody needs it. So, $\log \alpha^x$ can be represented by a $\mathbb{Q}$-linear combination $$\log \alpha^x=A_1\log \alpha+B_1 \log \beta.$$ The argument in the question post can be reused with $\log \alpha^x$ replaced by $\log \beta^x$, giving another $\mathbb{Q}$-linear combination $$\log \beta^x=A_2\log \alpha+B_2 \log \beta.$$ Let $t=\dfrac{\log \beta}{\log \alpha} \notin \mathbb{Q}$; then, dividing both sides of the two linear combinations by $\log \alpha$ and $\log \beta$ respectively, we get $$x=A_1+B_1t=A_2\cdot\dfrac{1}{t}+B_2.$$ Thus $B_1t^2+(A_1-B_2)t-A_2=0$. This quadratic equation, with unknown $t$, has coefficients living in $\mathbb{Q}$, thus $t \in \overline{\mathbb{Q}}$ and then $x \in \overline{\mathbb{Q}}$. Therefore the case $x \notin \mathbb{Q}$ cannot happen, since it would make $\alpha^x$ and $\beta^x$ not algebraic anymore, a contradiction. We conclude that $x \in \mathb
|number-theory|transcendental-numbers|transcendence-theory|
1
Prove function is in $\mathcal{O}(x\log(x))$
Consider $T, g: \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ which are bounded on any bounded interval, satisfy $$T(x)=2 T(x / 2)+g(x)$$ for every real number $x \geqslant 1$ and $g(x)=\mathcal{O}(x)$ . I would like to show $T(x)=\mathcal{O}(x \log (x))$ . For this I want to show $\lim_{x\rightarrow \infty} \dfrac{T(x)}{x\log(x)}$ is finite. Notice $$T(x)=2^nT(x/2^n)+2^{n-1}g(x/2^{n-1})+\ldots+2g(x/2)+g(x)\leq2^nT(x/2^n)+nCx$$ where $C$ is a constant such that $g(x)\leq Cx$ . Then $$\lim\limits_{x\rightarrow \infty} \dfrac{T(x)}{x\log(x)}\leq \lim\limits_{x\rightarrow \infty} \dfrac{2^nT(x/2^n)+nCx}{x\log(x)}=\lim\limits_{x\rightarrow \infty} \dfrac{2^nT(x/2^n)}{x\log(x)}.$$ Now I tried taking $n\rightarrow \infty$ and setting $x=2^n$ to simplify all terms but I am really not sure I am allowed to do this. Do you have any idea how to finish the proof?
You don't need to prove that the limit is finite; the limit may not even exist. However, you can easily prove that the function is bounded. Indeed, for $x\in \mathbb R_{>0}$, let $n = \left\lceil\log_2(x)\right\rceil$; then $$\log_2(x) \le n \le \log_2(x) + 1 \implies x \le 2^n \le 2x \implies \frac12 \le \frac{x}{2^n} \le 1$$ and \begin{align} \frac{T(x)}{x\log_2(x)} &\le \frac{T\left(\frac x{2^n}\right)}{\frac x{2^n} \log_2 x} + C\frac{n}{\log_2x}\\ &\le \frac{2\sup_{t\in\left[\frac12,1\right]} T(t)}{\log_2(x)} + C\left(1 + \frac1{\log_2(x)}\right) \end{align} (the supremum is finite because $T$ is bounded on bounded intervals). Can you finish the proof?
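A quick numeric illustration of the bound, with the assumed concrete choices $g(x)=x$ and $T\equiv1$ on $(0,1)$ (both hypothetical, chosen only to instantiate the recurrence consistently with $g(x)=\mathcal{O}(x)$ and boundedness on bounded intervals):

```python
import math

def T(x, g=lambda t: t):
    """One concrete instance: T(x) = 2 T(x/2) + g(x) for x >= 1, T = 1 below 1."""
    if x < 1:
        return 1.0
    return 2 * T(x / 2, g) + g(x)

# the ratio T(x) / (x log2 x) stays bounded as x grows
ratios = [T(x) / (x * math.log2(x)) for x in (10.0, 100.0, 1e4, 1e6, 1e8)]
```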
|real-analysis|functions|asymptotics|
1
When is von Neumann entropy differentiable? (Edited)
Given a smoothly time-dependent density matrix (positive-semidefinite matrix with trace 1) $\rho(t)$ , its von Neumann entropy is defined as $$ S(t)=\mathrm{Tr}(f(\rho(t))),\ f:[0,\infty)\ni x\mapsto -x\ln x\in\mathbb{R}\ (0\ln0=0). $$ I want to know when $S(t)$ is differentiable. If $\rho(t)$ has the same rank, then it is differentiable and we have $$ S'(t)=-\mathrm{Tr}(\rho'(t)\ln\rho(t)). $$ (Related question is here . ) If the rank of $\rho(t)$ varies, I think $S(t)$ is not differentiable in general, because $f$ isn't (one-sided) differentiable at $x=0$ . However, some mathematical physicists have stated that $S(t)$ is differentiable without the rank assumption for the special $\rho(t)$ (Equation (1)&(3) in this paper and Equation (10)&(13), (16)&(26) in this paper ). Specifically, this is the case where $\rho(t)$ is a matrix on the tensor product space $\mathcal{H}_A\otimes\mathcal{H}_B$ and is given by using partial traces as $$ \rho(t)=\mathrm{Tr}_B(e^{-iHt}\rho(0)e^{iHt}), $$ w
We know that when $\rho$ describes a pure state then $\rho^2=\rho$ and $\rho\log\rho=\rho\log\rho^2=2\rho\log\rho$, so that $\rho\log\rho=0$ and $S=-\operatorname{tr}(\rho\log\rho)=0\,.$ Indeed, $S(t)\equiv 0$ is equivalent to $\rho(t)$ being pure. In particular, in that case $S(t)$ is differentiable and $\rho(t)$ is singular. A typical pure state has density matrix $$ \rho=\pmatrix{1&0\\0&0}\quad\text{ or }\quad\rho=\pmatrix{0&0\\0&1}\,. $$ A typical mixed state is $$ \rho(t)=\pmatrix{a(t)&0\\0&b(t)}\quad\text{ with }\quad a(t)+b(t)=1\,, \;a(t),b(t)\ge 0\,. $$ Then, if $a(t),b(t)>0\,,$ $$ S(t)=-a(t)\log a(t)-b(t)\log b(t)\,. $$ l'Hospital's rule allows us to extend this continuously to the case $a(t)=0$ or $b(t)=0$ with the convention $0\log 0=0$ (compatible with pure states). This $S(t)$ is obviously differentiable whenever $a$ and $b$ are differentiable, regardless of whether the rank of $\rho(t)$ changes or not. Edit: When $a$ reaches zero at $t_0$ we need one extra condition to ensure the differentiability of $a(t)\log a(t)$ a
|mathematical-physics|matrix-calculus|entropy|statistical-mechanics|
0
A locally finite topological space is Alexandrov
I'm trying to prove this statement: A locally finite topological space is Alexandrov. We know that for every point $x\in X$ , we can find a neighbourhood containing only a finite number of points of $X$ . How can we use that to prove $\cap_{i\in I}A_i $ is open (that is: arbitrary intersections of open sets are open, our definition of an Alexandrov space)? Thanks for your time.
Notice that if a space $X$ is locally finite, each point $x \in X$ has a minimal open neighborhood: we can take the intersection of all open neighborhoods of $x$, and this intersection can be realized as a finite one (we begin with a finite open neighborhood, then intersect with a neighborhood which misses some point of the current intersection, if such a neighborhood exists, and continue the process until we run out of such neighborhoods; the process terminates because the starting neighborhood is finite). Now, if we had a collection of open sets $\{ U_{\lambda} \}_{\lambda \in \Lambda}$ whose intersection is not open, there would have to exist a point $x \in \bigcap U_{\lambda}$ such that no open neighborhood of that point lies in the intersection. But the minimal neighborhood $U_x$ is contained in every $U_{\lambda}$, so it is also contained in the intersection, which is a contradiction.
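The minimal-neighborhood argument can be played through on a small finite (hence locally finite) example space; a sketch with one sample topology:

```python
from itertools import combinations

X = frozenset({1, 2, 3})
# a sample topology on X, given as the collection of its open sets
opens = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def minimal_nbhd(x):
    """Intersection of all open sets containing x (open in an Alexandrov space)."""
    out = X
    for U in opens:
        if x in U:
            out = out & U
    return out

# collect every possible intersection of open sets
all_intersections = set()
for r in range(1, len(opens) + 1):
    for combo in combinations(opens, r):
        inter = X
        for U in combo:
            inter = inter & U
        all_intersections.add(inter)
```

Every intersection lands back in `opens`, as the proposition predicts for a locally finite space.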
|general-topology|
0
Inductive definition of concatenation of strings
May someone help me understand the following inductive definition of concatenation of strings : We will define $\alpha\cdot\beta$ by induction on $|\beta|.$ If $|\beta|=0\Rightarrow \beta=\epsilon,$ so then $\alpha\cdot\beta=\alpha\cdot\epsilon=\alpha$ If $|\beta|=n+1,$ then $\beta=\gamma b,|\gamma|=n.$ Then $\alpha\cdot\beta=(\alpha\cdot\gamma)b$ . I understand the basis of the induction. At least I think I do. Perhaps I don't understand how inductive definitions work. When $\beta$ is the empty word, then it's obvious that when we concatenate it with $\alpha$ , we'll get $\alpha$ again. But then I don't understand what happens next and why we take the length of $\beta$ to be $n+1$ . What's the idea? What's the inductive step when using induction to define a new object (how do we use it)? Thanks!
May someone help me understand the following inductive definition of concatenation of strings You would have an inductive definition of "string", which typically has two constructors: one for the "empty" string, and one for the "compound" string with one head character followed by the rest of the string. (In fact, a "string" here is nothing but a "list of characters".) Then you would have a recursive definition of "concatenation of strings" by cases on the structure of one of the operands, not on its length: in your example, we consider $\beta$, with one case for $\beta$ empty, the other for $\beta$ a compound string. Of course, for an appropriate definition of "size of a string", if $|\beta| = 0$, one can derive $\beta = \operatorname {empty}$, and, if $|\beta| \neq 0$, one can derive $\beta = h :: \gamma$ for some character $h$ and string $\gamma$, but that is an extra preliminary step that is not needed to reason inductively (on inductive strings as well as in general). What's the i
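The definition in the question transcribes directly into a recursive program, with the recursion following the second string exactly as in the two inductive clauses; a sketch:

```python
def concat(alpha, beta):
    """Concatenate strings by induction on beta.

    Base case:  beta = epsilon       -> alpha . epsilon = alpha
    Inductive:  beta = gamma b       -> alpha . beta = (alpha . gamma) b
    """
    if beta == "":                        # |beta| = 0, so beta is the empty word
        return alpha
    gamma, b = beta[:-1], beta[-1]        # beta = gamma b, with |gamma| = n
    # appending the single symbol b is the primitive step of the definition
    return concat(alpha, gamma) + b
```

Each recursive call shrinks $\beta$ by one symbol, which is exactly why the inductive step works on $|\beta|=n+1$: it reduces the definition to the already-defined case $|\gamma|=n$.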
|logic|induction|
0
Prove that $(1-\omega+\omega^2)(1-\omega^2+\omega^4)(1-\omega^4+\omega^8)\cdots$ to $2n$ factors $=2^{2n}$
Prove that $(1-\omega+\omega^2)(1-\omega^2+\omega^4)(1-\omega^4+\omega^8)…$ to 2n factors $=2^{2n}$ where $\omega$ is the cube root of unity My attempt: $(1+\omega^n+\omega^{2n})=0$ $\Rightarrow (1-\omega^n+\omega^{2n})=-2\omega^n$ $\Rightarrow \prod_{n=1 \to 2n}(-2\omega^n) =2^{2n}\omega^{1+2+3…2n}$ $\Rightarrow \prod_{n=1 \to 2n}(-2\omega^n) =2^{2n}\omega^{n(2n+1)}$ If $n$ is $3k$ type: $2^{2n}\omega^{n(2n+1)}=2^{2n}\omega^{3m}=2^{2n}$ If $n$ is $3k+1$ type: $2^{2n}\omega^{n(2n+1)}=2^{2n}\omega^{3m}=2^{2n}$ If $n$ is $3k+2$ type: $2^{2n}\omega^{n(2n+1)}=2^{2n}\omega^{(3k+2)(6k+5)} =2^{2n}\omega^{(3m+1)}=2^{2n}\omega^{3m}\omega=2^{2n}\omega$ What have I done wrong here?
I know my answer is essentially the same as Michael Rozenberg's, but I couldn't help posting my answer here anyway (although the calculation is the same, my approach to this was different). Each term is a 3-term geometric progression, and the common ratio is $-\omega^{2^{k-1}}$ . The product is now $$\prod^{2n}_{k=1} \frac{(-\omega^{2^{k-1}})^3 - 1}{-\omega^{2^{k-1}}-1}$$ Using the fact that $\omega^3 = 1$ , this reduces to $$\prod^{2n}_{k=1} \frac{2}{\omega^{2^{k-1}}+1}$$ As $k$ runs from $1$ to $2n$, it can be observed that every consecutive pair of $\omega^{2^{k-1}}$ reduces to $\omega$ and $\omega^2$ by using $\omega^3 = 1$ . This gives us $n$ identical pairs of the fraction $\frac{1}{\omega+1}\cdot\frac{1}{\omega^2+1} = 1$ . The product of all these fractions is $1$ , which leaves the original product as $2^{2n}$ .
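Both computations are easy to confirm numerically with $\omega=e^{2\pi i/3}$; a sketch:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # a primitive cube root of unity

def product(num_factors):
    """Product of the first num_factors terms (1 - w^(2^(k-1)) + w^(2^k))."""
    prod = 1 + 0j
    for k in range(1, num_factors + 1):
        e = 2 ** (k - 1)
        # reduce exponents mod 3 since w^3 = 1 (keeps the powers numerically tame)
        prod *= 1 - omega ** (e % 3) + omega ** ((2 * e) % 3)
    return prod

# 2n factors for n = 1, 2, 3, 4 should give 4^n = 2^(2n)
values = [product(2 * n) for n in range(1, 5)]
```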
|sequences-and-series|algebra-precalculus|complex-numbers|roots|products|
0
Explicit isomorphism of algebras $\mathbb C[S_3]\cong \mathbb C \times \mathbb C \times M_2(\mathbb C)$
Let $\mathbb C[S_3]$ be the group algebra of $S_3$ . We have an isomorphism of $\mathbb C$ -algebras $$\mathbb C[S_3]\cong \mathbb C \times \mathbb C \times M_2(\mathbb C).$$ The existence of such an isomorphism is not difficult to show using Maschke's theorem . Moreover, one can show using character theory, as done, e.g., in this post , that this decomposition corresponds precisely to the (iso classes of) irreducible representations of $S_3$ . However, how does one construct an explicit isomorphism between $\mathbb C[S_3]$ and its decomposition, as $\mathbb C$ -algebras? I do know that one copy of $\mathbb C$ corresponds to the trivial representation, one copy corresponds to the sign representation, and the matrix algebra $M_2(\mathbb C)$ corresponds to the "standard" representation. I also know that elements in $\mathbb C[S_3]$ have the form $\sum_{1 \leq i \leq 6}\lambda_i \sigma_i$ for some $\lambda_i \in \mathbb C$ and $\sigma_i \in S_3$ . I also think this decomposition above int
There's an explicit isomorphism for every finite group $G$. Let $V_1, \ldots, V_n$ be a complete set of representatives of the irreducible representations up to isomorphism, with $\rho_i:G \to \operatorname{GL}(V_i)$ the defining homomorphisms. Then the map $$\Bbb C[G] \to \prod_i \mathrm{End}(V_i)$$ sending $g \mapsto (\rho_i(g))_i$, extended linearly, is the desired isomorphism of algebras. This is related to harmonic analysis on finite groups. A reference is Representation Theory of Finite Groups by B. Steinberg.
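For $G=S_3$ concretely, this recipe can be machine-checked: send $\sigma\mapsto(\mathrm{triv}(\sigma),\operatorname{sgn}(\sigma),\rho_{\mathrm{std}}(\sigma))$, verify multiplicativity factor by factor (the trivial factor is automatic), and verify that the six images are linearly independent in the $6$-dimensional target, so the linear extension is bijective. A sketch, with the standard representation realized on the sum-zero plane of the permutation representation:

```python
from itertools import permutations
from fractions import Fraction

def compose(s, t):
    # (s . t)(i) = s(t(i)); elements of S3 as permutations of {0, 1, 2}
    return tuple(s[t[i]] for i in range(3))

def sign(s):
    return 1 if s in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1

def std(s):
    # matrix of s on the sum-zero plane, in the basis v1 = e0 - e1, v2 = e1 - e2
    def coords(a, b):
        # e_a - e_b = x*v1 + y*v2 with x = c0, y = c0 + c1
        c = [0, 0, 0]
        c[a] += 1
        c[b] -= 1
        return (c[0], c[0] + c[1])
    col1 = coords(s[0], s[1])      # image of v1 under s
    col2 = coords(s[1], s[2])      # image of v2 under s
    return ((col1[0], col2[0]), (col1[1], col2[1]))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

S3 = list(permutations(range(3)))

# multiplicativity in the sign and standard factors
homo_ok = all(
    sign(compose(s, t)) == sign(s) * sign(t)
    and std(compose(s, t)) == matmul(std(s), std(t))
    for s in S3 for t in S3
)

# flatten each image into C^6 and compute the rank by Gaussian elimination;
# rank 6 means the six images span, so the map is an isomorphism
rows = [[Fraction(1), Fraction(sign(s))] + [Fraction(x) for r in std(s) for x in r]
        for s in S3]
rank = 0
for col in range(6):
    piv = next((r for r in range(rank, 6) if rows[r][col] != 0), None)
    if piv is None:
        continue
    rows[rank], rows[piv] = rows[piv], rows[rank]
    for r in range(6):
        if r != rank and rows[r][col] != 0:
            f = rows[r][col] / rows[rank][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
    rank += 1
```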
|abstract-algebra|group-theory|representation-theory|symmetric-groups|
0
Integrate $\frac1{1 + ax^2}$ using partial fractions decomposition
I'm trying to solve the following integral: $\int\frac{1}{1+ax^2}dx$, where $a$ is some positive constant. We can't, of course, use a basic $u$-substitution, as the derivative of $1 + ax^2$ is $2ax$, which isn't found elsewhere in the expression. I tried to use partial fraction decomposition to integrate this but ran into the issue outlined below. I'd like to divide this into two related questions: My first approach was to try to use partial fraction decomposition. However, I wasn't able to factor the expression $1 + ax^2$ into a product of two linear expressions. Can this be done? If we can't do partial fraction decomposition, how can this integral be solved?
This is an immediate integral of a composition of functions. Since $a>0$, the polynomial $1+ax^2$ has no real roots, so it cannot be factored into real linear factors and a real partial fraction decomposition is not available; instead, $$ \int \frac{dx}{1+ax^2} = \frac{1}{\sqrt{a}}\int\frac{\sqrt{a}}{1+(\sqrt{a}x)^2}dx = \frac{1}{\sqrt{a}}\arctan(\sqrt{a}x)+c $$
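A numeric check of the antiderivative for a few values of $a$ (a sketch):

```python
import math

def antiderivative(x, a):
    # arctan(sqrt(a) x) / sqrt(a), for a > 0
    return math.atan(math.sqrt(a) * x) / math.sqrt(a)

def integrand(x, a):
    return 1.0 / (1.0 + a * x * x)

# central finite differences of the antiderivative vs. the integrand
h = 1e-6
errs = [abs((antiderivative(x + h, a) - antiderivative(x - h, a)) / (2 * h)
            - integrand(x, a))
        for a in (0.5, 2.0, 10.0) for x in (-3.0, 0.0, 1.0, 4.0)]
```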
|calculus|integration|indefinite-integrals|
1
Why would the triangles join up to a rhombus?
The question I am going to present may well sound very dumb, but it has become a hell of a confusing thing for me. It is from the ISI B.Math–B.Stat entrance exam 2022 UGA question paper, problem 29. The question included pictures, but I am going to rephrase it so that the pictures are not needed, though I will still include them. If $\triangle{APB}$ has area $4$, $\triangle{BPC}$ has area $5$, $\triangle{CPD}$ has area $x$ and $\triangle{APD}$ has area $13$, where $AB=BC=CD=DA=6$, then find the value of $x$. Here is the picture: Now, here's my question. Apparently if we join up the triangles we get a rhombus, like this: Then we can simply draw two perpendiculars through the point $P$. Considering the areas of the triangles, we get $\triangle{APB}+\triangle{CPD}=\triangle{BPC}+\triangle{APD}$, meaning that $x=13+5-4=14$. But why would the triangles add up to a rhombus? I mean, it intuitively makes sense, but can we give a
The problem is incorrect. Let’s say that $c=6$ and see if this gives us a good counterexample. Given an acute triangle with two sides and an area, it’s possible to compute the third side (I just used an online calculator). Since b,6,6 gives area 5, b can be computed as 1.6833. Since a,1.6833,6 gives area 4, a can be 5.148. Since d,5.148,6 gives area 13, d can be 5.399. The area of c,d,6 would then be the area of a triangle with sides 6,5.399,6 which is 14.465. Thus, it’s possible to satisfy all the constraints of the problem and have $x$ not be one of the choices. I suppose the problem might still be possible if the constraints force some kind of range on the possible area which only contains 14, but I think a more likely solution is that the problem writers just screwed up.
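The chain of computations is easy to re-verify with Heron's formula, using the answer's rounded side lengths (a sketch):

```python
import math

def heron(a, b, c):
    # area of a triangle from its three side lengths
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# the side lengths recovered in the answer (c was fixed to 6)
b, a, d = 1.6833, 5.148, 5.399
areas = [heron(b, 6, 6),   # triangle BPC, should be 5
         heron(a, b, 6),   # triangle APB, should be 4
         heron(d, a, 6),   # triangle APD, should be 13
         heron(6, d, 6)]   # triangle CPD, comes out near 14.465, not 14
```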
|geometry|
0
Is the ring $\mathbb{R} [x]$ Noetherian?
I aim to assess the soundness of my approach. In an exercise, the question is posed regarding whether $\mathbb{R}[x]$ is Noetherian, and my response is negative. To support this, I employ the definition that a ring $R$ is Noetherian if each ascending chain of ideals in $R$ eventually becomes stable. Considering the family $\{I_n\}_{n\in\mathbb{N}}$ , where $$I_n = \{f\in\mathbb{R}[x] \mid \text{such that } f(a) = 0 \ \forall a\in\mathbb{R}\setminus \{0,1,...,n\}\}.$$ I can easily demonstrate that $I_n$ is an ideal for all $n\in\mathbb{N} $ . Furthermore, if $f \in I_n$ , then $f(a) = 0$ for all $a \in \mathbb{R}\setminus \{0,1,...,n\}$ , and consequently, $f(a) = 0$ for all $a \in \mathbb{R}\setminus \{0,1,...,n, n+1\}$ . This establishes $I_n \subset I_{n+1}$ for all $n \in \mathbb{N}$ , prompting me to observe the following ascending chain: $$I_0 \subset I_1 \subset I_2 \subset \cdots. $$ However, this chain does not stabilize, indicating that $\mathbb{R}[x]$ is not Noetherian. Is my
To expand on my comment, suppose $f \in \mathbb{R}[x]$ vanishes at every point outside of a finite set $S \subset \mathbb{R}$ . Then, the set of elements of $\mathbb{R}$ which are sent to zero is $f^{-1}(\{0\}) \subset \mathbb{R}$ , which is a closed set. Hence, since $\mathbb{R} \setminus S$ is dense in $\mathbb{R}$ , we must have $f^{-1}(\{0\}) = \mathbb{R}$ , where we view $f: \mathbb{R} \to \mathbb{R}$ as a continuous function. Hence $f$ is the zero function, which implies $f$ is the zero polynomial. Alternatively, any polynomial $f \in \mathbb{R}[x]$ is equal to its Taylor expansion around any point $$f(x) = \sum_{n \geq 0} \frac{f^{(n)}(a)}{n!}(x - a)^n$$ so if $f$ is uniformly zero on $\mathbb{R} \setminus S$ , all of its derivatives would also vanish on $\mathbb{R} \setminus S$ , so Taylor expanding this polynomial around any point of $\mathbb{R} \setminus S$ would reveal it to be the zero polynomial. As such, your ascending chain is just the trivial chain $(0) \subset (0) \sub
|linear-algebra|abstract-algebra|ring-theory|ideals|noetherian|
1
I have multiple questions regarding one problem: For a positive real $a$ what's the value of $\sqrt{a\sqrt{a\sqrt{a...}}}$
I'm reading a book, the book is aimed at high schoolers so it is perhaps not the most rigorous book. The intended solutions the book provides for the problem are the following: First assume the expression has a value and call it $x$ , so $x = \sqrt{a\sqrt{a\sqrt{a...}}}$ Then notice that $x = \sqrt{ax}$ , then $x^2 = ax$ , then $x(x-a) = 0$ , then since $a$ is positive the solution is $x = a$ and that's the value of the whole expression. They also offer a second solution that goes like this: If $x = \sqrt{a\sqrt{a\sqrt{a...}}}$ then we can also express it as: $x = a^{\frac{1}{2}}a^{\frac{1}{4}}a^{\frac{1}{8}}...$ , then $x = a^{\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+...}$ , and since $\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+... = 1$ , then $x = a$ once again. My first question is, how valid are any of these solutions at the university level? Say if this was a real analysis class, would any of those two answers be considered valid? Also, although I haven't taken a real analysis class yet, I ha
Your first approach seems more reasonable and simple, although it is not complete. Stopping the solution where you did is wrong: you assumed the limit existed, but never proved it. So in order to complete the proof, you need to actually verify that the limit does exist. A possible way to go about it is this. Define the sequence $\{x_n\}_n$ recursively as $$ x_1=\sqrt{a},\qquad x_{n+1}=\sqrt{ax_n}. $$ Assume first $a\ge1$. We prove the sequence is convergent (i.e. admits a limit) by showing that it is increasing and bounded from above. It is indeed bounded by $a$: by induction, $$ x_1 = \sqrt{a}\le a,\qquad x_{n+1}=\sqrt{ax_n} \le \sqrt{a^2} = a. $$ For the monotonicity instead we notice that $$ x_n \le x_{n+1} \iff x_n \le \sqrt{ax_n} \iff x_n \le a, $$ which is true because the sequence is bounded by $a$. Since we have now proved the limit exists, the answer is complete. As pointed out in the comments, this monotonicity argument works only if $a\ge1$. In case $a<1$, the same induction with the inequalities reversed shows the sequence is decreasing and bounded below by $a$, so the limit again exists, and the fixed-point equation $x^2=ax$ again gives the value $a$.
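The iteration is also easy to experiment with numerically; for every tested $a>0$, including $a<1$, it settles at $a$ (consistent with $x_n=a^{1-2^{-n}}\to a$). A sketch:

```python
import math

def nested_sqrt(a, n):
    """n-th term of x_1 = sqrt(a), x_{k+1} = sqrt(a * x_k)."""
    x = math.sqrt(a)
    for _ in range(n - 1):
        x = math.sqrt(a * x)
    return x

# run the iteration long enough to settle to the fixed point
limits = {a: nested_sqrt(a, 80) for a in (0.25, 0.5, 2.0, 9.0)}
```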
|real-analysis|infinite-product|
0
How to come up with the generating function of an elliptic curve?
Having watched the otherwise splendid Numberphile video with Edward Frenkel explaining the Langlands program , two mysteries remained completely open to me: Given the equation $y^2 + y = x^3 - x^2$ how would one ever come up in the first place with the infinite product $$q\ \Pi_n(1-q^n)^2(1-q^{11\ n})^2$$ whose prime coefficients $b(p)$ – when written as an infinite sum $$\sum_n b(n)\ q^n$$ – miraculously happen to give the ("normalized") number of integer solutions of the equation modulo $p$ . And I also got no idea or hint how this very special case of the Taniyama–Shimura–Weil conjecture (?) might be proved. Can the proof at least be sketched? Or did Frenkel sketch the proof and did I overlook it?
Before starting, I need to make a confession: up to the very end, I hoped that someone could write this answer instead of me. That's because this story deserves to be explained and shared, but even this case is all but straightforward. By "tractable", I mean that there exists someone who could likely pass an open book exam on most of it (in the sense of this interesting blog post ). But it would be a long exam! Anyway, I'll first sketch how the argument goes (please pay no heed to the notation which may seem a bit too much - I only wrote it like this because it's the standard), then comment some more before going into detail. This is going to be a very long answer. Feel free to skip over what you don't need or don't understand. We identify $g_0(q)=q\left(\prod_{n \geq 1}{(1-q^n)(1-q^{11n})}\right)^2=\sum_{n \geq 1}{a_nq^n}$ as an element of some vector space $\mathcal{S}_2(\Gamma_1(11))$ of holomorphic functions on the upper half-plane $\mathbb{H}$ . We identify Hecke operators $T_n$ (
|generating-functions|elliptic-curves|modular-forms|langlands-program|
1
Prove $\int_{0}^{1}\frac1k K(k)\ln\left[\frac{\left(1+k \right)^3}{1-k} \right]\text{d}k=\frac{\pi^3}{4}$
Is it possible to show $$ \int_{0}^{1}\frac{K(k)\ln\left[\tfrac{\left ( 1+k \right)^3}{1-k} \right] }{k} \text{d}k=\frac{\pi^3}{4}\;\;? $$ where $K(k)$ is the complete elliptic integral of the first kind with modulus $k$ . One proof is to compute the twins first: \begin{aligned} &\int_{0}^{1} \frac{K(k)\ln(1+k)}{k}\text{d}k =-2G\ln(2)-4\Im\operatorname{Li}_3\left ( \frac{1+i}{2} \right ) +\frac{5\pi^3}{32} +\frac\pi8\ln(2)^2,\\ &\int_{0}^{1} \frac{K(k)\ln(1-k)}{k}\text{d}k =-6G\ln(2)-12\Im\operatorname{Li}_3\left ( \frac{1+i}{2} \right ) +\frac{7\pi^3}{32} +\frac{3\pi}{8}\ln(2)^2, \end{aligned} where $G$ denotes Catalan's constant and $\text{Li}_3(.)$ the trilogarithm. The simplicity of the result makes me believe that it could be obtained by some implicit approach. I would appreciate it if you could offer some insights or ideas.
To add onto @User Zacky's brilliant answer! We have $$I=-2\int_0^1K'\left[\frac{\ln k}{1-k^2}\right]dk.$$ Here we can use a well known result, $$\int_0^1K'f(k)\,dk=\int_0^{\pi/2}\int_0^{\pi/2}f(\sin x\sin y)\ dxdy,$$ to get $$I=-2\int_0^{\pi/2}\int_0^{\pi/2}\frac{\ln(\sin x\sin y)}{1-(\sin x\sin y)^2}\ dxdy.$$ By symmetry in $x,y$ and the standard value $\int_0^{\pi/2}\frac{dy}{1-k^2\sin^2y}=\frac{\pi}{2\sqrt{1-k^2}}$ (with $k=\sin x$), this reduces to $$I=-2\pi\int_0^{\pi/2}\frac{\ln(\sin x)}{\cos x}dx=\frac{\pi^3}{4}.$$ EDIT: The "well known" result mentioned here is in a Paper by ML Glasser , Equation $(16)$ to be exact.
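The final classical value $\int_0^{\pi/2}\frac{\ln(\sin x)}{\cos x}dx=-\frac{\pi^2}{8}$ can itself be checked numerically: substituting $t=\sin x$ gives $\int_0^1\frac{\ln t}{1-t^2}\,dt$, whose integrand tends to the finite value $-\frac12$ at $t=1$ and has only a mild log singularity at $t=0$. A midpoint-rule sketch:

```python
import math

# after t = sin(x): integral of ln(t) / (1 - t^2) over (0, 1)
N = 200_000
total = 0.0
for i in range(N):
    t = (i + 0.5) / N                    # midpoint rule
    total += math.log(t) / (1.0 - t * t)
total /= N
```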
|calculus|integration|definite-integrals|elliptic-integrals|polylogarithm|
0
Determinant of endomorphism bundle
We have that the endomorphism bundle of a smooth vector bundle $E$ is $\text{End}(E) = \text{Hom}(E,E)$ . Is it necessarily true that $\det(\text{End}(E))$ is always trivial for any smooth vector bundle $E$ ? I'm unsure how to prove this. I have seen and proved that a smooth $n$ -dimensional manifold $M$ is orientable if and only if $\Lambda^n T^*M$ is trivial, or equivalently if it admits a smooth nowhere-vanishing $n$ -form. Given that the fibres of $\det E$ are exactly $\Lambda^n E_x$ , perhaps this would give some way to prove it, but I don't quite see how.
$\newcommand\End{\operatorname{End}}$ Any vector space $V$ has an identity map $I_V: V \rightarrow V$ and $I_V \in \End(V)$ . Therefore, for each $x \in M$ , $I_{E_x} \in \End(E_x)$ . This defines a section $I_E$ of $\End(E)$ . You can check using either a smooth frame or trivialization that $I_E$ is a smooth section. Now observe that if $E$ is a rank 1 vector bundle (i.e., line bundle), then so is $\End(E)$ . Therefore, since the identity section $I_E$ is everywhere nonzero, it defines a global trivialization of $\End(E)$ . Let me include the definition of the determinant bundle. First, given any linear map $L: V \rightarrow V$ , where $\dim(V) = k$ , there is a naturally (i.e., independent of any choice of basis) induced map $$ \det(L): \Lambda^kV \rightarrow \Lambda^kV, $$ where for any $v_1, \dots, v_k \in V$ , $$ \det(L)(v_1\wedge\cdots\wedge v_k) = L(v_1)\wedge\cdots\wedge L(v_k). $$ In particular, $\det(L) \in \End(\Lambda^kV)$ . The determinant line bundle of $\End(E)$ is $$\det(\End(E))
|differential-geometry|vector-bundles|fiber-bundles|
1
Can I determine the angles of a quadrilateral if I know the lengths of the sides and the difference between the diagonals?
I know the lengths of the four sides of a quadrilateral and the difference between the diagonals (but I do not know the actual lengths of the diagonals). My instinct is that this information ought to be sufficient to determine the angles of the quadrilateral, because a specific difference between the diagonals constrains it to a single, fixed shape. example measurements: L side length = 326mm; R side length = 325mm; bottom length = 677mm; top length = 675mm; diagonal from bottom L to top R is 7mm longer than from bottom R to top L
I'll add another answer just to show how this can be reduced to the solution of a single algebraic equation. Let's call $a = AB$ , $b = BC$ , $c = CD$ , $d = DA$ the given sides of the quadrilateral and $y=BD$ , $y+l=AC$ its unknown diagonals (but $l$ is given). From the cosine law we get: $$ \cos\angle ADB={y^2+d^2-a^2\over2dy}, \quad \cos\angle CDB={y^2+c^2-b^2\over2cy}. $$ Hence, using the cosine addition formula: $$ \begin{align} \cos\angle ADC= &\ \cos\angle ADB\cos\angle CDB-\sin\angle ADB\sin\angle CDB\\ =&\ {y^2+d^2-a^2\over2dy}\cdot{y^2+c^2-b^2\over2cy}- \sqrt{1-{(y^2+d^2-a^2)^2\over(2dy)^2}} \sqrt{1-{(y^2+c^2-b^2)^2\over(2cy)^2}} \end{align} $$ and finally, from the cosine rule: $$ (y+l)^2=c^2+d^2-2cd \left({y^2+d^2-a^2\over2dy}\cdot{y^2+c^2-b^2\over2cy}- \sqrt{1-{(y^2+d^2-a^2)^2\over(2dy)^2}} \sqrt{1-{(y^2+c^2-b^2)^2\over(2cy)^2}}\right), $$ which is an equation in the unknown $y$ . We can rearrange terms and square both sides, to eliminate roots and denominators. The final
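A numeric sketch of this setup on the asker's measurements, working in coordinates rather than with the algebraic equation. The labeling of which measured side is which, the use of `scipy.optimize.fsolve`, and the initial guess are illustrative assumptions, not part of the answer:

```python
# Numeric sketch on the asker's measurements.  Assumed (illustrative) labels:
# A bottom-left, B bottom-right, C top-right, D top-left; AC - BD = 7 mm.
import numpy as np
from scipy.optimize import fsolve

AB, BC, CD, DA, diff = 677.0, 325.0, 675.0, 326.0, 7.0

def equations(u):
    cx, cy, dx, dy = u
    A, B = np.array([0.0, 0.0]), np.array([AB, 0.0])
    C, D = np.array([cx, cy]), np.array([dx, dy])
    return [np.linalg.norm(C - B) - BC,
            np.linalg.norm(D - C) - CD,
            np.linalg.norm(A - D) - DA,
            np.linalg.norm(C - A) - np.linalg.norm(D - B) - diff]

# start from a near-rectangle guess and let the solver skew it
sol = fsolve(equations, x0=[675.0, 326.0, 0.0, 326.0])
residual = max(abs(r) for r in equations(sol))
```

If the residual is tiny, the four side lengths and the 7 mm diagonal difference are all satisfied simultaneously, confirming that such a quadrilateral exists for these measurements.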
|geometry|
0
A graph is bipartite if and only if for every edge $vw\in E$ and every vertex $a\in V$, the shortest paths from $a$ to $v$ and from $a$ to $w$ have different lengths
I need to prove that a simple graph $G=(V,E)$ is bipartite if and only if for every edge $vw\in E$ and every vertex $a\in V$ , the length of the shortest path from $a$ to $v$ differs from that from $a$ to $w$ . It is clear to me that if the two shortest paths have equal length, then following the path from $a$ to $v$ , the edge $vw$ , and the path from $w$ back to $a$ gives a closed walk of odd length, so $G$ is not bipartite. But I have no idea about the other direction: assuming $G$ contains an odd cycle, I fail to see how to find an edge and a vertex with two equally long shortest paths. Any hints are appreciated!
Suppose there are an edge $vw$ and a vertex $a$ such that $\text{dist}(a,v)=\text{dist}(a,w)$ . Consider the shortest paths from $a$ to $v$ and from $a$ to $w$ . Let $u$ be their last common vertex; it exists since $v\neq w$ . The length of either path from $a$ to $u$ is the same, since these are shortest paths. Hence the segments from $u$ to $v$ and from $u$ to $w$ have the same length. This makes $u \to v \to w \to u$ a closed walk of odd length, so the graph $G$ is not bipartite. Now let $G$ be non-bipartite; let us find a vertex and an edge whose ends are equidistant from that vertex. $G$ contains an odd cycle. Fix any vertex $a$ of it. Take the two “opposite” vertices $v$ and $w$ of the cycle. If $\text{dist}(a,v)=\text{dist}(a,w)$ , then we are done. If that is not the case, then consider the shortest path from $a$ to $v$ . Since its length is strictly less than the distance from $a$ to $v$ along the cycle, there exist vertices $u_1$ and $u_2$ that belong both to the path and to the cycle, all the vertices on the path between $u_1$ and $u
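For readers who want to see the equivalence in action, here is a brute-force check over all small connected graphs (an illustration only, not part of the proof):

```python
# Check: a connected graph is bipartite iff for every edge vw and every
# vertex a, dist(a, v) != dist(a, w).  Exhaustive over small graphs.
from itertools import combinations
from collections import deque

def bfs_dist(adj, src, n):
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_bipartite(adj, n):
    color = [None] * n
    for s in range(n):
        if color[s] is None:
            color[s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if color[v] is None:
                        color[v] = 1 - color[u]
                        q.append(v)
                    elif color[v] == color[u]:
                        return False
    return True

def check(n):
    edges_all = list(combinations(range(n), 2))
    ok = True
    for mask in range(1 << len(edges_all)):
        edges = [e for i, e in enumerate(edges_all) if mask >> i & 1]
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v); adj[v].append(u)
        dist = [bfs_dist(adj, a, n) for a in range(n)]
        if any(d is None for row in dist for d in row):
            continue  # skip disconnected graphs: distances must be defined
        cond = all(dist[a][v] != dist[a][w] for v, w in edges for a in range(n))
        ok = ok and (cond == is_bipartite(adj, n))
    return ok
```

Running `check(4)` and `check(5)` exhausts all graphs on 4 and 5 vertices.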
|bipartite-graphs|
1
Automorphism of Labeled Number Line
This is the graph I am working with and the edge labelings are given by $l(\{2n, 2n + 1\}) = a$ and $l(\{2n + 1, 2n + 2\}) = b$ . The question I'm working on guides us through to a description of the automorphism group of this graph. I so far have the isomorphisms (defined on the vertices) given by $\mu (n) = 1-n$ and $\nu (n) = n+2$ . And I have that $\nu^k (n) = n + 2k$ for all $k \in \mathbb{Z}$ . I have also shown that $\nu^{-m}(\{2m, 2m+1\}) = \{0,1\}$ for all $m \in \mathbb{Z}$ . We are told that $\phi$ is an element of Aut( $\Gamma$ ). Now, the part I am confused about is as follows. Define $\theta = \nu^{-m} \circ \phi$ , where $\phi$ is as above, so $\theta (\{0,1\}) = \{0,1\}$ . Show that either $\theta = \text{Id}_{\Gamma}$ or $\mu \circ \theta = \text{Id}_{\Gamma}$ . Use this to show that either $\phi = \nu^m$ or $\phi = \nu^m \circ \mu$ . Thus, show that $\text{Aut}(\Gamma) = \{\nu^m, \nu^m\mu : m \in \mathbb{Z}\}$ and that no two elements of the latter set are equal. I ho
This is what I've got, and I think it makes sense... There are two cases for $\theta$ . \begin{equation} \text{Case 1: }\theta (0) = 0 \text{, and } \theta(1) = 1 \\ \text{Case 2: }\theta(0)=1 \text{, and } \theta(1) = 0 \end{equation} Given case 1, I claim that $\theta = \text{Id}_{\Gamma}$ . As $\theta$ is an isomorphism, case 1 forces $n \mapsto n$ , $\forall n \in \mathbb{Z}$ . To illustrate, consider where $\theta$ sends $2$ . It cannot be that $2 \mapsto 0$ nor $2 \mapsto 1$ as then $\theta$ would not be injective. Additionally, as $\theta$ is an isomorphism, the vertex $2$ must be incident to the vertex $1$ in the image of $\theta$ . Thus, we must have $2 \mapsto 2$ , and similarly for the vertex $-1$ and so on. Therefore, $\theta = \text{Id}_{\Gamma}$ . Given case 2, I claim that $\mu \circ \theta = \text{Id}_{\Gamma}$ . Notice that, $\mu(\theta(1)) = 1$ and $\mu(\theta(0)) = 0$ . And so, the argument above applies in this case to $\mu \circ \theta$ , i.e., $\mu \circ \theta =
|group-theory|graph-theory|automorphism-group|
1
Why are there only six linearly independent global conformal Killing vector fields on the two-sphere with the round metric?
The title pretty much sums it up. For a 2-manifold with metric $\gamma_{AB}$ , a vector field $Y^A$ is said to be a conformal Killing vector field if $\nabla_A Y_B + \nabla_B Y_A = \nabla_C Y^C \gamma_{AB}$ . In the case of a two-sphere with the round metric, this means $\partial_{\bar{z}} Y^z = \partial_z Y^{\bar{z}} = 0$ , where $z$ is the stereographic coordinate. The solutions for $Y^z$ are arbitrary holomorphic functions of $z$ , and for $Y^{\bar{z}}$ arbitrary holomorphic functions of $\bar{z}$ . Yet, I've seen references (such as arXiv: 1703.05448 [hep-th] ) mention that there are only six linearly independent global conformal Killing vector fields: those with $Y^z = 1, z, z^2, i, iz, iz^2$ (and I suppose $Y^{\bar{z}}$ is given by the conjugate?). My question is then: why are these the only global conformal Killing vector fields? Why isn't, for example, $z^3$ a global conformal Killing vector field as well?
In the context of the Riemann sphere, $\Bbb S^2$ , we can think of the standard coordinate $z$ as defining a coordinate chart $z : \Bbb S^2 \setminus \{N\} \to \Bbb C$ via stereographic projection with respect to the North Pole, $N$ . In this language, we're interested in which conformal Killing fields of $\Bbb S^2 \setminus \{N\}$ (i.e., holomorphic vector fields on $\Bbb C$ ) can be extended continuously to $\Bbb S^2$ . To analyze the behavior of the vector fields at $N$ , define a complementary coordinate $\zeta : \Bbb S^2 \setminus \{S\} \to \Bbb C$ by stereographic projection with respect to the South Pole, $S$ . The transition map between the coordinates identifies $\zeta = \frac{1}{z}$ , and the North Pole has coordinate $\zeta = 0$ . In particular, in the $\zeta$ coordinate we have $$z^k \partial_z = -\zeta^{2 - k} \partial_\zeta .$$ So, the vector field $z^k \partial_z$ on $\Bbb S^2 \setminus \{N\}$ , $k \in \Bbb Z_{\geq 0}$ , can only be extended smoothly to $N$ ( $\zeta = 0$
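The coordinate-change identity $z^k\partial_z = -\zeta^{2-k}\partial_\zeta$ can be checked symbolically, e.g. with sympy (an illustration, not part of the argument):

```python
# Check z^k d/dz = -zeta^(2-k) d/dzeta under zeta = 1/z,
# using d/dz = (dzeta/dz) d/dzeta = -zeta^2 d/dzeta.
import sympy as sp

zeta = sp.symbols('zeta', nonzero=True)
z = 1 / zeta
for k in range(5):
    # zeta-component of z^k d/dz, expressed in the zeta coordinate
    component = sp.simplify(z**k * (-zeta**2) + zeta**(2 - k))
    assert component == 0
# Smoothness at the North Pole (zeta = 0) requires 2 - k >= 0, i.e. k <= 2,
# which is why only Y^z = 1, z, z^2 (and i times them) extend globally.
```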
|differential-geometry|vector-fields|spheres|conformal-geometry|
1
Showing that the excluded point topology is first-countable
I'm new to topology and so I'm questioning myself here. But, this is what I've got: We have $\tau_p = \{ U \subseteq X : p \notin U \} \cup \{X\}$ . Let $x \in X$ such that $x \neq p$ . Then, $X^{\circ} = X$ , $\emptyset^{\circ} = \emptyset$ , $\{p\}^{\circ} = \emptyset$ , $\{x\}^{\circ} = \{x\}$ . Hence, the only neighborhood of $p$ is $X$ , and the only neighborhood of $x$ is $\{x\}$ . So, the neighborhood bases are given by $\mathcal{U}_p = \{X\}$ and $\mathcal{U}_x = \{\{x\}\}$ . These are both countable and so $(X, \tau_p)$ is first countable. So, have I done this right?
As mentioned in the comments, some of the statements you made do not make sense. But you have the right idea. We could write something like the following. Consider $X$ in the topology $\mathcal{T}_p$ consisting of $X$ along with all subsets of $X$ that do not contain the point $p \in X$ . Let $x \in X$ be arbitrary. If $x = p$ , then the only neighborhood of $x$ is the entire $X$ . So, a countable basis for $p$ is just $\{X\}$ . If $x \ne p$ , then the singleton $\{x\}$ is open, since it does not contain $p$ . Any neighborhood $U$ of $x$ must contain $x$ , so it follows that $\{x\} \subseteq U$ . Therefore, $\{\{x\}\}$ constitutes a countable basis for $x$ .
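The same bookkeeping can be checked mechanically on a finite model; here is a small sketch with $X=\{0,1,2,3\}$ and $p=0$ (the helper names are illustrative):

```python
# Finite illustration: excluded point topology on X = {0,1,2,3} with p = 0.
from itertools import combinations

X = frozenset({0, 1, 2, 3})
p = 0
subsets = [frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)]
opens = [U for U in subsets if p not in U] + [X]  # the topology tau_p

def is_nbhd_basis(x, basis):
    # each basis element is an open set containing x, and every open set
    # containing x contains some basis element
    return (all(B in opens and x in B for B in basis)
            and all(any(B <= U for B in basis) for U in opens if x in U))

assert is_nbhd_basis(p, [X])            # {X} is a basis at p
for x in X - {p}:
    assert is_nbhd_basis(x, [frozenset({x})])   # {{x}} is a basis at x != p
```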
|general-topology|
1
Why would the triangles join up to a rhombus?
The question I am going to present may well sound very dumb. But this is becoming a hell of a confusing thing for me. The question is from the ISI B.Math-B.Stat entrance exam 2022 UGA question paper; it is problem 29. The question included pictures, but I am going to rephrase it so that the pictures are not needed, though I will still include them. If $\triangle{APB}$ has area $4$ , $\triangle{BPC}$ has area $5$ , $\triangle{CPD}$ has area $x$ and $\triangle{APD}$ has area $13$ , where $AB=BC=CD=DA=6$ , then find the value of $x$ . Here is the picture: Now, here's my question. Apparently if we join up the triangles we get a rhombus. Like this: Then we can simply draw two perpendiculars through the point $P$ . Then, considering the areas of the triangles, we will get that $\triangle{APB}+\triangle{CPD}=\triangle{BPC}+\triangle{APD}$ , meaning that $x=13+5-4=14$ . But why would the triangles join up to a rhombus? I mean it intuitively makes sense, but can we give a
The problem is incorrect. The triangles need not make a rhombus, so that $x$ need not equal $14$ . Using Heron's formula, the triangles and their areas give us these equations: $$\begin{align} (a + b + 6)(-a + b + 6) (a - b + 6) (a + b - 6) &= 16\cdot4^2 \\ (b + c + 6)(-b + c + 6) (b - c + 6) (b + c - 6) &= 16\cdot5^2 \\ (c + d + 6)(-c + d + 6) (c - d + 6) (c + d - 6) &= 16\cdot x^2 \\ (d + a + 6)(-d + a + 6) (d - a + 6) (d + a - 6) &= 16\cdot13^2 \end{align}$$ Each candidate value for area $x$ yields a solvable system in $a$ , $b$ , $c$ , $d$ . Using Mathematica to solve numerically gives these options: $$\begin{array}{c|cccc|c} x & a & b & c & d & \text{angle sum} \\ \hline 12 & 12.50868 & \phantom{1}6.57405 & 12.47114 & 18.25586 & \phantom{9}25.17755^\circ \\ '' & \phantom{1}7.18734 & \phantom{1}1.70495 & \phantom{1}5.88180 & \phantom{1}4.34148 & 252.92081^\circ \\ \hline 13 & \phantom{1}7.56481 & \phantom{1}1.96718 & \phantom{1}7.23946 & \phantom{1}4.33798 & 185.40652^\circ \\ '' &
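One can spot-check the table numerically; the sketch below plugs the first $x=12$ row back into the four Heron equations (the tolerance allows for the 5-decimal rounding of the tabulated side lengths):

```python
# Plug the first x = 12 solution from the table into the four Heron equations.
def heron16(p, q, s):
    # 16 * (area of a triangle with sides p, q, s)^2, by Heron's formula
    return (p + q + s) * (-p + q + s) * (p - q + s) * (p + q - s)

a, b, c, d = 12.50868, 6.57405, 12.47114, 18.25586   # tabulated values
pairs_and_areas = [((a, b), 4), ((b, c), 5), ((c, d), 12), ((d, a), 13)]
rows_ok = all(abs(heron16(p, q, 6) - 16 * A**2) < 16 * A**2 * 1e-3
              for (p, q), A in pairs_and_areas)
```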
|geometry|
1
The Equivalence Transformation (??) for Generalized Continued Fractions
The equivalence transformation says that any sequence of non-zero complex numbers satisfy the general continued fraction in the following manner. Here is the link: https://en.wikipedia.org/wiki/Generalized_continued_fraction#The_equivalence_transformation Wikipedia says that this can be proved via induction but gives no sources or readings. I can't find any other mention of this equivalence transformation on continued fractions anywhere else on the internet (maybe I haven't looked well enough). Can someone provide me with a proof? Thanks. Note: I don't have a clear idea if this proof needs to involve this operation: $[x_1,x_2,...,x_m]$ or not. I am familiar with all the operations mentioned in the wikipedia article but I haven't managed to read too much about that specific operation. Can the proof avoid this (if possible)? update: https://poset.jp/posts/continued-fractions-attempt-1/continued-fractions-part-2/ Found this link on it, but I'll have to understand [] notation. $x_n:=b_0+\f
OK, it was simpler than I thought, because the recursive relationship for $A_n$ and $B_n$ was already defined. $c_n{b_n}' = c_n(b_n+\frac{a_{n+1}}{b_{n+1}})$ , and I have already proven that recursive relationship before.
|fractions|continued-fractions|
1
Subsequence which converges for compact metric space.
Let $X$ be a compact metric space and $\mu_n$ a sequence of finite Borel measures on $X$ with the property that $$\sup_n \mu_n(X) < \infty.$$ Show that for all $f \in C(X)$ there exists a subsequence $n_j$ such that $\int f d \mu_{n_j}$ converges. Attempt: We know, as $X$ is a compact metric space, that $C(X)$ is separable. I.e., there exists a countable dense subset, call it $A \subset C(X)$ , and so $$A:=\{f_n: n \in \Bbb{N}, \text{$f_n:X \to \Bbb{R}$ is continuous}\}.$$ and $\bar{A}=C(X)$ . I also know for each $n$ , $f_n(X)$ attains a min and max. But I don't see how to produce this convergent subsequence. Also, since the sup of the $\mu_n(X)$ is finite and I have a sequence, I can't say they have a convergent subsequence, right? Because they live in $\Bbb{R}^{\geq 0}$ , which is not compact.
Put $A=\sup_n \mu_n(X)$ . Since $X$ is compact, $f$ is bounded, so there are constants $m,M$ such that $m\leq f(x) \leq M$ for all $x\in X$ ; hence $$\left|\int f \,d\mu_{n}\right| \leq \max(|m|,|M|)\,A.$$ This shows that $u_{n}:=\int f \,d\mu_{n}$ is a bounded sequence of real numbers, so by Bolzano–Weierstrass it has a convergent subsequence $\int f \,d\mu_{n_{j}}$ .
|real-analysis|lebesgue-integral|lebesgue-measure|
1
Prove every subspace of finite dim V is the intersection of hyperplanes
A hyperplane in V is defined as the kernel of a linear functional. Show that every subspace of V is the intersection of hyperplanes. Please can someone offer insight into how I prove this? Thanks a lot!
One can clean up the notation a bit by considering an orthonormal basis $\{v_{1},\dots, v_{k}\}$ of $W$ ; then extend this to an orthonormal basis $\{v_{1}, \dots, v_{k}, \dots, v_{n}\}$ of $V$ ; and consider the $n-k$ hyperplanes $$ H_{i} = \{x \in V : \langle x, v_{i}\rangle = 0\}, \quad \text{for} \quad k+1\leq i \leq n. $$ (Each $H_i$ is the kernel of the linear functional $x \mapsto \langle x, v_i\rangle$ .) Let us write $x = \langle x, v_{1} \rangle v_{1} + \cdots +\langle x, v_{k}\rangle v_{k} + \cdots + \langle x, v_{n}\rangle v_{n} \in V$ . Then $$ \begin{align*} x \in W &\iff \langle x, v_{i}\rangle = 0 \quad \text{for} \quad k+1 \leq i \leq n \\ &\iff x \in H_{i} \quad \text{for} \quad k + 1\leq i \leq n\\ &\iff x \in \bigcap_{i= k+1}^{n}H_{i}. \end{align*} $$
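Here is a small numeric illustration of the same decomposition in $\mathbb{R}^4$ with $k=2$ (the QR-based orthonormalization is just one convenient choice):

```python
# W = span of 2 random vectors in R^4; the two hyperplanes are the kernels of
# x -> <x, v_i> for the remaining vectors of an orthonormal basis extending W.
import numpy as np

rng = np.random.default_rng(0)
W_basis = rng.standard_normal((4, 2))                  # columns span W
M = np.hstack([W_basis, rng.standard_normal((4, 2))])
Q, _ = np.linalg.qr(M)         # first 2 columns of Q still span W
v3, v4 = Q[:, 2], Q[:, 3]      # normals of the two hyperplanes

x_in = W_basis @ rng.standard_normal(2)    # a point of W ...
x_out = x_in + v3                          # ... and a point off W
in_both = abs(x_in @ v3) < 1e-10 and abs(x_in @ v4) < 1e-10
out_misses = abs(x_out @ v3) > 1e-6        # x_out fails the H_3 condition
```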
|linear-algebra|
0
Prove a mapping is open
Prove that the map $$f:(0,1)\to\mathbb{R}^2$$ $$t\mapsto (\cos 2\pi t,\sin 2\pi t)$$ is an embedding. PS: In general topology, an embedding is a homeomorphism onto its image. More explicitly, an injective continuous map $f:X\to Y$ between topological spaces $X, Y$ is a topological embedding if $f$ yields a homeomorphism between $X$ and $f(X)$ . I already prove $f$ is injective, continuous, but stuck at proving it yields a homeomorphism between $X$ and $f(X)$ . To prove this, I'm trying to prove $g:X\to f(X)$ is homeomorphism $\iff g$ is an open mapping, which means $g(U)$ is open for every open set $U\subset X.$ Could someone help me to finish the problem? Thank in advance!
It's enough to show that the images under $f$ of all elements of a subbasis for the domain $X = (0, 1)$ are open. Taking $X$ as a subspace of $\mathbb{R}$ under the order (usual) topology, one subbasis for $X$ is the collection of intervals $\{ (0, a) \mid 0 < a < 1 \} \cup \{ (b, 1) \mid 0 < b < 1 \}$ . The image of the interval $(0, a)$ ( $0 < a < 1$ ) is an arc of the unit circle that does not include its endpoints $(1, 0)$ and $(\cos (2 \pi a), \sin (2 \pi a))$ . This set is open in $f (X)$ as a subspace of $\mathbb{R}^2$ regarded as a metric space with the Euclidean metric $d$ : if you take any point $p = (\cos (2 \pi t), \sin (2 \pi t))$ in this set, it is contained in the set $f (X) \cap B_d \left( p, \sin \left( \pi \min \{ t, a - t \} \right) \right)$ , which is open in $f (X) \subseteq \mathbb{R}^2$ and contained in the arc (as you should check). You can perform a very similar analysis for images of subbasis elements of the form $(b, 1)$ ( $0 < b < 1$ ).
|real-analysis|general-topology|open-map|graph-embeddings|
0
$\sigma$-weakly closed subalgebra of direct product of matrix algebras is again a direct product of matrix algebras
Let $A$ be a $\sigma$ -weakly closed $*$ -subalgebra of the $W^*$ -algebra $\prod_{i\in I}^{\ell^\infty} M_{n_i}(\mathbb{C})$ . I believe that we must have $A\cong \prod_{j\in J}^{\ell^\infty} M_{m_j}(\mathbb{C})$ for certain $(m_j)_{j\in J}$ . Does anyone have an idea how we can show this?
Write $M=\prod_{i\in I}^{\ell^\infty} M_{n_i}(\mathbb{C})$ Let $p\in A$ be a nonzero projection. Let $A_0\subset pAp$ be a masa. If $A_0$ is not diffuse, then it has a minimal projection $p_0\leq p$ . Otherwise, $A_0$ is diffuse, which implies that it has a separable diffuse subalgebra $A_1$ ; hence $A_1\simeq L^\infty[0,1]$ [ Theorem III.1.22, Takesaki I ]. The identity function in $L^\infty[0,1]$ gives us a selfadjoint element $x\in A_1$ with empty point-spectrum. But $x\in M$ , so $x=\prod_i x_i$ , with each $x_i$ a selfadjoint matrix. We can write $x_i=\sum_{k=1}^{n_i} \lambda_{i,k}p_{i,k}$ where each $p_{i,k}$ is a minimal projection. Since $x\ne0$ , there exist $i,k$ such that $\lambda=\lambda_{i,k}\ne0$ . Then $1_{\{\lambda\}}(x)\geq p_{i,k}$ . As Borel functional calculus stays within a von Neumann algebra, this shows that $1_{\{\lambda\}}(x)\in A_1$ , a contradiction since $1_{\{\lambda\}}(x)=0$ in $L^\infty[0,1]$ . So $p$ majorizes a nonzero minimal projection. The previous p
|functional-analysis|operator-theory|operator-algebras|c-star-algebras|von-neumann-algebras|
1
Solve $x\left(x-1\right)y^{\prime\prime}+3xy^{\prime}+y=0$ for $0≤x<1$ with series
Solve $x\left(x-1\right)y^{\prime\prime}+3xy^{\prime}+y=0$ for $0\le x<1$. I am supposed to solve via the series method. The answer should look like this: $\begin{aligned}y\left(x\right)=A\frac{x}{\left(1-x\right)^{2}}+B\frac{x}{\left(1-x\right)^{2}}\left(\ln\left(x\right)+\frac{1}{x}\right)\end{aligned},$ where $A, B$ are constants. My attempt: $\begin{array}{l}y'(x)=\sum_{n=1}^\infty na_nx^{n-1}=\sum_{n=0}^\infty(n+1)a_{n+1}x^n\\ y''(x)=\sum_{n=2}^\infty n(n-1)a_nx^{n-2}=\sum_{n=0}^\infty(n+2)(n+1)a_{n+2}x^n\end{array}$ Substituting, $$x(x-1)\sum_{n=0}^\infty(n+2)(n+1)a_{n+2}x^n+3x\sum_{n=0}^\infty(n+1)a_{n+1}x^n+\sum_{n=0}^\infty a_nx^n=0$$ This is where I am stuck with the algebra. And I have no idea how and where to apply the condition $0\le x<1$.
Try to put everything under the same sum; then you can identify a recurrence for $a_n$ : $$\sum_{n\ge0}n(n-1)a_nx^n -\sum_{n\ge0}(n+1)na_{n+1}x^n + 3 \sum_{n\ge0}n a_nx^n + \sum_{n\ge0}a_nx^n = 0 $$ Using uniqueness of power series coefficients: $$n(n-1)a_n - (n+1)na_{n+1} + 3na_n + a_n = 0$$ $$(n(n-1) + 3n + 1)a_n= (n+1)na_{n+1}$$ $$(n+1)^2 a_n= (n+1)na_{n+1}$$ $$(n+1) a_n= na_{n+1}$$ By d'Alembert's (ratio) criterion the radius of convergence is $1$ , which matches the restriction $0\le x<1$ . Then by induction $$a_n = {n! \over (n-1)!} a_1 = na_1$$ Thus $$y(x) = \sum_{n\ge0} na_1x^n = a_1 {x \over (1-x)^2}$$ This recovers the first solution; the second, logarithmic solution is the other Frobenius solution, since $x=0$ is a regular singular point whose indicial roots $0$ and $1$ differ by an integer.
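Both closed forms from the problem statement can be verified symbolically, e.g. with sympy (a check, not a derivation):

```python
# Symbolic verification that both claimed solutions satisfy the ODE.
import sympy as sp

x = sp.symbols('x', positive=True)
ode = lambda y: x * (x - 1) * sp.diff(y, x, 2) + 3 * x * sp.diff(y, x) + y

y1 = x / (1 - x)**2
y2 = x / (1 - x)**2 * (sp.log(x) + 1 / x)
r1 = sp.simplify(ode(y1))   # should reduce to 0
r2 = sp.simplify(ode(y2))   # should reduce to 0
```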
|real-analysis|calculus|sequences-and-series|ordinary-differential-equations|
0
Covariant derivative/connection of a local frame $\nabla \bar e = \nabla(e)g + edg$
Suppose $e$ and $\bar e$ are two frames for a vector bundle $E \to M$ over $U \subset M$ such that $\bar e = eg$ for some $g: U \to \text{GL}(r,\Bbb R)$ . Then $$\nabla \bar e = \nabla(e)g + edg.$$ I'm trying to understand this which is seemingly just the Leibniz rule, but I have some trouble with the definitions. A frame $e = (e_1,\dots,e_r)$ is an $r$ -tuple of sections $e_i : U \to E$ such that $e(p)=(e_1(p),\dots,e_r(p))$ forms a basis for $E_{p}$ . There are some slight ambiguities here, first off is $e$ a map $e : U \to E$ also? I mean it has to be since otherwise $\nabla e$ would not make sense. Second, could someone help me understand how the above equation is the Leibniz rule? In general for a connection $\nabla$ on a vector bundle the Leibniz rule gives $$\nabla (fs)=df \otimes s + f\nabla s$$ where $s$ is a section $M \to E$ , but $f :M \to \Bbb R$ is a smooth function not a map with codomain $\text{GL}(r,\Bbb R)$ so what gives?
You should understand $e$ as $e = (e_1,...,e_r)$ and $\nabla e = (\nabla e_1, ..., \nabla e_r)$ and $dg = (dg^i_j)$ as matrix. Then, with Einstein summation convention, $\bar e = e g = (g^i_1 e_i, ..., g^i_r e_i)$ and $\nabla \bar e_j = \nabla (g^i_j e_i) = (dg^i_j) e_i + g^i_j \nabla e_i$ and therefore $\nabla \bar e = e dg + (\nabla e )g$ .
|differential-geometry|
0
Elementary example where a neighborhood basis exists but no countable neighborhood basis exists?
Is there an elementary example of the following: $X$ is a topological space, and $p \in X$ . There exists a neighborhood basis for $X$ at $p$ , but there exists no countable neighborhood basis for $X$ at $p$ . I am having trouble coming up with an example where it is not possible to "sift out" a countable subset from the neighborhood basis which is itself a neighborhood basis, probably because my intuition is too attached to spaces that are at least first countable. I should perhaps mention that I am using the definition where neighborhood of $p$ means an open subset of $X$ containing $p$ . This is not a homework problem; just something I started thinking about while reading chapter 2 of Lee's Introduction to Topological Manifolds .
Let $X$ be any uncountable set, and consider the so-called co-countable topology defined as follows: $$ \tau=\lbrace V\subseteq X : X\setminus V \text{ is countable}\rbrace \cup \lbrace \emptyset\rbrace. $$ You can verify that this family indeed defines a topology. Let $p\in X$ and let $(V_{n})_{n\geq 1}$ be a countable family of neighborhoods of $p$ . Define $W=X\setminus(A\cup \lbrace q\rbrace)$ , where $A=\bigcup_{n\geq 1} (X\setminus V_{n})$ (which is countable) and $q$ is any point in $X\setminus (A\cup \lbrace p\rbrace)$ . Then $W$ is an open neighborhood of $p$ . If there were some $n$ such that $V_{n} \subseteq W$ , then $A\cup \lbrace q\rbrace \subseteq X\setminus V_{n} \subseteq A$ , in particular $q\in A$ , which is impossible by the choice of $q$ . Hence no countable family of neighborhoods of $p$ can be a neighborhood basis at $p$ .
|general-topology|
0
Covariant derivative/connection of a local frame $\nabla \bar e = \nabla(e)g + edg$
Suppose $e$ and $\bar e$ are two frames for a vector bundle $E \to M$ over $U \subset M$ such that $\bar e = eg$ for some $g: U \to \text{GL}(r,\Bbb R)$ . Then $$\nabla \bar e = \nabla(e)g + edg.$$ I'm trying to understand this which is seemingly just the Leibniz rule, but I have some trouble with the definitions. A frame $e = (e_1,\dots,e_r)$ is an $r$ -tuple of sections $e_i : U \to E$ such that $e(p)=(e_1(p),\dots,e_r(p))$ forms a basis for $E_{p}$ . There are some slight ambiguities here, first off is $e$ a map $e : U \to E$ also? I mean it has to be since otherwise $\nabla e$ would not make sense. Second, could someone help me understand how the above equation is the Leibniz rule? In general for a connection $\nabla$ on a vector bundle the Leibniz rule gives $$\nabla (fs)=df \otimes s + f\nabla s$$ where $s$ is a section $M \to E$ , but $f :M \to \Bbb R$ is a smooth function not a map with codomain $\text{GL}(r,\Bbb R)$ so what gives?
So suppose we have a linear connection on a vector bundle $E$ ; then in any frame $(e_1,\dots, e_n)$ there exist one-forms $\xi_i^j$ such that: $$\nabla e_i=\xi_i^j\otimes e_j$$ Now suppose that $\tilde{e}_i$ is another frame, related by a matrix of smooth functions $g=(g^i_j)$ , such that $\tilde{e}_i=e_jg^j_i$ ; then: \begin{align} \nabla(\tilde{e_i})=&\nabla(e_jg^j_i)\\ =&\nabla(e_j)g^j_i+e_jdg^j_i \end{align} This is really what your formula is saying; they're just removing the indices, as the statement holds for all $i$ , so you can apply it to the whole frame $e=(e_1,\dots, e_n)$ .
|differential-geometry|
0
Prove a mapping is open
Prove that the map $$f:(0,1)\to\mathbb{R}^2$$ $$t\mapsto (\cos 2\pi t,\sin 2\pi t)$$ is an embedding. PS: In general topology, an embedding is a homeomorphism onto its image. More explicitly, an injective continuous map $f:X\to Y$ between topological spaces $X, Y$ is a topological embedding if $f$ yields a homeomorphism between $X$ and $f(X)$ . I already prove $f$ is injective, continuous, but stuck at proving it yields a homeomorphism between $X$ and $f(X)$ . To prove this, I'm trying to prove $g:X\to f(X)$ is homeomorphism $\iff g$ is an open mapping, which means $g(U)$ is open for every open set $U\subset X.$ Could someone help me to finish the problem? Thank in advance!
To prove that the map $f: (0,1) \to \mathbb{R}^2$ defined by $t \mapsto (\cos 2\pi t, \sin 2\pi t)$ is an embedding, we need to show that it is an injective continuous map which is a homeomorphism onto its image. You've already proved that $f$ is injective. It's evident that both components of $f$ , namely $\cos 2\pi t$ and $\sin 2\pi t$ , are continuous functions. Since $\cos$ and $\sin$ are continuous and compositions of continuous functions are continuous, $f$ is continuous. Now, to show that $f$ is a homeomorphism onto its image, we need to prove two things: Onto its image: For any point $(x,y)$ on the unit circle $\mathbb{S}^1$ , there exists $t$ such that $x = \cos 2\pi t$ and $y = \sin 2\pi t$ (since $x^2 + y^2 = 1$ ). Since $f$ is defined on $(0,1)$ , it covers $\mathbb{S}^1$ except the single point $(1,0)$ (note that $(-1,0) = f(1/2)$ is attained). However, since $\cos$ and $\sin$ are periodic functions, the map $f$ covers $\
|real-analysis|general-topology|open-map|graph-embeddings|
0
Show that $a+b+c$ divides at least one of the following numbers: $a^3+b^3-2c^3, b^3+c^3-2a^3, a^3+c^3-2b^3$
The question The natural numbers $a,b,c$ are nonzero and pairwise distinct, and satisfy $a^3+b^3+c^3=\frac{a^2b^2}{c}+\frac{a^2c^2}{b}+\frac{b^2c^2}{a}$ . a) Is it possible that one of the numbers is equal to the geometric mean of the other two? b) Show that $a+b+c$ divides at least one of the following numbers: $a^3+b^3-2c^3, b^3+c^3-2a^3, a^3+c^3-2b^3$ . my idea I was able to do the first point. a) Let's verify whether $a^3+b^3+c^3=\frac{a^2b^2}{c}+\frac{a^2c^2}{b}+\frac{b^2c^2}{a}$ holds, knowing that $a=\sqrt{bc}$ . The equality would look like $(\sqrt{bc})^3+b^3+c^3=\frac{b^3\cdot c}{c}+\frac{c^3\cdot b}{b}+\frac{b^2c^2}{\sqrt{bc}}$ $(\sqrt{bc})^3+b^3+c^3=b^3+c^3+\frac{b^2c^2\sqrt{bc}}{bc}$ $(\sqrt{bc})^3=bc\sqrt{bc}$ which is true, so this means that it is indeed possible that one of the numbers equals the geometric mean of the other two. b) For point b I thought of writing the equality as a product where we see the numbers $a^3+b^3-2c^3, b^3+c^3-2a^3, a^3+c^3-2b^3$ and on the oth
Multiply both sides of $a^3+b^3+c^3=\frac{a^2b^2}{c}+\frac{a^2c^2}{b}+\frac{b^2c^2}{a}$ by $abc$ : $$a^4bc+ab^4c+abc^4-a^3b^3-b^3c^3-c^3a^3=0.$$ Now expand the following product: $$(a^2-bc)(b^2-ca)(c^2-ab) = a^4bc+ab^4c+abc^4-a^3b^3-b^3c^3-c^3a^3=0.$$ So without loss of generality let us assume that $a^2=bc$ ( part (a) was a hint all along ). Then $$c^3+b^3-2a^3=(a+b+c)(a^2+b^2+c^2-ab-bc-ca)+3abc-3a^3= (a+b+c)(a^2+b^2+c^2-ab-bc-ca),$$ since $3abc = 3a\cdot a^2 = 3a^3$ . We are done.
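A brute-force sanity check of the case $a^2=bc$ used above (the search bound 60 is arbitrary):

```python
# For integer triples with a^2 = bc (so the hypothesis of the problem holds),
# check that a+b+c divides b^3 + c^3 - 2a^3.
from math import isqrt

found = 0
for b in range(1, 60):
    for c in range(1, 60):
        a = isqrt(b * c)
        if a * a != b * c or len({a, b, c}) < 3:
            continue  # need a^2 = bc with a, b, c pairwise distinct
        found += 1
        assert (b**3 + c**3 - 2 * a**3) % (a + b + c) == 0
```

For example, $(a,b,c)=(2,1,4)$ gives $1+64-16=49$, divisible by $a+b+c=7$.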
|divisibility|exponentiation|radicals|
1
In how many ways can 6 people be lined up to get on a bus?
a) In how many ways can $6$ people be lined up to get on a bus? b) If $3$ specific persons, among $6$ , insist on following each other, how many ways are possible? My confusion I understand that a) is just 6!, but apparently b) is $4! \cdot 3!$ . Can anyone explain where they got 4! from? I thought it was $3! \cdot 3!$ , since $3$ people are following each other, and $3$ aren't. Thank you.
The three people who insist on following each other can be considered as one unit (or person), and then, along with the three remaining, they can be arranged in a line in $(3 + 1)! = 4!$ ways. Doing this ensures that the $3$ people never get separated (always follow each other). However, those three people can be arranged among themselves, so we need to multiply the result above by $3!$ . And the final answer becomes $4! \times 3!$ . The problem with your approach $(3! \times 3!)$ is that it assumes the three special people must be placed in front of (or behind) the remaining three, but does not account for the freedom to place them in either of the two positions. It also doesn't account for the fact that the group can be placed somewhere in between two of the remaining three people. To compensate for this, you can multiply by $4$ (there are four positions where the group can be placed, marked by $\times$ below) and you will end up with the correct result. $$\times \; P_1 \times P_2 \times P_3 \; \times$$
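The count $4!\times 3!=144$ is small enough to confirm by brute force with `itertools` (an illustration):

```python
# Count line-ups of 6 people in which persons 0, 1, 2 stand consecutively.
from itertools import permutations
from math import factorial

count = 0
for perm in permutations(range(6)):
    pos = sorted(perm.index(p) for p in (0, 1, 2))
    if pos[2] - pos[0] == 2:        # three distinct positions, consecutive
        count += 1
```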
|combinatorics|
0
bijection between $\mathbb{N}$ and $\mathbb{N}$×$\mathbb{N}$ that supports additivity
Is there a bijective function from $\mathbb{N}$ to $\mathbb{N} \times \mathbb{N}$ that is also additive? Namely, if $n,m \in \mathbb{N}$ then: $$f(n)+f(m)=f(n+m).$$ If there isn't, how can one prove such a thing?
Any such map must have the form $$ f(n)=n(a,b) $$ where $(a,b)\in \mathbb{N}\times \mathbb{N}$ . Indeed, one easily proves by induction that $f(n)=nf(1)$ for all $n\in \mathbb{N}$ , so $f$ is completely determined by its value at $1$ . The image of such a map lies on a single ray; for instance, $(1,0)$ and $(0,1)$ cannot both be multiples of the same pair $(a,b)$ . So there is no surjective "additive" map from $\mathbb{N}$ to $\mathbb{N}\times \mathbb{N}$ , and in particular no bijective one.
|elementary-number-theory|elementary-set-theory|
1
Gaussian stochastic integral
first question here :) I am reading an article about the Heston model and stochastic volatility, in which they state the following: "because the process for $V_t$ is independent of the brownian motion $W_t$ , the distribution of $\int_u^t \sqrt{V_s} \, dW_s$ is normal with mean $0$ and variance $\int_u^t V_s \, ds$ ". This statement is very similar to the one I know, which is the case of a deterministic function $f(t)$ instead of the variance process $V_t$ . However I have never heard before a version of this where we consider a stochastic process independent of a brownian motion, hence I don't quite understand why this is true. Does anyone have a (sketch of) proof of the first statement ? Or any reference where I could find it ? Thanks !
A stochastic integral $\int_0^tU_s\,dW_s$ is in general not Gaussian when the integrand is not deterministic, even if $U$ and the Brownian motion $W$ are independent. Take for example $$ U_s=1_{\{s\le \tau\}} $$ where $\tau$ is independent of $W$ and exponentially distributed with parameter $\lambda\,.$ Then $$ \int_0^tU_s\,dW_s=W_{\tau\wedge t}\,. $$ Conditional on $\tau\,,$ $W_{\tau\wedge t}$ has variance $\tau\wedge t\,.$ Therefore its characteristic function is \begin{align} \mathbb E\left[e^{i\,x\,W_{\tau\wedge t}}\right]&= \mathbb E\left[e^{-\frac12x^2(\tau\wedge t)}\right] =\lambda\int_0^te^{-\frac12x^2u}e^{-\lambda u}\,du+\lambda\int_t^\infty e^{-\frac12x^2t}e^{-\lambda u}\,du\\ &=\lambda \frac{1-e^{-(\frac{x^2}2+\lambda) t}}{\frac{x^2}{2}+\lambda}+e^{-(\frac{x^2}2+\lambda) t}\,. \end{align} If $W_{\tau\wedge t}$ were Gaussian this should however be of the form $e^{-\frac12x^2\sigma^2}\,.$
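A quick Monte Carlo illustration of this computation (the parameters $\lambda=t=x=1$ are arbitrary choices); it also shows the value differs from the Gaussian prediction with matched variance $\mathbb{E}[\tau\wedge t]$:

```python
# Monte Carlo check of E[exp(i x W_{tau ^ t})] = E[exp(-x^2 (tau ^ t)/2)]
# against the closed form derived above.
import numpy as np

rng = np.random.default_rng(42)
lam, t, x = 1.0, 1.0, 1.0
tau = rng.exponential(scale=1 / lam, size=400_000)

# conditioning on tau reduces the characteristic function to this expectation
mc = np.exp(-0.5 * x**2 * np.minimum(tau, t)).mean()

s = 0.5 * x**2 + lam
exact = lam * (1 - np.exp(-s * t)) / s + np.exp(-s * t)

# a Gaussian with the matched variance E[tau ^ t] = (1 - e^{-lam t})/lam
# would instead give:
gaussian = np.exp(-0.5 * x**2 * (1 - np.exp(-lam * t)) / lam)
```

The gap between `exact` and `gaussian` is the quantitative sign that $W_{\tau\wedge t}$ is not Gaussian.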
|probability|stochastic-calculus|brownian-motion|
1
Prove $p \in \mathbb{P}$ is a divisor of $(a+b)^{p^n} - (a^{p^n} + b^{p^n})$
I am trying to solve the following Let $p \in \mathbb{P}$ , $n \in \mathbb{N}$ , and $a, b \in \mathbb{Z}$ . Prove by induction on $n$ : $p$ is a divisor of $(a+b)^{p^n} - (a^{p^n} + b^{p^n})$ . I think the idea is to use the fact that $p \in \mathbb{P}$ is a divisor of $\binom{p}{v}$ for $1 \leq v \leq p-1$ . For $n=1$ I show the following: I can write $(a+b)^{p} - ( a^{p} + b^{p} )$ as $a^{p} + b^{p} + \sum_{v=1}^{p-1} \binom{p}{v} a^{p-v} b^{v} - ( a^{p} + b^{p} )$ . This becomes $\sum_{v=1}^{p-1} \binom{p}{v} a^{p-v} b^{v}$ and I can apply the idea from above; since $p \in \mathbb{P}$ and $1 \leq v \leq p-1$ , $p$ divides all parts with $\binom{p}{v}$ . I presume that for $n$ the formula holds: $p$ is a divisor of $(a+b)^{p^{n}} - ( a^{p^{n}} + b^{p^{n}} )$ Now I want to do the induction step, and that is the part that I am struggling with: $(a+b)^{p^{n+1}} - ( a^{p^{n+1}} + b^{p^{n+1}} )$ I tried to write it out and see if I can continue with that: $(a+b)^{p^{n+1}} - ( a^{p^{n+1}} + b^{p^{n+1}} )$
If $(a+b)^{p^n}\equiv a^{p^n}+b^{p^n}\pmod p$ , raising both sides to the $p$ 'th power, we find $$ (a+b)^{p^{n+1}}\equiv (a^{p^n}+b^{p^n})^p\equiv a^{p^{n+1}}+b^{p^{n+1}}\pmod p, $$ where the second congruence is an application of the base case with $a$ replaced by $a^{p^n}$ and $b$ by $b^{p^n}$ .
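A quick numeric spot-check of the statement (not part of the proof; the sample values are arbitrary):

```python
# Verify that p divides (a+b)^(p^n) - (a^(p^n) + b^(p^n)) for several
# primes p, exponents n, and integers a, b.
for p in (2, 3, 5, 7):
    for n in (1, 2):
        for a in (-3, 0, 2, 10):
            for b in (-1, 4, 9):
                q = p ** n
                assert ((a + b) ** q - (a ** q + b ** q)) % p == 0
```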
|elementary-number-theory|
0
Prove $p \in \mathbb{P}$ is a divisor of $(a+b)^{p^n} - (a^{p^n} + b^{p^n})$
I am trying to solve the following Let $p \in \mathbb{P}$ , $n \in \mathbb{N}$ , and $a, b \in \mathbb{Z}$ . Prove by induction on $n$ : $p$ is a divisor of $(a+b)^{p^n} - (a^{p^n} + b^{p^n})$ . I think the idea is to use the fact that $p \in \mathbb{P}$ is a divisor of $\binom{p}{v}$ for $1 \leq v \leq p-1$ . For $n=1$ I show the following: I can write $(a+b)^{p} - ( a^{p} + b^{p} )$ as $a^{p} + b^{p} + \sum_{v=1}^{p-1} \binom{p}{v} a^{p-v} b^{v} - ( a^{p} + b^{p} )$ . This becomes $\sum_{v=1}^{p-1} \binom{p}{v} a^{p-v} b^{v}$ and I can apply the idea from above; since $p \in \mathbb{P}$ and $1 \leq v \leq p-1$ , $p$ divides all parts with $\binom{p}{v}$ . I presume that for $n$ the formula holds: $p$ is a divisor of $(a+b)^{p^{n}} - ( a^{p^{n}} + b^{p^{n}} )$ Now I want to do the induction step, and that is the part that I am struggling with: $(a+b)^{p^{n+1}} - ( a^{p^{n+1}} + b^{p^{n+1}} )$ I tried to write it out and see if I can continue with that: $(a+b)^{p^{n+1}} - ( a^{p^{n+1}} + b^{p^{n+1}} )$
$$(a+b)^{p^{n+1}} - ( a^{p^{n+1}} + b^{p^{n+1}} ) =$$ $$((a+b)^{p^n})^p - ( (a^{p^n})^p + (b^{p^n})^p )=$$ $$\overset{ind.hypoth.}=(kp+ (a^{p^n} + b^{p^n}))^p - ( (a^{p^n})^p + (b^{p^n})^p )=$$ $$\overset{ind.base} =Kp+(a^{p^n} + b^{p^n})^p- ( (a^{p^n})^p + (b^{p^n})^p )=$$ $$\overset{ind.base}=Kp+\kappa p.$$
|elementary-number-theory|
0
If $f,g$ are uniformly continuous and $f$ is bounded and non-periodic, then $fg$ is not necessarily uniformly continuos
I've just begun my grad program and we were introduced to this problem in our Analysis I course: consider two uniformly continuous functions $f$ and $g$ , say from $\mathbb R$ to $\mathbb R$ , where $f$ is bounded. I've learned that the canonical way to disprove that $fg$ is uniformly continuous is countering the claim with $f(x) = \sin(x)$ and $g(x) = x$ . Upon naïve inspection in Desmos, other uniformly continuous bounded functions that are also periodic will also work for $f$ in the counterexample (like the triangle wave; I don't have sufficient math background to prove or disprove that any function like this will work, though). Here's the catch: can a counterexample be built using a non-periodic , uniformly continuous bounded function for $f$ ? $g$ doesn't need to be $g(x) = x$ . I thought of a very boring possibility: from my layman understanding, you could "shrink" the graph of $\sin$ after some value $x_0$ so that it's "periodic after $x_0$ ". You could still make this non-periodic
You can take $$f(x)=\begin{cases}x-\lfloor x\rfloor&\text{ if }\lfloor x\rfloor\text{ is a perfect square and }x-\lfloor x\rfloor\leqslant\frac12\\1-x+\lfloor x\rfloor&\text{ if }\lfloor x\rfloor\text{ is a perfect square and }x-\lfloor x\rfloor>\frac12\\0&\text{ otherwise}\end{cases}$$ (see its graph below) and $g(x)=x$ .
|real-analysis|examples-counterexamples|uniform-continuity|
1
"Converse" to Chinese Remainder Theorem
There are lots of posts on MSE and the web titled "converse to CRT" but this is not the same. The following is from "Multiplicative number theory I: Classical theory" by Hugh L. Montgomery and Robert C. Vaughan: In the Chinese Remainder Theorem the direction is to go from two residues modulo $q_1$ and $q_2$ to a unique residue modulo $q_1q_2$ . But this claim of the book is a kind of converse to that. Also it is not of the form $a_1q_2m_1+a_2q_1m_2$ ( $m_i$ being reciprocal to $a_i$ ). I searched the Internet and worked on it a lot but I can't reach a satisfactory proof for the claim. Does anyone know how $a \bmod q_1q_2$ can be written uniquely as $a_1q_2+a_2q_1$ ?
First, this is only true because $q_1$ and $q_2$ are coprime $\iff (q_1,q_2)=1$ : Suppose $\color{red}a \equiv \color{blue}{a_1}q_2+\color{green}{a_2}q_1\equiv \color{blue}{a_3}q_2+\color{green}{a_4}q_1 \pmod{\color{red}{q_1q_2}}$ Then, $\color{red}a \equiv \color{blue}{a_1}q_2\equiv \color{blue}{a_3}q_2 \pmod{\color{red}{q_1}}$ Due to the coprime condition, we have a unique inverse of $q_2$ modulo $q_1$ . In other words, $\color{blue}{a_1} \equiv \color{blue}{a_3} \pmod {q_1}$ . But since $1\le a_1,a_3\le q_1$ , they are equal. Similar argument for $a_2$ and $a_4$ .
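The injectivity argument above can be checked exhaustively for a small coprime pair (a sketch; the moduli $q_1=7$, $q_2=9$ are arbitrary):

```python
from math import gcd

# For coprime q1, q2, the map (a1, a2) -> (a1*q2 + a2*q1) mod (q1*q2)
# with 1 <= a1 <= q1, 1 <= a2 <= q2 should be injective, hence a
# bijection onto the q1*q2 residues.
q1, q2 = 7, 9
assert gcd(q1, q2) == 1
vals = {(a1 * q2 + a2 * q1) % (q1 * q2)
        for a1 in range(1, q1 + 1) for a2 in range(1, q2 + 1)}
assert len(vals) == q1 * q2  # all 63 representations are distinct
```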
|elementary-number-theory|proof-explanation|modular-arithmetic|chinese-remainder-theorem|
1
Possible to factor quadratics mentally?
I was doing math mentally, and I came upon a quadratic. I have no experience in factoring quadratics mentally, so I'm wondering if there is a simple trick to factor quadratics mentally. Question: Does anyone know a simple way to factor a quadratic mentally. Such as $6x^2+7x-3$
Sorry if I'm late, but usually if a quadratic has the form $ax^2+bx+c$ it can be factored as $(qx-d)(px-f)$ where $q, d, f, p$ are numbers with $f\cdot d=c$ and $q\cdot p=a$ . In this case, for $6x^2+7x-3$ : $-3$ has factors $-1$ and $3$ , and $6$ has factors $3$ and $2$ . Now we must check which combination gives $b$ . If one factor is $2x-1$ , then the other factor would be $3x+3$ ; multiplying this out would give a $b$ -value of $3$ , not seven. Trying the other pairing, the quadratic factors as $(3x-1)(2x+3)$ , as desired. In general, factoring mentally takes practice, and there is no sure way to accomplish it. The AC method works to an extent, but oftentimes a quadratic cannot be factored using rational numbers. Often, however, the easiest method is the quadratic formula, which I personally find myself using often.
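The AC method mentioned above can be sketched mechanically: find a factor pair of $ac$ that sums to $b$, then split the middle term (the function name here is illustrative):

```python
# "AC method" sketch: to factor a*x^2 + b*x + c, search for integers
# (m, ac//m) with m * (ac//m) == a*c and m + ac//m == b.
def split_middle(a, b, c):
    ac = a * c
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + ac // m == b:
            return m, ac // m
    return None  # no rational factorization found this way

# For 6x^2 + 7x - 3: ac = -18 and -2 + 9 = 7, so the middle term splits as
# 6x^2 - 2x + 9x - 3 = 2x(3x - 1) + 3(3x - 1) = (3x - 1)(2x + 3).
assert split_middle(6, 7, -3) == (-2, 9)
```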
|algebra-precalculus|
0
Explain that a function has a given Jacobian matrix
Assume that $W$ is a $n \times n$ matrix with elements $w_{ij}$ , and that $\mathbf{b} \in \mathbb{R}^n$ is a column vector with elements $b_{i}$ . We know that the Jacobian matrix to the affine transformation $\mathbf{F}(\mathbf{x}) = W\mathbf{x} + \mathbf{b}$ is $\mathbf{F}'(\mathbf{x}) = W$ . We are now looking at the function $\mathbf{G}: \mathbb{R}^{(n^2+n)} \rightarrow \mathbb{R}^n$ defined by $\mathbf{G}(w_{11},\ldots,w_{1n},\ldots,w_{n1},\ldots,w_{nn},\ldots,b_1,\ldots,b_n) = W\mathbf{x} + \mathbf{b}$ We are looking at the elements in $W$ and $\mathbf{b}$ as variables (with these listed row by row, from left to right), and $\mathbf{x}$ as a constant. Explain that the Jacobian matrix to $\mathbf{G}$ is where $O$ stands for a vector or matrix with only zeros. Above, $\mathbf{x}$ is therefore repeated on each row. When I find the Jacobian matrix to the function, I get the exact same one, except I get $x_1, x_2, \ldots, x_n$ in the diagonal to the left. How do I get this to be the
Let $w_{i,j}$ be the row $i$ , column $j$ entry of $W$ . Let $$b=\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n\end{pmatrix},$$ $$x=\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n\end{pmatrix},$$ and $$y=Wx+b=\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n\end{pmatrix}.$$ Then $$y_i=\sum_{j=1}^n w_{i,j}x_j+b_i.$$ We know that for any $1\leqslant i,j\leqslant n$ , $$\frac{\partial y_i}{\partial b_j}=\left\{\begin{array}{ll} 1 & : i = j \\ 0 & : i\neq j.\end{array}\right.$$ This gives the $n\times n$ identity matrix to the right of the bar. For every $1\leqslant i,p,q\leqslant n$ , we want to compute $$\frac{\partial y_i}{\partial w_{p,q}}=\sum_{j=1}^n x_j \frac{\partial w_{i,j}}{\partial w_{p,q}} + \frac{\partial b_i}{\partial w_{p,q}}.$$ Since the $w$ s and $b$ s are all independent variables, we get $\frac{\partial b_i}{\partial w_{p,q}}=0$ always, and $$\frac{\partial w_{i,j}}{\partial w_{p,q}} = \left\{\begin{array}{ll} 1 & : i=p,j=q \\ 0 & : \text{otherwise.} \end{array}\right.$$ So if $p\neq i$ , every term vanishes and $\frac{\partial y_i}{\partial w_{p,q}}=0$ ; if $p=i$ , only the $j=q$ term survives, giving $\frac{\partial y_i}{\partial w_{i,q}}=x_q$ . Listing the $w_{i,j}$ row by row therefore places the row vector $\mathbf{x}^T$ in the $i$ -th block of row $i$ and zeros elsewhere, which is exactly the claimed Jacobian.
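The block structure of this Jacobian can be verified numerically with finite differences (an illustrative sketch for $n=2$; the point and the constant $\mathbf{x}$ are arbitrary):

```python
# Central-difference Jacobian of G(w11, w12, w21, w22, b1, b2) = W x + b
# at an arbitrary point, with x held fixed. Expected block form:
# [ x^T  0  | 1 0 ]
# [ 0   x^T | 0 1 ]
x = [3.0, -2.0]

def G(v):
    w11, w12, w21, w22, b1, b2 = v
    return [w11 * x[0] + w12 * x[1] + b1,
            w21 * x[0] + w22 * x[1] + b2]

v0 = [0.5, 1.0, -1.5, 2.0, 0.3, -0.7]
h = 1e-6
jac = [[(G([v0[j] + h if j == m else v0[j] for j in range(6)])[i]
         - G([v0[j] - h if j == m else v0[j] for j in range(6)])[i]) / (2 * h)
        for m in range(6)] for i in range(2)]

expected = [[3.0, -2.0, 0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 3.0, -2.0, 0.0, 1.0]]
for row, exp_row in zip(jac, expected):
    for got, want in zip(row, exp_row):
        assert abs(got - want) < 1e-5
```

Since $\mathbf{G}$ is affine, the finite differences are exact up to floating-point rounding.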
|linear-algebra|
0
Roots of quartic equation - given product of two roots, find missing coefficient
The quartic equation $ax^4 + bx^3 + cx^2 + dx + e = 0$ has roots $\alpha, \beta, \gamma, \delta$ . Given that $\alpha \beta = p$ find the value of $k$ So I have deduced that $\gamma \delta = \frac{e}{ap}$ using product of roots $=-\frac{e}{a}$ but I am not sure how to proceed from here. I have written out Vieta's formulae, but can't seem to manipulate to get $k$ . Is there an efficient way to do this?
HINT. From Vieta's formulas we have $$\begin{cases}\alpha+\beta=\frac32-(\gamma+\delta)\\\alpha\beta=4\end{cases}\Rightarrow\begin{cases}\alpha=f_1(\gamma,\delta)\\\beta=f_2(\gamma,\delta)\end{cases}$$ This yields the three equations we need to find the three unknowns $\gamma,\delta, k$ : $$\begin{cases}f_1(\gamma,\delta)+f_2(\gamma,\delta)+\gamma+\delta=\frac32\\ [f_1(\gamma,\delta)+f_2(\gamma,\delta)](\gamma+\delta)+\gamma\delta+5=\frac k2\\ \gamma\delta=\frac85\end{cases}$$
|algebra-precalculus|
0
Roots of quartic equation - given product of two roots, find missing coefficient
The quartic equation $ax^4 + bx^3 + cx^2 + dx + e = 0$ has roots $\alpha, \beta, \gamma, \delta$ . Given that $\alpha \beta = p$ find the value of $k$ So I have deduced that $\gamma \delta = \frac{e}{ap}$ using product of roots $=-\frac{e}{a}$ but I am not sure how to proceed from here. I have written out Vieta's formulae, but can't seem to manipulate to get $k$ . Is there an efficient way to do this?
For simplicity, I denote the roots by $a,b,c,d$ . Observe that you only need $ac+ad+bc+bd = (a+b)(c+d)$ since you already have $ab$ and $cd$ . Since the coefficient of $x$ is zero, $$abc+abd+acd+bcd = 0 \iff ab(c+d) = -cd(a+b)$$ From here, substitute $(a+b)+(c+d) = 3/2$ to get the required answer
|algebra-precalculus|
1
binomial coefficient identity $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$
While trying to solve a probability question about a coin flip game, I arrived at the expression $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}.$ Computing this sum for several low values of $n$ suggested that it summed to $2^{2n-2}$ , suggesting there is an identity: $$\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$$ However I was not able to prove the identity by means of the Pascal recurrence relation, the Chu-Vandermonde identity $\sum_{k=0}^r {m\choose k}{n\choose {r-k}}={{m+n}\choose r}$ , nor the $\sum_{k=0}^n{n\choose k} = (1+1)^n=2^n$ identity from the binomial theorem. Can I get a hint how to proceed?
We seek to evaluate $$\sum_{k=0}^n {n\choose k} \sum_{j=0}^{k-1} {n-1\choose j}.$$ This is $$\sum_{k=0}^n {n\choose k} [z^k] \frac{z}{1-z} \sum_{j\ge 0} {n-1\choose j} z^j = \sum_{k=0}^n {n\choose k} [z^k] \frac{z}{1-z} (1+z)^{n-1} \\ = \sum_{k=0}^n {n\choose k} [z^{n-k}] \frac{z}{1-z} (1+z)^{n-1} = [z^n] \frac{z}{1-z} (1+z)^{n-1} \sum_{k=0}^n {n\choose k} z^k \\ = [z^n] \frac{z}{1-z} (1+z)^{n-1} (1+z)^n = [z^{n-1}] \frac{1}{1-z} (1+z)^{2n-1} \\ = \sum_{q=0}^{n-1} [z^{n-1-q}] \frac{1}{1-z} [z^q] (1+z)^{2n-1} = \sum_{q=0}^{n-1} {2n-1\choose q} = \frac{1}{2} 2^{2n-1} = 2^{2n-2}.$$
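A quick numeric check of the identity for small $n$ (not part of the derivation):

```python
from math import comb

# Verify sum_k C(n,k) * sum_{j<k} C(n-1,j) == 2^(2n-2) for n = 1..11.
for n in range(1, 12):
    total = sum(comb(n, k) * sum(comb(n - 1, j) for j in range(k))
                for k in range(n + 1))
    assert total == 2 ** (2 * n - 2), n
```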
|combinatorics|summation|binomial-coefficients|
0
Why can’t improper integrals be defined directly using Riemann sums?
The standard way to define an improper integral of the form $\int_a^\infty f(t)dt$ is as follows. We first define the Riemann integral $\int_a^xf(t)dt$ for each $x>a$ in the standard way, i.e. using partitions of the interval $[a,x]$ and limits of Riemann sums and all that. Then we define $\int_a^\infty f(t)dt$ as $\lim_{x\rightarrow\infty}\int_a^xf(t)dt$ . But my question is, why can’t we define improper integrals in an analogous fashion to how we define Riemann integrals on closed intervals? That is, why can’t we take partitions of the interval $[a,\infty)$ , take Riemann sums which would be infinite series, and then take the limit of those Riemann sums either under the refinement partial order or as the mesh of the partition goes to $0$ ? Is the issue that the Riemann sum of a given partition of $[a,\infty)$ may not be a convergent series?
The Boltzmann-Stefan formula for the improper integral $\int_0^\infty f(x)\; dx$ , proposed and used by Boltzmann (who claimed to have learned it from his teacher Stefan; see the reference Boltzmann's Probability Distribution of 1877, by Alexander Bach; see also the This math trick revolutionized physics YouTube video), is: $$\boxed{\int_0^\infty f(x)\; dx = \lim_{n\to \infty} \epsilon \sum_{k=1}^n f(k\epsilon),}$$ where the limits $$ \boxed{n\to \infty, \; \epsilon \to 0, \; {\rm and} \; n\epsilon \to \infty}$$ are to be taken.
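A numerical illustration of this limit (a sketch; the choices $f(x)=e^{-x}$ and $\epsilon = n^{-1/2}$ are arbitrary, picked so that $\epsilon\to 0$ while $n\epsilon\to\infty$):

```python
import math

# Boltzmann-Stefan sum for f(x) = e^{-x}, whose improper integral over
# [0, inf) equals exactly 1.
def boltzmann_sum(f, n):
    eps = 1 / math.sqrt(n)
    return eps * sum(f(k * eps) for k in range(1, n + 1))

approx = boltzmann_sum(lambda x: math.exp(-x), 10**6)
assert abs(approx - 1.0) < 1e-3  # error is roughly eps/2
```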
|real-analysis|improper-integrals|riemann-integration|partitions-for-integration|
0
How to prove that $\sum_{n=1}^\infty \left(e^{\frac 1 n} - e^{\frac 1 {n+2}}\right)$ converges?
I have the following series: $$\sum\limits_{n=1}^\infty\left(e^\frac{1}{n} - e^\frac{1}{n+2}\right)$$ I know that this series converges to $e + \sqrt{e} - 2$ (I found its sum using $S = \lim\limits_{n \to \infty} S_n$ , where $S_n$ is a partial sum of $n$ elements). But is there any way to prove that the series converges without finding the actual sum?
At infinity, using Taylor expansion, you have: $$e^{\frac1n}-e^{\frac1{n+2}}\sim 1+\frac1n-1-\frac1{n+2}=\frac{2}{n(n+2)}\sim\frac2{n^2} $$ Hence the series is convergent.
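A numeric sanity check that the partial sums settle near the claimed value $e+\sqrt e-2$ (not a proof of convergence):

```python
import math

# Partial sum of sum_{n=1}^{N} (e^{1/n} - e^{1/(n+2)}) for large N;
# by telescoping this should approach e + sqrt(e) - 2.
target = math.e + math.sqrt(math.e) - 2
N = 200_000
s = sum(math.exp(1 / n) - math.exp(1 / (n + 2)) for n in range(1, N + 1))
assert abs(s - target) < 1e-4  # remaining gap is about 2/N
```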
|sequences-and-series|convergence-divergence|
0
How to construct $\Delta ABC$, given the angle at $C$ , $\gamma$ the median drawn fom $C$, $t_c$, and the angle at the point $B$, $\beta$?
I've gotten stuck again, can't seem to figure out what I'm supposed to construct first. $\Delta ABC$ with $\gamma$ , $\beta$ and $t_c$ marked. Next to it text reads: $\gamma$ = $\angle ACB$ , $\beta$ = $\angle ABC$ and $t_c$ = median of $c$ " />
Construct a triangle with angles $\gamma$ and $\beta$ ; let $t'_c$ be the corresponding median. Then perform a homothety with center $C$ and ratio $t_c/t'_c$ .
|triangles|geometric-construction|
0
How to prove that $\sum_{n=1}^\infty \left(e^{\frac 1 n} - e^{\frac 1 {n+2}}\right)$ converges?
I have the following series: $$\sum\limits_{n=1}^\infty\left(e^\frac{1}{n} - e^\frac{1}{n+2}\right)$$ I know that this series converges to $e + \sqrt{e} - 2$ (I found its sum using $S = \lim\limits_{n \to \infty} S_n$ , where $S_n$ is a partial sum of $n$ elements). But is there any way to prove that the series converges without finding the actual sum?
We have that $$ \left(1+\frac1n\right)^n\le e \le \left(1+\frac1n\right)^{n+1}$$ and then $$e^\frac{1}{n} - e^\frac{1}{n+2}\le \left[\left(1+\frac1{n-1}\right)^n\right]^\frac1n-\left[\left(1+\frac1{n+2}\right)^{n+2}\right]^\frac1{n+2} =\frac1{n-1}-\frac1{n+2}=\frac3{(n-1)(n+2)}$$ therefore the series converges. As noticed by MartinR in the comments, the same estimate follows also from the well-known inequality $1+x \le e^x \le 1/(1-x)$ , indeed $$e^\frac{1}{n} - e^\frac{1}{n+2}\le \frac1{1-\frac1n}-1-\frac{1}{n+2}=\frac3{(n-1)(n+2)}$$
|sequences-and-series|convergence-divergence|
0
Lower bound $\sum_{\{ s_{1},..,s_{d}\}\subset [n]} d!\frac{1}{s_{1}s_{2}\dots s_{d}} $
I want to lower bound the following expression of choosing $d$ numbers from $[n]:=\{ 1,2,\dots,n \}$ . $$ \sum_{\{ s_{1},..,s_{d}\}\subset [n]} d!\frac{1}{s_{1}s_{2}\dots s_{d}} $$ Here $s_{1},\dots,s_{d}$ are all distinct. I know I can upper bound this by $\left( 1+\frac{1}{2}+\dots+\frac{1}{n} \right)^{d} = H_{n}^{d}$ . Is there a way to lower bound this quantity, perhaps to $\Theta((\ln n)^d)$ ? Or is a bound of $\Theta((\ln n)^d)$ even possible? I don't really know much about combinatorics, so not sure what tools I can utilize.
I can give an exact expression for your summation in terms of the harmonic numbers, which proves that your summation is $\Theta((\log n)^d)$ . First, recall the elementary symmetric polynomials in $n$ variables. $$ e_d(x_1,\dots,x_n)=\sum_{1\le i_1\lt \dots \lt i_d \le n}x_{i_1}x_{i_2}\cdots x_{i_d} $$ The expression you are interested in is exactly $d!\cdot e_d(\frac11,\frac12,\dots,\frac1n)$ . Next, recall the simpler power-sum symmetric polynomials, defined as follows. $$ p_d(x_1,\dots,x_n)=\sum_{i=1}^n x_i^d $$ It turns out that you can express the elementary symmetric functions in terms of the power-sum functions, in a somewhat complicated way. From the Wikipedia article for Newton's identities , we see that $$ e_d(x_1,\dots,x_n)=\frac1{d!}\sum_{\pi \in S_d}(-1)^{\text{sign}(\pi)}\prod_{C\in \text{cycles($\pi$)}}p_{|C|}(x_1,\dots,x_n)\tag{$*$} $$ To be clear, for each permutation $\pi$ with cycle lengths $(C_1,\dots,C_m)$ , the summand is $(-1)^{\text{sign}(\pi)}\cdot (x_1^{C_1}+\cdots+x_n^{C_1})\cdots(x_1^{C_m}+\cdots+x_n^{C_m})$ .
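The identification of the sum with $d!\cdot e_d(\frac11,\dots,\frac1n)$ can be checked directly for small parameters (a sketch; the values $n=6$, $d=3$ are arbitrary):

```python
from itertools import permutations
from math import factorial

# e_d computed as the degree-d coefficient of prod_i (1 + x_i * t).
def elem_sym(vals, d):
    coeffs = [1.0] + [0.0] * d
    for v in vals:
        for i in range(d, 0, -1):
            coeffs[i] += v * coeffs[i - 1]
    return coeffs[d]

n, d = 6, 3
vals = [1 / k for k in range(1, n + 1)]

# The original sum over unordered d-sets, times d!, equals the sum over
# ordered d-tuples of distinct indices.
direct = sum(1 / (s[0] * s[1] * s[2])
             for s in permutations(range(1, n + 1), d))
assert abs(factorial(d) * elem_sym(vals, d) - direct) < 1e-9
```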
|probability|combinatorics|graph-theory|
1
A weird probability question
This is the problem in question: You have two identical bowls: the first one contains 3 white balls and 4 black balls, and the second one contains 4 white balls and 5 black balls. If you choose randomly a ball from the two bowls, what is the probability it is white? Let's define our events as such: A1 = choosing a ball from the first bowl; A2 = choosing a ball from the second bowl; B = choosing a white ball. One approach would be using the theorem of total probability: $$\text{We know that }P(B|A_1) = \frac37\text{ and }P(B|A_2) = \frac49\text{, and that:}$$ $$P(A_1) = P(A_2) = \frac12\text{, because the bowls are identical}$$ $$P(B) = P(B |A_1)\times P(A_1) + P(B|A_2)\times P(A_2) = \frac37 \times \frac12 + \frac49 \times \frac12 = 55/126$$ The second approach would be simplifying the problem: Because the two bowls are identical, we could just say we don't even choose between two bowls, but just between the set of all balls. Then we could calculate the probability directly: $$P(B) = \frac{3+4}{7+9} = \frac{7}{16}$$
It depends on what you call “pick a ball at random”. If you choose any ball at random, it doesn’t matter in which bowl it was. The probability is equal to the number of all white balls divided by the number of all balls, so $7/16$ . Note that the bowls are not equiprobable here: you will more likely pick a ball from a bigger bowl. If you however pick a bowl at random with probability $1/2$ , then your first calculation is right.
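The two interpretations can be computed exactly with rational arithmetic (a sketch using the ball counts from the problem):

```python
from fractions import Fraction as F

# Interpretation 1: pick a bowl uniformly, then a ball from that bowl.
p_bowl_first = F(1, 2) * F(3, 7) + F(1, 2) * F(4, 9)
# Interpretation 2: pick uniformly among all 16 balls.
p_ball_uniform = F(3 + 4, 7 + 9)

assert p_bowl_first == F(55, 126)
assert p_ball_uniform == F(7, 16)
```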
|probability|combinatorics|solution-verification|
0
Elementary Row Operations hoffman
In Hoffman Linear Algebra, Chapter 1.3 Exercises, the following question is presented: Find all solutions to the system of equations: $(1 - i)X_1 - iX_2 = 0$ $2X_1 + (1-i)X_2=0$ In the solutions to this (using elementary row operations), this was given: picture of equation How did part 2 happen? Making all elements of row 1 zero?
The first row is simply a scaled version of the second row, the scalar being $\frac{1-i}{2}$ . Since it is redundant information, it seems that they just made everything zero. More specifically, scaling the second row $(2,\,1-i)$ by $\frac{1-i}{2}$ gives the first row, because $2\cdot \frac{1-i}{2}=1-i$ and $(1-i)\cdot\frac{1-i}{2}=\frac{1}{2}(1-2i+i^2)=\frac{1}{2}(-2i)=-i$
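The proportionality of the two rows can be checked with complex arithmetic (a sketch; rows taken from the system in the question):

```python
# Rows of the system (1-i)X1 - iX2 = 0 and 2X1 + (1-i)X2 = 0.
r1 = [1 - 1j, -1j]
r2 = [2, 1 - 1j]

# Scaling the second row by (1-i)/2 should reproduce the first row,
# so the two equations carry the same information.
scaled = [c * (1 - 1j) / 2 for c in r2]
assert all(abs(a - b) < 1e-12 for a, b in zip(scaled, r1))
```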
|linear-algebra|matrices|
1
Doubt in van Oosten's Topos Theory notes
I came across this definition while reading Jaap van Oosten's Topos Theory lecture notes (pg 29, def 1.3). In a category with finite limits, an equivalence relation on an object $X$ is a subobject $R$ of $X \times X$ for which the following statements hold: The diagonal embedding $X \to X \times X$ factors through $R$ . The composition $R \hookrightarrow X \times X \xrightarrow{tw} X \times X$ factors through $R$ , where $tw$ denotes the twist map $\langle p_1, p_0 \rangle : X \times X \to X \times X$ . Here $p_0, p_1 : X \times X \to X$ are the projections. The map $\langle p_0s, p_1t \rangle : R' \to X \times X$ factors through $R$ , where we assume that the subobject $R$ is represented by the arrow $\langle r_0, r_1 \rangle : R \to X \times X$ , and the arrows $s$ and $t$ are defined by the pullback diagram, $\require{AMScd} \begin{CD} R' @>{t}>> R\\ @V{s}VV @VV{r_0}V\\ R @>{r_1}>> X \times X \end{CD}$ The subobject $R'$ is the “object of $R$ -related triples”. I was able to underst
There is one typo and one slight abuse of notation. The correct diagram is a pullback diagram $$ \require{AMScd} \begin{CD} R' @>{t}>> R\\ @V{s}VV @VV{r_0}V\\ R @>{r_1}>> X \end{CD} $$ Then $R'$ can be thought of as triples $(a,b,c)$ for which $(a,b)\in R$ and $(b,c)\in R$ . Moreover, it is more precise to say that it is the map $\langle r_0s, r_1t\rangle\colon R'\to X\times X$ that factors through $R\hookrightarrow X\times X$ . Informally, the former map sends $(a,b,c)$ to $(a,c)$ , so the requirement that this map factors through $R$ is the usual transitivity condition on equivalence relations.
|category-theory|definition|topos-theory|
1
How to prove that $\sum_{n=1}^\infty \left(e^{\frac 1 n} - e^{\frac 1 {n+2}}\right)$ converges?
I have the following series: $$\sum\limits_{n=1}^\infty\left(e^\frac{1}{n} - e^\frac{1}{n+2}\right)$$ I know that this series converges to $e + \sqrt{e} - 2$ (I found its sum using $S = \lim\limits_{n \to \infty} S_n$ , where $S_n$ is a partial sum of $n$ elements). But is there any way to prove that the series converges without finding the actual sum?
Another way, using that by standard limit $\frac{e^x-1}x\to 1$ we have $$e^\frac{1}{n} - e^\frac{1}{n+2}=e^\frac{1}{n+2}\;\frac{e^\frac{2}{n(n+2)}-1}{\frac{2}{n(n+2)}}\;\frac{2}{n(n+2)}\sim \frac{2}{n(n+2)}$$
|sequences-and-series|convergence-divergence|
0
Unordered Sampling With Replacement Intuition
I am trying to understand the intuition behind unordered sampling with replacement. This is the problem I have: I want to distribute $4$ identical balls to $2$ people. Let $1$ represent person $1$ , and let $2$ represent person $2$ . Then, the potential distributions are $1111, 2222, 1112, 2221, \text{and } 1122$ . Formulaically, we know that there should be ${n+k-1 \choose k}$ distributions. Thus, in this case, there should be $5 \choose 2$ $= 10$ (with $n = 4$ and $k = 2$ ) distributions. Where do the other $5$ orderings come from?
Your formula for the number of possible distributions for the given example is incorrect. It's $n+k-1 \choose k-1$ and not $n+k-1 \choose k$ . So the correct number of distributions here would be $${4+2-1 \choose 2-1} = {5 \choose 1} = 5$$ Which is what you rightly figured it should be.
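The count can also be confirmed by direct enumeration (a sketch for the example's values $n=4$ balls, $k=2$ people):

```python
from math import comb

# A distribution of 4 identical balls to 2 people is determined by how
# many balls person 1 gets; person 2 gets the rest.
n, k = 4, 2
distributions = [(i, n - i) for i in range(n + 1)]
assert len(distributions) == comb(n + k - 1, k - 1) == 5
```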
|combinatorics|
0
Is it true that the curves $f(x,y)=0$ and $f(y,x)=0$ can only intersect on the line $x=y$?
Assume that the function $f(x,y)$ is not symmetric in $x,y$ . Consider the curve $f(x,y)=0$ in $\mathbb{R}^2$ . Is it always true that $f(a,b)=f(b,a)=0$ only if $a=b$ ? It does appear to be true when I draw an arbitrary curve $f(x,y)$ in $\mathbb{R}^2$ . However, I'm unable to prove this. I also remember seeing an easy counter-example in a similar situation, and want to make sure that I am not making a mistake.
Consider the function $f(x,y)=(x+y-1)(y-2x)$ . It is not symmetric in $x,y$ . $(1,0)$ and $(0,1)$ both are on the plot of $f$ , but $0\neq 1$ .
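A direct check of the counterexample (a sketch; the extra point $(2,1)$ just witnesses the asymmetry):

```python
# f(x, y) = (x + y - 1)(y - 2x): vanishes at (1, 0) and (0, 1), two
# points off the line x = y, and is not symmetric in x and y.
f = lambda x, y: (x + y - 1) * (y - 2 * x)
assert f(1, 0) == 0 and f(0, 1) == 0
assert f(2, 1) != f(1, 2)  # -6 vs 0, so f is genuinely asymmetric
```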
|calculus|algebra-precalculus|
1
Is it true that the curves $f(x,y)=0$ and $f(y,x)=0$ can only intersect on the line $x=y$?
Assume that the function $f(x,y)$ is not symmetric in $x,y$ . Consider the curve $f(x,y)=0$ in $\mathbb{R}^2$ . Is it always true that $f(a,b)=f(b,a)=0$ only if $a=b$ ? It does appear to be true when I draw an arbitrary curve $f(x,y)$ in $\mathbb{R}^2$ . However, I'm unable to prove this. I also remember seeing an easy counter-example in a similar situation, and want to make sure that I am not making a mistake.
Here is a simple counterexample: Let $f(x,y)=x\cdot y(x-1)$ . Then, $f(1,0)=0$ $f(0,1)=0$ but $a\ne b$ .
|calculus|algebra-precalculus|
0
Show continuity
A function $h:\mathbb{Q} \rightarrow \mathbb{R}$ , with $h(x) = 0$ for $|x| < \sqrt2$ and $h(x) = 1$ for $|x| > \sqrt2$ , is continuous for all $x$ in $\mathbb{Q}$ . It states in the solution that for $x_0$ in $\mathbb{Q}$ and $\delta = \min\{|x_0 + \sqrt2|, |x_0 - \sqrt2|\}$ it follows that $|h(x) - h(x_0)| = 0$ and therefore $< \epsilon$ . Thus continuous. I don't understand how to arrive at that solution. Can someone help me out? I am new to analysis. This might be simple but I just don't get it.
(Remark: if you haven't sketched the function $h$ -- on paper, or in your head if your visual imagination is up to it -- then you absolutely should do that first. Don't try to do this sort of thing just by pushing symbols around.) Obviously the function $h_0\,:\,\mathbb{R}\rightarrow\mathbb{R}$ with "the same definition" as $h$ isn't continuous. So what's different when you're mapping from $\mathbb{Q}$ ? Only the fact that the points $\pm\sqrt{2}$ aren't in $\mathbb{Q}$ , and those are the points in $\mathbb{R}$ where $h_0$ fails to be continuous. So, suppose you have some point $x_0$ that isn't $\pm\sqrt2$ . You need to show that $h$ is continuous there, which means you need to find an interval around $x_0$ in which $h$ is well-behaved. Usually "well-behaved" would be defined in terms of how much you're allowing the value of $h$ to change -- "for all $\epsilon>0$ , there is some $\delta>0$ ", etc. -- but in this particular case $h$ is constant on any interval that doesn't include $\pm\sqrt2$ .
|continuity|
0
Show continuity
A function $h:\mathbb{Q} \rightarrow \mathbb{R}$ , with $h(x) = 0$ for $|x| < \sqrt2$ and $h(x) = 1$ for $|x| > \sqrt2$ , is continuous for all $x$ in $\mathbb{Q}$ . It states in the solution that for $x_0$ in $\mathbb{Q}$ and $\delta = \min\{|x_0 + \sqrt2|, |x_0 - \sqrt2|\}$ it follows that $|h(x) - h(x_0)| = 0$ and therefore $< \epsilon$ . Thus continuous. I don't understand how to arrive at that solution. Can someone help me out? I am new to analysis. This might be simple but I just don't get it.
As $\sqrt{2}$ is not a rational number, we are not asked to analyse continuity around it. So let's pick $x_0$ as proposed. Without loss of generality let's just consider the case $x_0 > \sqrt{2}$ . Let $\delta = x_0 - \sqrt{2}$ ; then for any $\epsilon>0$ and any $x_1$ such that $|x_1-x_0|<\delta$ , we have that $x_1>\sqrt{2}$ and thus $h(x_1)=1$ , so $|h(x_1)-h(x_0)|=|1-1|=0<\epsilon$ .
|continuity|
0
Error solving: $\sin\left(\frac{x}{2}\right)=3+2\cos(x)$
I have the following simple trigonometric equation to solve: $$ \sin\left(\frac{x}{2}\right)=3+2\cos(x)\,. $$ There are several different ways to solve this, but one way is giving me a different answer and it's intriguing me because I don't see the flaw in the argument. If I use the fact that: $$ \sin\left(\frac{x}{2}\right)=\pm \sqrt{\frac{1-\cos(x)}{2}} $$ and then I take the square of both sides of the trigonometric equation, I get: $$ \frac{1-\cos(x)}{2}=\left[3+2\cos(x)\right]^2\,, $$ which can be solved by substitution. If I do so, I get the solution: $$ x=\pi+2\pi k, \text{ where } k\in\mathbb{Z}\,, $$ but the actual solution of the initial trigonometric equation is (it can be obtained using other methods as previously stated): $$ x=\pi+4\pi k, \text{ where } k\in\mathbb{Z}\,. $$ So, where is the problem? I think it's some hidden condition of existence that I'm not taking into account, but I do not see where. I get it that, since initially I've $\sin(x/2)$ , the final solution should be $
Consider $x=3\pi/2$ . The equation becomes $-1=1$ . This equality will be correct, if you square both sides. But it is incorrect as it is. When you square both sides, it can happen that you acquire false roots. The right side is always positive. So you must add the condition $\sin(x/2)>0$ for your squaring both sides to be correct.
|trigonometry|solution-verification|
1
A weird probability question
This is the problem in question: You have two identical bowls: the first one contains 3 white balls and 4 black balls, and the second one contains 4 white balls and 5 black balls. If you choose randomly a ball from the two bowls, what is the probability it is white? Let's define our events as such: A1 = choosing a ball from the first bowl; A2 = choosing a ball from the second bowl; B = choosing a white ball. One approach would be using the theorem of total probability: $$\text{We know that }P(B|A_1) = \frac37\text{ and }P(B|A_2) = \frac49\text{, and that:}$$ $$P(A_1) = P(A_2) = \frac12\text{, because the bowls are identical}$$ $$P(B) = P(B |A_1)\times P(A_1) + P(B|A_2)\times P(A_2) = \frac37 \times \frac12 + \frac49 \times \frac12 = 55/126$$ The second approach would be simplifying the problem: Because the two bowls are identical, we could just say we don't even choose between two bowls, but just between the set of all balls. Then we could calculate the probability directly: $$P(B) = \frac{3+4}{7+9} = \frac{7}{16}$$
I would agree the wording is somewhat ambiguous. However, if we assume a crazy person is not writing this problem, then I think the intended meaning is that the two bowls are identical in every way except for having different amounts of different colored balls. Furthermore, I doubt the existence of two bowls is irrelevant to the problem. Otherwise, why mention it? To be safe, you could provide your teacher with two answers, one for each possible interpretation, and be sure to explain why each answer is relevant to the corresponding interpretation. I am partial to the following interpretation, and here is how I would approach it from first principles: Let $W$ be the event in which a white ball is chosen, and let $B_1$ and $B_2$ be events in which a ball is selected from bowls $1$ and $2$ , respectively. Furthermore, let $W \cap B_1$ be the joint event of choosing a white ball from bowl $1$ , and let $W \cap B_2$ be the joint event of choosing a white ball from bowl $2$ . First, recogniz
|probability|combinatorics|solution-verification|
0
Are paths in a stochastic process $\{X_t\}_{t \in \mathbb{R}}$ constant if $X_t$ is identically distributed for all $t$?
I've been trying to develop an intuition for stochastic processes and would like some help clearing up the following situation. Suppose $\{X_t\}_{t \in \mathbb{R}}$ is a real valued stochastic process defined on the probability space $(\Omega, \mathcal{F}, P)$ . Suppose all the $X_t$ are identically distributed for each $t$ , for concreteness we can assume they each have distribution $\mathcal{N}(0,1)$ . On one hand I would expect that the stochastic process above describes paths centered around the constant function $f(t) = 0$ . In more detail, we may say that within one standard deviation all such paths would lie within the strip $[-1, 1]$ on the range. Thus we clearly see that there is a possibility for some paths to not be constant. On the other hand we the paths are defined as follows: for each $\omega \in \Omega$ there is an associated path $t \mapsto X_t(\omega)$ . Since each $X_t$ is identically distributed, $X_t(\omega)$ is constant for all $t$ and fixed $\omega$ . Thus we hav
Your question is actually the same for random walks $S_{n}=\sum I_{k}$ with Gaussian $I_{k}\sim N(0,1)$ . Then if we normalize we get $$X_{n}:=\frac{S_{n}}{\sqrt{n}}\sim N(0,1).$$ "Since each $X_t$ is identically distributed, $X_t(\omega)$ is constant for all $t$ and fixed $\omega$ ." Here, however, the $X_{n}$ are not constant. As we add more $I_{k}$ we fluctuate around zero. If we take a realization, we simply get a different path trajectory. The $\omega$ here means that we sample a realization of the path $(X_{t})_{t\geq 0}$ . See for example What is a sample path of a stochastic process for many nice answers
|probability|probability-theory|stochastic-processes|
1
How important is the order of the conditional in the ($\epsilon$, $\delta$)-definition of the limit? Would it matter if it is a biconditional?
So we're looking at the ( $\epsilon$ , $\delta$ )-definition of the limit in class and I am kind of confused, because the teacher says one thing but the books say another. Teacher says that $\lim_{x \to p} f(x) = L$ if: $(\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in \mathbb{R})(0 < |x-p| < \delta \iff |f(x) - L| < \epsilon)$ Now I went into the book, and it actually says something similar but different, it says that for the limit to exist and be equal to $L$ we must have: $(\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in \mathbb{R})(0 < |x-p| < \delta \implies |f(x) - L| < \epsilon)$ The teacher is very adamant that it is a double implication and we need to prove it from one way to the other and vice versa. I have several questions. Firstly, what would even change if we invert the conditional on the definition of limit that the books have? Say we change it to this: $(\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in \mathbb{R})(|f(x) - L| < \epsilon \implies 0 < |x-p| < \delta)$ My intuition tells me that if the limit exists in the original definition, then it will also exist on the converse on
The statement with the biconditional is definitely not the correct definition of limit. Your instructor is just wrong. Let me write the two statements so we can refer to them: $$\begin{align*} (\forall \epsilon\gt 0)(\exists \delta\gt 0)(\forall x\in\mathbb{R})(0\lt|x-a|\lt \delta\implies |f(x)-L|\lt\epsilon) \tag{*}\\ (\forall \epsilon\gt 0)(\exists \delta\gt 0)(\forall x\in\mathbb{R})(0\lt|x-a|\lt \delta\iff |f(x)-L|\lt\epsilon)\tag{**} \end{align*} $$ $(*)$ is the usual definition of $\displaystyle \lim_{x\to a}f(x)=L$ . And $(**)$ is what your instructor is giving instead. Statement $(**)$ is way too restrictive. And it includes too many things that are not part of the idea of limit. You are correct that a function $f$ , point $a$ , and number $L$ that satisfy $(**)$ will satisfy $(*)$ , but the converse need not hold. The idea of "the limit of $f(x)$ as $x$ approaches $a$ is equal to $L$ " is that we can guarantee that the values of $f$ will be as close as we want to $L$ for every
|calculus|limits|
0
Lower bound $\sum_{\{ s_{1},..,s_{d}\}\subset [n]} d!\frac{1}{s_{1}s_{2}\dots s_{d}} $
I want to lower bound the following expression of choosing $d$ numbers from $[n]:=\{ 1,2,\dots,n \}$ . $$ \sum_{\{ s_{1},..,s_{d}\}\subset [n]} d!\frac{1}{s_{1}s_{2}\dots s_{d}} $$ Here $s_{1},\dots,s_{d}$ are all distinct. I know I can upper bound this by $\left( 1+\frac{1}{2}+\dots+\frac{1}{n} \right)^{d} = H_{n}^{d}$. Is there a way to lower bound this quantity, perhaps to $\Theta((\ln n)^d)$ ? Or is a bound of $\Theta((\ln n)^d)$ even possible? I don't really know much about combinatorics, so not sure what tools I can utilize.
Lemma: For all $t\in \mathbb N$ , $$\alpha_t := \sup_{n> t}\frac{\ln^{t+1} (n+1)}{\sum_{\ell=t+1}^{n} \frac1{\ell+1}\ln^t(\ell)} < \infty.$$ Hint for the proof: Fix $t$ ; you can find $N\in \mathbb N_{\ge t}$ such that $u\mapsto \frac1u \ln^t(u)$ is decreasing on $[N, \infty)$ ; then, for $n\ge N$ , $$\sum_{\ell=N}^{n} \frac1\ell \ln^t(\ell) \ge \sum_{\ell=N}^{n} \int_{\ell}^{\ell+1}\frac1u \ln^t(u) \mathrm du= \ln^{t+1}(n+1) - \ln^{t+1}(N)$$ For $d\in \mathbb N_{\ge 1}$ , and $n\in \mathbb N_{\ge d}$ , let $$\mathcal S_{n, d} := \left\{\left(s_1, \ldots, s_d\right)\subset [n]: s_1 < s_2 < \cdots < s_d\right\}$$ and $$a_{n, d} = \sum_{\left(s_1, \ldots, s_d\right) \in \mathcal S_{n, d}} \frac{d!}{\prod_{k=1}^d s_k}$$ Lemma: $$a_{n+1, d+1} = a_{n, d+1} + \frac{d+1}{n+1} a_{n, d}$$ Proof: \begin{align} a_{n+1, d+1} &= \sum_{\left(s_1, \ldots, s_{d+1}\right) \in \mathcal S_{n+1, d+1}} \frac{(d+1)!}{\prod_{k=1}^{d+1} s_k}\\ &= \sum_{\left(s_1, \ldots, s_{d+1}\right) \in \mathcal S_{n+1, d+1},\, s_{d+1}=n+1} \frac{(d+1)!}{\prod_{k=1}^{
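The recurrence can be sanity-checked exactly with rational arithmetic; this is my own verification code, not part of the answer:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def a(n, d):
    """a_{n,d} = sum over increasing d-tuples (s_1,...,s_d) in [n] of d!/(s_1...s_d)."""
    total = Fraction(0)
    for subset in combinations(range(1, n + 1), d):
        prod = Fraction(1)
        for s in subset:
            prod *= s
        total += factorial(d) / prod
    return total

# Check the recurrence a_{n+1,d+1} = a_{n,d+1} + (d+1)/(n+1) * a_{n,d} exactly
for n in range(2, 8):
    for d in range(1, n):
        assert a(n + 1, d + 1) == a(n, d + 1) + Fraction(d + 1, n + 1) * a(n, d)
print("recurrence verified for n < 8")
```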
|probability|combinatorics|graph-theory|
0
How many ways of traversing every arc of a complete digraph exactly once from a given starting vertex are there?
Given a set of $n$ states $V = \{ s_1, s_2, \ldots, s_n \}$ , and a complete digraph $G = (V, A)$ where $A = \{ (a,b) \mid (a,b) \in V^2\; \text{and}\; a \neq b \}$ , I'm interested in finding cyclic trails through $G$ which traverse every element of $A$ exactly once. I've found a number of different ways to construct such trails, including one based on A097285 (the sequence that contains exactly once every pair $(i,j)$ of distinct positive integers): $t = (a_1, a_2, \ldots, a_{n*(n+1)})$ where $a_i = (s_{A097285(i)}, s_{A097285(i+1)})$ . Given that there's more than one way to construct such trails, I'd like to better understand how to construct such trails, how many there are for a given $n$ , and how the properties of different trails compare (e.g. in my practical application of this, it's desirable to construct trails which visit each vertex at least once as early as possible along the trail). Also, what other problems are related to or isomorphic to this? I'm kinda getting vibes o
If you count oriented cycles, the number of such cycles for $n=2,3,4,\ldots$ is $1, 6, 768, 3888000, \ldots$ If you count non-oriented cycles, i.e. edge-injective embeddings of the cyclic graph on $n(n-1)$ vertices onto $K_n$ , the number of such cycles is $1, 4, 384, 1944264, \ldots$ Neither sequence is in the OEIS, even omitting the first term, so it seems that such paths are likely not well studied (since no one seems to have even studied the enumeration problem before). On the question of paths that visit each vertex as early as possible, the number of paths that start with $s_1\to s_2\to\ldots\to s_n$ is $1, 2, 36, 18432, \ldots$ for $n=2,3,4,5,\ldots$ . For all such cycles, add a factor of $(n-1)!$
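The first few values can be reproduced by brute force. This is my own sketch (not the answerer's code); it counts trails that start and end at a fixed vertex, which matches the $1, 6, 768$ above for $n=2,3,4$:

```python
def count_closed_trails(n):
    """Brute-force: arc sequences that start at vertex 0, use every arc of the
    complete digraph on n vertices exactly once, and return to 0."""
    arcs = frozenset((a, b) for a in range(n) for b in range(n) if a != b)

    def dfs(v, remaining):
        if not remaining:
            return 1 if v == 0 else 0
        return sum(dfs(w, remaining - {(v, w)})
                   for w in range(n) if w != v and (v, w) in remaining)

    return dfs(0, arcs)

print([count_closed_trails(n) for n in (2, 3, 4)])  # [1, 6, 768]
```

For larger $n$ the search explodes; the BEST theorem gives the same counts without enumeration.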
|graph-theory|directed-graphs|eulerian-path|
0
Can I determine the angles of a quadrilateral if I know the lengths of the sides and the difference between the diagonals?
I know the lengths of the four sides of a quadrilateral and the difference between the diagonals (but I do not know the actual lengths of the diagonals). My instinct is that this information ought to be sufficient to determine the angles of the quadrilateral, because a specific difference between the diagonals constrains it to a single, fixed shape. example measurements: L side length = 326mm; R side length = 325mm; bottom length = 677mm; top length = 675mm; diagonal from bottom L to top R is 7mm longer than from bottom R to top L
If it is not required that the quadrilateral be convex, then in some cases the lengths of the four sides and the difference between the diagonals are insufficient to constrain the quadrilateral to a single, fixed shape. To illustrate this, suppose we are given $AB = DA = 5$ , $BC = CD = 3$ and $BD - AC = 1$ . Consider the diagram below where $B,C,D$ are in line yielding a quadrilateral consisting of two $3-4-5$ triangles: Obviously this does not meet the requirements since $BD-AC=2$ . Now imagine $C$ moved gradually to the right while holding point $A$ and lengths $AB,BC,CD,DA$ fixed, so that $B$ and $D$ are pulled towards each other. Initially $BD=6$ and $AC=4$ . Eventually, when $B$ and $D$ coincide, $BD=0$ and $AC=5+3=8$ so $BD-AC$ has reduced continuously from $2$ to $-8$ . There must be an intermediate point with $C$ to the right of $BD$ at which $BD-AC=1$ . Note that the argument does not depend on an assumption that $BD$ and $AC$ change at the same rate or pro rata (such an assu
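The intermediate-value argument can be made concrete numerically. This is my own coordinate setup (not the answer's): place the symmetric example $AB = DA = 5$, $BC = CD = 3$ as a kite with $A$ at the origin and bisect on the half-diagonal $h$; the endpoint values $-8$ and $2$ match the answer's.

```python
from math import sqrt

def bd_minus_ac(h):
    """Kite with AB = DA = 5, BC = CD = 3: A = (0,0), B = (x,h), D = (x,-h),
    C on the symmetry axis to the right of B.  BD = 2h, AC = x + sqrt(9 - h^2)."""
    x = sqrt(25 - h * h)
    return 2 * h - (x + sqrt(9 - h * h))

# bd_minus_ac(0) = -8 and bd_minus_ac(3) = 2; the function is continuous
# (in fact increasing), so some h in (0, 3) gives BD - AC = 1.
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if bd_minus_ac(mid) < 1:
        lo = mid
    else:
        hi = mid
h_star = (lo + hi) / 2
print(round(h_star, 4))
```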
|geometry|
0
Help determining the rank of a module
I have the following question on my homework: Find the rank of the subgroup of $\mathbb{Z}^3$ generated by (2,-2,0), (0,4,-4), and (5,0,-5) I've seen the comment on this post which inspired me to find the invariant factors. My argument so far: We can view this subgroup as a $\mathbb{Z}$ -module, call it $M$ , and consider the canonical surjection $g:\mathbb{Z}^3\to M$ which maps the standard basis vectors of $\mathbb{Z}^3$ to the generators of $M$ . This representation gives us the corresponding matrix $$A=\begin{pmatrix} 2 & -2 & 0 \\\\ 0 & 4 & -4 \\\\ 5 & 0 & -5\end{pmatrix}$$ Then I computed its (Smith) normal form using elementary transformations, and obtained $$\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 2 & 0 \\\\ 0 & 0 & 0\end{pmatrix}$$ Now, we get the invariant factor decomposition as $$M\cong (\mathbb{Z}/1\mathbb{Z})\oplus (\mathbb{Z}/2\mathbb{Z})\oplus \mathbb{Z}^1 \cong \mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}$$ This is where I'm stuck. I can't seem to find a basis for $\mathbb{Z}/2\
Amateur_Algebraist's comment has made me realize my mistake. The $g$ which I thought was a presentation of $M$ is not actually a presentation. The definition of a presentation we had in class was a very specific homomorphism from $R^m\to R^n$ , where $m$ is the rank of the kernel of $g$ , and $g$ is the surjective module homomorphism defined by taking the standard basis vectors of $R^n$ to the generators of $M$ (as I had defined $g$ above). The correct solution is as follows: Let $g$ be the same as I had defined, and let $x_1=(2,-2,0), x_2=(0,4,-4), x_3=(5,0,-5)$ . Using the calculations I had used to reduce $A$ to elementary form, we can determine linear dependence between the three generating vectors. We get that $$10x_1+5x_2-4x_3=0$$ Thus, we see that $$(a,b,c)\in \ker(g)\iff g(a,b,c)=0$$ $$\iff ax_1+bx_2+cx_3=0\iff a=10k,\ b=5k,\ c=-4k\quad \text{for some }k\in\mathbb{Z}$$ So we conclude $$\ker(g)=\langle(10,5,-4)\rangle$$ where $\langle(10,5,-4)\rangle$ is the submodule of $\mathbb{Z}^3$ generated by $(10,5,-4)$ . Clearly $\ker(g)$ is t
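Both the relation and the rank are easy to machine-check; a small sketch of my own using NumPy's numeric rank:

```python
import numpy as np

x1 = np.array([2, -2, 0])
x2 = np.array([0, 4, -4])
x3 = np.array([5, 0, -5])

# The single relation among the generators: 10*x1 + 5*x2 - 4*x3 = 0
assert not np.any(10 * x1 + 5 * x2 - 4 * x3)

# One relation among three generators leaves a rank-2 subgroup,
# matching the Smith normal form diag(1, 2, 0)
A = np.vstack([x1, x2, x3])
print(np.linalg.matrix_rank(A))  # 2
```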
|abstract-algebra|modules|smith-normal-form|
1
Finding Smooth Numbers
This is a German Olympiad exercise from 2015. Call a number $n\in\mathbb{N}$ smooth if there are $a_1,...,a_n\in\mathbb{Z}$ such that $$ a_1+a_2+...+a_n=a_1\cdot a_2\cdot ... \cdot a_n=n. $$ Find all smooth numbers. So I think that it is evident that every composite number can be written as $$ n=a_1+...+a_m=a_1\cdot ...\cdot a_m $$ for $m\leq n$ . Because we just take a factorization of $n$ and then add only $1$ s. But the number is smooth if $m=n$ . I have trouble finding solutions, I found the obvious solution $1=1$ . Has anyone got an idea of how to find more or all smooth numbers?
HINT. One has $$4n+1=\underbrace{1+1+\cdots+1}_{2n}+\underbrace{-1-1-\cdots-1}_{2n}+(4n+1)$$ $$4n+1=(1)^{2n}(-1)^{2n}(4n+1)$$ In other words, every integer $N\equiv1\pmod4$ is smooth. It is maybe a partial answer because it remains to look at $N\equiv2,3,0\pmod4$ . However it is clear that $N=2$ is not smooth because $x+y=xy=2$ has no integer solutions. I invite somebody to examine the probable impossibility of the remaining cases. I have no time now.
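The hint's construction is easy to verify mechanically (my own check, not part of the hint): for $N = 4n+1$ the list has exactly $N$ entries, and both the sum and the product equal $N$.

```python
from math import prod

def witness(n):
    """The hint's representation of N = 4n + 1: 2n ones, 2n minus-ones, and N itself."""
    N = 4 * n + 1
    return [1] * (2 * n) + [-1] * (2 * n) + [N]

for n in range(1, 50):
    N = 4 * n + 1
    a = witness(n)
    assert len(a) == N and sum(a) == N and prod(a) == N
print("every N = 4n + 1 up to 197 is smooth")
```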
|number-theory|elementary-number-theory|contest-math|
0
Exponential Generating Function for Dirichlet Character
I am working on a differential equation problem, and have ended up with the following form: $$g(x; N) = \sum_{n=0}^{\infty} \frac{\chi_0(n)}{n!} x^n,$$ with $\chi_0(n)$ the principal Dirichlet character mod $N$ . It is clear that this is equivalent to $$g(x; N) = \sum_{(n,\,N) \, = \, 1} \frac{x^n}{n!}.$$ I am trying to determine if the function is periodic, i.e. $$\forall N \in \mathbb{N}, \exists T \in \mathbb{C}, g(x + T; N) = g(x; N).$$ It is a sort of cut up exponential function. Is there a known name for this function? Or otherwise an arithmetic closed form? Thank you, Jackson
If $N$ is a power of $2$ , then $g(x; N) = \sinh x$ , which is periodic with period $T = 2\pi i$ , so we will ignore this case. Let $P_N(x) = \sum_{j=0}^{N-1} a_j x^j$ be the unique polynomial of degree at most $N-1$ whose value at every primitive $N^{th}$ root of unity is $1$ , and whose value at all other $N^{th}$ roots of unity is $0$ (use Lagrange interpolation, for example). Put $\zeta = \exp(2\pi i/N)$ . Then $$\begin{array}{rcl} \sum_{j=0}^{N-1} a_j \exp(\zeta^j x) & = & \sum_{j=0}^{N-1} a_j \sum_{k=0}^\infty \frac{\zeta^{jk} x^k}{k!} \\ & = & \sum_{k=0}^\infty \frac{x^k}{k!} \sum_{j=0}^{N-1} a_j \zeta^{jk} \\ & = & g(x; N) \end{array}$$ since by definition $\sum_{j=0}^{N-1} a_j \zeta^{jk} = 0$ if $k$ is not relatively prime to $N$ , and equals $1$ if $k$ is relatively prime to $N$ . For $T$ to be a period, this requires $$\sum_{j=0}^{N-1} a_j \exp(\zeta^j T) \exp(\zeta^j x) = \sum_{j=0}^{N-1} a_j \exp(\zeta^j x).$$ By linear independence of exponential functions (see e.g. here
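The identity $g(x;N) = \sum_{j} a_j \exp(\zeta^j x)$ can be checked numerically; here is a sketch of my own for the arbitrary choices $N = 5$, $x = 0.7$, solving the Vandermonde system for the interpolation coefficients $a_j$:

```python
import numpy as np
from math import gcd, factorial

N, x = 5, 0.7
zeta = np.exp(2j * np.pi / N)
pts = zeta ** np.arange(N)                      # the N-th roots of unity
# P_N is 1 at primitive N-th roots of unity (gcd(k, N) = 1) and 0 elsewhere
vals = np.array([1.0 if gcd(k, N) == 1 else 0.0 for k in range(N)])

# Lagrange interpolation via the Vandermonde system: V[k, j] = zeta^(k*j)
V = np.vander(pts, N, increasing=True)
a = np.linalg.solve(V, vals)

lhs = sum(a[j] * np.exp(pts[j] * x) for j in range(N))
rhs = sum(x**k / factorial(k) for k in range(60) if gcd(k, N) == 1)
print(abs(lhs - rhs))  # should be ~ 0
```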
|ordinary-differential-equations|number-theory|generating-functions|dirichlet-character|
1
Prove a mapping is open
Prove that the map $$f:(0,1)\to\mathbb{R}^2$$ $$t\mapsto (\cos 2\pi t,\sin 2\pi t)$$ is an embedding. PS: In general topology, an embedding is a homeomorphism onto its image. More explicitly, an injective continuous map $f:X\to Y$ between topological spaces $X, Y$ is a topological embedding if $f$ yields a homeomorphism between $X$ and $f(X)$ . I already proved $f$ is injective and continuous, but I am stuck at proving it yields a homeomorphism between $X$ and $f(X)$ . To prove this, I'm trying to prove $g:X\to f(X)$ is a homeomorphism $\iff g$ is an open mapping, which means $g(U)$ is open for every open set $U\subset X.$ Could someone help me to finish the problem? Thanks in advance!
It is easier to show that $f : (0,1) \to f((0,1)) = S := S^1 \setminus \{ e_1 \}$ , where $e_1 = (1,0) \in S^1$ , is a closed map. Consider the continuous map $$F : [0,1] \to S^1, F(t) = (\cos 2\pi t, \sin 2 \pi t).$$ Your map $f : (0,1) \to S$ is a restriction of $F$ . Let $C \subset (0,1)$ be closed in $(0,1)$ and $C' = \overline{C}^{[0,1]}$ be its closure in $[0,1]$ . Then $C'$ is compact, hence also $F(C')$ is compact, thus closed in $S^1$ . Therefore $F(C') \cap S$ is closed in $S$ . We have $C' \cap (0,1) = \overline{C}^{[0,1]} \cap (0,1) = \overline{C}^{(0,1)} = C$ . With $Z = C' \cap \{0,1\} \subset \{0,1\}$ we obtain $$C' = C' \cap (0,1) \cup C' \cap \{0,1\} = C \cup Z.$$ Since $F(Z) \subset \{ e_1 \}$ we get $$F(C') \cap S = F(C \cup Z) \cap S = (F(C) \cup F(Z)) \cap S = (f(C) \cup F(Z)) \cap S \\= f(C) \cap S \cup F(Z) \cap S = f(C) .$$ This proves that $f(C)$ is closed in $S$ .
|real-analysis|general-topology|open-map|graph-embeddings|
0
If $ a^2 + b^2=1,c^2+d^2=1,ac+bd=0 $ , find $ab+cd $?
I have tried everything except trigonometry. We have not yet started trigonometry in class. I don't know how I am supposed to solve this, since I put in 5 hours and nothing led to a solution.
$ab+cd=0$ because \begin{align*} cd &=cd(a^2+b^2)\\ &=a(ac)d+(bd)bc\\ &=a(-bd)d+(-ac)bc\\ &=-ab(d^2+c^2)\\ &=-ab. \end{align*}
|polynomials|systems-of-equations|
0
Find $n$ such that $7^k\mid 3^n+5^n-1$
Do we have $$ 7^k\mid 3^n+5^n-1\Longleftrightarrow n=t\cdot 7^{k-1}\quad\text{with}\quad t\equiv 1,5\,(\operatorname{mod}6), $$ where $k$ is an arbitrary positive integer? The part " $\Leftarrow$ " is easy. Let $a=3^{t\cdot 7^{k-1}}$ , then $5^{t\cdot 7^{k-1}}\equiv a^{-1}\,(\operatorname{mod}7^k)$ , and $$7^k\mid a^6-1=(a-1)(a+1)(a^2+a+1)(a^2-a+1).$$ We have $a\equiv\begin{cases}3,\text{if }t\equiv 1\,(\operatorname{mod}6)\\5,\text{if }t\equiv 5\,(\operatorname{mod}6)\end{cases}$ . Since $3$ and $5$ are primitive roots modulo $7$ , none of $a-1$ , $a+1$ and $a^2+a+1$ is divisible by $7$ , so $7^k\mid a^2-a+1$ , or $3^{t\cdot 7^{k-1}}+5^{t\cdot 7^{k-1}}-1\equiv a+a^{-1}-1\equiv 0\,(\operatorname{mod}7^k)$ . What about the $\Rightarrow$ part? I think it is true, but how to show that $n$ must be divisible by $7^{k-1}$ (which solves the problem by simply looking at $n\operatorname{mod}6$ )? Thank you for any help in advance.
My approach (inspired by Ibrahim_K's post here ): Lemma. Let $n>0$ and $0\le m\le n$ , then $\nu_p(\binom{n}{m})\ge\nu_p(n)-\min\{\nu_p(m),\nu_p(n-m)\}$ , where $\nu_p$ is the $p$ -adic valuation. Proof. Let $d:=\min\{\nu_p(m),\nu_p(n-m)\}$ , then $\nu_p(n)\ge d$ . There is nothing to prove if $\nu_p(n)=d$ . If $\nu_p(n)>d$ , then we must have $\nu_p(m)=\nu_p(n-m)=d$ , so the $d$ -th digits of the base- $p$ expansions of $m$ and $n-m$ are both nonzero, while the $d$ -th to $(\nu_p(n)-1)$ -th digits of $n$ are all $0$ s, so there are carries at least on the $d$ -th to $(\nu_p(n)-1)$ -th digits when adding $m$ and $n-m$ . The result follows from Kummer's theorem . Now we study $\nu_7(3^n+5^n-1)$ . By looking at $(3^n+5^n-1)\operatorname{mod}7$ we see that $7\mid 3^n+5^n-1$ if and only if $n\equiv 1,5\,(\operatorname{mod}6)$ , so we will assume that henceforth; in particular $7\nmid 2^n-1$ . Then we have $$ \nu_7(3^n+5^n-1)=\nu_7((3^n+5^n+(-1)^n)(2^n-1))=\nu_7((6^n-(-1)^n)+(10^n-3^n)-(5^n
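The claimed equivalence is also easy to test computationally for small $k$ (my own check, not part of the answer), using modular exponentiation:

```python
def divides(n, k):
    """Does 7^k divide 3^n + 5^n - 1?"""
    m = 7**k
    return (pow(3, n, m) + pow(5, n, m) - 1) % m == 0

def predicted(n, k):
    """n = t * 7^(k-1) with t congruent to 1 or 5 mod 6."""
    t, r = divmod(n, 7**(k - 1))
    return r == 0 and t % 6 in (1, 5)

for k in (1, 2):
    for n in range(1, 1000):
        assert divides(n, k) == predicted(n, k)
print("equivalence checked for k = 1, 2 and n < 1000")
```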
|elementary-number-theory|modular-arithmetic|divisibility|
1
Characteristic functions: upper bound on |phi(t)e^(-itx)|
This post about characteristic functions asks about absolutely integrable $\phi(t)$ . This answer mentions "we can find $R$ such that $\int_{\mathbb R\setminus[-R,R]}\lvert\phi(t)\rvert dt < \epsilon$ ". The argument I don't get is how $\frac 1{2\pi}\left\lvert\int_{-R}^R\phi(t)\left(e^{-itx}-e^{-ity} \right)dt\right\rvert \leqslant \frac{R^2}{\pi}\lvert x-y\rvert$ ; in particular, the $|x - y|$ bit . It seems to me that $|e ^ {-itx} - e ^ {-ity}|$ gets upper-bounded by $|-tx - (-t)y| = |t||x - y|$ . Why is that? Or, $|e ^ {-i\theta}| = 1$ , so if you only deal with $\int_{-R}^{R} |\phi(t)||t| \mathrm{d}t$ (in this comment ), where does the $|x - y|$ come from?
$|e^{-itx}-e^{-ity}|=|-it \int_x^{y} e^{-its}ds|\le |t||\int_x^{y} 1ds|=|t| |x-y|$ .
|probability-theory|uniform-continuity|characteristic-functions|
1
How many paths would confirm the existence of limit of a two variable function?
My question is : Suppose we know that $\lim_{(x,y) \to (a,b)}f(x,y)$ exists along infinitely many paths for a function $f : \mathbb{R}^2\rightarrow \mathbb{R}$ ; then can we say that the limit exists and that it can be obtained from choosing any path? Maybe I should have been more careful when saying so... The actual question is to evaluate $\lim_{(x,y) \to (0,0)} \dfrac{x^3y}{x^4+y^2}$ The first thing one would do is to check for "non-existence", by which I mean trying out some paths hoping that the limits along two paths would differ, and then concluding that the limit does not exist. But then what if the limits along more than one path coincide? In this situation I would take $y=mx$ ; then : $$\dfrac{x^3y}{x^4+y^2}=\dfrac{mx^4}{x^4+m^2x^2}\rightarrow 0 \text { as x $\rightarrow$ 0}$$ We realize that irrespective of $m$ we would end up with $\lim_{(x,y) \to (0,0)} \dfrac{x^3y}{x^4+y^2}=0$ along the path $y=mx$. I am sure with this we can not say that the limit exists... so, how many paths (I have checked uncountably many a
@DavidMitra had some great input. Not sure if this is useful, but if you take the gradient of his example, you find the $x$ and $y$ components share a common factor, $(y-x^2)$ . So call his expression $f(x,y)$ . Then $\nabla f = (y-x^2)\vec{u}$ $df=\nabla f \cdot (dx \hat{i}+dy\hat{j})=(y-x^2)\hat{u}\cdot (dx \hat{i}+dy\hat{j})$ So $y=x^2\implies df=0$ . Indeed as he mentions $f(y=x^2)=1/2$ everywhere but the origin. If you can construct a sequence of ordered pairs approaching the origin $(x_n,y_n) \to (0,0)$ and another sequence $(x_m,y_m)\to (0,0)$ and $\lim_{n\to \infty} f(x_n,y_n) \ne \lim_{m \to \infty} f(x_m,y_m)$ , the limit doesn't exist because a limit must be unique. The limit is $0$ if $y_m=c_2\cdot x_m$ , but the limit is $1/2$ if $y_n=x_n^2$ . Take the gradient of your problem and the components do not share a common factor apart from $x$ . So while $y=mx$ doesn't guarantee a valid proof, it looks like it doesn't offer the same kind of counter example. To answer your overa
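A small numeric illustration of the contrast (my own code, using the classic example of the kind @DavidMitra mentioned): every line through the origin gives $0$ for $g$, yet the parabola $y = x^2$ gives $1/2$; for the question's function the AM-GM bound $x^4 + y^2 \ge 2x^2|y|$ gives $|f(x,y)| \le |x|/2$, so there the limit really is $0$.

```python
def g(x, y):
    """Classic counterexample: every line y = mx gives limit 0,
    but the parabola y = x^2 gives 1/2."""
    return x**2 * y / (x**4 + y**2)

def f(x, y):
    """The question's function.  AM-GM gives x^4 + y^2 >= 2 x^2 |y|,
    hence |f(x, y)| <= |x|/2, so the limit at the origin is 0."""
    return x**3 * y / (x**4 + y**2)

for x in (1e-2, 1e-4, 1e-6):
    # g along y = x vs. y = x^2, and f along y = x^2
    print(g(x, x), g(x, x * x), f(x, x * x))
```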
|real-analysis|limits|multivariable-calculus|
0
Multivariate Normal Distributions and the Uniform Distribution on the Sphere
Given a multivariate normal vector $X \sim N(0,I_d)$ (identity covariance matrix), it is well known that : $$\frac{X}{\|X\|_2} $$ is uniformly distributed on the sphere of radius $\sqrt{d}$ in $\mathbb{R}^d$ , where here $\|\cdot\|_2$ denotes the usual Euclidean norm. Question : Is this not also the case for any rotationally invariant $X$ ? Roughly speaking, $X$ being rotationally invariant in my mind corresponds to the distribution of $X$ depending only on $\|X\|_2$ (radius), and so any normalized rotationally invariant distribution should also be uniformly distributed on the sphere (of course possibly with a different choice of radius). Is there something special about normality being invoked above? I know that the multivariate normal is the only rotationally invariant distribution with independent components $X_i$ (this is the so-called Maxwell's theorem) but I'm not sure if/where this fact is invoked when showing that normalized versions of $X$ are uniformly distributed on the sphere.
Yes, indeed the argument is general: it holds for all isotropic distributions. It is for simulations that normal distributions are most interesting. The usual problem is to actually sample the hypersphere uniformly. The naive method of parametrizing the sphere (like generalized spherical coordinates) and finding the corresponding distribution on the parameters is typically too cumbersome. Instead, it is easier to normalize an isotropic multivariate normal variable. The latter is sampled by drawing each component independently from a normal distribution. Btw, the usual way to sample a normal variable is to do the reverse in $2D$ . Indeed the direct method of reverting to a uniform distribution by the cdf is costly as it requires computing the error function. Instead, you can first sample the direction, which is uniform, and then radially. Mathematically, you can get a standard normal from: $$ X=\sqrt{-2\ln U}\cos(2\pi\Phi) $$ with $U,\Phi$ independently and uniformly distributed on $(0,1)$ . Actual
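A quick sanity check of the polar sampling idea (my own sketch). Note the factor $2$ in $\sqrt{-2\ln U}$: this is the standard Box-Muller transform; with $\sqrt{-\ln U}$ alone the variance would come out as $1/2$ rather than $1$.

```python
import math
import random

random.seed(0)

def box_muller():
    # Standard Box-Muller: X = sqrt(-2 ln U) * cos(2 pi Phi), U, Phi ~ U(0,1)
    u = 1.0 - random.random()        # keep u in (0, 1] so log(u) is defined
    phi = random.random()
    return math.sqrt(-2.0 * math.log(u)) * math.cos(2.0 * math.pi * phi)

samples = [box_muller() for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))  # close to (0, 1)
```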
|probability|probability-theory|probability-distributions|geometric-probability|
0
Standard limits giving wrong answers.
As we all know, $$ \lim _{x \rightarrow 0}\left(\frac{\sin x}{x}\right)=1 $$ And I encountered many questions where I used this standard limit and it gave the right answer. However in the following questions I am unable to get the right answer by this. I actually want to solve this limit. $$ \lim _{x \rightarrow 0}\left(\frac{1}{x^2}-\frac{1}{\sin^2 x}\right) $$ So the first step I took was to multiply the numerator and the denominator of $1/\sin^2 x$ by $x^2$ . This gives us $(x/\sin x)^2$ , and, as I know, we can use the standard limit and write $1$ instead of it. $$ \lim _{x \rightarrow 0}\left(\frac{1}{x^2}-\frac{1}{x^2}\left(\frac{x}{\sin x}\right)^2\right) $$ $$ \lim _{x \rightarrow 0}\left(\frac{1}{x^2}-\frac{1}{x^2}\right)=0 $$ But this gives us the limit by further subtraction as 0, which is not correct when checked by the expansion method. Where am I wrong? Below are some other examples of questions where I got wrong answers. We also know, $$ \lim _{x \rightarrow 0} \frac{e^x-1}{x}=1 $$
As someone already pointed out, there seems to be a mistake in your factorization: if you multiply the denominator and the numerator of $\frac{1}{\sin(x)^2}$ by $x^2$ you will get $(\frac{x}{x\sin(x)})^2$ . And then you will be looking at the limit $\frac{1}{x^2}(1-(\frac{x}{\sin(x)})^2)$ . This limit is under the indeterminate form $\infty \cdot 0$ . But more fundamentally, the reason why your approach is wrong is that you cannot deduce the second limit from the first. If you look at your Taylor expansion method, you will notice that the result for this limit comes from the third order term of $\sin(x)$ . You are trying to solve a limit that needs information up to order 3 of $\sin(x)$ around 0 by injecting $\lim_{x\rightarrow 0}\frac{\sin(x)}{x} = 1$ , which only gives you information up to order 1. For the second question it seems like you made a mistake; you should have $$\frac{e^x - 1 -x^2}{x^3} = \frac{1}{x^2}(\frac{e^x-1}{x} - x)$$ this gives you a $(+\infty)\cdot 1$ limit, so the l
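A quick numeric check of the first limit (my own addition): the expansion $1/\sin^2 x = 1/x^2 + 1/3 + O(x^2)$ predicts the value $-1/3$, which only the third-order term of $\sin x$ can produce.

```python
import math

def h(x):
    return 1.0 / x**2 - 1.0 / math.sin(x) ** 2

# h(x) -> -1/3 as x -> 0, consistent with 1/sin^2(x) = 1/x^2 + 1/3 + O(x^2)
for x in (0.1, 0.01, 0.001):
    print(h(x))
```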
|calculus|limits|derivatives|limits-without-lhopital|
0
Composition of homotopies of morphisms of complexes of abelian groups
Show that if $f_1, f_2$ are homotopic morphisms of complexes of abelian groups $C_n, D_n$ , and $g_1, g_2$ are homotopic morphisms of complexes of abelian groups $D_n, E_n$ , then $g_1 \circ f_1$ and $g_2 \circ f_2$ are homotopic. We have that $f_1 - f_2 = \delta^D h_n + h_{n-1} \delta^C$ and $g_1 - g_2 = \delta^E h'_n + h'_{n-1} \delta^D$ , and I tried to combine these two expressions in order to find something of the form $g_1f_1 - g_2f_2 = \delta^E h''_n + h''_{n-1} \delta^C$ . However, I could never arrive at a precise formula written in this way (additional terms appeared that I didn't know how to eliminate). Can anyone help me with this? Thank you
For inspiration, consider the analogous situation in topology: Let $f_1, f_2 : X \to Y$ and $g_1, g_2 : Y \to Z$ be continuous maps between topological spaces such that $f_1 \sim f_2$ and $g_1 \sim g_2$ . We want to show that $g_1 \circ f_1 \sim g_2 \circ f_2$ . We start by picking homotopies $H : f_1 \to f_2$ and $H' : g_1 \to g_2$ . We want to construct a homotopy $g_1 \circ f_1 \to g_2 \circ f_2$ , so in particular we want to construct paths $g_1(f_1(x)) \rightsquigarrow g_2(f_2(x))$ for all $x \in X$ . We already have a path $H(x,{-}) : f_1(x) \rightsquigarrow f_2(x)$ , and we can apply $g_2$ to this to get a path $g_2(H(x,{-})) : g_2(f_1(x)) \rightsquigarrow g_2(f_2(x))$ . We also have a path $H'(f_1(x),{-}) : g_1(f_1(x)) \rightsquigarrow g_2(f_1(x))$ , and so concatenating these yields a path $$H'(f_1(x),{-}) \diamond g_2(H(x,{-})) : g_1(f_1(x)) \rightsquigarrow g_2(f_2(x))!$$ These do indeed produce a homotopy $g_1 \circ f_1 \to g_2 \circ f_2$ . More abstractly: $g_2 \circ H$ is
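Translating the concatenation of paths into algebra suggests the explicit homotopy the question asked for. A short check of my own, using only that $g_1$ and $f_2$ are chain maps (they commute with the differentials):

```latex
\begin{align*}
g_1 f_1 - g_2 f_2
  &= g_1(f_1 - f_2) + (g_1 - g_2) f_2 \\
  &= g_1(\delta^D h + h\,\delta^C) + (\delta^E h' + h'\,\delta^D) f_2 \\
  &= \delta^E (g_1 h) + (g_1 h)\,\delta^C + \delta^E (h' f_2) + (h' f_2)\,\delta^C
     && \text{since } g_1 \delta^D = \delta^E g_1,\ \delta^D f_2 = f_2\,\delta^C \\
  &= \delta^E h'' + h''\,\delta^C,
     \qquad h''_n := g_1 h_n + h'_n f_2.
\end{align*}
```

The two summands of $h''$ are exactly the two path segments concatenated above: $g_2(H(x,{-}))$ corresponds to $g$ applied to one homotopy, and $H'(f_1(x),{-})$ to the other homotopy precomposed with $f$.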
|algebraic-topology|homological-algebra|abelian-groups|
1
Prove for any sets $A$ and $B$, $2^{(A−B)} ⊆ (2^A −2^B )∪\{∅\}$.
Prove for any sets A and B, $2^{(A−B)} ⊆ (2^A −2^B )∪\{∅\}$ . The hint given is: $∅ \notin 2^A −2^B$ . I can't seem to figure out how to go about this proof, some exploration would tell me that the statement does hold. I'm having a tough time translating this into proof language. I can see that the statement must be true since there are combinations of elements that are not removed from the set difference of the power sets that makes the $2^{(A-B)}$ a subset.
In the context of sets, $2^A$ means the set of all subsets of $A$ . So $2^{(A-B)}$ is the set of all subsets of $A-B$ . The claim is that every subset of $A - B$ is either the empty set or a subset of $A$ that is not a subset of $B$ . And this is clearly true, because if $S$ is a nonempty subset of $A - B$ , its members (of which there is at least one) are all members of $A$ (so it's a subset of $A$ ) that are not members of $B$ (so it's not a subset of $B$ ).
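The claim is small enough to verify exhaustively for a concrete universe (my own brute-force check, not part of the answer):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a set of frozensets."""
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

U = frozenset({0, 1, 2, 3})
checked = 0
for A in powerset(U):
    for B in powerset(U):
        lhs = powerset(A - B)                                 # 2^(A - B)
        rhs = (powerset(A) - powerset(B)) | {frozenset()}     # (2^A - 2^B) u {0}
        assert lhs <= rhs
        checked += 1
print(checked)  # 256 pairs (A, B) checked
```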
|elementary-set-theory|
0
Error solving: $\sin\left(\frac{x}{2}\right)=3+2\cos(x)$
I've the simple following trigonometric equation to solve: $$ \sin\left(\frac{x}{2}\right)=3+2\cos(x)\,. $$ There are some different ways to solve this, but one way is giving me a different answer and it's intriguing me because I don't see the flaw in the argument. If I use the fact that: $$ \sin\left(\frac{x}{2}\right)=\pm \sqrt{\frac{1-\cos(x)}{2}} $$ and then I take the square of both sides of the trigonometric equation, I get: $$ \frac{1-\cos(x)}{2}=\left[3+2\cos(x)\right]^2\,, $$ which can be solved by substitution. If I do so, I get the solution: $$ x=\pi+2\pi k, \quad\text{where } k\in\mathbb{Z}\,, $$ but the actual solution of the initial trigonometric equation is (it can be obtained using other methods as previously stated): $$ x=\pi+4\pi k, \quad\text{where } k\in\mathbb{Z}\,. $$ So, where is the problem? I think it's some hidden condition of existence that I'm not taking into account, but I do not see where. I get it that, since initially I've $\sin(x/2)$ , the final solution should be $
Hint: Instead of squaring and getting extraneous solutions, rewrite $3 + 2 \cos x$ in terms of a double angle formula for $\cos x$ , i.e. $$\cos x = 1 - 2 \sin^2 \dfrac {x}{2}$$ Once you do a little simplification on the right hand side, you should have a quadratic in $\sin \dfrac {x}{2}$ , which is easily factorable. One side will have no solution, while the other will look almost the same as what you came up with, but the answer will come immediately once you take $\sin \dfrac {x}{2}$ into account.
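Carrying the hint one step further (my working, not part of the hint), with $s = \sin\frac{x}{2}$:

```latex
\sin\frac{x}{2} = 3 + 2\cos x = 3 + 2\left(1 - 2\sin^2\frac{x}{2}\right)
\;\Longrightarrow\; 4s^2 + s - 5 = 0
\;\Longrightarrow\; (4s + 5)(s - 1) = 0.
```

Since $s = -\frac{5}{4}$ is impossible for a sine, we must have $\sin\frac{x}{2} = 1$, i.e. $\frac{x}{2} = \frac{\pi}{2} + 2\pi k$, which gives $x = \pi + 4\pi k$ directly, with no extraneous solutions introduced.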
|trigonometry|solution-verification|
0
Drawing of balls from box A to box B and back to box A
Box A contains 7 blue balls and 5 yellow balls. Box B contains 3 blue balls and 7 yellow balls. One ball is removed at random from Box A and placed in Box B. After thoroughly mixing the balls, a ball is drawn at random from box B and placed back into A. Find the probability that at the end of the experiment, box A has (a) exactly 7 blue and 5 yellow balls, (b) twice as many blue balls as yellow balls. My solution: For part (a): Scenario 1: Drawing a blue ball from Box A initially Probability of drawing a blue ball from Box A: $\frac{7}{12}$ After moving a blue ball from A to B, Box A has 6 blue and 5 yellow balls. Probability of drawing a blue ball from Box B: $\frac{4}{14}$ After moving this blue ball back to Box A, it will have 7 blue and 5 yellow balls. So, for this scenario: P = $\frac{7}{12} \times \frac{4}{14}$ Scenario 2: Drawing a yellow ball from Box A initially Probability of drawing a yellow ball from Box A: $\frac{5}{12}$ After moving a yellow ball from A to B, Box A has 7
In (a) your computations of the probability of drawing a blue ball from box B are both wrong (they should be 4/11 and 3/11). Also in the case where a yellow ball was drawn from box A, you want the probability of drawing a yellow ball from box B. This gives you $$P = \frac{7}{12} \cdot\frac{4}{11} + \frac{5}{12}\cdot\frac{8}{11} = \frac{17}{33}$$ For (b) you want to compute the probability of having twice as many blue balls in box A as yellow balls. As the number of balls at the end in box A is still 12, you deduce that you want to compute the probability of having 8 blue balls and 4 yellow balls in box A. The only way to attain this configuration is if you first remove a yellow ball from A and then add a blue ball to A (which is equivalent to removing a blue ball from B). The odds of the first are $\frac{5}{12}$ . The odds of the second, knowing that a yellow ball was removed from A (thus added to B), are $\frac{3}{11}$ . You end up having that $$P = \frac{5}{12}\cdot \fr
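The two answers can be computed exactly with rational arithmetic (my own check of the reasoning above):

```python
from fractions import Fraction

# Box A: 7 blue, 5 yellow.  After the transfer, box B holds 11 balls.
p_blue_first = Fraction(7, 12)
p_yellow_first = Fraction(5, 12)

# (a) box A ends with 7 blue / 5 yellow: the returned ball matches the removed one
p_a = p_blue_first * Fraction(4, 11) + p_yellow_first * Fraction(8, 11)

# (b) box A ends with 8 blue / 4 yellow: remove a yellow, then draw a blue from B
p_b = p_yellow_first * Fraction(3, 11)

print(p_a, p_b)  # 17/33 5/44
```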
|probability|
0
Prove $\sqrt{\frac{a+3}{a+3b}}+\sqrt{\frac{b+3}{b+3c}}+\sqrt{\frac{c+3}{c+3a}} \ge 3$
Let $a,b,c$ be positive reals such that $ab+bc+ca=3$ . Prove that: $$\sqrt{\frac{a+3}{a+3b}}+\sqrt{\frac{b+3}{b+3c}}+\sqrt{\frac{c+3}{c+3a}} \ge 3$$ Once I saw this problem on AoPS: https://artofproblemsolving.com/community/c6h3248614p29942459 I thought it was easy until I decided to solve it. Here is some of my attempt: First attempt : By using AM-GM, we have: $$\sqrt{\frac{a+3}{a+3b}}+\sqrt{\frac{b+3}{b+3c}}+\sqrt{\frac{c+3}{c+3a}} \ge 3\sqrt[6]{\frac{(a+3)(b+3)(c+3)}{(a+3b)(b+3c)(c+3a)}}$$ So we just need to prove: $$(a+3)(b+3)(c+3) \ge (a+3b)(b+3c)(c+3a)$$ Which is obviously wrong. Second attempt : By the Bernoulli inequality, we have: $$\sqrt{\frac{a+3}{a+3b}} \ge 1+\frac{3(b-1)}{2(a+3b)}$$ So the first inequality is equivalent to: $$\sum_{cyc} \frac{3(b-1)}{2(a+3b)} \leq 0$$ which is equivalent to: $$\sum_{cyc }a^2 b + 6 \sum_{cyc}a^2 c+27 a b c \leq 39+3\sum_{cyc}a^2$$ Fortunately, it's obvious, but there is a big mistake here: this Bernoulli inequality is reversed for exponents $\le 1$ , so m
Some thoughts. By the Holder inequality, we have $$\left(\sum_{\mathrm{cyc}} \sqrt{\frac{a+3}{a+3b}}\right)^2 \sum_{\mathrm{cyc}} (a + 3b)(a + 3)^2(c + 2)^3 \ge \left(\sum_{\mathrm{cyc}} (a + 3)(c + 2)\right)^3. $$ It suffices to prove that $$\left(\sum_{\mathrm{cyc}} (a + 3)(c + 2)\right)^3 \ge 9 \sum_{\mathrm{cyc}} (a + 3b)(a + 3)^2(c + 2)^3.$$ This inequality is true, as verified by Mathematica (a computer algebra system). A human-verifiable proof is still required.
|inequality|
0
Is $\mu$ finitely additive, countably additive, or countably subadditive?
For any set $A\subset \mathbb{N}$ , define a set function $\mu(A)=\sum_{n\in A}2^{-n}$ if $A$ is a finite subset, and $\mu(A)=\infty$ otherwise. Is $\mu$ finitely additive, countably additive, or countably subadditive? It is finitely additive because for two disjoint subsets $A, B$ , we have $A\cup B$ is finite and $$ \mu(A\cup B)=\sum_{n\in A\cup B}2^{-n}=\sum_{n\in A}2^{-n}+\sum_{n\in B}2^{-n}=\mu(A)+\mu(B) $$ But what about the other cases?
It is not countably additive: $\mu(\mathbb{N}) = \infty$ , but taking $A_n = \{n\}$ , we have $\mathbb{N} = \bigcup_n A_n$ with the $A_n$ disjoint, yet $\sum_n \mu(A_n) = \sum_n 2^{-n} < \infty$ . This also shows that $\mu$ is not countably subadditive. You correctly state that it is finitely additive, but you have to take more care in treating the cases where $A$ , $B$ , or both are infinite sets.
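A quick numerical illustration of the failure (taking $\mathbb{N}=\{1,2,\dots\}$; if $0$ is included the limit is $2$ instead, but still finite):

```python
def mu(A):
    # mu of a *finite* subset of N (the question sets mu(A) = infinity
    # for infinite A); here N = {1, 2, 3, ...}
    return sum(2.0 ** -n for n in A)

# finite additivity on two disjoint finite sets
A, B = {1, 3}, {2, 5}
assert abs(mu(A | B) - (mu(A) + mu(B))) < 1e-15

# countable additivity fails: N is the disjoint union of the singletons {n},
# but the series sum_n mu({n}) = sum_n 2^{-n} converges to 1, not infinity
partial = sum(mu({n}) for n in range(1, 60))
print(partial)  # -> 1.0 up to rounding, nowhere near mu(N) = infinity
```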
|real-analysis|probability|
1
Why is a general element in $G(L/K)$ a power of the Frobenius automorphism when restricted to $L \cap \tilde{K}$?
I'm going through Neukirch's Algebraic Number Theory and I'm stuck on why we can say that a general element in $G(L/K)$ is a power of the Frobenius automorphism when restricted to $L \cap \tilde{K}$ in the proof of Proposition 4.4. Proposition: Given a finite Galois extension $L/K$ , the mapping $$\{\sigma \in G(\tilde{L}/K) \mid d_{K} (\sigma)=n \in \mathbb{N}^{+} \} \rightarrow G(L/K)$$ is surjective. In the proof it is just stated that a given $\sigma \in G(L/K)$ , when restricted to $L \cap \tilde{K}$ , is a power of the Frobenius automorphism restricted to $L \cap \tilde{K}$ , and I can't think of a reason why it should be true.
This is Chapter IV of Neukirch, where he works with abstract Galois theory. Here is an argument that any $\sigma \in G(L|K)$ is a power of the Frobenius in that setting, rather than in the special case where $G$ is the Galois group of local fields. In abstract Galois theory $G_K$ , $G_L$ are closed subgroups of a profinite group $G$ , and $G(L|K) = G_K/G_L$ . Also there is a surjective map $d: G \rightarrow \widehat{\mathbb{Z}}$ , $f_K = (\widehat{\mathbb{Z}} : d(G_K))$ , $d_K = (1/f_K)d$ , the Frobenius $\varphi_K$ is defined via $d_K(\varphi_K) = 1$ , and he writes $\varphi_{L|K}$ for $\varphi_K \bmod G_L$ . Unramified means $(d(G_K):d(G_L)) = (G_K : G_L)$ . It is enough to show that if $L$ is unramified over $K$ then $G(L|K) = \langle \varphi_{L|K} \rangle$ , that is, $G(L|K)$ is the cyclic group generated by $\varphi_{L|K}$ . For this, suppose $|G_K/G_L| = n$ ; then \begin{equation}n = (G_K : G_L) = (d(G_K) : d(G_L)) = (d_K(G_K) : d_K(G_L)).\end{equation} Since $d_K(G_K) = \widehat{\mathbb{Z}}$ , this says $d_K(G_L)$ is a closed subgroup of $\widehat{\mathbb{Z}}$ of index $n$ , hence $d_K(G_L) = n\widehat{\mathbb{Z}}$ , so $d_K$ induces a surjection $G_K/G_L \rightarrow \widehat{\mathbb{Z}}/n\widehat{\mathbb{Z}}$ between groups of order $n$ , i.e. an isomorphism. The latter group is cyclic, generated by the image of $\varphi_K$ (which maps to $1$ ), so $G(L|K) = \langle \varphi_{L|K} \rangle$ as claimed.
|algebraic-number-theory|class-field-theory|
0
Probability of strings making a complete loop
We have 6 pieces of string. The top ends of the strings are randomly paired up and tied together. The same procedure is done with the bottom ends of the strings. What is the probability that, as a result of this process, the 6 pieces of string will be connected in a single closed loop? My initial attempt is to just multiply the probabilities at each step: The probability of tying the top ends in the initial procedure is 1, because whichever ends we tie will not affect the result we are getting. Second step: if we choose a random string and a random bottom end of it, out of our new 3 full strings, only one of the 5 available bottom ends is unfavorable. So the favorable probability is $\frac{4}{5}$ . Third, if we consider one of our newly tied strings' bottom ends, we have 2 favorable ends out of 3 ends, so $\frac{2}{3}$ , and finally there is 1 way to tie the last ends. Therefore in total: $1 \times \left(\frac{4}{5}\right) \times \left(\frac{2}{3}\right) \times 1 = \frac{8}{15} $ This was my reasoning; is it correct?
We can assign a number to each string from 1 to 6. We get three pairs $(i,j)$ of distinct numbers when tying the top ends. We will not have a single closed loop if we choose one of these three pairs for the bottom ends. So the probability of having a single closed loop can be decomposed as $\frac{12}{15}$ for the first bottom pair, times $\frac{4}{6}$ for the second bottom pair given that we didn't choose one of the three top pairs before, which gives $\frac{8}{15}$ . The $\frac{4}{6}$ comes from the fact that the first bottom pair removes the possibility of choosing two of the three top pairs, and we also need to account for the remaining top pair being chosen by default for the last bottom pair.
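The $\frac{8}{15}$ can also be confirmed by brute force: fix the top pairing (by symmetry it doesn't matter which) and enumerate all 15 bottom pairings (a sketch; helper names are mine):

```python
def matchings(items):
    """All perfect matchings of an even-sized list, as lists of pairs."""
    items = list(items)
    if not items:
        yield []
        return
    first = items[0]
    for i in range(1, len(items)):
        pair = (first, items[i])
        rest = items[1:i] + items[i + 1:]
        for m in matchings(rest):
            yield [pair] + m

# Label the 6 strings 0..5 and fix the top pairing; every bottom pairing
# is equally likely, so we just count which of the 15 give a single loop.
top = [(0, 1), (2, 3), (4, 5)]

def single_loop(top, bottom):
    # The two matchings make a 2-regular graph on the strings; a single
    # loop means that graph is one 6-cycle.
    adj = {i: [] for i in range(6)}
    for a, b in top + bottom:
        adj[a].append(b)
        adj[b].append(a)
    seen = {0}
    prev, cur = None, 0
    while True:  # walk the cycle containing string 0
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            break
        seen.add(nxt)
        prev, cur = cur, nxt
    return len(seen) == 6

bottoms = list(matchings(range(6)))
good = sum(single_loop(top, b) for b in bottoms)
print(good, len(bottoms))  # -> 8 15, i.e. probability 8/15
```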
|probability|probability-theory|puzzle|
0
Why Doesn't the St Petersburg Paradox Happen All the Time?
I am learning about the St Petersburg Paradox https://en.wikipedia.org/wiki/St._Petersburg_paradox - here is my attempt to summarize it: A fair coin is tossed at each stage. The initial stake begins at 2 dollars and is doubled every time tails appears. The first time heads appears, the game ends and the player wins the current stake. As we can see, this game has an infinite expected reward: $$E(X) = \sum_{i=1}^{\infty} x_i \cdot p_i$$ $$E = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \frac{1}{2} \cdot 2 + \frac{1}{4} \cdot 4 + \frac{1}{8} \cdot 8 + \frac{1}{16} \cdot 16 + ... = 1 + 1 + 1 + 1 + ... = \sum_{n=1}^{\infty} 1 = \infty$$ The paradox is that even though the game has an infinite expected reward, in real-life simulations the game usually ends up with a finite reward. Although seemingly counterintuitive, this does seem logical. Even more, we can write computer simulations to see that a large number of games will have finite rewards. My question is about applying this reasoning to the individual paths of the game: each of these infinitely long paths is weighted with an infinitely small probability, and since there are an infinite number of these paths, the expected value would be infinite. So why does this never seem to happen in practice?
Others have commented on the technical details of random walks and Brownian motion, but IMHO the OP's main confusion seems to be in this sentence: "Thus, each of these infinitely long paths are weighted with infinitely small probabilities - and since there are an infinite number of these paths : the expected value would be infinite." This logic is like saying $\frac{1}{\infty} \times \infty = \infty$ (or perhaps $\frac{1}{\infty} \times \infty \times \infty = \infty$ ) when in fact it can be pretty much anything. Let's say you play a modified St Pete game where the payoff after $n$ flips is $a_n$ . Then the expected value is $$ E = \sum_{n=1}^{\infty} {1 \over 2^n} \times a_n $$ The $a_n$ 's can diverge to $\infty$ ("infinitely large payout"), but the sum can still be finite, e.g. when $a_n = (1.5)^n$ , even though you might think "there is always an infinitely small chance to have infinite payout". There is no mystery to this at all. This expected value is an infinite sum, and just like any other infinite series, whether it is finite depends on how fast the terms shrink.
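A few lines of code show the partial sums of the modified game settling at a finite value, in contrast with the classical payoffs $a_n = 2^n$, where every term contributes $1$ and the partial sums grow without bound:

```python
# Expected value of the modified St Petersburg game with payoff a_n = 1.5^n:
# E = sum_{n>=1} (1/2^n) * 1.5^n = sum_{n>=1} (3/4)^n, a convergent
# geometric series with sum 3, despite the unbounded payouts.
partial_sums = []
s = 0.0
for n in range(1, 200):
    s += (1.5 / 2.0) ** n
    partial_sums.append(s)
print(partial_sums[-1])  # -> ~3.0
```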
|probability|brownian-motion|paradoxes|
0
Biased random walk with unequal step size
Consider an asymmetric random walk $(X_n)$ in which the initial point is one ( $X_0 = 1$ ). It increases by $a$ with a probability of 0.3, remains the same with a probability of 0.4, and decreases by $b$ with a probability of 0.3, where $a<b$ and $a,b \in \mathbb{R}$ . Also, suppose this process stops once $X_n \geq \lambda$ . For example, $a=0.3$ , $b=0.5$ , and $\lambda=1.4$ . How can I calculate the probability of this process stopping? That is, what would be the probability of $X_n$ ever getting bigger than $\lambda=1.4$ ? Since the probability of stopping at the second period is $0.3\cdot 0.3$ , it would be greater than zero. Also, since the walk is a supermartingale, by the maximal inequality for supermartingales, \begin{equation*} P(\sup X_n \geq 1.4) \leq \frac{\mathbb{E} X_0 }{1.4} = \frac{1}{1.4}. \end{equation*} Thus, it would not be one. However, I would like to derive a precise value, which is where I am stuck now. I found other problems about asymmetric random walks with equal step sizes, but not with unequal step sizes like this.
Too long for a comment, this answer addresses just this sub-question: would it be a completely different problem in an environment where the step sizes are irrational numbers? Not sure I'd call it "completely" different, but it would be quite different. E.g. the solution technique in @leonbloy's answer (a recurrence with a single discrete index) would not work. A single-index recurrence works because some number of + steps can cancel some number of - steps. But say your steps are $+1, -e$ and the boundary is $\pi^3$ . You might need to set up some 2-dimensional recurrence counting both + and - steps, and you'd need to actually calculate whether $N \times 1 - M \times e \ge \pi^3$ for various $(N,M)$ pairs. You would need both dimensions because you can never return to the same point at a later time, since there is no integer solution to $N \times 1 - M \times e = 0$ . :) Talking of irrationals reminds me of another interesting aspect that comes up when you consider the values as reals.
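For the concrete numbers $a=0.3$, $b=0.5$, $\lambda=1.4$ from the question, a Monte Carlo sketch at least brackets the answer (working in integer units of $0.1$ to avoid floating-point drift; the step cap makes the estimate a slight underestimate):

```python
import random

def hits_barrier(rng, max_steps=500):
    # X_0 = 1; steps +0.3 / 0 / -0.5 with probabilities 0.3 / 0.4 / 0.3;
    # stop once X_n >= 1.4.  Scaled by 10: start 10, steps +3 / 0 / -5,
    # barrier 14, so the state is always an integer.
    x = 10
    for _ in range(max_steps):
        u = rng.random()
        if u < 0.3:
            x += 3
        elif u >= 0.7:
            x -= 5
        if x >= 14:
            return True
    return False  # truncation: long survivors counted as never stopping

rng = random.Random(2024)
trials = 5000
p_hat = sum(hits_barrier(rng) for _ in range(trials)) / trials
print(p_hat)
```

Consistent with the bounds in the question, the estimate lands strictly between the trivial lower bound $0.09$ and the supermartingale bound $1/1.4$.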
|probability|probability-theory|martingales|random-walk|
0
Flow Rate based on Pipe Combination without considering Hydraulic Effects
I would really appreciate help on the following: Let us assume that we have n pumps pumping water through n pipes. The pumps are running at a constant power; it doesn't change. The flow rates in the pipes are observed to be a function of the combination of pipes which are active, due to hydraulic effects. The flow rate in each bore under a combination is lower than the flow rate when that pipe is activated alone. Is there a way to model this behaviour and calculate an expected flow rate for various combinations, based only on data consisting of pipe active status and observed flow rates as follows, without considering the hydraulic effects?

Pipe A | Pipe B | Pipe C | Flow_rate_A | Flow_rate_B | Flow_rate_C
1      | 0      | 0      | 50          | 0           | 0
0      | 1      | 0      | 0           | 55          | 0
0      | 0      | 1      | 0           | 0           | 13
0      | 1      | 1      | 0           | 50          | 10
1      | 1      | 1      | 40          | 46          | 9
If pumps are running at constant power, then they are running at constant flow: pumps operate based on a characteristic pump curve, and their power usage/efficiency is directly related to their flow rate and discharge pressure. If you know the power (and the measurement is accurate...), you can get the flow rate directly from the pump curve. If you're asking what I think you're asking, you can't model the system properly without considering hydraulic effects: they literally dictate how pumps/piping systems operate. To model this system, you need to construct a network model of all pumps/pipelines. There are several software tools out there for performing these types of calculations. The overall principle is that you define the inlet/outlet pressures of the pipelines, and assume that the nodal pressures at any connections between multiple pipes ("tees") are equal. To solve this model, you can then assume a flow rate in each pipe branch, calculate the pressure loss in each pipe using an appropriate pressure-loss correlation, and iterate until the assumed branch flows are consistent with the nodal pressures.
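Here is a toy version of that iterative procedure for just two parallel pipes, using an assumed quadratic loss model $\Delta P = kQ^2$ and made-up numbers (real networks need a proper correlation and a full network solver):

```python
def split_flow(q_total, k1, k2, tol=1e-10):
    """Split q_total between two parallel pipes with losses dP = k*Q^2
    so both branches see the same pressure drop (bisection on Q1)."""
    lo, hi = 0.0, q_total
    q1 = 0.5 * q_total
    while hi - lo > tol:
        q1 = 0.5 * (lo + hi)
        q2 = q_total - q1
        # if pipe 1's drop is larger, it is carrying too much flow
        if k1 * q1 * q1 > k2 * q2 * q2:
            hi = q1
        else:
            lo = q1
    return q1, q_total - q1

q1, q2 = split_flow(100.0, k1=0.01, k2=0.04)
print(q1, q2)  # -> ~66.67 and ~33.33, since Q1/Q2 = sqrt(k2/k1) = 2
```

The bisection mimics the "guess flows, compare pressure drops, adjust" loop described above, just collapsed to a single unknown.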
|mathematical-modeling|fluid-dynamics|
0
Inverse of a special triangular matrix
Let $A$ be an invertible upper triangular matrix with $A_{i,j}=A_{i+1,j+1}$ for all $i,j$ . How can I show that $A^{-1}$ has the same property? That is, that it is an upper triangular matrix with $A^{-1}_{i,j}=A^{-1}_{i+1,j+1}$ for all $i,j$ . I have verified this computationally, but is there a simple proof? Thanks.
The matrix $A$ in question is an example of an upper triangular Toeplitz matrix . As noted in the other answer, it can be expressed as $p(J)$ for some polynomial $p$ in the upper triangular nilpotent Jordan block $J$ . The converse is also true: in fact, a square matrix is an upper triangular Toeplitz matrix if and only if it is a polynomial in $J$ . Now consider your $A$ . By the Cayley–Hamilton theorem, the inverse of $A$ must be a polynomial in $A$ . In turn, $A^{-1}$ is a polynomial in $J$ and hence it is an upper triangular Toeplitz matrix.
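Concretely, writing $A = a_0 I + a_1 J + \dots + a_{n-1}J^{n-1}$ with $a_0 \neq 0$, inverting $A$ is just inverting a power series truncated at $J^n = 0$, and the resulting coefficients again define an upper triangular Toeplitz matrix. A small sketch (the helper name is mine):

```python
def toeplitz_inverse_coeffs(a):
    """Given A = a[0]*I + a[1]*J + ... + a[n-1]*J^(n-1) with a[0] != 0,
    return b with A^(-1) = b[0]*I + ... + b[n-1]*J^(n-1).
    This inverts a power series truncated at J^n, since J^n = 0."""
    n = len(a)
    b = [0.0] * n
    b[0] = 1.0 / a[0]
    for k in range(1, n):
        b[k] = -b[0] * sum(a[j] * b[k - j] for j in range(1, k + 1))
    return b

a = [2.0, 5.0, -1.0, 3.0]   # an arbitrary 4x4 upper triangular Toeplitz A
b = toeplitz_inverse_coeffs(a)

# Check: the truncated convolution of a and b is (1, 0, 0, 0),
# i.e. A * A^(-1) = I; both factors are Toeplitz by construction.
conv = [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(4)]
print(conv)
```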
|linear-algebra|matrices|
0
Definition of 2-chain in Posets
The definition of a 2-chain comes from Fayers' paper at: https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/64468/Fayers%202-chains%3A%20an%20interesting%202020%20Accepted.pdf?sequence=2&isAllowed=y In the paper he defined 2-chains by the following 2 conditions: 1. There is a unique way to write $P$ as the union of 2 chains. 2. $\le$ is maximal subject to 1, i.e. if $\le'$ is a proper refinement of $\le$ , then there is more than one way to write $P$ as the union of 2 $\le'$ -chains. After this main definition he claimed that there will be only 1 maximal element that is greater than every other non-maximal element in a 2-chain (call it super maximal). What about the following "2-chain": we have $x_1<x_2<x_3$ on the left and $y_1<y_2$ on the right? Clearly there is no super maximal element. What is this type of poset called?
The best term I know for a poset of the type you describe is simply "disjoint union of two chains." It's certainly not a $2$ -chain; we can see this by noting the absence of a unique supermaximal element, or more directly by observing that its order relation is not maximal, since we can set $x_3>y_1$ and $y_2>x_2$ while preserving the other conditions. For completeness, here's the picture from the top of page $2$ of the cited paper showing the four isomorphism types of five-element $2$ -chains; note that the extension I describe above is the rightmost example.
|combinatorics|order-theory|
0
Show that ${n}^{n^{n}}>n(n!)((n!)!)$ where n is a positive integer greater than or equal to $3$.
Show that ${n}^{n^{n}}>n(n!)((n!)!)$ where n is a positive integer greater than or equal to $3$ . My attempt: Rewrite ${n}^{n^{n}} =n^{n} n^{n} … n^{n} n^{n} $ and $n(n!)((n!)!)=n(n)(n-1)(n-2)…1(n!)(n!-1)…1$ . Since $n^{n}\gt n!$ , $n^{n} n^{n} … n^{n} n^{n}\gt n!n!…n!n!=n!^{n}$ . I tried to show that $n!^{n}\gt n(n)(n-1)(n-2)…1(n!)(n!-1)…1$ , but I don’t know how. Please give me a hint instead of a solution.
HINT: Take logarithms to conclude $$n^{n^n} \ge ((n!)!)^4,$$ and note that this gives you what you need. En route, use the inequality $$M^M \ge 2^M (M!),$$ which holds for $M \ge 6$ (the remaining small cases $n=3,4,5$ of the original claim can be checked directly). IF YOU NEED MORE ELABORATION: First note that, taking logs of the LHS of the OP: $$\log(n^{n^n}) = n^n \times \log (n).$$ On the other hand, taking logs of $(n!)!$ on the RHS of the OP, for $n \ge 6$ : $$\log((n!)!)$$ $$\le \log((n!)^{n!})$$ $$\le (n!)\times\log(n!)$$ $$\le (n!)\times\log(n^n)$$ $$\le (n!)\times (n \log n)$$ $$\le 2^{-n}n^n \times(n \log n),$$ or in particular, $$\log((n!)!) \le 2^{-n}n^n \times(n \log n). $$ However, for $n \ge 6$ we also have the inequality $$2^{-n} \times n \le \frac{1}{4}.$$ So plugging this into the above gives $$\log((n!)!) \le \frac{n^n \log n}{4}.$$ However, we have already observed $$\log(n^{n^n}) = n^n\log(n),$$ and thus combining the above two relations yields $$\log(n^{n^n}) \ge 4\log((n!)!), $$ or equivalently, $$n^{n^n} \ge ((n!)!)^4.$$ Can you show $$n \times n! \le ((n!)!)^2$$ yourself to get the desired result?
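The small cases excluded by the logarithmic estimate are quick to confirm exactly with integer arithmetic:

```python
from math import factorial

# Exact integer check of n^(n^n) > n * n! * (n!)! for the small cases
# n = 3, 4, 5; Python big ints keep everything exact.
results = {}
for n in range(3, 6):
    lhs = n ** (n ** n)
    rhs = n * factorial(n) * factorial(factorial(n))
    results[n] = lhs > rhs
print(results)  # -> {3: True, 4: True, 5: True}
```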
|algebra-precalculus|discrete-mathematics|
1
Is the Axiom of Completeness logically equivalent to "There is no proper superset of $\mathbb R$ that is an ordered Archimedean field"?
The Axiom of Completeness can be formulated as: There exists a set $R$ such that: $R$ is an ordered Archimedean field; any nonempty subset of $R$ with an upper bound has a least upper bound. Recently, I read something that suggested this is logically equivalent to the following: There exists a set $R$ such that: $R$ is an ordered Archimedean field; no proper superset of $R$ is an ordered Archimedean field. Is the second version logically equivalent to the first? (That would explain why we call it the axiom of completeness and not of least upper bound .) If so, how do we show that? If not, what would be a counterexample? The second version also suggests a dual: There exists a set $Q$ such that $Q$ is an ordered Archimedean field, and no proper subset of $Q$ is an ordered Archimedean field. Update: See https://hsm.stackexchange.com/questions/17261/when-and-why-was-the-concept-of-having-a-least-upper-bound-dubbed-completenes/17263 which seems to show that indeed, the Axiom of Completeness was dubbed "completeness" because of this maximality property.
I've never heard anyone state the axiom of completeness as "there exists a set $R$ such that...". Phrasing it this way makes it sound like an axiom of set theory. Usually the axiom of completeness is an axiom about an ordered Archimedean field, i.e., it just says "any bounded nonempty subset of $R$ has a least upper bound". And then it's a theorem that an ordered Archimedean field satisfying this axiom exists. Also, I have to nitpick about the way you asked the question: both of the statements you write are provable (say in ZF set theory), so yes, they are equivalent over ZF set theory. On the other hand, they are certainly not logically equivalent (i.e., equivalent on the basis of no axioms). I think the proper way to frame the question is as follows: Let $R$ be an ordered Archimedean field. Is it true that $R$ is complete (any bounded nonempty subset of $R$ has a least upper bound) if and only if $R$ is maximal ( $R$ is not a proper subfield of an ordered Archimedean field)? The answer is yes.
|real-analysis|logic|axioms|complete-spaces|
1
Can a decimal that is infinitely repeating in one base be nonrepeating in another?
For instance, can a number like $0.1111111\cdots$ in base $3$ be represented as $0.23515613\cdots$ (non-repeating) in base $8$ ? I imagine the answer would be a resounding NO but it would be interesting to see a proof of why.
No. Let $x=0.\overline{a_{1}a_{2}\dots a_{n}}$ in base $b$ . We show $x$ must be rational: First, $b^{n}x=a_{1}a_{2}\dots a_{n}.\overline{a_{1}a_{2}\dots a_{n}}$ . So, $a_{1}a_{2}\dots a_{n}=b^{n}x-x=x\left(b^{n}-1\right)$ or $x=\frac{a_{1}a_{2}\dots a_{n}}{b^{n}-1}$ . (Since $x$ is the ratio of two integers, $x$ is rational.) Next, since $x$ is rational, $x$ must be repeating in any base $c$ : For a proof sketch, see e.g. this or this . (Those proof sketches are done for base $10$ but the same basic idea applies in any other base. The basic idea is that when we're doing the long division, the remainder at each step is in $\{0,1,2,\dots,c-1\}$ and so must eventually repeat.)
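The long-division argument is easy to run in code: track remainders until one repeats. For instance, $x = 0.\overline{1}$ in base $3$ equals $\frac12$, whose base-$8$ expansion terminates (i.e. ends in repeating $0$s). A sketch (the function name is mine):

```python
def digits(num, den, base):
    """Base-`base` expansion of num/den in [0, 1): returns (preperiod,
    period) digit lists, found by long division with remainder tracking."""
    seen = {}          # remainder -> position where it first occurred
    out = []
    r = num % den
    while r not in seen:
        seen[r] = len(out)
        r *= base
        out.append(r // den)
        r %= den
    start = seen[r]    # the expansion repeats from this digit onward
    return out[:start], out[start:]

print(digits(1, 2, 3))   # -> ([], [1])   repeating '1' in base 3
print(digits(1, 2, 8))   # -> ([4], [0])  0.4 exactly in base 8
print(digits(1, 7, 10))  # -> period 142857, the classic length-6 cycle
```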
|decimal-expansion|number-systems|
0
Matrices - Gaussian Elimination
How do I know which operation should be done first and which second in Gaussian elimination? What is the algorithm for this process, or is there an easy way? Solving a linear system "the normal way" can seem to take less time than the Gaussian elimination method, because there are many different sequences of operations that could be used.
I know this post was made a while ago, but here is my answer for future reference. Gaussian elimination is a nice way of organizing the coefficients and information of a linear system. It gives you a clear start-to-end procedure for evaluating any situation. If someone gave me 25 variables and 25 different equations relating those variables and asked me to solve them using the traditional high-school scaling-and-cancelling approach, I would be confused about where to even start. (I organize my matrices so the final matrix has zeros in the bottom-left corner.) Getting entries that should become zero to be zero is always achievable: divide one row by the value of the entry so that the new entry becomes one, do the same to another row but also multiply it by $-1$ so that its entry in the same column is $-1$ . When you add the two rows, the $+1$ and $-1$ add to make zero. There could be faster ways, though.
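To make the order of operations concrete, here is a minimal sketch of the standard algorithm (with partial pivoting; names are mine): work column by column from the left, and within each column pivot first and eliminate second; back-substitution comes last.

```python
def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # work on an augmented copy [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # 1) pivot: swap up the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # 2) eliminate: zero out everything below the pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # 3) back-substitution, last row first
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

x = gaussian_solve([[2.0, 1.0, -1.0],
                    [-3.0, -1.0, 2.0],
                    [-2.0, 1.0, 2.0]], [8.0, -11.0, -3.0])
print(x)  # -> approximately [2.0, 3.0, -1.0]
```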
|linear-algebra|gaussian-elimination|
0
Solved - Finding the function of $(1+x)(1+x^4)(1+x^{16})(1+x^{64})....$
The question states: For $0<x<1$ , let $f(x) = (1+x)(1+x^4)(1+x^{16})(1+x^{64})(1+x^{256})\cdots$ Find $f(x).$ $f(x)= \prod_{n=0}^{\infty}(1+x^{(4^n)})$ I've tried noticing that it looks similar to the telescoping product $(1+x)(1+x^2)(1+x^4)\cdots$ , but applying the same trick of multiplying by $(1-x)$ doesn't work here. Can someone please guide me in the right direction to solving this problem? Thanks very much. Edit: The original problem was to find $f^{-1} \left ( \frac{8}{5f(\tfrac{3}{8})} \right)$ , as can be seen in this thread. The solution can be found by noticing that $f(x)f(x^2)=(1+x)(1+x^2)(1+x^4)(1+x^8)\cdots = \frac{1}{1-x}$ and by plugging in $x = 3/8$ (credit to John Omelian) we get $f(\frac{3}{8})f(\frac{9}{64}) = \frac{1}{1-\frac{3}{8}} = \frac{8}{5}$ , after which we can divide by $f(\frac{3}{8})$ and apply $f^{-1}$ to both sides to get $f^{-1} \left ( \frac{8}{5f(\tfrac{3}{8})} \right) = \boxed{\frac{9}{64}\:}$ . I apologize for any confusion that I've caused, and thanks to everyone who helped.
$$ (1+x)(1+x^2)(1+x^4) (1+x^8)\dots = \frac{1}{1-x} $$ $$ f(x) := (1+x) (1+x^4)(1+x^{16}) ( 1 + x^{64})\dots $$ $$ g(x) := (1+x^2)(1+x^8)(1+x^{32})(1+x^{128})\dots $$ First, $f(x) g(x) = \dfrac{1}{1-x} $ , since together the two products run over all exponents $2^k$ . Next, $$ g(\sqrt x) = f(x), $$ and since also $g(x) = f(x^2)$ , this gives the functional equation $f(x)f(x^2) = \dfrac{1}{1-x}$ .
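These relations are easy to confirm numerically with truncated products (ten factors are far more than enough, since the later factors are $1$ to machine precision):

```python
def f(x, terms=10):
    # f(x) = prod over n of (1 + x^(4^n))
    p = 1.0
    for n in range(terms):
        p *= 1.0 + x ** (4 ** n)
    return p

def g(x, terms=10):
    # g(x) = prod over n of (1 + x^(2*4^n)), i.e. g(x) = f(x^2)
    p = 1.0
    for n in range(terms):
        p *= 1.0 + x ** (2 * 4 ** n)
    return p

x = 0.375  # = 3/8
print(f(x) * g(x), 1 / (1 - x))   # both ~1.6, confirming f(x)g(x) = 1/(1-x)
print(g(x ** 0.5), f(x))          # equal, confirming g(sqrt(x)) = f(x)
print(f(3 / 8) * f(9 / 64))       # ~8/5, matching the boxed answer 9/64
```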
|sequences-and-series|telescopic-series|
0
Let $f(x)$ be a continuous function on $[0,1]$ such that $f(1)=0$. $\int_0^1 (f'(x))^2.dx=7$ and $\int_0^1 x^2f(x).dx=\frac13$. Find $\int_0^1f(x).dx$
I have a solution for the above question, but I wanted to check whether what I am doing is correct and allowed. The function I am getting satisfies both conditions, but I am still not sure whether my method is valid. I have given my solution below in the answers; the original question was attached as an image.
It's given that $\int_0^1 x^2f(x)\,dx = \frac13$ . Applying integration by parts and using the condition $f(1)=0$ , we arrive at $\int_0^1 x^3f'(x)\,dx = -1$ . Now, for a constant $c$ , consider $\int_0^1(f'(x)-cx^3)^2\,dx \ge 0$ . Expanding, $\int_0^1 (f'(x))^2\,dx + c^2\int_0^1x^6\,dx -2c\int_0^1x^3f'(x)\,dx \ge 0$ . Putting in the given values, $7 + \frac{c^2}7 +2c \ge 0$ . Multiplying the whole expression by 7 gives $c^2+14c+49=(c+7)^2 \ge 0$ , with equality exactly at $c=-7$ . So for $c=-7$ we get $\int_0^1(f'(x)+7x^3)^2\,dx=0$ , and since the integrand is continuous and non-negative, $f'(x)=-7x^3$ . Integrating, we get $f(x)=\frac{-7x^4}4+C$ , where $C$ is a constant of integration; using $f(1)=0$ gives $C=\frac 74$ , so $f(x)=\frac{-7x^4}4 + \frac 74$ . Thus, finally, by definite integration of $f(x)$ , $\int_0^1f(x)\,dx=\frac 75$ .
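The resulting $f(x)=\frac74(1-x^4)$ can be checked exactly with rational arithmetic (helper names are mine):

```python
from fractions import Fraction as F

# f(x) = 7/4 - (7/4) x^4 as a coefficient list: f[k] is the coeff of x^k
f = [F(7, 4), 0, 0, 0, F(-7, 4)]

def integrate01(c):
    # exact integral over [0, 1] of the polynomial sum_k c[k] x^k
    return sum(F(ck) / (k + 1) for k, ck in enumerate(c))

def mul(p, q):
    # polynomial product by convolution of coefficient lists
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += F(a) * F(b)
    return out

fprime = [F(0), 0, 0, F(-7)]             # f'(x) = -7 x^3
x2 = [0, 0, 1]                           # the polynomial x^2

print(integrate01(mul(fprime, fprime)))  # -> 7
print(integrate01(mul(x2, f)))           # -> 1/3
print(integrate01(f))                    # -> 7/5
```

All three given conditions, and the final answer $\frac75$, come out exactly.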
|calculus|functions|definite-integrals|functional-calculus|
0