Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Matrix associativity
Matrix multiplication is associative. However, I am confused by the following issue. Consider the following matrices and vectors: $$W=\left(\begin{array}{ccc} 1&2&3\\ 4&5&6\\ 7&8&9 \end{array}\right), D=\left(\begin{array}{ccc} 1&0&0\\ 0&2&0\\ 0&0&3 \end{array}\right), v= \left(\begin{array}{ccc} 4\\ 5\\ 6 \end{array}\right).$$ It holds that $(Wv)D\neq W(vD)$ . In fact, $$(Wv)D= \left(\begin{array}{ccc} 78\\ 174\\ 270 \end{array}\right)\qquad W(vD)= \left(\begin{array}{ccc} 32\\ 154\\ 366 \end{array}\right).$$ Why do I get this result? Thank you.
By writing $v A$ for a vector $v$ and a matrix $A$ , you actually mean $A v$ . Then $(W v) D = D W v$ and $W (v D) = W D v$ . You can expect that these are not the same. As the comments pointed out, do not write $v A$ for a column vector $v$ . That doesn’t make sense, and it breaks associativity. You can, however, write $v A$ for a row vector $v$ . You can see a column vector as an $n \times 1$ matrix, and a row vector as a $1 \times n$ matrix. Do not mix up the multiplication rules for column and row vectors. (In general, we use column vectors more often.)
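Reading $vD$ as $Dv$, the two bracketings in the question become $D(Wv)$ and $W(Dv)$; a quick NumPy check (NumPy assumed; matrices taken from the question) confirms they differ:

```python
import numpy as np

W = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.diag([1, 2, 3])
v = np.array([4, 5, 6])

dwv = D @ W @ v   # "(Wv)D" read as D(Wv)
wdv = W @ D @ v   # "W(vD)" read as W(Dv)
print(dwv)        # [ 32 154 366]
print(wdv)        # [ 78 174 270]
assert not np.array_equal(dwv, wdv)
```

Both orderings are legal matrix products, but they are different products, so there is no associativity to violate.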
|linear-algebra|
0
Matrix associativity
Matrix multiplication is associative. However, I am confused by the following issue. Consider the following matrices and vectors: $$W=\left(\begin{array}{ccc} 1&2&3\\ 4&5&6\\ 7&8&9 \end{array}\right), D=\left(\begin{array}{ccc} 1&0&0\\ 0&2&0\\ 0&0&3 \end{array}\right), v= \left(\begin{array}{ccc} 4\\ 5\\ 6 \end{array}\right).$$ It holds that $(Wv)D\neq W(vD)$ . In fact, $$(Wv)D= \left(\begin{array}{ccc} 78\\ 174\\ 270 \end{array}\right)\qquad W(vD)= \left(\begin{array}{ccc} 32\\ 154\\ 366 \end{array}\right).$$ Why do I get this result? Thank you.
You can only multiply an $m\times n$ matrix by an $r\times s$ matrix when $n=r$ , in which case you get an $m\times s$ matrix. What associativity of matrix multiplication says is that if $A$ is $m\times n$ , $B$ is $n\times p$ , and $C$ is $p\times q$ , then $$(AB)C = A(BC),$$ with the result of both products being an $m\times q$ matrix. You have two $3\times 3$ matrices $D$ and $W$ , and a $3\times 1$ matrix $v$ . While you can multiply both $D$ and $W$ by $v$ , you cannot multiply $v$ by $D$ , and you cannot multiply $Wv$ (which is a $3\times 1$ matrix) by $D$ . So neither $(Wv)D$ nor $W(vD)$ is a sensible expression: in terms of size, you are trying to perform products of type $$\Bigl( (3\times 3)\cdot(3\times1)\Bigr)\cdot (3\times 3) = (3\times 1)\cdot (3\times 3)$$ and $$(3\times 3)\cdot\Bigl( (3\times 1)\cdot (3\times 3)\Bigr).$$ Neither one of those operations can be performed.
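A linear-algebra library makes the dimension mismatch concrete: with $v$ stored as an explicit $3\times 1$ column, the product $(Wv)D$ simply is not defined (NumPy sketch, using the question's matrices):

```python
import numpy as np

W = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.diag([1, 2, 3])
v = np.array([[4], [5], [6]])   # explicit 3x1 column vector

Wv = W @ v                      # (3,3) @ (3,1) -> (3,1): well defined
try:
    Wv @ D                      # (3,1) @ (3,3): inner dimensions 1 != 3
except ValueError as e:
    print("not defined:", e)
```

The `ValueError` is NumPy's way of reporting exactly the size obstruction described above.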
|linear-algebra|
0
Is there any relationship between these two equal integrals $\int_0^\infty\frac{\sin x}{x + x^2}dx=\int_0^\infty\frac{\pi-2\tan^{-1}x}{2 e^x}dx$
With some intermediate derivation results I found the two integrals are exactly the same. But why? \begin{align} I=\int_0^\infty\frac{\sin x}{x + x^2}\mathrm dx=\int_0^\infty\frac{\pi-2\tan^{-1}x}{2 e^x}\mathrm dx \end{align} Both are equal to \begin{align} I&=\operatorname{Si}(1)\cos(1)-\operatorname{Ci}(1)\sin(1)+\frac{\pi}{2}\left(1-\cos(1)\right)\\ &\approx0.9493467025590832615920\ldots \end{align} where $\operatorname{Ci}$ and $\operatorname{Si}$ are cosine and sine integrals, respectively. Note: The numerator in the integrand is twice the Laplace transform of the $\operatorname{sinc}$ function $$\mathcal{L}\left[\frac{\sin t}{t}\right](x)=\tan^{-1}{\frac1x}=\frac{\pi}{2}-\tan^{-1}x$$
Their equivalence is established below \begin{align} &\int_0^\infty\frac{\pi-2\tan^{-1}x}{2 e^x}dx\\ =& \int_0^\infty \left(\frac\pi2-\tan^{-1}x\right)d(1-e^{-x}) \overset{ibp} = \int_0^\infty \frac{1-e^{-x}}{1+x^2}dx\\ = &\int_0^\infty (1-e^{-x})\left( \int_0^\infty \sin y\ e^{-xy}\ dy \right)dx\\ = & \int_0^\infty \sin y \left(\int_0^\infty e^{-xy}(1-e^{-x})dx\right)dy\\ =& \int_0^\infty\frac{\sin y}{y + y^2}dy \end{align}
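As a numerical sanity check of the integration-by-parts step (and of the common value quoted in the question), one can compare the second integral with its ibp form using SciPy (assumed available):

```python
import numpy as np
from scipy.integrate import quad

# original second integral and the form obtained after integration by parts
i_orig, _ = quad(lambda x: (np.pi - 2 * np.arctan(x)) * np.exp(-x) / 2, 0, np.inf)
i_ibp, _ = quad(lambda x: (1 - np.exp(-x)) / (1 + x**2), 0, np.inf)

print(i_orig, i_ibp)   # both approximately 0.9493467026
assert abs(i_orig - i_ibp) < 1e-6
```

Both integrands are smooth and decaying, so `quad` handles them reliably; the common value matches the $\operatorname{Si}/\operatorname{Ci}$ expression in the question.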
|integration|
1
Hint for showing the identity $(\nabla\psi\cdot\nabla)\nabla\psi^*+(\nabla\psi^*\cdot\nabla)\nabla\psi=\nabla|\nabla \psi|^2$
I need some help to show the following identity. $$(\nabla\psi\cdot\nabla)\nabla\psi^*+(\nabla\psi^*\cdot\nabla)\nabla\psi=\nabla|\nabla \psi|^2$$ My attempt: $$ \partial_i\psi\partial_i\partial_k\psi^*+\partial_i\psi^*\partial_i\partial_k\psi\\ \partial_i(\psi\partial_i\partial_k\psi^*+\psi^*\partial_i\partial_k\psi) $$ The term inside looks like a product rule, except there are two derivatives. I am not sure what I should do next. Any hint will help. Thank you. Update: I have made some progress $$ \partial_i\partial_k(\psi \psi^*)=0\\\partial_i(\psi^*\partial_k\psi+\psi\partial_k\psi^*)=0\\\partial_i\psi^*\partial_k\psi+\psi^*\partial_i\partial_k\psi+\partial_i\psi^*\partial_k\psi+\psi\partial_i\partial_k \psi^*=0 $$ So, $$ \partial_i\psi^*\partial_k\psi+\partial_i\psi^*\partial_k\psi=-(\psi^*\partial_i\partial_k\psi+\psi\partial_i\partial_k \psi^*) $$ So $$ \partial_i(\psi\partial_i\partial_k\psi^*+\psi^*\partial_i\partial_k\psi)\\\implies\partial_i(\partial_i\psi^*\partial_k\psi+\
Using the summation convention, $$\begin{aligned} R H S_i&=\partial_i \frac{\partial \psi}{\partial x_j} \frac{\partial \psi^*}{\partial x_j} \\ & =\frac{\partial \psi}{\partial x_j} \frac{\partial^2 \psi^*}{\partial x_i \partial x_j} +\frac{\partial\psi^*}{\partial x_j} \frac{\partial^2 \psi}{\partial x_i \partial x_j} \\ & =\frac{\partial \psi}{\partial x_j} \partial_j \frac{\partial \psi^*}{\partial x_i} +\frac{\partial \psi^*}{\partial x_j} \partial_j \frac{\partial \psi}{\partial x_i} \\ & = L H S_i \\ \end{aligned}$$
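The componentwise computation above can be verified symbolically for a concrete (hypothetical) test function, say $\psi = x^2y + i\sin(x)\,y^2$ in two variables, with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
psi = x**2 * y + sp.I * sp.sin(x) * y**2   # arbitrary smooth complex test function
psic = sp.conjugate(psi)
coords = (x, y)

gpsi = [sp.diff(psi, c) for c in coords]    # grad psi
gpsic = [sp.diff(psic, c) for c in coords]  # grad psi*
grad_sq = sum(a * b for a, b in zip(gpsi, gpsic))   # |grad psi|^2

for k, ck in enumerate(coords):
    # k-th component of (grad psi . grad) grad psi* + (grad psi* . grad) grad psi
    lhs = sum(gpsi[i] * sp.diff(gpsic[k], ci) + gpsic[i] * sp.diff(gpsi[k], ci)
              for i, ci in enumerate(coords))
    assert sp.expand(lhs - sp.diff(grad_sq, ck)) == 0
print("identity holds componentwise")
```

The identity only uses the product rule and the symmetry of second partials, so the check succeeds for any smooth choice of $\psi$.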
|vector-analysis|quantum-mechanics|
1
Transcendence degree of $F(x)$ over a field $F$
I'm studying Algebraic geometry and have been stuck at processing the concept of Transcendental degree of a polynomial and came across the following argument online. The transcendence degree of the field of rational functions $F(x)$ over a field $F$ is one. This means that the field $F(x)$ can be thought of as a one-dimensional extension of the field $F$ , even though it contains an infinite number of elements. The transcendence degree measures the "size" of the transcendental part of an extension field. In the case of $F(x)$ , the transcendence degree is one because the field is generated by a single transcendental element, namely $x$ . To show that the transcendence degree is one, we need to show that $x$ is algebraically independent over $F$ and that any other transcendental element can be expressed as a rational function in $x$ . The fact that $x$ is transcendental over $F$ is clear because $x$ is not a root of any non-zero polynomial with coefficients in $F$ . Any other transcende
So we have $F \hookrightarrow F[x] \hookrightarrow F(x)$ . Saying that $x \in F(x)$ is algebraically independent over $F$ is, by definition, saying that for every polynomial $P \in F[x]$ , we have $P(x) = 0$ as an element of $F(x)$ if and only if $P = 0$ as an element of $F[x]$ (pay attention to where each element lives). But now $P(x)$ is just the image of $P$ under the inclusion $F[x] \hookrightarrow F(x)$ , so it is clear that $x$ is algebraically free.
|field-theory|extension-field|transcendence-degree|
0
Understanding the Importance of the Borel-Cantelli Lemma
The Borel-Cantelli Lemma states that: Lemma 1: If $\sum_{n=1}^{\infty} P(E_n) < \infty$ then $P(\limsup E_n) = 0$ Lemma 2: If $\{E_n\}$ are independent and $\sum_{n=1}^{\infty} P(E_n) = \infty$ then $P(\limsup E_n) = 1$ As I understand, "lim sup" can be understood in the following ways: For the sequence $a_n = (-1)^n$ , $\limsup_{n \to \infty} a_n = 1$ . This is because $a_n = 1$ for infinitely many $n$ . For the sequence $b_n = \frac{1}{n}$ , $\limsup_{n \to \infty} b_n = 0$ . This is because $b_n \to 0$ as $n$ becomes larger. Going back to the Borel-Cantelli Lemma, here is my understanding of it: Suppose we have a single 6-sided fair die. We denote our event of interest to be rolling a 6. Naturally, there is a $1/6$ probability of rolling a 6. This means that every 6 rolls we would expect to see the number 6 once on average. If we roll the die an infinite number of times, the probability of getting an infinite number of 6's is 0. I suppose one could say that in an infinite number of rolls, we should
Maybe more interesting examples are these. At step $n$ you roll an $n$ -sided die, i.e. take a random integer from $1$ to $n$ , independently. The probability of getting a $1$ at step $n$ is thus $1/n$ . Since $\sum_{n=1}^\infty 1/n = \infty$ , Borel-Cantelli #2 says with probability $1$ you will get infinitely many $1$ 's. Similar to the above, except that you roll an $n^2$ -sided die. Since $\sum_{n=1}^\infty 1/n^2 < \infty$ , Borel-Cantelli #1 says with probability $1$ you will get only finitely many $1$ 's.
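A seeded Monte Carlo sketch of the two examples (NumPy assumed; a finite simulation can only illustrate the contrast, not prove the almost-sure statements):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Step n: draw uniformly from {1,...,n} (resp. {1,...,n^2}); event = drawing a 1.
ones_harmonic = sum(int(rng.integers(1, n + 1) == 1) for n in range(1, N + 1))
ones_square = sum(int(rng.integers(1, n * n + 1) == 1) for n in range(1, N + 1))

print(ones_harmonic)   # keeps growing: expected count is H_N = sum 1/n ~ ln N
print(ones_square)     # stays small: expected count is below sum 1/n^2 = pi^2/6
```

Rerunning with larger `N` makes the first counter creep upward logarithmically while the second essentially stops moving, which is the finite-horizon shadow of the two lemmas.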
|probability|
0
Distance between two cities
In a certain exercise I am asked to find the distance between Madrid and Hong Kong. I have been given the coordinates (latitude and longitude), but I don't know how to proceed. Madrid: $40.50^\circ N, 3.67^\circ W$ Hong Kong: $22.28^\circ N, 114.17^\circ E$ I have been given a formula involving $\arccos$ , but I don't know what to do.
To calculate the distance between two points on the Earth given their latitude and longitude, you can use the haversine formula or the spherical law of cosines. Since you mentioned a formula regarding arccos (the inverse cosine function), we'll use the spherical law of cosines for this calculation. The spherical law of cosines is given by: $$d = r\arccos((\sin(lat_1)\sin(lat_2)) + (\cos(lat_1)\cos(lat_2)\cos(lon_2 - lon_1)))$$ Where: $d$ is the distance between the two points (along the surface of the sphere), $lat_1$ and $lat_2$ are the latitudes of the two points in radians, $lon_1$ and $lon_2$ are the longitudes of the two points in radians, $r$ is the radius of the Earth ( $6371$ km) Given the coordinates of Madrid and Hong Kong: Madrid: $40.50^\circ N, 3.67^\circ W$ Hong Kong: $22.28^\circ N, 114.17^\circ E$ First, convert the coordinates from degrees to radians using the conversion $$\theta_{\text{radians}} = \theta_{\text{degrees}} \cdot \frac{\pi}{180}$$ Madrid latitude in radi
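The computation sketched above, written out in Python (standard library only; west longitude entered as negative):

```python
import math

R = 6371.0  # mean Earth radius in km

# coordinates in decimal degrees, converted to radians
lat1, lon1 = math.radians(40.50), math.radians(-3.67)    # Madrid (3.67 W)
lat2, lon2 = math.radians(22.28), math.radians(114.17)   # Hong Kong (114.17 E)

# spherical law of cosines
d = R * math.acos(math.sin(lat1) * math.sin(lat2)
                  + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
print(round(d), "km")   # roughly 10,500 km
```

The sign convention matters: treating the western longitude as positive would silently compute the wrong central angle.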
|geometry|
1
Transcendence degree of $F(x)$ over a field $F$
I'm studying Algebraic geometry and have been stuck at processing the concept of Transcendental degree of a polynomial and came across the following argument online. The transcendence degree of the field of rational functions $F(x)$ over a field $F$ is one. This means that the field $F(x)$ can be thought of as a one-dimensional extension of the field $F$ , even though it contains an infinite number of elements. The transcendence degree measures the "size" of the transcendental part of an extension field. In the case of $F(x)$ , the transcendence degree is one because the field is generated by a single transcendental element, namely $x$ . To show that the transcendence degree is one, we need to show that $x$ is algebraically independent over $F$ and that any other transcendental element can be expressed as a rational function in $x$ . The fact that $x$ is transcendental over $F$ is clear because $x$ is not a root of any non-zero polynomial with coefficients in $F$ . Any other transcende
In the field $F(x)$ , the symbol $x$ is a formal variable with no presupposed relations. I don't think it is completely obvious that $x$ is then transcendental over $F$ , but as in @NaNoS' answer it suffices to check that the obvious map $F[x] \to F(x)$ taking a polynomial $P(x)$ to the rational function $P(x)/1$ is injective. Then any non-zero polynomial $P$ with $F$ coefficients evaluated at the element $x = x/1$ of $F(x)$ equals $P(x)/1$ , and by the previous observation is non-zero. In $\mathbb{Q}(\sqrt{2})$ , $\sqrt{2}$ may or may not be a formal variable: you could think of this field as either (1) the subfield of say $\mathbb{R}$ or $\mathbb{C}$ generated by $\mathbb{Q}$ and an actual element $\sqrt{2}$ or (2) shorthand for the quotient ring $\mathbb{Q}[x]/(x^2 - 2)$ , with the image of $x$ being named $\sqrt{2}$ , which happens to be a field (since $x^2 - 2$ is irreducible over $\mathbb{Q}$ ). It is also useful to know that these two definitions produce isomorphic objects.
|field-theory|extension-field|transcendence-degree|
0
Understanding notation in Taylor's Formula for Stochastic Differential Equation
For the following equation, I don't understand how the terms $df(X_t)$ and $f'(X_t)dX_t$ differ. $$df(X_t) = f'(X_t)dX_t + 1/2 f''(X_t)(dX_t)^2$$ As I understand, both terms refer to the change in the function $f$ with respect to $X_t$ . Where am I going wrong? Thanks in advance.
In stochastic calculus, people tend to write $d$ as a sort of inverse operator to integration. I think this is easier to explain through an example. If someone writes $$ dX_t = \alpha(t) dt + \beta(t) dB_t $$ this is a short way to render the integral formulation (obtained by integrating in time on both sides) $$ X_t = X_0 + \int_0^t \alpha(s) ds + \int_0^t \beta(s) dB_s, $$ where $(B_t)_{t\ge 0}$ is a Brownian motion. There are two reasons for this, the first one being conciseness. The second one is that this is highly convenient when using Ito's formula. Indeed, if you know that $$ dX_t = \alpha(t) dt + \beta(t) dB_t$$ then Ito's formula reads $$df(X_t) = f'(X_t) dX_t + \frac12 f''(X_t) d\langle X \rangle_t, \tag{1} $$ where $\langle X \rangle$ is the quadratic variation of the process $(X_t)_{t \ge 0}$ and from the formula for $dX_t$ above we get $$df(X_t) = f'(X_t) \alpha(t) dt + f'(X_t) \beta(t) dB_t + \frac12 f''(X_t) d\langle X \rangle_t, $$ where I replaced $dX_t$ wit
|taylor-expansion|stochastic-differential-equations|
1
Inverse of a 2x2 quaternion matrix: whence this formula?
From here , the inverse of a $2\times2$ quaternion matrix is given by $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} =\frac{1}{\mathrm{Nrd}} \begin{bmatrix} |d|^2\overline{a}-\overline{c}d\overline{b} & |b|^2\overline{c}-\overline{a}b\overline{d} \\ |c|^2\overline{b}-\overline{d}c\overline{a} & |a|^2\overline{d}-\overline{b}a\overline{c} \end{bmatrix} \tag{1}$$ where the so-called "reduced norm" of the matrix is given by $$ \mathrm{Nrd}\Big(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\Big)=|a|^2|d|^2+|b|^2|c|^2-2\mathrm{Re}(\overline{b}a\overline{c}d). \tag{2}$$ This answer surprised me: I, proceeding like the OP, found two formulas involving noncommutative rational functions (so to speak) depending on whether the diagonal or antidiagonal entries were both nonzero. I was not expecting this. But the user that posted this is long gone and gave no explanation. Via searching, I found the reduced norm $\mathrm{Nrd}(a)$ of an element $a\in A$ of a central simple algebra $A$ of dimensio
$ \def\mc#1{\left[\begin{array}{r|rrr}#1\end{array}\right]} \def\m#1{\left[\begin{array}{r}#1\end{array}\right]} \def\qif{\quad\iff\quad} \def\M{{\cal M}} $ Transform each component quaternion into its $4\times 4$ matrix representation, e.g. $$\eqalign{ a = \mc{a_1\\a_2\\a_3\\a_4} \qif &A = \mc{ a_1 & -a_2 & -a_3 & -a_4 \\\hline a_2 & a_1 & -a_4 & a_3 \\ a_3 & a_4 & a_1 & -a_2 \\ a_4 & -a_3 & a_2 & a_1 }, \quad A^{-1} = \frac{A^T}{|a|^2} \\ }$$ Construct the equivalent block matrix and calculate its inverse $$\eqalign{ &M = &\m{a&b\\c&d} &\qif \M = \mc{A&B\\\hline C&D} \\ &&\qquad M^{-1} &\qif \M^{-1} \\ }$$ via the Schur complements of $\M$ . This will recover the formula of interest. For quaternionic matrices larger than $2\times 2$ , the block matrix approach remains valid for numerical purposes, but the corresponding symbolic formulas quickly become intractable.
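The block-matrix verification can be scripted: the sketch below (NumPy assumed) builds the $4\times 4$ left-multiplication representation, evaluates formula (1) entrywise for random quaternions, and checks that the $8\times 8$ real representations multiply to the identity:

```python
import numpy as np

def L(q):
    """4x4 real left-multiplication matrix of quaternion q = (q1, q2, q3, q4)."""
    a1, a2, a3, a4 = q
    return np.array([[a1, -a2, -a3, -a4],
                     [a2,  a1, -a4,  a3],
                     [a3,  a4,  a1, -a2],
                     [a4, -a3,  a2,  a1]])

def mul(p, q):   # Hamilton product via the representation
    return L(p) @ q

def conj(q):     # quaternion conjugate
    return q * np.array([1, -1, -1, -1])

def n2(q):       # squared norm |q|^2
    return float(q @ q)

rng = np.random.default_rng(7)
a, b, c, d = rng.standard_normal((4, 4))   # four random quaternions

# reduced norm (2) and the entrywise inverse formula (1)
nrd = n2(a) * n2(d) + n2(b) * n2(c) - 2 * mul(mul(conj(b), a), mul(conj(c), d))[0]
inv = [[(n2(d) * conj(a) - mul(mul(conj(c), d), conj(b))) / nrd,
        (n2(b) * conj(c) - mul(mul(conj(a), b), conj(d))) / nrd],
       [(n2(c) * conj(b) - mul(mul(conj(d), c), conj(a))) / nrd,
        (n2(a) * conj(d) - mul(mul(conj(b), a), conj(c))) / nrd]]

# 8x8 real representation: quaternion-matrix product <-> real block product
M = np.block([[L(a), L(b)], [L(c), L(d)]])
Minv = np.block([[L(inv[0][0]), L(inv[0][1])], [L(inv[1][0]), L(inv[1][1])]])
assert np.allclose(M @ Minv, np.eye(8))
print("formula (1) inverts M")
```

Since $q \mapsto L(q)$ is an algebra homomorphism, checking the $8\times 8$ product is equivalent to checking the quaternionic identity $MM^{-1}=I$.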
|linear-algebra|abstract-algebra|matrices|quaternions|noncommutative-algebra|
1
compute $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k}$.
Find $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k}$ as a function of $n$ . I have done it in the following way: Notice first that $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k} = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k}$ because for $k > n$ we have ${n \choose k} = 0$ . Now, using the binomial theorem: $$(1+x)^n =\sum_{k=0}^{\infty}{n \choose k}x^k $$ $$(1-x)^n =\sum_{k=0}^{\infty}(-1)^k{n \choose k}x^k $$ $$\int(1-x)^ndx =\sum_{k=0}^{\infty}[(-1)^k{n \choose k}\int x^kdx] $$ $$ - \frac{(1-x)^{n+1}}{n+1} =\sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k}x^{k+1}$$ Now let $x = 1$ and we get: $$\sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k} = - \frac{(1-1)^{n+1}}{n+1} = 0$$ However, the correct solution is $\frac{1}{n+1}$ and not $0$ . I can't find my mistake; can someone help me find it, please? Thank you! Edit: I have seen different ways to prove the result, but I would still like to know where my mistake is so that I know how to avoid it next time.
I think you forgot the constant of integration… $$\int(1-x)^ndx+C=\sum_{k=0}^{\infty}[(-1)^k{n \choose k}\int x^kdx] $$ Then let $x=0$ first; the correct value follows.
|combinatorics|binomial-coefficients|binomial-theorem|
0
compute $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k}$.
Find $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k}$ as a function of $n$ . I have done it in the following way: Notice first that $\sum_{k=0}^{n} \frac{{(-1)^k}}{k+1}{n \choose k} = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k}$ because for $k > n$ we have ${n \choose k} = 0$ . Now, using the binomial theorem: $$(1+x)^n =\sum_{k=0}^{\infty}{n \choose k}x^k $$ $$(1-x)^n =\sum_{k=0}^{\infty}(-1)^k{n \choose k}x^k $$ $$\int(1-x)^ndx =\sum_{k=0}^{\infty}[(-1)^k{n \choose k}\int x^kdx] $$ $$ - \frac{(1-x)^{n+1}}{n+1} =\sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k}x^{k+1}$$ Now let $x = 1$ and we get: $$\sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}{n \choose k} = - \frac{(1-1)^{n+1}}{n+1} = 0$$ However, the correct solution is $\frac{1}{n+1}$ and not $0$ . I can't find my mistake; can someone help me find it, please? Thank you! Edit: I have seen different ways to prove the result, but I would still like to know where my mistake is so that I know how to avoid it next time.
The problem is, when you integrated both sides, you forgot the constant of integration. Instead of indefinite integration, you should apply the operator $\int_0^x$ to both sides. This does what you want on the right hand side, because $$ \int_0^x \chi^k d\chi= \frac {x^{k+1}}{k+1} $$ On the LHS, you will have $\int_0^x(1-\chi)^nd\chi$ which will provide the missing $1/(n+1)$ .
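A quick exact check of the closed form $\frac{1}{n+1}$ for small $n$ (Python standard library, exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

def S(n):
    """Exact value of sum_{k=0}^{n} (-1)^k / (k+1) * C(n, k)."""
    return sum(Fraction((-1) ** k * comb(n, k), k + 1) for k in range(n + 1))

for n in range(1, 10):
    assert S(n) == Fraction(1, n + 1)
print("S(n) = 1/(n+1) for n = 1..9")
```

Using `Fraction` avoids floating-point noise, so the comparison with $1/(n+1)$ is exact.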
|combinatorics|binomial-coefficients|binomial-theorem|
1
Why dual norm inequality is tight in $\mathbb{R}^{n}$
Let $\lVert \cdot \rVert $ be any norm on $\mathbb{R}^n$ I want to prove that For any $x$ , there is some $z$ such that $z^{T}x = \lVert x \rVert \lVert z \rVert _{*}$ where $\lVert y \rVert _{*} = \sup \{y^{T}x : \lVert x \rVert \leq 1 \}$ is dual norm of norm $\lVert \cdot \rVert$ I heard that James' theorem in functional analysis gives desired result, but I don't have any knowledge about functional analysis. Can I prove it by using more elementary results in $\mathbb{R}^{n}$ ?
First let $x \in \mathbb{R}^n$ s.t. $0 < \|x\| \leq 1$ . Then there exists some scaling factor $\alpha\geq 1$ such that $||\alpha x||=1$ . We can thus write $$ y^T\alpha x = \alpha y^Tx \geq y^Tx $$ which means that $$ ||y||_*=\sup_x\{y^Tx : ||x||\leq 1\}=\sup_x \{y^Tx : ||x||=1\}. $$ Since the set $S\triangleq \{x : ||x||=1\}$ is bounded and closed, it is compact (Heine-Borel theorem). Now, since $y^Tx$ is continuous on this set, we know from the extreme value theorem that $\exists q \in S$ s.t. $y^Tq=\sup_x\{y^Tx : x\in S\}$ . In other words, there exists some $q$ with unit norm s.t. $y^Tq=||y||_*=||y||_*||q||$ . Note that this equation remains valid if $q$ is rescaled.
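For one norm where the maximizer $q$ can be written down explicitly: with $\|\cdot\| = \|\cdot\|_1$ the dual norm is $\|\cdot\|_\infty$, and $z = \operatorname{sign}(x)$ attains the equality $z^Tx=\|x\|_1\|z\|_\infty$ (NumPy sketch with an arbitrary example vector):

```python
import numpy as np

x = np.array([1.5, -2.0, 0.0, 3.0])   # arbitrary example vector
z = np.sign(x)                        # attains the dual-norm bound for l1

lhs = z @ x                                  # z^T x
rhs = np.abs(x).sum() * np.max(np.abs(z))    # ||x||_1 * ||z||_inf
assert np.isclose(lhs, rhs)
print(lhs)   # 6.5
```

The same pattern (pick the maximizer by hand) works for $\ell_\infty$ and $\ell_2$; the compactness argument above is what guarantees a maximizer exists for an arbitrary norm.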
|functional-analysis|normed-spaces|convex-analysis|convex-optimization|duality-theorems|
0
Is the vector function $\mathbf r(t) = \langle t^3, t^3 \rangle$ smooth at $t = 0$?
This was confusing me when learning about curvature and smoothness. The condition for smoothness on interval $I$ is given as: $\mathbf r'$ is continuous; $\mathbf r'(t) \neq \mathbf 0$ . In this particular example, the space curve of this parametrization is just a straight line passing through the origin, but it follows that $$\mathbf r'(t) = \langle 3t^2, 3t^2 \rangle$$ thus $\mathbf r'(0) = \mathbf 0$ , meaning that $\mathbf r$ is not smooth at the origin. However, the space curve seems to be perfectly "smooth" and differentiable. What am I missing here? (Also, by this definition the parametrization $\mathbf s(t) = \langle t, t \rangle$ with the same space curve is smooth at the origin. Why could they be different?)
tl; dr: The term smooth gets used inconsistently. As elsewhere in mathematics, the words-to-concepts relation is not a mapping; the same word can mean multiple things. To be careful with terminology here, it might be best to say the mapping $r(t) = (t^{3}, t^{3})$ is infinitely differentiable , but not regular at $t = 0$ . In differential topology, the term smooth mapping generally connotes some degree of differentiability, often infinite differentiability. In differential topology and differential geometry, the term smooth manifold connotes a manifold with a smooth atlas (whose overlap maps are smooth). A smooth submanifold is a smooth manifold together with a smooth embedding into, e.g., a Cartesian space. In differential geometry, one sometimes uses the term regular mapping to connote a smooth immersion (a smooth mapping $f:M \to N$ from a smooth manifold into a manifold of equal or larger dimension whose differential $Df$ has rank $\dim M$ at each point). At other times (::cough::
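The distinction is easy to see in computation: both parametrizations are infinitely differentiable, but only the second is regular at $t=0$ (SymPy sketch):

```python
import sympy as sp

t = sp.Symbol('t')
r = sp.Matrix([t**3, t**3])   # infinitely differentiable, but r'(0) = 0
s = sp.Matrix([t, t])         # regular parametrization of the same line

assert r.diff(t).subs(t, 0) == sp.Matrix([0, 0])   # fails the regularity test
assert s.diff(t).subs(t, 0) == sp.Matrix([1, 1])   # s'(t) never vanishes
print("r is not regular at t = 0; s is")
```

Regularity is a property of the parametrization, not of the underlying point set, which is why the same straight line can have both a regular and a non-regular parametrization.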
|vectors|vector-analysis|
0
Understanding notation in Taylor's Formula for Stochastic Differential Equation
For the following equation, I don't understand how the terms $df(X_t)$ and $f'(X_t)dX_t$ differ. $$df(X_t) = f'(X_t)dX_t + 1/2 f''(X_t)(dX_t)^2$$ As I understand, both terms refer to the change in the function $f$ with respect to $X_t$ . Where am I going wrong? Thanks in advance.
Informally, using such heretical assumptions as $dt\approx 0$ , one could write $$ df(X_t)=f(X_{t+dt})-f(X_t)=f(X_t+dX_t)-f(X_t) $$ Obviously, the Taylor expansion gets applied to the first term, where the derivatives of the smooth function $f$ then occur. $$ dX_t=a_t\,dt+b_t\,dW_t $$ contains terms on different, not directly comparable scales. However, the second term has, with minimal exceptions, the larger numerical value, of scale $\sqrt{dt}$ . This means that in the square $(dX_t)^2$ the square of this term has scale $dt$ , which accumulates in integration to a non-vanishing contribution.
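The scale argument can be illustrated numerically: over $[0,T]$ the increments $dW_t$ nearly cancel in the sum, while their squares accumulate to $T$ (seeded NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments on [0, T]

print(dW.sum())        # order sqrt(T): fluctuates around 0
print(np.sum(dW**2))   # quadratic variation: concentrates near T = 1
```

This is exactly why the $(dX_t)^2$ term survives in Itô's formula: the squared increments sum to a deterministic, non-vanishing limit.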
|taylor-expansion|stochastic-differential-equations|
0
$\dim(U_1 \cap U_2 \cap U_3) = n − 3$, Give a proof or find a counterexample.
Suppose that $U_1, U_2, U_3$ are three distinct subspaces of dimension $n-1$ of a vector space of dimension $n$ , where $n \gt 3$ . Give a proof or find a counterexample for $\dim(U_1 \cap U_2 \cap U_3) = n − 3$ . My attempt: first I showed that $\dim(U_1 \cap U_2 \cap U_3) \geq n − 3$ , but I do not know how to handle the reverse inequality. However, for $n=3$ I found a counterexample, which I mention below. $U_1 = \{(x,x,y):x,y\in \mathbb{R}\}$ , $U_2 = \{(x,y,x):x,y\in \mathbb{R}\}$ , $U_3 = \{(y,x,x):x,y\in \mathbb{R}\}$ , $U_1 \cap U_2 \cap U_3 = \{(x,x,x):x\in \mathbb{R}\}$ Thank you for your time.
To construct such $(U_i)_{1}^{3}$ , we can first find an $(n-2)$ -dimensional subspace $U$ and then pick three pairwise independent vectors $v_i$ in a complement of $U$ , which is feasible since the dimension of the complement is $2$ . Then $(\operatorname{span}(U,v_i))_{1}^{3}$ is the counterexample you want.
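A concrete instance for $n=4$, with the hyperplanes written as kernels of functionals so that the intersection dimension is just $n$ minus a matrix rank (NumPy sketch; here $U = \operatorname{span}(e_3, e_4)$ is contained in all three):

```python
import numpy as np

# Three distinct hyperplanes ker(f_i) whose functionals are linearly dependent:
F = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [1., 1., 0., 0.]])
n = 4
dim_int = n - np.linalg.matrix_rank(F)   # dim(U1 ∩ U2 ∩ U3)
print(dim_int)   # 2 = n - 2, not n - 3 = 1
```

Since the three functionals span only a 2-dimensional space, the intersection has dimension $n-2$, disproving the claimed equality.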
|linear-algebra|vector-spaces|examples-counterexamples|
0
What is $\sqrt{-1}$? circular reasoning defining $i$.
I am reading complex analysis by Gamelin and I am having trouble understanding the square root function. The principal branch of $\sqrt{z}$ ( $f_1(z)$ ) is defined as $|z|^{\frac 1 2} e^{\frac{i \operatorname{Arg}(z)}{2}}$ for $z \in \mathbb{C} - (-\infty,0]$ and $f_2(z)$ is defined as $-f_1(z)$ where $\operatorname{Arg}(z) \in (-\pi , \pi]$ By this definition, what is $\sqrt{-r}$ where $r$ is a non negative real number? Of course the answer is $i\sqrt{r}$ but the definition of the square root function doesn't apply here. What is $i$ then? $i:=\sqrt{-1}$ but how? We didn't define the square root function for negatives but we still use $i$ to define complex numbers. Shouldn't the definition of the square root function be taught before defining $i$ ? And defining $i^2=-1$ without defining $i$ as either $\pm \sqrt{-1}$ is also very strange, because we want the extended function to be continuous and choosing $\pm \sqrt{-1}$ will make this impossible although $i$ must be one of the two (after defining the s
Understanding the Square Root Function in Complex Analysis The square root function on the real numbers is straightforward: for $ x \in \mathbb{R}^+$ , $\sqrt{x}$ is the positive number which, when squared, returns $x$ . Extending this function to the complex plane introduces several complications and requires careful definition to avoid ambiguity, primarily because every non-zero complex number has two square roots. The principal branch of the square root function, $\sqrt{z}$ , is defined to be single-valued. By restricting the argument of $z$ to $(-\pi, \pi]$ and choosing the square root with a non-negative real part, we uniquely define $\sqrt{z}$ for all $z$ not on the non-positive real axis. Understanding $\sqrt{-r}$ with the Principal Square Root For any non-negative real number $r$ , $\sqrt{-r}$ is not within the realm of real numbers, since no real number squared gives a negative result. We must extend to the complex plane, and by the definition you've encountered in Gameli
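Python's `cmath.sqrt` implements this principal branch (branch cut on the negative real axis), so the value of $\sqrt{-r}$ can be checked directly:

```python
import cmath

z = -4 + 0j                 # Arg(z) = pi, the boundary case of the branch
w = cmath.sqrt(z)           # principal branch: |z|^(1/2) * e^{i Arg(z)/2}
print(w)                    # 2j, i.e. i * sqrt(4)
assert abs(w - 2j) < 1e-12
assert abs(w * w - z) < 1e-12
```

With $\operatorname{Arg}(-4)=\pi$, the formula gives $\sqrt{4}\,e^{i\pi/2} = 2i$, matching the conventional $\sqrt{-r} = i\sqrt{r}$.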
|complex-analysis|algebra-precalculus|complex-numbers|continuity|riemann-surfaces|
0
Why doesn't the linear regression preserve the standard deviation?
If we model $Y = \beta X$ , we can estimate $\beta$ to minimize $$\sum (Y_i - \beta X_i)^2$$ Taking derivatives and solving for $0$ , we get $\sum 2\beta X_i^2 - 2Y_iX_i = 0 \implies \beta = \frac{\sum Y_i X_i}{\sum X_i^2}.$ Why does our best fit for $Y = \beta X$ not even satisfy that it has the same variance?
Depending on the measured data -- for example, if it forms a "dot cloud" at a distinct distance from the origin -- it might be reasonable to take as the balance line the line from the origin through the cloud's barycentre $(\bar{x}, \bar{y})$ , instead of a "best fit" in the LS sense, which uses the vertical distances of the observed points to the resulting line as its objective. The slope of this line through the origin is $\displaystyle\beta=\frac {\sum_k y_k}{\sum_k x_k}=\frac{\bar{y}}{\bar{x}}$ . (This conforms to what I remember from studies decades ago.)
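A small simulation contrasting the two slopes, and showing that the least-squares fit does not reproduce the variance of $Y$ (NumPy sketch with synthetic, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, size=200)
y = 2 * x + rng.normal(0, 1, size=200)   # true slope 2 plus noise

beta_ls = (y @ x) / (x @ x)      # least-squares slope through the origin
beta_bc = y.mean() / x.mean()    # line through origin and barycentre

print(beta_ls, beta_bc)                  # both close to 2, but not identical
print(np.var(beta_ls * x), np.var(y))    # fitted values have smaller variance
```

The fitted values $\hat y = \beta x$ carry only the variance explained by $x$; the noise variance is missing, which is precisely why regression does not preserve the standard deviation of $Y$.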
|statistics|regression|
0
Covariance Operator corresponding to multivariate covariance function
The usual definition of a covariance operator on $L_2(D)$ is: $$ C : L_2(D) \to L_2(D), \qquad (C \psi)(x) = \int_D c(x,y) \psi(y) dy \qquad \forall x\in D, ~~\psi \in L_2(D), $$ where $c(x,y): D \times D \to \mathbb{R}$ is the corresponding covariance function. The operator norm is then $$ \|C\|= \sup_{\|\psi\|_{L_2}=1} \sqrt{\int_D \left(\int_D c(x,y)\psi(y) dy\right)^2dx} $$ I am interested in the covariance operator corresponding to a multivariate covariance function. For example in this paper the author describes the multivariate covariance function $K: D \times D \to \mathbb{R}^p \times \mathbb{R}^p$ , so now we have a matrix valued covariance function $K(x,y) = [c_{ij} (x,y)]$ where $c_{ij}$ is a univariate covariance function defined as above. My question is about the operator corresponding to $K$ , and the definition of the operator norm. Is it correct to write this as: $$ C_K: (L_2(D))^p \to (L_2(D))^p, \qquad (C_K \psi)(x) =\int_{D} (K(x,y))^T \psi(y) dy, \qquad \forall x \i
I think you are just trying to write out the general operator norm in this setting. (I would say that the choice of $p$ for the dimension is unfortunate, since it is too reminiscent of $L_p(D)$ . I will call it $n$ from now on.) In your univariate case, your concrete formula is just $$ \|C\| = \sup_{\|\psi\|_{L_2(D)} = 1} \|C\psi\|_{L_2(D)}. $$ Therefore in the multivariate case, the norm should be $$ \|C_K\| = \sup_{\|\psi\|_{L_2(D)^n} = 1} \|C_K\psi\|_{L_2(D)^n}. $$ So it all boils down to how you define your norm on $L_2(D)^n$ . The norm on $L_2(D)^n$ comes from $L_2(D)$ , just like a norm on ${\mathbb R}^n$ comes from the absolute value norm $|\cdot|$ on ${\mathbb R}$ . You can do $L^1, L^\infty$ or $L^2$ , and they are all equivalent. So you can define the norm of $\psi = (\psi_1,\dots, \psi_n)\in L_2(D)^n$ as any of the following \begin{gather*} \sum_{i=1}^n \|\psi_i\|_{L_2(D)} = \sum_{i=1}^n \sqrt{\int_D \psi_i^2(x)\,dx},\\ \sqrt{\sum_{i=1}^n \|\psi_i\|^2_{L_2(D)}} = \sqrt{\sum_{i
|functional-analysis|statistics|reference-request|stochastic-processes|covariance|
1
Proof for Mean of Geometric Distribution
I am studying the proof for the mean of the Geometric Distribution http://www.randomservices.org/random/bernoulli/Geometric.html (The first arrow on Point No. 8 on the first page). It seems to be an arithmetico-geometric series (which I was able to sum using http://en.wikipedia.org/wiki/Arithmetico-geometric_sequence#Sum_to_infinite_terms ) Here are the steps I took to arrive at the result: Mean of Geometric Distribution: $E(N) = \sum_{n=1}^{\infty} [n p (1 - p)^{n-1}]=S_n$ An arithmetico-geometric series is $a + (a + d)r + (a + 2d)r^2+\cdots$ $E(N)$ is then an arithmetico-geometric series with first term: $a= p$ common difference $d= p$ common ratio $r= (1 - p)$ $1 - r = 1 - (1 - p) = 1 - 1 + p = p$ The formula for the sum to infinity of an arithmetico-geometric series is (from the link above): $$ \lim_{n\to\infty} S_{n}= \frac{a}{(1-r)} + \frac{rd}{(1 - r)^2} = \frac{p}{p} + \frac{(1-p)p}{p^2} = \frac{p^2 + p - p^2}{p^2} = \frac{p}{p^2} = \frac{1}{p}$$ Note: I have not checked the pr
Khan Academy used the approach I thought of in the question above. Though they have done it in a much more intuitive way. The most important thing is that it shows that the approach was valid.
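The convergence of the arithmetico-geometric partial sums to $1/p$ is easy to check exactly (Python standard library, $p = 1/6$ as in a fair-die example):

```python
from fractions import Fraction

p = Fraction(1, 6)   # e.g. probability of rolling a 6
# partial sums of E[N] = sum_{n>=1} n p (1-p)^(n-1)
partial = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 400))

print(float(partial))   # approaches 1/p = 6
assert abs(float(partial) - 6) < 1e-9
```

The tail beyond $n=400$ is of order $(5/6)^{400}$, so the partial sum already agrees with $1/p$ to far more than floating-point precision.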
|expectation|
0
First order ordinary differential equation issue
Consider the following initial value problem for $\dot{x} =F(t,x)$ : \begin{equation} \dot{x} = x^{1/3}, \quad x(0)=0 \end{equation} Now, the general solution reads \begin{equation} x(t)= \left[ \frac{2}{3} (t+ C) \right]^{3/2} \end{equation} with $C$ being an arbitrary constant. If $C=0$ the initial condition is satisfied and the particular solution reads \begin{equation} x(t)= \left[ \frac{2}{3} t \right]^{3/2} \end{equation} However, the slides mention that two additional solutions exist for $t \ge 0$ , namely $x(t)= - \left[ \frac{2}{3} t \right]^{3/2}$ and $x(t)=0$ . Why is that? We can invoke the existence and uniqueness theorem, where $\frac{\partial }{\partial x} x^{1/3} = x^{-2/3}$ is not defined at $x=0$ , implying that the solution of the IVP is not unique. But where do these other 2 solutions come from?
In addition, you have this one-parameter family of solutions (and its opposite) for $T\geq 0$ : $$ x_T(t)=\begin{cases} 0 & \text{ if } t < T,\\ \left[ \frac{2}{3} (t-T) \right]^{3/2} & \text{ if } t \geq T. \end{cases} $$ You can easily check that $t\mapsto x_T(t)$ is continuously differentiable at $t=T$ and verifies the ode.
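Both the $3/2$-power solution and the zero solution can be checked against the ODE symbolically (SymPy sketch; the glued family $x_T$ then follows by the time shift $t \mapsto t - T$):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
x_pow = (sp.Rational(2, 3) * t) ** sp.Rational(3, 2)   # the 3/2-power solution

# x' - x^(1/3) vanishes identically for the power solution ...
assert sp.simplify(x_pow.diff(t) - x_pow ** sp.Rational(1, 3)) == 0
# ... and trivially for the constant solution x = 0
assert sp.Integer(0) ** sp.Rational(1, 3) == 0
print("both branches solve x' = x^(1/3)")
```

Non-uniqueness comes precisely from the fact that the zero solution and the shifted power branches can be glued with matching value and derivative at $t = T$.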
|ordinary-differential-equations|
0
Understanding predicates and quantifiers ∀ , ∃ (my reasoning)
So let's say we are trying to figure out if: P(x, y) is the predicate $x^2 < y$ as defined for x, y ∈ Z. This means using quantifiers: ∀ and ∃, I can logically say: 1. ∀x∃y of P(x,y) is possible! This is because we can say $ y = x^2 + 1 $ which means $ x < x^2 $ . This means for every x there is a y. This also means ∀y∃x of P(x,y) is possible! There can exist any x that if squared is less than y, such as if y was 5 and x was 2 ($2^2 < 5$). BUT ∃y∀x of P(x, y) is not possible. Why? Because if x becomes $ x=y+1 $ then $(y+1)^2 > y$ . This contradicts our understanding that $ x^2 < y $ . There does not exist a y for every x. Does my explanation of the logic seem comprehensive and reasonable?
For the beginning I want to clarify the quantifiers (I am assuming that we are always working in $\mathbb{Z}$ ): Let's take a look at $\forall x. A$ , where $A$ is a statement which can be either true or false. The $\forall x.$ quantifier says that the following statement has to be true for every possible $x$ . If there exists one case for which the statement $A$ is false, then the whole $\forall x. A$ is false. Now let's look at $\exists x. A$ . Here $A$ is again a statement which can be either true or false. The $\exists x$ quantifier says that the following statement has to be true for at least one $x$ . So as long as there exists one $x$ for which $A$ is true, the whole $\exists x. A$ is true. Only if $A$ is false for every single value $x$ is $\exists x. A$ false. With that we can now evaluate your statements. The predicate $P(x, y)$ is true if and only if $x^2 < y$ holds. First Case $\forall x.\exists y.P(x,y)$ Your explanation for the first case $(\forall x.\exists y.P(x,y)
|discrete-mathematics|logic|first-order-logic|quantifiers|
1
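The three quantified statements can also be brute-forced over a finite window of $\mathbb{Z}$. This Python sketch is my addition; note that a finite window can refute a universal claim but only suggest one that ranges over all of $\mathbb{Z}$:

```python
# Brute-force check of the three quantified statements for P(x, y): x^2 < y,
# over a finite window of Z.  (A finite window can refute a "for all" but can
# only suggest, not prove, one that holds over all of Z.)
R = range(-20, 21)

def P(x, y):
    return x * x < y

forall_x_exists_y = all(P(x, x * x + 1) for x in R)          # witness y = x^2 + 1
forall_y_exists_x = all(any(P(x, y) for x in R) for y in R)  # fails at y = 0
exists_y_forall_x = any(all(P(x, y) for x in R) for y in R)  # no y beats every x^2

print(forall_x_exists_y, forall_y_exists_x, exists_y_forall_x)  # True False False
```

The output matches the analysis: the first statement holds, the other two fail (the second already at $y=0$).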
Understanding predicates and quantifiers ∀ , ∃ (my reasoning)
So let's say we are trying to figure out if: P(x, y) is the predicate $x^2 < y$ as defined for x, y ∈ Z. This means using quantifiers ∀ and ∃, I can logically say: 1. ∀x∃y of P(x,y) is possible! This is because we can say $y = x^2 + 1$ which means $x^2 < y$ . This means for every x there is a y. This also means ∀y∃x of P(x,y) is possible! There can exist an x that if squared is less than y, such as if y was 5 and x was 2 ($2^2 < 5$). BUT ∃y∀x of P(x, y) is not possible. Why? Because if x becomes $x=y+1$ then $(y+1)^2 > y$ . This contradicts our understanding that $x^2 < y$ . There does not exist a y for every x. Does my explanation of the logic seem comprehensive and reasonable?
1. ∀x∃y of P(x,y) is possible! This is because we can say $y = x^2 + 1$ which means $x^2 < y$ . This means for every x there is a y. Almost ... when you pick $y = x^2+1$ it is the case that $x^2 < y$ since $x^2 < x^2 + 1$ ... But I am not sure what your $x < x^2$ has to do with it .. and the latter is not true anyway: $x=0$ is a counterexample. But yes, if you pick $y = x^2+1$ it is the case that $x^2 < y$ , and so $\forall x \exists y \ P(x,y)$ is indeed true. This also means ∀y∃x of P(x,y) is possible! There can exist an x that if squared is less than y, such as if y was 5 and x was 2 ($2^2 < 5$). Nope. If $y=0$ , then it is impossible to pick an $x$ such that $x^2 < y$ . So, with the interpretation and domain as given, $\forall y \exists x \ P(x,y)$ is false. BUT ∃y∀x of P(x, y) is not possible. Because if x becomes $x=y+1$ then $(y+1)^2 > y$ . This contradicts our understanding that $x^2 < y$ . There does not exist a y for every x. You are right that this is false, but your explanation is not quite right. If you pick $x = y+1$ , th
|discrete-mathematics|logic|first-order-logic|quantifiers|
0
How many possible values of $k$ are there?
I considered this problem, but I cannot solve it. $a,b,c$ are real numbers with $(a,b,c)\neq (0,0,0)$ . $k$ is defined as follows: $$k=\frac{(a^2+b^2+c^2)(a^3+b^3+c^3)}{a^5+b^5+c^5}$$ If $ab+bc+ca=0$ , how many possible values of $k$ are there? Firstly, I substituted $c=-\frac{ab}{a+b}$ into the equation for $k$ , in the case that $a+b$ is not $0$ . Then I got this equation: $$k=1-\frac{2 a^8 b^2 + 8 a^7 b^3 + 16 a^6 b^4 + 20 a^5 b^5 + 16 a^4 b^6 + 8 a^3 b^7 + 2 a^2 b^8}{a^{10} + 5 a^9 b + 10 a^8 b^2 + 10 a^7 b^3 + 5 a^6 b^4 + a^5 b^5 + 5 a^4 b^6 + 10 a^3 b^7 + 10 a^2 b^8 + 5 a b^9 + b^{10}}$$ However, this equation is too messy. Secondly, I rewrote $k$ in terms of elementary symmetric polynomials. Then I got this equation: $$k=\frac{(a+b+c)^3+3abc}{(a+b+c)^3+5abc}$$ However, I cannot proceed from here. If you can solve this, could you tell me the answer or give some hints? I'm sorry for my broken English; I'm Japanese.
Looking at integers only. One infinite family is $$ a = u^2 - v^2 \; , \; \; \; b = 2(uv+v^2) \; , \; \; c = 2(-uv+v^2) \; , \; \; $$ Your ratio, fifth powers in the denominator, comes out $$ \frac{u^6 - 3 u^4 v^2 + 51 u^2 v^4 + 15 v^6}{u^6 - 11 u^4 v^2 + 67 u^2 v^4 + 7 v^6} $$ This is homogeneous, so we may divide through by $v^6,$ then draw a graph using $x = \frac{u}{v}.$ Allowing for irrational $x$ gives us all values from $1$ to $15/7.$ Out of these there are infinitely many rational values when $u,v$ go back to being integers. Some output, integer pairs $(u,v)$ that are coprime with $u+v$ odd: u: 0 v: 1 numer: 15 = 3 5 denom: 7 = 7 u: 1 v: 0 numer: 1 = 1 denom: 1 = 1 u: 1 v: 2 numer: 1765 = 5 353 denom: 1477 = 7 211 u: 1 v: 4 numer: 74449 = 74449 denom: 45649 = 191 239 u: 2 v: 1 numer: 235 = 5 47 denom: 163 = 163 u: 2 v: 3 numer: 27091 = 27091 denom: 25291 = 7 3613 u: 2 v: 5 numer: 360739 = 151 2389 denom: 272539 = 272539 u: 3 v: 2 numer: 8061 = 3 2687 denom: 7261 = 53 137 u: 3 v:
|algebra-precalculus|symmetric-polynomials|
0
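The symmetric-polynomial identity the asker derived can be verified in exact rational arithmetic. A Python sketch (my addition), which forces $ab+bc+ca=0$ by choosing $c=-\frac{ab}{a+b}$ as in the question:

```python
# Verify, in exact rational arithmetic, that whenever ab + bc + ca = 0,
#   k = (a^2+b^2+c^2)(a^3+b^3+c^3) / (a^5+b^5+c^5)
# equals ((a+b+c)^3 + 3abc) / ((a+b+c)^3 + 5abc), as derived in the question.
from fractions import Fraction

def k_direct(a, b, c):
    return ((a**2 + b**2 + c**2) * (a**3 + b**3 + c**3)) / (a**5 + b**5 + c**5)

def k_symmetric(a, b, c):
    s, p = a + b + c, a * b * c
    return (s**3 + 3 * p) / (s**3 + 5 * p)

results = []
for a0, b0 in [(1, 2), (2, 3), (5, -2), (7, 4)]:
    a, b = Fraction(a0), Fraction(b0)
    c = -a * b / (a + b)          # this choice forces ab + bc + ca = 0
    results.append(k_direct(a, b, c) == k_symmetric(a, b, c))
print(results)   # [True, True, True, True]
```

The identity follows from Newton's identities with $e_2=ab+bc+ca=0$: then $p_5=e_1^2(e_1^3+5e_3)$ and $p_2 p_3=e_1^2(e_1^3+3e_3)$, so the $e_1^2$ factors cancel.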
Book recommendations for PDE
I am currently an undergraduate studying mathematics and have leaned more toward the pure side. Now I would like to get started with PDEs. There have been some questions from other people about book recommendations in this area. However, they were looking for something that focuses on applications. I would like to study the subject for its own sake, in a more proof-based way. Does anyone know some book that really goes into the theory of it? For context: I have done multivariable calculus, measure theory and functional analysis. But I don't know anything about differential geometry.
For my engineering degree I used Engineering Mathematics Volume 1 by A.J.M. Spencer and others (total of 9 authors) (Van Nostrand, 1977), which is a solid workmanlike text which covers the basics of the following areas: ODEs, Fourier series, Laplace transforms, PDEs with applications (wave equation, Laplace's equation, heat equation), multiple integrals, vector analysis, linear algebra, numerical analysis, and some basic statistics. It's very much from an applied mathematics point of view. I remember working through numerous examples of solutions of PDEs of the abovementioned types, and by the time the exam came along I was pretty well au fait with solving them. Not sure whether it's still in print, but it crops up every so often on the 2nd hand market. You may find a copy by shopping online. I also have Arne Broman's Introduction to Partial Differential Equations on my bookshelf (Dover, 1970) but haven't really got round to cracking it open since I got it. It covers Fourier, Laplace a
|partial-differential-equations|
0
Convergence of the sequence:
Convergence is defined as: given the sequence $\{a_n\}$ , for any $\epsilon>0$ , there exists a positive integer $N$ such that $|a_n - L| < \epsilon$ for $n \geq N$ . Now, if I consider the sequence $\{(-1)^{n+1}\}_{n=1}^{\infty}$ and I take $\epsilon=2$ and $L=0$ , then it satisfies the convergence definition for any $n$ because $|a_{n}-0| < 2$ for all terms. Where am I abusing this definition?
In the definition, "any $\varepsilon$ " means "for all $\varepsilon$ ". In your example you only give one $\varepsilon$ , but do not prove the condition for all $\varepsilon$ -s. That is the reason the word "any" is not used much in the literature: it can be confusing. The phrases "for all" and "there exists" are used more, as they are clearer.
|calculus|sequences-and-series|convergence-divergence|
1
Integral $\int_{0}^{\infty}\frac{1}{x^2}(\frac{1}{\cosh^2x}-\frac{\tanh{x}}{x})dx$
I am dealing with the following integral from superconductivity theory $$\int_{0}^{\infty}\frac{1}{x^2}\left(\frac{1}{\cosh^2x}-\frac{\tanh{x}}{x}\right)dx$$ My attempt to calculate this integral: calculate residues of $$f(x)=\frac{x-\sinh x\cosh x}{x^3\cosh^2 x},$$ then use Cauchy theorem about residues (integration over the contour over Im axis). I know that the answer is $-7\zeta(3)/\pi^2$ , but I don't understand how to check it. The function $f(x)$ has the second order pole at $x_0=i\pi/2+i\pi n$ (also the third order poles at $x=0$ but it's not important). To calculate residue, I use $$\mathrm{res}\,f(x)=\lim\limits_{x\rightarrow x_0}\frac{1}{2}\left[f(x)(x-x_0)^2\right]'.$$ Can anyone help with this integral?
Differentiate $\int_0^\infty \frac{\sin(x y)}{\sinh\frac{\pi y}2 } dy = \tanh x$ twice with respect to $x$ to get $$\int_0^\infty \frac{y^2\sin(x y)}{\sinh\frac{\pi y}2 } dy =- \frac{d^2 (\tanh x)}{dx^2}= 2\tanh x \ \text{sech}^2 x$$ which is utilized to integrate \begin{align} &\int_{0}^{\infty}\frac{1}{x^2}\left(\frac{1}{\cosh^2x}-\frac{\tanh{x}}{x}\right)dx\\ =&\ \frac12\int_{0}^{\infty}\left(\tanh{x}-x \ \text{sech}^2 x\right)d\left(\frac1{x^2}\right)\\ \overset{ibp}=& -\int_{0}^{\infty}\frac{\tanh{x}\ \text{sech}^2 x}{x}dx = -\int_{0}^{\infty}\frac1x \bigg(\int_0^\infty \frac{y^2 \sin(x y)}{2\sinh\frac{\pi y}2 } dy\bigg) dx\\ =& -\frac12 \int_{0}^{\infty} \frac{y^2}{\sinh\frac{\pi y}2 }\int_0^\infty \frac{\sin(x y)}{x} dx\ dy = -\frac\pi4\int_{0}^{\infty} \frac{y^2}{\sinh\frac{\pi y}2 }\,dy\\ \overset{t=e^{-\pi y/2}}{=}&\ - \frac4{\pi^2} \int_0^1 \frac{\ln^2t}{1-t^2}dt = - \frac4{\pi^2}\cdot \frac74\zeta(3)=-\frac7{\pi^2}\zeta(3) \end{align}
|integration|definite-integrals|contour-integration|trigonometric-integrals|
0
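The closing step of the answer rests on $\int_0^1\frac{\ln^2 t}{1-t^2}\,dt=\sum_{k\ge0}\frac{2}{(2k+1)^3}=\frac74\zeta(3)$. A Python sketch (my addition) checks both equalities numerically, the integral via a crude midpoint rule:

```python
# The last step of the evaluation rests on
#   int_0^1 ln(t)^2 / (1 - t^2) dt = sum_{k>=0} 2/(2k+1)^3 = (7/4) zeta(3),
# giving the final value -7 zeta(3) / pi^2.  Numeric check:
import math

zeta3 = sum(1.0 / n**3 for n in range(1, 200000))
odd_cubes = sum(2.0 / (2 * k + 1)**3 for k in range(200000))

# midpoint rule for int_0^1 ln(t)^2/(1-t^2) dt (integrable singularity at 0)
N = 200000
h = 1.0 / N
integral = sum(math.log((i + 0.5) * h)**2 / (1.0 - ((i + 0.5) * h)**2)
               for i in range(N)) * h

print(odd_cubes, 1.75 * zeta3, integral)   # all close to ~ 2.1036
print(-7 * zeta3 / math.pi**2)             # ~ -0.8526, the quoted answer
```

The three quantities agree to the accuracy of the midpoint rule, and the final print matches the value $-7\zeta(3)/\pi^2$ quoted in the question.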
Weinstein's bound on eigenvalues of self-adjoint operators
Consider a complex, separable Hilbert space $H$ and a (densely defined) self-adjoint operator $A: \mathcal D(A)\to H$ . Assume that $A$ admits an orthonormal basis of eigenvectors $(\varphi_n)_{n\in \mathbb N}$ with $A\varphi_n=\lambda_n \varphi_n$ . Next, define for $\psi\in \mathcal D(A^2)$ with $\|\psi\|=1$ the following quantities: $$ A_\psi:=\langle \psi,A\psi\rangle\tag 1$$ and $$\Delta_\psi A:=\sqrt{\langle \psi, (A-A_\psi)^2\psi\rangle} \tag 2\quad .$$ In Weinstein, D. H. "Modified ritz method." Proceedings of the National Academy of Sciences 20.9 (1934): 529-532. the author shows that $$ |\lambda^*-A_\psi|\leq \Delta_\psi A\tag 3 \quad .$$ with $\lambda^*\in \sigma_p(A)$ an eigenvalue of $A$ to be defined below. That is, the interval $[A_\psi-\Delta_\psi A\,,A_\psi +\Delta_\psi A]$ contains at least one eigenvalue. But in doing so, he assumes that there exists an eigenvalue $\lambda^*$ of $A$ which is the closest to $A_\psi$ , i.e. for which $(\lambda^*-A_\psi)^2\leq (\lambda_
Assume, for the sake of contradiction, that $$[A_\psi-\Delta_\psi A\,,A_\psi+\Delta_\psi A]\cap \sigma_p(A)=\emptyset\tag{1}$$ Since the $\varphi_n$ are an orthonormal basis, we can write $$ \psi=\sum_{n=1}^\infty a_n\varphi_n $$ with $\sum_{n=1}^\infty |a_n|^2=\|\psi\|^2=1$ . There exists some $n_0$ such that $a_{n_0}\ne 0$ . Since $\psi\in\mathcal{D}(A)$ we get $$ (\Delta_\psi A)^2=\langle\psi,(A-A_\psi)^2\psi\rangle=\|(A-A_\psi)\psi\|^2= \sum_{n=1}^\infty|a_n|^2|\lambda_n-A_\psi|^2\tag{2} $$ Note that from $(1)$ it follows that $|\lambda_n-A_\psi|>\Delta_\psi A$ for all $n$ . Hence $|a_n||\lambda_n-A_\psi|\ge(\Delta_\psi A) |a_n|$ for all $n$ , so from $(2)$ $$ (\Delta_\psi A)^2=\sum_{n=1}^\infty|a_n|^2|\lambda_n-A_\psi|^2\ge(\Delta_\psi A)^2\sum_{n=1}^\infty|a_n|^2=(\Delta_\psi A)^2\cdot 1=(\Delta_\psi A)^2\tag{3} $$ From $(3)$ it follows we must have term-by-term equality $|a_n||\lambda_n-A_\psi|=(\Delta_\psi A) |a_n|$ for all $n$ . But $a_{n_0}\ne 0$ and $|\lambda_n-A_\psi|>\Delta_\psi A$ for all $n$ , s
|functional-analysis|operator-theory|hilbert-spaces|mathematical-physics|self-adjoint-operators|
0
Looking for a simple proof that groups of order $2p$ are up to isomorphism $\Bbb{Z}_{2p}$ and $D_p$ for prime $p>2$.
I'm looking for a simple proof that up to isomorphism every group of order $2p$ ( $p$ prime greater than two) is either $\mathbb{Z}_{2p}$ or $D_{p}$ (the Dihedral group of order $2p$ ). I should note that by simple I mean short and elegant and not necessarily elementary. So feel free to use tools like Sylow Theorems, Cauchy Theorem and similar stuff. Thanks a lot!
Let $|G| = 2p$ , and suppose it can be written as $G = \{e, a, \ldots, a^{p-1}, ba, \ldots, ba^{p-1}, ab, \ldots, a^{p-1}b\}$ . However, this list appears to have $3p$ entries while $|G| = 2p$ , so there must be coincidences of the form $ba = a^{n}b$ for some $n \leq p-1$ . We observe that if we have an element $b \in G$ of order 2, there exists an automorphism of $H \cong \mathbb{Z}_{p}$ defined by $\alpha(a) = bab^{-1}$ such that $\alpha^2 = \text{id}$ . To understand why this is the case, consider it geometrically, where $a$ and $b$ represent either rotations or reflections of a regular $p$ -sided polygon. This leads to $\alpha^2(a) = a^{n^2} = a \implies n^2 = 1$ , which has two solutions in the ring $\mathbb{Z}_{p}$ , namely $n = \pm 1$ . If $n = 1$ , $G$ is abelian and isomorphic to $\mathbb{Z}_p \times \mathbb{Z}_2$ . Otherwise, we obtain precisely $bab^{-1} = a^{-1}$ (which also has geometric significance), making $G$ isomorphic to $D_{2p}$ .
|group-theory|finite-groups|cyclic-groups|group-isomorphism|dihedral-groups|
0
Group extensions of a nontrivial group
A group $G$ is called an extension of a group $Q$ by $N$ if we have the following short exact sequence: $1 \rightarrow N \rightarrow G \rightarrow Q \rightarrow 1$ . If there is a homomorphism $f: G \rightarrow N$ that is the identity on $N$ , then such extensions have a much simpler structure, namely $G \cong N \times Q$ . I was thinking of this phenomenon in the following fashion: for the group $N$ , we have a group $G$ such that $N$ is a direct factor of $G$ . I wondered whether such a thing happens for every nontrivial group $H$ . Namely, I have the following question. For every nontrivial group $H$ , does there exist a group $G_{H}$ (depending on $H$ ) such that $H \unlhd G_{H}$ and if $G_{H} \cong K \times T$ for some groups $K$ and $T$ , then $K \ncong H$ and $T \ncong H$ ?
If $H$ is an abelian group, then such a $G$ always exists. If $H$ has a nontrivial automorphism $\varphi$ , let $G=H\rtimes \langle \varphi\rangle$ . This group is nonabelian, since there exists $h\in H$ such that $\varphi(h)\neq h$ . Then we have $$(e,\varphi)(h,1) = (\varphi(h),\varphi)\neq (h,\varphi) = (h,1)(e,\varphi).$$ However, any group of the form $H\times\langle \varphi\rangle$ is abelian. If $H$ does not have a nontrivial automorphism, then $H$ is cyclic of order $2$ (we need the Axiom of Choice for this if $H$ is infinite; for finite abelian groups this is easy). Then we can take $G$ cyclic of order $4$ , which cannot be decomposed into a direct product of two nontrivial subgroups. But as Derek Holt points out in comments, if $H$ is complete and nontrivial (so $Z(H)=\{1\}$ and $\mathrm{Out}(H)=\{1\}$ ), then no such $G$ exists. Assume that $H$ is a normal subgroup of $G$ . First we prove that $G=HC_G(H)$ . Note that since $H\triangleleft G$ , then $HC_G(H)$ is a subgroup.
|abstract-algebra|group-theory|group-extensions|
1
Evaluation of limit using squeeze theorem
Let $f:(0,1) \to \mathbb{R}$ be the function defined as $f(x)=\sqrt{n}$ if $x \in \left[\frac{1}{n+1},{\frac{1}{n}}\right)$ where $n \in \mathbb{N}$ . Let $g:(0,1) \to \mathbb{R}$ be a function such that $\displaystyle \int_{x^2}^{x} \sqrt{\frac{1-t}{t}}\,dt < g(x) < 2\sqrt{x}$ for all $x \in (0,1)$ ; then find $\displaystyle \lim_{x \to 0} f(x)g(x)$ . I am looking to multiply the inequality by $f(x)$ and then apply the squeeze theorem, but the only problem I can't tackle is what I should put in place of $f(x)$ , as $f(x)=\sqrt{n}$ .
First notice that as $x\to0^+$ , $$\left(f(x)\right)^2=\left\lceil\frac1x\right\rceil-1\to\infty$$ and (by definition of $\lceil~\rceil$ ) $$\frac1{\sqrt{1+\frac1{\left(f(x)\right)^2}}}\le f(x)\sqrt x<1,$$ hence (by the squeeze theorem) $$\lim_{x\to0^+}\sqrt xf(x)=1.$$ Let us now estimate $h(x)-h(x^2)$ , where $$h(x):=\int_0^x\sqrt{\frac{1-t}t}dt.$$ By L'Hospital's theorem (N.B. this was the main idea in @Mathslover's sadly deleted answer ), $$\lim_{x\to0^+}\frac{h(x)}{\sqrt x}=\lim_{x\to0^+}\frac{\sqrt{\frac{1-x}x}}{\frac1{2\sqrt x}}=2$$ hence $$\lim_{x\to0^+}\frac{h(x^2)}{\sqrt x}=0,$$ so that $$\lim_{x\to0^+}\left(h(x)-h(x^2)\right)f(x)=2$$ and we are done: by the squeeze theorem again, $$\lim_{x\to0^+}f(x)g(x)=2.$$
|calculus|limits|
0
Circle Angle chasing problem: Compute APC+BQD
I was wondering if someone could give me a hint as to how to solve this problem: Let circle C1 intersect circle C2 at P and Q. A line intersects C1 at A and B, and C2 at C and D, such that the four points lie on the line on the order A, B, C, D and that P and Q lie on the same side of the line. Compute APC+BQD. I found this problem on an angle chasing worksheet by Ray Li. I've tried proving APC+BQD=180°, but I'm unsuccessful because I don't know how to obtain information about the angles. If you have any ideas, please let me know.
You have requested a hint to solve this angle chasing problem. We would like to give you a hint in the form of a diagram. Here, we have extended your sketch by adding a point $E$ and four red lines, i.e., $AQ$ , $AE$ , $EQ$ , and $PQ$ . Point $E$ is the point of intersection between extended $DQ$ and the circle $\Omega_1$ . If you do the angle chase using these lines, you will be able to compute $\angle APC+\angle BQD$ easily. You have also mentioned that you are trying to prove $\angle APC+\angle BQD = {\large{\pi}}$ . We think that you are on the right track.
|geometry|circles|angle|
1
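A numeric sanity check of the claim $\angle APC+\angle BQD=\pi$ is easy to set up. The Python sketch below uses one concrete configuration of my own choosing (equal circles, a horizontal secant line), not coordinates from the problem:

```python
# Numeric sanity check of angle(APC) + angle(BQD) = pi for one concrete
# configuration: equal circles of radius r centered at (0,0) and (d,0),
# cut by the horizontal line y = -c, with c chosen so that both
# intersection points P, Q of the circles lie on the same side of the line.
import math

def angle_at(v, a, b):
    """Angle a-v-b at vertex v."""
    ax, ay = a[0] - v[0], a[1] - v[1]
    bx, by = b[0] - v[0], b[1] - v[1]
    return math.acos((ax * bx + ay * by) /
                     (math.hypot(ax, ay) * math.hypot(bx, by)))

r, d, c = 5.0, 6.0, 4.5
s = math.sqrt(r * r - c * c)
A, B = (-s, -c), (s, -c)                  # line meets circle C1
C, D = (d - s, -c), (d + s, -c)           # line meets circle C2; order A,B,C,D
h = math.sqrt(r * r - d * d / 4)          # here h = 4 < c = 4.5, so P and Q
P, Q = (d / 2, h), (d / 2, -h)            # both lie above the line y = -c

total = angle_at(P, A, C) + angle_at(Q, B, D)
print(total, math.pi)   # the two agree
```

For this configuration the sum comes out exactly $\pi$ (up to floating-point error), consistent with what the asker is trying to prove.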
looking for more explicit injection strict on form $f:\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$
I've seen some relevant posts on this question. I'm wondering if there are any methods to construct an injection $f:\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q}$ strictly on such a domain and codomain without jumping back and forth claiming $\mathbb{Q}\sim\mathbb{N}\sim\mathbb{Z}$ . The general method I like is to rewrite the two rationals as decimals and then interleave the digits. Here I want a more explicit operation from rational cross rational to rational: any operation on this specific form $(\frac{p}{q},\frac{m}{n})$ , where $\gcd(m,n)=\gcd(p,q)=1$ and $\frac{p}{q},\frac{m}{n}\in\mathbb{Q}$ . I would appreciate any methods.
One of the many injections $\Bbb Q\times\Bbb Q\to\Bbb Q$ is $$f\left(\frac pq,\frac mn\right):=2^p3^q5^m7^n$$ where the two rationals are in canonical form, i.e. $p,m\in\Bbb Z,q,n\in\Bbb N,\gcd(p,q)=\gcd(m,n)=1$ .
|real-analysis|analysis|elementary-set-theory|set-theory|
1
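The prime-power injection from the answer is easy to exercise in code. A Python sketch (my addition) using exact rational arithmetic; `Fraction` handles the reduction to lowest terms:

```python
# The answer's injection Q x Q -> Q:  f(p/q, m/n) = 2^p 3^q 5^m 7^n  with both
# fractions in lowest terms and positive denominators.  By unique factorization
# of rationals into prime powers, the four exponents (hence the input pair)
# can be read back off the output, so f is injective.  A spot check:
from fractions import Fraction
from itertools import product

def f(r, s):
    r, s = Fraction(r), Fraction(s)       # Fraction normalizes to lowest terms
    return (Fraction(2) ** r.numerator * Fraction(3) ** r.denominator *
            Fraction(5) ** s.numerator * Fraction(7) ** s.denominator)

sample = {Fraction(p, q) for p in range(-4, 5) for q in range(1, 5)}
pairs = list(product(sample, sample))
images = {f(r, s) for r, s in pairs}
print(len(images) == len(pairs))   # True: no collisions on this sample
```

Negative numerators are fine because a negative integer power of a `Fraction` is still a `Fraction`, i.e. still lands in $\mathbb{Q}$.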
Which square is bigger?
Acute triangle $ABC$ has side lengths $a \ge b \ge c$ . We drop the altitude from point $C$ to side $c$ , and let its length be $h$ . We construct a square with side length $h$ so that side $c$ and points $A$ and $C$ lie on the square's sides and point $B$ lies on one of its vertices. Then, we construct another square with side $s$ so that $C$ lies on one of its vertices and points $A$ and $B$ lie on its sides. Which square is bigger? I found the equations $c^2=a^2+b^2-2s\sqrt{a^2-s^2}-2s\sqrt{b^2-s^2}$ and $4c^2h^2=a^4-2a^2b^2-2a^2c^2+b^4+6b^2c^2+c^4$
In the attached figure, your second square is in red and your first one is in black. You have $$\begin{cases}a\cos(t)=s\\a\cos(u)=h\end{cases}\Rightarrow \frac{s}{h}=\frac{\cos(t)}{\cos(u)}$$ Since $\triangle{ABC}$ is acute we have three possibilities $$ s\lt h\iff \cos(t)\lt\cos(u)\iff t\gt u\\s\gt h\iff \cos(t)\gt\cos(u)\iff t\lt u\\s=h\iff t=u$$ These possibilities mean, respectively, that black is bigger, red is bigger, or both are equal. In the figure there are particular values to verify the size of the concerned squares with $(a,b,c)=(10,8,7)$ , and the literal calculation is similar but more difficult to manipulate. We have $h\approx7.946$ ; the value of $s$ is not hard to determine; we have the system of three unknowns in which we eliminate $y$ and $z$ $$ y^2+z^2=7^2\\s^2+(s-z)^2=8^2\\s^2+(s-y)^2=10^2$$ We get $s\approx7.91643$ and $s\approx 3.4658$ so the black square is bigger . Note the values of $h$ and $s$ are near. There are positions for $s\lt h$ and $s\gt
|geometry|
0
Power series of $f(z)=\frac{z^2+1}{z^2-z^4}$ near $z_0=i$
My problem is that I can't use the power series of $\frac{1}{1-z}$ because we are looking at a ball around the point $z_0=i$ , and not all points there satisfy $|z|<1$ . So I don't know what to start from. This doesn't seem to be something with exponential or trigonometric function series. Trying to find the n'th derivative seems very hard. I did manage to find an answer using partial fractions and the power series of $\frac{1}{1-z}$ , but I can't use it.
The geometric power series $$ \frac{1}{1-z} = \sum_{n=0}^{\infty} z^n $$ is expanded around $z = 0$ , which is why it only converges inside the disk centered at $0$ . But we can expand a power series for a holomorphic (complex analytic) function $f$ around any point $z = z_0$ in its domain: $$ f(z) = \sum_{n = 0}^{\infty} \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n, $$ and it will converge on $|z - z_0| < R$ , the disk of radius $R$ centered at $z_0$ . (It also might converge for some or all of the points on the boundary circle if $R < \infty$ , but that's not important here.) When you're looking for a series expansion near $z_0 = i$ that's the hint that you should expand about that point. You could just grind out derivatives and hopefully spot a pattern, then show it's true for all $n \geq 0$ , say by induction, then evaluate them at $z_0 = i$ and compute coefficients. But this is usually difficult if not impossible, and certainly can be tedious. Instead, whenever we can manipulate a known series (like you
|complex-analysis|
0
Solve $7\sqrt{x-8}-\sqrt{21x+12}=2\sqrt{3}$
Solve $7\sqrt{x-8}-\sqrt{21x+12}=2\sqrt{3}$ $\Rightarrow 7\sqrt{x-8}-\sqrt{3}\cdot\sqrt{7x+4}=2\sqrt{3} \tag{1}$ $\Rightarrow 7\sqrt{x-8}=\sqrt{3}\cdot(2+\sqrt{7x+4}) \tag{2}$ $\Rightarrow [7\sqrt{x-8}]^2=[\sqrt{3}\cdot(2+\sqrt{7x+4})]^2 \tag{3}$ $\Rightarrow 49\cdot(x-8)=3\cdot[4+(7x+4)+4\sqrt{7x+4}] \tag{4}$ $\Rightarrow 49x-392=21x+24+12\sqrt{7x+4} \tag{5}$ $\Rightarrow 28x-416=12\sqrt{7x+4} \tag{6}$ $\Rightarrow (28x-416)^2=(12\sqrt{7x+4})^2 \tag{7}$ $\Rightarrow 784x^2+173056-23296x=144(7x+4) \tag{8}$ $\Rightarrow 784x^2-24304x+173056=0 \tag{9}$ $\Rightarrow 49x^2-1519x+10816=0 \tag{10}$ $x=\dfrac{1519 \pm \sqrt{(-1519)^2-4\cdot49\cdot10816}}{98}=\dfrac{1519 \pm \sqrt{187425}}{98} \tag{11}$ But the answer is given as $x=20, 11$ . Where did I go wrong?
Raising to the second power twice leads to large numbers. We can avoid that, so the possibility of making a mistake is smaller. Substitute $\sqrt{3} \,t=\sqrt{x-8}\ge 0.$ Then $x=3t^2+8.$ The equation takes the form $$7\sqrt{3}\,t-2\sqrt{3}=\sqrt{3}\sqrt{21\,t^2+60} $$ Dividing by $\sqrt{3}$ and raising to the second power gives $$49\,t^2-28\,t+4=21\,t^2+60$$ Thus $$28\,t^2-28\,t -56=0$$ We divide by $28$ to get $$t^2-t-2=0$$ We are after the positive root, which equals $t=2.$ Thus $x=3\,t^2+8=20.$
|algebra-precalculus|
1
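A quick numeric check (my addition) confirms the substitution-based solution: $x=20$ satisfies the original equation, while $x=11$ (which comes from the rejected root $t=-1$, and which the question cites as a given answer) satisfies only the squared equation:

```python
# Check the candidate roots against the ORIGINAL (unsquared) equation
#   7 sqrt(x - 8) - sqrt(21 x + 12) = 2 sqrt(3).
import math

def lhs(x):
    return 7 * math.sqrt(x - 8) - math.sqrt(21 * x + 12)

rhs = 2 * math.sqrt(3)
print(lhs(20) - rhs)   # ~ 0: x = 20 is a genuine solution
print(lhs(11) - rhs)   # ~ -4*sqrt(3): x = 11 is extraneous (from squaring)
```

Indeed $7\sqrt{3}-\sqrt{243}=7\sqrt3-9\sqrt3=-2\sqrt3\neq 2\sqrt3$, so $x=11$ must be discarded.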
Showing a function $\mathbb R \to \mathbb R^2$ is bilipschitz
Problem: Is the function $f\colon \mathbb R \to \mathbb R^2$ , defined by $f(x)=(x, \cos x)$ , $M$ -bilipschitz for some $M\geq 1$ ? (Using the standard metric) So far: By the MVT we have $\lvert \cos x -\cos y \rvert \leq \lvert x-y\rvert$ for all $x,y\in \mathbb R$ , so we easily get that $$d(f(x),f(y))=\sqrt{(x-y)^2+(\cos x-\cos y)^2} \leq \sqrt{2(x-y)^2}= \sqrt{2}d(x,y).$$ My problem is in showing the lower bound $1/\sqrt{2}$ , as intuitively this should indeed be a sensible lower bound, and thus $f$ should be $\sqrt{2}$ -bilipschitz. I'm completely blanking on bounding the distance from below. Or am I in fact wrong in assuming this function really is bilipschitz?
We always have $|f(x)-f(y)|= \sqrt{(x-y)^2 + (\cos x-\cos y)^2}\ge \sqrt{(x-y)^2}=|x-y|$ , which is the lower bound you were missing; together with your upper bound $\sqrt{2}\,d(x,y)$ , the function is indeed bilipschitz onto its image.
|real-analysis|metric-spaces|lipschitz-functions|
1
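The two bounds are easy to spot-check numerically. A Python sketch (my addition) sampling random pairs:

```python
# Spot-check the bilipschitz bounds  |x-y| <= |f(x)-f(y)| <= sqrt(2)|x-y|
# for f(x) = (x, cos x) on random pairs.
import math, random

def dist_f(x, y):
    return math.hypot(x - y, math.cos(x) - math.cos(y))

random.seed(0)
ok = all(
    abs(x - y) <= dist_f(x, y) <= math.sqrt(2) * abs(x - y) + 1e-9
    for x, y in ((random.uniform(-50, 50), random.uniform(-50, 50))
                 for _ in range(10000))
)
print(ok)   # True
```

The lower bound holds exactly (it is just the first coordinate of the difference); the small slack on the upper bound only guards against floating-point rounding.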
Power series of $f(z)=\frac{z^2+1}{z^2-z^4}$ near $z_0=i$
My problem is that I can't use the power series of $\frac{1}{1-z}$ because we are looking at a ball around the point $z_0=i$ , and not all points there satisfy $|z|<1$ . So I don't know what to start from. This doesn't seem to be something with exponential or trigonometric function series. Trying to find the n'th derivative seems very hard. I did manage to find an answer using partial fractions and the power series of $\frac{1}{1-z}$ , but I can't use it.
Hints. $$\frac1{z^2-z^4}=\frac1{z^2}+\frac{1/2}{1-z}+\frac{1/2}{1+z}$$ $$\frac1{z^2}=\left(-\frac1z\right)'$$ And for $|h|<1$ : $$-\frac1{i+h}=i\sum_{n=0}^\infty(ih)^n$$ $$\frac1{1-(i+h)}=\frac1{1-i}\sum_{n=0}^\infty\left(\frac h{1-i}\right)^n$$ $$\frac1{1+(i+h)}=\frac1{1+i}\sum_{n=0}^\infty\left(\frac{-h}{1+i}\right)^n.$$
|complex-analysis|
1
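The partial-fraction identity in the hint can be verified at a few complex sample points. A Python sketch (my addition), using Python's built-in complex arithmetic:

```python
# Verify the partial-fraction identity from the hint,
#   1/(z^2 - z^4) = 1/z^2 + (1/2)/(1 - z) + (1/2)/(1 + z),
# at a few complex points (including some near z0 = i).
def lhs(z):
    return 1 / (z**2 - z**4)

def rhs(z):
    return 1 / z**2 + 0.5 / (1 - z) + 0.5 / (1 + z)

pts = [1j + 0.1, -0.2 + 1.3j, 0.5 + 0.5j, 2 - 1j]
max_err = max(abs(lhs(z) - rhs(z)) for z in pts)
print(max_err)   # ~ 0
```

With the identity confirmed, each summand can be expanded in powers of $h = z - i$ as in the hint, and the series can be combined term by term.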
Book recommendations for PDE
I am currently an undergraduate studying mathematics and have leaned more toward the pure side. Now I would like to get started with PDEs. There have been some questions from other people about book recommendations in this area. However, they were looking for something that focuses on applications. I would like to study the subject for its own sake, in a more proof-based way. Does anyone know some book that really goes into the theory of it? For context: I have done multivariable calculus, measure theory and functional analysis. But I don't know anything about differential geometry.
The standard graduate textbook for PDEs is Partial differential equations by Evans. The first half of the book discusses explicit model examples by treating them rigorously. The next half deals with the general theory of PDEs, including Sobolev spaces, followed by standard linear elliptic, parabolic, and hyperbolic equations. In the end, he also discusses some nonlinear examples. The book is very readable and has a comprehensive set of appendices that contain any material you may not know but based on what you have mentioned in your question you should be alright.
|partial-differential-equations|
1
Why do we divide by 12 when calculating the number of different instances of the Rubik's cube?
$\frac{8!\cdot3^8\cdot12!\cdot2^{12}}{3\cdot2\cdot2}$ This is the solution. Do you know a paper in which this formula is explained, i.e. why we calculate like this? Then I heard that the numerator of the fraction would be the result if one did not consider the mechanics of the cube, but could simply assume that all 54 faces of the cube could take on any of the available colours. But then what about the centre pieces? If we ignored the mechanics of the cube, could they also take on any colour? If yes, how does the numerator take the centre pieces into account?
Just rotating the cube as one complete object gives you 24 ways to put it in front of you (say on a designated square on the table). These rotations are not seen as "moves", so the formula does not account for them. This means that in any state of the cube, you can always put it in front of you with the center pieces in a standard orientation. And when manipulating the cube you can, if you wish, always keep the center pieces fixed and rotate other planes around it (although that may not be how cubists do the solving). If we would also count the 24 "placements" of the cube as different states then the formula would give an even bigger number, 24 times bigger. But since that is not the convention we assume the center pieces fixed, and then note that we potentially have $8!$ permutations of the corners, $3^8$ orientations of the corners, $12!$ permutations of the edges, $2^{12}$ orientations of the edges. And then divide by $12$ because only $1/12$ th of all those potential possibilities
|combinatorics|rubiks-cube|
0
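The arithmetic of the formula can be sketched in a couple of lines of Python (my addition); the quotient is the well-known count of reachable Rubik's cube positions:

```python
# The count from the formula: 8! * 3^8 * 12! * 2^12 potential states,
# divided by 3 * 2 * 2 = 12 for the reachability constraints
# (corner-twist sum, edge-flip parity, permutation parity).
from math import factorial

potential = factorial(8) * 3**8 * factorial(12) * 2**12
reachable = potential // 12
print(reachable)   # 43252003274489856000
```

Note that the division is exact: the twelve constraint classes partition the potential states into equal-sized orbits.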
How to integrate $\int_{0}^{1} \frac{x \operatorname{Li}_2(1 - x)}{1 + x^2} \, dx$
How to integrate $$\int_{0}^{1} \frac{x \operatorname{Li}_2(1 - x)}{1 + x^2} \, dx$$ My attempt: $$\text{I}=\int_{0}^{1} \frac{x \operatorname{Li}_2(1 - x)}{1 + x^2} \, dx$$ \begin{aligned} &= -\frac{1}{2} \int_{0}^{1} \frac{\ln(x) \ln(1 + x^2)}{1 - x} \, dx \\ &= -\frac{1}{2} \int_{0}^{1} \frac{\ln(x) \ln(1 + x^2)}{1 - x^2} \, dx \\ &\quad -\frac{1}{2} \int_{0}^{1} \frac{x \ln(x) \ln(1 + x^2)}{1 - x^2} \, dx \end{aligned} Can someone help me integrate the last two integrals?
Note that \begin{aligned} \int_{0}^{1} \frac{x \operatorname{Li}_2(1 - x)}{1 + x^2} \, dx \overset{ibp}=-\frac{1}{2} \int_{0}^{1} \frac{\ln x\ln(1 + x^2)}{1 - x} \, dx \\ \end{aligned} where the integral on the right hand side is given by $$ \int_{0}^{1} \frac{\ln x\ln(1 + x^2)}{1 - x} \, dx= -\frac{3\pi^2}{16}\ln2-\frac\pi2 G+2\zeta(3) $$
|calculus|integration|definite-integrals|logarithms|special-functions|
1
Induced change of basis on a (p,q) tensor
I'm struggling to simplify the last step of a $(p,q)$ tensor and how its components change with a linear change of basis on the associated vector space. So far I have: Given a vector space $V$ over some field $F$ with bases $\mathcal{B}=(e_{1}, \dots, e_{n})$ and $\mathcal{B}'=(\overline{e}_{1}, \dots, \overline{e}_{n})$ which are related linearly by a matrix $A=(a_{j}^{i})$ . Now let $\alpha$ be a $(p,q)$ tensor on $V$ $$\begin{align} \alpha = \sum_{{i_{\beta}, j_{\gamma}}}{\alpha_{i_{1}, \dots, i_{p}}^{j_{1}, \dots, j_{q}} e^{i_{1}} \otimes \cdots \otimes e^{i_{p}} \otimes e_{j_{1}} \otimes \cdots \otimes e_{j_{q}}} \\ \alpha = \sum_{{i_{\beta}, j_{\gamma}}}{\overline{\alpha}_{i_{1}, \dots, i_{p}}^{j_{1}, \dots, j_{q}} \overline{e}^{i_{1}} \otimes \cdots \otimes \overline{e}^{i_{p}} \otimes \overline{e}_{j_{1}} \otimes \cdots \otimes \overline{e}_{j_{q}}} \end{align}$$ From here, I'm trying to find the components of $\alpha$ with respect to the new basis $\mathcal{B}'$ : $$ \overline{\
As recommended I consider a $(1,1)$ -tensor with summation convention: $$ \boldsymbol{\alpha}={\alpha^i}_j\; e_i\otimes e^j={\overline{\alpha}^i}_j\; \overline{e}_i\otimes \overline{e}^j\,. $$ Using $$ \overline{e}_j={a^\mu}_j\;e_\mu\,,\quad \overline{e}^i={b_\nu}^i\;e^\nu $$ we get \begin{align} {\overline{\alpha}^i}_j\stackrel{!}=\boldsymbol{\alpha}(\overline{e}^i,\overline{e}_j)= \boldsymbol{\alpha}\Big({b_\nu}^i\;e^\nu,{a^\mu}_j\;e_\mu\Big)= {b_\nu}^i\;{a^\mu}_j\;\boldsymbol{\alpha}\Big(e^\nu,e_\mu\Big)\stackrel{!}= {b_\nu}^i\;{a^\mu}_j\;{\alpha^\nu}_\mu\,. \end{align} The Kronecker deltas $$ \overline{e}^i(\overline{e}_j)=\delta^i_j\,,\quad e^\nu(e_\mu)=\delta^\nu_\mu $$ and so on have been used in the equals signs labelled with $!\,.$ They are the mechanism that picks the $(i,j)$ -th (resp. the $(\nu,\mu)$ -th) component of the tensor when it eats $\overline{e}^i$ and $\overline{e}_j\,,$ resp. $e^\nu$ and $e_\mu\,.$ To show what can go wrong when you don't apply the principle of
|summation|tensor-products|tensors|multilinear-algebra|
1
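The $(1,1)$ transformation law from the answer can be checked concretely. A Python sketch (my addition) for a $2\times 2$ example: with $A=({a^\mu}_j)$ the change-of-basis matrix and $B=(A^T)^{-1}$ (so that $\overline{e}^i(\overline{e}_j)=\delta^i_j$), the index formula reduces to the similarity transform $A^{-1}\alpha A$:

```python
# Check  alpha_bar^i_j = b_nu^i a^mu_j alpha^nu_mu  on a 2x2 example,
# where B = (A^T)^{-1} enforces the Kronecker-delta duality pairing.

def mat_inv2(m):  # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]              # a^mu_j: columns are the new basis vectors
B = [list(r) for r in zip(*mat_inv2(A))]  # B^T A = I, i.e. B = (A^T)^{-1}
alpha = [[1.0, 2.0], [3.0, 4.0]]          # components alpha^nu_mu

# index formula: alpha_bar^i_j = sum_{nu,mu} b[nu][i] * a[mu][j] * alpha[nu][mu]
alpha_bar = [[sum(B[nu][i] * A[mu][j] * alpha[nu][mu]
                  for nu in range(2) for mu in range(2))
              for j in range(2)] for i in range(2)]

similar = mat_mul(mat_inv2(A), mat_mul(alpha, A))  # A^{-1} alpha A
err = max(abs(alpha_bar[i][j] - similar[i][j]) for i in range(2) for j in range(2))
print(err)   # ~ 0
```

The agreement shows the contravariant index transforms with $A^{-1}$ and the covariant one with $A$, exactly as the Kronecker-delta argument in the answer predicts.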
Am I correct? Surjective and Injective logic (my reasoning)
So I am trying to prove: The function $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ , defined by $f(x, y) = x^3 - xy^2$ is surjective and injective. Surjectivity: To prove surjectivity, we need to show that for every $z$ in the codomain $\mathbb{R}$ , there exists at least one pair $(x, y)$ in the domain $\mathbb{R}\times\mathbb{R}$ such that $f(x, y) = z$ . Given $f(x, y) = x^3 - xy^2$ , we can say: \begin{align} x^3 - xy^2 &= z \\ x^3 &= z + xy^2 \\ x &= \sqrt[3]{z + xy^2} \end{align} Now, if we fix any $y$ , we can choose $x = \sqrt[3]{z + xy^2}$ , which will give us the desired $x$ for any $z$ . The function is surjective. Injectivity: To prove injectivity, we need to show that for any $(x_1, y_1)$ and $(x_2, y_2)$ in the domain such that $f(x_1, y_1) = f(x_2, y_2)$ , then $(x_1, y_1) = (x_2, y_2)$ . We can suppose $f(x_1, y_1) = f(x_2, y_2)$ : \begin{align} x_1^3 - x_1y_1^2 &= x_2^3 - x_2y_2^2 \\ x_1^3 - x_2^3 &= x_1y_1^2 - x_2y_2^2 \\ (x_1 - x_2)(x_1^2 + x_1x_2 + x_2^2) &= x_1
The function is not injective. In fact, as $$f(x,y)=x^3-xy^2=x(x^2-y^2),$$ any $(x,y)\in\mathbb{R}^2$ with $x=y$ satisfies $f(x,y)=0$ . It is, however, surjective: just notice that $f(x,0)=x^3$ , which is by itself a surjective function.
|functions|discrete-mathematics|polynomials|
0
Is a system that is globally asymptotically stable for any constant input also input-to-state stable?
I am referring to the ISS definition by Sontag: ${\displaystyle |x(t)|\leq \beta (|x_{0}|,t)+\gamma (\|u\|_{\infty }).}$ I understand that 0-GAS is a necessary condition for ISS. But is GAS for all constant $u$ a sufficient condition? Update: For clarification, I mean the system is GAS for an arbitrary but constant input $u$ . As of my understanding now, BIBO stability and 0-GAS are necessary for ISS, so the condition above cannot be sufficient. So the updated question is: Is a system that is BIBO stable and GAS for an arbitrary but constant input automatically ISS?
I think this is not the case. First, to state the hypothesis from your question: Given $$ \dot{x}=f(x,u)\tag{1} $$ such that $(1)$ is globally asymptotically stable (GAS) for all $u\in \mathbb{R}^m$ . Then $(1)$ is input-to-state stable (ISS). I claim this is wrong by reusing a well-known counterexample from linear time-varying (LTV) system analysis: $$ \begin{align} \dot{x}_1 &= -x_1 - 5 x_2 \cos(u)^2 + 2.5 x_1 \sin(2 u) \\ \dot{x}_2 &= -x_2 + 5 x_1 \sin(u)^2 - 2.5 x_2 \sin(2 u) \end{align}\tag{2} $$ For any $u\in\mathbb{R}$ , the system $(2)$ has both its eigenvalues at $-1$ , so it is GAS for any (finite) constant $u$ . However, take the bounded input $$ u(t)=2\pi\left(\frac{t}{2\pi} - \left\lfloor\frac{t}{2\pi}\right\rfloor\right) \tag{3} $$ where $\lfloor\cdot\rfloor$ is the floor function . With this input, the system $(2)$ is equivalent to the first example of the 2.2 Examples section from Ilchmann, A., Owens, D. H., & Prätzel-Wolters, D. (1987). Sufficient conditions for stabil
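The eigenvalue claim can be spot-checked without any linear-algebra library (an added sketch, not part of the answer): both roots of the characteristic polynomial equal $-1$ exactly when the trace is $-2$ and the determinant is $1$, i.e. when the polynomial is $(s+1)^2$.

```python
import math

def frozen_matrix(u):
    # system matrix of (2) for a constant input u
    return [[-1 + 2.5 * math.sin(2 * u), -5 * math.cos(u) ** 2],
            [5 * math.sin(u) ** 2,       -1 - 2.5 * math.sin(2 * u)]]

# characteristic polynomial is s^2 - tr*s + det; both roots are -1
# iff tr = -2 and det = 1
max_tr_err = 0.0
max_det_err = 0.0
for k in range(200):
    u = 0.05 * k                        # grid of constant inputs in [0, 10)
    (a, b), (c, d) = frozen_matrix(u)
    max_tr_err = max(max_tr_err, abs((a + d) + 2.0))
    max_det_err = max(max_det_err, abs((a * d - b * c) - 1.0))
```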
|control-theory|nonlinear-system|stability-theory|nonlinear-dynamics|lyapunov-functions|
1
Showing that $\Sigma_1$ does not have the separation property
I am taking a course on recursion theory, and have the following problem: Prove that there are $\Sigma_1$ subsets $A, B$ of $\mathbb{N}\times \mathbb{N}$ which cannot be separated by a $\Sigma_1$ set. Prove the same for subsets of $\mathbb{N}$ , and conclude that $\Sigma_1$ does not have the separation property. The second part is clearer to me, but I'm unable to prove the first sentence. I was given the following hint for the first part: Let $W^2_e$ be the domain of $\{e\}_2$ , the $e$ -th binary recursive function, so that the $W^2_e$ enumerate the $\Sigma_1$ subsets of $\mathbb{N}\times\mathbb{N}$ . Think of a pair $\langle e,f\rangle$ as coding sets $A′ = W_e^2$ and $B′ = W_f^2$ and decide whether this pair should go in $A$ or in $B$ in a way that would prevent $A′, B′$ from separating $A, B.$ I'm assuming I should use uniformization, but it's not very clear to me how I should do so, even with the hint. Any guidance would be much appreciated. For context, The separation property
For posterity's sake, I've figured it out. It was actually extremely simple and analogous to the proof that $\Sigma_1$ separation fails for subsets of $\mathbb{N}$ . Let $$A=\{(e,f)\mid \{f\}^2(e,f)\downarrow\}\qquad B=\{(e,f)\mid \{e\}^2(e,f)\downarrow\}$$ both subsets of $\mathbb{N}\times\mathbb{N},$ clearly both $\Sigma_1.$ Suppose $A',B'\subset\mathbb{N}\times\mathbb{N}$ are $\Sigma_1$ sets separating $A,B,$ i.e. $A\subset A',B\subset B'$ , $A'\cap B'=\emptyset,$ and $A'\cup B'=\mathbb{N}\times\mathbb{N}.$ Since they are $\Sigma_1,$ we have $e,f$ so that $A'=W_e^2$ and $B'=W_f^2.$ Then we know $(e,f)$ is either in $A'$ or $B'.$ If $(e,f)\in A'=W_e^2,$ then $$\{e\}^2(e,f)\downarrow\implies (e,f)\in B\subset B'.$$ If $(e,f)\in B'=W_f^2,$ then $\{f\}^2(e,f)\downarrow\implies (e,f)\in A\subset A'$ . Thus in either case, $(e,f)\in A'\cap B',$ a contradiction. So $A,B$ cannot be separated.
|logic|first-order-logic|recursion|computability|
1
Generation of the topology of a topological vector space by a metric
I am trying to prove that the set of equivalence classes of measurable functions with the metric $\rho(f,g)=\int_{0}^{1}\frac{|f(x)-g(x)|}{1+|f(x)-g(x)|} \, d\mu$ is a topological vector space. By definition, a topological vector space is a linear space with a topology in which addition and multiplication by a scalar are jointly continuous in their arguments. According to the definition, I need to prove that the operations of addition and multiplication by a scalar are continuous. First, I considered multiplication by a scalar: if we take an arbitrary function $f(x)$ and consider the result of multiplication by a scalar, $af(x)$ , then in order to show continuity, I need to show for any neighborhood of $af(x)$ that there exists a neighborhood of $f(x)$ such that all functions from this neighborhood, when multiplied by $a$ , fall into the neighborhood of $af(x)$ . If $a=0$ , then we always get the zero function, so as a neighborhood of $f(x)$ we can take any ball containing $f(x)$ . if $a\n
The map $(x,y)\mapsto \frac{|x-y|}{1+|x-y|}$ is a metric on the real numbers. Then by triangle inequality $$ \frac{ |(f_1+g_1)-(f_2+g_2)| }{ 1+ |(f_1+g_1)-(f_2+g_2)|} \le \frac{ |f_1 -f_2|}{1+|f_1-f_2|} + \frac{ |g_1 -g_2|}{1+|g_1-g_2|} . $$ Now integrate that to obtain $$ d(f_1+g_1,f_2+g_2) \le d(f_1,f_2) + d(g_1,g_2), $$ which proves continuity of addition.
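A numerical spot check of the displayed pointwise inequality (an added Python sketch; this is exactly the inequality that gets integrated in the answer):

```python
import random

def d(x, y):
    # the bounded metric used in the answer
    r = abs(x - y)
    return r / (1 + r)

random.seed(0)
violations = 0
for _ in range(10_000):
    f1, f2, g1, g2 = (random.uniform(-50, 50) for _ in range(4))
    lhs = d(f1 + g1, f2 + g2)
    rhs = d(f1, f2) + d(g1, g2)
    if lhs > rhs + 1e-12:   # small slack for floating-point rounding
        violations += 1
```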
|general-topology|functional-analysis|measure-theory|metric-spaces|
0
How to prove this elementary geometry statement
While self-studying an elementary geometry book, I found the following claim: If $r$ and $s$ are lines contained in the plane $P$ , then for every pair of points $A\in r$ and $B\in s$ , with $A\neq B$ , it holds that $\overline{AB}\setminus\{A,B\}\subseteq SP^B_r\cap SP^A_s$ , where $\overline{AB}$ is the segment that joins $A$ and $B$ , and $SP^A_s$ is the semiplane of $P$ defined by $s$ that contains $A$ , so that $SP^A_s=\{X\in P: \overline{XA}\cap s=\emptyset\}$ . The statement is evident just from drawing the configuration of points and lines, but I would like to be able to write the proof in terms of sets. How can this be done? Any tips would be greatly appreciated!
Since $P$ is a plane, it is a set of points with an affine structure, of dimension $2$ . There are two cases to consider: 1) $r\parallel s, r\neq s$ , or 2) not. First case: $\color{red}{r\parallel s, r\neq s}$ . Let $\vec P$ be the direction of $P$ , with $\dim \vec P=2$ . $\exists u\in \vec P$ s.t. $\color{red}{r=A+\mathbb Ru}$ and $\color{red}{s=B+\mathbb Ru}$ . Then $\forall x\in P, \exists !k\in \mathbb R, \exists !\lambda \in \mathbb R$ s.t. $x-A=ku+\lambda (B-A)$ . Let $P_A$ be the vector space pointed at $A$ , and let $f:P_A\to \mathbb R$ be defined as follows: $$x\mapsto \lambda.$$ $f$ is a linear form, $r=f^{-1}(0)$ , and $SP_r^B= \{x\in P_A:f(x)>0\}$ since $f(B)=1>0.$ As $\color{green}{\overline {AB}}:=\{x\in P_A\mid \exists \lambda \in [0,1], x=A+\lambda(B-A)\}$ , we get $\color{green}{\overline {AB}}-\{A\}\subset SP_r^B$ . Likewise, $\color{green}{\overline {AB}}-\{B\}\subset SP_s^A$ . Second case (with quite similar arguments): $0:=r\cap s$ , $P_0$ the vector space pointed at $0$ . $\forall x\in P_0, \exists !(\lambda,
|linear-algebra|geometry|
1
Computing $\sum_{k=1}^{\infty}\frac{1}{k}\int_{\pi k}^{\infty}\frac{\sin(x)}{x}dx$
I was asked to evaluate the following sum: $$\sum_{k=1}^{\infty}\frac{1}{k}\int_{\pi k}^{\infty}\frac{\sin(x)}{x}dx$$ I'm trying to use $$\int_{0}^{\infty} \frac{\sin(x)}{x}dx=\frac{\pi}{2}$$ However, it doesn't seem to work. Any help is greatly appreciated.
The first observation we make is a simple substitution, with $x=nu$ and $dx=n\;du$ : $$\int_{n\pi}^{\infty}\frac{\sin x}x\;dx = \int_{\pi}^{\infty}\frac{\sin(nu)}{nu}\;n\;du = \int_{\pi}^{\infty}\frac{\sin(nx)}x\;dx$$ Using this identity we can rewrite the given integral as $$\sum_{n=1}^{\infty}\frac1n\int_{n\pi}^{\infty}\frac{\sin x}x\;dx = \sum_{n=1}^{\infty}\frac1n\int_{\pi}^{\infty}\frac{\sin(nx)}x\;dx$$ Now (I am causing Fubini to roll over in his grave by doing this) we interchange the sum and integral as follows: $$\int_{\pi}^{\infty}\left[\sum_{n=1}^{\infty} \frac{\sin(nx)}n\right]\;\frac{dx}x$$ To evaluate the sum inside the brackets, we take things into the complex plane. In the following, by $\text{Log}$ and $\text{Arg}$ we refer to the functions with branch cuts along the negative real axis. \begin{align*} \sum_{n=1}^{\infty} \frac{\sin(nx)}n &= \sum_{n=1}^{\infty} \frac{e^{inx} - e^{-inx}}{2in} \\ &= \frac1{2i}\sum_{n=1}^{\infty}\left[\frac{e^{inx}}n - \frac{e^{-inx}}n\rig
|real-analysis|sequences-and-series|
0
Evaluate $\int_{0}^{1}\{1/x\}^2\,dx$
Evaluate $$\int_{0}^{1}\left\{\frac1x\right\}^2\,dx$$ where $\{\cdot\}$ denotes the fractional part. My work: $$\int\limits_0^1 \left\{ \frac{1}{x} \right\}^2 dx = \sum\limits_{n = 1}^\infty \int\limits_{1/(n+1)}^{1/n} \left\{ \frac{1}{x} \right\}^2 dx$$ For $x \in \left( \frac{1}{n + 1},\frac{1}{n} \right]$ we have $$n \leqslant \frac{1}{x} < n + 1 \implies \left\{ \frac{1}{x} \right\} = \frac{1}{x} - n,$$ so $$\int\limits_{1/(n+1)}^{1/n} \left\{ \frac{1}{x} \right\}^2 dx = \int\limits_{1/(n+1)}^{1/n} \left( \frac{1}{x^2} - \frac{2n}{x} + n^2 \right)dx = 1 + \frac{n}{n + 1} - 2n\ln \frac{n + 1}{n} \Rightarrow \boxed{\int\limits_0^1 \left\{ \frac{1}{x} \right\}^2 dx = \sum\limits_{n = 1}^\infty \left( 1 + \frac{n}{n + 1} - 2n\ln \frac{n + 1}{n} \right) }$$ $$\sum\limits_{n = 1}^\infty \left( 1 + \frac{n}{n + 1} - 2n\ln \frac{n + 1}{n} \right) = \sum\limits_{n = 1}^\infty \left( 2 - \frac{1}{n + 1} - 2n\ln \frac{n + 1}{n} \right) = \mathop {\lim }\limits
\begin{align*} \int_0^1\left\{\frac1x\right\}^2\;dx &= \int_0^1\left(\frac1x-\left\lfloor\frac1x\right\rfloor\right)^2\;dx \\ &= \sum_{n=1}^{\infty}\int_{\frac1{n+1}}^{\frac1n}\left(\frac1x-\left\lfloor\frac1x\right\rfloor\right)^2\;dx \\ &= \sum_{n=1}^{\infty}\int_{\frac1{n+1}}^{\frac1n}\left(\frac1x-n\right)^2\;dx \\ &= \sum_{n=1}^{\infty}\int_{\frac1{n+1}}^{\frac1n}\left(\frac1{x^2}-\frac{2n}x+n^2\right)\;dx \\ &= \sum_{n=1}^{\infty}\left[-\frac1x-2n\log x+n^2x\right]_{x=\frac1{n+1}}^{\frac1n} \\ &= \sum_{n=1}^{\infty}\left[-n+(n+1)+2n\log(n)-2n\log(n+1)+n-\frac{n^2}{n+1}\right] \\ &= \sum_{n=1}^{\infty}\left[\frac{2n+1}{n+1}+2n\log(n)-2n\log(n+1)\right] \\ \end{align*} \begin{align*} &= \lim_{N\rightarrow\infty}\sum_{n=1}^N\left[2-\frac1{n+1}+2n\log(n)-2n\log(n+1)\right] \\ &= \lim_{N\rightarrow\infty}\left( 2N + 1 - H_{N+1} + 2\log(N!) - 2N\log(N+1) \right) \\ &= \lim_{N\rightarrow\infty}\left( 2N + 1 - H_{N+1} + 2\log\left(\sqrt{2\pi N}\left(\frac{N}e\right)^N\right) - 2\log\left
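A numerical cross-check of the series that appears in this derivation (an addition: the constant $\log(2\pi)-\gamma-1$ is the standard closed form for this integral and is not visible in the truncated answer above):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# Partial sums of  sum_n [ 1 + n/(n+1) - 2n*log((n+1)/n) ].
# The n-th term behaves like 1/(3n^2), so the tail after N terms is
# roughly 1/(3N); with N = 10^5 that is about 3e-6.
N = 100_000
partial = 0.0
for n in range(1, N + 1):
    partial += 1.0 + n / (n + 1.0) - 2.0 * n * math.log1p(1.0 / n)

closed_form = math.log(2 * math.pi) - EULER_GAMMA - 1.0  # ~0.26066
```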
|calculus|integration|summation|indefinite-integrals|closed-form|
0
The quotient of a graph by its maximal tree.
I was just reading the proof that a graph is a tree iff it is simply connected in Hatcher, on pg.85, which is given below: The quotient map $X \to X/T$ is a homotopy equivalence by Proposition $0.17.$ The quotient $X/T$ is a graph with only one vertex, hence is a wedge sum of circles, whose fundamental group we showed in Example 1.21 to be free with basis the loops given by the edges of $X/T$ , which are the images of the loops $f_α$ in $X$ . The statement of Prop.0.17 is as follows: If the pair $(X,A)$ satisfies the homotopy extension property and $A$ is contractible, then the quotient map $q: X \to X/A$ is a homotopy equivalence. My questions about the proof are as follows: 1-Why the quotient map is a homotopy equivalence by proposition 0.17? Why is the pair $(X,T)$ satisfies the homotopy extension property? 2- I want to see a lot of examples of quotienting a graph by its maximal spanning tree please and see how the above proof is correct. Can anyone clarify these points to me please
As for 1, note that $T$ is a subcomplex of $X$ by definition, i.e. $(X, T)$ is a CW-pair and thus has the homotopy extension property by proposition 0.16 (see also example 0.14). As for 2, here's one class of examples you could consider: Take $X = \mathbb{R}$ with the usual CW-structure, i.e. we identify $X_0$ with $\mathbb{Z}$ and then glue in one 1-cell from with end points $n$ and $n + 1$ for each $n$ . This is a graph and a maximal spanning tree is just $\mathbb{R}$ itself, so this shows that $\mathbb{R}$ is contractible (it might be helpful to go through the proof an construct an explicit contracting homotopy with the given recipe). Now expand $X$ by taking a larger set of 1-cells: For instance, glue in an edge from 2 to 5, or from $2n$ to $2n + 2$ (for all $n$ ), or glue in infinitely many edges from 0 to 1, etc. The subcomplex $\mathbb{R} \subset X$ is still a maximal spanning tree, and all the additional edges you glued in will form loops on the unique 0-cell after contraction.
|graph-theory|algebraic-topology|fundamental-groups|
1
How to calculate $\sum_{n=1}^{\infty} (\int_0^\pi x^3\cos(nx)\,dx)^2$
For each $n \geq 1$ , denote $C_n= \int_0^\pi x^3\cos(nx)\,dx$ . Calculate $\sum_{n=1}^{\infty} {C_n}^2$ . My suggestion: the given sequence is proportional to the Fourier coefficients of the expansion of $x^3$ as an even function on $[-\pi,\pi]$ . The sum that I get is $\frac{9\pi^8}{224}$ . Am I in the right direction, and how do I validate my answer? Thanks
This follows easily from Parseval's formula: assume $f(x)=a_0+\sum_{n\ge 1}a_n\cos nx, x \in [-\pi,\pi]$ is the Fourier series of the even extension of $x^3$ to $[-\pi, \pi]$ . Since for $n \ge1, a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx \,dx$ it follows that $a_n=\frac{2C_n}{\pi}$ while $a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)dx=\frac{\pi^3}{4}$ But Parseval says (easily seen by squaring, integrating and using orthogonality) that $$\int_{-\pi}^{\pi}f^2(x)dx=2\pi a_0^2+\pi\sum_{n \ge 1}a_n^2$$ Hence $$\frac{2\pi^7}{7}=\frac{\pi^7}{8}+\frac{4}{\pi}\sum_{n \ge 1}C_n^2$$ from which the claimed result $\sum_{n=1}^{\infty} {C_n}^2=\frac{9\pi^8}{224}$ follows.
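A numerical check of the claimed value (an addition: the closed form for $C_n$ below comes from integrating $x^3\cos(nx)$ by parts three times, a derivation not carried out in the answer itself):

```python
import math

def C(n):
    # closed form for C_n = integral of x^3 cos(nx) over [0, pi]:
    #   C_n = 3 pi^2 (-1)^n / n^2 + 6 (1 - (-1)^n) / n^4
    sign = -1.0 if n % 2 else 1.0
    return 3 * math.pi**2 * sign / n**2 + 6 * (1 - sign) / n**4

# cross-check C_1 against midpoint-rule quadrature
m = 200_000
h = math.pi / m
quad_C1 = h * sum(((k + 0.5) * h) ** 3 * math.cos((k + 0.5) * h) for k in range(m))

# partial sum of C_n^2; the tail after 500 terms is about 3 pi^4 / 500^3
partial = sum(C(n) ** 2 for n in range(1, 501))
claimed = 9 * math.pi**8 / 224
```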
|fourier-analysis|fourier-series|harmonic-analysis|
0
Understanding subspace-restricted isomorphisms and global isomorphisms
(It may be noted that the author was not aware of Endomorphisms & Automorphisms at the time of writing this question) I'm trying to better understand the concept of a linear transformation acting as an isomorphism on a specific subspace of the vector space, even though it may not be an isomorphism on the entire vector space. To elaborate, I am referring to a situation where a linear transformation acts like an isomorphism but only when we look at a smaller, specific part of the space (a subspace). Consider a linear transformation, $T: \mathbb{R}^3 \to \mathbb{R}^3$ defined by a rank 2 matrix: $$ A = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix} $$ $T$ is not an isomorphism on $\mathbb{R}^3$ , but it acts as an isomorphism on the subspace $W = \text{span}{((1, 0, 0), (0, 1, 0))}$ . I have a few related questions: What is the proper terminology for a linear transformation that acts as an isomorphism on a specific subspace but not necessarily on the entire space? I've te
For simplicity, I'll assume you are talking about a linear transformation $f : V \to V$ from some real vector space $V$ to itself. ( $V$ is $\Bbb{R}^3$ in your examples.) Any linear transformation is what you call a "restricted isomorphism", as it restricts to an isomorphism on the zero subspace. What you are perhaps really interested in is subspaces $W$ on which $f$ restricts to an injection. We don't need new terminology for that: we just say $W$ has trivial intersection with the kernel of $f$ . All you have is that in a block decomposition $$\pmatrix{A & B \\ C & D}$$ given by putting the basis elements for what you refer to as the "invariant" subspace first, we know that $A$ is invertible. The other blocks can be anything you like. A linear transformation that is an isomorphism on its entire domain is called an isomorphism. See above and have another think about this. I don't think we need any new terminology. P.S. your use of the term "invariant subspace" is not standard. I would expect i
|linear-algebra|abstract-algebra|matrices|linear-transformations|vector-space-isomorphism|
1
Winding numbers in QCD and winding numbers in complex analysis. Is there a relation through a differential geometric generalization?
I have a background in theoretical physics and the first time I came across winding numbers was in the context of the vacuum of QCD. By the way physicists treat this topic, I thought it had little to do with winding numbers in complex analysis. Now I came across some results of complex Brownian motion applied to proofs of Picard's theorems in complex analysis, related to winding and tangling of curves, and also to similar stochastic techniques applied to the path integral formulation of quantum mechanics and related QFT topics. Thus, I am asking myself if winding numbers in QCD are related to closed loops on a certain bundle related to the SU(3) group. The paths could be the Brownian motion that provides an equivalent measure to the vacuum state. I am very familiar with stochastic calculus applied to QFT, differential geometry and functional analysis; what I am lacking to answer the question in the title is the interpretation of the QCD winding numbers in this sense. I have also been think
In Stochastic areas, Horizontal Brownian Motions, and Hypoelliptic Heat Kernels they investigate the concept of winding functionals of Brownian motion over more general Lie groups like SU(2). Their construction might be more generalizable for SU(3) too since they look at general Lie groups.
|differential-geometry|stochastic-calculus|winding-number|
0
Does a collar restrict to a closed neighbourhood?
Suppose $X$ is a (smooth) manifold with boundary and $f: [0,\infty)\times \partial X \rightarrow X$ is an open embedding such that $f(0,x) = x (\forall x \in \partial X) $ i.e., a collar. Is $f([0,1] \times \partial X)$ closed in $X$ ? It seems to be true but I have some problems: Suppose $f(t_n,x_n) \to y\in X$ , where $t_n \in [0,1]$ and $x_n \in \partial X$ . I try to show that $y = f(t,x)$ for some $t\in [0,1]$ and $x\in \partial X$ . But I don't know how to deal with the case where $y \in X-Im(f)$ . Thank you.
Here is an example. Let $X$ be the upper half-plane $\{(x,y): y\ge 0\}$ . Consider the map $$ f: (x,t) \mapsto ((1-t)x + t(e^x+1), t), \quad 0\le t\le 1, x\in {\mathbb R}. $$ This map restricts to the identity map when $t=0$ and sends ${\mathbb R}\times [0,1]$ onto $$ U={\mathbb R}\times [0,1) \cup (1,\infty)\times \{1\}\subset X, $$ which is the strip $0\le y\le 1$ with a ray removed from the top. Hence, $U$ is not closed. I am leaving it to you to check that $$ f: {\mathbb R}\times [0,1]\to U$$ is a diffeomorphism.
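A small numerical check (an addition, not part of the answer): each horizontal slice of the map is strictly increasing in $x$, hence embedded, and on the top edge $t=1$ the first coordinate $e^x+1$ stays strictly above $1$, so the top boundary line is only partially covered.

```python
import math

def phi(x, t):
    # first coordinate of the answer's map f(x, t)
    return (1 - t) * x + t * (math.exp(x) + 1)

def dphi_dx(x, t):
    # partial derivative in x: (1 - t) + t*e^x, positive for all t in [0, 1]
    return (1 - t) + t * math.exp(x)

min_slope = min(dphi_dx(i / 10.0, j / 20.0)
                for i in range(-100, 101) for j in range(21))

# on the top edge the image is {(e^x + 1, 1)}: first coordinate > 1
top_edge_inf = min(phi(i / 10.0, 1.0) for i in range(-100, 101))
```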
|general-topology|differential-geometry|differential-topology|smooth-manifolds|manifolds-with-boundary|
1
Existence and Uniqueness theorem - doubts
The class material presents the Existence and uniqueness theorem for ODEs as follows What I don't understand is the meaning of $r$ . I understand that the solution lies within a subinterval of $(t_0 - a, t_0 + a)$ , specifically $(t_0 - r, t_0 + r)$ , but what exactly does $r$ represent? And what is the significance of $b/M$ ? How does the inequality $r \leq b/M$ ensure that the solution $x(t)$ will remain within the rectangle?
I mean, why does $b/M$ ensure that the solution stays inside the rectangle? Hint: if the solution exists: $$\dot x(t) = F(t,x(t))\implies |\dot x(t)|\le M \implies -M\le\dot x(t)\le M.$$ Integrating: $$-M(t - t_0)\le x(t) - x(t_0)\le M(t - t_0).$$ Can you continue?
|ordinary-differential-equations|
0
Given matrices $A$ and $B$ find $B'$ such that $AB = AB'$ yet $\|{B}'\|<\|B\|$.
I'm working on a task where I have matrices $A$ and $B$ respectively. I need to find a matrix $B'$ such that $AB = AB'$ yet $\|B'\| < \|B\|$ . This is mainly because currently $\|B\|$ is very large, which hurts the task I'm solving. Does anyone have any idea on how to solve this? Thanks!
Given $A,B$ , the optimization problem $$ \begin{split} \min\; & \Vert B'\Vert^2\\ \text{s.t. }\, & AB'=AB \end{split} $$ is a convex optimization problem with linear constraints and can be solved via standard techniques (whenever it has a solution).
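A concrete sketch (an addition): under the Frobenius norm (an assumption, since the question does not fix a norm), the least-norm feasible point has a closed form via the pseudoinverse, because each column of $B'$ solves an independent least-norm problem $\min \|b'\|$ s.t. $Ab'=Ab$. NumPy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # wide, so A has a nontrivial kernel
B = 100.0 * rng.standard_normal((5, 4))  # deliberately large-norm B

# least-norm solution, column by column: b' = pinv(A) (A b)
B_prime = np.linalg.pinv(A) @ (A @ B)

residual = float(np.linalg.norm(A @ B_prime - A @ B))
norm_B = float(np.linalg.norm(B))
norm_B_prime = float(np.linalg.norm(B_prime))
```

Any feasible $B'$ differs from this one by a matrix whose columns lie in the kernel of $A$, which is why the pseudoinverse projection minimizes the norm.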
|linear-algebra|matrices|linear-transformations|matrix-norms|
0
Quadratic Recurrence Relation from Statistics
I was trying to solve a statistics problem on expected values when the following recurrence relation came up. Quite frankly I'm stumped on how to solve it ... even to the point that I'm starting to believe that it has no possible solutions. $X_{n}=\frac{1}{2}(1+X_{n-1}^2)$ for $X_{0} = 0.5$ (tends to $1$ ). (I've already taken a look at How to solve quadratic recurrence relation , which would only prove helpful if my recurrence has no closed-form solution, in which case I would appreciate a proof.) If you run the following code in MATLAB, the random sampling and the original function do not completely match (at each step you take max(H(i-1), rand(0,1))): G = 0.5; for t=2:100; G(t) = 0.5+0.5*G(t-1)^2; end; H = rand(1,100000); for i=2:100; H(i,1:100000)=max(H(i-1,1:100000),rand(1,100000)); endfor; plot(mean(H')); hold on; plot(G,'g'); hold off
A closed form for $X_n$ is unlikely, but to rigorously show that $$ \lim_{n\to\infty}X_n=1 $$ we can argue as follows . . . It's clear that $X_n > 0$ for all $n$ . Also we have $X_0 < 1$ , and if $X_{n-1} < 1$ then $$ X_n = \frac {1+X_{n-1}^2}{2} < \frac{1+1}{2} = 1, $$ so $X_n < 1$ for all $n$ . Then we get \begin{align*} X_n-X_{n-1} &= \frac {1+X_{n-1}^2}{2} - X_{n-1} \\[4pt] &= \frac {1-2X_{n-1}+X_{n-1}^2}{2} \\[4pt] &= \frac {(1-X_{n-1})^2}{2} \\[4pt] & > 0 \\[4pt] \end{align*} so the sequence $X_0,X_1,X_2,...$ is monotonically increasing, hence, since the sequence is bounded above, it follows that it approaches a limit, $L$ say, as $n$ approaches infinity. Then from the recursion $$ X_n = \frac {1+X_{n-1}^2}{2} $$ we get \begin{align*} & L = \frac {1+L^2}{2} \\[4pt] \implies\;& 1-2L+L^2=0 \\[4pt] \implies\;& (1-L)^2=0 \\[4pt] \implies\;& L=1 \\[4pt] \end{align*}
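The monotone convergence can also be watched numerically (an added sketch; note that the approach to $1$ is slow, with the error roughly behaving like $2/n$):

```python
x = 0.5
monotone = True
prev = x
for n in range(1, 100_001):
    x = 0.5 * (1.0 + x * x)
    if x < prev:
        monotone = False
    prev = x
# after 10^5 steps x is within about 2e-5 of 1, but still strictly below 1
```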
|statistics|discrete-mathematics|recurrence-relations|expected-value|
0
Free and cocompact subgroup of the group of automorphisms on a tree
Let $T$ be a $k$ -regular tree for some $k>2$ and let $G=\text{Aut}(T)$ be the group of automorphisms of the tree. I need to find a closed subgroup $H$ of $G$ , so $H$ is a free group (and therefore discrete) that acts cocompactly on $T$ (and therefore, by some theorem, $H$ is a lattice). The topology on $G$ , if it helps, is generated from $$G(T_0)=\{g \in G: gx=x , \forall x\in T_0\}$$ where $T_0$ is some finite subtree. So $G$ is locally compact and totally disconnected. One can construct a free subgroup $F_n$ of $G$ generated by $n$ hyperbolic automorphisms of $T$ with pairwise disjoint axes, using the ping-pong lemma. But who says this subgroup acts cocompactly on $T$ ? So another approach is to find a finite subtree $K$ (which is compact using the discrete topology on the tree), and find some free subgroup $F_n$ so that $K$ is a fundamental domain for the action of $F_n$ on $T$ (that is, $F_n K=T$ ), and therefore the action is cocompact (maybe some Cayley graph?). Thanks!
Start with a finite connected graph $X$ of valence $k$ , e.g. the complete graph on $k+1$ vertices. Let $Y\to X$ be the universal covering of $X$ and $G$ the group of covering transformations. Thus, $G$ is a free group acting freely on the tree $Y$ (of valence $k$ ) with the quotient space $X$ . You can find details for instance in Hatcher's book "Algebraic topology", chapter 1.
|group-actions|topological-groups|trees|
0
Probabilistic prime number theorem
The Cramér random model for the primes is a random subset ${{\mathcal P}}$ of the natural numbers with ${1 \not \in {\mathcal P}}, {2 \in {\mathcal P}}$ , and the events ${n \in {\mathcal P}}$ for ${n=3,4,\dots}$ being jointly independent with ${{\bf P}(n \in {\mathcal P}) = \frac{1}{\log n}}$ (the restriction to ${n \geq 3}$ is to ensure that ${\frac{1}{\log n}}$ is less than ${1}$ ). Prove that almost surely, the quantity $\frac{1}{x/\log x} |\{n \leq x: n \in {\mathcal P}\}|$ converges to one as ${x \rightarrow \infty}$ . Question: We are supposed to show that the event $\lim_{x \to \infty} \frac{1}{x/\log x} |\{n \leq x: n \in {\mathcal P} \}| = 1$ has probability one. What will be the probability space $\Omega$ used to model such events, and how should one proceed?
Use an infinite product space $\Omega = \prod_{n\geq 3} \{0,1\}$ where $\{0,1\}$ in the $n$ th component has the probability measure $\mu_n$ such that $\mu_n(\{1\}) = 1/\log n$ and $\mu_n(\{0\}) = 1-1/\log n$ . Give $\Omega$ the product measure.
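A Monte Carlo illustration (an addition, not part of the answer): sample the Cramér random set up to $x$ with the product measure described above and compare the count with $x/\log x$. Note that the convergence is slow: the expected count is close to $\operatorname{li}(x)$, so the ratio drifts to $1$ only like $1+O(1/\log x)$; at $x=10^6$ it still sits near $1.09$.

```python
import math
import random

random.seed(12345)
x = 1_000_000
count = 1  # n = 2 is always in the set; n = 1 never is
for n in range(3, x + 1):
    if random.random() < 1.0 / math.log(n):
        count += 1

ratio = count / (x / math.log(x))
```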
|probability-theory|analytic-number-theory|
0
Is $\bigg\lvert\frac{1 - \zeta^s }{ 1 - \zeta}\bigg\rvert \in\mathbb{Q}(\zeta)$ for $s = 1, \ldots, p-1$?
Let $p$ be an odd prime and let $\zeta$ be a $p$ -th root of unity. Is it true that $$\Bigg\lvert\frac{1 - \zeta^s }{ 1 - \zeta}\Bigg\rvert \in\mathbb{Q}(\zeta)$$ for $s = 1, \ldots, p-1$ ? Remark: By $\lvert \cdot \rvert$ I mean the standard norm over $\mathbb{C}$ . I do not know how to prove this. I was trying to use the definition of the norm, so \begin{align*} \lvert 1-\zeta \rvert &= \sqrt{(1-\Re(\zeta))^2+\Im(\zeta)^2}, \\ \lvert 1-\zeta^s \rvert &= \sqrt{(1-\Re(\zeta^s))^2+\Im(\zeta^s)^2}, \end{align*} but I do not see what to do now. I know Kummer's Lemma, but I am not sure whether it could be of use here.
Let $z$ be the complex number $$ z=\frac{1-\zeta^s}{1-\zeta}\ . $$ Then the complex conjugate of $z$ is $$ \bar z=\frac{\overline{1-\zeta^s}}{\overline{1-\zeta}} =\frac{1-\zeta^{-s}}{1-\zeta^{-1}} =\frac{\zeta^{-s}}{\zeta^{-1}}\cdot\frac{1-\zeta^s}{1-\zeta} =\zeta^{1-s}\cdot z=\zeta^{2t}\cdot z \ , $$ where $t$ is chosen so that $1-s\equiv 2t$ modulo $p$ . (If $s$ is odd, then set $t=(1-s)/2\in\Bbb Z$ , if $s$ is even, then use $p+s$ instead of $s$ , which is odd and moves to the other case.) Then: $$ |z|^2=z\cdot \bar z=\zeta^{2t}\cdot z^2=(\zeta^t\cdot z)^2\ , $$ so $|z|=\pm \zeta^t\cdot z\in\Bbb Q(\zeta)$ . Note: Not needed, but to be more convincing, let us show explicitly that $\zeta^t\cdot z$ is real. Its conjugate is equal to itself: $$ \overline{\zeta^t\cdot z}=\zeta^{-t}\cdot\bar z=\zeta^{-t}\cdot\zeta^{2t}\cdot z=\zeta^t\cdot z \ . $$ A further note for the downvoters. The problem wants the following from us. Fix some primitive root $\zeta$ of unity of odd order $N$ . (We do
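The argument can be spot-checked numerically (an addition; $p=7$ is an arbitrary sample prime):

```python
import cmath

p = 7                                   # sample odd prime (an assumption)
zeta = cmath.exp(2j * cmath.pi / p)
worst_imag = 0.0
for s in range(1, p):
    z = (1 - zeta**s) / (1 - zeta)
    t = ((1 - s) * pow(2, -1, p)) % p   # solve 2t = 1 - s  (mod p)
    # zeta^t * z should be real, so |z| = +- zeta^t * z lies in Q(zeta)
    worst_imag = max(worst_imag, abs((zeta**t * z).imag))
```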
|abstract-algebra|complex-numbers|cyclotomic-fields|
1
Does a function having the property that for every $a_n\rightarrow a$ $f(a_n)$ diverges exist?
Could you help me solve the following tricky exam problem? Let $f: [ -1, 1]\rightarrow \mathbb R$ . Let $a\in [-1, 1]$ . Suppose that for every not ultimately constant sequence $\{a_n\}$ , $a_n\in [-1, 1]$ , such that $a_n\rightarrow a$ , the sequence $\{f(a_n)\}$ diverges. Does such a function exist? What if the above condition is true for every point $a\in [-1, 1]$ ? I attempted the first part of the problem by setting $f(x) =\frac1x + 1$ for $x\in [-1, 0)$ , $f(x) = 0$ for $x = 0$ , and $f(x) =\frac1x - 1$ for $x\in(0, 1]$ . Then for any sequence converging to $0$ , there is no limit for the corresponding values of $f$ . Regarding the second part of the problem, I feel like such a function cannot exist, but I don't know how to show this. I feel like I need to use the Bolzano-Weierstrass Theorem. Any ideas?
No such function exists; we will show this by contradiction. Assume that $f$ is such a function and let $r > 0$ be such that the set $B = f^{-1}([-r, r])$ is infinite (such an $r$ exists: otherwise $[-1, 1] = \bigcup_{i = 1}^\infty f^{-1}([-i, i])$ would imply that $[-1, 1]$ is countable). Take any sequence $(b_n)_{n \in \mathbb{N}}$ with $b_n \in B$ for all $n$ and $b_n \neq b_m$ for $n \neq m$ . Now $(f(b_n))_n$ is bounded, so by Bolzano-Weierstrass we find a convergent subsequence $(f(b_{n_k}))_k$ . Applying Bolzano-Weierstrass once more to $(b_{n_k})_k$ , we find yet another subsequence $(b_{n_{k_l}})_l$ with the property that $(f(b_{n_{k_l}}))_l$ converges, contradiction.
|real-analysis|calculus|sequences-and-series|functional-analysis|analysis|
1
Is an integral domain with every ideal a product of prime ideals necessarily a Dedekind domain?
Question: Is an integral domain in which every proper ideal is a product of prime ideals necessarily a Dedekind domain? Please provide a reference. The first two sentences of the Wikipedia page for Dedekind domains are as follows: 'In abstract algebra, a Dedekind domain or Dedekind ring, named after Richard Dedekind, is an integral domain in which every non-zero proper ideal factors into a product of prime ideals. It can be shown that such a factorization is then necessarily unique up to the order of the factors.' This suggests that the definition of a Dedekind domain (or one possible definition of it) is an integral domain in which every non-zero proper ideal is a product of prime ideals, and that this property alone implies unique factorisation. However I can't find any outside source that confirms that this is true. The source that Wikipedia cites to support this claim is remark 3.25 of these notes , which, as far as I can tell, doesn't prove what Wikipedia claims it does, since the
Of course I found a reference right after asking the question. I swear I was looking for a while... Anyway, it's true, and a proof is on page 765 of Dummit and Foote's Abstract Algebra. If I were less busy I would change the reference on the Wikipedia page to this, but alas.
|reference-request|dedekind-domain|
0
How to integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$?
Q) How to Integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$ ? First of all let me tell what I think about this question. In my Coaching Institute, the chapter 'Integration' is over. This question came in my mind while I was solving the questions of 'Integration By Partial Fraction Decomposition' . Let me give two examples: Example 1) Let's integrate $\int\frac{x-5}{(x-7)^{2}}dx$ Now let me tell the solution of $\int\frac{x-5}{(x-7)^{2}}dx$ Let $I=\int\frac{x-5}{(x-7)^{2}}dx$ $\implies \frac{(x-5)}{(x-7)^{2}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}$ $\implies (x-5)=Ax+(B-7A)$ Upon solving we get : $A=1, B=2$ $\implies I=\int\frac{1}{(x-7)}dx+\int\frac{2}{(x-7)^{2}}dx$ Finally, after this step, it is easy to solve. Now let me give the $2^{nd}$ example: Evaluate $ I_1=\int\frac{3x^{2}+2x+4}{(x-7)^{3}}dx$ Similarly we can integrate this expression by using Partial Fraction Decomposition. $\implies \frac{3x^{2}+2x+4}{(x-7)^{3}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}+\frac{C}{(x-7)^{3}
By Differentiation Let \begin{aligned} P(x)=3 x^4+5 x^3+7 x^2+2 x+3 = A_4(x-6)^4+A_3(x-6)^3+A_2(x-6)^2+A_1(x-6)+A_0 \end{aligned} for some constants $A_0,A_1,\cdots,A_4.$ In order to extract the coefficients of $A_k$ ’s, we need to differentiate the identity $k$ times at $x=6$ and obtain $$A_k=\frac {P^{(k)}(6)}{k!}$$ $$\boxed{A_0=5235, A_1=3218,A_2=745,A_3=77,A_4=3}$$ Hence we have \begin{aligned} P(x)= 3(x-6)^4+77(x-6)^3+745(x-6)^2+3218(x-6)+5235, \end{aligned} and $$ \begin{aligned} I= & 3 \int \frac{d x}{x-6}+77 \int \frac{d x}{(x-6)^2}+745 \int \frac{d x}{(x-6)^3} +3218 \int \frac{d x}{(x-6)^4}+5235 \int \frac{d x}{(x-6)^5} \\ = & 3 \ln |x-6|-\frac{77}{x-6}-\frac{745}{2(x-6)^2}-\frac{3218}{3(x-6)^3} -\frac{5235}{4(x-6)^4}+C \quad \blacksquare \end{aligned} $$
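A quick check (an addition) that the boxed coefficients are consistent: the two quartics below agree at five points, hence identically, using exact integer arithmetic.

```python
def P(x):
    # the original numerator
    return 3 * x**4 + 5 * x**3 + 7 * x**2 + 2 * x + 3

def P_shifted(x):
    # the Taylor expansion about x = 6 with the boxed coefficients
    u = x - 6
    return 3 * u**4 + 77 * u**3 + 745 * u**2 + 3218 * u + 5235

# two degree-4 polynomials agreeing at 5 points agree everywhere
mismatch = max(abs(P(x) - P_shifted(x)) for x in range(-2, 3))
```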
|calculus|integration|indefinite-integrals|
0
Birth-death : Always more than 1 bifurcation?
Say I have a (smooth) function $f : \mathbb{R}^n \to \mathbb{R}$ , and a critical point $x$ (i.e., $f'(x) = 0$ ). I call this point degenerate if $\det \text{Hess}_x f = 0$ (so, equivalently, if the kernel of the Hessian at $x$ is non-trivial). An example is $x = 0$ for $f : \mathbb{R} \to \mathbb{R} : x \mapsto x^3$ . Then, if I perturb my function $f$ generically, I should observe such a phenomenon, called a "birth-death bifurcation": my degenerate critical point will either bifurcate into multiple non-degenerate critical points (birth), or die. (In the picture, I drew the bifurcations $f(x) \pm \varepsilon x$ for $\varepsilon > 0$ . We can easily show that, for $f(x) = x^3$ , $f - \varepsilon x$ has two critical points near $0$ , while $f + \varepsilon x$ has none.) My question is then the following: is it standard knowledge that my degenerate critical point will either die , or bifurcate into strictly more than one critical point? (For a generic bifurcation). If so, where can I fin
Your statement is false. Consider $x^4$ . The perturbation $x^4+ax^3+bx^2+c$ has a derivative which is a cubic and so can have anywhere from 1 to 3 real roots in the neighborhood of $0$ for small $a,b,c$ . For example, $x^4+.1x^2$ has one critical point, $x^4-.1 x^3$ has 2 (double root at 0), and $x^4-.1x^2$ has 3. In more dimensions, you're looking at zeroes of the determinant of a Hessian, so you're considering zeroes in the neighborhood of a zero of a real function on $\mathbb R^n$ . You can get anywhere from zero critical points to infinitely many locally for the perturbation of a degenerate critical point.
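The three counts can be confirmed numerically (an added sketch; the root counter is a crude scan that registers a sign change, or an exact grid zero once, which is adequate for these particular derivatives):

```python
def count_critical_points(g, lo=-2.0, hi=2.0, steps=4000):
    # crude count of real roots of the derivative g on [lo, hi]
    roots = 0
    prev = g(lo)
    for k in range(1, steps + 1):
        cur = g(lo + (hi - lo) * k / steps)
        if prev == 0 or prev * cur < 0:
            roots += 1
        prev = cur
    return roots

def d1(x):
    return 4 * x**3 + 0.2 * x       # derivative of x^4 + 0.1 x^2

def d2(x):
    return 4 * x**3 - 0.3 * x**2    # derivative of x^4 - 0.1 x^3

def d3(x):
    return 4 * x**3 - 0.2 * x       # derivative of x^4 - 0.1 x^2

counts = (count_critical_points(d1),
          count_critical_points(d2),
          count_critical_points(d3))
```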
|reference-request|dynamical-systems|perturbation-theory|morse-theory|bifurcation|
0
When does the sum of parts equal to the whole?
I am trying to prove the following statements. First, some definitions - here is my attempt to define a montonic function: A function $f(x)$ is monotonic on an interval $I$ if for any two points $x_1, x_2 \in I$ , the following conditions hold: Monotonically Increasing: If $x_1 then $f(x_1) \leq f(x_2)$ Monotonically Decreasing: If $x_1 then $f(x_1) \geq f(x_2)$ If either of these conditions holds for all $x_1, x_2 \in I$ , then $f(x)$ is said to be monotonic on $I$ . If neither condition holds, then $f(x)$ is not monotonic. Statements to Prove Define a function $TV_n(f) = \sum_{i=1}^{n} |f(x_i) - f(x_{i-1})|$ where $x_i = a + i \frac{b-a}{n}$ for $i = 0, 1, ..., n$ As the number of divisions $n$ goes to infinity, then: $$\lim_{n \to \infty} TV_n(f) = TV(f)$$ Define a function $QV_n(f) = \sum_{i=1}^{n} (f(x_i) - f(x_{i-1}))^2$ where $x_i = a + i \frac{b-a}{n}$ for $i = 0, 1, ..., n$ . As the number of divisions $n$ goes to infinity: $$\lim_{n \to \infty} QV_n(f) = 0$$ I am trying to un
For the first statement, the only thing you need to note is that $$\sum_{i=1}^{n}\vert f(x_i)-f(x_{i-1})\vert = \left\vert\sum_{i=1}^{n} (f(x_i)-f(x_{i-1}))\right\vert$$ holds for a monotonic function $f$ . For the second statement, I think you need some extra condition like continuity, because we have the following counterexample: Let $a=0$ , $b=1$ and take an arbitrary irrational number $r\in [0,1] \setminus \mathbb{Q}$ , $$f(x)=\begin{cases} x, & x < r \\ x+10, & x \geqslant r \end{cases}$$ Now, there exists $i$ such that $r\in (\frac{i-1}{n},\frac{i}{n})$ and $$QV_n(f) \geqslant (f(x_i)-f(x_{i-1}))^2 \geqslant 10^2$$ for all $n$ , which means the second statement fails.
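The bound $QV_n(f)\geqslant 10^2$ can be verified numerically for a concrete jump function (a sketch; the jump location $\sqrt2/2$ and jump size $10$, consistent with the $10^2$ in the bound, are my choices):

```python
import math

R = math.sqrt(2) / 2   # an irrational jump location inside (0, 1)

def f(x):
    # identity function with a jump of size 10 at x = R
    return x if x < R else x + 10

def QV(n):
    # quadratic variation sum over the uniform partition of [0, 1]
    xs = [i / n for i in range(n + 1)]
    return sum((f(xs[i]) - f(xs[i - 1])) ** 2 for i in range(1, n + 1))
```

Since the grid points are rational, $R$ always falls strictly inside one subinterval, whose squared increment alone already exceeds $100$.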
|calculus|
0
Derivative of a discounted payoff
Let $S(t)$ be the stock price at time $t\geq 0$ with $S(0)=s_0$ and let $\Pi(S_t)=\max\{K-S_t,0\}=(K-S_t)_+$ the payoff of an American put with strike price $K$ . How can I calculate the derivative $\frac{d}{d \psi} \mathbb E[e^{-\delta t} \Pi(S(t))\chi_{\{S(t)<\psi\}}]$ for a stochastic $t$ and fixed $s$ ? I don't know how to start here, thanks for a hint.
I'd reason like this: first we can notice that in order for the argument to be different from zero, it must be that both $S < K$ and $S < \psi$ . Hence we have two cases: $\psi > K$ : in this case it must be $S < K$ , and then the function does not depend on $\psi$ , which means its derivative is zero. $\psi < K$ : in this case it must be $S < \psi$ , and then the expectation becomes $$ A:= \int_{-\infty}^{\psi}e^{-\delta t}(K-S)f_S(S) dS \Rightarrow A_{\psi} = e^{-\delta t}(K-\psi)f_S(\psi) $$ Putting both together, I'd say the derivative is $ e^{-\delta t}(K-\psi)^{+}f_S(\psi)$
|derivatives|expected-value|conditional-expectation|finance|
0
A complete local ring is finite over $\mathbb{Z}_p$ if and only if it has finitely many $\overline{\mathbb{Q}}_p$-valued points?
Let $\mathbb{Z}_p$ be the ring of $p$ -adic integers and $\overline{\mathbb{Q}}_p$ a fixed algebraic closure of the field $\mathbb{Q}_p$ of $p$ -adic numbers. Let $R$ be a complete Noetherian local $\mathbb{Z}_p$ -algebra. It is not hard to see that if $R$ is a finite $\mathbb{Z}_p$ -algebra, then the set $\operatorname{Hom}_{\mathbb{Z}_p}(R,\overline{\mathbb{Q}}_p)$ of continuous homomorphisms of $\mathbb{Z}_p$ -algebras is finite. A natural question, then, is the reverse also true? Let $R$ be a complete Noetherian local $\mathbb{Z}_p$ -algebra of characteristic zero such that the set $\text{Hom}_{\mathbb{Z}_p}(R,\overline{\mathbb{Q}}_p)$ is finite. Is it true that $R$ is a finite $\mathbb{Z}_p$ -algebra?
(Added: this answer is implicitly assuming that $R$ has residue field $\mathbf{F}_p$ . From context, I am guessing you are thinking about Galois deformation rings where this is always satisfied. Otherwise $\mathbf{Q}_p$ is also a counter example as Torsten Schoeneberg points out. The assumption on residue field implies that $R$ is a quotient of $\mathbf{Z}_p[[X_1,\ldots,X_d]]$ for some $d$ which is what is being implicitly used below.) The weaker statement that $R[1/p]$ is finite over $\mathbf{Q}_p$ is true (so if $R$ is flat over $\mathbf{Z}_p$ then you are fine.) Certainly we can assume that the Noetherian ring $R[1/p]$ has only finitely many maps to $K = \overline{\mathbf{Q}}_p$ . The point is that $R[1/p]$ is a Jacobson ring (see e.g. http://math.stanford.edu/~conrad/modseminar/pdf/L04.pdf ), and that any maximal ideal $P$ of $R[1/p]$ comes from a map $R[1/p] \rightarrow K$ . So to show that $R[1/p]$ is finite, it suffices to show that it is Artinian. Since $R[1/p]$ is Noetherian,
|algebraic-geometry|ring-theory|commutative-algebra|algebraic-number-theory|p-adic-number-theory|
1
How careful do we have to be with choosing whiskers?
Assume spaces to be connected. In general, the set of unbased homotopy classes $[X,Y]$ differs from the set of pointed homotopy classes $\langle X,Y\rangle$ . This is captured by the $\pi_1(Y)$ action on the set $\langle X,Y\rangle$ . The (computable) examples I know for which this action is non-trivial are not manifolds. What's the story with manifolds? Let's do a simple example. Suppose we consider a connected manifold $M$ , together with a chosen basepoint $b$ . Consider $m\in M$ away from the basepoint and unbased homotopy classes of loops, for which a representative loop runs through $m$ . Can we always pick a whisker from $b$ to $m$ to get based homotopy classes of loops (with basepoint $b$ ) such that forgetting the basepoint gives back the same homotopy class? In my mind, this is a question about choosing a whisker that does not pick up any $\pi_1$ -action. I'm not entirely sure, but I think that if the target space is connected, there is a surjection $\langle X,Y\rangle\rightarrow[X,Y]$ .
If $\gamma\colon S^1\rightarrow M$ is some unbased loop and $m=\gamma(1)$ , then picking a path $\eta$ from the basepoint $b$ to $m$ in $M$ (here, we use that $M$ is connected) yields a based loop $\eta\gamma\eta^{-1}$ in $M$ and this loop is freely homotopic to $\gamma$ (the homotopy traverses at time $t$ the loop $\eta\vert_{[t,1]}\gamma\eta\vert_{[t,1]}^{-1}$ ). If $\eta^{\prime}$ is another such path from $b$ to $m$ , then $\eta^{\prime}\eta^{-1}$ is a based loop in $M$ and it operates taking $\eta\gamma\eta^{-1}$ to $\eta^{\prime}\gamma{\eta^{\prime}}^{-1}$ . Thus, distinct choices of such "whiskers" realize distinct elements in the same orbit of the $\pi_1(M,b)$ -action (and, conversely, one can show that any two elements in the same orbit can be transformed into another by such a choice). In particular, this argument demonstrates that we have a surjection $\pi_1(M,b)=\langle S^1,M\rangle\twoheadrightarrow[S^1,M]$ . The same is true if we replace $S^1$ by any well-pointed space $
|algebraic-topology|manifolds|
1
Solving equation with two floor functions
I'm trying to solve the following question \begin{equation*} \left\lfloor \frac{\left\lfloor \frac{3\lfloor x\rfloor }{2}\right\rfloor }{9}\right\rfloor =4 \end{equation*} I got the following inequalities: $ n \leq x \lt n+1 \\ m \leq \frac{3n}{2} \lt m+1 \\ 36 \leq m \lt 45 \\ ⌊x⌋= n \\ ⌊\frac{3n}{2}⌋= m$ I don't know how to deal with the floor functions, so I have no idea where to start. If someone could walk me through the process that would be great!
Note that $\lfloor x\rfloor=n \Leftrightarrow n\leqslant x < n+1$ . We can simplify the equation and get inequalities. Further, since the floor function can only take values in $\mathbb{Z}$ , we can discuss each possible value case by case.
|ceiling-and-floor-functions|
0
Is there any relationship between these two equal integrals $\int_0^\infty\frac{\sin x}{x + x^2}dx=\int_0^\infty\frac{\pi-2\tan^{-1}x}{2 e^x}dx$
With some intermediate derivation results I found the two integrals are exactly the same. But why? \begin{align} I=\int_0^\infty\frac{\sin x}{x + x^2}\mathrm dx=\int_0^\infty\frac{\pi-2\tan^{-1}x}{2 e^x}\mathrm dx \end{align} Both are equal to \begin{align} I&=\operatorname{Si}(1)\cos(1)-\operatorname{Ci}(1)\sin(1)+\frac{\pi}{2}\left(1-\cos(1)\right)\\ &\approx0.9493467025590832615920\ldots \end{align} where $\operatorname{Ci}$ and $\operatorname{Si}$ are cosine and sine integrals, respectively. Note: The numerator in the second integrand is twice the Laplace transform of the $\operatorname{sinc}$ function $$\mathcal{L}\left[\frac{\sin t}{t}\right](x)=\tan^{-1}{\frac1x}=\frac{\pi}{2}-\tan^{-1}x$$
Integrate the function $$f(z) = \frac{e^{iz}}{z(1+z)} $$ around the contour $$[r, R] \cup Re^{i[0,\pi/2]} \cup [iR, ir] \cup re^{i[\pi/2,0]}. $$ $Re^{i[0,\pi/2]}$ is a large quarter-circle in the first quadrant of the complex plane of radius $R$ , and $re^{i[\pi/2,0]} $ is a small quarter-circle in the first quadrant of the complex plane of radius $r$ . Let's call the small quarter-circle $C_{r}$ . Letting $R \to \infty$ , the integral vanishes on the large quarter-circle by Jordan's lemma. So we have $$\int_{r}^{\infty} \frac{e^{ix}}{x(1+x)} \, \mathrm dx - \int_{r}^{\infty} \frac{e^{-x}}{x(1+ix)} \, \mathrm dx + \int_{C_{r}} f(z) \, \mathrm dz= 0.$$ Since $f(z)$ has a simple pole at $z=0$ and $C_{r}$ is a clockwise-oriented quarter-circle, $\int_{C_{r}} f(z) \, \mathrm dz$ goes to $-\frac{i \pi}{2} \operatorname*{Res}_{z = 0}f(z) = -\frac{i \pi}{2}$ as $r \to 0$ . Then equating the imaginary parts on both sides of the equation, we have $$ \begin{align} \int_{0}^{\infty} \frac{\sin (x)}{x(1+x)} \, \m
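The common value $\approx 0.9493467\ldots$ is easy to confirm numerically from the rapidly decaying second integral, e.g. with a composite Simpson rule (a sketch; the truncation point $40$ is a choice that keeps the neglected tail below $10^{-17}$):

```python
import math

def g(x):
    # integrand of the second, exponentially decaying form
    return (math.pi - 2 * math.atan(x)) / (2 * math.exp(x))

# composite Simpson's rule on [0, 40] with 4000 subintervals
n, a, b = 4000, 0.0, 40.0
h = (b - a) / n
s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
I = s * h / 3
```

The result agrees with the closed form $\operatorname{Si}(1)\cos 1-\operatorname{Ci}(1)\sin 1+\frac{\pi}{2}(1-\cos 1)$ to well beyond six decimal places.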
|integration|
0
Solve The Partial Differential Equation
Solve the DE : $t\frac{\partial ^2 U}{\partial x \partial t}+2\frac{\partial U}{\partial x}=x^2$ Here's the solution: (step 1) $\frac{\partial}{\partial x}[t\frac{\partial U}{\partial t}+2U]=x^2$ , then integrate with respect to x (step 2) $t\frac{\partial U}{\partial t}+2U=\frac{1}{3}x^3+R(t)$ , then multiply with t (step 3) $t^2\frac{\partial U}{\partial t}+2tU=\frac{1}{3}tx^3+tR(t)$ (step 4) $\frac{\partial}{\partial t}[t^2U] = \frac{1}{3}tx^3+t R(t)$ , then integrate with respect to t and we obtain (step 5) $t^2U=\frac{1}{6}t^2x^3 + G(t)$ (step 6) $U=\frac{1}{6}x^3+F(t)$ However I don't understand why we need to multiply it with $t$ after step 2? Can't we integrate it directly with respect to $t$ ? And also, why in step 4 the $2tU$ is gone?
You could theoretically integrate either side in step 2, but it doesn't lead anywhere: $$ \begin{aligned} \int \left(t\frac{\partial U}{\partial t} + 2U\right)\mathrm{d}t &=\int t\frac{\partial U}{\partial t}\mathrm{d}t + 2\int U\mathrm{d}t\\ &= t U - \int U\mathrm{d}t + 2\int U\mathrm{d}t\\ &= t U + \int U\mathrm{d}t \end{aligned} $$ where I integrated by parts to obtain the second line. As you can see, this introduces an anti-derivative of $U$ which makes the problem more difficult. Multiplying by $t$ allows you to use the product rule in the opposite way you normally do. Indeed, if I have two functions $f, g$ then $f^\prime g + fg^\prime$ can be simply written as $(fg)^\prime$ by the product rule. Multiplying by $t$ allows you to do this with $f=U$ and $g = t^2$ .
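A finite-difference spot check (in Python) that $U=\frac16 x^3+F(t)$ solves $t\,U_{xt}+2U_x=x^2$; the particular choice $F(t)=c/t^2$ with $c=3.7$ is arbitrary and not from the original:

```python
def U(x, t, c=3.7):
    # candidate solution U = x^3/6 + F(t), with the hypothetical choice F(t) = c/t^2
    return x ** 3 / 6 + c / t ** 2

def pde_lhs(x, t, h=1e-4):
    # t * U_xt + 2 * U_x via central finite differences
    U_x = (U(x + h, t) - U(x - h, t)) / (2 * h)
    U_xt = (U(x + h, t + h) - U(x + h, t - h)
            - U(x - h, t + h) + U(x - h, t - h)) / (4 * h * h)
    return t * U_xt + 2 * U_x
```

Since $U_x=x^2/2$ is independent of $t$, the mixed term vanishes and the left side reduces to $x^2$, which the numerical check reproduces.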
|partial-differential-equations|
0
Proving that the series $\sum_{n=1}^{\infty}f_n(1)$ converges
I am working on an exercise that goes like this: consider the functions $f_n:[0,1]\rightarrow\mathbb{R}$ for $n\in\mathbb{N}$ such that they are continuous and that $\sum_{n=1}^{\infty}f_n(x)$ converges uniformly on $[0,1)$ . Prove that $\sum_{n=1}^{\infty}f_n(1)$ converges. I'm not entirely sure how one could prove this. I am aware of uniform convergence for sequences of functions, but in this case I could use some help outlining the method of the proof because I am not sure how to use the uniform convergence of series of $f_n(x)$ to prove the claim. Thank you.
$\forall \varepsilon >0, \exists N>0,\forall n>N, \forall p\in \mathbb Z^{+},\forall x\in [0,1)$ , $$|f_{n+1}(x)+\ldots + f_{n+p}(x)| < \varepsilon.$$ Due to the continuity of $f_{n}(x)$ , we let $x$ tend towards $1$ , to obtain $$|f_{n+1}(1)+\ldots + f_{n+p}(1)|\leq\varepsilon.$$ According to the Cauchy criterion, $$\sum^{+\infty}_{n=1}f_{n}(1)$$ converges.
|real-analysis|sequences-and-series|uniform-convergence|
0
Understanding the upper bound implications of $R(p,n) \le \log_p n$ in the context of Wikipedia's proof of Bertrand's Postulate
In Wikipedia's proof of Bertrand's Postulate , in the second lemma, it is concluded that: $$R = R(p,{{2n}\choose{n}}) \le \log_p 2n$$ where $R(p,n)$ is the p-adic order of ${2n}\choose{n}$ Later in the main proof, the implication is: $$\prod\limits_{p \le \sqrt{2n}}p^{R(p,{{2n}\choose{n}})} \le (2n)^{\pi(\sqrt{2n})}$$ It seems to me that there is a slightly stronger upper bound that is also implied. $$\prod\limits_{p \le \sqrt{2n}}p^{R(p,{{2n}\choose{n}})} \le \frac{(2n)!}{(2n - \pi(\sqrt{2n}))!}$$ Am I wrong? Here's my thinking: (1) For each prime $p \le \sqrt{2n}$ , there exists one unique number $p^{R(p,{{2n}\choose{n}})}$ that is less than or equal to $2n$ . (2) Since each $p^{R(p,{{2n}\choose{n}})} \le 2n$ and distinct, it follows that $\prod\limits_{p \le \sqrt{2n}} p^{R(p,{{2n}\choose{n}})}$ will be less than or equal to $\frac{(2n)!}{(2n - \pi(\sqrt{2n}))!}$ Edit: I made the changes pointed out by John Omielan.
Your slightly stronger upper bound, and your thinking regarding it, are both correct. Good work. Nonetheless, FWIW, here's an alternate way to explain your (2). Let $\sigma$ be a permutation of $p^{R(p,{{2n}\choose{n}})}$ for all primes $p \le \sqrt{2n}$ , with the values being in strictly decreasing order. With $m = \pi(\sqrt{2n})$ , we therefore have $$2n \ge \sigma_{1} \gt \sigma_{2} \gt \ldots \gt \sigma_{m} \;\;\to\;\; \sigma_{i} \le 2n - (i - 1) \; \forall \; 1 \le i \le m$$ Thus, by the commutative property of multiplication, we then get $$\prod_{p \le \sqrt{2n}}p^{R(p,{{2n}\choose{n}})} = \prod_{i=1}^{m}\sigma_{i} \le \\ \prod_{i=1}^{m}(2n - i + 1) = (2n)\cdots(2n-\pi(\sqrt{2n})+1) = \frac{(2n)!}{(2n - \pi(\sqrt{2n}))!}$$
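Both bounds can be spot-checked for a concrete value (a sketch; $n=50$ is an arbitrary choice, and the prime list is built by trial division for brevity):

```python
from math import comb, factorial, isqrt

def ord_p(p, m):
    # exponent of the prime p in the integer m
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

n = 50
N = comb(2 * n, n)
primes = [p for p in range(2, isqrt(2 * n) + 1)
          if all(p % q for q in range(2, p))]

prod = 1
for p in primes:
    prod *= p ** ord_p(p, N)            # p^{R(p, C(2n, n))}

m = len(primes)                          # pi(sqrt(2n))
wiki_bound = (2 * n) ** m                # Wikipedia's bound
stronger = factorial(2 * n) // factorial(2 * n - m)   # the proposed bound
```

For $n=50$ the product is far below both bounds, and the falling-factorial bound sits below $(2n)^{\pi(\sqrt{2n})}$ as expected.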
|proof-explanation|logarithms|prime-factorization|
1
UMVUEs for the means of $3$ independent normal distributions with the sum of means being $1$
Let $\theta_1, \theta_2$ and $\theta_3$ be nonnegative parameters with the constraint $\theta_1+\theta_2+\theta_3=1$ . We observe $X_{i 1}=\theta_1+\epsilon_{i 1}, X_{i 2}=\theta_2+\epsilon_{i 2}, X_{i 3}=\theta_3+\epsilon_{i 3}$ for $i=1,2, \ldots, n$ , where $\epsilon_{i k} \sim N(0,1)$ are independent normal random variables $(k=1,2,3)$ . Derive the UMVUE for $\theta_1$ . My problem in solving it is, I don't know how to accurately use the constraint $\theta_1+\theta_2+\theta_3=1$ , and I also don't have any idea what steps would it be like when finding the UMVUE like this. Can you provide any suggestion/solution for me? Thank you!
In general, one almost always uses one of two methods: either you proceed through Lehmann-Scheffe directly or you find an unbiased estimator and then condition on a complete sufficient statistic. See Chapter 2 of Lehmann-Casella for details (for example; most math stat books will touch on material like this). In either case, we need to find a complete sufficient statistic in the first place. There's lots of ways to do that, but in this case (and many cases) we may use the fact that we are dealing with an exponential family. Start by abbreviating $$T_j(\mathbf X) = \sum_{i=1}^n X_{ij}.$$ The joint density is (up to a constant that I can never remember right) $$f(\mathbf x) = \prod_{i=1}^n \exp\left(- \frac{1}{2}\sum_{j=1}^3 (x_{ij} - \theta_j)^2\right) = \exp \left( -\frac{1}{2} \left(\sum_{i=1}^n\sum_{j=1}^3 x_{ij}^2 - 2x_{ij}\theta_j + \theta_j^2\right) \right)$$ which simplifies to $$\exp \left(\theta_1 T_1(\mathbf x) + \theta_2 T_2(\mathbf x) + \theta_3 T_3(\mathbf x) - \frac{1}{2}
|probability|probability-theory|statistics|statistical-inference|umvue|
1
Show that $\cot{B}=6-\sqrt{3}$?
Write the expansion of $\sin(A+B)$ . It is given that in a triangle ABC, $\angle C=30^\text{o}$ , $\sin{A} =3\sin{B}$ . Show that $\cot{B}=6-\sqrt{3}$ . My try The reason they have asked us to write the expansion could be just to make use of it to substitution $A+B=180^\text{o}-C$ to , $\sin(A+B)=\sin{A}\cos{B}+\cos{A}\sin{B}$ which leads to, $\sin(30)=3\sin{B}\cos{B}+\cos{A}\sin{B}$ How do I proceed? Thanks in advance.
I suspect the $A$ and $B$ in $\sin(A + B)$ were not necessarily meant to be the same as the $A$ and $B$ in the triangle. In particular, note that since $\measuredangle C = 30^{\circ}$ , then $A + B = 150^{\circ} \;\to\; A = 150^{\circ} - B$ . Thus, from the given relation and using the sine expansion you provided, we get $$\begin{equation}\begin{aligned} \sin A & = 3\sin B \\ \sin(150^{\circ} + (-B)) & = 3\sin B \\ \sin(150^{\circ})\cos(-B) + \cos(150^{\circ})\sin(-B) & = 3\sin B \\ \frac{\cos B}{2} + \frac{(-\sqrt{3})(-\sin B)}{2} & = 3\sin B \\ \cos B + \sqrt{3}\sin B & = 6\sin B \\ \cos B & = (6 - \sqrt{3})\sin B \\ \cot B & = 6 - \sqrt{3} \end{aligned}\end{equation}$$
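A quick numerical cross-check of the conclusion: choose $B$ with $\cot B=6-\sqrt3$, set $A=150^\circ-B$, and verify $\sin A=3\sin B$ (a sketch, not part of the derivation):

```python
import math

target = 6 - math.sqrt(3)
B = math.atan(1 / target)          # so that cot(B) = 6 - sqrt(3)
A = math.radians(150) - B          # A + B = 150 degrees

lhs_val = math.sin(A)
rhs_val = 3 * math.sin(B)
cotB = math.cos(B) / math.sin(B)
```

Both checks hold to machine precision, consistent with the algebra above.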
|trigonometry|
1
A unique circle with 3 points proof
I have proved the theorem: There is only one circle passing through three given non-collinear points, in both geometrical and algebraic ways. There is one question that I just have no idea about: "the accuracy and limitations of this technique". I don't know what to write...! thank you
A few years late but I'll give the algebraic approach a try. Proof: A circle's equation in the Cartesian plane can be represented as: $$ x^2 + y^2 + ax + by + c = 0 $$ where the center of the circle is $(-a/2, -b/2)$ and $c$ is a constant. For three points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ , we have the system of equations: \begin{align*} x_1^2 + y_1^2 + ax_1 + by_1 + c &= 0 \\ x_2^2 + y_2^2 + ax_2 + by_2 + c &= 0 \\ x_3^2 + y_3^2 + ax_3 + by_3 + c &= 0 \end{align*} We arrange this system into matrix form $Ax = b$ , where: $$ A = \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix}, \quad x = \begin{bmatrix} a \\ b \\ c \end{bmatrix}, \quad b = \begin{bmatrix} -x_1^2 - y_1^2 \\ -x_2^2 - y_2^2 \\ -x_3^2 - y_3^2 \end{bmatrix} $$ Matrix $A$ is derived from the coefficients of $x$ and $y$ in the circle's equation, and vector $b$ contains the square terms moved to the other side of the equation. For the matrix $A$ to have a unique solution for $x$ , it
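The linear system can be solved explicitly, e.g. by Cramer's rule, for a concrete non-collinear triple (a Python sketch; the three points are an arbitrary example of mine):

```python
def circle_through(p1, p2, p3):
    # solve x^2 + y^2 + a x + b y + c = 0 through three points via Cramer's rule
    rows = [(x, y, 1.0, -(x * x + y * y)) for x, y in (p1, p2, p3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [r[:3] for r in rows]
    d = det3(A)                    # nonzero exactly when the points are non-collinear
    sol = []
    for j in range(3):
        M = [list(r[:3]) for r in rows]
        for i in range(3):
            M[i][j] = rows[i][3]
        sol.append(det3(M) / d)
    a, b, c = sol
    return (-a / 2, -b / 2), (a * a / 4 + b * b / 4 - c) ** 0.5  # center, radius

center, radius = circle_through((0, 0), (4, 0), (0, 6))
```

For these points the unique circle has center $(2, 3)$ and radius $\sqrt{13}$, illustrating that the nonvanishing determinant pins down exactly one circle.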
|geometry|circles|
0
Understanding normal curvature
I want to prove that for every compact surface $S\subset \mathbb{R}^3$ there must exist an elliptic point. The proof follows like this: consider the function $f:\mathbb{R}^3\to\mathbb{R}$ like $f(p)=||p||^2.$ Since S is compact, the restriction $f_{|_S}$ must attain a maximum, lets say for $p_0\in S$ . Consider an unit tangent vector $v\in T_pS$ and a parametrized curve $\alpha:(-\epsilon,\epsilon)\to S$ such that $\alpha(0)=p_0$ and $\alpha'(0)=v$ . Consider the function $h=f\circ \alpha:(-\epsilon,\epsilon)\to\mathbb{R}$ , which attains a maximum at $t=0$ . Since it's a maximum we have that $$h'(0)=2\langle \alpha'(0),\alpha(0)\rangle=0,$$ which implies that $\alpha(0)=p$ (seen as the vector $p-(0,0,0)$ of $\mathbb{R}^3$ ) is perpendicular to $S$ at the point $p_0$ , so $p_0/||p_0||$ is an unit normal at this point. Also we have that $$h''(0)=2\langle \alpha''(0),\alpha(0)\rangle+2\langle \alpha'(0),\alpha'(0)\rangle=2(\alpha''(0)\cdot p_0+1)\leq 0\implies \alpha''(0)\cdot \frac{p_0}
Recall that if $\alpha:(-\epsilon,\epsilon)\to {\mathbb R}^3$ is parametrized by arc length $s$ , then $\alpha'(s)$ is the unit tangent, and $$ \| \alpha''(s)\| $$ is defined to be the curvature of $\alpha$ . Now if the curve $\alpha:(-\epsilon,\epsilon)\to S\subset {\mathbb R}^3$ lies on a surface $S$ , still parametrized by arc length $s$ , then $$ \alpha''(s)\cdot N_{\alpha(s)} $$ is defined to be the normal curvature of $\alpha$ on the surface. It is the projection of the curvature of $\alpha$ in the normal direction of the surface $S$ . Your formula for normal curvature is obtained only after an "integration by parts" calculation: $$ \alpha''(s)\cdot N_{\alpha(s)} = (\alpha'(s)\cdot N_{\alpha(s)})' - \alpha'(s)\cdot N_{\alpha(s)}' = -\alpha'(s)\cdot dN_{\alpha(s)}(\alpha'(s)). $$ Here $\ '=\frac{d}{ds}$ , and note $\alpha'(s)\cdot N_{\alpha(s)}=0$ . In particular, at $s=0$ , the normal curvature is $$ \alpha''(0)\cdot N_{p_0} = -d_{p_0} N(v)\cdot v, $$ where $v=\alpha'(0)$ .
|differential-geometry|curves|surfaces|
1
Solving $25^n + 16^n \equiv 1 \pmod{121}$
Original Question: Solve for all positive integers $n$ such that $25^n + 16^n \equiv 1 \pmod{121}$ . I began by substituting $k=2n$ to obtain $$5^k + 4^k \equiv 1 \pmod{11}$$ Considering this modulo $11$ , it can be found that $k \equiv 4 \pmod{5}$ . Then, I observed that $5^3 \equiv 4 \pmod{121}$ , and so I split the congruence into three cases: Case-1 $k \equiv 0 \pmod{3}$ . Then, $k=3r$ . $$(4^k)^3+4^k \equiv 1 \pmod{121}$$ It follows that $4^k \equiv 64 \pmod{121}$ . Case-2 $k \equiv 1 \pmod{3}$ . Then, $k=3r+1$ . $$4 \cdot (4^r)^3 +5 \cdot (4^r) \equiv 1 \pmod{121}$$ It follows that $4^k \equiv 37 \pmod{121}$ . Case-3 $$k \equiv 2 \pmod{3}.$$ Then, $$ k=3r+2.$$ $$16 \cdot (4^r)^3 +25 \cdot (4^r) \equiv 1 \pmod{121}.$$ It follows that $4^r \equiv 80 \pmod{121}$ . This leaves the congruences: $4^k \equiv 64 \pmod{121}$ for $k \equiv 4 \pmod{15}$ $4^k \equiv 37 \pmod{121}$ for $k \equiv 9 \pmod{15}$ $4^k \equiv 80 \pmod{121}$ for $k \equiv 14 \pmod{15}$ From here, I am unsure how to
Note that $121=11^2$ , so the Euler totient function $\varphi$ for $11^2$ is $$ \varphi(121)= \varphi(11^2)= \left(1-\frac 1{11}\right)\cdot 11^2=10\cdot 11=110\ . $$ The numbers $16=4^2$ , $25=5^2$ are relatively prime to $121$ , so their multiplicative orders modulo $121$ are divisors of $55$ , since they are already squares. So $n_0$ is a solution if and only if all $n$ of the shape $n_0+55k$ are solutions. The congruence $$ 25^n+16^n\equiv 1\qquad\text{ modulo }11 $$ has - as mentioned - solutions of the shape $n\equiv 2$ modulo $5$ . It is thus sufficient to check all $n$ from $0$ to $54$ , and among them only those which are two modulo five. So we check the eleven numbers $n$ among $2, 7,12,17,22,27,32,37,42,47, 52$ . For them we compute, with equality (considered in the ring $\Bbb Z/121$ ) written instead of equivalence (in $\Bbb Z$ ) some powers, and sums of them. Because of $25^{11}=3$ , $16^{11}=3^3$ , it is useful to observe that $3$ has multiplicative order five, $3^5=243=1$ in $\Bbb Z/121$ .
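The case check can be delegated to a brute-force search over one doubled period $1\le n\le 110$, which also confirms the structure described above (every solution is $2$ modulo $5$, and solutions recur with period $55$):

```python
# all n in [1, 110] with 25^n + 16^n == 1 (mod 121), via built-in modular powers
sols = [n for n in range(1, 111)
        if (pow(25, n, 121) + pow(16, n, 121)) % 121 == 1]
print(sols)
```

The search is instant since `pow` with a third argument does modular exponentiation.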
|elementary-number-theory|contest-math|
0
Solving equation with two floor functions
I'm trying to solve the following question \begin{equation*} \left\lfloor \frac{\left\lfloor \frac{3\lfloor x\rfloor }{2}\right\rfloor }{9}\right\rfloor =4 \end{equation*} I got the following inequalities: $ n \leq x \lt n+1 \\ m \leq \frac{3n}{2} \lt m+1 \\ 36 \leq m \lt 45 \\ ⌊x⌋= n \\ ⌊\frac{3n}{2}⌋= m$ I don't know how to deal with the floor functions, so I have no idea where to start. If someone could walk me through the process that would be great!
Suppose $\lfloor x\rfloor = n$ ; as you noted, this means that $n\leq x\lt n+1$ . Next, suppose that $\lfloor \frac{3\lfloor x\rfloor}{2}\rfloor = m$ ; as before, this means that $m\leq \frac{3\lfloor x\rfloor}{2}\lt m+1$ ; from this, we obtain $\lfloor \frac{m}{9}\rfloor=4$ . This leads to $4\leq \frac{m}{9}\lt 5$ , which of course means that $36\leq m\lt 45$ , or $36\leq m\leq 44$ . Now we can back-substitute, so to speak. If $36\leq m\leq 44$ , then we have $36\leq \frac{3\lfloor x\rfloor}{2}\lt 45$ , which means that $24\leq \lfloor x\rfloor \lt 30$ , or $24\leq \lfloor x\rfloor \leq 29$ In other words, $x\in [24,30)$ . You can verify this by checking that $x$ taken from the endpoints of this interval satisfy the equation; given that the floor function is nondecreasing, this means that any $x$ within that interval will satisfy it
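The interval $[24,30)$ can be confirmed by evaluating the nested floors directly (a quick Python check, not part of the argument):

```python
import math

def nested(x):
    # left-hand side of the equation: floor(floor(3*floor(x)/2) / 9)
    return math.floor(math.floor(3 * math.floor(x) / 2) / 9)

# integers whose nested floor value equals 4
integer_sols = [n for n in range(-50, 100) if nested(n) == 4]
```

Since the expression depends on $x$ only through $\lfloor x\rfloor$, the integer solutions $24,\dots,29$ extend to all real $x\in[24,30)$.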
|ceiling-and-floor-functions|
0
Solving equation with two floor functions
I'm trying to solve the following question \begin{equation*} \left\lfloor \frac{\left\lfloor \frac{3\lfloor x\rfloor }{2}\right\rfloor }{9}\right\rfloor =4 \end{equation*} I got the following inequalities: $ n \leq x \lt n+1 \\ m \leq \frac{3n}{2} \lt m+1 \\ 36 \leq m \lt 45 \\ ⌊⌋= n \\ ⌊\frac{3n}{2}⌋= m$ I don't know how to deal with the floor functions, so I have no idea where to start. If someone could walk me through the process that would be great!
Just take deep breaths. $\left\lfloor \frac{\left\lfloor \frac{3\lfloor x\rfloor }{2}\right\rfloor }{9}\right\rfloor =4$ means $4\le \frac{\lfloor \frac {3\lfloor x\rfloor}2\rfloor}9 < 5$ , so $36 \le \lfloor \frac {3\lfloor x\rfloor}2\rfloor < 45$ . So we know that $\lfloor \frac {3\lfloor x\rfloor}2\rfloor$ is an integer between $36$ (inclusive) and $45$ (exclusive), so $36 \le \frac {3\lfloor x\rfloor}2 < 45$ , hence $72 \le 3\lfloor x \rfloor < 90$ and $24 \le \lfloor x \rfloor < 30$ , and we know $\lfloor x \rfloor$ is an integer between $24$ and $30$ , so $x$ may be any real number $x \in [24, 30)$ .
|ceiling-and-floor-functions|
0
Moduli $m \ge 2$ such that $\{ a^a \bmod m : 1 \le a \le m \text{ and } \gcd(a, m) = 1 \}$ forms a reduced residue system
In AtCoder Regular Contest 172, the problem E. Last 9 Digits asks to solve $n^n \equiv X \pmod{10^9}$ for $n$ (the smallest positive $n$ , more precisely), given $X$ coprime to $10^9$ . The intended solution is, first solve it in reducing $10^2$ , then lift the solution to reducing $10^3$ , then $10^4$ , …, finally $10^9$ . The lifting is realizable because $\{ a^a \bmod m : 1 \le a \le m \text{ and } \gcd(a, m) = 1 \}$ forms a reduced residue system for any $m$ that is a power of $10$ . In other words, $a \mapsto a^a \bmod m$ is a bijective mapping on the reduced residue system modulo $10^k$ itself. My question is, can we find a pattern of all integers $m$ having this property? It doesn’t seem that the sequence is on the OEIS. Update: It seems that the sequence coincides with A124240 (numbers $n$ such that $\lambda(n) \mid n$ where $\lambda$ is Carmichael’s lambda function), with only one difference: $10$ is not in A124240. How to prove it?
Ultimately it looks like an argument using a primitive root can be repeated in each cyclic factor. After all, the group is abelian. When $\Bbb Z_m^×$ is not cyclic, we still get $a^a\equiv b^b\pmod m\implies a\equiv b\pmod {\lambda (m)}.$ This should be enough. Because now $ a^a\equiv b^b\pmod m\implies a^a\equiv b^a\implies(a/b)^a\equiv1\implies a=b \lor \lambda(m)\mid b.$ But $(b,m)=1$ and we assume that $\lambda (m)\mid m.$
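The property itself is easy to test exhaustively for small moduli (a sketch; the sample moduli are my choices: $10$ and $100$ from the question, $8$ with $\lambda(8)\mid 8$, and $5$ as a failing case):

```python
from math import gcd

def aa_is_reduced_system(m):
    # does a -> a^a (mod m) permute the reduced residue system mod m?
    units = [a for a in range(1, m + 1) if gcd(a, m) == 1]
    return sorted(pow(a, a, m) for a in units) == units
```

For instance, mod $5$ both $1^1$ and $4^4$ are $1$, so the map is not injective there, while mod $10$ the images $1,7,3,9$ of $1,3,7,9$ form a reduced system.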
|elementary-number-theory|modular-arithmetic|
0
Continuity on the overlap in pasting lemma when not both sets are closed
The pasting lemma states the following for a function $f\colon X\to Y$ ( $X$ , $Y$ topological spaces): If $X$ can be decomposed as a union of two closed sets $F_1$ and $F_2$ on each of which $f$ 's restriction is continuous, then $f$ is continuous. The necessity of the sets being all closed is usually shown by considering examples of the following flavours: $F_1$ and $F_2$ are taken to be disjoint: For instance, taking $F_1 := (-\infty, 0]$ and $F_2:= (0, +\infty)$ and defining $f\colon \mathbb {R\to R}$ by $f(x) := 0$ if $x\le 0$ and $f(x) := 1/x$ otherwise. Even if $F_1\cap F_2\ne\emptyset$ , the example is continuous on the overlap: For instance, taking $F_1 := \mathbb Q$ and $F_2 := (\mathbb R\setminus\mathbb Q)\cup\{0\}$ with $f\colon\mathbb{R\to R}$ given by $f(x) := 0$ if $x\in F_1$ and $f(x) := x$ if $x\in F_2$ . Question: Can we have an example so that $f$ is discontinuous in the overlap ?
I think the answer is "yes", since continuity can be defined at a single point, and there are examples of functions that are continuous at exactly one point despite being defined on the whole real line.
|real-analysis|general-topology|continuity|
0
Prove that $(a^\frac23 + b^\frac23)^\frac32 > (a^2 + b ^2)^\frac12$ for $a, b$ being positive real numbers.
I'm working on an optimization problem in which to make my argument rigorous, I find myself needing to prove that $(a^\frac23 + b^\frac23)^\frac32 > (a^2 + b ^2)^\frac12$ for $a, b$ being positive real numbers. I've played around with it for around 3 hours already but couldn't seem to prove it. I hope someone could give me a hint as to how to approach it (e.g. which identities to apply). A full proof is appreciated as well! Thank you!
Does this work? $$\left(a^{\frac{2}{3}} + b^{\frac{2}{3}}\right)^{\frac{3}{2}} > \left(a^2+b^2\right)^{\frac{1}{2}} \Longleftrightarrow \left(a^{\frac{2}{3}} + b^{\frac{2}{3}}\right)^{3} > \left(a^2+b^2\right) \Longleftrightarrow$$ $$ a^2 + b^2 + 3(ab)^{\frac{2}{3}}\left(a^{\frac{2}{3}} + b^{\frac{2}{3}}\right) > a^2+b^2$$ which is true as $a, b \in \mathbb{R}^+$
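The strict inequality can also be spot-checked numerically over random positive pairs (a sketch; the seed and the sampling range are arbitrary):

```python
import random

def strict_ineq(a, b):
    # (a^(2/3) + b^(2/3))^(3/2) > (a^2 + b^2)^(1/2) for positive a, b
    return (a ** (2 / 3) + b ** (2 / 3)) ** 1.5 > (a * a + b * b) ** 0.5

random.seed(1)
trials = [(random.uniform(1e-3, 1e3), random.uniform(1e-3, 1e3))
          for _ in range(10_000)]
ok = all(strict_ineq(a, b) for a, b in trials)
```

The cross term $3(ab)^{2/3}(a^{2/3}+b^{2/3})$ keeps the gap well above floating-point noise on this range.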
|inequality|cauchy-schwarz-inequality|
1
Homomorphism between diedral group $D_3$ triangle isometries and $S_3$ identification problem
My question deals with the dihedral group $D_3$ of equilateral triangle 123 (1 top vertex, 2 bottom right vertex, 3 bottom left vertex). R1 is the counterclockwise rotation of 120 degrees. R2 is the counterclockwise rotation of 240 degrees. SA is the symmetry through the top angle bissectrice (exchanging vertex 2 and 3) SB is the symmetry through the right angle bissectrice (exchanging vertex 1 and 3) SC is the symmetry through the left angle bisssectrice (exchanging vertex 1 and 2) I constructed the Cayley table accordingly, and compared it to the $S_3$ Cayley table: they match very well, hence my deduction that: R1 matches with permutation (123) of $S_3$. R2 matches with permutation (321). SA matches with permutation (23). SB matches with permutation (12). SC matches with permutation (31). First question, is this correspondence fully right? The shapes of the two tables are quite similar, but I did not try all the possibilities. Second problem : I tried to deepen the above corresponde
Consider the subgroup $H:=\{\texttt{1},\texttt{SA}\}$ . Now: \begin{alignat}{1} \texttt{R1.SA.R1}^{-1} &= \texttt{R1.SA.R2} \\ &= \texttt{SB} \\ &\ne \texttt{SA} \\ \end{alignat} whence: $$H\cap(\texttt{R1.}H\texttt{.R1}^{-1})=\{\texttt{1}\}$$ This suffices to conclude that $D_3$ acts faithfully by left multiplication on the left quotient set $D_3/H$ , namely: $$D_3\hookrightarrow S_{D_3/H}\cong S_3$$ But both $D_3$ and $S_3$ have cardinality $6$ , hence actually: $$D_3\cong S_3$$ (Incidentally, this basically proves that $D_n\hookrightarrow S_n$ for every $n>2$ .)
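The conjugation computation $\texttt{R1.SA.R1}^{-1}=\texttt{SB}$ can be replayed mechanically with permutations of the vertices (a sketch, using the convention that permutations act as functions composed on the left):

```python
def compose(f, g):
    # (f o g)(v) = f(g(v)); permutations of the vertices {1, 2, 3} as dicts
    return {v: f[g[v]] for v in (1, 2, 3)}

E = {1: 1, 2: 2, 3: 3}
R1 = {1: 2, 2: 3, 3: 1}   # rotation by 120 degrees, the permutation (123)
R2 = {1: 3, 2: 1, 3: 2}   # rotation by 240 degrees, (132) = R1^{-1}
SA = {1: 1, 2: 3, 3: 2}   # reflection exchanging vertices 2 and 3
SB = {1: 3, 2: 2, 3: 1}   # reflection exchanging vertices 1 and 3

conjugate = compose(R1, compose(SA, R2))   # R1 . SA . R1^{-1}
```

The result equals `SB` and not `SA`, which is exactly the non-normality of $H$ used in the answer.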
|group-theory|
0
How to solve $|x-5| + |x-4| ≥ 3$
I was given the following question to solve for homework. The solution $S$ I got was $$S = \{x : x ≥ 6 \text{ or } x\leq3\}$$ I checked my solution with the answers provided and it was correct. My working out can be seen below: $|x-5| + |x-4| - 3 = \begin{cases} 2x-12 , & \text{if } x>5\\ -2, & \text{if }4<x\leq5\\ -2x+6, & \text{if } x\leq4 \end{cases}$ I then solved $2x-12 ≥ 0, -2 ≥ 0,$ and $-2x+6 ≥ 0$ . This is how I was instructed to do it but it does not make sense to me because why do we just dismiss the regions stated above, namely $x > 5$ , $4<x\leq5$ and $x≤4$ . Can someone please provide an explicit explanation?
In short, you do not disregard those conditions. A more complete solution might proceed as follows: If $x > 5$ , then $|x - 5| + |x - 4| - 3 = 2x - 12$ , which we require to be greater than or equal to $0$ . So $2x - 12 \ge 0$ and $x > 5$ , hence $x \ge 6$ . This single condition $x \ge 6$ meets both criteria. Similarly, if $4 < x \le 5$ , then $|x - 5| + |x - 4| - 3 = -2$ , which is always negative, so no solutions exist in the interval $4 < x \le 5$ . Finally, if $x \le 4$ , then $|x - 5| + |x - 4| - 3 = -2x + 6$ , and if we want this to be nonnegative, then $x \le 3$ , so the combined conditions $x \le 3$ and $x \le 4$ mean we must have $x \le 3$ . All together, this yields the solution $x \ge 6$ or $x \le 3$ .
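The combined solution set $x\le 3$ or $x\ge 6$ is easy to confirm on a fine grid (a quick Python check of the inequality itself):

```python
def satisfies(x):
    return abs(x - 5) + abs(x - 4) >= 3

# sample [0, 10] in steps of 0.01 and collect the points that satisfy the inequality
grid = [k / 100 for k in range(0, 1001)]
accepted = [x for x in grid if satisfies(x)]
```

Every accepted point lies in $(-\infty,3]\cup[6,\infty)$, and both endpoints $3$ and $6$ satisfy the inequality with equality.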
|calculus|algebra-precalculus|analysis|inequality|absolute-value|
0
How to solve $|x-5| + |x-4| ≥ 3$
I was given the following question to solve for homework. The solution $S$ I got was $$S = \{x : x ≥ 6 \text{ or } x\leq3\}$$ I checked my solution with the answers provided and it was correct. My working out can be seen below: $|x-5| + |x-4| - 3 = \begin{cases} 2x-12 , & \text{if } x>5\\ -2, & \text{if }4<x\leq5\\ -2x+6, & \text{if } x\leq4 \end{cases}$ I then solved $2x-12 ≥ 0, -2 ≥ 0,$ and $-2x+6 ≥ 0$ . This is how I was instructed to do it but it does not make sense to me because why do we just dismiss the regions stated above, namely $x > 5$ , $4<x\leq5$ and $x≤4$ . Can someone please provide an explicit explanation?
I) $U_1=\{x: 5\leq x\}$ . Then, $x-5+x-4\geq3\implies x\geq6$ . So, $V_1=\{x: 6\leq x\}$ and the first solution set is $S_1=U_1\cap V_1=\{x: 6\leq x\}.$ II) $U_2=\{x: 4\leq x\leq 5\}$ . Then, $-x+5+x-4\geq3\implies 1\geq3$ , false. So, $V_2=\emptyset$ and the second solution set is $S_2=U_2\cap V_2=\emptyset$ . III) $U_3=\{x: x\leq 4\}$ . Then, $-x+5-x+4\geq3\implies x\leq3$ . So, $V_3=\{x: x\leq3\}$ and the third solution set is $S_3=U_3\cap V_3=\{x: x\leq3\}.$ The solution set is then $S=S_1\cup S_2\cup S_3=\{x: x\leq3 \lor 6\leq x\}$
|calculus|algebra-precalculus|analysis|inequality|absolute-value|
1
Continuity on the overlap in pasting lemma when not both sets are closed
The pasting lemma states the following for a function $f\colon X\to Y$ ( $X$ , $Y$ topological spaces): if $X$ can be decomposed as a union of two closed sets $F_1$ and $F_2$ on each of which $f$ 's restriction is continuous, then $f$ is continuous. The necessity of both sets being closed is usually shown by considering examples of the following flavours: $F_1$ and $F_2$ are taken to be disjoint: for instance, taking $F_1 := (-\infty, 0]$ and $F_2:= (0, +\infty)$ and defining $f\colon \mathbb {R\to R}$ by $f(x) := 0$ if $x\le 0$ and $f(x) := 1/x$ otherwise. Even if $F_1\cap F_2\ne\emptyset$ , the example is continuous on the overlap: for instance, taking $F_1 := \mathbb Q$ and $F_2 := (\mathbb R\setminus\mathbb Q)\cup\{0\}$ with $f\colon\mathbb{R\to R}$ given by $f(x) := 0$ if $x\in F_1$ and $f(x) := x$ if $x\in F_2$ . Question: can we have an example where $f$ is discontinuous on the overlap?
This cannot happen for spaces where continuity is characterized by sequences (e.g. in metric spaces). For $j\in \{1,2\}$ let $(x_n^{(j)})_{n\in \mathbb{N}}\subseteq F_j$ be sequences such that $\lim_{n\rightarrow \infty} x_n^{(j)}=x$ . As $f_j$ is continuous on $F_j$ , we have $$\lim_{n\rightarrow\infty} f_j(x_n^{(j)})=f_j(x).$$ Moreover, if $x\in F_1\cap F_2$ , then $f_1(x)=f_2(x)$ . As $F_1\cup F_2=X$ , we get for every $(x_n)_{n\in \mathbb{N}}\subseteq X$ with $\lim_{n\rightarrow \infty} x_n=x$ that $$\lim_{n\rightarrow \infty} f(x_n)=f(x),$$ which (by assumption) implies that $f$ is continuous at $x$ . In fact this can never happen, in any topological space. Pick $x\in F_1\cap F_2$ and pick some neighborhood $V$ of $f(x)$ in $Y$ . As $f_1, f_2$ are continuous, there exist open neighborhoods $x\in U_j\subseteq X$ such that $$U_j\cap F_j\subseteq f_j^{-1}(V).$$ Then $U=U_1\cap U_2$ is an open neighborhood of $x$ in $X$ and we have $$ U= (U\cap F_1)\cup (U\cap F_2)\subseteq f_1^{-1}(V)\cup f_2^{-1}(V)=f^{-1}(V).$$ Thus, $f$ is continuous at $x$ .
|real-analysis|general-topology|continuity|
1
Understanding the conditions for the Lagrange Inversion Formula
Lagrange Inversion Formula: Let $A(z) = \sum_{k \ge 0} a_k z^k$ be a power series in $\mathbb{C}[[z]]$ with $a_0 \ne 0$ . Then the equation $$B(z) = zA(B(z)) \qquad (1)$$ has a unique solution $B(z) \in \mathbb{C}[[z]]$ , whose coefficients are given by $b_n = \frac{1}{n}[z^{n-1}]A(z)^n$ . Consider what happens if $a_0 = 0$ . What value does $[z^{n-1}]A(z)^n$ take? Is $B(z)$ still a solution to $(1)$ ? I understand that the condition $a_0 \ne 0$ is equivalent to $A(z)$ having a (multiplicative) inverse (see Wikipedia ). However, I do not see what, in general, can now be said about $[z^{n-1}]A(z)^n$ . Could you please give me a hint?
Hint: If $a_0=0$ we have \begin{align*} \left(A(z)\right)^n&=\left(a_1z+a_2z^2+\cdots\right)^n=a_1^nz^n+\cdots \end{align*} so that the $n$ -th power of $A$ has terms in $z$ with powers greater than or equal to $n$ only. It follows that for $n\geq 1$ \begin{align*} \color{blue}{[z^{n-1}]\left(A(z)\right)^n=0} \end{align*}
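The hint is easy to confirm with a toy example (a Python sketch; the particular series $A(z)=z+2z^2+3z^3$ is an arbitrary choice with $a_0=0$):

```python
# Coefficient lists indexed by power of z; poly_mul is plain convolution.
def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

A = [0, 1, 2, 3]                      # a_0 = 0, a_1 = 1, a_2 = 2, a_3 = 3
for n in range(1, 7):
    power = [1]                       # compute A(z)^n
    for _ in range(n):
        power = poly_mul(power, A)
    coeff = power[n - 1]              # [z^(n-1)] A(z)^n
    assert coeff == 0                 # lowest-order term of A^n is a_1^n z^n
print("[z^(n-1)] A(z)^n = 0 for n = 1..6")
```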
|combinatorics|complex-analysis|power-series|analytic-combinatorics|
1
Prove $\sum\limits_{i=1}^{2024}a_i<314$ where $a_1=2$, $a_i=2\sin\frac{a_{i-1}}2$
Let $a_1=2$ and $a_i=2\sin\frac{a_{i-1}}2$ for $i\ge2$ . Prove that $\sum\limits_{i=1}^{2024}a_i<314$ . In fact, $\sum\limits_{i=1}^{2024}a_i\approx298.796$ , so the inequality is very strong. I tried to establish inequalities with Taylor series, but the bound I obtained was not strong enough. We need better (more accurate) ways to estimate the series. This question is from the " $\pi$ day math contest" of THU, which has ended.
The following is strongly inspired by JimmyK4542's comment and Convergence of $\sqrt{n}x_{n}$ where $x_{n+1} = \sin(x_{n})$ . Set $b_n = a_n/2$ ; then $b_1 = 1$ and $b_{n+1} = \sin(b_n)$ . In the linked question it is shown that $$ b_n \sim \sqrt{\frac{3}{n}} \text{ for } n \to \infty \, . $$ This suggests that an explicit estimate of the form $$ \tag{1} b_n \le \sqrt{\frac{3}{n+2}} $$ might hold for all $n \ge 1$ . If that is true then $$ \sum_{n=1}^{2024} a_n \le 2 \sum_{n=1}^{2024} \sqrt{\frac{3}{n+2}} = 2 \sqrt 3 \sum_{n=3}^{2026} \frac{1}{\sqrt n} < 2\sqrt 3 \int_2^{2026} \frac{dx}{\sqrt x} = 4\sqrt 3\left(\sqrt{2026}-\sqrt 2\right) < 303 < 314 \, . $$ In order to prove $(1)$ by induction we need to show that $$ \sin \sqrt{\frac 3k} \le \sqrt{\frac{3}{k+1}} \text{ for } k \ge 3 \, . $$ With the substitution $x = \sqrt{3/k}$ this is equivalent to $$ \tag{2} \sin^2(x) \le \frac{3x^2}{3 + x^2} \text{ for } 0 < x \le 1 \, , $$ or $$ \tag{3} \frac{1}{\sin^2(x)} > \frac{1}{x^2} + \frac 13 \text{ for } 0 < x \le 1 \, . $$ Inequality $(3)$ follows from the series expansion $$ \frac{1}{\sin^2(x)} = \frac{1}{x^2} + \frac{1}{3} + \frac{x^2}{15} + \cdots \, , $$ all of whose omitted terms are positive for $0 < x < \pi$ .
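Numerically, both estimate $(1)$ and the approximate value of the sum quoted in the question are easy to confirm (a Python sketch; this is of course not a proof):

```python
import math

a, total = 2.0, 0.0
for n in range(1, 2025):
    total += a
    b = a / 2                                      # b_n = a_n / 2
    assert b <= math.sqrt(3 / (n + 2)) + 1e-12     # estimate (1)
    a = 2 * math.sin(a / 2)                        # a_{n+1} = 2 sin(a_n / 2)

print(round(total, 3))    # about 298.796 per the question, comfortably below 314
assert total < 314
```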
|real-analysis|sequences-and-series|inequality|contest-math|recursion|
1
Find $n$ where $\displaystyle \bigg|i+2i^2+3i^3+\cdots \cdots +ni^n\bigg|=18\sqrt{2}$.
Suppose that $n$ is a natural number such that $\displaystyle \bigg|i+2i^2+3i^3+\cdots +ni^n\bigg|=18\sqrt{2}$ . Find the value of $n$ . What I try: Let $\displaystyle S =i+2i^2+3i^3+\cdots +ni^n\cdots (1)$ Then $\displaystyle iS =i^2+2i^3+3i^4+\cdots +ni^{n+1}\cdots (2)$ So $\displaystyle S(1-i)=i+i^2+i^3+\cdots +i^n-ni^{n+1}$ $\displaystyle S =\frac{i-i^{n+1}}{(1-i)^2}-\frac{ni^{n+1}}{1-i}=\frac{i^n-1}{2}-\frac{ni^{n+1}(1+i)}{2}$ So we have $\displaystyle \bigg|i^n-1-ni^{n+1}(1+i)\bigg|=36\sqrt{2}$ How do I find the value of $n$ ? Please help me. Thanks
Hint: From your derivation, $$36\sqrt{2}=\bigg|i^n-1-ni^{n+1}(1+i)\bigg|.$$ Since $i^n$ depends only on $n \bmod 4$ , split into four cases. For instance, if $n\equiv 0\pmod 4$ , then $i^n=1$ and $i^{n+1}=i$ , so $$36\sqrt{2}=\left|-ni(1+i)\right|=\left|n-ni\right|=n\sqrt{2},$$ giving $n=36$ . Treat $n\equiv 1,2,3 \pmod 4$ similarly.
|complex-numbers|
0
How to solve $|x-5| + |x-4| ≥ 3$
I was given the following question to solve for homework. The solution $S$ I got was $$S = \{x : x \ge 6 \text{ or } x\le 3\}$$ I checked my solution against the answers provided and it was correct. My working can be seen below: $$|x-5| + |x-4| - 3 = \begin{cases} 2x-12, & \text{if } x>5\\ -2, & \text{if } 4<x\le 5\\ -2x+6, & \text{if } x\le 4 \end{cases}$$ I then solved $2x-12 \ge 0$ , $-2 \ge 0$ , and $-2x+6 \ge 0$ . This is how I was instructed to do it, but it does not make sense to me: why do we just dismiss the regions stated above, namely $x > 5$ , $4 < x \le 5$ , and $x \le 4$ ? Can someone please provide an explicit explanation?
Deal in cases. Case (i): $x\ge 5$ . Then $|x-5|+|x-4|\geq 3$ is the same as $$x-5+x-4\geq3$$ $$2x-9 \geq3 \implies x\geq6$$ Case (ii): $4 \le x < 5$ . Then $|x-5|+|x-4|\geq 3$ is the same as $$5-x+x-4 \geq3\implies 1\geq3$$ which cannot happen, and hence we can discard this case. Case (iii): $x < 4$ . Then $|x-5|+|x-4|\geq 3$ is the same as $$4-x+5-x \geq 3 \implies x \leq 3$$ For case (i), what this means is that if I need a solution greater than $5$ , then it should be at least $6$ , so $x \in [5,6)$ can't contain a solution; similarly in case (iii), $x \in (3,4]$ can't contain a solution.
|calculus|algebra-precalculus|analysis|inequality|absolute-value|
0
Solve $x+2=\sqrt{4+x\sqrt{8-x}}$
Solve $x+2=\sqrt{4+x\sqrt{8-x}}$ $\Rightarrow (x+2)^2=(\sqrt{4+x\sqrt{8-x}})^2 \tag{1}$ $\Rightarrow x^2+4x+4=4+x\sqrt{8-x} \tag{2}$ $\Rightarrow x^2+4x=x\sqrt{8-x} \tag{3}$ $\Rightarrow (x^2+4x)^2=(x\sqrt{8-x})^2 \tag{4}$ $\Rightarrow x^4+8x^3+16x^2=8x^2-x^3 \tag{5}$ $\Rightarrow x^4+9x^3+8x^2=0 \tag{6}$ $\Rightarrow x^2+9x+8=0 \tag{7}$ $\Rightarrow (x+8)(x+1)=0 \tag{8}$ Therefore $x=-8, -1$ . The answer is given as $x=-8,-1,0$ . My problem is since there are square roots involved, normally I only take the positive $x$ (and there are no negative signs before the square roots). In this case, $x=-8$ and $x=-1$ are both negative, yet the solution $x=-1$ works in the original equation and $x=-8$ does not. If they are both negative, why so? Second, it seems it isn't possible to derive $x=0$ from the calculation, so how would I get this answer? Thanks.
Line (7) should be replaced by (7') $x^2(x^2+9x+8)=0$ , as @m-stgt pointed out, and then you get the possible solution $0$ . Next, your reasoning is by implications : you've proved that if $x\in \mathbb R$ is a solution, then $x$ can only be equal to $-8$ , $-1$ or $0$ . Let's call $S$ the set of solutions: $$S:=\{x\in \mathbb R\mid x+2=\sqrt{4+x\sqrt{8-x}}\}$$ You proved that $\boxed{S\subset\{-8,-1,0\}}$ . Nothing in what you've done proves that $\{-8,-1,0\}\subset S.$ It remains to be seen whether each of the three possible solutions is suitable. Note: it's the same for $S:=\{x\in \mathbb R: x^2+1=0\}\subset\{-\mathrm i,\mathrm i\}$ and yet $S=\emptyset.$
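Plugging the three candidates back into the original equation confirms which ones survive (a quick Python check using the real square root):

```python
import math

# Candidates come from x^2 (x^2 + 9x + 8) = 0.
def is_solution(x):
    if 8 - x < 0:
        return False                  # inner square root undefined over the reals
    inner = math.sqrt(8 - x)
    if 4 + x * inner < 0:
        return False                  # outer square root undefined over the reals
    return math.isclose(x + 2, math.sqrt(4 + x * inner))

solutions = [x for x in (-8, -1, 0) if is_solution(x)]
print(solutions)   # [-1, 0]: x = -8 is extraneous
```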
|algebra-precalculus|
1
Find $n$ where $\displaystyle \bigg|i+2i^2+3i^3+\cdots \cdots +ni^n\bigg|=18\sqrt{2}$.
Suppose that $n$ is a natural number such that $\displaystyle \bigg|i+2i^2+3i^3+\cdots +ni^n\bigg|=18\sqrt{2}$ . Find the value of $n$ . What I try: Let $\displaystyle S =i+2i^2+3i^3+\cdots +ni^n\cdots (1)$ Then $\displaystyle iS =i^2+2i^3+3i^4+\cdots +ni^{n+1}\cdots (2)$ So $\displaystyle S(1-i)=i+i^2+i^3+\cdots +i^n-ni^{n+1}$ $\displaystyle S =\frac{i-i^{n+1}}{(1-i)^2}-\frac{ni^{n+1}}{1-i}=\frac{i^n-1}{2}-\frac{ni^{n+1}(1+i)}{2}$ So we have $\displaystyle \bigg|i^n-1-ni^{n+1}(1+i)\bigg|=36\sqrt{2}$ How do I find the value of $n$ ? Please help me. Thanks
$$S(1-i) = i +...+i^{n} - ni^{n+1}$$ $$S(1-i) = \frac{i(i^n-1)}{i-1} - ni^{n+1}$$ we know that $|1-i| = \sqrt{2}$ , so: $$|S| = 18 \sqrt 2 \iff |S(1-i)| = 36 \iff$$ $$\iff | \frac{i(i^n-1)}{i-1} - ni^{n+1}| = 36 \iff 36\sqrt2 = |i^{n+1}-i-ni^{n+1}(i-1)|$$ I think now you must analyze each case: $n \equiv 0, \pm 1,2 \mod 4$ and see which one works
|complex-numbers|
0
Find $n$ where $\displaystyle \bigg|i+2i^2+3i^3+\cdots \cdots +ni^n\bigg|=18\sqrt{2}$.
Suppose that $n$ is a natural number such that $\displaystyle \bigg|i+2i^2+3i^3+\cdots +ni^n\bigg|=18\sqrt{2}$ . Find the value of $n$ . What I try: Let $\displaystyle S =i+2i^2+3i^3+\cdots +ni^n\cdots (1)$ Then $\displaystyle iS =i^2+2i^3+3i^4+\cdots +ni^{n+1}\cdots (2)$ So $\displaystyle S(1-i)=i+i^2+i^3+\cdots +i^n-ni^{n+1}$ $\displaystyle S =\frac{i-i^{n+1}}{(1-i)^2}-\frac{ni^{n+1}}{1-i}=\frac{i^n-1}{2}-\frac{ni^{n+1}(1+i)}{2}$ So we have $\displaystyle \bigg|i^n-1-ni^{n+1}(1+i)\bigg|=36\sqrt{2}$ How do I find the value of $n$ ? Please help me. Thanks
So we have $$\bigg|i^n-1-ni^{n+1}(1+i)\bigg|=36\sqrt{2}.$$ If $n\underset{4}\equiv0$ then it becomes $$\bigg|1-1-ni(1+i)\bigg|=36\sqrt{2}$$ $$n\bigg|1-i\bigg|=36\sqrt{2}$$ $$n\sqrt{2}=36\sqrt{2}$$ $$n=36.$$ If $n\underset{4}\equiv1$ then it becomes $$\bigg|i-1+n(1+i)\bigg|=36\sqrt{2}$$ $$\bigg|n-1+i(n+1)\bigg|=36\sqrt{2}$$ $$\sqrt{(n-1)^2+(n+1)^2}=36\sqrt{2}$$ $$2(n^2+1)=2\cdot 36^2$$ $$n^2+1=36^2$$ $$n\in\emptyset.$$ If $n\underset{4}\equiv2$ then it becomes $$\bigg|-1-1+ni(1+i)\bigg|=36\sqrt{2}$$ $$\bigg|-n-2+ni\bigg|=36\sqrt{2}$$ $$\sqrt{(n+2)^2+n^2}=36\sqrt{2}$$ $$2n^2+4n+4=36^2\cdot2$$ $$n^2+2n+2=36^2$$ $$n\in\emptyset.$$ If $n\underset{4}\equiv3$ then it becomes $$\bigg|-i-1-n(1+i)\bigg|=36\sqrt{2}$$ $$\bigg|-n-1-i(1+n)\bigg|=36\sqrt{2}$$ $$(n+1)\sqrt{2}=36\sqrt2$$ $$n=35.$$ The answer is: $n=35$ or $36$ .
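The case analysis can be double-checked by brute force (a Python sketch; powers of $i$ are read from a lookup table so the partial sums stay exact):

```python
import math

I_POW = {0: 1, 1: 1j, 2: -1, 3: -1j}   # i^k depends only on k mod 4

def magnitude(n):
    s = sum(k * I_POW[k % 4] for k in range(1, n + 1))
    return abs(s)

target = 18 * math.sqrt(2)
hits = [n for n in range(1, 200) if math.isclose(magnitude(n), target)]
print(hits)   # [35, 36], matching the answer
```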
|complex-numbers|
1
How to solve $|x-5| + |x-4| ≥ 3$
I was given the following question to solve for homework. The solution $S$ I got was $$S = \{x : x \ge 6 \text{ or } x\le 3\}$$ I checked my solution against the answers provided and it was correct. My working can be seen below: $$|x-5| + |x-4| - 3 = \begin{cases} 2x-12, & \text{if } x>5\\ -2, & \text{if } 4<x\le 5\\ -2x+6, & \text{if } x\le 4 \end{cases}$$ I then solved $2x-12 \ge 0$ , $-2 \ge 0$ , and $-2x+6 \ge 0$ . This is how I was instructed to do it, but it does not make sense to me: why do we just dismiss the regions stated above, namely $x > 5$ , $4 < x \le 5$ , and $x \le 4$ ? Can someone please provide an explicit explanation?
$|x-5|+|x-4|\ge 3$ . The algebra is simplified with a symmetric substitution: set $u=x-9/2$ , so the inequality becomes $|u-1/2|+|u+1/2|\ge 3$ . Case $u>1/2$ : $2u\ge 3\implies u\ge 3/2$ . Case $-1/2\le u\le 1/2$ : $1/2-u+u+1/2\ge 3$ , i.e. $1\ge 3$ , which is impossible. Case $u \le -1/2$ : $1/2-u-u-1/2\ge 3\implies -2u\ge 3 \implies u \le -3/2$ . Undoing the substitution ( $x=u+9/2$ ): $x\le 3$ or $x\ge 6$ .
|calculus|algebra-precalculus|analysis|inequality|absolute-value|
0
Identifying Hidden Initial Values in Integral Equations: Insights Needed
I've been pondering a question sparked by two math problems: how can one assert from the outset that a problem contains hidden initial values, and what methods can be employed to discern this? My approach felt rather rudimentary; I found myself substituting various values into each new equation I derived, only to realize that the answers included a constant $C$ (lacking initial value information), which led to a significant waste of time on problem T63. This misconception made me believe that it was impossible to determine initial values for such problems, which in turn led me to overlook the implicit initial value information in problem T82, preventing me from determining $C$ and resulting in confusion. Before any manipulation, the problems were stated as follows: T63: $\int^1_{0}f(tx)\,dt = f(x)+x\sin x\quad f(x) =\underline{\hspace{2cm}}$ T82: $\int_0^{x}f(t)\,dt = x+\sin x+\int_0^{x}tf(x-t)\,dt \quad f(x) =\underline{\hspace{2cm}}$ What I am capable of doing includes separating the terms.
I have checked most of your manipulations and it all looks correct: $$\tag*{T63} \textstyle\int_0^1f(tx)\,dt=f(x)+x\sin x $$ can easily be brought into the form $$\tag*{T63'} \textstyle\int_0^xf(t)\,dt=xf(x)+x^2\sin x\,. $$ With a bit more work (essentially differentiation and integration by parts) $$\tag*{T82} \textstyle\int_0^xf(t)\,dt =\textstyle x+\sin x+\int_0^xtf(x-t)\,dt $$ can be brought into the form $$ f(x) =\textstyle 1+\cos x+\int_0^xf(t)\,dt $$ which can be viewed as an inhomogeneous linear ODE with initial condition $f(0)=2\,.$ Its solution is $$ f(x) =\frac{3e^x+\sin x+\cos x}{2}\,. $$ Now to your pressing question: both problems are obviously of the form $F(\color{red}{x},\int_0^xf(t)\,dt,f(x),g(x))=0$ but for T63 we have $$ F(\color{red}{x},U,V,W)=-U+\color{red}{x}V+W\,, $$ while for T82 it is $$ F(\color{red}{x},U,V,W)=-U+V+W\,, $$ independent of $\color{red}{x}$ . The difference is that the first $F$ does not specify $V=f(x)$ when we set $\color{red}{x}$ to zero.
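A quick numeric spot-check of the claimed solution of T82, using the equivalent ODE $f'(x)=f(x)-\sin x$ with $f(0)=2$ obtained by differentiating the displayed equation (a Python sketch):

```python
import math

def f(x):
    return (3 * math.exp(x) + math.sin(x) + math.cos(x)) / 2

def fprime(x):                        # derivative of f, computed by hand
    return (3 * math.exp(x) + math.cos(x) - math.sin(x)) / 2

assert math.isclose(f(0), 2)          # initial condition f(0) = 2
for x in (-2.0, -0.5, 0.3, 1.0, 4.0):
    assert math.isclose(fprime(x), f(x) - math.sin(x))   # the ODE f' = f - sin x
print("f(0) = 2 and f' = f - sin x hold at all sample points")
```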
|calculus|differential|
1
Determine the ellipse tangent to the $x$ and $y$ axes at known points and also tangent to a given line at a unknown point
I'd like to determine the ellipse that is tangent to the $x$ axis at $\mathbf{r_1} = (a, 0), a \gt 0$ and the $y$ axis at $\mathbf{r_2} = (0, b), b \gt 0$ , and also tangent to the line $\mathbf{n}\cdot (\mathbf{r} - \mathbf{r_0}) = 0 $ at an unknown point $\mathbf{r_3}$ , where $\mathbf{r} = [x, y]^T$ and $\mathbf{n}$ and $\mathbf{r_0}$ are given 2-vectors. This is the task. My attempt: The equation of the ellipse is $ (\mathbf{r - C})^T Q (\mathbf{r - C}) = 1 $ where $\mathbf{C}=[C_x, C_y]^T$ is the unknown center, and $Q$ is $2 \times 2$ symmetric and positive definite, also unknown. That makes a total of $5$ unknowns. The gradient of the above function of $\mathbf{r}$ is $ g = 2 Q (\mathbf{r - C}) $ At $\mathbf{r_1}$ the gradient is pointing along the negative $\mathbf{j}$ direction, so that $ Q ( \mathbf{r_1 - C}) = - k \ \mathbf{j} , k \gt 0$ It follows that $ \mathbf{r_1 - C} = - k Q^{-1} \mathbf{j} \tag{1}$ Substituting this into the ellipse equation yields $ k = \dfrac{1}{\sqrt{\mathbf{j}^T Q^{-1} \mathbf{j}}} $ .
To find the ellipse tangent to $y=0$ at $(5,0),$ tangent to $x=0$ at $(0,3)$ and tangent to $(x-5)+2(y-6)=0$ or $-\frac1{17}x-\frac2{17}y+1=0,$ we set up five equations in essentially five unknowns by substituting the points in the general equation $$ax^2+bxy+cy^2+dx+ey+f=0,$$ substituting the points and slopes there ( $x=0,y=3,dx=0$ and $x=5,y=0,dy=0$ ) in $$(2ax+by+d)dx+(2cy+e+bx)dy=0,$$ and for the dual $$(cf-\frac{e^2}4)X^2+(\frac{de}2-bf)XY+(af-\frac{d^2}4)Y^2+(\frac{be}2-cd)X+(\frac{bd}2-ae)Y+(ac-\frac{b^2}4)=0$$ substitute $(X,Y)=(-\frac1{17},-\frac2{17}),$ seeing as the correspondence goes through $xX+yY+1=0.$ $\begin{align}&5^2a+5d+f&=0\\&3^2c+3e+f&=0\\&10a+d&=0\\&6c+e&=0\\&\frac{4cf-8bf+16af-e^2+4de-34be+136ae-4d^2+68cd-68bd+1156ac-289b^2}{34^2}&=0\end{align}$ Solving gives the double line through $(5,0)$ and $(0,3)$ or a multiple of $$2601x^2+750xy+7225y^2-26010x-43350y+65025=0.$$ This generalizes.
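Since every coefficient is an integer, the three tangency claims can be checked exactly (a Python sketch; the coefficients $A$, $B$, $C$ below are the expansion of the conic after substituting $x=17-2y$ from the tangent line $(x-5)+2(y-6)=0$, i.e. $x+2y-17=0$):

```python
a, b, c, d, e, f = 2601, 750, 7225, -26010, -43350, 65025

def F(x, y):
    return a*x*x + b*x*y + c*y*y + d*x + e*y + f

def grad(x, y):
    return (2*a*x + b*y + d, b*x + 2*c*y + e)

assert F(5, 0) == 0 and F(0, 3) == 0
assert grad(5, 0)[0] == 0     # gradient vertical at (5,0): tangent line is y = 0
assert grad(0, 3)[1] == 0     # gradient horizontal at (0,3): tangent line is x = 0

# Substitute x = 17 - 2y into F: a double root means tangency to x + 2y = 17.
A = 4*a - 2*b + c             # y^2 coefficient
B = -68*a + 17*b - 2*d + e    # y coefficient
C = 289*a + 17*d + f          # constant term
assert B*B - 4*A*C == 0
print("tangent to y = 0, x = 0, and x + 2y = 17")
```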
|solution-verification|conic-sections|
1
Find the equation of a plane containing two given points and having a given distance to a third point
This problem is part of examination preparation material for the second mid-semester test of the 12th grade in my school: In the 3D space Oxyz, given 3 points $A(1,0,0)$ , $B(0,-2,3)$ , $C(1,1,1)$ . Let $(P)$ be the plane containing $A$ , $B$ such that the distance from $C$ to the plane $(P)$ is $\frac{2}{\sqrt{3}}$ . The equation of the plane $(P)$ is: A. $2x + 3y + z - 1 = 0$ or $3x + y + 7z + 6 = 0$ B. $x + y + z - 1 = 0$ or $-2x +37y+17z+13=0$ C. $x + y +2z - 1 = 0$ or $-2x +3y+7z+23=0$ D. $x + y + z - 1 = 0$ or $-23x+37y+17z+23=0$ This is a multiple choice question. However, we're still expected to provide some work... It's not quite important, as it is not part of the mandatory homework section. But I still find it quite interesting, somehow. So far, the best thing I've got is in GeoGebra using some translations and rotations. On paper, I don't know where to even start... Surely I can't just tell my teacher "so we rotate this segment by $\sin^{-1}\left(\frac{2}{\sqrt{3}} \div |\text{Vector}(A,D)|\right)$ ..."
Let the equation of the plane be $a x + b y + c z = d $ Since $A$ and $B$ lie on the plane, then $ a = d $ $ -2 b + 3 c = d $ Further the distance of the plane from $C$ is $\dfrac{2}{\sqrt{3}} $ . Therefore, $ \left( \dfrac{2}{\sqrt{3}} \right)^2 = \dfrac{ (a + b + c - d)^2} {a^2 + b^2 + c^2} $ Solving the first two equations gives $ a = d = 2t $ , where $t \in \mathbb{R} $ $ c = 2s $ , where $ s \in \mathbb{R} $ $ b = 3 s - t $ Substituting this into the third equation, $ \dfrac{4}{3} = \dfrac{ (5 s - t)^2 }{ 4 t^2 + (3 s - t)^2 + 4 s^2 } $ so that, $ 16 t^2 + 4 ( 9 s^2 - 6 s t + t^2 ) + 16 s^2 = 3 ( 25 s^2 + t^2 - 10 t s ) $ And finally, $ 17 t^2 - 23 s^2 + 6 t s = 0 $ Take $ s = 1 $ , then $ 17 t^2 + 6 t - 23 = 0 $ Factor, $(17 t + 23) (t - 1) = 0$ whose roots are $ t =1 $ and $ t = - \dfrac{23}{17} $ For $ t = 1 $ we get $ a = d = 2 , c = 2 , b = 2 $ So that the equation is $ x + y + z = 1 $ And for $ t = -\dfrac{23}{17}$ , rescale by taking $ s = 17 $ and $ t = -23 $ ; then $ a = d = 2t = -46 $ , $ b = 3s - t = 74 $ , $ c = 2s = 34 $ , giving $ -46x + 74y + 34z = -46 $ , i.e. $ -23x + 37y + 17z + 23 = 0 $ . Hence the correct choice is D.
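Both planes offered in choice D can be verified directly against the three conditions (a quick Python sketch):

```python
import math

A, B, C = (1, 0, 0), (0, -2, 3), (1, 1, 1)

def dist_from_C(plane):       # plane given as (a, b, c, d): ax + by + cz + d = 0
    a, b, c, d = plane
    def val(p):
        return a * p[0] + b * p[1] + c * p[2] + d
    assert val(A) == 0 and val(B) == 0           # plane contains A and B
    return abs(val(C)) / math.sqrt(a*a + b*b + c*c)

for plane in [(1, 1, 1, -1), (-23, 37, 17, 23)]:
    assert math.isclose(dist_from_C(plane), 2 / math.sqrt(3))
print("both planes of choice D contain A, B and lie at distance 2/sqrt(3) from C")
```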
|3d|
1
Does tensor product contain Cartesian product?
This is my first exposure to the tensor product on abelian groups and $R$ -modules, and I am not sure if I understand this correctly. Given $X \times Y$ with additional structure (abelian groups, vector spaces, $R$ -modules) so that bilinearity of a function $f: X \times Y \to W$ is defined: $$ f(a_1 + a_2, b) = f(a_1, b) + f(a_2, b) \text{ and } f(a, b_1 + b_2) = f(a, b_1) + f(a, b_2) $$ The definition of the tensor product $X \otimes Y$ is a map $X \times Y \to X \otimes Y$ such that $$ Hom(X \otimes Y, W) \cong Bil(X \times Y, W) $$ where $Hom$ denotes the class of morphisms from $X \otimes Y$ to $W$ and $Bil$ denotes the class of bilinear morphisms from $X \times Y$ to $W$ . Now take $W = X \times Y$ and $X \times Y \to W$ to be the identity map, which is bilinear. By the definition of the tensor product, the identity map factors through $X \otimes Y$ ; namely, the map $X \times Y \to X \otimes Y$ is injective. That is, the tensor product contains the Cartesian product.
The identity map $X\times Y\to W$ is not bilinear. Writing $f$ for this identity map, $f(a_1,b)+f(a_2,b)=(a_1+a_2,2b)$ , which is different from $f(a_1+a_2,b)=(a_1+a_2,b)$ . Also, it is not true in general that $X\otimes Y$ contains $X\times Y$ (as a submodule). In the case of abelian groups, for example, $\mathbb{Z}\otimes\mathbb{Z}\cong\mathbb{Z}$ , which doesn't contain $\mathbb{Z}\times\mathbb{Z}$ as a subgroup.
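To make the failure of bilinearity concrete, here is a tiny illustration (a Python sketch; the choice $X=Y=\mathbb Z^2$ is arbitrary):

```python
def add(u, v):                # componentwise addition in Z^2
    return tuple(x + y for x, y in zip(u, v))

def f(a, b):                  # the "identity" map (a, b) -> (a, b)
    return (a, b)

a1, a2, b = (1, 0), (0, 1), (2, 3)

lhs = f(add(a1, a2), b)                       # ((1, 1), (2, 3))
rhs = (add(f(a1, b)[0], f(a2, b)[0]),         # first slots add to (1, 1)
       add(f(a1, b)[1], f(a2, b)[1]))         # second slots add to (4, 6): b doubled
print(lhs == rhs)   # False: f(a1 + a2, b) != f(a1, b) + f(a2, b)
```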
|abstract-algebra|category-theory|homological-algebra|
1
What is the Basis of an Ordered Square?
The dictionary order topology on $\Bbb R\times \Bbb R$ is generated by a basis having elements $(a\times b,c\times d)$ with $a\times b < c\times d$ in the dictionary order. Let $I=[0,1]$ and consider $I\times I$ . The restriction of the dictionary order to $I\times I$ defines an order topology on $I\times I$ . I'm trying to figure out what its basis would be. The following is what I have: for $a,b,c,d\in I$ , the basis will contain elements of the form $(a\times b,c\times d)$ with $a\times b < c\times d$ . The basis contains elements $[0\times0,a\times b)$ with $0\times 0 < a\times b$ . Also $(a\times b,1\times1]$ with $a\times b < 1\times 1$ . Is that correct? Suggestions please!
As per Munkres' book: Definition. Let $X$ be a set with a simple order relation; assume $X$ has more than one element. Let $\mathcal B$ be the collection of all sets of the following types: (1) All open intervals $(a, b)$ in $X$ . (2) All intervals of the form $[a_0, b)$ , where $a_0$ is the smallest element (if any) of $X$ . (3) All intervals of the form $(a, b_0]$ , where $b_0$ is the largest element (if any) of $X$ . The collection $\mathcal B$ is a basis for a topology on $X$ , which is called the order topology. If $X$ has no smallest element, there are no sets of type (2), and if $X$ has no largest element, there are no sets of type (3). So, as per the above definition, since the largest element of $I\times I$ in the dictionary order is $1\times 1$ , the basis includes the intervals $(a\times b,1\times1]$ where $a\times b$ is any point with $0\times 0 \le a\times b < 1\times 1$ . A similar statement holds for basis elements of the form $[0\times 0, c\times d)$ , where $c\times d$ is any point greater than $0\times 0$ . So the basis you quoted meets the requirements of the above definition.
|general-topology|
0