| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Conditions for Rank One Update of Identity to be Invertible
|
We are to find under which conditions on $u$ and $v$ the rank-one update of the identity, $A = I - uv^{T}$, is invertible. Apparently the condition is simply that $1 + v^T u \neq 0$. I can see that we must require that $Ax \neq 0$ for all $x \neq 0$, which leads me to $x \neq (v^Tx)u$. This is where I get stuck. The solution somehow concludes that it becomes sufficient to check that $Au \neq 0$. What is the intuition behind this?
|
The necessary and sufficient condition for $I-uv^T$ to be invertible is not $1+v^Tu\ne0$ , but $1-v^Tu\ne0$ , because \begin{align} &I-uv^T\text{ is singular}\\ &\iff \exists x\ne0 \text{ such that } (I-uv^T)x=0\\ &\iff \exists x\ne0 \text{ such that } x=u(v^Tx)\tag{1}\\ &\iff \exists y\ne0 \text{ such that } y=u(v^Ty) \text{ and } v^Ty=1\tag{2}\\ &\iff v^Tu=1\tag{3},\\ &\iff 1-v^Tu=0,\\ \end{align} where we take $y=\frac{1}{v^Tx}x$ in proving that $(1)\Rightarrow(2)$ and we take $y=u$ in proving that $(3)\Rightarrow(2)$ . Since $Au=(I-uv^T)u=u-u(v^Tu)=(1-v^Tu)u$ , when $Au\ne0$ , we must have $1-v^Tu\ne0$ . Hence the condition $Au\ne0$ is sufficient.
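A quick numerical sanity check (not part of the original answer; a numpy-based sketch with arbitrary random vectors): the matrix determinant lemma gives $\det(I - uv^T) = 1 - v^Tu$, which matches the condition derived above, and $Au = (1-v^Tu)u$ can be verified directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

A = np.eye(n) - u @ v.T

# Matrix determinant lemma: det(I - u v^T) = 1 - v^T u,
# so A is invertible exactly when 1 - v^T u != 0.
lhs = np.linalg.det(A)
rhs = 1 - (v.T @ u).item()
assert abs(lhs - rhs) < 1e-10

# Au = (1 - v^T u) u, as used at the end of the answer.
assert np.allclose(A @ u, rhs * u)
```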
|
|linear-algebra|linear-transformations|
| 0
|
Measurability of the set of elements that belong to infinitely many subsets in a sequence
|
I've been struggling to prove the following statement: Let $(X, \mathcal{M}, \mu)$ be a finite measure space and let $(A_n)_{n\in\mathbb{N}}$ be a sequence of measurable sets in $X$. Now consider $M$, the set of elements of $X$ that belong to infinitely many of the sets $A_n$. Prove that $M$ is measurable and $$\inf_{n\in\mathbb{N}} \mu(A_n) \leq \mu(M)$$ I have checked for a solution here but I understand neither the given solution (infinite does not mean "all") nor the attempt of the person who asked. Thanks in advance.
|
Not an answer, just want to add something that could be nice to know. For a sequence of sets $(A_n)_{n\in\mathbb N}$ , the set $$ A:=\bigcap_{n\geq 1}\bigcup_{m\geq n} A_m $$ is called the 'limsup' of the sequence $(A_n)$ , written $$ A=\limsup_{n\to\infty} A_n. $$ There is an analogous definition for the liminf. One well-known exercise in measure theory, which is basically equivalent to the one in your post, is to show that if a sequence of sets $(A_n)$ is contained in a $\sigma$-algebra $\mathcal F$, then $\limsup A_n$ and $\liminf A_n$ both belong to $\mathcal F$. Check the Wikipedia page.
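For a periodic sequence of sets the limsup and liminf can be computed directly: every tail union equals the union over one period, and every tail intersection argument dualizes. A small illustration (hypothetical sets, not from the question):

```python
# For a periodic sequence of sets, limsup is the union of the sets in the
# period (points hit infinitely often) and liminf is their intersection
# (points hit in all but finitely many sets).
period = [{0, 2}, {1, 2}]  # A_1, A_3, ... = {0,2};  A_2, A_4, ... = {1,2}

limsup = set().union(*period)        # points in infinitely many A_n
liminf = set.intersection(*period)   # points in all but finitely many A_n

assert limsup == {0, 1, 2}
assert liminf == {2}
assert liminf <= limsup              # liminf is always contained in limsup
```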
|
|real-analysis|measure-theory|proof-writing|lebesgue-measure|outer-measure|
| 0
|
Calculate the ideal generated by a variety
|
I'm taking an algebraic geometry course and I'm trying to solve the following problem: Take the curve $V$ parametrized by $(t,t^3,t^4) \in \mathbb{R}^3$ . Show that $V$ is an algebraic set and find $\mathbb{I}(V)$ , where $\mathbb{I}(V)$ denotes the ideal of all polynomials that are zero over $V$ . So far, I've shown that $V$ can be seen as $Z(\langle y-x^3, z-x^4 \rangle)$ , where $Z(I)$ is the set of zeros of the ideal $I$ . Now I'm trying to calculate the ideal $\mathbb{I}(V)$ but I don't know how to do so; the only thing I've shown is that this ideal must contain the ideal $\langle y-x^3, z-x^4 \rangle$ . I suspect those ideals coincide but I can't seem to prove it. Any suggestions on how to do so?
|
The most straightforward way to deal with these statements is to use the division algorithm. Indeed, let $f \in \mathbb{I}(V)$ . We may then write $f = q*(y - x^3) + g$ where $q$ is a polynomial and $g \in \mathbb{R}[x,z]$ . Similarly, $g = h*(z - x^4) + r$ where $r(x) \in \mathbb{R}[x]$ . To conclude, we show $r = 0$ . Indeed, since $f(t, t^3, t^4) = 0$ it follows that $r(t) = 0$ for all $t \in \mathbb{R}$ so $r$ is the zero polynomial. EDIT: I wanted to quickly describe what I meant by the "division algorithm". Let $A$ be a ring and $f \in A[x]$ a polynomial, written as $f = \sum f_i x^i$ for $f_i \in A$ . Then, let $g = x^n + g_{n - 1} x^{n - 1} + \cdots + g_0$ be a monic polynomial. We claim that we can write $f = q*g + r$ where the $x$ -degree of $r$ is less than $n$ . Indeed, if $\deg f < n$ then we're already done. Otherwise, let $f_mx^m$ be the highest term. Then, $f - (f_m x^{m - n})* g$ has strictly lower degree than $f$ . Repeating, we eventually arrive at a remainder of degree less than $n$ .
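The two-step division described above can be carried out with sympy's multivariate reduction (a sketch; the sample polynomial $f = yz - x^7$, which vanishes on the curve since $t^3 \cdot t^4 = t^7$, is my own choice):

```python
from sympy import symbols, reduced

x, y, z = symbols('x y z')

# A polynomial that vanishes on the curve (t, t^3, t^4): t^3 * t^4 - t^7 = 0.
f = y*z - x**7

# Divide by y - x^3 first, then z - x^4.  Choosing the generator order
# y > z > x (lex) eliminates y, then z, leaving a remainder in R[x] only.
quotients, r = reduced(f, [y - x**3, z - x**4], y, z, x, order='lex')

# f lies in the ideal <y - x^3, z - x^4>, so the remainder is 0.
assert r == 0
assert (quotients[0]*(y - x**3) + quotients[1]*(z - x**4) + r - f).expand() == 0
```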
|
|algebraic-geometry|ideals|
| 1
|
Reference request: Elliptic Integrals and Elliptic Functions
|
Elliptic integrals, elliptic functions, and elliptic curves are well-studied objects in mathematics. Unfortunately, for various reasons, they are usually not covered in undergraduate courses. I feel that elliptic integrals and elliptic functions have been neglected in current academic trends, except perhaps in certain specialized subfields, although I might be mistaken in this regard. Are there any textbooks written in modern language for undergraduates and beginning graduate students that cover these topics? In particular, I would like to see their historical development, classifications, the relationships between them, and a modern perspective.
|
How about H. McKean and V. Moll, Elliptic Curves: Function Theory, Geometry, Arithmetic, Cambridge University Press (1999) (publisher's page). Blurb: The subject of elliptic curves is one of the jewels of nineteenth-century mathematics, whose masters were Abel, Gauss, Jacobi, and Legendre. This book presents an introductory account of the subject in the style of the original discoverers, with references to and comments about more recent and modern developments. It combines three of the fundamental themes of mathematics: complex function theory, geometry, and arithmetic. After an informal preparatory chapter, the book follows a historical path, beginning with the work of Abel and Gauss on elliptic integrals and elliptic functions. This is followed by chapters on theta functions, modular groups and modular functions, the quintic, the imaginary quadratic field, and on elliptic curves. The many exercises with hints scattered throughout the text give the reader a glimpse of further developments.
|
|reference-request|elliptic-curves|elliptic-functions|
| 0
|
Spectrum of an operator is approximate point spectrum plus spectrum of dual operator
|
I'm trying to show that given an operator $T \in B(X)$ with $X$ Banach we have $$\sigma(T) = \sigma_{ap}(T) \cup \sigma_p(T'),$$ where $T' \in B(X')$ is the dual operator. I know that $\sigma(T) = \sigma(T')$ and certainly $\sigma_{ap}(T) \subset \sigma(T)$, so we have $\sigma_{ap}(T) \cup \sigma_p(T') \subset \sigma(T)$, and I'm struggling with the other inclusion. Write $R_{\lambda} = \lambda I - T$; we want to show that if $\lambda$ is outside of $\sigma_{ap}(T) \cup \sigma_p(T')$ then $R_{\lambda}$ is invertible. Here is what I have so far: $R_{\lambda}$ has closed image ($\lambda$ is not in the approx. point spectrum of $T$). $R_{\lambda}$ is injective (the point spectrum of $T$ is contained in the approximate point spectrum of $T$). I'd like to show that $R_{\lambda}$ has dense image. We know that $\ker(R_{\lambda}')$ is trivial, so if $g \in X'$ is such that $g(R_{\lambda}x) = 0$ for all $x$, then $g$ must be zero. I'd like to maybe use the Hahn-Banach theorem on an element in $X$.
|
By definition $\sigma_{A P}(T) \subset \sigma(T)$ , so it is enough to prove Claim 1: $\sigma_P\left(T'\right) \subset \sigma(T)$ and Claim 2: $\sigma(T) \setminus \sigma_{A P}(T) \subset \sigma_P\left(T'\right)$ Proof of Claim 1: Let $\lambda \in \sigma_P\left(T'\right)$ . Then there exists $f \in X'$ with $f \neq 0$ so that $T' f=\lambda f$ , i.e. so that for every $x \in X$ $$ 0=\left(T' f-\lambda f\right)(x)=f(T x)-\lambda f(x)=f(T x-\lambda x) . $$ Hence the restriction $\left.f\right|_Y$ of $f$ to the image $Y=(T-\lambda \mathrm{I}) X$ of $T-\lambda \mathrm{I}$ is zero, so as $f\ne0$ , we must have that $\bar{Y} \neq X$ . Thus the image of $T-\lambda \mathrm{I}$ is not dense in $X$ so $\lambda \in \sigma(T)$ . Proof of Claim 2: Let $\lambda \in \sigma(T) \setminus \sigma_{A P}(T)$ . Then as $\lambda$ is not an approximate eigenvalue of $T$ we know that $T-\lambda\mathrm{I}$ is bounded below, so by the Lemma the image $Y=(T-\lambda \mathrm{I})(X)$ is closed. At the same time $Y$ cannot be all of $X$: otherwise $T-\lambda\mathrm{I}$ would be a bounded bijection, hence invertible, contradicting $\lambda \in \sigma(T)$. So $Y$ is a proper closed subspace of $X$, and by Hahn-Banach there exists $f \neq 0$ vanishing on $Y$; as in the computation of Claim 1 this gives $T'f = \lambda f$, i.e. $\lambda \in \sigma_P(T')$.
|
|functional-analysis|spectral-theory|dual-maps|
| 0
|
Mayer Vietoris Sequence of the Circle
|
Let $\mathcal{H}_*$ be an ordinary homology theory, that is, a homology theory that satisfies the Eilenberg-Steenrod axioms. I wanted to show that $\mathcal{H}_1(S^1)\cong \mathcal{H}_0(S^1)\cong R$, where $\mathcal{H}_0(*)\cong R$, by using the Mayer-Vietoris sequence. The standard way to do this is to use the open cover $U:=S^1\setminus\{N\}, V:=S^1\setminus\{S\}$, where $N,S$ are the north and south poles respectively. Then we get a sequence $$0\rightarrow \mathcal{H}_1(S^1)\rightarrow \mathcal{H}_0(U\cap V)\xrightarrow{\mathcal{H}_0(i_U)\oplus \mathcal{H}_0(i_V)} \mathcal{H}_0(U)\oplus \mathcal{H}_0(V)\rightarrow \mathcal{H}_0(S^1)\rightarrow 0,$$ where $i_U$ and $i_V$ are the inclusions of $U\cap V$ in $U$ and $V$ respectively. Now, $U\cap V$ is homotopy equivalent to the disjoint union of two points and both $U$ and $V$ are homotopy equivalent to a point, and so our sequence becomes $$0\rightarrow \mathcal{H}_1(S^1)\rightarrow R\oplus R\xrightarrow{\lambda} R\oplus R\rightarrow \mathcal{H}_0(S^1)\rightarrow 0.$$
|
$\require{AMScd}\newcommand{\H}{\mathcal{H}}$ This is not so bad. We just need to understand how the homotopy equivalences $U\simeq\{\text{pt}\}$ etc. affect the inclusions. $$\begin{CD}R\oplus R@>\begin{pmatrix}1&1\\1&1\end{pmatrix}>>R\oplus R\\@V\cong VV@VV\cong V\\\H_0(\{W\})\oplus\H_0(\{E\})@>\begin{pmatrix}\H_0(i_{W,V})&\H_0(i_{E,V})\\\H_0(i_{W,U})&\H_0(i_{E,U})\end{pmatrix}>>\H_0(V)\oplus\H_0(U)\\@V\H_0(i_W)+\H_0(i_E)V\cong V@VV=V\\\H_0(U\cap V)@>>i_0\H_0(i_V)+i_1\H_0(i_U)>\H_0(V)\oplus\H_0(U)\\\end{CD}$$ Commutes simply by functoriality, where I have taken the West and East 'poles' of the circle. Every element of the matrix in the middle row is $\H_0(f)$ for some $f$ homotopic to the identity, justifying the replacement by " $1$ ". Inverting isomorphisms where necessary we get the claim.
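Once the middle map is identified with the all-ones matrix, exactness pins down both homology groups by a kernel/cokernel computation. A sketch with sympy, taking $R=\mathbb{Q}$ for concreteness so that dimension counting applies:

```python
from sympy import Matrix

# The middle map in 0 -> H_1(S^1) -> R^2 --L--> R^2 -> H_0(S^1) -> 0,
# after identifying H_0 of the pieces with R via the homotopy equivalences.
L = Matrix([[1, 1],
            [1, 1]])

# Exactness gives H_1(S^1) ≅ ker(L) and H_0(S^1) ≅ coker(L) = R^2 / im(L).
kernel_rank = len(L.nullspace())   # = 2 - rank(L)
cokernel_rank = 2 - L.rank()

assert kernel_rank == 1    # H_1(S^1) ≅ R
assert cokernel_rank == 1  # H_0(S^1) ≅ R
```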
|
|algebraic-topology|homology-cohomology|mayer-vietoris-sequence|
| 1
|
General triangle inequality for distance between subsets
|
Suppose we have $A$ and $B$ as subsets of a metric space $(E, d)$ . Is it true that for any subset $C$ of $E$ , $d(A, B) \leq d(A, C) + d(C, B)$ ?
|
As a counterexample: let $E =\mathbb{R}$. Let $A = ]-\infty, 1]$ and $B = [3, \infty[$. Take $C = [0, 4]$. It is evident that $d(A, C) + d(C, B) = 0 < 2 = d(A, B)$, so the proposed inequality fails.
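The counterexample can be checked numerically on discretized versions of the three sets (grids with step 0.5 contain the closest points 1 and 3, so the minimum over the grids equals the true set distance here):

```python
# Discretized check of the counterexample, using d(A, B) = inf |a - b|.
def set_dist(A, B):
    return min(abs(a - b) for a in A for b in B)

step = 0.5
A = [-10 + step*i for i in range(int((1 - (-10))/step) + 1)]   # [-10, 1]
B = [3 + step*i for i in range(int((20 - 3)/step) + 1)]        # [3, 20]
C = [0 + step*i for i in range(int((4 - 0)/step) + 1)]         # [0, 4]

assert set_dist(A, B) == 2.0                   # d(A, B) = 2
assert set_dist(A, C) + set_dist(C, B) == 0.0  # 0 + 0, since C meets A and B
```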
|
|general-topology|inequality|metric-spaces|triangle-inequality|
| 0
|
About the continuity of the partial derivative
|
I'm trying to study if the following function has continuous partial derivatives at the origin: $$f(x, y) = \begin{cases}\frac{x^4y^3}{x^8 + y^4} & (x, y) \neq (0, 0) \\\\ \quad 0 & (x, y) = (0, 0)\end{cases}$$ I proved $f$ is continuous at the origin, and I also proved its partial derivatives exist at the origin. Now, to show the continuity of $f'_x$ at $(0, 0)$, here is what I did: $$\frac{\partial f}{\partial x} = \frac{4x^3y^3(y^4 - x^8)}{(x^8 + y^4)^2}$$ Having observed it goes to zero along various paths, I did: $$\bigg|\frac{4x^3y^3(y^4 - x^8)}{(x^8 + y^4)^2}\bigg| \leq \frac{4x^2y^2|x| |y| (x^8 + y^4)}{(x^8+y^4)^2} \leq \frac{4x^2y^2|x| |y|}{y^4} = \frac{4x^2|x|}{|y|}$$ But now I am stuck. If for example I take $y = x^4$, this reduces to $\frac{4}{|x|}$, which does not go to zero. But the notes say the partial derivatives are continuous (without any proof...). Any help? Thank you! Note: we cannot use polar coordinates; we are asked to find a bounding function to produce an upper bound.
|
As Ted and Rob already pointed out in their hints, let me see if I can convince you by combining them. Notice that $|x|^3 = (x^8)^{3/8} \leq (x^8 + y^4)^{3/8}$ and $|y|^3 = (y^4)^{3/4} \leq (y^4 + x^8)^{3/4}$ . We can then write: $$\bigg|\frac{4x^3y^3(y^4-x^8)}{(x^8 + y^4)^2}\bigg| \leq \frac{4(x^8+y^4)^{3/8} (x^8 + y^4)^{3/4} (x^8+y^4)}{(x^8 + y^4)^2} = 4(x^8 + y^4)^{3/8 + 3/4 + 1 - 2} = 4(x^8+y^4)^{1/8},$$ which goes to $0$ as $(x, y) \to (0, 0)$ .
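A numerical sanity check of the final bound on a grid near the origin (a sketch, grid size my own choice; the bound is proved above, the code only illustrates it):

```python
# Check |∂f/∂x| <= 4 (x^8 + y^4)^(1/8) on a grid near the origin.
pts = [(i/10, j/10) for i in range(-10, 11) for j in range(-10, 11)
       if (i, j) != (0, 0)]

for x, y in pts:
    lhs = abs(4 * x**3 * y**3 * (y**4 - x**8)) / (x**8 + y**4)**2
    rhs = 4 * (x**8 + y**4)**0.125
    assert lhs <= rhs + 1e-12

# The bound forces the partial derivative to 0 as (x, y) -> (0, 0):
assert 4 * (0.01**8 + 0.01**4)**0.125 < 0.5
```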
|
|multivariable-calculus|continuity|partial-derivative|
| 0
|
Spherical Tractrix
|
A pontoon $P$, initially at the North Pole $NP$, heads toward the intersection of the Greenwich meridian with the equator, $(0^{\circ}\,E, 0^{\circ}\,N)$, towards point $G$, moving down and east. It is dragged by a ship $S$ moving east along the equator of a sphere (unit radius), connected by a very long floating chain of length $PS = a = \pi/2$, i.e., the chain is part of a full geodesic circle always kept taut. $P$ has spherical coordinates $(\theta, \phi)$ for longitude & latitude, somewhat like in the rough sketch shown with the spherical triangle $PQS$. Assume the globe has no land masses. The same locus would result if $P$ is a small magnet stuck on a steel sphere and $S$ moves in an equatorial groove pulling thread $PS$. Please help with the parametrization of the 3d tractrix (red) on the sphere. EDIT1: The parametrization can also be given for a starting latitude $\beta$ if it helps for an easier formulation.
|
tl; dr: Perhaps surprisingly, a "spherical tractrix" (whose chain length is one-quarter the circumference of the sphere) is a latitude line traced at constant speed: Particularly, the point $P$ starting at the north pole does not move ! As $S$ sails around the equator, the distance from $S$ to the north pole stays the same, so nothing pulls $P$ away from its initial position. If instead $P$ starts on the prime meridian but not at a pole, the "tugboat" $S$ necessarily starts at longitude $90^{\circ}$ , and $P$ gets pulled around its latitude as shown. Let $\theta(t)$ and $\phi(t)$ denote the longitude and latitude of $P$ , so the Cartesian space coordinates of $P$ at time $t$ are $$ p(t) = \bigl(\cos\theta(t) \cos\phi(t), \sin\theta(t) \cos\phi(t), \sin\phi(t)\bigr). $$ Let $s(t) = \bigl(\cos(t + t_{0}), \sin(t + t_{0}), 0\bigr)$ denote the position of the dragging ship $S$ , initially at longitude $t_{0}$ and proceeding eastward around the equator at unit angular speed. The equations of motion for $\theta(t)$ and $\phi(t)$ then follow from the condition that the spherical distance from $p(t)$ to $s(t)$ stays equal to $a$.
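The opening claim, that $P$ at the pole never moves, rests on the fact that every equator point is at spherical distance exactly $\pi/2$ from the pole. A tiny numeric check of that fact (illustration only):

```python
import math

# The great-circle distance from any point on the equator to the north pole
# is always pi/2 -- exactly the chain length a -- so the taut chain never
# pulls P off the pole.
north_pole = (0.0, 0.0, 1.0)
for k in range(12):
    t = 2 * math.pi * k / 12
    s = (math.cos(t), math.sin(t), 0.0)           # ship on the equator
    dot = sum(a * b for a, b in zip(s, north_pole))
    dist = math.acos(dot)                         # spherical distance
    assert abs(dist - math.pi / 2) < 1e-12
```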
|
|differential-geometry|3d|spherical-coordinates|
| 0
|
Definition of stopping time
|
I am having a hard time understanding stopping times, especially the notation. My book says: Def. A stopping time for the filtration $(g_n)$ is a random variable $T:\Omega \rightarrow \mathbb{N} \cup \{\infty\}$ such that $\{T=n\} \in g_n$ for all $n \ge 0$. It is difficult for me to understand what $T$ is. Suppose $\Omega = \{1,2,3\}$. Then $T(1) = 1, T(2) = 3, \ldots$; does it work that way? Also, $g_n$ is a $\sigma$-algebra on $\Omega$, which makes it more confusing. Could you help me understand what $T$ represents and what a stopping time is? Thanks in advance :)
|
The information that goes with this is ... $\mathcal G_n$ is the information known at time $n$ . That is, for an event $E$ , we have $E \in \mathcal G_n$ if and only if the outcome of $E$ is known at time $n$ . Then a stopping time $T$ is a random variable such that we know when it happens ... that is, for any time $n$ , the event $\{T = n\}$ is known at time $n$ . Example. Suppose we toss a fair coin repeatedly. Let $X_n$ be the outcome of the $n$ th toss. Then take $\mathcal G_n$ to be the events determined by $X_1,\dots, X_n$ . So an event $E$ belongs to $\mathcal G_3$ iff we can tell at time $3$ whether event $E$ occurs. Let $T$ be "the first time when the coin shows heads". Convince yourself this is a stopping time. Now let $T$ be defined by $T=n$ iff $X_{n+1} = H$ . This is not a stopping time: at time $n$ we do not know whether the next toss will be H.
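The defining property can be checked by brute force on a finite toss space: $\{T = n\} \in \mathcal G_n$ means that any two toss sequences agreeing on the first $n$ tosses must agree on whether $T = n$. A sketch over sequences of length 3:

```python
from itertools import product

def first_head(seq):                 # T: first time the coin shows H
    for i, s in enumerate(seq, start=1):
        if s == 'H':
            return i
    return None                      # "infinity": no head in this sequence

def depends_only_on_prefix(T, n):
    # {T = n} is in G_n iff sequences agreeing on tosses 1..n agree on it.
    for a in product('HT', repeat=3):
        for b in product('HT', repeat=3):
            if a[:n] == b[:n] and (T(a) == n) != (T(b) == n):
                return False
    return True

# T = "first head" is a stopping time ...
assert all(depends_only_on_prefix(first_head, n) for n in (1, 2, 3))

# ... but T' with T' = n iff X_{n+1} is the first head peeks at the future:
T_prime = lambda s: None if first_head(s) is None else first_head(s) - 1
assert not depends_only_on_prefix(T_prime, 1)
```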
|
|notation|stopping-times|
| 1
|
Equation of an 'n-Oval' (pins and loop of string method)
|
You can write the equations of an $n$-ellipse (polyellipse, egglipse, $k$-ellipse, etc...), where the points $(x_1,y_1), (x_2, y_2), \ldots$ are foci: $$\sqrt{(x-x_1)^2+(y-y_1)^2}+\sqrt{(x-x_2)^2+(y-y_2)^2}+\sqrt{(x-x_3)^2+(y-y_3)^2}+\sqrt{(x-x_4)^2+(y-y_4)^2}+\cdots=d$$ For two points, the curve can be the same as an ellipse drawn with two pins and a loop of string (see fig. 1). However, with more points, the curve seems to be the same as James Clerk Maxwell's construction method (see fig. 2). You can also create various ovals by adding points to the loop and pins method shown in the first figure (see also fig. 3). What I want to know is how to construct equations for curves of the kind shown in fig. 3 given any number of points (akin to the above equation). I'd also be interested in knowing if there's anything published on this type of curve. Thanks.
|
The figure below is a reproduction of your fig. 3: $ABCDEF$ are the pins and the string is held tight by the pencil at $P$ . As long as $P$ lies between the extensions of lines $FA$ and $BA$ we have that $PF+PB$ is constant. Hence $P$ describes an arc $HI$ of the ellipse with foci $F$ and $B$ (blue in the figure). After $I$ , point $P$ lies between the extensions of $FA$ and $CB$ , with $PA+PB$ a constant. In this case $P$ describes an arc $IJ$ of the ellipse with foci $A$ and $B$ (orange in the figure). Note that the ellipses are tangent at $I$ , because they share the same normal (the bisector of $\angle FIB=\angle AIB$ ). After $J$ , point $P$ lies between the extensions of $FA$ and $DC$ , with $PA+PC$ a constant. In this case $P$ describes an arc of the ellipse with foci $A$ and $C$ (not shown in the figure), and so on. In summary, the curve is a patch of 12 elliptical arcs, each arc being tangent with the preceding and following one.
|
|calculus|trigonometry|algebraic-geometry|conic-sections|curves|
| 0
|
Infinite limit proof
|
Prove that $\lim_{x\to+\infty} \ln x = +\infty$. I know that $f(x) = \ln x$ is positive if $x>1$ and $0$ if $x=1$. Is any of this useful for proving the limit above?
|
The function $\ln x$ is increasing. Therefore it suffices to exhibit a sequence $x_n\to\infty$ such that $\ln x_n$ tends to infinity. Take $x_n=3^n.$ Then $$\ln 3^n=n\ln 3> n,$$ since $\ln 3>1$.
|
|calculus|proof-writing|
| 0
|
What is the intuition behind the picture in Fulton and Harris about $\mathfrak{sl}_3(\mathbb C)$?
|
We are looking at representations of $\mathfrak{sl}_3(\mathbb C)$ here, from Fulton and Harris. The space $\mathfrak h$ is the collection of diagonal matrices, and we have written $\mathfrak{sl}_3(\mathbb C) = \mathfrak h \oplus (\oplus \mathfrak g_{\alpha})$, which is obtained from the adjoint action of $\mathfrak h$: for any $H \in \mathfrak h,$ $Y \in \mathfrak{g}_{\alpha}$ we have $[H,Y] = \alpha(H)Y$, where $\alpha$ is a linear functional on $\mathfrak h.$ Then we find that the eigenvectors are the $3 \times 3$ matrices $E_{i,j}$, with eigenvalues expressed through the functionals $L_i$, where $L_i\begin{pmatrix}a_1 &0 & 0\\ 0& a_2 & 0 \\ 0 &0 & a_3 \end{pmatrix} = a_i.$ Then the author draws the above picture, whose motive I do not understand. How did they draw this? The author claims that: The virtue of this decomposition and the corresponding picture is that we can read off from it pretty much the entire structure of the Lie algebra. Of course, the action of $\mathfrak h$ on $\mathfrak g$ is clear from the picture: $\mathfrak h$ carries each root space $\mathfrak g_{\alpha}$ into itself.
|
That is a diagram depicting the $6$ roots of $\mathfrak{sl}_3$. Those are the six differences $L_i - L_j$, $i \neq j$. Any semisimple Lie algebra is given by the sum of a Cartan subalgebra and its root spaces, and moreover you know how the bracket works between any two root spaces ($[\mathfrak{g}_\alpha,\mathfrak{g}_\beta] \subseteq \mathfrak{g}_{\alpha+\beta}$), so this diagram conveys pretty much the whole structure of $\mathfrak{sl}_3$. They have drawn those $6$ roots plus the $0$ weight on the backdrop of the weight lattice, which is just the integer span of the weights $L_1, L_2$. All weights of finite dimensional representations live on the weight lattice (including the roots, which are the weights of the adjoint representation), so this is a natural thing to draw.
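The hexagonal shape of the picture can be checked directly: representing $L_i$ by $e_i - \frac13(1,1,1)$, the roots $L_i - L_j = e_i - e_j$ live in the plane $x+y+z=0$, all have the same length, and are spaced $60^\circ$ apart. A numeric sketch (coordinates my own choice, not from Fulton and Harris):

```python
import math

# The six roots L_i - L_j of sl_3, drawn in the plane x + y + z = 0.
def root(i, j):
    v = [0.0, 0.0, 0.0]
    v[i], v[j] = 1.0, -1.0
    return v

roots = [root(i, j) for i in range(3) for j in range(3) if i != j]

# All six roots have the same length ...
lengths = [math.sqrt(sum(c * c for c in v)) for v in roots]
assert all(abs(l - math.sqrt(2)) < 1e-12 for l in lengths)

# ... and their directions are 60 degrees apart: a regular hexagon.
def planar_angle(v):
    # coordinates w.r.t. an orthonormal basis of the plane x + y + z = 0
    x = sum(a * b for a, b in zip(v, (1/math.sqrt(2), -1/math.sqrt(2), 0)))
    y = sum(a * b for a, b in zip(v, (1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6))))
    return math.atan2(y, x)

angles = sorted(planar_angle(v) % (2 * math.pi) for v in roots)
gaps = [angles[k + 1] - angles[k] for k in range(5)]
assert all(abs(g - math.pi / 3) < 1e-9 for g in gaps)
```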
|
|lie-algebras|intuition|
| 0
|
Sampling variance of edge density of subgraphs
|
I would like to evaluate the mean and variance of the edge density for subgraphs obtained by repeatedly subsampling nodes. Specifically, suppose we have an undirected graph $G$ with $N$ vertices and adjacency matrix $y$ . We obtain a subset of nodes $S$ by Bernoulli sampling , i.e., for each node $i$ in $G$ , we sample an inclusion indicator $z_i\sim\mathsf{Bernoulli}(\rho)$ and add $i$ to $S$ if $z_i=1$ . $\rho$ is the probability for a given node to be included in the subgraph, and $n=\sum_{i=1}^N z_i$ is the number of nodes of the subgraph. Then the edge density of the subgraph $G'$ is $$ \lambda=\frac{z^\intercal y z}{n(n-1)}, $$ where $z^\intercal$ denotes the transpose of $z$ . Following Strand (1979), we let $\lambda=0$ for $n < 2$ . What are the mean and variance of $\lambda$ under the sampling scheme? Background Reading Frank (1977) (I couldn't find an open access link, unfortunately) derives the sampling variance for an estimator of the total number of edges in the original graph.
|
Here is a partial answer that only addresses the mean. We can interpret $\lambda = \frac{z^{\mathsf T}yz}{n(n-1)}$ as the sum of sort-of-indicator variables $\lambda_{ij}$ , for every $ij \in E(G)$ , where $$\lambda_{ij} = \begin{cases}\frac2{k(k-1)} & \text{if $i$, $j$, and exactly $k-2$ other nodes are sampled} \\ 0 & \text{otherwise.}\end{cases}$$ Then $\lambda_{ij} = \frac2{k(k-1)}$ with probability $\binom{N-2}{k-2} \rho^k (1-\rho)^{N-k}$ , so its expected value is $$ \mathbb E[\lambda_{ij}] = \sum_{k=2}^N \frac2{k(k-1)} \binom{N-2}{k-2} \rho^k (1-\rho)^{N-k}. $$ Using the identity $\binom Nk = \frac{N(N-1)}{k(k-1)}\binom{N-2}{k-2}$ , we can rewrite this as $$ \mathbb E[\lambda_{ij}] = \frac2{N(N-1)} \sum_{k=2}^N \binom Nk \rho^k (1-\rho)^{N-k} = \frac2{N(N-1)}\Pr[n \ge 2]. $$ Altogether, we get $\mathbb E[\lambda] = \frac{|E(G)|}{\binom N2} \Pr[n\ge 2]$ , which is combinatorially nice: it's the edge density of $G$ , multiplied by the probability that $\lambda$ has a sensible denominator.
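The closed form can be verified exactly on a small graph by enumerating all $2^N$ inclusion vectors (a sketch; the 5-cycle example graph is my own choice):

```python
from itertools import product
from math import comb

# Exact check of E[λ] = (|E| / C(N,2)) * P(n >= 2) on a small graph.
N, rho = 5, 0.3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # 5-cycle, |E| = 5

mean = 0.0
for z in product([0, 1], repeat=N):
    n = sum(z)
    p = rho**n * (1 - rho)**(N - n)
    if n >= 2:
        sampled = sum(z[i] and z[j] for i, j in edges)
        mean += p * 2 * sampled / (n * (n - 1))  # z^T y z = 2 * #sampled edges

p_n_ge_2 = 1 - (1 - rho)**N - N * rho * (1 - rho)**(N - 1)
formula = len(edges) / comb(N, 2) * p_n_ge_2

assert abs(mean - formula) < 1e-9
```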
|
|graph-theory|expected-value|variance|sampling|random-graphs|
| 0
|
Is there a permutation of a given length in which every element divides sum of the elements before it?
|
For a given positive integer $n$, is it possible to construct a permutation $p$ of $[1, n]$ such that for each $k$ in $[2, n]$, $p_k$ divides $p_1 + p_2 + \cdots + p_{k-1}$? Solutions for small $n$: $n=3$: $p = (2, 1, 3)$; $n=4$: $p = (4, 2, 3, 1)$; $n=5$: $p = (4, 1, 5, 2, 3)$; $n=6$: $p = (4, 1, 5, 2, 6, 3)$; $n=7$: $p = (5, 1, 6, 2, 7, 3, 4)$; $n=8$: $p = (5, 1, 6, 2, 7, 3, 8, 4)$; $n=9$: $p = (8, 1, 9, 6, 4, 7, 5, 2, 3)$.
|
Let $p = (x + 1, 1, x + 2, 2, \ldots)$ , with $x = \left \lfloor \frac{n}{2} \right \rfloor$ (for odd $n$, the leftover element $n = 2x+1$ is placed last). It's not hard to check that $p$ is indeed a permutation and the condition holds: the sum of the entries preceding the small entry $j$ is $j(x+j)$, the sum preceding the large entry $x+j$ is $(j-1)(x+j)$, and for odd $n$ the final entry $2x+1$ divides the preceding sum $(2x+1)x$.
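The construction is easy to verify programmatically for many $n$ at once (a sketch; note it reproduces the listed solutions for $n=6,8$ and gives different but equally valid permutations for other $n$):

```python
def construct(n):
    x = n // 2
    p = []
    for j in range(1, x + 1):        # interleave x+1, 1, x+2, 2, ...
        p += [x + j, j]
    if n % 2 == 1:
        p.append(n)                  # odd n: the element n = 2x+1 goes last
    return p

def valid(p):
    s = p[0]
    for v in p[1:]:
        if s % v != 0:               # p_k must divide the preceding sum
            return False
        s += v
    return True

for n in range(2, 200):
    p = construct(n)
    assert sorted(p) == list(range(1, n + 1))  # p is a permutation of [1, n]
    assert valid(p)                            # divisibility condition holds
```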
|
|number-theory|permutations|
| 1
|
Axler Linear Algebra Done Right 4th ed 5B.9 "Explain why all coefficients of the minimal polynomial of T are rational numbers. "
|
Suppose $T \in \mathcal{L}(V)$ is such that with respect to some basis of $V$ , all entries of the matrix of $T$ are rational numbers. Explain why all coefficients of the minimal polynomial of $T$ are rational numbers. My hunch is that it is because the rationals are closed under addition and multiplication, but I don't know how to formalize this into a rigorous proof. Any help is appreciated!
|
The trick here is essentially this lemma. Lemma: Let $k$ be a subfield of $K$ , and let $v_1, \ldots, v_n$ be $k$ -independent elements of $k^m$ . Then $v_1, \ldots, v_n$ are $K$ -independent elements of $K^m$ . There are several ways this can be proved. Here is one. Extend to a $k$ -linear isomorphism $\phi : k^m \to k^m$ such that $\phi(v_i) = e_i$ . Then $\phi$ extends to a $K$ -linear isomorphism $\Phi : K^m \to K^m$ such that $\Phi(v_i) = e_i$ (just using the matrix of $\phi$ and of its inverse); thus, the $v_i$ s are $K$ -independent. Now identify $\mathcal{L}(V)$ with $K^{n^2}$ in the obvious way (here $k = \mathbb{Q}$ , $K = \mathbb{R}$ ); the operator $M$ , being a matrix with rational coefficients, is identified with an element of $k^{n^2}$ , as are all powers of $M$ . Let $m$ be the lowest degree of a polynomial $P \in k[x]$ such that $P(M) = 0$ . Then $M^0, \ldots, M^{m - 1}$ are $k$ -independent elements of $k^{n^2}$ , and hence are $K$ -independent elements of $K^{n^2}$ , so the minimal polynomial of $M$ over $K$ also has degree $m$ . Since the monic $P \in k[x]$ of degree $m$ annihilates $M$ , it is the minimal polynomial of $M$ , and its coefficients are rational.
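The argument can be made concrete with exact rational linear algebra: search for the first dependence among $I, M, M^2, \ldots$ over $\mathbb{Q}$. A sketch with sympy (the example matrix, with irrational eigenvalues $\pm\sqrt2$ but rational minimal polynomial $x^2-2$, is my own choice):

```python
from sympy import Matrix, Rational, eye, zeros

# Find the minimal polynomial of a rational matrix by exact linear algebra
# over Q, illustrating that the coefficients come out rational.
M = Matrix([[0, 2], [1, 0]])   # eigenvalues ±sqrt(2), yet minpoly is in Q[x]
n = M.shape[0]

def vec(A):
    return Matrix([A[i, j] for i in range(n) for j in range(n)])

powers = [eye(n)]
for m in range(1, n * n + 1):
    powers.append(powers[-1] * M)
    # columns are vec(M^0), ..., vec(M^m); look for a rational dependence
    B = Matrix.hstack(*[vec(P) for P in powers])
    ns = B.nullspace()
    if ns:
        c = ns[0] / ns[0][-1]  # normalize: monic minimal polynomial
        break

# All coefficients are rational, and the polynomial annihilates M.
assert all(isinstance(ci, Rational) for ci in c)
assert sum((c[i] * powers[i] for i in range(len(c))), zeros(n, n)) == zeros(n, n)
```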
|
|linear-algebra|
| 0
|
Why are ultrametric spaces named as such?
|
I'm beginning my study of $p$ -adic numbers, so naturally I've come across the non-Archimedean property of absolute values, i.e. $$|x+y| \le \max\{|x|,|y|\}.$$ A metric space with the analogous property i.e. $$d(x,y) \le \max\{d(x,z),d(z,y)\}$$ is called an ultrametric space. I was just wondering as to the etymology of this term, especially why ultrametric was chosen over non-Archimedean. I understand that this is a stronger condition than the usual triangle inequality, so I guess ultra-metric as in stronger-than-metric is the idea behind it, but non-Archimedean would keep terminology consistent between metrics and absolute values.
|
I prefer to use the term "strong triangle inequality". Anyway, the page https://mathshistory.st-andrews.ac.uk/Miller/mathword/u/ says ultrametric is due to Krasner in 1944 "according to an Internet web page." I've never seen Jeff Miller's pages on words in mathematics use such a lazy reference before. Imagine someone said something happened "according to a book". The OED page https://www.oed.com/dictionary/ultrametric_adj?tl=true says "The earliest known use of the word ultrametric is in the 1960s." That should be easily disproven (or maybe we're distinguishing between English and French?) by tracking down Krasner's paper in 1944.
|
|number-theory|terminology|
| 1
|
Dyadic sum of $\frac{x}{\ln x}$ (i.e. dyadic asymptotic for prime number theorem)
|
For $x\geq 10$ , denote by $j=j_x$ the integer such that $4 \geq \frac{x}{2^j}\geq 2$ . I want to prove that $$\sum_{i=0}^j \frac{x/2^i}{\log(x/2^i)} \sim \frac{2x}{\log x}$$ The reason I care is that the prime number theorem $\sum_{p \le x} 1 \sim \frac{x}{\log x}$ implies that $\sum_{x/2 < p \le x} 1 \sim \frac{x}{2\log x}$ , and I need the above asymptotic to go backward. Based on graphical evidence, this seems true https://www.desmos.com/calculator/qku5msur6w . I can manage to show that these are comparable (i.e. $\asymp$ ), but not $\sim$ . My method is to write $$\frac{x/2^i}{\log(x/2^i)} = \frac{x/2^i}{\log(x)} \cdot \frac{\log(x)}{\log(x/2^i)},$$ where the 2nd factor $$\frac{\log(x)}{\log(x/2^i)} = 1+\frac{\log(2^i)}{\log(x/2^i)} \leq 1+\frac{i\log(2)}{\log(x/2^j)} \leq 1+ \frac{i\log 2}{\log 2}= (1+i),$$ and of course $\frac{1+i}{2^i} \ll \frac{1}{(1.5)^i}$ , which is summable to some constant.
|
We split the sum into two parts at $i=j/2$ and estimate each part separately. For the first half, we note that $$ \frac{{\log x}}{x}\sum\limits_{i = 0}^{j/2} {\frac{{x/2^i }}{{\log (x/2^i )}}} = \sum\limits_{i = 0}^{j/2} {\frac{1}{{2^i }}\frac{{\log x}}{{\log (x/2^i )}}} $$ and $$ 1 \le \color{red}{\frac{{\log x}}{{\log (x/2^i )}}} \le \frac{{\log x}}{{\log (x/2^{j/2} )}} \le \frac{{\log x}}{{\frac{1}{2}\log (2x)}} \le 2 $$ for $0\le i\le j/2$ . Consequently, by Tannery's theorem , $$ \frac{{\log x}}{x}\sum\limits_{i = 0}^{j/2} {\frac{{x/2^i }}{{\log (x/2^i )}}} \to \sum\limits_{i = 0}^\infty {\frac{1}{{2^i }}} = 2 $$ as $x\to+\infty$ (i.e., as $j\to+\infty$ ). In other words, $$ \sum\limits_{i = 0}^{j/2} {\frac{{x/2^i }}{{\log (x/2^i )}}} \sim \frac{{2x}}{{\log x}} $$ as $x\to+\infty$ . For the second half of the sum, note that $$ \sum\limits_{i = j/2 + 1}^j {\frac{{x/2^i }}{{\log (x/2^i )}}} \ll j\frac{x}{{2^{j/2} }}\frac{1}{{\log (x/2^{j/2} )}} \ll (\log x)\sqrt x \frac{1}{{\log x}} = \sqrt x = o\!\left(\frac{x}{\log x}\right), $$ so the second half is negligible and the claimed asymptotic follows.
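The slow $\sim 1/\log x$ convergence can be observed numerically (a sketch; the sample values of $x$ are my own choice):

```python
import math

def dyadic_sum(x):
    # sum over i = 0..j, where j is the largest i with x / 2^i >= 2
    s, i = 0.0, 0
    while x / 2**i >= 2:
        s += (x / 2**i) / math.log(x / 2**i)
        i += 1
    return s

def rel_err(x):
    return abs(dyadic_sum(x) / (2 * x / math.log(x)) - 1)

# The relative error shrinks (roughly like 1/log x) as x grows.
assert rel_err(1e12) < rel_err(1e6) < rel_err(1e3)
assert rel_err(1e12) < 0.05
```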
|
|real-analysis|asymptotics|analytic-number-theory|
| 0
|
Circle revolutions rolling around another circle
|
I just watched this video , and I'm a bit perplexed. Problem: The radius of Circle A is 1/3 the radius of Circle B. Circle A rolls around Circle B one trip back to its starting point. How many times will Circle A revolve in total? The intuitive answer is 3, but the correct answer is 4. I understand the trick -- that the center of Circle A must travel a distance of $2\pi(r_B + r_A)$, not $2\pi r_B$ -- but I'm still confused on one item. At the risk of sounding very un-mathematical, how do the (infinite set of) points on the circumference of each circle map to each other to accomplish this? Consider Circle A rolling along a straight line the length of the circumference of Circle B. Then it will revolve 3 times. It's like the universe "knows" when to apply a different point mapping when you change the arrangement of matter.
|
The way you stated the question demonstrates the confusion. You are trying to figure out how to map the points on the circumferences to each other. But that isn't the amount of turning. In terms of distance rolled along, the ratio is indeed 3, as you would expect; but that isn't the amount of turning, it's just the gear ratio of the two circles, the ratio of the circumferences. In a flat geometry the two measurements are the same. Imagine now that you are sliding one circle around the other. How many times does it turn? In a flat geometry it turns 0 times. But in a circular geometry it turns exactly once, as evidenced by looking at the tangent or the normal vector and seeing how many times it turns. This shows that the amount of turning is always 1 more than the gear ratio of the two circles, that is, 1 more than the amount of turning in a flat geometry. The curvature of the second circle causes the small circle to turn faster. But this is all relative perspective. If you are in the center of the
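The "1 more than the gear ratio" claim can be made quantitative: rolling without slipping means the small circle turns through (arc length of its center's path) / r, and for external rolling the center traces a circle of radius R + r. A numeric sketch:

```python
import math

# Rolling without slipping: total turning = (center path length) / r.
# For external rolling around a circle of radius R, the center of the
# small circle traces a circle of radius R + r.
R, r = 3.0, 1.0
steps = 100000
center_path = 0.0
prev = (R + r, 0.0)
for k in range(1, steps + 1):
    t = 2 * math.pi * k / steps
    cur = ((R + r) * math.cos(t), (R + r) * math.sin(t))
    center_path += math.dist(prev, cur)   # polygonal arc-length approximation
    prev = cur

revolutions = center_path / (2 * math.pi * r)
assert abs(revolutions - 4.0) < 1e-6      # R/r + 1 = 4 revolutions, not 3
```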
|
|geometry|circles|infinity|
| 0
|
On the cusps of $\Gamma_0(3) \cap \Gamma(2)$
|
I've been trying to compute the number of non-equivalent cusps of $\Gamma_0(3) \cap \Gamma(2)$. My approach so far has been the following: I believe that this is a finite index subgroup of $\Gamma_0(3)$, so it should have the same cusps if I'm not mistaken. I wanted to work with these cusps and see when they could be equivalent under an element of the intersection. Cusps of $\Gamma_0(n)$ should be $0, \infty$ and $\cfrac{a}{dc}$, where $d$ is some divisor of $n$ and $(a,dc)=1$. I've been told that this subgroup has $6$ non-equivalent cusps in total, which I am not sure about, as an upper bound for the number of non-equivalent cusps of $\Gamma_0(3)$ is its index, which is $4$ in this case. Maybe it would be easier to work with the cusps of $\Gamma(2)$?
|
I note several different things here. Let $G := \Gamma(2) \cap \Gamma_0(3)$ . The set of all cusps is the same (as noted by Shimura), and we know exactly what these are: $\mathbb{Q} \cup \{ \infty \}$ . Implicitly this question is actually about the set of inequivalent cusps . As $G$ is a subgroup of $\Gamma(2)$ and $\Gamma_0(3)$ , it has fewer elements and hence fewer matrices that can relate different cusps. We should generically expect for there to be more inequivalent cusps in $G$ than in either $\Gamma(2)$ or $\Gamma_0(3)$ . (Explicitly here: $\Gamma(2)$ has three inequivalent cusps: $\{ 0, 1, \infty \}$ ; and $\Gamma_0(3)$ has two inequivalent cusps: $\{ 0, \infty \}$ ; and $G$ has $6$ inequivalent cusps: $\{ 0, 1, -\frac{1}{2}, -\frac{1}{3}, \frac{2}{3}, \infty \}$ ). It is not true that any subgroup of finite index in $\Gamma_0(3)$ has the same set of inequivalent cusps, for example. To generate the list of inequivalent cusp representatives for $G$ , I first computed coset representatives.
|
|number-theory|modular-forms|
| 1
|
Estimate the norm of the (stochastic) heat equation with time-dependent diffusion coefficient
|
I'm considering the following (stochastic) PDE: $${\rm d}U_t=\kappa(t)\Delta U_t{\rm d}t+\sigma W_t\tag1$$ on $[0,1)^2$ with Neumann boundary conditions, where $\kappa:[0,T]\to(0,\infty)$ is linear and $W$ is a cylindrical Wiener process on $H:=L^2([0,1)^2)$ . I somehow need to compute/estimate (or at least find a sharp upper bound on) $\operatorname E\left[\|g(U_t)\|_H^2\right]$ , where $g:H\to H$ is nice enough. Is there anything in the literature on this? I'm particularly interested in $g(t,\;\cdot\;)=\nabla\ln p_t$ , where $p_t$ is the "density" of the projections of $U_t$ with respect to the Lebesgue measure. That is, if $(e_n)_{n\in\mathbb N}$ is an orthonormal basis of $H$ and $$H_d:=\operatorname{span}\{e_1,\ldots,e_d\}\;\;\;\text{for }d\in\mathbb N,$$ then $p_t$ is a Borel measurable function $H\to[0,\infty)$ such that $$\operatorname E\left[\varphi(\pi_dU_t)\right]=\int\lambda^{\otimes d}({\rm d}x)p_t(\iota_dx)\varphi(\iota_dx)\tag2$$ for all bounded Borel measurable $\varphi$.
|
I think your question is essentially finite dimensional. To make my life easier I am going to assume that $\kappa(t)=at +b$ for strictly positive $a,b$ . You can adapt the argument for another set of assumptions. Let $e_i, i=1,2 \dots \,$ denote the eigenfunctions of the Neumann Laplacian $-\Delta$ . Then, $X_t=\pi_d U_t$ solves the following SDE: \begin{align} dX_t = -\kappa(t) \Lambda_d X_t \, dt + \sqrt{2}dB_t^d \,, \end{align} where $B_t^d$ is standard $d$-dimensional Brownian motion and $\Lambda_d$ is a diagonal matrix with entries $(\Lambda_d)_i=\lambda_i$ , with $\lambda_i\geq0, i=1, \dots,d$ the first $d$ eigenvalues of $-\Delta$ on your domain with Neumann boundary conditions. Note that these are explicitly computable. I have used the fact that $W_t=\sqrt{2}\sum_{i=1}^\infty e_i B_t^i$ (I have added the $\sqrt{2}$ so that I have a better normalisation) where the $B_t^i$ 's are independent one dimensional Brownian motions (this is a pretty standard representation of cylindrical
|
|partial-differential-equations|stochastic-analysis|heat-equation|stochastic-differential-equations|stochastic-pde|
| 0
|
Probabilistic Transversals in Hypergraphs
|
This is a result from Noga Alon's 1990 paper, " Transversal Numbers of Uniform Hypergraphs ". In section 3: $\,H = (\,V,E\,)$ is a random $k$ -uniform hypergraph on a set $V$ of $n$ vertices, with $m$ (not necessarily distinct) edges, constructed by choosing the $m$ edges randomly and independently according to a uniform distribution on the $k$ -subsets of $V$ . He writes that by setting $n = [k \log k]$ with edges $m = k$ and a fixed subset $X$ of $V$ , and $|X| \leq \log^2 k - 10 \log k \log(\log k)$ , estimating the probability that $X$ is a transversal of $H$ is done in the following manner. For each of the $m$ edges $e$ of $H$ , the probability that $e$ does not intersect $X$ satisfies $$\mathbb{P}(e \cap X = \emptyset) = \frac{\binom{\,n - |X|\,}{k}}{\binom{\,n}{k}} \geq \bigg( \frac{n-|X| - k}{n-k} \bigg)^k \geq \bigg(\frac{k \log k - \log^2 k + 10 \log k \log(\log k) - k}{k \log k - k} \bigg)^k$$ My question (as a beginner in the probabilistic method) is
|
These are not derived. They’re choices the author makes in order to prove Proposition $3.1$ . Since $c_k$ is a bound that needs to hold for all $k$ -uniform hypergraphs, the author is free to make arbitrary choices in proving a lower bound for $c_k$ .
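Incidentally, the first inequality quoted in the question, $\binom{n-|X|}{k}/\binom{n}{k}\ge\big(\tfrac{n-|X|-k}{n-k}\big)^k$ , can be spot-checked numerically; a small sketch with arbitrary sample parameters (function names are mine):

```python
from math import comb

def miss_prob(n: int, s: int, k: int) -> float:
    """Exact P(a uniformly random k-subset of an n-set avoids a fixed s-subset)."""
    return comb(n - s, k) / comb(n, k)

def lower_bound(n: int, s: int, k: int) -> float:
    """The ((n - s - k) / (n - k))^k lower bound used in the question."""
    return ((n - s - k) / (n - k)) ** k

# Spot-check the inequality for a few sample parameter choices.
for (n, s, k) in [(20, 3, 5), (50, 7, 10), (100, 12, 8)]:
    assert miss_prob(n, s, k) >= lower_bound(n, s, k)
```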
|
|probabilistic-method|hypergraphs|
| 0
|
Probability of Binomial Distributions
|
We have $ X, Y \sim Bin(N,1/2) $ independent variables, and I'm asked to find $\mathbb P(X > Y)$ . Considering the identity $ \sum_{k = 0}^n {n \choose k}^2= {2n \choose n} $ , we can have a symmetry argument, saying $\mathbb P(X > Y) = \mathbb P(Y > X)$ , and calculating $\mathbb P(X > Y) = 0.5(1 - \mathbb P(X = Y))$ , we see our answer is $ 0.5 - \frac{1}{2^{2N+1}}{2N \choose N} $ However, I thought of the following: if $ X = k, Y = m$ , then $ N \ge k > m \ge 0 $ , so we want to calculate the following: $ \sum_{k = 1}^{N} \sum_{m = 0}^{k-1}\mathbb P(X = k, Y = m) $ . Since they're independent, we can get $$ = \sum_{k = 1}^{N}\Big(\mathbb P(X = k) \sum_{m = 0}^{k-1}\mathbb P(Y = m)\Big) $$ and we get $$ \frac{1}{2^{2N}} \sum_{k = 1}^{N} {N \choose k} \sum_{m = 0}^{k-1} {N \choose m} $$ Is my line of reasoning correct? If so, how do I proceed from here? What are different ways to solve this?
|
We need to use two equalities, (1). $\sum_{k=0}^n\binom{n}{k}^2=\binom{2n}{n}$ or $\sum_{k=1}^n\binom{n}{k}^2=\binom{2n}{n} - 1$ (2). $\sum_{k=0}^{n}\binom{n}{k}=2^n$ or $\sum_{k=1}^{n}\binom{n}{k}=2^n - 1$ Let $S = \sum_{k=1}^n \sum_{m=0}^{k-1}\binom{n}{k}\binom{n}{m}$ . By switching the order of the double summation (Fubini's theorem), we get $S = \sum_{m = 0}^n\sum_{k=m+1}^n\binom{n}{m}\binom{n}{k}$ . We further rename the indices by switching the "symbol" of $m$ and $k$ (just treat them as dummy variables, no math is needed here), so that $S = \sum_{k = 0}^n\sum_{m=k+1}^n\binom{n}{k}\binom{n}{m}$ . \begin{align*} 2S &= \sum_{k=1}^n \sum_{m=0}^{k-1}\binom{n}{k}\binom{n}{m} + \sum_{k = 0}^n\sum_{m=k+1}^n\binom{n}{k}\binom{n}{m} \\ & = \sum_{k=1}^n \sum_{m=0}^{k-1}\binom{n}{k}\binom{n}{m} + \sum_{k = 1}^n\sum_{m=k+1}^n\binom{n}{k}\binom{n}{m} + \sum_{m=0+1}^n\binom{n}{0}\binom{n}{m} \\ & = \sum_{k=1}^n \binom{n}{k}\left[\sum_{m=0}^{n}\binom{n}{m} - \binom{n}{k}\right] + \sum_{m=1}^n \binom{n
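Both routes (the double sum and the symmetry closed form) can be checked against each other numerically; a quick sketch (function names are mine):

```python
from math import comb

def p_x_gt_y(N: int) -> float:
    """P(X > Y) by direct double summation over the joint pmf."""
    return sum(comb(N, k) * comb(N, m)
               for k in range(1, N + 1)
               for m in range(k)) / 4**N

def p_closed_form(N: int) -> float:
    """The symmetry answer: 1/2 - C(2N, N) / 2^(2N+1)."""
    return 0.5 - comb(2 * N, N) / 2 ** (2 * N + 1)

for N in range(1, 12):
    assert abs(p_x_gt_y(N) - p_closed_form(N)) < 1e-12
```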
|
|probability|combinatorics|
| 1
|
${\text{Show that}}:{a_{2n}} - {a_{2n - 1}} = a_n^2 - a_{n - 1}^2$
|
I'm trying to solve this problem $$\eqalign{ & {\text{Consider a sequence is given by}}: \cr & {a_0} = 1,{a_{n + 1}} - {a_n} = \frac{{{a_n} + \sqrt {5a_n^2 - 4} }}{2} \cr & {\text{Show that}}:{a_{2n}} - {a_{2n - 1}} = a_n^2 - a_{n - 1}^2 \cr} $$ I tried to used mathematical induction $$\eqalign{ & * n = 1 \Rightarrow {a_2} - {a_1} = 5 - 2 = 3 = {2^2} - 1 \cr & * {\text{Suppose }}\left( 1 \right){\text{ is true for }}n = k \geqslant 1,{\text{ we need to prove }}\left( 1 \right){\text{ is true for }}n = k + 1 \cr & \Leftrightarrow {a_{2k + 2}} - {a_{2k + 1}} = a_{k + 1}^2 - a_k^2 \cr & {\text{We have}}:{a_{2k + 2}} - {a_{2k + 1}} = \frac{{3{a_{2k + 1}} + \sqrt {5a_{2k + 1}^2 - 4} }}{2} - \frac{{3{a_{2k}} + \sqrt {5a_{2k}^2 - 4} }}{2} \cr & = \frac{{3\left( {\frac{{3{a_{2k}} + \sqrt {5a_{2k}^2 - 4} }}{2}} \right) + \sqrt {5{{\left( {\frac{{3{a_{2k}} + \sqrt {5a_{2k}^2 - 4} }}{2}} \right)}^2} - 4} }}{2} - \frac{{3\left( {\frac{{3{a_{2k - 1}} + \sqrt {5a_{2k - 1}^2 - 4} }}{2}} \right) + \sq
|
Squaring $2a_{n+1}-3a_n=\sqrt{5a_n^2-4}$ (from the recursion) gives $a_{n+1}^2-3a_{n+1}a_n +a_n^2 = -1$ . Shifting the index down by one, we also have $a_{n}^2-3a_na_{n-1}+a_{n-1}^2 = -1$ . Subtracting the two equations and factoring gives $(a_{n+1}-a_{n-1})(a_{n+1}+a_{n-1}-3a_n)=0$ , so $a_{n+1} = a_{n-1}$ or $a_{n+1}+a_{n-1} = 3a_n$ ; since the sequence is increasing, we have the second one: $a_{n+1} +a_{n-1} = 3a_n$ . Then just use the characteristic equation and simple elimination and substitution to prove that $a_n = F_{2n+1}$ , where $F_{n}$ is the Fibonacci sequence.
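The claimed facts ( $a_{n+1}+a_{n-1}=3a_n$ and $a_n=F_{2n+1}$ , hence the identity to prove) are easy to confirm numerically from the original recursion; a sketch in exact integer arithmetic:

```python
from math import isqrt

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Generate a_n from the original recursion.  Note that
# a_{n+1} - a_n = (a_n + sqrt(5 a_n^2 - 4)) / 2 means a_{n+1} = (3 a_n + sqrt(...)) / 2,
# and 5 a_n^2 - 4 turns out to be a perfect square, so integers suffice.
a = [1]
for _ in range(12):
    r = isqrt(5 * a[-1] ** 2 - 4)
    assert r * r == 5 * a[-1] ** 2 - 4
    a.append((3 * a[-1] + r) // 2)

for n in range(len(a)):
    assert a[n] == fib(2 * n + 1)                          # a_n = F_{2n+1}
for n in range(1, len(a) // 2):
    assert a[2*n] - a[2*n - 1] == a[n]**2 - a[n - 1]**2    # the identity to prove
```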
|
|sequences-and-series|algebra-precalculus|
| 0
|
Eigenvalues of a discretized system
|
I have to show that the dynamic matrix of a time-invariant discrete-time system has eigenvalues located at $1+\lambda_cT$ , if the discretization was performed by the explicit Euler method. This should be shown for an arbitrary $n \times n$ matrix $A$ . For the explicit Euler method the dynamic matrix is computed via $I + AT$ , where: $I$ ...the identity matrix $\lambda_c$ ...eigenvalues of the continuous time system $A$ ...dynamic matrix $T$ ...discretization time. My approach: For the eigenvalues, we want to compute the determinants and set them to zero: $$ \det((I + AT) - \lambda_d I) = 0 $$ for the discrete system, where $\lambda_d$ denotes the eigenvalues of the discrete case. Then we have: $$ \det(A - \lambda_cI) = 0 $$ for the continuous system. I thought of just comparing the two equations and solving for $\lambda_d$ , to get $1 + \lambda_cT$ , which does not quite work. I am a bit stuck here and would be glad about any hint.
|
Since $\lambda_c$ is the eigenvalue of $A$ , then $Ax = \lambda_c x$ and by induction: $$ A^n x = A^{n-1}\lambda_cx = \cdots = A\lambda_c^{n-1} x = \lambda_c^n x \tag{1}\label{eq1} $$ and the relationship between the continuous time state matrix $A$ and its discrete counterpart $A_d$ is ( https://en.wikipedia.org/wiki/Discretization ): $$ A_d = e^{AT}= \sum_{k=0}^\infty \frac{1}{k!} (AT)^k \tag{2}\label{eq2} $$ then $$ A_dx= e^{AT}x= \left( \sum_{k=0}^\infty \frac{1}{k!} (AT)^k \right)x = \\ \sum_{k=0}^\infty \frac{1}{k!} T^kA^kx \stackrel{\eqref{eq1}}= \sum_{k=0}^\infty \frac{1}{k!} T^k\lambda_c^kx = \sum_{k=0}^\infty \frac{1}{k!} (T\lambda_c)^kx \stackrel{\eqref{eq2}}= e^{T\lambda_c}x \tag{3}\label{eq3} $$ Equation \eqref{eq3} states that $e^{T\lambda_c}$ is the eigenvalue of $A_d$ : $$ \lambda_d = e^{T\lambda_c} =\sum_{k=0}^\infty \frac{1}{k!} (T\lambda_c)^k \tag{4}\label{eq4} $$ Taking only first two terms of the sum in \eqref{eq4} will give (this is known as the Euler method): $$
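The answer's conclusion can be confirmed numerically; a minimal sketch for an arbitrary $2\times2$ example, computing eigenvalues from the characteristic polynomial:

```python
import cmath

def eig2(M):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted([(tr + disc) / 2, (tr - disc) / 2],
                  key=lambda z: (z.real, z.imag))

A = [[2.0, 1.0], [4.0, -1.0]]            # arbitrary sample matrix
T = 0.05
Ad = [[1 + T * A[0][0], T * A[0][1]],    # I + A T, explicit Euler
      [T * A[1][0], 1 + T * A[1][1]]]

# lambda -> 1 + lambda * T is increasing on the reals, so sorting pairs them up.
for lam_c, lam_d in zip(eig2(A), eig2(Ad)):
    assert abs(lam_d - (1 + lam_c * T)) < 1e-9
```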
|
|control-theory|
| 0
|
Is There an Open Source Equivalent for Mathematica's SeriesCoefficient Function?
|
I am exploring options for using a computer to extract the coefficient of $x^n$ from the Taylor series expansion of a function of $x$ , where $n$ is kept as a symbolic entity. While Mathematica offers a function called SeriesCoefficient that achieves this, I am in search of an open source alternative that offers similar functionality. I looked into SageMath and SymPy. They do not seem to have this function. If you fix $n$ as a number, these two can get you the coefficient of $x^n$ , but not when you keep $n$ as a symbol. Does anyone know of any open source software package that can perform this task?
|
Maxima can do this, and in particular this functionality can be accessed in SageMath using the Maxima subsystem as described briefly here : if f is an expression involving x , then maxima(f).powerseries(x,0)._sage_() will produce a symbolic form of the Taylor series expansion around $0$ .
|
|computer-algebra-systems|sagemath|
| 1
|
Solve 1-D coupled differential equations analytically
|
I'm currently going through an article where I came across these two 1-D coupled differential equations. $$\frac{dA}{dz} = a_1B(z)e^{-i\beta z} $$ $$\frac{dB}{dz} = a_2A(z)e^{i\beta z} $$ with these three conditions $$ \frac{d}{dz}(|A(z)|^2 +|B(z)|^2)=0 $$ $$ a_1 =-a_2^* $$ $$ B(0)=B_0, A(0)=0 $$ The article then skips all steps and arrives at these two solutions, subject to the three conditions, $$ A(z)=B_0\frac{2a_1e^{-i\beta z/2}}{\sqrt{4|a_1|^2+\beta^2}}\sin\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}z \right) $$ $$ B(z)=B_0e^{i\beta z/2}\left[\cos\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}z\right)-i\frac{\beta^2}{\sqrt{4|a_1|^2+\beta^2}}\sin\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}z\right)\right] $$ Any idea on how the author arrived at these two expressions? I tried integrating by parts but ended up with $A'(z)$ and $\int{A(z)}$ terms that I can't get rid of. The author also made no mention of trial solutions. Any help is appreciated.
|
Multiply the first equation through by $\mathrm{e}^{\mathrm{i} \beta z}$ and differentiate. (We do the multiplication so that $B(z)$ doesn't appear multiplied by another expression depending on $z$ so we don't have to use the product rule and end up with terms depending on $B$ and $B'$ .) \begin{align*} \frac{\mathrm{d}}{\mathrm{d}z} \left( \mathrm{e}^{\mathrm{i} \beta z} \frac{\mathrm{d}A}{\mathrm{d}z} \right) &= \frac{\mathrm{d}}{\mathrm{d}z} \left( a_1 B(z) \right) \\ \mathrm{i} \beta \mathrm{e}^{\mathrm{i} \beta z} A'(z) + \mathrm{e}^{\mathrm{i} \beta z} A''(z) &= a_1 B'(z) \\ &= a_1 a_2 \mathrm{e}^{\mathrm{i} \beta z} A(z) \end{align*} so $$ \mathrm{e}^{\mathrm{i} \beta z} \left( A''(z) + \mathrm{i} \beta A'(z) - a_1 a_2A(z) \right) = 0 \text{.} $$ Since $\mathrm{e}^{\mathrm{i} \beta z}$ is never zero (standard example of Picard's little theorem ), we must have $$ A''(z) + \mathrm{i} \beta A'(z) - a_1 a_2 A(z) = 0 \text{.} $$ Using the side condition between $a_1$ and $a_2$ , \beg
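The derivation can be sanity-checked numerically: integrate the original first-order system with RK4 and compare against the closed form for $A(z)$ (a sketch; the values of $a_1$ , $\beta$ and $B_0$ below are arbitrary test choices):

```python
import cmath, math

a1 = 0.7j                      # sample coupling; a2 = -conj(a1) per the side condition
a2 = -a1.conjugate()
beta, B0 = 1.3, 1.0

def rhs(z, y):
    A, B = y
    return (a1 * B * cmath.exp(-1j * beta * z),
            a2 * A * cmath.exp(1j * beta * z))

def rk4(z_end, n=4000):
    """Classical 4th-order Runge-Kutta for the complex 2-vector (A, B)."""
    z, h, y = 0.0, z_end / n, (0j, B0 + 0j)
    for _ in range(n):
        k1 = rhs(z, y)
        k2 = rhs(z + h/2, tuple(u + h/2 * k for u, k in zip(y, k1)))
        k3 = rhs(z + h/2, tuple(u + h/2 * k for u, k in zip(y, k2)))
        k4 = rhs(z + h, tuple(u + h * k for u, k in zip(y, k3)))
        y = tuple(u + h/6 * (p + 2*q + 2*r + s)
                  for u, p, q, r, s in zip(y, k1, k2, k3, k4))
        z += h
    return y

Omega = math.sqrt(4 * abs(a1)**2 + beta**2)

def A_exact(z):
    return B0 * 2 * a1 * cmath.exp(-1j * beta * z / 2) / Omega * cmath.sin(Omega * z / 2)

A_num, B_num = rk4(2.0)
assert abs(A_num - A_exact(2.0)) < 1e-8
assert abs(abs(A_num)**2 + abs(B_num)**2 - B0**2) < 1e-8   # the conservation law
```

The conservation check confirms the first side condition; the closed form for $A(z)$ matches the solution of the second-order ODE derived above.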
|
|ordinary-differential-equations|coupling|
| 1
|
How do I go about finding $\int\limits_{0}^{\frac{\pi}{4}}x\ln\left(1+\tan x\right)dx$?
|
By some analysis and through Wolfram|Alpha I know that the integral in question is equal to a fascinating $$I=\int\limits_{0}^{\frac{\pi}{4}}x\ln\left(1+\tan x\right)dx=\frac{21}{64}\zeta(3)+\frac{\pi^{2}}{64}\ln 2-\frac{\pi}{8}G$$ where $G$ is Catalan's constant. However, I have tried many methods to evaluate this integral, all to no avail. Using the Maclaurin series for $\ln(1+\tan x)$ unfortunately produces $$I=\sum\limits_{k=0}^{\infty}\frac{\left(-1\right)^{k}}{k+1}\int\limits_{0}^{\frac{\pi}{4}}x\tan^{k+1}xdx$$ the integral in which is particularly hard to deal with - spawning this question of mine , the answers and comments to which destroyed my dreams of continuing down this path. Alternatively, differentiating under the integral sign with $$I(n)=\int\limits_{0}^{\frac{\pi}{4}}x\ln\Big(\tan\left(nx\right)+1\Big)dx\Rightarrow I'(n)=\int\limits_{0}^{\frac{\pi}{4}}\frac{x}{n+\tan x}dx$$ looks equally hopeless after a few calculations. Integration by parts does not seem to work ver
|
\begin{align} I=&\int_{0}^{\frac{\pi}{4}}x\ln\left(1+\tan x\right)dx\\ =& \int_{0}^{\frac{\pi}{4}}x\ln\frac{4(1+\tan x)}{\sec^2 x}dx -2\int_{0}^{\frac{\pi}{4}}x\ln\left(2\cos x\right)dx \end{align} where $ \int_{0}^{\frac{\pi}{4}}x\ln\left(2\cos x\right)dx = \frac{\pi}{8}G-\frac{21}{128} \zeta(3)$ and \begin{align} J= &\int_{0}^{\frac{\pi}{4}}x\ln\frac{4(1+\tan x)}{\sec^2 x}\overset{x\to\frac\pi4-x}{dx} =\frac\pi4 \int_{0}^{\frac{\pi}{4}}\ln\frac{4(1+\tan x)}{\sec^2 x}dx-J\\ =& \ \frac\pi8 \bigg(\int_{0}^{\frac{\pi}{4}}\ln(1+\tan x)dx +\int_{0}^{\frac{\pi}{4}}\ln(4\cos^2 x)dx\bigg)\\ =&\ \frac\pi8\bigg( \frac\pi8\ln2+G \bigg)= \frac{\pi^2}{64}\ln2+\frac\pi8G\\ \end{align} Substitute above results into $I$ to obtain $$I=\frac{\pi^{2}}{64}\ln2-\frac{\pi}{8}G+ \frac{21}{64}\zeta(3) $$
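The final closed form can be checked numerically; a quick sketch using composite Simpson's rule and the standard decimal values of $\zeta(3)$ and Catalan's constant $G$ :

```python
import math

ZETA3 = 1.2020569031595943        # Apery's constant, known value
CATALAN = 0.9159655941772190      # Catalan's constant, known value

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda x: x * math.log(1 + math.tan(x))
I_num = simpson(f, 0.0, math.pi / 4)
I_closed = 21/64 * ZETA3 + math.pi**2 / 64 * math.log(2) - math.pi / 8 * CATALAN

assert abs(I_num - I_closed) < 1e-9
```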
|
|calculus|integration|sequences-and-series|taylor-expansion|trigonometric-integrals|
| 0
|
Changing the integration limit
|
I was reading the derivation of $E = mc^{2}$ but I am stuck on a very simple minor detail. In the derivation, we have the kinetic energy (K.E.) as $\int_{0}^{s} F \, ds$ , where $F$ is constant. Then we have \begin{align*} \int_{0}^{s} F \, ds &= \int_{0}^{s} \frac{d(\gamma m v)}{dt} \, ds \\ &= \int_{0}^{mv} vd(\gamma m v) \\ &= \int_{0}^{v} vd(\frac{mv}{\sqrt{1-\frac{v^2}{c^2}}}) \end{align*} I cannot see how the integration bounds change. Do we make some kind of $u$ substitution?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} & m = mass.\quad m_{0} = rest\ mass \\[5mm] \color{#44f}{W\pars{\vec{a}\to\vec{b}}} & \equiv \color{#44f}{\int_{\vec{\,a}}^{\vec{\,b}}\vec{\on{F}}\cdot\dd\vec{\on{r}}} = \int_{\vec{a\,}}^{\vec{b\,}} \totald{\pars{m\vec{\on{v}}}}{t}\cdot\dd\vec{\on{r}} \\[5mm]
|
|calculus|
| 0
|
How does one solve this ODE using a differential operator and a series expansion?
|
I am solving a fairly basic non-homogeneous ODE, but I wanted to try using the differential operator, and a series expansion in order to find the particular solution. I am fairly convinced it is possible to solve an ODE using this method, however I am unable to understand how. This is what I have done thus far: $$ \begin{align*} 2y' + y &= \cos x \\ \end{align*} $$ $$ \begin{align*} (2D+1)y &= \cos x \\ \end{align*} $$ $$ \begin{align*} y &= \frac{\cos x}{2D+1} \\ \end{align*} $$ $$ \begin{align*} y &= \sum_{n=0}^\infty (2D)^{2n} \cdot (1 - 2D) \cdot \cos x \\ \end{align*} $$ $$ \begin{align*} y &= \sum_{n=0}^\infty (2D)^{2n} \cdot (\cos x + 2\sin x) \end{align*} $$ How do I proceed from here? I cannot spot any mistake I have made in anything I have done leading up to it, but trying to proceed seems almost impossible. Have I completely misunderstood how derivative operators work? I tried to solve using Maclaurin series as well: $$ \begin{align*} 2y' + y &= \cos x \\ \end{align*} $$ $$
|
$2y'+y=\cos x$ $(2D+1)y=\cos x$ $(D^2+1)(2D+1)y=0$ $(2D^3+D^2+2D+1)y=0$ $y=c_1e^{-x/2}+c_2\sin x + c_3 \cos x$ $2y'=-(c_1)e^{-x/2}+2c_2 \cos x - 2c_3 \sin x$ $\cos x = (2c_2+c_3)\cos x +(c_2-2c_3) \sin x$ $c_2=2c_3, 5c_3=1\implies c_3=1/5, c_2=2/5$ $y=c_1e^{-x/2}+(2/5)\sin x +(1/5)\cos x$ $y=\frac{1}{1+2D}\cos x=(1-2D+4D^2-8D^3+...) \cos x$ $y=\sum_{k=0}^\infty(-2D)^k\cos x$ $y=\sum_{k=0}^\infty (-2D)^{4k}\cos x+\sum_{k=0}^\infty (-2D)^{4k+1}\cos x+\sum_{k=0}^\infty (-2D)^{4k+2}\cos x+\sum_{k=0}^\infty (-2D)^{4k+3}\cos x$ $y=\sum_{k=0}^\infty16^k \cos x+ \sum_{k=0}^\infty (2)\cdot 16^k \sin x + \sum_{k=0}^\infty -4\cdot 16^k\cdot \cos x+ \sum_{k=0}^\infty (-8)\cdot 16^k \cdot ( \sin x) $ If I've accurately represented the procedure, the result is: $y=\sum_{k=0}^\infty (-3)\cdot 16^k \cos x + (-6)16^k \sin x$ It's interesting the sin and cosine term still maintain a 2:1 ratio. I'm not sure this can be reconciled with the solution from above. Additional solution method: $2y'+y=\cos x$ $y
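The particular solution found by undetermined coefficients above is easy to verify directly:

```python
import math

def y_p(x):
    """Particular solution (2/5) sin x + (1/5) cos x from the annihilator method."""
    return (2/5) * math.sin(x) + (1/5) * math.cos(x)

def y_p_prime(x):
    return (2/5) * math.cos(x) - (1/5) * math.sin(x)

# 2 y' + y should reproduce cos x at arbitrary sample points.
for x in [0.0, 0.3, 1.0, 2.5, -1.7]:
    assert abs(2 * y_p_prime(x) + y_p(x) - math.cos(x)) < 1e-12
```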
|
|sequences-and-series|ordinary-differential-equations|derivatives|
| 0
|
Is it true that for any positive integer $n$, there exists an integer $x$ where there are at least $n$ primes between $x^2$ and $(x+1)^2$
|
Am I correct that this follows directly from two observations: (1) The sum of the reciprocals of primes diverges . (2) The sum of the reciprocals of squares converges Here's my thinking: If there existed an integer $m$ that was a maximum number of primes between two consecutive squares, then the sum of the reciprocal of primes would be less than $m\frac{\pi^2}{6}$ which would be convergent which is not the case. Am I correctly understanding the implication of these two observations? Are there other well known properties that stem from these two observations?
|
Your reasoning is mostly correct. Two slight imprecisions: Where it says “sum of the primes” it should say “sum of the reciprocals of the primes”, as above. It doesn’t make much sense to say “the sum would be less than $m\frac{\pi^2}6$ which would be convergent”, since $m\frac{\pi^2}6$ is a constant. The argument should be that the sum of the reciprocals of the primes would be dominated by the sum of the $m$ -fold reciprocals of the squares, and would thus converge; its value would then be less than $m\frac{\pi^2}6$ . There’s no shortage of well-known properties that stem from $(1)$ , but I’m not aware of any that stem from the combination of $(1)$ and $(2)$ .
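For what it's worth, the counts themselves are easy to tabulate (note that no proof is known that even one prime always lies between consecutive squares; that is Legendre's conjecture); a small sketch:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_between_squares(x: int) -> int:
    """Number of primes p with x^2 < p < (x+1)^2."""
    return sum(is_prime(p) for p in range(x * x + 1, (x + 1) ** 2))

counts = [primes_between_squares(x) for x in range(1, 30)]
assert counts[0] == 2              # primes 2, 3 between 1 and 4
assert max(counts) > counts[0]     # the counts drift upward; no fixed bound m in sight
```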
|
|divergent-series|square-numbers|distribution-of-primes|
| 1
|
How to find a vector such that the inner product is not zero?
|
Let $V$ be a finite set consisting of nonzero vectors $v=(v_1, \cdots, v_n)\in \mathbb{Z}^n$ . How does one find a vector $w\in \mathbb{Z}^n$ such that the inner product $\langle v, w\rangle$ is nonzero for every $v\in V$ ?
|
For the case $n=1$ , we can choose $w=(1)$ . We build up to any $n$ from there. Consider $V'$ , formed by discarding the last element of each element of $V$ ; e.g. given $v \in V$ , $v' \in V'$ is $(v_1, v_2, \dots, v_{n-1})$ . Suppose you have $w' \in \mathbb R^{n-1}$ that satisfies $\langle v', w'\rangle \ne 0$ for all nonzero $v' \in V'$ . Let $m=\max_{v' \in V'} |\langle v', w'\rangle|$ (the biggest inner product). Let $w_n=m+1$ . We then have $\langle v, w\rangle = \langle v', w'\rangle + (m+1) v_n$ . For $v$ with $v_n=0$ , the inner product is unchanged: $\langle v, w\rangle = \langle v', w'\rangle \ne 0$ (here $v' \ne 0$ since $v \ne 0$ and $v_n = 0$ ). Otherwise, $|v_n| \geq 1$ ; therefore, $|(m+1) v_n| > m$ . Since $|\langle v', w'\rangle| \leq m$ , the sum $\langle v, w\rangle = \langle v', w'\rangle + (m+1) v_n \ne 0$ .
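The construction is easy to run in code; a sketch (the set $V$ is an arbitrary integer example):

```python
def find_w(V):
    """Build w with <v, w> != 0 for every nonzero v in V, following the
    coordinate-by-coordinate construction above."""
    n = len(V[0])
    w = [1]                                   # base case n = 1
    for j in range(1, n):
        # Inner products of the length-j prefixes with the w built so far.
        dots = [sum(vi * wi for vi, wi in zip(v[:j], w)) for v in V]
        m = max(abs(d) for d in dots)
        w.append(m + 1)                       # next coordinate dominates
    return w

V = [(1, 0, 2), (0, 3, -1), (-2, 1, 1), (5, -5, 0), (0, 0, 7)]
w = find_w(V)
for v in V:
    assert sum(vi * wi for vi, wi in zip(v, w)) != 0
```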
|
|linear-algebra|abstract-algebra|vectors|
| 1
|
Prove that there is an edge $e' \in E(T')-E(T)$ such that $T'+e-e'$ and $T-e+e'$ are both spanning trees of $G$.
|
Can someone please verify my proof or offer suggestions for improvement? Let $T, T'$ be two spanning trees of a connected graph $G$. For $e \in E(T)-E(T')$, prove that there is an edge $e' \in E(T')-E(T)$ such that $T'+e-e'$ and $T-e+e'$ are both spanning trees of $G$. Let $e = \{x, y\}$. There exists a path $x, u_1, u_2, \ldots u_k, y$ in $T'$. There exists an edge $e' = \{u_d, u_{d+1}\}$ such that $e' \notin T$. If that weren't the case, then the union of $x, y$ and $x, u_1, u_2, \ldots u_k, y$ would form a closed path. Remove edge $e'=\{u_d, u_{d+1}\}$ and add edge $\{x,y\}$ to $T'$. Then, the graph $T'+e-e'$ is connected. To see this, let $\alpha$ and $\beta$ be two vertices in $G$. There exists a path $p$ in $T'$ from $\alpha$ to $\beta$. If $p$ includes $\{u_d, u_{d+1}\}$, form a path $p'$ in $T'+e-e'$ by deleting $u_d, u_{d+1}$, and inserting $u_{d+1}, u_{d+2}, \ldots, x, u_1, u_2, \ldots u_{d-1}, u_d$ in its place. Then, the new path connects $\alpha$ to $\beta$ in $T'+e-e'$. S
|
$T-e$ will split $T$ into two disconnected trees, $A$ and $B$ . In the spanning tree $T'$ , there exists a path from any $a$ in $A$ to any $b$ in $B$ . Any such path connecting $a$ and $b$ has an edge $e'$ directly connecting an $a'$ in $A$ to a $b'$ in $B$ , since either $a\to b$ ( $a'=a,\ b'=b$ ) or $a\to .... \to a' \to b' \to...\to b$ . Thus, $T-e+e'$ will reconnect $A$ and $B$ . $T-e+e'$ is a spanning tree of $G$ since (1) it connects all vertices of $G$ and (2) there is no cycle: $A$ and $B$ are trees, so they contain no cycle, and adding $e'$ will not create a cycle (otherwise, it would contradict the fact that before adding $e'$ , $a'$ in $A$ and $b'$ in $B$ are disconnected). By symmetry, $T'-e'+e$ is also a spanning tree of $G$ .
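The exchange argument can be illustrated concretely on $K_4$ (a minimal sketch; helper names are mine):

```python
def is_spanning_tree(edges, vertices):
    """Connected with |V| - 1 edges (hence acyclic) on the given vertex set."""
    if len(edges) != len(vertices) - 1:
        return False
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for u in adj[stack.pop()]:
            if u not in seen:
                seen.add(u); stack.append(u)
    return seen == vertices

V = {0, 1, 2, 3}
T  = {(0, 1), (0, 2), (0, 3)}          # star spanning tree of K4
Tp = {(0, 1), (1, 2), (2, 3)}          # path spanning tree of K4
e = (0, 3)                             # e in T \ T'

# T - e splits into components A = {0, 1, 2} and B = {3}; pick e' in T' \ T
# crossing the cut, as in the proof above.
A = {0, 1, 2}
e_prime = next(ed for ed in Tp - T if (ed[0] in A) != (ed[1] in A))

assert e_prime == (2, 3)
assert is_spanning_tree((T - {e}) | {e_prime}, V)
assert is_spanning_tree((Tp - {e_prime}) | {e}, V)
```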
|
|graph-theory|solution-verification|trees|
| 0
|
Question on The Theorem of Existence and Uniqueness of The Solutions to Sylvester Equations
|
Background I am self-studying linear algebra, and I got stuck on some steps of the proof of the following theorem: Theorem $\quad$ Let $A$ be an $n \times n$ complex matrix and $B$ be an $m \times m$ complex matrix. For any $n \times m$ complex matrix $C$ , the Sylvester equation $AX+XB=C$ has a unique solution $X$ , which is an $n \times m$ complex matrix, if and only if $A$ and $-B$ do not share any eigenvalue. Here is the proof: Proof $\quad$ The equation $AX+XB=C$ is a linear system with $mn$ unknowns and $mn$ equations. Hence, it is uniquely solvable for any given $C$ if and only if the homogeneous equation $AX+XB=0$ admits only the trivial solution. First suppose that $A$ and $-B$ do not share any eigenvalue. Let $X$ be a solution to the homogeneous system $AX+XB=0$ . Then $AX=X(-B)$ . Since \begin{align*} &A(AX) = A(X(-B))\\ \implies\ &(AA)X = (AX)(-B) = (X(-B))(-B) = X((-B)(-B))\\ \implies\ &A^2X = X(-B)^2, \end{align*} by mathematical induction, we have $A^kX=X(-B)^k$ for each
|
Let $p(x)=\sum_{k=0}^mc_kx^k$ . Then $$ p(A)X=\sum_{k=0}^mc_kA^kX=\sum_{k=0}^mc_kX(-B)^k=X\sum_{k=0}^mc_k(-B)^k=Xp(-B). $$ No. This is the assertion of the Cayley–Hamilton theorem . Your reasoning is the typical incorrect one. Note that $p(A)$ is an $n\times n$ matrix but $\det(A-AI)$ is a complex number. They are two different kinds of things. Yes. Your interpretations are correct. Factorise $p(x)$ as $\prod_{k=1}^n(x-\lambda_k)$ . Then $\sigma(A)=\{\lambda_1,\ldots,\lambda_n\}$ and $p(-B)=\prod_{k=1}^n(-B-\lambda_kI)$ . For any number $\mu$ , note that $p(\mu)=0$ if and only if $\mu=\lambda_k$ for some $k$ . Now the following statements are equivalent: $p(-B)$ is singular; $-B-\lambda_kI$ is singular for some $k$ ; $\lambda_k\in\sigma(-B)$ for some $k$ , i.e., $A$ and $-B$ share an eigenvalue $\lambda_k$ ; $p(\mu)=0$ for some $\mu\in\sigma(-B)$ ; $0\in p(\sigma(-B))$ . Here $p$ is the characteristic polynomial of $A$ . We have $p(A)=0$ (Cayley–Hamilton theorem) and $p(A)X=Xp(-B)$ . Henc
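The Cayley–Hamilton point (that $p(A)$ is a matrix identity, not the scalar $\det(A-AI)$ ) can be made concrete in the $2\times2$ case, where $p(x)=x^2-(\operatorname{tr}A)\,x+\det A$ :

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scal(c, M):
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

A = [[3, 1], [2, -1]]                       # arbitrary 2x2 example
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
I = [[1, 0], [0, 1]]

# Cayley-Hamilton: p(A) = A^2 - (tr A) A + (det A) I is the zero matrix,
# even though the *scalar* det(A - A I) = det(0) = 0 is a different statement.
pA = matadd(matmul(A, A), scal(-tr, A), scal(det, I))
assert pA == [[0, 0], [0, 0]]
```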
|
|linear-algebra|matrices|polynomials|eigenvalues-eigenvectors|determinant|
| 0
|
The expected max inner product between a random vector and the sum of random vectors
|
Given: Fix $n, k \in \mathbb{N}$ , we generate $n$ random vectors $X_i \in \mathbb{R}^k$ with $\|X_i\|_2 = 1$ . We want to compute: The expected max Euclidean inner product between a random vector and the sum of the random vectors, i.e., $$ \mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle. $$ What I have tried: When $n = 2$ , $\langle X_1, X_1 + X_2 \rangle = \langle X_2, X_1 + X_2 \rangle = 1 + \langle X_1, X_2 \rangle$ ; $\mathbb E \langle X_1, X_2 \rangle = 0$ due to symmetry, and thus $\mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle = 1$ . When $n \geq 3$ , it becomes unclear to me. Intuitively it might get larger as $n$ increases since we have more options, and would approach some limit, possibly related to $\| \sum_{j \in [n]} X_j \|$ . Simulations (1,000,000 trials) with $n = 3$ and $k = 2$ gives $1.5319441791547532 \pm 0.7104635978561801$ . Simulations (1,000,000 trials) with $n = 4$ and $k = 2$ gives $1.6632600316282025 \pm 0.9042611238
|
Fix $k$ . Asymptotically, how does the expected value of what you want behave as $n$ goes up? Note that, if you take a small finite area (taking up a small fixed percentage $\epsilon$ ) of the unit sphere's surface and ask for the probability that all the vectors $X_1,X_2,\ldots,X_n$ miss that area, the answer is $(1-\epsilon)^n\to0$ . So for any fixed subdivision of the unit sphere's surface into a finite number of little areas, the probability that each area has at least one of the vectors approaches $1$ . So for larger $n$ , it doesn't matter which direction $\sum X_j$ points in, because you can find one of those vectors with a similar direction and do the dot product with that. The only question is, what is the expectation of $||\sum X_j||_2$ ?
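The simulation in the question is easy to reproduce for $k=2$ , where a uniform unit vector is just a random angle; a sketch (seeded for reproducibility, with deliberately loose tolerances):

```python
import math, random

random.seed(0)

def estimate(n, trials=100_000):
    """Monte Carlo estimate of E max_i <X_i, sum_j X_j> for k = 2,
    with the X_i uniform on the unit circle."""
    total = 0.0
    for _ in range(trials):
        xs = [(math.cos(t), math.sin(t))
              for t in (random.uniform(0, 2 * math.pi) for _ in range(n))]
        sx = sum(x for x, _ in xs)
        sy = sum(y for _, y in xs)
        total += max(x * sx + y * sy for x, y in xs)
    return total / trials

assert abs(estimate(2) - 1.0) < 0.02        # exact answer is 1 for n = 2
assert 1.45 < estimate(3) < 1.62            # question reports ~1.53 for n = 3, k = 2
```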
|
|probability|random-variables|expected-value|
| 0
|
Is there a function with infinite integral on every interval?
|
Could give some examples of nonnegative measurable function $f:\mathbb{R}\to[0,\infty)$, such that its integral over any bounded interval is infinite?
|
We don't really need that $|x-q_n|^{1/2}$ construction first. Let $q_n$ be an enumeration of all rational numbers. Define the function $g: \mathbb{R} \rightarrow [0,\infty]$ by: $$ g(x)= \begin{cases} \sum_{n=0}^\infty \frac{1}{4^n |x-q_n|}, & \text{if}\ x\notin\mathbb{Q} \\ 0, & \text{if}\ x\in\mathbb{Q} \\ \end{cases} $$ Define the set $ E=\bigcup_{k=1}^\infty \bigcap_{n=1}^\infty \left\{ x\in \mathbb{R}:|x-q_n|\ge\frac{1}{2^n k} \right\} $ . Then we have $\mu(\mathbb{R} \setminus E)=0$ , and $g$ converges on $E$ . Then, $f=\mathcal{X}_E \cdot g$ is finite on its domain. Further, $f=g$ $\mu$-a.e., and thus $f$ has the same integral as $g$ over any non-empty open interval, which is infinite. Proof: For any non-empty open interval $I=(a,b)$ , $\exists q_k \in (a,b) \cap \mathbb{Q}$ , $$ \int_I f \textit{d} \mu = \int_{I} g \textit{d} \mu \ge \int_{I\setminus\mathbb{Q}} \textstyle \frac{1}{4^k |x-q_k|} \displaystyle \textit{d} \mu = \int_{I\setminus\{q_k\}} \textstyle \frac{1}{4^k |x-q_
|
|real-analysis|measure-theory|examples-counterexamples|
| 0
|
Show that the function $F:[0,\infty)\to[0,\infty)$, $F(x)=\int_{0}^{x^{p}}f(t)dt$ is concave
|
Let $f:[0,\infty)\to[0,\infty)$ be a concave function with $p \in [0,\frac{1}{2}]$ . a) Show that $f$ is integrable on every interval $[a,b] \subset \mathbb R$ . b) Show that the function $F:[0,\infty)\to[0,\infty)$ , $F(x)=\int_{0}^{x^{p}}f(t)dt$ is concave. I know there is a theorem that says if a function is concave on a closed interval, then the function is continuous on the open interval. This theorem would prove $a)$ instantly, but I wonder if there is an alternative way. For $b)$ I tried starting the problem with the definition of concavity for $f$ and got nothing. Then I tried using the previous theorem and working with $F$ as a differentiable function on an arbitrary open interval, but I couldn't finish. I wonder why they say $p$ is between $0$ and $\frac{1}{2}$ . That means $x^{p}$ is concave and maybe we can use that for the function $F$ , but why is $p$ not upper bounded by $1$ ? The function $F$ is actually a composition of a primitive of $f$ which is $0$ in $x=0$ and the function
|
For (b), the condition that $p \leq 1/2$ is necessary. Consider $f(t) = t$ and $p = 3/4$ , which renders $F(x) = x^{3/2}/2$ , which is not concave. First, we show the $p = 1/2$ case. Notice that $F$ is differentiable, with $$ F'(x) = f(x^{1/2})\cdot \frac{1}{2}x^{-1/2}. $$ For $F$ to be concave, it is enough to show that $F'$ is decreasing, for which in turn it is enough to show that $g(y) = 2F'(y^2) = f(y)/y$ is decreasing (since the function $x \mapsto x^{1/2}$ is increasing). Now, let $0 \leq x \leq y$ , from which we can write $x$ as a convex combination of $0$ and $y$ according to $$ x = \Bigl(1 - \frac{x}{y}\Bigr)\cdot 0 + \frac{x}{y}\cdot y. $$ Since $f$ is concave, we have that \begin{gather*} f(x) \geq \Bigl(1 - \frac{x}{y}\Bigr)\cdot f(0) + \frac{x}{y}\cdot f(y). \\ g(x) = \frac{f(x)}{x} \geq \frac{f(y)}{y} = g(y). \end{gather*} Thus $g$ , and so $F'$ , is decreasing, from which $F$ is concave. Now, for $0 \leq p \leq 1/2$ , we have $$ F_p(x) = F(x^{2p}). $$ Since $p \leq 1/2$ ,
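The role of the bound $p\le 1/2$ can be seen numerically in the test case $f(t)=t$ , where $F(x)=x^{2p}/2$ ; a sketch using discrete second differences (negative for concave, positive for convex):

```python
def F(x, p):
    """F(x) = integral of t dt from 0 to x^p = x^(2p) / 2, for the test case f(t) = t."""
    return x ** (2 * p) / 2

def second_difference_sign(p, xs, h=1e-3):
    """Discrete second differences of F(., p) at the sample points xs."""
    return [F(x + h, p) - 2 * F(x, p) + F(x - h, p) for x in xs]

xs = [0.5, 1.0, 2.0, 5.0]
# p = 1/2 (boundary case): F(x) = x/2 is affine, second differences ~ 0.
assert all(abs(d) < 1e-12 for d in second_difference_sign(0.5, xs))
# p = 1/4: F(x) = sqrt(x)/2 is strictly concave, second differences < 0.
assert all(d < 0 for d in second_difference_sign(0.25, xs))
# p = 3/4 (outside the allowed range): F(x) = x^(3/2)/2 is convex here.
assert all(d > 0 for d in second_difference_sign(0.75, xs))
```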
|
|real-analysis|integration|definite-integrals|
| 0
|
Normal bundle of transverse intersection of two irreducible components
|
Let $X$ be an equidimensional reduced scheme of finite type over an algebraically closed field $k$ . Assume that $X$ has two irreducible components $X_1$ and $X_2$ . Assume also that $X_1$ and $X_2$ are nonsingular varieties which meet transversally at $Y := X_1 \cap X_2$ , so that $Y$ is a nonsingular variety of codimension $1$ in $X_1$ and in $X_2$ . If it helps, we may assume that $Y$ is again irreducible. We may consider the two normal bundles $\mathcal N_{Y/X_1}$ and $\mathcal N_{Y/X_2}$ , which are actually line bundles on $Y$ . Thus, they define elements of $\mathrm{Pic}(Y)$ . Are these two elements inverse of eachother? In other words, is the line bundle $\mathcal N_{Y/X_1} \otimes \mathcal N_{Y/X_2}$ isomorphic to $\mathcal O_Y$ ?
|
No in general. For instance, if $X_i$ are hypersurfaces of degree $d_i$ in the projective space $\mathbb{P}^n$ then $$ \mathcal{N}_{Y/X_1} \cong \mathcal{N}_{X_2/\mathbb{P}^n}\vert_Y \cong \mathcal{O}_Y(d_2) $$ and similarly $\mathcal{N}_{Y/X_2} \cong \mathcal{O}_Y(d_1)$ . If, however, $X = X_1 \cup X_2$ is a fiber of a flat morphism to a curve, then the relation that you want is indeed true.
|
|algebraic-geometry|intersection-theory|divisors-algebraic-geometry|line-bundles|
| 1
|
Is there a function with infinite integral on every interval?
|
Could give some examples of nonnegative measurable function $f:\mathbb{R}\to[0,\infty)$, such that its integral over any bounded interval is infinite?
|
Let $I_1,I_2,I_3,\cdots$ enumerate the open intervals with rational endpoints. Construct pairwise disjoint closed nowhere dense sets of positive measure ( i.e. fat Cantor sets) $A_n\subset I_n$ ; this can be done since $A_1\cup\cdots\cup A_{n-1}$ is nowhere dense. So every interval contains infinitely many $A_n$ 's. Define $f:\mathbb R\to\mathbb R$ so that $f(x)=1/\lambda(A_n)$ if $x\in A_n$ and $f(x)=0$ if $x\notin\bigcup_nA_n$ .
|
|real-analysis|measure-theory|examples-counterexamples|
| 0
|
Given $a, b$ in a GCD domain $R$, what assumptions are necessary to find divisors which have $d_{1} d_{2} = \operatorname{lcm} (a, b)$?
|
To state the question fully, given $a,b$ in a ring $R$ , I want to be able to find (or know when it not possible) elements $d_{1} \mid a$ , $d_{2} \mid b$ such that $\gcd (d_{1}, d_{2}) = 1$ , and $d_{1}d_{2} = \operatorname{lcm} (a, b)$ . Clearly we need a GCD domain at least to ensure that $a$ and $b$ always have an lcm. And if I understand correctly, in a UFD it is always possible, since we can choose $d_{1}$ and $d_{2}$ using the prime factorisation. So my question is essentially how much do we need to assume such that this is possible? In the all the examples I have tried, it's either been easily possible or impossible for the above reasons - but I haven't been able to find a proof, disproof, or counterexample for any intermediate case (i.e. a GCD domain that is not a UFD or stronger). That said, I'm also pretty new to this kind of abstract algebra, so I could also be missing something obvious.
|
Let me show how to transform the problem into an equivalent problem which I suspect might be easier to find in the literature. Let $g$ and $l$ denote $\text{gcd}(a,b)$ and $\text{lcm}(a,b)$ respectively, then we have $gl=ab$ . If $d_1$ and $d_2$ are such that $l = d_1 d_2$ , $d_1 \mid a$ , $d_2 \mid b$ and $\text{gcd}(d_1, d_2)=1$ , then we have $gd_1 d_2 = gl = ab = (a/d_1)d_1(b/d_2)d_2$ , so $g=(a/d_1)(b/d_2)$ (let's assume $a,b\ne0$ so that we can cancel $l= d_1 d_2$ ), which implies that $d_1=(a/g)(b/d_2)$ and $d_2=(b/g)(a/d_1)$ . Since $\text{gcd}(a/g,b/g)=\text{gcd}(a,b)/g=1$ , $\text{gcd}(d_1,d_2)=1$ if and only if $\text{gcd}(a/g,a/d_1)=\text{gcd}(b/g,b/d_2)=\text{gcd}(a/d_1,b/d_2)=1$ . Thus the question becomes: given coprime elements $a' := a/g$ and $b' := b/g$ , can we always factorize $g=d_a d_b$ such that $\text{gcd}(a',d_a)=\text{gcd}(b',d_b)=\text{gcd}(d_a,d_b)=1$ ? I believe the answer is no, but unfortunately I haven't found a counterexample, and I'm now asking it as a
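For intuition, in $\mathbb Z$ (a UFD, where the question notes the factorization always exists) the prime-assignment argument is easy to code; a sketch with hypothetical helper names:

```python
from math import gcd

def padic(x: int, p: int) -> int:
    """Exponent of the prime p in x (p-adic valuation), for x != 0."""
    e = 0
    while x % p == 0:
        x //= p; e += 1
    return e

def coprime_split(a: int, b: int):
    """Return (d1, d2) with d1 | a, d2 | b, gcd(d1, d2) = 1 and
    d1 * d2 = lcm(a, b), assigning each prime power of lcm(a, b)
    to whichever of a, b carries the higher power (ties to d1)."""
    l = a * b // gcd(a, b)
    d1, m, p = 1, l, 2
    while p * p <= m:                        # trial-division factorisation of l
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p; e += 1
            if padic(a, p) >= padic(b, p):
                d1 *= p ** e
        p += 1
    if m > 1 and padic(a, m) >= padic(b, m):  # leftover prime factor
        d1 *= m
    return d1, l // d1

for a, b in [(12, 18), (8, 27), (60, 126), (100, 100)]:
    d1, d2 = coprime_split(a, b)
    l = a * b // gcd(a, b)
    assert d1 * d2 == l and a % d1 == 0 and b % d2 == 0 and gcd(d1, d2) == 1
```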
|
|abstract-algebra|elementary-number-theory|ring-theory|gcd-and-lcm|
| 0
|
$f$ is an entire function, then $f(\mathbb{C})$ is closed
|
While trying to prove " $f$ is an entire function such that $|f(z)| \to \infty$ as $|z|\to \infty$ , then $f(\mathbb{C}) $ is closed", I accidentally showed that " $f$ is an entire function, then $f(\mathbb{C})$ is closed". I know this is not a correct statement, but I need to know where I went wrong, and I need some hints to correct the proof. The proof: Let $a$ be a cluster point of $f(\mathbb{C})$ , then $\exists f(z_n) \in f(\mathbb{C})$ such that $f(z_n) \to a$ . Now consider $$g: B[a,1] \to \mathbb{R}$$ $$g(z)=|f(z)-a|$$ This is a continuous real valued function on a compact set and hence attains a minimum. Suppose it attains its minimum on the boundary, say the minimum is $r>0$ ; but since we have $f(z_n)\to a$ , I can find an $f(z_k) \in B[a,1]$ such that $|f(z_k)-a|<r$ , and hence the minimum is attained in the open ball $B(a,1)$ , and hence by the Minimum modulus principle the minimum is zero, $$\implies \exists w \text{ s.t. } f(w)=a$$ which implies $a \in f(\mathbb{C})$ and hence $f(\mathbb{C})$ is closed.
|
Your sequence $z_n$ has a bounded subsequence (because if $|z_n| \to \infty$ then $|a|=\lim |f(z_n)|=\infty$ , a contradiction). If $z_{n_k} \to z$ then $a=f(z)$ so $a \in f(\mathbb C)$ .
|
|complex-analysis|analysis|solution-verification|fake-proofs|
| 1
|
For a non-singular transformation $T$ the operator $f\mapsto f\circ T$ is an isometry on $\mathcal L^\infty(X,\mu)$
|
Let $(X,\mu)$ be a measure space and $T:X\to X$ be a non-singular transformation, that is, $\mu(T^{-1}(A))=0$ if and only if $\mu(A)=0.$ Now consider the map $\Phi:\mathcal L^\infty(X,\mu) \to \mathcal L^\infty(X,\mu)$ defined by $\Phi(f)=f\circ T$ for all $f\in\mathcal L^\infty(X,\mu)$ . Now I want to show that $\Phi$ is a linear isometry. For linearity, let $\lambda\in \mathbb C$ and $f,g\in \mathcal L^\infty(X,\mu)$ . Then we have, for any $x\in X$ , \begin{align*}\Phi(f+\lambda g)(x) &= (f+\lambda g)(T(x))\\ &=f(T(x))+\lambda (g(T(x)))\\ &=\Phi(f)(x)+\lambda\Phi(g)(x).\end{align*} So I have proved the linearity. I also proved that $\Phi$ is an isometry on $\mathcal L^2(X,\mu)$ and got the result when $T$ is measure preserving, but I am unable to show that $\Phi$ is an isometry on $\mathcal L^\infty(X,\mu)$ when $T$ is non-singular, that is, I want to show $\|f\|_\infty=\|f\circ T\|_\infty.$ Please help me to solve this. Thanks in advance.
|
I think non-singular transformations are supposed to be bijective. Otherwise, the statement is false. Counter-example: Let $X=[0,1]$ with Lebesgue measure, $Tx=\frac x 2$ , $f(x)=0$ for $x \le \frac1 2$ and $f(x)=2$ for $x>\frac 1 2$ . Then $\|\Phi(f)\|=0$ since $f(T(x))=0$ for all $x$ . But $\|f\|=2$ . Suppose $|f(x)| \le M$ whenever $x \in E$ where $\mu(E^{c})=0$ . Then $|f(T(x))|\leq M$ whenever $Tx \in E$ , i.e. whenever $x\in T^{-1}(E)$ . Since $T$ is non-singular and $\mu(E^c)=0$ , we have $\mu(T^{-1}(E)^c)=\mu(T^{-1}(E^c))=0$ , so $|f\circ T|\leq M$ a.e. It follows from the definition of the $L^{\infty}$ norm that $\|\Phi(f)\|=\|f\circ T\|\leq \|f\|$ . The reverse inequality follows in a similar way.
|
|real-analysis|measure-theory|linear-transformations|ergodic-theory|
| 1
|
Solve Double Integral with Polar System
|
Evaluate $\displaystyle\iint\limits_{\Omega}{(x+y)dx dy}$ where the area $\Omega$ is bounded by the curve $x^2+y^2=x+y$ . Note: Evaluate by converting to the polar system. My attempt: $$\iint\limits_{\Omega}{(x+y)dx dy}=\int_0^{2\pi}d\theta \int_0^{\frac{1}{\sqrt 2}} r \,(r\sin\theta + r\cos\theta)dr=$$ $$=\int_0^{2\pi}(\sin\theta+\cos\theta)d\theta\cdot\left.{\frac{r^3}{3}}\right\rvert_0^{\frac{1}{\sqrt2}}=?$$ My Question: The answer in the book is $\frac{\pi}{2}$ . I don't understand how they got it. Maybe when I switched to polar coordinates I did something wrong? Sorry for asking for help with a simple question.
|
The area is bounded by a circle around $(1/2,1/2)$ with radius $1/\sqrt{2}\,:$ $$ 0=x^2-x+y^2-y=(x-\tfrac12)^2+(y-\tfrac12)^2-\tfrac12\,. $$ Expressing the integral in $(r,\theta)$ has to take into account where the center of the circle is: $$\iint\limits_\Omega(x+y)\,dx\,dy=\int_0^{2\pi}\int_0^\frac1{\sqrt{2}}\Big\{(r\cos\theta+\tfrac12)+(r\sin\theta+\tfrac12)\Big\}r\,d\theta\,dr=\frac\pi 2\,.$$
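A quick numerical check of the shifted-polar formula above (the midpoint rule and the tolerance are my own choices):

```python
from math import sin, cos, pi

# Integrate {(r*cos(t) + 1/2) + (r*sin(t) + 1/2)} * r
# over r in (0, 1/sqrt(2)) and t in (0, 2*pi), midpoint rule in both variables.
R = 0.5 ** 0.5
nr, nt = 400, 400
dr, dt = R / nr, 2 * pi / nt
total = 0.0
for i in range(nr):
    r = (i + 0.5) * dr
    for j in range(nt):
        t = (j + 0.5) * dt
        total += (r * cos(t) + 0.5 + r * sin(t) + 0.5) * r
total *= dr * dt
assert abs(total - pi / 2) < 1e-3
```

The trigonometric terms cancel over a full period, leaving $\int_0^{2\pi}\int_0^{1/\sqrt2} r\,dr\,d\theta = \pi R^2 = \pi/2$, matching the book.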
|
|integration|polar-coordinates|
| 1
|
Prove $(a+b+c-3)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\right)+abc+\frac{1}{abc}\ge 2.$
|
For any positive real numbers $a,b,c$ prove that $$(a+b+c-3)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\right)+abc+\frac{1}{abc}\ge 2.$$ I've tried assuming $a+b+c\ge 3.\quad(1)$ By AM-GM $$abc+\frac{1}{abc}\ge 2$$ and we need to prove $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\ge 0$$ By Cauchy-Schwarz $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\ge \frac{9}{a+b+c}$$ but $$\frac{9}{a+b+c}\ge 3 \iff a+b+c\le 3$$ which gives a contradiction to $(1).$ Similarly, in the case $a+b+c<3$ I cannot find a good approach for either assumed condition. Could you please give some hint to solve it? Thank you.
|
Among the three numbers $$(a-1)(bc-1), (b-1)(ca-1), (c-1)(ab-1)$$ at least two have the same sign. WLOG, assume that they are $(b-1)(ca-1)$ and $(c-1)(ab-1)$ . We have: $$ \left(a+b+c -3\right) \left(\frac{1}{a} +\frac{1}{b} +\frac{1}{c} -3\right) +abc +\frac{1}{abc} -2$$ $$= \left( a+\frac{1}{a}-2\right) \left(b+\frac{1}{b} +c+\frac{1}{c}-4 \right)+\frac{(b-1)(c-1)(ab-1)(ac-1)}{abc} \ge 0.$$ Indeed, both factors of the first product are nonnegative by AM-GM, and the numerator of the second term is $(b-1)(ca-1)\cdot(c-1)(ab-1)\ge 0$ , the product of two quantities of the same sign.
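The key algebraic identity behind this answer can be sanity-checked numerically (the sample points below are my own arbitrary choices):

```python
def lhs(a, b, c):
    # original expression minus 2
    return (a + b + c - 3) * (1/a + 1/b + 1/c - 3) + a*b*c + 1/(a*b*c) - 2

def rhs(a, b, c):
    # factored form used in the answer
    return ((a + 1/a - 2) * (b + 1/b + c + 1/c - 4)
            + (b - 1) * (c - 1) * (a*b - 1) * (a*c - 1) / (a*b*c))

for a, b, c in [(1, 1, 1), (2, 1, 1), (1, 2, 2), (2, 3, 0.5), (3, 2, 0.5), (2, 3, 2)]:
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-6
```

Note that the identity itself holds for all positive $a,b,c$; the WLOG sign argument is only needed to see that the second term is nonnegative.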
|
|inequality|contest-math|
| 1
|
$\log_{2} \frac{1-ab}{a + b} = 2ab + a + b -3$. Find min of $(a +b)$
|
Problem : $a$ , $b$ are positive real numbers and $\log_{2} \frac{1-ab}{a + b} = 2ab + a + b -3$ . Find the minimum value of $P = a + b$ I reached this point and had no idea how to proceed. Please advise: $2^{2(1-ab) + 1}(1 - ab) = 2^{a + b}(a + b)$
|
Let us take notations $$S=a+b, \ \ P=ab$$ keeping in mind the classical inequality (Arith. Mean vs. Geom. Mean) $$S \ge 2\sqrt{P}\tag{1}$$ With these notations, your relationship, which can be written: $$8(1-ab)=2^{a+b+2ab}(a+b)$$ becomes: $$8(1-P)=2^{S+2P}S$$ (1) implies: $$8(1-P) \ge 2^{2\sqrt{P}+2P}2 \sqrt{P}$$ $$\underbrace{4(1-P)}_{f(P)} \ge \underbrace{4^{\sqrt{P}+P}\sqrt{P}}_{g(P)}\tag{2}$$ $f$ is decreasing with $f(0)=4,f(1)=0$ ; $g$ is increasing with $g(0)=0,g(1)=16$ , as can be seen on the following graphical representation. Therefore their curves have a unique intersection point. In fact the abscissa $P_0$ of this point is the unique positive root $P_0$ of equation: $$P+\sqrt{P}=1\tag{3}$$ (check it by plugging the LHS of (3) in (2)). Equation (3) above can be solved explicitly because, setting $p=\sqrt{P}$ , it becomes a quadratic equation, giving: $$P_0=\left(\tfrac12(\sqrt{5}-1)\right)^2=\tfrac12(3-\sqrt{5}) \approx 0.381966\tag{4}$$ In this way, the maximal value of $P$ is $P_0$ , and the minimal value of $S=a+b$ is $2\sqrt{P_0}=\sqrt{5}-1\approx 1.2361$ , attained for $a=b=\tfrac12(\sqrt{5}-1)$ .
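One can verify numerically that the candidate minimizer $a=b=\frac{\sqrt5-1}{2}$ satisfies the original constraint and gives $a+b=\sqrt5-1$ (a quick sketch of my own):

```python
from math import log2, sqrt

a = b = (sqrt(5) - 1) / 2          # candidate minimizer
assert 0 < a * b < 1               # so the logarithm is defined

# original constraint: log2((1 - ab)/(a + b)) = 2ab + a + b - 3
left = log2((1 - a * b) / (a + b))
right = 2 * a * b + a + b - 3
assert abs(left - right) < 1e-9    # both sides equal -1 here

assert abs((a + b) - (sqrt(5) - 1)) < 1e-12
```

Here $1-ab=\frac{\sqrt5-1}{2}=a$, so the ratio inside the log is exactly $\tfrac12$ and both sides equal $-1$.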
|
|calculus|logarithms|exponentiation|
| 1
|
There are $n$ balls. Investigate the independence of the events $A_j =$ { number j appeared in the first j draws }, where: $j = 1, 2, . . . , n$.
|
From an urn containing $n$ balls numbered from $1$ to $n$ , one ball is drawn and then returned to the urn. Then that experiment is repeated. Investigate the independence of the events $A_j =$ { number j appeared in the first j draws }, where $j = 1, 2, \ldots, n$ . I know that for events $A_i$ and $A_j$ to be independent, it has to be true that $P(A_i) \cdot P(A_j) = P(A_i \wedge A_j)$ . I think it can be assumed that $P(A_i) = n^i - (n-1)^i$ . It is so, because we need to subtract all the sequences created by choosing each time one of $n-1$ balls (because we can't choose the one with number i) from all of the sequences possible (created by choosing each time one of $n$ balls). Now, I don't know how to calculate $P(A_i \wedge A_j)$ . We need to place those $2$ numbers in $2$ draws (at least), but it's hard since the largest number where we can put $i$ is $i$ and for $j$ it's $j$ . I don't know how to catch the idea of putting those two in the sequence of draws in the correct way.
|
I think it can be assumed that: $P(A_i)=n^i-(n-1)^i$ I believe you meant the number of ways in which the desired event can happen. You need to divide by $n^i$ for it to be the probability. Now, I don't know how to calculate $P(A_i \wedge A_j)$ . If you consider the complement of this, we have three possibilities. (Without loss of generality, we can assume $i<j$ .) $A_i^c \land A_j^c$ The number of ways this can happen is $(n-2)^i \cdot (n-1)^{j-i}$ . For the first $i$ draws, we exclude balls with numbers $i$ and $j$ . Thereafter, we exclude only the ball with number $j$ . $A_i^c \land A_j$ The number of ways this can happen is $(n-1)^i \cdot n^{j-i} - (n-2)^i \cdot (n-1)^{j-i}$ , i.e. the count for $A_i^c$ minus the count for $A_i^c \land A_j^c$ . $A_i \land A_j^c$ The number of ways this can happen is $(n-1)^j - (n-2)^i \cdot (n-1)^{j-i}$ , i.e. the count for $A_j^c$ minus the count for $A_i^c \land A_j^c$ . To get the probabilities, we need to divide by $n^j$ in each of the three cases. Can you take it from here?
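These counts can be checked by brute force for a small urn; here is my own sketch with $n=3$, $i=1$, $j=2$:

```python
from itertools import product

n, i, j = 3, 1, 2
seqs = list(product(range(1, n + 1), repeat=j))   # all n^j equally likely outcomes

in_Ai = lambda d: i in d[:i]      # ball i appears in the first i draws
in_Aj = lambda d: j in d[:j]      # ball j appears in the first j draws

c_Ai = sum(1 for d in seqs if in_Ai(d))
c_Aj = sum(1 for d in seqs if in_Aj(d))
c_both = sum(1 for d in seqs if in_Ai(d) and in_Aj(d))
c_neither = sum(1 for d in seqs if not in_Ai(d) and not in_Aj(d))

assert c_Ai == (n**i - (n - 1)**i) * n**(j - i)
assert c_Aj == n**j - (n - 1)**j
assert c_neither == (n - 2)**i * (n - 1)**(j - i)
# A_1 and A_2 are not independent for n = 3:
assert c_both * n**j != c_Ai * c_Aj
```

For this instance $P(A_1)P(A_2)=\frac13\cdot\frac59=\frac5{27}$ while $P(A_1\wedge A_2)=\frac19$, so the events are dependent.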
|
|probability|independence|
| 0
|
Proving a canonical height property
|
Let $H$ be the classical Weil height for a number field (of degree $n$ say), i.e. given $\alpha$ an element of this field and $f(x) = a_0x^n + \cdots + a_n \in \mathbb{Z}[X]$ its minimal polynomial, the height is defined by $$H(\alpha)^n = |a_0|\prod_i \max(1, |\alpha_i|)$$ where the $\alpha_i$ are the different complex embeddings of $\alpha$ . I would like to prove that $H(\alpha^m) = H(\alpha)^m$ . It seems to be derived from the fact that $$g(x) = a_0^m (x-\alpha_1^m) \cdots (x-\alpha_n^m)$$ has the following properties, which I am struggling to understand: it is a polynomial in $\mathbb{Z}[X]$ (I can factor it by $f(x)$ , but it is not so clear how to see that the remaining coefficients are also integers; maybe with the Newton relations between roots and coefficients?), and it is a power of the minimal polynomial of $\alpha^m$ (it is surely a multiple, since it kills $\alpha^m$ , which is among the $\alpha_i^m$ ). Thanks a lot for tips or help!
|
If we let $\omega$ be a primitive $m$ th root of unity, we have: $$g(x^m)=\prod_{i=1}^m f(\omega^i x) \omega^{-in}$$ Since $\omega$ is an algebraic integer, all the coefficients on the RHS are algebraic integers, so all the coefficients of $g$ are algebraic integers. Also, all the coefficients of $g$ are symmetric polynomials of ratios of the coefficients of $f$ , so they must be rational, so they must be integers. Let $p$ be any arbitrary prime; let's show that $p$ does not divide $g$ . Since $f$ is a minimal polynomial, it has some term not divisible by $p$ . Let that term be $a_p x^{n_p}$ . Then, working mod $p$ , $g(x^m)$ must have leading term $a_p^m x^{m n_p} (-1)^{(n_p-n)m(m+1)}$ . This is not divisible by $p$ , so $g$ must not be divisible by $p$ . Thus, $g$ is not divisible by any primes, so its coefficients are in lowest terms. Consider the Galois group of the splitting field of $f$ . It must act transitively on the $\alpha_i$ . If we consider the subgroup of automorphisms which stabilize
|
|number-theory|polynomials|algebraic-number-theory|minimal-polynomials|
| 0
|
The method of calculating the volume of a parallelepiped halved along its entire length in a multi-dimensional space
|
Consider a right-angled triangle as shown below, where vertex $O$ is the right angle. If we consider the midpoint of the two legs and draw a line passing through it, we have proved that the area of triangle $OCD$ is one-fourth of the area of triangle $OAB$ . Similarly, for a pyramid with a triangular base and vertex $O$ as shown in the figure below, if we pass a plane through the midpoint of all the edges, the volume of pyramid $ODEF$ is one-eighth of the volume of pyramid $OABC$ . We want to calculate the ratio of volumes in higher dimensions. Perhaps the expression I use may not be mathematically accurate, but I hope to convey my intention correctly. We want to consider a right-angled triangle in $D$ -dimensional space, pass a $D-1$ dimensional triangle through the legs, and calculate the ratio of the newly created $D$ -dimensional triangle to the original one. Considering that in two-dimensional space the ratio is $\dfrac{1}{4}$ , and in three-dimensional space, it is $\dfrac{1}{8}$
|
We have to understand what the volume $V(X)$ of a subset $X$ of $n$ -dimensional space $\mathbb R^n$ is, for natural $n$ . I think usually $V(X)$ is understood as the ( $n$ -dimensional) Jordan measure $m(X)$ of $X$ . The definition of Jordan measure is rather technical and requires a rather advanced mathematical apparatus. But the required conclusion holds. Namely, if $n$ is any natural number, $\lambda>0$ is any real number, $X$ is any subset of $\mathbb R^n$ , and $m(X)$ exists, then the measure $m(\lambda X)$ of the homothetic copy $\lambda X$ of $X$ exists and $m(\lambda X)=\lambda^n m(X)$ . In particular, the ratio asked about in the question is $\left(\tfrac12\right)^D$ in dimension $D$ .
|
|linear-algebra|geometry|volume|
| 0
|
Direct back substitution disregarding change of variables
|
The solution to this definite integration problem involves considering $t=1/y$ to make a change of the integration bounds. However, after solving in terms of $y$ , they directly substitute back into $t$ , disregarding the earlier substitution. Shouldn't it be substituted back as $y=1/t$ ? When, or why, can we just replace a variable with another like that?
|
This works because $y$ and $t$ are just variables that we can arbitrarily change, after the substitution: $$\int^x_1 \frac{\log y}{y(1+y)}dy \equiv \int^x_1 \frac{\log t}{t(1+t)}dt$$ And so the two integrals can be summed together. Hope this helps.
|
|calculus|integration|
| 0
|
Convergence/divergence of the operator norm for random sign matrices
|
For each ${n}$ , let ${A_n = (a_{ij,n})_{1 \leq i,j \leq n}}$ be a random ${n \times n}$ matrix (i.e. a random variable taking values in the space ${{\bf R}^{n \times n}}$ or ${{\bf C}^{n \times n}}$ of ${n \times n}$ matrices) such that the entries ${a_{ij,n}}$ of ${A_n}$ are jointly independent in ${i,j}$ and take values in ${\{-1,+1\}}$ with a probability of ${1/2}$ each. If ${\|A_n\|_{op}}$ denotes the operator norm of ${A_n}$ , and ${\varepsilon > 0}$ , show that ${\|A_n\|_{op} / n^{1/2+\varepsilon}}$ converges almost surely to zero, and that ${\|A_n\|_{op} / n^{1/2-\varepsilon}}$ diverges almost surely to infinity. (Hint: use the spectral theorem to relate ${\|A_n\|_{op}}$ with the quantities ${\hbox{tr} (A_n A_n^*)^k}$ .) Attempt: As $A_nA_n^*$ is symmetric, its operator norm ${\|A_nA_n^*\|_{op}}$ is equal to the ${\ell^\infty}$ norm $\displaystyle \|A_nA_n^*\|_{op} = \max_{1 \leq i \leq n} \lambda_i$ of the eigenvalues ${\lambda_1,\ldots,\lambda_n\in {\bf R}}$ of $A_nA_n^*$ (note that these values are non-negative).
|
Edit: After some further consideration, I managed to obtain a solution which I paste below. Critiques, verifications, and suggestions for different methods are welcomed. Fix some $\varepsilon, \varepsilon' > 0$ . As $A_nA_n^*$ is symmetric, the operator norm ${\|A_nA_n^*\|_{op}}$ is equal to the ${\ell^\infty}$ norm $\displaystyle \|A_nA_n^*\|_{op} = \max_{1 \leq i \leq n} \lambda_i$ of the eigenvalues ${\lambda_1,\ldots,\lambda_n\in {\bf R}}$ of $A_nA_n^*$ (note that these values are non-negative). Also, we have the standard linear algebra identity $\displaystyle \hbox{tr}(A_nA_n^*)^k = \sum_{i=1}^n \lambda_i^k$ . Consequently, for any natural number $k$ , we have the inequalities $\displaystyle \|A_n\|^{2k}_{op} = \| A_nA_n^* \|_{op}^k \leq \hbox{tr}(A_nA_n^*)^k \leq n \|A_nA_n^*\|_{op}^k = n\|A_n\|^{2k}_{op}$ It follows that for any natural number $k$ , $\displaystyle {\|A_n\|_{op} / n^{1/2+\varepsilon}} \leq [\hbox{tr}(A_nA_n^*)^k]^{\frac{1}{2k}} / n^{1/2+\varepsilon}$ and $\displaystyle {\|A_n\|_{op} / n^{1/2-\varepsilon}} \geq [\hbox{tr}(A_nA_n^*)^k / n]^{\frac{1}{2k}} / n^{1/2-\varepsilon}$ .
|
|linear-algebra|probability-theory|random-matrices|
| 0
|
How does one solve this ODE using a differential operator and a series expansion?
|
I am solving a fairly basic non-homogeneous ODE, but I wanted to try using the differential operator and a series expansion in order to find the particular solution. I am fairly convinced it is possible to solve an ODE using this method, however I am unable to understand how. This is what I have done thus far: $$ \begin{align*} 2y' + y &= \cos x \\ \end{align*} $$ $$ \begin{align*} (2D+1)y &= \cos x \\ \end{align*} $$ $$ \begin{align*} y &= \frac{\cos x}{2D+1} \\ \end{align*} $$ $$ \begin{align*} y &= \sum_{n=0}^\infty (2D)^{2n} \cdot (1 - 2D) \cdot \cos x \\ \end{align*} $$ $$ \begin{align*} y &= \sum_{n=0}^\infty (2D)^{2n} \cdot (\cos x + 2\sin x) \end{align*} $$ How do I proceed from here? I cannot spot any mistake I have made in anything I have done leading up to it, but trying to proceed seems almost impossible. Have I completely misunderstood how derivative operators work? I tried to solve using Maclaurin series as well: $$ \begin{align*} 2y' + y &= \cos x \\ \end{align*} $$
|
You are working outside the radius of convergence of the geometric series. You get a result without further series computation by using that $(D^2+1)\sin x=0$ etc., that is, using $$ \frac1{1-4D^2}=\frac1{5-4(D^2+1)}=\frac15\frac1{1-\frac45(D^2+1)} $$ etc.
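Carrying the hint to its conclusion (my own completion of the computation, for the particular solution):

```latex
y_p=\frac{1}{1+2D}\cos x
   =\frac{1-2D}{1-4D^2}\cos x
   =(1-2D)\cdot\frac15\,\frac{1}{1-\frac45(D^2+1)}\cos x
   =\frac{1-2D}{5}\cos x
   =\frac15\bigl(\cos x+2\sin x\bigr),
```

since $(D^2+1)\cos x=0$ makes the inner operator act as the identity on $\cos x$. A direct check: $2y_p'+y_p=\frac15(-2\sin x+4\cos x)+\frac15(\cos x+2\sin x)=\cos x$.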
|
|sequences-and-series|ordinary-differential-equations|derivatives|
| 0
|
Beautiful errors in graph of $\sin(x^2+y^2)$
|
I was writing a simple program to help visualize inequalities based on 2 variables. The test inequality that I was using was this: $$\sin\left(0.1(x^2+y^2)\right)\geq0$$ Regions that satisfy the inequality are coloured in white. Here's the result, for a bounding box of $(-25,-25),(25,25)$ : Now, I am aware that due to computational inefficiency and round-off errors, we might often get absurd and incorrect results while doing simulations. But the results, when I chose a bigger bounding box, were not what I expected. For bounding boxes $(-50,-50),(50,50)$ (left) and $(-250,-250),(250,250)$ (right) : I was expecting glitches, random patterns, maybe something like Moiré patterns . But this regularized repeating behaviour (especially in the largest bounding box) seems too orderly to be a result of errors. While I am convinced that the first image is closest to the truth, there is no way I can explain this "beauty in error". All I can see: Initial appearance of randomly located circular
|
It's not an effect of rounding errors but of the simple fact that discretely sampled functions on a finite interval develop fake periodicity as an interference of the discretization periodicity and the function's periodicity. For a function period smaller than about 10 times the lattice constant, the eye groups the points of a fast oscillating function by their distance in the 2d graphics. One picture says more than 2^10 words: pts = Array[{#, Sin[3 # \[Pi]/256] + Cos[126 # \[Pi]/256]} &, 256]; ListPlot[pts] ListPlot[pts, Joined -> True]
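The same discretization effect can be seen in one dimension: sampling a fast sinusoid on a coarse grid reproduces exactly the samples of a slow one (a minimal sketch of my own, independent of the Mathematica snippet above):

```python
from math import sin, pi

fs = 8                       # samples per unit time
f_fast, f_alias = 9, 1       # a 9 Hz tone sampled at 8 Hz aliases to 1 Hz (9 mod 8 = 1)
samples_fast = [sin(2 * pi * f_fast * n / fs) for n in range(fs)]
samples_alias = [sin(2 * pi * f_alias * n / fs) for n in range(fs)]
assert all(abs(x - y) < 1e-9 for x, y in zip(samples_fast, samples_alias))
```

This is exactly why the plotted rings of $\sin(0.1(x^2+y^2))$ reorganize into orderly, slower patterns once the pixel grid undersamples them.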
|
|graphing-functions|computational-mathematics|visualization|
| 0
|
Let $B^2 :=\{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\}$. Show $B^2$ is equinumerous to $\mathbb{R}$
|
Let $B^2 :=\{(x,y,z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1\}$ . Show $B^2$ is equinumerous to $\mathbb{R}$ . I think there are a few ways to do this, probably some are easier, but I've committed to defining a bijection between $B^2$ and $\mathbb{R}$ . Would this prove that the two are equinumerous, and is it necessary to add more detail to my proof? Define a function $f : B^2 \to \mathbb{R}$ as $f(x,y,z)$ = \begin{cases} 0 & x=1, y=0, z=0 \\ x^ {-1} & x≠0 , y≠0, z=0 \\ x & x≠0 , y=0, z≠0 \\ -1 & x=0 , y=1, z=0 \\ -y^ {-1} & x≠0 , y≠0, z≠0 \\ -y & x=0 , y≠0, z≠0 \\ 1& x=0, y=0, z=1 \\ \end{cases} Therefore, there exists a bijection $f$ from $B^2$ to $\mathbb{R}$ , so $B^2$ is equinumerous to $\mathbb{R}$ .
|
That's not a bijection. It is not injective, e.g. $f(a,0,b)=f(0,-a,b)$ for any nonzero $a,b$ with $a^2+b^2=1$ . It will be rather hard to construct such a bijection explicitly, if it is even possible. One way of solving this problem is by constructing an injection (instead of a bijection) from something of cardinality $|\mathbb{R}|$ to $B^2$ , because $|B^2|$ is at most $|\mathbb{R}^3|=|\mathbb{R}|$ , which is true for any subset of $\mathbb{R}^3$ . Such an injection can be for example $$f:[0,1]\to B^2$$ $$f(t)=(t,\sqrt{1-t^2},0)$$ which as a bonus can be easily generalized to arbitrary dimension.
|
|real-analysis|elementary-set-theory|equivalence-relations|
| 0
|
Jacobi identity for Poisson bracket in local coordinates
|
Suppose a bivector field $\pi^{ij}$ such that $\pi^{ij}=-\pi^{ji}$ , $\pi^{ij}\partial_{i}f\partial_{j}g=\{f, g\}$ defines a Poisson bracket $\{,\}$ on a smooth manifold (Einstein's summation is implied). The task is to find the condition on $\pi^{ij}$ following from the Jacobi identity for Poisson brackets: $$\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}=0$$ Note that this is the inverse problem to the one I was able to find asked before, namely proving the Jacobi identity for Poisson brackets from that condition. I know that the answer is supposed to be $$\sum_{\langle(ikl)\rangle}\pi^{ij}\frac{\partial \pi^{kl}}{\partial x^{j}}=0$$ where $\langle (ikl) \rangle$ denotes all cyclic permutations of the indices $i,k,l$ , but I can't find the way to it. My attempted solution was the following: $\newcommand{\p}{\partial}$ $\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}$ $\newcommand{\bra}{\langle}$ $\newcommand{\ket}{\rangle}$ The Jacobi identity translates to $$\pi^{ij}\p_{i}f\p_{j}(\pi^{kl}\p_{k}g\p_{l}h)+\text{cyclic permutations in }f,g,h=0$$
|
Hints: Let $\pi=\pi^{ij}\partial_i\wedge\partial_j$ be an antisymmetric bivector field [i.e. a $(2,0)$ contravariant tensor field], and $\{\cdot,\cdot\}:C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)$ the corresponding antisymmetric bracket, which does not necessarily satisfy the Jacobi identity. Recall that in a local coordinate system $(x^1,\ldots,x^n)$ we have $\{x^i,x^j\}=\pi^{ij}$ . Define the Jacobiator $$ J(f,g,h) ~:=~ \sum_{{\rm cycl.} f,g,h} \{f,\{g,h\}\}.$$ Show that $J$ is a totally antisymmetric trivector field [i.e. a $(3,0)$ contravariant tensor field]. Show that the tensor $J=0$ vanishes identically iff in every local coordinate system $(x^1,\ldots,x^n)$ we have $\forall i,j,k\in\{1,\ldots,n\}: J(x^i,x^j,x^k)=0$ .
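Following the hints, the local computation is short (a sketch of my own):

```latex
\{x^j,x^k\}=\pi^{jk}
\quad\Longrightarrow\quad
J(x^i,x^j,x^k)
=\sum_{\mathrm{cycl.}\,(i,j,k)}\{x^i,\{x^j,x^k\}\}
=\sum_{\mathrm{cycl.}\,(i,j,k)}\pi^{il}\,\partial_l\pi^{jk},
```

where $\{x^i,\pi^{jk}\}=\pi^{il}\partial_l\pi^{jk}$ was used. Setting $J(x^i,x^j,x^k)=0$ for all $i,j,k$ is, up to a renaming of indices, exactly the stated condition on $\pi$.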
|
|differential-geometry|tensors|classical-mechanics|hamilton-equations|poisson-geometry|
| 1
|
Prove, if $\delta>0$, then $\lim_{n\to\infty}\int_{\delta<|s|<a}K_n(s)ds=0$
|
Prove, if $\delta>0$ , then $\lim_{n\to\infty}\int_{\delta<|s|<a}K_n(s)\,ds=0$ if $$K_n(s)= \begin{cases} n, & |s|<\frac{1}{2n} \\ 0, & |s|\ge\frac{1}{2n} \end{cases} $$ This shouldn't be too difficult, but I'm struggling somewhat. If I just set up the problem, then we have $$\int_{-1/2n}^{-\delta}n\, ds+\int_{\delta}^{1/2n}n\, ds=1-2n\delta$$ and clearly taking the limit does not give us the desired result. Where am I going wrong? Thanks!
|
Hint: If $n >\frac 1 {2\delta}$ then the set $\{s: \delta<|s|<\frac{1}{2n}\}$ is the empty set.
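A direct numeric check of the hint (the values of $\delta$ and $a$, and the Riemann-sum helper, are my own choices):

```python
def K(n, s):
    # the kernel: n on |s| < 1/(2n), zero elsewhere
    return n if abs(s) < 1 / (2 * n) else 0

def integral(n, delta, a, steps=50000):
    # Riemann (midpoint) sum of K_n over the region delta < |s| < a
    h = (a - delta) / steps
    return sum(K(n, delta + (i + 0.5) * h) + K(n, -(delta + (i + 0.5) * h))
               for i in range(steps)) * h

delta, a = 0.1, 1.0
assert integral(10, delta, a) == 0                       # n > 1/(2*delta): support missed
assert abs(integral(2, delta, a) - (1 - 2 * 2 * delta)) < 1e-2   # small n: 1 - 2*n*delta
```

For $n>\frac1{2\delta}$ the support of $K_n$ lies entirely inside $|s|<\delta$, so the integral is exactly $0$; the asker's value $1-2n\delta$ only applies while $\frac1{2n}>\delta$.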
|
|real-analysis|integration|definite-integrals|
| 1
|
Prove that any integer divisible by 3 can be written as a sum of cubes of four integers.
|
My book says only this: $$6k = (k+1)^3 + (k-1)^3 + (-k)^3 + (-k)^3$$ $$6k - 15 = (2k+1)^3 + (-2k)^3 + (k-2)^3 + (-k-2)^3$$ I want to know how it got to these expressions. Using trial and error will take a lot of time.
|
The required representations look like so-called mathematical folklore. They are not so complicated, so I think they could be found more or less accidentally during some elementary number theory investigations or exercises. But if you want a rational reconstruction, then we can relate the representations to a counterpart of Waring's problem for integers. Looking for simple parametric families of representations, we can look for expressions $a_ik+b_i$ linear in $k$ with integer coefficients such that in the sum of their cubes the coefficients of $k^3$ and $k^2$ cancel and there remains an expression linear in $k$ .
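Both parametric families can be machine-checked; note that the second representation that actually expands to $6k-15$ is $(2k+1)^3+(-2k)^3+(k-2)^3+(-k-2)^3$ (a quick sketch of my own):

```python
for k in range(-50, 51):
    assert 6 * k == (k + 1) ** 3 + (k - 1) ** 3 + (-k) ** 3 + (-k) ** 3
    assert 6 * k - 15 == (2 * k + 1) ** 3 + (-2 * k) ** 3 + (k - 2) ** 3 + (-k - 2) ** 3

# every multiple of 3 is congruent mod 6 to either 6k (residue 0) or 6k - 15 (residue 3)
assert {(6 * k) % 6 for k in range(6)} | {(6 * k - 15) % 6 for k in range(6)} == {0, 3}
```

So the two families together cover all integers divisible by $3$.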
|
|algebra-precalculus|elementary-number-theory|
| 0
|
A question on a step in proving $L^\infty$ is complete.
|
Let $(X,\mathcal{A},\mu)$ be a $\sigma$ -finite measure space. Claim: $L^\infty$ is complete. Ideas of the proof: (i) Assume Cauchy in norm - let $\{f_n\}$ be a sequence which is Cauchy in the $L^\infty$ norm. That is, let $\epsilon>0$ , and assume $\left\Vert f_n-f_m \right\Vert_\infty<\epsilon$ for all sufficiently large $m,n$ . Prove: $ \lim\limits_{n\to\infty}f_n(x) \text{ exists } $ for all $x\notin A=\bigcup\limits_{m,n}A_{m,n}\subset X$ , where $\mu\left( A_{m,n}\right)=0$ and on $A_{m,n}$ $f$ could take any finite value. I found that this diagonal argument was used in proving $L^\infty$ is complete in a book written by Zhang Gongqing , but I was taught that (to start with or even include) this step in proving $L^\infty$ is complete is not logical/mathematical. (ii) Define $f:=\lim\limits_{n\to\infty}f_n(x)$ for all $x\notin A$ . Prove: $f \in L^{\infty}$ . (iii) Prove: $\left\Vert f_n-f\right\Vert_\infty\to 0$ , i.e. $\{f_n\}$ is uniformly convergent on $X\setminus A$ . When I presented the above outline to the professor of my a
|
There is nothing wrong with the proof in Wheeden & Zygmund. $$ \|g\|_\infty=\inf\{a\ge 0\colon\mu\left(\{x\colon \vert g(x)\vert>a\}\right)=0\} $$ Note that if $b>a$ then $\{x\colon |g(x)|>b\}\subseteq\{x\colon |g(x)|>a\}$ . Therefore if $\|g\|_\infty<\infty$ then $\mu(\{x\colon |g(x)|>a\})=0$ for all $a>\|g\|_\infty$ . Assume $\|g\|_\infty<\infty$ . For $n>0$ (an integer) let $$ \begin{align} A_n&=\{x\colon|g(x)|>\|g\|_\infty+1/n\}\\ A&=\{x\colon|g(x)|>\|g\|_\infty\} \end{align} $$ By definition of $\|g\|_\infty$ , $\mu(A_n)=0$ for all $n>0$ . If $x\in A$ , there exists some $m>0$ such that $|g(x)|>\|g\|_\infty+1/m$ (this is due to the Archimedean property of $\Bbb{R}$ ) and therefore $x\in A_m$ . This proves that $$ A\subseteq \bigcup_{n\in\Bbb{N}^+} A_n $$ and since $\mu(A_n)=0$ for all $n\in\Bbb{N}^+$ it follows by $\sigma$ -sub-additivity that $\mu(A)=0$ . The complement of $A$ is $\{x\colon |g(x)|\le\|g\|_\infty\}$ . Therefore $|g(x)|\le\|g\|_\infty$ almost everywhere.
|
|functional-analysis|measure-theory|solution-verification|proof-explanation|cauchy-sequences|
| 0
|
Prove that any integer divisible by 3 can be written as a sum of cubes of four integers.
|
My book says only this: $$6k = (k+1)^3 + (k-1)^3 + (-k)^3 + (-k)^3$$ $$6k - 15 = (2k+1)^3 + (-2k)^3 + (k-2)^3 + (-k-2)^3$$ I want to know how it got to these expressions. Using trial and error will take a lot of time.
|
I don't know another proof, but I think I can show some steps to get $$6k = (k+1)^3 + (-k)^3+ (k-1)^3 + (-k)^3$$ $$6k-15=(2k+1)^3 + (-2k)^3 +(k-2)^3+ (-k-2)^3$$ Let us start with $$(ak+b)^3+(ck+d)^3+(ek+f)^3+(gk+h)^3\tag1$$ which can be written as $$(1)=(a^3+c^3+e^3+g^3) k^3 + 3(a^2 b+c^2d+e^2f+g^2h) k^2 + 3(a b^2+cd^2+ef^2+gh^2) k + b^3 + d^3 + f^3 + h^3 $$ So, we want to have $(a,b,c,d,e,f,g,h)$ satisfying $$\begin{cases}a^3+c^3+e^3+g^3=0 \\a^2b+c^2d+e^2f+g^2h=0 \\\text{$|b^3+d^3+f^3+h^3|$ is small}\end{cases}$$ Here, it should be natural to take $(c,g)=(-a,-e)$ . So, we want to have $(a,b,d,e,f,h)$ satisfying $$\begin{cases}a^2b+a^2d+e^2f+e^2h=0 \\\text{$|b^3+d^3+f^3+h^3|$ is small}\end{cases}$$ If we take $(f,h)=(-b,-d)$ for which $b^3+d^3+f^3+h^3=0$ holds, then $$(1)= 3(a-e)\color{red}{(a+e)(b+d)}k^2 + 3\color{red}{(a+e)(b+d)}(b-d)k $$ Here, taking $e=a$ , we have $(1)=6a(b^2-d^2)k$ . So, we can take $(a,b,d)=(1,1,0)$ to have $$6k = (k+1)^3 + (-k)^3+ (k-1)^3 + (-k)^3$$ So, we are
|
|algebra-precalculus|elementary-number-theory|
| 0
|
Prove Sylvester rank inequality: $\text{rank}(AB)\ge\text{rank}(A)+\text{rank}(B)-n$
|
If $A$ is a $m \times n$ matrix and $B$ a $n \times k$ matrix, prove that $$\text{rank}(AB)\ge\text{rank}(A)+\text{rank}(B)-n.$$ Also show when equality occurs.
|
Proof: consider the linear space $V(A)=\{AX\in\mathbb R^m \mid X\in\mathbb R^n\}$ (the column space of $A$); then $\dim V(A)=\operatorname{rank}(A)$, and we need to prove the inequality $\operatorname{rank}(A)\le\operatorname{rank}(AB)+n-\operatorname{rank}(B)$. Consider the linear space $V(B)=\{BY\in\mathbb R^n \mid Y\in\mathbb R^k\}$; then $\dim V(B)=\operatorname{rank}(B)$. Write $\mathbb R^n$ as a direct sum $\mathbb R^n=V(B)\oplus W(B)$, so that $\dim W(B)=n-\operatorname{rank}(B)$. Any $x\in\mathbb R^n$ can be expressed as $x=v+w$ with $v=By\in V(B)$ and $w\in W(B)$, so $Ax=Av+Aw=ABy+Aw$. Hence $V(A)\subseteq V(AB)+A(W(B))$. Since $A(W(B))$ is spanned by the images of a basis of $W(B)$, we have $\dim A(W(B))\le\dim W(B)=n-\operatorname{rank}(B)$. Therefore $\dim V(A)\le\dim V(AB)+n-\operatorname{rank}(B)$, that is, $\operatorname{rank}(A)+\operatorname{rank}(B)-n\le\operatorname{rank}(AB)$.
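A small exact-arithmetic check of the inequality (the rank routine via Gaussian elimination over the rationals is my own helper, not part of the proof):

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a list-of-lists matrix over Q and return its rank."""
    M = [[Fraction(x) for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 0, 0], [0, 1, 0]]        # 2x3, rank 2
B = [[1, 0], [0, 0], [0, 1]]      # 3x2, rank 2
n = 3
AB = matmul(A, B)                 # [[1, 0], [0, 0]], rank 1
assert rank(AB) >= rank(A) + rank(B) - n   # Sylvester: 1 >= 2 + 2 - 3
```

This example attains equality, which happens exactly when the bound $\dim A(W(B))\le n-\operatorname{rank}(B)$ is tight and the sum $V(AB)+A(W(B))$ is direct.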
|
|linear-algebra|matrices|inequality|contest-math|matrix-rank|
| 0
|
How to integrate a total derivative?
|
Suppose $f(x,y)=x y$ Then, its total derivative is $$ \begin{align} \mathbb{d}f&=x \mathbb{d}y+y \mathbb{d}x \\ \int\mathbb{d}f&= \int x \mathbb{d}y+ \int y \mathbb{d}x \tag{Integration}\\ f&=xy+yx +c\\ f&=2xy + c \\ f&=2xy \tag{suppose $c=0$} \\ \end{align} $$ Why are we not getting back the same value of $f$?
|
I made this observation (a long time ago) back in my BSc days. I proposed that one has to make a special sum , where identical terms are not summed up, but remain the same. In this case, we would have: $F = (xy) \bigoplus (yx) \bigoplus c = xy + c$ . The question remains whether this always works or not. I guessed this was the case, so I came up with this demonstration. We let $F:\mathbb{R}^2 \rightarrow \mathbb{R}$ , furthermore let $F$ be given as $F(x,y) = f_1(x, y) + f_2(x) + f_3(y) + c_4$ , with all $f_i:\mathbb{R}^2 \rightarrow \mathbb{R}$ (in $\mathbb{R}^2$ we would have that $f_2$ is a curve in the XZ-plane swept along the Y-axis, and $f_3$ a curve in the YZ-plane swept along the X-axis, and the constant $c_4$ is the plane $z=c_4$ ). So let's take the total derivative of $F$ \begin{eqnarray} \mathrm{d}F &=& \frac{\partial f_1(x,y)}{\partial x} \mathrm{d}x + \frac{\partial f_1(x,y)}{\partial y} \mathrm{d}y + \frac{\partial f_2(x)}{\partial x} \mathrm{d}x + \frac{\partial f_3(y)}{\partial y} \mathrm{d}y \end{eqnarray}
|
|integration|derivatives|
| 0
|
Combinatorial Proof of $p \mid {p \choose k}$
|
I found an 11 year old proof of the statement “if $p$ is prime and $0<k<p$ then $p \mid {p \choose k}$ ” by Joriki here . It is as follows: If you count the number of k-element subsets of a p-element set that contain a given element and then sum over all p elements, you get $k {p \choose k}$ , since each element is in $k$ of the ${p \choose k}$ subsets. This is $p$ times the count for a single element, and since $p \nmid k$ for $0<k<p$ , the factor $p$ must be in $p \choose k$ . I know that if $p$ is not prime, then the statement does not hold for general $n$ and $k$ , only with $n$ and $k$ coprime. However, I don’t see how this fact is reflected within the proof itself. My understanding is: We take all $k$ -subsets of a set of cardinality $p$ , with the condition that they all have the element $a$ in them. There are ${p-1 \choose k-1}$ of these, and if we do this once for each element we get $$p {p-1 \choose k-1} = \frac{p!}{(k-1)! (p-k)!} = k {p \choose k}$$ Which makes sense as we have looked at
|
Let's sketch a way to combinatorialize that argument. Consider a string $s_0=a_0\dots a_{p-1}$ of length $p$ with $k$ $1$ 's and $p-k$ $0$ 's, along with all of its cyclic rotations $s_i=a_i\dots a_{p-1} a_0\dots a_{i-1}$ for $1\le i\le p-1$ . We claim that if $s_0$ is nonconstant, then the strings $s_0,s_1,\dots,s_{p-1}$ are all distinct. If not, then let $i_0=\min\{1\le i\le p-1\mid s_i=s_0\}$ . A version of the Euclidean algorithm shows then that $i_0$ divides $p$ (if not, then for $i_1:=p\bmod i_0$ we have $0<i_1<i_0$ and $s_{i_1}=s_0$ , so $i_0\ne\min\{1\le i\le p-1\mid s_i=s_0\}$ , a contradiction). But $p$ is prime, so $i_0=1$ , and therefore, the string $s_0$ has period $1$ , i.e. is constant, so $k=0$ or $k=p$ . Thus, if $k\ne 0,p$ , then the set of strings of length $p$ with exactly $k$ $1$ 's can be partitioned into parts that each contain all cyclic rotations of a single string, with $p$ strings in each part. Thus, $p$ divides $\binom{p}{k}$ .
|
|combinatorics|proof-explanation|combinatorial-proofs|
| 0
|
Combinatorial Proof of $p \mid {p \choose k}$
|
I found an 11 year old proof of the statement “if $p$ is prime and $0<k<p$ then $p \mid {p \choose k}$ ” by Joriki here . It is as follows: If you count the number of k-element subsets of a p-element set that contain a given element and then sum over all p elements, you get $k {p \choose k}$ , since each element is in $k$ of the ${p \choose k}$ subsets. This is $p$ times the count for a single element, and since $p \nmid k$ for $0<k<p$ , the factor $p$ must be in $p \choose k$ . I know that if $p$ is not prime, then the statement does not hold for general $n$ and $k$ , only with $n$ and $k$ coprime. However, I don’t see how this fact is reflected within the proof itself. My understanding is: We take all $k$ -subsets of a set of cardinality $p$ , with the condition that they all have the element $a$ in them. There are ${p-1 \choose k-1}$ of these, and if we do this once for each element we get $$p {p-1 \choose k-1} = \frac{p!}{(k-1)! (p-k)!} = k {p \choose k}$$ Which makes sense as we have looked at
|
It's perhaps easiest to see what goes wrong by looking at a particular example. For $n=6$ we will get $6\mid k\binom 6k$ , where $0<k<6$ . Now $6\mid\binom 6k$ is equivalent to $2\mid \binom 6k$ and $3\mid\binom 6k$ . We know that $2\mid k\binom 6k$ and $3\mid k\binom 6k$ , but we don't know that $2\nmid k$ and $3\nmid k$ - in fact we only know that at least one of these statements holds.
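A quick check of this failure mode with Python's `math.comb`:

```python
from math import comb

# k * C(6, k) is divisible by 6 for every 0 < k < 6 ...
assert all((k * comb(6, k)) % 6 == 0 for k in range(1, 6))

# ... but C(6, k) itself is only divisible by 6 for k = 1 and k = 5:
# C(6,2) = 15 (only 3 divides it), C(6,3) = 20 (only 2 divides it)
divisible = [k for k in range(1, 6) if comb(6, k) % 6 == 0]
```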
|
|combinatorics|proof-explanation|combinatorial-proofs|
| 1
|
Question on the inequality in the definition of contraction mapping
|
The following definition is taken from Wikipedia: In mathematics, a $\textit{contraction mapping}$ , or $\textit{contraction}$ or $\textit{contractor}$ , on a metric space $ (M, d) $ is a function $ f $ from $ M $ to itself, with the property that there is some real number $ 0 \leq k < 1 $ such that for all $ x $ and $ y $ in $ M $ , $ d(f(x), f(y)) \leq k d(x, y). $ My question is, why it cannot be $ d(f(x), f(y)) = k d(x, y)$ instead? I suppose the inequality sign in the original definition serves some non-trivial purpose but I don't know what is that. I tried but failed to find a counter-example to my alternative definition without the inequality.
|
In math, we often try to construct definitions in a way so that (A) we can prove meaningful results about all objects which satisfy the definition, and (B) the definition has the widest scope possible. For many results which involve contraction mappings, the proofs work if we have either $\leq$ or $=$ in the "definition" of a contraction mapping. However, the proofs which use the $\leq$ definition (which includes the case of $=$ ) are stronger results, since it is a statement about a much wider class of objects. For example, consider $f(x)=x^2$ on $M=[0,0.25]$ . This is a contraction with $k=0.5$ , but it does not always satisfy the definition with equality. For instance, strictness of the inequality is demonstrated by $$d\big(f(0.1),f(0.2)\big)= |0.01-0.04|=0.03 < 0.05 = 0.5\,d(0.1,0.2).$$ This is an example of a contraction mapping (in the traditional sense), but it does not satisfy the condition with equality. So all of the proofs we know and love about contraction mappings would not apply to this example if we used the $=$ version of the definition.
|
|real-analysis|analysis|
| 1
|
Can we clear out $a,b,c$ in four equations: $ax+by+cz=0, cx+ay+bz=0, a+b+c=1, a^2+b^2+c^2=1$?
|
Can we clear out $a,b,c$ in four equations: $ax+by+cz=0, cx+ay+bz=0, a+b+c=1, a^2+b^2+c^2=1$ ? In my search for the characteristic cone of $u_{xy}+u_{yz}+u_{zx}=0$ , I need to clear out $a,b,c$ in the above formula to get an equation depending only on $x,y,z$ , which should be a polynomial of $x,y,z$ of homogeneity $2$ (so that it is a cone). But I find it is not easy... An attempt: $ax+by+cz=0$ (1); $cx+ay+bz=0$ (2); and adding up to get $bx+cy+az=0$ (3). Oh, it is wrong.... If $a\cdot (1)+b\cdot(2)+c\cdot (3)$ , we find by using $ab+bc+ca=0$ to get $x=0$ . Similarly $y=z=0$ . Oh my god!
|
$$ax+by+cz=0\tag 1$$ $$cx+ay+bz=0\tag 2$$ $$a+b+c=1\tag 3$$ $$a^2+b^2+c^2=1\tag 4$$ From (3) $\quad c=1-a-b\quad$ Put it into (1) , (2) and (4) : $$ax+by+(1-a-b)z=0\tag 5$$ $$(1-a-b)x+ay+bz=0\tag 6$$ $$a^2+b^2+(1-a-b)^2=1\tag 7$$ From (5) : $\quad b=\frac{-ax+az-z}{y-z}\quad$ Put it into (6) and (7). $$\left(1-a-\frac{-ax+az-z}{y-z}\right)x+ay+\frac{-ax+az-z}{y-z}z=0\tag 8$$ $$a^2+\left(\frac{-ax+az-z}{y-z}\right)^2+\left(1-a-\frac{-ax+az-z}{y-z}\right)^2=1\tag 9$$ from (8) : $\quad a=\frac{z^2-xy}{x^2+y^2+z^2-xy-yz-xz}\quad$ Put it into (9). After simplification : $$2\:\frac{xy+yz+xz}{x^2+y^2+z^2-xy-yz-xz}=0$$
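A numeric sanity check of this elimination (the sample point $x=1$, $y=1$, $z=-1/2$ is chosen here just so that $xy+yz+zx=0$ holds):

```python
# pick x, y, z on the cone xy + yz + zx = 0, recover a and b from the
# formulas derived above, set c = 1 - a - b, and verify (1), (2), (4)
x, y, z = 1.0, 1.0, -0.5
assert abs(x*y + y*z + z*x) < 1e-12

D = x*x + y*y + z*z - x*y - y*z - x*z
a = (z*z - x*y) / D
b = (-a*x + a*z - z) / (y - z)
c = 1 - a - b

eq1 = a*x + b*y + c*z
eq2 = c*x + a*y + b*z
eq4 = a*a + b*b + c*c
```

All three residuals come out zero (to rounding), consistent with the condition $xy+yz+zx=0$.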
|
|calculus|linear-algebra|ordinary-differential-equations|partial-differential-equations|characteristics|
| 1
|
Maximizing $11a+2b-ab$ for non-negative integers $a$ and $b$ less than $10$
|
Maximize $11a+2b-ab$ for non-negative integers $a$ and $b$ less than $10$ I solved this problem by factoring, $$\max(11a+2b-ab)= \max((11-b)(a-2)+22)= (11\times7)+22=99$$ Although it is a fine and straightforward approach for solving this problem, I'm interested to see other approaches too.
|
$$L:= \underbrace{a}_{\le 9}\cdot\underbrace{(11-b)}_{\ge 0}+2b \le 9\cdot(11-b)+2b=99-7\cdot\underbrace{b}_{\ge 0}\le 99$$
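Brute force over the stated range confirms the bound is attained (a sanity check, not part of the proof):

```python
# evaluate 11a + 2b - ab at every admissible pair; the maximum is 99 at (9, 0)
best = max((11*a + 2*b - a*b, a, b) for a in range(10) for b in range(10))
```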
|
|calculus|algebra-precalculus|
| 0
|
Value of partial derivatives at a point, when the tangent plane is known
|
Knowing the equation of the tangent plane $\pi: 2x + y + 3z = 0$ at the point $(1, 1, 1)$ of a function $f(x, y)$ , how can I find the values of $f'_x(1, 1)$ and $f'_y(1, 1)$ ? I thought of this: since I know the equation of the plane, then I know it comes from $$\pi: z=f(1, 1) + f'_x(1, 1)(x-1) + f'_y(1, 1)(y-1)$$ Thence by comparison I shall say, since $z = -\frac{2}{3}x - \frac{1}{3}y$ $$0 = f(1, 1) - f'_x(1, 1) - f'_y(1, 1)$$ and most of all $$f'_x(1, 1) = -\frac{2}{3} \qquad f'_y(1, 1) = -\frac{1}{3}$$ Is this correct?
|
The solution is correct but I'd add perhaps a little bit more rigor. Since $(1,1,f(1,1))$ touches the plane we have $f(1,1)=-1\,.$ The equation you labelled with $\pi$ holds for all $x,y,z$ in the plane. Choosing two points $(2,0,0)$ and $(2,2,-2)$ in the plane gives us a system \begin{align} -\tfrac 43&=-1+f'_x-f'_y\,,\\[2mm] -2&=-1+f'_x+f'_y \end{align} that has the unique solution $f'_x=-2/3\,,f'_y=-1/3\,.$
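The little 2x2 system can be checked in a few lines of Python (using the two plane points chosen in the answer):

```python
# plane 2x + y + 3z = 0, tangent at (1, 1, f(1,1)) with f(1,1) = -1:
#   at (2, 0, -4/3):  -4/3 = -1 + fx - fy
#   at (2, 2, -2):    -2   = -1 + fx + fy
# adding the two equations (after moving -1 across) solves the system
fx = ((-4/3 + 1) + (-2 + 1)) / 2   # half the sum of the two right-hand sides
fy = (-2 + 1) - fx
```

This reproduces $f'_x=-2/3$ and $f'_y=-1/3$, which also agrees with differentiating $z=-(2x+y)/3$ directly.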
|
|multivariable-calculus|partial-derivative|
| 0
|
The Lie derivative as a limit of a curve in $ \mathrm TM $
|
Let $ M $ be a smooth manifold, let $ p\in M $ and let $ X\in \mathscr X(M) $ be a fixed vector field on $ M $ . Given some other vector field $ Y $ the Lie derivative of $ X $ along $ Y $ evaluated at the point $ p $ is defined as the limit $$ (L_YX)\rvert_p = \lim_{t\to 0}\frac{\mathrm T_{\Phi_t^Y(p)}(\Phi_{-t}^Y)\circ X_{\Phi_t^Y(p)} - X_p}{t} $$ in $ \mathrm T_pM $ where $ \circ $ denotes application of the tangent map to the vector and $$ \mathrm T_{\Phi_t^Y(p)}(\Phi_{-t}^Y)\colon \mathrm T_{\Phi_t^Y(p)}M\to \mathrm T_pM $$ is the tangent map at the point $ \Phi_t^Y(p) $ of the diffeomorphism $$ \Phi_t^Y\colon M\to M $$ induced by the flow of $ Y $ . This way we obtain another vector in $ \mathrm T_pM $ . But what if we consider the curve $$ \lambda(t) = \mathrm T_{\Phi_t^Y(p)}(\Phi_{-t}^Y)\circ X_{\Phi_t^Y(p)} - X_p $$ on the manifold $ \mathrm TM $ (the tangent bundle) and take its derivative at $ 0 $ ? In this way we obtain a vector $ \dot\lambda(0) $ in the tangent space $ \mathrm T_{0_p}(\mathrm TM) $
|
As far as I can see, $(L_YX)|_p$ and $\dot\lambda (0)$ are the same object. Denoting by $\pi:TM\to M$ the bundle projection, the curve $\lambda$ by definition has the property that $\pi\circ \lambda$ is the constant map $p$ . (So indeed, $\lambda$ can be viewed as a curve in the tangent space $T_pM$ .) Differentiating this at $t=0$ , you see that $\lambda'(0)$ lies in the kernel $\ker(T_{\lambda(0)}\pi)\subset T_{\lambda(0)}(TM)$ and $\lambda(0)=0_p$ by construction. So actually $\lambda'(0)$ lies in the so-called vertical subbundle $V_{0_p}(TM)$ . It is a standard result that the vertical subbundle of a vector bundle is isomorphic to the pullback of the bundle to its own total space, i.e. $V(TM)=\pi^*TM$ , so the fiber $V_{0_p} TM$ is naturally identified with $T_pM$ and under this identification $\lambda'(0)$ corresponds to $(L_YX)|_p$ . In fact, the identification of the vertical subbundle with a pullback is completely analogous to the identification of the tangent spaces to a vector
|
|differential-geometry|
| 1
|
Maximizing $11a+2b-ab$ for non-negative integers $a$ and $b$ less than $10$
|
Maximize $11a+2b-ab$ for non-negative integers $a$ and $b$ less than $10$ I solved this problem by factoring, $$\max(11a+2b-ab)= \max((11-b)(a-2)+22)= (11\times7)+22=99$$ Although it is a fine and straightforward approach for solving this problem, I'm interested to see other approaches too.
|
The expression is linear in each variable separately, so over the box $0 \le a, b \le 9$ the maximum is attained at one of the corner values $ f(0, 0), f(0, 9), f(9, 0), f(9, 9)$ . Checking these 4 values, the maximum is $f(9, 0) = 99$ .
|
|calculus|algebra-precalculus|
| 1
|
Finding the maximum volume of a cuboid in a sphere
|
A cuboid, one of whose edges has length √2 units, is inscribed in a sphere of radius √2 units. If the maximum possible volume of the cuboid is V cubic units, then what is V/2√2? I took the equation of the sphere to be x^2 + y^2 + z^2 = 2, but I'm confused about how to express the volume of the cuboid in terms of x, y and z, because after that I could just differentiate with respect to x, y and z and maybe it would simplify.
|
It is easier to work with the halves of the edges, so we have p, q, and $ \frac {\sqrt 2}{2} $ , so the volume is $ 4 \sqrt 2 p q$ . To maximize the volume, we can ignore the coefficient and just maximize pq. We have that, being inscribed, $ \frac {1}{2} + p^2 + q^2 = 2 $ , so $ q = \sqrt {\frac {3}{2} - p^2} $ Maximizing $ p \sqrt {\frac {3}{2} - p^2} $ means finding the zero for the derivative, and, after some rewriting, $ \sqrt {\frac {3}{2} - p^2} - \frac {p^2} { \sqrt {\frac {3}{2} - p^2} } = 0 $ , or $ 2 p^2 = \frac {3}{2} $ or $ p = \frac { \sqrt 3}{2} $ , so we have the more or less expected result that the maximum is reached for $ p = q = \frac { \sqrt 3}{2} $ , so the edges are $ \sqrt 2 , \sqrt 3 $ and $ \sqrt 3 $ and the total maximal volume of the cuboid $ 3 \sqrt 2 $ .
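A small grid search over the half-edge $p$ reproduces this, giving $V/2\sqrt2 = 3\sqrt2/(2\sqrt2) = 3/2$:

```python
from math import sqrt

# V(p) = 4*sqrt(2)*p*q with q = sqrt(3/2 - p^2); scan p over (0, sqrt(3/2))
best_V = 0.0
N = 20000
for i in range(1, N):
    p = sqrt(1.5) * i / N
    q_sq = 1.5 - p*p
    if q_sq > 0:
        best_V = max(best_V, 4 * sqrt(2) * p * sqrt(q_sq))

ratio = best_V / (2 * sqrt(2))   # the quantity the problem asks for
```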
|
|calculus|geometry|optimization|
| 0
|
Isomorphism between sections of $\text{Hom}(E,F)$ and bundle maps $E \to F$
|
Given vector bundles $E$ and $F$ over a smooth manifold $M$ do we have an isomorphism $$\Gamma(\text{Hom}(E,F)) \cong \text{Hom}_{C^\infty(M)}(E,F)?$$ That is, if I have a smooth bundle map $E \to F$ do I obtain a section of the bundle $\text{Hom}(E,F)$ and vice versa? I see this being used in multiple places, but I have not found a proof for this proposition.
|
I am not completely sure where your problem lies and what you mean by the notation $\text{Hom}_{C^\infty(M)}(E,F)$ . Sections of the bundle $\text{Hom}(E,F)$ are in bijective correspondence with vector bundle homomorphisms $E\to F$ whose base map is the identity on $M$ . If $\Phi:E\to F$ is such a homomorphism then for each $x\in M$ , $\Phi$ maps the fiber $E_x$ of $E$ at $x$ to the fiber $F_x$ of $F$ at $x$ and the restriction $E_x\to F_x$ of $\Phi$ by definition is linear. Hence for each $x\in M$ , we obtain a linear map $s(x)\in \text{Hom}(E_x,F_x)$ and the latter space is exactly the fiber of the vector bundle $\text{Hom}(E,F)$ at $x\in M$ . So $\Phi$ gives rise to a map $s:M\to \text{Hom}(E,F)$ which maps each $x\in M$ to an element of the fiber over $x$ . Conversely, a section $s$ of $\text{Hom}(E,F)$ associates to each point $x\in M$ a linear map $s(x)\in \text{Hom}(E_x,F_x)$ and you can just piece those together to define a map $\Phi:E\to F$ which covers the identity on $M$ .
|
|differential-geometry|vector-bundles|
| 1
|
Convergence in trace
|
I am having difficulties with this problem. Given a non-negative compact self-adjoint operator $\gamma$ i.e. $\langle \gamma u, u\rangle \geq 0$ for all $u$ . Denote the cut-off function $\chi \in C^\infty_c\left(\mathbb{R}^d\right)$ satisfying $\chi$ is non-increasing, $\chi\left(\left|x\right|\right) = 1$ if $\left|x\right| \leq 1$ and $\chi\left(\left|x\right|\right) = 0$ if $\left|x\right| \geq 2$ . Define $\chi_R = \chi\left(\frac{x}{R}\right)$ for all $R > 0$ . Can we prove that $\text{Trace}\left(\chi_R \gamma \chi_R \right) \to \text{Trace}\left(\gamma\right)$ when $R \to \infty$ ? I tried to look for the relation between eigenvalues and eigenfunctions of $\chi_R \gamma \chi_R$ and $\gamma$ but I failed. Thank you very much for your help.
|
Two basic facts: An operator $T$ on $L^2(\mathbb R^d)$ is Hilbert-Schmidt if and only if there exists $k\in L^2(\mathbb R^d\times\mathbb R^d)$ such that $Tf(x)=\int k(x,y)f(y)\,dy$ for all $f\in L^2(\mathbb R^d)$ , and in this case, $\mathrm{tr}(T^\ast T)=\lVert k\rVert_2^2$ . The adjoint of $T$ is given by $T^\ast f(x)=\int \overline{k(y,x)}f(y)\,dy$ . It follows that $T$ is Hilbert-Schmidt if and only if $T^\ast$ is Hilbert-Schmidt and $\mathrm{tr}(T^\ast T)=\mathrm{tr}(TT^\ast)$ . If $(T_n)$ is a sequence of non-negative bounded operators and $T_n\nearrow T$ in strong operator topology, then $\mathrm{tr}(T_n)\nearrow \mathrm{tr}(T)$ . This is true since $\langle e_i,T_n e_i\rangle\nearrow \langle e_i,T e_i\rangle$ for every member $e_i$ of an ONB and the convergence of the traces follows from the monotone convergence theorem. As $$ \langle f,\chi_R^2 f\rangle=\int\chi_R(x)^2\lvert f(x)\rvert^2\,dx\nearrow \int\lVert f(x)\rvert^2\,dx=\langle f,If\rangle $$ for all $f\in L^2(\mathbb R
|
|operator-theory|spectral-theory|trace|compact-operators|
| 0
|
Understanding conditional probability in a problem
|
The problem is as follows: Senior students tend to stay up all night and therefore are not able to wake up on time in morning. Not only this but their dependence on tuitions further leads to absenteeism in school. Of the students in class XII, it is known that 30% of the students have 100% attendance. Previous year results report that 80% of all students who have 100% attendance attain A grade and 10% irregular students attain A grade in their annual examination. At the end of the year, one student is chosen at random from the class XII. Find the conditional probability that a student attains A grade given that he is not 100 % regular student. My working: I made a map of how the events occur below, and with that, it can be inferred that the probability 'Grade A' obtained by an irregular student is: $$ = P(being \ irregular)P(Grade A)$$ $$ = \frac{7}{10} \frac{1}{10} $$ However, the answer given in my book is $\frac{1}{10} $ . I don't really get any idea of how it is possible. From a gr
|
To get the answer $\frac{7}{10}\cdot\frac{1}{10}$ , the new question can be: "What is the probability that the randomly chosen student is an irregular student, who scores an A?". In this case you would have these calculations $$P(\text{Grade A and No 100%})=P(\text{No 100%})\cdot P(\text{Grade A}|\text{No 100%})=\frac{7}{10}\cdot\frac{1}{10}$$ (" $\text{Grade A}|\text{No 100%}$ " reads as "Grade A, given No $100\%$ ") The original question was asking for the conditional probability . It was basically asking "What fraction of irregular students scores an A?". This is why you would have these calculations and the textbook's answer $$P(\text{Grade A}|\text{No 100%})=\frac{P(\text{Grade A and No 100%})}{P(\text{No 100%})}=\frac{\frac{7}{10}\cdot\frac{1}{10}}{\frac{7}{10}}=\frac{1}{10}$$ Hope this helps and feel free to ask further questions, if this is a bit unclear! :)
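The two computations side by side, as plain arithmetic:

```python
# P(irregular) = 0.70, P(Grade A | irregular) = 0.10
p_irregular = 0.70
p_A_given_irregular = 0.10

# joint probability: "an irregular student who scores an A" is chosen
p_joint = p_irregular * p_A_given_irregular        # 0.07

# conditional probability: "scores an A, GIVEN irregular" (the book's answer)
p_conditional = p_joint / p_irregular              # 0.10
```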
|
|conditional-probability|
| 1
|
Does there exist an element of order $4$ in $GL_2(\mathbb{Z})/GL_2(\mathbb{Z})'$?
|
For a group $G$ , let $G'$ denote the commutator subgroup of $G$ , and if $H \leq G$ the left cosets will be denoted as $gH$ . Now, I understand the fact that $[SL_2(\mathbb{Z}):GL_2(\mathbb{Z})'] = 2.$ My question is: Does there exist an element of order $4$ in $GL_2(\mathbb{Z})/GL_2(\mathbb{Z})'$ ? What I have worked out so far is that all the elements of $GL_2(\mathbb{Z})$ that have order $4$ will have $i$ and $-i$ as their eigenvalues. Since the order of $gH$ divides the order of $g$ , the existence of an element of order $4$ in $GL_2(\mathbb{Z})/GL_2(\mathbb{Z})'$ would make it isomorphic to $\mathbb{Z}/4\mathbb{Z}$ . So my rephrased question is as follows: Is $GL_2(\mathbb{Z})/GL_2(\mathbb{Z})' \cong \mathbb{Z}/4\mathbb{Z}? $
|
${\rm GL}(2,{\mathbb Z})$ (or ${\rm GL}_2({\mathbb Z})$ if you prefer) is defined to be the multiplicative group of $2 \times 2$ invertible matrices over ${\mathbb Z}$ or, equivalently, matrices over ${\mathbb Z}$ with determinant $\pm 1$ . A presentation of ${\rm GL}(2,{\mathbb Z})$ , which I believe comes from the paper T. Brady, Automatic structures on Aut( $F_2)$ , Arch. Math. 63, 97-102 (1994), is $$\langle P,S,U| P^2, S^2, (SP)^4, (UPSP)^2, (UPS)^3, (US)^2 \rangle,$$ (where ${\rm SL}(2,{\mathbb Z}) = \langle PS,U\rangle$ ) from which you can calculate that $G/[G,G] \cong C_2 \times C_2$ , so the answer to your question is no. Added later : I should have said that an isomorphism from the group defined by the above presentation to ${\rm GL}(2,{\mathbb Z})$ is defined by $$P \mapsto \left(\begin{array}{rr}0&1\\1&0\end{array}\right),\ \ S \mapsto \left(\begin{array}{rr}-1&0\\0&1\end{array}\right),\ \ U \mapsto \left(\begin{array}{rr}1&1\\0&1\end{array}\right). $$
|
|group-theory|normal-subgroups|general-linear-group|derived-subgroup|
| 1
|
Number of points on the polar coordinates which satisfy $|\cos x| + \sin 3x = 0$
|
It's been a while I haven't touched this math so I feel like my answer is not the best way to solve this, but here's my attempt: Problem : Find the number of distinct points on the polar coordinates which are the solutions to this equation: $|\cos x| + \sin 3x = 0$ Answer: $|\cos x| + \sin3x = 0$ $\iff |\cos x| = \cos\left(3x + \frac{\pi}{2}\right)$ $$ \iff \begin{cases} x = 3x + \frac{\pi}{2} + k\pi \; \lor x = -3x - \frac{\pi}{2} + k\pi \quad \text{(based on the polar coordinates)} \\ \cos(3x + \frac{\pi}{2}) \geq 0 \end{cases} $$ $$\iff \begin{cases} x = -\frac{\pi}{4} + \frac{k\pi}{2}\; \lor x = -\frac{\pi}{8} + \frac{k\pi}{4}\\ \cos\left(3x + \frac{\pi}{2}\right) \geq 0 \end{cases} $$ I then solved for $x$ in $[0,2\pi)$ per the first condition and used the calculator to check for the second condition, which gives me the answer of $6$ points that work. I wondered if there is a more elegant algebraic approach that I could follow for these types of problems. However, I'm also looking
|
Here is a different approach. Rewrite the equation as $$|\cos x| = -\sin 3x \implies \cos^2 x = \sin^23x$$ so that $$\sin^2x = 1-(3\sin x - 4\sin^3 x)^2$$ ie $$16t^3-24t^2+10t-1=0 \iff (2t-1)(8t^2-8t+1)=0\iff (t-\sin^2(\pi/4))(t-\sin^2(\pi/8))(t-\sin^2(3\pi/8))=0$$ where $t:= \sin^2 x$ and since $\sin^2(\pi/8)+\sin^2(3\pi/8)=1$ and $\sin(\pi/8)\sin(3\pi/8)=\frac1{2\sqrt2}$ . To avoid extraneous solutions introduced by squaring the equation, we also need that $\sin 3x\le 0 \iff \sin x(3-4\sin^2x)\le 0$ . Out of the $6$ possible values for $\sin x$ we are only left with $3$ viz., $$\sin x = \sin(-\pi/4) \lor \sin x = \sin(-\pi/8)\lor \sin x = \sin(3\pi/8)$$ Therefore, what are the values of $x$ possible? In the range $[0,2\pi)$ , each of the different possible values of $\sin x$ correspond to $2$ values of $x$ . All of them are distinct, thus giving us $\color{blue}{6}$ solutions.
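A numeric cross-check: count sign changes of $f(x)=|\cos x|+\sin 3x$ on a fine grid over $[0,2\pi]$. (This counts roots correctly here because, as one can verify from the derivative, all six roots are transversal crossings.)

```python
from math import cos, sin, pi, fabs

def f(x):
    return fabs(cos(x)) + sin(3*x)

# count sign changes of f on a fine grid over [0, 2*pi]
N = 100000
signs = [f(2*pi*i/N) > 0 for i in range(N + 1)]
n_roots = sum(1 for i in range(N) if signs[i] != signs[i + 1])
```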
|
|trigonometry|
| 1
|
Can $AB$ be invertible if $A\in \mathbb R^{3\times 2}$ and $B\in \mathbb R^{2\times 3}$
|
I saw this question in practice problems and have seen similar questions asked about this on here. I have a proof, but I'm not sure whether it's acceptable: It is known that $(AB)^{-1} = B^{-1}A^{-1}$ So we have $(XY)^{-1} = Y^{-1}X^{-1}$ but as $X$ and $Y$ are non-square matrices, the right-hand side doesn't exist and therefore the left-hand side doesn't either.
|
By your logic, the matrix $[1]$ is not invertible, since $[1]$ is the product of the matrix $[1, 0]$ with the matrix $\begin{bmatrix}1\\0\end{bmatrix}$ . The statement you are trying to use is: If $A$ and $B$ are invertible square matrices, then $C=A\cdot B$ is invertible, and $C^{-1} = B^{-1}\cdot A^{-1}$ . However, from the fact above, it does not follow that if $A$ and $B$ are nonsquare, then their product is not invertible. If $A$ and $B$ are non-square, then the above fact says absolutely nothing about their product. In fact, as my example shows, a product of two non-square matrices absolutely can be invertible. However, in your case, the simplest way to think about your problem is using the rank of matrices. In particular, the rank of a product of matrices is at most the rank of either factor (see this question for details). Given that, you can answer the following questions: What is the largest possible rank of $X$ ? What is the largest possible rank of $Y$ ? Therefore, what is t
|
|linear-algebra|matrices|solution-verification|
| 0
|
Can $AB$ be invertible if $A\in \mathbb R^{3\times 2}$ and $B\in \mathbb R^{2\times 3}$
|
I saw this question in practice problems and have seen similar questions asked about this on here. I have a proof, but I'm not sure whether it's acceptable: It is known that $(AB)^{-1} = B^{-1}A^{-1}$ So we have $(XY)^{-1} = Y^{-1}X^{-1}$ but as $X$ and $Y$ are non-square matrices, the right-hand side doesn't exist and therefore the left-hand side doesn't either.
|
Even simpler: You can also think of it in terms of linear maps, without using the rank inequality. If $B\in \mathbb R^{2 \times 3}$ then $B$ is a linear map $\mathbb R^3 \to \mathbb R^2$ which can not be injective, by dimensionality reasons. Hence $AB=A\circ B$ is not injective and therefore not invertible.
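Exact integer arithmetic makes the obstruction visible: since $AB$ has rank at most $2$, its determinant is exactly zero. A pure-Python check over random integer matrices:

```python
import random

def det3(M):
    # cofactor expansion of a 3x3 determinant (exact over the integers)
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

random.seed(0)
all_singular = True
for _ in range(200):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(3)]  # 3x2
    B = [[random.randint(-9, 9) for _ in range(3)] for _ in range(2)]  # 2x3
    AB = [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(3)]
          for r in range(3)]
    all_singular = all_singular and det3(AB) == 0
```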
|
|linear-algebra|matrices|solution-verification|
| 0
|
Calculate $H^*(M - \{ p \})$ in terms of $H^*$ for $M$ a closed manifold
|
This is from my graduate level differential geometry class. Let $M$ be a closed manifold. I am trying to calculate $H^*(M - \{ p \})$ in terms of $H^*$ . Here is what I have so far: We know from the excision theorem that the cohomology of the punctured manifold $M - \{ p \}$ can be computed using the long exact sequence of $(M, M - \{ p \})$ given by $$ \cdots \to H^n(M, M - \{ p \}) \to H^n(M) \to H^n(M - \{ p \}) \to H^{n+1}(M, M - \{ p \}) \to H^{n+1}(M) \to \cdots $$ Since $M$ is a closed manifold and $\{ p \}$ is a single point, we have that $H^n(M, M - \{ p \})$ is isomorphic to the reduced cohomology $\tilde{H}^{n-1}(S^{m-1})$ , where $m$ is the dimension of $M$ . Thus we have $$ \cdots \to \tilde{H}^{n-1}(S^{m-1}) \to H^n(M) \to H^n(M - \{ p \}) \to \tilde{H}^n(S^{m-1}) \to \cdots$$ From this sequence, we have that $H^n(M - \{ p \})$ is isomorphic to $H^n(M)$ for $n \neq m-1, m$ , and there is an exact sequence $$ 0 \to H^{m-1}(M) \to H^{m-1}(M - \{ p \}) \to \mathbb Z \to H
|
Apart from the small things mentioned in the comments and the fact that it makes no use of the assumption that $M$ is closed, the proof looks fine to me. However, concluding with a long exact sequence feels somewhat weak here, and in fact if you know a little more about the (co)homology of closed manifolds you can take the argument a little bit further: Let's consider the analogous exact sequence in homology $$ 0 \to H_m(M \setminus \{p\}) \to H_m(M) \overset{r_*}{\to} \underbrace{H_m(M, M \setminus \{p\})}_{\cong \mathbb{Z}} \to H_{m - 1}(M \setminus \{p\}) \to H_{m - 1}(M) \to 0 $$ where $r\colon (M, \emptyset) \to (M, M \setminus \{p\})$ is the restriction map. By the restriction theorem (cf. Hatcher, Algebraic Topology, Theorem 3.26), we only have to distinguish two cases: $M$ is orientable and $r_*$ is an isomorphism. In this case $H_m(M \setminus \{p\}) = 0$ and $H_{m - 1}(M \setminus \{p\}) \cong H_{m - 1}(M)$ . $M$ is not orientable and $r_*$ is trivial. In this case $H_m(M \se
|
|algebraic-topology|manifolds|homology-cohomology|de-rham-cohomology|
| 0
|
Solving $p\sin^{4}{\theta}-q\sin^{4}{\phi}=p$ and $p\cos^{4}{\theta}-q\cos^{4}{\phi}=q$ for $\theta$ and $\phi$
|
Given $$p\sin^{4}{\theta}-q\sin^{4}{\phi}=p$$ and $$p\cos^{4}{\theta}-q\cos^{4}{\phi}=q$$ find $\theta$ and $\phi$ . Here is my solution (help improving it would be much appreciated): $$p(1-2\cos^{2}{\theta}+\cos^{4}{\theta})-q(1-2\cos^{2}{\phi}+\cos^{4}{\phi})=p \tag1$$ Subtracting this from the original second equation gives $$p-2p\cos^{2}{\theta}-q+2q\cos^{2}{\phi}=p-q \tag2$$ Rearranging: $$\cos^{2}{\phi}=\frac{p}{q}\cos^{2}{\theta} \tag3$$ Which means $$\cos^{4}{\phi}=\frac{p^{2}}{q^{2}}\cos^{4}{\theta} \tag4$$ When substituted into the second original equation we get $$\left(p-\frac{p^{2}}{q}\right)\cos^{4}{\theta}=q \tag5$$ From this, $$\cos^{4}{\theta}=\frac{q^{2}}{qp-p^{2}} \tag6$$ and $$\cos^{4}{\phi}=\frac{p^{2}}{qp-p^{2}} \tag7$$ BUT... is it possible to obtain "nicer" expressions for $\theta$ and $\phi$ somehow?
|
We are given $$p\sin^4\theta-q\sin^4\phi=p,\qquad p\cos^4\theta-q\cos^4\phi=q.$$ Subtracting the second equation from the first, and adding it to the first, give respectively $$p(\sin^4\theta-\cos^4\theta)-q(\sin^4\phi-\cos^4\phi)=p-q,\qquad(1)\\ p(\sin^4\theta+\cos^4\theta)-q(\sin^4\phi+\cos^4\phi)=p+q.\qquad(2)$$ Since $\cos^2\theta+\sin^2\theta=\cos^2\phi+\sin^2\phi=1$ , factorizing the LHS terms of eqn $1$ gives $$p(\sin^2\theta-\cos^2\theta)-q(\sin^2\phi-\cos^2\phi)=p-q,$$ which simplifies to $$p\cos^2\theta=q\cos^2\phi.\qquad(3)$$ Now eqn $2$ may be written $p(1-2\cos^2\theta\sin^2\theta)-q(1-2\cos^2\phi\sin^2\phi)=p+q$ , which an application of eqn $3$ reduces to $$q\cos^2\phi\sin^2\phi=q(1+\sin^2\theta\cos^2\phi).$$ Observe that $\cos^2\phi\sin^2\phi=\frac14\sin^22\phi\leqslant\frac14$ , while $1+\sin^2\theta\cos^2\phi\geqslant1$ . It follows that $q=0.$ If $p$ is also zero, the given equations vanish and so have quite indefinite solutions. Otherwise, they reduce to $\sin^4\the
|
|trigonometry|solution-verification|alternative-proof|
| 0
|
Simplifying $\sum _{t=1}^{\infty }\:\frac{\left(1+g\right)^{ \lceil t/2\rceil}}{\left(1+k\right)^t}$
|
how to simplify this expression so that I get an expression only? when $t = 1, 2,.... $ $$y = \sum _{t=1}^{\infty }\:\frac{\left(1+g\right)^{ \lceil t/2\rceil}}{\left(1+k\right)^t}$$ I know that for t = 1, $\lceil \frac{t}{2}\rceil$ = 1 for t = 2, $\lceil \frac{2}{2}\rceil$ = 1 for t = 3, $\lceil \frac{3}{2}\rceil$ = 2 for t = 4, $\lceil \frac{4}{2}\rceil$ = 2 so in general $\lceil \frac{t}{2}\rceil = \frac{t}{2}$ if t is even, and $\lceil \frac{t}{2}\rceil = \frac{t+1}{2}$ , if $t$ ia odd. And I can split this expression into two parts the even part and the odd part. The problem I am facing is I dont know how that affect the summation symbol, should I let $t = 2k$ , and substitute $t = 2k$ for the even part and sub that into the summation and the t at the denominator? EDIT: (continuing from @mag's suggestion) $\sum _{t=1}^{\infty }\:\frac{\left(1+g\right)^{ \lceil \frac{t}{2}\rceil}}{\left(1+k\right)^t}$ $=\sum _{n=1}^{\infty }\:\frac{\left(1+g\right)^{\lceil n\rceil}}{\left(1+k\right
|
Yes, what you're describing is (almost) correct. Just be careful: as $k$ is already a variable in your calculation you have to substitute with a "free" variable like $t=2n$ for the even part and $t=2n-1$ for the odd part. $$\sum _{t=1}^{\infty }\:\frac{\left(1+g\right)^{ \lceil \frac{t}{2}\rceil}}{\left(1+k\right)^t}=\sum _{n=1}^{\infty }\left[\frac{\left(1+g\right)^{n}}{\left(1+k\right)^{2n}}+\frac{\left(1+g\right)^{ \lceil\frac{2n-1}{2}\rceil}}{\left(1+k\right)^{2n-1}}\right]$$ By using $\lceil\frac{2n-1}{2}\rceil=n$ you have simplified the ceiling symbol.
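A quick numeric check that the even/odd split reproduces the original sum (truncated at $T=200$ terms; the values of $g$ and $k$ are arbitrary):

```python
# ceil(t/2) in Python is -(-t // 2)
g, k = 0.02, 0.05
T = 200

orig = sum((1 + g)**(-(-t // 2)) / (1 + k)**t for t in range(1, T + 1))
even = sum((1 + g)**n / (1 + k)**(2*n) for n in range(1, T//2 + 1))      # t = 2n
odd  = sum((1 + g)**n / (1 + k)**(2*n - 1) for n in range(1, T//2 + 1))  # t = 2n-1
```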
|
|summation|ceiling-and-floor-functions|
| 1
|
Are Orthogonal Basis perpendicular to Original Basis in Gram-Schmidt?
|
a, b and c are 3 independent vectors. We can generate orthonormal basis vectors from those 3 vectors using the Gram-Schmidt method. Let's say the 3 orthogonal basis vectors generated from a, b and c are $A$ , $B$ and $C$ . We can generate orthonormal basis vectors by dividing those orthogonal basis vectors by their length. I am trying to figure out if $C$ will be orthogonal to b (which is one of the original basis vectors). It seems like it should be, because $C$ is orthogonal to the plane containing a and b. Is there any way to prove this? I am trying to use the approach below and going into calculations which don't look pretty. $$C = c - \frac{(A^Tc)A}{A^TA} - \frac{(B^Tc)B}{B^TB}$$ Multiplying both sides by $b^T$ . Idea is to show $b^TC = 0$ to demonstrate orthogonality $$b^TC = b^Tc - \frac{(A^Tc)b^TA}{A^TA} - \frac{(B^Tc)b^TB}{B^TB}$$ Using $$B = b - \frac{(A^Tb)A}{A^TA}$$ $$b^TC = b^Tc - \frac{(A^Tc)b^TA}{A^TA} - \frac{\left(b - \frac{(A^Tb)A}{A^TA}\right)^Tcb^T\left(b - \frac{(A^
|
Yes, this is indeed the case. If we have a sequence of vectors $v_1, \ldots, v_n$ , and obtain $e_1, \ldots, e_n$ from the Gram-Schmidt procedure, then it has the additional (sometimes overlooked) property that $$\operatorname{span}\{e_1, \ldots, e_i\} = \operatorname{span}\{v_1, \ldots, v_i\}. \tag{$\star$}$$ In particular, since $e_{i+1}$ is perpendicular to each vector in $\{e_1, \ldots, e_i\}$ , and hence to each vector in $\operatorname{span}\{e_1, \ldots, e_i\}$ , it will also be perpendicular to each vector in $\operatorname{span}\{v_1, \ldots, v_i\}$ , including $v_1, \ldots, v_i$ . That is, each vector we obtain from Gram-Schmidt will be orthogonal to every previous vector in the list. To prove $(\star)$ , it is best to use induction. Note that $e_1 = v_1$ , so $\operatorname{span}\{v_1\} = \operatorname{span}\{e_1\}$ . That is, the base case holds. Suppose $(\star)$ holds when $i = k$ . We have $$e_{k+1} = v_{k+1} - \sum_{i=1}^k \frac{v_{k+1}^\top e_i}{e_i^\top e_i} e_i.$$ No
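A concrete numeric illustration in pure Python (the vectors a, b, c below are arbitrary but linearly independent):

```python
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def minus_proj(v, e):
    # subtract from v its projection onto the (nonzero) vector e
    coef = dot(v, e) / dot(e, e)
    return [vi - coef*ei for vi, ei in zip(v, e)]

a = [1.0, 2.0, 0.0]
b = [0.0, 1.0, 1.0]
c = [1.0, 0.0, 3.0]

A = a
B = minus_proj(b, A)
C = minus_proj(minus_proj(c, A), B)

# C is orthogonal not only to A and B but also to the ORIGINAL vectors a, b
C_dot_a = dot(C, a)
C_dot_b = dot(C, b)
```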
|
|linear-algebra|gram-schmidt|
| 1
|
Are Orthogonal Basis perpendicular to Original Basis in Gram-Schmidt?
|
a, b and c are 3 independent vectors. We can generate orthonormal basis vectors from those 3 vectors using the Gram-Schmidt method. Let's say the 3 orthogonal basis vectors generated from a, b and c are $A$ , $B$ and $C$ . We can generate orthonormal basis vectors by dividing those orthogonal basis vectors by their length. I am trying to figure out if $C$ will be orthogonal to b (which is one of the original basis vectors). It seems like it should be, because $C$ is orthogonal to the plane containing a and b. Is there any way to prove this? I am trying to use the approach below and going into calculations which don't look pretty. $$C = c - \frac{(A^Tc)A}{A^TA} - \frac{(B^Tc)B}{B^TB}$$ Multiplying both sides by $b^T$ . Idea is to show $b^TC = 0$ to demonstrate orthogonality $$b^TC = b^Tc - \frac{(A^Tc)b^TA}{A^TA} - \frac{(B^Tc)b^TB}{B^TB}$$ Using $$B = b - \frac{(A^Tb)A}{A^TA}$$ $$b^TC = b^Tc - \frac{(A^Tc)b^TA}{A^TA} - \frac{\left(b - \frac{(A^Tb)A}{A^TA}\right)^Tcb^T\left(b - \frac{(A^
|
Yes, if we have linearly independent vectors $x_1,\ldots, x_n$ and we apply Gram-Schmidt orthonormalization moving from left to right, resulting in $X_1,\ldots, X_n$ , then each $X_k$ will be orthogonal to $x_1,\ldots, x_{k-1}$ . Without loss of generality, we can assume that the vector space in question is $V=\text{span}\{x_1,\ldots, x_n\}$ (in general, we could be working in a larger space $X$ such that $\text{span}\{x_1,\ldots, x_n\}\subsetneq X$ , but the larger space is irrelevant in this context). For $k=1,\ldots, n$ , let $W_k=\text{span}\{x_1,\ldots, x_k\}$ . For any subset $S$ of $V$ , let $$S^\perp = \{x\in V:(\forall y\in S)(y\cdot x=0)\}.$$ So $S^\perp$ is the set of things orthogonal to everything in $S$ . As you said, we'll be interested in the case when $S$ is a plane (the plane spanned by $a,b$ ). The key point is that the plane spanned by $a,b$ is the same as the plane spanned by $A,B$ . It is difficult to reconstruct this "after the fact." It's much easier to prove it
|
|linear-algebra|gram-schmidt|
| 0
|
Prove $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$, with $x, y, a, b \in \mathbb{R}$ and $a, b > 0$
|
Prove $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$ , with $x, y, a, b \in \mathbb{R}$ and $a, b > 0$ Proof : $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$ $\iff (bx^2 + ay^2)(a + b) \geq ab(x^2 + 2xy + y^2)$ $\iff abx^2 + a^2y^2 + b^2x^2 + aby^2 \geq abx^2 + 2abxy + aby^2$ $\iff a^2y^2 + b^2x^2 \geq 2abxy$ $\iff (ay - bx)^2 \geq 0$ This result prompted me to use the Cauchy-Schwarz inequality to prove the statement directly $LHS \geq RHS$ . However, I couldn't seem to figure out a solution in this manner. I wondered if somebody here has a solution that doesn't involve going backward like above.
|
Let Cauchy–Bunyakovsky–Schwarz run over $$(x + y)^2\:=\: \left(\sqrt{a}\,\frac{x}{\sqrt a} \,+\, \sqrt{b}\,\frac{y}{\sqrt b}\right)^2$$ (and eventually divide by $a+b\,$ ).
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 0
|
Prove $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$, with $x, y, a, b \in \mathbb{R}$ and $a, b > 0$
|
Prove $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$ , with $x, y, a, b \in \mathbb{R}$ and $a, b > 0$ Proof : $\frac{x^2}{a} + \frac{y^2}{b} \geq \frac{(x + y)^2}{a + b}$ $\iff (bx^2 + ay^2)(a + b) \geq ab(x^2 + 2xy + y^2)$ $\iff abx^2 + a^2y^2 + b^2x^2 + aby^2 \geq abx^2 + 2abxy + aby^2$ $\iff a^2y^2 + b^2x^2 \geq 2abxy$ $\iff (ay - bx)^2 \geq 0$ This result prompted me to use the Cauchy-Schwarz inequality to prove the statement directly $LHS \geq RHS$ . However, I couldn't seem to figure out a solution in this manner. I wondered if somebody here has a solution that doesn't involve going backward like above.
|
Direct Cauchy-Schwarz would be a single step $$({x\over\sqrt{a}}\sqrt{a}+{y\over\sqrt{b}}\sqrt{b})^2\le({x^2\over a}+{y^2\over b})(a+b)$$ Simplification gives the required inequality
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 1
|
Orthogonality of left and right singular vectors of traceless 2D matrices
|
Let $A$ be a traceless $2\times 2$ complex matrix. Its SVD reads $A=UDV^\dagger$ , or in dyadic notation, $$A=s_1 u_1 v_1^\dagger+s_2 u_2 v_2^\dagger,$$ with $\langle u_i,u_j\rangle=\langle v_i,v_j\rangle=\delta_{ij}$ and $s_i\ge0$ . The left (right) singular vectors of $A$ are $(u_1,u_2)$ and $(v_1,v_2)$ , and its singular values are $s_1,s_2$ . The trace condition $\operatorname{Tr}(A)=0$ translates, in terms of its SVD, into $$s_1\langle v_1,u_1\rangle+s_2\langle v_2,u_2\rangle=0.$$ However, numerically, I find that the stronger condition $\langle u_1,v_1\rangle=\langle u_2,v_2\rangle=0$ holds. In words, the left and right singular vectors corresponding to the same singular values are always orthogonal. You can use the following Mathematica snippet to verify it directly: With[{mat = # - Tr[#]/2 IdentityMatrix@2 & @ RandomComplex[{-1 - I, 1 + I}, {2, 2}]}, SingularValueDecomposition@mat // Dot[ConjugateTranspose@#[[1]], #[[3]]] & // Chop // MatrixForm ] This snippet generates random
|
I later found out that this result can be seen as a direct consequence of the fact that any traceless matrix has an orthonormal basis with respect to which its diagonal is zero. In other words, for any $A$ such that $\operatorname{tr}(A)=0$ , there's an orthonormal basis $\mathbf u_k$ such that $\langle \mathbf u_k,A\mathbf u_k\rangle=0$ for all $k$ . This is discussed e.g. in Is every square traceless matrix unitarily similar to a zero-diagonal matrix? and Is there a similarity transformation rendering all diagonal elements of a matrix equal? . To see why that result implies the one at hand, note that if $A$ is 2x2 and has zero diagonal with respect to the basis $\mathbf u_k$ , then $\langle \mathbf u_1,A\mathbf u_1\rangle=\langle\mathbf u_2,A\mathbf u_2\rangle=0$ , and therefore we must have $A\mathbf u_1\propto \mathbf u_2$ and $A\mathbf u_2\propto \mathbf u_1$ . But this means that $\langle A\mathbf u_1,A\mathbf u_2\rangle=0$ , i.e. $\{\mathbf u_1,\mathbf u_2\}$ are principal components i
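The same check as the question's Mathematica snippet can be sketched in Python/NumPy (random complex matrices are generically non-degenerate, i.e. have distinct singular values, which is the case where the orthogonality is forced):

```python
import numpy as np

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(100):
    M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    A = M - (np.trace(M) / 2) * np.eye(2)     # project onto traceless matrices
    U, s, Vh = np.linalg.svd(A)               # A = U diag(s) Vh, V = Vh^dagger
    for k in range(2):
        # overlap between left and right singular vectors of the same s_k
        overlap = abs(np.vdot(U[:, k], Vh[k, :].conj()))
        worst = max(worst, overlap)
assert worst < 1e-7
```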
|
|linear-algebra|matrices|trace|svd|
| 0
|
Unique ways to arrange people in seats. 4 people, 5 seats
|
Say that I have 4 people and 5 seats. How many unique ways are there of arranging these 4 people into the five seats? In questions like this I would use this formula: $$\frac{n!}{(n-k)!}$$ However this would give me -1! in the denominator. How do I interpret this? $$\frac{4!}{(-1)!}$$
|
Approaching directly by first principles: Without loss of generality, the people have names and can be ordered unambiguously according to those names in alphabetical order. We can then unambiguously refer to the "first" person and "second" person and so on with regards to their alphabetical order. Choose what seat the first person sits in. There are $5$ choices Choose what seat the second person sits in. There are $4$ remaining choices Choose what seat the third person sits in. There are $3$ remaining choices Choose what seat the fourth person sits in. There are $2$ remaining choices Multiplying the number of options at each step together gives us $5\times 4\times 3\times 2$ total possible outcomes. In general with $n$ chairs and $k$ people this approach gives us a total of $n^{\underline{k}} = \underbrace{n\cdot (n-1)\cdot (n-2)\cdots (n-k+1)}_{k\text{ terms}}$ where this is the falling factorial . You could just as easily have referred to this with your "formula" if you were to have
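The falling-factorial count can be checked directly against a brute-force enumeration (a small sketch using the standard library):

```python
from itertools import permutations

n_seats, k_people = 5, 4
# each permutation of 4 of the 5 seats is one seating arrangement
count = sum(1 for _ in permutations(range(n_seats), k_people))

falling = 1
for i in range(k_people):      # n (n-1) ... (n-k+1), the falling factorial
    falling *= n_seats - i

assert count == falling == 120
```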
|
|permutations|combinations|
| 0
|
Find the perfect squares of four digits whose square root is the sum of the numbers obtained if we separate the first two digits from the last two
|
the question We are asking for the perfect squares of four digits whose square root is the sum of the numbers obtained if we separate the first two digits from the last two the idea let the number be $\overline{abcd}$ , where a,b,c,d are digits $32^2\leq \overline{abcd} \leq 99^2$ , which means $32\leq \sqrt{\overline{abcd} } \leq 99$ I don't really understand the part with 'sum'... there are 2 or more numbers to add? Hope one of you can help me! Thank you!
|
A necessary condition is to find positive integer solutions to $(x+y)^2 = 100x +y$ , subject to $32 \leq x+y \leq 99$ . Writing this as $ (x+y)(x+y-1) = 99 x $ , since $ \gcd(x+y, x+y - 1) = 1$ , conclude that we must have: $x+y \equiv 0, 1 \pmod{9}$ $x+y \equiv 0, 1 \pmod {11}$ Thus, we get that $ x+y \equiv 0, 1, 45, 55 \pmod{99}$ . Finally, verify that $x+y = 45, 55, 99$ satisfy the conditions. Note: In the verification, we have to show that $ x, y$ are integers from 0 to 100. In particular, this implies that $\lfloor \frac{ (x+y)^2}{100} \rfloor = x$ .
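A brute-force check over all four-digit squares confirms exactly the three cases $x+y = 45, 55, 99$ (a short sketch):

```python
solutions = []
for s in range(32, 100):            # square roots of four-digit numbers
    n = s * s
    x, y = divmod(n, 100)           # first two digits / last two digits
    if x + y == s:
        solutions.append(n)
assert solutions == [2025, 3025, 9801]    # 45^2, 55^2, 99^2
```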
|
|radicals|square-numbers|decimal-expansion|
| 0
|
Why is the distributive law incorrect here?
|
I am trying to simplify: $(p \lor \neg q) \land (p \lor q) $ One thing I identify from the table is this: $(p \lor q) \land (p \lor r) $ is the second distributive law, which becomes $p \lor (q \land r) $ My steps: $(p \lor \neg q) \land (p \lor q) $ $p \lor (q \lor \neg q) $ $p$ I watched this video: https://www.youtube.com/watch?v=fXfv1Ru5tzE&pp=ygUebG9naWMgc2ltcGxpZmljYXRpb24gdGF1dG9sb2d5 I was told that my approach above is wrong.
|
My steps: $(p \lor \neg q) \land (p \lor q) $ $p \lor (q \lor \neg q) $ No. That should be $p \lor (q \color{red}\land \neg q) $ $p$ No. First of all, your $q \lor \neg q$ is equivalent to $\top$ , and so you would get $p \lor \top$ . Second, $p \lor \top$ is equivalent to $\top$ , rather than $p$ . And third: make sure to show that step. So, correcting this: $(p \lor \neg q) \land (p \lor q) $ $p \lor (q \land \neg q) $ $p \lor \bot $ $p$
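The corrected simplification can be verified exhaustively with a truth table (a two-variable sketch):

```python
from itertools import product

# check (p or not q) and (p or q)  ==  p  for all truth assignments
results = []
for p, q in product([False, True], repeat=2):
    lhs = (p or not q) and (p or q)
    results.append(lhs == p)
assert all(results)
```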
|
|logic|propositional-calculus|boolean-algebra|
| 0
|
How to use bayes rule to solve the bag problem from Judea Pearl's Book
|
I am currently reading Chapter 3 from "The Book of Why", by Judea Pearl, and came accross an interesting involving applications of Bayes' Rule. It is as follows: "Suppose you’ve just landed in Zanzibar after making a tight connection in Aachen, and you’re waiting for your suitcase to appear on the carousel. Other passengers have started to get their bags, but you keep waiting… and waiting… and waiting. What are the chances that your suitcase did not actually make the connection from Aachen to Zanzibar? The answer depends, of course, on how long you have been waiting. If the bags have just started to show up on the carousel, perhaps you should be patient and wait a little bit longer. If you’ve been waiting a long time, then things are looking bad. Let's say that all the bags at Zanzibar airport get unloaded within ten minutes and that the probability your bag made the connection (the bag is on the plane) is 0.5. See Table of conditional probabilities This table, though large, should be
|
We need the inverse probability that the bag was on the plane, given that it had not shown up on the carousel after $x$ minutes. In other words, we want the probability $P(\text{bag on plane=T}\; |\; \text{bag on Carousel = F, time = x})$ According to Bayes' theorem, $P(\text{bag on plane=T}\; |\; \text{bag on Carousel = F, time = x}) = $ $$\frac{P(\text{bag on carousel = F, time = x | bag on plane = T)} *\: P(\text{bag on plane = T})}{P\text{(bag on carousel = F)}} \:\:\:\:\:(1)$$ Following Pearl, let's evaluate this expression at $x$ = 5 minutes. The numerator in equation (1) is the product of a conditional probability and a prior probability. The conditional probability is given by Table 3.3, corresponding to the row 'bag on plane = True', 'time elapsed = 5' (i.e., x =5), and the column 'carousel = false'. This value is 50 percent after 5 minutes if the bag was indeed on the plane. By assumption, the prior probability of the bag being on the plane is 50 percent (or p = 0.5). So,
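The computation can be sketched in exact arithmetic. The model assumption here (consistent with the book's table) is that a bag that made the plane unloads uniformly over the first 10 minutes, so $P(\text{not yet on carousel at } t \mid \text{on plane}) = 1 - t/10$:

```python
from fractions import Fraction

prior = Fraction(1, 2)                  # P(bag on plane)

def posterior(t):
    # Assumed model: uniform unloading over 10 minutes.
    p_wait_on = 1 - Fraction(t, 10)     # bag on plane, still waiting at time t
    p_wait_off = Fraction(1)            # bag missing: it never appears
    num = p_wait_on * prior
    return num / (num + p_wait_off * (1 - prior))

assert posterior(0) == Fraction(1, 2)   # no information yet
assert posterior(5) == Fraction(1, 3)   # the x = 5 case
assert posterior(10) == 0               # all unloaded, bag definitely missing
```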
|
|probability-theory|bayes-theorem|
| 0
|
Computing $\frac{\partial(X^Tb)}{\partial X}$
|
In the matrix cookbook there is an identity $$\frac{\partial (a^TX^T b)}{\partial X} = ba^T$$ I recently ran into a problem where I had to compute $$\frac{\partial (X^T b)}{\partial X}$$ but I couldn't find a formula for this. However it seems that, at least for my example, $$\frac{\partial (X^T b)}{\partial X} = b$$ . Does this formula hold in general? Does it even make sense to take the derivative $$\frac{\partial (X^T b)}{\partial X}$$ . The problem where this came up was chapter 3.1.5 of Pattern Recognition and Machine Learning specifically taking the derivative wrt W of 3.33: $$ln(p(T|X,W,\beta))=\frac{NK}{2}ln(\frac{\beta}{2\pi}) - \frac{\beta}{2}\sum_{n=1}^N || t_n -W^T \phi (x_n)||^2$$ where I used the chain rule to compute: $$\frac{\partial}{\partial W}ln(p(T|X,W,\beta))=- \frac{\beta}{2}\sum_{n=1}^N \frac{\partial}{\partial W}(t_n -W^T \phi (x_n))^T(t_n -W^T \phi (x_n)) $$ $$=- \frac{\beta}{2}\sum_{n=1}^N \frac{\partial}{\partial (t_n-W^T \phi (x_n))}(t_n -W^T \phi (x_n))^T(t
|
Let's consider $X \in \mathbb{R}^{m \times n}$ , so $b \in \mathbb{R}^m$ and $X^Tb \in \mathbb{R}^n$ . The derivative $\frac{\partial X^Tb}{\partial X}$ has $n \times m \times n$ terms, so it would be helpful to compute each $\frac{\partial (X^Tb)_k}{\partial X}$ separately: $$\frac{\partial (X^Tb)_k}{\partial X} = \begin{bmatrix} \frac{\partial (X^Tb)_k}{\partial X_{11}} & \frac{\partial (X^Tb)_k}{\partial X_{12}} & \cdots & \frac{\partial (X^Tb)_k}{\partial X_{1n}} \\ \frac{\partial (X^Tb)_k}{\partial X_{21}} & \frac{\partial (X^Tb)_k}{\partial X_{22}} & \cdots & \frac{\partial (X^Tb)_k}{\partial X_{2n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial (X^Tb)_k}{\partial X_{m1}} & \frac{\partial (X^Tb)_k} {\partial X_{m2}} & \cdots & \frac{\partial (X^Tb)_k}{\partial X_{mn}} \\ \end{bmatrix}$$ If we compute for each element: $$\frac{\partial (X^Tb)_k}{\partial X_{ij}} = \frac{\partial}{\partial X_{ij}} \sum_{l = 1}^{m}X^T_{kl}b_l = \frac{\partial}{\partial X_{ij}} \sum_{l = 1}^
|
|matrix-calculus|
| 0
|
Computing $\frac{\partial(X^Tb)}{\partial X}$
|
In the matrix cookbook there is an identity $$\frac{\partial (a^TX^T b)}{\partial X} = ba^T$$ I recently ran into a problem where I had to compute $$\frac{\partial (X^T b)}{\partial X}$$ but I couldn't find a formula for this. However it seems that, at least for my example, $$\frac{\partial (X^T b)}{\partial X} = b$$ . Does this formula hold in general? Does it even make sense to take the derivative $$\frac{\partial (X^T b)}{\partial X}$$ . The problem where this came up was chapter 3.1.5 of Pattern Recognition and Machine Learning specifically taking the derivative wrt W of 3.33: $$ln(p(T|X,W,\beta))=\frac{NK}{2}ln(\frac{\beta}{2\pi}) - \frac{\beta}{2}\sum_{n=1}^N || t_n -W^T \phi (x_n)||^2$$ where I used the chain rule to compute: $$\frac{\partial}{\partial W}ln(p(T|X,W,\beta))=- \frac{\beta}{2}\sum_{n=1}^N \frac{\partial}{\partial W}(t_n -W^T \phi (x_n))^T(t_n -W^T \phi (x_n)) $$ $$=- \frac{\beta}{2}\sum_{n=1}^N \frac{\partial}{\partial (t_n-W^T \phi (x_n))}(t_n -W^T \phi (x_n))^T(t
|
In this case it is much clearer to think of the derivative as a linearization of the map $$ F\colon \mathbb R^{m\times n} \to \mathbb R^n,\, X\mapsto X^T b. $$ Indeed since $$ F(X+V)=(X+V)^Tb = X^Tb+V^Tb=F(X)+V^Tb $$ we find that $$ D_XF(V)=\frac{\partial (X^Tb)}{\partial X}(V)=V^Tb. $$ With this there is no need for partial derivatives or writing out the matrices at play. Generally, this definition of the derivative ( See here ) is often helpful in similar settings.
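Since $F$ is affine in $X$, the increment $F(X+V)-F(X)$ equals $V^Tb$ exactly, with no limit needed; this is easy to confirm numerically (a sketch with arbitrary shapes and seed):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
X = rng.standard_normal((m, n))
V = rng.standard_normal((m, n))      # direction of the perturbation
b = rng.standard_normal(m)

F = lambda Y: Y.T @ b
# F is affine in its matrix argument, so the increment is exactly V^T b:
exact = bool(np.allclose(F(X + V) - F(X), V.T @ b))
assert exact
```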
|
|matrix-calculus|
| 0
|
Equivalent Set [0,1] and [0,1]\X
|
Let $X:= \left \{\frac{1}{n}: n\in\mathbb{N}\right \}$ . Prove that there is a bijection between $[0,1]$ and $[0,1]-X$ . I still don't find a bijective function between $[0,1]$ and $[0,1]-X$ . So far, I already know that $[0,1]$ and $(0,1)$ are equivalent sets. We may find a bijective function between $(0,1)$ and $[0,1]-X$ and use the composition of bijective functions.
|
The identity function is an injection $[0,1]\setminus X\to [0,1]$ . If we can also find an injection $[0,1]\to [0,1]\setminus X$ , we know that there is a bijection by the Schroeder-Bernstein Theorem . We can construct such an injection by sending $0$ to $0$ and "compressing" each interval $\left(\frac{1}{n+1},\frac{1}{n}\right]$ into $\left(\frac{1}{n+1},\frac{2n+1}{2n(n+1)}\right]$ . This is expressed as follows: $$f(x) = \begin{cases}0 & x= 0\\ \frac{1}{n+1}+\frac{1}{2}\left(x - \frac{1}{n+1}\right)& x\in\left(\frac{1}{n+1},\frac{1}{n}\right],n\in\mathbb{N}\end{cases}$$
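The compression map can be implemented in exact rational arithmetic and checked on sample points: it is injective, lands in $[0,1]$, and its nonzero values never reduce to a unit fraction $1/m$ (a sketch; the sample denominator 97 is arbitrary):

```python
from fractions import Fraction

def f(x):
    # 0 -> 0; each interval (1/(n+1), 1/n] is compressed by a factor of 2
    # toward its left endpoint, so the image avoids every point 1/m.
    if x == 0:
        return Fraction(0)
    n = int(Fraction(1) / x)            # x lies in (1/(n+1), 1/n]
    left = Fraction(1, n + 1)
    return left + (x - left) / 2

samples = [Fraction(k, 97) for k in range(98)]          # 0, 1/97, ..., 1
images = [f(x) for x in samples]
assert len(set(images)) == len(images)                  # injective on samples
assert all(0 <= y <= 1 for y in images)
assert all(y.numerator != 1 for y in images if y != 0)  # misses X = {1/n}
```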
|
|elementary-set-theory|
| 1
|
Characterizing a Recursively Defined Rational Set
|
Let $S$ be the smallest set of rational numbers that contains $\tfrac{1}{2}$ , as well as the reciprocal of the sum of any two elements that are already in $S$ . Prove or disprove that $S=\mathbb{Q} \cap [\tfrac{1}{2},1]$ Note that the two elements may be identical. This is a repost of my previous post, which was answered partially as follows by @Tony Matthew (which I agree with): Partial answer Let's try building the set $S$ by starting with $\frac{1}{2}$ and adding new elements by taking two elements from the current set and calculating $f(a,b) \equiv \frac{1}{a + b}$ . One possible value for $S$ is the steady-state limit of this process. Since $\frac{1}{2}$ is the only element initially, the next new element must be $f(\frac{1}{2}, \frac{1}{2}) = 1$ , so that our set becomes $\left\lbrace\frac{1}{2}, 1\right\rbrace$ . Then, $f(1,1) = \frac{1}{2}, f(\frac{1}{2},1) = \frac{2}{3}$ , so the set next grows to $\left\lbrace\frac{1}{2}, \frac{2}{3}, 1\right\rbrace$ . Now the induction hypo
|
Your examples have essentially found all the important information when generalised: If you start with two rationals $a,b$ then $f(a,b)$ is rational so $S \subset \mathbb Q$ If you start with two numbers $a, b$ both in $\left[\tfrac{1}{2},1\right]$ then $f(a,b)\in \left[\tfrac{1}{2},1\right]$ so $S \subset \left[\tfrac{1}{2},1\right]$ You have $\frac12$ and $1=\frac11=f\left(\frac{1}{2},\frac{1}{2}\right)$ in $S$ and these have the smallest possible denominators If you have some positive integers $n, k$ with $k \lt n$ then $\frac{n+k}{2n} = f\left(\frac{n}{n+k},\frac{n}{n+k}\right)$ with smaller integer denominators If you have some positive integers $n, k$ with $k \le n$ then $\frac{n+k}{2n+1} = f\left(\frac{n}{n+k},\frac{n+1}{n+k}\right)$ with smaller integer denominators So using strong induction over the denominators, you can generate all rationals from $\frac12$ through to $1$ and so $S=\mathbb{Q} \cap \left[\tfrac{1}{2},1\right]$ .
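The closure process is easy to iterate in exact arithmetic; after a few rounds the small-denominator rationals of $[\tfrac12,1]$ appear, and nothing ever leaves the interval (a short sketch):

```python
from fractions import Fraction

half = Fraction(1, 2)
S = {half}
for _ in range(4):                          # a few closure rounds
    S |= {1 / (a + b) for a in S for b in S}

assert all(half <= s <= 1 for s in S)       # S stays inside [1/2, 1]
# small-denominator rationals in [1/2, 1] show up quickly:
targets = [Fraction(1, 2), Fraction(3, 5), Fraction(2, 3),
           Fraction(3, 4), Fraction(1)]
assert all(t in S for t in targets)
```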
|
|number-theory|elementary-number-theory|discrete-mathematics|algorithms|recurrence-relations|
| 0
|
Zeros of L function on the 0.5 line
|
Could someone please tell what results are known about the zeros of L function, $L(s,\chi)$ on the 1/2 line, where $\chi$ is a character mod $q$ ? Is there an upper bound for this count when we count zeros with $|\Im s|\leq T?$ In particular, what is known about zeta function? I know there are some explicit version of Von Mangoldt theorem that have been done for Zeta function. I want to know what are some relevant results for any $L(s,\chi).$ I guess there should be some results available for general $L$ functions, but I am unable to find them myself. Any help would be appreciated. Thanks in advance. Added: I meant an explicit formula for upper bound when I asked for one.
|
Let's state a general zero-counting form for a general $L$ -function. Take an $L$ -function with the functional equation $$ \Lambda(s) := L(s) Q^s \prod_{\nu = 1}^N \Gamma( \alpha_\nu + \beta_\nu ) = \omega \overline{\Lambda(1 - \overline{s})}.$$ Define the degree of $L$ to be $$ d_L := 2 \sum_{\nu} \alpha_\nu. $$ We also define the convenience variable $$ \alpha := \prod_\nu \alpha_\nu^{2 \alpha_\nu}. $$ In practice, $d_L$ is an integer for natural $L$ -functions. It is $1$ for $\zeta(s)$ and Dirichlet $L$ -functions $L(s, \chi)$ . Then the number of nontrivial zeros of $L(s)$ of the form $\rho = \sigma + i t$ with $\lvert t \rvert \lt T$ is given by $$ N(T) = \frac{d_L}{2 \pi} T \log \frac{4T}{e} + \frac{T}{2 \pi} \log(\alpha Q^2) + O(\log T). $$ We expect all of these to be on the $1/2$ line, but we don't know that. In many (maybe all?) cases we know at least that a positive proportion of the zeros are on the $1/2$ line. For degree $d_L \leq 2$ , we also know that $100$ percent of zeros a
|
|complex-analysis|number-theory|analytic-number-theory|riemann-zeta|l-functions|
| 0
|
The probability that the first ball drawn was white, if it is known that after two draws urn II contains exactly two white balls.
|
In urn I there are $3$ white balls and $2$ black balls. Urn II contains $2$ white balls and $3$ black balls. We draw a ball from urn I and either throw it back or put it into urn II - each of these two possibilities has probability $\frac{1}{2}$ . We then do the same for urn II. What is the probability that the first ball drawn was white, if it is known that after two draws urn II contains exactly two white balls? I made a tree as shown here: From that I think it's correct to say that the possibility is equal to: $$\begin{align*} &\frac{\frac{2}{5} \cdot \frac{1}{2} \cdot \frac{2}{5} \cdot \frac{1}{2} + \frac{2}{5} \cdot \frac{1}{2} \cdot \frac{3}{5} \cdot \frac{1}{2} + \frac{2}{5} \cdot \frac{1}{2} \cdot \frac{3}{5} \cdot \frac{1}{2} + \frac{2}{5} \cdot \frac{1}{2} \cdot \frac{2}{6} \cdot \frac{1}{2} + \frac{2}{5} \cdot \frac{1}{2} \cdot \frac{4}{6} \cdot \frac{1}{2} + \frac{2}{5} \cdot \frac{1}{2} \cdot \frac{4}{6} \cdot \frac{1}{2}}{\frac{3}{5} \cdot \frac{1}{2} \cdot \frac{2}{5} \c
|
Some degree of simplification is possible, but I get a different answer! Using the notation $W= \;white,\; B = \;black,\; W',B'=\;$ changed numbers after a transfer, $S,T\;$ indicating the ball is static or transferred, and simply $B,W$ etc. indicating the ball has both static and transfer options open. We have the following cases with their associated probabilities: $\textbf{1st draw}\;$ $\;\textbf{2nd draw}$ $WS\;(\frac3{10}) \mid WS+ B\; (\frac2{10}+\frac35 = \frac4{5})$ $WT\;(\frac3{10}) \mid W'T\; (\frac3{12} = \frac14)$ $BS\;(\frac2{10}) \mid WS,\; B\; (\frac2{10}+\frac35 =\frac4{5})$ $BT\; (\frac2{10}) \mid W'S,\; B'\; (\frac2{12}+\frac4{6} = \frac5{6})$ Putting it together, $$Pr = \frac{\frac3{10}(\frac45+\frac14)}{\frac3{10}(\frac45+\frac14)+\frac2{10}(\frac45+\frac56)}= \frac{27}{55}\;\approx 0.49$$
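This tree can be enumerated exactly in a short script. The model assumption (matching the case analysis above) is that a ball drawn from urn II, if "transferred", leaves urn II for urn I:

```python
from fractions import Fraction

half = Fraction(1, 2)
num = den = Fraction(0)

# Urn I: 3W 2B; urn II: 2W 3B. Each drawn ball is returned (prob 1/2)
# or transferred to the other urn (prob 1/2).
for c1, p1 in (("W", Fraction(3, 5)), ("B", Fraction(2, 5))):
    for t1 in (False, True):                     # transfer first ball?
        w2, b2 = 2, 3                            # urn II after step 1
        if t1:
            if c1 == "W": w2 += 1
            else: b2 += 1
        tot = w2 + b2
        for c2, p2 in (("W", Fraction(w2, tot)), ("B", Fraction(b2, tot))):
            for t2 in (False, True):             # transfer second ball?
                final_w = w2 - (1 if (t2 and c2 == "W") else 0)
                prob = p1 * half * p2 * half
                if final_w == 2:                 # condition: exactly 2 white
                    den += prob
                    if c1 == "W":
                        num += prob

assert num / den == Fraction(27, 55)
```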
|
|probability|conditional-probability|
| 0
|
circle tangent to three circles
|
Today I want to look at CCC: one circle tangent to three circles whose radii and center positions are known. How does one solve this, with old-fashioned tools like ruler and compass, or with modern methods like intersection theory? These Apollonius problems are all very classical, but good treatments are hard to find. Here is a different such problem: construct circle tangent to two given circles and a straight line
|
Closed form solution (S. Bancroft) This problem is equivalent to solving the GPS pseudorange equations. There exist closed form methods to solve this problem. I present here the method developed by Bancroft [1] in the 80s. Denote as $\bf{a}_i$ the centers of the given circles and $\rho_i$ their radii. Denote as $\bf{x}_u$ the center of the unknown circle and $\beta$ its radius. The tangency constraint is: \begin{align} & \|\bf{a}_j - \bf{x}_u\|^2 = (\rho_j - \beta)^2, \\ \Leftrightarrow \qquad & \|\bf{a}_j - \bf{x}_u\|^2 - (\rho_j - \beta)^2 = 0. \tag{1} \end{align} Introduce the matrix $\bf{L} = \text{diag}(1,1,-1)$ . It induces a pseudo-scalar product and a pseudo-norm on $\mathbb{R}^{3}$ defined as: \begin{align} \langle \mathbf{p}, \mathbf{p}' \rangle_{\mathbf{L}} &= \mathbf{x}^\intercal \mathbf{x}' - \beta \beta', & N_{\mathbf{L}}(\mathbf{p}) &= \langle \mathbf{p}, \mathbf{p} \rangle_{\mathbf{L}}. \end{align} This pseudo-scalar product is bilinear and symmetric, but it is not definite (nor positive). With these notations the tangency equation $(1)$ is simply $N_{\bf{L}}(\bf{p}_u -
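As a sanity check of the tangency system $(1)$, here is a sketch that solves it by plain Newton iteration (not Bancroft's closed form) for three mutually tangent unit circles at the vertices of an equilateral triangle; the circumscribing tangent circle has radius $1 + 2/\sqrt{3}$ by Descartes' circle theorem. The starting guess is an assumption chosen near that root:

```python
import numpy as np

centers = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, np.sqrt(3.0)]])
rhos = np.array([1.0, 1.0, 1.0])

def residual(p):
    # f_j = ||a_j - x_u||^2 - (rho_j - beta)^2, equation (1) of the answer
    x, y, beta = p
    return np.array([(x - a)**2 + (y - b)**2 - (r - beta)**2
                     for (a, b), r in zip(centers, rhos)])

def jacobian(p):
    x, y, beta = p
    return np.array([[2*(x - a), 2*(y - b), 2*(r - beta)]
                     for (a, b), r in zip(centers, rhos)])

p = np.array([1.0, 0.5, 2.0])          # guess near the circumscribing circle
for _ in range(50):
    p = p - np.linalg.solve(jacobian(p), residual(p))

assert np.max(np.abs(residual(p))) < 1e-8
assert abs(p[2] - (1 + 2/np.sqrt(3.0))) < 1e-6   # Descartes radius
```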
|
|geometry|conic-sections|intersection-theory|
| 0
|
Gaussian covariance matrix
|
Assume a random vector $$X=(X_1,\ldots,X_p) \sim N(0,\Sigma)$$ with $\Sigma$ a positive definite matrix. Is there literature regarding covariance formulas of the form $E[X_j^kX_{j'}^l]$ , with $k,l \in \{1,\ldots, q\}$ for some $q\in \mathbb{N}$ ? Additional note: more concretely, I am concerned with whether a covariance matrix of the form $\tilde{\Sigma} = \left( E[X_j^kX_{j'}^l] \right)_{l,k, j,j'}$ is still positive definite.
|
For any couple $(j,j')$ , it suffices to calculate the moment generating function of $(X_j,X_{j'})$ : $$\phi_{(X_j,X_{j'})}(t_1,t_2)=\mathbb{E}\left( e^{t_1 X_j +t_2X_{j'}} \right) \tag{1}$$ The formula in $(1)$ can be easily deduced from this . We notice that $$ X_j^k X_{j'}^l \cdot e^{t_1 X_j +t_2X_{j'}} = \frac{\partial ^{(k+l)}}{\partial^kt_1\cdot \partial^l t_2} \left( e^{t_1 X_j +t_2X_{j'}}\right)$$ Then, the expectation $\mathbb{E}\left( X_j^k X_{j'}^l \right) $ can be calculated from the $(k+l)$ -th order mixed derivative of the moment generating function at the point $(t_1,t_2)=(0,0)$ : $$\color{red}{\mathbb{E}\left( X_j^k X_{j'}^l \right) = \frac{\partial ^{(k+l)}\phi_{(X_j,X_{j'})}}{\partial^kt_1\cdot \partial^l t_2}(0,0)} $$
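The recipe can be checked symbolically for a concrete case. For a centered bivariate Gaussian with covariance entries $s_{11}, s_{12}, s_{22}$, the moment generating function is $\exp\big(\tfrac12(s_{11}t_1^2 + 2s_{12}t_1t_2 + s_{22}t_2^2)\big)$, and differentiating it recovers the known (Isserlis) value $E[X_1^2X_2^2] = s_{11}s_{22} + 2s_{12}^2$ (a SymPy sketch):

```python
import sympy as sp

t1, t2, s11, s12, s22 = sp.symbols('t1 t2 s11 s12 s22')
# MGF of a centered bivariate Gaussian with covariance [[s11, s12], [s12, s22]]
M = sp.exp(sp.Rational(1, 2) * (s11*t1**2 + 2*s12*t1*t2 + s22*t2**2))

# E[X1^2 X2^2] = fourth mixed derivative of the MGF at (0, 0)
m22 = sp.diff(M, t1, 2, t2, 2).subs({t1: 0, t2: 0})
diff_from_isserlis = sp.simplify(m22 - (s11*s22 + 2*s12**2))
assert diff_from_isserlis == 0
```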
|
|reference-request|
| 1
|
Can $AB$ be invertible if $A\in \mathbb R^{3\times 2}$ and $B\in \mathbb R^{2\times 3}$
|
I saw this question in practice problems and have seen similar questions asked about this on here. Here is a proof that I'm not sure is acceptable or not: It is known that for square invertible matrices $(XY)^{-1} = Y^{-1}X^{-1}$ . So we would have $(AB)^{-1} = B^{-1}A^{-1}$ , but as $A$ and $B$ are non-square matrices, the right-hand side doesn't exist and therefore the left-hand side doesn't either.
|
By thinking in terms of rank, we have first that ${\rm rank} A B \le {\rm rank} B$ , true for any matrices $A, B$ . Since the maximum possible value of ${\rm rank} B$ is $2$ , then we have ${\rm rank} A B \le 2$ . So, $A B$ as a matrix representing a linear map from $\mathbb{R}^3$ to $\mathbb{R}^3$ cannot be surjective, and hence cannot be bijective.
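A quick numerical illustration of the rank bound (a sketch; random Gaussian matrices are a generic choice):

```python
import numpy as np

rng = np.random.default_rng(3)
max_rank, max_det = 0, 0.0
for _ in range(100):
    A = rng.standard_normal((3, 2))
    B = rng.standard_normal((2, 3))
    P = A @ B                                  # 3x3, but rank at most 2
    max_rank = max(max_rank, np.linalg.matrix_rank(P))
    max_det = max(max_det, abs(np.linalg.det(P)))

assert max_rank <= 2      # rank(AB) <= rank(B) <= 2
assert max_det < 1e-9     # hence AB is always singular
```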
|
|linear-algebra|matrices|solution-verification|
| 0
|
Six Children and 3 Flavors of Ice Cream
|
$\textbf{Q}$ : There are $6$ children each offered a single scoop of any of $3$ flavors of ice cream. In how many ways can each child choose a flavor for their scoop so that some flavor of ice cream is selected by exactly $3$ children? I struggled with this for a bit. I'm counting this as $ 3 \cdot C(6,3) \cdot 2 \cdot 2 \cdot 1$ where $3$ is for the number of colors, $C(6,3)$ is choosing $3$ kids from $6$ to have one flavor and $2 \cdot 2 \cdot 1$ comes from the choices that the last $3$ children have. I'm doing something wrong though as this is not correct. Could someone help point me in the right direction? Thank you.
|
Here is a solution by inclusion/exclusion. I am assuming we want to count the number of distributions of ice cream to children in which at least one flavor is distributed to exactly three children. The number of ways to distribute one flavor to exactly three children is $$\binom{3}{1} \binom{6}{3} 2^3$$ But we have double-counted the cases where two flavors are each distributed to exactly three children. There are $$\binom{3}{2} \binom{6}{3}$$ such cases. So correcting for double-counting, the answer is $$\binom{3}{1} \binom{6}{3} 2^3 - \binom{3}{2} \binom{6}{3} = \boxed{420}$$
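The inclusion/exclusion count can be confirmed by brute force over all $3^6 = 729$ assignments (a short sketch):

```python
from itertools import product

count = 0
for choice in product(range(3), repeat=6):     # flavor of each child's scoop
    counts = [choice.count(flavor) for flavor in range(3)]
    if 3 in counts:                            # some flavor chosen exactly 3x
        count += 1
assert count == 420
```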
|
|combinatorics|
| 0
|
Examples of genus 1 curves that are not elliptic
|
Every elliptic curve over a field $K$ can be mapped to a smooth, projective genus 1 curve (also defined over $K$ ) with a $K$ -rational point, and vice versa. As I understand it, curves without said $K$ -rational points can never be mapped to elliptic curves (since clearly then the elliptic curve group law cannot be established). I'm trying to think of an example of an affine curve whose genus is $1$ , but which is not also an elliptic curve. I know that the Riemann-Roch theorem gives: $$g = \frac{(d-1)(d-2)}{2} - s$$ Some searching online yielded the curve $y^{2}=x^{5}-x$ , however I'm relatively certain that this Riemann-Roch gives $g = 6$ in $\mathbb{P}^2$ , not $1$ . What are some examples of affine genus $1$ curves that are not also elliptic curves?
|
COMMENT.- Poincaré proved in "Sur les propriétés arithmétiques des courbes algébriques", Jour. Math. pures et appliquées 7, pages 161-233 (1901), that every curve with rational coefficients of degree greater than $3$ and genus $1$ is (birationally) equivalent to a cubic of genus $1$ . In the same way, Hilbert and Hurwitz had proven that any curve defined by a polynomial of degree $n$ , with rational coefficients and genus $0$ , is equivalent to another curve of degree $n-2$ and genus $0$ . Then if the degree is odd the curve is equivalent to a line and has infinitely many rational points, and if the curve has even degree it is equivalent to a conic. (For this, see the book "Diophantine Equations" by Mordell (1969).) Hilbert and Hurwitz closed, with the mentioned theorem, the study of genus $0$ , and the great importance of elliptic curves, from this perspective, is that they close the study of curves of genus $1$ . Unfortunately the topic of elliptic curves is far fro
|
|algebraic-topology|elliptic-curves|algebraic-curves|
| 0
|
Examples of genus 1 curves that are not elliptic
|
Every elliptic curve over a field $K$ can be mapped to a smooth, projective genus 1 curve (also defined over $K$ ) with a $K$ -rational point, and vice versa. As I understand it, curves without said $K$ -rational points can never be mapped to elliptic curves (since clearly then the elliptic curve group law cannot be established). I'm trying to think of an example of an affine curve whose genus is $1$ , but which is not also an elliptic curve. I know that the Riemann-Roch theorem gives: $$g = \frac{(d-1)(d-2)}{2} - s$$ Some searching online yielded the curve $y^{2}=x^{5}-x$ , however I'm relatively certain that this Riemann-Roch gives $g = 6$ in $\mathbb{P}^2$ , not $1$ . What are some examples of affine genus $1$ curves that are not also elliptic curves?
|
The following are classical examples of genus $1$ curves that are not elliptic curves over $\Bbb{Q}$ : $1.$ $C : 3X^3 + 4Y^3 + 5Z^3 = 0$ in $\mathbb{P}^2_{\mathbb{Q}}$ (Selmer) $2.$ Compactification of an affine curve $2y^2 = x^4 - 17$ in $\mathbb{P}^3_{\mathbb{Q}}$ (Lind) You can prove that these curves have genus $1$ , but they do not have $\mathbb{Q}$ -rational points. I will explain each of the curves in more detail. $1.$ Indeed, $3X^3+4Y^3+5Z^3=0$ is a torsor of an elliptic curve $E: X^3+Y^3+60Z^3=0$ (This has a rational point $[1:-1:0]$ ).You can prove $C(\Bbb{Q})=\emptyset$ by showing that the rank of $E/\Bbb{Q}$ is $0$ .This kind of discussion is detailed in Cassels''Lectures on elliptic curves'. $2.$ Note that projective closure of this affine curve in $\mathbb{P}^2_{\mathbb{Q}}$ has singular point at infinity $[0:1:0]$ , thus we take projective closure in $\mathbb{P}^3_{\mathbb{Q}}$ not in $\mathbb{P}^2_{\mathbb{Q}}$ .The point at infinity contains $\sqrt{2}$ in its coordinat
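A small brute-force search is consistent with Selmer's theorem that his curve has no rational points: no nonzero integer solution of $3X^3+4Y^3+5Z^3=0$ exists in a small box (this is illustrative only; absence in a box does not prove the theorem, and the bound $N=20$ is an arbitrary choice):

```python
N = 20   # search bound (illustrative only)
hits = [(x, y, z)
        for x in range(-N, N + 1)
        for y in range(-N, N + 1)
        for z in range(-N, N + 1)
        if (x, y, z) != (0, 0, 0) and 3*x**3 + 4*y**3 + 5*z**3 == 0]
assert hits == []
```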
|
|algebraic-topology|elliptic-curves|algebraic-curves|
| 0
|