title
string | question_body
string | answer_body
string | tags
string | accepted
int64 |
|---|---|---|---|---|
Solving complex linear equations with conjugate operations
|
Consider the following equations: $$\left\{ \matrix{ {z_1} + {z_2} = 9 + 5i \hfill \cr \overline {{z_1}} + 2{z_2} = {z_1} + 10 + 2i \hfill \cr} \right.$$ Assuming ${z_1} = {x_1} + i{y_1}$ and ${z_2} = {x_2} + i{y_2}$ , substitute them into the equations: $$\left\{ \matrix{ {x_1} + i{y_1} + {x_2} + i{y_2} = 9 + 5i \hfill \cr {x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right) = {x_1} + i{y_1} + 10 + 2i \hfill \cr} \right.$$ Equating the real parts and imaginary parts correspondingly: $$\left\{ \matrix{ {\mathop{\rm Re}\nolimits} ({x_1} + i{y_1} + {x_2} + i{y_2}) = {\mathop{\rm Re}\nolimits} (9 + 5i) \hfill \cr {\mathop{\rm Re}\nolimits} ({x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right)) = {\mathop{\rm Re}\nolimits} ({x_1} + i{y_1} + 10 + 2i) \hfill \cr {\mathop{\rm Im}\nolimits} ({x_1} + i{y_1} + {x_2} + i{y_2}) = {\mathop{\rm Im}\nolimits} (9 + 5i) \hfill \cr {\mathop{\rm Im}\nolimits} ({x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right)) = {\mathop{\rm Im}\nolimits} ({x_1} + i{y_1} + 10 + 2i) \hfill \cr} \right.$$
|
Multiply the first equation by $-2$ and add both equations. Immediately we get $z_1=4+2i$ . Now solve for $z_2$ .
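As a quick numerical sanity check (my addition, not part of the original answer), the following Python sketch verifies that $z_1 = 4+2i$ and the resulting $z_2 = 9+5i - z_1$ satisfy both original equations:

```python
# Verify the candidate solution of the complex linear system:
#   z1 + z2 = 9 + 5i
#   conj(z1) + 2*z2 = z1 + 10 + 2i
z1 = 4 + 2j
z2 = (9 + 5j) - z1          # from the first equation

eq1_ok = (z1 + z2 == 9 + 5j)
eq2_lhs = z1.conjugate() + 2 * z2
eq2_rhs = z1 + 10 + 2j
```

Both equalities hold exactly in complex arithmetic, with $z_2 = 5 + 3i$.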
|
|linear-algebra|complex-analysis|systems-of-equations|
| 0
|
Rectangular to Spherical/Spherical to Rectangular Case Not Consistent
|
I found the following equations for converting between Rectangular and Spherical coordinates on the site https://byjus.com/maths/spherical-coordinates/#:~:text=In%20three%20dimensional%20space%2C%20the,used%20to%20determine%20these%20coordinates . Rectangular to Spherical: r = sqrt(pow(x,2) + pow(y,2) + pow(z,2)), theta = arccos (x/r), phi = arccos (x/(r sin theta)). Spherical to Rectangular: x = r sin (theta) cos (phi), y = r sin (theta) sin (phi), z = r cos (theta). If I use the first set to convert (0, 10, 10) to spherical I get r = 10 sqrt(2), theta = 90 deg, phi = 90 deg. But if I use the second set to convert (10 sqrt(2), 90, 90) to rectangular, I get x = 0, y = 10 sqrt(2), z = 0. You would think that one should be the inverse of the other and return the original set.
|
$$\phi=\cos^{-1}\left(\frac{x}{r \sin \theta}\right)$$ is true, but it is not a transformation formula, because trigonometric functions of the unknown angles occur on both sides. From $$\frac{y}{x}=\frac {r \sin\theta \sin \phi}{r \sin\theta \cos \phi}=\tan\phi$$ one gets $$\phi=\tan^{-1}\left(\frac{y}{x}\right) + \begin{cases}0 & x>0\\ \pm\pi & x<0\end{cases}$$ where the conventions in the negative-$x$ half plane vary. Numerical languages provide the two-argument $\tan^{-1}(y,x)$ (usually called atan2) as a continuous function onto $(-\pi,\pi]$ .
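A round-trip sketch of the answer's suggestion, using Python's `math.atan2` for the azimuth. Note the assumption here: I use the standard physics convention $\theta=\arccos(z/r)$ (polar angle from the $z$-axis), not the site's $\arccos(x/r)$:

```python
import math

# Round-trip the question's point (0, 10, 10) through spherical coordinates.
x, y, z = 0.0, 10.0, 10.0
r = math.sqrt(x*x + y*y + z*z)
theta = math.acos(z / r)        # polar angle from the z-axis (assumed convention)
phi = math.atan2(y, x)          # two-argument arctangent, range (-pi, pi]

x2 = r * math.sin(theta) * math.cos(phi)
y2 = r * math.sin(theta) * math.sin(phi)
z2 = r * math.cos(theta)
```

With this convention the conversion really is its own inverse: the point comes back as $(0, 10, 10)$ up to rounding.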
|
|coordinate-systems|
| 1
|
If $f(g(x))$ is known, then can we find $f(x)$?
|
I know that $f(x)$ is a parabola, and $f(-2x) = x - x^2$ . Can I assume that $f(x) = -\frac{1}{2}x - \frac{1}{4}x^2$ (real numbers only)? I got this equation because if I were to plug in $x = -2x$ I would get the original function.
|
In general, if you have $$ f(g(x))=:h(x) $$ and $g$ is an invertible function, then you have $$ f(x)=h(g^{-1}(x)). $$ If $g$ is not invertible then $f$ cannot be recovered this way. In your example $g(x)=-2x$ is linear and not constant, hence invertible, and $$ g^{-1}(x)=-\frac{1}{2}x, $$ as you have found.
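A short Python sketch (my addition) of the recipe $f = h \circ g^{-1}$ for this concrete example, with $h(x)=x-x^2$ and $g(x)=-2x$:

```python
# Recover f from h(x) = f(g(x)) with invertible g(x) = -2x.
h = lambda x: x - x**2
g_inv = lambda x: -x / 2
f = lambda x: h(g_inv(x))       # f(x) = -x/2 - x**2/4

# f(-2x) should reproduce h(x) for any x
checks = all(abs(f(-2 * x) - h(x)) < 1e-12
             for x in [-3.0, -1.0, 0.0, 0.5, 2.0])
```

This matches the asker's $f(x) = -\tfrac12 x - \tfrac14 x^2$, e.g. $f(1) = -0.75$.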
|
|algebra-precalculus|
| 1
|
Convexity of affine function nested in log-sum-exp
|
Suppose there is a function $f (a,b) = \log \Big(e^{a - b} + e^{-a + b} \Big)$ . We know that log-sum-exp is convex ( related link ), and I want to show whether $f(a,b)$ is also convex. My answer (probably not correct) is that $f(a,b)$ is a convex function because $a-b$ and $-a+b$ are both affine, and they are composed with the convex log-sum-exp function, so $f(a,b)$ is also convex. Is my answer correct? If not, how can I show whether $f(a,b)$ is convex or not?
|
The function is indeed convex. More generally, the function $$ f(x) := \log\Big( \sum_{i=1}^n e^{l_i(x)}\Big), $$ where each $l_i(x)$ is an affine function, is a convex function, since each summand $e^{l_i(x)}$ is log-convex (indeed log-affine) and a sum of log-convex functions is again log-convex.
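A numerical spot-check (my addition): midpoint convexity of $f(a,b)=\log(e^{a-b}+e^{b-a})$ on random pairs of points. This is only evidence, not a proof:

```python
import math, random

# Spot-check midpoint convexity: f((p+q)/2) <= (f(p)+f(q))/2.
def f(a, b):
    return math.log(math.exp(a - b) + math.exp(b - a))

random.seed(0)
ok = True
for _ in range(1000):
    p = (random.uniform(-3, 3), random.uniform(-3, 3))
    q = (random.uniform(-3, 3), random.uniform(-3, 3))
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    if f(*mid) > (f(*p) + f(*q)) / 2 + 1e-12:
        ok = False
```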
|
|real-analysis|convex-analysis|convex-optimization|
| 0
|
If $f(g(x))$ is known, then can we find $f(x)$?
|
I know that $f(x)$ is a parabola, and $f(-2x) = x - x^2$ . Can I assume that $f(x) = -\frac{1}{2}x - \frac{1}{4}x^2$ (real numbers only)? I got this equation because if I were to plug in $x = -2x$ I would get the original function.
|
We have $f(x)=ax^2+bx+c$ since it is a parabola. So $f(-2x) = 4ax^2-2bx+c$ . Since $f(-2x)=-x^2+x$ we see that $a=-\frac14 ,b=-\frac12$ and $c=0$ . This gives us $f(x)=-\frac14 x^2-\frac 12 x$ which agrees with the polynomial you've found.
|
|algebra-precalculus|
| 0
|
Meaning of Jacobian when: 1. Evaluated at vectors, 2. Applied (multiplied) with vectors, and then evaluated
|
I apologise in advance for my vague and borderline senseless questions. I'm just trying to get a good hold on these things and I was having a tough time putting my thoughts into words. I have two questions. $\nabla f (\vec{v})$ shows the direction of steepest ascent of $f$ at $\vec{v}$ . I feel like the Jacobian is a generalisation of the gradient for vector-valued functions. So what meaning does the Jacobian matrix have when evaluated at a point in the input space? $[\nabla f \cdot \vec{u}]({\vec{v}}) = \nabla_{\vec{u}} f(\vec{v})$ gives the directional derivative of $f$ along $\vec{u}$ , which captures how $f$ changes at $\vec{v}$ as one moves along $\vec{u}$ . Can I say that $(\mathrm{Jac}\, f)\,\vec{u}$ evaluated at $\vec{v}$ captures how $f$ changes at $\vec{v}$ as one moves along $\vec{u}$ in the input space? Further comments and elaborations on this are greatly appreciated. SETUP, stuff I figured out: Consider $$f: \mathbb{R}^m \rightarrow \mathbb{R}^n,$$ $$ f: \vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m\end{pmatrix} \mapsto f(\vec{x}) = \begin{pmatrix}f_1(\vec{x}) \\ \vdots \\ f_n(\vec{x})\end{pmatrix}$$
|
Everything you said makes sense. The only thing I would add is the following. In single-variable calculus, the derivative represents the "slope" of the linear approximation. That is, near the point $x=p$ the graph looks like $f(x) \approx f(p) + f'(p)(x-p)$ . The idea in higher dimensions is the same. The Jacobian encodes the coefficients ("slopes") of the linear approximation of $f \colon \Bbb{R}^m \to \Bbb{R}^n$ . In other words, near $\vec{v} \in \Bbb{R}^m$ , the function $f$ is approximated by $$f(\vec{x}) \approx f(\vec{v}) + \mathrm{Jac}(f)(\vec{v}) \cdot (\vec{x}-\vec{v})$$ If you move the $f(\vec{v})$ to the other side, and substitute $\vec{u}=\vec{x}-\vec{v}$ , you have $f(\vec{v}+\vec{u})-f(\vec{v}) \approx \mathrm{Jac}(f)(\vec{v}) \cdot \vec{u}$ . This was what you asked/claimed in question #2: that $\mathrm{Jac}(f)(\vec{v})\,\vec{u}$ captures how the value of $f$ changes under small changes in the input in the direction $\vec{u}$ .
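A small numerical illustration (my addition) of the linear-approximation statement, for an example map $f(x,y)=(xy,\ \sin x)$ whose Jacobian is written out by hand:

```python
import math

# Compare f(v+u) - f(v) against Jac(f)(v)·u for a small step u.
def f(x, y):
    return (x * y, math.sin(x))

def jac(x, y):
    # rows are the gradients of the component functions of f
    return ((y, x),
            (math.cos(x), 0.0))

vx, vy = 1.0, 2.0
ux, uy = 1e-5, -2e-5

f0 = f(vx, vy)
f1 = f(vx + ux, vy + uy)
J = jac(vx, vy)
pred = (J[0][0]*ux + J[0][1]*uy, J[1][0]*ux + J[1][1]*uy)

# the mismatch should be of second order in |u|
err = max(abs((f1[0] - f0[0]) - pred[0]),
          abs((f1[1] - f0[1]) - pred[1]))
```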
|
|multivariable-calculus|jacobian|
| 0
|
Why $S_{\gamma'(0)}(J(0))=0$ for the geodesic sub-manifold?
|
Picture below is from do Carmo's Riemannian Geometry ; I don't know how to show the red line. Here $S$ is the shape operator; assuming a local extension of $\gamma'(0)$ is $N$ , which is normal to $M$ , then $$ S_{\gamma'(0)} J(0) = -(\nabla_{J(0)}N)^T $$ where $\nabla$ is the connection of $M$ , and $T$ means the tangential component. What I get from Deane: Since his notation is a little confusing, I'll explain the notation first. Let $\gamma(t)$ be the geodesic in the picture below. $f(s,t)$ is the extension of $\gamma(t)$ such that $$ f(0,t) =\gamma(t),~~~ \partial_t f(0,0) =\gamma'(0),~~~ \partial_t f(s,0) \bot\Sigma_\epsilon $$ Then $J(t)=\partial_s f(0,t)$ is a Jacobi field along $\gamma(t)$ , and to be less precise, there is $$ S_{\gamma'(0)} J(0) = -(\nabla_{J(0)}\partial_t f(s,0))^T $$ For showing $S_{\gamma'(0)} J(0)=0$ , $\forall v\in T_{\gamma(0)}\Sigma_\epsilon$ , there is $$ \langle S_{\gamma'(0)} J(0), v\rangle = \langle -(\nabla_{J(0)}\partial_t f(s,0))^T, v \rangle = \langle -\nabla_{J(0
|
Here's the story. I'm pretty sure it's all in the book somewhere. First, let $\Sigma \subset M$ be any hypersurface. For each $x \in \Sigma$ , let $\nu(x)$ be a unit normal to $\Sigma$ such that $\nu$ always points to the same side of $\Sigma$ . This, of course, is the Gauss map. Recall that given any $X \in T_x\Sigma$ , the shape operator is defined to be $$ S(X) = \nabla_X\nu. $$ You can then define the exponential map from $\Sigma$ (rather than from a point) to be the map $$ E: \Sigma\times(-\delta,\delta) \rightarrow M, $$ where for each $x \in \Sigma$ , the curve $$ \gamma_x(t) = E(x,t) $$ is a unit speed geodesic. Given $x \in \Sigma$ and $X \in T_x\Sigma$ , let $c: (-\epsilon,\epsilon) \rightarrow \Sigma$ be a curve such that $c(0)=x$ and $c'(0) = X$ . Consider the $1$ -parameter family of geodesics \begin{align*} C: (-\epsilon,\epsilon)\times(-\delta,\delta) &\rightarrow M\\ (s,t) &\mapsto E(c(s),t). \end{align*} Then $$ J(t) = \left.\partial_sC(s,t)\right|_{s=0} $$ is a Jacobi
|
|riemannian-geometry|
| 1
|
$\partial \bar\partial$-lemma for difference of probability measures
|
Let $X$ be a complex Kahler manifold of dimension $k$ . For a smooth $(k,k)$ form $\omega$ , it is for sure $\partial$ and $\bar\partial$ -closed since it is of top degree. Assume that $\omega$ is a difference of two probability measures. Could we prove that $\omega$ is further $\partial$ or $\bar\partial$ -exact? If not, when could we apply the $\partial \bar\partial$ -lemma to a smooth form $\omega$ of top degree?
|
I guess that what you mean by "difference of two probability measures" is that $\omega = \alpha - \beta$ with $\alpha$ and $\beta$ real non-negative $(n,n)$ forms such that $\int_X \alpha = \int_X \beta = 1$ . In this case, $\int_X \omega = 0$ . The $\partial\overline{\partial}$ -lemma tells you that if $\omega = d\gamma$ for $\gamma$ a $2n - 1$ form, then $\omega = i\partial\overline{\partial}\delta$ with $\delta$ some real $(n - 1,n - 1)$ form. In other words, $d$ -exact implies $\partial\overline{\partial}$ -exact. And by Hodge duality, the natural pairing $H^{n,n}(X,\mathbb{R}) \times H^0(X,\mathbb{R}) \rightarrow \mathbb{R}$ is non degenerate hence $\dim(H^{n,n}(X,\mathbb{R})) = 1$ (when the manifold is connected, else it doesn't work). Moreover, by Stokes' theorem, if $\alpha$ is $d$ -exact, then $\int_X \alpha = 0$ . By equality of the dimensions, the converse holds. In other words, $\omega$ is $d$ -exact, thus $\partial\overline{\partial}$ -exact. In particular, it is $\
|
|complex-analysis|complex-geometry|
| 1
|
Is homology dual to cohomology or is homotopy dual to cohomology?
|
I thought it was homology that is dual to cohomology . But reading Bott & Tu I am now confused. In the intro, page 2 and onwards, it briefly describes a dual point of view and the relation between homotopy and cohomology. It sounds like homotopy and cohomology are dual to each other, although it is not precisely defined. For example, at the bottom of page 2, it says "the two concepts are dual to each other". The question is, is this saying homotopy and cohomology are dual to each other in general, or is it just that the concepts are "somewhat dual" to each other? Are they dual to each other and if so in which sense and to what extent?
|
Homotopy and cohomology are dual in a vague sense called " Eckmann-Hilton duality ". As the linked page says, this is more a heuristic rather than a precise notion of duality. Certainly in the formal mathematical sense of vector space duality, then homology/cohomology are trying to be dual to each other (and this is achieved when working with field coefficients and finite CW complexes, for example). Homotopy/cohomology are dual in the sense that homotopy groups are determined by homotopy classes of maps out of spheres, and spheres have just a single nonvanishing reduced cohomology group; "dually," cohomology groups are determined by homotopy classes of maps into certain spaces called Eilenberg-Mac Lane spaces, which have just a single nonvanishing homotopy group. If we write $S^n$ for a sphere, as usual, and $K_n$ for an Eilenberg-Mac Lane space, we have this informal notion of duality with $$ [S^n, -] \quad \longleftrightarrow \quad [-, K_n]. $$
|
|algebraic-topology|homology-cohomology|homotopy-theory|
| 1
|
Non-Linear Non-homogeneous ODE
|
I am trying to solve the following differential equation $$x''+\sin(x)=f(t).$$ 1. I know the solution of the homogeneous system, $x_h(t)$ ; is it possible to use it to find the general solution of the non-homogeneous system? I am looking for a solution of the following form $$x(t)=\int_{-\infty}^t \xi(t-t') f(t') dt' $$ but I don't know how to find $\xi (t-t')$ . 2. Is it possible to find $x(t)$ in terms of a Fourier series? 3. What other approximation methods can be used to solve this system? Here $f(t)$ is a fluctuating force sampled from a Gaussian distribution with zero mean and $\sigma=1$ .
|
The homogeneous equation is the equation of the mathematical pendulum, with general solution the Jacobi amplitude $$x(t)= 2 \text{am}\left(\frac{\omega}{2} (t-t_0) \,\Big|\, i \frac{2}{\omega}\right) , \quad \text{math notation}$$ $$x(t)= 2 \text{am}\left(\frac{\omega}{2} (t-t_0), - \frac{4}{\omega^2}\right) , \quad \text{technical notation}$$ Verification by $$\sin(2 \text {am}) = 2\, \text{sn} \cdot \text{cn}, \quad \text{am}'=\text{dn}. $$ The amplitude is either oscillating for $\omega<2$ or rotating for $\omega>2.$ Between both lies the solitary single loop for $\omega=2$ , $\text{am}(t,1)$ : $$ x(t)= 4 \ \tan^{-1}(e^{t-t_0})$$ The homogeneous equation is solved by the energy method: multiplication by $x'=\frac{dx}{dt}$ and solving for $dt$ yields, by simple integration, the inverse function as an elliptic integral $F$ : $$t-t_0 = \ \text{F}(x,k).$$ By nonlinearity, and from our knowledge of the highly sophisticated methods needed to solve the sine-Gordon PDE, to find the perturbation solutions of the mathematical pendulum simply by a linear additiv
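A minimal numerical sketch (my addition) of the energy method underlying the answer: integrating the homogeneous pendulum $x''+\sin x=0$ with RK4 and checking that the energy $E = \tfrac12 x'^2 - \cos x$ is conserved. Step size and initial data are arbitrary choices:

```python
import math

# One RK4 step for the system x' = v, v' = -sin(x).
def rk4_step(x, v, dt):
    def acc(x):
        return -math.sin(x)
    k1x, k1v = v, acc(x)
    k2x, k2v = v + dt/2*k1v, acc(x + dt/2*k1x)
    k3x, k3v = v + dt/2*k2v, acc(x + dt/2*k2x)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
    x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    return x, v

x, v = 1.0, 0.0                         # oscillating regime
E0 = v*v/2 - math.cos(x)
for _ in range(10000):                  # integrate to t = 10
    x, v = rk4_step(x, v, 0.001)
drift = abs(v*v/2 - math.cos(x) - E0)   # energy should be conserved
```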
|
|calculus|ordinary-differential-equations|power-series|stochastic-calculus|stochastic-differential-equations|
| 1
|
How to elegantly solve $\left(x^2-\frac1{x^2}\right)^2+2\left(x+\frac1{x}\right)^2 = 2024$?
|
Find all values of $x$ such that $$\left(x^2-\frac{1}{x^2}\right)^2\:+\:2\,\left(x+\frac{1}{x}\right)^2 \:=\: 2024$$ I arrived at the solutions $\,x=\pm \sqrt{22 \pm \sqrt{483}}$ , $\,\pm \sqrt{-23 \pm 4 \sqrt{33}}\,$ , by simplifying to a point where there is no more linear term, and then substituting $y=x^2$ . It took many steps and was quite messy. Is there some elegant way to solve it?
|
How lucky to get the equation presented in this form, this allows us to fairly quickly determine all its solutions. Not sure if it's elegant, but surely it is effective. Two observations to start with: $\,(1)\,$ If $x$ is a solution, then $x\neq 0,$ and $\,-x\,$ and $\,\raise{.1em}{\pm}\,\dfrac{1}x\,$ are also solutions. $(2)\,$ The first summand of the equation would yield the terms $x^4$ and $x^{-4}$ , having the highest and the lowest degree overall respectively. Transformation into a polynomial would thus result in an 8th order equation, and we'd expect 8 roots in total. Using the identities $$x^2-\frac{1}{x^2}\:=\: \left(x+\frac{1}{x}\right)\left(x-\frac1x\right) \quad\text{and}\quad\left(x-\frac1x\right)^2 +2 \:=\: \left(x +\frac1x\right)^2 -2$$ the given equation is turned into $$\left(x+\frac{1}{x}\right)^2 \left[\left(x +\frac1x\right)^2 -2\,\right] \;=\; 2024 = 45^2-1 \\[4ex] \quad\implies\quad \left[\frac14\left(x +\frac1x\right)^2 -\frac{46}4\,\right] \left[\frac14\left(x +
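The real roots the asker reports can be spot-checked numerically (my addition); note that observation $(1)$ says $\pm 1/x$ are also roots, and indeed $\frac{1}{22+\sqrt{483}} = 22-\sqrt{483}$ since $(22+\sqrt{483})(22-\sqrt{483}) = 484-483 = 1$:

```python
import math

# Check the four real solutions x = ±sqrt(22 ± sqrt(483)).
def lhs(x):
    return (x**2 - 1/x**2)**2 + 2*(x + 1/x)**2

r = math.sqrt(22 + math.sqrt(483))
vals = [lhs(r), lhs(-r), lhs(1/r), lhs(-1/r)]   # all should equal 2024
```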
|
|algebra-precalculus|roots|radicals|
| 0
|
The vertices of a triangle are three random points on a unit circle. The side lengths are $a,b,c$. Show that $P(ab>c)=\frac12$.
|
The vertices of a triangle are three uniformly random points on a unit circle. The side lengths are, in random order, $a,b,c$ . Show that $P(ab>c)=\frac12$ . The result is strongly suggested by simulations, and by my attempt shown below. The simplicity of the result suggests that there may be an intuitive explanation. I am hoping for an intuitive explanation, but if that's not possible then any answer is welcome. (Examples of intuitive explanations are here and here .) My attempt Assume that the circle is centred at the origin, and the vertices of the triangle are: $A(\cos(-2Y),\sin(-2Y))$ where $0\le Y\le\pi$ $B(\cos(2X),\sin(2X))$ where $0\le X\le\pi$ $C(1,0)$ Let: $a=BC=2\sin X$ $b=AC=2\sin Y$ $c=AB=\left|2\sin\left(\frac{2\pi-2X-2Y}{2}\right)\right|=|2\sin(X+Y)|$ $P\left[ab>c\right]=P\left[2(\sin X)(\sin Y)>|\sin(X+Y)|\right]$ This probability is the ratio of the area of the shaded region to the area of the square in the graph below. Rotate these regions $45^\circ$ clockwise about t
|
Here's a way to finish the last integral making use of its symmetry. $$I=\int_\frac{\pi}{2}^\frac{\pi}{4} \arccos\left(\cos x \sqrt{1+\tan x}\right)dx\overset{\cot x\to x}=\int_0^1 \frac{\arccos \sqrt{\frac{x(1+x)}{1+x^2}}}{1+x^2}dx$$ $$=\int_0^1 \frac{\arctan \sqrt{\frac{1}{x}\frac{1-x}{1+x}}}{1+x^2} dx\overset{\large \frac{1-x}{1+x}\to x}=\int_0^1 \frac{\operatorname{arccot} \sqrt{\frac{1}{x}\frac{1-x}{1+x}}}{1+x^2}dx$$ $$\Rightarrow 2I=\frac{\pi}{2}\int_0^1 \frac{1}{1+x^2}dx\Rightarrow \boxed{I=\frac{\pi^2}{16}}$$ Above it was utilized that $\, \arccos x =\arctan \left(\frac{\sqrt{1-x^2}}{x}\right)$ and $\arctan x+\operatorname{arccot} x=\frac{\pi}{2}$ .
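As a numerical cross-check (my addition), the arccos form of the integral can be evaluated by a simple midpoint rule and compared with $\pi^2/16 \approx 0.61685$:

```python
import math

# Midpoint-rule evaluation of I = ∫₀¹ arccos(sqrt(x(1+x)/(1+x²)))/(1+x²) dx.
def integrand(x):
    return math.acos(math.sqrt(x * (1 + x) / (1 + x * x))) / (1 + x * x)

n = 200000
I = sum(integrand((k + 0.5) / n) for k in range(n)) / n
```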
|
|probability|integration|definite-integrals|intuition|geometric-probability|
| 0
|
How do I determine the "sign" of a region of integration in multivariable integrals?
|
In 1 dimension, if I am given two numbers $a<b$ and $x \in [a,b]$ , I know that $\int_{a}^{b}f(x)dx$ is positive and $\int_{b}^{a}f(x)dx$ is negative. However, in 2d and above the region of integration is often given simply as a set of numbers: $\iint_{D}f(x)dxdy$ . Am I to assume the $D$ in $\iint_{D}f(x)dxdy$ is a "positive region" unless otherwise stated? Edit: In a technical sense, is the region of integration assumed to be a positively oriented differentiable manifold unless stated otherwise?
|
tl; dr: Don't worry about orientation when calculating integrals over regions. Caveats: Even in a first multivariable course, do take care with orientations of paths in line integrals, and with surface normals when calculating flux across surfaces. And do take care with ordinary integrals whose limits are given, signifying an orientation, e.g., if $a<b$ and $f$ is integrable, then $$ \int_{a}^{b} f = \int_{[a, b]} f = \int_{a}^{b} f(x)\, dx = -\int_{b}^{a} f(x)\, dx. $$ If someone writes $\displaystyle\int_{b}^{a} f$ , it's a good idea to ask their intent. Integrating functions over regions of Euclidean space (or even over Riemannian manifolds, not necessarily oriented or orientable!) amounts to integrating "measures" or "densities"; loosely, absolute values of top-dimensional forms. Formally, a product of Cartesian differentials "has the same value" regardless of ordering. In the plane, for example, we might write $$ dx\, dy = dy\, dx = |dx \wedge dy|. $$ Integrating the constant functio
|
|calculus|integration|multivariable-calculus|
| 0
|
Forgetful functor to the functor from the category of spaces to that of pointed spaces
|
I want to define a forgetful functor $\beta:\textbf{Top}^0\to \textbf{Top}$ , where $\textbf{Top}^0$ is the category of pointed spaces and pointed continuous maps. I know that $\beta$ should forget what the base point of any pointed space is. But I am not sure exactly what this means. Is it that we simply forget that $x_X$ is the base point of the pointed space $(X,x_X)$ ? Or does it mean that we are actually removing $x_X$ from $X$ , i.e., is $\beta(X,x_X)=X\setminus \{x_X\}$ ?
|
It means the first: the functor takes the form $\mathbf{Top}^0\to\mathbf{Top}, (X,x_X)\mapsto X$ .
|
|general-topology|algebraic-topology|
| 1
|
Show that the sequence $f_{n}(x)=\sin nx$ does not have a convergent subsequence.
|
I am currently trying to prove that, for $f_{n}\in C([0,2\pi])$ with $f_{n}(x)=\sin nx$ , the sequence $f_n$ does not have a convergent subsequence. I have seen proofs on this site where the dominated convergence theorem is used. In my case I should find a relation between the $\Vert \cdot \Vert_{L^{\infty}}$ -norm and the $\Vert \cdot \Vert_{L^2}$ -norm, and I can't seem to find one. I know that if $f_n$ had a convergent subsequence $f_{n_k}$ , then $\Vert f_{n_{k+1}}-f_{n_k} \Vert_{L^{\infty}}$ would converge to $0$ as $k\rightarrow\infty$ . The problem is that the supremum of $g_{n_k}(x):=f_{n_{k+1}}(x)-f_{n_k}(x)$ does not seem to be calculable. So my guess is that I should bound this from below with the $L^2$ -norm, because $\Vert g_{n_k} \Vert_{L^{2}}=\sqrt{2\pi}$ for all $k$ , but I don't know how. Any help in furthering my progress would be appreciated.
|
Here’s a very general fact which you should keep in mind. If $(X,\mathfrak{M},\mu)$ is a finite measure space, and $1\leq p<q\leq\infty$ , then we have a continuous linear inclusion $\iota: L^q(\mu)\hookrightarrow L^p(\mu)$ with operator norm $\mu(X)^{\frac{1}{p}-\frac{1}{q}}$ . In words this is saying that on a finite measure space, if you’re in a higher Lebesgue space, then you’re automatically inside all lower ones. In fact, the case $q=\infty$ is so obvious (and also the one relevant for your question) that we can give a one line proof: \begin{align} \|f\|_p^p&=\int_X|f|^p\,d\mu\leq\int_X\|f\|_{\infty}^p\,d\mu=\|f\|_{\infty}^p\cdot\mu(X). \end{align} Taking $1/p$ roots gives $\|f\|_p\leq \|f\|_{\infty}\cdot \mu(X)^{\frac{1}{p}}=\|f\|_{\infty}\cdot \mu(X)^{\frac{1}{p}-\frac{1}{\infty}}$ . I leave it to you to prove the inequality in the cases $q<\infty$ (use Holder’s inequality with a clever choice of conjugate exponents), and that the operator norm really is equal to $\mu(X)^{\frac{1}{p}-\frac{1}{q}}$
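A discrete illustration (my addition) of the $q=\infty$ case on $X=[0,2\pi]$, using $f = \sin 7x - \sin 3x$, the kind of difference appearing in the question; its $L^2$ norm is $\sqrt{2\pi}\approx 2.5066$:

```python
import math

# Check ||f||_2 <= ||f||_inf * mu(X)^(1/2) on [0, 2*pi], mu(X) = 2*pi.
n = 100000
xs = [2 * math.pi * (k + 0.5) / n for k in range(n)]
fs = [math.sin(7 * x) - math.sin(3 * x) for x in xs]

norm_inf = max(abs(v) for v in fs)
norm_2 = math.sqrt(sum(v * v for v in fs) * (2 * math.pi / n))
```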
|
|real-analysis|functional-analysis|
| 0
|
Question about three lines theorem
|
I'm trying to prove that the function defined as $$F_\epsilon(z)=F(z)M_0^{z-1}M_1^{-z}e^{\epsilon z(z-1)},$$ where $F$ is a holomorphic function on the strip $0<\operatorname{Re} z<1$ , continuous and bounded on the closure of the strip, such that $$|F(it)|\leq M_0,\,|F(1+it)|\leq M_1,\,\forall t\in\mathbb{R},$$ satisfies $$|F_\epsilon(it)|\leq 1,\,|F_\epsilon(1+it)|\leq 1.$$ I have done the following: $$|F_\epsilon(it)|=|F(it)M_0^{it-1}M_1^{-it}e^{-\epsilon t^2}e^{-\epsilon it}|\leq |M_0^{it}M_1^{-it}|=|e^{it\log(M_0/M_1)}|\leq 1.$$ I'm not sure that I'm able to say that $$M_0^{it}M_1^{-it}=e^{it\log(M_0/M_1)},$$ because of the behaviour of the complex logarithm. What do you think? How can I prove that it is well defined? Any help will be very appreciated.
|
For any $a>0$ and $t\in\Bbb{R}$ , we have $a^{it}:=e^{it\log a}$ , and no funny business with complex logarithms; this is the usual logarithm $(0,\infty)\to\Bbb{R}$ . So, the absolute value is $|a^{it}|=|e^{it\log a}|=1$ . Apply this with $M_0,M_1$ .
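A one-line numerical confirmation (my addition) that $|a^{it}| = |e^{it\log a}| = 1$ for several positive bases $a$ and real $t$:

```python
import cmath, math

# |e^{it log a}| should be exactly 1 for a > 0 and real t.
vals = [abs(cmath.exp(1j * t * math.log(a)))
        for a in (0.5, 2.0, 10.0) for t in (-3.0, 0.7, 5.0)]
```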
|
|complex-analysis|inequality|logarithms|
| 0
|
A problem when integrating $\int \frac{a}{x^2 + 1}\text{ } dx$
|
I’m still not an expert in integration, and I’m still learning, so sorry if it’s a trivial question. Here’s the general integral: $$\int \frac{a}{x^2 + 1}\text{ } dx, a \in \mathbb R$$ I know that this antiderivative is $a \tan^{-1} (x)$ . Now, I want to compute $$\int_{-\infty}^{\infty} \frac{a}{x^2 + 1}\text{ } dx = 2\int_{0}^{\infty} \frac{a}{x^2 + 1} \text{ } dx$$ This basically means $$\left(a \lim\limits_{T\to \infty} \tan^{-1}(T)\right) - 0$$ I’ve heard a lot of people say it’s $\frac{a\pi}{2}$ . HOWEVER, the tangent also approaches infinity at any $x = \frac{(2n+1)\pi}{2}$ . So really the entire area under that curve from negative to positive infinity should be $a\pi$ , but is equal to $a(2n+1)\pi = 2na\pi + a\pi$ by integrating algebraically. What am I missing, and what should we do for other situations like this where the periodicity of the antiderivative can give multiple solutions?
|
For inverse trigonometric functions, the range of $\tan^{-1}x$ is widely taken to be $(\frac{-\pi}{2}, \frac{\pi}{2})$ , even though, as you said, $\tan x = \alpha$ has multiple solutions for any $\alpha$ . However, for $\tan^{-1}(\alpha)$ we only consider the solution that lies in the mentioned range. The reason is that when we choose to invert a function we must make sure it is bijective on the given domain. Since $\tan x$ isn't bijective on its whole domain, we take a subset of its domain where it is bijective and then find its inverse corresponding to that part only. The widely accepted subset of the domain of $\tan x$ used to define its inverse is $(\frac{-\pi}{2}, \frac{\pi}{2})$ . However, for the issue you're facing it's not necessary to follow the strict range given above, but we must make sure the branch is bijective. For example, on any interval $(\frac{(2n-1)\pi}{2},\frac{(2n+1)\pi}{2})$ , $\tan x$ is bijective. So on integrating and applying the limits we get $2(\frac{(2n+1)\pi}{2} -n\pi)$ , which is $\pi$ .
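A direct numerical check (my addition) that the integral itself, computed without any antiderivative, converges to $\pi$ and not to $(2n+1)\pi$:

```python
import math

# Midpoint-rule approximation of ∫_{-T}^{T} dx/(1+x²); as T grows this
# tends to pi, independent of any branch choice for arctan.
def integral(T, n=200000):
    h = 2 * T / n
    return sum(1 / (1 + (-T + (k + 0.5) * h) ** 2) for k in range(n)) * h

approx = integral(1000.0)
```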
|
|integration|improper-integrals|
| 0
|
Will the Jeffreys Prior Always Be Improper?
|
If our prior distribution has a support which ranges to infinity, then will the Jeffreys prior necessarily be improper? For example, with a Gamma or Normal prior, the Jeffreys prior is improper. But, with a Beta prior, we get an actual distribution when calculating the Jeffreys prior. The support for the Gamma and Normal distributions ranges to infinity, whereas the support for the Beta distribution is $[0,1]$ . Does this generalize to all priors whose support ranges to infinity? EDIT: Also, if possible, it would be really interesting to see this claim proven (although I'm not sure something like that is even possible).
|
The Fisher information matrix is related to the second-order rate of change in the cross-entropy of a parametric distribution. When a distribution changes little as a function of the parameter, and even drops to zero, then you can have a parametric distribution with the Jeffreys prior being a proper distribution. The examples in the other answers obtain such a proper distribution with a parametric distribution where the change in the distribution along the entire range of the parameter is only small. Another way to obtain a Jeffreys prior that is a proper distribution would be to use a reparameterisation of the Bernoulli distribution to obtain a parameter on an infinite range, for example $p = \operatorname{logit}(\theta)$ . A more difficult case would probably be when we have a parametric distribution $f(x|\theta)$ such that the integral of the maximum frequency at each point $\int_{\forall x} \max_{\theta} \lbrace f(x|\theta)\rbrace \,\text{d}x$ diverges. That would place a minimum limit on the
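A concrete instance (my addition) of a proper Jeffreys prior on a bounded support: for a Bernoulli parameter the Jeffreys prior is proportional to $1/\sqrt{p(1-p)}$, i.e. Beta$(\tfrac12,\tfrac12)$, whose normalizing integral is $\pi$ and hence finite:

```python
import math

# Midpoint-rule estimate of ∫₀¹ dp / sqrt(p(1-p)); the exact value is pi,
# so the Bernoulli Jeffreys prior is proper.
n = 500000
total = sum(1 / math.sqrt(p * (1 - p))
            for p in ((k + 0.5) / n for k in range(n))) / n
```

The endpoint singularities are integrable, which is exactly why this prior normalizes while the flat Jeffreys prior for a normal mean does not.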
|
|probability-distributions|statistical-inference|bayesian|parameter-estimation|
| 0
|
Rectangular to Spherical/Spherical to Rectangular Case Not Consistent
|
I found the following equations for converting between Rectangular and Spherical coordinates on the site https://byjus.com/maths/spherical-coordinates/#:~:text=In%20three%20dimensional%20space%2C%20the,used%20to%20determine%20these%20coordinates . Rectangular to Spherical: r = sqrt(pow(x,2) + pow(y,2) + pow(z,2)), theta = arccos (x/r), phi = arccos (x/(r sin theta)). Spherical to Rectangular: x = r sin (theta) cos (phi), y = r sin (theta) sin (phi), z = r cos (theta). If I use the first set to convert (0, 10, 10) to spherical I get r = 10 sqrt(2), theta = 90 deg, phi = 90 deg. But if I use the second set to convert (10 sqrt(2), 90, 90) to rectangular, I get x = 0, y = 10 sqrt(2), z = 0. You would think that one should be the inverse of the other and return the original set.
|
There are so many defects in the formulas given on that page that it is hard to figure out what formulas they meant to write there. $\theta = \cos^{-1}(x / r)$ If you solve this for $x$ , it says that $x = r \cos\theta$ . But if you look at the diagram at the top of the page that supposedly says what the angles mean in spherical coordinates, this makes no sense. Here is the diagram with a few added annotations: The triangle with blue edges is a right triangle. I have named the vertices of this triangle $O$ , $P$ , and $Q$ in this version of the diagram. The point $P$ is the point with spherical coordinates $(r, \theta, \phi)$ . The point $O$ is the origin of the coordinate system, with rectangular coordinates $(0,0,0)$ . The point $Q$ is in the $x,y$ plane directly below $P$ and is the right-angled vertex of the triangle. The hypotenuse of the triangle is $OP = r$ , as shown in the original diagram. Clearly $\angle OPQ = \phi$ and therefore $PQ = r\cos(\phi)$ and $OQ = r\sin(\phi)$ . I
|
|coordinate-systems|
| 0
|
Is "$\prec$" a formal math symbol?
|
Is " $\prec$ " a formal symbol? I've never seen it before.
|
If you are asking about what the symbol is typically used for, and what sort of precedence there is for the use of it, then the following would be my answer: Generally when $\prec$ is used, it denotes some type of abstract order relation, see https://en.wikipedia.org/wiki/Order_theory . Like in your above example, we give some order to the function by defining an order relation given by the convergence criteria.
|
|calculus|notation|
| 0
|
Prove every homomorphism of an odd cycle into itself is a bijection.
|
This comes straight from problem 1.5 (p. 12) of Handbook of Product Graphs, Second Edition by Hammack, Imrich, and Klavžar. It even comes with a hint in the appendix: Exercise 1.5 Hint: Show that the homomorphic image of an odd cycle is a closed walk that contains an odd cycle. But I think maybe there's a misprint in the question or I'm missing some important nuance in how to think about cycles or the range of the homomorphism. Had the problem statement said onto instead of into I think I would agree, but I've double and triple checked the text and the errata at http://imrich.at/wp-content/uploads/Misprints_and_Notes-3.pdf . I'm assuming that the set mapped into by homomorphism $\varphi$ is a set of $2k+1$ distinct vertices: $\{ u_1, u_2, \dots, u_{2k+1} \}$ , with the original cycle $C_{2k+1}$ being $u_1u_2\dots{}u_{2k+1}u_1$ , and a mapped cycle being a subsequence of the closed walk $\varphi(u_1)\varphi(u_2)\dots\varphi(u_{2k+1})\varphi(u_1)$ . But I think there are easy counterexam
|
Okay, I think this is easiest for me by recognizing first that, for any homomorphism $\varphi: C_{2k+1} \rightarrow C_{2k+1}$ , it must be the case that $\left(\varphi(v_{i+1}) - \varphi(v_i)\right) \equiv \pm 1 \pmod{2k+1}$ , since edges must map to edges. (For vertices that increase in the clockwise direction, consider the sign positive; negative for counterclockwise.) Then, consider possible values of $S$ , defined as $$S = \sum_{i=1}^{2k+1} \left(\varphi(v_{i+1}) - \varphi(v_i)\right),$$ with indices taken modulo $2k+1$ , so that $v_{2k+2} = v_1$ . We know that $$S \bmod (2k+1) = \varphi(v_{2k+2}) - \varphi(v_1) = \varphi(v_1) - \varphi(v_1) = 0,$$ since we have a telescoping sum with vertices wrapping around clockwise from $2k+1$ back to $1$ . Since this is a cycle of odd length, $S$ must be $\pm (2k+1)$ : it cannot be $0$ , because that would imply an equal number of $+1$ and $-1$ values, which could only be true if the cycle were of even length. Thus, when mapping from an odd cycle to an odd cycle of the same length, any homomorphism must transit the mapped cycle once fu
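The sign argument can be confirmed by brute force for a small case (my addition): enumerating all self-maps of $C_5$, keeping those that preserve edges, and checking that every homomorphism found is a bijection. The $\pm1$-step argument predicts exactly $5 \times 2 = 10$ of them (rotations and reflections):

```python
from itertools import product

# Brute force: every graph homomorphism of C5 into itself is a bijection.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

def is_hom(phi):
    # vertices u, v of C5 are adjacent iff their labels differ by ±1 mod 5
    return all(abs(phi[u] - phi[v]) % n in (1, n - 1) for u, v in edges)

homs = [phi for phi in product(range(n), repeat=n) if is_hom(phi)]
all_bijections = all(len(set(phi)) == n for phi in homs)
```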
|
|graph-theory|
| 1
|
Existence of monoidable submagma of monoid which is neither a submonoid nor idempotent?
|
Does there exist a monoid $(M;*,1)$ and a submagma $S$ of $M$ , such that $S$ is not a submonoid of $M$ , but $S$ is "monoidable", meaning it contains an identity element, and also, $S$ is not idempotent either? The reason I am asking this question is because I was considering the monoid $(N;*,1)$ of nonnegative integers under multiplication, and its monoidable submagma yet non-submonoid $\{0\}$ . That example does not quite satisfy my condition, because it is idempotent. I am now wondering whether there are any non-idempotent examples of what I am looking for.
|
Let $M_1$ and $M_2$ be two monoids such that $M_1$ has a non-identity idempotent element $e$ , and $M_2$ is not idempotent. Consider $M_1 \times M_2$ . Then $\{e\} \times M_2$ is a subsemigroup but not a submonoid. It's isomorphic to $M_2$ as a semigroup though, so it's monoidable, if I've understood correctly (but not idempotent). This means there is an example where $M$ is of order $4$ , for example, as well as many more familiar examples like $\Bbb Z \times \Bbb Z$ under multiplication. An instance of this "in nature" is the monoid of $2 \times 2$ matrices over your favourite ring (say $\Bbb Z$ or $\Bbb R$ ) under multiplication, and the subsemigroup of matrices whose only nonzero entry is the top left one.
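A brute-force check of the construction (my addition), with the concrete choices $M_1 = (\{0,1\},\cdot)$, where $e=0$ is a non-identity idempotent, and $M_2 = \mathbb{Z}/4$ under multiplication, which is not idempotent since $2\cdot 2 = 0 \neq 2$:

```python
# Verify that S = {e} x M2 is a closed submagma with its own identity
# (0, 1), yet is not a submonoid of M1 x M2 and is not idempotent.
M2 = [0, 1, 2, 3]
mul = lambda a, b: ((a[0] * b[0]) % 2, (a[1] * b[1]) % 4)

S = [(0, m) for m in M2]

closed = all(mul(s, t) in S for s in S for t in S)
has_internal_identity = all(mul((0, 1), s) == s == mul(s, (0, 1)) for s in S)
not_submonoid = (1, 1) not in S          # identity of M1 x M2 is (1, 1)
not_idempotent = any(mul(s, s) != s for s in S)   # (0,2)*(0,2) = (0,0)
```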
|
|abstract-algebra|monoid|
| 1
|
Determining the curve to which variable line segments are tangent
|
A variable line passing through a fixed point $P(x_0,y_0)$ intersects the circle $x^2+y^2=a^2$ at some point $Q$ . If all the perpendicular bisectors of the variable line segment $PQ$ are tangent to a curve $C$ , determine $C$ if (i) $P$ is inside the circle (ii) $P$ is outside the circle (Note: in both cases, only select one point of intersection with the circle, do not find the perpendicular bisector of the chord between both points of intersection); Motivation: I was watching this video on Feynman's lost lecture on planetary orbits and came across an interesting geometry puzzle; https://youtu.be/xdIjYBtnvZU?si=g-5pJiu_zo8SZf5 _ which moreover resolves the first part of the question but then my dense self could not understand why the center, $Q$ , and $P$ as defined in "geometry proof land" are collinear so I decided to turn to a hardcore coordinate geometry approach to the problem. I would really appreciate it if the answer to this problem also uses the tools of coordinate geometry
|
Let me see if I can convince you with a different geometrical approach, instead of resorting to coordinates. I'll do my proof for a point $P$ inside the circle, but a similar reasoning is possible if $P$ is outside (see EDIT at the end). Consider two rays through $P$ , $PQ$ and $PQ'$ , with their perpendicular bisectors meeting at $\tilde T$ (see figure below). Midpoints $M$ and $M'$ lie on a fixed circle (purple in the figure), which is the image of the given circle under a homothety of centre $P$ and ratio $1\over2$ . Hence its centre $C$ is the midpoint of $OP$ and its radius is ${1\over2}a$ . But points $M$ and $M'$ also belong to the circle with diameter $P\tilde T$ (blue in the figure). If $Q'$ is moved towards $Q$ , point $\tilde T$ also moves, approaching a limiting position $T$ which is the point on the envelope of the perpendicular bisectors lying on the perpendicular bisector of $PQ$ (this is the "classical" definition of envelope you can find on Wikipedia). When $Q'\to Q$
|
|analytic-geometry|physics|conic-sections|locus|
| 1
|
Is "$\prec$" a formal math symbol?
|
Is " $\prec$ " a formal symbol? I've never seen it before.
|
In mathematics, you are allowed to introduce any symbol you like, and to define that symbol, and to apply that definition. So, for example, perhaps I wish to introduce the notation $@$ for a binary relation on the natural numbers. Then I have to tell you the definition of that relation: $m \,@\, n$ means that $m^2 + 3n$ is a prime number. Great! And silly, yes... I can't dream of what it could be applied to. But my point is, that's how notations and their definitions work throughout mathematics. If you look at the screen shot in your own post, you'll see that's exactly what's going on: that passage is introducing the notation $\prec$ for a binary relation on the set of functions of $n$ , and it is telling you the definition of that relation: $f(n) \prec g(n)$ means that $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$ . And there are many important applications of this relation.
|
|calculus|notation|
| 0
|
Why doesn't the indefinite integral act as a function even though it is?
|
Suppose we have a function that does not involve integrals, say $f(x) = x +2$ ; $f(3)$ would just be substituting 3 into the equation. Now suppose $$ f(x) = \int 2x^2 \,dx $$ Supposing we have $f(4)$ , what is so special regarding integrals that makes it so that $$ \int 2(4)^2 \,dx $$ and $$ 2*(4)^3/3 $$ do not give the same answer. I have a feeling this has to do with the fundamental theorem of calculus yet I am not sure. Any explanation would be great!
|
The entire notation $\int f(x)dx=F$ is just shorthand for saying $F’=f$ - the $x$ on the left and right aren’t exactly the same. Rather $\int f dx$ isn’t really even a function which you can feed values into - there’s that pesky ‘ $+C$ ’ term representing that it’s an entire class of functions which have the desired derivative. The variable $x$ inside of $\int f(x)dx$ is a dummy variable, analogous to the index variable inside a sum. Anytime you actually want to substitute variables or do anything with them, you should use definite integrals where substitution actually makes sense. There, when the variable on the left side is substituted for, it fits into the integration bounds, not into the function itself. For example, let: $$F(y)=\int^y_0 f(x) dx$$ Substituting $y=3$ is then allowed: $$F(3)=\int^3_0 f(x)dx$$
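The distinction between the dummy variable and the bound of integration can be seen directly with a CAS; a small sketch using sympy (variable names are mine):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 2 * x**2

# An antiderivative (sympy drops the +C):
F = sp.integrate(f, x)                # 2*x**3/3

# The definite integral as a genuine function of its upper bound:
G = sp.integrate(f, (x, 0, y))        # 2*y**3/3

# Substituting into the *definite* integral is meaningful:
print(G.subs(y, 4))                   # 128/3

# Substituting x = 4 into the integrand first gives a constant
# integrand, a completely different object:
print(sp.integrate(f.subs(x, 4), x))  # 32*x
```

Substituting into the upper bound of $F(y)=\int_0^y f(x)\,dx$ is well-defined; substituting into the dummy variable inside the integral changes which function is being integrated.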
|
|calculus|integration|
| 0
|
Defining an Exponential Object using almost an adjunction
|
We know that exponential functor $(-)^X$ can be defined as right adjoint to the functor $ -\times X$ . Consider two $\frak{C}$ -objects $X$ and $Y$ and a functor $F:\frak{C}\to \frak{C}$ . Suppose $\frak{C}(Z \times X,Y) \cong \frak{C}(Z,F(Y))$ naturally in $Z$ (so almost an adjunction). Then I was wondering whether $F(Y)$ can be considered as the exponential object $Y^X$ . I think the answer is yes and this is my attempt. First take $Z=F(Y)$ . Let $ev: F(Y) \times X \to Y$ be the element in $\frak{C}(F(Y) \times X,Y)$ corresponding to the identity arrow $id_{F(Y)}:F(Y) \to F(Y)$ (my claim is that $ev$ is the evaluation arrow). Let $f:Z \times X \to Y$ be an arbitrary $\frak{C}$ -arrow of this form. Let $\tilde{f}$ be the corresponding element in $\frak{C}(Z,F(Y))$ . Consider the following naturality square: Since the assignment $\tilde{(-)}$ is a bijection, it is easy to see $\tilde{f}$ is the only choice satisfying $f=ev\circ(\tilde{f} \times id_X)$ . Is my argument correct?
|
It's a bit ambiguous; have you fixed $X$ ? If I interpret your statement in the following way: Suppose $X\in\mathfrak{C}$ is given and $F^X:\mathfrak{C}\to\mathfrak{C}$ is a functor for which there are isomorphisms $\mathfrak{C}(X\times Z,Y)\cong\mathfrak{C}(Z,F^X(Y))$ which are natural in $Z$ , for every $Y$ fixed. Is $F(Y)$ an exponential object $Y^X$ , for every $Y$ ? Well, yes. It is a well known principle that if you have a family of objects $A_Y:Y\in\mathfrak{C}$ for which isomorphisms $\lambda_{Y,Z}:\mathfrak{C}(L(Z),Y)\cong\mathfrak{C}(Z,A_Y)$ are given, naturally in $Z$ and for some functor $L$ , then the object map $Y\mapsto A_Y$ uniquely inherits a functor structure such that $L\dashv A_\bullet$ under those isomorphisms $\lambda$ . So you can promote the weak adjunction to a full adjunction . There is an extension of this to parametric adjunctions as well. In your case, taking $A_Y:=F^X(Y)$ and $L:=X\times(-)$ , you would just have to ( potentially ) change the functor $F^X$
|
|category-theory|
| 1
|
Probability that a triangle inscribed in a square comprises at least $\frac{1}{4}$ of the area of the square
|
Question: Suppose that points $P_1$ , $P_2$ , and $P_3$ are chosen uniformly at random on the sides of a square $T$ . Compute the probability that $$\frac{[\triangle P_1 P_2 P_3]}{[T]}>\frac{1}{4}$$ where $[X]$ denotes the area of polygon $X$ . Without loss of generality, I assumed the side length of the square to be $1$ . Because the question mentions $\frac{1}{4}$ of the square, I considered splitting the square into quadrants. It is obvious that all three points cannot lie in the same quadrant of the square. Similarly, there cannot be two points in the same quadrant because the area must be less than $\frac{1}{2} \cdot \frac{1}{2} \cdot 1=\frac{1}{4}$ . [EDIT: This is wrong] This means that all three vertices must lie in different quadrants in the square. From here, I considered cases: Case 1: Two of the points are on the same side, which occurs with probability $\frac{9}{16}$ . Case 2: All three points are on different sides, which occurs with probability $\frac{3}{8}$ . From here,
|
Here is an outline of how you can use some known results to relatively quickly solve this in the general case. I will regroup the cases differently according to their simplicity. Two vertices on the same side, one on the opposite side ( $p = 3/16$ ). This is just equivalent to the problem of finding the distance between two random points, which has a probability density $\rho_1(S) = 4(1-2S)$ . Two vertices on the same side, one on the adjacent side ( $p = 3/8$ ). This is a product distribution of a uniform distribution and the distribution in the previous case. Using the formula for the product distribution density we have $\rho_2(S) = 4(2S - \ln (2S) - 1)$ . All three vertices are on different sides ( $p=3/8$ ). This is a more complicated but known problem . The probability density in this case is given by $\rho_3(S) = 4((2S-1)\ln (1-2S) - 2S\ln (2S))$ . Combining the cases above we have $$\rho(S) = \frac{3}{16} \rho_1(S) + \frac{3}{8} \rho_2(S) + \frac{3}{8} \rho_3 (S) = 3\left( \lef
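The first density $\rho_1$ is easy to sanity-check with a quick simulation (function name and seed are mine): with two vertices uniform on one side and one on the opposite side, the area is $|u_1-u_2|/2$, and $\rho_1(S)=4(1-2S)$ gives $P(S>1/4)=\int_{1/4}^{1/2}4(1-2s)\,ds=1/4$.

```python
import random

random.seed(0)

def mc_case1(trials=200_000):
    """Two vertices uniform on the bottom side, one on the top side of a
    unit square: the triangle's area is |u1 - u2| * 1 / 2, independent of
    where the third vertex sits on the top side."""
    hits = 0
    for _ in range(trials):
        u1, u2 = random.random(), random.random()
        if abs(u1 - u2) / 2 > 0.25:
            hits += 1
    return hits / trials

est = mc_case1()
print(round(est, 2))  # ≈ 0.25, matching the density rho_1(S) = 4(1 - 2S)
```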
|
|probability|contest-math|geometric-probability|
| 0
|
To find condition on $p$ such that the asymptotics of a function is infinity
|
I came across the following asymptotic problem in my research; however, I don't know how to answer it. Let $C_0,C_1,C_2,C_3$ be absolute constants (they do not depend on $n$ ), and $p(n)$ is a function of $n$ . We need to find $p(n)$ such that the following two conditions hold: (1) $p(n)$ is larger than the following function, i.e., $$\tiny p(n)>\frac{-(4+(n-2)(2C_1+C_2))+\sqrt{((4+(n-2)(2C_1+C_2)))^2-4\cdot (-(n-2)(C_1+C_2-C_3))\cdot(-2C_0-(n-2)C_1)}}{2(-(n-2)(C_1+C_2-C_3))}$$ It is okay if this only holds for sufficiently large $n$ . (2) Let $$\tiny E(p(n),n):=(-2C_0-(n-2)C_1+(4+(n-2)(2C_1+C_2))\cdot p-(n-2)(C_1+C_2-C_3)p^2)^2$$ and $$\tiny F(p(n),n):= 4C_0^2\cdot (-4p^2+4p)+C_1^2\cdot \frac{1}{8}(n-2)4(-p+1)p+(n-1)(1-(1-2p)^4)+C_2\cdot \frac{1}{4}(n-1)\big(1-(1-2p)^4\big)+C_3^2 \frac{1}{8}(n-2)4(-p+1)p+(n-1)(1-(1-2p)^4)$$ we need $$E\succ F$$ for sufficiently large $n$ . In other words, $$\lim_{n\rightarrow+\infty}\frac{E}{F}=+\infty$$ What I tried Should I assume the following and the
|
With the help of a CAS, we find $$ \frac{F}{E}\sim\frac{1}{n} \frac{N}{D} \qquad, \qquad n \to \infty $$ Where $$ N= \left(-4 C_2-32\right) p^4+\left(8 C_2+64\right) p^3+\left(-\frac{C_1^2}{2}-6 C_2-\frac{C_3^2}{2}-48\right) p^2+\left(\frac{C_1^2}{2}+\frac{C_3^2}{2}+2 C_2+16\right) p \\ D=\left(C_1+C_2-C_3\right){}^2 p^4-2 \left(2 C_1+C_2\right) \left(C_1+C_2-C_3\right) p^3+\left(\left(2 C_1+C_2\right){}^2+2 C_1 \left(C_1+C_2-C_3\right)\right) p^2-2 C_1 \left(2 C_1+C_2\right) p+C_1^2 $$ Consider cases: If $p\to \infty$ then the $4$ powers of $p$ in the numerator match the $4$ powers in the denominator, so $F/E\sim 1/n \to 0$ . If $p\to 0$ then the numerator vanishes and the denominator stays finite, so $F/E\to 0$ . If $p\to \text{const}$ then $F/E\sim1/n\to 0$ . So for any choice of $p$ , your second condition is fulfilled- as long as the appropriate leading term doesn't vanish due to an unfortunate relation between the $C_i$ (such as $C_1+C_2-C_3=0$ ). For your first condition, let $Q
|
|asymptotics|
| 1
|
On partition of a number $n$ into positive integers
|
I need to prove this inequality, but I do not have a good background in algebra, if you can guide me: We have: $$ p_1+p_2+\ldots+p_k = q_1+q_2+\ldots+q_k+\ldots+q_t = n $$ and $$ p_1 + 2p_2 + \ldots + kp_k < q_1 + 2q_2 + \ldots + tq_t $$ and $$ 1 \leq l_1 < l_2 < \ldots < l_t $$ and $$ p_1 \geq p_2 \geq ... \geq p_k\\ $$ and $$ q_1 \geq q_2 \geq ... \geq q_t $$ where $p_i \in \mathbb{N^*}$ , $q_i \in \mathbb{N^*}$ , $l_i\in \mathbb{N^*}$ for all $i$ . Then I need to prove that: $$ l_1p_1+l_2p_2+\ldots +l_kp_k < l_1q_1+l_2q_2+\ldots+l_tq_t $$ If anyone can help me please. Best regards
|
After several months of research, I found a counterexample which I will share with you: For $k = 5$ : $p_1 = 6, p_2 = 4, p_3 = 4, p_4 = 1, p_5 = 1$ For $t = 6$ : $q_1 = 6, q_2 = 6, q_3 = 1, q_4 = 1, q_5 = 1, q_6 = 1$ We take $l_1 = 1, l_2 = 2, l_3 = 5, l_4 = 6, l_5 = 7, l_6 = 8$ So we get: $\sum_{i=1}^{5} ip_i = 35$ and $\sum_{i=1}^{6} iq_i = 36$ and: $\sum_{i=1}^{5} l_ip_i = 47 > \sum_{i=1}^{6} l_iq_i = 44$
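The counterexample is easy to verify mechanically; a quick check (list names are mine):

```python
p = [6, 4, 4, 1, 1]
q = [6, 6, 1, 1, 1, 1]
l = [1, 2, 5, 6, 7, 8]

# Both are non-increasing partitions of the same n, with strictly
# increasing weights l_i starting at l_1 >= 1.
assert sum(p) == sum(q)
assert p == sorted(p, reverse=True) and q == sorted(q, reverse=True)
assert l[0] >= 1 and all(a < b for a, b in zip(l, l[1:]))

# Index-weighted sums: sum i*p_i < sum i*q_i ...
wp = sum((i + 1) * v for i, v in enumerate(p))
wq = sum((i + 1) * v for i, v in enumerate(q))
print(wp, wq)  # 35 36

# ... yet with the increasing weights l_i the inequality reverses.
lp = sum(l[i] * v for i, v in enumerate(p))
lq = sum(l[i] * v for i, v in enumerate(q))
print(lp, lq)  # 47 44
```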
|
|inequality|integer-partitions|
| 1
|
How to compute the integral $\int_0^\infty dx\ \frac{e^{- i y x}}{x-i\alpha}$?
|
Consider the function $$ f(y,\alpha):=\int_0^\infty dx\ \frac{e^{- i y x}}{x-i\alpha} $$ for $y>0$ and $\alpha \in \mathbb{R}$ . This is a Fourier transform over half the real line. I have tried to compute the above with a contour integration by integrating over the quarter-circle that runs along $[0,\infty] \cup \{ R e^{i \theta } | \theta \in [0,\pi/2] \} \cup [i\infty,0]$ but I am unable to make progress for the case when $\alpha > 0$ . Is there a simpler way to compute this integral?
|
There is no intention here to give a complete answer, but it would be too long for a comment, since I am not sure whether this is a valid attempt via the Fourier transform or not. \begin{align} F(\omega)=\int_0^\infty\frac{e^{-i\omega x}}{x-a}\mathrm dx \end{align} where $a=i \alpha$ . As we can find that $$\mathcal{F}\left\{\frac{1}{x}\right\}=-i\pi\operatorname{sgn}(\omega),$$ then by the shifting property $f(x\pm a)\leftrightarrow{F}(\omega)e^{\pm i\omega a}$ , we have $$\mathcal{F}\left\{\frac{1}{x-i\alpha}\right\}=-i\pi\operatorname{sgn}(\omega)e^{i\omega (-i\alpha)}=-i\pi\operatorname{sgn}(\omega)e^{\omega \alpha}.$$ Please help to correct this if something is wrong.
|
|integration|fourier-transform|contour-integration|
| 0
|
How to prove that a function is integrable using the Riemann condition?
|
Specifically, from reading some texts, the Riemann condition for showing that a function is integrable is: for every $\epsilon > 0$ there are step functions satisfying $s \leq f \leq t$ ($s$ and $t$ are step functions below and above $f$ respectively) such that $$ \int_{a}^{b}t-\int_{a}^{b}s < \epsilon. $$ However, the steps to get to this point are somewhat unclear and I am not sure where to start. For example, how do I show that $x^2$ is integrable on $[0, 2]$ using this condition? Is it perhaps possible to show integrability through the $\epsilon-\delta$ definition of continuity? If so, how do I go about doing that? Is there a good intuition to help understand this method? Any help is appreciated.
|
The intuition is the same as for proving continuity with $\varepsilon-\delta$ , i.e. you start off with the scratch work and when you reach the end, you know what you require $\delta$ to be. In the case of integrals, after expanding the difference of the upper sum and lower sum for some partition $P$ , you will know how fine the partition needs to be, so that for any given $\varepsilon>0$ , you have $U(f,P)-L(f,P)<\varepsilon$ whenever $\lvert P\rvert<\delta$ . For your case of $f(x)=x^2$ , note that $f$ is increasing on $[0,2]$ , so the supremum of $f$ on any sub-interval is attained at the right endpoint, and the infimum at the left endpoint. When evaluating the difference we get a telescoping sum, which leaves the right-most endpoint and left-most endpoint, times the length of the subintervals. The proof proceeds as follows: Let $\varepsilon>0$ be given and let $P$ be a partition of $[0,2]$ into $n$ parts as follows: $0=x_0<x_1<\dots<x_n=2$ , where $n$ is the smallest natural number for which $n>8/\varepsilon$ . Thus we get $$ U(f
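The telescoping computation can be checked numerically; a small sketch (names are mine) of the upper-minus-lower difference for uniform partitions of $[0,2]$:

```python
def upper_minus_lower(n):
    """U(f,P) - L(f,P) for f(x) = x^2 on [0,2], uniform n-part partition.
    f is increasing, so sup/inf on each piece sit at the right/left
    endpoints; the difference telescopes to (f(2) - f(0)) * (2/n) = 8/n."""
    dx = 2 / n
    xs = [i * dx for i in range(n + 1)]
    upper = sum(xs[i + 1] ** 2 * dx for i in range(n))
    lower = sum(xs[i] ** 2 * dx for i in range(n))
    return upper - lower

# Taking n > 8/epsilon makes the difference smaller than epsilon.
for n in (4, 8, 80):
    print(n, upper_minus_lower(n))  # 8/n each time
```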
|
|real-analysis|calculus|riemann-integration|
| 1
|
Finite subgroups of the multiplicative group of a field are cyclic
|
In Grove's book Algebra, Proposition 3.7 at page 94 is the following If $G$ is a finite subgroup of the multiplicative group $F^*$ of a field $F$, then $G$ is cyclic. He starts the proof by saying "Since $G$ is the direct product of its Sylow subgroups ...". But this is only true if the Sylow subgroups of $G$ are all normal. How do we know this?
|
This is another elementary proof that uses only the fact that a polynomial of degree $n$ has at most $n$ roots in a field. So, let $F$ be a finite field and let $g$ be an element of maximal possible order $n$ in the multiplicative group $F^*=F\setminus\{0\}$ of $F$ . Let $H:=\{g^k:0\le k<n\}$ be the cyclic subgroup of $F^*$ generated by the element $g$ . If $H=F^*$ , then the group $F^*$ is cyclic and we are done. So, assume that $H\ne F^*$ . Observe that every element $x\in H$ satisfies the equation $x^n-1=0$ , which has at most $n$ roots in the field $F$ . Therefore, $H=\{x\in F:x^n=1\}$ . Let $p$ be the smallest positive number for which there exists an element $y\in F^*\setminus H$ such that $y^p\in H$ . The minimality of $p$ implies that $p$ is a prime number. Let $k\in\{0,\dots,n-1\}$ be the smallest possible number such that $y^p=g^k$ for some element $y\in F^*\setminus H$ . We claim that $k<p$ . Assuming that $k\ge p$ , we can observe that the element $z=yg^{-1}\in F^*\setminus H$ has $z
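For prime fields the claim is easy to check by brute force; a sketch (helper name is mine) computing the multiplicative order of every element of $F_p^*$:

```python
def multiplicative_orders(p):
    """Map each g in F_p^* (p prime) to its multiplicative order."""
    orders = {}
    for g in range(1, p):
        x, k = g % p, 1
        while x != 1:
            x = (x * g) % p
            k += 1
        orders[g] = k
    return orders

# Some element attains the maximal order p - 1, i.e. F_p^* is cyclic.
for p in (7, 11, 13):
    print(p, max(multiplicative_orders(p).values()))  # p - 1 each time
```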
|
|abstract-algebra|group-theory|
| 0
|
Showing existence of a non-standard model of arithmetic elementarily equivalent to standard model of arithmetic
|
Let $\mathcal{M}_A=\langle\mathbb N, 0^{\mathcal{M}_A}, s^{\mathcal{M}_A}, +^{\mathcal{M}_A}, \times^{\mathcal{M}_A}, <^{\mathcal{M}_A}\rangle$ be the standard model for the language of arithmetic $\mathcal L_A$ . Define the theory $\text{Th}(\mathcal{M}_A)=\{\alpha : \mathcal{M}_A\models\alpha\}$ . I am trying to show the existence of a model $\mathcal M$ that is elementarily equivalent to $\mathcal{M}_A$ and contains at least one non-standard element. I can start the proof as follows: Extend $\mathcal{L}_A$ with a new constant symbol $c$ , and call this language $\mathcal{L}'$ . Define $\Gamma=\text{Th}(\mathcal{M}_A)\cup\{\underbrace{s\dots s}_{\text{$n$ times}}(0) < c : n\in\mathbb N\}$ . One can use compactness to demonstrate that $\Gamma$ is finitely satisfiable, and hence satisfiable. So there is some model $\mathcal M$ so that $\mathcal{M}\models\Gamma$ and contains a non-standard element in its domain. I am not sure whether $\mathcal M$ is elementarily equivalent to $\mathcal M_A$ (with respect to $\mathcal L_A$ ). I think the
|
Continuation of the proof: ... Since $\Gamma$ is satisfiable, there is some $\mathcal M'$ s.t. $\mathcal M'\models\Gamma$ . Let $\mathcal M$ be the reduct of $\mathcal M'$ obtained by removing the symbol $c$ . $\mathcal M$ is a structure for $\mathcal L_A$ and its domain contains non-standard elements, but they are not the value of any term of $\mathcal L_A$ . That is, $\mathcal M$ is not covered . To show that it is elementarily equivalent to the standard model $\mathcal M_A$ , we need to demonstrate that for any $\mathcal L_A$ sentence $\alpha$ , $\mathcal M_A\models\alpha$ iff $\mathcal M\models\alpha$ . Let $\alpha$ be an $\mathcal L_A$ sentence. ( $\Longrightarrow$ ) Trivial since $\mathcal M\models\text{Th}(\mathcal M_A)$ , i.e. $\text{Th}(\mathcal M_A)\subseteq\text{Th}(\mathcal M)$ . ( $\Longleftarrow$ ) Suppose $\mathcal M\models\alpha$ . Then, $\mathcal M\not\models\lnot\alpha$ . Then, $\lnot\alpha\notin\text{Th}(\mathcal M)$ . Since $\text{Th}(\mathcal M_A)\subseteq\text{Th}(\mathcal M)$ , $\lnot\
|
|logic|model-theory|nonstandard-models|
| 0
|
Where is the $1/2$ lost in Fourier transform?
|
From an old notebook, I found a question mark (not known how many years it has existed) about a contradiction between the integral form and the original exponentially decaying function $f(t)$ . $$ f(t)=\begin{cases} e^{-bt},& t\ge 0\\ 0, & t<0 \end{cases} $$ where $b>0$ and $f(0)=1$ . The Fourier transform of $f(t)$ is $$ \mathcal{F}[f(t)](\omega)=\frac{1}{j\omega+b}, $$ But if $f(t)$ is recovered by the inverse transform from $F(\omega)$ \begin{align} f(t)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{j\omega+b}e^{j\omega t}\mathrm d\omega\\ &=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{b\cos{\omega t}+\omega\sin{\omega t}}{\omega^2+b^2}\mathrm d\omega\\ &=\begin{cases} 0 & \text{ for } t<0\\ e^{-bt} & \text{ for } t>0 \end{cases} \end{align} where in the second case, at $t=0$ , $f(t)$ becomes only $\frac12\ne e^{-bt}|_{t=0}=1$ . Where in the steps was the other half lost?
|
This is part of the Gibbs phenomenon . It is more thoroughly explained there, but the short version is: Not all functions are recovered by the inverse Fourier transform of their Fourier transform. The combination "inverse Fourier transform of the Fourier transform of $f(t)$ " is a projection operator, onto some subset, $F_{\text{nice}}$ , of the space of functions. The more times a function is continuously differentiable, the "closer" it is to $F_{\text{nice}}$ and the smaller the discrepancy between finite initial sums of "inverse Fourier transform of the Fourier transform of $f(t)$ " and $f$ . At jump discontinuities, the function isn't even zero times continuously differentiable, so we can expect larger changes in the projection. These changes caused by a jump discontinuity, at $x = x_0$ , say, are: The projection function outputs the midpoint of the jump at $x_0$ . The projection function overshoots the height of the jump from both sides in an oscillatory manner. For your function
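The midpoint value at $t=0$ can be confirmed symbolically: the sine term of the inversion integral vanishes there, and the remaining cosine part integrates to exactly $1/2$. A sketch with sympy (symbol names are mine):

```python
import sympy as sp

w, b = sp.symbols('omega b', positive=True)

# At t = 0 the sine term drops out and the inversion integral reduces to
# (1/(2*pi)) * integral of b / (omega^2 + b^2) over the real line.
val = sp.integrate(b / (w**2 + b**2), (w, -sp.oo, sp.oo)) / (2 * sp.pi)
print(sp.simplify(val))  # 1/2
```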
|
|complex-analysis|fourier-analysis|fourier-transform|
| 1
|
Sum of unitary operators converges to projection operator
|
So let $\mathcal{H}$ be a Hilbert space and let $U$ be a unitary operator on $\mathcal{H}$. Let $I=\{v\in\mathcal{H}:U(v)=v\}$. Show that $\frac{1}{N}\sum_{n=1}^NU^n(v)\rightarrow Pv$ where $P:\mathcal{H}\rightarrow\mathcal{H}$ is the projection operator onto $I$. I don't know how to proceed with this problem. Can someone suggest hints as to how to do this?
|
I commented on Kroki's answer and came up with an alternative solution to avoid the problem using the inverse of $\mathrm{Id}-U$ . Here it is. It is easy to show that $I=\{x\in \mathcal{H}:Ux=x\}$ and $V=\{Ux-x:x\in\mathcal{H}\}$ are subspaces of $\mathcal{H}$ . Claim : $I$ is the orthogonal complement of $V$ and $\mathcal{H}=I\oplus\overline{V}$ . Proof : First, we show that $I\subset V^\perp$ . Take any $x\in I$ , we have $\forall y\in \mathcal H$ , $\left\langle x,y\right\rangle =\left\langle Ux,Uy\right\rangle =\left\langle x,Uy\right\rangle $ . Then, $$\left\langle x,Uy-y\right\rangle =\left\langle x,Uy\right\rangle -\left\langle x,y\right\rangle =0$$ This is true $\forall Uy-y\in V$ . Therefore, $I\subset V^\perp$ . Next, let's show that $V^\perp\subset I$ . Take any $x\in V^\perp$ , we have \begin{align} &\|Ux-x\|^2\\ &=\left\langle Ux-x,Ux-x\right\rangle \\ &=\left\langle Ux-x,Ux\right\rangle -\left\langle Ux-x,x\right\rangle \\ &=\left\langle Ux-x,Ux\right\rangle -0\\ &=\left\langle x-U^{-1}x,x\right\rangle \\ &=\left\langle Uy-y,x\right\rangle \\ &=0 \end{align} where in the third equality we have used that $x\in V^\perp$ and $Ux-x\in V$ , in the fourth the fact that $U^{-1}$ is also a unitary operator, and in the fifth that, since $U$ is a bijection, $\exists y\in \mathcal H$ such that $Uy=x$ , so $x-U^{-1}x=Uy-y\in V$ . By the strict positivity axiom, we have $Ux-x=0\Rightarrow U
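The statement (von Neumann's mean ergodic theorem) is easy to visualize numerically in finite dimensions; a sketch (all names mine) with a unitary that fixes one coordinate and rotates the other two by an irrational angle:

```python
import numpy as np

theta = np.sqrt(2.0) * np.pi  # irrational multiple of pi: no fixed vectors
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Block-diagonal unitary: identity on the first coordinate (the fixed
# subspace I), rotation on the remaining two.
U = np.zeros((3, 3))
U[0, 0] = 1.0
U[1:, 1:] = R

v = np.array([1.0, 1.0, 1.0])
N = 20_000
acc, w = np.zeros(3), v.copy()
for _ in range(N):
    w = U @ w
    acc += w
avg = acc / N

# The Cesàro averages converge to P v, the projection of v onto I:
print(np.round(avg, 3))  # close to [1. 0. 0.]
```

The rotation block has bounded partial sums (geometric series of a unitary with no eigenvalue 1), so dividing by $N$ kills that component, while the fixed component survives untouched.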
|
|functional-analysis|operator-theory|hilbert-spaces|
| 0
|
Expected value of two different random variables
|
I am a bit confused. We have two different random variables $X,Y$ and another two random variables $G,F$ , such that $G$ has the same distribution as $X$ and $F$ has the same distribution as $Y$ . Does \begin{align*} E(XY)=E(GF) \end{align*} hold? Or do we need some independence assumptions? Sorry for this silly question :)
|
You need information regarding the joint distribution. Counterexample: Suppose all $4$ variables are $\pm 1$ with equal probability. Scenario $1$ : They are pairwise independent. Then both $XY$ and $GF$ are also $\pm 1$ with equal probability so both expectations are $0$ (in particular, they are equal). Scenario $2$ : $X,Y$ are pairwise independent but $G=F$ . Then $E(XY)=0$ as before but $GF$ is the constant $1$ , hence has expectation $1$ .
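A two-line simulation illustrates the counterexample (seed and names are mine):

```python
import random

random.seed(1)
pm1 = lambda: random.choice([-1, 1])
trials = 100_000

# Scenario 1: all variables independent, so E(XY) is ~0.
exy = sum(pm1() * pm1() for _ in range(trials)) / trials

# Scenario 2: G = F (the same coin used twice), so GF is identically 1.
egf = sum(g * g for g in (pm1() for _ in range(trials))) / trials

print(round(exy, 1), egf)  # ≈ 0.0 and exactly 1.0
```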
|
|random-variables|expected-value|independence|
| 1
|
Determine the number of $5$ digit numbers in which no digit occurs more than twice (numbers starting with $0$ are allowed).
|
Determine the number of $5$ digit numbers in which no digit occurs more than twice (numbers starting with $0$ are allowed). I see $3$ possible groups of scenarios: No digits repeat $ = 10 \cdot 9 \cdot 8 \cdot 7 \cdot 6$ We have one pair of same digits $ = 10 \cdot \binom{5}{2} \cdot 9 \cdot 8 \cdot 7$ , because: we choose $1$ of $10$ digits for the pair, then we choose $2$ of $5$ positions in the number to put them in (at that same moment we choose the other $3$ in which we place the digits that do not repeat), then we choose the digits for the $3$ places that are left. We have two different pairs of same digits $ = \binom{10}{2} \cdot \binom{5}{2} \cdot \binom{3}{2} \cdot 8$ , because we choose which $2$ of $10$ digits appear twice, then we choose $2$ of $5$ positions for the smaller of the chosen digits, then we choose $2$ of the remaining $3$ positions for the larger of the selected digits, then we choose $1$ of the $8$ remaining digits to fill the remaining position. Is that correct?
|
Your answer is correct, but as such problems are susceptible to errors, as is evidenced in comments, here is a mechanical way using the product of two multinomial coefficients, which totally saves you the labor of thought $\textbf{[Lay down Pattern] x [Permute Pattern]}$ which yields $$\binom{5}{1,1,1,1,1,0,0,0,0,0}\binom{10}{5,5} + \binom{5}{2,1,1,1,0,0,0,0,0,0}\binom{10}{1,3,6} + \binom{5}{2,2,1,0,0,0,0,0,0,0}\binom{10}{2,1,7} = 91440$$ $\boxed{\textbf{Added}}$ Of course, as the multinomial coefficient $\dbinom{5}{1,1,1,1,1,0,0,0,0,0}\color{red} {\equiv} \dfrac{5!}{1!1!1!1!1!0!0!0!0!0!} = 5!$ , by using the permutation form of the multinomial coefficients, and removing factorials of $1$ and $0$ , you can shorten the expression substantially as $$5!\cdot\frac{10!}{5!5!} + \frac{5!}{2!}\frac{10!}{3!6!} + \frac{5!}{2!2!}\frac{10!}{2!7!}$$
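Both counts agree with a direct brute force over all $10^5$ digit strings (a quick check; names are mine):

```python
from collections import Counter
from itertools import product

# Count length-5 digit strings (leading zeros allowed) in which no
# digit occurs more than twice.
count = sum(
    1
    for digits in product(range(10), repeat=5)
    if max(Counter(digits).values()) <= 2
)
print(count)  # 91440
```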
|
|combinatorics|solution-verification|
| 0
|
Bifurcation of the DDE $x'(t) = r x(t-d) - h x(t)$ which describes a population model with delay
|
Consider the following population model: $x'(t) = r x(t-d) - h x(t)$ , with growth rate $r > 0$ and death rate $h > 0$ , where only adults that have reached the age $d > 0$ can reproduce. I have to draw a bifurcation diagram. For $d = 0$ it looks easy but I don't know how to do it for $d > 0$ since it's my first encounter with a DDE. Can someone help me? My approach: The first thing to do would be to rescale the time so the delay becomes 1 and have all parameters outside $x$ . we can take $a = \frac{t}{d}$ , it'll probably look something like this: $\frac{1}{d}x′(a) = ... - h y(a)$ , where $y(a) = x(t)$ . I don't know how the term $r x(t-d)$ should look. I can also assume we seek solutions of the form $ce^{\lambda t}$ , so I can derive the characteristic function.
|
If I understand it correctly, your problem is solving the DDE. To do this you can simply substitute $x\left( t \right) = e^{\lambda \cdot t}$ (aka $x'\left( t \right) = \lambda \cdot e^{\lambda \cdot t}$ ) - reducing the DDE to it's characteristic polynomial - and solve for $\lambda$ via the Lambert-W Function: \begin{align*} x'\left( t \right) &= r \cdot x\left( t - d \right) - h \cdot x\left( t \right) \tag{$x\left( t \right) := e^{\lambda \cdot t}$}\\ \lambda \cdot e^{\lambda \cdot t} &= r \cdot e^{\lambda \cdot \left( t - d \right)} - h \cdot e^{\lambda \cdot t}\\ \lambda \cdot e^{\lambda \cdot t} &= r \cdot e^{\lambda \cdot t} \cdot e^{-\lambda \cdot d} - h \cdot e^{\lambda \cdot t} \tag{$\div e^{\lambda \cdot t}$}\\ \lambda &= r \cdot e^{-\lambda \cdot d} - h \tag{$+ h$}\\ \lambda + h &= r \cdot e^{-\lambda \cdot d} \tag{$\cdot \left( d \cdot e^{d \cdot \lambda} \cdot e^{d \cdot h} \right)$}\\ \left( d \cdot \lambda + d \cdot h \right) \cdot e^{d \cdot \lambda + d \cdot h} &= d \
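The real root of the characteristic equation $\lambda = r e^{-\lambda d} - h$ (equivalently $\lambda = W(r d\, e^{dh})/d - h$ on the principal Lambert-W branch) is easy to compute numerically; a sketch using a Newton iteration (names and sample values are mine):

```python
import math

def char_root(r, h, d, x0=0.0, iters=60):
    """Newton iteration for a real root of g(l) = l - r*exp(-l*d) + h,
    the characteristic equation of x'(t) = r x(t-d) - h x(t)."""
    l = x0
    for _ in range(iters):
        g = l - r * math.exp(-l * d) + h
        dg = 1.0 + r * d * math.exp(-l * d)  # strictly positive
        l -= g / dg
    return l

r, h, d = 2.0, 1.0, 0.5
lam = char_root(r, h, d)
print(abs(lam - (r * math.exp(-lam * d) - h)) < 1e-12)  # True
```

Here $r > h$, so the real root is positive and the zero equilibrium is unstable; the sign of this root is what the bifurcation diagram tracks as $r$, $h$, $d$ vary.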
|
|ordinary-differential-equations|bifurcation|delay-differential-equations|
| 0
|
Rectangles Game
|
Neznayka draws a rectangle and divides it into 64 smaller rectangles by drawing $7$ straight lines parallel to each of the original rectangle's sides. After that, Znayka points to $n$ rectangles of the division at the same time, and Neznayka reveals the areas of each of these rectangles. What is the smallest value of $n$ for which Znayka is able to determine the areas of all the rectangles in the division?
|
BASIC LIMIT : There are $8$ unknowns along the length of the rectangle. There are $8$ unknowns along the width of the rectangle. We require $8+8=16$ Equations to get those unknowns. When we know those $16$ unknowns , then all areas are known. The areas we might choose are : $8$ Diagonal Elements $7$ Elements just below the Diagonal $1$ Element at the top Corner UPDATED LIMIT : Shown in the Diagram , we have $16$ Elements which are selected to highlight that we have to take ratios of neighboring terms which are left-right & which are up-down. Imagine that we double all widths while halving all lengths : Since $2 \times 1/2=1$ , we will still have same areas for all Parts. This implies that we have $1$ less unknown , hence we have only $16-1=15$ unknowns & hence we require only $15$ Equations via $15$ Elements. We can skip the top-right green Element in the Diagram. Every row & every Column can be obtained via ratios of neighboring terms. Hence $n=15$ here.
|
|combinatorics|geometry|recreational-mathematics|
| 0
|
Category of isomorphisms is equivalent to underlying $\infty$-category
|
Let $\mathscr{C}$ be an $\infty$ -category (for example taking quasicategories as a model). Recall that the arrow category is $\mathsf{Ar}(\mathscr{C}) = \mathsf{Fun}([1], \mathscr{C})$ . We denote by $\mathsf{Isom}(\mathscr{C})$ the full sub- $\infty$ -category spanned by the equivalences. Then, there is the expected result that $\mathsf{Isom}(\mathscr{C}) \simeq \mathscr{C}$ . This follows for example from Kerodon 02BY where Lurie even shows that $\mathrm{ev}_0, \mathrm{ev}_1 : \mathsf{Isom}(\mathscr{C}) \to \mathscr{C}$ are trivial fibrations. However, the proof there is taken as the corollary of a much more general technical-looking result. It feels like that's overkill for this situation, which is why I'm looking for a more immediate way. If one tried to prove this $1$ -categorically, then one could write down inverse functors and explicit natural isomorphisms realizing that the functors realize equivalences of categories. But I couldn't manage to transport such a technique to $\infty$ -categories.
|
$\newcommand{\Hom}{\operatorname{Hom}}$ If you know how to compute mapping anima in arrow categories, there is an easy model-independent proof: Recall that for morphisms $f\colon x\to y$ and $g\colon a\to b$ in $\mathcal C$ , we have $$ \Hom_{\operatorname{Ar}(\mathcal C)}(f,g) = \Hom_{\mathcal C}(x,a) \times_{\Hom_{\mathcal C}(x,b)} \Hom_{\mathcal C}(y,b). $$ It is now straightforward that the functor $$ \alpha\colon \mathcal C \to \operatorname{Isom}(\mathcal C),\quad c\mapsto \mathrm{id}_c $$ is fully faithful. The essential image coincides with $\operatorname{Isom}(\mathcal C)$ : if $f\colon x\to y$ is an isomorphism, then the commutative square $$ \require{AmsCD} \begin{CD} x @>{\mathrm{id}_x}>> x \\ @V{\mathrm{id}_x}VV @VV{f}V \\ x @>>{f}> y \end{CD} $$ shows that $\mathrm{id}_x \simeq f$ in $\operatorname{Isom}(\mathcal C)$ . Hence, $\alpha$ is an equivalence of $\infty$ -categories. Now, note that $\mathrm{ev}_0\circ \alpha = \mathrm{ev}_1\circ\alpha = \mathrm{id}_{\mathcal C}$
|
|category-theory|homotopy-theory|higher-category-theory|
| 1
|
About isomorphism of each category to a category of ordered semicategories with actions
|
I will call an ordered semicategory with action (OSA) an ordered semicategory that is also a semicategory with actions acting on ordered elements, where the action preserves the order of both elements and morphisms. I denote the action of a morphism $f$ as $\langle f\rangle$ . I call two morphisms $\mu$ and $\nu$ of OSA isomorphic, when they are isomorphic as morphisms of ordered semicategories, and there is a bijection $f$ mapping ( $\langle\nu\rangle=f\circ\langle\mu\rangle\circ f^{-1}$ ) the action of $\mu$ to the action of $\nu$ . I will call space-in-general an endomorphism of an ordered semicategory with action. Background: On different examples from general topology (metrics, locales/frames and everything in between, such as topological spaces, uniform spaces, etc.) I show that we have interesting representation of these examples as a special cases of space-in-general. There appears a natural question: Is every mathematical structure representable as a space-in-general? (That pa
|
I modified my question a little to make this answer trivial. (Please check for errors, however.) The answer is almost trivial: First remove the order from the question by solving instead: is every concrete category isomorphic by $\tau$ to a subcategory of semicategories with actions, such that $a$ is isomorphic to $b$ iff $\tau a$ is isomorphic to $\tau b$ as semicategories with actions. (Then we add order to our semicategory by the formula $\tau a\leq\tau b\Leftrightarrow a\leq b$ (which is well-defined because $\tau$ is a bijection).) Now it's trivial: For a concretizable category $C$ take $\tau$ its faithful functor to $\mathbf{Set}$ . Then $(\operatorname{Mor}C,\tau)$ is a category with action $\tau$ . $\mu$ is isomorphic to $\nu$ in our semicategory when they are isomorphic as morphisms of $C$ and there is a bijection $f$ such that $\langle\nu\rangle=f\circ\langle\mu\rangle\circ f^{-1}$ , that is, when $\tau\nu=f\circ(\tau\mu)\circ f^{-1}$ . Because $\tau$ is an isomorphis
|
|general-topology|category-theory|
| 0
|
What are the properties of the differential operator that justify this?
|
Say I am solving the following differential equation: $$y'' - y = x^2$$ It is perfectly permissible for me to proceed as follows: $$D^2y- y = x^2$$ $$y(D^2-1) = x^2$$ $$y = (D^2 - 1)^{-1} \cdot x^2$$ $$y=\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} D^n\cdot x^2$$ $$y = (-1 - D^2 - D^4...) x^2$$ $$y_p = -x^2 - 2$$ Which specific properties have been used (specifically the second and third step) which justify this? Similarly, why am I allowed to do this? $$\frac{dy}{dx}=2x$$ $$dy=2xdx$$ In general, which properties hold for the differential operator?
|
The operator $D$ does not 'commute' with functions, so one writes $$ (D^2-1) y = x^2 $$ Multiplication by the pseudo-inverse on both sides from the left produces the Neumann geometric series $$(D^2-1)^{-1} \ (D^2-1) y =- (1-D^2)^{-1} x^2 = - \sum {D^{2n}} x^2 = -(x^2+ 2)$$ The pseudo-inverse of an operator is the inverse on the complement of the kernel; the kernel, consisting of all $y$ with $$(D^2-1) y=0,$$ is excluded. So the general solution is that special solution plus any combination of exponentials solving $(D^2-1) y=0.$ The Neumann series representation can be understood if an eigenbasis of the differential operator is used to expand all functions. In this basis the differential operator is a diagonal matrix, which can be inverted by taking the inverse diagonal matrix elements on the subspace of nonzero eigenvalues.
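The formal series inversion is easy to check mechanically: on polynomials the series $-(1+D^2+D^4+\cdots)$ terminates after finitely many terms. A small Python sketch (my own, not part of the original answer) applying it to the right-hand side $x^2$:

```python
# Apply y = -(1 + D^2 + D^4 + ...) rhs to a polynomial right-hand side.
# Polynomials are coefficient lists: p[k] is the coefficient of x^k.

def deriv(p):
    """Derivative of a polynomial given as a coefficient list."""
    return [k * p[k] for k in range(1, len(p))]

def particular_solution(rhs):
    """Particular solution of y'' - y = rhs; the series terminates on polynomials."""
    total = [0] * len(rhs)
    term = list(rhs)
    while any(c != 0 for c in term):
        for k, c in enumerate(term):
            total[k] += c
        term = deriv(deriv(term))  # next series term: apply D^2
    return [-c for c in total]

yp = particular_solution([0, 0, 1])   # rhs = x^2 has coefficients [0, 0, 1]
print(yp)   # [-2, 0, -1], i.e. y_p = -x^2 - 2
```

One can confirm directly that $(D^2-1)y_p = x^2$: the second derivative of $-x^2-2$ is $-2$, and $-2-(-x^2-2)=x^2$.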
|
|ordinary-differential-equations|
| 0
|
Showing an element of a permutation raised to a specific power is equal to the permutation
|
Here's my permutation: $\sigma = \left( \begin{array}{ccccccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\ 7 & 4 & 2 & 5 & 6 & 3 & 12 & 11 & 13 & 1 & 8 & 10 & 9 \\ \end{array} \right)$ I'm trying to find $\tau$ in $S_{13}$ such that $\tau^3$ = $\sigma$ , or show that such an element doesn't exist. I've started by writing $\sigma$ as a product of disjoint cycles, and got (1, 7, 12, 10)(2, 4, 5, 6, 3)(8, 11)(9, 13) ,and I've established that $ord (\sigma)$ = 20 as 20 = $lcm (2, 4, 5)$ I was thinking about using Lagrange, but was not sure if that works for permutations. Thank you!
|
Note that since the order of $\sigma$ is $20$ , we know $\sigma^{21} = \sigma$ . Thus $\tau = \sigma^7$ is a solution. An approach of this kind will work whenever a group element has an order coprime to the "root" you want to extract.
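To make the arithmetic concrete, here is a quick Python check (a sketch of my own, using the cycle data from the question) that $\tau=\sigma^7$ indeed cubes back to $\sigma$:

```python
# sigma has order lcm(4, 5, 2, 2) = 20, and 3 * 7 = 21 ≡ 1 (mod 20),
# so tau = sigma^7 satisfies tau^3 = sigma^21 = sigma.

sigma = {1: 7, 2: 4, 3: 2, 4: 5, 5: 6, 6: 3, 7: 12,
         8: 11, 9: 13, 10: 1, 11: 8, 12: 10, 13: 9}

def compose(f, g):
    """(f o g)(x) = f(g(x)) for permutations given as dicts."""
    return {x: f[g[x]] for x in g}

def power(p, n):
    result = {x: x for x in p}   # identity permutation
    for _ in range(n):
        result = compose(p, result)
    return result

tau = power(sigma, 7)
print(power(tau, 3) == sigma)   # True
```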
|
|group-theory|permutations|
| 0
|
Showing any covering map $\mathbb{R}P^2 \rightarrow X$ is a homeomorphism.
|
This post states that we can examine the deck transformations of the covering space $S^2 \rightarrow \mathbb{R}P^2 \rightarrow X$ to see that only the identity and antipodal maps are deck transformations. I am not sure how to show this is true. The homeomorphism is clear to me once I prove this. If $q: S^2 \rightarrow \mathbb{R}P^2$ is the quotient covering map, and $p: \mathbb{R}P^2 \rightarrow X$ is any covering map, then a deck transformation $\phi: S^2 \rightarrow S^2$ for $p \circ q$ satisfies $$p \circ q = (p \circ q) \circ \phi.$$ For any $x \in S^2$ , we have $p([x]) = p([\phi(x)])$ where $[x] = q(x)$ denotes an equivalence class in $\mathbb{R}P^2$ . We don't have any further assumptions on $p$ , so I don't know how to proceed from here. I can see that deck transformations with respect to $q$ are just the identity or antipodal maps, but composing with $p$ messes me up.
|
Here is a proof based on examining covering transformations of $p\circ q$ . The group $G$ of covering transformations of $p\circ q$ contains the antipodal map. In order to show that $p$ is 1-1, it suffices to prove that $G$ has order 2. For each continuous map $f: S^2\to S^2$ define the induced map of homology groups: $$ f_{*,i}: H_i(S^2; {\mathbb Q})\to H_i(S^2; {\mathbb Q}). $$ Let $G_0$ denote the index 2 subgroup consisting of covering transformations preserving orientation of $S^2$ , i.e. inducing the identity map $f_{*,2}$ . For an element $f\in G_0$ compute its Lefschetz number $\Lambda_f$ , $$ \Lambda_f= Tr(f_{*,0}) - Tr(f_{*,1}) + Tr(f_{*,2})= 1 -0+1=2. $$ Hence, by the Lefschetz theorem , $f$ has a fixed point in $S^2$ . Since $f$ is a covering transformation, it follows that such $f$ is the identity map, i.e. $G_0=1$ , i.e. $G$ has order $2$ , i.e. $p$ is 1-1.
|
|algebraic-topology|covering-spaces|
| 1
|
Surjective homomorphism of Lie groups with finite kernel is covering map
|
Let $G$ and $H$ be Lie groups. Is there an easy way to see why a surjective Lie group homomorphism $\varphi: G \to H\:$ with finite kernel is a covering map? When assuming $G$ is a Hausdorff space, the only thing I am not sure about is how to prove that $\varphi$ is open, the rest is clear. Do we need such an assumption? For context, I am interested in proving that the spin group is the double cover of the special orthogonal group if the underlying field is either $\mathbb{R}$ or $\mathbb{C}$ .
|
Theorem: Let $G$ be a Lindelöf and locally compact topological group which acts in a continuous and transitive way on a Baire space $M$ . If $m\in M$ , then the map $$\begin{array}{ccc}G&\longrightarrow&M\\g&\mapsto&g\cdot m\end{array}$$ is an open map. Proof: It will be enough to prove that if $V$ is a neighborhood of $e$ , then $V\cdot m$ is a neighborhood of $m$ . Let $W$ be a neighborhood of $e$ such that $W^{-1}\cdot W\subset V$ and suppose that $W\cdot m$ is a neighborhood of some of its points; in other words, suppose that, for some $w_0\in W$ , $W\cdot m$ is a neighborhood of $w_0\cdot m$ . Then ${w_0}^{-1}\cdot(W\cdot m)$ is a neighborhood of $m$ and therefore $\bigcup_{w\in W}w^{-1}\cdot(W\cdot m)$ is a neighborhood of $m$ . But $\bigcup_{w\in W}w^{-1}\cdot(W\cdot m)\subset V\cdot m$ , and so $V\cdot m$ is also a neighborhood of $m$ . Therefore, all that remains to be proved is that among all neighborhoods $W$ of $e$ such that $W^{-1}\cdot W\subset V$ there is at least one su
|
|lie-groups|covering-spaces|
| 1
|
Bifurcation of the DDE $x'(t) = r x(t-d) - h x(t)$ which describes a population model with delay
|
Consider the following population model: $x'(t) = r x(t-d) - h x(t)$ , with growth rate $r > 0$ and death rate $h > 0$ , where only adults that have reached the age $d > 0$ can reproduce. I have to draw a bifurcation diagram. For $d = 0$ it looks easy but I don't know how to do it for $d > 0$ since it's my first encounter with a DDE. Can someone help me? My approach: The first thing to do would be to rescale the time so the delay becomes 1 and have all parameters outside $x$ . we can take $a = \frac{t}{d}$ , it'll probably look something like this: $\frac{1}{d}x′(a) = ... - h y(a)$ , where $y(a) = x(t)$ . I don't know how the term $r x(t-d)$ should look. I can also assume we seek solutions of the form $ce^{\lambda t}$ , so I can derive the characteristic function.
|
Assuming $x(t)=0\ \forall t\lt d$ , and using the Laplace transform, we have $$ \hat x(s) = \frac{x_0}{(s+h-r e^{-s d})} = \frac{x_0}{s+h}\frac{1}{1-\frac{r e^{-s d}}{(s+h)}} $$ now assuming $\left|\frac{r e^{-s d}}{s+h}\right|\lt 1$ we have $$ \hat x(s) = \frac{x_0}{s+h}\sum_{k=0}^{\infty}\left(\frac{r e^{-s d}}{s+h}\right)^k $$ and $$ \mathcal{L}^{-1}\left[\left(\frac{r e^{-s d}}{s+h}\right)^k\right]=\frac{r^k (t-d k)^{k-1} e^{d h k-h t} \theta (t-d k)}{\Gamma (k)} $$ where $\theta(\cdot)$ is the Heaviside theta function. A Mathematica script computing an example for $r = 0.2, h = 0.3, x_0 = 1, d = 2$ follows. n = 5; parms = {r -> 0.2, h -> 0.3, x0 -> 1, d -> 2}; xt = InverseLaplaceTransform[x0/(s + h) Sum[(r Exp[-s d]/(s + h))^k, {k, 0, n}], s, t]; xt0 = xt /. parms; Plot[xt0, {t, 0, n d} /. parms, PlotStyle -> Blue, PlotRange -> All] (Plots are shown for $r=0.2, h=0.3$ ; $r=0.2, h = 0.2$ ; and $r = 0.2, h = 0.1$ .)
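As a cross-check outside Mathematica: since $\hat x(s)=\frac{x_0}{s+h}\sum_k\left(\frac{re^{-sd}}{s+h}\right)^k$, the $k$-th term of $x(t)$ is $x_0\, r^k (t-dk)^k e^{-h(t-dk)}\theta(t-dk)/k!$. A Python sketch of my own evaluating this series and verifying the DDE numerically (same parameter values as the script above):

```python
import math

def x_series(t, r=0.2, h=0.3, x0=1.0, d=2.0):
    """Term-by-term inverse Laplace series for x'(t) = r*x(t-d) - h*x(t)."""
    if t < 0:
        return 0.0
    total, k = 0.0, 0
    while k * d <= t:   # theta(t - d*k) kills all later terms
        total += (r ** k) * (t - d * k) ** k * math.exp(-h * (t - d * k)) / math.factorial(k)
        k += 1
    return x0 * total

# Check the delay differential equation at a generic point by central differences.
t, eps = 5.0, 1e-5
lhs = (x_series(t + eps) - x_series(t - eps)) / (2 * eps)
rhs = 0.2 * x_series(t - 2.0) - 0.3 * x_series(t)
print(abs(lhs - rhs))   # close to zero: the series satisfies the DDE
```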
|
|ordinary-differential-equations|bifurcation|delay-differential-equations|
| 0
|
Is the limsup preserved under monotonically increasing functions?
|
I've done a bit of digging but I haven't found any sources to confirm that this is true. Take $\limsup{\{x_n\}}=L$ and $f(x): D \to \mathbb{R}$ to be a continuous and monotonically increasing function such that $\{x_n\} \subset D$ . Does $\limsup{f(x_n)} = f(L)$ ? I take it to be true because $f$ preserves order, and as there are only a finite number of terms greater than a given $L + \epsilon$ , after the application of the function there will still only be a finite number of terms greater than $f(L) + \epsilon$ . Any help would be appreciated!
|
Quoting your question: "Take $\limsup{\{x_n\}}=L$ and $f(x): D \to \mathbb{R}$ to be a continuous and monotonically increasing function such that $\{x_n\} \subset D$ . Does $\limsup{f(x_n)} = f(L)$ ?" i.) That $f(x)$ is continuous and monotonically increasing has no logical connection to $\{x_n\} \subset D$ . ii.) I am assuming that your question refers to $D$ bounded. Thus, using the sequential characterization of limits (Heine): $\lim_{x\rightarrow L}f(x)$ exists and $\lim_{x \rightarrow L}f(x)=M$ iff for every sequence $x_n\rightarrow L$ with $x_n\neq L\ \forall n$ we have $f(x_n)\rightarrow M$ . Now, $f(x)$ is continuous in $D\implies \lim_{x\rightarrow L}f(x)=f(L)=M$ by definition of continuity, which is equivalent to: $\forall (x_n)\rightarrow L \implies f(x_n)\rightarrow M$ . Thus: Let $(x_n)$ be a sequence with $\limsup(x_n)=L$ . Then there exists a subsequence of $(x_n)$ , namely $x_{n_k}$ , such that $x_{n_k}\rightarrow L$ . And since $f(x)$ is continuous, $f(x_{n_k})\rightarrow M=f(L)$ .
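A numerical illustration (my own, not part of the answer): with $x_n=(-1)^n(1+1/n)$ we have $\limsup x_n=1$, and for the continuous increasing $f=\exp$ the limsup of $f(x_n)$ is $e=f(1)$:

```python
import math

def limsup_tail(seq, tail_start):
    """Approximate limsup by the supremum of a late tail of the sequence."""
    return max(seq[tail_start:])

N = 100_000
x = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]
fx = [math.exp(t) for t in x]

print(limsup_tail(x, N // 2))    # ≈ 1
print(limsup_tail(fx, N // 2))   # ≈ 2.71828... = e
```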
|
|real-analysis|limsup-and-liminf|
| 1
|
Determining the curve to which variable line segments are tangent
|
A variable line passing through a fixed point $P(x_0,y_0)$ intersects the circle $x^2+y^2=a^2$ at some point $Q$ . If all the perpendicular bisectors of the variable line segment $PQ$ are tangent to a curve $C$ , determine $C$ if (i) $P$ is inside the circle (ii) $P$ is outside the circle (Note: in both cases, only select one point of intersection with the circle, do not find the perpendicular bisector of the chord between both points of intersection); Motivation: I was watching this video on Feynmann's lost lecture on planetary orbits and came across an interesting geometry puzzle; https://youtu.be/xdIjYBtnvZU?si=g-5pJiu_zo8SZf5 _ which moreover resolves the first part of the question but then my dense self could not understand why the center, $Q$ , and $P$ as defined in "geometry proof land" are collinear so I decided to turn to a hardcore coordinate geometry approach to the problem. I would really appreciate it if the answer to this problem also uses the tools of coordinate geometry
|
It relies on a known result (up to a homothety with ratio $1/2$ , as explained in the answer of @Intelligenti pauca); consider this excerpt from https://mathshistory.st-andrews.ac.uk/Curves/Definitions2/ : " Negative pedal : Given a curve C and O a fixed point, then for a point P on C draw a line perpendicular to OP. The envelope of these lines as P describes the curve C is the negative pedal of C. The ellipse is the negative pedal of a circle if the fixed point is inside the circle, while the negative pedal of a circle from a point outside is a hyperbola."
|
|analytic-geometry|physics|conic-sections|locus|
| 0
|
Hilbert transform of x (or a sawtooth)
|
This has a simple curiosity value, but I was converting the czt() (chirp z-transform) function from Octave into (wx)Maxima and, while testing it, I used a simple ramp, or $x$ . The appearance of the transform is that of an "U". So I was wondering if there is a closed form formula for the response. If I use: $$\dfrac{1}{\pi}\int{\dfrac{x}{t-x}\mathrm{d}x}=-\dfrac{t\log(x-t)+x}{\pi}$$ which is a logarithm on a slope; at best, it's only half the curve. There are other integral forms, I tried the difference version but that gives something very similar to the above. Then I realized that applying the FFT means the infinity $x$ becomes a modulo operation, or a sawtooth, and it's more fitting for a cosine sum: $\sum\frac{\cos{kx}}{k}$ . I modified the formula above (and used a bit of hammer time) and got: $$n\cdot\dfrac{\log(k+1)+\log(n-k+2)-7.33}{2.5},\quad k=1,2,...,n$$ which seems to come close to the actual curve (blue is the result of the Hilbert transform, green is the cosine sum, and r
|
Is there a continuous function that describes the Hilbert transform of (a periodic) $x$ ? Yes, we can start from the Fourier series expansion of a sawtooth wave: $$ s(x) = \frac{x}{\pi} = \sum_{n=1}^\infty B_n \sin(nx), \qquad B_n = \frac{2(-1)^{n+1}}{\pi n}, \qquad -\pi \lt x \lt \pi $$ Now it is quite quick to convert to the Hilbert transform because we have $H(\sin(nx))(x) = -\cos(nx)$ : $$ H(s)(x) = -\sum_{n=1}^\infty B_n \cos(nx) $$ When I punch this into a symbolic solver it gives me the following result: $$ H(s)(x) = -\frac{1}{\pi} \left(\ln(e^{-ix}+1) + \ln( e^{ix}+1)\right), \qquad x \neq \pi + 2\pi k,\ k\in \mathbb{Z} $$ So then the instantaneous absolute value is $$ |s(x) + iH(s)(x)| $$ So much for answering your initial question; however, I do notice that for your case relating to $$ s(x) = \frac{x}{\pi}, \qquad 0 \le x \lt 2\pi $$ using the same method of derivation does not line up exactly with the discrete result near the edges. However, the derivation via series expansion may help you to clear up why this is happening.
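The logarithmic expression can be simplified further: since $\sum_{n\ge1}(-1)^{n+1}\cos(nx)/n=\ln(2\cos(x/2))$ for $|x|<\pi$, the solver's result equals $-\frac{2}{\pi}\ln(2\cos(x/2))$. A Python sketch of my own comparing a partial cosine sum against this closed form:

```python
import math

def H_partial(x, N=200_000):
    """Partial sum of -sum B_n cos(n x), with B_n = 2(-1)^(n+1)/(pi n),
    i.e. the Hilbert transform of the sawtooth s(x) = x/pi on (-pi, pi)."""
    total = 0.0
    for n in range(1, N + 1):
        total += 2 * (-1) ** (n + 1) / (math.pi * n) * math.cos(n * x)
    return -total

def H_closed(x):
    """Closed form -(2/pi) ln(2 cos(x/2)), valid for |x| < pi."""
    return -(2 / math.pi) * math.log(2 * math.cos(x / 2))

x = 1.0
print(H_partial(x), H_closed(x))   # both ≈ -0.358
```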
|
|fourier-analysis|analytic-functions|integral-transforms|
| 0
|
If $\mu(\Omega) = \mu^*(A) + \mu^*(A^C)$, then $A$ is $\mu^*$-measurable
|
Let $\mu$ be an almost measure such that $\mu(\Omega) < \infty$ . Fix $A \subseteq \Omega$ . I'm required to show that if $\mu(\Omega) = \mu^*(A) + \mu^*(A^C)$ , then $A$ is $\mu^*$ -measurable, which is equivalent to showing that, for every $E \subseteq \Omega$ , $\mu^*(E \cap A)+\mu^*(E \setminus A) \leq \mu^*(E)$ . I don't know if I'm missing some result to successfully prove the desired result. Here is what I've done: Let $E \subseteq \Omega$ . Observe that $$\mu^*(E \cap A) + \mu^*(E \setminus A) \le \mu^*(A) + \mu^*(A^C) = \mu(\Omega) = \mu^*(\Omega) = \mu^*(E \cup E^C) \le \mu^*(E) + \mu^*(E^C)$$ since $(E \cap A) \subseteq A$ and $(E \setminus A) \subseteq A^C$ and $\mu^*$ is monotone. Note that $\mu^*(E^C) < \infty$ , since $\mu(\Omega) < \infty$ . So I have tried to show that: $$\mu^*(E \cap A) + \mu^*(E \setminus A) + \mu^*(E^C) \le \mu^*(A) + \mu^*(A^C)$$ Set-theoretically, the equality holds, but not necessarily with $\mu^*$ . I would appreciate any hints or other required results. Intuitively I require some result using $
|
I think you have too many $^*$ 's in your statement. You want to show $A$ is $\mu$ -measurable, otherwise writing $\mu^*(\Omega)=\mu^*(A)+\mu^*(A^c)$ doesn't make sense. To do this, let $A\subset B$ , $A^c\subset C$ , $B,C\in\mathscr{A}$ be such that $$\mu^*(A)\leq\mu(B)\leq \mu^*(A)+\varepsilon,$$ $$\mu^*(A^c)\leq\mu(C)\leq\mu^*(A^c)+\varepsilon.$$ Then $\mu(\Omega)=\mu^*(A)+\mu^*(A^c)\leq \mu(B)+\mu(C)$ . But notice $B\cap C=\emptyset$ by construction, so this gives $\mu(\Omega)=\mu(B\cup C)$ . From this you can show $A,B,$ and $C$ all differ from one another by a set of $\mu^*$ -measure 0. Hence $A$ is $\mu$ -measurable.
|
|measure-theory|
| 0
|
How to compute a minimal spanning set and the minimal spanning number of $\Bbb C[x,y]_{(x-1,y-1)}/(x^3-y^2)$?
|
Let $A=\frac{\mathbb C[X,Y]}{(X^3-Y^2)}$ . I am asked to show that $\mathbf m=(\overline X-1,\overline Y-1)$ is a maximal ideal of $A$ which I have shown successfully. Now I am asked to compute the $\mu(\mathbf mA_m)$ which is the unique cardinality of a minimal generating set of $\mathbf mA_m$ which I am unable to do. I know I have to calculate $\dim_{A_m/mA_m}(mA_m/(mA_m)^2)$ . Can someone help me to calculate it? I know I have to use the formula $\mu(mA_m)=\dim_{A_m/mA_m}(mA_m/(mA_m)^2)$ but I got stuck while actually doing it. Some help is needed as I am just a beginner.
|
The first step is to simplify the algebraic expression $mA_m/(mA_m)^2$ a bit. As localization is exact, $mA_m/(mA_m)^2\cong (m/m^2)_m$ , and since $A\setminus m$ acts invertibly on $m/m^2$ , we get that $mA_m/(mA_m)^2\cong m/m^2$ as $A/m$ -vector spaces. Now we'll do even a bit more work to make our problem easier: there's a natural map from $(x-1,y-1)\subset\Bbb C[x,y]$ to $m$ , and we can compose this with the map $m\to m/m^2$ to get a composite map $(x-1,y-1)\to m/m^2$ , which has kernel generated by $(x-1,y-1)^2$ and $x^3-y^2$ . So $m/m^2\cong (x-1,y-1)/((x-1,y-1)^2+(x^3-y^2))$ , so $m/m^2$ has a surjective map from the two-dimensional vector space $(x-1,y-1)/(x-1,y-1)^2$ with kernel generated by $f=x^3-y^2$ . Now all we have to do is figure out whether $f\in (x-1,y-1)^2$ or not: if $f\in(x-1,y-1)^2$ , then the kernel is zero, and $m/m^2$ is two-dimensional, else the kernel is the span of $f$ and $m/m^2$ is one-dimensional. I'll give you a shot at this computation on your own. You
|
|algebraic-geometry|commutative-algebra|problem-solving|localization|local-rings|
| 1
|
Is any ideal of a Banach algebra closed?
|
This is the proof I have obtained to show that any ideal of a Banach algebra is closed: If $\cal B$ is a Banach algebra, then let $I$ be a (two-sided) ideal. Let $\{y_n\}_{n\in{\mathbb N}}$ be a sequence of elements of $I$ converging to $y$ in $\cal B$ ; then $y$ must be in $I$ . Indeed, if it is not in $I$ , then there must exist an element $x$ in the complement of $I$ such that $xy$ is not in $I$ . Let $\{x_n\}_{n\in{\mathbb N}}$ be a sequence in the complement of $I$ converging to $x$ . Because such a sequence is an arbitrary sequence, the sequence $\{x_n y \}_{n\in{\mathbb N}}$ can be chosen to be entirely contained in the complement of $I$ and its elements to have distance from $I$ greater than $0$ for $n$ finite. Moreover the sequences $\{x_n y_m\}_{m\in{\mathbb N}}$ for $n$ fixed are entirely contained in $I$ and they converge to the elements $x_n y$ for $n\in{\mathbb N}$ . This is absurd because the elements $x_n y$ have been chosen to have distance from $I$ greater than $0$ for
|
Your argument does not work in the ubiquitous case when the ideal $I\triangleleft\cal B\,$ is a dense subset, since then each $b\in\cal B$ has distance zero from $I$ . Concrete examples are the ideal of finite-rank operators $\cal F(\mathcal H)\triangleleft\cal K(\mathcal H)\,$ in the compact operators on an (infinite-dimensional) separable Hilbert space $\mathcal H$ . Or simpler, the commutative Banach algebra $c_0 = \left\{(x_n)_{n\in\mathbb N} \mid x_n \text{ goes to zero}\right\}$ and its ideal of sequences having finite support. (The second example is obtained from the first, noncommutative one by restricting to a commutative subalgebra.)
|
|convergence-divergence|banach-spaces|ideals|operator-algebras|banach-algebras|
| 0
|
Estimating the number of integer tuples that satisfy a linear inequality
|
Given a linear inequality in $n$ variables: $$\sum\limits_{i=1}^n c_i x_i \leq b$$ I want to estimate how many positive integer tuples $(x_1, x_2, \dots x_n)$ that satisfy that inequality. What is the best way to do this? Obviously, I'm looking for something much more efficient than simply counting up the points in the grid that satisfy it. All the coefficients, $c_i$ and $b$ are positive.
|
The question is: Given a linear inequality in $n$ variables: $$\sum\limits_{i=1}^n c_i x_i \leq b$$ I want to estimate how many positive integer tuples $(x_1, x_2, \dots x_n)$ satisfy that inequality. All the coefficients $c_i$ and $b$ are positive. You didn't specify how accurate the estimate should be, but I will assume it's reasonable. I suggest first trying this for small values of $n$ . For example, suppose that $n=1$ . The inequality to solve becomes $$ 0 < c_1 x_1 \leq b. $$ The solution set region is the interval $(0,b/c_1]$ . Thus, an estimate of the integer solutions is the length of the interval, which is $(b/c_1).$ Similarly, suppose that $n=2.$ The inequality to solve becomes $$ 0 < c_1 x_1 + c_2 x_2 \leq b. $$ The solution set region is the right triangle with the right angle at the origin and with side lengths $b/c_1$ and $b/c_2$ . An estimate of the number of integer solutions is the area of the triangle, which is $(b/c_1)(b/c_2)/2.$ The same thing happens for $n=3$ where the volume of the tetrahedron solution set re
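The pattern generalizes: the count is approximately the volume $b^n/(n!\,\prod_i c_i)$ of the simplex $\{x_i>0,\ \sum c_ix_i\le b\}$. A quick Python sanity check for $n=2$ (illustrative values of my own choosing):

```python
import math

def brute_count(c, b):
    """Count positive integer pairs (x1, x2) with c1*x1 + c2*x2 <= b."""
    c1, c2 = c
    return sum(1 for x1 in range(1, b // c1 + 1)
                 for x2 in range(1, b // c2 + 1)
                 if c1 * x1 + c2 * x2 <= b)

def volume_estimate(c, b):
    """Volume of the simplex {x_i > 0, sum c_i x_i <= b}: b^n / (n! * prod c_i)."""
    n = len(c)
    return b ** n / (math.factorial(n) * math.prod(c))

c, b = (2, 3), 20
print(brute_count(c, b), volume_estimate(c, b))   # 27 vs ~33.3
```

The estimate overshoots for small $b$ because of boundary effects, but the relative error shrinks as $b$ grows.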
|
|combinatorics|integers|
| 1
|
$A_n-A_{n-1} = q^nA_n-aq^{n-1}A_{n-1}$ for Cauchy sequence
|
For $|q|<1$ , $|t|<1$ , we have $$1+\sum_{n=1}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n-1})t^n}{(1-q)(\cdots)(1-q^n)}=\prod_{n=0}^{\infty}\frac{(1-atq^n)}{(1-tq^n)}=\sum_{n=0}^{\infty}A_nt^n$$ From what I am reading they show that $A_n-A_{n-1}=q^nA_n-aq^{n-1}A_{n-1} \implies A_n = \frac{(1-aq^{n-1})}{(1-q^n)}A_{n-1}$ . I cannot see how this is the case; what exactly does $\sum_{n=0}^{\infty}A_nt^n$ equal? I tried the following: $$1+\sum_{n=1}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n-1})t^n}{(1-q)(\cdots)(1-q^n)} = 1+\sum_{n=0}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n})t^{n+1}}{(1-q)(\cdots)(1-q^{n+1})}$$ after shifting the index $n-1 \to n$ , and then I subtract $1$ on both sides to get $$\sum_{n=0}^{\infty}A_nt^n=\sum_{n=0}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n})t^{n+1}}{(1-q)(\cdots)(1-q^{n+1})}=\sum_{n=1}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n-1})t^n}{(1-q)(\cdots)(1-q^n)}$$ However this produces $t^{n+1}$ when starting from $n=0$ , unlike the above equation, and isn't the RHS $A_{n-1}$ ?
|
The question is about $$1+\sum_{n=1}^{\infty} \frac{(1-a)(\cdots)(1-aq^{n-1})t^n}{(1-q)(\cdots)(1-q^n)}=\prod_{n=0}^{\infty}\frac{(1-atq^n)}{(1-tq^n)}=\sum_{n=0}^{\infty}A_nt^n$$ The equation is usually written using the q-Pochhammer symbol as $$ F(a, t;q) := \sum_{n=0}^\infty \frac{(a;q)_n}{(q;q)_n} t^n =\prod_{n=0}^{\infty}\frac{(1-atq^n)}{(1-tq^n)}=\sum_{n=0}^{\infty}A_nt^n. \tag1 $$ Note that using the infinite product gives the equation $$ F(a, t;q) = \frac{1-at}{1-t} F(a, qt;q). \tag2 $$ Now using the infinite sums gives equations $$ \sum_{n=0}^\infty A_n (1-t)t^n = \sum_{n=0}^\infty A_n (1-at)(qt)^n. \tag3$$ Expanding both sides as a power series in $t$ and equating coefficients gives the equation $$ A_n-A_{n-1}=q^nA_n-aq^{n-1}A_{n-1}. \tag4$$
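Equation $(4)$ rearranges to $A_n(1-q^n)=A_{n-1}(1-aq^{n-1})$, which is exactly the product formula for the coefficients. A numerical spot check in Python (parameter values chosen arbitrarily with $|q|<1$, my own sketch):

```python
# A_n = prod_{k=1}^{n} (1 - a q^(k-1)) / (1 - q^k), the coefficients of the
# q-binomial series; check A_n - A_{n-1} = q^n A_n - a q^(n-1) A_{n-1}.

def A(n, a, q):
    result = 1.0
    for k in range(1, n + 1):
        result *= (1 - a * q ** (k - 1)) / (1 - q ** k)
    return result

a, q = 0.3, 0.5
for n in range(1, 8):
    lhs = A(n, a, q) - A(n - 1, a, q)
    rhs = q ** n * A(n, a, q) - a * q ** (n - 1) * A(n - 1, a, q)
    print(n, abs(lhs - rhs))   # all ~0 up to rounding
```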
|
|summation|cauchy-sequences|summation-method|einstein-notation|
| 0
|
Source for 5 term exact sequence in group cohomology
|
I know (see for example page 47 in Brown) that a short exact sequence of groups $$1 \rightarrow N \rightarrow G \rightarrow Q \rightarrow 1$$ gives rise to a 5 term exact sequence in their homology groups $$H_2(G) \rightarrow H_2(Q) \rightarrow H_1(N)_Q \rightarrow H_1(G) \rightarrow H_1(Q) \rightarrow 0$$ I also know that there is a 5 term exact sequence for cohomology groups but I can't find a source for it. My guess would be to reverse arrows and switch to invariants instead of co-invariants, so: $$ 0 \leftarrow H^2(G) \leftarrow H^2(Q) \leftarrow H^1(N)^Q \leftarrow H^1(G) \leftarrow H^1(Q) $$ Is this correct? Have I confused coinvariants and invariants? Does anyone has a source that lays this out nicely?
|
The five term exact sequence in group cohomology is: $$0 \to H^1(Q,M^N)\xrightarrow{\text{inf}}H^1(G,M) \xrightarrow{\text{res}}H^1(N,M)^Q \xrightarrow{\text{tg}}H^2(Q,M^N) \xrightarrow{\text{inf}}H^2(G,M).$$ For a reference see Huebschmann: Exact sequences in the cohomology of a group extension, J. of Algebra 444, 297-312 (2015) . There can also be found further references.
|
|abstract-algebra|group-theory|homological-algebra|group-cohomology|spectral-sequences|
| 0
|
An example of a non-small object in Top, illustrated on p. 49 of Hovey's model category textbook
|
The below is the description in Hovey's textbook, p. 49, of why the Sierpinski space is not small in the category Top. "The Sierpinski space, consisting of two points where exactly one of them is open, is not small in Top, as was pointed out to the author by Stefan Schwede. To see this, given a limit ordinal $\lambda$ , give the set $Y=$ $\lambda \cup\{\lambda\}$ the order topology. Let $X_\alpha$ be $Y \times\{0,1\}$ modulo the equivalence relation $(x, 0) \sim(x, 1)$ if $x < \alpha$ . Then the $X_\alpha$ define a $\lambda$ -sequence in Top. The colimit $X$ of the $X_\alpha$ is $Y$ with an extra point $(\lambda, 1)$ with exactly the same neighborhoods as $\lambda$ . These two points define a continuous map from the Sierpinski space into $X$ which does not factor continuously through any $X_\alpha$ ." It seems that the space $\{0,1\}$ is the Sierpinski space (not the discrete space). I can't figure out the last sentence: isn't the map $\{0,1\} \rightarrow X_0$ defined by $0 \mapsto (\lambda, 0)$ an
|
This example is corrected in errata: https://people.math.rochester.edu/faculty/doug/otherpapers/hovey-model-cats-errata.pdf .
|
|general-topology|model-categories|
| 0
|
Is the Gram-Schmidt Orthogonalization process for functions the same as it is for vectors?
|
I was not able to find resources online and was wondering so since it would greatly help me with my work if I can directly apply what I have learned from linear algebra.
|
The Gram-Schmidt orthonormalizing process can be applied to any (at most countable) set of linearly independent vectors in any inner product space. You can find the information at Wikipedia for finitely many vectors. To apply the process to functions, we must check that the functions we consider lie in some inner product space (so that we can apply Gram-Schmidt through its inner product). The most commonly used inner product space for functions is the $L^2$ space, which is the set of square-integrable functions (see this article in Wikipedia). By the way, you can learn the tools described above, such as general inner products, in functional analysis (as the generalization of linear algebra and topological theory for Euclidean spaces). If you are not familiar with the notation in the articles linked above, you can start with a standard textbook of functional analysis.
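As a concrete sketch (a standard example, my own code): running Gram-Schmidt on $1, x, x^2$ with the $L^2(-1,1)$ inner product reproduces the Legendre polynomials up to scaling, using exact rational arithmetic and the monomial integrals $\int_{-1}^1 x^k\,dx = 2/(k+1)$ for even $k$ (and $0$ for odd $k$):

```python
from fractions import Fraction

def inner(p, q):
    """L^2 inner product on [-1, 1] of polynomials given as coefficient lists."""
    s = Fraction(0)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if (i + j) % 2 == 0:                 # odd powers integrate to 0
                s += Fraction(pi) * Fraction(qj) * Fraction(2, i + j + 1)
    return s

def gram_schmidt(basis):
    """Classical Gram-Schmidt (without normalization) on coefficient lists."""
    n = max(len(v) for v in basis)
    vecs = [[Fraction(c) for c in v] + [Fraction(0)] * (n - len(v)) for v in basis]
    ortho = []
    for v in vecs:
        w = list(v)
        for u in ortho:
            coef = inner(v, u) / inner(u, u)
            w = [wi - coef * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

u0, u1, u2 = gram_schmidt([[1], [0, 1], [0, 0, 1]])
print(u2)   # coefficients of x^2 - 1/3, i.e. (2/3) P_2(x)
```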
|
|linear-algebra|ordinary-differential-equations|legendre-polynomials|gram-schmidt|
| 1
|
Uniform convergence of a series of functions depending upon a parameter
|
I'm unable to prove whether the following series converges uniformly on $[0,+\infty)$ for the values of the parameter $\beta$ between $2$ and $3$ . $$ \sum_{n\geq 1}\frac{n^\beta x}{x^4+n^4} $$ The maximum of the summed functions is realized (for fixed $n$ ) at $\frac{n}{\sqrt[4]{3}}$ and it is $\frac{\sqrt[4]{27}}{4}n^{\beta-3}$ . Clearly there is total convergence for $\beta<2$ and there is no convergence if $\beta\ge3$ except at $x=0$ . For $2\le\beta<3$ pointwise convergence holds but not total. How to prove uniform convergence? Any hint will be appreciated.
|
Note that for $x \in [0,\infty)$ , $$\sup_{x \in [0,\infty)}\left|\sum_{k = n+1}^\infty\frac{k^\beta x}{x^4+k^4}\right|\geqslant \sup_{x \in [0,\infty)}\sum_{k = n+1}^{2n}\frac{k^\beta x}{x^4+k^4}\geqslant\sup_{x \in [0,\infty)} n \cdot \frac{n^\beta x}{x^4 + (2n)^4}\\ \underbrace{\geqslant}_{x = n\in [0,\infty)} n \cdot \frac{n^\beta n}{n^4 + (2n)^4}= \frac{n^{\beta+2}}{17 n^4}= \frac{n^{\beta-2}}{17}$$ Since the RHS does not converge to $0$ as $n \to \infty$ for $\beta \geqslant 2$ , the series is not uniformly convergent on $[0,\infty)$ for $2 \leqslant \beta \leqslant 3$ .
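The chain of bounds can be verified numerically: evaluating the finite tail $\sum_{k=n+1}^{2n}$ at $x=n$ shows it staying above $n^{\beta-2}/17$ and in fact growing (a quick Python sketch of my own, with $\beta=2.5$):

```python
def tail_at_n(n, beta):
    """Partial tail sum_{k=n+1}^{2n} k^beta * x / (x^4 + k^4) evaluated at x = n."""
    x = float(n)
    return sum(k ** beta * x / (x ** 4 + k ** 4) for k in range(n + 1, 2 * n + 1))

beta = 2.5
for n in (10, 100, 1000):
    print(n, tail_at_n(n, beta), n ** (beta - 2) / 17)   # tail always exceeds the bound
```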
|
|real-analysis|sequences-and-series|uniform-convergence|pointwise-convergence|
| 1
|
When does open descending sequence contain closed descending sequence
|
In a topological space $(X,\mathcal{T})$ , consider any descending sequence of nonempty open sets: $A_{1} \supset A_{2} \supset A_{3} \supset \dots$ ; when does there exist a descending sequence of nonempty closed sets $C_{1} \supset C_{2} \supset C_{3} \supset\dots$ such that for all $i\in \mathbb{N}$ , $C_{i} \subset A_{i}$ ? I think this is false in general; for instance, consider the topology $(X, \{ A,\emptyset,X\})$ , where $\emptyset \neq A \subset X$ . $A$ itself forms a descending chain. I tried to see if using $T_{3}$ would help, or if this is true for general metric spaces, but am a little stuck. However, I felt this question should be simple. I was thinking about some basic theorems such as the characterization of completeness and compactness, and the Baire category theorem, which led me to wonder about this.
|
Note that if $X$ is T1, then it is sufficient to assume that $\bigcap_i A_i \neq \emptyset$ , since $C_i = \{x\}$ for all $i$ , where $x \in \bigcap_i A_i$ is any point, does the job, seeing as points in T1-spaces are closed. If $X$ isn't T1, you can of course replace the assumption by " $\bigcap_i A_i$ contains a closed set," but I'm not sure that you can do any better. In the case $\bigcap_i A_i = \emptyset$ , I don't think one can say much: Even if you assume that $X$ is a metric space, it can both happen that a sequence $C_i$ exists and that it doesn't. For the first case, consider for instance $\mathbb{N}$ with the discrete metric and let $A_i = \{i, i + 1, \ldots \}$ . Clearly $\bigcap_i A_i = \emptyset$ , but since $\mathbb{N}$ is discrete you can of course just let $C_i = A_i$ and be done. On the other hand, if you consider $\mathbb{R}$ and $A_i = (0, 1 / i)$ , then no sequence $\{C_i\}_i$ exists, for we would have $C_1 \cap (0, 1 / i) \neq \emptyset$ for all $i$ , which implies that $0 \in C_1$ since $C_1$ is closed, contradicting $C_1 \subseteq A_1 = (0, 1)$ .
|
|general-topology|functional-analysis|metric-spaces|
| 1
|
Show that $\int_{\arccos(1/4)}^{\pi/2}\arccos(\cos x (2\sin^2x+\sqrt{1+4\sin^4x})) \mathrm dx=\frac{\pi^2}{40}$
|
There is numerical evidence that $$I=\int_{\arccos(1/4)}^{\pi/2}\arccos\left(\cos x\left(2\sin^2x+\sqrt{1+4\sin^4x}\right)\right)\mathrm dx=\frac{\pi^2}{40}$$ How can this be proved? I was trying to answer another question , and I got it down to this integral. Wolfram does not evaluate the indefinite integral. I tried techniques from a roughly similar integral . Letting $u=\tan \frac{x}{2}$ , I got $$I=\int_{\sqrt{3/5}}^1 \dfrac{2\arccos{\left(\frac{(1-u^2)\left(8u^2+\sqrt{u^8+4u^6+70u^4+4u^2+1}\right)}{(1+u^2)^3}\right)}}{1+u^2}\mathrm du$$ but I don't know what to do with this.
|
Some algebra to arrive at a manageable integral: Notice that the $\arccos$ argument is the " $+$ " solution of a quadratic equation: $$\cos x\left(2\sin^2x\pm\sqrt{1+4\sin^4x}\right)=\frac{2\sin x \sin(2x)\pm\sqrt{4\sin^2 x\sin^2(2x)+4\cos^2 x}}{2}$$ More exactly, with $b=-2\sin x\sin(2x)$ and $c=-\cos^2x$ in $y_{\pm}=\frac{-b\pm\sqrt{b^2-4c}}{2}$ . We also have $\arccos y_-\pm\arccos y_+=\arccos\left(y_-y_+\mp\sqrt{1-(y_-+y_+)^2 +2y_-y_+ +y_-^2y_+^2}\right)$ , and since Vieta's relations give us $y_-+y_+=2\sin x\sin(2x)$ and $\ y_-y_+=-\cos^2 x$ , it makes sense to consider the following integrals: $$I_{\pm}=\int_{\arccos(1/4)}^{\pi/2}\arccos\left(\cos x\left(2\sin^2x\pm\sqrt{1+4\sin^4x}\right)\right) dx$$ The original integral can then be rewritten as $I_+=\frac12\left((I_- + I_+)-(I_- - I_+)\right)$ , where: $$I_- \pm I_+=\int_{\arccos(1/4)}^{\pi/2}\arccos\left(-\cos^2 x\mp\sin^2 x\sqrt{1-16\cos^2 x}\right)dx$$ $$\overset{\sqrt{1-16\cos^2 x}\to x}=\int_0^1 \frac{x\arcco
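Independently of the algebra, the claimed value $\pi^2/40$ can be confirmed numerically (a sketch of my own; the argument of $\arccos$ is clamped to guard against rounding at the left endpoint, where it equals exactly $1$):

```python
import math

def integrand(x):
    arg = math.cos(x) * (2 * math.sin(x) ** 2 + math.sqrt(1 + 4 * math.sin(x) ** 4))
    return math.acos(max(-1.0, min(1.0, arg)))   # clamp against rounding

def simpson(f, a, b, n=20_000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

a = math.acos(0.25)
I = simpson(integrand, a, math.pi / 2)
print(I, math.pi ** 2 / 40)   # both ≈ 0.24674
```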
|
|calculus|integration|trigonometry|definite-integrals|closed-form|
| 1
|
A recurrence relation of the Pell and Negative Pell Equation.
|
In solving for isosceles almost and nearly Pythagorean triples, I have observed that the set of triples: $$ S_n = \{ (1,1,1), (2,2,3) ,(5,5,7), (12, 12, 17), (29, 29, 41), \dots \}$$ are iso-APTs for odd $n$ and iso-NPTs for even $n$ . Moreover, I've observed that the next terms are given by: $$(x+y, x+y, x+2y)$$ where $(x,y)$ is a solution to $x^2-2y^2 = \pm 1$ . How can I show that if I have $(x,y)$ as a solution to either sign of $x^2-2y^2 = \pm 1$ , then $(x+2y, x+y)$ is a solution to the Pell equation of the opposite sign?
|
Note the two variable "Special Algebraic Identity" $$ 0 = (a + 2b)^2 - 2(a + b)^2 + a^2 - 2 b^2 \tag1 $$ which is a special case of the four variable identity $$ 0 = (ad-bc)^2 - ab(c-d)^2 - (a-b)(ad^2-bc^2). \tag2 $$ In equation $(1)$ replace $a$ with $x$ and $b$ with $y$ to get $$ (x+2y)^2 - 2(x+y)^2 = -(x^2 - 2y^2). \tag3 $$
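Iterating the map $(x,y)\mapsto(x+2y,\,x+y)$ from $(1,1)$ shows the sign toggling in practice, and the $y$-values $1, 2, 5, 12, 29, 70, \dots$ are exactly the repeated legs of the triples in the question (a small Python sketch):

```python
# The identity (x+2y)^2 - 2(x+y)^2 = -(x^2 - 2y^2) makes the map
# (x, y) -> (x + 2y, x + y) toggle between x^2 - 2y^2 = -1 and = +1.

x, y = 1, 1                      # x^2 - 2y^2 = -1
for _ in range(6):
    print((x, y), x * x - 2 * y * y)
    x, y = x + 2 * y, x + y

# prints: (1, 1) -1, (3, 2) 1, (7, 5) -1, (17, 12) 1, (41, 29) -1, (99, 70) 1
```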
|
|algebra-precalculus|elementary-number-theory|pell-type-equations|
| 0
|
An exact definition of multiplication
|
I am looking into repeated operations, and it seems really hard to precisely define multiplication. Of course, for integer $b$ and real number $a$ , we use the grade school definition we all know: $$ab = \underbrace{a + a + a + \cdots + a}_{b\text{ times}}$$ but what about for real numbers $a$ and $b$ ? For exponentiating (for integers: repeated multiplication), we have a precise formula to define it, which is easy to derive: $$a^x = \sum_{n=0}^{\infty} \frac{x^n \left(\ln(a)\right)^n}{n!}$$ which is nice because we only have integer powers in the sum, which we already know how to define: $$x^n = \underbrace{x \times x\times x \times \cdots \times x}_{n\text{ times}}$$ But this just raises the question of how we define $x \times x$ precisely. Is there an analogous formula to this for multiplication? How does the calculator compute multiplication of reals? Note: According to sources, just approximating multiplication for real numbers uses calculus or numerical methods. I cannot grasp wh
|
There's a very elegant way to define multiplication with a bit of machinery. I learnt this (the Tate/Eudoxos Reals) from Steve Schanuel in 1995. On the wikipedia page https://en.wikipedia.org/wiki/Construction_of_the_real_numbers linked by Ivan above, there's a reference to Construction from integers (Eudoxus reals), here's another description with some more details: https://mattbaker.blog/2021/12/15/the-eudoxus-reals/ . In this construction, multiplication of reals is composition of functions. The trick is to identify each real with the function (n -> r * n) on the integers, which (as the linked pages show) with the right quotient allows you to find an isomorph of the reals as a suitable quotient of suitable functions Z -> Z. Anyway, just a little gem that I think belonged in this thread.
|
|real-analysis|definition|arithmetic|real-numbers|
| 0
|
An $n\times n\times n$ cube is divided into $n^3$ unit cubes. How many pairs of such unit cubes exist which have no more than two vertices in common?
|
A cube with edge length n is divided by planes parallel to its faces into $n^3$ unit cubes. How many pairs of such unit cubes exist which have no more than two vertices in common? My solution: Two cubes have more than two vertices in common if they meet along a whole face. Each inner cube of the big cube has $6$ neighbours with which it is in contact with whole faces. There are $(n-2)^3$ such cubes. Each cube on a face of the big cube (but not on an edge) has $5$ neighbours which touch it with whole faces. There are $6(n-2)^2$ such cubes. Each cube on an edge of the big cube (but not at a vertex) has $4$ neighbours with which it is in full face contact. There are $12(n-2)$ such cubes. Each vertex cube of the big cube has $3$ neighbours which are contacted by whole faces. There are $8$ vertices. Therefore, we have: $\frac{1}{2} \left(6(n-2)^3 + 5\cdot 6(n-2)^2 + 4\cdot 12(n-2) + 3\cdot 8\right) = 3n^2(n-1)$ pairs that have more than two vertices in common (we need to divide the sum by $2$ because without it we would count each pair twice)
|
$$\binom{n^3}{2} - 3n^2(n-1) = \frac{1}{2}n^2(n^4 \color{red}{-} 7n + 6) = \frac{1}{2}n^2(n-1)(n^3 + n^2 + n - 6),$$ but otherwise, your reasoning is correct. $$\begin{array}{c|c|c|c} \text{Cube type} & \text{Number of cubes} & \text{Number face-adjacent} & \text{Number non-face-adjacent} \\ \hline \text{interior} & (n-2)^3 & 6 & n^3 - 6 - 1 \\ \text{interior face} & 6(n-2)^2 & 5 & n^3 - 5 - 1 \\ \text{edge} & 12(n-2) & 4 & n^3 - 4 - 1 \\ \text{corner} & 8 & 3 & n^3 - 3 - 1 \end{array}$$ Hence twice the total number of pairs is $$(n-2)^3 (n^3 - 7) + 6(n-2)^2 (n^3 - 6) + 12(n-2) (n^3 - 5) + 8(n^3 - 4) = n^2(n^4 - 7n + 6).$$
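For small $n$ this count can be brute-forced directly; here is a check (my own addition, not from the answer) confirming the closed form $\frac12 n^2(n^4-7n+6)$.

```python
from itertools import combinations, product

def pairs_at_most_two_shared(n):
    # enumerate unit cubes by min corner; each cube is its set of 8 vertices
    cubes = [frozenset(product((i, i + 1), (j, j + 1), (k, k + 1)))
             for i, j, k in product(range(n), repeat=3)]
    return sum(1 for a, b in combinations(cubes, 2) if len(a & b) <= 2)

for n in (2, 3, 4):
    assert pairs_at_most_two_shared(n) == n ** 2 * (n ** 4 - 7 * n + 6) // 2
```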
|
|combinatorics|
| 1
|
Why can a compound biconditional statement whose individual statements don't all have the same truth values be true?
|
Why can $P_1 ⇔ P_2 ⇔ P_3 ⇔ \ldots ⇔ P_n$ be true when not all the $P$ ’s have the same truth value? For example: If P1 = T P2 = T P3 = F P4 = F would this be true? T(P1) ⇔ T(P2) ⇔ F(P3) ⇔ F(P4) = T(P1&2) ⇔ F(P3) ⇔ F(P4) = F(P1,2,3) ⇔ F(P4) = True If so how can an evaluation for a series of compound biconditionals with index n and x (individual truth values) be represented as a formula? Potentially statement is true with known x = 0; x = n; ?
|
How can P1 ⇔ P2 ⇔ P3, ... ⇔ Pn be True when P1 =/= P2 =/= P3 =/= Pn and all P are not false ? Your ‘P1 =/= P2 =/= P3 =/= Pn’ is a little unclear, but from the title of your post, it is clear that your question is: how can $P1 ⇔ P2 ⇔ P3, ... ⇔ Pn$ be true if it is not the case that all $P$ ’s are true, nor the case that all $P$ ’s are false? In other words, how can $P1 ⇔ P2 ⇔ P3, ... ⇔ Pn$ be true if it is not the case that all $P$ ’s have the same truth-value? (I think this is what you tried to express with your ‘P1 =/= P2 =/= P3 =/= Pn’) This is a good and important question, and many students express the same confusion when they run into this. Here is the thing: the way the $\Leftrightarrow$ is used in mathematics as a meta-logical statement is different from how it is used as a logical truth-functional operator. Specifically, as a metalogical statement we write $P_1 \Leftrightarrow P_2 \Leftrightarrow P_3 \Leftrightarrow P_4$ to mean that all these statements are equivalent to each other.
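The truth-functional reading can be verified exhaustively: a left-associated chain of $\Leftrightarrow$ evaluates to true exactly when the number of false inputs is even, which is why the example in the question comes out true. A small check (my addition):

```python
from itertools import product

def chained_iff(values):
    # left-associated evaluation: ((P1 <-> P2) <-> P3) <-> ... <-> Pn
    result = values[0]
    for v in values[1:]:
        result = (result == v)
    return result

for n in range(1, 6):
    for values in product((True, False), repeat=n):
        # true iff the number of False inputs is even
        assert chained_iff(values) == (values.count(False) % 2 == 0)

print(chained_iff((True, True, False, False)))  # True, as in the question
```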
|
|discrete-mathematics|logic|propositional-calculus|boolean-algebra|
| 1
|
Prove that if $\alpha \geq1$ and $n\geq 2\alpha \log(2\alpha)$ then $n \geq \alpha \log(2n)$
|
I want to prove this statement: $\alpha \geq1$ and $n\geq 2\alpha \log(2\alpha)$ then $n \geq \alpha \log(2n)$ It seemed pretty simple but I couldn't do too much. My first attempt was $$ \log(n) \geq \log ( 2\alpha \log(2\alpha) ) $$ But this attempt didn't look promising. Could you give me a hint?
|
Assume, for the sake of contradiction, that there exists $(\alpha, n)$ such that $\alpha \ge 1$ and $n \ge 2\alpha \ln (2\alpha)$ and $n \lt \alpha \ln(2n)$ . We have $\alpha \ln(2n) > n \ge 2\alpha \ln(2\alpha)$ which results in $n \ge 2\alpha^2$ . Thus, we have $n \lt \alpha \ln(2n) \le \sqrt{n/2}\,\ln(2n)$ which results in $\sqrt{2n} \lt \ln(2n)$ , or $\mathrm{e}^{\sqrt{2n}} \lt 2n$ . However, using $\mathrm{e}^x \ge 1 + x + \frac12 x^2 + \frac16 x^3$ for all $x > 0$ , we have $$\mathrm{e}^{\sqrt{2n}} \ge 1 + \sqrt{2n} + \frac12 \sqrt{2n}^2 + \frac16 \sqrt{2n}^3 > 2n.$$ Contradiction.
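A numeric spot-check of the statement (my addition; it samples $n$ at and above the threshold $2\alpha\log(2\alpha)$ and confirms $n \ge \alpha\log(2n)$):

```python
import math
import random

random.seed(0)
for _ in range(20000):
    alpha = random.uniform(1.0, 50.0)
    # sample n at and above the hypothesis threshold 2*alpha*log(2*alpha)
    n = 2 * alpha * math.log(2 * alpha) + random.uniform(0.0, 100.0)
    assert n >= alpha * math.log(2 * n) - 1e-9
```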
|
|inequality|logarithms|
| 1
|
Is there a prime number $p$ dividing $1+2!^2+3!^2+\cdots (p-1)!^2$?
|
Is there a prime number $p$ with $p \mid \sum_{j=1}^{p-1} j!^2$ ? I checked the primes upto $600\ 000$ without finding a solution. Heuristic : If we can assume that the probability that $p$ is a solution is $\frac{1}{p}$ , then there should be infinite many solutions and therefore the desired prime should exist. But I guess this is not the case and maybe the special expression can easily be determined to be or not to be divisible by $p$ . Motivation : Such a prime number would prove that $1+2!^2+3!^2+\cdots n!^2$ can be prime only for finite many positive integers $n$ and also give an upper bound for the possible $n$ . If the answer to the question is no , there is a chance that there are infinite many such primes.
|
Yes. By running a script which computes this sum mod $p$ through the first $100\,000$ primes, I found one solution: $p = 1\,248\,829$ ( $96\,379$ th prime). UPDATE. This is also the only solution for the first $1\,000\,000$ primes. UPDATE 2. If the probability that $p$ is the solution is indeed $1/p$ , it follows from the Mertens' third theorem , that the probability to find the solution between $10^{m-1}$ th prime and $10^m$ th prime is asymptotically $1/m$ . This could explain our serendipitous finding, but also discourages to look any further.
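A sketch of such a script (my reconstruction; the original answerer's code is not shown). It accumulates $j! \bmod p$ incrementally, so the whole sum costs $O(p)$ modular multiplications per prime.

```python
def divides_sum(p):
    # does p divide 1!^2 + 2!^2 + ... + (p-1)!^2 ?
    total, fact = 0, 1
    for j in range(1, p):
        fact = fact * j % p          # j! mod p, built incrementally
        total = (total + fact * fact) % p
    return total == 0

# small primes do not work, per the question's search up to 600000
assert not any(divides_sum(p) for p in (3, 5, 7, 11, 13))
print(divides_sum(1248829))
```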
|
|summation|prime-numbers|divisibility|factorial|
| 1
|
Imaginary asymptotics for the digamma function
|
I often see asymptotics and precise expansion for the gamma $\Gamma$ or the digamma $\psi$ function $\psi$ when the argument goes to $+\infty$ , in particular when it stays real (or in a given angle sector towards $+\infty$ ). I would like to know the precise asymptotics along the imaginary axis, i.e. asymptotics for $$\psi(x_0 + iy) = \frac{\Gamma'}{\Gamma}(x_0 + iy)$$ when $x_0$ is fixed, say positive, and $y$ goes to $\pm \infty$ . Do we know such an expansion, with explicit dependencies in $x_0$ ? Typically, the Stirling formula $$\Gamma(z) \sim \sqrt{\frac{2\pi}{z}} \left(\frac{z}{e} \right)^z$$ is valid for all complex number in the angle sector $|\mathrm{arg}(z) - \pi| \geq \delta$ for any $\delta > 0$ . This is unfortunately not enough to obtain information on the derivative $\Gamma'$ , and therefore on $\psi$ . Is there a similar formula for the digamma function?
|
Differentiating the known asymptotic expansion of $\log\Gamma(z+h)$ with respect to $z$ gives $$ \psi (z + h) \sim \log z + \frac{{2h - 1}}{{2z}} - \sum\limits_{k = 2}^\infty {\frac{{( - 1)^k B_k (h)}}{k}\frac{1}{{z^k }}} , $$ as $z\to \infty$ in the sector $|\arg z|\le \pi-\delta$ , with any fixed $h\in \mathbb C$ . Here $B_k$ denotes the $k$ th Bernoulli polynomial. Taking $h=x_0\in \mathbb R$ and $z=\pm\mathrm{i}y$ with $y>0$ , we obtain $$ \psi (x_0 \pm {\rm i}y) \sim \log y \pm \frac{\pi }{2}{\rm i} \mp \frac{{2x_0 - 1}}{{2y}}{\rm i} - \sum\limits_{k = 2}^\infty {\frac{{( \pm {\rm i})^k B_k (x_0 )}}{k}\frac{1}{{y^k }}} $$ as $y\to+\infty$ .
|
|complex-analysis|asymptotics|taylor-expansion|gamma-function|
| 0
|
Meaning of the $ f(z+a)$ for complex function $f(z)$
|
Here is a question that has been bothering me for a long time. Say the complex function $f(z) = u(x,y) + iv (x,y)$ . Consider $T : z \to z+a$ for $a \in \mathbb{C}$ (here $a = \alpha + i \beta$ ). Then, $f(z+a) = u(x+\alpha, y+\beta) + i v(x+\alpha, y+\beta)$ . I.e. $f(z+a)$ is the translation by " $- \alpha$ " along the x axis (real part) and " $-\beta$ " along the y axis, respectively, for each of $u(x,y)$ and $v(x,y)$ . But a problem occurs, as below. I took the example $f(z) = z$ together with $g(z) = z+1 = f(z)+1$ . Then we can get $g$ by translating $f$ by " $+1$ " in the positive real direction. On the other hand, from $f(z+1) = g(z) = (x+1) + i (y)$ , we can get $g$ by translating by " $-1$ " in the real direction. Hence even though they are the same function, one ( $f(z)+1$ ) maps to $x=1$ but the other ( $f(z+1)$ ) maps to $x=-1$ , considering $x=0$ . I'm very confused about the meaning of $f(z+a) (= f \circ T)$ . What
|
It just means the composition of two functions. Let $z = x+ iy, a = \alpha + i\beta\in \mathbb{C}, x, y, \alpha, \beta \in \mathbb{R}$ , and set $T: z \mapsto z + a, f: \mathbb{C} \rightarrow \mathbb{C}$ . First you need to know $$(f \circ T)(z) = f(T(z))$$ which is the definition of the composition of two functions. Then since $T(z) = z + a$ , we have $$(f \circ T)(z) = f(T(z)) = f(z + a)$$ which is exactly what you need. For the example you give, we set $f: z \mapsto z, T: z \mapsto z + 1$ , then we have $$(f \circ T)(z) = f(T(z)) = f (z + 1) = z + 1$$ And there is no $-1$ in the expression.
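A tiny illustration (my addition) of the point: $g(z)=f(z+1)$ takes at $z$ the value $f$ takes at $z+1$, so no " $-1$ " ever appears in the expression itself.

```python
# f(z) = z, T(z) = z + 1, so (f ∘ T)(z) = f(z + 1) = z + 1
f = lambda z: z
T = lambda z: z + 1
g = lambda z: f(T(z))

assert g(2 + 3j) == 3 + 3j
assert g(0) == f(1)        # g "sees" at 0 what f sees at 1
```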
|
|complex-analysis|
| 0
|
Can a set of vectors span P2 and P3? If a set of vectors does not include a value for X^3, but spans P2, can it still span P3?
|
Given these four vectors, $x^2 + x$ , $2x^2$ , $2x-3$ , $9$ , I have determined they are linearly dependent and that they give the vector equations $c_1 + 2c_2 = 0$ , $c_1 + 2c_3 = 0$ , $-3c_3 + 9c_4 = 0$ , which I think have infinitely many solutions… I'm confused whether they span P3 or not. Or if it's possible for them to span P3 given $(1, x, x^2, x^3)$ isn't possible.
|
They can in general, but in your specific case they can't because they cannot generate $x^3$ . Perhaps, to expand on this, you may also be interested in an answer to the question: what is the meaning of $x$ ? Here is one possible explanation in the case of real polynomials. (See also here , and a more advanced discussion here .): If $V$ is the vector space of all infinite real sequences with operations defined naturally, we let: $$1=(1,0,0,\cdots),$$ $$x=(0,1,0,0,\cdots),$$ $$x^2=(0,0,1,0,0,\cdots),$$ $$x^3=(0,0,0,1,0,0,\cdots),$$ and so on. The vector space obtained by the span of $\{1,x,x^2,\cdots,x^n\}$ is the vector space $P_n$ . Now $$x^2+x=(0,1,1,0,0,\cdots),$$ $$2x^2=(0,0,2,0,0,\cdots),$$ $$2x-3=(-3,2,0,0,\cdots),$$ $$9=(9,0,0,\cdots)$$ and it is easily seen there is no way that any linear combination of the above can yield a non-zero value at the fourth position. Hence these cannot generate $x^3$ . It follows that these vectors cannot span $P_3$ .
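The coordinate-vector picture in this answer can be checked mechanically; a small sketch (my addition, stdlib only) verifying that the four coefficient vectors have rank $3$ and no $x^3$ component:

```python
from fractions import Fraction

def rank(rows):
    # exact Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# coefficient vectors in the basis (1, x, x^2, x^3)
vectors = [(0, 1, 1, 0),   # x^2 + x
           (0, 0, 2, 0),   # 2x^2
           (-3, 2, 0, 0),  # 2x - 3
           (9, 0, 0, 0)]   # 9

assert all(v[3] == 0 for v in vectors)  # no x^3 component anywhere
assert rank(vectors) == 3               # they span P2 (dim 3), not P3 (dim 4)
```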
|
|linear-algebra|polynomials|vector-spaces|
| 0
|
Sum of numbers in $[0,1]$ raised to exponents at least as large as $1$.
|
For $a,b\in [0,1]$ and $\epsilon\geq 0$ , does the following inequality hold? $a^{1+\epsilon}+b^{1+\epsilon}\geq |a-b|^{1+\epsilon}$ All I can think to do so far is: \begin{align*} |a-b|^{1+\epsilon} = ((a-b)^2)^{(\epsilon+1)/2} \end{align*} I want to know if I can apply this result, I know such a result is true for $\epsilon=1$ , i.e. $a^2+b^2\geq (a-b)^2$
|
As a hint: without loss of generality (the case $b=0$ is immediate) divide both sides by $b^{1+\epsilon}$ , so $$a^{1+\epsilon}+b^{1+\epsilon}\geq |a-b|^{1+\epsilon} \iff \left(\frac ab\right)^{1+\epsilon}+1\ge \left|\frac ab-1\right|^{1+\epsilon}.$$ Now take $\frac ab=x$ with $x \in [0,\infty)$ and set $f(x)=x^{1+\epsilon}+1-|x-1|^{1+\epsilon}$ ; note that $|x-1|^{1+\epsilon}=|1-x|^{1+\epsilon}$ . Now try to prove $f(x)\ge 0$ . I write out the case $0\leq x \lt 1$ and leave the rest to you: for $0\leq x \lt 1$ , $$f(x)=x^{1+\epsilon}+1-(1-x)^{1+\epsilon}, \qquad f'(x)=(1+\epsilon)\left(x^{\epsilon}+(1-x)^{\epsilon}\right)>0,$$ so $f$ is an increasing function on this interval, and checking $f(0)=0$ shows $f(x)\ge 0$ there (one can also note $f(\frac 12)=1$ and $\lim_{x\to \infty} f(x)=+\infty$ ). So the inequality is proved for $0\leq x \lt 1$ . I think you can take it from here.
|
|algebra-precalculus|exponentiation|
| 0
|
$ \chi (G) $ when $G$ is triangle free
|
Let $G$ be a triangle-free graph. Let $n_{0} = v(G)$ . Now $G$ will have an independent set of size $\lfloor \sqrt{n_{0}} \rfloor$ . You can google wikipedia on triangle-free graphs and you will get the proof. What I would like to do is this. Remove the above independent set, call it $S$ . Now $G - S$ is also triangle free. Let $n_{1} = v(G-S)$ . Do the same thing, i.e. remove an independent set of size $\lfloor \sqrt{n_{1}} \rfloor$ . Repeat the process until we have $0$ vertices left. Clearly, if $v(G) \geq 1$ , the operation will remove a positive number of nodes, until there are $0$ nodes left. Question is, how do I prove the number of steps, until termination, is $ \leq 2 \lceil \sqrt{n_{0}} \rceil$ ? I tried induction, but it fails. Consider $n = 625$ . Then $625 - \lfloor \sqrt{625} \rfloor = 600$ . Suppose the proposition is true for all $n \lt 625$ . But $2 \lceil \sqrt{600} \rceil = 50$ . So I cannot prove this by showing $2 \lceil \sqrt{n - \lfloor \sqrt{n} \rfloor} \rceil + 1 \leq 2 \lceil \sqrt{n} \rceil$ .
|
Lemma. A triangle-free graph on $\binom{k+1}2$ vertices has an independent set of size $k$ . Proof by induction on $k$ . Suppose $G$ is a triangle-free graph on $\binom{k+2}2$ vertices; I claim that $G$ has an independent set of size $k+1$ . Choose a vertex $v$ . If $v$ has at least $k+1$ neighbors we're done, since the set of neighbors of $v$ is independent. If $v$ has at most $k$ neighbors, let $H$ be the subgraph that remains after removing $v$ and its neighbors from $G$ , which has at least $\binom{k+2}2-(k+1)=\binom{k+1}2$ vertices. By the inductive hypothesis $H$ contains an independent set $S$ of size $k$ , so $S\cup\{v\}$ is an independent set of size $k+1$ . Theorem. A triangle-free graph on $\binom{k+1}2$ vertices has chromatic number at most $k$ . Proof by induction on $k$ . Suppose $G$ is a triangle-free graph on $\binom{k+2}2$ vertices; I claim that $\chi(G)\le k+1$ . By the lemma, $G$ has an independent set $S$ of size $k+1$ , and $G-S$ has $\binom{k+2}2-(k+1)=\binom{k+1}2$ vertices, so by the inductive hypothesis $\chi(G-S)\le k$ ; giving the vertices of $S$ a new color shows $\chi(G)\le k+1$ .
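A concrete illustration (my addition) on the Petersen graph, which is triangle-free on $10=\binom{5}{2}$ vertices, so the theorem gives $\chi\le 4$ ; its actual chromatic number is $3$:

```python
from itertools import combinations, product

def chromatic_number(n, edges):
    # exact, by brute force over colourings; fine for tiny graphs
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return k

# Petersen graph: outer 5-cycle, inner pentagram, five spokes
outer = [(i, (i + 1) % 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
spokes = [(i, 5 + i) for i in range(5)]
petersen = outer + inner + spokes

E = {frozenset(e) for e in petersen}
triangle_free = not any(
    {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E
    for a, b, c in combinations(range(10), 3))
assert triangle_free
assert chromatic_number(10, petersen) == 3   # within the bound chi <= 4
```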
|
|graph-theory|coloring|
| 0
|
If $B = x(xI-A)^{-1}$ for a generator matrix $A$, then $B-B^2$ has positive diagonal elements
|
Let $A$ be the generator matrix of a continuous-time Markov chain. This means that $A$ has positive off-diagonal elements $A_{ij} > 0$ , $i \ne j$ , and row sums $\sum_j A_{ij}$ equal to $0$ . For example, $A$ could be $$ A = \left( \begin{matrix} -7 & 4 & 3 \\ 1 & -2 & 1 \\ 3 & 5 & -8 \end{matrix} \right). $$ I am interested in proving the following claim about the matrix $B = x(x I - A)^{-1}$ for some $x > 0$ . Claim. For the matrix $B = x(x I - A)^{-1}$ , it holds that the diagonal elements of $B - B^2$ are non-negative. Using numerical simulations I have convinced myself that this claim is likely true; however, I have not been able to make much progress toward proving it. It is straightforward to show that the matrix $B$ is stochastic. However, the claim above is not true for all stochastic matrices $B$ ; there is something special about stochastic matrices of this particular form. Any ideas?
|
The missing part of this post is a proof that $B$ is a stochastic matrix. Such a proof can quickly lead to a proof that OP's claim is true. I suggest using Inverse of strictly diagonally dominant matrix for reference. Mimicking the link, we have $A' := I-A$ is a strictly diagonally dominant matrix with positive diagonal and negative off diagonal and $\mathbf 1 = A'\mathbf 1$ . Now $\delta A'$ has all diagonal entries $\lt \frac{1}{n}$ for some $\delta \gt 0$ small enough. Write $Q = I- \delta A'$ which is a positive matrix and has Perron vector $\mathbf 1$ that satisfies $Q\mathbf 1 = (1-\delta) \mathbf 1$ . $\delta^{-1}\big(I-A\big)^{-1}=\big(\delta A'\big)^{-1}= \big(I-Q\big)^{-1}=I + Q + Q^2 + Q^3+\dots $ where the spectral radius of $Q$ is $1-\delta$ hence the series converges (i.) This proves $\big(I-A\big)^{-1}$ is a positive matrix and $\big(I-A\big)^{-1}\mathbf 1 = \mathbf 1$ hence it is a stochastic matrix. (ii.) $Q$ , being strictly substochastic in each row, is a transition
|
|linear-algebra|matrices|stochastic-processes|markov-chains|stochastic-matrices|
| 0
|
Bevel gear design based geometry problem.
|
Given two cones $C_1, C_2$ centered at the vertex of a unit sphere and knowing only the angle $x$ between the axes of the cones, would it be possible to find the two conical angles at the vertex? I presume there isn't sufficient information for this. So if in addition I know that the diameters of the base circles of the two cones are in the ratio $d_1:d_2 = p:q$ for some natural numbers $p,q$ , can I express the angles of the two cones in terms of this ratio? The attached picture sort of captures what I wish to say : Edit 1: The bigger and smaller cones are tangent to each other along a ruling, to the left as seen in the picture.
|
Let the half-angle of each cone be $\theta$ , $\varphi$ with $0 \lt \varphi \lt \theta \lt \frac{\pi}{2}$ . Then from your diagram, we have $$\theta - \varphi = x.$$ Furthermore, assume $0 \lt d_1 \lt d_2 \le 2$ . Then $$\sin \theta = \frac{d_2}{2}, \quad \sin \varphi = \frac{d_1}{2}.$$ Consequently if $d_1/d_2 = p/q$ , we have $$q \sin \varphi = p \sin \theta = p \sin (x + \varphi) = p \left(\sin x \cos \varphi + \cos x \sin \varphi\right),$$ hence $$0 = p \sin x \cos \varphi + (p \cos x - q) \sin \varphi$$ or $$\tan \varphi = \frac{p \sin x}{q - p \cos x}.$$ Therefore, $$\varphi = \tan^{-1} \frac{\sin x}{q/p - \cos x}$$ and $\theta = x + \varphi$ .
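A numeric check of the final formula (my addition), using an assumed sample ratio $p:q=2:3$ and axis angle $x=40^\circ$; it confirms $\theta-\varphi=x$ and $\sin\theta/\sin\varphi=d_2/d_1=q/p$.

```python
import math

def cone_half_angles(x, p, q):
    # phi for the smaller cone (d1), theta = x + phi for the larger (d2),
    # from tan(phi) = p sin x / (q - p cos x)
    phi = math.atan2(p * math.sin(x), q - p * math.cos(x))
    return x + phi, phi

x = math.radians(40.0)
p, q = 2, 3                      # assumed sample ratio d1 : d2 = 2 : 3
theta, phi = cone_half_angles(x, p, q)

assert math.isclose(theta - phi, x, rel_tol=1e-9)
assert math.isclose(math.sin(theta) / math.sin(phi), q / p, rel_tol=1e-9)
```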
|
|geometry|euclidean-geometry|analytic-geometry|
| 1
|
Sum of numbers in $[0,1]$ raised to exponents at least as large as $1$.
|
For $a,b\in [0,1]$ and $\epsilon\geq 0$ , does the following inequality hold? $a^{1+\epsilon}+b^{1+\epsilon}\geq |a-b|^{1+\epsilon}$ All I can think to do so far is: \begin{align*} |a-b|^{1+\epsilon} = ((a-b)^2)^{(\epsilon+1)/2} \end{align*} I want to know if I can apply this result, I know such a result is true for $\epsilon=1$ , i.e. $a^2+b^2\geq (a-b)^2$
|
Without loss of generality consider $a\ge b \ge 0$ , then $$a^{1+\epsilon} + b^{1+\epsilon} \ge a^{1+\epsilon} \ge \lvert a-b\rvert^{1+\epsilon}.$$ Using the increasing property of $x^{1+\epsilon}$ for $x\ge 0$ : $$\begin{align*} b &\ge 0 &&\implies & b^{1+\epsilon} &\ge 0\\ a &\ge a-b \ge 0 &&\implies & a^{1+\epsilon} &\ge (a-b)^{1+\epsilon} \end{align*}$$
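A quick random spot-check of the inequality (my addition):

```python
import random

random.seed(1)
for _ in range(20000):
    a, b = random.random(), random.random()
    eps = random.uniform(0.0, 5.0)
    # small slack for floating-point rounding
    assert a ** (1 + eps) + b ** (1 + eps) >= abs(a - b) ** (1 + eps) - 1e-12
```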
|
|algebra-precalculus|exponentiation|
| 1
|
Prerequisite of rough path theory
|
I'm currently reading A Course on Rough Paths: With an Introduction to Regularity Structures , but I find it difficult to get started even though I've taken undergraduate mathematical analysis, linear algebra, algebra, ODE and PDE courses in the math department. I'm not familiar with the concepts of "tensor", "tensor product", "Lie group" and "Lie algebra", which were not mentioned in my analysis courses. Are there any introductory textbooks/materials that elaborate on these concepts?
|
This is because this book is more like a second volume to the first work by P.Friz "Multidimensional Stochastic Processes as Rough Paths: Theory and Applications" . Here they go over in detail the Lie-algebra perspective for Rough paths.
|
|rough-path-theory|
| 0
|
Why do the constants $\gamma$, $\zeta(2)$ and $\zeta(3)$ appear in the values of $\Gamma''(1)$ and $\Gamma'''(1)$?
|
I have been interested in the gamma function and its derivatives recently and was surprised to see these constants appear in the values of the derivatives of the gamma function: $$\Gamma'(1) = -\gamma$$ $$\Gamma''(1) = \gamma^2 + \zeta(2)$$ $$\Gamma'''(1) = -2\zeta(3)-\gamma^3-\frac{\gamma\pi^2}{2}$$ I have read that $$-\int_0^\infty e^{-t}\ln(t)dt$$ is a definition of $\gamma$ , which also happens to be $-\Gamma'(1)$ , so this isn't very surprising to see showing up in higher derivatives of $\Gamma(z)$ . I am more curious about where the zeta values are coming from. I should clarify that I am currently taking calculus 2 and the basic integration techniques I know don't seem to get me anywhere in solving these by hand, so I got these closed form values from Wolfram Alpha. Maybe there is some more advanced technique which would easily explain where the zetas are coming from?
|
This is a consequence of applying repeatedly the Leibniz integral rule for differentiation under the integral sign (the case with constant limits of integration) which you might do in Calculus II: $$\begin{aligned} \Gamma'(1)=\left[\dfrac{\mathrm d}{\mathrm dz}\displaystyle\int_0^\infty x^{z-1}e^{-x}\mathrm dx\right|_{z=1}&=\left[\displaystyle\int_0^\infty \dfrac{\partial e^{(z-1)\ln x}}{\partial z}e^{-x}\mathrm dx\right|_{z=1}\\ &=\left[\displaystyle\int_0^\infty \ln xe^{(z-1)\ln x}e^{-x}\mathrm dx\right|_{z=1}\\ &=\displaystyle\int_0^\infty \ln x\ e^{-x}\mathrm dx=-\gamma \end{aligned} $$ $$\begin{aligned} \Gamma''(1)=\left[\dfrac{\mathrm d^2}{\mathrm dz^2}\displaystyle\int_0^\infty x^{z-1}e^{-x}\mathrm dx\right|_{z=1}&=\left[\dfrac{\mathrm d}{\mathrm dz}\displaystyle\int_0^\infty \dfrac{\partial e^{(z-1)\ln x}}{\partial z}e^{-x}\mathrm dx\right|_{z=1}\\ &=\left[\dfrac{\mathrm d}{\mathrm dz}\displaystyle\int_0^\infty \ln xe^{(z-1)\ln x}e^{-x}\mathrm dx\right|_{z=1}\\ &=\left[\displaystyle\int_0^\infty \ln^2 x\, e^{(z-1)\ln x}e^{-x}\mathrm dx\right|_{z=1}=\displaystyle\int_0^\infty \ln^2 x\ e^{-x}\mathrm dx=\gamma^2+\zeta(2) \end{aligned}$$
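These values can be sanity-checked numerically with finite differences of `math.gamma` (my addition; step size and tolerances are ad hoc):

```python
import math

h = 1e-3
euler_gamma = 0.5772156649015329

# central finite differences of Gamma at z = 1
d1 = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)
d2 = (math.gamma(1 + h) - 2 * math.gamma(1) + math.gamma(1 - h)) / h ** 2

assert abs(d1 - (-euler_gamma)) < 1e-5                         # Gamma'(1) = -gamma
assert abs(d2 - (euler_gamma ** 2 + math.pi ** 2 / 6)) < 1e-4  # Gamma''(1) = gamma^2 + zeta(2)
```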
|
|calculus|gamma-function|riemann-zeta|
| 1
|
mean value property for subharmonic functions (probabilistic proof)
|
A function $u \in C^2(D)$ on a domain $D$ is called subharmonic on $D$ if $\Delta u \geq 0$ . Let $B(x, r)$ be the ball of radius $r$ centered at $x$ in $\mathbb{R}^d$ . Give a probabilistic proof, that is, one using ideas such as Ito integration and submartingale properties, that if $u$ is subharmonic on $D$ , then it satisfies the following analogue of the mean value property: If $x \in D$ is such that $B(x, r) \subseteq D$ then $$ u(x) \leq \frac{1}{|\partial B(x, r)|} \int_{\partial B(x, r)} u(y) d y $$ where $\partial B(x, r)=\{y:|x-y|=r\}$ is the sphere of radius $r$ centered at $x$ and $|\partial B(x, r)|$ denotes its surface area.
|
This is standard material in potential theory, e.g. see Brownian Motion by Peter Mörters and Yuval Peres. The following proof is modified from here Harmonic functions and Brownian motion for the subharmonic case. Fix a ball $B(x,r)$ and consider $\tau= \inf \{t>0 : |B_t-x| = r \}$ . Now compute by Itô's formula $$u(B_\tau) = u(B_0) + \int_0^\tau \nabla u(B_s)\cdot dB_s + \frac{1}{2}\int_0^\tau \Delta u(B_s)\, ds $$ (since subharmonic functions are at least $C^{2}$ ). Take expectations: $$\Bbb{E}[u(B_\tau)] = u(B_0) + \Bbb{E}\bigg[\int_0^\tau \nabla u(B_s)\cdot dB_s\bigg] + \frac{1}{2}\Bbb{E}\bigg[\int_0^\tau \Delta u(B_s)\, ds\bigg] $$ Since $\Bbb{E}\big[\int_0^\tau \nabla u(B_s)\cdot dB_s\big] = 0$ and $\Delta u\geq 0$ (subharmonic) we obtain that $$\Bbb{E}[u(B_\tau)] \geq u(B_0)=u(x)$$ Now it suffices to take a Brownian motion starting at $x$ and to note by rotation invariance that $$\Bbb{E}[u(B_\tau)] = \frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u(y)\, dS(y)$$ and therefore you obtain the mean value property $$u(x) \leq \Bbb{E}[u(B_\tau)] = \frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u(y)\, dS(y).$$
|
|stochastic-calculus|brownian-motion|harmonic-functions|
| 1
|
How do I simplify $\sqrt{\frac{1-\frac{\sqrt5 }5}2}$?
|
I've been stuck on simplifying this nested radical. I've included a snapshot of the problem and solution that is in the trigonometry book that I am studying. I've omitted the actual trig problem and just posted a pic of the part that is stumping me. I get the first part correctly but the second part of the equation (the actual correct answer according to the book) is what is stumping me. I don't know if the book is wrong but any help would be appreciated.
|
$\sqrt {\frac{\frac{\sqrt 5}{\sqrt 5} - \frac{1}{\sqrt 5}}{2}} = \sqrt{\frac{\sqrt 5 - 1}{2 \sqrt 5}} = \sqrt{\frac{\Phi}{\sqrt 5} - \frac{1}{{\sqrt5}}} = \sqrt {\frac{\Phi - 1}{\sqrt 5}}$ Can you please post the problem.
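A numeric confirmation (my addition) that the two forms agree, taking $\Phi=\frac{1+\sqrt 5}{2}$:

```python
import math

phi = (1 + math.sqrt(5)) / 2                # the golden ratio Phi
lhs = math.sqrt((1 - math.sqrt(5) / 5) / 2)
rhs = math.sqrt((phi - 1) / math.sqrt(5))

assert math.isclose(lhs, rhs, rel_tol=1e-12)
print(lhs)  # ≈ 0.5257311121
```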
|
|algebra-precalculus|radicals|nested-radicals|
| 0
|
Given a square matrix $A$, both $AA^T$ and $A^TA$ are symmetric
|
I need help with a proof for my linear algebra class. If $A$ is a square matrix, then $AA^T$ and $A^TA$ are symmetric. I have no idea where to start!
|
Taking the transpose of $ AA^T $ and using the reverse order law of transposition, $ (AB)^T = B^TA^T $ , would be the simplest proof.
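A concrete check (my addition) of the identity $(AA^T)^T=(A^T)^T A^T = AA^T$ on a sample matrix, in plain Python:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    # standard row-by-column product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 0], [3, -1, 4], [0, 5, 2]]
for M in (matmul(A, transpose(A)), matmul(transpose(A), A)):
    assert M == transpose(M)   # both products are symmetric
```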
|
|linear-algebra|matrices|
| 0
|
Proving $2c(1-c)+(1-2c)(a-c)b>0$ assuming $a\in(0,1), b\in(0,1), c\in(0,1)$
|
I have the following inequality I am trying to prove using the assumptions $a\in(0,1), b\in(0,1), c\in(0,1)$ : $2c(1-c)+(1-2c)(a-c)b>0$ The following Mathematica code (also visible in this online notebook ) shows the inequality to be true: Simplify[2 c (1 - c) + (1 - 2 c) (a - c) b > 0, 0 < a < 1 && 0 < b < 1 && 0 < c < 1] Any pointers (math or software) in the right direction would be much appreciated. I have tried using Trace in Mathematica, to no avail. As for math, I have tried expanding but cannot eliminate terms in such a way as to get anything tractable, apparently.
|
We have $$2c(1-c)+(1-2c)(a-c)b = 2c(1-c)(1-b)+ba(1-c)+bc(1-a) > 0.$$ $2c(1-c)+(1-2c)(a-c)b = 2c(1-c)+(1-2c)ab - (1-2c)cb$ $= 2c(1-c)+(1-2c)ab - (2-2c)cb + cb$ $= 2c(1-c)(1-b)+(1-2c)ab + cb$ $= 2c(1-c)(1-b) + (ab - abc) + (cb - abc)$
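A random spot-check (my addition) that the rewritten right-hand side matches the original expression and stays positive on the open cube:

```python
import random

random.seed(2)
for _ in range(50000):
    # sample strictly inside (0, 1) for each variable
    a, b, c = (0.001 + 0.998 * random.random() for _ in range(3))
    lhs = 2 * c * (1 - c) + (1 - 2 * c) * (a - c) * b
    rhs = 2 * c * (1 - c) * (1 - b) + b * a * (1 - c) + b * c * (1 - a)
    assert abs(lhs - rhs) < 1e-12   # the algebraic identity
    assert lhs > 0                  # positivity on the open cube
```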
|
|inequality|proof-explanation|
| 1
|
Possible Mordell-Weil groups of Weierstrass elliptic curves
|
I am finding the Mordell-Weil group of the elliptic curve of the form $y^2=x^3+ax+b$ over the rational field. Using Magma, I found that many of them have the form $\mathbb{Z}/2\times \mathbb{Z}/2\times \mathbb{Z}$ or $\mathbb{Z}/2\times \mathbb{Z}/2\times \mathbb{Z}\times \mathbb{Z}$ . Can we determine full candidates of the group form? From Elliptic curves with trivial Mordell–Weil group over certain fields. , the answer gave full possibilities of $E(\mathbb{F}_3)$ but did not show how to find them. I've read the book like "Rational points on elliptic curves" but I cannot find the answer. Can anyone help?
|
Even in the case $b=0$ , the Mordell-Weil group over the rational field is not completely determined. The Mordell-Weil group consists of two parts, what we call the rank part and the torsion part $E(K)_{tor}$ . Rank is an extremely difficult object. In 'Rational Points on Elliptic Curves' by Silverman and Tate, it is stated that the rank of the elliptic curve $E_p: y^2 = x^3 + px\ (p\equiv 1\mod8:\text{prime})$ over $\Bbb{Q}$ is believed to be either $0$ or $2$ . For the torsion part, more is known by Mazur's theorem: $E(\Bbb{Q})_{tor}\cong \Bbb{Z}/N\Bbb{Z}\ (N=1,2,\ldots,10,12)$ or $\Bbb{Z}/2\Bbb{Z}\times \Bbb{Z}/2N\Bbb{Z}\ (1\le N\le 4)$ .
|
|group-theory|elliptic-curves|
| 0
|
Where is the optimal point to “hide” in a shape?
|
A puzzle I’ve wondered about but never got around to solving/verifying: In the game Pokemon Let’s Go, the Pokemon Abra instantly teleports away if the player is detected within its line of sight. Let’s say you’re in a square of length $s$ and one Abra randomly spawns in the square, with a random orientation, and has a 60-degree line of sight in front of it. What’s the optimal place in the square to be so that Abra doesn’t teleport away? If Abra has an unlimited length of sight (i.e. can see as far as possible within the square), it seems like any place in the square is equally good? Since given any place you choose to be, no matter where Abra spawns in the square, there is a uniform 5/6 chance that Abra doesn’t detect you due to its orientation? However, if Abra does have a limited length of sight (i.e. can only spot you from a distance $d$ away), my initial intuition would be to hide in the corner. But is there an elegant argument here without e.g. using calculus? And are there any co
|
Abra must first be within range of you, i.e. in the disc $D$ of radius $d$ around you . If within range, there is always a $\frac16$ chance its sight sector includes you, i.e. it spots you and Teleports – this is because circles and intersections of convex shapes are all convex. The two events are independent. By minimising the intersection area of $D$ and the park (square) $P$ , you therefore minimise the chance of Abra spotting you. If $d\ge\sqrt2s$ there is no advantage to hiding in a corner, since $P\subseteq D$ no matter where $D$ 's centre is within $P$ . Otherwise – and this is obvious , not just intuition – placing $D$ 's centre at a corner minimises the probability in question. This argument naturally generalises to $P$ any convex shape, and indeed any shape if we assume there are no walls around the park.
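A Monte Carlo sketch (my addition) of the key quantity $|D\cap P|$ for the unit square, comparing a corner hiding spot with the centre for $d=\frac12$:

```python
import math
import random

def overlap_area(cx, cy, d, s=1.0, samples=200_000):
    # Monte Carlo estimate of |D ∩ P|: D = disc of radius d at (cx, cy),
    # P = the s-by-s square [0, s] x [0, s]
    rng = random.Random(0)
    hits = 0
    for _ in range(samples):
        x, y = rng.random() * s, rng.random() * s
        if (x - cx) ** 2 + (y - cy) ** 2 <= d * d:
            hits += 1
    return hits / samples * s * s

d = 0.5
corner = overlap_area(0.0, 0.0, d)   # quarter disc, exactly pi d^2 / 4
center = overlap_area(0.5, 0.5, d)   # full disc, exactly pi d^2

assert corner < center
assert math.isclose(corner, math.pi * d * d / 4, rel_tol=0.05)
```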
|
|geometry|optimization|puzzle|
| 1
|
$K$ is the splitting field of degree $p$ polynomial $f(x)$. If $[K:F] =tp, t \in \mathbb{N}$, $f(x)$ is irreducible
|
Let $p$ be a prime and let $f(x) \in F[x]$ have degree $p$ . Let $K$ be the splitting field for $f(x)$ over $F$ . Suppose $[K:F] =tp$ for some $t \in \mathbb{N}$ . Prove that $f(x)$ is irreducible over $F$ . Here's what I have so far. Suppose not. Suppose instead that $f(x)$ is reducible . Then $f(x) = g(x)h(x)$ for nonconstant $g(x),h(x) \in F[x]$ . Since $p$ is prime, then WLOG $\deg(g(x)) = p-1$ and $\deg(h(x)) = 1$ . Assume also that $h(x)$ is monic. Then $h(x)=x-\psi$ for some $\psi\in F$ . This means $\psi$ is a root of $f(x)$ in $F$ . Since $K$ is the splitting field for $f(x)$ , we know $$F \subseteq F(\psi) \subseteq K$$ Therefore, \begin{equation} [K:F]=[K:F(\psi)][F(\psi): F] = tp \end{equation} The chain of inclusions is actually strict, i.e. $$F \subset F(\psi) \subset K$$ and this is because $f(x)$ has more than just $\psi$ as a root, by our reducibility assumption. So neither $K/F(\psi)$ nor $F(\psi)/F$ are trivial extensions. Now, $$[F(\psi):F] \lt p$$ because $p$ is prime. Als
|
As I noted in comments, your argument is incorrect from the start, as you assume that if $f$ is not irreducible, then it must have a linear factor. That is not true. Claim. If $f(x)\in F[x]$ is separable and has degree $k$ , then its splitting field has degree over $F$ dividing $k!$ . Proof. The splitting field is Galois over $F$ ; and the Galois group acts on the roots of $f$ ; since the splitting field is generated by the roots of $f$ , the action of any element of the Galois group is completely determined by the action on the roots. Thus, the Galois group is isomorphic to a subgroup of $S_k$ , hence has order dividing $k!$ . Now let $f(x)$ have degree $p$ . If $f(x)$ is reducible, $f(x) = g(x)h(x)$ with $\deg(g)=k\lt p$ , $\deg(h)=\ell\lt p$ , then the splitting field $L$ of $g$ has degree dividing $k!$ ; and $K$ is the splitting field of $h$ over $L$ , so $[K:L]\mid \ell!$ . Therefore, $[K:F]=[K:L][L:F]\mid \ell!k!$ . In particular, since both $\ell$ and $k$ are strictly smaller than $p$ , the prime $p$ does not divide $\ell!k!$ , hence $p\nmid [K:F]$ . This contradicts $[K:F]=tp$ , so $f(x)$ must be irreducible.
|
|abstract-algebra|field-theory|
| 1
|
How to evaluate Ahmed's integral?
|
How to show that: $$\int_{0}^{1}\frac{\tan^{-1}\sqrt{x^{2}+2}}{(x^{2}+1)\sqrt{x^{2}+2}}\mathop{\mathrm{d}x}=\frac{5\pi ^{2}}{96}$$ I saw this on Wolfram .
|
Observing that $$ \int_0^1 \frac{d y}{y^2+x^2+2}=\left[\tan ^{-1}\frac{y}{\sqrt{x^2+2}}\right]_0^1 =\tan ^{-1} \frac{1}{\sqrt{x^2+2}}=\frac{\pi}{2}-\tan ^{-1} \sqrt{x^2+2}, $$ we have $$ \begin{aligned} \int_0^1 \frac{\tan ^{-1}\left(\sqrt{x^2+2}\right)}{\left(x^2+1\right) \sqrt{x^2+2}} d x& =\frac{\pi}{2} \int_0^1 \frac{1}{\left(x^2+1\right) \sqrt{x^2+2}} d x-\int_0^1 \frac{1}{x^2+1}\left(\int_0^1 \frac{1}{y^2+x^2+2} d y\right) d x \\ & =\frac{\pi^2}{12}-\frac{1}{2} \int_0^1 \int_0^1\left(\frac{1}{x^2+1}+\frac{1}{y^2+1}\right) \frac{1}{y^2+x^2+2} d y d x \textrm{ (By symmetry)}\\ & =\frac{\pi^2}{12}-\frac{1}{2} \int_0^1 \int_0^1 \frac{1}{\left(x^2+1\right)\left(y^2+1\right)} d y dx \\ & =\frac{\pi^2}{12}-\frac{1}{2}\left(\int_0^1 \frac{1}{x^2+1} d x\right)\left(\int_0^1 \frac{1}{y^2+1} d y\right) \\ & =\frac{\pi^2}{12}-\frac{1}{2} \cdot \frac{\pi}{4} \cdot \frac{\pi}{4} \\ & =\frac{5}{96} \pi^2 \end{aligned} $$
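A numeric check of the final value with composite Simpson's rule (my addition, stdlib only):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

f = lambda x: math.atan(math.sqrt(x * x + 2)) / ((x * x + 1) * math.sqrt(x * x + 2))

# agrees with 5 pi^2 / 96 to high precision
assert math.isclose(simpson(f, 0.0, 1.0), 5 * math.pi ** 2 / 96, rel_tol=1e-8)
```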
|
|integration|definite-integrals|
| 0
|
Showing B is not derivable from A
|
Let's take it that under the assumption of A, ~C is true. And let's also take it that if ~C is true and if B is derivable from A, then C is true. I want to show that B is not derivable from A. Is this possible under these assumptions? It seems like we can assume that B is derivable from A for the sake of obtaining a contradiction. So there is some proof that begins by assuming A and obtains B in a finite amount of steps. We take that derivation under which we already have A as an assumption and within it, since we know that A gives ~C, we conclude that ~C is true. Now since we also know that if ~C and if B is derivable from A, then C is true. So we get to a contradiction, and so B is not provable from A. Is this valid? Or does this only show that under the assumption that B is derivable from A, then ~A is derivable by the empty set?
|
Assume A $\supset$ B. (for contradiction) Assume ( $\lnot$ C $\land$ (A $\supset$ B)) $\supset$ C. (gathered from your question) ( $\lnot$ C $\land$ True) $\supset$ C. (replacing A implies B with its truth value) $\lnot$ C $\supset$ C. (reducing "and True") $\lnot$ C only implies C if $\lnot$ C is false. We can conclude that C is true under the assumptions. But this isn't a contradiction. Instead, using Implication, $\lnot$ C $\supset$ C is equivalent to C $\lor$ C. Maybe another answerer can find a contradiction but I don't. Quoting your first paragraph, "let's also take it that if $\lnot$ C is true and if B is derivable from A, then C is true." Let's say instead, "if $\lnot$ C is true and anything , then C is true." It seems like you are trying to conjure up a contradiction with these C terms, then stick it onto the A and B term. I couldn't make it work. Is there a particular problem that inspires the question? We should be able to work with the symbols but I am just curious.
|
|logic|
| 0
|
About the Exponential function
|
Consider function $y=a^x$ , $a>1$ , and we need to show that $$\frac{2(a-1)}{a+1}<\ln a<\sqrt{2a-1}-1.$$ My idea is to use $y=a^x = \exp{(x\ln a)}$ , then find the derivative of $y$ at $x=0$ , which is $\ln a$ . But I don’t know how to continue beyond this. Any idea on using geometry or something to prove it?
|
For LHS, set $$f(a) = \ln a - \frac{2(a-1)}{a+1} = \ln a + \frac{2}{a+1} - 2, a> 1$$ Then we have $$f'(a) = \frac{1}{a} - \frac{2}{(a+1)^2} = \frac{(a+1)^2 - 2a}{a(a+1)^2} = \frac{a^2 + 1}{a(a+1)^2} > 0$$ So $f$ is increasing when $a> 1$ , and this means $f(a) > f(1) = 0$ . So we have LHS of the inequality. For RHS, set $x = -1 + \sqrt{2a-1} > 0$ since $a>1$ , so we have $$a = \frac{1}{2}(x+1)^2 + \frac{1}{2} = 1 + x + \frac{1}{2}x^2$$ By Taylor series of $e^x$ , we have $$e^x = 1 + x + \frac{1}{2}x^2 + ... + \frac{1}{n!}x^n + ... >1 + x + \frac{1}{2}x^2 = a$$ And this means $$e^x = e^{-1 + \sqrt{2a-1}} > a$$ So we have $$-1 + \sqrt{2a-1} > \ln a$$ which is exactly RHS.
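The two bounds proved above, $\frac{2(a-1)}{a+1}<\ln a<\sqrt{2a-1}-1$ , can be spot-checked numerically (a sketch; the sample points are arbitrary):

```python
import math

# Spot-check of the two-sided bound
#   2(a-1)/(a+1) < ln(a) < sqrt(2a-1) - 1   for a > 1,
# at a few arbitrary sample points.
for a in (1.01, 1.5, 2.0, 10.0, 100.0):
    lhs = 2 * (a - 1) / (a + 1)
    rhs = math.sqrt(2 * a - 1) - 1
    assert lhs < math.log(a) < rhs
```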
|
|geometry|analysis|functions|
| 0
|
Is the characteristic polynomial positive for values greater than the spectral radius of the matrix?
|
Because $$\det(tI-A)=t^n+...$$ So the characteristic polynomial is positive for big enough values of $t$ . If the characteristic polynomial is negative for some number greater than the spectral radius then it would be 0 for some point by the intermediate value theorem. Is this correct?
|
Your argument is correct, but there is an elementary algebraic argument that does not rely on calculus. Factorise $\det(tI-A)$ as $$ \prod_{i=1}^{n_1}(t-p_i)\prod_{j=1}^{n_2}(t+q_j)\prod_{k=1}^{n_3}(t-\lambda_k)(t-\overline{\lambda}_k), $$ where the $p_i$ s are nonnegative, the $q_j$ s are positive and the $\lambda_k$ s are non-real. Then for each $i$ , since $\rho(A)\ge p_i$ , we have $t-p_i>0$ whenever $t>\rho(A)$ ; for each $j$ , since $q_j$ is positive, $t+q_j>0$ whenever $t\ge0$ ; for each $k$ , since $\lambda_k$ is not real, $(t-\lambda_k)(t-\overline{\lambda}_k)= |t-\lambda_k|^2>0$ whenever $t\in\mathbb R$ . It follows that $\det(tI-A)>0$ when $t>\rho(A)$ .
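A quick numerical illustration of the conclusion (a sketch with a random matrix; the seed and size are arbitrary):

```python
import numpy as np

# Sketch: for a random real matrix A, det(tI - A) should be positive
# at any t strictly exceeding the spectral radius of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
rho = max(abs(np.linalg.eigvals(A)))        # spectral radius of A
for t in (rho + 0.1, rho + 1.0, rho + 10.0):
    assert np.linalg.det(t * np.eye(5) - A) > 0
```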
|
|linear-algebra|abstract-algebra|determinant|
| 0
|
Computing all possible conformal factors on the sphere
|
Proposition to prove . Let $\tau\colon \mathbb S^n\to \mathbb S^n$ be a conformal map, meaning that $\tau^\star g= \Lambda^2 g$ for a scalar field $\Lambda$ . (Here $g$ denotes the standard metric tensor on $\mathbb S^n$ ). Then there exists $\theta\in\mathbb R^{n+1}$ with $|\theta|<1$ such that $$\Lambda(\omega)=(1-\theta\cdot \omega)^{-1},\qquad \forall \omega\in \mathbb S^n.$$ I have found this statement in some sources; for example, in this paper of Chen-Frank-Weth , p. 7, "since the Jacobian determinant...", without proof. Can you show me a proof, or point me in the right direction in the literature?
|
I can prove something close to the statement in the question for the 2d case. Let me know if there is anything wrong! We want to consider automorphisms of $CP^1\cong \mathbb{S}^2$ . The canonical coordinate of $CP^1$ is related to the spherical coordinate through the stereographic projection: $$ z = \cot\left(\frac{\theta}{2}\right)e^{i\phi}, $$ It follows that ( Stereographic projection is conformal --- from the line element ) $$ g = d\theta^2 + \sin^2\theta d\phi^2 $$ $$ = \frac{4dzd\overline{z}}{(1+|z|^2)^2}, $$ The automorphisms of $CP^1$ are Möbius transformations, which can be generated by $$ z \rightarrow az + b, \quad z \rightarrow \frac{1}{z}, $$ where $a,b\in \mathbb{C}$ . But $$ \frac{4d\frac{1}{z}d\frac{1}{\overline{z}}}{(1 + |\frac{1}{z}|^2)^2} = \frac{4dzd\overline{z}}{(1+|z|^2)^2} $$ so the metric is invariant under the inversion, and we only need to consider translation and scaling. Continuing with the transformation $z \rightarrow az + b$ , the metric pulls back to $$ \frac{4|a|^2\,dz\,d\overline{z}}{(1+|az+b|^2)^2}, $$ from which the conformal factor of the transformation can be read off.
|
|differential-geometry|riemannian-geometry|conformal-geometry|
| 0
|
Why is the matrix product of 2 orthogonal matrices also an orthogonal matrix?
|
I've seen the statement "The matrix product of two orthogonal matrices is another orthogonal matrix. " on Wolfram's website but haven't seen any proof online as to why this is true. By orthogonal matrix, I mean an $n \times n$ matrix with orthonormal columns. I was working on a problem to show whether $Q^3$ is an orthogonal matrix (where $Q$ is orthogonal matrix), but I think understanding this general case would probably solve that.
|
A hint, and the approach is not that straightforward, but it may give a different perspective: check that the map from the group of invertible square matrices to itself $$g \mapsto (g^{t})^{-1}$$ is an automorphism (an involution in fact). Now, for every endomorphism of a group, the set of fixed points forms a subgroup. In our case, the subgroup is $O(n, \mathbb{R})$ (real case), or $U(n)$ (in the complex case). Note also that we have an automorphism of a Lie group, and the corresponding automorphism of Lie algebras is $X \mapsto -X^t$ .
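For the direct computation the question asks about: if $Q_1,Q_2$ are orthogonal, then $(Q_1Q_2)^T(Q_1Q_2)=Q_2^TQ_1^TQ_1Q_2=Q_2^TQ_2=I$ , so $Q_1Q_2$ is orthogonal, and by induction so is $Q^3$ . A numerical sketch (orthogonal factors from random QR decompositions; the seed and size are arbitrary):

```python
import numpy as np

# Sketch: numerically illustrate that the product of two orthogonal matrices
# is orthogonal, and that Q^3 is orthogonal for orthogonal Q.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # QR gives an orthogonal Q1
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
P = Q1 @ Q2
assert np.allclose(P.T @ P, np.eye(4))             # columns of P are orthonormal
Q3 = Q1 @ Q1 @ Q1
assert np.allclose(Q3.T @ Q3, np.eye(4))           # Q^3 is orthogonal too
```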
|
|linear-algebra|matrices|orthogonality|orthogonal-matrices|
| 0
|
$H_1 ,H_2 \unlhd \, G$ with $H_1 \cap H_2 = \{1_G\} $. Prove every two elements in $H_1, H_2$ commute
|
This is the proof, which I mostly understand except for one bit: You have $h_1 \in H_1$ and $h_2 \in H_2$. We also have $h_1^{-1}(h_2^{-1}h_1h_2) \in H_1$, because $h_2^{-1}h_1h_2 \in h_2^{-1}H_1h_2 = H_1$. Similarly, we have $(h_1^{-1}h_2^{-1}h_1)h_2 \in H_2$. Therefore $$ h_1^{-1}h_2^{-1}h_1h_2 \in H_1 \cap H_2 = \{1_G\} $$ and so $h_1^{-1}h_2^{-1}h_1h_2 = \{1_G\}$. Let's first multiply everything on the left by $h_1$ $$h_1^{-1}h_2^{-1}h_1h_2 = \{1_G\}$$ $$ h_1 h_1^{-1}h_2^{-1}h_1h_2 = h_1 \{1_G\}$$ $$ h_2^{-1}h_1h_2 = h_1 \{1_G\}$$ Multiply both sides on the left by $h_2$ giving us $$ h_1h_2 = h_2 h_1 $$ The bit I don't get is right at the beginning. Why is this correct: "We also have $h_1^{-1}(h_2^{-1}h_1h_2) \in H_1$, because $h_2^{-1}h_1h_2 \in h_2^{-1}H_1h_2 = H_1$."
|
This is another proof I made with concepts from my group theory class. For any subgroups $H,K$ we define on $H\times K$ the equivalence relation $(h_1,k_1)\equiv (h_2,k_2)$ if $h_1 k_1 = h_2 k_2$ , and then the bijection \begin{align} \Phi: H&\cap K \rightarrow [(h,k)]\\ &x \longrightarrow (hx,x^{-1}k) \end{align} where $[(h,k)]$ is an equivalence class. From all of this, we get $|H\cap K|=|[(h,k)]|$ . Once we have this, suppose $H,K \unlhd G$ and $H\cap K=\{1_G\}$ . By contradiction, suppose that there exist $h_1 \in H$ and $k_1 \in K$ such that $h_1k_1\neq k_1 h_1$ . Then there exists $h_2 \in H$ with $h_1\neq h_2$ and $h_1 k_1= k_1 h_2$ , because $H$ is normal. Moreover, $K$ is also normal, so there exists $k_2 \in K$ with $k_1 h_2 = h_2 k_2$ . We can see that $(h_1,k_1)\equiv (h_2,k_2)$ . Therefore, $|[(h_1,k_1)]| = |H\cap K| \geq 2$ , which is a contradiction. So $h_1k_1=k_1h_1$ .
|
|abstract-algebra|group-theory|abelian-groups|proof-explanation|normal-subgroups|
| 0
|
What is the Maximum scale a cube could be at and still be rotated at any angle to fit within a cube 1x1x1 units
|
Say I have two cubes, A and B. What is the maximum size of cube A such that it can be rotated to any angle and still fit inside cube B, which is 1 cubic unit?
|
Rotating A at any angle traces out a sphere of diameter $d$ equal to the cube's diagonal $\ell_A\sqrt{3}$ . This sphere should fit within cube B, i.e. be inscribed in it, thus $d=\ell_B$ . This implies $\ell_A=\ell_B/\sqrt{3}$
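In numbers, with $\ell_B=1$ (a one-line sketch of the computation):

```python
import math

# Sketch: the largest cube A that can spin to any angle inside a unit cube B
# must have space diagonal l_A * sqrt(3) equal to the diameter of B's
# inscribed sphere, which is B's edge length 1.
l_B = 1.0
l_A = l_B / math.sqrt(3)
assert abs(l_A * math.sqrt(3) - l_B) < 1e-12
print(l_A)  # ≈ 0.5774
```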
|
|geometry|
| 0
|
How to prove that the square root of all non perfect squares is irrational?
|
I was doing this question as a homework from a lecture. I have proven that for all prime numbers, the square root is irrational using Euler’s lemma. However, I’m stuck on how to prove it for all non perfect squares. How should I approach this problem? Any help or solutions would be appreciated! Thank you so much!
|
Let $n$ be a non-perfect square. Then by definition if $n = p_1^{\alpha_1}\cdots p_k^{\alpha_k}$ is the prime factorization of $n$ , then at least one of the $\alpha$ 's is odd. Let's say the odd alphas are $\alpha_{k_1},\cdots,\alpha_{k_m}$ . Then $n$ can be written as $n = A^2 \cdot \prod_{j = 1}^m p_{k_j}$ for some $A\in\mathbb{Z}$ . So $\sqrt{n}$ is irrational $\iff \sqrt{\prod_{j = 1}^m p_{k_j}}$ is irrational. If the latter were rational, we would have $\prod_{j = 1}^m p_{k_j} = \frac{a^2}{b^2}$ for relatively prime $a,b \in\mathbb{Z}$ , i.e. $a^2 = b^2\prod_{j = 1}^m p_{k_j}$ ; comparing the exponent of $p_{k_1}$ on the two sides (even on the left, odd on the right) gives a contradiction.
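The key fact used here, that $n$ is a perfect square exactly when every prime exponent is even (equivalently, when its squarefree part is $1$ ), can be spot-checked mechanically (a sketch; `squarefree_part` is a helper written for this illustration):

```python
from math import isqrt

# Sketch: compute the squarefree part of n by stripping square factors;
# n is a perfect square iff the squarefree part equals 1.
def squarefree_part(n):
    s, d = 1, 2
    while d * d <= n:
        while n % (d * d) == 0:   # remove the square factor d^2
            n //= d * d
        if n % d == 0:            # one leftover factor of d survives
            s *= d
            n //= d
        d += 1
    return s * n

for n in range(1, 200):
    assert (squarefree_part(n) == 1) == (isqrt(n) ** 2 == n)
```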
|
|elementary-number-theory|
| 1
|
The probability that the problem is solved correctly by at least one of them
|
Question: Four persons independently solve a certain problem correctly with probabilities 1/2, 3/4, 1/4 and 1/8. Then the probability that the problem is solved correctly by at least one of them is ? Solution: say P(A)=1/2, P(B)=3/4, P(C)=1/4, P(D)=1/8 and P(A')=1 - 1/2 = 1/2, P(B')= 1 - 3/4 = 1/4, P(C') = 1 - 1/4 = 3/4, P(D') = 1 - 1/8 = 7/8 Probability(at least one of them solves correctly) = 1- Probability(None of them solves correctly) Probability(None of them solves correctly) = P(A').P(B').P(C').P(D') = 1/2 * 1/4 * 3/4 * 7/8 = 21/256 P(at least one of them solves correctly) = 1 - 21/256 = 235/256. This is the complementary method and it's the correct answer. But when I try to do the direct method i.e Required probability = (Number of ways at least one solved)/(Total number of ways all solved) Number of ways at least one solved = [P(A).P(B').P(C').P(D') ] + [P(A').P(B).P(C').P(D')] + [P(A').P(B').P(C).P(D')] + [ (P(A').P(B').P(C').P(D)] = [1/2 * 1/4 * 3/4 * 7/8] + [1/2 * 3/4 * 3/4 *
|
Number of ways at least one solved = [P(A).P(B').P(C').P(D') ] + [P(A').P(B).P(C').P(D')] + [P(A').P(B').P(C).P(D')] + [ (P(A').P(B').P(C').P(D)] If you add probabilities you can’t get the number of ways . The probability is a real number in $[0,1]$ , the number of ways is a non-negative integer. What you found here is the probability that exactly one person solved the problem, not at least one. Total number of ways all solved = P(A) * P(B) * P(C) * P(D) This is the probability that everyone solved the problem. It happens rarely; this is nowhere near a total of anything. Your first reasoning is correct. Just use that. If you want to use the “direct” reasoning, you should add the probabilities that exactly one person solved the problem (which you found), the probabilities that exactly two persons solved the problem ( $\binom42=6$ more summands), the probabilities that exactly three persons solved the problem ( $\binom43=4$ more summands), and finally the probability of everyone solving the problem (which is one more summand).
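The two approaches can be compared with exact rational arithmetic; the sketch below enumerates all $2^4$ outcomes for the direct method (the probabilities are those from the question):

```python
from fractions import Fraction
from itertools import product

p = [Fraction(1, 2), Fraction(3, 4), Fraction(1, 4), Fraction(1, 8)]

# Complement method: 1 - P(nobody solves the problem)
none = Fraction(1)
for q in p:
    none *= 1 - q
complement = 1 - none

# Direct method: sum P(outcome) over every outcome with at least one solver
direct = Fraction(0)
for outcome in product((0, 1), repeat=4):
    if any(outcome):
        term = Fraction(1)
        for solved, q in zip(outcome, p):
            term *= q if solved else 1 - q
        direct += term

assert complement == direct == Fraction(235, 256)
```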
|
|probability|
| 0
|
Let $F\leq E \leq K$ fields. If $K/F$ is a finite extension field then $[E:F]\leq [K:F]$ and $[K:E]\leq [K:F]$
|
Let $F\leq E \leq K$ fields. If $K/F$ is a finite extension field then $[E:F]\leq [K:F]$ and $[K:E]\leq [K:F]$ My try: since $K/F$ is an extension field, $E$ is a subspace of $K$ as an $F$-vector space, so $\dim_F E \leq \dim_F K$ , and then $[E:F]\leq [K:F]$ by definition. But for the second part I'm stuck; any hint? I'm trying not to use that $[K:F]=[K:E][E:F]$ .
|
Let $N=[K:E]$ . Then we can pick a basis $\{a_1, \dots, a_N\}$ of $K$ over $E$ . If we can show that $a_1, \dots, a_N$ are linearly independent over $F$ too, then we get $[K:E]=N \leq [K:F]$ (as every linearly independent family can be extended to a basis). Let us now show that $a_1, \dots, a_N$ are linearly independent over $F$ . Let $r_1, \dots, r_N \in F$ be such that $$r_1 a_1 + \dots + r_N a_N =0.$$ As $r_j \in F\subseteq E$ and $\{a_1, \dots, a_N\}$ is linearly independent over $E$ we get $r_j=0$ for all $j\in \{1, \dots, N\}$ , which means that $\{a_1, \dots, a_N\}$ is linearly independent over $F$ .
|
|galois-theory|
| 0
|
Existence of monoidable submagma of monoid which is neither a submonoid nor idempotent?
|
Does there exist a monoid $(M;*,1)$ and a submagma $S$ of $M$ , such that $S$ is not a submonoid of $M$ , but $S$ is "monoidable", meaning it contains an identity element, and also, $S$ is not idempotent either? The reason I am asking this question is because I was considering the monoid $(N;*,1)$ of nonnegative integers under multiplication, and its monoidable submagma yet non-submonoid $\{0\}$ . That example does not quite satisfy my condition, because it is idempotent. I am now wondering whether there are any non-idempotent examples of what I am looking for.
|
A 3-element monoid suffices. Take the monoid $M$ presented by $\langle a \mid a^3=a\rangle$ . Then $M=\{1,a,a^2\}$ and $S = \{a,a^2\}$ is a non-idempotent subsemigroup of $M$ . But $S$ is also a monoid, in fact a cyclic group of order 2, with identity $a^2$ : indeed $a^2\cdot a = a^3 = a$ and $a^2\cdot a^2 = a^3\cdot a = a\cdot a = a^2$ .
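The relations can be checked mechanically; below is a sketch that encodes $M=\{1,a,a^2\}$ by exponents and verifies the claimed properties of $S=\{a,a^2\}$ :

```python
# Sketch: model the monoid M = <a | a^3 = a> = {1, a, a^2} by exponents
# 0, 1, 2, reducing a^(i+j) via the relation a^3 = a.
def mul(i, j):
    k = i + j
    while k >= 3:          # apply a^3 = a, i.e. drop the exponent by 2
        k -= 2
    return k

S = {1, 2}                  # the submagma {a, a^2}
assert all(mul(i, j) in S for i in S for j in S)       # S is closed
assert all(mul(2, s) == s == mul(s, 2) for s in S)     # a^2 is an identity of S
assert 0 not in S                                      # S omits 1, the identity of M
assert any(mul(s, s) != s for s in S)                  # S is not idempotent (a*a = a^2 != a)
```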
|
|abstract-algebra|monoid|
| 0
|
Is a $\sigma$-locally finite collection of open sets locally countable?
|
Problem I encountered this statement on nLab, which says that weakly Lindelöf spaces with a $\sigma$ -locally finite basis are second-countable . The original proof given below the statement is Proof . Let $\mathcal{V}$ be a $\sigma$ -locally finite basis. For each $x\in X$ , there is a neighborhood $N_{x}$ meeting countably many members of $\mathcal{V}$ . If $X$ is weakly Lindelöf, there is a countable $\{N_n\}_{n=1}^\infty$ which covers a dense subset of $X$ . Then $\mathcal{U}=\{V\in\mathcal{V}\mid N_n\cap V\neq\varnothing ~\text{for some}~n\}$ is a countable basis for $X$ . The proof is extremely brief, and I couldn't understand the italicized part, which I refer to as the collection $\mathcal{V}$ being locally countable for the time being. I believe it is not a very common property since I've searched $\pi$ -Base and Wikipedia but couldn't find anything about this locally countable property. The proof simply stated it as if it is an easy corollary. I know that $\mathcal{V}$ being
|
You're correct; we should add σ-Locally Finite Base + Weakly Lindelöf => Second Countable to the pi-Base. https://github.com/pi-base/data/issues/584 First, I'll confirm your claim that every locally finite (or even locally countable) collection in a weakly Lindelöf space is countable. So a σ-locally finite base would be a countable union of countable sets, showing second-countability. But to your original question: is every σ-locally finite base $\mathcal U=\bigcup_{n<\omega}\mathcal U_n$ a locally countable base? Naively one might say: take a neighborhood of a point, it hits finitely-many of each $\mathcal U_n$ , so it hits countably-many overall. But that's no good: to get the finite collection for each $\mathcal U_n$ , a different neighborhood may be chosen, and the intersection of $\omega$ -many neighborhoods need not be a neighborhood. My assumption is that there must be some σ-locally finite base that's not locally countable, but it's late and I'm not coming up with one on the fly at the moment.
|
|real-analysis|general-topology|second-countable|lindelof-spaces|
| 1
|
Find the number of non-negative integer solutions of the equation $x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 57$ where $x_1 \lt 3, x_3 \ge 4$
|
Find the number of non-negative integer solutions to $$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 57$$ where $$x_1 \lt 3, x_3 \ge 4$$ I first found the total number of solutions N= $57+6-1\choose57$ =6471002, then the number of solutions with the restriction $x_1 \ge 3$ , N(P1)= $54+6-1\choose54$ =3425422, then for the restriction $x_3 \le 3$ , N(P2)= $4+6-1\choose4$ =126 and I have to find the number of solutions for both the restrictions N(P1P2), but I don't know the number I should plug in the combination formula or if my calculations are correct.
|
First eliminate the lower bound $x_3 \ge 4$ : substitute $y_3 = x_3 - 4 \ge 0$ , so the equation reduces to $x_1+x_2+y_3+x_4+x_5+x_6 = 57-4 = 53$ . The number of solutions of this equation follows from the stars and bars idea (distribute 53 identical balls into 6 distinct boxes): $\binom{53+6-1}{6-1} = \frac{58!}{53!\,5!} = 4582116$ . Now we have to eliminate all the cases where $x_1 \ge 3$ : substitute $y_1 = x_1 - 3 \ge 0$ , so the equation reduces to $y_1+x_2+y_3+x_4+x_5+x_6 = 53-3 = 50$ , which has $\binom{50+6-1}{6-1} = \frac{55!}{50!\,5!} = 3478761$ solutions. We need to remove these solutions, so the total number of solutions is $4582116-3478761= 1103355$ .
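The two binomial counts can be verified directly (a sketch using Python's built-in `math.comb` ):

```python
from math import comb

# After substituting y3 = x3 - 4, the equation is a sum of 6 non-negative
# unknowns equal to 53; stars and bars counts the solutions.
total = comb(53 + 6 - 1, 6 - 1)      # 4582116
# Subtract the solutions violating x1 < 3 (substitute y1 = x1 - 3, sum 50).
violating = comb(50 + 6 - 1, 6 - 1)  # 3478761
assert total - violating == 1103355
```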
|
|combinatorics|discrete-mathematics|
| 0
|
Showing B is not derivable from A
|
Let's take it that under the assumption of A, ~C is true. And let's also take it that if ~C is true and if B is derivable from A, then C is true. I want to show that B is not derivable from A. Is this possible under these assumptions? It seems like we can assume that B is derivable from A for the sake of obtaining a contradiction. So there is some proof that begins by assuming A and obtains B in a finite amount of steps. We take that derivation under which we already have A as an assumption and within it, since we know that A gives ~C, we conclude that ~C is true. Now since we also know that if ~C and if B is derivable from A, then C is true. So we get to a contradiction, and so B is not provable from A. Is this valid? Or does this only show that under the assumption that B is derivable from A, then ~A is derivable by the empty set?
|
I don't believe the argument you're trying to construct is valid. As you can see below, a truth table in which $A$ is false, $B$ is true, and $C$ is true will show it is possible for your premises $A \to \neg C$ and $[\neg C \wedge (A \to B) ] \to C$ to be true and the conclusion $\neg (A \to B)$ to be false. Hence, the argument is invalid. $ \begin{array}{llllllllllllllll} A & \to & \neg & C & \Big| & [\neg & C & \wedge & (A & \to & B) ] & \to & C & \Big|\Big| & \neg & (A & \to & B) \\ 0 & \:\color{red}{1} & 0 & 1 && \:\:0 & 1 & 0 & \:\:0 & \:1 & 1 & \:\color{red}{1} & 1 && \:\color{red}{0} & \:\:0 & \:1 & 1 \end{array} $
|
|logic|
| 1
|
A question on conditional expectation of a random variable
|
Consider the joint probability density function: $$f(x_1,x_2)= \begin{cases} 2e^{-2x_1}, & \text{ for } 0 \le x_2 \le x_1 < \infty \\ 0, & \text{ otherwise.} \end{cases}$$ Find the value of $E(X_1|X_2=x_2 )$ for $0 \le x_2 < \infty$ . My attempt: We know the formula $$E(X|Y=y)= \int_{-\infty}^{\infty} x f_{X|Y}(x|y)dx=\frac{1}{f_Y(y)} \int_{-\infty}^{\infty} x f_{X,Y}(x,y)dx,$$ where $f_Y(y)$ is the marginal density function of the random variable $Y$ . Applying it here in this case, we have that $f_{X_2}(x_2)=\int_{x_2}^{\infty} 2e^{-2x_1}dx_1=e^{-2x_2}.$ So, we have that $$E(X_1|X_2=x_2)=\frac{1}{e^{-2x_2}}\int_{x_2}^{\infty} x_1 2e^{-2x_1}dx_1= 2e^{2x_2} \bigg[x_1 \bigg(\frac{e^{-2x_1}}{-2} \bigg)-\bigg(\frac{e^{-2x_1}}{4}\bigg) \bigg]_{x_2}^{\infty}= 2e^{2x_2}\bigg[x_2 \bigg(\frac{e^{-2x_2}}{2} \bigg)+\bigg(\frac{e^{-2x_2}}{4}\bigg) \bigg] = x_2 + \frac{1}{2}.$$ So, $E(X_1|X_2=x_2 ) = x_2 + \frac{1}{2}.$ I have a problem here. My answer is not matching with any of the options given. I want to know where I have gone wrong. Please point out my mistake.
|
Your computations are correct as shown, but there is a problem with the joint density, since $$\int_{x_2=0}^\infty \int_{x_1=x_2}^\infty 2e^{-2x_1} \, dx_1 \, dx_2 = \int_{x_2=0}^\infty e^{-2x_2} \, dx_2 = \frac{1}{2} \ne 1.$$ Check that the joint density is correctly specified.
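The normalisation failure is easy to see numerically: the marginal of $X_2$ is $e^{-2x_2}$ , and its integral over $[0,\infty)$ is $\tfrac12$ rather than $1$ . A sketch using the midpoint rule on a truncated domain:

```python
import math

# Sketch: the marginal of X2 is the inner integral
#   ∫_{x2}^∞ 2 e^{-2 x1} dx1 = e^{-2 x2},
# so the total mass is ∫_0^∞ e^{-2 x2} dx2, approximated here by the
# midpoint rule on [0, L] (the tail beyond L = 20 is negligible).
n, L = 4000, 20.0
h = L / n
total = sum(math.exp(-2 * (j + 0.5) * h) for j in range(n)) * h
assert abs(total - 0.5) < 1e-4   # the stated density integrates to 1/2, not 1
```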
|
|probability-theory|random-variables|conditional-expectation|
| 1
|
Any norm on $\mathbb R^n$ induces product metric?
|
Let $X_1, \ldots, X_n$ be metric spaces and $\|\cdot\|$ be a norm on $\mathbb R^n$ . Then it's easily seen that if $\|\cdot\|$ is monotonic in each coordinate (with the others fixed) on the orthant $[0, +\infty)^n$ , then $d(x, y) := \bigl\| \bigl( d_1(x_1, y_1), \ldots, d_n(x_n, y_n) \bigr) \bigr\|$ defines a metric on $X_1\times\cdots\times X_n$ . Monotonicity gives an easy assurance that the triangle inequality is satisfied. Now, as this SE post shows, there do exist norms on $\mathbb R^n$ which don't fit the monotonicity requirement. Question: Does $d$ due to such norms still form a metric?
|
Not necessarily. For example, let $X_1 = X_2$ be $\mathbb{R}^2$ equipped with the usual $\infty$ -norm, $\|\cdot\|$ be the $1$ -norm on $\mathbb{R}^2$ with respect to the basis $\{(0, \frac{1}{2}), (1, 1)\}$ . Let $x = ((0, 0), (0, 0)), y = ((1, 0), (0, 0)), z = ((0, 1), (1, 0))$ , then $d(x, y) = \|(1, 0)\| = \|(1, 1) - 2(0, \frac{1}{2})\| = 3$ , $d(x, z) = \|(1, 1)\| = 1$ , and $d(z, y) = \|(1, 1)\| = 1$ . Hence, we do not have $d(x, y) \leq d(x, z) + d(z, y)$ and $d$ is not a metric.
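The counterexample can be verified mechanically. Writing $(u,v)=a\,(0,\tfrac12)+b\,(1,1)$ gives $a=2(v-u)$ and $b=u$ , so the norm in standard coordinates is $\|(u,v)\|=2|v-u|+|u|$ ; a sketch:

```python
# Sketch of the counterexample: the 1-norm of R^2 taken with respect to the
# basis {(0, 1/2), (1, 1)}, expressed in standard coordinates.
def norm(u, v):
    # coordinates of (u, v) in the basis {(0, 1/2), (1, 1)} are (2*(v-u), u)
    return abs(2 * (v - u)) + abs(u)

def dinf(p, q):
    # the usual infinity-norm distance on R^2
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def d(p, q):
    # the candidate "product metric" on (R^2, inf) x (R^2, inf)
    return norm(dinf(p[0], q[0]), dinf(p[1], q[1]))

x = ((0, 0), (0, 0))
y = ((1, 0), (0, 0))
z = ((0, 1), (1, 0))
assert d(x, y) == 3 and d(x, z) == 1 and d(z, y) == 1
assert d(x, y) > d(x, z) + d(z, y)   # the triangle inequality fails
```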
|
|metric-spaces|normed-spaces|
| 1
|