title
string | question_body
string | answer_body
string | tags
string | accepted
int64 |
|---|---|---|---|---|
Sturm-Liouville boundary value problem with bounded function
|
$$ y'' + \lambda y = 0, \quad y'(0) = 0, \quad |y(x)| < \infty $$ I have tried numerous Sturm-Liouville boundary value problems, but never ones involving boundary conditions with inequalities and infinities. Is this a regular SLBVP or does it classify as a singular SLBVP? How should I solve this?
|
The condition $|y(x)| < \infty$ just means that the function stays bounded (doesn't take on infinite values). Thus, you can solve this as a normal equation, finding that the general solution is (if $\lambda>0$; the other cases are left to the reader) $$y(x)=c_1\sin(\sqrt{\lambda}x)+c_2\cos(\sqrt{\lambda}x).$$ Imposing $y'(0)=0$ forces $c_1=0$, so the solution is $c_2\cos(\sqrt{\lambda}x)$.
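As a quick sanity check of the answer above, the candidate solution can be verified symbolically. This is only a sketch and assumes sympy is available:

```python
import sympy as sp

x, c2 = sp.symbols('x c2')
lam = sp.symbols('lambda', positive=True)

# candidate solution after imposing y'(0) = 0, as in the answer
y = c2 * sp.cos(sp.sqrt(lam) * x)

# residual of the ODE y'' + lambda*y = 0 (should be 0)
ode_residual = sp.simplify(sp.diff(y, x, 2) + lam * y)

# boundary condition y'(0) = 0 (should be 0)
bc = sp.diff(y, x).subs(x, 0)

print(ode_residual, bc)  # 0 0
```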
|
|ordinary-differential-equations|eigenvalues-eigenvectors|boundary-value-problem|eigenfunctions|sturm-liouville|
| 0
|
Using integration by parts to solve a differential equation
|
I am stuck on the following analysis problem. Suppose that $u\in C([a,b])$ is twice continuously differentiable, $V\in C([a,b])$ , $V(x)\geq 0$ for all $x\in[a,b]$ and $$ -u''(x)+V(x)u(x)=0 \quad \forall x\in[a,b],$$ $$u(a)=u(b)=0.$$ Show that $u(x)=0$ for all $x\in[a,b]$ . The hint tells me to use integration by parts to solve this problem, but I can't see where to use it. I tried the following, but it doesn't lead to a meaningful result. Let $v(x)=\int_a^x V(t)dt$ . Then, integrating by parts, we have $$ \int_a^b V(x)u(x)dx = v(b)u(b)-v(a)u(a)-\int_a^bv(x)u'(x)dx = -\int_a^bv(x)u'(x)dx $$ and because $u''(x)=V(x)u(x)$ , $$\int_a^bV(x)u(x)dx = \int_a^bu''(x)dx = u'(b)-u'(a) = -\int_a^bv(x)u'(x)dx $$ I'd appreciate it if anyone could tell me where to use integration by parts in this problem. Thank you!
|
The general 'trick' is to use the fact that $-\Delta = -\frac{d^2}{dx^2}$ is a nonnegative operator on the space of differentiable functions that are square integrable and vanish at the boundary. Multiply the equation by $u$ and integrate; since $u(a)=u(b)=0$, integration by parts gives $$\int_a^b u(x)(-u''(x))\, dx = \int_a^b u'(x)^2\, dx \ge 0, $$ while the equation itself gives $$\int_a^b u(x)(-u''(x))\,dx = -\int_a^b V(x)u(x)^2\, dx \le 0.$$ Hence $\int_a^b u'(x)^2\,dx=0$, so $u'\equiv 0$; then $u$ is constant, and $u(a)=0$ forces $u\equiv 0$. Geometrical background: a solution $u: x\to u(x)$ starting with a zero on the left boundary with slope $u'(a)>0$ has $u''=Vu\geq 0$ wherever $u\geq 0$, so it keeps a positive slope and cannot bend down to yield a zero on the right boundary.
|
|real-analysis|integration|ordinary-differential-equations|
| 1
|
What do I do wrong when computing this commutator of differential operators?
|
I am trying to understand the Lax pair of the KdV equation. I found some pdf (page 5) that gives the Lax pair in the form $$ L=\frac{\partial^2}{\partial x^2}-u $$ and $$ M=-4\frac{\partial^3}{\partial x^3} + 6u\frac{\partial}{\partial x} + 3\frac{\partial u}{\partial x}. $$ Now one needs to compute the commutator $$[M,L]=ML-LM$$ of the operators above. I tried to compute $LM$ first, but it seems that I made some mistake and I don't understand what I did wrong. Here is my attempt $$ \frac{\partial^2}{\partial x^2}\left(-4\frac{\partial^3}{\partial x^3}\right) = -4\frac{\partial^5}{\partial x^5}. $$ By the product rule: $$ \frac{\partial^2}{\partial x^2}\left(6u\frac{\partial}{\partial x}\right) = 6\frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\frac{\partial}{\partial x} + u\frac{\partial^2}{\partial x^2}\right) = 6\left(\frac{\partial^2 u}{\partial x^2}\frac{\partial}{\partial x} + \frac{\partial u}{\partial x}\frac{\partial^2}{\partial x^2} + \frac{\partial u}{\partial
|
This is not a complete answer, but indicates where your calculation has gone wrong. Let us look at the operators $A:=\frac{\partial}{\partial x}$ and $B:=\frac{\partial u}{\partial x}$ and calculate the operator $AB$ . Your calculation would yield the answer $\frac{\partial^2 u}{\partial x^2}$ . However, let's be more careful: what we need to find is the effect that $AB$ has on an arbitrary function $\phi(x)$ . Recall what $A$ and $B$ mean: $A$ is the differentiation operator, $B$ is the "multiply by $\frac{\partial u}{\partial x}$ " operator. So we have $$ AB(\phi(x)) =A(B(\phi(x))) =A\left(\frac{\partial u}{\partial x}\phi(x)\right) =\frac{\partial^2 u}{\partial x^2}\phi(x)+\frac{\partial u}{\partial x}\frac{\partial }{\partial x}\phi(x). $$ That is, $$ AB=\frac{\partial^2 u}{\partial x^2}+\frac{\partial u}{\partial x}\frac{\partial }{\partial x}. $$ If you go through your calculations you'll find a few places where you need to make this, or similar, corrections. (For what it's worth, with these corrections the computation does work out.)
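The distinction the answer draws (always apply the operators to a test function $\phi$) can be checked with a computer algebra system. A minimal sketch, assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
phi = sp.Function('phi')(x)

# A = d/dx, B = "multiply by du/dx"; apply AB to a test function phi
AB_phi = sp.diff(sp.diff(u, x) * phi, x)

# the answer's result: AB = u'' + u' * d/dx (acting on phi)
expected = sp.diff(u, x, 2) * phi + sp.diff(u, x) * sp.diff(phi, x)

print(sp.simplify(AB_phi - expected))  # 0
```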
|
|operator-theory|
| 1
|
Is this a composite sentence?
|
The following question is not from my homework; it is a conceptual question that came up when I saw the definition of composite sentences in a chapter on mathematical reasoning. A sentence such as "Antarctica lies to the south of the Indian Ocean and the Indian Ocean lies to the south of India." is a composite sentence, but is a sentence such as "It is entirely untrue that milk is an element." also a composite sentence?
|
Milk is an element. => not a composite sentence. Not(Milk is an element.). => not a composite sentence. "Not" is just a synonym of "It is entirely untrue that". Hence: It is entirely untrue that milk is an element. => Why would this be a composite sentence?
|
|logic|
| 0
|
Equivalent norm in homeomorphic spaces
|
Let $\|\cdot\|_1$ and $\|\cdot\|_2$ be two norms on a real vector space $X$ such that the resulting spaces with induced topologies are homeomorphic, i.e., there exists a bicontinuous bijection $X\to X$. Is it true that these two norms have to be necessarily equivalent? [I think the answer is positive as in the classical case of different norms in $\mathbb{R}^n$]
|
Yes! By definition, two metrics $\rho$ and $\sigma$ on a set $X$ are said to be equivalent if the identity mapping of $\langle X,\rho\rangle$ onto $\langle X, \sigma \rangle$ is a homeomorphism. Now if the two metrics are induced by (different) norms, the inverse image of every open set is open in the other space, hence (by definition) the identity map is continuous in both directions, hence it is a homeomorphism, hence the metrics are equivalent.
|
|general-topology|normed-spaces|
| 0
|
Parametrization of vector field with an unknown vector potential
|
$F$ is a vector field with unknown vector potential $A$, and there's a surface whose boundary is the unit circle at the origin. Also, the circulation of $A$ around the unit circle, oriented clockwise, is a positive constant. Given this information, I know that the flux through the unit circle will give me the flux through the surface, or any such surface. How do I find $F$ given this information? I've parametrized $F$ as $F(r\cos(\theta), r\sin(\theta), 0)$, but I'm not sure where to go from here.
|
If I understand correctly you want to find the vector field $F$ that satisfies $$ F=\nabla\times A\,,\quad\oint_{S^1}A\cdot ds=c $$ for some constant $c>0\,.$ This problem does not have a unique solution. By Stokes, $$ \int_{B_1(0)}F\cdot\boldsymbol{n}\,dS=\int_{B_1(0)}(\nabla\times A)\cdot\boldsymbol{n}\,dS=c\,. $$ When $c=2\pi$ one solution is $$ F=\nabla\times A=\nabla\times\pmatrix{-y\\x\\0}=\pmatrix{0\\0\\2} $$ which has $$ \oint_{S^1}A\cdot ds=2\pi= \int_{x^2+y^2\le 1}F\cdot\boldsymbol{n}\,dS\,. $$ When we add to this $F$ the vector field $$ F'=\nabla\times A'=\nabla\times\pmatrix{-y^2\\x^2\\0}=2\pmatrix{0\\0\\x+y} $$ which has $$ \oint_{S^1}A'\cdot ds=0= \int_{x^2+y^2\le 1}F'\cdot\boldsymbol{n}\,dS $$ we get another solution.
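The two curls in the answer are easy to verify symbolically. A small sketch, assuming sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    # curl of a 3-component field in Cartesian coordinates
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

F1 = curl((-y, x, 0))        # expected (0, 0, 2)
F2 = curl((-y**2, x**2, 0))  # expected (0, 0, 2x + 2y)

# flux of F2 through the unit disk, in polar coordinates: should vanish
r, t = sp.symbols('r t', positive=True)
flux2 = sp.integrate(
    F2[2].subs({x: r*sp.cos(t), y: r*sp.sin(t)}) * r,
    (r, 0, 1), (t, 0, 2*sp.pi))

print(F1, F2, flux2)
```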
|
|multivariable-calculus|
| 1
|
Probability of moving from the top node to the bottom node.
|
[Figure: the graph $G_N$] Consider the family of graphs $G_N$ for an integer $N\geq2$, shown above. Suppose a random walker starts at the top node of the graph $G_N$ and begins a random walk where any route out of a node is equally probable. Find the probability, as a function of $N$, that the walker reaches the bottom node before returning to the top node. I initially tried to think about this by branching out and counting probabilities, but soon realised that the walker can just go on forever. I then tried recurrence equations for the middle two points, something along the lines of letting $x$ be the chance to get to the bottom node from one vertex and $y$ the chance from the other vertex. For the top middle point, there's a $\frac{1}{3}$ chance it goes up to the top and a $\frac{2}{3}$ chance it goes to the $y$ vertex, giving $x=\frac13\cdot0+\frac23\cdot y$. I'm unsure how to go on from here, however. I believe the answer should be $\frac{2n}{3n-1}$.
|
I do get the answer $\frac{2n}{3n-1}$ for $n=2,3,\dots,10$ calculating with the following Sage code:

```
def solve(n):
    matN = 2*n
    A = 1/3*matrix(QQ, [[(2 if (j==i+1 and i%2 or j==i-1 and i%2==0) else 1)
                          if abs(j-i)==1 else 0
                          for j in range(matN)] for i in range(matN)])
    Q = A[1:matN-1, 1:matN-1]
    R = A[1:matN-1, [0, matN-1]]
    fundMat = (Q**0 - Q)^(-1)
    sol = fundMat*R
    return 2/3 + 1/3*sol[0,1]
```

Some explanation: this uses a Markov chain with the top and bottom nodes made absorbing. Take one step to begin with: either you go straight to the goal, or into the inner world where this Markov chain then moves (starting from the second-to-top node). The probability we want then equals $\frac{2}{3} + \frac{1}{3}p_n$, where $p_n$ is the probability of being absorbed at the bottom node starting from the second node. The transition matrix is (for the case $N=3$) $$ \frac{1}{3} \left(\begin{array}{r|rrrr|r} 3 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 & 0 & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 3 \end{array}\right) $$
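For readers without Sage, here is a plain numpy port of the same absorbing-chain computation; this is a sketch that follows the matrix convention of the Sage code above:

```python
import numpy as np

def reach_bottom_first(n):
    """Probability of hitting the bottom node before returning to the top,
    using the same absorbing-chain setup as the Sage code above."""
    m = 2 * n
    A = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if abs(i - j) == 1:
                double = (j == i + 1 and i % 2 == 1) or (j == i - 1 and i % 2 == 0)
                A[i, j] = (2 if double else 1) / 3
    Q = A[1:m - 1, 1:m - 1]                      # transitions among inner states
    R = A[1:m - 1, [0, m - 1]]                   # transitions into the absorbing states
    sol = np.linalg.solve(np.eye(m - 2) - Q, R)  # (I - Q)^(-1) R: absorption probabilities
    return 2 / 3 + 1 / 3 * sol[0, 1]

for n in range(2, 11):
    assert abs(reach_bottom_first(n) - 2 * n / (3 * n - 1)) < 1e-12
print("matches 2n/(3n-1) for n = 2..10")
```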
|
|probability|graph-theory|
| 0
|
Let $X$ be a topological vector space with a countable local base. Then how does the following metric attain maximum?
|
Let $X$ be a topological vector space with a countable local base. Now a seminorm $p_i$ can be associated with each element of the local base. From Rudin's book on functional analysis we get that this space is metrizable by the following metric: $$ d(x,y)= \max_{i\geq 1} \frac{c_i p_i(x-y)}{1+p_i(x-y)} $$ where $(c_i)$ is a strictly decreasing positive sequence which decreases to $0$ . How does one prove that the given maximum exists? The sequence is bounded by $c_1$: for every $i$, $\frac{c_i p_i(x-y)}{1+p_i(x-y)} < c_i$, because $\frac{p_i(x-y)}{1+p_i(x-y)} < 1$, and we also know that $c_i \le c_1$ for all $i$. Therefore the sequence is bounded by $c_1$. Now a bounded sequence need not have a maximum, but it always has a supremum. (For example the sequence $1-\frac{1}{i}$ with $i\geq1$ is bounded and has a supremum but does not attain a maximum.) But this sequence does have a maximum, and this has got something to do with $\frac{p_i(x-y)}{1+p_i(x-y)}$ being less than $1$ . Can someone give a hint here?
|
$\frac{c_i p_i(x-y)}{1+p_i(x-y)}\le c_i\to 0$ as $i \to \infty$, and any sequence of non-negative numbers which tends to $0$ attains a maximum: if the supremum is $0$, every term equals $0$; otherwise only finitely many terms exceed half the supremum, so the supremum is the maximum of finitely many terms and is attained.
|
|functional-analysis|
| 0
|
Probability of moving from the top node to the bottom node.
|
[Figure: the graph $G_N$] Consider the family of graphs $G_N$ for an integer $N\geq2$, shown above. Suppose a random walker starts at the top node of the graph $G_N$ and begins a random walk where any route out of a node is equally probable. Find the probability, as a function of $N$, that the walker reaches the bottom node before returning to the top node. I initially tried to think about this by branching out and counting probabilities, but soon realised that the walker can just go on forever. I then tried recurrence equations for the middle two points, something along the lines of letting $x$ be the chance to get to the bottom node from one vertex and $y$ the chance from the other vertex. For the top middle point, there's a $\frac{1}{3}$ chance it goes up to the top and a $\frac{2}{3}$ chance it goes to the $y$ vertex, giving $x=\frac13\cdot0+\frac23\cdot y$. I'm unsure how to go on from here, however. I believe the answer should be $\frac{2n}{3n-1}$.
|
These graphs are equivalent to a cyclic graph on $2n$ vertices with alternating edge weights $2/3$ and $1/3$ , thus the transition matrix has the form $$\mathcal P = \begin{bmatrix} 0 & 1/3 & 0 & 0 &\ldots & 0 & 2/3 \\ 1/3 & 0 & 2/3 & 0 &&& 0 \\ 0 & 2/3 & 0 & 1/3 & \ddots && \vdots \\ 0 & 0 & 1/3 & 0 & \ddots & \ddots & \vdots \\ \vdots && \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & & & \ddots & \ddots & \ddots & 1/3 \\ 2/3 & 0 & \ldots & \ldots & 0 & 1/3 & 0 \end{bmatrix}_{\,2n \,\times\, 2n},$$ where row $i$ corresponds to the state of being at the $i^{\rm th}$ vertex from the top. For the initial state vector $\begin{bmatrix}1 & 0 & \ldots & 0\end{bmatrix}$ we want the probability of reaching state $2n$ before returning to state $1$ . To find this, we can modify $\mathcal P$ so that the top and bottom vertices become absorbing, and the initial state vector begins at vertex $2$ with probability $1/3$ or is at the bottom vertex with probability $2/3$ , effectively having taken the first step already.
|
|probability|graph-theory|
| 0
|
$f$ is diagonalizable iff its minimal polynomial is "free from squares" (proof)
|
Let $f \in End(V)$ ; then $f$ is diagonalizable iff its minimal polynomial is "free from squares", as in, all of its irreducible factors appear to the first power and (edit:) all irreducible factors are linear. I've seen similar questions on this matter but none of them actually answers my question... I think I've figured out the first implication (which I will write down now), but I feel I'm nowhere near figuring out the second; so any help or suggestion would be truly appreciated! Here's my take on the first implication: As previously said, let $f \in End(V)$ be diagonalizable with (distinct) eigenvalues $\lambda_1,...,\lambda_k$ ; then we know that $$(x-\lambda_1)...(x-\lambda_k) \quad \text{divides} \quad q_f(x)$$ the minimal polynomial of $f$ . Showing that $(f-\lambda_1 I)...(f-\lambda_k I)(v)=0 \quad \forall v \in V$ would imply that in the previous statement "divides" could be replaced with "equals"; now we know that $(f-\lambda_1 I)...(f-\lambda_k I)=0 \iff$ it's $0$ on a basis for $V$
|
I believe I managed to prove the implication myself as follows: Let $q_f$ be the minimal polynomial of $f$ as in: $$q_f(t)=(t-\lambda_1)\cdots(t-\lambda_k)$$ We want to prove that $f\in End(V)$ is diagonalizable, so that $V$ is direct sum of eigenspaces. Now let all eigenvalues $\lambda_1, ..., \lambda_k \in \Bbb{K}$ a field. We know that $V$ is direct sum of generalized eigenspaces $V^{\lambda_i}$ and we want to prove that each one of them is actually an eigenspace $V_{\lambda_i}$ . So we want to show that if $w \in V^{\lambda_i}$ then $w \in ker(f-\lambda_i I)$ . Now, we know that $$0=q_f(f)=(f-\lambda_1 I)\cdots(f-\lambda_k I)$$ so $$\begin{align*} 0&=q_f(f)(w)\\ &=(f-\lambda_1 I)\cdots(f-\lambda_k I) (w)\\ &=(f-\lambda_1 I)\cdots(f-\lambda_{i-1} I)(f-\lambda_{i+1} I)\cdots(f-\lambda_k I)(f-\lambda_i I) (w) \end{align*}$$ Since generalized eigenspaces are $f$ -invariant, they are $(f-\lambda_i I)$ -invariant too, so we can consider: $$(f-\lambda_1 I)|_{V^{\lambda_i}}\cdots(f-\lambda
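The equivalence can be machine-checked on small examples: diagonalizability coincides with the product of $(f-\lambda_i I)$ over the *distinct* eigenvalues annihilating the matrix, i.e. with the minimal polynomial being squarefree. A sketch assuming sympy:

```python
import sympy as sp

def minimal_poly_is_squarefree(A):
    """True iff the product of (A - lam*I) over the distinct eigenvalues of A
    is the zero matrix, i.e. iff the minimal polynomial of A is squarefree."""
    n = A.rows
    M = sp.eye(n)
    for lam in A.eigenvals():          # keys are the distinct eigenvalues
        M = M * (A - lam * sp.eye(n))
    return M == sp.zeros(n)

D = sp.Matrix([[1, 0], [0, 2]])        # diagonalizable
J = sp.Matrix([[1, 1], [0, 1]])        # a Jordan block, not diagonalizable

assert minimal_poly_is_squarefree(D) == D.is_diagonalizable()   # both True
assert minimal_poly_is_squarefree(J) == J.is_diagonalizable()   # both False
```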
|
|linear-algebra|diagonalization|minimal-polynomials|
| 0
|
demonstration of vector laplacian in cartesian coordinates
|
I am stuck with the following derivation. The vector Laplacian formula is $\Delta \mathbf{a} = \nabla(\nabla\cdot \mathbf{a}) - \nabla\times(\nabla\times \mathbf{a})$, where $\mathbf{a}$ is a vector field. I have to demonstrate that the vector Laplacian in Cartesian coordinates is $\Delta \mathbf{a} = (\nabla\cdot\nabla a_x)\,\mathbf{u}_x + (\nabla\cdot\nabla a_y)\,\mathbf{u}_y + (\nabla\cdot\nabla a_z)\,\mathbf{u}_z$, where $\mathbf{u}_x$, $\mathbf{u}_y$ and $\mathbf{u}_z$ are the unit vectors and $\nabla\cdot\nabla$ stands for the nabla operator squared. Thanks in advance.
|
I was also interested in this question, so I am writing down what I came up with. I've read on the wiki that it can be seen as a particular case of the vector triple product formula (Lagrange's formula): $\mathbf{u}\times (\mathbf{v}\times \mathbf{w}) = (\mathbf{u}\cdot\mathbf{w})\ \mathbf{v} - (\mathbf{u}\cdot\mathbf{v})\ \mathbf{w}$ The demonstration (also on the wiki) for the first coordinate is: $\begin{align} (\mathbf{u} \times (\mathbf{v} \times \mathbf{w}))_x &= \mathbf{u}_y(\mathbf{v}_x\mathbf{w}_y - \mathbf{v}_y\mathbf{w}_x) - \mathbf{u}_z(\mathbf{v}_z\mathbf{w}_x - \mathbf{v}_x\mathbf{w}_z) \\ &= \mathbf{v}_x(\mathbf{u}_y\mathbf{w}_y + \mathbf{u}_z\mathbf{w}_z) - \mathbf{w}_x(\mathbf{u}_y\mathbf{v}_y + \mathbf{u}_z\mathbf{v}_z) \\ &= \mathbf{v}_x(\mathbf{u}_y\mathbf{w}_y + \mathbf{u}_z\mathbf{w}_z) - (\mathbf{u}_y\mathbf{v}_y + \mathbf{u}_z\mathbf{v}_z)\mathbf{w}_x + (\mathbf{u}_x\mathbf{v}_x\mathbf{w}_x - \mathbf{u}_x\mathbf{v}_x\mathbf{w}_x) \\ &= \mathbf{v}_x(\mathbf{u}_x\mathbf{w}_x + \mathbf{u}_y\mathbf{w}_y + \mathbf{u}_z\mathbf{w}_z) - (\mathbf{u}_x\mathbf{v}_x + \mathbf{u}_y\mathbf{v}_y + \mathbf{u}_z\mathbf{v}_z)\mathbf{w}_x \\ &= (\mathbf{u}\cdot\mathbf{w})\,\mathbf{v}_x - (\mathbf{u}\cdot\mathbf{v})\,\mathbf{w}_x \end{align}$
|
|laplacian|
| 0
|
Are there other kind of primitives, other than $\int_a ^x f(t)dt +k$ ? For example for non-continuous functions.
|
The second fundamental theorem of Calculus says if a function is continuous (hence, Riemann integrable) then $\int_a ^x f(t) dt$ is a primitive of $f$ . Ok this is for continuous functions. However what happens for not necessarily continuous functions ? The first fundamental theorem of Calculus says if $f$ is Riemann integrable (i.e. not necessarily continuous) then $\int_a ^b f(t) dt = F(b) - F(a)$ where $F$ is a primitive. Can $F$ , in the context of the first fundamental theorem, be a primitive not of the form $\int_a ^x f(t) dt$ ? Or do all the Riemann integrable functions have primitives of the form $\int_a ^x f(t) dt$ ?
|
Not all Riemann integrable functions have primitives. Consider $f: [0,1]\to\mathbb{R}$ given by $f(x)=0$ for $x\in[0,1/2)$ and $f(x)=1$ for $x\in[1/2,1]$. This $f$ cannot be the derivative of a function, because derivatives have the intermediate value property (Darboux's theorem) and $f$ doesn't (it never takes the value $1/2$, for example). Conversely, a function can have a primitive without being continuous: $$f(x) = \cases{2x\sin\frac{1}{x^2}-\frac{2}{x}\cos\frac{1}{x^2} \quad\text{for } x\ne 0 \\0\quad\text{for }x=0}$$ is not continuous at $x=0$ , but $$F(x) = \cases{x^2 \sin\frac{1}{x^2} \quad \text{for } x\ne 0 \\ 0\quad\text{for }x=0}$$ is a primitive of $f$ .
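The second example can be checked: away from the origin $F'=f$ symbolically, and at the origin the difference quotient of $F$ is squeezed to $0$. A sketch assuming sympy:

```python
import math
import sympy as sp

x = sp.symbols('x')
F = x**2 * sp.sin(1 / x**2)
f = 2*x*sp.sin(1/x**2) - (2/x)*sp.cos(1/x**2)

# F'(x) = f(x) for x != 0
deriv_matches = sp.simplify(sp.diff(F, x) - f)
print(deriv_matches)  # 0

# At 0 the difference quotient F(h)/h = h*sin(1/h^2) is squeezed by |h|,
# so F'(0) = 0 even though f oscillates wildly near 0:
for h in (0.1, 0.01, 0.001):
    assert abs(h * math.sin(1 / h**2)) <= abs(h)
```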
|
|integration|riemann-integration|
| 1
|
$|f(z)|$ is constant but $f(z)$ is not constant.
|
According to this post: Show that if $f$ is analytic in a domain $D$ and $|f(z)|$ is constant in $D$, then the function $f(z)$ is constant in $D$. We have: if $f$ is analytic in a domain $D$ and $|f(z)|$ is constant in $D$, then the function $f(z)$ is constant in $D$. But if I consider the function $f(z) =e^{iz}$, then clearly $f$ is analytic on any domain $D\subseteq\mathbb{C}$. Moreover, $|f(z)|=1$ for all $z\in D$. But $f(z) =e^{iz}$ is not constant in $D$. I think I made some mistake, but I can't see where! Please help me.
|
Consider $z=x+iy$; then $f(z)=e^{iz}=e^{ix}e^{-y}=e^{-y}(\cos{x}+i \sin{x})$, so $|f(z)|=e^{-y}$ is not constant: it decays as $y$ increases. You may have overlooked that $z\in \mathbb{C}$ is not a real variable; $|e^{iz}|=1$ only holds when $z$ is real.
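A quick numerical check of this, in plain Python:

```python
import cmath
import math

# On the real axis |e^{iz}| = 1 ...
for t in (0.0, 1.5, -2.0):
    assert abs(abs(cmath.exp(1j * t)) - 1.0) < 1e-12

# ... but off the real axis |e^{iz}| = e^{-Im z} != 1.
z = 1 + 1j                                   # Im z = 1
assert abs(abs(cmath.exp(1j * z)) - math.exp(-1.0)) < 1e-12
print("|exp(iz)| =", abs(cmath.exp(1j * z)))  # ~0.3679 = 1/e
```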
|
|complex-analysis|analytic-functions|
| 0
|
What is the graph of Rudin's 7.18 function
|
Code borrowed from here Theorem 7.18 from Baby Rudin: There exists a real continuous function on the real line which is nowhere differentiable. Proof. Define $$\tag{34} \varphi(x) = \lvert x \rvert \qquad \qquad (-1 \leq x \leq 1) $$ and extend the definition of $\varphi(x)$ to all real $x$ by requiring that $$ \tag{35} \varphi(x+2) = \varphi(x). $$ Then, for all $s$ and $t$ , $$\tag{36} \lvert \varphi(s) - \varphi(t) \rvert \leq \lvert s-t \rvert. $$ In particular, $\varphi$ is continuous on $\mathbb{R}^1$ . Define $$ \tag{37} f(x) = \sum_{n=0}^\infty \left( \frac{3}{4} \right)^n \varphi \left( 4^n x \right). $$ Since $0 \leq \varphi \leq 1$ , Theorem 7.10 shows that the series (37) converges uniformly on $\mathbb{R}^1$ . By Theorem 7.12, $f$ is continuous on $\mathbb{R}^1$ . Now fix a real number $x$ and a positive integer $m$ . Put $$ \tag{38} \delta_m = \pm \frac{1}{2} \cdot 4^{-m} $$ where the sign is so chosen that no integer lies between $4^m x$ and $4^m \left( x + \delta_m \right)$ .
|
This is the function after summing the first 25 terms in the series. And this is the function as we sum from 1 to 15 terms. Here is the code to generate it:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def phi(x):
    # phi(x) = |x| on [-1, 1], extended with period 2
    # (the distance from x to the nearest even integer)
    return np.abs((x + 1) % 2 - 1)

def f(x, num_terms=25):
    n = np.arange(num_terms)[:, np.newaxis]
    terms = (3/4)**n * phi(4**n * x)
    return np.sum(terms, axis=0)

# Generate x values
x = np.linspace(-2, 2, 100000)

# Create a figure and axis
fig, ax = plt.subplots(figsize=(8, 6))

# Initialize the plot
line, = ax.plot([], [])

# Set up the plot properties
ax.set_xlim(-2, 2)
ax.set_ylim(0, 5)
ax.set_xlabel('x')
ax.set_ylabel('f(x)')
ax.grid(True)

# Update function for the animation
def update(i):
    y = f(x, i + 1)
    line.set_data(x, y)
    ax.set_title(f'Visualization of f(x) with {i+1} terms')
    return line,

# Create the animation
anim = FuncAnimation(fig, update, frames=15, interval=500, blit=True)

# Save the animation as a GIF
anim.save('f(x)_animation.gif')
```
|
|real-analysis|derivatives|continuity|graphing-functions|
| 1
|
$|f(z)|$ is constant but $f(z)$ is not constant.
|
According to this post: Show that if $f$ is analytic in a domain $D$ and $|f(z)|$ is constant in $D$, then the function $f(z)$ is constant in $D$. We have: if $f$ is analytic in a domain $D$ and $|f(z)|$ is constant in $D$, then the function $f(z)$ is constant in $D$. But if I consider the function $f(z) =e^{iz}$, then clearly $f$ is analytic on any domain $D\subseteq\mathbb{C}$. Moreover, $|f(z)|=1$ for all $z\in D$. But $f(z) =e^{iz}$ is not constant in $D$. I think I made some mistake, but I can't see where! Please help me.
|
But , If I consider the function $f(z) =e^{iz}$ then clearly $f$ is analytic on any domain $D\subseteq\mathbb{C}$ . Correct. Moreover, $|f(z) |=1$ for all $z\in D$ No. $|f(z)|=1$ if and only if $z\in\mathbb{R}$ . But for example $f(i)=e^{-1}$ and so $|f(i)|\neq 1$ . When we are dealing with complex analysis, the term "domain" almost always means at least open subset of $\mathbb{C}$ . And therefore $D\subseteq \mathbb{R}$ is impossible, since $\mathbb{R}$ has empty interior in $\mathbb{C}$ . And this implies that $|f(z)|=1$ for all $z\in D$ is also impossible.
|
|complex-analysis|analytic-functions|
| 1
|
Is $\forall x\exists x(x < x)$ a sentence?
|
Going through my notes on predicate logic, I read the following inductive definitions: Definition: An atomic term is either a variable or a constant. If $f$ is an $n$ -place function symbol and $t_1, . . . , t_n$ are terms , then $f(t_1, . . . , t_n)$ is also a term . Definition: An atomic formula is a string of the form $P(t_1, . . . , t_n)$ for the $n$ -place predicate $P$ (which may be the equal sign) and the terms $t_1, . . . , t_n$ . If $F$ and $G$ are formulas , so are $$\neg F,(F ∧ G),(F ∨ G), ∃xF, ∀xF.$$ Definition: An occurrence of a variable $x$ in a formula $F$ is bound iff it is part of a subformula beginning with $∃x . . .$ or $∀x . . .$ , in which the quantifier in question is said to bind the variable, and otherwise the occurrence is said to be free . A formula is a sentence iff no occurrence of any variable in it is free. It seems to me that these definitions allow a formula such as $$\forall x\exists x\,(x < x)$$ to be a sentence. If so, is that not problematic once semantics enter the picture?
|
Yes, it is. The $\forall x$ isn't quantifying any free variables, since there are no free variables in $\exists x\,(x<x)$, and the $\forall x$ is therefore called a 'null quantifier'. However, using the standard semantics for quantifiers we can still evaluate such a statement: we effectively ask, is it true for all objects in our domain that $\exists x\,(x<x)$ is true? And note, if $\exists x\,(x<x)$ is true, then it is true that 'for all objects' $\exists x\,(x<x)$ is true. Indeed, it turns out that $\forall x \ P$ is equivalent to just $P$ if $P$ does not contain any free variables. The same goes for a null existential quantifier. As such, we have the following well-recognized equivalence principles: Null Quantification: where $P$ does not contain $x$ as a free variable, we have $\forall x \ P \Leftrightarrow P$ and $\exists x \ P \Leftrightarrow P$. So no, there is no problem with allowing null quantifiers. And the fact that we have multiple quantifiers quantifying over the same variable isn't an issue either.
|
|logic|definition|first-order-logic|quantifiers|
| 0
|
Find the coefficient of $x^{30}$ in the following polynomial $(1+x+x^2+x^3+x^4+x^6)^6$
|
How do we find the coefficient of $x^{30}$ in the following polynomial $(1+x+x^2+x^3+x^4+x^6)^6$ My approach is as follows: $$1+x+x^2+x^3+x^4+x^5+x^6=\frac{1-x^7}{1-x}$$ hence $$\begin{align}1+x+x^2+x^3+x^4+x^6&=\frac{1-x^7}{1-x}-x^5\\&=(1-x^7-x^5+x^6)(1-x)^{-1}\end{align}$$ $$(1+x+x^2+x^3+x^4+x^6)^6=(1-x^7-x^5+x^6)^6(1-x)^{-6}.$$ It is getting complicated.
|
By the multinomial theorem, a generic term of the expansion is $$\frac{6!}{n_0!\,n_1!\,n_2!\,n_3!\,n_4!\,n_6!}\,1^{n_0}x^{n_1}(x^2)^{n_2}(x^3)^{n_3}(x^4)^{n_4}(x^6)^{n_6}$$ (there is no $x^5$ term in the base polynomial), so you need to count the solutions of $$n_1+2n_2+3n_3+4n_4+6n_6=30$$ with $$n_0+n_1+n_2+n_3+n_4+n_6=6.$$
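Carrying this count out by hand is tedious, but the coefficient is easy to extract by repeated polynomial multiplication (a plain-Python sketch; list index = degree):

```python
# Compute the coefficient of x^30 in (1 + x + x^2 + x^3 + x^4 + x^6)^6
# by repeated polynomial multiplication (discrete convolution).

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

base = [1, 1, 1, 1, 1, 0, 1]  # coefficients of 1 + x + x^2 + x^3 + x^4 + x^6
result = [1]
for _ in range(6):
    result = poly_mul(result, base)

print(result[30])  # 71
```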
|
|binomial-coefficients|generating-functions|
| 0
|
Seeking source for a quip about when it's ok to publish a theorem
|
Long ago, perhaps 40-50 years ago, I read a paper or book that contained advice about how to write and publish mathematics that contained a certain quip (below). Perhaps it was something published by the AMS, or perhaps it was written by Paul Halmos. Anyway, although I've tried, I've not been able to find it again. In my (admittedly deficient) memory, it went something like this: Do not attempt to publish case x of a theorem in the year 1950+x. Does anyone recognize this and know the source?
|
The statement is in Halmos, What to publish, Monthly January 1975, Page 15. "Do not publish in 1975 the case of dimension 2 of an interesting conjecture in algebraic geometry, one that you don't know how to settle in general, and then follow it by dimension 3 in 1976, dimension 4 in 1977, and so on, with dimension $k-3$ in 197k."
|
|reference-request|publishing|
| 1
|
A question about some exact sequence of Lie algebras
|
M. Schottenloher in his book "A mathematical introduction to conformal field theory" says in Remark 4.3 that: For every central extension of Lie algebras $0\to \mathfrak{a}\to \mathfrak{h}\overset{\pi}{\to} \mathfrak{g}\to 0$ , there is a linear map $\beta: \mathfrak{g}\to \mathfrak{h}$ with $\pi \circ \beta = id_{\mathfrak{g}}$ ( $\beta$ is in general not a Lie algebra homomorphism). How can I define $\beta$ ?
|
The map $\beta$ is called a section . A section that is moreover a Lie algebra homomorphism exists iff the short exact sequence splits; in that case the central extension is trivial. The explicit mapping will depend on the algebra itself, but usually it is clear once the exact sequence is defined (it's a right inverse of the projection map $\pi$ ). Note/Edit 1: A right inverse $\beta$ (as a morphism in $\textbf{Set}$ ) always exists (but is not unique unless $\mathfrak{a}=0$ ) since $\pi$ is surjective; to be a splitting it would also have to be a morphism in $\textbf{LieAlg}$ . For an example of a non-split short exact sequence one can look at the following in $\textbf{Ab}$ : $$0 \to 2\mathbf{Z} \to \mathbf{Z} \to \mathbf{Z/2Z} \to 0$$ Note/Edit 2 (to include Torsten's helpful comments): The question asks for a linear section, so in fact we are working in $\textbf{Vect}$ , and every short exact sequence in $\textbf{Vect}$ splits, which can be proven by linear algebra. See here.
|
|lie-algebras|exact-sequence|
| 1
|
How can I prove that if $x^5-x^3+x=2$, with $x$ a real number, then $x^6>3$, using only middle school algebra?
|
How can I prove that if $x^5-x^3+x=2$ , with $x$ a real number, then $x^6>3$ , using only middle school algebra? I have been struggling with this one. And I can't find any solution that doesn't rely on derivatives.
|
Multiplying $x^5-x^3+x=2$ by $x$ gives $x^6=2x-x^2+x^4\cdots (1)$. Note that $x>0$: indeed $x(x^4-x^2+1)=2$ with $x^4-x^2+1=(x^2-\frac12)^2+\frac34>0$, so $x$ must be positive. Dividing the original equation by $x$, we also have $\displaystyle x^4=\frac{2}{x}+x^2-1\cdots (2)$. From $(1)$ and $(2)$, we have $\displaystyle x^6=2x-x^2+\frac{2}{x}+x^2-1=2\bigg(x+\frac{1}{x}\bigg)-1$, so $\displaystyle x^6=2\bigg[\underbrace{\bigg(\sqrt{x}-\frac{1}{\sqrt{x}}\bigg)^2}_{\geq 0}+2\bigg]-1\geq 3.$ Equality could hold only for $x=1$, but $x=1$ does not satisfy the original equation ($1-1+1=1\neq 2$). So we have $\displaystyle x^6>3$.
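The proof above is purely algebraic; as a numeric sanity check (not a proof), one can locate the real root by bisection and confirm $x^6>3$:

```python
# Numerically locate the real root of x^5 - x^3 + x - 2 = 0 by bisection
# (g is strictly increasing, since g'(x) = 5x^4 - 3x^2 + 1 > 0 for all x).

def g(x):
    return x**5 - x**3 + x - 2

lo, hi = 1.0, 2.0          # g(1) = -1 < 0 < g(2) = 24, so the root lies between
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2
print(x, x**6)   # root ~ 1.2056..., and x^6 ~ 3.07 > 3
```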
|
|algebra-precalculus|
| 0
|
Find the coefficient of $x^{30}$ in the following polynomial $(1+x+x^2+x^3+x^4+x^6)^6$
|
How do we find the coefficient of $x^{30}$ in the following polynomial $(1+x+x^2+x^3+x^4+x^6)^6$ My approach is as follows: $$1+x+x^2+x^3+x^4+x^5+x^6=\frac{1-x^7}{1-x}$$ hence $$\begin{align}1+x+x^2+x^3+x^4+x^6&=\frac{1-x^7}{1-x}-x^5\\&=(1-x^7-x^5+x^6)(1-x)^{-1}\end{align}$$ $$(1+x+x^2+x^3+x^4+x^6)^6=(1-x^7-x^5+x^6)^6(1-x)^{-6}.$$ It is getting complicated.
|
You have already gotten algebraic and combinatorial answers; I will provide a computational/analysis-inspired one. In engineering language this is the iterated/accumulated convolution with the kernel [1,1,1,1,1,0,1], so you can straightforwardly compute the convolution 6 times and look at position 30. In Octave code:

```
% Octave is 1-indexed, so l(31) is the coefficient of x^30
k = [1,1,1,1,1,0,1];
l = 1;
for i = 1:6
  l = conv(l, k, 'full');
endfor
l(31)
```

Or you can use the convolution theorem for the Fourier transform and raise the Fourier coefficients to the 6th power:

```
k = [1,1,1,1,1,0,1];
k2 = [k, zeros(1,25)];   % zero-pad so the cyclic wrap-around misses index 31
ifft2(fft2(k2).^6)(31)
```
|
|binomial-coefficients|generating-functions|
| 0
|
How many pairs $(a,b)$ such that $\frac{x}{a}+\frac{y}{b}$ is in set $S=\{x \in \mathbb{R} : 0 \le x \le 1\}$?
|
Suppose $S=\{x \in \mathbb{R} : 0 \le x \le 1\}$. How many pairs $(a,b)$ of positive integers are there such that exactly $2018$ elements of $S$ can be written as $\frac{x}{a}+\frac{y}{b}$, where $x,y$ are integers? Here's what I have attempted, not much, but here it is: we need $$0 \le \frac{x}{a}+\frac{y}{b} \le 1, $$ which can be rewritten as $$0 \le xb+ya \le ab.$$ I'm not sure where to go from here. I know that $ab$ and $xb+ya$ are integers, but I can't get anywhere from there...
|
We have $0 \le \frac{bx+ay}{ab} \le 1$ . By Bezout's Lemma , any number of the form $bx+ay, x,y \in \Bbb Z$ is a multiple of $g = \gcd(a,b)$ . Moreover, for any multiple of $g$ , finding the infinite family of solutions $(x,y)$ is possible. So, $bx+ay$ takes the values $0,g,2g \ldots ab$ in the desired interval. These are a total of $ab/g + 1 = \mathrm{lcm}(a,b)+1$ numbers. We are given $\mathrm{lcm}(a,b) + 1 = 2018 \iff \mathrm{lcm}(a,b) = 2017$ . Note that $2017$ is prime. Hence, $$\boxed{(a,b) \in \{(1,2017), (2017,1), (2017,2017)\}}$$ giving 3 pairs.
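The count $\mathrm{lcm}(a,b)+1$ can be brute-force verified for small $a,b$ (a sketch using exact fractions to avoid floating-point collisions):

```python
from fractions import Fraction
from math import gcd

def count_in_unit_interval(a, b):
    # all values x/a + y/b lying in [0, 1]; the search range for x, y is
    # large enough to realize every attainable value (bx + ay)/(ab) in [0, ab]
    vals = set()
    for xx in range(-a * b, a * b + 1):
        for yy in range(-a * b, a * b + 1):
            v = Fraction(xx, a) + Fraction(yy, b)
            if 0 <= v <= 1:
                vals.add(v)
    return len(vals)

for a in range(1, 7):
    for b in range(1, 7):
        assert count_in_unit_interval(a, b) == a * b // gcd(a, b) + 1
print("count = lcm(a, b) + 1 verified for a, b <= 6")
```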
|
|elementary-number-theory|
| 1
|
Probability of moving from the top node to the bottom node.
|
[Figure: the graph $G_N$] Consider the family of graphs $G_N$ for an integer $N\geq2$, shown above. Suppose a random walker starts at the top node of the graph $G_N$ and begins a random walk where any route out of a node is equally probable. Find the probability, as a function of $N$, that the walker reaches the bottom node before returning to the top node. I initially tried to think about this by branching out and counting probabilities, but soon realised that the walker can just go on forever. I then tried recurrence equations for the middle two points, something along the lines of letting $x$ be the chance to get to the bottom node from one vertex and $y$ the chance from the other vertex. For the top middle point, there's a $\frac{1}{3}$ chance it goes up to the top and a $\frac{2}{3}$ chance it goes to the $y$ vertex, giving $x=\frac13\cdot0+\frac23\cdot y$. I'm unsure how to go on from here, however. I believe the answer should be $\frac{2n}{3n-1}$.
|
If we ignore the direct paths from the top node to the bottom node and try to figure out the probabilities $q_j$ of reaching the bottom node without returning to the top node when we start at node $j$ , we know that those probabilities form an eigenvector $(q_1\;\dots\;q_{2n})^T$ associated with the eigenvalue $1$ of the transition matrix (with top and bottom absorbing). We also know that $q_1 = 0$ , because we cannot reach the bottom node without returning to the top node if we are already at the top node. We know that $q_{2n} = 1,$ because it is certain that we reach the bottom node when we are already at that node. Such an eigenvector is uniquely defined by the properties mentioned above and can be found with some linear algebra and/or some trial and error: Start with $0$ , accumulate $2$ s and $1$ s in alternating order until you have $2n$ numbers (including the $0$ at the beginning). In the end, divide all the intermediate results by the sum of the $2$ s and $1$ s. Example: $n=4:$
|
|probability|graph-theory|
| 1
|
What does $\nabla \nabla$ mean? (nabla nabla, del del)
|
I see that used in many books but none of them defines what this actually means, since $\nabla$ is typically seen as a vector, but plain vector-vector multiplication does not exist. For instance, in the identity: $\nabla \times \nabla \times A = \nabla \nabla \cdot A - \nabla ^ 2 A$ Or in Green's dyadic function: $\bar{G}(\mathbf{r},\mathbf{r}') = \left[\bar{I} + \frac{\nabla \nabla}{k_0^2}\right] \frac{e^{i k_0 |\mathbf{r} - \mathbf{r}'|}}{4 \pi |\mathbf{r} - \mathbf{r}'|}$
|
Since this reminded me of the confusion caused over many years by the different notations, I would like to share my notes in the hope that they are useful. If we take the double gradient, i.e., the gradient of the gradient, we obtain the so-called Hessian matrix, here denoted by $\nabla\!\nabla$ instead of $\nabla^2$ . However, not a few mathematicians like to use $\nabla^2$ for it, which bewilders many engineers, since $\nabla^2$ standing for the Laplacian is already deeply ingrained in their education, especially when writers do not distinguish scalar notation from vector/matrix notation. So, to avoid any confusion, the Laplacian is denoted by $\nabla^2=\nabla\cdot\nabla$ , the divergence of the gradient, which is a scalar, while the Hessian matrix is the gradient of the gradient, and is therefore explicitly expressed as $\nabla\!\nabla$ . From the word-width point of view, $\nabla^2$ can save space compared with $\nabla\cdot\nabla$ , but not compared with $\nabla\!\nabla$ . Really, really wish that
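The distinction can be checked symbolically. Here is a minimal sketch using SymPy; the scalar field $f$ below is an arbitrary illustrative choice:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**3 * y + sp.sin(y)                 # an arbitrary smooth scalar field

# Hessian: the gradient of the gradient -- a matrix
H = sp.hessian(f, (x, y))

# Laplacian: the divergence of the gradient -- a scalar
lap = sp.diff(f, x, 2) + sp.diff(f, y, 2)

# The Laplacian is exactly the trace of the Hessian
assert sp.simplify(H.trace() - lap) == 0
```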
|
|vector-analysis|
| 0
|
Solve $y+3=3\sqrt{y+7}$
|
Solve $y+3=3\sqrt{y+7}$ $\Rightarrow (y+3)^2=9(y+7)$ $\Rightarrow y^2+6y+9=9y+63$ $\Rightarrow y^2-3y-54=0$ $\Rightarrow (y-9)(y+6)=0$ $y=9, -6$ But plugging $y=-6$ into $y+3=3\sqrt{y+7}$ yields $-3=3$ . I know the convention is $\sqrt{y}$ is equal to a positive number, but in this case $\sqrt{y+7}=\sqrt{-6+7} = \sqrt{1} = +1$ , so why doesn't it work?
|
When you square an equation you allow for negative solutions too. For instance, $$x-1=1.$$ By squaring both sides, $$ x^2-2x=0. $$ This comes from the fact that an equation and the equation with both sides squared are not equivalent. When you apply the square root it introduces an absolute value, and hence the two answers: $$x^2-2x=0\\ (x-1)^2=1\\ |x-1|=1$$ which is not the same as $x-1=1$ . Side note: this is the reason why quadratics generally have 2 solutions.
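A quick numeric check makes the extraneous root visible (illustrative only):

```python
import math

# roots of the squared equation y^2 - 3y - 54 = 0
for yv in (9, -6):
    lhs = yv + 3
    rhs = 3 * math.sqrt(yv + 7)      # principal (non-negative) square root
    print(yv, lhs, rhs, lhs == rhs)
# y = 9 satisfies the original equation (12 = 12); y = -6 gives -3 on the
# left but +3 on the right, so it solves only the squared equation.
```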
|
|algebra-precalculus|
| 0
|
mutual information of two normal random vectors
|
I'm dealing with a question that request to calculate the mutual information of two normal random vectors, this is the description: If $\mathbf X\sim \mathcal N(\mu_X,\Sigma_X),~\mathbf Y\sim \mathcal N(\mu_Y,\Sigma_Y)$ are random vectors with $N$ elements, the covariance matrix of $\mathbf X$ and $\mathbf Y,$ $\Sigma_{XY}$ is $$\begin{align}\Sigma_{XY} &=\mathbb{E}[(\mathbf X-\mathbb{E}\mathbf X)(\mathbf Y-\mathbb{E}\mathbf Y)^\mathsf T]\\ &=\left[\begin{matrix}\Sigma_X&\mathsf{cov}(\mathbf X,\mathbf Y)\\\mathsf{cov}(\mathbf Y,\mathbf X)&\Sigma_Y\end{matrix}\right]\\ &=\left[\begin{matrix}\Sigma_X&\boldsymbol \rho\sqrt{|\Sigma_X||\Sigma_Y|}\\\boldsymbol \rho^\mathsf T\sqrt{|\Sigma_Y||\Sigma_X|}&\Sigma_Y\end{matrix}\right] \end{align}$$ if we define correlation matrix $$\boldsymbol \rho={{\sf cov}(\mathbf X,\mathbf Y)\over \sqrt{|\Sigma_X||\Sigma_Y|}}.$$ Now the PDF of joint distribution $(\mathbf X,\mathbf Y)$ is $$\begin{align} p_{XY}(\mathbf x,\mathbf y) =&\frac1{\sqrt{(2\pi)^{2N}|\
|
Claim #1: If $X \sim \mathcal{N}(\mu, \Sigma)$ , where $\mu \in \mathbb{R}^d, \Sigma \in \mathbb{R}^{d \times d}$ then $$ \mathbb{E}[(X - \mu)^T \Sigma^{-1}(X - \mu)] = d $$ Indeed, the affine transformation $Y = \Sigma^{-1/2}(X - \mu)$ gives $Y \sim \mathcal{N}(0, I_d)$ , where $I_d$ is the $d\times d$ identity matrix. Then, $$ (X - \mu)^T \Sigma^{-1}(X - \mu) = \sum_{k = 1}^d Y_k^2 \sim \chi^2(d) $$ (notice that $Y_k^2 \sim \chi^2(1)$ ). Therefore, $\mathbb{E}[(X - \mu)^T \Sigma^{-1}(X - \mu)] = d$ Now, $$ \int_{\mathbb{R^N}} \int_{\mathbb{R}^N} p_{XY}(x, y)q_{XY}(x, y)dxdy = -\dfrac{1}{2}\mathbb{E}[(Z - \mu')^T(\Sigma')^{-1}(Z - \mu')] = -N $$ where $Z = (X, Y) \sim \mathcal{N}(\mu', \Sigma')$ Finally, $$ \int_{\mathbb{R^N}} \int_{\mathbb{R}^N} p_{XY}(x, y)q_{X}(x)dxdy = \int_{\mathbb{R}^N} p_X(x)q_X(x)dx = -\dfrac{1}{2}\mathbb{E}[(X - \mu_X)^T\Sigma_X^{-1}(X - \mu_X)] = -N/2 $$ Similarly, $$ \int_{\mathbb{R^N}} \int_{\mathbb{R}^N} p_{XY}(x, y)q_{Y}(y)dxdy = \int_{\mathbb{R}^N} p_Y(y)q_Y(
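Claim #1 can be sanity-checked with a quick Monte Carlo estimate; the dimension, mean, and covariance below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)          # a valid (positive definite) covariance

# estimate E[(X - mu)^T Sigma^{-1} (X - mu)], which should be close to d
X = rng.multivariate_normal(mu, Sigma, size=200_000)
diff = X - mu
q = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(Sigma), diff)
print(q.mean())
```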
|
|multivariable-calculus|normal-distribution|matrix-calculus|mutual-information|
| 1
|
if $f\in C_c(X)$, then $f$ is zero on boundary of $supp(f)$
|
If $X$ is a topological space and $f\in C_c(X)$ , then we know that $$f\bigg[\Big(supp(f)\Big)^c\bigg]=0 $$ Is it true that if $f\in C_c(X)$ , then $f\bigg[\partial\Big(supp(f)\Big)\bigg]=0$ ? If $x\in \partial \Big(supp(f)\Big) $ , since $$\partial \Big(supp(f)\Big)\subseteq\overline{ \Big(supp(f)\Big)^c } $$ there exists a net $\{x_{\alpha}\}\subseteq \Big(supp(f)\Big)^c $ such that $x_{\alpha}\longrightarrow x$ , and since $f$ is continuous, $$0= f(x_{\alpha})\longrightarrow f(x)$$ which implies that $f(x)=0$ . Is my proof correct?
|
It works, and nets make the argument go through in arbitrary topological spaces (with sequences it would only work in nice spaces such as metric spaces). If you want a more general proof involving purely topological reasoning with open and closed sets, recall that by definition, when $A \subset X$ , $\overline{A}$ is the smallest (for the inclusion) closed subset that contains $A$ , $\overset{\mathrm{o}}{A}$ is the largest open subset contained in $A$ and $\partial A = \overline{A}\backslash\overset{\mathrm{o}}{A}$ . And $\mathrm{supp}(f) = \overline{f^{-1}(\mathbb{R}^*)}$ , and continuity of $f$ means that the preimage of every open (resp. closed) subset of $\mathbb{R}$ is open (resp. closed) in $X$ . In particular, $f^{-1}(\mathbb{R}^*)$ is open in $X$ . In particular, it equals its own interior, thus $\overset{\mathrm{o}}{\overline{f^{-1}(\mathbb{R}^*)}} \supset \overset{\mathrm{o}}{f^{-1}(\mathbb{R}^*)} = f^{-1}(\mathbb{R}^*)$ . From here, we have that, $$ f^{-1}(\mathbb{R}^*) \cap \partial\mathrm{supp}(f) = f^{-1}(\mathbb{R}^*) \cap \left(\ove
|
|general-topology|
| 0
|
For any module with vector set $V$ and scalar set $C$, must there exist a set $X$ such that $V$ and $(X → C)$ are isomorphic?
|
Here's some evidence that suggests the affirmative to my question. There exists an isomorphism between: $\mathbb{R}^{n}$ and $(\mathbb{N}^{<n} \to \mathbb{R})$ ; $\mathbb{R}^{\infty}$ and ( $\mathbb{N} \to \mathbb{R}$ ); $C$ and $(1 \to C)$ , where $C$ is an underlying set for a ring and $1$ is a singleton set. If the answer to my question is 'yes', then with the univalence axiom, I can reformulate the definition of a module to be a ring paired with some other set.
|
For finite dimensional vector spaces (over fields), @IsAdisplayName has given the correct answer in the comments. Indeed, if $V$ is a finite dimensional vector space, then it is isomorphic to some $\mathbb{R}^n$ (or $k^n$ ) and hence isomorphic to a function space by the first example you gave. For infinite dimensional vector spaces, the answer is in general "no", even if the axiom of choice is available. For instance, the $\mathbb{Q}$ -vector space $\mathbb{Q}[X]$ of polynomials cannot be isomorphic to a space of the form $(X \to \mathbb{Q})$ : If $X$ is finite, then $(X \to \mathbb{Q})$ is, unlike $\mathbb{Q}[X]$ , finite dimensional. If $X$ is infinite, then its underlying set is, unlike the underlying set of $\mathbb{Q}[X]$ , uncountable. Over general rings, there are even more counterexamples. For instance, the $\mathbb{Z}$ -module $\mathbb{Z}/(2)$ is not isomorphic to a module of the form $(X \to \mathbb{Z})$ : In none of these modules, with the exception of $X = \emptyset$ , do we
|
|linear-algebra|abstract-algebra|homotopy-type-theory|
| 1
|
How can I prove that if $x^5-x^3+x=2$, with $x$ a real number, then $x^6>3$, using only middle school algebra?
|
How can I prove that if $x^5-x^3+x=2$ , with $x$ a real number, then $x^6>3$ , using only middle school algebra? I have been struggling with this one. And I can't find any solution that doesn't rely on derivatives.
|
Here is a more straightforward but much more laborious reasoning that doesn’t use derivatives or any meaningful calculus. First, since $$2=x^5-x^3+x=x\cdot\underbrace{(x^4-x^2+1)}_{>0},$$ we have that $x$ is positive. Now we prove that $f(x)=x^5-x^3+x$ strictly increases. Let $0<x<y$ . We want to prove that $$x^5-x^3+x<y^5-y^3+y,$$ that is, $$(x-y)(x^4+x^3y+x^2y^2+xy^3+y^4-x^2-xy-y^2+1)<0.$$ Since $x-y<0$ , we want to prove that the second factor is positive. It follows from these AM-GMs: $$x^4+\frac13\ge 2\sqrt{x^4\cdot\frac13}\ge x^2,$$ $$y^4+\frac13\ge 2\sqrt{y^4\cdot\frac13}\ge y^2,$$ $$x^2y^2+\frac13\ge 2\sqrt{x^2y^2\cdot\frac13}\ge xy.$$ Now, knowing that $f(x)$ is strictly increasing, let us see that $x>\frac65$ . Indeed, otherwise $$x^5-x^3+x\le\left(\tfrac65\right)^5-\left(\tfrac65\right)^3+\tfrac65=\tfrac{6126}{3125}<2.$$ Now $$x\cdot(x^5-x^3+x)=x\cdot 2$$ $$x^6-x^4+x^2=2x$$ $$x^6=x^4-x^2+2x.$$ Knowing that, we consider $$x^6>3$$ $$x^4-x^2+2x-2>1$$ $$x^2(x+1)(x-1)+2(x-1)>1$$ $$(x-1)(x^2(x+1)+2)>1$$ Since $x>\frac65$ the LHS is a product of two positive increasing functions and thus is increa
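For reassurance, the conclusion can also be checked numerically with a simple bisection (no calculus involved):

```python
# bisection for the real root of x^5 - x^3 + x - 2 on [1, 2]
f = lambda x: x**5 - x**3 + x - 2
lo, hi = 1.0, 2.0                 # f(1) = -1 < 0 and f(2) = 24 > 0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
x = (lo + hi) / 2
print(x, x**6)   # x is about 1.2056 and x^6 is about 3.07, barely above 3
```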
|
|algebra-precalculus|
| 0
|
Finding parameter values for which a system has closed orbits
|
My question is on Exercise 7.3.8 of Chaos and Nonlinear Dynamics (2nd ed) by Strogatz: 7.3.8. Recall the system $\dot{r} = r(1-r^2) + \mu r \cos \theta, \; \dot{\theta} = 1$ of Example 7.3.1. Using the computer, plot the phase portrait for various values of $\mu > 0$ . Is there a critical value $\mu_c$ at which the closed orbit ceases to exist? If so, estimate it. If not, prove that a closed orbit exists for all $\mu > 0$ . [Here the ODE is given in polar coordinates, i.e. $x = r \cos \theta$ and $y = r \sin \theta$ .] I just want to make sure my reasoning is correct. Here is my work: Differentiating the equations $x = r \cos \theta$ and $y = r \sin \theta$ with respect to $t$ gives \begin{align*} \dot{x} &= -r \dot{\theta} \sin \theta + \dot{r} \cos \theta \\[2pt] \dot{y} &= r \dot{\theta} \cos \theta + \dot{r} \sin \theta \end{align*} Using the above equations I found the following Cartesian equations: \begin{align*} \dot{x} &= -y + x(1 - x^2 - y^2) + \frac{\mu x^2}{\sqrt{x^2 + y^2}}
|
Here is the solution in my own words, based on Lutz Lehmann's answer. We claim that a closed orbit exists for all $\mu > 0$ . To show this, we first note that \begin{align*} \dot{\theta} = 1 \implies \theta(t) = t + \theta(0). \end{align*} Hence, all trajectories rotate at unit angular speed, completing a full revolution every $2\pi$ units of time. In particular, any closed orbit must cross the ray $\theta = 0$ every $2\pi$ units of time. Therefore, a closed orbit exists if and only if there exists an initial condition $(r(0),\theta(0)) = (r_0,0)$ such that $r(2\pi) = r_0$ . We show that such an $r_0 > 0$ exists by explicitly solving the IVP $$ (1) \quad \begin{cases} \dot{r} = r(1-r^2) + \mu r \cos \theta, \\[2pt] \dot{\theta} = 1 \\[2pt] (r(0),\theta(0)) = (r_0, 0). \end{cases} $$ From $\dot{\theta} = 1$ and $\theta(0) = 0$ we have $\theta(t) = t$ , and so we are reduced to solving a single inhomogeneous ODE: $$ \dot{r} = r(1-r^2) + \mu r \cos t, \quad r(0) = r_0. $$ By rewriting the equation
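The fixed-point condition $r(2\pi)=r_0$ can also be illustrated numerically. This sketch integrates the radial equation with RK4 and bisects on the Poincaré map; the value $\mu=1$ and the step counts are arbitrary choices:

```python
import math

def flow(r0, mu, steps=4000):
    """Integrate dr/dt = r(1 - r^2) + mu*r*cos(t) from t = 0 to t = 2*pi (RK4)."""
    h = 2 * math.pi / steps
    r, t = r0, 0.0
    f = lambda t, r: r * (1 - r * r) + mu * r * math.cos(t)
    for _ in range(steps):
        k1 = f(t, r)
        k2 = f(t + h / 2, r + h / 2 * k1)
        k3 = f(t + h / 2, r + h / 2 * k2)
        k4 = f(t + h, r + h * k3)
        r += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return r

def closed_orbit_radius(mu):
    """Bisect on g(r0) = flow(r0) - r0 to find a fixed point of the Poincare map."""
    lo, hi = 0.05, 3.0   # small radii grow over one period, large radii shrink
    for _ in range(60):
        mid = (lo + hi) / 2
        if flow(mid, mu) > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r_star = closed_orbit_radius(1.0)
print(r_star, abs(flow(r_star, 1.0) - r_star))   # a fixed point -> closed orbit
```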
|
|ordinary-differential-equations|dynamical-systems|
| 0
|
Are there rings where factorization is unique, but does not necessarily exist?
|
It feels like this should be a well-known question, but I can't find any related questions on this site by searching; apologies in advance if this is a duplicate. I assume rings are commutative with multiplicative identity. Most examples of non-UFDs involve cases where factorization is not unique, e.g. $\mathbb Z[\sqrt{-5}]$ is not a UFD because $$ 2 \cdot 3 = 6 = (1+\sqrt{-5}) \cdot (1-\sqrt{-5}) $$ and none of the divisors are associate. Is there an integral domain $R$ where: Factorization is unique in the sense that if $p_1p_2\cdots p_n = q_1q_2\cdots q_m$ where $p_i$ and $q_i$ are irreducible elements in $R$ , then $m=n$ and we can reorder such that $p_i$ is an associate of $q_i$ for each $i=1,2,\dots, n$ . Factorization may not exist : there is some non-unit $x$ that cannot be written as the product of irreducible elements.
|
I'm not familiar with the content, but this paper suggests yes: Coykendall, Jim, and Muhammad Zafrullah. "AP-domains and unique factorization." Journal of Pure and Applied Algebra 189.1-3 (2004): 27-35. It appears to be the main subject of the article. From the introduction: The standard definitions of “factorization domains” (e.g., UFDs, HFDs, BFDs, etc.) always include the assumption that the domain is atomic (i.e., every nonzero, nonunit of the domain can be written as a product of irreducible elements or atoms). It is natural (and perhaps imperative) to consider the implications to the theory when this assumption is dropped. For example, one might declare more generally that a domain, R, is an unrestricted unique factorization domain (U-UFD) if every element that can be factored uniquely into irreducible elements has unique factorization. So, you might find examples there . I am a little puzzled by "if every element that can be factored uniquely into irreducible elements has unique
|
|ring-theory|unique-factorization-domains|
| 0
|
Testing Riemann integrability of a function that is discontinuous at all rational points.
|
Prove that the function $f$ from $[a,b]$ to $\mathbb{R}$ defined by $$f(x) =\begin{cases} \frac{1}{q^2}, & \text{when }x = \frac{p}{q} \\ \frac{1}{q^3}, & \text{when } x=\sqrt{\frac{p}{q}} \end{cases}$$ where $p$ and $q$ are relatively prime integers and $f(x)=0$ for all other values of $x$ , is Riemann integrable on $[a,b]$ . The solution given in my textbook is terse and difficult to comprehend. I somewhat got a rough idea from the book and tried to rewrite a detailed solution myself; however, I couldn't complete it. Please help me finish the solution. MY SOLUTION We know that a bounded function $f: [a,b] \to \mathbb{R}$ is R-integrable iff the set of its discontinuity has a measure zero. Therefore, we need to prove that the set of all points of discontinuity in $[a,b]$ is a set of measure zero. We are going to show that the function $f$ is discontinuous for all rational values of $x$ and continuous for all irrational values of $x$ . If the set of discontinuities in the interval is e
|
I think your solution is overcomplicated. By Lebesgue's theorem, a bounded function $f$ on a segment is Riemann integrable iff the set of the discontinuity points of $f$ has Lebesgue measure $0$ . But I have to note first that there is an ambiguity in the definition of $f(x)$ when $x$ is a positive noninteger rational number. Indeed, then there exist relatively prime integers $p$ and $q>1$ such that $x=\frac pq$ , so $f(x)=\frac 1{q^2}$ . But the integers $p^2$ and $q^2$ are also relatively prime, and $x=\sqrt{\frac {p^2}{q^2}}$ , so $f(x)=\frac 1{q^6}$ . But, anyway, for any of these two choices of $f(x)$ for each such $x$ , the function $f$ is continuous at all but countably many points of $[a,b]$ . Indeed, in any case for each natural $n$ the set $$A_n=\left\{x\in [a,b]:f(x)>\frac 1n\right\}$$ is finite. This easily implies that for any $x\in [a,b]\setminus\bigcup_{n\in\mathbb N} A_n$ we have $f(x)=0$ , and, moreover, $f$ is continuous at $x$ . Indeed, let $n$ be any natural number. Th
|
|real-analysis|continuity|epsilon-delta|riemann-integration|
| 1
|
Is there a simple formula for $\binom{2n}{n} \pmod{n^3}$?
|
Is there a simple formula for the following? $$f(n) = \binom{2n}{n} \pmod{n^3}$$ I know $f(n) = 2$ iff $n$ is prime and greater than $3$ , but I don't know anything about composite numbers.
|
Just a comment. Numbers appear to be of this form: \begin{aligned} &{2 n \choose n} \bmod n^3 \equiv {2 k \choose k} + \frac{t n^3}{k^3} \\\\ & \text{for } n = kp, \quad p \in \mathbb{P} \text{ prime}, \quad 0\leq t < k^3, \quad k \in \mathbb{N},\ k > 1 \end{aligned} For $k=1$ , $ {2p \choose p} \equiv 2 \bmod p^3$ which, as stated in the comments, is a reformulation of Wolstenholme's theorem.
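The $k=1$ (Wolstenholme) case is easy to confirm computationally, and printing the composite residues shows no comparably simple pattern (the sample values are arbitrary):

```python
from math import comb

# Wolstenholme: C(2p, p) = 2 (mod p^3) for every prime p > 3
for p in (5, 7, 11, 13, 17, 19, 23):
    assert comb(2 * p, p) % p**3 == 2

# for composite n the residues look irregular
residues = {n: comb(2 * n, n) % n**3 for n in (4, 6, 8, 9, 10, 12)}
print(residues)
```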
|
|probability|combinatorics|prime-numbers|modular-arithmetic|binomial-coefficients|
| 0
|
Express a stochastic process in terms of another
|
Let $Y$ and $Z$ be two random variables on $(\Omega,\mathcal{F},P)$ ; it is well-known that $\sigma(Z)\subset \sigma(Y)$ is equivalent to the existence of some Borel measurable function $f$ such that $Z=f(Y)$ . I am wondering if the analogue is true for stochastic processes. Let $\{Y_t\}$ and $\{Z_t\}$ be two stochastic processes on $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ . Suppose that the natural filtration of $Z$ is contained in the natural filtration of $Y$ , i.e. $\sigma(Z_s,0\le s\le t)\subset \sigma(Y_s,0\le s\le t)$ . Is it true that there exists some Borel measurable function $f$ such that $Z_t=f(t,Y_t)$ ?
|
No, this is not true. As an example, let $Y$ be a Brownian motion and define $Z$ by $Z_t = 0$ for $t \in [0,2)$ and $Z_t = Y_1$ for $t \in [2,\infty)$ . Clearly $\sigma(Z_s, 0 \le s \le t) \subset \sigma(Y_s, 0 \le s \le t)$ for all $t$ , but there is no function $f$ such that $Z_t = f(t,Y_t)$ for all $t$ . For example, $Z_2 = Y_1$ and $Y_1 \ne f(2,Y_2)$ for any $f$ because $\sigma(Y_1) \not \subset \sigma(Y_2)$ .
|
|probability-theory|stochastic-processes|stochastic-calculus|stochastic-analysis|
| 0
|
Difficulty Finding dz/dt
|
If $$z=e^x\cos y$$ with $$x^3+e^x-t^2-t=1$$ $$yt^2+y^2t-t+y=0,$$ find $$\frac {dz}{dt}$$ When I asked my teacher for any tips he said something about differentiating both sides of the 2nd and 3rd equations. But I just can't see how this could help me or get anywhere closer to the solution. How can I find $\frac{dz}{dt}$ ?
|
Use chain rule here: $\frac{dz}{dt}=\frac{\partial z}{\partial x}\frac{dx}{dt}+\frac{\partial z}{\partial y}\frac{dy}{dt}$ . One should be able to find $\frac{dx}{dt}$ and $\frac{dy}{dt}$ from the equations provided.
|
|derivatives|
| 0
|
Difficulty Finding dz/dt
|
If $$z=e^x\cos y$$ with $$x^3+e^x-t^2-t=1$$ $$yt^2+y^2t-t+y=0,$$ find $$\frac {dz}{dt}$$ When I asked my teacher for any tips he said something about differentiating both sides of the 2nd and 3rd equations. But I just can't see how this could help me or get anywhere closer to the solution. How can I find $\frac{dz}{dt}$ ?
|
Presumably $x$ and $y$ are both functions of $t$ . Differentiate the second equation, and you can solve for $dx/dt$ as a function of $x$ and $t$ . Similarly for the third to get $dy/dt$ as a function of $y$ and $t$ . Now differentiate the first equation, substitute in, and you can get $dz/dt$ as a function of $x$ , $y$ and $t$ . You could actually solve for $y$ explicitly as a function of $t$ because your third equation is quadratic in $y$ , but you can't get $x$ explicitly as a function of $t$ in the second equation.
|
|derivatives|
| 0
|
Difficulty Finding dz/dt
|
If $$z=e^x\cos y$$ with $$x^3+e^x-t^2-t=1$$ $$yt^2+y^2t-t+y=0,$$ find $$\frac {dz}{dt}$$ When I asked my teacher for any tips he said something about differentiating both sides of the 2nd and 3rd equations. But I just can't see how this could help me or get anywhere closer to the solution. How can I find $\frac{dz}{dt}$ ?
|
To find $\frac{\mathrm dz}{\mathrm dt}$ , we need to find the dependence of $x$ and $y$ on $t$ . In particular, if we write $z = z(x(t), y(t))$ , then by the chain rule, $$\frac{\mathrm dz}{\mathrm dt} = \frac{\partial z}{\partial x} \frac{\mathrm dx}{\mathrm dt} + \frac{\partial z}{\partial y} \frac{\mathrm dy}{\mathrm dt}.$$ Differentiating the first equation wrt $t$ , $$3x^2 \frac{\mathrm dx}{\mathrm dt} + e^x \frac{\mathrm dx}{\mathrm dt} - 2t - 1 = 0 \implies \frac{\mathrm dx}{\mathrm dt} = \frac{2t+1}{3x^2 + e^x},$$ and for the second equation, $$\frac{\mathrm dy}{\mathrm dt} t^2 + 2ty + 2y \frac{\mathrm dy}{\mathrm dt} t + y^2 - 1 + \frac{\mathrm dy}{\mathrm dt} = 0 \implies \frac{\mathrm dy}{\mathrm dt} = \frac{1 - y^2 - 2ty}{t^2 + 2yt + 1}.$$ Thus, \begin{align*} \frac{\mathrm dz}{\mathrm dt} = e^x \cos y \left( \frac{2t+1}{3x^2 + e^x} \right) - e^x \sin y \left(\frac{1 - y^2 - 2ty}{t^2 + 2yt + 1} \right) \end{align*}
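The implicit differentiation can be verified symbolically. This SymPy sketch re-derives $dx/dt$ and $dy/dt$ from the constraints; note the $y^2$ term that appears when differentiating $y^2 t$:

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")(t)
y = sp.Function("y")(t)

# implicit differentiation of the two constraints, solved for x'(t) and y'(t)
dxdt = sp.solve(sp.diff(x**3 + sp.exp(x) - t**2 - t - 1, t), x.diff(t))[0]
dydt = sp.solve(sp.diff(y * t**2 + y**2 * t - t + y, t), y.diff(t))[0]

assert sp.simplify(dxdt - (2*t + 1) / (3*x**2 + sp.exp(x))) == 0
assert sp.simplify(dydt - (1 - y**2 - 2*t*y) / (t**2 + 2*y*t + 1)) == 0

# chain rule for z = e^x cos y
dzdt = sp.diff(sp.exp(x) * sp.cos(y), t).subs({x.diff(t): dxdt, y.diff(t): dydt})
print(sp.simplify(dzdt))
```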
|
|derivatives|
| 1
|
Fixed single-elimination tournament probabilities
|
Suppose there is a single-elimination bracket style tournament where the number of players is $n = 2^k$ with $k > 1$ . Each player is assigned a seeding of $1$ through $n$ , and during each round the matchups are uniformly chosen at random. The lower seed always wins in this fixed tournament, but the matchups for each round are chosen uniformly at random. I am trying to find the probability that a specific matchup $i$ , $j$ will happen in a given round $r$ ; for example, in the first round every pair has an equally likely chance of happening. Another example would be that in the final round, the number of matchups possible would be $n/2$ , because seed $1$ will always make it and its opponent could be any of seeds $2, \dots, n/2 + 1$ . I don't really know how to find a formula for this, or find a pattern, as I have tried and failed. I have made a simulation that gives me rough percentages after running over 1 million tournaments, but I need the exact fractions/probabilities. Any ideas on how to find these probabili
|
Fix positive integers $k,i,j,r$ with $i<j\le 2^k$ and $r\le k$ . Assume players $i,j$ have ranks $i,j$ respectively. Given the specified random pairings, our goal is to find the probability, $p$ say, that players $i,j$ meet in round $r$ . No bias is introduced by assuming a single initial random draw with the usual tree structure, followed by deterministic outcomes. Define events $A,B,C$ as follows . . . Let $A$ be the event that players $i,j$ are in separate halves of the same $2^r$ -player sub-tree (so they could potentially meet in round $r$ ). $\\[4pt]$ Let $B$ be the event that player $j$ wins all matches in the first $r-1$ rounds. $\\[4pt]$ Let $C$ be the event that player $i$ wins all matches in the first $r-1$ rounds. Then we have \begin{align*} P(A)&= \frac{2^{r-1}}{2^k-1} \\[4pt] P(B{\,|\,}A)&= {\large{ \frac {\binom{2^k-j}{2^{r-1}-1}} {\binom{2^k-2}{2^{r-1}-1}} }} \\[4pt] P(C{\,|\,}A,B)&= {\large{ \frac {\binom{2^k-i-2^{r-1}}{2^{r-1}-1}} {\binom{2^k-1-2^{r-1}}{2^{r-1}-1}} }} \\[4pt] \
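Under the same single-draw model, the three-factor product can be checked against brute-force enumeration over all seedings of the bracket leaves (the specific test cases below are arbitrary):

```python
from itertools import permutations
from math import comb
from fractions import Fraction

def meet_round(leaves, i, j):
    """Round (1-based) in which seeds i and j meet, given the leaf order;
    the lower seed always wins. Returns 0 if they never meet."""
    players = list(leaves)
    rnd = 1
    while len(players) > 1:
        nxt = []
        for a, b in zip(players[::2], players[1::2]):
            if {a, b} == {i, j}:
                return rnd
            nxt.append(min(a, b))
        players = nxt
        rnd += 1
    return 0

def exact_prob(k, i, j, r):
    n = 2 ** k
    hits = total = 0
    for leaves in permutations(range(1, n + 1)):
        total += 1
        hits += (meet_round(leaves, i, j) == r)
    return Fraction(hits, total)

def formula(k, i, j, r):   # the three-factor product P(A) P(B|A) P(C|A,B)
    a = Fraction(2 ** (r - 1), 2 ** k - 1)
    b = Fraction(comb(2**k - j, 2**(r-1) - 1), comb(2**k - 2, 2**(r-1) - 1))
    c = Fraction(comb(2**k - i - 2**(r-1), 2**(r-1) - 1),
                 comb(2**k - 1 - 2**(r-1), 2**(r-1) - 1))
    return a * b * c

print(exact_prob(3, 2, 5, 2), formula(3, 2, 5, 2))
```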
|
|probability|
| 1
|
Prove $xy \gt (x-1)(y+1)$
|
I did a problem in competitive programming. The problem is to split a number into $x$ and $y$ ( $x \leq y$ ), make a new number $(x - 1)(y + 1)$ , and repeat while the new number is nonzero, counting how many different numbers can be obtained by splitting this way. It's easy for me, but when I draft it, the new number is always lower than the original number, and I can't prove it. So, can anyone prove it for me? $$x y > (x - 1) (y + 1)$$ Sorry for the story, but it's so boring if I just write an inequality. (Maybe it's just an easy problem :D)
|
Notice that $x\leq y$ implies $x-y\leq 0$ . So we have that \begin{align} (x-1)(y+1) &= xy-y+x-1 \\ & = xy + (x-y) - 1\\ & \leq xy-1 \\ & \lt xy \end{align}
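The same expansion in one line of SymPy, as a sanity check:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
gap = sp.expand(x*y - (x - 1)*(y + 1))
print(gap)   # y - x + 1, which is at least 1 > 0 whenever x <= y
```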
|
|inequality|
| 0
|
What is $r^{-1}(B)$ if $r$ is a rule and $B$ is an expression in a proof system?
|
In the first paragraph, what does $r^{-1}(B)$ mean? Does it have something to do with relations? E.g. if $A$ is a relation then $A^{-1}$ is the inverse of that relation. It's from a paper about the Hilbert system: https://www3.cs.stonybrook.edu/~cse541/chapter8.pdf
|
It is stated that " $MP^{-1}(B)$ is countably infinite", so I suppose $r^{-1}(B)$ is the set of all sequences of well-formed formulas (wffs) that result in wff $B$ when rule $r$ is applied on such a sequence. For example, modus ponens (⊢ψ,⊢ψ→φ ⇒ ⊢φ) takes two arguments, therefore all elements of $MP^{-1}(B)$ are pairs of formulas, specifically $MP^{-1}(B) = \{(A, A\rightarrow B)\,|\,A\text{ is wff}\}$ . On another example, necessitation (⊢ψ ⇒ ⊢□ψ) takes one argument, therefore all elements of $NE^{-1}(B)$ are just formulas, specifically $NE^{-1}(B) = \{A\,|\,B=□A\}$ . Note that $NE^{-1}(B)$ is empty whenever $B$ does not start with $□$ , and otherwise it has precisely one element.
|
|logic|hilbert-calculus|
| 0
|
Differential equation with memory
|
I am trying to integrate this equation: $$ \frac{dz}{dt} = \alpha \cdot (s(t)-z(t-1)) $$ I came up with this equation to model a "leaky integrator" system where s(t) are some samples from the environment and z(t) is a "decision variable". Thus this system can continuously decide between 2 alternatives based on the sign of z(t). However, I'm not sure how to integrate the equation and solve for z(t), mainly because of the term z(t-1) which Wolfram Alpha seems to struggle with... Any tips?
|
This is known as a delay-differential equation. Note that for an "initial condition", you want to specify $z(t)$ on an interval of length $1$ , say $(-1,0)$ , rather than at just one point. One way to try solving it is with Laplace transform. Since you're using $s$ in your equation, let me use $u$ as the Laplace transform variable. Then, with $Z(u)$ and $S(u)$ the Laplace transforms of $z$ and $s$ , the Laplace transform of your equation is $$u Z(u) - z(0) = \alpha S(u) - \alpha \int_{-1}^0 z(t) e^{-u(t+1)} \; dt - e^{-u} Z(u)$$ so $$Z(u) = (u + e^{-u})^{-1} \left(z(0) + \alpha S(u) - \alpha \int_{-1}^0 z(t) e^{-u(t+1)}\; dt \right)$$
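For numerical exploration (as opposed to the Laplace-transform route), the delay equation can be stepped forward directly, keeping the last unit of history in a buffer. This is an illustrative Euler sketch; the constant history and inputs in the sanity check are arbitrary test choices:

```python
def integrate_dde(alpha, s, history, t_end, dt=0.001):
    """Euler scheme for z'(t) = alpha*(s(t) - z(t-1)), with z given by the
    function `history` on [-1, 0]. Returns z(t_end)."""
    lag = round(1.0 / dt)                         # number of steps in one delay
    # tabulate the history on [-1, 0]; zs[-1] is the current value z(t)
    zs = [history(-1.0 + k * dt) for k in range(lag + 1)]
    for n in range(round(t_end / dt)):
        t = n * dt
        z_now = zs[-1]
        z_lag = zs[-1 - lag]                      # z(t - 1)
        zs.append(z_now + dt * alpha * (s(t) - z_lag))
    return zs[-1]

# sanity check: with s = 0 and zero history, z stays at 0
print(integrate_dde(1.0, lambda t: 0.0, lambda t: 0.0, t_end=5.0))
```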
|
|ordinary-differential-equations|delay-differential-equations|
| 0
|
Prove $xy \gt (x-1)(y+1)$
|
I did a problem in competitive programming, the problem is to split a number into $x$ and $y$ ( $x \leq y$ ) and make a new number by $(x - 1)(y + 1)$ , do it until a new number different zero, and count how many different numbers can have by this split way. It's easy for me but, when I draft, a new number is always lower than the original number. But I can't prove it. So, anyone can prove it to me? $$x y > (x - 1) (y + 1)$$ Sorry for the story, but it's so boring if I just write an inequality. (maybe it's just an easy problem :D)
|
First, expand the right side of the equation. $xy > (x-1)(y+1)$ $xy > (xy + x - y - 1)$ Subtract $xy$ from both sides: $xy - xy > xy + x - y - 1 - xy$ $0 > x - y - 1$ Because $x$ is less than or equal to $y$ , the expression $x - y$ will always be either $0$ or negative. When you subtract $1$ from either $0$ or a negative number, the result will be a negative number, which will always be less than $0$ .
|
|inequality|
| 0
|
Are there rings where factorization is unique, but does not necessarily exist?
|
It feels like this should be a well-known question, but I can't find any related questions on this site by searching; apologies in advance if this is a duplicate. I assume rings are commutative with multiplicative identity. Most examples of non-UFDs involve cases where factorization is not unique, e.g. $\mathbb Z[\sqrt{-5}]$ is not an UFD because $$ 2 \cdot 3 = 6 = (1+\sqrt{-5}) \cdot (1-\sqrt{-5}) $$ and none of the divisors are associate. Is there an integral domain $R$ where: Factorization is unique in the sense that if $p_1p_2\cdots p_n = q_1q_2\cdots q_m$ where $p_i$ and $q_i$ are irreducible elements in $R$ , then $m=n$ and we can reorder such that $p_i$ is an associate of $q_i$ for each $i=1,2,\dots, n$ . Factorization may not exist : there is some non-unit $x$ that cannot be written as the product of irreducible elements.
|
Let $k = {\Bbb F}_2$ and let $M$ be the multiplicative monoid defined on $\{ x \in \Bbb R \mid x \geqslant 1\}$ . I claim that the monoid ring $k[M]$ contains no irreducible elements. The elements of $k[M]$ can be written as formal sums $$ \sum_{m \in M} c_m m \quad \text{where $c_m \in k$ and $c_m = 0$ for almost all $m \in M$} $$ First, $1$ is the unique unit of $R$ . Next, since we are in characteristic $2$ and each $c_m$ is idempotent, one has $$ \sum_{m \in M} c_m m = \Bigl(\sum_{m \in M} c_m\sqrt{m} \Bigr)^2 $$ and thus no $\sum_{m \in M} c_m m$ is irreducible. Since there is no irreducible element, $1$ is the unique element to have a factorization, namely the trivial, but also unique, factorisation $ 1 = \prod_{i \in \emptyset} x_i $ . Thus $k[M]$ answers your question.
|
|ring-theory|unique-factorization-domains|
| 1
|
unbounded operator satisfying $||T(x_n)|| \to \infty$
|
Let $E,F$ be normed spaces, and $T:D(T) (\subset E) \rightarrow F$ be a densely defined unbounded linear operator. By unboundedness, for all $x \in E$ , there is a sequence $(x_n) \subset D(T)$ such that $x_n \to x$ and $||T(x_n)|| \to \infty$ . On the other hand, does the following hold? If $D(T) \neq E$ , there is an $x \in E$ such that if $(x_n) \subset D(T)$ satisfies $x_n \to x$ , then $||T(x_n)|| \to \infty$ . Any advice would be appreciated.
|
No, take some unbounded linear operator with $D(T) = E$ and take $x_n = x$ for all $n \in \mathbb{N}$ . Then $x_n \to x$ but $||Tx_n|| = ||Tx|| \to ||Tx|| < \infty$ .
|
|linear-algebra|functional-analysis|operator-theory|normed-spaces|
| 0
|
Application of "Every nonzero residue modulo a prime can be represented as a power of a primitive root."
|
I am reading a machine learning paper , and this paragraph below doesn't quite make sense. How is x+y (mod p−1) and x*y (mod p) equivalent? Suppose p = 5 and x = 3 and y = 4, then clearly 7 mod 4 =3 isn't equal to 12 mod 5 = 2. I'm also not sure how the line "every nonzero residue modulo a prime can be represented as a power of a primitive root." relates to this fact. Since the operands are presented to the neural network as unrelated abstract symbols, the operations x+y (mod p−1) and x*y (mod p) with a prime number p and non-zero x, y are indistinguishable from the neural network’s perspective (and similarly x − y (mod p − 1) and x/y (mod p)). This is because every nonzero residue modulo a prime can be represented as a power of a primitive root.
|
Working with your example with $p=5$ , observe that $2$ is a primitive root. That means its powers yield all the nonzero congruence classes: $$ 2^0 \equiv 2^4 \equiv 1,\ 2^1 \equiv 2,\ 2^2 \equiv 4,\ 2^3 \equiv 3 \pmod{5}. $$ Since $2^4 \equiv 1 \pmod{5}$ the exponents work modulo $4$ . So to multiply modulo $5$ we add exponents modulo $5-1 = 4$ : $$ 3 \times 4 \equiv 2^3 \times 2^2 = 2^5 \equiv 2^1 = 2 \pmod{5}. $$
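The correspondence is easy to verify exhaustively for $p=5$ (or any other small prime with a known primitive root):

```python
p = 5
g = 2   # a primitive root mod 5

# powers of g enumerate every nonzero residue mod p
assert {pow(g, e, p) for e in range(p - 1)} == set(range(1, p))

# discrete logarithm table: x = g^dlog[x] (mod p)
dlog = {pow(g, e, p): e for e in range(p - 1)}

# multiplication mod p corresponds to addition of exponents mod p-1
for a in range(1, p):
    for b in range(1, p):
        assert (dlog[a] + dlog[b]) % (p - 1) == dlog[(a * b) % p]
```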
|
|elementary-number-theory|primitive-roots|
| 1
|
Is every ring with a homomorphism from $R$ an $R$-algebra if and only if $R$ is a solid ring?
|
Given a commutative ring $R$ , is every ring with a homomorphism from $R$ an $R$ -algebra if and only if $R$ is a solid ring? A ring $R$ is said to be solid if the unique homomorphism $\mathbb{Z} \to R$ is an epimorphism in the category of rings, or equivalently, if $r \otimes 1=1 \otimes r \in R \otimes_{\mathbb{Z}} R$ for all $r \in R$ . Also, it is known that a ring homomorphism $f:R \to A$ makes $A$ into an $R$ -algebra if and only if $f(R) \subseteq Z(A)$ , where $Z(A)$ is the center of $A$ . The "if" direction could be proven by considering the two $R$ -module structures on $A$ induced by $f$ (where $ra$ is defined to be $f(r)a$ in one structure and $af(r)$ in the other) and noting that they must be the same if $R$ is solid (as any abelian group has at most one $R$ -module structure). For the "only if" direction, let $S$ be a ring with two homomorphisms $f,g:R \to S$ . Then, define a multiplication on the abelian group $R \oplus S$ with the rule $(r, s)(r', s')=(rr', f(r)s'+sg(r'
|
To update my comments: I think the statement, which I would phrase as: In the category of (associative, unital, not necessarily commutative) rings, $(*)$ Every homomorphism $R\rightarrow S$ has image in $Z(S)$ (i.e. makes $S$ an $R$ -algebra) is equivalent to $(**)$ $R$ is a solid ring is correct, and your proof works. For $(*) \implies (**)$ , it might be handy to remark right away that solid rings can be characterized as the rings $R$ such that if there is any morphism $R \rightarrow S$ , it is unique. And the proof could be shortened by saying first that $(*)$ implies $R$ is commutative, and instead of using your construction (which works), it might be easier to look at the ring of matrices $$\{\pmatrix{r&s\\ 0&r} : r\in R, s \in S\}$$ where the $S$ in the upper right corner is viewed as left- $R$ -module via $f$ and as right- $R$ -module via $g$ . That just corresponds to (but in my view, gives a good intuition for) the shorter multiplication formula $$(r,s)(r',s') := (rr', f(r)s+s
|
|commutative-algebra|
| 0
|
Parametrized curve in R^2
|
We can define a parametrized curve in ${ℝ}^2$ as a function $r:I→{ℝ}^2$ where $I$ is an interval of $ℝ$ , and we can write $r(t)=(x(t),y(t))$ . Does a parametric curve whose graph is the same as a closed disk in ${ℝ}^2$ , described by the equation $x^{2}+y^{2}\le r^{2}$ , exist? The question might be stupid, but I am trying to find an answer using the formal definition of a parametric curve. The graph of a parametric curve is defined by the set $\{(x(t),y(t)): t \in I\}$ .
|
A (continuous) parametric curve whose image is the disc $\{(x,y) \mid x^2 + y^2 \le r^2\}$ does indeed exist. However, the construction of such an object usually doesn't occur until a higher level course in topology and/or analysis. The first similar example one usually encounters is the Peano space filling curve which, rather than having image being a disc, instead has image being a square such as $[0,1] \times [0,1]$ . You can read about the construction at that link; in very brief outline, one constructs, very carefully, a sequence of parameterized curves with image contained in the square such that the sequence gets denser and denser in the square, and such that the sequence converges, in a very strong sense, to the desired parameterized curve whose image is the whole square. The necessity for employing high powered topological tools arises from the need to generalize convergence concepts from sequences of points to sequences of functions. The difference between squares and discs, while real, is not essential: there is a continuous surjection from the square onto the disc, and composing the Peano curve with it gives a parameterized curve whose image is the whole disc.
|
|calculus|analysis|multivariable-calculus|
| 1
|
What are helpful tricks to solve quantifiers questions in discrete mathematics?
|
What are useful tricks to solve quantifier questions? For example: Let P(x,y) be the statement "x has taken y" and Q(x) be the statement "x knows Java", where the domain for x consists of all students and the domain for y consists of all Math courses. The statement "No student has taken any Math course or knows Java" is formalized as ∀x∀y(¬P(x,y) ∧ ¬Q(x)). Why does the formalization use "and" when the original statement says "or"? I would also like general tricks to help me solve these types of questions.
|
One good strategy is to divide and conquer, while following some well-known patterns. For example, many statements involving quantifiers follow one of the following four forms: "All of [these] are like [that]" "Some of [these] are like [that]" "None of [these] are like [that]" (which is equivalent to "All of [these] are not like [that]") "Some of [these] are not like [that]" If $P(x)$ is a formula that formalizes "x is one of [these]", and $Q(x)$ symbolizes "x is like [that]", these four general patterns are respectively symbolized as: $\forall x (P(x) \to Q(x))$ $\exists x (P(x) \land Q(x))$ $\forall x (P(x) \to \neg Q(x))$ $\exists x (P(x) \land \neg Q(x))$ These statements show a nice split between what's called the 'subject term' (the things that the statement is about) and the 'predicate term' (that what we say/predicate about those things). So this gives you a natural way to break up the symbolization. For example, take your statement: "No student has taken any Math course or knows Java". This is a 'None' pattern: no student is such that (he has taken some Math course or knows Java). A 'None' statement becomes a universal claim, and by De Morgan the negated 'or' becomes an 'and' of negations, which is exactly why you end up with $\forall x\forall y(\neg P(x,y) \land \neg Q(x))$ .
|
|discrete-mathematics|
| 0
|
Question about an inequality during proof that $e^2$ is irrational.
|
I am reading a proof of the irrationality of $e^2$ and I am stuck on the following inequality: Let $S := -a\underbrace{\left(\frac{1}{n+1} - \frac{1}{(n+1)(n+2)} + \frac{1}{(n+1)(n+2)(n+3)} \mp ...\right)}_{S^*}$ (just because of space issues) with $a \in \mathbb{Z}$ , $n \in \mathbb{N}$ . The proof I am reading states that $$ -\frac{a}{n} < S < 0. $$ Why is this true? My intuition tells me that $S^* < \frac{1}{n}$ since $1/n$ is already greater than the first term of $S^*$ and the terms afterwards all tend to 0 rather quickly, so it never reaches $1/n$ , but I am looking for a more rigorous explanation. The same goes for $S^*$ being apparently smaller than $\tilde S$ . I see that the terms which get subtracted tend to zero more quickly than the terms of $S^*$ and so the inequality could be true as far as my intuition goes but not further. Thanks in advance for any help! (The proof I am referring to is out of "Proofs from THE BOOK" by Martin Aigner and Günter M. Ziegler in case anyone is wondering.)
|
For $a > 0$ , if you divide through by $(-a)$ , the inequality becomes $$0 < \frac{1}{n+1} - \frac{1}{(n+1)(n+2)} + \frac{1}{(n+1)(n+2)(n+3)} \mp \cdots < \frac{1}{n}.$$ The middle summation is alternating and the absolute value of the terms decreases to $0$ , so it converges. If we sum the terms in pairs, each pair consists of a positive term added to a negative term of smaller size, so every partial sum is $> 0$ . So the full summation is $> 0$ , which shows the left hand inequality. For the right hand inequality, we can rewrite the middle summation as $$\frac 1{n+1} - \sum_{k=2}^\infty \frac{(-1)^kn!}{(n+k)!}$$ Once again the subtracted series is alternating with decreasing terms and positive first term, meaning that series is itself $> 0$ , and thus the full expression is $< \frac{1}{n+1} < \frac{1}{n}$ . Thus the right hand inequality also holds.
|
|real-analysis|analysis|elementary-number-theory|irrational-numbers|
| 0
|
Normal Sylow Subgroups of Solvable Groups
|
If $|G|$ is squarefree, say $|G|=p_1\cdots p_n$ where $p_n>p_{n-1}>\cdots>p_1$ , then we can use the N/C Theorem, along with some induction, to show that the Sylow $p_n$ -subgroup of $G$ , $P_n$ , is normal in $G$ . Suppose $|G|$ isn't squarefree, and let's assume $G$ is solvable. Suppose $p_k$ is the largest prime divisor of $|G|$ with exponent $1$ . Could we necessarily say $P_k\trianglelefteq G$ ?
|
$\DeclareMathOperator{\Aut}{Aut}\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$ One of the possible generalizations of the $A_{4}$ example is the following. Let $p$ be any odd prime. Let $k$ be the order of $2$ modulo $p$ , so that $p \mid 2^{k} - 1$ . Let $N$ be an elementary abelian group of order $2^{k}$ . Then $N$ has an automorphism $\alpha$ of order $p$ . This is because $N$ can be regarded as the additive group of the field $F$ with $2^{k}$ elements. The multiplicative group of $F$ is cyclic of order $2^{k} - 1$ , generated by an element $b$ , say. Multiplication by $b$ yields an automorphism $\beta$ of $N$ of order $2^{k} - 1$ , and then one can take $$ \alpha = \beta^{(2^{k} - 1)/p}. $$ Let $P = \Span{a}$ be a cyclic group of order $p$ . The homomorphism $P \to \Aut(N)$ determined by $a \mapsto \alpha$ yields a semidirect product $$ G = N \rtimes P $$ which is soluble, and whose order $p \cdot 2^{k}$ is not squarefree as $k > 1$ . Since $N$ is normal in $G$ but $G$ is nonabelian, $G$ is not the direct product of $N$ and $P$ ; hence $P$ , the Sylow subgroup for the largest prime divisor $p$ of $|G|$ occurring with exponent $1$ , is not normal in $G$ . This answers the question in the negative.
|
|abstract-algebra|group-theory|normal-subgroups|sylow-theory|
| 0
|
Is $\forall x\exists x(x < x)$ a sentence?
|
Going through my notes on predicate logic, I read the following inductive definitions: Definition: An atomic term is either a variable or a constant. If $f$ is an $n$ -place function symbol and $t_1, . . . , t_n$ are terms , then $f(t_1, . . . , t_n)$ is also a term . Definition: An atomic formula is a string of the form $P(t_1, . . . , t_n)$ for the $n$ -place predicate $P$ (which may be the equal sign) and the terms $t_1, . . . , t_n$ . If $F$ and $G$ are formulas , so are $$\neg F,(F ∧ G),(F ∨ G), ∃xF, ∀xF.$$ Definition: An occurrence of a variable $x$ in a formula $F$ is bound iff it is part of a subformula beginning with $∃x . . .$ or $∀x . . .$ , in which the quantifier in question is said to bind the variable, and otherwise the occurrence is said to be free . A formula is a sentence iff no occurrence of any variable in it is free. It seems to me that these definitions allow for formulas such as $$\forall x\exists x (x < x)$$ to be a sentence. If so, is that not problematic? Once semantics are introduced, how should such a formula be evaluated?
|
Is it problematic to allow for expressions such as $\forall x\exists x (x < x)$ to count as well formed sentences? No (as others have explained). You can set up your formal first-order logical languages that way, while keeping everything working systematically in an acceptable way. Is it necessary to allow for expressions such as $\forall x\exists x (x < x)$ to count as well formed sentences? Not at all. You can set up the construction rules for your first-order languages so that the things go along these lines: if $\varphi(n)$ is a sentence involving occurrences of the name/parameter $n$ , but involving no occurrences of the variable $\xi$ , then $\forall \xi\varphi(\xi)$ and $\exists \xi\varphi(\xi)$ are sentences. This sort of construction rule doesn't generate repeated or vacuous quantifiers. Does it matter which way we go? Formally no. But perhaps conceptually, the second approach is to be preferred. The idea is that the key semantically-relevant unit is not a bare quantifier expression $\forall \xi$ standing alone, but the operation that forms the sentence $\forall \xi\,\varphi(\xi)$ from a predicate $\varphi$ ; on that view, repeated or vacuous quantifiers simply never arise.
|
|logic|definition|first-order-logic|quantifiers|
| 1
|
Floor of a random variable with bounded density
|
Let $X$ be a random variable on $\mathbb R^+$ with bounded density, and let $\lfloor \cdot \rfloor$ denote the floor function. Show that for $\lambda\in \mathbb R^+$ , $$\lim_{\lambda\to\infty} \mathbb P(\lfloor \lambda X\rfloor \mbox{ is even}) = \frac{1}2.$$ The statement makes intuitive sense to me, but I don't know how to show it. I created this simple example to better understand a problem I'm working on where space is partitioned into equivalence classes, (odd / even, in the case of the example) "uniformly" spread out over the space, and fine with respect to the extent of the bounded density. If alternative assumptions on $X$ can be used, I would be curious to hear suggestions.
|
I believe I have the answer. For $\lambda\in \mathbb R^+$ , we can define a measure on $\mathbb R^+$ by $$\mu_{\lambda}(A) = \int_{A}\mathbb 1_{\{\lfloor \lambda s\rfloor\mbox{ is even}\}}\,\mathrm ds,\qquad A\in\mathcal B(\mathbb R^+).$$ Remark that for $b > a > 0$ , $$\mu_\lambda((a,b)) \xrightarrow[\lambda \to \infty]{} \frac{b-a}{2}.$$ Since $\mu_\lambda$ converges to half of the Lebesgue measure on the open intervals in $\mathbb R^+$ , which generate $\mathcal B(\mathbb R^+)$ , we have that $\mu_\lambda$ converges to half of the Lebesgue measure pointwise on $\mathcal B(\mathbb R^+)$ . Moreover, this implies that $$\int_{\mathbb R^+} f(s)\,\mathrm{d}\mu_\lambda(s) \xrightarrow[\lambda\to\infty]{} \frac{1}{2}\int_{\mathbb R^+}f(s)\,\mathrm{d} s,$$ for any bounded measurable function $f$ (see this post) . Finally, with $f_X$ the (bounded) density of $X$ , one has $$\mathbb P(\lfloor \lambda X\rfloor \mbox{ is even}) = \int_{\mathbb R^+}\mathbb 1_{\{\lfloor \lambda s\rfloor\mbox{ is even}\}} f_X(s)\,\mathrm{d}s = \int_{\mathbb R^+} f_X(s)\,\mathrm{d}\mu_\lambda(s) \xrightarrow[\lambda\to\infty]{} \frac{1}{2}\int_{\mathbb R^+}f_X(s)\,\mathrm{d}s = \frac{1}{2}.$$
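As a concrete sanity check of the limit (under the arbitrary assumption $X \sim \mathrm{Exp}(1)$, for which the probability can be computed in closed form):

```python
import math

def p_even(lam: float) -> float:
    """P(floor(lam*X) is even) for X ~ Exp(1), in closed form.

    P = sum over m >= 0 of (e^{-2m/lam} - e^{-(2m+1)/lam})
      = (1 - e^{-1/lam}) / (1 - e^{-2/lam})
      = 1 / (1 + e^{-1/lam})  ->  1/2 as lam -> infinity.
    """
    return 1.0 / (1.0 + math.exp(-1.0 / lam))

for lam in (1, 10, 100, 1000):
    print(lam, p_even(lam))  # values decrease toward 1/2
```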
|
|probability|random-variables|
| 0
|
Connection between compactification and a quotent space.
|
I've been studying topology and I am trying to find a relation between the compactification of a space $X$ and its quotient spaces. For example, the one-point compactification of the open interval $(0,1)$ can be thought of as bringing its end points to the point at infinity (we consider the embedding $f(t) = (\cos(2πt),\sin(2πt))$ and then the closure of $f(X)$ is the unit circle, which is a compact space). The same can be done with the same quotient map on the closed interval $[0,1]$ , and we would get the unit circle as its quotient space. Is there any connection between these terms? (Since the identification of end points during this quotient map can be thought of as gluing them as well.)
|
Let $X$ be a Tychonoff space (or $T_{3.5}$ space), that is, a Hausdorff space which has a compactification. There are two especially important compactifications of $X$ . The first one always exists: the Stone-Cech compactification $\beta X$ of $X$ . It can be defined in multiple ways, and it has the property (which determines it up to homeomorphism that fixes $X$ ) that it is the largest compactification of $X$ , in the sense that if $\gamma X$ is another compactification, then there exists a (necessarily surjective) continuous function $\beta X\to \gamma X$ which extends the dense embedding $X\hookrightarrow \gamma X$ . Then $\gamma X$ is a quotient of $\beta X$ , since the map $\beta X\to \gamma X$ is closed, continuous and surjective, so a quotient map. The second most important compactification of $X$ is the one point compactification $\alpha X$ (here $\alpha X$ is non-standard notation, people often denote it by $X^*$ ). It doesn't always exist, but when it does, and that is exactly when $X$ is locally compact and not already compact, it is the smallest compactification, and by the above it too is a quotient of $\beta X$ .
|
|general-topology|algebraic-topology|
| 1
|
expected value of exponential compound poisson process
|
Let $Z(t)=\sum_{i=1}^{N(t)} X_i$ and let $N(t)$ be a Poisson process with parameter $\lambda$ and $X_1,X_2,\dots$ positive iid random variables with density function $f_X(x)$ , independent of $N(t)$ . How can I calculate $E[e^{\lambda t-\mu Z(t)}]$ ?
|
Since $N_t$ and all the $X_i$ are independent and the latter are identically distributed we have \begin{align} \mathbb E\left[e^{-\mu Z_t}\right]&=\mathbb E\left[\prod_{i=1}^{N_t}e^{-\mu X_i}\right]=\sum_{n=0}^\infty\prod_{i=1}^n\mathbb E\left[e^{-\mu X_i}\right] \frac{(\lambda t)^n}{n!}e^{-\lambda t}=\sum_{n=0}^\infty \left(\mathbb E\left[e^{-\mu X_1}\right]\right)^n\frac{(\lambda t)^n}{n!}e^{-\lambda t}\\ &=\exp\Big((\mathbb E\left[e^{-\mu X_1}\right]-1)\lambda t\Big)\,. \end{align}
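A quick seeded Monte Carlo sanity check of this closed form. The choice $X_i \sim \mathrm{Exp}(1)$, for which $\mathbb E[e^{-\mu X_1}] = \frac{1}{1+\mu}$, is an arbitrary assumption for illustration:

```python
import math
import random

random.seed(0)

lam, t, mu = 2.0, 1.5, 0.7
phi = 1.0 / (1.0 + mu)                       # E[e^{-mu X}] for X ~ Exp(1)
closed_form = math.exp((phi - 1.0) * lam * t)

# Monte Carlo estimate of E[e^{-mu Z_t}]
trials = 200_000
acc = 0.0
for _ in range(trials):
    # sample N_t ~ Poisson(lam * t) by summing exponential inter-arrival gaps
    n = 0
    s = random.expovariate(lam)
    while s < t:
        n += 1
        s += random.expovariate(lam)
    z = sum(random.expovariate(1.0) for _ in range(n))
    acc += math.exp(-mu * z)
mc = acc / trials

print(closed_form, mc)  # the two should agree to a couple of decimals
```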
|
|expected-value|poisson-process|
| 1
|
Extrema of $\sum_{j=0}^{n-1} \frac{1}{|z-a_j|^2}$ for $z$ on unit circle
|
Let $n \in \mathbb{N}$ , $0 < r < 1$ and $\omega = \exp\left(\frac{2 \pi i}{n}\right)$ . For $j = 0, 1, \ldots, n-1$ define $a_j = r \omega^j$ ; these are the vertices of a regular $n$ -gon on the circle of radius $r$ . Now for which $z \in \mathbb{C}$ on the unit circle, that is $|z|=1$ , does the following expression $$\sum_{j=0}^{n-1} \frac{1}{|z-a_j|^2}$$ achieve its minimal and maximal value? The answer turns out to be that the maximal value is achieved for $z_{max} = \exp\left(\frac{2j \pi i}{n}\right) = \omega^j$ , that is, for the points lying above the vertices, and the minimal value for $z_{min}= \exp\left(\frac{(2 j+ 1)\pi i}{n}\right)$ , that is, the points above the midpoints between two vertices. Now due to symmetry we can conclude that the points $z_{max}$ and $z_{min}$ will be local extrema and that we need only concentrate on $z$ with argument in $(0,\pi / n)$ . It remains to show that for any given $n$ the above expression is monotone on the interval $(0,\pi / n)$ . But for the life of me I am unable to prove this.
|
Let's note that if $z=e^{it}, w=re^{i\theta}, 0 < r < 1$ , we have $$\frac{1}{|z-w|^2}=\frac{1}{1-2r\cos (t-\theta)+r^2}=\frac{1}{1-r^2}\Re(\frac{1+re^{i(\theta-t)}}{1-re^{i(\theta-t)}})$$ so in particular for a fixed $r$ the function $$f_r(t)=\sum_{j=0}^{n-1}\frac{1}{|z-w_j|^2}, w_j=r \exp\left(\frac{2 \pi ij}{n}\right)=re^{i\theta_j}$$ attains the maximum and minimum at the same $t$ as the function $$g_r(t)=\Re\sum_{j=0}^{n-1}\frac{1+re^{i(\theta_j-t)}}{1-re^{i(\theta_j-t)}}$$ But using the geometric series and absolute convergence to interchange we have: $$\sum_{j=0}^{n-1}\frac{1+re^{i(\theta_j-t)}}{1-re^{i(\theta_j-t)}}=n+2\sum_{k \ge 1}r^ke^{-ikt}(\sum_{j=0}^{n-1}e^{ik\theta_j})$$ By orthogonality the inner sums are either $n$ when $k=mn$ or $0$ otherwise so $$g_r(t)=n+2n\Re \sum_{m \ge 1}r^{mn}e^{imnt}=n+2n\sum_{m \ge 1}r^{mn}\cos{mnt}$$ Clearly the maximum is attained when $\cos mnt=1$ for all $m \ge 1$ hence at $t=\theta_j$ . For the minimum we can put $r^n=q, nt=t_1$ and, using the geometric series, sum in closed form: $$\sum_{m \ge 1}q^m\cos(m t_1)=\Re\frac{qe^{it_1}}{1-qe^{it_1}}=\frac{q\cos t_1-q^2}{1-2q\cos t_1+q^2},$$ which is increasing in $\cos t_1$ , so the minimum is attained when $\cos(nt)=-1$ , that is at $t=\frac{(2j+1)\pi}{n}$ .
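The series representation is easy to check numerically (the test values $n=5$, $r=0.6$ are arbitrary):

```python
import cmath
import math

n, r = 5, 0.6

def f_direct(t: float) -> float:
    """Sum of 1/|z - a_j|^2 over the n-gon vertices, computed directly."""
    z = cmath.exp(1j * t)
    return sum(1.0 / abs(z - r * cmath.exp(2j * math.pi * j / n)) ** 2
               for j in range(n))

def f_series(t: float, terms: int = 200) -> float:
    """Same quantity via n(1 + 2*sum r^{mn} cos(mnt)) / (1 - r^2)."""
    s = 1.0 + 2.0 * sum(r ** (m * n) * math.cos(m * n * t)
                        for m in range(1, terms))
    return n * s / (1.0 - r ** 2)

for t in (0.0, 0.3, math.pi / n):
    assert abs(f_direct(t) - f_series(t)) < 1e-9

# max at t = 0 (above a vertex), min at t = pi/n (above a midpoint)
ts = [k * math.pi / (n * 100) for k in range(101)]
vals = [f_direct(t) for t in ts]
assert all(vals[0] >= v >= vals[-1] for v in vals)
```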
|
|complex-analysis|geometry|extreme-value-analysis|
| 1
|
Is there a similar notion to the domain dual to codomain and range
|
Given a function $f\colon X \to Y$ , $X$ is called the "domain", $Y$ is called the "codomain" and the range of $f$ is defined to be the set, $\{ y \in Y \mid \exists x \in X \colon y=f(x) \}$ We can extend this concept to binary relations. We call a set "graph(of a binary relation)" $R$ a set whose only elements are ordered pairs, i.e. there exist sets $A$ and $B$ such that $R \subseteq A \times B$ . Notice that the sets $A$ and $B$ are not unique, since $\emptyset \subseteq \emptyset \times \emptyset$ and $\emptyset \subseteq \{ \emptyset \} \times \{ \emptyset \}$ . For this reason if $R \subseteq A \times B$ , we define a binary relation as the triplet $(A, B, R)$ . Now, if we were to analogously define the "codomain" and the "range", the codomain is defined as $B$ and the range is defined as $\{ y \mid (\exists x)((x, y) \in R) \}$ . But how would we define the domain? We could define is as $A$ , but also as the set $\{ x \mid (\exists y)((x, y) \in R) \}$ . With functions the two
|
I think it would be standard to say that, for a relation $(A,B,R)$ , $A$ is the domain, $B$ is the codomain, $\{y \mid \exists x ((x,y) \in R)\}$ is the range, and $\{x \mid \exists y ((x,y) \in R)\}$ is the corange . As you note, the range and corange are determined entirely by the set $R$ , while the domain and codomain cannot be determined uniquely by $R$ and instead must be specified as part of the data of the relation.
|
|elementary-set-theory|relations|function-and-relation-composition|
| 0
|
Equality of $2$nd compound matrices implies equality up to sign
|
There’s a bit of multilinear-algebraic folklore knocking around in my head that I’d like to cite and use but that I can’t track down for the life of me: Let $n > 2$ be an integer, and let $a,b \in \operatorname{GL}(n,\mathbb{R})$ . If $\bigwedge^2 a = \bigwedge^2 b$ , then $a=\pm b$ . Note that the condition $\bigwedge^2 a = \bigwedge^2 b$ is equivalent to the equality $C_2(a) = C_2(b)$ of $2$ nd compound matrices, but I’ve had no luck so far with what seem to be the usual references for compound matrix lore. So, assuming that this half-remembered lore is correctly remembered, is there a handy citation one can provide for it?
|
Since $C_2$ is a multiplicative operator, we have $$C_2(A) = C_2(B) \Longrightarrow C_2(A B^{-1}) = C_2(B B^{-1}) = I.$$ So it suffices to show that for an $n \times n$ matrix $M$ , $C_2(M) = I$ iff $M = \pm I$ . This can be seen as follows: we have $$M_{11} M_{22} - M_{12} M_{21} = 1, M_{11} M_{23} = M_{21}M_{13}, M_{12} M_{23} = M_{22}M_{13}.$$ So if $(M_{13}, M_{23})$ is not the zero vector, then $(M_{11}, M_{21})$ and $(M_{12}, M_{22})$ are both scalar multiples of it, meaning that $M_{11} M_{22} - M_{12} M_{21} = 0$ , contradiction! So we have $M_{13} = M_{23} = 0$ . Symmetrically, one can show that all off-diagonal elements are zero. Finally, note that $M_{ii} M_{jj} = 1$ for any $i \neq j$ , so it follows that the diagonal elements are either all $1$ or all $-1$ .
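A small pure-Python illustration that $C_2$ cannot see the global sign (the helper names are mine):

```python
import itertools
import random

def minor2(M, rows, cols):
    """2x2 minor of M taken from the given row pair and column pair."""
    (i, j), (k, l) = rows, cols
    return M[i][k] * M[j][l] - M[i][l] * M[j][k]

def compound2(M):
    """Second compound matrix: all 2x2 minors, indexed by index pairs."""
    pairs = list(itertools.combinations(range(len(M)), 2))
    return [[minor2(M, rp, cp) for cp in pairs] for rp in pairs]

random.seed(1)
n = 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
B = [[-x for x in row] for row in A]   # B = -A

# every minor picks up (-1)^2 = 1, so the compounds agree exactly
assert compound2(A) == compound2(B)
```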
|
|linear-algebra|reference-request|multilinear-algebra|
| 1
|
Is this quotient still a ring?
|
Let's say we have $\mathbb{Z}_3[x]\mathbin{/}(7x^2 + 1)$ , that is, the quotient of the polynomial ring $\mathbb{Z}_3[x]$ by the ideal generated by $7x^2+1$ . Is this still a ring, and does it make sense to ask what the result of $(x^2 + 3x^3) \cdot (13 + 7x)$ is in it? What's the procedure for it?
|
Is it still a ring? Yes, for an ideal $I\subseteq R$ we have that $R/I$ is the quotient ring. Does it make sense to ask what the result of $(x^2 + 3x^3) \cdot (13 + 7x)$ is? Yes. In this ring $7x^2+1=0$ and $3=0$ , so $x^2=-1$ . This means that every polynomial is on the form $$a+bx$$ where $a,b\in\Bbb Z_3$ . Since any polynomial may be reduced to a polynomial of lower degree with $x^2=-1$ . Example: $$\begin{align}(x^2 + 3x^3)(13 + 7x)&=(-1-3x)(1+x)\\&=-1-4x+3x^2\\&=-1-4x-3\\&=-4-4x\\&=2+2x\end{align}$$ In fact, $$\Bbb Z_3[x]/(7x^2+1)\cong\Bbb Z_3[i]$$
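The reduction procedure can be sketched in a few lines of Python, representing elements of $\Bbb Z_3[x]/(x^2+1)$ as pairs $(a,b) = a + bx$ (the helper names are mine):

```python
def mul(p, q):
    """Multiply a + b*x and c + d*x in Z_3[x]/(x^2 + 1), where x^2 = -1."""
    a, b = p
    c, d = q
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2  and  x^2 = -1
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def reduce_poly(coeffs):
    """Reduce [c0, c1, c2, ...] (coefficient of x^i at index i) using x^2 = -1."""
    a = b = 0
    for i, c in enumerate(coeffs):
        sign = -1 if (i // 2) % 2 else 1   # x^i = (-1)^(i//2) * x^(i mod 2)
        if i % 2 == 0:
            a += sign * c
        else:
            b += sign * c
    return (a % 3, b % 3)

p = reduce_poly([0, 0, 1, 3])   # x^2 + 3x^3
q = reduce_poly([13, 7])        # 13 + 7x
print(mul(p, q))                # -> (2, 2), i.e. 2 + 2x, matching the hand computation
```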
|
|ring-theory|ideals|quotient-spaces|polynomial-rings|
| 0
|
Find the general solution of a linear system of ODEs, with non-constant coefficients
|
It is given that the following system of ODEs has a non-trivial solution in which $y$ is constant: \begin{alignat*}{4} \frac{dx}{dt} = x-e^{-t}y \\ \frac{dy}{dt} = 2e^tx-y \\ \end{alignat*} Find the general solution of this system. Any ideas? Thank you
|
Multiply the top equation by $e^t$ and subtract this first equation from the second. The result is $$y'-e^tx' = xe^t.$$ $$y' = e^tx' + xe^t = \frac{d}{dt}(e^tx).$$ Thus, $$y = e^tx + C_1.$$ Substitute into original first equation. $$x' = x - e^{-t}(e^tx + C_1).$$ $$x' = x-x-C_1e^{-t} = -C_1e^{-t}.$$ You should be able to continue from here. Solve for $x(t)$ . Then, simply substitute to find $y(t)$ . You will find that $y$ constant is a special case.
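Carrying the hint to its end (my own continuation, so treat the formulas as a sketch): $x(t) = C_1 e^{-t} + C_2$ and then $y(t) = 2C_1 + C_2 e^{t}$; the case $C_2 = 0$ gives the promised solution with constant $y$. A quick numeric residual check against both original equations:

```python
import math

C1, C2 = 1.3, -0.4  # arbitrary integration constants

def x(t):  return C1 * math.exp(-t) + C2
def y(t):  return 2 * C1 + C2 * math.exp(t)
def dx(t): return -C1 * math.exp(-t)
def dy(t): return C2 * math.exp(t)

for t in (-1.0, 0.0, 0.5, 2.0):
    # dx/dt = x - e^{-t} y   and   dy/dt = 2 e^{t} x - y
    assert abs(dx(t) - (x(t) - math.exp(-t) * y(t))) < 1e-12
    assert abs(dy(t) - (2 * math.exp(t) * x(t) - y(t))) < 1e-12
```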
|
|ordinary-differential-equations|
| 1
|
Topological Quotients: Understanding $X/\sim$ and $X/Y$ with Insights into the disk Structure.
|
I could use some assistance in clarifying a concept. In topology, when we have a space denoted as $X$ , we can create a quotient space (a space of equivalence classes) denoted as $X/\sim$ , where $\sim$ is an equivalence relation defined on $X$ . However, I'm curious about the meaning of $X/Y$ when both $X$ and $Y$ are topological spaces. Specifically, I'd like to understand why the quotient $S^1 \times [0,1]/S^1 \times \{1\}$ is referred to as a disk $D^2$ .
|
As @SassatelliGiulio already notes in the comments, $X / Y$ in the context of topological spaces where $Y \subseteq X$ is a subspace means the quotient $X / {{\sim_Y}}$ where $a \sim_Y b$ if $a, b \in Y$ (or $a = b$ ). In other words, $X / Y$ is obtained from $X$ by taking all the points in $Y$ and identifying them to be a single point. We also say that $X / Y$ is obtained from $X$ by collapsing $Y$ . For your concrete example, $S^1 \times [0, 1]$ is a cylinder, and collapsing the "top" (or bottom, depending on which way you draw it) boundary circle $S^1 \times \{1\}$ gives you a cone, which is homeomorphic to $D^2$ . More formally, the map $f\colon S^1 \times [0, 1] \to D^2$ given by $f(x, t) = (1 - t)x$ is continuous and $f(x, 1) = 0$ for all $x \in S^1$ , so by the universal property of quotient spaces it factors over a map $S^1 \times [0, 1] / S^1 \times \{1\} \to D^2$ (note for this that a map $g\colon X \to Z$ respects $\sim_Y$ iff $g|_Y$ is constant) which is in fact a homeomorphism, being a continuous bijection from a compact space to a Hausdorff space.
|
|general-topology|algebraic-topology|quotient-spaces|
| 1
|
Topological Quotients: Understanding $X/\sim$ and $X/Y$ with Insights into the disk Structure.
|
I could use some assistance in clarifying a concept. In topology, when we have a space denoted as $X$ , we can create a quotient space (a space of equivalence classes) denoted as $X/\sim$ , where $\sim$ is an equivalence relation defined on $X$ . However, I'm curious about the meaning of $X/Y$ when both $X$ and $Y$ are topological spaces. Specifically, I'd like to understand why the quotient $S^1 \times [0,1]/S^1 \times \{1\}$ is referred to as a disk $D^2$ .
|
As mentioned in the comments, the quotient of a space $X$ by a subspace $Y$ is the space whose points are equivalence classes such that all of $Y$ is a single class (a point in the quotient), and every point in $X \setminus Y$ is a singleton class. So, we often speak of collapsing $Y$ to a point in $X$ , thinking of the quotient map $X \twoheadrightarrow X/Y$ . In your example, $X = S^1 \times [0, 1]$ is the cylinder over the circle, and the subspace $Y = S^1 \times \{1\}$ is the circle at one end of the cylinder. What happens when you pinch all of that circle to a single point under the quotient $q$ ? You get a cone. Now, you have to convince yourself that this cone is homeomorphic to the disk $D^2$ . Picture pushing the cone point down to the plane of the bottom circular boundary, effectively smashing it into the plane. Can you write down this map $h: X/Y \to D^2$ explicitly?
|
|general-topology|algebraic-topology|quotient-spaces|
| 0
|
Isomorphic objects have the same dimension (pivotal categories)
|
I want to prove that if two objects $X,Y$ in a pivotal category $\mathcal{C}$ (is that enough? Or do we need something more?) are isomorphic, then $X$ and $Y$ have the same dimension, i.e., $$ \mathrm{dim}(X) = \mathrm{Tr}^{L}(\mathrm{id}_{X}) = \mathrm{Tr}^{L}(\mathrm{id}_{Y}) = \mathrm{dim}(Y),$$ where $\mathrm{Tr}^{L}$ is the left quantum trace (see Tensor Categories-EGNO Ch 4.7). I want to show that the following diagram, corresponding to the dimensions of $X$ and $Y$ , commutes: Here $f\colon X \to Y$ is an isomorphism and $f^{\ast-1}$ is the dual morphism of $f^{-1}$ . I was able to show that the inner square commutes, but I am having trouble showing that the outer triangles commute. I tried taking the explicit form of $(f^{-1})^{\ast}$ but I don't get that the composition with $\mathrm{coev}_{X}$ should be $\mathrm{coev}_{Y}$ . I would appreciate a hint for this last part.
|
If you happen to know about string diagrams here is a simple diagrammatic proof for the left triangle: And below is that same proof translated word for word into familiar (perhaps too verbose) equations. The right triangle is done in a similar way. Note the use of the definition of the dual of $f^{-1}$ and the rigidity axiom $$ (f \otimes f^{*-1}) \text{coev}_X $$ $$ =(f \otimes 1_{Y^*}) (1_X \otimes f^{*-1}) \text{coev}_X $$ $$ = (f \otimes 1_{Y^*}) (1_X \otimes \text{ev}_X \otimes 1_{Y^*}) (1_X \otimes 1_{X^*} \otimes f^{-1} \otimes 1_{Y^*}) (1_X \otimes 1_{X^*} \otimes \text{coev}_Y)\text{coev}_X $$ $$ = (f \otimes 1_{Y^*}) (1_X \otimes \text{ev}_X \otimes 1_{Y^*}) (1_X \otimes 1_{X^*} \otimes f^{-1} \otimes 1_{Y^*}) (\text{coev}_X \otimes 1_Y \otimes 1_{Y^*}) \text{coev}_Y $$ $$ = (f \otimes 1_{Y^*}) (1_X \otimes \text{ev}_X \otimes 1_{Y^*}) (\text{coev}_X \otimes 1_X \otimes 1_{Y^*}) (f^{-1} \otimes 1_{Y^*}) \text{coev}_Y $$ $$= (f \otimes 1_{Y^*}) (f^{-1} \otimes 1_{Y^*}) \text{coev}_Y = \text{coev}_Y $$
|
|category-theory|monoidal-categories|
| 1
|
Maximizing $\sin x \cos x + \cos x$ geometrically
|
Using basic calculus, it's easy to see that we have a maximum at $ x = \pi/6$ , but is there a way to prove this geometrically? I can get the $\cos x \sin x$ to appear but I'm stuck here; how should I proceed with maximizing the sum of length of the base + hypotenuse ( $\cos x \sin x + \cos x$ )? Any hints would be appreciated.
|
Factorize the given expression to get $(\cos x)[\sin x + 1]$ . The given expression is just the area of the rectangle as shown in the following figure. It is not difficult to see that the three medium sized triangles are congruent. By the midpoint theorem, the segment of length $\cos x \sin x$ is half of the line BC, which has length $\cos x$ . This means $\dfrac {\sin x \cos x}{\cos x} = 0.5$ , i.e. $\sin x = 0.5$ . According to the figure, $x$ has to be $\dfrac {\pi}{6}$ .
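Independently of the geometric picture, a coarse grid search confirms the stationary point at $x = \pi/6$ with maximum value $\frac{3\sqrt 3}{4}$:

```python
import math

def f(x: float) -> float:
    return math.sin(x) * math.cos(x) + math.cos(x)

# grid search on [0, pi/2]; the maximizer should land near pi/6
best = max((k * math.pi / 2 / 100000 for k in range(100001)), key=f)

assert abs(best - math.pi / 6) < 1e-4
assert abs(f(best) - 3 * math.sqrt(3) / 4) < 1e-8
```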
|
|geometry|trigonometry|
| 0
|
Finding the range of the two-variable function $f(x,y) = 2 + \sqrt{9-x^2-y^2}$
|
I'm required to determine analytically the range of the following function: $f(x,y) = 2 + \sqrt{9-x^2-y^2}$ The domain for this function is $\{(x,y) \in \mathbb{R}^2 : x^2 + y^2 ≤ 9\}$ . I've found that the range for a very similar function (which is $f(x,y) = \sqrt{9-x^2-y^2}$ ) is $\{z \mid 0 ≤ z ≤ 3\}\implies \text{Range} = [0,3].$ However, I'm not sure if that same range applies to the first function. I have some confusion in that case. Could someone please help me find the range of this function? Thank you
|
The function $f\left(x,y\right)=2+\sqrt{9-x^2-y^2}$ represents an upper hemisphere centered at (0,0,2) with a radius of 3. This means the range of $f\left(x,y\right)$ is $\left[2,5\right]$ .
|
|calculus|functions|
| 0
|
Drawing Cards at Random
|
Situation We have $10$ cards, with $7$ blacks and $3$ reds. Draw $3$ cards at random. Exercise (1) What is the probability that at least one of the cards is black if we have sampling with replacement? (2) What is the probability that at least one of the cards is black if we have sampling without replacement? My attempt The complementary quantity of at least one black card is zero black cards, which is equal to $3$ red cards. (1) We have the following with replacement: \begin{align} P(\geq 1~\text{black}) &= 1-P(0~\text{black})\\ &= 1-P(3~\text{red})\\ &= 1-\frac{3}{10}\cdot\frac{3}{10}\cdot\frac{3}{10}\\ &= \frac{973}{1000}. \end{align} (2) We have the following without replacement: \begin{align} P(\geq 1~\text{black}) &= 1-P(0~\text{black})\\ &= 1-P(3~\text{red})\\ &= 1-\frac{3}{10}\cdot\frac{2}{9}\cdot\frac{1}{8}\\ &= \frac{714}{720}. \end{align} Question Is my attempt correct? If it's not, what is the right way of doing it?
|
Yes, these calculations are correct. For the second probability, if you give an answer as a fraction, it's nice to give it in lowest terms: $$ \frac{714}{720} = \frac{119}{120}. $$
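The same computation, done with exact fractions as a machine check:

```python
from fractions import Fraction as F

with_repl = 1 - F(3, 10) ** 3                     # sampling with replacement
without_repl = 1 - F(3, 10) * F(2, 9) * F(1, 8)   # sampling without replacement

print(with_repl)     # -> 973/1000
print(without_repl)  # -> 119/120  (Fraction reduces 714/720 automatically)
```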
|
|probability|combinatorics|
| 1
|
How can I prove that if $x^5-x^3+x=2$, with $x$ a real number, then $x^6>3$, using only middle school algebra?
|
How can I prove that if $x^5-x^3+x=2$ , with $x$ a real number, then $x^6>3$ , using only middle school algebra? I have been struggling with this one. And I can't find any solution that doesn't rely on derivatives.
|
Here's an answer I feel that would be about the level of middle school (but coming from a high schooler)... I've only used a tiny bit of the basic AM-GM Inequality: $x^5-x^3+x=2$ . Taking out $x$ as a common factor, and since $x$ is clearly non-zero, we get: $x^4-x^2+1=\frac{2}{x}$ . I will use this later. (Note also that $x^4-x^2+1>0$ always, so $x(x^4-x^2+1)=2$ forces $x>0$ .) Now in our original equation, we can rewrite as: $x^5=x^3-x+2$ . Multiply by $x$ on both sides here to obtain $x^6=x^4-x^2+2x$ . Use from earlier that $x^4-x^2+1=\frac{2}{x}$ to obtain $x^6=\frac{2}{x}-1+2x$ . Since $x>0$ , the AM-GM Inequality gives $\frac{2}{x}+2x \ge 2\sqrt{(2x)\left(\frac{2}{x}\right)}=4$ . Thus $x^6 \ge 4-1 = 3$ , with equality only when $2x=\frac{2}{x}$ , i.e. $x=1$ (recall $x>0$ ). Now notice that when we put $x=1$ into $x^5-x^3+x$ , we get $1$ and not $2$ , so $x=1$ does not satisfy the original equation. Hence equality cannot hold, which implies $x^6>3$ .
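A numeric sanity check (not a proof) that the real root of $x^5-x^3+x=2$ indeed has sixth power exceeding $3$:

```python
def f(x: float) -> float:
    return x**5 - x**3 + x - 2

# bisection: f(1) = -1 < 0 and f(2) = 24 > 0, so a root lies in (1, 2)
lo, hi = 1.0, 2.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

x = (lo + hi) / 2
print(x, x**6)   # root near 1.2055..., sixth power just above 3
assert x**6 > 3
```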
|
|algebra-precalculus|
| 0
|
Differential equation with memory
|
I am trying to integrate this equation: $$ \frac{dz}{dt} = \alpha \cdot (s(t)-z(t-1)) $$ I came up with this equation to model a "leaky integrator" system where s(t) are some samples from the environment and z(t) is a "decision variable". Thus this system can continuously decide between 2 alternatives based on the sign of z(t). However, I'm not sure how to integrate the equation and solve for z(t), mainly because of the term z(t-1) which Wolfram Alpha seems to struggle with... Any tips?
|
Your differential equation is a delay differential equation (DDE). You can find a quick overview on this here. Solving for the homogeneous solution When solving for a homogeneous solution of a linear DDE you can reduce it to its characteristic equation using the substitution $z\left( t \right) := e^{\lambda \cdot t} \implies z^{\left( n \right)}\left( t + \tau \right) \equiv \lambda^{n} \cdot e^{\lambda \cdot \left( t + \tau \right)}$ . You will get not a polynomial like in ODEs but a similar equation which you can solve using the Lambert W function $\operatorname{W}_{k}$ : \begin{align*} \frac{\operatorname{d}z\left( t \right)}{\operatorname{d}t} &= \alpha \cdot \left( s\left( t \right) - z\left( t - 1 \right) \right)\\ \frac{\operatorname{d}z_{h}\left( t \right)}{\operatorname{d}t} &= -\alpha \cdot z_{h}\left( t - 1 \right) \tag{$z_{h}\left( t \right) := e^{\lambda \cdot t}$}\\ \lambda \cdot e^{\lambda \cdot t} &= -\alpha \cdot e^{\lambda \cdot \left( t - 1 \right)}\\ \lambda &= -\alpha \cdot e^{-\lambda}\\ \lambda \cdot e^{\lambda} &= -\alpha\\ \lambda &= \operatorname{W}_{k}\left( -\alpha \right) \end{align*}
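Numerically, candidate characteristic roots satisfying $\lambda e^{\lambda} = -\alpha$ can be evaluated with SciPy's `lambertw` (assuming SciPy is available; $\alpha = 0.2$ is an arbitrary test value):

```python
import cmath

from scipy.special import lambertw

alpha = 0.2
for k in (0, -1, 1):
    lam = lambertw(-alpha, k)   # branch k of the Lambert W function
    # characteristic equation of z'(t) = -alpha * z(t - 1):
    # lambda * e^{lambda t} = -alpha * e^{lambda (t-1)}  =>  lambda * e^lambda = -alpha
    assert abs(lam * cmath.exp(lam) + alpha) < 1e-10
    print(k, lam)
```

Branches `0` and `-1` give real roots here (since $-\alpha > -1/e$); the other branches come in complex pairs, contributing oscillatory homogeneous modes.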
|
|ordinary-differential-equations|delay-differential-equations|
| 1
|
How can I find the sum $\sum_{i=1}^{10} i(i+1)$?
|
During the resolution of a physics problem, I have encountered the sum: $$\sum_{i=1}^{10} i(i+1)$$ For practical purposes, I evaluated it using a Python script, but a calculator would have been enough since the sum stops at $i=10$ . However, I wonder what I could do mathematically with that sum if I did not have access to a computer and the sum reached large values of $i$ . Is there a way to evaluate it analytically?
|
We have $$(i+1)^3-i^3=3i^2+3i+1\\=3i(i+1)+1$$ Thus $$\sum_{i=1}^ni(i+1)={1\over 3}\sum_{i=1}^n[(i+1)^3-i^3]-{1\over 3}n\\ ={1\over 3}[(n+1)^3-1]-{1\over 3}n={n(n+1)(n+2)\over 3}$$ For $n=10$ this gives ${10\cdot 11\cdot 12\over 3}=440$ .
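The telescoping identity yields the closed form $\sum_{i=1}^n i(i+1) = \frac{n(n+1)(n+2)}{3}$, which a few lines of Python confirm against the direct sum:

```python
def closed_form(n: int) -> int:
    # from telescoping (i+1)^3 - i^3 = 3i(i+1) + 1
    return n * (n + 1) * (n + 2) // 3

for n in range(1, 50):
    assert closed_form(n) == sum(i * (i + 1) for i in range(1, n + 1))

print(closed_form(10))  # -> 440
```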
|
|summation|
| 0
|
Is $\varphi : \mathbb{R}_{2 \times 2}\rightarrow \mathbb{R}^{4}$ onto?
|
The question was to decide whether $\varphi$ is a linear transformation – I've checked and properly shown that it is in fact a linear transformation. Now I'm trying to decide whether it's an isomorphism. I've checked for injectivity - everything checks out but I got stuck at evaluating surjectivity. $\varphi : \mathbb{R}_{2 \times 2}\rightarrow \mathbb{R}^{4}$ is defined as $\varphi\begin{bmatrix} a & b \\ c & d \end{bmatrix}=(a,-a+c,d,b)$ . We have been taught that when checking for surjectivity, we're trying to figure out whether for every vector $v$ there is a vector $u$ such that $\varphi(v) = u$ . So I tried to show that for every $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ there is $\begin{bmatrix} x & y \\ z & w \end{bmatrix}$ such that $\varphi\begin{bmatrix} a & b \\ c & d \end{bmatrix}=(a,-a+c,d,b)=\begin{bmatrix} x & y \\ z & w \end{bmatrix}$ . And I got stuck here (or confused even?) and I'd appreciate any help.
|
You are mixing elements of $\Bbb R_{2\times 2}$ and $\Bbb R^4$ . To check surjectivity you need to begin with $u = (w,x,y,z)$ and find $v =\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ such that $\varphi(v) = u$
|
|linear-algebra|vector-spaces|
| 0
|
How can I find the sum $\sum_{i=1}^{10} i(i+1)$?
|
During the resolution of a physics problem, I have encountered the sum: $$\sum_{i=1}^{10} i(i+1)$$ For practical purposes, I evaluated it using a Python script, but a calculator would have been enough since the sum stops at $i=10$ . However, I wonder what I could do mathematically with that sum if I did not have access to a computer and the sum reached large values of $i$ . Is there a way to evaluate it analytically?
|
$$ \sum_{i=0}^n x^i = \frac{1-x^{n+1}}{1-x} $$ $$ \sum_{i=0}^n x^{i+1} = x\frac{1-x^{n+1}}{1-x} $$ $$ \sum_{i=0}^n i(i+1)x^{i-1} = \frac{d^2}{dx^2}x\frac{1-x^{n+1}}{1-x} $$ $$ \sum_{i=0}^n i(i+1) = \left.\frac{d^2}{dx^2}x\frac{1-x^{n+1}}{1-x}\right|_{x=1}. $$
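As a hedged numeric sanity check of this generating-function approach (pure Python, my own setup): the closed form has a removable singularity at $x=1$, where its value is $n+1$ by continuity, and a central second difference there should reproduce $\sum_{i=0}^n i(i+1)$:

```python
n = 10

def f(x):
    # x * (1 - x^(n+1)) / (1 - x), extended by continuity at x = 1
    if x == 1.0:
        return float(n + 1)
    return x * (1 - x ** (n + 1)) / (1 - x)

# numerical second derivative at x = 1 via central differences
h = 1e-4
second_deriv = (f(1 + h) - 2 * f(1) + f(1 - h)) / h ** 2

exact = sum(i * (i + 1) for i in range(0, n + 1))   # 440 for n = 10
assert abs(second_deriv - exact) < 1e-3
```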
|
|summation|
| 0
|
Property of lower semicontinuous functions
|
I am trying to prove that f is sequentially lower semicontinuous at x if and only if $f(x)=\sup_{r>0}\inf_{y \in B(x,r)}f(y)$ . Following the proof of ( $F$ is lower semicontinuous $\iff F(x)=\sup_{r>0}\inf_{y\in B(x,r)}F(y)$ for all $x\in X$ ), I can understand the $\Leftarrow$ implication but I cannot prove the converse direction. For a general function it holds that $f(x)\geq \sup_{r>0}\inf_{y \in B(x,r)}f(y)$ . So I need to show that if f is lower semicontinuous, then it holds $f(x)\leq \sup_{r>0}\inf_{y \in B(x,r)}f(y).$ Since for $x_n\to x$ it holds $\liminf_{n\to\infty}f(x_n)\geq f(x)$ , it would suffice to find a sequence $(x_n)_n$ for which $\liminf_{n\to\infty}f(x_n) = \sup_{r>0}\inf_{y \in B(x,r)}f(y)$ .
|
If $\sup_{r>0}\inf_{y\in B(x,r)} f(y)=f(x)$ , then $f$ is lower semicontinuous at $x$ . Let $\lim_{n\rightarrow \infty} x_n=x$ . We define $r_n:=\sup_{m\geq n} \vert x_m -x \vert$ . Note that $\lim_{n\rightarrow \infty} r_n=0$ . Thus, we have \begin{align*} \liminf_{n\rightarrow \infty} f(x_n) &=\lim_{n\rightarrow \infty} \inf_{m\geq n} f(x_m) \geq \lim_{n\rightarrow \infty} \inf_{y\in B(x,r_n)} f(y) \\ &=\lim_{r\rightarrow 0^+} \inf_{y\in B(x,r)} f(y) =\sup_{r>0} \inf_{y\in B(x,r)} f(y) =f(x). \end{align*} Now let's show the other direction. Namely, let $f$ be lower semicontinuous at $x$ ; then $\sup_{r>0}\inf_{y\in B(x,r)} f(y)=f(x)$ . We have $\sup_{r>0} \inf_{y\in B(x,r)} f(y)\leq f(x)$ (this holds for every function), thus, we are left to show that $\sup_{r>0} \inf_{y\in B(x,r)} f(y)\geq f(x)$ . As $r\mapsto \inf_{y\in B(x,r)} f(y)$ is a decreasing function, we get $$ \sup_{r>0} \inf_{y\in B(x,r)} f(y) = \lim_{n\rightarrow \infty} \inf_{y\in B(x,1/n)} f(y).$$ For every $n\in \mat
|
|real-analysis|calculus-of-variations|supremum-and-infimum|limsup-and-liminf|semicontinuous-functions|
| 0
|
Show that the function $f:(0,1] \rightarrow \mathbb{R}$ defined by $f(x)=x\sin{1/x}$ is uniformly continuous on $(0,1]$.
|
My professor gave the class a hint to try to find a continuous extension for this problem. In our textbook, Elementary Analysis: The Theory of Calculus by Ross, one of the theorems states "A real-valued function $f$ on $(a,b)$ is uniformly continuous on $(a,b)$ if and only if it can be extended to a continuous function $\tilde{f}$ on $[a,b]$ ." Now, I have not had any trouble finding a continuous extension, but the fact that the domain in the problem is $(0,1]$ and not $(0,1)$ for some reason is making it difficult for me to figure out how to use the above theorem for this problem. Does anyone have any hints? I would greatly appreciate it. Thank you!
|
Define $g: [0,1]\to\mathbb{R}$ given by $$g(x) = \cases{x\sin(1/x) \quad \text{for } x\ne 0 \\ 0\quad\text{for }x=0}$$ $g$ is obviously continuous in $(0,1]$ , and as $\sin(1/x)$ is bounded, $$\displaystyle\lim_{x\to 0} g(x)=\displaystyle\lim_{x\to 0} x\sin(1/x)=0=g(0),$$ so $g$ is continuous at $0$ too. Now, $g$ is continuous on the compact set $[0,1]$ , so the Heine Theorem gives us that $g$ is uniformly continuous on $[0,1]$ . Let's prove that $f$ is uniformly continuous on $(0,1]$ . Let $\varepsilon>0$ . As $g$ is uniformly continuous, there exists a $\delta>0$ such that, for all $x,y\in [0,1]$ with $|x-y|<\delta$ we have $|g(x)-g(y)|<\varepsilon$ . In particular, if we take $x,y\in (0,1]$ (with $|x-y|<\delta$ ), $$|g(x)-g(y)|=|x\sin(1/x)-y\sin(1/y)|=|f(x)-f(y)|<\varepsilon.$$
|
|real-analysis|
| 0
|
Definition of a weak-star weak-star continuous function
|
I have seen the phrase "weak-star weak-star continuous" many times. But I'm don't know what it means for a function to be weak-star weak-star continuous. I just assumed it means that the function is continuous in the weak topology. Could someone provide a formal definition and an example on this? Thank you in advance!
|
Defining the continuity of a function in topological spaces requires two topologies, one on the domain and one on the codomain. It might be relatively easy to forget that fact if you use the same specific spaces and topologies every time like with $\mathbb{R}$ , metric spaces or normed spaces in general, which is why often a simple "continuous" is unambiguous enough in a lot of situations, but not with weak topologies of course. Now, the weak- $*$ topology on the (continuous) dual $X^*$ of a topological $\mathbb{K}$ -vector space $X$ (it does not need to be normed as commented by Marco below, but this includes the case $X$ normed) ( $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$ ) is the topology $\tau_{w^*}(X)$ generated by the sets $\Phi_x^{-1}(U)$ for $U$ open in $\mathbb{K}$ and $x \in X$ (personal notations, probably not standard), where $\Phi_x$ is the evaluation map $\Phi_x : \Lambda \in X^* \mapsto \Lambda(x) \in \mathbb{K}$ . Hence, a map $f : X^* \to Y^*$ is said to be weak- $*$ weak- $*$ contin
|
|real-analysis|general-topology|functional-analysis|definition|
| 0
|
Second derivatives of 1/r
|
Let $r = \sqrt{x^2 + y^2 + z^2}$ . From the fact that $\nabla^2 r^{-1} = -4\pi \delta^{(3)}(\vec{r})$ , is it correct to say that $$ \frac{\partial^2}{\partial x^2}(r^{-1}) = \frac{3x^2 - r^2}{r^5} - \frac{4\pi}{3} \delta^{(3)}(\vec{r}) \\ \frac{\partial^2}{\partial x \partial y}(r^{-1}) = \frac{3xy}{r^5} $$ The question is, is it justified to split the Dirac delta evenly in all three directions? This seems like the most straightforward way. $$ \delta = \frac{\delta}{3} + \frac{\delta}{3} + \frac{\delta}{3} $$ Or maybe there are other ways to distribute, while still respecting symmetry, like $$ \delta = \frac{x^2 \delta}{r^2} + \frac{y^2 \delta}{r^2} + \frac{z^2 \delta}{r^2} $$
|
You're completely correct that $\delta$ -at- $0$ in 3D is some sort of "product" of the one-dimensional $\delta$ 's in any choice of (maybe best to be orthogonal...?) coordinates. But, although this is probably unsatisfying, the "decomposition" of it is as a tensor product $\delta_x\otimes \delta_y\otimes \delta_z$ , in the (standard) coordinate variables. Thus, by a sort of product rule, applying $\Delta$ , expressed as sum of second derivatives in the chosen variables, $$ \Delta(\delta) \;=\; (\partial_x^2+\partial_y^2+\partial_z^2)(\delta_x\otimes\delta_y\otimes\delta_z) $$ $$ \;=\; \delta_x''\otimes \delta_y\otimes \delta_z + \delta_x\otimes \delta_y''\otimes \delta_z + \delta_x \otimes\delta_y\otimes\delta_z'' $$ This is literally correct, but there are (perhaps unsurprisingly) some subtleties in interpreting/using it. :)
|
|derivatives|dirac-delta|
| 0
|
If $\{S_n \}$ is a disjoint sequence of subsets of $X$ then $\lim S_n=\varnothing$
|
Let $X$ be a set. Consider a sequence of subsets \begin{align} \big\{S_n\subseteq X\,\big|\,S_i\cap S_j=\varnothing\ \forall i\ne j \big\}. \end{align} Then $\lim_{n\to\infty}S_n$ exists and is equal to $\varnothing$ . My proof : To prove that $\lim_{n\to\infty}S_n$ exists, we need to show $\displaystyle\liminf S_n=\limsup S_n.$ Since the direction $\subseteq $ is trivial, we will show the direction $\supseteq$ . By definition, we have \begin{align} \limsup S_n\ &=\ \bigcap_{n=1}^\infty \bigcup_{k\geq n}S_k \newline \liminf S_n\ &=\ \bigcup_{n=1}^\infty \bigcap_{k\geq n}S_k. \end{align} Let $x\in\displaystyle \limsup S_n$ , then \begin{align} \forall n\in\mathbb N,\ x\in\bigcup_{k\geq n}S_k, \end{align} which means \begin{align} \forall n\in\mathbb N,\ \exists k\geq n: \ x\in S_k. \end{align} Here, I wonder how will we use the assumption that the sets $S_n$ are disjoint ? May you give me some hints on this ? Thanks.
|
Assuming that $S_n = E_n$ and that this is just a notational mixup, note that $\forall n \in \mathbb{N}\exists k \geq n\colon x \in E_k$ implies that $x$ lies in $E_k$ for infinitely many values of $k$ whereas disjointness implies that $x$ lies in at most one $E_k$ , so no such $x$ exists.
|
|measure-theory|
| 0
|
If $\{S_n \}$ is a disjoint sequence of subsets of $X$ then $\lim S_n=\varnothing$
|
Let $X$ be a set. Consider a sequence of subsets \begin{align} \big\{S_n\subseteq X\,\big|\,S_i\cap S_j=\varnothing\ \forall i\ne j \big\}. \end{align} Then $\lim_{n\to\infty}S_n$ exists and is equal to $\varnothing$ . My proof : To prove that $\lim_{n\to\infty}S_n$ exists, we need to show $\displaystyle\liminf S_n=\limsup S_n.$ Since the direction $\subseteq $ is trivial, we will show the direction $\supseteq$ . By definition, we have \begin{align} \limsup S_n\ &=\ \bigcap_{n=1}^\infty \bigcup_{k\geq n}S_k \newline \liminf S_n\ &=\ \bigcup_{n=1}^\infty \bigcap_{k\geq n}S_k. \end{align} Let $x\in\displaystyle \limsup S_n$ , then \begin{align} \forall n\in\mathbb N,\ x\in\bigcup_{k\geq n}S_k, \end{align} which means \begin{align} \forall n\in\mathbb N,\ \exists k\geq n: \ x\in S_k. \end{align} Here, I wonder how will we use the assumption that the sets $S_n$ are disjoint ? May you give me some hints on this ? Thanks.
|
We have that $\text{lim sup } S_n$ is the set of elements which are in an infinite number of $S_n$ . As the $S_n$ are disjoint, this directly implies $\text{lim sup } S_n=\emptyset$ .
|
|measure-theory|
| 0
|
Fourier Transform of $sgn(t)$
|
I'm trying to find the Fourier transform of $sgn(t)$ where $$ sgn(t) = \begin{cases} 1, & t > 0 \\ 0, & t = 0 \\ -1, & t < 0 \end{cases} $$ By definition, $$ X(\omega) = \int_{-\infty}^{\infty} sgn(t) e^{-i\omega t} \, dt = \int_0^{\infty} e^{-i\omega t} \, dt + \int_{-\infty}^0 -e^{-i\omega t} \, dt. $$ However, that second integral diverges. Where is the flaw in my methodology?
|
There can be several ways to get the FT of $\operatorname{sgn}(t)$ . Note that $\operatorname{sgn}(t)$ is not integrable and cannot directly do the FT. But it can be achieved by limiting process, for instance, we can construct $\operatorname{sgn}(t)=\int_R\frac{\sin(\omega t)}{\pi\omega}\mathrm d\omega$ or by Heaviside functions with a decaying factor $e^{-\alpha t}$ with $\alpha>0$ . $$\operatorname{sgn}(t)=u(t)-u(-t)=\lim_{\alpha\to0}\left[e^{-\alpha t}u(t)-e^{\alpha t}u(-t)\right].$$ Then we can evaluate Fourier integrals. $$\mathscr{F}[\operatorname{sgn}(t)]=\lim_{\alpha\to0}\left\{\mathscr{F}\left[e^{-\alpha t}u(t)\right]-\mathscr{F}\left[e^{\alpha t}u(-t)\right]\right\}=\lim_{\alpha\to0}\left\{\frac{1}{\alpha+i\omega}-\frac{1}{\alpha-i\omega}\right\}=\frac{2}{i\omega}$$ A second way is to use $$\mathscr{F}[u(t)]=\pi\delta(\omega)+\frac{1}{i\omega}=\mathscr{F}\left[\frac12+\frac12\operatorname{sgn}(t)\right]=\pi\delta(\omega)+\frac12\mathscr{F}[\operatorname{sgn}(t)].$$ Therefore
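The limiting argument in the answer can be sanity-checked numerically: for a small decay factor $\alpha$, the transform of the two damped one-sided exponentials, $\frac{1}{\alpha+i\omega}-\frac{1}{\alpha-i\omega}$, should be close to $\frac{2}{i\omega}$. A minimal sketch (the values of $\alpha$ and $\omega$ are arbitrary illustrative choices):

```python
alpha = 1e-6          # small decay factor, mimicking the limit alpha -> 0
for omega in (0.5, 1.0, 3.0, -2.0):
    approx = 1 / (alpha + 1j * omega) - 1 / (alpha - 1j * omega)
    exact = 2 / (1j * omega)
    # the difference is 2j*alpha^2 / (omega*(alpha^2 + omega^2)), of order alpha^2
    assert abs(approx - exact) < 1e-9
```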
|
|fourier-analysis|
| 0
|
How to show that the set of minimizers for a convex function over a convex set is closed?
|
It is well known that the set of minimizers of a convex function over a convex set is convex. It is also true that it is closed. But I have not been able to show this result. Let $f$ be a convex function on a convex set $C$ , I wish to show that the set of minimizers of $f$ , denoted as $S$ , is closed. To show $S$ is closed, I need to show $\mathbb{R}^n \backslash S$ is open. To show $\mathbb{R}^n \backslash S$ is open, I need to show that for any $x \in \mathbb{R}^n \backslash S$ there exists some $r > 0$ such that $\{y \in \mathbb{R}^n \mid \|x - y\|_2 < r\} \subseteq \mathbb{R}^n \backslash S$ . In other words, any $y$ from this ball needs to satisfy $f(y) > f_\min$ , where $f_\min$ is the minimum value of $f(x)$ . I am not sure how to proceed from this point. I've tried this: let $y = x + (r/2)u$ , $u \in \mathbb{R}^n \backslash S$ , then $f(y) = f(x+(r/2)u) = f(r/2(x + u) + r/2x) \leq (r/2) f(x+u) + r/2f(x)$ , but this is not getting me anywhere.
|
Let $a_n$ be a sequence of minimizers that converges to $a$ . To show the set is closed we need to show $a$ is also a minimizer. For all $n\in\mathbb{N}$ you have $a_n\in argmin_{x\in S} f(x)$ where both $f$ and $S$ are convex. This means we have, $f(a_n)\leq f(x)$ for all $x\in S$ . Since all convex functions $f:\mathbb{R}^n\rightarrow\mathbb{R}$ are continuous, we have: $f(a) = f(\lim_{n\rightarrow\infty} a_n) = \lim_{n\rightarrow\infty} f(a_n)$ . Since $f(a_n)\leq f(x)$ for all $x\in S$ and $n\in\mathbb{N}$ , it follows that $\lim_{n\rightarrow\infty}f(a_n)\leq \lim_{n\rightarrow\infty}f(x)= f(x)$ for all $x\in S$ . Therefore, by the preceding chain of equalities, it follows that $f(a)\leq f(x)$ for all $x\in S$ , so that $a\in argmin_{x\in S} f(x)$ .
|
|real-analysis|optimization|proof-writing|convex-analysis|convex-optimization|
| 0
|
Approach to an improper integral involving sinh
|
How do I calculate an integral $\int_{-\infty}^{\infty} \frac{z-a}{\sinh(z-a)}\frac{z+a}{\sinh(z+a)} dz$ for $a>0$ ? Expanding the integrand and integrating term-by-term (a-la polylog) gives rise to an ugly looking double series. Edit: it surely can be done this way, but what is the most straightforward way to calculate this?
|
If you ask WolframAlpha , it says "Standard computation time exceeded...". If you use the free download version of Wolfram Engine you get an answer. If it's a homework problem you should of course not ask it here. As for the ugly looking series you mention, the primitive function is according to the engine: $$ \frac{1}{{e^{4 a}-1}} \left\{e^{2 a} \left(-2 a^2 \log (\sinh (a-z) \text{csch}(a+z))-2 z \text{Li}_2\left(e^{2 a-2 z}\right)+2 z \text{Li}_2\left(e^{-2 (a+z)}\right)-\text{Li}_3\left(e^{2 a-2 z}\right)+\text{Li}_3\left(e^{-2 (a+z)}\right)+2 z^2 \left(\log \left(1-e^{2 a-2 z}\right)-\log \left(1-e^{-2 (a+z)}\right)\right)\right)\right\} $$ which is in closed form, not a (double) series at all. And for the definite integral it gives an even simpler result where no polylogs at all are present any more.
|
|improper-integrals|
| 0
|
Global $L^{2}$ solution of Laplace equation in $\mathbb{R}^{3}$.
|
Does there exist a global and non-trivial solution $u\in L^{2}(\mathbb{R}^{3})$ to the Laplace equation $\Delta u=0$ ? Using the spherical symmetry of the problem, one can consider the usual solution obtained by separation of variables, which yields something like this $$u(r,\theta,\varphi)=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}c_{ml}r^{l}Y_{ml}(\theta,\varphi)$$ where $Y_{ml}$ are the spherical harmonics and $c_{ml}$ coefficients. This sum is well defined as long as $r$ is smaller than a critical radius $R$ which depends on the choice of coefficients $c_{ml}$ . However, even if we manage to find coefficients such that $R\to \infty$ , it seems to me that the solutions will never be in $L^{2}$ due to the $r^{l}$ terms. Is there any other Ansatz one can think of?
|
Also, in addition to @JackT's apt answer: we can view this as a generalization of the Liouville theorem for entire functions: if we ask for a tempered distribution (a pretty general thing) $u$ such that $\Delta u=0$ , then, taking Fourier transform, $r^2\cdot \widehat{u}=0$ . That is, $\widehat{u}$ is a tempered distribution supported at $0$ . Thus, it is a finite linear combination of derivatives of Dirac deltas (and more can be said). Thus, the original $u$ must have been a polynomial... and none of these is square-integrable.
|
|partial-differential-equations|harmonic-functions|laplacian|elliptic-equations|
| 0
|
Four points in $\Bbb R^3$ form equal angles then |AB|=|CD| and |BC|=|DA|.
|
I'm interested in the following problem: If $A,B,C,D$ are four distinct points in $\Bbb R^3$ satisfying $\angle ABC=\angle BCD=\angle CDA=\angle DAB$ , then is it must be true that $|AB|=|CD|,|BC|=|DA|$ ? If $A,B,C,D$ are coplanar then $ABCD$ is a rectangle, so $|AB|=|CD|,|BC|=|DA|$ . If $A,B,C,D$ are not coplanar then I don't know how to prove. For $|AB|=|BC|=|CD|=|DA|$ I have an example: $A = (0, -1, -a)$ $B = (-1, 0, a)$ $C = (0, 1, -a)$ $D = (1, 0, a)$ I can rephrase the problem in terms of vectors $B-A,C-B,D-C,A-D$ : If four vectors $\vec a,\vec b,\vec c,\vec d$ in $\Bbb R^3$ satisfy $$\vec a+\vec b+\vec c+\vec d=\vec0$$ and $$\frac{\vec a\cdot\vec b}{|\vec a||\vec b|}=\dots=\frac{\vec d\cdot\vec a}{|\vec d||\vec a|}\tag1$$ then is it must be true that $|\vec a|=|\vec c|,|\vec b|=|\vec d|$ ? I can write the equation $(1)$ in components $$\frac{a_1 b_1+a_2 b_2+a_3 b_3}{\sqrt{a_1^2+a_2^2+a_3^2} \sqrt{b_1^2+b_2^2+b_3^2}}=\frac{b_1 c_1+b_2 c_2+b_3 c_3}{\sqrt{b_1^2+b_2^2+b_3^2} \sqrt{c
|
If we define the lengths of the line segments as $ a = |AB|, b = |BC| , c = |CD| , d = |DA| $ and write $\theta$ for the common angle, then, from the law of cosines, $|AC|^2 = a^2 + b^2 - 2 a b \cos \theta = c^2 + d^2 - 2 c d \cos \theta $ and $ |BD|^2 = a^2 + d^2 - 2 a d \cos \theta = b^2 + c^2 - 2 b c \cos \theta $ . Fix $\theta, a$ and $ b$ to some values, and solve for $c$ and $d$ . I fixed $\theta = \dfrac{\pi}{6} $ , $a = 1 , b = 1.2 $ and got the following solutions for $c,d$ : $ (c, d) = (1, 1.2) , (1, 0.532050808), (1.078460969, 1.2) $ So we don't necessarily have to have $ |AB| = |CD|$ and $| BC | = |DA| $ .
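The reported solutions can be verified directly. A minimal check (pure Python; the function name `residuals` is my own) confirms that both diagonal equations hold for the symmetric solution $(c,d)=(a,b)$ and for the answerer's non-symmetric one:

```python
import math

theta = math.pi / 6
a, b = 1.0, 1.2
cos_t = math.cos(theta)

def residuals(c, d):
    # |AC|^2 computed from the two opposite corners, and likewise for |BD|^2
    r1 = (a * a + b * b - 2 * a * b * cos_t) - (c * c + d * d - 2 * c * d * cos_t)
    r2 = (a * a + d * d - 2 * a * d * cos_t) - (b * b + c * c - 2 * b * c * cos_t)
    return r1, r2

# the symmetric solution (c, d) = (a, b) ...
for r in residuals(1.0, 1.2):
    assert abs(r) < 1e-12
# ... and the non-symmetric one, so |AB| = |CD|, |BC| = |DA| is not forced
for r in residuals(1.0, 0.532050808):
    assert abs(r) < 1e-6
```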
|
|euclidean-geometry|3d|
| 1
|
The set of all critical points of a smooth map is closed
|
Let $f : \mathbb{R}^m \to \mathbb{R}^n$ be a smooth map. How do I show that the set of all critical points of $f$ is closed in $\mathbb{R}^m$? (Here, a critical point is a point $x \in \mathbb{R}^m$ for which the derivative $Df_x : \mathbb{R}^m \to \mathbb{R}^n$ is not onto.) I can prove this by the inverse function theorem when $m = n$ but cannot see any easy way of going about it when $m > n$. Thanks.
|
Alternatively, you can show that the set of regular points (i.e. where $dF_x$ is surjective) is open. If $m < n$ , $dF_x$ can't be surjective, so we're done. If $m = n$ , we can use the IFT, as you say. If $m > n$ , look at $dF_x$ , which we assume to be surjective. So, there are $n$ independent columns, i.e. there is an $n \times n$ nonsingular submatrix $A_x$ . The determinant is continuous and the partials are continuous, so consider $h(x) = \mathrm{det}(A_x)$ : the preimage of $\mathbf{R} - \{0\}$ under $h$ is an open neighbourhood of $x$ comprising regular points, as desired. Edit: I realize now that this is precisely what one of the linked posts says...
|
|differential-geometry|
| 0
|
Four points in $\Bbb R^3$ form equal angles then |AB|=|CD| and |BC|=|DA|.
|
I'm interested in the following problem: If $A,B,C,D$ are four distinct points in $\Bbb R^3$ satisfying $\angle ABC=\angle BCD=\angle CDA=\angle DAB$ , then is it must be true that $|AB|=|CD|,|BC|=|DA|$ ? If $A,B,C,D$ are coplanar then $ABCD$ is a rectangle, so $|AB|=|CD|,|BC|=|DA|$ . If $A,B,C,D$ are not coplanar then I don't know how to prove. For $|AB|=|BC|=|CD|=|DA|$ I have an example: $A = (0, -1, -a)$ $B = (-1, 0, a)$ $C = (0, 1, -a)$ $D = (1, 0, a)$ I can rephrase the problem in terms of vectors $B-A,C-B,D-C,A-D$ : If four vectors $\vec a,\vec b,\vec c,\vec d$ in $\Bbb R^3$ satisfy $$\vec a+\vec b+\vec c+\vec d=\vec0$$ and $$\frac{\vec a\cdot\vec b}{|\vec a||\vec b|}=\dots=\frac{\vec d\cdot\vec a}{|\vec d||\vec a|}\tag1$$ then is it must be true that $|\vec a|=|\vec c|,|\vec b|=|\vec d|$ ? I can write the equation $(1)$ in components $$\frac{a_1 b_1+a_2 b_2+a_3 b_3}{\sqrt{a_1^2+a_2^2+a_3^2} \sqrt{b_1^2+b_2^2+b_3^2}}=\frac{b_1 c_1+b_2 c_2+b_3 c_3}{\sqrt{b_1^2+b_2^2+b_3^2} \sqrt{c
|
Unless I misunderstand, here's a quick visual disproof: You may extend the lines BC and AD arbitrarily (but by the same distance) to make DC whatever length you like without changing any of the interior angles.
|
|euclidean-geometry|3d|
| 0
|
Sets as numbers through Dedekind's cuts
|
I am having some issues understanding the definition of real numbers as a Dedekind cut. Firstly, a Dedekind cut is a set, not a number (namely, a set in which all its values are less than the value of the rational number q), so I don't see how we can proceed from that. Then, I don't see the intuition behind defining real numbers as Dedekind cuts, even as a starting point, but I think it comes from this first problem. Just to make it clear, I don't see how $\pi$ is the set of all rational numbers smaller than $\pi$ .
|
So I think a lot of your confusion stems from understanding how/why mathematics is formalised. Set theory is a very simple theory, from which we expect there to be no contradictions. Here's a slightly anachronistic account of why we may want this: In 1903, Frege attempted to define arithmetic from simple laws; one of which was `Basic Law 5'. In a (rough) sense, he proposed that a set was defined by a property shared amongst its elements; a seemingly simple system. Bertrand Russell (see Russell's paradox) demonstrated that this actually leads to a contradiction, namely if we consider the set of all sets that do not contain themselves $X=\{x:x\notin x\}$ . From there we just ask if $X\in X$ . Either possibility leads to a contradiction. So what mathematicians developed were various laws of sets, axioms (which we predict by raw intuition, but can never prove) that don't lead to contradiction. One popular one is Zermelo-Fraenkel (ZF) set theory. Here's the rub; since mathematics with more exo
|
|elementary-set-theory|real-numbers|rational-numbers|
| 1
|
What is meant by "find the eigenvectors of a matrix"
|
Let's say I have a question asking me to find the eigenvectors associated with $\lambda = 2$ for $A$ (a $3 \times 3$ matrix). I find that the eigenvectors associated with $\lambda = 2$ are all vectors spanned by $\mathbf{u}_1$ and $\mathbf{u}_2$ (a plane). Can I call $\mathbf{u}_1$ and $\mathbf{u}_2$ the eigenvectors associated with $\lambda = 2$ , even though there is a whole plane of them? I can find a basis for the space but I don't see how I can give a finite number of vectors. Is the correct way to word it that eigenvectors are $\operatorname{span} \{\mathbf{u}_1, \mathbf{u}_2 \}$ ?
|
Let's review the commonly used terms for eigenvalue problems: The idea behind eigenvalue problems is that we would like to find the vectors $v \neq 0$ which are mapped to a multiple of themselves under an endomorphism, or, after choosing a basis, a matrix $A$ . Meaning, $$A \cdot v = \lambda \cdot v,$$ where $\lambda$ is an element in the underlying field, for example the real numbers. If there is such a non-trivial vector for a specific $\lambda$ such that this equation holds, $\lambda$ is called an eigenvalue. Any $v \neq 0$ which makes the equation true is then called an eigenvector to the eigenvalue $\lambda$ , so yes, $u_1$ and $u_2$ are most certainly called eigenvectors! It is easy to see that if $v$ is an eigenvector, so is any non-zero multiple of $v$ ! Therefore, the space of all eigenvectors to an eigenvalue together with the zero vector, which is called the eigenspace $V_\lambda$ of the eigenvalue $\lambda$ , is usually not finite (And never finite over the real or complex numb
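The point that every nonzero combination of $u_1$ and $u_2$ is again an eigenvector can be illustrated with a toy example (a hedged sketch; the diagonal matrix below is my own illustrative choice of a $3\times 3$ matrix whose eigenspace for $\lambda=2$ is a plane):

```python
# Toy 3x3 matrix (illustrative choice): eigenvalue 2 with a 2-dimensional eigenspace
A = [[2, 0, 0],
     [0, 2, 0],
     [0, 0, 5]]

u1 = [1, 0, 0]
u2 = [0, 1, 0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# any combination a*u1 + b*u2 (not both zero) is an eigenvector for lambda = 2
for a, b in [(1, 0), (0, 1), (3, -7), (0.5, 2.5)]:
    v = [a * u1[k] + b * u2[k] for k in range(3)]
    assert matvec(A, v) == [2 * x for x in v]
```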
|
|linear-algebra|eigenvalues-eigenvectors|definition|
| 0
|
Conditional characteristic functions and distributions
|
Let $X$ be an RV, $\mathcal{F}$ a $\sigma$ -algebra, $\phi$ the characteristic function of a distribution $\nu$ . Assume that for the conditional characteristic function $$\phi_\mathcal{F}(t) := \mathbb{E}[\exp(itX) \mid \mathcal{F}]$$ it holds that $$\phi_\mathcal{F}(t) = \phi(t) \quad a.s. \qquad (2)$$ and show that $X$ has distribution $\nu$ . I know that two RVs have the same distribution iff they have the same characteristic function, but I do not see how to use that here. Could you please give me a hint? Edit: Following Stratis Markou's suggestion I took the conditional expectation of $(2)$ : \begin{align*} \mathbb{E}[\phi_\mathcal{F}(t) \mid \mathcal{F}] &= \mathbb{E}[\phi(t) \mid \mathcal{F}] \\ \Longleftrightarrow \mathbb{E}[\mathbb{E}[\exp(itX) \vert \mathcal{F}] \mid \mathcal{F}] &= \mathbb{E}[\mathbb{E}[\exp(itX)] \mid \mathcal{F}] \\ \end{align*} In class we have covered the tower law/law of total expectation, but when I use this here I only get $\mathbb{E}[\mathbb{E}[\exp(itX) \mid \ma
|
Would it help if you took expectations w.r.t. to the conditioning variable on both sides of the second equation and then applied the law of total expectation ? P.S. I'm a little unsure of what you mean by having a $\sigma$ -algebra in the conditioning side in your first equation.
|
|probability-theory|characteristic-functions|
| 1
|
Determine the internally tangent ellipse inside a given parallelogram given the direction of the major/minor axes
|
Question: You're given the parallelogram $ABCD$ shown below, with vertices at $A(2,3)$ , $B(12, 9)$ , $C(14, 17)$ , $D(4,11)$ . And you want to determine the ellipse that is tangent to all four sides of the parallelogram, such that its major or minor axis is at an angle $+30^\circ$ with positive $x$ axis. To determine the ellipse, find its algebraic equation, or find its major and minor semi-axes, and their inclination angle from the positive $x$ axis. Remarks: From symmetry, it follows that the center of the ellipse is the center of the parallelogram. The gradient of the ellipse (the normal vector) is pointing in the outward direction at the points of contact with the sides of the parallelogram.
|
I'll propose instead a ruler-and-compass construction, because it's fairly easy. Let then $ABCD$ be the given parallelogram, $O$ its center, and $OE$ the given major axis, intersecting side $BC$ at $E$ (see figure below). Construct the bisector of $\angle BCD$ , intersecting the major axis at $H$ , and line $CI\perp CH$ , intersecting the major axis at $I$ . Draw circle $HCI$ and construct from $O$ a tangent $OK$ to this circle. Construct then on the major axis points $M$ and $N$ such that $OM=ON=OK$ . Points $M$ and $N$ are the foci of the ellipse. To find one of the contact points we can then construct $M'$ reflecting $M$ about line $CD$ : the intersection $P$ between $M'N$ and $CD$ is a contact point. Having the foci and a point, the ellipse is completely determined and it's easy to find its vertices, if wanted. I have no time now to explain why this construction works, I'll probably add something later on.
|
|geometry|conic-sections|
| 1
|
Metric properties of graphs of lie groups homomorphisms
|
Graphs of homomorphisms of $(\mathbb{R}^{n},+)$ are minimal submanifolds. The same holds for $(S^{1},\times)$ . Are there generalizations of these statements?
|
Let $G$ be a Lie group equipped with a biinvariant semi-Riemannian metric. (For instance, take any compact Lie group with a biinvariant Riemannian metric.) Then nonconstant geodesics in $G$ are $G$ -translates of 1-parameter subgroups of $G$ . Accordingly, Lie subgroups of $G$ are totally geodesic. Now take the direct product of two such metrized groups $G\times H$ with the product semi-Riemannian metric. Graphs of continuous homomorphisms $G\to H$ are totally geodesic, hence, minimal, submanifolds of this product.
|
|differential-geometry|lie-groups|group-homomorphism|
| 1
|
$Cov[X_m, X_n] = \mathbb{E}[(X_m- \mathbb{E}(X_m)) (X_n- \mathbb{E}(X_n))]$ for $S_n := \sum_{i=1}^n X_i$ a martingale
|
Let $(X_n)_n$ be a sequence of square-integrable RVs and let the filtration $\mathcal{F}$ be given by $\mathcal{F}_n = \sigma (X_1, \ldots, X_n)$ . Suppose that $S_n := \sum_{i=1}^n X_i$ is an $\mathcal{F}$ -martingale. Show that $$Cov[X_m, X_n] = \mathbb{E}[X_m X_n]$$ I know that by definition we have $$Cov[X_m, X_n] = \mathbb{E}[(X_m- \mathbb{E}[X_m]) (X_n- \mathbb{E}[X_n])],$$ so I guess we should argue that $\mathbb{E}[X_m] = 0$ and $\mathbb{E}[X_n] = 0$ , but I am not sure whether this is actually true. Could you please give me a hint?
|
It is true. The fact that $(S_n)$ is a martingale implies that $ES_{n+1}=E[E(S_{n+1}\mid \mathcal{F}_n)]=ES_n$ , so $EX_{n+1}=E(S_{n+1}-S_n)=0$ for all $n$ .
|
|probability|martingales|
| 1
|
Solve $x^x-5x+6=0$ using Lambert W function.
|
How do I solve $x^x - 5x + 6 = 0$ using the Lambert W function? EDIT: I solved the equations $2^x - 5x + 6 = 0$ and $3^x - 4x - 15 = 0$ using the Lambert W function, but I was not able to solve the equation $x^x - 5x + 6 = 0$ . How and which method can be used to solve this equation without plotting it graphically? I am just preparing for my examination; I got this question after solving some Lambert W function equations and I don't know much about that.
|
Using Lagrange reversion : $$x=a+bx^x\implies x=a+\sum_{n=1}^\infty\frac{b^n}{n!}\frac{d^{n-1}}{da^{n-1}}a^{na}$$ Now use a Stirling number of the first kind formula, possible to derive from this pattern $$\frac{d^n}{dx^n}f(\ln(x))=x^{-n}\sum_{k=1}^n S_n^{(k)}f^{(k)}(\ln(x)):\frac{d^{n-1}}{da^{n-1}}a^{na}=a^{-n}\sum_{k=1}^{n-1}S_{n-1}^{(k)}\left.\frac{d^k}{dt^k}e^{n t e^t}\right|_{\ln(a)},n>1$$ Also, the $n=1$ term is extracted out. Finally, use $e^x$ ’s Maclaurin series and general Leibniz rule $$\left.\frac{d^k}{dt^k}e^{n t e^t}\right|_{\ln(a)}=\sum_{m=0}^\infty\frac{n^m}{m!}\left.\frac{d^k}{dt^k}t^me^{mt}\right|_{\ln(a)}=\sum_{j=0}^k\binom kj\left.\frac{d^j}{dt^j}t^m\right|_{\ln(a)} \left.\frac{d^{k-j}}{dt^{k-j}}e^{tm}\right|_{\ln(a)}$$ summing over $j$ gives a Tricomi confluent hypergeometric function : $$x=a+bx^x\implies x=a+ba^a+\sum_{n=2}^\infty\sum_{m=1}^\infty\sum_{k=1}^{n-1}\frac{(-1)^kb^nn^m\ln^{m-k}(a)a^{m-n+1}S_{n-1}^{(k)}}{m!n!}\operatorname U(-k,m-k+1,-mt)$$ shown here:
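The series above is mostly of theoretical interest; in practice the roots are found numerically. A hedged sketch (pure Python bisection; the brackets below were chosen by inspecting signs of $f$) locates the two positive roots, one of which is exactly $x=2$ since $2^2 = 5\cdot 2 - 6$:

```python
def f(x):
    return x ** x - 5 * x + 6

def bisect(lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r1 = bisect(1.5, 1.8)   # f(1.5) > 0, f(1.8) < 0
r2 = bisect(1.9, 2.4)   # f(1.9) < 0, f(2.4) > 0
assert abs(f(r1)) < 1e-9
assert abs(r2 - 2.0) < 1e-9   # x = 2 is an exact root: 2^2 - 10 + 6 = 0
```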
|
|logarithms|lambert-w|
| 0
|
Let $f\colon H \to L(H,X)$ strongly continuous ($X,H$ Hilbert spaces). Is the adjoint function $f^*\colon H \to L(X,H)$ strongly continuous?
|
Let $X,H$ be Hilbert spaces and let $f\colon H \to L(H,X)$ , where $L(H,X)$ is the space of linear bounded operators. In this case it is natural to define the adjoint function $f^*\colon H \to L(X,H)(=L(X^*,H^*))$ by $$f^* (x)=[f(x)]^* \quad \forall x \in H,$$ where $*$ denotes the adjoint of the operator $f(x) \in L(H,X).$ Clearly if $f$ is continuous in the operator topology, we have that $f^*$ is continuous in the operator topology, i.e. as $|x- x_0|_H\to 0$ $$\|f^*(x)-f^*(x_0)\|_{L(X,H)}=\|f(x)-f(x_0)\|_{L(H,X)} \to 0.$$ However, assume that $f$ is strongly continuous, i.e. $$\lim_{|x-x_0|_H\to 0}|f(x)\eta-f(x_0)\eta|_X=0 \quad \forall \eta \in H. $$ Is $f^*$ then strongly continuous in this case? I.e. we should have $$\lim_{|x-x_0|_H\to 0}|f^*(x)y-f^*(x_0)y|_H=0 \quad \forall y \in X$$
|
No. Let $H = X = l^2(\mathbb{N})$ and $v$ be the adjoint of the unilateral shift. We define a function $g: \mathbb{R} \to L(H)$ by $$g(t) = \begin{cases} 0 &, \, \mathrm{if} \, t \leq 0\\ \frac{\frac{1}{n}-t}{\frac{1}{n}-\frac{1}{n+1}}v^{n+1}+\frac{t-\frac{1}{n+1}}{\frac{1}{n}-\frac{1}{n+1}}v^n &, \, \mathrm{if} \, \frac{1}{n+1} \leq t < \frac{1}{n}\\ v &, \, \mathrm{if} \, t \geq 1 \end{cases}$$ Since $v^n \to 0$ strongly as $n \to \infty$ , it is easy to verify that $g$ is strongly continuous. Now simply let $f(h) = g(\mathrm{Re}\langle e_1, h \rangle)$ . Clearly $f$ is still strongly continuous. But $f^\ast$ is not strongly continuous. Indeed, $\frac{1}{n}e_1 \to 0$ , but $f^\ast(\frac{1}{n}e_1) = g(\frac{1}{n})^\ast = (v^n)^\ast$ , which does not converge strongly (to $f^\ast(0) = 0$ or any other operator).
|
|functional-analysis|analysis|operator-theory|hilbert-spaces|banach-spaces|
| 1
|
How to find the original matrix given its basis for column & null space?
|
I was given this problem: Find a matrix $A$ such that $C(A)$ is spanned by $(1, 2, -1, 3), (2, 1, 1, 1), (3, 1, -1 , 1)$ and $N(A)$ is spanned by $(-2, -1, 1, 0, 0), (-3, 2, 0, -2, 1)$ . I have no idea where to start with this. I know that the column space basis vectors must be in the original, $A$ , but I have no clue how I would derive the remaining two dependent columns for the null space. I also know that $A$ should be $4 \times 5$ with rank $3$ , but what trips me up is where to place the independent columns in this matrix (a.k.a. the $C(A)$ basis vectors). Thanks
|
Think about the linear transformation $\mathbb{R}^n \to \mathbb{R}^m$ defined by $\mathbf{x} \mapsto A\mathbf{x}$ . As you've observed, based on the size of the vectors in the column space and null space, $m = 4$ and $n = 5$ . Moreover, since the column space is $3$ -dimensional, the matrix has rank $3$ , so it has $3$ pivot columns. Equivalently, since the null space is $2$ -dimensional, the matrix has corank $2$ , i.e., it has $2$ other columns corresponding to free variables. We can determine $A$ explicitly in two steps, first using the null space then the column space. Step 1: Null space An arbitrary vector in the null space is a linear combination of the $2$ given null vectors, so it looks like $$ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = s\begin{pmatrix} -2 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t\begin{pmatrix} -3 \\ 2 \\ 0 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} -2s - 3t \\ -s + 2t \\ s \\ -2t \\ t \end{pmatrix} $$ This is equivalent to the system of equations $x_1 = -2x_3 - 3x_5$ , $x_2 = -x_3 + 2x_5$ , $x_4 = -2x_5$ , i.e. the rows of a rank- $3$ RREF whose null space is exactly the given one.
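One concrete way to finish the construction (a sketch of my own, assuming we assemble $A = CR$, with the given column-space basis as the columns of $C$ and the null-space relations from the parametrization encoded in a rank-$3$ matrix $R$):

```python
import numpy as np

# Column-space basis as the columns of C (4x3).
C = np.array([[ 1,  2,  3],
              [ 2,  1,  1],
              [-1,  1, -1],
              [ 3,  1,  1]])

# From the null-space parametrization (free variables x3 = s, x5 = t):
#   x1 = -2s - 3t,  x2 = -s + 2t,  x4 = -2t
# so a rank-3 RREF with exactly that null space is:
R = np.array([[1, 0, 2, 0,  3],
              [0, 1, 1, 0, -2],
              [0, 0, 0, 1,  2]])

A = C @ R  # 4x5, rank 3; C(A) = column space of C, N(A) = null space of R

n1 = np.array([-2, -1, 1, 0, 0])
n2 = np.array([-3,  2, 0, -2, 1])
```

Since $R$ has full row rank, the column space of $A = CR$ is the column space of $C$, and $A\mathbf{x} = 0$ exactly when $R\mathbf{x} = 0$, which answers both requirements at once.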
|
|linear-algebra|matrices|
| 1
|
Continuous discounted cash flow generation
|
Problem statement The problem is to find the limit of the following sequence: \begin{equation} D(r) = \lim_{n \to \infty} \frac{\sum_{i=1}^{n} \frac{1}{(1+\frac{r}{n})^i}}{n}. \end{equation} Background Consider a hypothetical example where we have an investment that costs us 50 dollars and each year generates 100 dollars for the next three years. The discount rate is 5%. Calculate the Net Present Value (NPV) of the investment. The general formula for the NPV is: \begin{equation} NPV = -I + \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t}, \end{equation} where: $NPV$ is Net Present Value, $I$ is Initial Investment, $CF_t$ is Cash Flow at time $t$ , $r$ is discount rate, $T$ is number of years. In our case, the NPV will look like this: $$ NPV = -50 + \frac{100}{(1+0.05)^1} + \frac{100}{(1+0.05)^2} + \frac{100}{(1+0.05)^3} \approx 222.325. $$ The investment is made in year zero, and the cash flow is generated at the end of the year. So, graphically, the situation looks like this: [figure: Model 1]. But this is not the model I want: I would like the cash flow to be generated continuously throughout the year, which leads to the limit $D(r)$ above.
|
I think you are mixing some concepts. Your formula does not approximate continuously compounded interest. The correct formula is $$ FV = \lim_{n \to \infty} I \bigg(1+\frac{r}{n}\bigg)^n = Ie^r $$ (for the NPV just invert this formula). The idea behind this is that you invest the sum $I$ for $n$ periods (each year for $t$ years - note that here we are also assuming $t=1$ ). This means we have something like this $$ I \bigg(1+\frac{r}{n}\bigg)\cdots \bigg(1+\frac{r}{n}\bigg) $$ where we have $n$ of those terms. If we then let the number of compounding periods tend to infinity we have the classical approximation of continuously compounded interest. Your $D(r)$ is not equivalent. We can however compute the limit by remembering that $$ S := \sum_{i=1}^{n}z^i = \frac{z-z^{n+1}}{1-z} = \frac{z(1-z^{n})}{1-z} $$ where in our case $z = 1/(1+r/n)$ . Hence, after some algebra, we end up with $$ D(r) = \frac{S}{n} = \frac{1}{r}\bigg[1 - \bigg( \frac{1}{1+r/n}\bigg)^n \bigg] \to \frac{1}{r}\big(1-e^{-r}\big) \quad \text{as } n \to \infty. $$
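A quick numerical sanity check of this closed form (my own sketch; the function names are mine):

```python
import math

def D_partial(r, n):
    """(1/n) * sum_{i=1}^{n} 1/(1+r/n)^i  -- the sequence from the question."""
    return sum((1 + r / n) ** -i for i in range(1, n + 1)) / n

r = 0.05
approx = D_partial(r, 100_000)          # large but finite n
limit = (1 - math.exp(-r)) / r          # the closed-form limit (1 - e^{-r})/r
```

For $r = 0.05$ the limit is about $0.9754$, and the finite-$n$ value agrees to several decimal places already at $n = 10^5$.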
|
|sequences-and-series|finance|
| 1
|
Diagonals dividing a hexagon into regions of the same area.
|
Say a diagonal of a convex polygon with an even number of vertices is a line segment connecting opposite vertices. I've found the following problem in an old exercise list a teacher gave me: If each of the three diagonals of a convex hexagon divides it into regions of the same area, show that the three diagonals meet at the same point. In the image below I marked the areas determined by the diagonals. My first impulse was to try to show that $P = 0$ . For this I set up the following system: $$\begin{cases} T/2 &= A+B+Y\\ T/2 &= B+C+Z\\ T/2 &= C+A+X\\ T/2 &= X+Y+A+P\\ T/2 &= Y+Z+B+P\\ T/2 &= Z+X+C+P\\ \end{cases}$$ Where $T = A+B+C+X+Y+Z+P$ is the hexagon's total area. The system may be simplified to $$\begin{cases} A &= Z+P\\ B &= X+P\\ C &= Y+P \end{cases}$$ but I don't see how to proceed from that point. Also, I wonder if the same statement is true by replacing the hexagon with a $2n$ -gon. Of course, in this case, writing a system with the areas would become unfeasible, so a smarter approach would be needed.
|
This is only true for hexagons. There's a way to make counterexamples for $n \geq 4$ and here's a geogebra applet showing a counterexample for the case $n=4$ . Now, I present how to make such counterexamples. Let $n \geq 4$ and $P = A_1 \dots A_{2n}$ be our polygon. Also, to simplify notation, think of $P$ as a sequence of points mod $2n$ so that by $A_{2n+1}$ I mean $A_1$ . Definition 1: We say $P$ is bissective if every segment $A_iA_{i+n}$ splits it into two polygons of equal area. Lemma 2: If $P$ is bissective, then the segments $A_i A_{n+i+1}$ and $A_{i+1} A_{n+i}$ are parallel for all $i$ . Proof: It suffices to prove it for $i=1$ . Let $H$ be the intersection of the diagonals $A_1 A_{n+1}$ and $A_2 A_{n+2}$ . Notice that when we draw them, $P$ gets divided into four polygons. Let $S_1$ , $S_2$ , $S_3$ and $S_4$ be the areas of $A_1 A_2 H$ , $A_{n+1} A_{n+2} H$ , $A_2 A_3 \dots A_{n+1} H$ and $A_{n+2} A_{n+3} \dots A_{2n} A_1 H$ respectively. Since $P$ is bissective, $S_1+S_3 = S_2 + S_4$ and $S_1 + S_4 = S_2 + S_3$ ; subtracting, $S_1 = S_2$ , i.e. $HA_1 \cdot HA_2 = HA_{n+1} \cdot HA_{n+2}$ , so the triangles $H A_1 A_{n+2}$ and $H A_{n+1} A_2$ are similar (equal vertical angles, proportional sides), which gives the claimed parallelism.
|
|geometry|contest-math|area|
| 0
|
How do repeated roots of the discriminant correspond to the order of repeated roots in the original polynomial?
|
Background/context: If $S \subset \mathbb{CP}^1 \times \mathbb{CP}^1$ is a smooth curve with bidegree $(d_1, d_2)$ , we know that its genus is $(d_1 - 1)(d_2 - 1)$ for example by the adjunction formula. Alternatively, we can attempt to compute the genus via the Riemann-Hurwitz formula as follows: Write $S = \{P(z,w) = 0\}$ where $z, w$ are in $\mathbb{CP}^1$ . The projection map $p$ onto the first factor is a $d_2$ -sheeted branched cover. The branch points $z_i$ occur exactly when the polynomial $P(z_i, w)$ has repeated roots (when considered as a polynomial in $w$ , treating $z_i$ as coefficients). These repeated roots are detected by the discriminant of $P(z_i, w)$ , which has degree $2d_2 - 2$ in the "coefficients". But the coefficients are homogeneous of degree $d_1$ in $z$ , so the discriminant is a degree $d_1(2d_2-2)$ polynomial in $z$ . Computing $\chi(S) = 2d_2 - d_1(2d_2 - 2)$ , we actually get exactly the correct genus for $S$ ! This means $d_1(2d_2 - 2)$ precisely counts the total ramification $\sum_p (e_p - 1)$ of the projection.
|
After getting no engagement here, I cross-posted the question to Math Overflow. I received some answers there, and will accordingly close this post. See https://mathoverflow.net/questions/466758/what-relationship-is-there-between-repeated-roots-of-discriminants-and-orders-of
|
|algebraic-geometry|polynomials|algebraic-curves|covering-spaces|discriminant|
| 1
|
Show that if $f$ is right continuous then $f$ is measurable
|
I'm trying to solve the following problem: Show that if the function $f: \mathbb{R} \to \mathbb{R}$ is right continuous then $f$ is measurable. (A function $f$ is measurable if $f^{-1}(E) \in \mathcal{B}(\mathbb{R})$ for each $E \in \mathcal{B}(\mathbb{R})$ , which is equivalent to saying that $f^{-1}(\alpha, \infty) \in \mathcal{B}(\mathbb{R})$ or $f^{-1}((-\infty, \alpha]) \in \mathcal{B}(\mathbb{R})$ , $\forall \alpha \in \mathbb{R}$ , where $\mathcal{B}(\mathbb{R})$ represents the Borel $\sigma$ -algebra.) I know that if $f$ is continuous then it is measurable since $(\alpha, \infty)$ is an open set and hence $f^{-1} (\alpha, \infty)$ would be open and therefore measurable with respect to the Borel $\sigma$ -algebra, but I'm not sure how I could construct a similar argument with right continuity.
|
Consider $f_n(x):=\sum_{k\in\Bbb Z} f(k/n)\cdot 1_{((k-1)/n,k/n]}(x)$ , $x \in \Bbb R$ . Each $f_n$ is Borel measurable, being a step function. And $\lim_nf_n(x)=f(x)$ for all $x\in\Bbb R$ because $f$ is right continuous. (To see this, fix $x$ . Given $\epsilon>0$ choose $\delta>0$ so small that $|f(t)-f(x)|<\epsilon$ if $x\le t<x+\delta$ . Now choose $n$ so large that $1/n<\delta$ . There is a unique integer $k$ such that $(k-1)/n<x\le k/n$ , and for that $n$ we have $x\le k/n$ and $|x-k/n|\le 1/n<\delta$ , so $|f(x)-f(k/n)|<\epsilon$ . It follows that $|f_n(x)-f(x)|<\epsilon$ .)
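A small numerical illustration of this approximation (my own sketch, using the right-continuous floor function as a test case; the names `f` and `f_n` mirror the answer's notation):

```python
import math

def f(x):
    """A right-continuous test function (the floor function)."""
    return math.floor(x)

def f_n(x, n):
    """The step-function approximation from the answer:
    f_n(x) = f(k/n) on the interval ((k-1)/n, k/n]."""
    k = math.ceil(n * x)      # the unique k with (k-1)/n < x <= k/n
    return f(k / n)

# Right continuity of f makes f_n(x) -> f(x) pointwise as n -> infinity,
# since k/n approaches x from the right.
```

For example, at $x = 1/3$ the approximation evaluates $f$ slightly to the right of $x$, and right continuity guarantees the error vanishes as $n$ grows.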
|
|real-analysis|measure-theory|lebesgue-measure|borel-measures|
| 0
|
Compactly supported smooth vector field on a smooth manifold is complete
|
I have problems visualizing intuitively and understanding the proof of theorem 9.16 in John Lee's "Introduction to Smooth Manifolds", 2nd ed. The proof starts "Suppose V is a compactly supported vector field on a smooth manifold M, and let $K= supp \, V$ . For each $p \in K$ , there is a neighborhood $U_p$ of p and a positive number $\epsilon_p$ such that the flow of V is defined at least on $ (-\epsilon_p,\epsilon_p) \times U_p$ . ..." Does the author mean a neighborhood $U_p \subseteq K$ or a neighborhood $U_p \subseteq M$ ? I cannot figure out, why the second sentence is true. (The rest of the proof is clear to me.) Is there any geometric intuition possible in understanding why this theorem must be true? It would be particularly interesting because of the following corollary stating that on a compact smooth manifold every smooth vector field is complete. Thanks for any help!
|
The second sentence is true because of Lemma 9.15 (Uniform Time Lemma) in Lee. The proof of this lemma is pretty intuitive. If integral curves exist for time $(-\varepsilon,\varepsilon)$ at every point, then by appropriately translating the integral curves, you automatically get existence for all time. Now, the uniform time lemma says that if a vector field fails to be complete, then the domains of definition of the integral curves must taper off to $0$ (e.g. the area below the graph of $y=1/x$ ). This cannot happen on a compact manifold.
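To see concretely how existence domains can taper off on a noncompact manifold, here is a classical example (my own illustration, not from Lee's proof): the vector field $V(x) = x^2$ on $\mathbb{R}$, whose integral curves blow up in finite time.

```python
# The incomplete vector field V(x) = x^2 on the noncompact manifold R.
# The integral curve through x0 > 0 is x(t) = x0 / (1 - x0*t), which blows
# up at t = 1/x0 -- so the maximal existence times taper to 0 as x0 grows,
# exactly the behavior the Uniform Time Lemma rules out on a compact manifold.

def blowup_time(x0):
    """Maximal forward existence time of x' = x^2, x(0) = x0 > 0."""
    return 1.0 / x0

def flow(t, x0):
    """Closed-form integral curve, valid only for t < 1/x0."""
    return x0 / (1.0 - x0 * t)

# The existence times shrink to zero: no uniform epsilon can work.
times = [blowup_time(x0) for x0 in (1, 10, 100, 1000)]
```

Starting points farther out along $\mathbb{R}$ have ever-shorter lifetimes, so no single $\varepsilon$ works for all of them; on a compact manifold (or a compact support $K$) this failure mode is impossible.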
|
|differential-geometry|manifolds|smooth-manifolds|vector-fields|
| 0
|
How do we do double integration with variable bounds?
|
If I apply the change of variable method to calculate a double integral, and I pose $x = u$ and $y = v-u$ , with $x$ and $y$ strictly positive, then we will find that the domain is $u > 0$ and $v - u > 0$ , i.e. $v > u$ . Should I then integrate with $0 < u < v$ (inner integral over $u$ ) or with $v > u$ (inner integral over $v$ )? Because if I choose the latter, I will obtain a result related to $u$ . As far as I know, the final result must be a fixed number even though the inner limits are variable. So, what is the correct choice and why?
|
The variable of integration does not "exist" outside of the integral. When you perform a change of variables in a double integral, you set the bounds of integration so that the overall region of integration is equivalent to the original integral. For example, you can treat the integral $\int_A F(x, y)\ dx\ dy$ over the area $A \subset \mathbb{R}^2$ as a double integral $\int_{A_y} \left(\int_{A_x(y)} F(x, y)\ dx\right)\ dy$ where the inner integral, with respect to $x$ , happens over a range of $x$ values that depend on $y$ . Once you do that inner integral, what you have left is a function of $y$ , i.e. $G(y) = \int_{A_x(y)} F(x, y)\ dx$ and then the outer integral becomes $\int_{A_y} G(y)\ dy$ , and the final result doesn't depend on either variable. When you transform $(x, y) \rightarrow (u, v)$ , you will: Change the function being integrated, i.e. $\tilde{F}(u, v) = F(x(u, v), y(u, v))$ . Introduce the Jacobian to change the variables, i.e. you'll make the substitution $dx\ dy = \left|\det\frac{\partial(x,y)}{\partial(u,v)}\right|\, du\ dv$ .
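Here is a symbolic sketch of the whole procedure (my own example; the integrand $e^{-x-y}$ and the use of SymPy are assumptions, not from the question). With $x = u$, $y = v - u$ the Jacobian is $1$, the region $x, y > 0$ becomes $0 < u < v$, and both parametrizations give the same number.

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y', positive=True)

# Original integral of a sample integrand exp(-x-y) over x > 0, y > 0.
original = sp.integrate(sp.exp(-x - y), (x, 0, sp.oo), (y, 0, sp.oo))

# Jacobian of (x, y) = (u, v - u) with respect to (u, v):
J = sp.Matrix([[sp.diff(u, u),     sp.diff(u, v)],
               [sp.diff(v - u, u), sp.diff(v - u, v)]]).det()  # equals 1

# Inner integral over u in (0, v): its result G(v) still depends on v,
# and the outer integral over v then removes that dependence.
transformed = sp.integrate(sp.exp(-v) * sp.Abs(J), (u, 0, v), (v, 0, sp.oo))
```

The inner $u$-integral leaves $G(v) = v e^{-v}$, which depends on $v$ as expected; only after the outer integral does the variable disappear, and both sides evaluate to $1$.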
|
|probability|
| 1
|
Non-cyclic subgroup of order 4 in non-dihedral group
|
A group $G$ has sixteen elements: $$\{e, r, r^2, \dots , r^7, s, rs, r^2s, \dots , r^7s\},$$ where $r$ and $s$ satisfy the relations $r^8 = e, s^2 = e, sr = r^3s$ . (Note that $G$ is not a dihedral group.) a) Make a list of the cyclic subgroups of $G$ . Make sure to state the elements of each cyclic subgroup, and do not list any subgroup more than once. b) The group $G$ has two subgroups of order $4$ which are not cyclic. State the elements of each of these subgroups. I have managed to answer part a) and for part b) I have found the klein group, $\{e, r^4, s, r^4s\}$ but I can't find the second group of order $4$ , could someone please help with this? Thanks for your answers!
|
Hint: Find all elements of order two. Take two of them, $a,b$ . Then if $ab=ba$ , then $\{ e, a, b, ab\}$ is one such subgroup.
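The hint can be carried out by brute force (a sketch of my own; encoding $r^a s^b$ as a pair `(a, b)` is an assumption of the sketch, using the relation $s r^c = r^{3c} s$ derived from $sr = r^3 s$):

```python
from itertools import product

# Elements of G as pairs (a, b) meaning r^a s^b, with r^8 = e, s^2 = e,
# s r = r^3 s.  Then (r^a s^b)(r^c s^d) = r^(a + 3^b c mod 8) s^(b+d mod 2).
def mul(X, Y):
    (a, b), (c, d) = X, Y
    return ((a + pow(3, b) * c) % 8, (b + d) % 2)

E = (0, 0)
G = [(a, b) for a in range(8) for b in range(2)]

def order(X):
    p, k = X, 1
    while p != E:
        p, k = mul(p, X), k + 1
    return k

involutions = [X for X in G if order(X) == 2]

# Non-cyclic subgroups of order 4 are Klein groups {e, a, b, ab}
# generated by two distinct commuting involutions.
klein = set()
for a, b in product(involutions, repeat=2):
    if a != b and mul(a, b) == mul(b, a):
        klein.add(frozenset({E, a, b, mul(a, b)}))
```

Running this finds exactly two such subgroups: $\{e, r^4, s, r^4s\}$ (the one already found) and $\{e, r^4, r^2s, r^6s\}$.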
|
|group-theory|cyclic-groups|combinatorial-group-theory|
| 0
|
Density of Primes in different sets
|
I am examining the density of primes in sets other than the naturals, e.g. the density of Mersenne primes. From the prime number theorem I know that in the naturals we have $$ \frac{\pi(n)}{\frac{n}{\log(n)}} \rightarrow 1. $$ But what if I have a set $M^{\leq n} := \left\{m\in\mathbb N: m\leq n, m\text{ is a Mersenne number}\right\}$ , which is the set of all Mersenne numbers less than or equal to $n$ , and the function $ \pi_M(n) = \mid\left\{p\leq n, p\text{ is a Mersenne prime}\right\}\mid, $ which counts the Mersenne primes less than or equal to $n$ ? What would be the correct way to examine their density? Would I look for the limit of $$ \frac{\pi_M(n)}{\frac{\mid M^{\leq n}\mid}{\log(n)}} $$ or rather $$ \frac{\pi_M(n)}{\frac{\mid M^{\leq n}\mid}{\log(\mid M^{\leq n}\mid)}} $$
|
The general heuristic is that "a random integer $n$ has a $1/\log n$ probability of being prime". Therefore the first guess as to an appropriate density of primes within a set $S$ is usually $$ \#\{p\le x,\, p\in S,\, p\text{ prime}\} \approx \sum_{\substack{n\le x \\ n\in S}} \frac1{\log n}. $$ This heuristic should be modified to include any divisibility tendencies of $S$ ; for some sets this barely matters at all, but for others (such as the Mersenne numbers) there are notably different probabilities of being divisible by small primes than for random integers. For the specific case of the Mersenne numbers, there are probably multiple expositions to be found that go through the heuristic in detail (in particular to justify the conjecture that there are infinitely many Mersenne primes).
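To make the counting function $\pi_M$ concrete on a small range, here is a sketch (my own; it uses the classical Lucas-Lehmer test, which the answer does not discuss) that finds all exponents $p \le 61$ for which $M_p = 2^p - 1$ is prime:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for M_p = 2^p - 1, p an odd prime;
    p = 2 is handled separately."""
    if p == 2:
        return True
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s == 0

def is_prime(n):
    """Trial division, sufficient for the small exponents used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Exponents p <= 61 with 2^p - 1 prime; only prime p can give Mersenne primes.
mersenne_exponents = [p for p in range(2, 62) if is_prime(p) and lucas_lehmer(p)]
```

This recovers the nine classical exponents $2, 3, 5, 7, 13, 17, 19, 31, 61$; note how sparse they already are, matching the heuristic sum $\sum 1/\log(2^p-1) \approx \sum 1/(p \log 2)$, which grows only like $\log \log$.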
|
|distribution-of-primes|
| 0
|