title
string | question_body
string | answer_body
string | tags
string | accepted
int64 |
|---|---|---|---|---|
Determine the real numbers $a, b, c, d \in [1,3]$, knowing that the relation $(a + b + c + d)^2 = 3(a^2 + b^2 + c^2 + d^2)$ holds.
|
The question: Determine the real numbers $a, b, c, d \in [1,3]$, knowing that the relation $(a + b + c + d)^2 = 3(a^2 + b^2 + c^2 + d^2)$ holds. My idea: $(a+b+c+d)^2=a^2+b^2+c^2+d^2+2(ab+ac+ad+bc+bd+cd)$ $\Rightarrow 2(ab+ac+ad+bc+bd+cd)=2(a^2 + b^2 + c^2 + d^2) \Rightarrow ab+ac+ad+bc+bd+cd=a^2 + b^2 + c^2 + d^2$. From here I've been trying to get it to a form where $0$ equals the product of some factors, but I didn't get to anything helpful. I hope one of you can help me! Thank you!
|
For $x \in \{a,b,c,d\}$, we have: $$(x-1)(x-3) \le 0 \iff x^2 \le 4x-3 \tag{1}$$ Let us denote $S = a+b+c+d$. From $(1)$ and the assumption, we deduce that $$\begin{align} &S^2 = 3 \sum_{x}x^2 \le 3 \sum_{x}(4x-3) = 12S-36\\ \iff &(S-6)^2\le 0\\ \iff &S = 6 \tag{2} \end{align}$$ So $(a,b,c,d)$ must attain equality in $(1)$, which forces each variable into $\{1,3\}$, and satisfy $(2)$, i.e. $a+b+c+d=6$. Since two variables equal to $3$ would already make the sum at least $8$, the only possibility is that the largest variable equals $3$ and the three others equal $1$.
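As a quick numerical sanity check of this conclusion (script and variable names are mine, not the answerer's), one can brute-force the equation over a rational grid in $[1,3]$, scaled by $4$ to work in integers; only the permutations of $(1,1,1,3)$ show up:

```python
import itertools

def satisfies(a, b, c, d):
    # The given relation (a+b+c+d)^2 = 3(a^2+b^2+c^2+d^2), checked exactly.
    return (a + b + c + d) ** 2 == 3 * (a * a + b * b + c * c + d * d)

# The derived solution: three variables equal to 1, one equal to 3.
assert satisfies(1, 1, 1, 3)

# Brute-force over quarter-step grid points of [1, 3] (values 4/4 .. 12/4,
# kept as integers 4..12 so the check is exact).
hits = set()
for quad in itertools.product(range(4, 13), repeat=4):
    if satisfies(*quad):
        hits.add(tuple(sorted(quad)))
print(hits)  # {(4, 4, 4, 12)}, i.e. (1, 1, 1, 3) after dividing by 4
```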
|
|algebra-precalculus|square-numbers|
| 1
|
$I_n(a,t) = \int_0^\infty \frac{\cos(xt)}{\left(x^2 + a^2\right)^n}\:dx$
|
Spurred on by this, here I'm hoping to resolve the following integral: \begin{equation} I_n(a,t) = \int_0^\infty \frac{\cos(xt)}{\left(x^2 + a^2\right)^n}\:dx \end{equation} where $a,t \in \mathbb{R}^+$ and $n \in \mathbb{N}$. To begin with, we observe that: \begin{equation} I_n(a,t) = \int_0^\infty \frac{\cos(xt)}{\left(a^2\left(\frac{x^2}{a^2} + 1\right)\right)^n}\:dx = \frac{1}{a^{2n}} \int_0^\infty \frac{\cos(xt)}{\left(\left(\frac{x}{a}\right)^2 + 1\right)^n}\:dx \end{equation} Let $u = \frac{x}{a}$: \begin{align} I_n(a,t) &= \frac{1}{a^{2n}} \int_0^\infty \frac{\cos(uat)}{\left(u^2 + 1\right)^n}\cdot a\:du = a^{1 - 2n}\int_0^\infty \frac{\cos(uat)}{\left(u^2 + 1\right)^n}\:du \\ &=a^{1 - 2n}I_n(1, at) \end{align} Thus, we need only resolve the following integral to solve $I_n(a,t)$: \begin{equation} J_n(s) = \int_0^\infty \frac{\cos(su)}{\left(u^2 + 1\right)^n}\:du \end{equation} noting $I_n(a,t) = a^{1-2n}J_n(at)$. Here we will proceed by forming a differential equation for $J_n(s)$
|
Recognize that the integral $I_n(t) = \int_0^\infty \frac{\cos(xt)}{\left(x^2 + a^2\right)^n}\:dx$ satisfies the recursion $$I_{n+1}(t)=\frac1{2n a^2}\left[(2n-1) I_{n}(t)-t I'_{n}(t)\right]$$ which can be evaluated successively as follows \begin{align} & I_1(t)=\int_{0}^{\infty }{\frac{\cos \left( xt \right)}{ {{x}^{2}}+a^2}dx}=\frac{\pi}{2a} {e}^{-at} \\ &I_2 (t)= \frac1{2 a^2}\left[ I_{1}(t)-t I'_{1}(t)\right] =\frac\pi{4a^3}e^{-at}(1+at)\\ &I_3 (t)= \frac1{4 a^2}\left[3 I_{2}(t)-t I'_{2}(t)\right] =\frac\pi{16 a^5}e^{-at}(3+3at + a^2t^2)\\ &I_4 (t)= \frac1{6 a^2}\left[5 I_{3}(t)-t I'_{3}(t)\right] =\frac\pi{96 a^7}e^{-at}(15+15at + 6a^2t^2+a^3t^3)\\ &I_5 (t)= \frac1{8 a^2}\left[7 I_{4}(t)-t I'_{4}(t)\right]=\cdots \end{align}
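As a numerical cross-check of one of these closed forms (my own script; I test $I_2$ at arbitrary values $a=1.3$, $t=0.7$ with a home-rolled Simpson rule so no extra libraries are needed):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, t = 1.3, 0.7

def integrand(x):
    # cos(xt) / (x^2 + a^2)^2, the n = 2 case of the integral.
    return math.cos(x * t) / (x * x + a * a) ** 2

# The integrand decays like 1/x^4, so truncating at 80 loses < 1e-6.
numeric = simpson(integrand, 0.0, 80.0, 160000)
closed = math.pi / (4 * a ** 3) * math.exp(-a * t) * (1 + a * t)
print(abs(numeric - closed))  # agreement to roughly quadrature accuracy
```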
|
|real-analysis|integration|definite-integrals|laplace-transform|recursion|
| 0
|
Which is the error in this application of Möbius inversion formula?
|
In Wikipedia the following generalisation of the Möbius inversion formula is given (and proved): Suppose $F(x)$ and $G(x)$ are complex-valued functions defined on the interval $[1, ∞)$ such that $$G(x)=\sum_{1\le n\le x} F\left(\frac xn\right)$$ then $$F(x)=\sum_{1\le n\le x} \mu(n)G\left(\frac xn\right)$$ Let us consider $F(x) = \frac{2}{x} - 1$ and $G(x) = 1$ , which (to my understanding) fulfil the requirements for the Möbius inversion formula application. It is easy to check that $$\sum_{1\le n\le x} F\left(\frac xn\right) =\sum_{1\le n\le x} \left(\frac{2n}{x} - 1\right) = \frac{2}{x}\sum_{1\le n\le x} n - \sum_{1\le n\le x} 1 = \frac{x^2+x}{x} - x = 1 = G(x)$$ and so, using the Möbius inversion formula, we have that $$\sum_{1\le n\le x} \mu(n) = \frac{2}{x} - 1$$ However, this last expression is clearly false. Where is the error?
|
Your statement that $$\sum_{1\leq n\leq x}n=\frac{x^2+x}2$$ is only true when $x$ is an integer. But you need the equality for all real $x\geq1$.
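A tiny script (mine, for illustration) makes the failure visible: the hypothesis $G(x)=\sum_{1\le n\le x}F(x/n)$ holds at integer $x$ but breaks at non-integer $x$, which is exactly where the inversion formula's requirement is violated:

```python
from math import floor

def F(x):
    return 2.0 / x - 1.0

def G_from_F(x):
    # G(x) = sum over 1 <= n <= x of F(x/n): the hypothesis of the
    # generalized Mobius inversion formula, evaluated directly.
    return sum(F(x / n) for n in range(1, floor(x) + 1))

print(G_from_F(4.0))  # equals 1 at an integer point
print(G_from_F(2.5))  # 0.4, not 1: the hypothesis fails off the integers
```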
|
|mobius-function|mobius-inversion|
| 1
|
What projections of convex bodies can tell us about their volumes
|
Let $K,L$ be two origin-symmetric, convex bodies in $\mathbb{R}^3$ . Let $\xi\in\mathbb{S}^2$ be a vector on the unit sphere in $\mathbb{R}^3$ . Denote $\xi^{\perp}$ the subspace in $\mathbb{R}^3$ orthogonal to $\xi$ . Denote $K\mid_{\xi^{\perp}}$ as the projection of the body $K$ onto the subspace $\xi^{\perp}$ . Same for $L\mid_{\xi^{\perp}}$ . Suppose that for any vector $\xi$ , there exists a rotation $\phi_{\xi}$ of $K\mid_{\xi^{\perp}}$ such that $\phi_{\xi}(K\mid_{\xi^{\perp}})\subset L\mid_{\xi^{\perp}}$ . This rotation is taken in the $2$ -dimensional sense on the plane $\xi^{\perp}$ . Does it follow that the volume of $K$ is less than or equal to the volume of $L$ ?
|
This doesn't answer the question, but can be useful: Let $K, L \subset \mathbb{R}^3$ be convex bodies containing the origin as an interior point such that for every $\xi \in S^2$, the projection $K|_{\xi^{\perp}}$ can be rotated about the origin into $L|_{\xi^{\perp}}$. Then either $K = L$ or $K$ can be obtained by reflecting $L$ about the origin; see here. EDIT: if $K$ is centrally symmetric there is an easier proof of the theorem, outlined in the book Uniqueness Questions in Reconstruction of Multidimensional Objects from Tomography-Type Projection Data (see Theorem 2.1.1). The main idea is to express the perimeter of the projection as an integral equation that contains the width function (i.e. the function that, for any given direction, gives the length of the affine diameter) and to show that this implies that the bodies have the same width in every direction. Maybe it is possible to generalize this concept by using some Gronwall-like result. Moreover, a weaker version is also true
|
|real-analysis|geometry|convex-analysis|
| 0
|
If $\sum \|f_i\|$ converges and $\sum_{i=1}^\infty f_i$ exists in a normed function space, do we get for free that $\|\sum_{i=1}^n f_i - f\| \to 0$?
|
Let $(V,\|-\|)$ be a normed function space, and suppose that $(f_i)_i$ is a sequence of elements of $V$ so that $\sum \|f_i\| < \infty$. Further suppose that this data implies that $f = \sum f_i$, where $\sum_{i=1}^n f_i \to f$ pointwise, is an element of $V$. Then I would think that we can show immediately that $\sum_{i=1}^n f_i \to f$ in norm, as: $$\left\|\sum_{i=1}^n f_i - f \right\| = \left\|\sum_{i=1}^n f_i- \sum_{i=1}^\infty f_i\right\| = \left\|\sum_{i=n+1}^\infty f_i\right\| \leq \sum_{i=n+1}^\infty \|f_i\|$$ which is sensible, as we can show that any tail of $\sum f_i$ is in $V$ as well. However, as $\sum \|f_i\| < \infty$, it follows that the right-hand side goes to zero as $n \to \infty$, thus proving that $\sum_{i=1}^n f_i \to f$ in norm. Here is my question: Have I made a mistake in the above reasoning? The reason I am suspicious is because whenever I have seen an example of this situation in books (e.g., showing $L^p$ is a Banach space), the author shows convergence in norm using some
|
I made a comment but I think this needs expanding, so here is the expanded comment. Theorem. Let $(v_n)$ be a sequence of vectors in a complete normed space such that $\sum\limits_{n=1}^\infty \|v_n\|$ converges. Then there is a $w$ such that $\sum\limits_{i=1}^n v_i \to w$ in the normed space. Proof. Immediately from the triangle inequality, $\left\|\sum\limits_{i=n}^{n+p} v_i\right\| \leq \sum\limits_{i=n}^{n+p} \|v_i\|$, so the sequence of partial sums of $(v_n)$ is fundamental (Cauchy) and therefore converges by completeness. QED By convention, one often writes the vector $w = \sum\limits_{n=1}^\infty v_n.$ However, when the vector space is one of functions, one can also use the notation $\sum\limits_{n=1}^\infty f_n$ to mean the pointwise limit. Thus, a series of functions can converge in two completely different senses: simple (pointwise) convergence and normed convergence. In function spaces, normed convergence and pointwise convergence can differ substantially
|
|functional-analysis|convergence-divergence|normed-spaces|
| 0
|
$\aleph_0^c=2^c$
|
Problem: Prove that $\aleph_0^c=2^c$ My attempt: $\aleph_0^c=\aleph_0^{2^{\aleph_0}}=\aleph_0^{\aleph_1}=\aleph_2$ Then $2^{2^{\aleph_0}}=2^{\aleph_1}=\aleph_2$ , so $\aleph_0^c=2^c$ . But my intuition is telling me I may have made a mistake or an assumption that I can't make. Did I mess up? Or is this a valid proof?
|
There is a theorem that tells us: assume $A, B$ are two sets. If there is an injective function from $A$ to $B$ and an injective function from $B$ to $A$, then the cardinalities of $A$ and $B$ are equal. This theorem (Cantor–Schröder–Bernstein) is not very easy to prove. If you know it, we can argue as follows: I will prove $\mathbb N ^c = 2^c$. The injection from $2^c$ to $\mathbb N ^c$ is easy to construct, so I only give the construction of an injection from $\mathbb N ^c$ to $2^c$. Focus on this function $$f(x) = \tan\left(\pi x - \frac{\pi}2\right)$$ It is a bijective function from $(0,1)$ to $\mathbb R$. We define the map $\mathbb N^{\mathbb R} \rightarrow \mathbb N^{(0,1)}$ by mapping $g: \mathbb R \rightarrow \mathbb N$ to $g \circ f: (0,1) \rightarrow \mathbb N$. You can check that it is a bijection. Hence we now only need to construct an injective map from $\mathbb N^{(0,1)}$ to $2^c$. Now assume $h \in \mathbb N^{(0,1)}$ is a function; we define a map $\mathcal F: \mathbb N^{(0,1)} \rightarrow 2^c$ by mapping $h$ to
|
|real-analysis|infinity|
| 0
|
Dirichlet series and Euler product
|
For a multiplicative function $f$, show that we have \begin{equation}\sum_{n=1}^{\infty}\frac{f(n)}{n^s}=\prod_p\left(\sum_{\nu=0}^{\infty}\frac{f(p^{\nu})}{p^{\nu s}}\right).\end{equation} My solution: Write $n$ as a product of distinct prime powers, i.e. $n=p_1^{\alpha_1}\times\cdots\times p_r^{\alpha_r}$. Then the LHS is the same as \begin{align}\sum_{n=1}^{\infty}\frac{f(n)}{n^s}&=\sum_{n=1}^{\infty}\frac{f(p_1^{\alpha_1}\times\cdots\times p_r^{\alpha_r})}{p_1^{\alpha_1s}\times\cdots\times p_r^{\alpha_rs}}\\&=\sum_{n=1}^{\infty}\left(\frac{f(p_1^{\alpha_1})}{p_1^{\alpha_1s}}\times\cdots\times\frac{f(p_r^{\alpha_r})}{p_r^{\alpha_rs}}\right)\\&=\sum_{n=1}^{\infty}\left(\prod_{i=1}^r\frac{f(p_i^{\alpha_i})}{p_i^{\alpha_is}}\right).\end{align} This is quite close to what we want, but not quite. We need to justify swapping the sum and product (I assume this is by absolute convergence?). Furthermore, the product is taken over the primes $p_i$ for $i=1$ up to $r$, but I believe in the initial problem
|
You want to show for any multiplicative function $f$ , \begin{equation}\sum_{n=1}^{\infty}\frac{f(n)}{n^s}=\prod_p\left(\sum_{\nu=0}^{\infty}\frac{f(p^{\nu})}{p^{\nu s}}\right). \end{equation} Your question about convergence is a good one. As stated, the equality is purely formal , in that it doesn't consider large $f$ . What if $f$ is huge? For example, what if $f(p^k) = p^{k^2}$ (then extended multiplicatively)? Then neither side converges for any $s$ . Thus you either need to assume $f$ is small enough for the Dirichlet series to converge somewhere (in which case it actually converges absolutely somewhere, possibly for a slightly larger $s$ ; you can prove this on your own, or you can look up the topic of the abscissa of convergence and the abscissa of absolute convergence of Dirichlet series), or alternately make the proof for formal power series. My solution: Write $n$ as a product of distinct prime powers, i.e. $n=p_1^{\alpha_1}\times\cdots\times p_r^{\alpha_r}$ . Then the LHS is
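To see the identity numerically (my own script, not part of the answer), take the multiplicative function $f = d$, the number-of-divisors function, so $f(p^\nu)=\nu+1$ and the Dirichlet series is $\zeta(s)^2$; both truncations agree to the size of the discarded tails:

```python
import math

# Left side: truncated Dirichlet series of d(n) at s = 2.
N, s = 20000, 2.0
d = [0] * (N + 1)
for k in range(1, N + 1):          # divisor-count sieve
    for m in range(k, N + 1, k):
        d[m] += 1
lhs = sum(d[n] / n ** s for n in range(1, N + 1))

# Right side: truncated Euler product, prod_p sum_v f(p^v) p^{-v s},
# with f(p^v) = v + 1 for the divisor function.
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])
rhs = 1.0
for p in (i for i, ok in enumerate(sieve) if ok):
    rhs *= sum((v + 1) * p ** (-v * s) for v in range(60))

print(lhs, rhs)  # both close to zeta(2)^2 = (pi^2/6)^2
```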
|
|number-theory|elementary-number-theory|solution-verification|analytic-number-theory|euler-product|
| 1
|
Homotopy between the trivial element of $\pi_1(S_1)$ and a single loop around the circle
|
I am new to this topic of homotopy and the fundamental group. I have just read the proof that the fundamental group of $S_1$ is isomorphic to $\mathbb{Z}$. There's something that has been bugging me since the beginning. Can't I define a homotopy between the constant loop staying at $(1,0)$ and the loop going around the circle counterclockwise as follows? $$H(s,t) = (\cos(2\pi st), \sin(2\pi st))$$ For $t=0$ we get $H(s,0)=(\cos(0),\sin(0))$ for all $s \in [0,1]$, and $H(s,1)=(\cos(2\pi s),\sin(2\pi s))$ for all $s \in [0,1]$. Also, $H(s,t)$ is continuous as a composition of the continuous functions $p(x)=(\cos(2\pi x), \sin(2\pi x))$, with $p:[0,1]\to S_1$, and $g(s,t)=st$, where $g:[0,1]^2 \to [0,1]$. What am I missing here?
|
You're missing that the homotopy isn't supposed to be a homotopy between paths, it's supposed to be a homotopy between loops. Which is to say, we need to have $$ H(0,t)=H(1,t) $$ for all $t\in[0,1]$ (and in some interpretations this value is required to be constant as well).
|
|algebraic-topology|fundamental-groups|
| 0
|
Homotopy between the trivial element of $\pi_1(S_1)$ and a single loop around the circle
|
I am new to this topic of homotopy and the fundamental group. I have just read the proof that the fundamental group of $S_1$ is isomorphic to $\mathbb{Z}$. There's something that has been bugging me since the beginning. Can't I define a homotopy between the constant loop staying at $(1,0)$ and the loop going around the circle counterclockwise as follows? $$H(s,t) = (\cos(2\pi st), \sin(2\pi st))$$ For $t=0$ we get $H(s,0)=(\cos(0),\sin(0))$ for all $s \in [0,1]$, and $H(s,1)=(\cos(2\pi s),\sin(2\pi s))$ for all $s \in [0,1]$. Also, $H(s,t)$ is continuous as a composition of the continuous functions $p(x)=(\cos(2\pi x), \sin(2\pi x))$, with $p:[0,1]\to S_1$, and $g(s,t)=st$, where $g:[0,1]^2 \to [0,1]$. What am I missing here?
|
Welcome to MSE! You have a great question. It's related to a common source of confusion when first learning about the fundamental group and homotopy. You proposed the homotopy $H(s,t) = (\cos(2\pi st), \sin(2\pi st))$, which is indeed continuous, but there is an issue that arises when trying to use it as a homotopy between the constant loop and the loop going around the circle counterclockwise. The problem lies in the fact that your homotopy does not keep the endpoints of the loops fixed. Specifically, at $s=1$ it gives $H(1,t) = (\cos(2\pi t), \sin(2\pi t))$, so the endpoint of the intermediate path moves around the circle as $t$ varies instead of staying at $(1,0)$; the intermediate paths are not even loops based at $(1,0)$. In order for a homotopy between loops to be valid, the endpoints of the loops need to remain fixed throughout the homotopy. In this case, the constant loop is based at $(1,0)$, and the loop going around the circle counterclockwise is based at $(1,0)$ as well
|
|algebraic-topology|fundamental-groups|
| 0
|
Essential spectrum from dimension argument of projection subspace
|
I have the following, possibly trivial, question regarding checking for the essential spectrum: Given a self-adjoint bounded operator $T\in \mathcal{L}(\mathcal{H})$, does $\dim \Big(\text{Im}\big(\chi_{[a-\epsilon,a+\epsilon]}(T) \big) \Big)=\infty$ imply that $\sigma_{ess}(T)\cap [a-\epsilon,a+\epsilon] \neq \emptyset$? Is it true that $\dim \Big(\text{Im}\big(\chi_{[a-\epsilon,a+\epsilon]}(T) \big) \Big)=\infty$ if and only if $\sigma_{ess}(T)\cap [a-\epsilon,a+\epsilon] \neq \emptyset$? I have the following argument, which I am unsure of. I know that for such a $T$ as above, $\lambda \in \sigma_{ess}(T)$ if and only if there exists a sequence $\psi_n\in \mathcal{H}$ such that $\Vert \psi_n\Vert\equiv 1$, $\Vert (T-\lambda I)\psi_n\Vert \to 0$ and $\psi_n$ converge weakly to $0\in \mathcal{H}$. Using an iterated partition of $[a-\epsilon,a+\epsilon]$, I can find a nested sequence of shrinking closed intervals, $I_{n+1}\subseteq I_n \subseteq [a-\epsilon,a+\epsilon]$ and $Leb(I_
|
I think your argument is correct, not much else to say. Another way to see this which is on the same page is to consider the fact that the discrete spectrum is, well, discrete, and thus $E := \sigma_\mathrm{dis}(T) \cap [a - \varepsilon, a + \varepsilon]$ is necessarily finite. Indeed, that intersection remains discrete, yet if it were infinite, then by compactness of $[a - \varepsilon, a + \varepsilon]$ you could find a limit point, which would be absurd. $E$ being finite, and the ranks of the Riesz projectors (the fancy name for the spectral projections) being finite for elements of $E$, you would have $\dim \Big(\text{Im}\big(\chi_{[a-\epsilon,a+\epsilon]}(T) \big) \Big) < \infty$ if there were no elements of the essential spectrum in our interval; thus $\sigma_{\mathrm{ess}}(T)\cap [a-\epsilon,a+\epsilon] \neq \emptyset$ needs to hold.
|
|functional-analysis|solution-verification|spectral-theory|
| 1
|
Convergence and boundedness of double indexed sequence
|
Let $X$ be a Banach space. Assume $(z_{n,m})_{n,m}$ is a sequence in $X$ and $z'\in X$ such that $z_{n,m}\to z_n'$ as $m\to\infty$ for each $n$, and $z_n'\rightarrow z'$ as $n\rightarrow \infty$. Then, why is it that $(z_{n,m})_{n,m}$ is bounded up to a subsequence? Argument: As $m\rightarrow \infty$, $z_{n,m}$ goes to $z_n'$, so for each $n$, $(z_{n,m})_{m}$ is bounded. Since $z_n'\rightarrow z'$, $(z_n')_{n}$ is bounded; that is, there exists an $M>0$ such that $|z_n'|\leq M$ for all $n$. So $|z_{n,m}|\leq |z_{n,m}-z_n'|+|z_n'|\leq M +|z_{n,m}-z_n'|$. Now $|z_{n,m}-z_n'|\rightarrow 0$ as $m\rightarrow \infty$, so $|z_{n,m}-z_n'|$ is bounded by a constant $C_n$ for each $n$. Why should $C_n$ be bounded above by something independent of $n$?
|
This isn't true. Take $X = \mathbb{R}$, $z_{n,1} = n$, and $z_{n,m} = 1$ for $m > 1$. Then for each $n$, $\lim_{m \rightarrow \infty} z_{n,m} = 1 = z_n'$, and $z_n'$ clearly converges to $1$, but $(z_{n,m})_{n,m}$ is unbounded. If we want to show the existence of a bounded subsequence, note that for all $n$ there exists $M = M_n$ such that $|z_{n,m}-z_n'| < 1$ for all $m \ge M_n$. Now we have that $|z_{n,M_n}| \le |z_n'| + |z_{n,M_n} - z_n'| < M + 1 =: C$, so the subsequence $(z_{n,M_n})_n$ is bounded by the constant $C$.
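The counterexample and the bounded-subsequence trick can be seen concretely (illustration of mine; in $\mathbb{R}$, with $M_n = 2$ working uniformly):

```python
# The answer's counterexample: z_{n,1} = n, z_{n,m} = 1 for m > 1, in X = R.
def z(n, m):
    return float(n) if m == 1 else 1.0

# Each row converges in m to z_n' = 1, and z_n' -> z' = 1 in n,
# yet the doubly indexed family is unbounded along the column m = 1:
print(max(z(n, 1) for n in range(1, 1000)))  # 999.0, grows without bound

# Bounded subsequence: pick M_n with |z_{n,M_n} - z_n'| < 1; here M_n = 2.
picked = [z(n, 2) for n in range(1, 1000)]
assert max(abs(v - 1.0) for v in picked) < 1.0  # bounded by M + 1
```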
|
|real-analysis|functional-analysis|
| 0
|
Stuck on the inverse function
|
This should be a piece of cake but I really have no idea about how to do it. I have to calculate $f^{-1}(2)$ where $f(x) = \ln(x) + 2x^5$. I proved the function is invertible (that is: injective and surjective), but I am blocked by the request for $f^{-1}(2)$: I cannot find a way to calculate $f^{-1}$. Any help? I also have to find the derivative of the inverse function, but with that I will be fine.
|
We don't need to solve for $f^{-1}(x)$. In fact, doing so looks like it would be yucky. So we only need to solve $f(x)=2$, which will imply $f^{-1}(2)=x$. In your comment you asked how to find $f^{-1}(a)$ for a general $a$. The best way to do this is to solve $f(x)=a$, from which it follows that $f^{-1}(a)=x$.
|
|calculus|analysis|inverse-function|
| 0
|
Stuck on the inverse function
|
This should be a piece of cake but I really have no idea about how to do it. I have to calculate $f^{-1}(2)$ where $f(x) = \ln(x) + 2x^5$. I proved the function is invertible (that is: injective and surjective), but I am blocked by the request for $f^{-1}(2)$: I cannot find a way to calculate $f^{-1}$. Any help? I also have to find the derivative of the inverse function, but with that I will be fine.
|
The thing not to do here is to try to find $f^{-1}(x)$ in terms of $x$ and then plug in $x=2$ . Instead, note $f^{-1}(2)=b$ if and only if $f(b)=2$ , and $b=1$ plainly works. Sometimes you get lucky. In general, being able to find a closed-form expression for $f^{-1}(x)$ is rare.
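To make both parts concrete (a sketch of mine, including the derivative part the asker mentioned, via the inverse function theorem $(f^{-1})'(2)=1/f'(1)$):

```python
import math

def f(x):
    return math.log(x) + 2 * x ** 5

# f(1) = ln(1) + 2 = 2, so f^{-1}(2) = 1 without ever inverting f.
assert f(1.0) == 2.0

def fprime(x):
    # f'(x) = 1/x + 10 x^4
    return 1 / x + 10 * x ** 4

# Inverse function theorem: (f^{-1})'(2) = 1 / f'(f^{-1}(2)) = 1 / f'(1) = 1/11.
print(1 / fprime(1.0))  # 0.0909... = 1/11
```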
|
|calculus|analysis|inverse-function|
| 1
|
On the existence/applications of infinitely-nested functions
|
Inside a previous question, one particular nested function shown is the known tetration. This "kind" of arbitrarily repeated functions has always intrigued me, because inside their properties lie so many beautiful relations (e.g. the domain of $\lim_{n\to+\infty}{}^nx$ is $e^{-e}\leq x\leq e^{1/e}$). So I wanted to define a similar expression, and search for its properties: $$ f(n,x)={}^n\log x=\underbrace{\log(\log(\log(\ldots\log x)))}_{n\text{ times}},\quad n\in\mathbb{N}^+,x\in\mathbb{R} $$ just looking at how the domain explodes even for small $n$: $$ \mathcal{D}_{f(n,x)}:x>{}^{n-2}e \;(n\ge 2)\quad\wedge\quad\mathcal{D}_{f(1,x)}:x>0\quad\text{for example, }\;\mathcal{D}_{f(7,x)}:x\gtrsim 10^{10^{10^{6.22}}} $$ This may be not the only "strange" property of this particular function, but even if I think it can be exploited, I've never found any source (neither textbooks nor online); so in conclusion, is there any application of such a function (o
|
Let us examine this. $^{1}\log(x)$ only yields a real number for positive real $x$; for other $x$ it is never real. $^{2}\log(x)$ only yields a real value for real $x>1=10^0$; it is undefined for $x=1$, and has only nonreal values otherwise. $^{3}\log(x)=\log(^2\log(x))$: taking the antilog of the inequality above, we can deduce that it only returns a real value for $x>10=10^1$. $^{4}\log(x)=\log(^3\log(x))$: taking the antilog of the inequality above, we can deduce that it only returns a real value for $x>10^{10}=10^{10^1}$. Note that for each increase of $n$ by $1$, the minimum value of $x$ for a real result is the common antilog of the minimum value for the previous $n$. $$^{n}\log(x)=\log(^{n-1}\log(x))$$ The minimum value of $x$ for this to be a real number is obtained by taking the common antilog of $1$, $n-2$ times.
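The recursive thresholds can be computed directly (my own sketch, assuming base-10 logarithms as in this answer; only small $n$ fit in floating point, since the next threshold $10^{10^{10}}$ already overflows):

```python
def threshold(n):
    # Smallest b such that the n-fold iterated log10 is real exactly for x > b.
    if n == 1:
        return 0.0       # log10(x) real for x > 0
    b = 1.0              # n = 2: need log10(x) > 0, i.e. x > 1
    for _ in range(n - 2):
        b = 10.0 ** b    # each extra log takes the common antilog
    return b

print([threshold(n) for n in range(1, 5)])  # thresholds for n = 1..4
```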
|
|algebra-precalculus|logarithms|recurrence-relations|function-and-relation-composition|
| 0
|
(dis)proof of conjecture on square unit fractions
|
Consider a finite set $S$ of positive integers, and define $q(S) = \sum_{s \in S}{1/s^2}$. Letting $\rho = \pi^2/6$, we have $q(S)$ in the ranges $[0, \rho - 1)$ and $[1, \rho)$. I conjecture that for every rational $r$ within those ranges, $\exists S: q(S) = r$. Can anyone prove or disprove this? Note that for plain unit fractions $u(S) = \sum_{s \in S}{1/s}$, it is straightforward to show by a greedy algorithm that $\forall r \in \mathbb{Q}^+ \; \exists S: u(S) = r$; conversely, for prime unit fractions $p(S) = \sum_{s \in S}{1/p_s}$, almost all rationals are unreachable, since for each $S$, $p(S)$ has a unique (squarefree) denominator. Given that the primes are denser than the squares, it would thus be a somewhat interesting result if the conjecture were to be proven. Computationally I have established existence of $S$ for all rationals with denominators up to 50 within the range; see results on github. Update: added "finite" to clarify the first sentence; corrected $\mathbb{R}$ to $\mathbb{Q}$
|
You have rediscovered a celebrated theorem of R. L. Graham , who proved this exact statement (as a special case of something much more general) in 1964.
|
|elementary-number-theory|conjectures|
| 1
|
Can you explain how to convert this notation to a function to someone unfamiliar with group theory?
|
While working on a project, I came across a paper ("On $C^2$-smooth Surfaces of Constant Width" by Brendan Guilfoyle and Wilhelm Klingenberg) that includes the definition $g \in \Gamma$ on page 15, where $\Gamma$ is the "tetrahedral group" ("a discrete subgroup of isometries $\Gamma \subset O(3)$"). The paper then uses $g(z)$ as a function with a complex input, where $z$ is "the local complex coordinate on the unit 2-sphere in $\mathbb{R}^3$ obtained by stereographic projection from the south pole". Is it possible to define an algebraic expression for $g(z)$ in a way that is understandable to a person with no background in group theory?
|
To add to the previous answer, you can give a fairly straightforward description of the tetrahedron on the Riemann sphere if you note that $$ T=\{(1,1,1),(1,-1,-1),(-1,1,-1),(-1,-1,1)\}=\{(\epsilon_1,\epsilon_2,\epsilon_3): \epsilon_i \in \{\pm 1\}, \epsilon_1\epsilon_2\epsilon_3 =1\} $$ form the vertices of a regular tetrahedron, although they lie on the sphere of radius $\sqrt{3}$ rather than the unit sphere. Now stereographic projection from $C=(-1,-1,-1)$ takes a vector $v$ on the sphere to the vector $S(v)$ on the plane perpendicular to $C$ which is also on the line through $C$ and $v$. Thus $S(v) = tv +(1-t)C$ where $t$ is determined by the condition that the dot product $S(v)\cdot C =0$. Thus $tv\cdot C +3(1-t)=0$, that is, $t = 3/(3-v\cdot C)$. It follows that for the projection of $(-1,-1,1)$ we must take $t= 3/(3-1)=3/2$, and hence the projection is $(-1,-1,2)$. Now rescaling by $1/\sqrt{3}$ we see that $(1,1,1)$ is sent to $0$, while the other 3 vertices are sent to the rescalings of $(2,-1,-1)$, $(-1,2,-1)$ and $(-1,-1,2)$
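A quick numeric check of the projection formulas in this answer (script and helper names are mine); it reproduces the values derived above and verifies that each image is perpendicular to $C$:

```python
# Stereographic projection from C = (-1,-1,-1), following the answer:
# S(v) = t v + (1 - t) C  with  t = 3 / (3 - v . C).
C = (-1.0, -1.0, -1.0)
T = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def project(v):
    dot = sum(vi * ci for vi, ci in zip(v, C))
    t = 3.0 / (3.0 - dot)
    return tuple(t * vi + (1 - t) * ci for vi, ci in zip(v, C))

for v in T:
    s = project(v)
    # Each projected point lies on the plane perpendicular to C.
    assert abs(sum(si * ci for si, ci in zip(s, C))) < 1e-12
    print(v, "->", s)
# (1,1,1) -> (0,0,0); (-1,-1,1) -> (-1,-1,2); and cyclic permutations.
```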
|
|group-theory|
| 0
|
The existence of a larger compact set containing all minimising geodesics of a compact subset
|
On a hyperbolic surface, is a compact set $K$ necessarily contained in a larger compact set $K'$ so that any two points of $K$ can be joined by a minimising geodesic within $K'$? The question came up while I was studying the Pick Theorem in the book Dynamics in One Complex Variable by John Milnor (page 22). At the end of the proof, it seems that the answer to my question is obviously yes, so that one can choose such a larger compact set $K'$ to conclude the proof. But I am just unsure about the existence of such a larger compact set $K'$. Here's my attempt. Let $G$ be the union of all geodesics of $K$; that is, $x$ is in $G$ if and only if $x$ is on a minimising geodesic joining some two points of $K$. If we can prove $G$ to be bounded (I still cannot work this out), then we can take a closed ball containing $G$ as the wanted $K'$. Any help and discussion will be sincerely appreciated.
|
Here are two relevant notions of convexity for complete Riemannian manifolds $M$: a. A subset $C\subset M$ is weakly convex if for all pairs of points $x, y\in C$ there exists a geodesic $c$ in $M$ connecting $x, y$, such that $c\subset C$. b. A subset $C\subset M$ is strongly convex if for all pairs of points $x, y\in C$ there exists a minimizing geodesic $c$ in $M$ connecting $x, y$, such that $c\subset C$. (The terminology "weakly" and "strongly" convex is mine, as there is no consistent terminology in the literature.) Lemma 1. Suppose that $M$ is a complete (connected) hyperbolic surface. Then for every compact $K\subset M$ there exists a weakly convex compact subset $C\subset M$ containing $K$. Proof. There exists a compact subset $\tilde{K}$ of the hyperbolic plane ${\mathbb H}^2$ (the universal covering space of $M$) which projects onto $K$ under the universal covering map $\pi: {\mathbb H}^2\to M$. Since $\tilde{K}$ is compact, there exists a closed metric ball $B\subset {\mathbb H}^2$ containing $\tilde{K}$
|
|differential-geometry|riemannian-geometry|riemann-surfaces|geodesic|
| 0
|
Equipping a set with an abstract notion of computability
|
I'm curious what the right abstraction is for equipping an arbitrary set with "something kind of like computability". Topologies don't seem to fit because the complement of an open set is not open in general. Sigma-algebras don't seem to fit because computable functions on real numbers can't "zero in" on a particular real under our intuitive notion of computability: we can construct narrower and narrower sets containing the real in question, but can't isolate the real itself. Most of the definitions of computability that I've seen before involve defining a subset of $\mathbb{N} \to \mathbb{N}$ or $(\mathbb{N})^n \to \mathbb{N}$. One constructs Turing Machines and describes how to feed data in and read it back out, or considers an initial pool of functions and closes them under certain operators. There are a couple of properties that apply to functions in $\mathbb{N} \to \mathbb{N}$ that are vaguely computability-like. Computability itself: corresponding to an always-halting Turing Machine
|
This is very late to the party, but I think partial combinatory algebras (pcas) provide a more purely "algebraic" avenue for this sort of question ... or, as I'm going to tweak them, "pcas with structure." For background on pcas, I recommend Terwijn's Computability in partial combinatory algebras. Briefly, a pca with structure is a triple $$\mathfrak{A}=(A,I,\cdot)$$ such that $(A,I)$ is a structure in the usual first-order sense (e.g. $I$ is a function with domain function/constant/relation symbols), $(A,\cdot)$ is a pca, and each primitive function (resp. relation) of $(A,I)$ is represented (resp. has represented characteristic function) in $(A,\cdot)$. To elaborate this latter bit: For each unary primitive function $f$ of $(A,I)$, there is an $e\in A$ such that for all $a\in A$ we have $e\cdot a\downarrow =f(a)$. Similarly, if $f$ is a primitive $n$-ary function of $(A,I)$, then there is an $e\in A$ such that for all $a_1,...,a_n\in A$ we have $$ea_1\ldots a_n:=(\ldots((e\cdot a_1)\cdot a_2)\ldots)\cdot a_n
|
|soft-question|computability|
| 0
|
Prove $X$ and $Y$ with $X \sim \text{Unif}(0, 1)$ and $Y = X$ have no joint probability density function
|
Prove that the two random variables $X$ and $Y$ with $X \sim \text{Unif}(0, 1)$ and $Y = X$ have no joint probability density function (PDF), while each margin has a PDF. Here is my progress; please help me grade it and complete the proof! Note that if a joint PDF $f_{X,Y}$ existed, then with $B = \{ (x, y) \in (0, 1) \times (0, 1) : x = y \}$ we would get $$1 = \underset{B}{\int\limits \int\limits} f_{X, Y}(x, y)\,\mathrm{d}y\,\mathrm{d}x = \int\limits_{x=0}^1 \; \int\limits_{y=x}^x f_{X, Y}(x, y)\,\mathrm{d}y\,\mathrm{d}x = 0,$$ a contradiction. Grade this solution, please. I do not have a clue how to do the second part.
|
The joint CDF of $X$ and $Y$ is given below for any $(x, y) \in (0, 1)^2$: $$F_{X,Y}(x,y)=\mathbb P(X\le x,Y\le y)=\mathbb P(X\le \min(x,y))= \min(x,y)$$ You can see that there is no $f_{X,Y}(u,w)$ such that for any $(x, y) \in (0, 1)^2$ the following representation holds: $$F_{X,Y}(x,y)=\int\limits_{u=0}^x \quad \int\limits_{w=0}^y f_{X, Y}(u, w) \,\text{d}u\,\text{d}w.$$ Indeed, if such a function exists, then it (almost everywhere) satisfies $$f_{X, Y}(x, y)=\frac{\partial^2{F_{X, Y}(x, y)}}{\partial{x}\partial{y}},$$ but the RHS is zero at off-diagonal points $(x,y)$ with $x \ne y$ and not defined at diagonal points $(x,x)$. Hence $f_{X,Y}$ would have to vanish almost everywhere, and for such a function the above integral representation cannot hold. In fact, $X$ and $Y$ have a continuous joint distribution without a joint density function, while the margins have density functions.
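A small simulation (mine, purely illustrative) shows both facts at once: the empirical joint CDF matches $\min(x,y)$, and all the probability mass sits on the zero-area diagonal, which is what rules out a density:

```python
import random

# X ~ Unif(0,1) and Y = X: margins have densities, joint law does not.
random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]
ys = xs[:]  # Y = X

def F_emp(x, y):
    # Empirical joint CDF; should approximate F(x, y) = min(x, y).
    return sum(1 for u, v in zip(xs, ys) if u <= x and v <= y) / N

print(F_emp(0.3, 0.7), F_emp(0.7, 0.3))  # both near 0.3 = min

# Every sample lies in the strip |x - y| < eps for ANY eps > 0, although
# the strip's area (hence the integral of any candidate pdf) shrinks to 0.
eps = 1e-12
assert sum(1 for u, v in zip(xs, ys) if abs(u - v) < eps) == N
```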
|
|probability|probability-theory|probability-distributions|marginal-distribution|marginal-probability|
| 0
|
Epsilon delta proof with x going towards infinity.
|
I have the following function: $h(x)=\frac{-2e^x}{e^x+\frac{1}{e^x}}$ with $x \rightarrow \infty$ and $h(x) \rightarrow -2$, and I am attempting to prove it with an epsilon-delta proof. I then do the following: $\Big\lvert h(x)-(-2)\Big\rvert=\dfrac{-2e^x+2(e^x+\dfrac{1}{e^x})}{e^x+\dfrac{1}{e^x}}=\dfrac{\dfrac{2}{e^x}}{e^x+\dfrac{1}{e^x}}$ and attempt to create an upper bound with the following inequality and evolving around it: $e^x>1+x \Rightarrow e^x+\frac{1}{e^x} > 1+x+\frac{1}{e^x} \Rightarrow \frac{1}{e^x+\dfrac{1}{e^x}} < \frac{1}{1+x+\frac{1}{e^x}}$ But I can't seem to get further. Help would be appreciated!
|
As you said, $\dfrac{\dfrac{2}{e^x}}{e^x+\dfrac{1}{e^x}} < \epsilon$. That's equivalent to saying $\frac{2}{e^{2x}+1} < \epsilon$, so $e^{2x}+1 > \frac{2}{\epsilon}$, so $e^{2x}>\frac{2}{\epsilon}-1$. Now, if you have a negative or zero value on the right-hand side, you can choose $h=0$. Else, you can pick $h = \frac12\ln\left(\frac{2}{\epsilon}-1\right)$, so $x>h \implies |h(x)-(-2)| < \epsilon$, which satisfies that limit. As John Douma has stated above, this is not technically epsilon-delta: you look for some $h$ s.t. $x>h$ implies $|h(x)-(-2)| < \epsilon$ for this type of limit.
|
|real-analysis|solution-verification|epsilon-delta|
| 0
|
Epsilon delta proof with x going towards infinity.
|
I have the following function: $h(x)=\frac{-2e^x}{e^x+\frac{1}{e^x}}$ with $x \rightarrow \infty$ and $h(x) \rightarrow -2$, and I am attempting to prove it with an epsilon-delta proof. I then do the following: $\Big\lvert h(x)-(-2)\Big\rvert=\dfrac{-2e^x+2(e^x+\dfrac{1}{e^x})}{e^x+\dfrac{1}{e^x}}=\dfrac{\dfrac{2}{e^x}}{e^x+\dfrac{1}{e^x}}$ and attempt to create an upper bound with the following inequality and evolving around it: $e^x>1+x \Rightarrow e^x+\frac{1}{e^x} > 1+x+\frac{1}{e^x} \Rightarrow \frac{1}{e^x+\dfrac{1}{e^x}} < \frac{1}{1+x+\frac{1}{e^x}}$ But I can't seem to get further. Help would be appreciated!
|
You need to be cautious: What you want to show is that $$|h(x) - (-2)| < ε$$ for $x$ large; this is not what you are given! To do this, fix $ε>0$ . We have $$|h(x)-(-2)| = \frac{\frac{2}{e^x}}{e^x + \frac{1}{e^x}} = \frac{2}{e^{2x} + 1} =: f(x).$$ If $ε\geq 1$ , choose $M = 0$ . We immediately have that $f(x)\leq 1 \leq ε$ for any $x\geq M = 0$ , since $e^{2x} + 1 \geq 2$ . If $0 < ε < 1$ , choose $M = \frac{1}{2}\log(\frac{2}{ε} - 1)$ . Then, by monotonicity of $f$ (check this!), we have for all $x\geq M$ that $$f(x) \leq f(M) = \frac{2}{\frac{2}{ε} - 1 + 1} = ε$$ as desired.
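As a sanity check, the choice of $M$ above can be verified numerically (a quick sketch; the helper name `f` mirrors the answer's notation):

```python
import math

# |h(x) - (-2)| simplifies to 2 / (e^{2x} + 1); verify the cutoff M numerically
def f(x):
    return 2.0 / (math.exp(2 * x) + 1.0)

for eps in (0.5, 0.1, 0.01):
    M = 0.0 if eps >= 1 else 0.5 * math.log(2 / eps - 1)
    assert abs(f(M) - eps) < 1e-12   # equality exactly at x = M
    assert f(M + 1) <= eps           # f is decreasing, so it stays below eps
    assert f(M + 10) <= eps
print("cutoffs verified")
```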
|
|real-analysis|solution-verification|epsilon-delta|
| 0
|
Isometric embedding of standard simplex
|
The standard $n$ -simplex is the subset of $\mathbb R^{n+1}$ given by $\Delta^n = \left\{(t_0,\dots,t_n)\in\mathbb{R}^{n+1}~\big|~\sum_{i = 0}^n t_i = 1 \text{ and } t_i \ge 0 \text{ for all } i\right\}$ . Clearly, it is actually an $n$ -dimensional object. I wish to find an isometric embedding of $\Delta^n$ into a subset of $\mathbb R^n$ . I might have overlooked something, and this might be trivial, but after quite some trying I still haven't been able to come up with something. Could someone help me further with this?
|
I could not find any resource about it, so I built the transformation from scratch and I want to share it. The first part of the problem is to create $d+1$ equidistant points in $\mathbb R^d$ . The second is to use these points to define the isometry explicitly. Equidistant points on hyperspheres We proceed by induction. The goal at each iteration $k$ is to produce $k+1$ equidistant vectors $v_1,...,v_{k+1}$ lying on the surface of the unit ball embedded in $\mathbb R^k$ , or in other words, on the $(k-1)$ -sphere. Once $k=d$ , we are done. For $k=1$ , we just take $v_1=-1$ and $v_2=1$ . Now suppose that $u_1, ..., u_{k} \in \mathbb{R}^{k-1}$ are equidistant. Let $v_i \in \mathbb{R}^k$ be the result of scaling $u_i$ by $\frac{\sqrt{k^2-1}}{k}$ and prepending to it the value $-1/k$ . And define $v_{k+1} \in \mathbb R^k$ as $[1, 0,...,0 ]^T$ . Then, $v_1, ..., v_{k+1} \in \mathbb{R}^{k}$ are equidistant. The following code follows this construction. The image below shows the output (blue po
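The recursion just described can be sketched in a few lines of Python (a minimal sketch; the function names are mine):

```python
import math

def equidistant(d):
    """d+1 equidistant unit vectors in R^d, built by the recursion above."""
    if d == 1:
        return [[-1.0], [1.0]]
    prev = equidistant(d - 1)                 # d equidistant points in R^{d-1}
    scale = math.sqrt(d * d - 1) / d          # = sqrt(1 - 1/d^2)
    vs = [[-1.0 / d] + [scale * c for c in u] for u in prev]
    vs.append([1.0] + [0.0] * (d - 1))        # the new vertex v_{k+1}
    return vs

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

pts = equidistant(3)
pair_dists = [dist(pts[i], pts[j]) for i in range(4) for j in range(i + 1, 4)]
print(pair_dists)  # six equal distances: the four points form a regular tetrahedron
```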
|
|isometry|simplex|
| 0
|
Determine if a vector valued function has a root based on the behaviour on the boundary
|
Say $f: \mathbb R^n \to \mathbb R^n$ is a vector-valued continuous function. Say we further know that $t^\top f(t) \ge 0$ $\,\forall \,\,||t||=1$ . Does this imply that $f$ has a root in the unit ball? This is pretty trivial to prove when $n=1$ as you have that $f(1)\ge0$ and $f(-1)\le 0$ . I can further show that each individual component of $f$ will have some root but I cannot show that a single $t$ can be the root for each of the components at the same time.
|
First of all, let me change notation: we are given a function $f\colon \mathbb R^n\to \mathbb R^n$ such that $x^Tf(x)\ge 0$ for $\lvert x \rvert=1$ . (I can't write $t^T$ , that seems too terrible to me. :-) ). The answer is affirmative. Indeed, define $g(x):=-f(x)$ . Now we have a continuous function such that $x^Tg(x)\le 0$ for $\lvert x \rvert=1$ . We can thus invoke this recent question from MathOverflow: https://mathoverflow.net/q/463957/13042 , showing that $g$ must vanish somewhere for $\lvert x \rvert\le 1$ . Therefore, $f$ must vanish also.
|
|real-analysis|analysis|functions|
| 1
|
Why is function's root undefined at some point when there is a clear and well-defined solution?
|
Consider the function $$ f(x,y,\lambda)=\frac{1}{8} x \left[\left(\lambda ^2+6 \lambda -4\right) y-(\lambda -1)^2 x\right]$$ over real domain $x>0$ , $y>0$ and $\lambda\in[0,1]$ . If we set $y=x$ then we have $f(x,x,\lambda)=\frac{1}{8} (8 \lambda -5) x^2$ , meaning that $\lambda=5/8$ is the unique $\lambda$ that solves $f(x,x,\lambda)=0$ in the function's domain. However, if I solve $ f(x,y,\lambda)=0$ for $\lambda$ I find $$\lambda=\frac{x+3 y- \sqrt{y (3 x+13 y)}}{x-y},$$ which is undefined at $x=y$ (but has $5/8$ as its $x\rightarrow y$ limit, found via l'Hopital's rule). Given that $f(x,x,\lambda)=0$ is solved by a clear and well-defined $\lambda$ , why is the general solution to $f(x,y,\lambda)=0$ undefined at $x=y$ ?
|
When $x = y$ , $f$ becomes a simpler function of $\lambda$ . Specifically: $$ f(x,x,\lambda) = x^2\lambda - \frac{5 x^2}8. $$ Therefore $f(x,x,\lambda) = 0$ is not a quadratic equation in $\lambda.$ What happens to the quadratic formula when $a = 0$ ? Your "general" solution makes an assumption you did not account for, and is therefore not a general solution. It is a solution for the case $x \neq y$ only. But you could try the "other" quadratic formula, $$ x = \frac{2c}{-b \pm \sqrt{b^2 - 4ac}}. $$ This is undefined in the " $+$ " case when $x=y$ but the " $-$ " case works. I still would not call that a general solution (because of the undefined root in one case), but at least you can use it to relate the two cases $x=y$ and $x\neq y$ to each other.
|
|real-analysis|
| 1
|
Understand the definition of covariant derivative for parameterized set
|
A geometric set $S \subset R^n$ is a set having the property that for each point $p \in S$ , there is a vector subspace $T_pS \subset T_p\mathbb R^n$ . Moreover, these subspaces should vary smoothly with $p$ and should all have the same dimension. Let $U\subset\mathbb R^k$ be a domain and let $\phi:U\rightarrow\mathbb R^n(k\leq n)$ be a smooth, one-to-one function that is regular for all $p\in U$ . A parameterized set $S=\phi(U)$ is defined to be the image of $U$ in $\mathbb R^n$ by $\phi$ . The geometric features of the parameterized set $S$ come from “encoding” features of the parameter space $U$ through the function $\phi$ . Then the author ( First Steps in Differential Geometry Riemannian, Contact, Symplectic by Andrew McInerney ) built intuition like Riemannian metrics, Riemannian Connection and curvature on the geometric set or more specifically the parameterized set. Proposition 5.1.9. Let $(U, g)$ be a Riemannian space, $\mathcal{X}(U)$ the set of smooth vector fields on $U$ ,
|
This is related to the Koszul formula, also called the fundamental theorem of Riemannian geometry. It gives a formula for computing the unique connection on a Riemannian manifold that preserves the metric and is torsion-free. It is a bit long to write all the details, but see https://en.wikipedia.org/wiki/Fundamental_theorem_of_Riemannian_geometry . In particular, you get concrete formulas for the Christoffel symbols, and that would allow you to compute many connections and connection forms, in response to your second question. For the first one, you just need to check that the terms of the Koszul formula match with the more abstract formula you put there. Let me write out this part. We want to check that $\nabla_X Y =\gamma^{-1}\theta_{Y, X}$ , that is, by definition, for a vector field $Z$ , $$ g(\nabla_X Y, Z)=g(\gamma^{-1}\theta_{Y, X}, Z)=\theta_{Y, X}(Z). $$ Now $g(\nabla_X Y, Z)$ is exactly the subject of the Koszul formula, so we want to check the RHS also produces this result.
|
|differential-geometry|connections|
| 0
|
Range of $g(x)=\cot^{-1}\log_{1/2}(x^4-2x^2+3)$
|
Theoretically, we know that the range and domain of $f(x)=\cot^{-1}x$ is $(-\infty,\infty)$ and $(0,\pi)$ , respectively. Its plot is a smooth continuous function. Strangely, I find that Wolfram Mathematica plots it as an odd discontinuous (at $x=0$ ) function in $(-\pi/2,\pi/2)$ ! Question 1: Have you encountered this? In this regard, the range of $$g(x)=\cot^{-1} \log_{1/2}(x^4-2x^2+3).......(1)$$ could be found interesting. This is an even function in the domain $(-\infty,\infty).$ We get $$g'(x)=\frac{-1}{1+[\log_{1/2}(x^4-2x^2+3)]^2}\frac{(4x^3-4x)}{x^4-2x^2+3} $$ $g'(x)=0$ gives stationary points as $x=0,\pm 1.$ $g(0)=\cot^{-1}\log_{1/2} 3\approx 2.57, g(\pm 1) = \cot^{-1} \log_{1/2}2=\cot^{-1} (-1)=3\pi/4\approx 2.35.$ The limiting values of $g(\pm \infty)\to \cot^{-1}\log_{1/2} (\infty)\to \cot^{-1} (-\infty)\to \pi.$ Consequently, the range of $g(x)$ is $$[3\pi/4,\pi).......(2)$$ Question 2: Is the obtained range correct and what could be other ways of finding it. Edit The new
|
I think you could divide it into some easier parts. Let's think about this: $f(x)=x^4-2x^2+3=(x^2-1)^2+2 \to R_f=[2,+\infty)$ . Then you can take $\log$ , so $$\log_{\frac 12}f(x)=-\log_2 f(x),\qquad 2\le f(x)<+\infty \implies -\infty<-\log_2 f(x)\le -1,$$ then apply $\cot^{-1}$ . You can also use the substitution $u=(x^2-1)^2+2$ with $2\le u<+\infty$ and find $$\cot^{-1}(-\log_2 u),$$ and finally $$A=-\log_2 u,\qquad -\infty<A\le -1 \implies \cot^{-1}(A)\in\left[\cot^{-1}(-1),\,\pi\right)=\left[\tfrac{3\pi}{4},\,\pi\right).$$
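A quick numerical sanity check of the resulting range $[3\pi/4,\pi)$ , using the continuous branch $\cot^{-1}A=\pi/2-\arctan A$ (a sketch; the sampling grid is arbitrary):

```python
import math

def g(x):
    u = (x * x - 1) ** 2 + 2            # x^4 - 2x^2 + 3, always >= 2
    A = -math.log2(u)                   # log_{1/2} u
    return math.pi / 2 - math.atan(A)   # arccot with the continuous range (0, pi)

vals = [g(k / 100) for k in range(-1000, 1001)]
print(min(vals))            # 3*pi/4, attained at x = +/-1
print(max(vals) < math.pi)  # True: the supremum pi is approached, never reached
```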
|
|functions|
| 1
|
Why is function's root undefined at some point when there is a clear and well-defined solution?
|
Consider the function $$ f(x,y,\lambda)=\frac{1}{8} x \left[\left(\lambda ^2+6 \lambda -4\right) y-(\lambda -1)^2 x\right]$$ over real domain $x>0$ , $y>0$ and $\lambda\in[0,1]$ . If we set $y=x$ then we have $f(x,x,\lambda)=\frac{1}{8} (8 \lambda -5) x^2$ , meaning that $\lambda=5/8$ is the unique $\lambda$ that solves $f(x,x,\lambda)=0$ in the function's domain. However, if I solve $ f(x,y,\lambda)=0$ for $\lambda$ I find $$\lambda=\frac{x+3 y- \sqrt{y (3 x+13 y)}}{x-y},$$ which is undefined at $x=y$ (but has $5/8$ as its $x\rightarrow y$ limit, found via l'Hopital's rule). Given that $f(x,x,\lambda)=0$ is solved by a clear and well-defined $\lambda$ , why is the general solution to $f(x,y,\lambda)=0$ undefined at $x=y$ ?
|
Let's expand your definition of $f(x,y,\lambda)$ , collecting like terms in $\lambda$ : $$\begin{align} f(x,y,\lambda) &= \frac{x}{8} \left( (\lambda^2 + 6 \lambda - 4)y - (\lambda - 1)^2 x \right) \\ &= \frac{x}{8} \left( y \lambda^2 + 6y\lambda - 4y - x \lambda^2 + 2x\lambda - x \right) \\ &= \frac{x}{8} \left( (y-x)\lambda^2 + (2x + 6y)\lambda - (x + 4y)\right). \end{align}$$ Then you presumably applied the quadratic formula with the choice $$a = y-x, \quad b = 2x+6y, \quad c = -(x+4y).$$ But $f$ is not a quadratic polynomial if $a = 0$ , or equivalently, $x = y$ . So we have two cases: if $x \ne y$ , then the quadratic formula yields $$\begin{align} \lambda &= \frac{-(x+3y) \pm \sqrt{(x+3y)^2 + (y-x)(x+4y)}}{y-x} \\ &= \frac{x+3y \pm \sqrt{y(3x+13y)}}{x-y} \\ &= \frac{x+3y - \sqrt{y(3x+13y)}}{x-y} \end{align}$$ where we eliminate the root that lies outside of $\lambda \in [0,1]$ ; and when $x = y$ , the equation is linear with solution $$\lambda = \frac{x+4y}{2(x+3y)} = \frac{5y}{8y} = \frac{5}{8}.$$
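The two cases can be checked numerically (a sketch; the helper names are mine): the quadratic-formula branch really does solve $f=0$ whenever $x \neq y$ , and it approaches $5/8$ as $y \to x$ :

```python
import math, random

def f(x, y, lam):
    return x / 8 * ((lam ** 2 + 6 * lam - 4) * y - (lam - 1) ** 2 * x)

def lam_root(x, y):
    # the x != y branch from the quadratic formula
    return (x + 3 * y - math.sqrt(y * (3 * x + 13 * y))) / (x - y)

random.seed(0)
for _ in range(5):
    x, y = random.uniform(0.5, 3.0), random.uniform(0.5, 3.0)
    assert abs(f(x, y, lam_root(x, y))) < 1e-9

print(lam_root(1.0, 1.0 + 1e-6))  # ~0.625: the removable singularity at x = y
```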
|
|real-analysis|
| 0
|
Is the argument for approaching of an approximated quantity to its true value sound?
|
Consider a quantity, $a$ , which could represent various measures such as the area under a curve. Initially, we estimate the value of $a$ . As we refine our approximation, we find that we can make it arbitrarily close to a number $\ell$ . This suggests that $a$ equals $\ell$ . But why is this the case? Here's the rationale: Suppose, for contradiction, that $a$ does not equal $\ell$ , but instead equals a different number, say $m$ . In the process of approximating $a$ , it should approach $m$ (or in other words, we find that we can make it arbitrarily close to $m$ ), as $m$ is its true value. However, $a$ is approaching $\ell$ (Because we said that we can make it arbitrarily close to a number $\ell$ ), and this is a contradiction. Therefore, the actual value of $a$ must be $\ell$ , as any other assumption leads to a contradiction. Is this reasoning sound? Edit: According to Charles Hudgins's comment, my reasoning wasn't sound and as I understood, the sound proof is as follows: $a$ can g
|
For completeness, here is the proof I poorly outlined in the comments. Suppose $|a - \ell| < \epsilon$ for every $\epsilon > 0$ . This formalizes the notion that $a$ and $\ell$ are arbitrarily close. Suppose, for contradiction, that $a \neq \ell$ . Then $\frac{|a - \ell|}{2} > 0$ . But, taking $\epsilon = \frac{|a - \ell|}{2} > 0$ , this means $|a - \ell| < \frac{|a - \ell|}{2}$ , i.e. $\frac{|a - \ell|}{2} < 0$ , a contradiction. We conclude $a = \ell$ . Note to the interested reader: It is crucial that this proof requires contradiction because there are ways of formulating math that reject proof by contradiction and consequently permit numbers which are arbitrarily close but nevertheless distinct.
|
|calculus|limits|logic|
| 1
|
Range of $g(x)=\cot^{-1}\log_{1/2}(x^4-2x^2+3)$
|
Theoretically, we know that the range and domain of $f(x)=\cot^{-1}x$ is $(-\infty,\infty)$ and $(0,\pi)$ , respectively. Its plot is a smooth continuous function. Strangely, I find that Wolfram Mathematica plots it as an odd discontinuous (at $x=0$ ) function in $(-\pi/2,\pi/2)$ ! Question 1: Have you encountered this? In this regard, the range of $$g(x)=\cot^{-1} \log_{1/2}(x^4-2x^2+3).......(1)$$ could be found interesting. This is an even function in the domain $(-\infty,\infty).$ We get $$g'(x)=\frac{-1}{1+[\log_{1/2}(x^4-2x^2+3)]^2}\frac{(4x^3-4x)}{x^4-2x^2+3} $$ $g'(x)=0$ gives stationary points as $x=0,\pm 1.$ $g(0)=\cot^{-1}\log_{1/2} 3\approx 2.57, g(\pm 1) = \cot^{-1} \log_{1/2}2=\cot^{-1} (-1)=3\pi/4\approx 2.35.$ The limiting values of $g(\pm \infty)\to \cot^{-1}\log_{1/2} (\infty)\to \cot^{-1} (-\infty)\to \pi.$ Consequently, the range of $g(x)$ is $$[3\pi/4,\pi).......(2)$$ Question 2: Is the obtained range correct and what could be other ways of finding it. Edit The new
|
Mathematica uses the range $\cot^{-1} x \in (-\pi/2, 0) \cup (0, \pi/2]$ for real-valued $x$ . It does this for reasons related to the choice of branch of the complex-valued function. It's not ideal in cases where we want a continuous function, but one way to work around this is to employ the relationship $$\cot^{-1} x = \frac{\pi}{2} - \tan^{-1} x, \quad x \in \mathbb R.$$ This yields the desired domain and range for your function.
|
|functions|
| 0
|
Bounded fourth moment implies bounded lower moments
|
In the moment method, let $X$ be an absolutely integrable real scalar random variable, and assume a finite fourth moment ${{\bf E} |X|^4 < \infty}$ . It is stated that all lower moments such as ${{\bf E} |X|^2}$ are thus finite by the Hölder or Jensen inequalities. I can see that by the Hölder inequality with $p=q=2$ , we have ${\bf E}|X|^2 = {\bf E}|X|^2 \cdot 1 \leq ({\bf E}|X|^4)^{1/2} < \infty$ , but I don't see how this can follow from Jensen's inequality as well, since if we want to use the convex function $f: {\bf R} \to {\bf R}$ given by $f(x) = x^2$ to conclude $({\bf E}|X|^2)^2 \leq {\bf E}|X|^4$ directly, we'll need the condition that $|X|^2$ is absolutely integrable a priori.
|
We can truncate the variables. For $M>0$ , define $$X_M = \left\{\begin{array}{ll} X & : |X|\leqslant M \\ 0 & : |X| > M.\end{array}\right.$$ Then $\lim_M X_M=X$ almost surely. By the monotone convergence theorem, $$\lim_M \mathbb{E}|X_M|^4=\mathbb{E}|X|^4.$$ Apply Jensen's inequality to each $X_M$ , which are all bounded and so have all moments finite, to deduce that $$[\mathbb{E}|X_M|^2]^2\leqslant \mathbb{E}|X_M|^4\leqslant \mathbb{E}|X|^4$$ for all $M$ . Again, by the monotone convergence theorem (and continuity of the square function), $$[\mathbb{E}|X|^2]^2 = \lim_M [\mathbb{E}|X_M|^2]^2\leqslant \mathbb{E}|X|^4.$$ The monotone convergence theorem is valid even without the assumption that $\mathbb{E}|X|^2 < \infty$ , because if $\mathbb{E}|X|^2=\infty$ , then $$\lim_M \mathbb{E}|X_M|^2=\infty.$$ A similar trick will work for any other powers $0 < p < q$ . We apply Jensen's inequality with the convex function $f(x)=x^{q/p}$ .
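The inequality $[\mathbb{E}|X|^2]^2 \leq \mathbb{E}|X|^4$ also holds for any empirical average (which is automatically a bounded measure), giving a quick way to see Jensen's inequality at work (a sketch with simulated standard Gaussians; the choice of distribution is arbitrary):

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

m2 = sum(x * x for x in xs) / len(xs)    # empirical E|X|^2
m4 = sum(x ** 4 for x in xs) / len(xs)   # empirical E|X|^4
assert m2 ** 2 <= m4                     # Jensen on the empirical measure
print(m2 ** 2, m4)  # for N(0,1): roughly 1 and 3
```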
|
|probability|probability-theory|
| 1
|
Schrodinger semigroup and conservation laws
|
Consider a sufficiently fast decaying and smooth function $f\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$ such that $$ \int_{\mathbb{R}} f(x)dx=0. $$ Now consider the Schrodinger semigroup $e^{it\partial_x^2}$ , that is, the operator such that $$ \mathcal{F}\big[e^{it\partial_x^2}f\big](\xi)=e^{-it\xi^2}\mathcal{F}[f](\xi). $$ Is it true that $$ \int_{\mathbb{R}} e^{it\partial_x^2}\big(f(x-t)\big)dx=0? $$ I do believe that, if we put $f(x)$ instead of $f(x-t)$ in the last line it should be true since (sufficiently fast decaying) solutions to the equation $$ i\partial_t u+\partial_x^2u=0 $$ conserve the average $$ \int_{\mathbb{R}}u(t,x)dx=\int_{\mathbb{R}}u(0,x)dx. $$ So in that sense, I would like to do something like $$ \int_{\mathbb{R}} e^{it\partial_x^2}\big(f(x-t)\big)dx = \int_{\mathbb{R}} f(x-t)dx = \int_{\mathbb{R}} f(x)dx=0, $$ but I'm not 100% sure if that is allowed given the time dependence in $f(x-t)$ . Does anybody know how to prove or disprove the above claim?
|
Yes, that's ok. For each fixed $t\in\mathbb R$ , the operator $e^{it\partial_x^2}$ commutes with translations, as you can see via the PDE or via the Fourier transform, as you prefer. Therefore, $$ e^{it\partial_x^2}(f(\cdot - t))(x)=(e^{it\partial_x^2}f)(x -t).$$ Integrating in $x$ , changing variable to $y=x-t$ (no problem here, since $t$ is fixed), you are back to the case you know.
|
|partial-differential-equations|
| 0
|
3d fractal helix modeling
|
I'm trying to build a 3d visual to illustrate a concept. Imagine a circular helix . We could define a cylinder that contains that helix. But now imagine this cylinder takes a helicoid shape too! We have a big helix made of a small one. What I'm trying to do is to repeat that process and use each new helix as the component of a larger one. A small helix has to spin 9 times to make 1 spin of the larger one. I tried to build this with fractal generators but I don't know how to translate my idea into a formula (I mean the z²+c type). So I tried with 3d modeling software; I managed to create the first helix but I don't have the ability to go further because I need to use some programming languages to create the recursive process. So here is my question: what would be the easiest way to build this visual considering I'm not a programmer? Could it be translated into a formula I could enter in a fractal generator? Thank you in advance for any answer :)
|
In 2020 I constructed a similar object using signed distance fields based on a helix as a sheared stack of toruses: https://github.com/claudeha/fragm-examples/blob/029c5408a89a2d4e932ce4f020c558f02d385516/00-Helices.frag#L45-L60

uniform float HelixD; slider[0.0,2.0,10.0]
uniform float HelixR; slider[0.0,1.0,10.0]
uniform float Helixr; slider[0.0,0.5,10.0]
uniform float HelixScale; slider[0.0,2.618,10.0]
void Torus(inout vec3 q) { q = vec3(log(length(q.xy)), atan(q.y, q.x), q.z); }
void Helix(inout vec3 q) { q.z += HelixD * atan(q.y, q.x); q.z = mod(q.z + pi * HelixD, 2.0 * pi * HelixD) - pi * HelixD; Torus(q); }
float HelicesDE(vec3 q, int depth) { q /= length(vec2(HelixD, HelixR)); mat3 dq = mat3(1, 0, 0, 0, 1, 0, 0, 0, 1); mat3 m = mat3(0,1,0, 0,0,1, 1,0,0); float d = length(q.xy) - Helixr * HelixScale; for (int i = 0; i

AFAIK the terminology for including each level of a fractal construction (rather than just the final limit) is "condensation".
|
|3d|mathematical-modeling|recursive-algorithms|fractals|
| 0
|
Schrodinger semigroup and conservation laws
|
Consider a sufficiently fast decaying and smooth function $f\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$ such that $$ \int_{\mathbb{R}} f(x)dx=0. $$ Now consider the Schrodinger semigroup $e^{it\partial_x^2}$ , that is, the operator such that $$ \mathcal{F}\big[e^{it\partial_x^2}f\big](\xi)=e^{-it\xi^2}\mathcal{F}[f](\xi). $$ Is it true that $$ \int_{\mathbb{R}} e^{it\partial_x^2}\big(f(x-t)\big)dx=0? $$ I do believe that, if we put $f(x)$ instead of $f(x-t)$ in the last line it should be true since (sufficiently fast decaying) solutions to the equation $$ i\partial_t u+\partial_x^2u=0 $$ conserve the average $$ \int_{\mathbb{R}}u(t,x)dx=\int_{\mathbb{R}}u(0,x)dx. $$ So in that sense, I would like to do something like $$ \int_{\mathbb{R}} e^{it\partial_x^2}\big(f(x-t)\big)dx = \int_{\mathbb{R}} f(x-t)dx = \int_{\mathbb{R}} f(x)dx=0, $$ but I'm not 100% sure if that is allowed given the time dependence in $f(x-t)$ . Does anybody know how to prove or disprove the above claim?
|
Denote Fourier transform by $\hat{}$ . $$ \hat{f}(0)=\int_{-\infty}^{\infty} f(x)dx=0 $$ Let $g_t(x)=f(x-t)$ and $h_t(x)=\left(e^{it\partial_x^2}g_t\right)(x)$ . You're looking for the value of $$ \hat{h_t}(0)=\int_{-\infty}^{\infty} h_t(x)dx $$ Now $$ \begin{align} \hat{g_t}(\xi)&=\int_{-\infty}^{\infty} g_t(x)e^{-ix\xi}dx\\ &=\int_{-\infty}^{\infty} f(x-t)e^{-ix\xi}dx\\ &=\int_{-\infty}^{\infty} f(x')e^{-i(x'+t)\xi}dx'\\ &=e^{-it\xi}\int_{-\infty}^{\infty} f(x')e^{-ix'\xi}dx'=e^{-it\xi}\hat{f}(\xi) \end{align} $$ and $$ \hat{h_t}(\xi)=e^{-it\xi^2}\hat{g_t}(\xi)=e^{-it\xi^2}e^{-it\xi}\hat{f}(\xi)=e^{-it(\xi^2+\xi)}\hat{f}(\xi) $$ so $$ \hat{h_t}(0)=e^{-it(0^2+0)}\hat{f}(0)=1\cdot 0=0 $$
|
|partial-differential-equations|
| 1
|
Why isn't the fourier transform defined differently?
|
I never really understood why the amplitude of the Fourier transform does not represent the amplitude of a sin/cos signal of a specific frequency? The way the Fourier transform is typically defined, makes taking the Fourier transform of the sine function to be infinity at its frequency (delta function). Since the point of the Fourier transform is to find the coefficients (amplitudes) of the individual sine and cosine waves, why not define it by dividing the dot product of the function with the corresponding dot product of the basis functions? This way one gets the amplitude of the individual sine/cosine waves? Example: $$ \begin{align} \left|\mathscr{F}(f(t))\right| &= \left|\frac{\int_{-\infty}^\infty f(t)\cos(\omega t)\,dx}{\int_{-\infty}^\infty \cos^2(\omega t)\,dx} + i\,\frac{\int_{-\infty}^\infty f(t)\sin(\omega t)\,dx}{\int_{-\infty}^\infty \sin^2(\omega t)\,dx}\right| \\[6pt] \left|\mathscr{F}(\sin(at))\right| &= \left|\frac{\int_{-\infty}^\infty \sin(at)\cos(\omega t)\,dx}{\int
|
The Fourier transform $$F(\omega)=\mathcal{F}_x[f(x)](\omega)=\int\limits_{-\infty}^\infty f(x)\, e^{-2 \pi i \omega x}\, dx\tag{1}$$ of any periodic function $f(x)$ doesn't converge in the usual sense (except when $f(x)=0$ ) because periodic functions don't decay as $|x|\to\infty$ . Consequently the Fourier transform of a periodic function can only converge in a distributional sense. If $f(x)$ is a periodic function it can be represented by the exponential Fourier series $$f(x)=\underset{N\to\infty}{\text{lim}}\left(\sum\limits_{n=-N}^N F_P(n)\, e^{i \frac{2 \pi}{P} n x}\right),\quad x\in\mathbb{R}\tag{2}$$ where $P$ is the period and $$F_P(\omega)=\frac{1}{P} \int\limits_{-\frac{P}{2}}^{\frac{P}{2}} f(x)\, e^{-i \frac{2 \pi}{P} \omega x} \, dx\tag{3}$$ is the truncated Fourier transform of $f(x)$ (i.e. the integration limits are truncated). If $f(x)$ is a non-periodic function it can be represented over the interval $a<x<b$ by the exponential Fourier series $$f(x)=\underset{N\to\infty}{\t
|
|linear-algebra|complex-analysis|fourier-transform|
| 0
|
Why my proof of every maximal ideal being prime is incorrect
|
I tried to prove that for a commutative ring $R$ with identity, every maximal ideal $I$ is prime. I think my proof was sort of going in the right direction, but this MSE answer was what I now realize that I was going for. I am wondering where exactly my proof goes wrong; I was already suspicious when I had realized that I never used the fact $ab \in I$ anywhere. Let $I \leq R$ be a maximal ideal, and let $ab \in I$ where $a, b \in R$ . Suppose that $a$ and $b$ are contained in $R$ but not in $I$ . Without loss of generality, let $J = I \cup \{a\}$ . Since $I$ is maximal and does not contain $a$ , we must have $J = R$ , so $1 \in J$ . This means that $1 = rj$ for some $j \in J$ and $r \in R$ . But since $1 \neq rx$ for any $x \in I$ , we must have $1 = r(ax) = a(rx)$ , which implies that $a$ and $rx$ are multiplicative inverses. But $a^{-1} = rx$ implies that $a^{-1}$ exists in $I$ , meaning that $1 \in I$ . This is a contradiction, so $J = I$ and $a \in I$ . Therefore, $I$ is prime. Th
|
The proof fails at the step that $1=r(ax)$ for some $x \in I$ . The first reason why this does not need to hold is that $(I,a)$ is the set of all elements $i+ax$ , where $i \in I$ , $x \in R$ . We cannot conclude that $i=0$ , which I believe is what you are doing implicitly here. Secondly, even if this was true we would only require $x \in R$ , not $x \in I$ .
|
|abstract-algebra|ring-theory|solution-verification|ideals|maximal-and-prime-ideals|
| 1
|
Exception where $L^2$-Holder condition does not hold
|
Recently, I have tried to resolve the following problem. Let periodic $f$ be defined by the following Fourier series, \begin{align} f(x) = \sum_{k=1}^\infty \frac{1}{k^{3/2}} e^{ikx}. \end{align} Prove that $f$ belongs to Nikol'skii space $H^1_2(-\pi,\pi)$ but \begin{align} \frac{1}{2\pi}\int_{-\pi}^\pi \vert f(x+h) - f(x) \vert^2 dx \geq \frac{4h^2}{\pi^2}\log \frac{\pi}{\vert h \vert} \end{align} for $0 . It is not that hard to prove the first claim, since $f$ is absolutely convergent, $f\in L^1(-\pi,\pi)$ ; hence, we can integrate the series term by term. This yields that \begin{align} c_n(f)=\frac{1}{2\pi}\int_{-\pi}^\pi \sum_{k=1}^\infty \frac{e^{-ix(n-k)}}{k^{3/2}}dx = \frac{1}{2\pi} \sum_{k=1}^\infty \frac{1}{k^{3/2}} \int_{-\pi}^\pi e^{-ix(n-k)} dx = \frac{1}{n^{3/2}} \end{align} if $n=k$ . That is, $c_n(f)=0$ for $n\leq0$ . Thus, we have for all $0\leq j \in \mathbb{Z}$ , \begin{align} \sum_{2^j\leq \vert n \vert which implies that $f\in H^1_2(-\pi,\pi)$ . For any $0 , $f\in H
|
Now I have fully resolved the OP. It was my simple mistake. Since $\vert \sin(nh/2) \vert \geq (2/\pi)(n\vert h\vert /2)$ for all $n$ satisfying $n\vert h \vert \leq \pi$ due to the concavity of $\sin(z)$ on $z\in[0,\pi/2]$ , then we have \begin{align} \frac{1}{2\pi}\int_{-\pi}^\pi \vert f(x+h) - f(x) \vert^2 dx \geq \frac{4h^2}{\pi^2}\sum_{n\vert h \vert \leq \pi} \frac{1}{n}. \end{align} There is an integer $N > 0$ such that $N \vert h \vert \leq \pi < (N+1)\vert h \vert$ by the Archimedean property, and comparing with a Riemann sum (since $1/n$ is monotonically decreasing) implies that \begin{align} \sum_{n=1}^N \frac{1}{n} \geq \log(N+1) \geq \log\frac{\pi}{\vert h \vert}. \end{align} This concludes the proof.
|
|fourier-analysis|fourier-series|
| 1
|
Maximal Volume of the Solid of Revolution of a Rectangle Constrained by an Ordinate Set
|
This is question Number 20 from Tom M. Apostol's Calculus Chapter 4 Section 4.21: A cylinder is obtained by revolving a rectangle about the x-axis, the base of the rectangle lying on the x-axis and the entire rectangle lying in the region between the curve $y=\frac{x}{x^2+1}$ and the x-axis. Find the maximum possible volume of the cylinder. The below is my attempt at a solution: A simple graph in desmos to illustrate the situation: https://www.desmos.com/calculator/hxktkpibfs Since the cylinder is a solid of revolution, we may restrict our consideration to only the x-y plane for if we have found the maximal rectangle the cylinder which results is also maximal. Because this is a rectangle, we may parameterize it based on x. If (x,0) is a vertex, then $(x,\frac{x}{x^2+1})$ is also a vertex. The y-coordinate of the third vertex must be equal to that of the second (because this is the side of a rectangle). Hence if the rectangle is $t-x$ long, then we must also have the points $(t,0)$ and
|
The rectangle with maximum area does not necessarily imply that the corresponding cylinder's volume is maximized, because while the area of a rectangle with dimensions $w$ (the height, which becomes the cylinder's radius) and $h$ (the base, which becomes its length) is $wh$ , the corresponding volume of the cylinder would be $V = \pi w^2 h$ . Maximizing $wh$ does not mean $V$ will be maximized. Another way to conceptualize this is that increasing $w$ increases the rectangle's area linearly, but the volume of the cylinder increases quadratically . This is where your error lies. First, observe that $$f(x) = \frac{x}{x^2 + 1} = \frac{1}{x + \frac{1}{x}}, \quad x > 0,$$ so that $f(\frac{1}{x}) = f(x)$ for all $x > 0$ . So if the rectangle has a vertex at $(x,0)$ , then the other three vertices must be located at $$(x,f(x)), (\tfrac{1}{x}, f(\tfrac{1}{x})), (\tfrac{1}{x}, 0).$$ Hence assume without loss of generality that $0 < x < 1$ , or equivalently, $0 < x < \frac{1}{x}$ . We then have the volume of the cylinder as $$V(x) = \pi f(x)^2 \left(\frac{1}{x} - x\right) = \pi \frac{x(1-x^2)}{(1+x^2)^2}.$$ C
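A quick numerical check of this setup (a sketch; plain grid search, nothing clever), which lands at $x \approx \sqrt{2}-1$ with volume $\approx \pi/4$ :

```python
import math

def V(x):
    # cylinder volume for the rectangle with corners at x and 1/x, 0 < x < 1
    f = x / (x * x + 1)
    return math.pi * f * f * (1 / x - x)

xs = [k / 100_000 for k in range(1, 100_000)]
best = max(xs, key=V)
print(best, V(best))  # best ~ sqrt(2) - 1 ~ 0.41421, V(best) ~ pi/4 ~ 0.78540
```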
|
|calculus|analysis|
| 1
|
Evaluate $\int_{0}^{\frac{\pi}{4}} \tan^{-1}\left(\sqrt{\frac{\cos 2x}{2\cos^2 x}}\right)\:dx$
|
Evaluate $$I=\int_{0}^{\frac{\pi}{4}} \tan^{-1}\left(\sqrt{\frac{\cos 2x}{2\cos^2 x}}\right)\:dx$$ I tried with substitution: $$\frac{\cos 2x}{2\cos^2 x}=\tan^2 y$$ So we have $$\sec^2 x=2-2\tan^2 y$$ Differentiating we get: $$2\sec^2 x\tan x\:dx=-4\tan y\sec^2 y\:dy$$ So $$dx=\frac{-\tan y\sec^2 y\:dy}{(1-\tan^2 y)\sqrt{1-2\tan^2 y}}$$ So we get $$I=-\int_{\tan^{-1}\left(\frac{1}{\sqrt{2}}\right)}^0\:\frac{y \tan y \sec^2 y\:dy}{(1-\tan^2 y)\sqrt{1-2\tan^2 y}}$$ I am stuck here?
|
By means of various manipulations that can be probably be more immediately condensed, I ended up recovering an integral that appears in another answer . $$\begin{align*} & \int_0^\tfrac\pi4 \arctan \sqrt{\frac{\cos(2x)}{2\cos^2x}} \, dx \\ &= \int_0^\tfrac\pi4 \arctan \sqrt{1 - \frac12\sec^2x} \, dx \\ &= \int_\tfrac\pi4^\tfrac\pi2 \arctan(\cos y) \frac{\cot y}{\sqrt{2\sin^2y-1}} \, dy & \sin y=\frac1{\sqrt2}\sec x \\ &= \int_0^\tfrac\pi4 \arctan(\sin y) \frac{\tan y}{\sqrt{2\cos^2y-1}} \, dy & y\to\frac\pi2-y \\ &= \int_0^1 \arctan \frac z{\sqrt{1+z^2}} \cdot \frac z{\sqrt{1-z^4}} \, dz & z=\tan y \\ &= \frac\pi4 \arctan\frac1{\sqrt2} - \frac12 \int_0^1 \frac{\arcsin z^2}{\sqrt{1+z^2}\left(1+2z^2\right)} \, dz & \text{by parts} \\ &= \frac\pi4 \arctan\frac1{\sqrt2} - \frac14 \int_0^1 \frac{\arcsin z}{\sqrt z \sqrt{1+z} (1+2z)} \, dz & z\to\sqrt z \\ &= \frac\pi4 \arctan\frac1{\sqrt2} - \frac14 \int_0^1 \frac{\arctan \frac z{\sqrt{1-z^2}}}{\sqrt z \sqrt{1+z} (1+2z)} \, dz & (*) \\ &= \
|
|integration|algebra-precalculus|definite-integrals|inverse-function|
| 0
|
Proving $3^{100} > 5\cdot10^{47}$ with integral representation
|
I want to prove that $3^{100} > 5\cdot10^{47}$ without using a calculator or any approximation of the logarithm. I thought about finding an integral representation of $(3^{100} - 5\cdot10^{47})$ with the integrand non-negative in the chosen integration interval (something like this ), but I don't know how to choose the integrand. Do you have any ideas? Or even any elegant way to prove the inequality?
|
We have by hand calculation $3^{10} =59049>59\cdot 10^3$ , $59^2=3481>348\cdot10$ , $348^2=121104>12\cdot 10^4$ . Chaining these together: $$3^{100}=(3^{10})^{10}>(59\cdot 10^3)^{10}=(59^2)^5\cdot 10^{30}>(348\cdot 10)^5\cdot 10^{30}=(348^2)^2\cdot 348\cdot 10^{35}>(12\cdot 10^4)^2\cdot 348\cdot 10^{35},$$ so $$3^{100}>348\cdot 12^2\cdot 10^{43}=5.0112\cdot 10^{47}>5\cdot 10^{47}.$$
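If a computer is allowed as a double-check (the hand proof itself needs none), Python's arbitrary-precision integers confirm each intermediate step and the final inequality exactly:

```python
# exact integer arithmetic, no approximation involved
assert 3 ** 10 == 59049 and 59049 > 59 * 10 ** 3
assert 59 ** 2 == 3481 and 3481 > 348 * 10
assert 348 ** 2 == 121104 and 121104 > 12 * 10 ** 4
assert 348 * 12 ** 2 * 10 ** 43 > 5 * 10 ** 47   # 50112 * 10^43 = 5.0112e47
assert 3 ** 100 > 5 * 10 ** 47
print("all inequalities hold")
```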
|
|integration|inequality|number-comparison|
| 0
|
Definition of a sober space
|
We say that a topological space is sober if every nonempty irreducible closed subset is the closure of a one-point set. Just to clarify, for the definition of sober, does it mean that every nonempty irreducible closed subset is the closure of some one-point set; or a subset is nonempty irreducible closed if and only if it is the closure of a one-point set? Is it guaranteed that the closure of a one-point set is irreducible and closed?
|
You've gotten your definition wrong: a space $X$ is sober iff for every nonempty irreducible closed subset $Z\subset X$ there exists a unique point $z\in X$ such that $\overline{\{z\}}=Z$ . (This is a unique existence requirement; your post originally only had an existence requirement.) In any case, in any topological space, the closure of any one-point set is an irreducible closed subset: if $x\in X$ is a point and $\overline{\{x\}}=Y\cup Z$ for nonempty closed subsets $Y,Z\subset X$ , then one of $Y$ and $Z$ must contain $x$ , and hence either $Y=\overline{\{x\}}$ or $Z=\overline{\{x\}}$ , proving $\overline{\{x\}}$ is irreducible.
|
|general-topology|algebraic-geometry|
| 1
|
What are $\mathbb Q$-conjugates?
|
I'm reading this paper from Keith Conrad and in Example 1.1 he states: I don't know the term " $\mathbb Q$ -conjugates" - what does that mean? I don't get why $\sqrt 2 \mapsto \sqrt 3$ is not possible in this case.
|
If $K/F$ is a field extension, which is to say $K$ is a field that contains $F$ as a subfield, and $\alpha$ is an element of $K$ but not $F$ , then the $F$ -conjugates of $\alpha$ are the elements that satisfy all polynomial expressions in $F$ that $\alpha$ also satisfies. Equivalently, you consider the Galois group $G$ of the extension, and say that $\beta$ is a conjugate of $\alpha$ if there's some $\sigma \in G$ such that $\sigma(\alpha) = \beta$ . In this specific case, we have $K = \mathbb{Q}(\sqrt 2, \sqrt 3)$ . The $\mathbb{Q}$ -conjugates of $\sqrt 2$ are all the $\mathbb{Q}$ -linear combinations of $\sqrt 2, \sqrt 3,$ and $1$ that satisfy the same rational polynomials that $\sqrt 2$ does. The automorphism $\sigma$ defined by $$\sigma(a + b\sqrt 2 + c\sqrt 3) = a - b\sqrt 2 + c\sqrt 3$$ demonstrates this conjugacy. For example, we have $(\sqrt 2)^2 - 2 = 0$ , and $(-\sqrt 2)^2 - 2 = 0$ . In fact, any polynomial with rational coefficients that has $\sqrt 2$ as a root also has $-
|
|abstract-algebra|galois-theory|
| 0
|
Is there a method to calculate a large number modulo another number?
|
Is there a (number theoretic or algebraic) trick to find a large number modulo some number? Say I have the number $123456789123$ and I want to find its value modulo some other number, say, $17$. It's not fast for me to find the prime factorisation first. It's also not fast to check how many multiples of $17$ I can "fit" into the large number. So I was wondering if there is any method out there to do this efficiently. I am looking for something like the other "magic trick" where you sum all the digits and take the result $\mod 9$.
|
The key fact: $1111111111111111 = 11 \cdot 17 \cdot 73 \cdot 101 \cdot 137 \cdot 5882353$ , so $17 \mid 10^{16}-1$ . The powers $10^k \bmod 17$ for $k = 1, 2, \dots, 16$ are $10, 15, 14, 4, 6, 9, 5, 16, 7, 2, 3, 13, 11, 8, 12, 1$ , and in particular $10^8 \equiv -1 \pmod{17}$ . So stack the number in blocks of $8$ decimal digits with alternating signs: $$123{,}456{,}789{,}123 \equiv -1 \cdot 1234 + 56{,}789{,}123 \equiv 56{,}787{,}889 \pmod{17}.$$ Next, every $2$ -digit block carries a factor $10^2 \equiv 15 \equiv -2$ and every $4$ -digit block a factor $10^4 \equiv 4 \pmod{17}$ , so splitting $56{,}787{,}889$ into the blocks $56 \mid 78 \mid 78 \mid 89$ : $$56{,}787{,}889 \equiv 4\,(-2 \cdot 56 + 78) + (-2 \cdot 78 + 89) \equiv -203 \equiv -16 \equiv 1 \pmod{17}.$$ So $123456789123 \equiv 1 \pmod{17}$ .
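As an addendum (my own sketch, not part of the original answer): the block-splitting idea is mechanical enough to script. The helper below uses only the fact that $10^8 \equiv -1 \pmod{17}$ and compares against Python's built-in `%`:

```python
# A small sketch of the digit-chunking idea for n mod 17, alongside a direct check.
def mod17_by_chunks(n: int) -> int:
    """Reduce n mod 17 by splitting into 8-digit chunks, since 10^8 = -1 (mod 17)."""
    sign, total = 1, 0
    while n > 0:
        n, chunk = divmod(n, 10**8)
        total += sign * chunk
        sign = -sign          # each higher chunk picks up another factor of -1
    return total % 17

print(mod17_by_chunks(123456789123))   # 1
print(123456789123 % 17)               # 1, direct check
```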
|
|abstract-algebra|number-theory|modular-arithmetic|
| 0
|
What are the prerequisites for Michael Spivak's monumental A Comprehensive Introduction to Differential Geometry?
|
What are the prerequisites for Michael Spivak's monumental A Comprehensive Introduction to Differential Geometry? In particular for volume 1? Are these 5 volumes self-consistent in the sense that a knowledge of the prerequisites of Vol.1 is sufficient to tackle all the volumes?
|
The previous answer lists things that are required to follow, more or less, the main text. If you want to solve most of the problems, you should know the following as well. Measure theory. General topology.
|
|differential-geometry|reference-request|
| 0
|
Conceptually Understanding the Mathematical Definition for an Envelope of Family of Curves
|
Assuming we have a one-parameter, two-dimensional, family of curves, given by $f(x, y, p) = 0$ , there are two requirements for the envelope (see https://en.wikipedia.org/wiki/Envelope_(mathematics)# ) of that family (assuming it exists): $$\begin{Bmatrix} &f(x, y, p) &= &0& \\ &\frac{\partial}{\partial p} \left[f(x, y, p)\right]&= &0&\end{Bmatrix}$$ Now, this first requirement makes sense, as the envelope curve must itself sit on a point from the family of curves. But I'm having trouble conceptually wrapping my mind around this second requirement and what it means .
|
I won't be adding anything else to the previous answers but a piece of imagination and loose interpretations to bring a physical intuition about. A moving point running along the envelope If you look carefully at the animation of a sliding bar on the envelope page of Wikipedia, you'll agree with me that we can think of the envelope as a rail $\Gamma(x)$ guiding a moving point. Let such a point $x$ be parameterized by $t$ , so $x=x(t)$ and $\Gamma(x(t))$ . The derivative, $$ \Gamma'(x(t)) = \frac{\partial\Gamma}{\partial x} \cdot \frac{\partial x}{\partial t} $$ The mobile point actually exists within a family of curves $ y = C(x,t)$ so the differential of its movement is, $$ dC(x,t) = \frac{\partial C}{\partial x} dx + \frac{\partial C}{\partial t} dt $$ Dividing by $dt$ , $$ \frac{dC(x,t)}{dt} = \frac{\partial C}{\partial x} \frac{dx}{dt} + \frac{\partial C}{\partial t} \frac{dt}{dt} $$ Or simply, $$ \frac{dC(x,t)}{dt} = \frac{\partial C}{\partial x} \frac{dx}{dt} + \frac{\partial C}{\partial t} $$
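To make the two conditions concrete, here is a small self-contained check (the family $y = tx - t^2$ is my own illustrative choice, not from the post): eliminating $t$ from $f = 0$ and $\partial f/\partial t = 0$ gives the envelope $y = x^2/4$, and each envelope point satisfies both equations.

```python
# Illustrative family (my choice): f(x, y, t) = y - t*x + t**2 = 0,
# i.e. the lines y = t*x - t**2. Setting df/dt = 0 gives t = x/2,
# and substituting back yields the envelope y = x**2 / 4.
def f(x, y, t):
    return y - t * x + t**2

def df_dt(x, y, t):
    return -x + 2 * t

# On the envelope: for each t, the point (x, y) = (2t, t**2) satisfies both equations.
for t in [-1.0, 0.5, 2.0]:
    x, y = 2 * t, t**2          # candidate envelope point
    assert abs(f(x, y, t)) < 1e-12
    assert abs(df_dt(x, y, t)) < 1e-12
    assert abs(y - x**2 / 4) < 1e-12
print("envelope y = x^2/4 touches every line of the family")
```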
|
|partial-differential-equations|intuition|curves|envelope|
| 0
|
Geometry problem: finding coordinates on outer circle using extrusion from centre through point of inner circle
|
My problem involves an inner circle (centre $I$ ) with a displaced placement within an outer circle (centre $O$ ). I have a point placed somewhere on the circumference of the inner circle ( $P$ ). I wish to extrude from point $P$ orthogonally until it reaches and intersects the outer circle at position $Q$ , so I need to calculate the position of $Q$ . The radius of the outer circle ( $r_1$ ), inner circle ( $r_2$ ) and displacement of inner circle ( $r_3$ ) are known. Any help would be greatly appreciated. Some background: I actually need to do this for multiple points on the inner circle ( $P_1$ , $P_2$ etc, but I have all these coordinates). What I have so far: $x_Q = x_I + \frac{(x_P - x_I) \cdot r_1}{\sqrt{(x_P - x_I)^2 + (y_P - y_I)^2}}$ $y_Q = x_I + \frac{(y_P - y_I) \cdot r_1}{\sqrt{(x_P - x_I)^2 + (y_P - y_I)^2}}$ which gives me something like this, but the magnitude is wrong: I then tried adding the original displacement of the inner circle, $x_Q = x_I + \frac{(x_P - x_I) \cd
|
We use polar coordinates. We can ignore point P because it lies on radius IQ anyway, and PQ is always perpendicular to the inner circle centered at I. No need even to label it. $$ IQ=r, ~~ IO =d, \text{ labeled in place of }r_3; $$ By the Cosine Rule on triangle $I Q_1 O$ we have $$r_1^2=r^2+d^2-2rd \cos(\pi-\theta),$$ $$ r^2+2rd \cos \theta +(d^2-r_1^2) =0 ~;$$ The solutions of this quadratic equation are $$ r_{1,2}=-d\cos \theta\pm \sqrt { r_1^2 - (d \sin \theta)^2} $$ The (extended) radius vector intersects the outer circle at two points $Q_1,Q_2$ , with $$ (x_{Q_1},y_{Q_1})= r_1( \cos\theta, \sin\theta); \quad (x_{Q_2},y_{Q_2})= r_2( \cos\theta, \sin\theta),~$$ where $r_1,r_2$ on the right denote the two roots above, measured from $I$ . EDIT A polar plot and Mathematica code is also attached: Slight label change. Red circle radii $\rho_{1,2}$ at $ Q_{1,2}$ is eccentric to circle of radius $\rho_2$ centered at I.
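If it helps, here is a small coordinate-level sketch (variable names mine) that solves the same quadratic directly from the Cartesian data in the question:

```python
import math

def outer_intersection(O, I, P, r1):
    """Point Q where the ray from I through P meets the circle of radius r1 about O.
    Solves |I + r*u - O| = r1 for r >= 0, with u the unit vector from I to P."""
    ux, uy = P[0] - I[0], P[1] - I[1]
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n
    wx, wy = I[0] - O[0], I[1] - O[1]          # vector from O to I
    b = ux * wx + uy * wy                      # u . (I - O)
    c = wx * wx + wy * wy - r1 * r1
    r = -b + math.sqrt(b * b - c)              # positive root of r^2 + 2*b*r + c = 0
    return (I[0] + r * ux, I[1] + r * uy)

# Check: inner circle of radius 1 centred at I = (0.5, 0), outer radius 2 about the origin.
Q = outer_intersection((0, 0), (0.5, 0), (1.5, 0), 2)
print(Q)  # (2.0, 0.0): the ray along +x hits the outer circle at x = 2
```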
|
|geometry|vectors|analytic-geometry|
| 0
|
Poisson distribution with big numbers
|
I failed this question at an exam and would like help in explaining how to solve it: Historical data show that approximately 70 000 vehicles cross Älvsborgsbron every day. What is the probability that more than 7 500 000 vehicles cross Älvsborgsbron in 100 days? For full points you must motivate and explain all assumptions and approximations. Thanks for the help Edit: I have tried to use the Poisson formula and end up with values that my calculator says don't work. I've also tried $\frac{7500000-7000000}{2645.75}$ but that gives me a result of 188.98, which doesn't make sense as I should get a percentage answer. My only thought about that number is that it is so high that there is basically zero percent chance of this happening.
|
You're on the right track. Assume a normal distribution (see here why) Assume a daily standard deviation - let's say $\sigma$ daily is 10,000. Calculate the Z-score for 7,500,000 crossings $$ Z = (7,500,000 - 7,000,000)/100,000 $$ I think your assumption in step 2, where you've assumed a variance of 7,000,000 is wrong and this has led to a very high Z-score. Your conclusion seems right - there's an extremely low chance of this occurring.
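For comparison, here is a sketch of the standard Poisson-to-normal route (my own working, not from this answer): if the 100-day total is modelled as Poisson, its variance equals its mean, so the standard deviation is $\sqrt{7\,000\,000} \approx 2645.75$, which reproduces the asker's 188.98 as a z-score:

```python
import math

# Poisson -> normal approximation: the 100-day total has mean lam = 100 * 70000,
# and for a Poisson variable the variance also equals lam.
lam = 100 * 70_000
z = (7_500_000 - lam) / math.sqrt(lam)
p_tail = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z > z) for a standard normal
print(round(z, 2))    # 188.98 -- the asker's figure is a z-score, not a probability
print(p_tail)         # underflows to 0.0: essentially impossible
```

Either variance assumption leads to the same conclusion: the event is astronomically unlikely.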
|
|probability|poisson-distribution|
| 0
|
Fourier transform of hyperbolic functions
|
I am looking for the Fourier transform of $x \tanh(x)$ in $\mathbb{R}$ . The Fourier transform on $\mathbb{R}\setminus\{0\}$ is $$2\sum_{n=0}^{\infty}(2n+1)\exp\left(-(2n+1)\frac{|t|}{2}\right).$$ But there is singularity at $0$ . How can I get the Fourier transform of $x \tanh(x)$ in $\mathbb{R}$ ?
|
The Fourier Transform of the distribution $t\tanh(t)$ is a tempered distribution. We can calculate its Fourier Transform in terms of the sum of the Fourier tranform of the $L^1$ funcion $f(t)=t\tanh(t)-|t|$ and the Fourier Transform of the tempered distribution $g(t)=|t|$ . We now proceed. FOURIER TRANSFORM OF $\displaystyle |t|$ : In THIS ANSWER , I developed the Fourier Transform for $g_a(t)=|t|^a$ for all real values of $a$ . In particular, for $a=1$ , we have $$\mathscr{F}\{ g_1\}(\omega)=-\frac2{\omega^2}$$ Note that the object $\mathscr{F}\{ g_1\}(\omega)=-2/\omega^2$ is not a function; it is a Tempered distribution. It does not assign a value for each $\omega$ . Rather, it is defined as how it acts on any Schwartz function $\phi$ in the following way. We have $$\langle \mathscr{F}\{ g_1\},\phi\rangle =\lim_{\delta\to0^+}\int_{|\omega|\ge\delta|} \left(\frac{-2}{\omega^2}\right)\left(\phi(\omega)-\phi(0)\right)\,d\omega $$ FOURIER TRANSFORM OF $\displaystyle \tanh(t)-|t|$ : Note
|
|fourier-transform|
| 1
|
Folland theorem 6.13
|
I read theorem 6.13 out of Folland's book. I was wondering what goes wrong if you take $q = \infty$. Is there a nice counterexample?
|
Consider $X=[0,1]$ with measure $\delta_0+\lambda$ , where $\lambda$ is the Lebesgue measure, while $\delta_0((0,1])=0,\delta_0(\{0\})=\infty$ . Let $g$ be the indicator function of $\{0\}$ : $\|g\|_{\infty}=1$ . Let $f \in L^1(X)$ : then $f(0)=0$ , thus $fg=0$ . Hence $\|\phi_g\|=0$.
|
|real-analysis|measure-theory|
| 0
|
solution-verification | Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$.
|
the question Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$ . the idea for the number to be rational both members have to be rational (*) because a is natural, it means that $\sqrt{q}$ and $\sqrt{q+1}$ should both be perfect squares, but they are also consecutive $\sqrt{q}+1=k^2+1$ => $\sqrt{q+1}=k^2+2k+1$ The equality would happen only if $2k=0 => k=0 => q=0$ I'm not sure of the part I noted (*), because I think I should also demonstrate this fact, but I don't know how. Hope one of you can tell me if my idea is right and how can I improve my answer! Thank you!
|
This is a "reader's digest" of the magic main part of @DS's answer . Assume $a>0$ , and let $$\alpha:=a+\sqrt q,\quad\bar\alpha:=a-\sqrt q,\quad\beta:=a+\sqrt{q+1}.$$ Note that: $\alpha(\beta-\bar\alpha)\ne0$ $\alpha\bar\alpha=a^2-q\in\Bbb Q$ $(\beta-\bar\alpha)(\beta-\alpha)=1\in\Bbb Q$ $(\alpha\beta-\alpha\bar\alpha)(\bar\alpha\beta-\alpha\bar\alpha)=\alpha\bar\alpha(\beta-\bar\alpha)(\beta-\alpha)$ . Therefore, if $\alpha\beta\in\Bbb Q$ then $\bar\alpha\beta\in\Bbb Q$ , hence $\beta=\frac{\alpha\beta+\bar\alpha\beta}{2a}\in\Bbb Q$ .
|
|rational-numbers|square-numbers|radical-equations|
| 0
|
If $H, K \leq G$ implies $H \subseteq K$ or $K \subseteq H$, then $G$ is a (not necessarily finite) $p$-group.
|
Let $G$ be a group with the following property: for every $H, K \leq G$ , either $H \subseteq K$ or $K \subseteq H$ . Show that there exists a prime number $p$ such that the order of every element of $G$ is a power of $p$ . It is straightforward to show the result above when $G$ is finite. Indeed, fix $g \in G$ and consider $|\langle g \rangle| = p^{k}m$ where $(p, m) = 1$ and $m \neq 1$ . Then, $g^{p^{k}}$ has order $m$ and $g^{m}$ has order $p^k$ . By the hypothesis, either $$\langle g^{p^k} \rangle \subseteq \langle g^m \rangle\text{ or }\langle g^m \rangle \subseteq \langle g^{p^k} \rangle.$$ Both cases contradict Lagrange's Theorem. Thus, $m = 1$ and the order of $g$ is $p^k$ . Given any other element in $G$ , say $h$ , we have either $\langle g \rangle \subseteq \langle h \rangle$ or $\langle h \rangle \subseteq \langle g \rangle$ , which means the order of $h$ is either a power of $p$ or a multiple of a power of $p$ . In the first case, we are done. In the second case, the argum
|
Let $x\in G$ . If $x$ had infinite order, then $H=\langle x^2\rangle$ and $K=\langle x^3\rangle$ are two subgroups of $G$ , but $H\not\subseteq K$ (since $x^2\in H\setminus K$ ), and $K\not\subseteq H$ (since $x^3\in K\setminus H$ ). Thus, given the assumption on $G$ , it follows that every element of $G$ has finite order. (To prove it directly without a proof by contradiction, let $x\in G$ , and consider $H=\langle x^2\rangle$ , $K=\langle x^3\rangle$ ; then either $H\leq K$ or $K\leq H$ ; if $H\leq K$ , then there exists $k$ such that $x^2=(x^3)^k = x^{3k}$ , so $x^{3k-2}=1$ , proving $x$ has finite order (since $3k-2\neq 0$ ); and if $K\leq H$ , then there exists $k$ such that $x^3=(x^2)^k = x^{2k}$ , so now $x^{2k-3}=1$ , and since $2k-3\neq 0$ , this shows $x$ has finite order. Either way, we conclude $x$ has finite order.) At this point, the rest of your argument shows every element has order a power of a prime, and that the prime is the same for all elements.
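To see the flavour of the claim in a computable special case (my own illustration, plain Python): for finite cyclic groups, the subgroups of $\mathbb Z_n$ correspond to the divisors of $n$ with containment given by divisibility, so the subgroup lattice is a chain exactly when $n$ is a prime power.

```python
# Subgroups of Z_n <-> divisors of n; H_d <= H_e iff d | e.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def subgroup_lattice_is_chain(n):
    ds = divisors(n)                       # already sorted ascending
    return all(ds[i + 1] % ds[i] == 0 for i in range(len(ds) - 1))

def is_prime_power(n):
    p = next(d for d in range(2, n + 1) if n % d == 0)   # smallest prime factor
    while n % p == 0:
        n //= p
    return n == 1

for n in range(2, 200):
    assert subgroup_lattice_is_chain(n) == is_prime_power(n)
print("for 2 <= n < 200: Z_n has totally ordered subgroups iff n is a prime power")
```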
|
|abstract-algebra|group-theory|p-groups|
| 1
|
Finding the expectation of the smallest value when $k$ numbers is taken from $1$ to $n$
|
Let X be the smallest value obtained when k numbers are randomly chosen from the set 1,...,n. Find E[X] by interpreting X as a negative hypergeometric random variable. This is Self Test Exercise 7.7 of Sheldon's A First Course in Probability. The way I approached this is to first consider (for example) P(X = 1). Using the suggestion to take X as a negative hypergeometric random variable, we have $$P(X = 1) = \frac{\binom{1}{1}\binom{n - 1}{k - 1}}{\binom{n}{k}}$$ My idea was that we have one element 1 and the rest $k - 1$ elements from ... the remaining $n - 1$ elements. Similarly, for $P(X = 2)$ , we have one way of getting the minimum element $2$ , and $\binom{n - 2}{k - 1}$ ways of getting the other elements (well, except 1). Similarly, we can deduce that $$P(X = i) = \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ As a result, I got $$E(X) = \sum_{i = 1}^n i \times \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ The answer in the book is $\frac{n + 1}{k + 1}$ , which is just so much more e
|
An easier way is to use the complementary CDF: $$P(X > i)=1-F_X(i),$$ to compute the expectation as follows: $$\mathbb E(X) = \sum_{i = 0}^{n-k}P(X > i),$$ which can be used for any non-negative random variable $X$ ; see here for more details. Indeed, $$P(X >i) = \frac{\binom{n - i}{k}}{\binom{n}{k}}.$$ Hence, $$\mathbb E(X) =\sum_{i = 0}^{n-k}\frac{\binom{n - i}{k}}{\binom{n}{k}}=\frac{\binom{n +1}{k+1}}{\binom{n}{k}}=\frac{n +1}{k+1}.$$ To compute the summation, I used the hockey-stick identity . Direct method: You could directly calculate the expectation using the PMF, $P(X= i)$ : $$\mathbb E(X) = \sum_{i = 1}^{n-k+1} i \times P(X = i)=\sum_{i = 1}^{n-k+1} i \times \frac{\binom{n - i}{k-1}}{\binom{n}{k}}.$$ To manage this, use the following $$\sum_{i = 1}^{n-k+1} i \times \binom{n - i}{k-1}=(n+1)\sum_{i = 1}^{n-k+1} \binom{n - i}{k-1} - \sum_{i = 1}^{n-k+1}(n+1-i) \times \binom{n - i}{k-1}$$ and then use the hockey-stick identity for each summation, but after applying the following
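As an exact cross-check (my own addition), the PMF sum can be evaluated with `fractions` and compared to the closed form $(n+1)/(k+1)$:

```python
from fractions import Fraction
from math import comb

# Exact evaluation of E[X] = sum_i i * C(n-i, k-1) / C(n, k).
def expected_min(n, k):
    return sum(Fraction(i * comb(n - i, k - 1), comb(n, k))
               for i in range(1, n - k + 2))

for n in range(2, 12):
    for k in range(1, n + 1):
        assert expected_min(n, k) == Fraction(n + 1, k + 1)
print("E[X] = (n+1)/(k+1) verified exactly for all 1 <= k <= n < 12")
```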
|
|probability|expected-value|
| 1
|
A question on the adjoint operator
|
Let $H_1,H_2$ be Hilbert spaces with inner products $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$ . Corresponding to every $T \in \mathcal B(H_1,H_2)$ , there is a unique element $T^*$ of $B(H_2,H_1)$ determined by the relation $$\langle T x_1, x_2 \rangle_2=\langle x_1, T^* x_2 \rangle_1$$ for all $x_1 \in H_1, x_2 \in H_2$ . Proof: Consider $\langle T x_1, x_2 \rangle_2$ as a function of $x_1$ for fixed $x_2$ . It is bounded and linear so that the Riesz representation theorem may be applied to see that there exists a unique $y \in H_1$ such that $\langle T x_1, x_2 \rangle_2=\langle x_1, y \rangle_1$ . Thus, we take $T^* x_2=y$ . This definition gives us a linear mapping. To see that it is bounded, first note that $T$ is necessarily the adjoint of $T^*$ . Then $$|| T^* x_2 ||^2_1= | \langle x_2,T T^* x_2 \rangle_2 |$$ $$\leq ||T|| || T^* x_2 ||_1 ||x_2||_2$$ Question: Why $|| T^* x_2 ||^2_1= | \langle x_2,T T^* x_2 \rangle_2 |$ ?
|
Question: Why $\| T^* x_2 \|^2_1= | \langle x_2,T T^* x_2 \rangle_2 |$ ? Because you just proved that $T^*$ exists and it is the adjoint of $T$ . Then $$ \|T^*x_2\|_1^2=\langle T^*x_2,T^*x_2\rangle_1=\langle T(T^*x_2),x_2\rangle_2 =\overline{\langle x_2,TT^*x_2\rangle_2}=\langle x_2,TT^*x_2\rangle_2, $$ where the conjugation is not necessary because it is applied to a real number.
|
|functional-analysis|proof-explanation|adjoint-operators|
| 0
|
Propositional Logic for variable tautologies?
|
My textbook states that x = x+1 is not a proposition. Why? I thought about domains but no matter the domain this statement is always false because 0 is not equal to 1. My textbook defines a proposition in the following way: A boolean proposition (or simply proposition) is a statement which is either true or false (sometimes abbreviated as T or F). We call this the truth value of the proposition.
|
At first it might be tempting to consider "x = x+1" as a proposition since it's obviously false under normal arithmetic: it has no solution since there is no real number that satisfies the equation. BUT such an interpretation is outside the domain of propositional logic, which deals with declarative statements and their truth values, not mathematical equations and their solutions. So the statement "x = x+1" isn't a boolean proposition, it is not asserting a fact that can be true or false. It's expressing a mathematical relationship that, in the algebra case, does not hold for any real number x. "x = x+1" could be seen as an assignment/increment statement in programming, where the value of "x+1" is being assigned to the variable "x", which is not a proposition either, but rather an operation or command.
|
|logic|
| 0
|
If $L$ is an interval and $C$ is convex, is $LC$ convex too?
|
Let $LC = \{\lambda v \mid \lambda \in L, v \in C\},$ where $C$ is some convex set of the vector space $V.$ If $L$ is an interval, is $LC$ convex? I thought and assumed so for a while, and some drawings suggested this to be true, however, today I tried to prove it analytically and the proof escaped me. We start with $\lambda u, \mu v \in L C$ and $t \in (0,1).$ Then $t \mu v + (1-t) \lambda u$ ...? No clue. I also tried writing $LC = \bigcup\limits_{\lambda \in L} \lambda C$ which is the union of convex sets; and I know that a tower of convex sets is convex but then I don't know how to prove this family $(\lambda C)_{\lambda \in L}$ is a tower. Other ideas, suggestions or a counter example appreciated.
|
For the sake of completeness, I provide a counterexample to my own question, thanks to Izaak van Dongen. Consider $V = \mathbf{R}^2$ and let $C = \{1\} \times [0,1]$ a convex set. Then $[-1,1]C = [-1,0]C \cup [0,1]C$ consists of two triangles that look like a bowtie at the origin, therefore not convex. (For $c \in C,$ $[0,1]c$ is just the line segment connecting the origin to $c,$ and $[-1,0]c$ is therefore the reflection through the origin of this segment of line. Then $(0,\frac{1}{2})$ does not belong to $[-1,1]C$ but it is the midpoint between two of the four corners of $[-1,1]C.$ )
|
|linear-algebra|convex-geometry|locally-convex-spaces|
| 0
|
Derivative of trace of antisymmetrical Matrix
|
it is easy to show that the derivative of the trace $Tr(A)$ with respect to $A$ , if $A$ is an $N \times N$ matrix, is $\frac{\partial Tr(A)}{\partial A} = I_{N \times N}$ However, if I now assume $A$ to be antisymmetric, $a_{ij} = -a_{ji}$ , the trace $Tr(A) = 0$ , but with the above result the derivative would still be $I_{N \times N}$ and not $0$ . In principle the above solution should not depend on what properties $A$ has. Yet it seems like it does. What am I missing here?
|
$ \def\l{\lambda} \def\o{{\tt1}} \def\BR#1{\Big[#1\Big]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\qif{\quad\iff\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ The utility of the gradient $(G)$ is that it can estimate (via a $\o^{st}$ order Taylor series) the change in the function $(f)$ which results from a small change $(dA)$ in the matrix argument $(A)$ $$\eqalign{ f\LR{A+dA} &= f\LR{A} \;&+\; \trace{G^TdA} \\ \trace{A+dA} &= \trace{A} \;&+\; \trace{dA} \\ }$$ where the last line sets $f\LR{A}=\trace{A}\,$ and $\,G=I,\:$ i.e. your particular function. If both $A$ and $dA$ are skew symmetric, then all of the terms in the Taylor series are zero $$\eqalign{ \trace{A} = \trace{dA} = 0 \qiq \trace{A+dA} = 0 \\ }$$ However, suppose you decide to
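A finite illustration of this first-order statement (my own sketch, plain Python): the gradient $I$ predicts the change $\operatorname{Tr}(A+dA)-\operatorname{Tr}(A)=\operatorname{Tr}(dA)$, which vanishes exactly when the perturbation $dA$ is itself skew-symmetric.

```python
# Trace of small matrices represented as lists of lists; no third-party libraries.
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[0, 2, -1], [-2, 0, 3], [1, -3, 0]]        # skew-symmetric
dA_skew = [[0, 0.1, 0], [-0.1, 0, 0], [0, 0, 0]]  # skew perturbation
dA_gen  = [[0.1, 0, 0], [0, 0.2, 0], [0, 0, 0.3]]  # general perturbation

add = lambda M, N: [[M[i][j] + N[i][j] for j in range(3)] for i in range(3)]
print(trace(add(A, dA_skew)) - trace(A))   # 0: no change along a skew direction
print(trace(add(A, dA_gen)) - trace(A))    # = Tr(dA_gen), approximately 0.6
```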
|
|calculus|matrices|matrix-calculus|
| 0
|
Does the existence of a continuous function on a topological space imply that that space is Hausdorff?
|
It makes sense to me that this would be true but I am not sure. My understanding of (at least one of the definitions of) a continuous function is that for every point $p$ in the function's domain, there exists a neighborhood $N$ around that point; there is a neighborhood $f(N)$ in the function's image that corresponds to $N$ ; and as the width of $N$ shrinks to $0$ , so does the width of $f(N)$ . This seems to imply that the topological space the function is defined on is Hausdorff, since for any two points $p_1$ and $p_2$ in the space, we can always find a neighborhood around $p_1$ that does not include $p_2$ and vice versa. Otherwise, our function could not be continuous at $p_1$ or $p_2$ . If this is not true, are there any other conditions on the function or the space that could make this true?
|
There are some major misunderstandings here. First of all let's talk about continuity at a point. The proper definition is: A function $f:X\to Y$ between topological spaces is continuous at $x\in X$ if for any open neighbourhood $V$ of $f(x)$ in $Y$ there is an open neighbourhood $U$ of $x$ in $X$ such that $f(U)\subseteq V$ . Then we will say $f:X\to Y$ is continuous if it is continuous at every point. The (global) continuity has a simpler description though: A function $f:X\to Y$ is continuous if and only if $f^{-1}(U)$ is open in $X$ for any open subset $U$ of $Y$ . Therefore it is quite easy to see that given two nonempty topological spaces $X$ and $Y$ every constant function $X\to Y$ is continuous. Sometimes there aren't other continuous functions (e.g. when $X$ is connected while $Y$ totally disconnected), but there always are at least as many as elements of $Y$ . In particular whether those spaces are Hausdorff or not is irrelevant. Also topological spaces don't have a concept of width of a neighbourhood.
|
|general-topology|continuity|
| 1
|
Finding the expectation of the smallest value when $k$ numbers is taken from $1$ to $n$
|
Let X be the smallest value obtained when k numbers are randomly chosen from the set 1,...,n. Find E[X] by interpreting X as a negative hypergeometric random variable. This is Self Test Exercise 7.7 of Sheldon's A First Course in Probability. The way I approached this is to first consider (for example) P(X = 1). Using the suggestion to take X as a negative hypergeometric random variable, we have $$P(X = 1) = \frac{\binom{1}{1}\binom{n - 1}{k - 1}}{\binom{n}{k}}$$ My idea was that we have one element 1 and the rest $k - 1$ elements from ... the remaining $n - 1$ elements. Similarly, for $P(X = 2)$ , we have one way of getting the minimum element $2$ , and $\binom{n - 2}{k - 1}$ ways of getting the other elements (well, except 1). Similarly, we can deduce that $$P(X = i) = \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ As a result, I got $$E(X) = \sum_{i = 1}^n i \times \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ The answer in the book is $\frac{n + 1}{k + 1}$ , which is just so much more e
|
This approach uses the principle of symmetry to find the expected value. Distribute the $n$ numbers in ascending order uniformly on a $0-1$ scale, so they are at $\frac{m}{n+1},$ for $m=1,2,3,...n$ Similarly, the $k$ sampled numbers are at $\frac1{k+1}, \frac2{k+1}, \frac3{k+1} ... \frac{k}{k+1}$ on an average . (The $n$ numbers partition the scale into $n+1$ equal segments and similarly the $k$ sampled numbers partition the scale into $k+1$ equal segments). We have to find the point at which the minimum sampled point lies, i.e. $\frac{1}{k+1} = \frac{m}{n+1}$ , $$=> m = \frac{n+1}{k+1}$$
|
|probability|expected-value|
| 0
|
Is this induced mapping continuous?
|
Let $X,Y,Z$ be topological spaces, and let $f:X\to Y$ and $g:X\to Z$ be continuous functions. Suppose that $f$ is constant on each fiber of $g$ , i.e. for every $z\in Z$ , the fiber $g^{-1}(z) \subseteq X$ is sent to a single point $y_z\in Y$ . Then clearly there exists a mapping $Z \to Y$ sending $z$ to $y_z$ . Is this mapping continuous?
|
Does not have to be. Consider $X=Y=Z$ as sets and $f(x)=g(x)=x$ . Then the induced function $Z\to Y$ is the identity as well. Now put discrete topology on $X$ and $Y$ and a non-discrete on $Z$ .
|
|general-topology|
| 1
|
Evaluate $\int_{0}^{\frac{\pi}{4}} \tan^{-1}\left(\sqrt{\frac{\cos 2x}{2\cos^2 x}}\right)\:dx$
|
Evaluate $$I=\int_{0}^{\frac{\pi}{4}} \tan^{-1}\left(\sqrt{\frac{\cos 2x}{2\cos^2 x}}\right)\:dx$$ I tried with substitution: $$\frac{\cos 2x}{2\cos^2 x}=\tan^2 y$$ So we have $$\sec^2 x=2-2\tan^2 y$$ Differentiating we get: $$2\sec^2 x\tan x\:dx=-4\tan y\sec^2 y\:dy$$ So $$dx=\frac{-\tan y\sec^2 y\:dy}{(1-\tan^2 y)\sqrt{1-2\tan^2 y}}$$ So we get $$I=-\int_{\tan^{-1}\left(\frac{1}{\sqrt{2}}\right)}^0\:\frac{y \tan y \sec^2 y\:dy}{(1-\tan^2 y)\sqrt{1-2\tan^2 y}}$$ I am stuck here?
|
Substitute $\tan x =\sin y$ \begin{align} & \int_{0}^{\pi/4 } \tan^{-1} \sqrt{\frac{\cos2x}{2\cos^2x}}\ dx =\int_0^{\pi/2}\frac{\cos y}{1+\sin^2 y}\tan^{-1}\frac{\cos y}{\sqrt2} \ dy\\ &\>\>\>\>\>=\int_0^{\pi/2}\int_0^{\pi/4} \frac{\sqrt2 \cos^2 y}{(1+\sin^2 y)(2\cos^2 x+ \cos^2y\sin^2 x)}dx\ dy\\ &\>\>\>\>\>= \frac\pi2\int_0^{\pi/4} \bigg(1-\frac{\cos x}{\sqrt{1+\cos^2x}}\bigg)dx=\frac\pi2\bigg(\frac\pi{4}-\frac\pi6\bigg)=\frac{\pi^2}{24} \end{align}
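A quick numerical sanity check of the value (my own addition; plain-Python Simpson's rule):

```python
import math

def integrand(x):
    return math.atan(math.sqrt(math.cos(2 * x) / (2 * math.cos(x) ** 2)))

def simpson(f, a, b, n=2000):                 # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

approx = simpson(integrand, 0.0, math.pi / 4)
print(approx, math.pi ** 2 / 24)   # both approximately 0.411234
```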
|
|integration|algebra-precalculus|definite-integrals|inverse-function|
| 0
|
A function is convex if and only if its gradient is monotone.
|
Let a convex $ U \subset_{op} \mathbb{R^n} , n \geq 2$, with the usual inner product. A function $F: U \rightarrow \mathbb{R^n} $ is monotone if $ \langle F(x) - F(y), x-y \rangle \geq 0, \forall x,y \in \mathbb{R^n}.$ Let $f:U \rightarrow \mathbb{R}$ differentiable. Show that $f$ is convex $\iff \nabla f:U \rightarrow \mathbb{R^n}$ is monotone. My attempt on the right implication: I already proved that if $f$ is convex and 2-differentiable then $f''(x) \geq 0$. But this exercise only says f is 1-differentiable. Then I tried the following: $f$ is convex $\iff \forall x,y \in U $ the function $\varphi:[0,1] \rightarrow \mathbb{R}$, defined by $ \varphi(t) = f((1-t)x+ty)$ is convex. Then $\varphi'$ is non-decreasing, then $\nabla \varphi(x) \geq 0$... but I'm stucked here. My attempt on the left implication: $ |\nabla \varphi (x) - \nabla \varphi (y)|| x-y| \geq | \langle \nabla \varphi (x) - \nabla \varphi (y), x-y \rangle | \geq 0$ And so $ |\nabla \varphi (x) - \nabla \varphi (y)| \ge
|
Argument for why gradient monotonicity gives convexity: Suppose the gradient is monotone and fix any $x$ , $y$ . We can reparametrize $F$ to $$ G(t) = F(x + t(y-x)). $$ Then $G(0) =F(x), G(1)=F(y)$ . Moreover: $$ G'(t) = \nabla F(x + t(y-x)) \cdot(y-x). $$ Now, notice: $$ [G'(t)-G'(0)]t = [\nabla F(x + t(y-x)) - \nabla F(x) ]\cdot[(x+t(y-x)) - x], $$ and so monotonicity tells you that $G'(t)\geq G'(0)$ . Then we can write the following: $$ G(1) = G(0)+ \int_0^1 G'(t)dt \geq G(0)+\int_0^1 G'(0) dt = G(0)+\nabla F(x) \cdot(y-x). $$ This gives: $$ F(y) \geq F(x) + \nabla F(x) \cdot(y-x). $$ Since this holds for every $y$ , $\nabla F(x)$ is a subgradient of $F$ at $x$ . Since this argument also holds for every $x$ , $F$ has a subgradient everywhere and so must be convex.
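A quick numeric illustration of the two conditions (my own one-dimensional sketch with $f(x)=x^4$, not part of the argument above): the derivative is monotone, and the first-order inequality holds at every pair of points.

```python
import random

f = lambda x: x ** 4      # convex
df = lambda x: 4 * x ** 3  # its (monotone increasing) derivative

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    assert (df(x) - df(y)) * (x - y) >= 0          # monotone gradient
    assert f(y) >= f(x) + df(x) * (y - x) - 1e-9   # first-order convexity condition
print("monotone derivative and first-order condition agree for f(x) = x^4")
```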
|
|real-analysis|convex-analysis|multivalued-functions|
| 0
|
Upper bound of the integral $\int_\delta^\infty t^m e^{-\nu t^2} dt$
|
I am reading Wong's book on "Asymptotic Approximations of Integrals". On page 497, the book recalls (without proof) the following estimate: for all $\delta>0$ and $\nu>1$ , $$ \int_\delta^\infty t^m e^{-\nu t^2} dt \le K_\delta e^{-\nu \delta^2}, $$ where $K_\delta$ is a constant independent of $\nu$ . May I know whether the above estimate is a well-known result? Could you provide a reference for it?
|
Performing the change of integration variables from $t$ to $s$ via $ t = \delta \sqrt {1 + s}$ gives $$ \int_\delta ^{ + \infty } {t^m {\rm e}^{ - \nu t^2 } {\rm d}t} = \frac{{\delta ^{m + 1} }}{2}{\rm e}^{ - \nu \delta ^2 } \int_0^{ + \infty } {(1 + s)^{(m - 1)/2} {\rm e}^{ - \nu \delta ^2 s} {\rm d}s} . $$ Since $\nu>1$ , we can assert that $$ \int_0^{ + \infty } {(1 + s)^{(m - 1)/2} {\rm e}^{ - \nu \delta ^2 s} {\rm d}s} \le \int_0^{ + \infty } {(1 + s)^{(m - 1)/2} {\rm e}^{ - \delta ^2 s} {\rm d}s} . $$ Hence, the claim follows by letting $$ K_\delta = \frac{{\delta ^{m + 1} }}{2}\int_0^{ + \infty } {(1 + s)^{(m - 1)/2} {\rm e}^{ - \delta ^2 s} {\rm d}s} . $$
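A numeric spot-check of the estimate (my own addition; the truncation points 30 and 60 are arbitrary but large enough that the neglected tails are negligible):

```python
import math

# Simple trapezoid sums on truncated ranges stand in for the improper integrals.
def trapezoid(f, a, b, n=20000):
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

m, delta = 3, 0.8
# K_delta = delta^(m+1)/2 * integral_0^inf (1+s)^((m-1)/2) exp(-delta^2 s) ds
K = delta ** (m + 1) / 2 * trapezoid(
    lambda s: (1 + s) ** ((m - 1) / 2) * math.exp(-delta ** 2 * s), 0.0, 60.0)
for nu in (1.5, 3.0, 10.0):
    tail = trapezoid(lambda t: t ** m * math.exp(-nu * t * t), delta, 30.0)
    assert tail <= K * math.exp(-nu * delta ** 2)
print("bound verified for m = 3, delta = 0.8, nu in {1.5, 3, 10}")
```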
|
|inequality|improper-integrals|bounds-of-integration|
| 0
|
Proving $\sum_{i=m+1}^\infty \frac{m!^{i/m}}{i!}<1$.
|
One of these days, the user @AspiringMat was trying to prove that, for any integer $m\ge 1$ , $$\sum_{i=m+1}^\infty \frac{m!^{i/m}}{i!}<1$$ and asked for help here on MSE. I've spent too much of my time attempting to solve this. However when I finally got to it, I realized he deleted the question ! I really don't want all my effort to be wasted, so... I'm asking and answering his question once again here. My solution is really messed up, so feel free to post your own as well.
|
It's enough to prove the summation sequence is strictly decreasing. $$\begin{aligned} \sum_{i=m+1}^\infty \frac{m!^{i/m}}{i!} &> \sum_{i=m+2}^\infty \frac{(m+1)!^{i/(m+1)}}{i!}&\iff\\ \frac{m!^{(m+1)/m}}{(m+1)!} &> \sum_{i=m+2}^\infty \frac{(m+1)!^{i/(m+1)}}{i!} - \sum_{i=m+2}^\infty \frac{m!^{i/m}}{i!}&\iff\\ \frac{m!^{1/m}}{m+1} &> \sum_{i=m+2}^\infty \frac{((m+1)!^{1/(m+1)}-m!^{1/m})^i}{i!} \end{aligned}$$ Lemma. For $m\ge 1$ we have $(m+1)!^{1/(m+1)}-m!^{1/m}<1$ . Proof. Freestyle approximations incoming... $$\begin{aligned} (m+1)!^{1/(m+1)}-m!^{1/m} & as we wanted. $\square$ (1) Stirling's inequality : $\displaystyle \frac{\sqrt{2\pi n}\,n^{n}}{e^{n-1/(12n+1)}}<n!<\frac{\sqrt{2\pi n}\,n^{n}}{e^{n-1/(12n)}}$ . (2) $e^{1-1/12(m+1)^2}>e^{1-1/m(12m+1)}$ and $\pi^{1/2m} > \pi^{1/2(m+1)}$ . (3) $\dfrac{\pi^{1/2m}}{e^{1-1/12(m+1)^2}}$ reaches its maximum for $m=1$ and $\dfrac{\sqrt\pi}{e^{1-1/48}}\approx 0.66577\dots . (4) $(2(m+1))^{1/2(m+1)}-(2m)^{1/2m} . (5) $\sqrt2$ is the maximum of $x^{1/x}$ for $x$ positive integer. Therefore, it is
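For whatever it's worth, a quick numeric check of the original claim (my own addition, independent of the proof above); terms are computed in log space via `lgamma` to avoid overflow:

```python
import math

# Checks sum_{i=m+1}^infinity m!^{i/m} / i! < 1 for small m, truncating the tail.
def tail_sum(m, n_terms=500):
    log_fact_m = math.lgamma(m + 1)           # log(m!)
    return sum(math.exp(i / m * log_fact_m - math.lgamma(i + 1))
               for i in range(m + 1, m + 1 + n_terms))

for m in range(1, 11):
    assert tail_sum(m) < 1.0
print([round(tail_sum(m), 4) for m in range(1, 6)])
```

For $m=1$ the sum is exactly $e-2\approx 0.7183$, a handy sanity anchor.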
|
|real-analysis|sequences-and-series|taylor-expansion|
| 0
|
The $ 1 $-form $ \frac{x\,\mathrm dy - y\,\mathrm dx}{x^2 + y^2} $ in polar coordinates
|
I have a silly question about the pullback of $ 1 $ -forms. Take two smooth manifolds $ M $ and $ N $ and some smooth map $ F\colon M\to N $ between them. Take some $ 1 $ -form $ \eta $ on $ N $ . Suppose that with respect to some coordinates $ (y^1,\dots,y^m) $ on an open subset $ V $ of $ N $ we can write $$ \eta{\restriction_V} = \sum_{i = 1}^m \eta_i\mathrm dy^i $$ for some smooth functions $ \eta_1,\dots,\eta_m\colon V\to \mathbb R $ . Then $$ F^*\eta{\restriction_{F^{-1}(V)}} = \sum_{i = 1}^m (\eta_i\circ F)\,\mathrm d(y^i\circ F) $$ by the properties of the pullback map $ F^*\colon \Omega_N^1(V)\to \Omega_M^1(F^{-1}(V)) $ . Now, I was playing with the standard example $$ \eta = \frac{x\,\mathrm dy - y\,\mathrm dx}{x^2 + y^2} $$ on the manifold $ X = \mathbb R^2\setminus\{\text{non positive $ x $ semi-axis}\} $ . According to Lee's Introduction to Smooth Manifolds Example 11.28, one can find the polar coordinate expression of $ \eta $ simply by computing the pullback $$ \eta = \m
|
OK, after a bit of thought I think I can answer myself. I'll do it in gory detail hoping it can be of some help to other people struggling with these ideas and most importantly with the classical notation. Our goal is to write the $ 1 $ -form $$ \eta = \frac{x\,\mathrm dy - y\,\mathrm dx}{x^2 + y^2} = \frac{x}{x^2 + y^2}\mathrm dy + \frac{-y}{x^2 + y^2}\mathrm dx $$ defined on the space $$ X = \mathbb R^2\setminus{\text{non positive $ x $ semi-axis}} $$ with respect to the local (indeed, global) frame $ \{\mathrm dr,\mathrm d\theta\} $ induced by the coordinates $ \phi = (\phi^1,\phi^2) = (r,\theta) $ on $ X $ , where $$ \phi^{-1}\colon \left]0,+\infty\right[\times \left]-\pi,\pi\right[\xrightarrow{{\cong}} X\text{,}\qquad \phi(r,\theta) = \begin{pmatrix}r\cos\theta\\ r\sin\theta\end{pmatrix} $$ is the usual polar-coordinates parametrization of $ X $ . First of all, a little comment on the notation $$ \frac{x}{x^2 + y^2}\text{,}\qquad \frac{-y}{x^2 + y^2} \tag{*}\label{components} $$ u
|
|differential-geometry|differential-forms|
| 0
|
What's the intuition behind non-integer exponents/powers
|
Consider some $a \in \mathbb{R}$ and $x \in \mathbb{R}\backslash \mathbb{N}$. Is there some intuition to be had for the number $a^x$? For example the intuition of $a^2$ is obvious; it's $a*a$ which I can think about with real world objects such as apples (when $a \in \mathbb{N}$). What about $a^{1.9}$?
|
It has to do with extending exponentiation to rational number exponents using various properties of multiplying powers and powering powers. We know that for positive real $r$ , $$(r^a)^b=r^{ab}$$ Without loss of generality, let $z$ be an integer. Using the law of powers of powers. $$(r^{1/z})^z=r^{1z/z}=r^1=r$$ Taking the ( $z$ )th root of the equation above. $$r^{1/z}=\sqrt[z]r$$ Let $z_1$ and $z_2$ be integers. $$(r^{1/z_1})^{z_2}=r^{z_2/z_1}$$ As such, we can well-define arbitrary rational exponents for positive reals. A positive real raised to an arbitrary rational power has exactly one positive real value for each rational number. And because all real numbers are arbitrarily approximated by rational numbers and can be ordered among the rational numbers, arbitrary real powers can likewise be approximated. We can define $\pi^{\pi}$ and even conclude it is between $\pi^{3.141}$ and $\pi^{3.142}$ . Going beyond positive real bases involves $i^2=-1$ , natural logarithms, natural antilo
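Since all the laws above are numerical facts, they can be spot-checked directly. The sketch below (base and exponents are arbitrary choices made here) verifies the root law, a rational exponent, and the rational squeeze around $\pi^{\pi}$:

```python
import math

r = 5.0  # any positive real base (value chosen here for illustration)

# (r^(1/z))^z recovers r, so r^(1/z) behaves as the z-th root
for z in range(2, 7):
    assert math.isclose((r ** (1.0 / z)) ** z, r, rel_tol=1e-12)

# rational exponents: r^(p/q) == (r^(1/q))^p
assert math.isclose(r ** (3.0 / 4.0), (r ** (1.0 / 4.0)) ** 3, rel_tol=1e-12)

# irrational exponents are squeezed by rational ones
assert math.pi ** 3.141 < math.pi ** math.pi < math.pi ** 3.142
print("all checks passed")
```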
|
|intuition|exponentiation|
| 0
|
Finding the extreme points of a polyhedron
|
Let $$X = \{ x \in \mathbb R^n: 1/2 \leq x_1 \leq 1 \text{ and } x_{i-1} \leq x_i \leq 2x_{i-1}\forall i=2,\ldots,n \}.$$ Prove that $X$ is a polyhedron and determine all of its extreme points. My attempt: First, we'll get $X$ in the form $X = P(A, b) = \{ x \in \mathbb R^n \mid Ax \leq b\}$ . In order to do so, we can write the constraints as $$ -x_1 \leq -1/2,\,\, x_1 \leq 1, \\ x_1 - x_2 \leq 0,\,\, x_2 - x_3 \leq 0,\,\, x_3 - x_4 \leq 0,\,\, \ldots, \,\, x_{n-1} - x_n \leq 0, \\ x_2 - 2x_1 \leq 0, \,\, x_3 - 2x_2 \leq 0, \,\, \ldots, \,\, x_n - 2 x_{n-1} \leq 0. $$ The first line corresponds to the matrix $$ A_1 = \begin{pmatrix} -1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0\end{pmatrix} \in \mathbb R^{2 \times n} $$ the second one to $$ A_2 = \begin{pmatrix} 1 & -1 \\ & 1 & -1 \\ & & 1 & -1 \\ & & & \ddots & \ddots \\ & & & & 1 & -1 \end{pmatrix} \in \mathbb R^{(n-1)\times n} $$ and the third one to $$ A_3 = \begin{pmatrix} -2 & 1 \\ & -2 & 1 \\ & & -2 & 1 \\ & & & \ddots & \ddots \\
|
Your $A$ and $b$ are correct. You have $n$ pairs of inequalities, and at most one inequality in each pair can be active, so there are $2^n$ extreme points, which you can obtain by independently selecting $$x_1\in\{1/2,1\}, x_2\in\{x_1,2x_1\}, x_3\in\{x_2,2x_2\},\dots,x_n\in\{x_{n-1},2x_{n-1}\}.$$
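As a sanity check on this count, here is a small brute-force sketch for $n=3$ (the dimension is a choice made here; nothing else is assumed beyond the constraints in the question). It builds the $2^3$ candidate points and verifies they are distinct, feasible, and not midpoints of one another, a necessary condition for being extreme:

```python
from itertools import product

n = 3  # small dimension for an exhaustive check

# build the 2^n candidate extreme points by picking, in each pair of
# bounds, which one is tight
candidates = []
for choices in product([0, 1], repeat=n):
    x = []
    prev = None
    for i, c in enumerate(choices):
        if i == 0:
            xi = 0.5 if c == 0 else 1.0
        else:
            xi = prev if c == 0 else 2 * prev
        x.append(xi)
        prev = xi
    candidates.append(tuple(x))

def feasible(x):
    ok = 0.5 <= x[0] <= 1.0
    for i in range(1, n):
        ok = ok and (x[i - 1] <= x[i] <= 2 * x[i - 1])
    return ok

assert all(feasible(x) for x in candidates)
assert len(set(candidates)) == 2 ** n   # all 2^n points are distinct

# no candidate is the midpoint of two other candidates
for i, x in enumerate(candidates):
    for j, y in enumerate(candidates):
        for k, z in enumerate(candidates):
            if j < k and i not in (j, k):
                mid = tuple((a + b) / 2 for a, b in zip(y, z))
                assert x != mid
print(len(candidates), "candidate extreme points verified")
```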
|
|geometry|optimization|convex-optimization|polytopes|
| 1
|
Series Expansion
|
An old problem from Whittaker and Watson I'm having issues with. Any guidance would be appreciated . Show that the function $$ f(x)=\int_0^\infty \left\{ \log u +\log\left(\frac{1}{1-e^{-u}} \right) \right\}\frac{du}{u}e^{-xu} $$ has the asymptotic expansion $$ f(x)\sim\frac{1}{2x}-\frac{B_1}{2^2x^2}+\frac{B_3}{4^2x^4}-\frac{B_5}{6^2x^6}+\ldots, $$ where $B_1, B_3, \ldots$ are Bernoulli's numbers. Show also that $f(x)$ can be developed as an absolutely convergent series of the form $$ f(x)=\sum_{k=1}^\infty\frac{c_k}{(x+1)(x+2)\ldots(x+k)}. $$
|
Answering the second part of the question. Performing the change of integration variables from $u$ to $t$ via $u = - \log (1 - t)$ yields $$ f(x) = \int_0^1 {\frac{1}{{ - (1 - t)\log (1 - t)}}\log \left( {\frac{{ - \log (1 - t)}}{t}} \right)(1 - t)^x {\rm d}t} , $$ provided $\operatorname{Re}(x)>0$ . We have the Maclaurin series expansion $$ \frac{1}{{ - (1 - t)\log (1 - t)}}\log \left( {\frac{{ - \log (1 - t)}}{t}} \right) = \sum\limits_{n = 1}^\infty {c_n \frac{{t^{n - 1} }}{{(n - 1)!}}} = \frac{1}{2} + \frac{{11}}{{24}}\frac{t}{{1!}} + \frac{7}{8}\frac{{t^2 }}{{2!}} + \ldots , $$ which converges absolutely and uniformly on any compact subset of $|t| < 1$ . Using the beta integral $$ \int_0^1 {t^{n - 1} (1 - t)^x {\rm d}t} = (n - 1)!\frac{{\Gamma (x + 1)}}{{\Gamma (x + n + 1)}} = \frac{{(n - 1)!}}{{(x + 1) \cdots (x + n)}}, $$ and term-by-term integration yield the absolutely convergent series $$ f(x) = \sum\limits_{n = 1}^\infty {\frac{{c_n }}{{(x + 1) \cdots (x + n)}}} , $$ for $\operatorname{Re}(x)>0$ .
|
|real-analysis|sequences-and-series|
| 0
|
Question about theorem 9.21 in Rudin's PMA
|
The theorem in question is Theorem 9.21 (screenshot omitted). Now, the part that I have a problem with is the following highlighted text, which invokes Theorem (5.10) (screenshot omitted). I don't understand what function plays the role of $f$ in (5.10). What I've tried is the function $$g(t)=f(x+v_{j-1}+th_je_j)$$ but its derivative depends on the derivative of $f$, which I don't know exists yet.
|
Your idea of looking at the map $g(t) := f(x+v_{j-1}+th_{j}e_{j})$ , which is defined on some open subset containing $[0,1]$ , namely $\{t\in \mathbb{R} : x+v_{j-1}+th_{j}e_{j} \in E \}$ , is indeed a good idea. To establish that $g$ is differentiable on $[0,1]$ , note that $g$ is a composition of the maps $\phi_{1}$ and $\phi_{2}$ (defined on suitable open subsets) given by $\phi_{1}(t) := th_{j}$ and $\phi_{2}(s) := f(x+v_{j-1}+se_{j})$ . The derivative of $\phi_{1}$ at $t$ is $h_{j}$ and the derivative of $\phi_{2}$ at $s$ is $(D_{j}f)(x+v_{j-1}+se_{j})$ , which can be more clearly observed from \begin{align*} \lim_{\xi\to 0}\frac{\phi_{2}(s+\xi ) - \phi_{2}(s)}{\xi}&= \lim_{\xi \to 0}\frac{f(x+v_{j-1}+(s+\xi )e_{j}) - f(x+v_{j-1}+se_{j})}{\xi} \\ &= \lim_{\xi \to 0}\frac{f((x+v_{j-1}+se_{j})+\xi e_{j}) - f(x+v_{j-1}+se_{j})}{\xi} \\ &= (D_{j}f)(x+v_{j-1} + se_{j}). \end{align*} By the chain rule, the derivative of $g = \phi_{2} \circ \phi_{1}$ at $t$ is $h_{j}(D_{j}f)(x+v_{j-1}+th_{j}e_{j})$ .
|
|derivatives|mean-value-theorem|
| 1
|
Proof of expected length in random division of $[0,1]$ interval
|
I have trouble fully understanding the proof in a formal manner for the expected length of the $k^{th}$ largest interval when we randomly divide the $[0,1]$ interval using $n$ points. The $k^{th}$ largest interval's expected length is equal to $$\frac{\frac{1}{k} + \frac{1}{k+1} + \dots + \frac{1}{n+1}}{n+1}$$ Proof : Without loss of generality, assume the $[0,1]$ segment is broken into segments of length $s_1 \geq s_2 \geq \dots \geq s_n \geq s_{n+1}$ , in that order. We are given that $ s_1 + \dots + s_{n+1} = 1$ , and want to find the expected value of each $s_k$ . Set $ s_i = x_i + \dots + x_{n+1} $ for each $ i = 1, \dots, n+1 $ . Then, we have $ x_1 + 2x_2 + \dots + (n+1)x_{n+1} = 1 $ , and want to find the expected value of $ s_k = x_k + \dots + x_{n+1} $ . If we set $y_i = ix_i $ , then we have $ y_1 + \dots + y_{n+1} = 1 $ , so by symmetry $ E[y_i] = \frac{1}{n+1} $ for all $ i $ . Thus, $ E[x_i] = \frac{1}{i(n+1)} $ for each $ i $ , and now by linearity of expectation $ E[s_k] = \sum_{i=k}^{n+1} \frac{1}{i(n+1)} = \frac{\frac{1}{k} + \dots + \frac{1}{n+1}}{n+1} $ .
|
Let $\left(U_0, \ldots, U_{n}\right)$ the vector of random intervals. It is known that $$U_i \sim \frac{\epsilon_i}{\sum_{i=0}^{n}\epsilon_i}$$ where $\epsilon_i \sim \mathcal{E}(1)$ and independent. Using the order statistic, $$S_i \sim \frac{\epsilon_{(n-i+1)}}{\sum_{i=0}^{n}\epsilon_{(i)}}$$ Since (see this ) $$\epsilon_{(n-i+1)} \sim V_{n+1} + V_{n} + \ldots + V_{i} = \sum_{k=i}^{n+1} V_k$$ where $V_i \sim \mathcal E(i)$ and independent. You can write $V_i = \frac1i W_i$ where $W_i \sim \mathcal E(1)$ and independent. So $$\epsilon_{(n-i+1)} = \sum_{k=i}^{n+1} \frac1k W_k$$ Let $$X_i = S_{i} - S_{i+1} \sim \frac{\frac1i W_i}{\sum_{i=1}^{n+1}\sum_{k=i}^{n+1} \frac1k W_k}$$ and $$Y_i = iX_i \sim \frac{W_i}{\sum_{k=1}^{n+1} W_k}.$$ This proves the symmetry you are looking for.
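The formula itself is easy to test by simulation. The sketch below (sample sizes are choices made here) sorts the $n+1$ spacings in decreasing order, as in the proof, and compares the empirical means with $\frac{1}{n+1}\sum_{i=k}^{n+1}\frac{1}{i}$:

```python
import random

random.seed(0)
n = 4            # number of cut points, giving n + 1 = 5 intervals
trials = 100_000

sums = [0.0] * (n + 1)
for _ in range(trials):
    cuts = sorted(random.random() for _ in range(n))
    pts = [0.0] + cuts + [1.0]
    # spacings sorted decreasingly, so index k-1 holds the k-th largest
    lengths = sorted((pts[i + 1] - pts[i] for i in range(n + 1)), reverse=True)
    for k in range(n + 1):
        sums[k] += lengths[k]

# E[s_k] = (1/k + 1/(k+1) + ... + 1/(n+1)) / (n+1), with s_1 >= ... >= s_{n+1}
for k in range(1, n + 2):
    expected = sum(1.0 / i for i in range(k, n + 2)) / (n + 1)
    assert abs(sums[k - 1] / trials - expected) < 3e-3
print("simulation matches the formula")
```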
|
|probability|combinatorics|proof-explanation|expected-value|
| 0
|
Confusion on Probability Problem
|
Suppose there are 25 indistinguishable balls which can go into 7 urns in such a way that all configurations of balls are equally likely. For example, (3, 0, 7, 9, 2, 1, 3) and (0, 18, 1, 1, 0, 0, 5) are equally likely, where the ith number is the number of balls in urn i. Given that all 7 urns have at least one ball, what is the probability that one of the urns has at least 15 balls? Hint: is it possible that more than one urn has at least 15 balls at a time? My thinking: Because each urn must have one ball, there are ${{25-1} \choose {7-1}} = {24 \choose 6}$ total solutions. The probability that one has exactly 15 is $$\frac{{7 \choose 1} \times {25 \choose 15} \times {6 \choose 6} \times {9 \choose 5}}{{24 \choose 6}}$$ I planned to do that for 16, 17, and 18, then add them. However, the numerator is significantly larger than the denominator, so this is clearly not the way to go.
|
Since the balls are indistinguishable, the different final configurations are entirely determined by how many balls are in each urn from $1$ to $k$ (here $k=7$ ). So the outcomes, as you mentioned in the comments, can be identified with $k$ -tuples $(m_1,\ldots, m_k)$ , where $m_i$ is the number of balls in urn number $i$ . Also, each ball has to go somewhere, so $m_1+\ldots+m_k= n$ , where $n$ is the total number of balls. So without any constraints, the total number of configurations is equal to the cardinality of the set $$S(n,k)=\{(m_1,\ldots, m_k):m_1+\ldots + m_k=n\}.$$ We note that $$|S(n,k)|=\binom{n+k-1}{k-1}=\binom{n+k-1}{n}$$ such tuples. This is valid for $n=0,1,\ldots$ and $k=1,2,\ldots$ . Note that the $m_i$ are allowed to be zero (because we aren't yet considering constraints such as "each urn must have at least one ball -- we'll get to that in a moment). This is the "stars and bars" argument (I've hard people take issue with this name and propose better names, but "star
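Both the stars-and-bars count and the conditional probability for the original numbers ($25$ balls, $7$ urns, threshold $15$) can be brute-forced, since $\binom{24}{6}=134596$ is small. A sketch:

```python
from itertools import combinations
from math import comb

n, k, big = 25, 7, 15

total = 0
hits = 0
# place 6 bars in the 24 gaps between 25 stars: compositions of 25
# into 7 positive parts (the "all urns nonempty" case)
for bars in combinations(range(1, n), k - 1):
    cuts = (0,) + bars + (n,)
    parts = [cuts[i + 1] - cuts[i] for i in range(k)]
    total += 1
    if max(parts) >= big:
        hits += 1

assert total == comb(n - 1, k - 1)   # stars and bars: C(24, 6) = 134596
# at most one urn can reach 15 (since 2 * 15 > 25); fix that urn, give it
# 14 extra balls, and compose the remaining 11 balls into 7 positive parts
assert hits == k * comb(n - (big - 1) - 1, k - 1)   # 7 * C(10, 6) = 1470
print("P =", hits, "/", total)
```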
|
|probability|combinatorics|probability-theory|
| 0
|
Series $\sum_{n=0}^{\infty}(-1)^n\binom{2n}{n}^5\left(\frac{1+4n}{2^{10n}}\right)=\frac{\Gamma^4(1/4)}{2\pi^4}$
|
$$\sum_{n=0}^{\infty}(-1)^n\binom{2n}{n}^5\left(\frac{1+4n}{2^{10n}}\right)=\frac{\Gamma^4(1/4)}{2\pi^4}$$ Is there any way to prove this? I don't even know where to start with this one. The following Generating Function is known. With $K$ as the Complete Elliptic Integral of the First Kind. $$K^2\left(\sqrt{\frac{1-\sqrt{1-x^2}}{2}}\right)=\frac{\pi^2}{4}\sum_{n=0}^{\infty}\binom{2n}{n}^3\frac{x^{2n}}{2^{6n}}$$ But to go from here to the Fifth Power will be almost impossible and then we would have to differentiate it too. So this approach is probably not the way to go. I believe it might be like one of those Ramanujan Series for $1/\pi$ but this one has a Gamma Term so I am not sure.
|
Using directly generalized hypergeometric functions, $$f(x)=\sum_{n=0}^{\infty}(-1)^n\,\binom{2n}{n}^5 \,\frac{(1+4n)}{x^n}$$ $$f(x)=\, _5F_4\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac {1}{2};1,1,1,1;-\frac{2^{10}}{x}\right)-$$ $$\frac {2^7}x \, _5F_4\left(\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac {3}{2};2,2,2,2;-\frac{2^{10}}{x}\right)$$ So, $$f\left(2^{10}\right)=\, _5F_4\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac {1}{2};1,1,1,1;-1\right)-$$ $$\frac{1}{8} \, _5F_4\left(\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac {3}{2};2,2,2,2;-1\right)$$ which is $$f\left(2^{10}\right)=\frac{16 }{\pi ^3}\big[ K(-1)\big]^2$$ that is to say the rhs.
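The identity is also easy to confirm numerically. The sketch below (truncation level chosen here) sums the alternating series with the central-binomial recurrence $r_{n+1} = r_n \frac{2n+1}{2n+2}$ for $r_n = \binom{2n}{n}/2^{2n}$ and compares against the closed form:

```python
import math

s = 0.0
r = 1.0      # r_n = C(2n, n) / 2^(2n), starting at r_0 = 1
sign = 1.0
for n in range(200_000):
    s += sign * r ** 5 * (1 + 4 * n)
    sign = -sign
    r *= (2 * n + 1) / (2 * n + 2)

# terms decay like n^(-3/2), so the alternating tail beyond 2e5 is ~3e-9
closed_form = math.gamma(0.25) ** 4 / (2 * math.pi ** 4)
assert abs(s - closed_form) < 1e-6
print(s, closed_form)
```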
|
|integration|sequences-and-series|summation|pi|elliptic-integrals|
| 0
|
Lagrange error in Taylor series
|
Question 15 (image omitted). I've tried solving this but I get $n = 4$ instead of $n = 5$, which is the right answer. I used the Lagrange remainder $\leq \frac{1}{(n+1)!}f^{(n+1)}(z)(x-c)^{n+1}$. I plugged in $1$ for $f^{(n+1)}(z)$ and $1.2$ for $x$, and then solved until the inequality became true, which was at $n=4$. Is the problem something to do with $n+1$ vs $n$ in the Lagrange remainder equation?
|
The question asks for the number of terms, not $n$ . The number of terms is $ n+1$ . ... Edit: The error $E$ is given by the following upper bound formula $$E\leq \frac{|f^{(n+1)}(z)|}{(n+1)!}|x-x_0|^{n+1}$$ for some $z$ specified by Lagrange's theorem. We are given $|f^{(n+1)}(z)|\leq1$ for all $z$ and $x=1.2, x_0=1.$ The question wants $E$ below the required accuracy. Combined with the formula, we need $\frac{(0.2)^{n+1}}{(n+1)!}$ below that accuracy, which is true for $n\geq 4.$ Therefore the minimum degree Taylor's polynomial needed is $$T_4(x)=\color{purple}{f(1)}+\color{blue}{f'(1)(x-1)}+\color{red}{\frac12 f''(1)(x-1)^2}+\color{purple}{\frac16f'''(1)(x-1)^3}+\color{green}{\frac1{24}f^{(4)}(1)(x-1)^4}$$ which has $5$ terms.
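To see the bounds concretely, here is a short table of the Lagrange bounds. The required accuracy is garbled in the text above, so the threshold $10^{-5}$ used below is an assumption made here; any tolerance between the $n=3$ and $n=4$ bounds makes degree $4$ (five terms) the first that suffices:

```python
from math import factorial

# Lagrange bound for the degree-n Taylor polynomial at x0 = 1,
# evaluated at x = 1.2, with |f^(n+1)| <= 1
def bound(n):
    return 0.2 ** (n + 1) / factorial(n + 1)

for n in range(6):
    print(n, bound(n))

assert all(bound(n + 1) < bound(n) for n in range(5))   # bounds shrink
# degree 4 is the first bound below 1e-5 (assumed tolerance)
assert bound(3) > 1e-5 > bound(4)
```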
|
|calculus|taylor-expansion|
| 0
|
Conditionally independent coordinates relative to sum
|
Let $X$ and $Y$ be random variables taking values in the same finite abelian group $G$ , so that we can define their sum $X+Y$ . Let $(X_1, Y_1)$ and $(X_2, Y_2)$ be conditionally independent copies of $(X,Y)$ relative to $X+Y$ ; that is, for all $z\in G$ , the random variables $(X_1, Y_1 | X+Y = z)$ and $(X_2, Y_2 | X+Y = z)$ are independent and both have the same distribution as $(X,Y | X+Y = z)$ . From this can we conclude that $X_1$ and $Y_2$ are conditionally independent relative to $X+Y$ as well? It seems like something that should be true, but perhaps my intuition is flawed because I don't see an obvious proof. Thanks!
|
The question translates to whether $$P(X_1 = x_1, Y_2 = y_2 | X+Y = z) = P(X_1 = x_1 | X+Y = z) \cdot P(Y_2 = y_2 | X+Y = z)$$ for all $z \in G$ . For example, in the specific instances where $X$ and $Y$ are to take values in an abelian group $G$ and when conditional on the sum $X+Y$ , some kind of algebraic properties would come up. For instance, in an abelian group, the operation (here, addition) is commutative, which might affect the way the sum would interact with the conditioning. Since $(X_1, Y_1)$ and $(X_2, Y_2)$ are conditionally independent given $X+Y=z$ , think of what that means in the distribution of $X_1+Y_1$ and $X_2+Y_2$ . Proving conditional independence without any specific structures for $G$ , or the distributions of $X$ and $Y$ , might be hard at times. Think of easy examples or counterexamples that would either confirm or disprove the claim. For instance, if $G$ is a finite group, does that simplify the analysis?
|
|probability|conditional-probability|
| 0
|
You have $12$ female and $8$ male employees. How many ways can you choose a committee of $5$ people with at least one male and one female?
|
I understand the solution given in my book, but I fail to understand why my solution is wrong. My thinking is that there are $5$ positions in the committee, and since there must be at least one male and one female, call one slot the male slot and the other the female slot. There are $12$ females to choose from for the female slot, and $8$ males to choose from for the male slot. The remaining are $18$ people from which we need to choose $3$ to be on the committee. So why is the answer not equal to $12\cdot8\binom{18}{3}$ ?
|
Consider a smaller example with $2$ females and $2$ males where you must select a committee of $3$ with at least $1$ of each. Your approach would yield $2\cdot2\binom{4-2}{3-2}=4\binom{2}{1}=8$ when the correct count is instead $4$ .
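Brute force over labelled people confirms both the correct counts and the size of the overcount, for the small example and for the original numbers (the helper below is a sketch written here, not part of the original answer):

```python
from itertools import combinations
from math import comb

def count(f, m, size):
    """Committees of `size` from f women and m men with at least one of
    each, counted by brute force over labelled people."""
    people = ["F%d" % i for i in range(f)] + ["M%d" % i for i in range(m)]
    return sum(
        1
        for c in combinations(people, size)
        if any(p[0] == "F" for p in c) and any(p[0] == "M" for p in c)
    )

# the small example: 2 women, 2 men, committee of 3
assert count(2, 2, 3) == 4                      # correct answer
assert 2 * 2 * comb(2, 1) == 8                  # the flawed "slot" count

# the original problem: 12 women, 8 men, committee of 5
assert count(12, 8, 5) == comb(20, 5) - comb(12, 5) - comb(8, 5)  # 14656
assert 12 * 8 * comb(18, 3) == 78336            # flawed count, far too big
print(count(12, 8, 5))
```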
|
|combinatorics|combinations|
| 0
|
You have $12$ female and $8$ male employees. How many ways can you choose a committee of $5$ people with at least one male and one female?
|
I understand the solution given in my book, but I fail to understand why my solution is wrong. My thinking is that there are $5$ positions in the committee, and since there must be at least one male and one female, call one slot the male slot and the other the female slot. There are $12$ females to choose from for the female slot, and $8$ males to choose from for the male slot. The remaining are $18$ people from which we need to choose $3$ to be on the committee. So why is the answer not equal to $12\cdot8\binom{18}{3}$ ?
|
You overcount a lot. Swap the man in the male slot with a man from the rest of the committee (if there is one). It's the same committee, but you count it twice as if the two selections were different.
|
|combinatorics|combinations|
| 0
|
If $H, K \leq G$ implies $H \subseteq K$ or $K \subseteq H$, then $G$ is a (not necessarily finite) $p$-group.
|
Let $G$ be a group with the following property: for every $H, K \leq G$ , either $H \subseteq K$ or $K \subseteq H$ . Show that there exists a prime number $p$ such that the order of every element of $G$ is a power of $p$ . It is straightforward to show the result above when $G$ is finite. Indeed, fix $g \in G$ and consider $|\langle g \rangle| = p^{k}m$ where $(p, m) = 1$ and $m \neq 1$ . Then, $g^{p^{k}}$ has order $m$ and $g^{m}$ has order $p^k$ . By the hypothesis, either $$\langle g^{p^k} \rangle \subseteq \langle g^m \rangle\text{ or }\langle g^m \rangle \subseteq \langle g^{p^k} \rangle.$$ Both cases contradict Lagrange's Theorem. Thus, $m = 1$ and the order of $g$ is $p^k$ . Given any other element in $G$ , say $h$ , we have either $\langle g \rangle \subseteq \langle h \rangle$ or $\langle h \rangle \subseteq \langle g \rangle$ , which means the order of $h$ is either a power of $p$ or a multiple of a power of $p$ . In the first case, we are done. In the second case, the argum
|
The property that whenever $K$ and $H$ are subgroups, either $K\subseteq H$ or $H\subseteq K$ says that the subgroups of $G$ are totally ordered by containment. That is an incredibly restrictive hypothesis, and since it is clearly false for $\mathbb Z$ , $G$ cannot have any elements of infinite order. On the other hand, if $g\in G$ , then $\langle g\rangle$ is a cyclic group of order $d$ say. Then the subgroups of $\langle g \rangle$ correspond to the divisors of $d$ , and the only way for these to be linearly ordered is for $d$ to be a prime power, and hence every element of $G$ has order $p^k$ for some prime $p$ . But now if we fix some $g_0 \in G$ with order $p^{k_0}$ say, and let $g \in G$ be arbitrary, then $\langle g \rangle \subseteq \langle g_0 \rangle$ or $\langle g_0 \rangle \subseteq \langle g \rangle$ . Hence if $g$ has order $q^l$ for some prime $q$ , we see that $p^{k_0} \mid q^l$ or $q^l \mid p^{k_0}$ , and hence $q=p$ , and every element of $G$ has order $p^l$ for some $l \geq 0$ .
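The key step, that the divisor lattice of a cyclic group is a chain exactly when the order is a prime power, can be checked exhaustively for small orders (a sketch, with the range chosen here):

```python
def divisors(d):
    return [i for i in range(1, d + 1) if d % i == 0]

def is_chain(d):
    # subgroups of Z_d correspond to divisors of d; the lattice is totally
    # ordered iff each sorted divisor divides the next one
    divs = divisors(d)
    return all(divs[i + 1] % divs[i] == 0 for i in range(len(divs) - 1))

def is_prime_power(d):
    for p in range(2, d + 1):
        if d % p == 0:          # p is the smallest prime factor of d
            while d % p == 0:
                d //= p
            return d == 1
    return False

# the subgroup lattice of Z_d is a chain exactly when d is a prime power
for d in range(2, 300):
    assert is_chain(d) == is_prime_power(d)
print("verified for d = 2..299")
```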
|
|abstract-algebra|group-theory|p-groups|
| 0
|
A Conjecture Relating Modulo Arithmetic and the Riemann Zeta Function.
|
I recently created a function that has perplexed many of my fellow amateur mathematicians. It goes something like this: $$f\left(g(x)\right)=\frac{1}{N^{2}}\sum_{n=1}^{N}\left(Ng(x)\operatorname{mod}n\right)$$ $g(x)$ can be any smooth, nonzero function, $N$ any natural number, and $n$ the index of summation. For its simplicity, I chose $g(x)=e^{-x^{2}}$ , a Gaussian function. Here is the graph of $f$ when $N=1$ (red), $N=10$ (green), and $N=100$ (blue) (graph omitted). This implies that in the limit as $N\rightarrow \infty$ of $f$ , $f$ might converge to a smooth, limiting function. After some trial and error, I found that this limiting curve might in fact be $$e^{-x^{2}}-\frac{\pi^{2}}{12}e^{-2x^{2}}$$ The constant $\pi^{2}\div{12}$ interests me, because twice this value was proven by Euler to be the infinite sum of inverse squares. So our limiting curve can be rewritten: $$e^{-x^{2}}-\sum_{n=1}^{\infty}\left(\frac{e^{-2x^{2}}}{2n^{2}}\right)=e^{-x^{2}}-\zeta(2)\frac{e^{-2x^{2}}}{2}$$ Where $\zeta$ denotes the Riemann zeta function.
|
If $0 < r \le 1$ , then $$\begin{align*} \frac1{N^2}\sum_{n=1}^Nn \lfloor rN/n\rfloor &= \frac1{N^2}\sum_{k=1}^{\lfloor rN\rfloor}\sum_{n=1}^{\lfloor rN/k\rfloor}n \\ &= \frac1{2N^2}\sum_{k=1}^{\lfloor rN\rfloor} \lfloor \tfrac{rN}k\rfloor (\lfloor \tfrac{rN}k\rfloor +1) \\ &\sim \frac12\sum_{k=1}^{\lfloor rN\rfloor} \frac{r}{k} \left(\frac{r}{k} + \frac1N\right) \\ &\to \frac{r^2}2\zeta(2) \end{align*}$$ In particular, this verifies your expression for $g(x) = e^{-x^2}$ and gives an affirmative answer to your first question when $0 \leq g \leq 1$ . Specifically, for such functions $$f(g(x)) \xrightarrow{N\to\infty} g(x) - \frac{\zeta(2)}2(g(x))^2.$$
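The limit is easy to confirm numerically (the value of $N$ and the sample values of $r$ below are choices made here):

```python
from math import pi

def f_value(r, N):
    # (1/N^2) * sum_{n=1..N} (rN mod n), with rN an integer here
    M = round(r * N)
    return sum(M % n for n in range(1, N + 1)) / N ** 2

N = 200_000
for r in (1.0, 0.5):
    limit = r - (pi ** 2 / 12) * r ** 2   # g - (zeta(2)/2) g^2 with g = r
    # convergence rate is O(log N / N), so a loose tolerance suffices
    assert abs(f_value(r, N) - limit) < 2e-3
print("limits confirmed")
```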
|
|limits|number-theory|modular-arithmetic|riemann-zeta|l-functions|
| 1
|
Application of Farkas Lemma
|
Let $A \in \mathbb R^{m \times n }, b \in \mathbb R^m$ and $0 \neq c \in \mathbb R^m$ . Prove: Either the system $Ax = c$ or the system $A^T y = 0, c^T y = 1$ has a solution. Looks a bit like Farkas Lemma but the $c^T y = 1$ confuses me. Can someone give me a hint? I know two versions of Farkas Lemma: $$ \text{Either} \quad Ax = b,\ x \geq 0 \quad \text{or} \quad A^T y \geq 0,\ b^T y < 0 $$ $$ \text{Either} \quad Ax \leq b \quad \text{or} \quad A^T y = 0,\ y \geq 0,\ b^T y < 0 $$
|
Hint: To get a nonnegative variable into the mix, replace a free variable with a difference of nonnegative variables.
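One way to carry out the hint in outline (a standard reduction; whether it is the intended one is a guess):

```latex
% Write the free variable as x = u - v with u, v >= 0. Then Ax = c is
% solvable iff the nonnegative system below is, and Farkas (first form)
% applies to the block matrix [A, -A].
\[
Ax = c \ \text{solvable}
\quad\Longleftrightarrow\quad
\begin{bmatrix} A & -A \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix} = c,\quad
\begin{bmatrix} u \\ v \end{bmatrix} \ge 0 \ \text{solvable.}
\]
% Otherwise Farkas gives y with [A, -A]^T y >= 0 and c^T y < 0; the first
% condition says A^T y >= 0 and -A^T y >= 0, hence A^T y = 0. Since
% A^T y = 0 is preserved under scaling, set y' = y / (c^T y):
\[
A^{T}y' = 0, \qquad c^{T}y' = \frac{c^{T}y}{c^{T}y} = 1.
\]
```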
|
|optimization|systems-of-equations|linear-programming|
| 0
|
Weak convergence of dependent variables
|
$X_n \xrightarrow[]{d} X$ , $Y_n \xrightarrow[]{d} Y$ where $X \sim N(\mu_x, \sigma_x)$ and $Y \sim N(\mu_y, \sigma_y)$ , but $X_n \not\!\perp\!\!\!\perp Y_n$ . What do we need to analyze $(X_n, Y_n) \xrightarrow[]{d} \: ?$ . In other words, we have two dependent random variables that converge in distribution and we are interested in the asymptotics of their joint distribution. Is there any work out there that addresses this topic? My intuition says that under specific conditions on the dependence, this will converge to a joint normal, but what are these conditions?
|
Your intuition is correct that some condition on the dependence is needed; without one, the joint limit need not exist at all. Suppose $X_n$ are iid $N(0,1)$ and $Y_n=(-1)^n X_n$ . Clearly the sequences $X_n$ and $Y_n$ each converge in distribution, but $X_nY_n$ does not. So the sequence of pairs $(X_n,Y_n)$ does not, either.
|
|probability-theory|statistics|asymptotics|weak-convergence|central-limit-theorem|
| 0
|
The product $abc$ when $a+b+c=191$
|
Find the greatest possible number of back-to-back zeros at the end of the product of three counting numbers if the sum of these three numbers equals 191. (The number 202100 has exactly 2 back-to-back zeros at the end.) I tried with examples such as 100,90,1 and 10,81,100, giving me a max amount of 3 trailing zeroes. According to the answer sheet, it said that there were 5 zeroes. Is this possible?
|
The number of trailing zeros in a number is equal to the largest power of 10 that divides the number (e.g. $202100 = 2021 \times 10^2$ ). In turn, this is equal to the minimum of the largest power of 2 and 5 respectively that divide the number (e.g. $500 = 2^2 \times 5^3$ , $200 = 2^3 \times 5^2$ ). Since 2 and 5 are primes, that means we are looking to maximise the total number of factors of each that appear in the three numbers that we will multiply together. For example, the split 100, 90, 1 gives us a total of three 2s and three 5s. Let's call our numbers $x, y, z$ , and we know that $x + y + z = 191$ . Since 191 is neither even nor a multiple of 5, at most 2 out of $x, y$ and $z$ can be even or a multiple of 5, but not necessarily the same two. Also, since $191 < 5^4 = 625$ , the largest power of 5 that can appear in any of the numbers is 3, and if for example we set $x = 125$ then $y + z = 66$ so then the largest power of 5 we can have in either of them is 2, meaning that our product is divisible by at most $5^5$ , so at most $5$ trailing zeros are possible. And $5$ is achieved: $16 + 50 + 125 = 191$ while $16 \times 50 \times 125 = 100000$ .
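Exhaustive search over all triples confirms the answer sheet (a quick sketch; the search space is tiny):

```python
def trailing_zeros(m):
    t = 0
    while m % 10 == 0:
        m //= 10
        t += 1
    return t

best = 0
best_triple = None
# enumerate x <= y <= z with x + y + z = 191
for x in range(1, 191 // 3 + 1):
    for y in range(x, (191 - x) // 2 + 1):
        z = 191 - x - y
        t = trailing_zeros(x * y * z)
        if t > best:
            best, best_triple = t, (x, y, z)

assert best == 5        # achieved e.g. by 16 * 50 * 125 = 100000
print(best_triple, best)
```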
|
|elementary-number-theory|decimal-expansion|
| 0
|
Permutations in "PERSNICKETINESS" with Constraints on Letter Placement
|
I'm tackling a combinatorial challenge with the word "PERSNICKETINESS". The task is to determine the number of permutations where the second "S" appears after the last vowel, considering letter frequencies of E (3x), S (3x), N (2x), I (2x), and each of P, R, C, K, T appearing once. I've developed two methods to approach the problem but arrived at two different expressions for the solution. I'm seeking insights on which method (if either) is correct. Approach 1: Vowels and the second S placement: Considering a block of 6 characters (3 Es, 2 Is, and the second S), I calculated the ways to place this block in the 15 available positions as $$\binom{15}{6}$$ . Internal arrangement of the block: For the internal arrangement of this block, ensuring the second S follows all vowels, I used $$\frac{6!}{3!2!}$$ to account for repetitions. Remaining letters arrangement: With 9 positions left, the arrangement of the remaining letters (including repetitions of S and N) is calculated by $$\frac{9!}{2
|
In your first approach you don't actually ensure that this S which goes after the vowels is the second S. Also, why are you considering $6!\over 3!2!$ ? The letter S goes last; it doesn't participate in permutations of the six letters. Instead, you should consider all the vowels and the letters S together. There are $15\choose 8$ ways to choose placements for them. Two letters S go last. The remaining $6$ letters are placed in $6!\over 3!2!$ ways. After that, you place all the other letters in $7!/2!$ ways. So the answer is $${15\choose 8}\times {6!\over 3!2!} \times {7!\over 2!}=972972000.$$ Here is a slightly different approach. There are $\frac{15!}{3!3!2!2!}$ overall permutations. What part of them is suitable? By the earlier reasoning it's ${6!/(3!2!) \over 8!/(3!3!2!)}=\frac3{ 28}$ . The answer is the same: $$\frac{15!}{3!3!2!2!} \times \frac 3{ 28}=972972000.$$
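Both counts can be verified by brute force, since only the vowels and the S's matter and there are just $\frac{8!}{3!\,2!\,3!}=560$ distinct arrangements of them (a sketch):

```python
from itertools import permutations
from math import comb, factorial

# enumerate the distinct arrangements of the relevant letters "EEEIISSS"
arrangements = set(permutations("EEEIISSS"))
assert len(arrangements) == 560

good = 0
for a in arrangements:
    s_positions = [i for i, ch in enumerate(a) if ch == "S"]
    last_vowel = max(i for i, ch in enumerate(a) if ch != "S")
    if s_positions[1] > last_vowel:   # second S after the last vowel
        good += 1

assert good == 60                     # fraction 60/560 = 3/28
answer = comb(15, 8) * (factorial(6) // (factorial(3) * factorial(2))) * (factorial(7) // 2)
assert answer == 972972000
print(good, "/", len(arrangements), "->", answer)
```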
|
|combinatorics|permutations|combinations|
| 1
|
Two quadratic functions with min F(x) > min G(x). Does this imply that F(x) > G(x) for all x?
|
Let us suppose that $F(x)=ax^{2}+bx+c$ and $G(x)=Ax^2+Bx+C$ where $a, b, c, A, B, $ and $C$ are real numbers with $a > 0$ and $A > 0$ . Let us suppose that the minimum value of the quadratic function defined by $F(x)$ is greater than or equal to the minimum value of the quadratic function defined by $G(x)$ . Does it necessarily hold that $G(x) \leq F(x)$ for every real number $x$ ? Thanks in advance for sharing your thoughts on this question with me.
|
No, it does not. A simple counterexample (with labels matching the question) is $F(x)=x^2$ and $G(x)=2x^2-1.$ The minimum of $F(x)$ is $0$ and the minimum of $G(x)$ is $-1,$ so $\min F \geq \min G,$ yet $G(2)=7>4=F(2),$ so $G(x)\leq F(x)$ fails at $x=2.$
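A quick check of a counterexample of this shape (the specific $F$ and $G$ below are choices made here, with labels matching the question):

```python
# F(x) = x^2 has minimum 0; G(x) = 2x^2 - 1 has minimum -1,
# so min F >= min G, yet G overtakes F for large |x|
def F(x):
    return x * x

def G(x):
    return 2 * x * x - 1

min_F = F(0.0)    # both vertices are at x = -b/(2a) = 0 here
min_G = G(0.0)
assert min_F >= min_G          # the hypothesis holds ...
assert G(2.0) > F(2.0)         # ... but G(2) = 7 > 4 = F(2)
print(min_F, min_G, F(2.0), G(2.0))
```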
|
|geometry|algebra-precalculus|differential-geometry|analytic-geometry|conic-sections|
| 0
|
Construct ellipse with given foci such that it is tangent to a given circle
|
Given two points $F_1$ and $F_2$ and a circle centered at $r_0$ with radius $s$ , I'd like to construct the ellipse with foci $F_1$ and $F_2$ that is tangent to the given circle. That is the question. My attempt: Let $r = [x, y]^T $ be the position vector of a point in the plane. To simplify the analysis, I'll introduce a new coordinate reference with its origin at the center of the ellipse. This is known, because the center of the ellipse is just the midpoint of the two foci. So let $ C = \dfrac{1}{2} (F_1 + F_2) $ And define the unit vector $ u_1 = \dfrac{ F_2 - F_1}{\| F_2 - F_1 \| } $ And let $u_2 $ be a unit vector that is perpendicular to $u_1$ . Now define the rotation matrix $ R = [u_1, u_2] $ By letting the $x'$ axis point along $u_1$ and the $y'$ axis point along $u_2$ , then if $p' = (x',y')$ is the local coordinate of a (world) point $p$ with respect to this new frame, then the two vectors are related by $ p = C + R p' $ So that $ p' = R^T (p - C) $ Using these new coordin
|
Comment: Some graphical constructions for particular cases, where the ellipse and the circle are not concentric. In all cases we have two solutions. 1- Construction $N_1$ shows the large ellipse tends to become a circle when the center of the circle tends to be collinear with the major axis of the ellipse. 2- Construction $O_1$ shows the case where the center of the circle makes an isosceles triangle with the foci of the ellipse. 3- Construction $P_1$ shows another case.
|
|geometry|conic-sections|
| 0
|
Confusion with matrix derivatives and component-wise gradients
|
According to this source (The Matrix Cookbook), we have that for $f(X) = \mathbf{a}^T X \mathbf{b}$ where $X \in \mathbb{R}^{n \times m}, \mathbf{a} \in \mathbb{R}^n, \mathbf{b} \in \mathbb{R}^m$ , the derivative is $$ \frac{\partial f}{\partial X} = \mathbf{a} \mathbf{b}^T$$ which is a scalar since it's basically a dot product. I see how we can arrive at this result by applying matrix derivative rules/applying basic derivative rules to vectors/matrices. However, taking the derivative (gradient) of a function $f: \mathbb{R}^{n \times m} \to \mathbb{R}$ based on my my calculus knowledge should yield a vector/matrix of partial derivatives. As an example, I tried this with $\mathbf{a} = \begin{bmatrix} a_1 & a_2 \end{bmatrix}, \mathbf{b} = \begin{bmatrix} b_1 & b_2 \end{bmatrix}$ , and $X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$ . I expanded the expression and took the gradient to get that $$\nabla f = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\par
|
There are several competing notations. The notation in the Matrix Cookbook can be motivated as follows -- suppose we minimize a scalar function of a scalar variable $w$ using the gradient descent update $$w\leftarrow w-f'(w)$$ Now, suppose $f(W)$ is a scalar function of a matrix variable. We want a similar-looking update formula to hold $$W\leftarrow W-f'(W)$$ So this determines the shape of $f'$ : the $i,j$ th entry of $f'$ is the derivative of $f$ w.r.t. the $i,j$ th entry of $W$ , which gives you $f'(W)=ab^T$ when $f(W)=a^TWb$
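A finite-difference check of this convention (the sizes and values below are arbitrary choices made here): perturbing the $i,j$ entry of $X$ changes $a^T X b$ at rate $a_i b_j$, which is exactly the $i,j$ entry of $ab^T$.

```python
# finite-difference check that d(a^T X b)/dX = a b^T, entry by entry
a = [2.0, -1.0]
b = [3.0, 4.0, 0.5]
X = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]    # 2 x 3, values arbitrary

def f(X):
    return sum(a[i] * X[i][j] * b[j] for i in range(2) for j in range(3))

h = 1e-6
for i in range(2):
    for j in range(3):
        X[i][j] += h
        up = f(X)
        X[i][j] -= 2 * h
        down = f(X)
        X[i][j] += h                      # restore the entry
        numeric = (up - down) / (2 * h)
        assert abs(numeric - a[i] * b[j]) < 1e-6
print("gradient of a^T X b is a b^T (checked numerically)")
```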
|
|multivariable-calculus|derivatives|matrix-calculus|
| 0
|
Given $f(x)=x^3+x^2-x+2=0$ calculate $3$ iterations at$x_0=-2.4$
|
Given $f(x)=x^3+x^2-x+2=0$ : Find an interval $[a,b]$ such that there exists one and only one root. Find a function $\phi(x)=x$ . Find an interval where it is possible to use the simple iteration method and calculate $3$ iterations at initial value $x_0=-2.4$ . What I tried: Using the intermediate value theorem we have $f(-1)=3$ and $f(-3)=-13$ so there is at least $1$ root. I showed that this root is the only one using Rolle's theorem: assume by contradiction there are $2$ different roots $x_1,x_2$ such that $f(x_1)=f(x_2)=0$ ; the function is continuous and differentiable, so there is a $c$ with $x_1 < c < x_2$ such that $f'(c)=0$ . But $f'(x)=3x^2+2x-1=(x+1)(3x-1)=0$ so the roots are $c_1=-1$ and $c_2=\frac{1}{3}$ and there is only one of them in $[-3,-1]$ . I added $x$ to both sides and got $x=x^3+x^2+2$ and then $\phi(x)=x^3+x^2+2$ ; we know that it converges if $\max|\phi'(x)| < 1$ so I solved $3x^2+2x < 1$ and got $x \in (-1,\frac{1}{3})$ but I don't know how to pick the interval from here; is it $(-1,\frac{1}{3})$ or something else?
|
I think $\phi(x)=-(x^2-x+2)^{1/3}$ works. We have $$\phi'(x)=\frac{-2x+1}{3(x^2-x+2)^{2/3}}$$ and $$\phi''(x)=\frac{2(x^2-x-5)}{9(x^2-x+2)^{5/3}}$$ So, we can say that for $-3\lt x\lt \frac{1-\sqrt{21}}{2}\approx -1.79$ , $\phi''(x)\gt 0$ holds. Since $$|\phi'(-3)|=\frac{14^{1/3}}{6}\lt 1$$ $$\bigg|\phi'\bigg(\frac{1-\sqrt{21}}{2}\bigg)\bigg|=\frac{7^{1/3}}{\sqrt{21}}\lt 1$$ we can say that, for $-3\lt x\lt \frac{1-\sqrt{21}}{2}\approx -1.79$ , $|\phi'(x)|\lt 1$ holds.
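Running the proposed iteration numerically (note the exact root is $x=-2$, since $f(-2)=-8+4+2+2=0$):

```python
def phi(x):
    # x^2 - x + 2 > 0 for all real x, so the real cube root is well defined
    return -((x * x - x + 2) ** (1.0 / 3.0))

x = -2.4
for k in range(3):
    x = phi(x)
    print(k + 1, x)

# after three iterations the iterate is already close to the root x = -2
assert abs(x + 2) < 0.1

# the contraction (|phi'(-2)| = 5/12 < 1) drives it to machine precision
for _ in range(50):
    x = phi(x)
assert abs(x + 2) < 1e-9
```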
|
|numerical-methods|fixed-point-theorems|
| 1
|
Why does replacing $\cosh$ by $\cos$ in this biquadratic integral not change anything?
|
TLDR; can I do the second integral with the method I used for the first integral? I was reading about some integrals when I saw the following: $$\int_{0}^{\infty}\frac{dx}{x^4+2x^2\cosh(2t)+1} = \frac{\pi}{4\cosh(t)}$$ The way to do this is to write the denominator as $(x^2+e^{2t})(x^2+e^{-2t})$ and then this is simple $\arctan$ integral after partial fraction decomposition. Then I saw another integral where cosh is replaced by cos, ie. $$\int_{0}^{\infty} \frac{dx}{x^4+2x^2\cos(2t)+1}= \frac{\pi}{4\cos(t)}$$ but the way to do this integral is quite different, and in fact quite difficult (you need to do a $1/x$ sub, then add the two resulting things, then write it as half of $-\infty$ to $+\infty$ integral, then add $2x\sin(t)$ to numerator, and then you get arctan). Basically there is no way I could have thought of this... However, even before seeing the solution to the second problem, I guessed the correct answer, and basically my reasoning was that $\cosh(2it)=\cos(2t)$ or basically
|
Both integrals are examples of this integral, which (as you observed) you can solve by partial fractions and arctan: $$ \int_0^\infty \frac1{x^4 + x^2 \left(k^2 + \frac1{k^2}\right) + 1} \,\mathrm dx. \tag1 $$ We write \begin{align} \frac1{x^4 + x^2 \left(k^2 + \frac1{k^2}\right) + 1} &= \frac1{\left(x^2 + k^2\right) \left(x^2 + \frac1{k^2}\right)} \\ &= \frac1{k^2 - \frac1{k^2}} \left(\frac1{x^2 + \frac1{k^2}} - \frac1{x^2 + k^2}\right). \end{align} We find that \begin{align} \int \frac1{x^2 + k^2} \,\mathrm dx &= \frac1k \arctan\left(\frac xk\right) + C_1, \\ \int \frac1{x^2 + \frac1{k^2}} \,\mathrm dx &= k \arctan(kx) + C_2, \end{align} and putting this all together, the indefinite integral is $$ \int \frac1{x^4 + x^2 \left(k^2 + \frac1{k^2}\right) + 1} \,\mathrm dx = \frac{k \arctan(kx) - \frac1k \arctan\left(\frac xk\right)} {k^2 - \frac1{k^2}} + C_3. $$ This is the point at which things get a little tricky, depending on what $k$ is. If $k$ is a positive real number then it is eas
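Both closed forms can be confirmed by direct quadrature (the value $t=0.7$, the truncation point, and the step count are choices made here; the integrand decays like $x^{-4}$, so the tail beyond $x=60$ is below $2\times10^{-6}$):

```python
from math import cos, cosh, pi

def integrand(x, c):
    return 1.0 / (x ** 4 + 2 * x ** 2 * c + 1)

def simpson(f, a, b, m):
    # composite Simpson's rule with 2m subintervals
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i + 1) * h) for i in range(m))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3

t = 0.7
for c, expected in ((cosh(2 * t), pi / (4 * cosh(t))),
                    (cos(2 * t), pi / (4 * cos(t)))):
    val = simpson(lambda x: integrand(x, c), 0.0, 60.0, 60_000)
    assert abs(val - expected) < 1e-4
print("both closed forms confirmed at t = 0.7")
```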
|
|integration|complex-analysis|definite-integrals|
| 0
|
Confusion with matrix derivatives and component-wise gradients
|
According to this source (The Matrix Cookbook), we have that for $f(X) = \mathbf{a}^T X \mathbf{b}$ where $X \in \mathbb{R}^{n \times m}, \mathbf{a} \in \mathbb{R}^n, \mathbf{b} \in \mathbb{R}^m$ , the derivative is $$ \frac{\partial f}{\partial X} = \mathbf{a} \mathbf{b}^T$$ which is a scalar since it's basically a dot product. I see how we can arrive at this result by applying matrix derivative rules/applying basic derivative rules to vectors/matrices. However, taking the derivative (gradient) of a function $f: \mathbb{R}^{n \times m} \to \mathbb{R}$ based on my calculus knowledge should yield a vector/matrix of partial derivatives. As an example, I tried this with $\mathbf{a} = \begin{bmatrix} a_1 & a_2 \end{bmatrix}, \mathbf{b} = \begin{bmatrix} b_1 & b_2 \end{bmatrix}$ , and $X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$ . I expanded the expression and took the gradient to get that $$\nabla f = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\par
|
The problem is that a vector usually refers to a column vector (otherwise the expression $a^TXb$ doesn't even make sense, since the dimensions don't match). If you instead do $$ab^T = \begin{pmatrix}a_1\\a_2\end{pmatrix}\begin{pmatrix}b_1 & b_2\end{pmatrix} = \begin{pmatrix} a_1b_1 & a_1b_2\\ a_2b_1 & a_2b_2 \end{pmatrix}$$ then you get the same result as your computation.
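A finite-difference check makes this concrete: each partial $\partial f/\partial x_{ij}$ equals $a_i b_j$, so the gradient matrix is $ab^T$. A minimal pure-Python sketch (the $2\times 2$ values of `a`, `b`, `X` below are arbitrary):

```python
# Finite-difference check that the gradient of f(X) = a^T X b is a b^T.
a = [1.0, 2.0]
b = [3.0, 4.0]
X = [[0.5, -1.0], [2.0, 0.25]]

def f(X):
    # f(X) = sum_i sum_j a_i X_ij b_j
    return sum(a[i] * X[i][j] * b[j] for i in range(2) for j in range(2))

eps = 1e-6
grad = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]
        Xp[i][j] += eps
        grad[i][j] = (f(Xp) - f(X)) / eps  # numerical d f / d X_ij

outer = [[a[i] * b[j] for j in range(2)] for i in range(2)]  # a b^T
```

Since $f$ is linear in $X$, the finite difference is exact up to roundoff.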
|
|multivariable-calculus|derivatives|matrix-calculus|
| 1
|
Integration of multivariate odd symmetric function
|
If $f(\mathbf{x}): R^p \rightarrow R $ is a multivariate odd-symmetric function in the sense that $f(\mathbf{x}) = -f(-\mathbf{x})$ for any $\mathbf{x}$ and it is absolutely integrable, does it integrate to 0 on $R^p$ ? Any references will be appreciated. Thank you! The multivariate odd symmetric definition was found on https://en.wikipedia.org/wiki/Even_and_odd_functions
|
How about doing the substitution $y=-x$ : $$\int_{\mathbb R^p} f(x) \mathrm dx= \int_{\mathbb R^p} f(-y) \mathrm dy = -\int_{\mathbb R^p} f(y)\mathrm dy$$ Can you finish the proof?
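As a numeric illustration of the cancellation (not a proof), summing a hypothetical odd function over a grid symmetric about the origin in $\mathbb{R}^2$ pairs each point $x$ with $-x$, mirroring the substitution above:

```python
import math

def f(x1, x2):
    # Odd-symmetric example: f(-x) = -f(x).
    return x1 * math.exp(-(x1**2 + x2**2))

h = 0.1
pts = [h * (k - 30) for k in range(61)]  # grid on [-3, 3], symmetric about 0
# Riemann sum over the symmetric grid: contributions cancel in (x, -x) pairs.
riemann_sum = sum(f(x1, x2) for x1 in pts for x2 in pts) * h * h
```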
|
|real-analysis|calculus|integration|multivariable-calculus|indefinite-integrals|
| 0
|
Geometric effects of parameters in homogeneous second-order ordinary differential equations with constant coefficients
|
In homogeneous second-order ordinary differential equations (ODE) with constant coefficients, of the form: \begin{equation} ay''+by'+cy=0, \end{equation} is there any conclusion about the effects of each of the parameters $a$ , $b$ and $c$ on the graph of an ODE solution? It's a curiosity I've been having, but I've never seen anything like this in ODE books.
|
Let the independent variable in your equation be the time $t$ . The equation can be written in the form $$ y''+\gamma y'+\omega_0^2 y=0,\quad \gamma=\frac{b}{a},\quad \omega_0^2=\frac{c}{a}.\tag{*} $$ I assume $\gamma,\omega_0^2>0$ , since this is the case of interest in physical problems. Then $(*)$ is Newton's second law for a damped harmonic oscillator. There is a competition between the two frequencies $\gamma$ and $\omega_0$ : larger values of $\omega_0$ promote longer-lived oscillations in the solutions to $(*)$ , while larger values of $\gamma$ "dampen" the oscillations; i.e., cause them to decay towards zero.
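To make the competition concrete, here is a small sketch (parameter values are arbitrary) that checks the analytic underdamped solution $y(t)=e^{-\gamma t/2}\cos(\omega_d t)$ with $\omega_d=\sqrt{\omega_0^2-\gamma^2/4}$ against the ODE by finite differences, and compares the decay envelope $e^{-\gamma t/2}$ for two damping rates:

```python
import math

def y(t, gamma, omega0):
    # Analytic underdamped solution of y'' + gamma*y' + omega0^2*y = 0
    # (requires omega0 > gamma/2), with y(0) = 1.
    wd = math.sqrt(omega0**2 - gamma**2 / 4)
    return math.exp(-gamma * t / 2) * math.cos(wd * t)

gamma, omega0, t0, h = 0.4, 2.0, 1.3, 1e-4
# Finite-difference check that y solves the ODE at t0.
ypp = (y(t0 + h, gamma, omega0) - 2 * y(t0, gamma, omega0)
       + y(t0 - h, gamma, omega0)) / h**2
yp = (y(t0 + h, gamma, omega0) - y(t0 - h, gamma, omega0)) / (2 * h)
residual = ypp + gamma * yp + omega0**2 * y(t0, gamma, omega0)

# Larger gamma shrinks the decay envelope e^{-gamma*t/2} at any fixed t > 0.
envelope_weak = math.exp(-0.2 * 5 / 2)    # gamma = 0.2
envelope_strong = math.exp(-1.0 * 5 / 2)  # gamma = 1.0
```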
|
|ordinary-differential-equations|geometric-interpretation|
| 0
|
Proving that the set of $a$ for which the roots of $P(X)-a$ are all real is an interval
|
I found this problem on a problem sheet about polynomials: Let $P \in \mathbb{R}[X]$ , $S_a = \{t \in \mathbb{C} \mid P(t) = a\}$ , and $I = \{a \in \mathbb{R} \mid S_a \subset \mathbb{R}\}$ . Show that $I$ is an interval. I can see intuitively why this must be true: if we take a polynomial $P$ such that all the roots of $P'$ are real and simple, then the graph of $P$ shows that this interval is exactly the interval between the smallest maximum of $P$ and the largest minimum of $P$ . I tried to show that this holds, but I couldn't.
|
$\def\R{\mathbb{R}}\def\ge{\geqslant}\def\emptyset{\varnothing}\def\le{\leqslant}\def\paren#1{\left(#1\right)}\def\={\mathrel{\hphantom{=}}}$ Lemma: For any $Q(x) \in \R[x] \setminus \{0\}$ and $a \in \R^*$ , $x^3 Q(x) - a$ does not split in $\R[x]$ . Proof: Suppose $$ x^3 Q(x) - a = c\prod\limits_{k = 1}^n (x - x_k), $$ where $c \in \R^*$ , $x_1, \cdots, x_n \in \R$ . Comparing the coefficients of the lowest three terms yields: \begin{gather*} \begin{cases} \prod\limits_{k = 1}^n x_k \ne 0\\ \sum\limits_{k = 1}^n \prod\limits_{j \ne k} x_j = 0\\ \sum\limits_{k_1 < k_2} \prod\limits_{j \ne k_1, k_2} x_j = 0 \end{cases} \end{gather*} Dividing the last two relations by $\prod\limits_{k = 1}^n x_k$ gives $\sum\limits_{k = 1}^n \frac{1}{x_k} = 0$ and $\sum\limits_{k_1 < k_2} \frac{1}{x_{k_1} x_{k_2}} = 0$ , thus $$ \sum_{k = 1}^n \frac{1}{x_k^2} = \paren{ \sum_{k = 1}^n \frac{1}{x_k} }^2 - 2 \sum_{k_1 < k_2} \frac{1}{x_{k_1} x_{k_2}} = 0, $$ a contradiction since each $\frac{1}{x_k^2} > 0$ . Now return to the problem. Case 1: $P'$ has at least one real multiple root. Suppose $x_0 \in \R$ is a multiple root of $P'$ , then ${(x - x_0)}^3 \mid P(x) - P(x_0)$ . For any $a \in \R \setminus \{P(x_0)\}$ , note that $$ P(x) - a = {(x - x_0)}^3 \cdot \frac{ P(x) - P(x_0) }{ {(x - x_0)}^3 } - (a - P(x_0)), $$ thus
|
|algebra-precalculus|polynomials|roots|
| 1
|
Fully normal implies paracompact without a $T_1$ assumption?
|
It's well-known that a $T_1$ topological space is fully normal if and only if it is $T_2$ and paracompact. It appears, looking at the proofs from Henno Brandsma's nice exposition here and here , that we can drop the $T_1$ assumption for the implication that fully normal implies paracompactness (without concluding Hausdorff of course). Concretely, for this direction of the proof, the $T_1$ assumption appears to only be used in the implication $(3)\implies (4)$ of Lemma 1 from the second linked document, and only to assert that by regularity (which follows from $T_1$ and fully normal), the family of all open subsets of a space whose closure lies in some element of an open cover $\mathcal U$ , is again an open cover. But this fact is already true for all fully normal spaces, regardless of regularity, since a star refinement has the property that the closure of each member lies in a member of the original cover. I want to update $\pi$ -base to reflect that $T_1$ is not needed to deduce par
|
Here are equivalent conditions for a space $X$ to be fully normal. I assume no separation conditions. Proposition: The following statements about a space $X$ are equivalent. 1. $X$ is fully normal (i.e. every open covering of $X$ has an open star-refinement). 2. Every open covering of $X$ has an open barycentric refinement. 3. Every open cover has a locally-finite closed refinement. 4. Every open covering of $X$ is numerable. 5. Every open covering of $X$ is normal. Barycentric refinements are defined in the linked $\pi$ -base page. Lemma ([1, Lemma 5.1.15]): Let $\mathcal{U},\mathcal{V},\mathcal{W}$ be open covers of a space $X$ . If $\mathcal{U}$ is a barycentric refinement of $\mathcal{V}$ and $\mathcal{V}$ is a barycentric refinement of $\mathcal{W}$ , then $\mathcal{U}$ is a star refinement of $\mathcal{W}$ . $\quad\blacksquare$ Thus $2.\Rightarrow1.$ That $3.\Rightarrow2.$ is a consequence of the following. Lemma ([1, Lemma 5.1.13]): If $\mathcal{U}$ is an open cover of a space $X$ and $\mathcal{U}$
|
|general-topology|reference-request|compactness|separation-axioms|paracompactness|
| 1
|
Finding the effective annual rate of interest (no constant)
|
A deposit of 10,000 is made. During the first year, the bank credits an annual effective interest rate of $i$ . During the second year, the bank credits an annual effective interest rate of $i - 5\%$ . After two years, the balance is 12,093.75. What would be the amount earned at an annual effective interest rate of $i + 9\%$ for each of three years? So, we need to solve for $i$ from $A(2)=10,000[1+(i-0.05)]^2=12,093.75$ : $$i=\Bigg(\frac{12,093.75}{10,000}\Bigg)^{1/2}-0.95, \text{ for } t=2$$ But when substituted into $A(2)$ , $A(2)\ne 12,093.75$ . I'm new at this. How is it supposed to be done?
|
If the first year's effective interest rate is $i$ , then at the end of the first year, the deposit will be $10000(1+i)$ . Then, if the second year's rate is $i - 0.05$ , then at the end of the second year, the deposit will be $$10000(1+i)(1 + i - 0.05).$$ This is the amount that must equal $12093.75$ . Then the question asks, if the interest rate had been $i+0.09$ for all three years, then I would interpret that to mean the accumulated value $$10000(1 + i + 0.09)^3,$$ where $i$ was the rate calculated earlier.
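Numerically, following the answer's setup, putting $x = 1+i$ turns $10000\,x(x-0.05)=12093.75$ into a quadratic whose positive root gives $i$ (a minimal sketch):

```python
import math

# x(x - 0.05) = 1.209375, a quadratic in x = 1 + i; take the positive root.
x = (0.05 + math.sqrt(0.05**2 + 4 * 1.209375)) / 2
i = x - 1
# Accumulate three years at i + 0.09.
three_year_value = 10000 * (1 + i + 0.09) ** 3
```

This gives $i = 0.125$, so the three-year accumulated value at $i+0.09 = 21.5\%$ is $10000(1.215)^3 = 17936.13375$.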
|
|solution-verification|finance|compound-interest|
| 0
|
Recursive computation of a weighted mean
|
Background Let $u=[u_1,\dots,u_N]'$ be a generic vector; its mean value is defined as $$ \bar{u}_N \triangleq \frac{1}{N}\sum_{i=1}^N u_i $$ In a few simple steps one can prove the following recursion \begin{equation} \begin{aligned} \bar{u}_k &= \left(1-\frac{1}{k}\right)\bar{u}_{k-1}+\frac{1}{k}u_k & k=1,\dots,N\\ \bar{u}_0 &\triangleq 0 \end{aligned} \qquad(*) \end{equation} Less trivially, for the more general weighted mean $$ \bar{u}_N^\lambda \triangleq \frac{1}{N}\sum_{i=1}^N \lambda^{N-i}u_i $$ where $\lambda\in\mathbb{R}$ is a given parameter, the following recursion holds $$ \begin{aligned} \bar{u}_k^\lambda &= \lambda\,\left(1-\frac{1}{k}\right)\bar{u}_{k-1}^\lambda+\frac{1}{k}u_k & k=1,\dots,N\\ \bar{u}_0^\lambda &\triangleq 0 \end{aligned} $$ To see why, note that $$ \begin{aligned} \bar{u}_{N}^\lambda &= \frac{1}{N}\left(\sum_{i=1}^{N-1} \lambda^{N-i} u_i+u_N\right) =\frac{1}{N}\left(\lambda\sum_{i=1}^{N-1} \lambda^{(N-1)-i} u_i+u_N\right)\\ &=\frac{1}{N}\left(\lambda(N-1)\frac{1}{N-
|
Here is my solution. Firstly, denote the unnormalized weights as $\tilde{w}_i \geq 0$ , so that the normalized weights are given by \begin{equation*} w_i \triangleq \frac{\tilde{w}_i}{W_N} \qquad W_N\triangleq \sum_{i=1}^N \tilde{w}_i \end{equation*} then the weighted mean $\bar{u}_n^w$ is given by the following pair of recursions \begin{equation*}\begin{aligned} \bar{u}_k^w &= \left(1-\frac{\tilde{w}_k}{W_k}\right) \bar{u}_{k-1}^w+\frac{\tilde{w}_k}{W_k} u_k\\ \bar{u}_0^w &=0 \end{aligned} \qquad \begin{aligned} W_k &= W_{k-1} + \tilde{w}_k \\ W_0 &= 0 \end{aligned} \end{equation*} Note that, as expected, in the uniform case $\tilde{w}_i = 1$ we get back $(*)$ . The recursion for the normalizer $W_k$ is trivial, while the recursion for the mean follows from \begin{equation*} \begin{aligned} \bar{u}_N^w &= \sum_{i=1}^{N} \frac{\tilde{w}_i}{W_N} u_i =\sum_{i=1}^{N-1} \frac{\tilde{w}_i}{W_N} u_i +\frac{\tilde{w}_N}{W_N} u_N\\ &=\frac{W_{N-1}}{W_N} \underbrace{\sum_
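Here is a small sketch checking the coupled recursions against the direct weighted sum on random data (the data values are arbitrary), and confirming that uniform weights reduce to the plain recursion $(*)$:

```python
import random

def weighted_mean_recursive(u, w):
    # Coupled recursions:
    #   W_k    = W_{k-1} + w_k
    #   ubar_k = (1 - w_k/W_k) * ubar_{k-1} + (w_k/W_k) * u_k
    W, ubar = 0.0, 0.0
    for uk, wk in zip(u, w):
        W += wk
        ubar = (1 - wk / W) * ubar + (wk / W) * uk
    return ubar

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(50)]
w = [random.uniform(0.1, 2.0) for _ in range(50)]

direct = sum(wi * ui for wi, ui in zip(w, u)) / sum(w)
recursive = weighted_mean_recursive(u, w)

# Uniform weights recover the plain running-mean recursion (*).
plain = weighted_mean_recursive(u, [1.0] * 50)
```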
|
|recurrence-relations|recursion|means|
| 1
|
Is centre of a Von Neumann algebra trivial?
|
If the centre of a dense subalgebra $A$ of the von Neumann algebra $M$ is trivial ( $Z_A \subset A$ ), can we conclude that $Z_M(M)$ is trivial, and hence that $M$ is a factor? Remember that we view $Z_A(A) \subset A$ within that subalgebra, not in the whole of $M$ .
|
The spirit of the situation is that taking a weak operator closure can make lots of things appear. Like when you take the double commutant of a projectionless algebra and you get a von Neumann algebra with more projections than you can wish for. Here we can exploit the fact that most commonly C $^*$ -algebras admit very distinct representations. For instance take $A$ to be the UHF $(2^\infty)$ algebra. This algebra is unital and simple, and so its center is trivial. Doing GNS for the trace we get a representation $\pi:A\to B (H)$ such that $\pi(A)''$ is the hyperfinite II $_1$ -factor. But you can choose to instead do GNS with respect to a well-chosen state, and now you get $\sigma:A\to B (H)$ such that $\sigma(A)''$ is a Powers' factor , which is a type III $_\lambda$ -factor. Note that because $A$ is simple, all representations are faithful. What we can do next is consider $\pi\oplus\sigma:A\to B (H\oplus H)$ . Because these representations map into factors they don't have subrepres
|
|functional-analysis|operator-theory|operator-algebras|von-neumann-algebras|
| 0
|
If a $n \times n$ matrix $A$ is not invertible, then is it possible that the classical adjoint matrix of $A$ is invertible?
|
Let $A$ be a $n \times n$ matrix over $\mathbb{R}$ or $\mathbb{C}$ . Let $\text{adj}(A)$ denote the classical adjoint matrix of $A$ (or adjugate matrix of $A$ ). It is known that $A \cdot \text{adj}(A) = \text{det}(A) I_n = \text{adj}(A) \cdot A$ . Thus, if $A$ is invertible, then $\text{det}(A) \neq 0$ , and so $\text{adj}(A)$ is also invertible, since $\frac{A}{\text{det}(A)} \cdot \text{adj}(A) = I_n = \text{adj}(A) \cdot \frac{A}{\text{det}(A)}$ . My question : Is there a case that $A$ is not invertible and $\text{adj}(A)$ is invertible?
|
Suppose $\operatorname{adj}(A)$ is invertible but $A$ is not. Then $\det(A)=0$ , and $A\operatorname{adj}(A)=\det(A)I$ implies $A=\det(A)\left(\operatorname{adj}(A)\right)^{-1}$ , i.e., $A$ is the zero matrix. In this case it's easy to see $\operatorname{adj}(A)$ is also the zero matrix (all of the cofactors of $A$ are zero), contradicting the assumption.
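A concrete sketch of the complementary fact: for a singular but nonzero $A$, $\operatorname{adj}(A)$ can be nonzero yet is still singular (here it has rank 1). The $3\times 3$ example matrix is arbitrary, with third row equal to the sum of the first two:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def adjugate3(A):
    # Classical adjoint of a 3x3 matrix: transpose of the cofactor matrix.
    adj = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            minor = [[A[r][c] for c in range(3) if c != j]
                     for r in range(3) if r != i]
            adj[j][i] = (-1) ** (i + j) * det2(minor)  # note the transpose
    return adj

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * det2([[m[1][1], m[1][2]], [m[2][1], m[2][2]]])
            - m[0][1] * det2([[m[1][0], m[1][2]], [m[2][0], m[2][2]]])
            + m[0][2] * det2([[m[1][0], m[1][1]], [m[2][0], m[2][1]]]))

# Singular (rank-2) but nonzero matrix: row 3 = row 1 + row 2.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [5.0, 7.0, 9.0]]
adjA = adjugate3(A)
```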
|
|linear-algebra|
| 1
|
Work done to pump water out of a conical tank into a window above it
|
Water is pumped from a conical tank of top radius 3 ft and a height of 5 ft to a window 10 ft above the tank. The tank is completely full of water. How much work is done? This is what I have so far: $$\int_0^{15}\pi(\text{radius of slice})^2 \cdot (62.4) (15-h)\,dh$$ I am having a problem figuring out the radius of the slice
|
We'll assume no frictional losses, since we aren't given any information about the flow. Let the density of water be $\rho = 62.4 \frac{\text{lbm}}{\text{ft}^3}$ , the base radius of the cone be $R = 3 \text{ ft}$ , the height of the cone be $h = 5 \text{ ft}$ , and the vertical distance from the top of the cone to the final water elevation be $d = 10 \text{ ft}$ . Of course, the gravitational acceleration is $g = 1 \frac{\text{lbf}}{\text{lbm}}$ . Consider an infinitesimally thin layer of water in the tank at height $y$ above the ground, with a height $d y$ . By similar triangles, the radius of this layer of water is $\frac{y}{h} R$ . Therefore, the mass of this layer is $d m = \rho d V = \pi \rho \frac{y^2}{h^2} R^2 d y$ . The change in gravitational potential energy in bringing this layer to the final elevation is $d U = g (h + d - y) d m$ . Therefore, the total energy required is $$\Delta U = \int_{y = 0}^{y = h} d U = \pi \rho g \frac{R^2}{h^2} \int_0^h y^2 (h + d - y) d y$$ $$=
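The integral above evaluates in closed form. The sketch below (with $\rho g = 62.4$ lbf/ft³ as in the answer) computes it both from the antiderivative and by a midpoint-rule cross-check; both give $\Delta U = 10530\pi \approx 33081$ ft·lbf:

```python
import math

rho_g, R, h, d = 62.4, 3.0, 5.0, 10.0

# Closed form: Delta U = pi * rho_g * R^2/h^2 * [(h + d) h^3/3 - h^4/4].
closed = math.pi * rho_g * R**2 / h**2 * ((h + d) * h**3 / 3 - h**4 / 4)

# Midpoint-rule cross-check of the same integral.
n = 100_000
dy = h / n
numeric = math.pi * rho_g * R**2 / h**2 * sum(
    ((k + 0.5) * dy) ** 2 * (h + d - (k + 0.5) * dy) * dy for k in range(n)
)
```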
|
|calculus|integration|
| 0
|
Calculating Coefficients of an N Degree Polynomial raised to an Arbitrary Power
|
Suppose you have $(a_0+a_1x+a_2x^2+...+a_nx^n)^k$ , and you want to expand and find a formula for the coefficients $\beta_j$ such that $\beta_j$ is the coefficient of the $x^j$ term. I understand that when all the coefficients $a_1, ..., a_n$ are equal to 1, you would get: $$\beta_j = \sum_{i=0}^{\lfloor\frac{j-n}{k}\rfloor}(-1)^i\binom{n}{i}\binom{j-ik-1}{n-1}$$ but how would you generalize this $\forall a_1, ..., a_n \in \mathbb{R}$ ?
|
Wilf, in 1.5.3 of generatingfunctionology shows how binomial coefficients are related to a polynomial generating function. My observations have been that we can do this for a polynomial like $p(x) = 1 + \sum_{i=0}^{k}c_ix^{e_i}$ where $e_i$ are positive integers and $e_0$ is the smallest. The coefficient of $x^r$ is generated by the polynomial that interpolates through the coefficients of $x_r$ in $p(x)^n$ for $n \in [0, \lfloor r/e_0 \rfloor]$ . Consider $f(x) = 2 + x^2 - 3x^3 + 7x^7$ . If we are interested in the coefficient of $x^{24}$ for a given power (potentially quite large) then we divide $f(x)$ by 2 to give $p(x) = f(x)/2 = 1 + x^2/2 - 3x^3/2+7x^7/2$ calculate the coefficient of $x^{24}$ in the powers of $p(x)$ from $0$ through $12$ , e.g. $(n,a_r(n)) = (0,0), (1,0), (2,0), (3,0), (4,-1029/4), ..., (12, -216565271/4096)$ make an interpolating polynomial through those points, $$a_{24}(n) = 2^{n}n(n - 3)(n - 2)(n - 1)(n^8 + 11820n^7 + 8400714n^6 + 292451040n^5 - 22255776231n^4 +
|
|combinatorics|polynomials|exponentiation|generating-functions|multinomial-theorem|
| 0
|
Understanding a step in a functional equation
|
I was trying to solve a functional equation and while I was pretty close to the answer there is a remaining case that has stumped me for a while now. Here is the problem: Determine all functions $f: \mathbb{N}^* \to \mathbb{N}^*$ satisfying $$ \frac{f(x)+y}{x+f(y)} + \frac{f(x)y}{xf(y)} = \frac{2(x+y)}{f(x+y)}, \qquad \forall x,y \in \mathbb{N}^*. $$ It's easy to see that the function $f(x)=x$ satisfies the condition, but there remains the issue of proving that it is the only function that satisfies the condition. I was able to prove that $f{(2x)}=2x$ so naturally I then considered the odd numbers; however, I ended up with two very nasty expressions. If we let $x=1$ and $y=2n$ we end up with this $$ \frac{2(1+2n)^2}{a(2+2n)}=f(2n+1)$$ where $a=f(1)$ , or if we let $x=2n$ and $y=1$ then we end up with: $$ \frac{a(2n+1)(2n+a)}{a(n+1)+n}=f(2n+1)$$ (In fact the intended solution uses the second substitution but I couldn't understand the rest of the explanation). Since we have a fraction the d
|
After proving that $f(2n)=2n$ , your claim that substituting $x=1$ and $y=2n$ implies that $$ \frac{2(1+2n)^2}{a(2+2n)}=f(2n+1) $$ is false, and I see where you made a mistake with your algebraic manipulations. Using $f(2n)=2n$ and $a=f(1)$ , I obtain instead that $$ \frac{a+2n}{1+2n}+a=\frac{2(2n+1)}{f(2n+1)}, $$ which expands to $$ \frac{a+2n+(2n+1)a}{2n+1}=\frac{2(2n+1)}{f(2n+1)}, $$ which rearranges to $$ f(2n+1)=\frac{2(2n+1)^2}{a+2n+(2n+1)a}\qquad (\star). $$ Note that this differs from your expression because $$ a+2n+(2n+1)a=2a+2n(a+1) $$ whereas your expression has the $2a+2n(a+1)$ replaced by $a(2+2n)$ . Once we obtain the corrected version of $(\star)$ , to conclude the argument boils down to finding the value of $a$ , and this can be done by substituting $x=1$ and $y=3$ into the original functional equation to obtain that $$ \frac{a+3}{1+f(3)}+\frac{3a}{f(3)}=2\qquad (\star\star). $$ On the other hand setting $n=1$ in $(\star)$ yields that $$ f(3)=\frac{18}{4a+2}=\frac{9}{2a
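Both facts can be checked exactly with rational arithmetic: that $f(x)=x$ satisfies the original equation, and that the corrected $(\star)$ with $a=1$ yields $f(2n+1)=2n+1$. A minimal sketch:

```python
from fractions import Fraction

def lhs(f, x, y):
    return Fraction(f(x) + y, x + f(y)) + Fraction(f(x) * y, x * f(y))

def rhs(f, x, y):
    return Fraction(2 * (x + y), f(x + y))

# f(x) = x satisfies the equation on a range of inputs (both sides equal 2).
identity_ok = all(lhs(lambda v: v, x, y) == rhs(lambda v: v, x, y)
                  for x in range(1, 20) for y in range(1, 20))

# (star) with a = 1: 2(2n+1)^2 / (a + 2n + (2n+1)a) reduces to 2n + 1.
star_ok = all(Fraction(2 * (2 * n + 1) ** 2, 1 + 2 * n + (2 * n + 1) * 1) == 2 * n + 1
              for n in range(1, 50))
```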
|
|functions|solution-verification|functional-equations|integers|natural-numbers|
| 1
|
Stolz-Cesaro theorem for 0/0 case when limit exists = 0
|
Consider this particular instance for the $0/0$ case of the Stolz-Cesaro Theorem: $$\lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n = 0 $$ with $a_n$ and $b_n$ two sequences of real numbers, $b_n$ strictly monotone, whereby $$\lim_{n \to \infty} \frac{a_{n+1} - a_{n}}{b_{n+1} - b_{n}} = 0 \,\,\,\,\,\,\,\, (1)$$ then also $$\lim_{n \to \infty} \frac{a_n}{b_n} = 0 \,\,\,\, \,\,\,\,(2)$$ it would appear to me that for those particular instances for which it is in addition verified that each $b_n$ is strictly positive and such that $b_n \neq b_{n+1}$ , then the condition for the sequence $b_n$ to be strictly monotone could be dispensed with. Is this true, or am I missing some subtle features? Update following Stefan's Answer counterexample: the counterexample shows that the conditions $b_n$ strictly positive, $b_n \neq b_{n+1}$ , and limit (1) vanishing, are alone not sufficient to ensure that also limit (2) consequently vanishes. Were the sequence $b_n$ strictly monotone, then that woul
|
No, the monotonicity of $(b_n)$ is essential. If you investigate the proof of the Stolz-Cesaro theorem, you will see that it is necessary for the increments $b_{n+1} - b_n$ to be (eventually) positive. For a simple counterexample consider the following: Let $$ a_n =\frac{1}{n}, \ \ \ \ \ \ b_n = \begin{cases} \frac{1}{n}, & \text{n even} \\\\ 0, & \text{n odd} \end{cases} $$ Then clearly $a_n \to 0$ and $b_n \to 0$ . Moreover, $$ \lvert b_{n+1} - b_n \rvert \geq \frac{1}{n+1} $$ and $$ \lvert a_{n+1} - a_n \rvert = \frac{1}{n(n+1)} $$ implying that $$ \left\lvert \frac{a_{n+1} - a_n}{b_{n+1} - b_n} \right\rvert \leq \frac{ \frac{1}{n(n+1)} } { \frac{1}{n+1} } = \frac{1}{n} \to 0. $$ However, $$ \frac{a_n}{b_n} \not\to 0 $$ as $b_{2n+1} = 0$ for all $n \in \mathbb{N}$ . For a counterexample without $0$ in the denominator you just have to add a small value to the $0$ s of $b$ , I just wanted to keep it simple and demonstrate the idea. As a side note: The same issues occur when using L'Hô
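The counterexample is easy to tabulate (a small sketch): the difference quotients shrink like $1/n$ while $a_n/b_n = 1$ along even $n$ (and is undefined along odd $n$, where $b_n = 0$):

```python
def a(n):
    return 1 / n

def b(n):
    # b_n = 1/n for even n, 0 for odd n: not monotone, but b_{n+1} != b_n.
    return 1 / n if n % 2 == 0 else 0.0

# |(a_{n+1} - a_n) / (b_{n+1} - b_n)| equals 1/(n+1) or 1/n, so it tends to 0.
quotients = [abs((a(n + 1) - a(n)) / (b(n + 1) - b(n))) for n in range(1, 2000)]
# Yet a_n / b_n = 1 exactly along the even subsequence.
ratios_even = [a(n) / b(n) for n in range(2, 2000, 2)]
```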
|
|real-analysis|calculus|limits|
| 1
|
what order polynomial gives the coefficient of $x^n$ in $P(x)^m$?
|
Given a univariate polynomial of degree $k$ raised to power $m$ with non-zero constant term $c_0$ and exponents $e_i \ge 1$ present (but not necessarily $k$ of them) it appears to me that $c_n/c_0^n$ (the coefficient of $x^n$ divided by the constant term of the polynomial raised to the power $m$ ) will be a polynomial of degree $s$ where $s$ is the largest number of elements needed to partition $n$ using values of $e_i$ . For example, consider $P(x) = 2 + x^2 + x^7 + x^{13}$ . Here, $k = 13$ and $e_i = {2, 7, 13}$ . The coefficient of $x^{51}$ in $P(x)^m$ depends on the value of $m$ until $m>22$ , after which the coefficient is given by a polynomial of degree 23. When we look at ways to partition 51 using $e_i$ we find there are 10 different ways: the smallest number of elements needed is 7 (2x2s + 3x7s + 2x13s) while the largest number of elements needed is $s = 23$ (22 x 2s and 1 x 7). Multinomial coefficients can be used to find the value of $c_{51}$ for a given $m$ , but I don't se
|
An answer to this question is given here where the method of creating a generating function for the coefficient of $x^r$ in $p(x)^n$ for arbitrary coefficients is given. The method applies to an all-ones polynomial, too, as described in this question. In general, if $p(x)$ contains exponents $\in [0, k]$ , then when it is raised to the $m$ th power, there will be terms of order $n \in [0, mk]$ and the coefficient of $x^n$ will be $p(0)^m \cdot a_n(m)$ where $a_n(m)$ is an $n$ th order polynomial in $m$ if the term is present, else 0: $a_0$ is 1, $a_1(m)$ is linear in $m$ , $a_2(m)$ is quadratic in $m$ , etc. The exact form of $a_r(m)$ depends on the terms present in $p(x)$ . Consider the simple $(2+x)^m$ : the first 4 terms in terms of $m$ are $\frac{2^m m (m - 2) (m - 1)}{48}\mathbf{x^3} + \frac{2^m m (m - 1)}{8}\mathbf{x^2} + \frac{2^m m}{2}\mathbf{x} + 2^m$ . So two methods of finding the coefficient of $x^n$ in $p(x)^m$ are to either 1) find partitions of $n$ that match the
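The brute-force route is direct to code: repeated convolution gives all coefficients of $p(x)^m$ exactly, and for $(2+x)^m$ they match the closed forms quoted above. A minimal sketch with exact rationals:

```python
from fractions import Fraction

def poly_pow(coeffs, m):
    # coeffs[i] = coefficient of x^i in p(x); returns the coefficients of
    # p(x)^m via m repeated convolutions.
    out = [Fraction(1)]
    for _ in range(m):
        new = [Fraction(0)] * (len(out) + len(coeffs) - 1)
        for i, ci in enumerate(out):
            for j, cj in enumerate(coeffs):
                new[i + j] += ci * cj
        out = new
    return out

# Coefficients of (2 + x)^m for several m.
powers = {m: poly_pow([Fraction(2), Fraction(1)], m) for m in range(3, 10)}
```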
|
|combinatorics|polynomials|
| 0
|
Help proving $\lnot$(G $\rightarrow$ $\lnot$A) $\vdash$ G.
|
I have been working on a natural deduction assignment for a couple of days and I went yesterday to ask my teacher for help, but he gave me no helpful information, so I'm asking here. I have tried a couple of different methods but each one has just ended up in a loop of opening indirect proofs to discharge the assumption of the previous proof. I'm trying to prove $\lnot$ (G $\rightarrow$ $\lnot$ A) $\vdash$ G. The only path I haven't tried yet is using a material conditional equivalency, but I don't think ( $\lnot$ G $\rightarrow$ A) $\equiv$ (G $\lor$ A) is the same as (G $\rightarrow$ $\lnot$ A) $\equiv$ ( $\lnot$ G $\lor$ A). At this point any advice that can help me find a path through this proof or a hint in the right direction would be amazing.
|
With Natural Deduction:

1) $\lnot (G \to \lnot A)$ --- premise
2) $\lnot G$ --- assumed [a]
3) $G$ --- assumed [b]
4) $\lnot A$ --- from 2) and 3) using rules for negation
5) $(G \to \lnot A)$ --- from 3) and 4) by $(\to \text I)$ discharging [b]
6) $\lnot \lnot G$ --- from 1) and 5) using rules for negation and discharging [a]
7) $G$ --- from 6) by Double Negation.

Conclusion: we have proved $\lnot (G \to \lnot A) \vdash G$ .
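For comparison, the same derivation can be rendered in Lean 4 using core classical primitives (a sketch; `Classical.byContradiction` plays the role of the indirect proof, and `absurd` closes the $G$/$\lnot G$ clash):

```lean
-- Assume ¬G; then any proof of G clashes with ¬G, so G → ¬A holds
-- vacuously, contradicting the premise; conclude G classically.
theorem neg_imp_elim (G A : Prop) (h : ¬(G → ¬A)) : G :=
  Classical.byContradiction (fun hg : ¬G => h (fun g : G => absurd g hg))
```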
|
|logic|solution-verification|natural-deduction|
| 0
|