| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Is the density of scale mixtures of Normal-Gamma log-concave?
|
Consider the following density function (unnormalized) $$p(y;b)=\int_{0}^{\infty}\dfrac{\exp\left(-\frac{y^2}{2\left(b+\frac 5u\right)}\right)}{\sqrt{b+\frac 5u}} u^{1/2}\exp\left(-\dfrac{2u}{3}\right)du.$$ Is it true that $p(y;b)$ is a log-concave function in $b$ when $y$ is considered fixed? My attempt: First of all, it is easy to plot $\log p(y;b)$ as a function of $b$ for a fixed value of $y$, and I've gotten a good hint that the curve is indeed concave. Note that one can think of $p(y;b)$ as a scale mixture of normal-gamma densities. I am aware that normal and gamma both are log-concave densities. However, I am not immediately able to see why that makes $p(y;b)$ log-concave as well. It is useful to know that the integral cannot be put into any standard closed form. One idea I had was to look at the integral as an expectation $p(y;b)=\mathbb{E}_u[\text{integrand}]$, where $u\sim \text{Gamma}(3/2,3/2)$, but this route requires the joint (log)concavity of the integrand in $(b,u)$.
|
I realized that the function $p(y;b)$ cannot be log-concave for all fixed $y$. Let me show this for $y=0$. In this case, $$p(0;b)=\int_{0}^{\infty}\dfrac{1}{\sqrt{b+\frac 5u}} u^{1/2}\exp\left(-\dfrac{2u}{3}\right)du$$ is both convex and log-convex over $b>0$. Considering that the function $$\frac{1}{\sqrt{b+\frac 5u}}$$ is well-defined and convex over $b>0$ for any fixed value of $u>0$, $p(0;b)$ is also convex in $b$. Moreover, after applying the transformation $x=\frac{5}{u}$, the integral used in the definition of $p(0;b)$ can be calculated explicitly, see this Wolfram link: explicit formula of $p(0;b)$. The log of this function, which is plotted here: plot of $\log p(0;b)$, is clearly a convex function over $b>0$. Responding to a comment below on the unimodality of the following function: $$f(b)=\mathbb E_{N} \left [ -\log \mathbb E_{G} \left[\dfrac{\exp\left(-\frac{N^2}{2\left(b+\frac 5G\right)}\right)}{\sqrt{b+\frac 5G}} \right ]\right ] $$ with $N \sim \text{Normal}$ …
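Not part of the original answer, but the $y=0$ claim is easy to probe numerically. The sketch below approximates the integral with a plain trapezoidal rule (grid size and cutoff are my own choices) and checks the sign of the second difference of $\log p(0;b)$: a positive value indicates log-convexity, consistent with the answer and against log-concavity.

```python
import math

def p(y, b, n=20000, u_max=80.0):
    """Trapezoidal approximation of the unnormalized density
    p(y;b) = int_0^inf exp(-y^2/(2(b+5/u))) / sqrt(b+5/u) * u^(1/2) exp(-2u/3) du.
    The integrand vanishes at u = 0 and is negligible beyond u_max."""
    h = u_max / n
    total = 0.0
    for i in range(1, n + 1):
        u = i * h
        s = b + 5.0 / u
        f = math.exp(-y * y / (2.0 * s)) / math.sqrt(s) * math.sqrt(u) * math.exp(-2.0 * u / 3.0)
        total += f * (0.5 if i == n else 1.0)  # endpoint gets half weight
    return total * h

# Second difference of log p(0; b) on an evenly spaced grid in b:
# a positive value means log-convex there, hence not log-concave.
bs = [1.0, 2.0, 3.0]
logs = [math.log(p(0.0, b)) for b in bs]
second_diff = logs[0] - 2 * logs[1] + logs[2]
print(second_diff > 0)
```

With the grid above the second difference comes out clearly positive, matching the answer's exact argument that a mixture of the log-convex functions $(b+5/u)^{-1/2}$ is log-convex.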
|
|statistics|probability-distributions|convex-analysis|convex-optimization|density-function|
| 1
|
Is it possible to construct a real number theory on Peano arithmetic?
|
I know how to construct $\mathbb{Z}, \mathbb{Q}, \mathbb{R}$ from $\mathbb{N}$ in set theory. For example, the construction of $\mathbb{Z}$ is, $$\mathbb{Z}=\mathbb{N}^2/\sim$$ $$(a, b)\sim(c, d)\Leftrightarrow a+d=b+c$$ However, I do not know how to construct tuples and quotients in PA. Is it possible to construct $\mathbb{R}$ in Peano arithmetic?
|
See Peano Axioms; it is possible to construct a real number theory.
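Not in the original answer, but the technical obstacle the question raises — "I do not know how to construct tuples and quotients in PA" — is usually handled by coding pairs of naturals as single naturals. A standard device is the Cantor pairing function, which is definable by an arithmetic formula; here is a sketch in Python of the bijection and its inverse.

```python
import math

def pair(a, b):
    """Cantor pairing: a bijection N x N -> N, definable in PA."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    b = z - w * (w + 1) // 2
    return w - b, b

# Every natural codes exactly one pair, so a quotient such as Z = N^2/~ can be
# handled by working with canonical representatives among the codes.
assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))
print(pair(3, 4))
```

Iterating the pairing gives finite tuples; the real obstruction discussed in the literature is that a completed $\mathbb{R}$ needs sets of naturals, which first-order PA does not quantify over, so only definable reals can be treated this way.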
|
|logic|peano-axioms|
| 0
|
Find all entire functions such that $|f(z+z')|\leq |f(z)| + |f(z')|$, for all $z,z'\in\mathbb{C}$
|
Find all entire functions such that $|f(z+z')|\leq |f(z)| + |f(z')|$, for all $z,z'\in\mathbb{C}$. In particular, letting $z=z'$ yields $|f(2z)|\leq2|f(z)|$. This gives that $\frac{f(2z)}{f(z)}=c$ for some $c\in\mathbb{C}$. By considering continuity at $0$, we have $f(0)=\lim_{n\to\infty}\frac{1}{c^n} f(1)$. Then $c\in\mathbb{R}$ and $\frac{1}{c}\leq1$. If $c=1$, then $f(z)$ is the constant function equal to $f(0)$. But if $\frac{1}{c}<1$, we just have that $f(0)=0$. This does not seem to lead anywhere. Is there anything else to be considered? Hints will be appreciated. Thank you.
|
Let $z'=z$, then we have $$|f(2z)|\leq|2f(z)|,\quad z\in\mathbb C.$$ So $$f(2z)=2\alpha f(z),\quad z\in\mathbb C \tag{1}$$ for some $\alpha$ with $|\alpha|\leq1$ (for this, we can use $\left|\frac{f(2z)}{2f(z)}\right|\leq1$ and Liouville's theorem; for detail, you can refer to $|f(z)|\leq|\sin z|$ ). If $\alpha=0$, there is nothing to say. Suppose $0<|\alpha|\leq1$; by $(1)$, we can get, for $n\geq1$, $$2^nf^{(n)}(2z)=2\alpha f^{(n)}(z).$$ Taking $z=0$, we have $$2^nf^{(n)}(0)=2\alpha f^{(n)}(0), \quad \forall n\geq1.$$ This implies $f^{(n)}(0)=0$ for $n\geq2$, since $|2\alpha|\leq 2<2^n$ for $n\geq 2$. So $$f(z)=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}z^n=f(0)+f'(0)z,\quad \forall z\in\mathbb C.$$ Back to $(1)$ , we have $$f(0)+2f'(0)z=2\alpha(f(0)+f'(0)z),\quad z\in\mathbb C,$$ then we have $$ \begin{cases} f(0)=2\alpha f(0) \cr 2f'(0)=2\alpha f'(0) \end{cases}, $$ this implies $f(0)=0$ or $f'(0)=0$ . Hence we have $$f(z)\equiv f(0)\quad \text{or}\quad f(z)=f'(0)z.$$
|
|complex-analysis|functions|inequality|entire-functions|
| 1
|
Find the radius of the circle tangent to $x^2, \sqrt{x}, x=2$
|
I'm taking up a post that was closed for lack of context because I'm very interested in it: Let $(a,b)$ be the center of this circle. It seems intuitive that $b=a$, but I have not been able to prove it formally, although I know that two mutually inverse functions are symmetric with respect to the first bisector $y=x$. Then let $(X,X^2)$ be the point of tangency with $y=x^2$. I think we're going to use the formula for the distance from $(a,a)$ to the line $y-X^2=2X(x-X)$. We obviously have the relation $r=2-a$. The normal at $(X,X^2)$ passes through $(a,a)$. I'm not sure if my notations are the best to elegantly solve the exercise. I hope you will share my enthusiasm for this lovely exercise that I have just discovered thanks to MSE.
|
Let $Q(b,b^2)$ and $P(a,\sqrt a)$ be the tangency points of the circle on the curves $y=x^2$ , $y=\sqrt x$ respectively. Then the intersection of the corresponding tangent lines is $$R\left(\frac{2\sqrt a\,b^2+a}{4\sqrt a\, b-1},\frac{2ab+b^2}{4\sqrt a\, b-1}\right).$$ Since $QO=PO$ where $O$ is the center of the circle, we have $QR=PR$ . Now $QR^2=PR^2$ gives $$(2a-2b)(2\sqrt a\,b^2+a)+(2\sqrt a-2b^2)(2ab+b^2)=(a^2+a-b^4-b^2)(4\sqrt a\,b-1).$$ WolframAlpha finds $b=\sqrt a$ . I am still looking at the equation. When $b=\sqrt a$ , we have two solutions of the problem in OP. The smaller circle has the center $O\approx(1.65544,1.65544)$ and radius $r\approx 0.34456$ .
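Not from the answer, but the quoted center and radius are easy to cross-check numerically. Assuming the symmetric center $(a,a)$ and $r=2-a$ from the OP, tangency to $y=x^2$ at $(t,t^2)$ means the squared distance is stationary, which gives $a=(t+2t^3)/(1+2t)$, and equal to $r^2$; the remaining one-variable equation can be solved by bisection.

```python
import math

# Center (a, a), radius r = 2 - a (tangency to the line x = 2).
# Tangency to y = x^2 at (t, t^2):
#   stationarity: (t - a) + 2t(t^2 - a) = 0  =>  a = (t + 2t^3)/(1 + 2t)
#   distance:     (t - a)^2 + (t^2 - a)^2 = (2 - a)^2
def F(t):
    a = (t + 2 * t**3) / (1 + 2 * t)
    return (t - a) ** 2 + (t**2 - a) ** 2 - (2 - a) ** 2

lo, hi = 1.2, 1.4          # bracket chosen by inspection of F's sign change
for _ in range(60):
    mid = (lo + hi) / 2
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
t = (lo + hi) / 2
a = (t + 2 * t**3) / (1 + 2 * t)
r = 2 - a
print(round(a, 5), round(r, 5))  # ~1.65544, ~0.34456
```

The bisection reproduces the smaller circle of the answer, $O\approx(1.65544,1.65544)$ with $r\approx0.34456$; by the symmetry $b=\sqrt a$ the tangency to $y=\sqrt x$ comes for free.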
|
|geometry|functions|analytic-geometry|
| 0
|
Difficult calculus question that includes composite functions, primes, roots, etc
|
Let $f(x)$ be a cubic whose coefficient of the leading term is positive and $g(x) = e^{\sin(\pi x)} - 1$ . The composite function $h(x) = g(f(x))$ is defined for all real numbers, and has a local maximum at $x=0$ . In the open interval $(0,3)$ the function $h(x)$ intersects the line $y= 1$ seven times. Given that $f(3) =1/2$ , $f'(3) = 0$ , and $f(2) = \frac{q}{p}$ , find the value of $p+q$ given that $p$ and $q$ are coprime natural numbers. This is my working so far: Since $h(x)$ has a local maximum at $x=0$ , the derivative of $h(x)$ vanishes at $x=0$ , i.e. $$h'(0) = g'(f(0)) \cdot f'(0) = 0$$ We need to find the derivative of $g(x)$ first to find an expression for the derivative of $h(x)$ , so $$\frac{d}{dx} g(x) = \frac{d}{dx} (e^{\sin(\pi x)} - 1)$$ $$g'(x)= \pi \cos(\pi x) \cdot e^{\sin(\pi x)}$$ Hence $$h'(0) = \pi \cdot \cos(\pi f(0)) \cdot e^{\sin(\pi f(0))} \cdot f'(0) = 0$$ $$\cos(\pi f(0)) \cdot e^{\sin(\pi f(0))} \cdot f'(0) = 0$$ Given that any exponential fu
|
This answer proves that there are infinitely many $(p,q)$ satisfying the given conditions. Proof : Let $f(x):=ax^3+bx^2+cx+d$ with $a\gt 0$ . Then, we have $$\begin{align}f'(x)&=3ax^2+2bx+c \\\\f''(x)&=6ax+2b \\\\g'(x)&=\pi e^{\sin(\pi x)}\cos(\pi x) \\\\g''(x)&=\pi^2 e^{\sin(\pi x)}\bigg(\cos^2(\pi x)-\sin(\pi x)\bigg) \\\\h'(x)&=g'(f(x))f'(x) \\\\h''(x)&=g''(f(x))(f'(x))^2+g'(f(x))f''(x)\end{align}$$ Since $h(x)$ has a local maximum at $x=0$ , we have to have $h'(0)=0$ and $h''(0)\le 0$ . Since $$h'(0)=\pi e^{\sin(\pi d)}\cos(\pi d)c$$ $$h''(0)=\pi^2c^2 e^{\sin(\pi d)}\bigg(\cos^2(\pi d)-\sin(\pi d)\bigg)+2b\pi e^{\sin(\pi d)}\cos(\pi d)$$ we have $$\cos(\pi d)c=0\tag1$$ $$-\pi c^2\sin(\pi d)+2b\cos(\pi d)\le 0\tag2$$ In the following, let us consider the case $\cos(\pi d)\not=0$ , i.e. $d\not=\frac{1}{2}+k$ where $k$ is an integer. Then, we have $$c=0\tag3$$ $$b\cos(\pi d)\le 0\tag4$$ We have $f(3)=\frac 12,f'(3)=0$ and $f(2)=\frac qp$ , so $$27a+9b+d=\frac 12$$ $$27a+6b=0$$ $$8a+4b
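The "infinitely many $(p,q)$" claim can be sanity-checked with exact rational arithmetic. The sketch below (my own addition; it imposes only $c=0$, $f(3)=1/2$, $f'(3)=0$, not the seven-intersections or sign conditions from the problem) shows that these constraints leave a one-parameter family with $f(2)=\frac{7a}{2}+\frac12$, so $f(2)$ is not pinned down.

```python
from fractions import Fraction

def family(a):
    """Coefficients b, d of f(x) = a x^3 + b x^2 + d with c = 0,
    f'(3) = 0 (27a + 6b = 0) and f(3) = 1/2 (27a + 9b + d = 1/2)."""
    b = Fraction(-9, 2) * a
    d = Fraction(1, 2) + Fraction(27, 2) * a
    return b, d

values = []
for a in (Fraction(1, 7), Fraction(2, 7), Fraction(3, 7)):
    b, d = family(a)
    f = lambda x: a * x**3 + b * x**2 + d
    f_prime = lambda x: 3 * a * x**2 + 2 * b * x
    assert f(3) == Fraction(1, 2) and f_prime(3) == 0
    values.append(f(2))          # f(2) = 7a/2 + 1/2 varies with a

print(values)  # three distinct rational values of f(2)
```

Each choice of the leading coefficient $a>0$ produces a different $f(2)=q/p$, which is exactly why the answer argues the remaining conditions must be brought in to (possibly) fix $p+q$.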
|
|calculus|functions|derivatives|polynomials|
| 1
|
PMF of Sum of Values Selected Without Replacement
|
I’m currently stuck on this question. You have $m$ values, $X_1, X_2, …, X_m$ where each value is selected without replacement from the range $[1, m+n]$ inclusive. $Y = X_1+ X_2+ \dots+ X_m$ . I understand that the expectation of Y can be found through linearity of expectation, and is given by $E[Y] = \frac{m (m + n + 1)}{2}$ . What I am struggling with is solving for the probability mass function and variance of Y. I initially tried a stars-and-bars / combinatorics approach, but was unable to work around the constraints of each $X_i$ being within $[1, m+n]$ and the fact that there are no repeats/the numbers are selected without replacement.
|
I take it that you intended $Y$ to be the sum of the $X_i$ . (That would lead to the expected value you give.) I doubt that you’ll find a closed form for the probability distribution, since it’s related to partition counts, which can usually not be expressed in closed form. But we can derive a generating function and use it to obtain the variance. Let $y^k$ represent $Y=k$ , and let $t^m$ label a sum of $m$ values. Then the generating function for the number of outcomes where $m$ values in $[1,m+n]$ sum to $k$ is $$ f(y,t)=\prod_{j=1}^{m+n}\left(1+ty^j\right)\;. $$ This generating function is ordinary in $y$ but exponential in $t$ , since there are $m!$ ways to order a partition into $m$ distinct parts represented by $t^m$ ; but that needn’t concern us here since we will only be considering quotients of coefficients for fixed $m$ , so the factor $m!$ cancels. With the coefficient extraction operator $[t^m]\sum_kc_kt^k=c_m$ the expected value of $Y$ is $$ \mathsf E[Y] = \frac{[t^m]\left.\f
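Not part of the answer, but for small parameters everything can be enumerated exactly, which is a good way to test any formula you derive. The sketch below (with my own choice $m=3$, $n=2$) tabulates the PMF of $Y$ over all $\binom{m+n}{m}$ equally likely draws and confirms the stated mean together with the finite-population variance $\operatorname{Var}(Y)=\frac{m\,(m+n+1)\,n}{12}$.

```python
from itertools import combinations
from fractions import Fraction

m, n = 3, 2                       # draw m distinct values from [1, m+n]
N = m + n
outcomes = list(combinations(range(1, N + 1), m))

# PMF of Y: uniform over the C(N, m) unordered draws
pmf = {}
for c in outcomes:
    s = sum(c)
    pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, len(outcomes))

mean = sum(s * p for s, p in pmf.items())
var = sum((s - mean) ** 2 * p for s, p in pmf.items())

assert mean == Fraction(m * (N + 1), 2)            # = m(m+n+1)/2
assert var == Fraction(m * (N + 1) * (N - m), 12)  # = m(m+n+1)n/12
print(dict(sorted(pmf.items())))
```

The variance identity used in the assertion is the classical without-replacement formula $\operatorname{Var}(Y)=m\sigma^2\frac{N-m}{N-1}$ with $\sigma^2=\frac{N^2-1}{12}$, simplified; the generating-function route in the answer should reproduce it.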
|
|probability|combinatorics|probability-distributions|expected-value|variance|
| 1
|
Looking for a challenging task or variant related to the knight's tour problem
|
I recently took it upon myself to investigate the knight's tour problem for a math assessment. I decided to investigate how the problem differs with a general knight $(m, n)$ that moves $m$ squares along one axis and $n$ squares along the other. I feel like I have dug myself into a hole as I have only been able to prove fairly obvious instances where the knight's tour is impossible. I am struggling to find a specific task that is mathematically challenging yet not impossible. To clarify, I am not asking for any answers or work regarding my assignment, but rather some feasible ideas to explore related to a general knight $(m, n)$ . I have been considering looking at permutations of the knight vs. the general knight within the first few moves but I feel this task has no real significance about how a general knight differs from a traditional knight in the knight's tour problem. Are there smaller tasks I can tackle to build up to more complex ideas or is the Hamiltonian path problem mostly
|
Late reply, but hoping it can still be helpful. Now, if I am not mistaken, you are simply describing the $(m,n)$ leapers from fairy chess on a $2$-D chessboard (see Leapers $(m, n)$ ). If this is not the case, let me introduce my idea for a $k$ -dimensional $x_1 \times x_2 \times \cdots \times x_k \subseteq \mathbb{Z}^k$ chessboard as $k$ becomes sufficiently large for the given leaper. If $\sqrt{m^2+n^2} \notin \mathbb{N}$ , then the given $(m,n)$ would be uniquely defined by the (fixed) Euclidean distance covered at any move and thus we can go beyond the usual limit of the planar moves. As an example, the FIDE definition of the knight stated in the FIDE Handbook points to this kind of solution and I recently proved the counterintuitive result that such (Euclidean) knight's tour exists even on each $2 \times 2 \times \cdots \times 2$ chessboard with at least $2^6$ cells (Reference here: https://arxiv.org/abs/2309.09639 ).
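Not in the original answer, but a concrete "smaller task" in the spirit of the question is automating impossibility checks: a tour is a Hamiltonian path in the leaper's move graph, so if that graph is disconnected no tour exists. A minimal breadth-first search sketch (board sizes are my own example choices):

```python
from collections import deque

def leaper_graph_connected(m, n, rows, cols):
    """Necessary condition for an (m, n)-leaper tour on a rows x cols board:
    the move graph must be connected (a tour is a Hamiltonian path)."""
    moves = {(dr * m, dc * n) for dr in (1, -1) for dc in (1, -1)} | \
            {(dr * n, dc * m) for dr in (1, -1) for dc in (1, -1)}
    start = (0, 0)
    seen, queue = {start}, deque([start])
    while queue:                              # plain BFS from a corner
        r, c = queue.popleft()
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen) == rows * cols

print(leaper_graph_connected(1, 2, 8, 8))  # ordinary knight: connected
print(leaper_graph_connected(2, 2, 8, 8))  # (2,2)-leaper: color-bound, disconnected
```

This mechanizes the obvious parity/coloring obstructions (e.g. $m+n$ even traps the leaper on one color) and scales to scanning many $(m,n)$ and board sizes, which could feed the more ambitious Hamiltonian-path experiments.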
|
|probability|combinatorics|graph-theory|permutations|knight-tours|
| 0
|
Proof verification for $\sup A \leq \inf B$.
|
Let $A$ and $B$ be two sets such that $a \leq b$ for every $a$ in $A$ and $b$ in $B$ . We aim to prove that $\sup A \leq \inf B$ . Assume, for the sake of contradiction, that $\sup A > \inf B$ . Then, there exists $b_1 \in B$ such that $\sup A > b_1$ . Since $a \leq b_1$ for every $a$ in $A$ , $b_1$ is an upper bound for $A$ . However, $b_1 < \sup A$ , contradicting the definition of $\sup A$ as the least upper bound of $A$ . Thus, our assumption $\sup A > \inf B$ must be false, leading to the conclusion that $\sup A \leq \inf B$ . Therefore, we have shown that $\sup A \leq \inf B$ .
|
Suppose $\sup A>\inf B$: let $\epsilon :=\sup A - \inf B>0$. By definition of $\inf$ and $\sup$, $$\exists\, b_1 \in B,\ a_1\in A \text{ such that }\inf B\leq b_1 <\inf B+\tfrac{\epsilon}{2}\ \text{ and }\ \sup A-\tfrac{\epsilon}{2}< a_1\leq \sup A.$$ Then $$b_1 <\inf B+\tfrac{\epsilon}{2}=\sup A-\tfrac{\epsilon}{2}< a_1,$$ so $b_1<a_1$, contradicting $a_1\leq b_1$. This leads us to a contradiction. $\square$
|
|real-analysis|solution-verification|
| 1
|
Proof verification for $\sup A \leq \inf B$.
|
Let $A$ and $B$ be two sets such that $a \leq b$ for every $a$ in $A$ and $b$ in $B$ . We aim to prove that $\sup A \leq \inf B$ . Assume, for the sake of contradiction, that $\sup A > \inf B$ . Then, there exists $b_1 \in B$ such that $\sup A > b_1$ . Since $a \leq b_1$ for every $a$ in $A$ , $b_1$ is an upper bound for $A$ . However, $b_1 < \sup A$ , contradicting the definition of $\sup A$ as the least upper bound of $A$ . Thus, our assumption $\sup A > \inf B$ must be false, leading to the conclusion that $\sup A \leq \inf B$ . Therefore, we have shown that $\sup A \leq \inf B$ .
|
There is also a direct proof without contradiction. Suppose any $b\in B$ is given. Since $a\le b$ for every $a\in A$ , $b$ is an upper bound of $A$ . Thus $\sup A\le b$ by definition of $\sup$ . Since this holds for any $b\in B$ , $\sup A$ is a lower bound of $B$ . Thus $\sup A\le\inf B$ by definition of $\inf$ .
|
|real-analysis|solution-verification|
| 0
|
How do you find the initial amount of a decay problem when you don't have one in the problem?
|
Suppose a sample of a certain substance decayed to $65.2\%$ of its original amount after $300$ days. What is the half-life (in days) of this substance? (Round your answers to two decimal places.) I just can't process how to get all the values for the Decay Rate Formula: $M(t) = M(0)e^{kt}$
|
Since the amount after $300$ days is given as a decimal fraction of the original, the initial amount is implicitly $1$, and is not even needed for a solution. Also, $M(t) = M(0)e^{kt}$ comes from solving a differential equation for continuous growth/decay, and isn't really needed for simpler problems like this where a daily growth/decay rate suffices. Let the decay rate per day be $d$, and $t$ days the half-life; then $$d^t = 0.5 \tag1$$ $$d^{300} = 0.652 \tag2$$ Taking logs of equations $(1)$ and $(2)$, dividing and simplifying, $t = 300\cdot\frac {\log(0.5)}{\log(0.652)} \approx 486.18$ days.
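The two-equation argument above translates directly into a couple of lines of code (my own illustration, not from the answer):

```python
import math

# M(t) = M(0) * d**t with daily decay factor d, and M(300)/M(0) = 0.652.
d = 0.652 ** (1 / 300)            # daily decay factor from equation (2)
T = math.log(0.5) / math.log(d)   # half-life: d**T = 0.5, i.e. equation (1)
print(round(T, 2))  # 486.18
```

Equivalently $T = 300\,\ln(0.5)/\ln(0.652)$; the base of the logarithm cancels, which is why the answer can write $\log$ without specifying it.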
|
|calculus|algebra-precalculus|exponential-function|
| 0
|
Why does no one use the product formula for the sine function to calculate $\pi$?
|
$$\sin(\pi x)=\pi x \prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)$$ $$\pi = \frac{\sin(\pi x)}{x\prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)}$$ Let $x=\frac{1}{2}$ $$\pi = \frac{2}{\prod_{n \ge 1}\left(1-\frac{1}{4 n^2}\right)}$$ The question is: why does no one use this formula to calculate $\pi$? Maybe I am wrong, but I have never seen anyone use this. The answer must be that this formula converges very slowly to $\pi$ and is not efficient at all. But how slowly does it converge to $\pi$? How many terms does it need to reach the correct first 10, 100, 1000 decimals?
|
Let $$\tilde\pi_{(m)} = \frac{2}{\prod_{n= 1}^m\left(1-\frac{1}{4 n^2}\right)}=\pi \,\frac{ \Gamma (m+1)^2}{\Gamma \left(m+\frac{1}{2}\right) \Gamma \left(m+\frac{3}{2}\right)}$$ $$\frac{\tilde\pi_{(m+1)}-\tilde\pi_{(m)}} {\tilde\pi_{(m)}}=\frac{1}{(2 m+1) (2 m+3)}\sim \frac 1{4m^2}$$ So, if you want that between two successive terms the relative difference be $10^{-k}$ , you need $$m \sim \frac 12 \sqrt{10^k}$$
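Not part of the answer, but the same asymptotics can be seen empirically; note that the *absolute* relative error behaves like $1/(4m)$ (each extra correct digit costs ten times as many factors), which is even harsher than the successive-difference criterion above.

```python
import math

def pi_wallis(m):
    """Truncated Wallis-type product: 2 / prod_{n=1}^m (1 - 1/(4 n^2))."""
    prod = 1.0
    for n in range(1, m + 1):
        prod *= 1.0 - 1.0 / (4.0 * n * n)
    return 2.0 / prod

# Relative error scales like 1/(4m): the last column should approach 1.
for m in (10, 100, 1000):
    err = (math.pi - pi_wallis(m)) / math.pi
    print(m, err, err * 4 * m)
```

So after a thousand factors the approximation is still only good to about three decimals, in line with the $m\sim\frac12\sqrt{10^k}$ estimate for stabilizing successive terms.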
|
|calculus|numerical-methods|pi|infinite-product|
| 1
|
Inversion of a matrix equation
|
Is there a general way to invert (solve for $u$ ) this? $$\sum_{ij}R_{ijk}a_iu_j = -x_k$$ With $a,u,x \in \mathbb{R}^N$ . $R_{ijk}$ is symmetric in the last two indices. So really I'm trying to invert this: $$A= \begin{bmatrix} a'R_1 \\ a'R_2 \\ ...\\ a'R_N \end{bmatrix}$$ Could there be a nice formula for this inverse if all the $R_i$ are invertible? If you multiply $A$ by a matrix that has as columns $R_i^{-1}a$ you get that it's equal to $a'aI_n+ E$ where $E$ is traceless (it has $0$ in every element of the diagonal).
|
I gave the bounty to greg because of really nice matrix calculus (and helpfulness in general). What I was looking for precisely was something along the lines of this formula: $$A^{-1} = \frac{1}{|A|}\text{adj} A$$ And from this I can say that the numerator is a polynomial in $a$ of degree $n-1$ and the denominator a polynomial of degree $n$ .
|
|linear-algebra|matrix-equations|tensors|multilinear-algebra|index-notation|
| 1
|
Try to prove $\lim_{n \to \infty}n(\ln 2-A_n) = \frac{1}{4}$
|
$$A_n=\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n}$$ Try to prove $$\lim_{n \to \infty}n(\ln 2-A_n) = \frac{1}{4}$$ I tried to decompose $\ln 2$ as $$\ln(2n)-\ln(n)=\ln\left(1+\frac{1}{2n-1}\right)+\dots+\ln\left(1+\frac{1}{n}\right)\;,$$ but I can't continue. Is that right?
|
Use Stolz's theorem of type $\frac00$ : $$\lim_{n\to\infty}n(\ln 2-A_n) =\lim_{n\to\infty}\frac{\ln 2-A_n}{\frac1n} =\lim_{n\to\infty}\frac{(\ln 2-A_n)-(\ln 2-A_{n+1})}{\frac1n-\frac1{n+1}}\\ =\lim_{n\to\infty}\frac{\frac1{2n+1}-\frac{1}{2n+2}}{\frac1{n(n+1)}} =\lim_{n\to\infty}\frac{n(n+1)}{(2n+1)(2n+2)} =\frac{1}{4}.$$
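A quick numerical illustration of the limit (my own addition, not part of the answer):

```python
import math

def A(n):
    """A_n = 1/(n+1) + ... + 1/(2n)."""
    return sum(1.0 / (n + k) for k in range(1, n + 1))

# n (ln 2 - A_n) should approach 1/4 as n grows.
for n in (10, 100, 1000, 10000):
    print(n, n * (math.log(2) - A(n)))
```

The printed values creep up to $0.25$ at rate $O(1/n)$, consistent with the sharper expansion $A_n=\ln 2-\frac{1}{4n}+O(n^{-2})$.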
|
|calculus|
| 0
|
Random walk hitting probability
|
If $p_i$ is the probability of hitting $i$ when starting at 0 on a simple random walk on the integers, I am struggling to understand thoroughly why $p_i=p_1^i$ due to the Markov property. I understand heuristically, hitting $i$ is ‘like hitting 1 $i$ times’, but I would love to see a formal derivation/proof of why this is the case.
|
We can show this by induction: for $i=1$ the result is obvious. So suppose $p_i = p_1^i$ for some $i\ge 1$ . We want to show $p_{i+1} = p_1p_i$ . Define a stopping time $\tau$ to be the first time our SRW $X$ hits $i$ ; formally we let $$\tau = \inf\{n\ge 1: X_n = i\}.$$ Then the event $\{\tau<\infty\}$ is precisely the event that $X$ hits $i$ , so $\mathbb{P}(\tau<\infty)=p_i$ . The strong Markov property now tells us that conditional on $\{\tau<\infty\}$ , $(X_{n})_{n\ge \tau}$ is independent of $(X_n)_{n\le \tau}$ given $X_\tau$ . Therefore conditional on $\{\tau<\infty\}$ , $(X_n)_{n\ge \tau}$ is a SRW starting from $X_\tau = i$ . In particular if $Y$ is a SRW starting from $i$ , then by translation invariance $$\mathbb{P}(\exists n\ge \tau : X_n=i+1\mid\tau<\infty)=\mathbb{P}(\exists n\ge 0 : Y_n=i+1)=p_1.$$ Since $X$ must hit $i$ first if it is to hit $i+1$ we have $$\begin{align*} p_{i+1}&=\mathbb{P}(\exists n\ge \tau : X_n=i+1,\ \tau<\infty)\\ &=\mathbb{P}(\exists n\ge \tau : X_n=i+1\mid\tau<\infty)\,\mathbb{P}(\tau<\infty)=p_1p_i=p_1^{i+1}. \end{align*}$$
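Not part of the answer, but the identity $p_i=p_1^i$ can be seen in simulation. For the *symmetric* walk every $p_i=1$, so a biased walk (up-probability $q<1/2$, where classically $p_1=q/(1-q)$) is more instructive; the floor, trial count and seed below are my own choices, and walks that drift far below the start are counted as never hitting.

```python
import random

def hit_prob(i, q, trials=20000, floor=-40, seed=1):
    """Monte Carlo estimate of P(walk started at 0 ever hits i > 0),
    up-step with probability q, down-step with probability 1-q (q < 1/2).
    Walks absorbed at `floor` are counted as misses (downward drift)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = 0
        while floor < x < i:
            x += 1 if rng.random() < q else -1
        hits += (x == i)
    return hits / trials

q = 0.4
p1 = q / (1 - q)          # classical gambler's-ruin value of p_1
est = hit_prob(3, q)
print(est, p1 ** 3)       # the estimate should sit near p_1^3
```

With $q=0.4$ we get $p_1=2/3$ and the estimated $p_3$ lands close to $(2/3)^3\approx0.296$, matching "hitting $3$ is like hitting $1$ three times in a row".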
|
|probability|markov-chains|markov-process|random-walk|
| 0
|
Gil-Pelaez Formula consistently ends up being equal to 0
|
For my first post on Math Stackexchange, I ask your help on a specific issue regarding the Gil-Pelaez formula. I have tried various versions of the formula to get the result right but I still cannot spot where my error is. The Gil-Pelaez formula (1951) for computing the cumulative distribution function of random variable $ X $ , denoted $ F_{X} $ , can be defined as follows: $ F_{X}(x) = \dfrac{1}{2} - \dfrac{1}{\pi}\int_{0}^{+\infty}\Im\left(\dfrac{e^{- ixs}\varphi_{X}(s)}{s}\right)ds $ (1) or $ F_{X}(x) = \dfrac{1}{2} + \dfrac{1}{2\pi}\int_{0}^{+\infty}\dfrac{e^{ixs}\varphi_{X}(-s) - e^{-ixs}\varphi_{X}(s)}{is}ds $ (2) , where $ \varphi_{X}(s) $ is the characteristic function of $ X $ . Assume that all moments of $ X $ are known and $ \varphi_{X}(s) $ can be written in series expansion form, we have: $ \varphi_{X}(s) = \sum_{j=0}^{\infty} \dfrac{(is)^{j}}{j!}\mathbb{E}[X^{j}] $ Then, we can write: $ \int_{0}^{+\infty}\Im\left(\dfrac{e^{-ixs}}{s} \varphi_{X}(s)\right)ds = \int_{0}^{+\infty}\I
|
I am not sure. Can we interchange the integral with sum over $j$ here? $\int_{0}^{+\infty}\Im\left(\sum_{j=0}^{\infty} \dfrac{i^{j}\mathbb{E}[X^{j}] }{j!} s^{j-1}e^{-ixs}\right)ds = \Im\left(\sum_{j=0}^{\infty} \dfrac{i^{j}\mathbb{E}[X^{j}] }{j!}\int_{0}^{+\infty} s^{j-1} e^{-ixs} ds\right) $ For example, if $x=0$ , the integral in the RHS seems to be infinite, while the LHS could still be finite.
|
|complex-analysis|characteristic-functions|cumulative-distribution-functions|mellin-transform|
| 0
|
Transformation of a random variable vs. joint transformation of several random variables
|
Let $X \sim f_X(x)$ and $Y = g(X)$ . If $g(X)$ is a differentiable, monotonic function with inverse such that $X = g^{-1}(Y)$ then the PDF of $Y$ can be described: $$ f_Y(y) = f_X(g^{-1}(y)) \bigg| \dfrac{d}{dy}g^{-1}(y) \bigg| $$ In the multidimensional case, I have seen a very similar expression where the absolute value of the derivative is replaced by the absolute value of the determinant of the Jacobian: $f_Y(\pmb{y}) = f_X(g^{-1}(\pmb{y})) \left| \det\dfrac{\partial \pmb{x}}{\partial \pmb{y}} \right|$ However, I have not seen a qualifier that in the multivariate case the function $\pmb{y} =g(\pmb{x})$ must be monotonic in all variables. Is this assumed or does this requirement not apply in this case? If not why?
|
The reason is that it is assumed that the system $$\pmb{y} =g(\pmb{x})$$ has a unique solution $\pmb{x}=g^{-1}(\pmb{y})$ , i.e., $g$ is invertible when using the formula. In the univariate case, the function $g$ is invertible if and only if $g$ is strictly monotone, which is not the case in the multivariate case ( $y_1=x_1, y_2=-x_2$ is a simple example where $g$ is not monotone). Therefore, the condition of strict monotonicity of $g$ used in the univariate case is replaced with other equivalent conditions holding in the multivariate case (i.e., the system has a unique solution or $g$ is invertible). Suppose that the system has multiple solutions: $$\pmb{x}=h^j(\pmb{y}), j=1,\dots, k$$ such that the union of the sets $h^j(S_Y), j=1,\dots,k$ is $S_X$ with null intersections ( $S_X$ and $S_Y$ denote the support of $X$ and $Y$ ), that is, $h^j(S_Y)\cap h^l(S_Y)$ has Lebesgue measure zero for each $j \neq l$ , and $\cup_{j=1}^k h^j(S_Y)=S_X$ . Then, the following holds: $$\color{blue}{f_Y(\pmb{y}) = \sum_{j=1}^{k} f_X\big(h^j(\pmb{y})\big)\left|\det\frac{\partial h^j(\pmb{y})}{\partial \pmb{y}}\right|}$$
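Not part of the answer, but the multiple-solutions (branch-sum) version is easy to verify on the classic non-monotone example $Y=X^2$ with $X\sim N(0,1)$, where summing over the two branches $h^{1,2}(y)=\pm\sqrt y$ must reproduce the $\chi^2_1$ density:

```python
import math

def phi(x):                      # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y_branches(y):
    """Branch-sum change of variables for Y = X^2:
    f_Y(y) = f_X(sqrt(y)) |d sqrt(y)/dy| + f_X(-sqrt(y)) |d(-sqrt(y))/dy|."""
    r = math.sqrt(y)
    jac = 1 / (2 * r)            # |d(+-sqrt(y))/dy|, same for both branches
    return phi(r) * jac + phi(-r) * jac

def chi2_1(y):                   # chi-square density with 1 degree of freedom
    return math.exp(-y / 2) / math.sqrt(2 * math.pi * y)

for y in (0.1, 0.5, 1.0, 2.5):
    assert abs(f_Y_branches(y) - chi2_1(y)) < 1e-12
print("branch-sum formula matches the chi-square(1) density")
```

Using the single-branch formula here would miss a factor of two, which is exactly the failure mode the monotonicity/invertibility assumption guards against.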
|
|calculus|probability|statistics|change-of-variable|
| 1
|
Stating the Gil-Pelaez Theorem in terms of a real integral
|
In ( Gil-Pelaez, 1951 ), the following is stated $F(x)= \frac{1}{2}+\frac{1}{2\pi}\int_0^\infty\frac{e^{itx}\phi(-t)-e^{-itx}\phi(t)}{it}dt.$ I am trying to show that it can be equivalently stated as $F(x)= \frac{1}{2}+\frac{1}{\pi}\int_0^\infty\Re \left[\frac{e^{-itx}\phi(t)}{it}\right]dt$ . To that end, I have tried using the following relation: $\Re[f(t)]=\frac{f(t)+\bar{f(t)}}{2}$ , where $\bar{f(t)}$ is the complex conjugate of $f(t)$ . This yields $\Re \left[\frac{e^{-itx}\phi(t)}{it}\right] = \frac{1}{2}\left[\frac{e^{-itx}\phi(t)}{it}+\frac{ie^{itx}\phi(-t)}{t}\right]$ . In order to complete the argument, I therefore need the relation below to hold $\frac{1}{2}\frac{e^{itx}\phi(-t)-e^{-itx}\phi(t)}{it} = \frac{1}{2}\left[\frac{e^{-itx}\phi(t)}{it}+\frac{ie^{itx}\phi(-t)}{t}\right]$ , which only seems to hold if I multiply either the right hand side or left hand side by -1. My question is therefore, what have I missed in the above?
|
First, I think there is a typo and the formula should be $F(x)= \frac{1}{2} {\color{red} -} \frac{1}{\pi}\int_0^\infty\Re \left[\frac{e^{-itx}\phi(t)}{it}\right]dt$ . By the definition, the characteristic function satisfies $\phi(-t) = \overline{\phi(t)}$ . It leads to $\Re \left[\frac{e^{-itx} \phi(t)}{-it} \right] = \Re \left[\frac{e^{itx} \phi(-t)}{it} \right]$ . Then the claim follows.
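Not part of the answer, but the sign question settles itself numerically: with the $\Im$ form of the theorem, $F(x)=\frac12-\frac1\pi\int_0^\infty \Im\!\big(e^{-itx}\phi(t)\big)/t\,dt$, and for $X\sim N(0,1)$ the integrand is $-e^{-t^2/2}\sin(tx)/t$ (finite at $t=0$). A crude trapezoidal rule (cutoff and step count are my own choices) recovers the normal CDF:

```python
import math

def F(x, T=10.0, steps=20000):
    """Gil-Pelaez inversion for X ~ N(0,1), phi(t) = exp(-t^2/2):
    F(x) = 1/2 + (1/pi) * int_0^T exp(-t^2/2) sin(t x)/t dt  (trapezoid)."""
    h = T / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        g = x if t == 0.0 else math.exp(-t * t / 2) * math.sin(t * x) / t
        total += g * (0.5 if k in (0, steps) else 1.0)
    return 0.5 + total * h / math.pi

print(round(F(1.0), 4))  # ~0.8413, the standard normal CDF at 1
```

Getting $\approx 0.8413$ (rather than $\approx 0.1587$) confirms that the minus sign in front of $\frac1\pi$ in the $\Re[\,\cdot/(it)\,]$ statement is the right one.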
|
|probability|probability-theory|complex-numbers|characteristic-functions|
| 0
|
Do units have an irreducible decomposition?
|
I'm teaching myself algebra, and I'm working through a proof of the statement: Let $ R $ be a unique factorization domain (UFD). If $ f,g \in R[X] $ are primitive, then so is $ fg $ . (Start of proof) Let $ f = a_0 + a_1 X + \dots + a_n X^n $ and $ g = b_0 + b_1 X + \dots + b_m X^m $ where $ a_n, b_m \neq 0 $ . Suppose $ fg $ is not primitive. Then its content $ c(fg) $ is not a unit. Since $ R $ is a UFD, there exists some irreducible $ p $ which divides $ c(fg) $ . Furthermore, since $ f $ and $ g $ are primitive, then $ c(f) $ and $ c(g) $ are units. So $ p \nmid c(f) $ and $ p \nmid c(g) $ . It's at this point where I get confused. I'm assuming the author here is stating that because $ c(f) $ and $ c(g) $ are units, then by definition they can't be irreducible. And if they aren't irreducible, they can't be written as a product of irreducible elements. But couldn't it be true that an element can be written as a product of irreducible elements, but the fact that the element is a uni
|
To put Greg Martin's comment as an answer: the definition of an irreducible element requires it to be a non-unit. If $p \mid u$ where $p$ is irreducible and $u$ is a unit, then we can write $u=pk$ for some $k$ . But then $p(ku^{-1})=1$ so $p$ has a multiplicative inverse.
|
|abstract-algebra|
| 0
|
Polar Form of Equidistant Curve to Ellipse
|
By rolling a circle around an ellipse and tracing the circle's center, one obtains an equidistant curve to the ellipse. I am interested in the polar form of such a curve. Let an ellipse be given by the parametric equation \begin{equation} e(\alpha) = \begin{bmatrix}a\cos\alpha\\ b\sin\alpha \end{bmatrix}, \text{ for } \alpha \in [0, 2\pi), \end{equation} where $a, b$ are the lengths of the semi-major and semi-minor axis, respectively. Suppose a circle of radius $R$ is rolled along the ellipse $e$ ; then calculating the normal vector of length $R$ at each point along the ellipse gives an equidistant curve to it, i.e., \begin{equation}\label{eq_ellipse_enlargment} t(\alpha) = \begin{bmatrix} a\cos\alpha\\ b\sin\alpha \end{bmatrix} + \frac{R}{\sqrt{b^2\cos^2\alpha + a^2 \sin^2\alpha}}\begin{bmatrix} b\cos\alpha\\ a\sin\alpha \end{bmatrix}. \end{equation} As the equidistant curve is symmetric w.r.t. both of its axes, it should be possible to derive a polar form, $r(\phi)$ , where $r$ is
|
I would try to obtain an implicit definition first, using algebraic equations, then use that to derive a polar form by looking at the equation of the radius for a given angle. First let's get rid of those trigonometric functions using the tangent half-angle formulas: $$ t := \tan\frac\alpha2 \qquad \cos\alpha = \frac{1-t^2}{1+t^2} \qquad \sin\alpha = \frac{2t}{1+t^2} $$ Then the point on the ellipse becomes $$ \begin{bmatrix}a\cos\alpha\\b\sin\alpha\end{bmatrix}= \frac1{1+t^2}\begin{bmatrix}a(1-t^2)\\b\cdot2t\end{bmatrix} $$ and the direction of the offset becomes $$ \begin{bmatrix}b\cos\alpha\\a\sin\alpha\end{bmatrix}= \frac1{1+t^2}\begin{bmatrix}b(1-t^2)\\a\cdot2t\end{bmatrix} $$ The magnitude of that vector doesn't matter because you need to normalize it anyway, so you might as well ignore the scalar factor in front. The square root from the norm of the remaining vector is a bit of an annoyance. Let's introduce a symbol for that and see how it relates to the square root you had. $$
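Not part of the answer, but before chasing the implicit form it is worth confirming numerically that $t(\alpha)$ really is a distance-$R$ normal offset of the ellipse (the values $a,b,R$ below are my own test choices; note the offset is only a true equidistant curve while $R$ stays below the minimal radius of curvature):

```python
import math

a, b, R = 3.0, 2.0, 0.5

def ellipse(al):
    return (a * math.cos(al), b * math.sin(al))

def offset(al):
    ex, ey = ellipse(al)
    nrm = math.sqrt(b**2 * math.cos(al)**2 + a**2 * math.sin(al)**2)
    return (ex + R * b * math.cos(al) / nrm, ey + R * a * math.sin(al) / nrm)

for al in (0.3, 1.1, 2.0, 4.4):
    ex, ey = ellipse(al)
    tx, ty = offset(al)
    # the offset point lies at distance exactly R from the ellipse point ...
    assert abs(math.hypot(tx - ex, ty - ey) - R) < 1e-12
    # ... along the normal: orthogonal to the tangent (-a sin(al), b cos(al))
    dot = (tx - ex) * (-a * math.sin(al)) + (ty - ey) * (b * math.cos(al))
    assert abs(dot) < 1e-12
print("t(alpha) is a genuine distance-R normal offset of the ellipse")
```

The orthogonality check works out symbolically too: $(b\cos\alpha, a\sin\alpha)\cdot(-a\sin\alpha, b\cos\alpha)=0$, which is why the parametrization in the question is the right starting point.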
|
|geometry|analytic-geometry|
| 0
|
Counting 3-cycles in $S_6$
|
I've bolded the incorrect part. Find the number of order $3$ elements in $S_6$ My "Solution": There are two ways to obtain these, either a 3-cycle or product of 2 disjoint 3-cycles. Counting 3-cycles: Choose 3 of 6 elements, this will determine the first 3 elements. We don't worry about the order of the rest. To get the distinct cycles (noting $(1 2 3) = (2 3 1)$), we fix the first element and permute the remaining elements. Thus: $\frac{6!}{3!3!} (3-1)!=40$ Counting Disjoint 3-cycles: Choose 3 of 6 elements, this will determine the first 3 elements. Using the above computation we obtain $40$. We need to account for of the rest. Noting again that some cycles are similar, we fix the first element and permute the remaining to get distinct cycles, thus: $40\times (3-1)! = 80$ Thus: $120$ elements of order $3$. The bolded reasoning is incorrect. The answer key says it should be 40. How am I over counting?
|
We only have to consider the following Young tableaux: $$ \begin{align*} [a_1][a_2][a_3]\\ [a_4]\\ [a_5]\\ [a_6] \end{align*} $$ and \begin{align*} [a_1][a_2][a_3]\\ [a_4] [a_5] [a_6] \end{align*} The first Young tableau can be filled in $6!$ possible ways by the $6$ objects. However, cyclic shifts in the first row don't matter and permuting the last three rows also doesn't matter, so there are $6!/(3\cdot 3!) = 40$ ways. The second Young tableau can be filled in $6!$ possible ways as well, but then there are $3$ cyclic shifts in each row as well as $2$ ways to order the rows. So we get $6!/(3\cdot 3\cdot 2) = 40$ as well. In total, this gives $40+40=80$ elements of order 3.
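Not part of the answer, but with only $720$ permutations the count can be brute-forced, which settles the $80$ vs. $40$ question directly (the answer key's $40$ matches each tableau type separately, not the total):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = tuple(range(6))
count = 0
for p in permutations(range(6)):
    # order exactly 3  <=>  p^3 = id and p != id
    if p != identity and compose(p, compose(p, p)) == identity:
        count += 1
print(count)  # 80
```

The loop finds $40$ single 3-cycles plus $40$ products of two disjoint 3-cycles; the OP's $80$ for the disjoint case double-counts by the order of the two cycles.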
|
|abstract-algebra|group-theory|proof-verification|
| 0
|
General distribution of Coumpound poisson process as a limit of poisson random variables.
|
Let $S_t$ be a compound Poisson process, i.e. $$S_t= \sum_{i=0}^{N(t)} X_i,$$ where $N(t)$ is a Poisson process and $X_i$ are iid and independent from $N_t$ . I know that in general the distribution of $S_t$ is complicated and has the form $$P(S_t\leq x)= \sum_{k=0}^\infty F_X^{*k}(x)\frac{(\lambda t)^k}{k!}e^{-\lambda t}$$ for $x \in \mathbb{R}$ , where $\lambda$ is the intensity of $N_t$ and $F_X^{*k}(x)$ is the $k$ -fold convolution of $X_i$ . I'm trying to prove that the limit of a linear combination of independent Poisson random variables with some parameters $\lambda_1, \lambda_2, ...$ and some constant values $\alpha_1, \alpha_2, ...$ has the same distribution. $$ P(\lim_{n\rightarrow\infty}\sum_{i=1}^n\alpha_iY_i \leq x) = P(S_t \leq x) $$ for all $x\in\mathbb{R}$ for a fixed $t\geq 0$ , where $Y_i \sim \text{Pois}(\lambda_i)$ . How could I go about proving this result? My professor told me it's simple to derive, but I must admit that I'm struggling a bit, and have not found li
|
We first compute the characteristic function $\mathbb{E}[e^{iuS_t}]$ of $S_t$ in terms of the characteristic function $\Phi$ of $X$ as follows $$\begin{align*} \mathbb{E}[e^{iuS_t}] &= \sum_{j\ge 0} \mathbb{E}[e^{iuS_t}|N_t=j] \mathbb{P}(N_t=j)\\ &= \sum_{j\ge 0} \mathbb{E}[e^{iu(X_1+\ldots+X_j)}] \mathbb{P}(N_t=j)\\ &= \sum_{j\ge 0} \Phi(u)^j \frac{(\lambda t)^j}{j!} e^{-\lambda t}\\ &= \exp(\Phi(u)\lambda t-\lambda t)\\ &= \exp((\Phi(u)-1)\lambda t). \end{align*}$$ Applying the same computation with constant jumps $X_i \equiv \alpha_j$ , intensity $\lambda_j$ and $t=1$ , we also get the characteristic function of $\alpha_j Y_j$ for a Poisson random variable $Y_j$ of parameter $\lambda_j$ as $$\mathbb{E}[e^{iu\alpha_j Y_j}] = \exp(\lambda_j (e^{iu\alpha_j}-1)).$$ Therefore to get the desired convergence it suffices to find $\alpha_j,\lambda_j$ such that $$\sum_{j\ge 0} \lambda_j(e^{iu\alpha_j}-1) = (\Phi(u)-1)\lambda t \tag{$\ast$}$$ for all $u \in \mathbb{R}$ . For general $X$ I am not sure wh
|
|probability-theory|stochastic-processes|
| 0
|
All Integer solutions to $2006=(x+y)^{2}+3x+y$
|
How do I find all the solutions to $2006=(x+y)^{2}+3x+y$ Where $x,y \in \mathbb{Z}$ One such solution I found was $(x,y)=(1000,-998)$ . I did it by the following, $2006+3y=(x+y)^{2}+3(x+y)+y$ $2(1003+y)=(x+y)(x+y+3)$ . Which upon solving the system, we will get $x=1000$ and $y=-998$ . How can I continue to find other solutions? 'Cause I noticed that $(13,31)$ also works but I'm not sure where that's derived from, and I have a feeling that there's more.
|
Let $x$ and $y$ be integers such that $$2006=(x+y)^2+3x+y.\tag{1}$$ Then also $$8024=4(x+y)^2+12x+4y=\big(4(x+y)^2+4x+4y+1\big)+8x-1=(2x+2y+1)^2+8x-1,$$ or equivalently $$(2x+2y+1)^2=8025-8x.$$ We see that $8025-8x$ must be an odd perfect square. Conversely, for any odd integer $z$ the pair $$x=\frac{8025-z^2}{8}\qquad\text{ and }\qquad y=\frac{z-2x-1}{2}=\frac{z^2+4z-8029}{8}$$ consists of two integers that satisfy $(1)$ ; indeed, for odd $z$ we have $z^2\equiv 1\pmod 8$ , so $x$ and $y$ are integers. For example $z=5$ gives $(x,y)=(1000,-998)$ and $z=89$ gives $(x,y)=(13,31)$ .
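Not part of the answer, but the parametrization is cheap to verify by brute force; note that completing the square gives $(2x+2y+1)^2=8025-8x$, so every odd $z$ yields one integer solution:

```python
def solution(z):
    """Integer solution of 2006 = (x+y)^2 + 3x + y from an odd z = 2x+2y+1."""
    assert z % 2 != 0
    x = (8025 - z * z) // 8        # exact: z odd => z^2 = 1 (mod 8)
    y = (z - 2 * x - 1) // 2
    return x, y

sols = [solution(z) for z in range(-199, 200, 2)]   # all odd z in [-199, 199]
assert all((x + y) ** 2 + 3 * x + y == 2006 for x, y in sols)
assert (1000, -998) in sols and (13, 31) in sols
print(len(sols), "solutions checked")
```

This recovers both solutions the OP found ($z=5$ and $z=89$) and makes it clear there are infinitely many, one for each odd $z$.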
|
|elementary-number-theory|
| 0
|
Find the signature of a reflection.
|
I am doing some self-study and I have this task: Show that the signature of the following mapping, the reflection about the hyperplane $a^{\perp}$ , given by $S_a(v) = v - 2\frac{\langle v,a\rangle}{\langle a,a\rangle}a$ , is $(n-1,1,0)$ , where $a$ is an arbitrary nonzero vector. I would know how to find the eigenvalues of this mapping if I had a matrix on hand, but I am not sure how I would do it this way. Should I just find the representative matrix of this mapping and go with it?
|
Under the reflection, $a\mapsto -a$ while $a^{\perp}$ is fixed pointwise. In any basis adapted to this split, $e_1=a,\ e_{2},\dots,e_n \in a^{\perp}$ , the matrix of $S_a$ is $\operatorname{diag}(-1,1,\dots,1)$ , so the signature $(n-1,1,0)$ is evident.
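A numeric illustration (with an arbitrary nonzero $a$ in $\mathbb{R}^5$): in the standard basis the matrix of $S_a$ is the Householder-type matrix $I - 2aa^T/(a^Ta)$, and its spectrum is $-1$ once and $+1$ with multiplicity $n-1$.

```python
import numpy as np

# Reflection about the hyperplane a-perp: S = I - 2 a a^T / (a^T a).
rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=n)
S = np.eye(n) - 2.0 * np.outer(a, a) / (a @ a)

eigvals = np.sort(np.linalg.eigvalsh(S))  # S is symmetric, so eigvalsh applies
print(eigvals)
```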
|
|linear-algebra|linear-transformations|inner-products|orthogonality|reflection|
| 0
|
Prove ring homomorphism carries intersection of ideals to image
|
I want to show that if the ring homomorphism $f:R \rightarrow S$ is onto, $I,J$ two ideals of $R$ and $\ker f\subseteq I$ then $f(I\cap J)=f(I)\cap f(J)$ . To prove $f(I)\cap f(J)\subseteq f(I\cap J)$ , I started with $f(x)\in f(I)\cap f(J)$ , thus $f(x)\in f(I)$ and $f(x)\in f(J)$ , so $x\in I\cap J$ and $f(x)\in f(I\cap J)$ . This is clearly wrong because I have not used $\ker f\subseteq I$ and without this condition this statement is not true. Can you point out where I went wrong and hint at the right path? There are multiple theorems with the condition $\ker f\subseteq I$ . How do I use this condition? I wanted to find $f(x)-f(y)=f(x-y)=0$ to have $x-y\in\ker f$ and go from there. Note that I have not yet studied the homomorphism theorems and quotient rings; I know the correspondence theorem.
|
Let $y\in f(I)\cap f(J).$ Then $y\in f(I)$ and $y\in f(J),$ that is, there exist $a\in I$ and $b\in J$ such that $y=f(a)=f(b),$ so $$f(a-b)=0\implies a-b\in Ker(f)\implies a-b\in I\implies b\in I.$$ Therefore, $b\in I\cap J\implies y=f(b)\in f(I\cap J).$ The other inclusion is easy. I think that the onto condition is not necessary.
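A finite sanity check of the statement, with illustrative choices: $f:\mathbb{Z}\to\mathbb{Z}/6\mathbb{Z}$ the reduction map, $I=2\mathbb{Z}$, $J=3\mathbb{Z}$ (so $\ker f=6\mathbb{Z}\subseteq I$), with integers represented modulo $30$ to keep the sets finite.

```python
# f: Z -> Z/6Z reduction, I = 2Z, J = 3Z; ker f = 6Z is contained in I.
# Representatives 0..29 suffice since everything is periodic mod 30.
N = 30
I = {r for r in range(N) if r % 2 == 0}
J = {r for r in range(N) if r % 3 == 0}
f = lambda r: r % 6

f_I_cap_J = {f(r) for r in I & J}                     # f(I ∩ J)
f_I_cap_f_J = {f(r) for r in I} & {f(r) for r in J}   # f(I) ∩ f(J)
print(f_I_cap_J, f_I_cap_f_J)
```

Here $I\cap J=6\mathbb{Z}$ maps to $\{0\}$, which indeed equals $f(I)\cap f(J)=\{0,2,4\}\cap\{0,3\}$.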
|
|abstract-algebra|ring-theory|
| 1
|
Find the signature of a reflection.
|
I am doing some self-study and I have this task: Show that the signature of the following mapping, the reflection about the hyperplane $a^{\perp}$ , given by $S_a(v) = v - 2\frac{\langle v,a\rangle}{\langle a,a\rangle}a$ , is $(n-1,1,0)$ , where $a$ is an arbitrary nonzero vector. I would know how to find the eigenvalues of this mapping if I had a matrix on hand, but I am not sure how I would do it this way. Should I just find the representative matrix of this mapping and go with it?
|
A reflection is a special case of linear involution, with a hyperplane as subspace of fixed vectors: $S_a(v)=v\iff v\perp a$ and $S_a(v)=-v\iff v\in\operatorname{span}(a)$ . More generally (and even on an infinite dimensional space), a linear involution has two complementary eigenspaces, associated with the eigenvalues $1$ and $-1$ .
|
|linear-algebra|linear-transformations|inner-products|orthogonality|reflection|
| 0
|
Working of the Chain Rule in Calculus
|
I came across a proof of the chain rule in a book called "Calculus Made Easy" by Silvanus P. Thompson. It said that the rule works because we essentially multiply and divide by another small change in another function (usually represented as $du$ for a change in a function $u(x)$ ). I.e. $$\frac {dy}{dx}=\frac {dy}{du}\cdot\frac {du}{dx}$$ But when I tried to confirm this with my Physics teacher, he said that we can't actually explain it in that way, because $du$ separated from $dy$ or $dx$ has no meaning, as $\frac {d}{dx}$ is considered a whole entity, or an operator, rather than a fraction. Please help me out with a clear explanation regarding this. Also, please say whether there is any more direct approach other than the indirect (I suppose it's indirect because we make up an intermediate function) chain rule to differentiate function compositions such as $$y=e^{\sin x}$$ P.S. I'm just a beginner in the context of Calculus.
|
Unfortunately what you call the indirect method is in fact the most direct method. Here's a LibreTexts link for review. Although when we're dealing with composite functions we do seem to treat them as a sort of fraction ("cancelling" out the $du$ ), this is not in fact what we are doing. $\frac{dy}{du}$ is the rate of change of $y$ with respect to $u$ and $\frac{du}{dx}$ is the rate of change of $u$ with respect to $x$ . They are two different animals. You are multiplying together two different rates of change to get the overall rate of change. So you have to write down the steps: let $u = \sin(x)$ and differentiate $y=e^u$ to get $\frac{dy}{du}$ , then multiply by $\frac{du}{dx}$ . Like all things, you'll become familiar with this. That's the thing about math: it's like learning a new sport, you only improve with practice.
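A quick numeric sanity check of the chain-rule result for $y=e^{\sin x}$ (the evaluation point and step size below are arbitrary choices):

```python
import math

# Chain rule: d/dx e^{sin x} = cos(x) * e^{sin x}; compare to a central difference.
def f(x):
    return math.exp(math.sin(x))

def df(x):
    return math.cos(x) * math.exp(math.sin(x))

x0, h = 0.7, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
err = abs(numeric - df(x0))
print(err)
```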
|
|derivatives|calculus|
| 1
|
How to efficiently compute the determinant of a matrix with unknown diagonal entries?
|
I would like to ask Python to compute the determinant of a large symmetric matrix where all off diagonal entries are known. The diagonal entries could vary. Since I need to compute the determinant many times with different diagonal entries, it seems a waste of time when the computation involves the multiplications of all those known entries over and over again. Is there a way to pre-compute the determinant so that it is a function of diagonal entries with some prefactors?
|
I don't think that this is very efficient but it does determine algebraically what the algebraic form of the answer looks like. Suppose that your matrix is $A=[a_{ij}]_{i,j=1}^n$ , and that $x_i:=a_{ii}$ . Then the determinant is some polynomial $$ \Delta=\sum_{i_1<i_2<\dots<i_k}\alpha_{i_1 i_2 \dots i_k}\,x_{i_1}x_{i_2}\cdots x_{i_k}, $$ where the sum is running over all the $2^n$ subsets $\{i_1<i_2<\dots<i_k\}$ of $\{1,2,\dots,n\}$ . How can we determine the coefficients $\alpha_{i_1 i_2 \dots i_k}$ ? Well note first that from the polynomial expansion we have that $$ \alpha_{i_1 i_2 \dots i_k} =\frac{\partial^k\Delta}{\partial x_{i_1}\partial x_{i_2}\dots\partial x_{i_k}}\Big|_{x_1=x_2=\dots=x_n=0}. $$ Now we can differentiate $\Delta$ using the shape of $A$ . Consider the usual expansion of $\Delta$ by the $j$ -th row. Then we see that differentiating with respect to $x_j$ essentially replaces the $j$ -th row of $\Delta$ by a row which is $0$ everywhere except at the $j$ -th column, where it is $1$ . Hence we obtain the coefficient $\alpha_{i_1 i_2 \dots i_k}$ by deleting from $A$ the rows and columns $i_1,\dots,i_k$ , setting the remaining diagonal entries to zero, and taking the determinant of what is left.
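In practice this pre-computation can be delegated to a computer algebra system: put symbols on the diagonal once, expand the determinant into the polynomial $\Delta$, and then evaluate that polynomial for each new diagonal. A small sketch follows; the matrix size and entries are arbitrary, and for large $n$ the $2^n$ coefficients make the symbolic expansion expensive.

```python
import numpy as np
import sympy as sp

# Fix the off-diagonal entries once, expand det(A) in the diagonal unknowns,
# then evaluate the resulting polynomial cheaply for each new diagonal.
n = 4
rng = np.random.default_rng(2)
off = rng.integers(-3, 4, size=(n, n))
off = off + off.T                                # make it symmetric

xs = sp.symbols(f"x0:{n}")
A = sp.Matrix(n, n, lambda i, j: xs[i] if i == j else int(off[i, j]))
delta = sp.expand(A.det())                       # one-time symbolic pre-computation
eval_det = sp.lambdify(xs, delta, "numpy")       # fast repeated evaluation

diag = [1.5, -2.0, 0.5, 3.0]                     # an arbitrary new diagonal
M = off.astype(float)
for i, d in enumerate(diag):
    M[i, i] = d
err = abs(eval_det(*diag) - np.linalg.det(M))
print(err)
```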
|
|linear-algebra|determinant|symmetric-matrices|python|
| 0
|
Over the $p$-adic field, intersecting a linear subspace with the vectors with entries in $\mathbb{Z}_p$
|
Let $p$ be a prime number. Let $V$ be a linear subspace of $\mathbb{Q}_p^n$ of dimension $d$ . Is $V \cap \mathbb{Z}_p^n$ a free $\mathbb{Z}_p$ -module of rank $d$ ? Is every $\mathbb{Z}_p$ -basis for $V\cap\mathbb{Z}_p^n$ a $\mathbb{Q}_p$ -basis for $V$ ?
|
Write $\tilde{V} = V \cap \mathbb{Z}_p^n$ and let $v_1, \ldots, v_n \in \tilde{V}$ be linearly independent over $\mathbb{Z}_p$ ( $n \geq 1$ any). Then the $v_i \in V$ are linearly independent over $\mathbb{Q}_p$ as well, for if there were a non-trivial relation $0 = \sum_{i = 1}^n a_i v_i$ with $a_i \in \mathbb{Q}_p$ , you could multiply by a suitable power of $p$ to transform this into a non-trivial relation over $\mathbb{Z}_p$ , where by assumption no such relation holds. On the other hand, if you start out with $w_1, \ldots, w_n \in V$ linearly independent over $\mathbb{Q}_p$ , you may again scale with $p^r$ , some $r \gg 0$ , to ensure that $w_i \in \tilde{V}$ , and then the $w_i$ are obviously linearly independent over $\mathbb{Z}_p$ . In particular, this shows that $d$ is the maximal size of a linearly independent subset of $\tilde{V}$ over $\mathbb{Z}_p$ , so since $\mathbb{Z}_p$ is a PID and $\tilde{V}$ is torsion-free, we conclude that $\tilde{V}$ is free of rank $d$ , and so the answer to both questions is yes.
|
|modules|algebraic-number-theory|p-adic-number-theory|
| 1
|
Need Help with Induction Proof: Showing 4^{2n} - 1 is Divisible by 15 for n≥1
|
I'm currently grappling with a problem related to proving a statement using mathematical induction, for every integer n≥1, the expression 4^{2n} - 1 is divisible by 15. So far, I understand the basic principle behind mathematical induction, which involves proving the statement for a base case (typically n=1) and then assuming it holds for an arbitrary positive integer k to prove it for k+1. However, I'm struggling with how to apply these steps specifically to show that 4^{2n} - 1 is indeed divisible by 15. For the base case, I evaluated 4^{2(1)} - 1 = 15, which is divisible by 15, so it seems to hold up for n=1. My main challenge lies in the induction step, where I need to demonstrate that if the statement is true for n=k, then it must also be true for n=k+1. I have a rough idea that it involves some manipulation or factorization of the expression 4^{2(k+1)} - 1, but I'm not quite sure how to structure this argument or which properties of numbers to apply. Could anyone guide me through
|
You will find a number of good direct answers at How can I show that $4^{2n}-1$ is divisible by $15$ for all $n$ greater or equal to $1$ . If you really want to use induction, you already have the initial step for $n=1$ : $4^{2\times 1} = 16 = 15 + 1$ . The induction step is not that much harder. Let us assume that $4^{2n} - 1$ is divisible by $15$ . Then $\exists q \in \mathbb{N}, 4^{2n} = 15\times q + 1$ . It directly gives: $$ \begin{aligned} 4^{2\left(n+1\right)} &= 16\times 4^{2n} = \left(15 + 1\right)\left(15\times q + 1\right)\\ &=15\times 15\times q + 15 + 15 \times q + 1\\ &= 15\left(15q +1+q\right) + 1 \end{aligned}$$ which is enough to prove that $4^{2\times \left(n + 1\right)}-1$ is divisible by $15$ .
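The claim and the key identity in the induction step are easy to sanity-check mechanically:

```python
# Check divisibility for a range of n, and the identity 16(15q + 1) = 15(16q + 1) + 1
# that powers the induction step. The witness q is an arbitrary choice.
for n in range(1, 50):
    assert (4 ** (2 * n) - 1) % 15 == 0

q = 7
assert 16 * (15 * q + 1) == 15 * (16 * q + 1) + 1
print("ok")
```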
|
|discrete-mathematics|solution-verification|induction|
| 0
|
List of geometric theorems linked by two squares
|
I'm trying to create a classification for geometric theorems that relate to two squares, as a kind of organization and classification, and out of curiosity to explore. I have collected some theorems of this type that I will put in an answer/answers. I hope you can help me expand my list. It's important to note that I'm not looking for theorems about squares in general, because that would become too extensive a list; I'm looking for theorems about a number of squares equal to exactly two. Therefore, theorems such as van Aubel's theorem are not accepted in the answers.
|
Euclid Book II Prop 6 on what we would now describe algebraically as the difference of squares:
|
|geometry|euclidean-geometry|big-list|
| 0
|
Given a tuple of $k$ distinct integers, is there a generator list in a $\mathbb{Z}/n\mathbb{Z}$ that matches the tuple?
|
Motivation: In $\langle\mathbb{Z}/7\mathbb{Z},\times\rangle,\ \langle 3\rangle = (3,2,6,4,5,1).$ Given a $k-$ tuple of distinct integers, $q_1, q_2, \ldots, q_k,$ (all nonzero) does $\exists$ integers $a,m,n$ with $n>\max\{ q_1,q_2,\ldots,q_k\},\ $ such that $a m^i \equiv q_{i} \pmod n\ \forall\ i\in \{1,2,\ldots, k\}\ $ [In other words, $q_i$ is the remainder when $am^i$ is divided by $n.]\ ?$ Maybe there is a counter-example like $(100,99,98, 3,2,1).$ But I doubt it. I imagine it is true and we can even take $a=1,$ for example. I think it could be true by Chinese remainder theorem and/or Fermat's Little theorem? It should be true for $k=2,$ so maybe then we can use induction on $k$ ? I think that without the "nonzero" and "all distinct" conditions, there would be easy counter-examples. Edit: I think $n$ must be prime for this to work, right? Well, I think that if $n$ is prime then every number other than $0$ and $1$ "generates" all of $\{1,2,\ldots,n-1\},$ although I don't recall the
|
Basically, you want a geometric sequence modulo some $n$ , so $q_i^2 \equiv q_{i-1}q_{i+1}\pmod n$ , i.e. $n\mid q_i^2-q_{i-1}q_{i+1}$ , so to find a counterexample you can take any triple $(q_1,q_2,q_3)$ such that $q_2^2-q_1q_3 = 1$ , for example $q_1 = 1, q_3 = q_2^2-1$ with $q_2$ arbitrary. In fact, take any three consecutive integers $x-1,x,x+1$ , then $x^2\equiv (x-1)(x+1) \pmod n$ iff $n\mid 1$ , so we have plenty of counterexamples. For $k = 2$ , the answer is almost trivial: take any prime $n$ greater than $q_1$ and $q_2$ and let $m \equiv q_2q_1^{-1}$ , $a\equiv q_1m^{-1}$ , where the inverse is modulo $n$ . Or, slightly amusingly, take $n = q_1+q_2$ , $m=-1$ and $a = q_2$ . Then $q_1\equiv am\pmod n$ , $q_2\equiv am^2\pmod n$ . For the questions in the edit, this depends on what you mean by generate. If by generate you mean taking the smallest ideal containing the generator, then for a prime $p$ the ring $\mathbb Z/p\mathbb Z$ is a field by Bézout's identity : if $a\not\equiv 0 \pmod p$ then $\gcd(a,p)=1$ , so there are integers $u,v$ with $ua+vp=1$ , and $u$ is a multiplicative inverse of $a$ modulo $p$ .
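Both halves of this answer can be checked mechanically (the sample values of $x$, $q_1$, $q_2$ below are arbitrary):

```python
# Counterexample family: for consecutive integers, x^2 - (x-1)(x+1) = 1,
# so no modulus n > 1 can make (x-1, x, x+1) a geometric progression mod n.
x = 10
assert x * x - (x - 1) * (x + 1) == 1

# k = 2 construction: n = q1 + q2, m = -1, a = q2 gives a*m ≡ q1, a*m^2 ≡ q2 (mod n).
q1, q2 = 3, 7
n, m, a = q1 + q2, -1, q2
assert (a * m) % n == q1 % n
assert (a * m * m) % n == q2 % n
print("ok")
```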
|
|group-theory|elementary-number-theory|finite-groups|examples-counterexamples|chinese-remainder-theorem|
| 0
|
If $a+b+c=x+y+z=ax+by+cz=0$, then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$
|
If $a+b+c=x+y+z=ax+by+cz=0$ , then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$ From the $a+b+c=0$ equality I tried taking systems of equations into consideration: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ ax+by+(a+b)(x+y)=0 \end{cases} $$ which can later be turned into: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ (2a+b)x+(a+2b)y=0 \end{cases} $$ However this did not put me anywhere near the solution. Any help is appreciated.
|
I'm going to rename the variables, because I think it will make my explanation less confusing: If $x_1+y_1+z_1=x_2+y_2+z_2=x_1x_2+y_1y_2+z_1z_2=0$ , then find the value of: $$ \frac{x_1^2}{x_1^2+y_1^2+z_1^2}+\frac{x_2^2}{x_2^2+y_2^2+z_2^2} $$ Let's define some vectors: Let $\vec{v}_1=(x_1,y_1,z_1)$ and $\vec{v}_2=(x_2,y_2,z_2)$ . Clearly, both $\vec{v}_1$ and $\vec{v}_2$ are on the plane $x+y+z=0$ since $x_1+y_1+z_1=x_2+y_2+z_2=0$ . Moreover, $\vec{v}_1$ and $\vec{v}_2$ are perpendicular to each other, i.e. $\vec{v}_1\cdot \vec{v}_2=0$ (where $\cdot$ represents the dot product) because $x_1x_2+y_1y_2+z_1z_2=0$ . Now, without loss of generality, we can assume $\vec{v}_1$ and $\vec{v}_2$ are unit vectors. We can assume this without losing generality because, if $\vec{v}_1$ or $\vec{v}_2$ is not a unit vector, then we can just normalize that vector and all the conditions we need will still hold: normalizing a vector just changes its length, so it does not affect any of the three given conditions, and it leaves both fractions in the expression unchanged (each is invariant under scaling its vector).
|
|contest-math|systems-of-equations|mixing-variables|
| 0
|
Is there a nice 'local' criterion for equivalence of categories?
|
A first tentative guess would be "two categories $A$ and $B$ are equivalent iff each connected component of $A$ is equivalent to a (full) subcategory of $B$ and vice versa", but this fails, for example, in the case of $FiniteFields$ and $(\mathbb{N}, \leq)$ [wrong] Trying to strengthen it to "[...] iff each connected component of $A$ is ('strictly') isomorphic to some subcategory of $B$ , and vice versa" then fails (in the other direction) for 'smaller' skeletons A somewhat compromising possibility, involving a 'global' condition, (which I'm not sure actually holds; please tell if it's false, or well-known) would be a Dedekind-Cantor-Schröder-Bernstein-type proposition: "two categories are equivalent iff for one of them, every connected component is equivalent to some subcategory of the other, and, on the opposite direction, there is a fully faithful (not necessarily essentially surjective) functor"
|
Let $A$ be the disjoint union of countably many copies of the category $\bullet\to\bullet$ , and let $B$ be the disjoint union of $A$ with the category $\bullet$ . Then $A$ and $B$ are not equivalent, but there are fully faithful functors in both directions, and each connected component of either is equivalent to a subcategory of the other.
|
|category-theory|
| 0
|
If $a+b+c=x+y+z=ax+by+cz=0$, then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$
|
If $a+b+c=x+y+z=ax+by+cz=0$ , then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$ From the $a+b+c=0$ equality I tried taking systems of equations into consideration: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ ax+by+(a+b)(x+y)=0 \end{cases} $$ which can later be turned into: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ (2a+b)x+(a+2b)y=0 \end{cases} $$ However this did not put me anywhere near the solution. Any help is appreciated.
|
(i) If $a + 2b \ne 0$ , then $$y = - \frac{(2a + b)x}{a + 2b}.$$ Note $x \ne 0$ here: otherwise $y = 0$ and $z = 0$ , which is excluded. We have $$x^2 + xy + y^2 = x^2 - x \cdot \frac{(2a + b)x}{a + 2b} + \left(\frac{(2a + b)x}{a + 2b}\right)^2 = \frac{3x^2(a^2 + ab + b^2)}{(a + 2b)^2}.$$ We have \begin{align*} \frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2} &= \frac{a^2}{a^2+b^2+(a + b)^2}+\frac{x^2}{x^2+y^2+(x + y)^2}\\ &= \frac{a^2}{2a^2 + 2ab + 2b^2} + \frac{x^2}{2x^2 + 2xy + 2y^2}\\ &= \frac{a^2}{2a^2 + 2ab + 2b^2} + \frac{x^2}{\frac{6x^2(a^2 + ab + b^2)}{(a + 2b)^2}}\\ &= \frac{a^2}{2a^2 + 2ab + 2b^2} + \frac{(a+2b)^2}{6(a^2 + ab + b^2)}\\ &= \frac23. \end{align*} (ii) If $a + 2b = 0$ , then $a = -2b$ and $-3bx = 0$ . Thus, $x = 0$ ( Note: if $b = 0$ , then $a = b = c = 0$ , which is excluded). With $a = -2b$ and $c = -(a+b) = b$ , $$\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2} = \frac{4b^2}{4b^2+b^2+b^2} + 0 = \frac23.$$
|
|contest-math|systems-of-equations|mixing-variables|
| 0
|
If $a+b+c=x+y+z=ax+by+cz=0$, then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$
|
If $a+b+c=x+y+z=ax+by+cz=0$ , then find value of $\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}$ From the $a+b+c=0$ equality I tried taking systems of equations into consideration: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ ax+by+(a+b)(x+y)=0 \end{cases} $$ which can later be turned into: $$\begin{cases} c=-(a+b)\\ z=-(x+y)\\ (2a+b)x+(a+2b)y=0 \end{cases} $$ However this did not put me anywhere near the solution. Any help is appreciated.
|
First note there is an implicit assumption that neither $\langle a,b,c \rangle$ nor $\langle x,y,z \rangle$ is the zero vector for the problem to make sense. Next, since $a + b + c = 0$ we have $c = - a - b$ , so the vector $\langle a,b,c \rangle$ can be rewritten as $\langle a,b, - a - b\rangle$ . The assumptions imply that $\langle x,y,z\rangle$ is perpendicular to both $\langle a,b,c\rangle$ and $\langle 1,1,1\rangle$ . So it has to be a multiple of the cross product $\langle 1,1,1\rangle \times \langle a,b,c\rangle$ $= \langle c - b, a - c, b - a\rangle$ $= \langle -a-2b, 2a + b, b - a\rangle$ . Since the expression you are looking at is invariant under multiplication of $\langle x,y,z\rangle$ by a nonzero scalar, without loss of generality we can assume $\langle x,y,z\rangle = \langle -a-2b, 2a + b, b - a\rangle$ . So we have $$\langle a,b,c \rangle = \langle a,b, - a - b\rangle$$ $$\langle x,y,z\rangle = \langle -a-2b, 2a + b, b - a\rangle$$ At this point we can just insert these into the expression: $a^2+b^2+c^2 = 2(a^2+ab+b^2)$ and $x^2+y^2+z^2 = (a+2b)^2+(2a+b)^2+(b-a)^2 = 6(a^2+ab+b^2)$ , so $$\frac{a^2}{a^2+b^2+c^2}+\frac{x^2}{x^2+y^2+z^2}=\frac{a^2}{2(a^2+ab+b^2)}+\frac{(a+2b)^2}{6(a^2+ab+b^2)}=\frac{3a^2+(a+2b)^2}{6(a^2+ab+b^2)}=\frac{4(a^2+ab+b^2)}{6(a^2+ab+b^2)}=\frac23.$$
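A numeric spot check of the conclusion: take random $a,b$ with $c=-a-b$, build an admissible $(x,y,z)$ as a cross product with $(1,1,1)$, and evaluate the expression.

```python
import numpy as np

# Build (a, b, c) with a+b+c = 0, take (x, y, z) = (a, b, c) x (1, 1, 1)
# (any nonzero multiple works), check the constraints, and evaluate the sum.
rng = np.random.default_rng(3)
a, b = rng.normal(size=2)
v1 = np.array([a, b, -a - b])
v2 = np.cross(v1, np.array([1.0, 1.0, 1.0]))

assert abs(v1.sum()) < 1e-12 and abs(v2.sum()) < 1e-12 and abs(v1 @ v2) < 1e-12
total = v1[0] ** 2 / (v1 @ v1) + v2[0] ** 2 / (v2 @ v2)
print(total)
```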
|
|contest-math|systems-of-equations|mixing-variables|
| 0
|
What is the highest power of a function called if it is negative?
|
$$ \text{Given } \{n, \thinspace N\} \in \mathbb{N}^+ \text{ and } N > n \\ \ \\ \begin{align} p(x) &= \sum_{n}^N a_n{x^n} \\ \ \\ r(x) &= \sum_{n}^N \frac{a_n}{x^n} \end{align} $$ The highest power $N$ of a polynomial $p(x)$ is called its "degree". What is the highest power $n$ of $r(x)$ called?
|
First some definitions: A rational function is any function that can be put in the form $f(x)/g(x)$ for polynomials $f(x)$ and $g(x)$ . The degree of a rational function $f(x)/g(x)$ , assuming that this fraction is in lowest terms (i.e. $f(x)$ and $g(x)$ have no common polynomial factors), is the maximum of the degree of $f(x)$ and the degree of $g(x)$ . In your case, you have a function $r(x)$ where the most negative exponent is $x^{-N}$ . As I will show below, this coincides with the definition of the degree of a rational function, i.e. the degree of $r(x)$ , so the name for the most negative exponent in this case is just called the degree of the rational function. To find the degree of the function $r(x)$ you made, we first need to put $r(x)$ into the form of one polynomial divided by another polynomial. To do this, we just need to factor out $1/x^N$ from the sum: $$r(x)=\frac{\sum_{i=n}^N a_i x^{N-i}}{x^N}$$ Assuming $a_N\neq 0$ , this fraction is in lowest terms, because the only prime factor of $x^N$ is $x$ , and $x$ does not divide the numerator: its constant term (the $i=N$ term) is $a_N\neq 0$ . The numerator has degree $N-n$ and the denominator has degree $N$ , so the degree of $r(x)$ is $\max(N-n,\,N)=N$ .
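The common-denominator computation is easy to reproduce with a CAS. An illustrative instance with made-up coefficients ($n=1$, $N=4$):

```python
import sympy as sp

# r(x) = 3/x + 5/x^2 + 7/x^4 combines to (3x^3 + 5x^2 + 7)/x^4, already in
# lowest terms, so the degree of r is max(3, 4) = 4 = N.
x = sp.symbols('x')
r = 3 / x + 5 / x ** 2 + 7 / x ** 4
num, den = sp.fraction(sp.together(r))

deg_num = sp.degree(num, x)
deg_den = sp.degree(den, x)
print(deg_num, deg_den)
```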
|
|polynomials|terminology|rational-functions|
| 0
|
projection of vector not equal length of the projection
|
I have 2 vectors, $a$ and $b$ , from the origin to a plane. The normal vector to the plane is $n$ . $(a \cdot n)$ and $(b \cdot n)$ are the same, and I learned this is equal to the projection of $a$ onto $n$ (and $b$ onto $n$ ). So, this projection should be equal to the length from the origin to the point of intersection of the normal line and the plane. But it is not equal to that! Why? I have 4 instead of 1.206. See here : https://www.desmos.com/3d/614b66604f
|
Things are easier to understand when we have a normal vector with length $1$ (which is what you described in the comment). So let $u=n/|n|$ . Taking the dot product of a vector $v$ with $u$ is measuring how much of $v$ is in the direction of $u$ , while ignoring the part of $v$ that is orthogonal (perpendicular) to $u$ . More precisely, we let $v_\parallel=(v\cdot u)u$ and $v_\perp = v-v_\parallel$ . We use the symbols $\parallel$ and $\perp$ to denote parallel and perpendicular to $u$ . Note that $$v_\parallel \cdot u=(v\cdot u)(u\cdot u)=v\cdot u|u|^2=v\cdot u,$$ so $$v_\perp \cdot u=v\cdot u-v_\parallel\cdot u=0,$$ where $v_\parallel$ is in the direction of $u$ and $v_\perp$ is orthogonal. If we wanted to write these with the initial vector $n$ , it would just be replacing the $u$ with $n/|n|$ . Planes to which $u$ is a normal vector are now characterized as sets of the form $$\{x:x\cdot u=c\}$$ for a fixed $c$ . That's because if $cu$ is on the plane, then the rest of the plane is obtained by adding to $cu$ vectors orthogonal to $u$ , and adding such a vector does not change the dot product with $u$ .
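A short numeric illustration of the decomposition, and of the scaling issue behind the question: $v\cdot n = |n|\,(v\cdot u)$, so dotting with $n$ instead of the unit vector $u$ gives $|n|$ times the signed projection length. (The vectors below are arbitrary.)

```python
import numpy as np

# Split v into components parallel and orthogonal to u = n/|n|.
rng = np.random.default_rng(4)
n_vec = rng.normal(size=3)
v = rng.normal(size=3)

u = n_vec / np.linalg.norm(n_vec)
v_par = (v @ u) * u        # component along u (signed length v.u)
v_perp = v - v_par         # component orthogonal to u
print(v @ n_vec, np.linalg.norm(n_vec) * (v @ u))
```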
|
|vectors|3d|projection|
| 1
|
An inequality involving infimums of the scaled $k$-th moment and of a scaled moment-generating function.
|
Let $X$ be a non-negative r.v., prove that \begin{equation} \inf_{k\in\mathbb{Z}_+} \frac{\mathbb{E}[X^k]}{t^k}\leqslant \inf_{\lambda\geqslant 0}\frac{\mathbb{E}[e^{\lambda X}]}{e^{\lambda t}},\;\forall t>0. \end{equation} I was given the above problem during a class of the advanced statistics course in my university. I tried attempting it as follows: First, let us rewrite the exponentials on the RHS of the above equation using the Maclaurin expansion: $$ e^{\lambda X} = \sum_{k=0}^{\infty} \frac{(\lambda X)^k}{k!}, \quad e^{\lambda t} = \sum_{k=0}^{\infty} \frac{(\lambda t)^k}{k!}. $$ $X$ being non-negative allows us to expand the expectation of $e^{\lambda X}$ as follows: $$ \mathbb{E}[e^{\lambda X}] = \mathbb{E}\left[\sum_{k=0}^{\infty} \frac{(\lambda X)^k}{k!}\right] = \sum_{k=0}^{\infty} \frac{\lambda^k\mathbb{E}X^k}{k!}. $$ Now, we can write: $$ \frac{\mathbb{E}e^{\lambda X}}{e^{\lambda t}} = \frac{\sum_{k=0}^{\infty} \frac{\lambda^k\mathbb{E}X^k}{k!}}{\sum_{k=0}^{\infty} \frac{(\lambda t)^k}{k!}}, $$ but I was unsure how to proceed from here.
|
You have already noted that $$\mathbb{E}[e^{\lambda X}] = \sum_{k\ge 0} \frac{\lambda^k \mathbb{E}X^k}{k!}$$ which can further be written as $$\mathbb{E}[e^{\lambda X}]=\sum_{k\ge 0} \frac{\mathbb{E}X^k}{t^k} \frac{(\lambda t)^k}{k!} \ge \left(\inf_{k\in \mathbb{Z}_+} \frac{\mathbb{E}X^k}{t^k} \right)\sum_{k\ge 0} \frac{(\lambda t)^k}{k!} =\left(\inf_{k\in \mathbb{Z}_+} \frac{\mathbb{E}X^k}{t^k} \right)e^{\lambda t} $$ so dividing through by $e^{\lambda t}$ and taking an infimum over $\lambda \ge 0$ gives the result.
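A numeric spot check of the inequality for one concrete distribution (illustrative choices: $X$ uniform on $\{0,1,2,3\}$, $t=2$, with both infima approximated over finite grids):

```python
import numpy as np

# Compare inf_k E[X^k]/t^k with inf_{lambda>=0} E[e^{lambda X}]/e^{lambda t}.
vals = np.array([0.0, 1.0, 2.0, 3.0])
p = np.full(4, 0.25)          # uniform probabilities
t = 2.0

moment_side = min((p @ vals ** k) / t ** k for k in range(60))
mgf_side = min(p @ np.exp(lam * vals) / np.exp(lam * t)
               for lam in np.linspace(0.0, 5.0, 2001))
print(moment_side, mgf_side)
```

Here the moment side attains its infimum at $k=1$ (value $0.75$), which indeed lies below the MGF side.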
|
|statistics|inequality|supremum-and-infimum|moment-generating-functions|
| 1
|
Range and domain of $x\mapsto 1-f(x+1)$, knowing those of $f$
|
Problem: Let $f$ be a function which has domain $D_f=[-1,2]$ and range $=[0,1]$ . What are the domain and range of the function $g$ defined by $g(x) = 1-f(x+1)$ ? My thinking: If the domain of $f$ is $[-1,2]$ , then the domain of $x\mapsto f(x+1)$ is $[0,3]$ (adding 1 to both the extreme limits of domain of $f$ ). And because the range of $f$ is $[0,1]$ , the range of $1- f(x+1)$ is $[-1,0]$ (subtracting 1 from both the extreme limits of range of $f$ ). But this is not the range and domain. I have been taught that when a change occurs inside the function, only the domain changes. And when the change is outside the function, the range changes. Is this incorrect? Question: What would be the range and domain of $g$ , and how is my thinking incorrect?
|
Your thinking is partly correct. Remember that the domain is the set of possible values you can put in for $x$ . If the domain of $f$ is $[-1,2]$ then the domain of $f_1(x) := f(x+1)$ is $[-2, 1]$ , shifted one to the left. Take a look at the borders: If you put in $-2$ you get $f_1(-2)=f(-2+1)=f(-1)$ and this is exactly the left side of the domain of $f$ . So you don't substitute the domain in for $x$ ; rather, the inner expression $x+1$ must range over the domain $[-1,2]$ of $f$ . What you have done here is: $$f([-1,2] + 1) = f([0, 3])$$ but you should do the reverse: $$f([-1,2]) = f(x+1) \Rightarrow x+1 \in [-1,2] \Rightarrow x \in [-2,1]$$ informally speaking. For the range, if the range of $f$ is $[0,1]$ then the range of $-f$ is $[-1,0]$ and thus the range of $1-f$ is $[0,1]$ again. In this case it doesn't matter if you put in $x$ or $x+1$ , as the inner shift only affects the domain. So your thinking is somehow correct but you need to think from the other direction. Similarly, $(x-d)^2$ moves the quadratic curve $d$ units to the right, not to the left, for the same reason.
|
|functions|
| 1
|
Range and domain of $x\mapsto 1-f(x+1)$, knowing those of $f$
|
Problem: Let $f$ be a function which has domain $D_f=[-1,2]$ and range $=[0,1]$ . What are the domain and range of the function $g$ defined by $g(x) = 1-f(x+1)$ ? My thinking: If the domain of $f$ is $[-1,2]$ , then the domain of $x\mapsto f(x+1)$ is $[0,3]$ (adding 1 to both the extreme limits of domain of $f$ ). And because the range of $f$ is $[0,1]$ , the range of $1- f(x+1)$ is $[-1,0]$ (subtracting 1 from both the extreme limits of range of $f$ ). But this is not the range and domain. I have been taught that when a change occurs inside the function, only the domain changes. And when the change is outside the function, the range changes. Is this incorrect? Question: What would be the range and domain of $g$ , and how is my thinking incorrect?
|
$f$ is only defined for $x\in[-1,2]$ . Thus, $g(3)=1-f(4)$ does not make sense. The correct domain for $g$ would be $[-2,1]$ . Now, if $x\in[-2,1]$ , $x+1\in[-1,2]$ and $g(x)=1-f(x+1)$ is fine. Regarding the range, if you shift $f$ to the left one unit its range remains $[0,1]$ , and as the image of $[0,1]$ under $x\mapsto 1-x$ is again $[0,1]$ , we conclude that the range of $g$ is $[0,1]$ .
|
|functions|
| 0
|
Range and domain of $x\mapsto 1-f(x+1)$, knowing those of $f$
|
Problem: Let $f$ be a function which has domain $D_f=[-1,2]$ and range $=[0,1]$ . What are the domain and range of the function $g$ defined by $g(x) = 1-f(x+1)$ ? My thinking: If the domain of $f$ is $[-1,2]$ , then the domain of $x\mapsto f(x+1)$ is $[0,3]$ (adding 1 to both the extreme limits of domain of $f$ ). And because the range of $f$ is $[0,1]$ , the range of $1- f(x+1)$ is $[-1,0]$ (subtracting 1 from both the extreme limits of range of $f$ ). But this is not the range and domain. I have been taught that when a change occurs inside the function, only the domain changes. And when the change is outside the function, the range changes. Is this incorrect? Question: What would be the range and domain of $g$ , and how is my thinking incorrect?
|
$$x\in D_g\iff x+1\in D_f=[-1,2]\iff x\in [-1,2]-1=[-2,1].$$ $$\operatorname{range}(g)=1-\operatorname{range}(x\mapsto f(x+1))=1-\operatorname{range}(f)=1-[0,1]=1+[-1,0]=[0,1].$$
|
|functions|
| 0
|
An example of a topology on $\mathbf{R}^2$ in which addition is discontinuous and multiplication by a scalar is continuous
|
I am trying to build an example of a topology on $\mathbf{R}^2$ in which addition is discontinuous and multiplication by scalar is continuous. I think that such a topology is generated by the collection of all intervals on all straight lines passing through the origin $(0,0)$ . Then, for example, adding points $(1,1)$ and $(-1,1)$ gives a point $(0,2)$ , but we will not have neighborhoods of points $(1,1)$ and $(1,-1)$ whose sum is contained in a neighborhood of the point $(0,2)$ . But multiplication by a scalar in such a topology seems to me to be continuous, since for any neighborhood of the product $ax$ we can specify a corresponding neighborhood $x$ so that all points from it, when multiplied by $a$ , fall into the neighborhood of $ax$ . Namely, it seems to me that given an arbitrary neighborhood of $ax$ we can multiply by $1/a$ and get the necessary neighborhood of $x$ . Can you please tell me if my reasoning is correct?
|
Here is an example. Let $τ= \{B^c, ℝ^2-(0,0), ℝ^2, ∅ \}$ , where $B^c$ is the complement of the open unit ball centered at $(0,0)$ . It is clearly a topology on the plane. I. Multiplication map $m$ is continuous. It is enough to verify that preimages of $B^c$ and $ℝ^2-(0,0)$ are open since those are the only non-trivial open sets and $m$ is onto. $$ \begin{align} m^{-1}(ℝ^2-(0,0)) & =\{(λ,x,y) : (λx, λy) ≠ (0,0) \}\\ &= \{(λ,x,y) : λ ≠ 0 \text{ and } (x,y) ≠ (0,0) \} \\ &= (ℝ-0)\times (ℝ^2 - (0,0)) \end{align} $$ which is open. Moreover $$ \begin{align} m^{-1}(B^c) &= \{(λ,x,y) : ||(λx, λy)|| ≥ 1 \} \\ &= \{(λ,x,y) : λ≠0 \text{ and }||(x, y)|| ≥ 1/|λ| \} \\ &= \bigcup_{λ≠0} \{λ\}\times\{(x,y) : ||(x,y)||≥1/|λ| \} \end{align} $$ which is again open. II. Addition is not continuous. In fact it is not even sequentially continuous: Let $x_n = (0,2)$ and $y_n = (0,-2)$ be the constant sequences in $(ℝ^2, τ)$ . Both are convergent to any point in $B^c$ ; in particular both sequences converge to any point $p$ with $\|p\|\ge 1$ . However $x_n+y_n=(0,0)$ for every $n$ , and this constant sequence does not converge to any such $p$ , since $B^c$ is an open neighbourhood of $p$ that does not contain $(0,0)$ . Hence addition fails to be sequentially continuous.
|
|general-topology|functional-analysis|continuity|
| 0
|
Infinite Series $\sum\limits_{n=1}^\infty\left(\frac{H_n}n\right)^2$
|
How can I find a closed form for the following sum? $$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$ ($H_n=\sum_{k=1}^n\frac{1}{k}$).
|
$$\displaystyle{\begin{gathered} S = \sum\limits_{n = 1}^\infty {{{\left( {\frac{{{H_n}}}{n}} \right)}^2}} = \sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}{{\left( {\sum\limits_{k = 1}^n {\frac{1}{k}} } \right)}^2}} = \sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}\left( {\sum\limits_{k = 1}^n {\frac{1}{{{k^2}}} + 2\sum\limits_{k = 1}^{n - 1} {\left( {\frac{1}{k} \sum\limits_{\lambda = k + 1}^n {\frac{1}{\lambda }} } \right)} } } \right) = } \hfill \\ = \sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}\left( {\sum\limits_{k = 1}^n {\frac{1}{{{k^2}}}} } \right)} + 2 \sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}\sum\limits_{k = 1}^{n - 1} {\left( {\frac{1}{k} \sum\limits_{\lambda = k + 1}^n {\frac{1}{\lambda }} } \right)} } \Rightarrow \boxed{S = {S_1} + 2 {S_2}} \hfill \\ \end{gathered} }$$ $$ S_1 = \sum\limits_{n = 1}^\infty \frac{1}{n^2}\left( \sum\limits_{k = 1}^n \frac{1}{k^2} \right) \quad\text{ and }\quad S_2 = \sum\limits_{n = 1}^\infty \frac{1}{n^2}\sum\limits_{k = 1}^{n - 1} \left( \frac{1}{k}\sum\limits_{\lambda = k + 1}^n \frac{1}{\lambda } \right). $$ Symmetrizing the double sum gives $S_1=\sum_{n\ge1}\frac{H_n^{(2)}}{n^2}=\frac{\zeta(2)^2+\zeta(4)}{2}=\frac{7\pi^4}{360}$ , and a standard Euler-sum computation yields $S_2=\frac{\pi^4}{72}$ , so that $$S=S_1+2S_2=\frac{7\pi^4}{360}+\frac{2\pi^4}{72}=\frac{17\pi^4}{360}.$$
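The known closed form $\sum_{n\ge1}(H_n/n)^2=\frac{17\pi^4}{360}$ can be corroborated numerically; the tail beyond $N$ decays like $(\ln N)^2/N$, so the cutoff below suffices for three decimals.

```python
import numpy as np

# Partial sums of sum_{n>=1} (H_n/n)^2 versus the closed form 17*pi^4/360.
N = 1_000_000
n = np.arange(1, N + 1, dtype=np.float64)
H = np.cumsum(1.0 / n)                 # harmonic numbers H_1 .. H_N
partial = np.sum((H / n) ** 2)

closed_form = 17 * np.pi ** 4 / 360
print(partial, closed_form)
```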
|
|real-analysis|sequences-and-series|complex-analysis|closed-form|harmonic-numbers|
| 0
|
About $\lim \left(1+\frac {x}{n}\right)^n$
|
I was wondering if it is possible to get a link to a rigorous proof that $$\displaystyle \lim_{n\to\infty} \left(1+\frac {x}{n}\right)^n=\exp x$$
|
I have read through many answers but this is the only approach that is easy for me to understand: before proving that $\displaystyle \lim_{n\to\infty} \left(1+\frac {x}{n}\right)^n=\exp x$ , let's first prove that $\frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = \frac{1}{x}$ : $$y=\ln x$$ $$e^y=x$$ find the derivative using implicit differentiation $$(e^y) \frac{dy}{dx}=\frac{d(x)}{dx}$$ $$e^y \frac{dy}{dx}=1$$ $$ \frac{dy}{dx}=\frac{1}{e^y}$$ replace $e^y$ with $x$ , and $y$ with $\ln(x)$ : $$ \frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = \frac{1}{x}$$ According to the definition of the derivative, it means that $$ \frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = \lim_{h \to 0} \frac{\ln(x + h) - \ln(x)}{h} = \frac{1}{x}. $$ Now prove that $\displaystyle \lim_{n\to\infty} \left(1+\frac {x}{n}\right)^n=\exp x$ : $$\lim_{n\to\infty} \left(1+\frac {x}{n}\right)^n = \lim_{n\to\infty} \exp\left(\ln\left(1+\frac {x}{n}\right)^n\right) = \lim_{n\to\infty} \exp\left(n\ln\left(1+\frac {x}{n}\right)\right) = \exp\left(\lim_{n\to\infty} n\ln\left(1+\frac {x}{n}\right)\right),$$ using the continuity of $\exp$ . For fixed $x\neq 0$ put $h=\frac{x}{n}$ , so $h\to 0$ as $n\to\infty$ and $$n\ln\left(1+\frac{x}{n}\right)=x\cdot\frac{\ln(1+h)-\ln(1)}{h}\;\longrightarrow\; x\cdot\left.\frac{\mathrm{d}}{\mathrm{d}x}\ln(x)\right|_{x=1}=x\cdot 1=x$$ by the limit definition of the derivative established above. Hence the limit equals $\exp x$ (the case $x=0$ is trivial).
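A numeric illustration of the convergence (the value $x=1.7$ is an arbitrary choice):

```python
import math

# (1 + x/n)^n approaches e^x; the error shrinks roughly like e^x * x^2 / (2n).
x = 1.7
errors = [abs((1 + x / n) ** n - math.exp(x)) for n in (10, 100, 1000, 10_000)]
print(errors)
```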
|
|limits|exponential-function|faq|
| 0
|
Solve $\| X A - B \|$ subject to $X C = C X$
|
Given $A, B \in \mathbb{R}^{n \times k}$ and S.P.D. $C \in \mathbb{R}^{n \times n}$ , I would like to find an analytical solution for the matrix $X \in \mathbb{R}^{n \times n}$ that minimizes \begin{align} \lVert X A - B \rVert^2_{F} \end{align} subject to the hard constraint $$ C X = X C. $$ Given the eigendecomposition of $C$ with $C = R \Lambda R^T$ , from Nearest commuting matrix it appears that a reasonable projection operator onto the space of matrices commuting with $C$ is $P_C(X) = R P_\Lambda(R^T X R) R^T$ with $$ [P_\Lambda(X)]_{ij} = \begin{cases} X_{ij}, & \lambda_i = \lambda_j \\ 0, & \textrm{otherwise.} \end{cases} $$ With this in mind, my guess at solving this would be to apply this projection operator to the unconstrained minimal solution to the least squares system, e.g. $$ X = P_C(B A^{\dagger}) $$ with $A^{\dagger}$ denoting the M.P. pseudoinverse of $A$ . Alternatively, it seems like the solution $$ X = P_C(B A^T) $$ is also reasonable (similar to the orthogonal pr
|
Let $\lambda_1,\lambda_2,\ldots,\lambda_k$ be the distinct eigenvalues of $C$ and $\Pi_j=I-(C-\lambda_jI)(C-\lambda_jI)^+$ be the orthogonal projector onto the eigenspace for $\lambda_j$ . Then $C=\sum_j\lambda_j\Pi_j$ and every $X$ that commutes with $C$ must satisfy $X=\sum_j\Pi_jX\Pi_j$ . Therefore \begin{align*} \|XA-B\|_F^2 &=\big\|\sum_j\Pi_jX\Pi_jA-B\big\|_F^2\\ &=\big\|\sum_j\Pi_j(X\Pi_jA-B)\big\|_F^2\\ &=\sum_j\|\Pi_j(X\Pi_jA-B)\|_F^2\\ &=\sum_j\|(\Pi_jX\Pi_j)(\Pi_jA)-\Pi_jB\|_F^2. \end{align*} Hence a global minimiser $X$ is given by $\sum_j(\Pi_jB)(\Pi_jA)^+$ .
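Not part of the original answer, but as a numerical sanity check, here is a short NumPy sketch (the dimensions and eigenvalues below are arbitrary made-up test data) verifying that $X=\sum_j(\Pi_jB)(\Pi_jA)^+$ commutes with $C$ and does at least as well as the projected guess $P_C(BA^{\dagger})$ from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 4  # arbitrary test sizes

# Build an SPD matrix C with repeated eigenvalues so the constraint is nontrivial.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.array([1.0, 1.0, 2.0, 2.0, 2.0, 3.0])
C = Q @ np.diag(lam) @ Q.T

A = rng.standard_normal((n, k))
B = rng.standard_normal((n, k))

# X = sum_j (Pi_j B)(Pi_j A)^+, with Pi_j the orthogonal eigenprojectors of C.
X = np.zeros((n, n))
for lj in np.unique(lam):
    Pi = Q[:, lam == lj] @ Q[:, lam == lj].T  # projector onto eigenspace of lj
    X += (Pi @ B) @ np.linalg.pinv(Pi @ A)

# The projection operator P_C from the question, with R = Q.
def P_C(Y):
    Z = Q.T @ Y @ Q
    mask = np.isclose(lam[:, None], lam[None, :])
    return Q @ (Z * mask) @ Q.T

res = lambda M: np.linalg.norm(M @ A - B)
X_guess = P_C(B @ np.linalg.pinv(A))

assert np.allclose(C @ X, X @ C)          # X commutes with C
assert res(X) <= res(X_guess) + 1e-9      # X is no worse than the projected guess
```

On generic test data the inequality is strict, which shows that projecting the unconstrained minimiser is generally suboptimal.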
|
|matrices|matrix-equations|matrix-decomposition|least-squares|symmetric-matrices|
| 1
|
What is the meaning of ${2\choose1}$ in $\frac{dy}{dx} = {2\choose1}x$, ${2\choose1} = \frac{dy/dx}{x}$
|
What do we count here? What do we count with (2 choose 1) in this particular situation of taking the derivative of $x^2$ ? The ratio of the speed of growth of the area of a square to the growth of its side. As in the "art of counting", why does nCr appear in calculus? In the rate of change of the area of a growing square, what is being counted? $y = x^2$ $y + dy = (x + dx)^2$ $y + dy = {2\choose0}x^2 + {2\choose1}xdx + {2\choose2}dx^2$ $dy = {2\choose1}xdx$ $dy/dx = {2\choose1}x$ $\frac{dy}{dx} = {2\choose1}x$ ${2\choose1} = \frac{dy/dx}{x}$ I can see that we are choosing the two $x$ sides and the little bits of $x$ , in other words the $dx$ -s. And we multiply them every which way to get the areas of $dy$ . As if they were in a set of $\{x,dx\}$ . If this were a cube, we would choose three sides and three little bits, and multiply those to calculate the little bit of growth in volume. Expanding $(x+dx)^n$ and discarding the infinitesimal bits, the second term of the expansion, $\binom{n}{1}x$ , seems to
|
$\frac{dy}{dx} = \binom{2}{1} x = 2x$ So $dy= 2x\,dx$ . Upon integration we get $y= x^2 + \mathrm{constant\ of\ integration}$ . Is this what you are asking?
|
|calculus|combinatorics|
| 0
|
Commutator of an abelian normal subgroup and the cyclic group generated by x
|
I'm working through the book "The Theory of Finite Groups" by Hans Kurzweil and Bernd Stellmacher. Section 1.6, Exercise 1 is: Let $A$ be an abelian normal subgroup of $G$ and $x \in G$ . Show $[A, \langle x \rangle]$ = $\{[a, x] \mid a \in A\}$ . Defns: Commutator: $ [x, y]=x^{-1} y^{-1} x y$ $[A, \langle x \rangle]= \langle \{[a, x^i] \mid a \in A, x^i \in \langle x \rangle\} \rangle$ As far as I've gotten: Obviously { $[a, x] \mid a \in A$ } is contained in $ [A, \langle x\rangle]. $ I need to show that $[a, x^i] = [a', x] $ for some $a' \in A.$ I've tried $a'= [a, x^{i-1}], $ and a number of other variations, but I can't find the trick that makes it work. Thank you for any help.
|
I assume the group $G$ is finite. Let $k \in \mathbb{N}_0$ be arbitrary. $$ [a, x^{k+1}] = [a, x \cdot x^k] = [a, x^k] \cdot [a, x]^{x^k} = [a, x^k] \cdot [x^{-k}ax^k, x] $$ Note, I have used the invariance of commutators under group homomorphisms (applied to conjugation, $a \mapsto x^{-k}ax^k$ ) and one of the commutator formulas mentioned in the book ( $[x, yz] = [x, z][x, y]^z$ ). Finally, we reduce the $[a, x^k]$ term to a few $[-, x]$ terms by induction and use $$ [a, x] \cdot [b, x] = [ab, x] $$ for $a,b \in A$ . I have used that $A$ is normal and abelian several times.
|
|abstract-algebra|group-theory|
| 0
|
Vector bundles and their extensions
|
Let $E'$ and $E''$ be vector bundles over a complex manifold $X$ . An extension of $E''$ by $E'$ is an exact sequence of vector bundles $$ 0 \longrightarrow E'\longrightarrow E \longrightarrow E'' \longrightarrow 0. $$ Two extensions of $E''$ by $E'$ are equivalent if we have an isomorphism of exact sequences $$ 0 \longrightarrow E'\longrightarrow E_1 \longrightarrow E'' \longrightarrow 0, $$ and $$ 0 \longrightarrow E'\longrightarrow E_2 \longrightarrow E'' \longrightarrow 0. $$ I'm looking for a proof of the following proposition: The equivalence classes of extensions of $E''$ by $E'$ are in one-to-one correspondence with the elements of $H^1(X,\operatorname{Hom}(E'',E'))$ , the trivial extension corresponding to the zero element. I have not seen these extensions defined in pretty much any book on complex/differential geometry and cannot find a proof of the above result. If someone here has knowledge of where these things are discussed, I would appreciate it.
|
In terms of definitions, you can find definitions in any book on homological algebra, (e.g. Weibel's book) though you probably need to be comfortable with what a sheaf is and how to associate a locally free sheaf of $\mathcal O_X$ -modules to a vector bundle. In terms of computation, Cech cocycles are probably the most concrete way to understand what an element of either side "looks like". Note that if $0 \to E' \to E \to E''\to 0$ is a short exact sequence of vector bundles, then if $\mathcal E,\mathcal E',\mathcal E''$ are the corresponding locally free sheaves of sections of $E$ , $E'$ and $E''$ respectively, then $0 \to \mathcal E' \to \mathcal E \to \mathcal E'' \to 0$ is a short exact sequence of sheaves. In this way, short exact sequences of vector bundles exactly correspond to short exact sequences of the associated locally-free sheaves. For any two vector bundles $E'$ and $E''$ you can use the (multi)-linear algebra constructions on each fibre to form vector bundles $E'\otimes
|
|differential-geometry|complex-geometry|vector-bundles|
| 0
|
expected winning for a lottery
|
You decide to make a lottery with $n$ tickets, where each ticket is numbered between 1 and $n$, and each ticket is unique. Each ticket costs $5, and the lottery works in the following manner. Once all $n$ tickets have been purchased, a number $x$ is selected at random between 1 and $n$, and all the money is divided equally between people with tickets less than $x$. That way, if 1 is selected, you (as the organizer) get to keep the prize pool. Everyone's number is randomized; the only case where the organizer wins the prize pool is if $x=1$, as no ticket number is less than 1, so no person wins anything and the organizer automatically wins the prize pool. The organizer does not hold the number 1 and does not win any money if the number is greater than 1. The only way they earn the prize pool is if the chosen number is 1; other than that, there is no way they can win any money. The first part is to calculate the expected winnings per lottery as the organizer. So there is only one case whe
|
So if a person has ticket $p$ , the likelihood of winning the lottery is $(n-p)/n$ , right? And then if $q>p$ is the chosen number, one wins $5n/(q-1)$ . So you need a double sum, once over the possible ticket purchases and once over the possible chosen ticket $q$ . In sum you will get the following expected outcome: $$ E[X] = \sum_{i=1}^n\sum_{j=1}^n P(p=i)\,P(q=j)\,1_{i<j}\,\frac{5n}{j-1} = 5-\frac{5}{n} $$ Which also makes sense: if there were no organizer, on average everyone should get back their $5$ . But in $1/n$ of cases you definitely don't get back your stake, as the organizer wins it. Thus you need to subtract $5/n$ . If you want to know the expected result given that the contestant has already chosen ticket $p$ , you just get the second sum: $$ E[X|\text{contestant}=p]=\sum_{j=p+1}^n \frac{1}{n}\frac{5n}{j-1} = 5\sum_{j=p}^{n-1}\frac{1}{j} $$ The average probability of winning is a bit easier, as you don't need the $5n/(j-1)$ part: $$ P(p
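Not from the original answer: a small exact check of the expectation (using Python's `fractions` to avoid rounding), computing the double sum over ticket $p$ and chosen number $q$ directly and confirming the value $5-\frac5n$:

```python
from fractions import Fraction

def expected_contestant_payout(n):
    # Contestant holds ticket p (uniform over 1..n); the winning number q is
    # uniform over 1..n; tickets p < q split the pool of 5n among the q-1 winners.
    total = Fraction(0)
    for p in range(1, n + 1):
        for q in range(1, n + 1):
            if p < q:
                total += Fraction(1, n) * Fraction(1, n) * Fraction(5 * n, q - 1)
    return total

for n in (2, 5, 10):
    assert expected_contestant_payout(n) == 5 - Fraction(5, n)
```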
|
|probability|discrete-mathematics|probability-distributions|lotteries|
| 0
|
generalized version of definition of cyclic groups and its abelian property
|
I am a beginner in group theory and have just learned about cyclic groups. When I look at the books for the definition, it is said: Let $G$ be a cyclic group and let $a$ be a generator of $G$ , so that $$G= \langle a \rangle = {\{ a^n \mid n \in \mathbb Z \}}$$ I know that the definition could have been $G={\{ na \mid n \in \mathbb Z \}}$ if the operation is addition. I also know that multiplication is used as the default operation in group theory, so the binary operation does not have to be multiplication. So, can we generalize the definition to binary operations other than multiplication and addition? In other words, if a group is cyclic, does the binary operation have to be multiplication or addition? Is there any other cyclic group example whose binary operation is different from multiplication or addition, such as a random $*$ ? If so, how can we prove that they are abelian, because every cyclic group is abelian? It is easy to show that $a^n$ or $na$ is abelian, but what if the oper
|
Powers of an element and the operation of a group are just notational; for example, as long as the axioms of group theory are satisfied by your use of notation, you could write, say, $$a_n, n_a, \frac{a}{n}, \frac{n}{a}, (n)a, a^{(n)}, a\sim n, n\sim a, a\,\ddot\smile\, n . . .$$ or any other combination of symbols for " $a$ to the power of $n$ ". This fact follows from the general philosophy of isomorphisms. That said, for an abstract abelian group, we use addition and $na$ (for powers) most often. Not to put too fine a point on it, but it's just a binary operation at the end of the day. Every cyclic group is indeed abelian.
|
|abstract-algebra|group-theory|cyclic-groups|
| 0
|
expected winning for a lottery
|
You decide to make a lottery with $n$ tickets, where each ticket is numbered between 1 and $n$, and each ticket is unique. Each ticket costs $5, and the lottery works in the following manner. Once all $n$ tickets have been purchased, a number $x$ is selected at random between 1 and $n$, and all the money is divided equally between people with tickets less than $x$. That way, if 1 is selected, you (as the organizer) get to keep the prize pool. Everyone's number is randomized; the only case where the organizer wins the prize pool is if $x=1$, as no ticket number is less than 1, so no person wins anything and the organizer automatically wins the prize pool. The organizer does not hold the number 1 and does not win any money if the number is greater than 1. The only way they earn the prize pool is if the chosen number is 1; other than that, there is no way they can win any money. The first part is to calculate the expected winnings per lottery as the organizer. So there is only one case whe
|
A quick way to solve the problem: As you have noted, the probability that no contestant gets any payout is $\frac 1n$ (as only the selection $1$ achieves that result). Thus there is a $\frac {n-1}n$ chance that the contestants (collectively) will get the entire pool of $5n$ . But, of course, a priori, each contestant has the same expected gain here, so the answer must be $$\frac {n-1}{n^2}\times 5n=\frac {5(n-1)}n=5-\frac 5n$$ Phrased differently: the expected payout to the organizer is $5$ , and the expected total payout is $5n$ (that's the guaranteed payout). Thus each contestant must expect to get $$\frac {5n-5}n=5-\frac 5n$$ which, of course, is the same result. Sanity check: Let's do this in the case $n=2$ . In that case a contestant either gets a $1$ or a $2$ . If they get a $2$ , they cannot make any money. If they get a $1$ , they have a $\frac 12$ chance of getting $10$ and a $\frac 12$ chance of getting $0$ . Thus the answer is $\frac 12\times \frac 12\times 10=\frac 52$ , as
|
|probability|discrete-mathematics|probability-distributions|lotteries|
| 1
|
Radical/Prime/Maximal ideals under quotient maps
|
Let $I$ be an ideal of a ring (commutative with unity) $R$ and let $q:R\to R/I$ be the quotient map. Then there is a well known correspondence between ideals of $R$ containing $I$ and ideals of $R/I.$ Let $J$ be an ideal of $R$ containing $I$ and let $J'$ be the corresponding ideal in the quotient ring. I had to show that $J$ is radical/prime/maximal iff $J'$ is radical/prime/maximal. Showing $J$ is radical/prime/maximal if $J'$ is was simple enough by direct element manipulation arguments. For the other direction, I did it by using $J'=J/I,$ the fact that ideals are radical/prime/maximal iff their quotient rings are reduced/domains/fields and the second isomorphism theorem. However, I tried unsuccessfully to find a "direct" proof that perhaps manipulates elements of the ring in a similar way to the other direction. Is there a more direct proof? I'm wondering this especially for the radical ideals, because this is Exercise 1.22 (page 7) in Fulton's Algebraic Curves and he has never eve
|
For $J$ prime (maximal, resp.), we have an isomorphism $R/J \cong (R/I)/(J/I)$ ; then $J$ is prime (maximal) if and only if $R/J$ is an integral domain (a field), if and only if $J/I$ is prime (maximal). For $J$ radical, use that the radical of $J$ is the intersection of all prime ideals containing $J$ , hence containing $I$ , and the above statements.
|
|abstract-algebra|commutative-algebra|ring-theory|ideals|
| 0
|
Proof that an Ito diffusion start with 0 will be positive immediately
|
Let $X_t$ be an Itô diffusion satisfying the SDE $dX_t=\mu(X_t)dt+\sigma(X_t)dW_t$ with $\mu$ and $\sigma$ Lipschitz and $X_0=0$ . Assume that $\sigma(x)>0$ . Can we prove that $\tau=0$ almost surely, where $\tau:=\inf\{t\geq0:X_t>0\}$ ? If not, what extra assumption needs to be made for this to be true? I know at least that this works for Brownian motion. Is there any reference for such a property?
|
1. By Girsanov's theorem, the laws of $X$ and of the solution $Y$ of $dY_t=\sigma(Y_t)dW_t$ are locally absolutely continuous, so you can assume without loss of generality that $\mu=0$ . 2. When $\mu=0$ your process $X$ is a local martingale, hence a time change of Brownian motion—for which the conclusion is known to be true. How is your question affected by time change?
|
|reference-request|stochastic-processes|stochastic-calculus|stochastic-analysis|stochastic-differential-equations|
| 0
|
Can every (non-odd) set be partitioned into subsets of two elements?
|
Can every (non-odd) set be partitioned into subsets of two elements? By this I mean that given a set $A$ , there is some $S \subset \mathscr{P}(A)$ such that every element of $S$ has exactly two elements and every element of $A$ appears in exactly one element of $S$ . For a trivial example, $\mathbb{Z}^+$ can be partitioned into $\{ \{1,2\}, \{3, 4\}, \dots \}$ . It seems like "common sense" that this partition should exist, but I have been unable to prove it. I was hoping for a construction using simple set theory operations, but a proof by negation would be second-best. However I would be happy just to know if this is something that can be proved only by some more advanced mathematics. My thoughts on constructing a partition for progressively more complicated sets $A$ follow. If $A$ were allowed to be finite, it's easy to see that it can be so partitioned depending on whether the cardinality is even (can be partitioned) or odd (cannot be partitioned). If $A$ were countably infinite,
|
The answer to your question is yes, but my proof will rely heavily on the Axiom of Choice. I don't know immediately whether your claim is true without choice. Well order $A$ in the order type of its cardinality. That means (using consequences of the Axiom of Choice) that we pick a bijection $f: \kappa \to A$ for some cardinal $\kappa$ . Let $\alpha \in \kappa$ be any ordinal. Then $\alpha$ can be uniquely expressed as $\beta + n$ , where $\beta$ is a limit ordinal and $n \in \Bbb N$ . Pair up $f(\beta+ 2k)$ with $f(\beta+2k+1)$ . When $A$ is uncountable, this proof essentially partitions $A$ into a whole bunch of copies of $\Bbb N$ , and then uses the obvious partition on $\Bbb N$ .
|
|set-theory|set-partition|
| 0
|
Justification of interchanging the limit and integral
|
I have a question regarding interchanging the derivative and the integral, namely $$\frac{d}{d\zeta}f^{1}(\zeta)=\frac{d}{d\zeta}\int_{-\infty}^{+\infty}f(x)e^{-2x\pi \zeta i}dx.$$ Where $f^{1}(\zeta)$ is fourier transform of $f$ , It is given that $xf(x)\in L^{1}(R)$ . I know that for DMT we need the integrand to be measurable, which I believe $f(x)e^{-2\pi x\zeta i}$ is and we need it to be bounded independently of the limit variable, that is $\zeta$ , therefore $|f(x)e^{-2\pi i x \zeta}|\leq |f(x)|$ which is integrable on $R$ therefore the interchange is permitted, right? I have only seen DMT being used as $\lim_{n \to \pm \infty}$ , I have not seen it being used for derivatives, is it possible? Another theorem I believe that can be used is from multivariable calculus and requires uniform convergence with respect to the variable we are differentiatiing, that is $\zeta$ but that is easily given that $f\in L^{1}(R)$ , the integrand is continuous and partial derivative would be uniform
|
Define $\Phi(x,\zeta)=f(x)e^{-2\pi i x\zeta}$ . $\Phi:\Bbb{R}\times\Bbb{R}\to\Bbb{C}$ is measurable $|\Phi(x,\zeta)|=|f(x)|$ , so each $\Phi(\cdot,\zeta)\in L^1(\Bbb{R})$ ; this is why the Fourier transform is well-defined in the first place. $\frac{\partial\Phi}{\partial\zeta}(x,\zeta)=-2\pi i x\cdot f(x)e^{-2\pi i x\zeta}$ is also measurable, and we have $\left| \frac{\partial\Phi}{\partial\zeta}(x,\zeta)\right|=2\pi |xf(x)|$ , and the RHS is integrable by hypothesis. The $2\pi$ isn’t the important thing (and neither is the equal sign). What is important here is that you have a $\zeta$ -independent upper-bound on the derivative $\frac{\partial\Phi}{\partial\zeta}$ , which is $x$ -integrable. This is enough to apply Leibniz’s integral rule (see the measure-theory statement); the proof here uses Lebesgue’s DCT (applied to the difference quotient) and the mean-value inequality. Note that the third bullet point is important, but you didn’t mention it, so no, your explanation is not corre
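As an illustration (not part of the answer; the Gaussian test function is my own choice), one can check numerically that differentiating under the integral sign matches a difference quotient of the transform. With $f(x)=e^{-\pi x^2}$, the derivative of $\hat f$ should equal the transform of $-2\pi i x\, f(x)$:

```python
import math, cmath

# f(x) = exp(-pi x^2); its Fourier transform is exp(-pi zeta^2).
def transform(zeta, weight=lambda x: 1.0, L=8.0, N=4000):
    # Trapezoidal approximation of ∫ weight(x) f(x) e^{-2πi xζ} dx over [-L, L];
    # the integrand is negligible outside this window.
    h = 2 * L / N
    s = 0j
    for i in range(N + 1):
        x = -L + i * h
        wgt = 0.5 if i in (0, N) else 1.0
        s += wgt * weight(x) * math.exp(-math.pi * x * x) * cmath.exp(-2j * math.pi * x * zeta)
    return s * h

zeta, d = 0.7, 1e-4
lhs = (transform(zeta + d) - transform(zeta - d)) / (2 * d)   # d/dζ of the transform
rhs = transform(zeta, weight=lambda x: -2j * math.pi * x)     # transform of -2πi x f(x)
assert abs(lhs - rhs) < 1e-6
```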
|
|analysis|fourier-analysis|
| 1
|
Can every (non-odd) set be partitioned into subsets of two elements?
|
Can every (non-odd) set be partitioned into subsets of two elements? By this I mean that given a set $A$ , there is some $S \subset \mathscr{P}(A)$ such that every element of $S$ has exactly two elements and every element of $A$ appears in exactly one element of $S$ . For a trivial example, $\mathbb{Z}^+$ can be partitioned into $\{ \{1,2\}, \{3, 4\}, \dots \}$ . It seems like "common sense" that this partition should exist, but I have been unable to prove it. I was hoping for a construction using simple set theory operations, but a proof by negation would be second best. However, I would be happy just to know if this is something that can be proved only by some more advanced mathematics. My thoughts on constructing a partition for progressively more complicated sets $A$ follow. If $A$ were allowed to be finite, it's easy to see that it can be so partitioned depending on whether the cardinality is even (can be partitioned) or odd (cannot be partitioned). If $A$ were countably infinite,
|
Hint If $\kappa$ is an infinite cardinal, let: $$ E:=\{\alpha<\kappa : \alpha=\beta+2n \text{ for a limit ordinal (or zero) } \beta \text{ and some } n\in\mathbb{N}\} $$ You can think about $E$ as the set of even ordinals below $\kappa$ . Define $P:=\{\{\alpha,\alpha+1\}:\alpha\in E\}$ . It is a partition of $\kappa$ into pairs. Now, by AC , you can do the same in every infinite set.
|
|set-theory|set-partition|
| 0
|
If roots of $a(x^2+m^2)+amx+cm^2x^2=0$ are $u,v$
|
If the roots of $$a(x^2+m^2)+amx+cm^2x^2=0\tag{1}$$ are $u,v$ , one can easily prove that $$a(u^2+v^2)+auv+cu^2v^2=0.\tag{2}$$ Interestingly, if (1) is $f(x,m)=0$ , then (2) is $f(u,v)=0.$ The question: Why is this happening?
|
Let $f(x,y) = a(x^2 + y^2) +ayx + cy^2x^2$ . For a given $m$ , we are told that there exist $u, v$ such that $f(u, m) = 0$ and $f(v, m) = 0$ . OP claims that $f(u, v) = 0$ , and that there is an easy though unstated proof. Presumably they are asking if we can shed light on any deeper meaning. Treating this as a quadratic in $x$ with $y = m$ fixed, the 2 roots satisfy $u+v = \frac{ -am}{a+cm^2}$ , $uv = \frac{ am^2 } { a+cm^2 }$ . In particular, $ 1/u + 1/v + 1/m = 0 $ is a necessary condition. Given 2 of the values, this uniquely determines the third, so it's also a sufficient condition. To me, this is the "deeper meaning", and you will see how it is used. We now exploit the symmetry of $x, y$ . Treating this as a quadratic in $y$ with $x = u$ fixed, suppose that $0 = f(u, m) = f(u, n)$ (we know that $n$ exists, e.g. from Vieta); then $m+n = \frac{ -au}{a+cu^2 }$ , $mn = \frac{ au^2 } { a+cu^2 }$ . As before, $ 1/m + 1/n + 1/u = 0 $ is a necessary and sufficient condition. Thus, $ n = v$ , s
|
|polynomials|quadratics|
| 1
|
A question on elementary Weierstrass factors
|
Given $E_p(z) = (1-z)\exp(\sum_{k = 1}^p \frac{z^k}{k})$ , we want to show that $E_p(z) - 1 = O(z^{p+1})$ . To do this, Taylor uses the following argument: $\log(E_p) = \log(1-z) + \sum_{k=1}^p \frac{z^k}{k} = -\sum_{k=p+1}^{\infty} \frac{z^k}{k}$ , for $|z| < 1$ , where the second equality sign follows from the Taylor series expansion of $\log(1-z)$ about $0$ . And the above should imply that $E_p(z) - 1 = O(z^{p+1})$ . I am not sure why (2) (a result about the function) follows from (1) (a result about the log of the function).
|
For $w \to 0$ we have $$ e^w -1 = \sum_{j=1}^\infty \frac{w^j}{j!} = O(w) \, . $$ Substituting $w= \log E_p(z)$ (with $|z| < 1$ ) gives $$ E_p(z)-1= e^{\log E_p(z)}-1 = O(\log E_p(z)) = O(z^{p+1}) \, . $$ In other words: for $|z| \le R < 1$ we have $|\log E_p(z)| \le K|z|^{p+1}$ with some constant $K$ , and for $|w|\le K$ we have $|e^w-1| \le L|w|$ with some constant $L$ , so that $$ |E_p(z)-1|= |e^{\log E_p(z)}-1 | \le L|\log E_p(z)| \le KL|z|^{p+1} \, . $$
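For what it's worth, here is a quick numerical illustration (a sketch; the choice $p=3$ and the sample points are arbitrary) that the ratio $|E_p(z)-1|/|z|^{p+1}$ stays bounded as $z\to 0$:

```python
import cmath

def E(p, z):
    # Elementary Weierstrass factor E_p(z) = (1 - z) exp(sum_{k=1}^p z^k / k)
    return (1 - z) * cmath.exp(sum(z**k / k for k in range(1, p + 1)))

# Sample |E_p(z) - 1| / |z|^(p+1) on shrinking circles |z| = 2^{-m}, a few
# phases each; boundedness illustrates E_p(z) - 1 = O(z^{p+1}).
p = 3
ratios = []
for m in range(1, 8):
    r = 0.5 ** m
    for arg in (0.0, 1.0, 2.0, 3.0):
        z = r * cmath.exp(1j * arg)
        ratios.append(abs(E(p, z) - 1) / abs(z) ** (p + 1))

assert max(ratios) < 2.0
```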
|
|real-analysis|complex-analysis|
| 1
|
Can't figure out trigonometry problem
|
Problem : Let $\triangle ABC$ have sides $a,b,c$ . If $\angle A:\angle B:\angle C=4:2:1$ , prove that $\frac{1}{a}+\frac{1}{b}=\frac{1}{c}$ . We are given background to Law of Sines and Law of Cosines, but I don't have any idea how to arrive at the equation.
|
$\angle A = \frac{4\pi}{7}$ , $\angle B=\frac{2\pi}{7}$ , and $\angle C=\frac{\pi}{7}$ (radians). We know that in any triangle, the side opposite the largest angle is the largest side, and the side opposite the smallest angle is the smallest side (and likewise for the middle angle and side). Sides $a$ and $b$ are both larger than $c$ , and since the smaller the denominator, the larger the fraction, this seems plausible. Let's formally prove it now! Let $\angle C = x$ . We know $\angle A = 4x$ and $\angle B = 2x$ . We will find the sides of the triangle in terms of $x$ . Law of sines: $$\frac{a}{\sin 4x}=\frac{b}{\sin 2x}$$ Rearranging, we get $\frac{a}{b}=\frac{\sin 4x}{\sin 2x}=\frac{2\sin (2x) \cos (2x)}{\sin 2x}=2\cos 2x$ . Repeat this process for $\frac{b}{c}=2\cos x$ . We wish to prove $\frac{c}{a}+\frac{c}{b}=1$ (multiply the original equation by $c$ ). We already found $\frac{a}{b}$ and $\frac{b}{c}.$ To find $\frac{a}{c}$ , we multiply the two ratios we have! Now we have $\frac{a}{c}=4\cos 2x\cos
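A quick numerical check of the target identity (my addition, not part of the answer; taking $2R=1$ in the law of sines, so $a=\sin 4x$, $b=\sin 2x$, $c=\sin x$):

```python
import math

x = math.pi / 7                    # angle C; then A = 4x, B = 2x
# Law of sines with 2R = 1: each side equals the sine of its opposite angle.
a, b, c = math.sin(4 * x), math.sin(2 * x), math.sin(x)

# The claimed relation 1/a + 1/b = 1/c.
assert math.isclose(1 / a + 1 / b, 1 / c, rel_tol=1e-12)
```

The identity is scale-invariant, so checking it for one normalisation of the sides suffices.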
|
|trigonometry|
| 0
|
If you lived in a 4-torus, what would the doughnut hole look like from the inside?
|
I'm not just curious; it refers to general relativity. Specifically, would the hole in the torus' center look to us like a sphere, one you cannot enter because you always slip across the side and go around it instead of through?
|
The OP specifically asks about a 4-torus, yet many of the answers are given with reference to a 2-dimensional person living in a 2-torus. It is claimed that a 2D person cannot see the curved structure we associate with a torus (a donut-like shape). It seems the problem with "seeing the hole" is that it belongs to a higher dimension. The OP doesn't specify what dimension the person looking at (or for) the hole belongs to, so it's hard to decide. If it's a 5-or-more-dimensional person, then he or she should be able to see the hole. An example in 2D is given with video game worlds, which exhibit the properties of 2-tori. Again, the game is only playable in 3 dimensions. People say a lot about n-dimensional people living in n-dimensional spaces, and that raises a few questions in my opinion. One question: Can a 2-torus exist at all unless embedded in 3-dimensional space? Even if embedded in 3D space, could the curvature overall be so smooth that it could be perceived as
|
|general-topology|
| 0
|
How can you set bounds algebraically for integral equations?
|
Sometimes I see questions like this: $$\frac{dy}{y}=2dx$$ where $y = 1$ when $x = 0$ My book tells me to do this: $$\int_1^y\frac{dy}{y}=\int_0^x2dx$$ At first I thought this was a clever way of saying 0=0 (as an integral from $a$ to $a$ is always zero) to solve the problem. Then I saw this question: One method of dyeing a piece of cloth is to immerse it in a container which has P grams of dye dissolved in a fixed volume of water. The cloth absorbs the dye at a rate proportional to the mass of dye remaining: $$\frac{dx}{dt}=k(P-x)$$ where t is time in seconds, x is the mass of dye absorbed by the cloth and k = $\frac{1}{50}$ Find the time taken to dye a piece of cloth if a mass of $\frac{5}{8}P$ needs to be absorbed to reach the desired colour. (From 2020 Leaving Cert Higher Applied Maths Paper Q. 10) They rearrange $$\int\frac{dx}{P-x}=\frac{1}{50}\int dt$$ and then set the bounds to: $$\int_0^{\frac{5}{8}P}\frac{dx}{P-x}=\frac{1}{50}\int_0^t dt$$ I don't understand how we can do this
|
You have $$\frac{dx}{dt} = k(P-x)$$ We can divide by $P-x$ to get $$\frac1{P-x}\frac{dx}{dt} = k$$ Start the clock (i.e., $t = 0$ ) when the cloth is put into the dye. At that point, there is no dye in the cloth, so $\left.x\right|_{t = 0} = 0$ . At some time $T$ in the future it reaches the desired state of $\left.x\right|_{t = T} = \frac58P$ . To figure what $T$ is, get rid of the derivative by integrating over $t$ from $0$ to $T$ : $$\int_0^T\frac1{P-x}\frac{dx}{dt}\,dt = \int_0^Tk\,dt = k(T-0) = kT$$ Now in the left hand integral, we can change the variable of integration to $x$ , since $dx = \frac{dx}{dt}dt$ and we know the values of $x$ at $t=0$ and $t = T$ : $$\int_0^T\frac1{P-x}\frac{dx}{dt}\,dt = \int_0^{\frac58P}\frac{dx}{P-x} = -\left.\ln(P-x)\right|_{x=0}^{x=\frac58P} = \ln P - \ln\frac38P = \ln \frac 83$$ Thus $T = \frac1k\ln\frac83$ is the time we are after.
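A numerical cross-check (my own addition, not part of the answer): since $P-\frac58P=\frac38P$, the exit time works out to $T=\frac1k\ln\frac83$. Integrating $dx/dt = k(P-x)$ with a classical RK4 scheme confirms that $x(T)=\frac58P$:

```python
import math

k, P = 1 / 50, 1.0
T = math.log(8 / 3) / k            # e^{-kT} = 3/8, so x(T) = P(1 - 3/8) = 5P/8

# Classical fourth-order Runge-Kutta for dx/dt = f(t, x), x(0) = x0.
def rk4(f, x0, t_end, steps=10_000):
    x, h, t = x0, t_end / steps, 0.0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

x_T = rk4(lambda t, x: k * (P - x), 0.0, T)
assert math.isclose(x_T, 5 * P / 8, rel_tol=1e-6)
```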
|
|ordinary-differential-equations|definite-integrals|
| 1
|
Why are closed balls not a topology in a metric space?
|
Define a closed ball in a metric space $X$ with the metric $\rho$ that has centre a $x$ and radius $r$ as the set $\{y:\rho(x,y)\le r\}$ for $r\ge0$ and $x,y\in X$ . Why are arbitrary unions of closed balls not a topology in $X$ but arbitrary unions of open balls are? An open ball is defined as the set $\{y:\rho(x,y)\lt r\}$ .
|
In $\mathbb{R}$ let $\{r_n\}_{n\in \mathbb{N}}\subseteq(0,1)$ be an increasing sequence such that $r_n\rightarrow1$ as $n\rightarrow\infty$ . The sets $[-r_n,r_n]$ are closed balls in the standard topology of $\mathbb{R}$ , but $\bigcup_{n\in \mathbb{N}}[-r_n,r_n]=(-1,1)$ .
|
|real-analysis|general-topology|analysis|metric-spaces|
| 1
|
Derivative in normed space
|
I'm studying differential calculus of functions of several variables. Let $f:A\subset\mathbb{R}\to\mathbb{R}^n$ be differentiable with $||f(t)||>0$ for all $t\in A$ . Prove that $$u(t)=\dfrac{f(t)}{||f(t)||}$$ is differentiable and find $\langle u(t),u'(t)\rangle$ . I know the way to check whether $u(t)$ is differentiable or not is calculating this limit: $$\lim\limits_{h\to0}{\dfrac{||u(t+h)-u(t)||}{|h|}};$$ And what I got next is: $$\lim\limits_{h\to0}{\dfrac{\bigg|\bigg|\dfrac{f(t+h)}{||f(t+h)||}-\dfrac{f(t)}{||f(t)||}\bigg|\bigg|}{|h|}}$$ I found out that: $$\dfrac{\bigg|\bigg|\dfrac{f(t+h)}{||f(t+h)||}-\dfrac{f(t)}{||f(t)||}\bigg|\bigg|}{|h|}\le\dfrac{1}{|h|}\bigg|\bigg|\dfrac{f(t+h)-f(t)}{||f(t+h)||}+\left(\dfrac{1}{||f(t+h)||}-\dfrac{1}{||f(t)||}\right)f(t)\bigg|\bigg|$$ $$\le\dfrac{1}{||f(t+h)||}\dfrac{||f(t+h)-f(t)||}{|h|}+\dfrac{1}{|h|}\bigg|\dfrac{1}{||f(t+h)||}-\dfrac{1}{||f(t)||}\bigg|||f(t)||;$$ Let $|h|\to0$ ; then: $$\lim\limits_{h\to0}{\dfrac{||u(t+h)-u(t)||}{|h|}}\le2\dfrac{||f'(t)||}{||f(t)||}$$
|
The function $u$ is differentiable as a ratio of differentiable functions $f$ and $\|f\|.$ We have $1=\|u(t)\|^2=\langle u(t),u(t)\rangle.$ Thus $$ 0={d\over dt}\langle u(t),u(t)\rangle=2\langle u(t),u'(t)\rangle.$$ Remark There is no need for calculating $u'.$
|
|limits|normed-spaces|differential-topology|vector-analysis|
| 0
|
Why are closed balls not a topology in a metric space?
|
Define a closed ball in a metric space $X$ with the metric $\rho$ that has centre a $x$ and radius $r$ as the set $\{y:\rho(x,y)\le r\}$ for $r\ge0$ and $x,y\in X$ . Why are arbitrary unions of closed balls not a topology in $X$ but arbitrary unions of open balls are? An open ball is defined as the set $\{y:\rho(x,y)\lt r\}$ .
|
Note that the set of closed balls (closed in the sense of a given metric $\rho$ ) does constitute a basis for a topology on $X$ . You can easily show this: for any $x \in X$ , the closed ball $C_\rho (x, 1)$ contains $x$ . Suppose that the intersection of $C_\rho (y, \delta)$ and $C_\rho (z, \varepsilon)$ includes $x$ . The basis element $C_\rho (x, \min \{ \delta - \rho (x, y), \varepsilon - \rho (x, z) \}) \subseteq C_\rho (y, \delta) \cap C_\rho (z, \varepsilon)$ also contains $x$ . However, the topology induced by this basis of closed balls is not the same as the topology induced by the metric $\rho$ . Indeed, the new topology is the discrete topology on $X$ , because each singleton $\{ x \} = C_\rho (x, 0)$ is open.
|
|real-analysis|general-topology|analysis|metric-spaces|
| 0
|
Find $(f^{-1})'(0)$ if $f(x)=\int_{e^2}^x \frac{dt}{\ln t}$
|
Let $$f(x) = \int_{e^2}^{x} \frac{1}{\ln t} \, dt.$$ Find the value of $(f^{-1})'(0)$ . I am stuck on this question. I know $f'(x)$ is $1/\ln x$ and assumed $(f^{-1})'(x)$ is $\ln x$ ; by substituting $0$ for $x$ , the answer will be undefined. Unsure how to move on after these steps. Please advise. Thank you. My workings
|
It is not generally true that $(f^{-1})'(x) = \frac{1}{f'(x)}$ . The correct result is that $(f^{-1})'(f(x)) = \frac{1}{f'(x)}$ . This can be seen by differentiating both sides of $f^{-1}(f(x))=x$ and being careful to apply the chain rule on the LHS. Therefore, to determine $(f^{-1})'(0)$ in this problem, you should find $x$ having $f(x)=0$ and evaluate the reciprocal of the derivative of $f$ at this $x$ .
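To make this concrete (an illustrative sketch, not part of the original answer): here $f(e^2)=0$ and $f'(e^2)=1/\ln(e^2)=\frac12$, so $(f^{-1})'(0)=2$. A quick numerical check, approximating $f$ with a trapezoidal quadrature and forming a difference quotient of the inverse:

```python
import math

def f(x, steps=2_000):
    # f(x) = ∫_{e^2}^{x} dt / ln t, composite trapezoidal rule
    a = math.e ** 2
    h = (x - a) / steps
    s = 0.5 * (1 / math.log(a) + 1 / math.log(x))
    for i in range(1, steps):
        s += 1 / math.log(a + i * h)
    return s * h

# (f^{-1})'(0) = 1 / f'(e^2) = ln(e^2) = 2; cross-check via a symmetric
# difference quotient of the inverse around y = 0 (i.e., x = e^2).
d = 1e-3
x_plus, x_minus = math.e ** 2 + d, math.e ** 2 - d
approx = (x_plus - x_minus) / (f(x_plus) - f(x_minus))
assert abs(approx - 2.0) < 1e-5
```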
|
|calculus|
| 0
|
Determine whether the complex power series converges at a point
|
I need to determine if the series $$\sum\limits_{n=1}^{\infty} \frac{(z-1+i)^{2n-1}}{5^n(n+1)\ln^3(n+1)}$$ converges at the point $z_1 = -1$ . After substituting the point, I got: $$ \sum\limits_{n=1}^{\infty} \frac{(i - 2)^{2n-1}}{5^n(n+1)\ln^3(n+1)} $$ And I do not know what to do next. I have heard that it is necessary to split this series into two, one with real coefficients, the other with complex ones. But here I don't see any way to split the series like that. Can I investigate convergence here using the root test ( $\lim\limits_{n \to \infty} \sqrt[n]{|a_n|}$ )? $$\lim\limits_{n \to \infty} \sqrt[n]{|a_n|} = \lim\limits_{n \to \infty} \sqrt[n]{\left|\frac{(i-2)^{2n-1}}{5^n(n+1)\ln^3(n+1)}\right|} = \lim\limits_{n \to \infty} \sqrt[n]{\left| \frac{(i-2)^{2n}}{(i-2)5^n} \right|} = \lim\limits_{n \to \infty} \frac{|i-2|^2}{5\sqrt{5}} = \lim\limits_{n \to \infty} \frac{5}{5\sqrt{5}} = \frac{1}{\sqrt{5}}$$ Therefore it converges.
|
To prove that the series converges, first prove that the series $\sum_{n\ge 1} z^n/n$ converges for $|z| = 1$, $z \ne 1$ (e.g. by Dirichlet's test); then put $z = (i-2)^2/5$, which satisfies $|z|=1$ and $z\ne1$, to conclude.
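In fact, at $z_1=-1$ the convergence is even absolute: $|i-2|^2=5$ exactly cancels the $5^n$, so $|a_n|=\frac{1}{\sqrt5\,(n+1)\ln^3(n+1)}$, which is summable by the integral test ($\int \frac{dx}{x\ln^3 x}$ converges). A small numerical sketch of that cancellation (the range of $n$ is an arbitrary choice):

```python
import math

w = 1j - 2   # after substituting z = -1, the numerator base is (i - 2)
ns = range(1, 12)
# modulus of the n-th term of the series at z = -1
moduli = [abs(w**(2*n - 1)) / (5**n * (n + 1) * math.log(n + 1)**3) for n in ns]
# predicted modulus 1 / (sqrt(5) (n+1) ln^3(n+1)) after the 5^n cancellation
predicted = [1.0 / (math.sqrt(5) * (n + 1) * math.log(n + 1)**3) for n in ns]
```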
|
|complex-analysis|complex-numbers|power-series|
| 0
|
Determining whether a random variable is more likely to be positive or negative
|
Suppose that we have a random variable $X$ defined on a finite set of integers $D$ with the following properties: $D=\mathbb{Z} \cap [k_1,k_2]$ for some integers $k_1 < k_2$ . For all $d \in D$ , $\mathbb{P}(X=d)>0$ . $\mathbb{E}[X]=0$ . $|k_1| < |k_2|$ . If $k_1 \leq d_1 < d_2 < 0$ , then $\mathbb{P}(X=d_1) \leq \mathbb{P}(X=d_2)$ . If $k_2 \geq d_1>d_2>0$ , then $\mathbb{P}(X=d_1) \leq \mathbb{P}(X=d_2)$ . My goal is to show that $$ \mathbb{P}(X<0) \geq \mathbb{P}(X>0),$$ or to find an explicit example of a random variable $X$ that satisfies all of the above properties but for which $$ \mathbb{P}(X<0) < \mathbb{P}(X>0).$$ In other words, I want to show that $X$ is more likely to be negative than positive, or to find a counterexample to such a claim. The idea behind this problem is to determine the player that is most likely to win in a game for which two players have bounded integer scores, have the same average score, but in which one player, when winning, does it in general by larger margins than the other. I also want that the proba
|
This is not the case as written. Since the probabilities decrease away from the centre on both sides, it doesn’t matter that $|k_1|\lt|k_2|$ ; almost the entire mass on the positive side can be concentrated near the origin. For example, with $k_1=-2$ and $k_2=3$ and $\epsilon$ small: \begin{array}{c|c} k&\mathbb P(X=k)\\\hline -2&\frac16\\ -1&\frac16+5\epsilon\\ 0&\frac16-7\epsilon\\ 1&\frac12\\ 2&\epsilon\\ 3&\epsilon \end{array} All the conditions are fulfilled and the probabilities sum to $1$ , but $\mathbb P(X\gt0)\gt\mathbb P(X\lt0)$ . However, it works if you turn around the inequality in the sixth property. If two kids (or adults, for that matter) are on a seesaw and the one on the shorter side hunches over while the one on the longer side leans back, the only way that the seesaw can nevertheless be in equilibrium is if the kid on the shorter side is heavier. There’s a direct analogy between the expected value and the centre of mass (with probability playing the role of mass).
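The counterexample can be verified in exact arithmetic (a sketch; $\epsilon = 1/100$ is one arbitrary admissible choice, any $\epsilon < 1/18$ works):

```python
from fractions import Fraction as F

eps = F(1, 100)   # any small positive epsilon (here < 1/18) works
pmf = {-2: F(1, 6), -1: F(1, 6) + 5*eps, 0: F(1, 6) - 7*eps,
        1: F(1, 2),  2: eps,             3: eps}

total = sum(pmf.values())                    # should be exactly 1
mean = sum(k * p for k, p in pmf.items())    # should be exactly 0
p_neg = pmf[-2] + pmf[-1]
p_pos = pmf[1] + pmf[2] + pmf[3]
mono_neg = pmf[-2] <= pmf[-1]                # property 5 (negative side)
mono_pos = pmf[3] <= pmf[2] <= pmf[1]        # property 6 (positive side)
```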
|
|probability|random-variables|game-theory|
| 0
|
Why are closed balls not a topology in a metric space?
|
Define a closed ball in a metric space $X$ with the metric $\rho$ , with centre $x$ and radius $r$ , as the set $\{y:\rho(x,y)\le r\}$ for $r\ge0$ and $x,y\in X$ . Why do arbitrary unions of closed balls not form a topology on $X$ , while arbitrary unions of open balls do? An open ball is defined as the set $\{y:\rho(x,y)\lt r\}$ .
|
If you define closed balls strictly, that is, if $B_r(x_0)=\{x: d(x,x_0)\leq r\}$ is a closed ball for $\color{red}{r>0}$ , then the set of closed balls does not constitute a basis for a topology. For example, consider $\Bbb R^2$ and its set of closed balls $\cal C$ in the Euclidean metric. Let $A=\{(x,y): x^2+y^2\leq\tfrac14\}$ and $B=\{(x,y): (x-1)^2+y^2\leq\tfrac14\}$ . They are closed balls. But $A\cap B=\{(\tfrac12,0)\}$ is not a closed ball, and no closed ball of radius $r>0$ fits inside this singleton. Therefore, $\cal C$ is not a topological basis according to the definition of topological basis.
|
|real-analysis|general-topology|analysis|metric-spaces|
| 0
|
Topology (confusion related to subspace topology and properties of a set)
|
The interior of a subset $A$ of a topological space $X$ is given by $\operatorname{int}(A)$ , defined as the union of all open subsets of $A$ . I am confused about whether this means open with respect to the subspace topology on $A$ or the topology of $X$ itself.
|
The interior of a subset $A$ of a topological space $X$ , denoted $int(A)$ , is defined with respect to the topology of $X$ itself, not just the subspace topology of $A$ . In other words, when we say "the union of all open subsets of $A$ ," we're referring to subsets that are open in $X$ and contained in $A$ . To clarify, an open subset of $A$ in this context means any subset of $A$ that is also an open set in the overarching space $X$ . The interior of $A$ is the largest open set in $X$ that is contained entirely within $A$ . This definition ensures that the concept of "interior" is directly tied to the topology of the whole space $X$ , highlighting how properties of subsets like $A$ are understood in the context of their containing space.
|
|general-topology|
| 0
|
Fourier Transform and conjugations
|
Consider a sufficiently regular and decaying function $f\in \mathcal{S}(\mathbb{R})$ and define the Fourier transform as $$ \mathcal{F}(f)(\xi)=\int e^{-ix\xi}f(x)dx \quad \hbox{ and } \quad \mathcal{F}^{-1}(f)(x)=\int e^{ix\xi}f(\xi)d\xi. $$ With these definitions I would like to calculate $\mathcal{F}_{\mu\mapsto\mu'}^{-1}[\overline{\widehat{f}(\xi-\mu)}]$ , where $\overline{\cdot}$ stands for the complex conjugate. First, using that $\mathcal{F}(\overline{f})(\xi)=\overline{\mathcal{F}(f)(-\xi)}$ , we can directly calculate the above quantity, $$ \mathcal{F}_{\mu\mapsto\mu'}^{-1}[\overline{\widehat{f}(\xi-\mu)}]=\mathcal{F}_{\mu\mapsto\mu'}^{-1}[\widehat{\overline{f}}(\mu-\xi)]=e^{i\xi\mu'}\overline{f}(\mu'). $$ However, if instead I tried to verify my result by direct calculations, taking out of the integral the complex conjugate, for some reason I obtained an extra minus sign: $$ \mathcal{F}_{\mu\mapsto\mu'}^{-1}[\overline{\widehat{f}(\xi-\mu)}]=\overline{\int e^{-i\mu\mu'}\wideha
|
Think about convolution $u\ast v(x)=\int u(x-y)v(y)dy$ , using a change of variable to check why $\int u(x-y)v(y)dy=\int u(y)v(x-y)dy$ . In your case $u(x)=\hat f(x)$ and $v(x)=e^{-ix\mu'}$ . More precisely, when we take a reflection in the change of variable we have $$\int_{-\infty}^{+\infty} u(-t)dt=\int_{+\infty}^{-\infty}u(s)(-ds)=\int_{-\infty}^{+\infty}u(s)ds.$$ Btw your Fourier inversion is incorrect. You should either multiply by $(2\pi )^{-n/2}$ in both $\mathcal F$ and $\mathcal F^{-1}$ , or use $\mathcal F^{-1}f(x)=\frac1{(2\pi)^n}\int f(\xi)e^{ix\xi}d\xi$ . Also you should avoid the notation $\mu\to\mu'$ , since one tends to use different symbols on the frequency side and the physical side.
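The conjugation identity $\mathcal F(\overline f)(\xi)=\overline{\mathcal F(f)(-\xi)}$ used in the question can be checked numerically with a Riemann-sum transform (a sketch; the test function, the window $[-8,8]$ and the grid are arbitrary choices):

```python
import cmath

def fourier(f, xi, a=-8.0, b=8.0, n=4000):
    """Riemann-sum approximation of F(f)(xi) = ∫ e^{-ix·xi} f(x) dx."""
    h = (b - a) / n
    return sum(cmath.exp(-1j * (a + k*h) * xi) * f(a + k*h) for k in range(n)) * h

# An arbitrary smooth, rapidly decaying (Schwartz-type) test function
f = lambda x: cmath.exp(-x*x) * (1 + 2j*x)
f_conj = lambda x: f(x).conjugate()

xi = 1.3
lhs = fourier(f_conj, xi)            # F(conj f)(xi)
rhs = fourier(f, -xi).conjugate()    # conj(F(f)(-xi))
```

Note that the identity holds exactly for the discrete sums too, since conjugation commutes with finite sums; only rounding noise separates `lhs` from `rhs`.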
|
|analysis|fourier-analysis|fourier-transform|
| 1
|
The normal cone to a ball at a boundary point of the ball
|
I got stuck on a problem where, in the end, I needed to know the normal cone to a ball $B = \left\{ x \in \mathbb{R}^n : \lVert x \rVert \leq 1 \right\}$ (here $\lVert \cdot \rVert$ is an arbitrary norm), at a point of the boundary $\partial B$ . Recall that the normal cone to a closed convex set $C$ at a point $x$ in $\mathbb{R}^n$ is the set $$N_C(x) = \left\{ y \in \mathbb{R}^n : \langle y,z-x \rangle \leq 0 \ \forall z \in C \right\}.$$ I managed to compute it for $n = 1$ : taking $B = [-1,1]$ , if I'm not mistaken, it is $$N_B(x) = \begin{cases} [0,+\infty), & \ \text{if} \ x=1, \\ (-\infty,0], & \ \text{if} \ x=-1.\end{cases}$$ Intuitively, I think the normal cone $N_B(x)$ will always be a half space at boundary points, but I don't know how to see this in a more general setting. Is my intuition correct? If not, what is it? Any help, hints or references would be appreciated. $\textbf{Edit}:$ Actually when the norm is differentiable at the point I managed to see that the n
|
Let $\|\cdot\|_*$ denote the dual norm to $\|\cdot\|$ : $$ \|y\|_* := \sup_{\|x\|\le 1} x^Ty. $$ This implies $$ x^Ty \le \|x\| \cdot \|y\|_*. $$ Let $$ B:=\{x: \ \|x\|\le 1\}. $$ Claim . The normal cone of the unit ball $B$ is then $$ N_B(x) = \{ y: \ x^Ty = \|y\|_*\}. $$ Proof . Assume $x^Ty = \|y\|_*$ . Take $z\in B$ . Then $$ \langle y,z-x \rangle \le \|y\|_* \|z\| - \|y\|_* \le 0. $$ Now take $y \in N_B(x) $ . By compactness, there is $z$ with $\|z\|\le 1$ such that $z^Ty=\|y\|_*$ . Then $$ 0 \ge \langle y, z-x \rangle = \|y\|_* - y^Tx, $$ which implies $y^Tx \ge \|y\|_*$ . Since $\|x\|=1$ , this implies $x^Ty = \|y\|_*$ by $$ \|y\|_* \le x^Ty \le \|x\|\cdot \|y\|_* = \|y\|_*. $$
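The claim $N_B(x)=\{y:\ x^Ty=\|y\|_*\}$ can be illustrated numerically, e.g. for the sup-norm ball in $\mathbb R^2$, whose dual norm is the $1$-norm (a sketch; the chosen boundary point and test vectors are arbitrary):

```python
import random
random.seed(0)

# Unit ball of the sup-norm in R^2; its dual norm is the 1-norm.
dot = lambda u, v: u[0]*v[0] + u[1]*v[1]

x = (1.0, 1.0)          # a boundary point: ||x||_inf = 1
y_in  = (2.0, 3.0)      # x.y = 5 = ||y||_1     -> claimed to lie in N_B(x)
y_out = (2.0, -1.0)     # x.y = 1 < 3 = ||y||_1 -> claimed to lie outside

# Random points z with ||z||_inf <= 1; check <y_in, z - x> <= 0 for all of them
zs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
in_cone = all(dot(y_in, (z[0] - x[0], z[1] - x[1])) <= 1e-12 for z in zs)
# For y_out, the point z = (1, -1) in B witnesses <y_out, z - x> > 0
violated = dot(y_out, (1.0 - x[0], -1.0 - x[1])) > 0
```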
|
|analysis|convex-analysis|convex-geometry|convex-cone|
| 0
|
Solving an equation analytically of the type $n \cdot x^{n - 1} \cdot y + \frac{x^n}{z} = w$ for n
|
I am currently faced with solving the following equation for my thesis. I am looking for a symbolic answer to n $$ k \left( n \cdot d_{\text{rz}}^{n-1} \cdot \frac{\partial d_{\text{rz}}}{\partial r} + \frac{d_{\text{rz}}^n}{r} \right) = \rho \left( \frac{\partial v_z}{\partial r} \cdot v_r + \frac{\partial v_z}{\partial z} \cdot v_z \right)$$ So this equation can be written for simplification purposes as: $$n \cdot x^{n - 1} \cdot y + \frac{x^n}{z} = w$$ Now I would like to solve this equation analytically for n. Ideas that I have are trying to extract n by: Take the logarithm of both sides and apply various rules for mathematical operations using logarithms Use the Lambert function and attempt to write the equation in an adequate form respectively I have namely tried both methods, unfortunately to no avail. I have previously used the Lambert function to extract n from $n \cdot x^{n - 1} = y$ . This has worked but applying this methodology to my case, namely $n \cdot x^{n - 1} \cdot y
|
$$ nyx^{n-1} + \frac{x^n}{z} =w $$ $$ nyx^{-1} + \frac{1}{z} =wx^{-n}$$ We can set $\psi := yx^{-1}$ and $q= \frac{1}{z}$ $$ \psi n+ q= wx^{-n} $$ $$ \psi(n+ \frac{q}{\psi})= wx^{-n} $$ Now let $u := n+\frac{q}{\psi} \implies n =u-\frac{q}{\psi}$ $$ \psi u = wx^{-u+\frac{q}{\psi}} $$ $$ u = \frac{w}{\psi}x^{-u}x^{\frac{q}{\psi}} $$ Set $\eta = \frac{w}{\psi}x^{\frac{q}{\psi}}$ So we simplify to: $$ u = \eta x^{-u} $$ $$ x^{u}u = \eta\implies e^{u\ln x} u \ln x = \eta \ln x $$ So we have at the end $$ u = \frac{1}{\ln x}\operatorname{W_n}(\eta \ln x) $$ Undoing the substitution gives $$ n = u-\frac{q}{\psi} = \frac{1}{\ln x}\operatorname{W}(\eta \ln x)-\frac{q}{\psi} = \frac{1}{\ln x}\operatorname{W}(\eta \ln x)-\frac{x}{yz}. $$
|
|linear-algebra|algebra-precalculus|lambert-w|
| 0
|
Random variable $X$ is uniformly distributed over the segment $[-\pi, 0]$. Find the distribution
|
I don't know what to do in this task: A random variable $X$ is uniformly distributed over the segment $[-\pi, 0]$ . Find the distribution function of the random variable $Y = \sin X$ . I do the following: $F_Y(y) = P(\sin X \le y)$ . But then I got stuck: how do I get the information I need from the distribution of $X$ ? I also get that $\operatorname{Im} Y = [-1, 0]$ , which means that the distribution function is $0$ on $(-\infty, -1]$ and $1$ on $[0, +\infty)$ .
|
The CDF (cumulative distribution function) of $X$ is $$ \mathbb P(X\le x)=\frac{x+\pi}{\pi},\qquad -\pi\le x\le 0. $$ The PDF (probability density function) is the derivative of this. To find the CDF of $\sin X$ you calculate $\mathbb P(\sin X\le y)$ . But you made a mistake. Taking into account that $\sin x$ is symmetric around $-\pi/2$ in the interval $[-\pi,0]$ this is not hard: \begin{align}\mathbb P(\sin X\le y)&=\mathbb P\bigl(-\pi-\arcsin y\le X\le \arcsin y\bigr)\\&=\frac{\pi+2\arcsin y}{\pi}=1+\frac{2}{\pi}\arcsin y,\qquad -1\le y\le 0.\end{align}
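The resulting CDF, $\mathbb P(\sin X\le y)=1+\frac2\pi\arcsin y$ for $-1\le y\le 0$, can be checked by simulation (a sketch; the sample size and the test point $y_0=-1/2$ are arbitrary choices):

```python
import math, random
random.seed(1)

N = 200_000
y0 = -0.5
# Draw X uniformly on [-pi, 0] and count how often sin(X) <= y0
hits = sum(1 for _ in range(N)
           if math.sin(random.uniform(-math.pi, 0.0)) <= y0)
empirical = hits / N
exact = 1 + (2 / math.pi) * math.asin(y0)   # equals 2/3 at y0 = -1/2
```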
|
|probability-theory|probability-distributions|random-variables|
| 1
|
What is the logical system of Tractatus Logico-Philosophicus?
|
Tractatus Logico-Philosophicus states simply that 6 The general form of the truth function is: $[\bar p, \bar\xi, N(\bar \xi)]$ . This is the general form of the sentence. Wikipedia and other sources say $\bar p$ stands for an atomic proposition; $\bar \xi$ stands for a set of propositions; $N(\xi)$ stands for the set of all negations of propositions in $\bar \xi$ . However, this couldn't possibly be correct, because 6.03 The general form of the integer is: $[0, \xi, \xi+1]$ . And not a single negation is here. Not only that, $0$ is not an atomic proposition. Not only that, as far as I see, $[0, \xi, \xi+1]$ is not a function of type $$[0, \xi, \xi+1]: \mathrm{Something} \to \{\top, \bot\}$$ So, what is the logical system of Tractatus Logico-Philosophicus?
|
For the general form of an integer, the notation introduced at 5.2522 is used: 5.2521 If an operation is applied repeatedly to its own results, I speak of successive applications of it. ( ‘O’O’O’a’ is the result of three successive applications of the operation ‘O’ξ’ to ‘a’ .) In a similar sense I speak of successive applications of more than one operation to a number of propositions. 5.2522 Accordingly I use the sign ‘[ a, x, O’x ]’ for the general term of the series of forms a, O’a, O’O’a , . . . .This bracketed expression is a variable: the first term of the bracketed expression is the beginning of the series of forms, the second is the form of a term x arbitrarily selected from the series, and the third is the form of the term that immediately follows x in the series. It should be remarked that Wittgenstein's Tractatus Logico-Philosophicus is not entirely clear of equivocal formulations which have subsequently led to debates. For example, see the papers, P.T. Geach's “ Wittgenstei
|
|logic|philosophy|
| 0
|
Mean curvature of graph over its tangent plane
|
Let $S$ be a regular surface in $\mathbb{R}^3$ and $p\in S$ a point on the surface. By the implicit function theorem $S$ can be locally written as a graph of a function, e.g. $V\cap S = \{ (x,y,z) \in \mathbb{R}^3 : (x,y)\in U, z=f(x,y)\}$ for some open neighbourhood $V$ of $p$ , open set $U\subset \mathbb{R}^2$ and some smooth function $f: U \rightarrow \mathbb{R}.$ By choosing local coordinates we can identify $U$ as part of the tangent plane of $S$ at $p$ , furthermore we can set $f^{-1}(p)=(0,0)$ . In this case, the mean curvature at $p$ is given by $H=\frac{f_{xx}\;\;+f_{yy}}{2}$ (average of second derivatives at $p$ ) and principal curvatures are $f_{xx},f_{yy}$ . Is this correct? If it is, how can one describe this more precisely than "choosing local coordinates..."? If it is not, how could I achieve a similar result (surface as graph over its tangent plane and easy formula of $H$ )? Thank you.
|
By choosing local coordinates we can identify $U$ as part of the tangent plane of $S$ at $p$ , furthermore we can set $f^{-1}(p)=(0,0)$ . In this case, the mean curvature at $p$ is given by $H=\frac{f_{xx}+f_{yy}}{2}$ (average of second derivatives at $p$ ) and principal curvatures are $f_{xx},f_{yy}$ . Is this correct? I think it is not always true that the principal curvatures are $f_{xx},f_{yy}$ . We need to rotate the $xy$-plane so that the $x$-axis and $y$-axis point along the two principal directions. (But $H=\frac{f_{xx}+f_{yy}}{2}$ is always true at $p$ , since the Laplacian is invariant under orthogonal transformations.)
|
|differential-geometry|surfaces|curvature|tangent-spaces|
| 0
|
Does a metric space need to be a Hausdorff space?
|
I need to prove that: Let $X$ , $Y$ be compact metric spaces, $f:X \to Y$ be continuous and bijective. Then $f$ is a homeomorphism. But I have been investigating, and the versions of this theorem that I find instead assume that $Y$ is a Hausdorff space. Can someone explain to me why $Y$ needs to be a Hausdorff space? Thanks.
|
Another try... so maybe we can pull ourselves more and more out of the box. :-) I will use the assumptions that $X$ is compact and $Y$ is Hausdorff. Notice that a bijection means that every point of $X$ is identified with exactly one point of $Y$ . But you can think further... every subset of $X$ corresponds to exactly one subset of $Y$ . And a family of subsets of $X$ is in bijection with the corresponding family of subsets of $Y$ . In effect, $X$ and $Y$ are the same, and the function $f$ is the identity. We just regard $x$ and $f(x)$ as being the same thing. So, when you say that \begin{align*} \textrm{id}: (X, \tau) &\rightarrow (X, \gamma)\\ x &\mapsto x \end{align*} is continuous, what you are saying is that $$\gamma \subset \tau.$$ Notice that $\gamma$ is Hausdorff and $\tau$ is compact. When you add open sets to a Hausdorff topology, it does not cease being Hausdorff! And when you remove open sets from a compact topology it does not cease being compact. So, actually, we have that b
|
|general-topology|metric-spaces|separation-axioms|
| 0
|
Find the boundary map of CW complex of two 0-cells and one 1-cell
|
The question is given $I_0 = \mathbb{Z}^2, I_1 = \mathbb{Z}$ and a boundary map $\partial: I_1 \to I_0$ defined by $1 \mapsto (+1, -1)$ . Find a CW structure on the unit interval $I$ such that the chain complex is isomorphic to $I_\bullet$ . I constructed the CW structure as follows $X_0$ consists of two distinct points, then $C^{CW}_0(X) = I_0 = \mathbb{Z}^2$ $X_1 = I$ with the attaching map attaches two points of $S^0$ to two points of $X_0$ , then $C^{CW}_1(X) = I_1 = \mathbb{Z}$ As $X_0 \subseteq X_1$ , short exact sequence $$ 0 \to C_\bullet(X_0) \to C_\bullet(X_1) \to C_\bullet(X_1, X_0) \to 0 $$ Induced long exact sequence $$ H_1(X_1) \to H_1(X_1, X_0) \to H_0(X_0) \to H_0(X_1) $$ As $X_1$ contractible, then $H_1(X_1) = 0$ , as $X_1$ has two path-components, $X_0$ has one path-components, then $H_0(X_0) = \mathbb{Z}^2, H_0(X_1) = \mathbb{Z}$ . Since $H_1(X_1) = 0$ , the connecting homomorphism $\partial: H_1(X_1, X_0) \to H_0(X_0)$ is injective. On the other hand, on the level o
|
$\partial b$ need not always be $0$ . For example, take $b$ to be the chain corresponding to the $1$ -cell, and let $c$ be its image in $C_1(X_1, X_0)$ . Then $\partial b$ lies in $C_0(X_0)$ , so $\partial c = 0$ in $C_1(X_1, X_0)$ . Indeed, $H_1(X_1, X_0)$ will be free abelian of rank one, generated by $[c]$ , and the connecting homomorphism should take $[c]$ to the homology class determined by the difference of the two $0$ -cells.
|
|algebraic-topology|
| 1
|
Hamilton Jacobi equation in linear quadratic case
|
I am trying to solve an HJB equation in a very simple case that put me in trouble. For a function $v(t,x)$ regular enough defined on $[0,T]$ with condition $v(T,x) = 0$ , I want to find a solution of $$ \partial_{t}v(t,x) -\frac{1}{2}\left(\partial_{x}v(t,x)\right)^2 + 2x\partial_{x}v(t,x) + \frac{1}{2}x^2 = 0 $$ I have a small background in ode but I heard that such equations are simple to solve. My attempt is the following : I start with a function of two variables of the form $v(t,x)=\phi(t)x^2$ that can be differentiated easily : $$ \partial_{t}v(t,x) = \phi’(t)x^2\quad\text{and}\quad\partial_{x}v(t,x) = 2\phi(t)x $$ The pde becomes $$ \phi’(t)x^2 + \frac{1}{2}x^2 + 4\phi(t)x^2 -2(\phi(t)x)^2 = 0 $$ Then I should solve $$ \phi’(t)+ \frac{1}{2}+ 4\phi(t)-2\phi(t)^2 = 0 $$ With terminal condition $\phi(T) = 0$ . However I don’t know how to solve this ode. It seems to be closely related to the Riccati equation but the constant term does not allow me to do an interesting change of varia
|
Hint. The differential equation $\phi'(t)+ \frac{1}{2}+ 4\phi(t)-2\phi(t)^2 = 0$ is separable. The solution that satisfies the condition $\phi(T)=0$ is given by $$ \int_{\phi}^0\frac{du}{2u^2-4u-\frac{1}{2}}=\int_t^Tds=T-t. $$ Addendum The differential equation $\phi'(t)+ \frac{1}{2}+ 4\phi(t)-2\phi(t)^2 = 0$ is a Riccati equation. It can be transformed into a linear second order ODE with the substitution $\phi=-\frac{v'}{2v}$ : $$ -\frac{v''}{2v}+\frac{(v')^2}{2v^2}+\frac{1}{2}-\frac{2v'}{v}-\frac{(v')^2}{2v^2}=0 \implies v''+4v'-v=0. $$
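The correspondence between the Riccati equation and the linear ODE can be checked numerically: take any explicit solution of $v''+4v'-v=0$ (the roots of $r^2+4r-1=0$ are $-2\pm\sqrt5$), set $\phi=-v'/(2v)$, and verify that the Riccati residual vanishes (a sketch; the particular solution and the grid are arbitrary choices):

```python
import math

r1, r2 = -2 + math.sqrt(5), -2 - math.sqrt(5)     # roots of r^2 + 4r - 1 = 0
v   = lambda t: math.exp(r1*t) + math.exp(r2*t)   # solves v'' + 4v' - v = 0
vp  = lambda t: r1*math.exp(r1*t) + r2*math.exp(r2*t)
phi = lambda t: -vp(t) / (2 * v(t))               # the substitution phi = -v'/(2v)

h = 1e-5
residuals = []
for i in range(1, 100):
    t = i / 100
    phi_prime = (phi(t + h) - phi(t - h)) / (2*h)   # central difference
    # residual of phi' + 1/2 + 4 phi - 2 phi^2 = 0
    residuals.append(abs(phi_prime + 0.5 + 4*phi(t) - 2*phi(t)**2))
```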
|
|ordinary-differential-equations|analysis|partial-differential-equations|hamilton-jacobi-equation|
| 1
|
Calculate Laurent series for $1/ \sin(z)$
|
How can I calculate the Laurent series for $$f(z)=1/\sin(z)\,?$$ I searched for it and found only the final result; is there a simple way to explain it?
|
Are you looking for this kind of expansion: Laurent series 1/sin(z) ? For $z\ne k\pi\:(k\in\mathbb Z)$ , the expansion reads \begin{align} \frac1{\sin(z)}&=\sum_{k=-\infty}^\infty\frac{z(-1)^k}{z^2-k^2\pi^2}. \end{align} Its proof can be found here: Lobachevsky Integral Formula .
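The expansion is easy to test numerically with symmetric partial sums, pairing the terms $+k$ and $-k$ (a sketch; the test point and the truncation level are arbitrary choices):

```python
import math, cmath

z = 1.0 + 0.5j        # arbitrary test point away from the poles k*pi
K = 4000
s = 1 / z             # the k = 0 term, z / z^2
for k in range(1, K + 1):
    sign = -1 if k % 2 else 1
    # the terms for +k and -k are equal, so add them together
    s += 2 * sign * z / (z**2 - (k * math.pi)**2)
target = 1 / cmath.sin(z)
```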
|
|complex-numbers|laurent-series|
| 0
|
Can anyone come up with a method for integrating product of logarithms?
|
So recently I’ve been studying integrals of products of logarithms $$\int \prod_{n=1}^N \log(x+a_n) dx$$ Can anyone come up with general method for dealing with such integral? Let's start simple, with N = 2. $$ \int \frac{\log(z+m)}{z} dz = \int \frac{\log(1-u)+\log m}{u} du = -Li_2(\frac{-z}{m}) + \log m\log z + C$$ Then we apply integration by parts to $\int \log(x+a)\log(x+b) dx$ , setting $u = \log(x+b)$ and $v = (x+a)\log(x+a)-(x+b)$ . Well, $x+a$ could be written as $(x+b) + (a-b)$ . Yielding $$\int \log(x+a)\log(x+b) dx = (x+a)\log(x+a)\log(x+b) - (x+a)\log(x+a) - (x+b)\log(x+b) + 2x + (b-a)\int\frac{\log(x+a)}{x+b}dx$$ $$=(a-b)Li_2(\frac{x+b}{b-a}) +(x+a)\log(x+a)\log(x+b) - (x+a)\log(x+a) $$ $$-(x+(a-b)\log(a-b)+b)\log(x+b) + 2x + C$$ The case $N = 3$ is a lot more complicated. I wasn't able to come up with a solution until recently. WolframAlpha was able to give solution but cannot provide any human readable derivation to its solution. My solution to $N=3$ goes like this. We
|
$$I(u) = \int \prod_{n=u}^N \ln(x+a_n) \,\,dx $$ By integration by parts: $$ I(u) = \ln(x+a_u)I(u+1)-\int \frac{I(u+1)}{x+a_u}dx = \ln(x+a_u)\left(\ln(x+a_{u+1})I(u+2)-\int \frac{I(u+2)}{x+a_{u+1}}dx\right)-\int \frac{\ln(x+a_{u+1})I(u+2)-\int \frac{I(u+2)}{x+a_{u+1}}dx }{x+a_u}dx $$ And we could expand this all the way to $N$ , but we would have $2^N$ $I$-functions in our equation, which can be a lot. I'd say this is the idea.
|
|calculus|indefinite-integrals|computer-algebra-systems|
| 0
|
Finding the harmonic conjugate of the function
|
$u(x,y)=e^{x^2-y^2}(e^y \cos(x-2xy)+ e^{-y} \cos(x+2xy))$ Solving the Cauchy-Riemann equations for this is not practical. Alternatively, I could try to express $u(x,y)$ in the form of the real or complex part of a (analytic) function in $z$ . In doing that, I either observe it right away, or put in $x=\frac{z+\bar z}{2}$ and $y=\frac{z-\bar z}{2i}$ . But that turned really messy and the $z$ 's and $\bar z$ 's in the coefficients of the exponential functions do not reduce too give the desired form. I would like to ask what else can I do here to find the harmonic conjugate?
|
Given that $u$ is harmonic, you can simply replace $x$ with $z$ and $y$ with $0$ to get an analytic function which agrees with $u$ on the real axis, and hence everywhere: $$ f(z) = 2 e^{z^2} \cos z . $$ Now just take the imaginary part of $f(x+iy)$ .
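Both claims — that $\operatorname{Re}f=u$ and that $v=\operatorname{Im}f$ satisfies the Cauchy–Riemann equations with $u$ — can be checked numerically (a sketch; the sample points and step size are arbitrary choices):

```python
import math, cmath

def u(x, y):
    return math.exp(x*x - y*y) * (math.exp(y) * math.cos(x - 2*x*y)
                                  + math.exp(-y) * math.cos(x + 2*x*y))

f = lambda z: 2 * cmath.exp(z*z) * cmath.cos(z)
v = lambda x, y: f(complex(x, y)).imag     # candidate harmonic conjugate

h = 1e-5
pts = [(0.3, -0.4), (0.0, 0.7), (-0.8, 0.2)]   # arbitrary sample points
re_err = max(abs(f(complex(x, y)).real - u(x, y)) for x, y in pts)
# Cauchy-Riemann: u_x = v_y and u_y = -v_x, via central differences
cr_err = max(
    max(abs((u(x+h, y) - u(x-h, y)) / (2*h) - (v(x, y+h) - v(x, y-h)) / (2*h)),
        abs((u(x, y+h) - u(x, y-h)) / (2*h) + (v(x+h, y) - v(x-h, y)) / (2*h)))
    for x, y in pts)
```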
|
|complex-analysis|harmonic-analysis|harmonic-functions|
| 1
|
Expected number of passengers in a bus, if both bus and passengers arrival time have a Poisson distribution.
|
Here's the full question In Poisson Bus City, there is a shuttle bus that goes between Stop A and Stop B, with no stops in between. The times at which the bus arrives at Stop A are a Poisson point process with one bus arriving every five minutes on average, day and night, at which point it immediately picks up all passengers waiting. Citizens of Poisson Bus City arrive at Stop A at Poisson random times, with an average of 5 passengers arriving every minute, and board the next bus that arrives. Suppose that you visit this city and that you arrive at Stop A at a time chosen uniformly at random from the times in a day. How many citizens of Poisson Bus City do you expect to be on the bus that you take? Please comment on the solution below. As I arrive in the middle of an interval, anyone who arrives at Stop A during that interval would get on the same bus as me. I've also worked out that the expected length of interarrival interval is $5$ minutes. From here, I'm thinking of using the follo
|
This solution is incorrect, because this statement is wrong: I've also worked out that the expected length of interarrival interval is 5 minutes. If you choose random interarrival intervals and take the average length of the intervals, then yes, you will get an expected length of $5$ minutes. However, that's not what we're doing: We're choosing a uniformly random time , not a random interarrival interval, and then looking at the interarrival interval which that random time ends up in. Since longer intervals take more time, if we choose times uniformly randomly, it is more likely to choose intervals which take longer than $5$ minutes than it is to choose intervals which take less than $5$ minutes, so the expected length of the interval is actually greater than $5$ minutes. In fact, if we choose a uniformly random time, then the time since the last bus arrived and the time until the next bus arrives are both exponential distributions (each with mean $5$ minutes). Therefore, the expected time since the last bus arri
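A simulation makes the size bias concrete: the interval you land in has expected length $10$ minutes, not $5$, so you expect about $5\cdot 10 = 50$ fellow passengers (a sketch; the sample sizes are arbitrary choices):

```python
import bisect, random
random.seed(2)

BUS_RATE, PASS_RATE = 1/5, 5     # buses per minute, passengers per minute
gaps = [random.expovariate(BUS_RATE) for _ in range(200_000)]
cum, acc = [], 0.0
for g in gaps:                   # prefix sums of the bus interarrival gaps
    acc += g
    cum.append(acc)

def poisson(lam):
    """Sample Poisson(lam) by counting unit-rate exponential arrivals."""
    t, k = random.expovariate(1.0), 0
    while t < lam:
        k += 1
        t += random.expovariate(1.0)
    return k

picked_gaps, passengers = [], []
for _ in range(20_000):
    u = random.uniform(0.0, cum[-1])      # arrive at a uniformly random time
    i = bisect.bisect_left(cum, u)        # the gap containing that time (size-biased)
    picked_gaps.append(gaps[i])
    passengers.append(poisson(PASS_RATE * gaps[i]))

mean_gap = sum(picked_gaps) / len(picked_gaps)          # about 10, not 5
mean_passengers = sum(passengers) / len(passengers)     # about 50
```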
|
|solution-verification|random-variables|poisson-distribution|poisson-process|
| 1
|
Give an example of two 3-manifolds with different second de rham groups
|
I am asked to construct two 3-manifolds $M_1$ , $M_2$ both covered by two open sets $U$ , $V$ (different for each manifold) s.t. the intersection is diffeomorphic to $\mathbb S^1 \times\mathbb R^2$ but the second de Rham group of $M_1$ and $M_2$ is trivial and not trivial respectively. My thought so far is to take $M_1=\mathbb S^2 \times\mathbb R$ and $M_2=\mathbb S^1 \times\mathbb R^2$ but I can't work out the part about the intersection.
|
For $M_2$ , finding $U$ and $V$ is pretty obvious so I guess you need help for $M_1$ . For this, consider a "latitude map" $z : \mathbb{S}^2 \rightarrow [-1,1]$ sending the south pole to $-1$ , the north pole to $1$ and any point of the equator to $0$ . Then $U = \{z < 1/2\} \times \mathbb{R}$ and $V = \{z > -1/2\} \times \mathbb{R}$ fit, because $U \cap V = \{-1/2 < z < 1/2\} \times \mathbb{R}$ and $\{-1/2 < z < 1/2\}$ is a strip around the equator, thus homeomorphic to $\mathbb{S}^1 \times \mathbb{R}$ . I suggest you draw a picture of what happens on $\mathbb{S}^2$ ; it is pretty obvious when you visualize it.
|
|differential-topology|
| 1
|
Is addition a term-function in this structure?
|
This is a follow-up to my previous model theory question, here: Is addition definable from successor and multiplication? . I asked whether addition is definable by a first-order formula in the structure $(\mathbb{N};\times,S,0,1)$ in that question, and found that that question was asked previously. However, now I have a different question. Is addition a term function in that structure? A term function is a function composed from projection functions and the functions and constants that are in the structure. I think it isn't, but I want a proof that it is not.
|
No, it is not. Suppose $t$ is a term such that $t(x,y)\equiv x+y$ . WLOG we may assume that $t$ has no instance of " $0\times$ " or " $\times 0$ " occurring in it. Clearly both variables $x$ and $y$ must actually occur in $t$ . But now we can show by induction on complexity that $\forall x,y[t(x,y)\ge xy]$ , which isn't true for $x+y$ .
|
|model-theory|natural-numbers|universal-algebra|
| 1
|
The positive Laplacian is indeed the negative Laplacian
|
I know this question sounds like a joke. And it probably is:). I found it kind of annoying, but also interesting, to call $-\Delta=-\sum_{j=1}^n\partial^2_{jj}$ "the positive Laplacian" as it is the positive operator with respect to the standard $L^2$ -pairing. But at the same time it is also regarded as the "negative of the Laplacian" since we have a minus sign. I think there are two different questions related to this topic. What is a clearer way to explain the difference between what positive and negative mean for a Laplacian? And also how to use such terminologies appropriately in different circumstances. I believe such clarification is important, as the Hodge-Laplacian $\Box=dd^*+d^*d$ (let me not use $\Delta$ here) can be called a "positive Laplacian" with slightly less ambiguity. Also the word "subharmonic functions" refers to those $f$ such that $-\Delta f\le0$ , but not $\Delta f\le0$ . Do other similar (but also interesting) confusions arise in other fields of mathematics? If I remem
|
We call $-\Delta$ the positive Laplacian since it is positive semi-definite as a self-adjoint operator on, say, $C_c^\infty({\mathbb R}^n)$ , the space of functions with compact support under the $L^2$ inner product. That is, if $f, g\in C_c^\infty({\mathbb R}^n)$ , then \begin{align*} \langle -\Delta f, g\rangle &= \int_{{\mathbb R}^n} (-\Delta f) g = \int_{{\mathbb R}^n} f(-\Delta g) = \langle f, -\Delta g\rangle,\\ \langle -\Delta f, f\rangle &= \int_{{\mathbb R}^n} (-\Delta f) f = \int_{{\mathbb R}^n} |\nabla f|^2 \geq 0, \end{align*} where the identities hold by integration by parts and there are no boundary terms by the compact support condition. Then the eigenvalues of $-\Delta$ are nonnegative. The Hodge-Laplacian $\square = dd^* + d^*d$ is, by construction, self-adjoint and positive semi-definite. Actually on a function $f$ on a compact manifold, $$ \square f = - \Delta f. $$ A $C^2$ function is subharmonic iff $\Delta f\geq 0$ , and the reason is that such function lies below
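The integration-by-parts identity $\langle -\Delta f, f\rangle = \int|\nabla f|^2 \ge 0$ can be checked in one dimension with finite differences on a compactly supported bump (a sketch; the grid spacing and the particular bump are arbitrary choices):

```python
import math

def bump(x):
    """Smooth function supported on (-1, 1), zero elsewhere."""
    t = 1.0 - x*x
    return math.exp(-1.0 / t) if t > 1e-12 else 0.0

h = 1e-3
xs = [-1 + i*h for i in range(1, int(round(2/h)))]        # interior grid
fp  = [(bump(x + h) - bump(x - h)) / (2*h) for x in xs]             # f'
fpp = [(bump(x + h) - 2*bump(x) + bump(x - h)) / (h*h) for x in xs] # f''

lhs = h * sum(-fpp_i * bump(x) for fpp_i, x in zip(fpp, xs))   # <-f'', f>
rhs = h * sum(d*d for d in fp)                                 # ∫ |f'|^2 >= 0
```

The two Riemann sums agree up to discretization error, and both are nonnegative, matching the sign convention that makes $-\Delta$ positive semi-definite.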
|
|calculus|analysis|education|laplacian|
| 0
|
Solving an exact IVP and solution domain
|
$e^{2y}+x^2+2xe^{2y}\frac{dy}{dx}=0 \\ y(-1)=1$ This IVP is exact and: \begin{align*} \frac{d(e^{2y}x + \frac{x^3}{3})}{dx}=0 \\ \iff e^{2y}x + \frac{x^3}{3}=k; \ k \in \mathbb{R} \\ \underbrace{\iff}_{y(-1)=1} e^{2y}x + \frac{x^3}{3}=-e^2-\frac{1}{3} \end{align*} $$e^{2y}x + \frac{x^3}{3}=-e^2-\frac{1}{3} \\ \iff e^{2y}x = -\frac{x^3}{3}-e^2-\frac{1}{3} \\ \iff e^{2y} = \frac{-\frac{x^3}{3}-e^2-\frac{1}{3}}{x} \\ \iff y=\frac{1}{2}\log(\frac{-\frac{x^3}{3}-e^2-\frac{1}{3}}{x}); D_{y}=]{\sqrt[3]{-e^2-\frac{1}{3}}}, 0[ $$ Is the solution and its domain correct?
|
Your solution for the $y$ value is correct. Maybe correct your working for the domain, because: $-\dfrac{x^3}{3}-e^2-\dfrac13=0 \implies x^3=-3e^2-1$ If $\dfrac{-\dfrac{x^3}{3}-e^2-\dfrac13}{x}$ is $0$ or undefined then the natural log of that is undefined. This means an open interval rather than closed, i.e. $D_y=\left(\sqrt[3]{-3e^2-1},0\right)$
|
|ordinary-differential-equations|initial-value-problems|
| 0
|
Solving an equation analytically of the type $n \cdot x^{n - 1} \cdot y + \frac{x^n}{z} = w$ for n
|
I am currently faced with solving the following equation for my thesis. I am looking for a symbolic answer to n $$ k \left( n \cdot d_{\text{rz}}^{n-1} \cdot \frac{\partial d_{\text{rz}}}{\partial r} + \frac{d_{\text{rz}}^n}{r} \right) = \rho \left( \frac{\partial v_z}{\partial r} \cdot v_r + \frac{\partial v_z}{\partial z} \cdot v_z \right)$$ So this equation can be written for simplification purposes as: $$n \cdot x^{n - 1} \cdot y + \frac{x^n}{z} = w$$ Now I would like to solve this equation analytically for n. Ideas that I have are trying to extract n by: Take the logarithm of both sides and apply various rules for mathematical operations using logarithms Use the Lambert function and attempt to write the equation in an adequate form respectively I have namely tried both methods, unfortunately to no avail. I have previously used the Lambert function to extract n from $n \cdot x^{n - 1} = y$ . This has worked but applying this methodology to my case, namely $n \cdot x^{n - 1} \cdot y
|
more detailed: $$nx^{n-1}y+\frac{x^n}{z}=w$$ $$\frac{nye^{n\ln(x)}}{x}+\frac{e^{n\ln(x)}}{z}=w$$ $$\left(\frac{y}{x}n+\frac{1}{z}\right)e^{n\ln(x)}=w$$ $$\frac{x}{y}\ln(x)\left(\frac{y}{x}n+\frac{1}{z}\right)e^{n\ln(x)}=\frac{x}{y}\ln(x)w$$ $$(n\ln(x)+\frac{x}{yz}\ln(x))e^{n\ln(x)}=\frac{x}{y}\ln(x)w$$ $$(n\ln(x)+\frac{x}{yz}\ln(x))e^{n\ln(x)+\frac{x}{yz}\ln(x)}=\frac{x}{y}\ln(x)we^{\frac{x}{yz}\ln(x)}$$ $$n\ln(x)+\frac{x}{yz}\ln(x)=W\left(\frac{x}{y}\ln(x)we^{\frac{x}{yz}\ln(x)}\right)$$ $$n=\frac{1}{\ln(x)}W\left(\frac{x}{y}\ln(x)we^{\frac{x}{yz}\ln(x)}\right)-\frac{x}{yz}$$ $$n=\frac{1}{\ln(x)}W\left(\frac{x}{y}\ln(x)wx^\frac{x}{yz}\right)-\frac{x}{yz}$$
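The closed form can be round-tripped numerically: build $w$ from chosen $x,y,z$ and a known $n$, then recover $n$ from the formula (a sketch; the Newton-based Lambert $W$ and the sample values are arbitrary choices):

```python
import math

def lambert_w(a, tol=1e-14):
    """Principal branch of w*e^w = a via Newton's method (assumes a > 0)."""
    w = math.log(a) if a > math.e else a / math.e   # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - a) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Forward: choose x, y, z and a known n, and build w from the original equation
x, y, z, n_true = 2.0, 3.0, 5.0, 4.0
w = n_true * x**(n_true - 1) * y + x**n_true / z

# Backward: recover n from the closed form derived above
arg = (x / y) * math.log(x) * w * x**(x / (y * z))
n_rec = lambert_w(arg) / math.log(x) - x / (y * z)
```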
|
|linear-algebra|algebra-precalculus|lambert-w|
| 1
|
A Challenging Euler Sum $\sum\limits_{n=1}^\infty \frac{H_n}{\tbinom{2n}{n}}$
|
Recently, I encountered a strange series involving Harmonic Numbers and Binomial Coefficients both. According to Mathematica : $$\displaystyle \sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}} = -\frac{2\sqrt{3} \pi}{27}(\log (3)-2)+\frac{2}{27} \left( \psi_1 \left( \frac{1}{3}\right)-\psi_1 \left(\frac{2}{3} \right)\right)$$ Here $\psi_n(z)$ denotes the Polygamma Function . Can anybody provide a nice proof of the above statement? My Failed Attempt Using the Beta-function identity, $$\frac{1}{\binom{2n}{n}}=(2n+1)\int_0^1 y^n(1-y)^n \ dy$$ $$\displaystyle \begin{aligned} \sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}} &= \sum_{n=1}^\infty (2n+1)H_n \int_0^1 (y-y^2)^n dy \\ &= \int_0^1 \sum_{n=1}^\infty (2n+1)H_n (y-y^2)^n \ dy \end{aligned}$$ Here, I used the identity $$\sum_{n=1}^\infty (2n+1)H_n t^n=\frac{2t-(1+t)\log(1-t)}{(t-1)^2}\quad |t|<1$$ and got $$\sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}}=\int_0^1 \frac{2y-2y^2-(1+y-y^2)\log(y^2-y+1)}{(y^2-y+1)^2}dy$$ How should I continue from here? I t
|
I'll leave this unfinished for now; I still need to figure out the last two integrals. $$\displaystyle{\sum\limits_{n=1}^{+\infty }{\frac{H_{n}}{\binom{2n}{n}}}=\sum\limits_{n=1}^{+\infty }{H_{n}\cdot \frac{\left( n! \right)^{2}}{\left( 2n \right)!}}=\sum\limits_{n=1}^{+\infty }{\left( 2n+1 \right)H_{n}\cdot \frac{\Gamma \left( n+1 \right)\Gamma \left( n+1 \right)}{\Gamma \left( 2n+2 \right)}}=\sum\limits_{n=1}^{+\infty }{\left( 2n+1 \right)H_{n}\cdot \beta \left( n+1,n+1 \right)}}$$ $$\displaystyle{=\sum\limits_{n=1}^{+\infty }{\left( 2n+1 \right)H_{n}\cdot \int\limits_{0}^{1}{t^{n}\left( 1-t \right)^{n}dt}}=\int\limits_{0}^{1}{\sum\limits_{n=1}^{+\infty }{\left( 2n+1 \right)H_{n}\left( t-t^{2} \right)^{n}}dt}}$$ $$\displaystyle{=\int\limits_{0}^{1}{\left( -2x\cdot \frac{d}{dx}\left( \frac{\log \left( 1-x \right)}{1-x} \right)-\frac{\log \left( 1-x \right)}{1-x} \right)\Bigg|_{x=t-t^{2}}dt}=\int\limits_{0}^{1}{\left( 2x\cdot \frac{1-\log \left( 1-x \right)}{\left( 1-x \right)^{2}}-\frac{\log \left( 1-x \right)}{1-x} \right)\Bigg|_{x=t-t^{2}}dt}}$$
|
|sequences-and-series|special-functions|definite-integrals|closed-form|
| 0
|
Find prime numbers satisfying an equation
|
Find all triplets $(m, n, p)$ , where $p$ is a prime number and $m, n ∈ \Bbb N$ , such that $p=\frac{m}{4}\sqrt{{2n-m \over 2n+m}}$ My procedure is as follows: $p=\frac{m}{4}{\sqrt{(2n)^2-m^2}\over 2n+m}$ It can be shown that m cannot be odd, so if $m=2k$ , $p=\frac{k}{2}{\sqrt{n^2-k^2}\over n+k}$ If $l=\sqrt{n^2-k^2}$ then $(k,l,n)$ form a Pythagorean triplet giving the two equations $p=\frac{kl}{2(n+k)}, k^2+l^2=n^2$ I do not know how to proceed after this. Using a python script, I think the only solutions are $(24,15,2),(24,20,3)$ and $(30,39,5)$ . How do I prove this? Any help would be appreciated, thanks.
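A brute-force script of the kind mentioned in the question might look as follows. This is a sketch: the search bound $200$ is an arbitrary choice of mine, and the equation is tested in the equivalent integer form $p^2 = m^2(2n-m)/(16(2n+m))$.

```python
from math import isqrt

def is_prime(q):
    # trial division, fine for the small values that arise here
    if q < 2:
        return False
    return all(q % d for d in range(2, isqrt(q) + 1))

solutions = []
for m in range(1, 201):
    for n in range(1, 201):
        num, den = 2 * n - m, 2 * n + m
        if num <= 0:
            continue
        # p^2 = m^2 (2n - m) / (16 (2n + m)) must be the square of a prime
        val = m * m * num
        if val % (16 * den):
            continue
        p2 = val // (16 * den)
        p = isqrt(p2)
        if p * p == p2 and is_prime(p):
            solutions.append((m, n, p))

print(solutions)  # → [(24, 15, 2), (24, 20, 3), (30, 39, 5)]
```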
|
Let $m$ and $n$ be positive integers, and $p$ a prime number, such that $$p=\frac{m}{4}\sqrt{\frac{2n-m}{2n+m}}.$$ Then squaring and clearing denominators yields $$16p^2(2n+m)=m^2(2n-m).$$ Note that $m$ is even, say $m=2k$ for some positive integer $k$ , and so $$4p^2(n+k)=k^2(n-k).$$ Let $d:=\gcd(k,n)$ so that $k=da$ and $n=db$ with $\gcd(a,b)=1$ . Then also $$4p^2(b+a)=d^2a^2(b-a).\tag{1}$$ Of course $a^2$ is coprime to $b+a$ and so $a^2$ divides $4p^2$ , meaning that $a$ divides $2p$ . That is to say $a\in\{1,2,p,2p\}$ . If $a=1$ then equation $(1)$ reduces to $$4p^2(b+1)=d^2(b-1).$$ Of course $\gcd(b+1,b-1)$ divides $2$ , so they are either both perfect squares, or both twice perfect squares. The former is impossible as no two perfect squares differ by $2$ . In the latter case we easily find that $b=1$ , but then the right hand side vanishes, a contradiction. If $a=2$ then equation $(1)$ reduces to $$p^2(b+2)=d^2(b-2).$$ Now $\gcd(b+2,b-2)=1$ because $b$ is odd, and so $b+2$ and $b
|
|number-theory|discrete-mathematics|prime-numbers|diophantine-equations|pythagorean-triples|
| 1
|
A Challenging Euler Sum $\sum\limits_{n=1}^\infty \frac{H_n}{\tbinom{2n}{n}}$
|
Recently, I encountered a strange series involving Harmonic Numbers and Binomial Coefficients both. According to Mathematica : $$\displaystyle \sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}} = -\frac{2\sqrt{3} \pi}{27}(\log (3)-2)+\frac{2}{27} \left( \psi_1 \left( \frac{1}{3}\right)-\psi_1 \left(\frac{2}{3} \right)\right)$$ Here $\psi_n(z)$ denotes the Polygamma Function. Can anybody provide a nice proof of the above statement? My Failed Attempt Using the Beta-function identity, $$\frac{1}{\binom{2n}{n}}=(2n+1)\int_0^1 y^n(1-y)^n \ dy$$ $$\displaystyle \begin{aligned} \sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}} &= \sum_{n=1}^\infty (2n+1)H_n \int_0^1 (y-y^2)^n dy \\ &= \int_0^1 \sum_{n=1}^\infty (2n+1)H_n (y-y^2)^n \ dy \end{aligned}$$ Here, I used the identity $$\sum_{n=1}^\infty (2n+1)H_n t^n=\frac{2t-(1+t)\log(1-t)}{(t-1)^2}\quad |t|<1$$ and got $$\sum_{n=1}^\infty \frac{H_n}{\binom{2n}{n}}=\int_0^1 \frac{2y-2y^2-(1+y-y^2)\log(y^2-y+1)}{(y^2-y+1)^2}dy$$ How should I continue from here?
|
Otherwise, without the Polygamma function. Lemma 1: $$\displaystyle{g\left( x \right) = \sum\limits_{n = 1}^\infty {\frac{{{{\left( {n!} \right)}^2}}}{{\left( {2n} \right)!}}{x^n}} = \frac{{x\sqrt {4 - x} + 4 \cdot \sqrt x \cdot \arcsin \left( {\frac{{\sqrt x }}{2}} \right)}}{{\left( {4 - x} \right)\sqrt {4 - x} }}}$$ It follows elementarily from the series $$\displaystyle{\sum\limits_{n = 0}^\infty {\frac{{{2^{2n}}{{\left( {n!} \right)}^2}{z^{2n + 2}}}}{{\left( {n + 1} \right)\left( {2n + 1} \right)!}}} = {\arcsin ^2}z}$$ https://en.wikipedia.org/wiki/List_of_m ... cal_series by differentiating twice, namely: $$\displaystyle{\sum\limits_{n = 0}^\infty {\frac{{{2^{2n}}{{\left( {n!} \right)}^2}{z^{2n + 2}}}}{{\left( {n + 1} \right)\left( {2n + 1} \right)!}}} = {\arcsin ^2}z \Rightarrow \sum\limits_{n = 0}^\infty {\frac{{{2^{2n}}{{\left( {n!} \right)}^2}{z^{2n + 1}}}}{{\left( {2n + 1} \right)!}}} }$$ $$\displaystyle{ = \frac{{\arcsin z}}{{\sqrt {1 - {z^2}} }} \Rightarrow \sum\limits_{n = 0
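Lemma 1 can be checked numerically before relying on it. A sketch (the function names are mine, and the lemma's variable is taken to be $x$ on both sides):

```python
import math

def g_series(x, terms=60):
    # left side of Lemma 1: sum_{n>=1} (n!)^2 / (2n)! * x^n = sum x^n / C(2n, n)
    return sum(x**n / math.comb(2 * n, n) for n in range(1, terms))

def g_closed(x):
    # right side of Lemma 1, valid for 0 < x < 4
    return (x * math.sqrt(4 - x) + 4 * math.sqrt(x) * math.asin(math.sqrt(x) / 2)) \
           / ((4 - x) * math.sqrt(4 - x))

for x in (0.25, 0.5, 1.0, 2.0):
    assert abs(g_series(x) - g_closed(x)) < 1e-12
print("Lemma 1 verified")  # → Lemma 1 verified
```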
|
|sequences-and-series|special-functions|definite-integrals|closed-form|
| 0
|
A consequence of the invariance of domain by Brouwer
|
Theorem (invariance of domain). If $U$ is an open set of $\mathbb{R}^n$ and $f:U\to \mathbb{R}^n$ is a continuous and injective map, then $f(U)$ is open. Consequence . If $U$ is a non-empty open set in $\mathbb{R}^n$ , then a continuous and injective map $f:U\to \mathbb{R}^m$ cannot exist if $m<n$ . In particular, $\mathbb{R}^n$ is homeomorphic to $\mathbb{R}^m$ iff $m=n$ . Proof . Suppose, for contradiction, that $f:U\to \mathbb{R}^m$ is continuous and injective, and take the embedding $\varphi: \mathbb{R}^m \to \mathbb{R}^n$ defined by $\varphi(\mathbf{x})=(\mathbf{x},0,...,0)$ . Clearly, $\varphi$ is continuous and injective. Let's define $g=\varphi \circ f$ . So, $g$ is continuous and injective. The previous theorem gives us that $g(U)$ is open in $\mathbb{R}^n$ . But this is not possible because $\forall \mathbf{y}\in g(U)$ , $\forall V$ open neighbourhood of $\mathbf{y}$ in $g(U)$ , we have that $V\cap (\mathbb{R}^n\setminus \varphi(\mathbb{R}^m))\ne \emptyset$ . It's all clear except this last claim: why must every such neighbourhood $V$ meet the complement of $\varphi(\mathbb{R}^m)$ ?
|
$u = (u_1, ..., u_n) \in U \subseteq \mathbb{R}^n \implies f(u) = ((fu)_1,..., (fu)_m) \in f(U) \subseteq \mathbb{R}^m \implies g(u) = ((fu)_1,..., (fu)_m,0,...,0) \in g(U) \subseteq \mathbb{R}^n$ . Now let $y \in g(U)$ and consider any open ball $B(y, r)$ : since $m < n$ , the point $y' = (y_1,...,y_m, \frac{r}{2}, 0, ..., 0)$ lies in $B(y, r)$ , yet $y' \notin g(U)$ because its $(m+1)$ -st coordinate is nonzero. So $B(y, r) \nsubseteq g(U)$ , hence $g(U)$ is not open.
|
|general-topology|solution-verification|algebraic-topology|
| 1
|
Is there a simple geometric proof of why two simple harmonic oscillators draw a circle?
|
We all know that a circle can be drawn with the trigonometric functions $x=\cos(t), y=\sin(t)$ . If we define the sine and cosine functions in terms of triangles (like we do in high school), then this is quite obvious. But then later on in our education, we learn that the solution to a simple harmonic oscillator is the sine function. A weight on an undamped spring goes back and forth following a sine wave over time, and this is the intuition behind a lot of wave motion (like sound waves). However, it's not generally taught why the sine wave solution to simple harmonic motion is the same function as the sine wave defined by triangles or circles. Or in other words, when you take two harmonic oscillators and plot their outputs as $x=\cos(t), y=\sin(t)$ , why should they make a perfect circle? More specifically: it's intuitive enough that they must form a shape that makes a full loop of some kind. But why does it happen to be a perfect circle , as opposed to an alternative shape like an ellipse or some other oval?
|
Is there a simple geometric proof of why two simple harmonic oscillators draw a circle? No, there is no such proof, because the statement is not always true. If the two oscillators oscillate with different amplitudes, the resulting figure is an ellipse, not a circle. If they oscillate at two different frequencies related by a ratio of natural numbers, the resulting figure is a Lissajous curve . A circle arises only in the special case of equal amplitudes, equal frequencies, and a quarter-period phase difference.
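For the circular special case (equal amplitudes and frequencies, a quarter period apart), one can at least check numerically that the orbit stays on the unit circle. The sketch below integrates the two uncoupled oscillators with a hand-rolled classical Runge-Kutta step; all names are mine.

```python
def rk4_step(f, s, h):
    # one classical fourth-order Runge-Kutta step for the system s' = f(s)
    k1 = f(s)
    k2 = f([si + h / 2 * ki for si, ki in zip(s, k1)])
    k3 = f([si + h / 2 * ki for si, ki in zip(s, k2)])
    k4 = f([si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

w = 1.0  # common angular frequency (the equal-frequency assumption)

def deriv(s):
    # two uncoupled oscillators: x'' = -w^2 x and y'' = -w^2 y
    x, vx, y, vy = s
    return [vx, -w * w * x, vy, -w * w * y]

# x starts at maximum displacement, y a quarter period behind,
# so analytically x = cos(wt) and y = sin(wt)
s = [1.0, 0.0, 0.0, w]
h, worst = 0.01, 0.0
for _ in range(2000):
    s = rk4_step(deriv, s, h)
    worst = max(worst, abs(s[0] ** 2 + s[2] ** 2 - 1.0))
print(worst)  # stays numerically tiny: the orbit never leaves the unit circle
```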
|
|calculus|geometry|trigonometry|taylor-expansion|intuition|
| 0
|
Show that $\mathbb E(X_n) \xrightarrow{n \to \infty} \mathbb E(X)$ using the following decomposition.
|
Suppose that each $X_n$ and $X$ are non-negative random variables and $X_n \stackrel d \to X.$ Assume that $\{X_n \}_{n \geq 1}$ is uniformly integrable. Prove the following decomposition $$\mathbb E(X_n) = \int_{0}^{M} \mathbb P (M \geq X_n \gt t)\ dt + \int_{X_n \gt M} X_n\ d \mathbb P.$$ Hence, show that $\mathbb E(X_n)$ converges to $\mathbb E(X).$ Argue that the same is true even if we did not assume non-negativity. I am able to show the decomposition. I also showed that if $\{X_n\}_{n \geq 1}$ is a sequence of random variables (not necessarily non-negative) and $X_n \stackrel d \to X$ then $X_n^+ \stackrel d \to X^+$ and $X_n^- \stackrel d \to X^-.$ I also showed that if $\{X_n \}_{n \geq 1}$ is uniformly integrable then so are $\{X_n^+ \}_{n \geq 1}$ and $\{X_n^- \}_{n \geq 1}.$ So if we can show that $\mathbb E(X_n) \to \mathbb E(X)$ for a non-negative, uniformly integrable sequence $\{X_n \}_{n \geq 1}$ , I can extend the same idea to the general case.
|
Recall that one of the definitions of uniform integrability (people use a lot of different ones, so if you have one in mind you have to specify) is that for any $\epsilon > 0$ , there is some $M$ such that $$\mathbb E[|X_n| \cdot 1_{|X_n| > M}] = \int_{|X_n| > M} |X_n|\, d\mathbb P < \epsilon \quad \text{for all } n.$$ As you said, we may take $X_n$ nonnegative without loss of generality, so that we can drop the absolute values above. We first show that $\mathbb E[X] < \infty$ : by Fatou, $$\mathbb E[X] = \int_0^\infty \lim_n \mathbb P(X_n > t)\,dt \leq \liminf_n \int_0^\infty \mathbb P(X_n > t)\,dt = \liminf_n \mathbb E[X_n] < \infty.$$ Now, by the decomposition in the problem, $$ \mathbb E[X] - \mathbb E[X_n] = \int_0^M (\mathbb P(t < X \leq M) - \mathbb P(t < X_n \leq M))\,dt + \int_{X>M} X\, d\mathbb P - \int_{X_n>M} X_n\, d\mathbb P. $$ For any $\epsilon > 0$ , by uniform integrability and the fact that $\mathbb E[X] < \infty$ , we can choose $M$ such that both of the latter integrals are bounded by $\epsilon$ . Then the above reduces to $$ \mathbb E[X] - \mathbb E[X_n] \leq \int_0^M (\mathbb P(t < X \leq M) - \mathbb P(t < X_n \leq M))\,dt + 2\epsilon. $$ Now, by convergence in distribution the integrand tends to $0$ for a.e. $t$ , so by bounded convergence the remaining integral tends to $0$ ; a symmetric bound holds in the other direction, so $\limsup_n |\mathbb E[X] - \mathbb E[X_n]| \leq 2\epsilon$ , and since $\epsilon$ was arbitrary, $\mathbb E[X_n] \to \mathbb E[X].$
|
|probability-theory|uniform-integrability|
| 1
|
Is Euler's identity invalid in this case?
|
In "Introductory Quantum Mechanics" by Richard L. Liboff, question 1.21 part o asks the reader to derive the following sum: $$ \sum_{n=1}^{L} e^{-i\frac{2\pi n}{L}} = 0 $$ The given hint asks the reader to separate $n$ from the other powers and treat the expression as a geometric sum. Using the geometric sum formula, proving that expression is trivial. However, I'm a bit confused as to why using Euler's identity here doesn't work. To elaborate: $$ \sum_{n=1}^{L} \exp{(-i\frac{2\pi n}{L})} = 0 $$ $$ \sum_{n=1}^{L} \exp{(-i2\pi)}^{\frac{n}{L}} = 0 $$ $$ \sum_{n=1}^{L} (1)^{\frac{n}{L}} = 0 $$ Except that last equation is not true. I tried searching around the internet but I couldn't find a reason why I'm not allowed to use $\exp(-i2\pi) = 1$ here. What am I missing?
|
By your reasoning, we could show that all powers are equal to $1$ ; e.g., $$e^z = (e^{2\pi i})^{z/(2\pi i)} = 1^{z/(2\pi i)} = 1.$$ Where is the flaw? Clearly, the second equality is suspect: $$e^{ab} \overset{?}{=} (e^a)^b.$$ Indeed, this isn't even true in the real numbers: $$-8 = (-2)^3 = (-2)^{2(3/2)} = ((-2)^2)^{3/2} = 4^{3/2} = 2^3 = 8.$$
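The fallacy is easy to exhibit numerically. The sketch below contrasts the correct sum of roots of unity with the "replace $e^{-2\pi i}$ by $1$ first" route, where the principal-branch complex power has already forgotten the winding (`L = 7` is an arbitrary choice of mine):

```python
import cmath

L = 7
# correct route: keep the integer n in the exponent of a fixed base
s = sum(cmath.exp(-2j * cmath.pi * n / L) for n in range(1, L + 1))
print(abs(s) < 1e-12)  # True: the L-th roots of unity sum to 0

# fallacious route: replace exp(-2*pi*i) by 1 first, then raise to n/L;
# the principal-branch power makes every term (numerically) equal to 1
naive = sum(cmath.exp(-2j * cmath.pi) ** (n / L) for n in range(1, L + 1))
print(abs(naive - L) < 1e-9)  # True: the naive sum is L, not 0
```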
|
|complex-numbers|summation|
| 0
|
Bound $\|\mathbb{E} [ \boldsymbol{H} (x-y)]\|^2$ for $\mu \preccurlyeq \boldsymbol{H} \preccurlyeq L$
|
Suppose the function $F$ is twice-differentiable, $L$-smooth (Lipschitz gradient) and $\mu$-strongly convex. Let $\boldsymbol{x}(\xi), \boldsymbol{y}(\xi) \in \mathbb{R}^d$ , for a random variable $\xi$ . Denote $\boldsymbol{H}=\int_0^1 \nabla^2 F(\boldsymbol{y}+s(\boldsymbol{x}-\boldsymbol{y})) \mathrm{d} s$ . Then, prove that $$ \mu\langle\mathbb{E}[\boldsymbol{x}-\boldsymbol{y}], \mathbb{E}[\boldsymbol{H}(\boldsymbol{x}-\boldsymbol{y})]\rangle \leq \mathbb{E}\langle\boldsymbol{H}(\boldsymbol{x}-\boldsymbol{y}), \mathbb{E}[\boldsymbol{H}(\boldsymbol{x}-\boldsymbol{y})]\rangle \leq L\langle\mathbb{E}[\boldsymbol{x}-\boldsymbol{y}], \mathbb{E}[\boldsymbol{H}(\boldsymbol{x}-\boldsymbol{y})]\rangle . $$ My approach was to first note that $\mu \boldsymbol{I} \preccurlyeq \boldsymbol{H} \preccurlyeq L \boldsymbol{I}$ . But to be honest, after this I'm completely stuck. Would anyone know how to get this inequality? The definitions of $L$-smoothness and $\mu$-strong convexity are the standard ones.
|
Counterexample to stated claim. Consider the function $f(x) = x \tan^{-1}(x) - (1/2) \log(1+x^2)$ , which is convex and has $f''(x) = 1/(1+x^2)$ . On any bounded domain $[-M, M]$ , then $$ \frac{1}{1 + M^2} \leq f''(x) \leq 1 $$ Thus, we see that $f(x)$ satisfies $\mu = (1+ M^2)^{-1}$ strong convexity and $1$ -smoothness. On the other hand, $$ H(x, y) = \int_0^1 f''(x + t(y-x)) \, dt = f'(y) - f'(x) = \tan^{-1}(y) - \tan^{-1}(x). $$ Thus, let us consider $y \equiv 0$ and $x$ which is equiprobably $\pm 1$ . Then $H(x, y) (x - y) = -\tan^{-1}(x) x$ . As a consequence, $$ \mathbb{E}[H(x, y) (x - y)] = -\pi/ 4 $$ Your inequality in this case---using that $\mathbb{E}[x-y] =0$ , then reads as $$ 0 \leq \frac{\pi^2}{16} \leq 0, $$ which is obviously absurd. Extension to a globally strongly convex and smooth function. Let us pick any $M > 0$ and define the function $$ g_M(x) = \begin{cases} f(x), & |x| \leq M \\ \tfrac{1}{2(1+ M^2)}(x-M)^2 + \tan^{-1}(M)(x-M) + f(M) & x > M \\ \tfrac{1}{2(1+ M
|
|optimization|convex-analysis|convex-optimization|expected-value|matrix-calculus|
| 1
|
Guaranteed graph labyrinth solving sequence
|
Starting from a vertex of an unknown, finite, strongly connected directed graph, we want to 'get out' (reach the vertex of the labyrinth called 'end'). Each vertex has two exits (edges going from the vertex in question to another one); one exit is labeled 'a', the other is labeled 'b'. We have limitless 'memory', but we don't recognize when we arrive at the same vertex again, so at each step we can only pick whether we take exit a or exit b; we do, however, recognize when we have entered the exit vertex. Show that there is an algorithm to get out of any maze! Write the algorithm. If n is its input, then its output is a sequence of 'a', 'b' that exits any maze with at most n vertices. I got this assignment (math student) in a course to do with algorithms. I don't believe the actual code outputting the 'a', 'b' sequence is particularly difficult once the structure of the function is found mathematically. I've had multiple ideas; it is clear that were we to find a sequence that would guarantee a visit to every vertex of any such graph on at most n vertices, we would be done.
|
Define an $n$ -labyrinth as a strongly connected directed graph on $0\leq m\leq n-2$ vertices labeled $s,e,1,2,...,m$ , where each vertex $v$ is the source of two edges labeled $a,b$ . ( $s$ stands for "start", and $e$ stands for "end".) For any sequence $S$ in $\{a,b\}^*$ , any $n$ -labyrinth $L$ , and any vertex $v$ of $L$ , we denote by $S_{v,L}$ the path in $L$ starting at $v$ and taking the sequence of edges in $L$ associated with the sequence $S$ . Here, I provide an outline for an algorithm that produces a finite sequence $S$ in $\{a,b\}^*$ such that for any $n$ -labyrinth $L$ , the path $S_{s,L}$ passes over the end vertex $e$ . First, generate a list of every possible $n$ -labyrinth $(L_1,L_2,\ldots,L_k)$ . (There are various ways to do this.) Now, let $S$ be the empty string. For each $i$ in $1,2,...,k$ , perform the following procedure: Define $v$ as the final vertex in the path $S_{s,L_i}$ . Find a sequence $T$ such that $T_{v,L_i}$ is a path from $v$ to $e$ . ( $T$ exists because $L_i$ is strongly connected.) Append $T$ to $S$ .
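The core loop of this outline can be sketched in code for a hand-picked list of labyrinths (full enumeration of all $n$-labyrinths is omitted here; the dictionary encoding and helper names are mine):

```python
from collections import deque

# Each labyrinth maps a vertex to its 'a' and 'b' successors; 's' is the
# start and 'e' the recognizable exit.

def follow(lab, start, seq):
    # walk the sequence from `start`, stopping as soon as the exit is entered
    v = start
    for c in seq:
        if v == 'e':
            return 'e'
        v = lab[v][c]
    return v

def path_to_end(lab, v):
    # BFS for some sequence T whose path from v ends at 'e'
    q = deque([(v, '')])
    seen = {v}
    while q:
        u, seq = q.popleft()
        if u == 'e':
            return seq
        for c in 'ab':
            w = lab[u][c]
            if w not in seen:
                seen.add(w)
                q.append((w, seq + c))
    raise ValueError('unreachable: the labyrinth is strongly connected')

def universal_sequence(labyrinths):
    S = ''
    for lab in labyrinths:
        v = follow(lab, 's', S)
        if v != 'e':
            S += path_to_end(lab, v)   # append T to S
    return S

labs = [
    {'s': {'a': 'e', 'b': 's'}, 'e': {'a': 'e', 'b': 'e'}},
    {'s': {'a': '1', 'b': 's'}, '1': {'a': 's', 'b': 'e'}, 'e': {'a': 's', 'b': 's'}},
]
S = universal_sequence(labs)
print(all(follow(lab, 's', S) == 'e' for lab in labs))  # True
```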
|
|combinatorics|graph-theory|algorithms|recursive-algorithms|tiling|
| 0
|
Is there a good presentation of the matrix algebras?
|
Let $R$ be a commutative ring with identity. Consider the matrix algebra $\operatorname{Mat}_n (R)$ and the matrices $$E_{mk}:=(a_{ij}),\qquad a_{ij}=\begin{cases} 1 & \text{if } i=m \text{ and } j=k, \\ 0 & \text{otherwise.} \end{cases}$$ Now consider the algebra homomorphism from the free $R$-algebra $$f\colon R\langle t_{ij}\rangle \to \operatorname{Mat}_n (R), \ t_{ij}\mapsto E_{ij}.$$ Is there a description of a generating set for $\operatorname{Ker} f$ ? It is easy to establish that $f$ is surjective. Hence, in other words, this question asks for a "nice" presentation of the matrix algebra. Thanks for paying attention.
|
The ideal $\operatorname{Ker} f$ is generated by the differences $t_{ij}t_{k\ell} - \delta_{jk} t_{i\ell}$ (where $\delta$ means the Kronecker delta) and the difference $t_{11}+t_{22}+\cdots+t_{nn}-1$ . Proof idea. These differences are clearly in $\operatorname{Ker} f$ (since $f$ sends them to the matrices $E_{ij}E_{k\ell} - \delta_{jk} E_{i\ell} = 0$ and $E_{11}+E_{22}+\cdots+E_{nn}-I_n = 0$ ). It remains to show that they generate $\operatorname{Ker} f$ . To this end, we observe that any word (= noncommutative monomial) in the $t_{ij}$ 's can be reduced modulo these differences to a linear combination of single $t_{ij}$ 's (use $t_{ij}t_{k\ell} - \delta_{jk} t_{i\ell}$ to reduce the degree of a word of degree $> 1$ , and use $t_{11}+t_{22}+\cdots+t_{nn}-1$ to increase the degree of the empty word). But a linear combination of single $t_{ij}$ 's never lies in $\operatorname{Ker} f$ unless it is trivial (i.e., all coefficients are zero), since their images $f\left(t_{ij}\right) = E_{ij}$ are linearly independent over $R$ .
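The two families of relations are easy to verify numerically for small $n$; below is a sketch over the integers using plain nested lists (helper names are mine):

```python
from functools import reduce

def unit(n, i, j):
    # the matrix unit E_{ij}: 1 in position (i, j), zeros elsewhere
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 3
E = {(i, j): unit(n, i, j) for i in range(n) for j in range(n)}
zero = [[0] * n for _ in range(n)]
identity = [[int(r == c) for c in range(n)] for r in range(n)]

# relation 1: E_{ij} E_{kl} = delta_{jk} E_{il}
rel1 = all(matmul(E[i, j], E[k, l]) == (E[i, l] if j == k else zero)
           for i in range(n) for j in range(n)
           for k in range(n) for l in range(n))

# relation 2: E_{11} + E_{22} + ... + E_{nn} = I
rel2 = reduce(matadd, (E[i, i] for i in range(n))) == identity

print(rel1 and rel2)  # True
```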
|
|abstract-algebra|group-presentation|algebras|
| 1
|
Is Euler's identity invalid in this case?
|
In "Introductory Quantum Mechanics" by Richard L. Liboff, question 1.21 part o asks the reader to derive the following sum: $$ \sum_{n=1}^{L} e^{-i\frac{2\pi n}{L}} = 0 $$ The given hint asks the reader to separate $n$ from the other powers and treat the expression as a geometric sum. Using the geometric sum formula, proving that expression is trivial. However, I'm a bit confused as to why using Euler's identity here doesn't work. To elaborate: $$ \sum_{n=1}^{L} \exp{(-i\frac{2\pi n}{L})} = 0 $$ $$ \sum_{n=1}^{L} \exp{(-i2\pi)}^{\frac{n}{L}} = 0 $$ $$ \sum_{n=1}^{L} (1)^{\frac{n}{L}} = 0 $$ Except that last equation is not true. I tried searching around the internet but I couldn't find a reason why I'm not allowed to use $\exp(-i2\pi) = 1$ here. What am I missing?
|
The proposed identity $a^{(bc)}=(a^b)^c$ does hold in complex variables if $c$ , the factor separated out of the exponent, is an integer. We may therefore use this identity provided that $n$ , and only $n$ , is separated from the exponent. Then $\sum\limits_{n=1}^L\exp(-i2\pi n/L)=\sum\limits_{n=1}^L[\exp(-i2\pi/L)]^n$ You then apply the geometric sequence sum formula to the right side, rendering the first term and common ratio as $\exp(-i2\pi/L)$ . You should then get a fraction with a zero numerator and a nonzero denominator for all $L\in\mathbb{N}_{\ge2}$ , proving the claim.
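A quick numerical check of the geometric-sum route described above (a sketch; `L = 9` is an arbitrary choice of mine):

```python
import cmath

L = 9
r = cmath.exp(-2j * cmath.pi / L)       # first term and common ratio
geometric = r * (1 - r**L) / (1 - r)    # sum_{n=1}^{L} r^n by the geometric formula
direct = sum(r**n for n in range(1, L + 1))

# r^L = exp(-2*pi*i) = 1, so the numerator vanishes while 1 - r != 0 for L >= 2
print(abs(geometric) < 1e-12, abs(direct) < 1e-12)  # True True
```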
|
|complex-numbers|summation|
| 1
|
Maximum number of fixed points of linear transformation
|
Given a finite-dimensional vector space $V$ over a finite field, which linear transformation $$L : V \rightarrow V$$ has the most fixed points? ( $L$ cannot be the identity.)
|
Any linear transformation that is the identity on a codimension-1 subspace, but is not the identity, will be maximal, no matter which of the possible ways you use to evaluate "most fixed points". Indeed, the fixed points of $L$ form the subspace $\ker(L-\operatorname{id})$ , which is proper whenever $L \neq \operatorname{id}$ , so a non-identity $L$ fixes at most $|F|^{\dim V - 1}$ vectors, and a transvection attains this bound. Such an $L$ is of course not unique, so you really can't say " the most".
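For a small case one can simply enumerate every linear map and confirm that the maximum among non-identity maps is $|F|^{\dim V - 1}$; a sketch over $\mathbb F_2^2$ (names are mine):

```python
from itertools import product

p, n = 2, 2  # the field F_p and dimension n, small enough to enumerate everything

def apply(A, v):
    # matrix-vector product over F_p
    return tuple(sum(A[i][j] * v[j] for j in range(n)) % p for i in range(n))

vectors = list(product(range(p), repeat=n))
identity = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))

best = 0
for entries in product(range(p), repeat=n * n):
    A = tuple(tuple(entries[i * n + j] for j in range(n)) for i in range(n))
    if A == identity:
        continue
    fixed = sum(apply(A, v) == v for v in vectors)
    best = max(best, fixed)

print(best)  # → 2, i.e. p**(n-1): fixing a hyperplane is optimal
```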
|
|linear-algebra|linear-transformations|
| 0
|
Is Euler's identity invalid in this case?
|
In "Introductory Quantum Mechanics" by Richard L. Liboff, question 1.21 part o asks the reader to derive the following sum: $$ \sum_{n=1}^{L} e^{-i\frac{2\pi n}{L}} = 0 $$ The given hint asks the reader to separate $n$ from the other powers and treat the expression as a geometric sum. Using the geometric sum formula, proving that expression is trivial. However, I'm a bit confused as to why using Euler's identity here doesn't work. To elaborate: $$ \sum_{n=1}^{L} \exp{(-i\frac{2\pi n}{L})} = 0 $$ $$ \sum_{n=1}^{L} \exp{(-i2\pi)}^{\frac{n}{L}} = 0 $$ $$ \sum_{n=1}^{L} (1)^{\frac{n}{L}} = 0 $$ Except that last equation is not true. I tried searching around the internet but I couldn't find a reason why I'm not allowed to use $\exp(-i2\pi) = 1$ here. What am I missing?
|
(a) For the case with complex numbers: $$ i=i^1 = i^{\frac{4}{4}}=(i^4)^{\frac{1}{4}} = 1^{\frac{1}{4}} = \sqrt[4]{1} = 1, $$ which is obviously incorrect, the mistake being $$i^\frac{4}{4}\neq (i^4)^{\frac{1}{4}}.$$ In the case of $z^{\frac{a}{b}}$ , the value of $\gcd(a,b)$ matters: if $\gcd(a,b)=1$ , then $z^\frac{a}{b} = (z^a)^\frac{1}{b}$ (as sets of values of the multivalued power). (b) Non-invertible exponentiation should also be mentioned; for example, for real $a$ , $$a=a^1=a^{\frac{2}{2}} \neq \sqrt {a^2},$$ since $$\sqrt{a^2}= |a|.$$
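Both points can be seen numerically with Python's principal-branch complex power (a sketch):

```python
import math

lhs = (1j ** 4) ** 0.25   # (i^4)^(1/4): principal fourth root of 1, i.e. 1
rhs = 1j                  # i^(4/4) is just i
print(abs(lhs - 1) < 1e-9, lhs != rhs)  # True True: i^(4/4) != (i^4)^(1/4)

# part (b): squaring is not invertible over the reals
assert math.sqrt((-3.0) ** 2) == 3.0  # sqrt(a^2) = |a|, not a
```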
|
|complex-numbers|summation|
| 0
|