Fields per record: title (string), question_body (string), answer_body (string), tags (string), accepted (int64).
Banach spaces associated to Sobolev Towers
In the book One Parameter Semigroups for Linear Evolution Equations, the authors provide some definitions as follows: (Page 124). [For $A \in \mathcal{L}(X)$, define] For each $n \in \mathbb{N}, x \in D(A^n)$ we define the $n$-norm $$\lVert x\rVert _n: = \lVert A^nx \rVert $$ and call $$X_n : = (D(A^n), \lVert \, \cdot \, \rVert _n)$$ the Sobolev space of order $n$. (Page 129). Let $(A, D(A))$ be a densely defined operator on $X$ such that $\rho(A)\neq \emptyset$. [Define] For each fixed $n \in \mathbb{N},$ $$ \lVert x \rVert_{n,\lambda} : = \lVert (\lambda - A)^n x\rVert_X , \quad x \in D(A^n). $$ (Page 515). [Let $A: D(A) \subset X \to X$ be closed and define] $X_1 : = (D(A), \lVert \, \cdot \, \rVert _A)$ is a Banach space for the graph norm $$\lVert x \rVert_A : = \lVert x \rVert + \lVert Ax \rVert , \quad x \in D(A).$$ All of this is fine. However, in the footnote of page 515, the authors write: The definition of $X_1$ also makes sense if $A$ has [non]empty resolvent set. Since if $\
For the original question: you are indeed right; however, it is entirely possible the author just meant to talk about the $X_1$ of page $124$ in the context where $0 \in \rho(A)$, since that's when $\|\cdot\|_{1,0}$ would be involved and equal to $\|\cdot\|_1$. For the question from the edit: this equality seems to be algebraic, but even if it is topological (in the sense of topological complements), all the norms introduced are equivalent, so you can pick any of them. The underlying topology is exactly the same, and nothing here involves a property like isometric isomorphism or anything else that isn't preserved by equivalence of norms, so there's no worry to be had.
|functional-analysis|spectral-theory|unbounded-operators|
1
$x^n/(1+x^n)$ is monotonic
Show that $x^n/(1+x^n)$ is monotonic for $x \in \mathbb{R}^+$ and $n \in \mathbb{N}$. I tried solving it by differentiating but did not get a feasible result. Is there any other way of computation through which this can be solved easily? Any hint would be appreciated.
A composition of monotonic functions is monotonic (even if one is increasing and the other is decreasing). Here the two functions are $$x\mapsto x^n$$ and $$x\mapsto \frac{x}{1+x}$$
|uniform-convergence|monotone-functions|
0
Why is $2^n +1=(2+1)(2^{n-1} - 2^{n-2} +2^{n-3}-\ldots+1)$ for odd $n$
This transformation is only a part of the solution of my problem, but the most significant one. I need to show that $2^n+1$ is divisible by $3$ for odd $n$. I have practised polynomial transformations for two weeks and geometric progressions for a week, but I still can't easily come up with this transformation.
I got it; it is really simple and stupid, but I hadn't managed to understand it for some reason. It may be related to the fact that I wasn't doing problems with negative coefficients, so I found a formula with alternating $+$ and $-$ signs unfamiliar and strange. Thanks to all the people here for the help and effort.
|algebra-precalculus|
0
About the two step centralizer
Let $G$ be a finite group. Define $\gamma_2(G)=[G,G]$ and $\gamma_{i+1}(G)=[\gamma_i(G),G]$ for all $i\ge 2$. Let $G$ be a finite $p$-group of order $p^n$ and maximal class. For each $i$ with $2\le i\le n-2$, the $2$-step centralizer $K_i$ in $G$ is defined to be the centralizer in $G$ of $\gamma_i(G)/\gamma_{i+2}(G)$. In some papers it is stated, without reference or proof, that "$K_2$ coincides with $G$ if and only if $n = 3$". I need a reference for, or proof of, this result; any help is appreciated.
For $p$ -groups of maximal class, one reference is Leedham-Green and McKay's The Structure of Groups of Prime Power Order : Proposition 3.1.4. Let $G$ be a finite $p$ -group of order $p^n$ and of maximal class. Then the $2$ -step centralizers $K_2,\ldots,K_{n-2}$ are maximal subgroups of $G$ . Proof. For $i$ satisfying $2\leq i\leq n-2$ , there is an embedding of $G/K_i$ to the automorphism group of $\gamma_i(G)/\gamma_{i+2}(G)$ induced by conjugation. That quotient has order $p^2$ since $G$ is of maximal class. The automorphism group of a group of order $p^2$ has Sylow $p$ -subgroups of order $p$ , so $K_i$ has index $p$ . $\Box$ That gives that the two-step centralizer $K_2$ is not $G$ if $n\geq 4$ . If $n=3$ , then $\gamma_2(G)$ is central and $\gamma_4(G)$ is trivial, so $K_2=G$ . If $G$ is not of maximal class then the "only if" clause fails, as clearly any group of class at most two will have $\gamma_2(G)$ central and $\gamma_4(G)$ trivial, so $K_2=G$ .
|abstract-algebra|group-theory|reference-request|finite-groups|p-groups|
1
Show that $a_n \underset{(+\infty)}{\sim} n$ with $a_n$ solution of $e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!}=\frac{1}{2}$
Let $n$ be a natural number and $f_n$ defined as: $$ f_n (x ) = e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!} $$ Let $a_n$ be the unique positive solution of $f_n (x )=\frac{1}{2}$. I'm asked to show that $a_n \underset{(+\infty)}{\sim}n$. What I know is that $$ \lim\limits_{n \rightarrow +\infty} f_n (n ) = \lim\limits_{n \rightarrow +\infty} e^{-n}\sum_{k=0}^{n}\frac{n^k}{k!} = \lim\limits_{n \rightarrow +\infty}f_n (a_n ) = \frac{1}{2}, $$ which makes the result intuitive, because we have $f_n (n ) \underset{(+\infty)}{\sim}f_n (a_n )$ with $f_n$ only passing through $\frac{1}{2}$ once. However, I struggle to find how to use this to show that $a_n \underset{(+\infty)}{\sim}n$. Any hint would be appreciated.
Not a full answer, but way too much for a comment: Let $n$ be a natural number and $f_n$ defined as: $$ f_n\left(x\right) = e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!} $$ Now clearly a truncated Taylor series for $\exp(x)$ is less than $\exp(x)$ for $x>0$, so $$f_n(x) < 1.$$ Taylor's theorem with error term gives us $$\exp(x) = 1 + x + x^2/2! + \ldots + x^n/n! + R_n(x),$$ where for $x>0$ we get $0 < R_n(x) \le e^x\,\frac{x^{n+1}}{(n+1)!}$. So we get for $x>0$ (using O-notation) $$ f_n(x) = 1 - e^{-x}R_n(x) = 1 - O\!\left(\frac{x^{n+1}}{(n+1)!}\right). $$ We conclude $$1 = \exp(-x) \exp(x) = f_n(x) + O\!\left(\frac{x^{n+1}}{(n+1)!}\right).$$ If we can show that $f_n(n)$ is about equal to this $O\!\left(\frac{x^{n+1}}{(n+1)!}\right)$ term at $x=n$, then both must equal about $1/2$, thereby proving the conjecture. We could do this in steps: show that $\int_1^{\infty} \frac{x^t}{t!}\,\mathrm dt = \exp(x) + o(1)$; show that $\int_1^{n} \frac{x^t}{t!}\,\mathrm dt = \exp(x) - \frac{x^{n+1}}{(n+1)!} + o(1)$. However, that seems like a hard or long way; those integrals have no closed forms, so we need truncated series or asymptotics again, or something clever. We reconsider the proble
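To back up the intuition numerically, here is a quick sketch (the helper names `f` and `a_root` are mine, not from the question): since $f_n$ is strictly decreasing in $x$ with $f_n(0)=1$ and $f_n(x)\to 0$, bisection locates $a_n$, and the ratio $a_n/n$ visibly tends to $1$:

```python
import math

def f(n, x):
    """f_n(x) = e^{-x} * sum_{k=0}^{n} x^k / k!, via a running term."""
    term, s = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        s += term
    return math.exp(-x) * s

def a_root(n, tol=1e-10):
    """Unique positive root of f_n(x) = 1/2, by bisection.

    f_n is strictly decreasing in x (its derivative is -e^{-x} x^n / n!),
    so the root is bracketed in (0, 2n + 2)."""
    lo, hi = 0.0, 2.0 * n + 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(n, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (5, 20, 80):
    print(n, a_root(n) / n)   # ratios drift toward 1 as n grows
```

Consistent with known Poisson-median asymptotics, $a_n - n$ stays bounded, so $a_n/n \to 1$.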
|sequences-and-series|roots|
0
Limit of a multivariable function raised to a power
I got this limit in one of my math exams, but I got stuck because I couldn't decide which term was of greater significance, $(x^2+y^2)$ or $e^{-x-y}$. Maybe I'm not even on the right path. How to solve this limit? $$\lim_{(x,y)\to(\infty,\infty)}(x^2+y^2)\cdot e^{-x-y}$$
Using polar coordinates, we may rewrite the limit as $$\displaystyle \lim_{r\to\infty} r^2\cdot e^{-r(\cos(\theta)+\sin(\theta))}$$ As eventually $x,y\geq 0$ , we have $\theta\in [0,\pi/2]$ , so $\cos(\theta)+\sin(\theta)\geq 1$ , and $$0\leq r^2\cdot e^{-r(\cos(\theta)+\sin(\theta))}\leq r^2\cdot e^{-r}\to 0,$$ so the limit is $0$ .
|limits|multivariable-calculus|
1
Can a function be defined as the union of two other functions?
So I read from various sources that a function can be defined as a binary relation. Then is it valid to say, for example, $f = \{ (1, 2), (2, 3) \}$ ? And suppose I have another function $g = \{ (4, 5) \}$ . Does it then make sense to write $(f \cup g)(2) = 3$ ?
In the language of set theory, using the standard representation of function as sets of pairs, you are exactly right. Your $f$ and $g$ are both functions, as is $f \cup g$ , and $(f \cup g)(2)=f(2) = 3$ . In general, the union of any two functions $f$ and $g$ is a function provided they agree on the intersection of their domains: i.e., provided that for any $x$ , $y$ and $z$ , if $(x, y) \in f$ and $(x, z) \in g$ , then $y = z$ .
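This set-of-pairs view can be played with directly in code; the helpers `is_function` and `apply` below are illustrative names of my own, not standard library functions:

```python
def is_function(rel):
    """A relation (set of pairs) is a function iff no input maps
    to two different outputs."""
    outputs = {}
    for x, y in rel:
        if x in outputs and outputs[x] != y:
            return False
        outputs[x] = y
    return True

def apply(rel, x):
    """Evaluate a function-as-set-of-pairs at x."""
    for a, b in rel:
        if a == x:
            return b
    raise KeyError(x)

f = {(1, 2), (2, 3)}
g = {(4, 5)}
h = f | g                 # the union from the question

assert is_function(h)     # domains {1,2} and {4} are disjoint
assert apply(h, 2) == 3   # (f ∪ g)(2) = 3
```

The compatibility condition from the answer is exactly what `is_function` checks on the union: `{(1, 2), (1, 3)}` fails it.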
|functions|notation|relations|
1
Expected path length in a ladder-like graph if edges can be randomly removed.
Question from an old exam: The King of Squares sets out to patrol the City of Squares. The City of Squares is an infinite ladder, i.e., a graph $(V, E)$ where $V = \mathbb{N} \times \{0,1\}$ and the vertex $(x, y)$ is connected to $(x-1, y)$ (for $x>0$), to $(x+1, y)$, and to $(x, 1-y)$. Unfortunately, due to the revolt, some parts of the streets are blocked. Each edge independently becomes blocked with probability $\frac{1}{2}$. The king leaves his palace at the point $(0,0)$. Let $X$ be the largest value of the coordinate $x$ that the King can reach without passing through the blocked streets. The king can look ahead, so he will choose the best route before he sets off. In the situation in the image below, $X=5$. (a) Find $\mathbb{E}X$. (b) Find the probability that both points $(X, 0)$ and $(X, 1)$ are reachable. My attempt at (a): Let $$ X_i = \begin{cases} i, & \text{if it's possible to reach $(i, 0)$ or $(i,1)$} \\ 0, & \text{otherwise}. \end{cases} $$ Now we need to find the probability $p(i)$ that
Notations: $U_n$ the event that the upper horizontal edge is not blocked at $n$ . $L_n$ the event that the lower horizontal edge is not blocked at $n$ . $V_n$ the event that the vertical edge is not blocked at $n$ . $N = \inf\left\{ n \in \mathbb N_{\ge 1}\, : \, L_n \cap U_n \cap V_n^c\; \text{is false}\right\}$ ; $N$ follows a geometric distribution $\mathcal G\left(p = \frac78\right)$ . Question a: Idea: The idea is to establish a system of equations such that $\mathbb E\left[X\mid V_0\right]$ and $\mathbb E\left[X \mid V_0^c\right]$ are solutions of the system. Recursion: In the case of $V_0^c$ , \begin{array}{|c|c|c|} \hline \textbf{Event} & \textbf{Probability} & \textbf{Expectation}\\ \hline L_1^c & \frac12 & 0\\ \hline L_1 \cap V_1^c & \frac14 & 1 + \mathbb E\left[X \mid V_0^c\right]\\ \hline L_1 \cap V_1 & \frac14 & 1 + \mathbb E\left[X \mid V_0\right]\\ \hline \end{array} In the case of $V_0$ , \begin{array}{|c|c|c|} \hline \textbf{Event} & \textbf{Probability} & \mathbb E\left[X
|probability|expected-value|
0
What is the role of the Taylor formula in optimization?
In the course we do a lot of optimization examples using the Taylor method, but I notice that if we have a function like $f(x,y) = \alpha x^2 + \beta y^2 + \sigma xy + \gamma x + \phi y + \xi$, then without determining the Hessian I can ensure that the Taylor formula will be $$f(a+h)-f(a)= \alpha h_1^2 + \beta h_2^2 + \sigma h_1 h_2,$$ with $a$ a critical point. Is this always true? If yes, what is the role of the Hessian here? I am really confused.
If I understand you well, you are asking why the Taylor polynomial of your function is the function itself. This happens because $f$ is by itself a polynomial. In fact, a function is equal to its (finite) Taylor polynomial iff it is a polynomial.
|optimization|taylor-expansion|
0
Confusion with effect of averaging on standard deviation
First of all, I was reading a text and got stuck on one part. Regarding the quote in question, I want to give two scenarios I made up to make my question clear: Scenario 1: Imagine we take $N$ measurements from the same sensor, and each measurement consists of 10 individual temperature readings (samples). And imagine each measurement has a standard deviation estimate of 0.5 degrees Celsius. (So we have in total $10N$ data points from $N$ measurements.) Scenario 2: Now imagine we take one single measurement from the same sensor, and this single measurement consists of $10N$ individual temperature readings (samples). (We now have again in total $10N$ data points, but from a single measurement.) My questions are: 1) What scenario is the text talking about? What does it mean by measurements? (Is a measurement a data point or an array of data points in the text?) 2) After averaging, what happens to the standard deviation in each case (Scenario 1 and Scenario 2 in my examples)?
In both cases you have $10N$ measurements. But there are multiple different random variables these could be sampling: Case 0 - Let's call $T$ the random variable that represents a single measurement. You've told me that $stdev(T) = 0.5$, and you've got $10N$ samples of this random variable. This isn't either of your scenarios. Case 1 - We can define a random variable $U$ as "take ten measurements and average them together". You might write this as $U=(T_1 + \ldots + T_{10})/10$. By grouping and averaging your $10N$ measurements you would yield $N$ samples of $U$, and the text is telling you that $stdev(U) = 0.5/\sqrt {10}$. This corresponds to your scenario 1, roughly, and is the key takeaway from this portion of your text. Case 2 - You could, if you wanted to, average together all of the $10N$ measurements. This would correspond to a single sample of a random variable $V = (T_1 + \ldots +T_{10N})/(10N)$. This corresponds roughly to your scenario 2, and isn't very realistic, but it would
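Case 1 is easy to see in a simulation. The sketch below (constants and seed are my own choices, not from the text) draws groups of 10 readings with per-reading spread 0.5 and measures the spread of the group averages; it comes out near $0.5/\sqrt{10}\approx 0.158$:

```python
import random
import statistics

random.seed(0)

SIGMA = 0.5     # single-reading standard deviation, as in the scenarios
GROUP = 10      # readings averaged per measurement
N = 20000       # number of measurements (large, so the estimate is stable)

# Average each group of 10 readings, then look at the spread of those
# N group averages.  Theory: SIGMA / sqrt(GROUP).
means = [
    statistics.fmean(random.gauss(20.0, SIGMA) for _ in range(GROUP))
    for _ in range(N)
]
print(statistics.stdev(means))   # close to 0.5 / sqrt(10) ≈ 0.158
```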
|standard-deviation|
0
For symmetric, non-singular, positive definite matrix $B$ and some unit vector $u$ show $u^T Bu \ge \frac{1}{ {\lVert B^{-1} \rVert} }$
For symmetric non-singular positive definite matrix $B$ , and any unit vector ${\lVert u \rVert} = 1$ , show that: \begin{gather*} u^T Bu \ge \frac{1}{ {\lVert B^{-1} \rVert} } \end{gather*} Since $B$ is positive definite, we know that: \begin{gather*} u^T Bu > 0 \end{gather*} That and the Cauchy–Schwarz inequality give us: \begin{gather*} 0 < u^T Bu \le {\lVert B \rVert} {\lVert u \rVert}^2 = {\lVert B \rVert} \end{gather*} For any invertible matrix: \begin{gather*} {\lVert B^{-1} \rVert} {\lVert B \rVert} \ge 1, \quad \frac{1}{ {\lVert B^{-1} \rVert} } \le {\lVert B \rVert} \end{gather*} I'm stuck on what to try from here.
Using Cauchy Schwarz we have, \begin{align*} 1 = u^{\top} u = \langle B^{1/2}u, B^{-1/2}u\rangle \leq (u^{\top} B u) \cdot (u^{\top} B^{-1} u) \leq (u^{\top} B u) \cdot \|B^{-1}\|. \end{align*}
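For a concrete sanity check (the matrix below is a toy example of my own): in the spectral norm, $\lVert B^{-1}\rVert = 1/\lambda_{\min}(B)$ for symmetric positive definite $B$, so the bound says $u^TBu \ge \lambda_{\min}(B)$ for unit $u$:

```python
import math

# B is symmetric positive definite with eigenvalues 1 and 3, so
# 1 / ||B^{-1}|| = lambda_min(B) = 1 in the spectral norm.
B = [[2.0, 1.0], [1.0, 2.0]]
a, b, d = B[0][0], B[0][1], B[1][1]

# lambda_min of a symmetric 2x2 [[a, b], [b, d]] in closed form
lam_min = (a + d) / 2 - math.sqrt(((a - d) / 2) ** 2 + b * b)
bound = lam_min

# Sweep unit vectors around the circle and check the claimed inequality.
for t in [k * 0.01 for k in range(629)]:
    u = (math.cos(t), math.sin(t))
    quad = (a * u[0] + b * u[1]) * u[0] + (b * u[0] + d * u[1]) * u[1]
    assert quad >= bound - 1e-12
```

Equality is approached when $u$ points along the eigenvector of $\lambda_{\min}$, which is why $1/\lVert B^{-1}\rVert$ is the sharp constant.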
|linear-transformations|inner-products|symmetric-matrices|positive-definite|
0
$x^n/(1+x^n)$ is monotonic
Show that $x^n/(1+x^n)$ is monotonic for $x \in \mathbb{R}^+$ and $n \in \mathbb{N}$. I tried solving it by differentiating but did not get a feasible result. Is there any other way of computation through which this can be solved easily? Any hint would be appreciated.
For all $n \in \mathbb {N}$ and $x \in \mathbb {R}_{+}$ , $$ \begin{align} \frac {\text {d}}{\text {d} x} \frac {{x}^{n}}{{x}^{n} + 1} & = \frac {\text {d}}{\text {d} x} \Big( 1 - \frac {1}{{x}^{n} + 1} \Big) \\ & = \frac {1}{{\left( {x}^{n} + 1 \right)}^{2}} \cdot \frac {\text {d}}{\text {d} x} {x}^{n} = \frac {n {x}^{n - 1}}{{\left( {x}^{n} + 1 \right)}^{2}} > 0. \end{align} $$ So the function is strictly increasing (in particular, monotonic) on $\mathbb {R}_{+}$.
|uniform-convergence|monotone-functions|
1
Why aren't $\infty$-groupoids commutative in HoTT?
I'm trying to read through HoTT, but I'm confused by the path induction principle; it seems too strong at first glance. I tried "proving" that all suitable paths commute, and it looks like I kind of managed to do it. Where did I make a mistake? Given a type $A$ with a judgmentally unique term $a: A$ (for any $x: A$ we assert $x \equiv a$), we want to prove that for any $p, q: a =_A a$ we can construct an equality $p \cdot q = q \cdot p$ (I'm using \cdot here because the book's typesetting uses a macro that I don't want to copy-paste). So we want to construct: $\mathsf{comm}:\prod_{x:A}\prod_{p,q:x=_{A}x}p\cdot q=q\cdot p$. Define $C(x,p):\equiv\prod_{q:a=_{A}x}p\cdot q=q\cdot p$ (this type checks: the concatenation is well-defined because $(a=_{A}x)\equiv(a=_{A}a)$), and the function that defines the induction base should have the type $\prod_{q:a=_{A}x}\mathsf{refl}_{a}\cdot q=q\cdot\mathsf{refl}_{x}$. We construct this function by trivially combining witnesses to the absorption of
This proof does indeed work for a type with a judgmentally unique term. This type is the contractible type, so all of its identity types are contractible as well. In particular, $p \cdot q = q \cdot p$ holds trivially for that type. The type $\sum_{x : A} x =_A a$ is also contractible, so it holds there too. More generally, any set has propositional identity types, so any two terms of an identity type are equal if they exist. However, the proof as written doesn't really generalize at all. The key thing is that path induction requires that the endpoint be "free", and in particular independent of the start point. What that means is that path induction doesn't really work on terms of the form $p : x = x$ , which is what you'd need in order to define the type $p \cdot q = q \cdot p$ . In your proof, you took advantage of the fact that $x \equiv a$ to sort of cheat and have the endpoint both free (an arbitrary term of type $A$ independent of $a$ ) and fixed (judgmentally equal to $a$ ) in o
|type-theory|homotopy-type-theory|
1
Average number of required swaps in selection sort
Assume that the distinct integers $1,\ldots,N$ are in random order and need to be sorted using selection sort. Here I am interested in the average number of swaps required, rather than the number of comparisons. Self-swaps are not counted. Running the selection sort over every possible permutation for a given $N$ provides the following average number of swaps:

N   Total swaps   Average number of swaps
2   1             0.5
3   7             1.1667
4   46            1.9167
5   326           2.7167
6   2,556         3.5500
7   22,212        4.4071
8   212,976       5.2821
9   2,239,344     6.1710

Is there a formula for the total or average number of required swaps? A linear least-squares fit produces $$ 0.816411N - 1.27647 $$ for the average, but it is inexact. The code presented at https://en.wikipedia.org/wiki/Selection_sort#Implementations was used.
Among the $N!$ permutations, $(N-1)!$ of them have the minimal element in the correct position, and do not require a swap; the remaining $N! - (N-1)!$ do require one. That is, the average number of swaps needed to place the first element is $1 - \dfrac{1}{N}$, after which you are left with a uniformly random arrangement of the remaining $N-1$ elements. That is, the average number of swaps satisfies $S_N = 1 - \dfrac{1}{N} + S_{N-1}$. Expanding, obtain $S_N = N - H_N \approx N - \ln N - \gamma$, where $H_N$ is the $N$-th harmonic number.
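A brute-force check of both the table and the closed form (function names below are mine); it enumerates all permutations, so it is only feasible for small $N$:

```python
import math
from fractions import Fraction
from itertools import permutations

def swaps(seq):
    """Number of swaps selection sort performs (self-swaps not counted)."""
    a = list(seq)
    count = 0
    for i in range(len(a) - 1):
        j = min(range(i, len(a)), key=a.__getitem__)
        if j != i:
            a[i], a[j] = a[j], a[i]
            count += 1
    return count

def total_swaps(n):
    return sum(swaps(p) for p in permutations(range(1, n + 1)))

def predicted_average(n):
    """S_N = N - H_N from the argument above, as an exact fraction."""
    return n - sum(Fraction(1, k) for k in range(1, n + 1))

# Exact agreement with the table for the sizes that are fast to enumerate.
for n in range(2, 7):
    assert Fraction(total_swaps(n), math.factorial(n)) == predicted_average(n)
```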
|sorting|
0
Exponential identity involving differential operators
I stumbled upon an equation that could be an identity, but I'm not sure. $$ e^{a(\partial_x + f'(x))} e^{-a \partial_x} = e^{f(x+a) - f(x)} $$ The operator exponential can be understood as a power series $$ e^{a\partial_x} g(x) = (1 + a\partial_x + \frac{a^2}{2!} \partial_x^2 + \ldots) g(x) = g(x+a) $$ Let $X = a(\partial_x + f'(x))$ and $Y = -a \partial_x$ . $$ [X,Y] = -a^2 [f'(x), \partial_x] = a^2 f''(x) $$ $$ [X,[X,Y]] = -[Y,[X,Y]] = a^3 f^{(3)}(x) $$ I checked the first few terms of the BCH formula. \begin{align*} &X + Y + \frac{1}{2}[X,Y] + \frac{1}{12} [X,[X,Y]] - \frac{1}{12} [Y,[X,Y]] + \ldots \\ & \stackrel{\text{?}}{=} a f'(x) + \frac{a^2}{2!} f''(x) + \frac{a^3}{3!} f^{(3)}(x) + \ldots \\ &= f(x+a) - f(x) \end{align*}
I don't think it's a good idea to use the BCH formula. Here is my proof. Set $I(x)=e^{f(x)}$; then $\partial_x+f'=I^{-1}\circ\partial_x\circ I$, since $(Ig)'=(f'g+g')I$. Therefore $\exp(a(\partial_x+f'))=\exp(I^{-1}\circ a\partial_x\circ I)=I^{-1}\circ\exp(a\partial_x)\circ I$, which means $$e^{a(\partial_x+f')}e^{-a\partial_x}g(x)=I^{-1}\circ e^{a\partial_x}(I(x)g(x-a))=I(x)^{-1}I(x+a)g(x)=e^{-f(x)+f(x+a)}g(x).$$
|matrix-exponential|
1
How do you show the convergence of $\sum_{n=1}^∞ 2^n\tan(1/n!)$?
How do you show the convergence of $\sum_{n=1}^∞ 2^n\tan(1/n!)$ ? Does one use the ratio test? If so, how?
When $n\to\infty$ , $\displaystyle\frac{1}{n!}\to 0$ , so $\displaystyle\lim_{n\to\infty} \displaystyle\frac{\tan\left(\displaystyle\frac{1}{n!}\right)}{1/n!}=1$ . Thus, the series has the same character as $\displaystyle\sum_{n=1}^{\infty} a_n$ , where $a_n=\displaystyle\frac{2^n}{n!}$ . Applying the ratio test, we reach $$\displaystyle\frac{a_{n+1}}{a_n}=\displaystyle\frac{2}{n+1}\to 0$$ Then we conclude that the sum of $a_n$ converges and our original series does too.
|sequences-and-series|convergence-divergence|
0
Divergence Theorem Exercise
The problem is as follows: Given that $\nabla \cdot \mathbf{F} = 0$ in $V$ and $\mathbf{F} \cdot \mathbf{n} = 0$ on $\partial V$, prove that $\int_V \mathbf{F} \, dV = 0$. I understand it intuitively, but I can't figure out how to show the result. All I have is the divergence theorem below, which just leads to $0=0$: $\int_V (\nabla \cdot \mathbf{F}) \, dV = \int_{\partial V} \mathbf{F} \cdot \mathbf{n} \, dS$
Perhaps Green's identity? It says $$ \int_V \left(\psi \: \nabla \cdot \mathbf{F} + \mathbf{F} \cdot \nabla \psi \right)dV = \int_{\partial V} \psi \: (\mathbf{F} \cdot \mathbf{n})dS $$ for any differentiable $\psi$ . The RHS is zero, the first term of the LHS is zero, and thus it reduces to $$\int_V \mathbf{F} \cdot \nabla \psi \: dV =0 $$ Taking $\psi$ to be $x_i$ leads to $\int_V F_i dV = 0$ .
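As a sanity check of the conclusion (not of the proof), here is a small numerical illustration with a field of my own choosing: on the unit square take $\psi=\sin(\pi x)\sin(\pi y)$ and $\mathbf F=(\partial\psi/\partial y,\,-\partial\psi/\partial x)$, which satisfies $\nabla\cdot\mathbf F=0$ in $V$ and $\mathbf F\cdot\mathbf n=0$ on $\partial V$; a midpoint-rule sum shows both components of $\int_V\mathbf F\,dV$ vanish:

```python
import math

# F = (d psi/dy, -d psi/dx) with psi = sin(pi x) sin(pi y):
# div F = psi_yx - psi_xy = 0 in V, and F.n = 0 on the boundary
# because psi vanishes there.  The theorem then forces both
# components of the integral of F over the square to be zero.
M = 400                      # grid resolution (midpoint rule)
h = 1.0 / M
Fx_sum = Fy_sum = 0.0
for i in range(M):
    for j in range(M):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        Fx_sum += math.pi * math.sin(math.pi * x) * math.cos(math.pi * y)
        Fy_sum += -math.pi * math.cos(math.pi * x) * math.sin(math.pi * y)
Fx = Fx_sum * h * h
Fy = Fy_sum * h * h
assert abs(Fx) < 1e-6 and abs(Fy) < 1e-6
```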
|vector-fields|divergence-theorem|
0
Equation for 2-Axis Rotation to Point to Target Azimuth & Elevation
I previously asked this question the other way around (given the angles of rotation, what are the resulting angles of azimuth and elevation), thinking that it would be simple to reverse it, but it's not. So now my question is what are the angles of rotation needed so that the vector normal to the surface points to a target azimuth and target elevation? It is similar to another question asked, but in my case the first rotation is about the y-axis and the second rotation is about the beam's own axis, as shown in the picture.
The first step is to attach a local coordinate frame $O x'y'z'$ to the beam. This coordinate frame and the world coordinate frame share the same origin $O$ . The $Ox'$ axis extends along the length of the beam, and is shown in red, keeping the convention that you used. Similarly the $Oy'$ axis extends in a transverse direction along the width of the beam, and is shown in green as shown in the figure, and finally the $Oz'$ is perpendicular to the surface of the beam, and is shown in blue. We can assume that initially, before any rotations, the beam's own coordinate frame was oriented along the world coordinate frame. Then three rotations take place. First the beam is rotated about the $z$ axis to a certain angle $\phi$ . Secondly the beam is rotated about its own $Oy'$ axis by a negative angle of $(-\theta)$ . This will raise its tip. And thirdly and finally, it is rotated about its own $Ox'$ axis by an angle $\Omega$ . The combinations of the three rotations results in the following co
|linear-algebra|rotations|spherical-geometry|
1
Evaluating $\arctan\left(\sin\frac12(\beta-x)\csc\frac12(\beta+x)\right)$ when $x=\pi$
I'm trying to evaluate the following: $$\frac{\sin x}{1-\cos\beta\cos x} - 2\cot\beta \arctan\left(\sin \left(\frac{\beta-x}{2}\right) \csc \left(\frac{\beta+x}{2}\right)\right)$$ This is the result of an integral and I need to evaluate at $x = \pi$ and $x=0$ . The first part of the expression evaluates to zero for both $x = \pi$ (upper limit) and $x=0$ (lower limit). For $x=0$ , I think the expression yields $0.5\pi \cot\beta$ . I'm getting stuck when $x=\pi$ although I suspect that this also evaluates to $0.5\pi \cot\beta$
As a hint: $$\sin (a+b)=\sin a \cos b + \cos a \sin b $$ so when you put $x=\pi$ you will have \begin{align*}\sin (\frac{\beta-x}{2}) \csc (\frac{\beta+x}{2})& = \frac{\sin (\frac{\beta-x}{2})}{\sin (\frac{\beta+x}{2})}= \frac{\sin (\frac{\beta-\pi}{2})}{\sin (\frac{\beta+\pi}{2})}\\& =\frac{\sin \frac{\beta}{2}\cos \frac{\pi}{2}-\cos \frac{\beta}{2}\sin \frac{\pi}{2} }{\sin \frac{\beta}{2}\cos \frac{\pi}{2}+\cos \frac{\beta}{2}\sin \frac{\pi}{2}} \\& =\frac{0-\cos \frac{\beta}{2}\sin \frac{\pi}{2}}{0+\cos \frac{\beta}{2}\sin \frac{\pi}{2}}= -1\end{align*} now you deal with $\arctan(-1)$
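A quick numerical spot-check of the hint (the values of $\beta$ below are arbitrary choices where the denominator is nonzero):

```python
import math

# sin((b - pi)/2) / sin((b + pi)/2) = -cos(b/2) / cos(b/2) = -1
# whenever cos(b/2) != 0, matching the algebra in the answer.
for b in (0.3, 0.7, 2.1):
    ratio = math.sin((b - math.pi) / 2) / math.sin((b + math.pi) / 2)
    assert math.isclose(ratio, -1.0)
```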
|definite-integrals|
0
Sketch a set in the complex plane
How to sketch this set in the complex plane? $$S=\{ z\in \mathbb{C}: \operatorname{Re}[(4-i)z] > \operatorname{Re}[(-5+7i)(4+6i)] \}$$ where $S$ is the set, $\operatorname{Re}[x+yi]$ is the real part of the complex number, $\mathbb{C}$ is the complex numbers, and $z$ is a complex number. I solved the real parts inside the brackets and here is what I've got: $$\operatorname{Re}[(4-i)z] = \operatorname{Re}[(4-i)(x+iy)] = \operatorname{Re}[(4x+y) + i(4y-x)] = 4x+y$$ $$\operatorname{Re}[(-5+7i)(4+6i)] = \operatorname{Re}[-20-30i+28i-42] = \operatorname{Re}[-62-2i] = -62$$ After that I have the inequality: $$4x+y>-62$$
This is how it looks when plotted by Wolfram Alpha: the open half-plane above the boundary line $4x+y=-62$, which crosses the $x$-axis at $x=-15.5$.
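The arithmetic in the last step is easy to slip on, so it is worth re-deriving the constant with Python's built-in complex type (the sample point $z$ below is an arbitrary choice of mine):

```python
# Verify Re[(4 - i)z] = 4x + y at a sample point, then evaluate the
# right-hand-side constant exactly.
x, y = 3.0, -2.0
z = complex(x, y)

assert ((4 - 1j) * z).real == 4 * x + y

print(((-5 + 7j) * (4 + 6j)).real)   # -62.0
```

So the region is the open half-plane $4x + y > -62$.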
|complex-numbers|
1
If $A \subseteq B$, then $A\Delta B=B\backslash A$
So I'm working on my discrete math homework, and I came to this one: If $A \subseteq B$ , then $A\Delta B=B\backslash A$ I was getting it split up into cases to prove it, then I noticed that it was $B\backslash A$ , and that didn't make sense to me. If it was $A\backslash A$ that would make sense to me because then both $A\Delta B$ and $A\backslash A$ would equal $\emptyset$ , but I don't see any way that $B\backslash A$ could equal $A\Delta B$ .
The elements of $A\Delta B$ are the elements that are in $A$ or $B$ but not in both. If $A\subseteq B$ then the elements that are in $A$ are automatically in $B$ but the elements in $B\setminus A$ are not in $A$ . Therefore $A\Delta B=B\setminus A$ .
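Python's built-in `set` type makes this identity easy to spot-check (the sets below are a toy example of my own; `^` is symmetric difference, `-` is set difference, `<=` tests subset):

```python
A = {1, 2}
B = {1, 2, 3, 4}

assert A <= B                  # A ⊆ B
assert A ^ B == B - A          # A Δ B = B \ A
assert (A ^ B) == {3, 4}       # the elements of B missing from A
```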
|discrete-mathematics|elementary-set-theory|
0
If $A \subseteq B$, then $A\Delta B=B\backslash A$
So I'm working on my discrete math homework, and I came to this one: If $A \subseteq B$ , then $A\Delta B=B\backslash A$ I was getting it split up into cases to prove it, then I noticed that it was $B\backslash A$ , and that didn't make sense to me. If it was $A\backslash A$ that would make sense to me because then both $A\Delta B$ and $A\backslash A$ would equal $\emptyset$ , but I don't see any way that $B\backslash A$ could equal $A\Delta B$ .
Let $A \subseteq B$. Then $A\setminus B= \emptyset$, and thus $A \triangle B = (A\setminus B) \cup (B\setminus A)=\emptyset \cup (B\setminus A)=B\setminus A$.
|discrete-mathematics|elementary-set-theory|
1
Count lattice points enclosed in curve
Let $$ (\log x_1)^2+ (\log x_2)^2 + \cdots + (\log x_n)^2= r $$ for $r=0,1,2,\ldots$ The goal is to count the number of lattice points enclosed by the curve/surface for each $r$. In this way the problem is a generalization of the Gauss circle problem. What is the exact solution, or at least an approximate solution, in dimension $2$? I calculated that the first several terms in the sequence are $1,4,10,15,28,42,\ldots$ The sequence seems to grow more slowly when compared to the usual Gauss circle problem.
First of all, I feel the need to show some of these curves. Fig. 1: the first 5 curves, for $r=1,\cdots, 5$. In fact the "curve" for $r=0$ is reduced... to the single point $(1,1)$. The asymptotic growth of these numbers $$N(r)=1,4,10,15,28,42,59,86,111,152,197,252,319,404,504,621,752,916,1112,1331,1589,1889,2233,2629,3081,3596,4192,4865,5634,6501,7480,\cdots$$ looks, on a "trial and error" basis, to be approximately governed (on a logarithmic scale) by the following approximation: $$\ln(N(r)) \approx 1.73r^{0.48}\tag{1}$$ (as represented on the figure below). Otherwise said: $$N(r) \approx \exp(1.73r^{0.48})$$ Fig. 2: Representation on a log scale: exact values (function $\log(N(r))$) represented as the blue curve, approximate values (RHS of (1)) represented by the red curve.
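Assuming natural logarithms, integer coordinates $x_i \ge 1$, and the boundary included (these assumptions reproduce the question's data), the counts can be generated with a short script; the function name is mine:

```python
import math

def lattice_count(r, n=2):
    """Count integer points (x_1, ..., x_n) with every x_i >= 1 and
    (ln x_1)^2 + ... + (ln x_n)^2 <= r  (boundary included)."""
    # Any admissible coordinate satisfies x <= e^sqrt(r).
    limit = int(math.exp(math.sqrt(r))) + 1
    vals = [math.log(x) ** 2 for x in range(1, limit + 1)]

    def count(dim, budget):
        if dim == 0:
            return 1
        return sum(count(dim - 1, budget - v) for v in vals if v <= budget)

    return count(n, r)

print([lattice_count(r) for r in range(6)])   # [1, 4, 10, 15, 28, 42]
```

The same routine with `n=3` gives the higher-dimensional counts the question alludes to.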
|geometry|number-theory|asymptotics|analytic-geometry|
1
How do you show the convergence of $\sum_{n=1}^∞ 2^n\tan(1/n!)$?
How do you show the convergence of $\sum_{n=1}^∞ 2^n\tan(1/n!)$ ? Does one use the ratio test? If so, how?
Another point of view is $$0\le \tan \Big(\frac 1{n!}\Big)\le 2\cdot \frac 1{n!}, $$ so $$0\le \sum_{n=1}^∞ 2^n\tan\Big(\frac 1{n!}\Big) \le 2\sum_{n=1}^∞ \frac {2^n}{n!} = 2(e^2-1).$$ Remark: $$e^x=\frac 1{0!}+\frac x{1!}+\frac {x^2}{2!}+\frac {x^3}{3!}+\cdots $$
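Numerically, the partial sums sit well inside this bound; a quick sketch (entirely my own, not part of the answer):

```python
import math

# Partial sums of sum_{n>=1} 2^n tan(1/n!), compared with 2(e^2 - 1).
bound = 2 * (math.e ** 2 - 1)          # about 12.78
s, fact = 0.0, 1
for n in range(1, 30):
    fact *= n                           # fact = n!
    s += 2 ** n * math.tan(1.0 / fact)
assert s < bound
print(s, bound)                         # the sum levels off near 7.70
```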
|sequences-and-series|convergence-divergence|
0
Can a dodecagon be cut into $n$ congruent pieces for any $n$ not of the form $1,2,3,4,6,8,12k^2,24k^2$?
Suppose I want to cut a regular dodecagon into $n$ congruent simply-connected pieces. For which $n$ is this possible? I can cut it into 24 right triangles, by cutting from the center to each vertex and the midpoint of each edge; by gluing consecutive such triangles together, I can also get any factor of $24$ . If I cut into either 12 or 24 triangular sectors, I can subdivide each of those triangles into $k^2$ congruent pieces for any positive integer $k$ , giving $n=12k^2$ or $n=24k^2$ . Can anything else be done? Obviously if the answer is "no" this is likely to be near-impossible to prove, but I would accept any evidence of someone investigating this problem in the literature and failing to exhibit additional possibilities. (I'd also be interested in partial impossibility proofs - for instance, the case of triangular tiles seems likely to be tractable using the sorts of methodologies Michael Beeson has applied to the problem of tiling a larger triangle.) I ask about a dodecagon in pa
I. In the figure below, $AOB$ , $BOC$ are two adjacent isosceles triangles of the twelve that can tile the dodecagon. Join $AC$ , and on $AO$ , $CO$ construct isosceles triangles $AEO$ , $CDO$ with base angle $=\angle BAC=15^\circ$ , and join $AD$ . This produces two quadrilaterals $$AEOD\cong ABCD$$ with angles of $45^\circ$ , $150^\circ$ , $60^\circ$ , $105^\circ$ . Since in area the two quadrilaterals together exceed $\triangle OAB+\triangle OBC$ by $\triangle AEO$ , and fall short by $\triangle CDO\cong \triangle AEO$ , then $$AEOD=AOB=ABCD=BOC$$ and the dodecagon can be tiled with twelve of these congruent quadrilaterals, just as it can be tiled by the twelve congruent isosceles triangles based on the sides of the dodecagon with common vertex at the center. This transformation of $n$ congruent isosceles triangles into $n$ congruent quadrilaterals which can tile a regular $n$-gon is evidently possible only for the dodecagon. For congruent triangles $AEO$ , $CDO$ must also be congruent with similar trian
|geometry|tiling|dissection|
0
$\nabla\cdot(f\vec{g})=f\,\nabla\cdot\vec{g}+\vec{g}\cdot\nabla f$ using Levi-Civita
I need to prove the following equality: $\nabla\cdot(f\vec{g})=f\,\nabla\cdot\vec{g}+\vec{g}\cdot\nabla f$. I know that proving this equality using the properties of $\nabla$ and $(\cdot)$ is easy; what the exercise asks me to do is to use the Levi-Civita symbol. I'm new to working with tensors, and that's why I'm a bit confused. My question is: is there a formula/relationship between $\varepsilon$ and $\nabla$, $\cdot$, that can help me prove the equality? (At the moment I have found a relationship between $\varepsilon$ and $\nabla$, $\times$, but it does not help me to solve this exercise.)
Let $f:\mathbb{R}^n\to\mathbb{R}$ , $\vec{g}:\mathbb{R}^n\to\mathbb{R}^n$ be arbitrary. The identity $\nabla \cdot (f\vec{g})=(\nabla f)\cdot\vec{g}+f(\nabla \cdot \vec{g})$ can then be verified using index notation. In index notation, the vector $\vec{g}$ is expressed as $\vec{g}=g_i\hat{e}_i$ , where $\hat{e}_i$ is the standard Cartesian basis vector in the direction of increasing $x_i$ and $g_i$ is the component of $\vec{g}$ along $\hat{e}_i$ (since the Cartesian basis vectors are orthonormal, $g_j=\vec{g}\cdot \hat{e}_j$ as can be verified by taking the dot product of $\vec{g}=g_i\hat{e}_i$ with $\hat{e}_j$ ). It is important to note that implicit on the right hand side is the sum over $i\in \{1,2,...,n\}$ . It is then natural to express the differential vector operator $\nabla$ as $\nabla=\hat{e}_i \partial_i$ , where $\partial_i=\frac{\partial}{\partial x_i}$ is the partial derivative operator in the $\hat{e}_i$ direction. Then an object like the divergence of $\vec{g}$ can be un
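The identity itself can be spot-checked numerically before wrestling with the index gymnastics; the fields `f` and `g` below are arbitrary smooth choices of mine, and `partial` is a central-difference helper:

```python
import math

def f(p):
    x, y, z = p
    return math.sin(x) * y + z * z

def g(p):
    x, y, z = p
    return (x * y, math.cos(z), x + y * z)

h = 1e-5
p = (0.4, -0.7, 1.3)       # the point where the identity is checked

def partial(F, i, comp=None):
    """Central difference of F (or of its comp-th component) w.r.t. x_i."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    a, b = F(tuple(q1)), F(tuple(q2))
    if comp is not None:
        a, b = a[comp], b[comp]
    return (a - b) / (2 * h)

# Left side: div(f g) = sum_i d_i (f g_i)
div_fg = sum(partial(lambda q: f(q) * g(q)[i], i) for i in range(3))

# Right side: (grad f) . g + f (div g)
rhs = sum(partial(f, i) * g(p)[i] for i in range(3)) \
      + f(p) * sum(partial(g, i, comp=i) for i in range(3))

assert abs(div_fg - rhs) < 1e-6
```

In index notation this is exactly $\partial_i(fg_i) = (\partial_i f)g_i + f\,\partial_i g_i$, summed over $i$.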
|vector-analysis|inner-products|tensors|
0
Regarding intersecting surfaces of two surfaces
When I plot the two quadric surfaces $y^2/2 + z^2/2 - 16.7 z = -30$ and $-(x^2/2) + 16.7 z = 200$, their intersection is found to be a single orbit (see attached image). But when I change the RHS value of both expressions, the shape and the number of intersection orbits change; i.e., after a certain RHS value in both equations, the number of intersection orbits is 2, while before that we get only a single intersection orbit. Example: the intersection orbit of $y^2/2 + z^2/2 - 16.7 z = -90$ and $-(x^2/2) + 16.7 z = 70$ is shown below. I know these RHS values are only numbers or constant values. Due to these values the nature of the surfaces remains the same while the shape and number of intersection orbits change. So it is confirmed that something is happening (I mean some kind of change in the dynamics of both surfaces is taking place). I have attached the code below that clearly shows how orbits are changing w.r.t R
To visualize what is happening I would rather keep one surface constant (call it the reference surface) and manipulate each of the two surfaces by varying the associated constants separately. This may help to resolve the parametric variation (manipulation) of each of the two surfaces. In this case the reference surface suggests itself by adding the two equations together and choosing the constant so that it involves no $k$ or $m$: $$ \frac{y^2+z^2-x^2}{2}= k+m-2= 30 \text{ say}$$ which is a hyperboloid of one sheet kept steady, so we can manipulate it with the parabolic cylinder (cutting) and the elliptic cylinder (piercing) separately, for a better understanding of the intersection by visualization. Interpretation of the main reference surface lends itself to studying the originating physics and dynamics from constitutive energy conservation etc. Clear["Global`*"] Manipulate[ ContourPlot3D[{y^2/2 + z^2/2 - x^2/2 == 30, -(x^2/2) + 16.7 z == 10 + m}, {x, -40, 40}, {y, -40, 40}, {z, -10, 40}, Mesh -> None, ContourStyle -> Directive[S
|general-topology|geometry|geometric-topology|
0
Order of automorphism group of abelian group
In Derek Robinson's A Course in the Theory of Groups , exercise 1.5.13 states: Let $G=\mathbb{Z}_{p^{n_1}}\oplus\cdots\oplus\mathbb{Z}_{p^{n_k}}$, where $n_1 Now, this exercise is wrong. What is true, however, is that $$|Aut(G)|=(p-1)^kp^r$$ for some $r$. Is there an elementary way to prove this? It's pretty easy to see that if $\alpha\in Aut(G)$ fixes pointwise the quotients $G_{i+1}/G_i$, then it has order a power of $p$. If $N\lhd Aut(G)$ is the subgroup of all such $\alpha$, then for every $xN\in Aut(G)/N$, $(xN)^{p-1}=1$. That is, $Aut(G)/N$ has exponent $p-1$. But I don't see a way to show $|Aut(G)/N|=(p-1)^k$. Of course, it is entirely possible such an easy proof does not exist. But that makes me wonder what the point of Robinson's exercise is.
The accepted answer constructs a series of subgroups, but doesn't explain in detail why the series is characteristic. It hints there is a more insightful argument than the calculations given here -- maybe somebody can post that. Also, we point out where in the accepted answer it is necessary that the series is characteristic. In a characteristic series each term of the series is a characteristic subgroup of the whole group. (Robinson defines the term later in Chapter 3, and there gives an ambiguous definition that can be read as a strongly characteristic series . Thanks to @Steve D for pointing out the difference.) The series given contains duplicate terms. Since each non-trivial factor has size $p$ , the series has $t=\sum_{i=1}^k n_i$ non-trivial factors, while the construction has $kn_1>t$ terms. The series is characteristic To aid calculations we introduce new notation. Rewrite the order constraints in the accepted answer as $G_m^i=\oplus_{j=1}^k G_j^{i,m}$ with $$ G_j^{i,m} = p^{r
|group-theory|finite-groups|abelian-groups|p-groups|automorphism-group|
0
Necessary Assumptions when Deriving Poisson Distribution
Poisson distribution expresses the probability that a specific number of discrete independent events happen over a fixed time interval, as long as the events are sufficiently rare. To be precise, I accept the following premises for Poisson events: The probability of exactly $k$ events happening over a time interval $[t_1, t_2]$ depends only on its length $\Delta t = t_2-t_1$ . Probability distributions for any two non-overlapping intervals are independent. The expected value of the number of events happening over the period $T$ is finite. No two events can happen at exactly the same time. However, when searching for derivations of the Poisson distribution from first principles online, I saw all of them make the following extra assumption: For a sufficiently small interval $\mathrm{d}t$ , the probability of exactly one event occurring equals $\mu \mathrm{d}t$ , and we can neglect the probability that two or more events will occur. See for example this , this , and this article. Do we really need this ad
The extra assumption is not necessary and can be derived from the four assumptions you gave. We will use the following notation in our proof: $T$ : the total length over which we compute the Poisson distribution. $P(K=k; \Delta t)$ : the probability of exactly $k$ events happening over an interval of length $\Delta t$ . $E(K; \Delta t)$ : the expected number of events happening over an interval of length $\Delta t$ . Proof. Split the interval $T$ into $n$ smaller intervals of length $\Delta t_n = T/n$ . Denote by $p_n = P(K > 0; T/n)$ the probability of at least one event happening within a smaller interval. The expected number of events happening over an interval of length $\Delta t_n$ is at least $p_n$ , as can be seen: $$E(K; \Delta t_n) = \sum_{k=1}^{\infty} k \cdot P(K=k; \Delta t_n) \geq \sum_{k=1}^{\infty} P(K=k; \Delta t_n) = p_n $$ The expected value of $K$ over the full time interval $T$ is the sum of the expected values over all the smaller $\Delta t_n$ intervals. Combined with the above inequality, we get
|poisson-distribution|
1
How to evaluate $\int_0^{\infty } \left(\frac{1}{(x+1)^2 \log (x+1)}-\frac{\log (x+1) \tan ^{-1}(x)}{x^3}\right) \, dx$
How to evaluate $$\int_0^{\infty } \left(\frac{1}{(x+1)^2 \log (x+1)}-\frac{\log (x+1) \tan ^{-1}(x)}{x^3}\right) \, dx = G - \gamma + \frac{1}{4} \pi \log 2 - \frac{3}{2}.$$ I made some progress. Integrating the second term by parts shows that negative of the integral equals $$ -I = 1/2 + 1/2 \int_0^{\infty} \,dx \frac{1}{x^2} \left[ \frac{ \ln(1+x)}{1+x^2} + \frac{ \tan^{-1} x }{1+x} \right] - \frac{1}{(1+x)^2 \ln(1+x)}. $$ Integrating the arctan term by parts and using the partial fraction expansion of $\frac{1}{x^2(1+x)}$ yields $$ -I = 1 - G/2 - \frac{\pi}{8} \ln 2 + \int_0^{\infty} \,dx \frac{1}{2(1+x^2)} \left[\frac{\ln(1+x)}{x^2} + \frac{1}{x} \right] - \frac{1}{(1+x)^2 \ln (1+x)}. $$ Here I used the integrals $$ \int_0^{\infty} \,dx \frac{\ln x}{1+x^2} = 0$$ and $$ \int_0^{\infty} \,dx \frac{\ln (1+x)}{1+x^2} = G + \frac{\pi}{4} \ln 2.$$ The latter still requires proof. The value of the integral follows immediately if we can show that $$ \int_0^{\infty} \,dx \frac{1}{x} \left[
To show that $$ \int_0^{\infty} \frac{1}{x}\left(\frac{1}{1+x^2}-e^{-x}\right) dx=\gamma, $$ we can use integration by parts against $\frac{1}{x}$ : $$ \begin{aligned} \int_0^{\infty} \frac{1}{x}\left(\frac{1}{1+x^2}-e^{-x}\right) d x = & \int_0^{\infty}\left(\frac{1}{1+x^2}-e^{-x}\right) d(\ln x) \\ = & {\left[\ln x\left(\frac{1}{1+x^2}-e^{-x}\right)\right]_0^{\infty} } - \int_0^{\infty} \ln x\left[-\frac{2 x}{\left(1+x^2\right)^2}+e^{-x}\right] d x\\=& 2 \int_0^{\infty} \frac{x \ln x}{\left(1+x^2\right)^2} d x+\gamma\\=& \gamma \end{aligned} $$ using $\int_0^{\infty} e^{-x}\ln x \, dx = -\gamma$ , and where $$ \int_0^{\infty} \frac{x \ln x}{\left(1+x^2\right)^2} d x \stackrel{x\mapsto\frac{1}{x}}{=} -\int_0^{\infty} \frac{x \ln x}{\left(x^2+1\right)^2} d x \Rightarrow \int_0^{\infty} \frac{x \ln x}{\left(1+x^2\right)^2} d x =0 $$
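A quick numerical check of this value (a sketch: the cutoff at $x=100$ and the Simpson grid are arbitrary choices, and the neglected tail beyond the cutoff is only about $5\times 10^{-5}$):

```python
import math

def integrand(x):
    if x == 0.0:
        return 1.0  # the integrand extends continuously to 1 at x = 0
    return (1.0 / (1.0 + x * x) - math.exp(-x)) / x

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(integrand, 0.0, 100.0, 100_000)
EULER_GAMMA = 0.5772156649015329
```

`val` agrees with $\gamma$ to roughly the size of the neglected tail.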
|calculus|integration|definite-integrals|improper-integrals|closed-form|
0
I'm stuck in calculating $_2F_1(1,1;\tfrac12;\tfrac12)$
By using $(71)$ and $(70)$ identities of Mathworld's hypergeometric function page, $${_2}F{_1}(1,1;\tfrac12;\tfrac12)\stackrel{(71)}{=}2\,{_2}F{_1}(1,-\tfrac12;\tfrac12;-1)\stackrel{(70)}{=}2\left(\tfrac{\sqrt\pi}2\tfrac{\Gamma(\tfrac12)}{\Gamma(1)}+1\right)=\pi+2.$$ But, WolframAlpha calculates it as $\tfrac{\pi}2+2.$ Where is my mistake? I also wonder whether there is an Euler-type integral that can be used to calculate ${_2}F{_1}(1,1;\tfrac12;\tfrac12)$ . Thanks for reading.
UPDATE : The identity $${_2}F{_1}(1,1;\tfrac{1}{2};\tfrac{1}{2}) = \frac{\pi}{2}+2$$ can actually be obtained directly from Euler's integral representation . I show this in the second part of my answer. As Semiclassical pointed out in the comments, equation (70) should be $${_2}F_1(1,-a;a;-1)=\frac{\sqrt{\pi}}{2} \frac{\Gamma(a+1)}{\Gamma(a+\frac{1}{2})}+1. $$ We can use Euler's integral representation to prove that this identity holds for $a>0$ . First assume that $a>1$ . Using Euler's integral representation, we have $$ \begin{align} {_2}F_1(1,-a;a;-1) &= {_2}F_1(-a,1;a;-1) \\ &= \frac{1}{B(1,a-1)} \int_{0}^{1} x^{1-1} (1-x)^{a-2}(1+x)^a \, \mathrm dx \\&= (a-1) \int_{0}^{1} (1-x)^{a-2}(1+x)^{a} \, \mathrm dx. \end{align}$$ Then integrating by parts, we have $$ \begin{align} {_2}F_1(1,-a;a;-1) &= - (1+x)^{a}(1-x)^{a-1} \bigg|_{0}^{1} + a\int_{0}^{1} (1-x)^{a-1}(1+x)^{a-1} \, \mathrm dx \\ &= 1 +a \int_{0}^{1} (1-x^{2})^{a-1} \, \mathrm dx. \end{align}$$ We now have an integral
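The closed form $\frac{\pi}{2}+2$ can also be sanity-checked by summing the Gauss series for ${_2}F_1(1,1;\frac12;z)$ directly at $z=\frac12$ (a numerical sketch; the term count is an arbitrary choice):

```python
import math

def hyp2f1_11_half(z, terms=200):
    # 2F1(1,1;1/2;z) = sum_k [ (1)_k (1)_k / ((1/2)_k k!) ] z^k,
    # with term ratio a_{k+1}/a_k = (k+1) z / (k+1/2), so the series
    # converges geometrically for |z| < 1.
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (k + 1.0) * z / (k + 0.5)
    return total

val = hyp2f1_11_half(0.5)
target = math.pi / 2 + 2   # the value claimed in the answer
```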
|special-functions|hypergeometric-function|
0
Why are coordinate maps diffeomorphisms (revisited)
I have the same question as this post here. To summarise, the poster introduces an apparent counter-example. Consider the manifold $\mathbb{R}$ and a spanning atlas $(\mathbb{R}, \psi)$ where $\psi = x^{1/3}$ . Clearly $\psi$ is a homeomorphism, but not a diffeomorphism. I believe the easy counter-example is to see that $\partial_x\psi=(1/3)x^{-2/3}$ , which is clearly not defined at $x = 0\in\mathbb{R}$ . Hence, the counter-example. I do not understand the accepted answer on the post. I am also following Loring Tu's book on manifolds, so I know that to show that a map $f: M\rightarrow\mathbb{R}$ is smooth on a manifold $M$ we need to ensure that for every chart $(U, \phi)$ on $M$ the composition $f\circ\phi^{-1}$ is $C^{\infty}$ . It seems that the answer to the post is saying that $\psi$ is a diffeomorphism since the composition $\psi\circ\psi^{-1} = \mathbb{1}$ is clearly smooth. This agrees with the definition of smoothness just mentioned, but it also seems incompatible with the smoothness
As usual, the simplest examples in differential topology are also the most confusing; largely because we use precise and general definitions which are good for proving general statements, but take care to unpack into what should be easy examples. The crux of the matter is that we can equip $\mathbb R$ with two smooth structures. The first is the standard one given by $(\mathbb R, I)$ , where $I$ is the identity map. The second is $(\mathbb R, \phi)$ , where $\phi$ is the homeomorphism $x\mapsto x^3$ (this defines a smooth structure because $\phi^{-1}\circ \phi=I$ , which is smooth, so we just throw in any other charts which are smoothly compatible with $\phi$ to obtain a maximal smooth atlas). These smooth structures are not the same (meaning the maximal smooth atlases they induce are different) because $I\circ \phi^{-1}(x)=x^{1/3}$ is not smooth at the origin, it has a cusp. It follows that $(\mathbb R,I)$ and $(\mathbb R, \phi)$ are "different" smooth manifolds, in the sense
|manifolds|differential-topology|
0
Some integral representations of the Euler–Mascheroni constant
What kind of substitution should I use to obtain the following integrals? $$\begin{align} \int_0^1 \ln \ln \left(\frac{1}{x}\right)\,dx &=\int_0^\infty e^{-x} \ln x\,dx\tag1\\ &=\int_0^\infty \left(\frac{1}{xe^x} - \frac{1}{e^x-1} \right)\,dx\tag2\\ &=-\int_0^1 \left(\frac{1}{1-x} + \frac{1}{\ln x} \right)\,dx\tag3\\ &=\int_0^\infty \left( e^{-x} - \frac{1}{1+x^k} \right)\,\frac{dx}{x},\qquad k>0\tag4\\ \end{align}$$ These are not homework problems, and I know that the above integrals equal $-\gamma$ (where $\gamma$ is the Euler-Mascheroni constant). I got these integrals while reading this Wikipedia page . According to Wikipedia, the Euler–Mascheroni constant is defined as the limiting difference between the harmonic series and the natural logarithm: $$\gamma=\lim_{N\to\infty} \left(\sum_{k=1}^N \frac{1}{k} - \ln N\right)$$ but I don't see how this definition can be connected to the above integrals. I can obtain equation $(1)$ using the substitution $t=\ln \left(\frac{1}{x}\right)\r
To evaluate the fourth one, we can use integration by parts against $\frac{1}{x}$ : $$ \begin{aligned} \int _0^{\infty} \frac{1}{x}\left(\frac{1}{1+x^k}-e^{-x}\right) d x = & \int_0^{\infty}\left(\frac{1}{1+x^k}-e^{-x}\right) d(\ln x) \\ = & {\left[\ln x\left(\frac{1}{1+x^k}-e^{-x}\right)\right]_0^{\infty} } - \int_0^{\infty} \ln x\left[-\frac{kx^{k-1}}{\left(1+x^k\right)^2}+e^{-x}\right] d x\\ =& k\int_0^{\infty} \frac{x^{k-1} \ln x}{\left(1+x^k\right)^2} d x+\gamma \\=& \gamma \end{aligned} $$ using $\int_0^{\infty} e^{-x}\ln x \, dx = -\gamma$ , and where $$\int_0^{\infty} \frac{x^{k-1} \ln x}{\left(1+x^k\right)^2} d x \stackrel{x\mapsto\frac{1}{x}}{=} -\int_0^{\infty} \frac{x^{k-1} \ln x}{\left(x^k+1\right)^2} d x \Rightarrow \int_0^{\infty} \frac{x^{k-1} \ln x}{\left(1+x^k\right)^2} d x =0 $$
|calculus|real-analysis|integration|sequences-and-series|improper-integrals|
0
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $$\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$$ Starting from the condition $xyz =1$ I did this: $$x^3+y^3+z^3\ge3\sqrt[3]{x^3y^3z^3}$$ $$x^3+y^3+z^3\ge3$$ I also know that the fraction $\frac{3}{2}$ usually comes either from the famous Nesbitt inequality or from some application of Titu's lemma that leads to $\frac{9}{4}$ , which is then simplified. However I can't find any way to manipulate the inequality. I did try to multiply and rewrite the inequality so that $x,y,z$ appear to an even power, like this: $$\frac{x^4}{x^2+yz}+\frac{y^4}{y^2+xz}+\frac{z^4}{z^2+xy}\ge\frac{3}{2}$$ However this didn't get me anywhere. Any kind of help is welcome.
No tricks required, just standard techniques. Lemma: If $ \frac{ x+y+z}{3} = A $ , then $\sum \frac{ x^5 } { x^2 + 1 } \geq 3 \frac{ A^5}{A^2 + 1 }$ . Proof: Apply Jensen's inequality to $f(x) = \frac{x^5}{x^2+1}$ , after working through the tedious verification that $f''(x) = \frac{ 2x^3 (3x^4 + 9x^2 + 10) } { (x^2+1)^3 } \geq 0$ for $x \geq 0$ . Corollary: The condition $ xyz=1 $ together with AM-GM tells us that $ x+y+z \geq 3$ , i.e. $A \geq 1$ . Since $t \mapsto \frac{t^5}{t^2+1}$ is increasing for $t \geq 0$ , $$ \sum \frac{ x^5 }{ x^2 + 1 } \geq 3 \frac{ A^ 5 } { A^2 + 1 } \geq 3 \cdot \frac12 = \frac{3}{2}.$$ Equality holds iff $ x=y=z = 1$ .
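A brute-force numerical check of the inequality and of the equality case (the sampling range and the seed below are arbitrary choices for the test):

```python
import random

def lhs(x, y, z):
    # left-hand side of the inequality
    return sum(t ** 5 / (t * t + 1) for t in (x, y, z))

random.seed(0)
violations = 0
for _ in range(10_000):
    x = random.uniform(0.05, 5.0)
    y = random.uniform(0.05, 5.0)
    z = 1.0 / (x * y)              # enforce the constraint xyz = 1
    if lhs(x, y, z) < 1.5 - 1e-9:
        violations += 1

equality_case = lhs(1.0, 1.0, 1.0)  # 3 * (1/2) = 3/2 at x = y = z = 1
```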
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
0
Regarding matrices that are positive definite on a subspace
If $A$ is positive definite relative to a subspace $L \subset \mathbb{R}^n$ , i.e. $$\forall x \in L, x \ne 0, \quad \langle x, Ax \rangle > 0 $$ then can we find a positive definite matrix $B$ (on the entire $\mathbb{R}^n$ ) such that $$\forall x \in L, \quad \langle x, Bx \rangle = \langle x, Ax \rangle? $$ To approach this question, I have verified its correctness in the trivial cases of $L$ and when $L$ is a $1$ -dimensional subspace. Indeed, let $L = \langle \bar{x} \rangle$ for $\bar{x} \ne 0$ . The positive definiteness of $A$ relative to $L$ reduces to the single condition $$\langle \bar{x}, A \bar{x} \rangle > 0. $$ Consider the diagonal matrix $B$ with the entries on the main diagonal being $b_1,\ldots,b_n > 0$ . It is obvious that $B$ is positive definite, now we will choose $b_i$ to suit our need. Since $\bar{x} \ne 0$ , there exists $i$ such that $\bar{x}_i \ne 0$ . We need $$\sum\limits_{i=1}^n b_i \bar{x}_i^2 = \langle \bar{x}, B\bar{x} \rangle = \langle \bar{x}, A \bar{
Yes, you can. Choose a subspace $M\subset \mathbb{R}^n$ such that $L\oplus M=\mathbb{R}^n$ (this can always be done). Given positive definite quadratic forms $Q_L:L\to \mathbb{R}$ and $Q_M:M\to \mathbb{R}$ , one has a positive definite quadratic form $Q:L\oplus M\to \mathbb{R}$ given as $(l,m)\mapsto Q_L(l)+Q_M(m)$ . Moreover, $Q|_L=Q_L$ .
|matrices|positive-definite|
1
Probability of *at least* k same specific digits in an n-digit sequence?
Given an n-digit integer sequence, e.g. $n=5$ [1, 1, 5, 6, 4] where each digit is independent and uniformly random in the range 1-6, what is the probability of having at least k same digits of a certain value (e.g. 1)? Specifically I'm interested in $k = n/2$ (rounded up) and n in the range 1 - 40. So ... f(k, n) = Probability of *at least* k same of a specific digit = ... n ... k ...? Background: This is as far as I got with the question of plotting the probability of rolling $n$ dice once and having at least $k$ 1 s in the resulting roll. (e.g. at least half the dice showing 1 .) Yes, there are $6^n$ combinations to roll $n$ dice. The probability for each die to show or not show a 1 is $1/6$ or $5/6$ respectively. Yes, the question is the same whether asking if we have at least half 1 s or at least half 4 s ... any single digit, the others are irrelevant. And, yes, it does not matter if I roll $n$ dice once to generate the sequence or one die $n$ times. That it is $1/6$ for one die i
$$P(k, n) = \binom{n}{k} p^k (1-p)^{n-k}$$ Where $\binom{n}{k}$ is the number of ways to choose which $k$ of the $n$ dice succeed, $p^k$ is the probability of success raised to the number of successes, and $(1-p)^{n-k}$ is the probability of failure raised to the number of failures. This assumes exactly $k$ successes and $(n-k)$ failures. If you want at least $k$ successes, you have to add up the probabilities for every count from $k$ to $n$ : $$P(\text{at least } k \text{ of } n) = \sum_{j=k}^n\binom{n}{j} p^j (1-p)^{n-j}$$ We can also write it as the number of successful cases over the total number of cases. If $m$ is the number of faces per die (here $m=6$ , so $p = 1/m$ ): $$P(k, n) = \frac{\binom{n}{k}(m-1)^{n-k}}{m^n}$$ Or, for at least $k$ successes: $$P(\text{at least } k \text{ of } n) = \sum_{j=k}^n\frac{\binom{n}{j}(m-1)^{n-j}}{m^n}$$
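The at-least-$k$ sum translates directly into code; `prob_at_least` is a name introduced here, and `math.comb` is the standard-library binomial coefficient:

```python
from math import comb

def prob_at_least(k, n, p=1.0 / 6):
    # P(at least k successes in n independent trials, each succeeding with prob p)
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(k, n + 1))

# probability of at least half 1s when rolling n dice, i.e. k = ceil(n/2):
half_ones = {n: prob_at_least((n + 1) // 2, n) for n in (1, 2, 4, 40)}
```

For $n=2$ this gives $1-(5/6)^2 = 11/36$, and the values shrink rapidly as $n$ grows.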
|probability|combinatorics|dice|
1
How many nilpotent matrices are there in $M_n(\mathbb R)$ up to similarity?
I am trying to count all nilpotent matrices in $M_n(\mathbb R)$ up to similarity. I did the same exercise for idempotent matrices and it was quite simple. I realised that rank of an idempotent matrix is a non-negative integer and two idempotent matrices are similar iff they have the same rank. So I got the answer $n+1$ . However, it doesn't seem to be simple for nilpotent matrices. Things which are clear to me: If two nilpotent matrices are similar, they must have the same order of nilpotence. There is exactly one class of similarity for nilpotent matrices of order $n$ . The question I would like to ask: How many nilpotent matrices of order $k$ are there up to similarity? The answer is $1$ if $k=n$ . The answer is $1$ again if $k=1$ . (The null matrix is the only matrix which has nilpotence of order $1$ and it is its own similarity class.) What about other values of $k$ ? By looking at Jordan normal form, here's what I found about $n=2$ and $n=3$ . How to generalize? For $n=2$ , order
A matrix $A$ is nilpotent if and only if $0$ is its only eigenvalue, hence if and only if its Jordan canonical form is a direct sum $\bigoplus J_{k_i}$ of Jordan blocks $$J_{k_i} := J_{k_i}(0) = \pmatrix{0&1\\&0&\smash{\ddots}\\&&\smash{\ddots}&\smash{\ddots}\\&&&0&1\\&&&&0}$$ of size $k_i$ and eigenvalue $0$ ; if $A$ has size $n \times n$ , then the sum of the sizes of the Jordan blocks is $\sum k_i = n$ . On the other hand, two matrices are similar if and only if they have the same Jordan canonical form (allowing permutations of blocks), so the map $J_{k_1} \oplus \cdots \oplus J_{k_r} \leftrightarrow (k_1, \ldots, k_r)$ defines a bijection between the set of similarity classes of $n \times n$ nilpotent matrices (represented by their Jordan canonical form) and the set of (unordered) partitions of $n$ . The counts $a(n)$ of the similarity classes thus comprise OEIS A000041 . For small $n$ , the counts $a(n)$ are as follows. $$ \begin{array}{rcccccccccc} \hline n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 &
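The correspondence with partitions makes the count easy to compute; a standard dynamic-programming sketch (stdlib only):

```python
def partitions(n):
    # number of unordered partitions of n (OEIS A000041), by the
    # classic "allow parts of size 1, then 2, ..." recurrence
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

# similarity classes of n x n nilpotent matrices for n = 1..8
counts = [partitions(n) for n in range(1, 9)]   # [1, 2, 3, 5, 7, 11, 15, 22]
```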
|linear-algebra|combinatorics|matrices|
1
Does $\frac{\int^a_b f(x)dx}{\int^a_b g(x)dx}= \int^a_b \frac{f(x)}{g(x)}dx$ for $g(x)\neq 0$ with $x\in [a,b]$
Does $$\frac{\int^a_b f(x)dx}{\int^a_b g(x)dx}= \int^a_b \frac{f(x)}{g(x)}dx$$ hold if $g(x)\neq 0$ for $x\in [a,b]$ ? I came up with this question while learning the Mellin transform. I am not sure whether this intuition of mine is right or wrong. Please give me some guidance, many thanks!
There are plenty of counterexamples. Here is one: Suppose $f(x) = \frac{1}{x}$ and $g(x) = 1$ and the interval of interest is $[1,3]$ . Then $$ \int_{1}^{3} f(x) \, dx = \ln(3), \quad \int_{1}^{3} g(x) \, dx = 2, \quad \int_{1}^{3} \frac{f(x)}{g(x)} \, dx = \ln(3), $$ so the left-hand side of the proposed identity is $\frac{\ln 3}{2}$ while the right-hand side is $\ln 3$ , and the two are not equal.
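The counterexample is easy to confirm numerically (a sketch; the Simpson rule and grid size are arbitrary choices):

```python
def simpson(f, a, b, n=10_000):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                              for i in range(1, n))) * h / 3

int_f = simpson(lambda x: 1.0 / x, 1.0, 3.0)              # ln 3
int_g = simpson(lambda x: 1.0, 1.0, 3.0)                  # 2
int_ratio = simpson(lambda x: (1.0 / x) / 1.0, 1.0, 3.0)  # ln 3 again
lhs = int_f / int_g   # ln(3)/2, visibly different from int_ratio
```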
|indefinite-integrals|
1
Why is the submodule of K[x,y] generated by x and y not a free module?
If $\mathbb{K}$ is a field, and we consider the module $\mathbb{K}[x,y]$ over itself, I've seen others give the example that the ideal $\langle x, y \rangle$ generated by $x$ and $y$ is not a free module, to demonstrate that a submodule of a free module need not be free. However, I don't understand why. From the definition of a free module, a module is free if it has a linearly independent basis. Since $x,y$ have no relations between them, they seem to form a linearly independent $\mathbb{K}[x,y]-$ basis for the ideal. So what am I missing here? Why is this submodule not free? Similarly, I've seen the example that $\mathbb{Z}_6$ is a free module over itself, but apparently $2\mathbb{Z}_6$ is not free for some reason. Again, this module is generated by $2$ , so by definition it has to be free, right? I'm confused. Any explanation would be appreciated, thank you!
They are not independent over $\mathbb{K}[x,y]$ , which is the ring in question: using boldface for the module elements, you have $y\mathbf{x} -x\mathbf{y}=\mathbf{0}$ , even though neither $x$ nor $y$ are zero. It is not an independent set. So it is not true that "there are no relations between them." They do span a free $\mathbb{K}$ -module, but that isn't what you are considering. The subgroup $\langle2\rangle$ of the integers modulo $6$ is not free since $3\mathbf{2}=\mathbf{0}$ , even though the scalar $3$ is not zero. It is not an independent set. You can also verify it does not satisfy the universal property of free modules: you cannot extend the map $\{2\}\to \mathbb{Z}_6$ that sends $2$ to $1$ to a $\mathbb{Z}_6$ -module homomorphism $\langle 2\rangle\to \mathbb{Z}_6$ .
|abstract-algebra|modules|free-modules|
0
Why is the submodule of K[x,y] generated by x and y not a free module?
If $\mathbb{K}$ is a field, and we consider the module $\mathbb{K}[x,y]$ over itself, I've seen others give the example that the ideal $\langle x, y \rangle$ generated by $x$ and $y$ is not a free module, to demonstrate that a submodule of a free module need not be free. However, I don't understand why. From the definition of a free module, a module is free if it has a linearly independent basis. Since $x,y$ have no relations between them, they seem to form a linearly independent $\mathbb{K}[x,y]-$ basis for the ideal. So what am I missing here? Why is this submodule not free? Similarly, I've seen the example that $\mathbb{Z}_6$ is a free module over itself, but apparently $2\mathbb{Z}_6$ is not free for some reason. Again, this module is generated by $2$ , so by definition it has to be free, right? I'm confused. Any explanation would be appreciated, thank you!
$2\Bbb Z_6$ is generated by $2$ but it's not freely generated by $2$ . It's not isomorphic to $\Bbb Z_6[X]$ for some set $X$ , you can see that just by checking the cardinality. Since $3\cdot2=0$ and $3\neq0$ , there is a problem... free modules must in particular be torsion free. In general for a ring $R$ , $(\alpha)$ is a free (left/right) $R$ -module iff. $\alpha$ is not a (left/right) zero divisor. With $R=\Bbb K[x,y]$ and $J=(x,y)\subset R$ ; as a $\Bbb K$ -module, $J$ is certainly free, but as an $R$ -module it is not. Yes, $x,y$ generate but the generation isn't itself free; for example there is a nontrivial linear dependence $y\cdot x-x\cdot y=0$ , which couldn't occur if $x,y$ were free generators (this in itself is not sufficient to conclude $J$ isn’t a free $R$ -module, just that it is not free in the obvious ways; however, you can combine this observation with the fact ideals of a domain are free iff. they are principal to make a conclusion).
|abstract-algebra|modules|free-modules|
0
If I had a finite amount of red balls in an infinite sea of blue balls, what is the probability of grabbing a red ball?
If I had a finite amount of red balls in an infinite sea of blue balls, what is the probability of grabbing a red ball? Is it $0$ because $\frac{n}{\infty} = 0$ ? But I can't accept that, because if every one of those balls were grabbed by one of infinitely many people, somebody would have needed to grab a red ball in order for all the balls to be grabbed.
Say there are $r$ red balls and $b$ blue balls. The probability that you grab a red ball is $\frac{r}{r+b}$ . As $b$ tends to infinity, the probability tends to $0$ .
|probability|infinity|
0
Minimal elements under division in a ring
Let $p$ be an element of a commutative ring with unity. The following definition is natural: $p$ is minimal under division if its only divisors (up to equivalence) are $1$ and itself. That is, for all $a$ , if $(p) \subseteq (a)$ then $(a) = (1)$ or $(a) = (p)$ . Although closely related, in a ring with zero divisors, this differs from the usual notions of irreducible and prime : In the ring $\mathbb{Z}[x, y] / (xy)$ , the ideal $(x)$ is prime, but not minimal under division. In particular, $x$ reduces as $x = x (1 - y)$ , and $(1 - y)$ is an ideal strictly between $(x)$ and $(1)$ . In the ring $\mathbb{Z} / 6\mathbb{Z}$ , the ideal $(2) = (4)$ is minimal under division, but not irreducible since $4 = 4 \cdot 4$ . (It is also prime.) I'm looking for references about this concept. What is such an element usually called? Notes and related questions If $(p)$ is a maximal ideal then $p$ is minimal under division, but not necessarily vice versa. If $p$ is irreducible then $p$ is minimal und
Answering my own question. I decided to take a look at Anderson and Valdes-Leon's paper (1996) . They do consider this notion, and they call it m-irreducible (p. 448): A nonunit $a \in R$ is m-irreducible if $(a)$ is a maximal element in the set of proper ideals of $R$ . This notion is related to three other notions of irreducibility, none of them equivalent (see also the discussion at this MO answer ). Here is a diagram of all the relevant notions ordered by inclusion (most of these I know are strict, but I may be missing some inclusions): maximal very strongly irreducible | \ / | m-irreducible prime | | strongly irreducible | | weakly irreducible Maximal and prime are standard ( $p$ is prime if $(p)$ is a prime ideal and $p$ is maximal if $(p)$ is a maximal ideal). The rest of the categories above can be restated as follows, in decreasing order of strength. I use weakly irreducible for what they call irreducible to avoid confusion with the usual definition. $p$ is very strongly irred
|ring-theory|reference-request|commutative-algebra|terminology|maximal-and-prime-ideals|
1
Separability of codomains of Borel functions taking values in completely regular spaces
I am looking for a reference (or a counterexample) to the following statement. Let $X$ be a separable metric space. Suppose that $Y$ is a completely regular topological space and $f\colon X\to Y$ is a surjective Borel function. Must $Y$ be separable too? This would be almost automatic if this condition forced metrisability of $Y$ , but I see no reason for this to happen.
Theorem 1. (PFA) If $f:X\to Y$ is a Borel map, $X$ is Polish, $Y$ is $T_3$ , and the induced maps $f^n:X^n\to Y^n$ are Borel, then $Y$ is hereditarily Lindelöf and hereditarily separable. In addition, $Y$ doesn't have to be metrizable, since the map $f = \text{Id}_\mathbb{R}:(\mathbb{R}, \tau_e)\to (\mathbb{R}, \tau_s)$ , where $\tau_e$ is the Euclidean topology and $\tau_s$ is the Sorgenfrey topology, is Borel, but $(\mathbb{R}, \tau_s)$ is not metrizable. Though perhaps not the best example, given that the induced map $f^2$ is surely not Borel, since the Sorgenfrey plane is not Lindelöf. Let $f:X\to Y$ be Borel where $X$ is a Polish space. In this question Taras Banakh claims it's possible to prove that $Y$ has countable tightness using the Four Poles theorem: Theorem 2. (Brzuchowski, Cichoń, Grzegorek, Ryll-Nardzewski) Suppose $I\subseteq \mathcal{P}(X)$ is a proper $\sigma$ -ideal on a Polish space $X$ with a Borel basis. Let $\mathcal{A}\subseteq I$ be a point-finite cover of $X$ . Then there is $\mathcal
|general-topology|continuity|borel-sets|separable-spaces|borel-measures|
0
If $\theta$ is a WFF, then $((\theta))$ is not a WFF
I can show that if $\theta$ is a WFF, then $(\theta)$ is not a WFF. Suppose $ (\theta )$ is a WFF. Then by unique readability, we know $(\theta) \equiv \neg\alpha$ or $(\theta) \equiv (\alpha \square \beta)$ for some formulae $\alpha$ and $\beta$ and $\square$ being one of the four binary connectives. And as $\neg$ is not the same as (, $(\theta) \equiv (\alpha \square \beta)$ , So $\theta \equiv \alpha \square \beta$ . So, $\alpha$ is a proper initial segment of $\theta$ , a contradiction as $\alpha$ is a formula. Hence $(\theta)$ is not a WFF. But I am having difficulty showing $(( \theta ))$ is not a WFF. Any hints? And how to show the general case of $n$ parenthesis on either side?
For the general case, here is the induction: Let $t_n$ be a string of $n$ "(" symbols and $v_n$ be a string of $n$ ")" symbols. We claim that if $s$ is a string and $t_nsv_n=\phi_1 \square \phi_2$ for formula $\phi_1,\phi_2$ , then $\phi_1 \square \phi_2=t_n\theta_1 \square_1 \theta_2)\square_2 \theta_3)... )\square_{n+1} \theta_{n+2}$ for some formulae $\theta_1,\theta_2..\theta_{n+2}$ and connectives $\square_1,\square_2,..\square_{n+1}$ * and we show by induction on $n$ : For the base case $(n=0)$ just put $\theta_1=\phi_1, \theta_2=\phi_2$ and $\square=\square_1$ . For the inductive step, suppose $s$ is a string and $t_{n+1}sv_{n+1}=\phi_1 \square \phi_2$ . Then $t_n(s)v_n=\phi_1 \square \phi_2$ and so by our IH, $\phi_1 \square \phi_2=t_n\gamma \square_2 \theta_3)\square_3 \theta_4)... )\square_{n+2} \theta_{n+3}$ for some formulae $\gamma, \theta_3,.. \theta_{n+3}$ and connectives $\square_2, \square_3, ..\square_{n+2}$ . But then $\gamma$ starts with ( and so by unique readabili
|propositional-calculus|
0
Using one variable vs two variables
I'm in the process of learning algebra 1 again. But I ran across this problem the other day. The sum of two numbers is $10$ and the sum of their reciprocals is $5/12$ . Find the numbers. Instinctively, I thought okay, $x+x=10$ , and when worked out, $x=5$ . I looked up the answer and it was quite different from mine. I did some research and found out that the proper way to set it up is $x+y=10$ . How is this so? I am confused. Thank you beforehand.
You (implicitly) assumed that the numbers were equal when you wrote $x + x =10$ , because $x = x$ for all numbers $x$ . You should choose different letters for variables if you aren't sure they're the same.
|word-problem|
0
Using one variable vs two variables
I'm in the process of of again learning algebra 1. But I ran across this problem the other day. The sum of two numbers is $10$ and the sum of their reciprocals is $5/12$ . Find the numbers. Instinctively, I thought okay $x+x=10$ And when worked out $x=5$ . Looked up the answer and quite different to mine. Did some research and found out the proper way to first solve for it should be $x+y=10$ . How is this so? I am confused. Beforehand thank you.
You tried $5$ , but the sum of the reciprocals $1/5+1/5 \ne 5/12$ . Hint: Did you try 3, 4? What are the factors of 12? These will help you figure out what $x$ and $y$ must be. Answer: If you choose $x=4$ and $y=6$ then $x+y = 4+6=10$ and $1/4+1/6=5/12.$
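The algebra behind this answer can be mirrored in a few lines: the two conditions force the sum and product of the numbers, and the quadratic formula does the rest (a sketch).

```python
# x + y = 10 and 1/x + 1/y = (x + y)/(xy) = 5/12 force xy = 10 / (5/12) = 24,
# so x and y are the roots of t^2 - 10 t + 24 = 0.
s, q = 10.0, 24.0                  # sum and product
disc = s * s - 4 * q               # discriminant: 100 - 96 = 4
x = (s - disc ** 0.5) / 2          # 4.0
y = (s + disc ** 0.5) / 2          # 6.0
recip_sum = 1 / x + 1 / y          # 5/12
```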
|word-problem|
0
Is there a perfect group in which not every element is a commutator?
Is there a perfect group in which not every element is a commutator? By a well-known fact, it must have order at least $96.$ By Ore's conjecture (now a theorem), it must be infinite or non-simple.
A good source to look for examples is I.M.Isaacs' paper "Commutators and the commutator subgroup", The American Mathematical Monthly, Vol. 84, No. 9 (Nov., 1977), pp. 720-722 (3 pages), available from JSTOR . The main theorem is: Theorem. Let $U$ and $H$ be finite groups, $U$ abelian and $H$ nonabelian. Let $G=U\wr H$ be their wreath product. Then $G'$ contains a noncommutator if $$\sum_{A\in\mathscr{A}} \left(\frac{1}{|U|}\right)^{[H:A]}\leq \frac{1}{|U|},$$ where $\mathscr{A}$ is the set of maximal abelian subgroups of $H$ . In particular this condition holds if $|U|\geq|\mathscr{A}|$ . Towards the end he notes that if $H$ is simple and $U$ an abelian group, then the wreath product $G=U\wr H$ satisfies $G'=G''$ , so if $U$ is large enough then $G'$ is a perfect group in which not every element is a commutator.
|abstract-algebra|group-theory|conjectures|
0
Another Sophomore's Dream: $\int_{-\infty}^{\infty}\binom{n}{x}dx=\sum_{i=0}^{n} \binom{n}{i}$
I found an identity $$\int_{-\infty}^{\infty}\binom{n}{x}dx=\sum_{i=0}^{n} \binom{n}{i}$$ where the LHS can be calculated via the Reflection relation and the Dirichlet integral . The result is $2^n$ , which is apparently equal to the RHS. This identity is in the form "integration=summation", which is similar to the "Sophomore's dream" , $\int_0^1 x^{-x}dx = \sum_{n=1}^\infty n^{-n}$ . Is this another coincidence? If not, what is the reason that makes it true?
A very heuristic argument shows that this is not a mere coincidence. Indeed, we begin with a more general setting and later discuss how this is related to OP's identity. Let $f(\theta)$ be a locally integrable $1$ -periodic function on $\mathbb{R}$ . Define $\hat{f}(x)$ by $$ \hat{f}(x) = \int_{-1/2}^{1/2} f(\theta) e^{-2\pi i x \theta} \, \mathrm{d}\theta. $$ Note that $\hat{f}(x)$ is precisely the Fourier series coefficient of $f$ when $x \in \mathbb{Z}$ , but here we allow $x$ to take values in $\mathbb{R}$ . (Alternatively speaking, we are considering a class of functions $\hat{f}(x)$ whose Fourier transform is supported on $[-\frac{1}{2}, \frac{1}{2}]$ .) We also note that, informally, we have $$ \int_{-\infty}^{\infty} e^{-2\pi i x \theta} \, \mathrm{d}x = \delta_0(\theta). $$ This is just a formal way of stating the Fourier inversion theorem . Combining these together, without worrying about mathematical rigor, we expect to have \begin{align*} \int_{-\infty}^{\infty} \hat{f}(x) \, \mathrm{d}x = \int_{-1/2}^{1/2} f(\theta) \delta_0(\theta) \, \mathrm{d}\theta = f(0). \end{align*}
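Independently of the Fourier heuristic, the identity itself can be sanity-checked numerically. The reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$ gives $\binom{n}{x}=\frac{n!\,\sin(\pi x)}{\pi x\prod_{k=1}^{n}(k-x)}$, which is smooth away from its removable singularities at integers. A sketch (window size and step are arbitrary choices):

```python
import math

def gen_binom(n, x):
    # C(n, x) = n! sin(pi x) / (pi x (1 - x)(2 - x)...(n - x)),
    # derived from Gamma(z) Gamma(1 - z) = pi / sin(pi z)
    den = math.pi * x
    for k in range(1, n + 1):
        den *= (k - x)
    return math.factorial(n) * math.sin(math.pi * x) / den

def window_integral(n, half_width=200.0, h=0.01):
    # midpoint rule on [-half_width, half_width]; the half-step offset
    # keeps the grid off the removable singularities at integer x
    m = round(2 * half_width / h)
    return h * sum(gen_binom(n, -half_width + (i + 0.5) * h) for i in range(m))
```

For $n\ge 2$ the integrand decays like $|x|^{-(n+1)}$, so the truncated window already matches $2^n$ to a few decimal places.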
|calculus|integration|binomial-coefficients|binomial-theorem|
1
Nocedal 3.25. Calculate $\frac{\partial \phi}{\partial \alpha}$ where $\phi(\alpha) = f(x - \alpha \nabla f)$ and $f(x) = \frac{1}{2} x^T Qx - b^T x$
I'm trying to understand the calculation of equation 3.25 in the textbook "Numerical Optimization" by Nocedal and Wright. Here is my work: \begin{gather*} f: \mathbb{R}^n \to \mathbb{R}, \quad x,b \in \mathbb{R}^n, \quad Q \in \mathbb{R}^{n \times n} \\ f(x) = \frac{1}{2} x^T Qx - b^T x \\ \end{gather*} where $Q$ is symmetric and positive definite. Then the gradient is: \begin{gather*} \nabla f(x) = Qx - b \\ \end{gather*} Let: \begin{gather*} \phi(\alpha) = f(x - \alpha \nabla f) = \frac{1}{2} (x - \alpha \nabla f)^T Q(x - \alpha \nabla f) - b^T (x - \alpha \nabla f) \\ \end{gather*} Calculate the partial derivative $\frac{\partial \phi}{\partial \alpha}$ : \begin{gather*} \frac{\partial \phi}{\partial \alpha} = - \frac{1}{2} \nabla f^T Q(x - \alpha \nabla f) - \frac{1}{2} (x - \alpha \nabla f)^T Q \nabla f + b^T \nabla f \\ = - \frac{1}{2} \nabla f^T (Qx) + \frac{1}{2} \alpha (\nabla f)^T (Q \nabla f) - \frac{1}{2} x^T (Q \nabla f) + \frac{1}{2} \alpha (\nabla f)^T (Q \nabla f) + b^T \nabla f \end{gather*}
Both of you are right. Note that $Q$ is symmetric: \begin{align} \alpha &= \frac{\frac{1}{2} \nabla f^T (Qx) + \frac{1}{2} x^T (Q \nabla f) - b^T \nabla f}{(\nabla f)^T (Q \nabla f)} \\ &=\frac{\frac{1}{2} x^T (Q\nabla f) + \frac{1}{2} x^T (Q \nabla f) - b^T \nabla f}{(\nabla f)^T (Q \nabla f)} \\ &=\frac{ x^T (Q \nabla f) - b^T \nabla f}{(\nabla f)^T (Q \nabla f)} \\ &= \frac{(Qx-b)^T\nabla f}{(\nabla f)^T (Q \nabla f)}\\ &= \frac{\nabla f^T\nabla f}{(\nabla f)^T (Q \nabla f)} \end{align}
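A quick numerical check of the closed form on a small symmetric positive-definite example (all numbers below are made up): since $\phi$ is exactly quadratic in $\alpha$, the central difference of $\phi$ at the computed $\alpha$ should vanish to roundoff.

```python
def matvec(Q, v):
    return [sum(q * x for q, x in zip(row, v)) for row in Q]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# hypothetical small SPD example
Q = [[2.0, 0.5], [0.5, 1.0]]
b = [1.0, 2.0]
x = [0.3, -0.7]

def f(p):
    return 0.5 * dot(p, matvec(Q, p)) - dot(b, p)

g = [gi - bi for gi, bi in zip(matvec(Q, x), b)]   # gradient Qx - b
alpha = dot(g, g) / dot(g, matvec(Q, g))           # closed form derived above

def phi(a):
    return f([xi - a * gi for xi, gi in zip(x, g)])

eps = 1e-6
dphi = (phi(alpha + eps) - phi(alpha - eps)) / (2 * eps)  # should be ~0
```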
|multivariable-calculus|optimization|numerical-optimization|
1
Expected path length in a ladder-like graph if edges can be randomly removed.
Question from an old exam: The King of Squares sets out to patrol the City of Squares. The City of Squares is an infinite ladder, i.e., a graph $(V, E)$ where $V = \mathbb{N} \times \{0,1\}$ and the vertex $(x, y)$ is connected to $(x-1, y)$ for $x>0,(x+1, y)$ and $(x, 1-y)$ . Unfortunately, due to the revolt, some parts of the streets are blocked. Each edge independently becomes blocked with probability $\frac{1}{2}$ . The king leaves his palace at the point $(0,0)$ . Let $X$ be the largest value of the coordinate $x$ that the King can reach without passing through the blocked streets. The king can look ahead, so he will choose the best route before he sets off. In the situation in the image below $X=5$ . (a) Find EX. (b) Find the probability that both points $(X, 0)$ and $(X, 1)$ are reachable. My attempt at (a): Let $$ X_i = \begin{cases} i, & \text{if it's possible to reach $(i, 0)$ or $(i,1)$} \\ 0, & \text{otherwise}. \end{cases} $$ Now we need to find the probability $p(i)$ that
Define a Markov chain as follows: There are $4$ states $11,10,01,00$ . We are in state $10$ at time $i$ if $(i,0)$ is reachable using only roads to the left of $i$ (i.e. left-only reachable) but $(i,1)$ is not, in state $11$ if both $(i,0)$ and $(i,1)$ are left-only reachable, and similarly for the other two states. At $i=0$ we are in $11$ or $10$ with equal probability, corresponding to whether or not the leftmost rung of the ladder is blocked. Three edges are needed to compute the transition probabilities (the next rung and the two side edges), and the transition matrix acts on the state distribution as $$\begin{bmatrix}p_{11}'\\p_{10}'\\p_{01}'\\p_{00}'\end{bmatrix}=\begin{bmatrix} 1/2&1/4&1/4&0\\1/8&1/4&0&0\\1/8&0&1/4&0\\1/4&1/2&1/2&1\end{bmatrix}\begin{bmatrix}p_{11}\\p_{10}\\p_{01}\\p_{00}\end{bmatrix}$$ Let the upper-left $3\times 3$ submatrix be $Q$ ; $Q^iv=Q^i(1/2,1/2,0)^T$ furnishes the probabilities of being in each transient (non-absorbing) state at time $i$ . The remaining nonzero part of the bottom row, $b=(1/4,1/2,1/2)$ , gives the absorption probability out of each transient state. For this probl
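Under this transition structure, $EX=\sum_{i\ge 1}P(X\ge i)=\mathbf{1}^T\big((I-Q)^{-1}-I\big)v$, which evaluates to $9/5$. A sketch (truncation width, trial count and seed are arbitrary) that computes this and cross-checks it against a direct Monte Carlo simulation of the blocked ladder:

```python
import random

def exact_EX():
    # transient part (states 11, 10, 01) of the chain described above;
    # E[X] = sum_{i>=1} P(X >= i) = 1^T (I-Q)^{-1} v - 1
    Q = [[0.5, 0.25, 0.25],
         [0.125, 0.25, 0.0],
         [0.125, 0.0, 0.25]]
    v = [0.5, 0.5, 0.0]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(3)] + [v[i]]
         for i in range(3)]
    for c in range(3):                      # Gauss-Jordan elimination
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(3):
            if r != c:
                fac = A[r][c] / A[c][c]
                A[r] = [a - fac * b for a, b in zip(A[r], A[c])]
    return sum(A[i][3] / A[i][i] for i in range(3)) - 1.0

def simulate_X(rng, width=40):
    # each street is open independently with probability 1/2; BFS from (0,0)
    rung = [rng.random() < 0.5 for _ in range(width + 1)]
    horiz = [[rng.random() < 0.5 for _ in range(width)] for _ in range(2)]
    seen, stack, best = {(0, 0)}, [(0, 0)], 0
    while stack:
        x, y = stack.pop()
        best = max(best, x)
        steps = []
        if rung[x]:
            steps.append((x, 1 - y))
        if x < width and horiz[y][x]:
            steps.append((x + 1, y))
        if x > 0 and horiz[y][x - 1]:
            steps.append((x - 1, y))
        for p in steps:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return best

rng = random.Random(0)
exact = exact_EX()                                  # 9/5 = 1.8 for this chain
est = sum(simulate_X(rng) for _ in range(30000)) / 30000
```

The simulation uses full BFS (the king may backtrack), so it is an independent check of the left-only-reachability bookkeeping.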
|probability|expected-value|
0
Rigorous Meaning of "Drawing a Sample" $\omega$ from a Probability Space $(\Omega, \mathcal{A}, \mathbb{P})$
Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space. What does it mean (in the most formal and rigorous sense possible) to "draw a sample" $\omega \in \Omega$ from this space? Intuitively, I think I understand what is happening, but I am looking for a precise mathematical way of describing the process of sampling. Kind regards and thank you very much! Joker
My two cents: The difficulty of rigorously defining "drawing a sample from a distribution $\mathbb{P}$ " comes from the fact that, ultimately, probability theory is only an attempt to describe or model the reality of a "random phenomenon / experiment", and such a "random phenomenon / experiment" is difficult to further pin down mathematically (see @Alejandro's answer). It is this "random experiment" (e.g., a toss of a coin, a draw of a card, an execution of the RNG algorithm on your computer) that produces a random outcome (which we model as an element $\omega \in \Omega$ ), not the probability measure $\mathbb{P}$ . In other words, the random experiment/phenomenon exists by itself (from which we can draw a sample), independently of what probability space $(\Omega, \Sigma, \mathbb{P})$ we choose to describe it (which only reflects our knowledge of reality). Now, how do we associate a probability model $(\Omega, \Sigma, \mathbb{P})$ with a random phenomenon? This is often an art (e.g.,
|probability|probability-theory|sampling|sampling-theory|
0
Understanding Metatheory and the Broader Picture of Foundational Set Theory
So I'm trying to put together a clearer picture of what is going on when we study set theory. I'll describe my current picture, which I'd appreciate some feedback on, and I'll ask some specific questions as well. So from the start: if we take a Platonist perspective (which I was taught as the most pedagogically effective philosophy to have when learning set theory) then we assume sets in some way or another exist, along with the intuitive properties like membership. Then when we list the ZFC axioms (which can be done via some bootstrapping process without need for sets), we are just saying that sets satisfy these axioms. Using our intuitive mathematical reasoning and the axioms we can develop all everyday mathematics, including mathematical logic. Is it fair to say that this intuitive notion of a set and mathematical reasoning is the `most' meta metatheory? However, now having developed mathematical logic, using this metatheory we can consider ZFC formally as a mathematical object al
Regarding what constitutes the metatheory, Kunen defines it as "basic finitistic reasoning" in his Foundations of Mathematics (in I.7.2) and elaborates in his section III: We conclude with two additional questions: What exactly is the metatheory? Why is it, as we said above, beyond reproach, or even consistent? For the first question, unfortunately, we cannot say exactly. Roughly, as we said, the metatheory is basic finitistic reasoning about finite objects such as finite numbers and finite symbolic expressions. One could attempt to give a precise definition of exactly what finitistic reasoning is — for example, we could say that it is what can be formalized within the system PRA mentioned above. But if you look at the definition of PRA (or of any other formal system), you will see that to understand the definition, you need to understand already basic finitistic reasoning. That is, starting from nothing, you can't explain anything. (I think implicit here is the claim that finitistic r
|logic|set-theory|foundations|meta-math|
0
Taking the Limit of Stars and Bars
I'm currently working on the following question from Chapter 1 of Theory of Probability and Random Processes by Koralov and Sinai: For integers $n$ and $r$ , find the number of solutions of the equation $$x_1 + \ldots + x_r = n,$$ where $x_i \geq 0$ are integers. Assuming the uniform distribution on the space of the solutions, find $P(x_1=a)$ and its limit as $r\to \infty$ , $n\to\infty$ , $n/r\to\rho > 0$ . So, the first part is just a simple stars and bars and we get $\binom{n+r-1}{r-1}$ . However, it's the second part, taking the limit, that I'm struggling with. Essentially, I'm trying to evaluate the following limit $$\lim_{\substack{n\to\infty,\ r\to\infty\\ n/r\to\rho}} \binom{n-a+r-2}{r-2}\big/\binom{n+r-1}{r-1}.$$ Intuitively, I expect the limit to converge to some Poisson Distribution parameterized by $\rho$ and evaluated at $a$ . I've attempted this by applying Stirling's Approximation, but have no idea how to proceed from $$P(x_1=a)=\binom{n-a+r-2}{r-2}\Big/\binom{n+r-1}{r-1}.$$
I think you can get a reasonable result without Stirling... $${n-a+r-2\choose r-2}\big/{n+r-1 \choose r-1}$$ $$=\frac{(n-a+r-2)!}{(n-a)!(r-2)!}\cdot{\frac{n!(r-1)!}{(n+r-1)!}}$$ $$={\frac{n(n-1)\cdots(n-a+1)(r-1)}{(n+r-1)(n+r-2)\cdots(n+r-(a+1))}}$$ So in our limit, as $\{n,r\}\rightarrow \infty$ , we have; $$\approx\frac{(n)^{a}r}{(n+r)^{a+1}}=\frac{(r\rho)^{a}r}{(r\rho+r)^{a+1}}=\frac{(\rho)^{a}}{(1+\rho)^{a+1}}$$
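The telescoped ratio and the limit are easy to confirm with exact arithmetic for large $n,r$ (the particular values below are arbitrary, with $\rho=2$):

```python
import math
from fractions import Fraction

def ratio_binom(n, r, a):
    # P(x_1 = a) as a ratio of binomial coefficients (stars and bars)
    return Fraction(math.comb(n - a + r - 2, r - 2), math.comb(n + r - 1, r - 1))

def ratio_product(n, r, a):
    # the telescoped form: n(n-1)...(n-a+1)(r-1) / ((n+r-1)...(n+r-a-1))
    num = (r - 1) * math.prod(n - k for k in range(a))
    den = math.prod(n + r - k for k in range(1, a + 2))
    return Fraction(num, den)

n, r, a = 20000, 10000, 3                            # rho = n/r = 2
limit = Fraction(2) ** a / Fraction(3) ** (a + 1)    # rho^a / (1+rho)^(a+1)
```

Note the limit is a geometric-type law in $a$, not a Poisson one, consistent with the answer.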
|probability|combinatorics|limits|solution-verification|
1
Rouché's Theorem: How many roots does $\lambda-z=\frac{1}{3}e^{z^2}$ have in the strip Re$(z)\in [-1,1]$?
I'm studying for my complex analysis qual, and I've found a problem that I haven't seen solved here yet: Take $\lambda$ to be purely imaginary. Prove that $z-\lambda=\frac{1}{3}e^{z^2}$ has exactly one solution in the strip $S:=\{x+iy:x\in [-1,1]\}$ . I've tried showing that the roots all satisfy $|\lambda-z|=\frac{1}{3}|e^{z^2}|=\frac{1}{3}e^{x^2-y^2}\leq \frac {1}{3}e$ , so they lie in the disk $|\lambda-z|\leq\frac{1}{3}e$ . But after this, I only have that on the circle $|\lambda-z|=\frac{1}{3}e$ , $|\lambda-z|\geq \frac{1}{3}|e^{z^2}|$ , where equality holds when the real part of $z$ is $\pm 1$ and the imaginary part is $0$ . If I'm remembering correctly, the conditions for Rouché's Theorem require strict inequality. How should I proceed?
On the circle $|\lambda-z|=\frac{1}{3}e\;$ we have $$|\operatorname{Re}(z)|=|\operatorname{Re}(\lambda-z)|\leq|\lambda-z|=\tfrac{1}{3}e<1$$ (recall $\lambda$ is purely imaginary), so we can't have $\operatorname{Re}(z)=\pm 1$ .
|complex-analysis|roots|rouches-theorem|
0
How to use Parametric differentiation method to evaluate this integral $\int_0^{\infty}{\frac{\ln \left( 1+x^2 \right)}{\left( 1+x^2 \right) ^2}dx}$
I know the trigonometric substitution is one method. I am trying to find another method. $$\begin{align*} &\int_0^{\infty}{\frac{\ln \left( 1+x^2 \right)}{\left( 1+x^2 \right) ^2}dx} \\ &\xrightarrow{x=\tan t}\int_0^{\frac{\pi}{2}}{\frac{\ln \left( \sec ^2t \right)}{\sec ^2t}dt} \\ &=-\int_0^{\frac{\pi}{2}}{2\cos ^2t\ln \left( \cos t \right) dt} \\ &=-\int_0^{\frac{\pi}{2}}{\ln \left( \cos t \right) d\left( t+\frac{1}{2}\sin 2t \right)} \\ &=-\left( t+\frac{1}{2}\sin 2t \right) \ln \left( \cos t \right) \Big|_{0}^{\frac{\pi}{2}}+\int_0^{\frac{\pi}{2}}{\left( t+\frac{1}{2}\sin 2t \right) d\ln\cos t} \\ &=\int_0^{\frac{\pi}{2}}{t\,d\ln\cos t}+\frac{1}{2}\int_0^{\frac{\pi}{2}}{\sin 2t\,d\ln\cos t} \end{align*}$$
Numerically $$\int_0^\infty\frac{\log\left(1+ x^2\right)}{\left(1+x^2\right)^2}\,dx =0.303395\ldots$$ Introduce a parameter $a$ , differentiate under the integral sign, and integrate back in $a$ : $$\int da \int_0^\infty dx\ \partial_a \frac{\log (1+ a^2x^2)}{(1+a^2 x^2)^2} = -\frac{\pi }{4 a}+\frac{\pi \log 16}{8 a}$$ For $a=1$ : $$\frac{1}{8} \pi (\log 16-2)=0.303395\ldots$$
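A direct numerical check of the common value: after the question's substitution $x=\tan t$ the integrand becomes $-2\cos^2 t\,\ln\cos t$, which is bounded on $[0,\pi/2)$, so a plain midpoint rule converges quickly (step count is arbitrary):

```python
import math

def target(steps=200000):
    # after x = tan t the integrand is -2 cos(t)^2 * ln(cos t) on [0, pi/2)
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        c = math.cos(t)
        total += -2.0 * c * c * math.log(c)
    return total * h
```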
|calculus|integration|
0
Does $\int_0^{+\infty}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx$ converge? How?
I found this improper integral and I have been trying it for a long time. Try to determine whether it converges or not, and if it converges, to what. $$\int_0^{+\infty}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx$$ I tried to compare the integral with $$\int_0^{+\infty}\frac{\ln(x^n)-\ln(x^{n+1})}{(1+x^2)\ln(x)}\, dx$$ but I couldn't get a result. Any help will be greatly appreciated.
Mathematica suggests that the integral does not depend on $n$ : (NIntegrate[(Log[1 + x^(# + 1)] - Log[1 + x^#])/((1 + x^2) Log[x]), {x, 0, \[Infinity]}] &) /@ Range[7] returns {0.785398, 0.785398, 0.785398, 0.785398, 0.785398, 0.785398, 0.785398} with $\pi/4 - 0.7853981592949361 \approx 4\times 10^{-9}$ . (Note the logs here are in the opposite order to the question, so the integral as stated should be $-\pi/4$ .)
|calculus|convergence-divergence|improper-integrals|
0
Clarification for a rule in this sequent calculus
I'm reading through Ebbinghaus' Mathematical Logic and more specifically chapter 4, where a sequent calculus is constructed. Below is the rule I need clarification on because, according to my definitely-wrong understanding, it leads to a contradiction. Here, $\Gamma$ is just a sequence of formulas. At first, this rule was intuitive to me until I encountered a justification in the next section where $\Gamma'$ was $\Gamma$ together with $\neg \phi$ . According to this rule, that is valid. In fact, the book has a definition on the notion of correctness: A sequent $\Gamma \phi$ is correct if $\Gamma \models \phi$ , or $\{\psi \mid \psi \text{ is a member of } \Gamma\} \models \phi$ . Rule 2.1 is supposed to yield a correct sequent, but $\Gamma' \phi$ is not correct because there exists no interpretation $\mathfrak J$ such that $\mathfrak J \models \phi$ and $\mathfrak J \models \neg\phi$ . How can this be? What am I missing here?
"Consistent" and "Correct" are not synonyms. A correct sequent may have inconsistent premises. $\phi,\lnot\phi\vdash\phi$ is correct, by definition , because $\phi\models\phi$ and $\phi$ is a member of $\{\phi,\lnot\phi\}$ . $\phi,\lnot\phi\vdash\lnot\phi$ is also correct, because $\lnot\phi\models\lnot\phi$ and $\lnot\phi$ is a member of $\{\phi,\lnot\phi\}$ . The fact that no interpretation will satisfy $\{\phi,\lnot\phi\}$ has no standing.
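The point that an unsatisfiable premise set makes $\Gamma\models\phi$ hold vacuously can be made concrete with a toy truth-table checker (formulas modeled as Python predicates on valuations; names are illustrative):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    # premises |= conclusion: every valuation satisfying all premises
    # also satisfies the conclusion (vacuously true if none does)
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

phi = lambda env: env["p"]
not_phi = lambda env: not env["p"]
```

With inconsistent premises $\{\phi,\lnot\phi\}$, every conclusion is entailed, including an unrelated atom.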
|logic|propositional-calculus|sequent-calculus|
1
Does $\int_0^{+\infty}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx$ converge? How?
I found this improper integral and I have been trying it for a long time. Try to determine whether it converges or not, and if it converges, to what. $$\int_0^{+\infty}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx$$ I tried to compare the integral with $$\int_0^{+\infty}\frac{\ln(x^n)-\ln(x^{n+1})}{(1+x^2)\ln(x)}\, dx$$ but I couldn't get a result. Any help will be greatly appreciated.
Define $I$ as the integral that we need to compute and $I_1 = \int_0^{1}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx$ . Then, $$\begin{align*} I &= I_1+\int_{1}^{\infty}\frac{\ln(1+x^n)-\ln(1+x^{n+1})}{(1+x^2)\ln(x)}\, dx\\ &\overset{x = 1/t}{=} I_1-\int_{0}^{1}\frac{\ln((1+t^n)/t^n)-\ln((1+t^{n+1})/t^{n+1})}{(1+t^2)\ln(t)}\, dt \\ &= \int_{0}^{1}\frac{\ln(t^n)-\ln(t^{n+1})}{(1+t^2)\ln(t)}\, dt = -\int_0^1 \frac{1}{1+t^2}dt = -\frac{\pi}{4}. \end{align*}$$
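The $n$-independence and the value $-\pi/4$ can also be confirmed numerically. The substitution $x=t/(1-t)$ maps $[0,1)$ onto $[0,\infty)$ and leaves a bounded integrand, since the integrand behaves like $-1/(1+x^2)$ for large $x$; the midpoint grid never hits the removable point $x=1$ (grid size is arbitrary):

```python
import math

def integrand(x, n):
    # (ln(1+x^n) - ln(1+x^(n+1))) / ((1+x^2) ln x); removable at x = 1
    return (math.log1p(x ** n) - math.log1p(x ** (n + 1))) / ((1 + x * x) * math.log(x))

def full_integral(n, steps=200000):
    # x = t/(1-t), dx = dt/(1-t)^2; midpoint rule over t in (0, 1)
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        x = t / (1.0 - t)
        total += integrand(x, n) / ((1.0 - t) ** 2)
    return total * h
```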
|calculus|convergence-divergence|improper-integrals|
1
In the proof of extension of a homomorphism, proposition 5.23 of Atiyah-Macdonald, Introduction to commutative algebra.
I don’t understand the part of the proof of prop5.23 of Atiyah-Macdonald, Introduction to commutative algebra. The author separates the proof into two cases, the case that x is transcendental or the case that x is algebraic over A. About the former case, I don’t find where the author uses the hypothesis that x is transcendental.
If $x$ is algebraic over $A$ and $P(t) \in A[t]$ is a (possibly non-monic) polynomial with $P(x) = 0$ , then in the proof of (i), the element $\xi$ has to be a root of the non-zero polynomial $f(P)(t) \in \Omega[t]$ (where $f(P)(t)$ is the polynomial we obtain by applying $f$ to the coefficients of $P(t)$ ). But then there are only finitely many possibilities for $\xi$ , and so we cannot guarantee that for $v = a_0x^n + a_1x^{n-1} + \dotsb + a_n$ as in the proof, we have that $f(a_0)\xi^n + f(a_1)\xi^{n-1} + \dotsb + f(a_n) \neq 0$ . One thing to note here is that $f$ may have non-trivial kernel, and hence, $f(a_0) \xi^n + \dotsb + f(a_n) = 0$ for all roots $\xi$ of $f(P)(t)$ does not imply that $v = 0$ .
|commutative-algebra|
1
Probability that you get a composite number both times when rolling a six-sided die
Suppose you roll a standard six-sided die twice. What is the probability that you get a composite number both times? Considering that $4$ and $6$ are the only composite numbers that we have here, the probability of getting a composite number both times should be $2/6 \times 2/6 = 1/9$ . Right? Well, the correct answer is $1/4$ . How though?
Perhaps the author is incorrectly taking $1$ to be a composite number. With that assumption, the probability would be $1/2 \times 1/2 = 1/4$ . Or as Dominique says, you would want to make sure the question actually says "a composite number" and not "not a prime number". They do not mean the same thing. $1$ will fail the former criterion but pass the latter.
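Both readings are easy to check by enumerating all $36$ equally likely outcomes; `composite` versus `non_prime` below encodes exactly the distinction drawn in the answer:

```python
from fractions import Fraction
from itertools import product

composite = {4, 6}       # composite faces of a d6
non_prime = {1, 4, 6}    # faces that are not prime (1 is neither prime nor composite)

def prob_both(faces):
    hits = sum(1 for a, b in product(range(1, 7), repeat=2)
               if a in faces and b in faces)
    return Fraction(hits, 36)
```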
|probability|
0
Solving for the digits in WHITE+WATER=PICNIC
I'm writing solutions for students who are taking a competition exam and I took problems from old Purple Comet competition problems. This problem is the last one from the 2004 middle school contest and says: In the addition problem WHITE+WATER=PICNIC each distinct letter represents a different digit. Find the number represented by the answer PICNIC. I have looked for solutions online and their answers have been insufficient; usually stating the only possibilities for WHITE and WATER (of which there are only two) and then giving what PICNIC must be. I have tried my hand at this problem using strategies for solving cryptarithmetic puzzles. I started with recognizing that P is 1 and then working with the W being at least 5, but this has always devolved into a myriad of cases. This is a problem intended for middle school students so I'm doubtful that the intended solution is to check so many cases. Is there some insight for this puzzle to better organize the cases?
$$ \begin{array}{ccccccc} & & W & H & I & T & E \\ +) & & W & A & T & E & R \\ \hline & P & I & C & N & I & C \end{array} $$ All $10$ digits are represented, so let's figure out where $0$ is first. Obviously, $W$ and $P$ are out and none of $T,E,R,I,H,A$ can be $0.$ So there are two cases. First, let $C = 0.$ Then $E+R=10$ and $T+E+1 = I$ or $10+I.$ If the former, then $I\geq 1+2+3=6$ and thus $W\geq 8.$ If $I=6,$ then it forces $W=8$ but that contradicts the carryover from $C = 0.$ If $I=7,$ then $W=8.$ Now we have $T,E = \{2,4\}$ and $E,R = \{4,6\}$ due to $1,8,7$ having already been chosen. Therefore, $E=4, T=2,R=6$ and $N = I+T = 7+2=9.$ The only remaining digits are $3$ and $5$ and that won't satisfy $5+3 = 10.$ Moving on, if $I=8,$ then $W = 9$ but that again contradicts the carryover from the preceding $C = 0.$ $I=9$ is impossible as that would force $W = I= 9.$ Therefore $C\neq 0.$ The only other case is $N = 0$ and note that $W\neq 5$ for otherwise it implies $P=I=1.$ If $W=9,$ then
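As a sanity check on the case analysis, a brute-force search over digit assignments for the seven left-hand letters is feasible (about $6\times 10^5$ cases). The sketch below checks the PICNIC digit pattern and that all ten digits are distinct; the search should find exactly two assignments, differing only by swapping H and A (which the column sums cannot distinguish), so PICNIC itself is unique:

```python
from itertools import permutations

def solve():
    solutions = []
    for W, H, I, T, E, A, R in permutations(range(10), 7):
        if W == 0:
            continue
        white = 10000 * W + 1000 * H + 100 * I + 10 * T + E
        water = 10000 * W + 1000 * A + 100 * T + 10 * E + R
        s = white + water
        if s < 100000:          # PICNIC must have six digits
            continue
        P, I2, C, N, I3, C2 = (int(c) for c in str(s))
        if I2 != I or I3 != I or C2 != C:
            continue
        if len({W, H, I, T, E, A, R, P, C, N}) != 10:   # all ten digits distinct
            continue
        solutions.append((white, water, s))
    return solutions
```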
|arithmetic|cryptarithm|
0
Subcategory of left-right projective bimodules
Let $\mathbb{K}$ be a fixed field. Take a finite-dimensional $\mathbb{K}$ -algebra $A$ and look at the category $\text{mod}(A^e)$ of finite-dimensional $A$ -bimodules. It has a full subcategory $\text{lrp}(A^e)$ that has as objects those $A$ -bimodules which are projective when considered as left and as right $A$ -modules. Is that subcategory a dense (or even absolutely dense , or colimit-dense ) subcategory of $\text{mod}(A^e)$ ? Is it in some other sense a 'generating subcategory' of $\text{mod}(A^e)$ ? In Left-right projective bimodules and stable equivalences of Morita type it is claimed that if $A$ is a self-injective algebra, then, 'except for trivial cases', the complement $\text{mod}(A^e)\setminus\text{lrp}(A^e)$ has infinitely many pairwise non-isomorphic indecomposable objects. Don't know if that helps.
Yes, the subcategory $\operatorname{lrp}(A^{e})$ is dense in $\operatorname{mod}(A^{e})$ . The following lemma is an immediate consequence of Theorem 5.13 in Kelly, G. M. , The basic concepts of enriched category theory , Reprints in Theory and Applications of Categories, No. 10 (2005). Lemma. If $\mathcal{E}\subset\mathcal{D}\subset\mathcal{C}$ are full inclusions of categories, and $\mathcal{E}$ is dense in $\mathcal{C}$ , then $\mathcal{D}$ is dense in $\mathcal{C}$ , and $\mathcal{E}$ is dense in $\mathcal{D}$ . $\square$ Since $A^{e}\oplus A^{e}$ is an object of $\operatorname{lrp}(A^{e})$ , to prove that $\operatorname{lrp}(A^{e})$ is dense in $\operatorname{mod}(A^{e})$ it is sufficient, by the Lemma, to prove that the full subcategory containing the single object $A^{e}\oplus A^{e}$ is dense in $\operatorname{mod}(A^{e})$ . But more generally, for any ring $R$ , the single object $R\oplus R$ is dense in the module category of $R$ . See, for example, (2.2) in Isbell, J. R. , Ade
|abstract-algebra|category-theory|modules|representation-theory|functors|
1
Combinatoric and subsets
I recently encountered a question. Let $n \in \mathbb{Z} ^+$ . Let $U$ be a set containing $n$ elements. Let $\mathcal{S} \subseteq P(U)$ . Let $m \in \mathbb{Z}^+, m \leq n$ . Prove that $$ |\mathcal{S}| > \sum_{i=0}^{m - 1} \binom{n}{i} \Rightarrow \exists S' \in P(U). (|S'|=m) \wedge \left(\{S \cap S':S \in \mathcal{S}\} = P(S')\right) $$ Currently, I am inducting on $n$ and $m$ . I have proved the cases when $n = 1$ or $m = 1$ . For the induction step, I have three cases. The first case is when none of the sets in $\mathcal{S}$ contains the $n$ th element of $U$ . The second case is when exactly half of the sets in $\mathcal{S}$ contain the $n$ th element of $U$ . The last case is when the previous two cases are false. I can prove the first two cases. However I am struggling with the third case. I have tried using the induction hypothesis on $n - 1, m - 1$ and $n, m - 1$ , but I didn't get anywhere. My idea is to use a non-constructive probabilistic method, but that turned out really messy
What you are trying to prove is the well-known Sauer–Shelah lemma . The core idea to make induction on the size of $\mathcal{S}$ work is to modify what you want to prove a little. Namely, instead of proving that there exists some subset $S'$ of $U$ of cardinality $m$ such that $\{S \cap S': S\in \mathcal{S}\} = P(S')$ , prove that there exist at least $|\mathcal{S}|$ subsets $S'$ of $U$ , without restriction on their cardinalities, such that $\{S \cap S': S\in \mathcal{S}\} = P(S')$ . The full proof of this fact is on the Wikipedia page. The initial claim now follows from the fact that $|\mathcal{S}|$ is strictly greater than the number of subsets of $U$ of cardinality up to $m-1$ .
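The statement is also cheap to spot-check on random instances: for $n=6$, $m=2$ the threshold is $\binom{6}{0}+\binom{6}{1}=7$, so any family of $8$ distinct subsets must shatter some pair. Subsets are encoded as bitmasks; all parameters are arbitrary:

```python
import math
import random
from itertools import combinations

def shatters(family, positions):
    # family (subsets of {0..n-1} as bitmasks) shatters `positions`
    # iff the traces S & mask realize all 2^m subsets of `positions`
    mask = 0
    for b in positions:
        mask |= 1 << b
    return len({s & mask for s in family}) == 1 << len(positions)

def spot_check(n=6, m=2, trials=200, seed=1):
    rng = random.Random(seed)
    threshold = sum(math.comb(n, i) for i in range(m))
    for _ in range(trials):
        family = rng.sample(range(1 << n), threshold + 1)  # distinct subsets
        if not any(shatters(family, pos) for pos in combinations(range(n), m)):
            return False
    return True
```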
|combinatorics|elementary-set-theory|probabilistic-method|
1
CDF of $Y$ given the CDF of $X$ and $Y=X^2$
Let $X$ be a continuous random variable with CDF $F_X$ , and $Y=X^2$ . Find the CDF of $Y$ . My solution: $$F_Y(t)=P(Y \leq t) = P(X^2 \leq t) = P( -\sqrt{t} \leq X \leq \sqrt{t}) = $$ $$P(X \leq \sqrt{t}) - P(X \leq -\sqrt{t}) = F_X(\sqrt{t}) - F_X(-\sqrt{t})$$ (assuming $t \geq 0$ , otherwise $F_Y(t)=0$ ). Is this correct?
$$F_Y(t)=P(Y \leq t) = P(X^2 \leq t) = P( -\sqrt{t} \leq X \leq \sqrt{t}) = $$ $$P(X \leq \sqrt{t}) - P(X < -\sqrt{t}) = F_X(\sqrt{t}) - F_X((-\sqrt{t})-)$$ where $F(s-)$ denotes the left-hand limit of $F$ at $s$ . Since $X$ is continuous it follows that we can replace $F(s-)$ by $F(s)$ , so we get $$F_Y(t)=F_X(\sqrt{t}) - F_X(-\sqrt{t}).$$
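To see the formula in action on a concrete case (assumed here: $X$ standard normal, so $Y=X^2$ is chi-square with one degree of freedom), one can compare $F_X(\sqrt t)-F_X(-\sqrt t)$ against an empirical CDF:

```python
import bisect
import math
import random

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_Y(t):
    # CDF of Y = X^2 via F_X(sqrt t) - F_X(-sqrt t); here X ~ N(0, 1)
    return 0.0 if t < 0 else std_normal_cdf(math.sqrt(t)) - std_normal_cdf(-math.sqrt(t))

random.seed(0)
n = 200_000
samples = sorted(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

def empirical_cdf(t):
    return bisect.bisect_right(samples, t) / n
```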
|probability|cumulative-distribution-functions|
1
Can a function be defined as the union of two other functions?
So I read from various sources that a function can be defined as a binary relation. Then is it valid to say, for example, $f = \{ (1, 2), (2, 3) \}$ ? And suppose I have another function $g = \{ (4, 5) \}$ . Does it then make sense to write $(f \cup g)(2) = 3$ ?
What you're doing is ok only if the functions have disjoint domains (or they happen to be equal on any domain overlap). This is not a very typical situation. In practice, any two functions you might want to combine would instead either have overlapping (possibly identical) domains, or structurally completely different domains. Example of the former, $$\begin{align} f &: \mathbb{R} \to \mathbb{R}, & f(x) &= x^2 \\ g &: \mathbb{R}\setminus\{0\} \to \mathbb{R}, & g(x) &= \tfrac1x \end{align}$$ then $f\cup g$ is not a function, because it contains e.g. both $(2,4)$ and $(2,\tfrac12)$ . Example of the latter, $$\begin{align} h &: \{0,1,2,3\} \to \mathbb{Q}, & h(x) &= \begin{cases} 1 & \text{for $x=0$} \\\tfrac12 & \text{for $x=1$} \\\tfrac25 & \text{for $x=2$} \\\tfrac37 & \text{for $x=3$} \end{cases} \\ i &: \mathbb{R}^2 \to \mathbb{R}^2, & i(x,y) &= (-y,x) \end{align}$$ In this case $h\cup i$ is a function, but it is a complete oddball with a composite domain that makes it hard to work wi
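A small sketch of how this plays out computationally if functions are represented by their graphs (dicts here): the union is accepted as a function only when the two functions agree on any domain overlap. Names are illustrative:

```python
def union_of_functions(f, g):
    # functions as sets of input/output pairs (here: dicts); the union
    # is itself a function iff f and g agree on every shared input
    for x in f.keys() & g.keys():
        if f[x] != g[x]:
            raise ValueError(f"union is not a function: {x} has two images")
    return {**f, **g}
```

With the question's examples, `union_of_functions({1: 2, 2: 3}, {4: 5})[2] == 3`, while overlapping, disagreeing graphs are rejected.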
|functions|notation|relations|
0
A Challenging Logarithmic Integral $\int_0^1 \frac{\log(x)\log(1-x)\log^2(1+x)}{x}dx$
How can we prove that: $$\int_0^1 \frac{\log(x)\log(1-x)\log^2(1+x)}{x}dx=\frac{7\pi^2}{48}\zeta(3)-\frac{25}{16}\zeta(5)$$ where $\zeta(z)$ is the Riemann Zeta Function . The best I could do was to express it in terms of Euler Sums. Let $I$ denote the integral. $$I=-\frac{\pi^2}{24}\zeta(3)+2\sum_{r=2}^\infty \frac{(-1)^r (H_r)^2}{r^3}-2\sum_{r=2}^\infty \frac{(-1)^r H_r}{r^4}+2 \sum_{r=2}^\infty \frac{(-1)^r H_r^{(2)}H_r}{r^2}-2\sum_{r=2}^\infty \frac{(-1)^r H_r^{(2)}}{r^3}$$ where $\displaystyle H_r^{(n)}=\sum_{k=1}^r \frac{1}{k^n}$. I am unable to simplify these sums further. Does anyone have any idea on how to solve this integral? Please see here for more details. Some time ago I was able to solve the simpler integral: \begin{align*} \int_0^1 \frac{\log(1-x)\log(x)\log(1+x)}{x}dx &=-\frac{3 \pi^4}{160}+\frac{7\log(2)}{4}\zeta(3)-\frac{\pi^2 \log^2(2)}{12} +\frac{\log^4(2)}{12} \\ &\quad+ 2 \text{Li}_4 \left(\frac{1}{2} \right) \end{align*} where $\text{Li}_n(z)$ is the Polylogarithm Function.
Okay, I simplified my solution. My original solution was about 3 times longer than this, so I was lucky to find a shortcut like this :) Step 1. Let $I$ be the integral in question: \begin{align*} I &= \int_{0}^{1} \frac{\log x \log (1 - x) \log^{2} (1 + x)}{x} \, dx. \end{align*} By the simple algebraic formula $(a + b)^{3} + (a - b)^{3} - 2 a^{3} = 6 a b^{2}$ , it follows that \begin{align*} I &= \frac{1}{6} \int_{0}^{1} \frac{\log x}{x} \left\{ \log^{3} (1-x^{2}) + \log^{3} \left( \frac{1-x}{1+x} \right) - 2\log^{3}(1-x) \right\} \, dx \\ &= \frac{1}{6} \int_{0}^{1} \frac{\log x \log^{3} (1-x^{2})}{x} \, dx + \frac{1}{6} \int_{0}^{1} \frac{\log x \log^{3} \left( \frac{1-x}{1+x} \right)}{x} \, dx - \frac{1}{3} \int_{0}^{1} \frac{\log x \log^{3} (1-x)}{x} \, dx \tag{1} \end{align*} Applying the substitution $x^{2} \mapsto x$ , the first integral reduces to \begin{align*} \frac{1}{6} \int_{0}^{1} \frac{\log x \log^{3} (1-x^{2})}{x} \, dx &= \frac{1}{24} \int_{0}^{1} \frac{\log x \log^{3} (1-x)}{x} \, dx. \end{align*}
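The claimed closed form can be checked numerically with nothing more than a midpoint rule and direct summation for the zeta values (grid sizes are arbitrary); this is only a sanity check, not a proof:

```python
import math

def lhs(steps=200000):
    # midpoint rule for the integral; the integrand vanishes at both endpoints
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.log(x) * math.log(1 - x) * math.log(1 + x) ** 2 / x
    return total * h

def zeta(s, terms=200000):
    return sum(1.0 / k ** s for k in range(1, terms + 1))

rhs = 7 * math.pi ** 2 / 48 * zeta(3) - 25.0 / 16 * zeta(5)
```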
|analysis|integration|special-functions|definite-integrals|
0
Find $a$ for which $(a-3)(a-7)$ is a perfect square
The only solutions seem to be $3$ and $7$ , but I can't prove that there are no others. Context: Find every value of the integer $a$ for which the expression $x^2-(a+5)x+5a+1$ can be factored as $(x+b)(x+c)$ where $b$ and $c$ are integers. $$x=\frac{a+5 \pm \sqrt{(a-3)(a-7)}}{2}$$ (This is from a pre-calc book, by the way; real solutions only, though I don't know if that matters.) I have no idea where to start, forgive me for not having any real attempts.
Find all integers $a$ such that $$x^2-(a+5)x+5a+1=(x+b)(x+c)$$ where $b,c$ are integers The roots of the quadratic are $-b$ and $-c$ , therefore $$(b+5)(c+5)=b c + 5(b+c)+25=5a+1+5(-a-5)+25=1$$ Either $b+5=c+5=1$ in which case $b=c=-4$ and $a=3$ or $b+5=c+5=-1$ in which case $b=c=-6$ and $a=7$
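For the title question directly: since $(a-3)(a-7)=(a-5)^2-4$, a square value $k^2$ forces $(a-5-k)(a-5+k)=4$, which leaves only $a-5=\pm 2$, $k=0$, i.e. $a\in\{3,7\}$. A brute-force scan agrees:

```python
import math

def is_square(t):
    return t >= 0 and math.isqrt(t) ** 2 == t

# (a-3)(a-7) = (a-5)^2 - 4; a square k^2 forces (a-5-k)(a-5+k) = 4
hits = [a for a in range(-10**5, 10**5 + 1) if is_square((a - 3) * (a - 7))]
# hits == [3, 7]
```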
|square-numbers|
0
Show that $\int_0^{\infty}\frac{\operatorname{Li}_s(-x)}{x^{\alpha+1}}\mathrm dx=-\frac1{\alpha^s}\frac{\pi}{\sin(\pi \alpha)}$
I have come across the following integral while going over this list (Problem $35$ ) $$\int_0^{\infty}\frac{\operatorname{Li}_s(-x)}{x^{\alpha+1}}\mathrm dx=-\frac1{\alpha^s}\frac{\pi}{\sin(\pi \alpha)}~~~~s>0, \alpha\in(0,1)$$ where $\operatorname{Li}_s(x)$ denotes the Polylogarithm Function. Using the series expansion of $\operatorname{Li}_s(-x)$ yields $$\begin{align} \int_0^{\infty}\frac{\operatorname{Li}_s(-x)}{x^{\alpha+1}}\mathrm dx&=\int_0^{\infty}\frac1{x^{\alpha+1}}\left[\sum_{n=1}^{\infty}\frac{(-x)^n}{n^s}\right]\mathrm dx\\ &=\sum_{n=1}^{\infty}\frac{(-1)^n}{n^s}\int_0^{\infty}\frac{x^n}{x^{\alpha+1}}\mathrm dx\\ &=\sum_{n=1}^{\infty}\frac{(-1)^n}{n^s}\int_0^{\infty}x^{n-\alpha-1}\mathrm dx \end{align}$$ One can easily see the problems concerning the convergence of the last integral. Also, I am not even sure whether it is possible to change the order of summation and integration in this case or not. Another approach is based on an integral representation of $\operatorname{Li}_s$ .
I managed to work out the dilogarithmic one. Here is my proof: Changing to a Double Integral $\displaystyle \text{Li}_2(-x) = -\int_0^{-x}\frac{\log(1-t)}{t}dt=-\int_0^1\frac{\log(1+ux)}{u}du \tag{1}$ Plugging (1) into our integral and reversing the order of integration we find \begin{align*} I &= \int_0^\infty \frac{\text{Li}_2(-x)}{x^{1+\alpha}}dx \\ &=- \int_0^\infty \int_0^1 \frac{\log(1+ux)}{ux^{1+\alpha}}du \ dx \\ &=- \int_0^1 \int_0^\infty \frac{\log(1+u x)}{u x^{1+\alpha}}dx \ du \tag{2} \end{align*} Simplification of the Inner Integral \begin{align*} \int_0^\infty \frac{\log(1+ux)}{x^{1+\alpha}}dx &= u^{\alpha}\int_0^\infty \frac{t e^t}{(e^t-1)^{1+\alpha}}dt \quad (t=\log(1+ux))\\ &=\frac{u^\alpha}{\alpha} \left[ 0+\int_0^\infty \frac{1}{(e^t-1)^\alpha}dt\right] \quad (\text{Integration by Parts}) \\ &=\frac{u^\alpha}{\alpha} \int_0^\infty e^{-\alpha t}(1-e^{-t})^{-\alpha} dt \\ &= \frac{u^\alpha}{\alpha} \int_0^1 y^{\alpha-1}(1-y)^{-\alpha} dy \quad (y=e^{-t}) \\ &= \frac{u^{\alpha}}{\alpha} B(\alpha,\, 1-\alpha) = \frac{u^{\alpha}}{\alpha}\,\frac{\pi}{\sin(\pi\alpha)} \end{align*}
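The Beta-function step used above, $\int_0^1 y^{\alpha-1}(1-y)^{-\alpha}\,dy=B(\alpha,1-\alpha)=\frac{\pi}{\sin(\pi\alpha)}$, is easy to check numerically: the substitution $y=e^v/(1+e^v)$ turns it into an integral over $\mathbb{R}$ with exponentially decaying integrand. A sketch (window and step count are arbitrary):

```python
import math

def beta_reflection(alpha, half_width=60.0, steps=200000):
    # B(alpha, 1-alpha) = int_0^1 y^(alpha-1) (1-y)^(-alpha) dy; after
    # y = e^v / (1 + e^v) this becomes int_R e^(alpha v) / (1 + e^v) dv,
    # which decays exponentially at both ends for 0 < alpha < 1
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        v = -half_width + (i + 0.5) * h
        total += math.exp(alpha * v) / (1.0 + math.exp(v))
    return total * h
```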
|integration|definite-integrals|closed-form|polylogarithm|
0
Can a function be defined as the union of two other functions?
So I read from various sources that a function can be defined as a binary relation. Then is it valid to say, for example, $f = \{ (1, 2), (2, 3) \}$ ? And suppose I have another function $g = \{ (4, 5) \}$ . Does it then make sense to write $(f \cup g)(2) = 3$ ?
Defining the union of 2 functions in that way is an oversimplification. Given 2 sets $X$ and $Y$ , you can define a function $f:X\to Y$ as a relation $\mathcal{R}$ between $X$ and $Y$ verifying: $$\forall (x, y, z) \in X\times Y\times Y, \quad x\mathcal{R}y \wedge x\mathcal{R}z \implies y = z$$ But AFAIK, even if a relation between two sets is fully defined by its graph , i.e. the subset of $X\times Y$ consisting of the elements of $X$ related to the elements of $Y$ , I have never seen unions nor intersections of relations. Furthermore, when you use functions, you must ensure the unicity of the image of any element from the starting set. And the union of the graphs offers no guarantee on that point. What is common, on the other hand, is to define a function on subsets: if $X_1 \subset X$ and $X_2 \subset X$ and you have 2 functions $$\begin{aligned}f_1:&X_1\to Y &f_2:&X_2\to Y\\ &x\mapsto f_1(x)&&x\mapsto f_2(x) \end{aligned}$$ you can define $$\begin{aligned}f:\ &X_1\cup X_2\to Y\\ &x\mapsto \begin{cases}f_1(x)&\text{if }x\in X_1\\f_2(x)&\text{if }x\in X_2\end{cases}\end{aligned}$$ provided $f_1$ and $f_2$ agree on $X_1\cap X_2$ .
|functions|notation|relations|
0
Cauchy sequences over $\mathbb{Q}$: A basic question
Define an equivalence relation on Cauchy sequences in $\mathbb{Q}$ by $(x_n)\equiv (y_n)$ if $\lim (x_n-y_n)=0$ . The question I am trying to prove is that If $(x_n)$ and $(y_n)$ are Cauchy sequences in $\mathbb{Q}$ such that $(x_n)(y_n)\equiv (0)$ then $(x_n)\equiv (0)$ or $(y_n)\equiv (0)$ . Here, I am considering the problem only over $\mathbb{Q}$ and not considering anything about $\mathbb{R}$ . Some part of proof: Suppose $(x_n)\not\equiv (0)$ . We show that $(y_n)\equiv (0)$ . Since $(x_n)\not\equiv (0)$ , so $(x_n)$ is a Cauchy sequence but its limit is not $0$ . Hence there is an $\epsilon>0$ , such that for every $i\in\mathbb{N}$ , there is $n_i$ such that $|x_{n_i}|\ge \epsilon$ . What this says is that infinitely many terms of $(x_n)$ are bounded away from $0$ . How to proceed now, to show that $(y_n)\equiv (0)$ ?
Hence there is an $\epsilon>0$ , such that for every $i\in\mathbb{N}$ , there is $n_i$ such that $|x_{n_i}|\ge \epsilon$ . Correct. You can use that and the fact that $(x_n)$ is a Cauchy sequence to get the even stronger statement that there exists $N\in\mathbb{N}$ such that $|x_n|\geq\frac{\epsilon}{2}$ for all $n\geq N$ . That makes it straightforward to show $y_n\rightarrow 0$ .
|real-analysis|calculus|sequences-and-series|cauchy-sequences|
0
How to evaluate $\int_{0}^{\infty} \int_{0}^{\infty} \frac{\sin(x) \sin(y) \sin(x+y)}{x y(x+y)} ~{\rm d}x ~{\rm d}y$?
I came across the problem of evaluating the double integral: $$\int_{0}^{\infty} \int_{0}^{\infty} \frac{\sin(x) \sin(y) \sin(x+y)}{x y(x+y)} ~{\rm d}x ~{\rm d}y$$ My attempt at this was to substitute $\sin(x) \cos(y) + \cos(x) \sin(y)$ for $\sin(x+y)$, although I had no clue where to go from there. My second attempt was to use trigonometric identities to rewrite the integral as: $$\frac{1}{4} \int_{0}^{\infty} \int_{0}^{\infty} \frac{\sin(2x) + \sin(2y) - \sin(2x + 2y)}{x^2 y + x y^2} ~{\rm d}x ~{\rm d}y$$ Which didn't seem to help, so I continued to get: $$\frac{1}{2} \int_{0}^{\infty} \int_{0}^{\infty} \frac{\sin(2x) \sin^2 (y) + \sin(2y) \sin^2 (x)}{x^2 y + x y^2} ~{\rm d}x ~{\rm d}y$$ I am sure I am missing something, but I am unsure how to integrate this. Any help or advice is appreciated.
Write the integral as $$ I = \mathrm{Im} \int_0^\infty dx \int_0^\infty dy \frac{\sin x \sin y}{xy(x+y)}e^{i(x+y)} $$ Define $$ J(a) \equiv \int_0^\infty dx \int_0^\infty dy \frac{\sin x \sin y}{xy(x+y)}e^{ia(x+y)} $$ so $$ J'(a) = i \int_0^\infty dx \int_0^\infty dy \frac{\sin x \sin y}{xy}e^{ia(x+y)} \equiv i (K(a))^2, $$ where $$ K(a) = \int_0^\infty dx \frac{\sin x}{x}e^{i a x} $$ We have, for $\mathrm{Im}\ a = \epsilon > 0$ , $$ K'(a) = i\ \int_0^\infty dx \sin x\ e^{i a x} = \frac i 2 \left[\frac{1}{a+1} - \frac{1}{a-1} \right] $$ so $$ K(a) = \int_{\infty + \epsilon i}^{a} dz\ K'(z) = i\ \mathrm{arctanh}(1/a) $$ We have, for example from Riemann-Lebesgue, $$ J(\infty+\epsilon i) = 0 $$ whence $$ J(1) = \int_\infty^1 da\ J'(a) $$ Taking the imaginary part, we get $$ I = \int_1^\infty dx\ \mathrm{arctanh}^2 \frac 1 x = \int_0^1 \frac{dx}{x^2}\ \mathrm{arctanh}^2 x \\= \frac 1 4 \int_0^1 dx\ \frac{\ln^2(1+x)+\ln^2(1-x)-2 \ln(1+x)\ln(1-x)}{x^2} $$ The integral can be evaluated by fi
|real-analysis|integration|multivariable-calculus|improper-integrals|multiple-integral|
0
an infinite series involving odd zeta
I ran across a cool series I have been trying to chip away at. $$\sum_{k=1}^{\infty}\frac{\zeta(2k+1)-1}{k+2}=\frac{-\gamma}{2}-6\ln(A)+\ln(2)+\frac{7}{6}\approx 0.0786\ldots$$ where $A$ is the Glaisher-Kinkelin constant, numerically $A \approx 1.282427\ldots$ I began by writing zeta as a sum and switching the summation order $$\sum_{n=2}^\infty \sum_{k=1}^\infty \frac{1}{(k+2)n^{2k+1}}$$ The inner sum is the series for $-n^3\ln(1-\frac{1}{n^2})-n-\frac{1}{2n}$ So, we have $\sum_{n=2}^\infty \left[-n^3\ln(1-\frac{1}{n^2})-n-\frac{1}{2n}\right]$ This series numerically checks out, so I am onto something. At first glance the series looks like it should diverge, but it does converge. Another idea I had was to write out the series of the series: $$1/3(1/2)^{3}+1/4(1/2)^{5}+1/5(1/2)^{7}+\cdots +1/3(1/3)^{3}+1/4(1/3)^{5}+1/5(1/3)^{7}+\cdots +1/3(1/4)^{3}+1/4(1/4)^{5}+1/5(1/4)^{7}+\cdots$$ and so on. This can be written as $$1/3x^{3}+1/4x^{5}+1/5x^{7}+\cdots +1/3x^{3}+1/4x^{5}+1/5x^{7}+\cdots
\begin{align*}\sum\limits_{k=1}^{\infty} \frac{\zeta(2k+1)-1}{2k+3} &= \sum\limits_{k=1}^{\infty}\sum\limits_{n=2}^{\infty} \frac{1}{(2k+3)n^{2k+1}}\\&= \sum\limits_{n=2}^{\infty}n^2\left(\sum\limits_{k=4}^{\infty} \frac{1}{kn^k} - \frac{1}{2}\sum\limits_{k=2}^{\infty} \frac{1}{kn^{2k}}\right)\\&= \sum\limits_{n=2}^{\infty}n^2\left(-\log\left(1-\frac{1}{n}\right)-\frac{1}{n}-\frac{1}{2n^2}-\frac{1}{3n^3} + \frac{1}{2n^2}+\frac{1}{2}\log\left(1-\frac{1}{n^2}\right)\right)\\&= \sum\limits_{n=2}^{\infty}n^2\left(\frac{1}{2}\log\left(\frac{n+1}{n-1}\right) - \frac{1}{n}-\frac{1}{3n^3}\right)\end{align*} Now, consider the partial sum: \begin{align*}S_N=\sum\limits_{n=2}^{N}n^2\log\left(\frac{n+1}{n-1}\right) &= \sum\limits_{n=2}^{N} ((n+1)^2 - 2(n+1)+1)\log (n+1) \\& \quad -\sum\limits_{n=2}^{N} ((n-1)^2 + 2(n-1)+1)\log (n-1) \\&= (N+1)^2\log(N+1)+N^2\log N - 2\log 2 + \log \left(\frac{N(N+1)}{2}\right)\\& \quad +2N\log N -2\sum\limits_{n=2}^{N+1} n\log n-2\sum\limits_{n=2}^{N} n\log n\end{
|sequences-and-series|
0
For continuous surjective map $f: X\to Y$, $f^{-1}(D_j) = \bigcup_{f(C_i) \subseteq D_j} C_i $ ( Connected components )?
Let $f: X\to Y$ be a continuous surjective map ( possibly open ) between topological spaces. Let $\{ C_i \}_{i}$ be the components of $X$ and $\{ D_j \}_j$ the components of $Y$ . Then for each $j$ , $$f^{-1}(D_j) = \bigcup_{f(C_i) \subseteq D_j} C_i $$ ? If so, why? If this question is true, then we can show that the induced map $$f^{*} : \{ \text{connected components of } X\} \to \{ \text{connected components of } Y \} , $$ $C \mapsto \text{the unique connected component in } Y \text{ containing } f(C) $ is surjective. EDIT : My first trial is as follows : Let $D$ be a connected component of $Y$ . Let $E:=f^{-1}(D)$ . Let $x\in E$ . Let $C_x$ be the connected component of $x$ in $X$ . Then $C_x$ is connected and by continuity, $f(C_x)$ is also connected in $Y$ . Since $f(C_x)$ and $D$ both contain $f(x)$ , $f(C_x) \cup D$ is also connected. By the maximality of connected component, $D = f(C_x) \cup D$ ; i.e., $f(C_x) \subseteq D$ . So we have $C_x \subseteq f^{-1}(f(C_x
Your approach is correct and works for each $f : X \to Y$ . Surjectivity is not needed. Let $\mathscr C(X)$ denote the set of components of $X$ . You correctly proved that the preimage $f^{-1}(D)$ of a component $D$ of $Y$ is a union of components of $X$ . This means that the set $$\mathscr C' = \{ C \in \mathscr C(X) \mid C \subset f^{-1}(D) \}$$ has the property that $$f^{-1}(D) = \bigcup_{C \in \mathscr C'} C .$$ The condition $C \subset f^{-1}(D)$ is equivalent to $f(C) \subset D$ and therefore $$f^{-1}(D) = \bigcup_{C \in \mathscr C(X) \text{ with } f(C) \subset D} C .$$
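The same bookkeeping can be checked in a finite toy model, where graphs stand in for spaces, connected components of graphs for topological components, and a graph homomorphism for a continuous map (the image of a connected subgraph is again connected, which is the only property the argument uses). This is only an illustrative sketch, not the topological proof; the graphs `X`, `Y` and the map `f` below are made up for the demonstration.

```python
from collections import deque

def components(adj):
    """Connected components of an undirected graph given as an adjacency dict."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = set(), deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp.add(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        comps.append(frozenset(comp))
    return comps

# X has components {0,1,2} and {3,4}; Y has components {'a','b'} and {'c'}
X = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
Y = {'a': ['b'], 'b': ['a'], 'c': []}
f = {0: 'a', 1: 'a', 2: 'b', 3: 'c', 4: 'c'}   # maps edges to edges or collapses them

# preimage of a component D equals the union of the components C with f(C) inside D
for D in components(Y):
    preimage = frozenset(v for v in X if f[v] in D)
    union = frozenset().union(*(C for C in components(X)
                                if frozenset(f[v] for v in C) <= D))
    assert preimage == union
```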
|general-topology|
1
Is Jordan measure a measure?
According to this answer the Jordan Measure is not a measure. This other question points out the fact that the Jordan measurable sets do not form a $\sigma$ -algebra. I am wondering whether the Jordan measure fails to be a measure because one of the $3$ conditions of a measure is not met or because there is no $\sigma$ -algebra on which it can be defined? Edit: If the Jordan measure is a measure, in what $\sigma$ -algebra is it defined?
Very much late to the party. The Jordan volume/measure/content is defined on the Jordan measurable sets (subsets of $\mathbb{R}^{n}$ whose indicator functions are Riemann integrable). These sets do not form a $\sigma$ -algebra because they're not closed under countable unions, as we can see for the classical example $\mathbb{Q}\cap[0,1]$ . So, as for reason 1, Jordan volume is not a measure because the indicator function of sets which are countable unions of disjoint Jordan measurable sets may not be Riemann integrable, which means (a) the volume of such unions wouldn't even be defined and (b) Jordan volume is not $\sigma$ -additive on Jordan sets. More simply put, it is as @Mittens said: not a measure because its domain isn't a $\sigma$ -algebra. As for reason 2, this tells us that the $\sigma$ -algebra $\sigma(J)$ $\textit{generated by}$ the collection $J$ of Jordan sets in $\mathbb{R}^{n}$ is formed by the sets which can be written as the union of a Borel set and a closed set. So, is J
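To see reason 1 concretely for the classical example: the indicator of $\mathbb{Q}\cap[0,1]$ has upper Darboux sum $1$ and lower Darboux sum $0$ on every partition, because each subinterval contains both a rational and an irrational point, so the two sums can never meet. A small sketch (the uniform partition is just for illustration):

```python
from fractions import Fraction
from math import sqrt

def darboux_sums(n):
    """Upper/lower Darboux sums of the indicator of Q on [0,1] for the
    uniform partition into n subintervals.  On each piece we exhibit a
    rational point (so the sup of the indicator there is 1) and an
    irrational point (so the inf there is 0)."""
    upper = lower = Fraction(0)
    for k in range(n):
        a, b = Fraction(k, n), Fraction(k + 1, n)
        rational = (a + b) / 2                         # midpoint is rational
        irrational = float(a) + float(b - a) / sqrt(2)  # irrational, since 1/sqrt(2) is
        assert a < rational < b and float(a) < irrational < float(b)
        upper += (b - a) * 1    # sup of the indicator over the subinterval
        lower += (b - a) * 0    # inf of the indicator over the subinterval
    return upper, lower

# no matter how fine the partition, the sums never meet:
for n in (2, 10, 1000):
    U, L = darboux_sums(n)
    assert U == 1 and L == 0
```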
|measure-theory|
1
Prove $T$ with bounded basis sum in Hilbert space is compact.
Let $T:\mathcal{H}\rightarrow \mathcal{H}$ be a linear continuous operator between Hilbert spaces and $\{b_i\: |\: i \in I\}$ an orthonormal basis. Prove that if $$\sum_{i\in I}\lVert T b_i\rVert^2 < \infty,$$ then $T$ is compact. Hilbert spaces are reflexive, as is easily seen from applying the Riesz Representation Theorem twice. This means that if $x_n$ is a sequence whose norm is bounded by $M$ , there is a subsequence such that $x_{n_j}\rightharpoonup x$ . By continuity of $T$ , we obtain that $T(x_{n_j})\rightharpoonup T(x)$ . I want to show this convergence is strong. Because we have a basis it is clear that from Parseval we obtain: $$\lVert T(x_{n_j})-T(x)\rVert^2=\sum_{i\in I}|\langle T(x_{n_j}-x),b_i\rangle |^2=\sum_{i\in I}|\langle x_{n_j}-x,T^*(b_i) \rangle|^2 $$ By our hypothesis there is also $I_o$ with $|I_o| < \infty$ such that $\sum_{i\in I_o^C}\lVert Tb_i\rVert^2 < \epsilon$ . Using weak convergence for every $i\in I_o$ we have $|\langle x_{n_j}-x, T^*(b_i) \rangle|\rightarrow 0$ , and thus one can ta
I think you can do exactly the same proof, except that you try to show that $T^*$ is compact. Let $x_n$ be a bounded sequence in $H$ . By reflexivity, you find a weakly convergent subsequence $x_{n_j}$ . Then $$||T^*(x_{n_j}) - T^*(x)||^2 = \sum_{i\in I} |\langle x_{n_j} - x, T(b_i)\rangle|^2$$ and you can apply your hypothesis to get that $T^*$ is compact, which is equivalent to $T$ itself being compact.
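Numerically one can also see why the hypothesis forces compactness: zeroing out all but the first $n$ columns of a matrix $T$ (i.e. composing with the projection onto $\operatorname{span}(b_1,\dots,b_n)$) gives a finite-rank $T_n$ with $\|T-T_n\|_{op} \le \|T-T_n\|_{HS} = \big(\sum_{i>n}\|Tb_i\|^2\big)^{1/2}$, and the right-hand side tends to $0$ under the hypothesis. A sketch with a randomly generated matrix whose column norms decay (the matrix and the decay rate are made up for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
# column i plays the role of T b_i; dividing by i^1.5 makes sum_i ||T b_i||^2 small
T = rng.standard_normal((N, N)) / (np.arange(1, N + 1) ** 1.5)

for n in (5, 20, 80):
    tail = T[:, n:]                            # T - T_n: the discarded columns
    op_err = np.linalg.norm(tail, 2)           # operator norm of T - T_n
    hs_err = np.linalg.norm(tail, 'fro')       # Hilbert-Schmidt norm = sqrt(tail sum)
    # the operator norm is always dominated by the Hilbert-Schmidt norm,
    # so T is a norm limit of finite-rank operators, hence compact
    assert op_err <= hs_err + 1e-12
```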
|functional-analysis|hilbert-spaces|compact-operators|
1
Epsilon proof verification
Does this epsilon proof work? I want to show $x>K \implies |f(x)-0|<\epsilon$ , where $f(x)=\dfrac{1}{x}+\log \left(\dfrac{x}{1+x}\right)\rightarrow 0$ as $x \rightarrow \infty$ . Using this inequality: $\frac{x-1}{x}\leq \log(x) \leq x-1$ $\dfrac{\dfrac{x}{1+x}-1}{\dfrac{x}{1+x}} \leq \log \left(\frac{x}{1+x}\right) \leq \frac{x}{1+x}-1$ $-\frac{1}{x} \leq \log \left(\frac{x}{1+x}\right) \leq \frac{-1}{1+x}$ $0 \leq \log \left(\frac{x}{1+x}\right) +\frac{1}{x} \leq \frac{-1}{1+x}+\frac{1}{x}$ $\log \left(\frac{x}{1+x}\right) +\frac{1}{x} \leq \frac{1}{(1+x)x}$ set $\epsilon=\dfrac{1}{(1+K)K}$ $\epsilon=\dfrac{1}{(1+K)K} \Leftrightarrow K=\frac{-\epsilon+\sqrt{\epsilon^2+4\epsilon}}{2\epsilon}$
The calculation is all fine, but it falls short at the end. Setting $\varepsilon$ to be a function of $K$ is a misstep; you don't get to determine what form $\varepsilon$ takes, as it is an arbitrary positive number. The next step, where you give $K$ as a function of $\varepsilon$ , is better; however, there is a small logical hole: you actually need to prove that $$x > \frac{-\varepsilon+\sqrt{\varepsilon^2+4\varepsilon}}{2\varepsilon} \implies |f(x) - 0| < \varepsilon.$$ All your argument shows is that when $x = K$ , then $|f(x) - L| \le \frac{1}{x(x + 1)} = \varepsilon$ , which is not the same thing. How do you know that larger values of $x$ won't result in larger values of $|f(x) - L|$ ? Essentially, to fix this approach, you need to show that $\frac{1}{x(x + 1)}$ is strictly monotone decreasing (which it is, as it is the reciprocal of a positive strictly monotone increasing function). That way, you know that $$x \ge K \implies |f(x) - L| \le \frac{1}{x(x + 1)} \le \frac{1}{K(K + 1)} = \varepsilon.$$ Alternatively, you could use the squeeze theorem.
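For what it's worth, the two facts used here, $0 \le f(x) \le \frac{1}{x(x+1)}$ and the monotone decrease of the bound, are easy to sanity-check numerically:

```python
from math import log

def f(x):
    return 1.0 / x + log(x / (1.0 + x))

# the chain of inequalities gives 0 <= f(x) <= 1/(x(x+1)) for x > 0, and the
# bound 1/(x(x+1)) is strictly decreasing, which is what legitimizes
# "x >= K implies |f(x)| <= 1/(K(K+1))"
xs = [0.5, 1.0, 2.0, 10.0, 100.0, 1e6]
for x in xs:
    assert -1e-12 <= f(x) <= 1.0 / (x * (x + 1.0)) + 1e-12  # small float slack
bounds = [1.0 / (x * (x + 1.0)) for x in xs]
assert bounds == sorted(bounds, reverse=True)   # the bound is decreasing
```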
|real-analysis|solution-verification|epsilon-delta|
0
Does $\tan^{-1}\left(\frac{x}{y}\right)+\tan^{-1}\left(\frac{y}{x}\right)=\text{sgn}(xy)\frac{\pi}{2}$?
Is it true that $$ \tan^{-1}\left(\frac{x}{y}\right)+\tan^{-1}\left(\frac{y}{x}\right)=\text{sgn}(xy)\frac{\pi}{2} $$ for $x,y\in\mathbb{R}\backslash \{0\}$ ? If so, how do I show it? Any hint would be useful.
From the post you linked , we know that $$\arctan(t)+\arctan\left(\frac{1}{t}\right)=\frac{\pi}{2}\text{sgn}(t),\quad\forall t\in\mathbb{R}\setminus \{0\}$$ Substituting $t=\frac{x}{y}$ with $y\neq 0$ , we get that $$\arctan\left(\frac{x}{y}\right)+\arctan\left(\frac{y}{x}\right)=\frac{\pi}{2}\text{sgn}\left(\frac{x}{y}\right),\quad\forall x,y\in\mathbb{R}\setminus \{0\}$$ Since $\text{sgn}\left(\frac{x}{y}\right)=\text{sgn}\left(xy\right)$ , $$\arctan\left(\frac{x}{y}\right)+\arctan\left(\frac{y}{x}\right)=\frac{\pi}{2}\text{sgn}\left(xy\right),\quad\forall x,y\in\mathbb{R}\setminus \{0\}$$
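A quick numerical spot check of the identity, sampling both signs of $x$ and $y$:

```python
from math import atan, pi, copysign

def lhs(x, y):
    # left-hand side of the identity
    return atan(x / y) + atan(y / x)

def sgn(t):
    return copysign(1.0, t)   # t is never 0 in the samples below

for x in (-3.0, -0.5, 1e-3, 2.0, 7.5):
    for y in (-2.0, -1e-4, 0.25, 4.0):
        assert abs(lhs(x, y) - sgn(x * y) * pi / 2) < 1e-12
```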
|trigonometry|
1
Is there a perfect group in which not every element is a commutator?
Is there a perfect group in which not every element is a commutator? By a well-known fact, it must have order at least $96.$ By Ore's conjecture (now a theorem), it must be infinite or non-simple.
In addition to Isaacs' Theorem that Arturo mentioned, below is a concrete example of the application of this theorem. It appeared in the paper On Commutators in Groups , by L-C. Kappe, R. F. Morse, Groups St Andrews $2005$ , Vol. 2, p.541. However, their calculation in Example 4.5 is wrong and we present the improved one. The example remains correct. By the way, it seems (GAP calculations needed) that the smallest perfect group that contains an element not being a commutator, has order $960$ : Let $G=C_2 \wr A_5$ , the regular wreath product. Then $|G|=2^{60} \cdot 60$ . Observe that the maximal abelian subgroups of $A_5$ are exactly its Sylow subgroups. As usual, let $n_p(G)$ denote the number of Sylow $p$ -subgroups of $G$ . Then $n_2(A_5)=5, n_3(A_5)=10, n_5(A_5)=6$ . This yields the sum: $$5(\frac{1}{2})^{15} + 10(\frac{1}{2})^{20} + 6(\frac{1}{2})^{12} \lt 21 \cdot (\frac{1}{2})^{12} \lt \frac{1}{2}.$$ Hence by Isaacs' theorem (and the remark at the end of his paper: if $H$ is sim
|abstract-algebra|group-theory|conjectures|
0
There exists a constant $c$ such that if a positive integer $a$ is even and not a multiple of $10$, the sum of the digits of $a^k$ is $\ge c\log k$ for all $k\ge 2$
I'm going through Number Theory: Concepts And Problems by Titu Andreescu, and I'm stuck on exercise 2.43. The question says: Prove that there exists a constant $c > 0$ with the following property: if a positive integer $a$ is even and not a multiple of $10$ , then the sum of the digits of $a^k$ is greater than $c \log k$ for all $k\ge 2$ But I don't see how the author arrives at the answer. Especially, I do not know what the motivation is for defining a sequence $ \left(b_{n}\right)_{n \geq 0}$ by $b_{0} = 0 $ and $b_{n+1} = 1+\left[b_{n} \log_{2}(10)\right]$ . Proof. Define a sequence $ \left(b_{n}\right)_{n \geq 0}$ by $b_{0} = 0 $ and $b_{n+1} = 1+\left[b_{n} \log _{2}(10)\right]$ . This sequence is increasing and $ b_{n+1} \leq\left(1+\log _{2}(10)\right) b_{n} $ for $ n \geq 1$ , thus $b_{n} \leq c^{n}$ for all $ n \geq 1 $ , where $c = 1+\log _{2}(10)$ . Suppose now that $ k \geq b_{n}$ and write $a^{k} = c_{0}+10 c_{1}+\ldots$ in base $10 $ . For each $2 \leq j \leq n$ we have that
Let's look at the problem from another angle. Assume $b_1, b_2, b_3, ...$ is an increasing sequence such that $b_1=1$ and for every $i \in \mathbb N,$ $2^{b_{i+1}}$ has $b_i+1$ digits. At first glance, this definition may seem strange, but what does that mean? In fact, what it means is the reasoning behind its definition! This definition simply implies that: $$2^{b_{i+1}}>c_0+c_1 {10}^1+c_2 {10}^2+ ... +c_{b_{i-1}}{10}^{b_{i-1}}+...+c_{b_{i}-1}{10}^{b_i-1}, \ \ (*)$$ where the coefficients belong to $\{0,1, ..,9\}.$ Now, let's assume $k \geq b_{n+1}$ . Considering $a^k=c_0+c_1 {10}^1+c_2 {10}^2+ ... $ (in base $10$ and $c_0 \neq 0$ ), we have: $$a^k \geq 2^{b_{n+1}} >c_0+c_1 {10}^1+c_2 {10}^2+ ... +c_{b_{n-1}}{10}^{b_{n-1}}+...+c_{b_{n}-1}{10}^{b_n-1} \neq 0. $$ Now, note that $2^{b_{n}}|a^k.$ So, $$2^{b_{n}} | c_0+c_1 {10}^1+c_2 {10}^2+ ... +c_{b_{n-1}}{10}^{b_{n-1}}+...+c_{b_{n}-1}{10}^{b_n-1}.$$ Since $2^{b_{n}} >c_0+c_1 {10}^1+c_2 {10}^2+ ... +c_{b_{n-2}}{10}^{b_{n-2}}+...+c_{b_{n-
|sequences-and-series|elementary-number-theory|
1
Cubic sphere packing in 8 dimensions
Kissing number in 8 dimensions is supposedly 240 , for spheres that are centered around points of E8 lattice. However, if we simply take an 8-cube, it has 256 vertices. Using each vertex as a center of a sphere, we can put another sphere in the middle of this cube, touching all of them. This alone is already more than 240. In addition, this middle sphere would actually touch the centers of all 16 7-faces (because the diagonal of unit 8-cube is exactly 2, and thus the middle sphere has a radius of 1/2, touching all of 7-faces), and thus 16 additional spheres in the neighboring cubes, giving total kissing number of 272. Why doesn't this argument work?
The diagonal of an 8-dimensional cube of side length $2$ is $4 \sqrt{2}$ . You can fit 256 unit spheres on the corners. However, the unit sphere in the center will not touch any corner sphere, since along the diagonal $1+2+1 = 4 < 4\sqrt{2}$ .
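The arithmetic is easy to verify directly: taking the cube $[-1,1]^8$ (side $2$), the corner-to-center distance is $2\sqrt{2} \approx 2.83 > 1+1$, so a unit sphere at a corner and a unit sphere at the center stay strictly apart:

```python
from math import sqrt, dist

dim = 8
corner = [1.0] * dim          # a corner of the cube [-1, 1]^8, which has side 2
center = [0.0] * dim

full_diagonal = 2 * dist(corner, center)
assert abs(full_diagonal - 4 * sqrt(2)) < 1e-12

# unit spheres at a corner and at the center touch iff the distance between
# their centers equals 1 + 1 = 2; here it is 2*sqrt(2) ~ 2.83
gap = dist(corner, center) - (1 + 1)
assert gap > 0                # the spheres do not touch
```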
|geometry|spheres|
1
Eigenvalues and corresponding eigenvectors of a family of pentadiagonal matrices
Let $A=(a_{ij})$ be a square matrix of order $n\times n$ such that $a_{ij}=1$ if $j=i+2$ or $i=j+2$ ; otherwise it is $0$ . I want to find the generalized formula for the eigenvalues and corresponding eigenvectors of $A$ .
Hint (Too long for a comment) : You are looking for a general (closed-form) formula for the eigenvalues of this pentadiagonal matrix. I don't know if such a formula exists, but, maybe, having particular cases may help you in your quest. With a little "Sage" program, I have obtained the factorized version of the characteristic polynomials up to $n=15$ of these matrices $n \times n$ evidencing that in most cases the (real, of course) eigenvalues are rather simple (even when they are "double" :)) : $$\begin{cases} P_3(x)&=& (x + 1)(x - 1)x\\ P_4(x)&=& (x + 1)^2(x - 1)^2\\ P_5(x)&=& (x^2 - 2)(x + 1)(x - 1)x\\ P_6(x)&=& (x^2 - 2)^2x^2\\ P_7(x)&=& (x^2 + x - 1)(x^2 - x - 1)(x^2 - 2)x\\ P_8(x)&=& (x^2 + x - 1)^2(x^2 - x - 1)^2\\ P_9(x)&=& (x^2 + x - 1)(x^2 - x - 1)(x^2 - 3)(x + 1)(x - 1)x\\ P_{10}(x)&=&(x^2 - 3)^2(x + 1)^2(x - 1)^2x^2\\ P_{11}(x)&=&(x^3 + x^2 - 2x - 1)(x^3 - x^2 - 2x + 1)(x^2 - 3)(x + 1)(x - 1)x\\ P_{12}(x)&=&(x^3 + x^2 - 2x - 1)^2(x^3 - x^2 - 2x + 1)^2\\ P_{13}(x)&=&(x^4 - 4x^2
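A further observation that may help (my own, but consistent with the factorizations above): since $a_{ij}\neq 0$ only when $|i-j|=2$, the odd- and even-indexed coordinates never interact, so $A$ is permutation-similar to the direct sum of the adjacency matrices of two paths, of lengths $\lceil n/2\rceil$ and $\lfloor n/2\rfloor$, whose eigenvalues are the classical $2\cos\frac{j\pi}{m+1}$. That gives a closed form for all eigenvalues, which the following sketch checks numerically:

```python
import numpy as np

def penta(n):
    """The n x n matrix with a_ij = 1 iff |i - j| = 2."""
    A = np.zeros((n, n))
    idx = np.arange(n - 2)
    A[idx, idx + 2] = A[idx + 2, idx] = 1.0
    return A

def path_eigs(m):
    # adjacency matrix of a path on m vertices: eigenvalues 2 cos(j*pi/(m+1))
    return 2 * np.cos(np.arange(1, m + 1) * np.pi / (m + 1))

for n in range(3, 16):
    # odd and even positions decouple (|i-j| = 2 preserves parity), so A is
    # permutation-similar to the direct sum of two path adjacency matrices
    expected = np.concatenate([path_eigs((n + 1) // 2), path_eigs(n // 2)])
    got = np.linalg.eigvalsh(penta(n))
    assert np.allclose(np.sort(got), np.sort(expected))
```

For example, $n=8$ gives two paths of length $4$ with eigenvalues $2\cos\frac{j\pi}{5}$, the roots of $(x^2+x-1)(x^2-x-1)$, each doubled, matching $P_8$ above.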
|matrices|eigenvalues-eigenvectors|
0
Efficiently find the intersection of a ray with a convex hull
In some other question regarding basis selection a surprising solution came up, where we construct the convex hull of the given points and then shoot a specific ray from the origin. The corners of the facet that got hit first will be the selected basis. But constructing the whole convex hull is not feasible within my framework. Thus I am looking for a way to quickly intersect a ray with a convex hull. Now let's describe the problem more concretely: Let $\vec a_1, \dots, \vec a_n \in \mathbb{N}^m \setminus \{\vec 0\}$ be arbitrary and $C$ be their convex hull. Let $\vec b \in \mathbb{N}^m \setminus \{\vec 0\}$ be some vector such that there exist $\lambda_1, \dots, \lambda_n \in \mathbb{N}$ such that $\vec b = \lambda_1\vec a_1 + \dots +\lambda_n\vec a_n$ . This implies that $\vec b$ is always "behind" $C$ when viewed from the origin and thus the ray $R := \{\gamma\vec b\mid \gamma \in \mathbb{R}_{\geq 0}\}$ will always intersect $C$ at some point $\vec p$ (I want the first intersecti
I figured it out by myself. My question is basically a variation on the question whether a single point is inside a convex hull. This is just an adapted version of this answer . Let's start with one fact I found here : If my $\vec a_1, \dots, \vec a_n$ are affinely independent, then any vector $\vec p \in C$ can be represented as a unique affine combination of $\vec a_1, \dots, \vec a_n$ . Let's first assume that they are and handle a possible dependence later. Remember that an affine combination is a linear combination $\vec p = \lambda_1\vec a_1 + \dots + \lambda_n\vec a_n$ where $\lambda_1 + \dots + \lambda_n = 1$ and $\lambda_j \geq 0$ . Because the $\vec p$ we are seeking lies on a facet, and thus on the affine space of the corners of that facet, we know that there exists an affine combination of just the corners to arrive at $\vec p$ . And as the affine combination of all $\vec a_j$ is unique, we know that in this affine combination, all coefficients of all unrelated $\vec a_
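For completeness, the first-intersection parameter $\gamma^\ast$ can also be computed directly as one small linear program over the weights, with no hull construction at all: minimize $\gamma$ subject to $\sum_i \lambda_i \vec a_i = \gamma \vec b$, $\sum_i \lambda_i = 1$, $\lambda_i \ge 0$. A sketch using `scipy.optimize.linprog`; the toy points at the end are made up for the check:

```python
import numpy as np
from scipy.optimize import linprog

def ray_hull_entry(points, b):
    """First gamma >= 0 with gamma*b in conv(points), found by an LP over the
    barycentric weights lambda.  Returns (gamma, lambda), or (None, None)
    if the ray misses the hull (infeasible LP)."""
    A = np.asarray(points, dtype=float)        # rows a_1 .. a_n
    n, m = A.shape
    # variables: (lambda_1 .. lambda_n, gamma); objective: minimize gamma
    c = np.zeros(n + 1)
    c[-1] = 1.0
    A_eq = np.zeros((m + 1, n + 1))
    A_eq[:m, :n] = A.T                         # sum_i lambda_i a_i ...
    A_eq[:m, -1] = -np.asarray(b, dtype=float) # ... = gamma * b
    A_eq[m, :n] = 1.0                          # sum_i lambda_i = 1
    b_eq = np.zeros(m + 1)
    b_eq[m] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
    if not res.success:
        return None, None
    return res.x[-1], res.x[:n]

# toy check: hull of (1,0),(0,1),(2,2); the ray along b=(1,1) first meets the
# segment between (1,0) and (0,1) at (1/2, 1/2), i.e. gamma = 1/2
gamma, lam = ray_hull_entry([(1, 0), (0, 1), (2, 2)], (1, 1))
assert abs(gamma - 0.5) < 1e-8
```

The nonzero entries of the returned `lam` then identify the corners of the facet that was hit, which is exactly the basis-selection data from the original question.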
|linear-algebra|geometry|euclidean-geometry|convex-hulls|
1
Clarification about proof of Atiyah-Macdonald $3.19$ viii)
$\DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Supp}{Supp}$ Let $A$ be a ring, $M$ an $A$ -module. The support of $M$ is defined to be the set $\Supp(M)$ of prime ideals $\frak{p}$ of $A$ such that $M_{\frak{p}}\neq 0.$ Prove that if $f: A\to B$ is a ring homomorphism and $M$ is a finitely generated $A$ -module, then $\Supp(B \otimes_A M)=f^{*-1}(\Supp(M)).$ [ $f^*$ is the map $\Spec(B)\to \Spec(A)$ sending $q \mapsto f^{-1}(q)$ .] I'm reading Jeffrey Carlson's solution here . They write, My trouble is that $1/1$ may not be the only generator of $N$ , so I don't see why $(1/1) \otimes (x/1)=0$ in $N \otimes_{A_{\frak{p}}} M_{\frak{p}}$ for all $x \in M_{\frak{p}}$ implies that $N \otimes_{A_{\frak{p}}} M_{\frak{p}}=0.$ How does this follow?
I assume $N_i \subset B_{\mathfrak{q}}$ , rather than $N_i \subset B$ ? Any simple tensor of $N \otimes_{A_{\mathfrak{p}}} M_{\mathfrak{p}}$ is of the form $(b/t) \otimes (m/s)$ for $b \in B$ , $t \in B-\mathfrak{q}$ , $m \in M$ and $s \in A-\mathfrak{p}$ , which is just $(b/f(s)t) \cdot (1/1)\otimes (m/1)$ , considering that the $A_{\mathfrak{p}}$ -action on $N$ is given by $(a/s)\cdot (b/t) = f(a)b/f(s)t$ . By the proof above, any element of the form $(1/1)\otimes (m/1)$ is zero, so $N\otimes_{A_{\mathfrak{p}}} M_{\mathfrak{p}}= 0$ follows.
|commutative-algebra|modules|tensor-products|localization|finitely-generated|
0
A weak homotopy category for CW complexes
I am working on Weibel's K-book and have, while meditating on phantom maps and weak homotopies, asked myself: in the same way as there is a category $Ho(Top)$ identifying homotopic maps, is there a category $Ho_w(Top)$ identifying weakly homotopic maps? To recall, two maps are weakly homotopic if they induce homotopic maps when precomposed with any map from a finite CW-complex. I do not know any of the formalism of model category theory (which I believe is the setting for precisely defining $Ho(Top)$ ), which implies my question might be "bad" in the sense that the answer is simply "yes you can, just do it". Thank you for any help in satisfying my curiosity.
The model theory people probably have a smarter answer, but to "naively" answer the question: Yes. In general, if $\mathcal{C}$ is a category and we have an equivalence relation $\sim$ on the Hom-set $\mathcal{C}(x,y)$ for all objects $x,y\in\mathrm{Ob}(\mathcal{C})$ s.t. $f\sim f^{\prime}$ in $\mathcal{C}(x,y)$ and $g\sim g^{\prime}$ in $\mathcal{C}(y,z)$ implies $gf\sim g^{\prime}f^{\prime}$ in $\mathcal{C}(x,z)$ , then we can form a quotient category $\mathcal{C}/\sim$ with the same objects as $\mathcal{C}$ and Hom-sets $(\mathcal{C}/\sim)(x,y)=\mathcal{C}(x,y)/\sim$ . The assumption on $\sim$ implies that composition in $\mathcal{C}$ induces a well-defined composition on $\mathcal{C}/\sim$ and the latter inherits the axioms of a category from the former. Then, there is a projection functor $p\colon\mathcal{C}\rightarrow\mathcal{C}/\sim$ , which is universal with the property that $f\sim f^{\prime}$ implies $p(f)=p(f^{\prime})$ . Taking $\mathcal{C}=\mathbf{Top}$ and $\sim$ the relation of weak homotopy then yields the desired category.
|algebraic-topology|homotopy-theory|model-categories|
1
What exactly is the proof of onto here?
I have been reading the solution of the following question: We can regard $\pi_1(X, x_0)$ as the set of basepoint-preserving homotopy classes of maps $(S^1,s_0) \to (X, x_0).$ Let $[S^1, X]$ be the set of homotopy classes of maps $S^1\to X,$ with no conditions on basepoints. Thus there is a natural map $\Phi: \pi_1(X, x_0) \to [S^1, X]$ obtained by ignoring basepoints. Show that $(a)$ $\Phi$ is onto if $X$ is path-connected, and $(b)$ $\Phi([f]) = \Phi([g])$ iff $[f]$ and $[g]$ are conjugate in $\pi_1(X, x_0).$ Here: Why is $\phi$ onto if $X$ is path connected in Hatcher's exercise 1.1.6? I am not quite sure where exactly the proof of surjectivity is in this solution; could someone clarify this to me please? (I want a proof without using the homotopy extension property, please, if possible.)
The fact that $\Phi$ is onto is quite simple. All it means is that every loop $f:[0,1]\to X$ is (freely) homotopic to a loop based at $x_0$ . If $X$ is path connected, then there is a path $\lambda:[0,1]\to X$ from $x_0$ to $f(0)=f(1)$ . Then $f$ is freely homotopic to the path composition $\lambda * f*\lambda^{-1}$ , which is a loop based at $x_0$ . And this follows from general homotopic properties of path composition.
|algebraic-topology|homology-cohomology|homotopy-theory|fundamental-groups|path-connected|
1
A book costs $1 plus half its price.
Can anyone tell me exactly what is the question that is being asked in English for this equation? I am confused. Please check the link for the video: https://youtube.com/shorts/mGDnsbKr7IM?si=3eLrjOzLffTp3SBb A book costs \$1 plus half its price. How much does it cost? A. \$1 B. \$1.50 C. \$2 Is the answer not \$1.50? The answer for this problem is \$2. Is he actually asking how much more of a price will it cost if you add another $1 plus half its price on top of the current book’s price? Meaning let’s say the book cost \$2 and by adding another \$1 plus half its price, it will be another \$2 (hence the answer \$2) which makes the total \$4.
This is a very famous problem. The solution is as follows: Let the price of the book be $x$ . Then according to the question we have $$1+\frac{x}{2}=x$$ $$\implies 1=\frac{x}2$$ $$\implies\boxed{\color{red}{x=2}}$$ The question asks you the price of one book. It says that the book costs $1$ dollar, plus half its price. Emphasis on the comma pls.
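The self-referential reading can also be checked mechanically: the price solves the fixed-point equation $x = 1 + x/2$, which $2$ satisfies and $1.50$ does not:

```python
from fractions import Fraction

# "costs $1 plus half its price" is the fixed-point equation x = 1 + x/2;
# algebraically x - x/2 = 1, so x/2 = 1 and x = 2
x = Fraction(2)
assert x == 1 + x / 2

# the naive reading "$1 plus half of $1" gives 1.50, which does NOT satisfy it
assert Fraction(3, 2) != 1 + Fraction(3, 2) / 2

# iterating the equation also converges to 2 from any starting price
p = Fraction(0)
for _ in range(50):
    p = 1 + p / 2
assert abs(p - 2) < Fraction(1, 10**9)
```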
|systems-of-equations|
0
Computation: Riemann tensor of 2D manifold via conformal transformation
I am having issues computing the Riemann tensor of a 2D manifold. It should be very simple, but I cannot see the mistake I am making. Assume a 2D manifold with metric in very simple form $$\tilde{g}=\frac{1}{f(x)^2}(dx^2+dy^2).$$ Since this is evidently conformally flat (indeed all 2D manifolds are, but here one can see it easily), I wanted to use this to my advantage. For example from https://en.wikipedia.org/wiki/List_of_formulas_in_Riemannian_geometry#Conformal_change (employing notation I can easily TeX from there) the conformal transformation $\tilde{g}=e^{2\varphi}g$ changes the Riemann tensor as $$R(\tilde{g})=e^{2\varphi}(R(g)-g\bar\wedge{}T)$$ where $\bar\wedge$ denotes the Kulkarni–Nomizu product and $$T=\nabla{}d\varphi - d\varphi\,d\varphi+\frac{1}{2}|d\varphi|^2g.$$ where the norm and $\nabla$ correspond to $g = dx^2 + dy^2$ , which is flat. Therefore I believe that also $R(g) = 0$ . Since $\varphi=-\log{f(x)}$ , I believe that $$d\varphi = -\frac{f'}{f}\,dx,$$ $$\nabla{}d\varphi = (-
One can notice that $\bar{g}\bar\wedge{}dx\,dx = g_{yy}(dx\,dy\,dx\,dy + dy\,dx\,dy\,dx)$ and $\bar{g}\bar\wedge{}dy\,dy = g_{yy}(dy\,dx\,dy\,dx+dx\,dy\,dx\,dy)$ , while also $g_{xx} = g_{yy}$ . Therefore, thanks to linearity of $\bar\wedge$ in its second argument, it holds $$2\bar{g}\bar\wedge{}dx\,dx = \bar{g}\bar\wedge(dx\,dx + dy\,dy) =f^2 \bar{g}\bar\wedge\bar{g}.$$ It is a feature of low dimension and diagonality of $\bar{g}$ , which together with symmetries of $\bar\wedge$ largely limits the number of components of $\bar{g}\bar\wedge{}A$ for any symmetric 2-form $A$ . Therefore $$R(\bar{g}) = \frac{1}{2}(-f'^2+f\,f'')\bar{g}\bar\wedge{}\bar{g},$$ as I wanted to prove. Note that one can easily deduce solutions for a unit sphere, for which it must hold $-f'^2+ff'' = 1$ (to keep scalar curvature positive this implies $f(x) = \cosh(x)$ and indeed corresponds to $\tanh(x)=\cos(R)$ ; $y=\varphi$ substitution in the usual sphere metric $g_{S^2} = dR^2 + \sin^2(R)d\varphi^2$ ) and for flat pl
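As an independent check of the bookkeeping, the Gauss curvature of $\tilde g = e^{2\varphi}(dx^2+dy^2)$ is the standard $K = -e^{-2\varphi}\Delta\varphi$, which for $\varphi=-\log f$ comes out to $f\,f'' - f'^2$, matching the factor $-f'^2+f\,f''$ above. A symbolic sketch in sympy (with $f=\cosh x$ as the sphere sanity check):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f', positive=True)(x)

phi = -sp.log(f)                               # conformal factor: g~ = e^{2 phi} (dx^2 + dy^2)
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
K = sp.simplify(-sp.exp(-2 * phi) * lap_phi)   # Gauss curvature of g~

# K = f f'' - f'^2, the coefficient appearing in R(g~) above
assert sp.simplify(K - (f * sp.diff(f, x, 2) - sp.diff(f, x) ** 2)) == 0

# sanity check: f = cosh x should give the unit sphere, K = 1
assert sp.simplify(K.subs(f, sp.cosh(x)).doit()) == 1
```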
|differential-geometry|riemannian-geometry|
1
Trigonometric Identities: Given that $2\cos(3a)=\cos(a)$ find $\cos(2a)$
Given that $2\cos(3a)=\cos(a)$ find $\cos(2a)$ . $2\cos(3a)=\cos(a)$ I converted $\cos(2a)$ into $\cos^2(a)-\sin^2(a)$ Then I tried plugging in. I know this is not right, but I have no clue how to solve this. Hints please? edit: Because I got that $\cos(2a) = 4\cos^2(3a)-1$
We want $\cos 2a$ , given $2\cos 3a = \cos a$ . $$2\cos 3a = \cos a \iff 2(4\cos^3 a - 3\cos a) = \cos a \iff 8\cos^3 a - 6\cos a = \cos a,$$ so $8\cos^3 a = 7\cos a$ . Dividing by $\cos a$ (this assumes $\cos a \neq 0$ ; note $\cos a = 0$ also satisfies the equation and would give $\cos 2a = -1$ ) yields $8\cos^2 a = 7$ , i.e. $\cos^2 a = \frac{7}{8}$ . Now $\cos 2a = \cos^2 a - \sin^2 a$ , so we need $\sin^2 a$ : from $\sin^2 a + \cos^2 a = 1$ we get $\sin^2 a = 1 - \frac{7}{8} = \frac{1}{8}$ . Therefore $$\cos 2a = \frac{7}{8} - \frac{1}{8} = \frac{6}{8} = \frac{3}{4}.$$ I'm sorry if it's not written as beautifully as above
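A one-line numerical confirmation, picking the root with $\cos a = \sqrt{7/8}$ (any root of the original equation with $\cos a \neq 0$ works the same way):

```python
from math import acos, cos, sqrt

# from the derivation cos^2 a = 7/8, so take a = arccos(sqrt(7/8))
a = acos(sqrt(7 / 8))
assert abs(2 * cos(3 * a) - cos(a)) < 1e-12   # a really satisfies 2 cos 3a = cos a
assert abs(cos(2 * a) - 3 / 4) < 1e-12        # and then cos 2a = 3/4
```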
|trigonometry|proof-writing|
0
Evaluate the limit involving trace of a Matrix
Let $A=[a_{ij}]_{n×n}$ be a square Matrix such that $a_{ij}=(3i+2)5^{-j}$ then the value of $$\lim_{n\to\infty} (\operatorname{tr}(A^n))^{\frac{1}{n}}, n\in \mathbb{N}$$ I can see the pattern of elements in the matrix $A$ but I can't calculate $A^n.$ Also, I don't know any property of trace that relates, $\operatorname{tr}(A^n)$ to $\operatorname{tr}(A)$ . Any help is greatly appreciated.
When we multiply a matrix $A$ by itself, $\begin{bmatrix} a_{11} & a_{12} & ..... \\ a_{12} & a_{22} & .... \\ .... \end{bmatrix}$ to get $\begin{bmatrix} A_{11} & A_{12} & ..... \\ A_{12} & A_{22} & .... \\ .... \end{bmatrix}$ , $A_{ij} = \sum_{k=1}^n a_{ik}a_{kj}$ . Therefore $A_{tt} = \sum_{k=1}^n \frac{(3t+2)}{5^t}\frac{(3k+2)}{5^k}$ which is equal to $S_{n}\frac{(3t+2)}{5^t}$ . Thus, trace of $A^2$ will be $\sum_{t=1}^n A_{tt}$ which is $S^2_{n}$ . Similarly if we multiply $\begin{bmatrix} A_{11} & A_{12} & ..... \\ A_{12} & A_{22} & .... \\ .... \end{bmatrix}$ and $\begin{bmatrix} a_{11} & a_{12} & ..... \\ a_{12} & a_{22} & .... \\ .... \end{bmatrix}$ to get $\begin{bmatrix} B_{11} & B_{12} & ..... \\ B_{12} & B_{22} & .... \\ .... \end{bmatrix}$ , $B_{ij} = \sum_{k=1}^n A_{ik}a_{kj}$ . $B_{tt} = \sum_{k=1}^n A_{tk}a_{kt} = \sum_{k=1}^n (\sum_{z=1}^n a_{tz}a_{zk})a_{kt} = \sum_{k=1}^n S_{n} \frac{(3t+2)}{5^t}\frac{(3k+2)}{5^k} = S_{n}^2\frac{(3t+2)}{5^t}$ . Thus trace of matrix
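There is also a shortcut worth noting here: $a_{ij}=(3i+2)5^{-j}$ means $A = uv^{T}$ is rank one (with $u_i = 3i+2$, $v_j = 5^{-j}$), so $A^n = (v\cdot u)^{\,n-1}A$ and $\operatorname{tr}(A^n) = (v\cdot u)^n$, where $v\cdot u = \sum_{k=1}^{n}(3k+2)5^{-k}$ is the $S_n$ above. Hence $(\operatorname{tr}(A^n))^{1/n} = S_n \to \frac{23}{16}$. A numerical sketch:

```python
import numpy as np

def trace_root(n):
    i = np.arange(1, n + 1)
    u, v = 3 * i + 2, 5.0 ** (-i)
    A = np.outer(u, v)                  # a_ij = (3i+2) 5^{-j}: a rank-one matrix
    s = u @ v                           # = tr(A); rank one gives tr(A^n) = s^n
    t = np.trace(np.linalg.matrix_power(A, n))
    assert np.isclose(t, s ** n)        # confirm the rank-one identity numerically
    return t ** (1 / n)

# the limit is sum_{k>=1} (3k+2)/5^k = 15/16 + 1/2 = 23/16
assert abs(trace_root(20) - 23 / 16) < 1e-9
```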
|real-analysis|matrices|limits|
1
Expected value of $E\left[Y|X\right]$ with 3 random variables $X$ and $Y$ and $Z$ and the $MSE$ of it
Okay, the information we have: $Z \sim N(0,1)$ ; $Z$ is statistically independent of $X$ ; $Y=\begin{cases}a\left|X\right|&Z\ge 0\\ b\left|X\right|&Z<0\end{cases}$ We know that $X$ is a continuous random variable with $P\left(-1\le X\le 1\right)=1$ , density $f_X\left(x\right)$ , $E[X]=\mu$ and $Var[X]=\sigma^2$ . I have to find $$\hat{Y}_{opt}\left(X\right)=E\left[Y|X\right]$$ To be honest, I do not know how to start. Thought maybe separating into $X\ge0$ and $X\le0$ and then do: $E\left[Y|X\right]=E\left[Y|X\ge 0\right]+E\left[Y|X<0\right]$ But I do not know if it is allowed or how to continue with it. I don't think I have to use the Law of total expectation since I will have two dependencies and I don't know if it is possible to solve it? I guess the $MSE$ I can solve afterwards since it is just $MSE\left\{\hat{Y}_{opt}\left(X\right)\right\}=E\left[\left(Y-\hat{Y}_{opt}\left(X\right)\right)^2\right]$ . But my problem is finding $\hat{Y}_{opt}\left(X\right)$ Any clues will be appreciated!!
But I do not know if it is allowed or how to continue with it. You know the law of total expectation, apply it to the conditional expectation: $$E[Y|X] = E[ E[Y|X,U]]$$ where $U=1$ if $Z \ge 0$ , $U=0$ otherwise, and the outer expectation is wrt to $U$ . Now, $E[Y|X,U=1] = a |X|$ and $E[Y|X,U=0] = b |X|$ , which we can write as $$ E[Y|X,U] = |X| [ a U + b(1-U) ] $$ Can you go on from here?
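Since $P(Z\ge 0)=\tfrac12$, this gives $E[Y|X] = |X|\,\frac{a+b}{2}$, which is easy to sanity-check by simulation; the values of $a$, $b$ and the uniform choice of $X$ below are made up for the demonstration (any $X$ supported on $[-1,1]$ works):

```python
import random
from statistics import mean

random.seed(1)
a, b = 3.0, -1.0
N = 200_000

def sample():
    x = random.uniform(-1, 1)           # a continuous X supported on [-1, 1]
    z = random.gauss(0, 1)              # Z ~ N(0,1), independent of X
    y = a * abs(x) if z >= 0 else b * abs(x)
    return x, y

pairs = [sample() for _ in range(N)]
# the tower property gives E[Y|X] = |X| (a P(Z>=0) + b P(Z<0)) = |X| (a+b)/2,
# so in particular E[Y] = (a+b)/2 * E|X|
lhs = mean(y for _, y in pairs)
rhs = (a + b) / 2 * mean(abs(x) for x, _ in pairs)
assert abs(lhs - rhs) < 0.02
```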
|probability|conditional-expectation|mean-square-error|
1
Sequence of functions that converges strongly in $L^3$, weakly in $L^2$ and not strongly in $L^2$
How can I construct a sequence of functions $f_n: \mathbb{R} \rightarrow \mathbb{R}$ such that $$f_n \overset{L^3}{\to} 0 \\ f_n \overset{w-L^2}{\to} 0 \\ f_n \overset{L^2}{\not\to} 0 $$ I know without the convergence in $L^3$ part you want to use something like an orthonormal basis...
Take disjoint intervals $I_1,I_2,\ldots$ with lengths $l_1, l_2, \ldots$ such that $\lim_n l_n=\infty$ . Let $f_n=\frac{1}{\sqrt{l_n}}1_{I_n}$ . Because the intervals are pairwise disjoint, this is an orthonormal family in $L_2$ , and is therefore weakly converging to $0$ in $L_2$ . Since the sequence is weakly convergent to $0$ in $L_2$ , if it were strongly convergent to something in $L_2$ , that something would have to be $0$ . But $$\|f_n\|_2=1$$ for all $n$ , so the sequence $f_n$ cannot converge strongly in $L_2$ . Last, $$\|f_n\|_3 =l_n^{-1/6}\to 0,$$ so $f_n$ converges strongly to $0$ in $L_3$ .
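As a quick arithmetic check of the norms used above (a sketch of mine in plain Python, using the closed form $\|f_n\|_p = l_n^{1/p - 1/2}$ that follows from $f_n = l_n^{-1/2}\mathbf{1}_{I_n}$):

```python
# f = l^{-1/2} on an interval of length l, zero elsewhere, so
# ||f||_p = ( l * (l^{-1/2})^p )^{1/p} = l^{1/p - 1/2}.
def lp_norm(l, p):
    return (l * l ** (-p / 2.0)) ** (1.0 / p)

for l in [10.0, 100.0, 10_000.0]:
    assert abs(lp_norm(l, 2) - 1.0) < 1e-9            # ||f_n||_2 = 1 for every n
    assert abs(lp_norm(l, 3) - l ** (-1 / 6)) < 1e-9  # ||f_n||_3 = l_n^{-1/6} -> 0
print("norms behave as claimed")
```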
|functional-analysis|weak-convergence|
1
Is this proof of the Lah Numbers Polynomial Identity With Induction on $n$ correct?
The Lah numbers satisfy the following rising-falling factorials relation \begin{equation} x^{\overline{n}} = \sum_{k=0}^{n} L(n, k) x^{\underline{k}}, \end{equation} where $x^{\overline{n}} = x(x+1)\cdots(x+n-1)$ and $x^{\underline{n}} = x(x-1)\cdots(x-n+1)$ . What would be the proof of this identity using induction on $n$ with the following recurrence relation $L(n, k) = L(n-1, k-1) + (n+k-1) L(n-1, k)$ ? This is what I tried: \begin{equation} x^{\overline{n}} \\ =x^{\overline{n-1}} (x+n+k-1) \\ = \left( \sum_{k=0}^{n} L(n-1, k) x^{\underline{k}} \right) (x+n+k-1) \\ = \sum_{k=0}^{n} L(n-1, k) x^{\underline{k+1}} + (n+k-1) \sum_{k=0}^{n} L(n-1, k) x^{\underline{k}} \\ = \sum_{k=0}^{n} L(n-1, k-1) x^{\underline{k}} + \sum_{k=0}^{n} (n+k-1) L(n-1, k) x^{\underline{k}} \\ = \sum_{k=0}^{n} \left( L(n-1, k-1) + (n+k-1) L(n-1, k) \right) x^{\underline{k}} \\ = \sum_{k=0}^{n} L(n, k) x^{\underline{k}}, \end{equation} but I do not think that this derivation is correct.
Well, it is wrong because, as I see it, you tried to force the recursion when you do $$x^{\overline{n}} = x^{\overline{n-1}}(x+n+k-1),$$ that is not true , but you can do this: $$x^{\overline{n}} = x^{\overline{n-1}}(x+n-1)=x^{\overline{n-1}}((x\color{red}{-k})+(n\color{red}{+k}-1)).$$ Now, when you distribute you will get $$\sum _{k = 0}^{n-1}{L(n-1,k)(x-k)x^{\underline{k}}}+\sum _{k = 0}^{n-1}{L(n-1,k)(n+k-1)x^{\underline{k}}}.$$ Notice that $(x-k)x^{\underline{k}}=x^{\underline{k+1}}$ and you would be done. Ps. Why is the \underline so ugly?
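The corrected identity and the recurrence can be brute-force checked in a few lines (my own sketch in plain Python, evaluating both sides at integer points):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def L(n, k):
    # Lah recurrence from the question: L(n,k) = L(n-1,k-1) + (n+k-1) L(n-1,k)
    if n == 0 and k == 0:
        return 1
    if n < 0 or k < 0 or k > n:
        return 0
    return L(n - 1, k - 1) + (n + k - 1) * L(n - 1, k)

def rising(x, n):   # x(x+1)...(x+n-1)
    out = 1
    for j in range(n):
        out *= x + j
    return out

def falling(x, n):  # x(x-1)...(x-n+1)
    out = 1
    for j in range(n):
        out *= x - j
    return out

# Verify x^{rising n} = sum_k L(n,k) x^{falling k} at many integer points
for n in range(7):
    for x in range(-5, 6):
        assert rising(x, n) == sum(L(n, k) * falling(x, k) for k in range(n + 1))
print("identity verified for n <= 6")
```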
|combinatorics|stirling-numbers|
1
Triple-Transitivity/"Specify three know all" property of exotic transitive $S_5\subset S_6$
Let the exotic transitive subgroup $S_5\subset S_6$ act on $\{1,2,\dots,6\}$ . For $1\leq i,j\leq 6$ , define subsets: $$X_{ji}:=\{\sigma\in S_5\,\mid \sigma(j)=i\}.$$ Do the following properties hold (note 3 implies the other two): Exotic $S_5$ is doubly-transitive in the sense that for all distinct $1\leq j_1,j_2\leq 6$ and (necessarily) distinct $1\leq i_1,i_2\leq 6$ there exists $\sigma\in S_5$ such that $$\sigma(j_1)=i_1\text{ and }\sigma(j_2)=i_2.$$ That is, we have $X_{j_1i_1}\cap X_{j_2i_2}\neq \emptyset$ . Exotic $S_5$ is also triply -transitive in the sense that for all distinct $1\leq j_1,j_2,j_3\leq 6$ and (necessarily) distinct $1\leq i_1,i_2,i_3\leq 6$ there exists $\sigma\in S_5$ such that: $$\sigma(j_m)=i_m\qquad (1\leq m\leq 3).$$ That is, we have $X_{j_1i_1}\cap X_{j_2i_2}\cap X_{j_3i_3}\neq \emptyset$ . Once we know three values of any $\sigma\in S_5$ , we in fact know all six. That is, for distinct $1\leq j_1,j_2,j_3\leq 6$ and distinct $1\leq i_1,i_2,i_3\leq 6$ , there is at most one $\sigma\in S_5$ with $\sigma(j_m)=i_m$ for $1\leq m\leq 3$ .
The group you call the exotic $S_5$ is otherwise (and better) known as ${\rm PGL}(2,5)$ , and the three properties you described are collectively known as sharp triple transitivity . The group ${\rm PGL}(2,q)$ acts sharply triply transitively on the $q+1$ points of the projective line for all prime powers $q$ .
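Sharp triple transitivity of ${\rm PGL}(2,5)$ on the six points of $\mathbb{P}^1(\mathbb{F}_5)$ can also be verified exhaustively (a sketch of mine in Python; it builds all Möbius transformations over $\mathbb{F}_5$ and counts how many group elements realize each ordered triple of images):

```python
from itertools import product
from collections import Counter

p = 5
INF = "inf"
points = list(range(p)) + [INF]          # P^1(F_5): the 6 points {0,...,4,inf}

def moebius(a, b, c, d, x):
    """x -> (a x + b)/(c x + d) over F_p, with the usual conventions at infinity."""
    if x == INF:
        return (a * pow(c, -1, p)) % p if c % p else INF
    num, den = (a * x + b) % p, (c * x + d) % p
    if den == 0:
        return INF
    return (num * pow(den, -1, p)) % p

# All invertible coefficient matrices give |PGL(2,5)| = 120 distinct permutations
perms = {tuple(moebius(a, b, c, d, x) for x in points)
         for a, b, c, d in product(range(p), repeat=4)
         if (a * d - b * c) % p}
assert len(perms) == 120                 # same order as S_5

# Each of the 6*5*4 ordered triples of distinct images of (0, 1, inf)
# is realized by exactly one element: sharp triple transitivity.
triples = Counter((g[0], g[1], g[5]) for g in perms)
assert len(triples) == 6 * 5 * 4 and set(triples.values()) == {1}
print("sharply 3-transitive")
```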
|group-theory|permutations|finite-groups|group-actions|symmetric-groups|
1
Is there a visual proof for why this property of a matrix is true?
Let's say we have three equations written in their standard form: \begin{align} a_1x + b_1y + c_1 = 0 && (l_1) \\ a_2x + b_2y + c_2 = 0 && (l_2) \\ a_3x + b_3y + c_3 = 0 && (l_3) \end{align} Then if we consider the matrix $$ M = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} $$ the following is true: $$ \det M \begin{cases} \lt 0 & \text{, if the intersection point of $l_1$ and $l_2$ lies on the "positive" side of $l_3$}\\ = 0 & \text{, if the intersection point of $l_1$ and $l_2$ lies exactly on $l_3$}\\ \gt 0 & \text{, if the intersection point of $l_1$ and $l_2$ lies on the "negative" side of $l_3$} \end{cases} $$ I understand why this works mathematically, but I can't find a way that explains it more intuitively. Ideally, I'm looking for a visual proof (e.g. involving some signed volume maybe ?).
You're asking for a visual proof of something that's not true. There's a problem with talking about "the intersection point of $l_1$ and $l_2$ " or "the positive side" of $l_3$ . For example, if $a_1=a_2=b_1=b_2=c_1=c_2=0$ , $l_1,l_2$ are just the whole of $\mathbb{R}^2$ . If $a_3=b_3=0$ and $c_3>0$ , everything is on the "positive side" of $l_3$ . However, let's ignore that. Consider $$y=0 \tag{$l_1$}$$ $$x=0 \tag{$l_2$} $$ and $$x+y+1=0 \tag{$l_3$}$$ Then we get $$M=\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 1\end{pmatrix}.$$ Then $l_1$ is just the $x$ -axis, $l_2$ is just the $y$ -axis, and the intersection is the origin. This is on the positive side of $l_3$ , since $1\cdot 0+1\cdot 0+1=1>0$ . But $\det(M)=-1$ . However, if we just switch $l_1$ and $l_2$ , we get the matrix $$\begin{pmatrix}1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$ which has $\det(M)=1$ . But we didn't change $l_1,l_2$ , or $l_3$ , and the origin (intersection of $l_1,l_2$ ) still lies on the positive side of $l_3$ . So the sign of $\det M$ cannot be determined by the configuration of the lines alone; it flips under a mere relabeling.
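The counterexample is easy to confirm numerically (my own check, using NumPy; columns hold $(a_i,b_i,c_i)$ as in the question):

```python
import numpy as np

# Columns: l1: y = 0, l2: x = 0, l3: x + y + 1 = 0
M = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [0., 0., 1.]])
M_swapped = M[:, [1, 0, 2]]   # same three lines, l1 and l2 listed in the other order

d1 = round(np.linalg.det(M))
d2 = round(np.linalg.det(M_swapped))
print(d1, d2)   # the sign flips although the geometry is unchanged
```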
|linear-algebra|geometry|proof-writing|visualization|
1
Show by hand : $e^{e^2}>1000\phi$
Problem: Show by hand, without any computer assistance: $$e^{e^2}>1000\phi,$$ where $\phi$ denotes the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$ . I came across this limit, which shows: $$\lim_{x\to 0}x!^{\frac{x!!^{\frac{2}{x!!-1}}}{x!-1}}=e^{e^2}.$$ I cannot show it without knowing some decimals, whether using power series or continued fractions. It seems challenging and perhaps a tricky calculation is needed. If you have no restriction on the method, how would you show it with pencil and paper? Some approach: We have, using the incomplete gamma function and continued fractions: $$\int_{e}^{\infty}e^{-e^{-193/139}x^{193/139+2}}dx=\frac{139}{471}\cdot e\cdot\operatorname{Ei}_{332/471}(e^2)>e^{-e^2},$$ where $\operatorname{Ei}$ denotes the exponential integral. Finding an integral for the golden ratio $\phi$ is needed now. Following my comment we have: $$e^{-e^2} < \cdots$$ where the function in the integral follows the stated order for $x\ge e$ . As said before, one can use the continued fraction of the incomplete gamma function.
Here's a very tedious calculation using repeated squaring. I did use a computer to automate and typeset, but in principle it could be performed by hand. First: $$e^z = \left(e^{\frac{z}{2^{m}}}\right)^{2^{m}} $$ and (for $y > 0$ ) $$e^y > 1 + y + \frac{y^2}{2}$$ so working with 24 bits after the point (truncating towards zero): $$\begin{aligned} e^2 &> \left(1 + \frac{2}{2^{9}} + \frac{1}{2}\left(\frac{2}{2^{9}}\right)^2\right)^{2^{9}} \\ &= \left(1.000000010000000010000000_2\right)^{2^{9}} \\ &\ge \left(1.000000100000001000000001_2\right)^{2^{8}} \\ &\ge \left(1.000001000000100000001010_2\right)^{2^{7}} \\ &\ge \left(1.000010000010000001010100_2\right)^{2^{6}} \\ &\ge \left(1.000100001000001010110001_2\right)^{2^{5}} \\ &\ge \left(1.001000100001010111111010_2\right)^{2^{4}} \\ &\ge \left(1.010010001011010111001100_2\right)^{2^{3}} \\ &\ge \left(1.101001100001001001011011_2\right)^{2^{2}} \\ &\ge \left(10.101101111110000010000101_2\right)^{2^{1}} \\ &\ge \left(111.011000111001010011010\ldots_2\right)^{2^{0}}, \end{aligned}$$ which is a lower bound for $e^2$ exceeding $7.389$ . The same repeated-squaring scheme, applied with this bound in the exponent (and sufficiently many working bits), then yields a lower bound for $e^{e^2}$ that can be compared against $1000\phi$ .
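The same truncated-squaring idea can be carried out with exact rational arithmetic (a verification sketch of mine in Python, not the by-hand proof; `exp_lower` and my choice of 64 working bits are mine and differ from the 24 bits above). Truncating after every squaring only ever lowers the value, so the result is a certified lower bound; and $\phi < 1.6180340$ because $2.2360680^2 > 5$:

```python
from fractions import Fraction as F

BITS = 64                                  # fractional bits kept after each step

def floor_bits(x):
    # truncate a positive rational toward zero at BITS fractional bits
    q = 1 << BITS
    return F(int(x * q), q)

def exp_lower(y, m):
    """Certified lower bound on e^y for y >= 0:
    start from 1 + z + z^2/2 <= e^z with z = y/2^m, square m times,
    truncating after every step (truncation only decreases the value)."""
    z = y / 2 ** m
    r = floor_bits(1 + z + z * z / 2)
    for _ in range(m):
        r = floor_bits(r * r)
    return r

e2 = exp_lower(F(2), 12)                   # lower bound on e^2
val = exp_lower(e2, 16)                    # e^x is increasing, so this bounds e^{e^2}

assert F(22360680, 10**7) ** 2 > 5         # hence phi < 1.6180340
assert val > F(16180340, 10**4)            # e^{e^2} > 1618.0340 > 1000*phi
print(float(val))
```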
|inequality|constants|golden-ratio|number-comparison|
0
Semantic consequence in propositional logic
I'm reading Introduction to Mathematical logic by Sam Buss. I'm currently going through propositional logic and I can't understand why (c) is correct: Theorem I.13. Suppose that Γ and ∆ are sets of formulas, and Γ ⊆ ∆. (a) If φ satisfies ∆, then φ satisfies Γ. (b) If ∆ is satisfiable, then Γ is also satisfiable. (c) If Γ ⊧ A, then ∆ ⊧ A. (a) and (b) make perfect sense to me. Γ should be satisfiable when Δ (a set with additional formulas) is satisfiable. But why is (c) correct when we don't know if the additional formulas in Δ could make the set unsatisfiable? Apologies if this is an extremely elementary question
$Γ ⊧ A$ means that every $\phi$ that makes all $B \in \Gamma$ true will make $A$ true as well. By adding more premises we won't lose any of these truth-preserving valuations. Given any $\phi$ : Either it satisfies $\Delta$ - then it also satisfies $\Gamma$ , and by $\Gamma \vDash A$ we have that $A$ will be true under this $\phi$ as well. Or $\phi$ does not satisfy $\Gamma$ , in which case the truth value for $A$ does not matter. In any event, we get that any $\phi$ that satisfies $\Delta$ will satisfy $A$ as well, which is exactly $\Delta \vDash A$ .
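For a concrete finite instance, the monotonicity in (c) can be checked by enumerating valuations (my own sketch in Python, with $\Gamma=\{p\}$, $\Delta=\{p,q\}$, $A = p\lor q$):

```python
from itertools import product

atoms = ["p", "q"]
gamma = [lambda v: v["p"]]                      # Γ = {p}
delta = [lambda v: v["p"], lambda v: v["q"]]    # Δ = {p, q} ⊇ Γ
A = lambda v: v["p"] or v["q"]                  # A = p ∨ q

def entails(premises, conclusion):
    """Semantic consequence: every valuation satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(phi(v) for phi in premises) and not conclusion(v):
            return False
    return True

assert entails(gamma, A)        # Γ ⊨ A
assert entails(delta, A)        # hence Δ ⊨ A, as in Theorem I.13(c)
assert not entails([], A)       # with no premises, A is not valid
print("entailment is monotone in the premises here")
```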
|logic|propositional-calculus|
1
If $W_t$ is a Brownian motion $\operatorname{Cov}\left( c^{-\frac{1}{2}} W(cs), W(t) \right)\ne \operatorname{Cov}(W(s),W(t))$
Let $W(t)$ be a Wiener process. By the scaling property of Brownian motion, $$ \frac{1}{\sqrt{c}} W(ct) $$ is a Wiener process. Let $(\mathcal{F}_t)_{t \ge 0}$ be the filtration of the Wiener process and suppose $cs < t$ . Then $W(cs)$ is $\mathcal{F}_t$ -measurable. As $\frac{1}{\sqrt{c}} W(cs) \sim W(s)$ ( $\sim$ here is equality in law), it looks to me that $$ \operatorname{Cov}\left( c^{-\frac{1}{2}} W(cs), W(t) \right)= \operatorname{Cov}(W(s),W(t)) $$ But looking at this question (in particular one of the comments and the accepted answer) it looks like this is not the case. Is this formula false? In that case what is $\operatorname{Cov}( c^{-\frac{1}{2}} W(cs), W(t) )$ ?
$\frac{1}{\sqrt c} W(cs)$ is a Brownian motion, but it isn't the same Brownian motion that you started with, so $\text{Cov}(\frac{1}{\sqrt c} W(cs),W(t)) \ne \text{Cov}(W(s),W(t))$ . Instead, using standard rules of covariance and the fact that $cs < t$ (so that $\text{Cov}(W(cs),W(t)) = cs$ ), we have \begin{align*} \text{Cov}(\frac{1}{\sqrt c} W(cs),W(t)) &= \frac{1}{\sqrt c} \text{Cov}(W(cs),W(t)) \\ &= \frac{1}{\sqrt c} cs \\ &= \sqrt{c} s \end{align*}
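A quick Monte Carlo check of this value (a sketch of mine, assuming NumPy; I pick $c=4$, $s=0.5$, $t=3$, so $cs=2<t$, $\sqrt{c}\,s = 1$ and $s=0.5$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, s, t = 200_000, 4.0, 0.5, 3.0        # cs = 2 < t = 3

# Build W(cs) and W(t) from independent Gaussian increments
W_cs = np.sqrt(c * s) * rng.standard_normal(n)
W_t = W_cs + np.sqrt(t - c * s) * rng.standard_normal(n)

scaled = W_cs / np.sqrt(c)                  # equal in law to W(s), but not the same BM
emp_cov = np.cov(scaled, W_t)[0, 1]
print(emp_cov)                              # close to sqrt(c)*s = 1.0, not to s = 0.5
```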
|probability|probability-theory|probability-distributions|stochastic-processes|brownian-motion|
1
Critical point of a function on manifold
In one differential geometry class I'm taking, a critical point $p$ of a function $f$ on a manifold $(M,\mathcal{A})$ is defined as a point where, for every coordinate chart $(U,\phi_U)\in\mathcal{A}$ , $D(f\circ\phi_U^{-1})(\phi_U(p))=0$ . However, I don't understand why we want to define it this way. Why can't we define it simply as $Df(p)=0$ ? For example, it is possible that $D\phi_U^{-1}$ is the zero map at $\phi_U(p)$ but $Df(p)\neq 0$ , and then $D(f\circ\phi_U^{-1})(\phi_U(p))=Df(p)\circ D\phi_U^{-1}(\phi_U(p))=0$ . Edit: Here's an example showing that the definitions are not equivalent. Consider $S^2$ and the chart $(U,\phi_U)$ , where $U=S^2-\{(0,0,1)\}$ and $\phi_U(x_1,x_2,x_3)=(\frac{x_1}{1-x_3},\frac{x_2}{1-x_3})$ , so $\phi^{−1}_U(u,v)=(\frac{2u}{u^2+v^2+1}, \frac{2v}{u^2+v^2+1}, \frac{u^2+v^2-1}{u^2+v^2+1})$ . Then for $f(x_1,x_2,x_3)=(x_3)^2$ , $D(f\circ\phi^{-1}_U)$ is zero at $(0,0)$ . This shows that $(0,0,−1)$ is a critical point of $f$ on $S^2$ . However, $Df(0,0,−1)=(0,0,-2)\neq(0,0,0)$ , where $Df$ here is the derivative of $f$ as a function on $\mathbb{R}^3$ .
First of all, it is unclear if in the class you are taking they defined $Df$ for smooth functions $f$ on $M$ . I assume they did and that they verified that for smooth maps of open subsets $f: R^n\to R^m$ , $Df$ is given by the "usual" derivative from the vector calculus. I assume also that they proved in the class the Chain Rule for maps between differentiable manifolds: If $$ X \stackrel{g}{\longrightarrow} Y \stackrel{f}{\longrightarrow} Z $$ are smooth maps of smooth manifolds, then for $h=f\circ g$ one has $$ Dh=Df \circ Dg. $$ Furthermore, I assume that you verified that local charts $(\phi, U)$ on a smooth manifold are diffeomorphisms between their domains and ranges, in particular, $D\phi^{-1}_q$ is invertible for all $q\in \phi(U)$ . Now, the definition for a critical point of $f\in C^\infty(M)$ , $$ Df_p={\bf 0} $$ is perfectly reasonable. (Here and below, ${\bf 0}$ stands for the zero map $T_pM\to {\mathbb R}$ .) Let's verify that it is equivalent to the definition from your class: by the Chain Rule, $D(f\circ \phi^{-1})_{\phi(p)} = Df_p\circ D\phi^{-1}_{\phi(p)}$ , and since $D\phi^{-1}_{\phi(p)}$ is invertible, the left-hand side is ${\bf 0}$ if and only if $Df_p={\bf 0}$ . (The $Df(0,0,-1)$ in your edit is the derivative of $f$ as a function on $\mathbb{R}^3$ , not the intrinsic $Df_p: T_pS^2\to\mathbb{R}$ ; the latter is the restriction of the ambient derivative to the tangent plane, and it does vanish at $(0,0,-1)$ .)
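The numbers in the question's edit can be checked directly (my own sketch in plain Python): the gradient of the chart expression $f\circ\phi_U^{-1}$ vanishes at $\phi_U(0,0,-1)=(0,0)$ even though the ambient $\mathbb{R}^3$ derivative of $f$ there is $(0,0,-2)$:

```python
# f∘φ_U^{-1}(u, v) with f(x1,x2,x3) = x3^2 and the stereographic chart
def g(u, v):
    r = u * u + v * v
    return ((r - 1.0) / (r + 1.0)) ** 2

h = 1e-6
du = (g(h, 0.0) - g(-h, 0.0)) / (2 * h)   # central differences at (0, 0)
dv = (g(0.0, h) - g(0.0, -h)) / (2 * h)
print(du, dv)        # both vanish: (0,0,-1) is critical in the chart sense

# ...while the ambient derivative Df(x) = (0, 0, 2 x3) at (0,0,-1) is nonzero
ambient = (0.0, 0.0, 2 * (-1.0))
print(ambient)
```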
|differential-geometry|manifolds|
1
intersection (closest point) of multiple rays in 3D space
I am working on a computer vision project with a sphere of calibrated images, and I would like to (relatively) quickly estimate the center of the sphere by simply computing the intersection of all the rays of the camera directions. Here is an example of an image sphere: For each ray, I have an origin $o$ and a direction vector $d$ such that $r(t) = o + t\,d$ , with $t \in\mathbb{R}$ . How do I set up the equation $Ax = b$ so I can use a least-squares approach? I couldn't find many related questions on my problem because many results that I found are either in 2D, for only 2 rays, or between a ray and an object or a plane. I have already used Singular Value Decomposition (SVD) for similar problems (triangulation and the computation of the homography matrix), so I'd like to keep using it, but I don't know if it is possible with 180 entries. Could you please help me?
Here follows a Mathematica script which generates 180 random points p as well as directions v pertaining to a spherical field with center at o0, and solves the identification problem satisfactorily:

n = 180;
SeedRandom[1];
o = {ox, oy, oz};
p = RandomReal[{-1, 1}, {n, 3}];
o0 = RandomReal[{-1, 1}/5, 3];
v = Table[(o0 - p[[k]]) RandomReal[{0.2, 0.8}], {k, 1, n}];
sum = Sum[(p[[k]] + lambda[k] v[[k]] - o) . (p[[k]] + lambda[k] v[[k]] - o), {k, 1, n}];
equs = Grad[sum, Join[Table[lambda[k], {k, 1, n}], o]];
sols = Solve[equs == 0, Join[Table[lambda[k], {k, 1, n}], o]][[1]];
o /. sols
(o0 - o) /. sols

The process used is a minimum squared error from the sum $$ E(\lambda,o)=\sum_{k=1}^n\|p_k + \lambda_k v_k - o\|^2\ \ \ \ \ (1) $$ NOTE From $(1)$ we have $$ \cases{ \frac{\partial E}{\partial \lambda_k} = (p_k + \lambda_k v_k - o)\cdot v_k = 0,\ \ \ \ k=1,\cdots,n\\ \frac{\partial E}{\partial o} = \sum_{k=1}^n p_k + \lambda_k v_k - o=0 } $$ and eliminating $\lambda_k$ we arrive at a linear system of the form $$ o = M^{-1}\cdot P. $$
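For the requested $Ax=b$ form: each unit direction $d_i$ gives a projector $P_i = I - d_id_i^{\top}$, the squared distance from a point $x$ to ray $i$ is $\|P_i(x-o_i)\|^2$, and minimizing the sum yields the normal equations $\left(\sum_i P_i\right)x = \sum_i P_i\,o_i$. A NumPy sketch of this (the function name is mine; `lstsq` is SVD-based, which also covers the near-parallel, rank-deficient case):

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point minimizing the sum of squared distances to rays
    r_i(t) = o_i + t d_i.  Normal equations: A x = b with
    A = sum_i (I - d_i d_i^T),  b = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)              # projector assumes unit directions
        P = np.eye(3) - np.outer(d, d)         # projection orthogonal to d
        A += P
        b += P @ np.asarray(o, dtype=float)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # robust even if A is singular
    return x

# Two rays that meet at (1, 1, 0)
p = closest_point_to_rays([[1, 0, 0], [0, 1, 0]],
                          [[0, 1, 0], [1, 0, 0]])
print(p)   # -> [1. 1. 0.]
```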
|geometry|
0
Reference for: Every element of a finite field is the sum of two squares.
Note: This is a reference-request question. It doesn't require the usual level of context. I am not asking for a proof. It is well known that Theorem: Let $x\in F$ for a finite field $F$ . Then there exist $a,b\in F$ such that $x=a^2+b^2$ . I need the theorem for my dissertation. Therefore, I need a reference, please. For context: I am writing this up (with a citation and the appropriate credit, of course), since a theorem of mine in a draft of my dissertation now relies upon it.
Proposition 1 in Vitaly Bergelson, Andrew Best, and Alex Iosevich, Sums of Powers in Large Finite Fields: A Mix of Methods , The American Mathematical Monthly, 128:8, pp. 701-718 . This has the advantage of being open access.
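For prime fields, the statement is also easy to confirm by brute force (a quick Python check of mine; extension fields would need polynomial arithmetic, so only $\mathbb{F}_p$ is covered here):

```python
def every_element_is_sum_of_two_squares(p):
    # F_p has (p+1)/2 squares for odd p, so the sets {a^2} and {t - b^2}
    # must intersect for every t (pigeonhole); check it directly anyway.
    squares = {x * x % p for x in range(p)}
    return all(any((t - a) % p in squares for a in squares) for t in range(p))

assert all(every_element_is_sum_of_two_squares(p)
           for p in [2, 3, 5, 7, 11, 13, 97, 101])
print("verified for several prime fields")
```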
|reference-request|field-theory|finite-fields|
1