| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Prove that $\left(\frac{a}{b}+ \frac{b}{c}+\frac{c}{a}+x \right) \left(\frac{a^2}{b} + \frac{b^2}{c}+\frac{c^2}{a}+y \right) \ge z$ for $a+b+c=1$
|
Let $a,b,c>0$ such that $a+b+c=1$. Prove that for all $m>0$, the following inequality holds: \begin{align*} \left(\frac{a}{b} + \frac{b}{c} + \frac{c}{a} -\frac{m^4+2m^3-8m-4}{m^2} \right) \left(\frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}+\frac{m^3-m-8}{m} \right)\\ \ge- \frac{\left(m+2 \right) \left(m-2 \right)^2 \left(m+1\right)^2\left(m^2+2m+4\right)}{m^3} \end{align*} Remark. For the particular case $m=1$, it is KaiRain's problem posted on AoPS: \begin{align*} \left(\frac{a}{b} +\frac{b}{c}+\frac{c}{a}+9 \right) \left(\frac{a^2}{b}+\frac{b^2}{c} +\frac{c^2}{a} -8\right) \ge -84 \end{align*} This is a hard inequality, with equality at $a=b=c= \frac{1}{3}$ and also for $\left(a,b,c \right)= \frac{4}{7} \left(\sin^2 \frac{3 \pi}{7}, \sin^2 \frac{2 \pi}{7}, \sin^2 \frac{\pi}{7} \right)$ or any cyclic permutation. An SOS (Sum of Squares) proof was given: \begin{align*} \left(\frac{a}{b} +\frac{b}{c}+\frac{c}{a}+9 \right) \left(\frac{a^2}{b}+\frac{b^2}{c} +\frac{c^2}{a} -8a-8b-
|
Here is an SOS (Sum of Squares) solution. We have \begin{align*} &\Big(\frac{a}{b} + \frac{b}{c} + \frac{c}{a} -\frac{m^4+2m^3-8m-4}{m^2} \Big) \Big(\frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}+\frac{m^3-m-8}{m} (a + b + c) \Big)\\[6pt] &\qquad + \frac{\left(m+2 \right) \left(m-2 \right)^2 \left(m+1\right)^2\left(m^2+2m+4\right)}{m^3}(a+b+c)\\[6pt] ={}& \frac{1}{m^2(a+b+c)a^2b^2c^2} \left[\sum_{\mathrm{cyc}} \frac34ac(a^2bm - a^2cm + ab^2m - b^2cm - 2ab^2 + 2bc^2)^2\right.\\[6pt] &\qquad \left. + \sum_{\mathrm{cyc}} g(a,b,c, m) + h(a,b,c,m)\right.\\ &\qquad \left. + \frac34 a^2(-abcm^2 + b^2cm^2 + a^2cm - b^3m + 2b^2c - 2bc^2)^2\right] \end{align*} where \begin{align*} g(a, b, c, m) &:= \frac14 ac\left(-2ab^2m^2 + 2b^2cm^2 + a^2bm + a^2cm + ab^2m\right.\\ &\qquad\qquad \left. - 2abcm + b^2cm - 2bc^2m - 2ab^2 + 4abc - 2bc^2\right)^2, \\ h(a,b,c,m) &:= \frac14 \left(-a^2bcm^2 - ab^2cm^2 + 2abc^2m^2 + a^3cm + ab^3m \right.\\ &\qquad \left. - 2bc^3m - 4a^2bc + 2ab^2c + 2abc^2\right)^2. \end{align*}
|
|inequality|
| 1
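The $m=1$ case above is easy to sanity-check numerically. The following sketch (floating-point spot checks, of course not a proof) evaluates the product at both stated equality points and on random admissible triples:

```python
import math
import random

def lhs(a, b, c):
    # (a/b + b/c + c/a + 9) * (a^2/b + b^2/c + c^2/a - 8), for a + b + c = 1.
    return (a/b + b/c + c/a + 9) * (a*a/b + b*b/c + c*c/a - 8)

# First equality case: a = b = c = 1/3 gives (3 + 9)(1 - 8) = -84 exactly.
assert abs(lhs(1/3, 1/3, 1/3) + 84) < 1e-9

# Second equality case: (4/7)(sin^2(3pi/7), sin^2(2pi/7), sin^2(pi/7)).
a, b, c = (4/7 * math.sin(k * math.pi / 7)**2 for k in (3, 2, 1))
assert abs(a + b + c - 1) < 1e-12     # the triple is admissible
assert abs(lhs(a, b, c) + 84) < 1e-6  # and attains the bound

# Random admissible triples never dip below -84.
random.seed(0)
for _ in range(10**4):
    x, y, z = (random.uniform(1e-6, 1) for _ in range(3))
    s = x + y + z
    assert lhs(x/s, y/s, z/s) >= -84 - 1e-9
```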
|
Complete integral of pde without independent variables
|
Show that the complete integral of the PDE $F(u,p,q)=0$ (with $p=u_{x}$ and $q=u_{y}$) is $$ f(x,y,u,a,b) = x + ay + b - \int\frac{du}{g(u,a)}, $$ where the function $p=g(u,a)$ is computed from the differential equation. I have started by writing the Charpit system, namely: $$ \begin{cases} \frac{dX}{ds} = F_{p} \\ \frac{dY}{ds} = F_{q} \\ \frac{dP}{ds} = -F_{u}P \\ \frac{dQ}{ds} = -F_{u}Q \\ \frac{dU}{ds} = PF_{p} + QF_{q} \end{cases} $$ where $X=X(s), Y=Y(s)$ is some curve in the $x$-$y$ plane parameterized by $s$, and $U(s)=u(X(s),Y(s))$, $P(s)=u_{x}(X(s),Y(s))$, $Q(s)=u_{y}(X(s),Y(s))$. I don't know what to do next - how can I find the solution without more knowledge about the function $F$? Thanks in advance for any help!
|
To solve this problem, we have to assume that $F_pF_qF_u\neq 0$ , i.e., $F(u,p,q)$ is a non-trivial function of its arguments. In this case, it follows from the third and fourth Charpit's equations that $$ \frac{dp}{dq}=\frac{p}{q} \implies q=ap. \tag{1} $$ Plugging this result in $du=pdx+qdy$ , and defining $\xi:=x+ay$ , we obtain $$ du=p(dx+ady)=pd\xi \implies p=\frac{du}{d\xi}. \tag{2} $$ In addition, substituting $q=ap$ in $F(u,p,q)=0$ gives $F(u,p,ap)=0$ ; solving this equation for $p$ yields $$ p=g(u,a). \tag{3} $$ Combining $(2)$ and $(3)$ we finally obtain $$ \frac{du}{d\xi}=g(u,a) \implies \xi+b = \int\frac{du}{g(u,a)} $$ $$ \implies x+ay+b-\int\frac{du}{g(u,a)}=0. \quad{\square} \tag{4} $$
|
|partial-differential-equations|characteristics|
| 0
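As a concrete illustration of the recipe in the answer, take a sample equation (my own choice, not from the question) $F(u,p,q)=pq-u$. Then $q=ap$ turns $F=0$ into $ap^2=u$, so $g(u,a)=\sqrt{u/a}$, $\int du/g = 2\sqrt{au}$, and the complete integral $x+ay+b=2\sqrt{au}$ gives $u=(x+ay+b)^2/(4a)$. A short SymPy check that this really satisfies $u_x u_y = u$:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# Complete integral produced by the recipe for the sample F(u,p,q) = p*q - u:
# q = a*p gives a*p**2 = u, so g(u,a) = sqrt(u/a) and int du/g = 2*sqrt(a*u),
# hence x + a*y + b = 2*sqrt(a*u), i.e.
u = (x + a*y + b)**2 / (4*a)

p = sp.diff(u, x)   # u_x
q = sp.diff(u, y)   # u_y

# The complete integral satisfies F(u, u_x, u_y) = p*q - u = 0 identically.
assert sp.simplify(p*q - u) == 0
```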
|
I'm stuck in calculating $_2F_1(1,1;\tfrac12;\tfrac12)$
|
By using $(71)$ and $(70)$ identities of Mathworld's hypergeometric function page, $${_2}F{_1}(1,1;\tfrac12;\tfrac12)\stackrel{(71)}{=}2\,{_2}F{_1}(1,-\tfrac12;\tfrac12;-1)\stackrel{(70)}{=}2\left(\tfrac{\sqrt\pi}2\tfrac{\Gamma(\tfrac12)}{\Gamma(1)}+1\right)=\pi+2.$$ But, WolframAlpha calculates it as $\tfrac{\pi}2+2.$ Where is my mistake? I also wonder whether there is an Euler-type integral that can be used to calculate ${_2}F{_1}(1,1;\tfrac12;\tfrac12)$ . Thanks for reading.
|
The only real Euler-type integral (barring complex contour integrals that look like Euler-type integrals in their integrands) is \begin{align} {_2}F{_1}(1,1;\frac{1}{2}; -1) = 2 \int_0^\infty e^{-2t} {_1}F{_1}(1,\frac{1}{2}, t) \,dt, \end{align} (NIST Handbook 2010, eq. 13.10.3) It is easily verifiable that $${_1}F{_1}(1,\frac{1}{2}, t) = 1 + e^t \sqrt{\pi} \sqrt{t} \;\text{erf}(\sqrt{t})$$ in which $\text{erf}$ is the error function (hint: combine NIST Handbook 2010, eq. 13.6.5 and 13.6.7 p. 332 with $a=-\frac{1}{2}$ and integrate using 13.3.15; or expand series). Hence: \begin{align} {_2}F{_1}(1,1;\frac{1}{2}; -1) = 1+ 2 \sqrt{\pi} \; \mathscr{L}(\sqrt{t} \text{ erf}(\sqrt{t}); 1) \end{align} The Laplace transform $\mathscr{L}(\sqrt{t} \text{ erf}(\sqrt{t}); p)$ is tabulated (for example, Prudnikov et al. Vol. 4, eq. 8 p. 172): $$\mathscr{L}(\sqrt{t} \text{ erf}(\sqrt{t}); p) = \frac{1}{\sqrt{\pi p^3}}\left(\frac{\sqrt{p}}{p+1} + \arctan(\frac{1}{\sqrt{p}})\right)$$ whence \begin{ali
|
|special-functions|hypergeometric-function|
| 1
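The corrected value $\frac{\pi}{2}+2$ can also be confirmed directly from the defining Gauss series $\sum_n \frac{(a)_n(b)_n}{(c)_n\,n!}z^n$, which converges for $z=\tfrac12$. A plain-Python sketch:

```python
import math

def hyp2f1_series(a, b, c, z, terms=400):
    # Direct summation of the Gauss series for 2F1(a, b; c; z), |z| < 1.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
    return total

val = hyp2f1_series(1, 1, 0.5, 0.5)
# WolframAlpha's value pi/2 + 2 (not pi + 2) is the correct one.
assert abs(val - (math.pi/2 + 2)) < 1e-10
```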
|
Taking the limit beyond infinity, with the ordinals
|
Imagine a function $f:X\to X$ and $x\in X$ (keeping $f$ and $X$ vague on purpose) and let's define $u_1 = f(x)$, $u_2=f^2(x)=f(f(x))$, $u_n=f^n(x)$. Let's also assume that $\forall n,\ u_{n-1} \neq u_n$ (otherwise it's boring) and that $\lim_{n\to\infty} u_n$ exists. Let $u_{\omega} = \lim_{n\to\infty} u_n$, and now you can see how we can take the limit of a sequence beyond infinity: we define $u_{\omega+n}=f^n(u_\omega)$. And we can keep going. I'm trying to find cool examples that would illustrate this kind of concept. But I don't even know which keywords I should be searching for. When I search for 'sequence of ordinals' I get abstract stuff that I cannot understand as a layman. Question: Can you find cool examples that illustrate the concept of a sequence beyond infinity?
|
Cantor–Bendixson derivatives. As Noah mentioned in the comments, the Cantor–Bendixson derivative is a very interesting (to me, at least) example of such a function. Let $K$ be a topological space. For a subset $L$ of $K$, let $L'$ denote the set of members of $L$ which are not relatively isolated in $L$. This is the Cantor–Bendixson derivative of $L$. Note that if $L$ is closed, so is $L'$. That's because a point is isolated iff the singleton containing it is open, so an arbitrary union of (relatively in $L$) isolated points is (relatively in $L$) open, and its (relative in $L$) complement is closed. So we can think of these functions as taking the closed subsets of $X$ to the closed subsets of $X$. We can define $$L^0=L,$$ $$L^{\xi+1}=(L^\xi)',$$ and if $\xi$ is a limit ordinal, $$L^\xi=\bigcap_{\zeta<\xi}L^\zeta.$$ By a cardinality argument, there must exist some minimum $\xi$ such that $X^\xi=X^{\xi+1}$, and then $X^\zeta=X^\xi$ for all $\zeta>\xi$. Let $X^\infty$ denote this stabilized
|
|general-topology|logic|infinity|ordinals|
| 1
|
How to use computer algebra system
|
Consider the two vector-valued functions \begin{align} &f(x,y)=(x+y^2, y+x^3) \\ & g(x,y)=(2x+y^3, 2y+x^4). \end{align} Then \begin{align} f(g(x,y))&=((2x+y^3)+(2y+x^4)^2,(2y+x^4)+(2x+y^3)^3) \\ &= (2x+4y^2+y^3+4x^4y+x^8, 2y+8x^3+x^4+12x^2y^3+6xy^6+y^9) \end{align} gives the vector substitution $f \circ g$. By vector substitution, I mean the vector composition $f \circ g=(f_1(g_1(x,y),g_2(x,y)), f_2(g_1(x,y),g_2(x,y)))$, where $f(x,y)=(f_1(x,y),f_2(x,y))$ and $g(x,y)=(g_1(x,y),g_2(x,y))$. Can you please suggest some computer algebra code (e.g., PARI/GP, SAGE, SymPy, etc.) doing the above operation? If it were one-variable functions like $f(x)=\sum_{i=1}^n a_ix^i$ and $g(x)=\sum_{j=1}^{m}b_jx^j$, then the PARI/GP function subst(f,x,g) would compute $f \circ g$. But in our case, these are vector-valued functions. Indeed, I want to find some non-trivial $g(x,y)=(\cdots, \cdots)$ that commutes with $f(x,y)=(x+y^2, y+x^3)$.
|
With GIAC you can do: f(x,y):=(x+y^2, y+x^3) g(x,y):=(2x+y^3, 2y+x^4) h(x,y):=f(g(x,y)) simplify(h(x,y)) This gives: x^8+4*x^4*y+2*x+y^3+4*y^2,x^4+8*x^3+12*x^2*y^3+6*x*y^6+y^9+2*y GIAC is the general purpose symbolic algebra software powering the graphical interface Xcas. It is certainly possible to use it online, but I didn't search the website. Personally, I use it in R with the help of the R package giacR : library(giacR) giac
|
|polynomials|matrix-decomposition|substitution|computer-algebra-systems|
| 1
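Since the question explicitly mentions SymPy, here is a short SymPy sketch of the same vector substitution; it reproduces the GIAC output above (note the $y^9$ term: the $y^{27}$ sometimes seen in hand expansions of $(2x+y^3)^3$ is a typo). The `simultaneous=True` flag keeps the two replacements from feeding into each other:

```python
from sympy import symbols, expand

x, y = symbols('x y')

f = (x + y**2, y + x**3)
g = (2*x + y**3, 2*y + x**4)

# Vector substitution f o g: replace (x, y) by (g1, g2) in each component
# of f, simultaneously, so the first replacement is not hit by the second.
h = tuple(expand(fi.subs({x: g[0], y: g[1]}, simultaneous=True)) for fi in f)

print(h[0])   # matches GIAC: x^8 + 4x^4 y + 2x + y^3 + 4y^2 (up to term order)
print(h[1])   # matches GIAC: x^4 + 8x^3 + 12x^2 y^3 + 6x y^6 + y^9 + 2y
```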
|
Is the Axiom of Choice inconsistent with Countable Additivity?
|
Consider a fair lottery among a countably infinite number of people. The Axiom of Countable Additivity says this is impossible to construct: If all people have a positive (and equal) probability of winning, the sum of their probabilities of winning would diverge, while if all people have a $0$ probability of winning, the sum of their probabilities would be $0 \neq 1$ . However, I feel like the Axiom of Choice can be used to construct such a lottery: Have each person select an element from $[0, 1)$ . This is possible even with the Axiom of Countable Additivity. Lemma: With probability $1$ everyone will have different numbers. Proof: Let $X_i$ be the number selected by $i$ . Let $Y_{ij} = X_i - X_j \mod 1$ . In order for two numbers to be equal with nonzero probability, there must be a nonzero probability that at least one $Y_{ij}$ equals $0$ . But for all $i$ and $j$ , $Y_{ij}$ is distributed uniformly and at random on $[0, 1)$ . This means that there is this same nonzero probability th
|
Edit: In this answer I'm assuming the well-order of the reals is fixed at the outset and not somehow "chosen randomly", though, as a commenter on another answer has woken me up to, you phrased the question as if it were part of a random process. This would be a whole other can of worms (What distribution does this come from?! Is it independent of the sequence drawn?) but it doesn't affect the upshot of this answer. Edit 2: Decided I didn't quite agree with how I put my original answer so rewrote it a bit. Here's a simpler example, that misses some features of yours but is more direct. Say we have two players, and a non-Lebesgue-measurable set $E\subseteq [0,1],$ and some measure-zero set $C\subseteq [0,1]\setminus E.$ Draw a value $x\in[0,1]$ uniformly at random (i.e. via the Lebesgue measure on $[0,1]$ ), and player 1 wins if $x\in E$ , it is a draw if $x\in C,$ and otherwise player 2 wins. Unlike in your case, there's no intuition telling us that player 1 and 2 have equal probability to w
|
|probability-theory|axiom-of-choice|axioms|
| 0
|
Summation of a difference of two square-roots
|
Does a closed-form expression exist for $$\sum_{k=1}^{n}\left(\sqrt{1+\frac{1}{k}}-\sqrt{1 -\frac{1}{k}}\right)?$$ I obtained $$0.97423066\ln(n)+1.2019463$$ using log regression and it works very well for my (physics) problems, but I was wondering if a better solution exists. Is the Euler-Maclaurin method something to look at? Many thanks.
|
Since \begin{align*} \sqrt{1+\frac1k} - \sqrt{1-\frac1k} &= \frac{(\sqrt{1+\frac1k} - \sqrt{1-\frac1k})(\sqrt{1+\frac1k} + \sqrt{1-\frac1k})}{\sqrt{1+\frac1k} + \sqrt{1-\frac1k}} \\ &= \frac{\frac2k}{\sqrt{1+\frac1k} + \sqrt{1-\frac1k}} \\ &= \frac{\frac2k}{(1+O(\frac1k))+(1+O(\frac1k))} = \frac1k + O\biggl( \frac1{k^2} \biggr), \end{align*} the sum will be asymptotically $$ \sum_{k=1}^n \biggl( \sqrt{1+\frac1k} - \sqrt{1-\frac1k} \biggr) = \sum_{k=1}^n \biggl( \frac1k + O\biggl( \frac1{k^2} \biggr) \biggr) = \ln n + O(1) $$ with leading coefficient $1$ . I doubt there's a particularly nice form for the constant other than $$ \gamma + \sum_{k=1}^\infty \biggl( \sqrt{1+\frac1k} - \sqrt{1-\frac1k} - \frac1k \biggr) $$ (where $\gamma$ is Euler's constant); but it turns out this sum converges as fast as $\sum \frac1{k^3}$ , so even just the first $10^5$ terms already gives $1.01902854$ accurately rounded to that many decimal places.
|
|sequences-and-series|
| 1
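A quick numerical check of the answer's claim (pure Python; the target value $1.01902854$ is the one quoted in the answer, and the asymptotic has leading coefficient exactly $1$, unlike the fitted $0.97423$):

```python
import math

EULER_GAMMA = 0.5772156649015329

def term(k):
    return math.sqrt(1 + 1/k) - math.sqrt(1 - 1/k)

# Constant C in  sum_{k<=n} term(k) = ln n + C + o(1), computed as
# C = gamma + sum_k (term(k) - 1/k); the summand decays like 1/(8k^3).
C = EULER_GAMMA + sum(term(k) - 1/k for k in range(1, 10**5 + 1))

# Cross-check against the partial sum minus ln n, directly at n = 10^5.
n = 10**5
direct = sum(term(k) for k in range(1, n + 1)) - math.log(n)
assert abs(C - direct) < 1e-4
assert abs(C - 1.01902854) < 1e-5
```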
|
Subspaces with common images
|
Let $X$ and $Y$ be finite dimensional vector spaces over $\mathbb{C}$, and let $S,T:X\to Y$ be linear transformations. Is there a method for determining all subspaces $V\subseteq X$ such that $S(V)=T(V)$ (as subspaces, not necessarily pointwise)? I am especially interested in the case where $\text{dim}\, X \geq \text{dim}\, Y$.
|
I can provide a partial and unfinished answer, when one of the transformations, say $T$, is injective. Then there exists a linear transformation $T^{-1}:T(X)\to X$ such that $T^{-1}Tx=x$ for each $x\in X$. Now let $V$ be any subspace of $X$. Then $T(V)=S(V)$ iff $S(V)\subset T(X)$ and $T^{-1}S(V)=V$. That is, we have to characterize invariant subspaces $V$ of the linear transformation $T^{-1}S:S^{-1}T(X)\to X$. All of them are subspaces of the space $Z=\bigcap_{n=1}^\infty (S^{-1}T)^n(X)$. So put $R=T^{-1}S|Z$. Then $R$ is a nonsingular linear transformation on the space $Z$. According to Theorem 8 of [Gan, Chapter VII], the invariant subspaces of $R$ can be described as follows. Let $\psi(\lambda)=(\lambda-\lambda_1)^{n_1}\dots (\lambda-\lambda_k)^{n_k}$ be the minimal polynomial of $R$. For each natural $i\le k$ let $Z_i=\{x\in Z: (R-\lambda_iI)^{n_i}x=0\}$. Then $Z_i$ is an invariant subspace of $R$. Moreover, $Z$ splits into the direct sum $\bigoplus_{i=1}^k Z_i$ of the s
|
|linear-algebra|abstract-algebra|algorithms|linear-transformations|
| 0
|
A quadric has the origin as centre if and only if its equation has no terms of degree $1$
|
I'd like to know how to prove that if a(n affine) quadric of $\mathbb{A}^n$ (regarded as the affine space canonically associated to $K^n$, $K$ being a field) has the origin as centre, then its equation has no terms of degree $1$.
|
Let $\vec{q}\in V[x]$ be a vector in the two-dimensional vector space $V[x]$ over the field $\mathbb{R}$ defined by: $$\vec{q}(x)=(x,ax^2+bx+c) $$ For $\{a,b,c\}\subseteq\mathbb{R}$ . It can be geometrically shown that the set $Q\subseteq V$ defined by: $$Q=\{\vec{q}(x)\in V[x]| x\in \mathbb{R}\}$$ has an axis of symmetry passing through $\vec{q}(0)=(0,c)$ iff $\vec{q}(x)=\vec{q}(-x)$ . Assume this is the case, and observe that $\vec{q}$ can be rewritten as $\vec{q}=\vec{e}+\vec{o}$ , where $\{\vec{e},\vec{o}\}\subseteq V[x]$ are defined as: $$\vec{e}(x)=\frac{1}{2}(\vec{q}(x)+\vec{q}(-x))$$ $$\vec{o}(x)=\frac{1}{2}(\vec{q}(x)-\vec{q}(-x))$$ We observe that $\vec{o}(-x)=-\vec{o}(x)$ , so: $$\vec{q}(x)=\vec{q}(-x)\Leftrightarrow \vec{o}(x)\text{ is constant}$$ But the only way that $\vec{o}$ can be constant is if $b=0$ , so: $$\vec{q}(x)=(x,ax^2+c)$$ To prove the general case for an arbitrary affine plane $A$ we recall that $V\cong A$ for every $A$ , so there is an isomorphism $\phi:A\r
|
|geometry|affine-geometry|quadrics|
| 1
|
Equation of pair of lines created by the intersection of 2 double cones?
|
The equations of $2$ double cones having their vertex at the origin $(0,0,0)$ are given by: $(ax+by+cz)^2=(x^2+y^2+z^2)\cos^2(\theta_1) \hspace{25pt} (1)$ ( $\theta_1=$ semi-apical angle, and $a,b,c$ are the direction cosines of the axis of the cone) $(a'x+b'y+c'z)^2=(x^2+y^2+z^2) \cos^2(\theta_2) \hspace{25pt} (2)$ ( $\theta_2=$ semi-apical angle, and $a',b',c'$ are the direction cosines of the axis of the cone) Suppose it's given that the two cones intersect. So, it is obvious that their intersection will yield a pair of straight lines (or, in the special case where the cones just touch each other, a single line). What will be the equation of the lines (or line)?
|
The two given cones share the same vertex which is the origin. The axis of the first is the unit vector $u_1 = (a,b,c)$ while the second's axis is the unit vector $u_2 = (a',b',c')$ . The semi-apical angle of the first is $\theta_1$ and the second's is $\theta_2$ . The two lines of the intersection, are those vectors $r = (x,y,z)$ that make an angle of $\theta_1$ with $u_1$ and $ \theta_2$ with $u_2$ . Imposing $r$ to have a unit length results in the following three equations $ a x + b y + c z = \cos \theta_1 $ $ a' x + b' y + c' z = \cos \theta_2 $ $ x^2 + y^2 + z^2 = 1 $ The solution of this system is relatively easy because the first two equations are linear in $x,y,z$ . Their solution is the line $ L(\lambda) = V_0 + \lambda V_1 $ where $ V_1 = (a,b,c) \times (a',b',c') $ is the direction vector, and $ V_0 $ is any point that solves both equations. Such a point can be found by setting $z = 0 $ and solving the remaining $2 \times 2$ system. Doing so, gives $ x_1 = \dfrac{ b' \cos \th
|
|3d|
| 0
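The construction in the answer can be turned into a small solver. The sketch below is my own arrangement of the same three equations: instead of setting $z=0$ (which fails for some axis configurations), the particular point $V_0$ is taken in $\operatorname{span}(u_1,u_2)$, which makes $V_0\perp V_1$ and the quadratic for $\lambda$ trivial. It assumes the axes are unit vectors and not parallel:

```python
import math

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def cone_intersection(u1, t1, u2, t2):
    """Unit vectors r with angle t1 to unit axis u1 and angle t2 to u2.

    Solves u1.r = cos t1, u2.r = cos t2, |r| = 1; each solution spans one
    line of the intersection (one nappe of each cone)."""
    c1, c2 = math.cos(t1), math.cos(t2)
    d = dot(u1, u2)                        # axes must not be parallel
    # Particular solution of the two planes, taken inside span(u1, u2),
    # so that v0 is orthogonal to the direction vector v1 below.
    alpha = (c1 - c2*d) / (1 - d*d)
    beta = (c2 - c1*d) / (1 - d*d)
    v0 = tuple(alpha*p + beta*q for p, q in zip(u1, u2))
    v1 = cross(u1, u2)                     # direction of the planes' line
    disc = (1 - dot(v0, v0)) / dot(v1, v1)
    if disc < 0:
        return []                          # the cones do not intersect
    lam = math.sqrt(disc)
    return sorted({tuple(p + s*lam*q for p, q in zip(v0, v1))
                   for s in (1.0, -1.0)})  # one element if they just touch

# Sample: both semi-apical angles 45 degrees, axes 30 degrees apart.
u1 = (0.0, 0.0, 1.0)
u2 = (math.sin(math.pi/6), 0.0, math.cos(math.pi/6))
lines = cone_intersection(u1, math.pi/4, u2, math.pi/4)
for r in lines:
    assert abs(dot(r, r) - 1) < 1e-9
    assert abs(dot(r, u1) - math.cos(math.pi/4)) < 1e-9
    assert abs(dot(r, u2) - math.cos(math.pi/4)) < 1e-9
```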
|
Several values of irrational exponentiation
|
When talking about a number raised to a rational exponent, there are as many answers as the denominator of the exponent, as in the question: Is $9^{1/2}$ equal to $3$ or $-3$? However, when we have an irrational exponent like $2^{\sqrt2}$, I can't immediately see more than one candidate solution: $e^{\sqrt2\log2}$. Are there any ways to interpret irrational exponents such that they give several candidate solutions? Another way to state this question would be: is there more than one complex solution to $x^{1/p}=y$ for irrational $p$ and positive real $y$?
|
Among arbitrary nonzero complex bases, there is an infinitude of $j$th powers for each irrational number $j$. Let $a$ and $b$ be real numbers, $ab\neq0$, $e$ the base of the natural logarithm, $z$ an integer, and $i^2=-1$: $$(a+ib)^j=(e^{\ln(a+ib)+i2\pi z})^j=(e^{\ln(\sqrt{a^2+b^2})+i\arg(a,b)+i2\pi z})^j=(e^{\ln(\sqrt{a^2+b^2})+i(\arg(a,b)+2\pi z)})^j=e^{j\ln(\sqrt{a^2+b^2})+ij(\arg(a,b)+2\pi z)}=(\sqrt{a^2+b^2})^j\cos(j(\arg(a,b)+2\pi z))+i(\sqrt{a^2+b^2})^j\sin(j(\arg(a,b)+2\pi z))$$ As stated above, $j$ is irrational, so for no value of $z$ except $z=0$ is $jz$ an integer; hence the values for distinct $z$ are all distinct, and each nonzero complex number has an infinitude of $j$th powers.
|
|exponentiation|irrational-numbers|
| 0
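The branch structure described in the answer is easy to see numerically: the candidate values of $2^{\sqrt2}$ are $e^{\sqrt2(\ln 2+2\pi i k)}$, all lying on the circle of radius $2^{\sqrt2}$, and distinct for distinct integers $k$ because $\sqrt2\,k$ is never an integer for $k\neq0$. A small `cmath` sketch:

```python
import cmath
import math

def power_branches(base, p, ks):
    # Values of base**p arising from different branches of the logarithm:
    # base**p = exp(p * (Log(base) + 2*pi*i*k)) for integer k.
    log0 = cmath.log(base)               # principal branch
    return [cmath.exp(p * (log0 + 2j * math.pi * k)) for k in ks]

vals = power_branches(2, math.sqrt(2), range(-3, 4))

# k = 0 recovers the principal value e^{sqrt(2) ln 2}.
assert abs(vals[3] - 2**math.sqrt(2)) < 1e-9

# All seven branch values are distinct, though they share one modulus.
assert len({(round(v.real, 6), round(v.imag, 6)) for v in vals}) == 7
assert all(abs(abs(v) - 2**math.sqrt(2)) < 1e-9 for v in vals)
```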
|
Proof that if $card\,A \le card\,B$ and $card\,B \le card\,A$, then $card\,A = card\,B$
|
The question defines the '$\le$' operation on the cardinality of arbitrary sets A and B as follows: Let $\mathcal F$ be defined as the family of all injections from subsets of A into B. Since $\mathcal F$ can be considered a family of subsets of the cartesian product of A and B, $A \times B$, it can be partially ordered by inclusion. Applying Zorn's Lemma, we conclude that $\mathcal F$ has a maximal element, $f$. If the domain of $f$ is A, then $card\,A \le card\,B$. The reason I believe it was defined this way was to show that '$\le$' is a linear ordering. I am not sure if that proof is necessary to solve this question. But that proof can be accessed here: Applying Zorn's lemma in the proposition about cardinality of sets. It goes on to show that if $card\,A \le card\,B$ and $card\,B \le card\,A$, then $card\,A = card\,B$. The question asks to complete the proof. This is how it goes: Let $f: A \to B$ and $g: B \to A$ be injections. If $a \in A \cap rng\,g$
|
The whole point of the proof of the Cantor-Bernstein Theorem (the statement that if there is an injection $f\colon A\to B$ and an injection $g\colon B\to A$ , then there is a bijection $h\colon A\to B$ ) is to prove that the relation $\leq$ among cardinalities is a partial order. You don't know that ahead of time. Say two sets $A$ and $B$ are equipollent , denoted $A\approx B$ , if and only if there is a bijection between $A$ and $B$ . It is easy to verify this is an equivalence relation. Now define the relation $\preceq$ on sets by saying that $A\preceq B$ if and only if there is an injective function $f\colon A\to B$ . It is easy to show that: $A\preceq A$ for all $A$ ; that is, $\preceq$ is reflexive. If $A\preceq B$ and $B\preceq C$ , then $A\preceq C$ ; that is, $\preceq$ is transitive. If $A\approx X$ and $B\approx Y$ , then $A\preceq B$ if and only if $X\preceq Y$ . The point of the statement you are trying to prove is to show that if $A\preceq B$ and $B\preceq A$ , then $A\approx B$.
|
|solution-verification|proof-explanation|set-theory|
| 1
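The back-and-forth construction behind Cantor-Bernstein can be made concrete. The sketch below uses an illustrative pair of injections on $\mathbb{N}$ (my own choice, not from the question): trace each element's ancestry backwards through $g^{-1}$ and $f^{-1}$ until the chain dies; dying in $A$ means "use $f$", dying in $B$ means "use $g^{-1}$":

```python
def f(a):            # injection A -> B, with A = B = the naturals
    return 2*a

def g(b):            # injection B -> A
    return 2*b + 1

def h(a):
    # Cantor-Bernstein bijection A -> B: trace a's ancestry backwards
    # through g and f; the side on which the chain dies decides h(a).
    side, x = 'A', a
    while True:
        if side == 'A':
            if x % 2 == 1:                # x = g((x-1)//2): step back into B
                side, x = 'B', (x - 1) // 2
            else:                         # no g-preimage: chain dies in A
                return f(a)
        else:
            if x % 2 == 0:                # x = f(x//2): step back into A
                side, x = 'A', x // 2
            else:                         # chain dies in B: use g^{-1}(a)
                return (a - 1) // 2       # a is odd here, so this is exact

N = 2004
vals = [h(a) for a in range(N)]
assert len(set(vals)) == N                    # injective on 0..N-1
assert set(range(500)).issubset(set(vals))    # hits every small element of B
```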
|
General mapping of a ring homomorphism from ZxZ to ZxZ
|
Suppose we have $\phi : \mathbb{Z}$ x $\mathbb{Z} \rightarrow \mathbb{Z}$ x $\mathbb{Z}$ , where $\phi$ is a ring homomorphism. Also, assume we know $\phi(1,0) = (a,b)$ and $\phi(0,1) = (c,d)$. Then suppose we want to know the mapping of some other $(m,n) \in \mathbb{Z}$ x $\mathbb{Z}$. In general, will $\phi(m,n) = m\cdot(a,b) + n\cdot(c,d)$ ? My reasoning here is the following: $\phi(m,n) = \phi(m\cdot(1,0)+n\cdot(0,1))$. Then, since $\phi$ is a ring homomorphism, this is equal to $\phi(m\cdot(1,0)) + \phi(n\cdot(0,1))$, and since $m$ and $n$ are simply constants, this equals $m\cdot\phi(1,0) + n\cdot\phi(0,1)$ = $m\cdot(a,b) + n\cdot(c,d)$. Does this logic check out? Any help would be greatly appreciated.
|
Suppose we have $\phi : \mathbb{Z}$ x $\mathbb{Z} \rightarrow \mathbb{Z}$ x $\mathbb{Z}$ , where $\phi$ is a ring homomorphism. Then it must satisfy the following 2 conditions: for all $a,b \in \mathbb{Z}$ x $\mathbb{Z}$ $\phi (a+b) =\phi (a) + \phi (b)$ $\phi (a\cdot b) =\phi (a)\cdot\phi (b)$ Then for $(1,0) \in \mathbb{Z}$ x $\mathbb{Z}$ , we know that $(1,0)\cdot(1,0)=(1,0)$ , so $$\phi((1,0))=\phi ((1,0)\cdot(1,0)) =\phi ((1,0)) \cdot\phi ((1,0)) =\phi ((1,0))^2$$ Under what condition does $\phi ((1,0))^2 = \phi((1,0))$ ? Suppose $\phi((1,0)) = (a,b)\in\mathbb{Z}$ x $\mathbb{Z}$ , then $a^2 = a$ and $b^2 = b$ . Solving it gives you $a=1$ or $0$ , $b=1$ or $0$ . We know $\phi$ must map $(0,0)$ to $(0,0)$ and $(1,1)$ to $(1,1)$ . So the only options for $\phi((1,0))$ would be $(1,0)$ or $(0,1)$ .
|
|abstract-algebra|ring-homomorphism|
| 0
|
Equivalent formulation of the $abc$-conjecture
|
In a 2015 paper titled Lecture on the abc conjecture and some of its consequences by Michel Waldschmidt it is stated that: It can be easily seen that the $abc$ -conjecture is equivalent to the following statement: For each $\epsilon > 0$ , there exists $\kappa(\epsilon)$ such that, for any abc triple $(a, b, c)$ , $$c Another paper, Modular Forms, Elliptic curves and the ABC-Conjecture , written by Dorian Goldfeld (apparently as dedication to Alan Baker), states that: Let $A$ , $B$ , $C$ be non–zero, pairwise relatively prime, rational integers satisfying $A + B + C = 0$ . Define $N$ to be the square free part of $ABC$ . Then for every $\epsilon > 0$ , there exists $\kappa(\epsilon) > 0$ such that: $$\max(|A|, |B|, |C|) I have the following questions: Are both of these equivalent formulations of the $abc$ -conjecture correct? What does $\kappa(\epsilon)$ denote here? I know it's a constant but why it's $\kappa(\epsilon)$ and not simply $\kappa$ ? Is it like a function which varies for
|
"I think this question will help me as well as the public to better understand the equivalent formulations of the $abc$-conjecture" - well, the "public" doesn't really care about the $abc$ conjecture or any other higher math. The number $\kappa(\varepsilon)$ depends on $\varepsilon$. It absolutely does not depend on $a$, $b$, or $c$. Any equation $A + B + C = 0$ in nonzero integers can have its terms negated and/or moved to the other side in order to rewrite it as $a+b = c$ where $a$, $b$, and $c$ are all positive: $\{a, b, c\} = \{|A|, |B|, |C|\}$. Then $\max(|A|,|B|,|C|) = \max(a,b,c) = c$ since $c = a + b$ with $a$, $b$, and $c$ all positive.
|
|constants|open-problem|abc-conjecture|
| 1
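A concrete way to see what the conjecture measures: for an abc triple, compare $c$ with the radical $\operatorname{rad}(abc)$ (the product of the distinct primes dividing $abc$). The sketch below checks the well-known high-quality triple $2+3^{10}\cdot109=23^5$:

```python
import math

def rad(n):
    # Radical of n: product of its distinct prime factors (trial division).
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

# A famous high-quality abc triple: 2 + 3^10 * 109 = 23^5.
a, b, c = 2, 3**10 * 109, 23**5
assert a + b == c and math.gcd(a, b) == 1

R = rad(a * b * c)             # 2 * 3 * 109 * 23 = 15042, far smaller than c
q = math.log(c) / math.log(R)  # "quality": here c > rad(abc)^1.62
assert R == 15042
assert 1.6 < q < 1.7
```

The conjecture says that for each $\epsilon>0$ only finitely many triples have quality above $1+\epsilon$; the constant $\kappa(\epsilon)$ in the formulations quoted above absorbs those finitely many exceptions, which is why it must depend on $\epsilon$.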
|
Maximal ideals in $\mathbb{Z}[\sqrt{-5}]$
|
This is problem 8.3.8 in D&F. Let $R$ be the quadratic integer ring $\mathbb{Z}[\sqrt{-5}]$ and define the ideals $I_2 = (2, 1 + \sqrt{-5})$ , $I_3 = (3, 2 + \sqrt{-5})$ , and $I_3' = (3, 2 - \sqrt{-5})$ . Prove that $I_2$ , $I_3$ and $I_3'$ are prime ideals in $R$ . [One approach: for $I_3$ observe that $R/I_3 \cong (R/(3))/(I_3/(3))$ by the Third Isomorphism Theorem for Rings. Show that $R/(3)$ has $9$ elements, $I_3/(3)$ has $3$ elements, and that $R/I_3 \cong \mathbb{Z}/3\mathbb{Z}$ as an additive abelian group. Conclude that $I_3$ is a maximal (hence prime) ideal and that $R/I_3 \cong \mathbb{Z}/3\mathbb{Z}$ as rings.] The last line is giving me a bit of trouble. I've seen these posts ( 1 , 2 , there are several more) but the method of writing $R/I_3$ as a polynomial ring over an irreducible is unfamiliar. I assume I'm being asked to show $R/I_3 \cong \mathbb{Z}/3\mathbb{Z}$ first and then that $I_3$ is prime. I tried using the first isomorphism theorem and calculating by hand but I'm not
|
Let $I=(3,2+\sqrt{-5})$ and $R=\Bbb Z[\sqrt{-5}]$ . Note that every element of $R$ is of the form $m+n\sqrt{-5}$ , where $m,n\in\Bbb Z$ . Also notice that $I$ contains $-3$ and therefore $-3+2+\sqrt{-5}=\sqrt{-5}-1$ . When we quotient by an ideal containing $\sqrt{-5}-1$ , we are really applying the relation $\sqrt{-5}-1=0$ , or $\sqrt{-5}=1$ . Following this, in the quotient $R/I$ , we can write $m+n\sqrt{-5}=m+n\pmod{I}$ . If you want to say this more precisely, we have \begin{align} m+n\sqrt{-5}&\equiv m+n\sqrt{-5}-n(\sqrt{-5}-1)\pmod{I}\\ &\equiv m+n\pmod{I} \end{align} since $\sqrt{-5}-1\in I$ . This suggests we can define a map $R/I\to \Bbb Z/3$ by taking $m+n\sqrt{-5}$ to $m+n\pmod 3$ . Let $f\colon R\to\Bbb Z/3$ be the map $f(m+n\sqrt{-5})=m+n\pmod 3$ . Verify that $f$ is a homomorphism satisfying $f(3)=0$ and $f(2+\sqrt{-5})=0$ . It follows that $f$ descends to a map $\overline f\colon R/I\to\Bbb Z/3$ . It is easy to see that this map is surjective. To show $\overline f$ is in
|
|abstract-algebra|ring-theory|field-theory|
| 1
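The map $m+n\sqrt{-5}\mapsto m+n\pmod 3$ from the answer can be brute-force checked on a finite window of $R$; multiplicativity works because the cross terms differ by $-6nq\equiv0\pmod 3$:

```python
# Elements of R = Z[sqrt(-5)] as pairs (m, n) meaning m + n*sqrt(-5).
def mul(u, v):
    (m, n), (p, q) = u, v
    return (m*p - 5*n*q, m*q + n*p)

def f(u):
    # The answer's map R -> Z/3:  m + n*sqrt(-5)  |->  m + n (mod 3).
    m, n = u
    return (m + n) % 3

window = [(m, n) for m in range(-6, 7) for n in range(-6, 7)]
for u in window:
    for v in window:
        # f is additive and multiplicative on this finite window of R
        assert f((u[0] + v[0], u[1] + v[1])) == (f(u) + f(v)) % 3
        assert f(mul(u, v)) == (f(u) * f(v)) % 3

# Both generators of I_3 = (3, 2 + sqrt(-5)) lie in the kernel, and f is onto:
assert f((3, 0)) == 0 and f((2, 1)) == 0
assert {f(u) for u in window} == {0, 1, 2}
```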
|
Easy proof that $n=5$ is the only solution of $n^n \equiv n^{n^n} \pmod {10^{n-1}}$ if $n \in \mathbb{N}-\{0,1\}$ is not a multiple of $10$
|
Let $n > 1$ be an integer not a multiple of $10$ . Is there a short proof that $n = 5$ is the only solution to $n^n \equiv n^{n^n} \pmod {10^{n-1}}$ , given the fact that $5^{5^5} \equiv 3125 \pmod{10^4}$ (and even $3125 \equiv 5^{5^5} \pmod{10^5}$ )? P.S. This question is related to my research on the congruence speed of tetration and arose in the discussion of the OEIS sequence $A369826$ before it was finally approved. Moreover, I am planning to submit another sequence concerning the number of frozen rightmost digits of some tetration bases at a given height (as my pending sequences are processed), and I would like to add also a short explanation in the comments that does not invoke the congruence speed of $n$ (i.e., for any integer $n>5$ that is not a multiple of $10$ , the number of frozen digits at height $2$ cannot exceed three times the constant congruence speed of the base, which is given by Equation (16) of Number of stable digits of any integer tetration , and then the stated
|
A relatively short proof that the only non-multiple of $10$ with $n \gt 1$ which is a solution to $$n^n(n^{n^n-n} - 1) \equiv 0 \pmod{10^{n-1}} \tag{1}\label{eq1A}$$ is $5$ involves handling several cases, while also using the Lifting-the-exponent lemma (LTE lemma). Case # $1$ : $2 \nmid n$ , $5 \mid n$ and $n \ge 15$ Since $5^{n-1} \mid n^n$ , we just need to check if $$n^{n^n-n} - 1 \equiv 0 \pmod{2^{n-1}} \tag{2}\label{eq2A}$$ Since $2 \mid n - 1$ and $n^n - n$ is even, the LTE lemma gives $$\nu_2(n^{n^n-n} - 1) = \nu_2(n-1) + \nu_2(n+1) + \nu_2(n^n - n) - 1 \tag{3}\label{eq3A}$$ Using the LTE lemma with $n^n - n = n(n^{n-1} - 1)$ results in $$\nu_2(n^n - n) = \nu_2(n-1) + \nu_2(n+1) + \nu_2(n-1) - 1 \tag{4}\label{eq4A}$$ Substituting this into \eqref{eq3A}, and using that $\nu_2(n-1)$ or $\nu_2(n+1)$ is $1$ , shows that $$\nu_2(n^{n^n-n} - 1) = 3\nu_2(n-1) + 2\nu_2(n+1) - 2 \lt 3\lceil\log_2(n-1)\rceil \lt n - 1 \tag{5}\label{eq5A}$$ This means \eqref{eq2A} and, thus also \eqref{eq
|
|discrete-mathematics|modular-arithmetic|exponentiation|power-towers|hyperoperation|
| 1
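For small bases the congruence can be tested directly with Python's three-argument `pow`; this only spot-checks finitely many $n$ and is no substitute for the proof:

```python
def congruent(n):
    # Does n^n == n^(n^n) hold mod 10^(n-1)?
    mod = 10**(n - 1)
    return pow(n, n, mod) == pow(n, n**n, mod)

# Among small n > 1 not divisible by 10, only n = 5 works.
solutions = [n for n in range(2, 14) if n % 10 != 0 and congruent(n)]
assert solutions == [5]

# The fact quoted in the question: 5^(5^5) == 3125 (mod 10^4),
# and 5^5 = 3125 as well, so the two towers agree mod 10^4.
assert pow(5, 5**5, 10**4) == 3125 == pow(5, 5, 10**4)
```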
|
How to prove $\sum \limits_{cyc} \frac{1}{7a^2+bc} \ge \frac{9}{8(ab+bc+ca)}$ for $a,b,c>0$?
|
Let $a,b,c$ be positive real numbers. Prove that $$\frac{1}{7a^2+bc}+\frac{1}{7b^2+ca}+\frac{1}{7c^2+ab} \ge \frac{9}{8(ab+bc+ca)}$$ I saw it here. I tried fully expanding but it's very complicated. Also, Cauchy-Schwarz $$\sum_{cyc} \frac{1}{7a^2+bc} \ge \frac{9}{7(a^2+b^2+c^2)+(ab+bc+ca)}$$ leads us to a wrong inequality. Then I tried $2bc \le b^2+c^2$ without success. So far, I haven't had any good idea yet. Can someone help me with this problem? Thank you. My expanding $$\sum_{sym} \left(56a^4b^2+56a^4c^2+49a^4bc-\frac{49}{2}(ab)^3-\frac{49}{2}(ac)^3+456a^3b^2c+456a^3bc^2-1024a^2b^2c^2\right) \ge 0$$ It seems true by Muirhead but I don't know how to use it.
|
Remark : I also give a proof using the (standard) pqr method. Let $p = a + b + c, q = ab + bc + ca, r = abc$ . The desired inequality is equivalently written as $$-4608r^2 + (-63p^3 + 680pq)r + 56p^2q^2 - 161q^3 \ge 0. \tag{1}$$ Using $q^2 \ge 3pr$ , it suffices to prove that $$-4608r\cdot \frac{q^2}{3p} + (-63p^3 + 680pq)r + 56p^2q^2 - 161q^3 \ge 0$$ or $$q^2(56p^2 - 161q) \ge \left(\frac{1536q^2}{p} + 63p^3 - 680pq\right) r. \tag{2}$$ Since $p^2 \ge 3q$ , we have $\mathrm{LHS}_{(2)} \ge 0$ . We only need to prove the case that $\frac{1536q^2}{p} + 63p^3 - 680pq > 0$ . Using $q^2 \ge 3pr$ , it suffices to prove that $$q^2(56p^2 - 161q) \ge \left(\frac{1536q^2}{p} + 63p^3 - 680pq\right) \cdot \frac{q^2}{3p}$$ or $$\frac{q^2(105p^2 + 512q)(p^2 - 3q)}{3p^2} \ge 0$$ which is true using $p^2 \ge 3q$ . We are done.
|
|inequality|
| 0
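A quick numerical sanity check of the inequality and its equality case $a=b=c$ (random sampling, of course not a proof):

```python
import random

def gap(a, b, c):
    # LHS minus RHS of the inequality; should never be (numerically) negative.
    lhs = 1/(7*a*a + b*c) + 1/(7*b*b + c*a) + 1/(7*c*c + a*b)
    rhs = 9 / (8*(a*b + b*c + c*a))
    return lhs - rhs

# Equality at a = b = c: both sides equal 3/(8 a^2).
assert abs(gap(2.5, 2.5, 2.5)) < 1e-12

# Random positive triples stay on the right side of the bound.
random.seed(1)
for _ in range(10**5):
    a, b, c = (random.uniform(1e-3, 10) for _ in range(3))
    assert gap(a, b, c) > -1e-12
```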
|
Telescoping recursive term ${D(h) = D(h-2)+1}$
|
In the context of Computer Science, I am trying to calculate the maximum depth difference between leaf nodes in any existing AVL-Trees of height $h$ . I don't think any knowledge of AVL trees is needed, since the following terms are the stand-alone mathematical relations resulting from them. After some reflections, the recursive term resulting of this is the following ( $D(h)$ being the maximum depth difference for any possible AVL tree of height $h$ ): $D(h=2)=0, D(h=3)=1$ $D(h) = \max (D(h-1), D(h-2)+1) \quad\forall h\geq4$ This is then simplified to: $D(h) = D(h-2)+1 \quad\forall h\geq4$ Question 1: How can one assume this simplification? I believe somewhere I read something about monotone growth in it, but since there's only a recursive relation towards two terms before, there's no apparent connection between $D(h-1)$ and $D(h-2)$ . We now use telescoping to arrive at the following: $D(h) = D(h-2)+1 = ... = D(h-2k) + k$ Question 2: How do I understand what is the condition of k, to
|
Question 1 may be by induction: for $h\ge4$ , let $P(h)$ be the statement $$D(h-2) \le D(h-1) \le D(h-2)+1$$ For $h=4$ , check that $$D(2) \le D(3) \le D(2)+1$$ Assume $P(k)$ is true for some $k\ge 4$ : $$D(k-2) \le D(k-1) \le D(k-2)+1$$ Then for $h=k+1$ (which compares $D(k-1)$ and $D(k)$ ), $$\begin{align*} D(k) &= \max\{D(k-1), D(k-2)+1\}\\ &\ge D(k-1) \tag 1\\ D(k) &= D(k-2)+1\\ &\le D(k-1) + 1 \tag2 \end{align*}$$ From inequalities $(1)$ and $(2)$ , $$D(k-1) \le D(k) \le D(k-1)+1$$ So by mathematical induction, $\forall h\ge 4$ , $P(h)$ is true and particularly $D(h-1) \le D(h-2)+1$ . So $$D(h) = D(h-2) + 1 \quad \forall h\ge 4$$ Question 2 and 3 are usually by iterating enough times and observing the pattern. $$\begin{align*} D(h) &= D(h-2) + 1\\ &= (D(h-4) + 1) + 1 &&= D(h-4) + 2\\ &= (D(h-6) + 1) + 2 &&= D(h-6) + 3\\ &= (D(h-8) + 1) + 3 &&= D(h-\underbrace{8}_{2\cdot 4}) + 4\\ &= \vdots\\ &= D(h-2k) + k\\ \end{align*}$$ Then pick $k$ such that the argument $h-2k$ is $2$ or $3$
|
|algorithms|recursion|trees|recursive-algorithms|
| 1
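All three questions can be checked experimentally: memoize the recursion with the max left intact and compare against the closed form $D(h)=\lfloor (h-1)/2\rfloor$ that the telescoping predicts (the base case $h\in\{2,3\}$ reached by $h-2k$ depends on the parity of $h$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(h):
    # Maximum leaf-depth difference over all AVL trees of height h (h >= 2).
    if h == 2:
        return 0
    if h == 3:
        return 1
    return max(D(h - 1), D(h - 2) + 1)

# Monotonicity makes the max redundant (D(h-1) <= D(h-2) + 1), and
# telescoping D(h) = D(h-2) + 1 down to h = 2 or h = 3 gives the
# closed form floor((h - 1) / 2).
for h in range(2, 60):
    assert D(h) == (h - 1) // 2
```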
|
Boundary surjectivity implies surjectivity.
|
Let $A$ and $B$ be two compact convex subsets in $\mathbb{R}^n$ and $f:A\rightarrow B$ be a continuous function. Assume the following : i) $f( \mathrm{int} A) \subset \mathrm{int} B$ ; ii) $f : \partial A \rightarrow \partial B$ is bijective. The symbol " $\partial$ " means "boundary of" and the symbol " $\mathrm{int}$ " means "interior of". I have the intuition that $f(A) = B$ , that is, $f$ is surjective. Is it true? Do you have some help? Thank you! [I have this question from an interesting topic: generalized barycentric coordinates. It is an area with applications in Computer Graphics.] [I know that a continuous functions leaves compact sets in compact sets. The same holds with connected sets. It suggests me that the convex hip. can be replaced with connected.]
|
Let's lay out the blueprint for this answer. First, we're going to cover some handy properties of the gauge function. Second, we will dip into some algebraic topology to prove your conjecture true in the specific case where $A$ and $B$ are both the closed unit ball in $\Bbb{R}^n$ . Third, and finally, we use this result to prove your conjecture true for more general convex bodies $A$ and $B$ in $\Bbb{R}^n$ . To match with standard notation in algebraic topology, we will use $$D^n = \{x \in \Bbb{R}^n : \|x\| \le 1\}$$ to be the closed unit ball/disc in $\Bbb{R}^n$ . The gauge function Definition. Suppose we have a convex set $C$ that contains $0$ in its interior. Define the gauge function: $$\gamma_C(x) = \inf\{\lambda \ge 0 : x \in \lambda C\}.$$ Note that, since $0 \in \operatorname{int} C$ , this function is well-defined everywhere, and $\gamma_C(x) \ge 0$ for all $x$ . Proposition 1 $\gamma_C$ is a sublinear, and hence convex, function. Proof. Note that, for $\mu > 0$ and $x \in \Bb
|
|real-analysis|convex-analysis|
| 1
|
Probability of two specific cards to be in your hand in a game of bridge
|
Assume that a 52-card deck is distributed among 4 players, each with 13 cards. What is the probability that two specific cards, say the Ace of Hearts and the Ace of Diamonds, are both in my hand? I have two approaches to solving this, each giving a different answer. The first approach is to consider the sample space to be the positions of the two aces among the four players. There is a 1/4 chance that the Ace of Hearts is in your hand and a 1/4 chance that the Ace of Diamonds is in your hand, so the chance that both aces are in your hand is 1/16. The second approach uses combinations, the sample space being all possibilities of a 13-card hand from 52 cards. Then, we can calculate as follows: $\frac{\binom{50}{11}}{\binom{52}{13}} = \frac{1}{17} $ Which one is the correct answer, and where did the other working go wrong?
|
Here's an alternative approach. We arrange the $52$ cards in a row. You get the first $13$ cards. The next player gets the next $13$ and so on. The total number of ways of arranging the cards is $52!$ . For you to get the Ace of Hearts ( $A_H$ ) and the Ace of Diamonds ( $A_D$ ), they must be placed somewhere in the first $13$ positions. That can be done in $13 \times 12$ ways. The remaining $50$ cards can then be arranged in $50!$ ways. So the required probability would be $$\frac{13 \times 12 \times 50!}{52!} = \frac{1}{17}$$ Problem with your first approach While the probability of you getting one specific card is $1/4$ , that of getting another specific card when you already have the first in hand isn't $1/4$ . Let me explain. As I just said, the probability of you getting the first card is indeed $1/4$ . Each player has $13$ openings which will each be filled with one card. So there is symmetry. Hence the probability of a specific card going to any specific player is $1/4$ , regardless of
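Both counts can be verified exactly in Python (a quick sketch of my own, not part of the original argument), using exact rational arithmetic so no floating-point rounding is involved:

```python
from fractions import Fraction
from math import comb, factorial

# arrangement count: both aces among your 13 of the 52 positions
p_arrangement = Fraction(13 * 12 * factorial(50), factorial(52))

# hand count: choose the remaining 11 cards from the other 50
p_hand = Fraction(comb(50, 11), comb(52, 13))

assert p_arrangement == p_hand == Fraction(1, 17)
```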
|
|probability|combinatorics|statistics|conditional-probability|
| 1
|
Second Borel-Cantelli lemma via the moment method
|
Let ${E_1,E_2,\dots}$ be a sequence of jointly independent events. If ${\sum_{n=1}^\infty {\bf P}(E_n) = \infty}$ , show that almost surely an infinite number of the ${E_n}$ hold simultaneously. (Hint: compute the mean and variance of ${S_n= \sum_{i=1}^n 1_{E_i}}$ . One can also compute the fourth moment if desired, but it is not necessary to do so for this result.) Question : From the hint, we want ${\bf P}(\lim_n S_n = \infty) = 1$ , i.e., $S_n$ diverges almost surely, yet I didn't see immediately how the mean ${\bf E}(S_n) = \sum_{i=1}^n {\bf P}(E_i)$ and the variance ${\bf Var}(S_n) = \sum_{i=1}^n {\bf P}(E_i)(1 - {\bf P}(E_i))$ are applicable here; Chebyshev's inequality does not seem to convey much information.
|
$$ P\left(\vert S_n - E S_n \vert > \dfrac{1}{2}ES_n\right) \le \dfrac{4\text{Var}(S_n)}{(ES_n)^2} = \dfrac{4\sum_{k = 1}^n P(E_k)(1 - P(E_k))}{(\sum_{k = 1}^n P(E_k))^2} \le \dfrac{4}{\sum_{k = 1}^n P(E_k)} \rightarrow 0 $$ Or, equivalently, $$ P\left(\vert S_n - E S_n \vert \le \dfrac{1}{2}ES_n\right) \rightarrow 1 $$ But, $$ \{\vert S_n - ES_n \vert \le \dfrac{1}{2}ES_n\} \subseteq \{S_n \ge \dfrac{1}{2}ES_n\} $$ Therefore, $$ P\left(S_n \ge \dfrac{1}{2}ES_n\right) \rightarrow 1 $$ Now, for $M > 0$ , we can choose $n_0$ such that $$ \dfrac{1}{2} ES_n \ge M \ \forall n \ge n_0 $$ Thus, $$ P(S_n \ge M) \ge P\left(S_n \ge \dfrac{1}{2}ES_n\right) \forall n \ge n_0 $$ Finally, since $$ \bigcap_{M \in \mathbb{N}} \bigcup_{n \in \mathbb{N}} \{S_n \ge M\} \subseteq \{\lim_n S_n = \infty\} $$ we can conclude that $\mathbb{P}(\lim_n S_n = \infty) = 1$
|
|probability-theory|borel-cantelli-lemmas|
| 0
|
Help with reducing $71^{71}$ mod $17$ using Fermat's Little Theorem
|
I am self studying number theory and have come across a problem involving Fermat's Little Theorem that I cannot seem to solve. The question asks to find the least remainder of $71^{71} \pmod {17}$ . I started by noticing that $71^{71}=71^{(17-1)4} \times 71^{7}$ , which, by Fermat's, means my answer is congruent to $71^{7} \mod17$ . Is this the right first step, or is there a better way to solve this problem?
|
Okay, so you are doing it right! First, as the OP did, reduce the exponent to relate it to Fermat's little theorem: $71^{71} \equiv 71^{4(17-1)}\times 71^7 \pmod{17}$ Now, $(71^{16})^4 \equiv 1^4 \pmod{17}$ (using Fermat's little theorem), which means we are left with $71^7 \pmod{17}$ Since $71 = 4 \cdot 17 + 3$ , we have $71 \equiv 3 \pmod{17}$ $\implies 71^7 \equiv 3^7 \pmod{17}$ $\implies 3^7 = 3^{4} \cdot 3^{3} = 81 \times 27 \equiv (-4)(-7) \pmod{17}$ $\implies (-4)(-7) = 28 \equiv 11 \pmod{17}$
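As a quick sanity check (my addition), Python's built-in three-argument `pow` does modular exponentiation directly and confirms both the final answer and the intermediate steps:

```python
# one-line check with built-in modular exponentiation
assert pow(71, 71, 17) == 11

# the answer's intermediate steps
assert 71 % 17 == 3            # 71 = 4*17 + 3
assert pow(3, 7, 17) == 11     # 3^7 = 2187 = 128*17 + 11
```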
|
|elementary-number-theory|
| 1
|
Generate random point on the ellipsoid in 3d.
|
I want to generate the trajectory of point M in space, such that the sum of its distances to two other points, P and Q in space, remains constant, that is, |PM| + |MQ| = constant. Therefore, I believe the trajectory of this point should be an ellipsoid with PQ as its foci and rotating about the PQ axis. However, I wrote some code for this but it failed to generate the correct trajectory. I noticed that the code only succeeds when the midpoint of P and Q is at the origin (0,0,0). In other cases, the generated trajectory is incorrect. There seems to be a logical issue somewhere in the code. Can anyone help? clear; clc; d = 50; f1 = [0.5,7,1.0]; f2 = [3.5,6.0,0.0]; origin = [(f1(1) + f2(1))/2 ; (f1(2) + f2(2))/2; (f1(3) + f2(3))/2]; cc = sqrt((f1(1)-f2(1))^2 + (f1(2)-f2(2))^2 + (f1(3)-f2(3))^2)/2; a = d / 2; b = sqrt(a^2-cc^2); c = b; N = 99; u = rand(1,N); v = rand(1,N); theta = u * 2 * pi; phi = acos(v * 2 - 1); sinTheta = sin(theta); cosTheta = cos(theta); sinPhi = sin(phi); cosPhi = c
|
I've identified the mistake I made: when calculating the angle of rotation, I should compute it relative to the center point. Therefore, P1 should first subtract the origin: P = [cc,0,0]; P1 = f2 - origin'; cP = cross(P, P1);
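For completeness, here is a Python sketch of the same construction (my addition; variable names mirror the MATLAB code). Instead of computing a rotation angle it builds an orthonormal basis with the first axis along the foci, which avoids the rotation bug entirely, and it verifies $|PM|+|MQ|=d$ for every generated point:

```python
import numpy as np

d = 50.0
f1 = np.array([0.5, 7.0, 1.0])
f2 = np.array([3.5, 6.0, 0.0])

center = (f1 + f2) / 2
axis = f2 - f1
cc = np.linalg.norm(axis) / 2          # half the focal distance
u = axis / np.linalg.norm(axis)        # major-axis direction
a = d / 2                              # major semi-axis
b = np.sqrt(a**2 - cc**2)              # the two equal minor semi-axes

# orthonormal basis {u, v, w} with u along the foci
e = np.array([1.0, 0, 0]) if abs(u[0]) < 0.9 else np.array([0, 1.0, 0])
v = e - np.dot(e, u) * u
v /= np.linalg.norm(v)
w = np.cross(u, v)

rng = np.random.default_rng(0)
t = np.arccos(rng.uniform(-1, 1, 99))      # polar angle from the focal axis
psi = rng.uniform(0, 2 * np.pi, 99)        # azimuth about the focal axis

M = (center
     + a * np.cos(t)[:, None] * u
     + b * (np.sin(t) * np.cos(psi))[:, None] * v
     + b * (np.sin(t) * np.sin(psi))[:, None] * w)

s = np.linalg.norm(M - f1, axis=1) + np.linalg.norm(M - f2, axis=1)
assert np.allclose(s, d)   # |PM| + |MQ| = d for every generated point
```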
|
|linear-algebra|3d|rotations|
| 1
|
Connection between the polylogarithm and the Bernoulli polynomials.
|
I have been studying the polylogarithm function and came across its relation with Bernoulli polynomials, as the Wikipedia article asserts: For positive integer polylogarithm orders $s$ , the Hurwitz zeta function $\zeta(1-s, x)$ reduces to Bernoulli polynomials, $\zeta(1-n, x) = -B_n(x) / n$ , and Jonquière's inversion formula for $n = 1, 2, 3, \dots$ becomes: $$\operatorname{Li}_n(e^{2\pi ix}) + (-1)^n \operatorname{Li}_n(e^{-2\pi ix}) = -\frac{(2\pi i)^n}{n!} B_n(x)$$ where again $0 \leq \operatorname{Re}(x) < 1$ if $\operatorname{Im}(x) \geq 0$ , and $0 < \operatorname{Re}(x) \leq 1$ if $\operatorname{Im}(x) < 0$ . While I understand the formula itself and that the Hurwitz zeta function reduces to the Bernoulli polynomials for natural orders $s$ , I am having trouble understanding how to obtain it. I have referred to the following sources: Wikipedia - Polylogarithm MathWorld - Jonquière's Relation However, I still do not understand how to prove this formula. Could someone please explain, in detail, how it is obtained? Any insights or references to textbooks/research papers that provide a detailed derivation
|
See the Wikipedia page Hurwitz zeta function , which states the result under the name "Hurwitz's formula" as: $$ \zeta(1-s,a) = \frac{\Gamma(s)}{(2\pi)^s} \left( e^{-\pi i s/2} \sum_{n=1}^\infty \frac{e^{2\pi ina}}{n^s} + e^{\pi i s/2} \sum_{n=1}^\infty \frac{e^{-2\pi ina}}{n^s} \right) $$ This becomes the formula stated in the question, after taking $s$ to be a positive integer, renaming variables, and rearranging. The page cites references to a number of books and articles where you can take your pick among several possible proofs: Hurwitz's formula has a variety of different proofs. [See the references in Section 4 of Kanemitsu et al. 2017 .] One proof uses the contour integration representation along with the residue theorem. [Apostol, Introduction to analytic number theory , Theorem 12.6][Whittaker and Watson, A Course Of Modern Analysis 4th ed., Section 13.15] A second proof uses a theta function identity, or equivalently Poisson summation. [ Fine 1951 ] These proofs are analogou
|
|complex-analysis|special-functions|zeta-functions|polylogarithm|bernoulli-polynomials|
| 1
|
Transition matrices in $\mathbb{R}^n$: how to compute
|
I was reading "Elementary Linear Algebra Applications" by Howard Anton, and in section 4.6, Change of Basis, it talks about finding the transition matrix, $P$ , from an old basis $B$ to another new basis $B'$ . I understand that if the transition matrix satisfies $P_{B \to B'} B = B'$ , then the transition matrix can be found with $P_{B \to B'} = \{ [u_1']_B | [u_2']_B |\cdots | [u_n']_B \}$ , where $u_i'$ is the $i$ th basis vector/column of $B'$ and $[v]_B$ is the vector's coordinates with respect to basis $B$ . The textbook then says this: I don't understand how they got step 3, and I couldn't find a proof in the textbook... I don't see how simultaneously solving $n$ systems is connected to this... what is the "same coefficient matrix" in the $n$ systems? An explanation + proof would be greatly appreciated!
|
Row operations correspond to left multiplication by a matrix. Thus, performing row operations that turn $B'$ into $I$ is the same as left multiplying by $B'^{-1}$ . If we do the same row operations to $B$ , the result is $B'^{-1}B$ . Let $e_i$ be the vector $\left(\begin{matrix}0 \\ ... \\ 1 \\ ... \\ 0 \end{matrix} \right)$ with the $1$ at the $i$ th position. Then $B e_i$ is the $i$ th column of $B$ , so $B$ changes the $B$ basis to the standard basis. Then, left multiplying by $B'^{-1}$ changes the standard basis to the $B'$ basis, so $B'^{-1}B$ changes from the $B$ basis to the $B'$ basis.
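A small numpy sketch of the same fact (my own example; the $2\times2$ bases are arbitrary): row-reducing $[B' \mid B]$ to $[I \mid P]$ yields $P = B'^{-1}B$, which we can compute numerically with a linear solve, and then check that $P$ really converts $B$-coordinates to $B'$-coordinates:

```python
import numpy as np

# hypothetical bases of R^2; columns are the basis vectors
B  = np.array([[1.0, 1.0],
               [0.0, 1.0]])
Bp = np.array([[2.0, 0.0],
               [1.0, 1.0]])   # plays the role of B'

# row-reducing [B' | B] to [I | P] computes P = B'^{-1} B in one pass;
# numerically, solving B' P = B gives the same matrix
P = np.linalg.solve(Bp, B)    # transition matrix from B to B'

# check: for any coordinate vector c in the B basis, the vectors
# B c and B' (P c) are the same element of R^2
c = np.array([3.0, -2.0])
assert np.allclose(B @ c, Bp @ (P @ c))
```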
|
|linear-algebra|linear-transformations|change-of-basis|
| 1
|
Verifying the necessary and sufficient conditions for a function's inverse
|
If $(f \circ g)(x)=h(x)$ and $(g \circ f)(x)=h^{-1}(x)$ , is it necessary that $f(x)$ and $g(x)$ are inverses of each other? If not, what extra condition (other than just directly saying $h(x)=x$ ) must be imposed on $h(x)$ for them to be inverses? Assume that $f(x)$ and $g(x)$ are neither the identity function(s) nor equal to each other, and well-behaved (i.e. $C^{\infty}$ continuous and differentiable functions). All the 3 functions are invertible in their respective domains. So, upon applying the inverse operator on the first equality, $((f \circ g)(x))^{-1}=h^{-1}(x)$ $\implies$ $(g^{-1} \circ f^{-1})(x)=h^{-1}(x)$ $\implies (g \circ f)(x)=(g^{-1} \circ f^{-1})(x)$ and $(f \circ g)(x)=(f^{-1} \circ g^{-1})(x)$ However, I'm not able to proceed further and I haven't been able to come up with a counter-example yet. I was hoping to somehow demonstrate the commutativity of the composition of these two particular functions but that isn't easy to establish, just inspecting the above 2 rel
|
No, they don't have to be inverses of each other, and they don't have to commute. An easy way to get counterexamples is to take two distinct involutions (functions equal to their own inverses). And there are lots of continuous involutions of real numbers. For any real number $c$ , let $f_c$ reflect the real line about $c$ ; that is, $f_c(x)=2c-x$ . Note that $$f_c(f_c(x)) = f_c(2c-x) = 2c-2c+x = x.$$ Now pick $c\neq d$ and let $f=f_c$ , $g=f_d$ . Then $$(f\circ g)\circ(g\circ f)= f\circ(g\circ g)\circ f = f\circ f= \mathrm{id},$$ and likewise $(g\circ f)\circ(f\circ g) = \mathrm{id}$ . So setting $f\circ g=h$ we have $g\circ f=h^{-1}$ . But neither $f$ nor $g$ are the identity (they each have a unique fixed point), they are not inverses of each other, and they do not commute: $$\begin{align*} f\circ g(x)&=f(2d-x) = 2c - 2d + x,\\ g\circ f(x) &= g(2c-x) = 2d-2c +x. \end{align*}$$ Since $c\neq d$ , then $2(c-d)\neq 2(d-c)$ .
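A quick numeric check of this construction (my addition; the choices $c=1$, $d=4$ are arbitrary):

```python
c, d = 1.0, 4.0
f = lambda x: 2 * c - x     # reflection about c
g = lambda x: 2 * d - x     # reflection about d

h     = lambda x: f(g(x))   # h = f∘g : x -> x + 2(c - d)
h_inv = lambda x: g(f(x))   # g∘f : x -> x + 2(d - c)

for x in [-3.0, 0.0, 2.5, 10.0]:
    assert f(f(x)) == x and g(g(x)) == x          # f, g are involutions
    assert h(h_inv(x)) == x and h_inv(h(x)) == x  # g∘f is the inverse of f∘g

assert h(0.0) == -6.0 and h_inv(0.0) == 6.0       # f∘g and g∘f do not commute
assert f(g(0.0)) != 0.0                           # f and g are not inverses
```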
|
|functions|inverse-function|function-and-relation-composition|
| 1
|
Prove $1< \frac a{\sqrt{a^2+b^2}} + \frac b{\sqrt{b^2+c^2}} + \frac c{\sqrt{c^2+a^2}} \le \frac{3\sqrt2}2$
|
Suppose that $a, b, c$ are positive real numbers; prove that: $$1 < \frac a{\sqrt{a^2+b^2}} + \frac b{\sqrt{b^2+c^2}} + \frac c{\sqrt{c^2+a^2}} \le \frac{3\sqrt2}2$$ This is a question from 100 inequalities by Vasc and Arqady. Link For the right-hand side inequality, I have tried applying the AM-GM inequality and the Cauchy–Schwarz inequality to solve the inequality, but the issue is I am unable to arrange the terms for the $\le$ sign. The terms I want on the left side seem to end up on the right and vice versa. Even if I try to solve using the flipped sign of inequality, I am getting stuck midway, unable to simplify the terms. Terms used in the AM-GM inequality : $A : \frac a{\sqrt{a^2+b^2}}$ $B : \frac b{\sqrt{b^2+c^2}}$ $C : \frac c{\sqrt{c^2+a^2}}$ and then continuing with $\frac {A+B+C}3 \ge \sqrt[3] {ABC}$ . For the Cauchy–Schwarz inequality, I am using : $a_1 : \frac a{\sqrt{a^2+b^2}}, \ \ a_2 : \frac b{\sqrt{b^2+c^2}}, \ \ a_3 : \frac c{\sqrt{c^2+a^2}}$ and $b_1 : \frac{\sqrt{a^2+b^2}}a, \ \ b_2 : \frac{\sqrt{b^2+c^2}}b, \ \ b_3 : \frac{\sqrt{c^2+a^2}}c$ and then continuing with $(a_1^2+a_2
|
For the LHS: $$ \begin{align} \frac a{\sqrt{a^2+b^2}} + \frac b{\sqrt{b^2+c^2}} + \frac c{\sqrt{c^2+a^2}} &> \frac a{\sqrt{a^2+b^2+c^2}} + \frac b{\sqrt{a^2+b^2+c^2}} + \frac c{\sqrt{a^2+b^2+c^2}}\\ &= \frac {a+b+c}{\sqrt{a^2+b^2+c^2}}\\ &> 1 \end{align} $$ where the last inequality follows from $$ a+b+c = \sqrt{(a+b+c)^2} > \sqrt{a^2+b^2+c^2}. $$
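A numeric spot-check of both bounds (my addition, not a proof): sample random positive triples, evaluate the sum, and also confirm that $a=b=c$ attains the right-hand bound $\frac{3\sqrt2}{2}$:

```python
import math
import random

random.seed(0)
upper = 3 * math.sqrt(2) / 2
for _ in range(10_000):
    a, b, c = (random.uniform(1e-3, 100.0) for _ in range(3))
    s = a / math.hypot(a, b) + b / math.hypot(b, c) + c / math.hypot(c, a)
    assert 1 < s <= upper + 1e-12

# equality case of the right-hand side: a = b = c
assert abs(3 / math.hypot(1, 1) - upper) < 1e-12
```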
|
|algebra-precalculus|inequality|
| 0
|
Probability of two specific cards to be in your hand in a game of bridge
|
Assume that a 52-card deck is distributed among 4 players, each with 13 cards. What is the probability that two specific cards, say the Ace of Hearts and the Ace of Diamonds, are both in my hand? I have two approaches to solving this, each giving a different answer. The first approach is to consider the sample space to be the positions of the two aces among the four players. There is a 1/4 chance that the Ace of Hearts is in your hand and a 1/4 chance that the Ace of Diamonds is in your hand, so the chance that both aces are in your hand is 1/16. The second approach uses combinations, the sample space being all possibilities of a 13-card hand from 52 cards. Then, we can calculate as follows: $\frac{\binom{50}{11}}{\binom{52}{13}} = \frac{1}{17} $ Which one is the correct answer, and where did the other working go wrong?
|
If you want to use the "slot" method ( $4$ groups of $13$ slots each), since the question says my hand (of $13$ ), $Pr = \dfrac{13}{52}\dfrac{12}{51} = \dfrac{1}{17}$ [In your attempt, $\Large\frac14$ was right, just writing it as $\Large\frac{13}{52}$ would have shown where you were going wrong] And then, of course, there is the "standard" method of just looking at my hand which you already know, only I have written it in the full form as being more transparent $\dfrac{\binom22\binom{50}{11}}{\binom{52}{13}} =\dfrac1{17}$
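If you want an empirical confirmation as well, a short Monte Carlo sketch (my addition; the card labels are arbitrary) gives a frequency close to $1/17$:

```python
import random

random.seed(1)
deck = list(range(52))          # say label 0 is A-hearts and 1 is A-diamonds
trials = 200_000
hits = sum(
    {0, 1} <= set(random.sample(deck, 13))   # both aces in your 13 cards
    for _ in range(trials)
)
freq = hits / trials
assert abs(freq - 1 / 17) < 0.005            # 1/17 is about 0.0588
```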
|
|probability|combinatorics|statistics|conditional-probability|
| 0
|
Finite group where all the elements are commutators
|
Is there a finite group of order greater than $1$ , where all the elements are commutators? I thought about this question once, but I couldn't think of a single example. Obviously, such a group must be perfect. Perhaps the following fact can be used somehow: $g \in G$ is commutator iff $$\sum\limits_{\chi \in Irr(G)} \frac{\chi(g)}{\chi(1)} \neq 0$$
|
Let $G$ be a group, let $H$ be the commutator subgroup, then $H$ is normal in $G$ , and the quotient group $G/H$ is abelian. If $G$ is a nonabelian simple group, then it has no normal subgroups other than itself and the one-element subgroup, so its commutator subgroup must be the whole group. So any nonabelian simple group should do, e.g., the alternating group $A_n$ for $n\ge5$ . Now, the question is whether that means all elements are commutators. There are finite groups where there are products of commutators that aren't themselves commutators, but the smallest example is a group of order $96$ , so for $A_5$ it must be the case that all the elements are commutators. The result about groups of order $96$ has been mentioned on previous questions about commutators on this website, e.g., Derived subgroup where not every element is a commutator
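Since $A_5$ has only $60$ elements, the claim that every element is a commutator can be verified by brute force. Here is a sketch I added (permutations act on $\{0,\dots,4\}$ and are stored as tuples):

```python
from itertools import permutations

def compose(p, q):              # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                    # parity via cycle decomposition
    seen, sgn = set(), 1
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:     # an even-length cycle is an odd permutation
            sgn = -sgn
    return sgn

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
assert len(A5) == 60

commutators = {
    compose(compose(g, h), compose(inverse(g), inverse(h)))
    for g in A5 for h in A5
}
assert commutators == set(A5)   # every element of A5 is a commutator
```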
|
|group-theory|
| 1
|
Difficulty understanding why $ P \implies Q$ is equivalent to P only if Q.
|
I have difficulty understanding why $P \implies Q$ is equivalent to "P only if Q". I do understand that the statement "P only if Q" means $\lnot Q \implies \lnot P$ . Regarding this example, from Equivalence of $a \rightarrow b$ and $\lnot a \vee b$ : "If I win the lottery, then I will give you \$1 billion." This statement has the form $P \implies Q$ . But stating it as "P only if Q", i.e. "I win the lottery only if I give you \$1 billion", doesn't sound right. Is there anything I'm missing here?
|
In everyday speech/English, " $P$ only if $Q$ " suggests $Q$ causes $P$ . Examples: I'll go only if he isn't going. ("He isn't going" may help to cause "I'll go".) I'll take on this project only if the company increases my pay. ("The company increases my pay" may help to cause "I'll take on this project".) In contrast, in logic, " $P$ implies $Q$ " or " $P$ only if $Q$ " ignores any causality. In particular, there's no suggestion that either $P$ causes $Q$ or $Q$ causes $P$ . So, although "I win the lottery only if I give you \$1 billion" is logically correct, it sounds weird in everyday speech because it suggests that "I give you $1 billion" causes "I win the lottery" (which is of course absurd). This is perhaps similar to how the statement, "If I'm a trillionaire, then pigs can fly," is logically true (but nonetheless sounds weird to most).
|
|discrete-mathematics|logic|propositional-calculus|boolean-algebra|
| 0
|
Equivalence of two forms of the Marcinkiewicz interpolation theorem
|
On this article and Stein & Weiss (statement 1) and the books by Linares & Ponce and Duoandikoetxea (statement 2), I found the following statements of the Marcinkiewicz interpolation theorem: If $T:L^{p_0}+L^{p_1}\to L_w^{q_0}+L_w^{q_1}$ where $p_0\neq p_1$ and $q_0\neq q_1$ is a sublinear operator of weak type $(p_0, q_0)$ and weak type $(p_1, q_1)$ then it is of strong type $(p_\theta, q_\theta)$ for the appropriate $p_\theta, q_\theta$ . If $T:L^1+L^r\to L^1_w+L^r_w$ is a sublinear operator of weak type $(1,1)$ and weak type $(r,r)$ then it is of strong type $(p,p)$ for $1 < p < r$ . They were stated with the same name, which makes me think that they ought to be equivalent by some more or less trivial argument, but I haven't managed to find it. Also, none of the sources I mentioned discuss this. Statement 2 can readily be seen to imply statement 1 if $p_0=q_0 < p_1=q_1$ by writing the input as the difference of two positive functions and considering $S(f)=T(f^{1/p_0})^{p_0}$ . So, how can it be s
|
I would guess that it is not easy to prove 1 from 2. However, 2 seems to be the most important special case of 1, and therefore some authors prefer to give only this special case. If I remember correctly, the proof of 2 is also a little bit easier than that of the more general case 1.
|
|functional-analysis|lp-spaces|harmonic-analysis|
| 0
|
If for two strings a+b=b+a holds true, what properties can we derive about such strings?
|
Given two strings a and b such that a+b = b+a , what can we infer about such strings? It's like an abstract question where we don't know anything else but want to know the properties of such strings. Example: a = ABC b = ABCABC a+b = ABCABCABC b+a = ABCABCABC I think the following holds true but I believe we can take this further - If |a| = |b| then it implies both strings are equal. If |a| I believe we can also prove that there exists some string t which is repeating in both strings a and b but I can't prove it. Any help would be appreciated. In general, I have seen a+b and b+a comparisons, not just equality, to be very useful.
|
I'm going to denote concatenation of $a$ and $b$ by $ab$ , rather than $a + b$ . If I want $n$ copies of $a$ in a row, I will denote it by $a^n$ . We can do a kind of Euclidean algorithm-style argument to establish that there is some atomic string $c$ such that both $a$ and $b$ are a number of copies of $c$ end-to-end. Take $a_0 = a$ and $b_0 = b$ , assuming that $|b| \ge |a|$ . Otherwise, switch their roles. For $n \ge 0$ , let us perform the following step, recursively. Step $n$ : we begin with two strings $a_n, b_n$ such that $a_nb_n = b_na_n$ , and $|a_n| \le |b_n|$ . Let $m_n$ be the greatest number of copies of $a_n$ that we can fit as a prefix of $b_n$ . That is, $m_n$ is the largest number such that there exists some $c$ such that $b_n = a_n^{m_n} c$ . Let $a_{n+1}$ be such a string $c$ , and let $b_{n+1} = a_n$ . If $a_{n+1}$ is an empty string, return $b_{n+1}$ and terminate, otherwise proceed to Step $n+1$ . I claim that, given $a_n, b_n$ as above, we also have $a_{n+1}b_{n+
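The recursive step above can be written out as a short Python sketch (my addition; it assumes the precondition $ab = ba$ and returns the atomic string $c$):

```python
def common_root(a: str, b: str) -> str:
    # Euclidean-style reduction: assuming a+b == b+a, both strings are
    # repetitions of the returned string.
    if len(a) > len(b):
        a, b = b, a
    if not a:
        return b
    while b.startswith(a):      # strip as many copies of a as fit in front
        b = b[len(a):]
    return common_root(b, a)    # recurse on (remainder, a)

assert common_root("ABC", "ABCABC") == "ABC"
assert common_root("ABAB", "ABABAB") == "AB"
```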
|
|abstract-algebra|discrete-mathematics|gcd-and-lcm|
| 1
|
Find the 4th degree polynomial with integral coefficients whose root is $\sqrt{3}$ + $\sqrt{2}$
|
The question says to find the 4th degree polynomial with integral coefficients whose root is $\sqrt{3}$ + $\sqrt{2}$ This is how I solved it: I assumed $ax^{4} + bx^{3} + cx^{2} + dx + e$ to be the polynomial, where a,b,c,d,e are integers. As $x = \sqrt{3}$ + $\sqrt{2}$ is a root of the polynomial, putting $x$ into it yields 0 as the result. This gives $2\sqrt{6}(10a+c) + \sqrt{3}(9b+d) + \sqrt{2}(11b+d) + 49a + 5c + e = 0 \;\;\;$ ...(1) Now, as $a,b,c,d,e$ are integers, we need $10a + c = 0$ $9b + d = 0$ $11b + d = 0$ and $49a + 5c + e = 0$ for equation (1) to be satisfied. By solving these 4 relations, we get $b = d = 0$ $a = e$ and $c = -10e$ Therefore our polynomial becomes $ex^{4} - (10e)x^{2}+ e$ But now I am stuck with finding the value of $e$ . The answer to the problem is $x^{4} - (10)x^{2}+1 $ but I think e can be any integer as there are no other constraints given in the question. Is the answer wrong or did I make any mistakes in the procedure?
|
Start with the fact that $x=\sqrt 2+\sqrt 3$ is a solution to the polynomial. Squaring both sides gives $x^2=2+2\sqrt 6+3=5+2\sqrt 6$ . Thus $x^2-5=2\sqrt 6$ . Squaring again, $x^4-10x^2+25=24$ . Now observe that the equation $x^4-10x^2+1=0$ is satisfied by $x=\sqrt 2+\sqrt 3$ . Your solution is also correct. Notice that there are infinitely many valid polynomials, since $\sqrt 2+\sqrt 3$ remains a root after you multiply the polynomial by any nonzero integer. Once you reach the solution $ex^4-10ex^2+e$ , you are free to choose $e=1$ to get $x^4-10x^2+1$ . You could just as easily pick $e=-17$ . (This is partly why you often see problems ask for a monic polynomial. Then there is less ambiguity since the leading coefficient must be $1$ .)
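A quick floating-point check of both observations (my addition):

```python
import math

x = math.sqrt(2) + math.sqrt(3)
assert abs(x**4 - 10 * x**2 + 1) < 1e-9       # root of x^4 - 10x^2 + 1

e = -17                                       # any nonzero integer multiple works
assert abs(e * (x**4 - 10 * x**2 + 1)) < 1e-7
```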
|
|polynomials|
| 1
|
Why does $\int_{0}^{1}x^1\,dx=\frac{1}{2}$, and $\int_{0}^{1}x^2\,dx=\frac{1}{3}$, and $\int_{0}^{1}x^3\,dx=\frac{1}{4}$, and so on?
|
The $\int_{0}^{1}x^a\,dx$ seems to equal $\frac{1}{a+1}$ . I just came across this and thought it must be known why this is. See the table below: Why are these nice fractions showing up out of nowhere; what does the area under $F(x)=x^a$ between $x=0$ and $x=1$ have to do with these nice fractions? Of course, the antiderivative of $x^a$ is $\frac{{x}^{a+1}}{a+1}$ . And, that evaluated from $0$ to $1$ is just $\frac{{1}^{a+1}}{a+1}$ , which equals $\frac{1}{a+1}$ . But what is the geometrical link here, between this integral and these nice fractions?
|
I hope this helps you: https://www.desmos.com/calculator/8nak6fm0ks . You can imagine $y=\frac 1{x+1} \ \forall x=n\in \mathbb{N}$ and the cross-section of $y=\frac 1{x+1}$ with the integral over $[0,1]$ . The second Desmos file is https://www.desmos.com/calculator/vy5ybvxejp
|
|calculus|algebra-precalculus|
| 0
|
Union of sets of sets
|
I am not sure how one should interpret the union of sets of sets. Is the union a set whose elements are the elements of the sets inside, or is it a set whose elements are the sets themselves?
|
In order to understand this question: Let us have a set: A = {1,2,3} The elements in the set are 1, 2, 3. To show an element is part of a set, we use the symbol ∈. For example, 1 ∈ A. Let's say we have two sets, A = {1,2,3} and B = {4,5,6}. A union is an operation where you combine the sets you are unifying into one. And yes, you are talking about the elements. It is basically like addition with whole numbers. The sign we use is ∪ (not to be confused with U, the universal set). Let's get back to our example: A = {1,2,3} B = {4,5,6} A∪B = {1,2,3,4,5,6} It is important to note that if an element is displayed more than once, it should be shown only once in the union. You would show the union of the sets as (Set 1)∪(Set 2). One last thing: sets do not have to contain numbers. They can be used with all different types of information. I hope you like this answer.
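In Python terms (my addition), the two readings of "union" look like this: the binary union of two sets, and the union of a family of sets, which flattens one level so the result contains the members' elements:

```python
A = {1, 2, 3}
B = {4, 5, 6}
assert A | B == {1, 2, 3, 4, 5, 6}            # union of two sets of numbers

# a set OF sets (members must be hashable, hence frozenset), and its union:
family = {frozenset({1, 2}), frozenset({2, 3})}
assert set().union(*family) == {1, 2, 3}      # elements of the members; 2 once
```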
|
|elementary-set-theory|
| 0
|
Why does $\int_{0}^{1}x^1\,dx=\frac{1}{2}$, and $\int_{0}^{1}x^2\,dx=\frac{1}{3}$, and $\int_{0}^{1}x^3\,dx=\frac{1}{4}$, and so on?
|
The $\int_{0}^{1}x^a\,dx$ seems to equal $\frac{1}{a+1}$ . I just came across this and thought it must be known why this is. See the table below: Why are these nice fractions showing up out of nowhere; what does the area under $F(x)=x^a$ between $x=0$ and $x=1$ have to do with these nice fractions? Of course, the antiderivative of $x^a$ is $\frac{{x}^{a+1}}{a+1}$ . And, that evaluated from $0$ to $1$ is just $\frac{{1}^{a+1}}{a+1}$ , which equals $\frac{1}{a+1}$ . But what is the geometrical link here, between this integral and these nice fractions?
|
First of all, your question is a good one. Yes, you are correct, that is: $\int_{0}^{1}x\,dx=\frac{1}{2},\int_{0}^{1}x^{2}\,dx=\frac{1}{3},\dots$ In general we have the formula $\int_{0}^{1}x^{n}\,dx=\frac{1}{n+1}$ , which can be proved by induction.
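To see the pattern numerically (my addition), a left Riemann sum for the area under $x^n$ on $[0,1]$ approaches $\frac1{n+1}$:

```python
def riemann(power, steps=100_000):
    # left Riemann sum for the area under x**power on [0, 1]
    h = 1.0 / steps
    return sum((i * h) ** power * h for i in range(steps))

for a in range(1, 6):
    assert abs(riemann(a) - 1 / (a + 1)) < 1e-3
```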
|
|calculus|algebra-precalculus|
| 0
|
Pie chart values don't equal 100%
|
On the website "ArtFCity" there is a pie chart with values taken from a data table. However, the values inside the pie chart do not equal $100\%$. What is wrong with this math? $55.59 + 31.1 + 2.19 + 2.63 + 1.83 + 1.51 + 1.64 + 0.23 + 0.16 + 0.26 + 0.27 = 97.41$ Is there a missing value somewhere?
|
The answer to 55.59 + 31.1 + 2.19 + 2.63 + 1.83 + 1.51 + 1.64 + 0.23 + 0.16 + 0.26 + 0.27 is definitely 97.41. The website may not have been able to get the exact values, as they stated in the subheading of the graph. However, pie chart percentages should always total 100%. There is nothing wrong with the math, just a rounding miscalculation.
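The arithmetic can be double-checked in a couple of lines of Python (my addition):

```python
values = [55.59, 31.1, 2.19, 2.63, 1.83, 1.51, 1.64, 0.23, 0.16, 0.26, 0.27]
total = sum(values)
assert round(total, 2) == 97.41        # the slices really do sum to 97.41
assert round(100 - total, 2) == 2.59   # about 2.59% lost to rounding
```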
|
|algebra-precalculus|
| 0
|
dumbed down explanation of Fubini's theorem?
|
Could someone please give me a simple explanation of when it is valid to interchange the order of integration and summation when the summation is an infinite summation? For some context: during lecture the other day my current prof interchanged the order, and I asked if that was valid here; he said he couldn't think of any reason it wouldn't be, so I said that I remembered a theorem about when that was valid. After class I told him I thought it was Fubini's theorem and that the "function" inside the summation had to be Lebesgue integrable, and he didn't know what that meant. If someone could help me understand this and explain the concept to him, that would be very helpful.
|
As you have correctly noted, exchanging a series and an integral (or two series/integrals) is a nontrivial thing because you have to exchange two limiting processes, which in general leads to different results. The general form of the Fubini-Tonelli theorem gives a handy sufficient condition for when it is possible to exchange the order of integration: If a function is absolutely integrable in one specific order of integration, then it is automatically integrable in any order of integration and all orders yield the same result, i.e. you can exchange the integration processes as you like. Formally, if a measurable function $f: \mathbb{R}^n \to \mathbb{R}$ satisfies $$ \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty \lvert f(x_1, \dots, x_n) \rvert \,dx_1 \cdots dx_n < \infty, $$ then $$ \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f(x_1, \dots, x_n) \,dx_{\sigma(1)} \cdots dx_{\sigma(n)} = \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f(x_1, \dots, x_n) \,dx_{\rho(1)} \cdots dx_{\rho(n)} $$ for any permutations $\sigma, \rho$ of $\{1, \dots, n\}$.
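A classical discrete counterexample (my addition) shows why some absolute-summability hypothesis is needed: for $a_{mn} = 1$ if $m=n$, $-1$ if $m=n+1$, and $0$ otherwise, the two iterated sums disagree. Each row and column has finite support, so the iterated sums can be computed exactly:

```python
def a(m, n):
    # a_{mn} = 1 on the diagonal, -1 just below it, 0 elsewhere
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

def row_sum(m):
    # sum over all n; row m has support only at n = m and n = m - 1
    return a(m, m) + (a(m, m - 1) if m >= 1 else 0)

def col_sum(n):
    # sum over all m; column n has support only at m = n and m = n + 1
    return a(n, n) + a(n + 1, n)

sum_rows_first = sum(row_sum(m) for m in range(1000))   # rows, then sum: 1
sum_cols_first = sum(col_sum(n) for n in range(1000))   # columns, then sum: 0
assert (sum_rows_first, sum_cols_first) == (1, 0)
```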
|
|real-analysis|integration|sequences-and-series|
| 0
|
Expected score on an increasingly multiple choice test
|
If a multiple-choice test (where all questions are worth the same) has $n$ choices on each question, random guessing gets you a score of $\frac1n$ on average. But what if the number of choices increased as you went through the test? Let's say that the number of choices matches the question number, so question $\#3$ has $3$ choices, and question $\#5$ has $5$ choices, and so on. If this test has $n$ questions, what is the expected score for random guessing? The way I think of it is letting each question be worth 1 point and $X_i$ represent the amount of points gained on question $\#i$ . Then the expected value for $X_i$ is $$E(X_i)=0\cdot\frac{i-1}{i}+1\cdot\frac1i=\frac1i$$ So the expected value for the amount of points gained on the entire test is $$E(X)=E(X_1+X_2+\dots+X_n)$$ $$E(X)=E(X_1)+E(X_2)+\dots+E(X_n)$$ $$E(X)=1+\frac12+\dots+\frac1n=H_n$$ Since there are n points in total, the expected score is $H_n/n$ . Is this the correct way to find the expected score on the test? If it i
|
Yes, this is correct. $H_n\sim \ln n$ as $n\to\infty$ , that is, the ratio $H_n/\ln n\to 1$ . One way to see this is that $\ln n=\int_1^n\frac1xdx$ , and by approximating the area by $n$ rectangles you can show that $1+\frac12+\cdots+\frac1n>\ln n>\frac12+\frac13+\cdots+\frac1{n+1}$ , so $\ln n < H_n < \ln n + 1$ . So $H_n/n\approx\frac{\ln n}{n}\to0$ .
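Both the exact expected score $H_n/n$ and a simulation of the test can be checked with a short sketch (my addition; $n=10$ is an arbitrary test size):

```python
import random
from fractions import Fraction

def H(n):
    # exact n-th harmonic number
    return sum(Fraction(1, i) for i in range(1, n + 1))

n = 10
expected = float(H(n) / n)     # exact expected score, about 0.2929 for n = 10

random.seed(0)
trials = 100_000
points = sum(
    sum(1 for i in range(1, n + 1) if random.randrange(i) == 0)  # guess q. i
    for _ in range(trials)
)
score = points / (n * trials)
assert abs(score - expected) < 0.01
```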
|
|probability|expected-value|
| 1
|
On periodic orbits of a discrete dynamic system
|
Let $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ be continuous with $f(D) \subset D$ and let $x_0$ be an $n$ -periodic attractor point of $f$ . Show that every point of the cycle $\{ x_0, f(x_0),\dots,f^{n-1}(x_0) \}$ is also an $n$ -periodic attractor point of $f$ . I tried using the definition of continuity in $\mathbb{R}$ , but I always end up using 3 points of the cycle and I don't know how to solve this. Can anyone help me?
|
So essentially, you have by assumption continuous functions $F,G$ , here more precisely $F=f^k$ , $G=f^{n-k}$ , with a cycle $F(x_*)=y_*$ and $G(y_*)=x_*$ and a neighborhood $U$ of $x_*$ so that $$\lim_{m\to\infty}(G\circ F)^{\circ m}(x)=x_* ~~\text{ for all }~~ x\in U. $$ Now take $V=G^{-1}(U)$ and observe $$(F\circ G)^{\circ m}=F\circ(G\circ F)^{\circ(m-1)}\circ G$$ to find a similar limit property for all $y\in V$ being propagated towards $y_*$ . This redistribution of the compositions means that a $y=y_0\in V$ gets mapped to some $x=x_0\in U$ , and the sequences $x_m$ , $y_m$ are connected by $x_m=G(y_m)$ and $y_{m+1}=F(x_m)$ . Thus convergence of one sequence is equivalent to the convergence of the other.
|
|discrete-mathematics|dynamical-systems|
| 0
|
Why can we change indices when defining covariant derivative formula?
|
For $ \overrightarrow{V} = V^i \overrightarrow{e_i} = V_i \overrightarrow{e^i} $ , $ \frac{\partial \overrightarrow{V}}{\partial x^j} = \frac{\partial V^i}{\partial x^j} \overrightarrow{e_i} + V^i \frac{\partial \overrightarrow{e_i}}{\partial x^j} = \frac{\partial V^i}{\partial x^j} \overrightarrow{e_i} + V^i (\Gamma^k_{ij} \overrightarrow{e_k}) $ Yes, I get it until here. But I don't understand how we are able to switch the dummy indices $i$ and $k$ only in the second term of the last side so that it becomes $ \frac{\partial V^i}{\partial x^j} \overrightarrow{e_i} + V^k (\Gamma^i_{kj} \overrightarrow{e_i}) = (\frac{\partial V^i}{\partial x^j} + V^k \Gamma^i_{kj}) \overrightarrow{e_i} $ . Shouldn't $i$ in the first term also be changed to $k$?
|
You have a double sum $$ \sum_i V^i \sum_k \Gamma^k_{ij} e_k = \sum_{i,k} V^i\Gamma^k_{ij}e_k $$ and if you relabel the dummy variables $i,k$ as, let's say, $\alpha,\beta$ , you end up with $$ \sum_i V^i \sum_k \Gamma^k_{ij} e_k = \sum_{\alpha,\beta} V^{\alpha}\Gamma^{\beta}_{\alpha j}e_{\beta} $$ which now becomes, by relabelling $\alpha,\beta$ as $k,i$ , $$ \sum_i V^i \sum_k \Gamma^k_{ij} e_k = \sum_{i,k} V^k\Gamma^i_{kj}e_i. $$ Hence, \begin{align} \sum_i \left( \partial_jV^i e_i + V^i\sum_k\Gamma^k_{ij}e_k \right) &= \left(\sum_i\partial_jV^ie_i\right) + \left(\sum_iV^i\sum_k \Gamma^k_{ij}e_k\right) \\ &= \left(\sum_i \partial_jV^ie_i\right) + \left(\sum_{i,k} V^i\Gamma^k_{ij}e_k\right) \\ &= \left(\sum_i \partial_jV^ie_i\right) + \left(\sum_{i,k}V^k\Gamma^i_{kj}e_i \right) \\ &= \left(\sum_i \partial_jV^ie_i\right) + \left(\sum_i\sum_kV^k\Gamma^i_{kj}e_i\right)\\ &= \sum_i\left( \partial_jV^i + \sum_{k}V^k\Gamma^i_{kj}\right)e_i. \end{align} If you don't understand, look at an easy example with explicit sums.
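A quick numeric sanity check of the relabelling (a sketch with arbitrary random components; `Gamma[k][i]` plays the role of $\Gamma^k_{ij}$ for one fixed $j$):

```python
import random

random.seed(0)
n = 3
V = [random.random() for _ in range(n)]                               # components V^i
Gamma = [[random.random() for _ in range(n)] for _ in range(n)]       # Gamma[k][i] ~ Gamma^k_{ij}
e = [[1.0 if s == k else 0.0 for s in range(n)] for k in range(n)]    # basis vectors e_k

def vec_sum(terms):
    out = [0.0] * n
    for coeff, basis in terms:
        for s in range(n):
            out[s] += coeff * basis[s]
    return out

# sum_{i,k} V^i Gamma^k_{ij} e_k
lhs = vec_sum((V[i] * Gamma[k][i], e[k]) for i in range(n) for k in range(n))
# after swapping the dummy labels i <-> k: sum_{i,k} V^k Gamma^i_{kj} e_i
rhs = vec_sum((V[k] * Gamma[i][k], e[i]) for i in range(n) for k in range(n))
```

Both loops run over the full $(i,k)$ grid, so renaming the dummies does not change the set of terms being added.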
|
|derivatives|differential-geometry|vector-analysis|covariance|differential|
| 1
|
Relationship between $L^2$-distance and cosine similarity
|
Given a vector space $X$ , the cosine similarity can be defined as: $$c(x,y)= \frac{\langle x, y\rangle}{\| x \| \| y \|} $$ and distance is: $$d(x,y) = \| x - y\| $$ First, I expect to estimate some "similarity" between linear operators, i.e. $A, B: X\to X$ . The most commonly used norm is the 2-norm derived directly from the vector norm: $$ \| A \| = \sup_{\|x\|=1} \|Ax\| $$ and $$D(A, B) = \| A - B\|. $$ Similarly, I can also define the cosine similarity between two operators by: $$ C(A,B) = \inf_x c(Ax, Bx) $$ When $C(A,B) = 1$ , it is easy to assert that $\exists k>0,$ s.t. $A = k B$ . Now, I expect to use $C$ to estimate $D$ , i.e. $$ \min_k D(A, kB) = \textrm{some-function}(C(A, B)) $$ However, I just cannot get a good result. Thanks for your comment.
|
You say that "When $C(A,B)=1$ , it is easy to assert that $\exists k>0$ s.t. $A=kB$ ." Fix $a>1$ and let $$A=\begin{pmatrix} a & 0 \\ 0 & 1\end{pmatrix}$$ and $$B=\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}.$$ Let $$x=\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$ Then $$Ax=Bx=x,$$ and $$c(Ax,Bx)=1.$$ Therefore $$C(A,B)=\sup_{\|y\|=1}c(Ay,By)=1.$$ But clearly neither matrix is a constant multiple of the other. The condition that $\sup_{\|y\|=1}c(Ay,By)=1$ just means that for at least one non-zero $y$ , $Ay,By$ point in the same direction. If $A=kB$ for some $k>0$ , then $Ay,By$ will point in the same direction for every $y$ (where we agree that the zero vector points in the same direction as itself). But with $A,B$ above, for any $k$ , $\|A-kB\|\geqslant a$ , which can be as big as we want. So a large $C(A,B)$ , even equal to $1$ , as you have defined it, does not yield any estimate on $\min_k \|A-kB\|$ . For the other direction, if $\|A-kB\|$ is small for $k>0$ (as a fraction of $\|A\|$ ), you
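The counterexample is easy to verify numerically; this sketch uses the answer's matrices with the assumed value $a=5$ (since $A-kB$ is diagonal, its 2-norm is just the largest diagonal entry in absolute value):

```python
import math

a = 5.0
# A = [[a, 0], [0, 1]] and B = [[0, 0], [0, 1]], acting on 2-vectors
def A(v): return (a * v[0], v[1])
def B(v): return (0.0, v[1])

def cos_sim(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

x = (0.0, 1.0)
similarity = cos_sim(A(x), B(x))   # both images equal x, so this is 1

# A - kB = diag(a, 1-k) is diagonal, so ||A - kB|| = max(a, |1-k|) >= a for every k.
gap = min(max(a, abs(1 - k)) for k in [j / 10 for j in range(-100, 101)])
```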
|
|calculus|linear-algebra|functional-analysis|machine-learning|
| 0
|
dumbed down explanation of Fubini's theorem?
|
Could someone please give me a simple explanation of when it is valid to interchange the order of integration and summation when the summation is an infinite summation? For some context: during lecture the other day my current prof interchanged the order, and I asked if that were valid here, and he said he couldn't think of any reason it wouldn't be, so I said that I remembered a theorem about when that was valid. After class I told him I thought it was Fubini's theorem and that the "function" inside the summation had to be Lebesgue integrable, and he didn't know what that meant. If someone could try to help me understand this and to explain the concept to him, that would be very helpful.
|
The classical long way via measurable sets: A set is measurable if its inner measure (supremum of the measures of closed sets contained in it) and outer measure (infimum of the measures of open sets containing it) coincide. The $n$-dimensional integral of the indicator of a measurable set in $\mathbb R^n$ is its volume. The $n$-dimensional integral of a positive function over a measurable set is its $L_1$ norm with respect to that set. As the limit of series of positive, bounded terms, it is independent of the division of the volume into a complete cover of nonoverlapping, measurable subsets and of the order of summation. A positive function is called (Lebesgue) measurable if it is the limit of step functions on a subdivision of the measurable volume into measurable subdivisions. The Lebesgue integral is defined as the limit of sums ordered according to the value of the function on the subdivisions. The Lebesgue integral of a measurable function over a measurable set $V\subset \mathbb R$ is $$\int_V f(x)\, \mathrm{d}x := \cdots$$
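A concrete illustration of why some integrability hypothesis is needed (this example is not in the answer above): the classic double sequence whose two iterated sums disagree. Fubini/Tonelli does not apply because $\sum_{m,n}|a_{mn}|=\infty$.

```python
# a(m, n) = +1 on the diagonal, -1 just above it, 0 elsewhere (m, n >= 0).
def a(m, n):
    if n == m:
        return 1
    if n == m + 1:
        return -1
    return 0

def row_sum(m):
    # sum over all n >= 0; only n = m and n = m + 1 contribute
    return a(m, m) + a(m, m + 1)                       # = 0 for every m

def col_sum(n):
    # sum over all m >= 0; only m = n and m = n - 1 contribute
    return a(n, n) + (a(n - 1, n) if n >= 1 else 0)    # = 1 for n = 0, else 0

rows_then_cols = sum(row_sum(m) for m in range(1000))  # rows first: 0
cols_then_rows = sum(col_sum(n) for n in range(1000))  # columns first: 1
```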
|
|real-analysis|integration|sequences-and-series|
| 0
|
Does this simple model have a name?
|
For my thesis I created a simple random graph model and studied some of its properties, and I was wondering if this model has a name so I can look into it further. The model essentially takes the Erdos-Renyi graph model and iterates it, and adds a probability $q$ at each iteration to remove a given existing edge. So more concretely: We start with a graph $G = (V,E)$ with $E = \emptyset$ . For each iteration, we consider all pairs $x,y\in V$ . If $(x,y)\not\in E$ , add $(x,y)$ to $E$ with probability $p$ . If $(x,y)\in E$ , we remove $(x,y)$ from $E$ with probability $q$ . We continue this process for $t$ iterations, and then return the graph that we have at the end. We write the resulting graph as $G(n,p,q,t)$ . Thanks in advance!
|
This is just the Erdos-Renyi graph with different parameters: each edge will independently be present at the end with some probability $r=r(p,q,t)$ which you can in principle work out. (Indeed it has a limit as $t\to\infty$ for any nontrivial choice of $p,q$ , because the state of each edge after $t$ iterations is just a two-state Markov chain.)
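The per-edge probability $r(p,q,t)$ can be worked out from the two-state chain; a short sketch comparing the recurrence from the model's definition with the standard closed-form solution of a two-state Markov chain:

```python
def edge_probability(p, q, t):
    # One edge of G(n, p, q, t): absent -> present w.p. p, present -> absent w.p. q.
    r = 0.0  # the graph starts with no edges
    for _ in range(t):
        r = r * (1 - q) + (1 - r) * p
    return r

def edge_probability_closed_form(p, q, t):
    # Solving r_t = (1 - p - q) r_{t-1} + p with r_0 = 0 gives
    # r_t = p/(p+q) * (1 - (1 - p - q)**t), with limit p/(p+q).
    return p / (p + q) * (1 - (1 - p - q) ** t)
```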
|
|graph-theory|random-graphs|
| 1
|
How to determine whether a curve can be smoothly parametrized in C
|
When I was reading complex analysis, I saw the definition of the integral over a smooth curve, but later, in a theorem, the proof integrates over a triangle directly without proving that it can be smoothly parametrized by an interval. How can we determine this in general? Thanks!
|
The triangle is essentially a disjoint union of $3$ consecutive smooth curves (i.e. the sides of the triangle). You can read the first 2-3 paragraphs in the writeup linked below for more clarification. https://complex-analysis.com/content/curves_in_the_complex_plane.html
|
|complex-analysis|analysis|
| 1
|
How to compare angles without function atan2?
|
I have the angle between two vectors; pseudocode for func angleBetween: dotProduct = v1.x * v2.x + v1.y * v2.y; determinant = v1.x * v2.y - v1.y * v2.x; angle = atan2(determinant, dotProduct); How to compare $$\text{angleBetween}(v_1,v_2) < \text{angleBetween}(v_3,v_4)$$ using only determinant and dot product (or some other combination) without using atan2 ?
|
The sign of the determinant indicates the orientation of the vectors, while the dot product gives a measure of their alignment. Here's how you can compare the angles using only the dot product and the determinant: Calculate the dot product and determinant for both pairs of vectors. Use the dot product to check if the vectors are pointing in the same direction (dot product > 0) or opposite directions (dot product < 0). If the dot product is positive, use the sign of the determinant to determine the relative angle between the vectors. If the dot product is negative, you'll have to handle the cases where the angle exceeds 180 degrees.

# Assuming v1, v2, v3, v4 are vectors represented as tuples (x, y)
def dot_product(v1, v2):
    return v1[0] * v2[0] + v1[1] * v2[1]

def determinant(v1, v2):
    return v1[0] * v2[1] - v1[1] * v2[0]

def compare_angles(v1, v2, v3, v4):
    dot_product_12 = dot_product(v1, v2)
    dot_product_34 = dot_product(v3, v4)
    det_12 = determinant(v1, v2)
    det_34 = determinant(v3, v4)
    # Check ...
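A different, commonly used trick not spelled out in the answer above: replace `atan2` by a monotone "pseudo-angle" (sometimes called the diamond angle), which preserves the ordering of angles while using only comparisons and one division. A sketch:

```python
def pseudo_angle(det, dot):
    # Monotone surrogate for atan2(det, dot); assumes det, dot are not both 0.
    # Same ordering as the true angle, but cheaper to compute: one branch per
    # quadrant and a single division, no trigonometry.
    if det >= 0:
        r = det / (dot + det) if dot >= 0 else 1 - dot / (det - dot)
    else:
        r = 2 - det / (-dot - det) if dot < 0 else 3 + dot / (dot - det)
    # r runs over [0, 4) as the angle runs over [0, 2*pi); shift the upper half
    # down so the ordering matches atan2's range (-pi, pi].
    return r - 4 if r > 2 else r
```

Two angles then compare the same way their pseudo-angles do: `pseudo_angle(det_12, dot_12) < pseudo_angle(det_34, dot_34)`.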
|
|geometry|
| 1
|
Problem on integral inequality
|
$$\ln 1 + \ln 2 + \cdots + \ln k \ge \int_1^k \ln(x) \,\mathrm{d} x$$ I have no idea how to start, as I have not seen this type of inequality before. I tried looking at the graphs and this is indeed true. However I have no idea how to prove this. Any help would be appreciated.
|
Since $\ln(x)$ is a monotonically increasing function, its definite integral is no larger than any corresponding upper sum. In particular, if we split up $[1,k]$ into intervals of length $1$ and take into account that $\ln(1)=0$ , we obtain \begin{align*} \int_1^k\ln(x)\,\mathrm{d}x &= \sum_{j=2}^k\int_{j-1}^j\ln(x)\,\mathrm{d}x\\ &\leq \sum_{j=2}^k\sup_{x\in[j-1,j]}\ln(x)\\ &= \sum_{j=2}^k\ln(j)\\ &= \ln(1) + \ln(2) + \cdots + \ln(k). \end{align*}
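A numeric sanity check of the inequality (not part of the original answer); it also checks that the gap is at most $\ln k$, which is the difference between the upper and lower sums:

```python
import math

def log_sum(k):
    # ln 1 + ln 2 + ... + ln k, the upper sum for unit subintervals of [1, k]
    return sum(math.log(j) for j in range(1, k + 1))

def log_integral(k):
    # antiderivative x ln x - x evaluated on [1, k]
    return k * math.log(k) - k + 1
```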
|
|calculus|sequences-and-series|analysis|
| 0
|
Not decreasing sequence that converges to 0: sufficient condition for the sequence of indexes for which the sequence decreases to be bounded.
|
Consider a strictly positive sequence $\{u_n\}$ of real numbers (i.e. $\forall n, u_n >0$ ). Suppose that $\displaystyle \lim_{n \to \infty} u_n = 0$ and that $\{u_n\}$ is not decreasing . Consider the sequence $\{l_n\}$ defined by : $\forall n \in \mathbb N, l_n := \min\left\{i-n : i>n \text{ and } u_i < u_n\right\}$ . This sequence counts the number of iterations needed before a term of the sequence $\{u_n\}$ , smaller than the current term, appears. In a previous post , I wondered whether such a sequence could be said to be bounded (when $\{u_n\}$ is not decreasing), but the answer is no: I've been given a counter-example. (See the post ). I'm now wondering what kind of conditions need to be added to the sequence $\{u_n\}$ for the sequence $\{l_n\}$ to become bounded. At first, I thought of a " Cauchy sequence " condition, but I have the impression that the counterexample $\{u_n\}$ given in the previous post is indeed a Cauchy sequence, yet the associated sequence $\{l_n\}$ is not bounded.
|
Given that $u_n>0$ converges to $0$ but isn't a decreasing sequence, each of the following is equivalent: " $l_n$ is bounded" "there exists $K$ such that $\inf B_{j+1} < \inf B_j$ for all $j$ , where $B_j$ denotes $\{u_{jK},u_{jK+1},\cdots,u_{(j+1)K-1}\}$ " "there exists $K$ such that $c_{j+1} < c_j$ for all $j$ , where $\displaystyle c_j$ denotes $\inf_{n\leq jK}u_n$ " "the gap between any pair of consecutive elements of $T$ is bounded, where $T$ denotes the set of integers $\tau$ such that $u_i>u_\tau$ whenever $i < \tau$ " Not too hard to prove.
|
|sequences-and-series|convergence-divergence|cauchy-sequences|
| 1
|
Computational Complexity of Equational Logic
|
Equational logic uses a surprisingly small set of axioms to prove all algebraic identities (algebraic in the sense of universal algebra, so things like field theory fall beyond this scope). This makes proving an identity in algebra seem pretty easy, which matches experience: usually you can find a proof quickly or even instantly, whereas disproving an identity can involve constructing a complex counterexample. I was wondering if this can be demonstrated formally. Specifically, suppose we have an efficient algorithm for doing equational logic. Given an algebraic identity consisting of $n$ symbols, what can we say about the maximum length of time the algorithm will take to prove the identity asymptotically? Note that I am only interested in identities. There is no disproof in equational logic, so the algorithm will never halt on a non-identity.
|
The short answer is: equational logic is as powerful as computation in general, so there is no computable bound for the time needed to prove equational problems. One way to make this precise is the following: fix some complete algorithm $A$ for proving equations from finite sets of equations. Then there is no computable function $f: \mathbb{N} \to \mathbb{N}$ s.t. $A$ terminates with a proof on all provable inputs of size $n$ within $f(n)$ steps. Suppose such a function existed. Let $\langle S, R\rangle$ be a finite presentation of a group with an undecidable word problem. Then the following would be a decision algorithm for the word problem in this group: given an input word $w$ , let $n$ be the size of the axioms of groups plus the size of $\langle S,R \rangle$ plus the size of $w$ . Compute $k=f(n)$ . Run $A$ for $k$ many steps trying to prove $w = e$ from the axioms of groups and $\langle S, R\rangle$ . If $A$ outputs a proof, answer yes; otherwise answer no.
|
|logic|computational-complexity|proof-theory|universal-algebra|computer-assisted-proofs|
| 1
|
Sum of complex numbers at the vertices of a regular polygon
|
If $z_1,z_2,z_3, \ldots ,z_n$ are the vertices of an $n$ -sided regular polygon with $z_0$ as its centre, then find $$\sum_{r=1}^{n} z_r^k$$ where $k \in \Bbb N, k < n$ . And my working is here: $$z_r - z_0 = (z_1-z_0)e^{i(2\pi(r-1)/n)}$$ $$\sum_{r=1}^n z_r^k = \sum_{r=1}^n\left(z_0+(z_1-z_0)e^{i(2\pi(r-1)/n)}\right)^k$$ Thanking you in anticipation, Ann
|
As recommended in the comments start with the Lemma: $$\forall\{m:\, n\nmid m\}:\quad \sum_{r=0}^{n-1}\left(\omega_{n,\alpha}^r\right)^m=0,\tag1$$ where $\omega_{n,\alpha}^r=e^{i\left(\alpha+\frac{2\pi}nr\right)}$ The proof is trivial: $$ \sum_{r=0}^{n-1}e^{i\left(\alpha+\frac{2\pi}nr\right)m}=e^{i\alpha m}\sum_{r=0}^{n-1}e^{i\frac{2\pi}nrm}=e^{i\alpha m}\frac{1-e^{i(2\pi m)}}{1-e^{i\frac{2\pi}nm}}=0. $$ Now since $z_r=z+R\omega_{n,\alpha}^r$ we have $$ \begin{aligned} \sum_{r=0}^{n-1}z_r^k&=\sum_{r=0}^{n-1}\left(z+R\omega_{n,\alpha}^r\right)^k\\ &=\sum_{r=0}^{n-1}\sum_{m=0}^k \binom km z^{k-m}R^{m}\left(\omega_{n,\alpha}^r\right)^m\\ &=\sum_{m=0}^k \binom km z^{k-m}R^{m}\sum_{r=0}^{n-1}\left(\omega_{n,\alpha}^r\right)^m\\ &=nz^k. \end{aligned} $$ The last equality holds since in view of $k<n$ only the term with $m=0$ survives according to Lemma $(1)$ .
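A quick numeric check of the final identity $\sum_r z_r^k = n z^k$, with $z$ the centre; the centre, radius, starting angle, and the pair $(n,k)$ below are arbitrary sample values:

```python
import cmath

n, k = 7, 4                            # any pair with 0 < k < n
z0, R, alpha = 0.3 + 0.2j, 1.5, 0.7    # centre, circumradius, starting angle
vertices = [z0 + R * cmath.exp(1j * (alpha + 2 * cmath.pi * r / n)) for r in range(n)]
total = sum(z ** k for z in vertices)
```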
|
|complex-numbers|roots-of-unity|
| 0
|
Sum of squares are closed under products and squaring.
|
For any natural number $n\ge 1$, given pairs $(a_1,b_1),(a_2,b_2),...,(a_n,b_n)$ of integers, there exist integers $c$ and $d$ such that $$\prod_{i=1}^{n}(a_i^2+b_i^2) = c^2+d^2$$ My initial approach is Base Case: $(a_1^2+b_1^2) = a_1^2+b_1^2$ which is true. (Although it is trivial) Prove the statement is true when $n=2$: We have $$(a^2+b^2)(c^2+d^2) = (ac-bd)^2+(ad+bc)^2$$ (Thanks André Nicolas for pointing it out) So if $a,b,c,d$ are integers, $ac,bd,ad,bc$ are all integers and integers are closed under addition and subtraction. Hence $(ac-bd),(ad+bc)$ are integers. Inductive Hypothesis: $\prod_{i=1}^{n}(a_i^2+b_i^2) = c^2+d^2$ is true Inductive Step: $$\prod_{i=1}^{n+1}(a_i^2+b_i^2) = \prod_{i=1}^{n}(a_i^2+b_i^2)\cdot (a_{n+1}^2+b_{n+1}^2) = (c^2+d^2)\cdot (a_{n+1}^2+b_{n+1}^2)$$ Where $c$ and $d$ are integers. But when we apply the $n=2$ case, we have $(c^2+d^2)\cdot (a_{n+1}^2+b_{n+1}^2) = e^2 + f^2$ where $e$ and $f$ are integers. Hence, by the principle of induction, the statement is true for all $n \ge 1$.
|
I prefer to proceed as follows. Prove that the statement holds for n=2. In a product with more than 2 factors, you can always replace two factors with one factor until you are left with two factors.
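The two-factor identity, applied left to right exactly as in this reduction, is easy to run; a sketch (equivalently, multiply the Gaussian integers $a+bi$ and read off the real and imaginary parts):

```python
def two_square_product(pairs):
    # Brahmagupta-Fibonacci: (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2,
    # folded over the list of pairs from left to right.
    c, d = pairs[0]
    for a, b in pairs[1:]:
        c, d = a * c - b * d, a * d + b * c
    return c, d
```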
|
|elementary-number-theory|proof-writing|diophantine-equations|sums-of-squares|
| 0
|
Question over the splitting field of a certain polynomial
|
I read in a resolution of an exercise that the splitting field of the polynomial $ f(x)=x^{4}+2x^{3}-1 $ over $ \Bbb Z_{3}$ is obtained by extending $\Bbb Z_{3}$ with any root of the polynomial $f(x)$ . Could someone give me an explanation of this?
|
If we knew $f(x) = x^4-x^3-1$ to be irreducible, then $\Bbb F_3[x]/(f)$ would be isomorphic to $\Bbb F_{81}$ , and by this we would get the claim in the OP. We can check irreducibility by hand, but I'd like to present another method. I claim that if $f(\alpha) = 0$ , $\alpha \in \Bbb F_3[x]/(f)$ , then $\alpha$ is invertible and has multiplicative order exactly $80$ . To verify this, we need to compute $x^{16}$ , $x^{40}$ and $x^{80}$ modulo $f$ . $x^4=x^3+1$ , $x^5=x^3+x+1$ , $x^6=x^3+x^2+x+1$ ; $x^8 = (x^3+1)^2 = (x^3+x^2+x+1)+2x^3+1=x^2+x+2$ ; $x^{16} = (x^8)^2 = 2x^2+x+2 \not = 1$ ; $x^{40} = (x^{16})^2 x^8 = (2x^3+x+2)(x^2+x+2)= 2 \not = 1$ ; $x^{80}$ is then clearly $1$ . So the quotient is indeed a field of order $81$ , and we additionally got a remarkable fact that $f$ is a primitive polynomial in the sense of field theory : each of its roots is a generator of the multiplicative group of the quotient field. (Note that, in general, a root of an irreducible polynomial need not generate the multiplicative group.)
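The order-$80$ computation can also be automated with a few lines of polynomial arithmetic over $\Bbb F_3$ (a sketch, not part of the original answer; coefficient lists are lowest-degree-first):

```python
def poly_mul_mod(a, b, f, p):
    # Multiply polynomials a*b, then reduce modulo the monic polynomial f,
    # with all coefficients taken mod p.
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    deg_f = len(f) - 1
    for i in range(len(prod) - 1, deg_f - 1, -1):  # kill top coefficients
        c = prod[i]
        if c:
            for j, fj in enumerate(f):
                prod[i - deg_f + j] = (prod[i - deg_f + j] - c * fj) % p
    res = prod[:deg_f]
    while len(res) > 1 and res[-1] == 0:
        res.pop()
    return res

p = 3
f = [2, 0, 0, 2, 1]   # x^4 - x^3 - 1  =  x^4 + 2x^3 + 2 over F_3 (monic)
x = [0, 1]
power, order = x[:], 1
while power != [1] and order < 1000:   # multiply by x until we reach 1
    power = poly_mul_mod(power, x, f, p)
    order += 1
```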
|
|abstract-algebra|
| 1
|
Exploring Markov Models for Determining Car Insurance Costs
|
Question: I'm currently delving into the realm of Markov models to understand how they can be applied to determine the cost of car insurance. Thus far, I've come across the bonus-malus system and hidden Markov chains. However, I'm curious to learn about other Markov models that could be relevant in this context. Could someone shed light on additional models or elaborate on how these models are utilized in determining car insurance costs? Background: In my exploration of car insurance cost determination, I've encountered the bonus-malus system, which adjusts premiums based on the insured's claim history, and hidden Markov chains, which model the underlying states affecting claim occurrence. While these are insightful, I'm keen to broaden my understanding by discovering other Markov models applicable to this domain. Criteria for Response: Short explanation of additional Markov models relevant to determining car insurance costs. Insights into how these models are applied in the insurance
|
In car insurance, a portfolio might be divided into classes depending on their claim experience. For each class an individual premium is calculated by methods such as marginal sums, generalized linear models, etc. In my experience, Markov models are not used solely for premium calculation in car insurance. However, as you already indicated, the dynamic adjustment of a driver's premium (according to the classes) might be modeled using a Markov process. In practice, the premium is often based upon more detailed information than just the number of claims in the previous period. In actuarial science, Markov models are especially used in life insurance to model states of an individual over time (e.g. healthy, disabled, dead). Such a model could be an inhomogeneous Markov process. Discrete models in industry are provided, e.g., as mortality tables. A good book I can recommend is Asmussen, Steffensen: Risk and Insurance.
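As an illustration of the Markov-process view of a bonus-malus system, here is a toy sketch: the three classes, the claim probability, and the transition rules are all hypothetical, and the long-run class distribution is found by power iteration:

```python
# Hypothetical 3-class bonus-malus system: class 0 = maximal bonus, class 2 = malus.
# A claim-free year moves a driver one class toward 0; a year with a claim moves up.
q = 0.1  # assumed probability of at least one claim per year
P = [
    [1 - q, q,     0.0],  # transitions from class 0
    [1 - q, 0.0,   q  ],  # transitions from class 1
    [0.0,   1 - q, q  ],  # transitions from class 2
]

# Long-run share of drivers per class: iterate pi <- pi P until it stabilizes.
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(10_000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
```

With these made-up numbers, most drivers accumulate in the cheapest class, which is the usual qualitative behaviour of such systems.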
|
|probability|stochastic-processes|markov-chains|mathematical-modeling|
| 0
|
Confusion with matrix derivatives and component-wise gradients
|
According to this source (The Matrix Cookbook), we have that for $f(X) = \mathbf{a}^T X \mathbf{b}$ where $X \in \mathbb{R}^{n \times m}, \mathbf{a} \in \mathbb{R}^n, \mathbf{b} \in \mathbb{R}^m$ , the derivative is $$ \frac{\partial f}{\partial X} = \mathbf{a} \mathbf{b}^T$$ which is a scalar since it's basically a dot product. I see how we can arrive at this result by applying matrix derivative rules/applying basic derivative rules to vectors/matrices. However, taking the derivative (gradient) of a function $f: \mathbb{R}^{n \times m} \to \mathbb{R}$ based on my calculus knowledge should yield a vector/matrix of partial derivatives. As an example, I tried this with $\mathbf{a} = \begin{bmatrix} a_1 & a_2 \end{bmatrix}, \mathbf{b} = \begin{bmatrix} b_1 & b_2 \end{bmatrix}$ , and $X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$ . I expanded the expression and took the gradient to get that $$\nabla f = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} \\ \frac{\partial f}{\partial x_3} & \frac{\partial f}{\partial x_4} \end{bmatrix}$$
|
The Jacobian matrix is the derivative of a vector function $\pmb{f}$ of a vector $\pmb{x}$ : $$ \frac{\partial \pmb{f}}{\partial \pmb{x}^T} = \begin{bmatrix}\frac{\partial f_1}{\partial x_{1}} & \frac{\partial f_1}{\partial x_{2}} \\ \frac{\partial f_2}{\partial x_{1}} & \frac{\partial f_2}{\partial x_2} \end{bmatrix} $$ i.e. the partial derivatives are w.r.t. the transpose of the vector. The relation between the differential of $\pmb{f}$ and the differential of $\pmb{x}$ is \begin{align*} d\pmb{f}= \begin{bmatrix} df_1 \\ df_2 \end{bmatrix} & = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} dx_1 + \frac{\partial f_1}{\partial x_2} dx_2 \\ \frac{\partial f_2}{\partial x_1} dx_1 + \frac{\partial f_2}{\partial x_2} dx_2 \end{bmatrix} \\ &= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix} \begin{bmatrix} dx_1 \\ dx_2 \end{bmatrix} =\frac{\partial \pmb{f}}{\partial \pmb{x}^T}\, d\pmb{x}. \end{align*}
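For the concrete case $f(X)=\mathbf a^T X \mathbf b$ from the question, the differential relation can be checked numerically: the increment $f(X+dX)-f(X)$ equals the entrywise (Frobenius) inner product of the gradient matrix $\mathbf a\mathbf b^T$ with $dX$. A sketch with random data:

```python
import random

random.seed(1)
n, m = 3, 4
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(m)]
X = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
dX = [[1e-6 * random.gauss(0, 1) for _ in range(m)] for _ in range(n)]

def f(M):  # a^T M b
    return sum(a[i] * M[i][j] * b[j] for i in range(n) for j in range(m))

X_plus = [[X[i][j] + dX[i][j] for j in range(m)] for i in range(n)]
increment = f(X_plus) - f(X)
# <a b^T, dX>: the (i, j) partial derivative of f is a_i * b_j
predicted = sum(a[i] * b[j] * dX[i][j] for i in range(n) for j in range(m))
```

Since $f$ is linear in $X$, the two quantities agree up to floating-point rounding, with no higher-order term.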
|
|multivariable-calculus|derivatives|matrix-calculus|
| 0
|
What is the purpose of covariance matrix in multivariate (normal) distribution?
|
Why do we care about the covariance matrix in the multivariate normal distribution? When we are talking about a single random variable X ~ N(mu, std) the parameters are mu and std . But when we have a vector (X1, X2, X3, ..., Xn) ~ N(mu, cov) we have an n-dim vector mu and an n x n covariance matrix cov . Why don't we just have an n-dim vector of stds for the second parameter?
|
The easiest way to see why is to think about the bivariate case. If you have just two univariate normal distributions (one for the $x$ axis and one for the $y$ axis on a plane), you would always end up with an ellipse whose major and minor radii are perfectly aligned with the $x$ and $y$ axes. There would be no way to represent an elliptical distribution that is tilted or spread (diagonally for example) in the $xy$ plane. The orientation of the ellipse of the distribution is encoded in the covariance matrix. The covariance matrix encodes the variances as well as the orientation of the elliptical distribution, so that you can have arbitrary multivariate normal distributions. You can see two univariate distributions in the walls on the image below. The resulting elliptical distribution is however slightly tilted because there is some correlation between them. (Example taken from https://en.wikipedia.org/wiki/Multivariate_normal_distribution ). If you had to plot these univariate distributions
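The tilt can be read off the covariance matrix directly; a small sketch with an assumed correlation $\rho = 0.8$ (the principal axes of the ellipse are the eigenvectors of the covariance matrix, and the $45°$ tilt below is exactly what a plain pair of stds cannot express):

```python
import math

rho = 0.8                        # assumed correlation between the two coordinates
cov = [[1.0, rho], [rho, 1.0]]   # equal unit variances, nonzero covariance

# Eigen-decomposition of a symmetric 2x2 matrix in closed form.
half_trace = (cov[0][0] + cov[1][1]) / 2
disc = math.sqrt(((cov[0][0] - cov[1][1]) / 2) ** 2 + cov[0][1] ** 2)
eigenvalues = (half_trace - disc, half_trace + disc)   # ellipse axis lengths^2

# Orientation of the major axis: theta = 0.5 * atan2(2*s_xy, s_xx - s_yy)
tilt_deg = math.degrees(0.5 * math.atan2(2 * cov[0][1], cov[0][0] - cov[1][1]))
```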
|
|statistics|multivariable-calculus|normal-distribution|
| 0
|
If $X_1,\dots,X_n$ are independent and identically distributed and $S_n$ is their sum, prove that $\mathbb{E}[X_i \mid S_n]=\mathbb{E}[X_1 \mid S_n]$
|
Consider the following exercise If $X_1,\dots,X_n$ are independent with the same distribution and $S_n=X_1+\cdots+X_n$ , prove that $$\mathbb{E}[X_i \mid S_n]=\frac{S_n}{n}$$ Proof: Since the variables have the same distribution we have that: $$S_n=\mathbb{E}[S_n \mid S_n]=\mathbb{E}\left[\sum_{i=1}^n X_i \mid S_n \right]=\sum_{i=1}^n\mathbb{E}[X_i \mid S_n]=n \cdot \mathbb{E}[X_i \mid S_n]$$ I'm trying to figure out why the fact that the variables have the same distribution implies that they have the same conditional expectation with respect to $S_n$ . I tried to see if the expected values of the variables on the sets of the form $S_n^{-1}(A)$ (with $A\subseteq \mathbb{R}$ a Borel set) are equal (I mean that $\mathbb{E}[X_1\mathbf{1}_{S_n^{-1}(A)}]=\mathbb{E}[X_i\mathbf{1}_{S_n^{-1}(A)}]$ ), because this fact would give me the thesis, but I don't know how to do it. Any suggestion on how to demonstrate the equality between the conditional expectations?
|
The exercise is a direct application of the following proposition, with $X=(X_1,X_2,\cdots,X_n)$ and $Y=(X_{i_1},X_{i_2},\cdots,X_{i_n})$ , where $(i_1,\dots,i_n)$ is a permutation of $(1,\dots,n)$ (since the $X_j$ are i.i.d., $Y$ has the same distribution as $X$ ): If $X=(X_1,X_2,\cdots,X_n)$ and $Y=(Y_1,Y_2,\cdots,Y_n)$ are identically distributed and $f$ and $g$ are measurable functions, then: $$ E[f(X)\mid g(X)=z]=E[f(Y)\mid g(Y)=z] \tag{1}$$ Proof: $$\int E[f(X)\mid g(X)]I_A(g(X))dP_{\Omega}=\int f(X)I_A(g(X))dP_{\Omega} $$ But: $$\int E[f(X)\mid g(X)]I_A(g(X))dP_{\Omega}=\int E[f(X)\mid g(\mathbf x) ]I_A(g(\mathbf x))dP_X$$ And: $$\int f(X)I_A(g(X))dP_{\Omega}=\int f(\mathbf x)I_A(g(\mathbf x))dP_X$$ Therefore, for $(1)$ to be valid, it is enough that: $$\int E[f(X)\mid g(\mathbf x)]I_A(g(\mathbf x))dP_X=\int f(\mathbf x)I_A(g(\mathbf x))dP_X$$ But this last equation, and therefore $E[f(X)\mid g(\mathbf x)]$ , only depends on the distribution of $X$ .
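For a concrete finite case the identity $E[X_i \mid S_n] = S_n/n$ can be verified by brute-force enumeration; a sketch with three fair dice (an arbitrary choice):

```python
from fractions import Fraction
from itertools import product

n = 3  # three i.i.d. fair dice
outcomes = list(product(range(1, 7), repeat=n))  # 216 equally likely outcomes

def cond_exp_given_sum(i, s):
    # E[X_i | S_n = s] by direct enumeration over the conditioning event
    hits = [w for w in outcomes if sum(w) == s]
    return Fraction(sum(w[i] for w in hits), len(hits))
```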
|
|probability|probability-theory|expected-value|conditional-expectation|
| 1
|
Prove that the function is not non-expansive
|
Prove that the function is not non-expansive. $T$ is non-expansive if $$ \forall x,y \hspace{3mm} \|Tx-Ty\| \leq \|x-y\| $$ $T(x)= \frac{x \sin(x)}{2}$ for $0\leq x \leq \pi$ and $0$ elsewhere. I want to prove this without taking derivatives. My attempt: \begin{align} \left|\frac{x \sin(x)}{2}-\frac{y \sin(y)}{2}\right|&=\frac{1}{2}|x\sin x-x\sin y+x\sin y-y\sin y| \\ &\leq\frac{1}{2}(|x\sin x-x\sin y|+|x\sin y-y\sin y|) \\ &= \frac{1}{2}|x||\sin x-\sin y|+\frac{1}{2}|\sin y||x-y|\\ &\leq \frac{1}{2}|x||x-y|+\frac{1}{2}|x-y| \end{align} I am unable to proceed further. Can someone please help?
|
As mentioned by Mengchun Zhang, this is not true. In fact, we have $T'(x) < -1$ for $x \in [3,\pi]$ and, therefore, $T$ is not non-expansive.
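A concrete witness in the spirit of the answer: since the slope exceeds $1$ in absolute value on $[3,\pi]$, a single pair of points there already violates non-expansiveness:

```python
import math

def T(x):
    # the map from the question
    return x * math.sin(x) / 2 if 0 <= x <= math.pi else 0.0

x, y = 3.0, math.pi  # both in [3, pi], where |T'| > 1
expansion = abs(T(x) - T(y)) / abs(x - y)  # about 1.5, so > 1
```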
|
|contraction-mapping|
| 1
|
How to find $P(A|B)$ when we only know $P(A)$ and $P(B)$?
|
In case you’d want to know: I’m a 6th grade student and I am self-learning probability (that’s one of the things). I know Bayes’ theorem: $$ P(A | B) = \frac{P(B | A) \cdot P(A)}{P(B)} $$ Here’s an example: $A$ : The chance you get sick, which could be $P(A) = 0.000219$ . $B$ : The chance you test negative, which could be $P(B) = 0.9993$ . Our goal is to find $P(A | B)$ which is the chance you are sick given you test negative. This is basically the false negative rate. There are three things we need: The chance you get sick, which we have: it’s $0.000219$ . The chance you test negative, which we also have: it’s $0.9993$ . The chance you test negative given you are sick, which we don’t know. This is the specificity. To find the false negative rate, what would we do? We need $P(B | A)$ to find $P(A | B)$ , but how do we find $ P(B | A)$ when we only know $P(A)$ and $P(B)$ (assuming we don’t have things like data sets at hand)?
|
You cannot. $P(B|A) = P(B \cap A)/P(A)$ , so in order to know $P(B|A)$ or $P(A|B)$ you at least need knowledge of $P(A \cap B)$ , which you can't in general determine using only $P(A)$ and $P(B)$ .
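A minimal numeric illustration of this (the values below are chosen for convenience): two joint distributions with the same marginals $P(A)=P(B)=0.5$ give different conditional probabilities, so the marginals alone cannot determine $P(A\mid B)$:

```python
def p_a_given_b(p_a_and_b, p_b):
    # conditional probability from the joint: P(A|B) = P(A and B) / P(B)
    return p_a_and_b / p_b

# Both joints are consistent with P(A) = P(B) = 0.5:
independent = p_a_given_b(0.25, 0.5)  # A and B independent: P(A and B) = 0.25
correlated = p_a_given_b(0.40, 0.5)   # same marginals, larger overlap
```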
|
|probability|conditional-probability|bayes-theorem|
| 0
|
The spectrum of an invertible element $x$ is $\sigma(x^{-1})=\{\lambda^{-1}: \lambda\in \sigma(x)\}$
|
Suppose $x$ is invertible in the unital Banach algebra $A$. How can I prove that $\sigma(x^{-1})=\{\lambda^{-1} : \lambda\in \sigma(x)\}$?
|
If $\alpha\in \sigma (x^{-1})$ this means $x^{-1}-\alpha$ is not invertible. This means $\alpha\not=0$ . Suppose by way of contradiction that $x-\alpha^{-1}$ is invertible. In this case, we may find $c$ such that: $$(x-\alpha^{-1})c=c(x-\alpha^{-1})=1\Leftrightarrow$$ $$xc=cx=1+c\alpha^{-1}\Leftrightarrow$$ $$c^{-1}x^{-1}=x^{-1}c^{-1}=(1+c\alpha^{-1})^{-1}\Leftrightarrow$$ $$(1+c\alpha^{-1})c^{-1}x^{-1}=x^{-1} c^{-1}(1+c\alpha^{-1})=1\Leftrightarrow $$ $$(c^{-1}+\alpha^{-1})x^{-1}=x^{-1} (c^{-1}+\alpha^{-1})=1\Leftrightarrow $$ $$(c^{-1}+\alpha^{-1})(x^{-1}-\alpha)=(x^{-1}-\alpha) (c^{-1}+\alpha^{-1})=1-\alpha c^{-1}-1 =-\alpha c^{-1}\Leftrightarrow$$ $$(-\alpha^{-1}-c\alpha^{-2})(x^{-1}-\alpha)=(x^{-1}-\alpha)(-\alpha^{-1}-c\alpha^{-2})=1$$ But this means $x^{-1}-\alpha$ is invertible, which is a contradiction. Hence, $\alpha^{-1}\in \sigma(x)$ and we conclude that: $$\{\alpha^{-1}\:|\: \alpha\in \sigma(x^{-1})\}\subseteq \sigma(x)$$ Reciprocally, if $\beta \in \sigma(x)$ , by what we
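A quick numeric illustration of the statement for matrices (a sketch with a hand-picked triangular matrix, where the spectrum is just the set of diagonal entries):

```python
# Upper-triangular example: sigma(A) is the diagonal {2, 5}.
A = [[2.0, 1.0],
     [0.0, 5.0]]
# Its inverse, also upper-triangular: [[1/2, -1/10], [0, 1/5]].
A_inv = [[0.5, -0.1],
         [0.0, 0.2]]

spectrum = {A[0][0], A[1][1]}                      # {2, 5}
spectrum_of_inverse = {A_inv[0][0], A_inv[1][1]}   # {1/2, 1/5}
```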
|
|operator-theory|spectral-theory|banach-algebras|functional-calculus|
| 0
|
Evaluate the sum $\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $
|
How to sum the sequence $$\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $$ where $n \choose k$ are the usual binomial coefficients in expansion of $(1+x)^n$ . I know how to sum such sequence when the denominator has consecutive multiplicands (by repeatedly integrating the expansion of $(1+x)^n$ ). Does there even exist a general procedure for such sums?
|
Hint: $$\dfrac{\binom nr}{(r+1)(r+3)}=\cdots=\dfrac1{(n+1)(n+2)(n+3)}\cdot\dfrac{(n+3)!(r+2)}{(r+3)!(n+3-(r+3))!}=?$$ Again, $\displaystyle\dfrac{(n+3)!(r+2)}{(r+3)!(n+3-(r+3))!}=\dfrac{(n+3)!(r+3-1)}{(r+3)!(n+3-(r+3))!}=\cdots=(n+3)\binom{n+2}{r+2}-\binom{n+3}{r+3}$
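The decomposition in this hint can be verified exactly for small $n$ (a sketch using exact rational arithmetic; `math.comb` requires Python 3.8+):

```python
from fractions import Fraction
from math import comb

def term(n, r):
    # left-hand side: C(n, r) / ((r + 1)(r + 3))
    return Fraction(comb(n, r), (r + 1) * (r + 3))

def decomposed(n, r):
    # the hint: ((n + 3) C(n+2, r+2) - C(n+3, r+3)) / ((n+1)(n+2)(n+3))
    return Fraction((n + 3) * comb(n + 2, r + 2) - comb(n + 3, r + 3),
                    (n + 1) * (n + 2) * (n + 3))
```

Summing `decomposed` over $r$ then reduces the problem to two hockey-stick-style binomial sums.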
|
|sequences-and-series|algebra-precalculus|binomial-coefficients|binomial-theorem|
| 0
|
Are there attempts to use machine learning to settle disputes over long mathematical proofs?
|
As far as I remember, the proof by Perelman of the Geometrization Conjecture took a year for the experts to check. The claimed proof of the ABC conjecture by Mochizuki took a few years to check, and Peter Scholze, another great mathematician, rejected the proof, while Mochizuki, also a respected mathematician, still insists the proof is correct, as far as I am aware. The classification theorem of finite simple groups is also another example where the proof is so long that very few people can actually understand it, and so the rest of the community has to be skeptical. My Position: If a proof is written in a formal language and checked by an automated proof checker, then for all practical purposes there is no reason to be skeptical anymore. The problem is that human mathematicians do not write proofs in a formal language because it is a very exhausting process (possibly infeasible with long proofs like those of Perelman or Mochizuki). Instead mathematicians write arguments that mix
|
(1) "Given the success of ChatGpt in dealing with human natural language" ?? Maybe Chatgpt can itself answer this question , no humans needed here !! (2) Currently , Chatgpt is a little toy with no understanding of logic & mathematics. What output it gives is not mathematically valid beyond what inputs it was given earlier. (3) There are Automated Theorem Proving Systems , which more suitable for such tasks. Deterministic Processes & Mathematics & logic are involved there , unlike the Arbitrary & Probabilistic Processes in GenAI. (4) Proof Verification is a somewhat easier task having some good Success. In general , Proof Assistants can assist ( not replace ) Mathematicians. UPDATES : (5) OP "Are there attempts to use machine learning to settle disputes over long mathematical proofs?" : we can not use machine learning to settle disputes over short Proofs. Even less so for long Proofs , until & unless Humans check the Output Proof. Why so ? Say we have Proofgpt which converts human Proo
|
|proof-writing|math-software|artificial-intelligence|automated-theorem-proving|
| 0
|
Evaluate the sum $\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $
|
How to sum the sequence $$\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $$ where $n \choose k$ are the usual binomial coefficients in expansion of $(1+x)^n$ . I know how to sum such sequence when the denominator has consecutive multiplicands (by repeatedly integrating the expansion of $(1+x)^n$ ). Does there even exist a general procedure for such sums?
|
Hint: Let $\displaystyle\dfrac{\binom nr}{(r+1)(r+3)}=a\binom{n+3}{r+3}+b\binom{n+2}{r+2}+c\binom{n+1}{r+1}$ where $a,b,c$ are arbitrary constants On simplification, $$\dfrac1{(r+1)(r+3)}=\dfrac{a(n+3)(n+2)(n+1)}{(r+3)(r+2)(r+1)}+\dfrac{b(n+2)(n+1)}{(r+2)(r+1)}+\dfrac{c(n+1)}{r+1}$$ $$\iff r+2=a(n+3)(n+2)(n+1)+b(n+2)(n+1)(r+3)+c(n+1)(r+3)(r+2)$$ Set $r+3=0\implies -3+2=a(n+3)(n+2)(n+1)\implies a=?$ $r+2=0\implies 0=-1+b(n+2)(n+1)(-2+3)\iff b=?$ Comparing the coefficients of $r^2, c=0$
|
|sequences-and-series|algebra-precalculus|binomial-coefficients|binomial-theorem|
| 0
|
What is the time complexity of multiplying two matrices over an arbitrary ring?
|
I know that the time complexity of matrix multiplication over a field is well studied (multiplying two $n \times n$ matrices can be done in $n^\omega$ field operations, where $\omega$ is the matrix multiplication exponent). What about matrix multiplication over a commutative ring, such as a principal ideal domain? Is it the same?
|
Algebraic matrix multiplication algorithms can always be transformed into algorithms that compute bilinear decompositions $$XY = \sum_{k = 1}^{r(n)} \operatorname{tr} (F_k X) \operatorname{tr} (G_k Y) W_k$$ so that the number of operations increases at most by a constant multiple. This works over every ring (and even semiring), and one can define the matrix multiplication exponent for every ring. All currently known matrix multiplication algorithms use integer constants and work over every ring, but it has not been proven that this must hold in general. It is known that for fields the exponent of matrix multiplication depends only on the characteristic of the field (A. Schönhage, "Partial and total matrix multiplication", SICOMP 10(3)), and one can adapt this to prove that transcendental and integral extensions of rings do not change the matrix multiplication exponent.
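As a concrete illustration of "integer constants work over every ring": Strassen's scheme for $2\times 2$ matrices uses only the ring constants $\pm 1$ and never commutes a left factor past a right factor, so it is valid over any (even noncommutative) ring. A sketch with Python integers standing in for ring elements:

```python
def strassen_2x2(A, B):
    # Strassen's 7-multiplication scheme; the only constants are 1 and -1,
    # and every product keeps the A-factor on the left, so the identity
    # holds over an arbitrary (possibly noncommutative) ring.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))
```

Any Python objects supporting `+`, `-`, `*` (e.g. polynomials or matrices as ring elements) could be substituted for the integers here.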
|
|matrices|computer-science|computational-complexity|computational-mathematics|computational-algebra|
| 0
|
Maximizing Differential Entropy with Limited Support and Power Budget Constraint and Expected sigmoid constraint
|
Context I am delving into the study of probability distributions and their entropy, with a particular focus on identifying a distribution that maximizes entropy under specific constraints. An area of interest is the application of the sigmoid function, widely recognized in logistic regression and neural networks, to this problem. The sigmoid function is defined as: $$ \sigma(x) = \frac{1}{1 + e^{-x}} $$ Problem Statement Given a random variable $X$ , I am faced with the following constraints: The expectation of $X^2$ , denoted as $E[X^2]$ , is constrained by a constant $E$ . The expectation of the sigmoid of $X$ , $E[\sigma(X)]$ , is limited to a maximum value $D$ , where $D$ is within the range $(0, 1)$ . The goal is to find the probability distribution of $X$ that achieves maximum differential entropy under these constraints. Hypothesis My initial conjecture is that a binomial distribution with equal probabilities of success and failure might be the optimal solution for maximizing en
|
It’s not entirely clear to me whether you mean discrete or differential entropy, but in either case there’s no maximum because in either case a uniform distribution with values below $D$ can be chosen with arbitrarily large entropy by extending the support towards $-\infty$ , increasing the number of points / the length of the interval and thus the entropy.
|
|probability|statistics|information-theory|entropy|
| 0
|
Prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters....
|
The question Let $VABCD$ be a quadrilateral pyramid with a rectangular base. $\angle AVC =\angle BVD$ prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters. The idea Because the base ABCD is rectangular we get that $DB=AC$ . We also know the congruences of the angles, so I was thinking of showing that VDB is congruent with VDC this will make the perimeters and the areas equal. I don't know how to show this. I hope one of you can help me! Thank you!
|
Let $A(0,0,0),$ $B(0,b,0),$ $C(a,b,0),$ $D(a,0,0)$ and $V(x,y,z)$ . Then, the perimeter condition $VA+VD=VB+VC$ and the area condition $\Vert \vec{VA}\times\vec{VD}\Vert=\Vert\vec{VB}\times\vec{VC}\Vert$ give $$\sqrt{x^2+y^2+z^2}+\sqrt{(x-a)^2+y^2+z^2}=\sqrt{x^2+(y-b)^2+z^2}+\sqrt{(x-a)^2+(y-b)^2+z^2}\tag1$$ and $$y=\frac b2.\tag2$$ If $y=\frac b2$ then $(1)$ is satisfied. So, for counter-example we must choose $y\neq\frac b2.$ Blue already gave counter-examples in comments. My counter-example: $a=2,b=1, x=0.5, y=0.4, z\approx 0.498422$ where $z$ is a solution of $$\frac{x^2+y^2+z^2-ax}{\sqrt{x^2+y^2+z^2}\sqrt{(x-a)^2+y^2+z^2}}=\frac{x^2+(y-b)^2+z^2-ax}{\sqrt{x^2+(y-b)^2+z^2}\sqrt{(x-a)^2+(y-b)^2+z^2}}$$ obtained by the law of cosines as Mathlove did.
|
|geometry|area|angle|rectangles|
| 0
|
How to find $P(A|B)$ when we only know $P(A)$ and $P(B)$?
|
In case you’d want to know: I’m a 6th grade student and I am self-learning probability (that’s one of the things). I know Bayes’ theorem: $$ P(A | B) = \frac{P(B | A) \cdot P(A)}{P(B)} $$ Here’s an example: $A$ : The chance you get sick, which could be $P(A) = 0.000219$ . $B$ : The chance you test negative, which could be $P(B) = 0.9993$ . Our goal is to find $P(A | B)$ which is the chance you are sick given you test negative. This is basically the false negative rate. There are three things we need: The chance you get sick, which we have: it’s $0.021$ . The chance you test negative, which we also have: it’s $0.9993$ . The chance you test negative given you are sick, which we don’t know. This is the specificity. To find the false negative rate, what would we do? We need $P(B | A)$ to find $P(A | B)$ , but how do we find $ P(B | A)$ when we only know $P(A)$ and $P(B)$ (assuming we don’t have things like data sets at hand)?
|
Here’s another way to see why this can’t be possible. A perfect test would have $P(B)=1-P(A)$ and $P(A\mid B)=P(B\mid A)=0$ . If you could determine $P(A\mid B)$ from $P(A)$ and $P(B)$ , then $P(B)=1-P(A)$ would have to yield $P(A\mid B)=0$ (since that’s what it would have to yield for a perfect test). But surely not every test that has $P(B)=1-P(A)$ (for instance uniformly drawing a random number in $[0,1]$ and giving a negative result if it’s $\ge P(A)$ ) has $P(A\mid B)=0$ .
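The underdetermination can also be seen numerically: with the marginals fixed, the joint probability $P(A\cap B)$ is still free to move, and $P(A\mid B)$ moves with it. A toy sketch (the numbers $0.1$, $0.9$ and the chosen joints are my own illustration, not from the question):

```python
def joint(p_ab, p_a, p_b):
    # A 2x2 joint table with P(A)=p_a, P(B)=p_b and P(A and B)=p_ab.
    # Any p_ab in [max(0, p_a+p_b-1), min(p_a, p_b)] gives a valid distribution.
    return {('A', 'B'): p_ab,
            ('A', '~B'): p_a - p_ab,
            ('~A', 'B'): p_b - p_ab,
            ('~A', '~B'): 1 - p_a - p_b + p_ab}

p_a, p_b = 0.1, 0.9
conditionals = []
for p_ab in (0.0, 0.05, 0.09):
    j = joint(p_ab, p_a, p_b)
    assert abs(j[('A', 'B')] + j[('A', '~B')] - p_a) < 1e-12  # marginal P(A)
    assert abs(j[('A', 'B')] + j[('~A', 'B')] - p_b) < 1e-12  # marginal P(B)
    conditionals.append(j[('A', 'B')] / p_b)                  # P(A|B)

print(conditionals)  # three different P(A|B) values, identical marginals
```

All three distributions have exactly the same $P(A)$ and $P(B)$, yet $P(A\mid B)$ differs, so no formula in $P(A)$ and $P(B)$ alone can pin it down.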
|
|probability|conditional-probability|bayes-theorem|
| 0
|
A definite integral over the unit sphere
|
Is there a closed form for the following definite integral over the unit sphere? $$I = \int_0^{\pi}d\theta \sin\theta \int_0^{2\pi}d\phi |x (\sin\theta\cos\phi)^2 +(1-x)(\sin\theta\sin\phi)^2 - (\cos\theta)^2| $$ where $x\in[0,2]$ . This came up in a physics problem. Mathematica (surprisingly) returned the result $I = 4 \pi \sqrt{\frac{x}{(x+1)^3}}$ which really only agrees with the numerical integration at $x=\frac{1}{2}$ , so it is most likely wrong. Since the function inside the absolute value signs is just a weighted sum of squares of the coordinates of the unit directional vector, I thought that it might be dealt with by going to some sort of ellipsoidal coordinate system. But so far I have not found a way to make it work. Update 1 : Thanks to the comment from @SangchulLee , I was able to proceed a bit further to the following intermediate result. First, we substitute $$u(\phi)=x\cos^2\phi+(1-x)\sin^2\phi$$ Then, the integral reduces to $$ I = \int_0^{2\pi}d\phi\int_{-1}^1dt | (1
|
With the help of Mathematica, I was finally able to find a closed form, though I am not able to construct a proof of it. I will note down the result in the following, in the hope that it will still be of some value to someone who comes across similar problems in the future. Since the integrand is $\pi$ -periodic and even in $\phi$ , we get \begin{equation} \begin{split} I(x) &= \frac{32}{3} \int_0^{\pi/2} d\phi \sqrt{\frac{u(\phi)^3}{1+u(\phi)}} \Theta(u(\phi)>0) \\ &= \frac{8}{3} \int_{-1}^{1} dy \sqrt{\frac{[(2x-1)y+1]^3}{(1-y^2)[(2x-1)y+3]}} \Theta((2x-1)y+1) \end{split} \end{equation} where we have substituted $y=\cos2\phi$ . Using Mathematica, we find the last expression to be \begin{equation} I(x) = \frac{16}{3} \sqrt{\frac{x}{2-x}} \Re\left( (x-1) K\left(\frac{1-2x}{x(x-2)}\right) - (x-2) E\left(\frac{1-2x}{x(x-2)}\right) \right) \end{equation} where $\Re$ denotes the real part, $K$ is the complete elliptic integral of the first kind, and $E$ is the complete elliptic integral of the second kind.
|
|integration|multivariable-calculus|definite-integrals|
| 0
|
Prove that it exists $i \neq j$ s.t. $P(A_i \cap A_j) \geq \frac{nc^2-c}{n-1}$ with $P(A_i) \geq c $
|
Question: Let $A_1,A_2,...,A_n \subset \Omega $ be a sequence of events such that $P(A_i) \geq c $ for all $i$ . Prove that there exist $i \neq j$ s.t. $P(A_i \cap A_j) \geq \frac{nc^2-c}{n-1}$ . I have tried different ways, including the following one, but I did not succeed in concluding. 1-First note that: $\int (\sum_{1 \leq i \leq n} I_{A_i})^2dP= \sum_{i \neq j} \int I_{A_i}I_{A_j}dP + \sum_{i = j} \int I_{A_i}^2dP= \sum_{i \neq j} P(A_i \cap A_j) + \sum_{i = j}P(A_i)$ 2-Now we know that $\sum_{i = j}P(A_i)$ is the sum of $n$ elements, thus $\sum_{i = j}P(A_i) \geq nc$ 3-We know too that $\sum_{i \neq j} P(A_i \cap A_j)$ is the sum of $n(n-1)$ elements. Moreover, for a contradiction, let us suppose that $P(A_i \cap A_j) < \frac{nc^2-c}{n-1}$ for all $i \neq j$ . But after that I don't succeed in continuing. Can someone help me please? Thanks for your help.
|
Remark: $z = \sum_{i = j}P(A_i)$ . 4-Let us define $X = \sum_i I_{A_i} $ ; by Jensen's inequality we have that $(E[X])^2 \leq E[X^2]$ , or equivalently $(E[X])^2 = (\int \sum_i I_{A_i} dP)^2 = ( \sum_i P(A_i))^2 = z^2 \leq E[X^2] = \int (\sum_i I_{A_i} )^2 dP $ . 5-Remember that by the contradiction assumption we have: $ E[X^2]= \sum_{i \neq j} P(A_i \cap A_j) + \sum_{i = j}P(A_i) < n(n-1)\cdot\frac{nc^2-c}{n-1} + z = nc(nc-1)+z$ . 6-From "4-" and "5-" combined we get that $z^2 < nc(nc-1)+z$ , i.e. $z(z-1) < nc(nc-1)$ . On the other side, $z^2 - z$ is an increasing function on $z \geq 1$ , and $nc \geq 1$ (otherwise $\frac{nc^2-c}{n-1}$ is negative and the given inequality is trivially satisfied). And remember that $z=\sum_{i = j}P(A_i)$ with $P(A_i) \geq c$ for all $i$ , so $z \geq nc $ . So we should have $z(z-1) \geq nc(nc-1)$ and not $z(z-1) < nc(nc-1)$ , a contradiction. Q.E.D.
|
|probability|probability-theory|measure-theory|inequality|
| 1
|
In writing a proof how are we certain whether some part of a goal in the proof can be ignored or not?
|
I ask this question because I probably meet the same obstacle when writing other proofs. My example would be $(0\le a\lt b)\implies(0\le\sqrt{a}\lt\sqrt{b})$ . The goal is $(0\le\sqrt{a}\lt\sqrt{b})$ , but here $0\le\sqrt{a}$ is like an assumption or a definition, so the proof of $0\le\sqrt{a}$ is ignored. Could we add these extra assumptions or definitions to the proof in order to make our proofs more readable? How are we certain whether some part of a goal in the proof can be ignored or not? I asked about this proof before; it is from the instructor's solutions manual of a textbook. Same proof
|
Actually, the proof of $a < b \implies \sqrt a < \sqrt b$ is sufficient to prove your question. Just take $a = 0$ and $b = a$ to prove $0 < \sqrt a$ for $a > 0$ in the above statement. For the equality case $a = 0$ , the proof is trivial as $\sqrt a = 0$ . I think the book should mention explicitly what they are trying to do.
|
|proof-writing|proof-explanation|
| 1
|
Evaluate the sum $\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $
|
How to sum the sequence $$\sum_{k=0}^n \frac{\binom n k}{(k+1)(k+3)} $$ where $n \choose k$ are the usual binomial coefficients in expansion of $(1+x)^n$ . I know how to sum such sequence when the denominator has consecutive multiplicands (by repeatedly integrating the expansion of $(1+x)^n$ ). Does there even exist a general procedure for such sums?
|
One can proceed similarly as in Is there a closed formula for $\sum_{k=0}^{n}\frac{1}{(k+1)(k+2)}\binom{n}{k}$? : Integrating $$ \sum_{k=0}^n \binom n k x^k = (1+x)^n $$ gives $$ \sum_{k=0}^n \binom n k \frac{x^{k+1}}{k+1} = \frac{1}{n+1}\bigl((1+x)^{n+1} - 1 \bigr). $$ Now multiply by $x$ and then integrate again, for $0 \le x \le 1$ : $$ \sum_{k=0}^n \binom n k \frac{1}{(k+1)(k+3)} = \frac{1}{n+1} \int_0^1 x\bigl((1+x)^{n+1} - 1 \bigr) \, dx \, . $$ The integral can be evaluated with integration by parts, the result is $$ \sum_{k=0}^n \binom n k \frac{1}{(k+1)(k+3)} = \frac{2^{n+3} - (n+4)}{2(n+2)(n+3)} \, . $$
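The closed form above can be checked with exact rational arithmetic; a short sketch comparing the sum and the formula for small $n$:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # the original sum, computed exactly
    return sum(Fraction(comb(n, k), (k + 1) * (k + 3)) for k in range(n + 1))

def rhs(n):
    # the closed form from the integration-by-parts computation
    return Fraction(2 ** (n + 3) - (n + 4), 2 * (n + 2) * (n + 3))

assert all(lhs(n) == rhs(n) for n in range(30))
print(lhs(1), rhs(1))  # both 11/24
```

For example $n=1$ gives $\frac13 + \frac18 = \frac{11}{24}$, matching $\frac{16-5}{2\cdot 3\cdot 4}$.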
|
|sequences-and-series|algebra-precalculus|binomial-coefficients|binomial-theorem|
| 1
|
Prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters....
|
The question Let $VABCD$ be a quadrilateral pyramid with a rectangular base. $\angle AVC =\angle BVD$ prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters. The idea Because the base ABCD is rectangular we get that $DB=AC$ . We also know the congruences of the angles, so I was thinking of showing that VDB is congruent with VDC this will make the perimeters and the areas equal. I don't know how to show this. I hope one of you can help me! Thank you!
|
Intuitively, consider (an easier 3d development of) the pyramid VABCD. Figure at left: draw identical circles separately through $(AD,BC)$ whose line of centers is parallel to AB or CD. Note the altitudes from $(V_1,V_2)$ to AD or BC are equal. Also note that the angles subtended at $(V_1,V_2)$ inside identical circle arcs are equal, by virtue of symmetry with respect to axis Ox. Now fold the triangles $(V_1DA, V_2CB)$ along fold/hinge lines (DA,CB) inward until $V_1$ , $V_2$ come together at a common point, call it vertex V. The symmetry (coming from identical circles) forces Ox to bisect sides AB and CD by the plane OxV. Also note that angles contained in the circular segments DVA, CVB in those inclined triangles must be equal. For lines in the plane of ABCD there is symmetry about a central horizontal mirror x-axis, but not about the y-axis. Altitude $h$ is perpendicular to the plane of ABCD. Triangles (VAC, VBD) are congruent, which proves the claim. By projection of sides in the unsymmetrical tetrahedron Vertex V mu
|
|geometry|area|angle|rectangles|
| 0
|
Finding angle relative velocity makes with unit vector i.
|
I've attempted this question but can't seem to get the right answer. Three identical particles A, B and C are moving in a plane and, at time $t$ , their position vectors, $â$ , $b̂$ and $ĉ$ , with respect to an origin O, are (in metres): $$â=(2t+1)î+(2t+3)ĵ$$ $$b̂=(10-t)î+(12-t)ĵ$$ $$ĉ=(t^{3}-15t+4)î+(-3t^{2}+2t+1)ĵ$$ The first part of the question was to find the magnitude of the velocity of particle C relative to particle A when $t=2.00s$ , which I did correctly. I substituted $t=2$ into the position vectors and then found the relative displacement: ( $â-ĉ=23î+14ĵ$ ). Then I calculated its magnitude using Pythagoras ( $\sqrt{23^{2}+14^{2}}=26.9$ , after which I divided by $t=2$ to find the relative velocity, which is $13ms^{-1}$ . The second part of the question asks to find the angle which this relative velocity makes with the unit vector $î$ at this time. I drew myself a little diagram and realised that the required angle should be $90°$ plus the angle which the relative velocity v
|
$V_{(a,o)}=2i +2j$ (differentiating w.r.t. $t$ ) and $ V_{(c,o)}=(3t^2-15)i+(-6t+2)j$ , where $V_{(a,o)}$ and $V_{(c,o)}$ are velocities relative to the origin. Now $V_{(c,a)}=V_{(c,o)}+V_{(o,a)}=(3t^2-15)i+(-6t+2)j-2i -2j=(3t^2-17)i-6tj$ . Now substitute $t=2$ for the velocity of $c$ relative to $a$ , which is $-5i-12j$ . For the second part take the dot product with $i$ ; then $\cos (\theta)=\dfrac{-5}{13}$ .
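The differentiation and the angle can be checked symbolically; a small SymPy sketch of the steps above:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Matrix([2*t + 1, 2*t + 3])
c = sp.Matrix([t**3 - 15*t + 4, -3*t**2 + 2*t + 1])

v_rel = sp.diff(c - a, t)        # velocity of C relative to A: (3t^2-17, -6t)
v2 = v_rel.subs(t, 2)            # at t = 2 this is (-5, -12)
speed = sp.sqrt(v2.dot(v2))      # magnitude 13
theta = sp.acos(v2[0] / speed)   # angle with the unit vector i
print(v2.T, speed, (theta * 180 / sp.pi).evalf())
```

Note that differentiating first and then substituting $t=2$ gives the correct relative velocity; dividing the relative displacement by $t$ (as in the question's first attempt) only works for constant velocities.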
|
|vectors|classical-mechanics|
| 0
|
Example of a point that is not the limit of any sequence in a connected topological space
|
Question: Let $X$ be a connected space with a topology not necessarily sequential. What is an example where a point in $X$ is not the limit of any not eventually constant sequence? Motivation. Suppose $X$ is a topological vector space over $\mathbb{R}$ or $\mathbb{C}$ . If we define a sequentially separated set $S$ of $X$ as such that, for every $x\in S$ , $x$ lies outside the sequential closure of subspace $Y_x:=\text{span}(S\setminus \{x\})$ . I'm trying to use the usual Zorn's lemma argument claiming there always exists a maximal such set. But it seems, if $X$ has a not-so-nice topology, there might be points which cannot be approximated by any not eventually constant sequence, and that can derail the reasoning. I came to think about this issue when trying to understand uncountable Schauder basis. Thanks.
|
Let $X = \omega_1\times [0, 1) \cup \{\omega_1\}$ where $\omega_1\times [0, 1)$ is given lexicographic order and $\omega_1$ is a point which is greater than all points of $\omega_1\times [0, 1)$ . Give $X$ the order topology. Then $X$ is connected and $\omega_1$ is not a limit of a not eventually constant sequence. Imagine $X$ as a long closed interval, a modification of the long ray to which we add a top element $\omega_1$ , or as the ordinal $\omega_1+1$ in which we fill in the gaps between ordinals using segments.
|
|general-topology|topological-vector-spaces|schauder-basis|
| 1
|
Upper bound of the derivative of a rational function $\frac{f}{f+g}$
|
Let $f$ and $g$ be real nonzero polynomials with nonnegative coefficients such that the degree of $g$ equals the degree of $f$ plus one, that is $\deg(f+g) = \deg(f) + 1$ , $(f+g)(0) \neq 0$ . Let $$ h = \frac{f}{f+g}. $$ In particular, it follows that $h$ is well defined on the interval $[0,+\infty)$ , and $$ 0 \le h(x) \le 1, \qquad x \ge 0. $$ Even more, using the partial fractions decomposition, we have that there is a constant $C > 0$ such that the $n$ -th derivative of $h$ satisfies $$ |h^{(n)}(x)| \le C \frac{n!}{x^{n+1}}, \qquad x > 0,\ n \in \mathbb{N}. $$ Is it true that we can choose $C$ as above that does not depend on the coefficients of polynomials $f$ and $g$ ?
|
I don't think we can. As a counterexample, take $n=0$ and set $f_k(x)=k$ and $g_k(x)=x$ , so that we have a sequence of these kinds of functions: $$ h_k(x)=\frac{k}{k+x}. $$ If there were such a $c$ , independent of the coefficients of $f$ and $g$ , we would have $$ h_k(x)\leq \frac{c}{x}, $$ independently of $k$ , that is to say, $x\cdot h_k(x)$ is bounded independently of $k$ . However, we have that $$ \lim_{x\rightarrow \infty}x\cdot h_k(x)=k, $$ so this cannot be true (just take $k$ as for example $\lceil c\rceil+1$ and $x$ sufficiently big).
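A quick numerical sanity check of the counterexample: $x\,h_k(x)=\frac{kx}{k+x}$ approaches $k$ for large $x$, so no $k$-independent bound $c/x$ can hold. A minimal sketch:

```python
def h(k, x):
    # h_k(x) = f_k / (f_k + g_k) with f_k(x) = k and g_k(x) = x
    return k / (k + x)

# x * h_k(x) tends to k as x grows, ruling out a bound c/x uniform in k
x = 10.0 ** 9
for k in (1, 10, 1000):
    print(k, x * h(k, x))
```

For each $k$ the printed value is essentially $k$ itself, so $\sup_x x\,h_k(x)$ grows without bound in $k$.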
|
|real-analysis|polynomials|rational-functions|
| 1
|
Example of a point that is not the limit of any sequence in a connected topological space
|
Question: Let $X$ be a connected space with a topology not necessarily sequential. What is an example where a point in $X$ is not the limit of any not eventually constant sequence? Motivation. Suppose $X$ is a topological vector space over $\mathbb{R}$ or $\mathbb{C}$ . If we define a sequentially separated set $S$ of $X$ as such that, for every $x\in S$ , $x$ lies outside the sequential closure of subspace $Y_x:=\text{span}(S\setminus \{x\})$ . I'm trying to use the usual Zorn's lemma argument claiming there always exists a maximal such set. But it seems, if $X$ has a not-so-nice topology, there might be points which cannot be approximated by any not eventually constant sequence, and that can derail the reasoning. I came to think about this issue when trying to understand uncountable Schauder basis. Thanks.
|
When looking for pathological examples in topology (or just counterexamples, in general), it is usually best to start with very small, simple cases. Don't make your life too complicated—try to find the smallest space which has the properties you need. In this case, consider a two point set $\{a,b\}$ with the topology $\{\varnothing, \{a\}, \{a,b\}\}$ . This space is connected, as there do not exist two nonempty disjoint open sets (let alone two nonempty disjoint open sets whose union is the entire space). Any sequence which has limit $a$ must eventually be constantly $a$ . NB: As FShrike points out in the comments, this space is the Sierpinski space . I've never heard that term before, so I learned something new today. Huzzah.
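Because the space is finite, both claims can even be checked mechanically. The sketch below encodes the two-point topology, verifies connectedness by brute force, and tests convergence using finite sequences as a stand-in for tails of infinite ones (a simplification I am making for illustration):

```python
from itertools import combinations

points = {'a', 'b'}
opens = [set(), {'a'}, {'a', 'b'}]  # the Sierpinski topology

# connected: no pair of disjoint nonempty open sets covers the space
disconnections = [(U, V) for U, V in combinations(opens, 2)
                  if U and V and not (U & V) and (U | V) == points]
print(disconnections)  # [] -- the space is connected

def converges(seq, p):
    # seq -> p iff some tail of seq lies inside every open set containing p
    return all(any(all(x in U for x in seq[i:]) for i in range(len(seq)))
               for U in opens if p in U)

# {'a'} is open, so converging to 'a' forces the sequence to be
# eventually constantly 'a'
print(converges(['b', 'b', 'a', 'a', 'a'], 'a'))  # True
print(converges(['a', 'b'] * 10, 'a'))            # False
```

The check makes the key point concrete: since $\{a\}$ itself is open, any sequence converging to $a$ must eventually sit inside $\{a\}$.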
|
|general-topology|topological-vector-spaces|schauder-basis|
| 0
|
Integrable function and measure space
|
I have two similar problems in measure spaces. First: let $f$ be an integrable function on a measure space $(X,M,\mu)$ such that $$\int_{E} f \, d\mu = 0$$ for all sets $E \in M$ . Second: let $f$ be an integrable function on the measure space $(\mathbb R,L,\lambda)$ (that is, Lebesgue space and measure) such that $$\int_{a}^b f \, d\lambda = 0$$ for all $-\infty < a \le b < \infty$ . I want to prove that $f=0$ a.e. in both cases. In both cases I have the intuition that I can use the fact: $$\lambda\left\{x\mid f(x)\geq \frac 1n\right\}\leq n\int f \, d\lambda=0,$$ but I can't apply that precisely. Could you give some hints?
|
First, $f = \max(0, f)-\max(0,-f)$ . Let us focus on $\max(0, f)$ , that is, where the function is positive; we denote it by $f$ for convenience. For $\max(0,-f)$ the proof is very similar. 1-We know that: $\{x \in X: f(x)>0\}=\bigcup_n \{x \in X: f(x)\geq \frac 1n\}$ 2-Now let's note the following inequality: $0=\int_{x \in X}^{}f(x) d \mu (x) = \int_{x \in X: f(x)>0}^{}f(x) d \mu (x) + \int_{x \in X : f(x)=0}^{}f(x) d \mu (x) = \int_{x \in X: f(x)>0}^{}f(x) d \mu (x) \geq \int_{x \in X : f(x) \geq \frac{1}{n}}f(x) d \mu (x) \geq \frac{1}{n} \mu(\left \{ x \in X: f(x) \geq \frac{1}{n} \right \}) \geq 0 \Rightarrow \mu(\left \{ x \in X: f(x) \geq \frac{1}{n} \right \})=0$ Rem: by definition $\mu(.) \geq 0 $ . We have now proved that for all $n \in \mathbb{N}$ we have $\mu(\left \{ x \in X: f(x) \geq \frac{1}{n} \right \})=0$ 3-Apply $\mu(.)$ on both sides of the equality in "1-" to get: $\mu (\{x \in X: f(x)>0\})=\mu (\bigcup_n \{x \in X: f(x)\geq \frac 1n\})$ . But $0 \leq \mu (\bigcup_n
|
|measure-theory|
| 0
|
How to use computer algebra system
|
Consider the two vector-valued functions \begin{align} &f(x,y)=(x+y^2, y+x^3) \\ & g(x,y)=(2x+y^3, 2y+x^4). \end{align} Then \begin{align} f(g(x,y))&=((2x+y^3)+(2y+x^4)^2,(2y+x^4)+(2x+y^3)^3) \\ &= (2x+4y^2+y^3+4x^4y+x^8, 2y+8x^3+x^4+12x^2y^3+6xy^6+y^{9}) \end{align} gives the vector substitution $f \circ g$ . By vector substitution, I mean the vector composition $f \circ g=(f_1(g_1(x,y),g_2(x,y)), f_2(g_1(x,y),g_2(x,y)))$ , where $f(x,y)=(f_1(x,y),f_2(x,y))$ and $g(x,y)=(g_1(x,y),g_2(x,y))$ . Can you please suggest some computer algebra code (e.g., PARI/GP, SAGE, SymPy etc.) doing the above operation? If it were one-variable functions like $f(x)=\sum_{i=1}^n a_ix^i$ and $g(x)=\sum_{j=1}^{m}b_jx^j$ , then the PARI/GP function subst(f,x,g) would compute $f \circ g$ . But in our case, these are vector-valued functions. Indeed, I want to find some non-trivial $g(x,y)=(\cdots, \cdots)$ that commutes with $f(x,y)=(x+y^2, y+x^3)$ .
|
In Mathematica or Wolfram Language we have maps $$f,g : V_2\to V_2$$

f[x_] := {x[[1]] + x[[2]]^2, x[[2]] + x[[1]]^3}
g[x_] := {2 x[[1]] + x[[2]]^3, 2 x[[2]] + x[[1]]^4}

f[g[{x, y}]]
{2 x + y^3 + (x^4 + 2 y)^2, x^4 + 2 y + (2 x + y^3)^3}

f[g[{x, y}]] // Expand
{2 x + x^8 + 4 x^4 y + 4 y^2 + y^3, 8 x^3 + x^4 + 2 y + 12 x^2 y^3 + 6 x y^6 + y^9}
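Since the question also asks about SymPy, here is an equivalent sketch there. The key point is to substitute both components simultaneously (`simultaneous=True`), because substituting $x\mapsto g_1$ and then $y\mapsto g_2$ sequentially would corrupt the $y$'s inside $g_1$:

```python
import sympy as sp

x, y = sp.symbols('x y')

f = (x + y**2, y + x**3)
g = (2*x + y**3, 2*y + x**4)

def compose(f, g):
    # (f o g)(x, y) = (f1(g1, g2), f2(g1, g2)), substituting in parallel
    return tuple(sp.expand(fi.subs([(x, g[0]), (y, g[1])], simultaneous=True))
                 for fi in f)

print(compose(f, g))
```

The output agrees with the expanded Mathematica result above.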
|
|polynomials|matrix-decomposition|substitution|computer-algebra-systems|
| 0
|
Intuition behind the component formula of the scalar product
|
I have learned the component formula for the scalar product for quite a while, but really, it doesn't seem intuitive at all. I have 2 questions that confuse me while imagining what the scalar product really is, which are: 1- Why is the scalar product defined as $\vec{u} . \vec{v} = \sum u_{i}v_{i}$ (It doesn't make sense at all). 2- Does the concept of angles exist in dimensions higher than 3? In other words, can we always say that $\vec{u} . \vec{v} = ||\vec{u}|| \times ||\vec{v}|| \times \cos \theta$ ?
|
In a nutshell, the dot/scalar product is an example of what we call an inner product . The inner product is an operation that we use to make sense of the concept of angles and lengths in an abstract vector space, e.g. higher dimensional vector spaces. The point is, at the beginning we define angles and then the cosine, and literally see that $\cos\theta$ represents the projection of the vector $\vec{u}$ in the direction of $\vec{v}$ (where the angle between them is $\theta$ ). However, the opposite approach is valid as well: one defines an operation called the inner product, which satisfies some of the properties that we expect from the idea of "projection", which are: Fixing the vector $\vec{v}$ , we expect linearity in the first component: $(\lambda \vec{u} + \vec{w})\cdot \vec{v} = \lambda(\vec{u}\cdot \vec{v}) + \vec{w}\cdot \vec{v}$ We also expect symmetry of the projection, i.e. the projection of $\vec{u}$ onto $\vec{v}$ is the same as the projection of $\vec{v}$ onto $\vec{u}$ : $\vec{u}\cdot \v
|
|linear-algebra|inner-products|
| 0
|
Triangle inequality for complex numbers
|
I just start to learn about complex numbers and I want to prove the triangle inequality, which says that if $ z $ and $ w $ are complex numbers, then $ \displaystyle |z + w| \le |z| + |w|. $ My approach is to square both sides of the inequality (since each side is nonnegative) to obtain the equivalence $ |x| \ge x $ for every $ x \in \mathbb{C}. $ Now squaring the right hand side yields $ |z|^2 + 2|z||w| + |w|^2, $ but for the left hand side, why doesn't it hold that $ |z + w|^2 = (z + w)^2 = z^2 + 2zw + w^2 $ like with real numbers?
|
Below is a full answer, using basic complex number properties. On one side we have: $|z+w|^2=(z+w)(z+w)^*= (z+w)(z^*+w^*)=zz^*+zw^*+wz^*+ww^*=|z|^2+|w|^2+2\operatorname{Re}(zw^*)$ On the other side we have: $(|z|+|w|)^2=|z|^2+|w|^2+2|z||w|$ Now, as we know that for all $z=x+iy \in \mathbb{C}$ we have $\operatorname{Re}(z)=x \leq |z|= \sqrt{x^2+y^2}$ , it follows that $\operatorname{Re}(zw^*) \leq |zw^*| = |z||w|$ . Hence we can conclude that: $|z+w|^2 \leq (|z|+|w|)^2 \Rightarrow |z+w| \leq |z|+|w| $ Q.E.D. P.S. I do not know why my last answer was arbitrarily deleted for no reason, as it seems to me that this answer is correct and can help other students.
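The identity $|z+w|^2 = |z|^2+|w|^2+2\operatorname{Re}(zw^*)$ and the bound $\operatorname{Re}(zw^*)\le|z||w|$ used in the proof can be spot-checked numerically; a small random-sampling sketch:

```python
import random

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # |z+w|^2 = |z|^2 + |w|^2 + 2 Re(z w*)
    assert abs(abs(z + w)**2
               - (abs(z)**2 + abs(w)**2 + 2 * (z * w.conjugate()).real)) < 1e-9
    # Re(z w*) <= |z w*| = |z||w|, which yields the triangle inequality
    assert (z * w.conjugate()).real <= abs(z) * abs(w) + 1e-12
    assert abs(z + w) <= abs(z) + abs(w) + 1e-12
print("all checks passed")
```

This is of course no substitute for the algebraic proof, just a sanity check of each step.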
|
|complex-numbers|
| 0
|
Show by hand : $e^{e^2}>1000\phi$
|
Problem: Show by hand without any computer assistance: $$e^{e^2}>1000\phi,$$ where $\phi$ denotes the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$ . I came across this limit, showing: $$\lim_{x\to 0}x!^{\frac{x!!^{\frac{2}{x!!-1}}}{x!-1}}=e^{e^2}.$$ I cannot show it without knowing some decimals, and if so, using power series or continued fractions. It seems challenging and perhaps a tricky calculation is needed. If you have no restriction on the method, how to show it with pencil and paper? Some approach: We have, using the incomplete gamma function and continued fractions: $$\int_{e}^{\infty}e^{-e^{-193/139}x^{193/139+2}}dx=\frac{139}{471}\cdot e\cdot\operatorname{Ei}_{332/471}(e^2)>e^{-e^2},$$ where $\operatorname{Ei}$ denotes the exponential integral. Finding an integral for the golden ratio $\phi$ is needed now. Following my comment we have : $$e^{-e^2} Where the function in the integral follow the current order for $x\ge e$ .As said before use continued fraction of incomple
|
$\color{green}{\textbf{Full Solution.}}$ Should be proven the inequality $$e^2>\ln(1000\varphi),$$ wherein $$10^{\frac16} $$\ln(1000) $$\,\dfrac{\varphi^2}{e}=\dfrac{\sqrt5+3}{2e} where $z=\dfrac{333}{17726},$ $$2\ln\varphi-1 and then $$\ln(1000\varphi) Therefore, the given inequality is correct too.
|
|inequality|constants|golden-ratio|number-comparison|
| 0
|
$\Gamma(N)$ -inequivalent cusps clarification
|
We know that $\Gamma(N)$ has at most $[\Gamma(1):\Gamma(N)]$ inequivalent cusps, given possibly by $g_i \infty$ where the $g_i$ are coset representatives of the subgroup $\Gamma(N)$ . Then I don't understand why the number of inequivalent cusps is actually $[\Gamma(1):\Gamma(N)]/N$ . We know that $[\Gamma(1)_{\infty}: \Gamma(N)_{\infty}]=N$ (by explicitly describing these groups) and I am not sure how to use this to conclude the argument. Here $\Gamma(1)_{\infty},\Gamma(N)_{\infty}$ are the stabilizers of $\infty$ .
|
Intuitively, this should make sense: For every coset $[g_i]$ , we are considering $g_i\infty$ as a potential cusp - but we do not want to count the same cusps twice. Thus, we want to ignore all the cosets $[g_i]\neq [1]$ satisfying $g_i\infty = \infty$ , since these will be redundant. It turns out that only one in $N$ cosets actually moves $\infty$ - this is what $[\Gamma(1)_\infty:\Gamma(N)_\infty]=N$ means. Further, for two distinct cosets $[g]$ and $[h]$ , if $g\infty = h\infty$ , we see that $h^{-1}g\infty = \infty$ , i.e. we only want to count one of the cosets $[g]$ and $[h]$ , as $[h^{-1}g]$ acts similarly to $[1]$ . Thus, it works out to simply dividing $[\Gamma(1):\Gamma(N)]$ by $N$ . Let me also supply a rigorous group theory argument. Take the quotient $G:=\Gamma(1)/\Gamma(N)$ , and let us consider the subgroup $K:=\Gamma(1)_\infty/(\Gamma(1)_\infty\cap \Gamma(N))$ . Now, it should be clear that $\Gamma(1)_\infty\cap \Gamma(N) = \Gamma(N)_\infty$ , and so $\#K=N$ . As you have stat
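The count index$/N$ can be sanity-checked against the known cusp numbers of $\Gamma(N)$. The sketch below assumes the standard index formula $[\Gamma(1):\Gamma(N)] = \frac{N^3}{2}\prod_{p\mid N}(1-p^{-2})$ in $\mathrm{PSL}(2,\mathbb Z)$ for $N>2$ (without the factor $\frac12$ for $N\le 2$, since $-I\in\Gamma(N)$ there); this formula is an input I am adding, not part of the answer above:

```python
from sympy import primefactors, Rational

def psl_index(N):
    # [Gamma(1) : Gamma(N)] in PSL(2,Z): N^3 prod_{p|N} (1 - 1/p^2),
    # halved for N > 2 because -I lies in Gamma(N) only for N <= 2
    idx = Rational(N) ** 3
    for p in primefactors(N):
        idx *= 1 - Rational(1, p**2)
    return idx if N <= 2 else idx / 2

def num_cusps(N):
    # the statement in question: number of inequivalent cusps = index / N
    return psl_index(N) / N

for N in range(1, 8):
    print(N, psl_index(N), num_cusps(N))
```

The output reproduces the classical values (e.g. $\Gamma(3)$: 4 cusps, $\Gamma(5)$: 12, $\Gamma(7)$: 24), consistent with dividing the index by $N$.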
|
|number-theory|modular-forms|
| 1
|
Prove that $\mathbb{E}\exp{\lambda\xi} \le \exp\left(\lambda^2 \Vert\xi\Vert_{\psi_2}^2\right)$
|
Problem: Let $\xi$ be a real random variable. We say that $\xi$ is $\psi_2$ when $\exists \lambda >0$ such that $\mathbb{E}\exp(\xi^2/\lambda^2) \le e$ . We denote by $\Vert \xi \Vert_{\psi_2}$ the infimum of such $\lambda$ , that is \begin{align*} \Vert \xi \Vert_{\psi_2} = \inf\left\{\lambda > 0: \mathbb{E}\exp(\xi^2/\lambda^2) \le e\right\}. \end{align*} Suppose that $\xi$ is $\psi_2$ and that $\mathbb{E}\xi = 0$ . Prove that $\forall \lambda >0$ $$\mathbb{E}\exp(\lambda \xi) \le \exp\left(\lambda^2 \Vert \xi\Vert_{\psi_2}^2\right).$$ My attempt: By using the inequality $e^{y} \le y+ e^{y^2}$ , we have \begin{align*} \mathbb{E}\exp(\lambda \xi) &= \int_{-\infty}^{+\infty}\exp(\lambda t) f_{\xi}(t) dt\\ & \le \int_{\infty}^{+\infty}(\lambda t + e^{\lambda^2 t^2})f_\xi(t) dt\\ & \le \lambda \int_{-\infty}^{+\infty}t f_\xi(t)dt + \int_{-\infty}^{+\infty}e^{\lambda^2 t^2}f_\xi(t)dt \\ & = \lambda \mathbb{E}\xi + \int_{-\infty}^{+\infty}e^{\lambda^2 t^2}f_\xi(t)dt = \int_{-\infty}^{+\infty}e^{\lambda^2 t^2}f_\xi(t)dt. \end{align*}
|
If $\lambda \left\|\xi\right\|_{\psi_2} > 1$ : Let $\mu=\left\|\xi\right\|_{\psi_2}$ , so that $\lambda\mu > 1$ . $$\lambda \xi = \mu \lambda \times \frac\xi\mu \le \frac12\mu^2 \lambda^2 + \frac12\left(\frac\xi\mu\right)^2$$ Take the expectation of the exponential; using the fact that $u\mapsto \sqrt u$ is concave, you have $$\mathbb E \left[\exp \lambda \xi\right]\le \sqrt e\times \exp \left(\frac12\lambda^2\mu^2\right) \le \exp \left(\lambda^2\mu^2\right)$$ Then you have the inequality you are looking for. For the other case $\lambda \left\|\xi\right\|_{\psi_2}\le 1$ , use the computations that you did and the fact that $u\mapsto u^r$ is concave for $r\in[0,1]$
|
|probability|inequality|probability-distributions|random-variables|
| 1
|
Automorphism group isomorphic to the Klein 4 group.
|
Question: Prove that $\text{Aut}(\mathbb{Q}(i, \sqrt{2})) \cong V_{4}$ , where $V_{4}$ is the Klein 4 group. Attempt: We know that under an automorphism $\sigma \in \text{Aut}(\mathbb{Q}(i, \sqrt{2}))$ the automorphism fixes the prime field $\mathbb{Q}$ . Therefore, the image of such automorphisms is entirely determined by the image of $i$ and $\sqrt{2}$ . This is where my concern is. I am pretty sure that the only possibilities for automorphisms would be the identity map, the map where $i \mapsto i$ and $\sqrt{2} \mapsto -\sqrt{2}$ , the map where $i \mapsto -i$ and $\sqrt{2} \mapsto \sqrt{2}$ and finally the map $i \mapsto -i$ and $\sqrt{2} \mapsto -\sqrt{2}$ . One possibility I could think of is to consider the field extension $\mathbb{Q}(i, \sqrt{2})$ as some splitting field of a polynomial with the corresponding four roots, meaning that a root would also have to map to another root. However, I am unsure if this is the right way to think about the automorphisms. If anybody could ex
|
Your determination of the automorphisms is correct. Instead of trying to find and work with a polynomial for which $\newcommand{\Q}{\mathbb{Q}}\Q(\sqrt{2}, i)$ is a splitting field, here's a standard approach using the tower $\Q(i, \sqrt{2}) / \Q(\sqrt{2}) / \Q$ : The extension $\Q(\sqrt{2}) / \Q$ is Galois of degree 2, and the nontrivial element of $\newcommand{\Gal}{\operatorname{Gal}}\Gal(\Q(\sqrt{2}) / \Q)$ is the automorphism taking $\sqrt{2}$ to $-\sqrt{2}$ . Now $\Q(i, \sqrt{2}) / \Q(\sqrt{2})$ is also quadratic since $i \notin \Q(\sqrt{2})$ , thus by the extension theorem $\Gal(\Q(i, \sqrt{2}) / \Q(\sqrt{2}))$ consists of extending the two automorphisms of $\Gal(\Q(\sqrt{2}) / \Q)$ by either $i \mapsto i$ or $i \mapsto -i$ , and you end up with the description you gave.
|
|abstract-algebra|field-theory|galois-theory|extension-field|automorphism-group|
| 1
|
Finding an affine transformation that will map one set of three given points $P_1, P_2, P_3$ to another given set of points $P'_1, P'_2, P'_3$
|
Suppose you are given three points with known coordinates $P_1, P_2, P_3$ , and also another set of three points $P'_1, P'_2, P'_3$ . Now does there always exist an affine transformation $ f( r ) = A r + b $ where $A \in \mathbb{R}^{ 3 \times 3}$ and $b \in \mathbb{R}^3 $ , that will map $P_1$ to $P'_1$ , $P_2$ to $P'_2$ and $P_3$ to $P'_3$ ? My attempt: The affine transformation can be written as $ f(r) = A (r - P_1) + P'_1 $ For $3 \times 3$ matrix $A$ , this automatically satisfies the first condition. Now, we have to find $A$ such that $ A( P_2 - P_1) + P'_1 = P'_2 $ $ A( P_3 - P_1) + P'_1 = P'_3 $ i.e. $ A [ (P_2 - P_1) , (P_3 - P_1) ] = [ (P'_2 - P'_1) , (P'_3 - P'_1) ] $ Let $G = [(P_2 - P_1) , (P_3 - P_1) ], H = [ (P'_2 - P'_1) , (P'_3 - P'_1) ] $ , then $G$ and $H$ are $3 \times 2$ , and $ A G = H $ Transposing, $ G^T A^T = H^T $ Let $B = A^T = [b_1, b_2, b_3] $ then these unknown columns are solutions of three linear systems, $ G^T b_1 = h_1 $ $ G^T b_2 = h_2 $ $G^T b_3 = h_3 $ Not
|
Extend $P_2-P_1,P_3-P_1$ to a basis $P_2-P_1,P_3-P_1,v$ ; then where the first two basis vectors are sent is forced, but you can send $v$ anywhere. Thus, in this basis, the third column of the matrix $A$ is arbitrary, leaving three degrees of freedom.
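As a concrete sketch of this construction (function and variable names are ours; it assumes $P_2-P_1$ and $P_3-P_1$ are linearly independent), extend the two difference vectors by a standard basis vector and make an arbitrary choice for its image, here $0$ :

```python
import numpy as np

def affine_map(P, Q):
    """One valid affine map f(r) = A r + b sending P1,P2,P3 to Q1,Q2,Q3.

    The basis extension vector v is found by trying standard basis vectors,
    and its image is (arbitrarily) chosen to be the zero vector.
    """
    P1, P2, P3 = (np.asarray(p, float) for p in P)
    Q1, Q2, Q3 = (np.asarray(q, float) for q in Q)
    G = np.column_stack([P2 - P1, P3 - P1])
    for e in np.eye(3):                      # extend G to a basis of R^3
        M = np.column_stack([G, e])
        if abs(np.linalg.det(M)) > 1e-12:
            break
    H = np.column_stack([Q2 - Q1, Q3 - Q1, np.zeros(3)])  # image of v is 0
    A = H @ np.linalg.inv(M)
    b = Q1 - A @ P1
    return A, b

A, b = affine_map([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                  [(1, 1, 1), (2, 1, 1), (1, 3, 1)])
print(A @ np.array([0.0, 1.0, 0.0]) + b)  # maps P3 to Q3: [1. 3. 1.]
```

Replacing the zero column in `H` by any other vector gives a different valid map, which is exactly the three degrees of freedom described above.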
|
|linear-algebra|geometry|solution-verification|analytic-geometry|3d|
| 0
|
Extension of Sobolev functions on non-connected set
|
Let $D = D_1 \cup D_2 \cup...\cup D_m$ be a subset of $\mathbb{R}^d$ , where each $D_j$ is a (connected and) bounded domain with Lipschitz boundary, and $\min_{i \neq j} \operatorname{dist}(D_i,D_j) \geq c > 0$ for some $c$ . Let $f$ be a function defined on $D$ . Is there a Sobolev extension theorem for $f$ from $D$ to $\mathbb{R}^d$ ?
|
We can indeed extend Sobolev functions suitably on such a domain. I assume you already know that we can do this for connected, bounded Lipschitz domains and want to extend this to domains as you described. In this case, extend, for every $i$ , the function $f_{|D_i}$ to $\mathbb{R}^d$ , and denote this extension by $E_i(f)$ . Now, choose smooth cutoff functions $\phi_i\in C_c^\infty(B_{c/3}(D_i))$ such that $\phi_i\equiv 1$ on $D_i$ . ( $B_{c/3}(D_i)$ is the set of all points having distance at most $c/3$ from $D_i$ .) Then, the extension by zero of each of the $\phi_i E_i(f)$ is a Sobolev function on the full space. By the assumption on the separation of the subdomains, $B_{c/3}(D_i)$ and $B_{c/3}(D_j)$ are still well-separated for $i\neq j$ . Then we set $$ E(f)=\begin{cases}\sum_{i=1}^m E_i(f)(x)\,\phi_i(x) & \text{if }x\in\bigcup_i B_{c/3}(D_i), \\ 0\quad &\text{else}.\end{cases} $$ This should give you the result (note that for estimating the norm of this extension, any norm of the $\phi
|
|functional-analysis|sobolev-spaces|fractional-sobolev-spaces|
| 0
|
B-Isomorphic? (Husemoller, Fiber Bundles). Illustrated diagrams included.
|
Wikipedia gives the definition of a bundle map as the arrow $\left( \xrightarrow{\quad \varphi \quad } \right)$ in the commuting diagram below: Husemoller gives the definition of a bundle map as the arrow (note the ordered pair) $\left( \xrightarrow{\quad \left(u, f \right) \quad } \right)$ in the diagram below: (i) Are these definitions distinct, or are they just different preferences for stating the same concept? (ii) Is $(f, u)$ a functor? If it's a functor, doesn't there need to be a map $\big( p \xrightarrow{\quad} p' \big)$ ? Why is it sufficient to only represent the map between bundles by mapping the objects in the category? Now, the Wikipedia article doesn't define isomorphisms, and Husemoller doesn't give any more diagrams. From Husemoller (bundle isomorphism): "From general properties in a category, a bundle morphism $$(E, p, B) \xrightarrow{\quad (u, f)\quad} (E', p', B')$$ is an isomorphism $\iff$ there exists a morphism $$(E, p, B) \xleftarrow{\quad (u', f')\quad} (E', p
|
The definitions are equivalent: You can always recover $f$ from $\varphi$ by noting that the definition implies that $\varphi$ takes fibres to fibres, so if $\varphi(E_x) \subseteq F_y$ where $E_{-}, F_{-}$ are the fibres over the respective points, then we must have $f(x) = y$ . Who says that $(u, f)$ is a functor? It is a morphism in the category of bundles. This explains Husemoller's remark about isomorphisms. The diagram you drew is of the right shape, but to capture the definition you need to specify that the top and bottom composites be the identity in both directions. For the last part, you've almost got it right: The projection map to $B$ on the right should be $\pi_B\colon B \times F \to B$ , the projection to the first factor. Alternatively, you could express the same fact by saying that there is a bundle isomorphism $(E, p, B) \overset{(u, \mathrm{id}_B)}{\to} (B \times F, \pi_B, B)$ (more generally, a $B$ -homomorphism is just a bundle map covering $\mathrm{id}_B$ ).
|
|differential-geometry|category-theory|differential-topology|vector-bundles|fiber-bundles|
| 1
|
Prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters....
|
The question: Let $VABCD$ be a pyramid with rectangular base $ABCD$ . If $\angle AVC =\angle BVD$ , prove that triangles $VAC$ and $VBD$ have equal areas and equal perimeters. The idea: Because the base $ABCD$ is rectangular, we get that $DB=AC$ . We also know the congruence of the angles, so I was thinking of showing that $\triangle VAC$ is congruent with $\triangle VBD$ ; this would make the perimeters and the areas equal. I don't know how to show this. I hope one of you can help me! Thank you!
|
As commented by Blue, the claim is not true in general. In the following, I'm going to prove the following claim : Claim 1 : If $\angle AVC =\angle BVD\color{red}{\not=90^\circ}$ , then triangles $VAC$ and $VBD$ have equal areas and equal perimeters. Proof : Applying the law of cosines to $\triangle{AVC}$ and $\triangle{BVD}$ , we have $$\cos\angle{AVC}=\frac{VA^2+VC^2-AC^2}{2\times VA\times VC}\tag1$$ $$\cos\angle{BVD}=\frac{VB^2+VD^2-BD^2}{2\times VB\times VD}\tag2$$ Here, let $E$ be a point on the plane $ABCD$ such that $EV\perp ABCD$ . Let $F$ be a point on $AB$ such that $EF\perp AB$ . Let $G$ be a point on $CD$ such that $EG\perp CD$ . Here, we have $$\begin{align}&VA^2+VC^2-AC^2 \\\\&=VE^2+EA^2+VE^2+EC^2-AC^2 \\\\&=2VE^2+EF^2+FA^2+EG^2+GC^2-AC^2\end{align}$$ and $$\begin{align}&VB^2+VD^2-BD^2 \\\\&=VE^2+EB^2+VE^2+ED^2-BD^2 \\\\&=2VE^2+EF^2+FB^2+EG^2+GD^2-BD^2\end{align}$$ Since $FA=GD,GC=FB,AC=BD$ , we have $$VA^2+VC^2-AC^2=VB^2+VD^2-BD^2\not=0$$ So, from $(1)(2)$ , we get $$\fr
|
|geometry|area|angle|rectangles|
| 1
|
Supremum of average of i.i.d. stochastic processes
|
Suppose that $(X_t)_{t\in [0, T]}$ is a stationary stochastic process with continuous sample paths such that $\mathbb E[X_t] = 0$ and $\mathbb E[|X_t|^2] < \infty$ , and assume that $$ S := \mathbb E \left[ \sup_{t \in [0, T]} |X_t|\right] < \infty. $$ Take $J$ independent copies $(X^1_t), \dotsc, (X^J_t)$ of the process $(X_t)$ . Does it hold that $$ \mathbb E \left[\sup_{t \in [0, T]} \left| \frac{1}{J} \sum_{j=1}^JX^j_t \right| \right] \leq \frac{S}{\sqrt{J}} \, ? $$ possibly up to a constant on the right-hand side. In other words, does the left-hand side decrease at the usual Monte Carlo rate? Particular case. If $X_t$ is a Gaussian process, then the answer is yes, and both sides are in fact equal. Indeed, it holds in this case that $$ \frac{1}{\sqrt{J}} \sum_{j=1}^J X^j = X \qquad \text{in law as stochastic processes,} $$ because both sides are Gaussian processes with the same mean and covariance functions. Can we say something on the case of a general mean-zero stationary stochastic process $X$ ? Possi
|
Here is a sketch of a counterexample. For $n\ge 2$ , take a continuous function $f_n\colon [0,1]\to \mathbb R$ such that $$ f_n(t) = \begin{cases} n!,\quad & |t - 1/2| \le \frac{1}{2\cdot n!}, \\ 0,\quad & |t - 1/2| > \frac{1}{n!}, \end{cases} $$ and $f_n$ takes intermediate values at other points. Then, $\int_0^1 f_n(t)^2 dt \asymp n!$ , so there exist some probabilities $p_n \asymp \big((n+1)!\log^2 n\big)^{-1}, n\ge 2,$ such that $\sum_{n\ge 2} p_n \cdot \int_0^1 f_n(t)^2 dt = 1$ . Now define $$ X_t = \kappa \cdot f_N\big((t+U) \mod 1\big), t\in [0,1], $$ where $U,N,\kappa$ are independent, $U$ is uniformly distributed on $[0,1]$ , $N$ has distribution $\mathrm P(N=n) = p_n$ , $n\ge 2$ , and $\kappa$ has the Rademacher distribution: $\mathrm P(\kappa = \pm 1) = 1/2$ . Then the process $X$ is stationary with $\mathrm E[X] = 0$ , $\mathrm{Var}(X) = 1$ , and $\mathrm E[\sup_{t\in [0,1]} |X_t|] = \sum_{n\ge 2} n!\cdot p_n < \infty$ . Now take $J\sim (n+1)! \log^3 n$ , and let $X^j = \kappa^j\cdot f_{N^j}(t+U^j), j=1,\dots, J,$ be independent copies of $X$
|
|probability-theory|stochastic-processes|
| 0
|
is arithmetic finitely consistent?
|
Let's take PA1 (the first-order Peano axioms) for example. By Gödel's second incompleteness theorem, PA1 can't prove its own consistency; more specifically, it can't prove that the largest consistent subset of the theory is PA1 itself. But PA1 has infinitely many axioms, so can PA1 at least prove, for a given finite set of axioms (of PA1), that they are consistent, specifically that no proof of a contradiction exists which uses only those axioms? Or, for any formal theory of arithmetic, can it prove for a finite subset of its own axioms that they are consistent? Edit - What I know is that PA1 can prove that any finite subset of it is consistent, but I want to know if it can prove that a given finite set of its axioms is itself consistent. I believe PA1, or any other formal theory of arithmetic, should be able to prove, for any given finite subset of its axioms, that it is consistent.
|
note: As currently stated, this question is dangerously close to being a duplicate. While people deliberate whether it really counts as one, I'll try to at least give an answer that is not a duplicate of any existing one: instead of telling you how this is proven, I'll tell you a bit about the historical context of the result. For any fixed finite set $\mathcal{F}$ of axioms of Peano arithmetic, one can obtain a proof of $\mathrm{Con}(\mathcal{F})$ inside Peano arithmetic. But the analogous proposition absolutely does not hold for every formal theory of arithmetic. Mostowski studied this question in his 1952 article On models of axiomatic systems ( Mostowski's article ). His results immediately imply that Peano arithmetic is not finitely axiomatizable. Kreisel and Lévy put the result in the context of reflection principles in their 1968 article, and provide proof-theoretic arguments showing the same (and a lot more). You should always keep in mind that in these results, the finite set
|
|proof-explanation|first-order-logic|peano-axioms|incompleteness|
| 0
|
If $\gcd(a,b,c) = 1$ and $c = {ab\over a-b}$, then prove that $a-b$ is a square.
|
If $\gcd(a,b,c) = 1$ and $c = {ab\over a-b}$ , then prove that $a-b$ is a square. $\\$ Well, I tried expressing $a=p_1^{a_1}\cdot p_2^{a_2} \cdots p_k^{a_k}$ and $b = q_1^{b_1}\cdot q_2^{b_2}\cdots q_k^{b_k}$ and $c=r_1^{c_1}\cdot r_2^{c_2}\cdots r_k^{c_k}$ , basically emphasizing the fact that the primes which divide $a$ are different from those that divide $b$ and $c$ , but I couldn't come up with anything fruitful. $\\$ Any help would be appreciated. $\\$ Thanks EDIT:- $a,b,c$ are positive integers.
|
Multiply $(a,b,c)=1$ by $a-b$ to get $(a^2-ab,ab-b^2,ab)=a-b$ or equivalently $(a^2,b^2,ab)=a-b$ . Suppose $p^{2k-1} ||a-b$ . Then $p^{2k-1}|a^2$ , so $p^k|a$ . Similarly $p^k|b$ . But then $p^{2k} | (a^2,b^2,a b) = a-b$ , a contradiction.
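A brute-force sanity check of the statement for small positive integers (the helper name is ours), using `math.isqrt` so the square test is exact:

```python
from math import gcd, isqrt

def check(limit=120):
    """Scan a > b > 0 with c = ab/(a-b) integral and gcd(a, b, c) = 1,
    asserting that a - b is a perfect square in every such case."""
    for a in range(2, limit):
        for b in range(1, a):
            if (a * b) % (a - b):
                continue                      # c would not be an integer
            c = (a * b) // (a - b)
            if gcd(gcd(a, b), c) == 1:
                d = a - b
                assert isqrt(d) ** 2 == d, (a, b, c)
    return True

print(check())  # True; e.g. (a, b) = (6, 2) gives c = 3 and a - b = 4 = 2^2
```

This is only evidence, of course; the argument above is the actual proof.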
|
|elementary-number-theory|gcd-and-lcm|
| 0
|
How can you show that the trace class norm $\|A\|_1:=\mathrm{Tr}(|A|)$ satisfies the triangle inequality?
|
The exact wording of my question is a bit oxymoronic, since a norm by definition is a metric, and thus requires proper context. Let $H$ be a separable Hilbert space over the field $\mathbb{K}$ . I am aware that a bounded linear operator $A\in\mathcal{B}(H)$ is said to be of trace class if $\mathrm{Tr}(|A|) < \infty$ , for $|A|$ the square root of $A^*A$ and $$\mathrm{Tr}(A):= \sum_{k=1}^\infty\left\langle Ae_k, e_k\right\rangle $$ for a Hilbert basis $\{e_k\}$ for $H$ . Then, the $p$ th Schatten norm for $p\in [1,\infty)$ is defined as $$\|A\|_p := \left(\mathrm{Tr}(|A|^p)\right)^{1/p}$$ and as a special case, the trace class norm is given by $$\|A\|_1 := \mathrm{Tr}(|A|)$$ None of the references I have seen anywhere actually prove that what we call the Schatten norm is actually a norm, and the same goes for the trace norm. The main difficulty I have with proving that Schatten norm is actually a norm is already present in the case of the trace norm, and hence this question focuses on that. What I struggle most with showing that fo
|
A more or less direct proof depends on the inequality $$\tag1 \|AB\|_1\leq\|A\|\,\|B\|_1. $$ To show $(1)$ , we note that $$\tag2 |AB|=(B^*A^*AB)^{1/2}\leq(B^*\|A\|^2B)^{1/2}=\|A\|\,|B|. $$ The proof of $(2)$ depends on three facts: if $T$ is positive then $STS^*$ is positive (straightforward calculation) $A^*A\leq \|A\|^2\, I$ (straightforward computation: $\langle A^*Ax,x\rangle\leq \|A^*Ax\|\,\|x\|\leq \|A\|^2\|x\|^2=\langle \|A\|^2x,x\rangle$ ) The square root preserves operator inequalities . Having $(2)$ available, we get $\def\tr{\operatorname{Tr}}$ $$\tag3 \tr(|AB|)\leq \|A\|\,\tr(|B|). $$ We can then obtain $\def\abajo{\\[0.2cm]}$ $$\tag4 |\tr(A)|\leq \tr(|A|). $$ Indeed, writing the polar decomposition $A=V|A|$ , \begin{align} |\tr(A)| &=\tr(V|A|)=\tr(V|A|^{1/2}|A|^{1/2})\abajo &\leq \tr(|A|^{1/2}V^*V|A|^{1/2})^{1/2}\tr(|A|)^{1/2}\tag5\abajo &\leq\tr(|A|). \end{align} And now, using the polar decomposition, $A+B=V|A+B|$ and $(5)$ , \begin{align} \|A+B\|_1 &=\tr(|A+B|) =\tr(V^
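In finite dimensions the trace norm is simply the sum of singular values, so the triangle inequality proved above can at least be sanity-checked numerically (a sketch, not a proof; the helper name is ours):

```python
import numpy as np

def trace_norm(M):
    # ||M||_1 = Tr(|M|) = sum of the singular values of M
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Triangle inequality: ||A + B||_1 <= ||A||_1 + ||B||_1
print(trace_norm(A + B) <= trace_norm(A) + trace_norm(B))  # True
```

Running this with many random pairs never produces a violation, which is consistent with (but does not replace) the polar-decomposition argument.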
|
|linear-algebra|functional-analysis|operator-theory|normed-spaces|hilbert-spaces|
| 0
|
Why Is This Jump Discontinuity Not Differentiable?
|
Let $f(x) = 2x$ for $x<0$ and $f(x)=2x+1$ for $x>0$ . Why is this function not differentiable? I understand that if a function is differentiable, then it has to be continuous, which this function is not. That being said, I am trying to understand it in a different way, such as by using the epsilon-delta definition. For example, $\lim_{h\to 0+}\frac{2(x+h)+1-(2x+1)}{h}=2$ and $\lim_{h\to 0-}\frac{2(x+h)-2x}{h}=2$ . So, wouldn't both one-sided limits being $2$ imply that the limit as $h$ goes to zero is $2$ , and hence that the function is differentiable? I feel like I am doing something wrong here. Maybe negating the epsilon-delta definition would be better.
|
$\lim_{h\to 0}\frac{2(x+h)+1-(2x+1)}{h}=2$ only proves that for $x>0$ , $f'(x)=2$ . Likewise, $\lim_{h\to 0}\frac{2(x+h)-2x}{h}=2$ only proves that for $x<0$ , $f'(x)=2$ . They prove nothing about $f'(0)$ . For $f$ (or rather, an extension of $f$ by some choice for $f(0)$ ) to be differentiable at $0$ , both limits $$a:=\lim_{h\to0^+}\frac{2h+1-f(0)}h=2+\lim_{h\to0^+}\frac{1-f(0)}h$$ and $$b:=\lim_{h\to0^-}\frac{2h-f(0)}h=2-\lim_{h\to0^-}\frac{f(0)}h$$ should first exist (and moreover, be equal). But $b$ exists iff $f(0)=0$ and $a$ exists iff $1-f(0)=0$ .
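The same point can be seen numerically. Picking (say) $f(0)=0$ , the left quotient sits at $2$ while the right quotient behaves like $2+1/h$ (a quick sketch; the choice $f(0)=0$ is ours):

```python
# With the choice f(0) = 0, the right-hand difference quotient is
# (2h + 1 - 0)/h = 2 + 1/h, which blows up as h -> 0+, while the
# left-hand quotient stays at 2. No choice of f(0) fixes both sides.
def f(x):
    return 2 * x if x < 0 else 2 * x + 1 if x > 0 else 0.0

for h in (0.1, 0.01, 0.001):
    right = (f(h) - f(0)) / h
    left = (f(-h) - f(0)) / (-h)
    print(h, right, left)  # right grows like 2 + 1/h; left is always 2.0
```

Choosing $f(0)=1$ instead makes the right quotient converge but the left one diverge, matching the iff conditions above.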
|
|derivatives|epsilon-delta|
| 1
|
Modern reference on PA degrees?
|
I'm currently trying to work my way around some papers from Jockusch et al., and PA degrees come up frequently. I'd be interested in a modern reference/survey summarizing the main results on the subject, if such a thing exists. My usual references don't seem to discuss PA degrees.
|
If you are just looking to understand the basic results for $PA$ degrees, I would recommend "Reverse Mathematics: Problems, Reductions and Proofs" by Dzhafarov and Mummert. In particular, look at Section 2.8, "Trees and PA degrees", which goes over some of the classical results about $PA$ degrees and basis theorems. I would also highly recommend "Turing Computability: Theory and Applications". Section 10 goes over similar results, and I believe there is a nicer proof of the fact that a set has $PA$ degree if and only if it computes a diagonally non-recursive function. It also has a proof that a set with $PA$ degree computes a completion of $PA$ , as well as a few basis/non-basis theorems which you cannot find in the reverse mathematics book.
|
|logic|reference-request|first-order-logic|computability|peano-axioms|
| 1
|
Finding solutions of modulus functions
|
I decided to do some practice with some functions, and was posed with the following question: So, I sketched the two graphs. For convenience I'll display a photo of them from Desmos. The blue line is $|3x - 2|$ and the red is $|x-5|$ . Now, to find where the two intersect, I need the inputs at which the two functions give the same output. However, the modulus sign seemed to trip me up when I wrote $|3x - 2| = |x-5|$ and began doing algebra. I couldn't merely solve it as if the functions had no modulus signs, since that is a different problem. So, I figured by inspection that only the left side of the red function makes the intersections, so the following conditions are the only valid ones. $$-x+5 = 3x-2$$ $$-x+5 = -3x+2$$ And with that I can find the inputs necessary. However, is there a general way to solve this type of problem? Say, if I didn't have the graph to make the inference that I did? How could I solve this problem if I couldn't graph the tw
|
The best way to solve this algebraically would probably be to split the functions up into piecewise functions. Let's say: $f(x)=|x-5|$ $g(x)=|3x-2|$ First, we need to know where the functions will cross over the x-axis. $f(x)$ crosses the x-axis at $x=5$ , while $g(x)$ crosses over at $x=\frac{2}{3}$ . We also know that both functions normally increase when there is no absolute value. This means the right-halves of the functions are unchanged, while the left-halves of the functions are reflected over the x-axis from negative to positive. This gives us 4 piecewise functions: $f(x)= \begin{cases} -(x-5), & x \le 5 \\ x-5, & x \ge 5 \end{cases}$ $g(x)= \begin{cases} -(3x-2), & x \le \frac{2}{3}\\ 3x-2, & x \ge \frac{2}{3} \end{cases}$ Now, the next step is to figure out where each piece of one function intersects the pieces of the other function. Let's start with the piece $-(x-5)$ , and see where it intersects the pieces of $g(x)$ . $-(x-5)=-(3x-2)\\ 2x=-3\\ x=\frac{-3}{2}$ Intersection
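Without a graph, one can also use that $|u| = |v|$ holds exactly when $u = v$ or $u = -v$ , which collapses the four-piece analysis into two linear cases. A minimal sketch (the helper name is ours) that solves both cases exactly with rationals and keeps only candidates satisfying the original equation:

```python
from fractions import Fraction

def solve_abs_eq():
    """Solve |3x - 2| = |x - 5| via the cases u = v and u = -v."""
    sols = set()
    # u = 3x - 2 and v = x - 5, stored as (slope, intercept) pairs
    u, v = (Fraction(3), Fraction(-2)), (Fraction(1), Fraction(-5))
    for sign in (1, -1):
        a = u[0] - sign * v[0]          # coefficient of x in u - sign*v = 0
        b = u[1] - sign * v[1]          # constant term
        if a != 0:
            x = -b / a
            if abs(3 * x - 2) == abs(x - 5):   # verify in the original equation
                sols.add(x)
    return sorted(sols)

print(solve_abs_eq())  # [Fraction(-3, 2), Fraction(7, 4)]
```

The verification step is harmless here but matters in general: squaring or case-splitting can produce extraneous candidates, and substituting back filters them out.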
|
|functions|absolute-value|
| 0
|
Find the change of basis matrix so that the following is in Jordan Normal Form
|
Let the following matrix be given. Note that we are working in the field with five elements, $\mathbb{F}_5 = \{0, 1, 2, 3, 4\}$ : $A = \begin{bmatrix} 1 & 2 & 0\\ 3 & 2 & 1\\ 0 & 2 & 2 \end{bmatrix}$ The Jordan Normal Form of the matrix is $J = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{bmatrix}$ Now, to find the basis transformation matrix, I know first of all that $A^3 = 0$ . Now I want to consider $A^2 = \begin{bmatrix} 2 & 1 & 2\\ 4 & 2 & 4\\ 1 & 3 & 1 \end{bmatrix}$ Choosing $v_3 = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$ , we see that $A^2v_3 = \begin{bmatrix} 2\\ 4\\ 1 \end{bmatrix}$ , so $v_3$ is not in the kernel of $A^2$ . I will set $v_2 = \begin{bmatrix} 1 & 2 & 0\\ 3 & 2 & 1\\ 0 & 2 & 2 \end{bmatrix} * \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0\\ 1\\ 2 \end{bmatrix}$ and $v_1 = \begin{bmatrix} 1 & 2 & 0\\ 3 & 2 & 1\\ 0 & 2 & 2 \end{bmatrix} * \begin{bmatrix} 0\\ 1\\ 2 \end{bmatrix} = \begin{bmatrix} 2\\ 4\\ 1 \end{bmatrix}$ (reducing $6 \equiv 1 \pmod 5$ in the last entry, so that indeed $v_1 = A^2v_3$ ). Now set $S = [v_1, v_2, v_3] = \begi
|
Your approach to finding $S$ is correct. The fact that $A^2 \neq 0$ , $A^3 = 0$ , and $A^2 v_3 \neq 0$ ensures that the vectors $A^2v_3, Av_3, v_3$ are linearly independent over the field $\Bbb F$ . As such, the matrix $S$ that you construct has linearly independent columns, and so it is necessarily possible to compute its inverse while staying in the field $\Bbb F$ . The trick is that instead of dividing as you normally would (which would lead to fractions), you multiply by multiplicative inverses. Let's compute the inverse of your matrix $S$ using Gaussian elimination. We reduce the augmented matrix $[S \mid I]$ , $$ \pmatrix{2 & 0 & 0 & 1 & 0 & 0\\ 4 & 1 & 0 & 0 & 1 & 0\\ 1 & 2 & 1 & 0 & 0 & 1}. $$ To begin, we could "divide" the first row by $2$ . Note that "dividing" in this context means multiplying by the multiplicative inverse. In other words, the number that we should multiply the first row by is the solution to the equation $2 x \equiv 1 \pmod 5$ . You can verify that $x = 3$
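A short mod-5 computation (helper names are ours) that rebuilds the Jordan basis directly from $A$ with every entry reduced mod $5$ , inverts $S$ by Gauss-Jordan elimination over $\mathbb F_5$ using `pow(x, -1, 5)` for multiplicative inverses, and checks $S^{-1}AS = J$ :

```python
P = 5
A = [[1, 2, 0], [3, 2, 1], [0, 2, 2]]

def matmul(X, Y):
    # Matrix product with all entries reduced mod P.
    return [[sum(x * y for x, y in zip(row, col)) % P for col in zip(*Y)]
            for row in X]

def inv_mod(M):
    # Gauss-Jordan inversion over F_P; pow(a, -1, P) is the inverse of a mod P.
    n = len(M)
    W = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if W[r][c] % P)
        W[c], W[piv] = W[piv], W[c]
        inv = pow(W[c][c], -1, P)
        W[c] = [x * inv % P for x in W[c]]
        for r in range(n):
            if r != c and W[r][c]:
                f = W[r][c]
                W[r] = [(x - f * y) % P for x, y in zip(W[r], W[c])]
    return [row[n:] for row in W]

v3 = [[0], [0], [1]]
v2 = matmul(A, v3)                  # A v3
v1 = matmul(A, v2)                  # A^2 v3, reduced mod 5
S = [[v1[i][0], v2[i][0], v3[i][0]] for i in range(3)]
J = matmul(inv_mod(S), matmul(A, S))
print(S, J)  # J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]], the nilpotent Jordan block
```

The `next(...)` pivot search also handles the general case where a row swap is needed, although for this particular $S$ no swap occurs.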
|
|linear-algebra|matrices|linear-transformations|jordan-normal-form|similar-matrices|
| 1
|
Solve the equation $\arcsin\bigg(\dfrac{x+1}{\sqrt{x^2+2x+2}}\bigg)-\arcsin\bigg(\dfrac{x}{\sqrt{x^2+1}}\bigg)=\dfrac{\pi}{4}$
|
Solve the equation $$\arcsin\bigg(\dfrac{x+1}{\sqrt{x^2+2x+2}}\bigg)-\arcsin\bigg(\dfrac{x}{\sqrt{x^2+1}}\bigg)=\dfrac{\pi}{4}$$ My solution: I rewrote this equation in terms of $\arctan$ and applied tangent to both sides, and I got the answers $x=-1,0$ . But then one of my friends said that $x=2$ satisfies the above equation too, and the reason he gave is as follows: $$\arcsin\bigg(\dfrac{3}{\sqrt{10}}\bigg)=\dfrac{\pi}{4}+\arcsin\bigg(\dfrac{2}{\sqrt{5}}\bigg)$$ and he applied sine to both sides to obtain $\dfrac{3}{\sqrt{10}}=\dfrac{3}{\sqrt{10}}\:$ . Now I don't have any explanation for him. Can anyone here explain the reason behind this situation? I plotted it in Desmos, and I am getting $x=-1,0$ only. Link to desmos
|
The reason behind this situation is that $\sin(\pi/2 - \theta) = \sin(\pi/2 + \theta)$ , even though $\pi/2 - \theta \neq \pi/2 + \theta$ . Given this, $$\sin(\arcsin(\frac{3}{\sqrt{10}})) = \sin(\frac{\pi}{4} + \arcsin(\frac{2}{\sqrt{5}})),$$ but $$\arcsin(\frac{3}{\sqrt{10}})\neq \frac{\pi}{4} + \arcsin(\frac{2}{\sqrt{5}})$$ The right-hand side is greater than $\pi/2$ and the left-hand side is less than $\pi/2$ .
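A quick numerical check of this (the function name is ours): the original equation holds at $x=-1$ and $x=0$ but fails at $x=2$ , precisely because applying $\sin$ to both sides discarded the range information:

```python
from math import asin, sqrt, pi, isclose

def lhs(x):
    # Left-hand side of the original equation.
    return asin((x + 1) / sqrt(x * x + 2 * x + 2)) - asin(x / sqrt(x * x + 1))

for x in (-1, 0, 2):
    print(x, isclose(lhs(x), pi / 4))
# -1 True
# 0 True
# 2 False
```

At $x=2$ the left-hand side is $\arctan 3 - \arctan 2 = \arctan\frac17 \approx 0.142$ , nowhere near $\pi/4$ , even though both sides of the friend's equation have the same sine.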
|
|algebra-precalculus|trigonometry|inverse-function|inverse-trigonometric-functions|
| 0
|
Prove that for any function $f:\mathbb R\to\mathbb R$ there exist real numbers $x,y$, with $x\neq y$, such that $|f(x)-f(y)|\leq 1$.
|
I need help with a 9th grade functions exercise: Prove that for any function $f:\mathbb R\to\mathbb R$ there exist real numbers $x,y$ , with $x\neq y$ , such that $|f(x)-f(y)|\leq 1$ . I tried assuming that $|f(x) - f(y) | > 1$ so that I could say that each $f(z)$ , where $z$ is an integer, is from an interval $(a, a+1]$ where a is also an integer. So $f: \mathbb Z \to \mathbb R$ would just be from the interval $(-\infty, \infty)$ . But if n is not an integer, $f(n)$ would still be from $\mathbb R$ , meaning $|f(z) - f(n)| \leq 1$ , a contradiction. However, I don’t really know how to write this so it’s easily understandable. Sorry if I haven’t explained myself well, but this is how we do it where I’m from.
|
Here's a proof. It may be what you are trying to say. The real line is the disjoint union of the countably many half open intervals $(n,n+1]$ of length $1$ (here $n$ is an integer). Since there are uncountably many real numbers there must be some interval that contains two images $f(x)$ and $f(y)$ with $x \ne y$ . This is probably an acceptable answer for a ninth grade competition, although your standard ninth grade curriculum might not include a proof of the uncountability of the reals.
|
|functions|real-numbers|
| 0
|
Does $a \mid b $ and $a \nmid c$ imply $a \nmid b+c$
|
I know that if $a \mid b$ and $a \mid c$ then $a \mid b+c$ . Which is easy to see, since $a \mid b \Leftrightarrow \exists k \in \mathbb{N} : a \cdot k=b$ and $a \mid c \Leftrightarrow \exists n \in \mathbb{N}: a \cdot n=c$ . Thus $b+c=ak+an=a(n+k)$ , which implies $a \mid c+b$ . But does $a \mid b$ and $a \nmid c$ imply $a \nmid (b+c)$ ? I think that this is true, but I do not know how to prove it.
|
This is using the more general (and very important, this will come up all the time!) fact that if $a \mid x$ and $a \mid y$ , then $a \mid mx + ny$ for all $m, n \in \mathbb Z$ , where something of the form $mx + ny$ is called an integer combination of $x$ and $y$ , for future reference. I'd suggest trying to prove this on your own using the definitions you gave; ask for help if you need, though. From there, if you assume for a contradiction that $a \mid b$ and $a \nmid c$ , but $a \mid b + c$ , then using the above, $a \mid b$ and $a \mid b + c$ imply $a \mid (-1)b + (b + c)$ , that is, $a \mid c$ , which is a contradiction, so you must have $a \nmid b + c$ .
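An exhaustive check over a small range (the helper name is ours; it scans nonnegative $b$ that are multiples of $a$ ):

```python
def holds(limit=30):
    """Verify: a | b and a does not divide c imply a does not divide b + c."""
    for a in range(1, limit):
        for b in range(0, limit, a):          # b ranges over multiples of a
            for c in range(1, limit):
                if c % a != 0:                # a does not divide c
                    assert (b + c) % a != 0   # then a does not divide b + c
    return True

print(holds())  # True
```

The check mirrors the proof exactly: modulo $a$ , the sum $b + c$ reduces to $c$ , which is nonzero by assumption.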
|
|elementary-number-theory|
| 1
|
How to check if something is perfect square or not.
|
While finding eigenvalues of a particular matrix, I end up with the following: $\sqrt{p^{2\alpha}-4p^{\alpha}+8p^{\alpha-1}+4}$ , where $p$ is an odd prime and $\alpha \geq 1$ . The next step is to check if this eigenvalue is an integer or not. How do I go about checking the integrality of such an expression?
|
Edited to add: Please stop upvoting me! Piquito's is the answer that deserves your votes! If $p\ge 11$ : That eigenvalue is not an integer. This is because $p^{2\alpha}-4p^\alpha+4=(p^\alpha-2)^2$ is a perfect square, and the next perfect square is $(p^\alpha-1)^2=p^{2\alpha}-2p^\alpha+1$ , which is strictly greater than $p^{2\alpha}-4p^\alpha+8p^{\alpha-1}+4$ if $p\ge 11$ (because rearranging terms, this is equivalent to $2p^{\alpha-1}(p-4)>3$ , which is true for all $\alpha\ge 1$ ). So $p^{2\alpha}-4p^\alpha+8p^{\alpha-1}+4$ lies strictly between two consecutive perfect squares, and therefore is not itself a square. If $\alpha=1$ : The expression inside the square root reduces to $p^2-4p+12=(p-2)^2+8$ ; the only two perfect squares that differ by $8$ are $1$ and $9$ , so $(p-2)^2=1$ , giving $p=3$ . So $\alpha=1,p=3$ gives the eigenvalue $3$ .
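The integrality test itself is easy to run exactly with `math.isqrt` (the scan range below is our choice):

```python
from math import isqrt

def is_square(n):
    # Exact perfect-square test: integer square root avoids float rounding.
    return n >= 0 and isqrt(n) ** 2 == n

def expr(p, a):
    # The quantity under the square root in the eigenvalue.
    return p ** (2 * a) - 4 * p ** a + 8 * p ** (a - 1) + 4

# Scan some small odd primes p and exponents alpha.
hits = [(p, a) for p in (3, 5, 7, 11, 13)
               for a in range(1, 6) if is_square(expr(p, a))]
print(hits)  # [(3, 1)] -- only p = 3, alpha = 1 yields a square (namely 9)
```

This agrees with the case analysis above: the eigenvalue is an integer only for $p=3$ , $\alpha=1$ .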
|
|linear-algebra|number-theory|square-numbers|
| 0
|
Modern reference on PA degrees?
|
I'm currently trying to work my way around some papers from Jockusch et al., and PA degrees come up frequently. I'd be interested in a modern reference/survey summarizing the main results on the subject, if such a thing exists. My usual references don't seem to discuss PA degrees.
|
Another good source is Diamondstone/Dzhafarov/Soare's survey paper $\Pi^0_1$ Classes, Peano Arithmetic, Randomness, and Computable Domination . In particular, they go into detail about weak basis theorems; while these theorems are admittedly superceded by the low basis theorem (and other "strong" basis theorems), at least in the case of the Kreisel-Shoenfield basis theorem ("Every nonempty $\Pi^0_1$ class has an element $ ") I think it's still worth seeing the result if only for "flavor."
|
|logic|reference-request|first-order-logic|computability|peano-axioms|
| 0
|